Jan 26 14:41:51 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 26 14:41:51 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 26 14:41:51 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 14:41:51 localhost kernel: BIOS-provided physical RAM map:
Jan 26 14:41:51 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 26 14:41:51 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 26 14:41:51 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 26 14:41:51 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 26 14:41:51 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 26 14:41:51 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 26 14:41:51 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 26 14:41:51 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 26 14:41:51 localhost kernel: NX (Execute Disable) protection: active
Jan 26 14:41:51 localhost kernel: APIC: Static calls initialized
Jan 26 14:41:51 localhost kernel: SMBIOS 2.8 present.
Jan 26 14:41:51 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 26 14:41:51 localhost kernel: Hypervisor detected: KVM
Jan 26 14:41:51 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 26 14:41:51 localhost kernel: kvm-clock: using sched offset of 3355106163 cycles
Jan 26 14:41:51 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 26 14:41:51 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 26 14:41:51 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 26 14:41:51 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 26 14:41:51 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 26 14:41:51 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 26 14:41:51 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 26 14:41:51 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 26 14:41:51 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 26 14:41:51 localhost kernel: Using GB pages for direct mapping
Jan 26 14:41:51 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 26 14:41:51 localhost kernel: ACPI: Early table checksum verification disabled
Jan 26 14:41:51 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 26 14:41:51 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 14:41:51 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 14:41:51 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 14:41:51 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 26 14:41:51 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 14:41:51 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 26 14:41:51 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 26 14:41:51 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 26 14:41:51 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 26 14:41:51 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 26 14:41:51 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 26 14:41:51 localhost kernel: No NUMA configuration found
Jan 26 14:41:51 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 26 14:41:51 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 26 14:41:51 localhost kernel: crashkernel reserved: 0x000000009e000000 - 0x00000000ae000000 (256 MB)
Jan 26 14:41:51 localhost kernel: Zone ranges:
Jan 26 14:41:51 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 26 14:41:51 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 26 14:41:51 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 26 14:41:51 localhost kernel:   Device   empty
Jan 26 14:41:51 localhost kernel: Movable zone start for each node
Jan 26 14:41:51 localhost kernel: Early memory node ranges
Jan 26 14:41:51 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 26 14:41:51 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 26 14:41:51 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 26 14:41:51 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 26 14:41:51 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 26 14:41:51 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 26 14:41:51 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 26 14:41:51 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 26 14:41:51 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 26 14:41:51 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 26 14:41:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 26 14:41:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 26 14:41:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 26 14:41:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 26 14:41:51 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 26 14:41:51 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 26 14:41:51 localhost kernel: TSC deadline timer available
Jan 26 14:41:51 localhost kernel: CPU topo: Max. logical packages:   8
Jan 26 14:41:51 localhost kernel: CPU topo: Max. logical dies:       8
Jan 26 14:41:51 localhost kernel: CPU topo: Max. dies per package:   1
Jan 26 14:41:51 localhost kernel: CPU topo: Max. threads per core:   1
Jan 26 14:41:51 localhost kernel: CPU topo: Num. cores per package:     1
Jan 26 14:41:51 localhost kernel: CPU topo: Num. threads per package:   1
Jan 26 14:41:51 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 26 14:41:51 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 26 14:41:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 26 14:41:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 26 14:41:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 26 14:41:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 26 14:41:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 26 14:41:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 26 14:41:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 26 14:41:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 26 14:41:51 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 26 14:41:51 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 26 14:41:51 localhost kernel: Booting paravirtualized kernel on KVM
Jan 26 14:41:51 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 26 14:41:51 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 26 14:41:51 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 26 14:41:51 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 26 14:41:51 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 26 14:41:51 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 26 14:41:51 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 14:41:51 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 26 14:41:51 localhost kernel: random: crng init done
Jan 26 14:41:51 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 26 14:41:51 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 26 14:41:51 localhost kernel: Fallback order for Node 0: 0 
Jan 26 14:41:51 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 26 14:41:51 localhost kernel: Policy zone: Normal
Jan 26 14:41:51 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 26 14:41:51 localhost kernel: software IO TLB: area num 8.
Jan 26 14:41:51 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 26 14:41:51 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 26 14:41:51 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 26 14:41:51 localhost kernel: Dynamic Preempt: voluntary
Jan 26 14:41:51 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 26 14:41:51 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 26 14:41:51 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 26 14:41:51 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 26 14:41:51 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 26 14:41:51 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 26 14:41:51 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 26 14:41:51 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 26 14:41:51 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 14:41:51 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 14:41:51 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 26 14:41:51 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 26 14:41:51 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 26 14:41:51 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 26 14:41:51 localhost kernel: Console: colour VGA+ 80x25
Jan 26 14:41:51 localhost kernel: printk: console [ttyS0] enabled
Jan 26 14:41:51 localhost kernel: ACPI: Core revision 20230331
Jan 26 14:41:51 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 26 14:41:51 localhost kernel: x2apic enabled
Jan 26 14:41:51 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 26 14:41:51 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 26 14:41:51 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 26 14:41:51 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 26 14:41:51 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 26 14:41:51 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 26 14:41:51 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 26 14:41:51 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 26 14:41:51 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 26 14:41:51 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 26 14:41:51 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 26 14:41:51 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 26 14:41:51 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 26 14:41:51 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 26 14:41:51 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 26 14:41:51 localhost kernel: x86/bugs: return thunk changed
Jan 26 14:41:51 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 26 14:41:51 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 26 14:41:51 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 26 14:41:51 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 26 14:41:51 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 26 14:41:51 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 26 14:41:51 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 26 14:41:51 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 26 14:41:51 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 26 14:41:51 localhost kernel: landlock: Up and running.
Jan 26 14:41:51 localhost kernel: Yama: becoming mindful.
Jan 26 14:41:51 localhost kernel: SELinux:  Initializing.
Jan 26 14:41:51 localhost kernel: LSM support for eBPF active
Jan 26 14:41:51 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 26 14:41:51 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 26 14:41:51 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 26 14:41:51 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 26 14:41:51 localhost kernel: ... version:                0
Jan 26 14:41:51 localhost kernel: ... bit width:              48
Jan 26 14:41:51 localhost kernel: ... generic registers:      6
Jan 26 14:41:51 localhost kernel: ... value mask:             0000ffffffffffff
Jan 26 14:41:51 localhost kernel: ... max period:             00007fffffffffff
Jan 26 14:41:51 localhost kernel: ... fixed-purpose events:   0
Jan 26 14:41:51 localhost kernel: ... event mask:             000000000000003f
Jan 26 14:41:51 localhost kernel: signal: max sigframe size: 1776
Jan 26 14:41:51 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 26 14:41:51 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 26 14:41:51 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 26 14:41:51 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 26 14:41:51 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 26 14:41:51 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 26 14:41:51 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 26 14:41:51 localhost kernel: node 0 deferred pages initialised in 14ms
Jan 26 14:41:51 localhost kernel: Memory: 7763796K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618372K reserved, 0K cma-reserved)
Jan 26 14:41:51 localhost kernel: devtmpfs: initialized
Jan 26 14:41:51 localhost kernel: x86/mm: Memory block size: 128MB
Jan 26 14:41:51 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 26 14:41:51 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 26 14:41:51 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 26 14:41:51 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 26 14:41:51 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 26 14:41:51 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 26 14:41:51 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 26 14:41:51 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 26 14:41:51 localhost kernel: audit: type=2000 audit(1769438509.493:1): state=initialized audit_enabled=0 res=1
Jan 26 14:41:51 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 26 14:41:51 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 26 14:41:51 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 26 14:41:51 localhost kernel: cpuidle: using governor menu
Jan 26 14:41:51 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 26 14:41:51 localhost kernel: PCI: Using configuration type 1 for base access
Jan 26 14:41:51 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 26 14:41:51 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 26 14:41:51 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 26 14:41:51 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 26 14:41:51 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 26 14:41:51 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 26 14:41:51 localhost kernel: Demotion targets for Node 0: null
Jan 26 14:41:51 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 26 14:41:51 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 26 14:41:51 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 26 14:41:51 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 26 14:41:51 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 26 14:41:51 localhost kernel: ACPI: Interpreter enabled
Jan 26 14:41:51 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 26 14:41:51 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 26 14:41:51 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 26 14:41:51 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 26 14:41:51 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 26 14:41:51 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 26 14:41:51 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [3] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [4] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [5] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [6] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [7] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [8] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [9] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [10] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [11] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [12] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [13] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [14] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [15] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [16] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [17] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [18] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [19] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [20] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [21] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [22] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [23] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [24] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [25] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [26] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [27] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [28] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [29] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [30] registered
Jan 26 14:41:51 localhost kernel: acpiphp: Slot [31] registered
Jan 26 14:41:51 localhost kernel: PCI host bridge to bus 0000:00
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 26 14:41:51 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 26 14:41:51 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 26 14:41:51 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 26 14:41:51 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 26 14:41:51 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 26 14:41:51 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 26 14:41:51 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 26 14:41:51 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 26 14:41:51 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 26 14:41:51 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 26 14:41:51 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 26 14:41:51 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 26 14:41:51 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 26 14:41:51 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 26 14:41:51 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 26 14:41:51 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 26 14:41:51 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 26 14:41:51 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 26 14:41:51 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 26 14:41:51 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 26 14:41:51 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 26 14:41:51 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 26 14:41:51 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 26 14:41:51 localhost kernel: iommu: Default domain type: Translated
Jan 26 14:41:51 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 26 14:41:51 localhost kernel: SCSI subsystem initialized
Jan 26 14:41:51 localhost kernel: ACPI: bus type USB registered
Jan 26 14:41:51 localhost kernel: usbcore: registered new interface driver usbfs
Jan 26 14:41:51 localhost kernel: usbcore: registered new interface driver hub
Jan 26 14:41:51 localhost kernel: usbcore: registered new device driver usb
Jan 26 14:41:51 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 26 14:41:51 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 26 14:41:51 localhost kernel: PTP clock support registered
Jan 26 14:41:51 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 26 14:41:51 localhost kernel: NetLabel: Initializing
Jan 26 14:41:51 localhost kernel: NetLabel:  domain hash size = 128
Jan 26 14:41:51 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 26 14:41:51 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 26 14:41:51 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 26 14:41:51 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 26 14:41:51 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 26 14:41:51 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 26 14:41:51 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 26 14:41:51 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 26 14:41:51 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 26 14:41:51 localhost kernel: vgaarb: loaded
Jan 26 14:41:51 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 26 14:41:51 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 26 14:41:51 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 26 14:41:51 localhost kernel: pnp: PnP ACPI init
Jan 26 14:41:51 localhost kernel: pnp 00:03: [dma 2]
Jan 26 14:41:51 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 26 14:41:51 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 26 14:41:51 localhost kernel: NET: Registered PF_INET protocol family
Jan 26 14:41:51 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 26 14:41:51 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 26 14:41:51 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 26 14:41:51 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 26 14:41:51 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 26 14:41:51 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 26 14:41:51 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 26 14:41:51 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 26 14:41:51 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 26 14:41:51 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 26 14:41:51 localhost kernel: NET: Registered PF_XDP protocol family
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 26 14:41:51 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 26 14:41:51 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 26 14:41:51 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 26 14:41:51 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 78160 usecs
Jan 26 14:41:51 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 26 14:41:51 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 26 14:41:51 localhost kernel: software IO TLB: mapped [mem 0x00000000bbfdb000-0x00000000bffdb000] (64MB)
Jan 26 14:41:51 localhost kernel: ACPI: bus type thunderbolt registered
Jan 26 14:41:51 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 26 14:41:51 localhost kernel: Initialise system trusted keyrings
Jan 26 14:41:51 localhost kernel: Key type blacklist registered
Jan 26 14:41:51 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 26 14:41:51 localhost kernel: zbud: loaded
Jan 26 14:41:51 localhost kernel: integrity: Platform Keyring initialized
Jan 26 14:41:51 localhost kernel: integrity: Machine keyring initialized
Jan 26 14:41:51 localhost kernel: Freeing initrd memory: 87956K
Jan 26 14:41:51 localhost kernel: NET: Registered PF_ALG protocol family
Jan 26 14:41:51 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 26 14:41:51 localhost kernel: Key type asymmetric registered
Jan 26 14:41:51 localhost kernel: Asymmetric key parser 'x509' registered
Jan 26 14:41:51 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 26 14:41:51 localhost kernel: io scheduler mq-deadline registered
Jan 26 14:41:51 localhost kernel: io scheduler kyber registered
Jan 26 14:41:51 localhost kernel: io scheduler bfq registered
Jan 26 14:41:51 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 26 14:41:51 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 26 14:41:51 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 26 14:41:51 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 26 14:41:51 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 26 14:41:51 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 26 14:41:51 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 26 14:41:51 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 26 14:41:51 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 26 14:41:51 localhost kernel: Non-volatile memory driver v1.3
Jan 26 14:41:51 localhost kernel: rdac: device handler registered
Jan 26 14:41:51 localhost kernel: hp_sw: device handler registered
Jan 26 14:41:51 localhost kernel: emc: device handler registered
Jan 26 14:41:51 localhost kernel: alua: device handler registered
Jan 26 14:41:51 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 26 14:41:51 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 26 14:41:51 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 26 14:41:51 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 26 14:41:51 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 26 14:41:51 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 26 14:41:51 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 26 14:41:51 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 26 14:41:51 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 26 14:41:51 localhost kernel: hub 1-0:1.0: USB hub found
Jan 26 14:41:51 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 26 14:41:51 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 26 14:41:51 localhost kernel: usbserial: USB Serial support registered for generic
Jan 26 14:41:51 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 26 14:41:51 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 26 14:41:51 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 26 14:41:51 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 26 14:41:51 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 26 14:41:51 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 26 14:41:51 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 26 14:41:51 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-26T14:41:50 UTC (1769438510)
Jan 26 14:41:51 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 26 14:41:51 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 26 14:41:51 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 26 14:41:51 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 26 14:41:51 localhost kernel: usbcore: registered new interface driver usbhid
Jan 26 14:41:51 localhost kernel: usbhid: USB HID core driver
Jan 26 14:41:51 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 26 14:41:51 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 26 14:41:51 localhost kernel: Initializing XFRM netlink socket
Jan 26 14:41:51 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 26 14:41:51 localhost kernel: Segment Routing with IPv6
Jan 26 14:41:51 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 26 14:41:51 localhost kernel: mpls_gso: MPLS GSO support
Jan 26 14:41:51 localhost kernel: IPI shorthand broadcast: enabled
Jan 26 14:41:51 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 26 14:41:51 localhost kernel: AES CTR mode by8 optimization enabled
Jan 26 14:41:51 localhost kernel: sched_clock: Marking stable (1405002787, 148594246)->(1664008878, -110411845)
Jan 26 14:41:51 localhost kernel: registered taskstats version 1
Jan 26 14:41:51 localhost kernel: Loading compiled-in X.509 certificates
Jan 26 14:41:51 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 26 14:41:51 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 26 14:41:51 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 26 14:41:51 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 26 14:41:51 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 26 14:41:51 localhost kernel: Demotion targets for Node 0: null
Jan 26 14:41:51 localhost kernel: page_owner is disabled
Jan 26 14:41:51 localhost kernel: Key type .fscrypt registered
Jan 26 14:41:51 localhost kernel: Key type fscrypt-provisioning registered
Jan 26 14:41:51 localhost kernel: Key type big_key registered
Jan 26 14:41:51 localhost kernel: Key type encrypted registered
Jan 26 14:41:51 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 26 14:41:51 localhost kernel: Loading compiled-in module X.509 certificates
Jan 26 14:41:51 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 26 14:41:51 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 26 14:41:51 localhost kernel: ima: No architecture policies found
Jan 26 14:41:51 localhost kernel: evm: Initialising EVM extended attributes:
Jan 26 14:41:51 localhost kernel: evm: security.selinux
Jan 26 14:41:51 localhost kernel: evm: security.SMACK64 (disabled)
Jan 26 14:41:51 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 26 14:41:51 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 26 14:41:51 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 26 14:41:51 localhost kernel: evm: security.apparmor (disabled)
Jan 26 14:41:51 localhost kernel: evm: security.ima
Jan 26 14:41:51 localhost kernel: evm: security.capability
Jan 26 14:41:51 localhost kernel: evm: HMAC attrs: 0x1
Jan 26 14:41:51 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 26 14:41:51 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 26 14:41:51 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 26 14:41:51 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 26 14:41:51 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 26 14:41:51 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 26 14:41:51 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 26 14:41:51 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 26 14:41:51 localhost kernel: Running certificate verification RSA selftest
Jan 26 14:41:51 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 26 14:41:51 localhost kernel: Running certificate verification ECDSA selftest
Jan 26 14:41:51 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 26 14:41:51 localhost kernel: clk: Disabling unused clocks
Jan 26 14:41:51 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 26 14:41:51 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 26 14:41:51 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 26 14:41:51 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 26 14:41:51 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 26 14:41:51 localhost kernel: Run /init as init process
Jan 26 14:41:51 localhost kernel:   with arguments:
Jan 26 14:41:51 localhost kernel:     /init
Jan 26 14:41:51 localhost kernel:   with environment:
Jan 26 14:41:51 localhost kernel:     HOME=/
Jan 26 14:41:51 localhost kernel:     TERM=linux
Jan 26 14:41:51 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 26 14:41:51 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 26 14:41:51 localhost systemd[1]: Detected virtualization kvm.
Jan 26 14:41:51 localhost systemd[1]: Detected architecture x86-64.
Jan 26 14:41:51 localhost systemd[1]: Running in initrd.
Jan 26 14:41:51 localhost systemd[1]: No hostname configured, using default hostname.
Jan 26 14:41:51 localhost systemd[1]: Hostname set to <localhost>.
Jan 26 14:41:51 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 26 14:41:51 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 26 14:41:51 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 26 14:41:51 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 26 14:41:51 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 26 14:41:51 localhost systemd[1]: Reached target Local File Systems.
Jan 26 14:41:51 localhost systemd[1]: Reached target Path Units.
Jan 26 14:41:51 localhost systemd[1]: Reached target Slice Units.
Jan 26 14:41:51 localhost systemd[1]: Reached target Swaps.
Jan 26 14:41:51 localhost systemd[1]: Reached target Timer Units.
Jan 26 14:41:51 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 26 14:41:51 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 26 14:41:51 localhost systemd[1]: Listening on Journal Socket.
Jan 26 14:41:51 localhost systemd[1]: Listening on udev Control Socket.
Jan 26 14:41:51 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 26 14:41:51 localhost systemd[1]: Reached target Socket Units.
Jan 26 14:41:51 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 26 14:41:51 localhost systemd[1]: Starting Journal Service...
Jan 26 14:41:51 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 26 14:41:51 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 26 14:41:51 localhost systemd[1]: Starting Create System Users...
Jan 26 14:41:51 localhost systemd[1]: Starting Setup Virtual Console...
Jan 26 14:41:51 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 26 14:41:51 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 26 14:41:51 localhost systemd-journald[306]: Journal started
Jan 26 14:41:51 localhost systemd-journald[306]: Runtime Journal (/run/log/journal/899b885c8c66485fb49d8445aa8881a6) is 8.0M, max 153.6M, 145.6M free.
Jan 26 14:41:51 localhost systemd[1]: Started Journal Service.
Jan 26 14:41:51 localhost systemd-sysusers[310]: Creating group 'users' with GID 100.
Jan 26 14:41:51 localhost systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Jan 26 14:41:51 localhost systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 26 14:41:51 localhost systemd[1]: Finished Create System Users.
Jan 26 14:41:51 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 26 14:41:51 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 26 14:41:51 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 26 14:41:51 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 26 14:41:51 localhost systemd[1]: Finished Setup Virtual Console.
Jan 26 14:41:51 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 26 14:41:51 localhost systemd[1]: Starting dracut cmdline hook...
Jan 26 14:41:51 localhost dracut-cmdline[327]: dracut-9 dracut-057-102.git20250818.el9
Jan 26 14:41:51 localhost dracut-cmdline[327]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 26 14:41:51 localhost systemd[1]: Finished dracut cmdline hook.
Jan 26 14:41:51 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 26 14:41:51 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 26 14:41:51 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 26 14:41:51 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 26 14:41:51 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 26 14:41:51 localhost kernel: RPC: Registered udp transport module.
Jan 26 14:41:51 localhost kernel: RPC: Registered tcp transport module.
Jan 26 14:41:51 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 26 14:41:51 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 26 14:41:51 localhost rpc.statd[443]: Version 2.5.4 starting
Jan 26 14:41:51 localhost rpc.statd[443]: Initializing NSM state
Jan 26 14:41:51 localhost rpc.idmapd[448]: Setting log level to 0
Jan 26 14:41:51 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 26 14:41:51 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 26 14:41:52 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Jan 26 14:41:52 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 26 14:41:52 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 26 14:41:52 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 26 14:41:52 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 26 14:41:52 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 26 14:41:52 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 26 14:41:52 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 26 14:41:52 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 26 14:41:52 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 26 14:41:52 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 26 14:41:52 localhost systemd[1]: Reached target Network.
Jan 26 14:41:52 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 26 14:41:52 localhost systemd[1]: Starting dracut initqueue hook...
Jan 26 14:41:52 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 26 14:41:52 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 26 14:41:52 localhost kernel: libata version 3.00 loaded.
Jan 26 14:41:52 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 26 14:41:52 localhost kernel: scsi host0: ata_piix
Jan 26 14:41:52 localhost kernel: scsi host1: ata_piix
Jan 26 14:41:52 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 26 14:41:52 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 26 14:41:52 localhost kernel:  vda: vda1
Jan 26 14:41:52 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 26 14:41:52 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 26 14:41:52 localhost systemd[1]: Reached target System Initialization.
Jan 26 14:41:52 localhost systemd[1]: Reached target Basic System.
Jan 26 14:41:52 localhost kernel: ata1: found unknown device (class 0)
Jan 26 14:41:52 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 26 14:41:52 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 26 14:41:52 localhost systemd-udevd[487]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 14:41:52 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 26 14:41:52 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 26 14:41:52 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 26 14:41:52 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 26 14:41:52 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 26 14:41:52 localhost systemd[1]: Reached target Initrd Root Device.
Jan 26 14:41:52 localhost systemd[1]: Finished dracut initqueue hook.
Jan 26 14:41:52 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 26 14:41:52 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 26 14:41:52 localhost systemd[1]: Reached target Remote File Systems.
Jan 26 14:41:52 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 26 14:41:52 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 26 14:41:52 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 26 14:41:52 localhost systemd-fsck[559]: /usr/sbin/fsck.xfs: XFS file system.
Jan 26 14:41:52 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 26 14:41:52 localhost systemd[1]: Mounting /sysroot...
Jan 26 14:41:53 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 26 14:41:53 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 26 14:41:53 localhost kernel: XFS (vda1): Ending clean mount
Jan 26 14:41:53 localhost systemd[1]: Mounted /sysroot.
Jan 26 14:41:53 localhost systemd[1]: Reached target Initrd Root File System.
Jan 26 14:41:53 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 26 14:41:53 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 26 14:41:53 localhost systemd[1]: Reached target Initrd File Systems.
Jan 26 14:41:53 localhost systemd[1]: Reached target Initrd Default Target.
Jan 26 14:41:53 localhost systemd[1]: Starting dracut mount hook...
Jan 26 14:41:53 localhost systemd[1]: Finished dracut mount hook.
Jan 26 14:41:53 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 26 14:41:53 localhost rpc.idmapd[448]: exiting on signal 15
Jan 26 14:41:53 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 26 14:41:53 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 26 14:41:53 localhost systemd[1]: Stopped target Network.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Timer Units.
Jan 26 14:41:53 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 26 14:41:53 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Basic System.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Path Units.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Remote File Systems.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Slice Units.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Socket Units.
Jan 26 14:41:53 localhost systemd[1]: Stopped target System Initialization.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Local File Systems.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Swaps.
Jan 26 14:41:53 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped dracut mount hook.
Jan 26 14:41:53 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 26 14:41:53 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 26 14:41:53 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 26 14:41:53 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 26 14:41:53 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 26 14:41:53 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 26 14:41:53 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 26 14:41:53 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 26 14:41:53 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 26 14:41:53 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 26 14:41:53 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 26 14:41:53 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 26 14:41:53 localhost systemd[1]: systemd-udevd.service: Consumed 1.023s CPU time.
Jan 26 14:41:53 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Closed udev Control Socket.
Jan 26 14:41:53 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Closed udev Kernel Socket.
Jan 26 14:41:53 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 26 14:41:53 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 26 14:41:53 localhost systemd[1]: Starting Cleanup udev Database...
Jan 26 14:41:53 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 26 14:41:53 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 26 14:41:53 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Stopped Create System Users.
Jan 26 14:41:53 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 26 14:41:53 localhost systemd[1]: Finished Cleanup udev Database.
Jan 26 14:41:53 localhost systemd[1]: Reached target Switch Root.
Jan 26 14:41:53 localhost systemd[1]: Starting Switch Root...
Jan 26 14:41:53 localhost systemd[1]: Switching root.
Jan 26 14:41:53 localhost systemd-journald[306]: Journal stopped
Jan 26 15:32:22 compute-0 sudo[159025]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:22 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 15:32:22 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 15:32:23 compute-0 sudo[159190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtsvefzrmexdoucarrxftqvantpbndxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441542.6966858-111-39727517609375/AnsiballZ_file.py'
Jan 26 15:32:23 compute-0 sudo[159190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:23 compute-0 python3.9[159192]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:23 compute-0 sudo[159190]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:23 compute-0 sudo[159345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-assjqknofalfyttnwheeziwnkocmyigl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441543.37954-111-278544666273614/AnsiballZ_file.py'
Jan 26 15:32:23 compute-0 sudo[159345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:24 compute-0 ceph-mon[75140]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:24 compute-0 python3.9[159347]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:24 compute-0 sudo[159345]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:25 compute-0 sudo[159498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfzbkmziingsmntzirjkqeimqspwjnej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441544.9867213-111-9947185800716/AnsiballZ_file.py'
Jan 26 15:32:25 compute-0 sudo[159498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:25 compute-0 python3.9[159500]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:25 compute-0 sudo[159498]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:25 compute-0 ceph-mon[75140]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:25 compute-0 sudo[159650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shjwhlyjtoaqeqlxhamydttzovnjzeme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441545.5962071-111-158973142855200/AnsiballZ_file.py'
Jan 26 15:32:25 compute-0 sudo[159650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:26 compute-0 python3.9[159652]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:26 compute-0 sudo[159650]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:26 compute-0 sudo[159802]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkqjtyzkyvlgzfdjqwocqrqrtrgcofdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441546.317583-161-98420053797618/AnsiballZ_file.py'
Jan 26 15:32:26 compute-0 sudo[159802]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:26 compute-0 python3.9[159804]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:26 compute-0 sudo[159802]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:32:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:27 compute-0 sudo[159954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meswtwzhuffzzlzjhylywhzzassdzuaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441547.0002306-161-136247735481172/AnsiballZ_file.py'
Jan 26 15:32:27 compute-0 sudo[159954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:27 compute-0 python3.9[159956]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:27 compute-0 sudo[159954]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:27 compute-0 podman[160080]: 2026-01-26 15:32:27.952009145 +0000 UTC m=+0.052733777 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 15:32:27 compute-0 sudo[160125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdpdkjmkgqokpqvxuwexyatsjcnpirpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441547.6362987-161-153841263087704/AnsiballZ_file.py'
Jan 26 15:32:27 compute-0 sudo[160125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:28 compute-0 python3.9[160127]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:28 compute-0 sudo[160125]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:28 compute-0 ceph-mon[75140]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:32:28
Jan 26 15:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'vms', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'default.rgw.log']
Jan 26 15:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:32:28 compute-0 sudo[160277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzijmzzmiboycxwquwtkfgypssktusmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441548.2997475-161-257461883048443/AnsiballZ_file.py'
Jan 26 15:32:28 compute-0 sudo[160277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:28 compute-0 python3.9[160279]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:28 compute-0 sudo[160277]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:29 compute-0 sudo[160429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuvghvjncmanodmnvdkgbpohyuqhvgpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441548.9128015-161-23726220749983/AnsiballZ_file.py'
Jan 26 15:32:29 compute-0 sudo[160429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:29 compute-0 python3.9[160431]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:29 compute-0 sudo[160429]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:29 compute-0 sudo[160581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgpseojufoqvlsabofrqahadopujmibc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441549.4923184-161-65532009447198/AnsiballZ_file.py'
Jan 26 15:32:29 compute-0 sudo[160581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:29 compute-0 python3.9[160583]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:29 compute-0 sudo[160581]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:30 compute-0 sudo[160733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcxnqxavhmahecpltunqwklcfqvsftpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441550.0474749-161-199891862423377/AnsiballZ_file.py'
Jan 26 15:32:30 compute-0 sudo[160733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:32:30 compute-0 ceph-mon[75140]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:32:30 compute-0 python3.9[160735]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:32:30 compute-0 sudo[160733]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:30 compute-0 sudo[160885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oegrtgumenwdasrbzuxhyiegwuqqywnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441550.7338424-212-261428256266283/AnsiballZ_command.py'
Jan 26 15:32:30 compute-0 sudo[160885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:31 compute-0 python3.9[160887]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:32:31 compute-0 sudo[160885]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:31 compute-0 ceph-mon[75140]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:32 compute-0 python3.9[161039]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 15:32:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:32:32 compute-0 sudo[161189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhvibgeqomghlmdfogblcicrtnewdzku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441552.3448818-230-242804377730570/AnsiballZ_systemd_service.py'
Jan 26 15:32:32 compute-0 sudo[161189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:33 compute-0 python3.9[161191]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 15:32:33 compute-0 systemd[1]: Reloading.
Jan 26 15:32:33 compute-0 systemd-rc-local-generator[161220]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:32:33 compute-0 systemd-sysv-generator[161224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:32:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:33 compute-0 sudo[161189]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:33 compute-0 sudo[161377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlumwxsviprpjgcucjrfetmqswzzuidl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441553.55422-238-129305242491372/AnsiballZ_command.py'
Jan 26 15:32:33 compute-0 sudo[161377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:34 compute-0 python3.9[161379]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:32:34 compute-0 sudo[161377]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:34 compute-0 ceph-mon[75140]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:34 compute-0 sudo[161530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-binhturwreoklatyufpvirdayevyjtuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441554.2570708-238-110913011423561/AnsiballZ_command.py'
Jan 26 15:32:34 compute-0 sudo[161530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:34 compute-0 python3.9[161532]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:32:34 compute-0 sudo[161530]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:35 compute-0 sudo[161683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrrhtwmvpmxptzivyfyezxwzhslxuivy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441554.8848286-238-73499871168273/AnsiballZ_command.py'
Jan 26 15:32:35 compute-0 sudo[161683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:35 compute-0 python3.9[161685]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:32:35 compute-0 sudo[161683]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:35 compute-0 sudo[161836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aeyswthkxuuyqfblusyodzjucbfflrcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441555.6417325-238-79076509368858/AnsiballZ_command.py'
Jan 26 15:32:35 compute-0 sudo[161836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:36 compute-0 python3.9[161838]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:32:36 compute-0 sudo[161836]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:36 compute-0 ceph-mon[75140]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:36 compute-0 sudo[161989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrxyxirxklckwepoitvhcxfqlclhhtpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441556.360857-238-125829606700351/AnsiballZ_command.py'
Jan 26 15:32:36 compute-0 sudo[161989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:36 compute-0 python3.9[161991]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:32:36 compute-0 sudo[161989]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:32:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:37 compute-0 sudo[162142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-widxjoupbjpmswpbatmvdglggipfdekf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441557.0285985-238-52395857557310/AnsiballZ_command.py'
Jan 26 15:32:37 compute-0 sudo[162142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:37 compute-0 python3.9[162144]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:32:37 compute-0 sudo[162142]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:38 compute-0 sudo[162295]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqaovhbaqfwqdvxupvcmvyzebvuzbqle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441557.6754603-238-80458181159236/AnsiballZ_command.py'
Jan 26 15:32:38 compute-0 sudo[162295]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:38 compute-0 python3.9[162297]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:32:38 compute-0 ceph-mon[75140]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:38 compute-0 sudo[162295]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:39 compute-0 sudo[162448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvqpozrnpwvjicjdlzzgwatuvsmwrizs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441558.6280134-292-213500802579874/AnsiballZ_getent.py'
Jan 26 15:32:39 compute-0 sudo[162448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:39 compute-0 python3.9[162450]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 26 15:32:39 compute-0 sudo[162448]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:40 compute-0 sudo[162601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eurhloojgscmqtyohvqfyesppxnckxbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441559.547743-300-13817264582628/AnsiballZ_group.py'
Jan 26 15:32:40 compute-0 sudo[162601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:40 compute-0 python3.9[162603]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 15:32:40 compute-0 ceph-mon[75140]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:40 compute-0 groupadd[162604]: group added to /etc/group: name=libvirt, GID=42473
Jan 26 15:32:40 compute-0 groupadd[162604]: group added to /etc/gshadow: name=libvirt
Jan 26 15:32:40 compute-0 groupadd[162604]: new group: name=libvirt, GID=42473
Jan 26 15:32:40 compute-0 sudo[162601]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:41 compute-0 sudo[162759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elwzthrwwcbxqtqkoeerofakcmkmiune ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441560.4953265-308-216449418060140/AnsiballZ_user.py'
Jan 26 15:32:41 compute-0 sudo[162759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:41 compute-0 python3.9[162761]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 15:32:41 compute-0 useradd[162763]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 26 15:32:41 compute-0 sudo[162759]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:41 compute-0 sudo[162919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwfwunnalsycxkmyqknzmvgjsqisrbuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441561.6717103-319-70967398678424/AnsiballZ_setup.py'
Jan 26 15:32:41 compute-0 sudo[162919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:32:42 compute-0 ceph-mon[75140]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:42 compute-0 python3.9[162921]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 15:32:42 compute-0 sudo[162919]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:42 compute-0 sudo[163003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlbmkqnmewxtviuujqbjbwptgdkxgnvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441561.6717103-319-70967398678424/AnsiballZ_dnf.py'
Jan 26 15:32:42 compute-0 sudo[163003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:32:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:43 compute-0 python3.9[163005]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 15:32:44 compute-0 ceph-mon[75140]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:46 compute-0 ceph-mon[75140]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:32:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:48 compute-0 ceph-mon[75140]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:32:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:32:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:50 compute-0 ceph-mon[75140]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:32:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 26 15:32:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:32:52 compute-0 ceph-mon[75140]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 26 15:32:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:32:53 compute-0 podman[163050]: 2026-01-26 15:32:53.417224362 +0000 UTC m=+0.100741347 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 15:32:54 compute-0 ceph-mon[75140]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:32:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:32:56 compute-0 ceph-mon[75140]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:32:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:32:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:32:57 compute-0 sudo[163211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:32:57 compute-0 sudo[163211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:32:57 compute-0 sudo[163211]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:58 compute-0 sudo[163239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 15:32:58 compute-0 sudo[163239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:32:58 compute-0 podman[163264]: 2026-01-26 15:32:58.11763798 +0000 UTC m=+0.062061844 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 15:32:58 compute-0 ceph-mon[75140]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:32:58 compute-0 podman[163327]: 2026-01-26 15:32:58.513342401 +0000 UTC m=+0.091082092 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 15:32:58 compute-0 podman[163327]: 2026-01-26 15:32:58.599482881 +0000 UTC m=+0.177222552 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 15:32:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:32:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:32:59.194 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:32:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:32:59.195 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:32:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:32:59.195 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:32:59 compute-0 sudo[163239]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:32:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:32:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:32:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:32:59 compute-0 sudo[163513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:32:59 compute-0 sudo[163513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:32:59 compute-0 sudo[163513]: pam_unix(sudo:session): session closed for user root
Jan 26 15:32:59 compute-0 sudo[163538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:32:59 compute-0 sudo[163538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:33:00 compute-0 sudo[163538]: pam_unix(sudo:session): session closed for user root
Jan 26 15:33:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:33:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:33:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:33:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:33:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:33:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:33:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:33:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:33:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:33:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:33:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:33:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:33:00 compute-0 sudo[163596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:33:00 compute-0 sudo[163596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:33:00 compute-0 sudo[163596]: pam_unix(sudo:session): session closed for user root
Jan 26 15:33:00 compute-0 sudo[163621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:33:00 compute-0 sudo[163621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:33:00 compute-0 ceph-mon[75140]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:33:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:33:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:33:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:33:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:33:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:33:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:33:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:33:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:33:00 compute-0 podman[163656]: 2026-01-26 15:33:00.510272004 +0000 UTC m=+0.041780282 container create 7518f989aaae6ef9808afc91efa24b91a5c55625aeab553f799b89a7fdbad2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:33:00 compute-0 systemd[1]: Started libpod-conmon-7518f989aaae6ef9808afc91efa24b91a5c55625aeab553f799b89a7fdbad2b5.scope.
Jan 26 15:33:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:33:00 compute-0 podman[163656]: 2026-01-26 15:33:00.565722731 +0000 UTC m=+0.097231059 container init 7518f989aaae6ef9808afc91efa24b91a5c55625aeab553f799b89a7fdbad2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:33:00 compute-0 podman[163656]: 2026-01-26 15:33:00.571466609 +0000 UTC m=+0.102974887 container start 7518f989aaae6ef9808afc91efa24b91a5c55625aeab553f799b89a7fdbad2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:33:00 compute-0 podman[163656]: 2026-01-26 15:33:00.574473996 +0000 UTC m=+0.105982294 container attach 7518f989aaae6ef9808afc91efa24b91a5c55625aeab553f799b89a7fdbad2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 15:33:00 compute-0 adoring_bhaskara[163672]: 167 167
Jan 26 15:33:00 compute-0 systemd[1]: libpod-7518f989aaae6ef9808afc91efa24b91a5c55625aeab553f799b89a7fdbad2b5.scope: Deactivated successfully.
Jan 26 15:33:00 compute-0 podman[163656]: 2026-01-26 15:33:00.577058973 +0000 UTC m=+0.108567261 container died 7518f989aaae6ef9808afc91efa24b91a5c55625aeab553f799b89a7fdbad2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 26 15:33:00 compute-0 podman[163656]: 2026-01-26 15:33:00.494655226 +0000 UTC m=+0.026163524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-594e6094ac1551b57b60a6241b804861653c2f161565784b7ef62992fcf46de8-merged.mount: Deactivated successfully.
Jan 26 15:33:00 compute-0 podman[163656]: 2026-01-26 15:33:00.61550604 +0000 UTC m=+0.147014348 container remove 7518f989aaae6ef9808afc91efa24b91a5c55625aeab553f799b89a7fdbad2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 15:33:00 compute-0 systemd[1]: libpod-conmon-7518f989aaae6ef9808afc91efa24b91a5c55625aeab553f799b89a7fdbad2b5.scope: Deactivated successfully.
Jan 26 15:33:00 compute-0 podman[163696]: 2026-01-26 15:33:00.749264452 +0000 UTC m=+0.020191261 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:33:00 compute-0 podman[163696]: 2026-01-26 15:33:00.986669565 +0000 UTC m=+0.257596354 container create 7fae4e8e99792344ba02b71e661ae3b71694f1c843a483611ff807f0127d18b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 15:33:01 compute-0 systemd[1]: Started libpod-conmon-7fae4e8e99792344ba02b71e661ae3b71694f1c843a483611ff807f0127d18b4.scope.
Jan 26 15:33:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1d864a6743d78927109a3b8fa7acae3951f3ec8511dcacf3fec98794dfa2b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1d864a6743d78927109a3b8fa7acae3951f3ec8511dcacf3fec98794dfa2b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1d864a6743d78927109a3b8fa7acae3951f3ec8511dcacf3fec98794dfa2b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1d864a6743d78927109a3b8fa7acae3951f3ec8511dcacf3fec98794dfa2b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e1d864a6743d78927109a3b8fa7acae3951f3ec8511dcacf3fec98794dfa2b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:01 compute-0 podman[163696]: 2026-01-26 15:33:01.075686538 +0000 UTC m=+0.346613327 container init 7fae4e8e99792344ba02b71e661ae3b71694f1c843a483611ff807f0127d18b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hertz, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 15:33:01 compute-0 podman[163696]: 2026-01-26 15:33:01.082086291 +0000 UTC m=+0.353013080 container start 7fae4e8e99792344ba02b71e661ae3b71694f1c843a483611ff807f0127d18b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:33:01 compute-0 podman[163696]: 2026-01-26 15:33:01.085848754 +0000 UTC m=+0.356775563 container attach 7fae4e8e99792344ba02b71e661ae3b71694f1c843a483611ff807f0127d18b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hertz, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:33:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:33:01 compute-0 objective_hertz[163713]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:33:01 compute-0 objective_hertz[163713]: --> All data devices are unavailable
Jan 26 15:33:01 compute-0 systemd[1]: libpod-7fae4e8e99792344ba02b71e661ae3b71694f1c843a483611ff807f0127d18b4.scope: Deactivated successfully.
Jan 26 15:33:01 compute-0 podman[163696]: 2026-01-26 15:33:01.547411234 +0000 UTC m=+0.818338033 container died 7fae4e8e99792344ba02b71e661ae3b71694f1c843a483611ff807f0127d18b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hertz, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e1d864a6743d78927109a3b8fa7acae3951f3ec8511dcacf3fec98794dfa2b2-merged.mount: Deactivated successfully.
Jan 26 15:33:01 compute-0 podman[163696]: 2026-01-26 15:33:01.601645603 +0000 UTC m=+0.872572392 container remove 7fae4e8e99792344ba02b71e661ae3b71694f1c843a483611ff807f0127d18b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_hertz, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:33:01 compute-0 systemd[1]: libpod-conmon-7fae4e8e99792344ba02b71e661ae3b71694f1c843a483611ff807f0127d18b4.scope: Deactivated successfully.
Jan 26 15:33:01 compute-0 sudo[163621]: pam_unix(sudo:session): session closed for user root
Jan 26 15:33:01 compute-0 sudo[163746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:33:01 compute-0 sudo[163746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:33:01 compute-0 sudo[163746]: pam_unix(sudo:session): session closed for user root
Jan 26 15:33:01 compute-0 sudo[163771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:33:01 compute-0 sudo[163771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:33:02 compute-0 podman[163808]: 2026-01-26 15:33:02.084367163 +0000 UTC m=+0.034635323 container create a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:33:02 compute-0 systemd[1]: Started libpod-conmon-a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a.scope.
Jan 26 15:33:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:33:02 compute-0 podman[163808]: 2026-01-26 15:33:02.06895704 +0000 UTC m=+0.019225210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:33:02 compute-0 podman[163808]: 2026-01-26 15:33:02.175030214 +0000 UTC m=+0.125298374 container init a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_maxwell, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:33:02 compute-0 podman[163808]: 2026-01-26 15:33:02.182641474 +0000 UTC m=+0.132909624 container start a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:33:02 compute-0 fervent_maxwell[163826]: 167 167
Jan 26 15:33:02 compute-0 systemd[1]: libpod-a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a.scope: Deactivated successfully.
Jan 26 15:33:02 compute-0 conmon[163826]: conmon a7772ef757326e99772b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a.scope/container/memory.events
Jan 26 15:33:02 compute-0 podman[163808]: 2026-01-26 15:33:02.194346104 +0000 UTC m=+0.144614284 container attach a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 15:33:02 compute-0 podman[163808]: 2026-01-26 15:33:02.194653841 +0000 UTC m=+0.144921991 container died a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_maxwell, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-d16d3703420a4f25152a442e1caeec800f9fc64f9b518c1ffee11651bffeca49-merged.mount: Deactivated successfully.
Jan 26 15:33:02 compute-0 podman[163808]: 2026-01-26 15:33:02.228846634 +0000 UTC m=+0.179114784 container remove a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 15:33:02 compute-0 systemd[1]: libpod-conmon-a7772ef757326e99772b9582511092dbefc734f9656013197adf775ef44fcf2a.scope: Deactivated successfully.
Jan 26 15:33:02 compute-0 podman[163849]: 2026-01-26 15:33:02.385840093 +0000 UTC m=+0.037129709 container create 96cb93478bd9bc768d64b2493d558d5ca93b2afd858742f3b843ffad0e7b80a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 15:33:02 compute-0 systemd[1]: Started libpod-conmon-96cb93478bd9bc768d64b2493d558d5ca93b2afd858742f3b843ffad0e7b80a1.scope.
Jan 26 15:33:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe917b9137a16fcc66855037577582b0459d2442478cd652506d1d9362db5c9a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe917b9137a16fcc66855037577582b0459d2442478cd652506d1d9362db5c9a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe917b9137a16fcc66855037577582b0459d2442478cd652506d1d9362db5c9a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:02 compute-0 podman[163849]: 2026-01-26 15:33:02.367677918 +0000 UTC m=+0.018967564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe917b9137a16fcc66855037577582b0459d2442478cd652506d1d9362db5c9a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:02 compute-0 ceph-mon[75140]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:33:02 compute-0 podman[163849]: 2026-01-26 15:33:02.481224279 +0000 UTC m=+0.132513915 container init 96cb93478bd9bc768d64b2493d558d5ca93b2afd858742f3b843ffad0e7b80a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:33:02 compute-0 podman[163849]: 2026-01-26 15:33:02.487770416 +0000 UTC m=+0.139060032 container start 96cb93478bd9bc768d64b2493d558d5ca93b2afd858742f3b843ffad0e7b80a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 15:33:02 compute-0 podman[163849]: 2026-01-26 15:33:02.490904175 +0000 UTC m=+0.142193791 container attach 96cb93478bd9bc768d64b2493d558d5ca93b2afd858742f3b843ffad0e7b80a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]: {
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:     "0": [
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:         {
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "devices": [
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "/dev/loop3"
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             ],
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_name": "ceph_lv0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_size": "21470642176",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "name": "ceph_lv0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "tags": {
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.cluster_name": "ceph",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.crush_device_class": "",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.encrypted": "0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.objectstore": "bluestore",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.osd_id": "0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.type": "block",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.vdo": "0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.with_tpm": "0"
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             },
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "type": "block",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "vg_name": "ceph_vg0"
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:         }
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:     ],
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:     "1": [
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:         {
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "devices": [
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "/dev/loop4"
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             ],
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_name": "ceph_lv1",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_size": "21470642176",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "name": "ceph_lv1",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "tags": {
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.cluster_name": "ceph",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.crush_device_class": "",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.encrypted": "0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.objectstore": "bluestore",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.osd_id": "1",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.type": "block",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.vdo": "0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.with_tpm": "0"
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             },
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "type": "block",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "vg_name": "ceph_vg1"
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:         }
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:     ],
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:     "2": [
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:         {
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "devices": [
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "/dev/loop5"
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             ],
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_name": "ceph_lv2",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_size": "21470642176",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "name": "ceph_lv2",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "tags": {
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.cluster_name": "ceph",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.crush_device_class": "",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.encrypted": "0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.objectstore": "bluestore",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.osd_id": "2",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.type": "block",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.vdo": "0",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:                 "ceph.with_tpm": "0"
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             },
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "type": "block",
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:             "vg_name": "ceph_vg2"
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:         }
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]:     ]
Jan 26 15:33:02 compute-0 crazy_northcutt[163866]: }
Jan 26 15:33:02 compute-0 systemd[1]: libpod-96cb93478bd9bc768d64b2493d558d5ca93b2afd858742f3b843ffad0e7b80a1.scope: Deactivated successfully.
Jan 26 15:33:02 compute-0 podman[163849]: 2026-01-26 15:33:02.784598732 +0000 UTC m=+0.435888358 container died 96cb93478bd9bc768d64b2493d558d5ca93b2afd858742f3b843ffad0e7b80a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 15:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe917b9137a16fcc66855037577582b0459d2442478cd652506d1d9362db5c9a-merged.mount: Deactivated successfully.
Jan 26 15:33:02 compute-0 podman[163849]: 2026-01-26 15:33:02.831715802 +0000 UTC m=+0.483005428 container remove 96cb93478bd9bc768d64b2493d558d5ca93b2afd858742f3b843ffad0e7b80a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_northcutt, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 15:33:02 compute-0 systemd[1]: libpod-conmon-96cb93478bd9bc768d64b2493d558d5ca93b2afd858742f3b843ffad0e7b80a1.scope: Deactivated successfully.
Jan 26 15:33:02 compute-0 sudo[163771]: pam_unix(sudo:session): session closed for user root
Jan 26 15:33:02 compute-0 sudo[163893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:33:02 compute-0 sudo[163893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:33:02 compute-0 sudo[163893]: pam_unix(sudo:session): session closed for user root
Jan 26 15:33:03 compute-0 sudo[163918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:33:03 compute-0 sudo[163918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:33:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 15:33:03 compute-0 podman[163955]: 2026-01-26 15:33:03.325888058 +0000 UTC m=+0.053130936 container create 0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_proskuriakova, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 15:33:03 compute-0 systemd[1]: Started libpod-conmon-0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618.scope.
Jan 26 15:33:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:33:03 compute-0 podman[163955]: 2026-01-26 15:33:03.298679382 +0000 UTC m=+0.025922280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:33:03 compute-0 podman[163955]: 2026-01-26 15:33:03.399740404 +0000 UTC m=+0.126983302 container init 0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_proskuriakova, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:33:03 compute-0 podman[163955]: 2026-01-26 15:33:03.408164912 +0000 UTC m=+0.135407790 container start 0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 15:33:03 compute-0 wonderful_proskuriakova[163972]: 167 167
Jan 26 15:33:03 compute-0 systemd[1]: libpod-0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618.scope: Deactivated successfully.
Jan 26 15:33:03 compute-0 conmon[163972]: conmon 0eacba80d427ca7d8138 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618.scope/container/memory.events
Jan 26 15:33:03 compute-0 podman[163955]: 2026-01-26 15:33:03.412800085 +0000 UTC m=+0.140042963 container attach 0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_proskuriakova, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 15:33:03 compute-0 podman[163955]: 2026-01-26 15:33:03.413962971 +0000 UTC m=+0.141205849 container died 0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_proskuriakova, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 15:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b4ef9b74ca46afe38ff27bdaacae1b31a2810e2e1ded211a729136c8f531221-merged.mount: Deactivated successfully.
Jan 26 15:33:03 compute-0 podman[163955]: 2026-01-26 15:33:03.451328764 +0000 UTC m=+0.178571632 container remove 0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 15:33:03 compute-0 systemd[1]: libpod-conmon-0eacba80d427ca7d81388faddb9883e892dd0d32a62ffa4f82f4a79d9405f618.scope: Deactivated successfully.
Jan 26 15:33:03 compute-0 podman[163996]: 2026-01-26 15:33:03.626042879 +0000 UTC m=+0.047740195 container create 3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 15:33:03 compute-0 systemd[1]: Started libpod-conmon-3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0.scope.
Jan 26 15:33:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6a64e694f5849418d5648062545e14be36496115e5c3b9f217b08af7a9f6d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:03 compute-0 podman[163996]: 2026-01-26 15:33:03.602480004 +0000 UTC m=+0.024177360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6a64e694f5849418d5648062545e14be36496115e5c3b9f217b08af7a9f6d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6a64e694f5849418d5648062545e14be36496115e5c3b9f217b08af7a9f6d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b6a64e694f5849418d5648062545e14be36496115e5c3b9f217b08af7a9f6d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:33:03 compute-0 podman[163996]: 2026-01-26 15:33:03.716636488 +0000 UTC m=+0.138333814 container init 3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:33:03 compute-0 podman[163996]: 2026-01-26 15:33:03.723922591 +0000 UTC m=+0.145619907 container start 3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 15:33:03 compute-0 podman[163996]: 2026-01-26 15:33:03.731378537 +0000 UTC m=+0.153075873 container attach 3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 15:33:04 compute-0 lvm[164091]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:33:04 compute-0 lvm[164090]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:33:04 compute-0 lvm[164091]: VG ceph_vg1 finished
Jan 26 15:33:04 compute-0 lvm[164090]: VG ceph_vg0 finished
Jan 26 15:33:04 compute-0 lvm[164093]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:33:04 compute-0 lvm[164093]: VG ceph_vg2 finished
Jan 26 15:33:04 compute-0 ceph-mon[75140]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 15:33:04 compute-0 nostalgic_volhard[164012]: {}
Jan 26 15:33:04 compute-0 systemd[1]: libpod-3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0.scope: Deactivated successfully.
Jan 26 15:33:04 compute-0 systemd[1]: libpod-3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0.scope: Consumed 1.277s CPU time.
Jan 26 15:33:04 compute-0 podman[163996]: 2026-01-26 15:33:04.532943824 +0000 UTC m=+0.954641170 container died 3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:33:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b6a64e694f5849418d5648062545e14be36496115e5c3b9f217b08af7a9f6d6-merged.mount: Deactivated successfully.
Jan 26 15:33:04 compute-0 podman[163996]: 2026-01-26 15:33:04.59602036 +0000 UTC m=+1.017717676 container remove 3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_volhard, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:33:04 compute-0 systemd[1]: libpod-conmon-3ff99bb290a3c26184b8a7cc1445cc3cc83fb47710aaae632359bb58260416e0.scope: Deactivated successfully.
Jan 26 15:33:04 compute-0 sudo[163918]: pam_unix(sudo:session): session closed for user root
Jan 26 15:33:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:33:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:33:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:33:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:33:04 compute-0 sudo[164110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:33:04 compute-0 sudo[164110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:33:04 compute-0 sudo[164110]: pam_unix(sudo:session): session closed for user root
Jan 26 15:33:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:33:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:33:06 compute-0 ceph-mon[75140]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:07 compute-0 ceph-mon[75140]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:10 compute-0 ceph-mon[75140]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:13 compute-0 ceph-mon[75140]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:13 compute-0 kernel: SELinux:  Converting 2775 SID table entries...
Jan 26 15:33:13 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 15:33:13 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 15:33:13 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 15:33:13 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 15:33:13 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 15:33:13 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 15:33:13 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 15:33:14 compute-0 ceph-mon[75140]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:16 compute-0 ceph-mon[75140]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:18 compute-0 ceph-mon[75140]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:19 compute-0 ceph-mon[75140]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:22 compute-0 ceph-mon[75140]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:23 compute-0 kernel: SELinux:  Converting 2775 SID table entries...
Jan 26 15:33:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 15:33:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 15:33:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 15:33:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 15:33:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 15:33:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 15:33:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 15:33:24 compute-0 ceph-mon[75140]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:24 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 26 15:33:24 compute-0 podman[164150]: 2026-01-26 15:33:24.438646179 +0000 UTC m=+0.119956585 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 15:33:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:26 compute-0 ceph-mon[75140]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:28 compute-0 ceph-mon[75140]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:28 compute-0 podman[164176]: 2026-01-26 15:33:28.379168044 +0000 UTC m=+0.069540782 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 26 15:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:33:28
Jan 26 15:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 26 15:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:33:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:33:30 compute-0 ceph-mon[75140]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:33:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:32 compute-0 ceph-mon[75140]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:34 compute-0 ceph-mon[75140]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:36 compute-0 ceph-mon[75140]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:38 compute-0 ceph-mon[75140]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:40 compute-0 ceph-mon[75140]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:41 compute-0 ceph-mon[75140]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:44 compute-0 ceph-mon[75140]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:46 compute-0 ceph-mon[75140]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:48 compute-0 ceph-mon[75140]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:33:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:33:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:50 compute-0 ceph-mon[75140]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:52 compute-0 ceph-mon[75140]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:54 compute-0 ceph-mon[75140]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:55 compute-0 podman[176201]: 2026-01-26 15:33:55.387838317 +0000 UTC m=+0.080580556 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:33:56 compute-0 ceph-mon[75140]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:33:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:58 compute-0 ceph-mon[75140]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:33:59.195 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:33:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:33:59.195 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:33:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:33:59.196 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:33:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:33:59 compute-0 podman[178578]: 2026-01-26 15:33:59.378270293 +0000 UTC m=+0.069892230 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 15:33:59 compute-0 ceph-mon[75140]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:34:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:02 compute-0 ceph-mon[75140]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:04 compute-0 ceph-mon[75140]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:04 compute-0 sudo[181091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:34:04 compute-0 sudo[181091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:34:04 compute-0 sudo[181091]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:04 compute-0 sudo[181116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:34:04 compute-0 sudo[181116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:34:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:05 compute-0 sudo[181116]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:34:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:34:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:34:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:34:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:34:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:34:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:34:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:34:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:34:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:34:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:34:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:34:05 compute-0 sudo[181174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:34:05 compute-0 sudo[181174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:34:05 compute-0 sudo[181174]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:05 compute-0 sudo[181199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:34:05 compute-0 sudo[181199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:34:05 compute-0 ceph-mon[75140]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:34:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:34:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:34:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:34:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:34:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:34:05 compute-0 podman[181236]: 2026-01-26 15:34:05.909800992 +0000 UTC m=+0.042361614 container create 37205b160b9664cd379e1bb677406f1081c9e0b57c6c87095c1e74174186cd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 15:34:05 compute-0 systemd[1]: Started libpod-conmon-37205b160b9664cd379e1bb677406f1081c9e0b57c6c87095c1e74174186cd88.scope.
Jan 26 15:34:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:34:05 compute-0 podman[181236]: 2026-01-26 15:34:05.88964086 +0000 UTC m=+0.022201482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:34:05 compute-0 podman[181236]: 2026-01-26 15:34:05.994838732 +0000 UTC m=+0.127399374 container init 37205b160b9664cd379e1bb677406f1081c9e0b57c6c87095c1e74174186cd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_euclid, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 15:34:06 compute-0 podman[181236]: 2026-01-26 15:34:06.003258653 +0000 UTC m=+0.135819255 container start 37205b160b9664cd379e1bb677406f1081c9e0b57c6c87095c1e74174186cd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_euclid, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:34:06 compute-0 podman[181236]: 2026-01-26 15:34:06.007558446 +0000 UTC m=+0.140119148 container attach 37205b160b9664cd379e1bb677406f1081c9e0b57c6c87095c1e74174186cd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_euclid, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:34:06 compute-0 nice_euclid[181254]: 167 167
Jan 26 15:34:06 compute-0 systemd[1]: libpod-37205b160b9664cd379e1bb677406f1081c9e0b57c6c87095c1e74174186cd88.scope: Deactivated successfully.
Jan 26 15:34:06 compute-0 podman[181236]: 2026-01-26 15:34:06.010191219 +0000 UTC m=+0.142751821 container died 37205b160b9664cd379e1bb677406f1081c9e0b57c6c87095c1e74174186cd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_euclid, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cbfd6364784260130a6b84daa72b508a3082d6c094eb4627d7f48b8ba7c5cc4-merged.mount: Deactivated successfully.
Jan 26 15:34:06 compute-0 podman[181236]: 2026-01-26 15:34:06.050318898 +0000 UTC m=+0.182879500 container remove 37205b160b9664cd379e1bb677406f1081c9e0b57c6c87095c1e74174186cd88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:34:06 compute-0 systemd[1]: libpod-conmon-37205b160b9664cd379e1bb677406f1081c9e0b57c6c87095c1e74174186cd88.scope: Deactivated successfully.
Jan 26 15:34:06 compute-0 podman[181278]: 2026-01-26 15:34:06.239364073 +0000 UTC m=+0.055455196 container create eccfeed418ea0217139b7998ce1b4ff3e1a3b6c5ccc84c787d32f1dc91c77de4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:34:06 compute-0 systemd[1]: Started libpod-conmon-eccfeed418ea0217139b7998ce1b4ff3e1a3b6c5ccc84c787d32f1dc91c77de4.scope.
Jan 26 15:34:06 compute-0 podman[181278]: 2026-01-26 15:34:06.211471246 +0000 UTC m=+0.027562399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:34:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b21d2d83169b84b43a64b38ce7e08f5f5e4cce287056e2dd2f0923bc9e18a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b21d2d83169b84b43a64b38ce7e08f5f5e4cce287056e2dd2f0923bc9e18a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b21d2d83169b84b43a64b38ce7e08f5f5e4cce287056e2dd2f0923bc9e18a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b21d2d83169b84b43a64b38ce7e08f5f5e4cce287056e2dd2f0923bc9e18a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3b21d2d83169b84b43a64b38ce7e08f5f5e4cce287056e2dd2f0923bc9e18a6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:06 compute-0 podman[181278]: 2026-01-26 15:34:06.329177338 +0000 UTC m=+0.145268491 container init eccfeed418ea0217139b7998ce1b4ff3e1a3b6c5ccc84c787d32f1dc91c77de4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:34:06 compute-0 podman[181278]: 2026-01-26 15:34:06.34182681 +0000 UTC m=+0.157917923 container start eccfeed418ea0217139b7998ce1b4ff3e1a3b6c5ccc84c787d32f1dc91c77de4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:34:06 compute-0 podman[181278]: 2026-01-26 15:34:06.356005028 +0000 UTC m=+0.172096181 container attach eccfeed418ea0217139b7998ce1b4ff3e1a3b6c5ccc84c787d32f1dc91c77de4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 15:34:06 compute-0 dazzling_goldwasser[181295]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:34:06 compute-0 dazzling_goldwasser[181295]: --> All data devices are unavailable
Jan 26 15:34:06 compute-0 podman[181278]: 2026-01-26 15:34:06.831651889 +0000 UTC m=+0.647743022 container died eccfeed418ea0217139b7998ce1b4ff3e1a3b6c5ccc84c787d32f1dc91c77de4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:34:06 compute-0 systemd[1]: libpod-eccfeed418ea0217139b7998ce1b4ff3e1a3b6c5ccc84c787d32f1dc91c77de4.scope: Deactivated successfully.
Jan 26 15:34:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b21d2d83169b84b43a64b38ce7e08f5f5e4cce287056e2dd2f0923bc9e18a6-merged.mount: Deactivated successfully.
Jan 26 15:34:06 compute-0 podman[181278]: 2026-01-26 15:34:06.878754414 +0000 UTC m=+0.694845527 container remove eccfeed418ea0217139b7998ce1b4ff3e1a3b6c5ccc84c787d32f1dc91c77de4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_goldwasser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 26 15:34:06 compute-0 systemd[1]: libpod-conmon-eccfeed418ea0217139b7998ce1b4ff3e1a3b6c5ccc84c787d32f1dc91c77de4.scope: Deactivated successfully.
Jan 26 15:34:06 compute-0 sudo[181199]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:06 compute-0 sudo[181335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:34:07 compute-0 sudo[181335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:34:07 compute-0 sudo[181335]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:07 compute-0 sudo[181360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:34:07 compute-0 sudo[181360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:34:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:07 compute-0 podman[181405]: 2026-01-26 15:34:07.427245994 +0000 UTC m=+0.087995012 container create e7feea7ba2731e6cc8abe9e0b29673ac2e727dfcf8e29d530cf27d5a5dbdd321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bardeen, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:34:07 compute-0 systemd[1]: Started libpod-conmon-e7feea7ba2731e6cc8abe9e0b29673ac2e727dfcf8e29d530cf27d5a5dbdd321.scope.
Jan 26 15:34:07 compute-0 podman[181405]: 2026-01-26 15:34:07.398837865 +0000 UTC m=+0.059586973 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:34:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:34:07 compute-0 podman[181405]: 2026-01-26 15:34:07.524666661 +0000 UTC m=+0.185415769 container init e7feea7ba2731e6cc8abe9e0b29673ac2e727dfcf8e29d530cf27d5a5dbdd321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bardeen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:34:07 compute-0 podman[181405]: 2026-01-26 15:34:07.533905732 +0000 UTC m=+0.194654750 container start e7feea7ba2731e6cc8abe9e0b29673ac2e727dfcf8e29d530cf27d5a5dbdd321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bardeen, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 15:34:07 compute-0 podman[181405]: 2026-01-26 15:34:07.538140063 +0000 UTC m=+0.198889141 container attach e7feea7ba2731e6cc8abe9e0b29673ac2e727dfcf8e29d530cf27d5a5dbdd321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:34:07 compute-0 admiring_bardeen[181421]: 167 167
Jan 26 15:34:07 compute-0 systemd[1]: libpod-e7feea7ba2731e6cc8abe9e0b29673ac2e727dfcf8e29d530cf27d5a5dbdd321.scope: Deactivated successfully.
Jan 26 15:34:07 compute-0 podman[181405]: 2026-01-26 15:34:07.543271565 +0000 UTC m=+0.204020613 container died e7feea7ba2731e6cc8abe9e0b29673ac2e727dfcf8e29d530cf27d5a5dbdd321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bardeen, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-62d3530cd34f04cacafb3ad999adb2010d9346f12dcc371ba7b2ff734a4fa682-merged.mount: Deactivated successfully.
Jan 26 15:34:07 compute-0 podman[181405]: 2026-01-26 15:34:07.589890529 +0000 UTC m=+0.250639557 container remove e7feea7ba2731e6cc8abe9e0b29673ac2e727dfcf8e29d530cf27d5a5dbdd321 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 15:34:07 compute-0 systemd[1]: libpod-conmon-e7feea7ba2731e6cc8abe9e0b29673ac2e727dfcf8e29d530cf27d5a5dbdd321.scope: Deactivated successfully.
Jan 26 15:34:07 compute-0 podman[181444]: 2026-01-26 15:34:07.825810173 +0000 UTC m=+0.077637125 container create 116e22ac7fe09eb20e3282b575053f56ed0fcbb6d8ab9b471e658ea01c68070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:34:07 compute-0 podman[181444]: 2026-01-26 15:34:07.790185693 +0000 UTC m=+0.042012695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:34:07 compute-0 systemd[1]: Started libpod-conmon-116e22ac7fe09eb20e3282b575053f56ed0fcbb6d8ab9b471e658ea01c68070a.scope.
Jan 26 15:34:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177b43d77c16e04874c38edbf80be197ab0a160269a20f490e4eed3c03a7a275/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177b43d77c16e04874c38edbf80be197ab0a160269a20f490e4eed3c03a7a275/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177b43d77c16e04874c38edbf80be197ab0a160269a20f490e4eed3c03a7a275/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/177b43d77c16e04874c38edbf80be197ab0a160269a20f490e4eed3c03a7a275/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:07 compute-0 podman[181444]: 2026-01-26 15:34:07.949105578 +0000 UTC m=+0.200932550 container init 116e22ac7fe09eb20e3282b575053f56ed0fcbb6d8ab9b471e658ea01c68070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:34:07 compute-0 podman[181444]: 2026-01-26 15:34:07.960737915 +0000 UTC m=+0.212564867 container start 116e22ac7fe09eb20e3282b575053f56ed0fcbb6d8ab9b471e658ea01c68070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 26 15:34:07 compute-0 podman[181444]: 2026-01-26 15:34:07.964801513 +0000 UTC m=+0.216628515 container attach 116e22ac7fe09eb20e3282b575053f56ed0fcbb6d8ab9b471e658ea01c68070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:34:08 compute-0 ceph-mon[75140]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]: {
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:     "0": [
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:         {
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "devices": [
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "/dev/loop3"
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             ],
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_name": "ceph_lv0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_size": "21470642176",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "name": "ceph_lv0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "tags": {
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.cluster_name": "ceph",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.crush_device_class": "",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.encrypted": "0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.objectstore": "bluestore",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.osd_id": "0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.type": "block",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.vdo": "0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.with_tpm": "0"
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             },
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "type": "block",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "vg_name": "ceph_vg0"
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:         }
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:     ],
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:     "1": [
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:         {
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "devices": [
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "/dev/loop4"
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             ],
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_name": "ceph_lv1",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_size": "21470642176",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "name": "ceph_lv1",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "tags": {
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.cluster_name": "ceph",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.crush_device_class": "",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.encrypted": "0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.objectstore": "bluestore",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.osd_id": "1",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.type": "block",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.vdo": "0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.with_tpm": "0"
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             },
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "type": "block",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "vg_name": "ceph_vg1"
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:         }
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:     ],
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:     "2": [
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:         {
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "devices": [
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "/dev/loop5"
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             ],
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_name": "ceph_lv2",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_size": "21470642176",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "name": "ceph_lv2",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "tags": {
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.cluster_name": "ceph",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.crush_device_class": "",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.encrypted": "0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.objectstore": "bluestore",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.osd_id": "2",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.type": "block",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.vdo": "0",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:                 "ceph.with_tpm": "0"
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             },
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "type": "block",
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:             "vg_name": "ceph_vg2"
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:         }
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]:     ]
Jan 26 15:34:08 compute-0 heuristic_margulis[181461]: }
Jan 26 15:34:08 compute-0 systemd[1]: libpod-116e22ac7fe09eb20e3282b575053f56ed0fcbb6d8ab9b471e658ea01c68070a.scope: Deactivated successfully.
Jan 26 15:34:08 compute-0 podman[181444]: 2026-01-26 15:34:08.299320362 +0000 UTC m=+0.551147274 container died 116e22ac7fe09eb20e3282b575053f56ed0fcbb6d8ab9b471e658ea01c68070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 15:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-177b43d77c16e04874c38edbf80be197ab0a160269a20f490e4eed3c03a7a275-merged.mount: Deactivated successfully.
Jan 26 15:34:08 compute-0 podman[181444]: 2026-01-26 15:34:08.345085756 +0000 UTC m=+0.596912668 container remove 116e22ac7fe09eb20e3282b575053f56ed0fcbb6d8ab9b471e658ea01c68070a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 15:34:08 compute-0 systemd[1]: libpod-conmon-116e22ac7fe09eb20e3282b575053f56ed0fcbb6d8ab9b471e658ea01c68070a.scope: Deactivated successfully.
Jan 26 15:34:08 compute-0 sudo[181360]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:08 compute-0 sudo[181483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:34:08 compute-0 sudo[181483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:34:08 compute-0 sudo[181483]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:08 compute-0 sudo[181508]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:34:08 compute-0 sudo[181508]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:34:08 compute-0 podman[181546]: 2026-01-26 15:34:08.799383826 +0000 UTC m=+0.034943046 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:34:09 compute-0 podman[181546]: 2026-01-26 15:34:09.140124394 +0000 UTC m=+0.375683564 container create cfa7dddd3306e6719e3a9f9939975228c61664e5e8fc942c9cc2b18837891e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wiles, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 26 15:34:09 compute-0 systemd[1]: Started libpod-conmon-cfa7dddd3306e6719e3a9f9939975228c61664e5e8fc942c9cc2b18837891e61.scope.
Jan 26 15:34:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:34:09 compute-0 podman[181546]: 2026-01-26 15:34:09.235497112 +0000 UTC m=+0.471056272 container init cfa7dddd3306e6719e3a9f9939975228c61664e5e8fc942c9cc2b18837891e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wiles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 15:34:09 compute-0 podman[181546]: 2026-01-26 15:34:09.245853509 +0000 UTC m=+0.481412649 container start cfa7dddd3306e6719e3a9f9939975228c61664e5e8fc942c9cc2b18837891e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 15:34:09 compute-0 podman[181546]: 2026-01-26 15:34:09.249113948 +0000 UTC m=+0.484673078 container attach cfa7dddd3306e6719e3a9f9939975228c61664e5e8fc942c9cc2b18837891e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 15:34:09 compute-0 jolly_wiles[181562]: 167 167
Jan 26 15:34:09 compute-0 systemd[1]: libpod-cfa7dddd3306e6719e3a9f9939975228c61664e5e8fc942c9cc2b18837891e61.scope: Deactivated successfully.
Jan 26 15:34:09 compute-0 podman[181546]: 2026-01-26 15:34:09.251220147 +0000 UTC m=+0.486779277 container died cfa7dddd3306e6719e3a9f9939975228c61664e5e8fc942c9cc2b18837891e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 15:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae9ed1e9bf6638dfc995ee3ed3792b2a39e6341c0c3624e58b35a49df2057eb6-merged.mount: Deactivated successfully.
Jan 26 15:34:10 compute-0 ceph-mon[75140]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:10 compute-0 podman[181546]: 2026-01-26 15:34:10.397639058 +0000 UTC m=+1.633198198 container remove cfa7dddd3306e6719e3a9f9939975228c61664e5e8fc942c9cc2b18837891e61 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wiles, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 15:34:10 compute-0 systemd[1]: libpod-conmon-cfa7dddd3306e6719e3a9f9939975228c61664e5e8fc942c9cc2b18837891e61.scope: Deactivated successfully.
Jan 26 15:34:10 compute-0 podman[181585]: 2026-01-26 15:34:10.667374791 +0000 UTC m=+0.110128221 container create 2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:34:10 compute-0 podman[181585]: 2026-01-26 15:34:10.611923246 +0000 UTC m=+0.054676676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:34:10 compute-0 systemd[1]: Started libpod-conmon-2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5.scope.
Jan 26 15:34:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0156fe920186862af41a9354bda89032b71d718568a2dacfc0fb3ee942843d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0156fe920186862af41a9354bda89032b71d718568a2dacfc0fb3ee942843d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0156fe920186862af41a9354bda89032b71d718568a2dacfc0fb3ee942843d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f0156fe920186862af41a9354bda89032b71d718568a2dacfc0fb3ee942843d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:34:10 compute-0 podman[181585]: 2026-01-26 15:34:10.790735837 +0000 UTC m=+0.233489317 container init 2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 15:34:10 compute-0 podman[181585]: 2026-01-26 15:34:10.797219322 +0000 UTC m=+0.239972732 container start 2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 15:34:10 compute-0 podman[181585]: 2026-01-26 15:34:10.80131443 +0000 UTC m=+0.244067840 container attach 2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 26 15:34:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:11 compute-0 lvm[181679]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:34:11 compute-0 lvm[181679]: VG ceph_vg0 finished
Jan 26 15:34:11 compute-0 lvm[181680]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:34:11 compute-0 lvm[181680]: VG ceph_vg1 finished
Jan 26 15:34:11 compute-0 lvm[181682]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:34:11 compute-0 lvm[181682]: VG ceph_vg2 finished
Jan 26 15:34:11 compute-0 adoring_jemison[181601]: {}
Jan 26 15:34:11 compute-0 systemd[1]: libpod-2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5.scope: Deactivated successfully.
Jan 26 15:34:11 compute-0 podman[181585]: 2026-01-26 15:34:11.6357895 +0000 UTC m=+1.078542930 container died 2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jemison, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:34:11 compute-0 systemd[1]: libpod-2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5.scope: Consumed 1.290s CPU time.
Jan 26 15:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f0156fe920186862af41a9354bda89032b71d718568a2dacfc0fb3ee942843d-merged.mount: Deactivated successfully.
Jan 26 15:34:11 compute-0 podman[181585]: 2026-01-26 15:34:11.684430902 +0000 UTC m=+1.127184292 container remove 2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 15:34:11 compute-0 systemd[1]: libpod-conmon-2e09369dcf4d0184b4534c2a416fd31c69489d06418c8c1ab9da87a9054fb1c5.scope: Deactivated successfully.
Jan 26 15:34:11 compute-0 sudo[181508]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:34:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:34:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:34:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:34:11 compute-0 sudo[181696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:34:11 compute-0 sudo[181696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:34:11 compute-0 sudo[181696]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:12 compute-0 ceph-mon[75140]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:34:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:34:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:14 compute-0 ceph-mon[75140]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:16 compute-0 kernel: SELinux:  Converting 2776 SID table entries...
Jan 26 15:34:16 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 26 15:34:16 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 26 15:34:16 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 26 15:34:16 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 26 15:34:16 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 26 15:34:16 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 26 15:34:16 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 26 15:34:16 compute-0 ceph-mon[75140]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:17 compute-0 groupadd[181733]: group added to /etc/group: name=dnsmasq, GID=992
Jan 26 15:34:17 compute-0 groupadd[181733]: group added to /etc/gshadow: name=dnsmasq
Jan 26 15:34:17 compute-0 groupadd[181733]: new group: name=dnsmasq, GID=992
Jan 26 15:34:17 compute-0 useradd[181740]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 26 15:34:17 compute-0 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Jan 26 15:34:17 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 26 15:34:17 compute-0 dbus-broker-launch[750]: Noticed file-system modification, trigger reload.
Jan 26 15:34:18 compute-0 ceph-mon[75140]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:18 compute-0 groupadd[181753]: group added to /etc/group: name=clevis, GID=991
Jan 26 15:34:18 compute-0 groupadd[181753]: group added to /etc/gshadow: name=clevis
Jan 26 15:34:18 compute-0 groupadd[181753]: new group: name=clevis, GID=991
Jan 26 15:34:18 compute-0 useradd[181760]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 26 15:34:18 compute-0 usermod[181770]: add 'clevis' to group 'tss'
Jan 26 15:34:18 compute-0 usermod[181770]: add 'clevis' to shadow group 'tss'
Jan 26 15:34:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:20 compute-0 ceph-mon[75140]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:21 compute-0 polkitd[43441]: Reloading rules
Jan 26 15:34:21 compute-0 polkitd[43441]: Collecting garbage unconditionally...
Jan 26 15:34:21 compute-0 polkitd[43441]: Loading rules from directory /etc/polkit-1/rules.d
Jan 26 15:34:21 compute-0 polkitd[43441]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 26 15:34:21 compute-0 polkitd[43441]: Finished loading, compiling and executing 3 rules
Jan 26 15:34:21 compute-0 polkitd[43441]: Reloading rules
Jan 26 15:34:21 compute-0 polkitd[43441]: Collecting garbage unconditionally...
Jan 26 15:34:21 compute-0 polkitd[43441]: Loading rules from directory /etc/polkit-1/rules.d
Jan 26 15:34:21 compute-0 polkitd[43441]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 26 15:34:21 compute-0 polkitd[43441]: Finished loading, compiling and executing 3 rules
Jan 26 15:34:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:22 compute-0 ceph-mon[75140]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:24 compute-0 ceph-mon[75140]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:26 compute-0 sshd[1007]: Received signal 15; terminating.
Jan 26 15:34:26 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 26 15:34:26 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 26 15:34:26 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 26 15:34:26 compute-0 systemd[1]: sshd.service: Consumed 3.486s CPU time, read 32.0K from disk, written 48.0K to disk.
Jan 26 15:34:26 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 26 15:34:26 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 26 15:34:26 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 15:34:26 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 15:34:26 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 26 15:34:26 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 26 15:34:26 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 26 15:34:26 compute-0 sshd[182589]: Server listening on 0.0.0.0 port 22.
Jan 26 15:34:26 compute-0 sshd[182589]: Server listening on :: port 22.
Jan 26 15:34:26 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 26 15:34:26 compute-0 podman[182576]: 2026-01-26 15:34:26.105922923 +0000 UTC m=+0.118439862 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 15:34:26 compute-0 ceph-mon[75140]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:27 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 15:34:27 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 15:34:27 compute-0 systemd[1]: Reloading.
Jan 26 15:34:28 compute-0 systemd-rc-local-generator[182861]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:28 compute-0 systemd-sysv-generator[182865]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:28 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 15:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:34:28
Jan 26 15:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images']
Jan 26 15:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:34:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:29 compute-0 ceph-mon[75140]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:34:30 compute-0 podman[185558]: 2026-01-26 15:34:30.375405327 +0000 UTC m=+0.065980735 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:34:30 compute-0 ceph-mon[75140]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:31 compute-0 sudo[163003]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:31 compute-0 sudo[187425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ieypxihfiutonlqxfvrbkdziibtcztvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441671.3200524-331-182340288778374/AnsiballZ_systemd.py'
Jan 26 15:34:31 compute-0 sudo[187425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:32 compute-0 python3.9[187451]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 15:34:32 compute-0 systemd[1]: Reloading.
Jan 26 15:34:32 compute-0 systemd-sysv-generator[187860]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:32 compute-0 systemd-rc-local-generator[187854]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:32 compute-0 ceph-mon[75140]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:32 compute-0 sudo[187425]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:33 compute-0 sudo[188667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahwiwvpyqzxxodtxryqnlvrvradtzhhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441672.8123922-331-227134294639351/AnsiballZ_systemd.py'
Jan 26 15:34:33 compute-0 sudo[188667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:33 compute-0 python3.9[188701]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 15:34:33 compute-0 systemd[1]: Reloading.
Jan 26 15:34:33 compute-0 systemd-sysv-generator[189138]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:33 compute-0 systemd-rc-local-generator[189135]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:33 compute-0 sudo[188667]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:34 compute-0 sudo[190007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwpddkclhzsvlyyeoeelxmzplgofufkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441673.9262996-331-116708396009080/AnsiballZ_systemd.py'
Jan 26 15:34:34 compute-0 sudo[190007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:34 compute-0 python3.9[190032]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 15:34:34 compute-0 systemd[1]: Reloading.
Jan 26 15:34:34 compute-0 systemd-rc-local-generator[190512]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:34 compute-0 systemd-sysv-generator[190515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:34 compute-0 ceph-mon[75140]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:34 compute-0 sudo[190007]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:35 compute-0 sudo[191265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxtiudnmhdgysvfzwucibdmhukqegznt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441675.0165749-331-248987841557883/AnsiballZ_systemd.py'
Jan 26 15:34:35 compute-0 sudo[191265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:35 compute-0 python3.9[191287]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 15:34:35 compute-0 systemd[1]: Reloading.
Jan 26 15:34:35 compute-0 systemd-sysv-generator[191778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:35 compute-0 systemd-rc-local-generator[191773]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:35 compute-0 sudo[191265]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:36 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 15:34:36 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 15:34:36 compute-0 systemd[1]: man-db-cache-update.service: Consumed 10.273s CPU time.
Jan 26 15:34:36 compute-0 systemd[1]: run-r01732292dc6b4cefbfef9b0476912966.service: Deactivated successfully.
Jan 26 15:34:36 compute-0 sudo[192172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-potbxmsibytlriyuegljjhudhdhoxivc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441676.1468358-360-63066571267212/AnsiballZ_systemd.py'
Jan 26 15:34:36 compute-0 sudo[192172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:36 compute-0 ceph-mon[75140]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:36 compute-0 python3.9[192174]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:36 compute-0 systemd[1]: Reloading.
Jan 26 15:34:36 compute-0 systemd-rc-local-generator[192200]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:36 compute-0 systemd-sysv-generator[192204]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:37 compute-0 sudo[192172]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:37 compute-0 sudo[192362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnghtgepkeswxhrrnmvkgiktftgxwrtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441677.2958536-360-5921433828967/AnsiballZ_systemd.py'
Jan 26 15:34:37 compute-0 sudo[192362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:37 compute-0 ceph-mon[75140]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:37 compute-0 python3.9[192364]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:37 compute-0 systemd[1]: Reloading.
Jan 26 15:34:38 compute-0 systemd-rc-local-generator[192394]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:38 compute-0 systemd-sysv-generator[192397]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:38 compute-0 sudo[192362]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:38 compute-0 sudo[192552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oncycrkafsjfxepaixojssmavyivnagc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441678.445455-360-78528835477659/AnsiballZ_systemd.py'
Jan 26 15:34:38 compute-0 sudo[192552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:39 compute-0 python3.9[192554]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:39 compute-0 systemd[1]: Reloading.
Jan 26 15:34:39 compute-0 systemd-rc-local-generator[192581]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:39 compute-0 systemd-sysv-generator[192585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:39 compute-0 sudo[192552]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:39 compute-0 sudo[192742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxebqevkepzrgvitkuqzfvweyvkknulz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441679.613531-360-107441648314787/AnsiballZ_systemd.py'
Jan 26 15:34:39 compute-0 sudo[192742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:40 compute-0 python3.9[192744]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:40 compute-0 ceph-mon[75140]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:40 compute-0 sudo[192742]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:40 compute-0 sudo[192897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpqxwqxlrpanwtkaosrnkdrrzigvijkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441680.422739-360-228436208851525/AnsiballZ_systemd.py'
Jan 26 15:34:40 compute-0 sudo[192897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:40 compute-0 python3.9[192899]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:41 compute-0 systemd[1]: Reloading.
Jan 26 15:34:41 compute-0 systemd-rc-local-generator[192926]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:41 compute-0 systemd-sysv-generator[192930]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:41 compute-0 sudo[192897]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:42 compute-0 ceph-mon[75140]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:42 compute-0 sudo[193087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkbdokzjppdhqaedqyceujxmvoxcuuhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441682.0340068-396-239787423771277/AnsiballZ_systemd.py'
Jan 26 15:34:42 compute-0 sudo[193087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:42 compute-0 python3.9[193089]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 26 15:34:42 compute-0 systemd[1]: Reloading.
Jan 26 15:34:42 compute-0 systemd-rc-local-generator[193118]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:34:42 compute-0 systemd-sysv-generator[193123]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:34:42 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 26 15:34:43 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 26 15:34:43 compute-0 sudo[193087]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:43 compute-0 sudo[193280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogrbvvjxzpyscjehznqlzyppqnxwfwwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441683.2502837-404-111852634345294/AnsiballZ_systemd.py'
Jan 26 15:34:43 compute-0 sudo[193280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:43 compute-0 python3.9[193282]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:43 compute-0 sudo[193280]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:44 compute-0 ceph-mon[75140]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:44 compute-0 sudo[193435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-inslzioayimauxmoztarhobirklgmggt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441684.1052063-404-39877197354508/AnsiballZ_systemd.py'
Jan 26 15:34:44 compute-0 sudo[193435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:44 compute-0 python3.9[193437]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:44 compute-0 sudo[193435]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:45 compute-0 sudo[193590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaromsartjfgrpjrjtikyinigqltxlfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441684.970971-404-171150263030551/AnsiballZ_systemd.py'
Jan 26 15:34:45 compute-0 sudo[193590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:45 compute-0 python3.9[193592]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:45 compute-0 sudo[193590]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:46 compute-0 sudo[193745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irejqrnxpuootpzpktytltmctgdhzueb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441685.804522-404-139193067955858/AnsiballZ_systemd.py'
Jan 26 15:34:46 compute-0 sudo[193745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:46 compute-0 ceph-mon[75140]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:46 compute-0 python3.9[193747]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:46 compute-0 sudo[193745]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:46 compute-0 sudo[193900]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdmnrpbgqinomqekezdiwxqnncxdnxup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441686.6105134-404-245318294478897/AnsiballZ_systemd.py'
Jan 26 15:34:46 compute-0 sudo[193900]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:47 compute-0 python3.9[193902]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:47 compute-0 sudo[193900]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:47 compute-0 sudo[194055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxyglkktvzuztzuofkgeuwozkcqfipcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441687.4785674-404-115334706202414/AnsiballZ_systemd.py'
Jan 26 15:34:47 compute-0 sudo[194055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:48 compute-0 python3.9[194057]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:48 compute-0 sudo[194055]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:48 compute-0 ceph-mon[75140]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:34:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:34:48 compute-0 sudo[194210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdcboldagelbxleokymlqlatmfacfxhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441688.4257793-404-117867357172243/AnsiballZ_systemd.py'
Jan 26 15:34:48 compute-0 sudo[194210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:49 compute-0 python3.9[194212]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:49 compute-0 sudo[194210]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:49 compute-0 sudo[194365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhkpjgzgyzcvqhearkwetvbiluxoeiba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441689.3148289-404-196071999331692/AnsiballZ_systemd.py'
Jan 26 15:34:49 compute-0 sudo[194365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:49 compute-0 python3.9[194367]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:50 compute-0 ceph-mon[75140]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:51 compute-0 sudo[194365]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:51 compute-0 sudo[194520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxwciazqjudxlrohvkworirmqpyclaay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441691.1778758-404-250904950996729/AnsiballZ_systemd.py'
Jan 26 15:34:51 compute-0 sudo[194520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:51 compute-0 python3.9[194522]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:52 compute-0 ceph-mon[75140]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:52 compute-0 sudo[194520]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:53 compute-0 sudo[194675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpkjquyopsvpstjdypzkqeklkneacmkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441693.1110578-404-60810864904666/AnsiballZ_systemd.py'
Jan 26 15:34:53 compute-0 sudo[194675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:53 compute-0 python3.9[194677]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:53 compute-0 sudo[194675]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:54 compute-0 ceph-mon[75140]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:54 compute-0 sudo[194830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdsboqwsvlxzndialcomfnsgodwpuown ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441694.028434-404-186354559128943/AnsiballZ_systemd.py'
Jan 26 15:34:54 compute-0 sudo[194830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:54 compute-0 python3.9[194832]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:54 compute-0 sudo[194830]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:55 compute-0 sudo[194986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjpcxfcnydklunuxnbzacvgzcieecbkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441694.9803233-404-213853835505283/AnsiballZ_systemd.py'
Jan 26 15:34:55 compute-0 sudo[194986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:55 compute-0 python3.9[194988]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:55 compute-0 sudo[194986]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:56 compute-0 sudo[195141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnjntgpqjtzrrhhbrqocdksusizccbgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441695.8067806-404-200675483592917/AnsiballZ_systemd.py'
Jan 26 15:34:56 compute-0 sudo[195141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:56 compute-0 ceph-mon[75140]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.360207) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441696360249, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3580705, "memory_usage": 3631632, "flush_reason": "Manual Compaction"}
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441696377939, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3504505, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9899, "largest_seqno": 11936, "table_properties": {"data_size": 3495203, "index_size": 5924, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17751, "raw_average_key_size": 19, "raw_value_size": 3476826, "raw_average_value_size": 3808, "num_data_blocks": 269, "num_entries": 913, "num_filter_entries": 913, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769441461, "oldest_key_time": 1769441461, "file_creation_time": 1769441696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17961 microseconds, and 6493 cpu microseconds.
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.378164) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3504505 bytes OK
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.378245) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.380418) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.380441) EVENT_LOG_v1 {"time_micros": 1769441696380435, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.380466) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3572212, prev total WAL file size 3572212, number of live WAL files 2.
Jan 26 15:34:56 compute-0 python3.9[195143]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.382603) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3422KB)], [26(6683KB)]
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441696382677, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10348371, "oldest_snapshot_seqno": -1}
Jan 26 15:34:56 compute-0 podman[195144]: 2026-01-26 15:34:56.429435042 +0000 UTC m=+0.107542188 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3804 keys, 8575269 bytes, temperature: kUnknown
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441696443572, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8575269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8545397, "index_size": 19266, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 91395, "raw_average_key_size": 24, "raw_value_size": 8472350, "raw_average_value_size": 2227, "num_data_blocks": 833, "num_entries": 3804, "num_filter_entries": 3804, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769441696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.443991) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8575269 bytes
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.445749) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.6 rd, 140.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 6.5 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(5.4) write-amplify(2.4) OK, records in: 4318, records dropped: 514 output_compression: NoCompression
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.445777) EVENT_LOG_v1 {"time_micros": 1769441696445761, "job": 10, "event": "compaction_finished", "compaction_time_micros": 61018, "compaction_time_cpu_micros": 18024, "output_level": 6, "num_output_files": 1, "total_output_size": 8575269, "num_input_records": 4318, "num_output_records": 3804, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441696446812, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441696448671, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.382497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.448764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.448772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.448775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.448777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:34:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:34:56.448779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:34:56 compute-0 sudo[195141]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:56 compute-0 sudo[195322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmirifkhzbgiolxmxowdydzpfljhtrxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441696.63246-404-233938476359732/AnsiballZ_systemd.py'
Jan 26 15:34:56 compute-0 sudo[195322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:34:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:57 compute-0 python3.9[195324]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 26 15:34:57 compute-0 sudo[195322]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:57 compute-0 sudo[195477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgnxriquchxhzpvbulusqocfjsosuhqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441697.6448004-506-66885272500642/AnsiballZ_file.py'
Jan 26 15:34:57 compute-0 sudo[195477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:58 compute-0 python3.9[195479]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:34:58 compute-0 sudo[195477]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:58 compute-0 ceph-mon[75140]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:58 compute-0 sudo[195629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnhtqysakzdrykujloaoobcqngdxpwqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441698.3009906-506-159682599817456/AnsiballZ_file.py'
Jan 26 15:34:58 compute-0 sudo[195629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:58 compute-0 python3.9[195631]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:34:58 compute-0 sudo[195629]: pam_unix(sudo:session): session closed for user root
Jan 26 15:34:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:34:59.196 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:34:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:34:59.196 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:34:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:34:59.196 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:34:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:34:59 compute-0 sudo[195781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlneoqfrdoksxvefcacxldgyncthktoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441698.9665387-506-271278037501691/AnsiballZ_file.py'
Jan 26 15:34:59 compute-0 sudo[195781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:34:59 compute-0 python3.9[195783]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:34:59 compute-0 sudo[195781]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:00 compute-0 sudo[195933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhevmnpkzhhzffmtisrcmbzvtabslezb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441699.643188-506-36636783637498/AnsiballZ_file.py'
Jan 26 15:35:00 compute-0 sudo[195933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:00 compute-0 python3.9[195935]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:35:00 compute-0 sudo[195933]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:35:00 compute-0 ceph-mon[75140]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:00 compute-0 sudo[196101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajklgwnvkslijkwisuwwgginjykjshwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441700.431122-506-246759929618475/AnsiballZ_file.py'
Jan 26 15:35:00 compute-0 sudo[196101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:00 compute-0 podman[196059]: 2026-01-26 15:35:00.745943843 +0000 UTC m=+0.055561803 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 15:35:00 compute-0 python3.9[196106]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:35:00 compute-0 sudo[196101]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:01 compute-0 sudo[196256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-numnudxscrjfbnauspyvspioneohpvwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441701.0839777-506-28721595806128/AnsiballZ_file.py'
Jan 26 15:35:01 compute-0 sudo[196256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:01 compute-0 python3.9[196258]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:35:01 compute-0 sudo[196256]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:02 compute-0 python3.9[196408]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 15:35:02 compute-0 ceph-mon[75140]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:03 compute-0 sudo[196558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbgosmblmvngjuaqontudwxvgtocqmbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441702.5989609-557-267078974325297/AnsiballZ_stat.py'
Jan 26 15:35:03 compute-0 sudo[196558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:03 compute-0 python3.9[196560]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:03 compute-0 sudo[196558]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:03 compute-0 sudo[196683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jekvxobhhxjapfmxdndkghlgkempscvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441702.5989609-557-267078974325297/AnsiballZ_copy.py'
Jan 26 15:35:03 compute-0 sudo[196683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:04 compute-0 python3.9[196685]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769441702.5989609-557-267078974325297/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:04 compute-0 sudo[196683]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:04 compute-0 ceph-mon[75140]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:04 compute-0 sudo[196835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khgfqczvmaprpfgeolmajhahchweivka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441704.1995862-557-119988947278691/AnsiballZ_stat.py'
Jan 26 15:35:04 compute-0 sudo[196835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:04 compute-0 python3.9[196837]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:04 compute-0 sudo[196835]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:05 compute-0 sudo[196960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywzxznzvbpemlahmsuyzpmbbmvfhltlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441704.1995862-557-119988947278691/AnsiballZ_copy.py'
Jan 26 15:35:05 compute-0 sudo[196960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:05 compute-0 python3.9[196962]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769441704.1995862-557-119988947278691/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:05 compute-0 sudo[196960]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:05 compute-0 sudo[197112]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrvrvkwapuxxldqfglctsntnfsjuhbkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441705.576115-557-134548327249256/AnsiballZ_stat.py'
Jan 26 15:35:05 compute-0 sudo[197112]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:06 compute-0 python3.9[197114]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:06 compute-0 sudo[197112]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:06 compute-0 ceph-mon[75140]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:06 compute-0 sudo[197237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wntdpticrhktahipdiwmvfxcfcnttsui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441705.576115-557-134548327249256/AnsiballZ_copy.py'
Jan 26 15:35:06 compute-0 sudo[197237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:06 compute-0 python3.9[197239]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769441705.576115-557-134548327249256/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:06 compute-0 sudo[197237]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:07 compute-0 sudo[197389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uldbuvukvjesudokuywgbwcxugkjicww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441706.899794-557-146248427100955/AnsiballZ_stat.py'
Jan 26 15:35:07 compute-0 sudo[197389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:07 compute-0 python3.9[197391]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:07 compute-0 sudo[197389]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:07 compute-0 sudo[197514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceyjrcnhyneabbfkrqgxfoqpbnrxsklo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441706.899794-557-146248427100955/AnsiballZ_copy.py'
Jan 26 15:35:07 compute-0 sudo[197514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:07 compute-0 python3.9[197516]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769441706.899794-557-146248427100955/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:07 compute-0 sudo[197514]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:08 compute-0 sudo[197666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnmxpxzbkygpaxeonublntnykgszvtgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441708.1120143-557-96799263788199/AnsiballZ_stat.py'
Jan 26 15:35:08 compute-0 sudo[197666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:08 compute-0 ceph-mon[75140]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:08 compute-0 python3.9[197668]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:08 compute-0 sudo[197666]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:09 compute-0 sudo[197791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrlszytaekjqtsjztresnotwoywnjkvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441708.1120143-557-96799263788199/AnsiballZ_copy.py'
Jan 26 15:35:09 compute-0 sudo[197791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:09 compute-0 python3.9[197793]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769441708.1120143-557-96799263788199/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:09 compute-0 sudo[197791]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:09 compute-0 sudo[197943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkeefcwocpyxazzmexqizwipryhbfecn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441709.4094558-557-207079116198225/AnsiballZ_stat.py'
Jan 26 15:35:09 compute-0 sudo[197943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:09 compute-0 python3.9[197945]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:09 compute-0 sudo[197943]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:10 compute-0 sudo[198068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmkcbgoqolsxoqbzorlgykrsmiibkabh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441709.4094558-557-207079116198225/AnsiballZ_copy.py'
Jan 26 15:35:10 compute-0 sudo[198068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:10 compute-0 python3.9[198070]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769441709.4094558-557-207079116198225/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:10 compute-0 sudo[198068]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:10 compute-0 ceph-mon[75140]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:10 compute-0 sudo[198220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdnfzwncycbxemaischjadacknmbplsy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441710.6105695-557-8046272173781/AnsiballZ_stat.py'
Jan 26 15:35:10 compute-0 sudo[198220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:11 compute-0 python3.9[198222]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:11 compute-0 sudo[198220]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:11 compute-0 sudo[198343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dydshfvnicedjpkugskaaqjipmsrnqzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441710.6105695-557-8046272173781/AnsiballZ_copy.py'
Jan 26 15:35:11 compute-0 sudo[198343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:11 compute-0 python3.9[198345]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769441710.6105695-557-8046272173781/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:11 compute-0 sudo[198343]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:11 compute-0 sudo[198367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:35:11 compute-0 sudo[198367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:35:11 compute-0 sudo[198367]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:11 compute-0 sudo[198399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:35:11 compute-0 sudo[198399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:35:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:12 compute-0 sudo[198559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftindvprawrlythdpnfuwpelbrzlgddd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441711.911583-557-132935948265179/AnsiballZ_stat.py'
Jan 26 15:35:12 compute-0 sudo[198559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:12 compute-0 python3.9[198561]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:12 compute-0 sudo[198559]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:12 compute-0 sudo[198399]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:12 compute-0 ceph-mon[75140]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:35:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:35:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:35:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:35:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:35:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:35:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:35:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:35:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:35:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:35:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:35:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:35:12 compute-0 sudo[198628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:35:12 compute-0 sudo[198628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:35:12 compute-0 sudo[198628]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:12 compute-0 sudo[198676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:35:12 compute-0 sudo[198676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:35:12 compute-0 sudo[198751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbpryabahuywhwikdpyhonkpmtdhucpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441711.911583-557-132935948265179/AnsiballZ_copy.py'
Jan 26 15:35:12 compute-0 sudo[198751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:12 compute-0 python3.9[198753]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1769441711.911583-557-132935948265179/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:12 compute-0 sudo[198751]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:12 compute-0 podman[198766]: 2026-01-26 15:35:12.89438145 +0000 UTC m=+0.021592672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:35:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:13 compute-0 sudo[198929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqtlafalmjknvevqgjdgrhhjabpkfhvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441713.1146345-670-160211963354805/AnsiballZ_command.py'
Jan 26 15:35:13 compute-0 sudo[198929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:13 compute-0 python3.9[198931]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 26 15:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:35:13 compute-0 podman[198766]: 2026-01-26 15:35:13.90739602 +0000 UTC m=+1.034607262 container create 202ba840db3401484639aa99d5251195496f68f91e2d3b18648902231c49847f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_swanson, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:35:13 compute-0 sudo[198929]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:13 compute-0 systemd[1]: Started libpod-conmon-202ba840db3401484639aa99d5251195496f68f91e2d3b18648902231c49847f.scope.
Jan 26 15:35:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:35:14 compute-0 podman[198766]: 2026-01-26 15:35:14.08331083 +0000 UTC m=+1.210522052 container init 202ba840db3401484639aa99d5251195496f68f91e2d3b18648902231c49847f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:35:14 compute-0 podman[198766]: 2026-01-26 15:35:14.094642814 +0000 UTC m=+1.221854006 container start 202ba840db3401484639aa99d5251195496f68f91e2d3b18648902231c49847f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_swanson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 15:35:14 compute-0 ecstatic_swanson[198936]: 167 167
Jan 26 15:35:14 compute-0 systemd[1]: libpod-202ba840db3401484639aa99d5251195496f68f91e2d3b18648902231c49847f.scope: Deactivated successfully.
Jan 26 15:35:14 compute-0 podman[198766]: 2026-01-26 15:35:14.101658153 +0000 UTC m=+1.228869385 container attach 202ba840db3401484639aa99d5251195496f68f91e2d3b18648902231c49847f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_swanson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:35:14 compute-0 podman[198766]: 2026-01-26 15:35:14.103377644 +0000 UTC m=+1.230588886 container died 202ba840db3401484639aa99d5251195496f68f91e2d3b18648902231c49847f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 15:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f21e35700b5cdfefeb151245c0d2f50b599551b148939ec540c4c628f951aab-merged.mount: Deactivated successfully.
Jan 26 15:35:14 compute-0 podman[198766]: 2026-01-26 15:35:14.186654587 +0000 UTC m=+1.313865829 container remove 202ba840db3401484639aa99d5251195496f68f91e2d3b18648902231c49847f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_swanson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 15:35:14 compute-0 systemd[1]: libpod-conmon-202ba840db3401484639aa99d5251195496f68f91e2d3b18648902231c49847f.scope: Deactivated successfully.
Jan 26 15:35:14 compute-0 sudo[199124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqtcppfacszcqksdwglqzzbsimsazhkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441714.1347842-679-232114794832730/AnsiballZ_file.py'
Jan 26 15:35:14 compute-0 sudo[199124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:14 compute-0 podman[199065]: 2026-01-26 15:35:14.335628595 +0000 UTC m=+0.029293729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:35:14 compute-0 podman[199065]: 2026-01-26 15:35:14.444136906 +0000 UTC m=+0.137802050 container create 7178ec7d3d11df03c2f472171db395a249195e3c06489ae76579940f2b5a30c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:35:14 compute-0 python3.9[199126]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:14 compute-0 systemd[1]: Started libpod-conmon-7178ec7d3d11df03c2f472171db395a249195e3c06489ae76579940f2b5a30c4.scope.
Jan 26 15:35:14 compute-0 sudo[199124]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e252fd7fa72f219bdd1cd108b8b62bdce131ab68d1def588a8cf5990559eec69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e252fd7fa72f219bdd1cd108b8b62bdce131ab68d1def588a8cf5990559eec69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e252fd7fa72f219bdd1cd108b8b62bdce131ab68d1def588a8cf5990559eec69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e252fd7fa72f219bdd1cd108b8b62bdce131ab68d1def588a8cf5990559eec69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e252fd7fa72f219bdd1cd108b8b62bdce131ab68d1def588a8cf5990559eec69/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:14 compute-0 podman[199065]: 2026-01-26 15:35:14.697188889 +0000 UTC m=+0.390854023 container init 7178ec7d3d11df03c2f472171db395a249195e3c06489ae76579940f2b5a30c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 15:35:14 compute-0 podman[199065]: 2026-01-26 15:35:14.705858129 +0000 UTC m=+0.399523253 container start 7178ec7d3d11df03c2f472171db395a249195e3c06489ae76579940f2b5a30c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bell, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:35:14 compute-0 podman[199065]: 2026-01-26 15:35:14.713277767 +0000 UTC m=+0.406942901 container attach 7178ec7d3d11df03c2f472171db395a249195e3c06489ae76579940f2b5a30c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bell, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 15:35:14 compute-0 ceph-mon[75140]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:15 compute-0 sudo[199290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpmaeynaflwfzejknbwsjppeqqsetzwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441714.7567935-679-111940073720355/AnsiballZ_file.py'
Jan 26 15:35:15 compute-0 sudo[199290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:15 compute-0 ecstatic_bell[199130]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:35:15 compute-0 ecstatic_bell[199130]: --> All data devices are unavailable
Jan 26 15:35:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:15 compute-0 python3.9[199293]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:15 compute-0 sudo[199290]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:15 compute-0 systemd[1]: libpod-7178ec7d3d11df03c2f472171db395a249195e3c06489ae76579940f2b5a30c4.scope: Deactivated successfully.
Jan 26 15:35:15 compute-0 podman[199065]: 2026-01-26 15:35:15.289150468 +0000 UTC m=+0.982815602 container died 7178ec7d3d11df03c2f472171db395a249195e3c06489ae76579940f2b5a30c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 15:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-e252fd7fa72f219bdd1cd108b8b62bdce131ab68d1def588a8cf5990559eec69-merged.mount: Deactivated successfully.
Jan 26 15:35:15 compute-0 podman[199065]: 2026-01-26 15:35:15.561282262 +0000 UTC m=+1.254947376 container remove 7178ec7d3d11df03c2f472171db395a249195e3c06489ae76579940f2b5a30c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_bell, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:35:15 compute-0 sudo[198676]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:15 compute-0 systemd[1]: libpod-conmon-7178ec7d3d11df03c2f472171db395a249195e3c06489ae76579940f2b5a30c4.scope: Deactivated successfully.
Jan 26 15:35:15 compute-0 sudo[199436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:35:15 compute-0 sudo[199436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:35:15 compute-0 sudo[199436]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:15 compute-0 sudo[199487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bedjqyarheygobrxxljqhlibbqixnmsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441715.4259822-679-150923399321061/AnsiballZ_file.py'
Jan 26 15:35:15 compute-0 sudo[199487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:15 compute-0 sudo[199488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:35:15 compute-0 sudo[199488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:35:15 compute-0 python3.9[199493]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:15 compute-0 sudo[199487]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:15 compute-0 ceph-mon[75140]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:16 compute-0 podman[199527]: 2026-01-26 15:35:16.084923112 +0000 UTC m=+0.082611997 container create 080b25f3285d6ddcc179a55808178289813eadc46caafe902de9e851f76cd145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haibt, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:35:16 compute-0 podman[199527]: 2026-01-26 15:35:16.030077176 +0000 UTC m=+0.027766051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:35:16 compute-0 systemd[1]: Started libpod-conmon-080b25f3285d6ddcc179a55808178289813eadc46caafe902de9e851f76cd145.scope.
Jan 26 15:35:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:35:16 compute-0 podman[199527]: 2026-01-26 15:35:16.304327601 +0000 UTC m=+0.302016466 container init 080b25f3285d6ddcc179a55808178289813eadc46caafe902de9e851f76cd145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:35:16 compute-0 podman[199527]: 2026-01-26 15:35:16.312427707 +0000 UTC m=+0.310116542 container start 080b25f3285d6ddcc179a55808178289813eadc46caafe902de9e851f76cd145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haibt, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 15:35:16 compute-0 vigorous_haibt[199590]: 167 167
Jan 26 15:35:16 compute-0 systemd[1]: libpod-080b25f3285d6ddcc179a55808178289813eadc46caafe902de9e851f76cd145.scope: Deactivated successfully.
Jan 26 15:35:16 compute-0 sudo[199706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cridemmronltcbynsissorzlnciilfey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441716.1372216-679-27695975226695/AnsiballZ_file.py'
Jan 26 15:35:16 compute-0 sudo[199706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:16 compute-0 podman[199527]: 2026-01-26 15:35:16.517945511 +0000 UTC m=+0.515634456 container attach 080b25f3285d6ddcc179a55808178289813eadc46caafe902de9e851f76cd145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:35:16 compute-0 podman[199527]: 2026-01-26 15:35:16.518840253 +0000 UTC m=+0.516529128 container died 080b25f3285d6ddcc179a55808178289813eadc46caafe902de9e851f76cd145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haibt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:35:16 compute-0 python3.9[199708]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:16 compute-0 sudo[199706]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-8122311c103ab4e2f2d88330a005213ed9f9b94bc0417ba14f32348b5e65b2d9-merged.mount: Deactivated successfully.
Jan 26 15:35:16 compute-0 podman[199527]: 2026-01-26 15:35:16.772293016 +0000 UTC m=+0.769981901 container remove 080b25f3285d6ddcc179a55808178289813eadc46caafe902de9e851f76cd145 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:35:16 compute-0 systemd[1]: libpod-conmon-080b25f3285d6ddcc179a55808178289813eadc46caafe902de9e851f76cd145.scope: Deactivated successfully.
Jan 26 15:35:16 compute-0 podman[199816]: 2026-01-26 15:35:16.987043203 +0000 UTC m=+0.067912382 container create f95318691c24a16b219edbe3dd1c2869f01f502d463dd925ae56c75d1940ecbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 15:35:17 compute-0 podman[199816]: 2026-01-26 15:35:16.942418235 +0000 UTC m=+0.023287424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:35:17 compute-0 systemd[1]: Started libpod-conmon-f95318691c24a16b219edbe3dd1c2869f01f502d463dd925ae56c75d1940ecbe.scope.
Jan 26 15:35:17 compute-0 sudo[199881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnewokrlfgyykoaekwpkbrzcgjgjwqel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441716.7505593-679-25154788944089/AnsiballZ_file.py'
Jan 26 15:35:17 compute-0 sudo[199881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bdf44bf8b26c496857b93b0fd58d81ace497c0535e0c207a351fd47fad3260/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bdf44bf8b26c496857b93b0fd58d81ace497c0535e0c207a351fd47fad3260/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bdf44bf8b26c496857b93b0fd58d81ace497c0535e0c207a351fd47fad3260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44bdf44bf8b26c496857b93b0fd58d81ace497c0535e0c207a351fd47fad3260/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:17 compute-0 podman[199816]: 2026-01-26 15:35:17.11363173 +0000 UTC m=+0.194500939 container init f95318691c24a16b219edbe3dd1c2869f01f502d463dd925ae56c75d1940ecbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:35:17 compute-0 podman[199816]: 2026-01-26 15:35:17.121595673 +0000 UTC m=+0.202464872 container start f95318691c24a16b219edbe3dd1c2869f01f502d463dd925ae56c75d1940ecbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:35:17 compute-0 podman[199816]: 2026-01-26 15:35:17.145320296 +0000 UTC m=+0.226189495 container attach f95318691c24a16b219edbe3dd1c2869f01f502d463dd925ae56c75d1940ecbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 15:35:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:17 compute-0 python3.9[199887]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:17 compute-0 sudo[199881]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]: {
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:     "0": [
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:         {
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "devices": [
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "/dev/loop3"
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             ],
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_name": "ceph_lv0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_size": "21470642176",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "name": "ceph_lv0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "tags": {
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.cluster_name": "ceph",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.crush_device_class": "",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.encrypted": "0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.objectstore": "bluestore",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.osd_id": "0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.type": "block",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.vdo": "0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.with_tpm": "0"
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             },
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "type": "block",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "vg_name": "ceph_vg0"
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:         }
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:     ],
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:     "1": [
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:         {
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "devices": [
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "/dev/loop4"
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             ],
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_name": "ceph_lv1",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_size": "21470642176",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "name": "ceph_lv1",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "tags": {
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.cluster_name": "ceph",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.crush_device_class": "",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.encrypted": "0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.objectstore": "bluestore",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.osd_id": "1",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.type": "block",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.vdo": "0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.with_tpm": "0"
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             },
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "type": "block",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "vg_name": "ceph_vg1"
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:         }
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:     ],
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:     "2": [
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:         {
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "devices": [
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "/dev/loop5"
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             ],
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_name": "ceph_lv2",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_size": "21470642176",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "name": "ceph_lv2",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "tags": {
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.cluster_name": "ceph",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.crush_device_class": "",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.encrypted": "0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.objectstore": "bluestore",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.osd_id": "2",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.type": "block",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.vdo": "0",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:                 "ceph.with_tpm": "0"
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             },
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "type": "block",
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:             "vg_name": "ceph_vg2"
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:         }
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]:     ]
Jan 26 15:35:17 compute-0 vigorous_galileo[199885]: }
Jan 26 15:35:17 compute-0 systemd[1]: libpod-f95318691c24a16b219edbe3dd1c2869f01f502d463dd925ae56c75d1940ecbe.scope: Deactivated successfully.
Jan 26 15:35:17 compute-0 podman[199816]: 2026-01-26 15:35:17.445138768 +0000 UTC m=+0.526007957 container died f95318691c24a16b219edbe3dd1c2869f01f502d463dd925ae56c75d1940ecbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-44bdf44bf8b26c496857b93b0fd58d81ace497c0535e0c207a351fd47fad3260-merged.mount: Deactivated successfully.
Jan 26 15:35:17 compute-0 podman[199816]: 2026-01-26 15:35:17.679192532 +0000 UTC m=+0.760061741 container remove f95318691c24a16b219edbe3dd1c2869f01f502d463dd925ae56c75d1940ecbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 15:35:17 compute-0 sudo[200056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbhhicnnsvgcejmxcqsycuzcqvtlfwds ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441717.4377034-679-117539682740932/AnsiballZ_file.py'
Jan 26 15:35:17 compute-0 sudo[200056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:17 compute-0 systemd[1]: libpod-conmon-f95318691c24a16b219edbe3dd1c2869f01f502d463dd925ae56c75d1940ecbe.scope: Deactivated successfully.
Jan 26 15:35:17 compute-0 sudo[199488]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:17 compute-0 sudo[200061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:35:17 compute-0 sudo[200061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:35:17 compute-0 sudo[200061]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:17 compute-0 sudo[200086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:35:17 compute-0 sudo[200086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:35:17 compute-0 python3.9[200060]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:17 compute-0 sudo[200056]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:18 compute-0 podman[200170]: 2026-01-26 15:35:18.139672176 +0000 UTC m=+0.045878970 container create 427c44b82df9b1c745c7434db94108743693b1a55d7f5e8ede57e70bf2fc1a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:35:18 compute-0 systemd[1]: Started libpod-conmon-427c44b82df9b1c745c7434db94108743693b1a55d7f5e8ede57e70bf2fc1a4d.scope.
Jan 26 15:35:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:35:18 compute-0 podman[200170]: 2026-01-26 15:35:18.117163262 +0000 UTC m=+0.023370106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:35:18 compute-0 podman[200170]: 2026-01-26 15:35:18.22180008 +0000 UTC m=+0.128006894 container init 427c44b82df9b1c745c7434db94108743693b1a55d7f5e8ede57e70bf2fc1a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:35:18 compute-0 podman[200170]: 2026-01-26 15:35:18.232591881 +0000 UTC m=+0.138798665 container start 427c44b82df9b1c745c7434db94108743693b1a55d7f5e8ede57e70bf2fc1a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:35:18 compute-0 podman[200170]: 2026-01-26 15:35:18.235240684 +0000 UTC m=+0.141447468 container attach 427c44b82df9b1c745c7434db94108743693b1a55d7f5e8ede57e70bf2fc1a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 15:35:18 compute-0 vigorous_wu[200223]: 167 167
Jan 26 15:35:18 compute-0 systemd[1]: libpod-427c44b82df9b1c745c7434db94108743693b1a55d7f5e8ede57e70bf2fc1a4d.scope: Deactivated successfully.
Jan 26 15:35:18 compute-0 podman[200170]: 2026-01-26 15:35:18.24042582 +0000 UTC m=+0.146632604 container died 427c44b82df9b1c745c7434db94108743693b1a55d7f5e8ede57e70bf2fc1a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 15:35:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-c25b3d62e98d53c74ec90257fc6415005be48267807276c551258e93b548cbcd-merged.mount: Deactivated successfully.
Jan 26 15:35:18 compute-0 podman[200170]: 2026-01-26 15:35:18.281074711 +0000 UTC m=+0.187281495 container remove 427c44b82df9b1c745c7434db94108743693b1a55d7f5e8ede57e70bf2fc1a4d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 15:35:18 compute-0 systemd[1]: libpod-conmon-427c44b82df9b1c745c7434db94108743693b1a55d7f5e8ede57e70bf2fc1a4d.scope: Deactivated successfully.
Jan 26 15:35:18 compute-0 ceph-mon[75140]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:18 compute-0 sudo[200308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boatgyaslzygufpauwkvniittmebcllf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441718.079956-679-57666366307707/AnsiballZ_file.py'
Jan 26 15:35:18 compute-0 sudo[200308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:18 compute-0 podman[200316]: 2026-01-26 15:35:18.445826702 +0000 UTC m=+0.045337607 container create f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:35:18 compute-0 systemd[1]: Started libpod-conmon-f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744.scope.
Jan 26 15:35:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1dbd2ea2cdbb046d081e47407891206c68c86aea873f0ecd3bd9440c8f537e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1dbd2ea2cdbb046d081e47407891206c68c86aea873f0ecd3bd9440c8f537e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1dbd2ea2cdbb046d081e47407891206c68c86aea873f0ecd3bd9440c8f537e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe1dbd2ea2cdbb046d081e47407891206c68c86aea873f0ecd3bd9440c8f537e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:35:18 compute-0 podman[200316]: 2026-01-26 15:35:18.427627072 +0000 UTC m=+0.027137997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:35:18 compute-0 podman[200316]: 2026-01-26 15:35:18.536094382 +0000 UTC m=+0.135605307 container init f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wozniak, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:35:18 compute-0 podman[200316]: 2026-01-26 15:35:18.543811999 +0000 UTC m=+0.143322914 container start f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:35:18 compute-0 podman[200316]: 2026-01-26 15:35:18.547865706 +0000 UTC m=+0.147376631 container attach f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wozniak, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:35:18 compute-0 python3.9[200310]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:18 compute-0 sudo[200308]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:18 compute-0 sudo[200520]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmzfvkdmggviksnthvnnochvmqzvedlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441718.6928968-679-132520228095873/AnsiballZ_file.py'
Jan 26 15:35:18 compute-0 sudo[200520]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:19 compute-0 python3.9[200529]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:19 compute-0 lvm[200564]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:35:19 compute-0 lvm[200561]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:35:19 compute-0 lvm[200564]: VG ceph_vg1 finished
Jan 26 15:35:19 compute-0 lvm[200561]: VG ceph_vg0 finished
Jan 26 15:35:19 compute-0 lvm[200566]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:35:19 compute-0 lvm[200566]: VG ceph_vg2 finished
Jan 26 15:35:19 compute-0 sudo[200520]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:19 compute-0 reverent_wozniak[200333]: {}
Jan 26 15:35:19 compute-0 systemd[1]: libpod-f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744.scope: Deactivated successfully.
Jan 26 15:35:19 compute-0 podman[200316]: 2026-01-26 15:35:19.298581941 +0000 UTC m=+0.898092846 container died f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:35:19 compute-0 systemd[1]: libpod-f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744.scope: Consumed 1.214s CPU time.
Jan 26 15:35:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe1dbd2ea2cdbb046d081e47407891206c68c86aea873f0ecd3bd9440c8f537e-merged.mount: Deactivated successfully.
Jan 26 15:35:19 compute-0 podman[200316]: 2026-01-26 15:35:19.486785807 +0000 UTC m=+1.086296732 container remove f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 15:35:19 compute-0 systemd[1]: libpod-conmon-f588ae35305428bf1ebdf77ec04922eccb3a9e8d38ce40c252b4c46db9ab5744.scope: Deactivated successfully.
Jan 26 15:35:19 compute-0 sudo[200086]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:35:19 compute-0 sudo[200731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-revieeoeakfxemcwkowfweahlxyfhgzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441719.3035958-679-118675014893353/AnsiballZ_file.py'
Jan 26 15:35:19 compute-0 sudo[200731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:19 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:35:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:35:19 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:35:19 compute-0 sudo[200734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:35:19 compute-0 sudo[200734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:35:19 compute-0 sudo[200734]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:19 compute-0 python3.9[200733]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:19 compute-0 sudo[200731]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:20 compute-0 sudo[200908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tczfofhgzgzonzolzkviktalpyokodjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441719.9882212-679-31999872583897/AnsiballZ_file.py'
Jan 26 15:35:20 compute-0 sudo[200908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:20 compute-0 python3.9[200910]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:20 compute-0 sudo[200908]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:20 compute-0 ceph-mon[75140]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:20 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:35:20 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:35:20 compute-0 sudo[201060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkuhqfgvlmxpxdjadvhchbsnsmsondbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441720.6758277-679-254298673317356/AnsiballZ_file.py'
Jan 26 15:35:20 compute-0 sudo[201060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:21 compute-0 python3.9[201062]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:21 compute-0 sudo[201060]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:21 compute-0 sudo[201212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwpnhulixgxzvzontumwpyvjhywhsxsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441721.347569-679-254714189435768/AnsiballZ_file.py'
Jan 26 15:35:21 compute-0 sudo[201212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:21 compute-0 ceph-mon[75140]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:21 compute-0 python3.9[201214]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:21 compute-0 sudo[201212]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:22 compute-0 sudo[201364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewahuswcbykjrxdxcrzbmojlbwzdjslt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441721.9377189-679-86022093842045/AnsiballZ_file.py'
Jan 26 15:35:22 compute-0 sudo[201364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:22 compute-0 python3.9[201366]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:22 compute-0 sudo[201364]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:22 compute-0 sudo[201516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxvghynuwpxzitzarwadehvynovncapf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441722.5416892-679-182600039098331/AnsiballZ_file.py'
Jan 26 15:35:22 compute-0 sudo[201516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:23 compute-0 python3.9[201518]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:23 compute-0 sudo[201516]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:23 compute-0 sudo[201668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppuumnzujyblgumvjsnslsunabolcdhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441723.2365913-778-87372536438817/AnsiballZ_stat.py'
Jan 26 15:35:23 compute-0 sudo[201668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:23 compute-0 python3.9[201670]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:23 compute-0 sudo[201668]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:24 compute-0 sudo[201791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yidslhlttenuyidxccrgtupzmyrwbonc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441723.2365913-778-87372536438817/AnsiballZ_copy.py'
Jan 26 15:35:24 compute-0 sudo[201791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:24 compute-0 ceph-mon[75140]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:24 compute-0 python3.9[201793]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441723.2365913-778-87372536438817/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:24 compute-0 sudo[201791]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:24 compute-0 sudo[201943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shcczjviqmjtflbnzbmilzinabmalvgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441724.5262053-778-247533241316263/AnsiballZ_stat.py'
Jan 26 15:35:24 compute-0 sudo[201943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:25 compute-0 python3.9[201945]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:25 compute-0 sudo[201943]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:25 compute-0 sudo[202066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pulsvfsqfeppqmkjkxnqdnihupyuosza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441724.5262053-778-247533241316263/AnsiballZ_copy.py'
Jan 26 15:35:25 compute-0 sudo[202066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:25 compute-0 python3.9[202068]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441724.5262053-778-247533241316263/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:25 compute-0 sudo[202066]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:26 compute-0 sudo[202218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqmqpgzcvrwfbpbdshtopbasqxsutlod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441725.7415373-778-202956213594533/AnsiballZ_stat.py'
Jan 26 15:35:26 compute-0 sudo[202218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:26 compute-0 ceph-mon[75140]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:26 compute-0 python3.9[202220]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:26 compute-0 sudo[202218]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:26 compute-0 sudo[202351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kewblijlvbqmwyvzdverraquvfzanlzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441725.7415373-778-202956213594533/AnsiballZ_copy.py'
Jan 26 15:35:26 compute-0 sudo[202351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:26 compute-0 podman[202315]: 2026-01-26 15:35:26.993519083 +0000 UTC m=+0.161998912 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 15:35:27 compute-0 python3.9[202357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441725.7415373-778-202956213594533/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:27 compute-0 sudo[202351]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:27 compute-0 sudo[202519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dccgywxzvdxjpwngolvvbjpresoganpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441727.2498322-778-203460890072747/AnsiballZ_stat.py'
Jan 26 15:35:27 compute-0 sudo[202519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:27 compute-0 python3.9[202521]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:27 compute-0 sudo[202519]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:28 compute-0 sudo[202642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnjkheqcunkvxkcjdmmhcyptdwdtipma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441727.2498322-778-203460890072747/AnsiballZ_copy.py'
Jan 26 15:35:28 compute-0 sudo[202642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:28 compute-0 ceph-mon[75140]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:28 compute-0 python3.9[202644]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441727.2498322-778-203460890072747/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:28 compute-0 sudo[202642]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:35:28
Jan 26 15:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.data', 'vms', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.log', '.mgr', 'volumes', 'default.rgw.meta']
Jan 26 15:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:35:28 compute-0 sudo[202794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjsfqywayxhmfuysifqkhvehoecvkkza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441728.6109838-778-157645846532692/AnsiballZ_stat.py'
Jan 26 15:35:28 compute-0 sudo[202794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:29 compute-0 python3.9[202796]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:29 compute-0 sudo[202794]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:29 compute-0 sudo[202917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oazbrsfelbrkbvldbpufzghpqkfoogjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441728.6109838-778-157645846532692/AnsiballZ_copy.py'
Jan 26 15:35:29 compute-0 sudo[202917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:29 compute-0 python3.9[202919]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441728.6109838-778-157645846532692/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:29 compute-0 sudo[202917]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:30 compute-0 sudo[203069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsapssdydpegnaasrpbhlbvaijxmzzwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441729.9317608-778-65816224607505/AnsiballZ_stat.py'
Jan 26 15:35:30 compute-0 sudo[203069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:35:30 compute-0 ceph-mon[75140]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:30 compute-0 python3.9[203071]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:30 compute-0 sudo[203069]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:30 compute-0 sudo[203209]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmnllxbuedfytlvrjkmdnahypyjideud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441729.9317608-778-65816224607505/AnsiballZ_copy.py'
Jan 26 15:35:30 compute-0 sudo[203209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:30 compute-0 podman[203166]: 2026-01-26 15:35:30.870938106 +0000 UTC m=+0.072797195 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 26 15:35:31 compute-0 python3.9[203211]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441729.9317608-778-65816224607505/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:31 compute-0 sudo[203209]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:31 compute-0 sudo[203361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jniqtodtjevxoglzqfxpapvwohybjlof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441731.2735739-778-65727961280058/AnsiballZ_stat.py'
Jan 26 15:35:31 compute-0 sudo[203361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:31 compute-0 python3.9[203363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:31 compute-0 sudo[203361]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:32 compute-0 sudo[203484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flrlniaoiwigryrlajaiyjmgqzrqugcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441731.2735739-778-65727961280058/AnsiballZ_copy.py'
Jan 26 15:35:32 compute-0 sudo[203484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:32 compute-0 python3.9[203486]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441731.2735739-778-65727961280058/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:32 compute-0 sudo[203484]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:32 compute-0 ceph-mon[75140]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:32 compute-0 sudo[203636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itebuskkpzaexxmzbwrdxkyenxmilgxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441732.5296826-778-121651126120525/AnsiballZ_stat.py'
Jan 26 15:35:32 compute-0 sudo[203636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:33 compute-0 python3.9[203638]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:33 compute-0 sudo[203636]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:33 compute-0 sudo[203759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prmjseysvngfuuzpbldmsdfdehbhydxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441732.5296826-778-121651126120525/AnsiballZ_copy.py'
Jan 26 15:35:33 compute-0 sudo[203759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:33 compute-0 python3.9[203761]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441732.5296826-778-121651126120525/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:33 compute-0 sudo[203759]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:34 compute-0 sudo[203911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohthquuoislkaawsdfguxeofphfmsdiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441733.8042367-778-115685843368885/AnsiballZ_stat.py'
Jan 26 15:35:34 compute-0 sudo[203911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:34 compute-0 python3.9[203913]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:34 compute-0 sudo[203911]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:34 compute-0 ceph-mon[75140]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:34 compute-0 sudo[204034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgnvvienohmmolwmmrqosxuxcrmljpzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441733.8042367-778-115685843368885/AnsiballZ_copy.py'
Jan 26 15:35:34 compute-0 sudo[204034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:34 compute-0 python3.9[204036]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441733.8042367-778-115685843368885/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:34 compute-0 sudo[204034]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:34 compute-0 auditd[703]: Audit daemon rotating log files
Jan 26 15:35:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:35 compute-0 sudo[204186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nygpshccqxootiwfaymfrvfsvzwmmvnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441735.0348032-778-182803615162661/AnsiballZ_stat.py'
Jan 26 15:35:35 compute-0 sudo[204186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:35 compute-0 python3.9[204188]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:35 compute-0 sudo[204186]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:36 compute-0 sudo[204309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otfxjeniygclmfzenqajgfludgdiobej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441735.0348032-778-182803615162661/AnsiballZ_copy.py'
Jan 26 15:35:36 compute-0 sudo[204309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:36 compute-0 python3.9[204311]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441735.0348032-778-182803615162661/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:36 compute-0 sudo[204309]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:36 compute-0 ceph-mon[75140]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:36 compute-0 sudo[204461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibfixcpnuqibjnvxqxttcbrydicfxjvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441736.3676665-778-187168134010867/AnsiballZ_stat.py'
Jan 26 15:35:36 compute-0 sudo[204461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:36 compute-0 python3.9[204463]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:36 compute-0 sudo[204461]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:37 compute-0 sudo[204584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejouxolljzyzuxupyvwqtacxqgvewftt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441736.3676665-778-187168134010867/AnsiballZ_copy.py'
Jan 26 15:35:37 compute-0 sudo[204584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:37 compute-0 python3.9[204586]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441736.3676665-778-187168134010867/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:37 compute-0 sudo[204584]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:37 compute-0 sudo[204736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kypyufvjewyfuuerollojwmogyvyrqoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441737.6306386-778-11137060414477/AnsiballZ_stat.py'
Jan 26 15:35:37 compute-0 sudo[204736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:38 compute-0 python3.9[204738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:38 compute-0 sudo[204736]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:38 compute-0 sudo[204859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzuowdqdeiwliehdauwbrpodlufmrrze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441737.6306386-778-11137060414477/AnsiballZ_copy.py'
Jan 26 15:35:38 compute-0 sudo[204859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:38 compute-0 python3.9[204861]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441737.6306386-778-11137060414477/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:38 compute-0 sudo[204859]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:38 compute-0 ceph-mon[75140]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:39 compute-0 sudo[205011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxdfuvaawmmovvshqbcgarhijewscijx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441738.7625709-778-124255264137905/AnsiballZ_stat.py'
Jan 26 15:35:39 compute-0 sudo[205011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:39 compute-0 python3.9[205013]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:39 compute-0 sudo[205011]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:39 compute-0 sudo[205134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyqyjsszdyeijgqmtvefdnxfafitmbrn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441738.7625709-778-124255264137905/AnsiballZ_copy.py'
Jan 26 15:35:39 compute-0 sudo[205134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:39 compute-0 python3.9[205136]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441738.7625709-778-124255264137905/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:39 compute-0 sudo[205134]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:40 compute-0 sudo[205286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbardaiipltgguxjfinlyjvtjddsttnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441739.9755404-778-196195195429193/AnsiballZ_stat.py'
Jan 26 15:35:40 compute-0 sudo[205286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:40 compute-0 python3.9[205288]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:35:40 compute-0 sudo[205286]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:40 compute-0 ceph-mon[75140]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:40 compute-0 sudo[205409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exmyuyrogbfckteqoedjyfyuwkvzgdeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441739.9755404-778-196195195429193/AnsiballZ_copy.py'
Jan 26 15:35:40 compute-0 sudo[205409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:41 compute-0 python3.9[205411]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441739.9755404-778-196195195429193/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:41 compute-0 sudo[205409]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:42 compute-0 ceph-mon[75140]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:42 compute-0 python3.9[205561]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:35:42 compute-0 sudo[205714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkckoemafjaetofzbimnztoijyfgnmfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441742.4909885-984-236360749393872/AnsiballZ_seboolean.py'
Jan 26 15:35:42 compute-0 sudo[205714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:43 compute-0 python3.9[205716]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 26 15:35:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:44 compute-0 ceph-mon[75140]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:44 compute-0 sudo[205714]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:44 compute-0 sudo[205870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxvdamwxcuyqmtrxwwlwfyymenhmvxwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441744.5926714-992-138225214713120/AnsiballZ_copy.py'
Jan 26 15:35:44 compute-0 dbus-broker-launch[775]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 26 15:35:44 compute-0 sudo[205870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:45 compute-0 python3.9[205872]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:45 compute-0 sudo[205870]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:45 compute-0 sudo[206022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmiyhfafgsphxpqlwolxhyedobiaprsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441745.2742438-992-252297708483582/AnsiballZ_copy.py'
Jan 26 15:35:45 compute-0 sudo[206022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:45 compute-0 python3.9[206024]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:45 compute-0 sudo[206022]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:46 compute-0 sudo[206174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigfiemwzfpqctfxkzjzscjignxhhpba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441745.9374957-992-189206402897729/AnsiballZ_copy.py'
Jan 26 15:35:46 compute-0 sudo[206174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:46 compute-0 ceph-mon[75140]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:46 compute-0 python3.9[206176]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:46 compute-0 sudo[206174]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:46 compute-0 sudo[206326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqxemvhhzyntdrtlgfcehwcjvzdlckpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441746.5465424-992-219784803276211/AnsiballZ_copy.py'
Jan 26 15:35:46 compute-0 sudo[206326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:47 compute-0 python3.9[206328]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:47 compute-0 sudo[206326]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:47 compute-0 sudo[206478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knqrxmtodtpkryigfmtmenuiisagsmda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441747.1952362-992-272116640702648/AnsiballZ_copy.py'
Jan 26 15:35:47 compute-0 sudo[206478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:47 compute-0 python3.9[206480]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:47 compute-0 sudo[206478]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:48 compute-0 sudo[206630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uynsvzyjlrijwkbqwweqwckgmxaauvkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441747.8699934-1028-73564680481385/AnsiballZ_copy.py'
Jan 26 15:35:48 compute-0 sudo[206630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:48 compute-0 python3.9[206632]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:48 compute-0 ceph-mon[75140]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:48 compute-0 sudo[206630]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:35:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:35:48 compute-0 sudo[206782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rztzyelpsjqbcrbkqcvzuulgockgkiiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441748.6057382-1028-175623661078475/AnsiballZ_copy.py'
Jan 26 15:35:48 compute-0 sudo[206782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:49 compute-0 python3.9[206784]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:49 compute-0 sudo[206782]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:49 compute-0 sudo[206934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnhoqlrvwacgxcnykvphzzxhwravbkqi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441749.2326357-1028-220864817581185/AnsiballZ_copy.py'
Jan 26 15:35:49 compute-0 sudo[206934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:49 compute-0 python3.9[206936]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:49 compute-0 sudo[206934]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:50 compute-0 sudo[207086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uypmlftdvqeydmgyxjbzkkzgkypwwjgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441749.8516238-1028-154898683122469/AnsiballZ_copy.py'
Jan 26 15:35:50 compute-0 sudo[207086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:50 compute-0 python3.9[207088]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:50 compute-0 sudo[207086]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:50 compute-0 ceph-mon[75140]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:50 compute-0 sudo[207238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kolxhobwsaakszyrnktftxledblemgpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441750.5507412-1028-13075179157418/AnsiballZ_copy.py'
Jan 26 15:35:50 compute-0 sudo[207238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:51 compute-0 python3.9[207240]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:51 compute-0 sudo[207238]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:51 compute-0 sudo[207390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikintdiuphpadklvyggmowbbuuatvluz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441751.258701-1064-267504356396296/AnsiballZ_systemd.py'
Jan 26 15:35:51 compute-0 sudo[207390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:51 compute-0 python3.9[207392]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:35:51 compute-0 systemd[1]: Reloading.
Jan 26 15:35:51 compute-0 systemd-rc-local-generator[207414]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:35:51 compute-0 systemd-sysv-generator[207419]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:35:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:52 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 26 15:35:52 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 26 15:35:52 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 26 15:35:52 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 26 15:35:52 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 26 15:35:52 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 26 15:35:52 compute-0 sudo[207390]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:52 compute-0 ceph-mon[75140]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:52 compute-0 sudo[207583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhnnaoyvztjcnroppkukkxqhfxpqnusu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441752.4431603-1064-194171113514625/AnsiballZ_systemd.py'
Jan 26 15:35:52 compute-0 sudo[207583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:53 compute-0 python3.9[207585]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:35:53 compute-0 systemd[1]: Reloading.
Jan 26 15:35:53 compute-0 systemd-rc-local-generator[207610]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:35:53 compute-0 systemd-sysv-generator[207615]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:35:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:53 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 26 15:35:53 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 26 15:35:53 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 26 15:35:53 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 26 15:35:53 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 26 15:35:53 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 26 15:35:53 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 26 15:35:53 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 26 15:35:53 compute-0 sudo[207583]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:53 compute-0 sudo[207798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-getqcfdjqzlzmqoazcqebbowtdkmfghf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441753.5664077-1064-211992221631109/AnsiballZ_systemd.py'
Jan 26 15:35:53 compute-0 sudo[207798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:54 compute-0 python3.9[207800]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:35:54 compute-0 systemd[1]: Reloading.
Jan 26 15:35:54 compute-0 systemd-rc-local-generator[207825]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:35:54 compute-0 systemd-sysv-generator[207830]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:35:54 compute-0 ceph-mon[75140]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:54 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 26 15:35:54 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 26 15:35:54 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 26 15:35:54 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 26 15:35:54 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 26 15:35:54 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 26 15:35:54 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 26 15:35:54 compute-0 sudo[207798]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:54 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 26 15:35:55 compute-0 sudo[208012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppxnaexlbvvfgktbljvagobxjwfojdsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441754.7666607-1064-14830467614960/AnsiballZ_systemd.py'
Jan 26 15:35:55 compute-0 sudo[208012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:55 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 26 15:35:55 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 26 15:35:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:55 compute-0 python3.9[208015]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:35:55 compute-0 systemd[1]: Reloading.
Jan 26 15:35:55 compute-0 systemd-sysv-generator[208052]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:35:55 compute-0 systemd-rc-local-generator[208049]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:35:55 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 26 15:35:55 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 26 15:35:55 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 26 15:35:55 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 26 15:35:55 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 26 15:35:55 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 26 15:35:55 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 26 15:35:55 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 26 15:35:55 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 26 15:35:55 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 26 15:35:55 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 26 15:35:55 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 26 15:35:55 compute-0 sudo[208012]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:56 compute-0 setroubleshoot[207837]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 040e9baf-3852-4b28-abf0-cc26afd7d58a
Jan 26 15:35:56 compute-0 setroubleshoot[207837]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 26 15:35:56 compute-0 sudo[208235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awirwdouampgwmmoxhbppaewtvjayamh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441756.0107212-1064-48071748350058/AnsiballZ_systemd.py'
Jan 26 15:35:56 compute-0 sudo[208235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:56 compute-0 ceph-mon[75140]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:56 compute-0 python3.9[208237]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:35:56 compute-0 systemd[1]: Reloading.
Jan 26 15:35:56 compute-0 systemd-rc-local-generator[208266]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:35:56 compute-0 systemd-sysv-generator[208270]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:35:57 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 26 15:35:57 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 26 15:35:57 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 26 15:35:57 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 26 15:35:57 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 26 15:35:57 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 26 15:35:57 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 26 15:35:57 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 26 15:35:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:35:57 compute-0 podman[208274]: 2026-01-26 15:35:57.177043155 +0000 UTC m=+0.099246413 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:35:57 compute-0 sudo[208235]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:57 compute-0 sudo[208471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nueodltmrqlxrdxautqlrpergxexvkhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441757.4339707-1101-221457157119896/AnsiballZ_file.py'
Jan 26 15:35:57 compute-0 sudo[208471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:57 compute-0 python3.9[208473]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:35:57 compute-0 sudo[208471]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:58 compute-0 sudo[208623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbqjwwcsrtgrlmflezuetklwvqypnfgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441758.0559556-1109-222910034941744/AnsiballZ_find.py'
Jan 26 15:35:58 compute-0 sudo[208623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:58 compute-0 python3.9[208625]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 15:35:58 compute-0 ceph-mon[75140]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:58 compute-0 sudo[208623]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:58 compute-0 sudo[208775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kegmozwaqbaalbmntbntjrqyxohglgnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441758.6929927-1117-6346305306765/AnsiballZ_command.py'
Jan 26 15:35:58 compute-0 sudo[208775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:35:59 compute-0 python3.9[208777]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:35:59 compute-0 sudo[208775]: pam_unix(sudo:session): session closed for user root
Jan 26 15:35:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:35:59.197 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:35:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:35:59.197 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:35:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:35:59.197 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:35:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:35:59 compute-0 python3.9[208931]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 15:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:36:00 compute-0 ceph-mon[75140]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:00 compute-0 python3.9[209081]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:01 compute-0 python3.9[209202]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441760.0991232-1136-203648526446901/.source.xml follow=False _original_basename=secret.xml.j2 checksum=abcf9f900b5e1ff1b2e6ad63e8e0d1bc86cc1d59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:01 compute-0 podman[209203]: 2026-01-26 15:36:01.213925407 +0000 UTC m=+0.062014351 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 15:36:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:01 compute-0 sudo[209371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xodefsbuzttwoqciqfwwvfrricnnqsyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441761.4466622-1151-95061408984379/AnsiballZ_command.py'
Jan 26 15:36:01 compute-0 sudo[209371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:01 compute-0 python3.9[209373]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 8b9831ad-4b0d-59b4-8860-96eb895a171f
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:36:01 compute-0 polkitd[43441]: Registered Authentication Agent for unix-process:209375:325277 (system bus name :1.2577 [pkttyagent --process 209375 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 26 15:36:01 compute-0 polkitd[43441]: Unregistered Authentication Agent for unix-process:209375:325277 (system bus name :1.2577, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 26 15:36:02 compute-0 polkitd[43441]: Registered Authentication Agent for unix-process:209374:325277 (system bus name :1.2578 [pkttyagent --process 209374 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 26 15:36:02 compute-0 polkitd[43441]: Unregistered Authentication Agent for unix-process:209374:325277 (system bus name :1.2578, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 26 15:36:02 compute-0 sudo[209371]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:02 compute-0 ceph-mon[75140]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:02 compute-0 python3.9[209535]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:03 compute-0 sudo[209685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knqedajukidjagrifmnzayyayvzfdjhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441762.8716953-1167-146823966634276/AnsiballZ_command.py'
Jan 26 15:36:03 compute-0 sudo[209685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:03 compute-0 sudo[209685]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:03 compute-0 sudo[209838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlwexfofukwzcoxdvbhwweugrjmlaggk ; FSID=8b9831ad-4b0d-59b4-8860-96eb895a171f KEY=AQD3hXdpAAAAABAAApuqLCvtPSt+QrTWj7m/nw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441763.5651107-1175-271779957890529/AnsiballZ_command.py'
Jan 26 15:36:03 compute-0 sudo[209838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:04 compute-0 polkitd[43441]: Registered Authentication Agent for unix-process:209841:325497 (system bus name :1.2581 [pkttyagent --process 209841 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 26 15:36:04 compute-0 polkitd[43441]: Unregistered Authentication Agent for unix-process:209841:325497 (system bus name :1.2581, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 26 15:36:04 compute-0 sudo[209838]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:04 compute-0 ceph-mon[75140]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:04 compute-0 sudo[209996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owfpknmfzyxlrwdbdbqeuruqfrxanhyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441764.4560149-1183-240423287096963/AnsiballZ_copy.py'
Jan 26 15:36:04 compute-0 sudo[209996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:04 compute-0 python3.9[209998]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:04 compute-0 sudo[209996]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:05 compute-0 sudo[210148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-savkiqvpdefaxeftzkabhwgcamjpyduc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441765.17289-1191-273727808596418/AnsiballZ_stat.py'
Jan 26 15:36:05 compute-0 sudo[210148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:05 compute-0 python3.9[210150]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:05 compute-0 sudo[210148]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:06 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 26 15:36:06 compute-0 sudo[210271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scxlmugihqqxhurnoqpalsantbvqywiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441765.17289-1191-273727808596418/AnsiballZ_copy.py'
Jan 26 15:36:06 compute-0 sudo[210271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:06 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 26 15:36:06 compute-0 python3.9[210273]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441765.17289-1191-273727808596418/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:06 compute-0 sudo[210271]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:06 compute-0 ceph-mon[75140]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:06 compute-0 sudo[210423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-subbhnzhdeutkvvxubldjsvuhotdfrwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441766.6892638-1207-209790409847496/AnsiballZ_file.py'
Jan 26 15:36:06 compute-0 sudo[210423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:07 compute-0 python3.9[210425]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:07 compute-0 sudo[210423]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:07 compute-0 sudo[210575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udusoaandvygtevwetmavgzqxyhfqjvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441767.2777472-1215-9312886185911/AnsiballZ_stat.py'
Jan 26 15:36:07 compute-0 sudo[210575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:07 compute-0 python3.9[210577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:07 compute-0 sudo[210575]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:07 compute-0 sudo[210653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwcqfbexjagrgzvdkjmvjvbnnoowsjnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441767.2777472-1215-9312886185911/AnsiballZ_file.py'
Jan 26 15:36:07 compute-0 sudo[210653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:08 compute-0 python3.9[210655]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:08 compute-0 sudo[210653]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:08 compute-0 ceph-mon[75140]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:08 compute-0 sudo[210805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mutwxipioeugohsapagrktnrgotczbfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441768.3722625-1227-188808732117286/AnsiballZ_stat.py'
Jan 26 15:36:08 compute-0 sudo[210805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:08 compute-0 python3.9[210807]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:08 compute-0 sudo[210805]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:09 compute-0 sudo[210883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtrjcuwmufvemwqsutbmhtzgxnmetylp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441768.3722625-1227-188808732117286/AnsiballZ_file.py'
Jan 26 15:36:09 compute-0 sudo[210883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:09 compute-0 python3.9[210885]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.h4h_8tvk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:09 compute-0 sudo[210883]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:09 compute-0 sudo[211035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiohqcntslamvbymmduaskgdeiyvkoun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441769.4987526-1239-270966815162456/AnsiballZ_stat.py'
Jan 26 15:36:09 compute-0 sudo[211035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:09 compute-0 python3.9[211037]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:09 compute-0 sudo[211035]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:10 compute-0 sudo[211113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qneqdezoqfusthfelfixkvoxnjyuuouq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441769.4987526-1239-270966815162456/AnsiballZ_file.py'
Jan 26 15:36:10 compute-0 sudo[211113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:10 compute-0 python3.9[211115]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:10 compute-0 sudo[211113]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:10 compute-0 ceph-mon[75140]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:10 compute-0 sudo[211265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlbfijtssdqphoxbjkjwblrqdypczvpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441770.610869-1252-44710670482301/AnsiballZ_command.py'
Jan 26 15:36:10 compute-0 sudo[211265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:11 compute-0 python3.9[211267]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:36:11 compute-0 sudo[211265]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:11 compute-0 sudo[211418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrgglhomvgntqqjoolpzvjlzpqafbhnr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769441771.2849038-1260-15420180863555/AnsiballZ_edpm_nftables_from_files.py'
Jan 26 15:36:11 compute-0 sudo[211418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:11 compute-0 python3[211420]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 26 15:36:11 compute-0 sudo[211418]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:12 compute-0 sudo[211570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loeixadbjynodliuyxpwbwjovcttklnm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441772.1119556-1268-7299442921216/AnsiballZ_stat.py'
Jan 26 15:36:12 compute-0 sudo[211570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:12 compute-0 ceph-mon[75140]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:12 compute-0 python3.9[211572]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:12 compute-0 sudo[211570]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:12 compute-0 sudo[211648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvrxiwhimjuffgbekbstblsskpompvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441772.1119556-1268-7299442921216/AnsiballZ_file.py'
Jan 26 15:36:12 compute-0 sudo[211648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:13 compute-0 python3.9[211650]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:13 compute-0 sudo[211648]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:13 compute-0 sudo[211800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnvycnirnlhnczgnxtbswuaqjqbeqjfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441773.235074-1280-41093341175809/AnsiballZ_stat.py'
Jan 26 15:36:13 compute-0 sudo[211800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:13 compute-0 python3.9[211802]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:13 compute-0 sudo[211800]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:14 compute-0 sudo[211925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrrqpdkyoakekdyposlyxefmawpqnxnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441773.235074-1280-41093341175809/AnsiballZ_copy.py'
Jan 26 15:36:14 compute-0 sudo[211925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:14 compute-0 python3.9[211927]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441773.235074-1280-41093341175809/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:14 compute-0 sudo[211925]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:14 compute-0 ceph-mon[75140]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:14 compute-0 sudo[212077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnqveuywntgmbgihenuklwunpwxnntgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441774.475871-1295-54935380581029/AnsiballZ_stat.py'
Jan 26 15:36:14 compute-0 sudo[212077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:14 compute-0 python3.9[212079]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:14 compute-0 sudo[212077]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:15 compute-0 sudo[212155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnghgwhmvwojfkswzlijkwnwjjreihff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441774.475871-1295-54935380581029/AnsiballZ_file.py'
Jan 26 15:36:15 compute-0 sudo[212155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:15 compute-0 python3.9[212157]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:15 compute-0 sudo[212155]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:16 compute-0 sudo[212307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbkvcllnuyypyvtkcsvtcdmqxmhruocg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441775.570086-1307-138319834530518/AnsiballZ_stat.py'
Jan 26 15:36:16 compute-0 sudo[212307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:16 compute-0 python3.9[212309]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:16 compute-0 sudo[212307]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:16 compute-0 sudo[212385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkervfahdrxcbrsfjsccllttqkcljxct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441775.570086-1307-138319834530518/AnsiballZ_file.py'
Jan 26 15:36:16 compute-0 sudo[212385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:16 compute-0 ceph-mon[75140]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:16 compute-0 python3.9[212387]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:16 compute-0 sudo[212385]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:17 compute-0 sudo[212537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bicrvlbkcmgekppqjeinplrazhozjile ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441777.106768-1319-141944782666511/AnsiballZ_stat.py'
Jan 26 15:36:17 compute-0 sudo[212537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:17 compute-0 python3.9[212539]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:17 compute-0 sudo[212537]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:17 compute-0 ceph-mon[75140]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:18 compute-0 sudo[212662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiuwdyyhnbubljahktdlyuyzoqibzwnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441777.106768-1319-141944782666511/AnsiballZ_copy.py'
Jan 26 15:36:18 compute-0 sudo[212662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:18 compute-0 python3.9[212664]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1769441777.106768-1319-141944782666511/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:18 compute-0 sudo[212662]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:18 compute-0 sudo[212814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmpcqvvibzneffenuyggzfxgftubmhrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441778.4379654-1334-2617911529690/AnsiballZ_file.py'
Jan 26 15:36:18 compute-0 sudo[212814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:18 compute-0 python3.9[212816]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:18 compute-0 sudo[212814]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:19 compute-0 sudo[212966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xginfewjfzrmssdilypovbrwrdvurqxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441779.055902-1342-196222980813217/AnsiballZ_command.py'
Jan 26 15:36:19 compute-0 sudo[212966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:19 compute-0 python3.9[212968]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:36:19 compute-0 sudo[212966]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:19 compute-0 sudo[213005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:36:19 compute-0 sudo[213005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:36:19 compute-0 sudo[213005]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:19 compute-0 sudo[213061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:36:19 compute-0 sudo[213061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:36:20 compute-0 sudo[213185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lckdinrihoksdcvekodvwnaqvgunomxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441779.805619-1350-162374335266592/AnsiballZ_blockinfile.py'
Jan 26 15:36:20 compute-0 sudo[213185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:20 compute-0 python3.9[213188]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:20 compute-0 ceph-mon[75140]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:20 compute-0 sudo[213185]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:20 compute-0 sudo[213061]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:36:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:36:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:36:20 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:36:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:36:20 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:36:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:36:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:36:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:36:20 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:36:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:36:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:36:20 compute-0 sudo[213209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:36:20 compute-0 sudo[213209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:36:20 compute-0 sudo[213209]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:20 compute-0 sudo[213254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:36:20 compute-0 sudo[213254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:36:20 compute-0 podman[213365]: 2026-01-26 15:36:20.880877777 +0000 UTC m=+0.040440642 container create 43fed6b831e72b7e0d83ac389293ea9faafdbe2087843668542af9ff75f48668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_moore, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:36:20 compute-0 systemd[1]: Started libpod-conmon-43fed6b831e72b7e0d83ac389293ea9faafdbe2087843668542af9ff75f48668.scope.
Jan 26 15:36:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:36:20 compute-0 podman[213365]: 2026-01-26 15:36:20.957442664 +0000 UTC m=+0.117005549 container init 43fed6b831e72b7e0d83ac389293ea9faafdbe2087843668542af9ff75f48668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:36:20 compute-0 podman[213365]: 2026-01-26 15:36:20.86181012 +0000 UTC m=+0.021373005 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:36:20 compute-0 podman[213365]: 2026-01-26 15:36:20.964806924 +0000 UTC m=+0.124369789 container start 43fed6b831e72b7e0d83ac389293ea9faafdbe2087843668542af9ff75f48668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_moore, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:36:20 compute-0 affectionate_moore[213405]: 167 167
Jan 26 15:36:20 compute-0 podman[213365]: 2026-01-26 15:36:20.968101895 +0000 UTC m=+0.127664780 container attach 43fed6b831e72b7e0d83ac389293ea9faafdbe2087843668542af9ff75f48668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_moore, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:36:20 compute-0 systemd[1]: libpod-43fed6b831e72b7e0d83ac389293ea9faafdbe2087843668542af9ff75f48668.scope: Deactivated successfully.
Jan 26 15:36:20 compute-0 podman[213365]: 2026-01-26 15:36:20.970374041 +0000 UTC m=+0.129936906 container died 43fed6b831e72b7e0d83ac389293ea9faafdbe2087843668542af9ff75f48668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 15:36:20 compute-0 sudo[213435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnaamgvdoswolntoflghitvdpwhhvcys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441780.705261-1359-30208430457345/AnsiballZ_command.py'
Jan 26 15:36:20 compute-0 sudo[213435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-38766a44041c11037e3e34418105adced2118d6ec35607935ede8d4d3d9dd2ea-merged.mount: Deactivated successfully.
Jan 26 15:36:21 compute-0 podman[213365]: 2026-01-26 15:36:21.014269976 +0000 UTC m=+0.173832841 container remove 43fed6b831e72b7e0d83ac389293ea9faafdbe2087843668542af9ff75f48668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_moore, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:36:21 compute-0 systemd[1]: libpod-conmon-43fed6b831e72b7e0d83ac389293ea9faafdbe2087843668542af9ff75f48668.scope: Deactivated successfully.
Jan 26 15:36:21 compute-0 python3.9[213445]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:36:21 compute-0 podman[213457]: 2026-01-26 15:36:21.212718011 +0000 UTC m=+0.045466066 container create 96e3fd510a07718a38d7375c198aa03e5b2c59c938c5aa729620014f1ab7c275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:36:21 compute-0 sudo[213435]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:21 compute-0 systemd[1]: Started libpod-conmon-96e3fd510a07718a38d7375c198aa03e5b2c59c938c5aa729620014f1ab7c275.scope.
Jan 26 15:36:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a1022ad51daf7421d65739b9b50f79eb9d01dc452a364053cad1f8199624d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a1022ad51daf7421d65739b9b50f79eb9d01dc452a364053cad1f8199624d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a1022ad51daf7421d65739b9b50f79eb9d01dc452a364053cad1f8199624d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:21 compute-0 podman[213457]: 2026-01-26 15:36:21.195694613 +0000 UTC m=+0.028442698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a1022ad51daf7421d65739b9b50f79eb9d01dc452a364053cad1f8199624d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a1022ad51daf7421d65739b9b50f79eb9d01dc452a364053cad1f8199624d6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:21 compute-0 podman[213457]: 2026-01-26 15:36:21.355728725 +0000 UTC m=+0.188476810 container init 96e3fd510a07718a38d7375c198aa03e5b2c59c938c5aa729620014f1ab7c275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:36:21 compute-0 podman[213457]: 2026-01-26 15:36:21.369931013 +0000 UTC m=+0.202679108 container start 96e3fd510a07718a38d7375c198aa03e5b2c59c938c5aa729620014f1ab7c275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hofstadter, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 15:36:21 compute-0 podman[213457]: 2026-01-26 15:36:21.374332961 +0000 UTC m=+0.207081026 container attach 96e3fd510a07718a38d7375c198aa03e5b2c59c938c5aa729620014f1ab7c275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 26 15:36:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:36:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:36:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:36:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:36:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:36:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:36:21 compute-0 sudo[213633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czoltongbbpazamdpqumcndhbdyvyzrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441781.406927-1367-12172441618971/AnsiballZ_stat.py'
Jan 26 15:36:21 compute-0 sudo[213633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:21 compute-0 python3.9[213636]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:36:21 compute-0 awesome_hofstadter[213481]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:36:21 compute-0 awesome_hofstadter[213481]: --> All data devices are unavailable
Jan 26 15:36:21 compute-0 sudo[213633]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:21 compute-0 systemd[1]: libpod-96e3fd510a07718a38d7375c198aa03e5b2c59c938c5aa729620014f1ab7c275.scope: Deactivated successfully.
Jan 26 15:36:21 compute-0 podman[213457]: 2026-01-26 15:36:21.925284555 +0000 UTC m=+0.758032640 container died 96e3fd510a07718a38d7375c198aa03e5b2c59c938c5aa729620014f1ab7c275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 26 15:36:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-67a1022ad51daf7421d65739b9b50f79eb9d01dc452a364053cad1f8199624d6-merged.mount: Deactivated successfully.
Jan 26 15:36:22 compute-0 podman[213457]: 2026-01-26 15:36:22.006315821 +0000 UTC m=+0.839063876 container remove 96e3fd510a07718a38d7375c198aa03e5b2c59c938c5aa729620014f1ab7c275 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 15:36:22 compute-0 systemd[1]: libpod-conmon-96e3fd510a07718a38d7375c198aa03e5b2c59c938c5aa729620014f1ab7c275.scope: Deactivated successfully.
Jan 26 15:36:22 compute-0 sudo[213254]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:22 compute-0 sudo[213711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:36:22 compute-0 sudo[213711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:36:22 compute-0 sudo[213711]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:22 compute-0 sudo[213736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:36:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:22 compute-0 sudo[213736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:36:22 compute-0 sudo[213870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thkdhxcegkntpfthzeitaompwhwbojst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441782.117028-1375-275613847612863/AnsiballZ_command.py'
Jan 26 15:36:22 compute-0 sudo[213870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:22 compute-0 podman[213878]: 2026-01-26 15:36:22.464148923 +0000 UTC m=+0.044610235 container create e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 15:36:22 compute-0 systemd[1]: Started libpod-conmon-e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d.scope.
Jan 26 15:36:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:36:22 compute-0 podman[213878]: 2026-01-26 15:36:22.44572748 +0000 UTC m=+0.026188722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:36:22 compute-0 python3.9[213877]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:36:22 compute-0 ceph-mon[75140]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:22 compute-0 podman[213878]: 2026-01-26 15:36:22.594930228 +0000 UTC m=+0.175391470 container init e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mayer, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:36:22 compute-0 podman[213878]: 2026-01-26 15:36:22.606285786 +0000 UTC m=+0.186747008 container start e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 15:36:22 compute-0 podman[213878]: 2026-01-26 15:36:22.610475589 +0000 UTC m=+0.190936811 container attach e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:36:22 compute-0 youthful_mayer[213894]: 167 167
Jan 26 15:36:22 compute-0 systemd[1]: libpod-e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d.scope: Deactivated successfully.
Jan 26 15:36:22 compute-0 conmon[213894]: conmon e1a7d247e32d37952854 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d.scope/container/memory.events
Jan 26 15:36:22 compute-0 podman[213878]: 2026-01-26 15:36:22.613574174 +0000 UTC m=+0.194035496 container died e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mayer, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 26 15:36:22 compute-0 sudo[213870]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f50e685bc9bc75ce44dec9bad034057bceb7231b99eac9837262d2f5c6618ca-merged.mount: Deactivated successfully.
Jan 26 15:36:22 compute-0 podman[213878]: 2026-01-26 15:36:22.665358384 +0000 UTC m=+0.245819606 container remove e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:36:22 compute-0 systemd[1]: libpod-conmon-e1a7d247e32d37952854db824ab1bdebe64eb6c072a2917d1abc3ab7e4b75d6d.scope: Deactivated successfully.
Jan 26 15:36:22 compute-0 podman[213964]: 2026-01-26 15:36:22.813557596 +0000 UTC m=+0.022004751 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:36:22 compute-0 podman[213964]: 2026-01-26 15:36:22.919176825 +0000 UTC m=+0.127623960 container create adbebd6c1bbb5b387ba02146b7679ed13953a7292c24603d3b6e312c29b9c17b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jennings, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 15:36:22 compute-0 systemd[1]: Started libpod-conmon-adbebd6c1bbb5b387ba02146b7679ed13953a7292c24603d3b6e312c29b9c17b.scope.
Jan 26 15:36:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4702deb0cf0e062839a4818476bfb22277f91f48687a35c50de9db0710e24b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4702deb0cf0e062839a4818476bfb22277f91f48687a35c50de9db0710e24b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4702deb0cf0e062839a4818476bfb22277f91f48687a35c50de9db0710e24b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4702deb0cf0e062839a4818476bfb22277f91f48687a35c50de9db0710e24b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:23 compute-0 podman[213964]: 2026-01-26 15:36:23.020765445 +0000 UTC m=+0.229212670 container init adbebd6c1bbb5b387ba02146b7679ed13953a7292c24603d3b6e312c29b9c17b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jennings, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:36:23 compute-0 podman[213964]: 2026-01-26 15:36:23.029469228 +0000 UTC m=+0.237916363 container start adbebd6c1bbb5b387ba02146b7679ed13953a7292c24603d3b6e312c29b9c17b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jennings, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:36:23 compute-0 podman[213964]: 2026-01-26 15:36:23.03686212 +0000 UTC m=+0.245309265 container attach adbebd6c1bbb5b387ba02146b7679ed13953a7292c24603d3b6e312c29b9c17b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jennings, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 26 15:36:23 compute-0 sudo[214090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlxqcgjyahturrvfeulohbqtdecroaga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441782.7938914-1383-168058486583146/AnsiballZ_file.py'
Jan 26 15:36:23 compute-0 sudo[214090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]: {
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:     "0": [
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:         {
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "devices": [
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "/dev/loop3"
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             ],
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_name": "ceph_lv0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_size": "21470642176",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "name": "ceph_lv0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "tags": {
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.cluster_name": "ceph",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.crush_device_class": "",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.encrypted": "0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.objectstore": "bluestore",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.osd_id": "0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.type": "block",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.vdo": "0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.with_tpm": "0"
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             },
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "type": "block",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "vg_name": "ceph_vg0"
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:         }
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:     ],
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:     "1": [
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:         {
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "devices": [
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "/dev/loop4"
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             ],
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_name": "ceph_lv1",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_size": "21470642176",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "name": "ceph_lv1",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "tags": {
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.cluster_name": "ceph",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.crush_device_class": "",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.encrypted": "0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.objectstore": "bluestore",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.osd_id": "1",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.type": "block",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.vdo": "0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.with_tpm": "0"
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             },
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "type": "block",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "vg_name": "ceph_vg1"
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:         }
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:     ],
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:     "2": [
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:         {
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "devices": [
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "/dev/loop5"
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             ],
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_name": "ceph_lv2",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_size": "21470642176",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "name": "ceph_lv2",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "tags": {
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.cluster_name": "ceph",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.crush_device_class": "",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.encrypted": "0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.objectstore": "bluestore",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.osd_id": "2",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.type": "block",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.vdo": "0",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:                 "ceph.with_tpm": "0"
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             },
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "type": "block",
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:             "vg_name": "ceph_vg2"
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:         }
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]:     ]
Jan 26 15:36:23 compute-0 affectionate_jennings[214041]: }
Jan 26 15:36:23 compute-0 python3.9[214092]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:23 compute-0 sudo[214090]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:23 compute-0 systemd[1]: libpod-adbebd6c1bbb5b387ba02146b7679ed13953a7292c24603d3b6e312c29b9c17b.scope: Deactivated successfully.
Jan 26 15:36:23 compute-0 podman[213964]: 2026-01-26 15:36:23.33841729 +0000 UTC m=+0.546864425 container died adbebd6c1bbb5b387ba02146b7679ed13953a7292c24603d3b6e312c29b9c17b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jennings, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4702deb0cf0e062839a4818476bfb22277f91f48687a35c50de9db0710e24b7-merged.mount: Deactivated successfully.
Jan 26 15:36:23 compute-0 podman[213964]: 2026-01-26 15:36:23.48282904 +0000 UTC m=+0.691276175 container remove adbebd6c1bbb5b387ba02146b7679ed13953a7292c24603d3b6e312c29b9c17b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_jennings, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 15:36:23 compute-0 systemd[1]: libpod-conmon-adbebd6c1bbb5b387ba02146b7679ed13953a7292c24603d3b6e312c29b9c17b.scope: Deactivated successfully.
Jan 26 15:36:23 compute-0 sudo[213736]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:23 compute-0 sudo[214177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:36:23 compute-0 sudo[214177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:36:23 compute-0 sudo[214177]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:23 compute-0 sudo[214218]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:36:23 compute-0 sudo[214218]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:36:23 compute-0 sudo[214308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrufhmwohglnfiblderkhthmthrkqkkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441783.4931154-1391-96766405932592/AnsiballZ_stat.py'
Jan 26 15:36:23 compute-0 sudo[214308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:23 compute-0 podman[214323]: 2026-01-26 15:36:23.910361369 +0000 UTC m=+0.040321890 container create dff1982be57696cca33bd2e9f96ea91d70eb53d35865ca602f53507a32236ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_nash, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 15:36:23 compute-0 python3.9[214310]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:23 compute-0 sudo[214308]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:23 compute-0 systemd[1]: Started libpod-conmon-dff1982be57696cca33bd2e9f96ea91d70eb53d35865ca602f53507a32236ed7.scope.
Jan 26 15:36:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:36:23 compute-0 podman[214323]: 2026-01-26 15:36:23.891532647 +0000 UTC m=+0.021493158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:36:24 compute-0 podman[214323]: 2026-01-26 15:36:24.012140033 +0000 UTC m=+0.142100594 container init dff1982be57696cca33bd2e9f96ea91d70eb53d35865ca602f53507a32236ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:36:24 compute-0 podman[214323]: 2026-01-26 15:36:24.018207732 +0000 UTC m=+0.148168263 container start dff1982be57696cca33bd2e9f96ea91d70eb53d35865ca602f53507a32236ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:36:24 compute-0 boring_nash[214339]: 167 167
Jan 26 15:36:24 compute-0 systemd[1]: libpod-dff1982be57696cca33bd2e9f96ea91d70eb53d35865ca602f53507a32236ed7.scope: Deactivated successfully.
Jan 26 15:36:24 compute-0 podman[214323]: 2026-01-26 15:36:24.203869232 +0000 UTC m=+0.333829763 container attach dff1982be57696cca33bd2e9f96ea91d70eb53d35865ca602f53507a32236ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_nash, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 26 15:36:24 compute-0 podman[214323]: 2026-01-26 15:36:24.205839701 +0000 UTC m=+0.335800222 container died dff1982be57696cca33bd2e9f96ea91d70eb53d35865ca602f53507a32236ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_nash, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 15:36:24 compute-0 sudo[214474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-niqrpzicnrmaesckfmvebjmuoskyhqrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441783.4931154-1391-96766405932592/AnsiballZ_copy.py'
Jan 26 15:36:24 compute-0 sudo[214474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1229f496b589046dbba9e124a97e692fb361c0a00364fff85193e2675cf6d2a-merged.mount: Deactivated successfully.
Jan 26 15:36:24 compute-0 podman[214323]: 2026-01-26 15:36:24.348847246 +0000 UTC m=+0.478807757 container remove dff1982be57696cca33bd2e9f96ea91d70eb53d35865ca602f53507a32236ed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:36:24 compute-0 systemd[1]: libpod-conmon-dff1982be57696cca33bd2e9f96ea91d70eb53d35865ca602f53507a32236ed7.scope: Deactivated successfully.
Jan 26 15:36:24 compute-0 python3.9[214476]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441783.4931154-1391-96766405932592/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:24 compute-0 sudo[214474]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:24 compute-0 podman[214485]: 2026-01-26 15:36:24.566833128 +0000 UTC m=+0.093108813 container create a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 15:36:24 compute-0 podman[214485]: 2026-01-26 15:36:24.499269822 +0000 UTC m=+0.025545527 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:36:24 compute-0 ceph-mon[75140]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:24 compute-0 systemd[1]: Started libpod-conmon-a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00.scope.
Jan 26 15:36:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd19eb6e7c034e34c904888c0000d7e4f2abf2e0f90a1d174bfb72cb0ca88e07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd19eb6e7c034e34c904888c0000d7e4f2abf2e0f90a1d174bfb72cb0ca88e07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd19eb6e7c034e34c904888c0000d7e4f2abf2e0f90a1d174bfb72cb0ca88e07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd19eb6e7c034e34c904888c0000d7e4f2abf2e0f90a1d174bfb72cb0ca88e07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:36:24 compute-0 podman[214485]: 2026-01-26 15:36:24.683268593 +0000 UTC m=+0.209544298 container init a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 15:36:24 compute-0 podman[214485]: 2026-01-26 15:36:24.694253311 +0000 UTC m=+0.220528996 container start a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:36:24 compute-0 podman[214485]: 2026-01-26 15:36:24.697391909 +0000 UTC m=+0.223667594 container attach a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 15:36:25 compute-0 sudo[214666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwdbbruosghlyuxsrnrfdwzczsvmjrni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441784.7028894-1406-133928252618984/AnsiballZ_stat.py'
Jan 26 15:36:25 compute-0 sudo[214666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:25 compute-0 python3.9[214668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:25 compute-0 sudo[214666]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:25 compute-0 lvm[214773]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:36:25 compute-0 lvm[214773]: VG ceph_vg0 finished
Jan 26 15:36:25 compute-0 lvm[214779]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:36:25 compute-0 lvm[214779]: VG ceph_vg1 finished
Jan 26 15:36:25 compute-0 lvm[214790]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:36:25 compute-0 lvm[214790]: VG ceph_vg2 finished
Jan 26 15:36:25 compute-0 blissful_cannon[214526]: {}
Jan 26 15:36:25 compute-0 systemd[1]: libpod-a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00.scope: Deactivated successfully.
Jan 26 15:36:25 compute-0 systemd[1]: libpod-a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00.scope: Consumed 1.328s CPU time.
Jan 26 15:36:25 compute-0 podman[214485]: 2026-01-26 15:36:25.521850235 +0000 UTC m=+1.048125940 container died a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:36:25 compute-0 sudo[214858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywamhwaosgoaczfcttopwrrgvbarhzlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441784.7028894-1406-133928252618984/AnsiballZ_copy.py'
Jan 26 15:36:25 compute-0 sudo[214858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd19eb6e7c034e34c904888c0000d7e4f2abf2e0f90a1d174bfb72cb0ca88e07-merged.mount: Deactivated successfully.
Jan 26 15:36:25 compute-0 podman[214485]: 2026-01-26 15:36:25.564788978 +0000 UTC m=+1.091064653 container remove a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_cannon, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:36:25 compute-0 systemd[1]: libpod-conmon-a6aa2d495df33a7b81cd540bbb1e78e90e653d8f8bb731c290916717ba791f00.scope: Deactivated successfully.
Jan 26 15:36:25 compute-0 sudo[214218]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:36:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:36:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:36:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:36:25 compute-0 sudo[214872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:36:25 compute-0 sudo[214872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:36:25 compute-0 sudo[214872]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:25 compute-0 python3.9[214867]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441784.7028894-1406-133928252618984/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:25 compute-0 sudo[214858]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:26 compute-0 sudo[215046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljcquuvlnjirkdsaccjjsdzhzrueqeqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441785.9047897-1421-591179450964/AnsiballZ_stat.py'
Jan 26 15:36:26 compute-0 sudo[215046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:26 compute-0 python3.9[215048]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:26 compute-0 sudo[215046]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:26 compute-0 ceph-mon[75140]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:26 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:36:26 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:36:26 compute-0 sudo[215169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juykmxyrnggkqimyatphvutgzkeogyga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441785.9047897-1421-591179450964/AnsiballZ_copy.py'
Jan 26 15:36:26 compute-0 sudo[215169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:26 compute-0 python3.9[215171]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441785.9047897-1421-591179450964/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:26 compute-0 sudo[215169]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:27 compute-0 podman[215271]: 2026-01-26 15:36:27.449753345 +0000 UTC m=+0.138526015 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 26 15:36:27 compute-0 sudo[215347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luvdinijnrbeurnhfsredcjzdzcfcawa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441787.1361907-1436-235231505076659/AnsiballZ_systemd.py'
Jan 26 15:36:27 compute-0 sudo[215347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:27 compute-0 python3.9[215349]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:36:27 compute-0 systemd[1]: Reloading.
Jan 26 15:36:27 compute-0 systemd-sysv-generator[215377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:36:27 compute-0 systemd-rc-local-generator[215371]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:36:28 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 26 15:36:28 compute-0 sudo[215347]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:36:28
Jan 26 15:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'volumes', 'images', 'backups']
Jan 26 15:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:36:28 compute-0 sudo[215537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epfdtvtckmokoeldsdivukunbgmictce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441788.330118-1444-180866931906742/AnsiballZ_systemd.py'
Jan 26 15:36:28 compute-0 sudo[215537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:28 compute-0 ceph-mon[75140]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:28 compute-0 python3.9[215539]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 26 15:36:28 compute-0 systemd[1]: Reloading.
Jan 26 15:36:29 compute-0 systemd-rc-local-generator[215566]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:36:29 compute-0 systemd-sysv-generator[215570]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:36:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:29 compute-0 systemd[1]: Reloading.
Jan 26 15:36:29 compute-0 systemd-rc-local-generator[215601]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:36:29 compute-0 systemd-sysv-generator[215605]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:36:29 compute-0 sudo[215537]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:30 compute-0 sshd-session[156700]: Connection closed by 192.168.122.30 port 43810
Jan 26 15:36:30 compute-0 sshd-session[156697]: pam_unix(sshd:session): session closed for user zuul
Jan 26 15:36:30 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 26 15:36:30 compute-0 systemd[1]: session-48.scope: Consumed 3min 35.089s CPU time.
Jan 26 15:36:30 compute-0 systemd-logind[790]: Session 48 logged out. Waiting for processes to exit.
Jan 26 15:36:30 compute-0 systemd-logind[790]: Removed session 48.
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:36:30 compute-0 ceph-mon[75140]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:31 compute-0 podman[215638]: 2026-01-26 15:36:31.397026885 +0000 UTC m=+0.075273729 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 15:36:31 compute-0 ceph-mon[75140]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:32 compute-0 sshd-session[215612]: Connection reset by authenticating user nobody 176.120.22.13 port 45218 [preauth]
Jan 26 15:36:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:34 compute-0 ceph-mon[75140]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:34 compute-0 sshd-session[215657]: Connection reset by authenticating user root 176.120.22.13 port 35624 [preauth]
Jan 26 15:36:34 compute-0 sshd-session[215659]: Accepted publickey for zuul from 192.168.122.30 port 46606 ssh2: ECDSA SHA256:+bxXQQ9mjgGTATCpcZMPbqE6T7Ypcu1bWdLgmv9dcFM
Jan 26 15:36:34 compute-0 systemd-logind[790]: New session 49 of user zuul.
Jan 26 15:36:34 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 26 15:36:34 compute-0 sshd-session[215659]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 15:36:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:35 compute-0 python3.9[215814]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 15:36:36 compute-0 sshd-session[215661]: Connection reset by authenticating user root 176.120.22.13 port 35646 [preauth]
Jan 26 15:36:36 compute-0 ceph-mon[75140]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:37 compute-0 python3.9[215969]: ansible-ansible.builtin.service_facts Invoked
Jan 26 15:36:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:37 compute-0 network[215987]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 15:36:37 compute-0 network[215988]: 'network-scripts' will be removed from distribution in near future.
Jan 26 15:36:37 compute-0 network[215989]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 15:36:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:38 compute-0 ceph-mon[75140]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:40 compute-0 ceph-mon[75140]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:40 compute-0 sshd-session[215918]: Invalid user admin from 176.120.22.13 port 35660
Jan 26 15:36:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:41 compute-0 sshd-session[215918]: Connection reset by invalid user admin 176.120.22.13 port 35660 [preauth]
Jan 26 15:36:42 compute-0 ceph-mon[75140]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:42 compute-0 sudo[216261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xojojzjienelvzqacszzmopfwbceolpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441802.530794-42-246444258600605/AnsiballZ_setup.py'
Jan 26 15:36:42 compute-0 sudo[216261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:43 compute-0 python3.9[216263]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 26 15:36:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:43 compute-0 sshd-session[216103]: Connection reset by authenticating user root 176.120.22.13 port 35672 [preauth]
Jan 26 15:36:43 compute-0 sudo[216261]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:43 compute-0 sudo[216345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubedwkfhlppznwwfmbpyxktdamoidhcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441802.530794-42-246444258600605/AnsiballZ_dnf.py'
Jan 26 15:36:43 compute-0 sudo[216345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:44 compute-0 python3.9[216347]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 15:36:44 compute-0 ceph-mon[75140]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:46 compute-0 ceph-mon[75140]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:36:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:36:48 compute-0 ceph-mon[75140]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:50 compute-0 sudo[216345]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:50 compute-0 ceph-mon[75140]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:50 compute-0 sudo[216498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nopdpkvaimpuebocuwdsbxrmqxipkwpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441810.444129-54-138819599418578/AnsiballZ_stat.py'
Jan 26 15:36:50 compute-0 sudo[216498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:51 compute-0 python3.9[216500]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:36:51 compute-0 sudo[216498]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:51 compute-0 ceph-mon[75140]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:51 compute-0 sudo[216650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbksuvwhjswpdewimxfaqrsiykcccmcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441811.2657309-64-66145237516199/AnsiballZ_command.py'
Jan 26 15:36:51 compute-0 sudo[216650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:51 compute-0 python3.9[216652]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:36:52 compute-0 sudo[216650]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:52 compute-0 sudo[216803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaynnoobzkftnycyxehybuyhvsecdqzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441812.2789993-74-149477217515287/AnsiballZ_stat.py'
Jan 26 15:36:52 compute-0 sudo[216803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:52 compute-0 python3.9[216805]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:36:52 compute-0 sudo[216803]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:53 compute-0 sudo[216955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hugujvzcmqdldakaxkuxrkbkgkzvvsei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441812.9297762-82-266952727319803/AnsiballZ_command.py'
Jan 26 15:36:53 compute-0 sudo[216955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:53 compute-0 python3.9[216957]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:36:53 compute-0 sudo[216955]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:53 compute-0 sudo[217108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytbshjxvvvvqgtltkcdffaibmmzdafod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441813.5657215-90-228439996487683/AnsiballZ_stat.py'
Jan 26 15:36:53 compute-0 sudo[217108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:54 compute-0 python3.9[217110]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:36:54 compute-0 sudo[217108]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:54 compute-0 ceph-mon[75140]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:54 compute-0 sudo[217231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evzarmmvawexzcewegiutswissrlhjqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441813.5657215-90-228439996487683/AnsiballZ_copy.py'
Jan 26 15:36:54 compute-0 sudo[217231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:54 compute-0 python3.9[217233]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441813.5657215-90-228439996487683/.source.iscsi _original_basename=.1apdljki follow=False checksum=201691e6d456232eead8acf9f38f9dacace280f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:54 compute-0 sudo[217231]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:55 compute-0 sudo[217383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiebvzyntdhopwknrbjailxsmqulfige ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441814.9707332-105-220539769022921/AnsiballZ_file.py'
Jan 26 15:36:55 compute-0 sudo[217383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:55 compute-0 python3.9[217385]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:55 compute-0 sudo[217383]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:56 compute-0 sshd-session[217386]: Connection closed by authenticating user root 178.128.250.55 port 43878 [preauth]
Jan 26 15:36:56 compute-0 sudo[217538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abjexogaxrweajdwmdglixlulxzaeeby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441815.8684878-113-74209192508626/AnsiballZ_lineinfile.py'
Jan 26 15:36:56 compute-0 sudo[217538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:56 compute-0 ceph-mon[75140]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:56 compute-0 python3.9[217540]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:36:56 compute-0 sudo[217538]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:36:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:57 compute-0 sudo[217690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zivwvfqqgphujococbohljskssztjjkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441816.737317-122-36886169081622/AnsiballZ_systemd_service.py'
Jan 26 15:36:57 compute-0 sudo[217690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:57 compute-0 python3.9[217692]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:36:57 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 26 15:36:57 compute-0 sudo[217690]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:57 compute-0 podman[217694]: 2026-01-26 15:36:57.717790166 +0000 UTC m=+0.084214028 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 15:36:58 compute-0 sudo[217872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjcvzkfwtjerkujbdokbuednpocwxlwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441817.8353376-130-22377037526074/AnsiballZ_systemd_service.py'
Jan 26 15:36:58 compute-0 sudo[217872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:36:58 compute-0 python3.9[217874]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:36:58 compute-0 ceph-mon[75140]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:58 compute-0 systemd[1]: Reloading.
Jan 26 15:36:58 compute-0 systemd-rc-local-generator[217904]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:36:58 compute-0 systemd-sysv-generator[217908]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:36:58 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 26 15:36:58 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 26 15:36:58 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 26 15:36:58 compute-0 systemd[1]: Started Open-iSCSI.
Jan 26 15:36:58 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 26 15:36:58 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 26 15:36:58 compute-0 sudo[217872]: pam_unix(sudo:session): session closed for user root
Jan 26 15:36:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:36:59.198 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:36:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:36:59.199 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:36:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:36:59.199 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:36:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:36:59 compute-0 python3.9[218072]: ansible-ansible.builtin.service_facts Invoked
Jan 26 15:36:59 compute-0 network[218089]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 15:36:59 compute-0 network[218090]: 'network-scripts' will be removed from distribution in near future.
Jan 26 15:36:59 compute-0 network[218091]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 15:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:37:00 compute-0 ceph-mon[75140]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:01 compute-0 podman[218138]: 2026-01-26 15:37:01.546241305 +0000 UTC m=+0.096202273 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 15:37:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:02 compute-0 ceph-mon[75140]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:04 compute-0 ceph-mon[75140]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:05 compute-0 sudo[218380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imhpdoommzhbfimfuvnaijlvejkpjwqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441825.3102212-153-150138698097514/AnsiballZ_dnf.py'
Jan 26 15:37:05 compute-0 sudo[218380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:05 compute-0 python3.9[218382]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 15:37:06 compute-0 ceph-mon[75140]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:09 compute-0 ceph-mon[75140]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:10 compute-0 ceph-mon[75140]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:11 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 15:37:11 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 15:37:11 compute-0 systemd[1]: Reloading.
Jan 26 15:37:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:11 compute-0 systemd-rc-local-generator[218429]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:37:11 compute-0 systemd-sysv-generator[218432]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:37:11 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 15:37:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:12 compute-0 ceph-mon[75140]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:12 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 15:37:12 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 15:37:12 compute-0 systemd[1]: run-r1cc49be3ebf44e5a8f670887ef58c703.service: Deactivated successfully.
Jan 26 15:37:13 compute-0 sudo[218380]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:13 compute-0 sudo[218697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqzmrjgmtwppsuzuiivoxojpqwgsvqzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441833.2478461-162-41986411937553/AnsiballZ_file.py'
Jan 26 15:37:13 compute-0 sudo[218697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:13 compute-0 python3.9[218699]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 26 15:37:13 compute-0 sudo[218697]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:14 compute-0 sudo[218849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itrvwrabapyqjotwjhnwakjmlhjjigyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441833.9847257-170-267158926805907/AnsiballZ_modprobe.py'
Jan 26 15:37:14 compute-0 sudo[218849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:14 compute-0 ceph-mon[75140]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:14 compute-0 python3.9[218851]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 26 15:37:14 compute-0 sudo[218849]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:15 compute-0 sudo[219005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnuvkysiduolbfbdtmwvqxpalpjoevhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441834.8438735-178-92930081351130/AnsiballZ_stat.py'
Jan 26 15:37:15 compute-0 sudo[219005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:15 compute-0 python3.9[219007]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:37:15 compute-0 sudo[219005]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:15 compute-0 sudo[219128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlpasosdabirmfvwibnzlkngqxhbsshl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441834.8438735-178-92930081351130/AnsiballZ_copy.py'
Jan 26 15:37:15 compute-0 sudo[219128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:15 compute-0 python3.9[219130]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441834.8438735-178-92930081351130/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:15 compute-0 sudo[219128]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:16 compute-0 sudo[219280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnrnrqehamofmjoqrwddrvatbcafqzek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441836.1435587-194-95316250953020/AnsiballZ_lineinfile.py'
Jan 26 15:37:16 compute-0 sudo[219280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:16 compute-0 ceph-mon[75140]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:16 compute-0 python3.9[219282]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:16 compute-0 sudo[219280]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:17 compute-0 sudo[219432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcnkvlpwzdnrzhavumvrrxptbccncbgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441836.9166672-202-66369170721900/AnsiballZ_systemd.py'
Jan 26 15:37:17 compute-0 sudo[219432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:17 compute-0 python3.9[219434]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:37:17 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 26 15:37:17 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 26 15:37:17 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 26 15:37:17 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 26 15:37:17 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 26 15:37:17 compute-0 sudo[219432]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:18 compute-0 ceph-mon[75140]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:18 compute-0 sudo[219588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkmhuankugrdsgahvdnzsnunbiavazun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441838.2365384-210-196998347803398/AnsiballZ_command.py'
Jan 26 15:37:18 compute-0 sudo[219588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:18 compute-0 python3.9[219590]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:37:18 compute-0 sudo[219588]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:19 compute-0 sudo[219741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uohwxhiieskthqygevltalvpuiodigzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441839.3400662-220-215046479935031/AnsiballZ_stat.py'
Jan 26 15:37:19 compute-0 sudo[219741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:19 compute-0 python3.9[219743]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:37:19 compute-0 sudo[219741]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:20 compute-0 sudo[219893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uefbbgufohybmawtbnzjwauelvxliabh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441840.0080957-229-48353624542573/AnsiballZ_stat.py'
Jan 26 15:37:20 compute-0 sudo[219893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:20 compute-0 python3.9[219895]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:37:20 compute-0 sudo[219893]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:20 compute-0 ceph-mon[75140]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:20 compute-0 sudo[220016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nltryrjmpqakmumxhiadhicwljlfnyfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441840.0080957-229-48353624542573/AnsiballZ_copy.py'
Jan 26 15:37:20 compute-0 sudo[220016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:21 compute-0 python3.9[220018]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441840.0080957-229-48353624542573/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:21 compute-0 sudo[220016]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:21 compute-0 sudo[220168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blxmklybmwmdybczjjtulhmzixgroxum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441841.2906976-244-120012023998053/AnsiballZ_command.py'
Jan 26 15:37:21 compute-0 sudo[220168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:21 compute-0 python3.9[220170]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:37:21 compute-0 sudo[220168]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:21 compute-0 ceph-mon[75140]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:22 compute-0 sudo[220321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezjleevfcgwhznuuhbkbcsjfpweqzikb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441841.9054594-252-610448573042/AnsiballZ_lineinfile.py'
Jan 26 15:37:22 compute-0 sudo[220321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:22 compute-0 python3.9[220323]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:22 compute-0 sudo[220321]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:23 compute-0 sudo[220473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihnoiciwlcofluihvntbibzvtjcojcxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441842.5759447-260-240069001903416/AnsiballZ_replace.py'
Jan 26 15:37:23 compute-0 sudo[220473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:23 compute-0 python3.9[220475]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:23 compute-0 sudo[220473]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:23 compute-0 sudo[220625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwhitbibicazsncmhteypdpkeqflisit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441843.4913068-268-223254159951755/AnsiballZ_replace.py'
Jan 26 15:37:23 compute-0 sudo[220625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:24 compute-0 python3.9[220627]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:24 compute-0 sudo[220625]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:24 compute-0 sudo[220777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vynpkqjqiurrglwsrhobbsafrgjewihi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441844.2768154-277-65501380470997/AnsiballZ_lineinfile.py'
Jan 26 15:37:24 compute-0 sudo[220777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:24 compute-0 python3.9[220779]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:24 compute-0 sudo[220777]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:25 compute-0 sudo[220929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvhkfzsnmuvylwptxzipsiohshbvmrvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441844.9319758-277-233260749100262/AnsiballZ_lineinfile.py'
Jan 26 15:37:25 compute-0 sudo[220929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:25 compute-0 python3.9[220931]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:25 compute-0 sudo[220929]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:25 compute-0 sudo[221019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:37:25 compute-0 sudo[221019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:37:25 compute-0 sudo[221019]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:25 compute-0 sudo[221062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:37:25 compute-0 sudo[221062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:37:25 compute-0 sudo[221131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umimwdrjccajmgvqdtmfjrakgcwxlytc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441845.5670092-277-160879271435344/AnsiballZ_lineinfile.py'
Jan 26 15:37:25 compute-0 sudo[221131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:26 compute-0 ceph-mon[75140]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:26 compute-0 ceph-mon[75140]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:26 compute-0 python3.9[221133]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:26 compute-0 sudo[221131]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:26 compute-0 sudo[221062]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:37:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:37:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:37:26 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:37:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:37:26 compute-0 sudo[221314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knonpqvgekbuejasrlfdbhmojrfbtoif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441846.289818-277-121823782116660/AnsiballZ_lineinfile.py'
Jan 26 15:37:26 compute-0 sudo[221314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:26 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:37:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:37:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:37:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:37:26 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:37:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:37:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:37:26 compute-0 sudo[221317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:37:26 compute-0 sudo[221317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:37:26 compute-0 sudo[221317]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:26 compute-0 sudo[221342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:37:26 compute-0 python3.9[221316]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:26 compute-0 sudo[221342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:37:26 compute-0 sudo[221314]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:27 compute-0 podman[221446]: 2026-01-26 15:37:27.025881647 +0000 UTC m=+0.094108242 container create 04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:37:27 compute-0 podman[221446]: 2026-01-26 15:37:26.954561506 +0000 UTC m=+0.022788161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:37:27 compute-0 systemd[1]: Started libpod-conmon-04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603.scope.
Jan 26 15:37:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:37:27 compute-0 sudo[221547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzmqozypvoartzlaeqfqqfaefmpxbeyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441846.8849363-306-13451438615374/AnsiballZ_stat.py'
Jan 26 15:37:27 compute-0 sudo[221547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:37:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:37:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:37:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:37:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:37:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:37:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:27 compute-0 python3.9[221549]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:37:27 compute-0 sudo[221547]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:27 compute-0 podman[221446]: 2026-01-26 15:37:27.459231745 +0000 UTC m=+0.527458390 container init 04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shaw, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:37:27 compute-0 podman[221446]: 2026-01-26 15:37:27.467500519 +0000 UTC m=+0.535727084 container start 04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 15:37:27 compute-0 funny_shaw[221517]: 167 167
Jan 26 15:37:27 compute-0 systemd[1]: libpod-04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603.scope: Deactivated successfully.
Jan 26 15:37:27 compute-0 conmon[221517]: conmon 04922203b24653f60495 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603.scope/container/memory.events
Jan 26 15:37:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:27 compute-0 podman[221446]: 2026-01-26 15:37:27.881622945 +0000 UTC m=+0.949849580 container attach 04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shaw, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:37:27 compute-0 podman[221446]: 2026-01-26 15:37:27.883930921 +0000 UTC m=+0.952157516 container died 04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shaw, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 15:37:27 compute-0 sudo[221727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqflkhsjzqfiyqplfddmgasbjtoyftcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441847.543746-314-91842453335600/AnsiballZ_command.py'
Jan 26 15:37:27 compute-0 sudo[221727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7e2e152c279210ccb7949eee9f9bef2ccf0d32308ba34d0ef8dc64dc870b2a6-merged.mount: Deactivated successfully.
Jan 26 15:37:28 compute-0 podman[221446]: 2026-01-26 15:37:28.07409365 +0000 UTC m=+1.142320205 container remove 04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_shaw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 15:37:28 compute-0 python3.9[221729]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:37:28 compute-0 sudo[221727]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:28 compute-0 systemd[1]: libpod-conmon-04922203b24653f604950c57fd3209619c2dbe73665f71f8cc80996e5f72f603.scope: Deactivated successfully.
Jan 26 15:37:28 compute-0 podman[221689]: 2026-01-26 15:37:28.158214655 +0000 UTC m=+0.320074598 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:37:28 compute-0 podman[221778]: 2026-01-26 15:37:28.226473921 +0000 UTC m=+0.035656276 container create 8a8e2ae33b17ba52eebd14e9198d90c3d0ccf205b4e7a5d657a1cb03a113e6a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_yalow, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:37:28 compute-0 systemd[1]: Started libpod-conmon-8a8e2ae33b17ba52eebd14e9198d90c3d0ccf205b4e7a5d657a1cb03a113e6a8.scope.
Jan 26 15:37:28 compute-0 ceph-mon[75140]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba431a3f643d296222fafae4bcecdce1cd88371536ff533f93352838cb8362d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba431a3f643d296222fafae4bcecdce1cd88371536ff533f93352838cb8362d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba431a3f643d296222fafae4bcecdce1cd88371536ff533f93352838cb8362d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba431a3f643d296222fafae4bcecdce1cd88371536ff533f93352838cb8362d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eba431a3f643d296222fafae4bcecdce1cd88371536ff533f93352838cb8362d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:28 compute-0 podman[221778]: 2026-01-26 15:37:28.211277518 +0000 UTC m=+0.020459863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:37:28 compute-0 podman[221778]: 2026-01-26 15:37:28.309886949 +0000 UTC m=+0.119069294 container init 8a8e2ae33b17ba52eebd14e9198d90c3d0ccf205b4e7a5d657a1cb03a113e6a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_yalow, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 15:37:28 compute-0 podman[221778]: 2026-01-26 15:37:28.318068709 +0000 UTC m=+0.127251034 container start 8a8e2ae33b17ba52eebd14e9198d90c3d0ccf205b4e7a5d657a1cb03a113e6a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_yalow, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:37:28 compute-0 podman[221778]: 2026-01-26 15:37:28.321682539 +0000 UTC m=+0.130864884 container attach 8a8e2ae33b17ba52eebd14e9198d90c3d0ccf205b4e7a5d657a1cb03a113e6a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_yalow, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:37:28
Jan 26 15:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.meta']
Jan 26 15:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:37:28 compute-0 sudo[221930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptxbblyefwfxixsszcvwcsmkwmkojbtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441848.3202362-323-146397647484957/AnsiballZ_systemd_service.py'
Jan 26 15:37:28 compute-0 sudo[221930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:28 compute-0 inspiring_yalow[221794]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:37:28 compute-0 inspiring_yalow[221794]: --> All data devices are unavailable
Jan 26 15:37:28 compute-0 systemd[1]: libpod-8a8e2ae33b17ba52eebd14e9198d90c3d0ccf205b4e7a5d657a1cb03a113e6a8.scope: Deactivated successfully.
Jan 26 15:37:28 compute-0 podman[221778]: 2026-01-26 15:37:28.816567108 +0000 UTC m=+0.625749433 container died 8a8e2ae33b17ba52eebd14e9198d90c3d0ccf205b4e7a5d657a1cb03a113e6a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 15:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-eba431a3f643d296222fafae4bcecdce1cd88371536ff533f93352838cb8362d-merged.mount: Deactivated successfully.
Jan 26 15:37:28 compute-0 podman[221778]: 2026-01-26 15:37:28.866211377 +0000 UTC m=+0.675393702 container remove 8a8e2ae33b17ba52eebd14e9198d90c3d0ccf205b4e7a5d657a1cb03a113e6a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_yalow, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 15:37:28 compute-0 systemd[1]: libpod-conmon-8a8e2ae33b17ba52eebd14e9198d90c3d0ccf205b4e7a5d657a1cb03a113e6a8.scope: Deactivated successfully.
Jan 26 15:37:28 compute-0 sudo[221342]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:28 compute-0 python3.9[221933]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:28 compute-0 sudo[221954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:37:28 compute-0 sudo[221954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:37:28 compute-0 sudo[221954]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:29 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 26 15:37:29 compute-0 sudo[221980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:37:29 compute-0 sudo[221980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:37:29 compute-0 sudo[221930]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:29 compute-0 podman[222073]: 2026-01-26 15:37:29.306735292 +0000 UTC m=+0.046695928 container create c9f895932f6ea81b0d5b02325e87687ce6df4da10fff69f56aaa8bcb0e25a604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaum, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 15:37:29 compute-0 systemd[1]: Started libpod-conmon-c9f895932f6ea81b0d5b02325e87687ce6df4da10fff69f56aaa8bcb0e25a604.scope.
Jan 26 15:37:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:37:29 compute-0 podman[222073]: 2026-01-26 15:37:29.377628722 +0000 UTC m=+0.117589368 container init c9f895932f6ea81b0d5b02325e87687ce6df4da10fff69f56aaa8bcb0e25a604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaum, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:37:29 compute-0 podman[222073]: 2026-01-26 15:37:29.28631973 +0000 UTC m=+0.026280416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:37:29 compute-0 podman[222073]: 2026-01-26 15:37:29.386489309 +0000 UTC m=+0.126449945 container start c9f895932f6ea81b0d5b02325e87687ce6df4da10fff69f56aaa8bcb0e25a604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaum, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:37:29 compute-0 podman[222073]: 2026-01-26 15:37:29.390067577 +0000 UTC m=+0.130028243 container attach c9f895932f6ea81b0d5b02325e87687ce6df4da10fff69f56aaa8bcb0e25a604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 15:37:29 compute-0 naughty_chaum[222129]: 167 167
Jan 26 15:37:29 compute-0 systemd[1]: libpod-c9f895932f6ea81b0d5b02325e87687ce6df4da10fff69f56aaa8bcb0e25a604.scope: Deactivated successfully.
Jan 26 15:37:29 compute-0 podman[222073]: 2026-01-26 15:37:29.391326428 +0000 UTC m=+0.131287064 container died c9f895932f6ea81b0d5b02325e87687ce6df4da10fff69f56aaa8bcb0e25a604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 15:37:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0bd0b258ecc60a5638d6ab6de405eca24437bfb1f3dfb860f2ed63029c3d808-merged.mount: Deactivated successfully.
Jan 26 15:37:29 compute-0 podman[222073]: 2026-01-26 15:37:29.427420195 +0000 UTC m=+0.167380831 container remove c9f895932f6ea81b0d5b02325e87687ce6df4da10fff69f56aaa8bcb0e25a604 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_chaum, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:37:29 compute-0 systemd[1]: libpod-conmon-c9f895932f6ea81b0d5b02325e87687ce6df4da10fff69f56aaa8bcb0e25a604.scope: Deactivated successfully.
Jan 26 15:37:29 compute-0 sudo[222203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdizbwckymllhgtxgbjglcwwpsmglynu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441849.2189581-331-44284203982622/AnsiballZ_systemd_service.py'
Jan 26 15:37:29 compute-0 sudo[222203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:29 compute-0 podman[222211]: 2026-01-26 15:37:29.595784018 +0000 UTC m=+0.043554621 container create 2097671f89fd1020d1251ed9e96c8a16933dc1ce60ae40e9abd1bdedab369b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 15:37:29 compute-0 systemd[1]: Started libpod-conmon-2097671f89fd1020d1251ed9e96c8a16933dc1ce60ae40e9abd1bdedab369b59.scope.
Jan 26 15:37:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:37:29 compute-0 podman[222211]: 2026-01-26 15:37:29.575811128 +0000 UTC m=+0.023581731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6970218f778eec88a0cf290bc1e21ff13d5b93b2c0110ade8ed3d151e88928a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6970218f778eec88a0cf290bc1e21ff13d5b93b2c0110ade8ed3d151e88928a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6970218f778eec88a0cf290bc1e21ff13d5b93b2c0110ade8ed3d151e88928a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6970218f778eec88a0cf290bc1e21ff13d5b93b2c0110ade8ed3d151e88928a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:29 compute-0 podman[222211]: 2026-01-26 15:37:29.686239469 +0000 UTC m=+0.134010072 container init 2097671f89fd1020d1251ed9e96c8a16933dc1ce60ae40e9abd1bdedab369b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wu, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 15:37:29 compute-0 podman[222211]: 2026-01-26 15:37:29.692676136 +0000 UTC m=+0.140446729 container start 2097671f89fd1020d1251ed9e96c8a16933dc1ce60ae40e9abd1bdedab369b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wu, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:37:29 compute-0 podman[222211]: 2026-01-26 15:37:29.696165882 +0000 UTC m=+0.143936465 container attach 2097671f89fd1020d1251ed9e96c8a16933dc1ce60ae40e9abd1bdedab369b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 15:37:29 compute-0 python3.9[222205]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:29 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 26 15:37:29 compute-0 udevadm[222236]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 26 15:37:29 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 26 15:37:29 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 26 15:37:29 compute-0 multipathd[222242]: --------start up--------
Jan 26 15:37:29 compute-0 multipathd[222242]: read /etc/multipath.conf
Jan 26 15:37:29 compute-0 multipathd[222242]: path checkers start up
Jan 26 15:37:29 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 26 15:37:29 compute-0 serene_wu[222227]: {
Jan 26 15:37:29 compute-0 serene_wu[222227]:     "0": [
Jan 26 15:37:29 compute-0 serene_wu[222227]:         {
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "devices": [
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "/dev/loop3"
Jan 26 15:37:29 compute-0 serene_wu[222227]:             ],
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_name": "ceph_lv0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_size": "21470642176",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "name": "ceph_lv0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "tags": {
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.cluster_name": "ceph",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.crush_device_class": "",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.encrypted": "0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.objectstore": "bluestore",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.osd_id": "0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.type": "block",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.vdo": "0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.with_tpm": "0"
Jan 26 15:37:29 compute-0 serene_wu[222227]:             },
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "type": "block",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "vg_name": "ceph_vg0"
Jan 26 15:37:29 compute-0 serene_wu[222227]:         }
Jan 26 15:37:29 compute-0 serene_wu[222227]:     ],
Jan 26 15:37:29 compute-0 serene_wu[222227]:     "1": [
Jan 26 15:37:29 compute-0 serene_wu[222227]:         {
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "devices": [
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "/dev/loop4"
Jan 26 15:37:29 compute-0 serene_wu[222227]:             ],
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_name": "ceph_lv1",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_size": "21470642176",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "name": "ceph_lv1",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "tags": {
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.cluster_name": "ceph",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.crush_device_class": "",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.encrypted": "0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.objectstore": "bluestore",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.osd_id": "1",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.type": "block",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.vdo": "0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.with_tpm": "0"
Jan 26 15:37:29 compute-0 serene_wu[222227]:             },
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "type": "block",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "vg_name": "ceph_vg1"
Jan 26 15:37:29 compute-0 serene_wu[222227]:         }
Jan 26 15:37:29 compute-0 serene_wu[222227]:     ],
Jan 26 15:37:29 compute-0 serene_wu[222227]:     "2": [
Jan 26 15:37:29 compute-0 serene_wu[222227]:         {
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "devices": [
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "/dev/loop5"
Jan 26 15:37:29 compute-0 serene_wu[222227]:             ],
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_name": "ceph_lv2",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_size": "21470642176",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "name": "ceph_lv2",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "tags": {
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.cluster_name": "ceph",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.crush_device_class": "",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.encrypted": "0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.objectstore": "bluestore",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.osd_id": "2",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.type": "block",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.vdo": "0",
Jan 26 15:37:29 compute-0 serene_wu[222227]:                 "ceph.with_tpm": "0"
Jan 26 15:37:29 compute-0 serene_wu[222227]:             },
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "type": "block",
Jan 26 15:37:29 compute-0 serene_wu[222227]:             "vg_name": "ceph_vg2"
Jan 26 15:37:29 compute-0 serene_wu[222227]:         }
Jan 26 15:37:29 compute-0 serene_wu[222227]:     ]
Jan 26 15:37:29 compute-0 serene_wu[222227]: }
Jan 26 15:37:29 compute-0 sudo[222203]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:29 compute-0 systemd[1]: libpod-2097671f89fd1020d1251ed9e96c8a16933dc1ce60ae40e9abd1bdedab369b59.scope: Deactivated successfully.
Jan 26 15:37:29 compute-0 podman[222211]: 2026-01-26 15:37:29.976102425 +0000 UTC m=+0.423873018 container died 2097671f89fd1020d1251ed9e96c8a16933dc1ce60ae40e9abd1bdedab369b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 15:37:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6970218f778eec88a0cf290bc1e21ff13d5b93b2c0110ade8ed3d151e88928a-merged.mount: Deactivated successfully.
Jan 26 15:37:30 compute-0 podman[222211]: 2026-01-26 15:37:30.027116457 +0000 UTC m=+0.474887040 container remove 2097671f89fd1020d1251ed9e96c8a16933dc1ce60ae40e9abd1bdedab369b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 15:37:30 compute-0 systemd[1]: libpod-conmon-2097671f89fd1020d1251ed9e96c8a16933dc1ce60ae40e9abd1bdedab369b59.scope: Deactivated successfully.
Jan 26 15:37:30 compute-0 sudo[221980]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:30 compute-0 sudo[222289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:37:30 compute-0 sudo[222289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:37:30 compute-0 sudo[222289]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:30 compute-0 sudo[222314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:37:30 compute-0 sudo[222314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:37:30 compute-0 ceph-mon[75140]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:37:30 compute-0 podman[222403]: 2026-01-26 15:37:30.491366474 +0000 UTC m=+0.043621162 container create 4f76d28955ad63e32db4f8873b51b833b6ca786aaa8e76d2fcb74fb0107fc91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lumiere, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 15:37:30 compute-0 systemd[1]: Started libpod-conmon-4f76d28955ad63e32db4f8873b51b833b6ca786aaa8e76d2fcb74fb0107fc91f.scope.
Jan 26 15:37:30 compute-0 podman[222403]: 2026-01-26 15:37:30.473403193 +0000 UTC m=+0.025657911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:37:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:37:30 compute-0 podman[222403]: 2026-01-26 15:37:30.589213266 +0000 UTC m=+0.141467984 container init 4f76d28955ad63e32db4f8873b51b833b6ca786aaa8e76d2fcb74fb0107fc91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 15:37:30 compute-0 podman[222403]: 2026-01-26 15:37:30.600605306 +0000 UTC m=+0.152860004 container start 4f76d28955ad63e32db4f8873b51b833b6ca786aaa8e76d2fcb74fb0107fc91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lumiere, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 26 15:37:30 compute-0 podman[222403]: 2026-01-26 15:37:30.605078946 +0000 UTC m=+0.157333684 container attach 4f76d28955ad63e32db4f8873b51b833b6ca786aaa8e76d2fcb74fb0107fc91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 15:37:30 compute-0 vigorous_lumiere[222447]: 167 167
Jan 26 15:37:30 compute-0 systemd[1]: libpod-4f76d28955ad63e32db4f8873b51b833b6ca786aaa8e76d2fcb74fb0107fc91f.scope: Deactivated successfully.
Jan 26 15:37:30 compute-0 podman[222403]: 2026-01-26 15:37:30.606714676 +0000 UTC m=+0.158969374 container died 4f76d28955ad63e32db4f8873b51b833b6ca786aaa8e76d2fcb74fb0107fc91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 15:37:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-952253859ccd427a264b49f906cf368bc2cd819ebf3305145e5627067406a013-merged.mount: Deactivated successfully.
Jan 26 15:37:30 compute-0 podman[222403]: 2026-01-26 15:37:30.642850653 +0000 UTC m=+0.195105351 container remove 4f76d28955ad63e32db4f8873b51b833b6ca786aaa8e76d2fcb74fb0107fc91f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 15:37:30 compute-0 sudo[222509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szaoqojundemxebrssiypniwquwocgqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441850.3677948-343-192163239744703/AnsiballZ_file.py'
Jan 26 15:37:30 compute-0 systemd[1]: libpod-conmon-4f76d28955ad63e32db4f8873b51b833b6ca786aaa8e76d2fcb74fb0107fc91f.scope: Deactivated successfully.
Jan 26 15:37:30 compute-0 sudo[222509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:30 compute-0 podman[222519]: 2026-01-26 15:37:30.820062213 +0000 UTC m=+0.050485110 container create 0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chebyshev, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 15:37:30 compute-0 systemd[1]: Started libpod-conmon-0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71.scope.
Jan 26 15:37:30 compute-0 python3.9[222513]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 26 15:37:30 compute-0 podman[222519]: 2026-01-26 15:37:30.794094716 +0000 UTC m=+0.024517663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:37:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6ecab249c3ed6ae6fe189618f8af56bc4dca8d6e4d3305f0265f81d96c04b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6ecab249c3ed6ae6fe189618f8af56bc4dca8d6e4d3305f0265f81d96c04b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6ecab249c3ed6ae6fe189618f8af56bc4dca8d6e4d3305f0265f81d96c04b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e6ecab249c3ed6ae6fe189618f8af56bc4dca8d6e4d3305f0265f81d96c04b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:37:30 compute-0 sudo[222509]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:30 compute-0 podman[222519]: 2026-01-26 15:37:30.915771363 +0000 UTC m=+0.146194300 container init 0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chebyshev, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:37:30 compute-0 podman[222519]: 2026-01-26 15:37:30.922480968 +0000 UTC m=+0.152903905 container start 0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chebyshev, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 15:37:30 compute-0 podman[222519]: 2026-01-26 15:37:30.927313557 +0000 UTC m=+0.157736504 container attach 0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chebyshev, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 15:37:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:31 compute-0 sudo[222715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhddqnbgrdbwtnjxmwyhrpaeoxlqhkdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441851.0797136-351-231279762811032/AnsiballZ_modprobe.py'
Jan 26 15:37:31 compute-0 sudo[222715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:31 compute-0 python3.9[222723]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 26 15:37:31 compute-0 kernel: Key type psk registered
Jan 26 15:37:31 compute-0 lvm[222773]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:37:31 compute-0 lvm[222773]: VG ceph_vg0 finished
Jan 26 15:37:31 compute-0 lvm[222779]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:37:31 compute-0 lvm[222779]: VG ceph_vg2 finished
Jan 26 15:37:31 compute-0 lvm[222778]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:37:31 compute-0 lvm[222778]: VG ceph_vg1 finished
Jan 26 15:37:31 compute-0 sudo[222715]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:31 compute-0 lvm[222786]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:37:31 compute-0 lvm[222786]: VG ceph_vg0 finished
Jan 26 15:37:31 compute-0 peaceful_chebyshev[222536]: {}
Jan 26 15:37:31 compute-0 systemd[1]: libpod-0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71.scope: Deactivated successfully.
Jan 26 15:37:31 compute-0 podman[222519]: 2026-01-26 15:37:31.66964399 +0000 UTC m=+0.900066887 container died 0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:37:31 compute-0 systemd[1]: libpod-0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71.scope: Consumed 1.221s CPU time.
Jan 26 15:37:31 compute-0 podman[222789]: 2026-01-26 15:37:31.682322352 +0000 UTC m=+0.055797001 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Jan 26 15:37:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e6ecab249c3ed6ae6fe189618f8af56bc4dca8d6e4d3305f0265f81d96c04b9-merged.mount: Deactivated successfully.
Jan 26 15:37:31 compute-0 podman[222519]: 2026-01-26 15:37:31.710725299 +0000 UTC m=+0.941148196 container remove 0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_chebyshev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 15:37:31 compute-0 systemd[1]: libpod-conmon-0e20f4338900cf3295072f07c244111fdd01e461cdab47782fd240cdcfdf6e71.scope: Deactivated successfully.
Jan 26 15:37:31 compute-0 sudo[222314]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:37:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:37:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:37:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:37:31 compute-0 sudo[222877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:37:31 compute-0 sudo[222877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:37:31 compute-0 sudo[222877]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:31 compute-0 sudo[222987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnhuninyypdfqdohleemlpvgxcwaeoxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441851.742205-359-264528148643366/AnsiballZ_stat.py'
Jan 26 15:37:32 compute-0 sudo[222987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:32 compute-0 python3.9[222989]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:37:32 compute-0 sudo[222987]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:32 compute-0 ceph-mon[75140]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:37:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:37:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:32 compute-0 sudo[223110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecbzybzucqfytmalvnqyyksvnhlgceef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441851.742205-359-264528148643366/AnsiballZ_copy.py'
Jan 26 15:37:32 compute-0 sudo[223110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:32 compute-0 python3.9[223112]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1769441851.742205-359-264528148643366/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:32 compute-0 sudo[223110]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:33 compute-0 sudo[223262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alrsnslrjlwgojbvlnyplkdkyejbgnxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441853.0975087-375-268686060853296/AnsiballZ_lineinfile.py'
Jan 26 15:37:33 compute-0 sudo[223262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:33 compute-0 python3.9[223264]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:33 compute-0 sudo[223262]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:34 compute-0 sudo[223414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yutyeofkdesjplxjoverwalovjgfykjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441853.7846565-383-3108879663053/AnsiballZ_systemd.py'
Jan 26 15:37:34 compute-0 sudo[223414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:34 compute-0 ceph-mon[75140]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:34 compute-0 python3.9[223416]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:37:34 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 26 15:37:34 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 26 15:37:34 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 26 15:37:34 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 26 15:37:34 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 26 15:37:34 compute-0 sudo[223414]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:34 compute-0 sudo[223570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cumcfsulovatsobsdpucbwdwvwahnmmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441854.6921272-391-225084317806204/AnsiballZ_dnf.py'
Jan 26 15:37:34 compute-0 sudo[223570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:35 compute-0 python3.9[223572]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 26 15:37:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:36 compute-0 ceph-mon[75140]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:38 compute-0 systemd[1]: Reloading.
Jan 26 15:37:38 compute-0 systemd-rc-local-generator[223600]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:37:38 compute-0 systemd-sysv-generator[223603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:37:38 compute-0 systemd[1]: Reloading.
Jan 26 15:37:38 compute-0 ceph-mon[75140]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:38 compute-0 systemd-sysv-generator[223641]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:37:38 compute-0 systemd-rc-local-generator[223637]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:37:38 compute-0 systemd-logind[790]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 26 15:37:38 compute-0 systemd-logind[790]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 26 15:37:38 compute-0 lvm[223686]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:37:38 compute-0 lvm[223683]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:37:38 compute-0 lvm[223686]: VG ceph_vg2 finished
Jan 26 15:37:38 compute-0 lvm[223683]: VG ceph_vg1 finished
Jan 26 15:37:38 compute-0 lvm[223684]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:37:38 compute-0 lvm[223684]: VG ceph_vg0 finished
Jan 26 15:37:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 26 15:37:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 26 15:37:39 compute-0 systemd[1]: Reloading.
Jan 26 15:37:39 compute-0 systemd-rc-local-generator[223738]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:37:39 compute-0 systemd-sysv-generator[223743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:37:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 26 15:37:39 compute-0 sudo[223570]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:40 compute-0 sudo[225037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tllgzwjmasasybmjgkzbiojwangzhdzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441860.041842-399-83205511736474/AnsiballZ_systemd_service.py'
Jan 26 15:37:40 compute-0 sudo[225037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:40 compute-0 ceph-mon[75140]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:40 compute-0 python3.9[225039]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:37:40 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 26 15:37:40 compute-0 iscsid[217914]: iscsid shutting down.
Jan 26 15:37:40 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 26 15:37:40 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 26 15:37:40 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 26 15:37:40 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 26 15:37:40 compute-0 systemd[1]: Started Open-iSCSI.
Jan 26 15:37:40 compute-0 sudo[225037]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 26 15:37:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 26 15:37:40 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.576s CPU time.
Jan 26 15:37:40 compute-0 systemd[1]: run-r21e8707ce8eb4ec4bae7cb25405dc409.service: Deactivated successfully.
Jan 26 15:37:41 compute-0 sudo[225194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsbvbmwpjjheaobstzxnsesbhmscszsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441860.8285143-407-140982883637679/AnsiballZ_systemd_service.py'
Jan 26 15:37:41 compute-0 sudo[225194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:41 compute-0 python3.9[225196]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:37:41 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 26 15:37:41 compute-0 multipathd[222242]: exit (signal)
Jan 26 15:37:41 compute-0 multipathd[222242]: --------shut down-------
Jan 26 15:37:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:42 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 26 15:37:42 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 26 15:37:42 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 26 15:37:42 compute-0 ceph-mon[75140]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:42 compute-0 multipathd[225202]: --------start up--------
Jan 26 15:37:42 compute-0 multipathd[225202]: read /etc/multipath.conf
Jan 26 15:37:42 compute-0 multipathd[225202]: path checkers start up
Jan 26 15:37:42 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 26 15:37:42 compute-0 sudo[225194]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:43 compute-0 python3.9[225359]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 26 15:37:44 compute-0 sudo[225513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyatvrizvvvzoqsqhgxenqtlgfnunjcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441863.900671-425-98505969485159/AnsiballZ_file.py'
Jan 26 15:37:44 compute-0 sudo[225513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:44 compute-0 python3.9[225515]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:44 compute-0 sudo[225513]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:44 compute-0 ceph-mon[75140]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:44 compute-0 sudo[225665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofqlobdpzkzekahqpjjhhylyyswadaap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441864.6822517-436-194314729087464/AnsiballZ_systemd_service.py'
Jan 26 15:37:44 compute-0 sudo[225665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:45 compute-0 python3.9[225667]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 15:37:45 compute-0 systemd[1]: Reloading.
Jan 26 15:37:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:45 compute-0 systemd-rc-local-generator[225695]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:37:45 compute-0 systemd-sysv-generator[225699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:37:45 compute-0 sudo[225665]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:45 compute-0 ceph-mon[75140]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:46 compute-0 python3.9[225852]: ansible-ansible.builtin.service_facts Invoked
Jan 26 15:37:46 compute-0 network[225869]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 26 15:37:46 compute-0 network[225870]: 'network-scripts' will be removed from distribution in near future.
Jan 26 15:37:46 compute-0 network[225871]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 26 15:37:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.280015) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441868280043, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1558, "num_deletes": 253, "total_data_size": 2589879, "memory_usage": 2624920, "flush_reason": "Manual Compaction"}
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441868292158, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1480570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11937, "largest_seqno": 13494, "table_properties": {"data_size": 1475299, "index_size": 2539, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13281, "raw_average_key_size": 20, "raw_value_size": 1463730, "raw_average_value_size": 2214, "num_data_blocks": 117, "num_entries": 661, "num_filter_entries": 661, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769441697, "oldest_key_time": 1769441697, "file_creation_time": 1769441868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 12390 microseconds, and 4140 cpu microseconds.
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.292401) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1480570 bytes OK
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.292420) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.293925) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.293937) EVENT_LOG_v1 {"time_micros": 1769441868293933, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.293984) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2583112, prev total WAL file size 2583112, number of live WAL files 2.
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.294639) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353034' seq:0, type:0; will stop at (end)
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1445KB)], [29(8374KB)]
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441868294670, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 10055839, "oldest_snapshot_seqno": -1}
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4027 keys, 7739943 bytes, temperature: kUnknown
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441868354485, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7739943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7710880, "index_size": 17881, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 96189, "raw_average_key_size": 23, "raw_value_size": 7636156, "raw_average_value_size": 1896, "num_data_blocks": 776, "num_entries": 4027, "num_filter_entries": 4027, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769441868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.354748) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7739943 bytes
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.356492) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.8 rd, 129.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 8.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(12.0) write-amplify(5.2) OK, records in: 4465, records dropped: 438 output_compression: NoCompression
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.356514) EVENT_LOG_v1 {"time_micros": 1769441868356503, "job": 12, "event": "compaction_finished", "compaction_time_micros": 59912, "compaction_time_cpu_micros": 16653, "output_level": 6, "num_output_files": 1, "total_output_size": 7739943, "num_input_records": 4465, "num_output_records": 4027, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441868356904, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769441868358478, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.294555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.358551) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.358555) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.358557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.358559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:37:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:37:48.358561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:37:48 compute-0 ceph-mon[75140]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:37:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:37:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:49 compute-0 sshd-session[225947]: Connection closed by authenticating user root 178.128.250.55 port 35756 [preauth]
Jan 26 15:37:50 compute-0 ceph-mon[75140]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:37:51 compute-0 sudo[226144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulmqgkbszmuxachcburrgzvwnjnxtbys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441870.7426934-455-59934406243173/AnsiballZ_systemd_service.py'
Jan 26 15:37:51 compute-0 sudo[226144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 15:37:51 compute-0 python3.9[226146]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:52 compute-0 sudo[226144]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:52 compute-0 ceph-mon[75140]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 15:37:52 compute-0 sudo[226297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgfxyfjfnubsdsbzvxtxdtcczunpsagj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441872.5149481-455-9103798081891/AnsiballZ_systemd_service.py'
Jan 26 15:37:52 compute-0 sudo[226297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:53 compute-0 python3.9[226299]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:53 compute-0 sudo[226297]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:37:53 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 26 15:37:53 compute-0 sudo[226451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mntsrinvyaabfroklcqiqiqkqutgoite ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441873.2594583-455-273785582196581/AnsiballZ_systemd_service.py'
Jan 26 15:37:53 compute-0 sudo[226451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:53 compute-0 python3.9[226453]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:53 compute-0 sudo[226451]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:54 compute-0 sudo[226604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gciihusnohfyofalqafioxkapjatakht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441874.0671945-455-47433705705342/AnsiballZ_systemd_service.py'
Jan 26 15:37:54 compute-0 sudo[226604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:54 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 26 15:37:54 compute-0 python3.9[226606]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:54 compute-0 sudo[226604]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:54 compute-0 ceph-mon[75140]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:37:55 compute-0 sudo[226758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhnsetuqctaosvbgdyfbhufmuklcgseu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441874.8011563-455-166305708777750/AnsiballZ_systemd_service.py'
Jan 26 15:37:55 compute-0 sudo[226758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:37:55 compute-0 python3.9[226760]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:55 compute-0 sudo[226758]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:55 compute-0 ceph-mon[75140]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:37:55 compute-0 sudo[226911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfhofwykbkitjpnifteharlsjygzmhho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441875.5185475-455-173506995011919/AnsiballZ_systemd_service.py'
Jan 26 15:37:55 compute-0 sudo[226911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:56 compute-0 python3.9[226913]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:56 compute-0 sudo[226911]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:56 compute-0 sudo[227064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrjrfqxtunutaaypzvhumecjpmvaqxef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441876.2516427-455-190990058043841/AnsiballZ_systemd_service.py'
Jan 26 15:37:56 compute-0 sudo[227064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:56 compute-0 python3.9[227066]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:56 compute-0 sudo[227064]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:37:57 compute-0 sudo[227217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkjdnoenfbihcrrtxfukikuhegobnvyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441877.0097969-455-249455587287557/AnsiballZ_systemd_service.py'
Jan 26 15:37:57 compute-0 sudo[227217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:57 compute-0 python3.9[227219]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:37:57 compute-0 sudo[227217]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:58 compute-0 sudo[227370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbjwygutqdvgajjrvwjakvbxtlvggeou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441877.8991678-514-216530100069181/AnsiballZ_file.py'
Jan 26 15:37:58 compute-0 sudo[227370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:37:58 compute-0 python3.9[227372]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:58 compute-0 sudo[227370]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:58 compute-0 podman[227373]: 2026-01-26 15:37:58.401265099 +0000 UTC m=+0.089028196 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:37:58 compute-0 ceph-mon[75140]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:37:58 compute-0 sudo[227548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkmqibjitimpegxywvchdaovdmqremoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441878.5025964-514-43590535130910/AnsiballZ_file.py'
Jan 26 15:37:58 compute-0 sudo[227548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:58 compute-0 python3.9[227550]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:58 compute-0 sudo[227548]: pam_unix(sudo:session): session closed for user root
Jan 26 15:37:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:37:59.199 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:37:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:37:59.199 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:37:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:37:59.199 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:37:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:37:59 compute-0 sudo[227700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpmbxuxpqbxwgktyguvxigfxirdlrjml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441879.1870801-514-197380886939753/AnsiballZ_file.py'
Jan 26 15:37:59 compute-0 sudo[227700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:37:59 compute-0 python3.9[227702]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:37:59 compute-0 sudo[227700]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:00 compute-0 sudo[227852]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aneycfqyrdqivwemryfooveuyfgsqspp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441879.8469129-514-199553931571937/AnsiballZ_file.py'
Jan 26 15:38:00 compute-0 sudo[227852]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:00 compute-0 python3.9[227854]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:00 compute-0 sudo[227852]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:38:00 compute-0 sudo[228004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eurrbfzzjchcjwnglkyinjilopaeiatu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441880.4366598-514-20801740841024/AnsiballZ_file.py'
Jan 26 15:38:00 compute-0 sudo[228004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:00 compute-0 python3.9[228006]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:00 compute-0 sudo[228004]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:38:01 compute-0 sudo[228156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcqdbkowudultukapomvzrvdijwcicex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441881.0106692-514-221511565322122/AnsiballZ_file.py'
Jan 26 15:38:01 compute-0 sudo[228156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:01 compute-0 python3.9[228158]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:01 compute-0 sudo[228156]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:01 compute-0 podman[228282]: 2026-01-26 15:38:01.931939447 +0000 UTC m=+0.045762604 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Jan 26 15:38:01 compute-0 sudo[228325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrarzyvpvvdmqajpkwjkefcyxkxymrom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441881.6535425-514-198638397293519/AnsiballZ_file.py'
Jan 26 15:38:01 compute-0 sudo[228325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:02 compute-0 python3.9[228327]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:02 compute-0 ceph-mon[75140]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:38:02 compute-0 sudo[228325]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:02 compute-0 sudo[228477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emcizwfwlxotraulhjiuboltmledwtzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441882.266135-514-31835923648629/AnsiballZ_file.py'
Jan 26 15:38:02 compute-0 sudo[228477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:02 compute-0 python3.9[228479]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:02 compute-0 sudo[228477]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:03 compute-0 sudo[228629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-facqldqbdjhrpzztxjgxljvfqsznyntw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441882.8619664-571-202640545231337/AnsiballZ_file.py'
Jan 26 15:38:03 compute-0 sudo[228629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 15:38:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:03 compute-0 ceph-mon[75140]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 15:38:03 compute-0 python3.9[228631]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:03 compute-0 sudo[228629]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:03 compute-0 sudo[228781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptlmuqneyzoigylerupkdakhgtqarftr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441883.4731045-571-202315664249267/AnsiballZ_file.py'
Jan 26 15:38:03 compute-0 sudo[228781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:03 compute-0 python3.9[228783]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:03 compute-0 sudo[228781]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:04 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 26 15:38:04 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 26 15:38:04 compute-0 ceph-mon[75140]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 15:38:04 compute-0 sudo[228935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uityiltisreiwpfvvekrxtaynxrjjoxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441884.1008387-571-6385257717113/AnsiballZ_file.py'
Jan 26 15:38:04 compute-0 sudo[228935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:04 compute-0 python3.9[228937]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:04 compute-0 sudo[228935]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:05 compute-0 sudo[229087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqluqtspitgohjnetetplsfyvpnokuej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441884.7829971-571-29187721209744/AnsiballZ_file.py'
Jan 26 15:38:05 compute-0 sudo[229087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:05 compute-0 python3.9[229089]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:05 compute-0 sudo[229087]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:05 compute-0 sudo[229239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqqncfxflmmavcaykttazdyaavpefbqe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441885.3843474-571-39878919131158/AnsiballZ_file.py'
Jan 26 15:38:05 compute-0 sudo[229239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:05 compute-0 python3.9[229241]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:05 compute-0 sudo[229239]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:06 compute-0 sudo[229391]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfvdmrtnfcaieshbreotbcrkkrtgfhgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441886.0119185-571-11550015885557/AnsiballZ_file.py'
Jan 26 15:38:06 compute-0 sudo[229391]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:06 compute-0 python3.9[229393]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:06 compute-0 sudo[229391]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:07 compute-0 sudo[229545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdrenvxgqkokkpsvexaiffictwcmasoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441886.7243612-571-181983583309792/AnsiballZ_file.py'
Jan 26 15:38:07 compute-0 sudo[229545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:07 compute-0 python3.9[229547]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:07 compute-0 sudo[229545]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:07 compute-0 ceph-mon[75140]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:07 compute-0 sudo[229697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epqrmckfvtrwxvufippobbhvlomnkcfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441887.4271944-571-234223106771835/AnsiballZ_file.py'
Jan 26 15:38:07 compute-0 sudo[229697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:07 compute-0 python3.9[229699]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:07 compute-0 sudo[229697]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:08 compute-0 sshd-session[229394]: Received disconnect from 103.42.57.158 port 57300:11:  [preauth]
Jan 26 15:38:08 compute-0 sshd-session[229394]: Disconnected from authenticating user root 103.42.57.158 port 57300 [preauth]
Jan 26 15:38:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:08 compute-0 ceph-mon[75140]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:08 compute-0 sudo[229849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qliretibxjvhqvhwncutpgjhkxmpfdyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441888.1269126-629-96819459601884/AnsiballZ_command.py'
Jan 26 15:38:08 compute-0 sudo[229849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:08 compute-0 python3.9[229851]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:38:08 compute-0 sudo[229849]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:09 compute-0 python3.9[230003]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 26 15:38:09 compute-0 sudo[230153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzfkhnaoofaigdpexmjegxpmqlboarsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441889.6604314-647-283655593063/AnsiballZ_systemd_service.py'
Jan 26 15:38:09 compute-0 sudo[230153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:10 compute-0 python3.9[230155]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 15:38:10 compute-0 systemd[1]: Reloading.
Jan 26 15:38:10 compute-0 systemd-rc-local-generator[230182]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:38:10 compute-0 systemd-sysv-generator[230186]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:38:10 compute-0 ceph-mon[75140]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:10 compute-0 sudo[230153]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:11 compute-0 sudo[230341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgyybxhsjpdlrgcfhzrnysucbixwfots ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441890.7240565-655-248556480963736/AnsiballZ_command.py'
Jan 26 15:38:11 compute-0 sudo[230341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:11 compute-0 python3.9[230343]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:38:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:12 compute-0 sudo[230341]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:12 compute-0 ceph-mon[75140]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:12 compute-0 sudo[230494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xriwtzkkibbfhzglxyvggidypilhriqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441892.4216316-655-216827688652242/AnsiballZ_command.py'
Jan 26 15:38:12 compute-0 sudo[230494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:12 compute-0 python3.9[230496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:38:12 compute-0 sudo[230494]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:13 compute-0 sudo[230647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfkcanhimwkhhesxgsenwdxriotlzsvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441893.07712-655-184913324434244/AnsiballZ_command.py'
Jan 26 15:38:13 compute-0 sudo[230647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:13 compute-0 python3.9[230649]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:38:14 compute-0 ceph-mon[75140]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:14 compute-0 sudo[230647]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:14 compute-0 sudo[230800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkdkyqpualaaeaoyovdzzdijdxstykit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441894.6920023-655-203000538318747/AnsiballZ_command.py'
Jan 26 15:38:14 compute-0 sudo[230800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:15 compute-0 python3.9[230802]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:38:15 compute-0 sudo[230800]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:15 compute-0 sudo[230953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duftweyuvbulouzcxsmlgpyqabelbrig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441895.3258758-655-237181676506061/AnsiballZ_command.py'
Jan 26 15:38:15 compute-0 sudo[230953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:15 compute-0 python3.9[230955]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:38:15 compute-0 sudo[230953]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:16 compute-0 sudo[231106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tokwcqqxwdkpuswwwqnojlpwbumkbmaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441895.994142-655-177218815633851/AnsiballZ_command.py'
Jan 26 15:38:16 compute-0 sudo[231106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:16 compute-0 ceph-mon[75140]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:16 compute-0 python3.9[231108]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:38:16 compute-0 sudo[231106]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:16 compute-0 sudo[231259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iixeaikmgbkszkpyvzwhqpygcaibggjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441896.6440592-655-169162123598575/AnsiballZ_command.py'
Jan 26 15:38:16 compute-0 sudo[231259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:17 compute-0 python3.9[231261]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:38:17 compute-0 sudo[231259]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:17 compute-0 sudo[231412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spxpaxztuapphqxgvugdiwlwdxqjqtyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441897.299854-655-234541147839877/AnsiballZ_command.py'
Jan 26 15:38:17 compute-0 sudo[231412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:17 compute-0 python3.9[231414]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 26 15:38:17 compute-0 sudo[231412]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:18 compute-0 ceph-mon[75140]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:18 compute-0 sudo[231565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjstpilmalovhifzvtroygleuhhczidn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441898.6211305-734-84988322509983/AnsiballZ_file.py'
Jan 26 15:38:18 compute-0 sudo[231565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:19 compute-0 python3.9[231567]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:19 compute-0 sudo[231565]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:19 compute-0 sudo[231717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egjyzetoldmmoympitpazmbpluyuvwug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441899.2892466-734-238330736003134/AnsiballZ_file.py'
Jan 26 15:38:19 compute-0 sudo[231717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:19 compute-0 python3.9[231719]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:19 compute-0 sudo[231717]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:20 compute-0 sudo[231869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfjgyfvvgymryiujngqdeavbybyblkws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441899.9321072-734-1107808667156/AnsiballZ_file.py'
Jan 26 15:38:20 compute-0 sudo[231869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:20 compute-0 python3.9[231871]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:20 compute-0 sudo[231869]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:20 compute-0 ceph-mon[75140]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:20 compute-0 sudo[232021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnwqmdwstdlqtdcodauxnzcwgmoyequy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441900.5805066-756-5188418935129/AnsiballZ_file.py'
Jan 26 15:38:20 compute-0 sudo[232021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:21 compute-0 python3.9[232023]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:21 compute-0 sudo[232021]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:21 compute-0 sudo[232173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gztysajyqomzlmqelxsxhamoxwarczzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441901.188021-756-90145913908989/AnsiballZ_file.py'
Jan 26 15:38:21 compute-0 sudo[232173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:21 compute-0 python3.9[232175]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:21 compute-0 sudo[232173]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:22 compute-0 sudo[232325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xttjymwqwtxhfhigywfwmdzgirfdxabq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441901.8508594-756-244234200104046/AnsiballZ_file.py'
Jan 26 15:38:22 compute-0 sudo[232325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:22 compute-0 python3.9[232327]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:22 compute-0 sudo[232325]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:22 compute-0 ceph-mon[75140]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:22 compute-0 sudo[232477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghsedcspnbvgirjumrmdfznpejaydxvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441902.5088367-756-101062917208546/AnsiballZ_file.py'
Jan 26 15:38:22 compute-0 sudo[232477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:22 compute-0 python3.9[232479]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:23 compute-0 sudo[232477]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:23 compute-0 sudo[232629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbjonvgbvbkjxzkzbuekoaiekbucwvxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441903.1728535-756-146371005927753/AnsiballZ_file.py'
Jan 26 15:38:23 compute-0 sudo[232629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:23 compute-0 python3.9[232631]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:23 compute-0 sudo[232629]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:23 compute-0 sudo[232781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eogghvxyqnysliqwswkoqtbpsbococai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441903.7363157-756-279542597875223/AnsiballZ_file.py'
Jan 26 15:38:23 compute-0 sudo[232781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:24 compute-0 python3.9[232783]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:24 compute-0 sudo[232781]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:24 compute-0 ceph-mon[75140]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:24 compute-0 sudo[232933]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-engaoltvnambfdkuckwmpuvhbbohqeqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441904.296156-756-270144964313274/AnsiballZ_file.py'
Jan 26 15:38:24 compute-0 sudo[232933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:24 compute-0 python3.9[232935]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:24 compute-0 sudo[232933]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:26 compute-0 ceph-mon[75140]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:28 compute-0 ceph-mon[75140]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:38:28
Jan 26 15:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'images', 'backups', 'default.rgw.control', 'vms']
Jan 26 15:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:38:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:29 compute-0 podman[233012]: 2026-01-26 15:38:29.403044799 +0000 UTC m=+0.083602533 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 15:38:29 compute-0 sudo[233111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzodwqfsaywcmcuesjasevhnedrspdfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441909.197525-945-49399869594918/AnsiballZ_getent.py'
Jan 26 15:38:29 compute-0 sudo[233111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:29 compute-0 python3.9[233113]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 26 15:38:29 compute-0 sudo[233111]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:38:30 compute-0 sudo[233264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciwxrqjipfxofumnsvsjdifwzsktkkyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441909.9780836-953-26908361006726/AnsiballZ_group.py'
Jan 26 15:38:30 compute-0 sudo[233264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:38:30 compute-0 ceph-mon[75140]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:30 compute-0 python3.9[233266]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 26 15:38:30 compute-0 groupadd[233267]: group added to /etc/group: name=nova, GID=42436
Jan 26 15:38:30 compute-0 groupadd[233267]: group added to /etc/gshadow: name=nova
Jan 26 15:38:30 compute-0 groupadd[233267]: new group: name=nova, GID=42436
Jan 26 15:38:30 compute-0 sudo[233264]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:31 compute-0 sudo[233422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnovostkztszmkcqsldymxlxryxcvexd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441910.870079-961-20335658285074/AnsiballZ_user.py'
Jan 26 15:38:31 compute-0 sudo[233422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:31 compute-0 python3.9[233424]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 26 15:38:31 compute-0 useradd[233426]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 26 15:38:31 compute-0 useradd[233426]: add 'nova' to group 'libvirt'
Jan 26 15:38:31 compute-0 useradd[233426]: add 'nova' to shadow group 'libvirt'
Jan 26 15:38:31 compute-0 sudo[233422]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:31 compute-0 sudo[233457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:38:31 compute-0 sudo[233457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:38:31 compute-0 sudo[233457]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:31 compute-0 sudo[233482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:38:31 compute-0 sudo[233482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:38:32 compute-0 podman[233506]: 2026-01-26 15:38:32.029453597 +0000 UTC m=+0.056090307 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:38:32 compute-0 ceph-mon[75140]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:32 compute-0 sudo[233482]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:32 compute-0 sshd-session[233544]: Accepted publickey for zuul from 192.168.122.30 port 43536 ssh2: ECDSA SHA256:+bxXQQ9mjgGTATCpcZMPbqE6T7Ypcu1bWdLgmv9dcFM
Jan 26 15:38:32 compute-0 systemd-logind[790]: New session 50 of user zuul.
Jan 26 15:38:32 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 26 15:38:32 compute-0 sshd-session[233544]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 15:38:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:38:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:38:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:38:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:38:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:38:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:38:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:38:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:38:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:38:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:38:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:38:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:38:32 compute-0 sudo[233560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:38:32 compute-0 sudo[233560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:38:32 compute-0 sudo[233560]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:32 compute-0 sshd-session[233559]: Received disconnect from 192.168.122.30 port 43536:11: disconnected by user
Jan 26 15:38:32 compute-0 sshd-session[233559]: Disconnected from user zuul 192.168.122.30 port 43536
Jan 26 15:38:32 compute-0 sshd-session[233544]: pam_unix(sshd:session): session closed for user zuul
Jan 26 15:38:32 compute-0 sudo[233608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:38:32 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 26 15:38:32 compute-0 systemd-logind[790]: Session 50 logged out. Waiting for processes to exit.
Jan 26 15:38:32 compute-0 sudo[233608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:38:32 compute-0 systemd-logind[790]: Removed session 50.
Jan 26 15:38:32 compute-0 podman[233697]: 2026-01-26 15:38:32.91880341 +0000 UTC m=+0.037310107 container create 3071af0da33602ee4d16ccb664d5e36c70e8baf5ba9e641ba2ab498765d44896 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lehmann, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 15:38:32 compute-0 systemd[1]: Started libpod-conmon-3071af0da33602ee4d16ccb664d5e36c70e8baf5ba9e641ba2ab498765d44896.scope.
Jan 26 15:38:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:38:32 compute-0 podman[233697]: 2026-01-26 15:38:32.902473559 +0000 UTC m=+0.020980276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:38:33 compute-0 podman[233697]: 2026-01-26 15:38:33.003144161 +0000 UTC m=+0.121650878 container init 3071af0da33602ee4d16ccb664d5e36c70e8baf5ba9e641ba2ab498765d44896 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:38:33 compute-0 podman[233697]: 2026-01-26 15:38:33.011043005 +0000 UTC m=+0.129549702 container start 3071af0da33602ee4d16ccb664d5e36c70e8baf5ba9e641ba2ab498765d44896 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lehmann, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:38:33 compute-0 podman[233697]: 2026-01-26 15:38:33.01450846 +0000 UTC m=+0.133015177 container attach 3071af0da33602ee4d16ccb664d5e36c70e8baf5ba9e641ba2ab498765d44896 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 15:38:33 compute-0 inspiring_lehmann[233738]: 167 167
Jan 26 15:38:33 compute-0 systemd[1]: libpod-3071af0da33602ee4d16ccb664d5e36c70e8baf5ba9e641ba2ab498765d44896.scope: Deactivated successfully.
Jan 26 15:38:33 compute-0 podman[233697]: 2026-01-26 15:38:33.017508804 +0000 UTC m=+0.136015501 container died 3071af0da33602ee4d16ccb664d5e36c70e8baf5ba9e641ba2ab498765d44896 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 15:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-94e69166e5e60be03ed09b11ad931ca36d06dc40601f078c6950c9de28de50a2-merged.mount: Deactivated successfully.
Jan 26 15:38:33 compute-0 podman[233697]: 2026-01-26 15:38:33.064154619 +0000 UTC m=+0.182661316 container remove 3071af0da33602ee4d16ccb664d5e36c70e8baf5ba9e641ba2ab498765d44896 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 15:38:33 compute-0 systemd[1]: libpod-conmon-3071af0da33602ee4d16ccb664d5e36c70e8baf5ba9e641ba2ab498765d44896.scope: Deactivated successfully.
Jan 26 15:38:33 compute-0 podman[233811]: 2026-01-26 15:38:33.25077432 +0000 UTC m=+0.053035433 container create 81ea7b8178517cf4c28d608f5b593e671214dc3397c377b86ca2a393566b823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_elgamal, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 15:38:33 compute-0 python3.9[233805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:38:33 compute-0 systemd[1]: Started libpod-conmon-81ea7b8178517cf4c28d608f5b593e671214dc3397c377b86ca2a393566b823b.scope.
Jan 26 15:38:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:33 compute-0 podman[233811]: 2026-01-26 15:38:33.230361599 +0000 UTC m=+0.032622722 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:38:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1aa85ffe7c7a629b3ee5f6cbd5dce1f28f428acc580fbce67fc63c914af748/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1aa85ffe7c7a629b3ee5f6cbd5dce1f28f428acc580fbce67fc63c914af748/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1aa85ffe7c7a629b3ee5f6cbd5dce1f28f428acc580fbce67fc63c914af748/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1aa85ffe7c7a629b3ee5f6cbd5dce1f28f428acc580fbce67fc63c914af748/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1aa85ffe7c7a629b3ee5f6cbd5dce1f28f428acc580fbce67fc63c914af748/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:33 compute-0 podman[233811]: 2026-01-26 15:38:33.35340847 +0000 UTC m=+0.155669583 container init 81ea7b8178517cf4c28d608f5b593e671214dc3397c377b86ca2a393566b823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:38:33 compute-0 podman[233811]: 2026-01-26 15:38:33.363356854 +0000 UTC m=+0.165617947 container start 81ea7b8178517cf4c28d608f5b593e671214dc3397c377b86ca2a393566b823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_elgamal, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 15:38:33 compute-0 podman[233811]: 2026-01-26 15:38:33.36851251 +0000 UTC m=+0.170773603 container attach 81ea7b8178517cf4c28d608f5b593e671214dc3397c377b86ca2a393566b823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_elgamal, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:38:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:38:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:38:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:38:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:38:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:38:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:38:33 compute-0 python3.9[233957]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769441912.814048-986-280416560890171/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:33 compute-0 pensive_elgamal[233828]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:38:33 compute-0 pensive_elgamal[233828]: --> All data devices are unavailable
Jan 26 15:38:33 compute-0 systemd[1]: libpod-81ea7b8178517cf4c28d608f5b593e671214dc3397c377b86ca2a393566b823b.scope: Deactivated successfully.
Jan 26 15:38:33 compute-0 podman[233811]: 2026-01-26 15:38:33.878636885 +0000 UTC m=+0.680897968 container died 81ea7b8178517cf4c28d608f5b593e671214dc3397c377b86ca2a393566b823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_elgamal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d1aa85ffe7c7a629b3ee5f6cbd5dce1f28f428acc580fbce67fc63c914af748-merged.mount: Deactivated successfully.
Jan 26 15:38:33 compute-0 podman[233811]: 2026-01-26 15:38:33.923773342 +0000 UTC m=+0.726034435 container remove 81ea7b8178517cf4c28d608f5b593e671214dc3397c377b86ca2a393566b823b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_elgamal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:38:33 compute-0 systemd[1]: libpod-conmon-81ea7b8178517cf4c28d608f5b593e671214dc3397c377b86ca2a393566b823b.scope: Deactivated successfully.
Jan 26 15:38:33 compute-0 sudo[233608]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:34 compute-0 sudo[234017]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:38:34 compute-0 sudo[234017]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:38:34 compute-0 sudo[234017]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:34 compute-0 sudo[234071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:38:34 compute-0 sudo[234071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:38:34 compute-0 podman[234192]: 2026-01-26 15:38:34.357195632 +0000 UTC m=+0.040033363 container create b2361281696c1f01a509d1a6bc711448120d6efb74729d30c98566ed55ca5c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_feistel, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:38:34 compute-0 systemd[1]: Started libpod-conmon-b2361281696c1f01a509d1a6bc711448120d6efb74729d30c98566ed55ca5c71.scope.
Jan 26 15:38:34 compute-0 podman[234192]: 2026-01-26 15:38:34.338653917 +0000 UTC m=+0.021491678 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:38:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:38:34 compute-0 podman[234192]: 2026-01-26 15:38:34.454753517 +0000 UTC m=+0.137591298 container init b2361281696c1f01a509d1a6bc711448120d6efb74729d30c98566ed55ca5c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 15:38:34 compute-0 podman[234192]: 2026-01-26 15:38:34.462840746 +0000 UTC m=+0.145678477 container start b2361281696c1f01a509d1a6bc711448120d6efb74729d30c98566ed55ca5c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:38:34 compute-0 podman[234192]: 2026-01-26 15:38:34.4666444 +0000 UTC m=+0.149482161 container attach b2361281696c1f01a509d1a6bc711448120d6efb74729d30c98566ed55ca5c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 15:38:34 compute-0 infallible_feistel[234209]: 167 167
Jan 26 15:38:34 compute-0 podman[234192]: 2026-01-26 15:38:34.469287355 +0000 UTC m=+0.152125106 container died b2361281696c1f01a509d1a6bc711448120d6efb74729d30c98566ed55ca5c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:38:34 compute-0 systemd[1]: libpod-b2361281696c1f01a509d1a6bc711448120d6efb74729d30c98566ed55ca5c71.scope: Deactivated successfully.
Jan 26 15:38:34 compute-0 python3.9[234182]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:38:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d09ab302a43d060facb47cfe6f454bb6df1e674534f0ce27623f53ba61290d4b-merged.mount: Deactivated successfully.
Jan 26 15:38:34 compute-0 podman[234192]: 2026-01-26 15:38:34.511535932 +0000 UTC m=+0.194373663 container remove b2361281696c1f01a509d1a6bc711448120d6efb74729d30c98566ed55ca5c71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 15:38:34 compute-0 systemd[1]: libpod-conmon-b2361281696c1f01a509d1a6bc711448120d6efb74729d30c98566ed55ca5c71.scope: Deactivated successfully.
Jan 26 15:38:34 compute-0 ceph-mon[75140]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:34 compute-0 podman[234258]: 2026-01-26 15:38:34.671297374 +0000 UTC m=+0.041799587 container create 5f076cfec093c3ed1899c3e70d1d6255a9ef43301660b4f975b9fb6408daa6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_buck, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:38:34 compute-0 systemd[1]: Started libpod-conmon-5f076cfec093c3ed1899c3e70d1d6255a9ef43301660b4f975b9fb6408daa6d0.scope.
Jan 26 15:38:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58abb42a289c3c74b6aa12d4b48eed1424ef51d296c7e0ceb95f3b64ea0d497/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58abb42a289c3c74b6aa12d4b48eed1424ef51d296c7e0ceb95f3b64ea0d497/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58abb42a289c3c74b6aa12d4b48eed1424ef51d296c7e0ceb95f3b64ea0d497/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a58abb42a289c3c74b6aa12d4b48eed1424ef51d296c7e0ceb95f3b64ea0d497/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:34 compute-0 podman[234258]: 2026-01-26 15:38:34.744116302 +0000 UTC m=+0.114618535 container init 5f076cfec093c3ed1899c3e70d1d6255a9ef43301660b4f975b9fb6408daa6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_buck, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:38:34 compute-0 podman[234258]: 2026-01-26 15:38:34.651302613 +0000 UTC m=+0.021804846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:38:34 compute-0 podman[234258]: 2026-01-26 15:38:34.751045582 +0000 UTC m=+0.121547795 container start 5f076cfec093c3ed1899c3e70d1d6255a9ef43301660b4f975b9fb6408daa6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:38:34 compute-0 podman[234258]: 2026-01-26 15:38:34.754386903 +0000 UTC m=+0.124889146 container attach 5f076cfec093c3ed1899c3e70d1d6255a9ef43301660b4f975b9fb6408daa6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 15:38:34 compute-0 python3.9[234326]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:35 compute-0 crazy_buck[234316]: {
Jan 26 15:38:35 compute-0 crazy_buck[234316]:     "0": [
Jan 26 15:38:35 compute-0 crazy_buck[234316]:         {
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "devices": [
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "/dev/loop3"
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             ],
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_name": "ceph_lv0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_size": "21470642176",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "name": "ceph_lv0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "tags": {
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.cluster_name": "ceph",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.crush_device_class": "",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.encrypted": "0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.objectstore": "bluestore",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.osd_id": "0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.type": "block",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.vdo": "0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.with_tpm": "0"
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             },
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "type": "block",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "vg_name": "ceph_vg0"
Jan 26 15:38:35 compute-0 crazy_buck[234316]:         }
Jan 26 15:38:35 compute-0 crazy_buck[234316]:     ],
Jan 26 15:38:35 compute-0 crazy_buck[234316]:     "1": [
Jan 26 15:38:35 compute-0 crazy_buck[234316]:         {
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "devices": [
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "/dev/loop4"
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             ],
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_name": "ceph_lv1",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_size": "21470642176",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "name": "ceph_lv1",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "tags": {
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.cluster_name": "ceph",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.crush_device_class": "",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.encrypted": "0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.objectstore": "bluestore",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.osd_id": "1",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.type": "block",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.vdo": "0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.with_tpm": "0"
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             },
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "type": "block",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "vg_name": "ceph_vg1"
Jan 26 15:38:35 compute-0 crazy_buck[234316]:         }
Jan 26 15:38:35 compute-0 crazy_buck[234316]:     ],
Jan 26 15:38:35 compute-0 crazy_buck[234316]:     "2": [
Jan 26 15:38:35 compute-0 crazy_buck[234316]:         {
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "devices": [
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "/dev/loop5"
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             ],
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_name": "ceph_lv2",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_size": "21470642176",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "name": "ceph_lv2",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "tags": {
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.cluster_name": "ceph",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.crush_device_class": "",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.encrypted": "0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.objectstore": "bluestore",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.osd_id": "2",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.type": "block",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.vdo": "0",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:                 "ceph.with_tpm": "0"
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             },
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "type": "block",
Jan 26 15:38:35 compute-0 crazy_buck[234316]:             "vg_name": "ceph_vg2"
Jan 26 15:38:35 compute-0 crazy_buck[234316]:         }
Jan 26 15:38:35 compute-0 crazy_buck[234316]:     ]
Jan 26 15:38:35 compute-0 crazy_buck[234316]: }
Jan 26 15:38:35 compute-0 systemd[1]: libpod-5f076cfec093c3ed1899c3e70d1d6255a9ef43301660b4f975b9fb6408daa6d0.scope: Deactivated successfully.
Jan 26 15:38:35 compute-0 podman[234258]: 2026-01-26 15:38:35.065610884 +0000 UTC m=+0.436113107 container died 5f076cfec093c3ed1899c3e70d1d6255a9ef43301660b4f975b9fb6408daa6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-a58abb42a289c3c74b6aa12d4b48eed1424ef51d296c7e0ceb95f3b64ea0d497-merged.mount: Deactivated successfully.
Jan 26 15:38:35 compute-0 podman[234258]: 2026-01-26 15:38:35.112882294 +0000 UTC m=+0.483384507 container remove 5f076cfec093c3ed1899c3e70d1d6255a9ef43301660b4f975b9fb6408daa6d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:38:35 compute-0 systemd[1]: libpod-conmon-5f076cfec093c3ed1899c3e70d1d6255a9ef43301660b4f975b9fb6408daa6d0.scope: Deactivated successfully.
Jan 26 15:38:35 compute-0 sudo[234071]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:35 compute-0 sudo[234422]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:38:35 compute-0 sudo[234422]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:38:35 compute-0 sudo[234422]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:35 compute-0 sudo[234470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:38:35 compute-0 sudo[234470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:38:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:35 compute-0 python3.9[234545]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:38:35 compute-0 podman[234557]: 2026-01-26 15:38:35.583464987 +0000 UTC m=+0.044767920 container create c47e71014cedf3454dd6ead17259d861f53e1e3955b570630fe66861d62eabb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 15:38:35 compute-0 systemd[1]: Started libpod-conmon-c47e71014cedf3454dd6ead17259d861f53e1e3955b570630fe66861d62eabb2.scope.
Jan 26 15:38:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:38:35 compute-0 podman[234557]: 2026-01-26 15:38:35.564455951 +0000 UTC m=+0.025758894 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:38:35 compute-0 podman[234557]: 2026-01-26 15:38:35.668545176 +0000 UTC m=+0.129848119 container init c47e71014cedf3454dd6ead17259d861f53e1e3955b570630fe66861d62eabb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jang, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 15:38:35 compute-0 podman[234557]: 2026-01-26 15:38:35.675741002 +0000 UTC m=+0.137043925 container start c47e71014cedf3454dd6ead17259d861f53e1e3955b570630fe66861d62eabb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 15:38:35 compute-0 podman[234557]: 2026-01-26 15:38:35.679209128 +0000 UTC m=+0.140512051 container attach c47e71014cedf3454dd6ead17259d861f53e1e3955b570630fe66861d62eabb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 15:38:35 compute-0 exciting_jang[234596]: 167 167
Jan 26 15:38:35 compute-0 systemd[1]: libpod-c47e71014cedf3454dd6ead17259d861f53e1e3955b570630fe66861d62eabb2.scope: Deactivated successfully.
Jan 26 15:38:35 compute-0 podman[234557]: 2026-01-26 15:38:35.681680868 +0000 UTC m=+0.142983791 container died c47e71014cedf3454dd6ead17259d861f53e1e3955b570630fe66861d62eabb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jang, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 15:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-023ec76c19c818728bad3478d418bd4b30908734b6c2054d41430343150d9302-merged.mount: Deactivated successfully.
Jan 26 15:38:35 compute-0 podman[234557]: 2026-01-26 15:38:35.717299123 +0000 UTC m=+0.178602046 container remove c47e71014cedf3454dd6ead17259d861f53e1e3955b570630fe66861d62eabb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_jang, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:38:35 compute-0 systemd[1]: libpod-conmon-c47e71014cedf3454dd6ead17259d861f53e1e3955b570630fe66861d62eabb2.scope: Deactivated successfully.
Jan 26 15:38:35 compute-0 podman[234707]: 2026-01-26 15:38:35.882783365 +0000 UTC m=+0.046985635 container create c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_solomon, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:38:35 compute-0 systemd[1]: Started libpod-conmon-c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d.scope.
Jan 26 15:38:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:38:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34effae10c99faf96ced8c4349cb10798976067878dca9c362f1bfce790d4aeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34effae10c99faf96ced8c4349cb10798976067878dca9c362f1bfce790d4aeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34effae10c99faf96ced8c4349cb10798976067878dca9c362f1bfce790d4aeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:35 compute-0 podman[234707]: 2026-01-26 15:38:35.859511944 +0000 UTC m=+0.023714284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:38:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34effae10c99faf96ced8c4349cb10798976067878dca9c362f1bfce790d4aeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:38:35 compute-0 podman[234707]: 2026-01-26 15:38:35.966088751 +0000 UTC m=+0.130291051 container init c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_solomon, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:38:35 compute-0 podman[234707]: 2026-01-26 15:38:35.97382068 +0000 UTC m=+0.138022950 container start c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:38:35 compute-0 podman[234707]: 2026-01-26 15:38:35.977848979 +0000 UTC m=+0.142051279 container attach c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_solomon, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:38:36 compute-0 python3.9[234724]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769441915.0763128-986-11236571093488/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:36 compute-0 python3.9[234922]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:38:36 compute-0 ceph-mon[75140]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:36 compute-0 lvm[234980]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:38:36 compute-0 lvm[234980]: VG ceph_vg1 finished
Jan 26 15:38:36 compute-0 lvm[234963]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:38:36 compute-0 lvm[234963]: VG ceph_vg0 finished
Jan 26 15:38:36 compute-0 lvm[234988]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:38:36 compute-0 lvm[234988]: VG ceph_vg2 finished
Jan 26 15:38:36 compute-0 eager_solomon[234734]: {}
Jan 26 15:38:36 compute-0 systemd[1]: libpod-c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d.scope: Deactivated successfully.
Jan 26 15:38:36 compute-0 systemd[1]: libpod-c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d.scope: Consumed 1.303s CPU time.
Jan 26 15:38:36 compute-0 podman[234707]: 2026-01-26 15:38:36.784659956 +0000 UTC m=+0.948862236 container died c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_solomon, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 15:38:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-34effae10c99faf96ced8c4349cb10798976067878dca9c362f1bfce790d4aeb-merged.mount: Deactivated successfully.
Jan 26 15:38:36 compute-0 podman[234707]: 2026-01-26 15:38:36.831404534 +0000 UTC m=+0.995606804 container remove c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 15:38:36 compute-0 systemd[1]: libpod-conmon-c969ed74b251dd73de39a873a967bcccecde04f5f330e5edec1c7500ec52e60d.scope: Deactivated successfully.
Jan 26 15:38:36 compute-0 sudo[234470]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:38:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:38:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:38:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:38:36 compute-0 sudo[235100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:38:36 compute-0 sudo[235100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:38:36 compute-0 sudo[235100]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:37 compute-0 python3.9[235099]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769441916.1492608-986-32884921188108/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:37 compute-0 python3.9[235274]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:38:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:38:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:38:37 compute-0 ceph-mon[75140]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:38 compute-0 python3.9[235395]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769441917.2228596-986-75107389852143/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:38 compute-0 python3.9[235545]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:38:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:39 compute-0 python3.9[235666]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769441918.4612367-986-78929453848368/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:39 compute-0 sudo[235816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-difbyothpifdbcwtrfnedxxlhtxqrgnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441919.6109061-1069-189999398009696/AnsiballZ_file.py'
Jan 26 15:38:39 compute-0 sudo[235816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:40 compute-0 python3.9[235818]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:40 compute-0 sudo[235816]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:40 compute-0 ceph-mon[75140]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:40 compute-0 sudo[235968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eceathdpkqjbtedsybxpwjzyazyrbxnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441920.2602918-1077-232608662201886/AnsiballZ_copy.py'
Jan 26 15:38:40 compute-0 sudo[235968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:40 compute-0 python3.9[235970]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:38:40 compute-0 sudo[235968]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:41 compute-0 sudo[236122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmspzdicxqbzbgyyszgfbfuusdzhhldo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441920.904724-1085-233628986132219/AnsiballZ_stat.py'
Jan 26 15:38:41 compute-0 sudo[236122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:41 compute-0 sshd-session[235971]: Connection closed by authenticating user root 178.128.250.55 port 35136 [preauth]
Jan 26 15:38:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:41 compute-0 python3.9[236124]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:38:41 compute-0 sudo[236122]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:41 compute-0 sudo[236274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcwxgwfywhuxiylytfmrqevlwjmmomva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441921.5564463-1093-135321594847110/AnsiballZ_stat.py'
Jan 26 15:38:41 compute-0 sudo[236274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:42 compute-0 python3.9[236276]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:38:42 compute-0 sudo[236274]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:42 compute-0 sudo[236397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmqvlztbmqqmtyezandqnxubzqostscp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441921.5564463-1093-135321594847110/AnsiballZ_copy.py'
Jan 26 15:38:42 compute-0 sudo[236397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:42 compute-0 ceph-mon[75140]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:42 compute-0 python3.9[236399]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1769441921.5564463-1093-135321594847110/.source _original_basename=.2s9rmini follow=False checksum=9d07073de0726c2240ca8c49b54d3e53278f3bd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 26 15:38:42 compute-0 sudo[236397]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:43 compute-0 python3.9[236551]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:38:44 compute-0 python3.9[236703]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:38:44 compute-0 ceph-mon[75140]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:44 compute-0 python3.9[236824]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769441923.8472002-1119-79961623923032/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:45 compute-0 python3.9[236974]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 26 15:38:45 compute-0 python3.9[237095]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1769441924.9951754-1134-275791049603189/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 26 15:38:46 compute-0 ceph-mon[75140]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:46 compute-0 sudo[237245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mggwoyauzbewuaxmbbifvnjegkjvubnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441926.2767324-1151-231533627124224/AnsiballZ_container_config_data.py'
Jan 26 15:38:46 compute-0 sudo[237245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:46 compute-0 python3.9[237247]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 26 15:38:46 compute-0 sudo[237245]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:47 compute-0 sudo[237397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tibezdjvmctgvwjhsmtiwfzfgjzmylyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441927.2800348-1162-80144565551902/AnsiballZ_container_config_hash.py'
Jan 26 15:38:47 compute-0 sudo[237397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:47 compute-0 python3.9[237399]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 15:38:47 compute-0 sudo[237397]: pam_unix(sudo:session): session closed for user root
Jan 26 15:38:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:38:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:38:48 compute-0 ceph-mon[75140]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:48 compute-0 sudo[237549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvnapnboeujtpygivrzegrotvoiabrof ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769441928.3498163-1172-131827488480142/AnsiballZ_edpm_container_manage.py'
Jan 26 15:38:48 compute-0 sudo[237549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:38:49 compute-0 python3[237551]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 15:38:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:50 compute-0 ceph-mon[75140]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:52 compute-0 ceph-mon[75140]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:53 compute-0 ceph-mon[75140]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:57 compute-0 ceph-mon[75140]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:38:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:38:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:38:59.200 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:38:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:38:59.201 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:38:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:38:59.201 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:38:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:00 compute-0 ceph-mon[75140]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:00 compute-0 podman[237564]: 2026-01-26 15:39:00.338649146 +0000 UTC m=+11.140148486 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 26 15:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:39:00 compute-0 podman[237620]: 2026-01-26 15:39:00.433082124 +0000 UTC m=+0.114381551 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:39:00 compute-0 podman[237668]: 2026-01-26 15:39:00.521409832 +0000 UTC m=+0.066110701 container create 1942d62c70400e118289ad3b909c21af4c82820a37e3d4c96a7e7998470e93fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 26 15:39:00 compute-0 podman[237668]: 2026-01-26 15:39:00.488424649 +0000 UTC m=+0.033125608 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 26 15:39:00 compute-0 python3[237551]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 26 15:39:00 compute-0 sudo[237549]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:01 compute-0 ceph-mon[75140]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:01 compute-0 sudo[237856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-givbzwhjslsatyndiizedgkhmspoimib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441940.8987887-1180-70507851615333/AnsiballZ_stat.py'
Jan 26 15:39:01 compute-0 sudo[237856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:01 compute-0 python3.9[237858]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:39:01 compute-0 sudo[237856]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:02 compute-0 sudo[238021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcgjnttrmbjymiliimyrvmfnhkkwvsdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441941.913536-1192-172114964618990/AnsiballZ_container_config_data.py'
Jan 26 15:39:02 compute-0 sudo[238021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:02 compute-0 ceph-mon[75140]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:02 compute-0 podman[237984]: 2026-01-26 15:39:02.232999632 +0000 UTC m=+0.078862967 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 15:39:02 compute-0 python3.9[238029]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 26 15:39:02 compute-0 sudo[238021]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:03 compute-0 sudo[238181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bccbbovseomklcuxvwprgrtuxqwexizw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441942.7298498-1203-20950235138820/AnsiballZ_container_config_hash.py'
Jan 26 15:39:03 compute-0 sudo[238181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:03 compute-0 python3.9[238183]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 26 15:39:03 compute-0 sudo[238181]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:03 compute-0 sudo[238333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwxfeczocnqpomkfstixsczkrpbsonjm ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1769441943.6246667-1213-116591079536188/AnsiballZ_edpm_container_manage.py'
Jan 26 15:39:03 compute-0 sudo[238333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:04 compute-0 python3[238335]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 26 15:39:04 compute-0 podman[238372]: 2026-01-26 15:39:04.37155809 +0000 UTC m=+0.047327579 container create 9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_managed=true)
Jan 26 15:39:04 compute-0 podman[238372]: 2026-01-26 15:39:04.34521593 +0000 UTC m=+0.020985409 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 26 15:39:04 compute-0 python3[238335]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 26 15:39:04 compute-0 ceph-mon[75140]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:04 compute-0 sudo[238333]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:04 compute-0 sudo[238560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfninurjrkhenkynvgknueiepnnzcvdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441944.6583045-1221-32858550583743/AnsiballZ_stat.py'
Jan 26 15:39:04 compute-0 sudo[238560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:05 compute-0 python3.9[238562]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:39:05 compute-0 sudo[238560]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:05 compute-0 sudo[238714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-maqbkzkjcdapyzkfzfzmtozxkzwrkmpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441945.4498696-1230-186931434567244/AnsiballZ_file.py'
Jan 26 15:39:05 compute-0 sudo[238714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:05 compute-0 python3.9[238716]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:39:05 compute-0 sudo[238714]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:06 compute-0 sudo[238865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnjkusbgpzdkugdojvxturateapsuilt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441945.9733353-1230-32322886738653/AnsiballZ_copy.py'
Jan 26 15:39:06 compute-0 sudo[238865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:06 compute-0 ceph-mon[75140]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:06 compute-0 python3.9[238867]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1769441945.9733353-1230-32322886738653/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 26 15:39:06 compute-0 sudo[238865]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:06 compute-0 sudo[238941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jykinduhhblyakxhqfbnfzyvgasrohjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441945.9733353-1230-32322886738653/AnsiballZ_systemd.py'
Jan 26 15:39:06 compute-0 sudo[238941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:07 compute-0 python3.9[238943]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 26 15:39:07 compute-0 systemd[1]: Reloading.
Jan 26 15:39:07 compute-0 systemd-rc-local-generator[238966]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:39:07 compute-0 systemd-sysv-generator[238973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:39:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:07 compute-0 sudo[238941]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:07 compute-0 sudo[239052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xorjyvvjnpuyoaebpnnwpvypogvowiuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441945.9733353-1230-32322886738653/AnsiballZ_systemd.py'
Jan 26 15:39:07 compute-0 sudo[239052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:08 compute-0 python3.9[239054]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 26 15:39:08 compute-0 systemd[1]: Reloading.
Jan 26 15:39:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:08 compute-0 systemd-rc-local-generator[239082]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 26 15:39:08 compute-0 systemd-sysv-generator[239088]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 26 15:39:08 compute-0 ceph-mon[75140]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:08 compute-0 systemd[1]: Starting nova_compute container...
Jan 26 15:39:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:08 compute-0 podman[239094]: 2026-01-26 15:39:08.732812651 +0000 UTC m=+0.140194498 container init 9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:39:08 compute-0 podman[239094]: 2026-01-26 15:39:08.746653402 +0000 UTC m=+0.154035209 container start 9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 15:39:08 compute-0 podman[239094]: nova_compute
Jan 26 15:39:08 compute-0 nova_compute[239110]: + sudo -E kolla_set_configs
Jan 26 15:39:08 compute-0 systemd[1]: Started nova_compute container.
Jan 26 15:39:08 compute-0 sudo[239052]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Validating config file
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying service configuration files
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Deleting /etc/ceph
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Creating directory /etc/ceph
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/ceph
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Writing out command to execute
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 15:39:08 compute-0 nova_compute[239110]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 15:39:08 compute-0 nova_compute[239110]: ++ cat /run_command
Jan 26 15:39:08 compute-0 nova_compute[239110]: + CMD=nova-compute
Jan 26 15:39:08 compute-0 nova_compute[239110]: + ARGS=
Jan 26 15:39:08 compute-0 nova_compute[239110]: + sudo kolla_copy_cacerts
Jan 26 15:39:08 compute-0 nova_compute[239110]: + [[ ! -n '' ]]
Jan 26 15:39:08 compute-0 nova_compute[239110]: + . kolla_extend_start
Jan 26 15:39:08 compute-0 nova_compute[239110]: Running command: 'nova-compute'
Jan 26 15:39:08 compute-0 nova_compute[239110]: + echo 'Running command: '\''nova-compute'\'''
Jan 26 15:39:08 compute-0 nova_compute[239110]: + umask 0022
Jan 26 15:39:08 compute-0 nova_compute[239110]: + exec nova-compute
Jan 26 15:39:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:09 compute-0 python3.9[239271]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:39:10 compute-0 python3.9[239422]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:39:10 compute-0 ceph-mon[75140]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:11 compute-0 python3.9[239572]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 26 15:39:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:11 compute-0 nova_compute[239110]: 2026-01-26 15:39:11.463 239114 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 15:39:11 compute-0 nova_compute[239110]: 2026-01-26 15:39:11.464 239114 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 15:39:11 compute-0 nova_compute[239110]: 2026-01-26 15:39:11.464 239114 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 15:39:11 compute-0 nova_compute[239110]: 2026-01-26 15:39:11.464 239114 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 26 15:39:11 compute-0 nova_compute[239110]: 2026-01-26 15:39:11.595 239114 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:39:11 compute-0 nova_compute[239110]: 2026-01-26 15:39:11.627 239114 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:39:11 compute-0 nova_compute[239110]: 2026-01-26 15:39:11.628 239114 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 26 15:39:11 compute-0 sudo[239726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdgfdfujbaemcuuklkfqratnvonmuxrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441951.3231902-1290-146560896162025/AnsiballZ_podman_container.py'
Jan 26 15:39:11 compute-0 sudo[239726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:12 compute-0 python3.9[239728]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 26 15:39:12 compute-0 sudo[239726]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:12 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 15:39:12 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 15:39:12 compute-0 ceph-mon[75140]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:12 compute-0 sudo[239901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pznauqoqmwdoopdepujxfodauikofxye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441952.470997-1298-108900143736897/AnsiballZ_systemd.py'
Jan 26 15:39:12 compute-0 sudo[239901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:13 compute-0 python3.9[239903]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 26 15:39:13 compute-0 systemd[1]: Stopping nova_compute container...
Jan 26 15:39:13 compute-0 systemd[1]: libpod-9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb.scope: Deactivated successfully.
Jan 26 15:39:13 compute-0 podman[239907]: 2026-01-26 15:39:13.219062484 +0000 UTC m=+0.074192170 container died 9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=edpm)
Jan 26 15:39:13 compute-0 systemd[1]: libpod-9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb.scope: Consumed 2.485s CPU time.
Jan 26 15:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb-userdata-shm.mount: Deactivated successfully.
Jan 26 15:39:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9-merged.mount: Deactivated successfully.
Jan 26 15:39:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:14 compute-0 podman[239907]: 2026-01-26 15:39:14.067310923 +0000 UTC m=+0.922440619 container cleanup 9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 26 15:39:14 compute-0 podman[239907]: nova_compute
Jan 26 15:39:14 compute-0 ceph-mon[75140]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:14 compute-0 podman[239936]: nova_compute
Jan 26 15:39:14 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 26 15:39:14 compute-0 systemd[1]: Stopped nova_compute container.
Jan 26 15:39:14 compute-0 systemd[1]: Starting nova_compute container...
Jan 26 15:39:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1d0200b30139217b5faa18768c6f440ff936f4052ee83648cb7884ea4e12ac9/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:14 compute-0 podman[239949]: 2026-01-26 15:39:14.281353272 +0000 UTC m=+0.094848660 container init 9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 15:39:14 compute-0 podman[239949]: 2026-01-26 15:39:14.297409067 +0000 UTC m=+0.110904395 container start 9e703c707afa366b51dd7cefe0c4644161299b00cc17c1236e823f1ac065ffbb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 15:39:14 compute-0 nova_compute[239965]: + sudo -E kolla_set_configs
Jan 26 15:39:14 compute-0 podman[239949]: nova_compute
Jan 26 15:39:14 compute-0 systemd[1]: Started nova_compute container.
Jan 26 15:39:14 compute-0 sudo[239901]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Validating config file
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying service configuration files
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /etc/ceph
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Creating directory /etc/ceph
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/ceph
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Writing out command to execute
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 26 15:39:14 compute-0 nova_compute[239965]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 26 15:39:14 compute-0 nova_compute[239965]: ++ cat /run_command
Jan 26 15:39:14 compute-0 nova_compute[239965]: + CMD=nova-compute
Jan 26 15:39:14 compute-0 nova_compute[239965]: + ARGS=
Jan 26 15:39:14 compute-0 nova_compute[239965]: + sudo kolla_copy_cacerts
Jan 26 15:39:14 compute-0 nova_compute[239965]: + [[ ! -n '' ]]
Jan 26 15:39:14 compute-0 nova_compute[239965]: + . kolla_extend_start
Jan 26 15:39:14 compute-0 nova_compute[239965]: + echo 'Running command: '\''nova-compute'\'''
Jan 26 15:39:14 compute-0 nova_compute[239965]: Running command: 'nova-compute'
Jan 26 15:39:14 compute-0 nova_compute[239965]: + umask 0022
Jan 26 15:39:14 compute-0 nova_compute[239965]: + exec nova-compute
Jan 26 15:39:14 compute-0 sudo[240126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flmmyncbdwnuhdoxzrgisnazdeyfysmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1769441954.5532253-1307-211680861076750/AnsiballZ_podman_container.py'
Jan 26 15:39:14 compute-0 sudo[240126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 15:39:15 compute-0 python3.9[240128]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 26 15:39:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:15 compute-0 systemd[1]: Started libpod-conmon-1942d62c70400e118289ad3b909c21af4c82820a37e3d4c96a7e7998470e93fd.scope.
Jan 26 15:39:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e7f89fee7263c306e6070f35d9fd2f437eeff6130874de9c03398c2228569c/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e7f89fee7263c306e6070f35d9fd2f437eeff6130874de9c03398c2228569c/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0e7f89fee7263c306e6070f35d9fd2f437eeff6130874de9c03398c2228569c/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:15 compute-0 podman[240154]: 2026-01-26 15:39:15.403507185 +0000 UTC m=+0.115429687 container init 1942d62c70400e118289ad3b909c21af4c82820a37e3d4c96a7e7998470e93fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 15:39:15 compute-0 podman[240154]: 2026-01-26 15:39:15.412146318 +0000 UTC m=+0.124068820 container start 1942d62c70400e118289ad3b909c21af4c82820a37e3d4c96a7e7998470e93fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 26 15:39:15 compute-0 python3.9[240128]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Applying nova statedir ownership
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 26 15:39:15 compute-0 nova_compute_init[240176]: INFO:nova_statedir:Nova statedir ownership complete
Jan 26 15:39:15 compute-0 systemd[1]: libpod-1942d62c70400e118289ad3b909c21af4c82820a37e3d4c96a7e7998470e93fd.scope: Deactivated successfully.
Jan 26 15:39:15 compute-0 podman[240191]: 2026-01-26 15:39:15.525477612 +0000 UTC m=+0.023728536 container died 1942d62c70400e118289ad3b909c21af4c82820a37e3d4c96a7e7998470e93fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 15:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1942d62c70400e118289ad3b909c21af4c82820a37e3d4c96a7e7998470e93fd-userdata-shm.mount: Deactivated successfully.
Jan 26 15:39:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0e7f89fee7263c306e6070f35d9fd2f437eeff6130874de9c03398c2228569c-merged.mount: Deactivated successfully.
Jan 26 15:39:15 compute-0 sudo[240126]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:15 compute-0 podman[240191]: 2026-01-26 15:39:15.559898982 +0000 UTC m=+0.058149876 container cleanup 1942d62c70400e118289ad3b909c21af4c82820a37e3d4c96a7e7998470e93fd (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm)
Jan 26 15:39:15 compute-0 systemd[1]: libpod-conmon-1942d62c70400e118289ad3b909c21af4c82820a37e3d4c96a7e7998470e93fd.scope: Deactivated successfully.
Jan 26 15:39:15 compute-0 sshd-session[215663]: Connection closed by 192.168.122.30 port 46606
Jan 26 15:39:15 compute-0 sshd-session[215659]: pam_unix(sshd:session): session closed for user zuul
Jan 26 15:39:15 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 26 15:39:15 compute-0 systemd[1]: session-49.scope: Consumed 2min 2.100s CPU time.
Jan 26 15:39:15 compute-0 systemd-logind[790]: Session 49 logged out. Waiting for processes to exit.
Jan 26 15:39:15 compute-0 systemd-logind[790]: Removed session 49.
Jan 26 15:39:16 compute-0 nova_compute[239965]: 2026-01-26 15:39:16.338 239969 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 15:39:16 compute-0 nova_compute[239965]: 2026-01-26 15:39:16.339 239969 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 15:39:16 compute-0 nova_compute[239965]: 2026-01-26 15:39:16.339 239969 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 26 15:39:16 compute-0 nova_compute[239965]: 2026-01-26 15:39:16.339 239969 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 26 15:39:16 compute-0 ceph-mon[75140]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:16 compute-0 nova_compute[239965]: 2026-01-26 15:39:16.473 239969 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:39:16 compute-0 nova_compute[239965]: 2026-01-26 15:39:16.496 239969 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:39:16 compute-0 nova_compute[239965]: 2026-01-26 15:39:16.496 239969 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.192 239969 INFO nova.virt.driver [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 26 15:39:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.359 239969 INFO nova.compute.provider_config [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.372 239969 DEBUG oslo_concurrency.lockutils [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.372 239969 DEBUG oslo_concurrency.lockutils [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.372 239969 DEBUG oslo_concurrency.lockutils [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.373 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.373 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.373 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.373 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.373 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.373 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.373 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.374 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.374 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.374 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.374 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.374 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.374 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.374 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.375 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.375 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.375 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.375 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.375 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.375 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.375 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.375 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.376 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.376 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.376 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.376 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.376 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.376 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.377 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.377 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.377 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.377 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.377 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.377 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.377 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.378 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.378 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.378 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.378 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.378 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.378 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.378 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.379 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.379 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.379 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.379 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.379 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.379 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.379 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.380 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.380 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.380 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.380 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.380 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.380 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.380 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.381 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.381 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.381 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.381 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.381 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.381 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.381 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.381 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.382 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.382 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.382 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.382 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.382 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.382 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.382 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.383 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.383 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.383 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.383 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.383 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.383 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.383 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.384 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.384 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.384 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.384 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.384 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.384 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.384 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.385 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.385 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.385 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.385 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.385 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.385 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.385 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.385 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.386 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.386 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.386 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.386 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.386 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.386 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.386 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.387 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.387 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.387 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.387 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.387 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.387 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.387 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.387 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.388 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.388 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.388 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.388 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.388 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.388 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.388 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.389 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.389 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.389 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.389 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.389 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.389 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.389 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.390 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.390 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.390 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.390 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.390 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.390 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.390 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.390 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.391 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.391 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.391 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.391 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.391 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.391 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.391 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.392 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.392 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.392 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.392 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.392 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.392 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.392 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.393 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.393 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.393 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.393 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.393 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.393 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.393 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.394 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.394 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.394 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.394 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.394 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.394 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.394 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.395 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.395 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.395 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.395 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.395 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.395 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.395 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.396 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.396 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.396 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.396 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.396 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.396 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.396 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.396 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.397 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.397 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.397 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.397 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.397 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.397 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.397 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.398 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.398 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.398 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.398 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.398 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.398 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.398 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.399 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.399 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.399 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.399 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.399 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.399 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.399 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.400 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.400 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.400 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.400 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.400 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.400 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.400 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.400 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.401 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.401 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.401 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.401 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.401 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.401 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.401 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.402 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.402 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.402 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.402 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.402 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.402 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.402 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.403 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.403 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.403 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.403 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.403 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.403 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.403 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.404 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.404 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.404 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.404 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.404 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.404 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.404 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.405 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.405 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.405 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.405 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.405 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.405 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.405 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.405 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.406 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.406 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.406 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.406 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.406 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.406 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.406 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.407 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.407 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.407 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.407 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.407 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.407 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.407 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.408 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.408 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.408 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.408 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.408 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.408 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.408 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.408 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.409 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.409 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.409 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.409 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.409 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.409 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.409 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.410 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.410 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.410 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.410 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.410 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.410 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.410 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.411 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.411 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.411 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.411 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.411 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.411 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.411 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.412 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.412 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.412 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.412 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.412 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.412 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.412 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.412 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.413 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.413 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.413 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.413 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.413 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.413 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.413 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.414 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.414 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.414 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.414 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.414 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.414 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.414 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.415 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.415 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.415 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.415 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.415 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.415 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.415 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.415 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.416 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.416 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.416 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.416 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.416 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.416 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.416 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.417 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.417 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.417 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.417 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.417 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.417 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.417 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.418 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.418 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.418 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.418 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.418 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.418 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.418 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.418 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.419 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.419 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.419 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.419 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.419 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.419 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.419 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.420 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.420 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.420 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.420 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.420 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.420 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.421 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.421 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.421 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.421 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.421 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.421 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.421 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.421 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.422 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.422 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.422 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.422 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.422 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.423 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.423 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.423 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.423 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.423 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.423 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.423 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.423 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.424 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.424 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.424 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.424 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.424 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.424 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.424 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.425 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.425 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.425 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.425 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.425 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.425 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.425 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.426 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.426 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.426 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.426 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.426 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.426 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.426 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.426 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.427 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.427 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.427 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.427 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.427 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.427 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.427 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.428 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.428 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.428 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.428 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.428 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.428 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.428 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.429 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.429 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.429 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.429 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.429 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.429 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.429 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.429 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.430 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.430 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.430 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.430 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.430 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.430 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.430 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.430 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.431 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.431 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.431 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.431 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.431 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.431 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.431 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.432 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.432 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.432 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.432 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.432 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.432 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.432 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.432 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.433 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.433 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.433 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.433 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.433 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.433 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.433 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.434 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.434 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.434 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.434 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.434 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.434 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.434 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.435 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.435 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.435 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.435 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.435 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.435 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.435 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.436 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.436 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.436 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.436 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.436 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.436 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.436 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.437 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.437 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.437 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.437 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.437 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.437 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.437 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.437 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.438 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.438 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.438 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.438 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.438 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.438 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.438 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.439 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.439 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.439 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.439 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.439 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.439 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.439 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.439 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.440 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.440 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.440 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.440 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.440 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.440 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.440 239969 WARNING oslo_config.cfg [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 26 15:39:17 compute-0 nova_compute[239965]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 26 15:39:17 compute-0 nova_compute[239965]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 26 15:39:17 compute-0 nova_compute[239965]: and ``live_migration_inbound_addr`` respectively.
Jan 26 15:39:17 compute-0 nova_compute[239965]: ).  Its value may be silently ignored in the future.
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.441 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.441 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.441 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.441 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.441 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.441 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.442 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.442 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.442 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.442 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.442 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.442 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.442 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.442 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.443 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.443 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.443 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.443 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.445 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rbd_secret_uuid        = 8b9831ad-4b0d-59b4-8860-96eb895a171f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.445 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.445 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.446 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.446 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.446 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.446 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.446 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.447 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.447 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.447 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.447 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.447 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.448 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.448 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.448 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.448 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.449 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.449 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.449 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.449 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.450 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.450 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.450 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.450 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.450 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.451 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.451 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.451 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.451 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.451 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.452 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.452 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.452 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.452 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.452 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.452 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.453 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.453 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.453 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.453 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.453 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.454 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.454 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.454 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.454 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.454 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.455 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.455 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.455 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.455 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.455 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.455 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.456 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.456 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.456 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.456 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.456 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.457 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.457 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.457 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.457 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.457 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.458 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.458 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.458 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.458 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.458 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.459 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.459 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.459 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.459 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.459 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.460 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.460 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.460 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.460 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.460 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.460 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.461 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.461 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.461 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.461 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.461 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.462 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.462 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.462 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.462 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.462 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.463 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.463 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.463 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.463 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.463 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.464 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.464 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.464 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.464 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.464 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.465 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.465 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.465 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.465 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.465 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.466 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.466 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.466 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.466 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.466 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.467 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.467 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.467 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.467 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.467 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.467 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.468 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.468 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.468 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.468 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.468 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.469 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.469 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.469 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.469 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.470 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.470 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.470 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.470 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.470 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.471 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.471 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.471 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.471 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.471 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.472 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.472 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.472 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.472 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.472 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.473 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.473 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.473 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.473 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.473 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.473 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.474 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.474 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.474 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.474 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.474 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.475 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.475 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.475 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.475 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.475 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.475 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.476 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.476 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.476 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.476 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.476 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.477 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.477 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.477 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.477 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.478 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.478 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.478 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.478 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.478 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.479 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.479 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.479 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.479 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.479 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.480 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.480 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.480 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.480 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.480 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.481 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.481 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.481 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.481 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.481 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.481 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.482 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.482 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.482 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.482 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.482 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.483 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.483 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.483 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.483 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.483 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.483 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.484 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.484 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.484 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.484 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.484 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.485 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.485 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.485 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.485 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.485 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.485 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.486 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.486 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.486 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.486 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.486 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.486 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.487 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.487 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.487 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.487 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.488 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.488 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.488 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.488 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.488 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.489 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.489 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.489 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.489 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.489 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.489 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.490 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.490 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.490 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.490 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.491 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.491 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.491 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.491 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.491 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.492 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.492 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.492 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.492 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.492 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.493 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.493 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.493 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.493 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.493 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.493 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.494 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.494 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.494 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.494 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.494 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.495 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.495 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.495 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.495 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.495 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.495 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.496 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.496 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.496 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.496 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.496 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.497 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.497 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.497 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.497 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.497 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.498 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.498 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.498 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.498 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.498 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.499 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.499 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.499 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.499 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.499 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.500 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.500 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.500 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.500 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.500 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.500 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.501 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.501 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.501 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.501 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.501 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.501 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.501 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.502 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.502 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.502 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.502 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.502 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.503 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.503 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.503 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.503 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.503 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.504 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.504 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.504 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.504 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.504 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.504 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.505 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.505 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.505 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.505 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.506 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.506 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.506 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.506 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.506 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.507 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.507 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.507 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.507 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.507 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.507 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.507 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.508 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.508 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.508 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.508 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.508 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.508 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.509 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.509 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.509 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.509 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.509 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.509 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.510 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.510 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.510 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.510 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.510 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.510 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.511 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.511 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.511 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.511 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.511 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.511 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.511 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.512 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.512 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.512 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.512 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.512 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.513 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.513 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.513 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.513 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.513 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.513 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.513 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.514 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.514 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.514 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.514 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.514 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.514 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.515 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.515 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.515 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.515 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.515 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.515 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.515 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.516 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.516 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.516 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.516 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.516 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.516 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.516 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.517 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.517 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.517 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.517 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.517 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.517 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.518 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.518 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.518 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.518 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.518 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.518 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.518 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.519 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.519 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.519 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.519 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.519 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.520 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.520 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.520 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.520 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.520 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.520 239969 DEBUG oslo_service.service [None req-ff589ada-a875-4914-9702-c3c112db54b1 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.522 239969 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.546 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.547 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.547 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.547 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 26 15:39:17 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 26 15:39:17 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.621 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0c4ed60fd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.623 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0c4ed60fd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.624 239969 INFO nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Connection event '1' reason 'None'
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.646 239969 WARNING nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 26 15:39:17 compute-0 nova_compute[239965]: 2026-01-26 15:39:17.646 239969 DEBUG nova.virt.libvirt.volume.mount [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 26 15:39:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:18 compute-0 ceph-mon[75140]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.423 239969 INFO nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Libvirt host capabilities <capabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]: 
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <host>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <uuid>899b885c-8c66-485f-b49d-8445aa8881a6</uuid>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <arch>x86_64</arch>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model>EPYC-Rome-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <vendor>AMD</vendor>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <microcode version='16777317'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <signature family='23' model='49' stepping='0'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='x2apic'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='tsc-deadline'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='osxsave'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='hypervisor'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='tsc_adjust'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='spec-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='stibp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='arch-capabilities'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='cmp_legacy'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='topoext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='virt-ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='lbrv'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='tsc-scale'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='vmcb-clean'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='pause-filter'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='pfthreshold'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='svme-addr-chk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='rdctl-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='skip-l1dfl-vmentry'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='mds-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature name='pschange-mc-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <pages unit='KiB' size='4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <pages unit='KiB' size='2048'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <pages unit='KiB' size='1048576'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <power_management>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <suspend_mem/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </power_management>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <iommu support='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <migration_features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <live/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <uri_transports>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <uri_transport>tcp</uri_transport>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <uri_transport>rdma</uri_transport>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </uri_transports>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </migration_features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <topology>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <cells num='1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <cell id='0'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:           <memory unit='KiB'>7864300</memory>
Jan 26 15:39:18 compute-0 nova_compute[239965]:           <pages unit='KiB' size='4'>1966075</pages>
Jan 26 15:39:18 compute-0 nova_compute[239965]:           <pages unit='KiB' size='2048'>0</pages>
Jan 26 15:39:18 compute-0 nova_compute[239965]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 26 15:39:18 compute-0 nova_compute[239965]:           <distances>
Jan 26 15:39:18 compute-0 nova_compute[239965]:             <sibling id='0' value='10'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:           </distances>
Jan 26 15:39:18 compute-0 nova_compute[239965]:           <cpus num='8'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:           </cpus>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         </cell>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </cells>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </topology>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <cache>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </cache>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <secmodel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model>selinux</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <doi>0</doi>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </secmodel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <secmodel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model>dac</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <doi>0</doi>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </secmodel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </host>
Jan 26 15:39:18 compute-0 nova_compute[239965]: 
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <guest>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <os_type>hvm</os_type>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <arch name='i686'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <wordsize>32</wordsize>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <domain type='qemu'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <domain type='kvm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </arch>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <pae/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <nonpae/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <acpi default='on' toggle='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <apic default='on' toggle='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <cpuselection/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <deviceboot/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <disksnapshot default='on' toggle='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <externalSnapshot/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </guest>
Jan 26 15:39:18 compute-0 nova_compute[239965]: 
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <guest>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <os_type>hvm</os_type>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <arch name='x86_64'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <wordsize>64</wordsize>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <domain type='qemu'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <domain type='kvm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </arch>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <acpi default='on' toggle='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <apic default='on' toggle='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <cpuselection/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <deviceboot/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <disksnapshot default='on' toggle='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <externalSnapshot/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </guest>
Jan 26 15:39:18 compute-0 nova_compute[239965]: 
Jan 26 15:39:18 compute-0 nova_compute[239965]: </capabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]: 
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.435 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.468 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 26 15:39:18 compute-0 nova_compute[239965]: <domainCapabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <domain>kvm</domain>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <arch>i686</arch>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <vcpu max='4096'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <iothreads supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <os supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <enum name='firmware'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <loader supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>rom</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pflash</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='readonly'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>yes</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>no</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='secure'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>no</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </loader>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </os>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='host-passthrough' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='hostPassthroughMigratable'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>on</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>off</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='maximum' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='maximumMigratable'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>on</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>off</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='host-model' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <vendor>AMD</vendor>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='x2apic'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='hypervisor'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='stibp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='overflow-recov'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='succor'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='lbrv'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc-scale'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='flushbyasid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='pause-filter'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='pfthreshold'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='disable' name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='custom' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='ClearwaterForest'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ddpd-u'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sha512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='ClearwaterForest-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ddpd-u'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sha512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Dhyana-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Turin'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vp2intersect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibpb-brtype'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbpb'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='srso-user-kernel-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Turin-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vp2intersect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibpb-brtype'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbpb'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='srso-user-kernel-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-128'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-256'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-128'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-256'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v6'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v7'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='KnightsMill'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4fmaps'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4vnniw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512er'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512pf'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='KnightsMill-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4fmaps'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4vnniw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512er'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512pf'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G4-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tbm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G5-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tbm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='athlon'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='athlon-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='core2duo'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='core2duo-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='coreduo'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='coreduo-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='n270'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='n270-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='phenom'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='phenom-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <memoryBacking supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <enum name='sourceType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>file</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>anonymous</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>memfd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </memoryBacking>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <disk supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='diskDevice'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>disk</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>cdrom</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>floppy</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>lun</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='bus'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>fdc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>scsi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>sata</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-non-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <graphics supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vnc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>egl-headless</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dbus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <video supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='modelType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vga</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>cirrus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>none</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>bochs</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ramfb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </video>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <hostdev supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='mode'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>subsystem</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='startupPolicy'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>default</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>mandatory</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>requisite</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>optional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='subsysType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pci</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>scsi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='capsType'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='pciBackend'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </hostdev>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <rng supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-non-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>random</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>egd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>builtin</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <filesystem supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='driverType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>path</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>handle</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtiofs</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </filesystem>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <tpm supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tpm-tis</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tpm-crb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>emulator</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>external</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendVersion'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>2.0</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </tpm>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <redirdev supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='bus'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </redirdev>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <channel supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pty</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>unix</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </channel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <crypto supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>qemu</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>builtin</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </crypto>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <interface supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>default</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>passt</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <panic supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>isa</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>hyperv</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </panic>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <console supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>null</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pty</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dev</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>file</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pipe</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>stdio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>udp</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tcp</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>unix</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>qemu-vdagent</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dbus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </console>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <gic supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <vmcoreinfo supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <genid supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <backingStoreInput supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <backup supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <async-teardown supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <s390-pv supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <ps2 supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <tdx supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <sev supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <sgx supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <hyperv supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='features'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>relaxed</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vapic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>spinlocks</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vpindex</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>runtime</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>synic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>stimer</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>reset</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vendor_id</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>frequencies</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>reenlightenment</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tlbflush</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ipi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>avic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>emsr_bitmap</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>xmm_input</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <defaults>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <spinlocks>4095</spinlocks>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <stimer_direct>on</stimer_direct>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </defaults>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </hyperv>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <launchSecurity supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </features>
Jan 26 15:39:18 compute-0 nova_compute[239965]: </domainCapabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.481 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 26 15:39:18 compute-0 nova_compute[239965]: <domainCapabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <domain>kvm</domain>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <arch>i686</arch>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <vcpu max='240'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <iothreads supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <os supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <enum name='firmware'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <loader supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>rom</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pflash</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='readonly'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>yes</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>no</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='secure'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>no</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </loader>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </os>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='host-passthrough' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='hostPassthroughMigratable'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>on</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>off</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='maximum' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='maximumMigratable'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>on</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>off</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='host-model' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <vendor>AMD</vendor>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='x2apic'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='hypervisor'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='stibp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='overflow-recov'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='succor'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='lbrv'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc-scale'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='flushbyasid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='pause-filter'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='pfthreshold'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='disable' name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='custom' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='ClearwaterForest'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ddpd-u'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sha512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='ClearwaterForest-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ddpd-u'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sha512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Dhyana-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Turin'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vp2intersect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibpb-brtype'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbpb'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='srso-user-kernel-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Turin-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vp2intersect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibpb-brtype'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbpb'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='srso-user-kernel-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-128'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-256'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-128'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-256'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v6'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v7'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='KnightsMill'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4fmaps'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4vnniw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512er'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512pf'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='KnightsMill-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4fmaps'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4vnniw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512er'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512pf'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G4-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tbm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G5-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tbm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='athlon'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='athlon-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='core2duo'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='core2duo-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='coreduo'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='coreduo-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='n270'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='n270-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='phenom'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='phenom-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <memoryBacking supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <enum name='sourceType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>file</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>anonymous</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>memfd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </memoryBacking>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <disk supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='diskDevice'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>disk</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>cdrom</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>floppy</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>lun</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='bus'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ide</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>fdc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>scsi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>sata</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-non-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <graphics supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vnc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>egl-headless</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dbus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <video supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='modelType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vga</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>cirrus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>none</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>bochs</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ramfb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </video>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <hostdev supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='mode'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>subsystem</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='startupPolicy'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>default</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>mandatory</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>requisite</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>optional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='subsysType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pci</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>scsi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='capsType'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='pciBackend'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </hostdev>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <rng supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-non-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>random</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>egd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>builtin</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <filesystem supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='driverType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>path</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>handle</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtiofs</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </filesystem>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <tpm supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tpm-tis</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tpm-crb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>emulator</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>external</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendVersion'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>2.0</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </tpm>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <redirdev supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='bus'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </redirdev>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <channel supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pty</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>unix</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </channel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <crypto supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>qemu</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>builtin</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </crypto>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <interface supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>default</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>passt</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <panic supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>isa</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>hyperv</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </panic>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <console supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>null</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pty</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dev</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>file</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pipe</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>stdio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>udp</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tcp</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>unix</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>qemu-vdagent</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dbus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </console>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <gic supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <vmcoreinfo supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <genid supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <backingStoreInput supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <backup supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <async-teardown supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <s390-pv supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <ps2 supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <tdx supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <sev supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <sgx supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <hyperv supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='features'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>relaxed</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vapic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>spinlocks</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vpindex</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>runtime</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>synic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>stimer</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>reset</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vendor_id</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>frequencies</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>reenlightenment</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tlbflush</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ipi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>avic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>emsr_bitmap</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>xmm_input</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <defaults>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <spinlocks>4095</spinlocks>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <stimer_direct>on</stimer_direct>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </defaults>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </hyperv>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <launchSecurity supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </features>
Jan 26 15:39:18 compute-0 nova_compute[239965]: </domainCapabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.533 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.538 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 26 15:39:18 compute-0 nova_compute[239965]: <domainCapabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <domain>kvm</domain>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <arch>x86_64</arch>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <vcpu max='4096'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <iothreads supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <os supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <enum name='firmware'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>efi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <loader supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>rom</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pflash</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='readonly'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>yes</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>no</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='secure'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>yes</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>no</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </loader>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </os>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='host-passthrough' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='hostPassthroughMigratable'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>on</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>off</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='maximum' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='maximumMigratable'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>on</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>off</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='host-model' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <vendor>AMD</vendor>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='x2apic'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='hypervisor'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='stibp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='overflow-recov'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='succor'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='lbrv'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc-scale'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='flushbyasid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='pause-filter'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='pfthreshold'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='disable' name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='custom' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='ClearwaterForest'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ddpd-u'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sha512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='ClearwaterForest-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ddpd-u'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sha512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Dhyana-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Turin'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vp2intersect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibpb-brtype'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbpb'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='srso-user-kernel-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Turin-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vp2intersect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibpb-brtype'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbpb'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='srso-user-kernel-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-128'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-256'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-128'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-256'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v6'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v7'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='KnightsMill'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4fmaps'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4vnniw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512er'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512pf'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='KnightsMill-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4fmaps'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4vnniw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512er'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512pf'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G4-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tbm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G5-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tbm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='athlon'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='athlon-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='core2duo'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='core2duo-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='coreduo'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='coreduo-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='n270'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='n270-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='phenom'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='phenom-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <memoryBacking supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <enum name='sourceType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>file</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>anonymous</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>memfd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </memoryBacking>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <disk supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='diskDevice'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>disk</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>cdrom</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>floppy</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>lun</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='bus'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>fdc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>scsi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>sata</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-non-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <graphics supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vnc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>egl-headless</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dbus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <video supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='modelType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vga</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>cirrus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>none</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>bochs</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ramfb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </video>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <hostdev supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='mode'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>subsystem</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='startupPolicy'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>default</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>mandatory</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>requisite</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>optional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='subsysType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pci</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>scsi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='capsType'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='pciBackend'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </hostdev>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <rng supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-non-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>random</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>egd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>builtin</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <filesystem supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='driverType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>path</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>handle</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtiofs</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </filesystem>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <tpm supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tpm-tis</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tpm-crb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>emulator</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>external</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendVersion'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>2.0</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </tpm>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <redirdev supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='bus'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </redirdev>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <channel supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pty</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>unix</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </channel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <crypto supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>qemu</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>builtin</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </crypto>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <interface supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>default</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>passt</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <panic supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>isa</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>hyperv</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </panic>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <console supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>null</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pty</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dev</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>file</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pipe</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>stdio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>udp</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tcp</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>unix</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>qemu-vdagent</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dbus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </console>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <gic supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <vmcoreinfo supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <genid supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <backingStoreInput supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <backup supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <async-teardown supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <s390-pv supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <ps2 supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <tdx supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <sev supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <sgx supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <hyperv supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='features'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>relaxed</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vapic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>spinlocks</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vpindex</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>runtime</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>synic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>stimer</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>reset</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vendor_id</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>frequencies</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>reenlightenment</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tlbflush</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ipi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>avic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>emsr_bitmap</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>xmm_input</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <defaults>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <spinlocks>4095</spinlocks>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <stimer_direct>on</stimer_direct>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </defaults>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </hyperv>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <launchSecurity supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </features>
Jan 26 15:39:18 compute-0 nova_compute[239965]: </domainCapabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.613 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 26 15:39:18 compute-0 nova_compute[239965]: <domainCapabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <path>/usr/libexec/qemu-kvm</path>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <domain>kvm</domain>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <arch>x86_64</arch>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <vcpu max='240'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <iothreads supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <os supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <enum name='firmware'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <loader supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>rom</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pflash</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='readonly'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>yes</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>no</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='secure'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>no</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </loader>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </os>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='host-passthrough' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='hostPassthroughMigratable'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>on</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>off</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='maximum' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='maximumMigratable'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>on</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>off</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='host-model' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <vendor>AMD</vendor>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='x2apic'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc-deadline'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='hypervisor'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc_adjust'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='spec-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='stibp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='cmp_legacy'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='overflow-recov'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='succor'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='amd-ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='virt-ssbd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='lbrv'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='tsc-scale'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='vmcb-clean'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='flushbyasid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='pause-filter'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='pfthreshold'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='svme-addr-chk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <feature policy='disable' name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <mode name='custom' supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Broadwell-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cascadelake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='ClearwaterForest'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ddpd-u'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sha512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='ClearwaterForest-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ddpd-u'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sha512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm3'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sm4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Cooperlake-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Denverton-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Dhyana-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Genoa-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Milan-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Rome-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Turin'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vp2intersect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibpb-brtype'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbpb'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='srso-user-kernel-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-Turin-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amd-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='auto-ibrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vp2intersect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fs-gs-base-ns'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibpb-brtype'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='no-nested-data-bp'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='null-sel-clr-base'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='perfmon-v2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbpb'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='srso-user-kernel-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='stibp-always-on'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='EPYC-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-128'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-256'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='GraniteRapids-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-128'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-256'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx10-512'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='prefetchiti'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Haswell-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-noTSX'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v6'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Icelake-Server-v7'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='IvyBridge-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='KnightsMill'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4fmaps'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4vnniw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512er'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512pf'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='KnightsMill-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4fmaps'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-4vnniw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512er'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512pf'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G4-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tbm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Opteron_G5-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fma4'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tbm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xop'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SapphireRapids-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='amx-tile'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-bf16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-fp16'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512-vpopcntdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bitalg'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vbmi2'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrc'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fzrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='la57'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='taa-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='tsx-ldtrk'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='SierraForest-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ifma'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-ne-convert'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx-vnni-int8'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bhi-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='bus-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cmpccxadd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fbsdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='fsrs'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ibrs-all'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='intel-psfd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ipred-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='lam'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mcdt-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pbrsb-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='psdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rrsba-ctrl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='sbdr-ssdp-no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='serialize'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vaes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='vpclmulqdq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Client-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='hle'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='rtm'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Skylake-Server-v5'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512bw'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512cd'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512dq'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512f'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='avx512vl'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='invpcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pcid'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='pku'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='mpx'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v2'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v3'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='core-capability'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='split-lock-detect'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='Snowridge-v4'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='cldemote'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='erms'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='gfni'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdir64b'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='movdiri'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='xsaves'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='athlon'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='athlon-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='core2duo'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='core2duo-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='coreduo'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='coreduo-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='n270'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='n270-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='ss'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='phenom'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <blockers model='phenom-v1'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnow'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <feature name='3dnowext'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </blockers>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </mode>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <memoryBacking supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <enum name='sourceType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>file</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>anonymous</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <value>memfd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </memoryBacking>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <disk supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='diskDevice'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>disk</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>cdrom</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>floppy</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>lun</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='bus'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ide</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>fdc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>scsi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>sata</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-non-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <graphics supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vnc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>egl-headless</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dbus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <video supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='modelType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vga</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>cirrus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>none</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>bochs</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ramfb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </video>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <hostdev supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='mode'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>subsystem</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='startupPolicy'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>default</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>mandatory</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>requisite</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>optional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='subsysType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pci</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>scsi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='capsType'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='pciBackend'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </hostdev>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <rng supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtio-non-transitional</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>random</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>egd</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>builtin</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <filesystem supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='driverType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>path</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>handle</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>virtiofs</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </filesystem>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <tpm supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tpm-tis</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tpm-crb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>emulator</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>external</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendVersion'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>2.0</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </tpm>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <redirdev supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='bus'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>usb</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </redirdev>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <channel supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pty</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>unix</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </channel>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <crypto supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>qemu</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendModel'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>builtin</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </crypto>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <interface supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='backendType'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>default</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>passt</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <panic supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='model'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>isa</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>hyperv</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </panic>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <console supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='type'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>null</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vc</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pty</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dev</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>file</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>pipe</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>stdio</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>udp</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tcp</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>unix</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>qemu-vdagent</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>dbus</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </console>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   <features>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <gic supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <vmcoreinfo supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <genid supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <backingStoreInput supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <backup supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <async-teardown supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <s390-pv supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <ps2 supported='yes'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <tdx supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <sev supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <sgx supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <hyperv supported='yes'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <enum name='features'>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>relaxed</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vapic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>spinlocks</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vpindex</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>runtime</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>synic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>stimer</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>reset</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>vendor_id</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>frequencies</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>reenlightenment</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>tlbflush</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>ipi</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>avic</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>emsr_bitmap</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <value>xmm_input</value>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </enum>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       <defaults>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <spinlocks>4095</spinlocks>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <stimer_direct>on</stimer_direct>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <tlbflush_direct>on</tlbflush_direct>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <tlbflush_extended>on</tlbflush_extended>
Jan 26 15:39:18 compute-0 nova_compute[239965]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 26 15:39:18 compute-0 nova_compute[239965]:       </defaults>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     </hyperv>
Jan 26 15:39:18 compute-0 nova_compute[239965]:     <launchSecurity supported='no'/>
Jan 26 15:39:18 compute-0 nova_compute[239965]:   </features>
Jan 26 15:39:18 compute-0 nova_compute[239965]: </domainCapabilities>
Jan 26 15:39:18 compute-0 nova_compute[239965]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.686 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.687 239969 INFO nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Secure Boot support detected
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.689 239969 INFO nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.698 239969 DEBUG nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.733 239969 INFO nova.virt.node [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Determined node identity 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from /var/lib/nova/compute_id
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.752 239969 WARNING nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Compute nodes ['54ba2329-fe1a-48ee-a2ff-6c84c26128ed'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.789 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.850 239969 WARNING nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.851 239969 DEBUG oslo_concurrency.lockutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.851 239969 DEBUG oslo_concurrency.lockutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.851 239969 DEBUG oslo_concurrency.lockutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.851 239969 DEBUG nova.compute.resource_tracker [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:39:18 compute-0 nova_compute[239965]: 2026-01-26 15:39:18.852 239969 DEBUG oslo_concurrency.processutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:39:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:39:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1337761538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:39:19 compute-0 nova_compute[239965]: 2026-01-26 15:39:19.401 239969 DEBUG oslo_concurrency.processutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:39:19 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 26 15:39:19 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 26 15:39:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1337761538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:39:19 compute-0 nova_compute[239965]: 2026-01-26 15:39:19.718 239969 WARNING nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:39:19 compute-0 nova_compute[239965]: 2026-01-26 15:39:19.720 239969 DEBUG nova.compute.resource_tracker [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5084MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:39:19 compute-0 nova_compute[239965]: 2026-01-26 15:39:19.720 239969 DEBUG oslo_concurrency.lockutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:39:19 compute-0 nova_compute[239965]: 2026-01-26 15:39:19.720 239969 DEBUG oslo_concurrency.lockutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:39:19 compute-0 nova_compute[239965]: 2026-01-26 15:39:19.738 239969 WARNING nova.compute.resource_tracker [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] No compute node record for compute-0.ctlplane.example.com:54ba2329-fe1a-48ee-a2ff-6c84c26128ed: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 54ba2329-fe1a-48ee-a2ff-6c84c26128ed could not be found.
Jan 26 15:39:19 compute-0 nova_compute[239965]: 2026-01-26 15:39:19.759 239969 INFO nova.compute.resource_tracker [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed
Jan 26 15:39:19 compute-0 nova_compute[239965]: 2026-01-26 15:39:19.823 239969 DEBUG nova.compute.resource_tracker [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:39:19 compute-0 nova_compute[239965]: 2026-01-26 15:39:19.823 239969 DEBUG nova.compute.resource_tracker [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:39:20 compute-0 ceph-mon[75140]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:20 compute-0 nova_compute[239965]: 2026-01-26 15:39:20.757 239969 INFO nova.scheduler.client.report [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [req-e69d73b1-77ac-4ec9-84ce-7007c77872ba] Created resource provider record via placement API for resource provider with UUID 54ba2329-fe1a-48ee-a2ff-6c84c26128ed and name compute-0.ctlplane.example.com.
Jan 26 15:39:21 compute-0 nova_compute[239965]: 2026-01-26 15:39:21.157 239969 DEBUG oslo_concurrency.processutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:39:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:39:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160813596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:39:21 compute-0 nova_compute[239965]: 2026-01-26 15:39:21.748 239969 DEBUG oslo_concurrency.processutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:39:21 compute-0 nova_compute[239965]: 2026-01-26 15:39:21.754 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 26 15:39:21 compute-0 nova_compute[239965]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 26 15:39:21 compute-0 nova_compute[239965]: 2026-01-26 15:39:21.755 239969 INFO nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] kernel doesn't support AMD SEV
Jan 26 15:39:21 compute-0 nova_compute[239965]: 2026-01-26 15:39:21.756 239969 DEBUG nova.compute.provider_tree [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 15:39:21 compute-0 nova_compute[239965]: 2026-01-26 15:39:21.757 239969 DEBUG nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:39:21 compute-0 nova_compute[239965]: 2026-01-26 15:39:21.825 239969 DEBUG nova.scheduler.client.report [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Updated inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 26 15:39:21 compute-0 nova_compute[239965]: 2026-01-26 15:39:21.825 239969 DEBUG nova.compute.provider_tree [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Updating resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 26 15:39:21 compute-0 nova_compute[239965]: 2026-01-26 15:39:21.825 239969 DEBUG nova.compute.provider_tree [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 15:39:22 compute-0 nova_compute[239965]: 2026-01-26 15:39:22.121 239969 DEBUG nova.compute.provider_tree [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Updating resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 26 15:39:22 compute-0 ceph-mon[75140]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4160813596' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:39:22 compute-0 nova_compute[239965]: 2026-01-26 15:39:22.597 239969 DEBUG nova.compute.resource_tracker [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:39:22 compute-0 nova_compute[239965]: 2026-01-26 15:39:22.597 239969 DEBUG oslo_concurrency.lockutils [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:39:22 compute-0 nova_compute[239965]: 2026-01-26 15:39:22.598 239969 DEBUG nova.service [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 26 15:39:22 compute-0 nova_compute[239965]: 2026-01-26 15:39:22.779 239969 DEBUG nova.service [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 26 15:39:22 compute-0 nova_compute[239965]: 2026-01-26 15:39:22.779 239969 DEBUG nova.servicegroup.drivers.db [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 26 15:39:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:23 compute-0 nova_compute[239965]: 2026-01-26 15:39:23.781 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:39:23 compute-0 nova_compute[239965]: 2026-01-26 15:39:23.802 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:39:24 compute-0 ceph-mon[75140]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:26 compute-0 ceph-mon[75140]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:39:28
Jan 26 15:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'vms', 'default.rgw.control', 'default.rgw.meta', 'images', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root']
Jan 26 15:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:39:28 compute-0 ceph-mon[75140]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:39:30 compute-0 ceph-mon[75140]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:31 compute-0 podman[240373]: 2026-01-26 15:39:31.424741617 +0000 UTC m=+0.111104741 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 15:39:32 compute-0 podman[240399]: 2026-01-26 15:39:32.361785654 +0000 UTC m=+0.052197398 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:39:32 compute-0 ceph-mon[75140]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:32 compute-0 sshd-session[240419]: Connection closed by authenticating user root 178.128.250.55 port 40764 [preauth]
Jan 26 15:39:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:34 compute-0 ceph-mon[75140]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:36 compute-0 ceph-mon[75140]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:37 compute-0 sudo[240421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:39:37 compute-0 sudo[240421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:39:37 compute-0 sudo[240421]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:37 compute-0 sudo[240446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:39:37 compute-0 sudo[240446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:39:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:37 compute-0 sudo[240446]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:39:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:39:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:39:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:39:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:39:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:39:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:39:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:39:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:39:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:39:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:39:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:39:37 compute-0 sudo[240501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:39:37 compute-0 sudo[240501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:39:37 compute-0 sudo[240501]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:37 compute-0 sudo[240526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:39:37 compute-0 sudo[240526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:39:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:39:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:39:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:39:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:39:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:39:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:39:38 compute-0 podman[240564]: 2026-01-26 15:39:38.099033739 +0000 UTC m=+0.024008853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:39:38 compute-0 podman[240564]: 2026-01-26 15:39:38.301024649 +0000 UTC m=+0.225999743 container create b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:39:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:38 compute-0 systemd[1]: Started libpod-conmon-b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9.scope.
Jan 26 15:39:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:39:38 compute-0 podman[240564]: 2026-01-26 15:39:38.391390738 +0000 UTC m=+0.316365882 container init b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:39:38 compute-0 podman[240564]: 2026-01-26 15:39:38.399294383 +0000 UTC m=+0.324269477 container start b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:39:38 compute-0 podman[240564]: 2026-01-26 15:39:38.425389446 +0000 UTC m=+0.350364630 container attach b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:39:38 compute-0 great_goldstine[240580]: 167 167
Jan 26 15:39:38 compute-0 systemd[1]: libpod-b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9.scope: Deactivated successfully.
Jan 26 15:39:38 compute-0 conmon[240580]: conmon b068d87c2a3bb2099052 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9.scope/container/memory.events
Jan 26 15:39:38 compute-0 podman[240564]: 2026-01-26 15:39:38.428436471 +0000 UTC m=+0.353411625 container died b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f9683e76d52b16a7c9d5483f6d7660fab37ac55160e9d41263f47c1443a33b0a-merged.mount: Deactivated successfully.
Jan 26 15:39:38 compute-0 podman[240564]: 2026-01-26 15:39:38.485204592 +0000 UTC m=+0.410179686 container remove b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_goldstine, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:39:38 compute-0 systemd[1]: libpod-conmon-b068d87c2a3bb2099052b720687dcbc6ed9748e26cc903eb299ff9065d48cef9.scope: Deactivated successfully.
Jan 26 15:39:38 compute-0 podman[240606]: 2026-01-26 15:39:38.673879704 +0000 UTC m=+0.043796511 container create ceae88f16023c8f32d0f893bc47b8fffd303505f2dd47d5b1bec3cf0668a63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:39:38 compute-0 systemd[1]: Started libpod-conmon-ceae88f16023c8f32d0f893bc47b8fffd303505f2dd47d5b1bec3cf0668a63b4.scope.
Jan 26 15:39:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:39:38 compute-0 podman[240606]: 2026-01-26 15:39:38.65302271 +0000 UTC m=+0.022939527 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c774517b171309ce485cf780af72d58c5a39bb9b9463255e0b08c6bf7d8cbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c774517b171309ce485cf780af72d58c5a39bb9b9463255e0b08c6bf7d8cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c774517b171309ce485cf780af72d58c5a39bb9b9463255e0b08c6bf7d8cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c774517b171309ce485cf780af72d58c5a39bb9b9463255e0b08c6bf7d8cbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c774517b171309ce485cf780af72d58c5a39bb9b9463255e0b08c6bf7d8cbb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:38 compute-0 podman[240606]: 2026-01-26 15:39:38.763029373 +0000 UTC m=+0.132946200 container init ceae88f16023c8f32d0f893bc47b8fffd303505f2dd47d5b1bec3cf0668a63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Jan 26 15:39:38 compute-0 podman[240606]: 2026-01-26 15:39:38.777061549 +0000 UTC m=+0.146978346 container start ceae88f16023c8f32d0f893bc47b8fffd303505f2dd47d5b1bec3cf0668a63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:39:38 compute-0 podman[240606]: 2026-01-26 15:39:38.78117795 +0000 UTC m=+0.151094747 container attach ceae88f16023c8f32d0f893bc47b8fffd303505f2dd47d5b1bec3cf0668a63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:39:38 compute-0 ceph-mon[75140]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:39 compute-0 hopeful_euler[240623]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:39:39 compute-0 hopeful_euler[240623]: --> All data devices are unavailable
Jan 26 15:39:39 compute-0 systemd[1]: libpod-ceae88f16023c8f32d0f893bc47b8fffd303505f2dd47d5b1bec3cf0668a63b4.scope: Deactivated successfully.
Jan 26 15:39:39 compute-0 podman[240606]: 2026-01-26 15:39:39.316847161 +0000 UTC m=+0.686763968 container died ceae88f16023c8f32d0f893bc47b8fffd303505f2dd47d5b1bec3cf0668a63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:39:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-86c774517b171309ce485cf780af72d58c5a39bb9b9463255e0b08c6bf7d8cbb-merged.mount: Deactivated successfully.
Jan 26 15:39:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:39 compute-0 podman[240606]: 2026-01-26 15:39:39.364391622 +0000 UTC m=+0.734308419 container remove ceae88f16023c8f32d0f893bc47b8fffd303505f2dd47d5b1bec3cf0668a63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_euler, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:39:39 compute-0 systemd[1]: libpod-conmon-ceae88f16023c8f32d0f893bc47b8fffd303505f2dd47d5b1bec3cf0668a63b4.scope: Deactivated successfully.
Jan 26 15:39:39 compute-0 sudo[240526]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:39 compute-0 sudo[240657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:39:39 compute-0 sudo[240657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:39:39 compute-0 sudo[240657]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:39 compute-0 sudo[240682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:39:39 compute-0 sudo[240682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:39:39 compute-0 podman[240719]: 2026-01-26 15:39:39.811201892 +0000 UTC m=+0.026163447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:39:39 compute-0 podman[240719]: 2026-01-26 15:39:39.92873215 +0000 UTC m=+0.143693745 container create 5d56bd1a05dc82d1ba4242ec57369b088dba258c6f68d3559a412007f48561da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Jan 26 15:39:39 compute-0 ceph-mon[75140]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:40 compute-0 systemd[1]: Started libpod-conmon-5d56bd1a05dc82d1ba4242ec57369b088dba258c6f68d3559a412007f48561da.scope.
Jan 26 15:39:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:39:40 compute-0 podman[240719]: 2026-01-26 15:39:40.057613518 +0000 UTC m=+0.272575194 container init 5d56bd1a05dc82d1ba4242ec57369b088dba258c6f68d3559a412007f48561da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:39:40 compute-0 podman[240719]: 2026-01-26 15:39:40.070737172 +0000 UTC m=+0.285698727 container start 5d56bd1a05dc82d1ba4242ec57369b088dba258c6f68d3559a412007f48561da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 15:39:40 compute-0 podman[240719]: 2026-01-26 15:39:40.074559446 +0000 UTC m=+0.289521081 container attach 5d56bd1a05dc82d1ba4242ec57369b088dba258c6f68d3559a412007f48561da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:39:40 compute-0 happy_wiles[240735]: 167 167
Jan 26 15:39:40 compute-0 systemd[1]: libpod-5d56bd1a05dc82d1ba4242ec57369b088dba258c6f68d3559a412007f48561da.scope: Deactivated successfully.
Jan 26 15:39:40 compute-0 podman[240719]: 2026-01-26 15:39:40.076051733 +0000 UTC m=+0.291013298 container died 5d56bd1a05dc82d1ba4242ec57369b088dba258c6f68d3559a412007f48561da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 15:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d00715889c5ec86ff6a66c65aecc1cfe16bc3338f40d90b230409791ae1d9f05-merged.mount: Deactivated successfully.
Jan 26 15:39:40 compute-0 podman[240719]: 2026-01-26 15:39:40.113713292 +0000 UTC m=+0.328674857 container remove 5d56bd1a05dc82d1ba4242ec57369b088dba258c6f68d3559a412007f48561da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_wiles, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 15:39:40 compute-0 systemd[1]: libpod-conmon-5d56bd1a05dc82d1ba4242ec57369b088dba258c6f68d3559a412007f48561da.scope: Deactivated successfully.
Jan 26 15:39:40 compute-0 podman[240758]: 2026-01-26 15:39:40.314688148 +0000 UTC m=+0.049590324 container create 3cbc905cbf54b2adddd65042ad41a93428776388c31debc9c9903c4e3da4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_germain, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:39:40 compute-0 systemd[1]: Started libpod-conmon-3cbc905cbf54b2adddd65042ad41a93428776388c31debc9c9903c4e3da4543d.scope.
Jan 26 15:39:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f38b4dc829ba050945b45480851f69e20e3babe0ac13eee8638a2b01cc1b770/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f38b4dc829ba050945b45480851f69e20e3babe0ac13eee8638a2b01cc1b770/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f38b4dc829ba050945b45480851f69e20e3babe0ac13eee8638a2b01cc1b770/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f38b4dc829ba050945b45480851f69e20e3babe0ac13eee8638a2b01cc1b770/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:40 compute-0 podman[240758]: 2026-01-26 15:39:40.296434047 +0000 UTC m=+0.031336253 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:39:40 compute-0 podman[240758]: 2026-01-26 15:39:40.395030309 +0000 UTC m=+0.129932505 container init 3cbc905cbf54b2adddd65042ad41a93428776388c31debc9c9903c4e3da4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 15:39:40 compute-0 podman[240758]: 2026-01-26 15:39:40.401759195 +0000 UTC m=+0.136661371 container start 3cbc905cbf54b2adddd65042ad41a93428776388c31debc9c9903c4e3da4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 26 15:39:40 compute-0 podman[240758]: 2026-01-26 15:39:40.405166219 +0000 UTC m=+0.140068405 container attach 3cbc905cbf54b2adddd65042ad41a93428776388c31debc9c9903c4e3da4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:39:40 compute-0 agitated_germain[240774]: {
Jan 26 15:39:40 compute-0 agitated_germain[240774]:     "0": [
Jan 26 15:39:40 compute-0 agitated_germain[240774]:         {
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "devices": [
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "/dev/loop3"
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             ],
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_name": "ceph_lv0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_size": "21470642176",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "name": "ceph_lv0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "tags": {
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.cluster_name": "ceph",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.crush_device_class": "",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.encrypted": "0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.objectstore": "bluestore",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.osd_id": "0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.type": "block",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.vdo": "0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.with_tpm": "0"
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             },
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "type": "block",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "vg_name": "ceph_vg0"
Jan 26 15:39:40 compute-0 agitated_germain[240774]:         }
Jan 26 15:39:40 compute-0 agitated_germain[240774]:     ],
Jan 26 15:39:40 compute-0 agitated_germain[240774]:     "1": [
Jan 26 15:39:40 compute-0 agitated_germain[240774]:         {
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "devices": [
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "/dev/loop4"
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             ],
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_name": "ceph_lv1",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_size": "21470642176",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "name": "ceph_lv1",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "tags": {
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.cluster_name": "ceph",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.crush_device_class": "",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.encrypted": "0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.objectstore": "bluestore",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.osd_id": "1",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.type": "block",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.vdo": "0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.with_tpm": "0"
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             },
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "type": "block",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "vg_name": "ceph_vg1"
Jan 26 15:39:40 compute-0 agitated_germain[240774]:         }
Jan 26 15:39:40 compute-0 agitated_germain[240774]:     ],
Jan 26 15:39:40 compute-0 agitated_germain[240774]:     "2": [
Jan 26 15:39:40 compute-0 agitated_germain[240774]:         {
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "devices": [
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "/dev/loop5"
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             ],
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_name": "ceph_lv2",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_size": "21470642176",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "name": "ceph_lv2",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "tags": {
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.cluster_name": "ceph",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.crush_device_class": "",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.encrypted": "0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.objectstore": "bluestore",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.osd_id": "2",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.type": "block",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.vdo": "0",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:                 "ceph.with_tpm": "0"
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             },
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "type": "block",
Jan 26 15:39:40 compute-0 agitated_germain[240774]:             "vg_name": "ceph_vg2"
Jan 26 15:39:40 compute-0 agitated_germain[240774]:         }
Jan 26 15:39:40 compute-0 agitated_germain[240774]:     ]
Jan 26 15:39:40 compute-0 agitated_germain[240774]: }
Jan 26 15:39:40 compute-0 systemd[1]: libpod-3cbc905cbf54b2adddd65042ad41a93428776388c31debc9c9903c4e3da4543d.scope: Deactivated successfully.
Jan 26 15:39:40 compute-0 podman[240758]: 2026-01-26 15:39:40.69349911 +0000 UTC m=+0.428401286 container died 3cbc905cbf54b2adddd65042ad41a93428776388c31debc9c9903c4e3da4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_germain, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:39:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f38b4dc829ba050945b45480851f69e20e3babe0ac13eee8638a2b01cc1b770-merged.mount: Deactivated successfully.
Jan 26 15:39:41 compute-0 podman[240758]: 2026-01-26 15:39:41.044020923 +0000 UTC m=+0.778923099 container remove 3cbc905cbf54b2adddd65042ad41a93428776388c31debc9c9903c4e3da4543d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_germain, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 26 15:39:41 compute-0 systemd[1]: libpod-conmon-3cbc905cbf54b2adddd65042ad41a93428776388c31debc9c9903c4e3da4543d.scope: Deactivated successfully.
Jan 26 15:39:41 compute-0 sudo[240682]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:41 compute-0 sudo[240795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:39:41 compute-0 sudo[240795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:39:41 compute-0 sudo[240795]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:39:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/872300011' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:39:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:39:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/872300011' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:39:41 compute-0 sudo[240820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:39:41 compute-0 sudo[240820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:39:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/872300011' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:39:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/872300011' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:39:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:39:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4174623178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:39:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:39:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4174623178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:39:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:41 compute-0 podman[240857]: 2026-01-26 15:39:41.471338291 +0000 UTC m=+0.041630307 container create 63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 15:39:41 compute-0 systemd[1]: Started libpod-conmon-63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175.scope.
Jan 26 15:39:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:39:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2430196541' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:39:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:39:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2430196541' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:39:41 compute-0 podman[240857]: 2026-01-26 15:39:41.453406819 +0000 UTC m=+0.023698895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:39:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:39:41 compute-0 podman[240857]: 2026-01-26 15:39:41.571197894 +0000 UTC m=+0.141489950 container init 63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 15:39:41 compute-0 podman[240857]: 2026-01-26 15:39:41.584119593 +0000 UTC m=+0.154411619 container start 63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:39:41 compute-0 podman[240857]: 2026-01-26 15:39:41.587943467 +0000 UTC m=+0.158235513 container attach 63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:39:41 compute-0 nostalgic_hofstadter[240873]: 167 167
Jan 26 15:39:41 compute-0 systemd[1]: libpod-63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175.scope: Deactivated successfully.
Jan 26 15:39:41 compute-0 conmon[240873]: conmon 63890cbdbddda61bc1eb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175.scope/container/memory.events
Jan 26 15:39:41 compute-0 podman[240857]: 2026-01-26 15:39:41.593049803 +0000 UTC m=+0.163341829 container died 63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 15:39:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9de13a070541478bf699e8225600d3e4d76b611a5aad29ab45162444a6b32536-merged.mount: Deactivated successfully.
Jan 26 15:39:41 compute-0 podman[240857]: 2026-01-26 15:39:41.652222082 +0000 UTC m=+0.222514118 container remove 63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hofstadter, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:39:41 compute-0 systemd[1]: libpod-conmon-63890cbdbddda61bc1ebd44b09004ae5041b00bf483a47972337bdc4a2b56175.scope: Deactivated successfully.
Jan 26 15:39:41 compute-0 podman[240897]: 2026-01-26 15:39:41.78880821 +0000 UTC m=+0.021430079 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:39:42 compute-0 podman[240897]: 2026-01-26 15:39:42.167216352 +0000 UTC m=+0.399838201 container create 35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:39:42 compute-0 systemd[1]: Started libpod-conmon-35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956.scope.
Jan 26 15:39:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4174623178' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:39:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4174623178' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:39:42 compute-0 ceph-mon[75140]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2430196541' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:39:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2430196541' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:39:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2a6e16ce5b7718cc15cb8c8a67c080fbbb4e4d80b0c16a3784e4fc326efe3b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2a6e16ce5b7718cc15cb8c8a67c080fbbb4e4d80b0c16a3784e4fc326efe3b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2a6e16ce5b7718cc15cb8c8a67c080fbbb4e4d80b0c16a3784e4fc326efe3b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f2a6e16ce5b7718cc15cb8c8a67c080fbbb4e4d80b0c16a3784e4fc326efe3b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:39:42 compute-0 podman[240897]: 2026-01-26 15:39:42.260669217 +0000 UTC m=+0.493291076 container init 35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 15:39:42 compute-0 podman[240897]: 2026-01-26 15:39:42.266948982 +0000 UTC m=+0.499570831 container start 35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 15:39:42 compute-0 podman[240897]: 2026-01-26 15:39:42.271048162 +0000 UTC m=+0.503670021 container attach 35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:39:42 compute-0 lvm[240994]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:39:42 compute-0 lvm[240990]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:39:42 compute-0 lvm[240990]: VG ceph_vg0 finished
Jan 26 15:39:42 compute-0 lvm[240994]: VG ceph_vg2 finished
Jan 26 15:39:42 compute-0 lvm[240993]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:39:42 compute-0 lvm[240993]: VG ceph_vg1 finished
Jan 26 15:39:43 compute-0 stoic_goldwasser[240913]: {}
Jan 26 15:39:43 compute-0 systemd[1]: libpod-35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956.scope: Deactivated successfully.
Jan 26 15:39:43 compute-0 podman[240897]: 2026-01-26 15:39:43.081657402 +0000 UTC m=+1.314279251 container died 35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:39:43 compute-0 systemd[1]: libpod-35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956.scope: Consumed 1.316s CPU time.
Jan 26 15:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f2a6e16ce5b7718cc15cb8c8a67c080fbbb4e4d80b0c16a3784e4fc326efe3b-merged.mount: Deactivated successfully.
Jan 26 15:39:43 compute-0 podman[240897]: 2026-01-26 15:39:43.145169629 +0000 UTC m=+1.377791478 container remove 35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 15:39:43 compute-0 systemd[1]: libpod-conmon-35429a7543276b54b0a190ab896c757331de600d91f45483cccf843129612956.scope: Deactivated successfully.
Jan 26 15:39:43 compute-0 sudo[240820]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:39:43 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:39:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:39:43 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:39:43 compute-0 sudo[241008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:39:43 compute-0 sudo[241008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:39:43 compute-0 sudo[241008]: pam_unix(sudo:session): session closed for user root
Jan 26 15:39:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:39:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:39:44 compute-0 ceph-mon[75140]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:47 compute-0 ceph-mon[75140]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:48 compute-0 ceph-mon[75140]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:39:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:39:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:50 compute-0 ceph-mon[75140]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:52 compute-0 ceph-mon[75140]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:54 compute-0 ceph-mon[75140]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:56 compute-0 ceph-mon[75140]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:39:58 compute-0 ceph-mon[75140]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:39:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:39:59.201 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:39:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:39:59.202 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:39:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:39:59.202 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:39:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:40:00 compute-0 ceph-mon[75140]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:01 compute-0 anacron[30880]: Job `cron.weekly' started
Jan 26 15:40:01 compute-0 anacron[30880]: Job `cron.weekly' terminated
Jan 26 15:40:02 compute-0 podman[241035]: 2026-01-26 15:40:02.433961304 +0000 UTC m=+0.119935889 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 15:40:02 compute-0 ceph-mon[75140]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:02 compute-0 podman[241062]: 2026-01-26 15:40:02.521918443 +0000 UTC m=+0.057240372 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:40:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:04 compute-0 ceph-mon[75140]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 15:40:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3293 writes, 14K keys, 3293 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3293 writes, 3293 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1282 writes, 5575 keys, 1282 commit groups, 1.0 writes per commit group, ingest: 8.60 MB, 0.01 MB/s
                                           Interval WAL: 1282 writes, 1282 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     77.9      0.19              0.05         6    0.032       0      0       0.0       0.0
                                             L6      1/0    7.38 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    137.0    113.3      0.32              0.08         5    0.065     20K   2209       0.0       0.0
                                            Sum      1/0    7.38 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.4     85.8    100.1      0.52              0.13        11    0.047     20K   2209       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    112.6    114.7      0.25              0.06         6    0.041     12K   1466       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    137.0    113.3      0.32              0.08         5    0.065     20K   2209       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     79.6      0.19              0.05         5    0.038       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.015, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.05 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 308.00 MB usage: 1.69 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(89,1.50 MB,0.487964%) FilterBlock(12,63.30 KB,0.0200693%) IndexBlock(12,130.14 KB,0.0412631%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 15:40:06 compute-0 ceph-mon[75140]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:08 compute-0 ceph-mon[75140]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:10 compute-0 ceph-mon[75140]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:12 compute-0 ceph-mon[75140]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:14 compute-0 ceph-mon[75140]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.513 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.539 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.540 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.540 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.540 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.541 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.541 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.541 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.541 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.541 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:40:16 compute-0 ceph-mon[75140]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.571 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.571 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.571 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.572 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:40:16 compute-0 nova_compute[239965]: 2026-01-26 15:40:16.572 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:40:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 26 15:40:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2427143273' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 26 15:40:16 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 15:40:16 compute-0 ceph-mgr[75431]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 15:40:16 compute-0 ceph-mgr[75431]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 15:40:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:40:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2370525082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:40:17 compute-0 nova_compute[239965]: 2026-01-26 15:40:17.178 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:40:17 compute-0 nova_compute[239965]: 2026-01-26 15:40:17.328 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:40:17 compute-0 nova_compute[239965]: 2026-01-26 15:40:17.329 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5162MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:40:17 compute-0 nova_compute[239965]: 2026-01-26 15:40:17.330 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:40:17 compute-0 nova_compute[239965]: 2026-01-26 15:40:17.330 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:40:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:17 compute-0 nova_compute[239965]: 2026-01-26 15:40:17.524 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:40:17 compute-0 nova_compute[239965]: 2026-01-26 15:40:17.525 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:40:17 compute-0 nova_compute[239965]: 2026-01-26 15:40:17.550 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:40:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2427143273' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 26 15:40:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2370525082' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:40:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:40:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2071238640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:40:18 compute-0 nova_compute[239965]: 2026-01-26 15:40:18.098 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:40:18 compute-0 nova_compute[239965]: 2026-01-26 15:40:18.106 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:40:18 compute-0 nova_compute[239965]: 2026-01-26 15:40:18.128 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:40:18 compute-0 nova_compute[239965]: 2026-01-26 15:40:18.152 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:40:18 compute-0 nova_compute[239965]: 2026-01-26 15:40:18.153 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:40:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:18 compute-0 ceph-mon[75140]: from='client.14336 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 15:40:18 compute-0 ceph-mon[75140]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2071238640' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:40:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:20 compute-0 ceph-mon[75140]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:22 compute-0 ceph-mon[75140]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:23 compute-0 sshd-session[241127]: Connection closed by authenticating user root 178.128.250.55 port 36182 [preauth]
Jan 26 15:40:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.346327) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442023346389, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1702, "num_deletes": 505, "total_data_size": 2316770, "memory_usage": 2361856, "flush_reason": "Manual Compaction"}
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442023366041, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2283619, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13495, "largest_seqno": 15196, "table_properties": {"data_size": 2276212, "index_size": 3901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 17604, "raw_average_key_size": 18, "raw_value_size": 2259431, "raw_average_value_size": 2356, "num_data_blocks": 179, "num_entries": 959, "num_filter_entries": 959, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769441868, "oldest_key_time": 1769441868, "file_creation_time": 1769442023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 19779 microseconds, and 6533 cpu microseconds.
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.366104) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2283619 bytes OK
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.366131) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.367970) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.368004) EVENT_LOG_v1 {"time_micros": 1769442023367999, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.368027) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2308383, prev total WAL file size 2308383, number of live WAL files 2.
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:40:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.368833) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2230KB)], [32(7558KB)]
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442023368881, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 10023562, "oldest_snapshot_seqno": -1}
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3963 keys, 7990070 bytes, temperature: kUnknown
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442023420389, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7990070, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7960965, "index_size": 18115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9925, "raw_key_size": 96880, "raw_average_key_size": 24, "raw_value_size": 7886633, "raw_average_value_size": 1990, "num_data_blocks": 767, "num_entries": 3963, "num_filter_entries": 3963, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769442023, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.420645) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7990070 bytes
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.422084) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.2 rd, 154.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 7.4 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 4986, records dropped: 1023 output_compression: NoCompression
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.422100) EVENT_LOG_v1 {"time_micros": 1769442023422092, "job": 14, "event": "compaction_finished", "compaction_time_micros": 51605, "compaction_time_cpu_micros": 16812, "output_level": 6, "num_output_files": 1, "total_output_size": 7990070, "num_input_records": 4986, "num_output_records": 3963, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442023422527, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442023423785, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.368732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.423835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.423839) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.423841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.423843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:40:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:40:23.423845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:40:24 compute-0 ceph-mon[75140]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:26 compute-0 ceph-mon[75140]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:28 compute-0 ceph-mon[75140]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:40:28
Jan 26 15:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', '.mgr']
Jan 26 15:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:40:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:40:30 compute-0 ceph-mon[75140]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:32 compute-0 ceph-mon[75140]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:33 compute-0 podman[241129]: 2026-01-26 15:40:33.371677616 +0000 UTC m=+0.056088884 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:40:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:33 compute-0 podman[241130]: 2026-01-26 15:40:33.426894529 +0000 UTC m=+0.103975406 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 15:40:34 compute-0 ceph-mon[75140]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:36 compute-0 ceph-mon[75140]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:38 compute-0 ceph-mon[75140]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:41 compute-0 ceph-mon[75140]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 26 15:40:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1346510481' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 26 15:40:41 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 15:40:41 compute-0 ceph-mgr[75431]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 15:40:41 compute-0 ceph-mgr[75431]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 26 15:40:42 compute-0 ceph-mon[75140]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1346510481' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 26 15:40:42 compute-0 ceph-mon[75140]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 26 15:40:43 compute-0 sudo[241170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:40:43 compute-0 sudo[241170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:40:43 compute-0 sudo[241170]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:43 compute-0 sudo[241195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:40:43 compute-0 sudo[241195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:40:44 compute-0 sudo[241195]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:40:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:40:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:40:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:40:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:40:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:40:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:40:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:40:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:40:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:40:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:40:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:40:44 compute-0 sudo[241251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:40:44 compute-0 sudo[241251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:40:44 compute-0 sudo[241251]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:44 compute-0 sudo[241276]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:40:44 compute-0 sudo[241276]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:40:44 compute-0 ceph-mon[75140]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:40:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:40:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:40:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:40:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:40:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:40:44 compute-0 podman[241313]: 2026-01-26 15:40:44.568791843 +0000 UTC m=+0.039036434 container create 470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclaren, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 15:40:44 compute-0 systemd[1]: Started libpod-conmon-470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134.scope.
Jan 26 15:40:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:40:44 compute-0 podman[241313]: 2026-01-26 15:40:44.63601336 +0000 UTC m=+0.106257941 container init 470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclaren, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 15:40:44 compute-0 podman[241313]: 2026-01-26 15:40:44.646000758 +0000 UTC m=+0.116245349 container start 470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 15:40:44 compute-0 podman[241313]: 2026-01-26 15:40:44.551966668 +0000 UTC m=+0.022211259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:40:44 compute-0 podman[241313]: 2026-01-26 15:40:44.649301809 +0000 UTC m=+0.119546410 container attach 470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclaren, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 15:40:44 compute-0 systemd[1]: libpod-470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134.scope: Deactivated successfully.
Jan 26 15:40:44 compute-0 sweet_mclaren[241329]: 167 167
Jan 26 15:40:44 compute-0 conmon[241329]: conmon 470ceab742552a72661f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134.scope/container/memory.events
Jan 26 15:40:44 compute-0 podman[241313]: 2026-01-26 15:40:44.655314927 +0000 UTC m=+0.125559538 container died 470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2db2af4a0b6d44c50361ab5b3615b5ba19fe77380679021e226bd85016cbf0b-merged.mount: Deactivated successfully.
Jan 26 15:40:44 compute-0 podman[241313]: 2026-01-26 15:40:44.698224415 +0000 UTC m=+0.168468996 container remove 470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclaren, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:40:44 compute-0 systemd[1]: libpod-conmon-470ceab742552a72661f0bf401b16fe9454c59c82e9c486751d78d7900812134.scope: Deactivated successfully.
Jan 26 15:40:44 compute-0 podman[241351]: 2026-01-26 15:40:44.85983454 +0000 UTC m=+0.041369310 container create 221abccbb8ed2490965be7294072fe975d5b3ebc20bb1ef47314deb064e2cf37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:40:44 compute-0 systemd[1]: Started libpod-conmon-221abccbb8ed2490965be7294072fe975d5b3ebc20bb1ef47314deb064e2cf37.scope.
Jan 26 15:40:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27d5e2bd947569becc42ffd555f73a7d7c25c5157667719e828f455e22f6cd23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27d5e2bd947569becc42ffd555f73a7d7c25c5157667719e828f455e22f6cd23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27d5e2bd947569becc42ffd555f73a7d7c25c5157667719e828f455e22f6cd23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27d5e2bd947569becc42ffd555f73a7d7c25c5157667719e828f455e22f6cd23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27d5e2bd947569becc42ffd555f73a7d7c25c5157667719e828f455e22f6cd23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:44 compute-0 podman[241351]: 2026-01-26 15:40:44.839664733 +0000 UTC m=+0.021199523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:40:44 compute-0 podman[241351]: 2026-01-26 15:40:44.993461356 +0000 UTC m=+0.174996156 container init 221abccbb8ed2490965be7294072fe975d5b3ebc20bb1ef47314deb064e2cf37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 15:40:44 compute-0 podman[241351]: 2026-01-26 15:40:44.999832953 +0000 UTC m=+0.181367723 container start 221abccbb8ed2490965be7294072fe975d5b3ebc20bb1ef47314deb064e2cf37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_liskov, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 15:40:45 compute-0 podman[241351]: 2026-01-26 15:40:45.007934763 +0000 UTC m=+0.189469533 container attach 221abccbb8ed2490965be7294072fe975d5b3ebc20bb1ef47314deb064e2cf37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:40:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:45 compute-0 beautiful_liskov[241367]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:40:45 compute-0 beautiful_liskov[241367]: --> All data devices are unavailable
Jan 26 15:40:45 compute-0 systemd[1]: libpod-221abccbb8ed2490965be7294072fe975d5b3ebc20bb1ef47314deb064e2cf37.scope: Deactivated successfully.
Jan 26 15:40:45 compute-0 podman[241351]: 2026-01-26 15:40:45.53336469 +0000 UTC m=+0.714899460 container died 221abccbb8ed2490965be7294072fe975d5b3ebc20bb1ef47314deb064e2cf37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-27d5e2bd947569becc42ffd555f73a7d7c25c5157667719e828f455e22f6cd23-merged.mount: Deactivated successfully.
Jan 26 15:40:45 compute-0 podman[241351]: 2026-01-26 15:40:45.58040821 +0000 UTC m=+0.761943010 container remove 221abccbb8ed2490965be7294072fe975d5b3ebc20bb1ef47314deb064e2cf37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 15:40:45 compute-0 systemd[1]: libpod-conmon-221abccbb8ed2490965be7294072fe975d5b3ebc20bb1ef47314deb064e2cf37.scope: Deactivated successfully.
Jan 26 15:40:45 compute-0 sudo[241276]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:45 compute-0 sudo[241399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:40:45 compute-0 sudo[241399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:40:45 compute-0 sudo[241399]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:45 compute-0 sudo[241424]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:40:45 compute-0 sudo[241424]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:40:46 compute-0 podman[241461]: 2026-01-26 15:40:46.031866763 +0000 UTC m=+0.045596175 container create f0d5ef79eb42fe4dc5020150c172529a669eb9b331d3603c120172343d8530bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_curie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:40:46 compute-0 systemd[1]: Started libpod-conmon-f0d5ef79eb42fe4dc5020150c172529a669eb9b331d3603c120172343d8530bf.scope.
Jan 26 15:40:46 compute-0 podman[241461]: 2026-01-26 15:40:46.010864695 +0000 UTC m=+0.024594157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:40:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:40:46 compute-0 podman[241461]: 2026-01-26 15:40:46.120898829 +0000 UTC m=+0.134628251 container init f0d5ef79eb42fe4dc5020150c172529a669eb9b331d3603c120172343d8530bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_curie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:40:46 compute-0 podman[241461]: 2026-01-26 15:40:46.127954623 +0000 UTC m=+0.141684025 container start f0d5ef79eb42fe4dc5020150c172529a669eb9b331d3603c120172343d8530bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_curie, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:40:46 compute-0 podman[241461]: 2026-01-26 15:40:46.132242739 +0000 UTC m=+0.145972161 container attach f0d5ef79eb42fe4dc5020150c172529a669eb9b331d3603c120172343d8530bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:40:46 compute-0 modest_curie[241477]: 167 167
Jan 26 15:40:46 compute-0 systemd[1]: libpod-f0d5ef79eb42fe4dc5020150c172529a669eb9b331d3603c120172343d8530bf.scope: Deactivated successfully.
Jan 26 15:40:46 compute-0 podman[241461]: 2026-01-26 15:40:46.133630133 +0000 UTC m=+0.147359525 container died f0d5ef79eb42fe4dc5020150c172529a669eb9b331d3603c120172343d8530bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_curie, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:40:46 compute-0 irqbalance[785]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 26 15:40:46 compute-0 irqbalance[785]: IRQ 26 affinity is now unmanaged
Jan 26 15:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8414b803d21eb51c980e38840d44d9373bd447038787029f0e92344e392ee588-merged.mount: Deactivated successfully.
Jan 26 15:40:46 compute-0 podman[241461]: 2026-01-26 15:40:46.171563538 +0000 UTC m=+0.185292940 container remove f0d5ef79eb42fe4dc5020150c172529a669eb9b331d3603c120172343d8530bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:40:46 compute-0 systemd[1]: libpod-conmon-f0d5ef79eb42fe4dc5020150c172529a669eb9b331d3603c120172343d8530bf.scope: Deactivated successfully.
Jan 26 15:40:46 compute-0 podman[241501]: 2026-01-26 15:40:46.344190465 +0000 UTC m=+0.047604695 container create ef3185834d94686d53ba438d86a8146d3ab534ab062ca3247acaa7afd21371fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 15:40:46 compute-0 systemd[1]: Started libpod-conmon-ef3185834d94686d53ba438d86a8146d3ab534ab062ca3247acaa7afd21371fc.scope.
Jan 26 15:40:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb1b57a0a115c4c28b2ad0ca6bc51fa981f253a399ddbc0803e17e87c4f1c4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb1b57a0a115c4c28b2ad0ca6bc51fa981f253a399ddbc0803e17e87c4f1c4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb1b57a0a115c4c28b2ad0ca6bc51fa981f253a399ddbc0803e17e87c4f1c4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddb1b57a0a115c4c28b2ad0ca6bc51fa981f253a399ddbc0803e17e87c4f1c4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:46 compute-0 podman[241501]: 2026-01-26 15:40:46.323996198 +0000 UTC m=+0.027410438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:40:46 compute-0 podman[241501]: 2026-01-26 15:40:46.429904449 +0000 UTC m=+0.133318689 container init ef3185834d94686d53ba438d86a8146d3ab534ab062ca3247acaa7afd21371fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 15:40:46 compute-0 podman[241501]: 2026-01-26 15:40:46.440734826 +0000 UTC m=+0.144149056 container start ef3185834d94686d53ba438d86a8146d3ab534ab062ca3247acaa7afd21371fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 15:40:46 compute-0 podman[241501]: 2026-01-26 15:40:46.44495882 +0000 UTC m=+0.148373040 container attach ef3185834d94686d53ba438d86a8146d3ab534ab062ca3247acaa7afd21371fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hellman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:40:46 compute-0 ceph-mon[75140]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]: {
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:     "0": [
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:         {
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "devices": [
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "/dev/loop3"
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             ],
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_name": "ceph_lv0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_size": "21470642176",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "name": "ceph_lv0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "tags": {
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.cluster_name": "ceph",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.crush_device_class": "",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.encrypted": "0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.objectstore": "bluestore",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.osd_id": "0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.type": "block",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.vdo": "0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.with_tpm": "0"
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             },
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "type": "block",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "vg_name": "ceph_vg0"
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:         }
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:     ],
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:     "1": [
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:         {
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "devices": [
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "/dev/loop4"
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             ],
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_name": "ceph_lv1",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_size": "21470642176",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "name": "ceph_lv1",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "tags": {
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.cluster_name": "ceph",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.crush_device_class": "",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.encrypted": "0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.objectstore": "bluestore",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.osd_id": "1",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.type": "block",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.vdo": "0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.with_tpm": "0"
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             },
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "type": "block",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "vg_name": "ceph_vg1"
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:         }
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:     ],
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:     "2": [
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:         {
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "devices": [
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "/dev/loop5"
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             ],
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_name": "ceph_lv2",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_size": "21470642176",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "name": "ceph_lv2",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "tags": {
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.cluster_name": "ceph",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.crush_device_class": "",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.encrypted": "0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.objectstore": "bluestore",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.osd_id": "2",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.type": "block",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.vdo": "0",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:                 "ceph.with_tpm": "0"
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             },
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "type": "block",
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:             "vg_name": "ceph_vg2"
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:         }
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]:     ]
Jan 26 15:40:46 compute-0 nostalgic_hellman[241517]: }
Jan 26 15:40:46 compute-0 systemd[1]: libpod-ef3185834d94686d53ba438d86a8146d3ab534ab062ca3247acaa7afd21371fc.scope: Deactivated successfully.
Jan 26 15:40:46 compute-0 podman[241501]: 2026-01-26 15:40:46.742643021 +0000 UTC m=+0.446057241 container died ef3185834d94686d53ba438d86a8146d3ab534ab062ca3247acaa7afd21371fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 26 15:40:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddb1b57a0a115c4c28b2ad0ca6bc51fa981f253a399ddbc0803e17e87c4f1c4a-merged.mount: Deactivated successfully.
Jan 26 15:40:46 compute-0 podman[241501]: 2026-01-26 15:40:46.792204793 +0000 UTC m=+0.495619013 container remove ef3185834d94686d53ba438d86a8146d3ab534ab062ca3247acaa7afd21371fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 15:40:46 compute-0 systemd[1]: libpod-conmon-ef3185834d94686d53ba438d86a8146d3ab534ab062ca3247acaa7afd21371fc.scope: Deactivated successfully.
Jan 26 15:40:46 compute-0 sudo[241424]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:46 compute-0 sudo[241539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:40:46 compute-0 sudo[241539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:40:46 compute-0 sudo[241539]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:46 compute-0 sudo[241564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:40:46 compute-0 sudo[241564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:40:47 compute-0 podman[241601]: 2026-01-26 15:40:47.270695633 +0000 UTC m=+0.044837486 container create 45ea5ab46baf44d41a0125177079fb02994956462061ee194d128685da0302f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 15:40:47 compute-0 systemd[1]: Started libpod-conmon-45ea5ab46baf44d41a0125177079fb02994956462061ee194d128685da0302f6.scope.
Jan 26 15:40:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:40:47 compute-0 podman[241601]: 2026-01-26 15:40:47.2506637 +0000 UTC m=+0.024805563 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:40:47 compute-0 podman[241601]: 2026-01-26 15:40:47.355272559 +0000 UTC m=+0.129414432 container init 45ea5ab46baf44d41a0125177079fb02994956462061ee194d128685da0302f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:40:47 compute-0 podman[241601]: 2026-01-26 15:40:47.360363875 +0000 UTC m=+0.134505718 container start 45ea5ab46baf44d41a0125177079fb02994956462061ee194d128685da0302f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:40:47 compute-0 magical_mirzakhani[241617]: 167 167
Jan 26 15:40:47 compute-0 systemd[1]: libpod-45ea5ab46baf44d41a0125177079fb02994956462061ee194d128685da0302f6.scope: Deactivated successfully.
Jan 26 15:40:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:47 compute-0 podman[241601]: 2026-01-26 15:40:47.5641213 +0000 UTC m=+0.338263153 container attach 45ea5ab46baf44d41a0125177079fb02994956462061ee194d128685da0302f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:40:47 compute-0 podman[241601]: 2026-01-26 15:40:47.564813916 +0000 UTC m=+0.338955769 container died 45ea5ab46baf44d41a0125177079fb02994956462061ee194d128685da0302f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 15:40:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-86be90570992b4ef7dc61def60cced2dc7dfb33d6d6691313ee07f1466a19e16-merged.mount: Deactivated successfully.
Jan 26 15:40:47 compute-0 podman[241601]: 2026-01-26 15:40:47.685830781 +0000 UTC m=+0.459972624 container remove 45ea5ab46baf44d41a0125177079fb02994956462061ee194d128685da0302f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_mirzakhani, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:40:47 compute-0 systemd[1]: libpod-conmon-45ea5ab46baf44d41a0125177079fb02994956462061ee194d128685da0302f6.scope: Deactivated successfully.
Jan 26 15:40:47 compute-0 podman[241642]: 2026-01-26 15:40:47.888210642 +0000 UTC m=+0.074100529 container create cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mahavira, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 15:40:47 compute-0 podman[241642]: 2026-01-26 15:40:47.83947919 +0000 UTC m=+0.025369097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:40:47 compute-0 systemd[1]: Started libpod-conmon-cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02.scope.
Jan 26 15:40:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96377157da72c735660c01c29cf89e9614f4a65f6aecf550c2639848257c1c03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96377157da72c735660c01c29cf89e9614f4a65f6aecf550c2639848257c1c03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96377157da72c735660c01c29cf89e9614f4a65f6aecf550c2639848257c1c03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96377157da72c735660c01c29cf89e9614f4a65f6aecf550c2639848257c1c03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:40:48 compute-0 podman[241642]: 2026-01-26 15:40:48.009486453 +0000 UTC m=+0.195376360 container init cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mahavira, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:40:48 compute-0 podman[241642]: 2026-01-26 15:40:48.015606133 +0000 UTC m=+0.201496020 container start cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:40:48 compute-0 podman[241642]: 2026-01-26 15:40:48.121094824 +0000 UTC m=+0.306984741 container attach cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 15:40:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:40:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:40:48 compute-0 ceph-mon[75140]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:40:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3649604345' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:40:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:40:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3649604345' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:40:48 compute-0 lvm[241737]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:40:48 compute-0 lvm[241737]: VG ceph_vg1 finished
Jan 26 15:40:48 compute-0 lvm[241736]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:40:48 compute-0 lvm[241736]: VG ceph_vg0 finished
Jan 26 15:40:48 compute-0 lvm[241739]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:40:48 compute-0 lvm[241739]: VG ceph_vg2 finished
Jan 26 15:40:48 compute-0 lvm[241742]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:40:48 compute-0 lvm[241742]: VG ceph_vg2 finished
Jan 26 15:40:48 compute-0 adoring_mahavira[241658]: {}
Jan 26 15:40:48 compute-0 systemd[1]: libpod-cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02.scope: Deactivated successfully.
Jan 26 15:40:48 compute-0 systemd[1]: libpod-cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02.scope: Consumed 1.299s CPU time.
Jan 26 15:40:48 compute-0 podman[241642]: 2026-01-26 15:40:48.816475315 +0000 UTC m=+1.002365262 container died cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mahavira, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:40:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-96377157da72c735660c01c29cf89e9614f4a65f6aecf550c2639848257c1c03-merged.mount: Deactivated successfully.
Jan 26 15:40:48 compute-0 podman[241642]: 2026-01-26 15:40:48.858137266 +0000 UTC m=+1.044027153 container remove cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_mahavira, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 15:40:48 compute-0 systemd[1]: libpod-conmon-cbb1ac9ac30d328740415f28d4b22dc997516a229371a0bc1c273b18fb1cef02.scope: Deactivated successfully.
Jan 26 15:40:48 compute-0 sudo[241564]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:40:48 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:40:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:40:48 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:40:48 compute-0 sudo[241755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:40:48 compute-0 sudo[241755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:40:48 compute-0 sudo[241755]: pam_unix(sudo:session): session closed for user root
Jan 26 15:40:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3649604345' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:40:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3649604345' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:40:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:40:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:40:50 compute-0 ceph-mon[75140]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:52 compute-0 ceph-mon[75140]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:54 compute-0 ceph-mon[75140]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:56 compute-0 ceph-mon[75140]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:57 compute-0 ceph-mon[75140]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:40:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:40:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:40:59.202 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:40:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:40:59.203 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:40:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:40:59.203 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:40:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:41:00 compute-0 ceph-mon[75140]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:02 compute-0 ceph-mon[75140]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:04 compute-0 podman[241780]: 2026-01-26 15:41:04.365634296 +0000 UTC m=+0.054896427 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:41:04 compute-0 podman[241781]: 2026-01-26 15:41:04.39724929 +0000 UTC m=+0.086194432 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:41:04 compute-0 ceph-mon[75140]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:06 compute-0 ceph-mon[75140]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:08 compute-0 ceph-mon[75140]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:09 compute-0 ceph-mon[75140]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:11 compute-0 sshd-session[241826]: Connection closed by authenticating user root 178.128.250.55 port 49514 [preauth]
Jan 26 15:41:12 compute-0 ceph-mon[75140]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:14 compute-0 ceph-mon[75140]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:16 compute-0 ceph-mon[75140]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.143 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.143 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.162 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.162 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.163 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.163 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.163 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.163 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.190 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.191 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.191 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.191 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.191 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:41:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:18 compute-0 ceph-mon[75140]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:41:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4241849737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.739 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.894 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.895 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.896 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.896 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.958 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.959 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:41:18 compute-0 nova_compute[239965]: 2026-01-26 15:41:18.976 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:41:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:41:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4161863051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:41:19 compute-0 nova_compute[239965]: 2026-01-26 15:41:19.570 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:41:19 compute-0 nova_compute[239965]: 2026-01-26 15:41:19.577 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:41:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4241849737' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:41:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4161863051' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:41:20 compute-0 nova_compute[239965]: 2026-01-26 15:41:20.028 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:41:20 compute-0 nova_compute[239965]: 2026-01-26 15:41:20.029 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:41:20 compute-0 nova_compute[239965]: 2026-01-26 15:41:20.030 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:41:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:41:20.169 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:41:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:41:20.171 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:41:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:41:20.172 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:41:20 compute-0 nova_compute[239965]: 2026-01-26 15:41:20.378 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:20 compute-0 nova_compute[239965]: 2026-01-26 15:41:20.378 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:41:20 compute-0 nova_compute[239965]: 2026-01-26 15:41:20.379 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:41:20 compute-0 nova_compute[239965]: 2026-01-26 15:41:20.403 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:41:20 compute-0 nova_compute[239965]: 2026-01-26 15:41:20.403 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:20 compute-0 nova_compute[239965]: 2026-01-26 15:41:20.403 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:41:20 compute-0 ceph-mon[75140]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:22 compute-0 ceph-mon[75140]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 15:41:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5832 writes, 24K keys, 5832 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5832 writes, 1000 syncs, 5.83 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 15:41:24 compute-0 ceph-mon[75140]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:25 compute-0 ceph-mon[75140]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:28 compute-0 ceph-mon[75140]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:41:28
Jan 26 15:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'backups']
Jan 26 15:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:41:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 15:41:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Cumulative writes: 7246 writes, 29K keys, 7246 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7246 writes, 1471 syncs, 4.93 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.253       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.253       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.253       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.4 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:41:30 compute-0 ceph-mon[75140]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:32 compute-0 ceph-mon[75140]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 15:41:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 5746 writes, 24K keys, 5746 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5746 writes, 926 syncs, 6.21 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562773363a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562773363a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562773363a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 15:41:34 compute-0 ceph-mon[75140]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:35 compute-0 podman[241872]: 2026-01-26 15:41:35.365729903 +0000 UTC m=+0.048096219 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 15:41:35 compute-0 podman[241873]: 2026-01-26 15:41:35.395452021 +0000 UTC m=+0.077846128 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:41:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:35 compute-0 ceph-mon[75140]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:38 compute-0 ceph-mon[75140]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 15:41:40 compute-0 ceph-mon[75140]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:41 compute-0 ceph-mon[75140]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:44 compute-0 ceph-mon[75140]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:46 compute-0 ceph-mon[75140]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:41:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:41:48 compute-0 ceph-mon[75140]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3413233864' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3413233864' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:41:49 compute-0 sudo[241917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:41:49 compute-0 sudo[241917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:41:49 compute-0 sudo[241917]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:49 compute-0 sudo[241942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:41:49 compute-0 sudo[241942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:41:49 compute-0 ceph-osd[87780]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000031s
Jan 26 15:41:49 compute-0 ceph-osd[85687]: bluestore.MempoolThread fragmentation_score=0.000116 took=0.000015s
Jan 26 15:41:49 compute-0 ceph-osd[86729]: bluestore.MempoolThread fragmentation_score=0.000120 took=0.000016s
Jan 26 15:41:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:49 compute-0 sudo[241942]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:41:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:41:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:41:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:41:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:41:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:41:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:41:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3413233864' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3413233864' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:41:49 compute-0 sudo[241999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:41:49 compute-0 sudo[241999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:41:49 compute-0 sudo[241999]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:49 compute-0 sudo[242024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:41:49 compute-0 sudo[242024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:41:50 compute-0 podman[242062]: 2026-01-26 15:41:50.013804319 +0000 UTC m=+0.039233712 container create abefe945bc2567e84fc808b62d794605e164c8788088bd3d79b90dde2b03c1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 15:41:50 compute-0 systemd[1]: Started libpod-conmon-abefe945bc2567e84fc808b62d794605e164c8788088bd3d79b90dde2b03c1f0.scope.
Jan 26 15:41:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:41:50 compute-0 podman[242062]: 2026-01-26 15:41:49.996881665 +0000 UTC m=+0.022311078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:41:50 compute-0 podman[242062]: 2026-01-26 15:41:50.103707591 +0000 UTC m=+0.129137004 container init abefe945bc2567e84fc808b62d794605e164c8788088bd3d79b90dde2b03c1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sammet, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:41:50 compute-0 podman[242062]: 2026-01-26 15:41:50.110895887 +0000 UTC m=+0.136325280 container start abefe945bc2567e84fc808b62d794605e164c8788088bd3d79b90dde2b03c1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sammet, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 15:41:50 compute-0 podman[242062]: 2026-01-26 15:41:50.114270221 +0000 UTC m=+0.139699644 container attach abefe945bc2567e84fc808b62d794605e164c8788088bd3d79b90dde2b03c1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:41:50 compute-0 modest_sammet[242078]: 167 167
Jan 26 15:41:50 compute-0 systemd[1]: libpod-abefe945bc2567e84fc808b62d794605e164c8788088bd3d79b90dde2b03c1f0.scope: Deactivated successfully.
Jan 26 15:41:50 compute-0 podman[242062]: 2026-01-26 15:41:50.119315524 +0000 UTC m=+0.144744917 container died abefe945bc2567e84fc808b62d794605e164c8788088bd3d79b90dde2b03c1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sammet, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-efaacafbb5e5884f3846525bc86caa7a0e250a648221ab1b507dfa0f73068686-merged.mount: Deactivated successfully.
Jan 26 15:41:50 compute-0 podman[242062]: 2026-01-26 15:41:50.240107214 +0000 UTC m=+0.265536647 container remove abefe945bc2567e84fc808b62d794605e164c8788088bd3d79b90dde2b03c1f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_sammet, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:41:50 compute-0 systemd[1]: libpod-conmon-abefe945bc2567e84fc808b62d794605e164c8788088bd3d79b90dde2b03c1f0.scope: Deactivated successfully.
Jan 26 15:41:50 compute-0 podman[242101]: 2026-01-26 15:41:50.439623861 +0000 UTC m=+0.052022165 container create 414158f2ebd936352fab03e891f72b5fb5e4d6a63e5d4edb19773a054d48988b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 15:41:50 compute-0 podman[242101]: 2026-01-26 15:41:50.415244804 +0000 UTC m=+0.027643118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:41:50 compute-0 systemd[1]: Started libpod-conmon-414158f2ebd936352fab03e891f72b5fb5e4d6a63e5d4edb19773a054d48988b.scope.
Jan 26 15:41:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b6e8473942718f9506e0ebf42319ea4f731440afb4cfe2490af695b520ad61/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b6e8473942718f9506e0ebf42319ea4f731440afb4cfe2490af695b520ad61/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b6e8473942718f9506e0ebf42319ea4f731440afb4cfe2490af695b520ad61/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b6e8473942718f9506e0ebf42319ea4f731440afb4cfe2490af695b520ad61/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9b6e8473942718f9506e0ebf42319ea4f731440afb4cfe2490af695b520ad61/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:50 compute-0 podman[242101]: 2026-01-26 15:41:50.59058954 +0000 UTC m=+0.202987864 container init 414158f2ebd936352fab03e891f72b5fb5e4d6a63e5d4edb19773a054d48988b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:41:50 compute-0 podman[242101]: 2026-01-26 15:41:50.605745801 +0000 UTC m=+0.218144105 container start 414158f2ebd936352fab03e891f72b5fb5e4d6a63e5d4edb19773a054d48988b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 15:41:50 compute-0 podman[242101]: 2026-01-26 15:41:50.609076903 +0000 UTC m=+0.221475317 container attach 414158f2ebd936352fab03e891f72b5fb5e4d6a63e5d4edb19773a054d48988b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:41:50 compute-0 ceph-mon[75140]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:51 compute-0 great_bardeen[242117]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:41:51 compute-0 great_bardeen[242117]: --> All data devices are unavailable
Jan 26 15:41:51 compute-0 systemd[1]: libpod-414158f2ebd936352fab03e891f72b5fb5e4d6a63e5d4edb19773a054d48988b.scope: Deactivated successfully.
Jan 26 15:41:51 compute-0 podman[242101]: 2026-01-26 15:41:51.075150351 +0000 UTC m=+0.687548665 container died 414158f2ebd936352fab03e891f72b5fb5e4d6a63e5d4edb19773a054d48988b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9b6e8473942718f9506e0ebf42319ea4f731440afb4cfe2490af695b520ad61-merged.mount: Deactivated successfully.
Jan 26 15:41:51 compute-0 podman[242101]: 2026-01-26 15:41:51.191779828 +0000 UTC m=+0.804178142 container remove 414158f2ebd936352fab03e891f72b5fb5e4d6a63e5d4edb19773a054d48988b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:41:51 compute-0 systemd[1]: libpod-conmon-414158f2ebd936352fab03e891f72b5fb5e4d6a63e5d4edb19773a054d48988b.scope: Deactivated successfully.
Jan 26 15:41:51 compute-0 sudo[242024]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:51 compute-0 sudo[242151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:41:51 compute-0 sudo[242151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:41:51 compute-0 sudo[242151]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:51 compute-0 sudo[242176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:41:51 compute-0 sudo[242176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:41:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:51 compute-0 podman[242213]: 2026-01-26 15:41:51.69022339 +0000 UTC m=+0.042454371 container create b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 15:41:51 compute-0 systemd[1]: Started libpod-conmon-b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6.scope.
Jan 26 15:41:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:41:51 compute-0 podman[242213]: 2026-01-26 15:41:51.759388275 +0000 UTC m=+0.111619336 container init b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 15:41:51 compute-0 podman[242213]: 2026-01-26 15:41:51.765203367 +0000 UTC m=+0.117434388 container start b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:41:51 compute-0 podman[242213]: 2026-01-26 15:41:51.674440273 +0000 UTC m=+0.026671284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:41:51 compute-0 podman[242213]: 2026-01-26 15:41:51.769213476 +0000 UTC m=+0.121444457 container attach b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_edison, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:41:51 compute-0 systemd[1]: libpod-b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6.scope: Deactivated successfully.
Jan 26 15:41:51 compute-0 recursing_edison[242229]: 167 167
Jan 26 15:41:51 compute-0 conmon[242229]: conmon b633842e7253199462c7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6.scope/container/memory.events
Jan 26 15:41:51 compute-0 podman[242213]: 2026-01-26 15:41:51.771628045 +0000 UTC m=+0.123859066 container died b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_edison, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e77bba8909f8db509a5bd3ccdc26d01ddc7de5a32e9063b2276b0cc9d9bdbb7d-merged.mount: Deactivated successfully.
Jan 26 15:41:51 compute-0 podman[242213]: 2026-01-26 15:41:51.81310124 +0000 UTC m=+0.165332221 container remove b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 15:41:51 compute-0 systemd[1]: libpod-conmon-b633842e7253199462c7c93d79cba1d214b26bbee3e404754bb09c0adffbf4a6.scope: Deactivated successfully.
Jan 26 15:41:51 compute-0 podman[242252]: 2026-01-26 15:41:51.98327751 +0000 UTC m=+0.042746639 container create 5956748de29387c944f4de264f6abea8736da71a303fc40a53da118b0e325ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_payne, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:41:52 compute-0 systemd[1]: Started libpod-conmon-5956748de29387c944f4de264f6abea8736da71a303fc40a53da118b0e325ffe.scope.
Jan 26 15:41:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7facc7a3e7d4d5a485c8c0fd71f9745329d9b2acfa5edb86f909ea9dd1302a39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7facc7a3e7d4d5a485c8c0fd71f9745329d9b2acfa5edb86f909ea9dd1302a39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7facc7a3e7d4d5a485c8c0fd71f9745329d9b2acfa5edb86f909ea9dd1302a39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7facc7a3e7d4d5a485c8c0fd71f9745329d9b2acfa5edb86f909ea9dd1302a39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:52 compute-0 podman[242252]: 2026-01-26 15:41:52.054572847 +0000 UTC m=+0.114042006 container init 5956748de29387c944f4de264f6abea8736da71a303fc40a53da118b0e325ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_payne, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 15:41:52 compute-0 podman[242252]: 2026-01-26 15:41:51.962727936 +0000 UTC m=+0.022197145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:41:52 compute-0 podman[242252]: 2026-01-26 15:41:52.061114026 +0000 UTC m=+0.120583165 container start 5956748de29387c944f4de264f6abea8736da71a303fc40a53da118b0e325ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:41:52 compute-0 podman[242252]: 2026-01-26 15:41:52.064633433 +0000 UTC m=+0.124102612 container attach 5956748de29387c944f4de264f6abea8736da71a303fc40a53da118b0e325ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_payne, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 15:41:52 compute-0 recursing_payne[242269]: {
Jan 26 15:41:52 compute-0 recursing_payne[242269]:     "0": [
Jan 26 15:41:52 compute-0 recursing_payne[242269]:         {
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "devices": [
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "/dev/loop3"
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             ],
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_name": "ceph_lv0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_size": "21470642176",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "name": "ceph_lv0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "tags": {
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.cluster_name": "ceph",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.crush_device_class": "",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.encrypted": "0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.objectstore": "bluestore",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.osd_id": "0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.type": "block",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.vdo": "0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.with_tpm": "0"
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             },
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "type": "block",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "vg_name": "ceph_vg0"
Jan 26 15:41:52 compute-0 recursing_payne[242269]:         }
Jan 26 15:41:52 compute-0 recursing_payne[242269]:     ],
Jan 26 15:41:52 compute-0 recursing_payne[242269]:     "1": [
Jan 26 15:41:52 compute-0 recursing_payne[242269]:         {
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "devices": [
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "/dev/loop4"
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             ],
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_name": "ceph_lv1",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_size": "21470642176",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "name": "ceph_lv1",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "tags": {
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.cluster_name": "ceph",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.crush_device_class": "",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.encrypted": "0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.objectstore": "bluestore",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.osd_id": "1",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.type": "block",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.vdo": "0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.with_tpm": "0"
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             },
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "type": "block",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "vg_name": "ceph_vg1"
Jan 26 15:41:52 compute-0 recursing_payne[242269]:         }
Jan 26 15:41:52 compute-0 recursing_payne[242269]:     ],
Jan 26 15:41:52 compute-0 recursing_payne[242269]:     "2": [
Jan 26 15:41:52 compute-0 recursing_payne[242269]:         {
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "devices": [
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "/dev/loop5"
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             ],
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_name": "ceph_lv2",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_size": "21470642176",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "name": "ceph_lv2",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "tags": {
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.cluster_name": "ceph",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.crush_device_class": "",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.encrypted": "0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.objectstore": "bluestore",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.osd_id": "2",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.type": "block",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.vdo": "0",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:                 "ceph.with_tpm": "0"
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             },
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "type": "block",
Jan 26 15:41:52 compute-0 recursing_payne[242269]:             "vg_name": "ceph_vg2"
Jan 26 15:41:52 compute-0 recursing_payne[242269]:         }
Jan 26 15:41:52 compute-0 recursing_payne[242269]:     ]
Jan 26 15:41:52 compute-0 recursing_payne[242269]: }
Jan 26 15:41:52 compute-0 systemd[1]: libpod-5956748de29387c944f4de264f6abea8736da71a303fc40a53da118b0e325ffe.scope: Deactivated successfully.
Jan 26 15:41:52 compute-0 podman[242252]: 2026-01-26 15:41:52.380198794 +0000 UTC m=+0.439667933 container died 5956748de29387c944f4de264f6abea8736da71a303fc40a53da118b0e325ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_payne, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-7facc7a3e7d4d5a485c8c0fd71f9745329d9b2acfa5edb86f909ea9dd1302a39-merged.mount: Deactivated successfully.
Jan 26 15:41:52 compute-0 podman[242252]: 2026-01-26 15:41:52.420503951 +0000 UTC m=+0.479973080 container remove 5956748de29387c944f4de264f6abea8736da71a303fc40a53da118b0e325ffe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_payne, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 15:41:52 compute-0 systemd[1]: libpod-conmon-5956748de29387c944f4de264f6abea8736da71a303fc40a53da118b0e325ffe.scope: Deactivated successfully.
Jan 26 15:41:52 compute-0 sudo[242176]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:52 compute-0 sudo[242290]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:41:52 compute-0 sudo[242290]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:41:52 compute-0 sudo[242290]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:52 compute-0 sudo[242315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:41:52 compute-0 sudo[242315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:41:52 compute-0 ceph-mon[75140]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:52 compute-0 podman[242353]: 2026-01-26 15:41:52.853595642 +0000 UTC m=+0.033545453 container create 28b466c24b474b0e2ab3151b4e9a18c912ee7c71c36ff9d27304cd52d32955b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_shockley, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 15:41:52 compute-0 systemd[1]: Started libpod-conmon-28b466c24b474b0e2ab3151b4e9a18c912ee7c71c36ff9d27304cd52d32955b3.scope.
Jan 26 15:41:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:41:52 compute-0 podman[242353]: 2026-01-26 15:41:52.922504481 +0000 UTC m=+0.102454312 container init 28b466c24b474b0e2ab3151b4e9a18c912ee7c71c36ff9d27304cd52d32955b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_shockley, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 15:41:52 compute-0 podman[242353]: 2026-01-26 15:41:52.927647836 +0000 UTC m=+0.107597647 container start 28b466c24b474b0e2ab3151b4e9a18c912ee7c71c36ff9d27304cd52d32955b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 15:41:52 compute-0 podman[242353]: 2026-01-26 15:41:52.930627809 +0000 UTC m=+0.110577650 container attach 28b466c24b474b0e2ab3151b4e9a18c912ee7c71c36ff9d27304cd52d32955b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 26 15:41:52 compute-0 hardcore_shockley[242369]: 167 167
Jan 26 15:41:52 compute-0 systemd[1]: libpod-28b466c24b474b0e2ab3151b4e9a18c912ee7c71c36ff9d27304cd52d32955b3.scope: Deactivated successfully.
Jan 26 15:41:52 compute-0 podman[242353]: 2026-01-26 15:41:52.933354076 +0000 UTC m=+0.113303917 container died 28b466c24b474b0e2ab3151b4e9a18c912ee7c71c36ff9d27304cd52d32955b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_shockley, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 26 15:41:52 compute-0 podman[242353]: 2026-01-26 15:41:52.838483792 +0000 UTC m=+0.018433603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6642ae6d20c9fe2d8b5689f67d192031f54af4feac70c5d885d434ad611d9d29-merged.mount: Deactivated successfully.
Jan 26 15:41:52 compute-0 podman[242353]: 2026-01-26 15:41:52.967618376 +0000 UTC m=+0.147568187 container remove 28b466c24b474b0e2ab3151b4e9a18c912ee7c71c36ff9d27304cd52d32955b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_shockley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:41:52 compute-0 systemd[1]: libpod-conmon-28b466c24b474b0e2ab3151b4e9a18c912ee7c71c36ff9d27304cd52d32955b3.scope: Deactivated successfully.
Jan 26 15:41:53 compute-0 podman[242391]: 2026-01-26 15:41:53.127149244 +0000 UTC m=+0.040726829 container create d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_clarke, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:41:53 compute-0 systemd[1]: Started libpod-conmon-d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807.scope.
Jan 26 15:41:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b86fd06b29bd0137280a25a1e71280c6d964b267cd66c1e1f41b9b0e72d292/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b86fd06b29bd0137280a25a1e71280c6d964b267cd66c1e1f41b9b0e72d292/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b86fd06b29bd0137280a25a1e71280c6d964b267cd66c1e1f41b9b0e72d292/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92b86fd06b29bd0137280a25a1e71280c6d964b267cd66c1e1f41b9b0e72d292/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:41:53 compute-0 podman[242391]: 2026-01-26 15:41:53.107770669 +0000 UTC m=+0.021348284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:41:53 compute-0 podman[242391]: 2026-01-26 15:41:53.212849733 +0000 UTC m=+0.126427328 container init d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:41:53 compute-0 podman[242391]: 2026-01-26 15:41:53.218387629 +0000 UTC m=+0.131965214 container start d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:41:53 compute-0 podman[242391]: 2026-01-26 15:41:53.221494265 +0000 UTC m=+0.135071880 container attach d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_clarke, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:41:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:53 compute-0 lvm[242486]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:41:53 compute-0 lvm[242487]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:41:53 compute-0 lvm[242486]: VG ceph_vg0 finished
Jan 26 15:41:53 compute-0 lvm[242487]: VG ceph_vg1 finished
Jan 26 15:41:53 compute-0 lvm[242489]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:41:53 compute-0 lvm[242489]: VG ceph_vg2 finished
Jan 26 15:41:54 compute-0 suspicious_clarke[242408]: {}
Jan 26 15:41:54 compute-0 systemd[1]: libpod-d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807.scope: Deactivated successfully.
Jan 26 15:41:54 compute-0 podman[242391]: 2026-01-26 15:41:54.124378515 +0000 UTC m=+1.037956090 container died d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_clarke, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:41:54 compute-0 systemd[1]: libpod-d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807.scope: Consumed 1.339s CPU time.
Jan 26 15:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-92b86fd06b29bd0137280a25a1e71280c6d964b267cd66c1e1f41b9b0e72d292-merged.mount: Deactivated successfully.
Jan 26 15:41:54 compute-0 podman[242391]: 2026-01-26 15:41:54.51927638 +0000 UTC m=+1.432853965 container remove d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:41:54 compute-0 sudo[242315]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:41:54 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:41:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:41:54 compute-0 systemd[1]: libpod-conmon-d96ac163e11c9a0e140d6f648e23dd4c56155ad1752d96127c90c245ec98e807.scope: Deactivated successfully.
Jan 26 15:41:54 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:41:54 compute-0 ceph-mon[75140]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:41:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:41:54 compute-0 sudo[242503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:41:54 compute-0 sudo[242503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:41:54 compute-0 sudo[242503]: pam_unix(sudo:session): session closed for user root
Jan 26 15:41:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:56 compute-0 ceph-mon[75140]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:58 compute-0 sshd-session[242528]: Connection closed by authenticating user root 178.128.250.55 port 35740 [preauth]
Jan 26 15:41:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:41:58 compute-0 ceph-mon[75140]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:41:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:41:59.202 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:41:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:41:59.203 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:41:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:41:59.203 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:41:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:42:00 compute-0 ceph-mon[75140]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:01 compute-0 ceph-mon[75140]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:04 compute-0 ceph-mon[75140]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:06 compute-0 podman[242530]: 2026-01-26 15:42:06.376320326 +0000 UTC m=+0.061569020 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 15:42:06 compute-0 podman[242531]: 2026-01-26 15:42:06.413075687 +0000 UTC m=+0.098293430 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:42:06 compute-0 ceph-mon[75140]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:08 compute-0 ceph-mon[75140]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:10 compute-0 ceph-mon[75140]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:12 compute-0 ceph-mon[75140]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:13 compute-0 ceph-mon[75140]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:16 compute-0 nova_compute[239965]: 2026-01-26 15:42:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:42:16 compute-0 nova_compute[239965]: 2026-01-26 15:42:16.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:42:16 compute-0 ceph-mon[75140]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:17 compute-0 nova_compute[239965]: 2026-01-26 15:42:17.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:42:17 compute-0 nova_compute[239965]: 2026-01-26 15:42:17.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:42:17 compute-0 ceph-mon[75140]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:18 compute-0 nova_compute[239965]: 2026-01-26 15:42:18.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:42:18 compute-0 nova_compute[239965]: 2026-01-26 15:42:18.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:42:18 compute-0 nova_compute[239965]: 2026-01-26 15:42:18.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:42:18 compute-0 nova_compute[239965]: 2026-01-26 15:42:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.022 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.023 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.023 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.023 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.024 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:42:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:42:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478657532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.568 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.727 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.728 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.728 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:42:19 compute-0 nova_compute[239965]: 2026-01-26 15:42:19.728 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:42:20 compute-0 ceph-mon[75140]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1478657532' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:42:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:22 compute-0 nova_compute[239965]: 2026-01-26 15:42:22.188 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:42:22 compute-0 nova_compute[239965]: 2026-01-26 15:42:22.188 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:42:22 compute-0 nova_compute[239965]: 2026-01-26 15:42:22.214 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:42:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:42:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663683201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:42:22 compute-0 nova_compute[239965]: 2026-01-26 15:42:22.749 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:42:22 compute-0 nova_compute[239965]: 2026-01-26 15:42:22.756 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:42:23 compute-0 ceph-mon[75140]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/663683201' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:42:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:23 compute-0 nova_compute[239965]: 2026-01-26 15:42:23.834 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:42:23 compute-0 nova_compute[239965]: 2026-01-26 15:42:23.837 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:42:23 compute-0 nova_compute[239965]: 2026-01-26 15:42:23.837 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:42:24 compute-0 ceph-mon[75140]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:25 compute-0 nova_compute[239965]: 2026-01-26 15:42:25.839 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:42:25 compute-0 nova_compute[239965]: 2026-01-26 15:42:25.840 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:42:25 compute-0 nova_compute[239965]: 2026-01-26 15:42:25.840 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:42:26 compute-0 nova_compute[239965]: 2026-01-26 15:42:26.645 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:42:26 compute-0 nova_compute[239965]: 2026-01-26 15:42:26.645 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:42:26 compute-0 ceph-mon[75140]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:42:28
Jan 26 15:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'vms', '.rgw.root']
Jan 26 15:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:42:28 compute-0 ceph-mon[75140]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:30 compute-0 ceph-mon[75140]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:42:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:32 compute-0 ceph-mon[75140]: pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:34 compute-0 ceph-mon[75140]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:36 compute-0 ceph-mon[75140]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:37 compute-0 podman[242616]: 2026-01-26 15:42:37.359037097 +0000 UTC m=+0.049986706 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 15:42:37 compute-0 podman[242617]: 2026-01-26 15:42:37.395677604 +0000 UTC m=+0.086650064 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 15:42:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:38 compute-0 ceph-mon[75140]: pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:40 compute-0 ceph-mon[75140]: pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:42 compute-0 sshd-session[242663]: Connection closed by authenticating user root 178.128.250.55 port 40108 [preauth]
Jan 26 15:42:42 compute-0 ceph-mon[75140]: pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.642005) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442163642032, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1333, "num_deletes": 251, "total_data_size": 2126518, "memory_usage": 2171856, "flush_reason": "Manual Compaction"}
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442163656659, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2095902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15197, "largest_seqno": 16529, "table_properties": {"data_size": 2089609, "index_size": 3560, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13002, "raw_average_key_size": 19, "raw_value_size": 2076982, "raw_average_value_size": 3137, "num_data_blocks": 163, "num_entries": 662, "num_filter_entries": 662, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769442024, "oldest_key_time": 1769442024, "file_creation_time": 1769442163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 14692 microseconds, and 4232 cpu microseconds.
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.656696) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2095902 bytes OK
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.656713) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.658369) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.658382) EVENT_LOG_v1 {"time_micros": 1769442163658379, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.658396) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2120560, prev total WAL file size 2120560, number of live WAL files 2.
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.658954) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2046KB)], [35(7802KB)]
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442163659003, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 10085972, "oldest_snapshot_seqno": -1}
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4111 keys, 8276395 bytes, temperature: kUnknown
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442163717707, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8276395, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8246080, "index_size": 18941, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10309, "raw_key_size": 100455, "raw_average_key_size": 24, "raw_value_size": 8168884, "raw_average_value_size": 1987, "num_data_blocks": 800, "num_entries": 4111, "num_filter_entries": 4111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769442163, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.718010) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8276395 bytes
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.719540) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.5 rd, 140.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 7.6 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(8.8) write-amplify(3.9) OK, records in: 4625, records dropped: 514 output_compression: NoCompression
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.719555) EVENT_LOG_v1 {"time_micros": 1769442163719548, "job": 16, "event": "compaction_finished", "compaction_time_micros": 58799, "compaction_time_cpu_micros": 15946, "output_level": 6, "num_output_files": 1, "total_output_size": 8276395, "num_input_records": 4625, "num_output_records": 4111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442163720005, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442163721328, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.658861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.721429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.721437) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.721439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.721442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:42:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:42:43.721445) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:42:44 compute-0 ceph-mon[75140]: pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:46 compute-0 ceph-mon[75140]: pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:42:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:42:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:42:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4189789182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:42:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:42:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4189789182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:42:48 compute-0 ceph-mon[75140]: pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4189789182' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:42:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4189789182' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:42:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:50 compute-0 ceph-mon[75140]: pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:42:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 26 15:42:52 compute-0 ceph-mon[75140]: pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 26 15:42:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:42:53 compute-0 ceph-mon[75140]: pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:42:54 compute-0 sudo[242667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:42:54 compute-0 sudo[242667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:54 compute-0 sudo[242667]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:54 compute-0 sudo[242692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 15:42:54 compute-0 sudo[242692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:55 compute-0 sudo[242692]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:42:55 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:42:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:42:55 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:42:55 compute-0 sudo[242738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:42:55 compute-0 sudo[242738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:55 compute-0 sudo[242738]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:55 compute-0 sudo[242763]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:42:55 compute-0 sudo[242763]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:42:55 compute-0 sudo[242763]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 15:42:55 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 15:42:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:42:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:42:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:42:55 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:42:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:42:55 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:42:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:42:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:42:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:42:55 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:42:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:42:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:42:55 compute-0 sudo[242818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:42:55 compute-0 sudo[242818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:55 compute-0 sudo[242818]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:55 compute-0 sshd-session[242665]: Invalid user admin from 103.236.95.173 port 34954
Jan 26 15:42:55 compute-0 sudo[242843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:42:55 compute-0 sudo[242843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:56 compute-0 sshd-session[242665]: Connection closed by invalid user admin 103.236.95.173 port 34954 [preauth]
Jan 26 15:42:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:42:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:42:56 compute-0 ceph-mon[75140]: pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:42:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 15:42:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:42:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:42:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:42:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:42:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:42:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:42:56 compute-0 podman[242879]: 2026-01-26 15:42:56.282716287 +0000 UTC m=+0.037192342 container create 30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wing, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 15:42:56 compute-0 systemd[1]: Started libpod-conmon-30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238.scope.
Jan 26 15:42:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:42:56 compute-0 podman[242879]: 2026-01-26 15:42:56.354478066 +0000 UTC m=+0.108954141 container init 30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 15:42:56 compute-0 podman[242879]: 2026-01-26 15:42:56.360497723 +0000 UTC m=+0.114973778 container start 30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 15:42:56 compute-0 podman[242879]: 2026-01-26 15:42:56.266785347 +0000 UTC m=+0.021261432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:42:56 compute-0 podman[242879]: 2026-01-26 15:42:56.364157322 +0000 UTC m=+0.118633367 container attach 30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wing, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:42:56 compute-0 silly_wing[242895]: 167 167
Jan 26 15:42:56 compute-0 systemd[1]: libpod-30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238.scope: Deactivated successfully.
Jan 26 15:42:56 compute-0 conmon[242895]: conmon 30d9d6273e9db74e816f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238.scope/container/memory.events
Jan 26 15:42:56 compute-0 podman[242879]: 2026-01-26 15:42:56.36691958 +0000 UTC m=+0.121395665 container died 30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 15:42:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2a4e61ebb7025b93baa20b1dbc0d1932330c3a6038732eab0c2e93b3531466b-merged.mount: Deactivated successfully.
Jan 26 15:42:56 compute-0 podman[242879]: 2026-01-26 15:42:56.407483273 +0000 UTC m=+0.161959328 container remove 30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 15:42:56 compute-0 systemd[1]: libpod-conmon-30d9d6273e9db74e816f6f54989cdfde9998645aebd32721f3ef4f62b70b3238.scope: Deactivated successfully.
Jan 26 15:42:56 compute-0 podman[242919]: 2026-01-26 15:42:56.584329617 +0000 UTC m=+0.056428884 container create 88d70d3131c9cb4ac834241276d4f8fa2f66d7cc390f72950405ea7cdde75e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 15:42:56 compute-0 systemd[1]: Started libpod-conmon-88d70d3131c9cb4ac834241276d4f8fa2f66d7cc390f72950405ea7cdde75e5f.scope.
Jan 26 15:42:56 compute-0 podman[242919]: 2026-01-26 15:42:56.553415509 +0000 UTC m=+0.025514816 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:42:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebd680269f7973a5f9d3e8c68f06740fa308d97e995d3deb341de5ed84fadeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebd680269f7973a5f9d3e8c68f06740fa308d97e995d3deb341de5ed84fadeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebd680269f7973a5f9d3e8c68f06740fa308d97e995d3deb341de5ed84fadeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebd680269f7973a5f9d3e8c68f06740fa308d97e995d3deb341de5ed84fadeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ebd680269f7973a5f9d3e8c68f06740fa308d97e995d3deb341de5ed84fadeb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:56 compute-0 podman[242919]: 2026-01-26 15:42:56.674182148 +0000 UTC m=+0.146281395 container init 88d70d3131c9cb4ac834241276d4f8fa2f66d7cc390f72950405ea7cdde75e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 15:42:56 compute-0 podman[242919]: 2026-01-26 15:42:56.68650668 +0000 UTC m=+0.158605947 container start 88d70d3131c9cb4ac834241276d4f8fa2f66d7cc390f72950405ea7cdde75e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:42:56 compute-0 podman[242919]: 2026-01-26 15:42:56.6909976 +0000 UTC m=+0.163096847 container attach 88d70d3131c9cb4ac834241276d4f8fa2f66d7cc390f72950405ea7cdde75e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:42:57 compute-0 festive_fermi[242935]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:42:57 compute-0 festive_fermi[242935]: --> All data devices are unavailable
Jan 26 15:42:57 compute-0 systemd[1]: libpod-88d70d3131c9cb4ac834241276d4f8fa2f66d7cc390f72950405ea7cdde75e5f.scope: Deactivated successfully.
Jan 26 15:42:57 compute-0 podman[242919]: 2026-01-26 15:42:57.197424885 +0000 UTC m=+0.669524202 container died 88d70d3131c9cb4ac834241276d4f8fa2f66d7cc390f72950405ea7cdde75e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 15:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ebd680269f7973a5f9d3e8c68f06740fa308d97e995d3deb341de5ed84fadeb-merged.mount: Deactivated successfully.
Jan 26 15:42:57 compute-0 podman[242919]: 2026-01-26 15:42:57.243840096 +0000 UTC m=+0.715939323 container remove 88d70d3131c9cb4ac834241276d4f8fa2f66d7cc390f72950405ea7cdde75e5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_fermi, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:42:57 compute-0 systemd[1]: libpod-conmon-88d70d3131c9cb4ac834241276d4f8fa2f66d7cc390f72950405ea7cdde75e5f.scope: Deactivated successfully.
Jan 26 15:42:57 compute-0 sudo[242843]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:57 compute-0 sudo[242966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:42:57 compute-0 sudo[242966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:57 compute-0 sudo[242966]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:57 compute-0 sudo[242991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:42:57 compute-0 sudo[242991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:42:57 compute-0 podman[243029]: 2026-01-26 15:42:57.665166982 +0000 UTC m=+0.041760559 container create 9df357b257d74ff57837202b78e6413def773cd77c246bcebb48385a5fc683e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_swartz, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:42:57 compute-0 systemd[1]: Started libpod-conmon-9df357b257d74ff57837202b78e6413def773cd77c246bcebb48385a5fc683e8.scope.
Jan 26 15:42:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:42:57 compute-0 podman[243029]: 2026-01-26 15:42:57.648521892 +0000 UTC m=+0.025115479 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:42:57 compute-0 podman[243029]: 2026-01-26 15:42:57.743633482 +0000 UTC m=+0.120227059 container init 9df357b257d74ff57837202b78e6413def773cd77c246bcebb48385a5fc683e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 15:42:57 compute-0 podman[243029]: 2026-01-26 15:42:57.751427423 +0000 UTC m=+0.128021000 container start 9df357b257d74ff57837202b78e6413def773cd77c246bcebb48385a5fc683e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_swartz, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 15:42:57 compute-0 podman[243029]: 2026-01-26 15:42:57.754859998 +0000 UTC m=+0.131453565 container attach 9df357b257d74ff57837202b78e6413def773cd77c246bcebb48385a5fc683e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 15:42:57 compute-0 beautiful_swartz[243045]: 167 167
Jan 26 15:42:57 compute-0 systemd[1]: libpod-9df357b257d74ff57837202b78e6413def773cd77c246bcebb48385a5fc683e8.scope: Deactivated successfully.
Jan 26 15:42:57 compute-0 podman[243029]: 2026-01-26 15:42:57.757186455 +0000 UTC m=+0.133780032 container died 9df357b257d74ff57837202b78e6413def773cd77c246bcebb48385a5fc683e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_swartz, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 15:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-efe65dd30d36c920343e24af5d1a418dbee8fbc6834ca5759af70523895adb3d-merged.mount: Deactivated successfully.
Jan 26 15:42:57 compute-0 podman[243029]: 2026-01-26 15:42:57.79599305 +0000 UTC m=+0.172586647 container remove 9df357b257d74ff57837202b78e6413def773cd77c246bcebb48385a5fc683e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_swartz, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 15:42:57 compute-0 systemd[1]: libpod-conmon-9df357b257d74ff57837202b78e6413def773cd77c246bcebb48385a5fc683e8.scope: Deactivated successfully.
Jan 26 15:42:57 compute-0 podman[243070]: 2026-01-26 15:42:57.97080141 +0000 UTC m=+0.042924527 container create d7ba0e5aaca8be9762e8de4eac7ce0fed96fb3db406ba79bfd9598e7d35f1dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brattain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 15:42:58 compute-0 systemd[1]: Started libpod-conmon-d7ba0e5aaca8be9762e8de4eac7ce0fed96fb3db406ba79bfd9598e7d35f1dd3.scope.
Jan 26 15:42:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ccc45b59cd85fec33f67457b6317885d95f15be3ba45ae15e6257a9915af121/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ccc45b59cd85fec33f67457b6317885d95f15be3ba45ae15e6257a9915af121/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ccc45b59cd85fec33f67457b6317885d95f15be3ba45ae15e6257a9915af121/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ccc45b59cd85fec33f67457b6317885d95f15be3ba45ae15e6257a9915af121/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:58 compute-0 podman[243070]: 2026-01-26 15:42:57.954237523 +0000 UTC m=+0.026360680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:42:58 compute-0 podman[243070]: 2026-01-26 15:42:58.061370989 +0000 UTC m=+0.133494116 container init d7ba0e5aaca8be9762e8de4eac7ce0fed96fb3db406ba79bfd9598e7d35f1dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:42:58 compute-0 podman[243070]: 2026-01-26 15:42:58.068046272 +0000 UTC m=+0.140169409 container start d7ba0e5aaca8be9762e8de4eac7ce0fed96fb3db406ba79bfd9598e7d35f1dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brattain, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:42:58 compute-0 podman[243070]: 2026-01-26 15:42:58.071625661 +0000 UTC m=+0.143748808 container attach d7ba0e5aaca8be9762e8de4eac7ce0fed96fb3db406ba79bfd9598e7d35f1dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 15:42:58 compute-0 silly_brattain[243086]: {
Jan 26 15:42:58 compute-0 silly_brattain[243086]:     "0": [
Jan 26 15:42:58 compute-0 silly_brattain[243086]:         {
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "devices": [
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "/dev/loop3"
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             ],
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_name": "ceph_lv0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_size": "21470642176",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "name": "ceph_lv0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "tags": {
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.cluster_name": "ceph",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.crush_device_class": "",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.encrypted": "0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.objectstore": "bluestore",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.osd_id": "0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.type": "block",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.vdo": "0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.with_tpm": "0"
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             },
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "type": "block",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "vg_name": "ceph_vg0"
Jan 26 15:42:58 compute-0 silly_brattain[243086]:         }
Jan 26 15:42:58 compute-0 silly_brattain[243086]:     ],
Jan 26 15:42:58 compute-0 silly_brattain[243086]:     "1": [
Jan 26 15:42:58 compute-0 silly_brattain[243086]:         {
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "devices": [
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "/dev/loop4"
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             ],
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_name": "ceph_lv1",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_size": "21470642176",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "name": "ceph_lv1",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "tags": {
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.cluster_name": "ceph",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.crush_device_class": "",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.encrypted": "0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.objectstore": "bluestore",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.osd_id": "1",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.type": "block",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.vdo": "0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.with_tpm": "0"
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             },
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "type": "block",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "vg_name": "ceph_vg1"
Jan 26 15:42:58 compute-0 silly_brattain[243086]:         }
Jan 26 15:42:58 compute-0 silly_brattain[243086]:     ],
Jan 26 15:42:58 compute-0 silly_brattain[243086]:     "2": [
Jan 26 15:42:58 compute-0 silly_brattain[243086]:         {
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "devices": [
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "/dev/loop5"
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             ],
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_name": "ceph_lv2",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_size": "21470642176",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "name": "ceph_lv2",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "tags": {
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.cluster_name": "ceph",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.crush_device_class": "",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.encrypted": "0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.objectstore": "bluestore",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.osd_id": "2",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.type": "block",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.vdo": "0",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:                 "ceph.with_tpm": "0"
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             },
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "type": "block",
Jan 26 15:42:58 compute-0 silly_brattain[243086]:             "vg_name": "ceph_vg2"
Jan 26 15:42:58 compute-0 silly_brattain[243086]:         }
Jan 26 15:42:58 compute-0 silly_brattain[243086]:     ]
Jan 26 15:42:58 compute-0 silly_brattain[243086]: }
Jan 26 15:42:58 compute-0 systemd[1]: libpod-d7ba0e5aaca8be9762e8de4eac7ce0fed96fb3db406ba79bfd9598e7d35f1dd3.scope: Deactivated successfully.
Jan 26 15:42:58 compute-0 podman[243070]: 2026-01-26 15:42:58.348177064 +0000 UTC m=+0.420300191 container died d7ba0e5aaca8be9762e8de4eac7ce0fed96fb3db406ba79bfd9598e7d35f1dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:42:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ccc45b59cd85fec33f67457b6317885d95f15be3ba45ae15e6257a9915af121-merged.mount: Deactivated successfully.
Jan 26 15:42:58 compute-0 podman[243070]: 2026-01-26 15:42:58.39436606 +0000 UTC m=+0.466489217 container remove d7ba0e5aaca8be9762e8de4eac7ce0fed96fb3db406ba79bfd9598e7d35f1dd3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 15:42:58 compute-0 systemd[1]: libpod-conmon-d7ba0e5aaca8be9762e8de4eac7ce0fed96fb3db406ba79bfd9598e7d35f1dd3.scope: Deactivated successfully.
Jan 26 15:42:58 compute-0 sudo[242991]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:58 compute-0 ceph-mon[75140]: pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:42:58 compute-0 sudo[243105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:42:58 compute-0 sudo[243105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:58 compute-0 sudo[243105]: pam_unix(sudo:session): session closed for user root
Jan 26 15:42:58 compute-0 sudo[243130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:42:58 compute-0 sudo[243130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:42:58 compute-0 podman[243167]: 2026-01-26 15:42:58.873146089 +0000 UTC m=+0.038719743 container create 051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_matsumoto, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:42:58 compute-0 systemd[1]: Started libpod-conmon-051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c.scope.
Jan 26 15:42:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:42:58 compute-0 podman[243167]: 2026-01-26 15:42:58.949680792 +0000 UTC m=+0.115254436 container init 051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_matsumoto, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 15:42:58 compute-0 podman[243167]: 2026-01-26 15:42:58.855555097 +0000 UTC m=+0.021128741 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:42:58 compute-0 podman[243167]: 2026-01-26 15:42:58.958833398 +0000 UTC m=+0.124407032 container start 051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 15:42:58 compute-0 podman[243167]: 2026-01-26 15:42:58.963225636 +0000 UTC m=+0.128799260 container attach 051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_matsumoto, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:42:58 compute-0 compassionate_matsumoto[243183]: 167 167
Jan 26 15:42:58 compute-0 systemd[1]: libpod-051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c.scope: Deactivated successfully.
Jan 26 15:42:58 compute-0 conmon[243183]: conmon 051fb24e91ae27ab6e28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c.scope/container/memory.events
Jan 26 15:42:58 compute-0 podman[243167]: 2026-01-26 15:42:58.966342732 +0000 UTC m=+0.131916356 container died 051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 26 15:42:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-44af413b25dda54384c74229694d89006b78899bb0e15a310a06138f026a0a44-merged.mount: Deactivated successfully.
Jan 26 15:42:59 compute-0 podman[243167]: 2026-01-26 15:42:59.006617353 +0000 UTC m=+0.172190977 container remove 051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:42:59 compute-0 systemd[1]: libpod-conmon-051fb24e91ae27ab6e28f449041300273334b68a7a3e6aa77ec239cbbb4e3d3c.scope: Deactivated successfully.
Jan 26 15:42:59 compute-0 podman[243207]: 2026-01-26 15:42:59.172470443 +0000 UTC m=+0.054300107 container create bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_galileo, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:42:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:42:59.203 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:42:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:42:59.204 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:42:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:42:59.204 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:42:59 compute-0 systemd[1]: Started libpod-conmon-bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d.scope.
Jan 26 15:42:59 compute-0 podman[243207]: 2026-01-26 15:42:59.136510078 +0000 UTC m=+0.018339762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:42:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aaf3dbed0ee1ef68b89c812754291e0fb891295f002af1644d3b2fc5e508c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aaf3dbed0ee1ef68b89c812754291e0fb891295f002af1644d3b2fc5e508c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aaf3dbed0ee1ef68b89c812754291e0fb891295f002af1644d3b2fc5e508c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3aaf3dbed0ee1ef68b89c812754291e0fb891295f002af1644d3b2fc5e508c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:42:59 compute-0 podman[243207]: 2026-01-26 15:42:59.312243312 +0000 UTC m=+0.194073006 container init bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:42:59 compute-0 podman[243207]: 2026-01-26 15:42:59.321302504 +0000 UTC m=+0.203132168 container start bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_galileo, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:42:59 compute-0 podman[243207]: 2026-01-26 15:42:59.32474446 +0000 UTC m=+0.206574154 container attach bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_galileo, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:42:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:42:59 compute-0 lvm[243302]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:42:59 compute-0 lvm[243303]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:42:59 compute-0 lvm[243302]: VG ceph_vg0 finished
Jan 26 15:42:59 compute-0 lvm[243303]: VG ceph_vg1 finished
Jan 26 15:42:59 compute-0 lvm[243305]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:42:59 compute-0 lvm[243305]: VG ceph_vg2 finished
Jan 26 15:43:00 compute-0 jovial_galileo[243224]: {}
Jan 26 15:43:00 compute-0 systemd[1]: libpod-bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d.scope: Deactivated successfully.
Jan 26 15:43:00 compute-0 podman[243207]: 2026-01-26 15:43:00.091974535 +0000 UTC m=+0.973804219 container died bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_galileo, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 26 15:43:00 compute-0 systemd[1]: libpod-bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d.scope: Consumed 1.302s CPU time.
Jan 26 15:43:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3aaf3dbed0ee1ef68b89c812754291e0fb891295f002af1644d3b2fc5e508c9-merged.mount: Deactivated successfully.
Jan 26 15:43:00 compute-0 podman[243207]: 2026-01-26 15:43:00.132124593 +0000 UTC m=+1.013954257 container remove bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_galileo, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:43:00 compute-0 systemd[1]: libpod-conmon-bb763aa4d6e42757e19b543dc1a54dfc681efad9d4468c952b6f7c9d5f1c5e6d.scope: Deactivated successfully.
Jan 26 15:43:00 compute-0 sudo[243130]: pam_unix(sudo:session): session closed for user root
Jan 26 15:43:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:43:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:43:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:43:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:43:00 compute-0 sudo[243319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:43:00 compute-0 sudo[243319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:43:00 compute-0 sudo[243319]: pam_unix(sudo:session): session closed for user root
Jan 26 15:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:43:00 compute-0 ceph-mon[75140]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:43:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:43:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:43:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:43:02 compute-0 ceph-mon[75140]: pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 15:43:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 15:43:04 compute-0 ceph-mon[75140]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 15:43:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:06 compute-0 ceph-mon[75140]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:08 compute-0 podman[243344]: 2026-01-26 15:43:08.380128947 +0000 UTC m=+0.063589655 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 15:43:08 compute-0 podman[243345]: 2026-01-26 15:43:08.440732209 +0000 UTC m=+0.121479250 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 15:43:08 compute-0 ceph-mon[75140]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:09 compute-0 ceph-mon[75140]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:12 compute-0 ceph-mon[75140]: pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:14 compute-0 ceph-mon[75140]: pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:16 compute-0 nova_compute[239965]: 2026-01-26 15:43:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:16 compute-0 ceph-mon[75140]: pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:17 compute-0 nova_compute[239965]: 2026-01-26 15:43:17.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:18 compute-0 nova_compute[239965]: 2026-01-26 15:43:18.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:18 compute-0 nova_compute[239965]: 2026-01-26 15:43:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:18 compute-0 nova_compute[239965]: 2026-01-26 15:43:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:18 compute-0 nova_compute[239965]: 2026-01-26 15:43:18.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:43:18 compute-0 ceph-mon[75140]: pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:19 compute-0 nova_compute[239965]: 2026-01-26 15:43:19.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:19 compute-0 nova_compute[239965]: 2026-01-26 15:43:19.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:19 compute-0 nova_compute[239965]: 2026-01-26 15:43:19.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:43:19 compute-0 nova_compute[239965]: 2026-01-26 15:43:19.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:43:19 compute-0 nova_compute[239965]: 2026-01-26 15:43:19.536 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:43:19 compute-0 nova_compute[239965]: 2026-01-26 15:43:19.536 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:43:19 compute-0 nova_compute[239965]: 2026-01-26 15:43:19.536 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:43:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:43:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1785132080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.085 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.225 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.226 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5136MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.226 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.227 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.417 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.418 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.442 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:43:20 compute-0 ceph-mon[75140]: pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1785132080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:43:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:43:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3948809571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.955 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.960 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.976 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.977 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:43:20 compute-0 nova_compute[239965]: 2026-01-26 15:43:20.977 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:43:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3948809571' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:43:22 compute-0 ceph-mon[75140]: pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:22 compute-0 nova_compute[239965]: 2026-01-26 15:43:22.969 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:22 compute-0 nova_compute[239965]: 2026-01-26 15:43:22.985 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:22 compute-0 nova_compute[239965]: 2026-01-26 15:43:22.985 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:43:22 compute-0 nova_compute[239965]: 2026-01-26 15:43:22.985 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:43:22 compute-0 nova_compute[239965]: 2026-01-26 15:43:22.998 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:43:22 compute-0 nova_compute[239965]: 2026-01-26 15:43:22.999 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:43:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:23 compute-0 ceph-mon[75140]: pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:25 compute-0 sshd-session[243432]: Connection closed by authenticating user root 178.128.250.55 port 44638 [preauth]
Jan 26 15:43:26 compute-0 ceph-mon[75140]: pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:28 compute-0 ceph-mon[75140]: pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:43:28
Jan 26 15:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr']
Jan 26 15:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:43:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:43:30 compute-0 ceph-mon[75140]: pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:31 compute-0 sshd-session[243434]: Invalid user orangepi from 103.236.95.173 port 41734
Jan 26 15:43:31 compute-0 sshd-session[243434]: Connection closed by invalid user orangepi 103.236.95.173 port 41734 [preauth]
Jan 26 15:43:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:32 compute-0 ceph-mon[75140]: pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:34 compute-0 ceph-mon[75140]: pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:36 compute-0 ceph-mon[75140]: pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:38 compute-0 ceph-mon[75140]: pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:39 compute-0 podman[243437]: 2026-01-26 15:43:39.355895903 +0000 UTC m=+0.046228538 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true)
Jan 26 15:43:39 compute-0 podman[243438]: 2026-01-26 15:43:39.401822653 +0000 UTC m=+0.090181890 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 15:43:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:40 compute-0 ceph-mon[75140]: pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:41 compute-0 ceph-mon[75140]: pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:44 compute-0 ceph-mon[75140]: pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:46 compute-0 ceph-mon[75140]: pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1837404050763064e-06 of space, bias 4.0, pg target 0.0014204884860915677 quantized to 16 (current 16)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:43:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:43:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:43:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561978673' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:43:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:43:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2561978673' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:43:48 compute-0 ceph-mon[75140]: pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2561978673' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:43:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2561978673' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:43:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:50 compute-0 ceph-mon[75140]: pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:51 compute-0 ceph-mon[75140]: pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:54 compute-0 ceph-mon[75140]: pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:56 compute-0 ceph-mon[75140]: pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:57 compute-0 ceph-mon[75140]: pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:43:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:43:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:43:59.204 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:43:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:43:59.204 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:43:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:43:59.205 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:43:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:00 compute-0 sudo[243481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:44:00 compute-0 sudo[243481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:00 compute-0 sudo[243481]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:44:00 compute-0 sudo[243506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 15:44:00 compute-0 sudo[243506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:00 compute-0 ceph-mon[75140]: pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:00 compute-0 podman[243575]: 2026-01-26 15:44:00.873832772 +0000 UTC m=+0.084129180 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 15:44:00 compute-0 podman[243575]: 2026-01-26 15:44:00.980403634 +0000 UTC m=+0.190700032 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 15:44:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:01 compute-0 sudo[243506]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:44:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:44:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:01 compute-0 sudo[243760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:44:01 compute-0 sudo[243760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:01 compute-0 sudo[243760]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:01 compute-0 sudo[243787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:44:01 compute-0 sudo[243787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:02 compute-0 sudo[243787]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:44:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:44:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:44:02 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:44:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:44:02 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:44:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:44:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:44:02 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:44:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:44:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:44:02 compute-0 sudo[243844]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:44:02 compute-0 sudo[243844]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:02 compute-0 sudo[243844]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:02 compute-0 sudo[243869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:44:02 compute-0 sudo[243869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:02 compute-0 ceph-mon[75140]: pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:44:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:44:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:44:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:44:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:44:02 compute-0 podman[243906]: 2026-01-26 15:44:02.744673198 +0000 UTC m=+0.041096122 container create cd6a172c7c8245a139c2dc286f8280f8b5d1b1aaab7a5f1d7bdb37c9b2ed012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_brattain, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 15:44:02 compute-0 systemd[1]: Started libpod-conmon-cd6a172c7c8245a139c2dc286f8280f8b5d1b1aaab7a5f1d7bdb37c9b2ed012c.scope.
Jan 26 15:44:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:44:02 compute-0 podman[243906]: 2026-01-26 15:44:02.725483387 +0000 UTC m=+0.021906341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:44:02 compute-0 podman[243906]: 2026-01-26 15:44:02.83134208 +0000 UTC m=+0.127765024 container init cd6a172c7c8245a139c2dc286f8280f8b5d1b1aaab7a5f1d7bdb37c9b2ed012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_brattain, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 26 15:44:02 compute-0 podman[243906]: 2026-01-26 15:44:02.838925487 +0000 UTC m=+0.135348411 container start cd6a172c7c8245a139c2dc286f8280f8b5d1b1aaab7a5f1d7bdb37c9b2ed012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:44:02 compute-0 podman[243906]: 2026-01-26 15:44:02.843233103 +0000 UTC m=+0.139656057 container attach cd6a172c7c8245a139c2dc286f8280f8b5d1b1aaab7a5f1d7bdb37c9b2ed012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 15:44:02 compute-0 condescending_brattain[243922]: 167 167
Jan 26 15:44:02 compute-0 systemd[1]: libpod-cd6a172c7c8245a139c2dc286f8280f8b5d1b1aaab7a5f1d7bdb37c9b2ed012c.scope: Deactivated successfully.
Jan 26 15:44:02 compute-0 podman[243906]: 2026-01-26 15:44:02.844776601 +0000 UTC m=+0.141199525 container died cd6a172c7c8245a139c2dc286f8280f8b5d1b1aaab7a5f1d7bdb37c9b2ed012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_brattain, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-692ee6feb1e21ed26a4df5ebd25fb80798377b286ae7b01de6957e160db65c1a-merged.mount: Deactivated successfully.
Jan 26 15:44:02 compute-0 podman[243906]: 2026-01-26 15:44:02.892366642 +0000 UTC m=+0.188789566 container remove cd6a172c7c8245a139c2dc286f8280f8b5d1b1aaab7a5f1d7bdb37c9b2ed012c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 15:44:02 compute-0 systemd[1]: libpod-conmon-cd6a172c7c8245a139c2dc286f8280f8b5d1b1aaab7a5f1d7bdb37c9b2ed012c.scope: Deactivated successfully.
Jan 26 15:44:03 compute-0 podman[243945]: 2026-01-26 15:44:03.042666449 +0000 UTC m=+0.039897782 container create 2a9406a0233b56bd2b38f984b3830d865944b978182abc0c5862d26fa1bbb960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:44:03 compute-0 systemd[1]: Started libpod-conmon-2a9406a0233b56bd2b38f984b3830d865944b978182abc0c5862d26fa1bbb960.scope.
Jan 26 15:44:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d5b5a40a750018a9d395c75a51675bd761f8fe8cbc8145af8e1cee85f41d72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d5b5a40a750018a9d395c75a51675bd761f8fe8cbc8145af8e1cee85f41d72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d5b5a40a750018a9d395c75a51675bd761f8fe8cbc8145af8e1cee85f41d72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d5b5a40a750018a9d395c75a51675bd761f8fe8cbc8145af8e1cee85f41d72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d5b5a40a750018a9d395c75a51675bd761f8fe8cbc8145af8e1cee85f41d72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:03 compute-0 podman[243945]: 2026-01-26 15:44:03.120480024 +0000 UTC m=+0.117711367 container init 2a9406a0233b56bd2b38f984b3830d865944b978182abc0c5862d26fa1bbb960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:44:03 compute-0 podman[243945]: 2026-01-26 15:44:03.026634875 +0000 UTC m=+0.023866228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:44:03 compute-0 podman[243945]: 2026-01-26 15:44:03.129016114 +0000 UTC m=+0.126247437 container start 2a9406a0233b56bd2b38f984b3830d865944b978182abc0c5862d26fa1bbb960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 15:44:03 compute-0 podman[243945]: 2026-01-26 15:44:03.133279219 +0000 UTC m=+0.130510552 container attach 2a9406a0233b56bd2b38f984b3830d865944b978182abc0c5862d26fa1bbb960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:44:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:44:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:03 compute-0 kind_wing[243961]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:44:03 compute-0 kind_wing[243961]: --> All data devices are unavailable
Jan 26 15:44:03 compute-0 systemd[1]: libpod-2a9406a0233b56bd2b38f984b3830d865944b978182abc0c5862d26fa1bbb960.scope: Deactivated successfully.
Jan 26 15:44:03 compute-0 podman[243945]: 2026-01-26 15:44:03.598106724 +0000 UTC m=+0.595338067 container died 2a9406a0233b56bd2b38f984b3830d865944b978182abc0c5862d26fa1bbb960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 15:44:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-11d5b5a40a750018a9d395c75a51675bd761f8fe8cbc8145af8e1cee85f41d72-merged.mount: Deactivated successfully.
Jan 26 15:44:03 compute-0 podman[243945]: 2026-01-26 15:44:03.646625167 +0000 UTC m=+0.643856510 container remove 2a9406a0233b56bd2b38f984b3830d865944b978182abc0c5862d26fa1bbb960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 15:44:03 compute-0 systemd[1]: libpod-conmon-2a9406a0233b56bd2b38f984b3830d865944b978182abc0c5862d26fa1bbb960.scope: Deactivated successfully.
Jan 26 15:44:03 compute-0 sudo[243869]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:03 compute-0 sudo[243994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:44:03 compute-0 sudo[243994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:03 compute-0 sudo[243994]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:03 compute-0 sshd-session[243783]: Connection closed by authenticating user root 103.236.95.173 port 52100 [preauth]
Jan 26 15:44:03 compute-0 sudo[244019]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:44:03 compute-0 sudo[244019]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:04 compute-0 podman[244056]: 2026-01-26 15:44:04.097264044 +0000 UTC m=+0.042334952 container create 7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:44:04 compute-0 systemd[1]: Started libpod-conmon-7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223.scope.
Jan 26 15:44:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:44:04 compute-0 podman[244056]: 2026-01-26 15:44:04.173013418 +0000 UTC m=+0.118084346 container init 7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:44:04 compute-0 podman[244056]: 2026-01-26 15:44:04.082536482 +0000 UTC m=+0.027607410 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:44:04 compute-0 podman[244056]: 2026-01-26 15:44:04.18082621 +0000 UTC m=+0.125897118 container start 7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:44:04 compute-0 podman[244056]: 2026-01-26 15:44:04.184914521 +0000 UTC m=+0.129985449 container attach 7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:44:04 compute-0 systemd[1]: libpod-7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223.scope: Deactivated successfully.
Jan 26 15:44:04 compute-0 clever_booth[244072]: 167 167
Jan 26 15:44:04 compute-0 conmon[244072]: conmon 7283b94fe24dc9b0589c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223.scope/container/memory.events
Jan 26 15:44:04 compute-0 podman[244056]: 2026-01-26 15:44:04.187825792 +0000 UTC m=+0.132896700 container died 7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:44:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0e1bcf5d9fb1329d607fa2ae16c2750671f6e4c6171719608ccd44f8080ab35-merged.mount: Deactivated successfully.
Jan 26 15:44:04 compute-0 podman[244056]: 2026-01-26 15:44:04.226540325 +0000 UTC m=+0.171611233 container remove 7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_booth, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:44:04 compute-0 systemd[1]: libpod-conmon-7283b94fe24dc9b0589c33744a33556ce4ffc2eedaaccbff43970e8a8c2bc223.scope: Deactivated successfully.
Jan 26 15:44:04 compute-0 podman[244096]: 2026-01-26 15:44:04.41982695 +0000 UTC m=+0.058195283 container create 19a95997cb6e09e9d28a4c0531b10d664809bbaaaf5ae202445777e98208d9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:44:04 compute-0 systemd[1]: Started libpod-conmon-19a95997cb6e09e9d28a4c0531b10d664809bbaaaf5ae202445777e98208d9f4.scope.
Jan 26 15:44:04 compute-0 podman[244096]: 2026-01-26 15:44:04.386889749 +0000 UTC m=+0.025258092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:44:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c663d32bd630cbfa3a4940af2583d20ff2f126434ee709234c53de1c67384873/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c663d32bd630cbfa3a4940af2583d20ff2f126434ee709234c53de1c67384873/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c663d32bd630cbfa3a4940af2583d20ff2f126434ee709234c53de1c67384873/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c663d32bd630cbfa3a4940af2583d20ff2f126434ee709234c53de1c67384873/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:04 compute-0 podman[244096]: 2026-01-26 15:44:04.501143721 +0000 UTC m=+0.139512074 container init 19a95997cb6e09e9d28a4c0531b10d664809bbaaaf5ae202445777e98208d9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:44:04 compute-0 podman[244096]: 2026-01-26 15:44:04.5092235 +0000 UTC m=+0.147591833 container start 19a95997cb6e09e9d28a4c0531b10d664809bbaaaf5ae202445777e98208d9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 15:44:04 compute-0 podman[244096]: 2026-01-26 15:44:04.512786837 +0000 UTC m=+0.151155170 container attach 19a95997cb6e09e9d28a4c0531b10d664809bbaaaf5ae202445777e98208d9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:44:04 compute-0 ceph-mon[75140]: pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:04 compute-0 quizzical_buck[244113]: {
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:     "0": [
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:         {
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "devices": [
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "/dev/loop3"
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             ],
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_name": "ceph_lv0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_size": "21470642176",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "name": "ceph_lv0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "tags": {
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.cluster_name": "ceph",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.crush_device_class": "",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.encrypted": "0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.objectstore": "bluestore",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.osd_id": "0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.type": "block",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.vdo": "0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.with_tpm": "0"
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             },
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "type": "block",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "vg_name": "ceph_vg0"
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:         }
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:     ],
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:     "1": [
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:         {
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "devices": [
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "/dev/loop4"
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             ],
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_name": "ceph_lv1",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_size": "21470642176",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "name": "ceph_lv1",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "tags": {
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.cluster_name": "ceph",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.crush_device_class": "",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.encrypted": "0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.objectstore": "bluestore",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.osd_id": "1",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.type": "block",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.vdo": "0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.with_tpm": "0"
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             },
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "type": "block",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "vg_name": "ceph_vg1"
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:         }
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:     ],
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:     "2": [
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:         {
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "devices": [
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "/dev/loop5"
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             ],
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_name": "ceph_lv2",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_size": "21470642176",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "name": "ceph_lv2",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "tags": {
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.cluster_name": "ceph",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.crush_device_class": "",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.encrypted": "0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.objectstore": "bluestore",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.osd_id": "2",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.type": "block",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.vdo": "0",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:                 "ceph.with_tpm": "0"
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             },
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "type": "block",
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:             "vg_name": "ceph_vg2"
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:         }
Jan 26 15:44:04 compute-0 quizzical_buck[244113]:     ]
Jan 26 15:44:04 compute-0 quizzical_buck[244113]: }
Jan 26 15:44:04 compute-0 systemd[1]: libpod-19a95997cb6e09e9d28a4c0531b10d664809bbaaaf5ae202445777e98208d9f4.scope: Deactivated successfully.
Jan 26 15:44:04 compute-0 podman[244096]: 2026-01-26 15:44:04.801649443 +0000 UTC m=+0.440017776 container died 19a95997cb6e09e9d28a4c0531b10d664809bbaaaf5ae202445777e98208d9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c663d32bd630cbfa3a4940af2583d20ff2f126434ee709234c53de1c67384873-merged.mount: Deactivated successfully.
Jan 26 15:44:05 compute-0 podman[244096]: 2026-01-26 15:44:05.062637935 +0000 UTC m=+0.701006268 container remove 19a95997cb6e09e9d28a4c0531b10d664809bbaaaf5ae202445777e98208d9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_buck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 15:44:05 compute-0 systemd[1]: libpod-conmon-19a95997cb6e09e9d28a4c0531b10d664809bbaaaf5ae202445777e98208d9f4.scope: Deactivated successfully.
Jan 26 15:44:05 compute-0 sudo[244019]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:05 compute-0 sudo[244137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:44:05 compute-0 sudo[244137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:05 compute-0 sudo[244137]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:05 compute-0 sudo[244162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:44:05 compute-0 sudo[244162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:05 compute-0 podman[244198]: 2026-01-26 15:44:05.487930768 +0000 UTC m=+0.035998437 container create 265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brahmagupta, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:44:05 compute-0 systemd[1]: Started libpod-conmon-265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f.scope.
Jan 26 15:44:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:44:05 compute-0 podman[244198]: 2026-01-26 15:44:05.562145293 +0000 UTC m=+0.110212982 container init 265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 15:44:05 compute-0 podman[244198]: 2026-01-26 15:44:05.470887608 +0000 UTC m=+0.018955297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:44:05 compute-0 podman[244198]: 2026-01-26 15:44:05.568674293 +0000 UTC m=+0.116741962 container start 265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brahmagupta, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:44:05 compute-0 podman[244198]: 2026-01-26 15:44:05.572388715 +0000 UTC m=+0.120456414 container attach 265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 15:44:05 compute-0 systemd[1]: libpod-265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f.scope: Deactivated successfully.
Jan 26 15:44:05 compute-0 crazy_brahmagupta[244214]: 167 167
Jan 26 15:44:05 compute-0 conmon[244214]: conmon 265b4f3115e65bd73b66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f.scope/container/memory.events
Jan 26 15:44:05 compute-0 podman[244198]: 2026-01-26 15:44:05.574153228 +0000 UTC m=+0.122220897 container died 265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:44:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-56212cf99ac750ad674ef0fc8b4f2763427f5e5410caa3cdb3fc0421898be434-merged.mount: Deactivated successfully.
Jan 26 15:44:05 compute-0 podman[244198]: 2026-01-26 15:44:05.626195968 +0000 UTC m=+0.174263647 container remove 265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_brahmagupta, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 15:44:05 compute-0 systemd[1]: libpod-conmon-265b4f3115e65bd73b6687cdae6b88083cc02a99d673045c9b08ec7c6574c18f.scope: Deactivated successfully.
Jan 26 15:44:05 compute-0 podman[244240]: 2026-01-26 15:44:05.818173831 +0000 UTC m=+0.046162366 container create 361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:44:05 compute-0 systemd[1]: Started libpod-conmon-361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93.scope.
Jan 26 15:44:05 compute-0 podman[244240]: 2026-01-26 15:44:05.795132495 +0000 UTC m=+0.023121020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:44:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82626d6696051994b7afa00c5b264f94b0ee090c0f7b949f63885e97db9cf4a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82626d6696051994b7afa00c5b264f94b0ee090c0f7b949f63885e97db9cf4a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82626d6696051994b7afa00c5b264f94b0ee090c0f7b949f63885e97db9cf4a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82626d6696051994b7afa00c5b264f94b0ee090c0f7b949f63885e97db9cf4a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:44:05 compute-0 podman[244240]: 2026-01-26 15:44:05.912326508 +0000 UTC m=+0.140315063 container init 361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:44:05 compute-0 podman[244240]: 2026-01-26 15:44:05.919231558 +0000 UTC m=+0.147220063 container start 361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 15:44:05 compute-0 podman[244240]: 2026-01-26 15:44:05.923069852 +0000 UTC m=+0.151058357 container attach 361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 15:44:06 compute-0 lvm[244336]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:44:06 compute-0 lvm[244335]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:44:06 compute-0 lvm[244336]: VG ceph_vg1 finished
Jan 26 15:44:06 compute-0 lvm[244335]: VG ceph_vg0 finished
Jan 26 15:44:06 compute-0 lvm[244338]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:44:06 compute-0 lvm[244338]: VG ceph_vg2 finished
Jan 26 15:44:06 compute-0 brave_greider[244256]: {}
Jan 26 15:44:06 compute-0 systemd[1]: libpod-361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93.scope: Deactivated successfully.
Jan 26 15:44:06 compute-0 podman[244240]: 2026-01-26 15:44:06.68715313 +0000 UTC m=+0.915141635 container died 361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:44:06 compute-0 systemd[1]: libpod-361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93.scope: Consumed 1.283s CPU time.
Jan 26 15:44:07 compute-0 ceph-mon[75140]: pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-82626d6696051994b7afa00c5b264f94b0ee090c0f7b949f63885e97db9cf4a4-merged.mount: Deactivated successfully.
Jan 26 15:44:09 compute-0 ceph-mon[75140]: pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:09 compute-0 podman[244240]: 2026-01-26 15:44:09.024531773 +0000 UTC m=+3.252520278 container remove 361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_greider, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:44:09 compute-0 systemd[1]: libpod-conmon-361de006dae652b3aded451e7d77e484a9d93bcfcda84401cfbcebe4afa11c93.scope: Deactivated successfully.
Jan 26 15:44:09 compute-0 sudo[244162]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:44:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:44:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:09 compute-0 sudo[244354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:44:09 compute-0 sudo[244354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:44:09 compute-0 sudo[244354]: pam_unix(sudo:session): session closed for user root
Jan 26 15:44:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:44:10 compute-0 ceph-mon[75140]: pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:10 compute-0 podman[244379]: 2026-01-26 15:44:10.387428214 +0000 UTC m=+0.069125712 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 15:44:10 compute-0 podman[244380]: 2026-01-26 15:44:10.416086319 +0000 UTC m=+0.089183455 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 26 15:44:11 compute-0 sshd-session[244426]: Connection closed by authenticating user root 178.128.250.55 port 51530 [preauth]
Jan 26 15:44:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:12 compute-0 ceph-mon[75140]: pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:44:15 compute-0 ceph-mon[75140]: pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:16 compute-0 ceph-mon[75140]: pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:16 compute-0 nova_compute[239965]: 2026-01-26 15:44:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:16 compute-0 nova_compute[239965]: 2026-01-26 15:44:16.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 15:44:16 compute-0 nova_compute[239965]: 2026-01-26 15:44:16.531 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 15:44:16 compute-0 nova_compute[239965]: 2026-01-26 15:44:16.532 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:16 compute-0 nova_compute[239965]: 2026-01-26 15:44:16.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 15:44:16 compute-0 nova_compute[239965]: 2026-01-26 15:44:16.546 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:18 compute-0 nova_compute[239965]: 2026-01-26 15:44:18.557 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:18 compute-0 nova_compute[239965]: 2026-01-26 15:44:18.558 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:18 compute-0 ceph-mon[75140]: pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:44:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:19 compute-0 nova_compute[239965]: 2026-01-26 15:44:19.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:20 compute-0 nova_compute[239965]: 2026-01-26 15:44:20.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:20 compute-0 nova_compute[239965]: 2026-01-26 15:44:20.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:20 compute-0 nova_compute[239965]: 2026-01-26 15:44:20.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:44:20 compute-0 nova_compute[239965]: 2026-01-26 15:44:20.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:20 compute-0 nova_compute[239965]: 2026-01-26 15:44:20.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:44:20 compute-0 nova_compute[239965]: 2026-01-26 15:44:20.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:44:20 compute-0 nova_compute[239965]: 2026-01-26 15:44:20.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:44:20 compute-0 nova_compute[239965]: 2026-01-26 15:44:20.543 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:44:20 compute-0 nova_compute[239965]: 2026-01-26 15:44:20.543 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:44:20 compute-0 ceph-mon[75140]: pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:44:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2397281552' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.094 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.249 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.250 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5138MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.250 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.251 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.431 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.431 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:44:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.518 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 15:44:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2397281552' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.631 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.631 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.649 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.678 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 15:44:21 compute-0 nova_compute[239965]: 2026-01-26 15:44:21.705 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:44:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:44:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232374756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:44:22 compute-0 nova_compute[239965]: 2026-01-26 15:44:22.235 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:44:22 compute-0 nova_compute[239965]: 2026-01-26 15:44:22.240 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:44:22 compute-0 nova_compute[239965]: 2026-01-26 15:44:22.256 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:44:22 compute-0 nova_compute[239965]: 2026-01-26 15:44:22.258 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:44:22 compute-0 nova_compute[239965]: 2026-01-26 15:44:22.258 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:44:22 compute-0 ceph-mon[75140]: pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2232374756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:44:23 compute-0 nova_compute[239965]: 2026-01-26 15:44:23.258 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:23 compute-0 nova_compute[239965]: 2026-01-26 15:44:23.259 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:44:23 compute-0 nova_compute[239965]: 2026-01-26 15:44:23.259 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:44:23 compute-0 nova_compute[239965]: 2026-01-26 15:44:23.274 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:44:23 compute-0 nova_compute[239965]: 2026-01-26 15:44:23.275 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:23 compute-0 nova_compute[239965]: 2026-01-26 15:44:23.275 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:44:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 26 15:44:24 compute-0 ceph-mon[75140]: pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:26 compute-0 ceph-mon[75140]: pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:44:28
Jan 26 15:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'images', 'backups', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 26 15:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:44:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Jan 26 15:44:28 compute-0 ceph-mon[75140]: pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Jan 26 15:44:28 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Jan 26 15:44:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Jan 26 15:44:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Jan 26 15:44:29 compute-0 ceph-mon[75140]: osdmap e135: 3 total, 3 up, 3 in
Jan 26 15:44:29 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:44:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Jan 26 15:44:30 compute-0 ceph-mon[75140]: pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:44:30 compute-0 ceph-mon[75140]: osdmap e136: 3 total, 3 up, 3 in
Jan 26 15:44:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Jan 26 15:44:30 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Jan 26 15:44:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 16 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 2.7 MiB/s wr, 1 op/s
Jan 26 15:44:31 compute-0 ceph-mon[75140]: osdmap e137: 3 total, 3 up, 3 in
Jan 26 15:44:32 compute-0 ceph-mon[75140]: pgmap v849: 305 pgs: 305 active+clean; 16 MiB data, 137 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 2.7 MiB/s wr, 1 op/s
Jan 26 15:44:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Jan 26 15:44:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Jan 26 15:44:32 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Jan 26 15:44:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 29 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 6.1 MiB/s wr, 50 op/s
Jan 26 15:44:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:44:33 compute-0 ceph-mon[75140]: osdmap e138: 3 total, 3 up, 3 in
Jan 26 15:44:33 compute-0 ceph-mon[75140]: pgmap v851: 305 pgs: 305 active+clean; 29 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 6.1 MiB/s wr, 50 op/s
Jan 26 15:44:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 29 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.7 MiB/s wr, 39 op/s
Jan 26 15:44:36 compute-0 ceph-mon[75140]: pgmap v852: 305 pgs: 305 active+clean; 29 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.7 MiB/s wr, 39 op/s
Jan 26 15:44:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Jan 26 15:44:38 compute-0 ceph-mon[75140]: pgmap v853: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Jan 26 15:44:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:44:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Jan 26 15:44:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Jan 26 15:44:38 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Jan 26 15:44:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.1 MiB/s wr, 46 op/s
Jan 26 15:44:39 compute-0 ceph-mon[75140]: osdmap e139: 3 total, 3 up, 3 in
Jan 26 15:44:40 compute-0 ceph-mon[75140]: pgmap v855: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.1 MiB/s wr, 46 op/s
Jan 26 15:44:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 MiB/s wr, 17 op/s
Jan 26 15:44:41 compute-0 podman[244475]: 2026-01-26 15:44:41.507053828 +0000 UTC m=+0.056946493 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:44:41 compute-0 podman[244476]: 2026-01-26 15:44:41.536918452 +0000 UTC m=+0.085414012 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 15:44:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Jan 26 15:44:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Jan 26 15:44:41 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Jan 26 15:44:42 compute-0 ceph-mon[75140]: pgmap v856: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 MiB/s wr, 17 op/s
Jan 26 15:44:42 compute-0 ceph-mon[75140]: osdmap e140: 3 total, 3 up, 3 in
Jan 26 15:44:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.6 MiB/s wr, 47 op/s
Jan 26 15:44:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:44:44 compute-0 ceph-mon[75140]: pgmap v858: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 1.6 MiB/s wr, 47 op/s
Jan 26 15:44:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Jan 26 15:44:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Jan 26 15:44:45 compute-0 ceph-mon[75140]: pgmap v859: 305 pgs: 305 active+clean; 37 MiB data, 173 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.7 KiB/s wr, 29 op/s
Jan 26 15:44:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Jan 26 15:44:45 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Jan 26 15:44:46 compute-0 ceph-mon[75140]: osdmap e141: 3 total, 3 up, 3 in
Jan 26 15:44:47 compute-0 sshd-session[244473]: ssh_dispatch_run_fatal: Connection from 103.236.95.173 port 57418: Connection timed out [preauth]
Jan 26 15:44:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 KiB/s wr, 48 op/s
Jan 26 15:44:48 compute-0 ceph-mon[75140]: pgmap v861: 305 pgs: 305 active+clean; 21 MiB data, 157 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.4 KiB/s wr, 48 op/s
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0003330001159129435 of space, bias 1.0, pg target 0.09990003477388304 quantized to 32 (current 32)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.088928771126105e-06 of space, bias 4.0, pg target 0.0013067145253513259 quantized to 16 (current 16)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:44:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:44:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:44:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1681073366' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:44:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:44:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1681073366' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:44:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:44:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Jan 26 15:44:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Jan 26 15:44:48 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Jan 26 15:44:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1681073366' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:44:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1681073366' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:44:49 compute-0 ceph-mon[75140]: osdmap e142: 3 total, 3 up, 3 in
Jan 26 15:44:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 13 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.5 KiB/s wr, 50 op/s
Jan 26 15:44:50 compute-0 ceph-mon[75140]: pgmap v863: 305 pgs: 305 active+clean; 13 MiB data, 149 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 3.5 KiB/s wr, 50 op/s
Jan 26 15:44:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Jan 26 15:44:52 compute-0 ceph-mon[75140]: pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Jan 26 15:44:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Jan 26 15:44:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:44:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Jan 26 15:44:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Jan 26 15:44:53 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Jan 26 15:44:54 compute-0 ceph-mon[75140]: pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.7 KiB/s wr, 33 op/s
Jan 26 15:44:54 compute-0 ceph-mon[75140]: osdmap e143: 3 total, 3 up, 3 in
Jan 26 15:44:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 127 B/s wr, 14 op/s
Jan 26 15:44:55 compute-0 sshd-session[244519]: Connection closed by authenticating user root 178.128.250.55 port 56258 [preauth]
Jan 26 15:44:56 compute-0 ceph-mon[75140]: pgmap v867: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 127 B/s wr, 14 op/s
Jan 26 15:44:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 117 B/s wr, 12 op/s
Jan 26 15:44:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Jan 26 15:44:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Jan 26 15:44:57 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Jan 26 15:44:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:44:58 compute-0 ceph-mon[75140]: pgmap v868: 305 pgs: 305 active+clean; 461 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 117 B/s wr, 12 op/s
Jan 26 15:44:58 compute-0 ceph-mon[75140]: osdmap e144: 3 total, 3 up, 3 in
Jan 26 15:44:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:44:59.205 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:44:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:44:59.206 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:44:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:44:59.206 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:44:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 767 B/s wr, 10 op/s
Jan 26 15:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:45:00 compute-0 ceph-mon[75140]: pgmap v870: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 767 B/s wr, 10 op/s
Jan 26 15:45:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Jan 26 15:45:01 compute-0 ceph-mon[75140]: pgmap v871: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 1.1 KiB/s wr, 11 op/s
Jan 26 15:45:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Jan 26 15:45:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:05 compute-0 ceph-mon[75140]: pgmap v872: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.6 KiB/s wr, 15 op/s
Jan 26 15:45:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 26 15:45:06 compute-0 ceph-mon[75140]: pgmap v873: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 26 15:45:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 26 15:45:08 compute-0 ceph-mon[75140]: pgmap v874: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 26 15:45:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:09 compute-0 sudo[244521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:45:09 compute-0 sudo[244521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:45:09 compute-0 sudo[244521]: pam_unix(sudo:session): session closed for user root
Jan 26 15:45:09 compute-0 sudo[244546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:45:09 compute-0 sudo[244546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:45:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Jan 26 15:45:09 compute-0 sudo[244546]: pam_unix(sudo:session): session closed for user root
Jan 26 15:45:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:45:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:45:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:45:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:45:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:45:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:45:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:45:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:45:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:45:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:45:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:45:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:45:09 compute-0 sudo[244603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:45:09 compute-0 sudo[244603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:45:09 compute-0 sudo[244603]: pam_unix(sudo:session): session closed for user root
Jan 26 15:45:09 compute-0 sudo[244628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:45:09 compute-0 sudo[244628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:45:10 compute-0 podman[244666]: 2026-01-26 15:45:10.252698046 +0000 UTC m=+0.047792359 container create 2e214ea96ee25abfd7d8189285fd721094906ee9fdd0a0e74bfcd4a6cbb10198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jemison, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:45:10 compute-0 systemd[1]: Started libpod-conmon-2e214ea96ee25abfd7d8189285fd721094906ee9fdd0a0e74bfcd4a6cbb10198.scope.
Jan 26 15:45:10 compute-0 podman[244666]: 2026-01-26 15:45:10.227145706 +0000 UTC m=+0.022240069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:45:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:45:10 compute-0 podman[244666]: 2026-01-26 15:45:10.342961511 +0000 UTC m=+0.138055904 container init 2e214ea96ee25abfd7d8189285fd721094906ee9fdd0a0e74bfcd4a6cbb10198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jemison, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 15:45:10 compute-0 podman[244666]: 2026-01-26 15:45:10.349827711 +0000 UTC m=+0.144922024 container start 2e214ea96ee25abfd7d8189285fd721094906ee9fdd0a0e74bfcd4a6cbb10198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 15:45:10 compute-0 podman[244666]: 2026-01-26 15:45:10.3546794 +0000 UTC m=+0.149773703 container attach 2e214ea96ee25abfd7d8189285fd721094906ee9fdd0a0e74bfcd4a6cbb10198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jemison, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 15:45:10 compute-0 gallant_jemison[244682]: 167 167
Jan 26 15:45:10 compute-0 systemd[1]: libpod-2e214ea96ee25abfd7d8189285fd721094906ee9fdd0a0e74bfcd4a6cbb10198.scope: Deactivated successfully.
Jan 26 15:45:10 compute-0 podman[244666]: 2026-01-26 15:45:10.359279294 +0000 UTC m=+0.154373607 container died 2e214ea96ee25abfd7d8189285fd721094906ee9fdd0a0e74bfcd4a6cbb10198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jemison, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:45:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-417627ff0c605f737597bb54217a77ef899d9d8de475269173aef5615b0151d1-merged.mount: Deactivated successfully.
Jan 26 15:45:10 compute-0 podman[244666]: 2026-01-26 15:45:10.415045548 +0000 UTC m=+0.210139861 container remove 2e214ea96ee25abfd7d8189285fd721094906ee9fdd0a0e74bfcd4a6cbb10198 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_jemison, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:45:10 compute-0 systemd[1]: libpod-conmon-2e214ea96ee25abfd7d8189285fd721094906ee9fdd0a0e74bfcd4a6cbb10198.scope: Deactivated successfully.
Jan 26 15:45:10 compute-0 ceph-mon[75140]: pgmap v875: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 8.9 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Jan 26 15:45:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:45:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:45:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:45:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:45:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:45:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:45:10 compute-0 podman[244705]: 2026-01-26 15:45:10.601477144 +0000 UTC m=+0.046790695 container create e2bb32990dfd2e1ca167d0d8fff75db2346ffc8b1b25d3cedf4b73a983fa23f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_herschel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 15:45:10 compute-0 systemd[1]: Started libpod-conmon-e2bb32990dfd2e1ca167d0d8fff75db2346ffc8b1b25d3cedf4b73a983fa23f3.scope.
Jan 26 15:45:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7f3f691c3206335b824d0a7827bdae84b6ac5837ddc6e5e8555133729adbba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7f3f691c3206335b824d0a7827bdae84b6ac5837ddc6e5e8555133729adbba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7f3f691c3206335b824d0a7827bdae84b6ac5837ddc6e5e8555133729adbba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7f3f691c3206335b824d0a7827bdae84b6ac5837ddc6e5e8555133729adbba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e7f3f691c3206335b824d0a7827bdae84b6ac5837ddc6e5e8555133729adbba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:10 compute-0 podman[244705]: 2026-01-26 15:45:10.58389705 +0000 UTC m=+0.029210621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:45:10 compute-0 podman[244705]: 2026-01-26 15:45:10.684845968 +0000 UTC m=+0.130159549 container init e2bb32990dfd2e1ca167d0d8fff75db2346ffc8b1b25d3cedf4b73a983fa23f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_herschel, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 15:45:10 compute-0 podman[244705]: 2026-01-26 15:45:10.696181148 +0000 UTC m=+0.141494699 container start e2bb32990dfd2e1ca167d0d8fff75db2346ffc8b1b25d3cedf4b73a983fa23f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_herschel, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:45:10 compute-0 podman[244705]: 2026-01-26 15:45:10.708093662 +0000 UTC m=+0.153407213 container attach e2bb32990dfd2e1ca167d0d8fff75db2346ffc8b1b25d3cedf4b73a983fa23f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_herschel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:45:11 compute-0 recursing_herschel[244722]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:45:11 compute-0 recursing_herschel[244722]: --> All data devices are unavailable
Jan 26 15:45:11 compute-0 systemd[1]: libpod-e2bb32990dfd2e1ca167d0d8fff75db2346ffc8b1b25d3cedf4b73a983fa23f3.scope: Deactivated successfully.
Jan 26 15:45:11 compute-0 podman[244705]: 2026-01-26 15:45:11.199358102 +0000 UTC m=+0.644671663 container died e2bb32990dfd2e1ca167d0d8fff75db2346ffc8b1b25d3cedf4b73a983fa23f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_herschel, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e7f3f691c3206335b824d0a7827bdae84b6ac5837ddc6e5e8555133729adbba-merged.mount: Deactivated successfully.
Jan 26 15:45:11 compute-0 podman[244705]: 2026-01-26 15:45:11.258776836 +0000 UTC m=+0.704090397 container remove e2bb32990dfd2e1ca167d0d8fff75db2346ffc8b1b25d3cedf4b73a983fa23f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_herschel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 15:45:11 compute-0 systemd[1]: libpod-conmon-e2bb32990dfd2e1ca167d0d8fff75db2346ffc8b1b25d3cedf4b73a983fa23f3.scope: Deactivated successfully.
Jan 26 15:45:11 compute-0 sudo[244628]: pam_unix(sudo:session): session closed for user root
Jan 26 15:45:11 compute-0 sudo[244755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:45:11 compute-0 sudo[244755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:45:11 compute-0 sudo[244755]: pam_unix(sudo:session): session closed for user root
Jan 26 15:45:11 compute-0 sudo[244780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:45:11 compute-0 sudo[244780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:45:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 853 B/s wr, 5 op/s
Jan 26 15:45:11 compute-0 podman[244817]: 2026-01-26 15:45:11.700759991 +0000 UTC m=+0.047677416 container create 693c808629557ea13932df48b8e1f8ae8e3e1f513cde479d97945cd02716436f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_pascal, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 15:45:11 compute-0 systemd[1]: Started libpod-conmon-693c808629557ea13932df48b8e1f8ae8e3e1f513cde479d97945cd02716436f.scope.
Jan 26 15:45:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:45:11 compute-0 podman[244817]: 2026-01-26 15:45:11.678129453 +0000 UTC m=+0.025046908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:45:11 compute-0 podman[244817]: 2026-01-26 15:45:11.774314803 +0000 UTC m=+0.121232248 container init 693c808629557ea13932df48b8e1f8ae8e3e1f513cde479d97945cd02716436f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_pascal, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:45:11 compute-0 podman[244817]: 2026-01-26 15:45:11.780452005 +0000 UTC m=+0.127369440 container start 693c808629557ea13932df48b8e1f8ae8e3e1f513cde479d97945cd02716436f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_pascal, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:45:11 compute-0 objective_pascal[244835]: 167 167
Jan 26 15:45:11 compute-0 systemd[1]: libpod-693c808629557ea13932df48b8e1f8ae8e3e1f513cde479d97945cd02716436f.scope: Deactivated successfully.
Jan 26 15:45:11 compute-0 podman[244817]: 2026-01-26 15:45:11.78874393 +0000 UTC m=+0.135661375 container attach 693c808629557ea13932df48b8e1f8ae8e3e1f513cde479d97945cd02716436f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 15:45:11 compute-0 podman[244817]: 2026-01-26 15:45:11.790055461 +0000 UTC m=+0.136972876 container died 693c808629557ea13932df48b8e1f8ae8e3e1f513cde479d97945cd02716436f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 15:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-dcca73bc244848212d13da01b6e8dfda50f0d623c1b35e622c4478984f0c2e01-merged.mount: Deactivated successfully.
Jan 26 15:45:11 compute-0 podman[244817]: 2026-01-26 15:45:11.848365129 +0000 UTC m=+0.195282554 container remove 693c808629557ea13932df48b8e1f8ae8e3e1f513cde479d97945cd02716436f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_pascal, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 15:45:11 compute-0 podman[244830]: 2026-01-26 15:45:11.85202893 +0000 UTC m=+0.110816453 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 15:45:11 compute-0 systemd[1]: libpod-conmon-693c808629557ea13932df48b8e1f8ae8e3e1f513cde479d97945cd02716436f.scope: Deactivated successfully.
Jan 26 15:45:11 compute-0 podman[244834]: 2026-01-26 15:45:11.861698218 +0000 UTC m=+0.120597895 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 15:45:12 compute-0 podman[244902]: 2026-01-26 15:45:12.021271781 +0000 UTC m=+0.040808627 container create e2e92737f487d6c93c07b5d17665f295a76852d009dec57b231ed89f6caf1dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Jan 26 15:45:12 compute-0 systemd[1]: Started libpod-conmon-e2e92737f487d6c93c07b5d17665f295a76852d009dec57b231ed89f6caf1dbe.scope.
Jan 26 15:45:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82d04887b0f7dd900e621ba255d9275547945ab748cfe76686acafbc333da8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82d04887b0f7dd900e621ba255d9275547945ab748cfe76686acafbc333da8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82d04887b0f7dd900e621ba255d9275547945ab748cfe76686acafbc333da8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b82d04887b0f7dd900e621ba255d9275547945ab748cfe76686acafbc333da8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:12 compute-0 podman[244902]: 2026-01-26 15:45:12.100843042 +0000 UTC m=+0.120379878 container init e2e92737f487d6c93c07b5d17665f295a76852d009dec57b231ed89f6caf1dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 15:45:12 compute-0 podman[244902]: 2026-01-26 15:45:12.005195475 +0000 UTC m=+0.024732331 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:45:12 compute-0 podman[244902]: 2026-01-26 15:45:12.108274926 +0000 UTC m=+0.127811762 container start e2e92737f487d6c93c07b5d17665f295a76852d009dec57b231ed89f6caf1dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 15:45:12 compute-0 podman[244902]: 2026-01-26 15:45:12.113097784 +0000 UTC m=+0.132634640 container attach e2e92737f487d6c93c07b5d17665f295a76852d009dec57b231ed89f6caf1dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]: {
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:     "0": [
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:         {
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "devices": [
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "/dev/loop3"
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             ],
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_name": "ceph_lv0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_size": "21470642176",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "name": "ceph_lv0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "tags": {
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.cluster_name": "ceph",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.crush_device_class": "",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.encrypted": "0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.objectstore": "bluestore",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.osd_id": "0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.type": "block",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.vdo": "0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.with_tpm": "0"
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             },
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "type": "block",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "vg_name": "ceph_vg0"
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:         }
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:     ],
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:     "1": [
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:         {
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "devices": [
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "/dev/loop4"
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             ],
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_name": "ceph_lv1",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_size": "21470642176",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "name": "ceph_lv1",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "tags": {
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.cluster_name": "ceph",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.crush_device_class": "",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.encrypted": "0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.objectstore": "bluestore",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.osd_id": "1",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.type": "block",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.vdo": "0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.with_tpm": "0"
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             },
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "type": "block",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "vg_name": "ceph_vg1"
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:         }
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:     ],
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:     "2": [
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:         {
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "devices": [
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "/dev/loop5"
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             ],
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_name": "ceph_lv2",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_size": "21470642176",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "name": "ceph_lv2",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "tags": {
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.cluster_name": "ceph",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.crush_device_class": "",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.encrypted": "0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.objectstore": "bluestore",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.osd_id": "2",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.type": "block",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.vdo": "0",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:                 "ceph.with_tpm": "0"
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             },
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "type": "block",
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:             "vg_name": "ceph_vg2"
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:         }
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]:     ]
Jan 26 15:45:12 compute-0 wizardly_fermi[244919]: }
Jan 26 15:45:12 compute-0 systemd[1]: libpod-e2e92737f487d6c93c07b5d17665f295a76852d009dec57b231ed89f6caf1dbe.scope: Deactivated successfully.
Jan 26 15:45:12 compute-0 podman[244902]: 2026-01-26 15:45:12.409407359 +0000 UTC m=+0.428944195 container died e2e92737f487d6c93c07b5d17665f295a76852d009dec57b231ed89f6caf1dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_fermi, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:45:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b82d04887b0f7dd900e621ba255d9275547945ab748cfe76686acafbc333da8f-merged.mount: Deactivated successfully.
Jan 26 15:45:12 compute-0 podman[244902]: 2026-01-26 15:45:12.466245159 +0000 UTC m=+0.485781995 container remove e2e92737f487d6c93c07b5d17665f295a76852d009dec57b231ed89f6caf1dbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_fermi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:45:12 compute-0 systemd[1]: libpod-conmon-e2e92737f487d6c93c07b5d17665f295a76852d009dec57b231ed89f6caf1dbe.scope: Deactivated successfully.
Jan 26 15:45:12 compute-0 sudo[244780]: pam_unix(sudo:session): session closed for user root
Jan 26 15:45:12 compute-0 sudo[244940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:45:12 compute-0 sudo[244940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:45:12 compute-0 sudo[244940]: pam_unix(sudo:session): session closed for user root
Jan 26 15:45:12 compute-0 ceph-mon[75140]: pgmap v876: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 853 B/s wr, 5 op/s
Jan 26 15:45:12 compute-0 sudo[244965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:45:12 compute-0 sudo[244965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:45:12 compute-0 podman[245001]: 2026-01-26 15:45:12.913890314 +0000 UTC m=+0.047041030 container create 3d77cc54b0e029189862311c74a11c7c79ef3d424007e1037859ca2519fb7972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_black, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 15:45:12 compute-0 systemd[1]: Started libpod-conmon-3d77cc54b0e029189862311c74a11c7c79ef3d424007e1037859ca2519fb7972.scope.
Jan 26 15:45:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:45:12 compute-0 podman[245001]: 2026-01-26 15:45:12.975090793 +0000 UTC m=+0.108241539 container init 3d77cc54b0e029189862311c74a11c7c79ef3d424007e1037859ca2519fb7972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_black, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 15:45:12 compute-0 podman[245001]: 2026-01-26 15:45:12.980793383 +0000 UTC m=+0.113944109 container start 3d77cc54b0e029189862311c74a11c7c79ef3d424007e1037859ca2519fb7972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_black, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 15:45:12 compute-0 serene_black[245017]: 167 167
Jan 26 15:45:12 compute-0 podman[245001]: 2026-01-26 15:45:12.985346725 +0000 UTC m=+0.118497481 container attach 3d77cc54b0e029189862311c74a11c7c79ef3d424007e1037859ca2519fb7972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_black, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:45:12 compute-0 systemd[1]: libpod-3d77cc54b0e029189862311c74a11c7c79ef3d424007e1037859ca2519fb7972.scope: Deactivated successfully.
Jan 26 15:45:12 compute-0 podman[245001]: 2026-01-26 15:45:12.987519679 +0000 UTC m=+0.120670405 container died 3d77cc54b0e029189862311c74a11c7c79ef3d424007e1037859ca2519fb7972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_black, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 15:45:12 compute-0 podman[245001]: 2026-01-26 15:45:12.896087235 +0000 UTC m=+0.029237981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d882dbfbe14d4ebbd7b9588ea7cebca9611a6df04b41b586cc59a3d679ccc0a-merged.mount: Deactivated successfully.
Jan 26 15:45:13 compute-0 podman[245001]: 2026-01-26 15:45:13.032941408 +0000 UTC m=+0.166092144 container remove 3d77cc54b0e029189862311c74a11c7c79ef3d424007e1037859ca2519fb7972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_black, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:45:13 compute-0 systemd[1]: libpod-conmon-3d77cc54b0e029189862311c74a11c7c79ef3d424007e1037859ca2519fb7972.scope: Deactivated successfully.
Jan 26 15:45:13 compute-0 podman[245040]: 2026-01-26 15:45:13.229095313 +0000 UTC m=+0.045550213 container create 5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bohr, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 15:45:13 compute-0 systemd[1]: Started libpod-conmon-5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7.scope.
Jan 26 15:45:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:45:13 compute-0 podman[245040]: 2026-01-26 15:45:13.209362027 +0000 UTC m=+0.025816947 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0fb846f1d14877be411839186a0b3adcd7e4555846faf686dda23bc98504e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0fb846f1d14877be411839186a0b3adcd7e4555846faf686dda23bc98504e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0fb846f1d14877be411839186a0b3adcd7e4555846faf686dda23bc98504e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd0fb846f1d14877be411839186a0b3adcd7e4555846faf686dda23bc98504e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:45:13 compute-0 podman[245040]: 2026-01-26 15:45:13.323520842 +0000 UTC m=+0.139975762 container init 5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 15:45:13 compute-0 podman[245040]: 2026-01-26 15:45:13.33279275 +0000 UTC m=+0.149247660 container start 5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bohr, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 15:45:13 compute-0 podman[245040]: 2026-01-26 15:45:13.336814599 +0000 UTC m=+0.153269499 container attach 5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bohr, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 26 15:45:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 597 B/s wr, 4 op/s
Jan 26 15:45:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:14 compute-0 lvm[245135]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:45:14 compute-0 lvm[245134]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:45:14 compute-0 lvm[245135]: VG ceph_vg1 finished
Jan 26 15:45:14 compute-0 lvm[245134]: VG ceph_vg0 finished
Jan 26 15:45:14 compute-0 lvm[245137]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:45:14 compute-0 lvm[245137]: VG ceph_vg2 finished
Jan 26 15:45:14 compute-0 affectionate_bohr[245056]: {}
Jan 26 15:45:14 compute-0 systemd[1]: libpod-5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7.scope: Deactivated successfully.
Jan 26 15:45:14 compute-0 systemd[1]: libpod-5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7.scope: Consumed 1.360s CPU time.
Jan 26 15:45:14 compute-0 podman[245040]: 2026-01-26 15:45:14.138730536 +0000 UTC m=+0.955185436 container died 5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bohr, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 15:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd0fb846f1d14877be411839186a0b3adcd7e4555846faf686dda23bc98504e1-merged.mount: Deactivated successfully.
Jan 26 15:45:14 compute-0 podman[245040]: 2026-01-26 15:45:14.306779848 +0000 UTC m=+1.123234748 container remove 5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bohr, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 15:45:14 compute-0 systemd[1]: libpod-conmon-5d19a4698faf0f91b307aa575b8d33776babf46cd0c9d3684d82d1ea692460f7.scope: Deactivated successfully.
Jan 26 15:45:14 compute-0 sudo[244965]: pam_unix(sudo:session): session closed for user root
Jan 26 15:45:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:45:14 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:45:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:45:14 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:45:14 compute-0 sudo[245151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:45:14 compute-0 sudo[245151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:45:14 compute-0 sudo[245151]: pam_unix(sudo:session): session closed for user root
Jan 26 15:45:14 compute-0 ceph-mon[75140]: pgmap v877: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 597 B/s wr, 4 op/s
Jan 26 15:45:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:45:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:45:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:16 compute-0 ceph-mon[75140]: pgmap v878: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:18 compute-0 nova_compute[239965]: 2026-01-26 15:45:18.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:18 compute-0 ceph-mon[75140]: pgmap v879: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:19 compute-0 nova_compute[239965]: 2026-01-26 15:45:19.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:20 compute-0 nova_compute[239965]: 2026-01-26 15:45:20.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:20 compute-0 nova_compute[239965]: 2026-01-26 15:45:20.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:45:20 compute-0 ceph-mon[75140]: pgmap v880: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:21 compute-0 nova_compute[239965]: 2026-01-26 15:45:21.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:21 compute-0 nova_compute[239965]: 2026-01-26 15:45:21.519 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:22 compute-0 nova_compute[239965]: 2026-01-26 15:45:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:22 compute-0 nova_compute[239965]: 2026-01-26 15:45:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:22 compute-0 nova_compute[239965]: 2026-01-26 15:45:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:22 compute-0 nova_compute[239965]: 2026-01-26 15:45:22.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:22 compute-0 nova_compute[239965]: 2026-01-26 15:45:22.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:45:22 compute-0 nova_compute[239965]: 2026-01-26 15:45:22.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:45:22 compute-0 nova_compute[239965]: 2026-01-26 15:45:22.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:45:22 compute-0 nova_compute[239965]: 2026-01-26 15:45:22.543 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:45:22 compute-0 nova_compute[239965]: 2026-01-26 15:45:22.544 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:45:22 compute-0 ceph-mon[75140]: pgmap v881: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:45:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1128332027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.086 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.236 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.237 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5095MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.238 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.238 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.299 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.299 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.320 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:45:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1128332027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:45:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:45:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1475699189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.897 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.905 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.923 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.924 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:45:23 compute-0 nova_compute[239965]: 2026-01-26 15:45:23.924 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:45:24 compute-0 ceph-mon[75140]: pgmap v882: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1475699189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:45:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:25 compute-0 nova_compute[239965]: 2026-01-26 15:45:25.924 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:45:25 compute-0 nova_compute[239965]: 2026-01-26 15:45:25.924 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:45:25 compute-0 nova_compute[239965]: 2026-01-26 15:45:25.925 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:45:25 compute-0 ceph-mon[75140]: pgmap v883: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:26 compute-0 nova_compute[239965]: 2026-01-26 15:45:26.073 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:45:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:45:28
Jan 26 15:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'backups', 'images', '.mgr']
Jan 26 15:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:45:28 compute-0 ceph-mon[75140]: pgmap v884: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:45:30 compute-0 ceph-mon[75140]: pgmap v885: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:32 compute-0 ceph-mon[75140]: pgmap v886: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:34 compute-0 ceph-mon[75140]: pgmap v887: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:36 compute-0 ceph-mon[75140]: pgmap v888: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:38 compute-0 ceph-mon[75140]: pgmap v889: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:39 compute-0 ceph-mon[75140]: pgmap v890: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:40 compute-0 sshd-session[245220]: Connection closed by authenticating user root 178.128.250.55 port 41590 [preauth]
Jan 26 15:45:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:42 compute-0 podman[245222]: 2026-01-26 15:45:42.371959428 +0000 UTC m=+0.058191344 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:45:42 compute-0 podman[245223]: 2026-01-26 15:45:42.406870229 +0000 UTC m=+0.093071865 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:45:43 compute-0 ceph-mon[75140]: pgmap v891: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:44 compute-0 ceph-mon[75140]: pgmap v892: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:46 compute-0 ceph-mon[75140]: pgmap v893: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 5.164571499897491e-07 of space, bias 1.0, pg target 0.00015493714499692474 quantized to 32 (current 32)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0865068594024712e-06 of space, bias 4.0, pg target 0.0013038082312829655 quantized to 16 (current 16)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:45:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:45:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:45:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/669736066' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:45:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:45:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/669736066' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:45:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:48 compute-0 ceph-mon[75140]: pgmap v894: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/669736066' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:45:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/669736066' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:45:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:50 compute-0 ceph-mon[75140]: pgmap v895: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:52 compute-0 ceph-mon[75140]: pgmap v896: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:53 compute-0 ceph-mon[75140]: pgmap v897: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:56 compute-0 ceph-mon[75140]: pgmap v898: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:58 compute-0 ceph-mon[75140]: pgmap v899: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:45:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:45:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:45:59.206 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:45:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:45:59.207 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:45:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:45:59.207 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:45:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:46:00 compute-0 ceph-mon[75140]: pgmap v900: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:02 compute-0 ceph-mon[75140]: pgmap v901: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:04 compute-0 ceph-mon[75140]: pgmap v902: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:06 compute-0 ceph-mon[75140]: pgmap v903: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.768928) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442368769019, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1979, "num_deletes": 252, "total_data_size": 3311379, "memory_usage": 3361976, "flush_reason": "Manual Compaction"}
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442368784851, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 1958768, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16530, "largest_seqno": 18508, "table_properties": {"data_size": 1952069, "index_size": 3581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16635, "raw_average_key_size": 20, "raw_value_size": 1937297, "raw_average_value_size": 2385, "num_data_blocks": 164, "num_entries": 812, "num_filter_entries": 812, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769442164, "oldest_key_time": 1769442164, "file_creation_time": 1769442368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 15958 microseconds, and 5271 cpu microseconds.
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.784895) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 1958768 bytes OK
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.784914) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.788751) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.788779) EVENT_LOG_v1 {"time_micros": 1769442368788774, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.788798) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3303010, prev total WAL file size 3303646, number of live WAL files 2.
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.790358) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353033' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(1912KB)], [38(8082KB)]
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442368790443, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 10235163, "oldest_snapshot_seqno": -1}
Jan 26 15:46:08 compute-0 ceph-mon[75140]: pgmap v904: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4501 keys, 8167929 bytes, temperature: kUnknown
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442368846597, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 8167929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8136567, "index_size": 19028, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 108819, "raw_average_key_size": 24, "raw_value_size": 8053946, "raw_average_value_size": 1789, "num_data_blocks": 807, "num_entries": 4501, "num_filter_entries": 4501, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769442368, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.846826) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 8167929 bytes
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.853004) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.0 rd, 145.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 7.9 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(9.4) write-amplify(4.2) OK, records in: 4923, records dropped: 422 output_compression: NoCompression
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.853027) EVENT_LOG_v1 {"time_micros": 1769442368853017, "job": 18, "event": "compaction_finished", "compaction_time_micros": 56241, "compaction_time_cpu_micros": 17953, "output_level": 6, "num_output_files": 1, "total_output_size": 8167929, "num_input_records": 4923, "num_output_records": 4501, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442368853452, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442368854685, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.790254) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.854775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.854781) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.854782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.854784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:08 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:08.854785) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:11 compute-0 ceph-mon[75140]: pgmap v905: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:12 compute-0 ceph-mon[75140]: pgmap v906: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:13 compute-0 podman[245268]: 2026-01-26 15:46:13.358768156 +0000 UTC m=+0.047404460 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:46:13 compute-0 podman[245269]: 2026-01-26 15:46:13.414829498 +0000 UTC m=+0.097145697 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 26 15:46:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:14 compute-0 sudo[245315]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:46:14 compute-0 sudo[245315]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:46:14 compute-0 sudo[245315]: pam_unix(sudo:session): session closed for user root
Jan 26 15:46:14 compute-0 sudo[245340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:46:14 compute-0 sudo[245340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:46:14 compute-0 ceph-mon[75140]: pgmap v907: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:15 compute-0 sudo[245340]: pam_unix(sudo:session): session closed for user root
Jan 26 15:46:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:46:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:46:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:46:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:46:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:46:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:46:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:46:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:46:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:46:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:46:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:46:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:46:15 compute-0 sudo[245396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:46:15 compute-0 sudo[245396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:46:15 compute-0 sudo[245396]: pam_unix(sudo:session): session closed for user root
Jan 26 15:46:15 compute-0 sudo[245421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:46:15 compute-0 sudo[245421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:46:15 compute-0 podman[245458]: 2026-01-26 15:46:15.492715917 +0000 UTC m=+0.049009389 container create dc2746f55a0a9eaf40a22899b3fe775a7f712983cf35e5927fbc8160c918e4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_diffie, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 15:46:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:15 compute-0 systemd[1]: Started libpod-conmon-dc2746f55a0a9eaf40a22899b3fe775a7f712983cf35e5927fbc8160c918e4fc.scope.
Jan 26 15:46:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:46:15 compute-0 podman[245458]: 2026-01-26 15:46:15.469018733 +0000 UTC m=+0.025312255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:46:15 compute-0 podman[245458]: 2026-01-26 15:46:15.58049259 +0000 UTC m=+0.136786092 container init dc2746f55a0a9eaf40a22899b3fe775a7f712983cf35e5927fbc8160c918e4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_diffie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:46:15 compute-0 podman[245458]: 2026-01-26 15:46:15.587343309 +0000 UTC m=+0.143636781 container start dc2746f55a0a9eaf40a22899b3fe775a7f712983cf35e5927fbc8160c918e4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_diffie, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 15:46:15 compute-0 priceless_diffie[245473]: 167 167
Jan 26 15:46:15 compute-0 systemd[1]: libpod-dc2746f55a0a9eaf40a22899b3fe775a7f712983cf35e5927fbc8160c918e4fc.scope: Deactivated successfully.
Jan 26 15:46:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:46:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:46:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:46:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:46:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:46:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:46:15 compute-0 podman[245458]: 2026-01-26 15:46:15.598275799 +0000 UTC m=+0.154569271 container attach dc2746f55a0a9eaf40a22899b3fe775a7f712983cf35e5927fbc8160c918e4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 15:46:15 compute-0 podman[245458]: 2026-01-26 15:46:15.598691779 +0000 UTC m=+0.154985251 container died dc2746f55a0a9eaf40a22899b3fe775a7f712983cf35e5927fbc8160c918e4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_diffie, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:46:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7d9f3cf4a7ac9baf766186ed1e1c6869513c412dd88b5f60b5f435d62baa6b2-merged.mount: Deactivated successfully.
Jan 26 15:46:15 compute-0 podman[245458]: 2026-01-26 15:46:15.658844631 +0000 UTC m=+0.215138093 container remove dc2746f55a0a9eaf40a22899b3fe775a7f712983cf35e5927fbc8160c918e4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_diffie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 26 15:46:15 compute-0 systemd[1]: libpod-conmon-dc2746f55a0a9eaf40a22899b3fe775a7f712983cf35e5927fbc8160c918e4fc.scope: Deactivated successfully.
Jan 26 15:46:15 compute-0 podman[245497]: 2026-01-26 15:46:15.796400383 +0000 UTC m=+0.025058969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:46:17 compute-0 podman[245497]: 2026-01-26 15:46:17.050055795 +0000 UTC m=+1.278714371 container create 7a1556003d51b0cd54623c0e52317bd9bfbfcb1b48bb03c5183bc929092534d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:46:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:17 compute-0 systemd[1]: Started libpod-conmon-7a1556003d51b0cd54623c0e52317bd9bfbfcb1b48bb03c5183bc929092534d0.scope.
Jan 26 15:46:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce46c7e3b662e8ce6d611c1a77cfe0a6d8c9de9c61e3f8ef291bee0463d7d9c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce46c7e3b662e8ce6d611c1a77cfe0a6d8c9de9c61e3f8ef291bee0463d7d9c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce46c7e3b662e8ce6d611c1a77cfe0a6d8c9de9c61e3f8ef291bee0463d7d9c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce46c7e3b662e8ce6d611c1a77cfe0a6d8c9de9c61e3f8ef291bee0463d7d9c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce46c7e3b662e8ce6d611c1a77cfe0a6d8c9de9c61e3f8ef291bee0463d7d9c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:17 compute-0 ceph-mon[75140]: pgmap v908: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:17 compute-0 podman[245497]: 2026-01-26 15:46:17.848547687 +0000 UTC m=+2.077206283 container init 7a1556003d51b0cd54623c0e52317bd9bfbfcb1b48bb03c5183bc929092534d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_clarke, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:46:17 compute-0 podman[245497]: 2026-01-26 15:46:17.858159914 +0000 UTC m=+2.086818490 container start 7a1556003d51b0cd54623c0e52317bd9bfbfcb1b48bb03c5183bc929092534d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 15:46:17 compute-0 podman[245497]: 2026-01-26 15:46:17.971814555 +0000 UTC m=+2.200473151 container attach 7a1556003d51b0cd54623c0e52317bd9bfbfcb1b48bb03c5183bc929092534d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_clarke, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:46:18 compute-0 upbeat_clarke[245513]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:46:18 compute-0 upbeat_clarke[245513]: --> All data devices are unavailable
Jan 26 15:46:18 compute-0 systemd[1]: libpod-7a1556003d51b0cd54623c0e52317bd9bfbfcb1b48bb03c5183bc929092534d0.scope: Deactivated successfully.
Jan 26 15:46:18 compute-0 podman[245533]: 2026-01-26 15:46:18.371332464 +0000 UTC m=+0.027075669 container died 7a1556003d51b0cd54623c0e52317bd9bfbfcb1b48bb03c5183bc929092534d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_clarke, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:46:18 compute-0 nova_compute[239965]: 2026-01-26 15:46:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:46:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce46c7e3b662e8ce6d611c1a77cfe0a6d8c9de9c61e3f8ef291bee0463d7d9c5-merged.mount: Deactivated successfully.
Jan 26 15:46:18 compute-0 podman[245533]: 2026-01-26 15:46:18.619335147 +0000 UTC m=+0.275078262 container remove 7a1556003d51b0cd54623c0e52317bd9bfbfcb1b48bb03c5183bc929092534d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_clarke, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 15:46:18 compute-0 systemd[1]: libpod-conmon-7a1556003d51b0cd54623c0e52317bd9bfbfcb1b48bb03c5183bc929092534d0.scope: Deactivated successfully.
Jan 26 15:46:18 compute-0 sudo[245421]: pam_unix(sudo:session): session closed for user root
Jan 26 15:46:18 compute-0 sudo[245548]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:46:18 compute-0 ceph-mon[75140]: pgmap v909: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:18 compute-0 sudo[245548]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:46:18 compute-0 sudo[245548]: pam_unix(sudo:session): session closed for user root
Jan 26 15:46:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:18 compute-0 sudo[245573]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:46:18 compute-0 sudo[245573]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:46:19 compute-0 podman[245610]: 2026-01-26 15:46:19.039201727 +0000 UTC m=+0.029057197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:46:19 compute-0 podman[245610]: 2026-01-26 15:46:19.350784417 +0000 UTC m=+0.340639887 container create 7c23757a3cf6db341c94252f9dcecbca209b4f2c0f4a0fe8036cff42500a5d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:46:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:19 compute-0 systemd[1]: Started libpod-conmon-7c23757a3cf6db341c94252f9dcecbca209b4f2c0f4a0fe8036cff42500a5d1f.scope.
Jan 26 15:46:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:46:19 compute-0 podman[245610]: 2026-01-26 15:46:19.788898597 +0000 UTC m=+0.778754127 container init 7c23757a3cf6db341c94252f9dcecbca209b4f2c0f4a0fe8036cff42500a5d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:46:19 compute-0 podman[245610]: 2026-01-26 15:46:19.796756941 +0000 UTC m=+0.786612391 container start 7c23757a3cf6db341c94252f9dcecbca209b4f2c0f4a0fe8036cff42500a5d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 15:46:19 compute-0 jolly_aryabhata[245626]: 167 167
Jan 26 15:46:19 compute-0 systemd[1]: libpod-7c23757a3cf6db341c94252f9dcecbca209b4f2c0f4a0fe8036cff42500a5d1f.scope: Deactivated successfully.
Jan 26 15:46:19 compute-0 podman[245610]: 2026-01-26 15:46:19.807900796 +0000 UTC m=+0.797756246 container attach 7c23757a3cf6db341c94252f9dcecbca209b4f2c0f4a0fe8036cff42500a5d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:46:19 compute-0 podman[245610]: 2026-01-26 15:46:19.808246084 +0000 UTC m=+0.798101524 container died 7c23757a3cf6db341c94252f9dcecbca209b4f2c0f4a0fe8036cff42500a5d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 15:46:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a186e5e38f50522600e2f39a379922094d6109c3e73bcc16a8887f1243b05f93-merged.mount: Deactivated successfully.
Jan 26 15:46:19 compute-0 podman[245610]: 2026-01-26 15:46:19.854527925 +0000 UTC m=+0.844383375 container remove 7c23757a3cf6db341c94252f9dcecbca209b4f2c0f4a0fe8036cff42500a5d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_aryabhata, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:46:19 compute-0 systemd[1]: libpod-conmon-7c23757a3cf6db341c94252f9dcecbca209b4f2c0f4a0fe8036cff42500a5d1f.scope: Deactivated successfully.
Jan 26 15:46:20 compute-0 podman[245649]: 2026-01-26 15:46:20.01824036 +0000 UTC m=+0.045177524 container create 5a0a7ec37e761fedea1e07bfced59ba81e97298035a01fad35b53579161054b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:46:20 compute-0 systemd[1]: Started libpod-conmon-5a0a7ec37e761fedea1e07bfced59ba81e97298035a01fad35b53579161054b0.scope.
Jan 26 15:46:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41d6f63b1ce02007ddbb5368628f0ccea06f0806578753df7f38500ee682369/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41d6f63b1ce02007ddbb5368628f0ccea06f0806578753df7f38500ee682369/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41d6f63b1ce02007ddbb5368628f0ccea06f0806578753df7f38500ee682369/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a41d6f63b1ce02007ddbb5368628f0ccea06f0806578753df7f38500ee682369/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:20 compute-0 podman[245649]: 2026-01-26 15:46:19.995626813 +0000 UTC m=+0.022563997 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:46:20 compute-0 podman[245649]: 2026-01-26 15:46:20.098891349 +0000 UTC m=+0.125828543 container init 5a0a7ec37e761fedea1e07bfced59ba81e97298035a01fad35b53579161054b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 26 15:46:20 compute-0 podman[245649]: 2026-01-26 15:46:20.105193944 +0000 UTC m=+0.132131108 container start 5a0a7ec37e761fedea1e07bfced59ba81e97298035a01fad35b53579161054b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mccarthy, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:46:20 compute-0 podman[245649]: 2026-01-26 15:46:20.11070914 +0000 UTC m=+0.137646334 container attach 5a0a7ec37e761fedea1e07bfced59ba81e97298035a01fad35b53579161054b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mccarthy, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]: {
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:     "0": [
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:         {
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "devices": [
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "/dev/loop3"
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             ],
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_name": "ceph_lv0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_size": "21470642176",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "name": "ceph_lv0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "tags": {
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.cluster_name": "ceph",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.crush_device_class": "",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.encrypted": "0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.objectstore": "bluestore",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.osd_id": "0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.type": "block",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.vdo": "0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.with_tpm": "0"
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             },
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "type": "block",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "vg_name": "ceph_vg0"
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:         }
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:     ],
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:     "1": [
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:         {
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "devices": [
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "/dev/loop4"
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             ],
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_name": "ceph_lv1",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_size": "21470642176",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "name": "ceph_lv1",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "tags": {
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.cluster_name": "ceph",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.crush_device_class": "",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.encrypted": "0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.objectstore": "bluestore",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.osd_id": "1",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.type": "block",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.vdo": "0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.with_tpm": "0"
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             },
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "type": "block",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "vg_name": "ceph_vg1"
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:         }
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:     ],
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:     "2": [
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:         {
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "devices": [
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "/dev/loop5"
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             ],
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_name": "ceph_lv2",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_size": "21470642176",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "name": "ceph_lv2",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "tags": {
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.cluster_name": "ceph",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.crush_device_class": "",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.encrypted": "0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.objectstore": "bluestore",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.osd_id": "2",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.type": "block",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.vdo": "0",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:                 "ceph.with_tpm": "0"
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             },
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "type": "block",
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:             "vg_name": "ceph_vg2"
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:         }
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]:     ]
Jan 26 15:46:20 compute-0 infallible_mccarthy[245666]: }
Jan 26 15:46:20 compute-0 systemd[1]: libpod-5a0a7ec37e761fedea1e07bfced59ba81e97298035a01fad35b53579161054b0.scope: Deactivated successfully.
Jan 26 15:46:20 compute-0 podman[245675]: 2026-01-26 15:46:20.455509429 +0000 UTC m=+0.028655767 container died 5a0a7ec37e761fedea1e07bfced59ba81e97298035a01fad35b53579161054b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 15:46:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-a41d6f63b1ce02007ddbb5368628f0ccea06f0806578753df7f38500ee682369-merged.mount: Deactivated successfully.
Jan 26 15:46:20 compute-0 podman[245675]: 2026-01-26 15:46:20.502089598 +0000 UTC m=+0.075235906 container remove 5a0a7ec37e761fedea1e07bfced59ba81e97298035a01fad35b53579161054b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 15:46:20 compute-0 systemd[1]: libpod-conmon-5a0a7ec37e761fedea1e07bfced59ba81e97298035a01fad35b53579161054b0.scope: Deactivated successfully.
Jan 26 15:46:20 compute-0 nova_compute[239965]: 2026-01-26 15:46:20.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:46:20 compute-0 sudo[245573]: pam_unix(sudo:session): session closed for user root
Jan 26 15:46:20 compute-0 sudo[245690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:46:20 compute-0 sudo[245690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:46:20 compute-0 sudo[245690]: pam_unix(sudo:session): session closed for user root
Jan 26 15:46:20 compute-0 sudo[245715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:46:20 compute-0 sudo[245715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:46:20 compute-0 ceph-mon[75140]: pgmap v910: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:20 compute-0 podman[245752]: 2026-01-26 15:46:20.918955662 +0000 UTC m=+0.044679962 container create b289f840487a30b3a5193f2f6e83ec191bd1f87ffe42c16d2423fcafcf0c8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ardinghelli, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:46:20 compute-0 systemd[1]: Started libpod-conmon-b289f840487a30b3a5193f2f6e83ec191bd1f87ffe42c16d2423fcafcf0c8f79.scope.
Jan 26 15:46:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:46:20 compute-0 podman[245752]: 2026-01-26 15:46:20.976714457 +0000 UTC m=+0.102438837 container init b289f840487a30b3a5193f2f6e83ec191bd1f87ffe42c16d2423fcafcf0c8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 15:46:20 compute-0 podman[245752]: 2026-01-26 15:46:20.983656677 +0000 UTC m=+0.109380977 container start b289f840487a30b3a5193f2f6e83ec191bd1f87ffe42c16d2423fcafcf0c8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:46:20 compute-0 boring_ardinghelli[245769]: 167 167
Jan 26 15:46:20 compute-0 podman[245752]: 2026-01-26 15:46:20.987513372 +0000 UTC m=+0.113237712 container attach b289f840487a30b3a5193f2f6e83ec191bd1f87ffe42c16d2423fcafcf0c8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 15:46:20 compute-0 systemd[1]: libpod-b289f840487a30b3a5193f2f6e83ec191bd1f87ffe42c16d2423fcafcf0c8f79.scope: Deactivated successfully.
Jan 26 15:46:20 compute-0 podman[245752]: 2026-01-26 15:46:20.988789434 +0000 UTC m=+0.114513754 container died b289f840487a30b3a5193f2f6e83ec191bd1f87ffe42c16d2423fcafcf0c8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 26 15:46:20 compute-0 podman[245752]: 2026-01-26 15:46:20.894761736 +0000 UTC m=+0.020486066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:46:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-80061edd4b89ae90858bee6193c85c7cf0869ef78fefb72a7f26b7f68049bc47-merged.mount: Deactivated successfully.
Jan 26 15:46:21 compute-0 podman[245752]: 2026-01-26 15:46:21.032200834 +0000 UTC m=+0.157925144 container remove b289f840487a30b3a5193f2f6e83ec191bd1f87ffe42c16d2423fcafcf0c8f79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_ardinghelli, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 15:46:21 compute-0 systemd[1]: libpod-conmon-b289f840487a30b3a5193f2f6e83ec191bd1f87ffe42c16d2423fcafcf0c8f79.scope: Deactivated successfully.
Jan 26 15:46:21 compute-0 podman[245792]: 2026-01-26 15:46:21.209078624 +0000 UTC m=+0.048032935 container create 9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 15:46:21 compute-0 systemd[1]: Started libpod-conmon-9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4.scope.
Jan 26 15:46:21 compute-0 podman[245792]: 2026-01-26 15:46:21.18378399 +0000 UTC m=+0.022738321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:46:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52345ae3c8ec9e3d7f384ab014ed1dcfa4e5181b97e58246aa08f1ab982df4f9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52345ae3c8ec9e3d7f384ab014ed1dcfa4e5181b97e58246aa08f1ab982df4f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52345ae3c8ec9e3d7f384ab014ed1dcfa4e5181b97e58246aa08f1ab982df4f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52345ae3c8ec9e3d7f384ab014ed1dcfa4e5181b97e58246aa08f1ab982df4f9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:46:21 compute-0 podman[245792]: 2026-01-26 15:46:21.319495535 +0000 UTC m=+0.158449866 container init 9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:46:21 compute-0 podman[245792]: 2026-01-26 15:46:21.332211369 +0000 UTC m=+0.171165670 container start 9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 15:46:21 compute-0 podman[245792]: 2026-01-26 15:46:21.340701178 +0000 UTC m=+0.179655499 container attach 9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 15:46:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.799553) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442381799606, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 373, "num_deletes": 251, "total_data_size": 217142, "memory_usage": 224048, "flush_reason": "Manual Compaction"}
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442381803639, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 215187, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18509, "largest_seqno": 18881, "table_properties": {"data_size": 212931, "index_size": 420, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5723, "raw_average_key_size": 18, "raw_value_size": 208379, "raw_average_value_size": 680, "num_data_blocks": 19, "num_entries": 306, "num_filter_entries": 306, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769442368, "oldest_key_time": 1769442368, "file_creation_time": 1769442381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4118 microseconds, and 1253 cpu microseconds.
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.803678) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 215187 bytes OK
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.803697) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.805809) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.805865) EVENT_LOG_v1 {"time_micros": 1769442381805855, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.805892) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 214688, prev total WAL file size 214688, number of live WAL files 2.
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.807079) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(210KB)], [41(7976KB)]
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442381807118, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 8383116, "oldest_snapshot_seqno": -1}
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4296 keys, 6606096 bytes, temperature: kUnknown
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442381852618, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6606096, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6577633, "index_size": 16641, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10757, "raw_key_size": 105247, "raw_average_key_size": 24, "raw_value_size": 6500096, "raw_average_value_size": 1513, "num_data_blocks": 697, "num_entries": 4296, "num_filter_entries": 4296, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769442381, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.852830) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6606096 bytes
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.854417) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 184.0 rd, 145.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 7.8 +0.0 blob) out(6.3 +0.0 blob), read-write-amplify(69.7) write-amplify(30.7) OK, records in: 4807, records dropped: 511 output_compression: NoCompression
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.854436) EVENT_LOG_v1 {"time_micros": 1769442381854427, "job": 20, "event": "compaction_finished", "compaction_time_micros": 45564, "compaction_time_cpu_micros": 15472, "output_level": 6, "num_output_files": 1, "total_output_size": 6606096, "num_input_records": 4807, "num_output_records": 4296, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442381854558, "job": 20, "event": "table_file_deletion", "file_number": 43}
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442381855864, "job": 20, "event": "table_file_deletion", "file_number": 41}
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.807032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.855900) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.855904) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.855907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.855909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:46:21.855911) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:46:21 compute-0 lvm[245886]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:46:21 compute-0 lvm[245887]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:46:21 compute-0 lvm[245886]: VG ceph_vg1 finished
Jan 26 15:46:21 compute-0 lvm[245887]: VG ceph_vg0 finished
Jan 26 15:46:22 compute-0 lvm[245889]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:46:22 compute-0 lvm[245889]: VG ceph_vg2 finished
Jan 26 15:46:22 compute-0 intelligent_chandrasekhar[245808]: {}
Jan 26 15:46:22 compute-0 systemd[1]: libpod-9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4.scope: Deactivated successfully.
Jan 26 15:46:22 compute-0 podman[245792]: 2026-01-26 15:46:22.118483091 +0000 UTC m=+0.957437402 container died 9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:46:22 compute-0 systemd[1]: libpod-9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4.scope: Consumed 1.258s CPU time.
Jan 26 15:46:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-52345ae3c8ec9e3d7f384ab014ed1dcfa4e5181b97e58246aa08f1ab982df4f9-merged.mount: Deactivated successfully.
Jan 26 15:46:22 compute-0 podman[245792]: 2026-01-26 15:46:22.180864929 +0000 UTC m=+1.019819240 container remove 9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 26 15:46:22 compute-0 systemd[1]: libpod-conmon-9847154e93a0605bc0f0e19ce4d04113236b903eca94c87a219f136e6880e6c4.scope: Deactivated successfully.
Jan 26 15:46:22 compute-0 sudo[245715]: pam_unix(sudo:session): session closed for user root
Jan 26 15:46:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:46:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:46:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:46:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:46:22 compute-0 sudo[245904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:46:22 compute-0 sudo[245904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:46:22 compute-0 sudo[245904]: pam_unix(sudo:session): session closed for user root
Jan 26 15:46:22 compute-0 nova_compute[239965]: 2026-01-26 15:46:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:46:22 compute-0 nova_compute[239965]: 2026-01-26 15:46:22.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:46:22 compute-0 nova_compute[239965]: 2026-01-26 15:46:22.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:46:22 compute-0 nova_compute[239965]: 2026-01-26 15:46:22.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:46:22 compute-0 nova_compute[239965]: 2026-01-26 15:46:22.545 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:46:22 compute-0 nova_compute[239965]: 2026-01-26 15:46:22.545 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:46:22 compute-0 nova_compute[239965]: 2026-01-26 15:46:22.546 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:46:22 compute-0 nova_compute[239965]: 2026-01-26 15:46:22.546 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:46:22 compute-0 ceph-mon[75140]: pgmap v911: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:46:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:46:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:46:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236680919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.128 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.296 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.298 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.298 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.298 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.390 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.391 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.415 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:46:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3236680919' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:46:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:46:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741329873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.944 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:46:23 compute-0 nova_compute[239965]: 2026-01-26 15:46:23.950 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:46:24 compute-0 nova_compute[239965]: 2026-01-26 15:46:24.074 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:46:24 compute-0 nova_compute[239965]: 2026-01-26 15:46:24.076 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:46:24 compute-0 nova_compute[239965]: 2026-01-26 15:46:24.076 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:46:24 compute-0 ceph-mon[75140]: pgmap v912: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2741329873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:46:25 compute-0 nova_compute[239965]: 2026-01-26 15:46:25.078 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:46:25 compute-0 nova_compute[239965]: 2026-01-26 15:46:25.079 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:46:25 compute-0 nova_compute[239965]: 2026-01-26 15:46:25.079 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:46:25 compute-0 nova_compute[239965]: 2026-01-26 15:46:25.079 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:46:25 compute-0 nova_compute[239965]: 2026-01-26 15:46:25.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:46:25 compute-0 nova_compute[239965]: 2026-01-26 15:46:25.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:46:25 compute-0 nova_compute[239965]: 2026-01-26 15:46:25.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:46:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:25 compute-0 nova_compute[239965]: 2026-01-26 15:46:25.527 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:46:26 compute-0 ceph-mon[75140]: pgmap v913: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:46:28
Jan 26 15:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', '.rgw.root', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 26 15:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:46:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:28 compute-0 ceph-mon[75140]: pgmap v914: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:46:30 compute-0 ceph-mon[75140]: pgmap v915: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:32 compute-0 ceph-mon[75140]: pgmap v916: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:32 compute-0 sshd-session[245973]: Connection closed by 3.137.73.221 port 59470
Jan 26 15:46:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:34 compute-0 ceph-mon[75140]: pgmap v917: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:35 compute-0 sshd-session[245974]: Connection closed by authenticating user root 178.128.250.55 port 48136 [preauth]
Jan 26 15:46:36 compute-0 ceph-mon[75140]: pgmap v918: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:38 compute-0 ceph-mon[75140]: pgmap v919: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:40 compute-0 ceph-mon[75140]: pgmap v920: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:42 compute-0 ceph-mon[75140]: pgmap v921: 305 pgs: 305 active+clean; 462 KiB data, 137 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:46:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Jan 26 15:46:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Jan 26 15:46:43 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Jan 26 15:46:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 819 KiB/s wr, 2 op/s
Jan 26 15:46:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Jan 26 15:46:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Jan 26 15:46:44 compute-0 ceph-mon[75140]: osdmap e145: 3 total, 3 up, 3 in
Jan 26 15:46:44 compute-0 ceph-mon[75140]: pgmap v923: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 819 KiB/s wr, 2 op/s
Jan 26 15:46:44 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Jan 26 15:46:44 compute-0 podman[245977]: 2026-01-26 15:46:44.385177477 +0000 UTC m=+0.064969862 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:46:44 compute-0 podman[245978]: 2026-01-26 15:46:44.414810888 +0000 UTC m=+0.094916230 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:46:45 compute-0 ceph-mon[75140]: osdmap e146: 3 total, 3 up, 3 in
Jan 26 15:46:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1.0 MiB/s wr, 2 op/s
Jan 26 15:46:46 compute-0 ceph-mon[75140]: pgmap v925: 305 pgs: 305 active+clean; 8.5 MiB data, 145 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1.0 MiB/s wr, 2 op/s
Jan 26 15:46:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 20 op/s
Jan 26 15:46:48 compute-0 ceph-mon[75140]: pgmap v926: 305 pgs: 305 active+clean; 16 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 20 op/s
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0002612638048123683 of space, bias 1.0, pg target 0.07837914144371048 quantized to 32 (current 32)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 9.917573257528758e-07 of space, bias 4.0, pg target 0.0011901087909034509 quantized to 16 (current 16)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:46:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:46:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:46:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341930891' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:46:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:46:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2341930891' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:46:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 26 15:46:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2341930891' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:46:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2341930891' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:46:50 compute-0 ceph-mon[75140]: pgmap v927: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 26 15:46:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.0 MiB/s wr, 43 op/s
Jan 26 15:46:52 compute-0 ceph-mon[75140]: pgmap v928: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 4.0 MiB/s wr, 43 op/s
Jan 26 15:46:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 35 op/s
Jan 26 15:46:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:54 compute-0 ceph-mon[75140]: pgmap v929: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.3 MiB/s wr, 35 op/s
Jan 26 15:46:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.9 MiB/s wr, 32 op/s
Jan 26 15:46:56 compute-0 ceph-mon[75140]: pgmap v930: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.9 MiB/s wr, 32 op/s
Jan 26 15:46:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.7 MiB/s wr, 29 op/s
Jan 26 15:46:58 compute-0 ceph-mon[75140]: pgmap v931: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.7 MiB/s wr, 29 op/s
Jan 26 15:46:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:46:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:46:59.207 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:46:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:46:59.208 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:46:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:46:59.208 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:46:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.1 MiB/s wr, 18 op/s
Jan 26 15:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:47:00 compute-0 ceph-mon[75140]: pgmap v932: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.1 MiB/s wr, 18 op/s
Jan 26 15:47:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:02 compute-0 ceph-mon[75140]: pgmap v933: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:04 compute-0 ceph-mon[75140]: pgmap v934: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:04.850 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:47:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:04.851 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:47:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:06 compute-0 nova_compute[239965]: 2026-01-26 15:47:06.442 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Acquiring lock "b8687aa6-9965-456d-b057-a3e202d2220c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:06 compute-0 nova_compute[239965]: 2026-01-26 15:47:06.442 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "b8687aa6-9965-456d-b057-a3e202d2220c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:06 compute-0 nova_compute[239965]: 2026-01-26 15:47:06.473 239969 DEBUG nova.compute.manager [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:47:06 compute-0 ceph-mon[75140]: pgmap v935: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:06 compute-0 nova_compute[239965]: 2026-01-26 15:47:06.813 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:06 compute-0 nova_compute[239965]: 2026-01-26 15:47:06.814 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:06 compute-0 nova_compute[239965]: 2026-01-26 15:47:06.823 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:47:06 compute-0 nova_compute[239965]: 2026-01-26 15:47:06.824 239969 INFO nova.compute.claims [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:47:06 compute-0 nova_compute[239965]: 2026-01-26 15:47:06.941 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:47:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576249988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.503 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.508 239969 DEBUG nova.compute.provider_tree [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.525 239969 DEBUG nova.scheduler.client.report [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:47:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.558 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.559 239969 DEBUG nova.compute.manager [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.604 239969 DEBUG nova.compute.manager [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.623 239969 INFO nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.647 239969 DEBUG nova.compute.manager [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.757 239969 DEBUG nova.compute.manager [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.758 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.759 239969 INFO nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Creating image(s)
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.778 239969 DEBUG nova.storage.rbd_utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] rbd image b8687aa6-9965-456d-b057-a3e202d2220c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3576249988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.803 239969 DEBUG nova.storage.rbd_utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] rbd image b8687aa6-9965-456d-b057-a3e202d2220c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.828 239969 DEBUG nova.storage.rbd_utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] rbd image b8687aa6-9965-456d-b057-a3e202d2220c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.832 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:07 compute-0 nova_compute[239965]: 2026-01-26 15:47:07.833 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e146 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:08 compute-0 ceph-mon[75140]: pgmap v936: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:09 compute-0 nova_compute[239965]: 2026-01-26 15:47:09.425 239969 DEBUG nova.virt.libvirt.imagebackend [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Image locations are: [{'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/e0817942-948b-4945-aa42-c1cb3a1c65ba/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/e0817942-948b-4945-aa42-c1cb3a1c65ba/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 15:47:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:10 compute-0 ceph-mon[75140]: pgmap v937: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:47:10 compute-0 nova_compute[239965]: 2026-01-26 15:47:10.877 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:10 compute-0 nova_compute[239965]: 2026-01-26 15:47:10.941 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0.part --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:10 compute-0 nova_compute[239965]: 2026-01-26 15:47:10.943 239969 DEBUG nova.virt.images [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] e0817942-948b-4945-aa42-c1cb3a1c65ba was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 26 15:47:10 compute-0 nova_compute[239965]: 2026-01-26 15:47:10.944 239969 DEBUG nova.privsep.utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 15:47:10 compute-0 nova_compute[239965]: 2026-01-26 15:47:10.945 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0.part /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:11 compute-0 nova_compute[239965]: 2026-01-26 15:47:11.178 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0.part /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0.converted" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:11 compute-0 nova_compute[239965]: 2026-01-26 15:47:11.185 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:11 compute-0 nova_compute[239965]: 2026-01-26 15:47:11.270 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0.converted --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:11 compute-0 nova_compute[239965]: 2026-01-26 15:47:11.271 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:11 compute-0 nova_compute[239965]: 2026-01-26 15:47:11.296 239969 DEBUG nova.storage.rbd_utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] rbd image b8687aa6-9965-456d-b057-a3e202d2220c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:11 compute-0 nova_compute[239965]: 2026-01-26 15:47:11.300 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b8687aa6-9965-456d-b057-a3e202d2220c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 15:47:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Jan 26 15:47:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Jan 26 15:47:11 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Jan 26 15:47:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Jan 26 15:47:12 compute-0 ceph-mon[75140]: pgmap v938: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Jan 26 15:47:12 compute-0 ceph-mon[75140]: osdmap e147: 3 total, 3 up, 3 in
Jan 26 15:47:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Jan 26 15:47:12 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.195 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b8687aa6-9965-456d-b057-a3e202d2220c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.895s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.244 239969 DEBUG nova.storage.rbd_utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] resizing rbd image b8687aa6-9965-456d-b057-a3e202d2220c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.343 239969 DEBUG nova.objects.instance [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lazy-loading 'migration_context' on Instance uuid b8687aa6-9965-456d-b057-a3e202d2220c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.386 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.386 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Ensure instance console log exists: /var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.387 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.387 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.387 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.390 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.395 239969 WARNING nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.400 239969 DEBUG nova.virt.libvirt.host [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.400 239969 DEBUG nova.virt.libvirt.host [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.404 239969 DEBUG nova.virt.libvirt.host [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.404 239969 DEBUG nova.virt.libvirt.host [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.405 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.405 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.405 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.406 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.406 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.406 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.406 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.407 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.407 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.407 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.408 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.408 239969 DEBUG nova.virt.hardware [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.412 239969 DEBUG nova.privsep.utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 15:47:13 compute-0 nova_compute[239965]: 2026-01-26 15:47:13.412 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Jan 26 15:47:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:13 compute-0 ceph-mon[75140]: osdmap e148: 3 total, 3 up, 3 in
Jan 26 15:47:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:13.852 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:47:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:47:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3132825324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.005 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.046 239969 DEBUG nova.storage.rbd_utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] rbd image b8687aa6-9965-456d-b057-a3e202d2220c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.051 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:47:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859000296' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.646 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.649 239969 DEBUG nova.objects.instance [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lazy-loading 'pci_devices' on Instance uuid b8687aa6-9965-456d-b057-a3e202d2220c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.689 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <uuid>b8687aa6-9965-456d-b057-a3e202d2220c</uuid>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <name>instance-00000001</name>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <nova:name>tempest-AutoAllocateNetworkTest-server-705142921</nova:name>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:47:13</nova:creationTime>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <nova:user uuid="6cdac65d15c94507bf44d7fbff5a3ac7">tempest-AutoAllocateNetworkTest-1383193136-project-member</nova:user>
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <nova:project uuid="96f2f68ede0e4d7eba666d9d2f27789b">tempest-AutoAllocateNetworkTest-1383193136</nova:project>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <system>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <entry name="serial">b8687aa6-9965-456d-b057-a3e202d2220c</entry>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <entry name="uuid">b8687aa6-9965-456d-b057-a3e202d2220c</entry>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     </system>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <os>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   </os>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <features>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   </features>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b8687aa6-9965-456d-b057-a3e202d2220c_disk">
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       </source>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b8687aa6-9965-456d-b057-a3e202d2220c_disk.config">
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       </source>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:47:14 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c/console.log" append="off"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <video>
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     </video>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:47:14 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:47:14 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:47:14 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:47:14 compute-0 nova_compute[239965]: </domain>
Jan 26 15:47:14 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.735 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.735 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.736 239969 INFO nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Using config drive
Jan 26 15:47:14 compute-0 nova_compute[239965]: 2026-01-26 15:47:14.755 239969 DEBUG nova.storage.rbd_utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] rbd image b8687aa6-9965-456d-b057-a3e202d2220c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:14 compute-0 ceph-mon[75140]: pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Jan 26 15:47:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3132825324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1859000296' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:15 compute-0 podman[246302]: 2026-01-26 15:47:15.359885308 +0000 UTC m=+0.043030412 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:47:15 compute-0 podman[246303]: 2026-01-26 15:47:15.390840291 +0000 UTC m=+0.072190441 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 15:47:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Jan 26 15:47:15 compute-0 nova_compute[239965]: 2026-01-26 15:47:15.552 239969 INFO nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Creating config drive at /var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c/disk.config
Jan 26 15:47:15 compute-0 nova_compute[239965]: 2026-01-26 15:47:15.557 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxiep0stx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:15 compute-0 nova_compute[239965]: 2026-01-26 15:47:15.689 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxiep0stx" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:15 compute-0 nova_compute[239965]: 2026-01-26 15:47:15.710 239969 DEBUG nova.storage.rbd_utils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] rbd image b8687aa6-9965-456d-b057-a3e202d2220c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:15 compute-0 nova_compute[239965]: 2026-01-26 15:47:15.713 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c/disk.config b8687aa6-9965-456d-b057-a3e202d2220c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:16 compute-0 ceph-mon[75140]: pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Jan 26 15:47:16 compute-0 nova_compute[239965]: 2026-01-26 15:47:16.114 239969 DEBUG oslo_concurrency.processutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c/disk.config b8687aa6-9965-456d-b057-a3e202d2220c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:16 compute-0 nova_compute[239965]: 2026-01-26 15:47:16.116 239969 INFO nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Deleting local config drive /var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c/disk.config because it was imported into RBD.
Jan 26 15:47:16 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 26 15:47:16 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 26 15:47:16 compute-0 systemd-machined[208061]: New machine qemu-1-instance-00000001.
Jan 26 15:47:16 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.094 239969 DEBUG nova.compute.manager [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.096 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.097 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442437.0946777, b8687aa6-9965-456d-b057-a3e202d2220c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.097 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] VM Resumed (Lifecycle Event)
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.103 239969 INFO nova.virt.libvirt.driver [-] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Instance spawned successfully.
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.103 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.131 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.134 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.158 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.159 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442437.0947986, b8687aa6-9965-456d-b057-a3e202d2220c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.159 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] VM Started (Lifecycle Event)
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.188 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.191 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.197 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.198 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.198 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.199 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.199 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.199 239969 DEBUG nova.virt.libvirt.driver [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.206 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:47:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 65 MiB data, 192 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.557 239969 INFO nova.compute.manager [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Took 9.80 seconds to spawn the instance on the hypervisor.
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.558 239969 DEBUG nova.compute.manager [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.735 239969 INFO nova.compute.manager [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Took 10.96 seconds to build instance.
Jan 26 15:47:17 compute-0 nova_compute[239965]: 2026-01-26 15:47:17.786 239969 DEBUG oslo_concurrency.lockutils [None req-4843f97c-15b4-4992-a3a7-ae6595eae242 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "b8687aa6-9965-456d-b057-a3e202d2220c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:18 compute-0 nova_compute[239965]: 2026-01-26 15:47:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:18 compute-0 ceph-mon[75140]: pgmap v943: 305 pgs: 305 active+clean; 65 MiB data, 192 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 26 15:47:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 83 op/s
Jan 26 15:47:19 compute-0 nova_compute[239965]: 2026-01-26 15:47:19.904 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Acquiring lock "b8687aa6-9965-456d-b057-a3e202d2220c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:19 compute-0 nova_compute[239965]: 2026-01-26 15:47:19.905 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "b8687aa6-9965-456d-b057-a3e202d2220c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:19 compute-0 nova_compute[239965]: 2026-01-26 15:47:19.905 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Acquiring lock "b8687aa6-9965-456d-b057-a3e202d2220c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:19 compute-0 nova_compute[239965]: 2026-01-26 15:47:19.905 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "b8687aa6-9965-456d-b057-a3e202d2220c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:19 compute-0 nova_compute[239965]: 2026-01-26 15:47:19.905 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "b8687aa6-9965-456d-b057-a3e202d2220c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:19 compute-0 nova_compute[239965]: 2026-01-26 15:47:19.906 239969 INFO nova.compute.manager [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Terminating instance
Jan 26 15:47:19 compute-0 nova_compute[239965]: 2026-01-26 15:47:19.907 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Acquiring lock "refresh_cache-b8687aa6-9965-456d-b057-a3e202d2220c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:47:19 compute-0 nova_compute[239965]: 2026-01-26 15:47:19.907 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Acquired lock "refresh_cache-b8687aa6-9965-456d-b057-a3e202d2220c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:47:19 compute-0 nova_compute[239965]: 2026-01-26 15:47:19.908 239969 DEBUG nova.network.neutron [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:47:20 compute-0 ceph-mon[75140]: pgmap v944: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.7 MiB/s wr, 83 op/s
Jan 26 15:47:21 compute-0 nova_compute[239965]: 2026-01-26 15:47:21.078 239969 DEBUG nova.network.neutron [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:47:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 82 op/s
Jan 26 15:47:21 compute-0 nova_compute[239965]: 2026-01-26 15:47:21.656 239969 DEBUG nova.network.neutron [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:47:21 compute-0 nova_compute[239965]: 2026-01-26 15:47:21.674 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Releasing lock "refresh_cache-b8687aa6-9965-456d-b057-a3e202d2220c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:47:21 compute-0 nova_compute[239965]: 2026-01-26 15:47:21.675 239969 DEBUG nova.compute.manager [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:47:21 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 26 15:47:21 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 5.471s CPU time.
Jan 26 15:47:21 compute-0 systemd-machined[208061]: Machine qemu-1-instance-00000001 terminated.
Jan 26 15:47:21 compute-0 nova_compute[239965]: 2026-01-26 15:47:21.898 239969 INFO nova.virt.libvirt.driver [-] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Instance destroyed successfully.
Jan 26 15:47:21 compute-0 nova_compute[239965]: 2026-01-26 15:47:21.899 239969 DEBUG nova.objects.instance [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lazy-loading 'resources' on Instance uuid b8687aa6-9965-456d-b057-a3e202d2220c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.201 239969 INFO nova.virt.libvirt.driver [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Deleting instance files /var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c_del
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.201 239969 INFO nova.virt.libvirt.driver [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Deletion of /var/lib/nova/instances/b8687aa6-9965-456d-b057-a3e202d2220c_del complete
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.292 239969 DEBUG nova.virt.libvirt.host [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.292 239969 INFO nova.virt.libvirt.host [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] UEFI support detected
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.294 239969 INFO nova.compute.manager [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.294 239969 DEBUG oslo.service.loopingcall [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.295 239969 DEBUG nova.compute.manager [-] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.295 239969 DEBUG nova.network.neutron [-] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:47:22 compute-0 sudo[246484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:47:22 compute-0 sudo[246484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:47:22 compute-0 sudo[246484]: pam_unix(sudo:session): session closed for user root
Jan 26 15:47:22 compute-0 sudo[246509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:47:22 compute-0 sudo[246509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.599 239969 DEBUG nova.network.neutron [-] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.628 239969 DEBUG nova.network.neutron [-] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.648 239969 INFO nova.compute.manager [-] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Took 0.35 seconds to deallocate network for instance.
Jan 26 15:47:22 compute-0 ceph-mon[75140]: pgmap v945: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.2 MiB/s wr, 82 op/s
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.699 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.700 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:22 compute-0 nova_compute[239965]: 2026-01-26 15:47:22.756 239969 DEBUG oslo_concurrency.processutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:22 compute-0 sudo[246509]: pam_unix(sudo:session): session closed for user root
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:47:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:47:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:47:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948205152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.285 239969 DEBUG oslo_concurrency.processutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.293 239969 DEBUG nova.compute.provider_tree [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 15:47:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:47:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:47:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:47:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.332 239969 ERROR nova.scheduler.client.report [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] [req-b9ba5553-3413-434d-9c0b-9299e3745ce9] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 54ba2329-fe1a-48ee-a2ff-6c84c26128ed.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-b9ba5553-3413-434d-9c0b-9299e3745ce9"}]}
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.353 239969 DEBUG nova.scheduler.client.report [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.376 239969 DEBUG nova.scheduler.client.report [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.377 239969 DEBUG nova.compute.provider_tree [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 15:47:23 compute-0 sudo[246588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:47:23 compute-0 sudo[246588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:47:23 compute-0 sudo[246588]: pam_unix(sudo:session): session closed for user root
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.397 239969 DEBUG nova.scheduler.client.report [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.435 239969 DEBUG nova.scheduler.client.report [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 15:47:23 compute-0 sudo[246613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:47:23 compute-0 sudo[246613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.474 239969 DEBUG oslo_concurrency.processutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:23 compute-0 nova_compute[239965]: 2026-01-26 15:47:23.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 51 MiB data, 192 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 142 op/s
Jan 26 15:47:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:47:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:47:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/948205152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:47:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:47:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:47:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:47:23 compute-0 podman[246669]: 2026-01-26 15:47:23.754342212 +0000 UTC m=+0.044037769 container create 30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e148 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Jan 26 15:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Jan 26 15:47:23 compute-0 systemd[1]: Started libpod-conmon-30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6.scope.
Jan 26 15:47:23 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Jan 26 15:47:23 compute-0 podman[246669]: 2026-01-26 15:47:23.734958477 +0000 UTC m=+0.024654054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:47:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:47:23 compute-0 podman[246669]: 2026-01-26 15:47:23.851518242 +0000 UTC m=+0.141213829 container init 30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:47:23 compute-0 podman[246669]: 2026-01-26 15:47:23.859144828 +0000 UTC m=+0.148840375 container start 30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_agnesi, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 15:47:23 compute-0 pensive_agnesi[246686]: 167 167
Jan 26 15:47:23 compute-0 systemd[1]: libpod-30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6.scope: Deactivated successfully.
Jan 26 15:47:23 compute-0 podman[246669]: 2026-01-26 15:47:23.864279534 +0000 UTC m=+0.153975111 container attach 30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_agnesi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 15:47:23 compute-0 conmon[246686]: conmon 30ccadc256793abd0e3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6.scope/container/memory.events
Jan 26 15:47:23 compute-0 podman[246691]: 2026-01-26 15:47:23.912904595 +0000 UTC m=+0.032146879 container died 30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_agnesi, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 15:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7af5a9e840113aebe6dbecb665e5003108fae2a0b8ae110131ccd36a5527201-merged.mount: Deactivated successfully.
Jan 26 15:47:24 compute-0 podman[246691]: 2026-01-26 15:47:24.003126605 +0000 UTC m=+0.122368869 container remove 30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_agnesi, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 15:47:24 compute-0 systemd[1]: libpod-conmon-30ccadc256793abd0e3de62d5ec5a83ba91dcece1002bf90ab6e9c4762e42ac6.scope: Deactivated successfully.
Jan 26 15:47:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:47:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632991312' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.058 239969 DEBUG oslo_concurrency.processutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.068 239969 DEBUG nova.compute.provider_tree [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.114 239969 DEBUG nova.scheduler.client.report [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Updated inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.114 239969 DEBUG nova.compute.provider_tree [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Updating resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.115 239969 DEBUG nova.compute.provider_tree [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.154 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.454s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.179 239969 INFO nova.scheduler.client.report [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Deleted allocations for instance b8687aa6-9965-456d-b057-a3e202d2220c
Jan 26 15:47:24 compute-0 podman[246715]: 2026-01-26 15:47:24.165603203 +0000 UTC m=+0.025376692 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.265 239969 DEBUG oslo_concurrency.lockutils [None req-d8c5dd21-a230-4e63-9bae-844089123382 6cdac65d15c94507bf44d7fbff5a3ac7 96f2f68ede0e4d7eba666d9d2f27789b - - default default] Lock "b8687aa6-9965-456d-b057-a3e202d2220c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:24 compute-0 podman[246715]: 2026-01-26 15:47:24.416249132 +0000 UTC m=+0.276022631 container create 23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_banach, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:47:24 compute-0 systemd[1]: Started libpod-conmon-23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f.scope.
Jan 26 15:47:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5452de10c73f863ee383aa324ea3854c73d9b4cb9291ac806d34ec624dce76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5452de10c73f863ee383aa324ea3854c73d9b4cb9291ac806d34ec624dce76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5452de10c73f863ee383aa324ea3854c73d9b4cb9291ac806d34ec624dce76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5452de10c73f863ee383aa324ea3854c73d9b4cb9291ac806d34ec624dce76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5452de10c73f863ee383aa324ea3854c73d9b4cb9291ac806d34ec624dce76/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.513 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.513 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:24 compute-0 podman[246715]: 2026-01-26 15:47:24.529868484 +0000 UTC m=+0.389642063 container init 23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 15:47:24 compute-0 podman[246715]: 2026-01-26 15:47:24.537429809 +0000 UTC m=+0.397203328 container start 23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.539 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:47:24 compute-0 nova_compute[239965]: 2026-01-26 15:47:24.540 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:24 compute-0 podman[246715]: 2026-01-26 15:47:24.543534289 +0000 UTC m=+0.403307768 container attach 23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_banach, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 15:47:24 compute-0 ceph-mon[75140]: pgmap v946: 305 pgs: 305 active+clean; 51 MiB data, 192 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.0 MiB/s wr, 142 op/s
Jan 26 15:47:24 compute-0 ceph-mon[75140]: osdmap e149: 3 total, 3 up, 3 in
Jan 26 15:47:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1632991312' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:25 compute-0 eloquent_banach[246731]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:47:25 compute-0 eloquent_banach[246731]: --> All data devices are unavailable
Jan 26 15:47:25 compute-0 systemd[1]: libpod-23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f.scope: Deactivated successfully.
Jan 26 15:47:25 compute-0 conmon[246731]: conmon 23cba8e071b1abd20667 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f.scope/container/memory.events
Jan 26 15:47:25 compute-0 podman[246715]: 2026-01-26 15:47:25.033533069 +0000 UTC m=+0.893306568 container died 23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_banach, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e5452de10c73f863ee383aa324ea3854c73d9b4cb9291ac806d34ec624dce76-merged.mount: Deactivated successfully.
Jan 26 15:47:25 compute-0 podman[246715]: 2026-01-26 15:47:25.080420447 +0000 UTC m=+0.940193926 container remove 23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_banach, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:47:25 compute-0 systemd[1]: libpod-conmon-23cba8e071b1abd20667192bbd2f2fb643208914911b3ae18e4d1d33695f806f.scope: Deactivated successfully.
Jan 26 15:47:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:47:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911258624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:25 compute-0 sudo[246613]: pam_unix(sudo:session): session closed for user root
Jan 26 15:47:25 compute-0 nova_compute[239965]: 2026-01-26 15:47:25.146 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:25 compute-0 sudo[246782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:47:25 compute-0 sudo[246782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:47:25 compute-0 sudo[246782]: pam_unix(sudo:session): session closed for user root
Jan 26 15:47:25 compute-0 sudo[246808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:47:25 compute-0 sudo[246808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:47:25 compute-0 nova_compute[239965]: 2026-01-26 15:47:25.371 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:47:25 compute-0 nova_compute[239965]: 2026-01-26 15:47:25.373 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5063MB free_disk=59.98365036211908GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:47:25 compute-0 nova_compute[239965]: 2026-01-26 15:47:25.374 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:25 compute-0 nova_compute[239965]: 2026-01-26 15:47:25.374 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:25 compute-0 nova_compute[239965]: 2026-01-26 15:47:25.436 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:47:25 compute-0 nova_compute[239965]: 2026-01-26 15:47:25.436 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:47:25 compute-0 nova_compute[239965]: 2026-01-26 15:47:25.456 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 51 MiB data, 192 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 26 15:47:25 compute-0 podman[246846]: 2026-01-26 15:47:25.567062535 +0000 UTC m=+0.037611172 container create 79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:47:25 compute-0 systemd[1]: Started libpod-conmon-79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545.scope.
Jan 26 15:47:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:47:25 compute-0 podman[246846]: 2026-01-26 15:47:25.550004397 +0000 UTC m=+0.020553054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:47:25 compute-0 podman[246846]: 2026-01-26 15:47:25.653789529 +0000 UTC m=+0.124338216 container init 79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:47:25 compute-0 podman[246846]: 2026-01-26 15:47:25.66243058 +0000 UTC m=+0.132979217 container start 79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:47:25 compute-0 ecstatic_driscoll[246881]: 167 167
Jan 26 15:47:25 compute-0 systemd[1]: libpod-79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545.scope: Deactivated successfully.
Jan 26 15:47:25 compute-0 conmon[246881]: conmon 79032c1dd43e52a6d7f8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545.scope/container/memory.events
Jan 26 15:47:25 compute-0 podman[246846]: 2026-01-26 15:47:25.667167326 +0000 UTC m=+0.137715963 container attach 79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_driscoll, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 15:47:25 compute-0 podman[246846]: 2026-01-26 15:47:25.672017475 +0000 UTC m=+0.142566132 container died 79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 26 15:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-3558037bfc39db6fe188b0832175ab27d6698f8f4ca9c5173fd9eca589a50ae3-merged.mount: Deactivated successfully.
Jan 26 15:47:25 compute-0 podman[246846]: 2026-01-26 15:47:25.90455555 +0000 UTC m=+0.375104187 container remove 79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_driscoll, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:47:25 compute-0 systemd[1]: libpod-conmon-79032c1dd43e52a6d7f8a971d7316a41222128272f7899409245bb684a9db545.scope: Deactivated successfully.
Jan 26 15:47:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/911258624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:47:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/918661274' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:26 compute-0 nova_compute[239965]: 2026-01-26 15:47:26.040 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:26 compute-0 nova_compute[239965]: 2026-01-26 15:47:26.046 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:47:26 compute-0 nova_compute[239965]: 2026-01-26 15:47:26.061 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:47:26 compute-0 podman[246905]: 2026-01-26 15:47:26.075095566 +0000 UTC m=+0.042157443 container create 9ad8b9c85aba186aeb2e9a2382b5810da077b2d530b708f442884004e2e1b615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 15:47:26 compute-0 nova_compute[239965]: 2026-01-26 15:47:26.086 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:47:26 compute-0 nova_compute[239965]: 2026-01-26 15:47:26.086 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:26 compute-0 systemd[1]: Started libpod-conmon-9ad8b9c85aba186aeb2e9a2382b5810da077b2d530b708f442884004e2e1b615.scope.
Jan 26 15:47:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d091269733e7a4c6c95fbaba28b02606072ac934b64c0ff4e0f1dfe8a5fcc76a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d091269733e7a4c6c95fbaba28b02606072ac934b64c0ff4e0f1dfe8a5fcc76a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d091269733e7a4c6c95fbaba28b02606072ac934b64c0ff4e0f1dfe8a5fcc76a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d091269733e7a4c6c95fbaba28b02606072ac934b64c0ff4e0f1dfe8a5fcc76a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:26 compute-0 podman[246905]: 2026-01-26 15:47:26.149800476 +0000 UTC m=+0.116862373 container init 9ad8b9c85aba186aeb2e9a2382b5810da077b2d530b708f442884004e2e1b615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:47:26 compute-0 podman[246905]: 2026-01-26 15:47:26.057786022 +0000 UTC m=+0.024847929 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:47:26 compute-0 podman[246905]: 2026-01-26 15:47:26.158081218 +0000 UTC m=+0.125143095 container start 9ad8b9c85aba186aeb2e9a2382b5810da077b2d530b708f442884004e2e1b615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:47:26 compute-0 podman[246905]: 2026-01-26 15:47:26.161923083 +0000 UTC m=+0.128984970 container attach 9ad8b9c85aba186aeb2e9a2382b5810da077b2d530b708f442884004e2e1b615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 15:47:26 compute-0 sharp_clarke[246921]: {
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:     "0": [
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:         {
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "devices": [
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "/dev/loop3"
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             ],
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_name": "ceph_lv0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_size": "21470642176",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "name": "ceph_lv0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "tags": {
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.cluster_name": "ceph",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.crush_device_class": "",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.encrypted": "0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.objectstore": "bluestore",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.osd_id": "0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.type": "block",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.vdo": "0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.with_tpm": "0"
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             },
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "type": "block",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "vg_name": "ceph_vg0"
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:         }
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:     ],
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:     "1": [
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:         {
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "devices": [
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "/dev/loop4"
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             ],
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_name": "ceph_lv1",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_size": "21470642176",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "name": "ceph_lv1",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "tags": {
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.cluster_name": "ceph",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.crush_device_class": "",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.encrypted": "0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.objectstore": "bluestore",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.osd_id": "1",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.type": "block",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.vdo": "0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.with_tpm": "0"
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             },
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "type": "block",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "vg_name": "ceph_vg1"
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:         }
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:     ],
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:     "2": [
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:         {
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "devices": [
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "/dev/loop5"
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             ],
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_name": "ceph_lv2",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_size": "21470642176",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "name": "ceph_lv2",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "tags": {
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.cluster_name": "ceph",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.crush_device_class": "",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.encrypted": "0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.objectstore": "bluestore",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.osd_id": "2",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.type": "block",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.vdo": "0",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:                 "ceph.with_tpm": "0"
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             },
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "type": "block",
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:             "vg_name": "ceph_vg2"
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:         }
Jan 26 15:47:26 compute-0 sharp_clarke[246921]:     ]
Jan 26 15:47:26 compute-0 sharp_clarke[246921]: }
Jan 26 15:47:26 compute-0 systemd[1]: libpod-9ad8b9c85aba186aeb2e9a2382b5810da077b2d530b708f442884004e2e1b615.scope: Deactivated successfully.
Jan 26 15:47:26 compute-0 podman[246905]: 2026-01-26 15:47:26.486316646 +0000 UTC m=+0.453378533 container died 9ad8b9c85aba186aeb2e9a2382b5810da077b2d530b708f442884004e2e1b615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:47:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-d091269733e7a4c6c95fbaba28b02606072ac934b64c0ff4e0f1dfe8a5fcc76a-merged.mount: Deactivated successfully.
Jan 26 15:47:26 compute-0 podman[246905]: 2026-01-26 15:47:26.542103203 +0000 UTC m=+0.509165080 container remove 9ad8b9c85aba186aeb2e9a2382b5810da077b2d530b708f442884004e2e1b615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_clarke, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:47:26 compute-0 systemd[1]: libpod-conmon-9ad8b9c85aba186aeb2e9a2382b5810da077b2d530b708f442884004e2e1b615.scope: Deactivated successfully.
Jan 26 15:47:26 compute-0 sudo[246808]: pam_unix(sudo:session): session closed for user root
Jan 26 15:47:26 compute-0 sudo[246941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:47:26 compute-0 sudo[246941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:47:26 compute-0 sudo[246941]: pam_unix(sudo:session): session closed for user root
Jan 26 15:47:26 compute-0 sudo[246966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:47:26 compute-0 sudo[246966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:47:27 compute-0 ceph-mon[75140]: pgmap v948: 305 pgs: 305 active+clean; 51 MiB data, 192 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 151 op/s
Jan 26 15:47:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/918661274' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:27 compute-0 podman[247002]: 2026-01-26 15:47:27.033473596 +0000 UTC m=+0.044586893 container create bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:47:27 compute-0 nova_compute[239965]: 2026-01-26 15:47:27.077 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:27 compute-0 nova_compute[239965]: 2026-01-26 15:47:27.078 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:27 compute-0 nova_compute[239965]: 2026-01-26 15:47:27.078 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:47:27 compute-0 nova_compute[239965]: 2026-01-26 15:47:27.078 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:47:27 compute-0 nova_compute[239965]: 2026-01-26 15:47:27.096 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:47:27 compute-0 nova_compute[239965]: 2026-01-26 15:47:27.096 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:47:27 compute-0 podman[247002]: 2026-01-26 15:47:27.010232448 +0000 UTC m=+0.021345755 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:47:27 compute-0 systemd[1]: Started libpod-conmon-bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171.scope.
Jan 26 15:47:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:47:27 compute-0 podman[247002]: 2026-01-26 15:47:27.321850798 +0000 UTC m=+0.332964175 container init bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bohr, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 15:47:27 compute-0 podman[247002]: 2026-01-26 15:47:27.330744956 +0000 UTC m=+0.341858273 container start bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bohr, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:47:27 compute-0 optimistic_bohr[247018]: 167 167
Jan 26 15:47:27 compute-0 systemd[1]: libpod-bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171.scope: Deactivated successfully.
Jan 26 15:47:27 compute-0 conmon[247018]: conmon bf5774f94aee4dc63b8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171.scope/container/memory.events
Jan 26 15:47:27 compute-0 podman[247002]: 2026-01-26 15:47:27.351909225 +0000 UTC m=+0.363022542 container attach bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bohr, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:47:27 compute-0 podman[247002]: 2026-01-26 15:47:27.352784316 +0000 UTC m=+0.363897633 container died bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:47:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 740 KiB/s wr, 125 op/s
Jan 26 15:47:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9adbf9492c2decc7153667720bae60f2f088c807e930f8360e60fa80993dea8b-merged.mount: Deactivated successfully.
Jan 26 15:47:27 compute-0 podman[247002]: 2026-01-26 15:47:27.935702502 +0000 UTC m=+0.946815789 container remove bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_bohr, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 15:47:27 compute-0 systemd[1]: libpod-conmon-bf5774f94aee4dc63b8aac10124a6a5243a3c4b5c403576190ee8cf87f6be171.scope: Deactivated successfully.
Jan 26 15:47:28 compute-0 ceph-mon[75140]: pgmap v949: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 740 KiB/s wr, 125 op/s
Jan 26 15:47:28 compute-0 podman[247042]: 2026-01-26 15:47:28.098851487 +0000 UTC m=+0.046088820 container create 4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 15:47:28 compute-0 systemd[1]: Started libpod-conmon-4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f.scope.
Jan 26 15:47:28 compute-0 podman[247042]: 2026-01-26 15:47:28.076699824 +0000 UTC m=+0.023937177 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:47:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7a8a76e7401979ed1b7eb95bd3b6d35987297d0be9f660176ac488f1c6daba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7a8a76e7401979ed1b7eb95bd3b6d35987297d0be9f660176ac488f1c6daba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7a8a76e7401979ed1b7eb95bd3b6d35987297d0be9f660176ac488f1c6daba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a7a8a76e7401979ed1b7eb95bd3b6d35987297d0be9f660176ac488f1c6daba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:28 compute-0 podman[247042]: 2026-01-26 15:47:28.243848898 +0000 UTC m=+0.191086261 container init 4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 15:47:28 compute-0 podman[247042]: 2026-01-26 15:47:28.252120091 +0000 UTC m=+0.199357424 container start 4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:47:28 compute-0 podman[247042]: 2026-01-26 15:47:28.257725048 +0000 UTC m=+0.204962431 container attach 4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:47:28
Jan 26 15:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'backups', 'volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'images', 'vms']
Jan 26 15:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:47:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:28 compute-0 lvm[247137]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:47:28 compute-0 lvm[247138]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:47:28 compute-0 lvm[247137]: VG ceph_vg0 finished
Jan 26 15:47:28 compute-0 lvm[247138]: VG ceph_vg1 finished
Jan 26 15:47:28 compute-0 lvm[247140]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:47:28 compute-0 lvm[247140]: VG ceph_vg2 finished
Jan 26 15:47:29 compute-0 blissful_margulis[247059]: {}
Jan 26 15:47:29 compute-0 systemd[1]: libpod-4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f.scope: Deactivated successfully.
Jan 26 15:47:29 compute-0 systemd[1]: libpod-4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f.scope: Consumed 1.266s CPU time.
Jan 26 15:47:29 compute-0 podman[247042]: 2026-01-26 15:47:29.100936078 +0000 UTC m=+1.048173481 container died 4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 15:47:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a7a8a76e7401979ed1b7eb95bd3b6d35987297d0be9f660176ac488f1c6daba-merged.mount: Deactivated successfully.
Jan 26 15:47:29 compute-0 podman[247042]: 2026-01-26 15:47:29.248385299 +0000 UTC m=+1.195622632 container remove 4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_margulis, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 15:47:29 compute-0 systemd[1]: libpod-conmon-4a39a805af5cef411920ed2a54847fe4ad77c1f749c2eae38964836903b7535f.scope: Deactivated successfully.
Jan 26 15:47:29 compute-0 sudo[246966]: pam_unix(sudo:session): session closed for user root
Jan 26 15:47:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:47:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:47:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:47:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:47:29 compute-0 sudo[247158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:47:29 compute-0 sudo[247158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:47:29 compute-0 sudo[247158]: pam_unix(sudo:session): session closed for user root
Jan 26 15:47:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 93 op/s
Jan 26 15:47:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:47:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:47:30 compute-0 ceph-mon[75140]: pgmap v950: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 16 KiB/s wr, 93 op/s
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:47:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:47:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 KiB/s wr, 79 op/s
Jan 26 15:47:32 compute-0 ceph-mon[75140]: pgmap v951: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 KiB/s wr, 79 op/s
Jan 26 15:47:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 0 B/s wr, 1 op/s
Jan 26 15:47:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:34 compute-0 nova_compute[239965]: 2026-01-26 15:47:34.550 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "0435a695-73ca-4cb1-9c6d-35cb1f235198" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:34 compute-0 nova_compute[239965]: 2026-01-26 15:47:34.551 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:34 compute-0 nova_compute[239965]: 2026-01-26 15:47:34.578 239969 DEBUG nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:47:34 compute-0 ceph-mon[75140]: pgmap v952: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 818 B/s rd, 0 B/s wr, 1 op/s
Jan 26 15:47:34 compute-0 nova_compute[239965]: 2026-01-26 15:47:34.668 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:34 compute-0 nova_compute[239965]: 2026-01-26 15:47:34.669 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:34 compute-0 nova_compute[239965]: 2026-01-26 15:47:34.676 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:47:34 compute-0 nova_compute[239965]: 2026-01-26 15:47:34.676 239969 INFO nova.compute.claims [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:47:34 compute-0 nova_compute[239965]: 2026-01-26 15:47:34.809 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:47:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3518868940' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.344 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.352 239969 DEBUG nova.compute.provider_tree [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.368 239969 DEBUG nova.scheduler.client.report [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.396 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.397 239969 DEBUG nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.459 239969 DEBUG nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.459 239969 DEBUG nova.network.neutron [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.483 239969 INFO nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.508 239969 DEBUG nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:47:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 697 B/s rd, 0 B/s wr, 1 op/s
Jan 26 15:47:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3518868940' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.747 239969 DEBUG nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.749 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.750 239969 INFO nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Creating image(s)
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.780 239969 DEBUG nova.storage.rbd_utils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.807 239969 DEBUG nova.storage.rbd_utils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.834 239969 DEBUG nova.storage.rbd_utils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.837 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.913 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.914 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.916 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.916 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.937 239969 DEBUG nova.storage.rbd_utils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:35 compute-0 nova_compute[239965]: 2026-01-26 15:47:35.940 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.210 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.280 239969 DEBUG nova.storage.rbd_utils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] resizing rbd image 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.370 239969 DEBUG nova.objects.instance [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lazy-loading 'migration_context' on Instance uuid 0435a695-73ca-4cb1-9c6d-35cb1f235198 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.399 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.400 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Ensure instance console log exists: /var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.400 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.400 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.401 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.428 239969 WARNING oslo_policy.policy [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.428 239969 WARNING oslo_policy.policy [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.430 239969 DEBUG nova.policy [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '84de675078294b38ba608227b57f84a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b53671bcf3e946d58ed07bc7f2934508', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:47:36 compute-0 ceph-mon[75140]: pgmap v953: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 697 B/s rd, 0 B/s wr, 1 op/s
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.896 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442441.8952372, b8687aa6-9965-456d-b057-a3e202d2220c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.897 239969 INFO nova.compute.manager [-] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] VM Stopped (Lifecycle Event)
Jan 26 15:47:36 compute-0 nova_compute[239965]: 2026-01-26 15:47:36.914 239969 DEBUG nova.compute.manager [None req-cc238216-84da-409c-8050-f22cf7aaf6ba - - - - - -] [instance: b8687aa6-9965-456d-b057-a3e202d2220c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:47:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 67 MiB data, 193 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Jan 26 15:47:38 compute-0 nova_compute[239965]: 2026-01-26 15:47:38.504 239969 DEBUG nova.network.neutron [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Successfully created port: 9c973342-d1ea-4af4-83c2-3dbfe75dc817 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:47:38 compute-0 ceph-mon[75140]: pgmap v954: 305 pgs: 305 active+clean; 67 MiB data, 193 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 MiB/s wr, 15 op/s
Jan 26 15:47:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 79 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Jan 26 15:47:40 compute-0 ceph-mon[75140]: pgmap v955: 305 pgs: 305 active+clean; 79 MiB data, 195 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 26 op/s
Jan 26 15:47:40 compute-0 nova_compute[239965]: 2026-01-26 15:47:40.743 239969 DEBUG nova.network.neutron [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Successfully updated port: 9c973342-d1ea-4af4-83c2-3dbfe75dc817 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:47:40 compute-0 nova_compute[239965]: 2026-01-26 15:47:40.777 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "refresh_cache-0435a695-73ca-4cb1-9c6d-35cb1f235198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:47:40 compute-0 nova_compute[239965]: 2026-01-26 15:47:40.778 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquired lock "refresh_cache-0435a695-73ca-4cb1-9c6d-35cb1f235198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:47:40 compute-0 nova_compute[239965]: 2026-01-26 15:47:40.778 239969 DEBUG nova.network.neutron [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:47:41 compute-0 nova_compute[239965]: 2026-01-26 15:47:41.111 239969 DEBUG nova.network.neutron [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:47:41 compute-0 nova_compute[239965]: 2026-01-26 15:47:41.415 239969 DEBUG nova.compute.manager [req-ed5616e2-ee25-48b2-a3fe-f443dc45c6dc req-364b90fd-f7d0-42b9-a699-f8e8703c75e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received event network-changed-9c973342-d1ea-4af4-83c2-3dbfe75dc817 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:47:41 compute-0 nova_compute[239965]: 2026-01-26 15:47:41.416 239969 DEBUG nova.compute.manager [req-ed5616e2-ee25-48b2-a3fe-f443dc45c6dc req-364b90fd-f7d0-42b9-a699-f8e8703c75e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Refreshing instance network info cache due to event network-changed-9c973342-d1ea-4af4-83c2-3dbfe75dc817. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:47:41 compute-0 nova_compute[239965]: 2026-01-26 15:47:41.416 239969 DEBUG oslo_concurrency.lockutils [req-ed5616e2-ee25-48b2-a3fe-f443dc45c6dc req-364b90fd-f7d0-42b9-a699-f8e8703c75e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0435a695-73ca-4cb1-9c6d-35cb1f235198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:47:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:47:42 compute-0 ceph-mon[75140]: pgmap v956: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:47:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:47:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.099 239969 DEBUG nova.network.neutron [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Updating instance_info_cache with network_info: [{"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.232 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Releasing lock "refresh_cache-0435a695-73ca-4cb1-9c6d-35cb1f235198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.232 239969 DEBUG nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Instance network_info: |[{"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.233 239969 DEBUG oslo_concurrency.lockutils [req-ed5616e2-ee25-48b2-a3fe-f443dc45c6dc req-364b90fd-f7d0-42b9-a699-f8e8703c75e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0435a695-73ca-4cb1-9c6d-35cb1f235198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.233 239969 DEBUG nova.network.neutron [req-ed5616e2-ee25-48b2-a3fe-f443dc45c6dc req-364b90fd-f7d0-42b9-a699-f8e8703c75e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Refreshing network info cache for port 9c973342-d1ea-4af4-83c2-3dbfe75dc817 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.236 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Start _get_guest_xml network_info=[{"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.242 239969 WARNING nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.247 239969 DEBUG nova.virt.libvirt.host [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.247 239969 DEBUG nova.virt.libvirt.host [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.252 239969 DEBUG nova.virt.libvirt.host [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.252 239969 DEBUG nova.virt.libvirt.host [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.253 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.253 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:47:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1872429514',id=23,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-37376144',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.254 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.254 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.254 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.254 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.255 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.255 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.255 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.255 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.255 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.256 239969 DEBUG nova.virt.hardware [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.259 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:47:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1301732585' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.818 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:44 compute-0 ceph-mon[75140]: pgmap v957: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:47:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1301732585' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.847 239969 DEBUG nova.storage.rbd_utils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:44 compute-0 nova_compute[239965]: 2026-01-26 15:47:44.852 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:47:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928765232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.413 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.414 239969 DEBUG nova.virt.libvirt.vif [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1497817103',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1497817103',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1497817103',id=2,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCai7GwJddzKrW3TtkSQ1mOUip/vonOPhzdMngoUZvO4bbn1qLYfpHu7UDeWVieQmn+Md1KkRtB3+wPQBXRuMgP1dKOatHXM/89LkiId8i7p8znLMMfyiJvInuKeFqzzbQ==',key_name='tempest-keypair-1575067288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b53671bcf3e946d58ed07bc7f2934508',ramdisk_id='',reservation_id='r-0e3ar3p5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:47:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='84de675078294b38ba608227b57f84a5',uuid=0435a695-73ca-4cb1-9c6d-35cb1f235198,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.415 239969 DEBUG nova.network.os_vif_util [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converting VIF {"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.415 239969 DEBUG nova.network.os_vif_util [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:ad:5c,bridge_name='br-int',has_traffic_filtering=True,id=9c973342-d1ea-4af4-83c2-3dbfe75dc817,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c973342-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.418 239969 DEBUG nova.objects.instance [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0435a695-73ca-4cb1-9c6d-35cb1f235198 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.433 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <uuid>0435a695-73ca-4cb1-9c6d-35cb1f235198</uuid>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <name>instance-00000002</name>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1497817103</nova:name>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:47:44</nova:creationTime>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <nova:flavor name="tempest-flavor_with_ephemeral_0-37376144">
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <nova:user uuid="84de675078294b38ba608227b57f84a5">tempest-ServersWithSpecificFlavorTestJSON-1775060788-project-member</nova:user>
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <nova:project uuid="b53671bcf3e946d58ed07bc7f2934508">tempest-ServersWithSpecificFlavorTestJSON-1775060788</nova:project>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <nova:port uuid="9c973342-d1ea-4af4-83c2-3dbfe75dc817">
Jan 26 15:47:45 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <system>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <entry name="serial">0435a695-73ca-4cb1-9c6d-35cb1f235198</entry>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <entry name="uuid">0435a695-73ca-4cb1-9c6d-35cb1f235198</entry>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     </system>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <os>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   </os>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <features>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   </features>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0435a695-73ca-4cb1-9c6d-35cb1f235198_disk">
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       </source>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0435a695-73ca-4cb1-9c6d-35cb1f235198_disk.config">
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       </source>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:47:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:08:ad:5c"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <target dev="tap9c973342-d1"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198/console.log" append="off"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <video>
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     </video>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:47:45 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:47:45 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:47:45 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:47:45 compute-0 nova_compute[239965]: </domain>
Jan 26 15:47:45 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.434 239969 DEBUG nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Preparing to wait for external event network-vif-plugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.435 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.435 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.435 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.436 239969 DEBUG nova.virt.libvirt.vif [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1497817103',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1497817103',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1497817103',id=2,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCai7GwJddzKrW3TtkSQ1mOUip/vonOPhzdMngoUZvO4bbn1qLYfpHu7UDeWVieQmn+Md1KkRtB3+wPQBXRuMgP1dKOatHXM/89LkiId8i7p8znLMMfyiJvInuKeFqzzbQ==',key_name='tempest-keypair-1575067288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b53671bcf3e946d58ed07bc7f2934508',ramdisk_id='',reservation_id='r-0e3ar3p5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:47:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='84de675078294b38ba608227b57f84a5',uuid=0435a695-73ca-4cb1-9c6d-35cb1f235198,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.436 239969 DEBUG nova.network.os_vif_util [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converting VIF {"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.437 239969 DEBUG nova.network.os_vif_util [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:ad:5c,bridge_name='br-int',has_traffic_filtering=True,id=9c973342-d1ea-4af4-83c2-3dbfe75dc817,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c973342-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.437 239969 DEBUG os_vif [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ad:5c,bridge_name='br-int',has_traffic_filtering=True,id=9c973342-d1ea-4af4-83c2-3dbfe75dc817,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c973342-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.505 239969 DEBUG ovsdbapp.backend.ovs_idl [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.506 239969 DEBUG ovsdbapp.backend.ovs_idl [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.506 239969 DEBUG ovsdbapp.backend.ovs_idl [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.507 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.507 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.507 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.508 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.520 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.520 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.520 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:47:45 compute-0 nova_compute[239965]: 2026-01-26 15:47:45.521 239969 INFO oslo.privsep.daemon [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpx2jqe219/privsep.sock']
Jan 26 15:47:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:47:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1928765232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.298 239969 INFO oslo.privsep.daemon [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Spawned new privsep daemon via rootwrap
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.186 247437 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.189 247437 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.192 247437 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.192 247437 INFO oslo.privsep.daemon [-] privsep daemon running as pid 247437
Jan 26 15:47:46 compute-0 podman[247438]: 2026-01-26 15:47:46.368789774 +0000 UTC m=+0.051780069 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 15:47:46 compute-0 podman[247440]: 2026-01-26 15:47:46.390499745 +0000 UTC m=+0.072916217 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.680 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.681 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c973342-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.682 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c973342-d1, col_values=(('external_ids', {'iface-id': '9c973342-d1ea-4af4-83c2-3dbfe75dc817', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:ad:5c', 'vm-uuid': '0435a695-73ca-4cb1-9c6d-35cb1f235198'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.683 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:46 compute-0 NetworkManager[48954]: <info>  [1769442466.6843] manager: (tap9c973342-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.687 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.690 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.691 239969 INFO os_vif [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:ad:5c,bridge_name='br-int',has_traffic_filtering=True,id=9c973342-d1ea-4af4-83c2-3dbfe75dc817,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c973342-d1')
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.736 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.737 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.737 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] No VIF found with MAC fa:16:3e:08:ad:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.737 239969 INFO nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Using config drive
Jan 26 15:47:46 compute-0 nova_compute[239965]: 2026-01-26 15:47:46.760 239969 DEBUG nova.storage.rbd_utils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:46 compute-0 ceph-mon[75140]: pgmap v958: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.020 239969 DEBUG nova.network.neutron [req-ed5616e2-ee25-48b2-a3fe-f443dc45c6dc req-364b90fd-f7d0-42b9-a699-f8e8703c75e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Updated VIF entry in instance network info cache for port 9c973342-d1ea-4af4-83c2-3dbfe75dc817. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.021 239969 DEBUG nova.network.neutron [req-ed5616e2-ee25-48b2-a3fe-f443dc45c6dc req-364b90fd-f7d0-42b9-a699-f8e8703c75e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Updating instance_info_cache with network_info: [{"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.035 239969 DEBUG oslo_concurrency.lockutils [req-ed5616e2-ee25-48b2-a3fe-f443dc45c6dc req-364b90fd-f7d0-42b9-a699-f8e8703c75e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0435a695-73ca-4cb1-9c6d-35cb1f235198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:47:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.583 239969 INFO nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Creating config drive at /var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198/disk.config
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.588 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpithlqflc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.711 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpithlqflc" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.739 239969 DEBUG nova.storage.rbd_utils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.742 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198/disk.config 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.876 239969 DEBUG oslo_concurrency.processutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198/disk.config 0435a695-73ca-4cb1-9c6d-35cb1f235198_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.877 239969 INFO nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Deleting local config drive /var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198/disk.config because it was imported into RBD.
Jan 26 15:47:47 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 26 15:47:47 compute-0 kernel: tap9c973342-d1: entered promiscuous mode
Jan 26 15:47:47 compute-0 NetworkManager[48954]: <info>  [1769442467.9556] manager: (tap9c973342-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.955 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:47 compute-0 ovn_controller[146046]: 2026-01-26T15:47:47Z|00027|binding|INFO|Claiming lport 9c973342-d1ea-4af4-83c2-3dbfe75dc817 for this chassis.
Jan 26 15:47:47 compute-0 ovn_controller[146046]: 2026-01-26T15:47:47Z|00028|binding|INFO|9c973342-d1ea-4af4-83c2-3dbfe75dc817: Claiming fa:16:3e:08:ad:5c 10.100.0.14
Jan 26 15:47:47 compute-0 nova_compute[239965]: 2026-01-26 15:47:47.960 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:47 compute-0 systemd-udevd[247559]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:47:48 compute-0 NetworkManager[48954]: <info>  [1769442468.0102] device (tap9c973342-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:47:48 compute-0 NetworkManager[48954]: <info>  [1769442468.0109] device (tap9c973342-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.027 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:ad:5c 10.100.0.14'], port_security=['fa:16:3e:08:ad:5c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0435a695-73ca-4cb1-9c6d-35cb1f235198', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b13a653-2720-44ed-8e59-f532313cf362', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b53671bcf3e946d58ed07bc7f2934508', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa7896b6-fb15-4b71-8719-443aa677edd8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88b30d9b-ee3e-48c5-b1e4-65985eb98cc5, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9c973342-d1ea-4af4-83c2-3dbfe75dc817) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.030 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9c973342-d1ea-4af4-83c2-3dbfe75dc817 in datapath 7b13a653-2720-44ed-8e59-f532313cf362 bound to our chassis
Jan 26 15:47:48 compute-0 nova_compute[239965]: 2026-01-26 15:47:48.031 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.032 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b13a653-2720-44ed-8e59-f532313cf362
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.033 156105 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpgrbb33wq/privsep.sock']
Jan 26 15:47:48 compute-0 ovn_controller[146046]: 2026-01-26T15:47:48Z|00029|binding|INFO|Setting lport 9c973342-d1ea-4af4-83c2-3dbfe75dc817 ovn-installed in OVS
Jan 26 15:47:48 compute-0 ovn_controller[146046]: 2026-01-26T15:47:48Z|00030|binding|INFO|Setting lport 9c973342-d1ea-4af4-83c2-3dbfe75dc817 up in Southbound
Jan 26 15:47:48 compute-0 nova_compute[239965]: 2026-01-26 15:47:48.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:48 compute-0 systemd-machined[208061]: New machine qemu-2-instance-00000002.
Jan 26 15:47:48 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003462921574051014 of space, bias 1.0, pg target 0.10388764722153042 quantized to 32 (current 32)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664048773846356 of space, bias 1.0, pg target 0.1999214632153907 quantized to 32 (current 32)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0187243812910288e-06 of space, bias 4.0, pg target 0.0012224692575492344 quantized to 16 (current 16)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:47:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.776 156105 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.777 156105 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgrbb33wq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.604 247577 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.608 247577 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.610 247577 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.610 247577 INFO oslo.privsep.daemon [-] privsep daemon running as pid 247577
Jan 26 15:47:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:48.779 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5ad61929-0527-4800-af89-654e68e8d0b2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:48 compute-0 ceph-mon[75140]: pgmap v959: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:47:48 compute-0 nova_compute[239965]: 2026-01-26 15:47:48.925 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442468.924942, 0435a695-73ca-4cb1-9c6d-35cb1f235198 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:47:48 compute-0 nova_compute[239965]: 2026-01-26 15:47:48.925 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] VM Started (Lifecycle Event)
Jan 26 15:47:48 compute-0 nova_compute[239965]: 2026-01-26 15:47:48.971 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:47:48 compute-0 nova_compute[239965]: 2026-01-26 15:47:48.974 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442468.9251723, 0435a695-73ca-4cb1-9c6d-35cb1f235198 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:47:48 compute-0 nova_compute[239965]: 2026-01-26 15:47:48.974 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] VM Paused (Lifecycle Event)
Jan 26 15:47:49 compute-0 nova_compute[239965]: 2026-01-26 15:47:49.003 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:47:49 compute-0 nova_compute[239965]: 2026-01-26 15:47:49.006 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:47:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 546 KiB/s wr, 13 op/s
Jan 26 15:47:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:49.580 247577 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:49.580 247577 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:49.580 247577 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:49 compute-0 nova_compute[239965]: 2026-01-26 15:47:49.588 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:49 compute-0 nova_compute[239965]: 2026-01-26 15:47:49.882 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.517 239969 DEBUG nova.compute.manager [req-f55bdddf-0bf9-4112-83ec-aafe5e8258bf req-c1a89ac0-96ae-42f5-89e2-962ea3e92004 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received event network-vif-plugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.518 239969 DEBUG oslo_concurrency.lockutils [req-f55bdddf-0bf9-4112-83ec-aafe5e8258bf req-c1a89ac0-96ae-42f5-89e2-962ea3e92004 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.519 239969 DEBUG oslo_concurrency.lockutils [req-f55bdddf-0bf9-4112-83ec-aafe5e8258bf req-c1a89ac0-96ae-42f5-89e2-962ea3e92004 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.519 239969 DEBUG oslo_concurrency.lockutils [req-f55bdddf-0bf9-4112-83ec-aafe5e8258bf req-c1a89ac0-96ae-42f5-89e2-962ea3e92004 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.519 239969 DEBUG nova.compute.manager [req-f55bdddf-0bf9-4112-83ec-aafe5e8258bf req-c1a89ac0-96ae-42f5-89e2-962ea3e92004 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Processing event network-vif-plugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.522 239969 DEBUG nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.523 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "e8e51150-05c2-4d90-b463-0a5f26c999fc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.523 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "e8e51150-05c2-4d90-b463-0a5f26c999fc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.525 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442470.5256677, 0435a695-73ca-4cb1-9c6d-35cb1f235198 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.526 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] VM Resumed (Lifecycle Event)
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.528 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.530 239969 INFO nova.virt.libvirt.driver [-] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Instance spawned successfully.
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.531 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:47:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:50.540 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb4b69d-e6cc-4e4b-a13a-3acb81fd016d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:50.542 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b13a653-21 in ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:47:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:50.544 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b13a653-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:47:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:50.545 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[425fdb4b-b238-4858-a456-e1c90c8ef410]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:50.549 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6990f6-4135-47b4-abdd-a93fd77ee403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:50.575 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d55f81d4-0a23-4b60-9548-c63324715f36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:50.600 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f52187c4-9cbc-4d5b-a680-2d36ea23c77e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:50.603 156105 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmphc__8371/privsep.sock']
Jan 26 15:47:50 compute-0 ceph-mon[75140]: pgmap v960: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 546 KiB/s wr, 13 op/s
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.972 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.972 239969 DEBUG nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:47:50 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.982 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:50.999 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.000 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.001 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.001 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.001 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.002 239969 DEBUG nova.virt.libvirt.driver [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.070 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.148 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.149 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.155 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.155 239969 INFO nova.compute.claims [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.172 239969 INFO nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Took 15.42 seconds to spawn the instance on the hypervisor.
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.173 239969 DEBUG nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.308 239969 INFO nova.compute.manager [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Took 16.67 seconds to build instance.
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.359 156105 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.360 156105 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmphc__8371/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.199 247633 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.203 247633 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.206 247633 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.206 247633 INFO oslo.privsep.daemon [-] privsep daemon running as pid 247633
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.364 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb24e90-e728-43a7-98ea-2fe5ab252178]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.411 239969 DEBUG oslo_concurrency.lockutils [None req-520d93d2-a654-44b3-9a07-096fd7be5de0 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.451 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v961: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 387 KiB/s wr, 3 op/s
Jan 26 15:47:51 compute-0 nova_compute[239965]: 2026-01-26 15:47:51.684 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.985 247633 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.985 247633 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:51.985 247633 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:47:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1041017290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.131 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.137 239969 DEBUG nova.compute.provider_tree [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.226 239969 DEBUG nova.scheduler.client.report [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.252 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.253 239969 DEBUG nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.337 239969 DEBUG nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.337 239969 DEBUG nova.network.neutron [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.373 239969 INFO nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.399 239969 DEBUG nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.535 239969 DEBUG nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.536 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.537 239969 INFO nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Creating image(s)
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.558 239969 DEBUG nova.storage.rbd_utils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image e8e51150-05c2-4d90-b463-0a5f26c999fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.583 239969 DEBUG nova.storage.rbd_utils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image e8e51150-05c2-4d90-b463-0a5f26c999fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.609 239969 DEBUG nova.storage.rbd_utils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image e8e51150-05c2-4d90-b463-0a5f26c999fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.613 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.684 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.685 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.686 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.686 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.689 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[caae55e3-00f1-40d1-9bb3-63cc928a0685]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.705 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8afd0408-e79b-4ffa-94dd-cdb10920aa96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 NetworkManager[48954]: <info>  [1769442472.7060] manager: (tap7b13a653-20): new Veth device (/org/freedesktop/NetworkManager/Devices/23)
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.717 239969 DEBUG nova.storage.rbd_utils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image e8e51150-05c2-4d90-b463-0a5f26c999fc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.723 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 e8e51150-05c2-4d90-b463-0a5f26c999fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:52 compute-0 systemd-udevd[247743]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.734 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0e662f-4db0-4b89-98fb-5c45fbfb1c55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.741 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1ccf8a-bed4-494b-9e7f-0836582654ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 NetworkManager[48954]: <info>  [1769442472.7705] device (tap7b13a653-20): carrier: link connected
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.775 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3f1d43b0-01bb-4cd0-9b01-4a074f150f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.795 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f299e42-21bb-4594-951a-21bfbbd6cbf3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b13a653-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:f1:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396357, 'reachable_time': 34094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 247765, 'error': None, 'target': 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.817 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[32a66601-610f-458c-bea3-42e3f88e777a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:f1fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396357, 'tstamp': 396357}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 247777, 'error': None, 'target': 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.835 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1cfbd5-4f9e-45c3-aee1-44538a1fe4bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b13a653-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:f1:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396357, 'reachable_time': 34094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 247778, 'error': None, 'target': 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.872 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d95adb5-581f-4625-9950-420de7c7e1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.929 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e1396638-7fdd-4315-9fa2-f627be22c1af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.931 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b13a653-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.932 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.932 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b13a653-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:47:52 compute-0 NetworkManager[48954]: <info>  [1769442472.9353] manager: (tap7b13a653-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/24)
Jan 26 15:47:52 compute-0 kernel: tap7b13a653-20: entered promiscuous mode
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.935 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.940 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b13a653-20, col_values=(('external_ids', {'iface-id': '89ce28e3-fbf3-4c2d-9139-ce3f45412c6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:47:52 compute-0 ovn_controller[146046]: 2026-01-26T15:47:52Z|00031|binding|INFO|Releasing lport 89ce28e3-fbf3-4c2d-9139-ce3f45412c6f from this chassis (sb_readonly=0)
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:52 compute-0 ceph-mon[75140]: pgmap v961: 305 pgs: 305 active+clean; 88 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 387 KiB/s wr, 3 op/s
Jan 26 15:47:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1041017290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:47:52 compute-0 nova_compute[239965]: 2026-01-26 15:47:52.960 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.963 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b13a653-2720-44ed-8e59-f532313cf362.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b13a653-2720-44ed-8e59-f532313cf362.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.964 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c22deeca-2c0b-446c-bbe3-63951f296032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.965 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-7b13a653-2720-44ed-8e59-f532313cf362
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/7b13a653-2720-44ed-8e59-f532313cf362.pid.haproxy
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 7b13a653-2720-44ed-8e59-f532313cf362
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:47:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:52.966 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'env', 'PROCESS_TAG=haproxy-7b13a653-2720-44ed-8e59-f532313cf362', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b13a653-2720-44ed-8e59-f532313cf362.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.300 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 e8e51150-05c2-4d90-b463-0a5f26c999fc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.400 239969 DEBUG nova.storage.rbd_utils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] resizing rbd image e8e51150-05c2-4d90-b463-0a5f26c999fc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.463 239969 DEBUG nova.network.neutron [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.463 239969 DEBUG nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:47:53 compute-0 podman[247847]: 2026-01-26 15:47:53.474856168 +0000 UTC m=+0.077390316 container create c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 15:47:53 compute-0 podman[247847]: 2026-01-26 15:47:53.422166658 +0000 UTC m=+0.024700826 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:47:53 compute-0 systemd[1]: Started libpod-conmon-c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c.scope.
Jan 26 15:47:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:47:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe45d414c067973c923da5a702bbb31431d4d75a6580f1637bf7a3f11ca735e6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:47:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 57 op/s
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.584 239969 DEBUG nova.objects.instance [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lazy-loading 'migration_context' on Instance uuid e8e51150-05c2-4d90-b463-0a5f26c999fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:47:53 compute-0 podman[247847]: 2026-01-26 15:47:53.597115312 +0000 UTC m=+0.199649470 container init c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 15:47:53 compute-0 podman[247847]: 2026-01-26 15:47:53.605105758 +0000 UTC m=+0.207639886 container start c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.608 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.608 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Ensure instance console log exists: /var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.609 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.609 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.610 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.611 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.617 239969 WARNING nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.623 239969 DEBUG nova.virt.libvirt.host [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.625 239969 DEBUG nova.virt.libvirt.host [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.644 239969 DEBUG nova.virt.libvirt.host [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.649 239969 DEBUG nova.virt.libvirt.host [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.649 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.650 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.650 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.650 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.651 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.651 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.651 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.652 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.652 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.652 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.659 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.660 239969 DEBUG nova.virt.hardware [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:47:53 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[247883]: [NOTICE]   (247905) : New worker (247907) forked
Jan 26 15:47:53 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[247883]: [NOTICE]   (247905) : Loading success.
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.665 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.745 239969 DEBUG nova.compute.manager [req-809c312b-d07e-4816-9cc8-5955a41145bc req-87a21224-ac38-49f2-a0d5-13e07db9504e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received event network-vif-plugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.746 239969 DEBUG oslo_concurrency.lockutils [req-809c312b-d07e-4816-9cc8-5955a41145bc req-87a21224-ac38-49f2-a0d5-13e07db9504e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.746 239969 DEBUG oslo_concurrency.lockutils [req-809c312b-d07e-4816-9cc8-5955a41145bc req-87a21224-ac38-49f2-a0d5-13e07db9504e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.746 239969 DEBUG oslo_concurrency.lockutils [req-809c312b-d07e-4816-9cc8-5955a41145bc req-87a21224-ac38-49f2-a0d5-13e07db9504e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.747 239969 DEBUG nova.compute.manager [req-809c312b-d07e-4816-9cc8-5955a41145bc req-87a21224-ac38-49f2-a0d5-13e07db9504e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] No waiting events found dispatching network-vif-plugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:47:53 compute-0 nova_compute[239965]: 2026-01-26 15:47:53.747 239969 WARNING nova.compute.manager [req-809c312b-d07e-4816-9cc8-5955a41145bc req-87a21224-ac38-49f2-a0d5-13e07db9504e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received unexpected event network-vif-plugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 for instance with vm_state active and task_state None.
Jan 26 15:47:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:54 compute-0 ceph-mon[75140]: pgmap v962: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 57 op/s
Jan 26 15:47:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:47:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/92256694' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:54 compute-0 nova_compute[239965]: 2026-01-26 15:47:54.270 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:54 compute-0 nova_compute[239965]: 2026-01-26 15:47:54.291 239969 DEBUG nova.storage.rbd_utils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image e8e51150-05c2-4d90-b463-0a5f26c999fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:54 compute-0 nova_compute[239965]: 2026-01-26 15:47:54.294 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:54 compute-0 nova_compute[239965]: 2026-01-26 15:47:54.632 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:47:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1120474971' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:54 compute-0 nova_compute[239965]: 2026-01-26 15:47:54.858 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:54 compute-0 nova_compute[239965]: 2026-01-26 15:47:54.859 239969 DEBUG nova.objects.instance [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lazy-loading 'pci_devices' on Instance uuid e8e51150-05c2-4d90-b463-0a5f26c999fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:47:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/92256694' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1120474971' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:47:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 57 op/s
Jan 26 15:47:56 compute-0 ceph-mon[75140]: pgmap v963: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 14 KiB/s wr, 57 op/s
Jan 26 15:47:56 compute-0 nova_compute[239965]: 2026-01-26 15:47:56.686 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v964: 305 pgs: 305 active+clean; 126 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 100 op/s
Jan 26 15:47:58 compute-0 ceph-mon[75140]: pgmap v964: 305 pgs: 305 active+clean; 126 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.6 MiB/s wr, 100 op/s
Jan 26 15:47:58 compute-0 nova_compute[239965]: 2026-01-26 15:47:58.648 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <uuid>e8e51150-05c2-4d90-b463-0a5f26c999fc</uuid>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <name>instance-00000003</name>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <nova:name>tempest-LiveMigrationNegativeTest-server-1176654668</nova:name>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:47:53</nova:creationTime>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <nova:user uuid="f454000977d04b4e9b6d5cf626dbd6fe">tempest-LiveMigrationNegativeTest-1999008782-project-member</nova:user>
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <nova:project uuid="6beda758126041608c246eb20e376115">tempest-LiveMigrationNegativeTest-1999008782</nova:project>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <system>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <entry name="serial">e8e51150-05c2-4d90-b463-0a5f26c999fc</entry>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <entry name="uuid">e8e51150-05c2-4d90-b463-0a5f26c999fc</entry>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     </system>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <os>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   </os>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <features>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   </features>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/e8e51150-05c2-4d90-b463-0a5f26c999fc_disk">
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       </source>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/e8e51150-05c2-4d90-b463-0a5f26c999fc_disk.config">
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       </source>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:47:58 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc/console.log" append="off"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <video>
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     </video>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:47:58 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:47:58 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:47:58 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:47:58 compute-0 nova_compute[239965]: </domain>
Jan 26 15:47:58 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:47:58 compute-0 nova_compute[239965]: 2026-01-26 15:47:58.718 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:47:58 compute-0 nova_compute[239965]: 2026-01-26 15:47:58.720 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:47:58 compute-0 nova_compute[239965]: 2026-01-26 15:47:58.721 239969 INFO nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Using config drive
Jan 26 15:47:58 compute-0 nova_compute[239965]: 2026-01-26 15:47:58.743 239969 DEBUG nova.storage.rbd_utils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image e8e51150-05c2-4d90-b463-0a5f26c999fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.819697) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442478819742, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 1116, "num_deletes": 255, "total_data_size": 1566133, "memory_usage": 1594256, "flush_reason": "Manual Compaction"}
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442478832481, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1539794, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18882, "largest_seqno": 19997, "table_properties": {"data_size": 1534422, "index_size": 2768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11243, "raw_average_key_size": 18, "raw_value_size": 1523475, "raw_average_value_size": 2564, "num_data_blocks": 126, "num_entries": 594, "num_filter_entries": 594, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769442382, "oldest_key_time": 1769442382, "file_creation_time": 1769442478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 13175 microseconds, and 4925 cpu microseconds.
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.832868) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1539794 bytes OK
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.833035) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.835297) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.835322) EVENT_LOG_v1 {"time_micros": 1769442478835316, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.835358) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1560934, prev total WAL file size 1560934, number of live WAL files 2.
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.836690) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323530' seq:72057594037927935, type:22 .. '6C6F676D00353031' seq:0, type:0; will stop at (end)
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1503KB)], [44(6451KB)]
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442478836812, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 8145890, "oldest_snapshot_seqno": -1}
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4364 keys, 8009570 bytes, temperature: kUnknown
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442478901246, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8009570, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7978667, "index_size": 18889, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 107782, "raw_average_key_size": 24, "raw_value_size": 7897972, "raw_average_value_size": 1809, "num_data_blocks": 792, "num_entries": 4364, "num_filter_entries": 4364, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769442478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.901668) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8009570 bytes
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.903372) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.1 rd, 124.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 6.3 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(10.5) write-amplify(5.2) OK, records in: 4890, records dropped: 526 output_compression: NoCompression
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.903405) EVENT_LOG_v1 {"time_micros": 1769442478903390, "job": 22, "event": "compaction_finished", "compaction_time_micros": 64580, "compaction_time_cpu_micros": 25057, "output_level": 6, "num_output_files": 1, "total_output_size": 8009570, "num_input_records": 4890, "num_output_records": 4364, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442478903838, "job": 22, "event": "table_file_deletion", "file_number": 46}
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442478905313, "job": 22, "event": "table_file_deletion", "file_number": 44}
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.836519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.905417) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.905424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.905425) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.905427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:47:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:47:58.905429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <info>  [1769442479.1780] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.178 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <info>  [1769442479.1789] device (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <warn>  [1769442479.1792] device (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <info>  [1769442479.1807] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <info>  [1769442479.1811] device (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <warn>  [1769442479.1812] device (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <info>  [1769442479.1820] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <info>  [1769442479.1827] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <info>  [1769442479.1832] device (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 15:47:59 compute-0 NetworkManager[48954]: <info>  [1769442479.1836] device (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 26 15:47:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:59.209 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:47:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:59.209 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:47:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:47:59.210 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.262 239969 INFO nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Creating config drive at /var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc/disk.config
Jan 26 15:47:59 compute-0 ovn_controller[146046]: 2026-01-26T15:47:59Z|00032|binding|INFO|Releasing lport 89ce28e3-fbf3-4c2d-9139-ce3f45412c6f from this chassis (sb_readonly=0)
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.274 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ezgatrl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.293 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.402 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ezgatrl" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.425 239969 DEBUG nova.storage.rbd_utils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image e8e51150-05c2-4d90-b463-0a5f26c999fc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.429 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc/disk.config e8e51150-05c2-4d90-b463-0a5f26c999fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.559 239969 DEBUG oslo_concurrency.processutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc/disk.config e8e51150-05c2-4d90-b463-0a5f26c999fc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.560 239969 INFO nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Deleting local config drive /var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc/disk.config because it was imported into RBD.
Jan 26 15:47:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 15:47:59 compute-0 systemd-machined[208061]: New machine qemu-3-instance-00000003.
Jan 26 15:47:59 compute-0 nova_compute[239965]: 2026-01-26 15:47:59.636 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:47:59 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.151 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442480.1507504, e8e51150-05c2-4d90-b463-0a5f26c999fc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.151 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] VM Resumed (Lifecycle Event)
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.154 239969 DEBUG nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.154 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.158 239969 INFO nova.virt.libvirt.driver [-] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Instance spawned successfully.
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.159 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.194 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.199 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.199 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.200 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.201 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.201 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.202 239969 DEBUG nova.virt.libvirt.driver [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.208 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.732 239969 DEBUG nova.compute.manager [req-86767a28-cbfc-4db5-9cc9-cb78fe98a760 req-4b672a4b-c6c1-49db-8a75-2ff4c2bb2b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received event network-changed-9c973342-d1ea-4af4-83c2-3dbfe75dc817 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.732 239969 DEBUG nova.compute.manager [req-86767a28-cbfc-4db5-9cc9-cb78fe98a760 req-4b672a4b-c6c1-49db-8a75-2ff4c2bb2b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Refreshing instance network info cache due to event network-changed-9c973342-d1ea-4af4-83c2-3dbfe75dc817. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.733 239969 DEBUG oslo_concurrency.lockutils [req-86767a28-cbfc-4db5-9cc9-cb78fe98a760 req-4b672a4b-c6c1-49db-8a75-2ff4c2bb2b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0435a695-73ca-4cb1-9c6d-35cb1f235198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.733 239969 DEBUG oslo_concurrency.lockutils [req-86767a28-cbfc-4db5-9cc9-cb78fe98a760 req-4b672a4b-c6c1-49db-8a75-2ff4c2bb2b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0435a695-73ca-4cb1-9c6d-35cb1f235198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.733 239969 DEBUG nova.network.neutron [req-86767a28-cbfc-4db5-9cc9-cb78fe98a760 req-4b672a4b-c6c1-49db-8a75-2ff4c2bb2b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Refreshing network info cache for port 9c973342-d1ea-4af4-83c2-3dbfe75dc817 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:48:00 compute-0 ceph-mon[75140]: pgmap v965: 305 pgs: 305 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.863 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.863 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442480.1538656, e8e51150-05c2-4d90-b463-0a5f26c999fc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.864 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] VM Started (Lifecycle Event)
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.875 239969 INFO nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Took 8.34 seconds to spawn the instance on the hypervisor.
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.875 239969 DEBUG nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.910 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.914 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.970 239969 INFO nova.compute.manager [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Took 9.85 seconds to build instance.
Jan 26 15:48:00 compute-0 nova_compute[239965]: 2026-01-26 15:48:00.991 239969 DEBUG oslo_concurrency.lockutils [None req-80d70386-acf4-4cc6-8d7a-f2a2481552db f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "e8e51150-05c2-4d90-b463-0a5f26c999fc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.468s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 15:48:01 compute-0 nova_compute[239965]: 2026-01-26 15:48:01.770 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:02 compute-0 nova_compute[239965]: 2026-01-26 15:48:02.750 239969 DEBUG nova.network.neutron [req-86767a28-cbfc-4db5-9cc9-cb78fe98a760 req-4b672a4b-c6c1-49db-8a75-2ff4c2bb2b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Updated VIF entry in instance network info cache for port 9c973342-d1ea-4af4-83c2-3dbfe75dc817. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:48:02 compute-0 nova_compute[239965]: 2026-01-26 15:48:02.752 239969 DEBUG nova.network.neutron [req-86767a28-cbfc-4db5-9cc9-cb78fe98a760 req-4b672a4b-c6c1-49db-8a75-2ff4c2bb2b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Updating instance_info_cache with network_info: [{"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:02 compute-0 nova_compute[239965]: 2026-01-26 15:48:02.785 239969 DEBUG oslo_concurrency.lockutils [req-86767a28-cbfc-4db5-9cc9-cb78fe98a760 req-4b672a4b-c6c1-49db-8a75-2ff4c2bb2b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0435a695-73ca-4cb1-9c6d-35cb1f235198" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:48:02 compute-0 ceph-mon[75140]: pgmap v966: 305 pgs: 305 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 15:48:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Jan 26 15:48:03 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 26 15:48:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:03 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 26 15:48:04 compute-0 nova_compute[239965]: 2026-01-26 15:48:04.638 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:04 compute-0 ceph-mon[75140]: pgmap v967: 305 pgs: 305 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Jan 26 15:48:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:05.486 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:48:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:05.488 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:48:05 compute-0 nova_compute[239965]: 2026-01-26 15:48:05.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 26 15:48:06 compute-0 nova_compute[239965]: 2026-01-26 15:48:06.072 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "9f5c8de2-e88f-497c-b6f3-89bcbfd33843" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:06 compute-0 nova_compute[239965]: 2026-01-26 15:48:06.073 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "9f5c8de2-e88f-497c-b6f3-89bcbfd33843" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:06 compute-0 nova_compute[239965]: 2026-01-26 15:48:06.096 239969 DEBUG nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:48:06 compute-0 nova_compute[239965]: 2026-01-26 15:48:06.172 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:06 compute-0 nova_compute[239965]: 2026-01-26 15:48:06.173 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:06 compute-0 nova_compute[239965]: 2026-01-26 15:48:06.231 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:48:06 compute-0 nova_compute[239965]: 2026-01-26 15:48:06.232 239969 INFO nova.compute.claims [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:48:06 compute-0 ceph-mon[75140]: pgmap v968: 305 pgs: 305 active+clean; 134 MiB data, 221 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 26 15:48:06 compute-0 nova_compute[239965]: 2026-01-26 15:48:06.522 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:06 compute-0 nova_compute[239965]: 2026-01-26 15:48:06.772 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:06 compute-0 ovn_controller[146046]: 2026-01-26T15:48:06Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:ad:5c 10.100.0.14
Jan 26 15:48:06 compute-0 ovn_controller[146046]: 2026-01-26T15:48:06Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:ad:5c 10.100.0.14
Jan 26 15:48:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3538337193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.190 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.198 239969 DEBUG nova.compute.provider_tree [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.215 239969 DEBUG nova.scheduler.client.report [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.244 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.246 239969 DEBUG nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.299 239969 DEBUG nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.299 239969 DEBUG nova.network.neutron [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.319 239969 INFO nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.338 239969 DEBUG nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.434 239969 DEBUG nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.437 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.437 239969 INFO nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Creating image(s)
Jan 26 15:48:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3538337193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.468 239969 DEBUG nova.storage.rbd_utils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.490 239969 DEBUG nova.storage.rbd_utils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.520 239969 DEBUG nova.storage.rbd_utils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.528 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v969: 305 pgs: 305 active+clean; 158 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.2 MiB/s wr, 148 op/s
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.590 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.592 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.593 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.594 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.619 239969 DEBUG nova.storage.rbd_utils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.623 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.698 239969 DEBUG nova.network.neutron [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.701 239969 DEBUG nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:48:07 compute-0 nova_compute[239965]: 2026-01-26 15:48:07.950 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.010 239969 DEBUG nova.storage.rbd_utils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] resizing rbd image 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.079 239969 DEBUG nova.objects.instance [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lazy-loading 'migration_context' on Instance uuid 9f5c8de2-e88f-497c-b6f3-89bcbfd33843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.091 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.092 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Ensure instance console log exists: /var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.092 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.093 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.093 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.094 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.098 239969 WARNING nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.103 239969 DEBUG nova.virt.libvirt.host [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.104 239969 DEBUG nova.virt.libvirt.host [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.106 239969 DEBUG nova.virt.libvirt.host [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.106 239969 DEBUG nova.virt.libvirt.host [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.107 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.107 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.108 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.108 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.108 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.109 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.109 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.109 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.109 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.110 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.110 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.110 239969 DEBUG nova.virt.hardware [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.113 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:08 compute-0 ceph-mon[75140]: pgmap v969: 305 pgs: 305 active+clean; 158 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.2 MiB/s wr, 148 op/s
Jan 26 15:48:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1877844874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.729 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.751 239969 DEBUG nova.storage.rbd_utils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:08 compute-0 nova_compute[239965]: 2026-01-26 15:48:08.755 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1820616443' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.337 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.338 239969 DEBUG nova.objects.instance [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f5c8de2-e88f-497c-b6f3-89bcbfd33843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.357 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <uuid>9f5c8de2-e88f-497c-b6f3-89bcbfd33843</uuid>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <name>instance-00000004</name>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <nova:name>tempest-LiveMigrationNegativeTest-server-186401403</nova:name>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:48:08</nova:creationTime>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <nova:user uuid="f454000977d04b4e9b6d5cf626dbd6fe">tempest-LiveMigrationNegativeTest-1999008782-project-member</nova:user>
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <nova:project uuid="6beda758126041608c246eb20e376115">tempest-LiveMigrationNegativeTest-1999008782</nova:project>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <system>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <entry name="serial">9f5c8de2-e88f-497c-b6f3-89bcbfd33843</entry>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <entry name="uuid">9f5c8de2-e88f-497c-b6f3-89bcbfd33843</entry>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     </system>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <os>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   </os>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <features>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   </features>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk">
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk.config">
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:09 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843/console.log" append="off"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <video>
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     </video>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:48:09 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:48:09 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:48:09 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:48:09 compute-0 nova_compute[239965]: </domain>
Jan 26 15:48:09 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.406 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.407 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.407 239969 INFO nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Using config drive
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.431 239969 DEBUG nova.storage.rbd_utils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1877844874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1820616443' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v970: 305 pgs: 305 active+clean; 182 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.642 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.689 239969 INFO nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Creating config drive at /var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843/disk.config
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.697 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpptuy0go7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.825 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpptuy0go7" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.854 239969 DEBUG nova.storage.rbd_utils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] rbd image 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.860 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843/disk.config 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.981 239969 DEBUG oslo_concurrency.processutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843/disk.config 9f5c8de2-e88f-497c-b6f3-89bcbfd33843_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:09 compute-0 nova_compute[239965]: 2026-01-26 15:48:09.982 239969 INFO nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Deleting local config drive /var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843/disk.config because it was imported into RBD.
Jan 26 15:48:10 compute-0 systemd-machined[208061]: New machine qemu-4-instance-00000004.
Jan 26 15:48:10 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.482 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442490.4815319, 9f5c8de2-e88f-497c-b6f3-89bcbfd33843 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.482 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] VM Resumed (Lifecycle Event)
Jan 26 15:48:10 compute-0 ceph-mon[75140]: pgmap v970: 305 pgs: 305 active+clean; 182 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.486 239969 DEBUG nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.486 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.489 239969 INFO nova.virt.libvirt.driver [-] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Instance spawned successfully.
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.490 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.506 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.514 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.514 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.515 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.515 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.516 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.516 239969 DEBUG nova.virt.libvirt.driver [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.519 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.567 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.568 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442490.4824164, 9f5c8de2-e88f-497c-b6f3-89bcbfd33843 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.568 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] VM Started (Lifecycle Event)
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.588 239969 INFO nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Took 3.15 seconds to spawn the instance on the hypervisor.
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.589 239969 DEBUG nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.590 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.596 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.636 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.658 239969 INFO nova.compute.manager [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Took 4.52 seconds to build instance.
Jan 26 15:48:10 compute-0 nova_compute[239965]: 2026-01-26 15:48:10.680 239969 DEBUG oslo_concurrency.lockutils [None req-2fe46237-bdfc-414f-9eb8-80f8ba822459 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "9f5c8de2-e88f-497c-b6f3-89bcbfd33843" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 192 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 139 op/s
Jan 26 15:48:11 compute-0 nova_compute[239965]: 2026-01-26 15:48:11.766 239969 DEBUG nova.objects.instance [None req-24036bf4-6ccd-4179-93b8-fb5062b710f9 28564c77a5fa4d64a8447eb95159bb3d ca4d2b63e18849f5a07ea33f505c43b5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9f5c8de2-e88f-497c-b6f3-89bcbfd33843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:11 compute-0 nova_compute[239965]: 2026-01-26 15:48:11.774 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:11 compute-0 nova_compute[239965]: 2026-01-26 15:48:11.787 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442491.787787, 9f5c8de2-e88f-497c-b6f3-89bcbfd33843 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:11 compute-0 nova_compute[239965]: 2026-01-26 15:48:11.788 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] VM Paused (Lifecycle Event)
Jan 26 15:48:11 compute-0 nova_compute[239965]: 2026-01-26 15:48:11.814 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:11 compute-0 nova_compute[239965]: 2026-01-26 15:48:11.818 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:11 compute-0 nova_compute[239965]: 2026-01-26 15:48:11.836 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 26 15:48:12 compute-0 ceph-mon[75140]: pgmap v971: 305 pgs: 305 active+clean; 192 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.8 MiB/s wr, 139 op/s
Jan 26 15:48:12 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 26 15:48:12 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1.836s CPU time.
Jan 26 15:48:12 compute-0 systemd-machined[208061]: Machine qemu-4-instance-00000004 terminated.
Jan 26 15:48:12 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Jan 26 15:48:12 compute-0 nova_compute[239965]: 2026-01-26 15:48:12.792 239969 DEBUG nova.compute.manager [None req-24036bf4-6ccd-4179-93b8-fb5062b710f9 28564c77a5fa4d64a8447eb95159bb3d ca4d2b63e18849f5a07ea33f505c43b5 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 222 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.5 MiB/s wr, 211 op/s
Jan 26 15:48:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:14.490 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:14 compute-0 ceph-mon[75140]: pgmap v972: 305 pgs: 305 active+clean; 222 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.5 MiB/s wr, 211 op/s
Jan 26 15:48:14 compute-0 nova_compute[239965]: 2026-01-26 15:48:14.643 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.518 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "9f5c8de2-e88f-497c-b6f3-89bcbfd33843" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.519 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "9f5c8de2-e88f-497c-b6f3-89bcbfd33843" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.519 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "9f5c8de2-e88f-497c-b6f3-89bcbfd33843-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.519 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "9f5c8de2-e88f-497c-b6f3-89bcbfd33843-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.520 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "9f5c8de2-e88f-497c-b6f3-89bcbfd33843-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.521 239969 INFO nova.compute.manager [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Terminating instance
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.521 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "refresh_cache-9f5c8de2-e88f-497c-b6f3-89bcbfd33843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.521 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquired lock "refresh_cache-9f5c8de2-e88f-497c-b6f3-89bcbfd33843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.522 239969 DEBUG nova.network.neutron [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:48:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 222 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 160 op/s
Jan 26 15:48:15 compute-0 nova_compute[239965]: 2026-01-26 15:48:15.686 239969 DEBUG nova.network.neutron [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.021 239969 DEBUG nova.network.neutron [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.036 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Releasing lock "refresh_cache-9f5c8de2-e88f-497c-b6f3-89bcbfd33843" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.037 239969 DEBUG nova.compute.manager [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.044 239969 INFO nova.virt.libvirt.driver [-] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Instance destroyed successfully.
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.045 239969 DEBUG nova.objects.instance [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lazy-loading 'resources' on Instance uuid 9f5c8de2-e88f-497c-b6f3-89bcbfd33843 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.058 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Acquiring lock "9fa84b9e-e104-4f48-adce-484eb5512b9d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.058 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "9fa84b9e-e104-4f48-adce-484eb5512b9d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.094 239969 DEBUG nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.461 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.461 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.468 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.469 239969 INFO nova.compute.claims [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:48:16 compute-0 ceph-mon[75140]: pgmap v973: 305 pgs: 305 active+clean; 222 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.5 MiB/s wr, 160 op/s
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.654 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.776 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.916 239969 INFO nova.virt.libvirt.driver [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Deleting instance files /var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843_del
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.917 239969 INFO nova.virt.libvirt.driver [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Deletion of /var/lib/nova/instances/9f5c8de2-e88f-497c-b6f3-89bcbfd33843_del complete
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.977 239969 INFO nova.compute.manager [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Took 0.94 seconds to destroy the instance on the hypervisor.
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.978 239969 DEBUG oslo.service.loopingcall [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.978 239969 DEBUG nova.compute.manager [-] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:48:16 compute-0 nova_compute[239965]: 2026-01-26 15:48:16.978 239969 DEBUG nova.network.neutron [-] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:48:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3101373683' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.173 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.177 239969 DEBUG nova.network.neutron [-] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.182 239969 DEBUG nova.compute.provider_tree [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.201 239969 DEBUG nova.network.neutron [-] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.203 239969 DEBUG nova.scheduler.client.report [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.245 239969 INFO nova.compute.manager [-] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Took 0.27 seconds to deallocate network for instance.
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.251 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.251 239969 DEBUG nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.320 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.320 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.333 239969 DEBUG nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.333 239969 DEBUG nova.network.neutron [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.355 239969 INFO nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:48:17 compute-0 podman[248510]: 2026-01-26 15:48:17.366241069 +0000 UTC m=+0.053229945 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.382 239969 DEBUG nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:48:17 compute-0 podman[248511]: 2026-01-26 15:48:17.394282796 +0000 UTC m=+0.081202730 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.454 239969 DEBUG oslo_concurrency.processutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.513 239969 DEBUG nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.516 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.517 239969 INFO nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Creating image(s)
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.538 239969 DEBUG nova.storage.rbd_utils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] rbd image 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.559 239969 DEBUG nova.storage.rbd_utils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] rbd image 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 231 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 230 op/s
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.591 239969 DEBUG nova.storage.rbd_utils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] rbd image 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.596 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.616 239969 DEBUG nova.network.neutron [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.616 239969 DEBUG nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:48:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3101373683' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.658 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.659 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.659 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.660 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.679 239969 DEBUG nova.storage.rbd_utils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] rbd image 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.682 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.808 239969 DEBUG oslo_concurrency.lockutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "0435a695-73ca-4cb1-9c6d-35cb1f235198" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.809 239969 DEBUG oslo_concurrency.lockutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.809 239969 DEBUG oslo_concurrency.lockutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.810 239969 DEBUG oslo_concurrency.lockutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.810 239969 DEBUG oslo_concurrency.lockutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.813 239969 INFO nova.compute.manager [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Terminating instance
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.814 239969 DEBUG nova.compute.manager [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:48:17 compute-0 kernel: tap9c973342-d1 (unregistering): left promiscuous mode
Jan 26 15:48:17 compute-0 NetworkManager[48954]: <info>  [1769442497.9376] device (tap9c973342-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:48:17 compute-0 ovn_controller[146046]: 2026-01-26T15:48:17Z|00033|binding|INFO|Releasing lport 9c973342-d1ea-4af4-83c2-3dbfe75dc817 from this chassis (sb_readonly=0)
Jan 26 15:48:17 compute-0 ovn_controller[146046]: 2026-01-26T15:48:17Z|00034|binding|INFO|Setting lport 9c973342-d1ea-4af4-83c2-3dbfe75dc817 down in Southbound
Jan 26 15:48:17 compute-0 ovn_controller[146046]: 2026-01-26T15:48:17Z|00035|binding|INFO|Removing iface tap9c973342-d1 ovn-installed in OVS
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.947 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.965 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:17.977 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:ad:5c 10.100.0.14'], port_security=['fa:16:3e:08:ad:5c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0435a695-73ca-4cb1-9c6d-35cb1f235198', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b13a653-2720-44ed-8e59-f532313cf362', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b53671bcf3e946d58ed07bc7f2934508', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa7896b6-fb15-4b71-8719-443aa677edd8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88b30d9b-ee3e-48c5-b1e4-65985eb98cc5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9c973342-d1ea-4af4-83c2-3dbfe75dc817) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:48:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:17.980 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9c973342-d1ea-4af4-83c2-3dbfe75dc817 in datapath 7b13a653-2720-44ed-8e59-f532313cf362 unbound from our chassis
Jan 26 15:48:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:17.981 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b13a653-2720-44ed-8e59-f532313cf362, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:48:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:17.982 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2f12a09d-c482-4b88-8183-6c1f0589106d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:17.983 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 namespace which is not needed anymore
Jan 26 15:48:17 compute-0 nova_compute[239965]: 2026-01-26 15:48:17.989 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:17 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 26 15:48:17 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 15.850s CPU time.
Jan 26 15:48:18 compute-0 systemd-machined[208061]: Machine qemu-2-instance-00000002 terminated.
Jan 26 15:48:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3819790553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.064 239969 DEBUG oslo_concurrency.processutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.070 239969 DEBUG nova.storage.rbd_utils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] resizing rbd image 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.097 239969 DEBUG nova.compute.provider_tree [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.100 239969 INFO nova.virt.libvirt.driver [-] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Instance destroyed successfully.
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.100 239969 DEBUG nova.objects.instance [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lazy-loading 'resources' on Instance uuid 0435a695-73ca-4cb1-9c6d-35cb1f235198 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:18 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[247883]: [NOTICE]   (247905) : haproxy version is 2.8.14-c23fe91
Jan 26 15:48:18 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[247883]: [NOTICE]   (247905) : path to executable is /usr/sbin/haproxy
Jan 26 15:48:18 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[247883]: [WARNING]  (247905) : Exiting Master process...
Jan 26 15:48:18 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[247883]: [WARNING]  (247905) : Exiting Master process...
Jan 26 15:48:18 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[247883]: [ALERT]    (247905) : Current worker (247907) exited with code 143 (Terminated)
Jan 26 15:48:18 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[247883]: [WARNING]  (247905) : All workers exited. Exiting... (0)
Jan 26 15:48:18 compute-0 systemd[1]: libpod-c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c.scope: Deactivated successfully.
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.115 239969 DEBUG nova.scheduler.client.report [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.120 239969 DEBUG nova.virt.libvirt.vif [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:47:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1497817103',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1497817103',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(23),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1497817103',id=2,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=23,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCai7GwJddzKrW3TtkSQ1mOUip/vonOPhzdMngoUZvO4bbn1qLYfpHu7UDeWVieQmn+Md1KkRtB3+wPQBXRuMgP1dKOatHXM/89LkiId8i7p8znLMMfyiJvInuKeFqzzbQ==',key_name='tempest-keypair-1575067288',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:47:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b53671bcf3e946d58ed07bc7f2934508',ramdisk_id='',reservation_id='r-0e3ar3p5',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:47:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='84de675078294b38ba608227b57f84a5',uuid=0435a695-73ca-4cb1-9c6d-35cb1f235198,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.120 239969 DEBUG nova.network.os_vif_util [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converting VIF {"id": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "address": "fa:16:3e:08:ad:5c", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c973342-d1", "ovs_interfaceid": "9c973342-d1ea-4af4-83c2-3dbfe75dc817", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.121 239969 DEBUG nova.network.os_vif_util [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:ad:5c,bridge_name='br-int',has_traffic_filtering=True,id=9c973342-d1ea-4af4-83c2-3dbfe75dc817,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c973342-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.121 239969 DEBUG os_vif [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:ad:5c,bridge_name='br-int',has_traffic_filtering=True,id=9c973342-d1ea-4af4-83c2-3dbfe75dc817,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c973342-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.123 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.123 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c973342-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:18 compute-0 podman[248737]: 2026-01-26 15:48:18.125299748 +0000 UTC m=+0.042026230 container died c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c-userdata-shm.mount: Deactivated successfully.
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.153 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe45d414c067973c923da5a702bbb31431d4d75a6580f1637bf7a3f11ca735e6-merged.mount: Deactivated successfully.
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.159 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.161 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.164 239969 DEBUG nova.objects.instance [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lazy-loading 'migration_context' on Instance uuid 9fa84b9e-e104-4f48-adce-484eb5512b9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.166 239969 INFO os_vif [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:ad:5c,bridge_name='br-int',has_traffic_filtering=True,id=9c973342-d1ea-4af4-83c2-3dbfe75dc817,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c973342-d1')
Jan 26 15:48:18 compute-0 podman[248737]: 2026-01-26 15:48:18.169157052 +0000 UTC m=+0.085883524 container cleanup c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:48:18 compute-0 systemd[1]: libpod-conmon-c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c.scope: Deactivated successfully.
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.188 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.188 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Ensure instance console log exists: /var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.189 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.189 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.189 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.190 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.192 239969 INFO nova.scheduler.client.report [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Deleted allocations for instance 9f5c8de2-e88f-497c-b6f3-89bcbfd33843
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.195 239969 WARNING nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.203 239969 DEBUG nova.virt.libvirt.host [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.204 239969 DEBUG nova.virt.libvirt.host [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.207 239969 DEBUG nova.virt.libvirt.host [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.207 239969 DEBUG nova.virt.libvirt.host [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.208 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.208 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.208 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.209 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.209 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.209 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.209 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.209 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.210 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.210 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.210 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.210 239969 DEBUG nova.virt.hardware [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.213 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:18 compute-0 podman[248808]: 2026-01-26 15:48:18.247171443 +0000 UTC m=+0.055134282 container remove c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 15:48:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:18.254 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f3dd8df7-94be-4e45-9b55-e78870f9f029]: (4, ('Mon Jan 26 03:48:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 (c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c)\nc9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c\nMon Jan 26 03:48:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 (c9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c)\nc9f5c336a3f74fce2abc93c378d59ace619ab59b1001395e91d8348abc2ee96c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:18.256 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[471e232d-daed-4e7c-bf89-ddc0fbe5d099]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:18.257 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b13a653-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.259 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:18 compute-0 kernel: tap7b13a653-20: left promiscuous mode
Jan 26 15:48:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:18.263 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[12191db2-59a1-4417-b5c4-1d3e03755971]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.275 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:18.279 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[daf3cbaa-8b07-4a8c-b7db-e412e7347e19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:18.281 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[303bd8ef-eeeb-446c-88bc-0b0338a2f15e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:18.296 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[31fe0928-fe5a-49bc-9300-5e1a1d7644ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396348, 'reachable_time': 34279, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248832, 'error': None, 'target': 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:18.307 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:48:18 compute-0 systemd[1]: run-netns-ovnmeta\x2d7b13a653\x2d2720\x2d44ed\x2d8e59\x2df532313cf362.mount: Deactivated successfully.
Jan 26 15:48:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:18.308 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[7c9c2276-f914-4bfa-b1cc-598f5569c589]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.462 239969 INFO nova.virt.libvirt.driver [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Deleting instance files /var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198_del
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.463 239969 INFO nova.virt.libvirt.driver [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Deletion of /var/lib/nova/instances/0435a695-73ca-4cb1-9c6d-35cb1f235198_del complete
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.571 239969 DEBUG oslo_concurrency.lockutils [None req-be6d4035-11a4-472d-b0f7-05b49f0d9d2b f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "9f5c8de2-e88f-497c-b6f3-89bcbfd33843" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.052s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.633 239969 INFO nova.compute.manager [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Took 0.82 seconds to destroy the instance on the hypervisor.
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.633 239969 DEBUG oslo.service.loopingcall [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.634 239969 DEBUG nova.compute.manager [-] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.634 239969 DEBUG nova.network.neutron [-] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:48:18 compute-0 ceph-mon[75140]: pgmap v974: 305 pgs: 305 active+clean; 231 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.8 MiB/s wr, 230 op/s
Jan 26 15:48:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3819790553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1764700138' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.783 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.802 239969 DEBUG nova.storage.rbd_utils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] rbd image 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.805 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.898 239969 DEBUG nova.compute.manager [req-4aab6b9b-217a-4066-975a-1365d158edc1 req-9d09fc53-48a7-4a62-913d-94c4c14fb7a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received event network-vif-unplugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.899 239969 DEBUG oslo_concurrency.lockutils [req-4aab6b9b-217a-4066-975a-1365d158edc1 req-9d09fc53-48a7-4a62-913d-94c4c14fb7a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.900 239969 DEBUG oslo_concurrency.lockutils [req-4aab6b9b-217a-4066-975a-1365d158edc1 req-9d09fc53-48a7-4a62-913d-94c4c14fb7a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.900 239969 DEBUG oslo_concurrency.lockutils [req-4aab6b9b-217a-4066-975a-1365d158edc1 req-9d09fc53-48a7-4a62-913d-94c4c14fb7a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.900 239969 DEBUG nova.compute.manager [req-4aab6b9b-217a-4066-975a-1365d158edc1 req-9d09fc53-48a7-4a62-913d-94c4c14fb7a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] No waiting events found dispatching network-vif-unplugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:48:18 compute-0 nova_compute[239965]: 2026-01-26 15:48:18.900 239969 DEBUG nova.compute.manager [req-4aab6b9b-217a-4066-975a-1365d158edc1 req-9d09fc53-48a7-4a62-913d-94c4c14fb7a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received event network-vif-unplugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:48:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4098125573' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.340 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.341 239969 DEBUG nova.objects.instance [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fa84b9e-e104-4f48-adce-484eb5512b9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.425 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <uuid>9fa84b9e-e104-4f48-adce-484eb5512b9d</uuid>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <name>instance-00000005</name>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerDiagnosticsTest-server-1948066547</nova:name>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:48:18</nova:creationTime>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <nova:user uuid="09a58dc4391c469183ef8f0afb6d538f">tempest-ServerDiagnosticsTest-575259024-project-member</nova:user>
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <nova:project uuid="d5ede54784584cc9a9242900faebdf09">tempest-ServerDiagnosticsTest-575259024</nova:project>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <system>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <entry name="serial">9fa84b9e-e104-4f48-adce-484eb5512b9d</entry>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <entry name="uuid">9fa84b9e-e104-4f48-adce-484eb5512b9d</entry>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     </system>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <os>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   </os>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <features>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   </features>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9fa84b9e-e104-4f48-adce-484eb5512b9d_disk">
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9fa84b9e-e104-4f48-adce-484eb5512b9d_disk.config">
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:19 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d/console.log" append="off"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <video>
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     </video>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:48:19 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:48:19 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:48:19 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:48:19 compute-0 nova_compute[239965]: </domain>
Jan 26 15:48:19 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.499 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.499 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.500 239969 INFO nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Using config drive
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.520 239969 DEBUG nova.storage.rbd_utils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] rbd image 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.525 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.526 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "e8e51150-05c2-4d90-b463-0a5f26c999fc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.526 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "e8e51150-05c2-4d90-b463-0a5f26c999fc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.526 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "e8e51150-05c2-4d90-b463-0a5f26c999fc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.527 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "e8e51150-05c2-4d90-b463-0a5f26c999fc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.527 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "e8e51150-05c2-4d90-b463-0a5f26c999fc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.528 239969 INFO nova.compute.manager [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Terminating instance
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.528 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "refresh_cache-e8e51150-05c2-4d90-b463-0a5f26c999fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.529 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquired lock "refresh_cache-e8e51150-05c2-4d90-b463-0a5f26c999fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.529 239969 DEBUG nova.network.neutron [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:48:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 212 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.7 MiB/s wr, 210 op/s
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.644 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1764700138' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4098125573' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.809 239969 DEBUG nova.network.neutron [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.818 239969 INFO nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Creating config drive at /var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d/disk.config
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.827 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp738xq11x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.890 239969 DEBUG nova.network.neutron [-] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.910 239969 INFO nova.compute.manager [-] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Took 1.28 seconds to deallocate network for instance.
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.953 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp738xq11x" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.974 239969 DEBUG nova.storage.rbd_utils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] rbd image 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.977 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d/disk.config 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.996 239969 DEBUG oslo_concurrency.lockutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:19 compute-0 nova_compute[239965]: 2026-01-26 15:48:19.996 239969 DEBUG oslo_concurrency.lockutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.150 239969 DEBUG oslo_concurrency.processutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.264 239969 DEBUG nova.network.neutron [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.480 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Releasing lock "refresh_cache-e8e51150-05c2-4d90-b463-0a5f26c999fc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.481 239969 DEBUG nova.compute.manager [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.518 239969 DEBUG oslo_concurrency.processutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d/disk.config 9fa84b9e-e104-4f48-adce-484eb5512b9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.519 239969 INFO nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Deleting local config drive /var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d/disk.config because it was imported into RBD.
Jan 26 15:48:20 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 26 15:48:20 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 13.080s CPU time.
Jan 26 15:48:20 compute-0 systemd-machined[208061]: Machine qemu-3-instance-00000003 terminated.
Jan 26 15:48:20 compute-0 systemd-machined[208061]: New machine qemu-5-instance-00000005.
Jan 26 15:48:20 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 26 15:48:20 compute-0 ceph-mon[75140]: pgmap v975: 305 pgs: 305 active+clean; 212 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 4.7 MiB/s wr, 210 op/s
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.709 239969 INFO nova.virt.libvirt.driver [-] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Instance destroyed successfully.
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.710 239969 DEBUG nova.objects.instance [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lazy-loading 'resources' on Instance uuid e8e51150-05c2-4d90-b463-0a5f26c999fc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3914626732' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.766 239969 DEBUG oslo_concurrency.processutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:20 compute-0 nova_compute[239965]: 2026-01-26 15:48:20.774 239969 DEBUG nova.compute.provider_tree [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.000 239969 DEBUG nova.scheduler.client.report [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.072 239969 INFO nova.virt.libvirt.driver [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Deleting instance files /var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc_del
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.073 239969 INFO nova.virt.libvirt.driver [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Deletion of /var/lib/nova/instances/e8e51150-05c2-4d90-b463-0a5f26c999fc_del complete
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.112 239969 DEBUG oslo_concurrency.lockutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.144 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442501.1439345, 9fa84b9e-e104-4f48-adce-484eb5512b9d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.144 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] VM Resumed (Lifecycle Event)
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.146 239969 DEBUG nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.147 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.149 239969 INFO nova.virt.libvirt.driver [-] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Instance spawned successfully.
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.150 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.183 239969 INFO nova.scheduler.client.report [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Deleted allocations for instance 0435a695-73ca-4cb1-9c6d-35cb1f235198
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.230 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.231 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.231 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.232 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.232 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.233 239969 DEBUG nova.virt.libvirt.driver [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.237 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.241 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.280 239969 INFO nova.compute.manager [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Took 0.80 seconds to destroy the instance on the hypervisor.
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.281 239969 DEBUG oslo.service.loopingcall [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.282 239969 DEBUG nova.compute.manager [-] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.282 239969 DEBUG nova.network.neutron [-] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.287 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.288 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442501.1460588, 9fa84b9e-e104-4f48-adce-484eb5512b9d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.288 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] VM Started (Lifecycle Event)
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.317 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.321 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.347 239969 DEBUG nova.compute.manager [req-6374a37b-c577-46a5-bed0-4745115eca2b req-bae84701-06e6-457d-9589-9c7322b80116 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received event network-vif-plugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.348 239969 DEBUG oslo_concurrency.lockutils [req-6374a37b-c577-46a5-bed0-4745115eca2b req-bae84701-06e6-457d-9589-9c7322b80116 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.348 239969 DEBUG oslo_concurrency.lockutils [req-6374a37b-c577-46a5-bed0-4745115eca2b req-bae84701-06e6-457d-9589-9c7322b80116 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.349 239969 DEBUG oslo_concurrency.lockutils [req-6374a37b-c577-46a5-bed0-4745115eca2b req-bae84701-06e6-457d-9589-9c7322b80116 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.349 239969 DEBUG nova.compute.manager [req-6374a37b-c577-46a5-bed0-4745115eca2b req-bae84701-06e6-457d-9589-9c7322b80116 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] No waiting events found dispatching network-vif-plugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.350 239969 WARNING nova.compute.manager [req-6374a37b-c577-46a5-bed0-4745115eca2b req-bae84701-06e6-457d-9589-9c7322b80116 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received unexpected event network-vif-plugged-9c973342-d1ea-4af4-83c2-3dbfe75dc817 for instance with vm_state deleted and task_state None.
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.350 239969 DEBUG nova.compute.manager [req-6374a37b-c577-46a5-bed0-4745115eca2b req-bae84701-06e6-457d-9589-9c7322b80116 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Received event network-vif-deleted-9c973342-d1ea-4af4-83c2-3dbfe75dc817 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.352 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.360 239969 INFO nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Took 3.85 seconds to spawn the instance on the hypervisor.
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.361 239969 DEBUG nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.369 239969 DEBUG oslo_concurrency.lockutils [None req-38104ce5-9afa-424b-935d-0cc8bb7ff5a5 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "0435a695-73ca-4cb1-9c6d-35cb1f235198" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.561s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.433 239969 INFO nova.compute.manager [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Took 5.01 seconds to build instance.
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.452 239969 DEBUG oslo_concurrency.lockutils [None req-8f404aa1-5baa-4554-acd2-4733025804c2 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "9fa84b9e-e104-4f48-adce-484eb5512b9d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.394s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 205 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.5 MiB/s wr, 185 op/s
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.619 239969 DEBUG nova.network.neutron [-] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.639 239969 DEBUG nova.network.neutron [-] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.652 239969 INFO nova.compute.manager [-] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Took 0.37 seconds to deallocate network for instance.
Jan 26 15:48:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3914626732' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.695 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.695 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.750 239969 DEBUG oslo_concurrency.processutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.987 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:21 compute-0 nova_compute[239965]: 2026-01-26 15:48:21.988 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.009 239969 DEBUG nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.088 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054554240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.307 239969 DEBUG nova.compute.manager [None req-623ee9ca-8b73-459a-a67c-bf6ee5f0513a 2170add31d6745adb7fae2c79e03a950 b141f34ff5614a4da7ac06fecd361f7e - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.312 239969 INFO nova.compute.manager [None req-623ee9ca-8b73-459a-a67c-bf6ee5f0513a 2170add31d6745adb7fae2c79e03a950 b141f34ff5614a4da7ac06fecd361f7e - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Retrieving diagnostics
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.329 239969 DEBUG oslo_concurrency.processutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.336 239969 DEBUG nova.compute.provider_tree [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.352 239969 DEBUG nova.scheduler.client.report [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.382 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.384 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.296s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.393 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.394 239969 INFO nova.compute.claims [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.427 239969 INFO nova.scheduler.client.report [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Deleted allocations for instance e8e51150-05c2-4d90-b463-0a5f26c999fc
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.595 239969 DEBUG oslo_concurrency.lockutils [None req-7c96e578-3d6c-4fbf-8c06-6212be16c1f9 f454000977d04b4e9b6d5cf626dbd6fe 6beda758126041608c246eb20e376115 - - default default] Lock "e8e51150-05c2-4d90-b463-0a5f26c999fc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.069s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.610 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:22 compute-0 ceph-mon[75140]: pgmap v976: 305 pgs: 305 active+clean; 205 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 4.5 MiB/s wr, 185 op/s
Jan 26 15:48:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2054554240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.707 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Acquiring lock "9fa84b9e-e104-4f48-adce-484eb5512b9d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.707 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "9fa84b9e-e104-4f48-adce-484eb5512b9d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.708 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Acquiring lock "9fa84b9e-e104-4f48-adce-484eb5512b9d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.708 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "9fa84b9e-e104-4f48-adce-484eb5512b9d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.708 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "9fa84b9e-e104-4f48-adce-484eb5512b9d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.709 239969 INFO nova.compute.manager [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Terminating instance
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.710 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Acquiring lock "refresh_cache-9fa84b9e-e104-4f48-adce-484eb5512b9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.710 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Acquired lock "refresh_cache-9fa84b9e-e104-4f48-adce-484eb5512b9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.711 239969 DEBUG nova.network.neutron [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:48:22 compute-0 nova_compute[239965]: 2026-01-26 15:48:22.891 239969 DEBUG nova.network.neutron [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401558723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.153 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.158 239969 DEBUG nova.compute.provider_tree [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.176 239969 DEBUG nova.scheduler.client.report [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.201 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.202 239969 DEBUG nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.245 239969 DEBUG nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.246 239969 DEBUG nova.network.neutron [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.277 239969 INFO nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.340 239969 DEBUG nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.464 239969 DEBUG nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.467 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.468 239969 INFO nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Creating image(s)
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.489 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.511 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.533 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.538 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 108 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.594 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.595 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.596 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.596 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.617 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.622 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1401558723' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.829 239969 DEBUG nova.network.neutron [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.889 239969 DEBUG nova.policy [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '84de675078294b38ba608227b57f84a5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b53671bcf3e946d58ed07bc7f2934508', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.893 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Releasing lock "refresh_cache-9fa84b9e-e104-4f48-adce-484eb5512b9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.893 239969 DEBUG nova.compute.manager [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:48:23 compute-0 nova_compute[239965]: 2026-01-26 15:48:23.984 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:24 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 26 15:48:24 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 3.333s CPU time.
Jan 26 15:48:24 compute-0 systemd-machined[208061]: Machine qemu-5-instance-00000005 terminated.
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.040 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] resizing rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.116 239969 DEBUG nova.objects.instance [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lazy-loading 'migration_context' on Instance uuid dd8f3e21-de0a-484c-87ee-6c918b3ffb57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.120 239969 INFO nova.virt.libvirt.driver [-] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Instance destroyed successfully.
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.120 239969 DEBUG nova.objects.instance [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lazy-loading 'resources' on Instance uuid 9fa84b9e-e104-4f48-adce-484eb5512b9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.170 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.193 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.196 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.197 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.197 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.218 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.219 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.253 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.254 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.276 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.281 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.525 239969 INFO nova.virt.libvirt.driver [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Deleting instance files /var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d_del
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.526 239969 INFO nova.virt.libvirt.driver [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Deletion of /var/lib/nova/instances/9fa84b9e-e104-4f48-adce-484eb5512b9d_del complete
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.595 239969 INFO nova.compute.manager [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Took 0.70 seconds to destroy the instance on the hypervisor.
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.596 239969 DEBUG oslo.service.loopingcall [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.596 239969 DEBUG nova.compute.manager [-] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.596 239969 DEBUG nova.network.neutron [-] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.647 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.794 239969 DEBUG nova.network.neutron [-] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.818 239969 DEBUG nova.network.neutron [-] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.832 239969 INFO nova.compute.manager [-] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Took 0.24 seconds to deallocate network for instance.
Jan 26 15:48:24 compute-0 ceph-mon[75140]: pgmap v977: 305 pgs: 305 active+clean; 108 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 283 op/s
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.881 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.882 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:24 compute-0 nova_compute[239965]: 2026-01-26 15:48:24.962 239969 DEBUG oslo_concurrency.processutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.047 239969 DEBUG nova.network.neutron [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Successfully created port: 09363911-8f3e-47bd-9fec-9005436e3369 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.184 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.902s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.261 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.261 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Ensure instance console log exists: /var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.262 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.262 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.262 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/128439493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.529 239969 DEBUG oslo_concurrency.processutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.537 239969 DEBUG nova.compute.provider_tree [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.554 239969 DEBUG nova.scheduler.client.report [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 108 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.4 MiB/s wr, 210 op/s
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.605 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.607 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.607 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.608 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.608 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.652 239969 INFO nova.scheduler.client.report [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Deleted allocations for instance 9fa84b9e-e104-4f48-adce-484eb5512b9d
Jan 26 15:48:25 compute-0 nova_compute[239965]: 2026-01-26 15:48:25.753 239969 DEBUG oslo_concurrency.lockutils [None req-72aac5a1-9f36-481d-9d69-c095938c6173 09a58dc4391c469183ef8f0afb6d538f d5ede54784584cc9a9242900faebdf09 - - default default] Lock "9fa84b9e-e104-4f48-adce-484eb5512b9d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/128439493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3176368842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.146 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.316 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.317 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4674MB free_disk=59.95595126692206GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.318 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.318 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.397 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance dd8f3e21-de0a-484c-87ee-6c918b3ffb57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.398 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.398 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.437 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.553 239969 DEBUG nova.network.neutron [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Successfully updated port: 09363911-8f3e-47bd-9fec-9005436e3369 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.582 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "refresh_cache-dd8f3e21-de0a-484c-87ee-6c918b3ffb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.582 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquired lock "refresh_cache-dd8f3e21-de0a-484c-87ee-6c918b3ffb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.582 239969 DEBUG nova.network.neutron [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.839 239969 DEBUG nova.compute.manager [req-e35d7cc6-3ea4-4668-9ece-73ab00ac184e req-52e161e5-2ec1-427a-bc33-886f74dd5a1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received event network-changed-09363911-8f3e-47bd-9fec-9005436e3369 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.839 239969 DEBUG nova.compute.manager [req-e35d7cc6-3ea4-4668-9ece-73ab00ac184e req-52e161e5-2ec1-427a-bc33-886f74dd5a1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Refreshing instance network info cache due to event network-changed-09363911-8f3e-47bd-9fec-9005436e3369. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.840 239969 DEBUG oslo_concurrency.lockutils [req-e35d7cc6-3ea4-4668-9ece-73ab00ac184e req-52e161e5-2ec1-427a-bc33-886f74dd5a1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-dd8f3e21-de0a-484c-87ee-6c918b3ffb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:48:26 compute-0 ceph-mon[75140]: pgmap v978: 305 pgs: 305 active+clean; 108 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.4 MiB/s wr, 210 op/s
Jan 26 15:48:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3176368842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.893 239969 DEBUG nova.network.neutron [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:48:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2225841546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.979 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:26 compute-0 nova_compute[239965]: 2026-01-26 15:48:26.984 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:27 compute-0 nova_compute[239965]: 2026-01-26 15:48:27.003 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:27 compute-0 nova_compute[239965]: 2026-01-26 15:48:27.029 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:48:27 compute-0 nova_compute[239965]: 2026-01-26 15:48:27.030 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 84 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.8 MiB/s wr, 287 op/s
Jan 26 15:48:27 compute-0 nova_compute[239965]: 2026-01-26 15:48:27.794 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442492.792416, 9f5c8de2-e88f-497c-b6f3-89bcbfd33843 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:27 compute-0 nova_compute[239965]: 2026-01-26 15:48:27.795 239969 INFO nova.compute.manager [-] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] VM Stopped (Lifecycle Event)
Jan 26 15:48:27 compute-0 nova_compute[239965]: 2026-01-26 15:48:27.822 239969 DEBUG nova.compute.manager [None req-e07db2a8-30e1-454d-acd5-4eee15bed460 - - - - - -] [instance: 9f5c8de2-e88f-497c-b6f3-89bcbfd33843] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2225841546' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.030 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.031 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.031 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.031 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.053 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.053 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.053 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.129 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:48:28
Jan 26 15:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'default.rgw.control']
Jan 26 15:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.817 239969 DEBUG nova.network.neutron [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Updating instance_info_cache with network_info: [{"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:28 compute-0 ceph-mon[75140]: pgmap v979: 305 pgs: 305 active+clean; 84 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.8 MiB/s wr, 287 op/s
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.898 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Releasing lock "refresh_cache-dd8f3e21-de0a-484c-87ee-6c918b3ffb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.898 239969 DEBUG nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Instance network_info: |[{"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.898 239969 DEBUG oslo_concurrency.lockutils [req-e35d7cc6-3ea4-4668-9ece-73ab00ac184e req-52e161e5-2ec1-427a-bc33-886f74dd5a1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-dd8f3e21-de0a-484c-87ee-6c918b3ffb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.898 239969 DEBUG nova.network.neutron [req-e35d7cc6-3ea4-4668-9ece-73ab00ac184e req-52e161e5-2ec1-427a-bc33-886f74dd5a1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Refreshing network info cache for port 09363911-8f3e-47bd-9fec-9005436e3369 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.901 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Start _get_guest_xml network_info=[{"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vdb', 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'guest_format': None, 'size': 1, 'disk_bus': 'virtio'}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.906 239969 WARNING nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.911 239969 DEBUG nova.virt.libvirt.host [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.911 239969 DEBUG nova.virt.libvirt.host [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.919 239969 DEBUG nova.virt.libvirt.host [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.919 239969 DEBUG nova.virt.libvirt.host [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.920 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.920 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:47:27Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='1768148182',id=22,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-202363406',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.921 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.921 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.922 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.922 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.922 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.923 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.923 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.924 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.924 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.924 239969 DEBUG nova.virt.hardware [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:48:28 compute-0 nova_compute[239965]: 2026-01-26 15:48:28.928 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:29 compute-0 sudo[249504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:48:29 compute-0 sudo[249504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:48:29 compute-0 sudo[249504]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1684647428' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:29 compute-0 nova_compute[239965]: 2026-01-26 15:48:29.492 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:29 compute-0 nova_compute[239965]: 2026-01-26 15:48:29.493 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:29 compute-0 sudo[249529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:48:29 compute-0 sudo[249529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:48:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 90 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 244 op/s
Jan 26 15:48:29 compute-0 nova_compute[239965]: 2026-01-26 15:48:29.648 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1684647428' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:30 compute-0 sudo[249529]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/123160906' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:48:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:48:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:48:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:48:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:48:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:48:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:48:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.090 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.117 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.123 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:30 compute-0 sudo[249608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:48:30 compute-0 sudo[249608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:48:30 compute-0 sudo[249608]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:30 compute-0 sudo[249652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:48:30 compute-0 sudo[249652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:48:30 compute-0 podman[249708]: 2026-01-26 15:48:30.484562963 +0000 UTC m=+0.040456332 container create 0f8aaade52e48510eb06ec18fe3d0235ea04db03569f9c6295a48c7c0cb62bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:48:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:48:30 compute-0 systemd[1]: Started libpod-conmon-0f8aaade52e48510eb06ec18fe3d0235ea04db03569f9c6295a48c7c0cb62bc8.scope.
Jan 26 15:48:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:48:30 compute-0 podman[249708]: 2026-01-26 15:48:30.464433459 +0000 UTC m=+0.020326828 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:48:30 compute-0 podman[249708]: 2026-01-26 15:48:30.569302188 +0000 UTC m=+0.125195587 container init 0f8aaade52e48510eb06ec18fe3d0235ea04db03569f9c6295a48c7c0cb62bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:48:30 compute-0 podman[249708]: 2026-01-26 15:48:30.577526909 +0000 UTC m=+0.133420288 container start 0f8aaade52e48510eb06ec18fe3d0235ea04db03569f9c6295a48c7c0cb62bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 15:48:30 compute-0 podman[249708]: 2026-01-26 15:48:30.58079428 +0000 UTC m=+0.136687669 container attach 0f8aaade52e48510eb06ec18fe3d0235ea04db03569f9c6295a48c7c0cb62bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 15:48:30 compute-0 suspicious_lewin[249724]: 167 167
Jan 26 15:48:30 compute-0 systemd[1]: libpod-0f8aaade52e48510eb06ec18fe3d0235ea04db03569f9c6295a48c7c0cb62bc8.scope: Deactivated successfully.
Jan 26 15:48:30 compute-0 podman[249708]: 2026-01-26 15:48:30.584393737 +0000 UTC m=+0.140287106 container died 0f8aaade52e48510eb06ec18fe3d0235ea04db03569f9c6295a48c7c0cb62bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3621f12029fd0678eec9c950f19e4439d148e4acf992c3db53ef104ea4ac4002-merged.mount: Deactivated successfully.
Jan 26 15:48:30 compute-0 podman[249708]: 2026-01-26 15:48:30.620966023 +0000 UTC m=+0.176859392 container remove 0f8aaade52e48510eb06ec18fe3d0235ea04db03569f9c6295a48c7c0cb62bc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_lewin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 15:48:30 compute-0 systemd[1]: libpod-conmon-0f8aaade52e48510eb06ec18fe3d0235ea04db03569f9c6295a48c7c0cb62bc8.scope: Deactivated successfully.
Jan 26 15:48:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3824008939' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.698 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.701 239969 DEBUG nova.virt.libvirt.vif [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:48:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1834795809',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1834795809',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(22),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1834795809',id=6,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=22,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCai7GwJddzKrW3TtkSQ1mOUip/vonOPhzdMngoUZvO4bbn1qLYfpHu7UDeWVieQmn+Md1KkRtB3+wPQBXRuMgP1dKOatHXM/89LkiId8i7p8znLMMfyiJvInuKeFqzzbQ==',key_name='tempest-keypair-1575067288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b53671bcf3e946d58ed07bc7f2934508',ramdisk_id='',reservation_id='r-803g32dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:48:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='84de675078294b38ba608227b57f84a5',uuid=dd8f3e21-de0a-484c-87ee-6c918b3ffb57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.701 239969 DEBUG nova.network.os_vif_util [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converting VIF {"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.702 239969 DEBUG nova.network.os_vif_util [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:b3,bridge_name='br-int',has_traffic_filtering=True,id=09363911-8f3e-47bd-9fec-9005436e3369,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09363911-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.703 239969 DEBUG nova.objects.instance [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd8f3e21-de0a-484c-87ee-6c918b3ffb57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.732 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <uuid>dd8f3e21-de0a-484c-87ee-6c918b3ffb57</uuid>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <name>instance-00000006</name>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1834795809</nova:name>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:48:28</nova:creationTime>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <nova:flavor name="tempest-flavor_with_ephemeral_1-202363406">
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <nova:ephemeral>1</nova:ephemeral>
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <nova:user uuid="84de675078294b38ba608227b57f84a5">tempest-ServersWithSpecificFlavorTestJSON-1775060788-project-member</nova:user>
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <nova:project uuid="b53671bcf3e946d58ed07bc7f2934508">tempest-ServersWithSpecificFlavorTestJSON-1775060788</nova:project>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <nova:port uuid="09363911-8f3e-47bd-9fec-9005436e3369">
Jan 26 15:48:30 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <system>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <entry name="serial">dd8f3e21-de0a-484c-87ee-6c918b3ffb57</entry>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <entry name="uuid">dd8f3e21-de0a-484c-87ee-6c918b3ffb57</entry>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </system>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <os>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   </os>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <features>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   </features>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk">
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.eph0">
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <target dev="vdb" bus="virtio"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.config">
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:99:f6:b3"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <target dev="tap09363911-8f"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57/console.log" append="off"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <video>
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </video>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:48:30 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:48:30 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:48:30 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:48:30 compute-0 nova_compute[239965]: </domain>
Jan 26 15:48:30 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.735 239969 DEBUG nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Preparing to wait for external event network-vif-plugged-09363911-8f3e-47bd-9fec-9005436e3369 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.735 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.736 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.736 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.737 239969 DEBUG nova.virt.libvirt.vif [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:48:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1834795809',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1834795809',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(22),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1834795809',id=6,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=22,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCai7GwJddzKrW3TtkSQ1mOUip/vonOPhzdMngoUZvO4bbn1qLYfpHu7UDeWVieQmn+Md1KkRtB3+wPQBXRuMgP1dKOatHXM/89LkiId8i7p8znLMMfyiJvInuKeFqzzbQ==',key_name='tempest-keypair-1575067288',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b53671bcf3e946d58ed07bc7f2934508',ramdisk_id='',reservation_id='r-803g32dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:48:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='84de675078294b38ba608227b57f84a5',uuid=dd8f3e21-de0a-484c-87ee-6c918b3ffb57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.737 239969 DEBUG nova.network.os_vif_util [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converting VIF {"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.737 239969 DEBUG nova.network.os_vif_util [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:b3,bridge_name='br-int',has_traffic_filtering=True,id=09363911-8f3e-47bd-9fec-9005436e3369,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09363911-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.738 239969 DEBUG os_vif [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:b3,bridge_name='br-int',has_traffic_filtering=True,id=09363911-8f3e-47bd-9fec-9005436e3369,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09363911-8f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.739 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.739 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.742 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09363911-8f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.743 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09363911-8f, col_values=(('external_ids', {'iface-id': '09363911-8f3e-47bd-9fec-9005436e3369', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:f6:b3', 'vm-uuid': 'dd8f3e21-de0a-484c-87ee-6c918b3ffb57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.744 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:30 compute-0 NetworkManager[48954]: <info>  [1769442510.7453] manager: (tap09363911-8f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.747 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.753 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.753 239969 INFO os_vif [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:b3,bridge_name='br-int',has_traffic_filtering=True,id=09363911-8f3e-47bd-9fec-9005436e3369,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09363911-8f')
Jan 26 15:48:30 compute-0 podman[249749]: 2026-01-26 15:48:30.778954802 +0000 UTC m=+0.042578213 container create d670f1914880afe8afd43181bbbd5b089e50b058bf950c501be583ca7065a9ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.797 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.798 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.799 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.799 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] No VIF found with MAC fa:16:3e:99:f6:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.799 239969 INFO nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Using config drive
Jan 26 15:48:30 compute-0 systemd[1]: Started libpod-conmon-d670f1914880afe8afd43181bbbd5b089e50b058bf950c501be583ca7065a9ee.scope.
Jan 26 15:48:30 compute-0 nova_compute[239965]: 2026-01-26 15:48:30.821 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1032cc4bbc6114e06c9f80e4a62423f9cccfe33ffd7898863ee5b6f15737ea62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1032cc4bbc6114e06c9f80e4a62423f9cccfe33ffd7898863ee5b6f15737ea62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1032cc4bbc6114e06c9f80e4a62423f9cccfe33ffd7898863ee5b6f15737ea62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1032cc4bbc6114e06c9f80e4a62423f9cccfe33ffd7898863ee5b6f15737ea62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1032cc4bbc6114e06c9f80e4a62423f9cccfe33ffd7898863ee5b6f15737ea62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:30 compute-0 podman[249749]: 2026-01-26 15:48:30.856153183 +0000 UTC m=+0.119776614 container init d670f1914880afe8afd43181bbbd5b089e50b058bf950c501be583ca7065a9ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 15:48:30 compute-0 podman[249749]: 2026-01-26 15:48:30.763955035 +0000 UTC m=+0.027578476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:48:30 compute-0 podman[249749]: 2026-01-26 15:48:30.865252036 +0000 UTC m=+0.128875437 container start d670f1914880afe8afd43181bbbd5b089e50b058bf950c501be583ca7065a9ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 15:48:30 compute-0 podman[249749]: 2026-01-26 15:48:30.868856224 +0000 UTC m=+0.132479635 container attach d670f1914880afe8afd43181bbbd5b089e50b058bf950c501be583ca7065a9ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goldstine, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 15:48:30 compute-0 ceph-mon[75140]: pgmap v980: 305 pgs: 305 active+clean; 90 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 244 op/s
Jan 26 15:48:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/123160906' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:48:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:48:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3824008939' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:31 compute-0 cool_goldstine[249782]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:48:31 compute-0 cool_goldstine[249782]: --> All data devices are unavailable
Jan 26 15:48:31 compute-0 systemd[1]: libpod-d670f1914880afe8afd43181bbbd5b089e50b058bf950c501be583ca7065a9ee.scope: Deactivated successfully.
Jan 26 15:48:31 compute-0 podman[249749]: 2026-01-26 15:48:31.385483746 +0000 UTC m=+0.649107187 container died d670f1914880afe8afd43181bbbd5b089e50b058bf950c501be583ca7065a9ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goldstine, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.409 239969 INFO nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Creating config drive at /var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57/disk.config
Jan 26 15:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-1032cc4bbc6114e06c9f80e4a62423f9cccfe33ffd7898863ee5b6f15737ea62-merged.mount: Deactivated successfully.
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.418 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa3wduu9t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:31 compute-0 podman[249749]: 2026-01-26 15:48:31.429938385 +0000 UTC m=+0.693561816 container remove d670f1914880afe8afd43181bbbd5b089e50b058bf950c501be583ca7065a9ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.439 239969 DEBUG nova.network.neutron [req-e35d7cc6-3ea4-4668-9ece-73ab00ac184e req-52e161e5-2ec1-427a-bc33-886f74dd5a1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Updated VIF entry in instance network info cache for port 09363911-8f3e-47bd-9fec-9005436e3369. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.440 239969 DEBUG nova.network.neutron [req-e35d7cc6-3ea4-4668-9ece-73ab00ac184e req-52e161e5-2ec1-427a-bc33-886f74dd5a1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Updating instance_info_cache with network_info: [{"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:31 compute-0 systemd[1]: libpod-conmon-d670f1914880afe8afd43181bbbd5b089e50b058bf950c501be583ca7065a9ee.scope: Deactivated successfully.
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.460 239969 DEBUG oslo_concurrency.lockutils [req-e35d7cc6-3ea4-4668-9ece-73ab00ac184e req-52e161e5-2ec1-427a-bc33-886f74dd5a1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-dd8f3e21-de0a-484c-87ee-6c918b3ffb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:48:31 compute-0 sudo[249652]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:31 compute-0 sudo[249820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:48:31 compute-0 sudo[249820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:48:31 compute-0 sudo[249820]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.550 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa3wduu9t" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.577 239969 DEBUG nova.storage.rbd_utils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] rbd image dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 90 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 212 op/s
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.582 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57/disk.config dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:31 compute-0 sudo[249845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:48:31 compute-0 sudo[249845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.703 239969 DEBUG oslo_concurrency.processutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57/disk.config dd8f3e21-de0a-484c-87ee-6c918b3ffb57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.704 239969 INFO nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Deleting local config drive /var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57/disk.config because it was imported into RBD.
Jan 26 15:48:31 compute-0 kernel: tap09363911-8f: entered promiscuous mode
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.767 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:31 compute-0 ovn_controller[146046]: 2026-01-26T15:48:31Z|00036|binding|INFO|Claiming lport 09363911-8f3e-47bd-9fec-9005436e3369 for this chassis.
Jan 26 15:48:31 compute-0 ovn_controller[146046]: 2026-01-26T15:48:31Z|00037|binding|INFO|09363911-8f3e-47bd-9fec-9005436e3369: Claiming fa:16:3e:99:f6:b3 10.100.0.8
Jan 26 15:48:31 compute-0 NetworkManager[48954]: <info>  [1769442511.7731] manager: (tap09363911-8f): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.775 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:f6:b3 10.100.0.8'], port_security=['fa:16:3e:99:f6:b3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dd8f3e21-de0a-484c-87ee-6c918b3ffb57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b13a653-2720-44ed-8e59-f532313cf362', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b53671bcf3e946d58ed07bc7f2934508', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'aa7896b6-fb15-4b71-8719-443aa677edd8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88b30d9b-ee3e-48c5-b1e4-65985eb98cc5, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=09363911-8f3e-47bd-9fec-9005436e3369) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.776 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 09363911-8f3e-47bd-9fec-9005436e3369 in datapath 7b13a653-2720-44ed-8e59-f532313cf362 bound to our chassis
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.777 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7b13a653-2720-44ed-8e59-f532313cf362
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.791 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b2608022-0d37-49cd-9847-ed7a8cdc630c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.792 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7b13a653-21 in ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.794 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7b13a653-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.794 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2d4ee9-db38-4c6b-8324-fc2a44d7e553]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.795 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8228ebd5-c6cb-44a0-bc82-981f162ea9d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 ovn_controller[146046]: 2026-01-26T15:48:31Z|00038|binding|INFO|Setting lport 09363911-8f3e-47bd-9fec-9005436e3369 ovn-installed in OVS
Jan 26 15:48:31 compute-0 ovn_controller[146046]: 2026-01-26T15:48:31Z|00039|binding|INFO|Setting lport 09363911-8f3e-47bd-9fec-9005436e3369 up in Southbound
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.795 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:31 compute-0 nova_compute[239965]: 2026-01-26 15:48:31.802 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.809 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a55f16f9-ece2-42d1-b61f-0801f00bbf68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 systemd-machined[208061]: New machine qemu-6-instance-00000006.
Jan 26 15:48:31 compute-0 systemd-udevd[249923]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:48:31 compute-0 NetworkManager[48954]: <info>  [1769442511.8268] device (tap09363911-8f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:48:31 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 26 15:48:31 compute-0 NetworkManager[48954]: <info>  [1769442511.8277] device (tap09363911-8f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.832 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6d9a3441-e54f-49eb-9358-72e4062e7bcb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.856 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0cacdb59-f7b9-4380-9f58-e375457e32f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 NetworkManager[48954]: <info>  [1769442511.8635] manager: (tap7b13a653-20): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.863 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6c331240-25d2-43e0-9293-09df9366274f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.894 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9230a2-e94a-45ca-b163-670f8e538ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.897 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c9513f0a-e9bf-4bd5-a4c6-26f4dd129911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 podman[249939]: 2026-01-26 15:48:31.915089596 +0000 UTC m=+0.042831570 container create 37c060de53f6504d2ad0653a0cc8ecab86208759f628c9840fc1f2f46baf92ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_visvesvaraya, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 15:48:31 compute-0 NetworkManager[48954]: <info>  [1769442511.9229] device (tap7b13a653-20): carrier: link connected
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.927 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[14f2c067-8911-420e-a161-eb259808a276]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.942 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[92b6bc9c-b295-4e1a-b237-c40a78484383]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b13a653-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:f1:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400272, 'reachable_time': 40358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249975, 'error': None, 'target': 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.956 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab45ec8-90ce-4db6-92d8-6f719932ce58]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:f1fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400272, 'tstamp': 400272}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249980, 'error': None, 'target': 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 systemd[1]: Started libpod-conmon-37c060de53f6504d2ad0653a0cc8ecab86208759f628c9840fc1f2f46baf92ed.scope.
Jan 26 15:48:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:31.972 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b90e330-27ca-40d9-8fa8-c0f576e9a8f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7b13a653-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:f1:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400272, 'reachable_time': 40358, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249982, 'error': None, 'target': 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:48:31 compute-0 podman[249939]: 2026-01-26 15:48:31.897686519 +0000 UTC m=+0.025428513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:48:32 compute-0 podman[249939]: 2026-01-26 15:48:32.003786568 +0000 UTC m=+0.131528562 container init 37c060de53f6504d2ad0653a0cc8ecab86208759f628c9840fc1f2f46baf92ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.004 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa06962-345b-41d5-b828-a1fd0cd509a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:32 compute-0 podman[249939]: 2026-01-26 15:48:32.012836359 +0000 UTC m=+0.140578353 container start 37c060de53f6504d2ad0653a0cc8ecab86208759f628c9840fc1f2f46baf92ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_visvesvaraya, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 26 15:48:32 compute-0 relaxed_visvesvaraya[249981]: 167 167
Jan 26 15:48:32 compute-0 systemd[1]: libpod-37c060de53f6504d2ad0653a0cc8ecab86208759f628c9840fc1f2f46baf92ed.scope: Deactivated successfully.
Jan 26 15:48:32 compute-0 podman[249939]: 2026-01-26 15:48:32.019866011 +0000 UTC m=+0.147608005 container attach 37c060de53f6504d2ad0653a0cc8ecab86208759f628c9840fc1f2f46baf92ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_visvesvaraya, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 15:48:32 compute-0 podman[249939]: 2026-01-26 15:48:32.02020954 +0000 UTC m=+0.147951514 container died 37c060de53f6504d2ad0653a0cc8ecab86208759f628c9840fc1f2f46baf92ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_visvesvaraya, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5c71dd2ec76ffa14fe6e9e620eaee46f047bac0ab53fb11c27605526f4e0680-merged.mount: Deactivated successfully.
Jan 26 15:48:32 compute-0 podman[249939]: 2026-01-26 15:48:32.058928608 +0000 UTC m=+0.186670602 container remove 37c060de53f6504d2ad0653a0cc8ecab86208759f628c9840fc1f2f46baf92ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.071 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[73fc74f0-ffdd-4dfa-83cf-13f85c7dc265]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.082 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b13a653-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.082 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.082 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b13a653-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:32 compute-0 systemd[1]: libpod-conmon-37c060de53f6504d2ad0653a0cc8ecab86208759f628c9840fc1f2f46baf92ed.scope: Deactivated successfully.
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.084 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:32 compute-0 NetworkManager[48954]: <info>  [1769442512.0848] manager: (tap7b13a653-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Jan 26 15:48:32 compute-0 kernel: tap7b13a653-20: entered promiscuous mode
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.087 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.089 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7b13a653-20, col_values=(('external_ids', {'iface-id': '89ce28e3-fbf3-4c2d-9139-ce3f45412c6f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.090 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:32 compute-0 ovn_controller[146046]: 2026-01-26T15:48:32Z|00040|binding|INFO|Releasing lport 89ce28e3-fbf3-4c2d-9139-ce3f45412c6f from this chassis (sb_readonly=0)
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.111 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.112 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7b13a653-2720-44ed-8e59-f532313cf362.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7b13a653-2720-44ed-8e59-f532313cf362.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.112 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b7fe17-7641-4f07-9f3e-c7c0a66aa742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.113 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-7b13a653-2720-44ed-8e59-f532313cf362
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/7b13a653-2720-44ed-8e59-f532313cf362.pid.haproxy
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 7b13a653-2720-44ed-8e59-f532313cf362
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:48:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:32.114 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'env', 'PROCESS_TAG=haproxy-7b13a653-2720-44ed-8e59-f532313cf362', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7b13a653-2720-44ed-8e59-f532313cf362.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:48:32 compute-0 podman[250013]: 2026-01-26 15:48:32.224071392 +0000 UTC m=+0.038197816 container create 83eb9d7bf8e1599271007ada666d6471d9810da05a10ea43e60a1f084be63fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mestorf, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 15:48:32 compute-0 systemd[1]: Started libpod-conmon-83eb9d7bf8e1599271007ada666d6471d9810da05a10ea43e60a1f084be63fbd.scope.
Jan 26 15:48:32 compute-0 podman[250013]: 2026-01-26 15:48:32.207697941 +0000 UTC m=+0.021824385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:48:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19fec9c53593a894c4361484f8c08a678eaf59aeeaeb27a94fedead5e9646968/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19fec9c53593a894c4361484f8c08a678eaf59aeeaeb27a94fedead5e9646968/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19fec9c53593a894c4361484f8c08a678eaf59aeeaeb27a94fedead5e9646968/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19fec9c53593a894c4361484f8c08a678eaf59aeeaeb27a94fedead5e9646968/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:32 compute-0 podman[250013]: 2026-01-26 15:48:32.336019484 +0000 UTC m=+0.150145958 container init 83eb9d7bf8e1599271007ada666d6471d9810da05a10ea43e60a1f084be63fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mestorf, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 15:48:32 compute-0 podman[250013]: 2026-01-26 15:48:32.348737765 +0000 UTC m=+0.162864199 container start 83eb9d7bf8e1599271007ada666d6471d9810da05a10ea43e60a1f084be63fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:48:32 compute-0 podman[250013]: 2026-01-26 15:48:32.352474447 +0000 UTC m=+0.166600891 container attach 83eb9d7bf8e1599271007ada666d6471d9810da05a10ea43e60a1f084be63fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:48:32 compute-0 podman[250067]: 2026-01-26 15:48:32.507050262 +0000 UTC m=+0.056437663 container create 4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:48:32 compute-0 systemd[1]: Started libpod-conmon-4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219.scope.
Jan 26 15:48:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:48:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad92b620d9d2a144795802c4b265e09351db2bd2a6043e91442017274227fb3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:32 compute-0 podman[250067]: 2026-01-26 15:48:32.566478108 +0000 UTC m=+0.115865529 container init 4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 15:48:32 compute-0 podman[250067]: 2026-01-26 15:48:32.473588113 +0000 UTC m=+0.022975574 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:48:32 compute-0 podman[250067]: 2026-01-26 15:48:32.572644179 +0000 UTC m=+0.122031580 container start 4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.584 239969 DEBUG nova.compute.manager [req-29aa315b-f547-48f7-bd2f-5a8b51b27008 req-af2fcfda-70e9-49e7-af73-be606589eaaf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received event network-vif-plugged-09363911-8f3e-47bd-9fec-9005436e3369 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.586 239969 DEBUG oslo_concurrency.lockutils [req-29aa315b-f547-48f7-bd2f-5a8b51b27008 req-af2fcfda-70e9-49e7-af73-be606589eaaf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.587 239969 DEBUG oslo_concurrency.lockutils [req-29aa315b-f547-48f7-bd2f-5a8b51b27008 req-af2fcfda-70e9-49e7-af73-be606589eaaf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.587 239969 DEBUG oslo_concurrency.lockutils [req-29aa315b-f547-48f7-bd2f-5a8b51b27008 req-af2fcfda-70e9-49e7-af73-be606589eaaf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.588 239969 DEBUG nova.compute.manager [req-29aa315b-f547-48f7-bd2f-5a8b51b27008 req-af2fcfda-70e9-49e7-af73-be606589eaaf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Processing event network-vif-plugged-09363911-8f3e-47bd-9fec-9005436e3369 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:48:32 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[250125]: [NOTICE]   (250135) : New worker (250139) forked
Jan 26 15:48:32 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[250125]: [NOTICE]   (250135) : Loading success.
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]: {
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:     "0": [
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:         {
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "devices": [
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "/dev/loop3"
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             ],
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_name": "ceph_lv0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_size": "21470642176",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "name": "ceph_lv0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "tags": {
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.cluster_name": "ceph",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.crush_device_class": "",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.encrypted": "0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.objectstore": "bluestore",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.osd_id": "0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.type": "block",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.vdo": "0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.with_tpm": "0"
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             },
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "type": "block",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "vg_name": "ceph_vg0"
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:         }
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:     ],
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:     "1": [
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:         {
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "devices": [
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "/dev/loop4"
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             ],
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_name": "ceph_lv1",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_size": "21470642176",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "name": "ceph_lv1",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "tags": {
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.650 239969 DEBUG nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.cluster_name": "ceph",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.crush_device_class": "",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.encrypted": "0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.objectstore": "bluestore",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.osd_id": "1",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.type": "block",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.vdo": "0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.with_tpm": "0"
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             },
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "type": "block",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "vg_name": "ceph_vg1"
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:         }
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:     ],
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:     "2": [
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:         {
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "devices": [
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "/dev/loop5"
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             ],
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_name": "ceph_lv2",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_size": "21470642176",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "name": "ceph_lv2",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "tags": {
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.cluster_name": "ceph",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.crush_device_class": "",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.encrypted": "0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.objectstore": "bluestore",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.osd_id": "2",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.type": "block",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.vdo": "0",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:                 "ceph.with_tpm": "0"
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             },
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "type": "block",
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:             "vg_name": "ceph_vg2"
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:         }
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]:     ]
Jan 26 15:48:32 compute-0 suspicious_mestorf[250029]: }
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.652 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442512.649911, dd8f3e21-de0a-484c-87ee-6c918b3ffb57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.653 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] VM Started (Lifecycle Event)
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.656 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.659 239969 INFO nova.virt.libvirt.driver [-] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Instance spawned successfully.
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.659 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:48:32 compute-0 systemd[1]: libpod-83eb9d7bf8e1599271007ada666d6471d9810da05a10ea43e60a1f084be63fbd.scope: Deactivated successfully.
Jan 26 15:48:32 compute-0 podman[250013]: 2026-01-26 15:48:32.676060882 +0000 UTC m=+0.490187316 container died 83eb9d7bf8e1599271007ada666d6471d9810da05a10ea43e60a1f084be63fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mestorf, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.677 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.680 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.681 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.681 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.682 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.682 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.683 239969 DEBUG nova.virt.libvirt.driver [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.687 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.728 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.729 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442512.6501684, dd8f3e21-de0a-484c-87ee-6c918b3ffb57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.730 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] VM Paused (Lifecycle Event)
Jan 26 15:48:32 compute-0 podman[250013]: 2026-01-26 15:48:32.73314365 +0000 UTC m=+0.547270074 container remove 83eb9d7bf8e1599271007ada666d6471d9810da05a10ea43e60a1f084be63fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_mestorf, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 15:48:32 compute-0 systemd[1]: libpod-conmon-83eb9d7bf8e1599271007ada666d6471d9810da05a10ea43e60a1f084be63fbd.scope: Deactivated successfully.
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.754 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.761 239969 INFO nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Took 9.30 seconds to spawn the instance on the hypervisor.
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.764 239969 DEBUG nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.771 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442512.6561403, dd8f3e21-de0a-484c-87ee-6c918b3ffb57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.771 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] VM Resumed (Lifecycle Event)
Jan 26 15:48:32 compute-0 sudo[249845]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-19fec9c53593a894c4361484f8c08a678eaf59aeeaeb27a94fedead5e9646968-merged.mount: Deactivated successfully.
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.810 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.813 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.839 239969 INFO nova.compute.manager [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Took 10.77 seconds to build instance.
Jan 26 15:48:32 compute-0 sudo[250162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:48:32 compute-0 sudo[250162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:48:32 compute-0 nova_compute[239965]: 2026-01-26 15:48:32.856 239969 DEBUG oslo_concurrency.lockutils [None req-6ffe9265-4773-476c-94b7-9fa5199abc06 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:32 compute-0 sudo[250162]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:32 compute-0 ceph-mon[75140]: pgmap v981: 305 pgs: 305 active+clean; 90 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.5 MiB/s wr, 212 op/s
Jan 26 15:48:32 compute-0 sudo[250187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:48:32 compute-0 sudo[250187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:48:32 compute-0 sshd-session[250031]: Connection closed by 3.137.73.221 port 60746
Jan 26 15:48:33 compute-0 nova_compute[239965]: 2026-01-26 15:48:33.066 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442498.045901, 0435a695-73ca-4cb1-9c6d-35cb1f235198 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:33 compute-0 nova_compute[239965]: 2026-01-26 15:48:33.066 239969 INFO nova.compute.manager [-] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] VM Stopped (Lifecycle Event)
Jan 26 15:48:33 compute-0 nova_compute[239965]: 2026-01-26 15:48:33.090 239969 DEBUG nova.compute.manager [None req-ed0b6e16-33e6-4439-a591-072e9d96a29d - - - - - -] [instance: 0435a695-73ca-4cb1-9c6d-35cb1f235198] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:33 compute-0 podman[250225]: 2026-01-26 15:48:33.20276846 +0000 UTC m=+0.040315758 container create f97f7271a19fb634cc62266ef7b736fdbaceaefd36656f7ea600e49d7424375b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:48:33 compute-0 systemd[1]: Started libpod-conmon-f97f7271a19fb634cc62266ef7b736fdbaceaefd36656f7ea600e49d7424375b.scope.
Jan 26 15:48:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:48:33 compute-0 podman[250225]: 2026-01-26 15:48:33.279738215 +0000 UTC m=+0.117285533 container init f97f7271a19fb634cc62266ef7b736fdbaceaefd36656f7ea600e49d7424375b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:48:33 compute-0 podman[250225]: 2026-01-26 15:48:33.18481545 +0000 UTC m=+0.022362778 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:48:33 compute-0 podman[250225]: 2026-01-26 15:48:33.286248995 +0000 UTC m=+0.123796293 container start f97f7271a19fb634cc62266ef7b736fdbaceaefd36656f7ea600e49d7424375b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 15:48:33 compute-0 dazzling_zhukovsky[250241]: 167 167
Jan 26 15:48:33 compute-0 systemd[1]: libpod-f97f7271a19fb634cc62266ef7b736fdbaceaefd36656f7ea600e49d7424375b.scope: Deactivated successfully.
Jan 26 15:48:33 compute-0 podman[250225]: 2026-01-26 15:48:33.292665331 +0000 UTC m=+0.130212659 container attach f97f7271a19fb634cc62266ef7b736fdbaceaefd36656f7ea600e49d7424375b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_zhukovsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 26 15:48:33 compute-0 podman[250225]: 2026-01-26 15:48:33.293097473 +0000 UTC m=+0.130644771 container died f97f7271a19fb634cc62266ef7b736fdbaceaefd36656f7ea600e49d7424375b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 15:48:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-6579f3486170ab3cbd4a47b882499c97eb66bbda321ebffb92ea3e63442c1e61-merged.mount: Deactivated successfully.
Jan 26 15:48:33 compute-0 podman[250225]: 2026-01-26 15:48:33.334196859 +0000 UTC m=+0.171744167 container remove f97f7271a19fb634cc62266ef7b736fdbaceaefd36656f7ea600e49d7424375b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:48:33 compute-0 systemd[1]: libpod-conmon-f97f7271a19fb634cc62266ef7b736fdbaceaefd36656f7ea600e49d7424375b.scope: Deactivated successfully.
Jan 26 15:48:33 compute-0 podman[250265]: 2026-01-26 15:48:33.537544209 +0000 UTC m=+0.036184888 container create 9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 15:48:33 compute-0 systemd[1]: Started libpod-conmon-9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114.scope.
Jan 26 15:48:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 90 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 205 op/s
Jan 26 15:48:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39979537a7af3611fa4b704bcab4f272a075ccea9a079900d132d71833a50bc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39979537a7af3611fa4b704bcab4f272a075ccea9a079900d132d71833a50bc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39979537a7af3611fa4b704bcab4f272a075ccea9a079900d132d71833a50bc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39979537a7af3611fa4b704bcab4f272a075ccea9a079900d132d71833a50bc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:48:33 compute-0 podman[250265]: 2026-01-26 15:48:33.619109237 +0000 UTC m=+0.117749936 container init 9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:48:33 compute-0 podman[250265]: 2026-01-26 15:48:33.521771803 +0000 UTC m=+0.020412512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:48:33 compute-0 podman[250265]: 2026-01-26 15:48:33.626415816 +0000 UTC m=+0.125056495 container start 9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:48:33 compute-0 podman[250265]: 2026-01-26 15:48:33.629246025 +0000 UTC m=+0.127886704 container attach 9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 15:48:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:33 compute-0 ceph-mon[75140]: pgmap v982: 305 pgs: 305 active+clean; 90 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 205 op/s
Jan 26 15:48:34 compute-0 lvm[250357]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:48:34 compute-0 lvm[250357]: VG ceph_vg0 finished
Jan 26 15:48:34 compute-0 lvm[250359]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:48:34 compute-0 lvm[250359]: VG ceph_vg1 finished
Jan 26 15:48:34 compute-0 lvm[250360]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:48:34 compute-0 lvm[250360]: VG ceph_vg0 finished
Jan 26 15:48:34 compute-0 lvm[250361]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:48:34 compute-0 lvm[250361]: VG ceph_vg2 finished
Jan 26 15:48:34 compute-0 lvm[250363]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:48:34 compute-0 lvm[250363]: VG ceph_vg2 finished
Jan 26 15:48:34 compute-0 vibrant_hopper[250281]: {}
Jan 26 15:48:34 compute-0 systemd[1]: libpod-9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114.scope: Deactivated successfully.
Jan 26 15:48:34 compute-0 podman[250265]: 2026-01-26 15:48:34.503140636 +0000 UTC m=+1.001781315 container died 9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 15:48:34 compute-0 systemd[1]: libpod-9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114.scope: Consumed 1.407s CPU time.
Jan 26 15:48:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-39979537a7af3611fa4b704bcab4f272a075ccea9a079900d132d71833a50bc7-merged.mount: Deactivated successfully.
Jan 26 15:48:34 compute-0 podman[250265]: 2026-01-26 15:48:34.550634859 +0000 UTC m=+1.049275538 container remove 9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_hopper, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 15:48:34 compute-0 systemd[1]: libpod-conmon-9a3a2225c97f426ac83ceea0a3d0188f2cc309e94d659254b02217d8fee76114.scope: Deactivated successfully.
Jan 26 15:48:34 compute-0 sudo[250187]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:48:34 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:48:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:48:34 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:48:34 compute-0 nova_compute[239965]: 2026-01-26 15:48:34.650 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:34 compute-0 sudo[250378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:48:34 compute-0 sudo[250378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:48:34 compute-0 sudo[250378]: pam_unix(sudo:session): session closed for user root
Jan 26 15:48:34 compute-0 nova_compute[239965]: 2026-01-26 15:48:34.823 239969 DEBUG nova.compute.manager [req-c4359bff-9d43-4314-b3eb-7df9381b36f9 req-234fb3a6-c27b-4655-bbd5-06b61f03bf2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received event network-vif-plugged-09363911-8f3e-47bd-9fec-9005436e3369 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:34 compute-0 nova_compute[239965]: 2026-01-26 15:48:34.824 239969 DEBUG oslo_concurrency.lockutils [req-c4359bff-9d43-4314-b3eb-7df9381b36f9 req-234fb3a6-c27b-4655-bbd5-06b61f03bf2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:34 compute-0 nova_compute[239965]: 2026-01-26 15:48:34.824 239969 DEBUG oslo_concurrency.lockutils [req-c4359bff-9d43-4314-b3eb-7df9381b36f9 req-234fb3a6-c27b-4655-bbd5-06b61f03bf2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:34 compute-0 nova_compute[239965]: 2026-01-26 15:48:34.824 239969 DEBUG oslo_concurrency.lockutils [req-c4359bff-9d43-4314-b3eb-7df9381b36f9 req-234fb3a6-c27b-4655-bbd5-06b61f03bf2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:34 compute-0 nova_compute[239965]: 2026-01-26 15:48:34.824 239969 DEBUG nova.compute.manager [req-c4359bff-9d43-4314-b3eb-7df9381b36f9 req-234fb3a6-c27b-4655-bbd5-06b61f03bf2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] No waiting events found dispatching network-vif-plugged-09363911-8f3e-47bd-9fec-9005436e3369 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:48:34 compute-0 nova_compute[239965]: 2026-01-26 15:48:34.824 239969 WARNING nova.compute.manager [req-c4359bff-9d43-4314-b3eb-7df9381b36f9 req-234fb3a6-c27b-4655-bbd5-06b61f03bf2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received unexpected event network-vif-plugged-09363911-8f3e-47bd-9fec-9005436e3369 for instance with vm_state active and task_state None.
Jan 26 15:48:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 90 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.614 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Acquiring lock "24110c5c-68c9-4c53-8028-d45b792ba592" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.615 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "24110c5c-68c9-4c53-8028-d45b792ba592" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:48:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.631 239969 DEBUG nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.704 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.705 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.706 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442500.703748, e8e51150-05c2-4d90-b463-0a5f26c999fc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.706 239969 INFO nova.compute.manager [-] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] VM Stopped (Lifecycle Event)
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.713 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.713 239969 INFO nova.compute.claims [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.735 239969 DEBUG nova.compute.manager [None req-1a97ea9f-6225-4b7f-b920-8dc9759bbdd1 - - - - - -] [instance: e8e51150-05c2-4d90-b463-0a5f26c999fc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.746 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:35 compute-0 nova_compute[239965]: 2026-01-26 15:48:35.845 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.168 239969 DEBUG nova.compute.manager [req-6029ad68-7613-4f3d-a5e0-323a451686b9 req-52ccd25e-3bd1-44bf-b642-fba8bad94534 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received event network-changed-09363911-8f3e-47bd-9fec-9005436e3369 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.168 239969 DEBUG nova.compute.manager [req-6029ad68-7613-4f3d-a5e0-323a451686b9 req-52ccd25e-3bd1-44bf-b642-fba8bad94534 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Refreshing instance network info cache due to event network-changed-09363911-8f3e-47bd-9fec-9005436e3369. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.169 239969 DEBUG oslo_concurrency.lockutils [req-6029ad68-7613-4f3d-a5e0-323a451686b9 req-52ccd25e-3bd1-44bf-b642-fba8bad94534 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-dd8f3e21-de0a-484c-87ee-6c918b3ffb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.169 239969 DEBUG oslo_concurrency.lockutils [req-6029ad68-7613-4f3d-a5e0-323a451686b9 req-52ccd25e-3bd1-44bf-b642-fba8bad94534 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-dd8f3e21-de0a-484c-87ee-6c918b3ffb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.169 239969 DEBUG nova.network.neutron [req-6029ad68-7613-4f3d-a5e0-323a451686b9 req-52ccd25e-3bd1-44bf-b642-fba8bad94534 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Refreshing network info cache for port 09363911-8f3e-47bd-9fec-9005436e3369 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:48:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2926511276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.409 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.414 239969 DEBUG nova.compute.provider_tree [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.429 239969 DEBUG nova.scheduler.client.report [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.458 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.458 239969 DEBUG nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.508 239969 DEBUG nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.508 239969 DEBUG nova.network.neutron [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.528 239969 INFO nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.549 239969 DEBUG nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:48:36 compute-0 ceph-mon[75140]: pgmap v983: 305 pgs: 305 active+clean; 90 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 26 15:48:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2926511276' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.635 239969 DEBUG nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.637 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.637 239969 INFO nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Creating image(s)
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.658 239969 DEBUG nova.storage.rbd_utils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] rbd image 24110c5c-68c9-4c53-8028-d45b792ba592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.680 239969 DEBUG nova.storage.rbd_utils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] rbd image 24110c5c-68c9-4c53-8028-d45b792ba592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.702 239969 DEBUG nova.storage.rbd_utils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] rbd image 24110c5c-68c9-4c53-8028-d45b792ba592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.705 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.760 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.760 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.761 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.761 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.784 239969 DEBUG nova.storage.rbd_utils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] rbd image 24110c5c-68c9-4c53-8028-d45b792ba592_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.788 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 24110c5c-68c9-4c53-8028-d45b792ba592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.957 239969 DEBUG nova.network.neutron [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:48:36 compute-0 nova_compute[239965]: 2026-01-26 15:48:36.958 239969 DEBUG nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.035 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 24110c5c-68c9-4c53-8028-d45b792ba592_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.247s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.090 239969 DEBUG nova.storage.rbd_utils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] resizing rbd image 24110c5c-68c9-4c53-8028-d45b792ba592_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.444 239969 DEBUG nova.objects.instance [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lazy-loading 'migration_context' on Instance uuid 24110c5c-68c9-4c53-8028-d45b792ba592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.458 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.458 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Ensure instance console log exists: /var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.459 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.459 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.459 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.461 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.464 239969 WARNING nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.468 239969 DEBUG nova.virt.libvirt.host [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.468 239969 DEBUG nova.virt.libvirt.host [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.472 239969 DEBUG nova.virt.libvirt.host [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.472 239969 DEBUG nova.virt.libvirt.host [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.473 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.473 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.473 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.473 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.474 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.474 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.474 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.474 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.475 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.475 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.475 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.475 239969 DEBUG nova.virt.hardware [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.477 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 119 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 167 op/s
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.926 239969 DEBUG nova.network.neutron [req-6029ad68-7613-4f3d-a5e0-323a451686b9 req-52ccd25e-3bd1-44bf-b642-fba8bad94534 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Updated VIF entry in instance network info cache for port 09363911-8f3e-47bd-9fec-9005436e3369. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.928 239969 DEBUG nova.network.neutron [req-6029ad68-7613-4f3d-a5e0-323a451686b9 req-52ccd25e-3bd1-44bf-b642-fba8bad94534 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Updating instance_info_cache with network_info: [{"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:37 compute-0 nova_compute[239965]: 2026-01-26 15:48:37.950 239969 DEBUG oslo_concurrency.lockutils [req-6029ad68-7613-4f3d-a5e0-323a451686b9 req-52ccd25e-3bd1-44bf-b642-fba8bad94534 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-dd8f3e21-de0a-484c-87ee-6c918b3ffb57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:48:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1632814555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.014 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.033 239969 DEBUG nova.storage.rbd_utils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] rbd image 24110c5c-68c9-4c53-8028-d45b792ba592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.036 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2649226223' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.584 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.586 239969 DEBUG nova.objects.instance [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24110c5c-68c9-4c53-8028-d45b792ba592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.606 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <uuid>24110c5c-68c9-4c53-8028-d45b792ba592</uuid>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <name>instance-00000007</name>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerDiagnosticsNegativeTest-server-2124395391</nova:name>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:48:37</nova:creationTime>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <nova:user uuid="15a28b87c5ad4cd79820a036152d87a5">tempest-ServerDiagnosticsNegativeTest-196209841-project-member</nova:user>
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <nova:project uuid="2fb0c62a9c4c44e7a25fef9e4ca7dec6">tempest-ServerDiagnosticsNegativeTest-196209841</nova:project>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <system>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <entry name="serial">24110c5c-68c9-4c53-8028-d45b792ba592</entry>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <entry name="uuid">24110c5c-68c9-4c53-8028-d45b792ba592</entry>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     </system>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <os>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   </os>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <features>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   </features>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/24110c5c-68c9-4c53-8028-d45b792ba592_disk">
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/24110c5c-68c9-4c53-8028-d45b792ba592_disk.config">
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:38 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592/console.log" append="off"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <video>
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     </video>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:48:38 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:48:38 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:48:38 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:48:38 compute-0 nova_compute[239965]: </domain>
Jan 26 15:48:38 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
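The `_get_guest_xml` dump above is the libvirt domain definition Nova generated for this instance. Such XML can be inspected programmatically with Python's standard library; a minimal sketch (the XML literal below is a trimmed stand-in for the full `<domain>` dump, not the exact logged document):

```python
# Sketch: extract each <disk> element's target device, bus, and RBD
# image name from a libvirt guest XML like the one Nova logged above.
import xml.etree.ElementTree as ET

DOMAIN_XML = """
<domain type="kvm">
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/24110c5c-68c9-4c53-8028-d45b792ba592_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/24110c5c-68c9-4c53-8028-d45b792ba592_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
  </devices>
</domain>
"""

def list_disks(xml_text):
    """Return (target_dev, bus, rbd_image) for every <disk> element."""
    root = ET.fromstring(xml_text)
    disks = []
    for disk in root.findall("./devices/disk"):
        target = disk.find("target")
        source = disk.find("source")
        disks.append((target.get("dev"), target.get("bus"),
                      source.get("name") if source is not None else None))
    return disks
```

Running `list_disks` against the trimmed XML yields the same vda/virtio root disk and sda/sata config-drive CD-ROM that appear in the logged domain.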
Jan 26 15:48:38 compute-0 ceph-mon[75140]: pgmap v984: 305 pgs: 305 active+clean; 119 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.8 MiB/s wr, 167 op/s
Jan 26 15:48:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1632814555' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2649226223' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.657 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.657 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.658 239969 INFO nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Using config drive
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.681 239969 DEBUG nova.storage.rbd_utils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] rbd image 24110c5c-68c9-4c53-8028-d45b792ba592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.919 239969 INFO nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Creating config drive at /var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592/disk.config
Jan 26 15:48:38 compute-0 nova_compute[239965]: 2026-01-26 15:48:38.923 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgumlcs24 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.047 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgumlcs24" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.073 239969 DEBUG nova.storage.rbd_utils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] rbd image 24110c5c-68c9-4c53-8028-d45b792ba592_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.076 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592/disk.config 24110c5c-68c9-4c53-8028-d45b792ba592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.113 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442504.1113195, 9fa84b9e-e104-4f48-adce-484eb5512b9d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.114 239969 INFO nova.compute.manager [-] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] VM Stopped (Lifecycle Event)
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.133 239969 DEBUG nova.compute.manager [None req-56e25ba5-8e96-44ef-b75b-677f3cb9ad95 - - - - - -] [instance: 9fa84b9e-e104-4f48-adce-484eb5512b9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.201 239969 DEBUG oslo_concurrency.processutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592/disk.config 24110c5c-68c9-4c53-8028-d45b792ba592_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.202 239969 INFO nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Deleting local config drive /var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592/disk.config because it was imported into RBD.
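The lines above show the config-drive flow: Nova builds an ISO9660 image locally with `mkisofs`, imports it into the `vms` RBD pool, then deletes the local copy. A sketch reconstructing the two logged command lines as argument lists (the helper names are illustrative, not Nova's own code):

```python
# Sketch of the two commands Nova runs above to place a config drive in
# Ceph. Flags mirror the logged invocations; "-V config-2" is the volume
# label that identifies the image as a config drive to the guest.

def mkisofs_cmd(iso_path, staging_dir, publisher):
    """Build the mkisofs argument list that creates the config-drive ISO."""
    return ["/usr/bin/mkisofs", "-o", iso_path,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher, "-quiet", "-J", "-r",
            "-V", "config-2", staging_dir]

def rbd_import_cmd(iso_path, image_name, pool="vms",
                   client_id="openstack", conf="/etc/ceph/ceph.conf"):
    """Build the rbd import argument list that copies the ISO into RBD."""
    return ["rbd", "import", "--pool", pool, iso_path, image_name,
            "--image-format=2", "--id", client_id, "--conf", conf]
```

Either list could be handed to `subprocess.run(cmd, check=True)`, which is essentially what `oslo_concurrency.processutils.execute` does in the log.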
Jan 26 15:48:39 compute-0 systemd-machined[208061]: New machine qemu-7-instance-00000007.
Jan 26 15:48:39 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 26 15:48:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 126 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 106 op/s
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.652 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.766 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442519.7660334, 24110c5c-68c9-4c53-8028-d45b792ba592 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.766 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] VM Resumed (Lifecycle Event)
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.769 239969 DEBUG nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.769 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.771 239969 INFO nova.virt.libvirt.driver [-] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Instance spawned successfully.
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.772 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.791 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.795 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.795 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.796 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.796 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.796 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.797 239969 DEBUG nova.virt.libvirt.driver [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.800 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.825 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.825 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442519.7684805, 24110c5c-68c9-4c53-8028-d45b792ba592 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.825 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] VM Started (Lifecycle Event)
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.851 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.857 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
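The `sync_power_state` messages above compare a DB power_state of 0 against a VM power_state of 1. Those integers come from Nova's power-state enumeration (`nova/compute/power_state.py`); the mapping below reproduces the constants as they stand in recent Nova releases — treat it as a reference sketch, not an import from this deployment:

```python
# Nova power-state constants (values 2 and 5 are unused in current
# releases). DB=0 vs VM=1 in the log means the database still records
# NOSTATE while libvirt already reports the guest as RUNNING, which is
# expected mid-spawn, hence the "pending task (spawning). Skip." lines.
POWER_STATES = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}

def describe_sync(db_state, vm_state):
    """Render a DB-vs-VM power-state comparison like the one logged."""
    return "DB=%s VM=%s" % (POWER_STATES.get(db_state, "?"),
                            POWER_STATES.get(vm_state, "?"))
```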
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.861 239969 INFO nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Took 3.23 seconds to spawn the instance on the hypervisor.
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.862 239969 DEBUG nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.885 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.920 239969 INFO nova.compute.manager [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Took 4.25 seconds to build instance.
Jan 26 15:48:39 compute-0 nova_compute[239965]: 2026-01-26 15:48:39.937 239969 DEBUG oslo_concurrency.lockutils [None req-378d59d7-c2a4-493d-8e04-d4d1a073349f 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "24110c5c-68c9-4c53-8028-d45b792ba592" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.322s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:40 compute-0 ceph-mon[75140]: pgmap v985: 305 pgs: 305 active+clean; 126 MiB data, 256 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 106 op/s
Jan 26 15:48:40 compute-0 nova_compute[239965]: 2026-01-26 15:48:40.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 136 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.710 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Acquiring lock "24110c5c-68c9-4c53-8028-d45b792ba592" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.711 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "24110c5c-68c9-4c53-8028-d45b792ba592" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.711 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Acquiring lock "24110c5c-68c9-4c53-8028-d45b792ba592-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.711 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "24110c5c-68c9-4c53-8028-d45b792ba592-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.711 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "24110c5c-68c9-4c53-8028-d45b792ba592-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.712 239969 INFO nova.compute.manager [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Terminating instance
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.713 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Acquiring lock "refresh_cache-24110c5c-68c9-4c53-8028-d45b792ba592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.713 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Acquired lock "refresh_cache-24110c5c-68c9-4c53-8028-d45b792ba592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.713 239969 DEBUG nova.network.neutron [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:48:41 compute-0 nova_compute[239965]: 2026-01-26 15:48:41.942 239969 DEBUG nova.network.neutron [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:48:42 compute-0 nova_compute[239965]: 2026-01-26 15:48:42.395 239969 DEBUG nova.network.neutron [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:42 compute-0 nova_compute[239965]: 2026-01-26 15:48:42.411 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Releasing lock "refresh_cache-24110c5c-68c9-4c53-8028-d45b792ba592" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:48:42 compute-0 nova_compute[239965]: 2026-01-26 15:48:42.411 239969 DEBUG nova.compute.manager [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:48:43 compute-0 ceph-mon[75140]: pgmap v986: 305 pgs: 305 active+clean; 136 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Jan 26 15:48:43 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 26 15:48:43 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 3.209s CPU time.
Jan 26 15:48:43 compute-0 systemd-machined[208061]: Machine qemu-7-instance-00000007 terminated.
Jan 26 15:48:43 compute-0 nova_compute[239965]: 2026-01-26 15:48:43.436 239969 INFO nova.virt.libvirt.driver [-] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Instance destroyed successfully.
Jan 26 15:48:43 compute-0 nova_compute[239965]: 2026-01-26 15:48:43.436 239969 DEBUG nova.objects.instance [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lazy-loading 'resources' on Instance uuid 24110c5c-68c9-4c53-8028-d45b792ba592 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 136 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Jan 26 15:48:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.061 239969 INFO nova.virt.libvirt.driver [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Deleting instance files /var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592_del
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.061 239969 INFO nova.virt.libvirt.driver [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Deletion of /var/lib/nova/instances/24110c5c-68c9-4c53-8028-d45b792ba592_del complete
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.136 239969 INFO nova.compute.manager [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Took 1.72 seconds to destroy the instance on the hypervisor.
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.137 239969 DEBUG oslo.service.loopingcall [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.137 239969 DEBUG nova.compute.manager [-] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.137 239969 DEBUG nova.network.neutron [-] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:48:44 compute-0 ceph-mon[75140]: pgmap v987: 305 pgs: 305 active+clean; 136 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 166 op/s
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.362 239969 DEBUG nova.network.neutron [-] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.376 239969 DEBUG nova.network.neutron [-] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.387 239969 INFO nova.compute.manager [-] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Took 0.25 seconds to deallocate network for instance.
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.428 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.428 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.492 239969 DEBUG oslo_concurrency.processutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:44 compute-0 nova_compute[239965]: 2026-01-26 15:48:44.652 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274348139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:45 compute-0 nova_compute[239965]: 2026-01-26 15:48:45.058 239969 DEBUG oslo_concurrency.processutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:45 compute-0 nova_compute[239965]: 2026-01-26 15:48:45.062 239969 DEBUG nova.compute.provider_tree [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:45 compute-0 nova_compute[239965]: 2026-01-26 15:48:45.096 239969 DEBUG nova.scheduler.client.report [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:45 compute-0 nova_compute[239965]: 2026-01-26 15:48:45.116 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:45 compute-0 nova_compute[239965]: 2026-01-26 15:48:45.140 239969 INFO nova.scheduler.client.report [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Deleted allocations for instance 24110c5c-68c9-4c53-8028-d45b792ba592
Jan 26 15:48:45 compute-0 nova_compute[239965]: 2026-01-26 15:48:45.198 239969 DEBUG oslo_concurrency.lockutils [None req-509c9a92-2624-449a-8834-dcc4ccd412c2 15a28b87c5ad4cd79820a036152d87a5 2fb0c62a9c4c44e7a25fef9e4ca7dec6 - - default default] Lock "24110c5c-68c9-4c53-8028-d45b792ba592" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3274348139' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:45 compute-0 ovn_controller[146046]: 2026-01-26T15:48:45Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:99:f6:b3 10.100.0.8
Jan 26 15:48:45 compute-0 ovn_controller[146046]: 2026-01-26T15:48:45Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:99:f6:b3 10.100.0.8
Jan 26 15:48:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 136 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Jan 26 15:48:45 compute-0 nova_compute[239965]: 2026-01-26 15:48:45.751 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:46 compute-0 ceph-mon[75140]: pgmap v988: 305 pgs: 305 active+clean; 136 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 165 op/s
Jan 26 15:48:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 111 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.7 MiB/s wr, 236 op/s
Jan 26 15:48:48 compute-0 podman[250813]: 2026-01-26 15:48:48.387989381 +0000 UTC m=+0.065704160 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 15:48:48 compute-0 podman[250814]: 2026-01-26 15:48:48.423005249 +0000 UTC m=+0.104187134 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006000037894100232 of space, bias 1.0, pg target 0.18000113682300697 quantized to 32 (current 32)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666381962373712 of space, bias 1.0, pg target 0.19991458871211362 quantized to 32 (current 32)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0152157143067901e-06 of space, bias 4.0, pg target 0.0012182588571681482 quantized to 16 (current 16)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:48:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:48:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:48:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2940599820' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:48:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:48:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2940599820' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:48:48 compute-0 ceph-mon[75140]: pgmap v989: 305 pgs: 305 active+clean; 111 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.7 MiB/s wr, 236 op/s
Jan 26 15:48:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2940599820' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:48:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2940599820' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:48:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 120 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.9 MiB/s wr, 210 op/s
Jan 26 15:48:49 compute-0 nova_compute[239965]: 2026-01-26 15:48:49.654 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:50 compute-0 nova_compute[239965]: 2026-01-26 15:48:50.754 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:50 compute-0 ceph-mon[75140]: pgmap v990: 305 pgs: 305 active+clean; 120 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.9 MiB/s wr, 210 op/s
Jan 26 15:48:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Jan 26 15:48:51 compute-0 ceph-mon[75140]: pgmap v991: 305 pgs: 305 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 197 op/s
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.047 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.047 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.065 239969 DEBUG nova.compute.manager [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.148 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.148 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.154 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.155 239969 INFO nova.compute.claims [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.378 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.598 239969 DEBUG oslo_concurrency.lockutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.599 239969 DEBUG oslo_concurrency.lockutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.600 239969 DEBUG oslo_concurrency.lockutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.600 239969 DEBUG oslo_concurrency.lockutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.600 239969 DEBUG oslo_concurrency.lockutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.601 239969 INFO nova.compute.manager [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Terminating instance
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.603 239969 DEBUG nova.compute.manager [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:48:52 compute-0 kernel: tap09363911-8f (unregistering): left promiscuous mode
Jan 26 15:48:52 compute-0 NetworkManager[48954]: <info>  [1769442532.6562] device (tap09363911-8f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:48:52 compute-0 ovn_controller[146046]: 2026-01-26T15:48:52Z|00041|binding|INFO|Releasing lport 09363911-8f3e-47bd-9fec-9005436e3369 from this chassis (sb_readonly=0)
Jan 26 15:48:52 compute-0 ovn_controller[146046]: 2026-01-26T15:48:52Z|00042|binding|INFO|Setting lport 09363911-8f3e-47bd-9fec-9005436e3369 down in Southbound
Jan 26 15:48:52 compute-0 ovn_controller[146046]: 2026-01-26T15:48:52Z|00043|binding|INFO|Removing iface tap09363911-8f ovn-installed in OVS
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.665 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.683 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:52 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Jan 26 15:48:52 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.734s CPU time.
Jan 26 15:48:52 compute-0 systemd-machined[208061]: Machine qemu-6-instance-00000006 terminated.
Jan 26 15:48:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:52.803 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:f6:b3 10.100.0.8'], port_security=['fa:16:3e:99:f6:b3 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'dd8f3e21-de0a-484c-87ee-6c918b3ffb57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7b13a653-2720-44ed-8e59-f532313cf362', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b53671bcf3e946d58ed07bc7f2934508', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'aa7896b6-fb15-4b71-8719-443aa677edd8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88b30d9b-ee3e-48c5-b1e4-65985eb98cc5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=09363911-8f3e-47bd-9fec-9005436e3369) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:48:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:52.805 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 09363911-8f3e-47bd-9fec-9005436e3369 in datapath 7b13a653-2720-44ed-8e59-f532313cf362 unbound from our chassis
Jan 26 15:48:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:52.806 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7b13a653-2720-44ed-8e59-f532313cf362, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:48:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:52.809 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fc006025-b0b8-43ca-b544-dc02a8e34f27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:52.809 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 namespace which is not needed anymore
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.846 239969 INFO nova.virt.libvirt.driver [-] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Instance destroyed successfully.
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.846 239969 DEBUG nova.objects.instance [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lazy-loading 'resources' on Instance uuid dd8f3e21-de0a-484c-87ee-6c918b3ffb57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.865 239969 DEBUG nova.virt.libvirt.vif [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:48:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1834795809',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1834795809',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(22),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1834795809',id=6,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=22,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCai7GwJddzKrW3TtkSQ1mOUip/vonOPhzdMngoUZvO4bbn1qLYfpHu7UDeWVieQmn+Md1KkRtB3+wPQBXRuMgP1dKOatHXM/89LkiId8i7p8znLMMfyiJvInuKeFqzzbQ==',key_name='tempest-keypair-1575067288',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:48:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b53671bcf3e946d58ed07bc7f2934508',ramdisk_id='',reservation_id='r-803g32dt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1775060788-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:48:32Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='84de675078294b38ba608227b57f84a5',uuid=dd8f3e21-de0a-484c-87ee-6c918b3ffb57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.866 239969 DEBUG nova.network.os_vif_util [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converting VIF {"id": "09363911-8f3e-47bd-9fec-9005436e3369", "address": "fa:16:3e:99:f6:b3", "network": {"id": "7b13a653-2720-44ed-8e59-f532313cf362", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-888576438-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b53671bcf3e946d58ed07bc7f2934508", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09363911-8f", "ovs_interfaceid": "09363911-8f3e-47bd-9fec-9005436e3369", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.867 239969 DEBUG nova.network.os_vif_util [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:99:f6:b3,bridge_name='br-int',has_traffic_filtering=True,id=09363911-8f3e-47bd-9fec-9005436e3369,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09363911-8f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.868 239969 DEBUG os_vif [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:f6:b3,bridge_name='br-int',has_traffic_filtering=True,id=09363911-8f3e-47bd-9fec-9005436e3369,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09363911-8f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.870 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09363911-8f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.871 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.874 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:48:52 compute-0 nova_compute[239965]: 2026-01-26 15:48:52.876 239969 INFO os_vif [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:99:f6:b3,bridge_name='br-int',has_traffic_filtering=True,id=09363911-8f3e-47bd-9fec-9005436e3369,network=Network(7b13a653-2720-44ed-8e59-f532313cf362),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09363911-8f')
Jan 26 15:48:52 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[250125]: [NOTICE]   (250135) : haproxy version is 2.8.14-c23fe91
Jan 26 15:48:52 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[250125]: [NOTICE]   (250135) : path to executable is /usr/sbin/haproxy
Jan 26 15:48:52 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[250125]: [WARNING]  (250135) : Exiting Master process...
Jan 26 15:48:52 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[250125]: [WARNING]  (250135) : Exiting Master process...
Jan 26 15:48:52 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[250125]: [ALERT]    (250135) : Current worker (250139) exited with code 143 (Terminated)
Jan 26 15:48:52 compute-0 neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362[250125]: [WARNING]  (250135) : All workers exited. Exiting... (0)
Jan 26 15:48:52 compute-0 systemd[1]: libpod-4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219.scope: Deactivated successfully.
Jan 26 15:48:52 compute-0 podman[250930]: 2026-01-26 15:48:52.948813843 +0000 UTC m=+0.045545955 container died 4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 15:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219-userdata-shm.mount: Deactivated successfully.
Jan 26 15:48:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ad92b620d9d2a144795802c4b265e09351db2bd2a6043e91442017274227fb3-merged.mount: Deactivated successfully.
Jan 26 15:48:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118423218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:52 compute-0 podman[250930]: 2026-01-26 15:48:52.993807235 +0000 UTC m=+0.090539317 container cleanup 4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.003 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:53 compute-0 systemd[1]: libpod-conmon-4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219.scope: Deactivated successfully.
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.009 239969 DEBUG nova.compute.provider_tree [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4118423218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:53 compute-0 podman[250966]: 2026-01-26 15:48:53.078253324 +0000 UTC m=+0.065069355 container remove 4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:48:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:53.084 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cce05584-baf3-4c33-a7ba-3185bcb327ee]: (4, ('Mon Jan 26 03:48:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 (4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219)\n4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219\nMon Jan 26 03:48:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 (4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219)\n4ff4ae2c3d6ec0eabaf25d735e5625cb7e9d14b40795a9e05568664e68409219\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:53.086 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[641ebb24-c846-4a81-9a23-64194681d6ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:53.086 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b13a653-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.088 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:53 compute-0 kernel: tap7b13a653-20: left promiscuous mode
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:53.106 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[92638c85-7bd2-4015-8635-2a9299001f21]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:53.118 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ccc424ff-07ea-44c7-8f0d-25269dbfbd0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:53.119 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba3e5ec-7990-4738-af18-b4c5b059e219]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:53.136 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4edb1a-5102-46cd-b33f-3b1c59f9b87e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400265, 'reachable_time': 29022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250982, 'error': None, 'target': 'ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:53.139 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7b13a653-2720-44ed-8e59-f532313cf362 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:48:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:53.139 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[27f73a2c-90d7-4593-afab-bcef0120648f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:48:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d7b13a653\x2d2720\x2d44ed\x2d8e59\x2df532313cf362.mount: Deactivated successfully.
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.193 239969 DEBUG nova.compute.manager [req-a3f01816-48f5-4c79-8d24-0c7f3df31e2a req-f41a12ec-e856-4112-b068-916621dcd0af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received event network-vif-unplugged-09363911-8f3e-47bd-9fec-9005436e3369 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.195 239969 DEBUG oslo_concurrency.lockutils [req-a3f01816-48f5-4c79-8d24-0c7f3df31e2a req-f41a12ec-e856-4112-b068-916621dcd0af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.195 239969 DEBUG oslo_concurrency.lockutils [req-a3f01816-48f5-4c79-8d24-0c7f3df31e2a req-f41a12ec-e856-4112-b068-916621dcd0af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.196 239969 DEBUG oslo_concurrency.lockutils [req-a3f01816-48f5-4c79-8d24-0c7f3df31e2a req-f41a12ec-e856-4112-b068-916621dcd0af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.196 239969 DEBUG nova.compute.manager [req-a3f01816-48f5-4c79-8d24-0c7f3df31e2a req-f41a12ec-e856-4112-b068-916621dcd0af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] No waiting events found dispatching network-vif-unplugged-09363911-8f3e-47bd-9fec-9005436e3369 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.196 239969 DEBUG nova.compute.manager [req-a3f01816-48f5-4c79-8d24-0c7f3df31e2a req-f41a12ec-e856-4112-b068-916621dcd0af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received event network-vif-unplugged-09363911-8f3e-47bd-9fec-9005436e3369 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.198 239969 DEBUG nova.scheduler.client.report [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.237 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.088s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.237 239969 DEBUG nova.compute.manager [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.355 239969 DEBUG nova.compute.manager [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.399 239969 INFO nova.virt.libvirt.driver [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Deleting instance files /var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57_del
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.400 239969 INFO nova.virt.libvirt.driver [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Deletion of /var/lib/nova/instances/dd8f3e21-de0a-484c-87ee-6c918b3ffb57_del complete
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.455 239969 INFO nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.488 239969 DEBUG nova.compute.manager [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.498 239969 INFO nova.compute.manager [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.498 239969 DEBUG oslo.service.loopingcall [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.498 239969 DEBUG nova.compute.manager [-] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.499 239969 DEBUG nova.network.neutron [-] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.574 239969 DEBUG nova.compute.manager [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.576 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.576 239969 INFO nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Creating image(s)
Jan 26 15:48:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.599 239969 DEBUG nova.storage.rbd_utils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.622 239969 DEBUG nova.storage.rbd_utils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.643 239969 DEBUG nova.storage.rbd_utils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.646 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.701 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.702 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.703 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.703 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.725 239969 DEBUG nova.storage.rbd_utils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.729 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:53 compute-0 nova_compute[239965]: 2026-01-26 15:48:53.977 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.248s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.053 239969 DEBUG nova.storage.rbd_utils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] resizing rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:48:54 compute-0 ceph-mon[75140]: pgmap v992: 305 pgs: 305 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.149 239969 DEBUG nova.objects.instance [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'migration_context' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.171 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.172 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Ensure instance console log exists: /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.172 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.173 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.173 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.175 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.180 239969 WARNING nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.186 239969 DEBUG nova.virt.libvirt.host [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.186 239969 DEBUG nova.virt.libvirt.host [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.190 239969 DEBUG nova.virt.libvirt.host [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.190 239969 DEBUG nova.virt.libvirt.host [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.191 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.191 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.192 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.192 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.192 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.192 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.193 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.193 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.193 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.193 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.194 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.194 239969 DEBUG nova.virt.hardware [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.197 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.563 239969 DEBUG nova.network.neutron [-] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.596 239969 INFO nova.compute.manager [-] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Took 1.10 seconds to deallocate network for instance.
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.639 239969 DEBUG oslo_concurrency.lockutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.640 239969 DEBUG oslo_concurrency.lockutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.656 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.707 239969 DEBUG oslo_concurrency.processutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3692468722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.756 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.778 239969 DEBUG nova.storage.rbd_utils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.782 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:54 compute-0 nova_compute[239965]: 2026-01-26 15:48:54.801 239969 DEBUG nova.compute.manager [req-45b5209a-8d48-4649-b917-571f1e18c080 req-d5fee679-8536-4390-ba78-55baad621da8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received event network-vif-deleted-09363911-8f3e-47bd-9fec-9005436e3369 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3692468722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:48:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3138136595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.262 239969 DEBUG oslo_concurrency.processutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.267 239969 DEBUG nova.compute.provider_tree [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.282 239969 DEBUG nova.scheduler.client.report [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:48:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:48:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187033689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.311 239969 DEBUG oslo_concurrency.lockutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.327 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.328 239969 DEBUG nova.objects.instance [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'pci_devices' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.343 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <uuid>bfbba9d8-efe9-46fa-a6a4-2431c70eab06</uuid>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <name>instance-00000008</name>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdmin275Test-server-141780407</nova:name>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:48:54</nova:creationTime>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <nova:user uuid="2d53086521a447778c494a2f71fa9685">tempest-ServersAdmin275Test-1784917804-project-member</nova:user>
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <nova:project uuid="19e7b3737799454e9d511751ed18060a">tempest-ServersAdmin275Test-1784917804</nova:project>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <system>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <entry name="serial">bfbba9d8-efe9-46fa-a6a4-2431c70eab06</entry>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <entry name="uuid">bfbba9d8-efe9-46fa-a6a4-2431c70eab06</entry>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     </system>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <os>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   </os>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <features>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   </features>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk">
Jan 26 15:48:55 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config">
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       </source>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:48:55 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/console.log" append="off"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <video>
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     </video>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:48:55 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:48:55 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:48:55 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:48:55 compute-0 nova_compute[239965]: </domain>
Jan 26 15:48:55 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.352 239969 INFO nova.scheduler.client.report [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Deleted allocations for instance dd8f3e21-de0a-484c-87ee-6c918b3ffb57
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.404 239969 DEBUG nova.compute.manager [req-f8f915bc-f1b7-441f-b0e9-e78729c1d995 req-f137289f-c9f4-4c3b-bb63-94bb6de3d966 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received event network-vif-plugged-09363911-8f3e-47bd-9fec-9005436e3369 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.404 239969 DEBUG oslo_concurrency.lockutils [req-f8f915bc-f1b7-441f-b0e9-e78729c1d995 req-f137289f-c9f4-4c3b-bb63-94bb6de3d966 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.404 239969 DEBUG oslo_concurrency.lockutils [req-f8f915bc-f1b7-441f-b0e9-e78729c1d995 req-f137289f-c9f4-4c3b-bb63-94bb6de3d966 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.404 239969 DEBUG oslo_concurrency.lockutils [req-f8f915bc-f1b7-441f-b0e9-e78729c1d995 req-f137289f-c9f4-4c3b-bb63-94bb6de3d966 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.405 239969 DEBUG nova.compute.manager [req-f8f915bc-f1b7-441f-b0e9-e78729c1d995 req-f137289f-c9f4-4c3b-bb63-94bb6de3d966 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] No waiting events found dispatching network-vif-plugged-09363911-8f3e-47bd-9fec-9005436e3369 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.405 239969 WARNING nova.compute.manager [req-f8f915bc-f1b7-441f-b0e9-e78729c1d995 req-f137289f-c9f4-4c3b-bb63-94bb6de3d966 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Received unexpected event network-vif-plugged-09363911-8f3e-47bd-9fec-9005436e3369 for instance with vm_state deleted and task_state None.
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.414 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.415 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.415 239969 INFO nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Using config drive
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.433 239969 DEBUG nova.storage.rbd_utils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.440 239969 DEBUG oslo_concurrency.lockutils [None req-ff481be0-4d16-4f2e-9b92-bf3e5c796a60 84de675078294b38ba608227b57f84a5 b53671bcf3e946d58ed07bc7f2934508 - - default default] Lock "dd8f3e21-de0a-484c-87ee-6c918b3ffb57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 730 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.595 239969 INFO nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Creating config drive at /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.601 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wo24mal execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.727 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp5wo24mal" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.752 239969 DEBUG nova.storage.rbd_utils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.756 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.884 239969 DEBUG oslo_concurrency.processutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:48:55 compute-0 nova_compute[239965]: 2026-01-26 15:48:55.885 239969 INFO nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deleting local config drive /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config because it was imported into RBD.
Jan 26 15:48:55 compute-0 systemd-machined[208061]: New machine qemu-8-instance-00000008.
Jan 26 15:48:55 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 26 15:48:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3138136595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:48:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2187033689' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:48:56 compute-0 ceph-mon[75140]: pgmap v993: 305 pgs: 305 active+clean; 123 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 730 KiB/s rd, 2.1 MiB/s wr, 110 op/s
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.463 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442536.4632797, bfbba9d8-efe9-46fa-a6a4-2431c70eab06 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.464 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] VM Resumed (Lifecycle Event)
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.466 239969 DEBUG nova.compute.manager [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.467 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.470 239969 INFO nova.virt.libvirt.driver [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance spawned successfully.
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.470 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.487 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.489 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.489 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.490 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.490 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.491 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.491 239969 DEBUG nova.virt.libvirt.driver [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.496 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.529 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.529 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442536.4662507, bfbba9d8-efe9-46fa-a6a4-2431c70eab06 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.530 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] VM Started (Lifecycle Event)
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.555 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.559 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.565 239969 INFO nova.compute.manager [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Took 2.99 seconds to spawn the instance on the hypervisor.
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.565 239969 DEBUG nova.compute.manager [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.577 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.628 239969 INFO nova.compute.manager [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Took 4.51 seconds to build instance.
Jan 26 15:48:56 compute-0 nova_compute[239965]: 2026-01-26 15:48:56.642 239969 DEBUG oslo_concurrency.lockutils [None req-dc58436e-9084-4841-9c1e-1de16b4ebe8c 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v994: 305 pgs: 305 active+clean; 99 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.2 MiB/s wr, 158 op/s
Jan 26 15:48:57 compute-0 nova_compute[239965]: 2026-01-26 15:48:57.872 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:48:58 compute-0 nova_compute[239965]: 2026-01-26 15:48:58.434 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442523.433588, 24110c5c-68c9-4c53-8028-d45b792ba592 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:48:58 compute-0 nova_compute[239965]: 2026-01-26 15:48:58.435 239969 INFO nova.compute.manager [-] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] VM Stopped (Lifecycle Event)
Jan 26 15:48:58 compute-0 nova_compute[239965]: 2026-01-26 15:48:58.454 239969 DEBUG nova.compute.manager [None req-f1e98eb8-0717-4498-a4cb-f02488fd79d4 - - - - - -] [instance: 24110c5c-68c9-4c53-8028-d45b792ba592] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:48:58 compute-0 ceph-mon[75140]: pgmap v994: 305 pgs: 305 active+clean; 99 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 3.2 MiB/s wr, 158 op/s
Jan 26 15:48:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:48:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:59.209 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:48:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:59.210 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:48:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:48:59.211 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:48:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 153 op/s
Jan 26 15:48:59 compute-0 nova_compute[239965]: 2026-01-26 15:48:59.657 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:49:00 compute-0 nova_compute[239965]: 2026-01-26 15:49:00.479 239969 INFO nova.compute.manager [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Rebuilding instance
Jan 26 15:49:00 compute-0 ceph-mon[75140]: pgmap v995: 305 pgs: 305 active+clean; 88 MiB data, 257 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.0 MiB/s wr, 153 op/s
Jan 26 15:49:00 compute-0 nova_compute[239965]: 2026-01-26 15:49:00.727 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'trusted_certs' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:00 compute-0 nova_compute[239965]: 2026-01-26 15:49:00.745 239969 DEBUG nova.compute.manager [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:00 compute-0 nova_compute[239965]: 2026-01-26 15:49:00.788 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'pci_requests' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:00 compute-0 nova_compute[239965]: 2026-01-26 15:49:00.802 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'pci_devices' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:00 compute-0 nova_compute[239965]: 2026-01-26 15:49:00.814 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'resources' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:00 compute-0 nova_compute[239965]: 2026-01-26 15:49:00.829 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'migration_context' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:00 compute-0 nova_compute[239965]: 2026-01-26 15:49:00.841 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 15:49:00 compute-0 nova_compute[239965]: 2026-01-26 15:49:00.845 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 15:49:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Jan 26 15:49:02 compute-0 ceph-mon[75140]: pgmap v996: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 144 op/s
Jan 26 15:49:02 compute-0 nova_compute[239965]: 2026-01-26 15:49:02.874 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.563 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.674 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.730 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Acquiring lock "cd0921bb-0089-4327-a253-f47f4fdbade5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.730 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "cd0921bb-0089-4327-a253-f47f4fdbade5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.761 239969 DEBUG nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.838 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.839 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.844 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.844 239969 INFO nova.compute.claims [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:49:03 compute-0 nova_compute[239965]: 2026-01-26 15:49:03.960 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:04 compute-0 ceph-mon[75140]: pgmap v997: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 26 15:49:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444291670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.614 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.619 239969 DEBUG nova.compute.provider_tree [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.635 239969 DEBUG nova.scheduler.client.report [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.665 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.666 239969 DEBUG nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.833 239969 DEBUG nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.833 239969 DEBUG nova.network.neutron [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.855 239969 INFO nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.873 239969 DEBUG nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.961 239969 DEBUG nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.963 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.963 239969 INFO nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Creating image(s)
Jan 26 15:49:04 compute-0 nova_compute[239965]: 2026-01-26 15:49:04.985 239969 DEBUG nova.storage.rbd_utils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] rbd image cd0921bb-0089-4327-a253-f47f4fdbade5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.010 239969 DEBUG nova.storage.rbd_utils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] rbd image cd0921bb-0089-4327-a253-f47f4fdbade5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.035 239969 DEBUG nova.storage.rbd_utils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] rbd image cd0921bb-0089-4327-a253-f47f4fdbade5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.038 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.103 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.104 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.105 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.105 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.127 239969 DEBUG nova.storage.rbd_utils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] rbd image cd0921bb-0089-4327-a253-f47f4fdbade5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.132 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 cd0921bb-0089-4327-a253-f47f4fdbade5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/444291670' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.407 239969 DEBUG nova.network.neutron [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.408 239969 DEBUG nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.422 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 cd0921bb-0089-4327-a253-f47f4fdbade5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.488 239969 DEBUG nova.storage.rbd_utils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] resizing rbd image cd0921bb-0089-4327-a253-f47f4fdbade5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:49:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.721 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:05.724 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:49:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:05.726 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.792 239969 DEBUG nova.objects.instance [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lazy-loading 'migration_context' on Instance uuid cd0921bb-0089-4327-a253-f47f4fdbade5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.819 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.820 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Ensure instance console log exists: /var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.820 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.821 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.821 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.823 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.829 239969 WARNING nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.834 239969 DEBUG nova.virt.libvirt.host [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.834 239969 DEBUG nova.virt.libvirt.host [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.837 239969 DEBUG nova.virt.libvirt.host [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.838 239969 DEBUG nova.virt.libvirt.host [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.838 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.839 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.839 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.839 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.840 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.840 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.840 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.841 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.841 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.841 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.842 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.842 239969 DEBUG nova.virt.hardware [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:49:05 compute-0 nova_compute[239965]: 2026-01-26 15:49:05.845 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:06 compute-0 ceph-mon[75140]: pgmap v998: 305 pgs: 305 active+clean; 88 MiB data, 252 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Jan 26 15:49:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1074181432' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:06 compute-0 nova_compute[239965]: 2026-01-26 15:49:06.442 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:06 compute-0 nova_compute[239965]: 2026-01-26 15:49:06.464 239969 DEBUG nova.storage.rbd_utils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] rbd image cd0921bb-0089-4327-a253-f47f4fdbade5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:06 compute-0 nova_compute[239965]: 2026-01-26 15:49:06.470 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/165270253' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.047 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.048 239969 DEBUG nova.objects.instance [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lazy-loading 'pci_devices' on Instance uuid cd0921bb-0089-4327-a253-f47f4fdbade5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.078 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <uuid>cd0921bb-0089-4327-a253-f47f4fdbade5</uuid>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <name>instance-00000009</name>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <nova:name>tempest-TenantUsagesTestJSON-server-1881329707</nova:name>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:49:05</nova:creationTime>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <nova:user uuid="5473c1e640744a5fbbf18df72d9452d3">tempest-TenantUsagesTestJSON-1510042954-project-member</nova:user>
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <nova:project uuid="995f47174e6f434ca7b48631fa4dfdaa">tempest-TenantUsagesTestJSON-1510042954</nova:project>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <system>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <entry name="serial">cd0921bb-0089-4327-a253-f47f4fdbade5</entry>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <entry name="uuid">cd0921bb-0089-4327-a253-f47f4fdbade5</entry>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     </system>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <os>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   </os>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <features>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   </features>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/cd0921bb-0089-4327-a253-f47f4fdbade5_disk">
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/cd0921bb-0089-4327-a253-f47f4fdbade5_disk.config">
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:07 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5/console.log" append="off"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <video>
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     </video>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:49:07 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:49:07 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:49:07 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:49:07 compute-0 nova_compute[239965]: </domain>
Jan 26 15:49:07 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.149 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.149 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.150 239969 INFO nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Using config drive
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.167 239969 DEBUG nova.storage.rbd_utils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] rbd image cd0921bb-0089-4327-a253-f47f4fdbade5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Jan 26 15:49:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Jan 26 15:49:07 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Jan 26 15:49:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1074181432' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/165270253' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.427 239969 INFO nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Creating config drive at /var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5/disk.config
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.432 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9s2nebvg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.555 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9s2nebvg" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.580 239969 DEBUG nova.storage.rbd_utils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] rbd image cd0921bb-0089-4327-a253-f47f4fdbade5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.585 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5/disk.config cd0921bb-0089-4327-a253-f47f4fdbade5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 130 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.699 239969 DEBUG oslo_concurrency.processutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5/disk.config cd0921bb-0089-4327-a253-f47f4fdbade5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.114s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.700 239969 INFO nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Deleting local config drive /var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5/disk.config because it was imported into RBD.
Jan 26 15:49:07 compute-0 systemd-machined[208061]: New machine qemu-9-instance-00000009.
Jan 26 15:49:07 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.848 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442532.8477418, dd8f3e21-de0a-484c-87ee-6c918b3ffb57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.849 239969 INFO nova.compute.manager [-] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] VM Stopped (Lifecycle Event)
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.868 239969 DEBUG nova.compute.manager [None req-e609fffd-3e2d-4d01-b1ce-1a583fab171c - - - - - -] [instance: dd8f3e21-de0a-484c-87ee-6c918b3ffb57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:07 compute-0 nova_compute[239965]: 2026-01-26 15:49:07.877 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:08 compute-0 ceph-mon[75140]: osdmap e150: 3 total, 3 up, 3 in
Jan 26 15:49:08 compute-0 ceph-mon[75140]: pgmap v1000: 305 pgs: 305 active+clean; 130 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 130 op/s
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.313 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442548.3133059, cd0921bb-0089-4327-a253-f47f4fdbade5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.314 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] VM Resumed (Lifecycle Event)
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.316 239969 DEBUG nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.316 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.321 239969 INFO nova.virt.libvirt.driver [-] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Instance spawned successfully.
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.322 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.336 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.343 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.349 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.351 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.352 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.352 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.353 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.353 239969 DEBUG nova.virt.libvirt.driver [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.362 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.362 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442548.315678, cd0921bb-0089-4327-a253-f47f4fdbade5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.363 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] VM Started (Lifecycle Event)
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.385 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.389 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.424 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.442 239969 INFO nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Took 3.48 seconds to spawn the instance on the hypervisor.
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.442 239969 DEBUG nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.508 239969 INFO nova.compute.manager [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Took 4.69 seconds to build instance.
Jan 26 15:49:08 compute-0 nova_compute[239965]: 2026-01-26 15:49:08.528 239969 DEBUG oslo_concurrency.lockutils [None req-f42c0f9a-970f-460d-9ee8-a49814049113 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "cd0921bb-0089-4327-a253-f47f4fdbade5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 134 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1003 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 26 15:49:09 compute-0 nova_compute[239965]: 2026-01-26 15:49:09.662 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:09.727 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:49:10 compute-0 ceph-mon[75140]: pgmap v1001: 305 pgs: 305 active+clean; 134 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 1003 KiB/s rd, 2.1 MiB/s wr, 82 op/s
Jan 26 15:49:10 compute-0 nova_compute[239965]: 2026-01-26 15:49:10.889 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 15:49:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 148 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.2 MiB/s wr, 105 op/s
Jan 26 15:49:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Jan 26 15:49:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Jan 26 15:49:11 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Jan 26 15:49:12 compute-0 ceph-mon[75140]: pgmap v1002: 305 pgs: 305 active+clean; 148 MiB data, 284 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.2 MiB/s wr, 105 op/s
Jan 26 15:49:12 compute-0 ceph-mon[75140]: osdmap e151: 3 total, 3 up, 3 in
Jan 26 15:49:12 compute-0 nova_compute[239965]: 2026-01-26 15:49:12.880 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 26 15:49:13 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 13.353s CPU time.
Jan 26 15:49:13 compute-0 systemd-machined[208061]: Machine qemu-8-instance-00000008 terminated.
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.142 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Acquiring lock "cd0921bb-0089-4327-a253-f47f4fdbade5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.143 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "cd0921bb-0089-4327-a253-f47f4fdbade5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.143 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Acquiring lock "cd0921bb-0089-4327-a253-f47f4fdbade5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.143 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "cd0921bb-0089-4327-a253-f47f4fdbade5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.144 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "cd0921bb-0089-4327-a253-f47f4fdbade5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.145 239969 INFO nova.compute.manager [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Terminating instance
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.146 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Acquiring lock "refresh_cache-cd0921bb-0089-4327-a253-f47f4fdbade5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.146 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Acquired lock "refresh_cache-cd0921bb-0089-4327-a253-f47f4fdbade5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.146 239969 DEBUG nova.network.neutron [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.392 239969 DEBUG nova.network.neutron [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.9 MiB/s wr, 299 op/s
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.732 239969 DEBUG nova.network.neutron [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.902 239969 INFO nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance shutdown successfully after 13 seconds.
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.909 239969 INFO nova.virt.libvirt.driver [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance destroyed successfully.
Jan 26 15:49:13 compute-0 nova_compute[239965]: 2026-01-26 15:49:13.914 239969 INFO nova.virt.libvirt.driver [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance destroyed successfully.
Jan 26 15:49:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.192 239969 INFO nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deleting instance files /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_del
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.193 239969 INFO nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deletion of /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_del complete
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.227 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Releasing lock "refresh_cache-cd0921bb-0089-4327-a253-f47f4fdbade5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.227 239969 DEBUG nova.compute.manager [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.350 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.351 239969 INFO nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Creating image(s)
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.371 239969 DEBUG nova.storage.rbd_utils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.393 239969 DEBUG nova.storage.rbd_utils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.415 239969 DEBUG nova.storage.rbd_utils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.419 239969 DEBUG oslo_concurrency.lockutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.420 239969 DEBUG oslo_concurrency.lockutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:14 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 26 15:49:14 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 6.409s CPU time.
Jan 26 15:49:14 compute-0 systemd-machined[208061]: Machine qemu-9-instance-00000009 terminated.
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.645 239969 INFO nova.virt.libvirt.driver [-] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Instance destroyed successfully.
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.646 239969 DEBUG nova.objects.instance [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lazy-loading 'resources' on Instance uuid cd0921bb-0089-4327-a253-f47f4fdbade5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:14 compute-0 nova_compute[239965]: 2026-01-26 15:49:14.664 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:14 compute-0 ceph-mon[75140]: pgmap v1004: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.9 MiB/s wr, 299 op/s
Jan 26 15:49:15 compute-0 nova_compute[239965]: 2026-01-26 15:49:15.068 239969 DEBUG nova.virt.libvirt.imagebackend [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Image locations are: [{'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/24ff5bfa-2bf0-4d76-ba05-fc857cd2108f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/24ff5bfa-2bf0-4d76-ba05-fc857cd2108f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 15:49:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.7 MiB/s wr, 282 op/s
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.682 239969 INFO nova.virt.libvirt.driver [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Deleting instance files /var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5_del
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.682 239969 INFO nova.virt.libvirt.driver [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Deletion of /var/lib/nova/instances/cd0921bb-0089-4327-a253-f47f4fdbade5_del complete
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.747 239969 INFO nova.compute.manager [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Took 2.52 seconds to destroy the instance on the hypervisor.
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.747 239969 DEBUG oslo.service.loopingcall [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.748 239969 DEBUG nova.compute.manager [-] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.748 239969 DEBUG nova.network.neutron [-] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:49:16 compute-0 ceph-mon[75140]: pgmap v1005: 305 pgs: 305 active+clean; 167 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.7 MiB/s wr, 282 op/s
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.891 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.909 239969 DEBUG nova.network.neutron [-] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.925 239969 DEBUG nova.network.neutron [-] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.951 239969 INFO nova.compute.manager [-] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Took 0.20 seconds to deallocate network for instance.
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.952 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.956 239969 DEBUG nova.virt.images [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] 24ff5bfa-2bf0-4d76-ba05-fc857cd2108f was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.957 239969 DEBUG nova.privsep.utils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 26 15:49:16 compute-0 nova_compute[239965]: 2026-01-26 15:49:16.957 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a.part /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.029 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.030 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.113 239969 DEBUG oslo_concurrency.processutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.347 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a.part /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a.converted" returned: 0 in 0.389s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.352 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.433 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a.converted --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.435 239969 DEBUG oslo_concurrency.lockutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.456 239969 DEBUG nova.storage.rbd_utils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.461 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.542 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 15:49:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 78 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 246 op/s
Jan 26 15:49:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/229969411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.692 239969 DEBUG oslo_concurrency.processutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.697 239969 DEBUG nova.compute.provider_tree [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.718 239969 DEBUG nova.scheduler.client.report [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.751 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.783 239969 INFO nova.scheduler.client.report [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Deleted allocations for instance cd0921bb-0089-4327-a253-f47f4fdbade5
Jan 26 15:49:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/229969411' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.850 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.851 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.879 239969 DEBUG oslo_concurrency.lockutils [None req-00ac397a-7d50-4f16-9cf8-367d69e582a9 5473c1e640744a5fbbf18df72d9452d3 995f47174e6f434ca7b48631fa4dfdaa - - default default] Lock "cd0921bb-0089-4327-a253-f47f4fdbade5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.881 239969 DEBUG nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.884 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.945 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.946 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.952 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:49:17 compute-0 nova_compute[239965]: 2026-01-26 15:49:17.952 239969 INFO nova.compute.claims [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.020 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.085 239969 DEBUG nova.storage.rbd_utils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] resizing rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.132 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.189 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.191 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Ensure instance console log exists: /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.192 239969 DEBUG oslo_concurrency.lockutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.192 239969 DEBUG oslo_concurrency.lockutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.193 239969 DEBUG oslo_concurrency.lockutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.194 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.198 239969 WARNING nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.205 239969 DEBUG nova.virt.libvirt.host [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.205 239969 DEBUG nova.virt.libvirt.host [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.211 239969 DEBUG nova.virt.libvirt.host [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.211 239969 DEBUG nova.virt.libvirt.host [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.211 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.212 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.212 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.212 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.213 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.213 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.213 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.214 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.214 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.214 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.215 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.215 239969 DEBUG nova.virt.hardware [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.215 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'vcpu_model' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.236 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658790247' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.743 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.749 239969 DEBUG nova.compute.provider_tree [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/295230618' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.855 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.876 239969 DEBUG nova.storage.rbd_utils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.880 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:18 compute-0 ceph-mon[75140]: pgmap v1006: 305 pgs: 305 active+clean; 78 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.0 MiB/s wr, 246 op/s
Jan 26 15:49:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1658790247' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/295230618' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:18 compute-0 nova_compute[239965]: 2026-01-26 15:49:18.960 239969 DEBUG nova.scheduler.client.report [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Jan 26 15:49:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Jan 26 15:49:19 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.145 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.146 239969 DEBUG nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.196 239969 DEBUG nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.197 239969 DEBUG nova.network.neutron [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.251 239969 INFO nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.293 239969 DEBUG nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:49:19 compute-0 podman[252043]: 2026-01-26 15:49:19.392539452 +0000 UTC m=+0.079683162 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:49:19 compute-0 podman[252044]: 2026-01-26 15:49:19.398897997 +0000 UTC m=+0.086272294 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.399 239969 DEBUG nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.400 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.400 239969 INFO nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Creating image(s)
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.433 239969 DEBUG nova.storage.rbd_utils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1311755680' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.458 239969 DEBUG nova.storage.rbd_utils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.478 239969 DEBUG nova.storage.rbd_utils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.481 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.501 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.504 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <uuid>bfbba9d8-efe9-46fa-a6a4-2431c70eab06</uuid>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <name>instance-00000008</name>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdmin275Test-server-141780407</nova:name>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:49:18</nova:creationTime>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <nova:user uuid="2d53086521a447778c494a2f71fa9685">tempest-ServersAdmin275Test-1784917804-project-member</nova:user>
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <nova:project uuid="19e7b3737799454e9d511751ed18060a">tempest-ServersAdmin275Test-1784917804</nova:project>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <system>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <entry name="serial">bfbba9d8-efe9-46fa-a6a4-2431c70eab06</entry>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <entry name="uuid">bfbba9d8-efe9-46fa-a6a4-2431c70eab06</entry>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     </system>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <os>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   </os>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <features>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   </features>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk">
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config">
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:19 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/console.log" append="off"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <video>
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     </video>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:49:19 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:49:19 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:49:19 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:49:19 compute-0 nova_compute[239965]: </domain>
Jan 26 15:49:19 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.543 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.543 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.544 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.544 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.565 239969 DEBUG nova.storage.rbd_utils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.569 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 264 op/s
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.600 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.601 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.601 239969 INFO nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Using config drive
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.638 239969 DEBUG nova.storage.rbd_utils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.657 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'ec2_ids' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.665 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.724 239969 DEBUG nova.network.neutron [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.725 239969 DEBUG nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.732 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'keypairs' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.835 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.889 239969 DEBUG nova.storage.rbd_utils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] resizing rbd image 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.915 239969 INFO nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Creating config drive at /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.921 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdzyzwmdr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.973 239969 DEBUG nova.objects.instance [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lazy-loading 'migration_context' on Instance uuid 9dfd4fd6-d77b-4c05-bc93-07a935c53beb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.987 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.988 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Ensure instance console log exists: /var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.988 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.989 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.989 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.991 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.995 239969 WARNING nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:49:19 compute-0 nova_compute[239965]: 2026-01-26 15:49:19.999 239969 DEBUG nova.virt.libvirt.host [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.000 239969 DEBUG nova.virt.libvirt.host [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.002 239969 DEBUG nova.virt.libvirt.host [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.003 239969 DEBUG nova.virt.libvirt.host [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.003 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.003 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.004 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.004 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.004 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.005 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.005 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.005 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.006 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.006 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.006 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.007 239969 DEBUG nova.virt.hardware [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.009 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.047 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdzyzwmdr" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.071 239969 DEBUG nova.storage.rbd_utils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.075 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:20 compute-0 ceph-mon[75140]: osdmap e152: 3 total, 3 up, 3 in
Jan 26 15:49:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1311755680' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:20 compute-0 ceph-mon[75140]: pgmap v1008: 305 pgs: 305 active+clean; 41 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 1.9 MiB/s wr, 264 op/s
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.198 239969 DEBUG oslo_concurrency.processutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.199 239969 INFO nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deleting local config drive /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config because it was imported into RBD.
Jan 26 15:49:20 compute-0 systemd-machined[208061]: New machine qemu-10-instance-00000008.
Jan 26 15:49:20 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-00000008.
Jan 26 15:49:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/417391997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.562 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.590 239969 DEBUG nova.storage.rbd_utils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.596 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.871 239969 DEBUG nova.compute.manager [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.872 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.873 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for bfbba9d8-efe9-46fa-a6a4-2431c70eab06 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.873 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442560.8710394, bfbba9d8-efe9-46fa-a6a4-2431c70eab06 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.873 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] VM Resumed (Lifecycle Event)
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.889 239969 INFO nova.virt.libvirt.driver [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance spawned successfully.
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.890 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.905 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.919 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.926 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.927 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.927 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.928 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.928 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.929 239969 DEBUG nova.virt.libvirt.driver [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.965 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.965 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442560.8711674, bfbba9d8-efe9-46fa-a6a4-2431c70eab06 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:20 compute-0 nova_compute[239965]: 2026-01-26 15:49:20.966 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] VM Started (Lifecycle Event)
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.003 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.007 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.013 239969 DEBUG nova.compute.manager [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.040 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.079 239969 DEBUG oslo_concurrency.lockutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.081 239969 DEBUG oslo_concurrency.lockutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.082 239969 DEBUG nova.objects.instance [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 15:49:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/417391997' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.165 239969 DEBUG oslo_concurrency.lockutils [None req-d159d2a8-3aac-4a3d-ae4b-ebe9da7f57c4 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/546111046' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.269 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.271 239969 DEBUG nova.objects.instance [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9dfd4fd6-d77b-4c05-bc93-07a935c53beb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.293 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <uuid>9dfd4fd6-d77b-4c05-bc93-07a935c53beb</uuid>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <name>instance-0000000a</name>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-1992583441</nova:name>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:49:19</nova:creationTime>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <nova:user uuid="5ec5df94d484458495b5f09f2cc15d29">tempest-DeleteServersAdminTestJSON-2035503796-project-member</nova:user>
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <nova:project uuid="7acace6b3e054d8baa3941910b0928b9">tempest-DeleteServersAdminTestJSON-2035503796</nova:project>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <system>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <entry name="serial">9dfd4fd6-d77b-4c05-bc93-07a935c53beb</entry>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <entry name="uuid">9dfd4fd6-d77b-4c05-bc93-07a935c53beb</entry>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     </system>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <os>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   </os>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <features>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   </features>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk">
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk.config">
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:21 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb/console.log" append="off"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <video>
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     </video>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:49:21 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:49:21 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:49:21 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:49:21 compute-0 nova_compute[239965]: </domain>
Jan 26 15:49:21 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.353 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.354 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.355 239969 INFO nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Using config drive
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.383 239969 DEBUG nova.storage.rbd_utils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 15:49:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 54 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 247 op/s
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.615 239969 INFO nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Creating config drive at /var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb/disk.config
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.620 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy5ezqt4p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.745 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy5ezqt4p" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.775 239969 DEBUG nova.storage.rbd_utils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:21 compute-0 nova_compute[239965]: 2026-01-26 15:49:21.779 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb/disk.config 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.128 239969 DEBUG oslo_concurrency.processutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb/disk.config 9dfd4fd6-d77b-4c05-bc93-07a935c53beb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.129 239969 INFO nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Deleting local config drive /var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb/disk.config because it was imported into RBD.
Jan 26 15:49:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/546111046' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:22 compute-0 ceph-mon[75140]: pgmap v1009: 305 pgs: 305 active+clean; 54 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.3 MiB/s wr, 247 op/s
Jan 26 15:49:22 compute-0 systemd-machined[208061]: New machine qemu-11-instance-0000000a.
Jan 26 15:49:22 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.628 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442562.6277914, 9dfd4fd6-d77b-4c05-bc93-07a935c53beb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.630 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] VM Resumed (Lifecycle Event)
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.633 239969 DEBUG nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.633 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.637 239969 INFO nova.virt.libvirt.driver [-] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Instance spawned successfully.
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.638 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.657 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.667 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.672 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.673 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.674 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.674 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.675 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.675 239969 DEBUG nova.virt.libvirt.driver [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.698 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.698 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442562.6315737, 9dfd4fd6-d77b-4c05-bc93-07a935c53beb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.699 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] VM Started (Lifecycle Event)
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.806 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.810 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.833 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.887 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.895 239969 INFO nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Took 3.50 seconds to spawn the instance on the hypervisor.
Jan 26 15:49:22 compute-0 nova_compute[239965]: 2026-01-26 15:49:22.896 239969 DEBUG nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.062 239969 INFO nova.compute.manager [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Took 5.13 seconds to build instance.
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.234 239969 DEBUG oslo_concurrency.lockutils [None req-90734022-9667-4814-babe-e2c3222a8359 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.383s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 220 op/s
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.739 239969 INFO nova.compute.manager [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Rebuilding instance
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.776 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Acquiring lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.776 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.777 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Acquiring lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.777 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.777 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.779 239969 INFO nova.compute.manager [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Terminating instance
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.780 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Acquiring lock "refresh_cache-9dfd4fd6-d77b-4c05-bc93-07a935c53beb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.780 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Acquired lock "refresh_cache-9dfd4fd6-d77b-4c05-bc93-07a935c53beb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.781 239969 DEBUG nova.network.neutron [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.832 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.860 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Triggering sync for uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.861 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Triggering sync for uuid 9dfd4fd6-d77b-4c05-bc93-07a935c53beb _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.862 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.862 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.862 239969 INFO nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] During sync_power_state the instance has a pending task (rebuilding). Skip.
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.863 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:23 compute-0 nova_compute[239965]: 2026-01-26 15:49:23.863 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.062 239969 DEBUG nova.network.neutron [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.294 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.311 239969 DEBUG nova.compute.manager [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.313 239969 DEBUG nova.network.neutron [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.337 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Releasing lock "refresh_cache-9dfd4fd6-d77b-4c05-bc93-07a935c53beb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.337 239969 DEBUG nova.compute.manager [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.366 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lazy-loading 'pci_requests' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.380 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lazy-loading 'pci_devices' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.395 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lazy-loading 'resources' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.406 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lazy-loading 'migration_context' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.418 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.421 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 15:49:24 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 26 15:49:24 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 2.207s CPU time.
Jan 26 15:49:24 compute-0 systemd-machined[208061]: Machine qemu-11-instance-0000000a terminated.
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.542 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.558 239969 INFO nova.virt.libvirt.driver [-] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Instance destroyed successfully.
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.559 239969 DEBUG nova.objects.instance [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Lazy-loading 'resources' on Instance uuid 9dfd4fd6-d77b-4c05-bc93-07a935c53beb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:24 compute-0 ceph-mon[75140]: pgmap v1010: 305 pgs: 305 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 220 op/s
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.667 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.878 239969 INFO nova.virt.libvirt.driver [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Deleting instance files /var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb_del
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.879 239969 INFO nova.virt.libvirt.driver [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Deletion of /var/lib/nova/instances/9dfd4fd6-d77b-4c05-bc93-07a935c53beb_del complete
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.936 239969 INFO nova.compute.manager [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.936 239969 DEBUG oslo.service.loopingcall [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.938 239969 DEBUG nova.compute.manager [-] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:49:24 compute-0 nova_compute[239965]: 2026-01-26 15:49:24.938 239969 DEBUG nova.network.neutron [-] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:49:25 compute-0 nova_compute[239965]: 2026-01-26 15:49:25.179 239969 DEBUG nova.network.neutron [-] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:25 compute-0 nova_compute[239965]: 2026-01-26 15:49:25.193 239969 DEBUG nova.network.neutron [-] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:25 compute-0 nova_compute[239965]: 2026-01-26 15:49:25.207 239969 INFO nova.compute.manager [-] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Took 0.27 seconds to deallocate network for instance.
Jan 26 15:49:25 compute-0 nova_compute[239965]: 2026-01-26 15:49:25.254 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:25 compute-0 nova_compute[239965]: 2026-01-26 15:49:25.255 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:25 compute-0 nova_compute[239965]: 2026-01-26 15:49:25.475 239969 DEBUG oslo_concurrency.processutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:25 compute-0 nova_compute[239965]: 2026-01-26 15:49:25.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:25 compute-0 nova_compute[239965]: 2026-01-26 15:49:25.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:25 compute-0 nova_compute[239965]: 2026-01-26 15:49:25.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 220 op/s
Jan 26 15:49:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2761800528' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.250 239969 DEBUG oslo_concurrency.processutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.256 239969 DEBUG nova.compute.provider_tree [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.271 239969 DEBUG nova.scheduler.client.report [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:26 compute-0 ceph-mon[75140]: pgmap v1011: 305 pgs: 305 active+clean; 134 MiB data, 274 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 220 op/s
Jan 26 15:49:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2761800528' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.294 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.297 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.297 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.297 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.297 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.338 239969 INFO nova.scheduler.client.report [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Deleted allocations for instance 9dfd4fd6-d77b-4c05-bc93-07a935c53beb
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.400 239969 DEBUG oslo_concurrency.lockutils [None req-96dfceb3-555d-4e48-9279-3ad137d561ab 28aece23d5ce4c2e88a03f20a7ff2c77 f825fc11018c45268a788a132668d17e - - default default] Lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.402 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.402 239969 INFO nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.402 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "9dfd4fd6-d77b-4c05-bc93-07a935c53beb" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1460521651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.871 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.933 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:49:26 compute-0 nova_compute[239965]: 2026-01-26 15:49:26.933 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.069 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.070 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4592MB free_disk=59.9465515865013GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.070 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.070 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.128 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance bfbba9d8-efe9-46fa-a6a4-2431c70eab06 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.129 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.129 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.168 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1460521651' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 108 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.3 MiB/s wr, 294 op/s
Jan 26 15:49:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4038434840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.776 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.782 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.796 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.814 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.814 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:27 compute-0 nova_compute[239965]: 2026-01-26 15:49:27.889 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:28 compute-0 ceph-mon[75140]: pgmap v1012: 305 pgs: 305 active+clean; 108 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.3 MiB/s wr, 294 op/s
Jan 26 15:49:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4038434840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:49:28
Jan 26 15:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta']
Jan 26 15:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:49:28 compute-0 nova_compute[239965]: 2026-01-26 15:49:28.806 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:28 compute-0 nova_compute[239965]: 2026-01-26 15:49:28.831 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:28 compute-0 nova_compute[239965]: 2026-01-26 15:49:28.831 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:49:28 compute-0 nova_compute[239965]: 2026-01-26 15:49:28.832 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:49:28 compute-0 nova_compute[239965]: 2026-01-26 15:49:28.850 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-bfbba9d8-efe9-46fa-a6a4-2431c70eab06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:49:28 compute-0 nova_compute[239965]: 2026-01-26 15:49:28.851 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-bfbba9d8-efe9-46fa-a6a4-2431c70eab06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:49:28 compute-0 nova_compute[239965]: 2026-01-26 15:49:28.851 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 15:49:28 compute-0 nova_compute[239965]: 2026-01-26 15:49:28.851 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.263 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.358 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "05fe14be-b7c6-4ea7-b739-3ad97c924a51" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.359 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "05fe14be-b7c6-4ea7-b739-3ad97c924a51" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.382 239969 DEBUG nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.455 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.455 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.462 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.462 239969 INFO nova.compute.claims [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.556 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.570 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-bfbba9d8-efe9-46fa-a6a4-2431c70eab06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.570 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.571 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.571 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.571 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.571 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.592 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.1 MiB/s wr, 258 op/s
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.644 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442554.6430724, cd0921bb-0089-4327-a253-f47f4fdbade5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.645 239969 INFO nova.compute.manager [-] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] VM Stopped (Lifecycle Event)
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.666 239969 DEBUG nova.compute.manager [None req-b6c8b990-4089-42de-9e47-4d163c17b8b1 - - - - - -] [instance: cd0921bb-0089-4327-a253-f47f4fdbade5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:29 compute-0 nova_compute[239965]: 2026-01-26 15:49:29.672 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172324769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.229 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.236 239969 DEBUG nova.compute.provider_tree [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.263 239969 DEBUG nova.scheduler.client.report [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.268 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.292 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.293 239969 DEBUG nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.351 239969 DEBUG nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.351 239969 DEBUG nova.network.neutron [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.369 239969 INFO nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.391 239969 DEBUG nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.488 239969 DEBUG nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.491 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.491 239969 INFO nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Creating image(s)
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:49:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.522 239969 DEBUG nova.storage.rbd_utils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.548 239969 DEBUG nova.storage.rbd_utils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.574 239969 DEBUG nova.storage.rbd_utils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.578 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.639 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.641 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.642 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.642 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:30 compute-0 ceph-mon[75140]: pgmap v1013: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.1 MiB/s wr, 258 op/s
Jan 26 15:49:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1172324769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.668 239969 DEBUG nova.storage.rbd_utils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.673 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.842 239969 DEBUG nova.network.neutron [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.843 239969 DEBUG nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:49:30 compute-0 nova_compute[239965]: 2026-01-26 15:49:30.939 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.014 239969 DEBUG nova.storage.rbd_utils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] resizing rbd image 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.108 239969 DEBUG nova.objects.instance [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lazy-loading 'migration_context' on Instance uuid 05fe14be-b7c6-4ea7-b739-3ad97c924a51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.145 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.146 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Ensure instance console log exists: /var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.146 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.147 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.147 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.149 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.154 239969 WARNING nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.159 239969 DEBUG nova.virt.libvirt.host [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.160 239969 DEBUG nova.virt.libvirt.host [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.164 239969 DEBUG nova.virt.libvirt.host [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.165 239969 DEBUG nova.virt.libvirt.host [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.165 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.166 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.166 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.167 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.167 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.167 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.168 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.168 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.168 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.169 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.169 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.170 239969 DEBUG nova.virt.hardware [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.173 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.6 MiB/s wr, 224 op/s
Jan 26 15:49:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1314439208' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.815 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:31 compute-0 nova_compute[239965]: 2026-01-26 15:49:31.992 239969 DEBUG nova.storage.rbd_utils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.001 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713870508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.645 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.647 239969 DEBUG nova.objects.instance [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 05fe14be-b7c6-4ea7-b739-3ad97c924a51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.666 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <uuid>05fe14be-b7c6-4ea7-b739-3ad97c924a51</uuid>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <name>instance-0000000b</name>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <nova:name>tempest-DeleteServersAdminTestJSON-server-984767260</nova:name>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:49:31</nova:creationTime>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <nova:user uuid="5ec5df94d484458495b5f09f2cc15d29">tempest-DeleteServersAdminTestJSON-2035503796-project-member</nova:user>
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <nova:project uuid="7acace6b3e054d8baa3941910b0928b9">tempest-DeleteServersAdminTestJSON-2035503796</nova:project>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <system>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <entry name="serial">05fe14be-b7c6-4ea7-b739-3ad97c924a51</entry>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <entry name="uuid">05fe14be-b7c6-4ea7-b739-3ad97c924a51</entry>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     </system>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <os>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   </os>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <features>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   </features>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk">
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk.config">
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:32 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51/console.log" append="off"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <video>
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     </video>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:49:32 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:49:32 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:49:32 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:49:32 compute-0 nova_compute[239965]: </domain>
Jan 26 15:49:32 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:49:32 compute-0 ceph-mon[75140]: pgmap v1014: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.6 MiB/s wr, 224 op/s
Jan 26 15:49:32 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1314439208' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:32 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3713870508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.722 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.722 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.722 239969 INFO nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Using config drive
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.742 239969 DEBUG nova.storage.rbd_utils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.934 239969 INFO nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Creating config drive at /var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51/disk.config
Jan 26 15:49:32 compute-0 nova_compute[239965]: 2026-01-26 15:49:32.941 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3viktt8d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.068 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3viktt8d" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.095 239969 DEBUG nova.storage.rbd_utils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] rbd image 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.134 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51/disk.config 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.259 239969 DEBUG oslo_concurrency.processutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51/disk.config 05fe14be-b7c6-4ea7-b739-3ad97c924a51_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.260 239969 INFO nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Deleting local config drive /var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51/disk.config because it was imported into RBD.
Jan 26 15:49:33 compute-0 systemd-machined[208061]: New machine qemu-12-instance-0000000b.
Jan 26 15:49:33 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Jan 26 15:49:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 120 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 231 op/s
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.720 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442573.7197433, 05fe14be-b7c6-4ea7-b739-3ad97c924a51 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.721 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] VM Resumed (Lifecycle Event)
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.723 239969 DEBUG nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.724 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.727 239969 INFO nova.virt.libvirt.driver [-] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Instance spawned successfully.
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.728 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.745 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.751 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.755 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.755 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.755 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.756 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.756 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.757 239969 DEBUG nova.virt.libvirt.driver [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.789 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.790 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442573.7206676, 05fe14be-b7c6-4ea7-b739-3ad97c924a51 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.790 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] VM Started (Lifecycle Event)
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.820 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.824 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.828 239969 INFO nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Took 3.34 seconds to spawn the instance on the hypervisor.
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.829 239969 DEBUG nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.856 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.892 239969 INFO nova.compute.manager [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Took 4.45 seconds to build instance.
Jan 26 15:49:33 compute-0 nova_compute[239965]: 2026-01-26 15:49:33.911 239969 DEBUG oslo_concurrency.lockutils [None req-3175681d-231d-4c57-b5a7-2ea8b4e7dbf8 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "05fe14be-b7c6-4ea7-b739-3ad97c924a51" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:34 compute-0 nova_compute[239965]: 2026-01-26 15:49:34.468 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 15:49:34 compute-0 nova_compute[239965]: 2026-01-26 15:49:34.673 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:34 compute-0 ceph-mon[75140]: pgmap v1015: 305 pgs: 305 active+clean; 120 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 231 op/s
Jan 26 15:49:34 compute-0 sudo[253004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:49:34 compute-0 sudo[253004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:49:34 compute-0 sudo[253004]: pam_unix(sudo:session): session closed for user root
Jan 26 15:49:34 compute-0 sudo[253029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:49:34 compute-0 sudo[253029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:49:35 compute-0 sudo[253029]: pam_unix(sudo:session): session closed for user root
Jan 26 15:49:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:49:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:49:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:49:35 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:49:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:49:35 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:49:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:49:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:49:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:49:35 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:49:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:49:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:49:35 compute-0 sudo[253086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:49:35 compute-0 sudo[253086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:49:35 compute-0 sudo[253086]: pam_unix(sudo:session): session closed for user root
Jan 26 15:49:35 compute-0 sudo[253111]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:49:35 compute-0 sudo[253111]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:49:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 120 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 140 op/s
Jan 26 15:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.762 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "05fe14be-b7c6-4ea7-b739-3ad97c924a51" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.763 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "05fe14be-b7c6-4ea7-b739-3ad97c924a51" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.763 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "05fe14be-b7c6-4ea7-b739-3ad97c924a51-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.763 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "05fe14be-b7c6-4ea7-b739-3ad97c924a51-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.764 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "05fe14be-b7c6-4ea7-b739-3ad97c924a51-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.765 239969 INFO nova.compute.manager [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Terminating instance
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.765 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "refresh_cache-05fe14be-b7c6-4ea7-b739-3ad97c924a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.766 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquired lock "refresh_cache-05fe14be-b7c6-4ea7-b739-3ad97c924a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.766 239969 DEBUG nova.network.neutron [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:49:35 compute-0 nova_compute[239965]: 2026-01-26 15:49:35.935 239969 DEBUG nova.network.neutron [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:35 compute-0 podman[253148]: 2026-01-26 15:49:35.961584311 +0000 UTC m=+0.115634913 container create fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:49:35 compute-0 podman[253148]: 2026-01-26 15:49:35.869737322 +0000 UTC m=+0.023787934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:49:36 compute-0 systemd[1]: Started libpod-conmon-fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479.scope.
Jan 26 15:49:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:49:36 compute-0 podman[253148]: 2026-01-26 15:49:36.138633527 +0000 UTC m=+0.292684149 container init fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 15:49:36 compute-0 podman[253148]: 2026-01-26 15:49:36.145900105 +0000 UTC m=+0.299950707 container start fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 15:49:36 compute-0 systemd[1]: libpod-fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479.scope: Deactivated successfully.
Jan 26 15:49:36 compute-0 relaxed_proskuriakova[253165]: 167 167
Jan 26 15:49:36 compute-0 conmon[253165]: conmon fbb39aff861b4c610b1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479.scope/container/memory.events
Jan 26 15:49:36 compute-0 nova_compute[239965]: 2026-01-26 15:49:36.274 239969 DEBUG nova.network.neutron [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:36 compute-0 nova_compute[239965]: 2026-01-26 15:49:36.296 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Releasing lock "refresh_cache-05fe14be-b7c6-4ea7-b739-3ad97c924a51" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:49:36 compute-0 nova_compute[239965]: 2026-01-26 15:49:36.297 239969 DEBUG nova.compute.manager [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:49:36 compute-0 podman[253148]: 2026-01-26 15:49:36.300718707 +0000 UTC m=+0.454769309 container attach fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 26 15:49:36 compute-0 podman[253148]: 2026-01-26 15:49:36.301390463 +0000 UTC m=+0.455441085 container died fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 15:49:36 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 26 15:49:36 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 3.001s CPU time.
Jan 26 15:49:36 compute-0 systemd-machined[208061]: Machine qemu-12-instance-0000000b terminated.
Jan 26 15:49:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a0418254781852e9f5aabc07874383fbe193808bec3e7278a7883c4ddf7958d-merged.mount: Deactivated successfully.
Jan 26 15:49:36 compute-0 podman[253148]: 2026-01-26 15:49:36.491623161 +0000 UTC m=+0.645673763 container remove fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:49:36 compute-0 systemd[1]: libpod-conmon-fbb39aff861b4c610b1c12da48b1b51bbbf10f3eb43b120d13c1d2881d0a7479.scope: Deactivated successfully.
Jan 26 15:49:36 compute-0 nova_compute[239965]: 2026-01-26 15:49:36.519 239969 INFO nova.virt.libvirt.driver [-] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Instance destroyed successfully.
Jan 26 15:49:36 compute-0 nova_compute[239965]: 2026-01-26 15:49:36.520 239969 DEBUG nova.objects.instance [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lazy-loading 'resources' on Instance uuid 05fe14be-b7c6-4ea7-b739-3ad97c924a51 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:36 compute-0 podman[253210]: 2026-01-26 15:49:36.636433137 +0000 UTC m=+0.020771649 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:49:37 compute-0 podman[253210]: 2026-01-26 15:49:37.170505277 +0000 UTC m=+0.554843809 container create bde30a96a28daa1812adea546e54ee3897ecdfbfc41eab73aec22959a9765e75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 15:49:37 compute-0 ceph-mon[75140]: pgmap v1016: 305 pgs: 305 active+clean; 120 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 140 op/s
Jan 26 15:49:37 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 26 15:49:37 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000008.scope: Consumed 13.577s CPU time.
Jan 26 15:49:37 compute-0 systemd-machined[208061]: Machine qemu-10-instance-00000008 terminated.
Jan 26 15:49:37 compute-0 systemd[1]: Started libpod-conmon-bde30a96a28daa1812adea546e54ee3897ecdfbfc41eab73aec22959a9765e75.scope.
Jan 26 15:49:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92caa3106bc031313b37b9fd1d0a686cdc9843b3a5034395edb5ba1a4b4eb9bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92caa3106bc031313b37b9fd1d0a686cdc9843b3a5034395edb5ba1a4b4eb9bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92caa3106bc031313b37b9fd1d0a686cdc9843b3a5034395edb5ba1a4b4eb9bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92caa3106bc031313b37b9fd1d0a686cdc9843b3a5034395edb5ba1a4b4eb9bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92caa3106bc031313b37b9fd1d0a686cdc9843b3a5034395edb5ba1a4b4eb9bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.483 239969 INFO nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance shutdown successfully after 13 seconds.
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.489 239969 INFO nova.virt.libvirt.driver [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance destroyed successfully.
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.493 239969 INFO nova.virt.libvirt.driver [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance destroyed successfully.
Jan 26 15:49:37 compute-0 podman[253210]: 2026-01-26 15:49:37.518731735 +0000 UTC m=+0.903070227 container init bde30a96a28daa1812adea546e54ee3897ecdfbfc41eab73aec22959a9765e75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 15:49:37 compute-0 podman[253210]: 2026-01-26 15:49:37.528207966 +0000 UTC m=+0.912546448 container start bde30a96a28daa1812adea546e54ee3897ecdfbfc41eab73aec22959a9765e75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 15:49:37 compute-0 podman[253210]: 2026-01-26 15:49:37.539648526 +0000 UTC m=+0.923987038 container attach bde30a96a28daa1812adea546e54ee3897ecdfbfc41eab73aec22959a9765e75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 15:49:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 152 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.2 MiB/s wr, 228 op/s
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.913 239969 INFO nova.virt.libvirt.driver [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Deleting instance files /var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51_del
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.915 239969 INFO nova.virt.libvirt.driver [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Deletion of /var/lib/nova/instances/05fe14be-b7c6-4ea7-b739-3ad97c924a51_del complete
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.961 239969 INFO nova.compute.manager [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Took 1.66 seconds to destroy the instance on the hypervisor.
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.962 239969 DEBUG oslo.service.loopingcall [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.962 239969 DEBUG nova.compute.manager [-] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.963 239969 DEBUG nova.network.neutron [-] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.971 239969 INFO nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deleting instance files /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_del
Jan 26 15:49:37 compute-0 nova_compute[239965]: 2026-01-26 15:49:37.972 239969 INFO nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deletion of /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_del complete
Jan 26 15:49:38 compute-0 sweet_mclean[253227]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:49:38 compute-0 sweet_mclean[253227]: --> All data devices are unavailable
Jan 26 15:49:38 compute-0 systemd[1]: libpod-bde30a96a28daa1812adea546e54ee3897ecdfbfc41eab73aec22959a9765e75.scope: Deactivated successfully.
Jan 26 15:49:38 compute-0 podman[253210]: 2026-01-26 15:49:38.085841793 +0000 UTC m=+1.470180275 container died bde30a96a28daa1812adea546e54ee3897ecdfbfc41eab73aec22959a9765e75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.112 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.113 239969 INFO nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Creating image(s)
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.137 239969 DEBUG nova.storage.rbd_utils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.160 239969 DEBUG nova.storage.rbd_utils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:38 compute-0 ceph-mon[75140]: pgmap v1017: 305 pgs: 305 active+clean; 152 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.2 MiB/s wr, 228 op/s
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.214 239969 DEBUG nova.storage.rbd_utils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.220 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-92caa3106bc031313b37b9fd1d0a686cdc9843b3a5034395edb5ba1a4b4eb9bc-merged.mount: Deactivated successfully.
Jan 26 15:49:38 compute-0 podman[253210]: 2026-01-26 15:49:38.286497837 +0000 UTC m=+1.670836349 container remove bde30a96a28daa1812adea546e54ee3897ecdfbfc41eab73aec22959a9765e75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.291 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.292 239969 DEBUG oslo_concurrency.lockutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.293 239969 DEBUG oslo_concurrency.lockutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.294 239969 DEBUG oslo_concurrency.lockutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:38 compute-0 systemd[1]: libpod-conmon-bde30a96a28daa1812adea546e54ee3897ecdfbfc41eab73aec22959a9765e75.scope: Deactivated successfully.
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.315 239969 DEBUG nova.storage.rbd_utils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.318 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:38 compute-0 sudo[253111]: pam_unix(sudo:session): session closed for user root
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.335 239969 DEBUG nova.network.neutron [-] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.354 239969 DEBUG nova.network.neutron [-] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.381 239969 INFO nova.compute.manager [-] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Took 0.42 seconds to deallocate network for instance.
Jan 26 15:49:38 compute-0 sudo[253358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:49:38 compute-0 sudo[253358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:49:38 compute-0 sudo[253358]: pam_unix(sudo:session): session closed for user root
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.422 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.422 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:38 compute-0 sudo[253398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:49:38 compute-0 sudo[253398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.530 239969 DEBUG oslo_concurrency.processutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.686 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:38 compute-0 podman[253456]: 2026-01-26 15:49:38.741818977 +0000 UTC m=+0.046044649 container create 0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_driscoll, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.755 239969 DEBUG nova.storage.rbd_utils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] resizing rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:49:38 compute-0 systemd[1]: Started libpod-conmon-0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba.scope.
Jan 26 15:49:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:49:38 compute-0 podman[253456]: 2026-01-26 15:49:38.815260566 +0000 UTC m=+0.119486258 container init 0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_driscoll, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:49:38 compute-0 podman[253456]: 2026-01-26 15:49:38.722109355 +0000 UTC m=+0.026335077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:49:38 compute-0 podman[253456]: 2026-01-26 15:49:38.823840156 +0000 UTC m=+0.128065868 container start 0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:49:38 compute-0 podman[253456]: 2026-01-26 15:49:38.827993248 +0000 UTC m=+0.132218960 container attach 0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_driscoll, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:49:38 compute-0 systemd[1]: libpod-0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba.scope: Deactivated successfully.
Jan 26 15:49:38 compute-0 jolly_driscoll[253525]: 167 167
Jan 26 15:49:38 compute-0 conmon[253525]: conmon 0cd21f3a3c6aa66fdf13 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba.scope/container/memory.events
Jan 26 15:49:38 compute-0 podman[253456]: 2026-01-26 15:49:38.831159385 +0000 UTC m=+0.135385067 container died 0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.845 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.846 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Ensure instance console log exists: /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.846 239969 DEBUG oslo_concurrency.lockutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.847 239969 DEBUG oslo_concurrency.lockutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.847 239969 DEBUG oslo_concurrency.lockutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.848 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-33d70999b6cdea4d4f8728c7c8647c3562e6040e628a2d8a30e89e5cbfaf15b8-merged.mount: Deactivated successfully.
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.852 239969 WARNING nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.860 239969 DEBUG nova.virt.libvirt.host [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.861 239969 DEBUG nova.virt.libvirt.host [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.867 239969 DEBUG nova.virt.libvirt.host [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.868 239969 DEBUG nova.virt.libvirt.host [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:49:38 compute-0 podman[253456]: 2026-01-26 15:49:38.868608163 +0000 UTC m=+0.172833845 container remove 0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_driscoll, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.869 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.869 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.870 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.870 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.870 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.871 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.871 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.871 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.872 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.872 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.872 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.873 239969 DEBUG nova.virt.hardware [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.873 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:38 compute-0 systemd[1]: libpod-conmon-0cd21f3a3c6aa66fdf13a009d9e20e952cb38d29be3069d038de243613c3b5ba.scope: Deactivated successfully.
Jan 26 15:49:38 compute-0 nova_compute[239965]: 2026-01-26 15:49:38.893 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:39 compute-0 podman[253567]: 2026-01-26 15:49:39.026862978 +0000 UTC m=+0.040639166 container create 356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.060 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.062 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:39 compute-0 systemd[1]: Started libpod-conmon-356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3.scope.
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.081 239969 DEBUG nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:49:39 compute-0 podman[253567]: 2026-01-26 15:49:39.007524984 +0000 UTC m=+0.021301192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:49:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df14313fef6df713f61aa65699de323e923b9e3163ea920206d03185ee29fdd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/775559218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df14313fef6df713f61aa65699de323e923b9e3163ea920206d03185ee29fdd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df14313fef6df713f61aa65699de323e923b9e3163ea920206d03185ee29fdd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df14313fef6df713f61aa65699de323e923b9e3163ea920206d03185ee29fdd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:39 compute-0 podman[253567]: 2026-01-26 15:49:39.129713456 +0000 UTC m=+0.143489664 container init 356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.134 239969 DEBUG oslo_concurrency.processutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:39 compute-0 podman[253567]: 2026-01-26 15:49:39.13637471 +0000 UTC m=+0.150150898 container start 356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 15:49:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.142 239969 DEBUG nova.compute.provider_tree [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:39 compute-0 podman[253567]: 2026-01-26 15:49:39.140961762 +0000 UTC m=+0.154737970 container attach 356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:49:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/775559218' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.236 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.238 239969 DEBUG nova.scheduler.client.report [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.256 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.260 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.266 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.267 239969 INFO nova.compute.claims [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.281 239969 INFO nova.scheduler.client.report [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Deleted allocations for instance 05fe14be-b7c6-4ea7-b739-3ad97c924a51
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.363 239969 DEBUG oslo_concurrency.lockutils [None req-72ea3f77-c563-45fb-ae9f-87db14f3b27d 5ec5df94d484458495b5f09f2cc15d29 7acace6b3e054d8baa3941910b0928b9 - - default default] Lock "05fe14be-b7c6-4ea7-b739-3ad97c924a51" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.390 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]: {
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:     "0": [
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:         {
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "devices": [
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "/dev/loop3"
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             ],
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_name": "ceph_lv0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_size": "21470642176",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "name": "ceph_lv0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "tags": {
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.cluster_name": "ceph",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.crush_device_class": "",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.encrypted": "0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.objectstore": "bluestore",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.osd_id": "0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.type": "block",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.vdo": "0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.with_tpm": "0"
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             },
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "type": "block",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "vg_name": "ceph_vg0"
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:         }
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:     ],
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:     "1": [
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:         {
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "devices": [
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "/dev/loop4"
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             ],
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_name": "ceph_lv1",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_size": "21470642176",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "name": "ceph_lv1",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "tags": {
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.cluster_name": "ceph",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.crush_device_class": "",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.encrypted": "0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.objectstore": "bluestore",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.osd_id": "1",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.type": "block",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.vdo": "0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.with_tpm": "0"
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             },
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "type": "block",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "vg_name": "ceph_vg1"
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:         }
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:     ],
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:     "2": [
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:         {
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "devices": [
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "/dev/loop5"
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             ],
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_name": "ceph_lv2",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_size": "21470642176",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "name": "ceph_lv2",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "tags": {
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.cluster_name": "ceph",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.crush_device_class": "",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.encrypted": "0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.objectstore": "bluestore",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.osd_id": "2",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.type": "block",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.vdo": "0",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:                 "ceph.with_tpm": "0"
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             },
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "type": "block",
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:             "vg_name": "ceph_vg2"
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:         }
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]:     ]
Jan 26 15:49:39 compute-0 nifty_vaughan[253603]: }
Jan 26 15:49:39 compute-0 systemd[1]: libpod-356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3.scope: Deactivated successfully.
Jan 26 15:49:39 compute-0 conmon[253603]: conmon 356c68c4df10dc271b87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3.scope/container/memory.events
Jan 26 15:49:39 compute-0 podman[253567]: 2026-01-26 15:49:39.465622913 +0000 UTC m=+0.479399111 container died 356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 15:49:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1678641507' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-df14313fef6df713f61aa65699de323e923b9e3163ea920206d03185ee29fdd2-merged.mount: Deactivated successfully.
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.504 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:39 compute-0 podman[253567]: 2026-01-26 15:49:39.525148461 +0000 UTC m=+0.538924659 container remove 356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.540 239969 DEBUG nova.storage.rbd_utils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:39 compute-0 systemd[1]: libpod-conmon-356c68c4df10dc271b87cef36273e1c123286409ff985b592817197cd057aff3.scope: Deactivated successfully.
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.553 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.571 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442564.5547106, 9dfd4fd6-d77b-4c05-bc93-07a935c53beb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.572 239969 INFO nova.compute.manager [-] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] VM Stopped (Lifecycle Event)
Jan 26 15:49:39 compute-0 sudo[253398]: pam_unix(sudo:session): session closed for user root
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.593 239969 DEBUG nova.compute.manager [None req-f6e5e675-d43e-42d8-b360-f36ae757071c - - - - - -] [instance: 9dfd4fd6-d77b-4c05-bc93-07a935c53beb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 109 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 211 op/s
Jan 26 15:49:39 compute-0 sudo[253667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:49:39 compute-0 sudo[253667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:49:39 compute-0 sudo[253667]: pam_unix(sudo:session): session closed for user root
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.677 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:39 compute-0 sudo[253692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:49:39 compute-0 sudo[253692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:49:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058406613' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.970 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:39 compute-0 nova_compute[239965]: 2026-01-26 15:49:39.981 239969 DEBUG nova.compute.provider_tree [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:39 compute-0 podman[253748]: 2026-01-26 15:49:39.995846418 +0000 UTC m=+0.040357149 container create c9f13a5a50a2c03cbd8c3f3709ea20f5c8ca9c2b35e2ac8761e5c4373f9197ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.027 239969 DEBUG nova.scheduler.client.report [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:40 compute-0 systemd[1]: Started libpod-conmon-c9f13a5a50a2c03cbd8c3f3709ea20f5c8ca9c2b35e2ac8761e5c4373f9197ba.scope.
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.050 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.051 239969 DEBUG nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:49:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:49:40 compute-0 podman[253748]: 2026-01-26 15:49:39.978242587 +0000 UTC m=+0.022753348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:49:40 compute-0 podman[253748]: 2026-01-26 15:49:40.080230744 +0000 UTC m=+0.124741485 container init c9f13a5a50a2c03cbd8c3f3709ea20f5c8ca9c2b35e2ac8761e5c4373f9197ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:49:40 compute-0 podman[253748]: 2026-01-26 15:49:40.087356269 +0000 UTC m=+0.131867000 container start c9f13a5a50a2c03cbd8c3f3709ea20f5c8ca9c2b35e2ac8761e5c4373f9197ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:49:40 compute-0 podman[253748]: 2026-01-26 15:49:40.091511631 +0000 UTC m=+0.136022382 container attach c9f13a5a50a2c03cbd8c3f3709ea20f5c8ca9c2b35e2ac8761e5c4373f9197ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 15:49:40 compute-0 relaxed_chaum[253766]: 167 167
Jan 26 15:49:40 compute-0 systemd[1]: libpod-c9f13a5a50a2c03cbd8c3f3709ea20f5c8ca9c2b35e2ac8761e5c4373f9197ba.scope: Deactivated successfully.
Jan 26 15:49:40 compute-0 podman[253748]: 2026-01-26 15:49:40.09313293 +0000 UTC m=+0.137643661 container died c9f13a5a50a2c03cbd8c3f3709ea20f5c8ca9c2b35e2ac8761e5c4373f9197ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.106 239969 DEBUG nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.107 239969 DEBUG nova.network.neutron [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:49:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/646444686' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a9a9b492d1228e507b599623bbcb47c979e08b9d24ba54ec1ffc28a80e47fc6-merged.mount: Deactivated successfully.
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.124 239969 INFO nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.144 239969 DEBUG nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:49:40 compute-0 podman[253748]: 2026-01-26 15:49:40.14701365 +0000 UTC m=+0.191524381 container remove c9f13a5a50a2c03cbd8c3f3709ea20f5c8ca9c2b35e2ac8761e5c4373f9197ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_chaum, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.148 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.152 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <uuid>bfbba9d8-efe9-46fa-a6a4-2431c70eab06</uuid>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <name>instance-00000008</name>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdmin275Test-server-141780407</nova:name>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:49:38</nova:creationTime>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <nova:user uuid="2d53086521a447778c494a2f71fa9685">tempest-ServersAdmin275Test-1784917804-project-member</nova:user>
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <nova:project uuid="19e7b3737799454e9d511751ed18060a">tempest-ServersAdmin275Test-1784917804</nova:project>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <system>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <entry name="serial">bfbba9d8-efe9-46fa-a6a4-2431c70eab06</entry>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <entry name="uuid">bfbba9d8-efe9-46fa-a6a4-2431c70eab06</entry>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     </system>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <os>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   </os>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <features>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   </features>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk">
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config">
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:40 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/console.log" append="off"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <video>
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     </video>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:49:40 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:49:40 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:49:40 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:49:40 compute-0 nova_compute[239965]: </domain>
Jan 26 15:49:40 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:49:40 compute-0 systemd[1]: libpod-conmon-c9f13a5a50a2c03cbd8c3f3709ea20f5c8ca9c2b35e2ac8761e5c4373f9197ba.scope: Deactivated successfully.
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.219 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.220 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.220 239969 INFO nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Using config drive
Jan 26 15:49:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1678641507' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:40 compute-0 ceph-mon[75140]: pgmap v1018: 305 pgs: 305 active+clean; 109 MiB data, 299 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.9 MiB/s wr, 211 op/s
Jan 26 15:49:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3058406613' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/646444686' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.255 239969 DEBUG nova.storage.rbd_utils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.269 239969 DEBUG nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.271 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.271 239969 INFO nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Creating image(s)
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.295 239969 DEBUG nova.storage.rbd_utils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:40 compute-0 podman[253811]: 2026-01-26 15:49:40.323847961 +0000 UTC m=+0.051300448 container create 3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.326 239969 DEBUG nova.storage.rbd_utils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.362 239969 DEBUG nova.storage.rbd_utils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.366 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:40 compute-0 systemd[1]: Started libpod-conmon-3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c.scope.
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.389 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:40 compute-0 podman[253811]: 2026-01-26 15:49:40.299187066 +0000 UTC m=+0.026639573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:49:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836664c2182d919e0593705cd58a9aa922ae895014686034bb93abc6e93cbf96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836664c2182d919e0593705cd58a9aa922ae895014686034bb93abc6e93cbf96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836664c2182d919e0593705cd58a9aa922ae895014686034bb93abc6e93cbf96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/836664c2182d919e0593705cd58a9aa922ae895014686034bb93abc6e93cbf96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.431 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lazy-loading 'keypairs' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.434 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.435 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.435 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.436 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.632 239969 DEBUG nova.storage.rbd_utils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.637 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:40 compute-0 podman[253811]: 2026-01-26 15:49:40.646333438 +0000 UTC m=+0.373785945 container init 3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:49:40 compute-0 podman[253811]: 2026-01-26 15:49:40.657596593 +0000 UTC m=+0.385049090 container start 3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.659 239969 DEBUG nova.policy [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '19b1461c4dbc4033a684f6e523867df8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c2f3e7dabb6b4b6d8d0bd497c45857fc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:49:40 compute-0 podman[253811]: 2026-01-26 15:49:40.663822587 +0000 UTC m=+0.391275094 container attach 3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.872 239969 INFO nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Creating config drive at /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.877 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnzvjy04j execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:40 compute-0 nova_compute[239965]: 2026-01-26 15:49:40.950 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.003 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnzvjy04j" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.031 239969 DEBUG nova.storage.rbd_utils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] rbd image bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.037 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.067 239969 DEBUG nova.storage.rbd_utils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] resizing rbd image 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.186 239969 DEBUG oslo_concurrency.processutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config bfbba9d8-efe9-46fa-a6a4-2431c70eab06_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.187 239969 INFO nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deleting local config drive /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06/disk.config because it was imported into RBD.
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.194 239969 DEBUG nova.objects.instance [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lazy-loading 'migration_context' on Instance uuid 3a0641d5-a02c-4801-ab1d-4f097cd3f431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.208 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.209 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Ensure instance console log exists: /var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.209 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.209 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.209 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:41 compute-0 systemd-machined[208061]: New machine qemu-13-instance-00000008.
Jan 26 15:49:41 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-00000008.
Jan 26 15:49:41 compute-0 lvm[254128]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:49:41 compute-0 lvm[254127]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:49:41 compute-0 lvm[254127]: VG ceph_vg0 finished
Jan 26 15:49:41 compute-0 lvm[254128]: VG ceph_vg1 finished
Jan 26 15:49:41 compute-0 lvm[254130]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:49:41 compute-0 lvm[254130]: VG ceph_vg2 finished
Jan 26 15:49:41 compute-0 gifted_bohr[253882]: {}
Jan 26 15:49:41 compute-0 systemd[1]: libpod-3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c.scope: Deactivated successfully.
Jan 26 15:49:41 compute-0 systemd[1]: libpod-3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c.scope: Consumed 1.314s CPU time.
Jan 26 15:49:41 compute-0 podman[253811]: 2026-01-26 15:49:41.536116549 +0000 UTC m=+1.263569066 container died 3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 26 15:49:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-836664c2182d919e0593705cd58a9aa922ae895014686034bb93abc6e93cbf96-merged.mount: Deactivated successfully.
Jan 26 15:49:41 compute-0 podman[253811]: 2026-01-26 15:49:41.592820697 +0000 UTC m=+1.320273184 container remove 3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_bohr, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:49:41 compute-0 systemd[1]: libpod-conmon-3815830089a7197ba4b7050738e1d4f3b47a972e7c6b14a90d8f45ebc663838c.scope: Deactivated successfully.
Jan 26 15:49:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 103 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.4 MiB/s wr, 203 op/s
Jan 26 15:49:41 compute-0 sudo[253692]: pam_unix(sudo:session): session closed for user root
Jan 26 15:49:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:49:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:49:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:49:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.678 239969 DEBUG nova.network.neutron [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Successfully created port: b4633a27-8c35-4f7a-8e72-b7a090e807f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:49:41 compute-0 sudo[254150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:49:41 compute-0 sudo[254150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:49:41 compute-0 sudo[254150]: pam_unix(sudo:session): session closed for user root
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.804 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for bfbba9d8-efe9-46fa-a6a4-2431c70eab06 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.804 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442581.8036587, bfbba9d8-efe9-46fa-a6a4-2431c70eab06 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.805 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] VM Resumed (Lifecycle Event)
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.807 239969 DEBUG nova.compute.manager [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.807 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.811 239969 INFO nova.virt.libvirt.driver [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance spawned successfully.
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.812 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.828 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.835 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.840 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.841 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.841 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.841 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.842 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.842 239969 DEBUG nova.virt.libvirt.driver [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.862 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.862 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442581.804879, bfbba9d8-efe9-46fa-a6a4-2431c70eab06 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.863 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] VM Started (Lifecycle Event)
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.891 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.894 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.898 239969 DEBUG nova.compute.manager [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.911 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.953 239969 DEBUG oslo_concurrency.lockutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.954 239969 DEBUG oslo_concurrency.lockutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:41 compute-0 nova_compute[239965]: 2026-01-26 15:49:41.954 239969 DEBUG nova.objects.instance [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.009 239969 DEBUG oslo_concurrency.lockutils [None req-bb3bb757-a0c0-4889-9aaf-37e9324cba17 adc71842af01494b8e8bae8e91e8aab0 06473ab9535a4b1ebbd38998b4a75ad7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.640 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.642 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.642 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.642 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.643 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.644 239969 INFO nova.compute.manager [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Terminating instance
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.645 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "refresh_cache-bfbba9d8-efe9-46fa-a6a4-2431c70eab06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.645 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquired lock "refresh_cache-bfbba9d8-efe9-46fa-a6a4-2431c70eab06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.646 239969 DEBUG nova.network.neutron [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:49:42 compute-0 ceph-mon[75140]: pgmap v1019: 305 pgs: 305 active+clean; 103 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.4 MiB/s wr, 203 op/s
Jan 26 15:49:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:49:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.664364) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442582664587, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1246, "num_deletes": 251, "total_data_size": 1695778, "memory_usage": 1721032, "flush_reason": "Manual Compaction"}
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442582677725, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1666509, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19998, "largest_seqno": 21243, "table_properties": {"data_size": 1660761, "index_size": 3015, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13219, "raw_average_key_size": 20, "raw_value_size": 1648770, "raw_average_value_size": 2509, "num_data_blocks": 136, "num_entries": 657, "num_filter_entries": 657, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769442479, "oldest_key_time": 1769442479, "file_creation_time": 1769442582, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 13414 microseconds, and 4846 cpu microseconds.
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.677774) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1666509 bytes OK
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.677793) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.680122) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.680157) EVENT_LOG_v1 {"time_micros": 1769442582680150, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.680180) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1690057, prev total WAL file size 1690057, number of live WAL files 2.
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.681546) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1627KB)], [47(7821KB)]
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442582681582, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9676079, "oldest_snapshot_seqno": -1}
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4503 keys, 7905500 bytes, temperature: kUnknown
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442582739883, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7905500, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7873979, "index_size": 19187, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 111466, "raw_average_key_size": 24, "raw_value_size": 7791162, "raw_average_value_size": 1730, "num_data_blocks": 800, "num_entries": 4503, "num_filter_entries": 4503, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769442582, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.740141) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7905500 bytes
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.781876) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 165.7 rd, 135.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.6 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(10.5) write-amplify(4.7) OK, records in: 5021, records dropped: 518 output_compression: NoCompression
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.781933) EVENT_LOG_v1 {"time_micros": 1769442582781897, "job": 24, "event": "compaction_finished", "compaction_time_micros": 58402, "compaction_time_cpu_micros": 17132, "output_level": 6, "num_output_files": 1, "total_output_size": 7905500, "num_input_records": 5021, "num_output_records": 4503, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442582782457, "job": 24, "event": "table_file_deletion", "file_number": 49}
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442582784149, "job": 24, "event": "table_file_deletion", "file_number": 47}
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.681446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.784360) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.784369) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.784372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.784374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:49:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:49:42.784376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.839 239969 DEBUG nova.network.neutron [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:42 compute-0 nova_compute[239965]: 2026-01-26 15:49:42.900 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.419 239969 DEBUG nova.network.neutron [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.437 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Releasing lock "refresh_cache-bfbba9d8-efe9-46fa-a6a4-2431c70eab06" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.438 239969 DEBUG nova.compute.manager [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:49:43 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 26 15:49:43 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d00000008.scope: Consumed 2.294s CPU time.
Jan 26 15:49:43 compute-0 systemd-machined[208061]: Machine qemu-13-instance-00000008 terminated.
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.494 239969 DEBUG nova.network.neutron [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Successfully updated port: b4633a27-8c35-4f7a-8e72-b7a090e807f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.511 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.511 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquired lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.511 239969 DEBUG nova.network.neutron [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.607 239969 DEBUG nova.compute.manager [req-5a76d3ac-ded8-43a8-a699-a2109cb7eb76 req-19cae29c-9872-4f37-ab55-c5bf7978a99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.607 239969 DEBUG nova.compute.manager [req-5a76d3ac-ded8-43a8-a699-a2109cb7eb76 req-19cae29c-9872-4f37-ab55-c5bf7978a99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing instance network info cache due to event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.608 239969 DEBUG oslo_concurrency.lockutils [req-5a76d3ac-ded8-43a8-a699-a2109cb7eb76 req-19cae29c-9872-4f37-ab55-c5bf7978a99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:49:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 119 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.8 MiB/s wr, 278 op/s
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.658 239969 INFO nova.virt.libvirt.driver [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance destroyed successfully.
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.658 239969 DEBUG nova.objects.instance [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lazy-loading 'resources' on Instance uuid bfbba9d8-efe9-46fa-a6a4-2431c70eab06 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.660 239969 DEBUG nova.network.neutron [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.923 239969 INFO nova.virt.libvirt.driver [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deleting instance files /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_del
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.924 239969 INFO nova.virt.libvirt.driver [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deletion of /var/lib/nova/instances/bfbba9d8-efe9-46fa-a6a4-2431c70eab06_del complete
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.980 239969 INFO nova.compute.manager [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Took 0.54 seconds to destroy the instance on the hypervisor.
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.981 239969 DEBUG oslo.service.loopingcall [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.981 239969 DEBUG nova.compute.manager [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:49:43 compute-0 nova_compute[239965]: 2026-01-26 15:49:43.981 239969 DEBUG nova.network.neutron [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:49:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:44 compute-0 nova_compute[239965]: 2026-01-26 15:49:44.322 239969 DEBUG nova.network.neutron [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:44 compute-0 nova_compute[239965]: 2026-01-26 15:49:44.347 239969 DEBUG nova.network.neutron [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:44 compute-0 nova_compute[239965]: 2026-01-26 15:49:44.388 239969 INFO nova.compute.manager [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Took 0.41 seconds to deallocate network for instance.
Jan 26 15:49:44 compute-0 nova_compute[239965]: 2026-01-26 15:49:44.448 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:44 compute-0 nova_compute[239965]: 2026-01-26 15:49:44.448 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:44 compute-0 nova_compute[239965]: 2026-01-26 15:49:44.536 239969 DEBUG oslo_concurrency.processutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:44 compute-0 ceph-mon[75140]: pgmap v1020: 305 pgs: 305 active+clean; 119 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.8 MiB/s wr, 278 op/s
Jan 26 15:49:44 compute-0 nova_compute[239965]: 2026-01-26 15:49:44.677 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2781741996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.086 239969 DEBUG oslo_concurrency.processutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.093 239969 DEBUG nova.compute.provider_tree [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.113 239969 DEBUG nova.scheduler.client.report [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.140 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.173 239969 INFO nova.scheduler.client.report [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Deleted allocations for instance bfbba9d8-efe9-46fa-a6a4-2431c70eab06
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.234 239969 DEBUG oslo_concurrency.lockutils [None req-3ae302cf-3fed-4c4e-bf38-d30bef35af25 2d53086521a447778c494a2f71fa9685 19e7b3737799454e9d511751ed18060a - - default default] Lock "bfbba9d8-efe9-46fa-a6a4-2431c70eab06" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.593s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.243 239969 DEBUG nova.network.neutron [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updating instance_info_cache with network_info: [{"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.263 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Releasing lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.264 239969 DEBUG nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Instance network_info: |[{"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.264 239969 DEBUG oslo_concurrency.lockutils [req-5a76d3ac-ded8-43a8-a699-a2109cb7eb76 req-19cae29c-9872-4f37-ab55-c5bf7978a99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.264 239969 DEBUG nova.network.neutron [req-5a76d3ac-ded8-43a8-a699-a2109cb7eb76 req-19cae29c-9872-4f37-ab55-c5bf7978a99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.267 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Start _get_guest_xml network_info=[{"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.273 239969 WARNING nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.277 239969 DEBUG nova.virt.libvirt.host [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.279 239969 DEBUG nova.virt.libvirt.host [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.284 239969 DEBUG nova.virt.libvirt.host [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.285 239969 DEBUG nova.virt.libvirt.host [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.285 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.286 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.286 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.286 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.287 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.287 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.287 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.287 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.288 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.288 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.288 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.289 239969 DEBUG nova.virt.hardware [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.292 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 119 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.3 MiB/s wr, 242 op/s
Jan 26 15:49:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2781741996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2353233680' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.855 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.881 239969 DEBUG nova.storage.rbd_utils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:45 compute-0 nova_compute[239965]: 2026-01-26 15:49:45.885 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:49:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2516893214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.447 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.449 239969 DEBUG nova.virt.libvirt.vif [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:49:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1789971645',display_name='tempest-FloatingIPsAssociationTestJSON-server-1789971645',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1789971645',id=12,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2f3e7dabb6b4b6d8d0bd497c45857fc',ramdisk_id='',reservation_id='r-pulqeosh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-2042630915',owner_user_name='tempest-FloatingIPsAssociationTestJSON-2042630915-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:49:40Z,user_data=None,user_id='19b1461c4dbc4033a684f6e523867df8',uuid=3a0641d5-a02c-4801-ab1d-4f097cd3f431,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.449 239969 DEBUG nova.network.os_vif_util [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converting VIF {"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.450 239969 DEBUG nova.network.os_vif_util [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:d1:74,bridge_name='br-int',has_traffic_filtering=True,id=b4633a27-8c35-4f7a-8e72-b7a090e807f9,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4633a27-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.451 239969 DEBUG nova.objects.instance [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a0641d5-a02c-4801-ab1d-4f097cd3f431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.469 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <uuid>3a0641d5-a02c-4801-ab1d-4f097cd3f431</uuid>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <name>instance-0000000c</name>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-1789971645</nova:name>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:49:45</nova:creationTime>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <nova:user uuid="19b1461c4dbc4033a684f6e523867df8">tempest-FloatingIPsAssociationTestJSON-2042630915-project-member</nova:user>
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <nova:project uuid="c2f3e7dabb6b4b6d8d0bd497c45857fc">tempest-FloatingIPsAssociationTestJSON-2042630915</nova:project>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <nova:port uuid="b4633a27-8c35-4f7a-8e72-b7a090e807f9">
Jan 26 15:49:46 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <system>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <entry name="serial">3a0641d5-a02c-4801-ab1d-4f097cd3f431</entry>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <entry name="uuid">3a0641d5-a02c-4801-ab1d-4f097cd3f431</entry>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     </system>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <os>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   </os>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <features>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   </features>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk">
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk.config">
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       </source>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:49:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:05:d1:74"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <target dev="tapb4633a27-8c"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431/console.log" append="off"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <video>
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     </video>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:49:46 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:49:46 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:49:46 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:49:46 compute-0 nova_compute[239965]: </domain>
Jan 26 15:49:46 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.470 239969 DEBUG nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Preparing to wait for external event network-vif-plugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.471 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.471 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.471 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.472 239969 DEBUG nova.virt.libvirt.vif [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:49:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1789971645',display_name='tempest-FloatingIPsAssociationTestJSON-server-1789971645',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1789971645',id=12,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2f3e7dabb6b4b6d8d0bd497c45857fc',ramdisk_id='',reservation_id='r-pulqeosh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-2042630915',owner_user_name='tempest-FloatingIPsAssociationTestJSON-2042630915-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:49:40Z,user_data=None,user_id='19b1461c4dbc4033a684f6e523867df8',uuid=3a0641d5-a02c-4801-ab1d-4f097cd3f431,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.472 239969 DEBUG nova.network.os_vif_util [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converting VIF {"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.473 239969 DEBUG nova.network.os_vif_util [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:05:d1:74,bridge_name='br-int',has_traffic_filtering=True,id=b4633a27-8c35-4f7a-8e72-b7a090e807f9,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4633a27-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.473 239969 DEBUG os_vif [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:d1:74,bridge_name='br-int',has_traffic_filtering=True,id=b4633a27-8c35-4f7a-8e72-b7a090e807f9,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4633a27-8c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.474 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.475 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.475 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.479 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4633a27-8c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.479 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4633a27-8c, col_values=(('external_ids', {'iface-id': 'b4633a27-8c35-4f7a-8e72-b7a090e807f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:05:d1:74', 'vm-uuid': '3a0641d5-a02c-4801-ab1d-4f097cd3f431'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.480 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:46 compute-0 NetworkManager[48954]: <info>  [1769442586.4816] manager: (tapb4633a27-8c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.488 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.489 239969 INFO os_vif [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:05:d1:74,bridge_name='br-int',has_traffic_filtering=True,id=b4633a27-8c35-4f7a-8e72-b7a090e807f9,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4633a27-8c')
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.550 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.550 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.550 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] No VIF found with MAC fa:16:3e:05:d1:74, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.551 239969 INFO nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Using config drive
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.570 239969 DEBUG nova.storage.rbd_utils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:46 compute-0 ceph-mon[75140]: pgmap v1021: 305 pgs: 305 active+clean; 119 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.3 MiB/s wr, 242 op/s
Jan 26 15:49:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2353233680' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2516893214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.993 239969 DEBUG nova.network.neutron [req-5a76d3ac-ded8-43a8-a699-a2109cb7eb76 req-19cae29c-9872-4f37-ab55-c5bf7978a99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updated VIF entry in instance network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:49:46 compute-0 nova_compute[239965]: 2026-01-26 15:49:46.993 239969 DEBUG nova.network.neutron [req-5a76d3ac-ded8-43a8-a699-a2109cb7eb76 req-19cae29c-9872-4f37-ab55-c5bf7978a99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updating instance_info_cache with network_info: [{"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.013 239969 DEBUG oslo_concurrency.lockutils [req-5a76d3ac-ded8-43a8-a699-a2109cb7eb76 req-19cae29c-9872-4f37-ab55-c5bf7978a99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.325 239969 INFO nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Creating config drive at /var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431/disk.config
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.330 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsb35t2a9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.457 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsb35t2a9" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.482 239969 DEBUG nova.storage.rbd_utils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.485 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431/disk.config 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.597 239969 DEBUG oslo_concurrency.processutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431/disk.config 3a0641d5-a02c-4801-ab1d-4f097cd3f431_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.599 239969 INFO nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Deleting local config drive /var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431/disk.config because it was imported into RBD.
Jan 26 15:49:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 106 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.0 MiB/s wr, 304 op/s
Jan 26 15:49:47 compute-0 kernel: tapb4633a27-8c: entered promiscuous mode
Jan 26 15:49:47 compute-0 NetworkManager[48954]: <info>  [1769442587.6487] manager: (tapb4633a27-8c): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Jan 26 15:49:47 compute-0 ovn_controller[146046]: 2026-01-26T15:49:47Z|00044|binding|INFO|Claiming lport b4633a27-8c35-4f7a-8e72-b7a090e807f9 for this chassis.
Jan 26 15:49:47 compute-0 ovn_controller[146046]: 2026-01-26T15:49:47Z|00045|binding|INFO|b4633a27-8c35-4f7a-8e72-b7a090e807f9: Claiming fa:16:3e:05:d1:74 10.100.0.8
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.651 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.658 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.669 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d1:74 10.100.0.8'], port_security=['fa:16:3e:05:d1:74 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3a0641d5-a02c-4801-ab1d-4f097cd3f431', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68eabea2-3a2d-4690-b867-0143d33ad099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2f3e7dabb6b4b6d8d0bd497c45857fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '768559e5-eeab-4bfe-90a7-d37c537391d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09fd1a35-0741-44b5-aefd-ffc4e1370811, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b4633a27-8c35-4f7a-8e72-b7a090e807f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.671 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b4633a27-8c35-4f7a-8e72-b7a090e807f9 in datapath 68eabea2-3a2d-4690-b867-0143d33ad099 bound to our chassis
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.672 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 68eabea2-3a2d-4690-b867-0143d33ad099
Jan 26 15:49:47 compute-0 systemd-udevd[254388]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:49:47 compute-0 systemd-machined[208061]: New machine qemu-14-instance-0000000c.
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.686 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[642baa9d-3f9f-4e75-8137-7866c418d94a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.687 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap68eabea2-31 in ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.689 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap68eabea2-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.689 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0819ee-da97-4678-968d-73ab22985ef8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.690 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96489c9b-1619-4de5-8b77-a2f969ec1ebe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 NetworkManager[48954]: <info>  [1769442587.6927] device (tapb4633a27-8c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:49:47 compute-0 NetworkManager[48954]: <info>  [1769442587.6937] device (tapb4633a27-8c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:49:47 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000c.
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.703 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[c6e01d27-b50b-4ea3-99dc-5755de000cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.729 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc8296b-eff6-4daa-ad0d-5fdd7952ce19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.745 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:47 compute-0 ovn_controller[146046]: 2026-01-26T15:49:47Z|00046|binding|INFO|Setting lport b4633a27-8c35-4f7a-8e72-b7a090e807f9 ovn-installed in OVS
Jan 26 15:49:47 compute-0 ovn_controller[146046]: 2026-01-26T15:49:47Z|00047|binding|INFO|Setting lport b4633a27-8c35-4f7a-8e72-b7a090e807f9 up in Southbound
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.759 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c9918e28-3e1c-48ca-bfad-3f08fdca9414]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 systemd-udevd[254391]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:49:47 compute-0 NetworkManager[48954]: <info>  [1769442587.7669] manager: (tap68eabea2-30): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.766 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0e5ba0-7806-4483-85b7-3f24452e4c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.799 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[efe30434-15fb-4a1f-92c1-16dc500c9606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.802 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d8c624-c802-4ecf-8bd2-d303f2adeced]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 NetworkManager[48954]: <info>  [1769442587.8290] device (tap68eabea2-30): carrier: link connected
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.834 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[81d65163-6b00-46ad-bddd-4769530a0a8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.853 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2238f7-a401-429b-a8a1-a31e6ea66f24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68eabea2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:0e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407863, 'reachable_time': 42637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254421, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.871 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8c27ae-7862-46f7-9d7c-9ce8875aef7a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefe:eaa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407863, 'tstamp': 407863}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254422, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.889 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8f159d65-b39d-4d75-aee5-981fdc5e7c44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68eabea2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:0e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407863, 'reachable_time': 42637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254423, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.922 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5068c5c5-1f06-4972-9461-4d6c2398a3e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ceph-mon[75140]: pgmap v1022: 305 pgs: 305 active+clean; 106 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.0 MiB/s wr, 304 op/s
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.983 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5b0fd7a8-ff30-403e-819e-506139fb051c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.984 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68eabea2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.984 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.984 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68eabea2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:49:47 compute-0 kernel: tap68eabea2-30: entered promiscuous mode
Jan 26 15:49:47 compute-0 NetworkManager[48954]: <info>  [1769442587.9869] manager: (tap68eabea2-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.986 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.990 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap68eabea2-30, col_values=(('external_ids', {'iface-id': '0d1287ef-e69c-46bd-a8ed-5da65e222245'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:49:47 compute-0 ovn_controller[146046]: 2026-01-26T15:49:47Z|00048|binding|INFO|Releasing lport 0d1287ef-e69c-46bd-a8ed-5da65e222245 from this chassis (sb_readonly=0)
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.991 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:47 compute-0 nova_compute[239965]: 2026-01-26 15:49:47.992 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.995 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/68eabea2-3a2d-4690-b867-0143d33ad099.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/68eabea2-3a2d-4690-b867-0143d33ad099.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.996 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[34a68b0d-84a4-4f84-9d06-e05828ab9239]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.996 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-68eabea2-3a2d-4690-b867-0143d33ad099
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/68eabea2-3a2d-4690-b867-0143d33ad099.pid.haproxy
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 68eabea2-3a2d-4690-b867-0143d33ad099
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:49:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:47.998 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'env', 'PROCESS_TAG=haproxy-68eabea2-3a2d-4690-b867-0143d33ad099', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/68eabea2-3a2d-4690-b867-0143d33ad099.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.008 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.176 239969 DEBUG nova.compute.manager [req-21038d5b-7db1-464f-92d7-6cea050ea792 req-afcf3f48-a148-4eae-a4e1-070fc7cf33a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-vif-plugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.177 239969 DEBUG oslo_concurrency.lockutils [req-21038d5b-7db1-464f-92d7-6cea050ea792 req-afcf3f48-a148-4eae-a4e1-070fc7cf33a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.177 239969 DEBUG oslo_concurrency.lockutils [req-21038d5b-7db1-464f-92d7-6cea050ea792 req-afcf3f48-a148-4eae-a4e1-070fc7cf33a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.178 239969 DEBUG oslo_concurrency.lockutils [req-21038d5b-7db1-464f-92d7-6cea050ea792 req-afcf3f48-a148-4eae-a4e1-070fc7cf33a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.178 239969 DEBUG nova.compute.manager [req-21038d5b-7db1-464f-92d7-6cea050ea792 req-afcf3f48-a148-4eae-a4e1-070fc7cf33a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Processing event network-vif-plugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:49:48 compute-0 podman[254491]: 2026-01-26 15:49:48.37658791 +0000 UTC m=+0.051117552 container create 775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 15:49:48 compute-0 systemd[1]: Started libpod-conmon-775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178.scope.
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.438 239969 DEBUG nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.439 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442588.4372098, 3a0641d5-a02c-4801-ab1d-4f097cd3f431 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.439 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] VM Started (Lifecycle Event)
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.443 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:49:48 compute-0 podman[254491]: 2026-01-26 15:49:48.348878122 +0000 UTC m=+0.023407794 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.446 239969 INFO nova.virt.libvirt.driver [-] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Instance spawned successfully.
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.446 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:49:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.459 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f82a721cbb5c42ce2ca0264815576177600902595936c5eedbb52058b5a714d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.468 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:48 compute-0 podman[254491]: 2026-01-26 15:49:48.473687269 +0000 UTC m=+0.148216921 container init 775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.473 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.474 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.475 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.475 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.476 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.476 239969 DEBUG nova.virt.libvirt.driver [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:49:48 compute-0 podman[254491]: 2026-01-26 15:49:48.480511466 +0000 UTC m=+0.155041108 container start 775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 15:49:48 compute-0 neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099[254513]: [NOTICE]   (254517) : New worker (254519) forked
Jan 26 15:49:48 compute-0 neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099[254513]: [NOTICE]   (254517) : Loading success.
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.507 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.508 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442588.4373803, 3a0641d5-a02c-4801-ab1d-4f097cd3f431 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.508 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] VM Paused (Lifecycle Event)
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.534 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.537 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442588.4424062, 3a0641d5-a02c-4801-ab1d-4f097cd3f431 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.538 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] VM Resumed (Lifecycle Event)
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.566 239969 INFO nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Took 8.30 seconds to spawn the instance on the hypervisor.
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.566 239969 DEBUG nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.568 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.574 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.610 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.646 239969 INFO nova.compute.manager [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Took 9.43 seconds to build instance.
Jan 26 15:49:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:49:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/946867344' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:49:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:49:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/946867344' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000556994782393353 of space, bias 1.0, pg target 0.1670984347180059 quantized to 32 (current 32)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664789164680332 of space, bias 1.0, pg target 0.19994367494040996 quantized to 32 (current 32)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.60058113242714e-07 of space, bias 4.0, pg target 0.0010320697358912568 quantized to 16 (current 16)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:49:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:49:48 compute-0 nova_compute[239965]: 2026-01-26 15:49:48.670 239969 DEBUG oslo_concurrency.lockutils [None req-5aac04de-b225-48bf-9e25-b2a66ba128ff 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/946867344' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:49:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/946867344' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:49:49 compute-0 sshd-session[254509]: banner exchange: Connection from 3.137.73.221 port 60622: invalid format
Jan 26 15:49:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 245 op/s
Jan 26 15:49:49 compute-0 nova_compute[239965]: 2026-01-26 15:49:49.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:49 compute-0 ceph-mon[75140]: pgmap v1023: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.3 MiB/s wr, 245 op/s
Jan 26 15:49:50 compute-0 podman[254528]: 2026-01-26 15:49:50.369944377 +0000 UTC m=+0.054335992 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:49:50 compute-0 podman[254529]: 2026-01-26 15:49:50.4309084 +0000 UTC m=+0.114855363 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:49:50 compute-0 nova_compute[239965]: 2026-01-26 15:49:50.515 239969 DEBUG nova.compute.manager [req-bf9511f3-d71d-4802-bd42-a2d807426e7b req-d9ff2cc0-722b-44d1-8d32-3f9fe7e5099c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-vif-plugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:49:50 compute-0 nova_compute[239965]: 2026-01-26 15:49:50.516 239969 DEBUG oslo_concurrency.lockutils [req-bf9511f3-d71d-4802-bd42-a2d807426e7b req-d9ff2cc0-722b-44d1-8d32-3f9fe7e5099c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:50 compute-0 nova_compute[239965]: 2026-01-26 15:49:50.516 239969 DEBUG oslo_concurrency.lockutils [req-bf9511f3-d71d-4802-bd42-a2d807426e7b req-d9ff2cc0-722b-44d1-8d32-3f9fe7e5099c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:50 compute-0 nova_compute[239965]: 2026-01-26 15:49:50.516 239969 DEBUG oslo_concurrency.lockutils [req-bf9511f3-d71d-4802-bd42-a2d807426e7b req-d9ff2cc0-722b-44d1-8d32-3f9fe7e5099c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:50 compute-0 nova_compute[239965]: 2026-01-26 15:49:50.516 239969 DEBUG nova.compute.manager [req-bf9511f3-d71d-4802-bd42-a2d807426e7b req-d9ff2cc0-722b-44d1-8d32-3f9fe7e5099c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] No waiting events found dispatching network-vif-plugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:49:50 compute-0 nova_compute[239965]: 2026-01-26 15:49:50.517 239969 WARNING nova.compute.manager [req-bf9511f3-d71d-4802-bd42-a2d807426e7b req-d9ff2cc0-722b-44d1-8d32-3f9fe7e5099c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received unexpected event network-vif-plugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 for instance with vm_state active and task_state None.
Jan 26 15:49:51 compute-0 nova_compute[239965]: 2026-01-26 15:49:51.482 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:51 compute-0 nova_compute[239965]: 2026-01-26 15:49:51.517 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442576.5162914, 05fe14be-b7c6-4ea7-b739-3ad97c924a51 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:51 compute-0 nova_compute[239965]: 2026-01-26 15:49:51.518 239969 INFO nova.compute.manager [-] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] VM Stopped (Lifecycle Event)
Jan 26 15:49:51 compute-0 nova_compute[239965]: 2026-01-26 15:49:51.542 239969 DEBUG nova.compute.manager [None req-1eb797b1-9729-4db7-b99a-b426c130fa9b - - - - - -] [instance: 05fe14be-b7c6-4ea7-b739-3ad97c924a51] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Jan 26 15:49:52 compute-0 ceph-mon[75140]: pgmap v1024: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.6 MiB/s wr, 211 op/s
Jan 26 15:49:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 239 op/s
Jan 26 15:49:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:54 compute-0 nova_compute[239965]: 2026-01-26 15:49:54.223 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "f470fcc4-0f3d-4048-9e21-61e217237d40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:54 compute-0 nova_compute[239965]: 2026-01-26 15:49:54.224 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:54 compute-0 nova_compute[239965]: 2026-01-26 15:49:54.242 239969 DEBUG nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:49:54 compute-0 nova_compute[239965]: 2026-01-26 15:49:54.331 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:54 compute-0 nova_compute[239965]: 2026-01-26 15:49:54.331 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:54 compute-0 nova_compute[239965]: 2026-01-26 15:49:54.338 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:49:54 compute-0 nova_compute[239965]: 2026-01-26 15:49:54.339 239969 INFO nova.compute.claims [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:49:54 compute-0 nova_compute[239965]: 2026-01-26 15:49:54.477 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:54 compute-0 ceph-mon[75140]: pgmap v1025: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.1 MiB/s wr, 239 op/s
Jan 26 15:49:54 compute-0 nova_compute[239965]: 2026-01-26 15:49:54.783 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258833181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.087 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.093 239969 DEBUG nova.compute.provider_tree [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.107 239969 DEBUG nova.scheduler.client.report [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.136 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.137 239969 DEBUG nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.184 239969 DEBUG nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.185 239969 DEBUG nova.network.neutron [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.205 239969 INFO nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.222 239969 DEBUG nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.321 239969 DEBUG nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.322 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.323 239969 INFO nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Creating image(s)
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.342 239969 DEBUG nova.storage.rbd_utils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image f470fcc4-0f3d-4048-9e21-61e217237d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.382 239969 DEBUG nova.storage.rbd_utils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image f470fcc4-0f3d-4048-9e21-61e217237d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.404 239969 DEBUG nova.storage.rbd_utils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image f470fcc4-0f3d-4048-9e21-61e217237d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.407 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.461 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.462 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.463 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.463 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.483 239969 DEBUG nova.storage.rbd_utils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image f470fcc4-0f3d-4048-9e21-61e217237d40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.487 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 f470fcc4-0f3d-4048-9e21-61e217237d40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.512 239969 DEBUG nova.policy [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3322c44e378e415bb486ef558314a67c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ddba5162f533447bba0159cafaa565bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:49:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 759 KiB/s wr, 164 op/s
Jan 26 15:49:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2258833181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.736 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 f470fcc4-0f3d-4048-9e21-61e217237d40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.803 239969 DEBUG nova.storage.rbd_utils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] resizing rbd image f470fcc4-0f3d-4048-9e21-61e217237d40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.876 239969 DEBUG nova.objects.instance [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'migration_context' on Instance uuid f470fcc4-0f3d-4048-9e21-61e217237d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.899 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.900 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Ensure instance console log exists: /var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.900 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.901 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:55 compute-0 nova_compute[239965]: 2026-01-26 15:49:55.901 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.682 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "45865956-34db-4434-8dca-48c8dac9811e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.682 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.696 239969 DEBUG nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:49:56 compute-0 ceph-mon[75140]: pgmap v1026: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 759 KiB/s wr, 164 op/s
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.762 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.763 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.768 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.769 239969 INFO nova.compute.claims [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.788 239969 DEBUG nova.network.neutron [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Successfully created port: 54c94092-c736-46ff-b1b2-294a99ee679c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:49:56 compute-0 nova_compute[239965]: 2026-01-26 15:49:56.895 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:49:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1562011544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.465 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.470 239969 DEBUG nova.compute.provider_tree [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.490 239969 DEBUG nova.scheduler.client.report [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.513 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.514 239969 DEBUG nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.561 239969 DEBUG nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.562 239969 DEBUG nova.network.neutron [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.580 239969 INFO nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.603 239969 DEBUG nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:49:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 117 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.7 MiB/s wr, 168 op/s
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.687 239969 DEBUG nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.688 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.688 239969 INFO nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Creating image(s)
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.708 239969 DEBUG nova.storage.rbd_utils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 45865956-34db-4434-8dca-48c8dac9811e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1562011544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.740 239969 DEBUG nova.storage.rbd_utils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 45865956-34db-4434-8dca-48c8dac9811e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.764 239969 DEBUG nova.storage.rbd_utils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 45865956-34db-4434-8dca-48c8dac9811e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.767 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.822 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.822 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.823 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.823 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.842 239969 DEBUG nova.storage.rbd_utils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 45865956-34db-4434-8dca-48c8dac9811e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:49:57 compute-0 nova_compute[239965]: 2026-01-26 15:49:57.845 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 45865956-34db-4434-8dca-48c8dac9811e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.076 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 45865956-34db-4434-8dca-48c8dac9811e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.132 239969 DEBUG nova.storage.rbd_utils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] resizing rbd image 45865956-34db-4434-8dca-48c8dac9811e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.232 239969 DEBUG nova.objects.instance [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lazy-loading 'migration_context' on Instance uuid 45865956-34db-4434-8dca-48c8dac9811e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.246 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.246 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Ensure instance console log exists: /var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.247 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.247 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.247 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.420 239969 DEBUG nova.policy [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '19b1461c4dbc4033a684f6e523867df8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c2f3e7dabb6b4b6d8d0bd497c45857fc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.656 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442583.6553938, bfbba9d8-efe9-46fa-a6a4-2431c70eab06 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.657 239969 INFO nova.compute.manager [-] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] VM Stopped (Lifecycle Event)
Jan 26 15:49:58 compute-0 nova_compute[239965]: 2026-01-26 15:49:58.676 239969 DEBUG nova.compute.manager [None req-201eac85-1e10-4c88-a33d-570f77e2aa0c - - - - - -] [instance: bfbba9d8-efe9-46fa-a6a4-2431c70eab06] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:49:58 compute-0 ceph-mon[75140]: pgmap v1027: 305 pgs: 305 active+clean; 117 MiB data, 262 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.7 MiB/s wr, 168 op/s
Jan 26 15:49:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:49:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:59.210 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:49:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:59.211 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:49:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:49:59.212 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.307 239969 DEBUG nova.network.neutron [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Successfully updated port: 54c94092-c736-46ff-b1b2-294a99ee679c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.329 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "refresh_cache-f470fcc4-0f3d-4048-9e21-61e217237d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.329 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquired lock "refresh_cache-f470fcc4-0f3d-4048-9e21-61e217237d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.329 239969 DEBUG nova.network.neutron [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.526 239969 DEBUG nova.compute.manager [req-6772455a-d3ec-4054-9e44-7e9f49b7eabf req-e09f1cce-927a-4741-ab37-8c1054ad062b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Received event network-changed-54c94092-c736-46ff-b1b2-294a99ee679c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.528 239969 DEBUG nova.compute.manager [req-6772455a-d3ec-4054-9e44-7e9f49b7eabf req-e09f1cce-927a-4741-ab37-8c1054ad062b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Refreshing instance network info cache due to event network-changed-54c94092-c736-46ff-b1b2-294a99ee679c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.528 239969 DEBUG oslo_concurrency.lockutils [req-6772455a-d3ec-4054-9e44-7e9f49b7eabf req-e09f1cce-927a-4741-ab37-8c1054ad062b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-f470fcc4-0f3d-4048-9e21-61e217237d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:49:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 130 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 129 op/s
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.784 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.903 239969 DEBUG nova.network.neutron [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:49:59 compute-0 nova_compute[239965]: 2026-01-26 15:49:59.968 239969 DEBUG nova.network.neutron [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Successfully created port: abdacb47-7e9f-4988-9a24-c617ebb61680 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:50:00 compute-0 ceph-mon[75140]: pgmap v1028: 305 pgs: 305 active+clean; 130 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 129 op/s
Jan 26 15:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.956 239969 DEBUG nova.network.neutron [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Updating instance_info_cache with network_info: [{"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.973 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Releasing lock "refresh_cache-f470fcc4-0f3d-4048-9e21-61e217237d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.974 239969 DEBUG nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Instance network_info: |[{"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.974 239969 DEBUG oslo_concurrency.lockutils [req-6772455a-d3ec-4054-9e44-7e9f49b7eabf req-e09f1cce-927a-4741-ab37-8c1054ad062b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-f470fcc4-0f3d-4048-9e21-61e217237d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.974 239969 DEBUG nova.network.neutron [req-6772455a-d3ec-4054-9e44-7e9f49b7eabf req-e09f1cce-927a-4741-ab37-8c1054ad062b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Refreshing network info cache for port 54c94092-c736-46ff-b1b2-294a99ee679c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.976 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Start _get_guest_xml network_info=[{"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.980 239969 WARNING nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.986 239969 DEBUG nova.virt.libvirt.host [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.987 239969 DEBUG nova.virt.libvirt.host [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.993 239969 DEBUG nova.virt.libvirt.host [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.993 239969 DEBUG nova.virt.libvirt.host [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.994 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.994 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.994 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.995 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.995 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.995 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.995 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.996 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.996 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.996 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.996 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.997 239969 DEBUG nova.virt.hardware [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:50:00 compute-0 nova_compute[239965]: 2026-01-26 15:50:00.999 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.022 239969 DEBUG nova.network.neutron [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Successfully updated port: abdacb47-7e9f-4988-9a24-c617ebb61680 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.039 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.039 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquired lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.039 239969 DEBUG nova.network.neutron [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:50:01 compute-0 ovn_controller[146046]: 2026-01-26T15:50:01Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:05:d1:74 10.100.0.8
Jan 26 15:50:01 compute-0 ovn_controller[146046]: 2026-01-26T15:50:01Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:05:d1:74 10.100.0.8
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.210 239969 DEBUG nova.network.neutron [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.487 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 148 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Jan 26 15:50:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/65820499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.658 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.679 239969 DEBUG nova.storage.rbd_utils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image f470fcc4-0f3d-4048-9e21-61e217237d40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.683 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/65820499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.950 239969 DEBUG nova.compute.manager [req-eb78eecf-b26d-4b9d-8104-144c6655784d req-2c65b293-107b-4198-8218-458854d07fb8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received event network-changed-abdacb47-7e9f-4988-9a24-c617ebb61680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.951 239969 DEBUG nova.compute.manager [req-eb78eecf-b26d-4b9d-8104-144c6655784d req-2c65b293-107b-4198-8218-458854d07fb8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Refreshing instance network info cache due to event network-changed-abdacb47-7e9f-4988-9a24-c617ebb61680. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:01 compute-0 nova_compute[239965]: 2026-01-26 15:50:01.951 239969 DEBUG oslo_concurrency.lockutils [req-eb78eecf-b26d-4b9d-8104-144c6655784d req-2c65b293-107b-4198-8218-458854d07fb8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1287408582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.219 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.220 239969 DEBUG nova.virt.libvirt.vif [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-783222857',display_name='tempest-ImagesTestJSON-server-783222857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-783222857',id=13,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-av192pyl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagLi
st,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:49:55Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=f470fcc4-0f3d-4048-9e21-61e217237d40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.220 239969 DEBUG nova.network.os_vif_util [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.221 239969 DEBUG nova.network.os_vif_util [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1c:5e,bridge_name='br-int',has_traffic_filtering=True,id=54c94092-c736-46ff-b1b2-294a99ee679c,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54c94092-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.222 239969 DEBUG nova.objects.instance [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'pci_devices' on Instance uuid f470fcc4-0f3d-4048-9e21-61e217237d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.239 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <uuid>f470fcc4-0f3d-4048-9e21-61e217237d40</uuid>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <name>instance-0000000d</name>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesTestJSON-server-783222857</nova:name>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:50:00</nova:creationTime>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <nova:user uuid="3322c44e378e415bb486ef558314a67c">tempest-ImagesTestJSON-1480202091-project-member</nova:user>
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <nova:project uuid="ddba5162f533447bba0159cafaa565bf">tempest-ImagesTestJSON-1480202091</nova:project>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <nova:port uuid="54c94092-c736-46ff-b1b2-294a99ee679c">
Jan 26 15:50:02 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <system>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <entry name="serial">f470fcc4-0f3d-4048-9e21-61e217237d40</entry>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <entry name="uuid">f470fcc4-0f3d-4048-9e21-61e217237d40</entry>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     </system>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <os>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   </os>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <features>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   </features>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/f470fcc4-0f3d-4048-9e21-61e217237d40_disk">
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/f470fcc4-0f3d-4048-9e21-61e217237d40_disk.config">
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:99:1c:5e"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <target dev="tap54c94092-c7"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40/console.log" append="off"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <video>
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     </video>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:50:02 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:50:02 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:50:02 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:50:02 compute-0 nova_compute[239965]: </domain>
Jan 26 15:50:02 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.241 239969 DEBUG nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Preparing to wait for external event network-vif-plugged-54c94092-c736-46ff-b1b2-294a99ee679c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.242 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.242 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.242 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.243 239969 DEBUG nova.virt.libvirt.vif [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-783222857',display_name='tempest-ImagesTestJSON-server-783222857',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-783222857',id=13,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-av192pyl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:49:55Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=f470fcc4-0f3d-4048-9e21-61e217237d40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.243 239969 DEBUG nova.network.os_vif_util [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.244 239969 DEBUG nova.network.os_vif_util [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1c:5e,bridge_name='br-int',has_traffic_filtering=True,id=54c94092-c736-46ff-b1b2-294a99ee679c,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54c94092-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.244 239969 DEBUG os_vif [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1c:5e,bridge_name='br-int',has_traffic_filtering=True,id=54c94092-c736-46ff-b1b2-294a99ee679c,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54c94092-c7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.245 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.246 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.247 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.250 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.250 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54c94092-c7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.251 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54c94092-c7, col_values=(('external_ids', {'iface-id': '54c94092-c736-46ff-b1b2-294a99ee679c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:1c:5e', 'vm-uuid': 'f470fcc4-0f3d-4048-9e21-61e217237d40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.252 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:02 compute-0 NetworkManager[48954]: <info>  [1769442602.2533] manager: (tap54c94092-c7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.254 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.260 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.261 239969 INFO os_vif [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1c:5e,bridge_name='br-int',has_traffic_filtering=True,id=54c94092-c736-46ff-b1b2-294a99ee679c,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54c94092-c7')
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.316 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.316 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.316 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No VIF found with MAC fa:16:3e:99:1c:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.317 239969 INFO nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Using config drive
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.336 239969 DEBUG nova.storage.rbd_utils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image f470fcc4-0f3d-4048-9e21-61e217237d40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.371 239969 DEBUG nova.network.neutron [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Updating instance_info_cache with network_info: [{"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.389 239969 DEBUG nova.network.neutron [req-6772455a-d3ec-4054-9e44-7e9f49b7eabf req-e09f1cce-927a-4741-ab37-8c1054ad062b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Updated VIF entry in instance network info cache for port 54c94092-c736-46ff-b1b2-294a99ee679c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.389 239969 DEBUG nova.network.neutron [req-6772455a-d3ec-4054-9e44-7e9f49b7eabf req-e09f1cce-927a-4741-ab37-8c1054ad062b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Updating instance_info_cache with network_info: [{"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.394 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Releasing lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.394 239969 DEBUG nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Instance network_info: |[{"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.394 239969 DEBUG oslo_concurrency.lockutils [req-eb78eecf-b26d-4b9d-8104-144c6655784d req-2c65b293-107b-4198-8218-458854d07fb8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.395 239969 DEBUG nova.network.neutron [req-eb78eecf-b26d-4b9d-8104-144c6655784d req-2c65b293-107b-4198-8218-458854d07fb8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Refreshing network info cache for port abdacb47-7e9f-4988-9a24-c617ebb61680 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.397 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Start _get_guest_xml network_info=[{"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.401 239969 WARNING nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.405 239969 DEBUG nova.virt.libvirt.host [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.406 239969 DEBUG nova.virt.libvirt.host [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.409 239969 DEBUG nova.virt.libvirt.host [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.409 239969 DEBUG nova.virt.libvirt.host [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.409 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.409 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.410 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.410 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.410 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.411 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.411 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.411 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.411 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.412 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.412 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.412 239969 DEBUG nova.virt.hardware [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.415 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.441 239969 DEBUG oslo_concurrency.lockutils [req-6772455a-d3ec-4054-9e44-7e9f49b7eabf req-e09f1cce-927a-4741-ab37-8c1054ad062b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-f470fcc4-0f3d-4048-9e21-61e217237d40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.683 239969 INFO nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Creating config drive at /var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40/disk.config
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.689 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt4w6ut1a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:02 compute-0 ceph-mon[75140]: pgmap v1029: 305 pgs: 305 active+clean; 148 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 117 op/s
Jan 26 15:50:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1287408582' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.821 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt4w6ut1a" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.846 239969 DEBUG nova.storage.rbd_utils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image f470fcc4-0f3d-4048-9e21-61e217237d40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.849 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40/disk.config f470fcc4-0f3d-4048-9e21-61e217237d40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1269866268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.965 239969 DEBUG oslo_concurrency.processutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40/disk.config f470fcc4-0f3d-4048-9e21-61e217237d40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.966 239969 INFO nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Deleting local config drive /var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40/disk.config because it was imported into RBD.
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.971 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.994 239969 DEBUG nova.storage.rbd_utils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 45865956-34db-4434-8dca-48c8dac9811e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:02 compute-0 nova_compute[239965]: 2026-01-26 15:50:02.998 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:03 compute-0 kernel: tap54c94092-c7: entered promiscuous mode
Jan 26 15:50:03 compute-0 NetworkManager[48954]: <info>  [1769442603.0227] manager: (tap54c94092-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Jan 26 15:50:03 compute-0 ovn_controller[146046]: 2026-01-26T15:50:03Z|00049|binding|INFO|Claiming lport 54c94092-c736-46ff-b1b2-294a99ee679c for this chassis.
Jan 26 15:50:03 compute-0 ovn_controller[146046]: 2026-01-26T15:50:03Z|00050|binding|INFO|54c94092-c736-46ff-b1b2-294a99ee679c: Claiming fa:16:3e:99:1c:5e 10.100.0.5
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.025 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.043 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1c:5e 10.100.0.5'], port_security=['fa:16:3e:99:1c:5e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f470fcc4-0f3d-4048-9e21-61e217237d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=54c94092-c736-46ff-b1b2-294a99ee679c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.046 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 54c94092-c736-46ff-b1b2-294a99ee679c in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 bound to our chassis
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.048 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:50:03 compute-0 systemd-udevd[255128]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:50:03 compute-0 systemd-machined[208061]: New machine qemu-15-instance-0000000d.
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.061 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb768ee-eac7-4d95-8e66-1a382d6f4af7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.062 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5e237c1-a1 in ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.064 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5e237c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.064 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5fbdf053-c3b9-42e8-903a-adff2f533628]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.065 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6f15cae6-edec-489c-87e8-270788681228]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 NetworkManager[48954]: <info>  [1769442603.0740] device (tap54c94092-c7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:50:03 compute-0 NetworkManager[48954]: <info>  [1769442603.0750] device (tap54c94092-c7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.078 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[af8bb8ea-5b44-425e-ad2e-333e15b6029e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.097 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[93f1f312-8d43-4b38-be28-0a8edb877cda]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 ovn_controller[146046]: 2026-01-26T15:50:03Z|00051|binding|INFO|Setting lport 54c94092-c736-46ff-b1b2-294a99ee679c ovn-installed in OVS
Jan 26 15:50:03 compute-0 ovn_controller[146046]: 2026-01-26T15:50:03Z|00052|binding|INFO|Setting lport 54c94092-c736-46ff-b1b2-294a99ee679c up in Southbound
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.122 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.134 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[71ae2acd-7121-4462-b6a3-88e5a1e2442e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 NetworkManager[48954]: <info>  [1769442603.1436] manager: (tapf5e237c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.142 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11f7a485-ade6-404e-8220-07f5595c7d74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.184 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5c5aac1d-7f3a-4e97-9881-562ad80a136d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.188 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[152f13cf-60d2-41bc-8f1a-5bc1a1c64de0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 NetworkManager[48954]: <info>  [1769442603.2138] device (tapf5e237c1-a0): carrier: link connected
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.219 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2fbf17cc-dd00-48ab-9f22-83af751908b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.236 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[80dec207-fb7b-475b-ad30-048ccf6bde67]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409402, 'reachable_time': 22951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255180, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.252 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d36aca39-0248-4897-b35d-1b5084c0b4b0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:eb6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 409402, 'tstamp': 409402}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255181, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.270 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ab508458-be54-41eb-8b39-bec565b9b3ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409402, 'reachable_time': 22951, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255182, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.300 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3f6c981f-11dd-46fb-bac7-720d440269b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.386 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8ac510-0ad1-45d4-b4cf-8c4040f38a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.389 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.390 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.391 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5e237c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.394 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 NetworkManager[48954]: <info>  [1769442603.3951] manager: (tapf5e237c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Jan 26 15:50:03 compute-0 kernel: tapf5e237c1-a0: entered promiscuous mode
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.396 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.398 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5e237c1-a0, col_values=(('external_ids', {'iface-id': 'fd0478e8-96d8-4fbd-8a9a-fe78757277ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.399 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 ovn_controller[146046]: 2026-01-26T15:50:03Z|00053|binding|INFO|Releasing lport fd0478e8-96d8-4fbd-8a9a-fe78757277ca from this chassis (sb_readonly=0)
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.416 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.419 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.420 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1c850fb1-b523-4c33-8499-3e49ce774926]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.422 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:50:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:03.424 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'env', 'PROCESS_TAG=haproxy-f5e237c1-a75f-479a-88ee-c0f788914b11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5e237c1-a75f-479a-88ee-c0f788914b11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:50:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 212 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 153 op/s
Jan 26 15:50:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1398510257' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1269866268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1398510257' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.708 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.710 239969 DEBUG nova.virt.libvirt.vif [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:49:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-229085207',display_name='tempest-FloatingIPsAssociationTestJSON-server-229085207',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-229085207',id=14,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2f3e7dabb6b4b6d8d0bd497c45857fc',ramdisk_id='',reservation_id='r-e6r4a7w6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-2042630915',owner_user_name='tempest-FloatingIPsAssociationTestJSON-2042630915-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:49:57Z,user_data=None,user_id='19b1461c4dbc4033a684f6e523867df8',uuid=45865956-34db-4434-8dca-48c8dac9811e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.713 239969 DEBUG nova.network.os_vif_util [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converting VIF {"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.714 239969 DEBUG nova.network.os_vif_util [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:85:19,bridge_name='br-int',has_traffic_filtering=True,id=abdacb47-7e9f-4988-9a24-c617ebb61680,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdacb47-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.715 239969 DEBUG nova.objects.instance [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lazy-loading 'pci_devices' on Instance uuid 45865956-34db-4434-8dca-48c8dac9811e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.729 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <uuid>45865956-34db-4434-8dca-48c8dac9811e</uuid>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <name>instance-0000000e</name>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <nova:name>tempest-FloatingIPsAssociationTestJSON-server-229085207</nova:name>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:50:02</nova:creationTime>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <nova:user uuid="19b1461c4dbc4033a684f6e523867df8">tempest-FloatingIPsAssociationTestJSON-2042630915-project-member</nova:user>
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <nova:project uuid="c2f3e7dabb6b4b6d8d0bd497c45857fc">tempest-FloatingIPsAssociationTestJSON-2042630915</nova:project>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <nova:port uuid="abdacb47-7e9f-4988-9a24-c617ebb61680">
Jan 26 15:50:03 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <system>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <entry name="serial">45865956-34db-4434-8dca-48c8dac9811e</entry>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <entry name="uuid">45865956-34db-4434-8dca-48c8dac9811e</entry>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     </system>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <os>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   </os>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <features>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   </features>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/45865956-34db-4434-8dca-48c8dac9811e_disk">
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/45865956-34db-4434-8dca-48c8dac9811e_disk.config">
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:fd:85:19"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <target dev="tapabdacb47-7e"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e/console.log" append="off"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <video>
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     </video>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:50:03 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:50:03 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:50:03 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:50:03 compute-0 nova_compute[239965]: </domain>
Jan 26 15:50:03 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.730 239969 DEBUG nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Preparing to wait for external event network-vif-plugged-abdacb47-7e9f-4988-9a24-c617ebb61680 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.730 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "45865956-34db-4434-8dca-48c8dac9811e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.730 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.730 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.731 239969 DEBUG nova.virt.libvirt.vif [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:49:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-229085207',display_name='tempest-FloatingIPsAssociationTestJSON-server-229085207',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-229085207',id=14,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c2f3e7dabb6b4b6d8d0bd497c45857fc',ramdisk_id='',reservation_id='r-e6r4a7w6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-2042630915',owner_user_name='tempest-FloatingIPsAssociationTestJSON-2042630915-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:49:57Z,user_data=None,user_id='19b1461c4dbc4033a684f6e523867df8',uuid=45865956-34db-4434-8dca-48c8dac9811e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.731 239969 DEBUG nova.network.os_vif_util [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converting VIF {"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.732 239969 DEBUG nova.network.os_vif_util [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:85:19,bridge_name='br-int',has_traffic_filtering=True,id=abdacb47-7e9f-4988-9a24-c617ebb61680,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdacb47-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.732 239969 DEBUG os_vif [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:85:19,bridge_name='br-int',has_traffic_filtering=True,id=abdacb47-7e9f-4988-9a24-c617ebb61680,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdacb47-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.733 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.733 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.733 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.736 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.736 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabdacb47-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.737 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapabdacb47-7e, col_values=(('external_ids', {'iface-id': 'abdacb47-7e9f-4988-9a24-c617ebb61680', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:85:19', 'vm-uuid': '45865956-34db-4434-8dca-48c8dac9811e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:03 compute-0 NetworkManager[48954]: <info>  [1769442603.7397] manager: (tapabdacb47-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.741 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.743 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.750 239969 INFO os_vif [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:85:19,bridge_name='br-int',has_traffic_filtering=True,id=abdacb47-7e9f-4988-9a24-c617ebb61680,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdacb47-7e')
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.799 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.800 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.800 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] No VIF found with MAC fa:16:3e:fd:85:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.800 239969 INFO nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Using config drive
Jan 26 15:50:03 compute-0 nova_compute[239965]: 2026-01-26 15:50:03.824 239969 DEBUG nova.storage.rbd_utils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 45865956-34db-4434-8dca-48c8dac9811e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:03 compute-0 podman[255215]: 2026-01-26 15:50:03.842536795 +0000 UTC m=+0.069676597 container create d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 15:50:03 compute-0 systemd[1]: Started libpod-conmon-d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460.scope.
Jan 26 15:50:03 compute-0 podman[255215]: 2026-01-26 15:50:03.807940438 +0000 UTC m=+0.035080270 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:50:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:50:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da67a331c29fdbc79d33fab33edc86c7051763ace689dea57abaa4dffefced7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:03 compute-0 podman[255215]: 2026-01-26 15:50:03.938100956 +0000 UTC m=+0.165240778 container init d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 15:50:03 compute-0 podman[255215]: 2026-01-26 15:50:03.943821076 +0000 UTC m=+0.170960878 container start d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:50:03 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[255247]: [NOTICE]   (255286) : New worker (255291) forked
Jan 26 15:50:03 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[255247]: [NOTICE]   (255286) : Loading success.
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.075 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442604.0745847, f470fcc4-0f3d-4048-9e21-61e217237d40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.075 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] VM Started (Lifecycle Event)
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.092 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.096 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442604.0746953, f470fcc4-0f3d-4048-9e21-61e217237d40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.096 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] VM Paused (Lifecycle Event)
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.129 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.133 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.152 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.447 239969 DEBUG nova.compute.manager [req-b70180bd-4e0e-43b9-b747-20543e9ea4e7 req-28a43030-e4c3-40f0-80a9-32722636eea5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Received event network-vif-plugged-54c94092-c736-46ff-b1b2-294a99ee679c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.447 239969 DEBUG oslo_concurrency.lockutils [req-b70180bd-4e0e-43b9-b747-20543e9ea4e7 req-28a43030-e4c3-40f0-80a9-32722636eea5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.448 239969 DEBUG oslo_concurrency.lockutils [req-b70180bd-4e0e-43b9-b747-20543e9ea4e7 req-28a43030-e4c3-40f0-80a9-32722636eea5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.448 239969 DEBUG oslo_concurrency.lockutils [req-b70180bd-4e0e-43b9-b747-20543e9ea4e7 req-28a43030-e4c3-40f0-80a9-32722636eea5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.448 239969 DEBUG nova.compute.manager [req-b70180bd-4e0e-43b9-b747-20543e9ea4e7 req-28a43030-e4c3-40f0-80a9-32722636eea5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Processing event network-vif-plugged-54c94092-c736-46ff-b1b2-294a99ee679c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.449 239969 DEBUG nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.453 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442604.4530802, f470fcc4-0f3d-4048-9e21-61e217237d40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.453 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] VM Resumed (Lifecycle Event)
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.455 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.457 239969 INFO nova.virt.libvirt.driver [-] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Instance spawned successfully.
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.458 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.475 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.480 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.483 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.484 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.484 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.485 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.485 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.486 239969 DEBUG nova.virt.libvirt.driver [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.515 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.551 239969 INFO nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Took 9.23 seconds to spawn the instance on the hypervisor.
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.552 239969 DEBUG nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.563 239969 INFO nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Creating config drive at /var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e/disk.config
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.569 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2h90qi6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.622 239969 INFO nova.compute.manager [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Took 10.32 seconds to build instance.
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.643 239969 DEBUG oslo_concurrency.lockutils [None req-d5977f74-9391-4f13-8a62-098207444d4c 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.698 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2h90qi6" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:04 compute-0 ceph-mon[75140]: pgmap v1030: 305 pgs: 305 active+clean; 212 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 153 op/s
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.738 239969 DEBUG nova.storage.rbd_utils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] rbd image 45865956-34db-4434-8dca-48c8dac9811e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.750 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e/disk.config 45865956-34db-4434-8dca-48c8dac9811e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.888 239969 DEBUG oslo_concurrency.processutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e/disk.config 45865956-34db-4434-8dca-48c8dac9811e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.888 239969 INFO nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Deleting local config drive /var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e/disk.config because it was imported into RBD.
Jan 26 15:50:04 compute-0 kernel: tapabdacb47-7e: entered promiscuous mode
Jan 26 15:50:04 compute-0 systemd-udevd[255162]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:50:04 compute-0 NetworkManager[48954]: <info>  [1769442604.9569] manager: (tapabdacb47-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Jan 26 15:50:04 compute-0 ovn_controller[146046]: 2026-01-26T15:50:04Z|00054|binding|INFO|Claiming lport abdacb47-7e9f-4988-9a24-c617ebb61680 for this chassis.
Jan 26 15:50:04 compute-0 ovn_controller[146046]: 2026-01-26T15:50:04Z|00055|binding|INFO|abdacb47-7e9f-4988-9a24-c617ebb61680: Claiming fa:16:3e:fd:85:19 10.100.0.14
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.957 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:04.963 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:85:19 10.100.0.14'], port_security=['fa:16:3e:fd:85:19 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '45865956-34db-4434-8dca-48c8dac9811e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68eabea2-3a2d-4690-b867-0143d33ad099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2f3e7dabb6b4b6d8d0bd497c45857fc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '768559e5-eeab-4bfe-90a7-d37c537391d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09fd1a35-0741-44b5-aefd-ffc4e1370811, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=abdacb47-7e9f-4988-9a24-c617ebb61680) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:04.964 156105 INFO neutron.agent.ovn.metadata.agent [-] Port abdacb47-7e9f-4988-9a24-c617ebb61680 in datapath 68eabea2-3a2d-4690-b867-0143d33ad099 bound to our chassis
Jan 26 15:50:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:04.966 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 68eabea2-3a2d-4690-b867-0143d33ad099
Jan 26 15:50:04 compute-0 NetworkManager[48954]: <info>  [1769442604.9734] device (tapabdacb47-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:50:04 compute-0 NetworkManager[48954]: <info>  [1769442604.9766] device (tapabdacb47-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:50:04 compute-0 ovn_controller[146046]: 2026-01-26T15:50:04Z|00056|binding|INFO|Setting lport abdacb47-7e9f-4988-9a24-c617ebb61680 ovn-installed in OVS
Jan 26 15:50:04 compute-0 ovn_controller[146046]: 2026-01-26T15:50:04Z|00057|binding|INFO|Setting lport abdacb47-7e9f-4988-9a24-c617ebb61680 up in Southbound
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.982 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:04 compute-0 nova_compute[239965]: 2026-01-26 15:50:04.988 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:04.988 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f0368c2c-ae38-4673-a7a2-1adf983444f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:05 compute-0 systemd-machined[208061]: New machine qemu-16-instance-0000000e.
Jan 26 15:50:05 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000e.
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.029 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f9a478-754e-4100-b17a-a97e9c7ad9c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.035 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[06d755e2-8486-4598-be62-679aee53b4ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.065 239969 DEBUG nova.network.neutron [req-eb78eecf-b26d-4b9d-8104-144c6655784d req-2c65b293-107b-4198-8218-458854d07fb8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Updated VIF entry in instance network info cache for port abdacb47-7e9f-4988-9a24-c617ebb61680. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.066 239969 DEBUG nova.network.neutron [req-eb78eecf-b26d-4b9d-8104-144c6655784d req-2c65b293-107b-4198-8218-458854d07fb8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Updating instance_info_cache with network_info: [{"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.074 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[632e1fe1-48f6-480c-9489-8166f15afd1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.084 239969 DEBUG oslo_concurrency.lockutils [req-eb78eecf-b26d-4b9d-8104-144c6655784d req-2c65b293-107b-4198-8218-458854d07fb8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.100 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1dec8acd-7356-41f7-8da7-05b9dc57ee7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68eabea2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:0e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407863, 'reachable_time': 42637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255372, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.119 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ea354eb9-ba59-405d-b00d-ce0979f5daa0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap68eabea2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407875, 'tstamp': 407875}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255374, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap68eabea2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407878, 'tstamp': 407878}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255374, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.122 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68eabea2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.124 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.125 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68eabea2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.126 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.126 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap68eabea2-30, col_values=(('external_ids', {'iface-id': '0d1287ef-e69c-46bd-a8ed-5da65e222245'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.127 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.287 239969 DEBUG nova.compute.manager [req-48130299-fc66-481e-98e0-3b7087839028 req-a6ce7503-d189-42b6-bde7-010efa0e2687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received event network-vif-plugged-abdacb47-7e9f-4988-9a24-c617ebb61680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.287 239969 DEBUG oslo_concurrency.lockutils [req-48130299-fc66-481e-98e0-3b7087839028 req-a6ce7503-d189-42b6-bde7-010efa0e2687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "45865956-34db-4434-8dca-48c8dac9811e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.288 239969 DEBUG oslo_concurrency.lockutils [req-48130299-fc66-481e-98e0-3b7087839028 req-a6ce7503-d189-42b6-bde7-010efa0e2687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.288 239969 DEBUG oslo_concurrency.lockutils [req-48130299-fc66-481e-98e0-3b7087839028 req-a6ce7503-d189-42b6-bde7-010efa0e2687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.288 239969 DEBUG nova.compute.manager [req-48130299-fc66-481e-98e0-3b7087839028 req-a6ce7503-d189-42b6-bde7-010efa0e2687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Processing event network-vif-plugged-abdacb47-7e9f-4988-9a24-c617ebb61680 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:50:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 15:50:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 4713 writes, 21K keys, 4713 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4713 writes, 4713 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1420 writes, 6643 keys, 1420 commit groups, 1.0 writes per commit group, ingest: 9.04 MB, 0.02 MB/s
                                           Interval WAL: 1420 writes, 1420 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     88.8      0.27              0.08        12    0.023       0      0       0.0       0.0
                                             L6      1/0    7.54 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3    149.1    123.6      0.66              0.19        11    0.060     49K   5723       0.0       0.0
                                            Sum      1/0    7.54 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3    105.3    113.4      0.93              0.27        23    0.041     49K   5723       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.8    129.5    129.9      0.42              0.14        12    0.035     29K   3514       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    149.1    123.6      0.66              0.19        11    0.060     49K   5723       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     90.2      0.27              0.08        11    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.05 MB/s read, 0.9 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 9.32 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000104 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(567,8.91 MB,2.93151%) FilterBlock(24,144.55 KB,0.0464339%) IndexBlock(24,269.27 KB,0.0864983%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.421 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442605.4207292, 45865956-34db-4434-8dca-48c8dac9811e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.421 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] VM Started (Lifecycle Event)
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.423 239969 DEBUG nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.427 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.430 239969 INFO nova.virt.libvirt.driver [-] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Instance spawned successfully.
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.431 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.450 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.458 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.461 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.461 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.462 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.462 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.462 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.463 239969 DEBUG nova.virt.libvirt.driver [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.493 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.493 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442605.4208853, 45865956-34db-4434-8dca-48c8dac9811e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.493 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] VM Paused (Lifecycle Event)
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.516 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.520 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442605.4261115, 45865956-34db-4434-8dca-48c8dac9811e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.521 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] VM Resumed (Lifecycle Event)
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.527 239969 INFO nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Took 7.84 seconds to spawn the instance on the hypervisor.
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.527 239969 DEBUG nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.539 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.542 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.568 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.588 239969 INFO nova.compute.manager [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Took 8.84 seconds to build instance.
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.608 239969 DEBUG oslo_concurrency.lockutils [None req-09f7aeae-fdab-4382-a270-66d2b9601cf4 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 212 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 5.7 MiB/s wr, 111 op/s
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.796 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:05 compute-0 nova_compute[239965]: 2026-01-26 15:50:05.796 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:05.797 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.280 239969 INFO nova.compute.manager [None req-3b19fd0c-f258-44ad-aba9-0a14def49566 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Pausing
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.281 239969 DEBUG nova.objects.instance [None req-3b19fd0c-f258-44ad-aba9-0a14def49566 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'flavor' on Instance uuid f470fcc4-0f3d-4048-9e21-61e217237d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.333 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442606.332903, f470fcc4-0f3d-4048-9e21-61e217237d40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.334 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] VM Paused (Lifecycle Event)
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.336 239969 DEBUG nova.compute.manager [None req-3b19fd0c-f258-44ad-aba9-0a14def49566 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.354 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.359 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.525 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.574 239969 DEBUG nova.compute.manager [req-e6a2bb28-d705-4314-ac1f-1aaf0dd352a9 req-4ef3fa6d-5dac-4a7e-b1e2-0e1bf70c93ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Received event network-vif-plugged-54c94092-c736-46ff-b1b2-294a99ee679c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.575 239969 DEBUG oslo_concurrency.lockutils [req-e6a2bb28-d705-4314-ac1f-1aaf0dd352a9 req-4ef3fa6d-5dac-4a7e-b1e2-0e1bf70c93ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.575 239969 DEBUG oslo_concurrency.lockutils [req-e6a2bb28-d705-4314-ac1f-1aaf0dd352a9 req-4ef3fa6d-5dac-4a7e-b1e2-0e1bf70c93ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.575 239969 DEBUG oslo_concurrency.lockutils [req-e6a2bb28-d705-4314-ac1f-1aaf0dd352a9 req-4ef3fa6d-5dac-4a7e-b1e2-0e1bf70c93ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.576 239969 DEBUG nova.compute.manager [req-e6a2bb28-d705-4314-ac1f-1aaf0dd352a9 req-4ef3fa6d-5dac-4a7e-b1e2-0e1bf70c93ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] No waiting events found dispatching network-vif-plugged-54c94092-c736-46ff-b1b2-294a99ee679c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:06 compute-0 nova_compute[239965]: 2026-01-26 15:50:06.576 239969 WARNING nova.compute.manager [req-e6a2bb28-d705-4314-ac1f-1aaf0dd352a9 req-4ef3fa6d-5dac-4a7e-b1e2-0e1bf70c93ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Received unexpected event network-vif-plugged-54c94092-c736-46ff-b1b2-294a99ee679c for instance with vm_state paused and task_state None.
Jan 26 15:50:06 compute-0 ceph-mon[75140]: pgmap v1031: 305 pgs: 305 active+clean; 212 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 5.7 MiB/s wr, 111 op/s
Jan 26 15:50:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 213 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 199 op/s
Jan 26 15:50:07 compute-0 nova_compute[239965]: 2026-01-26 15:50:07.763 239969 DEBUG nova.compute.manager [req-b4827984-7e3f-418a-b17c-b59902398c4d req-d9ca823d-0b47-431e-bb67-693b5192c622 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received event network-vif-plugged-abdacb47-7e9f-4988-9a24-c617ebb61680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:07 compute-0 nova_compute[239965]: 2026-01-26 15:50:07.764 239969 DEBUG oslo_concurrency.lockutils [req-b4827984-7e3f-418a-b17c-b59902398c4d req-d9ca823d-0b47-431e-bb67-693b5192c622 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "45865956-34db-4434-8dca-48c8dac9811e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:07 compute-0 nova_compute[239965]: 2026-01-26 15:50:07.764 239969 DEBUG oslo_concurrency.lockutils [req-b4827984-7e3f-418a-b17c-b59902398c4d req-d9ca823d-0b47-431e-bb67-693b5192c622 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:07 compute-0 nova_compute[239965]: 2026-01-26 15:50:07.764 239969 DEBUG oslo_concurrency.lockutils [req-b4827984-7e3f-418a-b17c-b59902398c4d req-d9ca823d-0b47-431e-bb67-693b5192c622 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:07 compute-0 nova_compute[239965]: 2026-01-26 15:50:07.764 239969 DEBUG nova.compute.manager [req-b4827984-7e3f-418a-b17c-b59902398c4d req-d9ca823d-0b47-431e-bb67-693b5192c622 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] No waiting events found dispatching network-vif-plugged-abdacb47-7e9f-4988-9a24-c617ebb61680 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:07 compute-0 nova_compute[239965]: 2026-01-26 15:50:07.765 239969 WARNING nova.compute.manager [req-b4827984-7e3f-418a-b17c-b59902398c4d req-d9ca823d-0b47-431e-bb67-693b5192c622 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received unexpected event network-vif-plugged-abdacb47-7e9f-4988-9a24-c617ebb61680 for instance with vm_state active and task_state None.
Jan 26 15:50:08 compute-0 nova_compute[239965]: 2026-01-26 15:50:08.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:08 compute-0 ceph-mon[75140]: pgmap v1032: 305 pgs: 305 active+clean; 213 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.7 MiB/s wr, 199 op/s
Jan 26 15:50:08 compute-0 NetworkManager[48954]: <info>  [1769442608.9185] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Jan 26 15:50:08 compute-0 NetworkManager[48954]: <info>  [1769442608.9195] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Jan 26 15:50:08 compute-0 nova_compute[239965]: 2026-01-26 15:50:08.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:09 compute-0 nova_compute[239965]: 2026-01-26 15:50:09.081 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:09 compute-0 ovn_controller[146046]: 2026-01-26T15:50:09Z|00058|binding|INFO|Releasing lport fd0478e8-96d8-4fbd-8a9a-fe78757277ca from this chassis (sb_readonly=0)
Jan 26 15:50:09 compute-0 ovn_controller[146046]: 2026-01-26T15:50:09Z|00059|binding|INFO|Releasing lport 0d1287ef-e69c-46bd-a8ed-5da65e222245 from this chassis (sb_readonly=0)
Jan 26 15:50:09 compute-0 nova_compute[239965]: 2026-01-26 15:50:09.099 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 260 op/s
Jan 26 15:50:09 compute-0 nova_compute[239965]: 2026-01-26 15:50:09.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:10.799 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:10 compute-0 ceph-mon[75140]: pgmap v1033: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.7 MiB/s wr, 260 op/s
Jan 26 15:50:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.641 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.642 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.657 239969 DEBUG nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.669 239969 DEBUG nova.compute.manager [req-a971c3b8-b749-41f3-bcea-9f6d29cdd5ea req-92228a07-e138-49bf-a86f-f9cdb346ab0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.670 239969 DEBUG nova.compute.manager [req-a971c3b8-b749-41f3-bcea-9f6d29cdd5ea req-92228a07-e138-49bf-a86f-f9cdb346ab0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing instance network info cache due to event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.670 239969 DEBUG oslo_concurrency.lockutils [req-a971c3b8-b749-41f3-bcea-9f6d29cdd5ea req-92228a07-e138-49bf-a86f-f9cdb346ab0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.670 239969 DEBUG oslo_concurrency.lockutils [req-a971c3b8-b749-41f3-bcea-9f6d29cdd5ea req-92228a07-e138-49bf-a86f-f9cdb346ab0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.671 239969 DEBUG nova.network.neutron [req-a971c3b8-b749-41f3-bcea-9f6d29cdd5ea req-92228a07-e138-49bf-a86f-f9cdb346ab0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.732 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.732 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.737 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.738 239969 INFO nova.compute.claims [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:50:11 compute-0 nova_compute[239965]: 2026-01-26 15:50:11.880 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.163 239969 DEBUG nova.compute.manager [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.202 239969 INFO nova.compute.manager [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] instance snapshotting
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.202 239969 WARNING nova.compute.manager [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] trying to snapshot a non-running instance: (state: 3 expected: 1)
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.430 239969 INFO nova.virt.libvirt.driver [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Beginning live snapshot process
Jan 26 15:50:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4284332396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.466 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.472 239969 DEBUG nova.compute.provider_tree [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.587 239969 DEBUG nova.scheduler.client.report [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.643 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.644 239969 DEBUG nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.763 239969 DEBUG nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.763 239969 DEBUG nova.network.neutron [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.770 239969 DEBUG nova.virt.libvirt.imagebackend [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.782 239969 INFO nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.801 239969 DEBUG nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:50:12 compute-0 ceph-mon[75140]: pgmap v1034: 305 pgs: 305 active+clean; 214 MiB data, 320 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.2 MiB/s wr, 240 op/s
Jan 26 15:50:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4284332396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.899 239969 DEBUG nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.900 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.901 239969 INFO nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Creating image(s)
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.921 239969 DEBUG nova.storage.rbd_utils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.940 239969 DEBUG nova.storage.rbd_utils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.960 239969 DEBUG nova.storage.rbd_utils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.963 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.985 239969 DEBUG nova.policy [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b47ce75429ed4cd1ba21c0d50c2e552d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.989 239969 DEBUG nova.network.neutron [req-a971c3b8-b749-41f3-bcea-9f6d29cdd5ea req-92228a07-e138-49bf-a86f-f9cdb346ab0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updated VIF entry in instance network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:12 compute-0 nova_compute[239965]: 2026-01-26 15:50:12.989 239969 DEBUG nova.network.neutron [req-a971c3b8-b749-41f3-bcea-9f6d29cdd5ea req-92228a07-e138-49bf-a86f-f9cdb346ab0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updating instance_info_cache with network_info: [{"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.012 239969 DEBUG oslo_concurrency.lockutils [req-a971c3b8-b749-41f3-bcea-9f6d29cdd5ea req-92228a07-e138-49bf-a86f-f9cdb346ab0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.022 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.022 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.023 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.023 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.046 239969 DEBUG nova.storage.rbd_utils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.049 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.071 239969 DEBUG nova.storage.rbd_utils [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(e0feb04a804e424586a21a884ee7d006) on rbd image(f470fcc4-0f3d-4048-9e21-61e217237d40_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.329 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.377 239969 DEBUG nova.storage.rbd_utils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] resizing rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.440 239969 DEBUG nova.objects.instance [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'migration_context' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.472 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.472 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Ensure instance console log exists: /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.473 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.473 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.473 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 214 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 225 op/s
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.755 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "20e8eee5-8c27-4d2b-b132-afa685238f37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.755 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.773 239969 DEBUG nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.855 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.855 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.856 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.862 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.862 239969 INFO nova.compute.claims [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:50:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Jan 26 15:50:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Jan 26 15:50:13 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Jan 26 15:50:13 compute-0 nova_compute[239965]: 2026-01-26 15:50:13.967 239969 DEBUG nova.storage.rbd_utils [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] cloning vms/f470fcc4-0f3d-4048-9e21-61e217237d40_disk@e0feb04a804e424586a21a884ee7d006 to images/f9e30dba-318c-4a5a-a6b7-5b3d3c0abe43 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.050 239969 DEBUG nova.storage.rbd_utils [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] flattening images/f9e30dba-318c-4a5a-a6b7-5b3d3c0abe43 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.116 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.294 239969 DEBUG nova.storage.rbd_utils [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] removing snapshot(e0feb04a804e424586a21a884ee7d006) on rbd image(f470fcc4-0f3d-4048-9e21-61e217237d40_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.548 239969 DEBUG nova.network.neutron [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Successfully created port: 72aacd31-2789-4f27-a514-f80766db0d6e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:50:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2956781514' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.698 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.704 239969 DEBUG nova.compute.provider_tree [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.737 239969 DEBUG nova.scheduler.client.report [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.790 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.833 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.834 239969 DEBUG nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.882 239969 DEBUG nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.882 239969 DEBUG nova.network.neutron [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:50:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Jan 26 15:50:14 compute-0 ceph-mon[75140]: pgmap v1035: 305 pgs: 305 active+clean; 214 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 225 op/s
Jan 26 15:50:14 compute-0 ceph-mon[75140]: osdmap e153: 3 total, 3 up, 3 in
Jan 26 15:50:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2956781514' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Jan 26 15:50:14 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.923 239969 DEBUG nova.storage.rbd_utils [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(snap) on rbd image(f9e30dba-318c-4a5a-a6b7-5b3d3c0abe43) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.972 239969 INFO nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:50:14 compute-0 nova_compute[239965]: 2026-01-26 15:50:14.990 239969 DEBUG nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.101 239969 DEBUG nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.103 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.103 239969 INFO nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Creating image(s)
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.121 239969 DEBUG nova.storage.rbd_utils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image 20e8eee5-8c27-4d2b-b132-afa685238f37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.139 239969 DEBUG nova.storage.rbd_utils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image 20e8eee5-8c27-4d2b-b132-afa685238f37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.159 239969 DEBUG nova.storage.rbd_utils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image 20e8eee5-8c27-4d2b-b132-afa685238f37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.162 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.189 239969 DEBUG nova.policy [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b47ce75429ed4cd1ba21c0d50c2e552d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.196 239969 DEBUG nova.compute.manager [req-f359b8fe-4468-4efc-9a2f-4fe39163e3d1 req-0c3cb874-6d98-4b57-a8df-ad021827ea6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.197 239969 DEBUG nova.compute.manager [req-f359b8fe-4468-4efc-9a2f-4fe39163e3d1 req-0c3cb874-6d98-4b57-a8df-ad021827ea6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing instance network info cache due to event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.197 239969 DEBUG oslo_concurrency.lockutils [req-f359b8fe-4468-4efc-9a2f-4fe39163e3d1 req-0c3cb874-6d98-4b57-a8df-ad021827ea6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.197 239969 DEBUG oslo_concurrency.lockutils [req-f359b8fe-4468-4efc-9a2f-4fe39163e3d1 req-0c3cb874-6d98-4b57-a8df-ad021827ea6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.197 239969 DEBUG nova.network.neutron [req-f359b8fe-4468-4efc-9a2f-4fe39163e3d1 req-0c3cb874-6d98-4b57-a8df-ad021827ea6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.231 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.232 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.232 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.232 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.251 239969 DEBUG nova.storage.rbd_utils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image 20e8eee5-8c27-4d2b-b132-afa685238f37_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.255 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 20e8eee5-8c27-4d2b-b132-afa685238f37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.552 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 20e8eee5-8c27-4d2b-b132-afa685238f37_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.296s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.605 239969 DEBUG nova.storage.rbd_utils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] resizing rbd image 20e8eee5-8c27-4d2b-b132-afa685238f37_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:50:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 214 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 39 KiB/s wr, 104 op/s
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.680 239969 DEBUG nova.objects.instance [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'migration_context' on Instance uuid 20e8eee5-8c27-4d2b-b132-afa685238f37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.700 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.700 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Ensure instance console log exists: /var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.701 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.701 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.701 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.809 239969 DEBUG nova.network.neutron [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Successfully created port: 25f90c0f-639d-4310-a846-2630969fa405 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.888 239969 DEBUG nova.network.neutron [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Successfully updated port: 72aacd31-2789-4f27-a514-f80766db0d6e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:50:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Jan 26 15:50:15 compute-0 ceph-mon[75140]: osdmap e154: 3 total, 3 up, 3 in
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.904 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "refresh_cache-db37e7ff-8499-4664-b2ba-014e27b8b6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.904 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquired lock "refresh_cache-db37e7ff-8499-4664-b2ba-014e27b8b6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:15 compute-0 nova_compute[239965]: 2026-01-26 15:50:15.904 239969 DEBUG nova.network.neutron [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:50:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Jan 26 15:50:15 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Jan 26 15:50:16 compute-0 nova_compute[239965]: 2026-01-26 15:50:16.256 239969 DEBUG nova.network.neutron [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:50:16 compute-0 ceph-mon[75140]: pgmap v1038: 305 pgs: 305 active+clean; 214 MiB data, 321 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 39 KiB/s wr, 104 op/s
Jan 26 15:50:16 compute-0 ceph-mon[75140]: osdmap e155: 3 total, 3 up, 3 in
Jan 26 15:50:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 307 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.2 MiB/s wr, 137 op/s
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.668 239969 DEBUG nova.network.neutron [req-f359b8fe-4468-4efc-9a2f-4fe39163e3d1 req-0c3cb874-6d98-4b57-a8df-ad021827ea6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updated VIF entry in instance network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.669 239969 DEBUG nova.network.neutron [req-f359b8fe-4468-4efc-9a2f-4fe39163e3d1 req-0c3cb874-6d98-4b57-a8df-ad021827ea6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updating instance_info_cache with network_info: [{"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.693 239969 DEBUG oslo_concurrency.lockutils [req-f359b8fe-4468-4efc-9a2f-4fe39163e3d1 req-0c3cb874-6d98-4b57-a8df-ad021827ea6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.696 239969 DEBUG nova.network.neutron [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Successfully updated port: 25f90c0f-639d-4310-a846-2630969fa405 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.724 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "refresh_cache-20e8eee5-8c27-4d2b-b132-afa685238f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.724 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquired lock "refresh_cache-20e8eee5-8c27-4d2b-b132-afa685238f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.725 239969 DEBUG nova.network.neutron [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.822 239969 DEBUG nova.network.neutron [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Updating instance_info_cache with network_info: [{"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.841 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Releasing lock "refresh_cache-db37e7ff-8499-4664-b2ba-014e27b8b6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.842 239969 DEBUG nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance network_info: |[{"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.844 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Start _get_guest_xml network_info=[{"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.849 239969 WARNING nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.854 239969 DEBUG nova.virt.libvirt.host [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.855 239969 DEBUG nova.virt.libvirt.host [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.860 239969 DEBUG nova.virt.libvirt.host [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.860 239969 DEBUG nova.virt.libvirt.host [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.861 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.861 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.861 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.862 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.862 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.862 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.862 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.863 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.863 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.863 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.863 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.864 239969 DEBUG nova.virt.hardware [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.866 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:17 compute-0 nova_compute[239965]: 2026-01-26 15:50:17.971 239969 DEBUG nova.network.neutron [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:50:17 compute-0 ceph-mon[75140]: pgmap v1040: 305 pgs: 305 active+clean; 307 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.2 MiB/s wr, 137 op/s
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.045 239969 INFO nova.virt.libvirt.driver [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Snapshot image upload complete
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.046 239969 INFO nova.compute.manager [None req-b080bd9b-c310-44f1-a421-ffb3ae5e8861 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Took 5.84 seconds to snapshot the instance on the hypervisor.
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.055 239969 DEBUG nova.compute.manager [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received event network-changed-abdacb47-7e9f-4988-9a24-c617ebb61680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.056 239969 DEBUG nova.compute.manager [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Refreshing instance network info cache due to event network-changed-abdacb47-7e9f-4988-9a24-c617ebb61680. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.056 239969 DEBUG oslo_concurrency.lockutils [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.057 239969 DEBUG oslo_concurrency.lockutils [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.057 239969 DEBUG nova.network.neutron [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Refreshing network info cache for port abdacb47-7e9f-4988-9a24-c617ebb61680 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.132 239969 DEBUG nova.compute.manager [req-e2ca5b96-3a2c-4c06-bb4f-69abfeab215a req-49c14772-092b-4721-8e33-e0611aa09135 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Received event network-changed-25f90c0f-639d-4310-a846-2630969fa405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.132 239969 DEBUG nova.compute.manager [req-e2ca5b96-3a2c-4c06-bb4f-69abfeab215a req-49c14772-092b-4721-8e33-e0611aa09135 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Refreshing instance network info cache due to event network-changed-25f90c0f-639d-4310-a846-2630969fa405. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.133 239969 DEBUG oslo_concurrency.lockutils [req-e2ca5b96-3a2c-4c06-bb4f-69abfeab215a req-49c14772-092b-4721-8e33-e0611aa09135 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-20e8eee5-8c27-4d2b-b132-afa685238f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3546951458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.492 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.517 239969 DEBUG nova.storage.rbd_utils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.523 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:18 compute-0 nova_compute[239965]: 2026-01-26 15:50:18.859 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3546951458' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925232460' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.121 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.123 239969 DEBUG nova.virt.libvirt.vif [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-784382261',display_name='tempest-ServersAdminTestJSON-server-784382261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-784382261',id=15,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-x2p37smd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:12Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=db37e7ff-8499-4664-b2ba-014e27b8b6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.123 239969 DEBUG nova.network.os_vif_util [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.124 239969 DEBUG nova.network.os_vif_util [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.125 239969 DEBUG nova.objects.instance [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'pci_devices' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.143 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <uuid>db37e7ff-8499-4664-b2ba-014e27b8b6bb</uuid>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <name>instance-0000000f</name>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdminTestJSON-server-784382261</nova:name>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:50:17</nova:creationTime>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <nova:user uuid="b47ce75429ed4cd1ba21c0d50c2e552d">tempest-ServersAdminTestJSON-863857415-project-member</nova:user>
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <nova:project uuid="b3195eedf4a34fabaf019faaaad7eb71">tempest-ServersAdminTestJSON-863857415</nova:project>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <nova:port uuid="72aacd31-2789-4f27-a514-f80766db0d6e">
Jan 26 15:50:19 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <system>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <entry name="serial">db37e7ff-8499-4664-b2ba-014e27b8b6bb</entry>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <entry name="uuid">db37e7ff-8499-4664-b2ba-014e27b8b6bb</entry>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     </system>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <os>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   </os>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <features>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   </features>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk">
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config">
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:19 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:66:0d:aa"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <target dev="tap72aacd31-27"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/console.log" append="off"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <video>
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     </video>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:50:19 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:50:19 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:50:19 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:50:19 compute-0 nova_compute[239965]: </domain>
Jan 26 15:50:19 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.145 239969 DEBUG nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Preparing to wait for external event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.145 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.145 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.145 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.146 239969 DEBUG nova.virt.libvirt.vif [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-784382261',display_name='tempest-ServersAdminTestJSON-server-784382261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-784382261',id=15,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-x2p37smd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:12Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=db37e7ff-8499-4664-b2ba-014e27b8b6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.146 239969 DEBUG nova.network.os_vif_util [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e155 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.147 239969 DEBUG nova.network.os_vif_util [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.147 239969 DEBUG os_vif [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.148 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.148 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.149 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.151 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.151 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72aacd31-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.152 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72aacd31-27, col_values=(('external_ids', {'iface-id': '72aacd31-2789-4f27-a514-f80766db0d6e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:0d:aa', 'vm-uuid': 'db37e7ff-8499-4664-b2ba-014e27b8b6bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.154 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:19 compute-0 NetworkManager[48954]: <info>  [1769442619.1554] manager: (tap72aacd31-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.158 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.160 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.162 239969 INFO os_vif [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27')
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.227 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.228 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.228 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No VIF found with MAC fa:16:3e:66:0d:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.228 239969 INFO nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Using config drive
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.255 239969 DEBUG nova.storage.rbd_utils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.355 239969 DEBUG nova.network.neutron [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Updating instance_info_cache with network_info: [{"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:19 compute-0 ovn_controller[146046]: 2026-01-26T15:50:19Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:85:19 10.100.0.14
Jan 26 15:50:19 compute-0 ovn_controller[146046]: 2026-01-26T15:50:19Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:85:19 10.100.0.14
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.604 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Releasing lock "refresh_cache-20e8eee5-8c27-4d2b-b132-afa685238f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.604 239969 DEBUG nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Instance network_info: |[{"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.607 239969 DEBUG oslo_concurrency.lockutils [req-e2ca5b96-3a2c-4c06-bb4f-69abfeab215a req-49c14772-092b-4721-8e33-e0611aa09135 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-20e8eee5-8c27-4d2b-b132-afa685238f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.608 239969 DEBUG nova.network.neutron [req-e2ca5b96-3a2c-4c06-bb4f-69abfeab215a req-49c14772-092b-4721-8e33-e0611aa09135 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Refreshing network info cache for port 25f90c0f-639d-4310-a846-2630969fa405 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.611 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Start _get_guest_xml network_info=[{"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.623 239969 WARNING nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:50:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 352 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 11 MiB/s wr, 210 op/s
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.631 239969 DEBUG nova.virt.libvirt.host [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.632 239969 DEBUG nova.virt.libvirt.host [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.636 239969 DEBUG nova.virt.libvirt.host [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.636 239969 DEBUG nova.virt.libvirt.host [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.637 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.637 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.637 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.638 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.638 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.638 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.638 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.638 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.638 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.639 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.639 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.639 239969 DEBUG nova.virt.hardware [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.642 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.984 239969 INFO nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Creating config drive at /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config
Jan 26 15:50:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Jan 26 15:50:19 compute-0 nova_compute[239965]: 2026-01-26 15:50:19.990 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpifv9rs2d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Jan 26 15:50:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3925232460' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:20 compute-0 ceph-mon[75140]: pgmap v1041: 305 pgs: 305 active+clean; 352 MiB data, 384 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 11 MiB/s wr, 210 op/s
Jan 26 15:50:20 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.131 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpifv9rs2d" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.155 239969 DEBUG nova.storage.rbd_utils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.158 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/522263605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.233 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.258 239969 DEBUG nova.storage.rbd_utils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image 20e8eee5-8c27-4d2b-b132-afa685238f37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.265 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.285 239969 DEBUG oslo_concurrency.processutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.286 239969 INFO nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deleting local config drive /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config because it was imported into RBD.
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.289 239969 DEBUG nova.network.neutron [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Updated VIF entry in instance network info cache for port abdacb47-7e9f-4988-9a24-c617ebb61680. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.290 239969 DEBUG nova.network.neutron [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Updating instance_info_cache with network_info: [{"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.320 239969 DEBUG oslo_concurrency.lockutils [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.321 239969 DEBUG nova.compute.manager [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-changed-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.322 239969 DEBUG nova.compute.manager [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Refreshing instance network info cache due to event network-changed-72aacd31-2789-4f27-a514-f80766db0d6e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.322 239969 DEBUG oslo_concurrency.lockutils [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-db37e7ff-8499-4664-b2ba-014e27b8b6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.322 239969 DEBUG oslo_concurrency.lockutils [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-db37e7ff-8499-4664-b2ba-014e27b8b6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.322 239969 DEBUG nova.network.neutron [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Refreshing network info cache for port 72aacd31-2789-4f27-a514-f80766db0d6e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:20 compute-0 kernel: tap72aacd31-27: entered promiscuous mode
Jan 26 15:50:20 compute-0 NetworkManager[48954]: <info>  [1769442620.3302] manager: (tap72aacd31-27): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Jan 26 15:50:20 compute-0 ovn_controller[146046]: 2026-01-26T15:50:20Z|00060|binding|INFO|Claiming lport 72aacd31-2789-4f27-a514-f80766db0d6e for this chassis.
Jan 26 15:50:20 compute-0 ovn_controller[146046]: 2026-01-26T15:50:20Z|00061|binding|INFO|72aacd31-2789-4f27-a514-f80766db0d6e: Claiming fa:16:3e:66:0d:aa 10.100.0.11
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.333 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.343 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:0d:aa 10.100.0.11'], port_security=['fa:16:3e:66:0d:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'db37e7ff-8499-4664-b2ba-014e27b8b6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=72aacd31-2789-4f27-a514-f80766db0d6e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.345 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 72aacd31-2789-4f27-a514-f80766db0d6e in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc bound to our chassis
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.347 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:50:20 compute-0 ovn_controller[146046]: 2026-01-26T15:50:20Z|00062|binding|INFO|Setting lport 72aacd31-2789-4f27-a514-f80766db0d6e ovn-installed in OVS
Jan 26 15:50:20 compute-0 ovn_controller[146046]: 2026-01-26T15:50:20Z|00063|binding|INFO|Setting lport 72aacd31-2789-4f27-a514-f80766db0d6e up in Southbound
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.355 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.361 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.362 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9d020913-1012-4178-86ce-0565cd7ad6f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.363 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4a0d62aa-51 in ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.368 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4a0d62aa-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.368 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fd528253-423b-47ff-a3f4-15979ff86564]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 systemd-machined[208061]: New machine qemu-17-instance-0000000f.
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.374 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[631fcd3c-b61e-4eed-a3e2-7333a14ef4e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 systemd-udevd[256114]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:50:20 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.387 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[1989415f-2659-4ee1-a1be-0a7acf758b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 NetworkManager[48954]: <info>  [1769442620.3936] device (tap72aacd31-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:50:20 compute-0 NetworkManager[48954]: <info>  [1769442620.3945] device (tap72aacd31-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.413 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[231699c8-c699-44a0-9325-70867104dbc6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.444 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[24206039-1bfd-4db4-b9a4-151e19f0beca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 NetworkManager[48954]: <info>  [1769442620.4510] manager: (tap4a0d62aa-50): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.449 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[31171384-8757-4672-800e-e5611a6580cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 podman[256117]: 2026-01-26 15:50:20.477479388 +0000 UTC m=+0.075029017 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.485 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ceed45c2-3686-4e79-acb5-a805c3d0b205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.489 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[dac8989a-1629-45a3-988b-922062a2797e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 NetworkManager[48954]: <info>  [1769442620.5100] device (tap4a0d62aa-50): carrier: link connected
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.517 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f05a76bd-6abf-464f-b06a-89073642edca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.537 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f824683c-058f-49fb-8ed1-d1a9530e8ecb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 38822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256196, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.557 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[32409bd2-2063-41c2-a517-bd95fff6fe42]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8c:c2d0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411131, 'tstamp': 411131}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256203, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.575 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[20c416a9-f289-4000-a4ba-cd2d84cf05b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 38822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 256207, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 podman[256164]: 2026-01-26 15:50:20.583399853 +0000 UTC m=+0.093361938 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.600 239969 DEBUG oslo_concurrency.lockutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "f470fcc4-0f3d-4048-9e21-61e217237d40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.600 239969 DEBUG oslo_concurrency.lockutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.600 239969 DEBUG oslo_concurrency.lockutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.600 239969 DEBUG oslo_concurrency.lockutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.600 239969 DEBUG oslo_concurrency.lockutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.601 239969 INFO nova.compute.manager [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Terminating instance
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.602 239969 DEBUG nova.compute.manager [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.609 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[04f14264-51eb-417c-a239-8f1d89bdeaa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 kernel: tap54c94092-c7 (unregistering): left promiscuous mode
Jan 26 15:50:20 compute-0 NetworkManager[48954]: <info>  [1769442620.6353] device (tap54c94092-c7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:50:20 compute-0 ovn_controller[146046]: 2026-01-26T15:50:20Z|00064|binding|INFO|Releasing lport 54c94092-c736-46ff-b1b2-294a99ee679c from this chassis (sb_readonly=0)
Jan 26 15:50:20 compute-0 ovn_controller[146046]: 2026-01-26T15:50:20Z|00065|binding|INFO|Setting lport 54c94092-c736-46ff-b1b2-294a99ee679c down in Southbound
Jan 26 15:50:20 compute-0 ovn_controller[146046]: 2026-01-26T15:50:20Z|00066|binding|INFO|Removing iface tap54c94092-c7 ovn-installed in OVS
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.646 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.659 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:1c:5e 10.100.0.5'], port_security=['fa:16:3e:99:1c:5e 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'f470fcc4-0f3d-4048-9e21-61e217237d40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=54c94092-c736-46ff-b1b2-294a99ee679c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.662 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.669 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4cbfbc2b-e561-4532-8689-886e19333928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.670 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.670 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.670 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:20 compute-0 NetworkManager[48954]: <info>  [1769442620.6726] manager: (tap4a0d62aa-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.672 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 kernel: tap4a0d62aa-50: entered promiscuous mode
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.677 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.678 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:20 compute-0 ovn_controller[146046]: 2026-01-26T15:50:20Z|00067|binding|INFO|Releasing lport 98f4f5ff-65bb-4d1b-80cd-163cfdeaa557 from this chassis (sb_readonly=0)
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.679 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 26 15:50:20 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 2.902s CPU time.
Jan 26 15:50:20 compute-0 systemd-machined[208061]: Machine qemu-15-instance-0000000d terminated.
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.699 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.700 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.701 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.702 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[529a78a7-b28a-41fd-9db6-7e32c32f7e62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.703 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc.pid.haproxy
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:50:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:20.703 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'env', 'PROCESS_TAG=haproxy-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.717 239969 DEBUG nova.compute.manager [req-5a47b5b7-843d-4995-b1cd-abfc26acb64b req-6e9822c0-4e8b-4160-8b59-b514b6e06008 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.717 239969 DEBUG oslo_concurrency.lockutils [req-5a47b5b7-843d-4995-b1cd-abfc26acb64b req-6e9822c0-4e8b-4160-8b59-b514b6e06008 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.717 239969 DEBUG oslo_concurrency.lockutils [req-5a47b5b7-843d-4995-b1cd-abfc26acb64b req-6e9822c0-4e8b-4160-8b59-b514b6e06008 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.717 239969 DEBUG oslo_concurrency.lockutils [req-5a47b5b7-843d-4995-b1cd-abfc26acb64b req-6e9822c0-4e8b-4160-8b59-b514b6e06008 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.718 239969 DEBUG nova.compute.manager [req-5a47b5b7-843d-4995-b1cd-abfc26acb64b req-6e9822c0-4e8b-4160-8b59-b514b6e06008 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Processing event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:50:20 compute-0 NetworkManager[48954]: <info>  [1769442620.8269] manager: (tap54c94092-c7): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.832 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.838 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.850 239969 INFO nova.virt.libvirt.driver [-] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Instance destroyed successfully.
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.851 239969 DEBUG nova.objects.instance [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'resources' on Instance uuid f470fcc4-0f3d-4048-9e21-61e217237d40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.866 239969 DEBUG nova.virt.libvirt.vif [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:49:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-783222857',display_name='tempest-ImagesTestJSON-server-783222857',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-783222857',id=13,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:50:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-av192pyl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram=
'0',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:50:18Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=f470fcc4-0f3d-4048-9e21-61e217237d40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.867 239969 DEBUG nova.network.os_vif_util [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "54c94092-c736-46ff-b1b2-294a99ee679c", "address": "fa:16:3e:99:1c:5e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54c94092-c7", "ovs_interfaceid": "54c94092-c736-46ff-b1b2-294a99ee679c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.868 239969 DEBUG nova.network.os_vif_util [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:1c:5e,bridge_name='br-int',has_traffic_filtering=True,id=54c94092-c736-46ff-b1b2-294a99ee679c,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54c94092-c7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.868 239969 DEBUG os_vif [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1c:5e,bridge_name='br-int',has_traffic_filtering=True,id=54c94092-c736-46ff-b1b2-294a99ee679c,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54c94092-c7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.870 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54c94092-c7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1688265158' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.872 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.875 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.877 239969 INFO os_vif [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:1c:5e,bridge_name='br-int',has_traffic_filtering=True,id=54c94092-c736-46ff-b1b2-294a99ee679c,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54c94092-c7')
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.895 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.896 239969 DEBUG nova.virt.libvirt.vif [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1266128042',display_name='tempest-ServersAdminTestJSON-server-1266128042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1266128042',id=16,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-rj2hqc3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:15Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=20e8eee5-8c27-4d2b-b132-afa685238f37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.896 239969 DEBUG nova.network.os_vif_util [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.900 239969 DEBUG nova.network.os_vif_util [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=25f90c0f-639d-4310-a846-2630969fa405,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f90c0f-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.903 239969 DEBUG nova.objects.instance [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20e8eee5-8c27-4d2b-b132-afa685238f37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.918 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <uuid>20e8eee5-8c27-4d2b-b132-afa685238f37</uuid>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <name>instance-00000010</name>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdminTestJSON-server-1266128042</nova:name>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:50:19</nova:creationTime>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <nova:user uuid="b47ce75429ed4cd1ba21c0d50c2e552d">tempest-ServersAdminTestJSON-863857415-project-member</nova:user>
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <nova:project uuid="b3195eedf4a34fabaf019faaaad7eb71">tempest-ServersAdminTestJSON-863857415</nova:project>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <nova:port uuid="25f90c0f-639d-4310-a846-2630969fa405">
Jan 26 15:50:20 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <system>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <entry name="serial">20e8eee5-8c27-4d2b-b132-afa685238f37</entry>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <entry name="uuid">20e8eee5-8c27-4d2b-b132-afa685238f37</entry>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     </system>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <os>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   </os>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <features>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   </features>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/20e8eee5-8c27-4d2b-b132-afa685238f37_disk">
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/20e8eee5-8c27-4d2b-b132-afa685238f37_disk.config">
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:20 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:d9:1a:6e"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <target dev="tap25f90c0f-63"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37/console.log" append="off"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <video>
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     </video>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:50:20 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:50:20 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:50:20 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:50:20 compute-0 nova_compute[239965]: </domain>
Jan 26 15:50:20 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.919 239969 DEBUG nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Preparing to wait for external event network-vif-plugged-25f90c0f-639d-4310-a846-2630969fa405 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.919 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.919 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.919 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.920 239969 DEBUG nova.virt.libvirt.vif [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1266128042',display_name='tempest-ServersAdminTestJSON-server-1266128042',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1266128042',id=16,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-rj2hqc3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:15Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=20e8eee5-8c27-4d2b-b132-afa685238f37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.921 239969 DEBUG nova.network.os_vif_util [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.921 239969 DEBUG nova.network.os_vif_util [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=25f90c0f-639d-4310-a846-2630969fa405,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f90c0f-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.921 239969 DEBUG os_vif [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=25f90c0f-639d-4310-a846-2630969fa405,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f90c0f-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.922 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.922 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.924 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap25f90c0f-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.925 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap25f90c0f-63, col_values=(('external_ids', {'iface-id': '25f90c0f-639d-4310-a846-2630969fa405', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:1a:6e', 'vm-uuid': '20e8eee5-8c27-4d2b-b132-afa685238f37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 NetworkManager[48954]: <info>  [1769442620.9268] manager: (tap25f90c0f-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.928 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.933 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.934 239969 INFO os_vif [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=25f90c0f-639d-4310-a846-2630969fa405,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f90c0f-63')
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.985 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.985 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.985 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No VIF found with MAC fa:16:3e:d9:1a:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:50:20 compute-0 nova_compute[239965]: 2026-01-26 15:50:20.986 239969 INFO nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Using config drive
Jan 26 15:50:21 compute-0 ceph-mon[75140]: osdmap e156: 3 total, 3 up, 3 in
Jan 26 15:50:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/522263605' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1688265158' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.006 239969 DEBUG nova.storage.rbd_utils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image 20e8eee5-8c27-4d2b-b132-afa685238f37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:21 compute-0 podman[256317]: 2026-01-26 15:50:21.17126505 +0000 UTC m=+0.086599652 container create b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:50:21 compute-0 podman[256317]: 2026-01-26 15:50:21.109540498 +0000 UTC m=+0.024875120 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:50:21 compute-0 systemd[1]: Started libpod-conmon-b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85.scope.
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.217 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442621.2165744, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.217 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Started (Lifecycle Event)
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.219 239969 DEBUG nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:50:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:50:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/431e4ea434cc11136276c5590b725eedf2973b3d0f420fbefa3163d27d0b434c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.229 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.235 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.238 239969 INFO nova.virt.libvirt.driver [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance spawned successfully.
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.239 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:50:21 compute-0 podman[256317]: 2026-01-26 15:50:21.248289346 +0000 UTC m=+0.163623968 container init b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.252 239969 INFO nova.virt.libvirt.driver [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Deleting instance files /var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40_del
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.254 239969 INFO nova.virt.libvirt.driver [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Deletion of /var/lib/nova/instances/f470fcc4-0f3d-4048-9e21-61e217237d40_del complete
Jan 26 15:50:21 compute-0 podman[256317]: 2026-01-26 15:50:21.254211271 +0000 UTC m=+0.169545873 container start b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.261 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.271 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.271 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.272 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.272 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.272 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.273 239969 DEBUG nova.virt.libvirt.driver [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:21 compute-0 neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc[256358]: [NOTICE]   (256362) : New worker (256364) forked
Jan 26 15:50:21 compute-0 neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc[256358]: [NOTICE]   (256362) : Loading success.
Jan 26 15:50:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:21.315 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 54c94092-c736-46ff-b1b2-294a99ee679c in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 unbound from our chassis
Jan 26 15:50:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:21.318 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5e237c1-a75f-479a-88ee-c0f788914b11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.318 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.318 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442621.216694, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.318 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Paused (Lifecycle Event)
Jan 26 15:50:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:21.319 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[889ebb10-0d07-41a9-9be5-f5f590f28050]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:21.321 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace which is not needed anymore
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.364 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.368 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442621.2229142, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.368 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Resumed (Lifecycle Event)
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.374 239969 INFO nova.compute.manager [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.375 239969 DEBUG oslo.service.loopingcall [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.375 239969 DEBUG nova.compute.manager [-] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.375 239969 DEBUG nova.network.neutron [-] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.388 239969 INFO nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Took 8.49 seconds to spawn the instance on the hypervisor.
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.388 239969 DEBUG nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.414 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.417 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.450 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.465 239969 INFO nova.compute.manager [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Took 9.76 seconds to build instance.
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.489 239969 DEBUG oslo_concurrency.lockutils [None req-3e2369d9-4288-4120-8391-d87223006592 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:50:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[255247]: [NOTICE]   (255286) : haproxy version is 2.8.14-c23fe91
Jan 26 15:50:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[255247]: [NOTICE]   (255286) : path to executable is /usr/sbin/haproxy
Jan 26 15:50:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[255247]: [WARNING]  (255286) : Exiting Master process...
Jan 26 15:50:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[255247]: [WARNING]  (255286) : Exiting Master process...
Jan 26 15:50:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[255247]: [ALERT]    (255286) : Current worker (255291) exited with code 143 (Terminated)
Jan 26 15:50:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[255247]: [WARNING]  (255286) : All workers exited. Exiting... (0)
Jan 26 15:50:21 compute-0 systemd[1]: libpod-d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460.scope: Deactivated successfully.
Jan 26 15:50:21 compute-0 podman[256390]: 2026-01-26 15:50:21.559139738 +0000 UTC m=+0.134899125 container died d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:50:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460-userdata-shm.mount: Deactivated successfully.
Jan 26 15:50:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 341 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 10 MiB/s wr, 257 op/s
Jan 26 15:50:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-6da67a331c29fdbc79d33fab33edc86c7051763ace689dea57abaa4dffefced7-merged.mount: Deactivated successfully.
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.693 239969 DEBUG nova.compute.manager [req-cdafbaa0-4b80-4a33-8a5b-7f653e370433 req-f7e08ad6-820a-471b-8410-82155090d786 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Received event network-vif-unplugged-54c94092-c736-46ff-b1b2-294a99ee679c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.694 239969 DEBUG oslo_concurrency.lockutils [req-cdafbaa0-4b80-4a33-8a5b-7f653e370433 req-f7e08ad6-820a-471b-8410-82155090d786 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.694 239969 DEBUG oslo_concurrency.lockutils [req-cdafbaa0-4b80-4a33-8a5b-7f653e370433 req-f7e08ad6-820a-471b-8410-82155090d786 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.694 239969 DEBUG oslo_concurrency.lockutils [req-cdafbaa0-4b80-4a33-8a5b-7f653e370433 req-f7e08ad6-820a-471b-8410-82155090d786 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.695 239969 DEBUG nova.compute.manager [req-cdafbaa0-4b80-4a33-8a5b-7f653e370433 req-f7e08ad6-820a-471b-8410-82155090d786 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] No waiting events found dispatching network-vif-unplugged-54c94092-c736-46ff-b1b2-294a99ee679c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.695 239969 DEBUG nova.compute.manager [req-cdafbaa0-4b80-4a33-8a5b-7f653e370433 req-f7e08ad6-820a-471b-8410-82155090d786 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Received event network-vif-unplugged-54c94092-c736-46ff-b1b2-294a99ee679c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.805 239969 INFO nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Creating config drive at /var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37/disk.config
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.810 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjtz7dqwd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.854 239969 DEBUG nova.network.neutron [req-e2ca5b96-3a2c-4c06-bb4f-69abfeab215a req-49c14772-092b-4721-8e33-e0611aa09135 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Updated VIF entry in instance network info cache for port 25f90c0f-639d-4310-a846-2630969fa405. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.855 239969 DEBUG nova.network.neutron [req-e2ca5b96-3a2c-4c06-bb4f-69abfeab215a req-49c14772-092b-4721-8e33-e0611aa09135 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Updating instance_info_cache with network_info: [{"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.874 239969 DEBUG oslo_concurrency.lockutils [req-e2ca5b96-3a2c-4c06-bb4f-69abfeab215a req-49c14772-092b-4721-8e33-e0611aa09135 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-20e8eee5-8c27-4d2b-b132-afa685238f37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.939 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjtz7dqwd" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.967 239969 DEBUG nova.storage.rbd_utils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image 20e8eee5-8c27-4d2b-b132-afa685238f37_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:21 compute-0 nova_compute[239965]: 2026-01-26 15:50:21.972 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37/disk.config 20e8eee5-8c27-4d2b-b132-afa685238f37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:22 compute-0 ceph-mon[75140]: pgmap v1043: 305 pgs: 305 active+clean; 341 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 10 MiB/s wr, 257 op/s
Jan 26 15:50:22 compute-0 podman[256390]: 2026-01-26 15:50:22.047675752 +0000 UTC m=+0.623435139 container cleanup d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 15:50:22 compute-0 systemd[1]: libpod-conmon-d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460.scope: Deactivated successfully.
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.135 239969 DEBUG oslo_concurrency.processutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37/disk.config 20e8eee5-8c27-4d2b-b132-afa685238f37_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.136 239969 INFO nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Deleting local config drive /var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37/disk.config because it was imported into RBD.
Jan 26 15:50:22 compute-0 podman[256458]: 2026-01-26 15:50:22.137161954 +0000 UTC m=+0.064232274 container remove d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.151 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[09f059b5-3a2d-4309-af00-f0bc1ca4c64b]: (4, ('Mon Jan 26 03:50:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460)\nd5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460\nMon Jan 26 03:50:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (d5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460)\nd5f20369113d31fef9f796571bb79cab90a56dcde788d785f797f50e91242460\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.154 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[50e06d42-8d70-419f-a6d3-41a93d42f88b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.155 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:22 compute-0 kernel: tapf5e237c1-a0: left promiscuous mode
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.160 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.183 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.186 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6bda59-5871-4de5-8a1f-131b7245d553]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 systemd-udevd[256174]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:50:22 compute-0 kernel: tap25f90c0f-63: entered promiscuous mode
Jan 26 15:50:22 compute-0 ovn_controller[146046]: 2026-01-26T15:50:22Z|00068|binding|INFO|Claiming lport 25f90c0f-639d-4310-a846-2630969fa405 for this chassis.
Jan 26 15:50:22 compute-0 ovn_controller[146046]: 2026-01-26T15:50:22Z|00069|binding|INFO|25f90c0f-639d-4310-a846-2630969fa405: Claiming fa:16:3e:d9:1a:6e 10.100.0.3
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.205 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:22 compute-0 NetworkManager[48954]: <info>  [1769442622.2076] manager: (tap25f90c0f-63): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.207 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[39972adb-824f-40cf-a0e7-c4328a9b622e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.210 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[849ba5be-dcc7-418a-92e6-b517cf733068]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.215 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:1a:6e 10.100.0.3'], port_security=['fa:16:3e:d9:1a:6e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '20e8eee5-8c27-4d2b-b132-afa685238f37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=25f90c0f-639d-4310-a846-2630969fa405) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:22 compute-0 NetworkManager[48954]: <info>  [1769442622.2227] device (tap25f90c0f-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:50:22 compute-0 NetworkManager[48954]: <info>  [1769442622.2233] device (tap25f90c0f-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:50:22 compute-0 ovn_controller[146046]: 2026-01-26T15:50:22Z|00070|binding|INFO|Setting lport 25f90c0f-639d-4310-a846-2630969fa405 ovn-installed in OVS
Jan 26 15:50:22 compute-0 ovn_controller[146046]: 2026-01-26T15:50:22Z|00071|binding|INFO|Setting lport 25f90c0f-639d-4310-a846-2630969fa405 up in Southbound
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.230 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2006f58a-5a63-4ae0-8d1b-02978fc6476e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 409393, 'reachable_time': 26852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256492, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 systemd[1]: run-netns-ovnmeta\x2df5e237c1\x2da75f\x2d479a\x2d88ee\x2dc0f788914b11.mount: Deactivated successfully.
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.232 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.232 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[f60923f4-aa5e-4387-93ed-5bb2d4458c66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.234 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 25f90c0f-639d-4310-a846-2630969fa405 in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc bound to our chassis
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.236 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:50:22 compute-0 systemd-machined[208061]: New machine qemu-18-instance-00000010.
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.256 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[83266bd3-9760-41df-80ff-73aa864c7d90]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000010.
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.293 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[273d7b43-147f-4edf-ae0c-6e494d568439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.297 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5d152532-9b3d-4fd0-8ea0-61fe986caff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.335 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[830eab78-7f97-44d8-acdb-54d3333338dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.357 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6d7b8664-b8ad-4ccd-b546-efc5a345490d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 38822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256507, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.367 239969 DEBUG nova.network.neutron [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Updated VIF entry in instance network info cache for port 72aacd31-2789-4f27-a514-f80766db0d6e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.367 239969 DEBUG nova.network.neutron [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Updating instance_info_cache with network_info: [{"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.374 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[506ce762-61ac-40e5-9558-198dded48a09]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256508, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256508, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.375 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.377 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.378 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.379 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.379 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.379 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:22.380 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.385 239969 DEBUG oslo_concurrency.lockutils [req-325fb546-7907-49e5-bcf8-72761692a3ba req-46e9a8a4-3b5f-46b4-9f59-9390d21b8ea3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-db37e7ff-8499-4664-b2ba-014e27b8b6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.502 239969 DEBUG nova.network.neutron [-] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.531 239969 INFO nova.compute.manager [-] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Took 1.16 seconds to deallocate network for instance.
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.584 239969 DEBUG oslo_concurrency.lockutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.584 239969 DEBUG oslo_concurrency.lockutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.951 239969 DEBUG oslo_concurrency.processutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.986 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442622.985512, 20e8eee5-8c27-4d2b-b132-afa685238f37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:22 compute-0 nova_compute[239965]: 2026-01-26 15:50:22.986 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] VM Started (Lifecycle Event)
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.006 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.010 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442622.9896002, 20e8eee5-8c27-4d2b-b132-afa685238f37 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.010 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] VM Paused (Lifecycle Event)
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.030 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.033 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.051 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2167059943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.527 239969 DEBUG oslo_concurrency.processutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.533 239969 DEBUG nova.compute.provider_tree [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.551 239969 DEBUG nova.scheduler.client.report [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:50:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2167059943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.581 239969 DEBUG oslo_concurrency.lockutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.606 239969 INFO nova.scheduler.client.report [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Deleted allocations for instance f470fcc4-0f3d-4048-9e21-61e217237d40
Jan 26 15:50:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 309 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 11 MiB/s wr, 426 op/s
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.682 239969 DEBUG oslo_concurrency.lockutils [None req-52afbe9b-d011-41d4-ae26-70c9d2084bed 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.082s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.957 239969 DEBUG nova.compute.manager [req-0849e290-9ceb-49fe-8664-579b3059cf4d req-98c7ad42-6b4f-48c3-bd33-660153d84766 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.957 239969 DEBUG oslo_concurrency.lockutils [req-0849e290-9ceb-49fe-8664-579b3059cf4d req-98c7ad42-6b4f-48c3-bd33-660153d84766 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.958 239969 DEBUG oslo_concurrency.lockutils [req-0849e290-9ceb-49fe-8664-579b3059cf4d req-98c7ad42-6b4f-48c3-bd33-660153d84766 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.958 239969 DEBUG oslo_concurrency.lockutils [req-0849e290-9ceb-49fe-8664-579b3059cf4d req-98c7ad42-6b4f-48c3-bd33-660153d84766 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.958 239969 DEBUG nova.compute.manager [req-0849e290-9ceb-49fe-8664-579b3059cf4d req-98c7ad42-6b4f-48c3-bd33-660153d84766 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] No waiting events found dispatching network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:23 compute-0 nova_compute[239965]: 2026-01-26 15:50:23.958 239969 WARNING nova.compute.manager [req-0849e290-9ceb-49fe-8664-579b3059cf4d req-98c7ad42-6b4f-48c3-bd33-660153d84766 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received unexpected event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e for instance with vm_state active and task_state None.
Jan 26 15:50:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Jan 26 15:50:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Jan 26 15:50:24 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.511 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.511 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.534 239969 DEBUG nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:50:24 compute-0 ceph-mon[75140]: pgmap v1044: 305 pgs: 305 active+clean; 309 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 11 MiB/s wr, 426 op/s
Jan 26 15:50:24 compute-0 ceph-mon[75140]: osdmap e157: 3 total, 3 up, 3 in
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.612 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.613 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.620 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.620 239969 INFO nova.compute.claims [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.631 239969 DEBUG nova.compute.manager [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Received event network-vif-plugged-54c94092-c736-46ff-b1b2-294a99ee679c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.631 239969 DEBUG oslo_concurrency.lockutils [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.631 239969 DEBUG oslo_concurrency.lockutils [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.631 239969 DEBUG oslo_concurrency.lockutils [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f470fcc4-0f3d-4048-9e21-61e217237d40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.632 239969 DEBUG nova.compute.manager [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] No waiting events found dispatching network-vif-plugged-54c94092-c736-46ff-b1b2-294a99ee679c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.632 239969 WARNING nova.compute.manager [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Received unexpected event network-vif-plugged-54c94092-c736-46ff-b1b2-294a99ee679c for instance with vm_state deleted and task_state None.
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.632 239969 DEBUG nova.compute.manager [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Received event network-vif-deleted-54c94092-c736-46ff-b1b2-294a99ee679c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.632 239969 DEBUG nova.compute.manager [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Received event network-vif-plugged-25f90c0f-639d-4310-a846-2630969fa405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.632 239969 DEBUG oslo_concurrency.lockutils [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.632 239969 DEBUG oslo_concurrency.lockutils [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.633 239969 DEBUG oslo_concurrency.lockutils [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.633 239969 DEBUG nova.compute.manager [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Processing event network-vif-plugged-25f90c0f-639d-4310-a846-2630969fa405 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.633 239969 DEBUG nova.compute.manager [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Received event network-vif-plugged-25f90c0f-639d-4310-a846-2630969fa405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.633 239969 DEBUG oslo_concurrency.lockutils [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.633 239969 DEBUG oslo_concurrency.lockutils [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.633 239969 DEBUG oslo_concurrency.lockutils [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.634 239969 DEBUG nova.compute.manager [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] No waiting events found dispatching network-vif-plugged-25f90c0f-639d-4310-a846-2630969fa405 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.634 239969 WARNING nova.compute.manager [req-46591b3a-d214-41e9-be52-8bd1e789a16a req-d6ffb7d0-2b37-41f5-977c-2e296dc1b06d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Received unexpected event network-vif-plugged-25f90c0f-639d-4310-a846-2630969fa405 for instance with vm_state building and task_state spawning.
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.634 239969 DEBUG nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.638 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442624.638503, 20e8eee5-8c27-4d2b-b132-afa685238f37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.639 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] VM Resumed (Lifecycle Event)
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.640 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.643 239969 INFO nova.virt.libvirt.driver [-] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Instance spawned successfully.
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.643 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.673 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.679 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.682 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.682 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.683 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.683 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.684 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.684 239969 DEBUG nova.virt.libvirt.driver [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.719 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.788 239969 INFO nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Took 9.69 seconds to spawn the instance on the hypervisor.
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.788 239969 DEBUG nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.794 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:24 compute-0 nova_compute[239965]: 2026-01-26 15:50:24.875 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.074 239969 INFO nova.compute.manager [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Took 11.24 seconds to build instance.
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.094 239969 DEBUG oslo_concurrency.lockutils [None req-da4ec002-1ec6-4a20-9e91-aa1ae791ad14 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1573140747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.463 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.470 239969 DEBUG nova.compute.provider_tree [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.490 239969 DEBUG nova.scheduler.client.report [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.518 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.905s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.518 239969 DEBUG nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.543 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.544 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.573 239969 DEBUG nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.574 239969 DEBUG nova.network.neutron [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:50:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1573140747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.598 239969 INFO nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.620 239969 DEBUG nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:50:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 309 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.8 MiB/s wr, 326 op/s
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.722 239969 DEBUG nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.724 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.724 239969 INFO nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Creating image(s)
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.757 239969 DEBUG nova.storage.rbd_utils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.796 239969 DEBUG nova.storage.rbd_utils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.829 239969 DEBUG nova.storage.rbd_utils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.835 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.866 239969 DEBUG nova.policy [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3322c44e378e415bb486ef558314a67c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ddba5162f533447bba0159cafaa565bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.922 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.923 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.924 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.924 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.947 239969 DEBUG nova.storage.rbd_utils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.953 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:25 compute-0 nova_compute[239965]: 2026-01-26 15:50:25.979 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.054 239969 DEBUG oslo_concurrency.lockutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "45865956-34db-4434-8dca-48c8dac9811e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.054 239969 DEBUG oslo_concurrency.lockutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.054 239969 DEBUG oslo_concurrency.lockutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "45865956-34db-4434-8dca-48c8dac9811e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.054 239969 DEBUG oslo_concurrency.lockutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.055 239969 DEBUG oslo_concurrency.lockutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.056 239969 INFO nova.compute.manager [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Terminating instance
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.057 239969 DEBUG nova.compute.manager [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:50:26 compute-0 kernel: tapabdacb47-7e (unregistering): left promiscuous mode
Jan 26 15:50:26 compute-0 NetworkManager[48954]: <info>  [1769442626.1203] device (tapabdacb47-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:50:26 compute-0 ovn_controller[146046]: 2026-01-26T15:50:26Z|00072|binding|INFO|Releasing lport abdacb47-7e9f-4988-9a24-c617ebb61680 from this chassis (sb_readonly=0)
Jan 26 15:50:26 compute-0 ovn_controller[146046]: 2026-01-26T15:50:26Z|00073|binding|INFO|Setting lport abdacb47-7e9f-4988-9a24-c617ebb61680 down in Southbound
Jan 26 15:50:26 compute-0 ovn_controller[146046]: 2026-01-26T15:50:26Z|00074|binding|INFO|Removing iface tapabdacb47-7e ovn-installed in OVS
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.134 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.142 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:85:19 10.100.0.14'], port_security=['fa:16:3e:fd:85:19 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '45865956-34db-4434-8dca-48c8dac9811e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68eabea2-3a2d-4690-b867-0143d33ad099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2f3e7dabb6b4b6d8d0bd497c45857fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '768559e5-eeab-4bfe-90a7-d37c537391d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09fd1a35-0741-44b5-aefd-ffc4e1370811, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=abdacb47-7e9f-4988-9a24-c617ebb61680) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.144 156105 INFO neutron.agent.ovn.metadata.agent [-] Port abdacb47-7e9f-4988-9a24-c617ebb61680 in datapath 68eabea2-3a2d-4690-b867-0143d33ad099 unbound from our chassis
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.145 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 68eabea2-3a2d-4690-b867-0143d33ad099
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.155 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.168 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5d829204-c512-4e6c-8a73-bc13ff3ade30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:26 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 26 15:50:26 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Consumed 14.996s CPU time.
Jan 26 15:50:26 compute-0 systemd-machined[208061]: Machine qemu-16-instance-0000000e terminated.
Jan 26 15:50:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4118338361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.201 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e7466043-f8d0-444e-8593-9a5608470f18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.205 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[644b5d98-3f84-453e-9661-2b8d5ca1aab8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.218 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.238 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ffec3139-e0f3-4384-92d2-4a3aaf5de6fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.264 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[83fc8000-5111-4a25-9580-c1ccc8a6f11d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68eabea2-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fe:0e:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407863, 'reachable_time': 42637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256722, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.288 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[55c492fc-953e-4501-b645-38c4cb9718f1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap68eabea2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407875, 'tstamp': 407875}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256725, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap68eabea2-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 407878, 'tstamp': 407878}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 256725, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.290 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68eabea2-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.291 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.296 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.296 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68eabea2-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.297 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.297 239969 INFO nova.virt.libvirt.driver [-] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Instance destroyed successfully.
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.297 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap68eabea2-30, col_values=(('external_ids', {'iface-id': '0d1287ef-e69c-46bd-a8ed-5da65e222245'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.297 239969 DEBUG nova.objects.instance [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lazy-loading 'resources' on Instance uuid 45865956-34db-4434-8dca-48c8dac9811e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:26.298 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.319 239969 DEBUG nova.virt.libvirt.vif [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:49:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-229085207',display_name='tempest-FloatingIPsAssociationTestJSON-server-229085207',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-229085207',id=14,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:50:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c2f3e7dabb6b4b6d8d0bd497c45857fc',ramdisk_id='',reservation_id='r-e6r4a7w6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-2042630915',owner_user_name='tempest-FloatingIPsAssociationTestJSON-2042630915-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:50:05Z,user_data=None,user_id='19b1461c4dbc4033a684f6e523867df8',uuid=45865956-34db-4434-8dca-48c8dac9811e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.320 239969 DEBUG nova.network.os_vif_util [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converting VIF {"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.321 239969 DEBUG nova.network.os_vif_util [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:85:19,bridge_name='br-int',has_traffic_filtering=True,id=abdacb47-7e9f-4988-9a24-c617ebb61680,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdacb47-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.321 239969 DEBUG os_vif [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:85:19,bridge_name='br-int',has_traffic_filtering=True,id=abdacb47-7e9f-4988-9a24-c617ebb61680,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdacb47-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.323 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.323 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabdacb47-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.326 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.329 239969 INFO os_vif [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:85:19,bridge_name='br-int',has_traffic_filtering=True,id=abdacb47-7e9f-4988-9a24-c617ebb61680,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapabdacb47-7e')
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.353 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.429 239969 DEBUG nova.storage.rbd_utils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] resizing rbd image 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.466 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.466 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.470 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.470 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.474 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.474 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.479 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.479 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.515 239969 DEBUG nova.objects.instance [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'migration_context' on Instance uuid 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.531 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.531 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Ensure instance console log exists: /var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.532 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.532 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.532 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.601 239969 DEBUG nova.network.neutron [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Successfully created port: 509bd168-4b1c-48dd-9b84-73e955518fdc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:50:26 compute-0 ceph-mon[75140]: pgmap v1046: 305 pgs: 305 active+clean; 309 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.8 MiB/s wr, 326 op/s
Jan 26 15:50:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4118338361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.745 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.746 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4061MB free_disk=59.850483058951795GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.746 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.746 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.829 239969 DEBUG nova.compute.manager [req-f1ad80f8-6c37-4408-82d8-ef3081adf4f8 req-6815e59a-9dff-4f82-9c6b-0934ec0f9249 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received event network-vif-unplugged-abdacb47-7e9f-4988-9a24-c617ebb61680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.829 239969 DEBUG oslo_concurrency.lockutils [req-f1ad80f8-6c37-4408-82d8-ef3081adf4f8 req-6815e59a-9dff-4f82-9c6b-0934ec0f9249 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "45865956-34db-4434-8dca-48c8dac9811e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.830 239969 DEBUG oslo_concurrency.lockutils [req-f1ad80f8-6c37-4408-82d8-ef3081adf4f8 req-6815e59a-9dff-4f82-9c6b-0934ec0f9249 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.830 239969 DEBUG oslo_concurrency.lockutils [req-f1ad80f8-6c37-4408-82d8-ef3081adf4f8 req-6815e59a-9dff-4f82-9c6b-0934ec0f9249 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.830 239969 DEBUG nova.compute.manager [req-f1ad80f8-6c37-4408-82d8-ef3081adf4f8 req-6815e59a-9dff-4f82-9c6b-0934ec0f9249 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] No waiting events found dispatching network-vif-unplugged-abdacb47-7e9f-4988-9a24-c617ebb61680 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.830 239969 DEBUG nova.compute.manager [req-f1ad80f8-6c37-4408-82d8-ef3081adf4f8 req-6815e59a-9dff-4f82-9c6b-0934ec0f9249 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received event network-vif-unplugged-abdacb47-7e9f-4988-9a24-c617ebb61680 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.840 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 3a0641d5-a02c-4801-ab1d-4f097cd3f431 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.840 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 45865956-34db-4434-8dca-48c8dac9811e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.840 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance db37e7ff-8499-4664-b2ba-014e27b8b6bb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.840 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 20e8eee5-8c27-4d2b-b132-afa685238f37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.840 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.841 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.841 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.906 239969 DEBUG nova.compute.manager [req-727794f1-21d5-446e-b68b-01889176e8d7 req-d0197d99-3ae8-4dac-b696-3ef4d341ba94 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received event network-changed-abdacb47-7e9f-4988-9a24-c617ebb61680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.906 239969 DEBUG nova.compute.manager [req-727794f1-21d5-446e-b68b-01889176e8d7 req-d0197d99-3ae8-4dac-b696-3ef4d341ba94 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Refreshing instance network info cache due to event network-changed-abdacb47-7e9f-4988-9a24-c617ebb61680. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.907 239969 DEBUG oslo_concurrency.lockutils [req-727794f1-21d5-446e-b68b-01889176e8d7 req-d0197d99-3ae8-4dac-b696-3ef4d341ba94 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.907 239969 DEBUG oslo_concurrency.lockutils [req-727794f1-21d5-446e-b68b-01889176e8d7 req-d0197d99-3ae8-4dac-b696-3ef4d341ba94 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.907 239969 DEBUG nova.network.neutron [req-727794f1-21d5-446e-b68b-01889176e8d7 req-d0197d99-3ae8-4dac-b696-3ef4d341ba94 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Refreshing network info cache for port abdacb47-7e9f-4988-9a24-c617ebb61680 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:26 compute-0 nova_compute[239965]: 2026-01-26 15:50:26.965 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.513 239969 INFO nova.virt.libvirt.driver [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Deleting instance files /var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e_del
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.514 239969 INFO nova.virt.libvirt.driver [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Deletion of /var/lib/nova/instances/45865956-34db-4434-8dca-48c8dac9811e_del complete
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.523 239969 DEBUG nova.network.neutron [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Successfully updated port: 509bd168-4b1c-48dd-9b84-73e955518fdc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.540 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "refresh_cache-8d71fe6f-299d-4a2f-912c-c1f52540bdd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.540 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquired lock "refresh_cache-8d71fe6f-299d-4a2f-912c-c1f52540bdd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.541 239969 DEBUG nova.network.neutron [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:50:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/541591423' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.560 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.568 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.570 239969 INFO nova.compute.manager [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Took 1.51 seconds to destroy the instance on the hypervisor.
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.571 239969 DEBUG oslo.service.loopingcall [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.573 239969 DEBUG nova.compute.manager [-] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.574 239969 DEBUG nova.network.neutron [-] [instance: 45865956-34db-4434-8dca-48c8dac9811e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.581 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.608 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.608 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 281 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.0 MiB/s wr, 398 op/s
Jan 26 15:50:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/541591423' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:27 compute-0 nova_compute[239965]: 2026-01-26 15:50:27.903 239969 DEBUG nova.network.neutron [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:50:28
Jan 26 15:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'images', 'default.rgw.control', 'vms']
Jan 26 15:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.608 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.609 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.609 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.632 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.632 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.649 239969 DEBUG nova.network.neutron [-] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.670 239969 INFO nova.compute.manager [-] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Took 1.10 seconds to deallocate network for instance.
Jan 26 15:50:28 compute-0 ceph-mon[75140]: pgmap v1047: 305 pgs: 305 active+clean; 281 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 4.0 MiB/s wr, 398 op/s
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.732 239969 DEBUG oslo_concurrency.lockutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.733 239969 DEBUG oslo_concurrency.lockutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.857 239969 DEBUG oslo_concurrency.processutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.902 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.903 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.903 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 15:50:28 compute-0 nova_compute[239965]: 2026-01-26 15:50:28.903 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 3a0641d5-a02c-4801-ab1d-4f097cd3f431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Jan 26 15:50:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Jan 26 15:50:29 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Jan 26 15:50:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2496569760' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.428 239969 DEBUG oslo_concurrency.processutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.433 239969 DEBUG nova.compute.provider_tree [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.451 239969 DEBUG nova.scheduler.client.report [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.456 239969 DEBUG nova.network.neutron [req-727794f1-21d5-446e-b68b-01889176e8d7 req-d0197d99-3ae8-4dac-b696-3ef4d341ba94 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Updated VIF entry in instance network info cache for port abdacb47-7e9f-4988-9a24-c617ebb61680. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.457 239969 DEBUG nova.network.neutron [req-727794f1-21d5-446e-b68b-01889176e8d7 req-d0197d99-3ae8-4dac-b696-3ef4d341ba94 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Updating instance_info_cache with network_info: [{"id": "abdacb47-7e9f-4988-9a24-c617ebb61680", "address": "fa:16:3e:fd:85:19", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapabdacb47-7e", "ovs_interfaceid": "abdacb47-7e9f-4988-9a24-c617ebb61680", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.462 239969 DEBUG nova.network.neutron [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Updating instance_info_cache with network_info: [{"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.501 239969 DEBUG oslo_concurrency.lockutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.505 239969 DEBUG oslo_concurrency.lockutils [req-727794f1-21d5-446e-b68b-01889176e8d7 req-d0197d99-3ae8-4dac-b696-3ef4d341ba94 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-45865956-34db-4434-8dca-48c8dac9811e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.508 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Releasing lock "refresh_cache-8d71fe6f-299d-4a2f-912c-c1f52540bdd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.508 239969 DEBUG nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Instance network_info: |[{"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.511 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Start _get_guest_xml network_info=[{"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.516 239969 WARNING nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.520 239969 DEBUG nova.virt.libvirt.host [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.521 239969 DEBUG nova.virt.libvirt.host [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.524 239969 DEBUG nova.virt.libvirt.host [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.524 239969 DEBUG nova.virt.libvirt.host [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.525 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.525 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.526 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.526 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.527 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.527 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.527 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.528 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.528 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.528 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.529 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.529 239969 DEBUG nova.virt.hardware [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.535 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.559 239969 INFO nova.scheduler.client.report [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Deleted allocations for instance 45865956-34db-4434-8dca-48c8dac9811e
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.586 239969 DEBUG nova.compute.manager [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received event network-vif-plugged-abdacb47-7e9f-4988-9a24-c617ebb61680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.587 239969 DEBUG oslo_concurrency.lockutils [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "45865956-34db-4434-8dca-48c8dac9811e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.587 239969 DEBUG oslo_concurrency.lockutils [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.588 239969 DEBUG oslo_concurrency.lockutils [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.588 239969 DEBUG nova.compute.manager [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] No waiting events found dispatching network-vif-plugged-abdacb47-7e9f-4988-9a24-c617ebb61680 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.589 239969 WARNING nova.compute.manager [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received unexpected event network-vif-plugged-abdacb47-7e9f-4988-9a24-c617ebb61680 for instance with vm_state deleted and task_state None.
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.589 239969 DEBUG nova.compute.manager [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.590 239969 DEBUG nova.compute.manager [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing instance network info cache due to event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.590 239969 DEBUG oslo_concurrency.lockutils [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 261 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 4.6 MiB/s wr, 401 op/s
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.637 239969 DEBUG oslo_concurrency.lockutils [None req-2cf8049e-d05b-4ff5-8e31-e5617424852a 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "45865956-34db-4434-8dca-48c8dac9811e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.687 239969 DEBUG nova.compute.manager [req-10444692-a6af-4bce-82af-bed215a3860d req-76c74900-dc45-4337-b0f3-6b9b98b619cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Received event network-changed-509bd168-4b1c-48dd-9b84-73e955518fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.687 239969 DEBUG nova.compute.manager [req-10444692-a6af-4bce-82af-bed215a3860d req-76c74900-dc45-4337-b0f3-6b9b98b619cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Refreshing instance network info cache due to event network-changed-509bd168-4b1c-48dd-9b84-73e955518fdc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.688 239969 DEBUG oslo_concurrency.lockutils [req-10444692-a6af-4bce-82af-bed215a3860d req-76c74900-dc45-4337-b0f3-6b9b98b619cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-8d71fe6f-299d-4a2f-912c-c1f52540bdd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.688 239969 DEBUG oslo_concurrency.lockutils [req-10444692-a6af-4bce-82af-bed215a3860d req-76c74900-dc45-4337-b0f3-6b9b98b619cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-8d71fe6f-299d-4a2f-912c-c1f52540bdd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.689 239969 DEBUG nova.network.neutron [req-10444692-a6af-4bce-82af-bed215a3860d req-76c74900-dc45-4337-b0f3-6b9b98b619cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Refreshing network info cache for port 509bd168-4b1c-48dd-9b84-73e955518fdc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:29 compute-0 nova_compute[239965]: 2026-01-26 15:50:29.798 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.081 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.082 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.109 239969 DEBUG nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:50:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2959999610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.134 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.157 239969 DEBUG nova.storage.rbd_utils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.160 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:30 compute-0 ceph-mon[75140]: osdmap e158: 3 total, 3 up, 3 in
Jan 26 15:50:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2496569760' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:30 compute-0 ceph-mon[75140]: pgmap v1049: 305 pgs: 305 active+clean; 261 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 4.6 MiB/s wr, 401 op/s
Jan 26 15:50:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2959999610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.210 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.211 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.218 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.218 239969 INFO nova.compute.claims [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.432 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:50:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:50:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1582641735' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.730 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.733 239969 DEBUG nova.virt.libvirt.vif [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877048571',display_name='tempest-ImagesTestJSON-server-1877048571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877048571',id=17,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-ow60jkkh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:25Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=8d71fe6f-299d-4a2f-912c-c1f52540bdd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.733 239969 DEBUG nova.network.os_vif_util [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.734 239969 DEBUG nova.network.os_vif_util [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:e8:b4,bridge_name='br-int',has_traffic_filtering=True,id=509bd168-4b1c-48dd-9b84-73e955518fdc,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509bd168-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.736 239969 DEBUG nova.objects.instance [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.767 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <uuid>8d71fe6f-299d-4a2f-912c-c1f52540bdd8</uuid>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <name>instance-00000011</name>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesTestJSON-server-1877048571</nova:name>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:50:29</nova:creationTime>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <nova:user uuid="3322c44e378e415bb486ef558314a67c">tempest-ImagesTestJSON-1480202091-project-member</nova:user>
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <nova:project uuid="ddba5162f533447bba0159cafaa565bf">tempest-ImagesTestJSON-1480202091</nova:project>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <nova:port uuid="509bd168-4b1c-48dd-9b84-73e955518fdc">
Jan 26 15:50:30 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <system>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <entry name="serial">8d71fe6f-299d-4a2f-912c-c1f52540bdd8</entry>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <entry name="uuid">8d71fe6f-299d-4a2f-912c-c1f52540bdd8</entry>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     </system>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <os>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   </os>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <features>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   </features>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk">
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk.config">
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:20:e8:b4"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <target dev="tap509bd168-4b"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8/console.log" append="off"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <video>
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     </video>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:50:30 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:50:30 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:50:30 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:50:30 compute-0 nova_compute[239965]: </domain>
Jan 26 15:50:30 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.769 239969 DEBUG nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Preparing to wait for external event network-vif-plugged-509bd168-4b1c-48dd-9b84-73e955518fdc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.769 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.770 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.770 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.771 239969 DEBUG nova.virt.libvirt.vif [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877048571',display_name='tempest-ImagesTestJSON-server-1877048571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877048571',id=17,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-ow60jkkh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:25Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=8d71fe6f-299d-4a2f-912c-c1f52540bdd8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.772 239969 DEBUG nova.network.os_vif_util [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.772 239969 DEBUG nova.network.os_vif_util [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:e8:b4,bridge_name='br-int',has_traffic_filtering=True,id=509bd168-4b1c-48dd-9b84-73e955518fdc,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509bd168-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.773 239969 DEBUG os_vif [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e8:b4,bridge_name='br-int',has_traffic_filtering=True,id=509bd168-4b1c-48dd-9b84-73e955518fdc,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509bd168-4b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.774 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.775 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.775 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.778 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.778 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap509bd168-4b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.779 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap509bd168-4b, col_values=(('external_ids', {'iface-id': '509bd168-4b1c-48dd-9b84-73e955518fdc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:e8:b4', 'vm-uuid': '8d71fe6f-299d-4a2f-912c-c1f52540bdd8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:30 compute-0 NetworkManager[48954]: <info>  [1769442630.7814] manager: (tap509bd168-4b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.780 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.786 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.788 239969 INFO os_vif [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e8:b4,bridge_name='br-int',has_traffic_filtering=True,id=509bd168-4b1c-48dd-9b84-73e955518fdc,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509bd168-4b')
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.910 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.911 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.911 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No VIF found with MAC fa:16:3e:20:e8:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.912 239969 INFO nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Using config drive
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.935 239969 DEBUG nova.storage.rbd_utils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.949 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updating instance_info_cache with network_info: [{"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.968 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.968 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.968 239969 DEBUG oslo_concurrency.lockutils [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.969 239969 DEBUG nova.network.neutron [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.970 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.970 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.970 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.971 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:50:30 compute-0 nova_compute[239965]: 2026-01-26 15:50:30.971 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:50:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1381903240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.004 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.011 239969 DEBUG nova.compute.provider_tree [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.041 239969 DEBUG nova.scheduler.client.report [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.076 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.077 239969 DEBUG nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.143 239969 DEBUG nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.144 239969 DEBUG nova.network.neutron [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.168 239969 INFO nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:50:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1582641735' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1381903240' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.198 239969 DEBUG nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.313 239969 DEBUG nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.314 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.315 239969 INFO nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Creating image(s)
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.337 239969 DEBUG nova.storage.rbd_utils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.360 239969 DEBUG nova.storage.rbd_utils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.381 239969 DEBUG nova.storage.rbd_utils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.385 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.448 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.449 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.449 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.449 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.470 239969 DEBUG nova.storage.rbd_utils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.474 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.499 239969 INFO nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Creating config drive at /var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8/disk.config
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.505 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc8mkjv4h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 260 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 217 op/s
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.635 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc8mkjv4h" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.694 239969 DEBUG nova.storage.rbd_utils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.704 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8/disk.config 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.746 239969 DEBUG nova.network.neutron [req-10444692-a6af-4bce-82af-bed215a3860d req-76c74900-dc45-4337-b0f3-6b9b98b619cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Updated VIF entry in instance network info cache for port 509bd168-4b1c-48dd-9b84-73e955518fdc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.746 239969 DEBUG nova.network.neutron [req-10444692-a6af-4bce-82af-bed215a3860d req-76c74900-dc45-4337-b0f3-6b9b98b619cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Updating instance_info_cache with network_info: [{"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.752 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.781 239969 DEBUG nova.policy [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b47ce75429ed4cd1ba21c0d50c2e552d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.783 239969 DEBUG oslo_concurrency.lockutils [req-10444692-a6af-4bce-82af-bed215a3860d req-76c74900-dc45-4337-b0f3-6b9b98b619cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-8d71fe6f-299d-4a2f-912c-c1f52540bdd8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.824 239969 DEBUG nova.storage.rbd_utils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] resizing rbd image dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.848 239969 DEBUG oslo_concurrency.processutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8/disk.config 8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.849 239969 INFO nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Deleting local config drive /var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8/disk.config because it was imported into RBD.
Jan 26 15:50:31 compute-0 kernel: tap509bd168-4b: entered promiscuous mode
Jan 26 15:50:31 compute-0 NetworkManager[48954]: <info>  [1769442631.9041] manager: (tap509bd168-4b): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Jan 26 15:50:31 compute-0 ovn_controller[146046]: 2026-01-26T15:50:31Z|00075|binding|INFO|Claiming lport 509bd168-4b1c-48dd-9b84-73e955518fdc for this chassis.
Jan 26 15:50:31 compute-0 ovn_controller[146046]: 2026-01-26T15:50:31Z|00076|binding|INFO|509bd168-4b1c-48dd-9b84-73e955518fdc: Claiming fa:16:3e:20:e8:b4 10.100.0.4
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.925 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:e8:b4 10.100.0.4'], port_security=['fa:16:3e:20:e8:b4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8d71fe6f-299d-4a2f-912c-c1f52540bdd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=509bd168-4b1c-48dd-9b84-73e955518fdc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.926 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 509bd168-4b1c-48dd-9b84-73e955518fdc in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 bound to our chassis
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.929 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:50:31 compute-0 ovn_controller[146046]: 2026-01-26T15:50:31Z|00077|binding|INFO|Setting lport 509bd168-4b1c-48dd-9b84-73e955518fdc ovn-installed in OVS
Jan 26 15:50:31 compute-0 ovn_controller[146046]: 2026-01-26T15:50:31Z|00078|binding|INFO|Setting lport 509bd168-4b1c-48dd-9b84-73e955518fdc up in Southbound
Jan 26 15:50:31 compute-0 systemd-udevd[257196]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.946 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1071b239-43e0-452a-aa5a-6a34a9b65925]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.947 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5e237c1-a1 in ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.947 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.949 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5e237c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.949 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[07b1f247-fe52-46b7-a32a-bc717c123f59]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:31 compute-0 systemd-machined[208061]: New machine qemu-19-instance-00000011.
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.950 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d379ed59-ab2c-4c08-8959-242e3a679bb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:31 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000011.
Jan 26 15:50:31 compute-0 NetworkManager[48954]: <info>  [1769442631.9633] device (tap509bd168-4b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:50:31 compute-0 NetworkManager[48954]: <info>  [1769442631.9640] device (tap509bd168-4b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.962 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[911753c7-787d-4f6f-a655-72e9680ff97a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.970 239969 DEBUG nova.objects.instance [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'migration_context' on Instance uuid dcf4bd4f-ec23-4c13-b04d-b696bf93f350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:31.979 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b416f821-3381-4beb-9094-92974c260d26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.996 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.996 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Ensure instance console log exists: /var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.996 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.997 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:31 compute-0 nova_compute[239965]: 2026-01-26 15:50:31.997 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.016 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a0472254-4edd-4a18-b83b-e7bbd35eadcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 systemd-udevd[257199]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:50:32 compute-0 NetworkManager[48954]: <info>  [1769442632.0225] manager: (tapf5e237c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.024 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eb270565-153b-4a98-b7b7-a08ca5b0406a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.056 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[87c5e387-cff3-4ef9-9f6b-a5066498412a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.059 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ca57a0d0-493c-45d2-8c8f-99fc8f9fb35f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 NetworkManager[48954]: <info>  [1769442632.0834] device (tapf5e237c1-a0): carrier: link connected
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.090 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1086ca61-1c87-4e38-9d06-700b80f13a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.111 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e81e8aa-cf88-472e-afce-0f207b97d495]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412288, 'reachable_time': 39143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257228, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.128 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[039a9f10-9faa-4493-98a7-b23154607baa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:eb6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 412288, 'tstamp': 412288}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257229, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.146 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d6154ef4-3d15-4961-871d-ca8462acaf1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412288, 'reachable_time': 39143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257230, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 ceph-mon[75140]: pgmap v1050: 305 pgs: 305 active+clean; 260 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.7 MiB/s wr, 217 op/s
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.184 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[71247ac1-2289-4897-9fc5-247ec57be81f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.242 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0f492f-12cf-42f0-8d61-a396cbc02373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.243 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.244 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.244 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5e237c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:32 compute-0 NetworkManager[48954]: <info>  [1769442632.2464] manager: (tapf5e237c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 26 15:50:32 compute-0 kernel: tapf5e237c1-a0: entered promiscuous mode
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.246 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.249 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5e237c1-a0, col_values=(('external_ids', {'iface-id': 'fd0478e8-96d8-4fbd-8a9a-fe78757277ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.250 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:32 compute-0 ovn_controller[146046]: 2026-01-26T15:50:32Z|00079|binding|INFO|Releasing lport fd0478e8-96d8-4fbd-8a9a-fe78757277ca from this chassis (sb_readonly=0)
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.268 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.270 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.271 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a3312b9e-c0d0-4a3c-ab5e-18366121b4a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.272 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:50:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:32.272 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'env', 'PROCESS_TAG=haproxy-f5e237c1-a75f-479a-88ee-c0f788914b11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5e237c1-a75f-479a-88ee-c0f788914b11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:50:32 compute-0 podman[257262]: 2026-01-26 15:50:32.710511971 +0000 UTC m=+0.055407897 container create 36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:50:32 compute-0 systemd[1]: Started libpod-conmon-36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba.scope.
Jan 26 15:50:32 compute-0 podman[257262]: 2026-01-26 15:50:32.682229229 +0000 UTC m=+0.027125185 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:50:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:50:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06706a9596633dd844b58e4d67742db76b7795acc84a1afb7d6a5ca912d889f9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:32 compute-0 podman[257262]: 2026-01-26 15:50:32.815055891 +0000 UTC m=+0.159951847 container init 36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:50:32 compute-0 podman[257262]: 2026-01-26 15:50:32.821593592 +0000 UTC m=+0.166489538 container start 36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:50:32 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[257311]: [NOTICE]   (257322) : New worker (257324) forked
Jan 26 15:50:32 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[257311]: [NOTICE]   (257322) : Loading success.
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.853 239969 DEBUG nova.compute.manager [req-01af78bc-9ec8-42cb-ae0d-0672f12c0ddd req-6a3a5891-0b0d-4087-80b6-9febb044ea1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Received event network-vif-plugged-509bd168-4b1c-48dd-9b84-73e955518fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.853 239969 DEBUG oslo_concurrency.lockutils [req-01af78bc-9ec8-42cb-ae0d-0672f12c0ddd req-6a3a5891-0b0d-4087-80b6-9febb044ea1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.854 239969 DEBUG oslo_concurrency.lockutils [req-01af78bc-9ec8-42cb-ae0d-0672f12c0ddd req-6a3a5891-0b0d-4087-80b6-9febb044ea1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.854 239969 DEBUG oslo_concurrency.lockutils [req-01af78bc-9ec8-42cb-ae0d-0672f12c0ddd req-6a3a5891-0b0d-4087-80b6-9febb044ea1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.854 239969 DEBUG nova.compute.manager [req-01af78bc-9ec8-42cb-ae0d-0672f12c0ddd req-6a3a5891-0b0d-4087-80b6-9febb044ea1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Processing event network-vif-plugged-509bd168-4b1c-48dd-9b84-73e955518fdc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.864 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.884 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442632.8844872, 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.885 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] VM Started (Lifecycle Event)
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.887 239969 DEBUG nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.893 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.896 239969 INFO nova.virt.libvirt.driver [-] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Instance spawned successfully.
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.897 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.921 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.927 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.933 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.934 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.935 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.935 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.935 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.936 239969 DEBUG nova.virt.libvirt.driver [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.950 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.950 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442632.8855276, 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:32 compute-0 nova_compute[239965]: 2026-01-26 15:50:32.951 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] VM Paused (Lifecycle Event)
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.106 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.109 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442632.8933845, 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.109 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] VM Resumed (Lifecycle Event)
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.151 239969 INFO nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Took 7.43 seconds to spawn the instance on the hypervisor.
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.152 239969 DEBUG nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.153 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.164 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.215 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.257 239969 INFO nova.compute.manager [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Took 8.67 seconds to build instance.
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.275 239969 DEBUG oslo_concurrency.lockutils [None req-1272a911-f0b7-4189-a031-8bd5920e06fd 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.343 239969 DEBUG nova.network.neutron [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updated VIF entry in instance network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.344 239969 DEBUG nova.network.neutron [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updating instance_info_cache with network_info: [{"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.360 239969 DEBUG oslo_concurrency.lockutils [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.361 239969 DEBUG nova.compute.manager [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Received event network-vif-deleted-abdacb47-7e9f-4988-9a24-c617ebb61680 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.363 239969 INFO nova.compute.manager [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Neutron deleted interface abdacb47-7e9f-4988-9a24-c617ebb61680; detaching it from the instance and deleting it from the info cache
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.363 239969 DEBUG nova.network.neutron [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.365 239969 DEBUG nova.network.neutron [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Successfully created port: bba927f8-b9ac-4383-94b1-86a81c2844fd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.389 239969 DEBUG nova.compute.manager [req-b684cbb7-0183-4908-9d02-f9c2ddf87101 req-0848a4ab-f0e1-4854-9465-354397807be9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Detach interface failed, port_id=abdacb47-7e9f-4988-9a24-c617ebb61680, reason: Instance 45865956-34db-4434-8dca-48c8dac9811e could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 15:50:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 294 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.8 MiB/s wr, 228 op/s
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.905 239969 DEBUG nova.compute.manager [req-53007c56-af6e-4d3b-b8c1-59ce6b1d6f25 req-2aa0ae10-ab32-45fd-bcd4-77f15e23f789 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.906 239969 DEBUG nova.compute.manager [req-53007c56-af6e-4d3b-b8c1-59ce6b1d6f25 req-2aa0ae10-ab32-45fd-bcd4-77f15e23f789 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing instance network info cache due to event network-changed-b4633a27-8c35-4f7a-8e72-b7a090e807f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.906 239969 DEBUG oslo_concurrency.lockutils [req-53007c56-af6e-4d3b-b8c1-59ce6b1d6f25 req-2aa0ae10-ab32-45fd-bcd4-77f15e23f789 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.906 239969 DEBUG oslo_concurrency.lockutils [req-53007c56-af6e-4d3b-b8c1-59ce6b1d6f25 req-2aa0ae10-ab32-45fd-bcd4-77f15e23f789 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:33 compute-0 nova_compute[239965]: 2026-01-26 15:50:33.906 239969 DEBUG nova.network.neutron [req-53007c56-af6e-4d3b-b8c1-59ce6b1d6f25 req-2aa0ae10-ab32-45fd-bcd4-77f15e23f789 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Refreshing network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:34 compute-0 ceph-mon[75140]: pgmap v1051: 305 pgs: 305 active+clean; 294 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.8 MiB/s wr, 228 op/s
Jan 26 15:50:34 compute-0 ovn_controller[146046]: 2026-01-26T15:50:34Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:0d:aa 10.100.0.11
Jan 26 15:50:34 compute-0 ovn_controller[146046]: 2026-01-26T15:50:34Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:0d:aa 10.100.0.11
Jan 26 15:50:34 compute-0 nova_compute[239965]: 2026-01-26 15:50:34.800 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:35 compute-0 nova_compute[239965]: 2026-01-26 15:50:35.523 239969 DEBUG nova.network.neutron [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Successfully updated port: bba927f8-b9ac-4383-94b1-86a81c2844fd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:50:35 compute-0 nova_compute[239965]: 2026-01-26 15:50:35.547 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "refresh_cache-dcf4bd4f-ec23-4c13-b04d-b696bf93f350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:35 compute-0 nova_compute[239965]: 2026-01-26 15:50:35.547 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquired lock "refresh_cache-dcf4bd4f-ec23-4c13-b04d-b696bf93f350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:35 compute-0 nova_compute[239965]: 2026-01-26 15:50:35.547 239969 DEBUG nova.network.neutron [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:50:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 294 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Jan 26 15:50:35 compute-0 nova_compute[239965]: 2026-01-26 15:50:35.771 239969 DEBUG nova.network.neutron [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:50:35 compute-0 nova_compute[239965]: 2026-01-26 15:50:35.782 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:35 compute-0 nova_compute[239965]: 2026-01-26 15:50:35.847 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442620.846505, f470fcc4-0f3d-4048-9e21-61e217237d40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:35 compute-0 nova_compute[239965]: 2026-01-26 15:50:35.848 239969 INFO nova.compute.manager [-] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] VM Stopped (Lifecycle Event)
Jan 26 15:50:35 compute-0 nova_compute[239965]: 2026-01-26 15:50:35.912 239969 DEBUG nova.compute.manager [None req-0da775d2-6e05-4916-a0d2-546afc2a5abb - - - - - -] [instance: f470fcc4-0f3d-4048-9e21-61e217237d40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.186 239969 DEBUG nova.network.neutron [req-53007c56-af6e-4d3b-b8c1-59ce6b1d6f25 req-2aa0ae10-ab32-45fd-bcd4-77f15e23f789 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updated VIF entry in instance network info cache for port b4633a27-8c35-4f7a-8e72-b7a090e807f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.186 239969 DEBUG nova.network.neutron [req-53007c56-af6e-4d3b-b8c1-59ce6b1d6f25 req-2aa0ae10-ab32-45fd-bcd4-77f15e23f789 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updating instance_info_cache with network_info: [{"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.260 239969 DEBUG nova.compute.manager [req-291de050-17be-4f52-872d-1349aa1a800b req-f7dbe055-b6ff-48b9-be1a-7516a9c218b8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Received event network-vif-plugged-509bd168-4b1c-48dd-9b84-73e955518fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.260 239969 DEBUG oslo_concurrency.lockutils [req-291de050-17be-4f52-872d-1349aa1a800b req-f7dbe055-b6ff-48b9-be1a-7516a9c218b8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.261 239969 DEBUG oslo_concurrency.lockutils [req-291de050-17be-4f52-872d-1349aa1a800b req-f7dbe055-b6ff-48b9-be1a-7516a9c218b8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.261 239969 DEBUG oslo_concurrency.lockutils [req-291de050-17be-4f52-872d-1349aa1a800b req-f7dbe055-b6ff-48b9-be1a-7516a9c218b8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.261 239969 DEBUG nova.compute.manager [req-291de050-17be-4f52-872d-1349aa1a800b req-f7dbe055-b6ff-48b9-be1a-7516a9c218b8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] No waiting events found dispatching network-vif-plugged-509bd168-4b1c-48dd-9b84-73e955518fdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.261 239969 WARNING nova.compute.manager [req-291de050-17be-4f52-872d-1349aa1a800b req-f7dbe055-b6ff-48b9-be1a-7516a9c218b8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Received unexpected event network-vif-plugged-509bd168-4b1c-48dd-9b84-73e955518fdc for instance with vm_state active and task_state None.
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.272 239969 DEBUG oslo_concurrency.lockutils [req-53007c56-af6e-4d3b-b8c1-59ce6b1d6f25 req-2aa0ae10-ab32-45fd-bcd4-77f15e23f789 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3a0641d5-a02c-4801-ab1d-4f097cd3f431" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.437 239969 DEBUG oslo_concurrency.lockutils [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.440 239969 DEBUG oslo_concurrency.lockutils [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.441 239969 DEBUG nova.compute.manager [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.446 239969 DEBUG nova.compute.manager [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.447 239969 DEBUG nova.objects.instance [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'flavor' on Instance uuid 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:36 compute-0 nova_compute[239965]: 2026-01-26 15:50:36.483 239969 DEBUG nova.virt.libvirt.driver [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 15:50:36 compute-0 ceph-mon[75140]: pgmap v1052: 305 pgs: 305 active+clean; 294 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.6 MiB/s wr, 216 op/s
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.405 239969 DEBUG nova.network.neutron [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Updating instance_info_cache with network_info: [{"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.431 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Releasing lock "refresh_cache-dcf4bd4f-ec23-4c13-b04d-b696bf93f350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.431 239969 DEBUG nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Instance network_info: |[{"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.433 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Start _get_guest_xml network_info=[{"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.452 239969 WARNING nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.457 239969 DEBUG nova.virt.libvirt.host [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.458 239969 DEBUG nova.virt.libvirt.host [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.461 239969 DEBUG nova.virt.libvirt.host [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.461 239969 DEBUG nova.virt.libvirt.host [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.461 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.462 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.462 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.462 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.463 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.463 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.463 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.464 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.464 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.464 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.464 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.465 239969 DEBUG nova.virt.hardware [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:50:37 compute-0 nova_compute[239965]: 2026-01-26 15:50:37.470 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1053: 305 pgs: 305 active+clean; 337 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.7 MiB/s wr, 223 op/s
Jan 26 15:50:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3533468732' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:38 compute-0 nova_compute[239965]: 2026-01-26 15:50:38.081 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:38 compute-0 nova_compute[239965]: 2026-01-26 15:50:38.104 239969 DEBUG nova.storage.rbd_utils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:38 compute-0 nova_compute[239965]: 2026-01-26 15:50:38.108 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:50:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3215271051' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:38 compute-0 ceph-mon[75140]: pgmap v1053: 305 pgs: 305 active+clean; 337 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 6.7 MiB/s wr, 223 op/s
Jan 26 15:50:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3533468732' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3215271051' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:50:38 compute-0 nova_compute[239965]: 2026-01-26 15:50:38.755 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:38 compute-0 nova_compute[239965]: 2026-01-26 15:50:38.757 239969 DEBUG nova.virt.libvirt.vif [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-109127296',display_name='tempest-ServersAdminTestJSON-server-109127296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-109127296',id=18,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-vqo9e7of',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:31Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=dcf4bd4f-ec23-4c13-b04d-b696bf93f350,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:50:38 compute-0 nova_compute[239965]: 2026-01-26 15:50:38.757 239969 DEBUG nova.network.os_vif_util [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:38 compute-0 nova_compute[239965]: 2026-01-26 15:50:38.758 239969 DEBUG nova.network.os_vif_util [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:ab:15,bridge_name='br-int',has_traffic_filtering=True,id=bba927f8-b9ac-4383-94b1-86a81c2844fd,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbba927f8-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:38 compute-0 nova_compute[239965]: 2026-01-26 15:50:38.759 239969 DEBUG nova.objects.instance [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'pci_devices' on Instance uuid dcf4bd4f-ec23-4c13-b04d-b696bf93f350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:39 compute-0 ovn_controller[146046]: 2026-01-26T15:50:39Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d9:1a:6e 10.100.0.3
Jan 26 15:50:39 compute-0 ovn_controller[146046]: 2026-01-26T15:50:39Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d9:1a:6e 10.100.0.3
Jan 26 15:50:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 361 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.3 MiB/s wr, 256 op/s
Jan 26 15:50:39 compute-0 nova_compute[239965]: 2026-01-26 15:50:39.803 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.188 239969 DEBUG nova.compute.manager [req-b0f8124f-8695-4494-b790-183ff55245c5 req-29a4aa8e-1dbf-4e20-b223-3d3fb94354e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Received event network-changed-bba927f8-b9ac-4383-94b1-86a81c2844fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.188 239969 DEBUG nova.compute.manager [req-b0f8124f-8695-4494-b790-183ff55245c5 req-29a4aa8e-1dbf-4e20-b223-3d3fb94354e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Refreshing instance network info cache due to event network-changed-bba927f8-b9ac-4383-94b1-86a81c2844fd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.189 239969 DEBUG oslo_concurrency.lockutils [req-b0f8124f-8695-4494-b790-183ff55245c5 req-29a4aa8e-1dbf-4e20-b223-3d3fb94354e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-dcf4bd4f-ec23-4c13-b04d-b696bf93f350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.189 239969 DEBUG oslo_concurrency.lockutils [req-b0f8124f-8695-4494-b790-183ff55245c5 req-29a4aa8e-1dbf-4e20-b223-3d3fb94354e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-dcf4bd4f-ec23-4c13-b04d-b696bf93f350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.189 239969 DEBUG nova.network.neutron [req-b0f8124f-8695-4494-b790-183ff55245c5 req-29a4aa8e-1dbf-4e20-b223-3d3fb94354e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Refreshing network info cache for port bba927f8-b9ac-4383-94b1-86a81c2844fd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.203 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <uuid>dcf4bd4f-ec23-4c13-b04d-b696bf93f350</uuid>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <name>instance-00000012</name>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdminTestJSON-server-109127296</nova:name>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:50:37</nova:creationTime>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <nova:user uuid="b47ce75429ed4cd1ba21c0d50c2e552d">tempest-ServersAdminTestJSON-863857415-project-member</nova:user>
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <nova:project uuid="b3195eedf4a34fabaf019faaaad7eb71">tempest-ServersAdminTestJSON-863857415</nova:project>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <nova:port uuid="bba927f8-b9ac-4383-94b1-86a81c2844fd">
Jan 26 15:50:40 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <system>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <entry name="serial">dcf4bd4f-ec23-4c13-b04d-b696bf93f350</entry>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <entry name="uuid">dcf4bd4f-ec23-4c13-b04d-b696bf93f350</entry>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     </system>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <os>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   </os>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <features>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   </features>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk">
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk.config">
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       </source>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:50:40 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:cd:ab:15"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <target dev="tapbba927f8-b9"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350/console.log" append="off"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <video>
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     </video>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:50:40 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:50:40 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:50:40 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:50:40 compute-0 nova_compute[239965]: </domain>
Jan 26 15:50:40 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.205 239969 DEBUG nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Preparing to wait for external event network-vif-plugged-bba927f8-b9ac-4383-94b1-86a81c2844fd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.205 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.205 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.205 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.206 239969 DEBUG nova.virt.libvirt.vif [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-109127296',display_name='tempest-ServersAdminTestJSON-server-109127296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-109127296',id=18,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-vqo9e7of',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON
-863857415-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:31Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=dcf4bd4f-ec23-4c13-b04d-b696bf93f350,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.207 239969 DEBUG nova.network.os_vif_util [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.207 239969 DEBUG nova.network.os_vif_util [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:ab:15,bridge_name='br-int',has_traffic_filtering=True,id=bba927f8-b9ac-4383-94b1-86a81c2844fd,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbba927f8-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.208 239969 DEBUG os_vif [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:ab:15,bridge_name='br-int',has_traffic_filtering=True,id=bba927f8-b9ac-4383-94b1-86a81c2844fd,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbba927f8-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.209 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.209 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.209 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.215 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.215 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbba927f8-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.215 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbba927f8-b9, col_values=(('external_ids', {'iface-id': 'bba927f8-b9ac-4383-94b1-86a81c2844fd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:ab:15', 'vm-uuid': 'dcf4bd4f-ec23-4c13-b04d-b696bf93f350'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:40 compute-0 NetworkManager[48954]: <info>  [1769442640.2405] manager: (tapbba927f8-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.244 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.248 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.249 239969 INFO os_vif [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:ab:15,bridge_name='br-int',has_traffic_filtering=True,id=bba927f8-b9ac-4383-94b1-86a81c2844fd,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbba927f8-b9')
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.873 239969 DEBUG oslo_concurrency.processutils [None req-6c76b17d-bd35-4b2d-b7db-9159cac491fc e38009ca2c0843499bdf809289156bf8 d34eba1fc9bf4c74b14eded842a3c1dd - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.907 239969 DEBUG oslo_concurrency.processutils [None req-6c76b17d-bd35-4b2d-b7db-9159cac491fc e38009ca2c0843499bdf809289156bf8 d34eba1fc9bf4c74b14eded842a3c1dd - - default default] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:40 compute-0 ceph-mon[75140]: pgmap v1054: 305 pgs: 305 active+clean; 361 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 7.3 MiB/s wr, 256 op/s
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.917 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.918 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.919 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No VIF found with MAC fa:16:3e:cd:ab:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.919 239969 INFO nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Using config drive
Jan 26 15:50:40 compute-0 nova_compute[239965]: 2026-01-26 15:50:40.946 239969 DEBUG nova.storage.rbd_utils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.167 239969 DEBUG oslo_concurrency.lockutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.168 239969 DEBUG oslo_concurrency.lockutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.168 239969 DEBUG oslo_concurrency.lockutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.169 239969 DEBUG oslo_concurrency.lockutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.169 239969 DEBUG oslo_concurrency.lockutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.170 239969 INFO nova.compute.manager [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Terminating instance
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.171 239969 DEBUG nova.compute.manager [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:50:41 compute-0 kernel: tapb4633a27-8c (unregistering): left promiscuous mode
Jan 26 15:50:41 compute-0 NetworkManager[48954]: <info>  [1769442641.2095] device (tapb4633a27-8c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:50:41 compute-0 ovn_controller[146046]: 2026-01-26T15:50:41Z|00080|binding|INFO|Releasing lport b4633a27-8c35-4f7a-8e72-b7a090e807f9 from this chassis (sb_readonly=0)
Jan 26 15:50:41 compute-0 ovn_controller[146046]: 2026-01-26T15:50:41Z|00081|binding|INFO|Setting lport b4633a27-8c35-4f7a-8e72-b7a090e807f9 down in Southbound
Jan 26 15:50:41 compute-0 ovn_controller[146046]: 2026-01-26T15:50:41Z|00082|binding|INFO|Removing iface tapb4633a27-8c ovn-installed in OVS
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.258 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:41 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 26 15:50:41 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Consumed 14.889s CPU time.
Jan 26 15:50:41 compute-0 systemd-machined[208061]: Machine qemu-14-instance-0000000c terminated.
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.283 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:d1:74 10.100.0.8'], port_security=['fa:16:3e:05:d1:74 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3a0641d5-a02c-4801-ab1d-4f097cd3f431', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68eabea2-3a2d-4690-b867-0143d33ad099', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c2f3e7dabb6b4b6d8d0bd497c45857fc', 'neutron:revision_number': '4', 'neutron:security_group_ids': '768559e5-eeab-4bfe-90a7-d37c537391d3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=09fd1a35-0741-44b5-aefd-ffc4e1370811, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b4633a27-8c35-4f7a-8e72-b7a090e807f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.284 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b4633a27-8c35-4f7a-8e72-b7a090e807f9 in datapath 68eabea2-3a2d-4690-b867-0143d33ad099 unbound from our chassis
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.286 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 68eabea2-3a2d-4690-b867-0143d33ad099, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.287 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3aaeb601-4893-4468-a2da-6360572aa462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.287 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099 namespace which is not needed anymore
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.294 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442626.293704, 45865956-34db-4434-8dca-48c8dac9811e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.295 239969 INFO nova.compute.manager [-] [instance: 45865956-34db-4434-8dca-48c8dac9811e] VM Stopped (Lifecycle Event)
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.331 239969 DEBUG nova.compute.manager [None req-f8dc66d9-18bd-43aa-8828-e5e8ee8deaef - - - - - -] [instance: 45865956-34db-4434-8dca-48c8dac9811e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.451 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.459 239969 INFO nova.virt.libvirt.driver [-] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Instance destroyed successfully.
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.460 239969 DEBUG nova.objects.instance [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lazy-loading 'resources' on Instance uuid 3a0641d5-a02c-4801-ab1d-4f097cd3f431 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.504 239969 DEBUG nova.virt.libvirt.vif [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:49:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-1789971645',display_name='tempest-FloatingIPsAssociationTestJSON-server-1789971645',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-1789971645',id=12,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:49:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c2f3e7dabb6b4b6d8d0bd497c45857fc',ramdisk_id='',reservation_id='r-pulqeosh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-2042630915',owner_user_name='tempest-FloatingIPsAssociationTestJSON-2042630915-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:49:48Z,user_data=None,user_id='19b1461c4dbc4033a684f6e523867df8',uuid=3a0641d5-a02c-4801-ab1d-4f097cd3f431,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.504 239969 DEBUG nova.network.os_vif_util [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converting VIF {"id": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "address": "fa:16:3e:05:d1:74", "network": {"id": "68eabea2-3a2d-4690-b867-0143d33ad099", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-363548753-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c2f3e7dabb6b4b6d8d0bd497c45857fc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4633a27-8c", "ovs_interfaceid": "b4633a27-8c35-4f7a-8e72-b7a090e807f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.505 239969 DEBUG nova.network.os_vif_util [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:05:d1:74,bridge_name='br-int',has_traffic_filtering=True,id=b4633a27-8c35-4f7a-8e72-b7a090e807f9,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4633a27-8c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.505 239969 DEBUG os_vif [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:d1:74,bridge_name='br-int',has_traffic_filtering=True,id=b4633a27-8c35-4f7a-8e72-b7a090e807f9,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4633a27-8c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.507 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.507 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4633a27-8c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.510 239969 INFO nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Creating config drive at /var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350/disk.config
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.516 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83efobo1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.535 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.538 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.541 239969 INFO os_vif [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:05:d1:74,bridge_name='br-int',has_traffic_filtering=True,id=b4633a27-8c35-4f7a-8e72-b7a090e807f9,network=Network(68eabea2-3a2d-4690-b867-0143d33ad099),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4633a27-8c')
Jan 26 15:50:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 370 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.5 MiB/s wr, 236 op/s
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.644 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83efobo1" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.760 239969 DEBUG nova.storage.rbd_utils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:41 compute-0 neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099[254513]: [NOTICE]   (254517) : haproxy version is 2.8.14-c23fe91
Jan 26 15:50:41 compute-0 neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099[254513]: [NOTICE]   (254517) : path to executable is /usr/sbin/haproxy
Jan 26 15:50:41 compute-0 neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099[254513]: [WARNING]  (254517) : Exiting Master process...
Jan 26 15:50:41 compute-0 neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099[254513]: [WARNING]  (254517) : Exiting Master process...
Jan 26 15:50:41 compute-0 neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099[254513]: [ALERT]    (254517) : Current worker (254519) exited with code 143 (Terminated)
Jan 26 15:50:41 compute-0 neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099[254513]: [WARNING]  (254517) : All workers exited. Exiting... (0)
Jan 26 15:50:41 compute-0 systemd[1]: libpod-775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178.scope: Deactivated successfully.
Jan 26 15:50:41 compute-0 podman[257442]: 2026-01-26 15:50:41.772802113 +0000 UTC m=+0.391270803 container died 775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.771 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350/disk.config dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:41 compute-0 sudo[257490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:50:41 compute-0 sudo[257490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:50:41 compute-0 sudo[257490]: pam_unix(sudo:session): session closed for user root
Jan 26 15:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178-userdata-shm.mount: Deactivated successfully.
Jan 26 15:50:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f82a721cbb5c42ce2ca0264815576177600902595936c5eedbb52058b5a714d-merged.mount: Deactivated successfully.
Jan 26 15:50:41 compute-0 podman[257442]: 2026-01-26 15:50:41.82450324 +0000 UTC m=+0.442971930 container cleanup 775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 15:50:41 compute-0 systemd[1]: libpod-conmon-775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178.scope: Deactivated successfully.
Jan 26 15:50:41 compute-0 sudo[257545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:50:41 compute-0 sudo[257545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:50:41 compute-0 podman[257574]: 2026-01-26 15:50:41.908368693 +0000 UTC m=+0.055240064 container remove 775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.922 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bad787-71d6-49b7-80a3-5364cd36a3b3]: (4, ('Mon Jan 26 03:50:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099 (775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178)\n775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178\nMon Jan 26 03:50:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099 (775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178)\n775c1ac83aa211c8037cb29af527bc5430bb8753ffa1f5f651e199bdf95b7178\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.923 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3f37a5-be50-4b14-be4a-00ab01917803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.924 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68eabea2-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:41 compute-0 kernel: tap68eabea2-30: left promiscuous mode
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.940 239969 DEBUG oslo_concurrency.processutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350/disk.config dcf4bd4f-ec23-4c13-b04d-b696bf93f350_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.941 239969 INFO nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Deleting local config drive /var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350/disk.config because it was imported into RBD.
Jan 26 15:50:41 compute-0 nova_compute[239965]: 2026-01-26 15:50:41.954 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.956 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[799487cf-47a4-492c-8a1d-86f72b2bd57d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.968 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcdc6e3-4542-4636-8301-6a2e67806cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.969 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[54a0b2c7-5891-40ac-b66b-1045e98f3433]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.988 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[42cfd2a0-ab2f-4ec1-969f-a957a6009936]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 407856, 'reachable_time': 21229, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257612, 'error': None, 'target': 'ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.990 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-68eabea2-3a2d-4690-b867-0143d33ad099 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:50:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:41.991 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[684860f2-8675-4793-8aa8-604f663af0ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d68eabea2\x2d3a2d\x2d4690\x2db867\x2d0143d33ad099.mount: Deactivated successfully.
Jan 26 15:50:42 compute-0 systemd-udevd[257422]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:50:42 compute-0 kernel: tapbba927f8-b9: entered promiscuous mode
Jan 26 15:50:42 compute-0 NetworkManager[48954]: <info>  [1769442642.0072] manager: (tapbba927f8-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Jan 26 15:50:42 compute-0 ovn_controller[146046]: 2026-01-26T15:50:42Z|00083|binding|INFO|Claiming lport bba927f8-b9ac-4383-94b1-86a81c2844fd for this chassis.
Jan 26 15:50:42 compute-0 ovn_controller[146046]: 2026-01-26T15:50:42Z|00084|binding|INFO|bba927f8-b9ac-4383-94b1-86a81c2844fd: Claiming fa:16:3e:cd:ab:15 10.100.0.7
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.014 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:ab:15 10.100.0.7'], port_security=['fa:16:3e:cd:ab:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcf4bd4f-ec23-4c13-b04d-b696bf93f350', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=bba927f8-b9ac-4383-94b1-86a81c2844fd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.015 156105 INFO neutron.agent.ovn.metadata.agent [-] Port bba927f8-b9ac-4383-94b1-86a81c2844fd in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc bound to our chassis
Jan 26 15:50:42 compute-0 NetworkManager[48954]: <info>  [1769442642.0187] device (tapbba927f8-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:50:42 compute-0 NetworkManager[48954]: <info>  [1769442642.0192] device (tapbba927f8-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.019 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:50:42 compute-0 ovn_controller[146046]: 2026-01-26T15:50:42Z|00085|binding|INFO|Setting lport bba927f8-b9ac-4383-94b1-86a81c2844fd ovn-installed in OVS
Jan 26 15:50:42 compute-0 ovn_controller[146046]: 2026-01-26T15:50:42Z|00086|binding|INFO|Setting lport bba927f8-b9ac-4383-94b1-86a81c2844fd up in Southbound
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.035 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1e50e2-ae45-4f5d-8c07-d4d0a14d9bed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.038 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:42 compute-0 systemd-machined[208061]: New machine qemu-20-instance-00000012.
Jan 26 15:50:42 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000012.
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.070 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[40620d85-6bb6-4b46-b0b6-37a100f0425e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.075 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e33ba693-9204-47da-8153-ec045b4b4da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.108 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0aab9b-ca11-49de-b2a9-6c704c9e2ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.123 239969 INFO nova.virt.libvirt.driver [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Deleting instance files /var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431_del
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.124 239969 INFO nova.virt.libvirt.driver [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Deletion of /var/lib/nova/instances/3a0641d5-a02c-4801-ab1d-4f097cd3f431_del complete
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.136 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[54250268-5db1-4346-a8b8-0c80a8b0e468]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 38822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257646, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.151 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8e7ea21c-bf0a-45a0-84c4-b4e0bfbe63ae]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257650, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257650, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.153 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.154 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.155 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.155 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.156 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.156 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:42.156 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.189 239969 INFO nova.compute.manager [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Took 1.02 seconds to destroy the instance on the hypervisor.
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.189 239969 DEBUG oslo.service.loopingcall [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.189 239969 DEBUG nova.compute.manager [-] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.190 239969 DEBUG nova.network.neutron [-] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.335 239969 DEBUG nova.network.neutron [req-b0f8124f-8695-4494-b790-183ff55245c5 req-29a4aa8e-1dbf-4e20-b223-3d3fb94354e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Updated VIF entry in instance network info cache for port bba927f8-b9ac-4383-94b1-86a81c2844fd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.335 239969 DEBUG nova.network.neutron [req-b0f8124f-8695-4494-b790-183ff55245c5 req-29a4aa8e-1dbf-4e20-b223-3d3fb94354e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Updating instance_info_cache with network_info: [{"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:42 compute-0 sudo[257545]: pam_unix(sudo:session): session closed for user root
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.509 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442642.5092423, dcf4bd4f-ec23-4c13-b04d-b696bf93f350 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.510 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] VM Started (Lifecycle Event)
Jan 26 15:50:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:50:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:50:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:50:42 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:50:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:50:42 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:50:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:50:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:50:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:50:42 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:50:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:50:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:50:42 compute-0 sudo[257710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:50:42 compute-0 sudo[257710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:50:42 compute-0 sudo[257710]: pam_unix(sudo:session): session closed for user root
Jan 26 15:50:42 compute-0 sudo[257735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.660 239969 DEBUG oslo_concurrency.lockutils [req-b0f8124f-8695-4494-b790-183ff55245c5 req-29a4aa8e-1dbf-4e20-b223-3d3fb94354e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-dcf4bd4f-ec23-4c13-b04d-b696bf93f350" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.661 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:42 compute-0 sudo[257735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.665 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442642.5093668, dcf4bd4f-ec23-4c13-b04d-b696bf93f350 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.665 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] VM Paused (Lifecycle Event)
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.686 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.690 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.718 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:42 compute-0 ceph-mon[75140]: pgmap v1055: 305 pgs: 305 active+clean; 370 MiB data, 402 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.5 MiB/s wr, 236 op/s
Jan 26 15:50:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:50:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:50:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:50:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:50:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:50:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:50:42 compute-0 podman[257772]: 2026-01-26 15:50:42.932249488 +0000 UTC m=+0.036108416 container create a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.950 239969 DEBUG nova.compute.manager [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-vif-unplugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.950 239969 DEBUG oslo_concurrency.lockutils [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.950 239969 DEBUG oslo_concurrency.lockutils [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.951 239969 DEBUG oslo_concurrency.lockutils [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.951 239969 DEBUG nova.compute.manager [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] No waiting events found dispatching network-vif-unplugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.951 239969 DEBUG nova.compute.manager [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-vif-unplugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.951 239969 DEBUG nova.compute.manager [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-vif-plugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.951 239969 DEBUG oslo_concurrency.lockutils [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.951 239969 DEBUG oslo_concurrency.lockutils [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.951 239969 DEBUG oslo_concurrency.lockutils [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.952 239969 DEBUG nova.compute.manager [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] No waiting events found dispatching network-vif-plugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:42 compute-0 nova_compute[239965]: 2026-01-26 15:50:42.952 239969 WARNING nova.compute.manager [req-5b681fd1-d988-45bd-9026-3b4564f7b421 req-8de3b108-b35a-42e3-a936-1a1631838393 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received unexpected event network-vif-plugged-b4633a27-8c35-4f7a-8e72-b7a090e807f9 for instance with vm_state active and task_state deleting.
Jan 26 15:50:42 compute-0 systemd[1]: Started libpod-conmon-a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2.scope.
Jan 26 15:50:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:50:43 compute-0 podman[257772]: 2026-01-26 15:50:43.009474689 +0000 UTC m=+0.113333647 container init a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 15:50:43 compute-0 podman[257772]: 2026-01-26 15:50:42.91481392 +0000 UTC m=+0.018672868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:50:43 compute-0 podman[257772]: 2026-01-26 15:50:43.015616029 +0000 UTC m=+0.119474957 container start a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 15:50:43 compute-0 podman[257772]: 2026-01-26 15:50:43.019280659 +0000 UTC m=+0.123139587 container attach a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:50:43 compute-0 systemd[1]: libpod-a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2.scope: Deactivated successfully.
Jan 26 15:50:43 compute-0 loving_kirch[257789]: 167 167
Jan 26 15:50:43 compute-0 conmon[257789]: conmon a4669b9dbf61624e92b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2.scope/container/memory.events
Jan 26 15:50:43 compute-0 podman[257772]: 2026-01-26 15:50:43.022047397 +0000 UTC m=+0.125906315 container died a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 26 15:50:43 compute-0 nova_compute[239965]: 2026-01-26 15:50:43.036 239969 DEBUG nova.network.neutron [-] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:50:43 compute-0 nova_compute[239965]: 2026-01-26 15:50:43.054 239969 INFO nova.compute.manager [-] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Took 0.86 seconds to deallocate network for instance.
Jan 26 15:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e8631310d102d7e52ae0b857c9a0c00da8655b0502c499acdab6d97dfaf1d46-merged.mount: Deactivated successfully.
Jan 26 15:50:43 compute-0 podman[257772]: 2026-01-26 15:50:43.077131655 +0000 UTC m=+0.180990583 container remove a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_kirch, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:50:43 compute-0 systemd[1]: libpod-conmon-a4669b9dbf61624e92b0aeb11212f79fb6300892094b39e9d2b7243809da23d2.scope: Deactivated successfully.
Jan 26 15:50:43 compute-0 nova_compute[239965]: 2026-01-26 15:50:43.112 239969 DEBUG nova.compute.manager [req-56c6543d-e65d-4647-8cec-3724f2b1d133 req-a6676208-720d-42b3-a20f-62eb5977f488 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Received event network-vif-deleted-b4633a27-8c35-4f7a-8e72-b7a090e807f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:43 compute-0 nova_compute[239965]: 2026-01-26 15:50:43.144 239969 DEBUG oslo_concurrency.lockutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:43 compute-0 nova_compute[239965]: 2026-01-26 15:50:43.145 239969 DEBUG oslo_concurrency.lockutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:43 compute-0 podman[257812]: 2026-01-26 15:50:43.254197742 +0000 UTC m=+0.035962002 container create bc6b5c54c6b0d363b71494cc83b11ddcb031bc9217fd8086692c17689964992a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:50:43 compute-0 systemd[1]: Started libpod-conmon-bc6b5c54c6b0d363b71494cc83b11ddcb031bc9217fd8086692c17689964992a.scope.
Jan 26 15:50:43 compute-0 nova_compute[239965]: 2026-01-26 15:50:43.293 239969 DEBUG oslo_concurrency.processutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c96edf2e86164bc6ff5a31da9f9a865ddf6f24b3f56ba9866628ec192924b78/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c96edf2e86164bc6ff5a31da9f9a865ddf6f24b3f56ba9866628ec192924b78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c96edf2e86164bc6ff5a31da9f9a865ddf6f24b3f56ba9866628ec192924b78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c96edf2e86164bc6ff5a31da9f9a865ddf6f24b3f56ba9866628ec192924b78/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c96edf2e86164bc6ff5a31da9f9a865ddf6f24b3f56ba9866628ec192924b78/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:43 compute-0 podman[257812]: 2026-01-26 15:50:43.239172004 +0000 UTC m=+0.020936284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:50:43 compute-0 podman[257812]: 2026-01-26 15:50:43.339405749 +0000 UTC m=+0.121170019 container init bc6b5c54c6b0d363b71494cc83b11ddcb031bc9217fd8086692c17689964992a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 15:50:43 compute-0 podman[257812]: 2026-01-26 15:50:43.348327867 +0000 UTC m=+0.130092137 container start bc6b5c54c6b0d363b71494cc83b11ddcb031bc9217fd8086692c17689964992a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:50:43 compute-0 podman[257812]: 2026-01-26 15:50:43.353409322 +0000 UTC m=+0.135173582 container attach bc6b5c54c6b0d363b71494cc83b11ddcb031bc9217fd8086692c17689964992a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:50:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 319 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 241 op/s
Jan 26 15:50:43 compute-0 fervent_tu[257828]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:50:43 compute-0 fervent_tu[257828]: --> All data devices are unavailable
Jan 26 15:50:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1000676316' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:43 compute-0 systemd[1]: libpod-bc6b5c54c6b0d363b71494cc83b11ddcb031bc9217fd8086692c17689964992a.scope: Deactivated successfully.
Jan 26 15:50:43 compute-0 podman[257812]: 2026-01-26 15:50:43.838494651 +0000 UTC m=+0.620258921 container died bc6b5c54c6b0d363b71494cc83b11ddcb031bc9217fd8086692c17689964992a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 15:50:43 compute-0 nova_compute[239965]: 2026-01-26 15:50:43.852 239969 DEBUG oslo_concurrency.processutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:43 compute-0 nova_compute[239965]: 2026-01-26 15:50:43.858 239969 DEBUG nova.compute.provider_tree [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:50:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c96edf2e86164bc6ff5a31da9f9a865ddf6f24b3f56ba9866628ec192924b78-merged.mount: Deactivated successfully.
Jan 26 15:50:43 compute-0 podman[257812]: 2026-01-26 15:50:43.891596682 +0000 UTC m=+0.673360942 container remove bc6b5c54c6b0d363b71494cc83b11ddcb031bc9217fd8086692c17689964992a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_tu, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 15:50:43 compute-0 systemd[1]: libpod-conmon-bc6b5c54c6b0d363b71494cc83b11ddcb031bc9217fd8086692c17689964992a.scope: Deactivated successfully.
Jan 26 15:50:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1000676316' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:43 compute-0 sudo[257735]: pam_unix(sudo:session): session closed for user root
Jan 26 15:50:43 compute-0 sudo[257882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:50:43 compute-0 sudo[257882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:50:44 compute-0 sudo[257882]: pam_unix(sudo:session): session closed for user root
Jan 26 15:50:44 compute-0 nova_compute[239965]: 2026-01-26 15:50:44.048 239969 DEBUG nova.scheduler.client.report [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:50:44 compute-0 sudo[257907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:50:44 compute-0 sudo[257907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:50:44 compute-0 nova_compute[239965]: 2026-01-26 15:50:44.076 239969 DEBUG oslo_concurrency.lockutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:44 compute-0 nova_compute[239965]: 2026-01-26 15:50:44.098 239969 INFO nova.scheduler.client.report [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Deleted allocations for instance 3a0641d5-a02c-4801-ab1d-4f097cd3f431
Jan 26 15:50:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:44 compute-0 nova_compute[239965]: 2026-01-26 15:50:44.303 239969 DEBUG oslo_concurrency.lockutils [None req-30502ece-7087-4692-862f-a6438c7ed43f 19b1461c4dbc4033a684f6e523867df8 c2f3e7dabb6b4b6d8d0bd497c45857fc - - default default] Lock "3a0641d5-a02c-4801-ab1d-4f097cd3f431" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:44 compute-0 podman[257943]: 2026-01-26 15:50:44.356711612 +0000 UTC m=+0.045873995 container create 373a8fe31f183ec03335331b44c093b32c19f2f7a6228516df90150425b17e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 15:50:44 compute-0 systemd[1]: Started libpod-conmon-373a8fe31f183ec03335331b44c093b32c19f2f7a6228516df90150425b17e60.scope.
Jan 26 15:50:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:50:44 compute-0 podman[257943]: 2026-01-26 15:50:44.338713611 +0000 UTC m=+0.027876024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:50:44 compute-0 podman[257943]: 2026-01-26 15:50:44.441443747 +0000 UTC m=+0.130606161 container init 373a8fe31f183ec03335331b44c093b32c19f2f7a6228516df90150425b17e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 15:50:44 compute-0 podman[257943]: 2026-01-26 15:50:44.448674364 +0000 UTC m=+0.137836747 container start 373a8fe31f183ec03335331b44c093b32c19f2f7a6228516df90150425b17e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 15:50:44 compute-0 podman[257943]: 2026-01-26 15:50:44.453107583 +0000 UTC m=+0.142269966 container attach 373a8fe31f183ec03335331b44c093b32c19f2f7a6228516df90150425b17e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 15:50:44 compute-0 gallant_northcutt[257960]: 167 167
Jan 26 15:50:44 compute-0 systemd[1]: libpod-373a8fe31f183ec03335331b44c093b32c19f2f7a6228516df90150425b17e60.scope: Deactivated successfully.
Jan 26 15:50:44 compute-0 podman[257943]: 2026-01-26 15:50:44.456176588 +0000 UTC m=+0.145338991 container died 373a8fe31f183ec03335331b44c093b32c19f2f7a6228516df90150425b17e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 15:50:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-043aabaa396828eb123c10883916c0457f76f5267bfd65d02e2f94f8b09c5d95-merged.mount: Deactivated successfully.
Jan 26 15:50:44 compute-0 podman[257943]: 2026-01-26 15:50:44.497684565 +0000 UTC m=+0.186846948 container remove 373a8fe31f183ec03335331b44c093b32c19f2f7a6228516df90150425b17e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_northcutt, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 15:50:44 compute-0 systemd[1]: libpod-conmon-373a8fe31f183ec03335331b44c093b32c19f2f7a6228516df90150425b17e60.scope: Deactivated successfully.
Jan 26 15:50:44 compute-0 podman[257983]: 2026-01-26 15:50:44.699055776 +0000 UTC m=+0.047825543 container create c8407ca87689102ab509f491c34ec962e23ddc1bc4fef6aa48da1a6c59c214dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 26 15:50:44 compute-0 systemd[1]: Started libpod-conmon-c8407ca87689102ab509f491c34ec962e23ddc1bc4fef6aa48da1a6c59c214dc.scope.
Jan 26 15:50:44 compute-0 podman[257983]: 2026-01-26 15:50:44.674784721 +0000 UTC m=+0.023554518 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:50:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87cbb2be8e9de2526f39705b4339ae72e1def823d826a8ef3c375aab06c99553/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87cbb2be8e9de2526f39705b4339ae72e1def823d826a8ef3c375aab06c99553/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87cbb2be8e9de2526f39705b4339ae72e1def823d826a8ef3c375aab06c99553/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87cbb2be8e9de2526f39705b4339ae72e1def823d826a8ef3c375aab06c99553/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:44 compute-0 nova_compute[239965]: 2026-01-26 15:50:44.805 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:44 compute-0 podman[257983]: 2026-01-26 15:50:44.812681209 +0000 UTC m=+0.161450986 container init c8407ca87689102ab509f491c34ec962e23ddc1bc4fef6aa48da1a6c59c214dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 15:50:44 compute-0 podman[257983]: 2026-01-26 15:50:44.819789333 +0000 UTC m=+0.168559110 container start c8407ca87689102ab509f491c34ec962e23ddc1bc4fef6aa48da1a6c59c214dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 15:50:44 compute-0 podman[257983]: 2026-01-26 15:50:44.827267206 +0000 UTC m=+0.176037003 container attach c8407ca87689102ab509f491c34ec962e23ddc1bc4fef6aa48da1a6c59c214dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 15:50:44 compute-0 ceph-mon[75140]: pgmap v1056: 305 pgs: 305 active+clean; 319 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 241 op/s
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.055 239969 DEBUG nova.compute.manager [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Received event network-vif-plugged-bba927f8-b9ac-4383-94b1-86a81c2844fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.055 239969 DEBUG oslo_concurrency.lockutils [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.056 239969 DEBUG oslo_concurrency.lockutils [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.056 239969 DEBUG oslo_concurrency.lockutils [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.056 239969 DEBUG nova.compute.manager [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Processing event network-vif-plugged-bba927f8-b9ac-4383-94b1-86a81c2844fd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.056 239969 DEBUG nova.compute.manager [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Received event network-vif-plugged-bba927f8-b9ac-4383-94b1-86a81c2844fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.057 239969 DEBUG oslo_concurrency.lockutils [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.057 239969 DEBUG oslo_concurrency.lockutils [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.057 239969 DEBUG oslo_concurrency.lockutils [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.057 239969 DEBUG nova.compute.manager [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] No waiting events found dispatching network-vif-plugged-bba927f8-b9ac-4383-94b1-86a81c2844fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.057 239969 WARNING nova.compute.manager [req-631fa126-c606-4915-9a20-5a7d890378e3 req-a5ec265c-6cae-4719-949f-c83ef32332fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Received unexpected event network-vif-plugged-bba927f8-b9ac-4383-94b1-86a81c2844fd for instance with vm_state building and task_state spawning.
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.058 239969 DEBUG nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.063 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.064 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442645.0641928, dcf4bd4f-ec23-4c13-b04d-b696bf93f350 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.064 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] VM Resumed (Lifecycle Event)
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.069 239969 INFO nova.virt.libvirt.driver [-] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Instance spawned successfully.
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.070 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.104 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.107 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.119 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]: {
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.120 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:     "0": [
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:         {
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "devices": [
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "/dev/loop3"
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             ],
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_name": "ceph_lv0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_size": "21470642176",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "name": "ceph_lv0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "tags": {
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.cluster_name": "ceph",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.crush_device_class": "",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.encrypted": "0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.objectstore": "bluestore",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.osd_id": "0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.type": "block",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.vdo": "0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.with_tpm": "0"
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             },
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "type": "block",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "vg_name": "ceph_vg0"
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:         }
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:     ],
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:     "1": [
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:         {
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "devices": [
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "/dev/loop4"
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             ],
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_name": "ceph_lv1",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_size": "21470642176",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "name": "ceph_lv1",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "tags": {
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.120 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.cluster_name": "ceph",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.crush_device_class": "",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.encrypted": "0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.objectstore": "bluestore",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.osd_id": "1",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.type": "block",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.vdo": "0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.with_tpm": "0"
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             },
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "type": "block",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "vg_name": "ceph_vg1"
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:         }
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:     ],
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:     "2": [
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:         {
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "devices": [
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "/dev/loop5"
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             ],
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_name": "ceph_lv2",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_size": "21470642176",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "name": "ceph_lv2",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "tags": {
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.cluster_name": "ceph",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.crush_device_class": "",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.encrypted": "0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.objectstore": "bluestore",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.120 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.osd_id": "2",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.type": "block",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.vdo": "0",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:                 "ceph.with_tpm": "0"
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             },
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "type": "block",
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:             "vg_name": "ceph_vg2"
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:         }
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]:     ]
Jan 26 15:50:45 compute-0 hopeful_engelbart[258000]: }
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.120 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.121 239969 DEBUG nova.virt.libvirt.driver [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.127 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:50:45 compute-0 systemd[1]: libpod-c8407ca87689102ab509f491c34ec962e23ddc1bc4fef6aa48da1a6c59c214dc.scope: Deactivated successfully.
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.178 239969 INFO nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Took 13.86 seconds to spawn the instance on the hypervisor.
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.179 239969 DEBUG nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:45 compute-0 podman[258009]: 2026-01-26 15:50:45.201805148 +0000 UTC m=+0.023792933 container died c8407ca87689102ab509f491c34ec962e23ddc1bc4fef6aa48da1a6c59c214dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-87cbb2be8e9de2526f39705b4339ae72e1def823d826a8ef3c375aab06c99553-merged.mount: Deactivated successfully.
Jan 26 15:50:45 compute-0 podman[258009]: 2026-01-26 15:50:45.249406604 +0000 UTC m=+0.071394389 container remove c8407ca87689102ab509f491c34ec962e23ddc1bc4fef6aa48da1a6c59c214dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_engelbart, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 15:50:45 compute-0 systemd[1]: libpod-conmon-c8407ca87689102ab509f491c34ec962e23ddc1bc4fef6aa48da1a6c59c214dc.scope: Deactivated successfully.
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.253 239969 INFO nova.compute.manager [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Took 15.07 seconds to build instance.
Jan 26 15:50:45 compute-0 nova_compute[239965]: 2026-01-26 15:50:45.275 239969 DEBUG oslo_concurrency.lockutils [None req-b358ff4a-6df1-474e-8aab-3060a9ce2a9e b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.193s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:45 compute-0 sudo[257907]: pam_unix(sudo:session): session closed for user root
Jan 26 15:50:45 compute-0 sudo[258024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:50:45 compute-0 sudo[258024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:50:45 compute-0 sudo[258024]: pam_unix(sudo:session): session closed for user root
Jan 26 15:50:45 compute-0 sudo[258049]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:50:45 compute-0 sudo[258049]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:50:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 319 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 206 op/s
Jan 26 15:50:45 compute-0 podman[258085]: 2026-01-26 15:50:45.755734993 +0000 UTC m=+0.048949300 container create e5270f3e9e0a847ed2934333cd0023dc35e768fbf4ad8f352377244589283998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shtern, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 15:50:45 compute-0 systemd[1]: Started libpod-conmon-e5270f3e9e0a847ed2934333cd0023dc35e768fbf4ad8f352377244589283998.scope.
Jan 26 15:50:45 compute-0 podman[258085]: 2026-01-26 15:50:45.737045706 +0000 UTC m=+0.030260033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:50:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:50:45 compute-0 podman[258085]: 2026-01-26 15:50:45.852583446 +0000 UTC m=+0.145797773 container init e5270f3e9e0a847ed2934333cd0023dc35e768fbf4ad8f352377244589283998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:50:45 compute-0 podman[258085]: 2026-01-26 15:50:45.860921459 +0000 UTC m=+0.154135756 container start e5270f3e9e0a847ed2934333cd0023dc35e768fbf4ad8f352377244589283998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shtern, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:50:45 compute-0 confident_shtern[258103]: 167 167
Jan 26 15:50:45 compute-0 systemd[1]: libpod-e5270f3e9e0a847ed2934333cd0023dc35e768fbf4ad8f352377244589283998.scope: Deactivated successfully.
Jan 26 15:50:45 compute-0 podman[258085]: 2026-01-26 15:50:45.868477785 +0000 UTC m=+0.161692122 container attach e5270f3e9e0a847ed2934333cd0023dc35e768fbf4ad8f352377244589283998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:50:45 compute-0 podman[258085]: 2026-01-26 15:50:45.869422478 +0000 UTC m=+0.162636795 container died e5270f3e9e0a847ed2934333cd0023dc35e768fbf4ad8f352377244589283998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 15:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d752df8c32d25d1666ced9247b20bb48ac3323ab353a4e49630a08e5d2e856a3-merged.mount: Deactivated successfully.
Jan 26 15:50:45 compute-0 podman[258085]: 2026-01-26 15:50:45.924025435 +0000 UTC m=+0.217239742 container remove e5270f3e9e0a847ed2934333cd0023dc35e768fbf4ad8f352377244589283998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:50:45 compute-0 systemd[1]: libpod-conmon-e5270f3e9e0a847ed2934333cd0023dc35e768fbf4ad8f352377244589283998.scope: Deactivated successfully.
Jan 26 15:50:45 compute-0 ceph-mon[75140]: pgmap v1057: 305 pgs: 305 active+clean; 319 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 206 op/s
Jan 26 15:50:46 compute-0 podman[258126]: 2026-01-26 15:50:46.181294285 +0000 UTC m=+0.094490724 container create 1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Jan 26 15:50:46 compute-0 podman[258126]: 2026-01-26 15:50:46.112686675 +0000 UTC m=+0.025883134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:50:46 compute-0 systemd[1]: Started libpod-conmon-1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799.scope.
Jan 26 15:50:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4c193061a626a3c2721368a212a109089508e8c11f115f17e53b3925ed2bce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4c193061a626a3c2721368a212a109089508e8c11f115f17e53b3925ed2bce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4c193061a626a3c2721368a212a109089508e8c11f115f17e53b3925ed2bce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e4c193061a626a3c2721368a212a109089508e8c11f115f17e53b3925ed2bce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:50:46 compute-0 podman[258126]: 2026-01-26 15:50:46.303433917 +0000 UTC m=+0.216630386 container init 1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 15:50:46 compute-0 podman[258126]: 2026-01-26 15:50:46.309824803 +0000 UTC m=+0.223021232 container start 1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hypatia, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:50:46 compute-0 podman[258126]: 2026-01-26 15:50:46.313656977 +0000 UTC m=+0.226853436 container attach 1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:50:46 compute-0 nova_compute[239965]: 2026-01-26 15:50:46.509 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:46 compute-0 nova_compute[239965]: 2026-01-26 15:50:46.586 239969 DEBUG nova.virt.libvirt.driver [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 15:50:47 compute-0 lvm[258219]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:50:47 compute-0 lvm[258220]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:50:47 compute-0 lvm[258220]: VG ceph_vg1 finished
Jan 26 15:50:47 compute-0 lvm[258219]: VG ceph_vg0 finished
Jan 26 15:50:47 compute-0 lvm[258222]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:50:47 compute-0 lvm[258222]: VG ceph_vg2 finished
Jan 26 15:50:47 compute-0 serene_hypatia[258141]: {}
Jan 26 15:50:47 compute-0 systemd[1]: libpod-1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799.scope: Deactivated successfully.
Jan 26 15:50:47 compute-0 podman[258126]: 2026-01-26 15:50:47.260651778 +0000 UTC m=+1.173848207 container died 1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hypatia, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:50:47 compute-0 systemd[1]: libpod-1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799.scope: Consumed 1.349s CPU time.
Jan 26 15:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e4c193061a626a3c2721368a212a109089508e8c11f115f17e53b3925ed2bce-merged.mount: Deactivated successfully.
Jan 26 15:50:47 compute-0 podman[258126]: 2026-01-26 15:50:47.309399532 +0000 UTC m=+1.222595961 container remove 1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_hypatia, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:50:47 compute-0 systemd[1]: libpod-conmon-1d8e31deac86db9525cf11b7a5f7136a0a70fd2df463f5ee0f7e1567e524c799.scope: Deactivated successfully.
Jan 26 15:50:47 compute-0 sudo[258049]: pam_unix(sudo:session): session closed for user root
Jan 26 15:50:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:50:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:50:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:50:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:50:47 compute-0 sudo[258236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:50:47 compute-0 sudo[258236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:50:47 compute-0 sudo[258236]: pam_unix(sudo:session): session closed for user root
Jan 26 15:50:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 318 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.8 MiB/s wr, 302 op/s
Jan 26 15:50:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:50:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:50:48 compute-0 ceph-mon[75140]: pgmap v1058: 305 pgs: 305 active+clean; 318 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.8 MiB/s wr, 302 op/s
Jan 26 15:50:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:50:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/790049638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:50:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:50:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/790049638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002590674010153401 of space, bias 1.0, pg target 0.7772022030460203 quantized to 32 (current 32)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666848118280211 of space, bias 1.0, pg target 0.2000544354840633 quantized to 32 (current 32)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.192426906694244e-07 of space, bias 4.0, pg target 0.0009830912288033094 quantized to 16 (current 16)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:50:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:50:48 compute-0 kernel: tap509bd168-4b (unregistering): left promiscuous mode
Jan 26 15:50:48 compute-0 NetworkManager[48954]: <info>  [1769442648.8128] device (tap509bd168-4b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:50:48 compute-0 ovn_controller[146046]: 2026-01-26T15:50:48Z|00087|binding|INFO|Releasing lport 509bd168-4b1c-48dd-9b84-73e955518fdc from this chassis (sb_readonly=0)
Jan 26 15:50:48 compute-0 nova_compute[239965]: 2026-01-26 15:50:48.823 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:48 compute-0 ovn_controller[146046]: 2026-01-26T15:50:48Z|00088|binding|INFO|Setting lport 509bd168-4b1c-48dd-9b84-73e955518fdc down in Southbound
Jan 26 15:50:48 compute-0 ovn_controller[146046]: 2026-01-26T15:50:48Z|00089|binding|INFO|Removing iface tap509bd168-4b ovn-installed in OVS
Jan 26 15:50:48 compute-0 nova_compute[239965]: 2026-01-26 15:50:48.827 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:48.830 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:e8:b4 10.100.0.4'], port_security=['fa:16:3e:20:e8:b4 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '8d71fe6f-299d-4a2f-912c-c1f52540bdd8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=509bd168-4b1c-48dd-9b84-73e955518fdc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:50:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:48.831 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 509bd168-4b1c-48dd-9b84-73e955518fdc in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 unbound from our chassis
Jan 26 15:50:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:48.832 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5e237c1-a75f-479a-88ee-c0f788914b11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:50:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:48.833 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e261e008-ee77-4a2d-a6d8-278c6e4aa9a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:48.834 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace which is not needed anymore
Jan 26 15:50:48 compute-0 nova_compute[239965]: 2026-01-26 15:50:48.844 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:48 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Deactivated successfully.
Jan 26 15:50:48 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000011.scope: Consumed 13.612s CPU time.
Jan 26 15:50:48 compute-0 systemd-machined[208061]: Machine qemu-19-instance-00000011 terminated.
Jan 26 15:50:48 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[257311]: [NOTICE]   (257322) : haproxy version is 2.8.14-c23fe91
Jan 26 15:50:48 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[257311]: [NOTICE]   (257322) : path to executable is /usr/sbin/haproxy
Jan 26 15:50:48 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[257311]: [WARNING]  (257322) : Exiting Master process...
Jan 26 15:50:48 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[257311]: [WARNING]  (257322) : Exiting Master process...
Jan 26 15:50:48 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[257311]: [ALERT]    (257322) : Current worker (257324) exited with code 143 (Terminated)
Jan 26 15:50:48 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[257311]: [WARNING]  (257322) : All workers exited. Exiting... (0)
Jan 26 15:50:48 compute-0 systemd[1]: libpod-36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba.scope: Deactivated successfully.
Jan 26 15:50:48 compute-0 podman[258282]: 2026-01-26 15:50:48.976720745 +0000 UTC m=+0.043870917 container died 36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 15:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba-userdata-shm.mount: Deactivated successfully.
Jan 26 15:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-06706a9596633dd844b58e4d67742db76b7795acc84a1afb7d6a5ca912d889f9-merged.mount: Deactivated successfully.
Jan 26 15:50:49 compute-0 podman[258282]: 2026-01-26 15:50:49.018516058 +0000 UTC m=+0.085666220 container cleanup 36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:50:49 compute-0 systemd[1]: libpod-conmon-36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba.scope: Deactivated successfully.
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.034 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.041 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:49 compute-0 podman[258312]: 2026-01-26 15:50:49.094909018 +0000 UTC m=+0.052006224 container remove 36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:50:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:49.101 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[09d17fe3-f04a-4b85-ab7a-260ad4ade0c7]: (4, ('Mon Jan 26 03:50:48 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba)\n36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba\nMon Jan 26 03:50:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba)\n36e305baab8d369e583a38541134520c97503e1b844ea72ba7cd91b681de5eba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:49.103 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b68fd0dd-ae23-4ec6-bef9-fcaa81429fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:49.104 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.105 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:49 compute-0 kernel: tapf5e237c1-a0: left promiscuous mode
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.126 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:49.128 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c12bbcc5-1435-455f-bef4-2c1160805115]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:49.149 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ac2223c9-24b0-49f6-b08d-b5349bf1bd47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:49.150 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[881e9e7a-97e2-44a6-b907-788fad7fddb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:49.164 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f3eb6493-c214-4801-93de-12051e77780a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 412281, 'reachable_time': 24288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258336, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:49 compute-0 systemd[1]: run-netns-ovnmeta\x2df5e237c1\x2da75f\x2d479a\x2d88ee\x2dc0f788914b11.mount: Deactivated successfully.
Jan 26 15:50:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:49.166 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:50:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:49.166 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf7eb74-c2a2-40bb-abc4-d2e5c90b54c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:50:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/790049638' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:50:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/790049638' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.601 239969 INFO nova.virt.libvirt.driver [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Instance shutdown successfully after 13 seconds.
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.606 239969 INFO nova.virt.libvirt.driver [-] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Instance destroyed successfully.
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.607 239969 DEBUG nova.objects.instance [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'numa_topology' on Instance uuid 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 322 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 251 op/s
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.865 239969 DEBUG nova.compute.manager [req-9bf38140-b294-422e-8a98-e0c91b61a1c0 req-8fb9d961-76c8-4b57-919b-12629538e23b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Received event network-vif-unplugged-509bd168-4b1c-48dd-9b84-73e955518fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.866 239969 DEBUG oslo_concurrency.lockutils [req-9bf38140-b294-422e-8a98-e0c91b61a1c0 req-8fb9d961-76c8-4b57-919b-12629538e23b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.866 239969 DEBUG oslo_concurrency.lockutils [req-9bf38140-b294-422e-8a98-e0c91b61a1c0 req-8fb9d961-76c8-4b57-919b-12629538e23b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.866 239969 DEBUG oslo_concurrency.lockutils [req-9bf38140-b294-422e-8a98-e0c91b61a1c0 req-8fb9d961-76c8-4b57-919b-12629538e23b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.866 239969 DEBUG nova.compute.manager [req-9bf38140-b294-422e-8a98-e0c91b61a1c0 req-8fb9d961-76c8-4b57-919b-12629538e23b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] No waiting events found dispatching network-vif-unplugged-509bd168-4b1c-48dd-9b84-73e955518fdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.867 239969 WARNING nova.compute.manager [req-9bf38140-b294-422e-8a98-e0c91b61a1c0 req-8fb9d961-76c8-4b57-919b-12629538e23b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Received unexpected event network-vif-unplugged-509bd168-4b1c-48dd-9b84-73e955518fdc for instance with vm_state active and task_state powering-off.
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.876 239969 DEBUG nova.compute.manager [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:49 compute-0 nova_compute[239965]: 2026-01-26 15:50:49.932 239969 DEBUG oslo_concurrency.lockutils [None req-7e9eb995-eb7f-43f8-9c70-294e9380ec0f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:50 compute-0 ceph-mon[75140]: pgmap v1059: 305 pgs: 305 active+clean; 322 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 251 op/s
Jan 26 15:50:50 compute-0 ovn_controller[146046]: 2026-01-26T15:50:50Z|00090|binding|INFO|Releasing lport 98f4f5ff-65bb-4d1b-80cd-163cfdeaa557 from this chassis (sb_readonly=0)
Jan 26 15:50:50 compute-0 nova_compute[239965]: 2026-01-26 15:50:50.526 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:50 compute-0 ovn_controller[146046]: 2026-01-26T15:50:50Z|00091|binding|INFO|Releasing lport 98f4f5ff-65bb-4d1b-80cd-163cfdeaa557 from this chassis (sb_readonly=0)
Jan 26 15:50:50 compute-0 nova_compute[239965]: 2026-01-26 15:50:50.724 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:51 compute-0 podman[258339]: 2026-01-26 15:50:51.365927615 +0000 UTC m=+0.051588505 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 15:50:51 compute-0 podman[258340]: 2026-01-26 15:50:51.393751506 +0000 UTC m=+0.079414416 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 15:50:51 compute-0 nova_compute[239965]: 2026-01-26 15:50:51.510 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 326 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 192 op/s
Jan 26 15:50:52 compute-0 ceph-mon[75140]: pgmap v1060: 305 pgs: 305 active+clean; 326 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 192 op/s
Jan 26 15:50:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 326 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.081 239969 DEBUG nova.compute.manager [req-eea327e6-2596-483e-81a1-df2ba04b7725 req-06e477b4-10f1-4322-872b-99bf31f424ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Received event network-vif-plugged-509bd168-4b1c-48dd-9b84-73e955518fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.081 239969 DEBUG oslo_concurrency.lockutils [req-eea327e6-2596-483e-81a1-df2ba04b7725 req-06e477b4-10f1-4322-872b-99bf31f424ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.081 239969 DEBUG oslo_concurrency.lockutils [req-eea327e6-2596-483e-81a1-df2ba04b7725 req-06e477b4-10f1-4322-872b-99bf31f424ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.082 239969 DEBUG oslo_concurrency.lockutils [req-eea327e6-2596-483e-81a1-df2ba04b7725 req-06e477b4-10f1-4322-872b-99bf31f424ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.082 239969 DEBUG nova.compute.manager [req-eea327e6-2596-483e-81a1-df2ba04b7725 req-06e477b4-10f1-4322-872b-99bf31f424ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] No waiting events found dispatching network-vif-plugged-509bd168-4b1c-48dd-9b84-73e955518fdc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.082 239969 WARNING nova.compute.manager [req-eea327e6-2596-483e-81a1-df2ba04b7725 req-06e477b4-10f1-4322-872b-99bf31f424ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Received unexpected event network-vif-plugged-509bd168-4b1c-48dd-9b84-73e955518fdc for instance with vm_state stopped and task_state None.
Jan 26 15:50:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.696 239969 DEBUG nova.compute.manager [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:54 compute-0 ceph-mon[75140]: pgmap v1061: 305 pgs: 305 active+clean; 326 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 180 op/s
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.715 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "d82c98a2-a53f-4e4f-a132-042f797f2c91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.716 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.745 239969 INFO nova.compute.manager [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] instance snapshotting
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.745 239969 WARNING nova.compute.manager [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] trying to snapshot a non-running instance: (state: 4 expected: 1)
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.746 239969 DEBUG nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.815 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.815 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.823 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.823 239969 INFO nova.compute.claims [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:50:54 compute-0 nova_compute[239965]: 2026-01-26 15:50:54.987 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.088 239969 INFO nova.virt.libvirt.driver [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Beginning cold snapshot process
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.232 239969 DEBUG nova.virt.libvirt.imagebackend [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:50:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:50:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1798635032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.558 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.563 239969 DEBUG nova.compute.provider_tree [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.579 239969 DEBUG nova.scheduler.client.report [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.611 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.612 239969 DEBUG nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:50:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 326 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.666 239969 DEBUG nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.667 239969 DEBUG nova.network.neutron [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.697 239969 INFO nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:50:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1798635032' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.725 239969 DEBUG nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.831 239969 DEBUG nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.832 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.833 239969 INFO nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Creating image(s)
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.859 239969 DEBUG nova.storage.rbd_utils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image d82c98a2-a53f-4e4f-a132-042f797f2c91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.890 239969 DEBUG nova.storage.rbd_utils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image d82c98a2-a53f-4e4f-a132-042f797f2c91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.915 239969 DEBUG nova.storage.rbd_utils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image d82c98a2-a53f-4e4f-a132-042f797f2c91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.920 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.942 239969 DEBUG nova.storage.rbd_utils [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(0d982b0a9a9744e8af90a15a682256a5) on rbd image(8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.982 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.984 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.984 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:55 compute-0 nova_compute[239965]: 2026-01-26 15:50:55.985 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.003 239969 DEBUG nova.storage.rbd_utils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image d82c98a2-a53f-4e4f-a132-042f797f2c91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.007 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d82c98a2-a53f-4e4f-a132-042f797f2c91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.128 239969 DEBUG nova.policy [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b47ce75429ed4cd1ba21c0d50c2e552d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.263 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d82c98a2-a53f-4e4f-a132-042f797f2c91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.257s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.320 239969 DEBUG nova.storage.rbd_utils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] resizing rbd image d82c98a2-a53f-4e4f-a132-042f797f2c91_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.386 239969 DEBUG nova.objects.instance [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'migration_context' on Instance uuid d82c98a2-a53f-4e4f-a132-042f797f2c91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.404 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.405 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Ensure instance console log exists: /var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.405 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.405 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.406 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.458 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442641.4567478, 3a0641d5-a02c-4801-ab1d-4f097cd3f431 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.458 239969 INFO nova.compute.manager [-] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] VM Stopped (Lifecycle Event)
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.485 239969 DEBUG nova.compute.manager [None req-5147d5f3-dfb3-4b4d-8ff6-a858a9f643ec - - - - - -] [instance: 3a0641d5-a02c-4801-ab1d-4f097cd3f431] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.513 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:50:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Jan 26 15:50:56 compute-0 ceph-mon[75140]: pgmap v1062: 305 pgs: 305 active+clean; 326 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 157 op/s
Jan 26 15:50:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Jan 26 15:50:56 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.781 239969 DEBUG nova.storage.rbd_utils [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] cloning vms/8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk@0d982b0a9a9744e8af90a15a682256a5 to images/b98f163a-aed3-4962-b1be-b36de86eaddc clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.891 239969 DEBUG nova.network.neutron [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Successfully created port: 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:50:56 compute-0 nova_compute[239965]: 2026-01-26 15:50:56.957 239969 DEBUG nova.storage.rbd_utils [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] flattening images/b98f163a-aed3-4962-b1be-b36de86eaddc flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:50:57 compute-0 nova_compute[239965]: 2026-01-26 15:50:57.275 239969 DEBUG nova.storage.rbd_utils [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] removing snapshot(0d982b0a9a9744e8af90a15a682256a5) on rbd image(8d71fe6f-299d-4a2f-912c-c1f52540bdd8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:50:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 383 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 161 op/s
Jan 26 15:50:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Jan 26 15:50:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Jan 26 15:50:57 compute-0 ceph-mon[75140]: osdmap e159: 3 total, 3 up, 3 in
Jan 26 15:50:57 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Jan 26 15:50:57 compute-0 nova_compute[239965]: 2026-01-26 15:50:57.779 239969 DEBUG nova.storage.rbd_utils [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(snap) on rbd image(b98f163a-aed3-4962-b1be-b36de86eaddc) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:50:58 compute-0 nova_compute[239965]: 2026-01-26 15:50:58.310 239969 DEBUG nova.network.neutron [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Successfully updated port: 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:50:58 compute-0 nova_compute[239965]: 2026-01-26 15:50:58.330 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "refresh_cache-d82c98a2-a53f-4e4f-a132-042f797f2c91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:58 compute-0 nova_compute[239965]: 2026-01-26 15:50:58.330 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquired lock "refresh_cache-d82c98a2-a53f-4e4f-a132-042f797f2c91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:50:58 compute-0 nova_compute[239965]: 2026-01-26 15:50:58.330 239969 DEBUG nova.network.neutron [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:50:58 compute-0 ovn_controller[146046]: 2026-01-26T15:50:58Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:ab:15 10.100.0.7
Jan 26 15:50:58 compute-0 ovn_controller[146046]: 2026-01-26T15:50:58Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:ab:15 10.100.0.7
Jan 26 15:50:58 compute-0 nova_compute[239965]: 2026-01-26 15:50:58.505 239969 DEBUG nova.compute.manager [req-f87170d9-d8c0-40bc-b198-ef3bbd5848bd req-889eaa52-76e8-4967-b848-7325a3a8cf5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Received event network-changed-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:50:58 compute-0 nova_compute[239965]: 2026-01-26 15:50:58.506 239969 DEBUG nova.compute.manager [req-f87170d9-d8c0-40bc-b198-ef3bbd5848bd req-889eaa52-76e8-4967-b848-7325a3a8cf5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Refreshing instance network info cache due to event network-changed-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:50:58 compute-0 nova_compute[239965]: 2026-01-26 15:50:58.506 239969 DEBUG oslo_concurrency.lockutils [req-f87170d9-d8c0-40bc-b198-ef3bbd5848bd req-889eaa52-76e8-4967-b848-7325a3a8cf5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d82c98a2-a53f-4e4f-a132-042f797f2c91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:50:58 compute-0 nova_compute[239965]: 2026-01-26 15:50:58.596 239969 DEBUG nova.network.neutron [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:50:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Jan 26 15:50:58 compute-0 ceph-mon[75140]: pgmap v1064: 305 pgs: 305 active+clean; 383 MiB data, 406 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.2 MiB/s wr, 161 op/s
Jan 26 15:50:58 compute-0 ceph-mon[75140]: osdmap e160: 3 total, 3 up, 3 in
Jan 26 15:50:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Jan 26 15:50:58 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Jan 26 15:50:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e161 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:50:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:59.211 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:50:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:59.212 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:50:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:50:59.213 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:50:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 429 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 10 MiB/s wr, 228 op/s
Jan 26 15:50:59 compute-0 ceph-mon[75140]: osdmap e161: 3 total, 3 up, 3 in
Jan 26 15:50:59 compute-0 nova_compute[239965]: 2026-01-26 15:50:59.810 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.099 239969 DEBUG nova.network.neutron [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Updating instance_info_cache with network_info: [{"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.128 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Releasing lock "refresh_cache-d82c98a2-a53f-4e4f-a132-042f797f2c91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.128 239969 DEBUG nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Instance network_info: |[{"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.129 239969 DEBUG oslo_concurrency.lockutils [req-f87170d9-d8c0-40bc-b198-ef3bbd5848bd req-889eaa52-76e8-4967-b848-7325a3a8cf5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d82c98a2-a53f-4e4f-a132-042f797f2c91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.129 239969 DEBUG nova.network.neutron [req-f87170d9-d8c0-40bc-b198-ef3bbd5848bd req-889eaa52-76e8-4967-b848-7325a3a8cf5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Refreshing network info cache for port 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.132 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Start _get_guest_xml network_info=[{"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.137 239969 WARNING nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.143 239969 DEBUG nova.virt.libvirt.host [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.143 239969 DEBUG nova.virt.libvirt.host [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.149 239969 DEBUG nova.virt.libvirt.host [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.150 239969 DEBUG nova.virt.libvirt.host [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.150 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.150 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.150 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.151 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.151 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.151 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.151 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.151 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.152 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.152 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.152 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.152 239969 DEBUG nova.virt.hardware [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.155 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.426 239969 INFO nova.virt.libvirt.driver [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Snapshot image upload complete
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.427 239969 INFO nova.compute.manager [None req-9f1e7569-6546-4669-8bde-49b298a78407 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Took 5.68 seconds to snapshot the instance on the hypervisor.
Jan 26 15:51:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892232999' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.701 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.728 239969 DEBUG nova.storage.rbd_utils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image d82c98a2-a53f-4e4f-a132-042f797f2c91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:00 compute-0 nova_compute[239965]: 2026-01-26 15:51:00.733 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:00 compute-0 ceph-mon[75140]: pgmap v1067: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 429 MiB data, 445 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 10 MiB/s wr, 228 op/s
Jan 26 15:51:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1892232999' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/617655376' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.303 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.305 239969 DEBUG nova.virt.libvirt.vif [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1570723011',display_name='tempest-ServersAdminTestJSON-server-1570723011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1570723011',id=19,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-9s64v06v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:55Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=d82c98a2-a53f-4e4f-a132-042f797f2c91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.306 239969 DEBUG nova.network.os_vif_util [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.307 239969 DEBUG nova.network.os_vif_util [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b0:29,bridge_name='br-int',has_traffic_filtering=True,id=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b47f23b-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.308 239969 DEBUG nova.objects.instance [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'pci_devices' on Instance uuid d82c98a2-a53f-4e4f-a132-042f797f2c91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.328 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <uuid>d82c98a2-a53f-4e4f-a132-042f797f2c91</uuid>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <name>instance-00000013</name>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdminTestJSON-server-1570723011</nova:name>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:51:00</nova:creationTime>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <nova:user uuid="b47ce75429ed4cd1ba21c0d50c2e552d">tempest-ServersAdminTestJSON-863857415-project-member</nova:user>
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <nova:project uuid="b3195eedf4a34fabaf019faaaad7eb71">tempest-ServersAdminTestJSON-863857415</nova:project>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <nova:port uuid="5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b">
Jan 26 15:51:01 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <system>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <entry name="serial">d82c98a2-a53f-4e4f-a132-042f797f2c91</entry>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <entry name="uuid">d82c98a2-a53f-4e4f-a132-042f797f2c91</entry>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     </system>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <os>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   </os>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <features>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   </features>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d82c98a2-a53f-4e4f-a132-042f797f2c91_disk">
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d82c98a2-a53f-4e4f-a132-042f797f2c91_disk.config">
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:01 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:c8:b0:29"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <target dev="tap5b47f23b-ae"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91/console.log" append="off"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <video>
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     </video>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:51:01 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:51:01 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:51:01 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:51:01 compute-0 nova_compute[239965]: </domain>
Jan 26 15:51:01 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.330 239969 DEBUG nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Preparing to wait for external event network-vif-plugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.330 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.331 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.331 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.333 239969 DEBUG nova.virt.libvirt.vif [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:50:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1570723011',display_name='tempest-ServersAdminTestJSON-server-1570723011',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1570723011',id=19,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-9s64v06v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:50:55Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=d82c98a2-a53f-4e4f-a132-042f797f2c91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.333 239969 DEBUG nova.network.os_vif_util [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.334 239969 DEBUG nova.network.os_vif_util [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b0:29,bridge_name='br-int',has_traffic_filtering=True,id=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b47f23b-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.335 239969 DEBUG os_vif [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b0:29,bridge_name='br-int',has_traffic_filtering=True,id=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b47f23b-ae') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.336 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.337 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.338 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.341 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.342 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5b47f23b-ae, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.343 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5b47f23b-ae, col_values=(('external_ids', {'iface-id': '5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:b0:29', 'vm-uuid': 'd82c98a2-a53f-4e4f-a132-042f797f2c91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:01 compute-0 NetworkManager[48954]: <info>  [1769442661.3457] manager: (tap5b47f23b-ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.348 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.352 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.353 239969 INFO os_vif [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:b0:29,bridge_name='br-int',has_traffic_filtering=True,id=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b47f23b-ae')
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.405 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.406 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.406 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No VIF found with MAC fa:16:3e:c8:b0:29, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.406 239969 INFO nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Using config drive
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.424 239969 DEBUG nova.storage.rbd_utils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image d82c98a2-a53f-4e4f-a132-042f797f2c91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 482 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 15 MiB/s wr, 307 op/s
Jan 26 15:51:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Jan 26 15:51:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/617655376' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Jan 26 15:51:01 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.839 239969 DEBUG nova.network.neutron [req-f87170d9-d8c0-40bc-b198-ef3bbd5848bd req-889eaa52-76e8-4967-b848-7325a3a8cf5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Updated VIF entry in instance network info cache for port 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.840 239969 DEBUG nova.network.neutron [req-f87170d9-d8c0-40bc-b198-ef3bbd5848bd req-889eaa52-76e8-4967-b848-7325a3a8cf5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Updating instance_info_cache with network_info: [{"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.890 239969 DEBUG oslo_concurrency.lockutils [req-f87170d9-d8c0-40bc-b198-ef3bbd5848bd req-889eaa52-76e8-4967-b848-7325a3a8cf5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d82c98a2-a53f-4e4f-a132-042f797f2c91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.926 239969 INFO nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Creating config drive at /var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91/disk.config
Jan 26 15:51:01 compute-0 nova_compute[239965]: 2026-01-26 15:51:01.930 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcbhl0ang execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.068 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcbhl0ang" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.090 239969 DEBUG nova.storage.rbd_utils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image d82c98a2-a53f-4e4f-a132-042f797f2c91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.094 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91/disk.config d82c98a2-a53f-4e4f-a132-042f797f2c91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.210 239969 DEBUG oslo_concurrency.processutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91/disk.config d82c98a2-a53f-4e4f-a132-042f797f2c91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.211 239969 INFO nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Deleting local config drive /var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91/disk.config because it was imported into RBD.
Jan 26 15:51:02 compute-0 kernel: tap5b47f23b-ae: entered promiscuous mode
Jan 26 15:51:02 compute-0 NetworkManager[48954]: <info>  [1769442662.2630] manager: (tap5b47f23b-ae): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Jan 26 15:51:02 compute-0 ovn_controller[146046]: 2026-01-26T15:51:02Z|00092|binding|INFO|Claiming lport 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b for this chassis.
Jan 26 15:51:02 compute-0 ovn_controller[146046]: 2026-01-26T15:51:02Z|00093|binding|INFO|5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b: Claiming fa:16:3e:c8:b0:29 10.100.0.8
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.266 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.275 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:b0:29 10.100.0.8'], port_security=['fa:16:3e:c8:b0:29 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd82c98a2-a53f-4e4f-a132-042f797f2c91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.276 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc bound to our chassis
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.278 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:51:02 compute-0 ovn_controller[146046]: 2026-01-26T15:51:02Z|00094|binding|INFO|Setting lport 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b ovn-installed in OVS
Jan 26 15:51:02 compute-0 ovn_controller[146046]: 2026-01-26T15:51:02Z|00095|binding|INFO|Setting lport 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b up in Southbound
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.295 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:02 compute-0 systemd-machined[208061]: New machine qemu-21-instance-00000013.
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.297 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[934aa4db-39c4-4612-b432-867e315f0028]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.300 239969 DEBUG oslo_concurrency.lockutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.301 239969 DEBUG oslo_concurrency.lockutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.301 239969 DEBUG oslo_concurrency.lockutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.301 239969 DEBUG oslo_concurrency.lockutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.301 239969 DEBUG oslo_concurrency.lockutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:02 compute-0 systemd-udevd[258855]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.303 239969 INFO nova.compute.manager [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Terminating instance
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.304 239969 DEBUG nova.compute.manager [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:51:02 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000013.
Jan 26 15:51:02 compute-0 NetworkManager[48954]: <info>  [1769442662.3158] device (tap5b47f23b-ae): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:51:02 compute-0 NetworkManager[48954]: <info>  [1769442662.3163] device (tap5b47f23b-ae): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.318 239969 INFO nova.virt.libvirt.driver [-] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Instance destroyed successfully.
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.319 239969 DEBUG nova.objects.instance [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'resources' on Instance uuid 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.326 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0de80547-8bed-4fd0-b60c-a87c007a4488]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.329 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8b4ce4e2-a0f2-42a0-b4a7-f74ae4c73c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.336 239969 DEBUG nova.virt.libvirt.vif [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:50:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1877048571',display_name='tempest-ImagesTestJSON-server-1877048571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1877048571',id=17,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:50:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-ow60jkkh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:51:00Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=8d71fe6f-299d-4a2f-912c-c1f52540bdd8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.336 239969 DEBUG nova.network.os_vif_util [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "509bd168-4b1c-48dd-9b84-73e955518fdc", "address": "fa:16:3e:20:e8:b4", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap509bd168-4b", "ovs_interfaceid": "509bd168-4b1c-48dd-9b84-73e955518fdc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.337 239969 DEBUG nova.network.os_vif_util [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:e8:b4,bridge_name='br-int',has_traffic_filtering=True,id=509bd168-4b1c-48dd-9b84-73e955518fdc,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509bd168-4b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.337 239969 DEBUG os_vif [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e8:b4,bridge_name='br-int',has_traffic_filtering=True,id=509bd168-4b1c-48dd-9b84-73e955518fdc,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509bd168-4b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.338 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.339 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap509bd168-4b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.342 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.343 239969 INFO os_vif [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:e8:b4,bridge_name='br-int',has_traffic_filtering=True,id=509bd168-4b1c-48dd-9b84-73e955518fdc,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap509bd168-4b')
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.355 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[974c44db-75ae-4817-a9a3-91819667140c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.373 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c6b15495-3db6-45c5-ae12-59791fa47fb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 700, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 38822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258881, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.390 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1537e9a6-d38e-46da-a4a3-ede59a7ddc59]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258883, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258883, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.392 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.394 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.395 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.395 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.396 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:02.396 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.609 239969 INFO nova.virt.libvirt.driver [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Deleting instance files /var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8_del
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.610 239969 INFO nova.virt.libvirt.driver [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Deletion of /var/lib/nova/instances/8d71fe6f-299d-4a2f-912c-c1f52540bdd8_del complete
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.695 239969 INFO nova.compute.manager [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Took 0.39 seconds to destroy the instance on the hypervisor.
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.695 239969 DEBUG oslo.service.loopingcall [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.696 239969 DEBUG nova.compute.manager [-] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.696 239969 DEBUG nova.network.neutron [-] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.786 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442662.7865357, d82c98a2-a53f-4e4f-a132-042f797f2c91 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.787 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] VM Started (Lifecycle Event)
Jan 26 15:51:02 compute-0 ceph-mon[75140]: pgmap v1068: 305 pgs: 2 active+clean+snaptrim, 4 active+clean+snaptrim_wait, 299 active+clean; 482 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 15 MiB/s wr, 307 op/s
Jan 26 15:51:02 compute-0 ceph-mon[75140]: osdmap e162: 3 total, 3 up, 3 in
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.810 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.815 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442662.787528, d82c98a2-a53f-4e4f-a132-042f797f2c91 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.815 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] VM Paused (Lifecycle Event)
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.832 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.835 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:02 compute-0 nova_compute[239965]: 2026-01-26 15:51:02.856 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 409 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 9.0 MiB/s wr, 291 op/s
Jan 26 15:51:03 compute-0 nova_compute[239965]: 2026-01-26 15:51:03.710 239969 DEBUG nova.network.neutron [-] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:03 compute-0 nova_compute[239965]: 2026-01-26 15:51:03.730 239969 INFO nova.compute.manager [-] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Took 1.03 seconds to deallocate network for instance.
Jan 26 15:51:03 compute-0 nova_compute[239965]: 2026-01-26 15:51:03.781 239969 DEBUG oslo_concurrency.lockutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:03 compute-0 nova_compute[239965]: 2026-01-26 15:51:03.782 239969 DEBUG oslo_concurrency.lockutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:03 compute-0 nova_compute[239965]: 2026-01-26 15:51:03.896 239969 DEBUG oslo_concurrency.processutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.048 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442649.047293, 8d71fe6f-299d-4a2f-912c-c1f52540bdd8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.048 239969 INFO nova.compute.manager [-] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] VM Stopped (Lifecycle Event)
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.080 239969 DEBUG nova.compute.manager [None req-3c9ff539-6d27-4b55-8f19-cae09ff324df - - - - - -] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Jan 26 15:51:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Jan 26 15:51:04 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Jan 26 15:51:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4071513467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.447 239969 DEBUG oslo_concurrency.processutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.456 239969 DEBUG nova.compute.provider_tree [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.474 239969 DEBUG nova.scheduler.client.report [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.501 239969 DEBUG oslo_concurrency.lockutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.540 239969 INFO nova.scheduler.client.report [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Deleted allocations for instance 8d71fe6f-299d-4a2f-912c-c1f52540bdd8
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.607 239969 DEBUG oslo_concurrency.lockutils [None req-12b78544-9bcd-4e14-acba-8492b475855e 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "8d71fe6f-299d-4a2f-912c-c1f52540bdd8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:04 compute-0 ceph-mon[75140]: pgmap v1070: 305 pgs: 305 active+clean; 409 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 9.0 MiB/s wr, 291 op/s
Jan 26 15:51:04 compute-0 ceph-mon[75140]: osdmap e163: 3 total, 3 up, 3 in
Jan 26 15:51:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4071513467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.812 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.865 239969 DEBUG nova.compute.manager [req-577b1dec-77e5-4f9c-a78b-1d686496b03e req-cc61c351-0889-4fb2-a6e8-a254f26f8db8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8d71fe6f-299d-4a2f-912c-c1f52540bdd8] Received event network-vif-deleted-509bd168-4b1c-48dd-9b84-73e955518fdc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.875 239969 DEBUG nova.compute.manager [req-8c9de96d-35d9-4153-848a-acfe193bab9f req-597c41e0-31ba-4678-87de-a273efdc8e70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Received event network-vif-plugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.876 239969 DEBUG oslo_concurrency.lockutils [req-8c9de96d-35d9-4153-848a-acfe193bab9f req-597c41e0-31ba-4678-87de-a273efdc8e70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.876 239969 DEBUG oslo_concurrency.lockutils [req-8c9de96d-35d9-4153-848a-acfe193bab9f req-597c41e0-31ba-4678-87de-a273efdc8e70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.876 239969 DEBUG oslo_concurrency.lockutils [req-8c9de96d-35d9-4153-848a-acfe193bab9f req-597c41e0-31ba-4678-87de-a273efdc8e70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.876 239969 DEBUG nova.compute.manager [req-8c9de96d-35d9-4153-848a-acfe193bab9f req-597c41e0-31ba-4678-87de-a273efdc8e70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Processing event network-vif-plugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.877 239969 DEBUG nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.880 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442664.879957, d82c98a2-a53f-4e4f-a132-042f797f2c91 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.880 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] VM Resumed (Lifecycle Event)
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.881 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.886 239969 INFO nova.virt.libvirt.driver [-] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Instance spawned successfully.
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.886 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.904 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.909 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.912 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.912 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.913 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.913 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.914 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.914 239969 DEBUG nova.virt.libvirt.driver [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.946 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.978 239969 INFO nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Took 9.15 seconds to spawn the instance on the hypervisor.
Jan 26 15:51:04 compute-0 nova_compute[239965]: 2026-01-26 15:51:04.979 239969 DEBUG nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.040 239969 INFO nova.compute.manager [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Took 10.25 seconds to build instance.
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.056 239969 DEBUG oslo_concurrency.lockutils [None req-d771710f-5fa9-4faf-8235-521aac62e0b3 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.548 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.549 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.563 239969 DEBUG nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.628 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.629 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.634 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.634 239969 INFO nova.compute.claims [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:51:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 409 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 182 op/s
Jan 26 15:51:05 compute-0 nova_compute[239965]: 2026-01-26 15:51:05.835 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/590638053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.431 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.437 239969 DEBUG nova.compute.provider_tree [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.454 239969 DEBUG nova.scheduler.client.report [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.476 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.477 239969 DEBUG nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.517 239969 DEBUG oslo_concurrency.lockutils [None req-d0a311c3-8d92-4731-a580-a7c09591350e a80e442404154772af6655c14ab3728d f43dbf72208344aa83db9a8c4fd3423f - - default default] Acquiring lock "refresh_cache-d82c98a2-a53f-4e4f-a132-042f797f2c91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.517 239969 DEBUG oslo_concurrency.lockutils [None req-d0a311c3-8d92-4731-a580-a7c09591350e a80e442404154772af6655c14ab3728d f43dbf72208344aa83db9a8c4fd3423f - - default default] Acquired lock "refresh_cache-d82c98a2-a53f-4e4f-a132-042f797f2c91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.518 239969 DEBUG nova.network.neutron [None req-d0a311c3-8d92-4731-a580-a7c09591350e a80e442404154772af6655c14ab3728d f43dbf72208344aa83db9a8c4fd3423f - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.538 239969 DEBUG nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.538 239969 DEBUG nova.network.neutron [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.558 239969 INFO nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.573 239969 DEBUG nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.668 239969 DEBUG nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.669 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.670 239969 INFO nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Creating image(s)
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.691 239969 DEBUG nova.storage.rbd_utils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.714 239969 DEBUG nova.storage.rbd_utils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.737 239969 DEBUG nova.storage.rbd_utils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.741 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.770 239969 DEBUG nova.policy [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3322c44e378e415bb486ef558314a67c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ddba5162f533447bba0159cafaa565bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:51:06 compute-0 ceph-mon[75140]: pgmap v1072: 305 pgs: 305 active+clean; 409 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.8 MiB/s wr, 182 op/s
Jan 26 15:51:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/590638053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.809 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.810 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.811 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.811 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.831 239969 DEBUG nova.storage.rbd_utils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:06 compute-0 nova_compute[239965]: 2026-01-26 15:51:06.834 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.069 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.235s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.128 239969 DEBUG nova.compute.manager [req-0c4c2357-6350-4aee-90ee-923dee9394ba req-d5845b08-dd9a-4e10-a7e9-713cf182413f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Received event network-vif-plugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.128 239969 DEBUG oslo_concurrency.lockutils [req-0c4c2357-6350-4aee-90ee-923dee9394ba req-d5845b08-dd9a-4e10-a7e9-713cf182413f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.129 239969 DEBUG oslo_concurrency.lockutils [req-0c4c2357-6350-4aee-90ee-923dee9394ba req-d5845b08-dd9a-4e10-a7e9-713cf182413f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.129 239969 DEBUG oslo_concurrency.lockutils [req-0c4c2357-6350-4aee-90ee-923dee9394ba req-d5845b08-dd9a-4e10-a7e9-713cf182413f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.129 239969 DEBUG nova.compute.manager [req-0c4c2357-6350-4aee-90ee-923dee9394ba req-d5845b08-dd9a-4e10-a7e9-713cf182413f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] No waiting events found dispatching network-vif-plugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.130 239969 WARNING nova.compute.manager [req-0c4c2357-6350-4aee-90ee-923dee9394ba req-d5845b08-dd9a-4e10-a7e9-713cf182413f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Received unexpected event network-vif-plugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b for instance with vm_state active and task_state None.
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.134 239969 DEBUG nova.storage.rbd_utils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] resizing rbd image c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.207 239969 DEBUG nova.objects.instance [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'migration_context' on Instance uuid c57c5c04-cfa8-4562-9f6d-a7211012e3bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.224 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.225 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Ensure instance console log exists: /var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.225 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.225 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.226 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.341 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:07.431 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:07.431 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:51:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:07.432 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.433 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:07 compute-0 nova_compute[239965]: 2026-01-26 15:51:07.641 239969 DEBUG nova.network.neutron [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Successfully created port: faed3540-5f43-433f-9d2e-22c433f411c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:51:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 343 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.1 MiB/s wr, 260 op/s
Jan 26 15:51:08 compute-0 nova_compute[239965]: 2026-01-26 15:51:08.127 239969 DEBUG nova.network.neutron [None req-d0a311c3-8d92-4731-a580-a7c09591350e a80e442404154772af6655c14ab3728d f43dbf72208344aa83db9a8c4fd3423f - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Updating instance_info_cache with network_info: [{"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:08 compute-0 nova_compute[239965]: 2026-01-26 15:51:08.148 239969 DEBUG oslo_concurrency.lockutils [None req-d0a311c3-8d92-4731-a580-a7c09591350e a80e442404154772af6655c14ab3728d f43dbf72208344aa83db9a8c4fd3423f - - default default] Releasing lock "refresh_cache-d82c98a2-a53f-4e4f-a132-042f797f2c91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:51:08 compute-0 nova_compute[239965]: 2026-01-26 15:51:08.149 239969 DEBUG nova.compute.manager [None req-d0a311c3-8d92-4731-a580-a7c09591350e a80e442404154772af6655c14ab3728d f43dbf72208344aa83db9a8c4fd3423f - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 26 15:51:08 compute-0 nova_compute[239965]: 2026-01-26 15:51:08.149 239969 DEBUG nova.compute.manager [None req-d0a311c3-8d92-4731-a580-a7c09591350e a80e442404154772af6655c14ab3728d f43dbf72208344aa83db9a8c4fd3423f - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] network_info to inject: |[{"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 26 15:51:08 compute-0 nova_compute[239965]: 2026-01-26 15:51:08.602 239969 DEBUG nova.network.neutron [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Successfully updated port: faed3540-5f43-433f-9d2e-22c433f411c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:51:08 compute-0 nova_compute[239965]: 2026-01-26 15:51:08.621 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "refresh_cache-c57c5c04-cfa8-4562-9f6d-a7211012e3bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:51:08 compute-0 nova_compute[239965]: 2026-01-26 15:51:08.621 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquired lock "refresh_cache-c57c5c04-cfa8-4562-9f6d-a7211012e3bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:51:08 compute-0 nova_compute[239965]: 2026-01-26 15:51:08.621 239969 DEBUG nova.network.neutron [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:51:08 compute-0 nova_compute[239965]: 2026-01-26 15:51:08.773 239969 DEBUG nova.network.neutron [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:51:08 compute-0 ceph-mon[75140]: pgmap v1073: 305 pgs: 305 active+clean; 343 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.1 MiB/s wr, 260 op/s
Jan 26 15:51:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e163 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Jan 26 15:51:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Jan 26 15:51:09 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.251 239969 DEBUG nova.compute.manager [req-8262bc59-5179-4445-9021-311a25c35574 req-bc691dea-2f25-4e29-895e-2b352d8425f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Received event network-changed-faed3540-5f43-433f-9d2e-22c433f411c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.251 239969 DEBUG nova.compute.manager [req-8262bc59-5179-4445-9021-311a25c35574 req-bc691dea-2f25-4e29-895e-2b352d8425f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Refreshing instance network info cache due to event network-changed-faed3540-5f43-433f-9d2e-22c433f411c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.252 239969 DEBUG oslo_concurrency.lockutils [req-8262bc59-5179-4445-9021-311a25c35574 req-bc691dea-2f25-4e29-895e-2b352d8425f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-c57c5c04-cfa8-4562-9f6d-a7211012e3bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.561 239969 DEBUG nova.network.neutron [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Updating instance_info_cache with network_info: [{"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.594 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Releasing lock "refresh_cache-c57c5c04-cfa8-4562-9f6d-a7211012e3bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.594 239969 DEBUG nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Instance network_info: |[{"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.595 239969 DEBUG oslo_concurrency.lockutils [req-8262bc59-5179-4445-9021-311a25c35574 req-bc691dea-2f25-4e29-895e-2b352d8425f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-c57c5c04-cfa8-4562-9f6d-a7211012e3bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.595 239969 DEBUG nova.network.neutron [req-8262bc59-5179-4445-9021-311a25c35574 req-bc691dea-2f25-4e29-895e-2b352d8425f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Refreshing network info cache for port faed3540-5f43-433f-9d2e-22c433f411c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.599 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Start _get_guest_xml network_info=[{"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.605 239969 WARNING nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.610 239969 DEBUG nova.virt.libvirt.host [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.611 239969 DEBUG nova.virt.libvirt.host [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.620 239969 DEBUG nova.virt.libvirt.host [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.621 239969 DEBUG nova.virt.libvirt.host [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.622 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.622 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.623 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.623 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.623 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.623 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.624 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.624 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.624 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.624 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.625 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.625 239969 DEBUG nova.virt.hardware [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.628 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 356 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 242 op/s
Jan 26 15:51:09 compute-0 nova_compute[239965]: 2026-01-26 15:51:09.812 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2696530143' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:10 compute-0 nova_compute[239965]: 2026-01-26 15:51:10.193 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:10 compute-0 nova_compute[239965]: 2026-01-26 15:51:10.215 239969 DEBUG nova.storage.rbd_utils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:10 compute-0 nova_compute[239965]: 2026-01-26 15:51:10.219 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:10 compute-0 ceph-mon[75140]: osdmap e164: 3 total, 3 up, 3 in
Jan 26 15:51:10 compute-0 ceph-mon[75140]: pgmap v1075: 305 pgs: 305 active+clean; 356 MiB data, 417 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.0 MiB/s wr, 242 op/s
Jan 26 15:51:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2696530143' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769770101' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:10 compute-0 nova_compute[239965]: 2026-01-26 15:51:10.776 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:10 compute-0 nova_compute[239965]: 2026-01-26 15:51:10.777 239969 DEBUG nova.virt.libvirt.vif [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:51:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2099021934',display_name='tempest-ImagesTestJSON-server-2099021934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2099021934',id=20,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-c6seime1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:06Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=c57c5c04-cfa8-4562-9f6d-a7211012e3bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:51:10 compute-0 nova_compute[239965]: 2026-01-26 15:51:10.778 239969 DEBUG nova.network.os_vif_util [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:10 compute-0 nova_compute[239965]: 2026-01-26 15:51:10.779 239969 DEBUG nova.network.os_vif_util [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:2d,bridge_name='br-int',has_traffic_filtering=True,id=faed3540-5f43-433f-9d2e-22c433f411c2,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaed3540-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:10 compute-0 nova_compute[239965]: 2026-01-26 15:51:10.780 239969 DEBUG nova.objects.instance [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'pci_devices' on Instance uuid c57c5c04-cfa8-4562-9f6d-a7211012e3bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/769770101' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 372 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 159 op/s
Jan 26 15:51:12 compute-0 ceph-mon[75140]: pgmap v1076: 305 pgs: 305 active+clean; 372 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 159 op/s
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.343 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.483 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <uuid>c57c5c04-cfa8-4562-9f6d-a7211012e3bf</uuid>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <name>instance-00000014</name>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesTestJSON-server-2099021934</nova:name>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:51:09</nova:creationTime>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <nova:user uuid="3322c44e378e415bb486ef558314a67c">tempest-ImagesTestJSON-1480202091-project-member</nova:user>
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <nova:project uuid="ddba5162f533447bba0159cafaa565bf">tempest-ImagesTestJSON-1480202091</nova:project>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <nova:port uuid="faed3540-5f43-433f-9d2e-22c433f411c2">
Jan 26 15:51:12 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <system>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <entry name="serial">c57c5c04-cfa8-4562-9f6d-a7211012e3bf</entry>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <entry name="uuid">c57c5c04-cfa8-4562-9f6d-a7211012e3bf</entry>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     </system>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <os>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   </os>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <features>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   </features>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk">
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk.config">
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:12 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:62:88:2d"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <target dev="tapfaed3540-5f"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf/console.log" append="off"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <video>
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     </video>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:51:12 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:51:12 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:51:12 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:51:12 compute-0 nova_compute[239965]: </domain>
Jan 26 15:51:12 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.485 239969 DEBUG nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Preparing to wait for external event network-vif-plugged-faed3540-5f43-433f-9d2e-22c433f411c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.485 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.485 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.486 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.486 239969 DEBUG nova.virt.libvirt.vif [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:51:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2099021934',display_name='tempest-ImagesTestJSON-server-2099021934',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2099021934',id=20,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-c6seime1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:06Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=c57c5c04-cfa8-4562-9f6d-a7211012e3bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.487 239969 DEBUG nova.network.os_vif_util [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.488 239969 DEBUG nova.network.os_vif_util [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:2d,bridge_name='br-int',has_traffic_filtering=True,id=faed3540-5f43-433f-9d2e-22c433f411c2,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaed3540-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.488 239969 DEBUG os_vif [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:2d,bridge_name='br-int',has_traffic_filtering=True,id=faed3540-5f43-433f-9d2e-22c433f411c2,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaed3540-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.489 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.489 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.489 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.492 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfaed3540-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.493 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfaed3540-5f, col_values=(('external_ids', {'iface-id': 'faed3540-5f43-433f-9d2e-22c433f411c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:88:2d', 'vm-uuid': 'c57c5c04-cfa8-4562-9f6d-a7211012e3bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.495 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:12 compute-0 NetworkManager[48954]: <info>  [1769442672.4960] manager: (tapfaed3540-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.498 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.501 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.501 239969 INFO os_vif [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:2d,bridge_name='br-int',has_traffic_filtering=True,id=faed3540-5f43-433f-9d2e-22c433f411c2,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaed3540-5f')
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.689 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.689 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.689 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No VIF found with MAC fa:16:3e:62:88:2d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.690 239969 INFO nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Using config drive
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.708 239969 DEBUG nova.storage.rbd_utils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.952 239969 DEBUG nova.network.neutron [req-8262bc59-5179-4445-9021-311a25c35574 req-bc691dea-2f25-4e29-895e-2b352d8425f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Updated VIF entry in instance network info cache for port faed3540-5f43-433f-9d2e-22c433f411c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.952 239969 DEBUG nova.network.neutron [req-8262bc59-5179-4445-9021-311a25c35574 req-bc691dea-2f25-4e29-895e-2b352d8425f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Updating instance_info_cache with network_info: [{"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:12 compute-0 nova_compute[239965]: 2026-01-26 15:51:12.967 239969 DEBUG oslo_concurrency.lockutils [req-8262bc59-5179-4445-9021-311a25c35574 req-bc691dea-2f25-4e29-895e-2b352d8425f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-c57c5c04-cfa8-4562-9f6d-a7211012e3bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.129 239969 INFO nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Creating config drive at /var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf/disk.config
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.134 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6sdxofo7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.263 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6sdxofo7" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.288 239969 DEBUG nova.storage.rbd_utils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.293 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf/disk.config c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.423 239969 DEBUG oslo_concurrency.processutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf/disk.config c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.424 239969 INFO nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Deleting local config drive /var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf/disk.config because it was imported into RBD.
Jan 26 15:51:13 compute-0 kernel: tapfaed3540-5f: entered promiscuous mode
Jan 26 15:51:13 compute-0 NetworkManager[48954]: <info>  [1769442673.4657] manager: (tapfaed3540-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Jan 26 15:51:13 compute-0 ovn_controller[146046]: 2026-01-26T15:51:13Z|00096|binding|INFO|Claiming lport faed3540-5f43-433f-9d2e-22c433f411c2 for this chassis.
Jan 26 15:51:13 compute-0 ovn_controller[146046]: 2026-01-26T15:51:13Z|00097|binding|INFO|faed3540-5f43-433f-9d2e-22c433f411c2: Claiming fa:16:3e:62:88:2d 10.100.0.3
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.474 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.486 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:88:2d 10.100.0.3'], port_security=['fa:16:3e:62:88:2d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c57c5c04-cfa8-4562-9f6d-a7211012e3bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=faed3540-5f43-433f-9d2e-22c433f411c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.487 156105 INFO neutron.agent.ovn.metadata.agent [-] Port faed3540-5f43-433f-9d2e-22c433f411c2 in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 bound to our chassis
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.488 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:51:13 compute-0 systemd-udevd[259275]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.502 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ad5587-02a9-41a4-8e8d-f55ed517bc4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.503 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5e237c1-a1 in ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.504 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5e237c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.505 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[14b0c566-39db-4078-b4ff-431cfce61ac7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 systemd-machined[208061]: New machine qemu-22-instance-00000014.
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.510 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9a0e516f-d931-4bd2-b926-61ecddd2db98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 NetworkManager[48954]: <info>  [1769442673.5124] device (tapfaed3540-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:51:13 compute-0 NetworkManager[48954]: <info>  [1769442673.5134] device (tapfaed3540-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:51:13 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000014.
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.532 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[61733234-d4c1-444b-b591-cc8cccd4ec50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.559 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[967e7819-11d4-4000-bb76-e7a389bc1709]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.562 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:13 compute-0 ovn_controller[146046]: 2026-01-26T15:51:13Z|00098|binding|INFO|Setting lport faed3540-5f43-433f-9d2e-22c433f411c2 ovn-installed in OVS
Jan 26 15:51:13 compute-0 ovn_controller[146046]: 2026-01-26T15:51:13Z|00099|binding|INFO|Setting lport faed3540-5f43-433f-9d2e-22c433f411c2 up in Southbound
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.566 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.603 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5418ac2a-de30-4801-b3d2-01e0150529f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.610 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ec64facc-800f-45ce-9bdf-31038e9b123d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 NetworkManager[48954]: <info>  [1769442673.6112] manager: (tapf5e237c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Jan 26 15:51:13 compute-0 systemd-udevd[259279]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.646 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b6e5266f-0920-453d-aa5c-3756976be598]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 135 op/s
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.649 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d40ac051-450a-4f98-9695-a3446cc659bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 NetworkManager[48954]: <info>  [1769442673.6752] device (tapf5e237c1-a0): carrier: link connected
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.681 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[faf59d9c-50cb-4319-875d-14794787055b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.698 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[535684a4-d41b-4f06-b77b-93d1163d9c6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416448, 'reachable_time': 19682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259309, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.714 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c1543d0a-5c21-4799-9698-e2078814f015]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:eb6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 416448, 'tstamp': 416448}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259310, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.733 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f564308b-4421-4596-8a3a-63c5d6159bb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416448, 'reachable_time': 19682, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259311, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.768 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4170f4d5-7276-48e1-b129-ec65d8690def]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.834 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ff7bd63c-397f-4a7b-b4e9-08ce59c0a996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.836 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.836 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.837 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5e237c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:13 compute-0 kernel: tapf5e237c1-a0: entered promiscuous mode
Jan 26 15:51:13 compute-0 NetworkManager[48954]: <info>  [1769442673.8513] manager: (tapf5e237c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.850 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.855 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5e237c1-a0, col_values=(('external_ids', {'iface-id': 'fd0478e8-96d8-4fbd-8a9a-fe78757277ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:13 compute-0 ovn_controller[146046]: 2026-01-26T15:51:13Z|00100|binding|INFO|Releasing lport fd0478e8-96d8-4fbd-8a9a-fe78757277ca from this chassis (sb_readonly=0)
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.857 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.875 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.876 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.877 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[923fb48e-b66c-4cd9-8971-7f78be791098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.878 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:51:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:13.880 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'env', 'PROCESS_TAG=haproxy-f5e237c1-a75f-479a-88ee-c0f788914b11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5e237c1-a75f-479a-88ee-c0f788914b11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:51:13 compute-0 nova_compute[239965]: 2026-01-26 15:51:13.973 239969 INFO nova.compute.manager [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Rebuilding instance
Jan 26 15:51:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:14 compute-0 podman[259343]: 2026-01-26 15:51:14.244007723 +0000 UTC m=+0.051458661 container create f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.261 239969 DEBUG nova.compute.manager [req-c533eba5-4bd5-493d-9462-6187a49d2cd6 req-b01588d8-e3e4-4d55-8cf5-35ca52b6efb7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Received event network-vif-plugged-faed3540-5f43-433f-9d2e-22c433f411c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.261 239969 DEBUG oslo_concurrency.lockutils [req-c533eba5-4bd5-493d-9462-6187a49d2cd6 req-b01588d8-e3e4-4d55-8cf5-35ca52b6efb7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.261 239969 DEBUG oslo_concurrency.lockutils [req-c533eba5-4bd5-493d-9462-6187a49d2cd6 req-b01588d8-e3e4-4d55-8cf5-35ca52b6efb7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.262 239969 DEBUG oslo_concurrency.lockutils [req-c533eba5-4bd5-493d-9462-6187a49d2cd6 req-b01588d8-e3e4-4d55-8cf5-35ca52b6efb7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.262 239969 DEBUG nova.compute.manager [req-c533eba5-4bd5-493d-9462-6187a49d2cd6 req-b01588d8-e3e4-4d55-8cf5-35ca52b6efb7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Processing event network-vif-plugged-faed3540-5f43-433f-9d2e-22c433f411c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.269 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'trusted_certs' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:14 compute-0 systemd[1]: Started libpod-conmon-f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae.scope.
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.292 239969 DEBUG nova.compute.manager [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:14 compute-0 podman[259343]: 2026-01-26 15:51:14.215809253 +0000 UTC m=+0.023260211 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:51:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/765824c4a826b9cf0103cb5f1512e6550351f9d14c2af3a2f79948a86126989e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:14 compute-0 podman[259343]: 2026-01-26 15:51:14.338854986 +0000 UTC m=+0.146305944 container init f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:51:14 compute-0 podman[259343]: 2026-01-26 15:51:14.344764621 +0000 UTC m=+0.152215559 container start f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.346 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'pci_requests' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.359 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'pci_devices' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:14 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[259358]: [NOTICE]   (259362) : New worker (259364) forked
Jan 26 15:51:14 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[259358]: [NOTICE]   (259362) : Loading success.
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.372 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'resources' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.389 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'migration_context' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.403 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.410 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.546 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442674.5455585, c57c5c04-cfa8-4562-9f6d-a7211012e3bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.546 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] VM Started (Lifecycle Event)
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.548 239969 DEBUG nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.552 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.554 239969 INFO nova.virt.libvirt.driver [-] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Instance spawned successfully.
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.555 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.606 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.610 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.610 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.611 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.611 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.611 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.612 239969 DEBUG nova.virt.libvirt.driver [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.615 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.675 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.676 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442674.5458143, c57c5c04-cfa8-4562-9f6d-a7211012e3bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.676 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] VM Paused (Lifecycle Event)
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.701 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.704 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442674.5508065, c57c5c04-cfa8-4562-9f6d-a7211012e3bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.704 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] VM Resumed (Lifecycle Event)
Jan 26 15:51:14 compute-0 ceph-mon[75140]: pgmap v1077: 305 pgs: 305 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.3 MiB/s wr, 135 op/s
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.773 239969 INFO nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Took 8.10 seconds to spawn the instance on the hypervisor.
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.774 239969 DEBUG nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.814 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.827 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.830 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.863 239969 INFO nova.compute.manager [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Took 9.25 seconds to build instance.
Jan 26 15:51:14 compute-0 nova_compute[239965]: 2026-01-26 15:51:14.992 239969 DEBUG oslo_concurrency.lockutils [None req-17298699-1029-4c4e-8264-e4a909546129 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Jan 26 15:51:16 compute-0 kernel: tap72aacd31-27 (unregistering): left promiscuous mode
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.669 239969 DEBUG nova.compute.manager [req-b7b123b7-4b3a-4506-b456-c79537062514 req-d3f45b60-edb3-41e1-9927-d9e3f13c7466 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Received event network-vif-plugged-faed3540-5f43-433f-9d2e-22c433f411c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.669 239969 DEBUG oslo_concurrency.lockutils [req-b7b123b7-4b3a-4506-b456-c79537062514 req-d3f45b60-edb3-41e1-9927-d9e3f13c7466 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.670 239969 DEBUG oslo_concurrency.lockutils [req-b7b123b7-4b3a-4506-b456-c79537062514 req-d3f45b60-edb3-41e1-9927-d9e3f13c7466 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.670 239969 DEBUG oslo_concurrency.lockutils [req-b7b123b7-4b3a-4506-b456-c79537062514 req-d3f45b60-edb3-41e1-9927-d9e3f13c7466 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.670 239969 DEBUG nova.compute.manager [req-b7b123b7-4b3a-4506-b456-c79537062514 req-d3f45b60-edb3-41e1-9927-d9e3f13c7466 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] No waiting events found dispatching network-vif-plugged-faed3540-5f43-433f-9d2e-22c433f411c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.671 239969 WARNING nova.compute.manager [req-b7b123b7-4b3a-4506-b456-c79537062514 req-d3f45b60-edb3-41e1-9927-d9e3f13c7466 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Received unexpected event network-vif-plugged-faed3540-5f43-433f-9d2e-22c433f411c2 for instance with vm_state active and task_state None.
Jan 26 15:51:16 compute-0 NetworkManager[48954]: <info>  [1769442676.6761] device (tap72aacd31-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.684 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:16 compute-0 ovn_controller[146046]: 2026-01-26T15:51:16Z|00101|binding|INFO|Releasing lport 72aacd31-2789-4f27-a514-f80766db0d6e from this chassis (sb_readonly=0)
Jan 26 15:51:16 compute-0 ovn_controller[146046]: 2026-01-26T15:51:16Z|00102|binding|INFO|Setting lport 72aacd31-2789-4f27-a514-f80766db0d6e down in Southbound
Jan 26 15:51:16 compute-0 ovn_controller[146046]: 2026-01-26T15:51:16Z|00103|binding|INFO|Removing iface tap72aacd31-27 ovn-installed in OVS
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.687 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.694 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:0d:aa 10.100.0.11'], port_security=['fa:16:3e:66:0d:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'db37e7ff-8499-4664-b2ba-014e27b8b6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=72aacd31-2789-4f27-a514-f80766db0d6e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.696 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 72aacd31-2789-4f27-a514-f80766db0d6e in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc unbound from our chassis
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.698 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.703 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:16 compute-0 ceph-mon[75140]: pgmap v1078: 305 pgs: 305 active+clean; 372 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 127 op/s
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.715 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[21fea448-ab03-49ea-9652-eff295a9c1f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:16 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 26 15:51:16 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 15.901s CPU time.
Jan 26 15:51:16 compute-0 systemd-machined[208061]: Machine qemu-17-instance-0000000f terminated.
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.744 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a1c9497c-6b51-4862-8bf0-bb03ea22a0bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.747 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b1403a-436e-471b-bf12-0b1733dacf7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.775 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[80c09afc-acbc-4cc6-83af-79360cbe9499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.792 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[40c78e89-7555-42a7-adbd-b005e394bb32]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 12, 'rx_bytes': 784, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 38822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259426, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.806 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[77549757-4e28-48ec-b8ba-e9d4421a9353]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259427, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259427, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.807 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.809 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.816 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.816 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.817 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.817 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:16.818 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:16 compute-0 kernel: tap72aacd31-27: entered promiscuous mode
Jan 26 15:51:16 compute-0 NetworkManager[48954]: <info>  [1769442676.9179] manager: (tap72aacd31-27): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Jan 26 15:51:16 compute-0 kernel: tap72aacd31-27 (unregistering): left promiscuous mode
Jan 26 15:51:16 compute-0 nova_compute[239965]: 2026-01-26 15:51:16.925 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.348 239969 DEBUG nova.compute.manager [req-c9ad4142-00c4-4310-ae89-834ac47f4aa2 req-62b1ae20-d9bc-42ad-8698-63fec336168a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-unplugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.348 239969 DEBUG oslo_concurrency.lockutils [req-c9ad4142-00c4-4310-ae89-834ac47f4aa2 req-62b1ae20-d9bc-42ad-8698-63fec336168a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.348 239969 DEBUG oslo_concurrency.lockutils [req-c9ad4142-00c4-4310-ae89-834ac47f4aa2 req-62b1ae20-d9bc-42ad-8698-63fec336168a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.348 239969 DEBUG oslo_concurrency.lockutils [req-c9ad4142-00c4-4310-ae89-834ac47f4aa2 req-62b1ae20-d9bc-42ad-8698-63fec336168a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.349 239969 DEBUG nova.compute.manager [req-c9ad4142-00c4-4310-ae89-834ac47f4aa2 req-62b1ae20-d9bc-42ad-8698-63fec336168a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] No waiting events found dispatching network-vif-unplugged-72aacd31-2789-4f27-a514-f80766db0d6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.349 239969 WARNING nova.compute.manager [req-c9ad4142-00c4-4310-ae89-834ac47f4aa2 req-62b1ae20-d9bc-42ad-8698-63fec336168a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received unexpected event network-vif-unplugged-72aacd31-2789-4f27-a514-f80766db0d6e for instance with vm_state error and task_state rebuilding.
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.431 239969 INFO nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance shutdown successfully after 3 seconds.
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.438 239969 INFO nova.virt.libvirt.driver [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance destroyed successfully.
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.442 239969 INFO nova.virt.libvirt.driver [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance destroyed successfully.
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.443 239969 DEBUG nova.virt.libvirt.vif [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:50:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-784382261',display_name='tempest-ServersAdminTestJSON-server-784382261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-784382261',id=15,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:50:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-x2p37smd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=<?>,task_sta
te='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:13Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=db37e7ff-8499-4664-b2ba-014e27b8b6bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.443 239969 DEBUG nova.network.os_vif_util [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.444 239969 DEBUG nova.network.os_vif_util [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.444 239969 DEBUG os_vif [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.446 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.446 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72aacd31-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.448 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.449 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.451 239969 INFO os_vif [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27')
Jan 26 15:51:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 381 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 119 op/s
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.828 239969 INFO nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deleting instance files /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb_del
Jan 26 15:51:17 compute-0 nova_compute[239965]: 2026-01-26 15:51:17.828 239969 INFO nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deletion of /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb_del complete
Jan 26 15:51:18 compute-0 ceph-mon[75140]: pgmap v1079: 305 pgs: 305 active+clean; 381 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 119 op/s
Jan 26 15:51:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e164 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:19 compute-0 ovn_controller[146046]: 2026-01-26T15:51:19Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:b0:29 10.100.0.8
Jan 26 15:51:19 compute-0 ovn_controller[146046]: 2026-01-26T15:51:19Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:b0:29 10.100.0.8
Jan 26 15:51:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 355 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Jan 26 15:51:19 compute-0 nova_compute[239965]: 2026-01-26 15:51:19.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.097 239969 DEBUG nova.objects.instance [None req-27861199-ea7d-4af6-b7fb-a266eea2a754 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'pci_devices' on Instance uuid c57c5c04-cfa8-4562-9f6d-a7211012e3bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.176 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442680.1760356, c57c5c04-cfa8-4562-9f6d-a7211012e3bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.177 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] VM Paused (Lifecycle Event)
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.219 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.222 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.255 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.319 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.319 239969 INFO nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Creating image(s)
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.340 239969 DEBUG nova.storage.rbd_utils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.364 239969 DEBUG nova.storage.rbd_utils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.387 239969 DEBUG nova.storage.rbd_utils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.390 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.451 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.453 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.453 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.454 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.479 239969 DEBUG nova.storage.rbd_utils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.484 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Jan 26 15:51:20 compute-0 ceph-mon[75140]: pgmap v1080: 305 pgs: 305 active+clean; 355 MiB data, 435 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Jan 26 15:51:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Jan 26 15:51:20 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Jan 26 15:51:20 compute-0 kernel: tapfaed3540-5f (unregistering): left promiscuous mode
Jan 26 15:51:20 compute-0 NetworkManager[48954]: <info>  [1769442680.8424] device (tapfaed3540-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.853 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:20 compute-0 ovn_controller[146046]: 2026-01-26T15:51:20Z|00104|binding|INFO|Releasing lport faed3540-5f43-433f-9d2e-22c433f411c2 from this chassis (sb_readonly=0)
Jan 26 15:51:20 compute-0 ovn_controller[146046]: 2026-01-26T15:51:20Z|00105|binding|INFO|Setting lport faed3540-5f43-433f-9d2e-22c433f411c2 down in Southbound
Jan 26 15:51:20 compute-0 ovn_controller[146046]: 2026-01-26T15:51:20Z|00106|binding|INFO|Removing iface tapfaed3540-5f ovn-installed in OVS
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:20 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Deactivated successfully.
Jan 26 15:51:20 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000014.scope: Consumed 6.845s CPU time.
Jan 26 15:51:20 compute-0 systemd-machined[208061]: Machine qemu-22-instance-00000014 terminated.
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.926 239969 DEBUG nova.compute.manager [req-b7b0f358-6399-4241-9128-23664677c374 req-898bd282-eeb3-444b-a80c-51980b1f43fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.926 239969 DEBUG oslo_concurrency.lockutils [req-b7b0f358-6399-4241-9128-23664677c374 req-898bd282-eeb3-444b-a80c-51980b1f43fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.927 239969 DEBUG oslo_concurrency.lockutils [req-b7b0f358-6399-4241-9128-23664677c374 req-898bd282-eeb3-444b-a80c-51980b1f43fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.927 239969 DEBUG oslo_concurrency.lockutils [req-b7b0f358-6399-4241-9128-23664677c374 req-898bd282-eeb3-444b-a80c-51980b1f43fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.927 239969 DEBUG nova.compute.manager [req-b7b0f358-6399-4241-9128-23664677c374 req-898bd282-eeb3-444b-a80c-51980b1f43fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] No waiting events found dispatching network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.927 239969 WARNING nova.compute.manager [req-b7b0f358-6399-4241-9128-23664677c374 req-898bd282-eeb3-444b-a80c-51980b1f43fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received unexpected event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e for instance with vm_state error and task_state rebuild_spawning.
Jan 26 15:51:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:20.929 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:88:2d 10.100.0.3'], port_security=['fa:16:3e:62:88:2d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'c57c5c04-cfa8-4562-9f6d-a7211012e3bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=faed3540-5f43-433f-9d2e-22c433f411c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:20.930 156105 INFO neutron.agent.ovn.metadata.agent [-] Port faed3540-5f43-433f-9d2e-22c433f411c2 in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 unbound from our chassis
Jan 26 15:51:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:20.932 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5e237c1-a75f-479a-88ee-c0f788914b11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:51:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:20.933 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9d695514-7849-4d5d-a31b-13588797f574]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:20.934 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace which is not needed anymore
Jan 26 15:51:20 compute-0 nova_compute[239965]: 2026-01-26 15:51:20.981 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.060 239969 DEBUG nova.compute.manager [None req-27861199-ea7d-4af6-b7fb-a266eea2a754 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.069 239969 DEBUG nova.storage.rbd_utils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] resizing rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:51:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[259358]: [NOTICE]   (259362) : haproxy version is 2.8.14-c23fe91
Jan 26 15:51:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[259358]: [NOTICE]   (259362) : path to executable is /usr/sbin/haproxy
Jan 26 15:51:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[259358]: [ALERT]    (259362) : Current worker (259364) exited with code 143 (Terminated)
Jan 26 15:51:21 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[259358]: [WARNING]  (259362) : All workers exited. Exiting... (0)
Jan 26 15:51:21 compute-0 systemd[1]: libpod-f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae.scope: Deactivated successfully.
Jan 26 15:51:21 compute-0 conmon[259358]: conmon f5f33422fc5720f84135 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae.scope/container/memory.events
Jan 26 15:51:21 compute-0 podman[259604]: 2026-01-26 15:51:21.089565518 +0000 UTC m=+0.048605031 container died f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 15:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae-userdata-shm.mount: Deactivated successfully.
Jan 26 15:51:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-765824c4a826b9cf0103cb5f1512e6550351f9d14c2af3a2f79948a86126989e-merged.mount: Deactivated successfully.
Jan 26 15:51:21 compute-0 podman[259604]: 2026-01-26 15:51:21.133822672 +0000 UTC m=+0.092862195 container cleanup f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 15:51:21 compute-0 systemd[1]: libpod-conmon-f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae.scope: Deactivated successfully.
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.181 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.182 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Ensure instance console log exists: /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.182 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.183 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.183 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.185 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Start _get_guest_xml network_info=[{"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.193 239969 WARNING nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 15:51:21 compute-0 podman[259670]: 2026-01-26 15:51:21.19782205 +0000 UTC m=+0.043688072 container remove f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.200 239969 DEBUG nova.virt.libvirt.host [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.201 239969 DEBUG nova.virt.libvirt.host [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.204 239969 DEBUG nova.virt.libvirt.host [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.204 239969 DEBUG nova.virt.libvirt.host [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:51:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:21.203 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06ccce27-e178-41f6-a241-b2ff43eb7d76]: (4, ('Mon Jan 26 03:51:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae)\nf5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae\nMon Jan 26 03:51:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (f5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae)\nf5f33422fc5720f8413538f8c4b724ba05dcac381c24c5db9f848c8bbd3650ae\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.204 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.205 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.205 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.205 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.205 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:51:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:21.205 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[12e7b1c2-e7ae-41f8-a8d6-9b74d73d39c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.205 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.205 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.206 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.206 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:51:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:21.206 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.206 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.206 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.206 239969 DEBUG nova.virt.hardware [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.206 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'vcpu_model' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:21 compute-0 kernel: tapf5e237c1-a0: left promiscuous mode
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.208 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:21.230 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[749256f6-2100-4bc4-a3e8-7a5e4de815d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:21.255 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b1d59158-2211-40ec-9c51-4c68f7b00442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:21.256 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e5763e37-cd99-43de-ae15-ffd990232183]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:21.271 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9dfa85e2-ead5-4c5a-b093-cc8228c711b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 416440, 'reachable_time': 33608, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259704, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:21.273 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:51:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:21.273 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[29afc4bd-8e67-4ff8-94f4-9ec10f606727]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:21 compute-0 systemd[1]: run-netns-ovnmeta\x2df5e237c1\x2da75f\x2d479a\x2d88ee\x2dc0f788914b11.mount: Deactivated successfully.
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.458 239969 DEBUG nova.compute.manager [req-07afe79e-6813-4154-a137-a6e01673aca4 req-22c4631d-33ef-490d-a71a-6c74f8d756be a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Received event network-vif-unplugged-faed3540-5f43-433f-9d2e-22c433f411c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.458 239969 DEBUG oslo_concurrency.lockutils [req-07afe79e-6813-4154-a137-a6e01673aca4 req-22c4631d-33ef-490d-a71a-6c74f8d756be a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.459 239969 DEBUG oslo_concurrency.lockutils [req-07afe79e-6813-4154-a137-a6e01673aca4 req-22c4631d-33ef-490d-a71a-6c74f8d756be a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.459 239969 DEBUG oslo_concurrency.lockutils [req-07afe79e-6813-4154-a137-a6e01673aca4 req-22c4631d-33ef-490d-a71a-6c74f8d756be a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.459 239969 DEBUG nova.compute.manager [req-07afe79e-6813-4154-a137-a6e01673aca4 req-22c4631d-33ef-490d-a71a-6c74f8d756be a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] No waiting events found dispatching network-vif-unplugged-faed3540-5f43-433f-9d2e-22c433f411c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.459 239969 WARNING nova.compute.manager [req-07afe79e-6813-4154-a137-a6e01673aca4 req-22c4631d-33ef-490d-a71a-6c74f8d756be a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Received unexpected event network-vif-unplugged-faed3540-5f43-433f-9d2e-22c433f411c2 for instance with vm_state suspended and task_state None.
Jan 26 15:51:21 compute-0 nova_compute[239965]: 2026-01-26 15:51:21.479 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 357 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 156 op/s
Jan 26 15:51:21 compute-0 ceph-mon[75140]: osdmap e165: 3 total, 3 up, 3 in
Jan 26 15:51:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2436646527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.064 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.087 239969 DEBUG nova.storage.rbd_utils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.091 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:22 compute-0 podman[259765]: 2026-01-26 15:51:22.373132062 +0000 UTC m=+0.060699617 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 15:51:22 compute-0 podman[259766]: 2026-01-26 15:51:22.430496547 +0000 UTC m=+0.111460701 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.449 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2188627823' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.688 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.690 239969 DEBUG nova.virt.libvirt.vif [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T15:50:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-784382261',display_name='tempest-ServersAdminTestJSON-server-784382261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-784382261',id=15,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:50:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-x2p37smd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:20Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=db37e7ff-8499-4664-b2ba-014e27b8b6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.691 239969 DEBUG nova.network.os_vif_util [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.692 239969 DEBUG nova.network.os_vif_util [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.696 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <uuid>db37e7ff-8499-4664-b2ba-014e27b8b6bb</uuid>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <name>instance-0000000f</name>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdminTestJSON-server-784382261</nova:name>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:51:21</nova:creationTime>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <nova:user uuid="b47ce75429ed4cd1ba21c0d50c2e552d">tempest-ServersAdminTestJSON-863857415-project-member</nova:user>
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <nova:project uuid="b3195eedf4a34fabaf019faaaad7eb71">tempest-ServersAdminTestJSON-863857415</nova:project>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <nova:port uuid="72aacd31-2789-4f27-a514-f80766db0d6e">
Jan 26 15:51:22 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <system>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <entry name="serial">db37e7ff-8499-4664-b2ba-014e27b8b6bb</entry>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <entry name="uuid">db37e7ff-8499-4664-b2ba-014e27b8b6bb</entry>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     </system>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <os>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   </os>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <features>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   </features>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk">
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config">
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:22 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:66:0d:aa"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <target dev="tap72aacd31-27"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/console.log" append="off"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <video>
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     </video>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:51:22 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:51:22 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:51:22 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:51:22 compute-0 nova_compute[239965]: </domain>
Jan 26 15:51:22 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.698 239969 DEBUG nova.compute.manager [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Preparing to wait for external event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.699 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.699 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.700 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.700 239969 DEBUG nova.virt.libvirt.vif [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T15:50:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-784382261',display_name='tempest-ServersAdminTestJSON-server-784382261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-784382261',id=15,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:50:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-x2p37smd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:20Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=db37e7ff-8499-4664-b2ba-014e27b8b6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.701 239969 DEBUG nova.network.os_vif_util [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.701 239969 DEBUG nova.network.os_vif_util [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.702 239969 DEBUG os_vif [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.702 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.703 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.704 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.706 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.706 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72aacd31-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.707 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72aacd31-27, col_values=(('external_ids', {'iface-id': '72aacd31-2789-4f27-a514-f80766db0d6e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:0d:aa', 'vm-uuid': 'db37e7ff-8499-4664-b2ba-014e27b8b6bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:22 compute-0 NetworkManager[48954]: <info>  [1769442682.7096] manager: (tap72aacd31-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.708 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.712 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.718 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.720 239969 INFO os_vif [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27')
Jan 26 15:51:22 compute-0 ceph-mon[75140]: pgmap v1082: 305 pgs: 305 active+clean; 357 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.4 MiB/s wr, 156 op/s
Jan 26 15:51:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2436646527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2188627823' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.840 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.841 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.842 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No VIF found with MAC fa:16:3e:66:0d:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.843 239969 INFO nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Using config drive
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.873 239969 DEBUG nova.storage.rbd_utils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:22 compute-0 nova_compute[239965]: 2026-01-26 15:51:22.993 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'ec2_ids' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:23 compute-0 nova_compute[239965]: 2026-01-26 15:51:23.422 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'keypairs' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 355 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 242 op/s
Jan 26 15:51:23 compute-0 nova_compute[239965]: 2026-01-26 15:51:23.792 239969 DEBUG nova.compute.manager [req-e9cc37fc-6929-4dc8-9cf1-4c1eab1653c8 req-119be7d9-72a5-4c61-987c-1a370d9b2db5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Received event network-vif-plugged-faed3540-5f43-433f-9d2e-22c433f411c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:23 compute-0 nova_compute[239965]: 2026-01-26 15:51:23.793 239969 DEBUG oslo_concurrency.lockutils [req-e9cc37fc-6929-4dc8-9cf1-4c1eab1653c8 req-119be7d9-72a5-4c61-987c-1a370d9b2db5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:23 compute-0 nova_compute[239965]: 2026-01-26 15:51:23.793 239969 DEBUG oslo_concurrency.lockutils [req-e9cc37fc-6929-4dc8-9cf1-4c1eab1653c8 req-119be7d9-72a5-4c61-987c-1a370d9b2db5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:23 compute-0 nova_compute[239965]: 2026-01-26 15:51:23.794 239969 DEBUG oslo_concurrency.lockutils [req-e9cc37fc-6929-4dc8-9cf1-4c1eab1653c8 req-119be7d9-72a5-4c61-987c-1a370d9b2db5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:23 compute-0 nova_compute[239965]: 2026-01-26 15:51:23.794 239969 DEBUG nova.compute.manager [req-e9cc37fc-6929-4dc8-9cf1-4c1eab1653c8 req-119be7d9-72a5-4c61-987c-1a370d9b2db5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] No waiting events found dispatching network-vif-plugged-faed3540-5f43-433f-9d2e-22c433f411c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:23 compute-0 nova_compute[239965]: 2026-01-26 15:51:23.794 239969 WARNING nova.compute.manager [req-e9cc37fc-6929-4dc8-9cf1-4c1eab1653c8 req-119be7d9-72a5-4c61-987c-1a370d9b2db5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Received unexpected event network-vif-plugged-faed3540-5f43-433f-9d2e-22c433f411c2 for instance with vm_state suspended and task_state image_snapshot_pending.
Jan 26 15:51:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Jan 26 15:51:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Jan 26 15:51:23 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Jan 26 15:51:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.264 239969 INFO nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Creating config drive at /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.270 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeaq9hd96 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.327 239969 DEBUG nova.compute.manager [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 15:51:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2884 syncs, 3.70 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4833 writes, 19K keys, 4833 commit groups, 1.0 writes per commit group, ingest: 21.30 MB, 0.04 MB/s
                                           Interval WAL: 4833 writes, 1884 syncs, 2.57 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.377 239969 INFO nova.compute.manager [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] instance snapshotting
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.378 239969 WARNING nova.compute.manager [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] trying to snapshot a non-running instance: (state: 4 expected: 1)
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.407 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeaq9hd96" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.432 239969 DEBUG nova.storage.rbd_utils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.435 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.578 239969 DEBUG oslo_concurrency.processutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.579 239969 INFO nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deleting local config drive /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config because it was imported into RBD.
Jan 26 15:51:24 compute-0 NetworkManager[48954]: <info>  [1769442684.6313] manager: (tap72aacd31-27): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Jan 26 15:51:24 compute-0 kernel: tap72aacd31-27: entered promiscuous mode
Jan 26 15:51:24 compute-0 ovn_controller[146046]: 2026-01-26T15:51:24Z|00107|binding|INFO|Claiming lport 72aacd31-2789-4f27-a514-f80766db0d6e for this chassis.
Jan 26 15:51:24 compute-0 ovn_controller[146046]: 2026-01-26T15:51:24Z|00108|binding|INFO|72aacd31-2789-4f27-a514-f80766db0d6e: Claiming fa:16:3e:66:0d:aa 10.100.0.11
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.635 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.643 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:0d:aa 10.100.0.11'], port_security=['fa:16:3e:66:0d:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'db37e7ff-8499-4664-b2ba-014e27b8b6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=72aacd31-2789-4f27-a514-f80766db0d6e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.644 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 72aacd31-2789-4f27-a514-f80766db0d6e in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc bound to our chassis
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.646 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.647 239969 INFO nova.virt.libvirt.driver [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Beginning cold snapshot process
Jan 26 15:51:24 compute-0 ovn_controller[146046]: 2026-01-26T15:51:24Z|00109|binding|INFO|Setting lport 72aacd31-2789-4f27-a514-f80766db0d6e ovn-installed in OVS
Jan 26 15:51:24 compute-0 ovn_controller[146046]: 2026-01-26T15:51:24Z|00110|binding|INFO|Setting lport 72aacd31-2789-4f27-a514-f80766db0d6e up in Southbound
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.664 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.665 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[208c2dd3-d003-4eb8-9dc0-027275629880]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:24 compute-0 systemd-udevd[259888]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:51:24 compute-0 systemd-machined[208061]: New machine qemu-23-instance-0000000f.
Jan 26 15:51:24 compute-0 NetworkManager[48954]: <info>  [1769442684.6829] device (tap72aacd31-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:51:24 compute-0 NetworkManager[48954]: <info>  [1769442684.6834] device (tap72aacd31-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:51:24 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-0000000f.
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.705 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[49807e35-46b6-4487-9356-1213099ce3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.709 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8ded9517-4cf1-43a7-b023-f9022b1faf34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.744 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbd1ee1-2c3e-47a1-9373-6a6c5dc32992]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.762 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[df5342a6-64bc-4a30-b221-01ce654a66aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 14, 'rx_bytes': 826, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 14, 'rx_bytes': 826, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 38822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259901, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.779 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd055cd-b309-4618-922d-1b46a02a58fd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259902, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259902, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.781 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.782 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.787 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.788 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.789 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.789 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:24.790 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.849 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:24 compute-0 ceph-mon[75140]: pgmap v1083: 305 pgs: 305 active+clean; 355 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.8 MiB/s wr, 242 op/s
Jan 26 15:51:24 compute-0 ceph-mon[75140]: osdmap e166: 3 total, 3 up, 3 in
Jan 26 15:51:24 compute-0 nova_compute[239965]: 2026-01-26 15:51:24.855 239969 DEBUG nova.virt.libvirt.imagebackend [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.015 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for db37e7ff-8499-4664-b2ba-014e27b8b6bb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.015 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442685.0146925, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.015 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Started (Lifecycle Event)
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.130 239969 DEBUG nova.storage.rbd_utils [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(33990bf996f04667ad8f7c2e775c540c) on rbd image(c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.161 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.165 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442685.0147877, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.166 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Paused (Lifecycle Event)
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.184 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.189 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.213 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.299 239969 DEBUG nova.compute.manager [req-2824014c-9142-4f20-954a-1b81d1e7e57a req-7a9c97ce-ac19-4b9a-a2cf-d6302b0fc054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.299 239969 DEBUG oslo_concurrency.lockutils [req-2824014c-9142-4f20-954a-1b81d1e7e57a req-7a9c97ce-ac19-4b9a-a2cf-d6302b0fc054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.299 239969 DEBUG oslo_concurrency.lockutils [req-2824014c-9142-4f20-954a-1b81d1e7e57a req-7a9c97ce-ac19-4b9a-a2cf-d6302b0fc054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.300 239969 DEBUG oslo_concurrency.lockutils [req-2824014c-9142-4f20-954a-1b81d1e7e57a req-7a9c97ce-ac19-4b9a-a2cf-d6302b0fc054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.300 239969 DEBUG nova.compute.manager [req-2824014c-9142-4f20-954a-1b81d1e7e57a req-7a9c97ce-ac19-4b9a-a2cf-d6302b0fc054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Processing event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.300 239969 DEBUG nova.compute.manager [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.305 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442685.3053012, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.306 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Resumed (Lifecycle Event)
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.311 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.316 239969 INFO nova.virt.libvirt.driver [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance spawned successfully.
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.316 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.339 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.343 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.360 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.361 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.362 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.362 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.363 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.364 239969 DEBUG nova.virt.libvirt.driver [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.371 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.447 239969 DEBUG nova.compute.manager [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.508 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.508 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.509 239969 DEBUG nova.objects.instance [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.583 239969 DEBUG oslo_concurrency.lockutils [None req-3516c87b-e709-4e03-9022-2bdf06ae33c9 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.074s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 355 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 209 op/s
Jan 26 15:51:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Jan 26 15:51:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Jan 26 15:51:25 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Jan 26 15:51:25 compute-0 nova_compute[239965]: 2026-01-26 15:51:25.905 239969 DEBUG nova.storage.rbd_utils [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] cloning vms/c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk@33990bf996f04667ad8f7c2e775c540c to images/d456f4d9-22c3-4070-a269-0721cf5caae2 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.006 239969 DEBUG nova.storage.rbd_utils [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] flattening images/d456f4d9-22c3-4070-a269-0721cf5caae2 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:51:26 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.242 239969 DEBUG nova.storage.rbd_utils [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] removing snapshot(33990bf996f04667ad8f7c2e775c540c) on rbd image(c57c5c04-cfa8-4562-9f6d-a7211012e3bf_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.820 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.820 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.820 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.820 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.821 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:26 compute-0 ceph-mon[75140]: pgmap v1085: 305 pgs: 305 active+clean; 355 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.2 MiB/s wr, 209 op/s
Jan 26 15:51:26 compute-0 ceph-mon[75140]: osdmap e167: 3 total, 3 up, 3 in
Jan 26 15:51:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Jan 26 15:51:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Jan 26 15:51:26 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Jan 26 15:51:26 compute-0 nova_compute[239965]: 2026-01-26 15:51:26.898 239969 DEBUG nova.storage.rbd_utils [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(snap) on rbd image(d456f4d9-22c3-4070-a269-0721cf5caae2) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:51:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3932815735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.431 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 385 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 297 op/s
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.757 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.819 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.819 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.823 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.823 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.826 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.826 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.830 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.830 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.832 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 nova_compute[239965]: 2026-01-26 15:51:27.833 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:51:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Jan 26 15:51:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Jan 26 15:51:27 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Jan 26 15:51:27 compute-0 ceph-mon[75140]: osdmap e168: 3 total, 3 up, 3 in
Jan 26 15:51:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3932815735' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.060 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.061 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3726MB free_disk=59.818865419365466GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.061 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.061 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.231 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance db37e7ff-8499-4664-b2ba-014e27b8b6bb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.231 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 20e8eee5-8c27-4d2b-b132-afa685238f37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.231 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance dcf4bd4f-ec23-4c13-b04d-b696bf93f350 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.231 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance d82c98a2-a53f-4e4f-a132-042f797f2c91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.232 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance c57c5c04-cfa8-4562-9f6d-a7211012e3bf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.232 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.232 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.268 239969 DEBUG nova.compute.manager [req-7c6f7b9f-862a-4ae4-a1e3-82579ed5523b req-0f075291-1e5f-4bd5-83d8-f07529f5e648 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.269 239969 DEBUG oslo_concurrency.lockutils [req-7c6f7b9f-862a-4ae4-a1e3-82579ed5523b req-0f075291-1e5f-4bd5-83d8-f07529f5e648 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.269 239969 DEBUG oslo_concurrency.lockutils [req-7c6f7b9f-862a-4ae4-a1e3-82579ed5523b req-0f075291-1e5f-4bd5-83d8-f07529f5e648 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.269 239969 DEBUG oslo_concurrency.lockutils [req-7c6f7b9f-862a-4ae4-a1e3-82579ed5523b req-0f075291-1e5f-4bd5-83d8-f07529f5e648 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.269 239969 DEBUG nova.compute.manager [req-7c6f7b9f-862a-4ae4-a1e3-82579ed5523b req-0f075291-1e5f-4bd5-83d8-f07529f5e648 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] No waiting events found dispatching network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.270 239969 WARNING nova.compute.manager [req-7c6f7b9f-862a-4ae4-a1e3-82579ed5523b req-0f075291-1e5f-4bd5-83d8-f07529f5e648 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received unexpected event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e for instance with vm_state active and task_state None.
Jan 26 15:51:28 compute-0 nova_compute[239965]: 2026-01-26 15:51:28.368 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:51:28
Jan 26 15:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', '.mgr', 'vms', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'volumes']
Jan 26 15:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:51:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Jan 26 15:51:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Jan 26 15:51:28 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Jan 26 15:51:28 compute-0 ceph-mon[75140]: pgmap v1088: 305 pgs: 305 active+clean; 385 MiB data, 475 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.3 MiB/s wr, 297 op/s
Jan 26 15:51:28 compute-0 ceph-mon[75140]: osdmap e169: 3 total, 3 up, 3 in
Jan 26 15:51:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122916534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.009 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.019 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.023 239969 INFO nova.compute.manager [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Rebuilding instance
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.036 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.061 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.061 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.312 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'trusted_certs' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.329 239969 DEBUG nova.compute.manager [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.387 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'pci_requests' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.538 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'pci_devices' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.541 239969 INFO nova.virt.libvirt.driver [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Snapshot image upload complete
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.541 239969 INFO nova.compute.manager [None req-c197b8d1-6f2a-4f4d-b3db-0473bc90bc85 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Took 5.16 seconds to snapshot the instance on the hypervisor.
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.640 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'resources' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 401 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 5.7 MiB/s wr, 308 op/s
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.688 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Acquiring lock "5c95a4fa-a47e-419d-ad9f-0714964644b4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.689 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "5c95a4fa-a47e-419d-ad9f-0714964644b4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.700 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'migration_context' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.710 239969 DEBUG nova.compute.manager [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:51:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 15:51:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.4 total, 600.0 interval
                                           Cumulative writes: 12K writes, 49K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 3731 syncs, 3.42 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5505 writes, 19K keys, 5505 commit groups, 1.0 writes per commit group, ingest: 21.17 MB, 0.04 MB/s
                                           Interval WAL: 5505 writes, 2260 syncs, 2.44 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.730 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.733 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.775 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.775 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.783 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.783 239969 INFO nova.compute.claims [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.821 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:29 compute-0 ceph-mon[75140]: osdmap e170: 3 total, 3 up, 3 in
Jan 26 15:51:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/122916534' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:29 compute-0 nova_compute[239965]: 2026-01-26 15:51:29.977 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:51:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:51:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171613473' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.563 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.569 239969 DEBUG nova.compute.provider_tree [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.584 239969 DEBUG nova.scheduler.client.report [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.611 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.611 239969 DEBUG nova.compute.manager [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.660 239969 DEBUG nova.compute.manager [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.676 239969 INFO nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.695 239969 DEBUG nova.compute.manager [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.781 239969 DEBUG nova.compute.manager [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.783 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.783 239969 INFO nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Creating image(s)
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.810 239969 DEBUG nova.storage.rbd_utils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] rbd image 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.831 239969 DEBUG nova.storage.rbd_utils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] rbd image 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.860 239969 DEBUG nova.storage.rbd_utils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] rbd image 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.863 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Jan 26 15:51:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Jan 26 15:51:30 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 26 15:51:30 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Jan 26 15:51:30 compute-0 ceph-mon[75140]: pgmap v1091: 305 pgs: 305 active+clean; 401 MiB data, 481 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 5.7 MiB/s wr, 308 op/s
Jan 26 15:51:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2171613473' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.944 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.946 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.946 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.947 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.970 239969 DEBUG nova.storage.rbd_utils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] rbd image 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:30 compute-0 nova_compute[239965]: 2026-01-26 15:51:30.974 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.062 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.062 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.102 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.102 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.238 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.264s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.292 239969 DEBUG nova.storage.rbd_utils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] resizing rbd image 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.319 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-db37e7ff-8499-4664-b2ba-014e27b8b6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.320 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-db37e7ff-8499-4664-b2ba-014e27b8b6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.320 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.370 239969 DEBUG nova.objects.instance [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lazy-loading 'migration_context' on Instance uuid 5c95a4fa-a47e-419d-ad9f-0714964644b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.387 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.387 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Ensure instance console log exists: /var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.388 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.388 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.388 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.389 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.393 239969 WARNING nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.397 239969 DEBUG nova.virt.libvirt.host [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.398 239969 DEBUG nova.virt.libvirt.host [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.401 239969 DEBUG nova.virt.libvirt.host [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.402 239969 DEBUG nova.virt.libvirt.host [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.402 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.402 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.403 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.403 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.403 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.403 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.404 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.404 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.404 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.404 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.405 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.405 239969 DEBUG nova.virt.hardware [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.407 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 418 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 9.4 MiB/s rd, 4.5 MiB/s wr, 399 op/s
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.780 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "131ac17b-4bc0-4c20-b861-7102599a66d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.780 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "131ac17b-4bc0-4c20-b861-7102599a66d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.809 239969 DEBUG nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.905 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.906 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.915 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.915 239969 INFO nova.compute.claims [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.919 239969 DEBUG oslo_concurrency.lockutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.919 239969 DEBUG oslo_concurrency.lockutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.919 239969 DEBUG oslo_concurrency.lockutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.919 239969 DEBUG oslo_concurrency.lockutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.920 239969 DEBUG oslo_concurrency.lockutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.920 239969 INFO nova.compute.manager [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Terminating instance
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.921 239969 DEBUG nova.compute.manager [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.927 239969 INFO nova.virt.libvirt.driver [-] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Instance destroyed successfully.
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.927 239969 DEBUG nova.objects.instance [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'resources' on Instance uuid c57c5c04-cfa8-4562-9f6d-a7211012e3bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.943 239969 DEBUG nova.virt.libvirt.vif [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:51:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2099021934',display_name='tempest-ImagesTestJSON-server-2099021934',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2099021934',id=20,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:51:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-c6seime1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:51:29Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=c57c5c04-cfa8-4562-9f6d-a7211012e3bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.943 239969 DEBUG nova.network.os_vif_util [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "faed3540-5f43-433f-9d2e-22c433f411c2", "address": "fa:16:3e:62:88:2d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfaed3540-5f", "ovs_interfaceid": "faed3540-5f43-433f-9d2e-22c433f411c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.944 239969 DEBUG nova.network.os_vif_util [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:2d,bridge_name='br-int',has_traffic_filtering=True,id=faed3540-5f43-433f-9d2e-22c433f411c2,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaed3540-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.945 239969 DEBUG os_vif [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:2d,bridge_name='br-int',has_traffic_filtering=True,id=faed3540-5f43-433f-9d2e-22c433f411c2,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaed3540-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.946 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.947 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfaed3540-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.948 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.950 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:31 compute-0 nova_compute[239965]: 2026-01-26 15:51:31.952 239969 INFO os_vif [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:88:2d,bridge_name='br-int',has_traffic_filtering=True,id=faed3540-5f43-433f-9d2e-22c433f411c2,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfaed3540-5f')
Jan 26 15:51:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2201108881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.045 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.159 239969 DEBUG nova.storage.rbd_utils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] rbd image 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.164 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.392 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:32 compute-0 ceph-mon[75140]: osdmap e171: 3 total, 3 up, 3 in
Jan 26 15:51:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395713649' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.737 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.739 239969 DEBUG nova.objects.instance [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c95a4fa-a47e-419d-ad9f-0714964644b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.754 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <uuid>5c95a4fa-a47e-419d-ad9f-0714964644b4</uuid>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <name>instance-00000015</name>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerDiagnosticsV248Test-server-1991052131</nova:name>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:51:31</nova:creationTime>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <nova:user uuid="a91032fde2f84c048d3d60005453eb86">tempest-ServerDiagnosticsV248Test-1174595602-project-member</nova:user>
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <nova:project uuid="8aa78698423142469e6f67832b8e9e93">tempest-ServerDiagnosticsV248Test-1174595602</nova:project>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <system>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <entry name="serial">5c95a4fa-a47e-419d-ad9f-0714964644b4</entry>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <entry name="uuid">5c95a4fa-a47e-419d-ad9f-0714964644b4</entry>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     </system>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <os>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   </os>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <features>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   </features>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5c95a4fa-a47e-419d-ad9f-0714964644b4_disk">
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5c95a4fa-a47e-419d-ad9f-0714964644b4_disk.config">
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:32 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4/console.log" append="off"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <video>
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     </video>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:51:32 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:51:32 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:51:32 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:51:32 compute-0 nova_compute[239965]: </domain>
Jan 26 15:51:32 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.821 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.821 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.822 239969 INFO nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Using config drive
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.848 239969 DEBUG nova.storage.rbd_utils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] rbd image 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/750462548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.994 239969 INFO nova.virt.libvirt.driver [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Deleting instance files /var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf_del
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.995 239969 INFO nova.virt.libvirt.driver [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Deletion of /var/lib/nova/instances/c57c5c04-cfa8-4562-9f6d-a7211012e3bf_del complete
Jan 26 15:51:32 compute-0 nova_compute[239965]: 2026-01-26 15:51:32.998 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.003 239969 DEBUG nova.compute.provider_tree [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.019 239969 DEBUG nova.scheduler.client.report [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.048 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.049 239969 DEBUG nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.072 239969 INFO nova.compute.manager [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Took 1.15 seconds to destroy the instance on the hypervisor.
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.073 239969 DEBUG oslo.service.loopingcall [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.073 239969 DEBUG nova.compute.manager [-] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.073 239969 DEBUG nova.network.neutron [-] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.103 239969 DEBUG nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.104 239969 DEBUG nova.network.neutron [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.118 239969 INFO nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.137 239969 DEBUG nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.225 239969 DEBUG nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.227 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.227 239969 INFO nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Creating image(s)
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.249 239969 DEBUG nova.storage.rbd_utils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image 131ac17b-4bc0-4c20-b861-7102599a66d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.270 239969 DEBUG nova.storage.rbd_utils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image 131ac17b-4bc0-4c20-b861-7102599a66d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.293 239969 DEBUG nova.storage.rbd_utils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image 131ac17b-4bc0-4c20-b861-7102599a66d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.297 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.364 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.365 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.366 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.366 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.388 239969 DEBUG nova.storage.rbd_utils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image 131ac17b-4bc0-4c20-b861-7102599a66d3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.391 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 131ac17b-4bc0-4c20-b861-7102599a66d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:33 compute-0 ceph-mon[75140]: pgmap v1093: 305 pgs: 305 active+clean; 418 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 9.4 MiB/s rd, 4.5 MiB/s wr, 399 op/s
Jan 26 15:51:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2201108881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1395713649' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/750462548' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.607 239969 INFO nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Creating config drive at /var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4/disk.config
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.613 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp35kauh75 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 411 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.5 MiB/s wr, 292 op/s
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.671 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Updating instance_info_cache with network_info: [{"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.675 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 131ac17b-4bc0-4c20-b861-7102599a66d3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.730 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-db37e7ff-8499-4664-b2ba-014e27b8b6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.730 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.731 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.731 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.731 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.731 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.735 239969 DEBUG nova.storage.rbd_utils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] resizing rbd image 131ac17b-4bc0-4c20-b861-7102599a66d3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.761 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp35kauh75" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.786 239969 DEBUG nova.storage.rbd_utils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] rbd image 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.791 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4/disk.config 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.882 239969 DEBUG nova.objects.instance [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lazy-loading 'migration_context' on Instance uuid 131ac17b-4bc0-4c20-b861-7102599a66d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.898 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.898 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Ensure instance console log exists: /var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.899 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.899 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.899 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.947 239969 DEBUG oslo_concurrency.processutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4/disk.config 5c95a4fa-a47e-419d-ad9f-0714964644b4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:33 compute-0 nova_compute[239965]: 2026-01-26 15:51:33.948 239969 INFO nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Deleting local config drive /var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4/disk.config because it was imported into RBD.
Jan 26 15:51:34 compute-0 systemd-machined[208061]: New machine qemu-24-instance-00000015.
Jan 26 15:51:34 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000015.
Jan 26 15:51:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Jan 26 15:51:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Jan 26 15:51:34 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.386 239969 DEBUG nova.network.neutron [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.387 239969 DEBUG nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.388 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.392 239969 WARNING nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.401 239969 DEBUG nova.virt.libvirt.host [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.404 239969 DEBUG nova.virt.libvirt.host [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.411 239969 DEBUG nova.virt.libvirt.host [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.412 239969 DEBUG nova.virt.libvirt.host [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.412 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.412 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.413 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.413 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.413 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.413 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.414 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.414 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.414 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.414 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.414 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.414 239969 DEBUG nova.virt.hardware [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:51:34 compute-0 ceph-mon[75140]: pgmap v1094: 305 pgs: 305 active+clean; 411 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.5 MiB/s wr, 292 op/s
Jan 26 15:51:34 compute-0 ceph-mon[75140]: osdmap e172: 3 total, 3 up, 3 in
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.417 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.444 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442694.4354002, 5c95a4fa-a47e-419d-ad9f-0714964644b4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.445 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] VM Resumed (Lifecycle Event)
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.448 239969 DEBUG nova.compute.manager [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.449 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.454 239969 INFO nova.virt.libvirt.driver [-] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Instance spawned successfully.
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.454 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.619 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.624 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 15:51:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 9531 writes, 39K keys, 9531 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 9531 writes, 2486 syncs, 3.83 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3785 writes, 14K keys, 3785 commit groups, 1.0 writes per commit group, ingest: 14.28 MB, 0.02 MB/s
                                           Interval WAL: 3785 writes, 1560 syncs, 2.43 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.798 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.798 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442694.436044, 5c95a4fa-a47e-419d-ad9f-0714964644b4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.798 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] VM Started (Lifecycle Event)
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.803 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.803 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.803 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.804 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.804 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.805 239969 DEBUG nova.virt.libvirt.driver [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.823 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.837 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.840 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.876 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.885 239969 INFO nova.compute.manager [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Took 4.10 seconds to spawn the instance on the hypervisor.
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.885 239969 DEBUG nova.compute.manager [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.947 239969 INFO nova.compute.manager [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Took 5.19 seconds to build instance.
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.967 239969 DEBUG oslo_concurrency.lockutils [None req-43914a15-401f-45d6-9b07-57b662e7d0db a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "5c95a4fa-a47e-419d-ad9f-0714964644b4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/650592153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:34 compute-0 nova_compute[239965]: 2026-01-26 15:51:34.990 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.014 239969 DEBUG nova.storage.rbd_utils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image 131ac17b-4bc0-4c20-b861-7102599a66d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.020 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.146 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "ad8f4f0d-55dd-416f-9164-9c0a4f024ee9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.146 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "ad8f4f0d-55dd-416f-9164-9c0a4f024ee9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.165 239969 DEBUG nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.238 239969 DEBUG nova.network.neutron [-] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.273 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.274 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.281 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.282 239969 INFO nova.compute.claims [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:51:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/650592153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.577 239969 INFO nova.compute.manager [-] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Took 2.50 seconds to deallocate network for instance.
Jan 26 15:51:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467459268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.621 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.622 239969 DEBUG nova.objects.instance [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lazy-loading 'pci_devices' on Instance uuid 131ac17b-4bc0-4c20-b861-7102599a66d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 411 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 213 op/s
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.697 239969 DEBUG oslo_concurrency.lockutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.706 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <uuid>131ac17b-4bc0-4c20-b861-7102599a66d3</uuid>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <name>instance-00000016</name>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <nova:name>tempest-ListImageFiltersTestJSON-server-344360342</nova:name>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:51:34</nova:creationTime>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <nova:user uuid="d98e5783c5584ce8aa184772b7d2fed1">tempest-ListImageFiltersTestJSON-248880733-project-member</nova:user>
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <nova:project uuid="b3a3b548cf6d4d5fbca5f44d4dfdefde">tempest-ListImageFiltersTestJSON-248880733</nova:project>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <system>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <entry name="serial">131ac17b-4bc0-4c20-b861-7102599a66d3</entry>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <entry name="uuid">131ac17b-4bc0-4c20-b861-7102599a66d3</entry>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     </system>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <os>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   </os>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <features>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   </features>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/131ac17b-4bc0-4c20-b861-7102599a66d3_disk">
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/131ac17b-4bc0-4c20-b861-7102599a66d3_disk.config">
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:35 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3/console.log" append="off"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <video>
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     </video>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:51:35 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:51:35 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:51:35 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:51:35 compute-0 nova_compute[239965]: </domain>
Jan 26 15:51:35 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.786 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.786 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.787 239969 INFO nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Using config drive
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.807 239969 DEBUG nova.storage.rbd_utils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image 131ac17b-4bc0-4c20-b861-7102599a66d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.825 239969 DEBUG nova.compute.manager [req-a1c05dda-f7d8-42da-bdfb-185f075b37d7 req-f163ec3b-38c6-4230-8f94-34784cfe4bab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Received event network-vif-deleted-faed3540-5f43-433f-9d2e-22c433f411c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:35 compute-0 nova_compute[239965]: 2026-01-26 15:51:35.949 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.061 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442681.0242224, c57c5c04-cfa8-4562-9f6d-a7211012e3bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.062 239969 INFO nova.compute.manager [-] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] VM Stopped (Lifecycle Event)
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.089 239969 DEBUG nova.compute.manager [None req-fe23c775-117d-4208-bde3-6ffddccf4ae6 - - - - - -] [instance: c57c5c04-cfa8-4562-9f6d-a7211012e3bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.229 239969 INFO nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Creating config drive at /var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3/disk.config
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.236 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf4z14jc2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.370 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf4z14jc2" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.392 239969 DEBUG nova.storage.rbd_utils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image 131ac17b-4bc0-4c20-b861-7102599a66d3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.395 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3/disk.config 131ac17b-4bc0-4c20-b861-7102599a66d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/467459268' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:36 compute-0 ceph-mon[75140]: pgmap v1096: 305 pgs: 305 active+clean; 411 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.2 MiB/s wr, 213 op/s
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.493 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "342bf84f-0f28-4939-b844-ed14ee42c01c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.494 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.511 239969 DEBUG nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:51:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4117971053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.520 239969 DEBUG oslo_concurrency.processutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3/disk.config 131ac17b-4bc0-4c20-b861-7102599a66d3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.521 239969 INFO nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Deleting local config drive /var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3/disk.config because it was imported into RBD.
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.542 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.546 239969 DEBUG nova.compute.manager [None req-be10258c-e0a7-4acf-81e6-2fba8bfcd2fa ff510260fcce4854a6921f494e3c5f05 6009ecd9d1a54de59975e92ae25c7550 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.560 239969 DEBUG nova.compute.provider_tree [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.562 239969 INFO nova.compute.manager [None req-be10258c-e0a7-4acf-81e6-2fba8bfcd2fa ff510260fcce4854a6921f494e3c5f05 6009ecd9d1a54de59975e92ae25c7550 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Retrieving diagnostics
Jan 26 15:51:36 compute-0 systemd-machined[208061]: New machine qemu-25-instance-00000016.
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.570 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.577 239969 DEBUG nova.scheduler.client.report [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:36 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000016.
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.595 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.321s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.595 239969 DEBUG nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.598 239969 DEBUG oslo_concurrency.lockutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.642 239969 DEBUG nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.642 239969 DEBUG nova.network.neutron [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.663 239969 INFO nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.687 239969 DEBUG nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.781 239969 DEBUG oslo_concurrency.processutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.809 239969 DEBUG nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.811 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.811 239969 INFO nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Creating image(s)
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.831 239969 DEBUG nova.storage.rbd_utils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.854 239969 DEBUG nova.storage.rbd_utils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.879 239969 DEBUG nova.storage.rbd_utils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.883 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.911 239969 DEBUG nova.network.neutron [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.912 239969 DEBUG nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.950 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.955 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.955 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.956 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.956 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.974 239969 DEBUG nova.storage.rbd_utils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:36 compute-0 nova_compute[239965]: 2026-01-26 15:51:36.977 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.019 239969 DEBUG nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.019 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.020 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442697.0182836, 131ac17b-4bc0-4c20-b861-7102599a66d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.020 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] VM Resumed (Lifecycle Event)
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.024 239969 INFO nova.virt.libvirt.driver [-] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Instance spawned successfully.
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.025 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.044 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.048 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.049 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.049 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.049 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.050 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.050 239969 DEBUG nova.virt.libvirt.driver [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.056 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.084 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.085 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442697.018473, 131ac17b-4bc0-4c20-b861-7102599a66d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.085 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] VM Started (Lifecycle Event)
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.109 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.115 239969 INFO nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Took 3.89 seconds to spawn the instance on the hypervisor.
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.117 239969 DEBUG nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.122 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.155 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.187 239969 INFO nova.compute.manager [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Took 5.31 seconds to build instance.
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.206 239969 DEBUG oslo_concurrency.lockutils [None req-f4cb0e73-6075-4eb6-b0a6-1147121c43e2 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "131ac17b-4bc0-4c20-b861-7102599a66d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.258 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.340 239969 DEBUG nova.storage.rbd_utils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] resizing rbd image ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:51:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363675351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.424 239969 DEBUG oslo_concurrency.processutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.429 239969 DEBUG nova.compute.provider_tree [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4117971053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1363675351' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.468 239969 DEBUG nova.scheduler.client.report [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.477 239969 DEBUG nova.objects.instance [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lazy-loading 'migration_context' on Instance uuid ad8f4f0d-55dd-416f-9164-9c0a4f024ee9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.489 239969 DEBUG oslo_concurrency.lockutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.892s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.493 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.923s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.493 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.494 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Ensure instance console log exists: /var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.494 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.494 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.495 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.496 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.501 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.502 239969 INFO nova.compute.claims [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.506 239969 WARNING nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.510 239969 DEBUG nova.virt.libvirt.host [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.511 239969 DEBUG nova.virt.libvirt.host [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.514 239969 DEBUG nova.virt.libvirt.host [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.514 239969 DEBUG nova.virt.libvirt.host [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.514 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.515 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.515 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.515 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.515 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.516 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.516 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.516 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.516 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.517 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.517 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.517 239969 DEBUG nova.virt.hardware [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.520 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.544 239969 INFO nova.scheduler.client.report [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Deleted allocations for instance c57c5c04-cfa8-4562-9f6d-a7211012e3bf
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.610 239969 DEBUG oslo_concurrency.lockutils [None req-fd16d48e-7936-47d0-b552-786174fe664f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "c57c5c04-cfa8-4562-9f6d-a7211012e3bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 428 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 379 op/s
Jan 26 15:51:37 compute-0 nova_compute[239965]: 2026-01-26 15:51:37.722 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305316406' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.140 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.167 239969 DEBUG nova.storage.rbd_utils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.173 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3993310747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:38 compute-0 ceph-mon[75140]: pgmap v1097: 305 pgs: 305 active+clean; 428 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 379 op/s
Jan 26 15:51:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2305316406' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3993310747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.469 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.475 239969 DEBUG nova.compute.provider_tree [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.495 239969 DEBUG nova.scheduler.client.report [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.522 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.523 239969 DEBUG nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.579 239969 DEBUG nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.580 239969 DEBUG nova.network.neutron [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.611 239969 INFO nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.640 239969 DEBUG nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.778 239969 DEBUG nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.779 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.779 239969 INFO nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Creating image(s)
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.805 239969 DEBUG nova.storage.rbd_utils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 342bf84f-0f28-4939-b844-ed14ee42c01c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1722443911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.837 239969 DEBUG nova.storage.rbd_utils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 342bf84f-0f28-4939-b844-ed14ee42c01c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.865 239969 DEBUG nova.storage.rbd_utils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 342bf84f-0f28-4939-b844-ed14ee42c01c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.869 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.902 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.729s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.904 239969 DEBUG nova.objects.instance [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lazy-loading 'pci_devices' on Instance uuid ad8f4f0d-55dd-416f-9164-9c0a4f024ee9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.956 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <uuid>ad8f4f0d-55dd-416f-9164-9c0a4f024ee9</uuid>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <name>instance-00000017</name>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <nova:name>tempest-ListImageFiltersTestJSON-server-356363517</nova:name>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:51:37</nova:creationTime>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <nova:user uuid="d98e5783c5584ce8aa184772b7d2fed1">tempest-ListImageFiltersTestJSON-248880733-project-member</nova:user>
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <nova:project uuid="b3a3b548cf6d4d5fbca5f44d4dfdefde">tempest-ListImageFiltersTestJSON-248880733</nova:project>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <system>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <entry name="serial">ad8f4f0d-55dd-416f-9164-9c0a4f024ee9</entry>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <entry name="uuid">ad8f4f0d-55dd-416f-9164-9c0a4f024ee9</entry>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     </system>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <os>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   </os>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <features>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   </features>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk">
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk.config">
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:38 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9/console.log" append="off"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <video>
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     </video>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:51:38 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:51:38 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:51:38 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:51:38 compute-0 nova_compute[239965]: </domain>
Jan 26 15:51:38 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.988 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.990 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.991 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:38 compute-0 nova_compute[239965]: 2026-01-26 15:51:38.991 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.018 239969 DEBUG nova.storage.rbd_utils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 342bf84f-0f28-4939-b844-ed14ee42c01c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.022 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 342bf84f-0f28-4939-b844-ed14ee42c01c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.078 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.080 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.081 239969 INFO nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Using config drive
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.106 239969 DEBUG nova.storage.rbd_utils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e172 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Jan 26 15:51:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Jan 26 15:51:39 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Jan 26 15:51:39 compute-0 ovn_controller[146046]: 2026-01-26T15:51:39Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:0d:aa 10.100.0.11
Jan 26 15:51:39 compute-0 ovn_controller[146046]: 2026-01-26T15:51:39Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:0d:aa 10.100.0.11
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.334 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 342bf84f-0f28-4939-b844-ed14ee42c01c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.389 239969 DEBUG nova.storage.rbd_utils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] resizing rbd image 342bf84f-0f28-4939-b844-ed14ee42c01c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.418 239969 INFO nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Creating config drive at /var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9/disk.config
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.425 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy4omy1ar execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.457 239969 DEBUG nova.policy [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3322c44e378e415bb486ef558314a67c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ddba5162f533447bba0159cafaa565bf', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:51:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1722443911' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:39 compute-0 ceph-mon[75140]: osdmap e173: 3 total, 3 up, 3 in
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.514 239969 DEBUG nova.objects.instance [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'migration_context' on Instance uuid 342bf84f-0f28-4939-b844-ed14ee42c01c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.560 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy4omy1ar" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.587 239969 DEBUG nova.storage.rbd_utils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] rbd image ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.592 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9/disk.config ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 428 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.3 MiB/s wr, 284 op/s
Jan 26 15:51:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.700 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.701 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Ensure instance console log exists: /var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.702 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.702 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.703 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.720 239969 DEBUG oslo_concurrency.processutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9/disk.config ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.721 239969 INFO nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Deleting local config drive /var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9/disk.config because it was imported into RBD.
Jan 26 15:51:39 compute-0 systemd-machined[208061]: New machine qemu-26-instance-00000017.
Jan 26 15:51:39 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-00000017.
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.825 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:39 compute-0 nova_compute[239965]: 2026-01-26 15:51:39.952 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 15:51:40 compute-0 ceph-mon[75140]: pgmap v1099: 305 pgs: 305 active+clean; 428 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.3 MiB/s wr, 284 op/s
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.604 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442700.6039498, ad8f4f0d-55dd-416f-9164-9c0a4f024ee9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.604 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] VM Resumed (Lifecycle Event)
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.607 239969 DEBUG nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.607 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.610 239969 INFO nova.virt.libvirt.driver [-] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Instance spawned successfully.
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.610 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.943 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.949 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.953 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.953 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.954 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.954 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.955 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.955 239969 DEBUG nova.virt.libvirt.driver [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.972 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.973 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442700.6080108, ad8f4f0d-55dd-416f-9164-9c0a4f024ee9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:40 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.973 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] VM Started (Lifecycle Event)
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:40.999 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.005 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.032 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.041 239969 INFO nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Took 4.23 seconds to spawn the instance on the hypervisor.
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.042 239969 DEBUG nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.113 239969 DEBUG nova.network.neutron [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Successfully created port: 87baa7c8-584c-40fb-a5ee-7a18908da37f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.123 239969 INFO nova.compute.manager [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Took 5.88 seconds to build instance.
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.143 239969 DEBUG oslo_concurrency.lockutils [None req-b7b729ce-2b9c-4018-b153-b80ed59d9931 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "ad8f4f0d-55dd-416f-9164-9c0a4f024ee9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 461 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.8 MiB/s wr, 289 op/s
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.701 239969 DEBUG nova.network.neutron [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Successfully updated port: 87baa7c8-584c-40fb-a5ee-7a18908da37f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.721 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "refresh_cache-342bf84f-0f28-4939-b844-ed14ee42c01c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.721 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquired lock "refresh_cache-342bf84f-0f28-4939-b844-ed14ee42c01c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.721 239969 DEBUG nova.network.neutron [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.798 239969 DEBUG nova.compute.manager [req-51e62689-5908-4190-b43f-fd3da47a6e92 req-ecba628d-2a9a-4274-aea2-b9b32a733805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Received event network-changed-87baa7c8-584c-40fb-a5ee-7a18908da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.799 239969 DEBUG nova.compute.manager [req-51e62689-5908-4190-b43f-fd3da47a6e92 req-ecba628d-2a9a-4274-aea2-b9b32a733805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Refreshing instance network info cache due to event network-changed-87baa7c8-584c-40fb-a5ee-7a18908da37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.799 239969 DEBUG oslo_concurrency.lockutils [req-51e62689-5908-4190-b43f-fd3da47a6e92 req-ecba628d-2a9a-4274-aea2-b9b32a733805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-342bf84f-0f28-4939-b844-ed14ee42c01c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.952 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:41 compute-0 nova_compute[239965]: 2026-01-26 15:51:41.969 239969 DEBUG nova.network.neutron [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:51:42 compute-0 kernel: tap72aacd31-27 (unregistering): left promiscuous mode
Jan 26 15:51:42 compute-0 NetworkManager[48954]: <info>  [1769442702.1828] device (tap72aacd31-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.190 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:42 compute-0 ovn_controller[146046]: 2026-01-26T15:51:42Z|00111|binding|INFO|Releasing lport 72aacd31-2789-4f27-a514-f80766db0d6e from this chassis (sb_readonly=0)
Jan 26 15:51:42 compute-0 ovn_controller[146046]: 2026-01-26T15:51:42Z|00112|binding|INFO|Setting lport 72aacd31-2789-4f27-a514-f80766db0d6e down in Southbound
Jan 26 15:51:42 compute-0 ovn_controller[146046]: 2026-01-26T15:51:42Z|00113|binding|INFO|Removing iface tap72aacd31-27 ovn-installed in OVS
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.193 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.211 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:0d:aa 10.100.0.11'], port_security=['fa:16:3e:66:0d:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'db37e7ff-8499-4664-b2ba-014e27b8b6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=72aacd31-2789-4f27-a514-f80766db0d6e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.212 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 72aacd31-2789-4f27-a514-f80766db0d6e in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc unbound from our chassis
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.214 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.215 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.231 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f82ea4ed-27fa-44fd-8600-9bbfb8284a73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:42 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 26 15:51:42 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d0000000f.scope: Consumed 13.467s CPU time.
Jan 26 15:51:42 compute-0 systemd-machined[208061]: Machine qemu-23-instance-0000000f terminated.
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.260 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c2591bf9-562e-4186-96c2-ec5a36b3ddca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.263 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1b171fe5-e163-4eb0-a1ec-8f840b397d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.289 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9d864e13-fb95-4274-9b93-4c71a03a5350]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.307 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3a962b89-b78f-417d-9937-2b62dbdc915c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 868, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 16, 'rx_bytes': 868, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 38822, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 261470, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.323 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d7cc1b72-8b8d-4909-8571-e30a271a0e0e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261471, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 261471, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.325 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.326 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.330 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.330 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.331 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.331 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:42.331 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:42 compute-0 ceph-mon[75140]: pgmap v1100: 305 pgs: 305 active+clean; 461 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 6.8 MiB/s wr, 289 op/s
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.970 239969 INFO nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance shutdown successfully after 13 seconds.
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.984 239969 INFO nova.virt.libvirt.driver [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance destroyed successfully.
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.996 239969 INFO nova.virt.libvirt.driver [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance destroyed successfully.
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.997 239969 DEBUG nova.virt.libvirt.vif [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T15:50:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-784382261',display_name='tempest-ServersAdminTestJSON-server-784382261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-784382261',id=15,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:51:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-x2p37smd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'}
,tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:28Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=db37e7ff-8499-4664-b2ba-014e27b8b6bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.998 239969 DEBUG nova.network.os_vif_util [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:42 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.999 239969 DEBUG nova.network.os_vif_util [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:42.999 239969 DEBUG os_vif [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.001 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.001 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72aacd31-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.003 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.004 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.006 239969 INFO os_vif [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27')
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.276 239969 INFO nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deleting instance files /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb_del
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.277 239969 INFO nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deletion of /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb_del complete
Jan 26 15:51:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 544 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 10 MiB/s wr, 481 op/s
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.774 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.775 239969 INFO nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Creating image(s)
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.802 239969 DEBUG nova.storage.rbd_utils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.825 239969 DEBUG nova.storage.rbd_utils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.858 239969 DEBUG nova.storage.rbd_utils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.863 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.937 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.938 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.939 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.939 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.965 239969 DEBUG nova.storage.rbd_utils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.968 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.994 239969 DEBUG nova.compute.manager [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-unplugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.995 239969 DEBUG oslo_concurrency.lockutils [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.995 239969 DEBUG oslo_concurrency.lockutils [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.996 239969 DEBUG oslo_concurrency.lockutils [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.996 239969 DEBUG nova.compute.manager [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] No waiting events found dispatching network-vif-unplugged-72aacd31-2789-4f27-a514-f80766db0d6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.996 239969 WARNING nova.compute.manager [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received unexpected event network-vif-unplugged-72aacd31-2789-4f27-a514-f80766db0d6e for instance with vm_state active and task_state rebuild_spawning.
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.996 239969 DEBUG nova.compute.manager [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.996 239969 DEBUG oslo_concurrency.lockutils [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.997 239969 DEBUG oslo_concurrency.lockutils [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.997 239969 DEBUG oslo_concurrency.lockutils [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.997 239969 DEBUG nova.compute.manager [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] No waiting events found dispatching network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:43 compute-0 nova_compute[239965]: 2026-01-26 15:51:43.997 239969 WARNING nova.compute.manager [req-aac65556-1830-4ff5-b71d-1ad527a0d88d req-5430fc48-751d-455b-b216-a18dc024b41a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received unexpected event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e for instance with vm_state active and task_state rebuild_spawning.
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.099 239969 DEBUG nova.network.neutron [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Updating instance_info_cache with network_info: [{"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.126 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Releasing lock "refresh_cache-342bf84f-0f28-4939-b844-ed14ee42c01c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.127 239969 DEBUG nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Instance network_info: |[{"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.128 239969 DEBUG oslo_concurrency.lockutils [req-51e62689-5908-4190-b43f-fd3da47a6e92 req-ecba628d-2a9a-4274-aea2-b9b32a733805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-342bf84f-0f28-4939-b844-ed14ee42c01c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.128 239969 DEBUG nova.network.neutron [req-51e62689-5908-4190-b43f-fd3da47a6e92 req-ecba628d-2a9a-4274-aea2-b9b32a733805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Refreshing network info cache for port 87baa7c8-584c-40fb-a5ee-7a18908da37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.131 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Start _get_guest_xml network_info=[{"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.138 239969 WARNING nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.145 239969 DEBUG nova.virt.libvirt.host [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.146 239969 DEBUG nova.virt.libvirt.host [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.156 239969 DEBUG nova.virt.libvirt.host [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.156 239969 DEBUG nova.virt.libvirt.host [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.157 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.157 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.158 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.158 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.158 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.159 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.159 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.159 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.159 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.160 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.160 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.160 239969 DEBUG nova.virt.hardware [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.164 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.198 239969 DEBUG nova.compute.manager [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.245 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.275 239969 INFO nova.compute.manager [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] instance snapshotting
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.318 239969 DEBUG nova.storage.rbd_utils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] resizing rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.401 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.401 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Ensure instance console log exists: /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.402 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.402 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.402 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.404 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Start _get_guest_xml network_info=[{"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.411 239969 WARNING nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.416 239969 DEBUG nova.virt.libvirt.host [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.416 239969 DEBUG nova.virt.libvirt.host [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.425 239969 DEBUG nova.virt.libvirt.host [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.426 239969 DEBUG nova.virt.libvirt.host [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.426 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.426 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.427 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.427 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.427 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.427 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.428 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.428 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.428 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.428 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.428 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.429 239969 DEBUG nova.virt.hardware [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.429 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'vcpu_model' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.445 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.575 239969 INFO nova.virt.libvirt.driver [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Beginning live snapshot process
Jan 26 15:51:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2882154699' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.778 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:44 compute-0 ceph-mon[75140]: pgmap v1101: 305 pgs: 305 active+clean; 544 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 10 MiB/s wr, 481 op/s
Jan 26 15:51:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2882154699' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.828 239969 DEBUG nova.storage.rbd_utils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 342bf84f-0f28-4939-b844-ed14ee42c01c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.833 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.856 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:44 compute-0 nova_compute[239965]: 2026-01-26 15:51:44.941 239969 DEBUG nova.virt.libvirt.imagebackend [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:51:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/419761618' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.098 239969 DEBUG nova.storage.rbd_utils [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] creating snapshot(97ed7bfbc2d44ef99b441c2586cd50d3) on rbd image(131ac17b-4bc0-4c20-b861-7102599a66d3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.126 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.147 239969 DEBUG nova.storage.rbd_utils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.149 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888135303' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.436 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.439 239969 DEBUG nova.virt.libvirt.vif [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:51:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2143786364',display_name='tempest-ImagesTestJSON-server-2143786364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2143786364',id=24,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-ro796nsr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:38Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=342bf84f-0f28-4939-b844-ed14ee42c01c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.439 239969 DEBUG nova.network.os_vif_util [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.440 239969 DEBUG nova.network.os_vif_util [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:76,bridge_name='br-int',has_traffic_filtering=True,id=87baa7c8-584c-40fb-a5ee-7a18908da37f,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87baa7c8-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.442 239969 DEBUG nova.objects.instance [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'pci_devices' on Instance uuid 342bf84f-0f28-4939-b844-ed14ee42c01c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.459 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <uuid>342bf84f-0f28-4939-b844-ed14ee42c01c</uuid>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <name>instance-00000018</name>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesTestJSON-server-2143786364</nova:name>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:51:44</nova:creationTime>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:user uuid="3322c44e378e415bb486ef558314a67c">tempest-ImagesTestJSON-1480202091-project-member</nova:user>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:project uuid="ddba5162f533447bba0159cafaa565bf">tempest-ImagesTestJSON-1480202091</nova:project>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:port uuid="87baa7c8-584c-40fb-a5ee-7a18908da37f">
Jan 26 15:51:45 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <system>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="serial">342bf84f-0f28-4939-b844-ed14ee42c01c</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="uuid">342bf84f-0f28-4939-b844-ed14ee42c01c</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </system>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <os>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </os>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <features>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </features>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/342bf84f-0f28-4939-b844-ed14ee42c01c_disk">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/342bf84f-0f28-4939-b844-ed14ee42c01c_disk.config">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:a1:48:76"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <target dev="tap87baa7c8-58"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c/console.log" append="off"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <video>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </video>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:51:45 compute-0 nova_compute[239965]: </domain>
Jan 26 15:51:45 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.467 239969 DEBUG nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Preparing to wait for external event network-vif-plugged-87baa7c8-584c-40fb-a5ee-7a18908da37f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.468 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.468 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.469 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.470 239969 DEBUG nova.virt.libvirt.vif [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:51:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2143786364',display_name='tempest-ImagesTestJSON-server-2143786364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2143786364',id=24,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-ro796nsr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:38Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=342bf84f-0f28-4939-b844-ed14ee42c01c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.470 239969 DEBUG nova.network.os_vif_util [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.471 239969 DEBUG nova.network.os_vif_util [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:76,bridge_name='br-int',has_traffic_filtering=True,id=87baa7c8-584c-40fb-a5ee-7a18908da37f,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87baa7c8-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.472 239969 DEBUG os_vif [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:76,bridge_name='br-int',has_traffic_filtering=True,id=87baa7c8-584c-40fb-a5ee-7a18908da37f,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87baa7c8-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.473 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.473 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.474 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.477 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.477 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87baa7c8-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.478 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap87baa7c8-58, col_values=(('external_ids', {'iface-id': '87baa7c8-584c-40fb-a5ee-7a18908da37f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a1:48:76', 'vm-uuid': '342bf84f-0f28-4939-b844-ed14ee42c01c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:45 compute-0 NetworkManager[48954]: <info>  [1769442705.4806] manager: (tap87baa7c8-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.483 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.486 239969 INFO os_vif [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:76,bridge_name='br-int',has_traffic_filtering=True,id=87baa7c8-584c-40fb-a5ee-7a18908da37f,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87baa7c8-58')
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.539 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.540 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.540 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No VIF found with MAC fa:16:3e:a1:48:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.541 239969 INFO nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Using config drive
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.562 239969 DEBUG nova.storage.rbd_utils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 342bf84f-0f28-4939-b844-ed14ee42c01c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 531 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 10 MiB/s wr, 466 op/s
Jan 26 15:51:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:51:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1897130657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.721 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.722 239969 DEBUG nova.virt.libvirt.vif [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T15:50:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-784382261',display_name='tempest-ServersAdminTestJSON-server-784382261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-784382261',id=15,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:51:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-x2p37smd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:43Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=db37e7ff-8499-4664-b2ba-014e27b8b6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.722 239969 DEBUG nova.network.os_vif_util [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.723 239969 DEBUG nova.network.os_vif_util [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.725 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <uuid>db37e7ff-8499-4664-b2ba-014e27b8b6bb</uuid>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <name>instance-0000000f</name>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdminTestJSON-server-784382261</nova:name>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:51:44</nova:creationTime>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:user uuid="b47ce75429ed4cd1ba21c0d50c2e552d">tempest-ServersAdminTestJSON-863857415-project-member</nova:user>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:project uuid="b3195eedf4a34fabaf019faaaad7eb71">tempest-ServersAdminTestJSON-863857415</nova:project>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <nova:port uuid="72aacd31-2789-4f27-a514-f80766db0d6e">
Jan 26 15:51:45 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <system>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="serial">db37e7ff-8499-4664-b2ba-014e27b8b6bb</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="uuid">db37e7ff-8499-4664-b2ba-014e27b8b6bb</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </system>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <os>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </os>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <features>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </features>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </source>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:51:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:66:0d:aa"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <target dev="tap72aacd31-27"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/console.log" append="off"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <video>
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </video>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:51:45 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:51:45 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:51:45 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:51:45 compute-0 nova_compute[239965]: </domain>
Jan 26 15:51:45 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.731 239969 DEBUG nova.compute.manager [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Preparing to wait for external event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.732 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.732 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.732 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.733 239969 DEBUG nova.virt.libvirt.vif [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T15:50:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-784382261',display_name='tempest-ServersAdminTestJSON-server-784382261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-784382261',id=15,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:51:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-x2p37smd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:51:43Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=db37e7ff-8499-4664-b2ba-014e27b8b6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.734 239969 DEBUG nova.network.os_vif_util [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.734 239969 DEBUG nova.network.os_vif_util [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.735 239969 DEBUG os_vif [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.736 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.736 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.738 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72aacd31-27, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.739 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72aacd31-27, col_values=(('external_ids', {'iface-id': '72aacd31-2789-4f27-a514-f80766db0d6e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:0d:aa', 'vm-uuid': 'db37e7ff-8499-4664-b2ba-014e27b8b6bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:45 compute-0 NetworkManager[48954]: <info>  [1769442705.7414] manager: (tap72aacd31-27): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.743 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.747 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.748 239969 INFO os_vif [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27')
Jan 26 15:51:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.816 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.816 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.817 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] No VIF found with MAC fa:16:3e:66:0d:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.817 239969 INFO nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Using config drive
Jan 26 15:51:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/419761618' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3888135303' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1897130657' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:51:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Jan 26 15:51:45 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.856 239969 DEBUG nova.storage.rbd_utils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.877 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'ec2_ids' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.886 239969 DEBUG nova.storage.rbd_utils [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] cloning vms/131ac17b-4bc0-4c20-b861-7102599a66d3_disk@97ed7bfbc2d44ef99b441c2586cd50d3 to images/ed3d438e-563f-4513-9d80-b1fbbc3d3ea4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.918 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'keypairs' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.971 239969 INFO nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Creating config drive at /var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c/disk.config
Jan 26 15:51:45 compute-0 nova_compute[239965]: 2026-01-26 15:51:45.976 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps44qyu6f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.006 239969 DEBUG nova.storage.rbd_utils [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] flattening images/ed3d438e-563f-4513-9d80-b1fbbc3d3ea4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.107 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps44qyu6f" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.144 239969 DEBUG nova.storage.rbd_utils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image 342bf84f-0f28-4939-b844-ed14ee42c01c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.149 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c/disk.config 342bf84f-0f28-4939-b844-ed14ee42c01c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.187 239969 DEBUG nova.network.neutron [req-51e62689-5908-4190-b43f-fd3da47a6e92 req-ecba628d-2a9a-4274-aea2-b9b32a733805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Updated VIF entry in instance network info cache for port 87baa7c8-584c-40fb-a5ee-7a18908da37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.188 239969 DEBUG nova.network.neutron [req-51e62689-5908-4190-b43f-fd3da47a6e92 req-ecba628d-2a9a-4274-aea2-b9b32a733805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Updating instance_info_cache with network_info: [{"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.236 239969 DEBUG oslo_concurrency.lockutils [req-51e62689-5908-4190-b43f-fd3da47a6e92 req-ecba628d-2a9a-4274-aea2-b9b32a733805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-342bf84f-0f28-4939-b844-ed14ee42c01c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.246 239969 DEBUG nova.storage.rbd_utils [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] removing snapshot(97ed7bfbc2d44ef99b441c2586cd50d3) on rbd image(131ac17b-4bc0-4c20-b861-7102599a66d3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.296 239969 DEBUG oslo_concurrency.processutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c/disk.config 342bf84f-0f28-4939-b844-ed14ee42c01c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.296 239969 INFO nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Deleting local config drive /var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c/disk.config because it was imported into RBD.
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.308 239969 INFO nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Creating config drive at /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.313 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcyw0asi0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:46 compute-0 NetworkManager[48954]: <info>  [1769442706.3465] manager: (tap87baa7c8-58): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Jan 26 15:51:46 compute-0 kernel: tap87baa7c8-58: entered promiscuous mode
Jan 26 15:51:46 compute-0 ovn_controller[146046]: 2026-01-26T15:51:46Z|00114|binding|INFO|Claiming lport 87baa7c8-584c-40fb-a5ee-7a18908da37f for this chassis.
Jan 26 15:51:46 compute-0 ovn_controller[146046]: 2026-01-26T15:51:46Z|00115|binding|INFO|87baa7c8-584c-40fb-a5ee-7a18908da37f: Claiming fa:16:3e:a1:48:76 10.100.0.8
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.357 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.363 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:48:76 10.100.0.8'], port_security=['fa:16:3e:a1:48:76 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '342bf84f-0f28-4939-b844-ed14ee42c01c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=87baa7c8-584c-40fb-a5ee-7a18908da37f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.364 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 87baa7c8-584c-40fb-a5ee-7a18908da37f in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 bound to our chassis
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.368 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:51:46 compute-0 systemd-machined[208061]: New machine qemu-27-instance-00000018.
Jan 26 15:51:46 compute-0 ovn_controller[146046]: 2026-01-26T15:51:46Z|00116|binding|INFO|Setting lport 87baa7c8-584c-40fb-a5ee-7a18908da37f ovn-installed in OVS
Jan 26 15:51:46 compute-0 ovn_controller[146046]: 2026-01-26T15:51:46Z|00117|binding|INFO|Setting lport 87baa7c8-584c-40fb-a5ee-7a18908da37f up in Southbound
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.385 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1400801d-d79b-45bb-94fd-4a2412cd2cac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.386 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.387 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5e237c1-a1 in ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.389 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5e237c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.389 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[148c9160-f094-48f9-a27d-0e432a67a184]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.390 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[12f68d3b-63b5-4915-a5d6-b4707dedcb75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-00000018.
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.405 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a23b1ceb-b2f4-47c0-b25c-ed6c5df49d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 systemd-udevd[262016]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:51:46 compute-0 NetworkManager[48954]: <info>  [1769442706.4202] device (tap87baa7c8-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:51:46 compute-0 NetworkManager[48954]: <info>  [1769442706.4215] device (tap87baa7c8-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.421 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2a755de7-b47b-4220-b852-01a6ea302a0e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.445 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcyw0asi0" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.451 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[87d20df3-7cf1-4f66-8533-89eb171e1cdd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.457 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f5690269-acef-4b14-b3eb-3e59013b93bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 NetworkManager[48954]: <info>  [1769442706.4591] manager: (tapf5e237c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.491 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[98ff3ccd-aa15-4a3e-b7bb-ab3b7dd94ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.493 239969 DEBUG nova.storage.rbd_utils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] rbd image db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.494 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[851b9dc8-8b36-4657-8437-11e307334abf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.503 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:46 compute-0 NetworkManager[48954]: <info>  [1769442706.5200] device (tapf5e237c1-a0): carrier: link connected
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.526 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6fe95648-b66b-4664-9cc6-483c3cd8dd74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.553 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5e318bc0-5724-48f7-b504-59e12fedc559]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419732, 'reachable_time': 26934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262071, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.561 239969 DEBUG nova.compute.manager [req-6ef6d6a3-646a-4a43-a41a-cc001d1b0df8 req-15ec7823-5522-44a5-adf3-4d79eb6726d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Received event network-vif-plugged-87baa7c8-584c-40fb-a5ee-7a18908da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.563 239969 DEBUG oslo_concurrency.lockutils [req-6ef6d6a3-646a-4a43-a41a-cc001d1b0df8 req-15ec7823-5522-44a5-adf3-4d79eb6726d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.563 239969 DEBUG oslo_concurrency.lockutils [req-6ef6d6a3-646a-4a43-a41a-cc001d1b0df8 req-15ec7823-5522-44a5-adf3-4d79eb6726d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.563 239969 DEBUG oslo_concurrency.lockutils [req-6ef6d6a3-646a-4a43-a41a-cc001d1b0df8 req-15ec7823-5522-44a5-adf3-4d79eb6726d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.565 239969 DEBUG nova.compute.manager [req-6ef6d6a3-646a-4a43-a41a-cc001d1b0df8 req-15ec7823-5522-44a5-adf3-4d79eb6726d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Processing event network-vif-plugged-87baa7c8-584c-40fb-a5ee-7a18908da37f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.584 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a42c376f-d442-4dfb-8d59-130370affc8f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:eb6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419732, 'tstamp': 419732}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262072, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.605 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9eab03ab-b24a-435b-9bf4-920f0d8ca8e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419732, 'reachable_time': 26934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262081, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.643 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e0acbaf7-748b-420a-bf2a-b7823b42164e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.683 239969 DEBUG nova.compute.manager [None req-60e117af-ebb5-4815-8f39-945dce4f0efb ff510260fcce4854a6921f494e3c5f05 6009ecd9d1a54de59975e92ae25c7550 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.692 239969 INFO nova.compute.manager [None req-60e117af-ebb5-4815-8f39-945dce4f0efb ff510260fcce4854a6921f494e3c5f05 6009ecd9d1a54de59975e92ae25c7550 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Retrieving diagnostics
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.730 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cd400ef9-d812-4082-a910-c6adcecf8652]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.731 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.731 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.732 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5e237c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:46 compute-0 NetworkManager[48954]: <info>  [1769442706.7338] manager: (tapf5e237c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.733 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 kernel: tapf5e237c1-a0: entered promiscuous mode
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.746 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5e237c1-a0, col_values=(('external_ids', {'iface-id': 'fd0478e8-96d8-4fbd-8a9a-fe78757277ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:46 compute-0 ovn_controller[146046]: 2026-01-26T15:51:46Z|00118|binding|INFO|Releasing lport fd0478e8-96d8-4fbd-8a9a-fe78757277ca from this chassis (sb_readonly=0)
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.764 239969 DEBUG oslo_concurrency.processutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config db37e7ff-8499-4664-b2ba-014e27b8b6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.764 239969 INFO nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deleting local config drive /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb/disk.config because it was imported into RBD.
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.771 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.773 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.775 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.775 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[70313f0c-5bf8-449b-b9d9-d34509029f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.776 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.777 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'env', 'PROCESS_TAG=haproxy-f5e237c1-a75f-479a-88ee-c0f788914b11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5e237c1-a75f-479a-88ee-c0f788914b11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:51:46 compute-0 NetworkManager[48954]: <info>  [1769442706.8236] manager: (tap72aacd31-27): new Tun device (/org/freedesktop/NetworkManager/Devices/72)
Jan 26 15:51:46 compute-0 kernel: tap72aacd31-27: entered promiscuous mode
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.826 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 ovn_controller[146046]: 2026-01-26T15:51:46Z|00119|binding|INFO|Claiming lport 72aacd31-2789-4f27-a514-f80766db0d6e for this chassis.
Jan 26 15:51:46 compute-0 ovn_controller[146046]: 2026-01-26T15:51:46Z|00120|binding|INFO|72aacd31-2789-4f27-a514-f80766db0d6e: Claiming fa:16:3e:66:0d:aa 10.100.0.11
Jan 26 15:51:46 compute-0 systemd-udevd[262063]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:51:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Jan 26 15:51:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:46.834 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:0d:aa 10.100.0.11'], port_security=['fa:16:3e:66:0d:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'db37e7ff-8499-4664-b2ba-014e27b8b6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=72aacd31-2789-4f27-a514-f80766db0d6e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:46 compute-0 ceph-mon[75140]: pgmap v1102: 305 pgs: 305 active+clean; 531 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 10 MiB/s wr, 466 op/s
Jan 26 15:51:46 compute-0 ceph-mon[75140]: osdmap e174: 3 total, 3 up, 3 in
Jan 26 15:51:46 compute-0 NetworkManager[48954]: <info>  [1769442706.8423] device (tap72aacd31-27): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:51:46 compute-0 NetworkManager[48954]: <info>  [1769442706.8429] device (tap72aacd31-27): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:51:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Jan 26 15:51:46 compute-0 ovn_controller[146046]: 2026-01-26T15:51:46Z|00121|binding|INFO|Setting lport 72aacd31-2789-4f27-a514-f80766db0d6e ovn-installed in OVS
Jan 26 15:51:46 compute-0 ovn_controller[146046]: 2026-01-26T15:51:46Z|00122|binding|INFO|Setting lport 72aacd31-2789-4f27-a514-f80766db0d6e up in Southbound
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.849 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.859 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.876 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Acquiring lock "5c95a4fa-a47e-419d-ad9f-0714964644b4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.876 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "5c95a4fa-a47e-419d-ad9f-0714964644b4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.876 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Acquiring lock "5c95a4fa-a47e-419d-ad9f-0714964644b4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.877 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "5c95a4fa-a47e-419d-ad9f-0714964644b4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.877 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "5c95a4fa-a47e-419d-ad9f-0714964644b4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:46 compute-0 systemd-machined[208061]: New machine qemu-28-instance-0000000f.
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.879 239969 INFO nova.compute.manager [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Terminating instance
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.880 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Acquiring lock "refresh_cache-5c95a4fa-a47e-419d-ad9f-0714964644b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.880 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Acquired lock "refresh_cache-5c95a4fa-a47e-419d-ad9f-0714964644b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.880 239969 DEBUG nova.network.neutron [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:51:46 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000000f.
Jan 26 15:51:46 compute-0 nova_compute[239965]: 2026-01-26 15:51:46.921 239969 DEBUG nova.storage.rbd_utils [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] creating snapshot(snap) on rbd image(ed3d438e-563f-4513-9d80-b1fbbc3d3ea4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.027 239969 DEBUG nova.network.neutron [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.111 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442707.0987499, 342bf84f-0f28-4939-b844-ed14ee42c01c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.112 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] VM Started (Lifecycle Event)
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.114 239969 DEBUG nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.129 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.133 239969 INFO nova.virt.libvirt.driver [-] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Instance spawned successfully.
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.134 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.137 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.147 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.157 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.158 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.158 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.159 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.159 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.159 239969 DEBUG nova.virt.libvirt.driver [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.181 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.181 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442707.098949, 342bf84f-0f28-4939-b844-ed14ee42c01c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.181 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] VM Paused (Lifecycle Event)
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.210 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.213 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442707.127769, 342bf84f-0f28-4939-b844-ed14ee42c01c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.213 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] VM Resumed (Lifecycle Event)
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.226 239969 INFO nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Took 8.45 seconds to spawn the instance on the hypervisor.
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.226 239969 DEBUG nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.235 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.242 239969 DEBUG nova.compute.manager [req-8b751657-2666-4263-9c8b-6c960be4ae50 req-92e717ea-d5ce-4bac-8390-aed66d2a9cf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.243 239969 DEBUG oslo_concurrency.lockutils [req-8b751657-2666-4263-9c8b-6c960be4ae50 req-92e717ea-d5ce-4bac-8390-aed66d2a9cf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.243 239969 DEBUG oslo_concurrency.lockutils [req-8b751657-2666-4263-9c8b-6c960be4ae50 req-92e717ea-d5ce-4bac-8390-aed66d2a9cf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.243 239969 DEBUG oslo_concurrency.lockutils [req-8b751657-2666-4263-9c8b-6c960be4ae50 req-92e717ea-d5ce-4bac-8390-aed66d2a9cf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.244 239969 DEBUG nova.compute.manager [req-8b751657-2666-4263-9c8b-6c960be4ae50 req-92e717ea-d5ce-4bac-8390-aed66d2a9cf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Processing event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.250 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.283 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.306 239969 INFO nova.compute.manager [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Took 10.75 seconds to build instance.
Jan 26 15:51:47 compute-0 podman[262202]: 2026-01-26 15:51:47.310683287 +0000 UTC m=+0.095968362 container create 9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:51:47 compute-0 podman[262202]: 2026-01-26 15:51:47.259868512 +0000 UTC m=+0.045153617 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.371 239969 DEBUG nova.network.neutron [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.373 239969 DEBUG oslo_concurrency.lockutils [None req-ad4600e8-d4bf-4348-a60c-693afdd799be 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:47 compute-0 systemd[1]: Started libpod-conmon-9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c.scope.
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.388 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Releasing lock "refresh_cache-5c95a4fa-a47e-419d-ad9f-0714964644b4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.389 239969 DEBUG nova.compute.manager [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:51:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab8cea90d6c7031a895a3ac01f0bb3c96ffe4155fd89ed3d46de127ddc5eeeed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:47 compute-0 podman[262202]: 2026-01-26 15:51:47.440431557 +0000 UTC m=+0.225716662 container init 9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 15:51:47 compute-0 podman[262202]: 2026-01-26 15:51:47.450655516 +0000 UTC m=+0.235940591 container start 9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 26 15:51:47 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000015.scope: Deactivated successfully.
Jan 26 15:51:47 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000015.scope: Consumed 12.451s CPU time.
Jan 26 15:51:47 compute-0 systemd-machined[208061]: Machine qemu-24-instance-00000015 terminated.
Jan 26 15:51:47 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[262236]: [NOTICE]   (262262) : New worker (262286) forked
Jan 26 15:51:47 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[262236]: [NOTICE]   (262262) : Loading success.
Jan 26 15:51:47 compute-0 sudo[262259]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:51:47 compute-0 sudo[262259]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:47 compute-0 sudo[262259]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.521 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 72aacd31-2789-4f27-a514-f80766db0d6e in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc unbound from our chassis
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.524 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.548 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4462f9d1-822c-4965-9cdc-99a1848abb28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:47 compute-0 sudo[262299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:51:47 compute-0 sudo[262299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.589 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9abb82fc-38d5-4be6-8dda-4ea4db5b7774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.592 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bb2122-2cc9-423c-a55c-87b145ee7c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.602 239969 DEBUG nova.compute.manager [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.607 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for db37e7ff-8499-4664-b2ba-014e27b8b6bb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.607 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442707.6070437, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.608 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Started (Lifecycle Event)
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.613 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.616 239969 INFO nova.virt.libvirt.driver [-] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Instance destroyed successfully.
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.617 239969 DEBUG nova.objects.instance [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lazy-loading 'resources' on Instance uuid 5c95a4fa-a47e-419d-ad9f-0714964644b4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.624 239969 INFO nova.virt.libvirt.driver [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance spawned successfully.
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.625 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.626 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[358b39fa-41a1-437a-92f7-6d751050c46f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.645 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b5225ffa-3c8f-4ddd-a15d-e3fe2c1d4242]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 18, 'rx_bytes': 868, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 18, 'rx_bytes': 868, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 26088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262332, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 554 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 14 MiB/s wr, 588 op/s
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.670 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5d885b-a0de-418c-95d8-79ac2b79b1a0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262333, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262333, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.673 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.677 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.678 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.680 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.680 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.681 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:47.681 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.781 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.790 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.790 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.791 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.791 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.794 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.795 239969 DEBUG nova.virt.libvirt.driver [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.802 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Jan 26 15:51:47 compute-0 ceph-mon[75140]: osdmap e175: 3 total, 3 up, 3 in
Jan 26 15:51:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Jan 26 15:51:47 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.887 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.890 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442707.6077614, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.892 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Paused (Lifecycle Event)
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.919 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.927 239969 DEBUG nova.compute.manager [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.930 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442707.610759, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.931 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Resumed (Lifecycle Event)
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.992 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.998 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.999 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:47 compute-0 nova_compute[239965]: 2026-01-26 15:51:47.999 239969 DEBUG nova.objects.instance [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.004 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.074 239969 DEBUG oslo_concurrency.lockutils [None req-f032643f-52c6-499f-af12-5b75f91ea7c1 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.166 239969 INFO nova.virt.libvirt.driver [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Deleting instance files /var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4_del
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.168 239969 INFO nova.virt.libvirt.driver [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Deletion of /var/lib/nova/instances/5c95a4fa-a47e-419d-ad9f-0714964644b4_del complete
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.219 239969 INFO nova.compute.manager [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Took 0.83 seconds to destroy the instance on the hypervisor.
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.222 239969 DEBUG oslo.service.loopingcall [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.226 239969 DEBUG nova.compute.manager [-] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.226 239969 DEBUG nova.network.neutron [-] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:51:48 compute-0 sudo[262299]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.380 239969 DEBUG nova.network.neutron [-] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.394 239969 DEBUG nova.network.neutron [-] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.408 239969 INFO nova.compute.manager [-] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Took 0.18 seconds to deallocate network for instance.
Jan 26 15:51:48 compute-0 sudo[262382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:51:48 compute-0 sudo[262382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:48 compute-0 sudo[262382]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.451 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.452 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:48 compute-0 sudo[262407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 26 15:51:48 compute-0 sudo[262407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:51:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1136871613' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:51:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:51:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1136871613' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00425541035293904 of space, bias 1.0, pg target 1.2766231058817121 quantized to 32 (current 32)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0008581353482102761 of space, bias 1.0, pg target 0.2565824691148726 quantized to 32 (current 32)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.317899423410334e-07 of space, bias 4.0, pg target 0.000875220771039876 quantized to 16 (current 16)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:51:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 26 15:51:48 compute-0 nova_compute[239965]: 2026-01-26 15:51:48.688 239969 DEBUG oslo_concurrency.processutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:48 compute-0 sudo[262407]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:51:48 compute-0 ceph-mon[75140]: pgmap v1105: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 554 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 14 MiB/s wr, 588 op/s
Jan 26 15:51:48 compute-0 ceph-mon[75140]: osdmap e176: 3 total, 3 up, 3 in
Jan 26 15:51:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1136871613' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:51:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1136871613' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:51:48 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:51:48 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:48 compute-0 sudo[262472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:51:48 compute-0 sudo[262472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:48 compute-0 sudo[262472]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.002 239969 DEBUG nova.compute.manager [req-ad5475e6-9374-4f94-9474-da3ed71045fb req-4e3ecc9f-1eb7-41ed-b658-8a0733a0ca26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Received event network-vif-plugged-87baa7c8-584c-40fb-a5ee-7a18908da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.003 239969 DEBUG oslo_concurrency.lockutils [req-ad5475e6-9374-4f94-9474-da3ed71045fb req-4e3ecc9f-1eb7-41ed-b658-8a0733a0ca26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.003 239969 DEBUG oslo_concurrency.lockutils [req-ad5475e6-9374-4f94-9474-da3ed71045fb req-4e3ecc9f-1eb7-41ed-b658-8a0733a0ca26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.003 239969 DEBUG oslo_concurrency.lockutils [req-ad5475e6-9374-4f94-9474-da3ed71045fb req-4e3ecc9f-1eb7-41ed-b658-8a0733a0ca26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.004 239969 DEBUG nova.compute.manager [req-ad5475e6-9374-4f94-9474-da3ed71045fb req-4e3ecc9f-1eb7-41ed-b658-8a0733a0ca26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] No waiting events found dispatching network-vif-plugged-87baa7c8-584c-40fb-a5ee-7a18908da37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.004 239969 WARNING nova.compute.manager [req-ad5475e6-9374-4f94-9474-da3ed71045fb req-4e3ecc9f-1eb7-41ed-b658-8a0733a0ca26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Received unexpected event network-vif-plugged-87baa7c8-584c-40fb-a5ee-7a18908da37f for instance with vm_state active and task_state None.
Jan 26 15:51:49 compute-0 sudo[262497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- inventory --format=json-pretty --filter-for-batch
Jan 26 15:51:49 compute-0 sudo[262497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.131 239969 DEBUG nova.compute.manager [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.181 239969 INFO nova.compute.manager [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] instance snapshotting
Jan 26 15:51:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2807197365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:49 compute-0 podman[262532]: 2026-01-26 15:51:49.379660733 +0000 UTC m=+0.062350595 container create a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bouman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.390 239969 DEBUG oslo_concurrency.processutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.703s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.397 239969 DEBUG nova.compute.provider_tree [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.421 239969 DEBUG nova.compute.manager [req-01175325-62a8-49c2-8396-d23e838326fd req-f4243a64-6803-42c5-b3ec-75451990f3de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.422 239969 DEBUG oslo_concurrency.lockutils [req-01175325-62a8-49c2-8396-d23e838326fd req-f4243a64-6803-42c5-b3ec-75451990f3de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.422 239969 DEBUG oslo_concurrency.lockutils [req-01175325-62a8-49c2-8396-d23e838326fd req-f4243a64-6803-42c5-b3ec-75451990f3de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.422 239969 DEBUG oslo_concurrency.lockutils [req-01175325-62a8-49c2-8396-d23e838326fd req-f4243a64-6803-42c5-b3ec-75451990f3de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.422 239969 DEBUG nova.compute.manager [req-01175325-62a8-49c2-8396-d23e838326fd req-f4243a64-6803-42c5-b3ec-75451990f3de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] No waiting events found dispatching network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.423 239969 WARNING nova.compute.manager [req-01175325-62a8-49c2-8396-d23e838326fd req-f4243a64-6803-42c5-b3ec-75451990f3de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received unexpected event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e for instance with vm_state error and task_state None.
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.424 239969 DEBUG nova.scheduler.client.report [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:49 compute-0 systemd[1]: Started libpod-conmon-a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4.scope.
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.436 239969 INFO nova.virt.libvirt.driver [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Beginning live snapshot process
Jan 26 15:51:49 compute-0 podman[262532]: 2026-01-26 15:51:49.356101462 +0000 UTC m=+0.038791354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:51:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.460 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:49 compute-0 podman[262532]: 2026-01-26 15:51:49.480393249 +0000 UTC m=+0.163083131 container init a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 15:51:49 compute-0 podman[262532]: 2026-01-26 15:51:49.489823158 +0000 UTC m=+0.172513020 container start a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bouman, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.491 239969 INFO nova.scheduler.client.report [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Deleted allocations for instance 5c95a4fa-a47e-419d-ad9f-0714964644b4
Jan 26 15:51:49 compute-0 podman[262532]: 2026-01-26 15:51:49.495016255 +0000 UTC m=+0.177706117 container attach a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:51:49 compute-0 zealous_bouman[262548]: 167 167
Jan 26 15:51:49 compute-0 systemd[1]: libpod-a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4.scope: Deactivated successfully.
Jan 26 15:51:49 compute-0 conmon[262548]: conmon a9d3051969122c485d67 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4.scope/container/memory.events
Jan 26 15:51:49 compute-0 podman[262553]: 2026-01-26 15:51:49.567860984 +0000 UTC m=+0.051636485 container died a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bouman, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 15:51:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-5d9463695a32b920cd9ca69dd28818e3b25aa38a644eb9458af3dda5964ed077-merged.mount: Deactivated successfully.
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.619 239969 INFO nova.virt.libvirt.driver [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Snapshot image upload complete
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.620 239969 INFO nova.compute.manager [None req-04b88467-1199-4d74-a000-d17a92e49cbb d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Took 5.34 seconds to snapshot the instance on the hypervisor.
Jan 26 15:51:49 compute-0 podman[262553]: 2026-01-26 15:51:49.624001187 +0000 UTC m=+0.107776658 container remove a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_bouman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.624 239969 DEBUG oslo_concurrency.lockutils [None req-d84d9407-b823-4492-87f3-45d12d094ac5 a91032fde2f84c048d3d60005453eb86 8aa78698423142469e6f67832b8e9e93 - - default default] Lock "5c95a4fa-a47e-419d-ad9f-0714964644b4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:49 compute-0 systemd[1]: libpod-conmon-a9d3051969122c485d67df9e8f9503d30bb6cdf797960d43785add6b43703db4.scope: Deactivated successfully.
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.642 239969 DEBUG nova.virt.libvirt.imagebackend [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:51:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 554 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 8.0 MiB/s wr, 293 op/s
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.830 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:49 compute-0 podman[262605]: 2026-01-26 15:51:49.850182039 +0000 UTC m=+0.051933682 container create 0f596b451e1c295e34557ba251f9f159adacb53bee5f3c107a25baa0cd793f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:51:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2807197365' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:49 compute-0 systemd[1]: Started libpod-conmon-0f596b451e1c295e34557ba251f9f159adacb53bee5f3c107a25baa0cd793f38.scope.
Jan 26 15:51:49 compute-0 podman[262605]: 2026-01-26 15:51:49.826268519 +0000 UTC m=+0.028020182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:51:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4510e6fac47aab96d154907702d4d8848701623db4e3ebc19b33c5eac7be00b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4510e6fac47aab96d154907702d4d8848701623db4e3ebc19b33c5eac7be00b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:49 compute-0 nova_compute[239965]: 2026-01-26 15:51:49.945 239969 DEBUG nova.storage.rbd_utils [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(4faba45e9dd44267b1271dc885398435) on rbd image(342bf84f-0f28-4939-b844-ed14ee42c01c_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4510e6fac47aab96d154907702d4d8848701623db4e3ebc19b33c5eac7be00b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4510e6fac47aab96d154907702d4d8848701623db4e3ebc19b33c5eac7be00b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:49 compute-0 podman[262605]: 2026-01-26 15:51:49.960692013 +0000 UTC m=+0.162443686 container init 0f596b451e1c295e34557ba251f9f159adacb53bee5f3c107a25baa0cd793f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 15:51:49 compute-0 podman[262605]: 2026-01-26 15:51:49.973464174 +0000 UTC m=+0.175215817 container start 0f596b451e1c295e34557ba251f9f159adacb53bee5f3c107a25baa0cd793f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_saha, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:51:49 compute-0 podman[262605]: 2026-01-26 15:51:49.977904872 +0000 UTC m=+0.179656515 container attach 0f596b451e1c295e34557ba251f9f159adacb53bee5f3c107a25baa0cd793f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_saha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:51:50 compute-0 nifty_saha[262621]: [
Jan 26 15:51:50 compute-0 nifty_saha[262621]:     {
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         "available": false,
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         "being_replaced": false,
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         "ceph_device_lvm": false,
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         "lsm_data": {},
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         "lvs": [],
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         "path": "/dev/sr0",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         "rejected_reasons": [
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "Has a FileSystem",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "Insufficient space (<5GB)"
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         ],
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         "sys_api": {
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "actuators": null,
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "device_nodes": [
Jan 26 15:51:50 compute-0 nifty_saha[262621]:                 "sr0"
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             ],
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "devname": "sr0",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "human_readable_size": "482.00 KB",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "id_bus": "ata",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "model": "QEMU DVD-ROM",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "nr_requests": "2",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "parent": "/dev/sr0",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "partitions": {},
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "path": "/dev/sr0",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "removable": "1",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "rev": "2.5+",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "ro": "0",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "rotational": "1",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "sas_address": "",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "sas_device_handle": "",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "scheduler_mode": "mq-deadline",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "sectors": 0,
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "sectorsize": "2048",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "size": 493568.0,
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "support_discard": "2048",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "type": "disk",
Jan 26 15:51:50 compute-0 nifty_saha[262621]:             "vendor": "QEMU"
Jan 26 15:51:50 compute-0 nifty_saha[262621]:         }
Jan 26 15:51:50 compute-0 nifty_saha[262621]:     }
Jan 26 15:51:50 compute-0 nifty_saha[262621]: ]
Jan 26 15:51:50 compute-0 systemd[1]: libpod-0f596b451e1c295e34557ba251f9f159adacb53bee5f3c107a25baa0cd793f38.scope: Deactivated successfully.
Jan 26 15:51:50 compute-0 podman[263556]: 2026-01-26 15:51:50.72601299 +0000 UTC m=+0.027281693 container died 0f596b451e1c295e34557ba251f9f159adacb53bee5f3c107a25baa0cd793f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_saha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:51:50 compute-0 nova_compute[239965]: 2026-01-26 15:51:50.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4510e6fac47aab96d154907702d4d8848701623db4e3ebc19b33c5eac7be00b0-merged.mount: Deactivated successfully.
Jan 26 15:51:50 compute-0 podman[263556]: 2026-01-26 15:51:50.776982378 +0000 UTC m=+0.078251061 container remove 0f596b451e1c295e34557ba251f9f159adacb53bee5f3c107a25baa0cd793f38 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:51:50 compute-0 systemd[1]: libpod-conmon-0f596b451e1c295e34557ba251f9f159adacb53bee5f3c107a25baa0cd793f38.scope: Deactivated successfully.
Jan 26 15:51:50 compute-0 sudo[262497]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Jan 26 15:51:50 compute-0 ceph-mon[75140]: pgmap v1107: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 554 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 8.0 MiB/s wr, 293 op/s
Jan 26 15:51:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Jan 26 15:51:50 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Jan 26 15:51:50 compute-0 sudo[263569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:51:50 compute-0 sudo[263569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:50 compute-0 sudo[263569]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:50 compute-0 nova_compute[239965]: 2026-01-26 15:51:50.993 239969 DEBUG oslo_concurrency.lockutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "d82c98a2-a53f-4e4f-a132-042f797f2c91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:50 compute-0 nova_compute[239965]: 2026-01-26 15:51:50.994 239969 DEBUG oslo_concurrency.lockutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:50 compute-0 nova_compute[239965]: 2026-01-26 15:51:50.994 239969 DEBUG oslo_concurrency.lockutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:50 compute-0 nova_compute[239965]: 2026-01-26 15:51:50.994 239969 DEBUG oslo_concurrency.lockutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:50 compute-0 nova_compute[239965]: 2026-01-26 15:51:50.996 239969 DEBUG oslo_concurrency.lockutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:50 compute-0 nova_compute[239965]: 2026-01-26 15:51:50.998 239969 INFO nova.compute.manager [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Terminating instance
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.000 239969 DEBUG nova.compute.manager [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.006 239969 DEBUG nova.storage.rbd_utils [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] cloning vms/342bf84f-0f28-4939-b844-ed14ee42c01c_disk@4faba45e9dd44267b1271dc885398435 to images/5c958714-b066-4366-9e88-c4b813d307b6 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:51:51 compute-0 sudo[263594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:51:51 compute-0 sudo[263594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:51 compute-0 kernel: tap5b47f23b-ae (unregistering): left promiscuous mode
Jan 26 15:51:51 compute-0 NetworkManager[48954]: <info>  [1769442711.1080] device (tap5b47f23b-ae): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:51:51 compute-0 ovn_controller[146046]: 2026-01-26T15:51:51Z|00123|binding|INFO|Releasing lport 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b from this chassis (sb_readonly=0)
Jan 26 15:51:51 compute-0 ovn_controller[146046]: 2026-01-26T15:51:51Z|00124|binding|INFO|Setting lport 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b down in Southbound
Jan 26 15:51:51 compute-0 ovn_controller[146046]: 2026-01-26T15:51:51Z|00125|binding|INFO|Removing iface tap5b47f23b-ae ovn-installed in OVS
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.120 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.144 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:51 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Deactivated successfully.
Jan 26 15:51:51 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000013.scope: Consumed 14.834s CPU time.
Jan 26 15:51:51 compute-0 systemd-machined[208061]: Machine qemu-21-instance-00000013 terminated.
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.174 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:b0:29 10.100.0.8'], port_security=['fa:16:3e:c8:b0:29 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'd82c98a2-a53f-4e4f-a132-042f797f2c91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.176 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc unbound from our chassis
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.179 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.204 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da527c08-6be0-4b43-baf4-469c24104385]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.210 239969 DEBUG nova.storage.rbd_utils [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] flattening images/5c958714-b066-4366-9e88-c4b813d307b6 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.247 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c96ba64a-a5a1-40a8-a60d-57af11464a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.252 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d007b602-6a2a-46d8-b8fb-75904e876d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.299 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[17d83ccc-b159-4018-87df-e157e77041e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.322 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.321 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4495dcea-3cbd-4f1c-9207-38fe6a6babef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 868, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 20, 'rx_bytes': 868, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 26088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263705, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.326 239969 INFO nova.virt.libvirt.driver [-] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Instance destroyed successfully.
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.327 239969 DEBUG nova.objects.instance [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'resources' on Instance uuid d82c98a2-a53f-4e4f-a132-042f797f2c91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.343 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5c5523-94cf-40c3-ad56-6e256fa157b1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263708, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263708, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.345 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.347 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.353 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.353 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.354 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.354 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:51.354 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.389 239969 DEBUG nova.virt.libvirt.vif [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:50:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1570723011',display_name='tempest-ServersAdminTestJSON-server-1570723011',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1570723011',id=19,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:51:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-9s64v06v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:51:05Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=d82c98a2-a53f-4e4f-a132-042f797f2c91,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.390 239969 DEBUG nova.network.os_vif_util [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "address": "fa:16:3e:c8:b0:29", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5b47f23b-ae", "ovs_interfaceid": "5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.391 239969 DEBUG nova.network.os_vif_util [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:b0:29,bridge_name='br-int',has_traffic_filtering=True,id=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b47f23b-ae') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.398 239969 DEBUG os_vif [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:b0:29,bridge_name='br-int',has_traffic_filtering=True,id=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b47f23b-ae') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.403 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.404 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5b47f23b-ae, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.405 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.406 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.408 239969 INFO os_vif [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:b0:29,bridge_name='br-int',has_traffic_filtering=True,id=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5b47f23b-ae')
Jan 26 15:51:51 compute-0 podman[263706]: 2026-01-26 15:51:51.453591511 +0000 UTC m=+0.110484095 container create 8f975f2e43aa31f8b8d20d363691862e43e3f415b28b407c3d61d5ed84ef0708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noether, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:51:51 compute-0 podman[263706]: 2026-01-26 15:51:51.388632103 +0000 UTC m=+0.045524707 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:51:51 compute-0 systemd[1]: Started libpod-conmon-8f975f2e43aa31f8b8d20d363691862e43e3f415b28b407c3d61d5ed84ef0708.scope.
Jan 26 15:51:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:51 compute-0 podman[263706]: 2026-01-26 15:51:51.568808019 +0000 UTC m=+0.225700633 container init 8f975f2e43aa31f8b8d20d363691862e43e3f415b28b407c3d61d5ed84ef0708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:51:51 compute-0 podman[263706]: 2026-01-26 15:51:51.577342806 +0000 UTC m=+0.234235390 container start 8f975f2e43aa31f8b8d20d363691862e43e3f415b28b407c3d61d5ed84ef0708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 15:51:51 compute-0 blissful_noether[263743]: 167 167
Jan 26 15:51:51 compute-0 systemd[1]: libpod-8f975f2e43aa31f8b8d20d363691862e43e3f415b28b407c3d61d5ed84ef0708.scope: Deactivated successfully.
Jan 26 15:51:51 compute-0 podman[263706]: 2026-01-26 15:51:51.583881355 +0000 UTC m=+0.240773959 container attach 8f975f2e43aa31f8b8d20d363691862e43e3f415b28b407c3d61d5ed84ef0708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noether, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 15:51:51 compute-0 podman[263706]: 2026-01-26 15:51:51.584657604 +0000 UTC m=+0.241550198 container died 8f975f2e43aa31f8b8d20d363691862e43e3f415b28b407c3d61d5ed84ef0708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noether, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.592 239969 DEBUG nova.storage.rbd_utils [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] removing snapshot(4faba45e9dd44267b1271dc885398435) on rbd image(342bf84f-0f28-4939-b844-ed14ee42c01c_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d65ce3fb95bba9678fb29e8876a20842b88e0be5b3f7bbe7eaf25566d18098dd-merged.mount: Deactivated successfully.
Jan 26 15:51:51 compute-0 podman[263706]: 2026-01-26 15:51:51.629235386 +0000 UTC m=+0.286127970 container remove 8f975f2e43aa31f8b8d20d363691862e43e3f415b28b407c3d61d5ed84ef0708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 15:51:51 compute-0 systemd[1]: libpod-conmon-8f975f2e43aa31f8b8d20d363691862e43e3f415b28b407c3d61d5ed84ef0708.scope: Deactivated successfully.
Jan 26 15:51:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 561 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 11 MiB/s wr, 441 op/s
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.772 239969 INFO nova.virt.libvirt.driver [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Deleting instance files /var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91_del
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.773 239969 INFO nova.virt.libvirt.driver [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Deletion of /var/lib/nova/instances/d82c98a2-a53f-4e4f-a132-042f797f2c91_del complete
Jan 26 15:51:51 compute-0 podman[263784]: 2026-01-26 15:51:51.838695154 +0000 UTC m=+0.043979000 container create 3ddc3e6bdd22202f39047d8061e3d51bbf9e68b85699c678a64ab4e6acdf5c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.840 239969 DEBUG nova.compute.manager [req-34811d97-42a7-4b73-a3e9-73c046a6a205 req-7d6abe97-31ec-4ba0-8b4b-50df80921b5f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Received event network-vif-unplugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.842 239969 DEBUG oslo_concurrency.lockutils [req-34811d97-42a7-4b73-a3e9-73c046a6a205 req-7d6abe97-31ec-4ba0-8b4b-50df80921b5f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.842 239969 DEBUG oslo_concurrency.lockutils [req-34811d97-42a7-4b73-a3e9-73c046a6a205 req-7d6abe97-31ec-4ba0-8b4b-50df80921b5f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.842 239969 DEBUG oslo_concurrency.lockutils [req-34811d97-42a7-4b73-a3e9-73c046a6a205 req-7d6abe97-31ec-4ba0-8b4b-50df80921b5f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.843 239969 DEBUG nova.compute.manager [req-34811d97-42a7-4b73-a3e9-73c046a6a205 req-7d6abe97-31ec-4ba0-8b4b-50df80921b5f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] No waiting events found dispatching network-vif-unplugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.843 239969 DEBUG nova.compute.manager [req-34811d97-42a7-4b73-a3e9-73c046a6a205 req-7d6abe97-31ec-4ba0-8b4b-50df80921b5f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Received event network-vif-unplugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:51:51 compute-0 systemd[1]: Started libpod-conmon-3ddc3e6bdd22202f39047d8061e3d51bbf9e68b85699c678a64ab4e6acdf5c09.scope.
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.890 239969 DEBUG nova.compute.manager [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:51:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56be4dd4be15182b96c35e4cf12e56fe3fac02cee5d2e777dcd9ae818afd6eb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56be4dd4be15182b96c35e4cf12e56fe3fac02cee5d2e777dcd9ae818afd6eb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56be4dd4be15182b96c35e4cf12e56fe3fac02cee5d2e777dcd9ae818afd6eb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56be4dd4be15182b96c35e4cf12e56fe3fac02cee5d2e777dcd9ae818afd6eb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56be4dd4be15182b96c35e4cf12e56fe3fac02cee5d2e777dcd9ae818afd6eb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:51 compute-0 podman[263784]: 2026-01-26 15:51:51.819759803 +0000 UTC m=+0.025043659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.922 239969 INFO nova.compute.manager [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Took 0.92 seconds to destroy the instance on the hypervisor.
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.923 239969 DEBUG oslo.service.loopingcall [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.923 239969 DEBUG nova.compute.manager [-] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:51:51 compute-0 nova_compute[239965]: 2026-01-26 15:51:51.923 239969 DEBUG nova.network.neutron [-] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:51:51 compute-0 podman[263784]: 2026-01-26 15:51:51.924467857 +0000 UTC m=+0.129751723 container init 3ddc3e6bdd22202f39047d8061e3d51bbf9e68b85699c678a64ab4e6acdf5c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:51:51 compute-0 podman[263784]: 2026-01-26 15:51:51.931936918 +0000 UTC m=+0.137220754 container start 3ddc3e6bdd22202f39047d8061e3d51bbf9e68b85699c678a64ab4e6acdf5c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 15:51:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Jan 26 15:51:51 compute-0 podman[263784]: 2026-01-26 15:51:51.936696414 +0000 UTC m=+0.141980280 container attach 3ddc3e6bdd22202f39047d8061e3d51bbf9e68b85699c678a64ab4e6acdf5c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:51:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Jan 26 15:51:51 compute-0 ceph-mon[75140]: osdmap e177: 3 total, 3 up, 3 in
Jan 26 15:51:51 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.021 239969 DEBUG nova.storage.rbd_utils [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(snap) on rbd image(5c958714-b066-4366-9e88-c4b813d307b6) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.352 239969 INFO nova.compute.manager [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] instance snapshotting
Jan 26 15:51:52 compute-0 silly_merkle[263800]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:51:52 compute-0 silly_merkle[263800]: --> All data devices are unavailable
Jan 26 15:51:52 compute-0 systemd[1]: libpod-3ddc3e6bdd22202f39047d8061e3d51bbf9e68b85699c678a64ab4e6acdf5c09.scope: Deactivated successfully.
Jan 26 15:51:52 compute-0 podman[263784]: 2026-01-26 15:51:52.437961378 +0000 UTC m=+0.643245234 container died 3ddc3e6bdd22202f39047d8061e3d51bbf9e68b85699c678a64ab4e6acdf5c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-56be4dd4be15182b96c35e4cf12e56fe3fac02cee5d2e777dcd9ae818afd6eb3-merged.mount: Deactivated successfully.
Jan 26 15:51:52 compute-0 podman[263784]: 2026-01-26 15:51:52.489061368 +0000 UTC m=+0.694345204 container remove 3ddc3e6bdd22202f39047d8061e3d51bbf9e68b85699c678a64ab4e6acdf5c09 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 15:51:52 compute-0 systemd[1]: libpod-conmon-3ddc3e6bdd22202f39047d8061e3d51bbf9e68b85699c678a64ab4e6acdf5c09.scope: Deactivated successfully.
Jan 26 15:51:52 compute-0 podman[263839]: 2026-01-26 15:51:52.544605978 +0000 UTC m=+0.075955416 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 26 15:51:52 compute-0 sudo[263594]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:52 compute-0 podman[263845]: 2026-01-26 15:51:52.576158853 +0000 UTC m=+0.101632878 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.578 239969 INFO nova.virt.libvirt.driver [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Beginning live snapshot process
Jan 26 15:51:52 compute-0 sudo[263886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:51:52 compute-0 sudo[263886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:52 compute-0 sudo[263886]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:52 compute-0 sudo[263914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:51:52 compute-0 sudo[263914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.717 239969 DEBUG nova.virt.libvirt.imagebackend [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.814 239969 DEBUG nova.network.neutron [-] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.877 239969 DEBUG nova.compute.manager [req-8f267275-e4c4-4257-b9a4-bb3aea1efa3f req-c6b7a495-9b9a-42db-a381-d928f75e9185 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Received event network-vif-deleted-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.877 239969 INFO nova.compute.manager [req-8f267275-e4c4-4257-b9a4-bb3aea1efa3f req-c6b7a495-9b9a-42db-a381-d928f75e9185 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Neutron deleted interface 5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b; detaching it from the instance and deleting it from the info cache
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.878 239969 DEBUG nova.network.neutron [req-8f267275-e4c4-4257-b9a4-bb3aea1efa3f req-c6b7a495-9b9a-42db-a381-d928f75e9185 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.880 239969 INFO nova.compute.manager [-] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Took 0.96 seconds to deallocate network for instance.
Jan 26 15:51:52 compute-0 nova_compute[239965]: 2026-01-26 15:51:52.916 239969 DEBUG nova.compute.manager [req-8f267275-e4c4-4257-b9a4-bb3aea1efa3f req-c6b7a495-9b9a-42db-a381-d928f75e9185 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Detach interface failed, port_id=5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b, reason: Instance d82c98a2-a53f-4e4f-a132-042f797f2c91 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 15:51:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Jan 26 15:51:52 compute-0 ceph-mon[75140]: pgmap v1109: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 561 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 11 MiB/s wr, 441 op/s
Jan 26 15:51:52 compute-0 ceph-mon[75140]: osdmap e178: 3 total, 3 up, 3 in
Jan 26 15:51:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Jan 26 15:51:52 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Jan 26 15:51:52 compute-0 podman[263981]: 2026-01-26 15:51:52.963402789 +0000 UTC m=+0.047791352 container create 5b77483a5a0eb310beb6b8e2f801bff6fff75c97ff83bbdf49d432ee65cadf16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:51:53 compute-0 systemd[1]: Started libpod-conmon-5b77483a5a0eb310beb6b8e2f801bff6fff75c97ff83bbdf49d432ee65cadf16.scope.
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.013 239969 DEBUG nova.storage.rbd_utils [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] creating snapshot(f4bd2f1b85714e0692912b9885aa53e3) on rbd image(ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:51:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:53 compute-0 podman[263981]: 2026-01-26 15:51:52.945138615 +0000 UTC m=+0.029527198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:51:53 compute-0 podman[263981]: 2026-01-26 15:51:53.050648757 +0000 UTC m=+0.135037340 container init 5b77483a5a0eb310beb6b8e2f801bff6fff75c97ff83bbdf49d432ee65cadf16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.060 239969 DEBUG oslo_concurrency.lockutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.061 239969 DEBUG oslo_concurrency.lockutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:53 compute-0 podman[263981]: 2026-01-26 15:51:53.062771622 +0000 UTC m=+0.147160175 container start 5b77483a5a0eb310beb6b8e2f801bff6fff75c97ff83bbdf49d432ee65cadf16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:51:53 compute-0 stoic_mayer[263996]: 167 167
Jan 26 15:51:53 compute-0 systemd[1]: libpod-5b77483a5a0eb310beb6b8e2f801bff6fff75c97ff83bbdf49d432ee65cadf16.scope: Deactivated successfully.
Jan 26 15:51:53 compute-0 podman[263981]: 2026-01-26 15:51:53.069951946 +0000 UTC m=+0.154340529 container attach 5b77483a5a0eb310beb6b8e2f801bff6fff75c97ff83bbdf49d432ee65cadf16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 15:51:53 compute-0 podman[263981]: 2026-01-26 15:51:53.070369846 +0000 UTC m=+0.154758399 container died 5b77483a5a0eb310beb6b8e2f801bff6fff75c97ff83bbdf49d432ee65cadf16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-46d30f17f9055e7977640a5bde04075a336293216efdd56685391f631936d6a9-merged.mount: Deactivated successfully.
Jan 26 15:51:53 compute-0 podman[263981]: 2026-01-26 15:51:53.113170566 +0000 UTC m=+0.197559129 container remove 5b77483a5a0eb310beb6b8e2f801bff6fff75c97ff83bbdf49d432ee65cadf16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_mayer, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 15:51:53 compute-0 systemd[1]: libpod-conmon-5b77483a5a0eb310beb6b8e2f801bff6fff75c97ff83bbdf49d432ee65cadf16.scope: Deactivated successfully.
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.217 239969 DEBUG oslo_concurrency.processutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:53 compute-0 podman[264040]: 2026-01-26 15:51:53.318630466 +0000 UTC m=+0.056871133 container create 88694e5816a65d8fc56a8a052accfe82b3c5242e4e2208aa91c6f3af2b564082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:51:53 compute-0 systemd[1]: Started libpod-conmon-88694e5816a65d8fc56a8a052accfe82b3c5242e4e2208aa91c6f3af2b564082.scope.
Jan 26 15:51:53 compute-0 podman[264040]: 2026-01-26 15:51:53.296908638 +0000 UTC m=+0.035149325 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:51:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d5216e6d1c9bfb06b61046ba080a5829d0c9117070e09768d499211ba568d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d5216e6d1c9bfb06b61046ba080a5829d0c9117070e09768d499211ba568d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d5216e6d1c9bfb06b61046ba080a5829d0c9117070e09768d499211ba568d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26d5216e6d1c9bfb06b61046ba080a5829d0c9117070e09768d499211ba568d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:53 compute-0 podman[264040]: 2026-01-26 15:51:53.418299486 +0000 UTC m=+0.156540173 container init 88694e5816a65d8fc56a8a052accfe82b3c5242e4e2208aa91c6f3af2b564082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_engelbart, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:51:53 compute-0 podman[264040]: 2026-01-26 15:51:53.426285401 +0000 UTC m=+0.164526068 container start 88694e5816a65d8fc56a8a052accfe82b3c5242e4e2208aa91c6f3af2b564082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_engelbart, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:51:53 compute-0 podman[264040]: 2026-01-26 15:51:53.430584405 +0000 UTC m=+0.168825092 container attach 88694e5816a65d8fc56a8a052accfe82b3c5242e4e2208aa91c6f3af2b564082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:51:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 511 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 9.4 MiB/s wr, 676 op/s
Jan 26 15:51:53 compute-0 sad_engelbart[264070]: {
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:     "0": [
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:         {
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "devices": [
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "/dev/loop3"
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             ],
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_name": "ceph_lv0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_size": "21470642176",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "name": "ceph_lv0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "tags": {
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.cluster_name": "ceph",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.crush_device_class": "",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.encrypted": "0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.objectstore": "bluestore",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.osd_id": "0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.type": "block",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.vdo": "0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.with_tpm": "0"
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             },
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "type": "block",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "vg_name": "ceph_vg0"
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:         }
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:     ],
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:     "1": [
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:         {
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "devices": [
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "/dev/loop4"
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             ],
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_name": "ceph_lv1",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_size": "21470642176",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "name": "ceph_lv1",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "tags": {
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.cluster_name": "ceph",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.crush_device_class": "",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.encrypted": "0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.objectstore": "bluestore",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.osd_id": "1",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.type": "block",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.vdo": "0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.with_tpm": "0"
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             },
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "type": "block",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "vg_name": "ceph_vg1"
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:         }
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:     ],
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:     "2": [
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:         {
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "devices": [
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "/dev/loop5"
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             ],
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_name": "ceph_lv2",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_size": "21470642176",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "name": "ceph_lv2",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "tags": {
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.cluster_name": "ceph",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.crush_device_class": "",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.encrypted": "0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.objectstore": "bluestore",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.osd_id": "2",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.type": "block",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.vdo": "0",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:                 "ceph.with_tpm": "0"
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             },
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "type": "block",
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:             "vg_name": "ceph_vg2"
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:         }
Jan 26 15:51:53 compute-0 sad_engelbart[264070]:     ]
Jan 26 15:51:53 compute-0 sad_engelbart[264070]: }
Jan 26 15:51:53 compute-0 systemd[1]: libpod-88694e5816a65d8fc56a8a052accfe82b3c5242e4e2208aa91c6f3af2b564082.scope: Deactivated successfully.
Jan 26 15:51:53 compute-0 podman[264040]: 2026-01-26 15:51:53.771775001 +0000 UTC m=+0.510015678 container died 88694e5816a65d8fc56a8a052accfe82b3c5242e4e2208aa91c6f3af2b564082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-26d5216e6d1c9bfb06b61046ba080a5829d0c9117070e09768d499211ba568d5-merged.mount: Deactivated successfully.
Jan 26 15:51:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3151621942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:53 compute-0 podman[264040]: 2026-01-26 15:51:53.81989764 +0000 UTC m=+0.558138317 container remove 88694e5816a65d8fc56a8a052accfe82b3c5242e4e2208aa91c6f3af2b564082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:51:53 compute-0 systemd[1]: libpod-conmon-88694e5816a65d8fc56a8a052accfe82b3c5242e4e2208aa91c6f3af2b564082.scope: Deactivated successfully.
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.845 239969 DEBUG oslo_concurrency.processutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.851 239969 DEBUG nova.compute.provider_tree [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:53 compute-0 sudo[263914]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.941 239969 DEBUG nova.compute.manager [req-ba358b4a-f8b9-4027-a32a-7bface128b93 req-b4e11551-7989-4f49-a7ad-2b164ebde375 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Received event network-vif-plugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.942 239969 DEBUG oslo_concurrency.lockutils [req-ba358b4a-f8b9-4027-a32a-7bface128b93 req-b4e11551-7989-4f49-a7ad-2b164ebde375 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.942 239969 DEBUG oslo_concurrency.lockutils [req-ba358b4a-f8b9-4027-a32a-7bface128b93 req-b4e11551-7989-4f49-a7ad-2b164ebde375 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.942 239969 DEBUG oslo_concurrency.lockutils [req-ba358b4a-f8b9-4027-a32a-7bface128b93 req-b4e11551-7989-4f49-a7ad-2b164ebde375 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.942 239969 DEBUG nova.compute.manager [req-ba358b4a-f8b9-4027-a32a-7bface128b93 req-b4e11551-7989-4f49-a7ad-2b164ebde375 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] No waiting events found dispatching network-vif-plugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.943 239969 WARNING nova.compute.manager [req-ba358b4a-f8b9-4027-a32a-7bface128b93 req-b4e11551-7989-4f49-a7ad-2b164ebde375 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Received unexpected event network-vif-plugged-5b47f23b-ae9c-4b4e-ae97-280ed3f2c44b for instance with vm_state deleted and task_state None.
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.945 239969 DEBUG nova.scheduler.client.report [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Jan 26 15:51:53 compute-0 ceph-mon[75140]: osdmap e179: 3 total, 3 up, 3 in
Jan 26 15:51:53 compute-0 ceph-mon[75140]: pgmap v1112: 305 pgs: 5 active+clean+snaptrim_wait, 4 active+clean+snaptrim, 296 active+clean; 511 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 9.4 MiB/s wr, 676 op/s
Jan 26 15:51:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3151621942' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Jan 26 15:51:53 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Jan 26 15:51:53 compute-0 nova_compute[239965]: 2026-01-26 15:51:53.975 239969 DEBUG oslo_concurrency.lockutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:53 compute-0 sudo[264099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:51:53 compute-0 sudo[264099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:53 compute-0 sudo[264099]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.009 239969 INFO nova.scheduler.client.report [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Deleted allocations for instance d82c98a2-a53f-4e4f-a132-042f797f2c91
Jan 26 15:51:54 compute-0 sudo[264124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:51:54 compute-0 sudo[264124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.106 239969 DEBUG oslo_concurrency.lockutils [None req-ab9c9470-1f39-4865-9598-767101cf7566 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "d82c98a2-a53f-4e4f-a132-042f797f2c91" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.122 239969 DEBUG nova.storage.rbd_utils [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] cloning vms/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk@f4bd2f1b85714e0692912b9885aa53e3 to images/310f8a22-427c-4898-84be-4588800ee6fa clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:51:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Jan 26 15:51:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Jan 26 15:51:54 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.331 239969 DEBUG nova.storage.rbd_utils [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] flattening images/310f8a22-427c-4898-84be-4588800ee6fa flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:51:54 compute-0 podman[264196]: 2026-01-26 15:51:54.402095469 +0000 UTC m=+0.059064886 container create 0ff4bb125fd6bcb7dfcf61142115016b3670ef8f7d0175e83fb486094c7ffbbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 15:51:54 compute-0 systemd[1]: Started libpod-conmon-0ff4bb125fd6bcb7dfcf61142115016b3670ef8f7d0175e83fb486094c7ffbbf.scope.
Jan 26 15:51:54 compute-0 podman[264196]: 2026-01-26 15:51:54.369911048 +0000 UTC m=+0.026880495 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:51:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:54 compute-0 podman[264196]: 2026-01-26 15:51:54.53474684 +0000 UTC m=+0.191716287 container init 0ff4bb125fd6bcb7dfcf61142115016b3670ef8f7d0175e83fb486094c7ffbbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 15:51:54 compute-0 podman[264196]: 2026-01-26 15:51:54.544760963 +0000 UTC m=+0.201730380 container start 0ff4bb125fd6bcb7dfcf61142115016b3670ef8f7d0175e83fb486094c7ffbbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:51:54 compute-0 heuristic_margulis[264230]: 167 167
Jan 26 15:51:54 compute-0 systemd[1]: libpod-0ff4bb125fd6bcb7dfcf61142115016b3670ef8f7d0175e83fb486094c7ffbbf.scope: Deactivated successfully.
Jan 26 15:51:54 compute-0 podman[264196]: 2026-01-26 15:51:54.553179198 +0000 UTC m=+0.210148615 container attach 0ff4bb125fd6bcb7dfcf61142115016b3670ef8f7d0175e83fb486094c7ffbbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:51:54 compute-0 podman[264196]: 2026-01-26 15:51:54.554469889 +0000 UTC m=+0.211439306 container died 0ff4bb125fd6bcb7dfcf61142115016b3670ef8f7d0175e83fb486094c7ffbbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.655 239969 INFO nova.virt.libvirt.driver [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Snapshot image upload complete
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.656 239969 INFO nova.compute.manager [None req-8a2705a0-a34b-4436-8a09-8fa185e400b5 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Took 5.47 seconds to snapshot the instance on the hypervisor.
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.709 239969 DEBUG oslo_concurrency.lockutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.709 239969 DEBUG oslo_concurrency.lockutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.710 239969 DEBUG oslo_concurrency.lockutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.710 239969 DEBUG oslo_concurrency.lockutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.710 239969 DEBUG oslo_concurrency.lockutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.711 239969 INFO nova.compute.manager [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Terminating instance
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.712 239969 DEBUG nova.compute.manager [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d135722d3b76a80716c03e8897fe29de501b6f9c6953de355368a397d7cc5c50-merged.mount: Deactivated successfully.
Jan 26 15:51:54 compute-0 kernel: tapbba927f8-b9 (unregistering): left promiscuous mode
Jan 26 15:51:54 compute-0 NetworkManager[48954]: <info>  [1769442714.7789] device (tapbba927f8-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:51:54 compute-0 podman[264196]: 2026-01-26 15:51:54.785824029 +0000 UTC m=+0.442793446 container remove 0ff4bb125fd6bcb7dfcf61142115016b3670ef8f7d0175e83fb486094c7ffbbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_margulis, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:51:54 compute-0 ovn_controller[146046]: 2026-01-26T15:51:54Z|00126|binding|INFO|Releasing lport bba927f8-b9ac-4383-94b1-86a81c2844fd from this chassis (sb_readonly=0)
Jan 26 15:51:54 compute-0 ovn_controller[146046]: 2026-01-26T15:51:54Z|00127|binding|INFO|Setting lport bba927f8-b9ac-4383-94b1-86a81c2844fd down in Southbound
Jan 26 15:51:54 compute-0 ovn_controller[146046]: 2026-01-26T15:51:54Z|00128|binding|INFO|Removing iface tapbba927f8-b9 ovn-installed in OVS
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.827 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:54 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Deactivated successfully.
Jan 26 15:51:54 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000012.scope: Consumed 15.155s CPU time.
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.833 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:54 compute-0 systemd-machined[208061]: Machine qemu-20-instance-00000012 terminated.
Jan 26 15:51:54 compute-0 systemd[1]: libpod-conmon-0ff4bb125fd6bcb7dfcf61142115016b3670ef8f7d0175e83fb486094c7ffbbf.scope: Deactivated successfully.
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.851 239969 DEBUG nova.storage.rbd_utils [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] removing snapshot(f4bd2f1b85714e0692912b9885aa53e3) on rbd image(ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.951 239969 INFO nova.virt.libvirt.driver [-] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Instance destroyed successfully.
Jan 26 15:51:54 compute-0 nova_compute[239965]: 2026-01-26 15:51:54.951 239969 DEBUG nova.objects.instance [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'resources' on Instance uuid dcf4bd4f-ec23-4c13-b04d-b696bf93f350 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:54 compute-0 ceph-mon[75140]: osdmap e180: 3 total, 3 up, 3 in
Jan 26 15:51:54 compute-0 ceph-mon[75140]: osdmap e181: 3 total, 3 up, 3 in
Jan 26 15:51:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:54.998 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:ab:15 10.100.0.7'], port_security=['fa:16:3e:cd:ab:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dcf4bd4f-ec23-4c13-b04d-b696bf93f350', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=bba927f8-b9ac-4383-94b1-86a81c2844fd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.000 156105 INFO neutron.agent.ovn.metadata.agent [-] Port bba927f8-b9ac-4383-94b1-86a81c2844fd in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc unbound from our chassis
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.002 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.009 239969 DEBUG nova.virt.libvirt.vif [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:50:29Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-109127296',display_name='tempest-ServersAdminTestJSON-server-109127296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-109127296',id=18,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:50:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-vqo9e7of',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:50:45Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=dcf4bd4f-ec23-4c13-b04d-b696bf93f350,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.010 239969 DEBUG nova.network.os_vif_util [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "address": "fa:16:3e:cd:ab:15", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbba927f8-b9", "ovs_interfaceid": "bba927f8-b9ac-4383-94b1-86a81c2844fd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.011 239969 DEBUG nova.network.os_vif_util [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:ab:15,bridge_name='br-int',has_traffic_filtering=True,id=bba927f8-b9ac-4383-94b1-86a81c2844fd,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbba927f8-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.011 239969 DEBUG os_vif [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:ab:15,bridge_name='br-int',has_traffic_filtering=True,id=bba927f8-b9ac-4383-94b1-86a81c2844fd,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbba927f8-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.013 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbba927f8-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.016 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.017 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.018 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6e20bd82-b026-4c18-9365-b50c7c60247f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.019 239969 INFO os_vif [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:ab:15,bridge_name='br-int',has_traffic_filtering=True,id=bba927f8-b9ac-4383-94b1-86a81c2844fd,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbba927f8-b9')
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.044 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb75706-0d8b-4798-b1ae-6036f4331811]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.048 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2282f7ca-b99e-40b8-9bd9-bb9029dd0dfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.071 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0ca7f639-367c-4f7e-8aa8-1c3ca32c0b44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:55 compute-0 podman[264279]: 2026-01-26 15:51:54.983074868 +0000 UTC m=+0.034467757 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:51:55 compute-0 podman[264279]: 2026-01-26 15:51:55.087593687 +0000 UTC m=+0.138986556 container create b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.094 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e546fea9-88ce-4642-b799-d4c8470f9cb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 22, 'rx_bytes': 868, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 22, 'rx_bytes': 868, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 26088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264316, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.122 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8daaccbb-ecf2-47ff-804b-7d877db47051]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264325, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264325, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.124 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.127 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.127 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.127 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.127 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:55.128 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:55 compute-0 systemd[1]: Started libpod-conmon-b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5.scope.
Jan 26 15:51:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a901f8436030505a3e93eceb48a87508589610596aabc0b15c8fb2867f253/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a901f8436030505a3e93eceb48a87508589610596aabc0b15c8fb2867f253/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a901f8436030505a3e93eceb48a87508589610596aabc0b15c8fb2867f253/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa6a901f8436030505a3e93eceb48a87508589610596aabc0b15c8fb2867f253/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:51:55 compute-0 podman[264279]: 2026-01-26 15:51:55.215269778 +0000 UTC m=+0.266662667 container init b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 15:51:55 compute-0 podman[264279]: 2026-01-26 15:51:55.224895302 +0000 UTC m=+0.276288161 container start b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:51:55 compute-0 podman[264279]: 2026-01-26 15:51:55.236106214 +0000 UTC m=+0.287499153 container attach b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:51:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Jan 26 15:51:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Jan 26 15:51:55 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.279 239969 DEBUG nova.storage.rbd_utils [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] creating snapshot(snap) on rbd image(310f8a22-427c-4898-84be-4588800ee6fa) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.552 239969 INFO nova.virt.libvirt.driver [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Deleting instance files /var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350_del
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.553 239969 INFO nova.virt.libvirt.driver [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Deletion of /var/lib/nova/instances/dcf4bd4f-ec23-4c13-b04d-b696bf93f350_del complete
Jan 26 15:51:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 6 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 293 active+clean; 505 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 17 MiB/s wr, 934 op/s
Jan 26 15:51:55 compute-0 lvm[264426]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:51:55 compute-0 lvm[264426]: VG ceph_vg0 finished
Jan 26 15:51:55 compute-0 lvm[264428]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:51:55 compute-0 lvm[264428]: VG ceph_vg1 finished
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.972 239969 DEBUG nova.compute.manager [req-4bf56fb0-38e2-4dd1-8e97-e0e19c8bb803 req-9f307475-a316-4893-905c-69f52fb20d1c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Received event network-vif-unplugged-bba927f8-b9ac-4383-94b1-86a81c2844fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.972 239969 DEBUG oslo_concurrency.lockutils [req-4bf56fb0-38e2-4dd1-8e97-e0e19c8bb803 req-9f307475-a316-4893-905c-69f52fb20d1c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.973 239969 DEBUG oslo_concurrency.lockutils [req-4bf56fb0-38e2-4dd1-8e97-e0e19c8bb803 req-9f307475-a316-4893-905c-69f52fb20d1c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.973 239969 DEBUG oslo_concurrency.lockutils [req-4bf56fb0-38e2-4dd1-8e97-e0e19c8bb803 req-9f307475-a316-4893-905c-69f52fb20d1c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.973 239969 DEBUG nova.compute.manager [req-4bf56fb0-38e2-4dd1-8e97-e0e19c8bb803 req-9f307475-a316-4893-905c-69f52fb20d1c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] No waiting events found dispatching network-vif-unplugged-bba927f8-b9ac-4383-94b1-86a81c2844fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:55 compute-0 nova_compute[239965]: 2026-01-26 15:51:55.973 239969 DEBUG nova.compute.manager [req-4bf56fb0-38e2-4dd1-8e97-e0e19c8bb803 req-9f307475-a316-4893-905c-69f52fb20d1c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Received event network-vif-unplugged-bba927f8-b9ac-4383-94b1-86a81c2844fd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:51:55 compute-0 lvm[264429]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:51:55 compute-0 lvm[264429]: VG ceph_vg2 finished
Jan 26 15:51:56 compute-0 nova_compute[239965]: 2026-01-26 15:51:56.025 239969 INFO nova.compute.manager [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Took 1.31 seconds to destroy the instance on the hypervisor.
Jan 26 15:51:56 compute-0 nova_compute[239965]: 2026-01-26 15:51:56.026 239969 DEBUG oslo.service.loopingcall [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:51:56 compute-0 nova_compute[239965]: 2026-01-26 15:51:56.026 239969 DEBUG nova.compute.manager [-] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:51:56 compute-0 nova_compute[239965]: 2026-01-26 15:51:56.027 239969 DEBUG nova.network.neutron [-] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:51:56 compute-0 kind_poincare[264331]: {}
Jan 26 15:51:56 compute-0 systemd[1]: libpod-b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5.scope: Deactivated successfully.
Jan 26 15:51:56 compute-0 systemd[1]: libpod-b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5.scope: Consumed 1.309s CPU time.
Jan 26 15:51:56 compute-0 podman[264432]: 2026-01-26 15:51:56.125775551 +0000 UTC m=+0.021464443 container died b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:51:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa6a901f8436030505a3e93eceb48a87508589610596aabc0b15c8fb2867f253-merged.mount: Deactivated successfully.
Jan 26 15:51:56 compute-0 podman[264432]: 2026-01-26 15:51:56.225011621 +0000 UTC m=+0.120700513 container remove b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:51:56 compute-0 systemd[1]: libpod-conmon-b3d6c81ca761e782869f76a4a69541b3462f2a00433c1f3382fdbb202deae0b5.scope: Deactivated successfully.
Jan 26 15:51:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Jan 26 15:51:56 compute-0 sudo[264124]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:51:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Jan 26 15:51:56 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Jan 26 15:51:56 compute-0 ceph-mon[75140]: osdmap e182: 3 total, 3 up, 3 in
Jan 26 15:51:56 compute-0 ceph-mon[75140]: pgmap v1116: 305 pgs: 6 active+clean+snaptrim_wait, 6 active+clean+snaptrim, 293 active+clean; 505 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 17 MiB/s wr, 934 op/s
Jan 26 15:51:56 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:51:56 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:56 compute-0 sudo[264445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:51:56 compute-0 sudo[264445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:51:56 compute-0 sudo[264445]: pam_unix(sudo:session): session closed for user root
Jan 26 15:51:56 compute-0 nova_compute[239965]: 2026-01-26 15:51:56.759 239969 DEBUG nova.network.neutron [-] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:51:56 compute-0 nova_compute[239965]: 2026-01-26 15:51:56.786 239969 INFO nova.compute.manager [-] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Took 0.76 seconds to deallocate network for instance.
Jan 26 15:51:56 compute-0 nova_compute[239965]: 2026-01-26 15:51:56.869 239969 DEBUG oslo_concurrency.lockutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:56 compute-0 nova_compute[239965]: 2026-01-26 15:51:56.869 239969 DEBUG oslo_concurrency.lockutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:57 compute-0 nova_compute[239965]: 2026-01-26 15:51:57.019 239969 DEBUG oslo_concurrency.processutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:51:57 compute-0 ceph-mon[75140]: osdmap e183: 3 total, 3 up, 3 in
Jan 26 15:51:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:51:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:51:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2176360249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 513 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 15 MiB/s wr, 568 op/s
Jan 26 15:51:57 compute-0 nova_compute[239965]: 2026-01-26 15:51:57.667 239969 DEBUG oslo_concurrency.processutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:51:57 compute-0 nova_compute[239965]: 2026-01-26 15:51:57.675 239969 DEBUG nova.compute.provider_tree [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:51:57 compute-0 nova_compute[239965]: 2026-01-26 15:51:57.867 239969 DEBUG nova.scheduler.client.report [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:51:57 compute-0 nova_compute[239965]: 2026-01-26 15:51:57.900 239969 DEBUG oslo_concurrency.lockutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.030s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:57 compute-0 nova_compute[239965]: 2026-01-26 15:51:57.951 239969 INFO nova.scheduler.client.report [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Deleted allocations for instance dcf4bd4f-ec23-4c13-b04d-b696bf93f350
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.015 239969 DEBUG oslo_concurrency.lockutils [None req-7e942f71-1654-4d7e-8ef8-7228b165bb6f b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.140 239969 DEBUG nova.compute.manager [req-c9b739a1-42c2-4578-88ed-70eb0e01b054 req-42a9971f-24c5-48b3-a461-878d718b7d54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Received event network-vif-plugged-bba927f8-b9ac-4383-94b1-86a81c2844fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.140 239969 DEBUG oslo_concurrency.lockutils [req-c9b739a1-42c2-4578-88ed-70eb0e01b054 req-42a9971f-24c5-48b3-a461-878d718b7d54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.141 239969 DEBUG oslo_concurrency.lockutils [req-c9b739a1-42c2-4578-88ed-70eb0e01b054 req-42a9971f-24c5-48b3-a461-878d718b7d54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.142 239969 DEBUG oslo_concurrency.lockutils [req-c9b739a1-42c2-4578-88ed-70eb0e01b054 req-42a9971f-24c5-48b3-a461-878d718b7d54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dcf4bd4f-ec23-4c13-b04d-b696bf93f350-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.142 239969 DEBUG nova.compute.manager [req-c9b739a1-42c2-4578-88ed-70eb0e01b054 req-42a9971f-24c5-48b3-a461-878d718b7d54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] No waiting events found dispatching network-vif-plugged-bba927f8-b9ac-4383-94b1-86a81c2844fd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.142 239969 WARNING nova.compute.manager [req-c9b739a1-42c2-4578-88ed-70eb0e01b054 req-42a9971f-24c5-48b3-a461-878d718b7d54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Received unexpected event network-vif-plugged-bba927f8-b9ac-4383-94b1-86a81c2844fd for instance with vm_state deleted and task_state None.
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.143 239969 DEBUG nova.compute.manager [req-c9b739a1-42c2-4578-88ed-70eb0e01b054 req-42a9971f-24c5-48b3-a461-878d718b7d54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Received event network-vif-deleted-bba927f8-b9ac-4383-94b1-86a81c2844fd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:51:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2176360249' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:51:58 compute-0 ceph-mon[75140]: pgmap v1118: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 513 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 15 MiB/s wr, 568 op/s
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.377 239969 INFO nova.virt.libvirt.driver [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Snapshot image upload complete
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.377 239969 INFO nova.compute.manager [None req-4f1b3a07-176a-459d-9b85-23089d81a52e d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Took 6.02 seconds to snapshot the instance on the hypervisor.
Jan 26 15:51:58 compute-0 sshd-session[262432]: Connection closed by 3.137.73.221 port 38918 [preauth]
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.953 239969 DEBUG oslo_concurrency.lockutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "20e8eee5-8c27-4d2b-b132-afa685238f37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.953 239969 DEBUG oslo_concurrency.lockutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.954 239969 DEBUG oslo_concurrency.lockutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.954 239969 DEBUG oslo_concurrency.lockutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.955 239969 DEBUG oslo_concurrency.lockutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.956 239969 INFO nova.compute.manager [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Terminating instance
Jan 26 15:51:58 compute-0 nova_compute[239965]: 2026-01-26 15:51:58.958 239969 DEBUG nova.compute.manager [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:51:59 compute-0 kernel: tap25f90c0f-63 (unregistering): left promiscuous mode
Jan 26 15:51:59 compute-0 NetworkManager[48954]: <info>  [1769442719.0652] device (tap25f90c0f-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:51:59 compute-0 ovn_controller[146046]: 2026-01-26T15:51:59Z|00129|binding|INFO|Releasing lport 25f90c0f-639d-4310-a846-2630969fa405 from this chassis (sb_readonly=0)
Jan 26 15:51:59 compute-0 ovn_controller[146046]: 2026-01-26T15:51:59Z|00130|binding|INFO|Setting lport 25f90c0f-639d-4310-a846-2630969fa405 down in Southbound
Jan 26 15:51:59 compute-0 ovn_controller[146046]: 2026-01-26T15:51:59Z|00131|binding|INFO|Removing iface tap25f90c0f-63 ovn-installed in OVS
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.077 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.091 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:1a:6e 10.100.0.3'], port_security=['fa:16:3e:d9:1a:6e 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '20e8eee5-8c27-4d2b-b132-afa685238f37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=25f90c0f-639d-4310-a846-2630969fa405) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.092 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 25f90c0f-639d-4310-a846-2630969fa405 in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc unbound from our chassis
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.093 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.105 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.113 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f7f0f7-d4e7-4c2b-9df4-5b5703717b60]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:59 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Deactivated successfully.
Jan 26 15:51:59 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000010.scope: Consumed 16.884s CPU time.
Jan 26 15:51:59 compute-0 systemd-machined[208061]: Machine qemu-18-instance-00000010 terminated.
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.147 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6b7ac772-94f3-4d79-a4dc-2120c63932c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.150 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5d41dccd-218b-4585-af87-7cc991745c9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.190 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4a58f8-5081-479c-aa14-a96223f90788]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.205 239969 INFO nova.virt.libvirt.driver [-] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Instance destroyed successfully.
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.206 239969 DEBUG nova.objects.instance [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'resources' on Instance uuid 20e8eee5-8c27-4d2b-b132-afa685238f37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.212 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.212 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.213 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.222 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[98461989-160f-40a8-9243-df96d1cb6217]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4a0d62aa-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8c:c2:d0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 24, 'rx_bytes': 868, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 24, 'rx_bytes': 868, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411131, 'reachable_time': 26088, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264515, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.241 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0704bcb4-b303-4da5-9461-d4bbd2875762]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411144, 'tstamp': 411144}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264517, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4a0d62aa-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 411147, 'tstamp': 411147}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264517, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.242 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.243 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.248 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4a0d62aa-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.248 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.249 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4a0d62aa-50, col_values=(('external_ids', {'iface-id': '98f4f5ff-65bb-4d1b-80cd-163cfdeaa557'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:51:59.249 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:51:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:51:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Jan 26 15:51:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Jan 26 15:51:59 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.364 239969 DEBUG nova.virt.libvirt.vif [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:50:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1266128042',display_name='tempest-ServersAdminTestJSON-server-1266128042',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1266128042',id=16,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:50:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-rj2hqc3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:50:25Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=20e8eee5-8c27-4d2b-b132-afa685238f37,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.364 239969 DEBUG nova.network.os_vif_util [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "25f90c0f-639d-4310-a846-2630969fa405", "address": "fa:16:3e:d9:1a:6e", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap25f90c0f-63", "ovs_interfaceid": "25f90c0f-639d-4310-a846-2630969fa405", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.365 239969 DEBUG nova.network.os_vif_util [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=25f90c0f-639d-4310-a846-2630969fa405,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f90c0f-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.365 239969 DEBUG os_vif [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=25f90c0f-639d-4310-a846-2630969fa405,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f90c0f-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.367 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.367 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap25f90c0f-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.369 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.371 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.373 239969 INFO os_vif [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=25f90c0f-639d-4310-a846-2630969fa405,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap25f90c0f-63')
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.634 239969 INFO nova.virt.libvirt.driver [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Deleting instance files /var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37_del
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.635 239969 INFO nova.virt.libvirt.driver [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Deletion of /var/lib/nova/instances/20e8eee5-8c27-4d2b-b132-afa685238f37_del complete
Jan 26 15:51:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 513 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 11 MiB/s wr, 420 op/s
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.993 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.993 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.998 239969 INFO nova.compute.manager [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Took 1.04 seconds to destroy the instance on the hypervisor.
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.999 239969 DEBUG oslo.service.loopingcall [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.999 239969 DEBUG nova.compute.manager [-] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:51:59 compute-0 nova_compute[239965]: 2026-01-26 15:51:59.999 239969 DEBUG nova.network.neutron [-] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:00 compute-0 nova_compute[239965]: 2026-01-26 15:52:00.008 239969 DEBUG nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:52:00 compute-0 nova_compute[239965]: 2026-01-26 15:52:00.099 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:00 compute-0 nova_compute[239965]: 2026-01-26 15:52:00.100 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:00 compute-0 nova_compute[239965]: 2026-01-26 15:52:00.107 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:52:00 compute-0 nova_compute[239965]: 2026-01-26 15:52:00.107 239969 INFO nova.compute.claims [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:52:00 compute-0 ceph-mon[75140]: osdmap e184: 3 total, 3 up, 3 in
Jan 26 15:52:00 compute-0 ceph-mon[75140]: pgmap v1120: 305 pgs: 1 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 302 active+clean; 513 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 11 MiB/s wr, 420 op/s
Jan 26 15:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:52:00 compute-0 nova_compute[239965]: 2026-01-26 15:52:00.731 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.092 239969 DEBUG nova.compute.manager [req-1bf3ea96-162b-4e03-ae35-bc6530ca04e0 req-6088060d-9e9e-498f-a25d-f144d6ec5687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Received event network-vif-unplugged-25f90c0f-639d-4310-a846-2630969fa405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.093 239969 DEBUG oslo_concurrency.lockutils [req-1bf3ea96-162b-4e03-ae35-bc6530ca04e0 req-6088060d-9e9e-498f-a25d-f144d6ec5687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.093 239969 DEBUG oslo_concurrency.lockutils [req-1bf3ea96-162b-4e03-ae35-bc6530ca04e0 req-6088060d-9e9e-498f-a25d-f144d6ec5687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.093 239969 DEBUG oslo_concurrency.lockutils [req-1bf3ea96-162b-4e03-ae35-bc6530ca04e0 req-6088060d-9e9e-498f-a25d-f144d6ec5687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.094 239969 DEBUG nova.compute.manager [req-1bf3ea96-162b-4e03-ae35-bc6530ca04e0 req-6088060d-9e9e-498f-a25d-f144d6ec5687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] No waiting events found dispatching network-vif-unplugged-25f90c0f-639d-4310-a846-2630969fa405 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.094 239969 DEBUG nova.compute.manager [req-1bf3ea96-162b-4e03-ae35-bc6530ca04e0 req-6088060d-9e9e-498f-a25d-f144d6ec5687 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Received event network-vif-unplugged-25f90c0f-639d-4310-a846-2630969fa405 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:52:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193552676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.351 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.357 239969 DEBUG nova.compute.provider_tree [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.469 239969 DEBUG nova.scheduler.client.report [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1193552676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.533 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.534 239969 DEBUG nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.538 239969 DEBUG nova.compute.manager [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.618 239969 INFO nova.compute.manager [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] instance snapshotting
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.662 239969 DEBUG nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.662 239969 DEBUG nova.network.neutron [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:52:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 497 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 9.5 MiB/s wr, 361 op/s
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.785 239969 INFO nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.808 239969 DEBUG nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.903 239969 DEBUG nova.network.neutron [-] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.920 239969 INFO nova.virt.libvirt.driver [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Beginning live snapshot process
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.936 239969 INFO nova.compute.manager [-] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Took 1.94 seconds to deallocate network for instance.
Jan 26 15:52:01 compute-0 nova_compute[239965]: 2026-01-26 15:52:01.938 239969 DEBUG nova.policy [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3322c44e378e415bb486ef558314a67c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ddba5162f533447bba0159cafaa565bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.046 239969 DEBUG nova.compute.manager [req-b93920f8-a02a-499b-88ca-673ece395de0 req-a9f3c122-4eef-458b-9543-0e0d11610774 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Received event network-vif-deleted-25f90c0f-639d-4310-a846-2630969fa405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.058 239969 DEBUG nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.060 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.061 239969 INFO nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Creating image(s)
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.086 239969 DEBUG nova.storage.rbd_utils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.109 239969 DEBUG nova.storage.rbd_utils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.129 239969 DEBUG nova.storage.rbd_utils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.132 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "5d53e76ffa831ae824c3d28e287885d82178df7b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.133 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "5d53e76ffa831ae824c3d28e287885d82178df7b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.205 239969 DEBUG oslo_concurrency.lockutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.205 239969 DEBUG oslo_concurrency.lockutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.265 239969 DEBUG nova.virt.libvirt.imagebackend [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.390 239969 DEBUG oslo_concurrency.processutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.423 239969 DEBUG nova.virt.libvirt.imagebackend [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image locations are: [{'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/5c958714-b066-4366-9e88-c4b813d307b6/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/5c958714-b066-4366-9e88-c4b813d307b6/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.489 239969 DEBUG nova.virt.libvirt.imagebackend [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Selected location: {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/5c958714-b066-4366-9e88-c4b813d307b6/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.489 239969 DEBUG nova.storage.rbd_utils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] cloning images/5c958714-b066-4366-9e88-c4b813d307b6@snap to None/d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.576 239969 DEBUG nova.storage.rbd_utils [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] creating snapshot(79c051a674c04cf981a43ebc6974450f) on rbd image(131ac17b-4bc0-4c20-b861-7102599a66d3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:52:02 compute-0 ceph-mon[75140]: pgmap v1121: 305 pgs: 305 active+clean; 497 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 9.5 MiB/s wr, 361 op/s
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.635 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "5d53e76ffa831ae824c3d28e287885d82178df7b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:02 compute-0 ovn_controller[146046]: 2026-01-26T15:52:02Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:0d:aa 10.100.0.11
Jan 26 15:52:02 compute-0 ovn_controller[146046]: 2026-01-26T15:52:02Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:0d:aa 10.100.0.11
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.757 239969 DEBUG nova.objects.instance [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'migration_context' on Instance uuid d4d304bb-bbea-43d0-b6d0-2185340bbf57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.772 239969 DEBUG nova.network.neutron [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Successfully created port: 853f45e8-bdfc-4f63-9ddb-67df8210f001 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.776 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.776 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Ensure instance console log exists: /var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.777 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.777 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.777 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.931 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442707.6091754, 5c95a4fa-a47e-419d-ad9f-0714964644b4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.931 239969 INFO nova.compute.manager [-] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] VM Stopped (Lifecycle Event)
Jan 26 15:52:02 compute-0 nova_compute[239965]: 2026-01-26 15:52:02.951 239969 DEBUG nova.compute.manager [None req-f0f102a7-e4ef-49e7-a2f2-77a0c4cf9d2e - - - - - -] [instance: 5c95a4fa-a47e-419d-ad9f-0714964644b4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:03 compute-0 ovn_controller[146046]: 2026-01-26T15:52:03Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a1:48:76 10.100.0.8
Jan 26 15:52:03 compute-0 ovn_controller[146046]: 2026-01-26T15:52:03Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a1:48:76 10.100.0.8
Jan 26 15:52:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978623576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.063 239969 DEBUG oslo_concurrency.processutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.673s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.068 239969 DEBUG nova.compute.provider_tree [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.086 239969 DEBUG nova.scheduler.client.report [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.111 239969 DEBUG oslo_concurrency.lockutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.906s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.142 239969 INFO nova.scheduler.client.report [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Deleted allocations for instance 20e8eee5-8c27-4d2b-b132-afa685238f37
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.221 239969 DEBUG oslo_concurrency.lockutils [None req-00db7bdc-9913-4d2e-bc32-8662d13d4ff8 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.304 239969 DEBUG nova.compute.manager [req-83302655-de37-4622-9d6b-a5ef649a1093 req-9ade673c-114c-41ed-a331-772bf2de6df0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Received event network-vif-plugged-25f90c0f-639d-4310-a846-2630969fa405 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.304 239969 DEBUG oslo_concurrency.lockutils [req-83302655-de37-4622-9d6b-a5ef649a1093 req-9ade673c-114c-41ed-a331-772bf2de6df0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.304 239969 DEBUG oslo_concurrency.lockutils [req-83302655-de37-4622-9d6b-a5ef649a1093 req-9ade673c-114c-41ed-a331-772bf2de6df0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.305 239969 DEBUG oslo_concurrency.lockutils [req-83302655-de37-4622-9d6b-a5ef649a1093 req-9ade673c-114c-41ed-a331-772bf2de6df0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20e8eee5-8c27-4d2b-b132-afa685238f37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.305 239969 DEBUG nova.compute.manager [req-83302655-de37-4622-9d6b-a5ef649a1093 req-9ade673c-114c-41ed-a331-772bf2de6df0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] No waiting events found dispatching network-vif-plugged-25f90c0f-639d-4310-a846-2630969fa405 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.305 239969 WARNING nova.compute.manager [req-83302655-de37-4622-9d6b-a5ef649a1093 req-9ade673c-114c-41ed-a331-772bf2de6df0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Received unexpected event network-vif-plugged-25f90c0f-639d-4310-a846-2630969fa405 for instance with vm_state deleted and task_state None.
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.438 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Acquiring lock "5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.439 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.457 239969 DEBUG nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.541 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.542 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.550 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.550 239969 INFO nova.compute.claims [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:52:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Jan 26 15:52:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Jan 26 15:52:03 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Jan 26 15:52:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2978623576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.642 239969 DEBUG nova.storage.rbd_utils [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] cloning vms/131ac17b-4bc0-4c20-b861-7102599a66d3_disk@79c051a674c04cf981a43ebc6974450f to images/fa567274-7fbd-4ee9-ac9e-c97c94cdb7ee clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:52:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 473 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 9.8 MiB/s wr, 447 op/s
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.728 239969 DEBUG nova.storage.rbd_utils [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] flattening images/fa567274-7fbd-4ee9-ac9e-c97c94cdb7ee flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:52:03 compute-0 nova_compute[239965]: 2026-01-26 15:52:03.794 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.094 239969 DEBUG nova.storage.rbd_utils [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] removing snapshot(79c051a674c04cf981a43ebc6974450f) on rbd image(131ac17b-4bc0-4c20-b861-7102599a66d3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.154 239969 DEBUG oslo_concurrency.lockutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.155 239969 DEBUG oslo_concurrency.lockutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.155 239969 DEBUG oslo_concurrency.lockutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.155 239969 DEBUG oslo_concurrency.lockutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.156 239969 DEBUG oslo_concurrency.lockutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.157 239969 INFO nova.compute.manager [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Terminating instance
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.158 239969 DEBUG nova.compute.manager [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:52:04 compute-0 kernel: tap72aacd31-27 (unregistering): left promiscuous mode
Jan 26 15:52:04 compute-0 NetworkManager[48954]: <info>  [1769442724.2097] device (tap72aacd31-27): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.214 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 ovn_controller[146046]: 2026-01-26T15:52:04Z|00132|binding|INFO|Releasing lport 72aacd31-2789-4f27-a514-f80766db0d6e from this chassis (sb_readonly=0)
Jan 26 15:52:04 compute-0 ovn_controller[146046]: 2026-01-26T15:52:04Z|00133|binding|INFO|Setting lport 72aacd31-2789-4f27-a514-f80766db0d6e down in Southbound
Jan 26 15:52:04 compute-0 ovn_controller[146046]: 2026-01-26T15:52:04Z|00134|binding|INFO|Removing iface tap72aacd31-27 ovn-installed in OVS
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.216 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.221 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:0d:aa 10.100.0.11'], port_security=['fa:16:3e:66:0d:aa 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'db37e7ff-8499-4664-b2ba-014e27b8b6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b3195eedf4a34fabaf019faaaad7eb71', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'deb9b716-672e-4115-a664-3008e95e0a0f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=172ee5c1-f0af-4b45-a16f-73e1fe15c93f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=72aacd31-2789-4f27-a514-f80766db0d6e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.222 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 72aacd31-2789-4f27-a514-f80766db0d6e in datapath 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc unbound from our chassis
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.223 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.224 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[78111c49-46df-4e00-9a0e-93aa3dac549d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.225 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc namespace which is not needed anymore
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.241 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Jan 26 15:52:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Jan 26 15:52:04 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Jan 26 15:52:04 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000000f.scope: Consumed 14.347s CPU time.
Jan 26 15:52:04 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Jan 26 15:52:04 compute-0 systemd-machined[208061]: Machine qemu-28-instance-0000000f terminated.
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.289 239969 DEBUG nova.storage.rbd_utils [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] creating snapshot(snap) on rbd image(fa567274-7fbd-4ee9-ac9e-c97c94cdb7ee) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:52:04 compute-0 neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc[256358]: [NOTICE]   (256362) : haproxy version is 2.8.14-c23fe91
Jan 26 15:52:04 compute-0 neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc[256358]: [NOTICE]   (256362) : path to executable is /usr/sbin/haproxy
Jan 26 15:52:04 compute-0 neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc[256358]: [WARNING]  (256362) : Exiting Master process...
Jan 26 15:52:04 compute-0 neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc[256358]: [WARNING]  (256362) : Exiting Master process...
Jan 26 15:52:04 compute-0 neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc[256358]: [ALERT]    (256362) : Current worker (256364) exited with code 143 (Terminated)
Jan 26 15:52:04 compute-0 neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc[256358]: [WARNING]  (256362) : All workers exited. Exiting... (0)
Jan 26 15:52:04 compute-0 systemd[1]: libpod-b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85.scope: Deactivated successfully.
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.369 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 podman[264943]: 2026-01-26 15:52:04.374996804 +0000 UTC m=+0.043086937 container died b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 15:52:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3830659625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.388 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.393 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.398 239969 INFO nova.virt.libvirt.driver [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Instance destroyed successfully.
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.399 239969 DEBUG nova.objects.instance [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lazy-loading 'resources' on Instance uuid db37e7ff-8499-4664-b2ba-014e27b8b6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.409 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-431e4ea434cc11136276c5590b725eedf2973b3d0f420fbefa3163d27d0b434c-merged.mount: Deactivated successfully.
Jan 26 15:52:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85-userdata-shm.mount: Deactivated successfully.
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.416 239969 DEBUG nova.virt.libvirt.vif [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T15:50:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-784382261',display_name='tempest-ServersAdminTestJSON-server-784382261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-784382261',id=15,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:51:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b3195eedf4a34fabaf019faaaad7eb71',ramdisk_id='',reservation_id='r-x2p37smd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='2',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-863857415',owner_user_name='tempest-ServersAdminTestJSON-863857415-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:51:49Z,user_data=None,user_id='b47ce75429ed4cd1ba21c0d50c2e552d',uuid=db37e7ff-8499-4664-b2ba-014e27b8b6bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.417 239969 DEBUG nova.network.os_vif_util [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converting VIF {"id": "72aacd31-2789-4f27-a514-f80766db0d6e", "address": "fa:16:3e:66:0d:aa", "network": {"id": "4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-951892133-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b3195eedf4a34fabaf019faaaad7eb71", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72aacd31-27", "ovs_interfaceid": "72aacd31-2789-4f27-a514-f80766db0d6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.417 239969 DEBUG nova.network.os_vif_util [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.418 239969 DEBUG os_vif [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:52:04 compute-0 podman[264943]: 2026-01-26 15:52:04.418808798 +0000 UTC m=+0.086898921 container cleanup b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.420 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.420 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72aacd31-27, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.422 239969 DEBUG nova.compute.provider_tree [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.424 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.426 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:52:04 compute-0 systemd[1]: libpod-conmon-b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85.scope: Deactivated successfully.
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.428 239969 INFO os_vif [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:66:0d:aa,bridge_name='br-int',has_traffic_filtering=True,id=72aacd31-2789-4f27-a514-f80766db0d6e,network=Network(4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72aacd31-27')
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.448 239969 DEBUG nova.scheduler.client.report [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.475 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.933s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.476 239969 DEBUG nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.480 239969 DEBUG nova.network.neutron [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Successfully updated port: 853f45e8-bdfc-4f63-9ddb-67df8210f001 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:52:04 compute-0 podman[264983]: 2026-01-26 15:52:04.497360806 +0000 UTC m=+0.048469058 container remove b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.502 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f00d07-3961-45ee-a476-decd2ce5666a]: (4, ('Mon Jan 26 03:52:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc (b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85)\nb4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85\nMon Jan 26 03:52:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc (b4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85)\nb4a88e6461d59c58b5d82ea02d6d381f31b71eb24e656ea35f53b8ccc17aee85\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.504 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[03fd7660-7e78-43da-91b7-d511c08bf592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.507 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4a0d62aa-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.508 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "refresh_cache-d4d304bb-bbea-43d0-b6d0-2185340bbf57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.509 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquired lock "refresh_cache-d4d304bb-bbea-43d0-b6d0-2185340bbf57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:04 compute-0 kernel: tap4a0d62aa-50: left promiscuous mode
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.510 239969 DEBUG nova.network.neutron [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.512 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.527 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.531 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[12ff8c9f-02a1-477e-8381-d3ac8d25f8b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.540 239969 DEBUG nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.541 239969 DEBUG nova.network.neutron [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.546 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e8fd20c6-f4cd-443d-ab5e-11ecb455e66f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.547 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f354584e-2320-40b2-b466-381263ec186c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.565 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6557eea0-f550-423a-bc0e-6faed25a766d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 411124, 'reachable_time': 43826, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265016, 'error': None, 'target': 'ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.566 239969 INFO nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.568 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4a0d62aa-5b68-4972-9cb0-a8e3e45ac7cc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:52:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:04.568 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[7b078178-2057-4df7-9243-0f7d4539f6b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d4a0d62aa\x2d5b68\x2d4972\x2d9cb0\x2da8e3e45ac7cc.mount: Deactivated successfully.
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.585 239969 DEBUG nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:52:04 compute-0 ceph-mon[75140]: osdmap e185: 3 total, 3 up, 3 in
Jan 26 15:52:04 compute-0 ceph-mon[75140]: pgmap v1123: 305 pgs: 305 active+clean; 473 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 9.8 MiB/s wr, 447 op/s
Jan 26 15:52:04 compute-0 ceph-mon[75140]: osdmap e186: 3 total, 3 up, 3 in
Jan 26 15:52:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3830659625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.694 239969 INFO nova.virt.libvirt.driver [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deleting instance files /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb_del
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.695 239969 INFO nova.virt.libvirt.driver [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deletion of /var/lib/nova/instances/db37e7ff-8499-4664-b2ba-014e27b8b6bb_del complete
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.827 239969 DEBUG nova.compute.manager [req-8dc6d85b-780b-433f-b03f-3608f298fda5 req-430503f5-d94a-48f7-9cb5-a9c973520a7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-unplugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.827 239969 DEBUG oslo_concurrency.lockutils [req-8dc6d85b-780b-433f-b03f-3608f298fda5 req-430503f5-d94a-48f7-9cb5-a9c973520a7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.827 239969 DEBUG oslo_concurrency.lockutils [req-8dc6d85b-780b-433f-b03f-3608f298fda5 req-430503f5-d94a-48f7-9cb5-a9c973520a7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.828 239969 DEBUG oslo_concurrency.lockutils [req-8dc6d85b-780b-433f-b03f-3608f298fda5 req-430503f5-d94a-48f7-9cb5-a9c973520a7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.828 239969 DEBUG nova.compute.manager [req-8dc6d85b-780b-433f-b03f-3608f298fda5 req-430503f5-d94a-48f7-9cb5-a9c973520a7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] No waiting events found dispatching network-vif-unplugged-72aacd31-2789-4f27-a514-f80766db0d6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.828 239969 DEBUG nova.compute.manager [req-8dc6d85b-780b-433f-b03f-3608f298fda5 req-430503f5-d94a-48f7-9cb5-a9c973520a7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-unplugged-72aacd31-2789-4f27-a514-f80766db0d6e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.830 239969 DEBUG nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.832 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.832 239969 INFO nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Creating image(s)
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.898 239969 DEBUG nova.storage.rbd_utils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] rbd image 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.917 239969 DEBUG nova.storage.rbd_utils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] rbd image 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.937 239969 DEBUG nova.storage.rbd_utils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] rbd image 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.940 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.961 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.970 239969 INFO nova.compute.manager [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Took 0.81 seconds to destroy the instance on the hypervisor.
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.970 239969 DEBUG oslo.service.loopingcall [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.971 239969 DEBUG nova.compute.manager [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:52:04 compute-0 nova_compute[239965]: 2026-01-26 15:52:04.971 239969 DEBUG nova.network.neutron [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.002 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.003 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.003 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.003 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.019 239969 DEBUG nova.storage.rbd_utils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] rbd image 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.022 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.267 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Jan 26 15:52:05 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.327 239969 DEBUG nova.storage.rbd_utils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] resizing rbd image 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.403 239969 DEBUG nova.objects.instance [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lazy-loading 'migration_context' on Instance uuid 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.414 239969 DEBUG nova.network.neutron [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.429 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.429 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Ensure instance console log exists: /var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.429 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.430 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.430 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.536 239969 DEBUG nova.network.neutron [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.537 239969 DEBUG nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.538 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.542 239969 WARNING nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.548 239969 DEBUG nova.virt.libvirt.host [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.549 239969 DEBUG nova.virt.libvirt.host [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.552 239969 DEBUG nova.virt.libvirt.host [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.553 239969 DEBUG nova.virt.libvirt.host [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.553 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.553 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.553 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.554 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.554 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.554 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.554 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.554 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.554 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.554 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.555 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.555 239969 DEBUG nova.virt.hardware [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.557 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 501 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 11 MiB/s wr, 374 op/s
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.689 239969 DEBUG nova.compute.manager [req-23113d9b-cd65-4495-9813-39cfa620708f req-af0f6666-93db-4a01-ade6-ff7ea0cd81ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Received event network-changed-853f45e8-bdfc-4f63-9ddb-67df8210f001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.689 239969 DEBUG nova.compute.manager [req-23113d9b-cd65-4495-9813-39cfa620708f req-af0f6666-93db-4a01-ade6-ff7ea0cd81ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Refreshing instance network info cache due to event network-changed-853f45e8-bdfc-4f63-9ddb-67df8210f001. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:52:05 compute-0 nova_compute[239965]: 2026-01-26 15:52:05.690 239969 DEBUG oslo_concurrency.lockutils [req-23113d9b-cd65-4495-9813-39cfa620708f req-af0f6666-93db-4a01-ade6-ff7ea0cd81ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d4d304bb-bbea-43d0-b6d0-2185340bbf57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.058 239969 DEBUG nova.network.neutron [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.074 239969 INFO nova.compute.manager [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Took 1.10 seconds to deallocate network for instance.
Jan 26 15:52:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4256014817' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.121 239969 DEBUG oslo_concurrency.lockutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.122 239969 DEBUG oslo_concurrency.lockutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.135 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.157 239969 DEBUG nova.storage.rbd_utils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] rbd image 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.161 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.302 239969 DEBUG oslo_concurrency.processutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:06 compute-0 ceph-mon[75140]: osdmap e187: 3 total, 3 up, 3 in
Jan 26 15:52:06 compute-0 ceph-mon[75140]: pgmap v1126: 305 pgs: 305 active+clean; 501 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 11 MiB/s wr, 374 op/s
Jan 26 15:52:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4256014817' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.331 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442711.2879624, d82c98a2-a53f-4e4f-a132-042f797f2c91 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.332 239969 INFO nova.compute.manager [-] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] VM Stopped (Lifecycle Event)
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.355 239969 DEBUG nova.compute.manager [None req-7b3529c9-884b-47d7-a67b-7dc17f15d8ce - - - - - -] [instance: d82c98a2-a53f-4e4f-a132-042f797f2c91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.384 239969 DEBUG nova.network.neutron [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Updating instance_info_cache with network_info: [{"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.400 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Releasing lock "refresh_cache-d4d304bb-bbea-43d0-b6d0-2185340bbf57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.401 239969 DEBUG nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Instance network_info: |[{"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.401 239969 DEBUG oslo_concurrency.lockutils [req-23113d9b-cd65-4495-9813-39cfa620708f req-af0f6666-93db-4a01-ade6-ff7ea0cd81ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d4d304bb-bbea-43d0-b6d0-2185340bbf57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.402 239969 DEBUG nova.network.neutron [req-23113d9b-cd65-4495-9813-39cfa620708f req-af0f6666-93db-4a01-ade6-ff7ea0cd81ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Refreshing network info cache for port 853f45e8-bdfc-4f63-9ddb-67df8210f001 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.406 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Start _get_guest_xml network_info=[{"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T15:51:48Z,direct_url=<?>,disk_format='raw',id=5c958714-b066-4366-9e88-c4b813d307b6,min_disk=1,min_ram=0,name='tempest-test-snap-2040458180',owner='ddba5162f533447bba0159cafaa565bf',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T15:51:54Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': '5c958714-b066-4366-9e88-c4b813d307b6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.411 239969 WARNING nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.421 239969 DEBUG nova.virt.libvirt.host [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.422 239969 DEBUG nova.virt.libvirt.host [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.425 239969 DEBUG nova.virt.libvirt.host [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.425 239969 DEBUG nova.virt.libvirt.host [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.426 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.426 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T15:51:48Z,direct_url=<?>,disk_format='raw',id=5c958714-b066-4366-9e88-c4b813d307b6,min_disk=1,min_ram=0,name='tempest-test-snap-2040458180',owner='ddba5162f533447bba0159cafaa565bf',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T15:51:54Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.427 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.427 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.427 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.427 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.428 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.428 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.428 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.428 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.428 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.429 239969 DEBUG nova.virt.hardware [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.432 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2663607349' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.775 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.776 239969 DEBUG nova.objects.instance [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2291017660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.893 239969 DEBUG oslo_concurrency.processutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.899 239969 DEBUG nova.compute.provider_tree [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.970 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <uuid>5c3d3c1e-90e3-4917-abbb-2a348c45c9f2</uuid>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <name>instance-0000001a</name>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerExternalEventsTest-server-677614810</nova:name>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:52:05</nova:creationTime>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <nova:user uuid="fde5e64d36394041a5c7f1657651370c">tempest-ServerExternalEventsTest-1941383329-project-member</nova:user>
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <nova:project uuid="25fba6f5a5fa45a69f8352d836971969">tempest-ServerExternalEventsTest-1941383329</nova:project>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <system>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <entry name="serial">5c3d3c1e-90e3-4917-abbb-2a348c45c9f2</entry>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <entry name="uuid">5c3d3c1e-90e3-4917-abbb-2a348c45c9f2</entry>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     </system>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <os>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   </os>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <features>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   </features>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk">
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk.config">
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2/console.log" append="off"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <video>
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     </video>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:52:06 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:52:06 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:52:06 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:52:06 compute-0 nova_compute[239965]: </domain>
Jan 26 15:52:06 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:52:06 compute-0 nova_compute[239965]: 2026-01-26 15:52:06.972 239969 DEBUG nova.scheduler.client.report [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.013 239969 DEBUG nova.compute.manager [req-de04c10c-a423-4400-af88-b4835044bdef req-bb31b30a-fead-4a2c-bf67-3034ff6235fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.014 239969 DEBUG oslo_concurrency.lockutils [req-de04c10c-a423-4400-af88-b4835044bdef req-bb31b30a-fead-4a2c-bf67-3034ff6235fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.014 239969 DEBUG oslo_concurrency.lockutils [req-de04c10c-a423-4400-af88-b4835044bdef req-bb31b30a-fead-4a2c-bf67-3034ff6235fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.014 239969 DEBUG oslo_concurrency.lockutils [req-de04c10c-a423-4400-af88-b4835044bdef req-bb31b30a-fead-4a2c-bf67-3034ff6235fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.014 239969 DEBUG nova.compute.manager [req-de04c10c-a423-4400-af88-b4835044bdef req-bb31b30a-fead-4a2c-bf67-3034ff6235fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] No waiting events found dispatching network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.015 239969 WARNING nova.compute.manager [req-de04c10c-a423-4400-af88-b4835044bdef req-bb31b30a-fead-4a2c-bf67-3034ff6235fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received unexpected event network-vif-plugged-72aacd31-2789-4f27-a514-f80766db0d6e for instance with vm_state deleted and task_state None.
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.015 239969 DEBUG nova.compute.manager [req-de04c10c-a423-4400-af88-b4835044bdef req-bb31b30a-fead-4a2c-bf67-3034ff6235fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Received event network-vif-deleted-72aacd31-2789-4f27-a514-f80766db0d6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.022 239969 DEBUG oslo_concurrency.lockutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944747111' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.033 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.033 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.034 239969 INFO nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Using config drive
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.055 239969 DEBUG nova.storage.rbd_utils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] rbd image 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.064 239969 INFO nova.scheduler.client.report [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Deleted allocations for instance db37e7ff-8499-4664-b2ba-014e27b8b6bb
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.066 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.083 239969 DEBUG nova.storage.rbd_utils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.087 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.163 239969 INFO nova.virt.libvirt.driver [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Snapshot image upload complete
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.164 239969 INFO nova.compute.manager [None req-c25d0bdc-d099-44e7-b9af-872fdd003f49 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Took 5.54 seconds to snapshot the instance on the hypervisor.
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.210 239969 DEBUG oslo_concurrency.lockutils [None req-235b9da9-3aa2-4f3e-834b-5054ba88f3d5 b47ce75429ed4cd1ba21c0d50c2e552d b3195eedf4a34fabaf019faaaad7eb71 - - default default] Lock "db37e7ff-8499-4664-b2ba-014e27b8b6bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.281 239969 INFO nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Creating config drive at /var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2/disk.config
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.287 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv0bzk16_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2663607349' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2291017660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1944747111' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.429 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv0bzk16_" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.459 239969 DEBUG nova.storage.rbd_utils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] rbd image 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.463 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2/disk.config 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.599 239969 DEBUG oslo_concurrency.processutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2/disk.config 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.600 239969 INFO nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Deleting local config drive /var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2/disk.config because it was imported into RBD.
Jan 26 15:52:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796685452' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.657 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.659 239969 DEBUG nova.virt.libvirt.vif [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2143415032',display_name='tempest-ImagesTestJSON-server-2143415032',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2143415032',id=25,image_ref='5c958714-b066-4366-9e88-c4b813d307b6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-126kreo7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='342bf84f-0f28-4939-b844-ed14ee42c01c',image_min_disk='1',image_min_ram='0',image_owner_id='ddba5162f533447bba0159cafaa565bf',image_owner_project_name='tempest-ImagesTestJSON-1480202091',image_owner_user_name='tempest-ImagesTestJSON-1480202091-project-member',image_user_id='3322c44e378e415bb486ef558314a67c',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:52:01Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=d4d304bb-bbea-43d0-b6d0-2185340bbf57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:52:07 compute-0 systemd-machined[208061]: New machine qemu-29-instance-0000001a.
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.659 239969 DEBUG nova.network.os_vif_util [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.660 239969 DEBUG nova.network.os_vif_util [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2d:1e,bridge_name='br-int',has_traffic_filtering=True,id=853f45e8-bdfc-4f63-9ddb-67df8210f001,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f45e8-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.661 239969 DEBUG nova.objects.instance [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'pci_devices' on Instance uuid d4d304bb-bbea-43d0-b6d0-2185340bbf57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:07 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001a.
Jan 26 15:52:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 546 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 20 MiB/s wr, 646 op/s
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.680 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <uuid>d4d304bb-bbea-43d0-b6d0-2185340bbf57</uuid>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <name>instance-00000019</name>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesTestJSON-server-2143415032</nova:name>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:52:06</nova:creationTime>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <nova:user uuid="3322c44e378e415bb486ef558314a67c">tempest-ImagesTestJSON-1480202091-project-member</nova:user>
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <nova:project uuid="ddba5162f533447bba0159cafaa565bf">tempest-ImagesTestJSON-1480202091</nova:project>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="5c958714-b066-4366-9e88-c4b813d307b6"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <nova:port uuid="853f45e8-bdfc-4f63-9ddb-67df8210f001">
Jan 26 15:52:07 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <system>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <entry name="serial">d4d304bb-bbea-43d0-b6d0-2185340bbf57</entry>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <entry name="uuid">d4d304bb-bbea-43d0-b6d0-2185340bbf57</entry>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     </system>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <os>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   </os>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <features>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   </features>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk">
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk.config">
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:07 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e4:2d:1e"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <target dev="tap853f45e8-bd"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57/console.log" append="off"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <video>
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     </video>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:52:07 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:52:07 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:52:07 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:52:07 compute-0 nova_compute[239965]: </domain>
Jan 26 15:52:07 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.681 239969 DEBUG nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Preparing to wait for external event network-vif-plugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.681 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.681 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.681 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.682 239969 DEBUG nova.virt.libvirt.vif [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2143415032',display_name='tempest-ImagesTestJSON-server-2143415032',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2143415032',id=25,image_ref='5c958714-b066-4366-9e88-c4b813d307b6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-126kreo7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw
_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='342bf84f-0f28-4939-b844-ed14ee42c01c',image_min_disk='1',image_min_ram='0',image_owner_id='ddba5162f533447bba0159cafaa565bf',image_owner_project_name='tempest-ImagesTestJSON-1480202091',image_owner_user_name='tempest-ImagesTestJSON-1480202091-project-member',image_user_id='3322c44e378e415bb486ef558314a67c',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:52:01Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=d4d304bb-bbea-43d0-b6d0-2185340bbf57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.682 239969 DEBUG nova.network.os_vif_util [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.682 239969 DEBUG nova.network.os_vif_util [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2d:1e,bridge_name='br-int',has_traffic_filtering=True,id=853f45e8-bdfc-4f63-9ddb-67df8210f001,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f45e8-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.683 239969 DEBUG os_vif [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2d:1e,bridge_name='br-int',has_traffic_filtering=True,id=853f45e8-bdfc-4f63-9ddb-67df8210f001,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f45e8-bd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.683 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.684 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.684 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.687 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.687 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap853f45e8-bd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.688 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap853f45e8-bd, col_values=(('external_ids', {'iface-id': '853f45e8-bdfc-4f63-9ddb-67df8210f001', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e4:2d:1e', 'vm-uuid': 'd4d304bb-bbea-43d0-b6d0-2185340bbf57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.690 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:07 compute-0 NetworkManager[48954]: <info>  [1769442727.6911] manager: (tap853f45e8-bd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.692 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.696 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.697 239969 INFO os_vif [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2d:1e,bridge_name='br-int',has_traffic_filtering=True,id=853f45e8-bdfc-4f63-9ddb-67df8210f001,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f45e8-bd')
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.758 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.758 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.758 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No VIF found with MAC fa:16:3e:e4:2d:1e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.759 239969 INFO nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Using config drive
Jan 26 15:52:07 compute-0 nova_compute[239965]: 2026-01-26 15:52:07.779 239969 DEBUG nova.storage.rbd_utils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.041 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442728.0409973, 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.041 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] VM Resumed (Lifecycle Event)
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.044 239969 DEBUG nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.044 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.047 239969 INFO nova.virt.libvirt.driver [-] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Instance spawned successfully.
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.047 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.070 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.076 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.079 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.079 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.080 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.080 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.080 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.081 239969 DEBUG nova.virt.libvirt.driver [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.108 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.108 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442728.0434017, 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.108 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] VM Started (Lifecycle Event)
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.131 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.134 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.143 239969 INFO nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Took 3.31 seconds to spawn the instance on the hypervisor.
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.143 239969 DEBUG nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.186 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.219 239969 INFO nova.compute.manager [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Took 4.71 seconds to build instance.
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.238 239969 DEBUG oslo_concurrency.lockutils [None req-3a242ee0-d336-4291-ba6b-a44e98b237a5 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3796685452' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:08 compute-0 ceph-mon[75140]: pgmap v1127: 305 pgs: 305 active+clean; 546 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 9.2 MiB/s rd, 20 MiB/s wr, 646 op/s
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.531 239969 DEBUG nova.network.neutron [req-23113d9b-cd65-4495-9813-39cfa620708f req-af0f6666-93db-4a01-ade6-ff7ea0cd81ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Updated VIF entry in instance network info cache for port 853f45e8-bdfc-4f63-9ddb-67df8210f001. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.531 239969 DEBUG nova.network.neutron [req-23113d9b-cd65-4495-9813-39cfa620708f req-af0f6666-93db-4a01-ade6-ff7ea0cd81ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Updating instance_info_cache with network_info: [{"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:08 compute-0 nova_compute[239965]: 2026-01-26 15:52:08.544 239969 DEBUG oslo_concurrency.lockutils [req-23113d9b-cd65-4495-9813-39cfa620708f req-af0f6666-93db-4a01-ade6-ff7ea0cd81ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d4d304bb-bbea-43d0-b6d0-2185340bbf57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Jan 26 15:52:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Jan 26 15:52:09 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.403 239969 INFO nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Creating config drive at /var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57/disk.config
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.409 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp59fq0xg8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.543 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp59fq0xg8" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 546 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 13 MiB/s wr, 402 op/s
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.904 239969 DEBUG nova.storage.rbd_utils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.910 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57/disk.config d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.946 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.951 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442714.9459133, dcf4bd4f-ec23-4c13-b04d-b696bf93f350 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.951 239969 INFO nova.compute.manager [-] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] VM Stopped (Lifecycle Event)
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.969 239969 DEBUG nova.compute.manager [None req-dbfde207-3972-483e-8aa3-c2d0b5e2d379 df3e36b6cfd54daab2eb51aa1502a3d8 1eb07d996f7a4d4aa974ef2539108b5d - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.969 239969 DEBUG nova.compute.manager [None req-dbfde207-3972-483e-8aa3-c2d0b5e2d379 df3e36b6cfd54daab2eb51aa1502a3d8 1eb07d996f7a4d4aa974ef2539108b5d - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.970 239969 DEBUG oslo_concurrency.lockutils [None req-dbfde207-3972-483e-8aa3-c2d0b5e2d379 df3e36b6cfd54daab2eb51aa1502a3d8 1eb07d996f7a4d4aa974ef2539108b5d - - default default] Acquiring lock "refresh_cache-5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.970 239969 DEBUG oslo_concurrency.lockutils [None req-dbfde207-3972-483e-8aa3-c2d0b5e2d379 df3e36b6cfd54daab2eb51aa1502a3d8 1eb07d996f7a4d4aa974ef2539108b5d - - default default] Acquired lock "refresh_cache-5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.970 239969 DEBUG nova.network.neutron [None req-dbfde207-3972-483e-8aa3-c2d0b5e2d379 df3e36b6cfd54daab2eb51aa1502a3d8 1eb07d996f7a4d4aa974ef2539108b5d - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:52:09 compute-0 nova_compute[239965]: 2026-01-26 15:52:09.980 239969 DEBUG nova.compute.manager [None req-2241d71f-ae20-494f-8feb-0bcaee0c758f - - - - - -] [instance: dcf4bd4f-ec23-4c13-b04d-b696bf93f350] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.055 239969 DEBUG oslo_concurrency.processutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57/disk.config d4d304bb-bbea-43d0-b6d0-2185340bbf57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.055 239969 INFO nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Deleting local config drive /var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57/disk.config because it was imported into RBD.
Jan 26 15:52:10 compute-0 NetworkManager[48954]: <info>  [1769442730.0968] manager: (tap853f45e8-bd): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Jan 26 15:52:10 compute-0 kernel: tap853f45e8-bd: entered promiscuous mode
Jan 26 15:52:10 compute-0 ovn_controller[146046]: 2026-01-26T15:52:10Z|00135|binding|INFO|Claiming lport 853f45e8-bdfc-4f63-9ddb-67df8210f001 for this chassis.
Jan 26 15:52:10 compute-0 ovn_controller[146046]: 2026-01-26T15:52:10Z|00136|binding|INFO|853f45e8-bdfc-4f63-9ddb-67df8210f001: Claiming fa:16:3e:e4:2d:1e 10.100.0.7
Jan 26 15:52:10 compute-0 systemd-udevd[265464]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.110 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:2d:1e 10.100.0.7'], port_security=['fa:16:3e:e4:2d:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd4d304bb-bbea-43d0-b6d0-2185340bbf57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=853f45e8-bdfc-4f63-9ddb-67df8210f001) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.112 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 853f45e8-bdfc-4f63-9ddb-67df8210f001 in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 bound to our chassis
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.114 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:52:10 compute-0 NetworkManager[48954]: <info>  [1769442730.1195] device (tap853f45e8-bd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:52:10 compute-0 NetworkManager[48954]: <info>  [1769442730.1205] device (tap853f45e8-bd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:52:10 compute-0 ovn_controller[146046]: 2026-01-26T15:52:10Z|00137|binding|INFO|Setting lport 853f45e8-bdfc-4f63-9ddb-67df8210f001 ovn-installed in OVS
Jan 26 15:52:10 compute-0 ovn_controller[146046]: 2026-01-26T15:52:10Z|00138|binding|INFO|Setting lport 853f45e8-bdfc-4f63-9ddb-67df8210f001 up in Southbound
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.137 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.146 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[45775814-f740-41da-91f3-da0e5618bf1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:10 compute-0 systemd-machined[208061]: New machine qemu-30-instance-00000019.
Jan 26 15:52:10 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-00000019.
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.186 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce9a7a3-cd5a-4582-8de6-357010bcbf97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.189 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[000d4816-1536-4163-9469-54e8c07d35f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.237 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[de7ff674-aded-4178-8b43-d48a90c723dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.255 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[62fa4d59-5474-419e-8402-8b36fa50bdff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419732, 'reachable_time': 26934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265533, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:10 compute-0 ceph-mon[75140]: osdmap e188: 3 total, 3 up, 3 in
Jan 26 15:52:10 compute-0 ceph-mon[75140]: pgmap v1129: 305 pgs: 305 active+clean; 546 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 13 MiB/s wr, 402 op/s
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.275 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5a925133-f85b-42de-aaba-ea1845d512c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf5e237c1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419748, 'tstamp': 419748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265534, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf5e237c1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419752, 'tstamp': 419752}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265534, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.280 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.282 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.283 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.283 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5e237c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.283 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.284 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5e237c1-a0, col_values=(('external_ids', {'iface-id': 'fd0478e8-96d8-4fbd-8a9a-fe78757277ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.284 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.329 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Acquiring lock "5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.329 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.329 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Acquiring lock "5c3d3c1e-90e3-4917-abbb-2a348c45c9f2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.329 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "5c3d3c1e-90e3-4917-abbb-2a348c45c9f2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.330 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "5c3d3c1e-90e3-4917-abbb-2a348c45c9f2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.331 239969 INFO nova.compute.manager [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Terminating instance
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.334 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Acquiring lock "refresh_cache-5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.367 239969 DEBUG nova.network.neutron [None req-dbfde207-3972-483e-8aa3-c2d0b5e2d379 df3e36b6cfd54daab2eb51aa1502a3d8 1eb07d996f7a4d4aa974ef2539108b5d - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.386 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.388 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:52:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:10.389 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.388 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.833 239969 DEBUG nova.network.neutron [None req-dbfde207-3972-483e-8aa3-c2d0b5e2d379 df3e36b6cfd54daab2eb51aa1502a3d8 1eb07d996f7a4d4aa974ef2539108b5d - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.848 239969 DEBUG oslo_concurrency.lockutils [None req-dbfde207-3972-483e-8aa3-c2d0b5e2d379 df3e36b6cfd54daab2eb51aa1502a3d8 1eb07d996f7a4d4aa974ef2539108b5d - - default default] Releasing lock "refresh_cache-5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.849 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Acquired lock "refresh_cache-5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.849 239969 DEBUG nova.network.neutron [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.883 239969 DEBUG nova.compute.manager [req-1cbdf8c4-3f5d-4e10-a790-96f0de9227e5 req-7d31f4d3-e925-4b7d-bd57-dc3ed423a1b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Received event network-vif-plugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.884 239969 DEBUG oslo_concurrency.lockutils [req-1cbdf8c4-3f5d-4e10-a790-96f0de9227e5 req-7d31f4d3-e925-4b7d-bd57-dc3ed423a1b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.884 239969 DEBUG oslo_concurrency.lockutils [req-1cbdf8c4-3f5d-4e10-a790-96f0de9227e5 req-7d31f4d3-e925-4b7d-bd57-dc3ed423a1b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.884 239969 DEBUG oslo_concurrency.lockutils [req-1cbdf8c4-3f5d-4e10-a790-96f0de9227e5 req-7d31f4d3-e925-4b7d-bd57-dc3ed423a1b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:10 compute-0 nova_compute[239965]: 2026-01-26 15:52:10.884 239969 DEBUG nova.compute.manager [req-1cbdf8c4-3f5d-4e10-a790-96f0de9227e5 req-7d31f4d3-e925-4b7d-bd57-dc3ed423a1b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Processing event network-vif-plugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.077 239969 DEBUG nova.network.neutron [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.383 239969 DEBUG nova.network.neutron [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.402 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Releasing lock "refresh_cache-5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.404 239969 DEBUG nova.compute.manager [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:52:11 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Jan 26 15:52:11 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001a.scope: Consumed 3.833s CPU time.
Jan 26 15:52:11 compute-0 systemd-machined[208061]: Machine qemu-29-instance-0000001a terminated.
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.536 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442731.5359714, d4d304bb-bbea-43d0-b6d0-2185340bbf57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.536 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] VM Started (Lifecycle Event)
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.538 239969 DEBUG nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.542 239969 DEBUG nova.virt.libvirt.driver [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.545 239969 INFO nova.virt.libvirt.driver [-] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Instance spawned successfully.
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.545 239969 INFO nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Took 9.49 seconds to spawn the instance on the hypervisor.
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.545 239969 DEBUG nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.576 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.579 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.607 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.607 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442731.5376914, d4d304bb-bbea-43d0-b6d0-2185340bbf57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.607 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] VM Paused (Lifecycle Event)
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.619 239969 INFO nova.compute.manager [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Took 11.55 seconds to build instance.
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.629 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.630 239969 INFO nova.virt.libvirt.driver [-] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Instance destroyed successfully.
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.630 239969 DEBUG nova.objects.instance [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lazy-loading 'resources' on Instance uuid 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.633 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442731.54117, d4d304bb-bbea-43d0-b6d0-2185340bbf57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.633 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] VM Resumed (Lifecycle Event)
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.658 239969 DEBUG oslo_concurrency.lockutils [None req-089c700d-a100-4af9-9f05-e0741aa6ed88 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.662 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.664 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 546 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 393 op/s
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.888 239969 INFO nova.virt.libvirt.driver [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Deleting instance files /var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_del
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.888 239969 INFO nova.virt.libvirt.driver [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Deletion of /var/lib/nova/instances/5c3d3c1e-90e3-4917-abbb-2a348c45c9f2_del complete
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.949 239969 INFO nova.compute.manager [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Took 0.54 seconds to destroy the instance on the hypervisor.
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.949 239969 DEBUG oslo.service.loopingcall [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.950 239969 DEBUG nova.compute.manager [-] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:52:11 compute-0 nova_compute[239965]: 2026-01-26 15:52:11.950 239969 DEBUG nova.network.neutron [-] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.126 239969 DEBUG nova.network.neutron [-] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.149 239969 DEBUG nova.network.neutron [-] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.163 239969 INFO nova.compute.manager [-] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Took 0.21 seconds to deallocate network for instance.
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.209 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.210 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.359 239969 DEBUG oslo_concurrency.lockutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.359 239969 DEBUG oslo_concurrency.lockutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.359 239969 DEBUG oslo_concurrency.lockutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.360 239969 DEBUG oslo_concurrency.lockutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.360 239969 DEBUG oslo_concurrency.lockutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.361 239969 INFO nova.compute.manager [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Terminating instance
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.363 239969 DEBUG nova.compute.manager [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.364 239969 DEBUG oslo_concurrency.processutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:12 compute-0 kernel: tap853f45e8-bd (unregistering): left promiscuous mode
Jan 26 15:52:12 compute-0 NetworkManager[48954]: <info>  [1769442732.4324] device (tap853f45e8-bd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:52:12 compute-0 ovn_controller[146046]: 2026-01-26T15:52:12Z|00139|binding|INFO|Releasing lport 853f45e8-bdfc-4f63-9ddb-67df8210f001 from this chassis (sb_readonly=0)
Jan 26 15:52:12 compute-0 ovn_controller[146046]: 2026-01-26T15:52:12Z|00140|binding|INFO|Setting lport 853f45e8-bdfc-4f63-9ddb-67df8210f001 down in Southbound
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.439 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 ovn_controller[146046]: 2026-01-26T15:52:12Z|00141|binding|INFO|Removing iface tap853f45e8-bd ovn-installed in OVS
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.441 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.447 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e4:2d:1e 10.100.0.7'], port_security=['fa:16:3e:e4:2d:1e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd4d304bb-bbea-43d0-b6d0-2185340bbf57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=853f45e8-bdfc-4f63-9ddb-67df8210f001) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.449 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 853f45e8-bdfc-4f63-9ddb-67df8210f001 in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 unbound from our chassis
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.450 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.468 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000019.scope: Deactivated successfully.
Jan 26 15:52:12 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d00000019.scope: Consumed 2.135s CPU time.
Jan 26 15:52:12 compute-0 ovn_controller[146046]: 2026-01-26T15:52:12Z|00142|binding|INFO|Releasing lport fd0478e8-96d8-4fbd-8a9a-fe78757277ca from this chassis (sb_readonly=0)
Jan 26 15:52:12 compute-0 systemd-machined[208061]: Machine qemu-30-instance-00000019 terminated.
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.471 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ef86c647-f4f7-451f-a557-6ea01f5a0da4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.501 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9eb0234f-1690-4bf0-813a-e06b48690732]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.503 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[17cdff3e-13d1-4023-a900-886bfd38f64c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.532 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0c89aeb2-a3b0-449c-a409-c0e868578dfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.544 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.550 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1501edb5-bdc2-43f5-aae6-4472935bab44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419732, 'reachable_time': 26934, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265629, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.566 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2a2551c1-936b-401a-83d2-eba707435223]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapf5e237c1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419748, 'tstamp': 419748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265630, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapf5e237c1-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 419752, 'tstamp': 419752}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265630, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.568 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.569 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.573 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5e237c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.574 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.574 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5e237c1-a0, col_values=(('external_ids', {'iface-id': 'fd0478e8-96d8-4fbd-8a9a-fe78757277ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:12.574 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.575 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.625 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.637 239969 INFO nova.virt.libvirt.driver [-] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Instance destroyed successfully.
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.638 239969 DEBUG nova.objects.instance [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'resources' on Instance uuid d4d304bb-bbea-43d0-b6d0-2185340bbf57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.654 239969 DEBUG nova.virt.libvirt.vif [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:51:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2143415032',display_name='tempest-ImagesTestJSON-server-2143415032',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2143415032',id=25,image_ref='5c958714-b066-4366-9e88-c4b813d307b6',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:52:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-126kreo7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='342bf84f-0f28-4939-b844-ed14ee42c01c',image_min_disk='1',image_min_ram='0',image_owner_id='ddba5162f533447bba0159cafaa565bf',image_owner_project_name='tempest-ImagesTestJSON-1480202091',image_owner_user_name='tempest-ImagesTestJSON-1480202091-project-member',image_user_id='3322c44e378e415bb486ef558314a67c',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:52:11Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=d4d304bb-bbea-43d0-b6d0-2185340bbf57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.655 239969 DEBUG nova.network.os_vif_util [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "address": "fa:16:3e:e4:2d:1e", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f45e8-bd", "ovs_interfaceid": "853f45e8-bdfc-4f63-9ddb-67df8210f001", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.656 239969 DEBUG nova.network.os_vif_util [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2d:1e,bridge_name='br-int',has_traffic_filtering=True,id=853f45e8-bdfc-4f63-9ddb-67df8210f001,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f45e8-bd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.656 239969 DEBUG os_vif [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2d:1e,bridge_name='br-int',has_traffic_filtering=True,id=853f45e8-bdfc-4f63-9ddb-67df8210f001,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f45e8-bd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.657 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.658 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap853f45e8-bd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.659 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.662 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.664 239969 INFO os_vif [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e4:2d:1e,bridge_name='br-int',has_traffic_filtering=True,id=853f45e8-bdfc-4f63-9ddb-67df8210f001,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f45e8-bd')
Jan 26 15:52:12 compute-0 ceph-mon[75140]: pgmap v1130: 305 pgs: 305 active+clean; 546 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 11 MiB/s wr, 393 op/s
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.897 239969 INFO nova.virt.libvirt.driver [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Deleting instance files /var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57_del
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.898 239969 INFO nova.virt.libvirt.driver [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Deletion of /var/lib/nova/instances/d4d304bb-bbea-43d0-b6d0-2185340bbf57_del complete
Jan 26 15:52:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468055386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.930 239969 DEBUG oslo_concurrency.processutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.935 239969 DEBUG nova.compute.provider_tree [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.958 239969 DEBUG nova.scheduler.client.report [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.965 239969 INFO nova.compute.manager [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.965 239969 DEBUG oslo.service.loopingcall [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.965 239969 DEBUG nova.compute.manager [-] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.966 239969 DEBUG nova.network.neutron [-] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.976 239969 DEBUG nova.compute.manager [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Received event network-vif-plugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.976 239969 DEBUG oslo_concurrency.lockutils [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.977 239969 DEBUG oslo_concurrency.lockutils [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.977 239969 DEBUG oslo_concurrency.lockutils [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.977 239969 DEBUG nova.compute.manager [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] No waiting events found dispatching network-vif-plugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.977 239969 WARNING nova.compute.manager [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Received unexpected event network-vif-plugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 for instance with vm_state active and task_state deleting.
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.977 239969 DEBUG nova.compute.manager [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Received event network-vif-unplugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.977 239969 DEBUG oslo_concurrency.lockutils [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.978 239969 DEBUG oslo_concurrency.lockutils [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.978 239969 DEBUG oslo_concurrency.lockutils [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.978 239969 DEBUG nova.compute.manager [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] No waiting events found dispatching network-vif-unplugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.978 239969 DEBUG nova.compute.manager [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Received event network-vif-unplugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.978 239969 DEBUG nova.compute.manager [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Received event network-vif-plugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.978 239969 DEBUG oslo_concurrency.lockutils [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.979 239969 DEBUG oslo_concurrency.lockutils [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.979 239969 DEBUG oslo_concurrency.lockutils [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.979 239969 DEBUG nova.compute.manager [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] No waiting events found dispatching network-vif-plugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.979 239969 WARNING nova.compute.manager [req-1ff9cac0-f68b-4c8f-9e7b-3cf869a0ef00 req-186b9a62-a14d-4ccb-9c39-fa77e33638af a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Received unexpected event network-vif-plugged-853f45e8-bdfc-4f63-9ddb-67df8210f001 for instance with vm_state active and task_state deleting.
Jan 26 15:52:12 compute-0 nova_compute[239965]: 2026-01-26 15:52:12.986 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:13 compute-0 nova_compute[239965]: 2026-01-26 15:52:13.031 239969 INFO nova.scheduler.client.report [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Deleted allocations for instance 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2
Jan 26 15:52:13 compute-0 nova_compute[239965]: 2026-01-26 15:52:13.349 239969 DEBUG oslo_concurrency.lockutils [None req-d2baf3af-f280-41f3-ae90-49caca9b2956 fde5e64d36394041a5c7f1657651370c 25fba6f5a5fa45a69f8352d836971969 - - default default] Lock "5c3d3c1e-90e3-4917-abbb-2a348c45c9f2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 518 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 9.6 MiB/s wr, 454 op/s
Jan 26 15:52:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1468055386' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.203 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442719.2008061, 20e8eee5-8c27-4d2b-b132-afa685238f37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.203 239969 INFO nova.compute.manager [-] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] VM Stopped (Lifecycle Event)
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.238 239969 DEBUG nova.compute.manager [None req-1c912465-d0ed-4386-880b-e2aeb63462de - - - - - -] [instance: 20e8eee5-8c27-4d2b-b132-afa685238f37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.245 239969 DEBUG nova.network.neutron [-] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.271 239969 INFO nova.compute.manager [-] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Took 1.30 seconds to deallocate network for instance.
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.316 239969 DEBUG oslo_concurrency.lockutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.316 239969 DEBUG oslo_concurrency.lockutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.438 239969 DEBUG oslo_concurrency.processutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.499 239969 DEBUG nova.compute.manager [req-3da9cc4b-c71c-49be-a7f6-3f4470bb5e76 req-6d238583-164e-4083-a9c1-8d517ba16e62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Received event network-vif-deleted-853f45e8-bdfc-4f63-9ddb-67df8210f001 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:14 compute-0 ceph-mon[75140]: pgmap v1131: 305 pgs: 305 active+clean; 518 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 9.6 MiB/s wr, 454 op/s
Jan 26 15:52:14 compute-0 nova_compute[239965]: 2026-01-26 15:52:14.891 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4267824646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:15 compute-0 nova_compute[239965]: 2026-01-26 15:52:15.024 239969 DEBUG oslo_concurrency.processutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:15 compute-0 nova_compute[239965]: 2026-01-26 15:52:15.029 239969 DEBUG nova.compute.provider_tree [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:15 compute-0 nova_compute[239965]: 2026-01-26 15:52:15.045 239969 DEBUG nova.scheduler.client.report [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:15 compute-0 nova_compute[239965]: 2026-01-26 15:52:15.068 239969 DEBUG oslo_concurrency.lockutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:15 compute-0 nova_compute[239965]: 2026-01-26 15:52:15.102 239969 INFO nova.scheduler.client.report [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Deleted allocations for instance d4d304bb-bbea-43d0-b6d0-2185340bbf57
Jan 26 15:52:15 compute-0 nova_compute[239965]: 2026-01-26 15:52:15.161 239969 DEBUG oslo_concurrency.lockutils [None req-2661f9f6-0306-49f1-bb33-2ba6d720ed3f 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "d4d304bb-bbea-43d0-b6d0-2185340bbf57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 500 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 5.5 MiB/s wr, 362 op/s
Jan 26 15:52:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Jan 26 15:52:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Jan 26 15:52:15 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Jan 26 15:52:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4267824646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Jan 26 15:52:16 compute-0 ceph-mon[75140]: pgmap v1132: 305 pgs: 305 active+clean; 500 MiB data, 588 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 5.5 MiB/s wr, 362 op/s
Jan 26 15:52:16 compute-0 ceph-mon[75140]: osdmap e189: 3 total, 3 up, 3 in
Jan 26 15:52:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Jan 26 15:52:16 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 438 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 66 KiB/s wr, 303 op/s
Jan 26 15:52:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.830 239969 DEBUG oslo_concurrency.lockutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "342bf84f-0f28-4939-b844-ed14ee42c01c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.830 239969 DEBUG oslo_concurrency.lockutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.831 239969 DEBUG oslo_concurrency.lockutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.831 239969 DEBUG oslo_concurrency.lockutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.831 239969 DEBUG oslo_concurrency.lockutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.832 239969 INFO nova.compute.manager [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Terminating instance
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.833 239969 DEBUG nova.compute.manager [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:52:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Jan 26 15:52:17 compute-0 ceph-mon[75140]: osdmap e190: 3 total, 3 up, 3 in
Jan 26 15:52:17 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Jan 26 15:52:17 compute-0 kernel: tap87baa7c8-58 (unregistering): left promiscuous mode
Jan 26 15:52:17 compute-0 NetworkManager[48954]: <info>  [1769442737.8973] device (tap87baa7c8-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:52:17 compute-0 ovn_controller[146046]: 2026-01-26T15:52:17Z|00143|binding|INFO|Releasing lport 87baa7c8-584c-40fb-a5ee-7a18908da37f from this chassis (sb_readonly=0)
Jan 26 15:52:17 compute-0 ovn_controller[146046]: 2026-01-26T15:52:17Z|00144|binding|INFO|Setting lport 87baa7c8-584c-40fb-a5ee-7a18908da37f down in Southbound
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.969 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:17 compute-0 ovn_controller[146046]: 2026-01-26T15:52:17Z|00145|binding|INFO|Removing iface tap87baa7c8-58 ovn-installed in OVS
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.971 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:17.982 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a1:48:76 10.100.0.8'], port_security=['fa:16:3e:a1:48:76 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '342bf84f-0f28-4939-b844-ed14ee42c01c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=87baa7c8-584c-40fb-a5ee-7a18908da37f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:52:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:17.984 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 87baa7c8-584c-40fb-a5ee-7a18908da37f in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 unbound from our chassis
Jan 26 15:52:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:17.985 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5e237c1-a75f-479a-88ee-c0f788914b11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:52:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:17.986 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d81ed18-fd1a-4f02-8097-b8f6b0e725c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:17.987 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace which is not needed anymore
Jan 26 15:52:17 compute-0 nova_compute[239965]: 2026-01-26 15:52:17.989 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:18 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Deactivated successfully.
Jan 26 15:52:18 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000018.scope: Consumed 15.305s CPU time.
Jan 26 15:52:18 compute-0 systemd-machined[208061]: Machine qemu-27-instance-00000018 terminated.
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.067 239969 INFO nova.virt.libvirt.driver [-] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Instance destroyed successfully.
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.068 239969 DEBUG nova.objects.instance [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'resources' on Instance uuid 342bf84f-0f28-4939-b844-ed14ee42c01c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:18 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[262236]: [NOTICE]   (262262) : haproxy version is 2.8.14-c23fe91
Jan 26 15:52:18 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[262236]: [NOTICE]   (262262) : path to executable is /usr/sbin/haproxy
Jan 26 15:52:18 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[262236]: [WARNING]  (262262) : Exiting Master process...
Jan 26 15:52:18 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[262236]: [WARNING]  (262262) : Exiting Master process...
Jan 26 15:52:18 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[262236]: [ALERT]    (262262) : Current worker (262286) exited with code 143 (Terminated)
Jan 26 15:52:18 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[262236]: [WARNING]  (262262) : All workers exited. Exiting... (0)
Jan 26 15:52:18 compute-0 systemd[1]: libpod-9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c.scope: Deactivated successfully.
Jan 26 15:52:18 compute-0 podman[265720]: 2026-01-26 15:52:18.164700824 +0000 UTC m=+0.062829917 container died 9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:52:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c-userdata-shm.mount: Deactivated successfully.
Jan 26 15:52:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab8cea90d6c7031a895a3ac01f0bb3c96ffe4155fd89ed3d46de127ddc5eeeed-merged.mount: Deactivated successfully.
Jan 26 15:52:18 compute-0 podman[265720]: 2026-01-26 15:52:18.209499972 +0000 UTC m=+0.107629075 container cleanup 9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:52:18 compute-0 systemd[1]: libpod-conmon-9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c.scope: Deactivated successfully.
Jan 26 15:52:18 compute-0 podman[265749]: 2026-01-26 15:52:18.277029092 +0000 UTC m=+0.045329711 container remove 9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 15:52:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:18.283 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[107f8b69-eeb1-451d-8ddb-c573df250633]: (4, ('Mon Jan 26 03:52:18 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c)\n9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c\nMon Jan 26 03:52:18 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c)\n9ad6a18d2b2de2192106b309245073d1d87c168cb07ddc79bbbd717eff1f7f8c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:18.285 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a107fade-1634-4479-9950-60be96b7ab19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:18.286 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.287 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:18 compute-0 kernel: tapf5e237c1-a0: left promiscuous mode
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:18.312 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bf64ed2b-122b-4d44-b85b-75f82f01823c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:18.328 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2064d2a0-4abe-48d3-8e03-8ffa32bcbb7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:18.329 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[659cff96-e951-4569-a2bb-a3cf25ae3dd8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:18.343 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e49a956e-a33a-46e9-9fc6-89f5e3069750]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 419725, 'reachable_time': 40736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265769, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:18.344 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:52:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:18.344 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[10dc64b5-2728-4152-aaa0-867dccc17870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:18 compute-0 systemd[1]: run-netns-ovnmeta\x2df5e237c1\x2da75f\x2d479a\x2d88ee\x2dc0f788914b11.mount: Deactivated successfully.
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.443 239969 DEBUG nova.virt.libvirt.vif [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:51:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-2143786364',display_name='tempest-ImagesTestJSON-server-2143786364',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-2143786364',id=24,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:51:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-ro796nsr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:51:54Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=342bf84f-0f28-4939-b844-ed14ee42c01c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.444 239969 DEBUG nova.network.os_vif_util [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "address": "fa:16:3e:a1:48:76", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87baa7c8-58", "ovs_interfaceid": "87baa7c8-584c-40fb-a5ee-7a18908da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.445 239969 DEBUG nova.network.os_vif_util [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:76,bridge_name='br-int',has_traffic_filtering=True,id=87baa7c8-584c-40fb-a5ee-7a18908da37f,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87baa7c8-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.445 239969 DEBUG os_vif [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:76,bridge_name='br-int',has_traffic_filtering=True,id=87baa7c8-584c-40fb-a5ee-7a18908da37f,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87baa7c8-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.447 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.447 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87baa7c8-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.451 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.453 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.456 239969 INFO os_vif [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a1:48:76,bridge_name='br-int',has_traffic_filtering=True,id=87baa7c8-584c-40fb-a5ee-7a18908da37f,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87baa7c8-58')
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.721 239969 INFO nova.virt.libvirt.driver [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Deleting instance files /var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c_del
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.722 239969 INFO nova.virt.libvirt.driver [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Deletion of /var/lib/nova/instances/342bf84f-0f28-4939-b844-ed14ee42c01c_del complete
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.862 239969 INFO nova.compute.manager [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Took 1.03 seconds to destroy the instance on the hypervisor.
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.863 239969 DEBUG oslo.service.loopingcall [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.863 239969 DEBUG nova.compute.manager [-] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:52:18 compute-0 nova_compute[239965]: 2026-01-26 15:52:18.863 239969 DEBUG nova.network.neutron [-] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:19 compute-0 ceph-mon[75140]: pgmap v1135: 305 pgs: 305 active+clean; 438 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 66 KiB/s wr, 303 op/s
Jan 26 15:52:19 compute-0 ceph-mon[75140]: osdmap e191: 3 total, 3 up, 3 in
Jan 26 15:52:19 compute-0 nova_compute[239965]: 2026-01-26 15:52:19.395 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442724.394295, db37e7ff-8499-4664-b2ba-014e27b8b6bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:19 compute-0 nova_compute[239965]: 2026-01-26 15:52:19.396 239969 INFO nova.compute.manager [-] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] VM Stopped (Lifecycle Event)
Jan 26 15:52:19 compute-0 nova_compute[239965]: 2026-01-26 15:52:19.426 239969 DEBUG nova.compute.manager [None req-d594018e-f260-4808-93e1-7f2f1fc2b679 - - - - - -] [instance: db37e7ff-8499-4664-b2ba-014e27b8b6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 438 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 952 KiB/s rd, 14 KiB/s wr, 172 op/s
Jan 26 15:52:19 compute-0 nova_compute[239965]: 2026-01-26 15:52:19.893 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Jan 26 15:52:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Jan 26 15:52:20 compute-0 ceph-mon[75140]: pgmap v1137: 305 pgs: 305 active+clean; 438 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 952 KiB/s rd, 14 KiB/s wr, 172 op/s
Jan 26 15:52:20 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Jan 26 15:52:20 compute-0 nova_compute[239965]: 2026-01-26 15:52:20.934 239969 DEBUG nova.network.neutron [-] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:20 compute-0 nova_compute[239965]: 2026-01-26 15:52:20.964 239969 INFO nova.compute.manager [-] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Took 2.10 seconds to deallocate network for instance.
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.011 239969 DEBUG oslo_concurrency.lockutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.012 239969 DEBUG oslo_concurrency.lockutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.120 239969 DEBUG oslo_concurrency.processutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.324 239969 DEBUG nova.compute.manager [req-5610a7a1-2138-4ed2-bf82-92a7b53704ca req-27fb105a-c893-4ded-9163-36a97a6eccc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Received event network-vif-unplugged-87baa7c8-584c-40fb-a5ee-7a18908da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.326 239969 DEBUG oslo_concurrency.lockutils [req-5610a7a1-2138-4ed2-bf82-92a7b53704ca req-27fb105a-c893-4ded-9163-36a97a6eccc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.326 239969 DEBUG oslo_concurrency.lockutils [req-5610a7a1-2138-4ed2-bf82-92a7b53704ca req-27fb105a-c893-4ded-9163-36a97a6eccc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.326 239969 DEBUG oslo_concurrency.lockutils [req-5610a7a1-2138-4ed2-bf82-92a7b53704ca req-27fb105a-c893-4ded-9163-36a97a6eccc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.326 239969 DEBUG nova.compute.manager [req-5610a7a1-2138-4ed2-bf82-92a7b53704ca req-27fb105a-c893-4ded-9163-36a97a6eccc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] No waiting events found dispatching network-vif-unplugged-87baa7c8-584c-40fb-a5ee-7a18908da37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.327 239969 WARNING nova.compute.manager [req-5610a7a1-2138-4ed2-bf82-92a7b53704ca req-27fb105a-c893-4ded-9163-36a97a6eccc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Received unexpected event network-vif-unplugged-87baa7c8-584c-40fb-a5ee-7a18908da37f for instance with vm_state deleted and task_state None.
Jan 26 15:52:21 compute-0 ceph-mon[75140]: osdmap e192: 3 total, 3 up, 3 in
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.431 239969 DEBUG nova.compute.manager [req-fd34295f-462e-48a3-b0d1-70a8d65f0929 req-ba1c912f-5a50-425b-935a-7dc2420e7528 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Received event network-vif-deleted-87baa7c8-584c-40fb-a5ee-7a18908da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.625 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "ad8f4f0d-55dd-416f-9164-9c0a4f024ee9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.626 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "ad8f4f0d-55dd-416f-9164-9c0a4f024ee9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.626 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "ad8f4f0d-55dd-416f-9164-9c0a4f024ee9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.627 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "ad8f4f0d-55dd-416f-9164-9c0a4f024ee9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.627 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "ad8f4f0d-55dd-416f-9164-9c0a4f024ee9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.628 239969 INFO nova.compute.manager [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Terminating instance
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.629 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "refresh_cache-ad8f4f0d-55dd-416f-9164-9c0a4f024ee9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.629 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquired lock "refresh_cache-ad8f4f0d-55dd-416f-9164-9c0a4f024ee9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.630 239969 DEBUG nova.network.neutron [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:52:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 362 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 5.9 KiB/s wr, 112 op/s
Jan 26 15:52:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005334639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.734 239969 DEBUG oslo_concurrency.processutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.742 239969 DEBUG nova.compute.provider_tree [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.759 239969 DEBUG nova.scheduler.client.report [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.784 239969 DEBUG oslo_concurrency.lockutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.833 239969 INFO nova.scheduler.client.report [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Deleted allocations for instance 342bf84f-0f28-4939-b844-ed14ee42c01c
Jan 26 15:52:21 compute-0 nova_compute[239965]: 2026-01-26 15:52:21.912 239969 DEBUG oslo_concurrency.lockutils [None req-38567d59-4294-4ea7-91d6-72185fe065dc 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:22 compute-0 nova_compute[239965]: 2026-01-26 15:52:22.089 239969 DEBUG nova.network.neutron [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:22 compute-0 ceph-mon[75140]: pgmap v1139: 305 pgs: 305 active+clean; 362 MiB data, 493 MiB used, 60 GiB / 60 GiB avail; 91 KiB/s rd, 5.9 KiB/s wr, 112 op/s
Jan 26 15:52:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2005334639' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:22 compute-0 nova_compute[239965]: 2026-01-26 15:52:22.630 239969 DEBUG nova.network.neutron [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:22 compute-0 nova_compute[239965]: 2026-01-26 15:52:22.672 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Releasing lock "refresh_cache-ad8f4f0d-55dd-416f-9164-9c0a4f024ee9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:22 compute-0 nova_compute[239965]: 2026-01-26 15:52:22.672 239969 DEBUG nova.compute.manager [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:52:22 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Deactivated successfully.
Jan 26 15:52:22 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000017.scope: Consumed 15.103s CPU time.
Jan 26 15:52:22 compute-0 systemd-machined[208061]: Machine qemu-26-instance-00000017 terminated.
Jan 26 15:52:22 compute-0 podman[265811]: 2026-01-26 15:52:22.796216457 +0000 UTC m=+0.054503895 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 15:52:22 compute-0 podman[265812]: 2026-01-26 15:52:22.851850248 +0000 UTC m=+0.108422005 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 26 15:52:22 compute-0 nova_compute[239965]: 2026-01-26 15:52:22.892 239969 INFO nova.virt.libvirt.driver [-] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Instance destroyed successfully.
Jan 26 15:52:22 compute-0 nova_compute[239965]: 2026-01-26 15:52:22.893 239969 DEBUG nova.objects.instance [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lazy-loading 'resources' on Instance uuid ad8f4f0d-55dd-416f-9164-9c0a4f024ee9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.493 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.538 239969 INFO nova.virt.libvirt.driver [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Deleting instance files /var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_del
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.538 239969 INFO nova.virt.libvirt.driver [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Deletion of /var/lib/nova/instances/ad8f4f0d-55dd-416f-9164-9c0a4f024ee9_del complete
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.544 239969 DEBUG nova.compute.manager [req-b879982f-0bf4-40f8-bc44-0a885f3c204b req-d6bbb6c7-857e-41a4-bfa5-f5179abe0b56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Received event network-vif-plugged-87baa7c8-584c-40fb-a5ee-7a18908da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.544 239969 DEBUG oslo_concurrency.lockutils [req-b879982f-0bf4-40f8-bc44-0a885f3c204b req-d6bbb6c7-857e-41a4-bfa5-f5179abe0b56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.545 239969 DEBUG oslo_concurrency.lockutils [req-b879982f-0bf4-40f8-bc44-0a885f3c204b req-d6bbb6c7-857e-41a4-bfa5-f5179abe0b56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.545 239969 DEBUG oslo_concurrency.lockutils [req-b879982f-0bf4-40f8-bc44-0a885f3c204b req-d6bbb6c7-857e-41a4-bfa5-f5179abe0b56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "342bf84f-0f28-4939-b844-ed14ee42c01c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.545 239969 DEBUG nova.compute.manager [req-b879982f-0bf4-40f8-bc44-0a885f3c204b req-d6bbb6c7-857e-41a4-bfa5-f5179abe0b56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] No waiting events found dispatching network-vif-plugged-87baa7c8-584c-40fb-a5ee-7a18908da37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.545 239969 WARNING nova.compute.manager [req-b879982f-0bf4-40f8-bc44-0a885f3c204b req-d6bbb6c7-857e-41a4-bfa5-f5179abe0b56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Received unexpected event network-vif-plugged-87baa7c8-584c-40fb-a5ee-7a18908da37f for instance with vm_state deleted and task_state None.
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.634 239969 INFO nova.compute.manager [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Took 0.96 seconds to destroy the instance on the hypervisor.
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.635 239969 DEBUG oslo.service.loopingcall [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.635 239969 DEBUG nova.compute.manager [-] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.635 239969 DEBUG nova.network.neutron [-] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 217 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 11 KiB/s wr, 208 op/s
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.803 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "daafa464-6093-4ef3-b0a9-46f59c55cf94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.803 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.836 239969 DEBUG nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.932 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.933 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.946 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:52:23 compute-0 nova_compute[239965]: 2026-01-26 15:52:23.947 239969 INFO nova.compute.claims [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.116 239969 DEBUG nova.scheduler.client.report [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.138 239969 DEBUG nova.scheduler.client.report [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.139 239969 DEBUG nova.compute.provider_tree [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.163 239969 DEBUG nova.scheduler.client.report [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.185 239969 DEBUG nova.scheduler.client.report [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.188 239969 DEBUG nova.network.neutron [-] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.206 239969 DEBUG nova.network.neutron [-] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.227 239969 INFO nova.compute.manager [-] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Took 0.59 seconds to deallocate network for instance.
Jan 26 15:52:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e192 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Jan 26 15:52:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Jan 26 15:52:24 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.288 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.317 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:52:24 compute-0 ceph-mon[75140]: pgmap v1140: 305 pgs: 305 active+clean; 217 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 157 KiB/s rd, 11 KiB/s wr, 208 op/s
Jan 26 15:52:24 compute-0 ceph-mon[75140]: osdmap e193: 3 total, 3 up, 3 in
Jan 26 15:52:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695267643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.861 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.867 239969 DEBUG nova.compute.provider_tree [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.894 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:24 compute-0 nova_compute[239965]: 2026-01-26 15:52:24.912 239969 DEBUG nova.scheduler.client.report [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.047 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.114s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.048 239969 DEBUG nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.052 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.196 239969 DEBUG nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.197 239969 DEBUG nova.network.neutron [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.228 239969 INFO nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.248 239969 DEBUG nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.276 239969 DEBUG oslo_concurrency.processutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.376 239969 DEBUG nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.380 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.380 239969 INFO nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Creating image(s)
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.404 239969 DEBUG nova.storage.rbd_utils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image daafa464-6093-4ef3-b0a9-46f59c55cf94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.432 239969 DEBUG nova.storage.rbd_utils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image daafa464-6093-4ef3-b0a9-46f59c55cf94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.456 239969 DEBUG nova.storage.rbd_utils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image daafa464-6093-4ef3-b0a9-46f59c55cf94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.459 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.529 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.530 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.531 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.532 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.559 239969 DEBUG nova.storage.rbd_utils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image daafa464-6093-4ef3-b0a9-46f59c55cf94_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.563 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 daafa464-6093-4ef3-b0a9-46f59c55cf94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.591 239969 DEBUG nova.policy [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3322c44e378e415bb486ef558314a67c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ddba5162f533447bba0159cafaa565bf', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:52:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 183 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 17 KiB/s wr, 147 op/s
Jan 26 15:52:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1695267643' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.848 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 daafa464-6093-4ef3-b0a9-46f59c55cf94_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3679541971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.920 239969 DEBUG oslo_concurrency.processutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.926 239969 DEBUG nova.storage.rbd_utils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] resizing rbd image daafa464-6093-4ef3-b0a9-46f59c55cf94_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.958 239969 DEBUG nova.compute.provider_tree [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:25 compute-0 nova_compute[239965]: 2026-01-26 15:52:25.977 239969 DEBUG nova.scheduler.client.report [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.017 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.027 239969 DEBUG nova.objects.instance [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'migration_context' on Instance uuid daafa464-6093-4ef3-b0a9-46f59c55cf94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.054 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.054 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Ensure instance console log exists: /var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.054 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.055 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.055 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.058 239969 INFO nova.scheduler.client.report [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Deleted allocations for instance ad8f4f0d-55dd-416f-9164-9c0a4f024ee9
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.119 239969 DEBUG oslo_concurrency.lockutils [None req-1cf80c95-d4f8-475b-b88c-6379b06a7a9d d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "ad8f4f0d-55dd-416f-9164-9c0a4f024ee9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.493s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.302 239969 DEBUG nova.network.neutron [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Successfully created port: fdd548e5-1d8a-450a-9a19-ee2626facc98 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.536 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.536 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.688 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442731.6269064, 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.689 239969 INFO nova.compute.manager [-] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] VM Stopped (Lifecycle Event)
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.717 239969 DEBUG nova.compute.manager [None req-980a8081-eb7a-4b20-b2ab-9bf80198e904 - - - - - -] [instance: 5c3d3c1e-90e3-4917-abbb-2a348c45c9f2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.924 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "131ac17b-4bc0-4c20-b861-7102599a66d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.925 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "131ac17b-4bc0-4c20-b861-7102599a66d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.925 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "131ac17b-4bc0-4c20-b861-7102599a66d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.926 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "131ac17b-4bc0-4c20-b861-7102599a66d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.926 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "131ac17b-4bc0-4c20-b861-7102599a66d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.928 239969 INFO nova.compute.manager [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Terminating instance
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.928 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "refresh_cache-131ac17b-4bc0-4c20-b861-7102599a66d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.929 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquired lock "refresh_cache-131ac17b-4bc0-4c20-b861-7102599a66d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:26 compute-0 nova_compute[239965]: 2026-01-26 15:52:26.929 239969 DEBUG nova.network.neutron [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.021 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "b6668f3e-920f-42d3-9bc6-e1ab78764d3d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.022 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "b6668f3e-920f-42d3-9bc6-e1ab78764d3d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.039 239969 DEBUG nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:52:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3301690752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.076 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:27 compute-0 ceph-mon[75140]: pgmap v1142: 305 pgs: 305 active+clean; 183 MiB data, 405 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 17 KiB/s wr, 147 op/s
Jan 26 15:52:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3679541971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.119 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.119 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.128 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.128 239969 INFO nova.compute.claims [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.171 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.172 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.281 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.385 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.388 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4049MB free_disk=59.90585035830736GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.388 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.476 239969 DEBUG nova.network.neutron [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.559 239969 DEBUG nova.network.neutron [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Successfully updated port: fdd548e5-1d8a-450a-9a19-ee2626facc98 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.587 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "refresh_cache-daafa464-6093-4ef3-b0a9-46f59c55cf94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.587 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquired lock "refresh_cache-daafa464-6093-4ef3-b0a9-46f59c55cf94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.588 239969 DEBUG nova.network.neutron [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.634 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442732.6333058, d4d304bb-bbea-43d0-b6d0-2185340bbf57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.635 239969 INFO nova.compute.manager [-] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] VM Stopped (Lifecycle Event)
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.657 239969 DEBUG nova.compute.manager [None req-fb3171f9-d17c-49fb-ac28-6d96c24879fb - - - - - -] [instance: d4d304bb-bbea-43d0-b6d0-2185340bbf57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 148 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 1.5 MiB/s wr, 209 op/s
Jan 26 15:52:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447515129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.881 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.884 239969 DEBUG nova.network.neutron [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.888 239969 DEBUG nova.compute.provider_tree [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.911 239969 DEBUG nova.scheduler.client.report [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.920 239969 DEBUG nova.compute.manager [req-45217e64-9f2b-4193-8db7-c68bc642edb8 req-bf3d4a1c-6cb2-448f-b119-1c60d16e8a71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Received event network-changed-fdd548e5-1d8a-450a-9a19-ee2626facc98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.921 239969 DEBUG nova.compute.manager [req-45217e64-9f2b-4193-8db7-c68bc642edb8 req-bf3d4a1c-6cb2-448f-b119-1c60d16e8a71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Refreshing instance network info cache due to event network-changed-fdd548e5-1d8a-450a-9a19-ee2626facc98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.921 239969 DEBUG oslo_concurrency.lockutils [req-45217e64-9f2b-4193-8db7-c68bc642edb8 req-bf3d4a1c-6cb2-448f-b119-1c60d16e8a71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-daafa464-6093-4ef3-b0a9-46f59c55cf94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.948 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.949 239969 DEBUG nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:52:27 compute-0 nova_compute[239965]: 2026-01-26 15:52:27.951 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.562s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.005 239969 DEBUG nova.network.neutron [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.027 239969 DEBUG nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.027 239969 DEBUG nova.network.neutron [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.029 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Releasing lock "refresh_cache-131ac17b-4bc0-4c20-b861-7102599a66d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.030 239969 DEBUG nova.compute.manager [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.040 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 131ac17b-4bc0-4c20-b861-7102599a66d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.040 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance daafa464-6093-4ef3-b0a9-46f59c55cf94 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.040 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance b6668f3e-920f-42d3-9bc6-e1ab78764d3d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.041 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.041 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.047 239969 INFO nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.067 239969 DEBUG nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:52:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3301690752' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:28 compute-0 ceph-mon[75140]: pgmap v1143: 305 pgs: 305 active+clean; 148 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 140 KiB/s rd, 1.5 MiB/s wr, 209 op/s
Jan 26 15:52:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2447515129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.115 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:28 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000016.scope: Deactivated successfully.
Jan 26 15:52:28 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000016.scope: Consumed 15.879s CPU time.
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.189 239969 DEBUG nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.191 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:52:28 compute-0 systemd-machined[208061]: Machine qemu-25-instance-00000016 terminated.
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.192 239969 INFO nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Creating image(s)
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.217 239969 DEBUG nova.storage.rbd_utils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.243 239969 DEBUG nova.storage.rbd_utils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.267 239969 DEBUG nova.storage.rbd_utils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.273 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.302 239969 INFO nova.virt.libvirt.driver [-] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Instance destroyed successfully.
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.303 239969 DEBUG nova.objects.instance [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lazy-loading 'resources' on Instance uuid 131ac17b-4bc0-4c20-b861-7102599a66d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.350 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.351 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.352 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.352 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.372 239969 DEBUG nova.storage.rbd_utils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.378 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.495 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:52:28
Jan 26 15:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', '.rgw.root', 'images', 'default.rgw.log', 'default.rgw.control', 'volumes']
Jan 26 15:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.686 239969 DEBUG nova.network.neutron [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.688 239969 DEBUG nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:52:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1379526660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.757 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.766 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.797 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.828 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.828 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:28 compute-0 nova_compute[239965]: 2026-01-26 15:52:28.983 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.058 239969 DEBUG nova.storage.rbd_utils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] resizing rbd image b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.100 239969 DEBUG nova.network.neutron [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Updating instance_info_cache with network_info: [{"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1379526660' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.147 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Releasing lock "refresh_cache-daafa464-6093-4ef3-b0a9-46f59c55cf94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.147 239969 DEBUG nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Instance network_info: |[{"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.148 239969 DEBUG oslo_concurrency.lockutils [req-45217e64-9f2b-4193-8db7-c68bc642edb8 req-bf3d4a1c-6cb2-448f-b119-1c60d16e8a71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-daafa464-6093-4ef3-b0a9-46f59c55cf94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.148 239969 DEBUG nova.network.neutron [req-45217e64-9f2b-4193-8db7-c68bc642edb8 req-bf3d4a1c-6cb2-448f-b119-1c60d16e8a71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Refreshing network info cache for port fdd548e5-1d8a-450a-9a19-ee2626facc98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.151 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Start _get_guest_xml network_info=[{"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.157 239969 DEBUG nova.objects.instance [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lazy-loading 'migration_context' on Instance uuid b6668f3e-920f-42d3-9bc6-e1ab78764d3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.162 239969 WARNING nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.167 239969 DEBUG nova.virt.libvirt.host [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.168 239969 DEBUG nova.virt.libvirt.host [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.175 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.176 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Ensure instance console log exists: /var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.176 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.177 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.177 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.178 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.179 239969 DEBUG nova.virt.libvirt.host [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.180 239969 DEBUG nova.virt.libvirt.host [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.180 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.180 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.181 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.181 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.181 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.182 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.182 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.182 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.182 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.183 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.183 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.183 239969 DEBUG nova.virt.hardware [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.186 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.226 239969 INFO nova.virt.libvirt.driver [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Deleting instance files /var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3_del
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.229 239969 INFO nova.virt.libvirt.driver [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Deletion of /var/lib/nova/instances/131ac17b-4bc0-4c20-b861-7102599a66d3_del complete
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.240 239969 WARNING nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.253 239969 DEBUG nova.virt.libvirt.host [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.254 239969 DEBUG nova.virt.libvirt.host [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.258 239969 DEBUG nova.virt.libvirt.host [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.259 239969 DEBUG nova.virt.libvirt.host [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.259 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.259 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.260 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.260 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.260 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.261 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.261 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.261 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.261 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.261 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.262 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.262 239969 DEBUG nova.virt.hardware [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:52:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.265 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.312 239969 INFO nova.compute.manager [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Took 1.28 seconds to destroy the instance on the hypervisor.
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.313 239969 DEBUG oslo.service.loopingcall [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.313 239969 DEBUG nova.compute.manager [-] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.314 239969 DEBUG nova.network.neutron [-] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.539 239969 DEBUG nova.network.neutron [-] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.560 239969 DEBUG nova.network.neutron [-] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.578 239969 INFO nova.compute.manager [-] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Took 0.26 seconds to deallocate network for instance.
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.622 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.623 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 148 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 1.3 MiB/s wr, 179 op/s
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.717 239969 DEBUG oslo_concurrency.processutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1064513355' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.794 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.818 239969 DEBUG nova.storage.rbd_utils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image daafa464-6093-4ef3-b0a9-46f59c55cf94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.822 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3525331974' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.849 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.875 239969 DEBUG nova.storage.rbd_utils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.879 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:29 compute-0 nova_compute[239965]: 2026-01-26 15:52:29.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:30 compute-0 ceph-mon[75140]: pgmap v1144: 305 pgs: 305 active+clean; 148 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 120 KiB/s rd, 1.3 MiB/s wr, 179 op/s
Jan 26 15:52:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1064513355' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3525331974' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840968366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.299 239969 DEBUG oslo_concurrency.processutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.305 239969 DEBUG nova.compute.provider_tree [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:52:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788509304' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.403 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.405 239969 DEBUG nova.virt.libvirt.vif [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:52:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-93390772',display_name='tempest-ImagesTestJSON-server-93390772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-93390772',id=27,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-8xfxway3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,
task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:52:25Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=daafa464-6093-4ef3-b0a9-46f59c55cf94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.406 239969 DEBUG nova.network.os_vif_util [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.408 239969 DEBUG nova.network.os_vif_util [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:a8:7d,bridge_name='br-int',has_traffic_filtering=True,id=fdd548e5-1d8a-450a-9a19-ee2626facc98,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd548e5-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.410 239969 DEBUG nova.objects.instance [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'pci_devices' on Instance uuid daafa464-6093-4ef3-b0a9-46f59c55cf94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773141701' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.445 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.446 239969 DEBUG nova.objects.instance [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lazy-loading 'pci_devices' on Instance uuid b6668f3e-920f-42d3-9bc6-e1ab78764d3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.506 239969 DEBUG nova.scheduler.client.report [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.513 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <uuid>daafa464-6093-4ef3-b0a9-46f59c55cf94</uuid>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <name>instance-0000001b</name>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesTestJSON-server-93390772</nova:name>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:52:29</nova:creationTime>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:user uuid="3322c44e378e415bb486ef558314a67c">tempest-ImagesTestJSON-1480202091-project-member</nova:user>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:project uuid="ddba5162f533447bba0159cafaa565bf">tempest-ImagesTestJSON-1480202091</nova:project>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:port uuid="fdd548e5-1d8a-450a-9a19-ee2626facc98">
Jan 26 15:52:30 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <system>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="serial">daafa464-6093-4ef3-b0a9-46f59c55cf94</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="uuid">daafa464-6093-4ef3-b0a9-46f59c55cf94</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </system>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <os>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </os>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <features>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </features>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/daafa464-6093-4ef3-b0a9-46f59c55cf94_disk">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/daafa464-6093-4ef3-b0a9-46f59c55cf94_disk.config">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:77:a8:7d"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <target dev="tapfdd548e5-1d"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94/console.log" append="off"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <video>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </video>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:52:30 compute-0 nova_compute[239965]: </domain>
Jan 26 15:52:30 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.515 239969 DEBUG nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Preparing to wait for external event network-vif-plugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.515 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.515 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.515 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.516 239969 DEBUG nova.virt.libvirt.vif [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:52:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-93390772',display_name='tempest-ImagesTestJSON-server-93390772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-93390772',id=27,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-8xfxway3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:52:25Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=daafa464-6093-4ef3-b0a9-46f59c55cf94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.516 239969 DEBUG nova.network.os_vif_util [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.517 239969 DEBUG nova.network.os_vif_util [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:a8:7d,bridge_name='br-int',has_traffic_filtering=True,id=fdd548e5-1d8a-450a-9a19-ee2626facc98,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd548e5-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.517 239969 DEBUG os_vif [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:a8:7d,bridge_name='br-int',has_traffic_filtering=True,id=fdd548e5-1d8a-450a-9a19-ee2626facc98,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd548e5-1d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.519 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <uuid>b6668f3e-920f-42d3-9bc6-e1ab78764d3d</uuid>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <name>instance-0000001c</name>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-422527133</nova:name>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:52:29</nova:creationTime>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:user uuid="38314961865549838edb2c26893eb430">tempest-ServersAdminNegativeTestJSON-1522942204-project-member</nova:user>
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <nova:project uuid="9ade4d4287974ef1b6df3e85a2b3b70b">tempest-ServersAdminNegativeTestJSON-1522942204</nova:project>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <system>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="serial">b6668f3e-920f-42d3-9bc6-e1ab78764d3d</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="uuid">b6668f3e-920f-42d3-9bc6-e1ab78764d3d</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </system>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <os>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </os>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <features>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </features>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk.config">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d/console.log" append="off"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <video>
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </video>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:52:30 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:52:30 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:52:30 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:52:30 compute-0 nova_compute[239965]: </domain>
Jan 26 15:52:30 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.521 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.522 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.522 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:52:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.530 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.531 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdd548e5-1d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.532 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdd548e5-1d, col_values=(('external_ids', {'iface-id': 'fdd548e5-1d8a-450a-9a19-ee2626facc98', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:a8:7d', 'vm-uuid': 'daafa464-6093-4ef3-b0a9-46f59c55cf94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.533 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:30 compute-0 NetworkManager[48954]: <info>  [1769442750.5347] manager: (tapfdd548e5-1d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/75)
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.539 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.544 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.546 239969 INFO os_vif [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:a8:7d,bridge_name='br-int',has_traffic_filtering=True,id=fdd548e5-1d8a-450a-9a19-ee2626facc98,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd548e5-1d')
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.548 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.586 239969 INFO nova.scheduler.client.report [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Deleted allocations for instance 131ac17b-4bc0-4c20-b861-7102599a66d3
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.592 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.593 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.594 239969 INFO nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Using config drive
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.616 239969 DEBUG nova.storage.rbd_utils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.631 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.631 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.632 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No VIF found with MAC fa:16:3e:77:a8:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.632 239969 INFO nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Using config drive
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.649 239969 DEBUG nova.storage.rbd_utils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image daafa464-6093-4ef3-b0a9-46f59c55cf94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.700 239969 DEBUG oslo_concurrency.lockutils [None req-74e3f38d-c890-4fa5-bdba-08972d22d754 d98e5783c5584ce8aa184772b7d2fed1 b3a3b548cf6d4d5fbca5f44d4dfdefde - - default default] Lock "131ac17b-4bc0-4c20-b861-7102599a66d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.828 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.828 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.839 239969 INFO nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Creating config drive at /var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d/disk.config
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.844 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqpoavs51 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.870 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.871 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.872 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.872 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.872 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.955 239969 DEBUG nova.network.neutron [req-45217e64-9f2b-4193-8db7-c68bc642edb8 req-bf3d4a1c-6cb2-448f-b119-1c60d16e8a71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Updated VIF entry in instance network info cache for port fdd548e5-1d8a-450a-9a19-ee2626facc98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.956 239969 DEBUG nova.network.neutron [req-45217e64-9f2b-4193-8db7-c68bc642edb8 req-bf3d4a1c-6cb2-448f-b119-1c60d16e8a71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Updating instance_info_cache with network_info: [{"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:30 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.977 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqpoavs51" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:30.999 239969 DEBUG nova.storage.rbd_utils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.003 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d/disk.config b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.123 239969 DEBUG oslo_concurrency.lockutils [req-45217e64-9f2b-4193-8db7-c68bc642edb8 req-bf3d4a1c-6cb2-448f-b119-1c60d16e8a71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-daafa464-6093-4ef3-b0a9-46f59c55cf94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.133 239969 DEBUG oslo_concurrency.processutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d/disk.config b6668f3e-920f-42d3-9bc6-e1ab78764d3d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.134 239969 INFO nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Deleting local config drive /var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d/disk.config because it was imported into RBD.
Jan 26 15:52:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3840968366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/788509304' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/773141701' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:31 compute-0 systemd-machined[208061]: New machine qemu-31-instance-0000001c.
Jan 26 15:52:31 compute-0 systemd[1]: Started Virtual Machine qemu-31-instance-0000001c.
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.313 239969 INFO nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Creating config drive at /var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94/disk.config
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.319 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6g2u74g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.453 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa6g2u74g" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.479 239969 DEBUG nova.storage.rbd_utils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] rbd image daafa464-6093-4ef3-b0a9-46f59c55cf94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.484 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94/disk.config daafa464-6093-4ef3-b0a9-46f59c55cf94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 178 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 3.0 MiB/s wr, 163 op/s
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.684 239969 DEBUG oslo_concurrency.processutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94/disk.config daafa464-6093-4ef3-b0a9-46f59c55cf94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.685 239969 INFO nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Deleting local config drive /var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94/disk.config because it was imported into RBD.
Jan 26 15:52:31 compute-0 kernel: tapfdd548e5-1d: entered promiscuous mode
Jan 26 15:52:31 compute-0 NetworkManager[48954]: <info>  [1769442751.7476] manager: (tapfdd548e5-1d): new Tun device (/org/freedesktop/NetworkManager/Devices/76)
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.750 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:31 compute-0 ovn_controller[146046]: 2026-01-26T15:52:31Z|00146|binding|INFO|Claiming lport fdd548e5-1d8a-450a-9a19-ee2626facc98 for this chassis.
Jan 26 15:52:31 compute-0 ovn_controller[146046]: 2026-01-26T15:52:31Z|00147|binding|INFO|fdd548e5-1d8a-450a-9a19-ee2626facc98: Claiming fa:16:3e:77:a8:7d 10.100.0.7
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.759 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:a8:7d 10.100.0.7'], port_security=['fa:16:3e:77:a8:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'daafa464-6093-4ef3-b0a9-46f59c55cf94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '2', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=fdd548e5-1d8a-450a-9a19-ee2626facc98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.760 156105 INFO neutron.agent.ovn.metadata.agent [-] Port fdd548e5-1d8a-450a-9a19-ee2626facc98 in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 bound to our chassis
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.761 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:52:31 compute-0 ovn_controller[146046]: 2026-01-26T15:52:31Z|00148|binding|INFO|Setting lport fdd548e5-1d8a-450a-9a19-ee2626facc98 ovn-installed in OVS
Jan 26 15:52:31 compute-0 ovn_controller[146046]: 2026-01-26T15:52:31Z|00149|binding|INFO|Setting lport fdd548e5-1d8a-450a-9a19-ee2626facc98 up in Southbound
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.772 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.777 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2c99e3d0-a568-4423-be50-8a48708f5d24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.779 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf5e237c1-a1 in ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.784 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf5e237c1-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.784 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b98ff1fa-ab71-406b-992a-ea6e6f6c0c73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.785 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e1fba9-6ab1-4c50-820c-a6363ada1cc0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 systemd-udevd[266638]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:52:31 compute-0 systemd-machined[208061]: New machine qemu-32-instance-0000001b.
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.798 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc3ef8a-d57c-43da-aa24-8480ac72adda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 systemd[1]: Started Virtual Machine qemu-32-instance-0000001b.
Jan 26 15:52:31 compute-0 NetworkManager[48954]: <info>  [1769442751.8031] device (tapfdd548e5-1d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:52:31 compute-0 NetworkManager[48954]: <info>  [1769442751.8036] device (tapfdd548e5-1d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.817 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0719370f-44c3-40a6-a15f-079fc7c2a7e9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.847 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[300acc07-1eda-4e61-9624-221af55bbdbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 NetworkManager[48954]: <info>  [1769442751.8545] manager: (tapf5e237c1-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/77)
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.855 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[124fef17-ce9c-4b62-ba88-c5fb8178e19d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.891 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6246f8d6-3c95-4eac-87c4-231d7d21c018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.897 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f55ab1aa-e193-48d4-ae5c-7f14588f3084]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 NetworkManager[48954]: <info>  [1769442751.9251] device (tapf5e237c1-a0): carrier: link connected
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.932 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d805d6bb-480b-45a1-bc74-cd644e9b1e4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.956 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a3973e08-cd09-4915-a680-0b3feb264d96]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424273, 'reachable_time': 26302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266711, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.968 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442751.9680278, b6668f3e-920f-42d3-9bc6-e1ab78764d3d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.969 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] VM Resumed (Lifecycle Event)
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.972 239969 DEBUG nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.972 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.975 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1c5375-a66e-4202-bfb2-2dec36ee5f4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee4:eb6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 424273, 'tstamp': 424273}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266712, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.976 239969 INFO nova.virt.libvirt.driver [-] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Instance spawned successfully.
Jan 26 15:52:31 compute-0 nova_compute[239965]: 2026-01-26 15:52:31.977 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:52:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:31.995 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dd93a988-a938-4137-84c6-536d2b991610]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf5e237c1-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e4:0e:b6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424273, 'reachable_time': 26302, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 266713, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.007 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.011 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.011 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.012 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.013 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.014 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.014 239969 DEBUG nova.virt.libvirt.driver [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.020 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.028 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[763d5ea3-ac77-44f1-9c04-2d589190baf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.063 239969 DEBUG nova.compute.manager [req-f18f20ee-8bb2-4058-8dbe-7fb01c40051e req-b23285fc-73b5-44b0-883c-47d644f3f38b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Received event network-vif-plugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.064 239969 DEBUG oslo_concurrency.lockutils [req-f18f20ee-8bb2-4058-8dbe-7fb01c40051e req-b23285fc-73b5-44b0-883c-47d644f3f38b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.064 239969 DEBUG oslo_concurrency.lockutils [req-f18f20ee-8bb2-4058-8dbe-7fb01c40051e req-b23285fc-73b5-44b0-883c-47d644f3f38b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.064 239969 DEBUG oslo_concurrency.lockutils [req-f18f20ee-8bb2-4058-8dbe-7fb01c40051e req-b23285fc-73b5-44b0-883c-47d644f3f38b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.065 239969 DEBUG nova.compute.manager [req-f18f20ee-8bb2-4058-8dbe-7fb01c40051e req-b23285fc-73b5-44b0-883c-47d644f3f38b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Processing event network-vif-plugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.072 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.072 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442751.968959, b6668f3e-920f-42d3-9bc6-e1ab78764d3d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.072 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] VM Started (Lifecycle Event)
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.096 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d46eb504-a18d-4eb5-8355-e06cb7cba99b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.097 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.097 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.097 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf5e237c1-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:32 compute-0 NetworkManager[48954]: <info>  [1769442752.1288] manager: (tapf5e237c1-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/78)
Jan 26 15:52:32 compute-0 kernel: tapf5e237c1-a0: entered promiscuous mode
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.132 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.134 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.137 239969 INFO nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Took 3.95 seconds to spawn the instance on the hypervisor.
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.137 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf5e237c1-a0, col_values=(('external_ids', {'iface-id': 'fd0478e8-96d8-4fbd-8a9a-fe78757277ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.137 239969 DEBUG nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.139 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:32 compute-0 ovn_controller[146046]: 2026-01-26T15:52:32Z|00150|binding|INFO|Releasing lport fd0478e8-96d8-4fbd-8a9a-fe78757277ca from this chassis (sb_readonly=0)
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.143 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.150 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:52:32 compute-0 ceph-mon[75140]: pgmap v1145: 305 pgs: 305 active+clean; 178 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 3.0 MiB/s wr, 163 op/s
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.151 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d80472ce-b388-4ce9-ad3e-56509949efeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.152 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/f5e237c1-a75f-479a-88ee-c0f788914b11.pid.haproxy
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID f5e237c1-a75f-479a-88ee-c0f788914b11
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:52:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:32.152 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'env', 'PROCESS_TAG=haproxy-f5e237c1-a75f-479a-88ee-c0f788914b11', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f5e237c1-a75f-479a-88ee-c0f788914b11.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.162 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.169222) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442752169259, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2306, "num_deletes": 260, "total_data_size": 3373491, "memory_usage": 3441952, "flush_reason": "Manual Compaction"}
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.177 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442752192903, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3297727, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21244, "largest_seqno": 23549, "table_properties": {"data_size": 3287106, "index_size": 6795, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2757, "raw_key_size": 22628, "raw_average_key_size": 21, "raw_value_size": 3265630, "raw_average_value_size": 3032, "num_data_blocks": 299, "num_entries": 1077, "num_filter_entries": 1077, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769442583, "oldest_key_time": 1769442583, "file_creation_time": 1769442752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 23725 microseconds, and 6326 cpu microseconds.
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.192945) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3297727 bytes OK
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.192962) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.195264) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.195279) EVENT_LOG_v1 {"time_micros": 1769442752195275, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.195295) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3363605, prev total WAL file size 3363605, number of live WAL files 2.
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.196140) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3220KB)], [50(7720KB)]
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442752196213, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11203227, "oldest_snapshot_seqno": -1}
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.224 239969 INFO nova.compute.manager [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Took 5.13 seconds to build instance.
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.252 239969 DEBUG oslo_concurrency.lockutils [None req-45f92fd4-71fb-4770-9a20-2e73c73532e5 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "b6668f3e-920f-42d3-9bc6-e1ab78764d3d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5051 keys, 9417322 bytes, temperature: kUnknown
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442752264183, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9417322, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9380421, "index_size": 23159, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12677, "raw_key_size": 124042, "raw_average_key_size": 24, "raw_value_size": 9286307, "raw_average_value_size": 1838, "num_data_blocks": 964, "num_entries": 5051, "num_filter_entries": 5051, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769442752, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.264418) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9417322 bytes
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.266722) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.7 rd, 138.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.5 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(6.3) write-amplify(2.9) OK, records in: 5580, records dropped: 529 output_compression: NoCompression
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.266755) EVENT_LOG_v1 {"time_micros": 1769442752266740, "job": 26, "event": "compaction_finished", "compaction_time_micros": 68036, "compaction_time_cpu_micros": 23190, "output_level": 6, "num_output_files": 1, "total_output_size": 9417322, "num_input_records": 5580, "num_output_records": 5051, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442752267486, "job": 26, "event": "table_file_deletion", "file_number": 52}
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442752268766, "job": 26, "event": "table_file_deletion", "file_number": 50}
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.196017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.268871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.268875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.268876) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.268878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:52:32 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:52:32.268880) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.547 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:52:32 compute-0 podman[266745]: 2026-01-26 15:52:32.548826789 +0000 UTC m=+0.059548127 container create 9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:52:32 compute-0 systemd[1]: Started libpod-conmon-9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7.scope.
Jan 26 15:52:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:52:32 compute-0 podman[266745]: 2026-01-26 15:52:32.516012572 +0000 UTC m=+0.026733940 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:52:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75bc75854fc04db2aa2b7caae6c5c4093f485654cde0dbc2a66e3d172e61da5c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:52:32 compute-0 podman[266745]: 2026-01-26 15:52:32.629596811 +0000 UTC m=+0.140318159 container init 9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 15:52:32 compute-0 podman[266745]: 2026-01-26 15:52:32.635909334 +0000 UTC m=+0.146630672 container start 9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 15:52:32 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[266791]: [NOTICE]   (266803) : New worker (266805) forked
Jan 26 15:52:32 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[266791]: [NOTICE]   (266803) : Loading success.
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.698 239969 DEBUG nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.699 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442752.698254, daafa464-6093-4ef3-b0a9-46f59c55cf94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.699 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] VM Started (Lifecycle Event)
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.702 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.705 239969 INFO nova.virt.libvirt.driver [-] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Instance spawned successfully.
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.705 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.737 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.741 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.749 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.750 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.751 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.751 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.752 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.752 239969 DEBUG nova.virt.libvirt.driver [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.763 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.764 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442752.699105, daafa464-6093-4ef3-b0a9-46f59c55cf94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.764 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] VM Paused (Lifecycle Event)
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.812 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.816 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442752.7017791, daafa464-6093-4ef3-b0a9-46f59c55cf94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.817 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] VM Resumed (Lifecycle Event)
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.840 239969 INFO nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Took 7.46 seconds to spawn the instance on the hypervisor.
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.840 239969 DEBUG nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.842 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.847 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.877 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.902 239969 INFO nova.compute.manager [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Took 9.00 seconds to build instance.
Jan 26 15:52:32 compute-0 nova_compute[239965]: 2026-01-26 15:52:32.916 239969 DEBUG oslo_concurrency.lockutils [None req-f2323149-ac99-4edb-a7a0-6e89edfdde43 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:33 compute-0 nova_compute[239965]: 2026-01-26 15:52:33.065 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442738.065348, 342bf84f-0f28-4939-b844-ed14ee42c01c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:33 compute-0 nova_compute[239965]: 2026-01-26 15:52:33.066 239969 INFO nova.compute.manager [-] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] VM Stopped (Lifecycle Event)
Jan 26 15:52:33 compute-0 nova_compute[239965]: 2026-01-26 15:52:33.083 239969 DEBUG nova.compute.manager [None req-91bd33a2-0d95-4a90-a99f-802520af867b - - - - - -] [instance: 342bf84f-0f28-4939-b844-ed14ee42c01c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Jan 26 15:52:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Jan 26 15:52:33 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Jan 26 15:52:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 134 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 4.6 MiB/s wr, 163 op/s
Jan 26 15:52:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Jan 26 15:52:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Jan 26 15:52:34 compute-0 ceph-mon[75140]: osdmap e194: 3 total, 3 up, 3 in
Jan 26 15:52:34 compute-0 ceph-mon[75140]: pgmap v1147: 305 pgs: 305 active+clean; 134 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 4.6 MiB/s wr, 163 op/s
Jan 26 15:52:34 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Jan 26 15:52:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e195 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:34 compute-0 nova_compute[239965]: 2026-01-26 15:52:34.899 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:35 compute-0 ceph-mon[75140]: osdmap e195: 3 total, 3 up, 3 in
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.326 239969 DEBUG nova.compute.manager [req-bb9bafe6-9e79-4a28-a443-6ccb45f34bbf req-6d57ae8f-0533-4184-8cf7-f0aba1f62c83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Received event network-vif-plugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.326 239969 DEBUG oslo_concurrency.lockutils [req-bb9bafe6-9e79-4a28-a443-6ccb45f34bbf req-6d57ae8f-0533-4184-8cf7-f0aba1f62c83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.327 239969 DEBUG oslo_concurrency.lockutils [req-bb9bafe6-9e79-4a28-a443-6ccb45f34bbf req-6d57ae8f-0533-4184-8cf7-f0aba1f62c83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.327 239969 DEBUG oslo_concurrency.lockutils [req-bb9bafe6-9e79-4a28-a443-6ccb45f34bbf req-6d57ae8f-0533-4184-8cf7-f0aba1f62c83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.327 239969 DEBUG nova.compute.manager [req-bb9bafe6-9e79-4a28-a443-6ccb45f34bbf req-6d57ae8f-0533-4184-8cf7-f0aba1f62c83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] No waiting events found dispatching network-vif-plugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.328 239969 WARNING nova.compute.manager [req-bb9bafe6-9e79-4a28-a443-6ccb45f34bbf req-6d57ae8f-0533-4184-8cf7-f0aba1f62c83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Received unexpected event network-vif-plugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 for instance with vm_state active and task_state None.
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.534 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.619 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "98902e40-0bb9-468b-9aa1-82d7fd8cc143" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.620 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "98902e40-0bb9-468b-9aa1-82d7fd8cc143" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.640 239969 DEBUG nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:52:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 134 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 201 op/s
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.731 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.732 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.740 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.740 239969 INFO nova.compute.claims [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:52:35 compute-0 nova_compute[239965]: 2026-01-26 15:52:35.898 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.137 239969 DEBUG nova.compute.manager [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.180 239969 INFO nova.compute.manager [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] instance snapshotting
Jan 26 15:52:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Jan 26 15:52:36 compute-0 ceph-mon[75140]: pgmap v1149: 305 pgs: 305 active+clean; 134 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 201 op/s
Jan 26 15:52:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Jan 26 15:52:36 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.436 239969 INFO nova.virt.libvirt.driver [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Beginning live snapshot process
Jan 26 15:52:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/440555745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.482 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.491 239969 DEBUG nova.compute.provider_tree [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.516 239969 DEBUG nova.scheduler.client.report [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.596 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.597 239969 DEBUG nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.609 239969 DEBUG nova.virt.libvirt.imagebackend [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.651 239969 DEBUG nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.651 239969 DEBUG nova.network.neutron [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.669 239969 INFO nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.686 239969 DEBUG nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.781 239969 DEBUG nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.783 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.783 239969 INFO nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Creating image(s)
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.816 239969 DEBUG nova.storage.rbd_utils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.847 239969 DEBUG nova.storage.rbd_utils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.878 239969 DEBUG nova.storage.rbd_utils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.882 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.945 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.947 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.947 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.947 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.970 239969 DEBUG nova.storage.rbd_utils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:36 compute-0 nova_compute[239965]: 2026-01-26 15:52:36.973 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.037 239969 DEBUG nova.storage.rbd_utils [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(865ce686dd2c4970933f18d9eb29fef1) on rbd image(daafa464-6093-4ef3-b0a9-46f59c55cf94_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:52:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Jan 26 15:52:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Jan 26 15:52:37 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Jan 26 15:52:37 compute-0 ceph-mon[75140]: osdmap e196: 3 total, 3 up, 3 in
Jan 26 15:52:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/440555745' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.415 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.448 239969 DEBUG nova.storage.rbd_utils [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] cloning vms/daafa464-6093-4ef3-b0a9-46f59c55cf94_disk@865ce686dd2c4970933f18d9eb29fef1 to images/80e42b9d-cca6-484c-8051-04eaf17d53ac clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.510 239969 DEBUG nova.storage.rbd_utils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] resizing rbd image 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.541 239969 DEBUG nova.storage.rbd_utils [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] flattening images/80e42b9d-cca6-484c-8051-04eaf17d53ac flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.639 239969 DEBUG nova.objects.instance [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lazy-loading 'migration_context' on Instance uuid 98902e40-0bb9-468b-9aa1-82d7fd8cc143 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.654 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.654 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Ensure instance console log exists: /var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.655 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.655 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.655 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.677 239969 DEBUG nova.network.neutron [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.677 239969 DEBUG nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.679 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:52:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 164 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 4.0 MiB/s wr, 506 op/s
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.684 239969 WARNING nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.689 239969 DEBUG nova.virt.libvirt.host [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.689 239969 DEBUG nova.virt.libvirt.host [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.694 239969 DEBUG nova.virt.libvirt.host [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.695 239969 DEBUG nova.virt.libvirt.host [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.695 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.696 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.696 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.696 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.696 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.697 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.697 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.697 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.697 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.698 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.698 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.698 239969 DEBUG nova.virt.hardware [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.701 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.799 239969 DEBUG nova.storage.rbd_utils [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] removing snapshot(865ce686dd2c4970933f18d9eb29fef1) on rbd image(daafa464-6093-4ef3-b0a9-46f59c55cf94_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.890 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442742.889706, ad8f4f0d-55dd-416f-9164-9c0a4f024ee9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.891 239969 INFO nova.compute.manager [-] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] VM Stopped (Lifecycle Event)
Jan 26 15:52:37 compute-0 nova_compute[239965]: 2026-01-26 15:52:37.912 239969 DEBUG nova.compute.manager [None req-7b0b8037-6d6a-4381-92a0-c2a9d4ad0ec9 - - - - - -] [instance: ad8f4f0d-55dd-416f-9164-9c0a4f024ee9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Jan 26 15:52:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Jan 26 15:52:38 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Jan 26 15:52:38 compute-0 ceph-mon[75140]: osdmap e197: 3 total, 3 up, 3 in
Jan 26 15:52:38 compute-0 ceph-mon[75140]: pgmap v1152: 305 pgs: 305 active+clean; 164 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 4.0 MiB/s wr, 506 op/s
Jan 26 15:52:38 compute-0 nova_compute[239965]: 2026-01-26 15:52:38.276 239969 DEBUG nova.storage.rbd_utils [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] creating snapshot(snap) on rbd image(80e42b9d-cca6-484c-8051-04eaf17d53ac) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:52:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1993772399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:38 compute-0 nova_compute[239965]: 2026-01-26 15:52:38.322 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:38 compute-0 nova_compute[239965]: 2026-01-26 15:52:38.341 239969 DEBUG nova.storage.rbd_utils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:38 compute-0 nova_compute[239965]: 2026-01-26 15:52:38.345 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1765110902' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:38 compute-0 nova_compute[239965]: 2026-01-26 15:52:38.901 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:38 compute-0 nova_compute[239965]: 2026-01-26 15:52:38.903 239969 DEBUG nova.objects.instance [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lazy-loading 'pci_devices' on Instance uuid 98902e40-0bb9-468b-9aa1-82d7fd8cc143 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:38 compute-0 nova_compute[239965]: 2026-01-26 15:52:38.959 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <uuid>98902e40-0bb9-468b-9aa1-82d7fd8cc143</uuid>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <name>instance-0000001d</name>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAdminNegativeTestJSON-server-1763395653</nova:name>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:52:37</nova:creationTime>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <nova:user uuid="38314961865549838edb2c26893eb430">tempest-ServersAdminNegativeTestJSON-1522942204-project-member</nova:user>
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <nova:project uuid="9ade4d4287974ef1b6df3e85a2b3b70b">tempest-ServersAdminNegativeTestJSON-1522942204</nova:project>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <system>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <entry name="serial">98902e40-0bb9-468b-9aa1-82d7fd8cc143</entry>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <entry name="uuid">98902e40-0bb9-468b-9aa1-82d7fd8cc143</entry>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     </system>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <os>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   </os>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <features>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   </features>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk">
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk.config">
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:38 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143/console.log" append="off"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <video>
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     </video>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:52:38 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:52:38 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:52:38 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:52:38 compute-0 nova_compute[239965]: </domain>
Jan 26 15:52:38 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.024 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.025 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.025 239969 INFO nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Using config drive
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.051 239969 DEBUG nova.storage.rbd_utils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Jan 26 15:52:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Jan 26 15:52:39 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Jan 26 15:52:39 compute-0 ceph-mon[75140]: osdmap e198: 3 total, 3 up, 3 in
Jan 26 15:52:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1993772399' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1765110902' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e199 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Jan 26 15:52:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Jan 26 15:52:39 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 80e42b9d-cca6-484c-8051-04eaf17d53ac could not be found.
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 80e42b9d-cca6-484c-8051-04eaf17d53ac
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver 
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver 
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 80e42b9d-cca6-484c-8051-04eaf17d53ac could not be found.
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.416 239969 ERROR nova.virt.libvirt.driver 
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.467 239969 DEBUG nova.storage.rbd_utils [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] removing snapshot(snap) on rbd image(80e42b9d-cca6-484c-8051-04eaf17d53ac) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.530 239969 INFO nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Creating config drive at /var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143/disk.config
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.535 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfyronjf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.675 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbfyronjf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 164 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 7.6 MiB/s rd, 5.1 MiB/s wr, 427 op/s
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.705 239969 DEBUG nova.storage.rbd_utils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] rbd image 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.711 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143/disk.config 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.860 239969 DEBUG oslo_concurrency.processutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143/disk.config 98902e40-0bb9-468b-9aa1-82d7fd8cc143_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.862 239969 INFO nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Deleting local config drive /var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143/disk.config because it was imported into RBD.
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.900 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.906 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquiring lock "d828ca5d-4f7c-4c81-9084-dd789819719d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.907 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:39 compute-0 systemd-machined[208061]: New machine qemu-33-instance-0000001d.
Jan 26 15:52:39 compute-0 nova_compute[239965]: 2026-01-26 15:52:39.924 239969 DEBUG nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:52:39 compute-0 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.001 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.002 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.009 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.009 239969 INFO nova.compute.claims [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.157 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:40 compute-0 ceph-mon[75140]: osdmap e199: 3 total, 3 up, 3 in
Jan 26 15:52:40 compute-0 ceph-mon[75140]: osdmap e200: 3 total, 3 up, 3 in
Jan 26 15:52:40 compute-0 ceph-mon[75140]: pgmap v1156: 305 pgs: 305 active+clean; 164 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 7.6 MiB/s rd, 5.1 MiB/s wr, 427 op/s
Jan 26 15:52:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Jan 26 15:52:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Jan 26 15:52:40 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.368 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442760.3675644, 98902e40-0bb9-468b-9aa1-82d7fd8cc143 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.369 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] VM Resumed (Lifecycle Event)
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.374 239969 DEBUG nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.374 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.379 239969 INFO nova.virt.libvirt.driver [-] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Instance spawned successfully.
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.380 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.393 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.401 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.402 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.403 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.403 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.406 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.407 239969 DEBUG nova.virt.libvirt.driver [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.411 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.438 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.439 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442760.3688495, 98902e40-0bb9-468b-9aa1-82d7fd8cc143 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.439 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] VM Started (Lifecycle Event)
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.465 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.473 239969 INFO nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Took 3.69 seconds to spawn the instance on the hypervisor.
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.474 239969 DEBUG nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.476 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.509 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.536 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.550 239969 INFO nova.compute.manager [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Took 4.86 seconds to build instance.
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.571 239969 DEBUG oslo_concurrency.lockutils [None req-7aa5064b-84ea-4960-844f-8a48010b69e1 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "98902e40-0bb9-468b-9aa1-82d7fd8cc143" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.609 239969 WARNING nova.compute.manager [None req-80aa6dd4-c019-47a1-812f-e9dd1273925a 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Image not found during snapshot: nova.exception.ImageNotFound: Image 80e42b9d-cca6-484c-8051-04eaf17d53ac could not be found.
Jan 26 15:52:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1734086290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.773 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.779 239969 DEBUG nova.compute.provider_tree [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.796 239969 DEBUG nova.scheduler.client.report [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.820 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.821 239969 DEBUG nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.875 239969 DEBUG nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.876 239969 DEBUG nova.network.neutron [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.894 239969 INFO nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:52:40 compute-0 nova_compute[239965]: 2026-01-26 15:52:40.912 239969 DEBUG nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.008 239969 DEBUG nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.009 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.010 239969 INFO nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Creating image(s)
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.033 239969 DEBUG nova.storage.rbd_utils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] rbd image d828ca5d-4f7c-4c81-9084-dd789819719d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.064 239969 DEBUG nova.storage.rbd_utils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] rbd image d828ca5d-4f7c-4c81-9084-dd789819719d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.103 239969 DEBUG nova.storage.rbd_utils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] rbd image d828ca5d-4f7c-4c81-9084-dd789819719d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.110 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.191 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.192 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.193 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:41 compute-0 nova_compute[239965]: 2026-01-26 15:52:41.194 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 176 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 141 op/s
Jan 26 15:52:42 compute-0 ceph-mon[75140]: osdmap e201: 3 total, 3 up, 3 in
Jan 26 15:52:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1734086290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.175 239969 DEBUG nova.storage.rbd_utils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] rbd image d828ca5d-4f7c-4c81-9084-dd789819719d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.178 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d828ca5d-4f7c-4c81-9084-dd789819719d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.210 239969 DEBUG nova.policy [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '47cffb17f49240d1ae5b9302d2e92cf0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2ce4d550978141bdb03727e5abb57737', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.222 239969 DEBUG nova.objects.instance [None req-d4fd28f6-37b4-4d3f-b89e-66368f723257 95f5d3f735654f1ea159faf28a29ca15 21ca2fa825ea4524b18f69412811234b - - default default] Lazy-loading 'pci_devices' on Instance uuid 98902e40-0bb9-468b-9aa1-82d7fd8cc143 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.237 239969 DEBUG oslo_concurrency.lockutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "daafa464-6093-4ef3-b0a9-46f59c55cf94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.238 239969 DEBUG oslo_concurrency.lockutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.242 239969 DEBUG oslo_concurrency.lockutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.242 239969 DEBUG oslo_concurrency.lockutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.242 239969 DEBUG oslo_concurrency.lockutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.244 239969 INFO nova.compute.manager [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Terminating instance
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.245 239969 DEBUG nova.compute.manager [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.255 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442762.2549658, 98902e40-0bb9-468b-9aa1-82d7fd8cc143 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.256 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] VM Paused (Lifecycle Event)
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.273 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.277 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.302 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 26 15:52:42 compute-0 kernel: tapfdd548e5-1d (unregistering): left promiscuous mode
Jan 26 15:52:42 compute-0 NetworkManager[48954]: <info>  [1769442762.3127] device (tapfdd548e5-1d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:52:42 compute-0 ovn_controller[146046]: 2026-01-26T15:52:42Z|00151|binding|INFO|Releasing lport fdd548e5-1d8a-450a-9a19-ee2626facc98 from this chassis (sb_readonly=0)
Jan 26 15:52:42 compute-0 ovn_controller[146046]: 2026-01-26T15:52:42Z|00152|binding|INFO|Setting lport fdd548e5-1d8a-450a-9a19-ee2626facc98 down in Southbound
Jan 26 15:52:42 compute-0 ovn_controller[146046]: 2026-01-26T15:52:42Z|00153|binding|INFO|Removing iface tapfdd548e5-1d ovn-installed in OVS
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.329 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:a8:7d 10.100.0.7'], port_security=['fa:16:3e:77:a8:7d 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'daafa464-6093-4ef3-b0a9-46f59c55cf94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f5e237c1-a75f-479a-88ee-c0f788914b11', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ddba5162f533447bba0159cafaa565bf', 'neutron:revision_number': '4', 'neutron:security_group_ids': '196db1cf-70b0-4b0e-9aef-decea6cfe3dc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=79fbd3fe-b646-4211-8e23-8d25cabdd600, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=fdd548e5-1d8a-450a-9a19-ee2626facc98) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.330 156105 INFO neutron.agent.ovn.metadata.agent [-] Port fdd548e5-1d8a-450a-9a19-ee2626facc98 in datapath f5e237c1-a75f-479a-88ee-c0f788914b11 unbound from our chassis
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.333 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f5e237c1-a75f-479a-88ee-c0f788914b11, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.336 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.336 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c7ee74-de91-4eec-b6de-1c29fbdbf2fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.338 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 namespace which is not needed anymore
Jan 26 15:52:42 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Jan 26 15:52:42 compute-0 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001b.scope: Consumed 10.457s CPU time.
Jan 26 15:52:42 compute-0 systemd-machined[208061]: Machine qemu-32-instance-0000001b terminated.
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.363 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.475 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.480 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.491 239969 INFO nova.virt.libvirt.driver [-] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Instance destroyed successfully.
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.491 239969 DEBUG nova.objects.instance [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lazy-loading 'resources' on Instance uuid daafa464-6093-4ef3-b0a9-46f59c55cf94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.510 239969 DEBUG nova.virt.libvirt.vif [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:52:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-93390772',display_name='tempest-ImagesTestJSON-server-93390772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-93390772',id=27,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:52:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ddba5162f533447bba0159cafaa565bf',ramdisk_id='',reservation_id='r-8xfxway3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0'
,owner_project_name='tempest-ImagesTestJSON-1480202091',owner_user_name='tempest-ImagesTestJSON-1480202091-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:52:40Z,user_data=None,user_id='3322c44e378e415bb486ef558314a67c',uuid=daafa464-6093-4ef3-b0a9-46f59c55cf94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.510 239969 DEBUG nova.network.os_vif_util [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converting VIF {"id": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "address": "fa:16:3e:77:a8:7d", "network": {"id": "f5e237c1-a75f-479a-88ee-c0f788914b11", "bridge": "br-int", "label": "tempest-ImagesTestJSON-1806451115-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ddba5162f533447bba0159cafaa565bf", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdd548e5-1d", "ovs_interfaceid": "fdd548e5-1d8a-450a-9a19-ee2626facc98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.511 239969 DEBUG nova.network.os_vif_util [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:a8:7d,bridge_name='br-int',has_traffic_filtering=True,id=fdd548e5-1d8a-450a-9a19-ee2626facc98,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd548e5-1d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.511 239969 DEBUG os_vif [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:a8:7d,bridge_name='br-int',has_traffic_filtering=True,id=fdd548e5-1d8a-450a-9a19-ee2626facc98,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd548e5-1d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.513 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.513 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdd548e5-1d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.515 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.516 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.518 239969 INFO os_vif [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:a8:7d,bridge_name='br-int',has_traffic_filtering=True,id=fdd548e5-1d8a-450a-9a19-ee2626facc98,network=Network(f5e237c1-a75f-479a-88ee-c0f788914b11),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdd548e5-1d')
Jan 26 15:52:42 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[266791]: [NOTICE]   (266803) : haproxy version is 2.8.14-c23fe91
Jan 26 15:52:42 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[266791]: [NOTICE]   (266803) : path to executable is /usr/sbin/haproxy
Jan 26 15:52:42 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[266791]: [WARNING]  (266803) : Exiting Master process...
Jan 26 15:52:42 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[266791]: [WARNING]  (266803) : Exiting Master process...
Jan 26 15:52:42 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[266791]: [ALERT]    (266803) : Current worker (266805) exited with code 143 (Terminated)
Jan 26 15:52:42 compute-0 neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11[266791]: [WARNING]  (266803) : All workers exited. Exiting... (0)
Jan 26 15:52:42 compute-0 systemd[1]: libpod-9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7.scope: Deactivated successfully.
Jan 26 15:52:42 compute-0 conmon[266791]: conmon 9cece0692458efe0457d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7.scope/container/memory.events
Jan 26 15:52:42 compute-0 podman[267501]: 2026-01-26 15:52:42.697269727 +0000 UTC m=+0.263701926 container died 9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7-userdata-shm.mount: Deactivated successfully.
Jan 26 15:52:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-75bc75854fc04db2aa2b7caae6c5c4093f485654cde0dbc2a66e3d172e61da5c-merged.mount: Deactivated successfully.
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.741 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d828ca5d-4f7c-4c81-9084-dd789819719d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:42 compute-0 podman[267501]: 2026-01-26 15:52:42.747464036 +0000 UTC m=+0.313896235 container cleanup 9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:52:42 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Jan 26 15:52:42 compute-0 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 2.402s CPU time.
Jan 26 15:52:42 compute-0 systemd-machined[208061]: Machine qemu-33-instance-0000001d terminated.
Jan 26 15:52:42 compute-0 systemd[1]: libpod-conmon-9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7.scope: Deactivated successfully.
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.819 239969 DEBUG nova.storage.rbd_utils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] resizing rbd image d828ca5d-4f7c-4c81-9084-dd789819719d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:52:42 compute-0 podman[267576]: 2026-01-26 15:52:42.829886677 +0000 UTC m=+0.051101621 container remove 9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.843 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8cd398-988e-4077-a102-9bf9eef0b814]: (4, ('Mon Jan 26 03:52:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7)\n9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7\nMon Jan 26 03:52:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 (9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7)\n9cece0692458efe0457d246bea83e7cdeedbe297fdf82edaf112bc29c11f2ef7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.845 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[095a4b46-ba6a-4424-8d44-d328b33b5025]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.846 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf5e237c1-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.848 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:42 compute-0 kernel: tapf5e237c1-a0: left promiscuous mode
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.868 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.875 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1160b1-0839-422f-95fe-2bc690a47a07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.888 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e5b8c25b-ced8-40dd-bffb-39416db04766]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.890 239969 DEBUG nova.compute.manager [None req-d4fd28f6-37b4-4d3f-b89e-66368f723257 95f5d3f735654f1ea159faf28a29ca15 21ca2fa825ea4524b18f69412811234b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.890 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eb18d4a8-6c90-489d-9f73-f05f18333957]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.898 239969 DEBUG nova.compute.manager [req-c035065b-c3e5-41c6-8b9a-3a157e3d5404 req-a7867dec-59b7-47ea-8c9a-5691392a1dce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Received event network-vif-unplugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.898 239969 DEBUG oslo_concurrency.lockutils [req-c035065b-c3e5-41c6-8b9a-3a157e3d5404 req-a7867dec-59b7-47ea-8c9a-5691392a1dce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.899 239969 DEBUG oslo_concurrency.lockutils [req-c035065b-c3e5-41c6-8b9a-3a157e3d5404 req-a7867dec-59b7-47ea-8c9a-5691392a1dce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.899 239969 DEBUG oslo_concurrency.lockutils [req-c035065b-c3e5-41c6-8b9a-3a157e3d5404 req-a7867dec-59b7-47ea-8c9a-5691392a1dce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.899 239969 DEBUG nova.compute.manager [req-c035065b-c3e5-41c6-8b9a-3a157e3d5404 req-a7867dec-59b7-47ea-8c9a-5691392a1dce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] No waiting events found dispatching network-vif-unplugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.899 239969 DEBUG nova.compute.manager [req-c035065b-c3e5-41c6-8b9a-3a157e3d5404 req-a7867dec-59b7-47ea-8c9a-5691392a1dce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Received event network-vif-unplugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.910 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7eb3ad-7352-4581-ad70-822eaea0d937]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 424264, 'reachable_time': 38410, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267630, 'error': None, 'target': 'ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.912 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f5e237c1-a75f-479a-88ee-c0f788914b11 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:52:42 compute-0 systemd[1]: run-netns-ovnmeta\x2df5e237c1\x2da75f\x2d479a\x2d88ee\x2dc0f788914b11.mount: Deactivated successfully.
Jan 26 15:52:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:42.913 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[98d97a29-8cde-4f67-addd-a4a860cb98ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.943 239969 DEBUG nova.objects.instance [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lazy-loading 'migration_context' on Instance uuid d828ca5d-4f7c-4c81-9084-dd789819719d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.980 239969 INFO nova.virt.libvirt.driver [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Deleting instance files /var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94_del
Jan 26 15:52:42 compute-0 nova_compute[239965]: 2026-01-26 15:52:42.981 239969 INFO nova.virt.libvirt.driver [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Deletion of /var/lib/nova/instances/daafa464-6093-4ef3-b0a9-46f59c55cf94_del complete
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.041 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.041 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Ensure instance console log exists: /var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.042 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.042 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.042 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.155 239969 INFO nova.compute.manager [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Took 0.91 seconds to destroy the instance on the hypervisor.
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.156 239969 DEBUG oslo.service.loopingcall [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.157 239969 DEBUG nova.compute.manager [-] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.157 239969 DEBUG nova.network.neutron [-] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:43 compute-0 ceph-mon[75140]: pgmap v1158: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 176 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 141 op/s
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.299 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442748.2482393, 131ac17b-4bc0-4c20-b861-7102599a66d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.300 239969 INFO nova.compute.manager [-] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] VM Stopped (Lifecycle Event)
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.320 239969 DEBUG nova.compute.manager [None req-8dcbe8a1-61b0-4924-a0d8-607dc3301b68 - - - - - -] [instance: 131ac17b-4bc0-4c20-b861-7102599a66d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:43 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Jan 26 15:52:43 compute-0 nova_compute[239965]: 2026-01-26 15:52:43.679 239969 DEBUG nova.network.neutron [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Successfully created port: afe7812f-4f9b-460d-9b60-f6f0ad83893c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:52:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 181 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 4.6 MiB/s wr, 397 op/s
Jan 26 15:52:44 compute-0 ceph-mon[75140]: pgmap v1159: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 181 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 4.6 MiB/s wr, 397 op/s
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.211 239969 DEBUG nova.network.neutron [-] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.226 239969 INFO nova.compute.manager [-] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Took 1.07 seconds to deallocate network for instance.
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.271 239969 DEBUG oslo_concurrency.lockutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.272 239969 DEBUG oslo_concurrency.lockutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Jan 26 15:52:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Jan 26 15:52:44 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.368 239969 DEBUG oslo_concurrency.processutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.443 239969 DEBUG nova.compute.manager [req-12100187-6bbf-493f-baeb-a372b29108bd req-f9843870-5b61-4b90-8d14-730bb777b83b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Received event network-vif-deleted-fdd548e5-1d8a-450a-9a19-ee2626facc98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.658 239969 DEBUG nova.network.neutron [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Successfully updated port: afe7812f-4f9b-460d-9b60-f6f0ad83893c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.684 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquiring lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.684 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquired lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.684 239969 DEBUG nova.network.neutron [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.845 239969 DEBUG nova.network.neutron [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.903 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/746429658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.949 239969 DEBUG oslo_concurrency.processutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.958 239969 DEBUG nova.compute.provider_tree [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.971 239969 DEBUG nova.compute.manager [req-8f20e0f9-fa36-41cc-a5b0-9ae9bca8892d req-742f6bb2-a661-4731-8e3a-af31d93ddbd7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Received event network-vif-plugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.971 239969 DEBUG oslo_concurrency.lockutils [req-8f20e0f9-fa36-41cc-a5b0-9ae9bca8892d req-742f6bb2-a661-4731-8e3a-af31d93ddbd7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.974 239969 DEBUG oslo_concurrency.lockutils [req-8f20e0f9-fa36-41cc-a5b0-9ae9bca8892d req-742f6bb2-a661-4731-8e3a-af31d93ddbd7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.974 239969 DEBUG oslo_concurrency.lockutils [req-8f20e0f9-fa36-41cc-a5b0-9ae9bca8892d req-742f6bb2-a661-4731-8e3a-af31d93ddbd7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.974 239969 DEBUG nova.compute.manager [req-8f20e0f9-fa36-41cc-a5b0-9ae9bca8892d req-742f6bb2-a661-4731-8e3a-af31d93ddbd7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] No waiting events found dispatching network-vif-plugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.974 239969 WARNING nova.compute.manager [req-8f20e0f9-fa36-41cc-a5b0-9ae9bca8892d req-742f6bb2-a661-4731-8e3a-af31d93ddbd7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Received unexpected event network-vif-plugged-fdd548e5-1d8a-450a-9a19-ee2626facc98 for instance with vm_state deleted and task_state None.
Jan 26 15:52:44 compute-0 nova_compute[239965]: 2026-01-26 15:52:44.976 239969 DEBUG nova.scheduler.client.report [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:45 compute-0 nova_compute[239965]: 2026-01-26 15:52:45.001 239969 DEBUG oslo_concurrency.lockutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:45 compute-0 nova_compute[239965]: 2026-01-26 15:52:45.042 239969 INFO nova.scheduler.client.report [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Deleted allocations for instance daafa464-6093-4ef3-b0a9-46f59c55cf94
Jan 26 15:52:45 compute-0 nova_compute[239965]: 2026-01-26 15:52:45.118 239969 DEBUG oslo_concurrency.lockutils [None req-f204db4f-8bd3-4cc9-93fa-007a1b771cee 3322c44e378e415bb486ef558314a67c ddba5162f533447bba0159cafaa565bf - - default default] Lock "daafa464-6093-4ef3-b0a9-46f59c55cf94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:45 compute-0 ceph-mon[75140]: osdmap e202: 3 total, 3 up, 3 in
Jan 26 15:52:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/746429658' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 187 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 6.9 MiB/s wr, 391 op/s
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.038 239969 DEBUG nova.network.neutron [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Updating instance_info_cache with network_info: [{"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.066 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "98902e40-0bb9-468b-9aa1-82d7fd8cc143" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.066 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "98902e40-0bb9-468b-9aa1-82d7fd8cc143" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.066 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "98902e40-0bb9-468b-9aa1-82d7fd8cc143-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.067 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "98902e40-0bb9-468b-9aa1-82d7fd8cc143-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.067 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "98902e40-0bb9-468b-9aa1-82d7fd8cc143-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.068 239969 INFO nova.compute.manager [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Terminating instance
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.069 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "refresh_cache-98902e40-0bb9-468b-9aa1-82d7fd8cc143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.069 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquired lock "refresh_cache-98902e40-0bb9-468b-9aa1-82d7fd8cc143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.069 239969 DEBUG nova.network.neutron [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.071 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Releasing lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.071 239969 DEBUG nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Instance network_info: |[{"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.073 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Start _get_guest_xml network_info=[{"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.078 239969 WARNING nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.085 239969 DEBUG nova.virt.libvirt.host [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.085 239969 DEBUG nova.virt.libvirt.host [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.089 239969 DEBUG nova.virt.libvirt.host [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.089 239969 DEBUG nova.virt.libvirt.host [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.089 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.090 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.090 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.090 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.091 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.091 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.091 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.091 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.092 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.092 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.092 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.092 239969 DEBUG nova.virt.hardware [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.095 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.237 239969 DEBUG nova.network.neutron [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:46 compute-0 ceph-mon[75140]: pgmap v1161: 305 pgs: 305 active+clean; 187 MiB data, 413 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 6.9 MiB/s wr, 391 op/s
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.461 239969 DEBUG nova.network.neutron [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.479 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Releasing lock "refresh_cache-98902e40-0bb9-468b-9aa1-82d7fd8cc143" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.480 239969 DEBUG nova.compute.manager [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.488 239969 INFO nova.virt.libvirt.driver [-] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Instance destroyed successfully.
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.488 239969 DEBUG nova.objects.instance [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lazy-loading 'resources' on Instance uuid 98902e40-0bb9-468b-9aa1-82d7fd8cc143 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.528 239969 DEBUG nova.compute.manager [req-c909752c-91f4-46d6-92ba-5e5259720466 req-0a1a1ab1-d7e4-4e80-b7cb-4547740aad35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received event network-changed-afe7812f-4f9b-460d-9b60-f6f0ad83893c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.528 239969 DEBUG nova.compute.manager [req-c909752c-91f4-46d6-92ba-5e5259720466 req-0a1a1ab1-d7e4-4e80-b7cb-4547740aad35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Refreshing instance network info cache due to event network-changed-afe7812f-4f9b-460d-9b60-f6f0ad83893c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.529 239969 DEBUG oslo_concurrency.lockutils [req-c909752c-91f4-46d6-92ba-5e5259720466 req-0a1a1ab1-d7e4-4e80-b7cb-4547740aad35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.529 239969 DEBUG oslo_concurrency.lockutils [req-c909752c-91f4-46d6-92ba-5e5259720466 req-0a1a1ab1-d7e4-4e80-b7cb-4547740aad35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.529 239969 DEBUG nova.network.neutron [req-c909752c-91f4-46d6-92ba-5e5259720466 req-0a1a1ab1-d7e4-4e80-b7cb-4547740aad35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Refreshing network info cache for port afe7812f-4f9b-460d-9b60-f6f0ad83893c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:52:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3378639463' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.703 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.724 239969 DEBUG nova.storage.rbd_utils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] rbd image d828ca5d-4f7c-4c81-9084-dd789819719d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.727 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.782 239969 INFO nova.virt.libvirt.driver [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Deleting instance files /var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143_del
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.783 239969 INFO nova.virt.libvirt.driver [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Deletion of /var/lib/nova/instances/98902e40-0bb9-468b-9aa1-82d7fd8cc143_del complete
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.851 239969 INFO nova.compute.manager [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.852 239969 DEBUG oslo.service.loopingcall [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.852 239969 DEBUG nova.compute.manager [-] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.853 239969 DEBUG nova.network.neutron [-] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:46 compute-0 nova_compute[239965]: 2026-01-26 15:52:46.997 239969 DEBUG nova.network.neutron [-] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.011 239969 DEBUG nova.network.neutron [-] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.033 239969 INFO nova.compute.manager [-] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Took 0.18 seconds to deallocate network for instance.
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.088 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.088 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:52:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774815663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3378639463' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1774815663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.304 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.306 239969 DEBUG nova.virt.libvirt.vif [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:52:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-2014467891',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-2014467891',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-201446789',id=30,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ce4d550978141bdb03727e5abb57737',ramdisk_id='',reservation_id='r-dj9ugxrv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1145044074',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1145044074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:52:40Z,user_data=None,user_id='47cffb17f49240d1ae5b9302d2e92cf0',uuid=d828ca5d-4f7c-4c81-9084-dd789819719d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.306 239969 DEBUG nova.network.os_vif_util [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Converting VIF {"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.307 239969 DEBUG nova.network.os_vif_util [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:77:42,bridge_name='br-int',has_traffic_filtering=True,id=afe7812f-4f9b-460d-9b60-f6f0ad83893c,network=Network(9c61a251-7778-440e-a088-7675d42cddee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafe7812f-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.308 239969 DEBUG nova.objects.instance [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lazy-loading 'pci_devices' on Instance uuid d828ca5d-4f7c-4c81-9084-dd789819719d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.326 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <uuid>d828ca5d-4f7c-4c81-9084-dd789819719d</uuid>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <name>instance-0000001e</name>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-2014467891</nova:name>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:52:46</nova:creationTime>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <nova:user uuid="47cffb17f49240d1ae5b9302d2e92cf0">tempest-FloatingIPsAssociationNegativeTestJSON-1145044074-project-member</nova:user>
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <nova:project uuid="2ce4d550978141bdb03727e5abb57737">tempest-FloatingIPsAssociationNegativeTestJSON-1145044074</nova:project>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <nova:port uuid="afe7812f-4f9b-460d-9b60-f6f0ad83893c">
Jan 26 15:52:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <system>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <entry name="serial">d828ca5d-4f7c-4c81-9084-dd789819719d</entry>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <entry name="uuid">d828ca5d-4f7c-4c81-9084-dd789819719d</entry>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     </system>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <os>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   </os>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <features>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   </features>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d828ca5d-4f7c-4c81-9084-dd789819719d_disk">
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d828ca5d-4f7c-4c81-9084-dd789819719d_disk.config">
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       </source>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:52:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:83:77:42"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <target dev="tapafe7812f-4f"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d/console.log" append="off"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <video>
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     </video>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:52:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:52:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:52:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:52:47 compute-0 nova_compute[239965]: </domain>
Jan 26 15:52:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.327 239969 DEBUG nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Preparing to wait for external event network-vif-plugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.327 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquiring lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.327 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.327 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.328 239969 DEBUG nova.virt.libvirt.vif [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:52:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-2014467891',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-2014467891',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-201446789',id=30,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2ce4d550978141bdb03727e5abb57737',ramdisk_id='',reservation_id='r-dj9ugxrv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1145044074',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1145044074-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:52:40Z,user_data=None,user_id='47cffb17f49240d1ae5b9302d2e92cf0',uuid=d828ca5d-4f7c-4c81-9084-dd789819719d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.328 239969 DEBUG nova.network.os_vif_util [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Converting VIF {"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.329 239969 DEBUG nova.network.os_vif_util [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:77:42,bridge_name='br-int',has_traffic_filtering=True,id=afe7812f-4f9b-460d-9b60-f6f0ad83893c,network=Network(9c61a251-7778-440e-a088-7675d42cddee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafe7812f-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.329 239969 DEBUG os_vif [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:77:42,bridge_name='br-int',has_traffic_filtering=True,id=afe7812f-4f9b-460d-9b60-f6f0ad83893c,network=Network(9c61a251-7778-440e-a088-7675d42cddee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafe7812f-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.330 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.330 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.331 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.333 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.333 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapafe7812f-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.334 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapafe7812f-4f, col_values=(('external_ids', {'iface-id': 'afe7812f-4f9b-460d-9b60-f6f0ad83893c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:77:42', 'vm-uuid': 'd828ca5d-4f7c-4c81-9084-dd789819719d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:47 compute-0 NetworkManager[48954]: <info>  [1769442767.3361] manager: (tapafe7812f-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.335 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.337 239969 DEBUG oslo_concurrency.processutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.367 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.369 239969 INFO os_vif [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:77:42,bridge_name='br-int',has_traffic_filtering=True,id=afe7812f-4f9b-460d-9b60-f6f0ad83893c,network=Network(9c61a251-7778-440e-a088-7675d42cddee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafe7812f-4f')
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.425 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.426 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.426 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] No VIF found with MAC fa:16:3e:83:77:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.426 239969 INFO nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Using config drive
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.446 239969 DEBUG nova.storage.rbd_utils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] rbd image d828ca5d-4f7c-4c81-9084-dd789819719d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 178 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 9.0 MiB/s wr, 476 op/s
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.780 239969 DEBUG nova.network.neutron [req-c909752c-91f4-46d6-92ba-5e5259720466 req-0a1a1ab1-d7e4-4e80-b7cb-4547740aad35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Updated VIF entry in instance network info cache for port afe7812f-4f9b-460d-9b60-f6f0ad83893c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.780 239969 DEBUG nova.network.neutron [req-c909752c-91f4-46d6-92ba-5e5259720466 req-0a1a1ab1-d7e4-4e80-b7cb-4547740aad35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Updating instance_info_cache with network_info: [{"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.801 239969 DEBUG oslo_concurrency.lockutils [req-c909752c-91f4-46d6-92ba-5e5259720466 req-0a1a1ab1-d7e4-4e80-b7cb-4547740aad35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.888 239969 INFO nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Creating config drive at /var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d/disk.config
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.892 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxbrux773 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2368996343' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.940 239969 DEBUG oslo_concurrency.processutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.947 239969 DEBUG nova.compute.provider_tree [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.967 239969 DEBUG nova.scheduler.client.report [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:47 compute-0 nova_compute[239965]: 2026-01-26 15:52:47.990 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.025 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxbrux773" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.051 239969 DEBUG nova.storage.rbd_utils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] rbd image d828ca5d-4f7c-4c81-9084-dd789819719d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.055 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d/disk.config d828ca5d-4f7c-4c81-9084-dd789819719d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.084 239969 INFO nova.scheduler.client.report [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Deleted allocations for instance 98902e40-0bb9-468b-9aa1-82d7fd8cc143
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.192 239969 DEBUG oslo_concurrency.processutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d/disk.config d828ca5d-4f7c-4c81-9084-dd789819719d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.193 239969 INFO nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Deleting local config drive /var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d/disk.config because it was imported into RBD.
Jan 26 15:52:48 compute-0 kernel: tapafe7812f-4f: entered promiscuous mode
Jan 26 15:52:48 compute-0 NetworkManager[48954]: <info>  [1769442768.2447] manager: (tapafe7812f-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/80)
Jan 26 15:52:48 compute-0 ovn_controller[146046]: 2026-01-26T15:52:48Z|00154|binding|INFO|Claiming lport afe7812f-4f9b-460d-9b60-f6f0ad83893c for this chassis.
Jan 26 15:52:48 compute-0 ovn_controller[146046]: 2026-01-26T15:52:48Z|00155|binding|INFO|afe7812f-4f9b-460d-9b60-f6f0ad83893c: Claiming fa:16:3e:83:77:42 10.100.0.13
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.246 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.248 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:48 compute-0 systemd-udevd[267847]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:52:48 compute-0 systemd-machined[208061]: New machine qemu-34-instance-0000001e.
Jan 26 15:52:48 compute-0 NetworkManager[48954]: <info>  [1769442768.2877] device (tapafe7812f-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:52:48 compute-0 NetworkManager[48954]: <info>  [1769442768.2885] device (tapafe7812f-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:52:48 compute-0 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Jan 26 15:52:48 compute-0 ceph-mon[75140]: pgmap v1162: 305 pgs: 305 active+clean; 178 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 9.0 MiB/s wr, 476 op/s
Jan 26 15:52:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2368996343' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.339 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:48 compute-0 ovn_controller[146046]: 2026-01-26T15:52:48Z|00156|binding|INFO|Setting lport afe7812f-4f9b-460d-9b60-f6f0ad83893c ovn-installed in OVS
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:48 compute-0 ovn_controller[146046]: 2026-01-26T15:52:48Z|00157|binding|INFO|Setting lport afe7812f-4f9b-460d-9b60-f6f0ad83893c up in Southbound
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.388 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:77:42 10.100.0.13'], port_security=['fa:16:3e:83:77:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd828ca5d-4f7c-4c81-9084-dd789819719d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9c61a251-7778-440e-a088-7675d42cddee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ce4d550978141bdb03727e5abb57737', 'neutron:revision_number': '2', 'neutron:security_group_ids': '293adf39-d9d9-4170-85b0-24e07a5f81cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a3e8910-abf3-4840-9da4-27a6f0086a1a, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=afe7812f-4f9b-460d-9b60-f6f0ad83893c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.390 156105 INFO neutron.agent.ovn.metadata.agent [-] Port afe7812f-4f9b-460d-9b60-f6f0ad83893c in datapath 9c61a251-7778-440e-a088-7675d42cddee bound to our chassis
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.391 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9c61a251-7778-440e-a088-7675d42cddee
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.391 239969 DEBUG oslo_concurrency.lockutils [None req-8a640366-d84c-44df-b84a-b76c9ef720b0 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "98902e40-0bb9-468b-9aa1-82d7fd8cc143" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.404 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e08dc8-cf08-429e-8f50-7893acd60194]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.405 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9c61a251-71 in ovnmeta-9c61a251-7778-440e-a088-7675d42cddee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.408 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9c61a251-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.408 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a85214da-8aa5-4558-8ad3-aa58ffd3b369]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.409 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[531f0a82-f031-46c6-ac92-8a36699c7731]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.422 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[bee11eb2-7a88-4f3a-a2dd-25abaab4f483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.446 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9aa81f6f-5e4b-4b79-8e7d-1f81c4387150]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.476 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1bfb6e-d621-4858-bb7b-979f7fdd2fe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.480 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b7839e18-d5a9-4d34-996e-801184f09e54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 systemd-udevd[267849]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:52:48 compute-0 NetworkManager[48954]: <info>  [1769442768.4861] manager: (tap9c61a251-70): new Veth device (/org/freedesktop/NetworkManager/Devices/81)
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.513 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c5efc4-d23a-403d-b67e-99af0ecccfb0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.516 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe9401a-52ad-45ec-addb-a9f60de42a24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 NetworkManager[48954]: <info>  [1769442768.5388] device (tap9c61a251-70): carrier: link connected
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.545 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[be3a853c-2466-4ede-b021-a537ddbec778]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.563 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[53e6afa7-afeb-4efd-832a-d04f09bcc145]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9c61a251-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:e2:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425934, 'reachable_time': 38769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267880, 'error': None, 'target': 'ovnmeta-9c61a251-7778-440e-a088-7675d42cddee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.580 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[09871814-cfed-4427-986d-b643f1b116ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:e2ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 425934, 'tstamp': 425934}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267881, 'error': None, 'target': 'ovnmeta-9c61a251-7778-440e-a088-7675d42cddee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.598 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[777557a3-a22c-43a2-b5ba-d7aab7f3ff11]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9c61a251-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:e2:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 54], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425934, 'reachable_time': 38769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 267889, 'error': None, 'target': 'ovnmeta-9c61a251-7778-440e-a088-7675d42cddee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.635 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3566c46d-a03f-4a68-aeac-d45ca51b2736]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011346587649141275 of space, bias 1.0, pg target 0.34039762947423824 quantized to 32 (current 32)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006681013644467405 of space, bias 1.0, pg target 0.20043040933402215 quantized to 32 (current 32)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.096792443761014e-07 of space, bias 4.0, pg target 0.0009716150932513217 quantized to 16 (current 16)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:52:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:52:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:52:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1534265977' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.700 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5dc48ed2-f4e0-4ef3-89d5-469fbfbc49d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.702 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c61a251-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.702 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.702 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c61a251-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:48 compute-0 kernel: tap9c61a251-70: entered promiscuous mode
Jan 26 15:52:48 compute-0 NetworkManager[48954]: <info>  [1769442768.7050] manager: (tap9c61a251-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.704 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.706 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.707 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9c61a251-70, col_values=(('external_ids', {'iface-id': 'c9cd23de-7438-4ff0-b9ac-0c7c95b78124'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:52:48 compute-0 ovn_controller[146046]: 2026-01-26T15:52:48Z|00158|binding|INFO|Releasing lport c9cd23de-7438-4ff0-b9ac-0c7c95b78124 from this chassis (sb_readonly=0)
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.708 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:52:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1534265977' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.727 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.728 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9c61a251-7778-440e-a088-7675d42cddee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9c61a251-7778-440e-a088-7675d42cddee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.728 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fada7665-f153-47a2-9162-344e2a927387]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.729 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-9c61a251-7778-440e-a088-7675d42cddee
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/9c61a251-7778-440e-a088-7675d42cddee.pid.haproxy
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 9c61a251-7778-440e-a088-7675d42cddee
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:52:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:48.730 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9c61a251-7778-440e-a088-7675d42cddee', 'env', 'PROCESS_TAG=haproxy-9c61a251-7778-440e-a088-7675d42cddee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9c61a251-7778-440e-a088-7675d42cddee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.755 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442768.754709, d828ca5d-4f7c-4c81-9084-dd789819719d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.756 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] VM Started (Lifecycle Event)
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.807 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.814 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442768.7548454, d828ca5d-4f7c-4c81-9084-dd789819719d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.815 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] VM Paused (Lifecycle Event)
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.848 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.850 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:48 compute-0 nova_compute[239965]: 2026-01-26 15:52:48.922 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.061 239969 DEBUG nova.compute.manager [req-08485eca-6de9-4f7e-b65c-c0ecb54688ab req-c1af53fc-4b65-4b25-a02d-cb0c61d4318a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received event network-vif-plugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.062 239969 DEBUG oslo_concurrency.lockutils [req-08485eca-6de9-4f7e-b65c-c0ecb54688ab req-c1af53fc-4b65-4b25-a02d-cb0c61d4318a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.062 239969 DEBUG oslo_concurrency.lockutils [req-08485eca-6de9-4f7e-b65c-c0ecb54688ab req-c1af53fc-4b65-4b25-a02d-cb0c61d4318a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.062 239969 DEBUG oslo_concurrency.lockutils [req-08485eca-6de9-4f7e-b65c-c0ecb54688ab req-c1af53fc-4b65-4b25-a02d-cb0c61d4318a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.062 239969 DEBUG nova.compute.manager [req-08485eca-6de9-4f7e-b65c-c0ecb54688ab req-c1af53fc-4b65-4b25-a02d-cb0c61d4318a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Processing event network-vif-plugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.063 239969 DEBUG nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.066 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442769.0654855, d828ca5d-4f7c-4c81-9084-dd789819719d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.066 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] VM Resumed (Lifecycle Event)
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.068 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.071 239969 INFO nova.virt.libvirt.driver [-] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Instance spawned successfully.
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.071 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:52:49 compute-0 podman[267956]: 2026-01-26 15:52:49.101365668 +0000 UTC m=+0.061006082 container create 9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.129 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:49 compute-0 systemd[1]: Started libpod-conmon-9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641.scope.
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.135 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.140 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.140 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.141 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.141 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.142 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.142 239969 DEBUG nova.virt.libvirt.driver [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:52:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:52:49 compute-0 podman[267956]: 2026-01-26 15:52:49.065943548 +0000 UTC m=+0.025583982 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:52:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11121a6fcd72af5cfcc6f1f20bb5407749d48cc2629bab68f9204e6ca9f03fea/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:52:49 compute-0 podman[267956]: 2026-01-26 15:52:49.182992001 +0000 UTC m=+0.142632425 container init 9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 15:52:49 compute-0 podman[267956]: 2026-01-26 15:52:49.188188328 +0000 UTC m=+0.147828732 container start 9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.202 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:52:49 compute-0 neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee[267970]: [NOTICE]   (267974) : New worker (267976) forked
Jan 26 15:52:49 compute-0 neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee[267970]: [NOTICE]   (267974) : Loading success.
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.252 239969 INFO nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Took 8.24 seconds to spawn the instance on the hypervisor.
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.253 239969 DEBUG nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Jan 26 15:52:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Jan 26 15:52:49 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Jan 26 15:52:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1534265977' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:52:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1534265977' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:52:49 compute-0 ceph-mon[75140]: osdmap e203: 3 total, 3 up, 3 in
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.348 239969 INFO nova.compute.manager [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Took 9.37 seconds to build instance.
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.388 239969 DEBUG oslo_concurrency.lockutils [None req-0b14ce68-2165-4247-bb05-7d9b850bafc2 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.487 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "b6668f3e-920f-42d3-9bc6-e1ab78764d3d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.487 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "b6668f3e-920f-42d3-9bc6-e1ab78764d3d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.488 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "b6668f3e-920f-42d3-9bc6-e1ab78764d3d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.488 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "b6668f3e-920f-42d3-9bc6-e1ab78764d3d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.488 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "b6668f3e-920f-42d3-9bc6-e1ab78764d3d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.490 239969 INFO nova.compute.manager [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Terminating instance
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.491 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "refresh_cache-b6668f3e-920f-42d3-9bc6-e1ab78764d3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.492 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquired lock "refresh_cache-b6668f3e-920f-42d3-9bc6-e1ab78764d3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.492 239969 DEBUG nova.network.neutron [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:52:49 compute-0 sshd-session[267985]: error: send_error: write: Broken pipe
Jan 26 15:52:49 compute-0 sshd-session[267985]: banner exchange: Connection from 3.137.73.221 port 58906: invalid format
Jan 26 15:52:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 178 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.0 MiB/s wr, 405 op/s
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.904 239969 DEBUG nova.network.neutron [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:49 compute-0 nova_compute[239965]: 2026-01-26 15:52:49.906 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.246 239969 DEBUG nova.network.neutron [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.267 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Releasing lock "refresh_cache-b6668f3e-920f-42d3-9bc6-e1ab78764d3d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.267 239969 DEBUG nova.compute.manager [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:52:50 compute-0 ceph-mon[75140]: pgmap v1164: 305 pgs: 305 active+clean; 178 MiB data, 428 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 8.0 MiB/s wr, 405 op/s
Jan 26 15:52:50 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Jan 26 15:52:50 compute-0 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001c.scope: Consumed 13.113s CPU time.
Jan 26 15:52:50 compute-0 systemd-machined[208061]: Machine qemu-31-instance-0000001c terminated.
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.492 239969 INFO nova.virt.libvirt.driver [-] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Instance destroyed successfully.
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.492 239969 DEBUG nova.objects.instance [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lazy-loading 'resources' on Instance uuid b6668f3e-920f-42d3-9bc6-e1ab78764d3d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.756 239969 INFO nova.virt.libvirt.driver [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Deleting instance files /var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d_del
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.757 239969 INFO nova.virt.libvirt.driver [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Deletion of /var/lib/nova/instances/b6668f3e-920f-42d3-9bc6-e1ab78764d3d_del complete
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.806 239969 INFO nova.compute.manager [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Took 0.54 seconds to destroy the instance on the hypervisor.
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.807 239969 DEBUG oslo.service.loopingcall [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.807 239969 DEBUG nova.compute.manager [-] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:52:50 compute-0 nova_compute[239965]: 2026-01-26 15:52:50.807 239969 DEBUG nova.network.neutron [-] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.128 239969 DEBUG nova.network.neutron [-] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.142 239969 DEBUG nova.network.neutron [-] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.153 239969 INFO nova.compute.manager [-] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Took 0.35 seconds to deallocate network for instance.
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.199 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.200 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:51 compute-0 ovn_controller[146046]: 2026-01-26T15:52:51Z|00159|binding|INFO|Releasing lport c9cd23de-7438-4ff0-b9ac-0c7c95b78124 from this chassis (sb_readonly=0)
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.245 239969 DEBUG nova.compute.manager [req-a00b8c59-d85c-4187-9f44-a9caa8f39fef req-088c7f98-6209-44d0-8576-2e894da1113b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received event network-vif-plugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.246 239969 DEBUG oslo_concurrency.lockutils [req-a00b8c59-d85c-4187-9f44-a9caa8f39fef req-088c7f98-6209-44d0-8576-2e894da1113b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.246 239969 DEBUG oslo_concurrency.lockutils [req-a00b8c59-d85c-4187-9f44-a9caa8f39fef req-088c7f98-6209-44d0-8576-2e894da1113b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.247 239969 DEBUG oslo_concurrency.lockutils [req-a00b8c59-d85c-4187-9f44-a9caa8f39fef req-088c7f98-6209-44d0-8576-2e894da1113b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.247 239969 DEBUG nova.compute.manager [req-a00b8c59-d85c-4187-9f44-a9caa8f39fef req-088c7f98-6209-44d0-8576-2e894da1113b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] No waiting events found dispatching network-vif-plugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.247 239969 WARNING nova.compute.manager [req-a00b8c59-d85c-4187-9f44-a9caa8f39fef req-088c7f98-6209-44d0-8576-2e894da1113b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received unexpected event network-vif-plugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c for instance with vm_state active and task_state None.
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.275 239969 DEBUG oslo_concurrency.processutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.9 MiB/s wr, 295 op/s
Jan 26 15:52:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:52:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4205855753' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.841 239969 DEBUG oslo_concurrency.processutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.846 239969 DEBUG nova.compute.provider_tree [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.883 239969 DEBUG nova.scheduler.client.report [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.903 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.926 239969 INFO nova.scheduler.client.report [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Deleted allocations for instance b6668f3e-920f-42d3-9bc6-e1ab78764d3d
Jan 26 15:52:51 compute-0 nova_compute[239965]: 2026-01-26 15:52:51.989 239969 DEBUG oslo_concurrency.lockutils [None req-701389ee-208c-41ad-a955-0989e73de212 38314961865549838edb2c26893eb430 9ade4d4287974ef1b6df3e85a2b3b70b - - default default] Lock "b6668f3e-920f-42d3-9bc6-e1ab78764d3d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:52 compute-0 nova_compute[239965]: 2026-01-26 15:52:52.337 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:52 compute-0 ceph-mon[75140]: pgmap v1165: 305 pgs: 305 active+clean; 167 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 5.9 MiB/s wr, 295 op/s
Jan 26 15:52:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4205855753' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:52:53 compute-0 podman[268030]: 2026-01-26 15:52:53.363841779 +0000 UTC m=+0.053173583 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 15:52:53 compute-0 podman[268031]: 2026-01-26 15:52:53.389636125 +0000 UTC m=+0.078025936 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 15:52:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 118 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.0 MiB/s wr, 402 op/s
Jan 26 15:52:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:54 compute-0 ceph-mon[75140]: pgmap v1166: 305 pgs: 305 active+clean; 118 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 5.0 MiB/s wr, 402 op/s
Jan 26 15:52:54 compute-0 nova_compute[239965]: 2026-01-26 15:52:54.907 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.8 MiB/s wr, 356 op/s
Jan 26 15:52:55 compute-0 NetworkManager[48954]: <info>  [1769442775.9804] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Jan 26 15:52:55 compute-0 nova_compute[239965]: 2026-01-26 15:52:55.979 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:55 compute-0 ovn_controller[146046]: 2026-01-26T15:52:55Z|00160|binding|INFO|Releasing lport c9cd23de-7438-4ff0-b9ac-0c7c95b78124 from this chassis (sb_readonly=0)
Jan 26 15:52:55 compute-0 NetworkManager[48954]: <info>  [1769442775.9839] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/84)
Jan 26 15:52:56 compute-0 ovn_controller[146046]: 2026-01-26T15:52:56Z|00161|binding|INFO|Releasing lport c9cd23de-7438-4ff0-b9ac-0c7c95b78124 from this chassis (sb_readonly=0)
Jan 26 15:52:56 compute-0 nova_compute[239965]: 2026-01-26 15:52:56.029 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:56 compute-0 nova_compute[239965]: 2026-01-26 15:52:56.034 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:56 compute-0 sudo[268072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:52:56 compute-0 sudo[268072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:52:56 compute-0 sudo[268072]: pam_unix(sudo:session): session closed for user root
Jan 26 15:52:56 compute-0 sudo[268097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 15:52:56 compute-0 sudo[268097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:52:56 compute-0 ceph-mon[75140]: pgmap v1167: 305 pgs: 305 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.8 MiB/s wr, 356 op/s
Jan 26 15:52:56 compute-0 sudo[268097]: pam_unix(sudo:session): session closed for user root
Jan 26 15:52:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:52:56 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:52:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:52:56 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:52:56 compute-0 sudo[268142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:52:56 compute-0 sudo[268142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:52:56 compute-0 sudo[268142]: pam_unix(sudo:session): session closed for user root
Jan 26 15:52:56 compute-0 sudo[268167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:52:56 compute-0 sudo[268167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.339 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.489 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442762.4881608, daafa464-6093-4ef3-b0a9-46f59c55cf94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.489 239969 INFO nova.compute.manager [-] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] VM Stopped (Lifecycle Event)
Jan 26 15:52:57 compute-0 sudo[268167]: pam_unix(sudo:session): session closed for user root
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.521 239969 DEBUG nova.compute.manager [None req-fc0777d2-aeb2-4c65-a74f-8ed41cd665fe - - - - - -] [instance: daafa464-6093-4ef3-b0a9-46f59c55cf94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 15:52:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:52:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:52:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:52:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:52:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:52:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:52:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:52:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:52:57 compute-0 sudo[268224]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:52:57 compute-0 sudo[268224]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:52:57 compute-0 sudo[268224]: pam_unix(sudo:session): session closed for user root
Jan 26 15:52:57 compute-0 sudo[268249]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:52:57 compute-0 sudo[268249]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:52:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 30 KiB/s wr, 226 op/s
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.764 239969 DEBUG nova.compute.manager [req-0909865c-7464-476f-ab1d-c3cdc56a9497 req-4cbadd1c-4d2a-4d24-824e-e4c6ee631c0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received event network-changed-afe7812f-4f9b-460d-9b60-f6f0ad83893c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.764 239969 DEBUG nova.compute.manager [req-0909865c-7464-476f-ab1d-c3cdc56a9497 req-4cbadd1c-4d2a-4d24-824e-e4c6ee631c0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Refreshing instance network info cache due to event network-changed-afe7812f-4f9b-460d-9b60-f6f0ad83893c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.764 239969 DEBUG oslo_concurrency.lockutils [req-0909865c-7464-476f-ab1d-c3cdc56a9497 req-4cbadd1c-4d2a-4d24-824e-e4c6ee631c0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.764 239969 DEBUG oslo_concurrency.lockutils [req-0909865c-7464-476f-ab1d-c3cdc56a9497 req-4cbadd1c-4d2a-4d24-824e-e4c6ee631c0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.765 239969 DEBUG nova.network.neutron [req-0909865c-7464-476f-ab1d-c3cdc56a9497 req-4cbadd1c-4d2a-4d24-824e-e4c6ee631c0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Refreshing network info cache for port afe7812f-4f9b-460d-9b60-f6f0ad83893c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:52:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:52:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:52:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:52:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:52:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.891 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442762.8903456, 98902e40-0bb9-468b-9aa1-82d7fd8cc143 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.892 239969 INFO nova.compute.manager [-] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] VM Stopped (Lifecycle Event)
Jan 26 15:52:57 compute-0 nova_compute[239965]: 2026-01-26 15:52:57.944 239969 DEBUG nova.compute.manager [None req-c3c8f633-1020-4a0a-b9fc-54f36aeb8d72 - - - - - -] [instance: 98902e40-0bb9-468b-9aa1-82d7fd8cc143] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:52:57 compute-0 podman[268285]: 2026-01-26 15:52:57.978110872 +0000 UTC m=+0.058476871 container create 07fb6da8b3d6a400d62b73d7bb768cf8de7940b25ad43a12cdd9162fd8966a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ride, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:52:58 compute-0 systemd[1]: Started libpod-conmon-07fb6da8b3d6a400d62b73d7bb768cf8de7940b25ad43a12cdd9162fd8966a4c.scope.
Jan 26 15:52:58 compute-0 podman[268285]: 2026-01-26 15:52:57.947080498 +0000 UTC m=+0.027446527 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:52:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:52:58 compute-0 podman[268285]: 2026-01-26 15:52:58.072498245 +0000 UTC m=+0.152864264 container init 07fb6da8b3d6a400d62b73d7bb768cf8de7940b25ad43a12cdd9162fd8966a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:52:58 compute-0 podman[268285]: 2026-01-26 15:52:58.08136938 +0000 UTC m=+0.161735369 container start 07fb6da8b3d6a400d62b73d7bb768cf8de7940b25ad43a12cdd9162fd8966a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ride, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:52:58 compute-0 crazy_ride[268301]: 167 167
Jan 26 15:52:58 compute-0 podman[268285]: 2026-01-26 15:52:58.08754703 +0000 UTC m=+0.167913049 container attach 07fb6da8b3d6a400d62b73d7bb768cf8de7940b25ad43a12cdd9162fd8966a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ride, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 15:52:58 compute-0 podman[268285]: 2026-01-26 15:52:58.08836269 +0000 UTC m=+0.168728709 container died 07fb6da8b3d6a400d62b73d7bb768cf8de7940b25ad43a12cdd9162fd8966a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 15:52:58 compute-0 systemd[1]: libpod-07fb6da8b3d6a400d62b73d7bb768cf8de7940b25ad43a12cdd9162fd8966a4c.scope: Deactivated successfully.
Jan 26 15:52:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-09bea30b1011a7829d7bcc6c0878ed3147f725c98faeb644083c7b6bdda5bff6-merged.mount: Deactivated successfully.
Jan 26 15:52:58 compute-0 podman[268285]: 2026-01-26 15:52:58.12951149 +0000 UTC m=+0.209877479 container remove 07fb6da8b3d6a400d62b73d7bb768cf8de7940b25ad43a12cdd9162fd8966a4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 15:52:58 compute-0 systemd[1]: libpod-conmon-07fb6da8b3d6a400d62b73d7bb768cf8de7940b25ad43a12cdd9162fd8966a4c.scope: Deactivated successfully.
Jan 26 15:52:58 compute-0 podman[268327]: 2026-01-26 15:52:58.320516108 +0000 UTC m=+0.063935604 container create 79c994966026e15e1b61e587a962dbe962563d829d2b7d9cbf33024a470df082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hugle, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 15:52:58 compute-0 systemd[1]: Started libpod-conmon-79c994966026e15e1b61e587a962dbe962563d829d2b7d9cbf33024a470df082.scope.
Jan 26 15:52:58 compute-0 podman[268327]: 2026-01-26 15:52:58.287882576 +0000 UTC m=+0.031302152 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:52:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83af80cdb024c445e3c6a8308dbead795478e5b0cb63d8ddc0fb6ed37f1ed627/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83af80cdb024c445e3c6a8308dbead795478e5b0cb63d8ddc0fb6ed37f1ed627/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83af80cdb024c445e3c6a8308dbead795478e5b0cb63d8ddc0fb6ed37f1ed627/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83af80cdb024c445e3c6a8308dbead795478e5b0cb63d8ddc0fb6ed37f1ed627/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:52:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83af80cdb024c445e3c6a8308dbead795478e5b0cb63d8ddc0fb6ed37f1ed627/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:52:58 compute-0 podman[268327]: 2026-01-26 15:52:58.416249753 +0000 UTC m=+0.159669249 container init 79c994966026e15e1b61e587a962dbe962563d829d2b7d9cbf33024a470df082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hugle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:52:58 compute-0 podman[268327]: 2026-01-26 15:52:58.422531376 +0000 UTC m=+0.165950872 container start 79c994966026e15e1b61e587a962dbe962563d829d2b7d9cbf33024a470df082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 26 15:52:58 compute-0 podman[268327]: 2026-01-26 15:52:58.426623575 +0000 UTC m=+0.170043091 container attach 79c994966026e15e1b61e587a962dbe962563d829d2b7d9cbf33024a470df082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:52:58 compute-0 mystifying_hugle[268345]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:52:58 compute-0 mystifying_hugle[268345]: --> All data devices are unavailable
Jan 26 15:52:58 compute-0 systemd[1]: libpod-79c994966026e15e1b61e587a962dbe962563d829d2b7d9cbf33024a470df082.scope: Deactivated successfully.
Jan 26 15:52:58 compute-0 podman[268327]: 2026-01-26 15:52:58.908882517 +0000 UTC m=+0.652302013 container died 79c994966026e15e1b61e587a962dbe962563d829d2b7d9cbf33024a470df082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hugle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 15:52:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:59.212 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:52:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:59.214 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:52:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:52:59.215 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:52:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:52:59 compute-0 ceph-mon[75140]: pgmap v1168: 305 pgs: 305 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 30 KiB/s wr, 226 op/s
Jan 26 15:52:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-83af80cdb024c445e3c6a8308dbead795478e5b0cb63d8ddc0fb6ed37f1ed627-merged.mount: Deactivated successfully.
Jan 26 15:52:59 compute-0 podman[268327]: 2026-01-26 15:52:59.442704981 +0000 UTC m=+1.186124477 container remove 79c994966026e15e1b61e587a962dbe962563d829d2b7d9cbf33024a470df082 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 15:52:59 compute-0 systemd[1]: libpod-conmon-79c994966026e15e1b61e587a962dbe962563d829d2b7d9cbf33024a470df082.scope: Deactivated successfully.
Jan 26 15:52:59 compute-0 sudo[268249]: pam_unix(sudo:session): session closed for user root
Jan 26 15:52:59 compute-0 sudo[268378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:52:59 compute-0 sudo[268378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:52:59 compute-0 sudo[268378]: pam_unix(sudo:session): session closed for user root
Jan 26 15:52:59 compute-0 sudo[268403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:52:59 compute-0 sudo[268403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:52:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 29 KiB/s wr, 218 op/s
Jan 26 15:52:59 compute-0 nova_compute[239965]: 2026-01-26 15:52:59.895 239969 DEBUG nova.network.neutron [req-0909865c-7464-476f-ab1d-c3cdc56a9497 req-4cbadd1c-4d2a-4d24-824e-e4c6ee631c0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Updated VIF entry in instance network info cache for port afe7812f-4f9b-460d-9b60-f6f0ad83893c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:52:59 compute-0 nova_compute[239965]: 2026-01-26 15:52:59.896 239969 DEBUG nova.network.neutron [req-0909865c-7464-476f-ab1d-c3cdc56a9497 req-4cbadd1c-4d2a-4d24-824e-e4c6ee631c0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Updating instance_info_cache with network_info: [{"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:52:59 compute-0 nova_compute[239965]: 2026-01-26 15:52:59.908 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:52:59 compute-0 podman[268439]: 2026-01-26 15:52:59.913212498 +0000 UTC m=+0.049457402 container create 8921bb9693a2eb2a05a984addaef81ecd7d4557c8a501a7e2cffb5aa8bc62105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:52:59 compute-0 nova_compute[239965]: 2026-01-26 15:52:59.918 239969 DEBUG oslo_concurrency.lockutils [req-0909865c-7464-476f-ab1d-c3cdc56a9497 req-4cbadd1c-4d2a-4d24-824e-e4c6ee631c0d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:52:59 compute-0 systemd[1]: Started libpod-conmon-8921bb9693a2eb2a05a984addaef81ecd7d4557c8a501a7e2cffb5aa8bc62105.scope.
Jan 26 15:52:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:52:59 compute-0 podman[268439]: 2026-01-26 15:52:59.894399562 +0000 UTC m=+0.030644496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:52:59 compute-0 podman[268439]: 2026-01-26 15:52:59.991821268 +0000 UTC m=+0.128066212 container init 8921bb9693a2eb2a05a984addaef81ecd7d4557c8a501a7e2cffb5aa8bc62105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 15:52:59 compute-0 podman[268439]: 2026-01-26 15:52:59.999595136 +0000 UTC m=+0.135840030 container start 8921bb9693a2eb2a05a984addaef81ecd7d4557c8a501a7e2cffb5aa8bc62105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 15:53:00 compute-0 podman[268439]: 2026-01-26 15:53:00.004445035 +0000 UTC m=+0.140689969 container attach 8921bb9693a2eb2a05a984addaef81ecd7d4557c8a501a7e2cffb5aa8bc62105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:53:00 compute-0 modest_neumann[268455]: 167 167
Jan 26 15:53:00 compute-0 systemd[1]: libpod-8921bb9693a2eb2a05a984addaef81ecd7d4557c8a501a7e2cffb5aa8bc62105.scope: Deactivated successfully.
Jan 26 15:53:00 compute-0 podman[268439]: 2026-01-26 15:53:00.0087928 +0000 UTC m=+0.145037714 container died 8921bb9693a2eb2a05a984addaef81ecd7d4557c8a501a7e2cffb5aa8bc62105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 15:53:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-91bd174cd9c444873f17379e1fbb022da7a678f159e53f3bd3b4aa5ad6c86f5e-merged.mount: Deactivated successfully.
Jan 26 15:53:00 compute-0 podman[268439]: 2026-01-26 15:53:00.055538375 +0000 UTC m=+0.191783289 container remove 8921bb9693a2eb2a05a984addaef81ecd7d4557c8a501a7e2cffb5aa8bc62105 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:53:00 compute-0 systemd[1]: libpod-conmon-8921bb9693a2eb2a05a984addaef81ecd7d4557c8a501a7e2cffb5aa8bc62105.scope: Deactivated successfully.
Jan 26 15:53:00 compute-0 podman[268478]: 2026-01-26 15:53:00.236623613 +0000 UTC m=+0.047064484 container create 88da15cbc2c1ad9ada96a24a0c787b71c2adcb4c3ad71b8b9661d11366a37d7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:53:00 compute-0 systemd[1]: Started libpod-conmon-88da15cbc2c1ad9ada96a24a0c787b71c2adcb4c3ad71b8b9661d11366a37d7b.scope.
Jan 26 15:53:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:53:00 compute-0 podman[268478]: 2026-01-26 15:53:00.214258729 +0000 UTC m=+0.024699600 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07490317999a653a9c1cabdac5f1995b3102ced68e09a98ac17961ea605d10c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07490317999a653a9c1cabdac5f1995b3102ced68e09a98ac17961ea605d10c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07490317999a653a9c1cabdac5f1995b3102ced68e09a98ac17961ea605d10c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07490317999a653a9c1cabdac5f1995b3102ced68e09a98ac17961ea605d10c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:53:00 compute-0 podman[268478]: 2026-01-26 15:53:00.327860279 +0000 UTC m=+0.138301120 container init 88da15cbc2c1ad9ada96a24a0c787b71c2adcb4c3ad71b8b9661d11366a37d7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 15:53:00 compute-0 podman[268478]: 2026-01-26 15:53:00.335813722 +0000 UTC m=+0.146254563 container start 88da15cbc2c1ad9ada96a24a0c787b71c2adcb4c3ad71b8b9661d11366a37d7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:53:00 compute-0 podman[268478]: 2026-01-26 15:53:00.338833625 +0000 UTC m=+0.149274486 container attach 88da15cbc2c1ad9ada96a24a0c787b71c2adcb4c3ad71b8b9661d11366a37d7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 15:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:53:00 compute-0 ceph-mon[75140]: pgmap v1169: 305 pgs: 305 active+clean; 88 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 29 KiB/s wr, 218 op/s
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]: {
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:     "0": [
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:         {
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "devices": [
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "/dev/loop3"
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             ],
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_name": "ceph_lv0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_size": "21470642176",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "name": "ceph_lv0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "tags": {
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.cluster_name": "ceph",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.crush_device_class": "",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.encrypted": "0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.objectstore": "bluestore",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.osd_id": "0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.type": "block",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.vdo": "0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.with_tpm": "0"
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             },
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "type": "block",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "vg_name": "ceph_vg0"
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:         }
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:     ],
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:     "1": [
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:         {
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "devices": [
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "/dev/loop4"
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             ],
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_name": "ceph_lv1",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_size": "21470642176",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "name": "ceph_lv1",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "tags": {
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.cluster_name": "ceph",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.crush_device_class": "",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.encrypted": "0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.objectstore": "bluestore",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.osd_id": "1",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.type": "block",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.vdo": "0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.with_tpm": "0"
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             },
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "type": "block",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "vg_name": "ceph_vg1"
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:         }
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:     ],
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:     "2": [
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:         {
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "devices": [
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "/dev/loop5"
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             ],
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_name": "ceph_lv2",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_size": "21470642176",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "name": "ceph_lv2",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "tags": {
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.cluster_name": "ceph",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.crush_device_class": "",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.encrypted": "0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.objectstore": "bluestore",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.osd_id": "2",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.type": "block",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.vdo": "0",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:                 "ceph.with_tpm": "0"
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             },
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "type": "block",
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:             "vg_name": "ceph_vg2"
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:         }
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]:     ]
Jan 26 15:53:00 compute-0 gifted_chatterjee[268494]: }
Jan 26 15:53:00 compute-0 systemd[1]: libpod-88da15cbc2c1ad9ada96a24a0c787b71c2adcb4c3ad71b8b9661d11366a37d7b.scope: Deactivated successfully.
Jan 26 15:53:00 compute-0 podman[268503]: 2026-01-26 15:53:00.741266169 +0000 UTC m=+0.024570418 container died 88da15cbc2c1ad9ada96a24a0c787b71c2adcb4c3ad71b8b9661d11366a37d7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:53:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-07490317999a653a9c1cabdac5f1995b3102ced68e09a98ac17961ea605d10c0-merged.mount: Deactivated successfully.
Jan 26 15:53:00 compute-0 podman[268503]: 2026-01-26 15:53:00.782780928 +0000 UTC m=+0.066085167 container remove 88da15cbc2c1ad9ada96a24a0c787b71c2adcb4c3ad71b8b9661d11366a37d7b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:53:00 compute-0 systemd[1]: libpod-conmon-88da15cbc2c1ad9ada96a24a0c787b71c2adcb4c3ad71b8b9661d11366a37d7b.scope: Deactivated successfully.
Jan 26 15:53:00 compute-0 sudo[268403]: pam_unix(sudo:session): session closed for user root
Jan 26 15:53:00 compute-0 sudo[268517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:53:00 compute-0 sudo[268517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:53:00 compute-0 sudo[268517]: pam_unix(sudo:session): session closed for user root
Jan 26 15:53:00 compute-0 sudo[268542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:53:00 compute-0 sudo[268542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:53:01 compute-0 podman[268578]: 2026-01-26 15:53:01.249268497 +0000 UTC m=+0.041660583 container create 17a84352889868bef6d96a639745e8548faa3087f52cf5fba667e5b3acd634b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 15:53:01 compute-0 systemd[1]: Started libpod-conmon-17a84352889868bef6d96a639745e8548faa3087f52cf5fba667e5b3acd634b4.scope.
Jan 26 15:53:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:53:01 compute-0 podman[268578]: 2026-01-26 15:53:01.228101413 +0000 UTC m=+0.020493529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:53:01 compute-0 podman[268578]: 2026-01-26 15:53:01.343835613 +0000 UTC m=+0.136227719 container init 17a84352889868bef6d96a639745e8548faa3087f52cf5fba667e5b3acd634b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jennings, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:53:01 compute-0 podman[268578]: 2026-01-26 15:53:01.353096798 +0000 UTC m=+0.145488874 container start 17a84352889868bef6d96a639745e8548faa3087f52cf5fba667e5b3acd634b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:53:01 compute-0 podman[268578]: 2026-01-26 15:53:01.358233253 +0000 UTC m=+0.150625369 container attach 17a84352889868bef6d96a639745e8548faa3087f52cf5fba667e5b3acd634b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jennings, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:53:01 compute-0 romantic_jennings[268594]: 167 167
Jan 26 15:53:01 compute-0 systemd[1]: libpod-17a84352889868bef6d96a639745e8548faa3087f52cf5fba667e5b3acd634b4.scope: Deactivated successfully.
Jan 26 15:53:01 compute-0 podman[268578]: 2026-01-26 15:53:01.360218101 +0000 UTC m=+0.152610197 container died 17a84352889868bef6d96a639745e8548faa3087f52cf5fba667e5b3acd634b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 15:53:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ff1b54e2b3e0c59a1a392ec40058551445612d92a334bc0b23a6b7438273cea-merged.mount: Deactivated successfully.
Jan 26 15:53:01 compute-0 podman[268578]: 2026-01-26 15:53:01.395489297 +0000 UTC m=+0.187881383 container remove 17a84352889868bef6d96a639745e8548faa3087f52cf5fba667e5b3acd634b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_jennings, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 15:53:01 compute-0 systemd[1]: libpod-conmon-17a84352889868bef6d96a639745e8548faa3087f52cf5fba667e5b3acd634b4.scope: Deactivated successfully.
Jan 26 15:53:01 compute-0 podman[268619]: 2026-01-26 15:53:01.575471869 +0000 UTC m=+0.040091435 container create e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ellis, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:53:01 compute-0 systemd[1]: Started libpod-conmon-e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874.scope.
Jan 26 15:53:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632a1c2053c63c3c179a78a19ed36b0ffbd06bef7e49b6eb4bece3d7e670d7b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632a1c2053c63c3c179a78a19ed36b0ffbd06bef7e49b6eb4bece3d7e670d7b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632a1c2053c63c3c179a78a19ed36b0ffbd06bef7e49b6eb4bece3d7e670d7b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:53:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/632a1c2053c63c3c179a78a19ed36b0ffbd06bef7e49b6eb4bece3d7e670d7b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:53:01 compute-0 podman[268619]: 2026-01-26 15:53:01.557100323 +0000 UTC m=+0.021719909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:53:01 compute-0 podman[268619]: 2026-01-26 15:53:01.660776101 +0000 UTC m=+0.125395677 container init e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:53:01 compute-0 podman[268619]: 2026-01-26 15:53:01.667131345 +0000 UTC m=+0.131750911 container start e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ellis, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:53:01 compute-0 podman[268619]: 2026-01-26 15:53:01.673340286 +0000 UTC m=+0.137959872 container attach e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 15:53:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 100 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 199 op/s
Jan 26 15:53:01 compute-0 ovn_controller[146046]: 2026-01-26T15:53:01Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:77:42 10.100.0.13
Jan 26 15:53:01 compute-0 ovn_controller[146046]: 2026-01-26T15:53:01Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:77:42 10.100.0.13
Jan 26 15:53:02 compute-0 nova_compute[239965]: 2026-01-26 15:53:02.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:02 compute-0 nova_compute[239965]: 2026-01-26 15:53:02.342 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:02 compute-0 lvm[268713]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:53:02 compute-0 lvm[268714]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:53:02 compute-0 lvm[268713]: VG ceph_vg0 finished
Jan 26 15:53:02 compute-0 lvm[268714]: VG ceph_vg1 finished
Jan 26 15:53:02 compute-0 lvm[268716]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:53:02 compute-0 lvm[268716]: VG ceph_vg2 finished
Jan 26 15:53:02 compute-0 inspiring_ellis[268635]: {}
Jan 26 15:53:02 compute-0 systemd[1]: libpod-e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874.scope: Deactivated successfully.
Jan 26 15:53:02 compute-0 systemd[1]: libpod-e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874.scope: Consumed 1.494s CPU time.
Jan 26 15:53:02 compute-0 podman[268619]: 2026-01-26 15:53:02.631540077 +0000 UTC m=+1.096159653 container died e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-632a1c2053c63c3c179a78a19ed36b0ffbd06bef7e49b6eb4bece3d7e670d7b6-merged.mount: Deactivated successfully.
Jan 26 15:53:02 compute-0 podman[268619]: 2026-01-26 15:53:02.682390781 +0000 UTC m=+1.147010347 container remove e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_ellis, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 26 15:53:02 compute-0 systemd[1]: libpod-conmon-e0addaaa240c1925f278e37a5f3cacb04b28bf9b9fe7684d11b7ac83ed415874.scope: Deactivated successfully.
Jan 26 15:53:02 compute-0 sudo[268542]: pam_unix(sudo:session): session closed for user root
Jan 26 15:53:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:53:02 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:53:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:53:02 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:53:02 compute-0 ceph-mon[75140]: pgmap v1170: 305 pgs: 305 active+clean; 100 MiB data, 370 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.0 MiB/s wr, 199 op/s
Jan 26 15:53:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:53:02 compute-0 sudo[268731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:53:02 compute-0 sudo[268731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:53:02 compute-0 sudo[268731]: pam_unix(sudo:session): session closed for user root
Jan 26 15:53:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 119 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Jan 26 15:53:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:04 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:53:04 compute-0 nova_compute[239965]: 2026-01-26 15:53:04.910 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:05 compute-0 nova_compute[239965]: 2026-01-26 15:53:05.491 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442770.4902418, b6668f3e-920f-42d3-9bc6-e1ab78764d3d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:53:05 compute-0 nova_compute[239965]: 2026-01-26 15:53:05.491 239969 INFO nova.compute.manager [-] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] VM Stopped (Lifecycle Event)
Jan 26 15:53:05 compute-0 ceph-mon[75140]: pgmap v1171: 305 pgs: 305 active+clean; 119 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 173 op/s
Jan 26 15:53:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 26 15:53:06 compute-0 nova_compute[239965]: 2026-01-26 15:53:06.093 239969 DEBUG nova.compute.manager [None req-b308e43f-c820-435c-9f9d-a51e83e1446f - - - - - -] [instance: b6668f3e-920f-42d3-9bc6-e1ab78764d3d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:53:06 compute-0 ceph-mon[75140]: pgmap v1172: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 278 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Jan 26 15:53:07 compute-0 nova_compute[239965]: 2026-01-26 15:53:07.346 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 15:53:08 compute-0 ceph-mon[75140]: pgmap v1173: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 15:53:08 compute-0 nova_compute[239965]: 2026-01-26 15:53:08.996 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 15:53:09 compute-0 nova_compute[239965]: 2026-01-26 15:53:09.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Jan 26 15:53:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Jan 26 15:53:10 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Jan 26 15:53:10 compute-0 ceph-mon[75140]: pgmap v1174: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 15:53:11 compute-0 nova_compute[239965]: 2026-01-26 15:53:11.009 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 1.3 MiB/s wr, 63 op/s
Jan 26 15:53:11 compute-0 ceph-mon[75140]: osdmap e204: 3 total, 3 up, 3 in
Jan 26 15:53:12 compute-0 nova_compute[239965]: 2026-01-26 15:53:12.349 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:12 compute-0 ceph-mon[75140]: pgmap v1176: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 310 KiB/s rd, 1.3 MiB/s wr, 63 op/s
Jan 26 15:53:13 compute-0 nova_compute[239965]: 2026-01-26 15:53:13.103 239969 DEBUG nova.compute.manager [req-f4178494-999b-4824-95ba-d899c7a5c211 req-055d21fd-bbb0-43d6-b097-45467956f597 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received event network-changed-afe7812f-4f9b-460d-9b60-f6f0ad83893c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:53:13 compute-0 nova_compute[239965]: 2026-01-26 15:53:13.103 239969 DEBUG nova.compute.manager [req-f4178494-999b-4824-95ba-d899c7a5c211 req-055d21fd-bbb0-43d6-b097-45467956f597 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Refreshing instance network info cache due to event network-changed-afe7812f-4f9b-460d-9b60-f6f0ad83893c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:53:13 compute-0 nova_compute[239965]: 2026-01-26 15:53:13.104 239969 DEBUG oslo_concurrency.lockutils [req-f4178494-999b-4824-95ba-d899c7a5c211 req-055d21fd-bbb0-43d6-b097-45467956f597 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:53:13 compute-0 nova_compute[239965]: 2026-01-26 15:53:13.104 239969 DEBUG oslo_concurrency.lockutils [req-f4178494-999b-4824-95ba-d899c7a5c211 req-055d21fd-bbb0-43d6-b097-45467956f597 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:53:13 compute-0 nova_compute[239965]: 2026-01-26 15:53:13.104 239969 DEBUG nova.network.neutron [req-f4178494-999b-4824-95ba-d899c7a5c211 req-055d21fd-bbb0-43d6-b097-45467956f597 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Refreshing network info cache for port afe7812f-4f9b-460d-9b60-f6f0ad83893c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:53:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 67 KiB/s wr, 34 op/s
Jan 26 15:53:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:14 compute-0 ceph-mon[75140]: pgmap v1177: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 167 KiB/s rd, 67 KiB/s wr, 34 op/s
Jan 26 15:53:14 compute-0 nova_compute[239965]: 2026-01-26 15:53:14.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 16 KiB/s wr, 15 op/s
Jan 26 15:53:15 compute-0 nova_compute[239965]: 2026-01-26 15:53:15.745 239969 DEBUG nova.network.neutron [req-f4178494-999b-4824-95ba-d899c7a5c211 req-055d21fd-bbb0-43d6-b097-45467956f597 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Updated VIF entry in instance network info cache for port afe7812f-4f9b-460d-9b60-f6f0ad83893c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:53:15 compute-0 nova_compute[239965]: 2026-01-26 15:53:15.746 239969 DEBUG nova.network.neutron [req-f4178494-999b-4824-95ba-d899c7a5c211 req-055d21fd-bbb0-43d6-b097-45467956f597 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Updating instance_info_cache with network_info: [{"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:53:15 compute-0 nova_compute[239965]: 2026-01-26 15:53:15.770 239969 DEBUG oslo_concurrency.lockutils [req-f4178494-999b-4824-95ba-d899c7a5c211 req-055d21fd-bbb0-43d6-b097-45467956f597 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d828ca5d-4f7c-4c81-9084-dd789819719d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:53:16 compute-0 ceph-mon[75140]: pgmap v1178: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 16 KiB/s wr, 15 op/s
Jan 26 15:53:17 compute-0 nova_compute[239965]: 2026-01-26 15:53:17.352 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 16 KiB/s wr, 15 op/s
Jan 26 15:53:18 compute-0 ceph-mon[75140]: pgmap v1179: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 16 KiB/s wr, 15 op/s
Jan 26 15:53:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 16 KiB/s wr, 15 op/s
Jan 26 15:53:19 compute-0 nova_compute[239965]: 2026-01-26 15:53:19.916 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:20 compute-0 nova_compute[239965]: 2026-01-26 15:53:20.077 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:20 compute-0 ceph-mon[75140]: pgmap v1180: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 16 KiB/s wr, 15 op/s
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.486 239969 DEBUG oslo_concurrency.lockutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquiring lock "d828ca5d-4f7c-4c81-9084-dd789819719d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.486 239969 DEBUG oslo_concurrency.lockutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.486 239969 DEBUG oslo_concurrency.lockutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquiring lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.486 239969 DEBUG oslo_concurrency.lockutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.487 239969 DEBUG oslo_concurrency.lockutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.488 239969 INFO nova.compute.manager [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Terminating instance
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.489 239969 DEBUG nova.compute.manager [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:53:21 compute-0 kernel: tapafe7812f-4f (unregistering): left promiscuous mode
Jan 26 15:53:21 compute-0 NetworkManager[48954]: <info>  [1769442801.5356] device (tapafe7812f-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.544 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 ovn_controller[146046]: 2026-01-26T15:53:21Z|00162|binding|INFO|Releasing lport afe7812f-4f9b-460d-9b60-f6f0ad83893c from this chassis (sb_readonly=0)
Jan 26 15:53:21 compute-0 ovn_controller[146046]: 2026-01-26T15:53:21Z|00163|binding|INFO|Setting lport afe7812f-4f9b-460d-9b60-f6f0ad83893c down in Southbound
Jan 26 15:53:21 compute-0 ovn_controller[146046]: 2026-01-26T15:53:21Z|00164|binding|INFO|Removing iface tapafe7812f-4f ovn-installed in OVS
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.547 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.553 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:77:42 10.100.0.13'], port_security=['fa:16:3e:83:77:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd828ca5d-4f7c-4c81-9084-dd789819719d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9c61a251-7778-440e-a088-7675d42cddee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2ce4d550978141bdb03727e5abb57737', 'neutron:revision_number': '4', 'neutron:security_group_ids': '293adf39-d9d9-4170-85b0-24e07a5f81cd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a3e8910-abf3-4840-9da4-27a6f0086a1a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=afe7812f-4f9b-460d-9b60-f6f0ad83893c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.554 156105 INFO neutron.agent.ovn.metadata.agent [-] Port afe7812f-4f9b-460d-9b60-f6f0ad83893c in datapath 9c61a251-7778-440e-a088-7675d42cddee unbound from our chassis
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.555 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9c61a251-7778-440e-a088-7675d42cddee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.556 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[29363cbf-f16f-4920-b4d4-046fa812a2b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.557 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9c61a251-7778-440e-a088-7675d42cddee namespace which is not needed anymore
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.569 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Jan 26 15:53:21 compute-0 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Consumed 14.052s CPU time.
Jan 26 15:53:21 compute-0 systemd-machined[208061]: Machine qemu-34-instance-0000001e terminated.
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.635 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.637 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:53:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 13 KiB/s wr, 11 op/s
Jan 26 15:53:21 compute-0 neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee[267970]: [NOTICE]   (267974) : haproxy version is 2.8.14-c23fe91
Jan 26 15:53:21 compute-0 neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee[267970]: [NOTICE]   (267974) : path to executable is /usr/sbin/haproxy
Jan 26 15:53:21 compute-0 neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee[267970]: [WARNING]  (267974) : Exiting Master process...
Jan 26 15:53:21 compute-0 neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee[267970]: [ALERT]    (267974) : Current worker (267976) exited with code 143 (Terminated)
Jan 26 15:53:21 compute-0 neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee[267970]: [WARNING]  (267974) : All workers exited. Exiting... (0)
Jan 26 15:53:21 compute-0 systemd[1]: libpod-9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641.scope: Deactivated successfully.
Jan 26 15:53:21 compute-0 podman[268780]: 2026-01-26 15:53:21.713047844 +0000 UTC m=+0.048943190 container died 9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 26 15:53:21 compute-0 NetworkManager[48954]: <info>  [1769442801.7152] manager: (tapafe7812f-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/85)
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.716 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.723 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.734 239969 INFO nova.virt.libvirt.driver [-] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Instance destroyed successfully.
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.735 239969 DEBUG nova.objects.instance [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lazy-loading 'resources' on Instance uuid d828ca5d-4f7c-4c81-9084-dd789819719d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:53:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641-userdata-shm.mount: Deactivated successfully.
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.756 239969 DEBUG nova.virt.libvirt.vif [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:52:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-2014467891',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-2014467891',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-201446789',id=30,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:52:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2ce4d550978141bdb03727e5abb57737',ramdisk_id='',reservation_id='r-dj9ugxrv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-1145044074',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-1145044074-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:52:49Z,user_data=None,user_id='47cffb17f49240d1ae5b9302d2e92cf0',uuid=d828ca5d-4f7c-4c81-9084-dd789819719d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:53:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-11121a6fcd72af5cfcc6f1f20bb5407749d48cc2629bab68f9204e6ca9f03fea-merged.mount: Deactivated successfully.
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.758 239969 DEBUG nova.network.os_vif_util [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Converting VIF {"id": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "address": "fa:16:3e:83:77:42", "network": {"id": "9c61a251-7778-440e-a088-7675d42cddee", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-463214765-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2ce4d550978141bdb03727e5abb57737", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapafe7812f-4f", "ovs_interfaceid": "afe7812f-4f9b-460d-9b60-f6f0ad83893c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.759 239969 DEBUG nova.network.os_vif_util [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:77:42,bridge_name='br-int',has_traffic_filtering=True,id=afe7812f-4f9b-460d-9b60-f6f0ad83893c,network=Network(9c61a251-7778-440e-a088-7675d42cddee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafe7812f-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.759 239969 DEBUG os_vif [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:77:42,bridge_name='br-int',has_traffic_filtering=True,id=afe7812f-4f9b-460d-9b60-f6f0ad83893c,network=Network(9c61a251-7778-440e-a088-7675d42cddee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafe7812f-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.761 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.761 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapafe7812f-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.765 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.767 239969 INFO os_vif [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:77:42,bridge_name='br-int',has_traffic_filtering=True,id=afe7812f-4f9b-460d-9b60-f6f0ad83893c,network=Network(9c61a251-7778-440e-a088-7675d42cddee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapafe7812f-4f')
Jan 26 15:53:21 compute-0 podman[268780]: 2026-01-26 15:53:21.767546567 +0000 UTC m=+0.103441903 container cleanup 9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 15:53:21 compute-0 systemd[1]: libpod-conmon-9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641.scope: Deactivated successfully.
Jan 26 15:53:21 compute-0 podman[268822]: 2026-01-26 15:53:21.85412849 +0000 UTC m=+0.060375347 container remove 9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.860 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aeae8688-a5f8-45ce-a74a-885827b38721]: (4, ('Mon Jan 26 03:53:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee (9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641)\n9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641\nMon Jan 26 03:53:21 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9c61a251-7778-440e-a088-7675d42cddee (9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641)\n9ffa04346a0223638e71d1f8f64e1f858376adfcb3ff8ff0400623434bea4641\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.863 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a4892db8-fa71-462a-82b7-5b3ab98e23e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.865 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c61a251-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.868 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 kernel: tap9c61a251-70: left promiscuous mode
Jan 26 15:53:21 compute-0 nova_compute[239965]: 2026-01-26 15:53:21.886 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.890 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4458ada6-d441-477c-a7cf-f330adcbf4cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.908 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[13a43d71-6df5-4ecd-9cfb-b99ff35d7f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.910 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d66e17f0-4d46-469a-bcb5-398cf1c249e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.928 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9c50ec38-86c9-4578-9f6b-f4138ee7bc70]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 425927, 'reachable_time': 17909, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268848, 'error': None, 'target': 'ovnmeta-9c61a251-7778-440e-a088-7675d42cddee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.931 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9c61a251-7778-440e-a088-7675d42cddee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.931 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d9da4bcd-620c-4243-9e2b-c4adffd6781d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:53:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:21.932 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:53:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d9c61a251\x2d7778\x2d440e\x2da088\x2d7675d42cddee.mount: Deactivated successfully.
Jan 26 15:53:22 compute-0 nova_compute[239965]: 2026-01-26 15:53:22.056 239969 INFO nova.virt.libvirt.driver [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Deleting instance files /var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d_del
Jan 26 15:53:22 compute-0 nova_compute[239965]: 2026-01-26 15:53:22.057 239969 INFO nova.virt.libvirt.driver [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Deletion of /var/lib/nova/instances/d828ca5d-4f7c-4c81-9084-dd789819719d_del complete
Jan 26 15:53:22 compute-0 nova_compute[239965]: 2026-01-26 15:53:22.120 239969 INFO nova.compute.manager [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Took 0.63 seconds to destroy the instance on the hypervisor.
Jan 26 15:53:22 compute-0 nova_compute[239965]: 2026-01-26 15:53:22.121 239969 DEBUG oslo.service.loopingcall [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:53:22 compute-0 nova_compute[239965]: 2026-01-26 15:53:22.121 239969 DEBUG nova.compute.manager [-] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:53:22 compute-0 nova_compute[239965]: 2026-01-26 15:53:22.121 239969 DEBUG nova.network.neutron [-] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:53:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:22.933 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:53:22 compute-0 ceph-mon[75140]: pgmap v1181: 305 pgs: 305 active+clean; 121 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 13 KiB/s wr, 11 op/s
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.026 239969 DEBUG nova.network.neutron [-] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.050 239969 INFO nova.compute.manager [-] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Took 0.93 seconds to deallocate network for instance.
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.121 239969 DEBUG oslo_concurrency.lockutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.122 239969 DEBUG oslo_concurrency.lockutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.214 239969 DEBUG oslo_concurrency.processutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.298 239969 DEBUG nova.compute.manager [req-34d7345a-233a-4a11-a7b8-cdf23a3007c4 req-d6b26e6c-d32d-43e9-88c6-1bda49fb03ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received event network-vif-unplugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.299 239969 DEBUG oslo_concurrency.lockutils [req-34d7345a-233a-4a11-a7b8-cdf23a3007c4 req-d6b26e6c-d32d-43e9-88c6-1bda49fb03ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.299 239969 DEBUG oslo_concurrency.lockutils [req-34d7345a-233a-4a11-a7b8-cdf23a3007c4 req-d6b26e6c-d32d-43e9-88c6-1bda49fb03ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.299 239969 DEBUG oslo_concurrency.lockutils [req-34d7345a-233a-4a11-a7b8-cdf23a3007c4 req-d6b26e6c-d32d-43e9-88c6-1bda49fb03ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.299 239969 DEBUG nova.compute.manager [req-34d7345a-233a-4a11-a7b8-cdf23a3007c4 req-d6b26e6c-d32d-43e9-88c6-1bda49fb03ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] No waiting events found dispatching network-vif-unplugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.300 239969 WARNING nova.compute.manager [req-34d7345a-233a-4a11-a7b8-cdf23a3007c4 req-d6b26e6c-d32d-43e9-88c6-1bda49fb03ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received unexpected event network-vif-unplugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c for instance with vm_state deleted and task_state None.
Jan 26 15:53:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 69 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 26 15:53:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:53:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3827037646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.829 239969 DEBUG oslo_concurrency.processutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.835 239969 DEBUG nova.compute.provider_tree [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.853 239969 DEBUG nova.scheduler.client.report [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.879 239969 DEBUG oslo_concurrency.lockutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.905 239969 INFO nova.scheduler.client.report [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Deleted allocations for instance d828ca5d-4f7c-4c81-9084-dd789819719d
Jan 26 15:53:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3827037646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:53:23 compute-0 nova_compute[239965]: 2026-01-26 15:53:23.968 239969 DEBUG oslo_concurrency.lockutils [None req-48479f67-6290-40b2-9a7e-b46faddcc86e 47cffb17f49240d1ae5b9302d2e92cf0 2ce4d550978141bdb03727e5abb57737 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.482s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:24 compute-0 podman[268873]: 2026-01-26 15:53:24.39092618 +0000 UTC m=+0.078368434 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:53:24 compute-0 podman[268872]: 2026-01-26 15:53:24.392090639 +0000 UTC m=+0.078934839 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 26 15:53:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:24 compute-0 nova_compute[239965]: 2026-01-26 15:53:24.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:24 compute-0 ceph-mon[75140]: pgmap v1182: 305 pgs: 305 active+clean; 69 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 14 KiB/s wr, 34 op/s
Jan 26 15:53:25 compute-0 nova_compute[239965]: 2026-01-26 15:53:25.464 239969 DEBUG nova.compute.manager [req-f40ba30b-e67e-4f68-be0f-f49f67e7a3b6 req-dbf3dcb5-4bf7-4034-af0d-47fbeeefa260 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received event network-vif-plugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:53:25 compute-0 nova_compute[239965]: 2026-01-26 15:53:25.464 239969 DEBUG oslo_concurrency.lockutils [req-f40ba30b-e67e-4f68-be0f-f49f67e7a3b6 req-dbf3dcb5-4bf7-4034-af0d-47fbeeefa260 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:25 compute-0 nova_compute[239965]: 2026-01-26 15:53:25.465 239969 DEBUG oslo_concurrency.lockutils [req-f40ba30b-e67e-4f68-be0f-f49f67e7a3b6 req-dbf3dcb5-4bf7-4034-af0d-47fbeeefa260 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:25 compute-0 nova_compute[239965]: 2026-01-26 15:53:25.465 239969 DEBUG oslo_concurrency.lockutils [req-f40ba30b-e67e-4f68-be0f-f49f67e7a3b6 req-dbf3dcb5-4bf7-4034-af0d-47fbeeefa260 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d828ca5d-4f7c-4c81-9084-dd789819719d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:25 compute-0 nova_compute[239965]: 2026-01-26 15:53:25.465 239969 DEBUG nova.compute.manager [req-f40ba30b-e67e-4f68-be0f-f49f67e7a3b6 req-dbf3dcb5-4bf7-4034-af0d-47fbeeefa260 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] No waiting events found dispatching network-vif-plugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:53:25 compute-0 nova_compute[239965]: 2026-01-26 15:53:25.465 239969 WARNING nova.compute.manager [req-f40ba30b-e67e-4f68-be0f-f49f67e7a3b6 req-dbf3dcb5-4bf7-4034-af0d-47fbeeefa260 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received unexpected event network-vif-plugged-afe7812f-4f9b-460d-9b60-f6f0ad83893c for instance with vm_state deleted and task_state None.
Jan 26 15:53:25 compute-0 nova_compute[239965]: 2026-01-26 15:53:25.466 239969 DEBUG nova.compute.manager [req-f40ba30b-e67e-4f68-be0f-f49f67e7a3b6 req-dbf3dcb5-4bf7-4034-af0d-47fbeeefa260 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Received event network-vif-deleted-afe7812f-4f9b-460d-9b60-f6f0ad83893c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:53:25 compute-0 nova_compute[239965]: 2026-01-26 15:53:25.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:25 compute-0 nova_compute[239965]: 2026-01-26 15:53:25.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 28 op/s
Jan 26 15:53:26 compute-0 ceph-mon[75140]: pgmap v1183: 305 pgs: 305 active+clean; 41 MiB data, 366 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.3 KiB/s wr, 28 op/s
Jan 26 15:53:26 compute-0 nova_compute[239965]: 2026-01-26 15:53:26.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:26 compute-0 nova_compute[239965]: 2026-01-26 15:53:26.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:53:28
Jan 26 15:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'vms', 'default.rgw.meta', 'backups', '.mgr', 'images', 'cephfs.cephfs.data']
Jan 26 15:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:53:28 compute-0 ceph-mon[75140]: pgmap v1184: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.975 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.976 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.997 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.998 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.998 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.998 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:53:28 compute-0 nova_compute[239965]: 2026-01-26 15:53:28.999 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:53:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e204 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:53:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784992426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:53:29 compute-0 nova_compute[239965]: 2026-01-26 15:53:29.623 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:53:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 15:53:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/784992426' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:53:29 compute-0 nova_compute[239965]: 2026-01-26 15:53:29.795 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:53:29 compute-0 nova_compute[239965]: 2026-01-26 15:53:29.796 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4304MB free_disk=59.98811295069754GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:53:29 compute-0 nova_compute[239965]: 2026-01-26 15:53:29.797 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:29 compute-0 nova_compute[239965]: 2026-01-26 15:53:29.797 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:29 compute-0 nova_compute[239965]: 2026-01-26 15:53:29.865 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:53:29 compute-0 nova_compute[239965]: 2026-01-26 15:53:29.865 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:53:29 compute-0 nova_compute[239965]: 2026-01-26 15:53:29.883 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:53:29 compute-0 nova_compute[239965]: 2026-01-26 15:53:29.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:53:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:53:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/50395908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:53:30 compute-0 nova_compute[239965]: 2026-01-26 15:53:30.465 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:53:30 compute-0 nova_compute[239965]: 2026-01-26 15:53:30.471 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:53:30 compute-0 nova_compute[239965]: 2026-01-26 15:53:30.492 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:53:30 compute-0 nova_compute[239965]: 2026-01-26 15:53:30.516 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:53:30 compute-0 nova_compute[239965]: 2026-01-26 15:53:30.516 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:53:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:53:30 compute-0 ceph-mon[75140]: pgmap v1185: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 15:53:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/50395908' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:53:31 compute-0 nova_compute[239965]: 2026-01-26 15:53:31.050 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:31 compute-0 nova_compute[239965]: 2026-01-26 15:53:31.316 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:31 compute-0 nova_compute[239965]: 2026-01-26 15:53:31.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:31 compute-0 nova_compute[239965]: 2026-01-26 15:53:31.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:53:31 compute-0 nova_compute[239965]: 2026-01-26 15:53:31.568 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 15:53:31 compute-0 nova_compute[239965]: 2026-01-26 15:53:31.815 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:32 compute-0 nova_compute[239965]: 2026-01-26 15:53:32.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:32 compute-0 ceph-mon[75140]: pgmap v1186: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 15:53:33 compute-0 nova_compute[239965]: 2026-01-26 15:53:33.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 15:53:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Jan 26 15:53:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Jan 26 15:53:33 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Jan 26 15:53:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:34 compute-0 ceph-mon[75140]: pgmap v1187: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Jan 26 15:53:34 compute-0 ceph-mon[75140]: osdmap e205: 3 total, 3 up, 3 in
Jan 26 15:53:34 compute-0 nova_compute[239965]: 2026-01-26 15:53:34.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:35 compute-0 nova_compute[239965]: 2026-01-26 15:53:35.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:53:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 5 op/s
Jan 26 15:53:36 compute-0 nova_compute[239965]: 2026-01-26 15:53:36.732 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442801.731093, d828ca5d-4f7c-4c81-9084-dd789819719d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:53:36 compute-0 nova_compute[239965]: 2026-01-26 15:53:36.733 239969 INFO nova.compute.manager [-] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] VM Stopped (Lifecycle Event)
Jan 26 15:53:36 compute-0 nova_compute[239965]: 2026-01-26 15:53:36.753 239969 DEBUG nova.compute.manager [None req-c1f88ba4-9760-495a-bafa-b0c3b012b890 - - - - - -] [instance: d828ca5d-4f7c-4c81-9084-dd789819719d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:53:36 compute-0 nova_compute[239965]: 2026-01-26 15:53:36.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:36 compute-0 ceph-mon[75140]: pgmap v1189: 305 pgs: 305 active+clean; 41 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 511 B/s wr, 5 op/s
Jan 26 15:53:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 26 15:53:38 compute-0 ceph-mon[75140]: pgmap v1190: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 26 15:53:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Jan 26 15:53:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Jan 26 15:53:39 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Jan 26 15:53:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 26 15:53:39 compute-0 nova_compute[239965]: 2026-01-26 15:53:39.975 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:40 compute-0 ceph-mon[75140]: osdmap e206: 3 total, 3 up, 3 in
Jan 26 15:53:40 compute-0 ceph-mon[75140]: pgmap v1192: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 26 15:53:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 26 15:53:41 compute-0 nova_compute[239965]: 2026-01-26 15:53:41.822 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:42 compute-0 ceph-mon[75140]: pgmap v1193: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 26 15:53:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 26 15:53:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:44 compute-0 ceph-mon[75140]: pgmap v1194: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 26 15:53:44 compute-0 nova_compute[239965]: 2026-01-26 15:53:44.977 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 26 15:53:46 compute-0 ceph-mon[75140]: pgmap v1195: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 26 15:53:46 compute-0 nova_compute[239965]: 2026-01-26 15:53:46.825 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:53:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2236001748' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:53:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:53:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2236001748' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 2.8058778822806272e-06 of space, bias 1.0, pg target 0.0008417633646841882 quantized to 32 (current 32)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006678213386662329 of space, bias 1.0, pg target 0.20034640159986988 quantized to 32 (current 32)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.918719831773326e-07 of space, bias 4.0, pg target 0.0009502463798127991 quantized to 16 (current 16)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:53:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:53:48 compute-0 ceph-mon[75140]: pgmap v1196: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2236001748' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:53:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2236001748' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:53:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.408058) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442829408252, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1074, "num_deletes": 260, "total_data_size": 1428884, "memory_usage": 1462704, "flush_reason": "Manual Compaction"}
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442829419787, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1401701, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23550, "largest_seqno": 24623, "table_properties": {"data_size": 1396435, "index_size": 2726, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11408, "raw_average_key_size": 19, "raw_value_size": 1385663, "raw_average_value_size": 2368, "num_data_blocks": 122, "num_entries": 585, "num_filter_entries": 585, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769442753, "oldest_key_time": 1769442753, "file_creation_time": 1769442829, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 11643 microseconds, and 4658 cpu microseconds.
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.419842) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1401701 bytes OK
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.419864) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.422045) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.422067) EVENT_LOG_v1 {"time_micros": 1769442829422061, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.422086) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1423758, prev total WAL file size 1423758, number of live WAL files 2.
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.422920) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353030' seq:72057594037927935, type:22 .. '6C6F676D00373531' seq:0, type:0; will stop at (end)
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1368KB)], [53(9196KB)]
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442829423031, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10819023, "oldest_snapshot_seqno": -1}
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5101 keys, 10720860 bytes, temperature: kUnknown
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442829493567, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 10720860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10681469, "index_size": 25521, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12805, "raw_key_size": 126319, "raw_average_key_size": 24, "raw_value_size": 10584391, "raw_average_value_size": 2074, "num_data_blocks": 1063, "num_entries": 5101, "num_filter_entries": 5101, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769442829, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.493831) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 10720860 bytes
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.495544) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.2 rd, 151.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.0 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(15.4) write-amplify(7.6) OK, records in: 5636, records dropped: 535 output_compression: NoCompression
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.495575) EVENT_LOG_v1 {"time_micros": 1769442829495559, "job": 28, "event": "compaction_finished", "compaction_time_micros": 70619, "compaction_time_cpu_micros": 24192, "output_level": 6, "num_output_files": 1, "total_output_size": 10720860, "num_input_records": 5636, "num_output_records": 5101, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442829496321, "job": 28, "event": "table_file_deletion", "file_number": 55}
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442829499896, "job": 28, "event": "table_file_deletion", "file_number": 53}
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.422768) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.499994) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.499998) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.499999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.500001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:53:49 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:53:49.500002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:53:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:49 compute-0 nova_compute[239965]: 2026-01-26 15:53:49.979 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:50 compute-0 ceph-mon[75140]: pgmap v1197: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:51 compute-0 nova_compute[239965]: 2026-01-26 15:53:51.829 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:52 compute-0 ceph-mon[75140]: pgmap v1198: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:54 compute-0 ceph-mon[75140]: pgmap v1199: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:54 compute-0 podman[268962]: 2026-01-26 15:53:54.863723702 +0000 UTC m=+0.052843484 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 15:53:54 compute-0 podman[268963]: 2026-01-26 15:53:54.930118034 +0000 UTC m=+0.113100537 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:53:54 compute-0 nova_compute[239965]: 2026-01-26 15:53:54.980 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:56 compute-0 ceph-mon[75140]: pgmap v1200: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:56 compute-0 nova_compute[239965]: 2026-01-26 15:53:56.833 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:53:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:58 compute-0 nova_compute[239965]: 2026-01-26 15:53:58.367 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "2cf7c37a-5399-4471-8385-7f483007196b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:58 compute-0 nova_compute[239965]: 2026-01-26 15:53:58.367 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:58 compute-0 nova_compute[239965]: 2026-01-26 15:53:58.386 239969 DEBUG nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:53:58 compute-0 nova_compute[239965]: 2026-01-26 15:53:58.459 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:58 compute-0 nova_compute[239965]: 2026-01-26 15:53:58.459 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:58 compute-0 nova_compute[239965]: 2026-01-26 15:53:58.469 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:53:58 compute-0 nova_compute[239965]: 2026-01-26 15:53:58.469 239969 INFO nova.compute.claims [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:53:58 compute-0 nova_compute[239965]: 2026-01-26 15:53:58.579 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:53:58 compute-0 ceph-mon[75140]: pgmap v1201: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:53:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2552856944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.207 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.213 239969 DEBUG nova.compute.provider_tree [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:53:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:59.214 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:59.214 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:53:59.215 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.232 239969 DEBUG nova.scheduler.client.report [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.257 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.258 239969 DEBUG nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.317 239969 DEBUG nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.318 239969 DEBUG nova.network.neutron [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.336 239969 INFO nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.356 239969 DEBUG nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:53:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.443 239969 DEBUG nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.445 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.445 239969 INFO nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Creating image(s)
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.470 239969 DEBUG nova.storage.rbd_utils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 2cf7c37a-5399-4471-8385-7f483007196b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.501 239969 DEBUG nova.storage.rbd_utils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 2cf7c37a-5399-4471-8385-7f483007196b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.531 239969 DEBUG nova.storage.rbd_utils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 2cf7c37a-5399-4471-8385-7f483007196b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.538 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.572 239969 DEBUG nova.policy [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.612 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.613 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.614 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.614 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.641 239969 DEBUG nova.storage.rbd_utils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 2cf7c37a-5399-4471-8385-7f483007196b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.645 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2cf7c37a-5399-4471-8385-7f483007196b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:53:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:53:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2552856944' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.907 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2cf7c37a-5399-4471-8385-7f483007196b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:53:59 compute-0 nova_compute[239965]: 2026-01-26 15:53:59.975 239969 DEBUG nova.storage.rbd_utils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] resizing rbd image 2cf7c37a-5399-4471-8385-7f483007196b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:54:00 compute-0 nova_compute[239965]: 2026-01-26 15:54:00.011 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:00 compute-0 nova_compute[239965]: 2026-01-26 15:54:00.063 239969 DEBUG nova.objects.instance [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'migration_context' on Instance uuid 2cf7c37a-5399-4471-8385-7f483007196b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:00 compute-0 nova_compute[239965]: 2026-01-26 15:54:00.089 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:54:00 compute-0 nova_compute[239965]: 2026-01-26 15:54:00.090 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Ensure instance console log exists: /var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:54:00 compute-0 nova_compute[239965]: 2026-01-26 15:54:00.090 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:00 compute-0 nova_compute[239965]: 2026-01-26 15:54:00.091 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:00 compute-0 nova_compute[239965]: 2026-01-26 15:54:00.091 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:54:00 compute-0 nova_compute[239965]: 2026-01-26 15:54:00.679 239969 DEBUG nova.network.neutron [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Successfully created port: cff19326-5cc9-485a-9f3e-eafabc790a6e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:54:00 compute-0 ceph-mon[75140]: pgmap v1202: 305 pgs: 305 active+clean; 41 MiB data, 339 MiB used, 60 GiB / 60 GiB avail
Jan 26 15:54:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 53 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 395 KiB/s wr, 12 op/s
Jan 26 15:54:01 compute-0 nova_compute[239965]: 2026-01-26 15:54:01.836 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:02 compute-0 nova_compute[239965]: 2026-01-26 15:54:02.034 239969 DEBUG nova.network.neutron [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Successfully updated port: cff19326-5cc9-485a-9f3e-eafabc790a6e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:54:02 compute-0 nova_compute[239965]: 2026-01-26 15:54:02.055 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:02 compute-0 nova_compute[239965]: 2026-01-26 15:54:02.056 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:02 compute-0 nova_compute[239965]: 2026-01-26 15:54:02.056 239969 DEBUG nova.network.neutron [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:54:02 compute-0 nova_compute[239965]: 2026-01-26 15:54:02.242 239969 DEBUG nova.network.neutron [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:54:02 compute-0 nova_compute[239965]: 2026-01-26 15:54:02.823 239969 DEBUG nova.compute.manager [req-38507d23-d793-476e-9ba2-8d7a7f2b1feb req-cafaaabb-dffd-46bb-89d3-7e0443d95b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received event network-changed-cff19326-5cc9-485a-9f3e-eafabc790a6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:02 compute-0 nova_compute[239965]: 2026-01-26 15:54:02.824 239969 DEBUG nova.compute.manager [req-38507d23-d793-476e-9ba2-8d7a7f2b1feb req-cafaaabb-dffd-46bb-89d3-7e0443d95b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Refreshing instance network info cache due to event network-changed-cff19326-5cc9-485a-9f3e-eafabc790a6e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:54:02 compute-0 nova_compute[239965]: 2026-01-26 15:54:02.824 239969 DEBUG oslo_concurrency.lockutils [req-38507d23-d793-476e-9ba2-8d7a7f2b1feb req-cafaaabb-dffd-46bb-89d3-7e0443d95b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:02 compute-0 ceph-mon[75140]: pgmap v1203: 305 pgs: 305 active+clean; 53 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 395 KiB/s wr, 12 op/s
Jan 26 15:54:02 compute-0 sudo[269196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:54:02 compute-0 sudo[269196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:02 compute-0 sudo[269196]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:02 compute-0 sudo[269221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 15:54:02 compute-0 sudo[269221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.158 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquiring lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.161 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.178 239969 DEBUG nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.237 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.237 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.245 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.245 239969 INFO nova.compute.claims [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.367 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:03 compute-0 podman[269289]: 2026-01-26 15:54:03.38616533 +0000 UTC m=+0.064917488 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:54:03 compute-0 podman[269289]: 2026-01-26 15:54:03.487392168 +0000 UTC m=+0.166144296 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.686 239969 DEBUG nova.network.neutron [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updating instance_info_cache with network_info: [{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.710 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.711 239969 DEBUG nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Instance network_info: |[{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.712 239969 DEBUG oslo_concurrency.lockutils [req-38507d23-d793-476e-9ba2-8d7a7f2b1feb req-cafaaabb-dffd-46bb-89d3-7e0443d95b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.713 239969 DEBUG nova.network.neutron [req-38507d23-d793-476e-9ba2-8d7a7f2b1feb req-cafaaabb-dffd-46bb-89d3-7e0443d95b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Refreshing network info cache for port cff19326-5cc9-485a-9f3e-eafabc790a6e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.716 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Start _get_guest_xml network_info=[{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:54:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 88 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.725 239969 WARNING nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.730 239969 DEBUG nova.virt.libvirt.host [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.731 239969 DEBUG nova.virt.libvirt.host [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.740 239969 DEBUG nova.virt.libvirt.host [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.741 239969 DEBUG nova.virt.libvirt.host [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.742 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.742 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.743 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.743 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.743 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.744 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.744 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.744 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.744 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.745 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.745 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.745 239969 DEBUG nova.virt.hardware [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.749 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:54:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467303223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.934 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.940 239969 DEBUG nova.compute.provider_tree [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.955 239969 DEBUG nova.scheduler.client.report [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.980 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:03 compute-0 nova_compute[239965]: 2026-01-26 15:54:03.981 239969 DEBUG nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.024 239969 DEBUG nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.027 239969 DEBUG nova.network.neutron [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.050 239969 INFO nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.069 239969 DEBUG nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.181 239969 DEBUG nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.183 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.184 239969 INFO nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Creating image(s)
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.207 239969 DEBUG nova.storage.rbd_utils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] rbd image 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.230 239969 DEBUG nova.storage.rbd_utils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] rbd image 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.256 239969 DEBUG nova.storage.rbd_utils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] rbd image 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.260 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:04 compute-0 sudo[269221]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.333 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.334 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.335 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.335 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:54:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858652576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:54:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.461 239969 DEBUG nova.storage.rbd_utils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] rbd image 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.466 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.494 239969 DEBUG nova.policy [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a6b0cb0d5d2404782b95b68fbfb7e2e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '929e81b52f9f49d988ff490f77d499aa', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.498 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.749s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:04 compute-0 sudo[269592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:54:04 compute-0 sudo[269592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:04 compute-0 sudo[269592]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.528 239969 DEBUG nova.storage.rbd_utils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 2cf7c37a-5399-4471-8385-7f483007196b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.536 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:04 compute-0 sudo[269641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:54:04 compute-0 sudo[269641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.729 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.262s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.794 239969 DEBUG nova.storage.rbd_utils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] resizing rbd image 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.875 239969 DEBUG nova.objects.instance [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lazy-loading 'migration_context' on Instance uuid 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:04 compute-0 ceph-mon[75140]: pgmap v1204: 305 pgs: 305 active+clean; 88 MiB data, 360 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:54:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3467303223' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1858652576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:04 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:04 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.889 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.890 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Ensure instance console log exists: /var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.890 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.891 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.892 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:04 compute-0 nova_compute[239965]: 2026-01-26 15:54:04.984 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:05 compute-0 sudo[269641]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:54:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179595722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:54:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:54:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:54:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:54:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:54:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:54:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.194 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.195 239969 DEBUG nova.virt.libvirt.vif [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-191874167',display_name='tempest-AttachInterfacesTestJSON-server-191874167',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-191874167',id=31,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhtLtOel3pVoc9EYAkv14l7u/jTPTSUNpqzLh+IC+V3az/87mCK8InL3MxARo8S+BZI5v3zTF4Oa/Tp2iKSYMaK5pJEulAdCvLOStylM1u70TluwDQyNsqUX7wJdz2LIw==',key_name='tempest-keypair-1871598546',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-con5e9j1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:53:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=2cf7c37a-5399-4471-8385-7f483007196b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.195 239969 DEBUG nova.network.os_vif_util [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.196 239969 DEBUG nova.network.os_vif_util [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:1c:47,bridge_name='br-int',has_traffic_filtering=True,id=cff19326-5cc9-485a-9f3e-eafabc790a6e,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcff19326-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.197 239969 DEBUG nova.objects.instance [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2cf7c37a-5399-4471-8385-7f483007196b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.211 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <uuid>2cf7c37a-5399-4471-8385-7f483007196b</uuid>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <name>instance-0000001f</name>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <nova:name>tempest-AttachInterfacesTestJSON-server-191874167</nova:name>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:54:03</nova:creationTime>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <nova:port uuid="cff19326-5cc9-485a-9f3e-eafabc790a6e">
Jan 26 15:54:05 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <system>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <entry name="serial">2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <entry name="uuid">2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     </system>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <os>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   </os>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <features>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   </features>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2cf7c37a-5399-4471-8385-7f483007196b_disk">
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2cf7c37a-5399-4471-8385-7f483007196b_disk.config">
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:54:05 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:d1:1c:47"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <target dev="tapcff19326-5c"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log" append="off"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <video>
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     </video>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:54:05 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:54:05 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:54:05 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:54:05 compute-0 nova_compute[239965]: </domain>
Jan 26 15:54:05 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.212 239969 DEBUG nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Preparing to wait for external event network-vif-plugged-cff19326-5cc9-485a-9f3e-eafabc790a6e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.212 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "2cf7c37a-5399-4471-8385-7f483007196b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.212 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.213 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.213 239969 DEBUG nova.virt.libvirt.vif [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-191874167',display_name='tempest-AttachInterfacesTestJSON-server-191874167',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-191874167',id=31,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhtLtOel3pVoc9EYAkv14l7u/jTPTSUNpqzLh+IC+V3az/87mCK8InL3MxARo8S+BZI5v3zTF4Oa/Tp2iKSYMaK5pJEulAdCvLOStylM1u70TluwDQyNsqUX7wJdz2LIw==',key_name='tempest-keypair-1871598546',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-con5e9j1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:53:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=2cf7c37a-5399-4471-8385-7f483007196b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.213 239969 DEBUG nova.network.os_vif_util [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.214 239969 DEBUG nova.network.os_vif_util [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:1c:47,bridge_name='br-int',has_traffic_filtering=True,id=cff19326-5cc9-485a-9f3e-eafabc790a6e,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcff19326-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.214 239969 DEBUG os_vif [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:1c:47,bridge_name='br-int',has_traffic_filtering=True,id=cff19326-5cc9-485a-9f3e-eafabc790a6e,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcff19326-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.215 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.215 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.216 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.220 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcff19326-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.220 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcff19326-5c, col_values=(('external_ids', {'iface-id': 'cff19326-5cc9-485a-9f3e-eafabc790a6e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:1c:47', 'vm-uuid': '2cf7c37a-5399-4471-8385-7f483007196b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.222 239969 DEBUG nova.network.neutron [req-38507d23-d793-476e-9ba2-8d7a7f2b1feb req-cafaaabb-dffd-46bb-89d3-7e0443d95b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updated VIF entry in instance network info cache for port cff19326-5cc9-485a-9f3e-eafabc790a6e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.222 239969 DEBUG nova.network.neutron [req-38507d23-d793-476e-9ba2-8d7a7f2b1feb req-cafaaabb-dffd-46bb-89d3-7e0443d95b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updating instance_info_cache with network_info: [{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:05 compute-0 NetworkManager[48954]: <info>  [1769442845.2238] manager: (tapcff19326-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.224 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.226 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.234 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.235 239969 INFO os_vif [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:1c:47,bridge_name='br-int',has_traffic_filtering=True,id=cff19326-5cc9-485a-9f3e-eafabc790a6e,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcff19326-5c')
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.243 239969 DEBUG oslo_concurrency.lockutils [req-38507d23-d793-476e-9ba2-8d7a7f2b1feb req-cafaaabb-dffd-46bb-89d3-7e0443d95b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:05 compute-0 sudo[269803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:54:05 compute-0 sudo[269803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:05 compute-0 sudo[269803]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.285 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.286 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.286 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:d1:1c:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.286 239969 INFO nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Using config drive
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.308 239969 DEBUG nova.storage.rbd_utils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 2cf7c37a-5399-4471-8385-7f483007196b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:05 compute-0 sudo[269830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:54:05 compute-0 sudo[269830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:05 compute-0 podman[269884]: 2026-01-26 15:54:05.636303448 +0000 UTC m=+0.044848931 container create 927390b78abaa910db3b2a3282fe4d23432e81f66aa754f6dee74ba2b6f482f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shaw, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 15:54:05 compute-0 systemd[1]: Started libpod-conmon-927390b78abaa910db3b2a3282fe4d23432e81f66aa754f6dee74ba2b6f482f8.scope.
Jan 26 15:54:05 compute-0 podman[269884]: 2026-01-26 15:54:05.614708003 +0000 UTC m=+0.023253506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:54:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 104 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.4 MiB/s wr, 40 op/s
Jan 26 15:54:05 compute-0 podman[269884]: 2026-01-26 15:54:05.736884061 +0000 UTC m=+0.145429584 container init 927390b78abaa910db3b2a3282fe4d23432e81f66aa754f6dee74ba2b6f482f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shaw, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 15:54:05 compute-0 podman[269884]: 2026-01-26 15:54:05.744627728 +0000 UTC m=+0.153173211 container start 927390b78abaa910db3b2a3282fe4d23432e81f66aa754f6dee74ba2b6f482f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shaw, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 15:54:05 compute-0 podman[269884]: 2026-01-26 15:54:05.748930053 +0000 UTC m=+0.157475546 container attach 927390b78abaa910db3b2a3282fe4d23432e81f66aa754f6dee74ba2b6f482f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shaw, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:54:05 compute-0 pensive_shaw[269900]: 167 167
Jan 26 15:54:05 compute-0 systemd[1]: libpod-927390b78abaa910db3b2a3282fe4d23432e81f66aa754f6dee74ba2b6f482f8.scope: Deactivated successfully.
Jan 26 15:54:05 compute-0 podman[269884]: 2026-01-26 15:54:05.751954956 +0000 UTC m=+0.160500449 container died 927390b78abaa910db3b2a3282fe4d23432e81f66aa754f6dee74ba2b6f482f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shaw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 15:54:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-a136bcc723efc6f3e16627a608479ddea04068d1580d4eb86dec9ac5fdb5dca5-merged.mount: Deactivated successfully.
Jan 26 15:54:05 compute-0 podman[269884]: 2026-01-26 15:54:05.816133855 +0000 UTC m=+0.224679338 container remove 927390b78abaa910db3b2a3282fe4d23432e81f66aa754f6dee74ba2b6f482f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:54:05 compute-0 systemd[1]: libpod-conmon-927390b78abaa910db3b2a3282fe4d23432e81f66aa754f6dee74ba2b6f482f8.scope: Deactivated successfully.
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.838 239969 INFO nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Creating config drive at /var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/disk.config
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.844 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnubl43_r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.875 239969 DEBUG nova.network.neutron [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Successfully created port: 0c45aaff-4f58-4867-9375-3e8614fe3f80 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:54:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4179595722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:54:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:54:05 compute-0 nova_compute[239965]: 2026-01-26 15:54:05.982 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnubl43_r" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:05 compute-0 podman[269927]: 2026-01-26 15:54:05.991314049 +0000 UTC m=+0.047584367 container create 105f0d7d2ba7252f557d3bd7b1b9751ff9876c6fe8c1e1130dcead97064a91aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_montalcini, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.012 239969 DEBUG nova.storage.rbd_utils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 2cf7c37a-5399-4471-8385-7f483007196b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.015 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/disk.config 2cf7c37a-5399-4471-8385-7f483007196b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:06 compute-0 systemd[1]: Started libpod-conmon-105f0d7d2ba7252f557d3bd7b1b9751ff9876c6fe8c1e1130dcead97064a91aa.scope.
Jan 26 15:54:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f69983bd7522466855289168946b8d26d08eec9671f4966e16429af713b2690/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f69983bd7522466855289168946b8d26d08eec9671f4966e16429af713b2690/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f69983bd7522466855289168946b8d26d08eec9671f4966e16429af713b2690/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f69983bd7522466855289168946b8d26d08eec9671f4966e16429af713b2690/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f69983bd7522466855289168946b8d26d08eec9671f4966e16429af713b2690/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:06 compute-0 podman[269927]: 2026-01-26 15:54:05.970285018 +0000 UTC m=+0.026555366 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:54:06 compute-0 podman[269927]: 2026-01-26 15:54:06.078866376 +0000 UTC m=+0.135136704 container init 105f0d7d2ba7252f557d3bd7b1b9751ff9876c6fe8c1e1130dcead97064a91aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_montalcini, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 15:54:06 compute-0 podman[269927]: 2026-01-26 15:54:06.089812492 +0000 UTC m=+0.146082810 container start 105f0d7d2ba7252f557d3bd7b1b9751ff9876c6fe8c1e1130dcead97064a91aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_montalcini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:54:06 compute-0 podman[269927]: 2026-01-26 15:54:06.113740013 +0000 UTC m=+0.170010351 container attach 105f0d7d2ba7252f557d3bd7b1b9751ff9876c6fe8c1e1130dcead97064a91aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.185 239969 DEBUG oslo_concurrency.processutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/disk.config 2cf7c37a-5399-4471-8385-7f483007196b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.186 239969 INFO nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Deleting local config drive /var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/disk.config because it was imported into RBD.
Jan 26 15:54:06 compute-0 kernel: tapcff19326-5c: entered promiscuous mode
Jan 26 15:54:06 compute-0 NetworkManager[48954]: <info>  [1769442846.2551] manager: (tapcff19326-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Jan 26 15:54:06 compute-0 ovn_controller[146046]: 2026-01-26T15:54:06Z|00165|binding|INFO|Claiming lport cff19326-5cc9-485a-9f3e-eafabc790a6e for this chassis.
Jan 26 15:54:06 compute-0 ovn_controller[146046]: 2026-01-26T15:54:06Z|00166|binding|INFO|cff19326-5cc9-485a-9f3e-eafabc790a6e: Claiming fa:16:3e:d1:1c:47 10.100.0.12
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.255 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.259 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.271 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:1c:47 10.100.0.12'], port_security=['fa:16:3e:d1:1c:47 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2cf7c37a-5399-4471-8385-7f483007196b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be2f1de3-40ec-4511-99bb-ae602e3ecb84', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=cff19326-5cc9-485a-9f3e-eafabc790a6e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.272 156105 INFO neutron.agent.ovn.metadata.agent [-] Port cff19326-5cc9-485a-9f3e-eafabc790a6e in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.273 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.290 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d8737bcb-dd77-473d-9b8a-d1d7434e9efa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.292 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap91bcac0a-11 in ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.295 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap91bcac0a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.295 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a95c8a-480e-42c6-b0f3-099d18989cdc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.297 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba054579-87b1-4b54-a06d-fc2ea44f3a36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 systemd-udevd[270001]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.311 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d16b5ffd-e6fc-40a4-9ac5-f95b056f1072]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 systemd-machined[208061]: New machine qemu-35-instance-0000001f.
Jan 26 15:54:06 compute-0 NetworkManager[48954]: <info>  [1769442846.3229] device (tapcff19326-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:54:06 compute-0 NetworkManager[48954]: <info>  [1769442846.3238] device (tapcff19326-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:54:06 compute-0 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.337 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.339 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[184fea5c-d39d-4fdb-8f24-c282a9fac181]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_controller[146046]: 2026-01-26T15:54:06Z|00167|binding|INFO|Setting lport cff19326-5cc9-485a-9f3e-eafabc790a6e ovn-installed in OVS
Jan 26 15:54:06 compute-0 ovn_controller[146046]: 2026-01-26T15:54:06Z|00168|binding|INFO|Setting lport cff19326-5cc9-485a-9f3e-eafabc790a6e up in Southbound
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.346 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.370 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2b706cf7-f74c-44ff-95ff-a1dff1748480]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 NetworkManager[48954]: <info>  [1769442846.3763] manager: (tap91bcac0a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/88)
Jan 26 15:54:06 compute-0 systemd-udevd[270007]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.376 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7da798c0-a3f2-4dec-b271-7eee7be534fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.408 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9fde00f0-09ba-46c8-83ab-d618359ad1cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.411 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f748beab-d054-4469-add1-494f710eb7da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 NetworkManager[48954]: <info>  [1769442846.4317] device (tap91bcac0a-10): carrier: link connected
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.436 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6fc3c8ad-cb5d-4355-88f3-3eb164f2b2cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.452 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ec74b4a8-e1d3-430d-8ffb-609ff99d72ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433723, 'reachable_time': 43512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270040, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.467 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0822d4e9-3863-4bf1-9a97-461b967c3425]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:d619'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433723, 'tstamp': 433723}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270041, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.486 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fe551fc5-6d93-4967-94e5-ca5157237fbd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433723, 'reachable_time': 43512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270044, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.518 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[00089ed5-d2e7-433d-98f5-b5cf038e4856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 charming_montalcini[269964]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:54:06 compute-0 charming_montalcini[269964]: --> All data devices are unavailable
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.586 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb68271f-2841-4a81-a378-0b7f5df19ad9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.587 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.587 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.588 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.589 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:06 compute-0 NetworkManager[48954]: <info>  [1769442846.5902] manager: (tap91bcac0a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/89)
Jan 26 15:54:06 compute-0 kernel: tap91bcac0a-10: entered promiscuous mode
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.594 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.596 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:06 compute-0 ovn_controller[146046]: 2026-01-26T15:54:06Z|00169|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.598 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/91bcac0a-1926-4861-88ab-ae3c06f7e57e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/91bcac0a-1926-4861-88ab-ae3c06f7e57e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.598 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a5341e85-a916-4548-b93c-e3ea3071dd5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.599 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/91bcac0a-1926-4861-88ab-ae3c06f7e57e.pid.haproxy
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:54:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:06.600 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'env', 'PROCESS_TAG=haproxy-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/91bcac0a-1926-4861-88ab-ae3c06f7e57e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:54:06 compute-0 nova_compute[239965]: 2026-01-26 15:54:06.613 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:06 compute-0 systemd[1]: libpod-105f0d7d2ba7252f557d3bd7b1b9751ff9876c6fe8c1e1130dcead97064a91aa.scope: Deactivated successfully.
Jan 26 15:54:06 compute-0 podman[269927]: 2026-01-26 15:54:06.618362588 +0000 UTC m=+0.674632936 container died 105f0d7d2ba7252f557d3bd7b1b9751ff9876c6fe8c1e1130dcead97064a91aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_montalcini, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:54:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f69983bd7522466855289168946b8d26d08eec9671f4966e16429af713b2690-merged.mount: Deactivated successfully.
Jan 26 15:54:06 compute-0 podman[269927]: 2026-01-26 15:54:06.673048546 +0000 UTC m=+0.729318874 container remove 105f0d7d2ba7252f557d3bd7b1b9751ff9876c6fe8c1e1130dcead97064a91aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_montalcini, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:54:06 compute-0 systemd[1]: libpod-conmon-105f0d7d2ba7252f557d3bd7b1b9751ff9876c6fe8c1e1130dcead97064a91aa.scope: Deactivated successfully.
Jan 26 15:54:06 compute-0 sudo[269830]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:06 compute-0 sudo[270070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:54:06 compute-0 sudo[270070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:06 compute-0 sudo[270070]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:06 compute-0 sudo[270095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:54:06 compute-0 sudo[270095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:06 compute-0 ceph-mon[75140]: pgmap v1205: 305 pgs: 305 active+clean; 104 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.4 MiB/s wr, 40 op/s
Jan 26 15:54:06 compute-0 podman[270155]: 2026-01-26 15:54:06.985086274 +0000 UTC m=+0.053484130 container create 861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 15:54:07 compute-0 systemd[1]: Started libpod-conmon-861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660.scope.
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.047 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442847.0471756, 2cf7c37a-5399-4471-8385-7f483007196b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.048 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] VM Started (Lifecycle Event)
Jan 26 15:54:07 compute-0 podman[270155]: 2026-01-26 15:54:06.954322277 +0000 UTC m=+0.022720163 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:54:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.067 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b118c32a89a349d3c4a74cdc57f8685e244299f29bd6b0c228913706ec5c8dcf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.074 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442847.0483868, 2cf7c37a-5399-4471-8385-7f483007196b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.075 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] VM Paused (Lifecycle Event)
Jan 26 15:54:07 compute-0 podman[270155]: 2026-01-26 15:54:07.084014157 +0000 UTC m=+0.152412013 container init 861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:54:07 compute-0 podman[270155]: 2026-01-26 15:54:07.090091114 +0000 UTC m=+0.158488970 container start 861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.104 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.109 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:54:07 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[270203]: [NOTICE]   (270221) : New worker (270223) forked
Jan 26 15:54:07 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[270203]: [NOTICE]   (270221) : Loading success.
Jan 26 15:54:07 compute-0 podman[270210]: 2026-01-26 15:54:07.130171788 +0000 UTC m=+0.050053737 container create 9fc3fc5e3742501f0d4bcb63a271901a45d53554245803fa00a585487695f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cohen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.133 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:54:07 compute-0 systemd[1]: Started libpod-conmon-9fc3fc5e3742501f0d4bcb63a271901a45d53554245803fa00a585487695f87e.scope.
Jan 26 15:54:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:07 compute-0 podman[270210]: 2026-01-26 15:54:07.104950946 +0000 UTC m=+0.024832925 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:54:07 compute-0 podman[270210]: 2026-01-26 15:54:07.205379934 +0000 UTC m=+0.125261903 container init 9fc3fc5e3742501f0d4bcb63a271901a45d53554245803fa00a585487695f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:54:07 compute-0 podman[270210]: 2026-01-26 15:54:07.21259149 +0000 UTC m=+0.132473439 container start 9fc3fc5e3742501f0d4bcb63a271901a45d53554245803fa00a585487695f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:54:07 compute-0 podman[270210]: 2026-01-26 15:54:07.216605447 +0000 UTC m=+0.136487396 container attach 9fc3fc5e3742501f0d4bcb63a271901a45d53554245803fa00a585487695f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cohen, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:54:07 compute-0 nifty_cohen[270238]: 167 167
Jan 26 15:54:07 compute-0 systemd[1]: libpod-9fc3fc5e3742501f0d4bcb63a271901a45d53554245803fa00a585487695f87e.scope: Deactivated successfully.
Jan 26 15:54:07 compute-0 podman[270210]: 2026-01-26 15:54:07.218067803 +0000 UTC m=+0.137949782 container died 9fc3fc5e3742501f0d4bcb63a271901a45d53554245803fa00a585487695f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cohen, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ee2396504edf37229989ff17b39aba79fabbc00d860744684e3ff308a55ef52-merged.mount: Deactivated successfully.
Jan 26 15:54:07 compute-0 podman[270210]: 2026-01-26 15:54:07.254749614 +0000 UTC m=+0.174631563 container remove 9fc3fc5e3742501f0d4bcb63a271901a45d53554245803fa00a585487695f87e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cohen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 15:54:07 compute-0 systemd[1]: libpod-conmon-9fc3fc5e3742501f0d4bcb63a271901a45d53554245803fa00a585487695f87e.scope: Deactivated successfully.
Jan 26 15:54:07 compute-0 podman[270262]: 2026-01-26 15:54:07.488221053 +0000 UTC m=+0.104030587 container create 70d5f0d18c79a33bd8afc8f51e961aa83e4ea2318d715df4b6f732a01a25d8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shirley, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:54:07 compute-0 podman[270262]: 2026-01-26 15:54:07.408125398 +0000 UTC m=+0.023934952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:54:07 compute-0 systemd[1]: Started libpod-conmon-70d5f0d18c79a33bd8afc8f51e961aa83e4ea2318d715df4b6f732a01a25d8c9.scope.
Jan 26 15:54:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4e7f3c7fab07c61328bc1a9a2d28c8c9207b0edd7ebcc94d64c8ac8b02ce993/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4e7f3c7fab07c61328bc1a9a2d28c8c9207b0edd7ebcc94d64c8ac8b02ce993/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4e7f3c7fab07c61328bc1a9a2d28c8c9207b0edd7ebcc94d64c8ac8b02ce993/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4e7f3c7fab07c61328bc1a9a2d28c8c9207b0edd7ebcc94d64c8ac8b02ce993/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:07 compute-0 podman[270262]: 2026-01-26 15:54:07.564087637 +0000 UTC m=+0.179897191 container init 70d5f0d18c79a33bd8afc8f51e961aa83e4ea2318d715df4b6f732a01a25d8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shirley, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:54:07 compute-0 podman[270262]: 2026-01-26 15:54:07.571890955 +0000 UTC m=+0.187700489 container start 70d5f0d18c79a33bd8afc8f51e961aa83e4ea2318d715df4b6f732a01a25d8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:54:07 compute-0 podman[270262]: 2026-01-26 15:54:07.575561225 +0000 UTC m=+0.191370769 container attach 70d5f0d18c79a33bd8afc8f51e961aa83e4ea2318d715df4b6f732a01a25d8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 15:54:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Jan 26 15:54:07 compute-0 adoring_shirley[270278]: {
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:     "0": [
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:         {
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "devices": [
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "/dev/loop3"
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             ],
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_name": "ceph_lv0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_size": "21470642176",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "name": "ceph_lv0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "tags": {
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.cluster_name": "ceph",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.crush_device_class": "",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.encrypted": "0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.objectstore": "bluestore",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.osd_id": "0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.type": "block",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.vdo": "0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.with_tpm": "0"
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             },
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "type": "block",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "vg_name": "ceph_vg0"
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:         }
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:     ],
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:     "1": [
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:         {
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "devices": [
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "/dev/loop4"
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             ],
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_name": "ceph_lv1",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_size": "21470642176",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "name": "ceph_lv1",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "tags": {
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.cluster_name": "ceph",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.crush_device_class": "",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.encrypted": "0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.objectstore": "bluestore",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.osd_id": "1",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.type": "block",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.vdo": "0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.with_tpm": "0"
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             },
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "type": "block",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "vg_name": "ceph_vg1"
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:         }
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:     ],
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:     "2": [
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:         {
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "devices": [
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "/dev/loop5"
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             ],
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_name": "ceph_lv2",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_size": "21470642176",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "name": "ceph_lv2",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "tags": {
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.cluster_name": "ceph",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.crush_device_class": "",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.encrypted": "0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.objectstore": "bluestore",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.osd_id": "2",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.type": "block",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.vdo": "0",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:                 "ceph.with_tpm": "0"
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             },
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "type": "block",
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:             "vg_name": "ceph_vg2"
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:         }
Jan 26 15:54:07 compute-0 adoring_shirley[270278]:     ]
Jan 26 15:54:07 compute-0 adoring_shirley[270278]: }
Jan 26 15:54:07 compute-0 systemd[1]: libpod-70d5f0d18c79a33bd8afc8f51e961aa83e4ea2318d715df4b6f732a01a25d8c9.scope: Deactivated successfully.
Jan 26 15:54:07 compute-0 podman[270287]: 2026-01-26 15:54:07.924841617 +0000 UTC m=+0.024427394 container died 70d5f0d18c79a33bd8afc8f51e961aa83e4ea2318d715df4b6f732a01a25d8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shirley, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.926 239969 DEBUG nova.network.neutron [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Successfully updated port: 0c45aaff-4f58-4867-9375-3e8614fe3f80 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.950 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquiring lock "refresh_cache-2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.950 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquired lock "refresh_cache-2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:07 compute-0 nova_compute[239965]: 2026-01-26 15:54:07.950 239969 DEBUG nova.network.neutron [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4e7f3c7fab07c61328bc1a9a2d28c8c9207b0edd7ebcc94d64c8ac8b02ce993-merged.mount: Deactivated successfully.
Jan 26 15:54:07 compute-0 podman[270287]: 2026-01-26 15:54:07.987195432 +0000 UTC m=+0.086781199 container remove 70d5f0d18c79a33bd8afc8f51e961aa83e4ea2318d715df4b6f732a01a25d8c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 15:54:07 compute-0 systemd[1]: libpod-conmon-70d5f0d18c79a33bd8afc8f51e961aa83e4ea2318d715df4b6f732a01a25d8c9.scope: Deactivated successfully.
Jan 26 15:54:08 compute-0 sudo[270095]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:08 compute-0 nova_compute[239965]: 2026-01-26 15:54:08.076 239969 DEBUG nova.compute.manager [req-cb93eed3-71ae-43c0-b23c-61e5a0640b58 req-ee1a5b1a-a4ff-4ba0-a5f3-212b7a48d554 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Received event network-changed-0c45aaff-4f58-4867-9375-3e8614fe3f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:08 compute-0 nova_compute[239965]: 2026-01-26 15:54:08.076 239969 DEBUG nova.compute.manager [req-cb93eed3-71ae-43c0-b23c-61e5a0640b58 req-ee1a5b1a-a4ff-4ba0-a5f3-212b7a48d554 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Refreshing instance network info cache due to event network-changed-0c45aaff-4f58-4867-9375-3e8614fe3f80. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:54:08 compute-0 nova_compute[239965]: 2026-01-26 15:54:08.076 239969 DEBUG oslo_concurrency.lockutils [req-cb93eed3-71ae-43c0-b23c-61e5a0640b58 req-ee1a5b1a-a4ff-4ba0-a5f3-212b7a48d554 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:08 compute-0 sudo[270300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:54:08 compute-0 sudo[270300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:08 compute-0 sudo[270300]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:08 compute-0 nova_compute[239965]: 2026-01-26 15:54:08.160 239969 DEBUG nova.network.neutron [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:54:08 compute-0 sudo[270325]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:54:08 compute-0 sudo[270325]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:08 compute-0 podman[270361]: 2026-01-26 15:54:08.465257202 +0000 UTC m=+0.038435184 container create 9d199f880a9ae72a636073bdc4faa96ad821bd9973f7ebc3400f3c8055ee2512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 15:54:08 compute-0 systemd[1]: Started libpod-conmon-9d199f880a9ae72a636073bdc4faa96ad821bd9973f7ebc3400f3c8055ee2512.scope.
Jan 26 15:54:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:08 compute-0 podman[270361]: 2026-01-26 15:54:08.448512015 +0000 UTC m=+0.021690027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:54:08 compute-0 podman[270361]: 2026-01-26 15:54:08.554669234 +0000 UTC m=+0.127847236 container init 9d199f880a9ae72a636073bdc4faa96ad821bd9973f7ebc3400f3c8055ee2512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:54:08 compute-0 podman[270361]: 2026-01-26 15:54:08.562287769 +0000 UTC m=+0.135465751 container start 9d199f880a9ae72a636073bdc4faa96ad821bd9973f7ebc3400f3c8055ee2512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:54:08 compute-0 podman[270361]: 2026-01-26 15:54:08.566097401 +0000 UTC m=+0.139275393 container attach 9d199f880a9ae72a636073bdc4faa96ad821bd9973f7ebc3400f3c8055ee2512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 15:54:08 compute-0 elastic_bose[270377]: 167 167
Jan 26 15:54:08 compute-0 systemd[1]: libpod-9d199f880a9ae72a636073bdc4faa96ad821bd9973f7ebc3400f3c8055ee2512.scope: Deactivated successfully.
Jan 26 15:54:08 compute-0 podman[270361]: 2026-01-26 15:54:08.568309055 +0000 UTC m=+0.141487037 container died 9d199f880a9ae72a636073bdc4faa96ad821bd9973f7ebc3400f3c8055ee2512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 15:54:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-49203f419c22a935b0f72739c1d51fc38a2abc22055c07e5ed74e3254a79481d-merged.mount: Deactivated successfully.
Jan 26 15:54:08 compute-0 podman[270361]: 2026-01-26 15:54:08.610333596 +0000 UTC m=+0.183511588 container remove 9d199f880a9ae72a636073bdc4faa96ad821bd9973f7ebc3400f3c8055ee2512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bose, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:54:08 compute-0 systemd[1]: libpod-conmon-9d199f880a9ae72a636073bdc4faa96ad821bd9973f7ebc3400f3c8055ee2512.scope: Deactivated successfully.
Jan 26 15:54:08 compute-0 podman[270399]: 2026-01-26 15:54:08.781839621 +0000 UTC m=+0.044489911 container create a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:54:08 compute-0 systemd[1]: Started libpod-conmon-a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e.scope.
Jan 26 15:54:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6165c1fd81b171fbc9e0fd6c7444a6625e3c14ee27af07795d99b4c140fbb9ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6165c1fd81b171fbc9e0fd6c7444a6625e3c14ee27af07795d99b4c140fbb9ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6165c1fd81b171fbc9e0fd6c7444a6625e3c14ee27af07795d99b4c140fbb9ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6165c1fd81b171fbc9e0fd6c7444a6625e3c14ee27af07795d99b4c140fbb9ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:08 compute-0 podman[270399]: 2026-01-26 15:54:08.851021181 +0000 UTC m=+0.113671501 container init a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 15:54:08 compute-0 podman[270399]: 2026-01-26 15:54:08.763228239 +0000 UTC m=+0.025878559 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:54:08 compute-0 podman[270399]: 2026-01-26 15:54:08.858010331 +0000 UTC m=+0.120660621 container start a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bassi, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 15:54:08 compute-0 podman[270399]: 2026-01-26 15:54:08.861638109 +0000 UTC m=+0.124288449 container attach a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 26 15:54:08 compute-0 ceph-mon[75140]: pgmap v1206: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Jan 26 15:54:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:09 compute-0 lvm[270493]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:54:09 compute-0 lvm[270495]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:54:09 compute-0 lvm[270495]: VG ceph_vg1 finished
Jan 26 15:54:09 compute-0 lvm[270493]: VG ceph_vg0 finished
Jan 26 15:54:09 compute-0 lvm[270497]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:54:09 compute-0 lvm[270497]: VG ceph_vg2 finished
Jan 26 15:54:09 compute-0 hungry_bassi[270416]: {}
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.712 239969 DEBUG nova.network.neutron [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Updating instance_info_cache with network_info: [{"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.733 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Releasing lock "refresh_cache-2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.734 239969 DEBUG nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Instance network_info: |[{"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.734 239969 DEBUG oslo_concurrency.lockutils [req-cb93eed3-71ae-43c0-b23c-61e5a0640b58 req-ee1a5b1a-a4ff-4ba0-a5f3-212b7a48d554 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.735 239969 DEBUG nova.network.neutron [req-cb93eed3-71ae-43c0-b23c-61e5a0640b58 req-ee1a5b1a-a4ff-4ba0-a5f3-212b7a48d554 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Refreshing network info cache for port 0c45aaff-4f58-4867-9375-3e8614fe3f80 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.738 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Start _get_guest_xml network_info=[{"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:54:09 compute-0 systemd[1]: libpod-a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e.scope: Deactivated successfully.
Jan 26 15:54:09 compute-0 systemd[1]: libpod-a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e.scope: Consumed 1.465s CPU time.
Jan 26 15:54:09 compute-0 podman[270399]: 2026-01-26 15:54:09.743787943 +0000 UTC m=+1.006438253 container died a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.747 239969 WARNING nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.760 239969 DEBUG nova.virt.libvirt.host [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.762 239969 DEBUG nova.virt.libvirt.host [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.769 239969 DEBUG nova.virt.libvirt.host [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.771 239969 DEBUG nova.virt.libvirt.host [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.771 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.771 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.772 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.772 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.772 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.773 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.773 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.773 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.773 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.773 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.773 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.774 239969 DEBUG nova.virt.hardware [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.776 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6165c1fd81b171fbc9e0fd6c7444a6625e3c14ee27af07795d99b4c140fbb9ec-merged.mount: Deactivated successfully.
Jan 26 15:54:09 compute-0 podman[270399]: 2026-01-26 15:54:09.798593875 +0000 UTC m=+1.061244165 container remove a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:54:09 compute-0 systemd[1]: libpod-conmon-a7d3f68511a299541286a96fdf874d53854be7608071f192dbc5ecf823167e5e.scope: Deactivated successfully.
Jan 26 15:54:09 compute-0 sudo[270325]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:54:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:54:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:09 compute-0 sudo[270512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:54:09 compute-0 sudo[270512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:54:09 compute-0 sudo[270512]: pam_unix(sudo:session): session closed for user root
Jan 26 15:54:09 compute-0 nova_compute[239965]: 2026-01-26 15:54:09.986 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.222 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:54:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2556223552' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.383 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.409 239969 DEBUG nova.storage.rbd_utils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] rbd image 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.415 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.570 239969 DEBUG nova.compute.manager [req-68f7f154-e3a7-4bcd-9e83-1c2edaf76cb1 req-2d641701-96cf-4bde-982b-e4090839f68e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received event network-vif-plugged-cff19326-5cc9-485a-9f3e-eafabc790a6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.571 239969 DEBUG oslo_concurrency.lockutils [req-68f7f154-e3a7-4bcd-9e83-1c2edaf76cb1 req-2d641701-96cf-4bde-982b-e4090839f68e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2cf7c37a-5399-4471-8385-7f483007196b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.572 239969 DEBUG oslo_concurrency.lockutils [req-68f7f154-e3a7-4bcd-9e83-1c2edaf76cb1 req-2d641701-96cf-4bde-982b-e4090839f68e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.572 239969 DEBUG oslo_concurrency.lockutils [req-68f7f154-e3a7-4bcd-9e83-1c2edaf76cb1 req-2d641701-96cf-4bde-982b-e4090839f68e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.572 239969 DEBUG nova.compute.manager [req-68f7f154-e3a7-4bcd-9e83-1c2edaf76cb1 req-2d641701-96cf-4bde-982b-e4090839f68e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Processing event network-vif-plugged-cff19326-5cc9-485a-9f3e-eafabc790a6e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.573 239969 DEBUG nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.578 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442850.5777612, 2cf7c37a-5399-4471-8385-7f483007196b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.578 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] VM Resumed (Lifecycle Event)
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.581 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.585 239969 INFO nova.virt.libvirt.driver [-] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Instance spawned successfully.
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.585 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.612 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.618 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.621 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.621 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.622 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.622 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.622 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.623 239969 DEBUG nova.virt.libvirt.driver [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.674 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.708 239969 INFO nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Took 11.26 seconds to spawn the instance on the hypervisor.
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.709 239969 DEBUG nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.782 239969 INFO nova.compute.manager [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Took 12.35 seconds to build instance.
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.803 239969 DEBUG oslo_concurrency.lockutils [None req-0054d5f8-0147-45f9-acee-f28855352a3c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:10 compute-0 ceph-mon[75140]: pgmap v1207: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Jan 26 15:54:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:54:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2556223552' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:54:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3850642028' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.995 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.996 239969 DEBUG nova.virt.libvirt.vif [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-889784483',display_name='tempest-ImagesNegativeTestJSON-server-889784483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-889784483',id=32,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e81b52f9f49d988ff490f77d499aa',ramdisk_id='',reservation_id='r-4k5yehfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-2087549606',owner_user_name='tempest-ImagesNegativeTestJSON-2087549606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:54:04Z,user_data=None,user_id='8a6b0cb0d5d2404782b95b68fbfb7e2e',uuid=2a8ff768-0ab1-46cb-983c-9109c4a2c1dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.997 239969 DEBUG nova.network.os_vif_util [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Converting VIF {"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.998 239969 DEBUG nova.network.os_vif_util [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:48:82,bridge_name='br-int',has_traffic_filtering=True,id=0c45aaff-4f58-4867-9375-3e8614fe3f80,network=Network(39d1dcdf-a3eb-462d-99c2-e4f1e8783825),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c45aaff-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:10 compute-0 nova_compute[239965]: 2026-01-26 15:54:10.999 239969 DEBUG nova.objects.instance [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lazy-loading 'pci_devices' on Instance uuid 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:11 compute-0 nova_compute[239965]: 2026-01-26 15:54:11.287 239969 DEBUG nova.network.neutron [req-cb93eed3-71ae-43c0-b23c-61e5a0640b58 req-ee1a5b1a-a4ff-4ba0-a5f3-212b7a48d554 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Updated VIF entry in instance network info cache for port 0c45aaff-4f58-4867-9375-3e8614fe3f80. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:54:11 compute-0 nova_compute[239965]: 2026-01-26 15:54:11.288 239969 DEBUG nova.network.neutron [req-cb93eed3-71ae-43c0-b23c-61e5a0640b58 req-ee1a5b1a-a4ff-4ba0-a5f3-212b7a48d554 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Updating instance_info_cache with network_info: [{"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Jan 26 15:54:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3850642028' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:12 compute-0 ceph-mon[75140]: pgmap v1208: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.015 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <uuid>2a8ff768-0ab1-46cb-983c-9109c4a2c1dd</uuid>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <name>instance-00000020</name>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesNegativeTestJSON-server-889784483</nova:name>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:54:09</nova:creationTime>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <nova:user uuid="8a6b0cb0d5d2404782b95b68fbfb7e2e">tempest-ImagesNegativeTestJSON-2087549606-project-member</nova:user>
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <nova:project uuid="929e81b52f9f49d988ff490f77d499aa">tempest-ImagesNegativeTestJSON-2087549606</nova:project>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <nova:port uuid="0c45aaff-4f58-4867-9375-3e8614fe3f80">
Jan 26 15:54:13 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <system>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <entry name="serial">2a8ff768-0ab1-46cb-983c-9109c4a2c1dd</entry>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <entry name="uuid">2a8ff768-0ab1-46cb-983c-9109c4a2c1dd</entry>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     </system>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <os>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   </os>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <features>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   </features>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk">
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk.config">
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:54:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:f3:48:82"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <target dev="tap0c45aaff-4f"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd/console.log" append="off"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <video>
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     </video>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:54:13 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:54:13 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:54:13 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:54:13 compute-0 nova_compute[239965]: </domain>
Jan 26 15:54:13 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.021 239969 DEBUG nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Preparing to wait for external event network-vif-plugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.022 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquiring lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.022 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.022 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.023 239969 DEBUG nova.virt.libvirt.vif [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-889784483',display_name='tempest-ImagesNegativeTestJSON-server-889784483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-889784483',id=32,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='929e81b52f9f49d988ff490f77d499aa',ramdisk_id='',reservation_id='r-4k5yehfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-2087549606',owner_user_name='tempest-ImagesNegativeTestJSON-2087549606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:54:04Z,user_data=None,user_id='8a6b0cb0d5d2404782b95b68fbfb7e2e',uuid=2a8ff768-0ab1-46cb-983c-9109c4a2c1dd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.023 239969 DEBUG nova.network.os_vif_util [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Converting VIF {"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.024 239969 DEBUG nova.network.os_vif_util [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:48:82,bridge_name='br-int',has_traffic_filtering=True,id=0c45aaff-4f58-4867-9375-3e8614fe3f80,network=Network(39d1dcdf-a3eb-462d-99c2-e4f1e8783825),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c45aaff-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.024 239969 DEBUG os_vif [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:48:82,bridge_name='br-int',has_traffic_filtering=True,id=0c45aaff-4f58-4867-9375-3e8614fe3f80,network=Network(39d1dcdf-a3eb-462d-99c2-e4f1e8783825),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c45aaff-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.025 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.026 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.026 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.029 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.029 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0c45aaff-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.030 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0c45aaff-4f, col_values=(('external_ids', {'iface-id': '0c45aaff-4f58-4867-9375-3e8614fe3f80', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f3:48:82', 'vm-uuid': '2a8ff768-0ab1-46cb-983c-9109c4a2c1dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.031 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:13 compute-0 NetworkManager[48954]: <info>  [1769442853.0328] manager: (tap0c45aaff-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.039 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.039 239969 INFO os_vif [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:48:82,bridge_name='br-int',has_traffic_filtering=True,id=0c45aaff-4f58-4867-9375-3e8614fe3f80,network=Network(39d1dcdf-a3eb-462d-99c2-e4f1e8783825),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c45aaff-4f')
Jan 26 15:54:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 102 op/s
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.748 239969 DEBUG oslo_concurrency.lockutils [req-cb93eed3-71ae-43c0-b23c-61e5a0640b58 req-ee1a5b1a-a4ff-4ba0-a5f3-212b7a48d554 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.783 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.784 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.785 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] No VIF found with MAC fa:16:3e:f3:48:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.785 239969 INFO nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Using config drive
Jan 26 15:54:13 compute-0 nova_compute[239965]: 2026-01-26 15:54:13.815 239969 DEBUG nova.storage.rbd_utils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] rbd image 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:14 compute-0 ceph-mon[75140]: pgmap v1209: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.2 MiB/s wr, 102 op/s
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.033 239969 DEBUG nova.compute.manager [req-06319637-793a-40f7-9b4a-76370cde9ce8 req-3774e9d9-d260-4e55-8d7b-15195de12b51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received event network-vif-plugged-cff19326-5cc9-485a-9f3e-eafabc790a6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.034 239969 DEBUG oslo_concurrency.lockutils [req-06319637-793a-40f7-9b4a-76370cde9ce8 req-3774e9d9-d260-4e55-8d7b-15195de12b51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2cf7c37a-5399-4471-8385-7f483007196b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.034 239969 DEBUG oslo_concurrency.lockutils [req-06319637-793a-40f7-9b4a-76370cde9ce8 req-3774e9d9-d260-4e55-8d7b-15195de12b51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.034 239969 DEBUG oslo_concurrency.lockutils [req-06319637-793a-40f7-9b4a-76370cde9ce8 req-3774e9d9-d260-4e55-8d7b-15195de12b51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.034 239969 DEBUG nova.compute.manager [req-06319637-793a-40f7-9b4a-76370cde9ce8 req-3774e9d9-d260-4e55-8d7b-15195de12b51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] No waiting events found dispatching network-vif-plugged-cff19326-5cc9-485a-9f3e-eafabc790a6e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.035 239969 WARNING nova.compute.manager [req-06319637-793a-40f7-9b4a-76370cde9ce8 req-3774e9d9-d260-4e55-8d7b-15195de12b51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received unexpected event network-vif-plugged-cff19326-5cc9-485a-9f3e-eafabc790a6e for instance with vm_state active and task_state None.
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.040 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.645 239969 INFO nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Creating config drive at /var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd/disk.config
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.653 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxx_d_3y_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.791 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxx_d_3y_" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.823 239969 DEBUG nova.storage.rbd_utils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] rbd image 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:15 compute-0 nova_compute[239965]: 2026-01-26 15:54:15.831 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd/disk.config 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.008 239969 DEBUG oslo_concurrency.processutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd/disk.config 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.010 239969 INFO nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Deleting local config drive /var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd/disk.config because it was imported into RBD.
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.071 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquiring lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.073 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:16 compute-0 kernel: tap0c45aaff-4f: entered promiscuous mode
Jan 26 15:54:16 compute-0 NetworkManager[48954]: <info>  [1769442856.0907] manager: (tap0c45aaff-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.093 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:16 compute-0 ovn_controller[146046]: 2026-01-26T15:54:16Z|00170|binding|INFO|Claiming lport 0c45aaff-4f58-4867-9375-3e8614fe3f80 for this chassis.
Jan 26 15:54:16 compute-0 ovn_controller[146046]: 2026-01-26T15:54:16Z|00171|binding|INFO|0c45aaff-4f58-4867-9375-3e8614fe3f80: Claiming fa:16:3e:f3:48:82 10.100.0.11
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.110 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:48:82 10.100.0.11'], port_security=['fa:16:3e:f3:48:82 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2a8ff768-0ab1-46cb-983c-9109c4a2c1dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39d1dcdf-a3eb-462d-99c2-e4f1e8783825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e81b52f9f49d988ff490f77d499aa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5d59dbfa-4c8a-4de3-97b6-4ca939c6bc29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891c7be-fa43-4da4-a46c-69aa905acae5, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=0c45aaff-4f58-4867-9375-3e8614fe3f80) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.111 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 0c45aaff-4f58-4867-9375-3e8614fe3f80 in datapath 39d1dcdf-a3eb-462d-99c2-e4f1e8783825 bound to our chassis
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.113 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 39d1dcdf-a3eb-462d-99c2-e4f1e8783825
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.119 239969 DEBUG nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.130 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3df4caaa-473f-4ec2-8407-7a91dc20fb72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.132 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap39d1dcdf-a1 in ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:54:16 compute-0 systemd-udevd[270672]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.135 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap39d1dcdf-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.135 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[732e242e-1d1e-470c-9bac-fb637ea0aa8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.143 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e91f116d-c09d-4b51-9c1b-6872007fbdc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 NetworkManager[48954]: <info>  [1769442856.1664] device (tap0c45aaff-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:54:16 compute-0 NetworkManager[48954]: <info>  [1769442856.1695] device (tap0c45aaff-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:54:16 compute-0 systemd-machined[208061]: New machine qemu-36-instance-00000020.
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.169 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b17ff9-bf3a-461c-ad1f-f3dab7e05015]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.180 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:16 compute-0 ovn_controller[146046]: 2026-01-26T15:54:16Z|00172|binding|INFO|Setting lport 0c45aaff-4f58-4867-9375-3e8614fe3f80 ovn-installed in OVS
Jan 26 15:54:16 compute-0 ovn_controller[146046]: 2026-01-26T15:54:16Z|00173|binding|INFO|Setting lport 0c45aaff-4f58-4867-9375-3e8614fe3f80 up in Southbound
Jan 26 15:54:16 compute-0 systemd[1]: Started Virtual Machine qemu-36-instance-00000020.
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.190 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.202 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c24ad852-3801-4efb-be4e-ce90d1ecbe5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.232 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[461c4495-50b9-497f-bcb9-cfb0cfd3bcf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 systemd-udevd[270676]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:54:16 compute-0 NetworkManager[48954]: <info>  [1769442856.2427] manager: (tap39d1dcdf-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/92)
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.239 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[71874299-365b-4de8-bddd-bc674e2360af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.247 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.248 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:16 compute-0 NetworkManager[48954]: <info>  [1769442856.2500] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/93)
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.249 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:16 compute-0 NetworkManager[48954]: <info>  [1769442856.2510] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/94)
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.268 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.269 239969 INFO nova.compute.claims [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.288 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[644d7cb8-ecb7-4b1b-b1db-8463864791cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.295 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2129a54c-ec7c-42c8-a597-b3a58eef03ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 NetworkManager[48954]: <info>  [1769442856.3264] device (tap39d1dcdf-a0): carrier: link connected
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.332 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3cc3fe-072f-40d9-8e11-801137e6a9a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.347 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:16 compute-0 ovn_controller[146046]: 2026-01-26T15:54:16Z|00174|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.365 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.372 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[714feb34-1562-4f09-9ff9-e94416c3a7af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39d1dcdf-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:43:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434713, 'reachable_time': 15724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270705, 'error': None, 'target': 'ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.391 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e016ba-9b20-476d-93dd-51ce9a8c32c0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fece:4372'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 434713, 'tstamp': 434713}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270706, 'error': None, 'target': 'ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.415 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb5d498-e7fa-4962-991f-6a7752b584ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap39d1dcdf-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ce:43:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 59], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434713, 'reachable_time': 15724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270707, 'error': None, 'target': 'ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.453 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eafe4b7a-5cd6-4314-afda-b234e0c62915]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.454 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.496 239969 DEBUG nova.compute.manager [req-92757f4d-d3d3-456d-9883-358e541cd1b5 req-3e1458d9-285e-4898-b2dc-75d958bfd8f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Received event network-vif-plugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.497 239969 DEBUG oslo_concurrency.lockutils [req-92757f4d-d3d3-456d-9883-358e541cd1b5 req-3e1458d9-285e-4898-b2dc-75d958bfd8f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.497 239969 DEBUG oslo_concurrency.lockutils [req-92757f4d-d3d3-456d-9883-358e541cd1b5 req-3e1458d9-285e-4898-b2dc-75d958bfd8f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.498 239969 DEBUG oslo_concurrency.lockutils [req-92757f4d-d3d3-456d-9883-358e541cd1b5 req-3e1458d9-285e-4898-b2dc-75d958bfd8f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.498 239969 DEBUG nova.compute.manager [req-92757f4d-d3d3-456d-9883-358e541cd1b5 req-3e1458d9-285e-4898-b2dc-75d958bfd8f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Processing event network-vif-plugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.527 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[30002f13-7ca1-49af-b958-13493dfbaccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.531 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39d1dcdf-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.533 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.534 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39d1dcdf-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:16 compute-0 NetworkManager[48954]: <info>  [1769442856.5363] manager: (tap39d1dcdf-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.536 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:16 compute-0 kernel: tap39d1dcdf-a0: entered promiscuous mode
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.539 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap39d1dcdf-a0, col_values=(('external_ids', {'iface-id': 'b4ec0e82-1eca-4617-8ad2-1c87e6e4e507'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:16 compute-0 ovn_controller[146046]: 2026-01-26T15:54:16Z|00175|binding|INFO|Releasing lport b4ec0e82-1eca-4617-8ad2-1c87e6e4e507 from this chassis (sb_readonly=0)
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.559 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.559 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/39d1dcdf-a3eb-462d-99c2-e4f1e8783825.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/39d1dcdf-a3eb-462d-99c2-e4f1e8783825.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.560 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[293b5f20-4442-446a-aa7e-2e1a24908020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.562 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-39d1dcdf-a3eb-462d-99c2-e4f1e8783825
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/39d1dcdf-a3eb-462d-99c2-e4f1e8783825.pid.haproxy
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 39d1dcdf-a3eb-462d-99c2-e4f1e8783825
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:54:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:16.563 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825', 'env', 'PROCESS_TAG=haproxy-39d1dcdf-a3eb-462d-99c2-e4f1e8783825', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/39d1dcdf-a3eb-462d-99c2-e4f1e8783825.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.728 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442856.7274392, 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.728 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] VM Started (Lifecycle Event)
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.731 239969 DEBUG nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.735 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.741 239969 INFO nova.virt.libvirt.driver [-] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Instance spawned successfully.
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.742 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.755 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.769 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.773 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.774 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.775 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.775 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.776 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.777 239969 DEBUG nova.virt.libvirt.driver [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.808 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.809 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442856.7277076, 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.809 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] VM Paused (Lifecycle Event)
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.839 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.844 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442856.7343402, 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.845 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] VM Resumed (Lifecycle Event)
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.859 239969 INFO nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Took 12.68 seconds to spawn the instance on the hypervisor.
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.860 239969 DEBUG nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.870 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.874 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:54:16 compute-0 ceph-mon[75140]: pgmap v1210: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.905 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.943 239969 INFO nova.compute.manager [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Took 13.72 seconds to build instance.
Jan 26 15:54:16 compute-0 nova_compute[239965]: 2026-01-26 15:54:16.961 239969 DEBUG oslo_concurrency.lockutils [None req-1b7551f2-39ff-4921-b32e-c5c20abfc2f5 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.801s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:16 compute-0 podman[270799]: 2026-01-26 15:54:16.981700536 +0000 UTC m=+0.068632148 container create d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 15:54:17 compute-0 systemd[1]: Started libpod-conmon-d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b.scope.
Jan 26 15:54:17 compute-0 podman[270799]: 2026-01-26 15:54:16.946337867 +0000 UTC m=+0.033269479 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:54:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d06bc86f43089521b95e3fdc11a47db1b6786e1b691e214824752592d45b8c78/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:17 compute-0 podman[270799]: 2026-01-26 15:54:17.077372269 +0000 UTC m=+0.164303901 container init d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:54:17 compute-0 podman[270799]: 2026-01-26 15:54:17.083222651 +0000 UTC m=+0.170154263 container start d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:54:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:54:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2232814206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:17 compute-0 neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825[270813]: [NOTICE]   (270817) : New worker (270821) forked
Jan 26 15:54:17 compute-0 neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825[270813]: [NOTICE]   (270817) : Loading success.
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.119 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.124 239969 DEBUG nova.compute.provider_tree [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.149 239969 DEBUG nova.scheduler.client.report [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.230 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.230 239969 DEBUG nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.374 239969 DEBUG nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.375 239969 DEBUG nova.network.neutron [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.403 239969 INFO nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.426 239969 DEBUG nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.486 239969 DEBUG nova.compute.manager [req-f5c31529-bcf3-4c69-98e0-27bfe39c56f8 req-8a714859-1805-4fa1-84c0-f01efc2cf8ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received event network-changed-cff19326-5cc9-485a-9f3e-eafabc790a6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.487 239969 DEBUG nova.compute.manager [req-f5c31529-bcf3-4c69-98e0-27bfe39c56f8 req-8a714859-1805-4fa1-84c0-f01efc2cf8ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Refreshing instance network info cache due to event network-changed-cff19326-5cc9-485a-9f3e-eafabc790a6e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.487 239969 DEBUG oslo_concurrency.lockutils [req-f5c31529-bcf3-4c69-98e0-27bfe39c56f8 req-8a714859-1805-4fa1-84c0-f01efc2cf8ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.488 239969 DEBUG oslo_concurrency.lockutils [req-f5c31529-bcf3-4c69-98e0-27bfe39c56f8 req-8a714859-1805-4fa1-84c0-f01efc2cf8ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.488 239969 DEBUG nova.network.neutron [req-f5c31529-bcf3-4c69-98e0-27bfe39c56f8 req-8a714859-1805-4fa1-84c0-f01efc2cf8ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Refreshing network info cache for port cff19326-5cc9-485a-9f3e-eafabc790a6e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.563 239969 DEBUG nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.565 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.566 239969 INFO nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Creating image(s)
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.593 239969 DEBUG nova.storage.rbd_utils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] rbd image 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.633 239969 DEBUG nova.storage.rbd_utils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] rbd image 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.664 239969 DEBUG nova.storage.rbd_utils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] rbd image 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.669 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 134 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.750 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.751 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.752 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.752 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.774 239969 DEBUG nova.storage.rbd_utils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] rbd image 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.779 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:17 compute-0 nova_compute[239965]: 2026-01-26 15:54:17.811 239969 DEBUG nova.policy [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5e1a7a8a20c4486d993bf5d6901001be', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a4fae17181d64f599975dc5816c6b9b8', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:54:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2232814206' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.032 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.090 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.311s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.169 239969 DEBUG nova.storage.rbd_utils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] resizing rbd image 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.261 239969 DEBUG nova.objects.instance [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lazy-loading 'migration_context' on Instance uuid 3c352fa1-e91d-49ab-a44a-875ec50bb9c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.288 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.289 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Ensure instance console log exists: /var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.290 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.290 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.290 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.640 239969 DEBUG nova.compute.manager [req-73185951-acbf-4756-9803-28f5223fc00e req-e3002522-f36d-4744-b669-f727f598e199 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Received event network-vif-plugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.641 239969 DEBUG oslo_concurrency.lockutils [req-73185951-acbf-4756-9803-28f5223fc00e req-e3002522-f36d-4744-b669-f727f598e199 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.641 239969 DEBUG oslo_concurrency.lockutils [req-73185951-acbf-4756-9803-28f5223fc00e req-e3002522-f36d-4744-b669-f727f598e199 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.642 239969 DEBUG oslo_concurrency.lockutils [req-73185951-acbf-4756-9803-28f5223fc00e req-e3002522-f36d-4744-b669-f727f598e199 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.642 239969 DEBUG nova.compute.manager [req-73185951-acbf-4756-9803-28f5223fc00e req-e3002522-f36d-4744-b669-f727f598e199 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] No waiting events found dispatching network-vif-plugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.642 239969 WARNING nova.compute.manager [req-73185951-acbf-4756-9803-28f5223fc00e req-e3002522-f36d-4744-b669-f727f598e199 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Received unexpected event network-vif-plugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 for instance with vm_state active and task_state None.
Jan 26 15:54:18 compute-0 nova_compute[239965]: 2026-01-26 15:54:18.768 239969 DEBUG nova.network.neutron [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Successfully created port: 5a7726a4-3201-49ec-ab56-ca4039acde5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:54:18 compute-0 ceph-mon[75140]: pgmap v1211: 305 pgs: 305 active+clean; 134 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 95 op/s
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.118 239969 DEBUG nova.network.neutron [req-f5c31529-bcf3-4c69-98e0-27bfe39c56f8 req-8a714859-1805-4fa1-84c0-f01efc2cf8ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updated VIF entry in instance network info cache for port cff19326-5cc9-485a-9f3e-eafabc790a6e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.119 239969 DEBUG nova.network.neutron [req-f5c31529-bcf3-4c69-98e0-27bfe39c56f8 req-8a714859-1805-4fa1-84c0-f01efc2cf8ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updating instance_info_cache with network_info: [{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.157 239969 DEBUG oslo_concurrency.lockutils [req-f5c31529-bcf3-4c69-98e0-27bfe39c56f8 req-8a714859-1805-4fa1-84c0-f01efc2cf8ff a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.223 239969 DEBUG oslo_concurrency.lockutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquiring lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.225 239969 DEBUG oslo_concurrency.lockutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.226 239969 DEBUG oslo_concurrency.lockutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquiring lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.226 239969 DEBUG oslo_concurrency.lockutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.226 239969 DEBUG oslo_concurrency.lockutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.228 239969 INFO nova.compute.manager [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Terminating instance
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.229 239969 DEBUG nova.compute.manager [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:54:19 compute-0 kernel: tap0c45aaff-4f (unregistering): left promiscuous mode
Jan 26 15:54:19 compute-0 NetworkManager[48954]: <info>  [1769442859.2688] device (tap0c45aaff-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.277 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:19 compute-0 ovn_controller[146046]: 2026-01-26T15:54:19Z|00176|binding|INFO|Releasing lport 0c45aaff-4f58-4867-9375-3e8614fe3f80 from this chassis (sb_readonly=0)
Jan 26 15:54:19 compute-0 ovn_controller[146046]: 2026-01-26T15:54:19Z|00177|binding|INFO|Setting lport 0c45aaff-4f58-4867-9375-3e8614fe3f80 down in Southbound
Jan 26 15:54:19 compute-0 ovn_controller[146046]: 2026-01-26T15:54:19Z|00178|binding|INFO|Removing iface tap0c45aaff-4f ovn-installed in OVS
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.280 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.286 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f3:48:82 10.100.0.11'], port_security=['fa:16:3e:f3:48:82 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '2a8ff768-0ab1-46cb-983c-9109c4a2c1dd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-39d1dcdf-a3eb-462d-99c2-e4f1e8783825', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '929e81b52f9f49d988ff490f77d499aa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5d59dbfa-4c8a-4de3-97b6-4ca939c6bc29', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5891c7be-fa43-4da4-a46c-69aa905acae5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=0c45aaff-4f58-4867-9375-3e8614fe3f80) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.288 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 0c45aaff-4f58-4867-9375-3e8614fe3f80 in datapath 39d1dcdf-a3eb-462d-99c2-e4f1e8783825 unbound from our chassis
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.290 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 39d1dcdf-a3eb-462d-99c2-e4f1e8783825, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.292 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3262cf85-4260-43ff-83cb-0d901d870e34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.292 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825 namespace which is not needed anymore
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:19 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Deactivated successfully.
Jan 26 15:54:19 compute-0 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Consumed 2.951s CPU time.
Jan 26 15:54:19 compute-0 systemd-machined[208061]: Machine qemu-36-instance-00000020 terminated.
Jan 26 15:54:19 compute-0 neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825[270813]: [NOTICE]   (270817) : haproxy version is 2.8.14-c23fe91
Jan 26 15:54:19 compute-0 neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825[270813]: [NOTICE]   (270817) : path to executable is /usr/sbin/haproxy
Jan 26 15:54:19 compute-0 neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825[270813]: [ALERT]    (270817) : Current worker (270821) exited with code 143 (Terminated)
Jan 26 15:54:19 compute-0 neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825[270813]: [WARNING]  (270817) : All workers exited. Exiting... (0)
Jan 26 15:54:19 compute-0 systemd[1]: libpod-d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b.scope: Deactivated successfully.
Jan 26 15:54:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.451 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:19 compute-0 podman[271019]: 2026-01-26 15:54:19.456771776 +0000 UTC m=+0.061633267 container died d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.458 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.468 239969 INFO nova.virt.libvirt.driver [-] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Instance destroyed successfully.
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.469 239969 DEBUG nova.objects.instance [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lazy-loading 'resources' on Instance uuid 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.484 239969 DEBUG nova.virt.libvirt.vif [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-889784483',display_name='tempest-ImagesNegativeTestJSON-server-889784483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-889784483',id=32,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='929e81b52f9f49d988ff490f77d499aa',ramdisk_id='',reservation_id='r-4k5yehfn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-2087549606',owner_user_name='tempest-ImagesNegativeTestJSON-2087549606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:54:16Z,user_data=None,user_id='8a6b0cb0d5d2404782b95b68fbfb7e2e',uuid=2a8ff768-0ab1-46cb-983c-9109c4a2c1dd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.485 239969 DEBUG nova.network.os_vif_util [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Converting VIF {"id": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "address": "fa:16:3e:f3:48:82", "network": {"id": "39d1dcdf-a3eb-462d-99c2-e4f1e8783825", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-550716390-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "929e81b52f9f49d988ff490f77d499aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0c45aaff-4f", "ovs_interfaceid": "0c45aaff-4f58-4867-9375-3e8614fe3f80", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.486 239969 DEBUG nova.network.os_vif_util [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f3:48:82,bridge_name='br-int',has_traffic_filtering=True,id=0c45aaff-4f58-4867-9375-3e8614fe3f80,network=Network(39d1dcdf-a3eb-462d-99c2-e4f1e8783825),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c45aaff-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.486 239969 DEBUG os_vif [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:48:82,bridge_name='br-int',has_traffic_filtering=True,id=0c45aaff-4f58-4867-9375-3e8614fe3f80,network=Network(39d1dcdf-a3eb-462d-99c2-e4f1e8783825),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c45aaff-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.488 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.488 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0c45aaff-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.493 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b-userdata-shm.mount: Deactivated successfully.
Jan 26 15:54:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-d06bc86f43089521b95e3fdc11a47db1b6786e1b691e214824752592d45b8c78-merged.mount: Deactivated successfully.
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.495 239969 INFO os_vif [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f3:48:82,bridge_name='br-int',has_traffic_filtering=True,id=0c45aaff-4f58-4867-9375-3e8614fe3f80,network=Network(39d1dcdf-a3eb-462d-99c2-e4f1e8783825),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0c45aaff-4f')
Jan 26 15:54:19 compute-0 podman[271019]: 2026-01-26 15:54:19.508603355 +0000 UTC m=+0.113464826 container cleanup d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:54:19 compute-0 systemd[1]: libpod-conmon-d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b.scope: Deactivated successfully.
Jan 26 15:54:19 compute-0 podman[271074]: 2026-01-26 15:54:19.586881826 +0000 UTC m=+0.053095350 container remove d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.594 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa58565-5ea9-4444-9bd5-e26e9b2057eb]: (4, ('Mon Jan 26 03:54:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825 (d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b)\nd50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b\nMon Jan 26 03:54:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825 (d50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b)\nd50f523e5365410ada8703f8870d168d8dfc42c75aa6c4545ee54f91d30cae7b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.596 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cfcd4055-6fc8-4899-a647-f72577477eb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.597 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39d1dcdf-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:19 compute-0 kernel: tap39d1dcdf-a0: left promiscuous mode
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.606 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[24725b2f-1ab6-4e57-a970-04459b8592cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.623 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.624 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[659fa2c8-6362-4322-b193-b0ab53eab704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.627 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[82fb5593-7dd9-4e81-a7d3-25ca934dc012]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.644 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b822821b-b3a7-4479-89aa-4a3bc2bc210a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 434703, 'reachable_time': 39353, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271091, 'error': None, 'target': 'ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d39d1dcdf\x2da3eb\x2d462d\x2d99c2\x2de4f1e8783825.mount: Deactivated successfully.
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.650 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-39d1dcdf-a3eb-462d-99c2-e4f1e8783825 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:54:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:19.650 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[7f642426-9b78-45d8-9c93-74fe912ecae5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 134 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.834 239969 INFO nova.virt.libvirt.driver [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Deleting instance files /var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_del
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.834 239969 INFO nova.virt.libvirt.driver [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Deletion of /var/lib/nova/instances/2a8ff768-0ab1-46cb-983c-9109c4a2c1dd_del complete
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.921 239969 INFO nova.compute.manager [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Took 0.69 seconds to destroy the instance on the hypervisor.
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.921 239969 DEBUG oslo.service.loopingcall [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.921 239969 DEBUG nova.compute.manager [-] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.922 239969 DEBUG nova.network.neutron [-] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:54:19 compute-0 nova_compute[239965]: 2026-01-26 15:54:19.995 239969 DEBUG nova.network.neutron [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Successfully updated port: 5a7726a4-3201-49ec-ab56-ca4039acde5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.026 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquiring lock "refresh_cache-3c352fa1-e91d-49ab-a44a-875ec50bb9c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.028 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquired lock "refresh_cache-3c352fa1-e91d-49ab-a44a-875ec50bb9c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.028 239969 DEBUG nova.network.neutron [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.611 239969 DEBUG nova.network.neutron [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.763 239969 DEBUG nova.compute.manager [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Received event network-vif-unplugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.763 239969 DEBUG oslo_concurrency.lockutils [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.763 239969 DEBUG oslo_concurrency.lockutils [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.764 239969 DEBUG oslo_concurrency.lockutils [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.764 239969 DEBUG nova.compute.manager [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] No waiting events found dispatching network-vif-unplugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.764 239969 DEBUG nova.compute.manager [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Received event network-vif-unplugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.764 239969 DEBUG nova.compute.manager [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Received event network-vif-plugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.764 239969 DEBUG oslo_concurrency.lockutils [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.765 239969 DEBUG oslo_concurrency.lockutils [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.765 239969 DEBUG oslo_concurrency.lockutils [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.765 239969 DEBUG nova.compute.manager [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] No waiting events found dispatching network-vif-plugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.765 239969 WARNING nova.compute.manager [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Received unexpected event network-vif-plugged-0c45aaff-4f58-4867-9375-3e8614fe3f80 for instance with vm_state active and task_state deleting.
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.766 239969 DEBUG nova.compute.manager [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received event network-changed-5a7726a4-3201-49ec-ab56-ca4039acde5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.766 239969 DEBUG nova.compute.manager [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Refreshing instance network info cache due to event network-changed-5a7726a4-3201-49ec-ab56-ca4039acde5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:54:20 compute-0 nova_compute[239965]: 2026-01-26 15:54:20.766 239969 DEBUG oslo_concurrency.lockutils [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3c352fa1-e91d-49ab-a44a-875ec50bb9c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:20 compute-0 ceph-mon[75140]: pgmap v1212: 305 pgs: 305 active+clean; 134 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.526 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.614 239969 DEBUG nova.network.neutron [-] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.630 239969 INFO nova.compute.manager [-] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Took 1.71 seconds to deallocate network for instance.
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.674 239969 DEBUG oslo_concurrency.lockutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.675 239969 DEBUG oslo_concurrency.lockutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 136 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 559 KiB/s wr, 113 op/s
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.777 239969 DEBUG oslo_concurrency.processutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.810 239969 DEBUG nova.network.neutron [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Updating instance_info_cache with network_info: [{"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.839 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Releasing lock "refresh_cache-3c352fa1-e91d-49ab-a44a-875ec50bb9c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.840 239969 DEBUG nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Instance network_info: |[{"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.841 239969 DEBUG oslo_concurrency.lockutils [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3c352fa1-e91d-49ab-a44a-875ec50bb9c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.841 239969 DEBUG nova.network.neutron [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Refreshing network info cache for port 5a7726a4-3201-49ec-ab56-ca4039acde5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.844 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Start _get_guest_xml network_info=[{"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.849 239969 WARNING nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.855 239969 DEBUG nova.virt.libvirt.host [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.856 239969 DEBUG nova.virt.libvirt.host [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.863 239969 DEBUG nova.virt.libvirt.host [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.863 239969 DEBUG nova.virt.libvirt.host [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.864 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.864 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.865 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.865 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.866 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.866 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.866 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.866 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.867 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.867 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.867 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.868 239969 DEBUG nova.virt.hardware [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:54:21 compute-0 nova_compute[239965]: 2026-01-26 15:54:21.871 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:54:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294056189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:22 compute-0 nova_compute[239965]: 2026-01-26 15:54:22.378 239969 DEBUG oslo_concurrency.processutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:22 compute-0 nova_compute[239965]: 2026-01-26 15:54:22.385 239969 DEBUG nova.compute.provider_tree [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:54:22 compute-0 nova_compute[239965]: 2026-01-26 15:54:22.402 239969 DEBUG nova.scheduler.client.report [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:54:22 compute-0 nova_compute[239965]: 2026-01-26 15:54:22.423 239969 DEBUG oslo_concurrency.lockutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:54:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003397456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:22 compute-0 nova_compute[239965]: 2026-01-26 15:54:22.462 239969 INFO nova.scheduler.client.report [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Deleted allocations for instance 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd
Jan 26 15:54:22 compute-0 nova_compute[239965]: 2026-01-26 15:54:22.465 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:22 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 15:54:22 compute-0 nova_compute[239965]: 2026-01-26 15:54:22.488 239969 DEBUG nova.storage.rbd_utils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] rbd image 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:22 compute-0 nova_compute[239965]: 2026-01-26 15:54:22.493 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:22 compute-0 nova_compute[239965]: 2026-01-26 15:54:22.593 239969 DEBUG oslo_concurrency.lockutils [None req-4549ca99-9bae-4d4b-9ba6-093181f2fa00 8a6b0cb0d5d2404782b95b68fbfb7e2e 929e81b52f9f49d988ff490f77d499aa - - default default] Lock "2a8ff768-0ab1-46cb-983c-9109c4a2c1dd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.368s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:22 compute-0 ceph-mon[75140]: pgmap v1213: 305 pgs: 305 active+clean; 136 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 559 KiB/s wr, 113 op/s
Jan 26 15:54:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3294056189' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4003397456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.097 239969 DEBUG nova.compute.manager [req-30eb1385-cd1e-4d5b-b534-4be934cade9b req-39fcef7b-4183-495e-8b67-452d9c780c46 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Received event network-vif-deleted-0c45aaff-4f58-4867-9375-3e8614fe3f80 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:54:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2759565442' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.195 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.702s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.197 239969 DEBUG nova.virt.libvirt.vif [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:54:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1884025666',display_name='tempest-ServersTestManualDisk-server-1884025666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1884025666',id=33,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6lv1uakSqWZntjnzlYObwQ4O5lM4662rvmzzGK9m7PGcCa9az9O3xFja6m+OM+cleoC76M21DlcKWaQ1VvnnEb1p+cFpm6c2l3JNRpqxUHP51OvJUO11mfImVwFrZPUw==',key_name='tempest-keypair-796965151',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4fae17181d64f599975dc5816c6b9b8',ramdisk_id='',reservation_id='r-eb30osg8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1759969789',owner_user_name='tempest-ServersTestManualDisk-1759969789-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:54:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e1a7a8a20c4486d993bf5d6901001be',uuid=3c352fa1-e91d-49ab-a44a-875ec50bb9c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.198 239969 DEBUG nova.network.os_vif_util [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Converting VIF {"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.199 239969 DEBUG nova.network.os_vif_util [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:63:a9,bridge_name='br-int',has_traffic_filtering=True,id=5a7726a4-3201-49ec-ab56-ca4039acde5a,network=Network(52963be3-f242-4f3f-83c5-61cee797b0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a7726a4-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.200 239969 DEBUG nova.objects.instance [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3c352fa1-e91d-49ab-a44a-875ec50bb9c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.217 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <uuid>3c352fa1-e91d-49ab-a44a-875ec50bb9c3</uuid>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <name>instance-00000021</name>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestManualDisk-server-1884025666</nova:name>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:54:21</nova:creationTime>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <nova:user uuid="5e1a7a8a20c4486d993bf5d6901001be">tempest-ServersTestManualDisk-1759969789-project-member</nova:user>
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <nova:project uuid="a4fae17181d64f599975dc5816c6b9b8">tempest-ServersTestManualDisk-1759969789</nova:project>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <nova:port uuid="5a7726a4-3201-49ec-ab56-ca4039acde5a">
Jan 26 15:54:23 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <system>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <entry name="serial">3c352fa1-e91d-49ab-a44a-875ec50bb9c3</entry>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <entry name="uuid">3c352fa1-e91d-49ab-a44a-875ec50bb9c3</entry>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     </system>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <os>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   </os>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <features>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   </features>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk">
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk.config">
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:54:23 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:02:63:a9"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <target dev="tap5a7726a4-32"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3/console.log" append="off"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <video>
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     </video>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:54:23 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:54:23 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:54:23 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:54:23 compute-0 nova_compute[239965]: </domain>
Jan 26 15:54:23 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.219 239969 DEBUG nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Preparing to wait for external event network-vif-plugged-5a7726a4-3201-49ec-ab56-ca4039acde5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.220 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquiring lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.220 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.220 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.221 239969 DEBUG nova.virt.libvirt.vif [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:54:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1884025666',display_name='tempest-ServersTestManualDisk-server-1884025666',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1884025666',id=33,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6lv1uakSqWZntjnzlYObwQ4O5lM4662rvmzzGK9m7PGcCa9az9O3xFja6m+OM+cleoC76M21DlcKWaQ1VvnnEb1p+cFpm6c2l3JNRpqxUHP51OvJUO11mfImVwFrZPUw==',key_name='tempest-keypair-796965151',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a4fae17181d64f599975dc5816c6b9b8',ramdisk_id='',reservation_id='r-eb30osg8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1759969789',owner_user_name='tempest-ServersTestManualDisk-1759969789-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:54:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e1a7a8a20c4486d993bf5d6901001be',uuid=3c352fa1-e91d-49ab-a44a-875ec50bb9c3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.221 239969 DEBUG nova.network.os_vif_util [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Converting VIF {"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.222 239969 DEBUG nova.network.os_vif_util [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:63:a9,bridge_name='br-int',has_traffic_filtering=True,id=5a7726a4-3201-49ec-ab56-ca4039acde5a,network=Network(52963be3-f242-4f3f-83c5-61cee797b0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a7726a4-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.222 239969 DEBUG os_vif [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:63:a9,bridge_name='br-int',has_traffic_filtering=True,id=5a7726a4-3201-49ec-ab56-ca4039acde5a,network=Network(52963be3-f242-4f3f-83c5-61cee797b0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a7726a4-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.223 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.224 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.224 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.227 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.227 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a7726a4-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.228 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a7726a4-32, col_values=(('external_ids', {'iface-id': '5a7726a4-3201-49ec-ab56-ca4039acde5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:63:a9', 'vm-uuid': '3c352fa1-e91d-49ab-a44a-875ec50bb9c3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:23 compute-0 NetworkManager[48954]: <info>  [1769442863.2304] manager: (tap5a7726a4-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.232 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.235 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.237 239969 INFO os_vif [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:63:a9,bridge_name='br-int',has_traffic_filtering=True,id=5a7726a4-3201-49ec-ab56-ca4039acde5a,network=Network(52963be3-f242-4f3f-83c5-61cee797b0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a7726a4-32')
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.297 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.298 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.298 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] No VIF found with MAC fa:16:3e:02:63:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.299 239969 INFO nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Using config drive
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.320 239969 DEBUG nova.storage.rbd_utils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] rbd image 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.838 239969 INFO nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Creating config drive at /var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3/disk.config
Jan 26 15:54:23 compute-0 ovn_controller[146046]: 2026-01-26T15:54:23Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:1c:47 10.100.0.12
Jan 26 15:54:23 compute-0 ovn_controller[146046]: 2026-01-26T15:54:23Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:1c:47 10.100.0.12
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.845 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0z6wxae execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2759565442' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:23 compute-0 nova_compute[239965]: 2026-01-26 15:54:23.986 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_0z6wxae" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.027 239969 DEBUG nova.storage.rbd_utils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] rbd image 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.033 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3/disk.config 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.196 239969 DEBUG oslo_concurrency.processutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3/disk.config 3c352fa1-e91d-49ab-a44a-875ec50bb9c3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.197 239969 INFO nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Deleting local config drive /var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3/disk.config because it was imported into RBD.
Jan 26 15:54:24 compute-0 kernel: tap5a7726a4-32: entered promiscuous mode
Jan 26 15:54:24 compute-0 NetworkManager[48954]: <info>  [1769442864.2550] manager: (tap5a7726a4-32): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Jan 26 15:54:24 compute-0 ovn_controller[146046]: 2026-01-26T15:54:24Z|00179|binding|INFO|Claiming lport 5a7726a4-3201-49ec-ab56-ca4039acde5a for this chassis.
Jan 26 15:54:24 compute-0 ovn_controller[146046]: 2026-01-26T15:54:24Z|00180|binding|INFO|5a7726a4-3201-49ec-ab56-ca4039acde5a: Claiming fa:16:3e:02:63:a9 10.100.0.14
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.260 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.268 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:63:a9 10.100.0.14'], port_security=['fa:16:3e:02:63:a9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3c352fa1-e91d-49ab-a44a-875ec50bb9c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52963be3-f242-4f3f-83c5-61cee797b0b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4fae17181d64f599975dc5816c6b9b8', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd4f72978-51c5-43c4-afb4-1b796998a549', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1833d7da-ca9f-431f-aada-f0250e90c35e, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5a7726a4-3201-49ec-ab56-ca4039acde5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.270 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5a7726a4-3201-49ec-ab56-ca4039acde5a in datapath 52963be3-f242-4f3f-83c5-61cee797b0b4 bound to our chassis
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.272 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 52963be3-f242-4f3f-83c5-61cee797b0b4
Jan 26 15:54:24 compute-0 ovn_controller[146046]: 2026-01-26T15:54:24Z|00181|binding|INFO|Setting lport 5a7726a4-3201-49ec-ab56-ca4039acde5a ovn-installed in OVS
Jan 26 15:54:24 compute-0 ovn_controller[146046]: 2026-01-26T15:54:24Z|00182|binding|INFO|Setting lport 5a7726a4-3201-49ec-ab56-ca4039acde5a up in Southbound
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.280 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.290 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f3b70ed1-ca54-4845-b325-7646397574f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.291 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap52963be3-f1 in ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.293 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap52963be3-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.293 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06898a69-b188-401a-a4b4-088f49fadaa9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 systemd-udevd[271252]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.297 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5c8593e3-8c7e-4a9e-b727-8d983eb9bc22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 systemd-machined[208061]: New machine qemu-37-instance-00000021.
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.306 239969 DEBUG nova.network.neutron [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Updated VIF entry in instance network info cache for port 5a7726a4-3201-49ec-ab56-ca4039acde5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.306 239969 DEBUG nova.network.neutron [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Updating instance_info_cache with network_info: [{"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.311 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[c574202f-3589-4940-918c-d1faf484eb2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 systemd[1]: Started Virtual Machine qemu-37-instance-00000021.
Jan 26 15:54:24 compute-0 NetworkManager[48954]: <info>  [1769442864.3186] device (tap5a7726a4-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:54:24 compute-0 NetworkManager[48954]: <info>  [1769442864.3197] device (tap5a7726a4-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.327 239969 DEBUG oslo_concurrency.lockutils [req-af1b9eef-ea3b-4a81-8fa9-56fc8df9ac27 req-d51f8f1a-ce7f-43ef-bd0b-1bd551f9593d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3c352fa1-e91d-49ab-a44a-875ec50bb9c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.336 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b51e7cf6-deec-4e72-90f4-5910a3e3f49e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.363 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8291f928-d2e9-4244-8871-6bcbc236e5cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 NetworkManager[48954]: <info>  [1769442864.3767] manager: (tap52963be3-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/98)
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.377 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3cbc490b-6b56-4861-833a-2e3788680126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.415 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[dbdf298b-d7c3-4f45-b877-26ceb0368d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.419 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5e42b040-a669-4071-9fc6-22cfd732e475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 NetworkManager[48954]: <info>  [1769442864.4473] device (tap52963be3-f0): carrier: link connected
Jan 26 15:54:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.453 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[504fbf91-0e54-4dcc-b931-66b532235de8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.471 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee0eef1-bde5-46bf-9b8b-71ab277d899c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52963be3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:21:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435525, 'reachable_time': 24725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271285, 'error': None, 'target': 'ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.489 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0dd86f0d-f1da-44db-a251-6adccb48492c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:210e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 435525, 'tstamp': 435525}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271286, 'error': None, 'target': 'ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.507 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[62dd96be-b61d-4bb4-9c6b-37bbf33d7f10]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap52963be3-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:21:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 62], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435525, 'reachable_time': 24725, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271287, 'error': None, 'target': 'ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.542 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4cc2a9b8-9815-45a2-affd-7e1456edefa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.607 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbc10c9-220e-4d78-8b95-0a924c0c2c50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.609 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52963be3-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.609 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.610 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52963be3-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.612 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:24 compute-0 NetworkManager[48954]: <info>  [1769442864.6126] manager: (tap52963be3-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Jan 26 15:54:24 compute-0 kernel: tap52963be3-f0: entered promiscuous mode
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.614 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.619 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap52963be3-f0, col_values=(('external_ids', {'iface-id': '7b5f5084-5d75-40fc-8b68-f8150865de7f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:24 compute-0 ovn_controller[146046]: 2026-01-26T15:54:24Z|00183|binding|INFO|Releasing lport 7b5f5084-5d75-40fc-8b68-f8150865de7f from this chassis (sb_readonly=0)
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.625 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/52963be3-f242-4f3f-83c5-61cee797b0b4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/52963be3-f242-4f3f-83c5-61cee797b0b4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.626 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[509eb19a-fa37-43a8-a77d-c063fc281ddf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.627 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-52963be3-f242-4f3f-83c5-61cee797b0b4
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/52963be3-f242-4f3f-83c5-61cee797b0b4.pid.haproxy
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 52963be3-f242-4f3f-83c5-61cee797b0b4
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:54:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:24.627 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4', 'env', 'PROCESS_TAG=haproxy-52963be3-f242-4f3f-83c5-61cee797b0b4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/52963be3-f242-4f3f-83c5-61cee797b0b4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.637 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.693 239969 DEBUG nova.compute.manager [req-7eed7566-6237-4487-8204-5a268eebb79f req-a16f6a6f-a15b-4cbb-8ebb-b02084811507 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received event network-vif-plugged-5a7726a4-3201-49ec-ab56-ca4039acde5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.694 239969 DEBUG oslo_concurrency.lockutils [req-7eed7566-6237-4487-8204-5a268eebb79f req-a16f6a6f-a15b-4cbb-8ebb-b02084811507 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.694 239969 DEBUG oslo_concurrency.lockutils [req-7eed7566-6237-4487-8204-5a268eebb79f req-a16f6a6f-a15b-4cbb-8ebb-b02084811507 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.695 239969 DEBUG oslo_concurrency.lockutils [req-7eed7566-6237-4487-8204-5a268eebb79f req-a16f6a6f-a15b-4cbb-8ebb-b02084811507 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.695 239969 DEBUG nova.compute.manager [req-7eed7566-6237-4487-8204-5a268eebb79f req-a16f6a6f-a15b-4cbb-8ebb-b02084811507 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Processing event network-vif-plugged-5a7726a4-3201-49ec-ab56-ca4039acde5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.765 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442864.7651713, 3c352fa1-e91d-49ab-a44a-875ec50bb9c3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.766 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] VM Started (Lifecycle Event)
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.768 239969 DEBUG nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.774 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.778 239969 INFO nova.virt.libvirt.driver [-] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Instance spawned successfully.
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.778 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.800 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.806 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.812 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.812 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.813 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.814 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.814 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.815 239969 DEBUG nova.virt.libvirt.driver [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.843 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.844 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442864.7653043, 3c352fa1-e91d-49ab-a44a-875ec50bb9c3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.844 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] VM Paused (Lifecycle Event)
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.872 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.876 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442864.7727258, 3c352fa1-e91d-49ab-a44a-875ec50bb9c3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.877 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] VM Resumed (Lifecycle Event)
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.881 239969 INFO nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Took 7.32 seconds to spawn the instance on the hypervisor.
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.882 239969 DEBUG nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.907 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.913 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:54:24 compute-0 ceph-mon[75140]: pgmap v1214: 305 pgs: 305 active+clean; 134 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.945 239969 INFO nova.compute.manager [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Took 8.76 seconds to build instance.
Jan 26 15:54:24 compute-0 nova_compute[239965]: 2026-01-26 15:54:24.964 239969 DEBUG oslo_concurrency.lockutils [None req-fe23da77-57e4-49b8-8b22-20257163635d 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.891s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:25 compute-0 podman[271361]: 2026-01-26 15:54:25.024086536 +0000 UTC m=+0.053432449 container create 6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:54:25 compute-0 nova_compute[239965]: 2026-01-26 15:54:25.045 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:25 compute-0 systemd[1]: Started libpod-conmon-6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151.scope.
Jan 26 15:54:25 compute-0 podman[271361]: 2026-01-26 15:54:24.994374184 +0000 UTC m=+0.023720097 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:54:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b875288cfc9ed9d7d7863aa0954d2e6b979cd34354cdea905d6b4369155f2be0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:25 compute-0 podman[271361]: 2026-01-26 15:54:25.149246205 +0000 UTC m=+0.178592148 container init 6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 15:54:25 compute-0 podman[271361]: 2026-01-26 15:54:25.155709432 +0000 UTC m=+0.185055345 container start 6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:54:25 compute-0 podman[271373]: 2026-01-26 15:54:25.166424273 +0000 UTC m=+0.106406085 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 15:54:25 compute-0 neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4[271395]: [NOTICE]   (271420) : New worker (271425) forked
Jan 26 15:54:25 compute-0 neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4[271395]: [NOTICE]   (271420) : Loading success.
Jan 26 15:54:25 compute-0 podman[271374]: 2026-01-26 15:54:25.201290549 +0000 UTC m=+0.141518398 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 15:54:25 compute-0 nova_compute[239965]: 2026-01-26 15:54:25.526 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 137 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 184 op/s
Jan 26 15:54:26 compute-0 nova_compute[239965]: 2026-01-26 15:54:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:26 compute-0 nova_compute[239965]: 2026-01-26 15:54:26.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:26 compute-0 nova_compute[239965]: 2026-01-26 15:54:26.794 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:26 compute-0 nova_compute[239965]: 2026-01-26 15:54:26.812 239969 DEBUG nova.compute.manager [req-beea8976-3425-421b-8f63-7f5954c9a32d req-690aeff1-151f-4e02-9c29-16c73f99df80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received event network-vif-plugged-5a7726a4-3201-49ec-ab56-ca4039acde5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:26 compute-0 nova_compute[239965]: 2026-01-26 15:54:26.812 239969 DEBUG oslo_concurrency.lockutils [req-beea8976-3425-421b-8f63-7f5954c9a32d req-690aeff1-151f-4e02-9c29-16c73f99df80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:26 compute-0 nova_compute[239965]: 2026-01-26 15:54:26.813 239969 DEBUG oslo_concurrency.lockutils [req-beea8976-3425-421b-8f63-7f5954c9a32d req-690aeff1-151f-4e02-9c29-16c73f99df80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:26 compute-0 nova_compute[239965]: 2026-01-26 15:54:26.813 239969 DEBUG oslo_concurrency.lockutils [req-beea8976-3425-421b-8f63-7f5954c9a32d req-690aeff1-151f-4e02-9c29-16c73f99df80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:26 compute-0 nova_compute[239965]: 2026-01-26 15:54:26.813 239969 DEBUG nova.compute.manager [req-beea8976-3425-421b-8f63-7f5954c9a32d req-690aeff1-151f-4e02-9c29-16c73f99df80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] No waiting events found dispatching network-vif-plugged-5a7726a4-3201-49ec-ab56-ca4039acde5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:26 compute-0 nova_compute[239965]: 2026-01-26 15:54:26.814 239969 WARNING nova.compute.manager [req-beea8976-3425-421b-8f63-7f5954c9a32d req-690aeff1-151f-4e02-9c29-16c73f99df80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received unexpected event network-vif-plugged-5a7726a4-3201-49ec-ab56-ca4039acde5a for instance with vm_state active and task_state None.
Jan 26 15:54:26 compute-0 ceph-mon[75140]: pgmap v1215: 305 pgs: 305 active+clean; 137 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.1 MiB/s wr, 184 op/s
Jan 26 15:54:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:27.164 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:27 compute-0 nova_compute[239965]: 2026-01-26 15:54:27.164 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:27.165 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:54:27 compute-0 ovn_controller[146046]: 2026-01-26T15:54:27Z|00184|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:54:27 compute-0 ovn_controller[146046]: 2026-01-26T15:54:27Z|00185|binding|INFO|Releasing lport 7b5f5084-5d75-40fc-8b68-f8150865de7f from this chassis (sb_readonly=0)
Jan 26 15:54:27 compute-0 nova_compute[239965]: 2026-01-26 15:54:27.463 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 255 op/s
Jan 26 15:54:28 compute-0 nova_compute[239965]: 2026-01-26 15:54:28.244 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:54:28
Jan 26 15:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'images', 'default.rgw.control']
Jan 26 15:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:54:28 compute-0 ceph-mon[75140]: pgmap v1216: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 255 op/s
Jan 26 15:54:28 compute-0 nova_compute[239965]: 2026-01-26 15:54:28.980 239969 DEBUG nova.compute.manager [req-4d3fea85-d2e2-410f-a473-07138187a695 req-1d27443d-1969-4c1e-9e13-527c3ab98edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received event network-changed-5a7726a4-3201-49ec-ab56-ca4039acde5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:28 compute-0 nova_compute[239965]: 2026-01-26 15:54:28.980 239969 DEBUG nova.compute.manager [req-4d3fea85-d2e2-410f-a473-07138187a695 req-1d27443d-1969-4c1e-9e13-527c3ab98edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Refreshing instance network info cache due to event network-changed-5a7726a4-3201-49ec-ab56-ca4039acde5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:54:28 compute-0 nova_compute[239965]: 2026-01-26 15:54:28.981 239969 DEBUG oslo_concurrency.lockutils [req-4d3fea85-d2e2-410f-a473-07138187a695 req-1d27443d-1969-4c1e-9e13-527c3ab98edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3c352fa1-e91d-49ab-a44a-875ec50bb9c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:28 compute-0 nova_compute[239965]: 2026-01-26 15:54:28.981 239969 DEBUG oslo_concurrency.lockutils [req-4d3fea85-d2e2-410f-a473-07138187a695 req-1d27443d-1969-4c1e-9e13-527c3ab98edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3c352fa1-e91d-49ab-a44a-875ec50bb9c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:28 compute-0 nova_compute[239965]: 2026-01-26 15:54:28.981 239969 DEBUG nova.network.neutron [req-4d3fea85-d2e2-410f-a473-07138187a695 req-1d27443d-1969-4c1e-9e13-527c3ab98edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Refreshing network info cache for port 5a7726a4-3201-49ec-ab56-ca4039acde5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:54:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:29 compute-0 nova_compute[239965]: 2026-01-26 15:54:29.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:29 compute-0 nova_compute[239965]: 2026-01-26 15:54:29.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:54:29 compute-0 nova_compute[239965]: 2026-01-26 15:54:29.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:54:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 248 op/s
Jan 26 15:54:29 compute-0 nova_compute[239965]: 2026-01-26 15:54:29.907 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:29 compute-0 nova_compute[239965]: 2026-01-26 15:54:29.908 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:29 compute-0 nova_compute[239965]: 2026-01-26 15:54:29.908 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 15:54:29 compute-0 nova_compute[239965]: 2026-01-26 15:54:29.909 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2cf7c37a-5399-4471-8385-7f483007196b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:30 compute-0 nova_compute[239965]: 2026-01-26 15:54:30.047 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:54:30 compute-0 ceph-mon[75140]: pgmap v1217: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 248 op/s
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:54:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:54:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 258 op/s
Jan 26 15:54:32 compute-0 nova_compute[239965]: 2026-01-26 15:54:32.682 239969 DEBUG nova.network.neutron [req-4d3fea85-d2e2-410f-a473-07138187a695 req-1d27443d-1969-4c1e-9e13-527c3ab98edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Updated VIF entry in instance network info cache for port 5a7726a4-3201-49ec-ab56-ca4039acde5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:54:32 compute-0 nova_compute[239965]: 2026-01-26 15:54:32.683 239969 DEBUG nova.network.neutron [req-4d3fea85-d2e2-410f-a473-07138187a695 req-1d27443d-1969-4c1e-9e13-527c3ab98edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Updating instance_info_cache with network_info: [{"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:32 compute-0 nova_compute[239965]: 2026-01-26 15:54:32.723 239969 DEBUG oslo_concurrency.lockutils [req-4d3fea85-d2e2-410f-a473-07138187a695 req-1d27443d-1969-4c1e-9e13-527c3ab98edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3c352fa1-e91d-49ab-a44a-875ec50bb9c3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:32 compute-0 ceph-mon[75140]: pgmap v1218: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 258 op/s
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.306 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updating instance_info_cache with network_info: [{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.339 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.339 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.339 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.339 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.362 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.362 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.363 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.363 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.363 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:33 compute-0 nova_compute[239965]: 2026-01-26 15:54:33.419 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.4 MiB/s wr, 221 op/s
Jan 26 15:54:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:54:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990638536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.012 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.096 239969 DEBUG oslo_concurrency.lockutils [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-2cf7c37a-5399-4471-8385-7f483007196b-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.097 239969 DEBUG oslo_concurrency.lockutils [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-2cf7c37a-5399-4471-8385-7f483007196b-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.098 239969 DEBUG nova.objects.instance [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid 2cf7c37a-5399-4471-8385-7f483007196b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.109 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.110 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000001f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.114 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.114 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.292 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.294 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3904MB free_disk=59.921606585383415GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.294 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.294 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.464 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442859.463648, 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.465 239969 INFO nova.compute.manager [-] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] VM Stopped (Lifecycle Event)
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.488 239969 DEBUG nova.compute.manager [None req-fa67ce6e-6d92-4130-bcf3-5b38615ead89 - - - - - -] [instance: 2a8ff768-0ab1-46cb-983c-9109c4a2c1dd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.498 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 2cf7c37a-5399-4471-8385-7f483007196b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.499 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 3c352fa1-e91d-49ab-a44a-875ec50bb9c3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.499 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.499 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.631 239969 DEBUG nova.objects.instance [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_requests' on Instance uuid 2cf7c37a-5399-4471-8385-7f483007196b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.646 239969 DEBUG nova.network.neutron [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.673 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:34 compute-0 ceph-mon[75140]: pgmap v1219: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.4 MiB/s wr, 221 op/s
Jan 26 15:54:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3990638536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:34 compute-0 nova_compute[239965]: 2026-01-26 15:54:34.829 239969 DEBUG nova.policy [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:54:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266312493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.245 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.249 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.283 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.308 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.308 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.309 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.491 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.491 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.492 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.492 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.495 239969 DEBUG nova.network.neutron [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Successfully created port: 4f4b7573-e9e2-42e7-8820-e563371095b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:54:35 compute-0 nova_compute[239965]: 2026-01-26 15:54:35.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 15:54:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 26 15:54:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1266312493' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:36 compute-0 nova_compute[239965]: 2026-01-26 15:54:36.187 239969 DEBUG nova.network.neutron [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Successfully updated port: 4f4b7573-e9e2-42e7-8820-e563371095b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:54:36 compute-0 nova_compute[239965]: 2026-01-26 15:54:36.205 239969 DEBUG oslo_concurrency.lockutils [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:36 compute-0 nova_compute[239965]: 2026-01-26 15:54:36.205 239969 DEBUG oslo_concurrency.lockutils [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:36 compute-0 nova_compute[239965]: 2026-01-26 15:54:36.206 239969 DEBUG nova.network.neutron [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:54:36 compute-0 nova_compute[239965]: 2026-01-26 15:54:36.283 239969 DEBUG nova.compute.manager [req-c47e78ae-dec5-4db7-9da0-7dfc9f27a187 req-bbaf98f9-cd19-4ee5-a43e-e775b620964d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received event network-changed-4f4b7573-e9e2-42e7-8820-e563371095b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:36 compute-0 nova_compute[239965]: 2026-01-26 15:54:36.284 239969 DEBUG nova.compute.manager [req-c47e78ae-dec5-4db7-9da0-7dfc9f27a187 req-bbaf98f9-cd19-4ee5-a43e-e775b620964d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Refreshing instance network info cache due to event network-changed-4f4b7573-e9e2-42e7-8820-e563371095b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:54:36 compute-0 nova_compute[239965]: 2026-01-26 15:54:36.284 239969 DEBUG oslo_concurrency.lockutils [req-c47e78ae-dec5-4db7-9da0-7dfc9f27a187 req-bbaf98f9-cd19-4ee5-a43e-e775b620964d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:36 compute-0 nova_compute[239965]: 2026-01-26 15:54:36.364 239969 WARNING nova.network.neutron [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] 91bcac0a-1926-4861-88ab-ae3c06f7e57e already exists in list: networks containing: ['91bcac0a-1926-4861-88ab-ae3c06f7e57e']. ignoring it
Jan 26 15:54:36 compute-0 nova_compute[239965]: 2026-01-26 15:54:36.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:37 compute-0 ceph-mon[75140]: pgmap v1220: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 139 op/s
Jan 26 15:54:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:37.167 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 26 15:54:38 compute-0 ovn_controller[146046]: 2026-01-26T15:54:38Z|00186|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:54:38 compute-0 ovn_controller[146046]: 2026-01-26T15:54:38Z|00187|binding|INFO|Releasing lport 7b5f5084-5d75-40fc-8b68-f8150865de7f from this chassis (sb_readonly=0)
Jan 26 15:54:38 compute-0 ceph-mon[75140]: pgmap v1221: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.168 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.249 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.371 239969 DEBUG nova.network.neutron [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updating instance_info_cache with network_info: [{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.552 239969 DEBUG oslo_concurrency.lockutils [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.553 239969 DEBUG oslo_concurrency.lockutils [req-c47e78ae-dec5-4db7-9da0-7dfc9f27a187 req-bbaf98f9-cd19-4ee5-a43e-e775b620964d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.553 239969 DEBUG nova.network.neutron [req-c47e78ae-dec5-4db7-9da0-7dfc9f27a187 req-bbaf98f9-cd19-4ee5-a43e-e775b620964d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Refreshing network info cache for port 4f4b7573-e9e2-42e7-8820-e563371095b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.556 239969 DEBUG nova.virt.libvirt.vif [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-191874167',display_name='tempest-AttachInterfacesTestJSON-server-191874167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-191874167',id=31,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhtLtOel3pVoc9EYAkv14l7u/jTPTSUNpqzLh+IC+V3az/87mCK8InL3MxARo8S+BZI5v3zTF4Oa/Tp2iKSYMaK5pJEulAdCvLOStylM1u70TluwDQyNsqUX7wJdz2LIw==',key_name='tempest-keypair-1871598546',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-con5e9j1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:54:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=2cf7c37a-5399-4471-8385-7f483007196b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.556 239969 DEBUG nova.network.os_vif_util [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.557 239969 DEBUG nova.network.os_vif_util [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.557 239969 DEBUG os_vif [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.558 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.558 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.559 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.562 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.562 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f4b7573-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.563 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f4b7573-e9, col_values=(('external_ids', {'iface-id': '4f4b7573-e9e2-42e7-8820-e563371095b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:8b:3e', 'vm-uuid': '2cf7c37a-5399-4471-8385-7f483007196b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.564 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 NetworkManager[48954]: <info>  [1769442878.5658] manager: (tap4f4b7573-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.567 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.574 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.575 239969 INFO os_vif [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9')
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.576 239969 DEBUG nova.virt.libvirt.vif [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-191874167',display_name='tempest-AttachInterfacesTestJSON-server-191874167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-191874167',id=31,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhtLtOel3pVoc9EYAkv14l7u/jTPTSUNpqzLh+IC+V3az/87mCK8InL3MxARo8S+BZI5v3zTF4Oa/Tp2iKSYMaK5pJEulAdCvLOStylM1u70TluwDQyNsqUX7wJdz2LIw==',key_name='tempest-keypair-1871598546',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-con5e9j1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:54:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=2cf7c37a-5399-4471-8385-7f483007196b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.576 239969 DEBUG nova.network.os_vif_util [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.577 239969 DEBUG nova.network.os_vif_util [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.580 239969 DEBUG nova.virt.libvirt.guest [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] attach device xml: <interface type="ethernet">
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:cb:8b:3e"/>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <target dev="tap4f4b7573-e9"/>
Jan 26 15:54:38 compute-0 nova_compute[239965]: </interface>
Jan 26 15:54:38 compute-0 nova_compute[239965]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 15:54:38 compute-0 kernel: tap4f4b7573-e9: entered promiscuous mode
Jan 26 15:54:38 compute-0 NetworkManager[48954]: <info>  [1769442878.5936] manager: (tap4f4b7573-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Jan 26 15:54:38 compute-0 ovn_controller[146046]: 2026-01-26T15:54:38Z|00188|binding|INFO|Claiming lport 4f4b7573-e9e2-42e7-8820-e563371095b8 for this chassis.
Jan 26 15:54:38 compute-0 ovn_controller[146046]: 2026-01-26T15:54:38Z|00189|binding|INFO|4f4b7573-e9e2-42e7-8820-e563371095b8: Claiming fa:16:3e:cb:8b:3e 10.100.0.14
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.594 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.604 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:8b:3e 10.100.0.14'], port_security=['fa:16:3e:cb:8b:3e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2cf7c37a-5399-4471-8385-7f483007196b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4f4b7573-e9e2-42e7-8820-e563371095b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.605 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4f4b7573-e9e2-42e7-8820-e563371095b8 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.607 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:54:38 compute-0 ovn_controller[146046]: 2026-01-26T15:54:38Z|00190|binding|INFO|Setting lport 4f4b7573-e9e2-42e7-8820-e563371095b8 ovn-installed in OVS
Jan 26 15:54:38 compute-0 ovn_controller[146046]: 2026-01-26T15:54:38Z|00191|binding|INFO|Setting lport 4f4b7573-e9e2-42e7-8820-e563371095b8 up in Southbound
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.615 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.617 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.619 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a915dec6-581d-44df-a81d-b37b29a659e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:38 compute-0 systemd-udevd[271487]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:54:38 compute-0 NetworkManager[48954]: <info>  [1769442878.6461] device (tap4f4b7573-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:54:38 compute-0 NetworkManager[48954]: <info>  [1769442878.6471] device (tap4f4b7573-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.652 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea7b28c-c2bc-4c63-ae26-8dad2fbcd4e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.656 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1ded12-9736-4f34-8f4f-f2235f2977c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.680 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[40e03a71-9e64-414d-92c5-09e04e1dca12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.700 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1063d1-2719-4827-88cd-1515a06ba0ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433723, 'reachable_time': 43512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271494, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.715 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5eb742f1-8281-45d6-b840-92464f7d45f3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433735, 'tstamp': 433735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271495, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433738, 'tstamp': 433738}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271495, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.718 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.719 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.721 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.721 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.721 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.722 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:38.722 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.736 239969 DEBUG nova.virt.libvirt.driver [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.737 239969 DEBUG nova.virt.libvirt.driver [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.737 239969 DEBUG nova.virt.libvirt.driver [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:d1:1c:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.737 239969 DEBUG nova.virt.libvirt.driver [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:cb:8b:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.762 239969 DEBUG nova.virt.libvirt.guest [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-191874167</nova:name>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:54:38</nova:creationTime>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:54:38 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     <nova:port uuid="cff19326-5cc9-485a-9f3e-eafabc790a6e">
Jan 26 15:54:38 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     <nova:port uuid="4f4b7573-e9e2-42e7-8820-e563371095b8">
Jan 26 15:54:38 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:54:38 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:38 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:54:38 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:54:38 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:54:38 compute-0 nova_compute[239965]: 2026-01-26 15:54:38.792 239969 DEBUG oslo_concurrency.lockutils [None req-04975f73-e117-456d-8f67-29c7e46c943b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-2cf7c37a-5399-4471-8385-7f483007196b-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.053 239969 DEBUG nova.compute.manager [req-969e8a1d-f343-4a74-86f0-eac4ae1b3079 req-3f40037c-a461-4b48-8ad4-c7b98135e7a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received event network-vif-plugged-4f4b7573-e9e2-42e7-8820-e563371095b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.053 239969 DEBUG oslo_concurrency.lockutils [req-969e8a1d-f343-4a74-86f0-eac4ae1b3079 req-3f40037c-a461-4b48-8ad4-c7b98135e7a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2cf7c37a-5399-4471-8385-7f483007196b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.053 239969 DEBUG oslo_concurrency.lockutils [req-969e8a1d-f343-4a74-86f0-eac4ae1b3079 req-3f40037c-a461-4b48-8ad4-c7b98135e7a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.054 239969 DEBUG oslo_concurrency.lockutils [req-969e8a1d-f343-4a74-86f0-eac4ae1b3079 req-3f40037c-a461-4b48-8ad4-c7b98135e7a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.054 239969 DEBUG nova.compute.manager [req-969e8a1d-f343-4a74-86f0-eac4ae1b3079 req-3f40037c-a461-4b48-8ad4-c7b98135e7a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] No waiting events found dispatching network-vif-plugged-4f4b7573-e9e2-42e7-8820-e563371095b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.054 239969 WARNING nova.compute.manager [req-969e8a1d-f343-4a74-86f0-eac4ae1b3079 req-3f40037c-a461-4b48-8ad4-c7b98135e7a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received unexpected event network-vif-plugged-4f4b7573-e9e2-42e7-8820-e563371095b8 for instance with vm_state active and task_state None.
Jan 26 15:54:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 13 KiB/s wr, 13 op/s
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.736 239969 DEBUG nova.network.neutron [req-c47e78ae-dec5-4db7-9da0-7dfc9f27a187 req-bbaf98f9-cd19-4ee5-a43e-e775b620964d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updated VIF entry in instance network info cache for port 4f4b7573-e9e2-42e7-8820-e563371095b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.737 239969 DEBUG nova.network.neutron [req-c47e78ae-dec5-4db7-9da0-7dfc9f27a187 req-bbaf98f9-cd19-4ee5-a43e-e775b620964d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updating instance_info_cache with network_info: [{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.755 239969 DEBUG oslo_concurrency.lockutils [req-c47e78ae-dec5-4db7-9da0-7dfc9f27a187 req-bbaf98f9-cd19-4ee5-a43e-e775b620964d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:39 compute-0 ovn_controller[146046]: 2026-01-26T15:54:39Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:63:a9 10.100.0.14
Jan 26 15:54:39 compute-0 ovn_controller[146046]: 2026-01-26T15:54:39Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:63:a9 10.100.0.14
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.941 239969 DEBUG oslo_concurrency.lockutils [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-2cf7c37a-5399-4471-8385-7f483007196b-4f4b7573-e9e2-42e7-8820-e563371095b8" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.941 239969 DEBUG oslo_concurrency.lockutils [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-2cf7c37a-5399-4471-8385-7f483007196b-4f4b7573-e9e2-42e7-8820-e563371095b8" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.956 239969 DEBUG nova.objects.instance [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid 2cf7c37a-5399-4471-8385-7f483007196b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.984 239969 DEBUG nova.virt.libvirt.vif [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-191874167',display_name='tempest-AttachInterfacesTestJSON-server-191874167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-191874167',id=31,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhtLtOel3pVoc9EYAkv14l7u/jTPTSUNpqzLh+IC+V3az/87mCK8InL3MxARo8S+BZI5v3zTF4Oa/Tp2iKSYMaK5pJEulAdCvLOStylM1u70TluwDQyNsqUX7wJdz2LIw==',key_name='tempest-keypair-1871598546',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-con5e9j1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:54:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=2cf7c37a-5399-4471-8385-7f483007196b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.984 239969 DEBUG nova.network.os_vif_util [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.985 239969 DEBUG nova.network.os_vif_util [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.988 239969 DEBUG nova.virt.libvirt.guest [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.991 239969 DEBUG nova.virt.libvirt.guest [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.993 239969 DEBUG nova.virt.libvirt.driver [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Attempting to detach device tap4f4b7573-e9 from instance 2cf7c37a-5399-4471-8385-7f483007196b from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 26 15:54:39 compute-0 nova_compute[239965]: 2026-01-26 15:54:39.993 239969 DEBUG nova.virt.libvirt.guest [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] detach device xml: <interface type="ethernet">
Jan 26 15:54:39 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:cb:8b:3e"/>
Jan 26 15:54:39 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:54:39 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:54:39 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:54:39 compute-0 nova_compute[239965]:   <target dev="tap4f4b7573-e9"/>
Jan 26 15:54:39 compute-0 nova_compute[239965]: </interface>
Jan 26 15:54:39 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.106 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.115 239969 DEBUG nova.virt.libvirt.guest [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.121 239969 DEBUG nova.virt.libvirt.guest [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface>not found in domain: <domain type='kvm' id='35'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <name>instance-0000001f</name>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <uuid>2cf7c37a-5399-4471-8385-7f483007196b</uuid>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-191874167</nova:name>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:54:38</nova:creationTime>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:port uuid="cff19326-5cc9-485a-9f3e-eafabc790a6e">
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:port uuid="4f4b7573-e9e2-42e7-8820-e563371095b8">
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:54:40 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <system>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='serial'>2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='uuid'>2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </system>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <os>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </os>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <features>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </features>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/2cf7c37a-5399-4471-8385-7f483007196b_disk' index='2'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/2cf7c37a-5399-4471-8385-7f483007196b_disk.config' index='1'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:d1:1c:47'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target dev='tapcff19326-5c'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:cb:8b:3e'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target dev='tap4f4b7573-e9'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='net1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log' append='off'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </target>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log' append='off'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </console>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <video>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </video>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c400,c757</label>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c400,c757</imagelabel>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:54:40 compute-0 nova_compute[239965]: </domain>
Jan 26 15:54:40 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.122 239969 INFO nova.virt.libvirt.driver [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully detached device tap4f4b7573-e9 from instance 2cf7c37a-5399-4471-8385-7f483007196b from the persistent domain config.
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.123 239969 DEBUG nova.virt.libvirt.driver [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] (1/8): Attempting to detach device tap4f4b7573-e9 with device alias net1 from instance 2cf7c37a-5399-4471-8385-7f483007196b from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.123 239969 DEBUG nova.virt.libvirt.guest [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] detach device xml: <interface type="ethernet">
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:cb:8b:3e"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <target dev="tap4f4b7573-e9"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]: </interface>
Jan 26 15:54:40 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 15:54:40 compute-0 kernel: tap4f4b7573-e9 (unregistering): left promiscuous mode
Jan 26 15:54:40 compute-0 NetworkManager[48954]: <info>  [1769442880.2224] device (tap4f4b7573-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.230 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:40 compute-0 ovn_controller[146046]: 2026-01-26T15:54:40Z|00192|binding|INFO|Releasing lport 4f4b7573-e9e2-42e7-8820-e563371095b8 from this chassis (sb_readonly=0)
Jan 26 15:54:40 compute-0 ovn_controller[146046]: 2026-01-26T15:54:40Z|00193|binding|INFO|Setting lport 4f4b7573-e9e2-42e7-8820-e563371095b8 down in Southbound
Jan 26 15:54:40 compute-0 ovn_controller[146046]: 2026-01-26T15:54:40Z|00194|binding|INFO|Removing iface tap4f4b7573-e9 ovn-installed in OVS
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.232 239969 DEBUG nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Received event <DeviceRemovedEvent: 1769442880.2318208, 2cf7c37a-5399-4471-8385-7f483007196b => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.234 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.235 239969 DEBUG nova.virt.libvirt.driver [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Start waiting for the detach event from libvirt for device tap4f4b7573-e9 with device alias net1 for instance 2cf7c37a-5399-4471-8385-7f483007196b _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.235 239969 DEBUG nova.virt.libvirt.guest [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.239 239969 DEBUG nova.virt.libvirt.guest [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface>not found in domain: <domain type='kvm' id='35'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <name>instance-0000001f</name>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <uuid>2cf7c37a-5399-4471-8385-7f483007196b</uuid>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-191874167</nova:name>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:54:38</nova:creationTime>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:port uuid="cff19326-5cc9-485a-9f3e-eafabc790a6e">
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:port uuid="4f4b7573-e9e2-42e7-8820-e563371095b8">
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:54:40 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <system>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='serial'>2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='uuid'>2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </system>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <os>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </os>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <features>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </features>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/2cf7c37a-5399-4471-8385-7f483007196b_disk' index='2'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/2cf7c37a-5399-4471-8385-7f483007196b_disk.config' index='1'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:d1:1c:47'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target dev='tapcff19326-5c'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log' append='off'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       </target>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log' append='off'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </console>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <video>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </video>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c400,c757</label>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c400,c757</imagelabel>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:54:40 compute-0 nova_compute[239965]: </domain>
Jan 26 15:54:40 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.239 239969 INFO nova.virt.libvirt.driver [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully detached device tap4f4b7573-e9 from instance 2cf7c37a-5399-4471-8385-7f483007196b from the live domain config.
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.240 239969 DEBUG nova.virt.libvirt.vif [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-191874167',display_name='tempest-AttachInterfacesTestJSON-server-191874167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-191874167',id=31,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhtLtOel3pVoc9EYAkv14l7u/jTPTSUNpqzLh+IC+V3az/87mCK8InL3MxARo8S+BZI5v3zTF4Oa/Tp2iKSYMaK5pJEulAdCvLOStylM1u70TluwDQyNsqUX7wJdz2LIw==',key_name='tempest-keypair-1871598546',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-con5e9j1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:54:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=2cf7c37a-5399-4471-8385-7f483007196b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.240 239969 DEBUG nova.network.os_vif_util [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.241 239969 DEBUG nova.network.os_vif_util [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.241 239969 DEBUG os_vif [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.242 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:8b:3e 10.100.0.14'], port_security=['fa:16:3e:cb:8b:3e 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '2cf7c37a-5399-4471-8385-7f483007196b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4f4b7573-e9e2-42e7-8820-e563371095b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.242 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.243 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f4b7573-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.243 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4f4b7573-e9e2-42e7-8820-e563371095b8 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.244 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.244 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.249 239969 INFO os_vif [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9')
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.250 239969 DEBUG nova.virt.libvirt.guest [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-191874167</nova:name>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:54:40</nova:creationTime>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     <nova:port uuid="cff19326-5cc9-485a-9f3e-eafabc790a6e">
Jan 26 15:54:40 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:54:40 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:40 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:54:40 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:54:40 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.259 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[43ce885c-95d9-4f50-a27c-07300baf1cdc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.286 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2a67274d-22ea-48e1-b839-ffb626733b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.291 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7d253bcb-0efc-4fa1-81f4-89bb98aa9fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.325 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbaf5df-f2a8-4298-ad38-9b2e7e626814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.347 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[74513a3f-f90b-4641-9af8-10ad9587a3d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 57], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433723, 'reachable_time': 43512, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271505, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.367 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2e11831a-29fc-4cbb-b49b-27cc4ad2010e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433735, 'tstamp': 433735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271506, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 433738, 'tstamp': 433738}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271506, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.369 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.372 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.373 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.373 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.373 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:40.374 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:40 compute-0 ceph-mon[75140]: pgmap v1222: 305 pgs: 305 active+clean; 167 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 336 KiB/s rd, 13 KiB/s wr, 13 op/s
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.915 239969 DEBUG oslo_concurrency.lockutils [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.916 239969 DEBUG oslo_concurrency.lockutils [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:40 compute-0 nova_compute[239965]: 2026-01-26 15:54:40.916 239969 DEBUG nova.network.neutron [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.012 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.054 239969 DEBUG nova.compute.manager [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received event network-vif-deleted-4f4b7573-e9e2-42e7-8820-e563371095b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.054 239969 INFO nova.compute.manager [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Neutron deleted interface 4f4b7573-e9e2-42e7-8820-e563371095b8; detaching it from the instance and deleting it from the info cache
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.055 239969 DEBUG nova.network.neutron [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updating instance_info_cache with network_info: [{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.080 239969 DEBUG nova.objects.instance [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'system_metadata' on Instance uuid 2cf7c37a-5399-4471-8385-7f483007196b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.123 239969 DEBUG nova.objects.instance [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'flavor' on Instance uuid 2cf7c37a-5399-4471-8385-7f483007196b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.160 239969 DEBUG nova.virt.libvirt.vif [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-191874167',display_name='tempest-AttachInterfacesTestJSON-server-191874167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-191874167',id=31,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhtLtOel3pVoc9EYAkv14l7u/jTPTSUNpqzLh+IC+V3az/87mCK8InL3MxARo8S+BZI5v3zTF4Oa/Tp2iKSYMaK5pJEulAdCvLOStylM1u70TluwDQyNsqUX7wJdz2LIw==',key_name='tempest-keypair-1871598546',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-con5e9j1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:54:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=2cf7c37a-5399-4471-8385-7f483007196b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.160 239969 DEBUG nova.network.os_vif_util [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.161 239969 DEBUG nova.network.os_vif_util [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.163 239969 DEBUG nova.virt.libvirt.guest [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.166 239969 DEBUG nova.virt.libvirt.guest [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface>not found in domain: <domain type='kvm' id='35'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <name>instance-0000001f</name>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <uuid>2cf7c37a-5399-4471-8385-7f483007196b</uuid>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-191874167</nova:name>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:54:40</nova:creationTime>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:port uuid="cff19326-5cc9-485a-9f3e-eafabc790a6e">
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:54:41 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <system>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='serial'>2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='uuid'>2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </system>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <os>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </os>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <features>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </features>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/2cf7c37a-5399-4471-8385-7f483007196b_disk' index='2'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/2cf7c37a-5399-4471-8385-7f483007196b_disk.config' index='1'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:d1:1c:47'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target dev='tapcff19326-5c'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log' append='off'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </target>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log' append='off'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </console>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <video>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </video>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c400,c757</label>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c400,c757</imagelabel>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:54:41 compute-0 nova_compute[239965]: </domain>
Jan 26 15:54:41 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.167 239969 DEBUG nova.virt.libvirt.guest [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.170 239969 DEBUG nova.virt.libvirt.guest [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:cb:8b:3e"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap4f4b7573-e9"/></interface>not found in domain: <domain type='kvm' id='35'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <name>instance-0000001f</name>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <uuid>2cf7c37a-5399-4471-8385-7f483007196b</uuid>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-191874167</nova:name>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:54:40</nova:creationTime>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:port uuid="cff19326-5cc9-485a-9f3e-eafabc790a6e">
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:54:41 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <system>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='serial'>2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='uuid'>2cf7c37a-5399-4471-8385-7f483007196b</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </system>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <os>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </os>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <features>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </features>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/2cf7c37a-5399-4471-8385-7f483007196b_disk' index='2'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/2cf7c37a-5399-4471-8385-7f483007196b_disk.config' index='1'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:d1:1c:47'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target dev='tapcff19326-5c'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log' append='off'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       </target>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b/console.log' append='off'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </console>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </input>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <video>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </video>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c400,c757</label>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c400,c757</imagelabel>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:54:41 compute-0 nova_compute[239965]: </domain>
Jan 26 15:54:41 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.171 239969 WARNING nova.virt.libvirt.driver [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Detaching interface fa:16:3e:cb:8b:3e failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap4f4b7573-e9' not found.
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.171 239969 DEBUG nova.virt.libvirt.vif [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-191874167',display_name='tempest-AttachInterfacesTestJSON-server-191874167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-191874167',id=31,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhtLtOel3pVoc9EYAkv14l7u/jTPTSUNpqzLh+IC+V3az/87mCK8InL3MxARo8S+BZI5v3zTF4Oa/Tp2iKSYMaK5pJEulAdCvLOStylM1u70TluwDQyNsqUX7wJdz2LIw==',key_name='tempest-keypair-1871598546',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-con5e9j1',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:54:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=2cf7c37a-5399-4471-8385-7f483007196b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.172 239969 DEBUG nova.network.os_vif_util [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "4f4b7573-e9e2-42e7-8820-e563371095b8", "address": "fa:16:3e:cb:8b:3e", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f4b7573-e9", "ovs_interfaceid": "4f4b7573-e9e2-42e7-8820-e563371095b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.172 239969 DEBUG nova.network.os_vif_util [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.173 239969 DEBUG os_vif [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.176 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f4b7573-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.176 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.178 239969 INFO os_vif [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:8b:3e,bridge_name='br-int',has_traffic_filtering=True,id=4f4b7573-e9e2-42e7-8820-e563371095b8,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f4b7573-e9')
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.179 239969 DEBUG nova.virt.libvirt.guest [req-fcceb831-71b5-425f-be41-63bcdb2e7570 req-65df54c0-40be-400b-8290-89fca020511b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-191874167</nova:name>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:54:41</nova:creationTime>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     <nova:port uuid="cff19326-5cc9-485a-9f3e-eafabc790a6e">
Jan 26 15:54:41 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:54:41 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:54:41 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:54:41 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:54:41 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.205 239969 DEBUG nova.compute.manager [req-2c3557e0-c739-4662-b1c0-28354cb08d9e req-15e26147-ef12-4f57-b052-a24331dabb84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received event network-vif-plugged-4f4b7573-e9e2-42e7-8820-e563371095b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.206 239969 DEBUG oslo_concurrency.lockutils [req-2c3557e0-c739-4662-b1c0-28354cb08d9e req-15e26147-ef12-4f57-b052-a24331dabb84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2cf7c37a-5399-4471-8385-7f483007196b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.206 239969 DEBUG oslo_concurrency.lockutils [req-2c3557e0-c739-4662-b1c0-28354cb08d9e req-15e26147-ef12-4f57-b052-a24331dabb84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.206 239969 DEBUG oslo_concurrency.lockutils [req-2c3557e0-c739-4662-b1c0-28354cb08d9e req-15e26147-ef12-4f57-b052-a24331dabb84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.206 239969 DEBUG nova.compute.manager [req-2c3557e0-c739-4662-b1c0-28354cb08d9e req-15e26147-ef12-4f57-b052-a24331dabb84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] No waiting events found dispatching network-vif-plugged-4f4b7573-e9e2-42e7-8820-e563371095b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:41 compute-0 nova_compute[239965]: 2026-01-26 15:54:41.206 239969 WARNING nova.compute.manager [req-2c3557e0-c739-4662-b1c0-28354cb08d9e req-15e26147-ef12-4f57-b052-a24331dabb84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received unexpected event network-vif-plugged-4f4b7573-e9e2-42e7-8820-e563371095b8 for instance with vm_state active and task_state None.
Jan 26 15:54:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 181 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 497 KiB/s rd, 852 KiB/s wr, 46 op/s
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.106 239969 DEBUG oslo_concurrency.lockutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "2cf7c37a-5399-4471-8385-7f483007196b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.107 239969 DEBUG oslo_concurrency.lockutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.107 239969 DEBUG oslo_concurrency.lockutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "2cf7c37a-5399-4471-8385-7f483007196b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.107 239969 DEBUG oslo_concurrency.lockutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.107 239969 DEBUG oslo_concurrency.lockutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.108 239969 INFO nova.compute.manager [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Terminating instance
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.109 239969 DEBUG nova.compute.manager [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:54:42 compute-0 kernel: tapcff19326-5c (unregistering): left promiscuous mode
Jan 26 15:54:42 compute-0 NetworkManager[48954]: <info>  [1769442882.1618] device (tapcff19326-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:54:42 compute-0 ovn_controller[146046]: 2026-01-26T15:54:42Z|00195|binding|INFO|Releasing lport cff19326-5cc9-485a-9f3e-eafabc790a6e from this chassis (sb_readonly=0)
Jan 26 15:54:42 compute-0 ovn_controller[146046]: 2026-01-26T15:54:42Z|00196|binding|INFO|Setting lport cff19326-5cc9-485a-9f3e-eafabc790a6e down in Southbound
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.167 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:42 compute-0 ovn_controller[146046]: 2026-01-26T15:54:42Z|00197|binding|INFO|Removing iface tapcff19326-5c ovn-installed in OVS
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.170 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.176 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:1c:47 10.100.0.12'], port_security=['fa:16:3e:d1:1c:47 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2cf7c37a-5399-4471-8385-7f483007196b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be2f1de3-40ec-4511-99bb-ae602e3ecb84', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=cff19326-5cc9-485a-9f3e-eafabc790a6e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.177 156105 INFO neutron.agent.ovn.metadata.agent [-] Port cff19326-5cc9-485a-9f3e-eafabc790a6e in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.179 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.180 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69d50282-d1ab-4726-ba39-6b6d804ec800]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.180 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e namespace which is not needed anymore
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.184 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:42 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Jan 26 15:54:42 compute-0 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 14.726s CPU time.
Jan 26 15:54:42 compute-0 systemd-machined[208061]: Machine qemu-35-instance-0000001f terminated.
Jan 26 15:54:42 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[270203]: [NOTICE]   (270221) : haproxy version is 2.8.14-c23fe91
Jan 26 15:54:42 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[270203]: [NOTICE]   (270221) : path to executable is /usr/sbin/haproxy
Jan 26 15:54:42 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[270203]: [WARNING]  (270221) : Exiting Master process...
Jan 26 15:54:42 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[270203]: [ALERT]    (270221) : Current worker (270223) exited with code 143 (Terminated)
Jan 26 15:54:42 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[270203]: [WARNING]  (270221) : All workers exited. Exiting... (0)
Jan 26 15:54:42 compute-0 systemd[1]: libpod-861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660.scope: Deactivated successfully.
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.339 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:42 compute-0 podman[271530]: 2026-01-26 15:54:42.34801989 +0000 UTC m=+0.063619547 container died 861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.349 239969 INFO nova.virt.libvirt.driver [-] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Instance destroyed successfully.
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.349 239969 DEBUG nova.objects.instance [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'resources' on Instance uuid 2cf7c37a-5399-4471-8385-7f483007196b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.371 239969 DEBUG nova.virt.libvirt.vif [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:53:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-191874167',display_name='tempest-AttachInterfacesTestJSON-server-191874167',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-191874167',id=31,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhtLtOel3pVoc9EYAkv14l7u/jTPTSUNpqzLh+IC+V3az/87mCK8InL3MxARo8S+BZI5v3zTF4Oa/Tp2iKSYMaK5pJEulAdCvLOStylM1u70TluwDQyNsqUX7wJdz2LIw==',key_name='tempest-keypair-1871598546',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-con5e9j1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:54:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=2cf7c37a-5399-4471-8385-7f483007196b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.371 239969 DEBUG nova.network.os_vif_util [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.372 239969 DEBUG nova.network.os_vif_util [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:1c:47,bridge_name='br-int',has_traffic_filtering=True,id=cff19326-5cc9-485a-9f3e-eafabc790a6e,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcff19326-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.372 239969 DEBUG os_vif [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:1c:47,bridge_name='br-int',has_traffic_filtering=True,id=cff19326-5cc9-485a-9f3e-eafabc790a6e,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcff19326-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.373 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.374 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcff19326-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.377 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.380 239969 INFO os_vif [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:1c:47,bridge_name='br-int',has_traffic_filtering=True,id=cff19326-5cc9-485a-9f3e-eafabc790a6e,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcff19326-5c')
Jan 26 15:54:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660-userdata-shm.mount: Deactivated successfully.
Jan 26 15:54:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b118c32a89a349d3c4a74cdc57f8685e244299f29bd6b0c228913706ec5c8dcf-merged.mount: Deactivated successfully.
Jan 26 15:54:42 compute-0 podman[271530]: 2026-01-26 15:54:42.407272728 +0000 UTC m=+0.122872395 container cleanup 861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 15:54:42 compute-0 systemd[1]: libpod-conmon-861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660.scope: Deactivated successfully.
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.482 239969 INFO nova.network.neutron [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Port 4f4b7573-e9e2-42e7-8820-e563371095b8 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.484 239969 DEBUG nova.network.neutron [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updating instance_info_cache with network_info: [{"id": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "address": "fa:16:3e:d1:1c:47", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcff19326-5c", "ovs_interfaceid": "cff19326-5cc9-485a-9f3e-eafabc790a6e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:42 compute-0 podman[271582]: 2026-01-26 15:54:42.492410756 +0000 UTC m=+0.060816768 container remove 861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.500 239969 DEBUG oslo_concurrency.lockutils [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-2cf7c37a-5399-4471-8385-7f483007196b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.501 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[de207283-1137-46dc-8c09-5bbbccb1f4fb]: (4, ('Mon Jan 26 03:54:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e (861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660)\n861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660\nMon Jan 26 03:54:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e (861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660)\n861988cca4f8896e65ab9cbcf76586a4b0d106b8838758d92e7c7801ffaec660\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.503 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[199382a2-9eeb-4c3e-a340-7ebf350e98ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.504 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:42 compute-0 kernel: tap91bcac0a-10: left promiscuous mode
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.506 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.509 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.515 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[293c46ed-3e4e-4479-a6d0-0e33551d9d63]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.522 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.526 239969 DEBUG oslo_concurrency.lockutils [None req-9219edb5-db23-4f29-b497-0360d6b25d5f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-2cf7c37a-5399-4471-8385-7f483007196b-4f4b7573-e9e2-42e7-8820-e563371095b8" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.533 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b805180e-f0f1-4b68-8ca4-215b06313516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.537 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a05a2529-bc46-44fd-9996-3bd68e0dfe5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.562 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9154a531-91ec-44fb-9bb1-3ee7a15f5bd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 433717, 'reachable_time': 37879, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271600, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.565 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:54:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:42.565 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[b1bbae8a-09da-4059-ae40-5105fe8aa18c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d91bcac0a\x2d1926\x2d4861\x2d88ab\x2dae3c06f7e57e.mount: Deactivated successfully.
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.728 239969 INFO nova.virt.libvirt.driver [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Deleting instance files /var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b_del
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.730 239969 INFO nova.virt.libvirt.driver [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Deletion of /var/lib/nova/instances/2cf7c37a-5399-4471-8385-7f483007196b_del complete
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.785 239969 INFO nova.compute.manager [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.785 239969 DEBUG oslo.service.loopingcall [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.786 239969 DEBUG nova.compute.manager [-] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:54:42 compute-0 nova_compute[239965]: 2026-01-26 15:54:42.786 239969 DEBUG nova.network.neutron [-] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:54:42 compute-0 ceph-mon[75140]: pgmap v1223: 305 pgs: 305 active+clean; 181 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 497 KiB/s rd, 852 KiB/s wr, 46 op/s
Jan 26 15:54:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 200 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 15:54:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:44 compute-0 ceph-mon[75140]: pgmap v1224: 305 pgs: 305 active+clean; 200 MiB data, 432 MiB used, 60 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 15:54:45 compute-0 nova_compute[239965]: 2026-01-26 15:54:45.107 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:45 compute-0 nova_compute[239965]: 2026-01-26 15:54:45.705 239969 DEBUG nova.network.neutron [-] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 186 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 26 15:54:45 compute-0 nova_compute[239965]: 2026-01-26 15:54:45.881 239969 INFO nova.compute.manager [-] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Took 3.09 seconds to deallocate network for instance.
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.072 239969 DEBUG oslo_concurrency.lockutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.073 239969 DEBUG oslo_concurrency.lockutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.173 239969 DEBUG oslo_concurrency.processutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.211 239969 DEBUG nova.compute.manager [req-ff455445-ffb9-48c6-8b70-f9a7ede31afe req-665b191c-2ceb-4397-b2ac-f56520205b4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Received event network-vif-deleted-cff19326-5cc9-485a-9f3e-eafabc790a6e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.427 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquiring lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.427 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.447 239969 DEBUG nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.499 239969 DEBUG oslo_concurrency.lockutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquiring lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.500 239969 DEBUG oslo_concurrency.lockutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.500 239969 DEBUG oslo_concurrency.lockutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquiring lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.501 239969 DEBUG oslo_concurrency.lockutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.501 239969 DEBUG oslo_concurrency.lockutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.503 239969 INFO nova.compute.manager [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Terminating instance
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.504 239969 DEBUG nova.compute.manager [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:54:46 compute-0 kernel: tap5a7726a4-32 (unregistering): left promiscuous mode
Jan 26 15:54:46 compute-0 NetworkManager[48954]: <info>  [1769442886.5471] device (tap5a7726a4-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.549 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.553 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:46 compute-0 ovn_controller[146046]: 2026-01-26T15:54:46Z|00198|binding|INFO|Releasing lport 5a7726a4-3201-49ec-ab56-ca4039acde5a from this chassis (sb_readonly=0)
Jan 26 15:54:46 compute-0 ovn_controller[146046]: 2026-01-26T15:54:46Z|00199|binding|INFO|Setting lport 5a7726a4-3201-49ec-ab56-ca4039acde5a down in Southbound
Jan 26 15:54:46 compute-0 ovn_controller[146046]: 2026-01-26T15:54:46Z|00200|binding|INFO|Removing iface tap5a7726a4-32 ovn-installed in OVS
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.556 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.564 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:63:a9 10.100.0.14'], port_security=['fa:16:3e:02:63:a9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '3c352fa1-e91d-49ab-a44a-875ec50bb9c3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-52963be3-f242-4f3f-83c5-61cee797b0b4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a4fae17181d64f599975dc5816c6b9b8', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd4f72978-51c5-43c4-afb4-1b796998a549', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.182'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1833d7da-ca9f-431f-aada-f0250e90c35e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5a7726a4-3201-49ec-ab56-ca4039acde5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.566 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5a7726a4-3201-49ec-ab56-ca4039acde5a in datapath 52963be3-f242-4f3f-83c5-61cee797b0b4 unbound from our chassis
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.569 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 52963be3-f242-4f3f-83c5-61cee797b0b4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.571 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b0af39c2-aa0d-456c-a8c1-83addfceefc3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.572 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4 namespace which is not needed anymore
Jan 26 15:54:46 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Deactivated successfully.
Jan 26 15:54:46 compute-0 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Consumed 13.826s CPU time.
Jan 26 15:54:46 compute-0 systemd-machined[208061]: Machine qemu-37-instance-00000021 terminated.
Jan 26 15:54:46 compute-0 neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4[271395]: [NOTICE]   (271420) : haproxy version is 2.8.14-c23fe91
Jan 26 15:54:46 compute-0 neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4[271395]: [NOTICE]   (271420) : path to executable is /usr/sbin/haproxy
Jan 26 15:54:46 compute-0 neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4[271395]: [WARNING]  (271420) : Exiting Master process...
Jan 26 15:54:46 compute-0 neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4[271395]: [WARNING]  (271420) : Exiting Master process...
Jan 26 15:54:46 compute-0 neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4[271395]: [ALERT]    (271420) : Current worker (271425) exited with code 143 (Terminated)
Jan 26 15:54:46 compute-0 neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4[271395]: [WARNING]  (271420) : All workers exited. Exiting... (0)
Jan 26 15:54:46 compute-0 systemd[1]: libpod-6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151.scope: Deactivated successfully.
Jan 26 15:54:46 compute-0 podman[271647]: 2026-01-26 15:54:46.71148974 +0000 UTC m=+0.049591045 container died 6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.739 239969 INFO nova.virt.libvirt.driver [-] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Instance destroyed successfully.
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.740 239969 DEBUG nova.objects.instance [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lazy-loading 'resources' on Instance uuid 3c352fa1-e91d-49ab-a44a-875ec50bb9c3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151-userdata-shm.mount: Deactivated successfully.
Jan 26 15:54:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b875288cfc9ed9d7d7863aa0954d2e6b979cd34354cdea905d6b4369155f2be0-merged.mount: Deactivated successfully.
Jan 26 15:54:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:54:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817682536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.754 239969 DEBUG nova.virt.libvirt.vif [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1884025666',display_name='tempest-ServersTestManualDisk-server-1884025666',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1884025666',id=33,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI6lv1uakSqWZntjnzlYObwQ4O5lM4662rvmzzGK9m7PGcCa9az9O3xFja6m+OM+cleoC76M21DlcKWaQ1VvnnEb1p+cFpm6c2l3JNRpqxUHP51OvJUO11mfImVwFrZPUw==',key_name='tempest-keypair-796965151',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a4fae17181d64f599975dc5816c6b9b8',ramdisk_id='',reservation_id='r-eb30osg8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1759969789',owner_user_name='tempest-ServersTestManualDisk-1759969789-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:54:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5e1a7a8a20c4486d993bf5d6901001be',uuid=3c352fa1-e91d-49ab-a44a-875ec50bb9c3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.755 239969 DEBUG nova.network.os_vif_util [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Converting VIF {"id": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "address": "fa:16:3e:02:63:a9", "network": {"id": "52963be3-f242-4f3f-83c5-61cee797b0b4", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1738664060-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a4fae17181d64f599975dc5816c6b9b8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a7726a4-32", "ovs_interfaceid": "5a7726a4-3201-49ec-ab56-ca4039acde5a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.755 239969 DEBUG nova.network.os_vif_util [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:63:a9,bridge_name='br-int',has_traffic_filtering=True,id=5a7726a4-3201-49ec-ab56-ca4039acde5a,network=Network(52963be3-f242-4f3f-83c5-61cee797b0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a7726a4-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.755 239969 DEBUG os_vif [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:63:a9,bridge_name='br-int',has_traffic_filtering=True,id=5a7726a4-3201-49ec-ab56-ca4039acde5a,network=Network(52963be3-f242-4f3f-83c5-61cee797b0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a7726a4-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.757 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.757 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a7726a4-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:46 compute-0 podman[271647]: 2026-01-26 15:54:46.760455219 +0000 UTC m=+0.098556514 container cleanup 6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.760 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.762 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.764 239969 INFO os_vif [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:63:a9,bridge_name='br-int',has_traffic_filtering=True,id=5a7726a4-3201-49ec-ab56-ca4039acde5a,network=Network(52963be3-f242-4f3f-83c5-61cee797b0b4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a7726a4-32')
Jan 26 15:54:46 compute-0 systemd[1]: libpod-conmon-6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151.scope: Deactivated successfully.
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.779 239969 DEBUG oslo_concurrency.processutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.786 239969 DEBUG nova.compute.provider_tree [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.801 239969 DEBUG nova.scheduler.client.report [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.823 239969 DEBUG oslo_concurrency.lockutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.825 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.834 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.834 239969 INFO nova.compute.claims [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:54:46 compute-0 ceph-mon[75140]: pgmap v1225: 305 pgs: 305 active+clean; 186 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 282 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Jan 26 15:54:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1817682536' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.859 239969 INFO nova.scheduler.client.report [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Deleted allocations for instance 2cf7c37a-5399-4471-8385-7f483007196b
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.935 239969 DEBUG oslo_concurrency.lockutils [None req-ca76bdff-2834-46d1-8957-d85d1e86d0f5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "2cf7c37a-5399-4471-8385-7f483007196b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:46 compute-0 podman[271689]: 2026-01-26 15:54:46.947630265 +0000 UTC m=+0.162715993 container remove 6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.953 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f3e1cc-a31d-4200-8454-e4e009073f88]: (4, ('Mon Jan 26 03:54:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4 (6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151)\n6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151\nMon Jan 26 03:54:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4 (6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151)\n6c1757e81437fe1c0efef1e487669d79fe693a026a3c7cbe4daf0ec869b7f151\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.955 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a223ce5f-1463-4441-b28f-a39ac9780771]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.958 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52963be3-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.958 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:46 compute-0 kernel: tap52963be3-f0: left promiscuous mode
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.979 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[378a1ed7-53af-42f4-9eb8-a9836edaaddc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:46 compute-0 nova_compute[239965]: 2026-01-26 15:54:46.989 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.996 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[600a8fbf-2cd6-49d0-ac8c-3a29fec25fa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:46.997 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4b0036f9-32a2-4b0e-9205-02792898a408]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:47.012 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[274e44a7-0712-4267-9444-39dac062df20]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 435516, 'reachable_time': 42971, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271722, 'error': None, 'target': 'ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:47.014 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-52963be3-f242-4f3f-83c5-61cee797b0b4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:54:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:47.014 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d9946f-8084-4964-a8e2-4808abe866b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d52963be3\x2df242\x2d4f3f\x2d83c5\x2d61cee797b0b4.mount: Deactivated successfully.
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.223 239969 INFO nova.virt.libvirt.driver [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Deleting instance files /var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3_del
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.224 239969 INFO nova.virt.libvirt.driver [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Deletion of /var/lib/nova/instances/3c352fa1-e91d-49ab-a44a-875ec50bb9c3_del complete
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.277 239969 INFO nova.compute.manager [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.278 239969 DEBUG oslo.service.loopingcall [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.278 239969 DEBUG nova.compute.manager [-] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.278 239969 DEBUG nova.network.neutron [-] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:54:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:54:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1541796154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.524 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.529 239969 DEBUG nova.compute.provider_tree [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.550 239969 DEBUG nova.scheduler.client.report [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.578 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.579 239969 DEBUG nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.627 239969 DEBUG nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.627 239969 DEBUG nova.network.neutron [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.645 239969 INFO nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.664 239969 DEBUG nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:54:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1226: 305 pgs: 305 active+clean; 74 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.771 239969 DEBUG nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.772 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.772 239969 INFO nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Creating image(s)
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.797 239969 DEBUG nova.storage.rbd_utils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] rbd image d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.824 239969 DEBUG nova.storage.rbd_utils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] rbd image d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.851 239969 DEBUG nova.storage.rbd_utils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] rbd image d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.854 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1541796154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.883 239969 DEBUG nova.policy [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b63a12d3960a490293ce7e52f058b753', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd6524312aa97473dbf559a96f287a9b7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.924 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.925 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.926 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.926 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.949 239969 DEBUG nova.storage.rbd_utils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] rbd image d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:47 compute-0 nova_compute[239965]: 2026-01-26 15:54:47.952 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.220 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.275 239969 DEBUG nova.network.neutron [-] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.280 239969 DEBUG nova.storage.rbd_utils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] resizing rbd image d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.306 239969 INFO nova.compute.manager [-] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Took 1.03 seconds to deallocate network for instance.
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.357 239969 DEBUG oslo_concurrency.lockutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.358 239969 DEBUG oslo_concurrency.lockutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.362 239969 DEBUG nova.compute.manager [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received event network-vif-unplugged-5a7726a4-3201-49ec-ab56-ca4039acde5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.363 239969 DEBUG oslo_concurrency.lockutils [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.363 239969 DEBUG oslo_concurrency.lockutils [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.363 239969 DEBUG oslo_concurrency.lockutils [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.363 239969 DEBUG nova.compute.manager [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] No waiting events found dispatching network-vif-unplugged-5a7726a4-3201-49ec-ab56-ca4039acde5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.363 239969 WARNING nova.compute.manager [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received unexpected event network-vif-unplugged-5a7726a4-3201-49ec-ab56-ca4039acde5a for instance with vm_state deleted and task_state None.
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.364 239969 DEBUG nova.compute.manager [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received event network-vif-plugged-5a7726a4-3201-49ec-ab56-ca4039acde5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.364 239969 DEBUG oslo_concurrency.lockutils [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.364 239969 DEBUG oslo_concurrency.lockutils [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.364 239969 DEBUG oslo_concurrency.lockutils [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.364 239969 DEBUG nova.compute.manager [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] No waiting events found dispatching network-vif-plugged-5a7726a4-3201-49ec-ab56-ca4039acde5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.364 239969 WARNING nova.compute.manager [req-88d0bbd8-f456-4e28-890d-2851230fc14c req-092f40b8-e950-4d5e-9243-cf4f0182b0a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received unexpected event network-vif-plugged-5a7726a4-3201-49ec-ab56-ca4039acde5a for instance with vm_state deleted and task_state None.
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.368 239969 DEBUG nova.objects.instance [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lazy-loading 'migration_context' on Instance uuid d6511f20-baa5-46ee-80c7-fbb3e13d5163 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.379 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.380 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Ensure instance console log exists: /var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.380 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.380 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.381 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:48 compute-0 nova_compute[239965]: 2026-01-26 15:54:48.428 239969 DEBUG oslo_concurrency.processutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:54:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1605401430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:54:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:54:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1605401430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00026580420619087494 of space, bias 1.0, pg target 0.07974126185726248 quantized to 32 (current 32)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006675012271416842 of space, bias 1.0, pg target 0.20025036814250524 quantized to 32 (current 32)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.887203929215784e-07 of space, bias 4.0, pg target 0.0009464644715058942 quantized to 16 (current 16)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:54:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:54:48 compute-0 ceph-mon[75140]: pgmap v1226: 305 pgs: 305 active+clean; 74 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 107 op/s
Jan 26 15:54:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1605401430' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:54:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1605401430' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:54:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:54:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1281199983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:49 compute-0 nova_compute[239965]: 2026-01-26 15:54:49.015 239969 DEBUG oslo_concurrency.processutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:49 compute-0 nova_compute[239965]: 2026-01-26 15:54:49.023 239969 DEBUG nova.compute.provider_tree [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:54:49 compute-0 nova_compute[239965]: 2026-01-26 15:54:49.039 239969 DEBUG nova.scheduler.client.report [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:54:49 compute-0 nova_compute[239965]: 2026-01-26 15:54:49.063 239969 DEBUG oslo_concurrency.lockutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:49 compute-0 nova_compute[239965]: 2026-01-26 15:54:49.092 239969 INFO nova.scheduler.client.report [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Deleted allocations for instance 3c352fa1-e91d-49ab-a44a-875ec50bb9c3
Jan 26 15:54:49 compute-0 nova_compute[239965]: 2026-01-26 15:54:49.164 239969 DEBUG oslo_concurrency.lockutils [None req-f567a66b-2163-4690-8ff3-f59c8364e02b 5e1a7a8a20c4486d993bf5d6901001be a4fae17181d64f599975dc5816c6b9b8 - - default default] Lock "3c352fa1-e91d-49ab-a44a-875ec50bb9c3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:49 compute-0 nova_compute[239965]: 2026-01-26 15:54:49.385 239969 DEBUG nova.network.neutron [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Successfully created port: 1faa1913-d2b2-468b-b3a1-69ccd5286bff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:54:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 74 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Jan 26 15:54:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1281199983' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.109 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.448 239969 DEBUG nova.compute.manager [req-2e913c0c-1741-46bf-9651-20fc16687b12 req-7752ba65-a7ec-4aa7-960c-93996f06c39b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Received event network-vif-deleted-5a7726a4-3201-49ec-ab56-ca4039acde5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.497 239969 DEBUG nova.network.neutron [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Successfully updated port: 1faa1913-d2b2-468b-b3a1-69ccd5286bff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.518 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquiring lock "refresh_cache-d6511f20-baa5-46ee-80c7-fbb3e13d5163" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.518 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquired lock "refresh_cache-d6511f20-baa5-46ee-80c7-fbb3e13d5163" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.518 239969 DEBUG nova.network.neutron [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.598 239969 DEBUG nova.compute.manager [req-05f93fa8-9031-4dbc-aa97-e9af9c8cca6b req-fda01041-70cb-4add-9037-ff7a5c5cde70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Received event network-changed-1faa1913-d2b2-468b-b3a1-69ccd5286bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.599 239969 DEBUG nova.compute.manager [req-05f93fa8-9031-4dbc-aa97-e9af9c8cca6b req-fda01041-70cb-4add-9037-ff7a5c5cde70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Refreshing instance network info cache due to event network-changed-1faa1913-d2b2-468b-b3a1-69ccd5286bff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.599 239969 DEBUG oslo_concurrency.lockutils [req-05f93fa8-9031-4dbc-aa97-e9af9c8cca6b req-fda01041-70cb-4add-9037-ff7a5c5cde70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d6511f20-baa5-46ee-80c7-fbb3e13d5163" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:50 compute-0 nova_compute[239965]: 2026-01-26 15:54:50.695 239969 DEBUG nova.network.neutron [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:54:50 compute-0 ceph-mon[75140]: pgmap v1227: 305 pgs: 305 active+clean; 74 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 291 KiB/s rd, 2.1 MiB/s wr, 104 op/s
Jan 26 15:54:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 56 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 299 KiB/s rd, 2.8 MiB/s wr, 115 op/s
Jan 26 15:54:51 compute-0 nova_compute[239965]: 2026-01-26 15:54:51.761 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.711 239969 DEBUG nova.network.neutron [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Updating instance_info_cache with network_info: [{"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.750 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Releasing lock "refresh_cache-d6511f20-baa5-46ee-80c7-fbb3e13d5163" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.750 239969 DEBUG nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Instance network_info: |[{"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.751 239969 DEBUG oslo_concurrency.lockutils [req-05f93fa8-9031-4dbc-aa97-e9af9c8cca6b req-fda01041-70cb-4add-9037-ff7a5c5cde70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d6511f20-baa5-46ee-80c7-fbb3e13d5163" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.751 239969 DEBUG nova.network.neutron [req-05f93fa8-9031-4dbc-aa97-e9af9c8cca6b req-fda01041-70cb-4add-9037-ff7a5c5cde70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Refreshing network info cache for port 1faa1913-d2b2-468b-b3a1-69ccd5286bff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.754 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Start _get_guest_xml network_info=[{"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.758 239969 WARNING nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.763 239969 DEBUG nova.virt.libvirt.host [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.764 239969 DEBUG nova.virt.libvirt.host [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.770 239969 DEBUG nova.virt.libvirt.host [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.771 239969 DEBUG nova.virt.libvirt.host [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.771 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.771 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.772 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.772 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.772 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.773 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.773 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.773 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.773 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.773 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.774 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.774 239969 DEBUG nova.virt.hardware [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:54:52 compute-0 nova_compute[239965]: 2026-01-26 15:54:52.777 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:53 compute-0 ceph-mon[75140]: pgmap v1228: 305 pgs: 305 active+clean; 56 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 299 KiB/s rd, 2.8 MiB/s wr, 115 op/s
Jan 26 15:54:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:54:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345052670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:53 compute-0 nova_compute[239965]: 2026-01-26 15:54:53.543 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.766s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:53 compute-0 nova_compute[239965]: 2026-01-26 15:54:53.566 239969 DEBUG nova.storage.rbd_utils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] rbd image d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:53 compute-0 nova_compute[239965]: 2026-01-26 15:54:53.571 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 3.1 MiB/s wr, 109 op/s
Jan 26 15:54:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3345052670' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:54 compute-0 ceph-mon[75140]: pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 3.1 MiB/s wr, 109 op/s
Jan 26 15:54:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:54:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3527571722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.135 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.137 239969 DEBUG nova.virt.libvirt.vif [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:54:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1645979738',display_name='tempest-ImagesOneServerTestJSON-server-1645979738',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1645979738',id=34,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6524312aa97473dbf559a96f287a9b7',ramdisk_id='',reservation_id='r-28dlaor7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1585463751',owner_user_name='tempest-ImagesOneServerTestJSON-1585463751-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:54:47Z,user_data=None,user_id='b63a12d3960a490293ce7e52f058b753',uuid=d6511f20-baa5-46ee-80c7-fbb3e13d5163,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.138 239969 DEBUG nova.network.os_vif_util [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Converting VIF {"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.138 239969 DEBUG nova.network.os_vif_util [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=1faa1913-d2b2-468b-b3a1-69ccd5286bff,network=Network(8c72a58b-5da5-4af3-9fe6-bf85dce1faaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa1913-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.140 239969 DEBUG nova.objects.instance [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lazy-loading 'pci_devices' on Instance uuid d6511f20-baa5-46ee-80c7-fbb3e13d5163 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.164 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <uuid>d6511f20-baa5-46ee-80c7-fbb3e13d5163</uuid>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <name>instance-00000022</name>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesOneServerTestJSON-server-1645979738</nova:name>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:54:52</nova:creationTime>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <nova:user uuid="b63a12d3960a490293ce7e52f058b753">tempest-ImagesOneServerTestJSON-1585463751-project-member</nova:user>
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <nova:project uuid="d6524312aa97473dbf559a96f287a9b7">tempest-ImagesOneServerTestJSON-1585463751</nova:project>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <nova:port uuid="1faa1913-d2b2-468b-b3a1-69ccd5286bff">
Jan 26 15:54:54 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <system>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <entry name="serial">d6511f20-baa5-46ee-80c7-fbb3e13d5163</entry>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <entry name="uuid">d6511f20-baa5-46ee-80c7-fbb3e13d5163</entry>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     </system>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <os>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   </os>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <features>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   </features>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk">
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk.config">
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       </source>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:54:54 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:45:15:bd"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <target dev="tap1faa1913-d2"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163/console.log" append="off"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <video>
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     </video>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:54:54 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:54:54 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:54:54 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:54:54 compute-0 nova_compute[239965]: </domain>
Jan 26 15:54:54 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.166 239969 DEBUG nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Preparing to wait for external event network-vif-plugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.166 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquiring lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.166 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.166 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.167 239969 DEBUG nova.virt.libvirt.vif [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:54:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1645979738',display_name='tempest-ImagesOneServerTestJSON-server-1645979738',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1645979738',id=34,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d6524312aa97473dbf559a96f287a9b7',ramdisk_id='',reservation_id='r-28dlaor7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-1585463751',owner_user_name='tempest-ImagesOneServerTestJSON-1585463751-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:54:47Z,user_data=None,user_id='b63a12d3960a490293ce7e52f058b753',uuid=d6511f20-baa5-46ee-80c7-fbb3e13d5163,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.168 239969 DEBUG nova.network.os_vif_util [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Converting VIF {"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.168 239969 DEBUG nova.network.os_vif_util [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=1faa1913-d2b2-468b-b3a1-69ccd5286bff,network=Network(8c72a58b-5da5-4af3-9fe6-bf85dce1faaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa1913-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.168 239969 DEBUG os_vif [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=1faa1913-d2b2-468b-b3a1-69ccd5286bff,network=Network(8c72a58b-5da5-4af3-9fe6-bf85dce1faaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa1913-d2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.169 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.170 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.170 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.172 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.172 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1faa1913-d2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.173 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1faa1913-d2, col_values=(('external_ids', {'iface-id': '1faa1913-d2b2-468b-b3a1-69ccd5286bff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:15:bd', 'vm-uuid': 'd6511f20-baa5-46ee-80c7-fbb3e13d5163'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.174 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:54 compute-0 NetworkManager[48954]: <info>  [1769442894.1755] manager: (tap1faa1913-d2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.179 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.180 239969 INFO os_vif [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=1faa1913-d2b2-468b-b3a1-69ccd5286bff,network=Network(8c72a58b-5da5-4af3-9fe6-bf85dce1faaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa1913-d2')
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.265 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.266 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.266 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] No VIF found with MAC fa:16:3e:45:15:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.267 239969 INFO nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Using config drive
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.289 239969 DEBUG nova.storage.rbd_utils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] rbd image d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.814 239969 INFO nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Creating config drive at /var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163/disk.config
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.821 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphp78c4cx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.954 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphp78c4cx" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.983 239969 DEBUG nova.storage.rbd_utils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] rbd image d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:54 compute-0 nova_compute[239965]: 2026-01-26 15:54:54.986 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163/disk.config d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3527571722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.111 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.124 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.126 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.132 239969 DEBUG nova.network.neutron [req-05f93fa8-9031-4dbc-aa97-e9af9c8cca6b req-fda01041-70cb-4add-9037-ff7a5c5cde70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Updated VIF entry in instance network info cache for port 1faa1913-d2b2-468b-b3a1-69ccd5286bff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.133 239969 DEBUG nova.network.neutron [req-05f93fa8-9031-4dbc-aa97-e9af9c8cca6b req-fda01041-70cb-4add-9037-ff7a5c5cde70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Updating instance_info_cache with network_info: [{"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.149 239969 DEBUG nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.153 239969 DEBUG oslo_concurrency.lockutils [req-05f93fa8-9031-4dbc-aa97-e9af9c8cca6b req-fda01041-70cb-4add-9037-ff7a5c5cde70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d6511f20-baa5-46ee-80c7-fbb3e13d5163" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.196 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.218 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.219 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.225 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.225 239969 INFO nova.compute.claims [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.352 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.396 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 podman[272054]: 2026-01-26 15:54:55.465327987 +0000 UTC m=+0.093657445 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 15:54:55 compute-0 podman[272056]: 2026-01-26 15:54:55.493659776 +0000 UTC m=+0.118790456 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.515 239969 DEBUG oslo_concurrency.processutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163/disk.config d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.515 239969 INFO nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Deleting local config drive /var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163/disk.config because it was imported into RBD.
Jan 26 15:54:55 compute-0 kernel: tap1faa1913-d2: entered promiscuous mode
Jan 26 15:54:55 compute-0 NetworkManager[48954]: <info>  [1769442895.5730] manager: (tap1faa1913-d2): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Jan 26 15:54:55 compute-0 ovn_controller[146046]: 2026-01-26T15:54:55Z|00201|binding|INFO|Claiming lport 1faa1913-d2b2-468b-b3a1-69ccd5286bff for this chassis.
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.576 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.579 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 ovn_controller[146046]: 2026-01-26T15:54:55Z|00202|binding|INFO|1faa1913-d2b2-468b-b3a1-69ccd5286bff: Claiming fa:16:3e:45:15:bd 10.100.0.12
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.588 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:15:bd 10.100.0.12'], port_security=['fa:16:3e:45:15:bd 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd6511f20-baa5-46ee-80c7-fbb3e13d5163', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6524312aa97473dbf559a96f287a9b7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f7184b0-1c30-417a-a7e6-e39e091b9c66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78461b0f-389b-40aa-88db-5742b64d0a9b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1faa1913-d2b2-468b-b3a1-69ccd5286bff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.589 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1faa1913-d2b2-468b-b3a1-69ccd5286bff in datapath 8c72a58b-5da5-4af3-9fe6-bf85dce1faaa bound to our chassis
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.590 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8c72a58b-5da5-4af3-9fe6-bf85dce1faaa
Jan 26 15:54:55 compute-0 systemd-udevd[272136]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.606 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d4e7f0a0-8503-49bb-baba-c6743e857191]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.607 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8c72a58b-51 in ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.609 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8c72a58b-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.609 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[93050fdc-b2e4-4338-848f-490542bc8de1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 systemd-machined[208061]: New machine qemu-38-instance-00000022.
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.610 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6f99defc-9b81-44d1-9b27-e505e47b6562]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 NetworkManager[48954]: <info>  [1769442895.6253] device (tap1faa1913-d2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:54:55 compute-0 NetworkManager[48954]: <info>  [1769442895.6259] device (tap1faa1913-d2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.627 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[1e6ea358-8aad-4d46-aee5-644bf5ab37b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.642 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f353cca3-e6f2-4694-a076-2be65cd650ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.650 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 ovn_controller[146046]: 2026-01-26T15:54:55Z|00203|binding|INFO|Setting lport 1faa1913-d2b2-468b-b3a1-69ccd5286bff ovn-installed in OVS
Jan 26 15:54:55 compute-0 ovn_controller[146046]: 2026-01-26T15:54:55Z|00204|binding|INFO|Setting lport 1faa1913-d2b2-468b-b3a1-69ccd5286bff up in Southbound
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.652 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.675 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ea015219-5a2e-4e65-8804-e8f9dc1ff7e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 NetworkManager[48954]: <info>  [1769442895.6848] manager: (tap8c72a58b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/104)
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.684 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8ee4be-94c6-4850-b64d-db8e66b5b1a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.722 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[90066720-160b-4c52-9623-fd2ad9cbc069]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.726 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9db9ef15-594f-4ebd-9b22-e2c26c1e20c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Jan 26 15:54:55 compute-0 NetworkManager[48954]: <info>  [1769442895.7514] device (tap8c72a58b-50): carrier: link connected
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.756 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b22eb7e6-18b1-44b5-a0a0-708a4f17f08f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.775 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c18fcd3a-4679-453b-837d-e40cba037a56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c72a58b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:a5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438655, 'reachable_time': 36181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272168, 'error': None, 'target': 'ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.792 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f5869e94-d38e-4a0e-9213-aa4c7b83f869]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feee:a527'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 438655, 'tstamp': 438655}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272169, 'error': None, 'target': 'ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.812 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d643a785-4c53-423a-8159-36b2ba9d8f44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8c72a58b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ee:a5:27'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438655, 'reachable_time': 36181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272170, 'error': None, 'target': 'ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.844 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[99e48094-f56b-47fe-9b34-f14d03777e92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.891 239969 DEBUG nova.compute.manager [req-11c8bfd9-86b7-4830-a6c3-d04e725a0bb9 req-6ccaa9c4-c67d-4a4f-9d59-1e49ce5fe4bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Received event network-vif-plugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.892 239969 DEBUG oslo_concurrency.lockutils [req-11c8bfd9-86b7-4830-a6c3-d04e725a0bb9 req-6ccaa9c4-c67d-4a4f-9d59-1e49ce5fe4bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.893 239969 DEBUG oslo_concurrency.lockutils [req-11c8bfd9-86b7-4830-a6c3-d04e725a0bb9 req-6ccaa9c4-c67d-4a4f-9d59-1e49ce5fe4bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.893 239969 DEBUG oslo_concurrency.lockutils [req-11c8bfd9-86b7-4830-a6c3-d04e725a0bb9 req-6ccaa9c4-c67d-4a4f-9d59-1e49ce5fe4bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.893 239969 DEBUG nova.compute.manager [req-11c8bfd9-86b7-4830-a6c3-d04e725a0bb9 req-6ccaa9c4-c67d-4a4f-9d59-1e49ce5fe4bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Processing event network-vif-plugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.914 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bba22d1f-8198-4c41-a426-bb2e3ef9859f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.916 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c72a58b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.916 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.917 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8c72a58b-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:55 compute-0 NetworkManager[48954]: <info>  [1769442895.9198] manager: (tap8c72a58b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Jan 26 15:54:55 compute-0 kernel: tap8c72a58b-50: entered promiscuous mode
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.923 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8c72a58b-50, col_values=(('external_ids', {'iface-id': 'b42e7ffc-fd24-4242-861e-74bdc067259a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:54:55 compute-0 ovn_controller[146046]: 2026-01-26T15:54:55Z|00205|binding|INFO|Releasing lport b42e7ffc-fd24-4242-861e-74bdc067259a from this chassis (sb_readonly=0)
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.925 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.927 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8c72a58b-5da5-4af3-9fe6-bf85dce1faaa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8c72a58b-5da5-4af3-9fe6-bf85dce1faaa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.928 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5e311075-a90c-4fca-8f3a-67ba94de169b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.929 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/8c72a58b-5da5-4af3-9fe6-bf85dce1faaa.pid.haproxy
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 8c72a58b-5da5-4af3-9fe6-bf85dce1faaa
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:54:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:55.930 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa', 'env', 'PROCESS_TAG=haproxy-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8c72a58b-5da5-4af3-9fe6-bf85dce1faaa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.941 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:54:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713535388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.977 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.982 239969 DEBUG nova.compute.provider_tree [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:54:55 compute-0 nova_compute[239965]: 2026-01-26 15:54:55.998 239969 DEBUG nova.scheduler.client.report [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.021 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.022 239969 DEBUG nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.067 239969 DEBUG nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.067 239969 DEBUG nova.network.neutron [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.089 239969 INFO nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:54:56 compute-0 ceph-mon[75140]: pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Jan 26 15:54:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1713535388' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.108 239969 DEBUG nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.216 239969 DEBUG nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.218 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.218 239969 INFO nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Creating image(s)
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.248 239969 DEBUG nova.storage.rbd_utils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 819151c3-9892-4926-879b-6bcf2047e116_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.280 239969 DEBUG nova.storage.rbd_utils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 819151c3-9892-4926-879b-6bcf2047e116_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.315 239969 DEBUG nova.storage.rbd_utils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 819151c3-9892-4926-879b-6bcf2047e116_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.320 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:56 compute-0 podman[272278]: 2026-01-26 15:54:56.339424456 +0000 UTC m=+0.055170031 container create 9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.352 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442896.2321205, d6511f20-baa5-46ee-80c7-fbb3e13d5163 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.353 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] VM Started (Lifecycle Event)
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.357 239969 DEBUG nova.policy [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.360 239969 DEBUG nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.363 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:54:56 compute-0 systemd[1]: Started libpod-conmon-9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc.scope.
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.367 239969 INFO nova.virt.libvirt.driver [-] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Instance spawned successfully.
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.367 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.390 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.395 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.395 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.396 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.396 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.396 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.397 239969 DEBUG nova.virt.libvirt.driver [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.401 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.402 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3fcd9dfb69cd691bea593064e64a40a5f1b0351adc1fa1407aad72c8bc26964/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:54:56 compute-0 podman[272278]: 2026-01-26 15:54:56.310563605 +0000 UTC m=+0.026309210 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.407 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.408 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:56 compute-0 podman[272278]: 2026-01-26 15:54:56.417326188 +0000 UTC m=+0.133071793 container init 9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 15:54:56 compute-0 podman[272278]: 2026-01-26 15:54:56.423913838 +0000 UTC m=+0.139659413 container start 9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.434 239969 DEBUG nova.storage.rbd_utils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 819151c3-9892-4926-879b-6bcf2047e116_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.444 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 819151c3-9892-4926-879b-6bcf2047e116_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:54:56 compute-0 neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa[272316]: [NOTICE]   (272337) : New worker (272342) forked
Jan 26 15:54:56 compute-0 neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa[272316]: [NOTICE]   (272337) : Loading success.
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.476 239969 INFO nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Took 8.70 seconds to spawn the instance on the hypervisor.
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.476 239969 DEBUG nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.478 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.516 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.516 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442896.2323394, d6511f20-baa5-46ee-80c7-fbb3e13d5163 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.517 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] VM Paused (Lifecycle Event)
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.547 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.550 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442896.3632772, d6511f20-baa5-46ee-80c7-fbb3e13d5163 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.550 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] VM Resumed (Lifecycle Event)
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.580 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.583 239969 INFO nova.compute.manager [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Took 10.05 seconds to build instance.
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.589 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.614 239969 DEBUG oslo_concurrency.lockutils [None req-33c14a68-2c42-460a-a55d-5660535f21a7 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.721 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 819151c3-9892-4926-879b-6bcf2047e116_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.787 239969 DEBUG nova.storage.rbd_utils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] resizing rbd image 819151c3-9892-4926-879b-6bcf2047e116_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.868 239969 DEBUG nova.objects.instance [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'migration_context' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.885 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.885 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Ensure instance console log exists: /var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.885 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.886 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:56 compute-0 nova_compute[239965]: 2026-01-26 15:54:56.886 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:57 compute-0 nova_compute[239965]: 2026-01-26 15:54:57.347 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442882.346326, 2cf7c37a-5399-4471-8385-7f483007196b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:54:57 compute-0 nova_compute[239965]: 2026-01-26 15:54:57.348 239969 INFO nova.compute.manager [-] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] VM Stopped (Lifecycle Event)
Jan 26 15:54:57 compute-0 nova_compute[239965]: 2026-01-26 15:54:57.390 239969 DEBUG nova.compute.manager [None req-fae5f693-dafd-46fb-871b-ced30833567e - - - - - -] [instance: 2cf7c37a-5399-4471-8385-7f483007196b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 126 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 597 KiB/s rd, 3.0 MiB/s wr, 113 op/s
Jan 26 15:54:57 compute-0 nova_compute[239965]: 2026-01-26 15:54:57.885 239969 DEBUG nova.network.neutron [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Successfully created port: 786c06cb-4344-4e72-b40a-f3aa914d037c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:54:58 compute-0 nova_compute[239965]: 2026-01-26 15:54:58.089 239969 DEBUG nova.compute.manager [req-66e5e21f-9313-4056-a41d-a79df2e9100f req-d6e7c58d-c5d3-4a46-9853-93e6b0eddd12 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Received event network-vif-plugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:54:58 compute-0 nova_compute[239965]: 2026-01-26 15:54:58.089 239969 DEBUG oslo_concurrency.lockutils [req-66e5e21f-9313-4056-a41d-a79df2e9100f req-d6e7c58d-c5d3-4a46-9853-93e6b0eddd12 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:58 compute-0 nova_compute[239965]: 2026-01-26 15:54:58.089 239969 DEBUG oslo_concurrency.lockutils [req-66e5e21f-9313-4056-a41d-a79df2e9100f req-d6e7c58d-c5d3-4a46-9853-93e6b0eddd12 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:58 compute-0 nova_compute[239965]: 2026-01-26 15:54:58.090 239969 DEBUG oslo_concurrency.lockutils [req-66e5e21f-9313-4056-a41d-a79df2e9100f req-d6e7c58d-c5d3-4a46-9853-93e6b0eddd12 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:58 compute-0 nova_compute[239965]: 2026-01-26 15:54:58.090 239969 DEBUG nova.compute.manager [req-66e5e21f-9313-4056-a41d-a79df2e9100f req-d6e7c58d-c5d3-4a46-9853-93e6b0eddd12 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] No waiting events found dispatching network-vif-plugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:54:58 compute-0 nova_compute[239965]: 2026-01-26 15:54:58.090 239969 WARNING nova.compute.manager [req-66e5e21f-9313-4056-a41d-a79df2e9100f req-d6e7c58d-c5d3-4a46-9853-93e6b0eddd12 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Received unexpected event network-vif-plugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff for instance with vm_state active and task_state None.
Jan 26 15:54:58 compute-0 nova_compute[239965]: 2026-01-26 15:54:58.253 239969 DEBUG nova.compute.manager [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:54:58 compute-0 nova_compute[239965]: 2026-01-26 15:54:58.301 239969 INFO nova.compute.manager [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] instance snapshotting
Jan 26 15:54:58 compute-0 ceph-mon[75140]: pgmap v1231: 305 pgs: 305 active+clean; 126 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 597 KiB/s rd, 3.0 MiB/s wr, 113 op/s
Jan 26 15:54:58 compute-0 nova_compute[239965]: 2026-01-26 15:54:58.844 239969 INFO nova.virt.libvirt.driver [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Beginning live snapshot process
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.014 239969 DEBUG nova.virt.libvirt.imagebackend [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.124 239969 DEBUG nova.network.neutron [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Successfully updated port: 786c06cb-4344-4e72-b40a-f3aa914d037c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.145 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.145 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.145 239969 DEBUG nova.network.neutron [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:54:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:59.216 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:54:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:59.216 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:54:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:54:59.217 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.399 239969 DEBUG nova.storage.rbd_utils [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] creating snapshot(dadf472924544219bd0ae744056e0b82) on rbd image(d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:54:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.657 239969 DEBUG nova.network.neutron [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:54:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 126 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 575 KiB/s rd, 3.0 MiB/s wr, 79 op/s
Jan 26 15:54:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Jan 26 15:54:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Jan 26 15:54:59 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.858 239969 DEBUG nova.storage.rbd_utils [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] cloning vms/d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk@dadf472924544219bd0ae744056e0b82 to images/cff9a4f9-54d9-428a-ac86-602637e98b00 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:54:59 compute-0 nova_compute[239965]: 2026-01-26 15:54:59.940 239969 DEBUG nova.storage.rbd_utils [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] flattening images/cff9a4f9-54d9-428a-ac86-602637e98b00 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:55:00 compute-0 nova_compute[239965]: 2026-01-26 15:55:00.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:00 compute-0 nova_compute[239965]: 2026-01-26 15:55:00.159 239969 DEBUG nova.storage.rbd_utils [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] removing snapshot(dadf472924544219bd0ae744056e0b82) on rbd image(d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:55:00 compute-0 nova_compute[239965]: 2026-01-26 15:55:00.591 239969 DEBUG nova.compute.manager [req-b5551085-5485-4243-8a43-42a853ed6873 req-94c7b16a-9207-4630-817b-d6d3e53e9f2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-changed-786c06cb-4344-4e72-b40a-f3aa914d037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:00 compute-0 nova_compute[239965]: 2026-01-26 15:55:00.591 239969 DEBUG nova.compute.manager [req-b5551085-5485-4243-8a43-42a853ed6873 req-94c7b16a-9207-4630-817b-d6d3e53e9f2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing instance network info cache due to event network-changed-786c06cb-4344-4e72-b40a-f3aa914d037c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:55:00 compute-0 nova_compute[239965]: 2026-01-26 15:55:00.592 239969 DEBUG oslo_concurrency.lockutils [req-b5551085-5485-4243-8a43-42a853ed6873 req-94c7b16a-9207-4630-817b-d6d3e53e9f2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:00 compute-0 ceph-mon[75140]: pgmap v1232: 305 pgs: 305 active+clean; 126 MiB data, 378 MiB used, 60 GiB / 60 GiB avail; 575 KiB/s rd, 3.0 MiB/s wr, 79 op/s
Jan 26 15:55:00 compute-0 ceph-mon[75140]: osdmap e207: 3 total, 3 up, 3 in
Jan 26 15:55:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Jan 26 15:55:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Jan 26 15:55:00 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Jan 26 15:55:00 compute-0 nova_compute[239965]: 2026-01-26 15:55:00.889 239969 DEBUG nova.storage.rbd_utils [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] creating snapshot(snap) on rbd image(cff9a4f9-54d9-428a-ac86-602637e98b00) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.036 239969 DEBUG nova.network.neutron [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.071 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.072 239969 DEBUG nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Instance network_info: |[{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.072 239969 DEBUG oslo_concurrency.lockutils [req-b5551085-5485-4243-8a43-42a853ed6873 req-94c7b16a-9207-4630-817b-d6d3e53e9f2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.073 239969 DEBUG nova.network.neutron [req-b5551085-5485-4243-8a43-42a853ed6873 req-94c7b16a-9207-4630-817b-d6d3e53e9f2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing network info cache for port 786c06cb-4344-4e72-b40a-f3aa914d037c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.076 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Start _get_guest_xml network_info=[{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.081 239969 WARNING nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.086 239969 DEBUG nova.virt.libvirt.host [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.086 239969 DEBUG nova.virt.libvirt.host [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.089 239969 DEBUG nova.virt.libvirt.host [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.089 239969 DEBUG nova.virt.libvirt.host [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.090 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.090 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.090 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.091 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.091 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.091 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.091 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.091 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.092 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.092 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.092 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.092 239969 DEBUG nova.virt.hardware [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.094 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:55:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1800825629' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.724 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 155 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.1 MiB/s wr, 137 op/s
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.756 239969 DEBUG nova.storage.rbd_utils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 819151c3-9892-4926-879b-6bcf2047e116_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.764 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.809 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442886.7374227, 3c352fa1-e91d-49ab-a44a-875ec50bb9c3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.810 239969 INFO nova.compute.manager [-] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] VM Stopped (Lifecycle Event)
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.817 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.818 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Jan 26 15:55:01 compute-0 ceph-mon[75140]: osdmap e208: 3 total, 3 up, 3 in
Jan 26 15:55:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1800825629' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Jan 26 15:55:01 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.850 239969 DEBUG nova.compute.manager [None req-6f280a07-4016-43d6-960c-9b2348da572e - - - - - -] [instance: 3c352fa1-e91d-49ab-a44a-875ec50bb9c3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.857 239969 DEBUG nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.931 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.932 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.939 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:55:01 compute-0 nova_compute[239965]: 2026-01-26 15:55:01.940 239969 INFO nova.compute.claims [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.104 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.237 239969 DEBUG nova.network.neutron [req-b5551085-5485-4243-8a43-42a853ed6873 req-94c7b16a-9207-4630-817b-d6d3e53e9f2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updated VIF entry in instance network info cache for port 786c06cb-4344-4e72-b40a-f3aa914d037c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.239 239969 DEBUG nova.network.neutron [req-b5551085-5485-4243-8a43-42a853ed6873 req-94c7b16a-9207-4630-817b-d6d3e53e9f2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.271 239969 DEBUG oslo_concurrency.lockutils [req-b5551085-5485-4243-8a43-42a853ed6873 req-94c7b16a-9207-4630-817b-d6d3e53e9f2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:55:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2535936510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.410 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.412 239969 DEBUG nova.virt.libvirt.vif [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:54:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.412 239969 DEBUG nova.network.os_vif_util [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.413 239969 DEBUG nova.network.os_vif_util [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ed:8a,bridge_name='br-int',has_traffic_filtering=True,id=786c06cb-4344-4e72-b40a-f3aa914d037c,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap786c06cb-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.414 239969 DEBUG nova.objects.instance [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_devices' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.440 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <uuid>819151c3-9892-4926-879b-6bcf2047e116</uuid>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <name>instance-00000023</name>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:55:01</nova:creationTime>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:55:02 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <system>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <entry name="serial">819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <entry name="uuid">819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     </system>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <os>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   </os>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <features>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   </features>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/819151c3-9892-4926-879b-6bcf2047e116_disk">
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       </source>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/819151c3-9892-4926-879b-6bcf2047e116_disk.config">
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       </source>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:55:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:1d:ed:8a"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <target dev="tap786c06cb-43"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log" append="off"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <video>
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     </video>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:55:02 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:55:02 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:55:02 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:55:02 compute-0 nova_compute[239965]: </domain>
Jan 26 15:55:02 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.446 239969 DEBUG nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Preparing to wait for external event network-vif-plugged-786c06cb-4344-4e72-b40a-f3aa914d037c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.447 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.447 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.447 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.448 239969 DEBUG nova.virt.libvirt.vif [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:54:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.448 239969 DEBUG nova.network.os_vif_util [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.449 239969 DEBUG nova.network.os_vif_util [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ed:8a,bridge_name='br-int',has_traffic_filtering=True,id=786c06cb-4344-4e72-b40a-f3aa914d037c,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap786c06cb-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.450 239969 DEBUG os_vif [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ed:8a,bridge_name='br-int',has_traffic_filtering=True,id=786c06cb-4344-4e72-b40a-f3aa914d037c,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap786c06cb-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.450 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.451 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.451 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.454 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.454 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap786c06cb-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.455 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap786c06cb-43, col_values=(('external_ids', {'iface-id': '786c06cb-4344-4e72-b40a-f3aa914d037c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1d:ed:8a', 'vm-uuid': '819151c3-9892-4926-879b-6bcf2047e116'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.456 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:02 compute-0 NetworkManager[48954]: <info>  [1769442902.4575] manager: (tap786c06cb-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.459 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.464 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.465 239969 INFO os_vif [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1d:ed:8a,bridge_name='br-int',has_traffic_filtering=True,id=786c06cb-4344-4e72-b40a-f3aa914d037c,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap786c06cb-43')
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.540 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.542 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.542 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:1d:ed:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.543 239969 INFO nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Using config drive
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.572 239969 DEBUG nova.storage.rbd_utils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 819151c3-9892-4926-879b-6bcf2047e116_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1228184885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.738 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.744 239969 DEBUG nova.compute.provider_tree [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.763 239969 DEBUG nova.scheduler.client.report [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.795 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.796 239969 DEBUG nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.844 239969 DEBUG nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.845 239969 DEBUG nova.network.neutron [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:55:02 compute-0 ceph-mon[75140]: pgmap v1235: 305 pgs: 305 active+clean; 155 MiB data, 388 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.1 MiB/s wr, 137 op/s
Jan 26 15:55:02 compute-0 ceph-mon[75140]: osdmap e209: 3 total, 3 up, 3 in
Jan 26 15:55:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2535936510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1228184885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.863 239969 INFO nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.881 239969 DEBUG nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.938 239969 INFO nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Creating config drive at /var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/disk.config
Jan 26 15:55:02 compute-0 nova_compute[239965]: 2026-01-26 15:55:02.947 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0rw9g23w execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.046 239969 DEBUG nova.policy [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b6639ef07f74c9ebad76dffa361ec4e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b3f0866575347bd80cdc80b692f07f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.051 239969 DEBUG nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.053 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.054 239969 INFO nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Creating image(s)
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.074 239969 DEBUG nova.storage.rbd_utils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.096 239969 DEBUG nova.storage.rbd_utils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.115 239969 DEBUG nova.storage.rbd_utils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.117 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.144 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0rw9g23w" returned: 0 in 0.198s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.169 239969 DEBUG nova.storage.rbd_utils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 819151c3-9892-4926-879b-6bcf2047e116_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.172 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/disk.config 819151c3-9892-4926-879b-6bcf2047e116_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.200 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.201 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.202 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.202 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.229 239969 DEBUG nova.storage.rbd_utils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.238 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.326 239969 DEBUG oslo_concurrency.processutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/disk.config 819151c3-9892-4926-879b-6bcf2047e116_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.327 239969 INFO nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Deleting local config drive /var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/disk.config because it was imported into RBD.
Jan 26 15:55:03 compute-0 kernel: tap786c06cb-43: entered promiscuous mode
Jan 26 15:55:03 compute-0 NetworkManager[48954]: <info>  [1769442903.3878] manager: (tap786c06cb-43): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.390 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:03 compute-0 ovn_controller[146046]: 2026-01-26T15:55:03Z|00206|binding|INFO|Claiming lport 786c06cb-4344-4e72-b40a-f3aa914d037c for this chassis.
Jan 26 15:55:03 compute-0 ovn_controller[146046]: 2026-01-26T15:55:03Z|00207|binding|INFO|786c06cb-4344-4e72-b40a-f3aa914d037c: Claiming fa:16:3e:1d:ed:8a 10.100.0.7
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.405 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:ed:8a 10.100.0.7'], port_security=['fa:16:3e:1d:ed:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '819151c3-9892-4926-879b-6bcf2047e116', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5f60fb61-4ff7-4a1e-9574-6be67928a1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=786c06cb-4344-4e72-b40a-f3aa914d037c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.406 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 786c06cb-4344-4e72-b40a-f3aa914d037c in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.408 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.433 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[92f3ba22-0d5d-4095-8c2b-c1c3cc8d17e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.434 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap91bcac0a-11 in ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:55:03 compute-0 systemd-udevd[272836]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.438 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap91bcac0a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.438 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a9262b-ac72-4e7d-a0e9-c368754974a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 systemd-machined[208061]: New machine qemu-39-instance-00000023.
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.441 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d65419c-d25c-4e35-9c2f-4dbb9b7a554b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.456 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[44632ad6-4e07-4a1a-a1ce-9066c92f3292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 NetworkManager[48954]: <info>  [1769442903.4590] device (tap786c06cb-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:55:03 compute-0 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Jan 26 15:55:03 compute-0 NetworkManager[48954]: <info>  [1769442903.4601] device (tap786c06cb-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.461 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:03 compute-0 ovn_controller[146046]: 2026-01-26T15:55:03Z|00208|binding|INFO|Setting lport 786c06cb-4344-4e72-b40a-f3aa914d037c ovn-installed in OVS
Jan 26 15:55:03 compute-0 ovn_controller[146046]: 2026-01-26T15:55:03Z|00209|binding|INFO|Setting lport 786c06cb-4344-4e72-b40a-f3aa914d037c up in Southbound
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.469 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.485 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8392fa5f-3cee-422c-b6f8-2b45587bf126]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.515 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[831feb32-0130-49bc-976e-161ec83bdc71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.522 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c4aaf18f-2852-41d1-9264-d3cd66e3864f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 NetworkManager[48954]: <info>  [1769442903.5237] manager: (tap91bcac0a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.537 239969 INFO nova.virt.libvirt.driver [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Snapshot image upload complete
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.538 239969 INFO nova.compute.manager [None req-f8473bed-94dd-4ad5-a656-7383511edf7e b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Took 5.24 seconds to snapshot the instance on the hypervisor.
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.546 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.559 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[00a9eb30-a0dd-4228-a35e-a727896bbdeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.562 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[24a43cbf-ae32-4b44-b318-794fdebe9106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 NetworkManager[48954]: <info>  [1769442903.5861] device (tap91bcac0a-10): carrier: link connected
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.591 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[96227049-a83f-4029-86af-01adef511970]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.612 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4229fe84-9bf9-4d6a-8c4b-65f239c3abdf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439439, 'reachable_time': 27465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272903, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.626 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e75ac9-2145-4f7c-a39c-ee6f0791d5b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:d619'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439439, 'tstamp': 439439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272907, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.631 239969 DEBUG nova.storage.rbd_utils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] resizing rbd image b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.647 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e4404116-2663-45ec-b08c-030ec14b6952]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439439, 'reachable_time': 27465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272908, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.681 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[614ebbd1-b52d-4b29-90d2-ea96ed96780b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.741 239969 DEBUG nova.objects.instance [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lazy-loading 'migration_context' on Instance uuid b8a4a981-46c3-4b65-ac27-0a742bb0caeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 180 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 4.7 MiB/s wr, 226 op/s
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.759 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bd494d-9107-4ca7-a40f-ecb5aa85e657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.761 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.761 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.761 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:03 compute-0 kernel: tap91bcac0a-10: entered promiscuous mode
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.777 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.779 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Ensure instance console log exists: /var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.779 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.779 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:03 compute-0 NetworkManager[48954]: <info>  [1769442903.7802] manager: (tap91bcac0a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.780 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.780 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.782 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.783 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:03 compute-0 ovn_controller[146046]: 2026-01-26T15:55:03Z|00210|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.785 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:03 compute-0 nova_compute[239965]: 2026-01-26 15:55:03.802 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.803 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/91bcac0a-1926-4861-88ab-ae3c06f7e57e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/91bcac0a-1926-4861-88ab-ae3c06f7e57e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.804 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8b927e15-36c2-4f2e-a432-c19852a88631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.805 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/91bcac0a-1926-4861-88ab-ae3c06f7e57e.pid.haproxy
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:55:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:03.805 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'env', 'PROCESS_TAG=haproxy-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/91bcac0a-1926-4861-88ab-ae3c06f7e57e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.100 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442904.100426, 819151c3-9892-4926-879b-6bcf2047e116 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.101 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] VM Started (Lifecycle Event)
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.121 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.126 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442904.1005461, 819151c3-9892-4926-879b-6bcf2047e116 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.126 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] VM Paused (Lifecycle Event)
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.140 239969 DEBUG nova.network.neutron [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Successfully created port: e69ca133-3044-44b5-8c00-0780ebcf9b5e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.145 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.148 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:55:04 compute-0 podman[273018]: 2026-01-26 15:55:04.168371202 +0000 UTC m=+0.047643278 container create 080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.178 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:55:04 compute-0 systemd[1]: Started libpod-conmon-080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e.scope.
Jan 26 15:55:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:04 compute-0 podman[273018]: 2026-01-26 15:55:04.142161106 +0000 UTC m=+0.021433212 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:55:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edfc6f0f09920060f6a34eea4620015cf022a2125fbb7da8c7baac780dbef837/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:04 compute-0 podman[273018]: 2026-01-26 15:55:04.25557064 +0000 UTC m=+0.134842746 container init 080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:55:04 compute-0 podman[273018]: 2026-01-26 15:55:04.26094918 +0000 UTC m=+0.140221256 container start 080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 15:55:04 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[273033]: [NOTICE]   (273037) : New worker (273039) forked
Jan 26 15:55:04 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[273033]: [NOTICE]   (273037) : Loading success.
Jan 26 15:55:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.802 239969 DEBUG nova.compute.manager [req-462e4636-4ee1-45fb-bd07-9313f462d625 req-554d7c56-6c4c-4ad4-9c12-b595fb786423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-786c06cb-4344-4e72-b40a-f3aa914d037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.803 239969 DEBUG oslo_concurrency.lockutils [req-462e4636-4ee1-45fb-bd07-9313f462d625 req-554d7c56-6c4c-4ad4-9c12-b595fb786423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.803 239969 DEBUG oslo_concurrency.lockutils [req-462e4636-4ee1-45fb-bd07-9313f462d625 req-554d7c56-6c4c-4ad4-9c12-b595fb786423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.803 239969 DEBUG oslo_concurrency.lockutils [req-462e4636-4ee1-45fb-bd07-9313f462d625 req-554d7c56-6c4c-4ad4-9c12-b595fb786423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.803 239969 DEBUG nova.compute.manager [req-462e4636-4ee1-45fb-bd07-9313f462d625 req-554d7c56-6c4c-4ad4-9c12-b595fb786423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Processing event network-vif-plugged-786c06cb-4344-4e72-b40a-f3aa914d037c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.804 239969 DEBUG nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.809 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.812 239969 INFO nova.virt.libvirt.driver [-] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Instance spawned successfully.
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.813 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.815 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442904.8149233, 819151c3-9892-4926-879b-6bcf2047e116 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.815 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] VM Resumed (Lifecycle Event)
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.846 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.846 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.847 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.847 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.848 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.848 239969 DEBUG nova.virt.libvirt.driver [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.852 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.855 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:55:04 compute-0 ceph-mon[75140]: pgmap v1237: 305 pgs: 305 active+clean; 180 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 4.7 MiB/s wr, 226 op/s
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.884 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.919 239969 INFO nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Took 8.70 seconds to spawn the instance on the hypervisor.
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.919 239969 DEBUG nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:04 compute-0 nova_compute[239965]: 2026-01-26 15:55:04.980 239969 INFO nova.compute.manager [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Took 9.79 seconds to build instance.
Jan 26 15:55:05 compute-0 nova_compute[239965]: 2026-01-26 15:55:05.003 239969 DEBUG oslo_concurrency.lockutils [None req-cc0e36de-09a6-4140-b793-ff097ef6f6c4 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:05 compute-0 nova_compute[239965]: 2026-01-26 15:55:05.115 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:05 compute-0 nova_compute[239965]: 2026-01-26 15:55:05.621 239969 DEBUG nova.network.neutron [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Successfully updated port: e69ca133-3044-44b5-8c00-0780ebcf9b5e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:55:05 compute-0 nova_compute[239965]: 2026-01-26 15:55:05.639 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "refresh_cache-b8a4a981-46c3-4b65-ac27-0a742bb0caeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:05 compute-0 nova_compute[239965]: 2026-01-26 15:55:05.640 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquired lock "refresh_cache-b8a4a981-46c3-4b65-ac27-0a742bb0caeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:05 compute-0 nova_compute[239965]: 2026-01-26 15:55:05.640 239969 DEBUG nova.network.neutron [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:55:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 198 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.1 MiB/s wr, 284 op/s
Jan 26 15:55:05 compute-0 nova_compute[239965]: 2026-01-26 15:55:05.851 239969 DEBUG nova.network.neutron [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.754 239969 DEBUG nova.network.neutron [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Updating instance_info_cache with network_info: [{"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.776 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Releasing lock "refresh_cache-b8a4a981-46c3-4b65-ac27-0a742bb0caeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.777 239969 DEBUG nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Instance network_info: |[{"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.779 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Start _get_guest_xml network_info=[{"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.786 239969 WARNING nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.794 239969 DEBUG nova.virt.libvirt.host [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.796 239969 DEBUG nova.virt.libvirt.host [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.800 239969 DEBUG nova.virt.libvirt.host [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.800 239969 DEBUG nova.virt.libvirt.host [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.801 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.801 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.802 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.802 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.802 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.803 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.803 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.803 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.804 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.804 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.804 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.804 239969 DEBUG nova.virt.hardware [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:55:06 compute-0 nova_compute[239965]: 2026-01-26 15:55:06.807 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Jan 26 15:55:06 compute-0 ceph-mon[75140]: pgmap v1238: 305 pgs: 305 active+clean; 198 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.1 MiB/s wr, 284 op/s
Jan 26 15:55:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Jan 26 15:55:06 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Jan 26 15:55:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:55:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/310006786' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.396 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.416 239969 DEBUG nova.storage.rbd_utils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.419 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.459 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.535 239969 DEBUG nova.compute.manager [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-786c06cb-4344-4e72-b40a-f3aa914d037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.536 239969 DEBUG oslo_concurrency.lockutils [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.536 239969 DEBUG oslo_concurrency.lockutils [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.537 239969 DEBUG oslo_concurrency.lockutils [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.537 239969 DEBUG nova.compute.manager [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-786c06cb-4344-4e72-b40a-f3aa914d037c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.537 239969 WARNING nova.compute.manager [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-786c06cb-4344-4e72-b40a-f3aa914d037c for instance with vm_state active and task_state None.
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.537 239969 DEBUG nova.compute.manager [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Received event network-changed-e69ca133-3044-44b5-8c00-0780ebcf9b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.538 239969 DEBUG nova.compute.manager [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Refreshing instance network info cache due to event network-changed-e69ca133-3044-44b5-8c00-0780ebcf9b5e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.538 239969 DEBUG oslo_concurrency.lockutils [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b8a4a981-46c3-4b65-ac27-0a742bb0caeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.538 239969 DEBUG oslo_concurrency.lockutils [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b8a4a981-46c3-4b65-ac27-0a742bb0caeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:07 compute-0 nova_compute[239965]: 2026-01-26 15:55:07.539 239969 DEBUG nova.network.neutron [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Refreshing network info cache for port e69ca133-3044-44b5-8c00-0780ebcf9b5e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:55:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 299 active+clean; 227 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.7 MiB/s wr, 295 op/s
Jan 26 15:55:07 compute-0 ceph-mon[75140]: osdmap e210: 3 total, 3 up, 3 in
Jan 26 15:55:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/310006786' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:55:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362892164' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.046 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.047 239969 DEBUG nova.virt.libvirt.vif [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-677054834',display_name='tempest-ImagesOneServerNegativeTestJSON-server-677054834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-677054834',id=36,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b3f0866575347bd80cdc80b692f07f5',ramdisk_id='',reservation_id='r-z32u575p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-744790849',owner_user_
name='tempest-ImagesOneServerNegativeTestJSON-744790849-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:02Z,user_data=None,user_id='8b6639ef07f74c9ebad76dffa361ec4e',uuid=b8a4a981-46c3-4b65-ac27-0a742bb0caeb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.048 239969 DEBUG nova.network.os_vif_util [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converting VIF {"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.048 239969 DEBUG nova.network.os_vif_util [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:a7:e5,bridge_name='br-int',has_traffic_filtering=True,id=e69ca133-3044-44b5-8c00-0780ebcf9b5e,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape69ca133-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.049 239969 DEBUG nova.objects.instance [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid b8a4a981-46c3-4b65-ac27-0a742bb0caeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.069 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <uuid>b8a4a981-46c3-4b65-ac27-0a742bb0caeb</uuid>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <name>instance-00000024</name>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-677054834</nova:name>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:55:06</nova:creationTime>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <nova:user uuid="8b6639ef07f74c9ebad76dffa361ec4e">tempest-ImagesOneServerNegativeTestJSON-744790849-project-member</nova:user>
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <nova:project uuid="1b3f0866575347bd80cdc80b692f07f5">tempest-ImagesOneServerNegativeTestJSON-744790849</nova:project>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <nova:port uuid="e69ca133-3044-44b5-8c00-0780ebcf9b5e">
Jan 26 15:55:08 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <system>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <entry name="serial">b8a4a981-46c3-4b65-ac27-0a742bb0caeb</entry>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <entry name="uuid">b8a4a981-46c3-4b65-ac27-0a742bb0caeb</entry>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     </system>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <os>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   </os>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <features>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   </features>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk">
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       </source>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk.config">
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       </source>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:55:08 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e9:a7:e5"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <target dev="tape69ca133-30"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb/console.log" append="off"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <video>
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     </video>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:55:08 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:55:08 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:55:08 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:55:08 compute-0 nova_compute[239965]: </domain>
Jan 26 15:55:08 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.071 239969 DEBUG nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Preparing to wait for external event network-vif-plugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.072 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.072 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.072 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.073 239969 DEBUG nova.virt.libvirt.vif [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-677054834',display_name='tempest-ImagesOneServerNegativeTestJSON-server-677054834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-677054834',id=36,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b3f0866575347bd80cdc80b692f07f5',ramdisk_id='',reservation_id='r-z32u575p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-744790849',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-744790849-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:02Z,user_data=None,user_id='8b6639ef07f74c9ebad76dffa361ec4e',uuid=b8a4a981-46c3-4b65-ac27-0a742bb0caeb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.073 239969 DEBUG nova.network.os_vif_util [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converting VIF {"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.074 239969 DEBUG nova.network.os_vif_util [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:a7:e5,bridge_name='br-int',has_traffic_filtering=True,id=e69ca133-3044-44b5-8c00-0780ebcf9b5e,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape69ca133-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.074 239969 DEBUG os_vif [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:a7:e5,bridge_name='br-int',has_traffic_filtering=True,id=e69ca133-3044-44b5-8c00-0780ebcf9b5e,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape69ca133-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.075 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.075 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.076 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.080 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape69ca133-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.081 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape69ca133-30, col_values=(('external_ids', {'iface-id': 'e69ca133-3044-44b5-8c00-0780ebcf9b5e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:a7:e5', 'vm-uuid': 'b8a4a981-46c3-4b65-ac27-0a742bb0caeb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:08 compute-0 NetworkManager[48954]: <info>  [1769442908.0835] manager: (tape69ca133-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.091 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.094 239969 INFO os_vif [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:a7:e5,bridge_name='br-int',has_traffic_filtering=True,id=e69ca133-3044-44b5-8c00-0780ebcf9b5e,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape69ca133-30')
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.153 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.154 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.154 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No VIF found with MAC fa:16:3e:e9:a7:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.154 239969 INFO nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Using config drive
Jan 26 15:55:08 compute-0 nova_compute[239965]: 2026-01-26 15:55:08.178 239969 DEBUG nova.storage.rbd_utils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:08 compute-0 ceph-mon[75140]: pgmap v1240: 305 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 299 active+clean; 227 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.7 MiB/s wr, 295 op/s
Jan 26 15:55:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1362892164' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.309 239969 INFO nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Creating config drive at /var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb/disk.config
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.315 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps5_vfunj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.353 239969 DEBUG nova.compute.manager [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.415 239969 INFO nova.compute.manager [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] instance snapshotting
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.456 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps5_vfunj" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Jan 26 15:55:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Jan 26 15:55:09 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.496 239969 DEBUG nova.storage.rbd_utils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.499 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb/disk.config b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.642 239969 INFO nova.virt.libvirt.driver [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Beginning live snapshot process
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.646 239969 DEBUG oslo_concurrency.processutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb/disk.config b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.646 239969 INFO nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Deleting local config drive /var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb/disk.config because it was imported into RBD.
Jan 26 15:55:09 compute-0 kernel: tape69ca133-30: entered promiscuous mode
Jan 26 15:55:09 compute-0 NetworkManager[48954]: <info>  [1769442909.7014] manager: (tape69ca133-30): new Tun device (/org/freedesktop/NetworkManager/Devices/111)
Jan 26 15:55:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 299 active+clean; 227 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.0 MiB/s wr, 258 op/s
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.756 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:09 compute-0 ovn_controller[146046]: 2026-01-26T15:55:09Z|00211|binding|INFO|Claiming lport e69ca133-3044-44b5-8c00-0780ebcf9b5e for this chassis.
Jan 26 15:55:09 compute-0 ovn_controller[146046]: 2026-01-26T15:55:09Z|00212|binding|INFO|e69ca133-3044-44b5-8c00-0780ebcf9b5e: Claiming fa:16:3e:e9:a7:e5 10.100.0.3
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.774 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:a7:e5 10.100.0.3'], port_security=['fa:16:3e:e9:a7:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b8a4a981-46c3-4b65-ac27-0a742bb0caeb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12eb3028-37a4-48dc-836b-0978d0bddbce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b3f0866575347bd80cdc80b692f07f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd45551f-cdcc-4a64-9f48-bf991126bbec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef363ab1-8cec-4345-8968-ee11106642fa, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e69ca133-3044-44b5-8c00-0780ebcf9b5e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.775 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e69ca133-3044-44b5-8c00-0780ebcf9b5e in datapath 12eb3028-37a4-48dc-836b-0978d0bddbce bound to our chassis
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.777 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12eb3028-37a4-48dc-836b-0978d0bddbce
Jan 26 15:55:09 compute-0 systemd-udevd[273183]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:55:09 compute-0 systemd-machined[208061]: New machine qemu-40-instance-00000024.
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.803 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c1ccd85e-c715-43bc-8aef-1cecdb838a0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.804 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12eb3028-31 in ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:55:09 compute-0 NetworkManager[48954]: <info>  [1769442909.8062] device (tape69ca133-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:55:09 compute-0 NetworkManager[48954]: <info>  [1769442909.8072] device (tape69ca133-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.805 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12eb3028-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.806 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9dc933c0-cfaf-4800-bcf5-7de2fd548ac9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.808 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ee2324be-249b-4262-9fb4-32f75aa95728]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 systemd[1]: Started Virtual Machine qemu-40-instance-00000024.
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.822 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[75f1bdac-d457-4c78-80e5-eabefb2fac19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 ovn_controller[146046]: 2026-01-26T15:55:09Z|00213|binding|INFO|Setting lport e69ca133-3044-44b5-8c00-0780ebcf9b5e ovn-installed in OVS
Jan 26 15:55:09 compute-0 ovn_controller[146046]: 2026-01-26T15:55:09Z|00214|binding|INFO|Setting lport e69ca133-3044-44b5-8c00-0780ebcf9b5e up in Southbound
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.842 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.845 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[703e92eb-909b-4de7-a7d7-b658f131a6ca]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.881 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[12a662fc-f5be-456f-86a3-dc01d00e12b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 NetworkManager[48954]: <info>  [1769442909.8897] manager: (tap12eb3028-30): new Veth device (/org/freedesktop/NetworkManager/Devices/112)
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.890 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d3187437-86f5-4eb1-aa2b-d36ced7c7add]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.901 239969 DEBUG nova.network.neutron [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Updated VIF entry in instance network info cache for port e69ca133-3044-44b5-8c00-0780ebcf9b5e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:55:09 compute-0 nova_compute[239965]: 2026-01-26 15:55:09.902 239969 DEBUG nova.network.neutron [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Updating instance_info_cache with network_info: [{"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.932 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0636a5a0-285d-4178-8529-c8a86fd0ea43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.934 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[26dd50c9-ced9-4ecd-9b0f-31be71e54bca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 NetworkManager[48954]: <info>  [1769442909.9587] device (tap12eb3028-30): carrier: link connected
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.965 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cecc29ec-e975-4ab1-9160-f8b556e72719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:09.983 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[342ca0ca-32c0-457e-845f-178f958b08e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12eb3028-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c1:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440076, 'reachable_time': 30545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273230, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.000 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[907162f3-0ca4-41f4-bfb2-3becea6b48ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:c192'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 440076, 'tstamp': 440076}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273241, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:10 compute-0 sudo[273217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:55:10 compute-0 sudo[273217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:55:10 compute-0 sudo[273217]: pam_unix(sudo:session): session closed for user root
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.023 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff610a3-ac56-4fa9-93ed-eca7cb11d6e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12eb3028-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c1:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440076, 'reachable_time': 30545, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273244, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.030 239969 DEBUG oslo_concurrency.lockutils [req-481033c0-5826-49f4-925e-acb5731ac515 req-ad00d3ad-d397-488a-9033-279307c5111a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b8a4a981-46c3-4b65-ac27-0a742bb0caeb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.057 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[12c252f2-e3c0-4366-9d2b-3525a1ac526b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:10 compute-0 sudo[273245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:55:10 compute-0 sudo[273245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:55:10 compute-0 NetworkManager[48954]: <info>  [1769442910.0981] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/113)
Jan 26 15:55:10 compute-0 NetworkManager[48954]: <info>  [1769442910.0989] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.118 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d38b1339-7139-4117-8842-b235185fc39e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.122 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12eb3028-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.122 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.123 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12eb3028-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:10 compute-0 NetworkManager[48954]: <info>  [1769442910.1260] manager: (tap12eb3028-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Jan 26 15:55:10 compute-0 kernel: tap12eb3028-30: entered promiscuous mode
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.186 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12eb3028-30, col_values=(('external_ids', {'iface-id': '25ff8da3-57db-4715-8307-e7e4fd4fbd09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.188 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:10 compute-0 ovn_controller[146046]: 2026-01-26T15:55:10Z|00215|binding|INFO|Releasing lport 25ff8da3-57db-4715-8307-e7e4fd4fbd09 from this chassis (sb_readonly=0)
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.192 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12eb3028-37a4-48dc-836b-0978d0bddbce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12eb3028-37a4-48dc-836b-0978d0bddbce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:55:10 compute-0 ovn_controller[146046]: 2026-01-26T15:55:10Z|00216|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:55:10 compute-0 ovn_controller[146046]: 2026-01-26T15:55:10Z|00217|binding|INFO|Releasing lport b42e7ffc-fd24-4242-861e-74bdc067259a from this chassis (sb_readonly=0)
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.194 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c1a6bb-e622-4a8d-885e-fb26681319c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.194 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-12eb3028-37a4-48dc-836b-0978d0bddbce
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/12eb3028-37a4-48dc-836b-0978d0bddbce.pid.haproxy
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 12eb3028-37a4-48dc-836b-0978d0bddbce
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:55:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:10.195 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'env', 'PROCESS_TAG=haproxy-12eb3028-37a4-48dc-836b-0978d0bddbce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12eb3028-37a4-48dc-836b-0978d0bddbce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.211 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:10 compute-0 ovn_controller[146046]: 2026-01-26T15:55:10Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:15:bd 10.100.0.12
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.217 239969 DEBUG nova.virt.libvirt.imagebackend [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:55:10 compute-0 ovn_controller[146046]: 2026-01-26T15:55:10Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:15:bd 10.100.0.12
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.384 239969 DEBUG nova.compute.manager [req-12b93969-80aa-4127-9120-efd054731ca5 req-7fd3ae64-885d-417f-98b7-00b2c3671e66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Received event network-vif-plugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.388 239969 DEBUG oslo_concurrency.lockutils [req-12b93969-80aa-4127-9120-efd054731ca5 req-7fd3ae64-885d-417f-98b7-00b2c3671e66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.389 239969 DEBUG oslo_concurrency.lockutils [req-12b93969-80aa-4127-9120-efd054731ca5 req-7fd3ae64-885d-417f-98b7-00b2c3671e66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.390 239969 DEBUG oslo_concurrency.lockutils [req-12b93969-80aa-4127-9120-efd054731ca5 req-7fd3ae64-885d-417f-98b7-00b2c3671e66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.391 239969 DEBUG nova.compute.manager [req-12b93969-80aa-4127-9120-efd054731ca5 req-7fd3ae64-885d-417f-98b7-00b2c3671e66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Processing event network-vif-plugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:55:10 compute-0 ceph-mon[75140]: osdmap e211: 3 total, 3 up, 3 in
Jan 26 15:55:10 compute-0 ceph-mon[75140]: pgmap v1242: 305 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 299 active+clean; 227 MiB data, 442 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 5.0 MiB/s wr, 258 op/s
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.532 239969 DEBUG nova.storage.rbd_utils [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] creating snapshot(b690bc004e194de0845b14263f38d94e) on rbd image(d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.595 239969 DEBUG nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.598 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442910.5976832, b8a4a981-46c3-4b65-ac27-0a742bb0caeb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.599 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] VM Started (Lifecycle Event)
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.602 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.612 239969 INFO nova.virt.libvirt.driver [-] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Instance spawned successfully.
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.612 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:55:10 compute-0 podman[273410]: 2026-01-26 15:55:10.627348627 +0000 UTC m=+0.034438577 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:55:10 compute-0 sudo[273245]: pam_unix(sudo:session): session closed for user root
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.769 239969 DEBUG nova.compute.manager [req-ca10bb6d-75a0-4f27-a35d-14818c441433 req-c74aa314-5aa8-429c-8939-beb89ec39bdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-changed-786c06cb-4344-4e72-b40a-f3aa914d037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.769 239969 DEBUG nova.compute.manager [req-ca10bb6d-75a0-4f27-a35d-14818c441433 req-c74aa314-5aa8-429c-8939-beb89ec39bdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing instance network info cache due to event network-changed-786c06cb-4344-4e72-b40a-f3aa914d037c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.769 239969 DEBUG oslo_concurrency.lockutils [req-ca10bb6d-75a0-4f27-a35d-14818c441433 req-c74aa314-5aa8-429c-8939-beb89ec39bdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.769 239969 DEBUG oslo_concurrency.lockutils [req-ca10bb6d-75a0-4f27-a35d-14818c441433 req-c74aa314-5aa8-429c-8939-beb89ec39bdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.769 239969 DEBUG nova.network.neutron [req-ca10bb6d-75a0-4f27-a35d-14818c441433 req-c74aa314-5aa8-429c-8939-beb89ec39bdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing network info cache for port 786c06cb-4344-4e72-b40a-f3aa914d037c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:55:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:55:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:55:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:55:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:55:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.797 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.802 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.802 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.803 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.805 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.805 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.806 239969 DEBUG nova.virt.libvirt.driver [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.811 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:55:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:55:10 compute-0 podman[273410]: 2026-01-26 15:55:10.827332744 +0000 UTC m=+0.234422764 container create 659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 15:55:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:55:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:55:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:55:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:55:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:55:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.868 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.869 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442910.5994892, b8a4a981-46c3-4b65-ac27-0a742bb0caeb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.869 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] VM Paused (Lifecycle Event)
Jan 26 15:55:10 compute-0 systemd[1]: Started libpod-conmon-659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105.scope.
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.899 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.910 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442910.5997736, b8a4a981-46c3-4b65-ac27-0a742bb0caeb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.911 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] VM Resumed (Lifecycle Event)
Jan 26 15:55:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:10 compute-0 sudo[273436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:55:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d1f347e2802c901c0015fc7f254eec2f5b02cdf6f80bebec6c0d0fcb926778e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:10 compute-0 sudo[273436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:55:10 compute-0 sudo[273436]: pam_unix(sudo:session): session closed for user root
Jan 26 15:55:10 compute-0 podman[273410]: 2026-01-26 15:55:10.93958737 +0000 UTC m=+0.346677320 container init 659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 15:55:10 compute-0 podman[273410]: 2026-01-26 15:55:10.946775274 +0000 UTC m=+0.353865194 container start 659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.951 239969 INFO nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Took 7.90 seconds to spawn the instance on the hypervisor.
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.951 239969 DEBUG nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.970 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:10 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[273458]: [NOTICE]   (273468) : New worker (273486) forked
Jan 26 15:55:10 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[273458]: [NOTICE]   (273468) : Loading success.
Jan 26 15:55:10 compute-0 nova_compute[239965]: 2026-01-26 15:55:10.977 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:55:11 compute-0 sudo[273467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:55:11 compute-0 sudo[273467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:55:11 compute-0 nova_compute[239965]: 2026-01-26 15:55:11.038 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:55:11 compute-0 nova_compute[239965]: 2026-01-26 15:55:11.185 239969 INFO nova.compute.manager [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Took 9.28 seconds to build instance.
Jan 26 15:55:11 compute-0 nova_compute[239965]: 2026-01-26 15:55:11.219 239969 DEBUG oslo_concurrency.lockutils [None req-6c942160-a203-46f2-b809-60b407b49f21 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:11 compute-0 podman[273514]: 2026-01-26 15:55:11.370262229 +0000 UTC m=+0.060796607 container create eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_liskov, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 15:55:11 compute-0 systemd[1]: Started libpod-conmon-eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053.scope.
Jan 26 15:55:11 compute-0 podman[273514]: 2026-01-26 15:55:11.345345254 +0000 UTC m=+0.035879652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:55:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:11 compute-0 podman[273514]: 2026-01-26 15:55:11.472277287 +0000 UTC m=+0.162811655 container init eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:55:11 compute-0 podman[273514]: 2026-01-26 15:55:11.47938293 +0000 UTC m=+0.169917298 container start eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_liskov, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:55:11 compute-0 podman[273514]: 2026-01-26 15:55:11.484022552 +0000 UTC m=+0.174556930 container attach eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 15:55:11 compute-0 elastic_liskov[273527]: 167 167
Jan 26 15:55:11 compute-0 systemd[1]: libpod-eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053.scope: Deactivated successfully.
Jan 26 15:55:11 compute-0 conmon[273527]: conmon eff638e052120a8549e5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053.scope/container/memory.events
Jan 26 15:55:11 compute-0 podman[273514]: 2026-01-26 15:55:11.488315746 +0000 UTC m=+0.178850114 container died eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 15:55:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Jan 26 15:55:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:55:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:55:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:55:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:55:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:55:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:55:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Jan 26 15:55:11 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Jan 26 15:55:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5943e17a3a127a8b27ffc635ab68a7f3f465bbcd6f7d1d7e315330afa93c41cf-merged.mount: Deactivated successfully.
Jan 26 15:55:11 compute-0 podman[273514]: 2026-01-26 15:55:11.541314944 +0000 UTC m=+0.231849312 container remove eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_liskov, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:55:11 compute-0 systemd[1]: libpod-conmon-eff638e052120a8549e5435e7823656da78a0ee905bf8a14c92899f1fec86053.scope: Deactivated successfully.
Jan 26 15:55:11 compute-0 nova_compute[239965]: 2026-01-26 15:55:11.562 239969 DEBUG nova.storage.rbd_utils [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] cloning vms/d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk@b690bc004e194de0845b14263f38d94e to images/60d6b7fd-af34-4b90-a819-e6748b83b027 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:55:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 299 active+clean; 213 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 222 op/s
Jan 26 15:55:11 compute-0 podman[273586]: 2026-01-26 15:55:11.730255892 +0000 UTC m=+0.027476108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:55:11 compute-0 podman[273586]: 2026-01-26 15:55:11.834963345 +0000 UTC m=+0.132183531 container create 5720045ba623fa9392e1e026f65fe8c7f4fdc1ff5f6dacdf16a2d9328a53aed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 15:55:11 compute-0 nova_compute[239965]: 2026-01-26 15:55:11.857 239969 DEBUG nova.storage.rbd_utils [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] flattening images/60d6b7fd-af34-4b90-a819-e6748b83b027 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:55:11 compute-0 systemd[1]: Started libpod-conmon-5720045ba623fa9392e1e026f65fe8c7f4fdc1ff5f6dacdf16a2d9328a53aed3.scope.
Jan 26 15:55:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eada335c45569c1bfe49ff6f8237cf23bf6c4a2e1b127e138ca6b82860594e72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eada335c45569c1bfe49ff6f8237cf23bf6c4a2e1b127e138ca6b82860594e72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eada335c45569c1bfe49ff6f8237cf23bf6c4a2e1b127e138ca6b82860594e72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eada335c45569c1bfe49ff6f8237cf23bf6c4a2e1b127e138ca6b82860594e72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eada335c45569c1bfe49ff6f8237cf23bf6c4a2e1b127e138ca6b82860594e72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:11 compute-0 podman[273586]: 2026-01-26 15:55:11.941428361 +0000 UTC m=+0.238648577 container init 5720045ba623fa9392e1e026f65fe8c7f4fdc1ff5f6dacdf16a2d9328a53aed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_haibt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:55:11 compute-0 podman[273586]: 2026-01-26 15:55:11.949355384 +0000 UTC m=+0.246575570 container start 5720045ba623fa9392e1e026f65fe8c7f4fdc1ff5f6dacdf16a2d9328a53aed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_haibt, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:55:11 compute-0 podman[273586]: 2026-01-26 15:55:11.953798372 +0000 UTC m=+0.251018558 container attach 5720045ba623fa9392e1e026f65fe8c7f4fdc1ff5f6dacdf16a2d9328a53aed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_haibt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 15:55:12 compute-0 nova_compute[239965]: 2026-01-26 15:55:12.328 239969 DEBUG nova.storage.rbd_utils [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] removing snapshot(b690bc004e194de0845b14263f38d94e) on rbd image(d6511f20-baa5-46ee-80c7-fbb3e13d5163_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:55:12 compute-0 funny_haibt[273618]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:55:12 compute-0 funny_haibt[273618]: --> All data devices are unavailable
Jan 26 15:55:12 compute-0 systemd[1]: libpod-5720045ba623fa9392e1e026f65fe8c7f4fdc1ff5f6dacdf16a2d9328a53aed3.scope: Deactivated successfully.
Jan 26 15:55:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Jan 26 15:55:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Jan 26 15:55:12 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Jan 26 15:55:12 compute-0 ceph-mon[75140]: osdmap e212: 3 total, 3 up, 3 in
Jan 26 15:55:12 compute-0 ceph-mon[75140]: pgmap v1244: 305 pgs: 3 active+clean+snaptrim_wait, 3 active+clean+snaptrim, 299 active+clean; 213 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.0 MiB/s wr, 222 op/s
Jan 26 15:55:12 compute-0 nova_compute[239965]: 2026-01-26 15:55:12.530 239969 DEBUG nova.storage.rbd_utils [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] creating snapshot(snap) on rbd image(60d6b7fd-af34-4b90-a819-e6748b83b027) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:55:12 compute-0 podman[273659]: 2026-01-26 15:55:12.541485944 +0000 UTC m=+0.033891974 container died 5720045ba623fa9392e1e026f65fe8c7f4fdc1ff5f6dacdf16a2d9328a53aed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_haibt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 15:55:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-eada335c45569c1bfe49ff6f8237cf23bf6c4a2e1b127e138ca6b82860594e72-merged.mount: Deactivated successfully.
Jan 26 15:55:12 compute-0 podman[273659]: 2026-01-26 15:55:12.597035213 +0000 UTC m=+0.089441223 container remove 5720045ba623fa9392e1e026f65fe8c7f4fdc1ff5f6dacdf16a2d9328a53aed3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_haibt, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:55:12 compute-0 systemd[1]: libpod-conmon-5720045ba623fa9392e1e026f65fe8c7f4fdc1ff5f6dacdf16a2d9328a53aed3.scope: Deactivated successfully.
Jan 26 15:55:12 compute-0 sudo[273467]: pam_unix(sudo:session): session closed for user root
Jan 26 15:55:12 compute-0 sudo[273691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:55:12 compute-0 sudo[273691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:55:12 compute-0 sudo[273691]: pam_unix(sudo:session): session closed for user root
Jan 26 15:55:12 compute-0 sudo[273716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:55:12 compute-0 sudo[273716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:55:12 compute-0 nova_compute[239965]: 2026-01-26 15:55:12.980 239969 DEBUG nova.compute.manager [req-f37b317d-ef53-465a-8f22-8aed48c0cf24 req-80fb6205-d2b8-47ab-bf46-78edd78bce9a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Received event network-vif-plugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:12 compute-0 nova_compute[239965]: 2026-01-26 15:55:12.981 239969 DEBUG oslo_concurrency.lockutils [req-f37b317d-ef53-465a-8f22-8aed48c0cf24 req-80fb6205-d2b8-47ab-bf46-78edd78bce9a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:12 compute-0 nova_compute[239965]: 2026-01-26 15:55:12.981 239969 DEBUG oslo_concurrency.lockutils [req-f37b317d-ef53-465a-8f22-8aed48c0cf24 req-80fb6205-d2b8-47ab-bf46-78edd78bce9a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:12 compute-0 nova_compute[239965]: 2026-01-26 15:55:12.981 239969 DEBUG oslo_concurrency.lockutils [req-f37b317d-ef53-465a-8f22-8aed48c0cf24 req-80fb6205-d2b8-47ab-bf46-78edd78bce9a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:12 compute-0 nova_compute[239965]: 2026-01-26 15:55:12.982 239969 DEBUG nova.compute.manager [req-f37b317d-ef53-465a-8f22-8aed48c0cf24 req-80fb6205-d2b8-47ab-bf46-78edd78bce9a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] No waiting events found dispatching network-vif-plugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:12 compute-0 nova_compute[239965]: 2026-01-26 15:55:12.982 239969 WARNING nova.compute.manager [req-f37b317d-ef53-465a-8f22-8aed48c0cf24 req-80fb6205-d2b8-47ab-bf46-78edd78bce9a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Received unexpected event network-vif-plugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e for instance with vm_state active and task_state None.
Jan 26 15:55:13 compute-0 nova_compute[239965]: 2026-01-26 15:55:13.083 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:13 compute-0 podman[273752]: 2026-01-26 15:55:13.056853521 +0000 UTC m=+0.034615482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:55:13 compute-0 podman[273752]: 2026-01-26 15:55:13.32240867 +0000 UTC m=+0.300170601 container create 3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:55:13 compute-0 systemd[1]: Started libpod-conmon-3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef.scope.
Jan 26 15:55:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:13 compute-0 podman[273752]: 2026-01-26 15:55:13.408303886 +0000 UTC m=+0.386065817 container init 3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:55:13 compute-0 podman[273752]: 2026-01-26 15:55:13.417346846 +0000 UTC m=+0.395108777 container start 3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 15:55:13 compute-0 podman[273752]: 2026-01-26 15:55:13.422313547 +0000 UTC m=+0.400075508 container attach 3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:55:13 compute-0 lucid_lewin[273768]: 167 167
Jan 26 15:55:13 compute-0 systemd[1]: libpod-3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef.scope: Deactivated successfully.
Jan 26 15:55:13 compute-0 conmon[273768]: conmon 3c18a426bc50ba419b69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef.scope/container/memory.events
Jan 26 15:55:13 compute-0 podman[273752]: 2026-01-26 15:55:13.425385471 +0000 UTC m=+0.403147402 container died 3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:55:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-9857a200b71de877f8cdc070bc4c54ffbf9633163a2070a2550e558c71805b42-merged.mount: Deactivated successfully.
Jan 26 15:55:13 compute-0 podman[273752]: 2026-01-26 15:55:13.473464398 +0000 UTC m=+0.451226329 container remove 3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 15:55:13 compute-0 systemd[1]: libpod-conmon-3c18a426bc50ba419b69e203758993ad89cf1d05650f385ccefc7eea761ef7ef.scope: Deactivated successfully.
Jan 26 15:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Jan 26 15:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Jan 26 15:55:13 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Jan 26 15:55:13 compute-0 ceph-mon[75140]: osdmap e213: 3 total, 3 up, 3 in
Jan 26 15:55:13 compute-0 podman[273792]: 2026-01-26 15:55:13.695010809 +0000 UTC m=+0.048149450 container create fc44c81e174ce25fa095e55e8d5f8fb996936c26ac4f5d6305178f21496e7615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:55:13 compute-0 nova_compute[239965]: 2026-01-26 15:55:13.722 239969 DEBUG nova.network.neutron [req-ca10bb6d-75a0-4f27-a35d-14818c441433 req-c74aa314-5aa8-429c-8939-beb89ec39bdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updated VIF entry in instance network info cache for port 786c06cb-4344-4e72-b40a-f3aa914d037c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:55:13 compute-0 nova_compute[239965]: 2026-01-26 15:55:13.722 239969 DEBUG nova.network.neutron [req-ca10bb6d-75a0-4f27-a35d-14818c441433 req-c74aa314-5aa8-429c-8939-beb89ec39bdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:13 compute-0 systemd[1]: Started libpod-conmon-fc44c81e174ce25fa095e55e8d5f8fb996936c26ac4f5d6305178f21496e7615.scope.
Jan 26 15:55:13 compute-0 nova_compute[239965]: 2026-01-26 15:55:13.748 239969 DEBUG oslo_concurrency.lockutils [req-ca10bb6d-75a0-4f27-a35d-14818c441433 req-c74aa314-5aa8-429c-8939-beb89ec39bdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 260 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 14 MiB/s rd, 13 MiB/s wr, 545 op/s
Jan 26 15:55:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:13 compute-0 podman[273792]: 2026-01-26 15:55:13.673333763 +0000 UTC m=+0.026472434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c15aed559ef8bc9d94bca4a8767b2fa87be639589c1b1cbce4aa4a7b7a8ad2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c15aed559ef8bc9d94bca4a8767b2fa87be639589c1b1cbce4aa4a7b7a8ad2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c15aed559ef8bc9d94bca4a8767b2fa87be639589c1b1cbce4aa4a7b7a8ad2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c15aed559ef8bc9d94bca4a8767b2fa87be639589c1b1cbce4aa4a7b7a8ad2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:13 compute-0 podman[273792]: 2026-01-26 15:55:13.78606093 +0000 UTC m=+0.139199591 container init fc44c81e174ce25fa095e55e8d5f8fb996936c26ac4f5d6305178f21496e7615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shockley, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:55:13 compute-0 podman[273792]: 2026-01-26 15:55:13.793200663 +0000 UTC m=+0.146339304 container start fc44c81e174ce25fa095e55e8d5f8fb996936c26ac4f5d6305178f21496e7615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shockley, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:55:13 compute-0 podman[273792]: 2026-01-26 15:55:13.798060342 +0000 UTC m=+0.151199003 container attach fc44c81e174ce25fa095e55e8d5f8fb996936c26ac4f5d6305178f21496e7615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 15:55:14 compute-0 busy_shockley[273808]: {
Jan 26 15:55:14 compute-0 busy_shockley[273808]:     "0": [
Jan 26 15:55:14 compute-0 busy_shockley[273808]:         {
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "devices": [
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "/dev/loop3"
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             ],
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_name": "ceph_lv0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_size": "21470642176",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "name": "ceph_lv0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "tags": {
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.cluster_name": "ceph",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.crush_device_class": "",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.encrypted": "0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.objectstore": "bluestore",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.osd_id": "0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.type": "block",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.vdo": "0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.with_tpm": "0"
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             },
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "type": "block",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "vg_name": "ceph_vg0"
Jan 26 15:55:14 compute-0 busy_shockley[273808]:         }
Jan 26 15:55:14 compute-0 busy_shockley[273808]:     ],
Jan 26 15:55:14 compute-0 busy_shockley[273808]:     "1": [
Jan 26 15:55:14 compute-0 busy_shockley[273808]:         {
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "devices": [
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "/dev/loop4"
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             ],
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_name": "ceph_lv1",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_size": "21470642176",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "name": "ceph_lv1",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "tags": {
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.cluster_name": "ceph",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.crush_device_class": "",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.encrypted": "0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.objectstore": "bluestore",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.osd_id": "1",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.type": "block",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.vdo": "0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.with_tpm": "0"
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             },
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "type": "block",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "vg_name": "ceph_vg1"
Jan 26 15:55:14 compute-0 busy_shockley[273808]:         }
Jan 26 15:55:14 compute-0 busy_shockley[273808]:     ],
Jan 26 15:55:14 compute-0 busy_shockley[273808]:     "2": [
Jan 26 15:55:14 compute-0 busy_shockley[273808]:         {
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "devices": [
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "/dev/loop5"
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             ],
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_name": "ceph_lv2",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_size": "21470642176",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "name": "ceph_lv2",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "tags": {
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.cluster_name": "ceph",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.crush_device_class": "",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.encrypted": "0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.objectstore": "bluestore",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.osd_id": "2",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.type": "block",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.vdo": "0",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:                 "ceph.with_tpm": "0"
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             },
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "type": "block",
Jan 26 15:55:14 compute-0 busy_shockley[273808]:             "vg_name": "ceph_vg2"
Jan 26 15:55:14 compute-0 busy_shockley[273808]:         }
Jan 26 15:55:14 compute-0 busy_shockley[273808]:     ]
Jan 26 15:55:14 compute-0 busy_shockley[273808]: }
Jan 26 15:55:14 compute-0 systemd[1]: libpod-fc44c81e174ce25fa095e55e8d5f8fb996936c26ac4f5d6305178f21496e7615.scope: Deactivated successfully.
Jan 26 15:55:14 compute-0 podman[273817]: 2026-01-26 15:55:14.189777556 +0000 UTC m=+0.027368737 container died fc44c81e174ce25fa095e55e8d5f8fb996936c26ac4f5d6305178f21496e7615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 15:55:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c15aed559ef8bc9d94bca4a8767b2fa87be639589c1b1cbce4aa4a7b7a8ad2-merged.mount: Deactivated successfully.
Jan 26 15:55:14 compute-0 podman[273817]: 2026-01-26 15:55:14.248123352 +0000 UTC m=+0.085714503 container remove fc44c81e174ce25fa095e55e8d5f8fb996936c26ac4f5d6305178f21496e7615 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:55:14 compute-0 systemd[1]: libpod-conmon-fc44c81e174ce25fa095e55e8d5f8fb996936c26ac4f5d6305178f21496e7615.scope: Deactivated successfully.
Jan 26 15:55:14 compute-0 sudo[273716]: pam_unix(sudo:session): session closed for user root
Jan 26 15:55:14 compute-0 sudo[273829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:55:14 compute-0 sudo[273829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:55:14 compute-0 sudo[273829]: pam_unix(sudo:session): session closed for user root
Jan 26 15:55:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:14 compute-0 sudo[273854]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:55:14 compute-0 sudo[273854]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:55:14 compute-0 ceph-mon[75140]: osdmap e214: 3 total, 3 up, 3 in
Jan 26 15:55:14 compute-0 ceph-mon[75140]: pgmap v1247: 305 pgs: 305 active+clean; 260 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 14 MiB/s rd, 13 MiB/s wr, 545 op/s
Jan 26 15:55:14 compute-0 nova_compute[239965]: 2026-01-26 15:55:14.550 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquiring lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:14 compute-0 nova_compute[239965]: 2026-01-26 15:55:14.554 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:14 compute-0 nova_compute[239965]: 2026-01-26 15:55:14.574 239969 DEBUG nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:55:14 compute-0 nova_compute[239965]: 2026-01-26 15:55:14.674 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:14 compute-0 nova_compute[239965]: 2026-01-26 15:55:14.677 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:14 compute-0 nova_compute[239965]: 2026-01-26 15:55:14.694 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:55:14 compute-0 nova_compute[239965]: 2026-01-26 15:55:14.695 239969 INFO nova.compute.claims [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:55:14 compute-0 podman[273890]: 2026-01-26 15:55:14.851883126 +0000 UTC m=+0.075668999 container create 037bf0858b5d84c98b7060c8d0145f576030eee6629cda7ae51a3b4a870fcd53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_knuth, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 15:55:14 compute-0 nova_compute[239965]: 2026-01-26 15:55:14.855 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:14 compute-0 podman[273890]: 2026-01-26 15:55:14.810847748 +0000 UTC m=+0.034633641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:55:14 compute-0 systemd[1]: Started libpod-conmon-037bf0858b5d84c98b7060c8d0145f576030eee6629cda7ae51a3b4a870fcd53.scope.
Jan 26 15:55:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:14 compute-0 podman[273890]: 2026-01-26 15:55:14.960408511 +0000 UTC m=+0.184194414 container init 037bf0858b5d84c98b7060c8d0145f576030eee6629cda7ae51a3b4a870fcd53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 15:55:14 compute-0 podman[273890]: 2026-01-26 15:55:14.968719203 +0000 UTC m=+0.192505076 container start 037bf0858b5d84c98b7060c8d0145f576030eee6629cda7ae51a3b4a870fcd53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_knuth, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:55:14 compute-0 podman[273890]: 2026-01-26 15:55:14.972886274 +0000 UTC m=+0.196672147 container attach 037bf0858b5d84c98b7060c8d0145f576030eee6629cda7ae51a3b4a870fcd53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:55:14 compute-0 ecstatic_knuth[273906]: 167 167
Jan 26 15:55:14 compute-0 systemd[1]: libpod-037bf0858b5d84c98b7060c8d0145f576030eee6629cda7ae51a3b4a870fcd53.scope: Deactivated successfully.
Jan 26 15:55:14 compute-0 podman[273890]: 2026-01-26 15:55:14.977618459 +0000 UTC m=+0.201404332 container died 037bf0858b5d84c98b7060c8d0145f576030eee6629cda7ae51a3b4a870fcd53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_knuth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-74f3317963ca8586c4a5cd402f2bf48cbf522985b13a0f8808e35f1570f22eb6-merged.mount: Deactivated successfully.
Jan 26 15:55:15 compute-0 podman[273890]: 2026-01-26 15:55:15.022834587 +0000 UTC m=+0.246620460 container remove 037bf0858b5d84c98b7060c8d0145f576030eee6629cda7ae51a3b4a870fcd53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 15:55:15 compute-0 systemd[1]: libpod-conmon-037bf0858b5d84c98b7060c8d0145f576030eee6629cda7ae51a3b4a870fcd53.scope: Deactivated successfully.
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.133 239969 INFO nova.virt.libvirt.driver [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Snapshot image upload complete
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.134 239969 INFO nova.compute.manager [None req-6bdffa86-879a-4852-8992-4e5048a3f5db b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Took 5.72 seconds to snapshot the instance on the hypervisor.
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:15 compute-0 podman[273947]: 2026-01-26 15:55:15.234538859 +0000 UTC m=+0.053982183 container create 9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:55:15 compute-0 systemd[1]: Started libpod-conmon-9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58.scope.
Jan 26 15:55:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:15 compute-0 podman[273947]: 2026-01-26 15:55:15.21113818 +0000 UTC m=+0.030581514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5689bef82630c6d227fef4c9683d1a89c138d6e9be11d0c330c1e735d881c84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5689bef82630c6d227fef4c9683d1a89c138d6e9be11d0c330c1e735d881c84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5689bef82630c6d227fef4c9683d1a89c138d6e9be11d0c330c1e735d881c84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5689bef82630c6d227fef4c9683d1a89c138d6e9be11d0c330c1e735d881c84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723539986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.486 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.493 239969 DEBUG nova.compute.provider_tree [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.509 239969 DEBUG nova.scheduler.client.report [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.532 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.533 239969 DEBUG nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.573 239969 DEBUG nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.574 239969 DEBUG nova.network.neutron [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.593 239969 INFO nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.618 239969 DEBUG nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.706 239969 DEBUG nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.708 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:55:15 compute-0 nova_compute[239965]: 2026-01-26 15:55:15.708 239969 INFO nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Creating image(s)
Jan 26 15:55:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 279 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 11 MiB/s wr, 443 op/s
Jan 26 15:55:15 compute-0 podman[273947]: 2026-01-26 15:55:15.957401104 +0000 UTC m=+0.776844458 container init 9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lalande, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:55:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/723539986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:15 compute-0 podman[273947]: 2026-01-26 15:55:15.965818739 +0000 UTC m=+0.785262063 container start 9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lalande, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 15:55:15 compute-0 podman[273947]: 2026-01-26 15:55:15.985449615 +0000 UTC m=+0.804892929 container attach 9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lalande, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.028 239969 DEBUG nova.storage.rbd_utils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] rbd image de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.068 239969 DEBUG nova.storage.rbd_utils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] rbd image de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.107 239969 DEBUG nova.storage.rbd_utils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] rbd image de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.115 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.151 239969 DEBUG nova.policy [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed38e14a163d491a97b7e0d25836dc2e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '009c08c2c07248cbb7df2b6fcd0060e3', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.194 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.195 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.195 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.196 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.230 239969 DEBUG nova.storage.rbd_utils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] rbd image de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.245 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:16 compute-0 lvm[274135]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:55:16 compute-0 lvm[274134]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:55:16 compute-0 lvm[274134]: VG ceph_vg0 finished
Jan 26 15:55:16 compute-0 lvm[274135]: VG ceph_vg1 finished
Jan 26 15:55:16 compute-0 lvm[274137]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:55:16 compute-0 lvm[274137]: VG ceph_vg2 finished
Jan 26 15:55:16 compute-0 nova_compute[239965]: 2026-01-26 15:55:16.782 239969 DEBUG nova.network.neutron [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Successfully created port: 9300df63-810a-43e0-9363-92262754c92d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:55:16 compute-0 lvm[274139]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:55:16 compute-0 lvm[274139]: VG ceph_vg2 finished
Jan 26 15:55:16 compute-0 busy_lalande[273963]: {}
Jan 26 15:55:16 compute-0 lvm[274141]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:55:16 compute-0 lvm[274141]: VG ceph_vg2 finished
Jan 26 15:55:16 compute-0 systemd[1]: libpod-9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58.scope: Deactivated successfully.
Jan 26 15:55:16 compute-0 systemd[1]: libpod-9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58.scope: Consumed 1.347s CPU time.
Jan 26 15:55:16 compute-0 podman[273947]: 2026-01-26 15:55:16.876873665 +0000 UTC m=+1.696316989 container died 9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 15:55:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5689bef82630c6d227fef4c9683d1a89c138d6e9be11d0c330c1e735d881c84-merged.mount: Deactivated successfully.
Jan 26 15:55:16 compute-0 podman[273947]: 2026-01-26 15:55:16.94954434 +0000 UTC m=+1.768987664 container remove 9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_lalande, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 15:55:16 compute-0 systemd[1]: libpod-conmon-9267b0e941596a8ca6e05ffde404625f05e9b59da9c3dd54b93c0c23bbaeed58.scope: Deactivated successfully.
Jan 26 15:55:17 compute-0 ceph-mon[75140]: pgmap v1248: 305 pgs: 305 active+clean; 279 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 13 MiB/s rd, 11 MiB/s wr, 443 op/s
Jan 26 15:55:17 compute-0 sudo[273854]: pam_unix(sudo:session): session closed for user root
Jan 26 15:55:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:55:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Jan 26 15:55:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:55:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:55:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Jan 26 15:55:17 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Jan 26 15:55:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:55:17 compute-0 nova_compute[239965]: 2026-01-26 15:55:17.414 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:17 compute-0 sudo[274156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:55:17 compute-0 sudo[274156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:55:17 compute-0 sudo[274156]: pam_unix(sudo:session): session closed for user root
Jan 26 15:55:17 compute-0 nova_compute[239965]: 2026-01-26 15:55:17.508 239969 DEBUG nova.storage.rbd_utils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] resizing rbd image de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:55:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 305 active+clean; 314 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 13 MiB/s wr, 496 op/s
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.172 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.175 239969 DEBUG nova.network.neutron [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Successfully updated port: 9300df63-810a-43e0-9363-92262754c92d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.178 239969 DEBUG nova.compute.manager [req-986efc42-6e71-4ef1-8950-6eac1620e331 req-b528a68d-62c1-49b3-9b3a-e515fa8e6388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Received event network-changed-9300df63-810a-43e0-9363-92262754c92d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.178 239969 DEBUG nova.compute.manager [req-986efc42-6e71-4ef1-8950-6eac1620e331 req-b528a68d-62c1-49b3-9b3a-e515fa8e6388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Refreshing instance network info cache due to event network-changed-9300df63-810a-43e0-9363-92262754c92d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.178 239969 DEBUG oslo_concurrency.lockutils [req-986efc42-6e71-4ef1-8950-6eac1620e331 req-b528a68d-62c1-49b3-9b3a-e515fa8e6388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-de374e75-a6b9-4a59-bd2f-642cc808b9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.179 239969 DEBUG oslo_concurrency.lockutils [req-986efc42-6e71-4ef1-8950-6eac1620e331 req-b528a68d-62c1-49b3-9b3a-e515fa8e6388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-de374e75-a6b9-4a59-bd2f-642cc808b9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.179 239969 DEBUG nova.network.neutron [req-986efc42-6e71-4ef1-8950-6eac1620e331 req-b528a68d-62c1-49b3-9b3a-e515fa8e6388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Refreshing network info cache for port 9300df63-810a-43e0-9363-92262754c92d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.243 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquiring lock "refresh_cache-de374e75-a6b9-4a59-bd2f-642cc808b9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:55:18 compute-0 ceph-mon[75140]: osdmap e215: 3 total, 3 up, 3 in
Jan 26 15:55:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:55:18 compute-0 ceph-mon[75140]: pgmap v1250: 305 pgs: 305 active+clean; 314 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 13 MiB/s wr, 496 op/s
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.520 239969 DEBUG nova.objects.instance [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lazy-loading 'migration_context' on Instance uuid de374e75-a6b9-4a59-bd2f-642cc808b9e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.537 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.538 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Ensure instance console log exists: /var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.538 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.538 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.539 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.540 239969 DEBUG nova.network.neutron [req-986efc42-6e71-4ef1-8950-6eac1620e331 req-b528a68d-62c1-49b3-9b3a-e515fa8e6388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:55:18 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.641 239969 DEBUG nova.compute.manager [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.722 239969 INFO nova.compute.manager [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] instance snapshotting
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.952 239969 DEBUG nova.network.neutron [req-986efc42-6e71-4ef1-8950-6eac1620e331 req-b528a68d-62c1-49b3-9b3a-e515fa8e6388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.953 239969 INFO nova.virt.libvirt.driver [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Beginning live snapshot process
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.984 239969 DEBUG oslo_concurrency.lockutils [req-986efc42-6e71-4ef1-8950-6eac1620e331 req-b528a68d-62c1-49b3-9b3a-e515fa8e6388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-de374e75-a6b9-4a59-bd2f-642cc808b9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.986 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquired lock "refresh_cache-de374e75-a6b9-4a59-bd2f-642cc808b9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:18 compute-0 nova_compute[239965]: 2026-01-26 15:55:18.986 239969 DEBUG nova.network.neutron [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.121 239969 DEBUG nova.virt.libvirt.imagebackend [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.167 239969 DEBUG nova.network.neutron [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.404 239969 DEBUG nova.storage.rbd_utils [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] creating snapshot(62a856beaf8f49439806adf7e72b06a4) on rbd image(b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:55:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Jan 26 15:55:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Jan 26 15:55:19 compute-0 ovn_controller[146046]: 2026-01-26T15:55:19Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1d:ed:8a 10.100.0.7
Jan 26 15:55:19 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Jan 26 15:55:19 compute-0 ovn_controller[146046]: 2026-01-26T15:55:19Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1d:ed:8a 10.100.0.7
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.613 239969 DEBUG oslo_concurrency.lockutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquiring lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.614 239969 DEBUG oslo_concurrency.lockutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.614 239969 DEBUG oslo_concurrency.lockutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquiring lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.614 239969 DEBUG oslo_concurrency.lockutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.615 239969 DEBUG oslo_concurrency.lockutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.616 239969 INFO nova.compute.manager [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Terminating instance
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.617 239969 DEBUG nova.compute.manager [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:55:19 compute-0 kernel: tap1faa1913-d2 (unregistering): left promiscuous mode
Jan 26 15:55:19 compute-0 NetworkManager[48954]: <info>  [1769442919.6668] device (tap1faa1913-d2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.709 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:19 compute-0 ovn_controller[146046]: 2026-01-26T15:55:19Z|00218|binding|INFO|Releasing lport 1faa1913-d2b2-468b-b3a1-69ccd5286bff from this chassis (sb_readonly=0)
Jan 26 15:55:19 compute-0 ovn_controller[146046]: 2026-01-26T15:55:19Z|00219|binding|INFO|Setting lport 1faa1913-d2b2-468b-b3a1-69ccd5286bff down in Southbound
Jan 26 15:55:19 compute-0 ovn_controller[146046]: 2026-01-26T15:55:19Z|00220|binding|INFO|Removing iface tap1faa1913-d2 ovn-installed in OVS
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.712 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:19.721 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:15:bd 10.100.0.12'], port_security=['fa:16:3e:45:15:bd 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd6511f20-baa5-46ee-80c7-fbb3e13d5163', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd6524312aa97473dbf559a96f287a9b7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f7184b0-1c30-417a-a7e6-e39e091b9c66', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=78461b0f-389b-40aa-88db-5742b64d0a9b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1faa1913-d2b2-468b-b3a1-69ccd5286bff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:19.722 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1faa1913-d2b2-468b-b3a1-69ccd5286bff in datapath 8c72a58b-5da5-4af3-9fe6-bf85dce1faaa unbound from our chassis
Jan 26 15:55:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:19.726 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8c72a58b-5da5-4af3-9fe6-bf85dce1faaa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:55:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:19.727 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[091bf7d0-7632-4f39-bb54-e67c38f790b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:19.728 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa namespace which is not needed anymore
Jan 26 15:55:19 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.731 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:19 compute-0 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 13.903s CPU time.
Jan 26 15:55:19 compute-0 systemd-machined[208061]: Machine qemu-38-instance-00000022 terminated.
Jan 26 15:55:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 305 active+clean; 314 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 MiB/s wr, 168 op/s
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.852 239969 INFO nova.virt.libvirt.driver [-] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Instance destroyed successfully.
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.854 239969 DEBUG nova.objects.instance [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lazy-loading 'resources' on Instance uuid d6511f20-baa5-46ee-80c7-fbb3e13d5163 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.875 239969 DEBUG nova.virt.libvirt.vif [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-1645979738',display_name='tempest-ImagesOneServerTestJSON-server-1645979738',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-1645979738',id=34,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:54:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d6524312aa97473dbf559a96f287a9b7',ramdisk_id='',reservation_id='r-28dlaor7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-1585463751',owner_user_name='tempest-ImagesOneServerTestJSON-1585463751-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:15Z,user_data=None,user_id='b63a12d3960a490293ce7e52f058b753',uuid=d6511f20-baa5-46ee-80c7-fbb3e13d5163,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.876 239969 DEBUG nova.network.os_vif_util [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Converting VIF {"id": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "address": "fa:16:3e:45:15:bd", "network": {"id": "8c72a58b-5da5-4af3-9fe6-bf85dce1faaa", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-459796709-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d6524312aa97473dbf559a96f287a9b7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa1913-d2", "ovs_interfaceid": "1faa1913-d2b2-468b-b3a1-69ccd5286bff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.876 239969 DEBUG nova.network.os_vif_util [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=1faa1913-d2b2-468b-b3a1-69ccd5286bff,network=Network(8c72a58b-5da5-4af3-9fe6-bf85dce1faaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa1913-d2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.877 239969 DEBUG os_vif [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=1faa1913-d2b2-468b-b3a1-69ccd5286bff,network=Network(8c72a58b-5da5-4af3-9fe6-bf85dce1faaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa1913-d2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:55:19 compute-0 neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa[272316]: [NOTICE]   (272337) : haproxy version is 2.8.14-c23fe91
Jan 26 15:55:19 compute-0 neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa[272316]: [NOTICE]   (272337) : path to executable is /usr/sbin/haproxy
Jan 26 15:55:19 compute-0 neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa[272316]: [WARNING]  (272337) : Exiting Master process...
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.878 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.878 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1faa1913-d2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:19 compute-0 neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa[272316]: [ALERT]    (272337) : Current worker (272342) exited with code 143 (Terminated)
Jan 26 15:55:19 compute-0 neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa[272316]: [WARNING]  (272337) : All workers exited. Exiting... (0)
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.881 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.882 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:55:19 compute-0 systemd[1]: libpod-9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc.scope: Deactivated successfully.
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.884 239969 INFO os_vif [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:15:bd,bridge_name='br-int',has_traffic_filtering=True,id=1faa1913-d2b2-468b-b3a1-69ccd5286bff,network=Network(8c72a58b-5da5-4af3-9fe6-bf85dce1faaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa1913-d2')
Jan 26 15:55:19 compute-0 podman[274326]: 2026-01-26 15:55:19.890194087 +0000 UTC m=+0.057914387 container died 9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc-userdata-shm.mount: Deactivated successfully.
Jan 26 15:55:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3fcd9dfb69cd691bea593064e64a40a5f1b0351adc1fa1407aad72c8bc26964-merged.mount: Deactivated successfully.
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.979 239969 DEBUG nova.compute.manager [req-915e57fd-6f1c-40af-90fc-3b6c701da16e req-70fcd2dc-8406-4a9e-8521-0c36e41b39b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Received event network-vif-unplugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.980 239969 DEBUG oslo_concurrency.lockutils [req-915e57fd-6f1c-40af-90fc-3b6c701da16e req-70fcd2dc-8406-4a9e-8521-0c36e41b39b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.981 239969 DEBUG oslo_concurrency.lockutils [req-915e57fd-6f1c-40af-90fc-3b6c701da16e req-70fcd2dc-8406-4a9e-8521-0c36e41b39b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.982 239969 DEBUG oslo_concurrency.lockutils [req-915e57fd-6f1c-40af-90fc-3b6c701da16e req-70fcd2dc-8406-4a9e-8521-0c36e41b39b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.982 239969 DEBUG nova.compute.manager [req-915e57fd-6f1c-40af-90fc-3b6c701da16e req-70fcd2dc-8406-4a9e-8521-0c36e41b39b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] No waiting events found dispatching network-vif-unplugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:19 compute-0 nova_compute[239965]: 2026-01-26 15:55:19.982 239969 DEBUG nova.compute.manager [req-915e57fd-6f1c-40af-90fc-3b6c701da16e req-70fcd2dc-8406-4a9e-8521-0c36e41b39b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Received event network-vif-unplugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:55:19 compute-0 podman[274326]: 2026-01-26 15:55:19.986757563 +0000 UTC m=+0.154477843 container cleanup 9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 26 15:55:20 compute-0 systemd[1]: libpod-conmon-9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc.scope: Deactivated successfully.
Jan 26 15:55:20 compute-0 podman[274385]: 2026-01-26 15:55:20.100954947 +0000 UTC m=+0.089396893 container remove 9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:55:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:20.108 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06f22dcd-1836-404f-9740-847eebe4da28]: (4, ('Mon Jan 26 03:55:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa (9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc)\n9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc\nMon Jan 26 03:55:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa (9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc)\n9f4aa7be61cde73eb4c61dabca9ac0ab01b557be4fa808c2ddd8859534da6dfc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:20.111 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ae8f75-c31c-4806-814e-651cfdaf6481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:20.113 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8c72a58b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:20 compute-0 nova_compute[239965]: 2026-01-26 15:55:20.115 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:20 compute-0 kernel: tap8c72a58b-50: left promiscuous mode
Jan 26 15:55:20 compute-0 nova_compute[239965]: 2026-01-26 15:55:20.136 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:20.139 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ceab038f-ab64-44a5-82d8-f914ebde37c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:20.157 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba0788a-d845-40a0-a247-056e0b636dcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:20 compute-0 nova_compute[239965]: 2026-01-26 15:55:20.159 239969 DEBUG nova.network.neutron [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Updating instance_info_cache with network_info: [{"id": "9300df63-810a-43e0-9363-92262754c92d", "address": "fa:16:3e:c1:cf:6b", "network": {"id": "3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2131918329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "009c08c2c07248cbb7df2b6fcd0060e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9300df63-81", "ovs_interfaceid": "9300df63-810a-43e0-9363-92262754c92d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:20.161 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[608e70df-ac49-4841-b314-6b1d622ae8b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:20.179 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e9feb78b-751b-4da0-bdde-066f0d2f60ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 438647, 'reachable_time': 35778, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274400, 'error': None, 'target': 'ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d8c72a58b\x2d5da5\x2d4af3\x2d9fe6\x2dbf85dce1faaa.mount: Deactivated successfully.
Jan 26 15:55:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:20.183 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8c72a58b-5da5-4af3-9fe6-bf85dce1faaa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:55:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:20.183 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[47199877-28a2-411b-b7c0-4f75141d4832]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:20 compute-0 nova_compute[239965]: 2026-01-26 15:55:20.191 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:20 compute-0 nova_compute[239965]: 2026-01-26 15:55:20.341 239969 INFO nova.virt.libvirt.driver [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Deleting instance files /var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163_del
Jan 26 15:55:20 compute-0 nova_compute[239965]: 2026-01-26 15:55:20.342 239969 INFO nova.virt.libvirt.driver [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Deletion of /var/lib/nova/instances/d6511f20-baa5-46ee-80c7-fbb3e13d5163_del complete
Jan 26 15:55:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Jan 26 15:55:20 compute-0 ceph-mon[75140]: osdmap e216: 3 total, 3 up, 3 in
Jan 26 15:55:20 compute-0 ceph-mon[75140]: pgmap v1252: 305 pgs: 305 active+clean; 314 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 4.5 MiB/s wr, 168 op/s
Jan 26 15:55:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Jan 26 15:55:20 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Jan 26 15:55:20 compute-0 nova_compute[239965]: 2026-01-26 15:55:20.544 239969 DEBUG nova.storage.rbd_utils [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] cloning vms/b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk@62a856beaf8f49439806adf7e72b06a4 to images/04c8dd58-8f38-44e5-8181-edf90679a92c clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:55:20 compute-0 nova_compute[239965]: 2026-01-26 15:55:20.658 239969 DEBUG nova.storage.rbd_utils [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] flattening images/04c8dd58-8f38-44e5-8181-edf90679a92c flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:55:20 compute-0 nova_compute[239965]: 2026-01-26 15:55:20.910 239969 DEBUG nova.storage.rbd_utils [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] removing snapshot(62a856beaf8f49439806adf7e72b06a4) on rbd image(b8a4a981-46c3-4b65-ac27-0a742bb0caeb_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.353 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Releasing lock "refresh_cache-de374e75-a6b9-4a59-bd2f-642cc808b9e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.354 239969 DEBUG nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Instance network_info: |[{"id": "9300df63-810a-43e0-9363-92262754c92d", "address": "fa:16:3e:c1:cf:6b", "network": {"id": "3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2131918329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "009c08c2c07248cbb7df2b6fcd0060e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9300df63-81", "ovs_interfaceid": "9300df63-810a-43e0-9363-92262754c92d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.357 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Start _get_guest_xml network_info=[{"id": "9300df63-810a-43e0-9363-92262754c92d", "address": "fa:16:3e:c1:cf:6b", "network": {"id": "3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2131918329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "009c08c2c07248cbb7df2b6fcd0060e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9300df63-81", "ovs_interfaceid": "9300df63-810a-43e0-9363-92262754c92d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.362 239969 WARNING nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.368 239969 DEBUG nova.virt.libvirt.host [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.369 239969 DEBUG nova.virt.libvirt.host [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.383 239969 DEBUG nova.virt.libvirt.host [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.384 239969 DEBUG nova.virt.libvirt.host [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.384 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.385 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.385 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.386 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.386 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.386 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.386 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.387 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.387 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.387 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.388 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.388 239969 DEBUG nova.virt.hardware [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.391 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.428 239969 INFO nova.compute.manager [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Took 1.81 seconds to destroy the instance on the hypervisor.
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.430 239969 DEBUG oslo.service.loopingcall [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.431 239969 DEBUG nova.compute.manager [-] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.431 239969 DEBUG nova.network.neutron [-] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:55:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Jan 26 15:55:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Jan 26 15:55:21 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Jan 26 15:55:21 compute-0 ceph-mon[75140]: osdmap e217: 3 total, 3 up, 3 in
Jan 26 15:55:21 compute-0 nova_compute[239965]: 2026-01-26 15:55:21.539 239969 DEBUG nova.storage.rbd_utils [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] creating snapshot(snap) on rbd image(04c8dd58-8f38-44e5-8181-edf90679a92c) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:55:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1255: 305 pgs: 305 active+clean; 295 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.5 MiB/s wr, 43 op/s
Jan 26 15:55:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:55:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4231775084' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.054 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.075 239969 DEBUG nova.storage.rbd_utils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] rbd image de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.078 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.119 239969 DEBUG nova.compute.manager [req-79cda4d4-d755-40df-99ac-bfd93ebbd54b req-1ba342fc-2dcd-4e57-8d23-3e02b89f321e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Received event network-vif-plugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.119 239969 DEBUG oslo_concurrency.lockutils [req-79cda4d4-d755-40df-99ac-bfd93ebbd54b req-1ba342fc-2dcd-4e57-8d23-3e02b89f321e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.120 239969 DEBUG oslo_concurrency.lockutils [req-79cda4d4-d755-40df-99ac-bfd93ebbd54b req-1ba342fc-2dcd-4e57-8d23-3e02b89f321e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.120 239969 DEBUG oslo_concurrency.lockutils [req-79cda4d4-d755-40df-99ac-bfd93ebbd54b req-1ba342fc-2dcd-4e57-8d23-3e02b89f321e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.120 239969 DEBUG nova.compute.manager [req-79cda4d4-d755-40df-99ac-bfd93ebbd54b req-1ba342fc-2dcd-4e57-8d23-3e02b89f321e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] No waiting events found dispatching network-vif-plugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.121 239969 WARNING nova.compute.manager [req-79cda4d4-d755-40df-99ac-bfd93ebbd54b req-1ba342fc-2dcd-4e57-8d23-3e02b89f321e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Received unexpected event network-vif-plugged-1faa1913-d2b2-468b-b3a1-69ccd5286bff for instance with vm_state active and task_state deleting.
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.201 239969 DEBUG nova.network.neutron [-] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.221 239969 INFO nova.compute.manager [-] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Took 0.79 seconds to deallocate network for instance.
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.266 239969 DEBUG oslo_concurrency.lockutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.266 239969 DEBUG oslo_concurrency.lockutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.284 239969 DEBUG nova.compute.manager [req-662028ca-70fa-4698-80a5-d36c747577b5 req-9aee8690-57aa-426a-973a-3b5fe46eeabb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Received event network-vif-deleted-1faa1913-d2b2-468b-b3a1-69ccd5286bff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.386 239969 DEBUG oslo_concurrency.processutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Jan 26 15:55:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Jan 26 15:55:22 compute-0 ceph-mon[75140]: osdmap e218: 3 total, 3 up, 3 in
Jan 26 15:55:22 compute-0 ceph-mon[75140]: pgmap v1255: 305 pgs: 305 active+clean; 295 MiB data, 510 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.5 MiB/s wr, 43 op/s
Jan 26 15:55:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4231775084' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:22 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Jan 26 15:55:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:55:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3720189221' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 04c8dd58-8f38-44e5-8181-edf90679a92c could not be found.
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 04c8dd58-8f38-44e5-8181-edf90679a92c
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver 
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver 
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 04c8dd58-8f38-44e5-8181-edf90679a92c could not be found.
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.707 239969 ERROR nova.virt.libvirt.driver 
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.738 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.740 239969 DEBUG nova.virt.libvirt.vif [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1250402326',display_name='tempest-InstanceActionsNegativeTestJSON-server-1250402326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1250402326',id=37,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='009c08c2c07248cbb7df2b6fcd0060e3',ramdisk_id='',reservation_id='r-vmki3eqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1468770434',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1468770434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:15Z,user_data=None,user_id='ed38e14a163d491a97b7e0d25836dc2e',uuid=de374e75-a6b9-4a59-bd2f-642cc808b9e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9300df63-810a-43e0-9363-92262754c92d", "address": "fa:16:3e:c1:cf:6b", "network": {"id": "3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2131918329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "009c08c2c07248cbb7df2b6fcd0060e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9300df63-81", "ovs_interfaceid": "9300df63-810a-43e0-9363-92262754c92d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.740 239969 DEBUG nova.network.os_vif_util [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Converting VIF {"id": "9300df63-810a-43e0-9363-92262754c92d", "address": "fa:16:3e:c1:cf:6b", "network": {"id": "3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2131918329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "009c08c2c07248cbb7df2b6fcd0060e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9300df63-81", "ovs_interfaceid": "9300df63-810a-43e0-9363-92262754c92d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.741 239969 DEBUG nova.network.os_vif_util [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:6b,bridge_name='br-int',has_traffic_filtering=True,id=9300df63-810a-43e0-9363-92262754c92d,network=Network(3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9300df63-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.743 239969 DEBUG nova.objects.instance [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lazy-loading 'pci_devices' on Instance uuid de374e75-a6b9-4a59-bd2f-642cc808b9e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.755 239969 DEBUG nova.storage.rbd_utils [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] removing snapshot(snap) on rbd image(04c8dd58-8f38-44e5-8181-edf90679a92c) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.765 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <uuid>de374e75-a6b9-4a59-bd2f-642cc808b9e2</uuid>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <name>instance-00000025</name>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1250402326</nova:name>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:55:21</nova:creationTime>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <nova:user uuid="ed38e14a163d491a97b7e0d25836dc2e">tempest-InstanceActionsNegativeTestJSON-1468770434-project-member</nova:user>
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <nova:project uuid="009c08c2c07248cbb7df2b6fcd0060e3">tempest-InstanceActionsNegativeTestJSON-1468770434</nova:project>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <nova:port uuid="9300df63-810a-43e0-9363-92262754c92d">
Jan 26 15:55:22 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <system>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <entry name="serial">de374e75-a6b9-4a59-bd2f-642cc808b9e2</entry>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <entry name="uuid">de374e75-a6b9-4a59-bd2f-642cc808b9e2</entry>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     </system>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <os>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   </os>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <features>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   </features>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk">
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       </source>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk.config">
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       </source>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:55:22 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:c1:cf:6b"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <target dev="tap9300df63-81"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2/console.log" append="off"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <video>
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     </video>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:55:22 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:55:22 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:55:22 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:55:22 compute-0 nova_compute[239965]: </domain>
Jan 26 15:55:22 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.766 239969 DEBUG nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Preparing to wait for external event network-vif-plugged-9300df63-810a-43e0-9363-92262754c92d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.766 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquiring lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.767 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.767 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.768 239969 DEBUG nova.virt.libvirt.vif [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1250402326',display_name='tempest-InstanceActionsNegativeTestJSON-server-1250402326',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1250402326',id=37,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='009c08c2c07248cbb7df2b6fcd0060e3',ramdisk_id='',reservation_id='r-vmki3eqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1468770434',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1468770434-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:15Z,user_data=None,user_id='ed38e14a163d491a97b7e0d25836dc2e',uuid=de374e75-a6b9-4a59-bd2f-642cc808b9e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9300df63-810a-43e0-9363-92262754c92d", "address": "fa:16:3e:c1:cf:6b", "network": {"id": "3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2131918329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "009c08c2c07248cbb7df2b6fcd0060e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9300df63-81", "ovs_interfaceid": "9300df63-810a-43e0-9363-92262754c92d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.768 239969 DEBUG nova.network.os_vif_util [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Converting VIF {"id": "9300df63-810a-43e0-9363-92262754c92d", "address": "fa:16:3e:c1:cf:6b", "network": {"id": "3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2131918329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "009c08c2c07248cbb7df2b6fcd0060e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9300df63-81", "ovs_interfaceid": "9300df63-810a-43e0-9363-92262754c92d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.769 239969 DEBUG nova.network.os_vif_util [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:6b,bridge_name='br-int',has_traffic_filtering=True,id=9300df63-810a-43e0-9363-92262754c92d,network=Network(3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9300df63-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.769 239969 DEBUG os_vif [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:6b,bridge_name='br-int',has_traffic_filtering=True,id=9300df63-810a-43e0-9363-92262754c92d,network=Network(3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9300df63-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.770 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.770 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.771 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.774 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.775 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9300df63-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.775 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9300df63-81, col_values=(('external_ids', {'iface-id': '9300df63-810a-43e0-9363-92262754c92d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:cf:6b', 'vm-uuid': 'de374e75-a6b9-4a59-bd2f-642cc808b9e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.777 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:22 compute-0 NetworkManager[48954]: <info>  [1769442922.7790] manager: (tap9300df63-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.779 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.785 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.786 239969 INFO os_vif [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:6b,bridge_name='br-int',has_traffic_filtering=True,id=9300df63-810a-43e0-9363-92262754c92d,network=Network(3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9300df63-81')
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.842 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.843 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.843 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] No VIF found with MAC fa:16:3e:c1:cf:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.844 239969 INFO nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Using config drive
Jan 26 15:55:22 compute-0 nova_compute[239965]: 2026-01-26 15:55:22.862 239969 DEBUG nova.storage.rbd_utils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] rbd image de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326147000' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.037 239969 DEBUG oslo_concurrency.processutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.044 239969 DEBUG nova.compute.provider_tree [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.061 239969 DEBUG nova.scheduler.client.report [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.085 239969 DEBUG oslo_concurrency.lockutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.107 239969 INFO nova.scheduler.client.report [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Deleted allocations for instance d6511f20-baa5-46ee-80c7-fbb3e13d5163
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.205 239969 DEBUG oslo_concurrency.lockutils [None req-19efee76-3724-4cc0-9055-6c4ecdd5b244 b63a12d3960a490293ce7e52f058b753 d6524312aa97473dbf559a96f287a9b7 - - default default] Lock "d6511f20-baa5-46ee-80c7-fbb3e13d5163" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.323 239969 INFO nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Creating config drive at /var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2/disk.config
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.331 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyqr23_7q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.476 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyqr23_7q" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.504 239969 DEBUG nova.storage.rbd_utils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] rbd image de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.511 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2/disk.config de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Jan 26 15:55:23 compute-0 ceph-mon[75140]: osdmap e219: 3 total, 3 up, 3 in
Jan 26 15:55:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3720189221' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1326147000' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Jan 26 15:55:23 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Jan 26 15:55:23 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.696 239969 DEBUG oslo_concurrency.processutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2/disk.config de374e75-a6b9-4a59-bd2f-642cc808b9e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.697 239969 INFO nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Deleting local config drive /var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2/disk.config because it was imported into RBD.
Jan 26 15:55:23 compute-0 kernel: tap9300df63-81: entered promiscuous mode
Jan 26 15:55:23 compute-0 NetworkManager[48954]: <info>  [1769442923.7520] manager: (tap9300df63-81): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Jan 26 15:55:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 239 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 12 MiB/s wr, 381 op/s
Jan 26 15:55:23 compute-0 systemd-udevd[274683]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:55:23 compute-0 ovn_controller[146046]: 2026-01-26T15:55:23Z|00221|binding|INFO|Claiming lport 9300df63-810a-43e0-9363-92262754c92d for this chassis.
Jan 26 15:55:23 compute-0 ovn_controller[146046]: 2026-01-26T15:55:23Z|00222|binding|INFO|9300df63-810a-43e0-9363-92262754c92d: Claiming fa:16:3e:c1:cf:6b 10.100.0.8
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.846 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:23 compute-0 NetworkManager[48954]: <info>  [1769442923.8594] device (tap9300df63-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:55:23 compute-0 NetworkManager[48954]: <info>  [1769442923.8601] device (tap9300df63-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:55:23 compute-0 ovn_controller[146046]: 2026-01-26T15:55:23Z|00223|binding|INFO|Setting lport 9300df63-810a-43e0-9363-92262754c92d ovn-installed in OVS
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.866 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:23 compute-0 nova_compute[239965]: 2026-01-26 15:55:23.869 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:23 compute-0 systemd-machined[208061]: New machine qemu-41-instance-00000025.
Jan 26 15:55:23 compute-0 systemd[1]: Started Virtual Machine qemu-41-instance-00000025.
Jan 26 15:55:23 compute-0 ovn_controller[146046]: 2026-01-26T15:55:23Z|00224|binding|INFO|Setting lport 9300df63-810a-43e0-9363-92262754c92d up in Southbound
Jan 26 15:55:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:23.959 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:cf:6b 10.100.0.8'], port_security=['fa:16:3e:c1:cf:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'de374e75-a6b9-4a59-bd2f-642cc808b9e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '009c08c2c07248cbb7df2b6fcd0060e3', 'neutron:revision_number': '2', 'neutron:security_group_ids': '716d2123-dee4-4ee1-b1ec-8d559e7bb84e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88ee0f2c-a29d-4f65-afbb-5b87baff50ab, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9300df63-810a-43e0-9363-92262754c92d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:23.960 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9300df63-810a-43e0-9363-92262754c92d in datapath 3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2 bound to our chassis
Jan 26 15:55:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:23.962 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2
Jan 26 15:55:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:23.974 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[540db937-8900-451e-a1fb-46a2293e57c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:23.975 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3e4cf33d-a1 in ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:55:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:23.977 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3e4cf33d-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:55:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:23.978 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd07fa5-5886-4808-aaf9-cfd7980fe89d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:23.980 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c426d58f-a26f-4355-b313-0405c9085ff0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:23.993 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[7794ebf5-91a1-410c-8336-40049a797b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.000 239969 WARNING nova.compute.manager [None req-8fcf9f75-3d06-4619-b9ab-c69423fad16f 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Image not found during snapshot: nova.exception.ImageNotFound: Image 04c8dd58-8f38-44e5-8181-edf90679a92c could not be found.
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.019 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f072595-2429-4c60-8a61-4783b83b431b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.051 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[606831f0-db60-4635-8b8f-7860e5be1374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 NetworkManager[48954]: <info>  [1769442924.0597] manager: (tap3e4cf33d-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Jan 26 15:55:24 compute-0 systemd-udevd[274685]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.061 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0584584a-4557-478a-9f74-99fed7cfc1a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.098 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc9034c-de1d-46a4-a86a-8653009ac84f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.104 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f96eb562-5fda-4814-b775-4a7cdaa40221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 NetworkManager[48954]: <info>  [1769442924.1315] device (tap3e4cf33d-a0): carrier: link connected
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.138 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7a8149-1259-4e46-a1d8-80bc6a2266f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.157 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e4bba827-af7f-4ccf-af59-529ee8ae2a33]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4cf33d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:f7:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441493, 'reachable_time': 30143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274720, 'error': None, 'target': 'ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.173 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69cc2072-2e27-4a53-bb0c-37df8f759f9a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb9:f7f1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441493, 'tstamp': 441493}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274721, 'error': None, 'target': 'ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.195 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e735e9-186d-40fc-b5ac-94a815fbb3f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3e4cf33d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b9:f7:f1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 74], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441493, 'reachable_time': 30143, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274722, 'error': None, 'target': 'ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.226 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a6c47c5a-a1a5-4fac-a39c-5fa2d7a88807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.290 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb69daf4-7eba-45fe-9c3b-332ec7eb9854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.292 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4cf33d-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.292 239969 DEBUG nova.compute.manager [req-99c0a79d-53d5-4a34-a807-abfeed6260b3 req-510fd804-1599-4c3c-b31d-595cd21e5386 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Received event network-vif-plugged-9300df63-810a-43e0-9363-92262754c92d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.292 239969 DEBUG oslo_concurrency.lockutils [req-99c0a79d-53d5-4a34-a807-abfeed6260b3 req-510fd804-1599-4c3c-b31d-595cd21e5386 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.292 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.293 239969 DEBUG oslo_concurrency.lockutils [req-99c0a79d-53d5-4a34-a807-abfeed6260b3 req-510fd804-1599-4c3c-b31d-595cd21e5386 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.293 239969 DEBUG oslo_concurrency.lockutils [req-99c0a79d-53d5-4a34-a807-abfeed6260b3 req-510fd804-1599-4c3c-b31d-595cd21e5386 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.293 239969 DEBUG nova.compute.manager [req-99c0a79d-53d5-4a34-a807-abfeed6260b3 req-510fd804-1599-4c3c-b31d-595cd21e5386 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Processing event network-vif-plugged-9300df63-810a-43e0-9363-92262754c92d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.294 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e4cf33d-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:24 compute-0 NetworkManager[48954]: <info>  [1769442924.2981] manager: (tap3e4cf33d-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:24 compute-0 kernel: tap3e4cf33d-a0: entered promiscuous mode
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.304 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3e4cf33d-a0, col_values=(('external_ids', {'iface-id': '0e9dcd7b-5483-4e12-b2ad-09d8083e6c99'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:24 compute-0 ovn_controller[146046]: 2026-01-26T15:55:24Z|00225|binding|INFO|Releasing lport 0e9dcd7b-5483-4e12-b2ad-09d8083e6c99 from this chassis (sb_readonly=0)
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.305 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.308 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.309 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3e755907-169d-4ea9-ace2-a5c10be80cb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.310 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2.pid.haproxy
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:55:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:24.310 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2', 'env', 'PROCESS_TAG=haproxy-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.326 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:24 compute-0 ceph-mon[75140]: osdmap e220: 3 total, 3 up, 3 in
Jan 26 15:55:24 compute-0 ceph-mon[75140]: pgmap v1258: 305 pgs: 305 active+clean; 239 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 12 MiB/s wr, 381 op/s
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.683 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442924.6826558, de374e75-a6b9-4a59-bd2f-642cc808b9e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.683 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] VM Started (Lifecycle Event)
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.685 239969 DEBUG nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.691 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.695 239969 INFO nova.virt.libvirt.driver [-] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Instance spawned successfully.
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.695 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.711 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.718 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:55:24 compute-0 podman[274794]: 2026-01-26 15:55:24.7206432 +0000 UTC m=+0.061676219 container create da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.722 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.723 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.726 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.726 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.727 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.727 239969 DEBUG nova.virt.libvirt.driver [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.735 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.735 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442924.6833124, de374e75-a6b9-4a59-bd2f-642cc808b9e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.735 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] VM Paused (Lifecycle Event)
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.762 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.766 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442924.6883972, de374e75-a6b9-4a59-bd2f-642cc808b9e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.767 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] VM Resumed (Lifecycle Event)
Jan 26 15:55:24 compute-0 systemd[1]: Started libpod-conmon-da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7.scope.
Jan 26 15:55:24 compute-0 podman[274794]: 2026-01-26 15:55:24.683249191 +0000 UTC m=+0.024282230 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.794 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.800 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:55:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.805 239969 INFO nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Took 9.10 seconds to spawn the instance on the hypervisor.
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.806 239969 DEBUG nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c23e318a18c49a9a3d2bfc1cada6d22e556a7e9b960151b30fedf73cd23450e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:24 compute-0 podman[274794]: 2026-01-26 15:55:24.824145013 +0000 UTC m=+0.165178052 container init da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:55:24 compute-0 podman[274794]: 2026-01-26 15:55:24.831819259 +0000 UTC m=+0.172852278 container start da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.836 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:55:24 compute-0 neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2[274809]: [NOTICE]   (274813) : New worker (274815) forked
Jan 26 15:55:24 compute-0 neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2[274809]: [NOTICE]   (274813) : Loading success.
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.878 239969 INFO nova.compute.manager [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Took 10.25 seconds to build instance.
Jan 26 15:55:24 compute-0 nova_compute[239965]: 2026-01-26 15:55:24.934 239969 DEBUG oslo_concurrency.lockutils [None req-c41c7a73-072b-40a7-b11e-b20346cca984 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:25 compute-0 ovn_controller[146046]: 2026-01-26T15:55:25Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:a7:e5 10.100.0.3
Jan 26 15:55:25 compute-0 ovn_controller[146046]: 2026-01-26T15:55:25Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:a7:e5 10.100.0.3
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.149 239969 DEBUG oslo_concurrency.lockutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.149 239969 DEBUG oslo_concurrency.lockutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.149 239969 DEBUG oslo_concurrency.lockutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.150 239969 DEBUG oslo_concurrency.lockutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.150 239969 DEBUG oslo_concurrency.lockutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.153 239969 INFO nova.compute.manager [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Terminating instance
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.155 239969 DEBUG nova.compute.manager [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.195 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:25 compute-0 kernel: tape69ca133-30 (unregistering): left promiscuous mode
Jan 26 15:55:25 compute-0 NetworkManager[48954]: <info>  [1769442925.2505] device (tape69ca133-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:55:25 compute-0 ovn_controller[146046]: 2026-01-26T15:55:25Z|00226|binding|INFO|Releasing lport e69ca133-3044-44b5-8c00-0780ebcf9b5e from this chassis (sb_readonly=0)
Jan 26 15:55:25 compute-0 ovn_controller[146046]: 2026-01-26T15:55:25Z|00227|binding|INFO|Setting lport e69ca133-3044-44b5-8c00-0780ebcf9b5e down in Southbound
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.261 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:25 compute-0 ovn_controller[146046]: 2026-01-26T15:55:25Z|00228|binding|INFO|Removing iface tape69ca133-30 ovn-installed in OVS
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.281 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:25 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Deactivated successfully.
Jan 26 15:55:25 compute-0 systemd[1]: machine-qemu\x2d40\x2dinstance\x2d00000024.scope: Consumed 14.318s CPU time.
Jan 26 15:55:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:25.311 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:a7:e5 10.100.0.3'], port_security=['fa:16:3e:e9:a7:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b8a4a981-46c3-4b65-ac27-0a742bb0caeb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12eb3028-37a4-48dc-836b-0978d0bddbce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b3f0866575347bd80cdc80b692f07f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd45551f-cdcc-4a64-9f48-bf991126bbec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef363ab1-8cec-4345-8968-ee11106642fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e69ca133-3044-44b5-8c00-0780ebcf9b5e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:25 compute-0 systemd-machined[208061]: Machine qemu-40-instance-00000024 terminated.
Jan 26 15:55:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:25.312 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e69ca133-3044-44b5-8c00-0780ebcf9b5e in datapath 12eb3028-37a4-48dc-836b-0978d0bddbce unbound from our chassis
Jan 26 15:55:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:25.314 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12eb3028-37a4-48dc-836b-0978d0bddbce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:55:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:25.315 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4a3dfe-2ee3-4acc-a401-c05c3b0b6ebd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:25.315 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce namespace which is not needed anymore
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.400 239969 INFO nova.virt.libvirt.driver [-] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Instance destroyed successfully.
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.401 239969 DEBUG nova.objects.instance [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lazy-loading 'resources' on Instance uuid b8a4a981-46c3-4b65-ac27-0a742bb0caeb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.424 239969 DEBUG nova.virt.libvirt.vif [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:55:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-677054834',display_name='tempest-ImagesOneServerNegativeTestJSON-server-677054834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-677054834',id=36,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b3f0866575347bd80cdc80b692f07f5',ramdisk_id='',reservation_id='r-z32u575p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-744790849',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-744790849-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:23Z,user_data=None,user_id='8b6639ef07f74c9ebad76dffa361ec4e',uuid=b8a4a981-46c3-4b65-ac27-0a742bb0caeb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.425 239969 DEBUG nova.network.os_vif_util [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converting VIF {"id": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "address": "fa:16:3e:e9:a7:e5", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape69ca133-30", "ovs_interfaceid": "e69ca133-3044-44b5-8c00-0780ebcf9b5e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.426 239969 DEBUG nova.network.os_vif_util [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:a7:e5,bridge_name='br-int',has_traffic_filtering=True,id=e69ca133-3044-44b5-8c00-0780ebcf9b5e,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape69ca133-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.427 239969 DEBUG os_vif [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:a7:e5,bridge_name='br-int',has_traffic_filtering=True,id=e69ca133-3044-44b5-8c00-0780ebcf9b5e,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape69ca133-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.429 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.429 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape69ca133-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.431 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.433 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.434 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.436 239969 INFO os_vif [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:a7:e5,bridge_name='br-int',has_traffic_filtering=True,id=e69ca133-3044-44b5-8c00-0780ebcf9b5e,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape69ca133-30')
Jan 26 15:55:25 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[273458]: [NOTICE]   (273468) : haproxy version is 2.8.14-c23fe91
Jan 26 15:55:25 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[273458]: [NOTICE]   (273468) : path to executable is /usr/sbin/haproxy
Jan 26 15:55:25 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[273458]: [WARNING]  (273468) : Exiting Master process...
Jan 26 15:55:25 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[273458]: [WARNING]  (273468) : Exiting Master process...
Jan 26 15:55:25 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[273458]: [ALERT]    (273468) : Current worker (273486) exited with code 143 (Terminated)
Jan 26 15:55:25 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[273458]: [WARNING]  (273468) : All workers exited. Exiting... (0)
Jan 26 15:55:25 compute-0 systemd[1]: libpod-659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105.scope: Deactivated successfully.
Jan 26 15:55:25 compute-0 podman[274851]: 2026-01-26 15:55:25.518373714 +0000 UTC m=+0.091632997 container died 659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.530 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:25 compute-0 podman[274884]: 2026-01-26 15:55:25.686945247 +0000 UTC m=+0.148643531 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 15:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105-userdata-shm.mount: Deactivated successfully.
Jan 26 15:55:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d1f347e2802c901c0015fc7f254eec2f5b02cdf6f80bebec6c0d0fcb926778e-merged.mount: Deactivated successfully.
Jan 26 15:55:25 compute-0 podman[274887]: 2026-01-26 15:55:25.693276471 +0000 UTC m=+0.151386608 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:55:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1259: 305 pgs: 305 active+clean; 248 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 12 MiB/s wr, 505 op/s
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.862 239969 DEBUG nova.compute.manager [req-84ddbab4-a00e-45c1-9f4c-b6e2002affcc req-65dc53ee-05dd-4c89-9577-b405bb696e49 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Received event network-vif-unplugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.864 239969 DEBUG oslo_concurrency.lockutils [req-84ddbab4-a00e-45c1-9f4c-b6e2002affcc req-65dc53ee-05dd-4c89-9577-b405bb696e49 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.865 239969 DEBUG oslo_concurrency.lockutils [req-84ddbab4-a00e-45c1-9f4c-b6e2002affcc req-65dc53ee-05dd-4c89-9577-b405bb696e49 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.865 239969 DEBUG oslo_concurrency.lockutils [req-84ddbab4-a00e-45c1-9f4c-b6e2002affcc req-65dc53ee-05dd-4c89-9577-b405bb696e49 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.865 239969 DEBUG nova.compute.manager [req-84ddbab4-a00e-45c1-9f4c-b6e2002affcc req-65dc53ee-05dd-4c89-9577-b405bb696e49 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] No waiting events found dispatching network-vif-unplugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:25 compute-0 nova_compute[239965]: 2026-01-26 15:55:25.865 239969 DEBUG nova.compute.manager [req-84ddbab4-a00e-45c1-9f4c-b6e2002affcc req-65dc53ee-05dd-4c89-9577-b405bb696e49 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Received event network-vif-unplugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:55:26 compute-0 podman[274851]: 2026-01-26 15:55:26.263487649 +0000 UTC m=+0.836746932 container cleanup 659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 15:55:26 compute-0 systemd[1]: libpod-conmon-659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105.scope: Deactivated successfully.
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.420 239969 DEBUG nova.compute.manager [req-7862c61b-e445-4876-91cd-15bd3a6fd156 req-6c2741f2-5cae-4c9a-9ddd-c63d06ec7cb9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Received event network-vif-plugged-9300df63-810a-43e0-9363-92262754c92d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.421 239969 DEBUG oslo_concurrency.lockutils [req-7862c61b-e445-4876-91cd-15bd3a6fd156 req-6c2741f2-5cae-4c9a-9ddd-c63d06ec7cb9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.421 239969 DEBUG oslo_concurrency.lockutils [req-7862c61b-e445-4876-91cd-15bd3a6fd156 req-6c2741f2-5cae-4c9a-9ddd-c63d06ec7cb9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.421 239969 DEBUG oslo_concurrency.lockutils [req-7862c61b-e445-4876-91cd-15bd3a6fd156 req-6c2741f2-5cae-4c9a-9ddd-c63d06ec7cb9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.421 239969 DEBUG nova.compute.manager [req-7862c61b-e445-4876-91cd-15bd3a6fd156 req-6c2741f2-5cae-4c9a-9ddd-c63d06ec7cb9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] No waiting events found dispatching network-vif-plugged-9300df63-810a-43e0-9363-92262754c92d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.422 239969 WARNING nova.compute.manager [req-7862c61b-e445-4876-91cd-15bd3a6fd156 req-6c2741f2-5cae-4c9a-9ddd-c63d06ec7cb9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Received unexpected event network-vif-plugged-9300df63-810a-43e0-9363-92262754c92d for instance with vm_state active and task_state None.
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:26 compute-0 ceph-mon[75140]: pgmap v1259: 305 pgs: 305 active+clean; 248 MiB data, 504 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 12 MiB/s wr, 505 op/s
Jan 26 15:55:26 compute-0 ovn_controller[146046]: 2026-01-26T15:55:26Z|00229|binding|INFO|Releasing lport 25ff8da3-57db-4715-8307-e7e4fd4fbd09 from this chassis (sb_readonly=0)
Jan 26 15:55:26 compute-0 ovn_controller[146046]: 2026-01-26T15:55:26Z|00230|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:55:26 compute-0 ovn_controller[146046]: 2026-01-26T15:55:26Z|00231|binding|INFO|Releasing lport 0e9dcd7b-5483-4e12-b2ad-09d8083e6c99 from this chassis (sb_readonly=0)
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.824 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.887 239969 DEBUG oslo_concurrency.lockutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquiring lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.887 239969 DEBUG oslo_concurrency.lockutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.888 239969 DEBUG oslo_concurrency.lockutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquiring lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.889 239969 DEBUG oslo_concurrency.lockutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.889 239969 DEBUG oslo_concurrency.lockutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.890 239969 INFO nova.compute.manager [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Terminating instance
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.891 239969 DEBUG nova.compute.manager [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:55:26 compute-0 podman[274939]: 2026-01-26 15:55:26.907603732 +0000 UTC m=+0.617816755 container remove 659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 15:55:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:26.913 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[89232da4-65ac-46fa-b607-8b74ba06ebc9]: (4, ('Mon Jan 26 03:55:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce (659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105)\n659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105\nMon Jan 26 03:55:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce (659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105)\n659ffcd2e6ec7088bb78727576416195074a0e96d3ebd1ce9b6f65e997373105\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:26.915 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef04e36-ec0d-4693-ae53-8c9b5db2032e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:26.915 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12eb3028-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.917 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:26 compute-0 kernel: tap12eb3028-30: left promiscuous mode
Jan 26 15:55:26 compute-0 nova_compute[239965]: 2026-01-26 15:55:26.933 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:26.937 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[73264921-350a-4c14-b106-2529385dbd8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:26.956 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[63ec33ed-cafd-46d1-b3ca-229ad98e6b0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:26.958 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ff9108aa-5573-4d22-8264-d463a4764577]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:26.977 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d42aa88b-65e1-402d-8e67-a1cfb13b38b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 440068, 'reachable_time': 39743, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274953, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:26.980 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:55:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:26.980 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[310ad186-4544-4603-acc7-060a00e00ef1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:26 compute-0 systemd[1]: run-netns-ovnmeta\x2d12eb3028\x2d37a4\x2d48dc\x2d836b\x2d0978d0bddbce.mount: Deactivated successfully.
Jan 26 15:55:27 compute-0 kernel: tap9300df63-81 (unregistering): left promiscuous mode
Jan 26 15:55:27 compute-0 NetworkManager[48954]: <info>  [1769442927.0646] device (tap9300df63-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.075 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:27 compute-0 ovn_controller[146046]: 2026-01-26T15:55:27Z|00232|binding|INFO|Releasing lport 9300df63-810a-43e0-9363-92262754c92d from this chassis (sb_readonly=0)
Jan 26 15:55:27 compute-0 ovn_controller[146046]: 2026-01-26T15:55:27Z|00233|binding|INFO|Setting lport 9300df63-810a-43e0-9363-92262754c92d down in Southbound
Jan 26 15:55:27 compute-0 ovn_controller[146046]: 2026-01-26T15:55:27Z|00234|binding|INFO|Removing iface tap9300df63-81 ovn-installed in OVS
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.078 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.082 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:cf:6b 10.100.0.8'], port_security=['fa:16:3e:c1:cf:6b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'de374e75-a6b9-4a59-bd2f-642cc808b9e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '009c08c2c07248cbb7df2b6fcd0060e3', 'neutron:revision_number': '4', 'neutron:security_group_ids': '716d2123-dee4-4ee1-b1ec-8d559e7bb84e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88ee0f2c-a29d-4f65-afbb-5b87baff50ab, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9300df63-810a-43e0-9363-92262754c92d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.084 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9300df63-810a-43e0-9363-92262754c92d in datapath 3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2 unbound from our chassis
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.085 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.086 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0a7c2e0a-073b-4f5d-9e3b-a464a89e4820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.087 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2 namespace which is not needed anymore
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.094 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:27 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Deactivated successfully.
Jan 26 15:55:27 compute-0 systemd[1]: machine-qemu\x2d41\x2dinstance\x2d00000025.scope: Consumed 2.993s CPU time.
Jan 26 15:55:27 compute-0 systemd-machined[208061]: Machine qemu-41-instance-00000025 terminated.
Jan 26 15:55:27 compute-0 neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2[274809]: [NOTICE]   (274813) : haproxy version is 2.8.14-c23fe91
Jan 26 15:55:27 compute-0 neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2[274809]: [NOTICE]   (274813) : path to executable is /usr/sbin/haproxy
Jan 26 15:55:27 compute-0 neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2[274809]: [WARNING]  (274813) : Exiting Master process...
Jan 26 15:55:27 compute-0 neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2[274809]: [ALERT]    (274813) : Current worker (274815) exited with code 143 (Terminated)
Jan 26 15:55:27 compute-0 neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2[274809]: [WARNING]  (274813) : All workers exited. Exiting... (0)
Jan 26 15:55:27 compute-0 systemd[1]: libpod-da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7.scope: Deactivated successfully.
Jan 26 15:55:27 compute-0 conmon[274809]: conmon da706af1829edf21496e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7.scope/container/memory.events
Jan 26 15:55:27 compute-0 podman[274978]: 2026-01-26 15:55:27.250216104 +0000 UTC m=+0.065766909 container died da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7-userdata-shm.mount: Deactivated successfully.
Jan 26 15:55:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c23e318a18c49a9a3d2bfc1cada6d22e556a7e9b960151b30fedf73cd23450e-merged.mount: Deactivated successfully.
Jan 26 15:55:27 compute-0 podman[274978]: 2026-01-26 15:55:27.326872215 +0000 UTC m=+0.142422990 container cleanup da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.329 239969 INFO nova.virt.libvirt.driver [-] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Instance destroyed successfully.
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.329 239969 DEBUG nova.objects.instance [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lazy-loading 'resources' on Instance uuid de374e75-a6b9-4a59-bd2f-642cc808b9e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:27 compute-0 systemd[1]: libpod-conmon-da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7.scope: Deactivated successfully.
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.346 239969 DEBUG nova.virt.libvirt.vif [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:55:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1250402326',display_name='tempest-InstanceActionsNegativeTestJSON-server-1250402326',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1250402326',id=37,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='009c08c2c07248cbb7df2b6fcd0060e3',ramdisk_id='',reservation_id='r-vmki3eqi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-1468770434',owner_user_name='tempest-InstanceActionsNegativeTestJSON-1468770434-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:24Z,user_data=None,user_id='ed38e14a163d491a97b7e0d25836dc2e',uuid=de374e75-a6b9-4a59-bd2f-642cc808b9e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9300df63-810a-43e0-9363-92262754c92d", "address": "fa:16:3e:c1:cf:6b", "network": {"id": "3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2131918329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "009c08c2c07248cbb7df2b6fcd0060e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9300df63-81", "ovs_interfaceid": "9300df63-810a-43e0-9363-92262754c92d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.347 239969 DEBUG nova.network.os_vif_util [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Converting VIF {"id": "9300df63-810a-43e0-9363-92262754c92d", "address": "fa:16:3e:c1:cf:6b", "network": {"id": "3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-2131918329-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "009c08c2c07248cbb7df2b6fcd0060e3", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9300df63-81", "ovs_interfaceid": "9300df63-810a-43e0-9363-92262754c92d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.348 239969 DEBUG nova.network.os_vif_util [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:6b,bridge_name='br-int',has_traffic_filtering=True,id=9300df63-810a-43e0-9363-92262754c92d,network=Network(3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9300df63-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.348 239969 DEBUG os_vif [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:6b,bridge_name='br-int',has_traffic_filtering=True,id=9300df63-810a-43e0-9363-92262754c92d,network=Network(3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9300df63-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.350 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.350 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9300df63-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.385 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.390 239969 INFO os_vif [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:cf:6b,bridge_name='br-int',has_traffic_filtering=True,id=9300df63-810a-43e0-9363-92262754c92d,network=Network(3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9300df63-81')
Jan 26 15:55:27 compute-0 podman[275013]: 2026-01-26 15:55:27.467239044 +0000 UTC m=+0.112515224 container remove da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.473 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1c88e6a1-cf13-48df-8a0a-96b7867b57d6]: (4, ('Mon Jan 26 03:55:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2 (da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7)\nda706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7\nMon Jan 26 03:55:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2 (da706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7)\nda706af1829edf21496ed887f7c02c3b3a8bf3ccf55c1d92f726f685757f60d7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.475 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a38985c9-d3cf-4dd8-91f0-9550ec7c285a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.475 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e4cf33d-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.479 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:27 compute-0 kernel: tap3e4cf33d-a0: left promiscuous mode
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.496 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.499 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d36f87e-7321-4299-bec9-2bac129ab12d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.520 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c97f2dfe-8f73-41e5-8478-aa3e6bc9fbc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.521 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[623d7ddf-a71f-4e25-9998-c83fe11b6cd5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.538 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[00d85724-a7cc-4288-b611-34a038a047c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441485, 'reachable_time': 16216, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275045, 'error': None, 'target': 'ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.540 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3e4cf33d-a8e3-4ff0-93a2-eb7dedd11da2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:55:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:27.540 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[521b3941-26ab-496c-a15f-13dd7f7e77e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d3e4cf33d\x2da8e3\x2d4ff0\x2d93a2\x2deb7dedd11da2.mount: Deactivated successfully.
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.678 239969 INFO nova.virt.libvirt.driver [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Deleting instance files /var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb_del
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.679 239969 INFO nova.virt.libvirt.driver [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Deletion of /var/lib/nova/instances/b8a4a981-46c3-4b65-ac27-0a742bb0caeb_del complete
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.736 239969 DEBUG oslo_concurrency.lockutils [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-819151c3-9892-4926-879b-6bcf2047e116-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.737 239969 DEBUG oslo_concurrency.lockutils [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-819151c3-9892-4926-879b-6bcf2047e116-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.737 239969 DEBUG nova.objects.instance [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.742 239969 INFO nova.compute.manager [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Took 2.58 seconds to destroy the instance on the hypervisor.
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.742 239969 DEBUG oslo.service.loopingcall [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.742 239969 DEBUG nova.compute.manager [-] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.743 239969 DEBUG nova.network.neutron [-] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:55:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 196 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 12 MiB/s wr, 600 op/s
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.767 239969 DEBUG nova.objects.instance [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_requests' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.781 239969 DEBUG nova.network.neutron [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.948 239969 INFO nova.virt.libvirt.driver [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Deleting instance files /var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2_del
Jan 26 15:55:27 compute-0 nova_compute[239965]: 2026-01-26 15:55:27.948 239969 INFO nova.virt.libvirt.driver [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Deletion of /var/lib/nova/instances/de374e75-a6b9-4a59-bd2f-642cc808b9e2_del complete
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.008 239969 INFO nova.compute.manager [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Took 1.12 seconds to destroy the instance on the hypervisor.
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.009 239969 DEBUG oslo.service.loopingcall [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.009 239969 DEBUG nova.compute.manager [-] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.010 239969 DEBUG nova.network.neutron [-] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.110 239969 DEBUG nova.compute.manager [req-3d1dcbf4-7194-4015-8d0d-d16b69ddb93b req-c01220e3-4cce-4173-a57f-f832daefe873 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Received event network-vif-plugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.110 239969 DEBUG oslo_concurrency.lockutils [req-3d1dcbf4-7194-4015-8d0d-d16b69ddb93b req-c01220e3-4cce-4173-a57f-f832daefe873 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.111 239969 DEBUG oslo_concurrency.lockutils [req-3d1dcbf4-7194-4015-8d0d-d16b69ddb93b req-c01220e3-4cce-4173-a57f-f832daefe873 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.111 239969 DEBUG oslo_concurrency.lockutils [req-3d1dcbf4-7194-4015-8d0d-d16b69ddb93b req-c01220e3-4cce-4173-a57f-f832daefe873 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.111 239969 DEBUG nova.compute.manager [req-3d1dcbf4-7194-4015-8d0d-d16b69ddb93b req-c01220e3-4cce-4173-a57f-f832daefe873 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] No waiting events found dispatching network-vif-plugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.112 239969 WARNING nova.compute.manager [req-3d1dcbf4-7194-4015-8d0d-d16b69ddb93b req-c01220e3-4cce-4173-a57f-f832daefe873 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Received unexpected event network-vif-plugged-e69ca133-3044-44b5-8c00-0780ebcf9b5e for instance with vm_state active and task_state deleting.
Jan 26 15:55:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:28.379 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.379 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:28.380 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.421 239969 DEBUG nova.network.neutron [-] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.436 239969 INFO nova.compute.manager [-] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Took 0.69 seconds to deallocate network for instance.
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.479 239969 DEBUG oslo_concurrency.lockutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.480 239969 DEBUG oslo_concurrency.lockutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:55:28
Jan 26 15:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.mgr', 'backups', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control']
Jan 26 15:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.654 239969 DEBUG oslo_concurrency.processutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.701 239969 DEBUG nova.policy [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:55:28 compute-0 ceph-mon[75140]: pgmap v1260: 305 pgs: 305 active+clean; 196 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 12 MiB/s wr, 600 op/s
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.878 239969 DEBUG nova.network.neutron [-] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.899 239969 INFO nova.compute.manager [-] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Took 0.89 seconds to deallocate network for instance.
Jan 26 15:55:28 compute-0 nova_compute[239965]: 2026-01-26 15:55:28.962 239969 DEBUG oslo_concurrency.lockutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3997286604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.300 239969 DEBUG oslo_concurrency.processutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.307 239969 DEBUG nova.compute.provider_tree [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.324 239969 DEBUG nova.scheduler.client.report [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.348 239969 DEBUG oslo_concurrency.lockutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.350 239969 DEBUG oslo_concurrency.lockutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.375 239969 INFO nova.scheduler.client.report [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Deleted allocations for instance b8a4a981-46c3-4b65-ac27-0a742bb0caeb
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.452 239969 DEBUG oslo_concurrency.processutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e220 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Jan 26 15:55:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Jan 26 15:55:29 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.493 239969 DEBUG oslo_concurrency.lockutils [None req-95a572dc-808a-463e-a457-3560c3272499 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "b8a4a981-46c3-4b65-ac27-0a742bb0caeb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.344s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.531 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.615 239969 DEBUG nova.compute.manager [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Received event network-vif-unplugged-9300df63-810a-43e0-9363-92262754c92d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.615 239969 DEBUG oslo_concurrency.lockutils [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.616 239969 DEBUG oslo_concurrency.lockutils [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.617 239969 DEBUG oslo_concurrency.lockutils [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.617 239969 DEBUG nova.compute.manager [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] No waiting events found dispatching network-vif-unplugged-9300df63-810a-43e0-9363-92262754c92d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.617 239969 WARNING nova.compute.manager [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Received unexpected event network-vif-unplugged-9300df63-810a-43e0-9363-92262754c92d for instance with vm_state deleted and task_state None.
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.617 239969 DEBUG nova.compute.manager [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Received event network-vif-plugged-9300df63-810a-43e0-9363-92262754c92d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.617 239969 DEBUG oslo_concurrency.lockutils [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.617 239969 DEBUG oslo_concurrency.lockutils [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.618 239969 DEBUG oslo_concurrency.lockutils [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.618 239969 DEBUG nova.compute.manager [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] No waiting events found dispatching network-vif-plugged-9300df63-810a-43e0-9363-92262754c92d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.618 239969 WARNING nova.compute.manager [req-7f1be891-e886-40e6-a58c-21e1abcf5db2 req-0d2c98e2-849e-41f0-a2d2-106fed0f9b05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Received unexpected event network-vif-plugged-9300df63-810a-43e0-9363-92262754c92d for instance with vm_state deleted and task_state None.
Jan 26 15:55:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 305 active+clean; 196 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.7 MiB/s wr, 334 op/s
Jan 26 15:55:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3997286604' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:29 compute-0 ceph-mon[75140]: osdmap e221: 3 total, 3 up, 3 in
Jan 26 15:55:29 compute-0 nova_compute[239965]: 2026-01-26 15:55:29.855 239969 DEBUG nova.network.neutron [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Successfully created port: cbd122d4-01d6-46df-bc1e-8707cf1566d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:55:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1592464061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.086 239969 DEBUG oslo_concurrency.processutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.095 239969 DEBUG nova.compute.provider_tree [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.114 239969 DEBUG nova.scheduler.client.report [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.144 239969 DEBUG oslo_concurrency.lockutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.170 239969 INFO nova.scheduler.client.report [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Deleted allocations for instance de374e75-a6b9-4a59-bd2f-642cc808b9e2
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.196 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.484 239969 DEBUG nova.compute.manager [req-fc8e7733-f331-4f86-92ef-88c8c34d5cda req-bcc3654d-f130-414b-8306-4eeede4c46fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Received event network-vif-deleted-e69ca133-3044-44b5-8c00-0780ebcf9b5e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.485 239969 DEBUG nova.compute.manager [req-fc8e7733-f331-4f86-92ef-88c8c34d5cda req-bcc3654d-f130-414b-8306-4eeede4c46fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Received event network-vif-deleted-9300df63-810a-43e0-9363-92262754c92d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.489 239969 DEBUG oslo_concurrency.lockutils [None req-d6b0aaf4-d34a-4faf-a8e4-19328c14ddb6 ed38e14a163d491a97b7e0d25836dc2e 009c08c2c07248cbb7df2b6fcd0060e3 - - default default] Lock "de374e75-a6b9-4a59-bd2f-642cc808b9e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:55:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.588 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.588 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.589 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.589 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:55:30 compute-0 nova_compute[239965]: 2026-01-26 15:55:30.589 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:30 compute-0 ceph-mon[75140]: pgmap v1262: 305 pgs: 305 active+clean; 196 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.7 MiB/s wr, 334 op/s
Jan 26 15:55:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1592464061' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/920413333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.198 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.457 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.457 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000023 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.579 239969 DEBUG nova.network.neutron [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Successfully updated port: cbd122d4-01d6-46df-bc1e-8707cf1566d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.597 239969 DEBUG oslo_concurrency.lockutils [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.598 239969 DEBUG oslo_concurrency.lockutils [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.598 239969 DEBUG nova.network.neutron [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.724 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.726 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3968MB free_disk=59.90941029880196GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.726 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.727 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.755 239969 DEBUG nova.compute.manager [req-52bcecd4-c0c1-4ee2-a2e0-b38556f3c5e4 req-2352046e-8d70-4a96-a8c4-401c5b6c6ef7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-changed-cbd122d4-01d6-46df-bc1e-8707cf1566d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.755 239969 DEBUG nova.compute.manager [req-52bcecd4-c0c1-4ee2-a2e0-b38556f3c5e4 req-2352046e-8d70-4a96-a8c4-401c5b6c6ef7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing instance network info cache due to event network-changed-cbd122d4-01d6-46df-bc1e-8707cf1566d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.755 239969 DEBUG oslo_concurrency.lockutils [req-52bcecd4-c0c1-4ee2-a2e0-b38556f3c5e4 req-2352046e-8d70-4a96-a8c4-401c5b6c6ef7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 158 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 337 op/s
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.797 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 819151c3-9892-4926-879b-6bcf2047e116 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.797 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.797 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.826 239969 WARNING nova.network.neutron [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] 91bcac0a-1926-4861-88ab-ae3c06f7e57e already exists in list: networks containing: ['91bcac0a-1926-4861-88ab-ae3c06f7e57e']. ignoring it
Jan 26 15:55:31 compute-0 nova_compute[239965]: 2026-01-26 15:55:31.839 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/920413333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:32 compute-0 nova_compute[239965]: 2026-01-26 15:55:32.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2563982935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:32 compute-0 nova_compute[239965]: 2026-01-26 15:55:32.504 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:32 compute-0 nova_compute[239965]: 2026-01-26 15:55:32.511 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:55:32 compute-0 nova_compute[239965]: 2026-01-26 15:55:32.535 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:55:32 compute-0 nova_compute[239965]: 2026-01-26 15:55:32.570 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:55:32 compute-0 nova_compute[239965]: 2026-01-26 15:55:32.570 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:32 compute-0 ceph-mon[75140]: pgmap v1263: 305 pgs: 305 active+clean; 158 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.2 MiB/s wr, 337 op/s
Jan 26 15:55:32 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2563982935' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1264: 305 pgs: 305 active+clean; 121 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 309 op/s
Jan 26 15:55:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:34.382 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:34 compute-0 nova_compute[239965]: 2026-01-26 15:55:34.851 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442919.8497357, d6511f20-baa5-46ee-80c7-fbb3e13d5163 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:34 compute-0 nova_compute[239965]: 2026-01-26 15:55:34.851 239969 INFO nova.compute.manager [-] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] VM Stopped (Lifecycle Event)
Jan 26 15:55:34 compute-0 nova_compute[239965]: 2026-01-26 15:55:34.918 239969 DEBUG nova.compute.manager [None req-3118ec73-13c9-46fd-8791-24447df416d4 - - - - - -] [instance: d6511f20-baa5-46ee-80c7-fbb3e13d5163] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:34 compute-0 ceph-mon[75140]: pgmap v1264: 305 pgs: 305 active+clean; 121 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.4 MiB/s wr, 309 op/s
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.198 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.365 239969 DEBUG nova.network.neutron [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.551 239969 DEBUG oslo_concurrency.lockutils [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.552 239969 DEBUG oslo_concurrency.lockutils [req-52bcecd4-c0c1-4ee2-a2e0-b38556f3c5e4 req-2352046e-8d70-4a96-a8c4-401c5b6c6ef7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.552 239969 DEBUG nova.network.neutron [req-52bcecd4-c0c1-4ee2-a2e0-b38556f3c5e4 req-2352046e-8d70-4a96-a8c4-401c5b6c6ef7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing network info cache for port cbd122d4-01d6-46df-bc1e-8707cf1566d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.556 239969 DEBUG nova.virt.libvirt.vif [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.556 239969 DEBUG nova.network.os_vif_util [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.557 239969 DEBUG nova.network.os_vif_util [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.558 239969 DEBUG os_vif [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.558 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.559 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.559 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.562 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.563 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcbd122d4-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.563 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcbd122d4-01, col_values=(('external_ids', {'iface-id': 'cbd122d4-01d6-46df-bc1e-8707cf1566d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:ca:1b', 'vm-uuid': '819151c3-9892-4926-879b-6bcf2047e116'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.570 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.571 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.571 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.571 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.635 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 NetworkManager[48954]: <info>  [1769442935.6360] manager: (tapcbd122d4-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.638 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.642 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.643 239969 INFO os_vif [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01')
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.644 239969 DEBUG nova.virt.libvirt.vif [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.645 239969 DEBUG nova.network.os_vif_util [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.645 239969 DEBUG nova.network.os_vif_util [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.648 239969 DEBUG nova.virt.libvirt.guest [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] attach device xml: <interface type="ethernet">
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:34:ca:1b"/>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <target dev="tapcbd122d4-01"/>
Jan 26 15:55:35 compute-0 nova_compute[239965]: </interface>
Jan 26 15:55:35 compute-0 nova_compute[239965]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 15:55:35 compute-0 kernel: tapcbd122d4-01: entered promiscuous mode
Jan 26 15:55:35 compute-0 ovn_controller[146046]: 2026-01-26T15:55:35Z|00235|binding|INFO|Claiming lport cbd122d4-01d6-46df-bc1e-8707cf1566d4 for this chassis.
Jan 26 15:55:35 compute-0 ovn_controller[146046]: 2026-01-26T15:55:35Z|00236|binding|INFO|cbd122d4-01d6-46df-bc1e-8707cf1566d4: Claiming fa:16:3e:34:ca:1b 10.100.0.8
Jan 26 15:55:35 compute-0 NetworkManager[48954]: <info>  [1769442935.6637] manager: (tapcbd122d4-01): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.662 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 ovn_controller[146046]: 2026-01-26T15:55:35Z|00237|binding|INFO|Setting lport cbd122d4-01d6-46df-bc1e-8707cf1566d4 ovn-installed in OVS
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.683 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 systemd-udevd[275142]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:55:35 compute-0 NetworkManager[48954]: <info>  [1769442935.7187] device (tapcbd122d4-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:55:35 compute-0 NetworkManager[48954]: <info>  [1769442935.7197] device (tapcbd122d4-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.723 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:ca:1b 10.100.0.8'], port_security=['fa:16:3e:34:ca:1b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '819151c3-9892-4926-879b-6bcf2047e116', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=cbd122d4-01d6-46df-bc1e-8707cf1566d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:35 compute-0 ovn_controller[146046]: 2026-01-26T15:55:35Z|00238|binding|INFO|Setting lport cbd122d4-01d6-46df-bc1e-8707cf1566d4 up in Southbound
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.724 156105 INFO neutron.agent.ovn.metadata.agent [-] Port cbd122d4-01d6-46df-bc1e-8707cf1566d4 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.726 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.749 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc83879-5013-4895-93c3-79186e39194d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 196 op/s
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.784 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[15c2e3c9-dab9-4ed8-aaeb-8f55e66f7eca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.788 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2929537a-bc63-448e-b269-0ee468eb1327]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.819 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bde2955b-3aaa-428f-b66d-cce1477ac73b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.842 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a36a0fcf-4357-4c1a-9449-117b73d71cf1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439439, 'reachable_time': 27465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275150, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:35 compute-0 ovn_controller[146046]: 2026-01-26T15:55:35Z|00239|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.866 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9989fe55-d574-416a-91fa-9f990aef7464]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439451, 'tstamp': 439451}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275151, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439456, 'tstamp': 439456}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275151, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.868 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.884 239969 DEBUG nova.virt.libvirt.driver [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.885 239969 DEBUG nova.virt.libvirt.driver [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.885 239969 DEBUG nova.virt.libvirt.driver [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:1d:ed:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.885 239969 DEBUG nova.virt.libvirt.driver [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:34:ca:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.928 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.929 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.929 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:35.929 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.931 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.936 239969 DEBUG nova.virt.libvirt.guest [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:55:35</nova:creationTime>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:55:35 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:55:35 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     <nova:port uuid="cbd122d4-01d6-46df-bc1e-8707cf1566d4">
Jan 26 15:55:35 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:55:35 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:55:35 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:55:35 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:55:35 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:55:35 compute-0 nova_compute[239965]: 2026-01-26 15:55:35.969 239969 DEBUG oslo_concurrency.lockutils [None req-bc6654a3-c9e0-440c-b9c0-af8e6bf0c9d3 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-819151c3-9892-4926-879b-6bcf2047e116-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 8.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:36 compute-0 ceph-mon[75140]: pgmap v1265: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.2 MiB/s wr, 196 op/s
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:36.949574) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442936949659, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1646, "num_deletes": 506, "total_data_size": 1993396, "memory_usage": 2029384, "flush_reason": "Manual Compaction"}
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442936967587, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1760470, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24624, "largest_seqno": 26269, "table_properties": {"data_size": 1753612, "index_size": 3420, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19042, "raw_average_key_size": 20, "raw_value_size": 1737369, "raw_average_value_size": 1836, "num_data_blocks": 152, "num_entries": 946, "num_filter_entries": 946, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769442830, "oldest_key_time": 1769442830, "file_creation_time": 1769442936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18062 microseconds, and 5888 cpu microseconds.
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:36.967644) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1760470 bytes OK
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:36.967670) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:36.969699) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:36.969718) EVENT_LOG_v1 {"time_micros": 1769442936969713, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:36.969735) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1985076, prev total WAL file size 1985076, number of live WAL files 2.
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:36.970479) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1719KB)], [56(10MB)]
Jan 26 15:55:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442936970540, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12481330, "oldest_snapshot_seqno": -1}
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5035 keys, 7680896 bytes, temperature: kUnknown
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442937026464, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7680896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7646209, "index_size": 21009, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 126228, "raw_average_key_size": 25, "raw_value_size": 7554471, "raw_average_value_size": 1500, "num_data_blocks": 859, "num_entries": 5035, "num_filter_entries": 5035, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769442936, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:37.026860) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7680896 bytes
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:37.028942) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.7 rd, 137.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.2 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(11.5) write-amplify(4.4) OK, records in: 6047, records dropped: 1012 output_compression: NoCompression
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:37.028999) EVENT_LOG_v1 {"time_micros": 1769442937028956, "job": 30, "event": "compaction_finished", "compaction_time_micros": 56047, "compaction_time_cpu_micros": 20181, "output_level": 6, "num_output_files": 1, "total_output_size": 7680896, "num_input_records": 6047, "num_output_records": 5035, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442937029634, "job": 30, "event": "table_file_deletion", "file_number": 58}
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769442937032347, "job": 30, "event": "table_file_deletion", "file_number": 56}
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:36.970409) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:37.032569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:37.032579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:37.032581) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:37.032583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:55:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:55:37.032586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:55:37 compute-0 ovn_controller[146046]: 2026-01-26T15:55:37Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:ca:1b 10.100.0.8
Jan 26 15:55:37 compute-0 ovn_controller[146046]: 2026-01-26T15:55:37Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:ca:1b 10.100.0.8
Jan 26 15:55:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 454 KiB/s rd, 4.5 KiB/s wr, 68 op/s
Jan 26 15:55:37 compute-0 nova_compute[239965]: 2026-01-26 15:55:37.862 239969 DEBUG nova.compute.manager [req-14285d48-7852-43ed-9870-b11912176c17 req-140f4e24-3d76-4efe-b112-5104eed8ab05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:37 compute-0 nova_compute[239965]: 2026-01-26 15:55:37.863 239969 DEBUG oslo_concurrency.lockutils [req-14285d48-7852-43ed-9870-b11912176c17 req-140f4e24-3d76-4efe-b112-5104eed8ab05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:37 compute-0 nova_compute[239965]: 2026-01-26 15:55:37.864 239969 DEBUG oslo_concurrency.lockutils [req-14285d48-7852-43ed-9870-b11912176c17 req-140f4e24-3d76-4efe-b112-5104eed8ab05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:37 compute-0 nova_compute[239965]: 2026-01-26 15:55:37.864 239969 DEBUG oslo_concurrency.lockutils [req-14285d48-7852-43ed-9870-b11912176c17 req-140f4e24-3d76-4efe-b112-5104eed8ab05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:37 compute-0 nova_compute[239965]: 2026-01-26 15:55:37.865 239969 DEBUG nova.compute.manager [req-14285d48-7852-43ed-9870-b11912176c17 req-140f4e24-3d76-4efe-b112-5104eed8ab05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:37 compute-0 nova_compute[239965]: 2026-01-26 15:55:37.865 239969 WARNING nova.compute.manager [req-14285d48-7852-43ed-9870-b11912176c17 req-140f4e24-3d76-4efe-b112-5104eed8ab05 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 for instance with vm_state active and task_state None.
Jan 26 15:55:38 compute-0 nova_compute[239965]: 2026-01-26 15:55:38.241 239969 DEBUG nova.network.neutron [req-52bcecd4-c0c1-4ee2-a2e0-b38556f3c5e4 req-2352046e-8d70-4a96-a8c4-401c5b6c6ef7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updated VIF entry in instance network info cache for port cbd122d4-01d6-46df-bc1e-8707cf1566d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:55:38 compute-0 nova_compute[239965]: 2026-01-26 15:55:38.242 239969 DEBUG nova.network.neutron [req-52bcecd4-c0c1-4ee2-a2e0-b38556f3c5e4 req-2352046e-8d70-4a96-a8c4-401c5b6c6ef7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:38 compute-0 nova_compute[239965]: 2026-01-26 15:55:38.264 239969 DEBUG oslo_concurrency.lockutils [req-52bcecd4-c0c1-4ee2-a2e0-b38556f3c5e4 req-2352046e-8d70-4a96-a8c4-401c5b6c6ef7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:38 compute-0 nova_compute[239965]: 2026-01-26 15:55:38.293 239969 DEBUG oslo_concurrency.lockutils [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-819151c3-9892-4926-879b-6bcf2047e116-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:38 compute-0 nova_compute[239965]: 2026-01-26 15:55:38.294 239969 DEBUG oslo_concurrency.lockutils [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-819151c3-9892-4926-879b-6bcf2047e116-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:38 compute-0 nova_compute[239965]: 2026-01-26 15:55:38.294 239969 DEBUG nova.objects.instance [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:38 compute-0 nova_compute[239965]: 2026-01-26 15:55:38.920 239969 DEBUG nova.objects.instance [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_requests' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:39 compute-0 nova_compute[239965]: 2026-01-26 15:55:39.156 239969 DEBUG nova.network.neutron [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:55:39 compute-0 ceph-mon[75140]: pgmap v1266: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 454 KiB/s rd, 4.5 KiB/s wr, 68 op/s
Jan 26 15:55:39 compute-0 nova_compute[239965]: 2026-01-26 15:55:39.376 239969 DEBUG nova.policy [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:55:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:39 compute-0 nova_compute[239965]: 2026-01-26 15:55:39.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:55:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 4.4 KiB/s wr, 66 op/s
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.201 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:40 compute-0 ceph-mon[75140]: pgmap v1267: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 442 KiB/s rd, 4.4 KiB/s wr, 66 op/s
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.399 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442925.3974996, b8a4a981-46c3-4b65-ac27-0a742bb0caeb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.399 239969 INFO nova.compute.manager [-] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] VM Stopped (Lifecycle Event)
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.480 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "8266f382-d7db-41e3-bc97-b1510341e248" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.480 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.497 239969 DEBUG nova.compute.manager [None req-dbd0a39d-7735-4311-afa4-0223993a3b2f - - - - - -] [instance: b8a4a981-46c3-4b65-ac27-0a742bb0caeb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.564 239969 DEBUG nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.635 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.703 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.703 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.710 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:55:40 compute-0 nova_compute[239965]: 2026-01-26 15:55:40.711 239969 INFO nova.compute.claims [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.239 239969 DEBUG nova.compute.manager [req-6503cb4d-8bcb-4398-8509-eedf90120e3c req-88f9a8b9-6dbc-4956-8ee3-c8d81b44be2f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.239 239969 DEBUG oslo_concurrency.lockutils [req-6503cb4d-8bcb-4398-8509-eedf90120e3c req-88f9a8b9-6dbc-4956-8ee3-c8d81b44be2f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.239 239969 DEBUG oslo_concurrency.lockutils [req-6503cb4d-8bcb-4398-8509-eedf90120e3c req-88f9a8b9-6dbc-4956-8ee3-c8d81b44be2f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.240 239969 DEBUG oslo_concurrency.lockutils [req-6503cb4d-8bcb-4398-8509-eedf90120e3c req-88f9a8b9-6dbc-4956-8ee3-c8d81b44be2f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.240 239969 DEBUG nova.compute.manager [req-6503cb4d-8bcb-4398-8509-eedf90120e3c req-88f9a8b9-6dbc-4956-8ee3-c8d81b44be2f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.240 239969 WARNING nova.compute.manager [req-6503cb4d-8bcb-4398-8509-eedf90120e3c req-88f9a8b9-6dbc-4956-8ee3-c8d81b44be2f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 for instance with vm_state active and task_state None.
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.270 239969 DEBUG nova.network.neutron [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Successfully created port: 7fbc7328-fd77-441a-86c3-1f41f7e706a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.375 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 3.7 KiB/s wr, 56 op/s
Jan 26 15:55:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4168650004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.951 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:41 compute-0 nova_compute[239965]: 2026-01-26 15:55:41.957 239969 DEBUG nova.compute.provider_tree [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:55:42 compute-0 nova_compute[239965]: 2026-01-26 15:55:42.328 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442927.327215, de374e75-a6b9-4a59-bd2f-642cc808b9e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:42 compute-0 nova_compute[239965]: 2026-01-26 15:55:42.329 239969 INFO nova.compute.manager [-] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] VM Stopped (Lifecycle Event)
Jan 26 15:55:42 compute-0 nova_compute[239965]: 2026-01-26 15:55:42.497 239969 DEBUG nova.scheduler.client.report [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:55:42 compute-0 nova_compute[239965]: 2026-01-26 15:55:42.712 239969 DEBUG nova.compute.manager [None req-c7e67f26-31df-4078-962b-93019702daba - - - - - -] [instance: de374e75-a6b9-4a59-bd2f-642cc808b9e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:42 compute-0 nova_compute[239965]: 2026-01-26 15:55:42.732 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.029s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:42 compute-0 nova_compute[239965]: 2026-01-26 15:55:42.733 239969 DEBUG nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:55:42 compute-0 ceph-mon[75140]: pgmap v1268: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 378 KiB/s rd, 3.7 KiB/s wr, 56 op/s
Jan 26 15:55:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4168650004' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:42 compute-0 nova_compute[239965]: 2026-01-26 15:55:42.848 239969 DEBUG nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:55:42 compute-0 nova_compute[239965]: 2026-01-26 15:55:42.849 239969 DEBUG nova.network.neutron [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:55:43 compute-0 nova_compute[239965]: 2026-01-26 15:55:43.002 239969 INFO nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:55:43 compute-0 nova_compute[239965]: 2026-01-26 15:55:43.121 239969 DEBUG nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:55:43 compute-0 nova_compute[239965]: 2026-01-26 15:55:43.757 239969 DEBUG nova.policy [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b6639ef07f74c9ebad76dffa361ec4e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b3f0866575347bd80cdc80b692f07f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:55:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 4.4 KiB/s wr, 28 op/s
Jan 26 15:55:43 compute-0 nova_compute[239965]: 2026-01-26 15:55:43.987 239969 DEBUG nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:55:43 compute-0 nova_compute[239965]: 2026-01-26 15:55:43.988 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:55:43 compute-0 nova_compute[239965]: 2026-01-26 15:55:43.989 239969 INFO nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Creating image(s)
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.017 239969 DEBUG nova.storage.rbd_utils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 8266f382-d7db-41e3-bc97-b1510341e248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.052 239969 DEBUG nova.storage.rbd_utils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 8266f382-d7db-41e3-bc97-b1510341e248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.080 239969 DEBUG nova.storage.rbd_utils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 8266f382-d7db-41e3-bc97-b1510341e248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.087 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.162 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.163 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.163 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.164 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.185 239969 DEBUG nova.storage.rbd_utils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 8266f382-d7db-41e3-bc97-b1510341e248_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.189 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8266f382-d7db-41e3-bc97-b1510341e248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.615 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8266f382-d7db-41e3-bc97-b1510341e248_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.693 239969 DEBUG nova.storage.rbd_utils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] resizing rbd image 8266f382-d7db-41e3-bc97-b1510341e248_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.829 239969 DEBUG nova.objects.instance [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lazy-loading 'migration_context' on Instance uuid 8266f382-d7db-41e3-bc97-b1510341e248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:44 compute-0 ceph-mon[75140]: pgmap v1269: 305 pgs: 305 active+clean; 121 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 4.4 KiB/s wr, 28 op/s
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.872 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.872 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Ensure instance console log exists: /var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.873 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.873 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:44 compute-0 nova_compute[239965]: 2026-01-26 15:55:44.874 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:45 compute-0 nova_compute[239965]: 2026-01-26 15:55:45.203 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:45 compute-0 nova_compute[239965]: 2026-01-26 15:55:45.637 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 125 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 329 KiB/s wr, 23 op/s
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.358 239969 DEBUG nova.network.neutron [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Successfully updated port: 7fbc7328-fd77-441a-86c3-1f41f7e706a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.524 239969 DEBUG oslo_concurrency.lockutils [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.524 239969 DEBUG oslo_concurrency.lockutils [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.524 239969 DEBUG nova.network.neutron [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.589 239969 DEBUG nova.network.neutron [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Successfully created port: 50f13561-1233-4562-9ae2-f16314467652 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.813 239969 DEBUG nova.compute.manager [req-8d201eb6-a9b5-4387-ac1a-c2d1fe3fbdba req-3f95389a-a4e3-4d4b-9c81-817abbff69e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-changed-7fbc7328-fd77-441a-86c3-1f41f7e706a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.813 239969 DEBUG nova.compute.manager [req-8d201eb6-a9b5-4387-ac1a-c2d1fe3fbdba req-3f95389a-a4e3-4d4b-9c81-817abbff69e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing instance network info cache due to event network-changed-7fbc7328-fd77-441a-86c3-1f41f7e706a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.814 239969 DEBUG oslo_concurrency.lockutils [req-8d201eb6-a9b5-4387-ac1a-c2d1fe3fbdba req-3f95389a-a4e3-4d4b-9c81-817abbff69e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.868 239969 WARNING nova.network.neutron [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] 91bcac0a-1926-4861-88ab-ae3c06f7e57e already exists in list: networks containing: ['91bcac0a-1926-4861-88ab-ae3c06f7e57e']. ignoring it
Jan 26 15:55:46 compute-0 nova_compute[239965]: 2026-01-26 15:55:46.869 239969 WARNING nova.network.neutron [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] 91bcac0a-1926-4861-88ab-ae3c06f7e57e already exists in list: networks containing: ['91bcac0a-1926-4861-88ab-ae3c06f7e57e']. ignoring it
Jan 26 15:55:46 compute-0 ceph-mon[75140]: pgmap v1270: 305 pgs: 305 active+clean; 125 MiB data, 460 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 329 KiB/s wr, 23 op/s
Jan 26 15:55:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:55:48 compute-0 nova_compute[239965]: 2026-01-26 15:55:48.348 239969 DEBUG nova.network.neutron [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Successfully updated port: 50f13561-1233-4562-9ae2-f16314467652 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:55:48 compute-0 nova_compute[239965]: 2026-01-26 15:55:48.438 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "refresh_cache-8266f382-d7db-41e3-bc97-b1510341e248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:48 compute-0 nova_compute[239965]: 2026-01-26 15:55:48.439 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquired lock "refresh_cache-8266f382-d7db-41e3-bc97-b1510341e248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:48 compute-0 nova_compute[239965]: 2026-01-26 15:55:48.439 239969 DEBUG nova.network.neutron [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:55:48 compute-0 nova_compute[239965]: 2026-01-26 15:55:48.661 239969 DEBUG nova.network.neutron [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:55:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:55:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3967243491' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:55:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:55:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3967243491' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011098806207717345 of space, bias 1.0, pg target 0.33296418623152035 quantized to 32 (current 32)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668422578251625 of space, bias 1.0, pg target 0.2005267734754875 quantized to 32 (current 32)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.609740926766524e-07 of space, bias 4.0, pg target 0.001033168911211983 quantized to 16 (current 16)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:55:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:55:48 compute-0 ceph-mon[75140]: pgmap v1271: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:55:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3967243491' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:55:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3967243491' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:55:48 compute-0 nova_compute[239965]: 2026-01-26 15:55:48.907 239969 DEBUG nova.compute.manager [req-2a2fc5b3-3222-4e04-95ca-1cc44a4a3014 req-280719af-6e53-4075-ba85-3d709b9c30d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Received event network-changed-50f13561-1233-4562-9ae2-f16314467652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:48 compute-0 nova_compute[239965]: 2026-01-26 15:55:48.907 239969 DEBUG nova.compute.manager [req-2a2fc5b3-3222-4e04-95ca-1cc44a4a3014 req-280719af-6e53-4075-ba85-3d709b9c30d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Refreshing instance network info cache due to event network-changed-50f13561-1233-4562-9ae2-f16314467652. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:55:48 compute-0 nova_compute[239965]: 2026-01-26 15:55:48.907 239969 DEBUG oslo_concurrency.lockutils [req-2a2fc5b3-3222-4e04-95ca-1cc44a4a3014 req-280719af-6e53-4075-ba85-3d709b9c30d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-8266f382-d7db-41e3-bc97-b1510341e248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:55:50 compute-0 nova_compute[239965]: 2026-01-26 15:55:50.205 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:50 compute-0 nova_compute[239965]: 2026-01-26 15:55:50.639 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:50 compute-0 ceph-mon[75140]: pgmap v1272: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.269 239969 DEBUG nova.network.neutron [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.295 239969 DEBUG oslo_concurrency.lockutils [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.297 239969 DEBUG oslo_concurrency.lockutils [req-8d201eb6-a9b5-4387-ac1a-c2d1fe3fbdba req-3f95389a-a4e3-4d4b-9c81-817abbff69e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.297 239969 DEBUG nova.network.neutron [req-8d201eb6-a9b5-4387-ac1a-c2d1fe3fbdba req-3f95389a-a4e3-4d4b-9c81-817abbff69e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing network info cache for port 7fbc7328-fd77-441a-86c3-1f41f7e706a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.301 239969 DEBUG nova.virt.libvirt.vif [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.302 239969 DEBUG nova.network.os_vif_util [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.304 239969 DEBUG nova.network.os_vif_util [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:f0:af,bridge_name='br-int',has_traffic_filtering=True,id=7fbc7328-fd77-441a-86c3-1f41f7e706a0,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fbc7328-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.304 239969 DEBUG os_vif [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:f0:af,bridge_name='br-int',has_traffic_filtering=True,id=7fbc7328-fd77-441a-86c3-1f41f7e706a0,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fbc7328-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.305 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.305 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.305 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.308 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.308 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7fbc7328-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.309 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7fbc7328-fd, col_values=(('external_ids', {'iface-id': '7fbc7328-fd77-441a-86c3-1f41f7e706a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:f0:af', 'vm-uuid': '819151c3-9892-4926-879b-6bcf2047e116'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:51 compute-0 NetworkManager[48954]: <info>  [1769442951.3118] manager: (tap7fbc7328-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/122)
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.313 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.319 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.321 239969 DEBUG nova.network.neutron [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Updating instance_info_cache with network_info: [{"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.322 239969 INFO os_vif [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:f0:af,bridge_name='br-int',has_traffic_filtering=True,id=7fbc7328-fd77-441a-86c3-1f41f7e706a0,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fbc7328-fd')
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.323 239969 DEBUG nova.virt.libvirt.vif [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.323 239969 DEBUG nova.network.os_vif_util [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.324 239969 DEBUG nova.network.os_vif_util [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:f0:af,bridge_name='br-int',has_traffic_filtering=True,id=7fbc7328-fd77-441a-86c3-1f41f7e706a0,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fbc7328-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.326 239969 DEBUG nova.virt.libvirt.guest [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] attach device xml: <interface type="ethernet">
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:7a:f0:af"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <target dev="tap7fbc7328-fd"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]: </interface>
Jan 26 15:55:51 compute-0 nova_compute[239965]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 15:55:51 compute-0 kernel: tap7fbc7328-fd: entered promiscuous mode
Jan 26 15:55:51 compute-0 NetworkManager[48954]: <info>  [1769442951.3395] manager: (tap7fbc7328-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/123)
Jan 26 15:55:51 compute-0 ovn_controller[146046]: 2026-01-26T15:55:51Z|00240|binding|INFO|Claiming lport 7fbc7328-fd77-441a-86c3-1f41f7e706a0 for this chassis.
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.342 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:51 compute-0 ovn_controller[146046]: 2026-01-26T15:55:51Z|00241|binding|INFO|7fbc7328-fd77-441a-86c3-1f41f7e706a0: Claiming fa:16:3e:7a:f0:af 10.100.0.4
Jan 26 15:55:51 compute-0 ovn_controller[146046]: 2026-01-26T15:55:51Z|00242|binding|INFO|Setting lport 7fbc7328-fd77-441a-86c3-1f41f7e706a0 ovn-installed in OVS
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.363 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:51 compute-0 systemd-udevd[275346]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:55:51 compute-0 NetworkManager[48954]: <info>  [1769442951.3792] device (tap7fbc7328-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:55:51 compute-0 NetworkManager[48954]: <info>  [1769442951.3798] device (tap7fbc7328-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.386 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Releasing lock "refresh_cache-8266f382-d7db-41e3-bc97-b1510341e248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.386 239969 DEBUG nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Instance network_info: |[{"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.386 239969 DEBUG oslo_concurrency.lockutils [req-2a2fc5b3-3222-4e04-95ca-1cc44a4a3014 req-280719af-6e53-4075-ba85-3d709b9c30d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-8266f382-d7db-41e3-bc97-b1510341e248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.387 239969 DEBUG nova.network.neutron [req-2a2fc5b3-3222-4e04-95ca-1cc44a4a3014 req-280719af-6e53-4075-ba85-3d709b9c30d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Refreshing network info cache for port 50f13561-1233-4562-9ae2-f16314467652 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.390 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Start _get_guest_xml network_info=[{"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:55:51 compute-0 ovn_controller[146046]: 2026-01-26T15:55:51Z|00243|binding|INFO|Setting lport 7fbc7328-fd77-441a-86c3-1f41f7e706a0 up in Southbound
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.391 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:f0:af 10.100.0.4'], port_security=['fa:16:3e:7a:f0:af 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '819151c3-9892-4926-879b-6bcf2047e116', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7fbc7328-fd77-441a-86c3-1f41f7e706a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.392 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7fbc7328-fd77-441a-86c3-1f41f7e706a0 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.394 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.395 239969 WARNING nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:55:51 compute-0 ovn_controller[146046]: 2026-01-26T15:55:51Z|00244|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.402 239969 DEBUG nova.virt.libvirt.host [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.403 239969 DEBUG nova.virt.libvirt.host [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.407 239969 DEBUG nova.virt.libvirt.host [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.408 239969 DEBUG nova.virt.libvirt.host [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.408 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.408 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.409 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.409 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.409 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.409 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.410 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.409 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8820462e-c429-47b4-911a-39a52352f4a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.410 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.410 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.410 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.411 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.411 239969 DEBUG nova.virt.hardware [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.414 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.436 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b7869904-6741-415f-b1d5-dbd4146ae14a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.439 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e24b3c-fa77-4ce1-95b5-e1d2a6702e39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.466 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[39c077c1-8866-4efa-a0c0-02c8f0ceb1b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.486 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8e969ac9-e518-4275-b6bd-eeff7e1ce0e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439439, 'reachable_time': 27465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275355, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.502 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7be023-3cfd-4e18-92db-556f36e90aa6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439451, 'tstamp': 439451}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275356, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439456, 'tstamp': 439456}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275356, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.503 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.504 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.506 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.506 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.506 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:51.507 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.713 239969 DEBUG nova.virt.libvirt.driver [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.714 239969 DEBUG nova.virt.libvirt.driver [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.714 239969 DEBUG nova.virt.libvirt.driver [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:1d:ed:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.715 239969 DEBUG nova.virt.libvirt.driver [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:34:ca:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.715 239969 DEBUG nova.virt.libvirt.driver [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:7a:f0:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.763 239969 DEBUG nova.virt.libvirt.guest [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:55:51</nova:creationTime>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:55:51 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:port uuid="cbd122d4-01d6-46df-bc1e-8707cf1566d4">
Jan 26 15:55:51 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     <nova:port uuid="7fbc7328-fd77-441a-86c3-1f41f7e706a0">
Jan 26 15:55:51 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:55:51 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:55:51 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:55:51 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:55:51 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:55:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:55:51 compute-0 nova_compute[239965]: 2026-01-26 15:55:51.796 239969 DEBUG oslo_concurrency.lockutils [None req-473e3db6-c275-4ebd-91bb-9f709f765a7b 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-819151c3-9892-4926-879b-6bcf2047e116-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 13.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:55:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/385031354' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.016 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.047 239969 DEBUG nova.storage.rbd_utils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 8266f382-d7db-41e3-bc97-b1510341e248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.054 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.240 239969 DEBUG nova.compute.manager [req-059b03a5-2992-4cb5-9b90-c1436173fdb6 req-6eea1368-c806-4e60-8f31-1c05a18517d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.241 239969 DEBUG oslo_concurrency.lockutils [req-059b03a5-2992-4cb5-9b90-c1436173fdb6 req-6eea1368-c806-4e60-8f31-1c05a18517d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.241 239969 DEBUG oslo_concurrency.lockutils [req-059b03a5-2992-4cb5-9b90-c1436173fdb6 req-6eea1368-c806-4e60-8f31-1c05a18517d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.241 239969 DEBUG oslo_concurrency.lockutils [req-059b03a5-2992-4cb5-9b90-c1436173fdb6 req-6eea1368-c806-4e60-8f31-1c05a18517d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.242 239969 DEBUG nova.compute.manager [req-059b03a5-2992-4cb5-9b90-c1436173fdb6 req-6eea1368-c806-4e60-8f31-1c05a18517d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.242 239969 WARNING nova.compute.manager [req-059b03a5-2992-4cb5-9b90-c1436173fdb6 req-6eea1368-c806-4e60-8f31-1c05a18517d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 for instance with vm_state active and task_state None.
Jan 26 15:55:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:55:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/932439628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.656 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.658 239969 DEBUG nova.virt.libvirt.vif [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1447210512',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1447210512',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1447210512',id=38,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b3f0866575347bd80cdc80b692f07f5',ramdisk_id='',reservation_id='r-ht6slxng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-744790849',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-744790849-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:43Z,user_data=None,user_id='8b6639ef07f74c9ebad76dffa361ec4e',uuid=8266f382-d7db-41e3-bc97-b1510341e248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.658 239969 DEBUG nova.network.os_vif_util [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converting VIF {"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.659 239969 DEBUG nova.network.os_vif_util [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:9d:4c,bridge_name='br-int',has_traffic_filtering=True,id=50f13561-1233-4562-9ae2-f16314467652,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f13561-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.660 239969 DEBUG nova.objects.instance [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8266f382-d7db-41e3-bc97-b1510341e248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.689 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <uuid>8266f382-d7db-41e3-bc97-b1510341e248</uuid>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <name>instance-00000026</name>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1447210512</nova:name>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:55:51</nova:creationTime>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <nova:user uuid="8b6639ef07f74c9ebad76dffa361ec4e">tempest-ImagesOneServerNegativeTestJSON-744790849-project-member</nova:user>
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <nova:project uuid="1b3f0866575347bd80cdc80b692f07f5">tempest-ImagesOneServerNegativeTestJSON-744790849</nova:project>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <nova:port uuid="50f13561-1233-4562-9ae2-f16314467652">
Jan 26 15:55:52 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <system>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <entry name="serial">8266f382-d7db-41e3-bc97-b1510341e248</entry>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <entry name="uuid">8266f382-d7db-41e3-bc97-b1510341e248</entry>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     </system>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <os>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   </os>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <features>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   </features>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8266f382-d7db-41e3-bc97-b1510341e248_disk">
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       </source>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8266f382-d7db-41e3-bc97-b1510341e248_disk.config">
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       </source>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:55:52 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:cd:9d:4c"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <target dev="tap50f13561-12"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248/console.log" append="off"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <video>
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     </video>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:55:52 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:55:52 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:55:52 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:55:52 compute-0 nova_compute[239965]: </domain>
Jan 26 15:55:52 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.690 239969 DEBUG nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Preparing to wait for external event network-vif-plugged-50f13561-1233-4562-9ae2-f16314467652 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.690 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "8266f382-d7db-41e3-bc97-b1510341e248-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.691 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.691 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.692 239969 DEBUG nova.virt.libvirt.vif [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1447210512',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1447210512',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1447210512',id=38,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b3f0866575347bd80cdc80b692f07f5',ramdisk_id='',reservation_id='r-ht6slxng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-744790849',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-744790849-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:43Z,user_data=None,user_id='8b6639ef07f74c9ebad76dffa361ec4e',uuid=8266f382-d7db-41e3-bc97-b1510341e248,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.692 239969 DEBUG nova.network.os_vif_util [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converting VIF {"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.693 239969 DEBUG nova.network.os_vif_util [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:9d:4c,bridge_name='br-int',has_traffic_filtering=True,id=50f13561-1233-4562-9ae2-f16314467652,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f13561-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.693 239969 DEBUG os_vif [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:9d:4c,bridge_name='br-int',has_traffic_filtering=True,id=50f13561-1233-4562-9ae2-f16314467652,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f13561-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.694 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.694 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.694 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.697 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.697 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50f13561-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.697 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50f13561-12, col_values=(('external_ids', {'iface-id': '50f13561-1233-4562-9ae2-f16314467652', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:9d:4c', 'vm-uuid': '8266f382-d7db-41e3-bc97-b1510341e248'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.699 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:52 compute-0 NetworkManager[48954]: <info>  [1769442952.7001] manager: (tap50f13561-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.701 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.704 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.705 239969 INFO os_vif [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:9d:4c,bridge_name='br-int',has_traffic_filtering=True,id=50f13561-1233-4562-9ae2-f16314467652,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f13561-12')
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.797 239969 DEBUG nova.network.neutron [req-2a2fc5b3-3222-4e04-95ca-1cc44a4a3014 req-280719af-6e53-4075-ba85-3d709b9c30d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Updated VIF entry in instance network info cache for port 50f13561-1233-4562-9ae2-f16314467652. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.798 239969 DEBUG nova.network.neutron [req-2a2fc5b3-3222-4e04-95ca-1cc44a4a3014 req-280719af-6e53-4075-ba85-3d709b9c30d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Updating instance_info_cache with network_info: [{"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.812 239969 DEBUG oslo_concurrency.lockutils [req-2a2fc5b3-3222-4e04-95ca-1cc44a4a3014 req-280719af-6e53-4075-ba85-3d709b9c30d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-8266f382-d7db-41e3-bc97-b1510341e248" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.873 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.874 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.874 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No VIF found with MAC fa:16:3e:cd:9d:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.875 239969 INFO nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Using config drive
Jan 26 15:55:52 compute-0 nova_compute[239965]: 2026-01-26 15:55:52.902 239969 DEBUG nova.storage.rbd_utils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 8266f382-d7db-41e3-bc97-b1510341e248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:52 compute-0 ceph-mon[75140]: pgmap v1273: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 15:55:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/385031354' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/932439628' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.279 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.280 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.299 239969 DEBUG nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.411 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.412 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.418 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.418 239969 INFO nova.compute.claims [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.596 239969 INFO nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Creating config drive at /var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248/disk.config
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.601 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnmucnim execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.740 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwnmucnim" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.765 239969 DEBUG nova.storage.rbd_utils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 8266f382-d7db-41e3-bc97-b1510341e248_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.769 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248/disk.config 8266f382-d7db-41e3-bc97-b1510341e248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:53 compute-0 nova_compute[239965]: 2026-01-26 15:55:53.829 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:53 compute-0 ovn_controller[146046]: 2026-01-26T15:55:53Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7a:f0:af 10.100.0.4
Jan 26 15:55:53 compute-0 ovn_controller[146046]: 2026-01-26T15:55:53Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7a:f0:af 10.100.0.4
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.022 239969 DEBUG nova.network.neutron [req-8d201eb6-a9b5-4387-ac1a-c2d1fe3fbdba req-3f95389a-a4e3-4d4b-9c81-817abbff69e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updated VIF entry in instance network info cache for port 7fbc7328-fd77-441a-86c3-1f41f7e706a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.023 239969 DEBUG nova.network.neutron [req-8d201eb6-a9b5-4387-ac1a-c2d1fe3fbdba req-3f95389a-a4e3-4d4b-9c81-817abbff69e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.060 239969 DEBUG oslo_concurrency.lockutils [req-8d201eb6-a9b5-4387-ac1a-c2d1fe3fbdba req-3f95389a-a4e3-4d4b-9c81-817abbff69e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.162 239969 DEBUG oslo_concurrency.processutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248/disk.config 8266f382-d7db-41e3-bc97-b1510341e248_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.393s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.163 239969 INFO nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Deleting local config drive /var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248/disk.config because it was imported into RBD.
Jan 26 15:55:54 compute-0 ceph-mon[75140]: pgmap v1274: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 15:55:54 compute-0 kernel: tap50f13561-12: entered promiscuous mode
Jan 26 15:55:54 compute-0 NetworkManager[48954]: <info>  [1769442954.2218] manager: (tap50f13561-12): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Jan 26 15:55:54 compute-0 systemd-udevd[275348]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:55:54 compute-0 ovn_controller[146046]: 2026-01-26T15:55:54Z|00245|binding|INFO|Claiming lport 50f13561-1233-4562-9ae2-f16314467652 for this chassis.
Jan 26 15:55:54 compute-0 ovn_controller[146046]: 2026-01-26T15:55:54Z|00246|binding|INFO|50f13561-1233-4562-9ae2-f16314467652: Claiming fa:16:3e:cd:9d:4c 10.100.0.6
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.223 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.233 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:9d:4c 10.100.0.6'], port_security=['fa:16:3e:cd:9d:4c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8266f382-d7db-41e3-bc97-b1510341e248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12eb3028-37a4-48dc-836b-0978d0bddbce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b3f0866575347bd80cdc80b692f07f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd45551f-cdcc-4a64-9f48-bf991126bbec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef363ab1-8cec-4345-8968-ee11106642fa, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=50f13561-1233-4562-9ae2-f16314467652) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.234 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 50f13561-1233-4562-9ae2-f16314467652 in datapath 12eb3028-37a4-48dc-836b-0978d0bddbce bound to our chassis
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.237 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12eb3028-37a4-48dc-836b-0978d0bddbce
Jan 26 15:55:54 compute-0 ovn_controller[146046]: 2026-01-26T15:55:54Z|00247|binding|INFO|Setting lport 50f13561-1233-4562-9ae2-f16314467652 ovn-installed in OVS
Jan 26 15:55:54 compute-0 ovn_controller[146046]: 2026-01-26T15:55:54Z|00248|binding|INFO|Setting lport 50f13561-1233-4562-9ae2-f16314467652 up in Southbound
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.243 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:54 compute-0 NetworkManager[48954]: <info>  [1769442954.2441] device (tap50f13561-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:55:54 compute-0 NetworkManager[48954]: <info>  [1769442954.2447] device (tap50f13561-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.249 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.254 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69a38919-2e1f-41d3-8b17-b75c408caca8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.255 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12eb3028-31 in ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.258 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12eb3028-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.258 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9b37802c-bb1b-4946-8f36-547378da1534]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.260 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6659feaa-b3e5-4866-ba91-7583d232fc7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.273 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[33ab1abc-add3-4cc9-b462-42024e2c87bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 systemd-machined[208061]: New machine qemu-42-instance-00000026.
Jan 26 15:55:54 compute-0 systemd[1]: Started Virtual Machine qemu-42-instance-00000026.
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.298 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e6bd8332-42e8-432a-a164-0a70cfb11811]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.327 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a5ed2b83-0a45-41d7-bad1-f3c2c119cfec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.336 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bc1ff76b-5111-48d8-96ed-346918752be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 NetworkManager[48954]: <info>  [1769442954.3371] manager: (tap12eb3028-30): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.374 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b1f21435-2200-489d-951f-9eb4e0d91550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.379 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed99f62-c554-45cd-be05-6242038d600a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 NetworkManager[48954]: <info>  [1769442954.4062] device (tap12eb3028-30): carrier: link connected
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.412 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0158ee9b-f51b-49ce-ab60-211ee37658ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.432 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a57348fd-cd95-4c23-b190-163786c4604f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12eb3028-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c1:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444521, 'reachable_time': 20455, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275548, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.457 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5e298b-3836-4225-bec5-f502d15a0502]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:c192'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 444521, 'tstamp': 444521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275549, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.479 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e44f176c-2bc1-4e13-8aab-df263baf5e31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12eb3028-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c1:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 80], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444521, 'reachable_time': 20455, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275550, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.514 239969 DEBUG nova.compute.manager [req-f0d08115-62ef-46f0-902e-5a0648b7af9f req-ea2805d7-484c-4f6b-8a1c-cd1c86cff73d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.514 239969 DEBUG oslo_concurrency.lockutils [req-f0d08115-62ef-46f0-902e-5a0648b7af9f req-ea2805d7-484c-4f6b-8a1c-cd1c86cff73d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.515 239969 DEBUG oslo_concurrency.lockutils [req-f0d08115-62ef-46f0-902e-5a0648b7af9f req-ea2805d7-484c-4f6b-8a1c-cd1c86cff73d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.515 239969 DEBUG oslo_concurrency.lockutils [req-f0d08115-62ef-46f0-902e-5a0648b7af9f req-ea2805d7-484c-4f6b-8a1c-cd1c86cff73d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.515 239969 DEBUG nova.compute.manager [req-f0d08115-62ef-46f0-902e-5a0648b7af9f req-ea2805d7-484c-4f6b-8a1c-cd1c86cff73d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.515 239969 WARNING nova.compute.manager [req-f0d08115-62ef-46f0-902e-5a0648b7af9f req-ea2805d7-484c-4f6b-8a1c-cd1c86cff73d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 for instance with vm_state active and task_state None.
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.516 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1b629890-2636-41ba-9ccc-5001269ff336]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.589 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[05ebd0c7-9412-4a2a-b49b-73211c6260dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.592 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12eb3028-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.593 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.593 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12eb3028-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.596 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:54 compute-0 NetworkManager[48954]: <info>  [1769442954.5966] manager: (tap12eb3028-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Jan 26 15:55:54 compute-0 kernel: tap12eb3028-30: entered promiscuous mode
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.600 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12eb3028-30, col_values=(('external_ids', {'iface-id': '25ff8da3-57db-4715-8307-e7e4fd4fbd09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:55:54 compute-0 ovn_controller[146046]: 2026-01-26T15:55:54Z|00249|binding|INFO|Releasing lport 25ff8da3-57db-4715-8307-e7e4fd4fbd09 from this chassis (sb_readonly=0)
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.602 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.620 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.622 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12eb3028-37a4-48dc-836b-0978d0bddbce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12eb3028-37a4-48dc-836b-0978d0bddbce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.623 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f1670fd2-7e42-48c6-80e9-b12031b8cbc1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.624 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-12eb3028-37a4-48dc-836b-0978d0bddbce
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/12eb3028-37a4-48dc-836b-0978d0bddbce.pid.haproxy
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 12eb3028-37a4-48dc-836b-0978d0bddbce
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:55:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:54.626 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'env', 'PROCESS_TAG=haproxy-12eb3028-37a4-48dc-836b-0978d0bddbce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12eb3028-37a4-48dc-836b-0978d0bddbce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:55:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/956648710' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.666 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.837s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.674 239969 DEBUG nova.compute.provider_tree [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.697 239969 DEBUG nova.scheduler.client.report [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.726 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.727 239969 DEBUG nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.794 239969 DEBUG nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.794 239969 DEBUG nova.network.neutron [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.824 239969 INFO nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.845 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442954.844968, 8266f382-d7db-41e3-bc97-b1510341e248 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.846 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] VM Started (Lifecycle Event)
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.852 239969 DEBUG nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.858 239969 DEBUG oslo_concurrency.lockutils [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-819151c3-9892-4926-879b-6bcf2047e116-0a00f244-298c-43b6-937a-89de92a84d31" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.858 239969 DEBUG oslo_concurrency.lockutils [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-819151c3-9892-4926-879b-6bcf2047e116-0a00f244-298c-43b6-937a-89de92a84d31" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.859 239969 DEBUG nova.objects.instance [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.886 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.891 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442954.8478644, 8266f382-d7db-41e3-bc97-b1510341e248 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.891 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] VM Paused (Lifecycle Event)
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.927 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.931 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:55:54 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.968 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:54.999 239969 DEBUG nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.001 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.001 239969 INFO nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Creating image(s)
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.025 239969 DEBUG nova.storage.rbd_utils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image d8c8962d-c696-4ea6-826d-466a804a7770_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.049 239969 DEBUG nova.storage.rbd_utils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image d8c8962d-c696-4ea6-826d-466a804a7770_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.070 239969 DEBUG nova.storage.rbd_utils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image d8c8962d-c696-4ea6-826d-466a804a7770_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.079 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:55 compute-0 podman[275624]: 2026-01-26 15:55:55.082805282 +0000 UTC m=+0.073362191 container create 6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.118 239969 DEBUG nova.policy [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e19e9687c7784557a28b0297dc26ff06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b303275701ea4029a4a744bf25ce8726', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:55:55 compute-0 podman[275624]: 2026-01-26 15:55:55.031919904 +0000 UTC m=+0.022476833 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.129 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.130 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:55 compute-0 systemd[1]: Started libpod-conmon-6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a.scope.
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.150 239969 DEBUG nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.165 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.166 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.166 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.167 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:55:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60adedcb02eb3c9a93dd5f60c190a05ac491434ae42fa69f843e54d97c675b56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:55:55 compute-0 podman[275624]: 2026-01-26 15:55:55.191858259 +0000 UTC m=+0.182415168 container init 6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.197 239969 DEBUG nova.storage.rbd_utils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image d8c8962d-c696-4ea6-826d-466a804a7770_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:55 compute-0 podman[275624]: 2026-01-26 15:55:55.201013413 +0000 UTC m=+0.191570322 container start 6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 15:55:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/956648710' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.206 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d8c8962d-c696-4ea6-826d-466a804a7770_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:55 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[275694]: [NOTICE]   (275718) : New worker (275721) forked
Jan 26 15:55:55 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[275694]: [NOTICE]   (275718) : Loading success.
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.261 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.262 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.270 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.271 239969 INFO nova.compute.claims [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.456 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.573 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d8c8962d-c696-4ea6-826d-466a804a7770_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.661 239969 DEBUG nova.storage.rbd_utils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] resizing rbd image d8c8962d-c696-4ea6-826d-466a804a7770_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:55:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 179 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.4 MiB/s wr, 35 op/s
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.776 239969 DEBUG nova.objects.instance [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'migration_context' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.792 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.792 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Ensure instance console log exists: /var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.793 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.793 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:55 compute-0 nova_compute[239965]: 2026-01-26 15:55:55.793 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:55:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4140383241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.071 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.077 239969 DEBUG nova.compute.provider_tree [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.120 239969 DEBUG nova.objects.instance [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_requests' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:56 compute-0 podman[275842]: 2026-01-26 15:55:56.359531939 +0000 UTC m=+0.045935899 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 26 15:55:56 compute-0 ceph-mon[75140]: pgmap v1275: 305 pgs: 305 active+clean; 179 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.4 MiB/s wr, 35 op/s
Jan 26 15:55:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4140383241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:55:56 compute-0 podman[275843]: 2026-01-26 15:55:56.417053121 +0000 UTC m=+0.101750019 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.914 239969 DEBUG nova.scheduler.client.report [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.920 239969 DEBUG nova.network.neutron [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.941 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.945 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.946 239969 DEBUG nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.995 239969 DEBUG nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:55:56 compute-0 nova_compute[239965]: 2026-01-26 15:55:56.996 239969 DEBUG nova.network.neutron [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.013 239969 INFO nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.031 239969 DEBUG nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.119 239969 DEBUG nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.122 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.123 239969 INFO nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Creating image(s)
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.144 239969 DEBUG nova.storage.rbd_utils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.173 239969 DEBUG nova.storage.rbd_utils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.202 239969 DEBUG nova.storage.rbd_utils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.207 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.239 239969 DEBUG nova.policy [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e19e9687c7784557a28b0297dc26ff06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b303275701ea4029a4a744bf25ce8726', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.279 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.280 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.280 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.281 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.298 239969 DEBUG nova.storage.rbd_utils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.302 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.618 239969 DEBUG nova.network.neutron [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Successfully created port: 1c2767ea-eb7b-4942-a452-31fe1017e883 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.632 239969 DEBUG nova.policy [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.699 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.736 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:55:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 213 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.2 MiB/s wr, 33 op/s
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.800 239969 DEBUG nova.storage.rbd_utils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] resizing rbd image 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.908 239969 DEBUG nova.objects.instance [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f15e546-481b-42eb-ad9c-7c40e8bfe459 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.923 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.923 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Ensure instance console log exists: /var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.924 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.925 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:57 compute-0 nova_compute[239965]: 2026-01-26 15:55:57.925 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:58 compute-0 nova_compute[239965]: 2026-01-26 15:55:58.421 239969 DEBUG nova.network.neutron [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Successfully created port: 5132a5fe-0b78-49d1-94f4-efde60d77ba6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:55:58 compute-0 ceph-mon[75140]: pgmap v1276: 305 pgs: 305 active+clean; 213 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.2 MiB/s wr, 33 op/s
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.025 239969 DEBUG nova.compute.manager [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Received event network-vif-plugged-50f13561-1233-4562-9ae2-f16314467652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.025 239969 DEBUG oslo_concurrency.lockutils [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8266f382-d7db-41e3-bc97-b1510341e248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.026 239969 DEBUG oslo_concurrency.lockutils [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.026 239969 DEBUG oslo_concurrency.lockutils [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.026 239969 DEBUG nova.compute.manager [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Processing event network-vif-plugged-50f13561-1233-4562-9ae2-f16314467652 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.026 239969 DEBUG nova.compute.manager [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Received event network-vif-plugged-50f13561-1233-4562-9ae2-f16314467652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.026 239969 DEBUG oslo_concurrency.lockutils [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8266f382-d7db-41e3-bc97-b1510341e248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.027 239969 DEBUG oslo_concurrency.lockutils [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.027 239969 DEBUG oslo_concurrency.lockutils [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.027 239969 DEBUG nova.compute.manager [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] No waiting events found dispatching network-vif-plugged-50f13561-1233-4562-9ae2-f16314467652 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.027 239969 WARNING nova.compute.manager [req-12775672-cc6c-4107-ab4d-5d3dad659661 req-b1c09528-fe30-4f2e-9fd0-92c293d153dd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Received unexpected event network-vif-plugged-50f13561-1233-4562-9ae2-f16314467652 for instance with vm_state building and task_state spawning.
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.028 239969 DEBUG nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.032 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442959.032406, 8266f382-d7db-41e3-bc97-b1510341e248 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.033 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] VM Resumed (Lifecycle Event)
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.035 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.038 239969 INFO nova.virt.libvirt.driver [-] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Instance spawned successfully.
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.038 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.059 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.065 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.065 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.065 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.066 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.066 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.066 239969 DEBUG nova.virt.libvirt.driver [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.070 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.109 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.125 239969 INFO nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Took 15.14 seconds to spawn the instance on the hypervisor.
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.125 239969 DEBUG nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.148 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.148 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.179 239969 DEBUG nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.189 239969 INFO nova.compute.manager [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Took 18.53 seconds to build instance.
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.214 239969 DEBUG oslo_concurrency.lockutils [None req-da766a6a-12d7-4772-beec-56d2ffdc4ba0 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:59.217 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:59.219 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:55:59.220 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.263 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.264 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.273 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.273 239969 INFO nova.compute.claims [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.470 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:55:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.666 239969 DEBUG nova.network.neutron [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Successfully updated port: 5132a5fe-0b78-49d1-94f4-efde60d77ba6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.688 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "refresh_cache-6f15e546-481b-42eb-ad9c-7c40e8bfe459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.688 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquired lock "refresh_cache-6f15e546-481b-42eb-ad9c-7c40e8bfe459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.688 239969 DEBUG nova.network.neutron [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:55:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 213 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.826 239969 DEBUG nova.compute.manager [req-e1893610-d2d2-4055-8a22-2a7bf874ade2 req-fb064652-097f-4a67-8203-535665d66ee1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Received event network-changed-5132a5fe-0b78-49d1-94f4-efde60d77ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.826 239969 DEBUG nova.compute.manager [req-e1893610-d2d2-4055-8a22-2a7bf874ade2 req-fb064652-097f-4a67-8203-535665d66ee1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Refreshing instance network info cache due to event network-changed-5132a5fe-0b78-49d1-94f4-efde60d77ba6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.826 239969 DEBUG oslo_concurrency.lockutils [req-e1893610-d2d2-4055-8a22-2a7bf874ade2 req-fb064652-097f-4a67-8203-535665d66ee1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-6f15e546-481b-42eb-ad9c-7c40e8bfe459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.875 239969 DEBUG nova.network.neutron [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Successfully updated port: 0a00f244-298c-43b6-937a-89de92a84d31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.891 239969 DEBUG nova.network.neutron [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Successfully updated port: 1c2767ea-eb7b-4942-a452-31fe1017e883 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.895 239969 DEBUG oslo_concurrency.lockutils [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.895 239969 DEBUG oslo_concurrency.lockutils [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.895 239969 DEBUG nova.network.neutron [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.915 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.915 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquired lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:55:59 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.915 239969 DEBUG nova.network.neutron [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:55:59.999 239969 DEBUG nova.network.neutron [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:56:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576812013' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.145 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.152 239969 DEBUG nova.compute.provider_tree [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.154 239969 DEBUG nova.network.neutron [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.168 239969 WARNING nova.network.neutron [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] 91bcac0a-1926-4861-88ab-ae3c06f7e57e already exists in list: networks containing: ['91bcac0a-1926-4861-88ab-ae3c06f7e57e']. ignoring it
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.168 239969 WARNING nova.network.neutron [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] 91bcac0a-1926-4861-88ab-ae3c06f7e57e already exists in list: networks containing: ['91bcac0a-1926-4861-88ab-ae3c06f7e57e']. ignoring it
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.169 239969 WARNING nova.network.neutron [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] 91bcac0a-1926-4861-88ab-ae3c06f7e57e already exists in list: networks containing: ['91bcac0a-1926-4861-88ab-ae3c06f7e57e']. ignoring it
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.173 239969 DEBUG nova.scheduler.client.report [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.195 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.196 239969 DEBUG nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.209 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.252 239969 DEBUG nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.253 239969 DEBUG nova.network.neutron [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.276 239969 INFO nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.296 239969 DEBUG nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.396 239969 DEBUG nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.397 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.398 239969 INFO nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Creating image(s)
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.421 239969 DEBUG nova.storage.rbd_utils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.445 239969 DEBUG nova.storage.rbd_utils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.470 239969 DEBUG nova.storage.rbd_utils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.474 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.546 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.547 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.548 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.548 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.572 239969 DEBUG nova.storage.rbd_utils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.576 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.863 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:00 compute-0 ceph-mon[75140]: pgmap v1277: 305 pgs: 305 active+clean; 213 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Jan 26 15:56:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3576812013' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.898 239969 DEBUG nova.policy [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e19e9687c7784557a28b0297dc26ff06', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b303275701ea4029a4a744bf25ce8726', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:56:00 compute-0 nova_compute[239965]: 2026-01-26 15:56:00.946 239969 DEBUG nova.storage.rbd_utils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] resizing rbd image f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.041 239969 DEBUG nova.objects.instance [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'migration_context' on Instance uuid f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.058 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.058 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Ensure instance console log exists: /var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.058 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.059 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.059 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 223 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.3 MiB/s wr, 62 op/s
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.787 239969 DEBUG nova.network.neutron [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Updating instance_info_cache with network_info: [{"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.816 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Releasing lock "refresh_cache-6f15e546-481b-42eb-ad9c-7c40e8bfe459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.816 239969 DEBUG nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Instance network_info: |[{"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.816 239969 DEBUG oslo_concurrency.lockutils [req-e1893610-d2d2-4055-8a22-2a7bf874ade2 req-fb064652-097f-4a67-8203-535665d66ee1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-6f15e546-481b-42eb-ad9c-7c40e8bfe459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.817 239969 DEBUG nova.network.neutron [req-e1893610-d2d2-4055-8a22-2a7bf874ade2 req-fb064652-097f-4a67-8203-535665d66ee1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Refreshing network info cache for port 5132a5fe-0b78-49d1-94f4-efde60d77ba6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.820 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Start _get_guest_xml network_info=[{"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': '24ff5bfa-2bf0-4d76-ba05-fc857cd2108f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.823 239969 WARNING nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.828 239969 DEBUG nova.virt.libvirt.host [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.828 239969 DEBUG nova.virt.libvirt.host [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.830 239969 DEBUG nova.virt.libvirt.host [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.831 239969 DEBUG nova.virt.libvirt.host [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.831 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.832 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.832 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.833 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.833 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.833 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.834 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.834 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.834 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.835 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.835 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.835 239969 DEBUG nova.virt.hardware [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.839 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.874 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.877 239969 DEBUG nova.compute.manager [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-changed-0a00f244-298c-43b6-937a-89de92a84d31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.878 239969 DEBUG nova.compute.manager [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing instance network info cache due to event network-changed-0a00f244-298c-43b6-937a-89de92a84d31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:56:01 compute-0 nova_compute[239965]: 2026-01-26 15:56:01.878 239969 DEBUG oslo_concurrency.lockutils [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:02 compute-0 ceph-mon[75140]: pgmap v1278: 305 pgs: 305 active+clean; 223 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 373 KiB/s rd, 2.3 MiB/s wr, 62 op/s
Jan 26 15:56:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2941647983' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.525 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.552 239969 DEBUG nova.storage.rbd_utils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.558 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.697 239969 DEBUG nova.network.neutron [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Updating instance_info_cache with network_info: [{"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.701 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.728 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Releasing lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.729 239969 DEBUG nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance network_info: |[{"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.731 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Start _get_guest_xml network_info=[{"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.742 239969 WARNING nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.749 239969 DEBUG nova.virt.libvirt.host [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.750 239969 DEBUG nova.virt.libvirt.host [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.755 239969 DEBUG nova.virt.libvirt.host [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.756 239969 DEBUG nova.virt.libvirt.host [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.757 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.757 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.757 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.758 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.758 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.758 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.758 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.759 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.759 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.759 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.759 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.760 239969 DEBUG nova.virt.hardware [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.763 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:02 compute-0 nova_compute[239965]: 2026-01-26 15:56:02.922 239969 DEBUG nova.network.neutron [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Successfully created port: 09e051a6-5a1d-4163-8066-913fa9569057 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:56:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2941647983' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/503987206' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.162 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.163 239969 DEBUG nova.virt.libvirt.vif [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1817493364',display_name='tempest-ListServerFiltersTestJSON-instance-1817493364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1817493364',id=40,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-zph60iyz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:57Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=6f15e546-481b-42eb-ad9c-7c40e8bfe459,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.164 239969 DEBUG nova.network.os_vif_util [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.165 239969 DEBUG nova.network.os_vif_util [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:ae:87,bridge_name='br-int',has_traffic_filtering=True,id=5132a5fe-0b78-49d1-94f4-efde60d77ba6,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5132a5fe-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
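The `vif_name='tap5132a5fe-0b'` in the converted `VIFOpenVSwitch` object above is derived from the Neutron port UUID: the name is `tap` plus the UUID, truncated so it fits the kernel's interface-name limit (Nova uses a 14-character device-name length). A minimal sketch of that derivation, assuming the truncation behavior described:

```python
def tap_device_name(port_id: str, prefix: str = "tap", max_len: int = 14) -> str:
    """Sketch of how Nova derives a tap device name from a port UUID.

    The name is the prefix plus the port UUID, truncated to 14
    characters so it stays under the Linux IFNAMSIZ limit. This is an
    illustrative reconstruction, not Nova's actual code.
    """
    return (prefix + port_id)[:max_len]

# Matches the devname recorded in the log for this port.
assert tap_device_name("5132a5fe-0b78-49d1-94f4-efde60d77ba6") == "tap5132a5fe-0b"
```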
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.166 239969 DEBUG nova.objects.instance [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f15e546-481b-42eb-ad9c-7c40e8bfe459 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.184 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <uuid>6f15e546-481b-42eb-ad9c-7c40e8bfe459</uuid>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <name>instance-00000028</name>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-1817493364</nova:name>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:56:01</nova:creationTime>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <nova:user uuid="e19e9687c7784557a28b0297dc26ff06">tempest-ListServerFiltersTestJSON-31069689-project-member</nova:user>
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <nova:project uuid="b303275701ea4029a4a744bf25ce8726">tempest-ListServerFiltersTestJSON-31069689</nova:project>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <nova:port uuid="5132a5fe-0b78-49d1-94f4-efde60d77ba6">
Jan 26 15:56:03 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <entry name="serial">6f15e546-481b-42eb-ad9c-7c40e8bfe459</entry>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <entry name="uuid">6f15e546-481b-42eb-ad9c-7c40e8bfe459</entry>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk">
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk.config">
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:69:ae:87"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <target dev="tap5132a5fe-0b"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459/console.log" append="off"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:56:03 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:56:03 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:03 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:03 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:03 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
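In the domain XML above, the flavor's 128 MB of RAM appears as `<memory>131072</memory>` because libvirt's `<memory>` element defaults to KiB. A one-line sketch of the conversion, assuming the default unit:

```python
def flavor_memory_to_libvirt_kib(memory_mb: int) -> int:
    # libvirt's <memory> element is interpreted in KiB when no unit
    # attribute is given, so a flavor's memory_mb is multiplied by 1024
    # when the guest XML is generated. Illustrative sketch only.
    return memory_mb * 1024

# m1.nano's 128 MB flavor -> <memory>131072</memory> in the dumped XML.
assert flavor_memory_to_libvirt_kib(128) == 131072
```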
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.187 239969 DEBUG nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Preparing to wait for external event network-vif-plugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.187 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.188 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.188 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.189 239969 DEBUG nova.virt.libvirt.vif [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1817493364',display_name='tempest-ListServerFiltersTestJSON-instance-1817493364',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1817493364',id=40,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-zph60iyz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:57Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=6f15e546-481b-42eb-ad9c-7c40e8bfe459,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.189 239969 DEBUG nova.network.os_vif_util [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.190 239969 DEBUG nova.network.os_vif_util [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:ae:87,bridge_name='br-int',has_traffic_filtering=True,id=5132a5fe-0b78-49d1-94f4-efde60d77ba6,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5132a5fe-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.191 239969 DEBUG os_vif [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:ae:87,bridge_name='br-int',has_traffic_filtering=True,id=5132a5fe-0b78-49d1-94f4-efde60d77ba6,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5132a5fe-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.191 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.192 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.192 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.196 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.197 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5132a5fe-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.197 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5132a5fe-0b, col_values=(('external_ids', {'iface-id': '5132a5fe-0b78-49d1-94f4-efde60d77ba6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:ae:87', 'vm-uuid': '6f15e546-481b-42eb-ad9c-7c40e8bfe459'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.199 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:03 compute-0 NetworkManager[48954]: <info>  [1769442963.1999] manager: (tap5132a5fe-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.201 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.206 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.207 239969 INFO os_vif [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:ae:87,bridge_name='br-int',has_traffic_filtering=True,id=5132a5fe-0b78-49d1-94f4-efde60d77ba6,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5132a5fe-0b')
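The OVSDB transaction logged above (an `AddPortCommand` with `may_exist=True` followed by a `DbSetCommand` writing `external_ids` on the Interface row) corresponds to a single idempotent `ovs-vsctl` invocation. A hedged sketch that composes the equivalent command line, using the values from this plug:

```python
def build_ovs_plug_cmd(bridge: str, dev: str, iface_id: str,
                       mac: str, vm_uuid: str) -> list:
    """Compose an ovs-vsctl equivalent of the ovsdbapp transaction
    seen in the log (AddPortCommand + DbSetCommand on external_ids).
    Illustrative reconstruction, not the code os-vif actually runs.
    """
    ext_ids = {
        "iface-id": iface_id,       # Neutron port UUID
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,         # Nova instance UUID
    }
    cmd = ["ovs-vsctl", "--may-exist", "add-port", bridge, dev,
           "--", "set", "Interface", dev]
    cmd += ["external_ids:%s=%s" % (k, v) for k, v in ext_ids.items()]
    return cmd

cmd = build_ovs_plug_cmd("br-int", "tap5132a5fe-0b",
                         "5132a5fe-0b78-49d1-94f4-efde60d77ba6",
                         "fa:16:3e:69:ae:87",
                         "6f15e546-481b-42eb-ad9c-7c40e8bfe459")
```

`--may-exist` mirrors the `may_exist=True` flag in the logged command, which is why re-running the transaction ("Transaction caused no change") is harmless.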
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.226 239969 DEBUG nova.network.neutron [req-e1893610-d2d2-4055-8a22-2a7bf874ade2 req-fb064652-097f-4a67-8203-535665d66ee1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Updated VIF entry in instance network info cache for port 5132a5fe-0b78-49d1-94f4-efde60d77ba6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.227 239969 DEBUG nova.network.neutron [req-e1893610-d2d2-4055-8a22-2a7bf874ade2 req-fb064652-097f-4a67-8203-535665d66ee1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Updating instance_info_cache with network_info: [{"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
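The `"mtu": 1442` with `"tunneled": true` in the cached network info is consistent with an OVN Geneve-encapsulated tenant network: assuming a 1500-byte physical MTU and the commonly cited 58-byte overhead for Geneve over IPv4 (outer IP + UDP + Geneve header and options), the tenant MTU comes out to 1442. A sketch of that arithmetic, with both figures being assumptions about this deployment:

```python
def tenant_mtu(physical_mtu: int = 1500, encap_overhead: int = 58) -> int:
    """Tenant-network MTU for a tunneled (Geneve) network.

    Assumes a 1500-byte underlay MTU and a 58-byte Geneve/IPv4
    encapsulation overhead; both are illustrative defaults, not
    values read from this deployment's configuration.
    """
    return physical_mtu - encap_overhead

assert tenant_mtu() == 1442  # matches "mtu": 1442 in the VIF above
```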
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.250 239969 DEBUG oslo_concurrency.lockutils [req-e1893610-d2d2-4055-8a22-2a7bf874ade2 req-fb064652-097f-4a67-8203-535665d66ee1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-6f15e546-481b-42eb-ad9c-7c40e8bfe459" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.275 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.275 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.275 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] No VIF found with MAC fa:16:3e:69:ae:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.276 239969 INFO nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Using config drive
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.303 239969 DEBUG nova.storage.rbd_utils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2497772889' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.433 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.457 239969 DEBUG nova.storage.rbd_utils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image d8c8962d-c696-4ea6-826d-466a804a7770_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.462 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.493 239969 DEBUG nova.compute.manager [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.542 239969 INFO nova.compute.manager [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] instance snapshotting
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.730 239969 INFO nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Creating config drive at /var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459/disk.config
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.735 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxwkrk9cf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 295 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 154 op/s
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.790 239969 INFO nova.virt.libvirt.driver [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Beginning live snapshot process
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.860 239969 DEBUG nova.network.neutron [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Successfully updated port: 09e051a6-5a1d-4163-8066-913fa9569057 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.878 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxwkrk9cf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.905 239969 DEBUG nova.storage.rbd_utils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.911 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459/disk.config 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.951 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "refresh_cache-f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.952 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquired lock "refresh_cache-f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:03 compute-0 nova_compute[239965]: 2026-01-26 15:56:03.952 239969 DEBUG nova.network.neutron [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:56:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158655126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.084 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.085 239969 DEBUG oslo_concurrency.processutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459/disk.config 6f15e546-481b-42eb-ad9c-7c40e8bfe459_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.086 239969 DEBUG nova.virt.libvirt.vif [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-2052998401',display_name='tempest-ListServerFiltersTestJSON-instance-2052998401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-2052998401',id=39,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-xtx413wp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:54Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=d8c8962d-c696-4ea6-826d-466a804a7770,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.086 239969 DEBUG nova.network.os_vif_util [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.087 239969 DEBUG nova.network.os_vif_util [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.088 239969 DEBUG nova.objects.instance [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'pci_devices' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.090 239969 INFO nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Deleting local config drive /var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459/disk.config because it was imported into RBD.
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.110 239969 DEBUG nova.virt.libvirt.imagebackend [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:56:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/503987206' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2497772889' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:04 compute-0 ceph-mon[75140]: pgmap v1279: 305 pgs: 305 active+clean; 295 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.9 MiB/s wr, 154 op/s
Jan 26 15:56:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/158655126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.121 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <uuid>d8c8962d-c696-4ea6-826d-466a804a7770</uuid>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <name>instance-00000027</name>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-2052998401</nova:name>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:56:02</nova:creationTime>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <nova:user uuid="e19e9687c7784557a28b0297dc26ff06">tempest-ListServerFiltersTestJSON-31069689-project-member</nova:user>
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <nova:project uuid="b303275701ea4029a4a744bf25ce8726">tempest-ListServerFiltersTestJSON-31069689</nova:project>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <nova:port uuid="1c2767ea-eb7b-4942-a452-31fe1017e883">
Jan 26 15:56:04 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <entry name="serial">d8c8962d-c696-4ea6-826d-466a804a7770</entry>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <entry name="uuid">d8c8962d-c696-4ea6-826d-466a804a7770</entry>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d8c8962d-c696-4ea6-826d-466a804a7770_disk">
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d8c8962d-c696-4ea6-826d-466a804a7770_disk.config">
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:04 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:3a:cd:80"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <target dev="tap1c2767ea-eb"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770/console.log" append="off"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:56:04 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:56:04 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:04 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:04 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:04 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.128 239969 DEBUG nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Preparing to wait for external event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.129 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.129 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.129 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.130 239969 DEBUG nova.virt.libvirt.vif [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-2052998401',display_name='tempest-ListServerFiltersTestJSON-instance-2052998401',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-2052998401',id=39,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-xtx413wp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:55:54Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=d8c8962d-c696-4ea6-826d-466a804a7770,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.131 239969 DEBUG nova.network.os_vif_util [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.132 239969 DEBUG nova.network.os_vif_util [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.133 239969 DEBUG os_vif [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.133 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.134 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.135 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.136 239969 DEBUG nova.network.neutron [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.143 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.144 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c2767ea-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.145 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c2767ea-eb, col_values=(('external_ids', {'iface-id': '1c2767ea-eb7b-4942-a452-31fe1017e883', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:cd:80', 'vm-uuid': 'd8c8962d-c696-4ea6-826d-466a804a7770'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:04 compute-0 NetworkManager[48954]: <info>  [1769442964.1485] manager: (tap1c2767ea-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.147 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.150 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.157 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.160 239969 INFO os_vif [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb')
Jan 26 15:56:04 compute-0 NetworkManager[48954]: <info>  [1769442964.1624] manager: (tap5132a5fe-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/130)
Jan 26 15:56:04 compute-0 kernel: tap5132a5fe-0b: entered promiscuous mode
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.169 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 ovn_controller[146046]: 2026-01-26T15:56:04Z|00250|binding|INFO|Claiming lport 5132a5fe-0b78-49d1-94f4-efde60d77ba6 for this chassis.
Jan 26 15:56:04 compute-0 ovn_controller[146046]: 2026-01-26T15:56:04Z|00251|binding|INFO|5132a5fe-0b78-49d1-94f4-efde60d77ba6: Claiming fa:16:3e:69:ae:87 10.100.0.12
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.182 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:ae:87 10.100.0.12'], port_security=['fa:16:3e:69:ae:87 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6f15e546-481b-42eb-ad9c-7c40e8bfe459', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b303275701ea4029a4a744bf25ce8726', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bbe71d43-f65e-47cf-a850-0e359656056c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6a88dad-0ef4-4088-9429-227b0056ef4f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5132a5fe-0b78-49d1-94f4-efde60d77ba6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.183 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5132a5fe-0b78-49d1-94f4-efde60d77ba6 in datapath 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c bound to our chassis
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.185 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.200 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[561a573e-d0e8-46c0-80d5-af1e6932758e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.201 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap99f86ca1-71 in ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:56:04 compute-0 systemd-udevd[276475]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:04 compute-0 systemd-machined[208061]: New machine qemu-43-instance-00000028.
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.208 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap99f86ca1-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.208 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6e19e807-08a9-47f3-9f9a-b7bc32eb6851]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.210 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa10f83-bb8f-409a-adfe-e5fcf08145ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 NetworkManager[48954]: <info>  [1769442964.2195] device (tap5132a5fe-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:04 compute-0 NetworkManager[48954]: <info>  [1769442964.2205] device (tap5132a5fe-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.221 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[adf930dd-e9ca-4976-b5ce-15d541076a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 systemd[1]: Started Virtual Machine qemu-43-instance-00000028.
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.229 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 ovn_controller[146046]: 2026-01-26T15:56:04Z|00252|binding|INFO|Setting lport 5132a5fe-0b78-49d1-94f4-efde60d77ba6 ovn-installed in OVS
Jan 26 15:56:04 compute-0 ovn_controller[146046]: 2026-01-26T15:56:04Z|00253|binding|INFO|Setting lport 5132a5fe-0b78-49d1-94f4-efde60d77ba6 up in Southbound
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.233 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.241 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.241 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.242 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] No VIF found with MAC fa:16:3e:3a:cd:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.242 239969 INFO nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Using config drive
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.250 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[50602a24-bf00-42fa-8e7d-014af05b2c87]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.276 239969 DEBUG nova.storage.rbd_utils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image d8c8962d-c696-4ea6-826d-466a804a7770_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.281 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f8f760d3-8463-4a21-bf2d-fa5d6e924d4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 systemd-udevd[276480]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:04 compute-0 NetworkManager[48954]: <info>  [1769442964.2957] manager: (tap99f86ca1-70): new Veth device (/org/freedesktop/NetworkManager/Devices/131)
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.294 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3c8662c8-42f5-4a70-bd09-4d3cddfdab9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.327 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e0577413-c087-4d23-8b7d-3c2d0e312fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.334 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[45806e5b-5601-4a3f-bbda-6520cf0bcd59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 NetworkManager[48954]: <info>  [1769442964.3624] device (tap99f86ca1-70): carrier: link connected
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.366 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7d11b7e6-c51c-40cd-9e74-2472e26a9f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.382 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[978d64bc-f30b-4765-9e95-cc9da75a84e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99f86ca1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:cf:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445516, 'reachable_time': 19368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276529, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.391 239969 DEBUG nova.storage.rbd_utils [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] creating snapshot(8e197319cd48410596312e33cb86795a) on rbd image(8266f382-d7db-41e3-bc97-b1510341e248_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.402 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[792684a4-aa52-475a-bbec-598bfde74a8e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea8:cf65'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445516, 'tstamp': 445516}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276530, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.430 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[808a9f16-12b6-4c2c-9793-a81680c9e290]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99f86ca1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:cf:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445516, 'reachable_time': 19368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 276547, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.461 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[39320287-898e-4dea-9ad3-82936ff20193]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.534 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[59504099-1766-4562-80c4-faec14377828]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.536 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99f86ca1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.536 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.537 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99f86ca1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.539 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 NetworkManager[48954]: <info>  [1769442964.5399] manager: (tap99f86ca1-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/132)
Jan 26 15:56:04 compute-0 kernel: tap99f86ca1-70: entered promiscuous mode
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.547 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99f86ca1-70, col_values=(('external_ids', {'iface-id': '8cf3bd60-3f77-4742-a649-f7b9bc748f38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.548 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 ovn_controller[146046]: 2026-01-26T15:56:04Z|00254|binding|INFO|Releasing lport 8cf3bd60-3f77-4742-a649-f7b9bc748f38 from this chassis (sb_readonly=0)
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.568 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.575 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.576 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/99f86ca1-7e08-4ddd-92c4-0c2af6afae0c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/99f86ca1-7e08-4ddd-92c4-0c2af6afae0c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.578 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d1581fa3-4553-4389-81a7-2b4f8ea91248]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.579 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/99f86ca1-7e08-4ddd-92c4-0c2af6afae0c.pid.haproxy
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:56:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:04.581 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'env', 'PROCESS_TAG=haproxy-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/99f86ca1-7e08-4ddd-92c4-0c2af6afae0c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.677 239969 DEBUG nova.compute.manager [req-9c97f095-c0b7-42f9-9a73-f19767a83346 req-98552ee9-c344-4b28-a605-67a31e849def a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Received event network-changed-09e051a6-5a1d-4163-8066-913fa9569057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.678 239969 DEBUG nova.compute.manager [req-9c97f095-c0b7-42f9-9a73-f19767a83346 req-98552ee9-c344-4b28-a605-67a31e849def a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Refreshing instance network info cache due to event network-changed-09e051a6-5a1d-4163-8066-913fa9569057. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.678 239969 DEBUG oslo_concurrency.lockutils [req-9c97f095-c0b7-42f9-9a73-f19767a83346 req-98552ee9-c344-4b28-a605-67a31e849def a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.807 239969 INFO nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Creating config drive at /var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770/disk.config
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.812 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq5ucmbcy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.958 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq5ucmbcy" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:04 compute-0 podman[276587]: 2026-01-26 15:56:04.988001981 +0000 UTC m=+0.061804067 container create dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:56:04 compute-0 nova_compute[239965]: 2026-01-26 15:56:04.989 239969 DEBUG nova.storage.rbd_utils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image d8c8962d-c696-4ea6-826d-466a804a7770_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.000 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770/disk.config d8c8962d-c696-4ea6-826d-466a804a7770_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:05 compute-0 systemd[1]: Started libpod-conmon-dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9.scope.
Jan 26 15:56:05 compute-0 podman[276587]: 2026-01-26 15:56:04.951387193 +0000 UTC m=+0.025189309 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:56:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.069 239969 DEBUG nova.network.neutron [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Updating instance_info_cache with network_info: [{"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/35bf16b9dba8646110cc88ddf690dc458b6f4c3643208264a7f62a038532b97e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:05 compute-0 podman[276587]: 2026-01-26 15:56:05.089630776 +0000 UTC m=+0.163432872 container init dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:56:05 compute-0 podman[276587]: 2026-01-26 15:56:05.097287705 +0000 UTC m=+0.171089791 container start dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:56:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Jan 26 15:56:05 compute-0 neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c[276656]: [NOTICE]   (276684) : New worker (276689) forked
Jan 26 15:56:05 compute-0 neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c[276656]: [NOTICE]   (276684) : Loading success.
Jan 26 15:56:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Jan 26 15:56:05 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.166 239969 DEBUG oslo_concurrency.processutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770/disk.config d8c8962d-c696-4ea6-826d-466a804a7770_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.167 239969 INFO nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Deleting local config drive /var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770/disk.config because it was imported into RBD.
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.177 239969 DEBUG nova.storage.rbd_utils [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] cloning vms/8266f382-d7db-41e3-bc97-b1510341e248_disk@8e197319cd48410596312e33cb86795a to images/3e527f14-bd71-44d8-addd-c364e4d34ae5 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.211 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442965.1884327, 6f15e546-481b-42eb-ad9c-7c40e8bfe459 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.211 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] VM Started (Lifecycle Event)
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.218 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.222 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Releasing lock "refresh_cache-f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.223 239969 DEBUG nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Instance network_info: |[{"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.224 239969 DEBUG oslo_concurrency.lockutils [req-9c97f095-c0b7-42f9-9a73-f19767a83346 req-98552ee9-c344-4b28-a605-67a31e849def a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.224 239969 DEBUG nova.network.neutron [req-9c97f095-c0b7-42f9-9a73-f19767a83346 req-98552ee9-c344-4b28-a605-67a31e849def a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Refreshing network info cache for port 09e051a6-5a1d-4163-8066-913fa9569057 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.227 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Start _get_guest_xml network_info=[{"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:56:05 compute-0 kernel: tap1c2767ea-eb: entered promiscuous mode
Jan 26 15:56:05 compute-0 NetworkManager[48954]: <info>  [1769442965.2346] manager: (tap1c2767ea-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/133)
Jan 26 15:56:05 compute-0 ovn_controller[146046]: 2026-01-26T15:56:05Z|00255|binding|INFO|Claiming lport 1c2767ea-eb7b-4942-a452-31fe1017e883 for this chassis.
Jan 26 15:56:05 compute-0 ovn_controller[146046]: 2026-01-26T15:56:05Z|00256|binding|INFO|1c2767ea-eb7b-4942-a452-31fe1017e883: Claiming fa:16:3e:3a:cd:80 10.100.0.4
Jan 26 15:56:05 compute-0 systemd-udevd[276523]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.249 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.247 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:cd:80 10.100.0.4'], port_security=['fa:16:3e:3a:cd:80 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd8c8962d-c696-4ea6-826d-466a804a7770', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b303275701ea4029a4a744bf25ce8726', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bbe71d43-f65e-47cf-a850-0e359656056c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6a88dad-0ef4-4088-9429-227b0056ef4f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1c2767ea-eb7b-4942-a452-31fe1017e883) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.248 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1c2767ea-eb7b-4942-a452-31fe1017e883 in datapath 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c bound to our chassis
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.251 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.252 239969 WARNING nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:56:05 compute-0 NetworkManager[48954]: <info>  [1769442965.2574] device (tap1c2767ea-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:05 compute-0 NetworkManager[48954]: <info>  [1769442965.2582] device (tap1c2767ea-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.259 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 ovn_controller[146046]: 2026-01-26T15:56:05Z|00257|binding|INFO|Setting lport 1c2767ea-eb7b-4942-a452-31fe1017e883 ovn-installed in OVS
Jan 26 15:56:05 compute-0 ovn_controller[146046]: 2026-01-26T15:56:05Z|00258|binding|INFO|Setting lport 1c2767ea-eb7b-4942-a452-31fe1017e883 up in Southbound
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.262 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.264 239969 DEBUG nova.virt.libvirt.host [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.265 239969 DEBUG nova.virt.libvirt.host [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.270 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442965.188556, 6f15e546-481b-42eb-ad9c-7c40e8bfe459 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.270 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] VM Paused (Lifecycle Event)
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.276 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[835d2bce-ccc4-4316-a6e5-49b10f90ba17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.280 239969 DEBUG nova.virt.libvirt.host [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.281 239969 DEBUG nova.virt.libvirt.host [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.281 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.281 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:42Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='430d07ae-dea6-4314-8e35-13c856c35347',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.282 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.282 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.282 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.282 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.283 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.283 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.284 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:56:05 compute-0 systemd-machined[208061]: New machine qemu-44-instance-00000027.
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.285 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.285 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.285 239969 DEBUG nova.virt.hardware [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.292 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:05 compute-0 systemd[1]: Started Virtual Machine qemu-44-instance-00000027.
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.308 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3af41857-cb9b-402b-9e58-3af71be0c12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.312 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d53db7e9-ba33-46a2-baec-be61eeb7e8e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.339 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.341 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[17c74dfd-dba1-4723-805a-b3192d2d7027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.372 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2eecbdb8-8bbf-4af9-882a-4534a3c46e3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99f86ca1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:cf:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445516, 'reachable_time': 19368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276758, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.377 239969 DEBUG nova.storage.rbd_utils [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] flattening images/3e527f14-bd71-44d8-addd-c364e4d34ae5 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.390 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba636cc5-8253-48fa-bb1a-fb6ae29f750d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445530, 'tstamp': 445530}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276762, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445533, 'tstamp': 445533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276762, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.394 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99f86ca1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.398 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99f86ca1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.399 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.400 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99f86ca1-70, col_values=(('external_ids', {'iface-id': '8cf3bd60-3f77-4742-a649-f7b9bc748f38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.401 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.421 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.429 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.451 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.655 239969 DEBUG nova.storage.rbd_utils [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] removing snapshot(8e197319cd48410596312e33cb86795a) on rbd image(8266f382-d7db-41e3-bc97-b1510341e248_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:56:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1281: 305 pgs: 305 active+clean; 321 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.5 MiB/s wr, 203 op/s
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.833 239969 DEBUG nova.network.neutron [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.860 239969 DEBUG oslo_concurrency.lockutils [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.861 239969 DEBUG oslo_concurrency.lockutils [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.862 239969 DEBUG nova.network.neutron [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Refreshing network info cache for port 0a00f244-298c-43b6-937a-89de92a84d31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.869 239969 DEBUG nova.virt.libvirt.vif [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.870 239969 DEBUG nova.network.os_vif_util [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.871 239969 DEBUG nova.network.os_vif_util [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.871 239969 DEBUG os_vif [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.872 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.872 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.873 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.876 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.876 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a00f244-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.877 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a00f244-29, col_values=(('external_ids', {'iface-id': '0a00f244-298c-43b6-937a-89de92a84d31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:e2:34', 'vm-uuid': '819151c3-9892-4926-879b-6bcf2047e116'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.878 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 NetworkManager[48954]: <info>  [1769442965.8793] manager: (tap0a00f244-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.883 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.884 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.885 239969 INFO os_vif [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29')
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.886 239969 DEBUG nova.virt.libvirt.vif [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.886 239969 DEBUG nova.network.os_vif_util [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.887 239969 DEBUG nova.network.os_vif_util [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.890 239969 DEBUG nova.virt.libvirt.guest [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] attach device xml: <interface type="ethernet">
Jan 26 15:56:05 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:33:e2:34"/>
Jan 26 15:56:05 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:56:05 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:05 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:56:05 compute-0 nova_compute[239965]:   <target dev="tap0a00f244-29"/>
Jan 26 15:56:05 compute-0 nova_compute[239965]: </interface>
Jan 26 15:56:05 compute-0 nova_compute[239965]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 15:56:05 compute-0 NetworkManager[48954]: <info>  [1769442965.9038] manager: (tap0a00f244-29): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Jan 26 15:56:05 compute-0 kernel: tap0a00f244-29: entered promiscuous mode
Jan 26 15:56:05 compute-0 ovn_controller[146046]: 2026-01-26T15:56:05Z|00259|binding|INFO|Claiming lport 0a00f244-298c-43b6-937a-89de92a84d31 for this chassis.
Jan 26 15:56:05 compute-0 ovn_controller[146046]: 2026-01-26T15:56:05Z|00260|binding|INFO|0a00f244-298c-43b6-937a-89de92a84d31: Claiming fa:16:3e:33:e2:34 10.100.0.9
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.907 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.914 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:e2:34 10.100.0.9'], port_security=['fa:16:3e:33:e2:34 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-906740713', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '819151c3-9892-4926-879b-6bcf2047e116', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-906740713', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=0a00f244-298c-43b6-937a-89de92a84d31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:05 compute-0 NetworkManager[48954]: <info>  [1769442965.9163] device (tap0a00f244-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:05 compute-0 NetworkManager[48954]: <info>  [1769442965.9170] device (tap0a00f244-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.916 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 0a00f244-298c-43b6-937a-89de92a84d31 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.919 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:56:05 compute-0 ovn_controller[146046]: 2026-01-26T15:56:05Z|00261|binding|INFO|Setting lport 0a00f244-298c-43b6-937a-89de92a84d31 ovn-installed in OVS
Jan 26 15:56:05 compute-0 ovn_controller[146046]: 2026-01-26T15:56:05Z|00262|binding|INFO|Setting lport 0a00f244-298c-43b6-937a-89de92a84d31 up in Southbound
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.929 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2402323087' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.936 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.940 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9aadca5d-b809-44c7-829e-debcc7166fa4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:05 compute-0 nova_compute[239965]: 2026-01-26 15:56:05.960 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.965 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8f9dff-40a8-4d5a-84be-0f6695def7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:05.967 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[26fbc5e7-6bb3-40d6-bbd7-8615bfab06e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:06.003 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fa8e2a71-4dd5-41ca-81e9-f046c12a1aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.008 239969 DEBUG nova.storage.rbd_utils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.015 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:06.023 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[45989957-6c94-4cc4-a226-44dbeadf1bef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439439, 'reachable_time': 27465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 276891, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:06.040 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ea44e3d8-ac03-4d7c-9229-b810f7de80be]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439451, 'tstamp': 439451}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276892, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439456, 'tstamp': 439456}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 276892, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:06.042 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:06.045 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:06.046 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:06.046 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:06.046 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.077 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.084 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442966.034544, d8c8962d-c696-4ea6-826d-466a804a7770 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.085 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] VM Started (Lifecycle Event)
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.115 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.123 239969 DEBUG nova.virt.libvirt.driver [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.123 239969 DEBUG nova.virt.libvirt.driver [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.123 239969 DEBUG nova.virt.libvirt.driver [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:1d:ed:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.124 239969 DEBUG nova.virt.libvirt.driver [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:34:ca:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.124 239969 DEBUG nova.virt.libvirt.driver [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:7a:f0:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.124 239969 DEBUG nova.virt.libvirt.driver [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:33:e2:34, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.128 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442966.0347967, d8c8962d-c696-4ea6-826d-466a804a7770 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.128 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] VM Paused (Lifecycle Event)
Jan 26 15:56:06 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Jan 26 15:56:06 compute-0 ceph-mon[75140]: osdmap e222: 3 total, 3 up, 3 in
Jan 26 15:56:06 compute-0 ceph-mon[75140]: pgmap v1281: 305 pgs: 305 active+clean; 321 MiB data, 542 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.5 MiB/s wr, 203 op/s
Jan 26 15:56:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2402323087' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.161 239969 DEBUG nova.virt.libvirt.guest [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:56:06</nova:creationTime>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:port uuid="cbd122d4-01d6-46df-bc1e-8707cf1566d4">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:port uuid="7fbc7328-fd77-441a-86c3-1f41f7e706a0">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:port uuid="0a00f244-298c-43b6-937a-89de92a84d31">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:56:06 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:56:06 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.172 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.181 239969 DEBUG nova.storage.rbd_utils [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] creating snapshot(snap) on rbd image(3e527f14-bd71-44d8-addd-c364e4d34ae5) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.223 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.253 239969 DEBUG oslo_concurrency.lockutils [None req-a6e19452-74e4-46ea-aa0d-a0b0ede233e1 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-819151c3-9892-4926-879b-6bcf2047e116-0a00f244-298c-43b6-937a-89de92a84d31" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.258 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.264 239969 DEBUG nova.network.neutron [req-9c97f095-c0b7-42f9-9a73-f19767a83346 req-98552ee9-c344-4b28-a605-67a31e849def a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Updated VIF entry in instance network info cache for port 09e051a6-5a1d-4163-8066-913fa9569057. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.265 239969 DEBUG nova.network.neutron [req-9c97f095-c0b7-42f9-9a73-f19767a83346 req-98552ee9-c344-4b28-a605-67a31e849def a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Updating instance_info_cache with network_info: [{"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.281 239969 DEBUG oslo_concurrency.lockutils [req-9c97f095-c0b7-42f9-9a73-f19767a83346 req-98552ee9-c344-4b28-a605-67a31e849def a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749931439' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.718 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.702s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.719 239969 DEBUG nova.virt.libvirt.vif [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-42280864',display_name='tempest-ListServerFiltersTestJSON-instance-42280864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-42280864',id=41,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-0bv7g6mv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServ
erFiltersTestJSON-31069689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:00Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.719 239969 DEBUG nova.network.os_vif_util [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.720 239969 DEBUG nova.network.os_vif_util [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:e8:d9,bridge_name='br-int',has_traffic_filtering=True,id=09e051a6-5a1d-4163-8066-913fa9569057,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e051a6-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.721 239969 DEBUG nova.objects.instance [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'pci_devices' on Instance uuid f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.736 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <uuid>f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1</uuid>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <name>instance-00000029</name>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <memory>196608</memory>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-42280864</nova:name>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:56:05</nova:creationTime>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:flavor name="m1.micro">
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <nova:memory>192</nova:memory>
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <nova:user uuid="e19e9687c7784557a28b0297dc26ff06">tempest-ListServerFiltersTestJSON-31069689-project-member</nova:user>
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <nova:project uuid="b303275701ea4029a4a744bf25ce8726">tempest-ListServerFiltersTestJSON-31069689</nova:project>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <nova:port uuid="09e051a6-5a1d-4163-8066-913fa9569057">
Jan 26 15:56:06 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <entry name="serial">f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1</entry>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <entry name="uuid">f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1</entry>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk">
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk.config">
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:70:e8:d9"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <target dev="tap09e051a6-5a"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1/console.log" append="off"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:56:06 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:56:06 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:06 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:06 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:06 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.736 239969 DEBUG nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Preparing to wait for external event network-vif-plugged-09e051a6-5a1d-4163-8066-913fa9569057 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.737 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.737 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.737 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.737 239969 DEBUG nova.virt.libvirt.vif [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-42280864',display_name='tempest-ListServerFiltersTestJSON-instance-42280864',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-42280864',id=41,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-0bv7g6mv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:00Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.738 239969 DEBUG nova.network.os_vif_util [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.738 239969 DEBUG nova.network.os_vif_util [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:e8:d9,bridge_name='br-int',has_traffic_filtering=True,id=09e051a6-5a1d-4163-8066-913fa9569057,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e051a6-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.738 239969 DEBUG os_vif [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:e8:d9,bridge_name='br-int',has_traffic_filtering=True,id=09e051a6-5a1d-4163-8066-913fa9569057,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e051a6-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.739 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.739 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.741 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.741 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap09e051a6-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.742 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap09e051a6-5a, col_values=(('external_ids', {'iface-id': '09e051a6-5a1d-4163-8066-913fa9569057', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:70:e8:d9', 'vm-uuid': 'f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.743 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:06 compute-0 NetworkManager[48954]: <info>  [1769442966.7442] manager: (tap09e051a6-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.746 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.749 239969 INFO os_vif [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:e8:d9,bridge_name='br-int',has_traffic_filtering=True,id=09e051a6-5a1d-4163-8066-913fa9569057,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e051a6-5a')
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.801 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.802 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.802 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] No VIF found with MAC fa:16:3e:70:e8:d9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.803 239969 INFO nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Using config drive
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.823 239969 DEBUG nova.storage.rbd_utils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.833 239969 DEBUG nova.compute.manager [req-e74a9ad4-cc25-477d-9485-8c2a99ecde15 req-dedd42d2-2a57-4ae3-bc7d-ed49b04753b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Received event network-vif-plugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.833 239969 DEBUG oslo_concurrency.lockutils [req-e74a9ad4-cc25-477d-9485-8c2a99ecde15 req-dedd42d2-2a57-4ae3-bc7d-ed49b04753b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.833 239969 DEBUG oslo_concurrency.lockutils [req-e74a9ad4-cc25-477d-9485-8c2a99ecde15 req-dedd42d2-2a57-4ae3-bc7d-ed49b04753b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.833 239969 DEBUG oslo_concurrency.lockutils [req-e74a9ad4-cc25-477d-9485-8c2a99ecde15 req-dedd42d2-2a57-4ae3-bc7d-ed49b04753b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.834 239969 DEBUG nova.compute.manager [req-e74a9ad4-cc25-477d-9485-8c2a99ecde15 req-dedd42d2-2a57-4ae3-bc7d-ed49b04753b6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Processing event network-vif-plugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.834 239969 DEBUG nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.847 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442966.8456533, 6f15e546-481b-42eb-ad9c-7c40e8bfe459 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.847 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] VM Resumed (Lifecycle Event)
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.856 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.862 239969 INFO nova.virt.libvirt.driver [-] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Instance spawned successfully.
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.862 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.877 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.884 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.889 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.890 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.890 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.891 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.892 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.892 239969 DEBUG nova.virt.libvirt.driver [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.919 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.952 239969 INFO nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Took 9.83 seconds to spawn the instance on the hypervisor.
Jan 26 15:56:06 compute-0 nova_compute[239965]: 2026-01-26 15:56:06.952 239969 DEBUG nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.038 239969 INFO nova.compute.manager [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Took 11.79 seconds to build instance.
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.056 239969 DEBUG oslo_concurrency.lockutils [None req-76c1ea72-7209-4d32-b029-12af228caff5 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.151 239969 INFO nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Creating config drive at /var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1/disk.config
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.156 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk21ava7h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Jan 26 15:56:07 compute-0 ceph-mon[75140]: osdmap e223: 3 total, 3 up, 3 in
Jan 26 15:56:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1749931439' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Jan 26 15:56:07 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.295 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk21ava7h" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.321 239969 DEBUG nova.storage.rbd_utils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] rbd image f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.324 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1/disk.config f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.358 239969 DEBUG nova.network.neutron [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updated VIF entry in instance network info cache for port 0a00f244-298c-43b6-937a-89de92a84d31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.359 239969 DEBUG nova.network.neutron [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.362 239969 DEBUG nova.compute.manager [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.362 239969 DEBUG oslo_concurrency.lockutils [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.363 239969 DEBUG oslo_concurrency.lockutils [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.363 239969 DEBUG oslo_concurrency.lockutils [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.363 239969 DEBUG nova.compute.manager [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Processing event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.363 239969 DEBUG nova.compute.manager [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.364 239969 DEBUG oslo_concurrency.lockutils [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.364 239969 DEBUG oslo_concurrency.lockutils [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.364 239969 DEBUG oslo_concurrency.lockutils [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.364 239969 DEBUG nova.compute.manager [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] No waiting events found dispatching network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.364 239969 WARNING nova.compute.manager [req-3e57d1fe-2457-4c50-899f-1e06d306e941 req-c7c099f9-042c-49ee-8702-2852fcf67775 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received unexpected event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 for instance with vm_state building and task_state spawning.
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.365 239969 DEBUG nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.387 239969 DEBUG oslo_concurrency.lockutils [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.387 239969 DEBUG nova.compute.manager [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-changed-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.388 239969 DEBUG nova.compute.manager [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Refreshing instance network info cache due to event network-changed-1c2767ea-eb7b-4942-a452-31fe1017e883. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.388 239969 DEBUG oslo_concurrency.lockutils [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.388 239969 DEBUG oslo_concurrency.lockutils [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.389 239969 DEBUG nova.network.neutron [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Refreshing network info cache for port 1c2767ea-eb7b-4942-a452-31fe1017e883 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.390 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442967.3852901, d8c8962d-c696-4ea6-826d-466a804a7770 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.390 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] VM Resumed (Lifecycle Event)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.392 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 3e527f14-bd71-44d8-addd-c364e4d34ae5 could not be found.
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 3e527f14-bd71-44d8-addd-c364e4d34ae5
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver 
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver 
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     image = self._client.call(
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 3e527f14-bd71-44d8-addd-c364e4d34ae5 could not be found.
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.398 239969 ERROR nova.virt.libvirt.driver 
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.452 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.460 239969 INFO nova.virt.libvirt.driver [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance spawned successfully.
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.461 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.463 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.479 239969 DEBUG nova.storage.rbd_utils [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] removing snapshot(snap) on rbd image(3e527f14-bd71-44d8-addd-c364e4d34ae5) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.492 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.500 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.500 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.501 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.501 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.502 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.503 239969 DEBUG nova.virt.libvirt.driver [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.517 239969 DEBUG oslo_concurrency.processutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1/disk.config f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.518 239969 INFO nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Deleting local config drive /var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1/disk.config because it was imported into RBD.
Jan 26 15:56:07 compute-0 kernel: tap09e051a6-5a: entered promiscuous mode
Jan 26 15:56:07 compute-0 NetworkManager[48954]: <info>  [1769442967.5881] manager: (tap09e051a6-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Jan 26 15:56:07 compute-0 ovn_controller[146046]: 2026-01-26T15:56:07Z|00263|binding|INFO|Claiming lport 09e051a6-5a1d-4163-8066-913fa9569057 for this chassis.
Jan 26 15:56:07 compute-0 ovn_controller[146046]: 2026-01-26T15:56:07Z|00264|binding|INFO|09e051a6-5a1d-4163-8066-913fa9569057: Claiming fa:16:3e:70:e8:d9 10.100.0.3
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.605 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:e8:d9 10.100.0.3'], port_security=['fa:16:3e:70:e8:d9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b303275701ea4029a4a744bf25ce8726', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bbe71d43-f65e-47cf-a850-0e359656056c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6a88dad-0ef4-4088-9429-227b0056ef4f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=09e051a6-5a1d-4163-8066-913fa9569057) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.607 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 09e051a6-5a1d-4163-8066-913fa9569057 in datapath 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c bound to our chassis
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.609 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.611 239969 INFO nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Took 12.61 seconds to spawn the instance on the hypervisor.
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.611 239969 DEBUG nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.612 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.619 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:07 compute-0 ovn_controller[146046]: 2026-01-26T15:56:07Z|00265|binding|INFO|Setting lport 09e051a6-5a1d-4163-8066-913fa9569057 ovn-installed in OVS
Jan 26 15:56:07 compute-0 ovn_controller[146046]: 2026-01-26T15:56:07Z|00266|binding|INFO|Setting lport 09e051a6-5a1d-4163-8066-913fa9569057 up in Southbound
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.628 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.636 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0924289b-7536-4572-a2ec-05ad393ee8c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:07 compute-0 systemd-machined[208061]: New machine qemu-45-instance-00000029.
Jan 26 15:56:07 compute-0 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Jan 26 15:56:07 compute-0 systemd-udevd[277025]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.666 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b646b096-729f-43eb-b552-38b9d97e62bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.668 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f2b063c2-6a20-407e-abfb-e106ef494a25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:07 compute-0 NetworkManager[48954]: <info>  [1769442967.6778] device (tap09e051a6-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:07 compute-0 NetworkManager[48954]: <info>  [1769442967.6785] device (tap09e051a6-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.703 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2bb076-23ab-48ad-a9af-6d858e733c70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.710 239969 INFO nova.compute.manager [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Took 14.33 seconds to build instance.
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.728 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e457f126-a8cb-4363-8b38-d4dc47578a89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99f86ca1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:cf:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445516, 'reachable_time': 19368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277035, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.739 239969 DEBUG oslo_concurrency.lockutils [None req-c48c57be-f9c9-4fe1-bfe5-afc9f25ae18f e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.460s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.744 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fe706b95-7b2a-495a-80a5-e355e8545e70]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445530, 'tstamp': 445530}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277037, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445533, 'tstamp': 445533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277037, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.745 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99f86ca1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:07 compute-0 nova_compute[239965]: 2026-01-26 15:56:07.747 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.750 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99f86ca1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.750 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.751 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99f86ca1-70, col_values=(('external_ids', {'iface-id': '8cf3bd60-3f77-4742-a649-f7b9bc748f38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:07.751 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 305 active+clean; 351 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 9.5 MiB/s wr, 310 op/s
Jan 26 15:56:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Jan 26 15:56:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Jan 26 15:56:08 compute-0 ceph-mon[75140]: osdmap e224: 3 total, 3 up, 3 in
Jan 26 15:56:08 compute-0 ceph-mon[75140]: pgmap v1284: 305 pgs: 305 active+clean; 351 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 9.5 MiB/s wr, 310 op/s
Jan 26 15:56:08 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Jan 26 15:56:08 compute-0 ovn_controller[146046]: 2026-01-26T15:56:08Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:e2:34 10.100.0.9
Jan 26 15:56:08 compute-0 ovn_controller[146046]: 2026-01-26T15:56:08Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:e2:34 10.100.0.9
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.469 239969 DEBUG oslo_concurrency.lockutils [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-819151c3-9892-4926-879b-6bcf2047e116-cbd122d4-01d6-46df-bc1e-8707cf1566d4" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.470 239969 DEBUG oslo_concurrency.lockutils [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-819151c3-9892-4926-879b-6bcf2047e116-cbd122d4-01d6-46df-bc1e-8707cf1566d4" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.492 239969 DEBUG nova.objects.instance [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.524 239969 DEBUG nova.virt.libvirt.vif [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.524 239969 DEBUG nova.network.os_vif_util [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.526 239969 DEBUG nova.network.os_vif_util [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.535 239969 DEBUG nova.virt.libvirt.guest [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:34:ca:1b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcbd122d4-01"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.539 239969 DEBUG nova.virt.libvirt.guest [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:34:ca:1b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcbd122d4-01"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.543 239969 DEBUG nova.virt.libvirt.driver [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Attempting to detach device tapcbd122d4-01 from instance 819151c3-9892-4926-879b-6bcf2047e116 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.544 239969 DEBUG nova.virt.libvirt.guest [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] detach device xml: <interface type="ethernet">
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:34:ca:1b"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <target dev="tapcbd122d4-01"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]: </interface>
Jan 26 15:56:08 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.551 239969 DEBUG nova.virt.libvirt.guest [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:34:ca:1b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcbd122d4-01"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.558 239969 DEBUG nova.virt.libvirt.guest [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:34:ca:1b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcbd122d4-01"/></interface>not found in domain: <domain type='kvm' id='39'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <name>instance-00000023</name>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <uuid>819151c3-9892-4926-879b-6bcf2047e116</uuid>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:56:06</nova:creationTime>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="cbd122d4-01d6-46df-bc1e-8707cf1566d4">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="7fbc7328-fd77-441a-86c3-1f41f7e706a0">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="0a00f244-298c-43b6-937a-89de92a84d31">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:56:08 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='serial'>819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='uuid'>819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/819151c3-9892-4926-879b-6bcf2047e116_disk' index='2'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/819151c3-9892-4926-879b-6bcf2047e116_disk.config' index='1'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:1d:ed:8a'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='tap786c06cb-43'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:34:ca:1b'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='tapcbd122d4-01'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='net1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:7a:f0:af'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='tap7fbc7328-fd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='net2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:33:e2:34'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='tap0a00f244-29'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='net3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log' append='off'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </target>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/1'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log' append='off'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </console>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </input>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </input>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </input>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c121,c894</label>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c121,c894</imagelabel>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:56:08 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:08 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.559 239969 INFO nova.virt.libvirt.driver [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully detached device tapcbd122d4-01 from instance 819151c3-9892-4926-879b-6bcf2047e116 from the persistent domain config.
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.559 239969 DEBUG nova.virt.libvirt.driver [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] (1/8): Attempting to detach device tapcbd122d4-01 with device alias net1 from instance 819151c3-9892-4926-879b-6bcf2047e116 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.560 239969 DEBUG nova.virt.libvirt.guest [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] detach device xml: <interface type="ethernet">
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:34:ca:1b"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <target dev="tapcbd122d4-01"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]: </interface>
Jan 26 15:56:08 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 15:56:08 compute-0 kernel: tapcbd122d4-01 (unregistering): left promiscuous mode
Jan 26 15:56:08 compute-0 NetworkManager[48954]: <info>  [1769442968.6689] device (tapcbd122d4-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:56:08 compute-0 ovn_controller[146046]: 2026-01-26T15:56:08Z|00267|binding|INFO|Releasing lport cbd122d4-01d6-46df-bc1e-8707cf1566d4 from this chassis (sb_readonly=0)
Jan 26 15:56:08 compute-0 ovn_controller[146046]: 2026-01-26T15:56:08Z|00268|binding|INFO|Setting lport cbd122d4-01d6-46df-bc1e-8707cf1566d4 down in Southbound
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.678 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:08 compute-0 ovn_controller[146046]: 2026-01-26T15:56:08Z|00269|binding|INFO|Removing iface tapcbd122d4-01 ovn-installed in OVS
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.680 239969 DEBUG nova.virt.libvirt.driver [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Start waiting for the detach event from libvirt for device tapcbd122d4-01 with device alias net1 for instance 819151c3-9892-4926-879b-6bcf2047e116 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.681 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.686 239969 DEBUG nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Received event <DeviceRemovedEvent: 1769442968.6859033, 819151c3-9892-4926-879b-6bcf2047e116 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.686 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:ca:1b 10.100.0.8'], port_security=['fa:16:3e:34:ca:1b 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '819151c3-9892-4926-879b-6bcf2047e116', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=cbd122d4-01d6-46df-bc1e-8707cf1566d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.686 239969 DEBUG nova.virt.libvirt.guest [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:34:ca:1b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcbd122d4-01"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.688 156105 INFO neutron.agent.ovn.metadata.agent [-] Port cbd122d4-01d6-46df-bc1e-8707cf1566d4 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.690 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.695 239969 DEBUG nova.virt.libvirt.guest [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:34:ca:1b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcbd122d4-01"/></interface>not found in domain: <domain type='kvm' id='39'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <name>instance-00000023</name>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <uuid>819151c3-9892-4926-879b-6bcf2047e116</uuid>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:56:06</nova:creationTime>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="cbd122d4-01d6-46df-bc1e-8707cf1566d4">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="7fbc7328-fd77-441a-86c3-1f41f7e706a0">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="0a00f244-298c-43b6-937a-89de92a84d31">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:56:08 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='serial'>819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='uuid'>819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/819151c3-9892-4926-879b-6bcf2047e116_disk' index='2'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/819151c3-9892-4926-879b-6bcf2047e116_disk.config' index='1'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:1d:ed:8a'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='tap786c06cb-43'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:7a:f0:af'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='tap7fbc7328-fd'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='net2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:33:e2:34'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target dev='tap0a00f244-29'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='net3'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log' append='off'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       </target>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/1'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log' append='off'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </console>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </input>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </input>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </input>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c121,c894</label>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c121,c894</imagelabel>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:56:08 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:08 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.696 239969 INFO nova.virt.libvirt.driver [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully detached device tapcbd122d4-01 from instance 819151c3-9892-4926-879b-6bcf2047e116 from the live domain config.
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.697 239969 DEBUG nova.virt.libvirt.vif [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.698 239969 DEBUG nova.network.os_vif_util [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.698 239969 DEBUG nova.network.os_vif_util [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.699 239969 DEBUG os_vif [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.701 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.701 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbd122d4-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.702 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.703 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.707 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e1637211-48aa-40b0-89b8-136d812a5993]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.717 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.720 239969 INFO os_vif [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01')
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.721 239969 DEBUG nova.virt.libvirt.guest [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:56:08</nova:creationTime>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="7fbc7328-fd77-441a-86c3-1f41f7e706a0">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     <nova:port uuid="0a00f244-298c-43b6-937a-89de92a84d31">
Jan 26 15:56:08 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 15:56:08 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:08 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:56:08 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:56:08 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.751 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bed69d92-2cd9-4870-95ba-6f39c8bd96f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.754 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d1da4bd6-55c2-4e2c-91ae-762b9a88b8a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.789 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[20023aec-c095-4778-aafe-cd27e76f8259]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.800 239969 WARNING nova.compute.manager [None req-430f0987-d1fc-4db6-a4c2-3ff69f3ec711 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Image not found during snapshot: nova.exception.ImageNotFound: Image 3e527f14-bd71-44d8-addd-c364e4d34ae5 could not be found.
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.814 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[63fd5ad6-0b46-44bb-81d2-cab8003dad99]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439439, 'reachable_time': 27465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277066, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.851 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9c7e383e-72a0-4f86-960a-0dc72e43745a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439451, 'tstamp': 439451}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277067, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439456, 'tstamp': 439456}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277067, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.853 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.854 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.856 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.857 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.857 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.857 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:08.858 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.979 239969 DEBUG nova.network.neutron [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Updated VIF entry in instance network info cache for port 1c2767ea-eb7b-4942-a452-31fe1017e883. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:56:08 compute-0 nova_compute[239965]: 2026-01-26 15:56:08.980 239969 DEBUG nova.network.neutron [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Updating instance_info_cache with network_info: [{"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.011 239969 DEBUG oslo_concurrency.lockutils [req-f3aef4bc-0c0b-49f9-bcaf-2c4fe43f3337 req-5fb8840c-83f5-4a2b-8ba7-3f19019957d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.208 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442969.2083893, f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.209 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] VM Started (Lifecycle Event)
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.229 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.234 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442969.2114553, f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.234 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] VM Paused (Lifecycle Event)
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.268 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.271 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.296 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:09 compute-0 ceph-mon[75140]: osdmap e225: 3 total, 3 up, 3 in
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.342 239969 DEBUG nova.compute.manager [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Received event network-vif-plugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.343 239969 DEBUG oslo_concurrency.lockutils [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.343 239969 DEBUG oslo_concurrency.lockutils [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.344 239969 DEBUG oslo_concurrency.lockutils [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.344 239969 DEBUG nova.compute.manager [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] No waiting events found dispatching network-vif-plugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.344 239969 WARNING nova.compute.manager [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Received unexpected event network-vif-plugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 for instance with vm_state active and task_state None.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.345 239969 DEBUG nova.compute.manager [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-0a00f244-298c-43b6-937a-89de92a84d31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.345 239969 DEBUG oslo_concurrency.lockutils [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.345 239969 DEBUG oslo_concurrency.lockutils [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.345 239969 DEBUG oslo_concurrency.lockutils [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.346 239969 DEBUG nova.compute.manager [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-0a00f244-298c-43b6-937a-89de92a84d31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.346 239969 WARNING nova.compute.manager [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-0a00f244-298c-43b6-937a-89de92a84d31 for instance with vm_state active and task_state None.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.346 239969 DEBUG nova.compute.manager [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-0a00f244-298c-43b6-937a-89de92a84d31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.346 239969 DEBUG oslo_concurrency.lockutils [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.347 239969 DEBUG oslo_concurrency.lockutils [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.347 239969 DEBUG oslo_concurrency.lockutils [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.347 239969 DEBUG nova.compute.manager [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-0a00f244-298c-43b6-937a-89de92a84d31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.348 239969 WARNING nova.compute.manager [req-b2ff800f-07a0-4d9e-8e42-e7149643797b req-d975f270-12bc-4f4c-9ecd-67b49bf9130e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-0a00f244-298c-43b6-937a-89de92a84d31 for instance with vm_state active and task_state None.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.422 239969 DEBUG nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Received event network-vif-plugged-09e051a6-5a1d-4163-8066-913fa9569057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.422 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.423 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.423 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.423 239969 DEBUG nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Processing event network-vif-plugged-09e051a6-5a1d-4163-8066-913fa9569057 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.424 239969 DEBUG nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Received event network-vif-plugged-09e051a6-5a1d-4163-8066-913fa9569057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.425 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.425 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.425 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.426 239969 DEBUG nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] No waiting events found dispatching network-vif-plugged-09e051a6-5a1d-4163-8066-913fa9569057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.426 239969 WARNING nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Received unexpected event network-vif-plugged-09e051a6-5a1d-4163-8066-913fa9569057 for instance with vm_state building and task_state spawning.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.426 239969 DEBUG nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-unplugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.427 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.427 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.427 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.429 239969 DEBUG nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-unplugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.430 239969 WARNING nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-unplugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 for instance with vm_state active and task_state None.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.430 239969 DEBUG nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.431 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.431 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.431 239969 DEBUG oslo_concurrency.lockutils [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.432 239969 DEBUG nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.432 239969 WARNING nova.compute.manager [req-348a403b-d86d-4272-8d71-4b4284b0ea4a req-89182590-0314-4627-a3a5-31e4ade9e644 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-cbd122d4-01d6-46df-bc1e-8707cf1566d4 for instance with vm_state active and task_state None.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.433 239969 DEBUG nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.445 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442969.43765, f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.446 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] VM Resumed (Lifecycle Event)
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.457 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.458 239969 DEBUG oslo_concurrency.lockutils [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.458 239969 DEBUG oslo_concurrency.lockutils [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.458 239969 DEBUG nova.network.neutron [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.464 239969 INFO nova.virt.libvirt.driver [-] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Instance spawned successfully.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.464 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:56:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.489 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.499 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.503 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.504 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.505 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.505 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.507 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.507 239969 DEBUG nova.virt.libvirt.driver [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.539 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.577 239969 INFO nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Took 9.18 seconds to spawn the instance on the hypervisor.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.578 239969 DEBUG nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.636 239969 INFO nova.compute.manager [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Took 10.39 seconds to build instance.
Jan 26 15:56:09 compute-0 nova_compute[239965]: 2026-01-26 15:56:09.653 239969 DEBUG oslo_concurrency.lockutils [None req-f75e574f-c4ff-40c6-960d-a04dc188744b e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 305 active+clean; 351 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.5 MiB/s wr, 162 op/s
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.213 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:10 compute-0 ceph-mon[75140]: pgmap v1286: 305 pgs: 305 active+clean; 351 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 5.5 MiB/s wr, 162 op/s
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.628 239969 DEBUG oslo_concurrency.lockutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "8266f382-d7db-41e3-bc97-b1510341e248" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.628 239969 DEBUG oslo_concurrency.lockutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.628 239969 DEBUG oslo_concurrency.lockutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "8266f382-d7db-41e3-bc97-b1510341e248-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.629 239969 DEBUG oslo_concurrency.lockutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.629 239969 DEBUG oslo_concurrency.lockutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.630 239969 INFO nova.compute.manager [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Terminating instance
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.631 239969 DEBUG nova.compute.manager [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:56:10 compute-0 kernel: tap50f13561-12 (unregistering): left promiscuous mode
Jan 26 15:56:10 compute-0 NetworkManager[48954]: <info>  [1769442970.6783] device (tap50f13561-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.690 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:10 compute-0 ovn_controller[146046]: 2026-01-26T15:56:10Z|00270|binding|INFO|Releasing lport 50f13561-1233-4562-9ae2-f16314467652 from this chassis (sb_readonly=0)
Jan 26 15:56:10 compute-0 ovn_controller[146046]: 2026-01-26T15:56:10Z|00271|binding|INFO|Setting lport 50f13561-1233-4562-9ae2-f16314467652 down in Southbound
Jan 26 15:56:10 compute-0 ovn_controller[146046]: 2026-01-26T15:56:10Z|00272|binding|INFO|Removing iface tap50f13561-12 ovn-installed in OVS
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.694 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:10.700 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:9d:4c 10.100.0.6'], port_security=['fa:16:3e:cd:9d:4c 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8266f382-d7db-41e3-bc97-b1510341e248', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12eb3028-37a4-48dc-836b-0978d0bddbce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b3f0866575347bd80cdc80b692f07f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd45551f-cdcc-4a64-9f48-bf991126bbec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef363ab1-8cec-4345-8968-ee11106642fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=50f13561-1233-4562-9ae2-f16314467652) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:10.703 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 50f13561-1233-4562-9ae2-f16314467652 in datapath 12eb3028-37a4-48dc-836b-0978d0bddbce unbound from our chassis
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.711 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:10.710 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12eb3028-37a4-48dc-836b-0978d0bddbce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:56:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:10.713 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3fda3b-1fc3-477f-92ee-1eed552fb018]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:10.717 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce namespace which is not needed anymore
Jan 26 15:56:10 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Deactivated successfully.
Jan 26 15:56:10 compute-0 systemd[1]: machine-qemu\x2d42\x2dinstance\x2d00000026.scope: Consumed 12.030s CPU time.
Jan 26 15:56:10 compute-0 systemd-machined[208061]: Machine qemu-42-instance-00000026 terminated.
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.818 239969 INFO nova.network.neutron [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Port cbd122d4-01d6-46df-bc1e-8707cf1566d4 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.856 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.862 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.868 239969 INFO nova.virt.libvirt.driver [-] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Instance destroyed successfully.
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.869 239969 DEBUG nova.objects.instance [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lazy-loading 'resources' on Instance uuid 8266f382-d7db-41e3-bc97-b1510341e248 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.887 239969 DEBUG nova.virt.libvirt.vif [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:55:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1447210512',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1447210512',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1447210512',id=38,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b3f0866575347bd80cdc80b692f07f5',ramdisk_id='',reservation_id='r-ht6slxng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-744790849',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-744790849-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:08Z,user_data=None,user_id='8b6639ef07f74c9ebad76dffa361ec4e',uuid=8266f382-d7db-41e3-bc97-b1510341e248,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.888 239969 DEBUG nova.network.os_vif_util [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converting VIF {"id": "50f13561-1233-4562-9ae2-f16314467652", "address": "fa:16:3e:cd:9d:4c", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50f13561-12", "ovs_interfaceid": "50f13561-1233-4562-9ae2-f16314467652", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.889 239969 DEBUG nova.network.os_vif_util [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:9d:4c,bridge_name='br-int',has_traffic_filtering=True,id=50f13561-1233-4562-9ae2-f16314467652,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f13561-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.889 239969 DEBUG os_vif [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:9d:4c,bridge_name='br-int',has_traffic_filtering=True,id=50f13561-1233-4562-9ae2-f16314467652,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f13561-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.893 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.893 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50f13561-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.895 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.897 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:10 compute-0 nova_compute[239965]: 2026-01-26 15:56:10.900 239969 INFO os_vif [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:9d:4c,bridge_name='br-int',has_traffic_filtering=True,id=50f13561-1233-4562-9ae2-f16314467652,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap50f13561-12')
Jan 26 15:56:10 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[275694]: [NOTICE]   (275718) : haproxy version is 2.8.14-c23fe91
Jan 26 15:56:10 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[275694]: [NOTICE]   (275718) : path to executable is /usr/sbin/haproxy
Jan 26 15:56:10 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[275694]: [WARNING]  (275718) : Exiting Master process...
Jan 26 15:56:10 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[275694]: [ALERT]    (275718) : Current worker (275721) exited with code 143 (Terminated)
Jan 26 15:56:10 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[275694]: [WARNING]  (275718) : All workers exited. Exiting... (0)
Jan 26 15:56:10 compute-0 systemd[1]: libpod-6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a.scope: Deactivated successfully.
Jan 26 15:56:10 compute-0 podman[277131]: 2026-01-26 15:56:10.929326679 +0000 UTC m=+0.079754228 container died 6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 15:56:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a-userdata-shm.mount: Deactivated successfully.
Jan 26 15:56:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-60adedcb02eb3c9a93dd5f60c190a05ac491434ae42fa69f843e54d97c675b56-merged.mount: Deactivated successfully.
Jan 26 15:56:10 compute-0 podman[277131]: 2026-01-26 15:56:10.991576397 +0000 UTC m=+0.142003946 container cleanup 6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 26 15:56:11 compute-0 systemd[1]: libpod-conmon-6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a.scope: Deactivated successfully.
Jan 26 15:56:11 compute-0 podman[277186]: 2026-01-26 15:56:11.074349778 +0000 UTC m=+0.056850125 container remove 6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.083 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8304b0-7e5d-418b-b976-d116de7ac79c]: (4, ('Mon Jan 26 03:56:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce (6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a)\n6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a\nMon Jan 26 03:56:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce (6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a)\n6152b5a1083a349dfacbc7e7afb976dfaf729e6f747f0a32d110b70e8611a12a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.086 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1e91a202-1d2f-47e0-91ab-aa9681270a41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.087 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12eb3028-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:11 compute-0 kernel: tap12eb3028-30: left promiscuous mode
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.090 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.107 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.110 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb3dbd0-3ac7-4578-abef-94c873ad334d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.122 156105 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port c8ad4f7b-335d-4da1-8383-257adcf3f0b5 with type ""
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.123 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:e2:34 10.100.0.9'], port_security=['fa:16:3e:33:e2:34 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-906740713', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '819151c3-9892-4926-879b-6bcf2047e116', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-906740713', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=0a00f244-298c-43b6-937a-89de92a84d31) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.130 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7dbc0e7a-95d7-4d72-928d-5f594f88396d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.132 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69d19326-1f7e-4516-ad57-d869d2cd1a26]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_controller[146046]: 2026-01-26T15:56:11Z|00273|binding|INFO|Removing iface tap0a00f244-29 ovn-installed in OVS
Jan 26 15:56:11 compute-0 ovn_controller[146046]: 2026-01-26T15:56:11Z|00274|binding|INFO|Removing lport 0a00f244-298c-43b6-937a-89de92a84d31 ovn-installed in OVS
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.137 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.153 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.156 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[682dd949-cf60-46d8-b184-e04bb98aecb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 444512, 'reachable_time': 26171, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277201, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d12eb3028\x2d37a4\x2d48dc\x2d836b\x2d0978d0bddbce.mount: Deactivated successfully.
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.164 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.164 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[c5fe1a1e-ae48-456a-98c3-0d7e57ae6209]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.166 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 0a00f244-298c-43b6-937a-89de92a84d31 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.168 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.192 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c565e068-76a9-4495-88be-21d2603831eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.229 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7f6329cd-9667-450b-be5a-9e3929384476]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.233 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[513c43c3-2d37-4697-b141-686ce2f68543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.262 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8a029aaf-ccbb-49a8-b697-54de5f9385c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.282 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6fab1f7b-ff6c-4d80-bf73-23f4e17891c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439439, 'reachable_time': 27465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277207, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.302 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[00395475-53d4-44d2-931c-26c34134776e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439451, 'tstamp': 439451}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277208, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439456, 'tstamp': 439456}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277208, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.312 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.315 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.316 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.316 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.317 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.478 239969 INFO nova.virt.libvirt.driver [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Deleting instance files /var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248_del
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.478 239969 INFO nova.virt.libvirt.driver [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Deletion of /var/lib/nova/instances/8266f382-d7db-41e3-bc97-b1510341e248_del complete
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.540 239969 INFO nova.compute.manager [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Took 0.91 seconds to destroy the instance on the hypervisor.
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.540 239969 DEBUG oslo.service.loopingcall [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.541 239969 DEBUG nova.compute.manager [-] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.541 239969 DEBUG nova.network.neutron [-] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.723 239969 DEBUG oslo_concurrency.lockutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.723 239969 DEBUG oslo_concurrency.lockutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.724 239969 DEBUG oslo_concurrency.lockutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.724 239969 DEBUG oslo_concurrency.lockutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.724 239969 DEBUG oslo_concurrency.lockutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.725 239969 INFO nova.compute.manager [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Terminating instance
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.726 239969 DEBUG nova.compute.manager [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:56:11 compute-0 kernel: tap786c06cb-43 (unregistering): left promiscuous mode
Jan 26 15:56:11 compute-0 NetworkManager[48954]: <info>  [1769442971.7741] device (tap786c06cb-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:56:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 336 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 205 op/s
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.787 239969 DEBUG nova.compute.manager [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-deleted-0a00f244-298c-43b6-937a-89de92a84d31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.789 239969 INFO nova.compute.manager [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Neutron deleted interface 0a00f244-298c-43b6-937a-89de92a84d31; detaching it from the instance and deleting it from the info cache
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.789 239969 DEBUG nova.network.neutron [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:11 compute-0 kernel: tap7fbc7328-fd (unregistering): left promiscuous mode
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 NetworkManager[48954]: <info>  [1769442971.7986] device (tap7fbc7328-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:56:11 compute-0 ovn_controller[146046]: 2026-01-26T15:56:11Z|00275|binding|INFO|Releasing lport 786c06cb-4344-4e72-b40a-f3aa914d037c from this chassis (sb_readonly=0)
Jan 26 15:56:11 compute-0 ovn_controller[146046]: 2026-01-26T15:56:11Z|00276|binding|INFO|Setting lport 786c06cb-4344-4e72-b40a-f3aa914d037c down in Southbound
Jan 26 15:56:11 compute-0 ovn_controller[146046]: 2026-01-26T15:56:11Z|00277|binding|INFO|Removing iface tap786c06cb-43 ovn-installed in OVS
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.806 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 kernel: tap0a00f244-29 (unregistering): left promiscuous mode
Jan 26 15:56:11 compute-0 NetworkManager[48954]: <info>  [1769442971.8229] device (tap0a00f244-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.828 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.826 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1d:ed:8a 10.100.0.7'], port_security=['fa:16:3e:1d:ed:8a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '819151c3-9892-4926-879b-6bcf2047e116', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5f60fb61-4ff7-4a1e-9574-6be67928a1fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=786c06cb-4344-4e72-b40a-f3aa914d037c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.827 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 786c06cb-4344-4e72-b40a-f3aa914d037c in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:56:11 compute-0 ovn_controller[146046]: 2026-01-26T15:56:11Z|00278|binding|INFO|Releasing lport 7fbc7328-fd77-441a-86c3-1f41f7e706a0 from this chassis (sb_readonly=0)
Jan 26 15:56:11 compute-0 ovn_controller[146046]: 2026-01-26T15:56:11Z|00279|binding|INFO|Setting lport 7fbc7328-fd77-441a-86c3-1f41f7e706a0 down in Southbound
Jan 26 15:56:11 compute-0 ovn_controller[146046]: 2026-01-26T15:56:11Z|00280|binding|INFO|Removing iface tap7fbc7328-fd ovn-installed in OVS
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.830 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.829 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.841 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:f0:af 10.100.0.4'], port_security=['fa:16:3e:7a:f0:af 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '819151c3-9892-4926-879b-6bcf2047e116', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7fbc7328-fd77-441a-86c3-1f41f7e706a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.850 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b00ba709-98fa-4874-9cf6-ae35a2a61755]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.862 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:11 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Deactivated successfully.
Jan 26 15:56:11 compute-0 systemd[1]: machine-qemu\x2d39\x2dinstance\x2d00000023.scope: Consumed 17.332s CPU time.
Jan 26 15:56:11 compute-0 systemd-machined[208061]: Machine qemu-39-instance-00000023 terminated.
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.884 239969 DEBUG nova.compute.manager [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-deleted-cbd122d4-01d6-46df-bc1e-8707cf1566d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.885 239969 INFO nova.compute.manager [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Neutron deleted interface cbd122d4-01d6-46df-bc1e-8707cf1566d4; detaching it from the instance and deleting it from the info cache
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.885 239969 DEBUG nova.network.neutron [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.900 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9fb444-78d2-496b-a75b-ba463bdae939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.904 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[866151fb-49a4-4202-8778-38b252eb83bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.933 239969 DEBUG nova.objects.instance [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'system_metadata' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.938 239969 DEBUG nova.objects.instance [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'system_metadata' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.948 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9ad88c4c-8c7d-4210-8580-70917995f7d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 NetworkManager[48954]: <info>  [1769442971.9523] manager: (tap7fbc7328-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Jan 26 15:56:11 compute-0 NetworkManager[48954]: <info>  [1769442971.9624] manager: (tap0a00f244-29): new Tun device (/org/freedesktop/NetworkManager/Devices/139)
Jan 26 15:56:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:11.979 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9c2885-12d1-4d36-82f4-1aae791bd237]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 69], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439439, 'reachable_time': 27465, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277239, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.983 239969 DEBUG nova.objects.instance [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'flavor' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.986 239969 DEBUG nova.objects.instance [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'flavor' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.998 239969 INFO nova.virt.libvirt.driver [-] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Instance destroyed successfully.
Jan 26 15:56:11 compute-0 nova_compute[239965]: 2026-01-26 15:56:11.999 239969 DEBUG nova.objects.instance [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'resources' on Instance uuid 819151c3-9892-4926-879b-6bcf2047e116 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.002 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2242d7-651d-4947-8939-16ed52ee156c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439451, 'tstamp': 439451}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277255, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 439456, 'tstamp': 439456}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 277255, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.004 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.005 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.014 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.014 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.015 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.015 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.016 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7fbc7328-fd77-441a-86c3-1f41f7e706a0 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.017 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.017 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0fffb8-6b93-4d30-91dd-50b88db853d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.018 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e namespace which is not needed anymore
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.022 239969 DEBUG nova.virt.libvirt.vif [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.022 239969 DEBUG nova.network.os_vif_util [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.023 239969 DEBUG nova.network.os_vif_util [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.025 239969 DEBUG nova.virt.libvirt.vif [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.025 239969 DEBUG nova.network.os_vif_util [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.025 239969 DEBUG nova.network.os_vif_util [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.027 239969 DEBUG nova.virt.libvirt.vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.028 239969 DEBUG nova.network.os_vif_util [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.028 239969 DEBUG nova.network.os_vif_util [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1d:ed:8a,bridge_name='br-int',has_traffic_filtering=True,id=786c06cb-4344-4e72-b40a-f3aa914d037c,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap786c06cb-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.028 239969 DEBUG os_vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:ed:8a,bridge_name='br-int',has_traffic_filtering=True,id=786c06cb-4344-4e72-b40a-f3aa914d037c,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap786c06cb-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.030 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap786c06cb-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.031 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.034 239969 DEBUG nova.virt.libvirt.guest [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:34:ca:1b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcbd122d4-01"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.034 239969 DEBUG nova.virt.libvirt.guest [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:e2:34"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0a00f244-29"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.038 239969 DEBUG nova.virt.libvirt.driver [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Attempting to detach device tap0a00f244-29 from instance 819151c3-9892-4926-879b-6bcf2047e116 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.038 239969 DEBUG nova.virt.libvirt.guest [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] detach device xml: <interface type="ethernet">
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:33:e2:34"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <target dev="tap0a00f244-29"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]: </interface>
Jan 26 15:56:12 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.041 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.043 239969 DEBUG nova.virt.libvirt.guest [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:34:ca:1b"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapcbd122d4-01"/></interface>not found in domain: <domain type='kvm'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <name>instance-00000023</name>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <uuid>819151c3-9892-4926-879b-6bcf2047e116</uuid>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:55:01</nova:creationTime>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:56:12 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='serial'>819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='uuid'>819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <cpu mode='host-model' check='partial'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/819151c3-9892-4926-879b-6bcf2047e116_disk'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/819151c3-9892-4926-879b-6bcf2047e116_disk.config'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:1d:ed:8a'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target dev='tap786c06cb-43'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:7a:f0:af'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target dev='tap7fbc7328-fd'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:33:e2:34'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target dev='tap0a00f244-29'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log' append='off'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </target>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <console type='pty'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log' append='off'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </console>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </input>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:12 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:12 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.044 239969 WARNING nova.virt.libvirt.driver [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Detaching interface fa:16:3e:34:ca:1b failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapcbd122d4-01' not found.
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.044 239969 DEBUG nova.virt.libvirt.vif [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.045 239969 DEBUG nova.network.os_vif_util [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.045 239969 DEBUG nova.network.os_vif_util [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.045 239969 DEBUG os_vif [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.050 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.050 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbd122d4-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.050 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.053 239969 INFO os_vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1d:ed:8a,bridge_name='br-int',has_traffic_filtering=True,id=786c06cb-4344-4e72-b40a-f3aa914d037c,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap786c06cb-43')
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.054 239969 DEBUG nova.virt.libvirt.vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.054 239969 DEBUG nova.network.os_vif_util [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "address": "fa:16:3e:34:ca:1b", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcbd122d4-01", "ovs_interfaceid": "cbd122d4-01d6-46df-bc1e-8707cf1566d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.054 239969 DEBUG nova.network.os_vif_util [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.054 239969 DEBUG os_vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.056 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.056 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcbd122d4-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.056 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.057 239969 INFO os_vif [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01')
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.058 239969 DEBUG nova.virt.libvirt.guest [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:56:12</nova:creationTime>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:port uuid="7fbc7328-fd77-441a-86c3-1f41f7e706a0">
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:56:12 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:56:12 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.058 239969 INFO os_vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:ca:1b,bridge_name='br-int',has_traffic_filtering=True,id=cbd122d4-01d6-46df-bc1e-8707cf1566d4,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcbd122d4-01')
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.059 239969 DEBUG nova.virt.libvirt.vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.059 239969 DEBUG nova.network.os_vif_util [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.059 239969 DEBUG nova.network.os_vif_util [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7a:f0:af,bridge_name='br-int',has_traffic_filtering=True,id=7fbc7328-fd77-441a-86c3-1f41f7e706a0,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fbc7328-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.059 239969 DEBUG os_vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:f0:af,bridge_name='br-int',has_traffic_filtering=True,id=7fbc7328-fd77-441a-86c3-1f41f7e706a0,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fbc7328-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.060 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.061 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7fbc7328-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.064 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.066 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.068 239969 INFO os_vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7a:f0:af,bridge_name='br-int',has_traffic_filtering=True,id=7fbc7328-fd77-441a-86c3-1f41f7e706a0,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7fbc7328-fd')
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.068 239969 DEBUG nova.virt.libvirt.vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:55:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.069 239969 DEBUG nova.network.os_vif_util [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.069 239969 DEBUG nova.network.os_vif_util [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.069 239969 DEBUG os_vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.070 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.071 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a00f244-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.075 239969 INFO os_vif [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29')
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.094 239969 DEBUG nova.virt.libvirt.guest [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:33:e2:34"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0a00f244-29"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.097 239969 DEBUG nova.compute.manager [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Received event network-vif-unplugged-50f13561-1233-4562-9ae2-f16314467652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.098 239969 DEBUG oslo_concurrency.lockutils [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8266f382-d7db-41e3-bc97-b1510341e248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.098 239969 DEBUG oslo_concurrency.lockutils [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.099 239969 DEBUG oslo_concurrency.lockutils [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.099 239969 DEBUG nova.compute.manager [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] No waiting events found dispatching network-vif-unplugged-50f13561-1233-4562-9ae2-f16314467652 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.099 239969 DEBUG nova.compute.manager [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Received event network-vif-unplugged-50f13561-1233-4562-9ae2-f16314467652 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.100 239969 DEBUG nova.compute.manager [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Received event network-vif-plugged-50f13561-1233-4562-9ae2-f16314467652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.103 239969 DEBUG oslo_concurrency.lockutils [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8266f382-d7db-41e3-bc97-b1510341e248-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.104 239969 DEBUG oslo_concurrency.lockutils [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.104 239969 DEBUG oslo_concurrency.lockutils [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.105 239969 DEBUG nova.compute.manager [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] No waiting events found dispatching network-vif-plugged-50f13561-1233-4562-9ae2-f16314467652 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.105 239969 WARNING nova.compute.manager [req-58447769-16df-4e59-ba5b-b12297cbfb22 req-340a599c-8f26-4a13-88c4-89d0fd2a14ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Received unexpected event network-vif-plugged-50f13561-1233-4562-9ae2-f16314467652 for instance with vm_state active and task_state deleting.
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.113 239969 DEBUG nova.virt.libvirt.guest [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:33:e2:34"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap0a00f244-29"/></interface>not found in domain: <domain type='kvm'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <name>instance-00000023</name>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <uuid>819151c3-9892-4926-879b-6bcf2047e116</uuid>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:56:12</nova:creationTime>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:port uuid="7fbc7328-fd77-441a-86c3-1f41f7e706a0">
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:56:12 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='serial'>819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='uuid'>819151c3-9892-4926-879b-6bcf2047e116</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <cpu mode='host-model' check='partial'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/819151c3-9892-4926-879b-6bcf2047e116_disk'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/819151c3-9892-4926-879b-6bcf2047e116_disk.config'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:1d:ed:8a'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target dev='tap786c06cb-43'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:7a:f0:af'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target dev='tap7fbc7328-fd'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log' append='off'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       </target>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <console type='pty'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116/console.log' append='off'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </console>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </input>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <graphics type='vnc' port='-1' autoport='yes' listen='::0'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:12 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:12 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.116 239969 INFO nova.virt.libvirt.driver [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Successfully detached device tap0a00f244-29 from instance 819151c3-9892-4926-879b-6bcf2047e116 from the persistent domain config.
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.119 239969 DEBUG nova.virt.libvirt.vif [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:54:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-1950902598',display_name='tempest-AttachInterfacesTestJSON-server-1950902598',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-1950902598',id=35,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGhmdsrnCl7/FqyikYjHfQKOxAWndD6yKGB/iJOesxN6Y4v+X0XP0DmbyI99MwC2RPfT2FPhRl3KaV0GiApvOdOlSy/L+WJ7Rd8K9htD6JKTDVSM5FiEZEwgzqXP61+xsQ==',key_name='tempest-keypair-849891966',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:55:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-0ggb0oao',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=819151c3-9892-4926-879b-6bcf2047e116,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.120 239969 DEBUG nova.network.os_vif_util [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.122 239969 DEBUG nova.network.os_vif_util [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.123 239969 DEBUG os_vif [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.127 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.128 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a00f244-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.129 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.138 239969 INFO os_vif [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:e2:34,bridge_name='br-int',has_traffic_filtering=True,id=0a00f244-298c-43b6-937a-89de92a84d31,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap0a00f244-29')
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.147 239969 DEBUG nova.virt.libvirt.guest [req-f10c2172-d014-4a28-ac76-cf34737fd328 req-e76680aa-9350-4b4e-aa08-308f73abb410 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesTestJSON-server-1950902598</nova:name>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:56:12</nova:creationTime>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:port uuid="786c06cb-4344-4e72-b40a-f3aa914d037c">
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:port uuid="cbd122d4-01d6-46df-bc1e-8707cf1566d4">
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     <nova:port uuid="7fbc7328-fd77-441a-86c3-1f41f7e706a0">
Jan 26 15:56:12 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:56:12 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:12 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:56:12 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:56:12 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:56:12 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[273033]: [NOTICE]   (273037) : haproxy version is 2.8.14-c23fe91
Jan 26 15:56:12 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[273033]: [NOTICE]   (273037) : path to executable is /usr/sbin/haproxy
Jan 26 15:56:12 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[273033]: [WARNING]  (273037) : Exiting Master process...
Jan 26 15:56:12 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[273033]: [ALERT]    (273037) : Current worker (273039) exited with code 143 (Terminated)
Jan 26 15:56:12 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[273033]: [WARNING]  (273037) : All workers exited. Exiting... (0)
Jan 26 15:56:12 compute-0 systemd[1]: libpod-080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e.scope: Deactivated successfully.
Jan 26 15:56:12 compute-0 podman[277296]: 2026-01-26 15:56:12.171653852 +0000 UTC m=+0.050932761 container died 080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 15:56:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e-userdata-shm.mount: Deactivated successfully.
Jan 26 15:56:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-edfc6f0f09920060f6a34eea4620015cf022a2125fbb7da8c7baac780dbef837-merged.mount: Deactivated successfully.
Jan 26 15:56:12 compute-0 podman[277296]: 2026-01-26 15:56:12.228522798 +0000 UTC m=+0.107801707 container cleanup 080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:56:12 compute-0 systemd[1]: libpod-conmon-080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e.scope: Deactivated successfully.
Jan 26 15:56:12 compute-0 podman[277327]: 2026-01-26 15:56:12.304642166 +0000 UTC m=+0.049047525 container remove 080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.320 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cfc913bc-2bb5-415b-a337-165de3eb7993]: (4, ('Mon Jan 26 03:56:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e (080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e)\n080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e\nMon Jan 26 03:56:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e (080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e)\n080ce259709b8219f7cb252c124786f834c4edaa089124e389d34ce0fd87db5e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.322 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[83129d74-3bca-4e3d-afa1-a3f8db593157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.322 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:12 compute-0 kernel: tap91bcac0a-10: left promiscuous mode
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.324 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.331 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5e2c67c0-dc86-4820-a61e-52569735583b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.349 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.352 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[51225e68-9b42-46cc-970f-57aebad00855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.354 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f702deec-3fd3-4bd0-9655-da61fee0208e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.371 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ed9855b9-f569-4ab3-8975-7e42ee2f9705]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 439431, 'reachable_time': 17006, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277343, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:12 compute-0 systemd[1]: run-netns-ovnmeta\x2d91bcac0a\x2d1926\x2d4861\x2d88ab\x2dae3c06f7e57e.mount: Deactivated successfully.
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.376 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:56:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:12.376 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[06ac5bb4-7072-465a-8d40-a0bf7dfb5cbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.440 239969 INFO nova.virt.libvirt.driver [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Deleting instance files /var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116_del
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.441 239969 INFO nova.virt.libvirt.driver [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Deletion of /var/lib/nova/instances/819151c3-9892-4926-879b-6bcf2047e116_del complete
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.503 239969 INFO nova.compute.manager [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Took 0.78 seconds to destroy the instance on the hypervisor.
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.504 239969 DEBUG oslo.service.loopingcall [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.504 239969 DEBUG nova.compute.manager [-] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.505 239969 DEBUG nova.network.neutron [-] [instance: 819151c3-9892-4926-879b-6bcf2047e116] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.621 239969 DEBUG nova.network.neutron [-] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.659 239969 INFO nova.compute.manager [-] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Took 1.12 seconds to deallocate network for instance.
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.720 239969 DEBUG oslo_concurrency.lockutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.721 239969 DEBUG oslo_concurrency.lockutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:12 compute-0 nova_compute[239965]: 2026-01-26 15:56:12.829 239969 DEBUG oslo_concurrency.processutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:12 compute-0 ceph-mon[75140]: pgmap v1287: 305 pgs: 305 active+clean; 336 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.2 MiB/s wr, 205 op/s
Jan 26 15:56:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1817620018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:13 compute-0 nova_compute[239965]: 2026-01-26 15:56:13.461 239969 DEBUG oslo_concurrency.processutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:13 compute-0 nova_compute[239965]: 2026-01-26 15:56:13.467 239969 DEBUG nova.compute.provider_tree [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:13 compute-0 nova_compute[239965]: 2026-01-26 15:56:13.486 239969 DEBUG nova.scheduler.client.report [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:13 compute-0 nova_compute[239965]: 2026-01-26 15:56:13.505 239969 DEBUG oslo_concurrency.lockutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:13 compute-0 nova_compute[239965]: 2026-01-26 15:56:13.535 239969 INFO nova.scheduler.client.report [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Deleted allocations for instance 8266f382-d7db-41e3-bc97-b1510341e248
Jan 26 15:56:13 compute-0 nova_compute[239965]: 2026-01-26 15:56:13.637 239969 DEBUG oslo_concurrency.lockutils [None req-28e4f516-deab-4474-b7d7-f22859ec7d20 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "8266f382-d7db-41e3-bc97-b1510341e248" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 305 active+clean; 247 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 1.8 MiB/s wr, 456 op/s
Jan 26 15:56:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1817620018' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e225 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Jan 26 15:56:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Jan 26 15:56:14 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Jan 26 15:56:14 compute-0 nova_compute[239965]: 2026-01-26 15:56:14.741 239969 DEBUG nova.network.neutron [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [{"id": "786c06cb-4344-4e72-b40a-f3aa914d037c", "address": "fa:16:3e:1d:ed:8a", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap786c06cb-43", "ovs_interfaceid": "786c06cb-4344-4e72-b40a-f3aa914d037c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "address": "fa:16:3e:7a:f0:af", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7fbc7328-fd", "ovs_interfaceid": "7fbc7328-fd77-441a-86c3-1f41f7e706a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "0a00f244-298c-43b6-937a-89de92a84d31", "address": "fa:16:3e:33:e2:34", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a00f244-29", "ovs_interfaceid": "0a00f244-298c-43b6-937a-89de92a84d31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:14 compute-0 nova_compute[239965]: 2026-01-26 15:56:14.767 239969 DEBUG oslo_concurrency.lockutils [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-819151c3-9892-4926-879b-6bcf2047e116" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:14 compute-0 nova_compute[239965]: 2026-01-26 15:56:14.819 239969 DEBUG oslo_concurrency.lockutils [None req-b9843315-c38d-4673-b976-4b72eec6777c 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-819151c3-9892-4926-879b-6bcf2047e116-cbd122d4-01d6-46df-bc1e-8707cf1566d4" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 6.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:14 compute-0 ceph-mon[75140]: pgmap v1288: 305 pgs: 305 active+clean; 247 MiB data, 525 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 1.8 MiB/s wr, 456 op/s
Jan 26 15:56:14 compute-0 ceph-mon[75140]: osdmap e226: 3 total, 3 up, 3 in
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.217 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.305 239969 DEBUG nova.compute.manager [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-unplugged-786c06cb-4344-4e72-b40a-f3aa914d037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.306 239969 DEBUG oslo_concurrency.lockutils [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.306 239969 DEBUG oslo_concurrency.lockutils [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.307 239969 DEBUG oslo_concurrency.lockutils [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.307 239969 DEBUG nova.compute.manager [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-unplugged-786c06cb-4344-4e72-b40a-f3aa914d037c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.307 239969 DEBUG nova.compute.manager [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-unplugged-786c06cb-4344-4e72-b40a-f3aa914d037c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.307 239969 DEBUG nova.compute.manager [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Received event network-vif-deleted-50f13561-1233-4562-9ae2-f16314467652 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.308 239969 DEBUG nova.compute.manager [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-786c06cb-4344-4e72-b40a-f3aa914d037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.308 239969 DEBUG oslo_concurrency.lockutils [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.309 239969 DEBUG oslo_concurrency.lockutils [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.309 239969 DEBUG oslo_concurrency.lockutils [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.309 239969 DEBUG nova.compute.manager [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-786c06cb-4344-4e72-b40a-f3aa914d037c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.310 239969 WARNING nova.compute.manager [req-85a588d4-8436-46b1-8727-075962f6ab49 req-78e03a43-7871-47ea-be11-b4969ffbbffe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-786c06cb-4344-4e72-b40a-f3aa914d037c for instance with vm_state active and task_state deleting.
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.397 239969 DEBUG nova.compute.manager [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-unplugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.397 239969 DEBUG oslo_concurrency.lockutils [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.398 239969 DEBUG oslo_concurrency.lockutils [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.398 239969 DEBUG oslo_concurrency.lockutils [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.398 239969 DEBUG nova.compute.manager [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-unplugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.399 239969 DEBUG nova.compute.manager [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-unplugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.399 239969 DEBUG nova.compute.manager [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-plugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.399 239969 DEBUG oslo_concurrency.lockutils [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "819151c3-9892-4926-879b-6bcf2047e116-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.399 239969 DEBUG oslo_concurrency.lockutils [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.400 239969 DEBUG oslo_concurrency.lockutils [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.400 239969 DEBUG nova.compute.manager [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] No waiting events found dispatching network-vif-plugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:15 compute-0 nova_compute[239965]: 2026-01-26 15:56:15.400 239969 WARNING nova.compute.manager [req-60294c00-465c-4d2e-946e-12f201c0887b req-6150fcd1-b733-48a9-a017-65948be41b10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received unexpected event network-vif-plugged-7fbc7328-fd77-441a-86c3-1f41f7e706a0 for instance with vm_state active and task_state deleting.
Jan 26 15:56:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 224 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 223 KiB/s wr, 417 op/s
Jan 26 15:56:16 compute-0 nova_compute[239965]: 2026-01-26 15:56:16.069 239969 DEBUG nova.network.neutron [-] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:16 compute-0 nova_compute[239965]: 2026-01-26 15:56:16.089 239969 INFO nova.compute.manager [-] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Took 3.58 seconds to deallocate network for instance.
Jan 26 15:56:16 compute-0 nova_compute[239965]: 2026-01-26 15:56:16.148 239969 DEBUG oslo_concurrency.lockutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:16 compute-0 nova_compute[239965]: 2026-01-26 15:56:16.149 239969 DEBUG oslo_concurrency.lockutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:16 compute-0 nova_compute[239965]: 2026-01-26 15:56:16.234 239969 DEBUG oslo_concurrency.processutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304605347' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:16 compute-0 nova_compute[239965]: 2026-01-26 15:56:16.835 239969 DEBUG oslo_concurrency.processutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:16 compute-0 nova_compute[239965]: 2026-01-26 15:56:16.844 239969 DEBUG nova.compute.provider_tree [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:16 compute-0 ceph-mon[75140]: pgmap v1290: 305 pgs: 305 active+clean; 224 MiB data, 507 MiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 223 KiB/s wr, 417 op/s
Jan 26 15:56:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3304605347' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:16 compute-0 nova_compute[239965]: 2026-01-26 15:56:16.942 239969 DEBUG nova.scheduler.client.report [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:16 compute-0 nova_compute[239965]: 2026-01-26 15:56:16.979 239969 DEBUG oslo_concurrency.lockutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:17 compute-0 nova_compute[239965]: 2026-01-26 15:56:17.006 239969 INFO nova.scheduler.client.report [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Deleted allocations for instance 819151c3-9892-4926-879b-6bcf2047e116
Jan 26 15:56:17 compute-0 nova_compute[239965]: 2026-01-26 15:56:17.074 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:17 compute-0 nova_compute[239965]: 2026-01-26 15:56:17.090 239969 DEBUG oslo_concurrency.lockutils [None req-74d1ae2f-1474-454f-8908-c418616c3159 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "819151c3-9892-4926-879b-6bcf2047e116" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:17 compute-0 sudo[277388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:56:17 compute-0 sudo[277388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:56:17 compute-0 sudo[277388]: pam_unix(sudo:session): session closed for user root
Jan 26 15:56:17 compute-0 sudo[277413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:56:17 compute-0 sudo[277413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:56:17 compute-0 nova_compute[239965]: 2026-01-26 15:56:17.769 239969 DEBUG nova.compute.manager [req-5fe0cd2e-31ac-4223-b067-eafe90011c50 req-bdc1b8ba-a0a3-4f18-827e-f29afbf485b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-deleted-786c06cb-4344-4e72-b40a-f3aa914d037c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:17 compute-0 nova_compute[239965]: 2026-01-26 15:56:17.770 239969 DEBUG nova.compute.manager [req-5fe0cd2e-31ac-4223-b067-eafe90011c50 req-bdc1b8ba-a0a3-4f18-827e-f29afbf485b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Received event network-vif-deleted-7fbc7328-fd77-441a-86c3-1f41f7e706a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 181 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 189 KiB/s wr, 369 op/s
Jan 26 15:56:18 compute-0 sudo[277413]: pam_unix(sudo:session): session closed for user root
Jan 26 15:56:18 compute-0 nova_compute[239965]: 2026-01-26 15:56:18.289 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:18 compute-0 nova_compute[239965]: 2026-01-26 15:56:18.291 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:18 compute-0 nova_compute[239965]: 2026-01-26 15:56:18.322 239969 DEBUG nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:56:18 compute-0 nova_compute[239965]: 2026-01-26 15:56:18.390 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:18 compute-0 nova_compute[239965]: 2026-01-26 15:56:18.391 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:18 compute-0 nova_compute[239965]: 2026-01-26 15:56:18.398 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:56:18 compute-0 nova_compute[239965]: 2026-01-26 15:56:18.398 239969 INFO nova.compute.claims [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:56:18 compute-0 nova_compute[239965]: 2026-01-26 15:56:18.568 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:56:19 compute-0 ceph-mon[75140]: pgmap v1291: 305 pgs: 305 active+clean; 181 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 7.3 MiB/s rd, 189 KiB/s wr, 369 op/s
Jan 26 15:56:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:56:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:56:19 compute-0 sudo[277488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:56:19 compute-0 sudo[277488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:56:19 compute-0 sudo[277488]: pam_unix(sudo:session): session closed for user root
Jan 26 15:56:19 compute-0 sudo[277513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:56:19 compute-0 sudo[277513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606040193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 181 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 179 KiB/s wr, 349 op/s
Jan 26 15:56:19 compute-0 nova_compute[239965]: 2026-01-26 15:56:19.782 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:19 compute-0 nova_compute[239965]: 2026-01-26 15:56:19.794 239969 DEBUG nova.compute.provider_tree [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:19 compute-0 nova_compute[239965]: 2026-01-26 15:56:19.826 239969 DEBUG nova.scheduler.client.report [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:19 compute-0 nova_compute[239965]: 2026-01-26 15:56:19.862 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.471s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:19 compute-0 nova_compute[239965]: 2026-01-26 15:56:19.863 239969 DEBUG nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:56:19 compute-0 podman[277550]: 2026-01-26 15:56:19.793544428 +0000 UTC m=+0.034594080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:56:19 compute-0 nova_compute[239965]: 2026-01-26 15:56:19.922 239969 DEBUG nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:56:19 compute-0 nova_compute[239965]: 2026-01-26 15:56:19.923 239969 DEBUG nova.network.neutron [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:56:19 compute-0 nova_compute[239965]: 2026-01-26 15:56:19.939 239969 INFO nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:56:19 compute-0 nova_compute[239965]: 2026-01-26 15:56:19.958 239969 DEBUG nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:56:20 compute-0 podman[277550]: 2026-01-26 15:56:20.010965425 +0000 UTC m=+0.252015057 container create 53f14f537599bfeca5995241b5d4d08d607a6f2f9b54583839519a59acfeb5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhaskara, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.044 239969 DEBUG nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.048 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.049 239969 INFO nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Creating image(s)
Jan 26 15:56:20 compute-0 systemd[1]: Started libpod-conmon-53f14f537599bfeca5995241b5d4d08d607a6f2f9b54583839519a59acfeb5b9.scope.
Jan 26 15:56:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.092 239969 DEBUG nova.storage.rbd_utils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:20 compute-0 podman[277550]: 2026-01-26 15:56:20.114130617 +0000 UTC m=+0.355180269 container init 53f14f537599bfeca5995241b5d4d08d607a6f2f9b54583839519a59acfeb5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhaskara, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:56:20 compute-0 podman[277550]: 2026-01-26 15:56:20.125903046 +0000 UTC m=+0.366952678 container start 53f14f537599bfeca5995241b5d4d08d607a6f2f9b54583839519a59acfeb5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhaskara, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 26 15:56:20 compute-0 sweet_bhaskara[277573]: 167 167
Jan 26 15:56:20 compute-0 systemd[1]: libpod-53f14f537599bfeca5995241b5d4d08d607a6f2f9b54583839519a59acfeb5b9.scope: Deactivated successfully.
Jan 26 15:56:20 compute-0 podman[277550]: 2026-01-26 15:56:20.146914551 +0000 UTC m=+0.387964203 container attach 53f14f537599bfeca5995241b5d4d08d607a6f2f9b54583839519a59acfeb5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 26 15:56:20 compute-0 podman[277550]: 2026-01-26 15:56:20.147491296 +0000 UTC m=+0.388540928 container died 53f14f537599bfeca5995241b5d4d08d607a6f2f9b54583839519a59acfeb5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.146 239969 DEBUG nova.storage.rbd_utils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.203 239969 DEBUG nova.storage.rbd_utils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.209 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5d5c23370fbfc689b8adc830d61d64478452e6d24de1fa77c57950287ec8779-merged.mount: Deactivated successfully.
Jan 26 15:56:20 compute-0 podman[277550]: 2026-01-26 15:56:20.249237223 +0000 UTC m=+0.490286855 container remove 53f14f537599bfeca5995241b5d4d08d607a6f2f9b54583839519a59acfeb5b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.250 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:20 compute-0 systemd[1]: libpod-conmon-53f14f537599bfeca5995241b5d4d08d607a6f2f9b54583839519a59acfeb5b9.scope: Deactivated successfully.
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.313 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.315 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.316 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.316 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.347 239969 DEBUG nova.storage.rbd_utils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.351 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:20 compute-0 ovn_controller[146046]: 2026-01-26T15:56:20Z|00281|binding|INFO|Releasing lport 8cf3bd60-3f77-4742-a649-f7b9bc748f38 from this chassis (sb_readonly=0)
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.386 239969 DEBUG nova.policy [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8b6639ef07f74c9ebad76dffa361ec4e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1b3f0866575347bd80cdc80b692f07f5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:56:20 compute-0 podman[277667]: 2026-01-26 15:56:20.41453497 +0000 UTC m=+0.024072552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.515 239969 DEBUG oslo_concurrency.lockutils [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.516 239969 DEBUG oslo_concurrency.lockutils [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.517 239969 DEBUG nova.compute.manager [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.522 239969 DEBUG nova.compute.manager [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.523 239969 DEBUG nova.objects.instance [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'flavor' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.549 239969 DEBUG nova.virt.libvirt.driver [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 15:56:20 compute-0 ovn_controller[146046]: 2026-01-26T15:56:20Z|00282|binding|INFO|Releasing lport 8cf3bd60-3f77-4742-a649-f7b9bc748f38 from this chassis (sb_readonly=0)
Jan 26 15:56:20 compute-0 nova_compute[239965]: 2026-01-26 15:56:20.574 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:21 compute-0 nova_compute[239965]: 2026-01-26 15:56:21.058 239969 DEBUG nova.network.neutron [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Successfully created port: 23122a0c-9643-40ee-8c47-11f5b18ef522 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:56:21 compute-0 podman[277667]: 2026-01-26 15:56:21.067092677 +0000 UTC m=+0.676630239 container create c87cea7b47cee2d15ba23d58de1e9232aa0c899ab7055e1f594f87ae7502c322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:56:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:56:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:56:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:56:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:56:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2606040193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:21 compute-0 ceph-mon[75140]: pgmap v1292: 305 pgs: 305 active+clean; 181 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 179 KiB/s wr, 349 op/s
Jan 26 15:56:21 compute-0 systemd[1]: Started libpod-conmon-c87cea7b47cee2d15ba23d58de1e9232aa0c899ab7055e1f594f87ae7502c322.scope.
Jan 26 15:56:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66ae0f16f839089dfe1ca313762b097fca7803e98f35f38b628f79f4b379712/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66ae0f16f839089dfe1ca313762b097fca7803e98f35f38b628f79f4b379712/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66ae0f16f839089dfe1ca313762b097fca7803e98f35f38b628f79f4b379712/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66ae0f16f839089dfe1ca313762b097fca7803e98f35f38b628f79f4b379712/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d66ae0f16f839089dfe1ca313762b097fca7803e98f35f38b628f79f4b379712/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:21 compute-0 podman[277667]: 2026-01-26 15:56:21.193270253 +0000 UTC m=+0.802807845 container init c87cea7b47cee2d15ba23d58de1e9232aa0c899ab7055e1f594f87ae7502c322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_aryabhata, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 15:56:21 compute-0 podman[277667]: 2026-01-26 15:56:21.20531954 +0000 UTC m=+0.814857102 container start c87cea7b47cee2d15ba23d58de1e9232aa0c899ab7055e1f594f87ae7502c322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 15:56:21 compute-0 podman[277667]: 2026-01-26 15:56:21.240182255 +0000 UTC m=+0.849719837 container attach c87cea7b47cee2d15ba23d58de1e9232aa0c899ab7055e1f594f87ae7502c322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_aryabhata, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 15:56:21 compute-0 pedantic_aryabhata[277699]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:56:21 compute-0 pedantic_aryabhata[277699]: --> All data devices are unavailable
Jan 26 15:56:21 compute-0 systemd[1]: libpod-c87cea7b47cee2d15ba23d58de1e9232aa0c899ab7055e1f594f87ae7502c322.scope: Deactivated successfully.
Jan 26 15:56:21 compute-0 podman[277667]: 2026-01-26 15:56:21.768716138 +0000 UTC m=+1.378253730 container died c87cea7b47cee2d15ba23d58de1e9232aa0c899ab7055e1f594f87ae7502c322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_aryabhata, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:56:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 188 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 731 KiB/s wr, 285 op/s
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.124 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:22 compute-0 ceph-mon[75140]: pgmap v1293: 305 pgs: 305 active+clean; 188 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 731 KiB/s wr, 285 op/s
Jan 26 15:56:22 compute-0 ovn_controller[146046]: 2026-01-26T15:56:22Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3a:cd:80 10.100.0.4
Jan 26 15:56:22 compute-0 ovn_controller[146046]: 2026-01-26T15:56:22Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:cd:80 10.100.0.4
Jan 26 15:56:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d66ae0f16f839089dfe1ca313762b097fca7803e98f35f38b628f79f4b379712-merged.mount: Deactivated successfully.
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.171 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.820s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:22 compute-0 podman[277667]: 2026-01-26 15:56:22.310612299 +0000 UTC m=+1.920149861 container remove c87cea7b47cee2d15ba23d58de1e9232aa0c899ab7055e1f594f87ae7502c322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 15:56:22 compute-0 systemd[1]: libpod-conmon-c87cea7b47cee2d15ba23d58de1e9232aa0c899ab7055e1f594f87ae7502c322.scope: Deactivated successfully.
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.352 239969 DEBUG nova.storage.rbd_utils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] resizing rbd image 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:56:22 compute-0 sudo[277513]: pam_unix(sudo:session): session closed for user root
Jan 26 15:56:22 compute-0 sudo[277775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:56:22 compute-0 sudo[277775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:56:22 compute-0 sudo[277775]: pam_unix(sudo:session): session closed for user root
Jan 26 15:56:22 compute-0 sudo[277811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:56:22 compute-0 sudo[277811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:56:22 compute-0 podman[277847]: 2026-01-26 15:56:22.80361227 +0000 UTC m=+0.047376404 container create 40990c1e68badf9d0fbf01ab71f1f17fc467723ea12031d520bc6438e6bb374a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 15:56:22 compute-0 systemd[1]: Started libpod-conmon-40990c1e68badf9d0fbf01ab71f1f17fc467723ea12031d520bc6438e6bb374a.scope.
Jan 26 15:56:22 compute-0 ovn_controller[146046]: 2026-01-26T15:56:22Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:ae:87 10.100.0.12
Jan 26 15:56:22 compute-0 ovn_controller[146046]: 2026-01-26T15:56:22Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:ae:87 10.100.0.12
Jan 26 15:56:22 compute-0 podman[277847]: 2026-01-26 15:56:22.777196081 +0000 UTC m=+0.020960235 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:56:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:22 compute-0 podman[277847]: 2026-01-26 15:56:22.920920058 +0000 UTC m=+0.164684222 container init 40990c1e68badf9d0fbf01ab71f1f17fc467723ea12031d520bc6438e6bb374a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 15:56:22 compute-0 podman[277847]: 2026-01-26 15:56:22.931248442 +0000 UTC m=+0.175012576 container start 40990c1e68badf9d0fbf01ab71f1f17fc467723ea12031d520bc6438e6bb374a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.932 239969 DEBUG nova.network.neutron [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Successfully updated port: 23122a0c-9643-40ee-8c47-11f5b18ef522 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:56:22 compute-0 podman[277847]: 2026-01-26 15:56:22.938880729 +0000 UTC m=+0.182644893 container attach 40990c1e68badf9d0fbf01ab71f1f17fc467723ea12031d520bc6438e6bb374a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 26 15:56:22 compute-0 determined_driscoll[277864]: 167 167
Jan 26 15:56:22 compute-0 systemd[1]: libpod-40990c1e68badf9d0fbf01ab71f1f17fc467723ea12031d520bc6438e6bb374a.scope: Deactivated successfully.
Jan 26 15:56:22 compute-0 podman[277847]: 2026-01-26 15:56:22.940418847 +0000 UTC m=+0.184182981 container died 40990c1e68badf9d0fbf01ab71f1f17fc467723ea12031d520bc6438e6bb374a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.944 239969 DEBUG nova.objects.instance [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lazy-loading 'migration_context' on Instance uuid 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.951 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "refresh_cache-24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.951 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquired lock "refresh_cache-24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.951 239969 DEBUG nova.network.neutron [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.957 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.958 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Ensure instance console log exists: /var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.958 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.960 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:22 compute-0 nova_compute[239965]: 2026-01-26 15:56:22.960 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-09fe1602f6f75f4de4871a0328baa7e050be7f91742f06fb8ab8c0b3967d5e14-merged.mount: Deactivated successfully.
Jan 26 15:56:22 compute-0 podman[277847]: 2026-01-26 15:56:22.984592641 +0000 UTC m=+0.228356775 container remove 40990c1e68badf9d0fbf01ab71f1f17fc467723ea12031d520bc6438e6bb374a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_driscoll, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:56:23 compute-0 systemd[1]: libpod-conmon-40990c1e68badf9d0fbf01ab71f1f17fc467723ea12031d520bc6438e6bb374a.scope: Deactivated successfully.
Jan 26 15:56:23 compute-0 nova_compute[239965]: 2026-01-26 15:56:23.181 239969 DEBUG nova.network.neutron [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:56:23 compute-0 podman[277906]: 2026-01-26 15:56:23.197290032 +0000 UTC m=+0.052812558 container create 8cc3ca87138d6fd0845872df2c81317972e1785dba4da97777628664bb217bfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lamarr, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 15:56:23 compute-0 podman[277906]: 2026-01-26 15:56:23.173421186 +0000 UTC m=+0.028943742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:56:23 compute-0 systemd[1]: Started libpod-conmon-8cc3ca87138d6fd0845872df2c81317972e1785dba4da97777628664bb217bfb.scope.
Jan 26 15:56:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a380fb2e56e33602f1878b4a45d47d3c129e2e25a4f016072e51c0d61b007e17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a380fb2e56e33602f1878b4a45d47d3c129e2e25a4f016072e51c0d61b007e17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a380fb2e56e33602f1878b4a45d47d3c129e2e25a4f016072e51c0d61b007e17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a380fb2e56e33602f1878b4a45d47d3c129e2e25a4f016072e51c0d61b007e17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:23 compute-0 podman[277906]: 2026-01-26 15:56:23.327676132 +0000 UTC m=+0.183198688 container init 8cc3ca87138d6fd0845872df2c81317972e1785dba4da97777628664bb217bfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 15:56:23 compute-0 podman[277906]: 2026-01-26 15:56:23.336523279 +0000 UTC m=+0.192045805 container start 8cc3ca87138d6fd0845872df2c81317972e1785dba4da97777628664bb217bfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lamarr, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 15:56:23 compute-0 podman[277906]: 2026-01-26 15:56:23.395179549 +0000 UTC m=+0.250702105 container attach 8cc3ca87138d6fd0845872df2c81317972e1785dba4da97777628664bb217bfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]: {
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:     "0": [
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:         {
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "devices": [
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "/dev/loop3"
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             ],
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_name": "ceph_lv0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_size": "21470642176",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "name": "ceph_lv0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "tags": {
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.cluster_name": "ceph",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.crush_device_class": "",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.encrypted": "0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.objectstore": "bluestore",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.osd_id": "0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.type": "block",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.vdo": "0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.with_tpm": "0"
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             },
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "type": "block",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "vg_name": "ceph_vg0"
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:         }
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:     ],
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:     "1": [
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:         {
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "devices": [
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "/dev/loop4"
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             ],
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_name": "ceph_lv1",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_size": "21470642176",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "name": "ceph_lv1",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "tags": {
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.cluster_name": "ceph",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.crush_device_class": "",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.encrypted": "0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.objectstore": "bluestore",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.osd_id": "1",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.type": "block",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.vdo": "0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.with_tpm": "0"
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             },
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "type": "block",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "vg_name": "ceph_vg1"
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:         }
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:     ],
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:     "2": [
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:         {
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "devices": [
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "/dev/loop5"
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             ],
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_name": "ceph_lv2",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_size": "21470642176",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "name": "ceph_lv2",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "tags": {
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.cluster_name": "ceph",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.crush_device_class": "",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.encrypted": "0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.objectstore": "bluestore",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.osd_id": "2",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.type": "block",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.vdo": "0",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:                 "ceph.with_tpm": "0"
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             },
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "type": "block",
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:             "vg_name": "ceph_vg2"
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:         }
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]:     ]
Jan 26 15:56:23 compute-0 wonderful_lamarr[277923]: }
Jan 26 15:56:23 compute-0 systemd[1]: libpod-8cc3ca87138d6fd0845872df2c81317972e1785dba4da97777628664bb217bfb.scope: Deactivated successfully.
Jan 26 15:56:23 compute-0 podman[277932]: 2026-01-26 15:56:23.715917241 +0000 UTC m=+0.027377702 container died 8cc3ca87138d6fd0845872df2c81317972e1785dba4da97777628664bb217bfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:56:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a380fb2e56e33602f1878b4a45d47d3c129e2e25a4f016072e51c0d61b007e17-merged.mount: Deactivated successfully.
Jan 26 15:56:23 compute-0 podman[277932]: 2026-01-26 15:56:23.766917213 +0000 UTC m=+0.078377654 container remove 8cc3ca87138d6fd0845872df2c81317972e1785dba4da97777628664bb217bfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 15:56:23 compute-0 systemd[1]: libpod-conmon-8cc3ca87138d6fd0845872df2c81317972e1785dba4da97777628664bb217bfb.scope: Deactivated successfully.
Jan 26 15:56:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1294: 305 pgs: 305 active+clean; 268 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 817 KiB/s rd, 6.6 MiB/s wr, 194 op/s
Jan 26 15:56:23 compute-0 sudo[277811]: pam_unix(sudo:session): session closed for user root
Jan 26 15:56:23 compute-0 sudo[277947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:56:23 compute-0 sudo[277947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:56:23 compute-0 sudo[277947]: pam_unix(sudo:session): session closed for user root
Jan 26 15:56:23 compute-0 sudo[277972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:56:23 compute-0 sudo[277972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.015 239969 DEBUG nova.compute.manager [req-71fda2e2-325d-47f3-a4d4-b8dc1e0bc28d req-84807620-3568-4a37-9428-f61710cd085b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Received event network-changed-23122a0c-9643-40ee-8c47-11f5b18ef522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.016 239969 DEBUG nova.compute.manager [req-71fda2e2-325d-47f3-a4d4-b8dc1e0bc28d req-84807620-3568-4a37-9428-f61710cd085b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Refreshing instance network info cache due to event network-changed-23122a0c-9643-40ee-8c47-11f5b18ef522. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.016 239969 DEBUG oslo_concurrency.lockutils [req-71fda2e2-325d-47f3-a4d4-b8dc1e0bc28d req-84807620-3568-4a37-9428-f61710cd085b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:24 compute-0 ovn_controller[146046]: 2026-01-26T15:56:24Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:70:e8:d9 10.100.0.3
Jan 26 15:56:24 compute-0 ovn_controller[146046]: 2026-01-26T15:56:24Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:70:e8:d9 10.100.0.3
Jan 26 15:56:24 compute-0 podman[278009]: 2026-01-26 15:56:24.240847715 +0000 UTC m=+0.044712378 container create a33c7002125ec4e35618bda8fba557f941967d95e5998ddb0eb64fb5c02349fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 15:56:24 compute-0 systemd[1]: Started libpod-conmon-a33c7002125ec4e35618bda8fba557f941967d95e5998ddb0eb64fb5c02349fc.scope.
Jan 26 15:56:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:24 compute-0 podman[278009]: 2026-01-26 15:56:24.221180043 +0000 UTC m=+0.025044726 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:56:24 compute-0 podman[278009]: 2026-01-26 15:56:24.327427461 +0000 UTC m=+0.131292124 container init a33c7002125ec4e35618bda8fba557f941967d95e5998ddb0eb64fb5c02349fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:56:24 compute-0 podman[278009]: 2026-01-26 15:56:24.33391013 +0000 UTC m=+0.137774823 container start a33c7002125ec4e35618bda8fba557f941967d95e5998ddb0eb64fb5c02349fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 15:56:24 compute-0 ecstatic_goldwasser[278024]: 167 167
Jan 26 15:56:24 compute-0 systemd[1]: libpod-a33c7002125ec4e35618bda8fba557f941967d95e5998ddb0eb64fb5c02349fc.scope: Deactivated successfully.
Jan 26 15:56:24 compute-0 podman[278009]: 2026-01-26 15:56:24.340401089 +0000 UTC m=+0.144265752 container attach a33c7002125ec4e35618bda8fba557f941967d95e5998ddb0eb64fb5c02349fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 15:56:24 compute-0 podman[278009]: 2026-01-26 15:56:24.340931502 +0000 UTC m=+0.144796165 container died a33c7002125ec4e35618bda8fba557f941967d95e5998ddb0eb64fb5c02349fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:56:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-f19e633fa322cd62a7e79c2efe7f974e14da833357bd8844a94736694d30bb3e-merged.mount: Deactivated successfully.
Jan 26 15:56:24 compute-0 podman[278009]: 2026-01-26 15:56:24.388612773 +0000 UTC m=+0.192477436 container remove a33c7002125ec4e35618bda8fba557f941967d95e5998ddb0eb64fb5c02349fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_goldwasser, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 15:56:24 compute-0 systemd[1]: libpod-conmon-a33c7002125ec4e35618bda8fba557f941967d95e5998ddb0eb64fb5c02349fc.scope: Deactivated successfully.
Jan 26 15:56:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:24 compute-0 podman[278049]: 2026-01-26 15:56:24.56453257 +0000 UTC m=+0.029978066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:56:24 compute-0 podman[278049]: 2026-01-26 15:56:24.709182671 +0000 UTC m=+0.174628147 container create 7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.731 239969 DEBUG nova.network.neutron [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Updating instance_info_cache with network_info: [{"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.751 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Releasing lock "refresh_cache-24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.752 239969 DEBUG nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Instance network_info: |[{"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.753 239969 DEBUG oslo_concurrency.lockutils [req-71fda2e2-325d-47f3-a4d4-b8dc1e0bc28d req-84807620-3568-4a37-9428-f61710cd085b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.755 239969 DEBUG nova.network.neutron [req-71fda2e2-325d-47f3-a4d4-b8dc1e0bc28d req-84807620-3568-4a37-9428-f61710cd085b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Refreshing network info cache for port 23122a0c-9643-40ee-8c47-11f5b18ef522 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.757 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Start _get_guest_xml network_info=[{"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.763 239969 WARNING nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:56:24 compute-0 systemd[1]: Started libpod-conmon-7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29.scope.
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.769 239969 DEBUG nova.virt.libvirt.host [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.770 239969 DEBUG nova.virt.libvirt.host [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.778 239969 DEBUG nova.virt.libvirt.host [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.779 239969 DEBUG nova.virt.libvirt.host [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.779 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.780 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.780 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.781 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.781 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.781 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.782 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.782 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.783 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.783 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.783 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.784 239969 DEBUG nova.virt.hardware [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.787 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d96cad8752e30ce2a86064f1c73442d1c6e3e808e3c55ae556634c7650887762/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d96cad8752e30ce2a86064f1c73442d1c6e3e808e3c55ae556634c7650887762/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d96cad8752e30ce2a86064f1c73442d1c6e3e808e3c55ae556634c7650887762/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d96cad8752e30ce2a86064f1c73442d1c6e3e808e3c55ae556634c7650887762/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:24 compute-0 podman[278049]: 2026-01-26 15:56:24.841551609 +0000 UTC m=+0.306997105 container init 7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 15:56:24 compute-0 ceph-mon[75140]: pgmap v1294: 305 pgs: 305 active+clean; 268 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 817 KiB/s rd, 6.6 MiB/s wr, 194 op/s
Jan 26 15:56:24 compute-0 podman[278049]: 2026-01-26 15:56:24.85338046 +0000 UTC m=+0.318825956 container start 7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.912 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.913 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:24 compute-0 podman[278049]: 2026-01-26 15:56:24.934681426 +0000 UTC m=+0.400126932 container attach 7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:56:24 compute-0 nova_compute[239965]: 2026-01-26 15:56:24.936 239969 DEBUG nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.210 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.212 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.224 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.224 239969 INFO nova.compute.claims [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:56:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2597505688' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.417 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.445 239969 DEBUG nova.storage.rbd_utils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.451 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.503 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:25 compute-0 lvm[278185]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:56:25 compute-0 lvm[278185]: VG ceph_vg0 finished
Jan 26 15:56:25 compute-0 lvm[278186]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:56:25 compute-0 lvm[278186]: VG ceph_vg1 finished
Jan 26 15:56:25 compute-0 lvm[278189]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:56:25 compute-0 lvm[278189]: VG ceph_vg2 finished
Jan 26 15:56:25 compute-0 dazzling_ritchie[278065]: {}
Jan 26 15:56:25 compute-0 systemd[1]: libpod-7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29.scope: Deactivated successfully.
Jan 26 15:56:25 compute-0 systemd[1]: libpod-7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29.scope: Consumed 1.388s CPU time.
Jan 26 15:56:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 283 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 842 KiB/s rd, 6.9 MiB/s wr, 205 op/s
Jan 26 15:56:25 compute-0 podman[278229]: 2026-01-26 15:56:25.786653946 +0000 UTC m=+0.027257280 container died 7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 15:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d96cad8752e30ce2a86064f1c73442d1c6e3e808e3c55ae556634c7650887762-merged.mount: Deactivated successfully.
Jan 26 15:56:25 compute-0 podman[278229]: 2026-01-26 15:56:25.842822385 +0000 UTC m=+0.083425698 container remove 7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_ritchie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 15:56:25 compute-0 systemd[1]: libpod-conmon-7facaceee57d8ca1dc6a2e7985c0b58d0da5ac5de8052ac0c2f4adb27bbedb29.scope: Deactivated successfully.
Jan 26 15:56:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2597505688' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.866 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442970.864359, 8266f382-d7db-41e3-bc97-b1510341e248 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.867 239969 INFO nova.compute.manager [-] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] VM Stopped (Lifecycle Event)
Jan 26 15:56:25 compute-0 nova_compute[239965]: 2026-01-26 15:56:25.894 239969 DEBUG nova.compute.manager [None req-a182ac94-46b0-4e6c-912b-c6c2fd15a293 - - - - - -] [instance: 8266f382-d7db-41e3-bc97-b1510341e248] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:25 compute-0 sudo[277972]: pam_unix(sudo:session): session closed for user root
Jan 26 15:56:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:56:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:56:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:56:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:56:26 compute-0 sudo[278242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:56:26 compute-0 sudo[278242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:56:26 compute-0 sudo[278242]: pam_unix(sudo:session): session closed for user root
Jan 26 15:56:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228610368' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/179349866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.128 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.129 239969 DEBUG nova.virt.libvirt.vif [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:56:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1510684911',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1510684911',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1510684911',id=42,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b3f0866575347bd80cdc80b692f07f5',ramdisk_id='',reservation_id='r-xrgr06gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-744790849',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-744790849-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:19Z,user_data=None,user_id='8b6639ef07f74c9ebad76dffa361ec4e',uuid=24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.130 239969 DEBUG nova.network.os_vif_util [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converting VIF {"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.130 239969 DEBUG nova.network.os_vif_util [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:03:c2,bridge_name='br-int',has_traffic_filtering=True,id=23122a0c-9643-40ee-8c47-11f5b18ef522,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23122a0c-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.132 239969 DEBUG nova.objects.instance [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.144 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.151 239969 DEBUG nova.compute.provider_tree [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.161 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <uuid>24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423</uuid>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <name>instance-0000002a</name>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1510684911</nova:name>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:56:24</nova:creationTime>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <nova:user uuid="8b6639ef07f74c9ebad76dffa361ec4e">tempest-ImagesOneServerNegativeTestJSON-744790849-project-member</nova:user>
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <nova:project uuid="1b3f0866575347bd80cdc80b692f07f5">tempest-ImagesOneServerNegativeTestJSON-744790849</nova:project>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <nova:port uuid="23122a0c-9643-40ee-8c47-11f5b18ef522">
Jan 26 15:56:26 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <entry name="serial">24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423</entry>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <entry name="uuid">24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423</entry>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk">
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk.config">
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:11:03:c2"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <target dev="tap23122a0c-96"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423/console.log" append="off"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:56:26 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:56:26 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:26 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:26 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:26 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.162 239969 DEBUG nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Preparing to wait for external event network-vif-plugged-23122a0c-9643-40ee-8c47-11f5b18ef522 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.171 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.172 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.172 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.173 239969 DEBUG nova.virt.libvirt.vif [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:56:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1510684911',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1510684911',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1510684911',id=42,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1b3f0866575347bd80cdc80b692f07f5',ramdisk_id='',reservation_id='r-xrgr06gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-744790849',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-744790849-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:19Z,user_data=None,user_id='8b6639ef07f74c9ebad76dffa361ec4e',uuid=24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.173 239969 DEBUG nova.network.os_vif_util [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converting VIF {"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.174 239969 DEBUG nova.network.os_vif_util [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:03:c2,bridge_name='br-int',has_traffic_filtering=True,id=23122a0c-9643-40ee-8c47-11f5b18ef522,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23122a0c-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.174 239969 DEBUG os_vif [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:03:c2,bridge_name='br-int',has_traffic_filtering=True,id=23122a0c-9643-40ee-8c47-11f5b18ef522,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23122a0c-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.176 239969 DEBUG nova.scheduler.client.report [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.179 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.179 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.180 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.185 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.186 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23122a0c-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.187 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23122a0c-96, col_values=(('external_ids', {'iface-id': '23122a0c-9643-40ee-8c47-11f5b18ef522', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:03:c2', 'vm-uuid': '24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:26 compute-0 NetworkManager[48954]: <info>  [1769442986.1900] manager: (tap23122a0c-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.190 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.196 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.198 239969 INFO os_vif [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:03:c2,bridge_name='br-int',has_traffic_filtering=True,id=23122a0c-9643-40ee-8c47-11f5b18ef522,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23122a0c-96')
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.201 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.990s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.202 239969 DEBUG nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.265 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.266 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.266 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] No VIF found with MAC fa:16:3e:11:03:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.267 239969 INFO nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Using config drive
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.291 239969 DEBUG nova.storage.rbd_utils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.299 239969 DEBUG nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.300 239969 DEBUG nova.network.neutron [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.341 239969 INFO nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.358 239969 DEBUG nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.404 239969 DEBUG nova.network.neutron [req-71fda2e2-325d-47f3-a4d4-b8dc1e0bc28d req-84807620-3568-4a37-9428-f61710cd085b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Updated VIF entry in instance network info cache for port 23122a0c-9643-40ee-8c47-11f5b18ef522. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.405 239969 DEBUG nova.network.neutron [req-71fda2e2-325d-47f3-a4d4-b8dc1e0bc28d req-84807620-3568-4a37-9428-f61710cd085b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Updating instance_info_cache with network_info: [{"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.430 239969 DEBUG oslo_concurrency.lockutils [req-71fda2e2-325d-47f3-a4d4-b8dc1e0bc28d req-84807620-3568-4a37-9428-f61710cd085b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.451 239969 DEBUG nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.452 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.453 239969 INFO nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Creating image(s)
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.473 239969 DEBUG nova.storage.rbd_utils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.499 239969 DEBUG nova.storage.rbd_utils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.524 239969 DEBUG nova.storage.rbd_utils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.529 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.572 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.573 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.615 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.617 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.618 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.618 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.647 239969 DEBUG nova.storage.rbd_utils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.651 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.762 239969 DEBUG nova.policy [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:56:26 compute-0 ceph-mon[75140]: pgmap v1295: 305 pgs: 305 active+clean; 283 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 842 KiB/s rd, 6.9 MiB/s wr, 205 op/s
Jan 26 15:56:26 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:56:26 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:56:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4228610368' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/179349866' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:26 compute-0 nova_compute[239965]: 2026-01-26 15:56:26.948 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.001 239969 INFO nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Creating config drive at /var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423/disk.config
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.006 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpov92sk3n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.039 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442971.9897404, 819151c3-9892-4926-879b-6bcf2047e116 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.040 239969 INFO nova.compute.manager [-] [instance: 819151c3-9892-4926-879b-6bcf2047e116] VM Stopped (Lifecycle Event)
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.048 239969 DEBUG nova.storage.rbd_utils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] resizing rbd image d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.278 239969 DEBUG nova.compute.manager [None req-30331b27-d871-42ac-9cdb-e6dca80646d5 - - - - - -] [instance: 819151c3-9892-4926-879b-6bcf2047e116] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.280 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpov92sk3n" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.303 239969 DEBUG nova.storage.rbd_utils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] rbd image 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.309 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423/disk.config 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:27 compute-0 podman[278449]: 2026-01-26 15:56:27.373829584 +0000 UTC m=+0.056250512 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:56:27 compute-0 podman[278461]: 2026-01-26 15:56:27.401304927 +0000 UTC m=+0.083739785 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.503 239969 DEBUG nova.objects.instance [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'migration_context' on Instance uuid d49e7eb2-dcdb-4410-a016-42e58d1d1416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.506 239969 DEBUG oslo_concurrency.processutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423/disk.config 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.507 239969 INFO nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Deleting local config drive /var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423/disk.config because it was imported into RBD.
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.520 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.520 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Ensure instance console log exists: /var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.520 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.521 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.521 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:27 compute-0 kernel: tap23122a0c-96: entered promiscuous mode
Jan 26 15:56:27 compute-0 NetworkManager[48954]: <info>  [1769442987.5687] manager: (tap23122a0c-96): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Jan 26 15:56:27 compute-0 ovn_controller[146046]: 2026-01-26T15:56:27Z|00283|binding|INFO|Claiming lport 23122a0c-9643-40ee-8c47-11f5b18ef522 for this chassis.
Jan 26 15:56:27 compute-0 ovn_controller[146046]: 2026-01-26T15:56:27Z|00284|binding|INFO|23122a0c-9643-40ee-8c47-11f5b18ef522: Claiming fa:16:3e:11:03:c2 10.100.0.10
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.570 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:27 compute-0 systemd-udevd[278187]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.581 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:03:c2 10.100.0.10'], port_security=['fa:16:3e:11:03:c2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12eb3028-37a4-48dc-836b-0978d0bddbce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b3f0866575347bd80cdc80b692f07f5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dd45551f-cdcc-4a64-9f48-bf991126bbec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef363ab1-8cec-4345-8968-ee11106642fa, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=23122a0c-9643-40ee-8c47-11f5b18ef522) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.582 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 23122a0c-9643-40ee-8c47-11f5b18ef522 in datapath 12eb3028-37a4-48dc-836b-0978d0bddbce bound to our chassis
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.583 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 12eb3028-37a4-48dc-836b-0978d0bddbce
Jan 26 15:56:27 compute-0 NetworkManager[48954]: <info>  [1769442987.5917] device (tap23122a0c-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:27 compute-0 NetworkManager[48954]: <info>  [1769442987.5926] device (tap23122a0c-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.597 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a38960-d471-4ed3-b3f2-e29bee271def]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.598 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap12eb3028-31 in ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.600 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap12eb3028-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.600 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[815baab9-be34-4448-8790-a340d8356e3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.601 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3e2862-266d-4643-a597-b35ed692172e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.617 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[5789b1ad-0c2b-4ffe-ad79-8f76fc5997a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 systemd-machined[208061]: New machine qemu-46-instance-0000002a.
Jan 26 15:56:27 compute-0 systemd[1]: Started Virtual Machine qemu-46-instance-0000002a.
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.645 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.645 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b33a795f-ef25-44a5-881d-c9aea99c2726]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_controller[146046]: 2026-01-26T15:56:27Z|00285|binding|INFO|Setting lport 23122a0c-9643-40ee-8c47-11f5b18ef522 ovn-installed in OVS
Jan 26 15:56:27 compute-0 ovn_controller[146046]: 2026-01-26T15:56:27Z|00286|binding|INFO|Setting lport 23122a0c-9643-40ee-8c47-11f5b18ef522 up in Southbound
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.697 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.716 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0c313f92-4336-4116-8110-7eb0cf74efb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 NetworkManager[48954]: <info>  [1769442987.7248] manager: (tap12eb3028-30): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.723 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f385c42-5d96-4b62-a9e7-51ebc6a6d8d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.760 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[35184276-33c1-4f6a-9966-d0496ce7ef7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.764 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f0e9ae3d-b42e-4df4-91b9-a9147a551af2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 358 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 9.5 MiB/s wr, 235 op/s
Jan 26 15:56:27 compute-0 NetworkManager[48954]: <info>  [1769442987.7913] device (tap12eb3028-30): carrier: link connected
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.797 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c56dcc44-85a6-4e79-b756-c23f59093d02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.813 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b654957e-488b-46af-a9cb-46a174e991ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12eb3028-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c1:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447859, 'reachable_time': 30970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278580, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.830 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[32673d35-65df-4baf-8006-87c5329f6811]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:c192'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 447859, 'tstamp': 447859}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278581, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.849 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[785ac0f6-21ba-449c-8f4e-266eef3cec3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap12eb3028-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:c1:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447859, 'reachable_time': 30970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278582, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.892 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1793c3a3-b862-4c79-82df-34707f107683]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.922 239969 DEBUG nova.compute.manager [req-23b01420-c3bc-404f-a858-1fed038f72a6 req-3883c144-8d22-4744-9852-337d5efe9154 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Received event network-vif-plugged-23122a0c-9643-40ee-8c47-11f5b18ef522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.922 239969 DEBUG oslo_concurrency.lockutils [req-23b01420-c3bc-404f-a858-1fed038f72a6 req-3883c144-8d22-4744-9852-337d5efe9154 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.922 239969 DEBUG oslo_concurrency.lockutils [req-23b01420-c3bc-404f-a858-1fed038f72a6 req-3883c144-8d22-4744-9852-337d5efe9154 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.922 239969 DEBUG oslo_concurrency.lockutils [req-23b01420-c3bc-404f-a858-1fed038f72a6 req-3883c144-8d22-4744-9852-337d5efe9154 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.923 239969 DEBUG nova.compute.manager [req-23b01420-c3bc-404f-a858-1fed038f72a6 req-3883c144-8d22-4744-9852-337d5efe9154 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Processing event network-vif-plugged-23122a0c-9643-40ee-8c47-11f5b18ef522 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.951 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5131eae0-603b-465a-a125-8f683feb3836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.952 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12eb3028-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.952 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.953 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap12eb3028-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:27 compute-0 kernel: tap12eb3028-30: entered promiscuous mode
Jan 26 15:56:27 compute-0 NetworkManager[48954]: <info>  [1769442987.9555] manager: (tap12eb3028-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.957 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap12eb3028-30, col_values=(('external_ids', {'iface-id': '25ff8da3-57db-4715-8307-e7e4fd4fbd09'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.960 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/12eb3028-37a4-48dc-836b-0978d0bddbce.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/12eb3028-37a4-48dc-836b-0978d0bddbce.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:56:27 compute-0 ovn_controller[146046]: 2026-01-26T15:56:27Z|00287|binding|INFO|Releasing lport 25ff8da3-57db-4715-8307-e7e4fd4fbd09 from this chassis (sb_readonly=0)
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.961 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.961 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ca3f672a-60a3-4e47-90dc-a378f3440bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.962 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-12eb3028-37a4-48dc-836b-0978d0bddbce
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/12eb3028-37a4-48dc-836b-0978d0bddbce.pid.haproxy
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 12eb3028-37a4-48dc-836b-0978d0bddbce
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:56:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:27.963 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'env', 'PROCESS_TAG=haproxy-12eb3028-37a4-48dc-836b-0978d0bddbce', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/12eb3028-37a4-48dc-836b-0978d0bddbce.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:56:27 compute-0 nova_compute[239965]: 2026-01-26 15:56:27.982 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.354 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442988.353683, 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.354 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] VM Started (Lifecycle Event)
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.356 239969 DEBUG nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.359 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.363 239969 INFO nova.virt.libvirt.driver [-] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Instance spawned successfully.
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.363 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.391 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.395 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.396 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.396 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.396 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.397 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.397 239969 DEBUG nova.virt.libvirt.driver [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.401 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:28 compute-0 podman[278652]: 2026-01-26 15:56:28.32259983 +0000 UTC m=+0.028176672 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.440 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.441 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442988.3564012, 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.441 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] VM Paused (Lifecycle Event)
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.456 239969 DEBUG nova.network.neutron [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Successfully created port: 32baa4e5-2605-4a3f-8023-6a80ea71793d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.471 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.474 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442988.3590393, 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.474 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] VM Resumed (Lifecycle Event)
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.476 239969 INFO nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Took 8.43 seconds to spawn the instance on the hypervisor.
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.476 239969 DEBUG nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.499 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.502 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.526 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.562 239969 INFO nova.compute.manager [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Took 10.19 seconds to build instance.
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.582 239969 DEBUG oslo_concurrency.lockutils [None req-639c89a4-528e-4cf1-9221-523965fbeead 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:28 compute-0 podman[278652]: 2026-01-26 15:56:28.586017175 +0000 UTC m=+0.291593987 container create 48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 15:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:56:28
Jan 26 15:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'volumes', 'default.rgw.control']
Jan 26 15:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:56:28 compute-0 systemd[1]: Started libpod-conmon-48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420.scope.
Jan 26 15:56:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40145d2693bb9b2e60186058f872694c104aea70cbb525cc2c7d84ca5031ebb7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:28 compute-0 podman[278652]: 2026-01-26 15:56:28.753356243 +0000 UTC m=+0.458933135 container init 48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 15:56:28 compute-0 podman[278652]: 2026-01-26 15:56:28.759375921 +0000 UTC m=+0.464952773 container start 48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:56:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:28.770 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:28 compute-0 nova_compute[239965]: 2026-01-26 15:56:28.771 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:28 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[278668]: [NOTICE]   (278673) : New worker (278675) forked
Jan 26 15:56:28 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[278668]: [NOTICE]   (278673) : Loading success.
Jan 26 15:56:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:28.853 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:56:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:28.855 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:29 compute-0 ceph-mon[75140]: pgmap v1296: 305 pgs: 305 active+clean; 358 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 825 KiB/s rd, 9.5 MiB/s wr, 235 op/s
Jan 26 15:56:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 305 active+clean; 358 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 817 KiB/s rd, 9.5 MiB/s wr, 222 op/s
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.143 239969 DEBUG nova.compute.manager [req-2729107a-c8e5-4b64-b68a-7e72e621d806 req-2b3be4da-3e7b-4e3d-b575-3457e7fa7f38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Received event network-vif-plugged-23122a0c-9643-40ee-8c47-11f5b18ef522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.143 239969 DEBUG oslo_concurrency.lockutils [req-2729107a-c8e5-4b64-b68a-7e72e621d806 req-2b3be4da-3e7b-4e3d-b575-3457e7fa7f38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.143 239969 DEBUG oslo_concurrency.lockutils [req-2729107a-c8e5-4b64-b68a-7e72e621d806 req-2b3be4da-3e7b-4e3d-b575-3457e7fa7f38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.144 239969 DEBUG oslo_concurrency.lockutils [req-2729107a-c8e5-4b64-b68a-7e72e621d806 req-2b3be4da-3e7b-4e3d-b575-3457e7fa7f38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.144 239969 DEBUG nova.compute.manager [req-2729107a-c8e5-4b64-b68a-7e72e621d806 req-2b3be4da-3e7b-4e3d-b575-3457e7fa7f38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] No waiting events found dispatching network-vif-plugged-23122a0c-9643-40ee-8c47-11f5b18ef522 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.144 239969 WARNING nova.compute.manager [req-2729107a-c8e5-4b64-b68a-7e72e621d806 req-2b3be4da-3e7b-4e3d-b575-3457e7fa7f38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Received unexpected event network-vif-plugged-23122a0c-9643-40ee-8c47-11f5b18ef522 for instance with vm_state active and task_state None.
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.221 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.380 239969 DEBUG oslo_concurrency.lockutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.380 239969 DEBUG oslo_concurrency.lockutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.381 239969 DEBUG oslo_concurrency.lockutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.382 239969 DEBUG oslo_concurrency.lockutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.382 239969 DEBUG oslo_concurrency.lockutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.383 239969 INFO nova.compute.manager [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Terminating instance
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.384 239969 DEBUG nova.compute.manager [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:56:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.615 239969 DEBUG nova.network.neutron [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Successfully updated port: 32baa4e5-2605-4a3f-8023-6a80ea71793d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.630 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.630 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.631 239969 DEBUG nova.network.neutron [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:56:30 compute-0 kernel: tap23122a0c-96 (unregistering): left promiscuous mode
Jan 26 15:56:30 compute-0 NetworkManager[48954]: <info>  [1769442990.6506] device (tap23122a0c-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:56:30 compute-0 ovn_controller[146046]: 2026-01-26T15:56:30Z|00288|binding|INFO|Releasing lport 23122a0c-9643-40ee-8c47-11f5b18ef522 from this chassis (sb_readonly=0)
Jan 26 15:56:30 compute-0 ovn_controller[146046]: 2026-01-26T15:56:30Z|00289|binding|INFO|Setting lport 23122a0c-9643-40ee-8c47-11f5b18ef522 down in Southbound
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.663 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:30 compute-0 ovn_controller[146046]: 2026-01-26T15:56:30Z|00290|binding|INFO|Removing iface tap23122a0c-96 ovn-installed in OVS
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.667 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:30.673 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:03:c2 10.100.0.10'], port_security=['fa:16:3e:11:03:c2 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-12eb3028-37a4-48dc-836b-0978d0bddbce', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1b3f0866575347bd80cdc80b692f07f5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dd45551f-cdcc-4a64-9f48-bf991126bbec', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ef363ab1-8cec-4345-8968-ee11106642fa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=23122a0c-9643-40ee-8c47-11f5b18ef522) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:30.674 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 23122a0c-9643-40ee-8c47-11f5b18ef522 in datapath 12eb3028-37a4-48dc-836b-0978d0bddbce unbound from our chassis
Jan 26 15:56:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:30.675 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 12eb3028-37a4-48dc-836b-0978d0bddbce, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:56:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:30.676 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6b96fced-0671-4b3e-b634-61eac0af2491]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:30.676 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce namespace which is not needed anymore
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.681 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:30 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Jan 26 15:56:30 compute-0 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d0000002a.scope: Consumed 2.697s CPU time.
Jan 26 15:56:30 compute-0 systemd-machined[208061]: Machine qemu-46-instance-0000002a terminated.
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.817 239969 INFO nova.virt.libvirt.driver [-] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Instance destroyed successfully.
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.817 239969 DEBUG nova.objects.instance [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lazy-loading 'resources' on Instance uuid 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.832 239969 DEBUG nova.virt.libvirt.vif [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1510684911',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1510684911',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1510684911',id=42,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1b3f0866575347bd80cdc80b692f07f5',ramdisk_id='',reservation_id='r-xrgr06gu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-744790849',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-744790849-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:28Z,user_data=None,user_id='8b6639ef07f74c9ebad76dffa361ec4e',uuid=24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.833 239969 DEBUG nova.network.os_vif_util [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converting VIF {"id": "23122a0c-9643-40ee-8c47-11f5b18ef522", "address": "fa:16:3e:11:03:c2", "network": {"id": "12eb3028-37a4-48dc-836b-0978d0bddbce", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-566181762-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1b3f0866575347bd80cdc80b692f07f5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23122a0c-96", "ovs_interfaceid": "23122a0c-9643-40ee-8c47-11f5b18ef522", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.833 239969 DEBUG nova.network.os_vif_util [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:03:c2,bridge_name='br-int',has_traffic_filtering=True,id=23122a0c-9643-40ee-8c47-11f5b18ef522,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23122a0c-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.834 239969 DEBUG os_vif [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:03:c2,bridge_name='br-int',has_traffic_filtering=True,id=23122a0c-9643-40ee-8c47-11f5b18ef522,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23122a0c-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.836 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23122a0c-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.839 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.841 239969 INFO os_vif [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:03:c2,bridge_name='br-int',has_traffic_filtering=True,id=23122a0c-9643-40ee-8c47-11f5b18ef522,network=Network(12eb3028-37a4-48dc-836b-0978d0bddbce),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23122a0c-96')
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.862 239969 DEBUG nova.virt.libvirt.driver [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 15:56:30 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[278668]: [NOTICE]   (278673) : haproxy version is 2.8.14-c23fe91
Jan 26 15:56:30 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[278668]: [NOTICE]   (278673) : path to executable is /usr/sbin/haproxy
Jan 26 15:56:30 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[278668]: [WARNING]  (278673) : Exiting Master process...
Jan 26 15:56:30 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[278668]: [ALERT]    (278673) : Current worker (278675) exited with code 143 (Terminated)
Jan 26 15:56:30 compute-0 neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce[278668]: [WARNING]  (278673) : All workers exited. Exiting... (0)
Jan 26 15:56:30 compute-0 systemd[1]: libpod-48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420.scope: Deactivated successfully.
Jan 26 15:56:30 compute-0 podman[278707]: 2026-01-26 15:56:30.953647248 +0000 UTC m=+0.196087814 container died 48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:56:30 compute-0 nova_compute[239965]: 2026-01-26 15:56:30.960 239969 DEBUG nova.network.neutron [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:56:31 compute-0 ceph-mon[75140]: pgmap v1297: 305 pgs: 305 active+clean; 358 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 817 KiB/s rd, 9.5 MiB/s wr, 222 op/s
Jan 26 15:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420-userdata-shm.mount: Deactivated successfully.
Jan 26 15:56:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-40145d2693bb9b2e60186058f872694c104aea70cbb525cc2c7d84ca5031ebb7-merged.mount: Deactivated successfully.
Jan 26 15:56:31 compute-0 podman[278707]: 2026-01-26 15:56:31.032947405 +0000 UTC m=+0.275387971 container cleanup 48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 15:56:31 compute-0 systemd[1]: libpod-conmon-48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420.scope: Deactivated successfully.
Jan 26 15:56:31 compute-0 podman[278766]: 2026-01-26 15:56:31.109491874 +0000 UTC m=+0.052096130 container remove 48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:56:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:31.116 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1d4aff2d-3bb9-4ed7-9077-18e2677f66db]: (4, ('Mon Jan 26 03:56:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce (48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420)\n48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420\nMon Jan 26 03:56:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce (48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420)\n48e82f49fbf8763c91cb885c7da3141582f5e74e6f8467fa52d78bf6891e8420\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:31.118 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ea0dcb8e-554d-4c7e-a34f-2184173dac26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:31.119 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap12eb3028-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:31 compute-0 kernel: tap12eb3028-30: left promiscuous mode
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.122 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.137 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:31.141 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b882da40-28d2-480e-a706-aa0d2c76e672]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.146 239969 DEBUG nova.compute.manager [req-efe5b5be-7e23-46d1-a583-2e718c9ee0a8 req-070614d1-30e7-47f8-84bb-204873492f96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.147 239969 DEBUG nova.compute.manager [req-efe5b5be-7e23-46d1-a583-2e718c9ee0a8 req-070614d1-30e7-47f8-84bb-204873492f96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing instance network info cache due to event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.148 239969 DEBUG oslo_concurrency.lockutils [req-efe5b5be-7e23-46d1-a583-2e718c9ee0a8 req-070614d1-30e7-47f8-84bb-204873492f96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:31.155 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a0c5804-edb0-4c63-84f7-70a43d6461c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:31.156 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ce57e1c1-a747-411c-b58f-7559c37fc0eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:31.174 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6bda4568-4f5c-4ecd-852e-a379d2742e2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 447851, 'reachable_time': 26847, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278783, 'error': None, 'target': 'ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d12eb3028\x2d37a4\x2d48dc\x2d836b\x2d0978d0bddbce.mount: Deactivated successfully.
Jan 26 15:56:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:31.177 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-12eb3028-37a4-48dc-836b-0978d0bddbce deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:56:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:31.177 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[6637b471-4273-49ab-a5ea-65dfa75581a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.537 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.538 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.538 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.538 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 372 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 9.9 MiB/s wr, 254 op/s
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.818 239969 INFO nova.virt.libvirt.driver [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Deleting instance files /var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_del
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.819 239969 INFO nova.virt.libvirt.driver [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Deletion of /var/lib/nova/instances/24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423_del complete
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.890 239969 INFO nova.compute.manager [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Took 1.51 seconds to destroy the instance on the hypervisor.
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.891 239969 DEBUG oslo.service.loopingcall [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.891 239969 DEBUG nova.compute.manager [-] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:56:31 compute-0 nova_compute[239965]: 2026-01-26 15:56:31.891 239969 DEBUG nova.network.neutron [-] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.694 239969 DEBUG nova.network.neutron [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.747 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.748 239969 DEBUG nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Instance network_info: |[{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.748 239969 DEBUG oslo_concurrency.lockutils [req-efe5b5be-7e23-46d1-a583-2e718c9ee0a8 req-070614d1-30e7-47f8-84bb-204873492f96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.748 239969 DEBUG nova.network.neutron [req-efe5b5be-7e23-46d1-a583-2e718c9ee0a8 req-070614d1-30e7-47f8-84bb-204873492f96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.751 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Start _get_guest_xml network_info=[{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.756 239969 WARNING nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.760 239969 DEBUG nova.virt.libvirt.host [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.761 239969 DEBUG nova.virt.libvirt.host [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.764 239969 DEBUG nova.virt.libvirt.host [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.764 239969 DEBUG nova.virt.libvirt.host [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.765 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.765 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.766 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.766 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.766 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.766 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.767 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.767 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.767 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.767 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.768 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.768 239969 DEBUG nova.virt.hardware [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:56:32 compute-0 nova_compute[239965]: 2026-01-26 15:56:32.771 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:33 compute-0 ceph-mon[75140]: pgmap v1298: 305 pgs: 305 active+clean; 372 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 9.9 MiB/s wr, 254 op/s
Jan 26 15:56:33 compute-0 kernel: tap1c2767ea-eb (unregistering): left promiscuous mode
Jan 26 15:56:33 compute-0 NetworkManager[48954]: <info>  [1769442993.1931] device (tap1c2767ea-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:56:33 compute-0 ovn_controller[146046]: 2026-01-26T15:56:33Z|00291|binding|INFO|Releasing lport 1c2767ea-eb7b-4942-a452-31fe1017e883 from this chassis (sb_readonly=0)
Jan 26 15:56:33 compute-0 ovn_controller[146046]: 2026-01-26T15:56:33Z|00292|binding|INFO|Setting lport 1c2767ea-eb7b-4942-a452-31fe1017e883 down in Southbound
Jan 26 15:56:33 compute-0 ovn_controller[146046]: 2026-01-26T15:56:33Z|00293|binding|INFO|Removing iface tap1c2767ea-eb ovn-installed in OVS
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.200 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.217 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:cd:80 10.100.0.4'], port_security=['fa:16:3e:3a:cd:80 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd8c8962d-c696-4ea6-826d-466a804a7770', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b303275701ea4029a4a744bf25ce8726', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbe71d43-f65e-47cf-a850-0e359656056c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6a88dad-0ef4-4088-9429-227b0056ef4f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1c2767ea-eb7b-4942-a452-31fe1017e883) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.219 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1c2767ea-eb7b-4942-a452-31fe1017e883 in datapath 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c unbound from our chassis
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.220 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.224 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.240 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c1f07bb1-b59a-4b4b-96b0-5968ebd7fd62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:33 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Deactivated successfully.
Jan 26 15:56:33 compute-0 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000027.scope: Consumed 14.630s CPU time.
Jan 26 15:56:33 compute-0 systemd-machined[208061]: Machine qemu-44-instance-00000027 terminated.
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.274 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b8ea46d8-1a8e-4a14-bad1-fea733aafff4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.279 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[065c9c98-8c1c-499e-97db-2d5a0bb9d5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.306 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[29e72fbe-09d2-4f79-8c79-881f5d85d6db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.323 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f6548cb5-a4df-4777-b9c5-8dc762e2496d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99f86ca1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:cf:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 9, 'rx_bytes': 784, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445516, 'reachable_time': 19368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278813, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.339 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7172d8c5-299e-4305-8cfa-c85d88a6dd4c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445530, 'tstamp': 445530}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278814, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445533, 'tstamp': 445533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278814, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.341 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99f86ca1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.343 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.346 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.347 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99f86ca1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.348 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.348 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99f86ca1-70, col_values=(('external_ids', {'iface-id': '8cf3bd60-3f77-4742-a649-f7b9bc748f38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:33.349 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3475904285' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.375 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.399 239969 DEBUG nova.storage.rbd_utils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.403 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.433 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.569 239969 DEBUG nova.compute.manager [req-6cd20c0c-c560-476d-b011-7d885447c400 req-cc54c6ed-7ae5-4363-81ce-2abc14fefd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Received event network-vif-unplugged-23122a0c-9643-40ee-8c47-11f5b18ef522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.570 239969 DEBUG oslo_concurrency.lockutils [req-6cd20c0c-c560-476d-b011-7d885447c400 req-cc54c6ed-7ae5-4363-81ce-2abc14fefd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.570 239969 DEBUG oslo_concurrency.lockutils [req-6cd20c0c-c560-476d-b011-7d885447c400 req-cc54c6ed-7ae5-4363-81ce-2abc14fefd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.570 239969 DEBUG oslo_concurrency.lockutils [req-6cd20c0c-c560-476d-b011-7d885447c400 req-cc54c6ed-7ae5-4363-81ce-2abc14fefd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.571 239969 DEBUG nova.compute.manager [req-6cd20c0c-c560-476d-b011-7d885447c400 req-cc54c6ed-7ae5-4363-81ce-2abc14fefd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] No waiting events found dispatching network-vif-unplugged-23122a0c-9643-40ee-8c47-11f5b18ef522 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.571 239969 DEBUG nova.compute.manager [req-6cd20c0c-c560-476d-b011-7d885447c400 req-cc54c6ed-7ae5-4363-81ce-2abc14fefd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Received event network-vif-unplugged-23122a0c-9643-40ee-8c47-11f5b18ef522 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:56:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1299: 305 pgs: 305 active+clean; 334 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.4 MiB/s wr, 331 op/s
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.880 239969 INFO nova.virt.libvirt.driver [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance shutdown successfully after 13 seconds.
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.887 239969 INFO nova.virt.libvirt.driver [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance destroyed successfully.
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.888 239969 DEBUG nova.objects.instance [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'numa_topology' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.906 239969 DEBUG nova.compute.manager [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.907 239969 DEBUG nova.network.neutron [-] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.971 239969 INFO nova.compute.manager [-] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Took 2.08 seconds to deallocate network for instance.
Jan 26 15:56:33 compute-0 nova_compute[239965]: 2026-01-26 15:56:33.985 239969 DEBUG oslo_concurrency.lockutils [None req-6aa61d12-95ab-4eed-9559-567d9d488195 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891635234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.013 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.015 239969 DEBUG nova.virt.libvirt.vif [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044484805',display_name='tempest-tempest.common.compute-instance-2044484805',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044484805',id=43,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-lr8w7kzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=d49e7eb2-dcdb-4410-a016-42e58d1d1416,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.015 239969 DEBUG nova.network.os_vif_util [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.016 239969 DEBUG nova.network.os_vif_util [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:54:a4,bridge_name='br-int',has_traffic_filtering=True,id=32baa4e5-2605-4a3f-8023-6a80ea71793d,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32baa4e5-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.018 239969 DEBUG nova.objects.instance [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_devices' on Instance uuid d49e7eb2-dcdb-4410-a016-42e58d1d1416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3475904285' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/891635234' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.044 239969 DEBUG oslo_concurrency.lockutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.045 239969 DEBUG oslo_concurrency.lockutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.048 239969 DEBUG nova.compute.manager [req-494bc5f9-bf84-4192-8ba8-c5714632efab req-061a3ff1-6c0e-46d7-9d51-7081853456f9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Received event network-vif-deleted-23122a0c-9643-40ee-8c47-11f5b18ef522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.060 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <uuid>d49e7eb2-dcdb-4410-a016-42e58d1d1416</uuid>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <name>instance-0000002b</name>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <nova:name>tempest-tempest.common.compute-instance-2044484805</nova:name>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:56:32</nova:creationTime>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <nova:port uuid="32baa4e5-2605-4a3f-8023-6a80ea71793d">
Jan 26 15:56:34 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <entry name="serial">d49e7eb2-dcdb-4410-a016-42e58d1d1416</entry>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <entry name="uuid">d49e7eb2-dcdb-4410-a016-42e58d1d1416</entry>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk">
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk.config">
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:34 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:b7:54:a4"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <target dev="tap32baa4e5-26"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/console.log" append="off"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:56:34 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:56:34 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:34 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:34 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:34 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.061 239969 DEBUG nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Preparing to wait for external event network-vif-plugged-32baa4e5-2605-4a3f-8023-6a80ea71793d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.062 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.062 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.062 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.063 239969 DEBUG nova.virt.libvirt.vif [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044484805',display_name='tempest-tempest.common.compute-instance-2044484805',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044484805',id=43,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-lr8w7kzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=d49e7eb2-dcdb-4410-a016-42e58d1d1416,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.063 239969 DEBUG nova.network.os_vif_util [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.064 239969 DEBUG nova.network.os_vif_util [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:54:a4,bridge_name='br-int',has_traffic_filtering=True,id=32baa4e5-2605-4a3f-8023-6a80ea71793d,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32baa4e5-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.064 239969 DEBUG os_vif [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:54:a4,bridge_name='br-int',has_traffic_filtering=True,id=32baa4e5-2605-4a3f-8023-6a80ea71793d,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32baa4e5-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.065 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.066 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.069 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.069 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap32baa4e5-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.070 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap32baa4e5-26, col_values=(('external_ids', {'iface-id': '32baa4e5-2605-4a3f-8023-6a80ea71793d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:54:a4', 'vm-uuid': 'd49e7eb2-dcdb-4410-a016-42e58d1d1416'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:34 compute-0 NetworkManager[48954]: <info>  [1769442994.0724] manager: (tap32baa4e5-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.074 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.079 239969 INFO os_vif [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:54:a4,bridge_name='br-int',has_traffic_filtering=True,id=32baa4e5-2605-4a3f-8023-6a80ea71793d,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32baa4e5-26')
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.134 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.134 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.134 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:b7:54:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.135 239969 INFO nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Using config drive
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.158 239969 DEBUG nova.storage.rbd_utils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.266 239969 DEBUG oslo_concurrency.processutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.408 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Updating instance_info_cache with network_info: [{"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.435 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.436 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.437 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.437 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.458 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.494 239969 INFO nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Creating config drive at /var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/disk.config
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.501 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2beb8va5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.647 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2beb8va5" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.674 239969 DEBUG nova.storage.rbd_utils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.680 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/disk.config d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.832 239969 DEBUG oslo_concurrency.processutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/disk.config d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.833 239969 INFO nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Deleting local config drive /var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/disk.config because it was imported into RBD.
Jan 26 15:56:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1732093157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:34 compute-0 NetworkManager[48954]: <info>  [1769442994.9076] manager: (tap32baa4e5-26): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Jan 26 15:56:34 compute-0 kernel: tap32baa4e5-26: entered promiscuous mode
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.917 239969 DEBUG oslo_concurrency.processutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:34 compute-0 ovn_controller[146046]: 2026-01-26T15:56:34Z|00294|binding|INFO|Claiming lport 32baa4e5-2605-4a3f-8023-6a80ea71793d for this chassis.
Jan 26 15:56:34 compute-0 ovn_controller[146046]: 2026-01-26T15:56:34Z|00295|binding|INFO|32baa4e5-2605-4a3f-8023-6a80ea71793d: Claiming fa:16:3e:b7:54:a4 10.100.0.11
Jan 26 15:56:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:34.933 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:54:a4 10.100.0.11'], port_security=['fa:16:3e:b7:54:a4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd49e7eb2-dcdb-4410-a016-42e58d1d1416', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be1b617b-295a-4675-91ed-17bc8ca1fead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=32baa4e5-2605-4a3f-8023-6a80ea71793d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:34.935 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 32baa4e5-2605-4a3f-8023-6a80ea71793d in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.934 239969 DEBUG nova.compute.provider_tree [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:34.936 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:56:34 compute-0 systemd-udevd[278964]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.952 239969 DEBUG nova.scheduler.client.report [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:34.953 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b75d6ac5-5849-4a48-bdac-f1bdb32ee096]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:34.955 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap91bcac0a-11 in ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:56:34 compute-0 systemd-machined[208061]: New machine qemu-47-instance-0000002b.
Jan 26 15:56:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:34.958 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap91bcac0a-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:56:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:34.959 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e2036832-1684-4f4d-a6ce-9e0dbc4ea0a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:34.961 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da078219-5010-4599-8b5a-17cc74876b7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:34 compute-0 NetworkManager[48954]: <info>  [1769442994.9638] device (tap32baa4e5-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:34 compute-0 NetworkManager[48954]: <info>  [1769442994.9649] device (tap32baa4e5-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:34 compute-0 systemd[1]: Started Virtual Machine qemu-47-instance-0000002b.
Jan 26 15:56:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:34.979 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[cd00b5f3-a960-4b29-85dc-8aaf65b9973f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.981 239969 DEBUG oslo_concurrency.lockutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.985 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.985 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.986 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:56:34 compute-0 nova_compute[239965]: 2026-01-26 15:56:34.986 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.005 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[40d1d89a-0f98-45bf-9af9-3ccdb0f92ee2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 ovn_controller[146046]: 2026-01-26T15:56:35Z|00296|binding|INFO|Setting lport 32baa4e5-2605-4a3f-8023-6a80ea71793d ovn-installed in OVS
Jan 26 15:56:35 compute-0 ovn_controller[146046]: 2026-01-26T15:56:35Z|00297|binding|INFO|Setting lport 32baa4e5-2605-4a3f-8023-6a80ea71793d up in Southbound
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.020 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:35 compute-0 ceph-mon[75140]: pgmap v1299: 305 pgs: 305 active+clean; 334 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 9.4 MiB/s wr, 331 op/s
Jan 26 15:56:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1732093157' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.036 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8e9ddd8c-24ad-40fe-abb8-3bf49e09746e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.043 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[98dce431-2da0-4a04-968f-574509717ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 NetworkManager[48954]: <info>  [1769442995.0447] manager: (tap91bcac0a-10): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.047 239969 INFO nova.scheduler.client.report [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Deleted allocations for instance 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.082 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[73d4c0ec-f3f7-4f93-8bfe-0be554f35561]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.088 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4016dc64-c0ec-4a90-b0ee-26904689956f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 NetworkManager[48954]: <info>  [1769442995.1165] device (tap91bcac0a-10): carrier: link connected
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.122 239969 DEBUG oslo_concurrency.lockutils [None req-5feaa471-fff2-4089-a0f6-cd3f7c27cef3 8b6639ef07f74c9ebad76dffa361ec4e 1b3f0866575347bd80cdc80b692f07f5 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.123 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8bd66b-3417-4090-a3d7-a681a594234e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.125 239969 DEBUG nova.network.neutron [req-efe5b5be-7e23-46d1-a583-2e718c9ee0a8 req-070614d1-30e7-47f8-84bb-204873492f96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updated VIF entry in instance network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.126 239969 DEBUG nova.network.neutron [req-efe5b5be-7e23-46d1-a583-2e718c9ee0a8 req-070614d1-30e7-47f8-84bb-204873492f96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.142 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1712de-82a3-4e33-8439-4df16ace56e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448592, 'reachable_time': 43955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279021, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.157 239969 DEBUG oslo_concurrency.lockutils [req-efe5b5be-7e23-46d1-a583-2e718c9ee0a8 req-070614d1-30e7-47f8-84bb-204873492f96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.158 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c384fe-f3ba-48c6-8d22-6bd6e268a3e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:d619'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448592, 'tstamp': 448592}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279022, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.175 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0605be-b045-4ad2-a100-1c6b336bdd2b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448592, 'reachable_time': 43955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279023, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.208 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[37819414-8c52-4ad6-b7f7-c0358f54748f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.225 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.272 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[02392020-41d6-4ad4-a343-97d0deeb071e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.274 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.274 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.274 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:35 compute-0 NetworkManager[48954]: <info>  [1769442995.2771] manager: (tap91bcac0a-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.276 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:35 compute-0 kernel: tap91bcac0a-10: entered promiscuous mode
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.283 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.284 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:35 compute-0 ovn_controller[146046]: 2026-01-26T15:56:35Z|00298|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.312 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.313 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/91bcac0a-1926-4861-88ab-ae3c06f7e57e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/91bcac0a-1926-4861-88ab-ae3c06f7e57e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.314 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[81fb3caa-e4d9-4516-b2c9-754f08fee777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.314 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/91bcac0a-1926-4861-88ab-ae3c06f7e57e.pid.haproxy
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:56:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:35.315 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'env', 'PROCESS_TAG=haproxy-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/91bcac0a-1926-4861-88ab-ae3c06f7e57e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.513 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442995.5108273, d49e7eb2-dcdb-4410-a016-42e58d1d1416 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.513 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] VM Started (Lifecycle Event)
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.517 239969 DEBUG nova.objects.instance [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'flavor' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.543 239969 DEBUG oslo_concurrency.lockutils [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.543 239969 DEBUG oslo_concurrency.lockutils [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquired lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.543 239969 DEBUG nova.network.neutron [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.544 239969 DEBUG nova.objects.instance [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'info_cache' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.546 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.551 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442995.5112703, d49e7eb2-dcdb-4410-a016-42e58d1d1416 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.552 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] VM Paused (Lifecycle Event)
Jan 26 15:56:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1458020635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.588 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.591 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.598 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.612 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.695 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.696 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.700 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.700 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000028 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.704 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.704 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.709 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.709 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000029 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 325 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 223 op/s
Jan 26 15:56:35 compute-0 podman[279105]: 2026-01-26 15:56:35.712629936 +0000 UTC m=+0.028777437 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.943 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.944 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3605MB free_disk=59.82476106565446GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.945 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:35 compute-0 nova_compute[239965]: 2026-01-26 15:56:35.945 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.051 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance d8c8962d-c696-4ea6-826d-466a804a7770 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.051 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 6f15e546-481b-42eb-ad9c-7c40e8bfe459 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.051 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.052 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance d49e7eb2-dcdb-4410-a016-42e58d1d1416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.052 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.052 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1088MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:56:36 compute-0 podman[279105]: 2026-01-26 15:56:36.059659063 +0000 UTC m=+0.375806534 container create 6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 15:56:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1458020635' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:36 compute-0 ceph-mon[75140]: pgmap v1300: 305 pgs: 305 active+clean; 325 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.5 MiB/s wr, 223 op/s
Jan 26 15:56:36 compute-0 systemd[1]: Started libpod-conmon-6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f.scope.
Jan 26 15:56:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/504d325a10754e439dbdb5d417913becec40f93715783f0ee81ad258bb5de625/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:36 compute-0 podman[279105]: 2026-01-26 15:56:36.165585404 +0000 UTC m=+0.481732905 container init 6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.167 239969 DEBUG nova.compute.manager [req-56b59cfa-9fd9-4865-898d-37a0639c7e2d req-8559b00a-7f64-43bf-a994-d2a4d0f62fd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Received event network-vif-plugged-23122a0c-9643-40ee-8c47-11f5b18ef522 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.168 239969 DEBUG oslo_concurrency.lockutils [req-56b59cfa-9fd9-4865-898d-37a0639c7e2d req-8559b00a-7f64-43bf-a994-d2a4d0f62fd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.168 239969 DEBUG oslo_concurrency.lockutils [req-56b59cfa-9fd9-4865-898d-37a0639c7e2d req-8559b00a-7f64-43bf-a994-d2a4d0f62fd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.168 239969 DEBUG oslo_concurrency.lockutils [req-56b59cfa-9fd9-4865-898d-37a0639c7e2d req-8559b00a-7f64-43bf-a994-d2a4d0f62fd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.169 239969 DEBUG nova.compute.manager [req-56b59cfa-9fd9-4865-898d-37a0639c7e2d req-8559b00a-7f64-43bf-a994-d2a4d0f62fd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] No waiting events found dispatching network-vif-plugged-23122a0c-9643-40ee-8c47-11f5b18ef522 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.169 239969 WARNING nova.compute.manager [req-56b59cfa-9fd9-4865-898d-37a0639c7e2d req-8559b00a-7f64-43bf-a994-d2a4d0f62fd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Received unexpected event network-vif-plugged-23122a0c-9643-40ee-8c47-11f5b18ef522 for instance with vm_state deleted and task_state None.
Jan 26 15:56:36 compute-0 podman[279105]: 2026-01-26 15:56:36.172097384 +0000 UTC m=+0.488244855 container start 6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 15:56:36 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[279120]: [NOTICE]   (279124) : New worker (279126) forked
Jan 26 15:56:36 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[279120]: [NOTICE]   (279124) : Loading success.
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.228 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1692209361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.887 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.897 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.925 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.949 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.950 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.964 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.965 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:36 compute-0 nova_compute[239965]: 2026-01-26 15:56:36.981 239969 DEBUG nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.048 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.049 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.058 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.060 239969 INFO nova.compute.claims [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:56:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1692209361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.118 239969 DEBUG nova.network.neutron [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Updating instance_info_cache with network_info: [{"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.139 239969 DEBUG oslo_concurrency.lockutils [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Releasing lock "refresh_cache-d8c8962d-c696-4ea6-826d-466a804a7770" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.170 239969 INFO nova.virt.libvirt.driver [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance destroyed successfully.
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.174 239969 DEBUG nova.objects.instance [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'numa_topology' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.189 239969 DEBUG nova.objects.instance [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'resources' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.204 239969 DEBUG nova.virt.libvirt.vif [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:55:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-2052998401',display_name='tempest-ListServerFiltersTestJSON-instance-2052998401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-2052998401',id=39,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-xtx413wp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:33Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=d8c8962d-c696-4ea6-826d-466a804a7770,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.204 239969 DEBUG nova.network.os_vif_util [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.205 239969 DEBUG nova.network.os_vif_util [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.206 239969 DEBUG os_vif [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.208 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.208 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c2767ea-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.215 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.217 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.221 239969 INFO os_vif [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb')
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.228 239969 DEBUG nova.virt.libvirt.driver [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Start _get_guest_xml network_info=[{"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.237 239969 WARNING nova.virt.libvirt.driver [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.242 239969 DEBUG nova.virt.libvirt.host [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.243 239969 DEBUG nova.virt.libvirt.host [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.248 239969 DEBUG nova.virt.libvirt.host [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.249 239969 DEBUG nova.virt.libvirt.host [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.250 239969 DEBUG nova.virt.libvirt.driver [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.250 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.250 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.251 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.251 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.251 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.252 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.252 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.253 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.253 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.253 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.254 239969 DEBUG nova.virt.hardware [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.254 239969 DEBUG nova.objects.instance [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.273 239969 DEBUG oslo_concurrency.processutils [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.311 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 305 active+clean; 326 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Jan 26 15:56:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4000701592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:37 compute-0 nova_compute[239965]: 2026-01-26 15:56:37.963 239969 DEBUG oslo_concurrency.processutils [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/586709570' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.010 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.699s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.015 239969 DEBUG oslo_concurrency.processutils [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.047 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.049 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.050 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.051 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.057 239969 DEBUG nova.compute.provider_tree [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.075 239969 DEBUG nova.scheduler.client.report [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:38 compute-0 ceph-mon[75140]: pgmap v1301: 305 pgs: 305 active+clean; 326 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.6 MiB/s wr, 201 op/s
Jan 26 15:56:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4000701592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/586709570' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.097 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.099 239969 DEBUG nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.152 239969 DEBUG nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.153 239969 DEBUG nova.network.neutron [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.181 239969 INFO nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.218 239969 DEBUG nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.310 239969 DEBUG nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.312 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.313 239969 INFO nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Creating image(s)
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.346 239969 DEBUG nova.storage.rbd_utils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] rbd image fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.384 239969 DEBUG nova.storage.rbd_utils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] rbd image fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.422 239969 DEBUG nova.storage.rbd_utils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] rbd image fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.428 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.505 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.506 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.507 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.507 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.533 239969 DEBUG nova.storage.rbd_utils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] rbd image fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.542 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821414568' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.690 239969 DEBUG oslo_concurrency.processutils [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.692 239969 DEBUG nova.virt.libvirt.vif [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:55:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-2052998401',display_name='tempest-ListServerFiltersTestJSON-instance-2052998401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-2052998401',id=39,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-xtx413wp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:33Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=d8c8962d-c696-4ea6-826d-466a804a7770,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.692 239969 DEBUG nova.network.os_vif_util [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.693 239969 DEBUG nova.network.os_vif_util [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.694 239969 DEBUG nova.objects.instance [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'pci_devices' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.716 239969 DEBUG nova.virt.libvirt.driver [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <uuid>d8c8962d-c696-4ea6-826d-466a804a7770</uuid>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <name>instance-00000027</name>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <nova:name>tempest-ListServerFiltersTestJSON-instance-2052998401</nova:name>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:56:37</nova:creationTime>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <nova:user uuid="e19e9687c7784557a28b0297dc26ff06">tempest-ListServerFiltersTestJSON-31069689-project-member</nova:user>
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <nova:project uuid="b303275701ea4029a4a744bf25ce8726">tempest-ListServerFiltersTestJSON-31069689</nova:project>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <nova:port uuid="1c2767ea-eb7b-4942-a452-31fe1017e883">
Jan 26 15:56:38 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <entry name="serial">d8c8962d-c696-4ea6-826d-466a804a7770</entry>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <entry name="uuid">d8c8962d-c696-4ea6-826d-466a804a7770</entry>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d8c8962d-c696-4ea6-826d-466a804a7770_disk">
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d8c8962d-c696-4ea6-826d-466a804a7770_disk.config">
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:38 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:3a:cd:80"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <target dev="tap1c2767ea-eb"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770/console.log" append="off"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:56:38 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:56:38 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:38 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:38 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:38 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.717 239969 DEBUG nova.virt.libvirt.driver [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.718 239969 DEBUG nova.virt.libvirt.driver [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] skipping disk for instance-00000027 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.719 239969 DEBUG nova.virt.libvirt.vif [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:55:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-2052998401',display_name='tempest-ListServerFiltersTestJSON-instance-2052998401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-2052998401',id=39,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-xtx413wp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:33Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=d8c8962d-c696-4ea6-826d-466a804a7770,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.719 239969 DEBUG nova.network.os_vif_util [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.719 239969 DEBUG nova.network.os_vif_util [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.720 239969 DEBUG os_vif [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.720 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.721 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.721 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.731 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.732 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c2767ea-eb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.733 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c2767ea-eb, col_values=(('external_ids', {'iface-id': '1c2767ea-eb7b-4942-a452-31fe1017e883', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:cd:80', 'vm-uuid': 'd8c8962d-c696-4ea6-826d-466a804a7770'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:38 compute-0 NetworkManager[48954]: <info>  [1769442998.7365] manager: (tap1c2767ea-eb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.737 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.740 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.745 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.746 239969 INFO os_vif [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb')
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.802 239969 DEBUG nova.policy [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '744e9066af094bd8b578059050760458', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e674ff15ecb74c11bb2de410f761c546', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:56:38 compute-0 kernel: tap1c2767ea-eb: entered promiscuous mode
Jan 26 15:56:38 compute-0 NetworkManager[48954]: <info>  [1769442998.8507] manager: (tap1c2767ea-eb): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Jan 26 15:56:38 compute-0 ovn_controller[146046]: 2026-01-26T15:56:38Z|00299|binding|INFO|Claiming lport 1c2767ea-eb7b-4942-a452-31fe1017e883 for this chassis.
Jan 26 15:56:38 compute-0 ovn_controller[146046]: 2026-01-26T15:56:38Z|00300|binding|INFO|1c2767ea-eb7b-4942-a452-31fe1017e883: Claiming fa:16:3e:3a:cd:80 10.100.0.4
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.858 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:38.871 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:cd:80 10.100.0.4'], port_security=['fa:16:3e:3a:cd:80 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd8c8962d-c696-4ea6-826d-466a804a7770', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b303275701ea4029a4a744bf25ce8726', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbe71d43-f65e-47cf-a850-0e359656056c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6a88dad-0ef4-4088-9429-227b0056ef4f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1c2767ea-eb7b-4942-a452-31fe1017e883) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:38.873 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1c2767ea-eb7b-4942-a452-31fe1017e883 in datapath 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c bound to our chassis
Jan 26 15:56:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:38.875 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c
Jan 26 15:56:38 compute-0 ovn_controller[146046]: 2026-01-26T15:56:38Z|00301|binding|INFO|Setting lport 1c2767ea-eb7b-4942-a452-31fe1017e883 ovn-installed in OVS
Jan 26 15:56:38 compute-0 ovn_controller[146046]: 2026-01-26T15:56:38Z|00302|binding|INFO|Setting lport 1c2767ea-eb7b-4942-a452-31fe1017e883 up in Southbound
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.880 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:38.892 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6e4b8d0c-0b84-4755-86c3-341ee09227fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:38 compute-0 systemd-udevd[279350]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:38 compute-0 systemd-machined[208061]: New machine qemu-48-instance-00000027.
Jan 26 15:56:38 compute-0 systemd[1]: Started Virtual Machine qemu-48-instance-00000027.
Jan 26 15:56:38 compute-0 NetworkManager[48954]: <info>  [1769442998.9165] device (tap1c2767ea-eb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:38 compute-0 NetworkManager[48954]: <info>  [1769442998.9171] device (tap1c2767ea-eb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:38 compute-0 nova_compute[239965]: 2026-01-26 15:56:38.922 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.380s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:38.934 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8401dafb-3ec7-4f4d-8d7c-9a28c5f89ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:38.939 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd6b8b2-25fd-4179-a1cf-2f005db6026d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:38.989 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[32d95458-70f6-406d-9973-d15008c708e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:39.009 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b8073345-0eed-4b24-a794-d5ef649d1ba4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99f86ca1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:cf:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 784, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445516, 'reachable_time': 19368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279388, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.026 239969 DEBUG nova.storage.rbd_utils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] resizing rbd image fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:56:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:39.035 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a0f61704-82dd-476d-a42e-37f922ead72c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445530, 'tstamp': 445530}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279400, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445533, 'tstamp': 445533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279400, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:39.038 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99f86ca1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:39.044 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99f86ca1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:39.045 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:39.046 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99f86ca1-70, col_values=(('external_ids', {'iface-id': '8cf3bd60-3f77-4742-a649-f7b9bc748f38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:39.047 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1821414568' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.152 239969 DEBUG nova.objects.instance [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lazy-loading 'migration_context' on Instance uuid fcf10cc3-f719-42c2-8e4c-a1bd5264de60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.166 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.167 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Ensure instance console log exists: /var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.167 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.168 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.168 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.713 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for d8c8962d-c696-4ea6-826d-466a804a7770 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.713 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442999.7123666, d8c8962d-c696-4ea6-826d-466a804a7770 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.713 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] VM Resumed (Lifecycle Event)
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.716 239969 DEBUG nova.compute.manager [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.720 239969 INFO nova.virt.libvirt.driver [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance rebooted successfully.
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.720 239969 DEBUG nova.compute.manager [None req-89cf84c3-c841-4eda-a905-f9a0064078ac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.748 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.754 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.781 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.781 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769442999.7134867, d8c8962d-c696-4ea6-826d-466a804a7770 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.781 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] VM Started (Lifecycle Event)
Jan 26 15:56:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 326 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 551 KiB/s wr, 129 op/s
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.813 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:39 compute-0 nova_compute[239965]: 2026-01-26 15:56:39.818 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:40 compute-0 ceph-mon[75140]: pgmap v1302: 305 pgs: 305 active+clean; 326 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 551 KiB/s wr, 129 op/s
Jan 26 15:56:40 compute-0 nova_compute[239965]: 2026-01-26 15:56:40.126 239969 DEBUG nova.compute.manager [req-70f72b27-fbd1-4958-b3a8-17ffbc189b54 req-612b98f2-572f-40c0-a229-d3762b5285f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-unplugged-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:40 compute-0 nova_compute[239965]: 2026-01-26 15:56:40.126 239969 DEBUG oslo_concurrency.lockutils [req-70f72b27-fbd1-4958-b3a8-17ffbc189b54 req-612b98f2-572f-40c0-a229-d3762b5285f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:40 compute-0 nova_compute[239965]: 2026-01-26 15:56:40.127 239969 DEBUG oslo_concurrency.lockutils [req-70f72b27-fbd1-4958-b3a8-17ffbc189b54 req-612b98f2-572f-40c0-a229-d3762b5285f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:40 compute-0 nova_compute[239965]: 2026-01-26 15:56:40.127 239969 DEBUG oslo_concurrency.lockutils [req-70f72b27-fbd1-4958-b3a8-17ffbc189b54 req-612b98f2-572f-40c0-a229-d3762b5285f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:40 compute-0 nova_compute[239965]: 2026-01-26 15:56:40.127 239969 DEBUG nova.compute.manager [req-70f72b27-fbd1-4958-b3a8-17ffbc189b54 req-612b98f2-572f-40c0-a229-d3762b5285f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] No waiting events found dispatching network-vif-unplugged-1c2767ea-eb7b-4942-a452-31fe1017e883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:40 compute-0 nova_compute[239965]: 2026-01-26 15:56:40.128 239969 WARNING nova.compute.manager [req-70f72b27-fbd1-4958-b3a8-17ffbc189b54 req-612b98f2-572f-40c0-a229-d3762b5285f8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received unexpected event network-vif-unplugged-1c2767ea-eb7b-4942-a452-31fe1017e883 for instance with vm_state active and task_state None.
Jan 26 15:56:40 compute-0 nova_compute[239965]: 2026-01-26 15:56:40.229 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:40 compute-0 nova_compute[239965]: 2026-01-26 15:56:40.700 239969 DEBUG nova.network.neutron [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Successfully created port: df8d11ce-fef0-411d-abb7-9e35b7c01592 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:56:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 344 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 135 op/s
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.406 239969 DEBUG nova.network.neutron [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Successfully updated port: df8d11ce-fef0-411d-abb7-9e35b7c01592 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.424 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.424 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquired lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.424 239969 DEBUG nova.network.neutron [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.638 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.639 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.640 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.641 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.641 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] No waiting events found dispatching network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.642 239969 WARNING nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received unexpected event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 for instance with vm_state active and task_state None.
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.642 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-vif-plugged-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.642 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.642 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.643 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.643 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Processing event network-vif-plugged-32baa4e5-2605-4a3f-8023-6a80ea71793d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.643 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-vif-plugged-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.644 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.644 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.644 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.645 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] No waiting events found dispatching network-vif-plugged-32baa4e5-2605-4a3f-8023-6a80ea71793d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.645 239969 WARNING nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received unexpected event network-vif-plugged-32baa4e5-2605-4a3f-8023-6a80ea71793d for instance with vm_state building and task_state spawning.
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.645 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.647 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.648 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.648 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.648 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] No waiting events found dispatching network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.649 239969 WARNING nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received unexpected event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 for instance with vm_state active and task_state None.
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.649 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.649 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.649 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.650 239969 DEBUG oslo_concurrency.lockutils [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.650 239969 DEBUG nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] No waiting events found dispatching network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.650 239969 WARNING nova.compute.manager [req-52166bbb-742f-4b5d-8596-5099056523ed req-a46cdc87-1d38-412e-b6f2-aa18705ec411 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received unexpected event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 for instance with vm_state active and task_state None.
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.653 239969 DEBUG nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Instance event wait completed in 7 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.654 239969 DEBUG nova.compute.manager [req-202050d6-9c8d-4458-b67d-b31ea440dc9d req-c42106ee-667b-4843-b05d-86960469a423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-changed-df8d11ce-fef0-411d-abb7-9e35b7c01592 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.654 239969 DEBUG nova.compute.manager [req-202050d6-9c8d-4458-b67d-b31ea440dc9d req-c42106ee-667b-4843-b05d-86960469a423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Refreshing instance network info cache due to event network-changed-df8d11ce-fef0-411d-abb7-9e35b7c01592. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.655 239969 DEBUG oslo_concurrency.lockutils [req-202050d6-9c8d-4458-b67d-b31ea440dc9d req-c42106ee-667b-4843-b05d-86960469a423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.662 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443002.6617928, d49e7eb2-dcdb-4410-a016-42e58d1d1416 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.665 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] VM Resumed (Lifecycle Event)
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.669 239969 DEBUG nova.network.neutron [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.672 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.677 239969 INFO nova.virt.libvirt.driver [-] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Instance spawned successfully.
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.678 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.700 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.707 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.712 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.712 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.713 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.713 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.714 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.714 239969 DEBUG nova.virt.libvirt.driver [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.750 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.792 239969 INFO nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Took 16.34 seconds to spawn the instance on the hypervisor.
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.792 239969 DEBUG nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.860 239969 INFO nova.compute.manager [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Took 17.68 seconds to build instance.
Jan 26 15:56:42 compute-0 nova_compute[239965]: 2026-01-26 15:56:42.879 239969 DEBUG oslo_concurrency.lockutils [None req-5a6aea3e-e020-40e8-b5a1-b8e8ff591ccb 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:42 compute-0 ceph-mon[75140]: pgmap v1303: 305 pgs: 305 active+clean; 344 MiB data, 583 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 135 op/s
Jan 26 15:56:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Jan 26 15:56:43 compute-0 nova_compute[239965]: 2026-01-26 15:56:43.857 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:44 compute-0 ceph-mon[75140]: pgmap v1304: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 176 op/s
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.061 239969 DEBUG nova.network.neutron [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Updating instance_info_cache with network_info: [{"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.082 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Releasing lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.083 239969 DEBUG nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Instance network_info: |[{"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.083 239969 DEBUG oslo_concurrency.lockutils [req-202050d6-9c8d-4458-b67d-b31ea440dc9d req-c42106ee-667b-4843-b05d-86960469a423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.084 239969 DEBUG nova.network.neutron [req-202050d6-9c8d-4458-b67d-b31ea440dc9d req-c42106ee-667b-4843-b05d-86960469a423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Refreshing network info cache for port df8d11ce-fef0-411d-abb7-9e35b7c01592 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.088 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Start _get_guest_xml network_info=[{"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.093 239969 WARNING nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.100 239969 DEBUG nova.virt.libvirt.host [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.101 239969 DEBUG nova.virt.libvirt.host [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.111 239969 DEBUG nova.virt.libvirt.host [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.112 239969 DEBUG nova.virt.libvirt.host [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.112 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.112 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.113 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.113 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.113 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.114 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.114 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.114 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.115 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.115 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.116 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.116 239969 DEBUG nova.virt.hardware [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.120 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.230 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1835955227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.795 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.829 239969 DEBUG nova.storage.rbd_utils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] rbd image fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.836 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.872 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769442990.8151443, 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.872 239969 INFO nova.compute.manager [-] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] VM Stopped (Lifecycle Event)
Jan 26 15:56:45 compute-0 nova_compute[239965]: 2026-01-26 15:56:45.898 239969 DEBUG nova.compute.manager [None req-5d16b2c5-fcfc-477d-8ccc-74019a2a23d9 - - - - - -] [instance: 24b6ebeb-ffb7-4ee6-bc0d-9f79e9dff423] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1835955227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:46 compute-0 ovn_controller[146046]: 2026-01-26T15:56:46Z|00303|binding|INFO|Releasing lport 8cf3bd60-3f77-4742-a649-f7b9bc748f38 from this chassis (sb_readonly=0)
Jan 26 15:56:46 compute-0 ovn_controller[146046]: 2026-01-26T15:56:46Z|00304|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.212 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2173074519' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:46 compute-0 NetworkManager[48954]: <info>  [1769443006.5471] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Jan 26 15:56:46 compute-0 NetworkManager[48954]: <info>  [1769443006.5477] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.551 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.563 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.564 239969 DEBUG nova.virt.libvirt.vif [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:56:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1836640296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1836640296',id=44,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e674ff15ecb74c11bb2de410f761c546',ramdisk_id='',reservation_id='r-40uwisnz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1575468404',owner_user_name='tempest-AttachInterfacesV270Test-1575468404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:38Z,user_data=None,user_id='744e9066af094bd8b578059050760458',uuid=fcf10cc3-f719-42c2-8e4c-a1bd5264de60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.565 239969 DEBUG nova.network.os_vif_util [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converting VIF {"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.566 239969 DEBUG nova.network.os_vif_util [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:eb:b8,bridge_name='br-int',has_traffic_filtering=True,id=df8d11ce-fef0-411d-abb7-9e35b7c01592,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf8d11ce-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.567 239969 DEBUG nova.objects.instance [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lazy-loading 'pci_devices' on Instance uuid fcf10cc3-f719-42c2-8e4c-a1bd5264de60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.584 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <uuid>fcf10cc3-f719-42c2-8e4c-a1bd5264de60</uuid>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <name>instance-0000002c</name>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <nova:name>tempest-AttachInterfacesV270Test-server-1836640296</nova:name>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:56:45</nova:creationTime>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <nova:user uuid="744e9066af094bd8b578059050760458">tempest-AttachInterfacesV270Test-1575468404-project-member</nova:user>
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <nova:project uuid="e674ff15ecb74c11bb2de410f761c546">tempest-AttachInterfacesV270Test-1575468404</nova:project>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <nova:port uuid="df8d11ce-fef0-411d-abb7-9e35b7c01592">
Jan 26 15:56:46 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <entry name="serial">fcf10cc3-f719-42c2-8e4c-a1bd5264de60</entry>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <entry name="uuid">fcf10cc3-f719-42c2-8e4c-a1bd5264de60</entry>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk">
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk.config">
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:bd:eb:b8"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <target dev="tapdf8d11ce-fe"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60/console.log" append="off"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:56:46 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:56:46 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:46 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:46 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:46 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.585 239969 DEBUG nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Preparing to wait for external event network-vif-plugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.586 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.586 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.586 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.587 239969 DEBUG nova.virt.libvirt.vif [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:56:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1836640296',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1836640296',id=44,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e674ff15ecb74c11bb2de410f761c546',ramdisk_id='',reservation_id='r-40uwisnz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesV270Test-1575468404',owner_user_name='tempest-AttachInterfacesV270Test-1575468404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:38Z,user_data=None,user_id='744e9066af094bd8b578059050760458',uuid=fcf10cc3-f719-42c2-8e4c-a1bd5264de60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.587 239969 DEBUG nova.network.os_vif_util [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converting VIF {"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.588 239969 DEBUG nova.network.os_vif_util [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:eb:b8,bridge_name='br-int',has_traffic_filtering=True,id=df8d11ce-fef0-411d-abb7-9e35b7c01592,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf8d11ce-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.588 239969 DEBUG os_vif [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:eb:b8,bridge_name='br-int',has_traffic_filtering=True,id=df8d11ce-fef0-411d-abb7-9e35b7c01592,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf8d11ce-fe') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.589 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.590 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.590 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.593 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.593 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf8d11ce-fe, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.593 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdf8d11ce-fe, col_values=(('external_ids', {'iface-id': 'df8d11ce-fef0-411d-abb7-9e35b7c01592', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:eb:b8', 'vm-uuid': 'fcf10cc3-f719-42c2-8e4c-a1bd5264de60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.595 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:46 compute-0 NetworkManager[48954]: <info>  [1769443006.5961] manager: (tapdf8d11ce-fe): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.661 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:46 compute-0 ovn_controller[146046]: 2026-01-26T15:56:46Z|00305|binding|INFO|Releasing lport 8cf3bd60-3f77-4742-a649-f7b9bc748f38 from this chassis (sb_readonly=0)
Jan 26 15:56:46 compute-0 ovn_controller[146046]: 2026-01-26T15:56:46Z|00306|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.663 239969 INFO os_vif [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:eb:b8,bridge_name='br-int',has_traffic_filtering=True,id=df8d11ce-fef0-411d-abb7-9e35b7c01592,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf8d11ce-fe')
Jan 26 15:56:46 compute-0 ovn_controller[146046]: 2026-01-26T15:56:46Z|00307|binding|INFO|Releasing lport 8cf3bd60-3f77-4742-a649-f7b9bc748f38 from this chassis (sb_readonly=0)
Jan 26 15:56:46 compute-0 ovn_controller[146046]: 2026-01-26T15:56:46Z|00308|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.690 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.722 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.722 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.723 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] No VIF found with MAC fa:16:3e:bd:eb:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.724 239969 INFO nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Using config drive
Jan 26 15:56:46 compute-0 nova_compute[239965]: 2026-01-26 15:56:46.758 239969 DEBUG nova.storage.rbd_utils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] rbd image fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:47 compute-0 ceph-mon[75140]: pgmap v1305: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 26 15:56:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2173074519' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.816 239969 INFO nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Creating config drive at /var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60/disk.config
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.821 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbr258vzo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.865 239969 DEBUG nova.network.neutron [req-202050d6-9c8d-4458-b67d-b31ea440dc9d req-c42106ee-667b-4843-b05d-86960469a423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Updated VIF entry in instance network info cache for port df8d11ce-fef0-411d-abb7-9e35b7c01592. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.868 239969 DEBUG nova.network.neutron [req-202050d6-9c8d-4458-b67d-b31ea440dc9d req-c42106ee-667b-4843-b05d-86960469a423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Updating instance_info_cache with network_info: [{"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.889 239969 DEBUG oslo_concurrency.lockutils [req-202050d6-9c8d-4458-b67d-b31ea440dc9d req-c42106ee-667b-4843-b05d-86960469a423 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.958 239969 DEBUG nova.compute.manager [req-19d96927-d6ac-4cf9-8709-085d719ece7e req-411372cc-a9e3-481b-991b-0d9f1d1ab476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.959 239969 DEBUG nova.compute.manager [req-19d96927-d6ac-4cf9-8709-085d719ece7e req-411372cc-a9e3-481b-991b-0d9f1d1ab476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing instance network info cache due to event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.960 239969 DEBUG oslo_concurrency.lockutils [req-19d96927-d6ac-4cf9-8709-085d719ece7e req-411372cc-a9e3-481b-991b-0d9f1d1ab476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.961 239969 DEBUG oslo_concurrency.lockutils [req-19d96927-d6ac-4cf9-8709-085d719ece7e req-411372cc-a9e3-481b-991b-0d9f1d1ab476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.961 239969 DEBUG nova.network.neutron [req-19d96927-d6ac-4cf9-8709-085d719ece7e req-411372cc-a9e3-481b-991b-0d9f1d1ab476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.963 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbr258vzo" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.989 239969 DEBUG nova.storage.rbd_utils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] rbd image fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:47 compute-0 nova_compute[239965]: 2026-01-26 15:56:47.993 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60/disk.config fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.160 239969 DEBUG oslo_concurrency.processutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60/disk.config fcf10cc3-f719-42c2-8e4c-a1bd5264de60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.162 239969 INFO nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Deleting local config drive /var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60/disk.config because it was imported into RBD.
Jan 26 15:56:48 compute-0 ceph-mon[75140]: pgmap v1306: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 26 15:56:48 compute-0 kernel: tapdf8d11ce-fe: entered promiscuous mode
Jan 26 15:56:48 compute-0 NetworkManager[48954]: <info>  [1769443008.2306] manager: (tapdf8d11ce-fe): new Tun device (/org/freedesktop/NetworkManager/Devices/153)
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.234 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:48 compute-0 ovn_controller[146046]: 2026-01-26T15:56:48Z|00309|binding|INFO|Claiming lport df8d11ce-fef0-411d-abb7-9e35b7c01592 for this chassis.
Jan 26 15:56:48 compute-0 ovn_controller[146046]: 2026-01-26T15:56:48Z|00310|binding|INFO|df8d11ce-fef0-411d-abb7-9e35b7c01592: Claiming fa:16:3e:bd:eb:b8 10.100.0.13
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.248 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:eb:b8 10.100.0.13'], port_security=['fa:16:3e:bd:eb:b8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fcf10cc3-f719-42c2-8e4c-a1bd5264de60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e674ff15ecb74c11bb2de410f761c546', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e263bd5-47ac-4dbb-be18-c57841bc13c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cd36b8b-8b6c-4f97-9e9f-a494963538b8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=df8d11ce-fef0-411d-abb7-9e35b7c01592) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.250 156105 INFO neutron.agent.ovn.metadata.agent [-] Port df8d11ce-fef0-411d-abb7-9e35b7c01592 in datapath 9d163776-1a3a-41f1-b832-ea2f79fc91bb bound to our chassis
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.252 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d163776-1a3a-41f1-b832-ea2f79fc91bb
Jan 26 15:56:48 compute-0 ovn_controller[146046]: 2026-01-26T15:56:48Z|00311|binding|INFO|Setting lport df8d11ce-fef0-411d-abb7-9e35b7c01592 ovn-installed in OVS
Jan 26 15:56:48 compute-0 ovn_controller[146046]: 2026-01-26T15:56:48Z|00312|binding|INFO|Setting lport df8d11ce-fef0-411d-abb7-9e35b7c01592 up in Southbound
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.269 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b654336-ca36-4064-a978-7ae9e53a1d0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.270 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d163776-11 in ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.272 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d163776-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.272 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[43d71feb-7e53-4c32-88c8-e677e3846634]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.273 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[21ec42b4-eea3-4818-81ca-4b96cf3c994d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 systemd-udevd[279617]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.290 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a005edc0-790f-4ebb-b1e5-97b283af0e20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 systemd-machined[208061]: New machine qemu-49-instance-0000002c.
Jan 26 15:56:48 compute-0 NetworkManager[48954]: <info>  [1769443008.3024] device (tapdf8d11ce-fe): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:48 compute-0 NetworkManager[48954]: <info>  [1769443008.3030] device (tapdf8d11ce-fe): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:48 compute-0 systemd[1]: Started Virtual Machine qemu-49-instance-0000002c.
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.312 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcac75d-460d-4df9-b8c0-eb02cef44f79]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.347 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[184280f5-ca63-444b-bf61-a8a4e0b015b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 NetworkManager[48954]: <info>  [1769443008.3539] manager: (tap9d163776-10): new Veth device (/org/freedesktop/NetworkManager/Devices/154)
Jan 26 15:56:48 compute-0 systemd-udevd[279620]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.352 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c28a4ced-f69b-45d6-a746-2ade38b08cd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.404 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[736e20c0-3c09-4c08-b4b8-9be74ca84f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.408 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0dee8f5b-e087-433d-8b6d-095f6854e384]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 NetworkManager[48954]: <info>  [1769443008.4365] device (tap9d163776-10): carrier: link connected
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.442 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[53e3207a-0dda-495e-ae54-07d0d63c9a96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.459 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e3684c-e95e-4191-a767-b2ff821fd8be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d163776-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:ab:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449924, 'reachable_time': 25716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279648, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.477 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c09c47-b07b-4389-bcaf-557ab1d6a082]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe95:abbd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449924, 'tstamp': 449924}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279649, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.500 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5a5dc590-a358-4df1-abb0-5b84212b5125]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d163776-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:ab:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449924, 'reachable_time': 25716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 279650, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.550 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e1bb1404-03e6-43d9-9a84-16c46cdf12e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.623 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[63b9540e-eb6b-46ae-a698-303412499550]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.624 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d163776-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.624 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.625 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d163776-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:48 compute-0 kernel: tap9d163776-10: entered promiscuous mode
Jan 26 15:56:48 compute-0 NetworkManager[48954]: <info>  [1769443008.6279] manager: (tap9d163776-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.630 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.636 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d163776-10, col_values=(('external_ids', {'iface-id': '9e411170-b5f9-42c8-91b3-0d3c529d75a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.637 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:48 compute-0 ovn_controller[146046]: 2026-01-26T15:56:48Z|00313|binding|INFO|Releasing lport 9e411170-b5f9-42c8-91b3-0d3c529d75a3 from this chassis (sb_readonly=0)
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.641 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d163776-1a3a-41f1-b832-ea2f79fc91bb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d163776-1a3a-41f1-b832-ea2f79fc91bb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.642 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9516ba7a-c313-48e0-add1-4f29178220f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.643 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-9d163776-1a3a-41f1-b832-ea2f79fc91bb
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/9d163776-1a3a-41f1-b832-ea2f79fc91bb.pid.haproxy
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 9d163776-1a3a-41f1-b832-ea2f79fc91bb
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:56:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:48.643 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'env', 'PROCESS_TAG=haproxy-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d163776-1a3a-41f1-b832-ea2f79fc91bb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.658 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:56:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2913035091' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:56:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:56:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2913035091' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0029750360116414 of space, bias 1.0, pg target 0.89251080349242 quantized to 32 (current 32)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000668364064243379 of space, bias 1.0, pg target 0.2005092192730137 quantized to 32 (current 32)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.242417648682069e-07 of space, bias 4.0, pg target 0.0009890901178418482 quantized to 16 (current 16)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:56:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.924 239969 DEBUG nova.compute.manager [req-dc2d29e4-0cf8-4f74-8109-5127af8acaca req-36b2178a-4c50-4572-9c48-fa7eef69217b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-plugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.925 239969 DEBUG oslo_concurrency.lockutils [req-dc2d29e4-0cf8-4f74-8109-5127af8acaca req-36b2178a-4c50-4572-9c48-fa7eef69217b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.925 239969 DEBUG oslo_concurrency.lockutils [req-dc2d29e4-0cf8-4f74-8109-5127af8acaca req-36b2178a-4c50-4572-9c48-fa7eef69217b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.925 239969 DEBUG oslo_concurrency.lockutils [req-dc2d29e4-0cf8-4f74-8109-5127af8acaca req-36b2178a-4c50-4572-9c48-fa7eef69217b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.925 239969 DEBUG nova.compute.manager [req-dc2d29e4-0cf8-4f74-8109-5127af8acaca req-36b2178a-4c50-4572-9c48-fa7eef69217b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Processing event network-vif-plugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.976 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443008.9765406, fcf10cc3-f719-42c2-8e4c-a1bd5264de60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:48 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.977 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] VM Started (Lifecycle Event)
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:48.998 239969 DEBUG nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.006 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.009 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.021 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.026 239969 INFO nova.virt.libvirt.driver [-] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Instance spawned successfully.
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.028 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.051 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.052 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443008.9832075, fcf10cc3-f719-42c2-8e4c-a1bd5264de60 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.052 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] VM Paused (Lifecycle Event)
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.078 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.079 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.080 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.080 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.080 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.081 239969 DEBUG nova.virt.libvirt.driver [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.112 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.116 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443009.0056825, fcf10cc3-f719-42c2-8e4c-a1bd5264de60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.116 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] VM Resumed (Lifecycle Event)
Jan 26 15:56:49 compute-0 podman[279722]: 2026-01-26 15:56:49.026701965 +0000 UTC m=+0.035422451 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.144 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.148 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.158 239969 INFO nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Took 10.85 seconds to spawn the instance on the hypervisor.
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.159 239969 DEBUG nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.174 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2913035091' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:56:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2913035091' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.234 239969 INFO nova.compute.manager [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Took 12.21 seconds to build instance.
Jan 26 15:56:49 compute-0 podman[279722]: 2026-01-26 15:56:49.252099117 +0000 UTC m=+0.260819573 container create 8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.266 239969 DEBUG oslo_concurrency.lockutils [None req-eb78af3c-50dc-48fc-b967-3267dda6c8f0 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:49 compute-0 systemd[1]: Started libpod-conmon-8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60.scope.
Jan 26 15:56:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2746cf5087b58effb4bac6645628daf3462f98162b93574f1d345d880b019084/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:56:49 compute-0 podman[279722]: 2026-01-26 15:56:49.379191296 +0000 UTC m=+0.387911782 container init 8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:56:49 compute-0 podman[279722]: 2026-01-26 15:56:49.389428648 +0000 UTC m=+0.398149104 container start 8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:56:49 compute-0 neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb[279735]: [NOTICE]   (279739) : New worker (279741) forked
Jan 26 15:56:49 compute-0 neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb[279735]: [NOTICE]   (279739) : Loading success.
Jan 26 15:56:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.770 239969 DEBUG nova.network.neutron [req-19d96927-d6ac-4cf9-8709-085d719ece7e req-411372cc-a9e3-481b-991b-0d9f1d1ab476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updated VIF entry in instance network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.771 239969 DEBUG nova.network.neutron [req-19d96927-d6ac-4cf9-8709-085d719ece7e req-411372cc-a9e3-481b-991b-0d9f1d1ab476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1307: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Jan 26 15:56:49 compute-0 nova_compute[239965]: 2026-01-26 15:56:49.795 239969 DEBUG oslo_concurrency.lockutils [req-19d96927-d6ac-4cf9-8709-085d719ece7e req-411372cc-a9e3-481b-991b-0d9f1d1ab476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.155 239969 DEBUG oslo_concurrency.lockutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.156 239969 DEBUG oslo_concurrency.lockutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.156 239969 DEBUG oslo_concurrency.lockutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.157 239969 DEBUG oslo_concurrency.lockutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.157 239969 DEBUG oslo_concurrency.lockutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.158 239969 INFO nova.compute.manager [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Terminating instance
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.160 239969 DEBUG nova.compute.manager [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:56:50 compute-0 kernel: tap09e051a6-5a (unregistering): left promiscuous mode
Jan 26 15:56:50 compute-0 NetworkManager[48954]: <info>  [1769443010.2169] device (tap09e051a6-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.224 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.225 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.227 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:50 compute-0 ovn_controller[146046]: 2026-01-26T15:56:50Z|00314|binding|INFO|Releasing lport 09e051a6-5a1d-4163-8066-913fa9569057 from this chassis (sb_readonly=0)
Jan 26 15:56:50 compute-0 ovn_controller[146046]: 2026-01-26T15:56:50Z|00315|binding|INFO|Setting lport 09e051a6-5a1d-4163-8066-913fa9569057 down in Southbound
Jan 26 15:56:50 compute-0 ovn_controller[146046]: 2026-01-26T15:56:50Z|00316|binding|INFO|Removing iface tap09e051a6-5a ovn-installed in OVS
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.230 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.236 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:70:e8:d9 10.100.0.3'], port_security=['fa:16:3e:70:e8:d9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b303275701ea4029a4a744bf25ce8726', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbe71d43-f65e-47cf-a850-0e359656056c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6a88dad-0ef4-4088-9429-227b0056ef4f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=09e051a6-5a1d-4163-8066-913fa9569057) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.237 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 09e051a6-5a1d-4163-8066-913fa9569057 in datapath 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c unbound from our chassis
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.242 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.246 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:50 compute-0 ceph-mon[75140]: pgmap v1307: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.251 239969 DEBUG nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.260 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[684b525f-d4f3-4bfa-bdf5-3a45df18501d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:50 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Jan 26 15:56:50 compute-0 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 16.122s CPU time.
Jan 26 15:56:50 compute-0 systemd-machined[208061]: Machine qemu-45-instance-00000029 terminated.
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.293 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2c649f16-dddf-4de3-b4e8-7adbe2da197a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.297 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[60b5b3d7-62e3-4982-9040-02fde6f8794f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.333 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[840a50dd-64a1-47fe-a76a-77205fceff4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.366 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bb54d9e3-d68b-43d8-acc8-4c38ec540527]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99f86ca1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:cf:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 784, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445516, 'reachable_time': 29970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279759, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.386 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.391 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2704a4b3-25e3-476e-98ba-20d85b407eeb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445530, 'tstamp': 445530}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279761, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445533, 'tstamp': 445533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 279761, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.398 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.401 239969 INFO nova.virt.libvirt.driver [-] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Instance destroyed successfully.
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.401 239969 DEBUG nova.objects.instance [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'resources' on Instance uuid f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.404 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99f86ca1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.407 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.414 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.417 239969 DEBUG nova.virt.libvirt.vif [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:55:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-42280864',display_name='tempest-ListServerFiltersTestJSON-instance-42280864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-42280864',id=41,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-0bv7g6mv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virti
o',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:09Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.418 239969 DEBUG nova.network.os_vif_util [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "09e051a6-5a1d-4163-8066-913fa9569057", "address": "fa:16:3e:70:e8:d9", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap09e051a6-5a", "ovs_interfaceid": "09e051a6-5a1d-4163-8066-913fa9569057", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.419 239969 DEBUG nova.network.os_vif_util [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:70:e8:d9,bridge_name='br-int',has_traffic_filtering=True,id=09e051a6-5a1d-4163-8066-913fa9569057,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e051a6-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.420 239969 DEBUG os_vif [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:e8:d9,bridge_name='br-int',has_traffic_filtering=True,id=09e051a6-5a1d-4163-8066-913fa9569057,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e051a6-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.424 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99f86ca1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.424 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.425 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.425 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.425 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99f86ca1-70, col_values=(('external_ids', {'iface-id': '8cf3bd60-3f77-4742-a649-f7b9bc748f38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:50.425 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.425 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.426 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap09e051a6-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.428 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.429 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.434 239969 INFO os_vif [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:70:e8:d9,bridge_name='br-int',has_traffic_filtering=True,id=09e051a6-5a1d-4163-8066-913fa9569057,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap09e051a6-5a')
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.459 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.459 239969 INFO nova.compute.claims [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.661 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.867 239969 INFO nova.virt.libvirt.driver [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Deleting instance files /var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_del
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.870 239969 INFO nova.virt.libvirt.driver [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Deletion of /var/lib/nova/instances/f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1_del complete
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.926 239969 INFO nova.compute.manager [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.926 239969 DEBUG oslo.service.loopingcall [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.927 239969 DEBUG nova.compute.manager [-] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:56:50 compute-0 nova_compute[239965]: 2026-01-26 15:56:50.927 239969 DEBUG nova.network.neutron [-] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.002 239969 DEBUG oslo_concurrency.lockutils [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "interface-fcf10cc3-f719-42c2-8e4c-a1bd5264de60-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.003 239969 DEBUG oslo_concurrency.lockutils [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "interface-fcf10cc3-f719-42c2-8e4c-a1bd5264de60-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.004 239969 DEBUG nova.objects.instance [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lazy-loading 'flavor' on Instance uuid fcf10cc3-f719-42c2-8e4c-a1bd5264de60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.022 239969 DEBUG nova.objects.instance [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lazy-loading 'pci_requests' on Instance uuid fcf10cc3-f719-42c2-8e4c-a1bd5264de60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.034 239969 DEBUG nova.network.neutron [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.218 239969 DEBUG nova.compute.manager [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-plugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.218 239969 DEBUG oslo_concurrency.lockutils [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.220 239969 DEBUG oslo_concurrency.lockutils [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.220 239969 DEBUG oslo_concurrency.lockutils [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.221 239969 DEBUG nova.compute.manager [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] No waiting events found dispatching network-vif-plugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.221 239969 WARNING nova.compute.manager [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received unexpected event network-vif-plugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 for instance with vm_state active and task_state None.
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.221 239969 DEBUG nova.compute.manager [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Received event network-vif-unplugged-09e051a6-5a1d-4163-8066-913fa9569057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.222 239969 DEBUG oslo_concurrency.lockutils [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.222 239969 DEBUG oslo_concurrency.lockutils [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.222 239969 DEBUG oslo_concurrency.lockutils [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.223 239969 DEBUG nova.compute.manager [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] No waiting events found dispatching network-vif-unplugged-09e051a6-5a1d-4163-8066-913fa9569057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.223 239969 DEBUG nova.compute.manager [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Received event network-vif-unplugged-09e051a6-5a1d-4163-8066-913fa9569057 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.223 239969 DEBUG nova.compute.manager [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Received event network-vif-plugged-09e051a6-5a1d-4163-8066-913fa9569057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.223 239969 DEBUG oslo_concurrency.lockutils [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.224 239969 DEBUG oslo_concurrency.lockutils [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.224 239969 DEBUG oslo_concurrency.lockutils [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.224 239969 DEBUG nova.compute.manager [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] No waiting events found dispatching network-vif-plugged-09e051a6-5a1d-4163-8066-913fa9569057 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.224 239969 WARNING nova.compute.manager [req-b853e4b9-4dd3-4aa0-874a-3ee973c9c27b req-5297fab8-ca7d-4bc5-b2c8-c7d908aa9d90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Received unexpected event network-vif-plugged-09e051a6-5a1d-4163-8066-913fa9569057 for instance with vm_state active and task_state deleting.
Jan 26 15:56:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1116267485' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.344 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.683s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.350 239969 DEBUG nova.compute.provider_tree [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.365 239969 DEBUG nova.scheduler.client.report [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1116267485' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.390 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.391 239969 DEBUG nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.437 239969 DEBUG nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.438 239969 DEBUG nova.network.neutron [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.463 239969 INFO nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.484 239969 DEBUG nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.501 239969 DEBUG nova.policy [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '744e9066af094bd8b578059050760458', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e674ff15ecb74c11bb2de410f761c546', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.582 239969 DEBUG nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.583 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.583 239969 INFO nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Creating image(s)
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.604 239969 DEBUG nova.storage.rbd_utils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.627 239969 DEBUG nova.storage.rbd_utils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.652 239969 DEBUG nova.storage.rbd_utils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.657 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.690 239969 DEBUG nova.policy [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.728 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.729 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.730 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.730 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.760 239969 DEBUG nova.storage.rbd_utils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.767 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.955 239969 DEBUG nova.network.neutron [-] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:51 compute-0 nova_compute[239965]: 2026-01-26 15:56:51.977 239969 INFO nova.compute.manager [-] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Took 1.05 seconds to deallocate network for instance.
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.024 239969 DEBUG oslo_concurrency.lockutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.025 239969 DEBUG oslo_concurrency.lockutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.129 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.220 239969 DEBUG nova.storage.rbd_utils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] resizing rbd image 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.301 239969 DEBUG oslo_concurrency.processutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.376 239969 DEBUG nova.objects.instance [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'migration_context' on Instance uuid 0815f972-3907-402e-b6b4-f1f1fc93a8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:52 compute-0 ceph-mon[75140]: pgmap v1308: 305 pgs: 305 active+clean; 372 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.402 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.403 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Ensure instance console log exists: /var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.403 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.404 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.404 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.492 239969 DEBUG nova.network.neutron [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Successfully created port: 5dd9475a-0edb-4c96-9acb-f12a70fef9bf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.633 239969 DEBUG nova.compute.manager [req-cc071816-ffdd-4cff-8c57-bf07a214bdbd req-febaa3ca-76e2-482d-8164-2ca79e0014ec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Received event network-vif-deleted-09e051a6-5a1d-4163-8066-913fa9569057 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.649 239969 DEBUG nova.network.neutron [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Successfully created port: 2722b13b-6ac2-481f-93b0-a5f0e4664e24 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:56:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2768164112' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.978 239969 DEBUG oslo_concurrency.processutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.676s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:52 compute-0 nova_compute[239965]: 2026-01-26 15:56:52.984 239969 DEBUG nova.compute.provider_tree [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:53 compute-0 nova_compute[239965]: 2026-01-26 15:56:53.004 239969 DEBUG nova.scheduler.client.report [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:53 compute-0 nova_compute[239965]: 2026-01-26 15:56:53.038 239969 DEBUG oslo_concurrency.lockutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:53 compute-0 nova_compute[239965]: 2026-01-26 15:56:53.086 239969 INFO nova.scheduler.client.report [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Deleted allocations for instance f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1
Jan 26 15:56:53 compute-0 nova_compute[239965]: 2026-01-26 15:56:53.171 239969 DEBUG oslo_concurrency.lockutils [None req-4d28e60d-d7fb-4cab-ae9f-a37aec2a2b66 e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:53 compute-0 ovn_controller[146046]: 2026-01-26T15:56:53Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3a:cd:80 10.100.0.4
Jan 26 15:56:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2768164112' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1309: 305 pgs: 305 active+clean; 334 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 1.3 MiB/s wr, 264 op/s
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.029 239969 DEBUG oslo_concurrency.lockutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.030 239969 DEBUG oslo_concurrency.lockutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.031 239969 DEBUG oslo_concurrency.lockutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.031 239969 DEBUG oslo_concurrency.lockutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.031 239969 DEBUG oslo_concurrency.lockutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.033 239969 INFO nova.compute.manager [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Terminating instance
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.035 239969 DEBUG nova.compute.manager [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:56:54 compute-0 kernel: tap5132a5fe-0b (unregistering): left promiscuous mode
Jan 26 15:56:54 compute-0 NetworkManager[48954]: <info>  [1769443014.1326] device (tap5132a5fe-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.139 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:54 compute-0 ovn_controller[146046]: 2026-01-26T15:56:54Z|00317|binding|INFO|Releasing lport 5132a5fe-0b78-49d1-94f4-efde60d77ba6 from this chassis (sb_readonly=0)
Jan 26 15:56:54 compute-0 ovn_controller[146046]: 2026-01-26T15:56:54Z|00318|binding|INFO|Setting lport 5132a5fe-0b78-49d1-94f4-efde60d77ba6 down in Southbound
Jan 26 15:56:54 compute-0 ovn_controller[146046]: 2026-01-26T15:56:54Z|00319|binding|INFO|Removing iface tap5132a5fe-0b ovn-installed in OVS
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.144 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.151 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:ae:87 10.100.0.12'], port_security=['fa:16:3e:69:ae:87 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '6f15e546-481b-42eb-ad9c-7c40e8bfe459', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b303275701ea4029a4a744bf25ce8726', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bbe71d43-f65e-47cf-a850-0e359656056c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6a88dad-0ef4-4088-9429-227b0056ef4f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5132a5fe-0b78-49d1-94f4-efde60d77ba6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.152 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5132a5fe-0b78-49d1-94f4-efde60d77ba6 in datapath 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c unbound from our chassis
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.153 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.174 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.176 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e182621-0af7-40a5-87cd-a14123a1e3ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.184 239969 DEBUG nova.network.neutron [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Successfully updated port: 5dd9475a-0edb-4c96-9acb-f12a70fef9bf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:56:54 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000028.scope: Deactivated successfully.
Jan 26 15:56:54 compute-0 systemd[1]: machine-qemu\x2d43\x2dinstance\x2d00000028.scope: Consumed 16.104s CPU time.
Jan 26 15:56:54 compute-0 systemd-machined[208061]: Machine qemu-43-instance-00000028 terminated.
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.205 239969 DEBUG oslo_concurrency.lockutils [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.205 239969 DEBUG oslo_concurrency.lockutils [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquired lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.206 239969 DEBUG nova.network.neutron [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.214 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b6632e3b-33ad-49a5-a517-d390cae47aeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.218 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e9ca313e-33dd-483a-a453-5f81e188c07e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.255 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b8a79f-9ba2-41f4-aa8f-abc279e1149b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.278 239969 INFO nova.virt.libvirt.driver [-] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Instance destroyed successfully.
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.278 239969 DEBUG nova.objects.instance [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'resources' on Instance uuid 6f15e546-481b-42eb-ad9c-7c40e8bfe459 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.281 239969 DEBUG nova.network.neutron [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Successfully updated port: 2722b13b-6ac2-481f-93b0-a5f0e4664e24 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.283 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e18e4210-8608-4f1b-928b-a5c69a60ee34]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap99f86ca1-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a8:cf:65'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 784, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445516, 'reachable_time': 29970, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280014, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.303 239969 DEBUG nova.virt.libvirt.vif [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:55:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-1817493364',display_name='tempest-ListServerFiltersTestJSON-instance-1817493364',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-1817493364',id=40,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-zph60iyz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:07Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=6f15e546-481b-42eb-ad9c-7c40e8bfe459,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.304 239969 DEBUG nova.network.os_vif_util [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "address": "fa:16:3e:69:ae:87", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5132a5fe-0b", "ovs_interfaceid": "5132a5fe-0b78-49d1-94f4-efde60d77ba6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.305 239969 DEBUG nova.network.os_vif_util [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:ae:87,bridge_name='br-int',has_traffic_filtering=True,id=5132a5fe-0b78-49d1-94f4-efde60d77ba6,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5132a5fe-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.307 239969 DEBUG os_vif [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:ae:87,bridge_name='br-int',has_traffic_filtering=True,id=5132a5fe-0b78-49d1-94f4-efde60d77ba6,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5132a5fe-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.310 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.308 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[abe02b0e-409e-4371-b93f-c228c4bffcfb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445530, 'tstamp': 445530}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280022, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap99f86ca1-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 445533, 'tstamp': 445533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280022, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.310 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99f86ca1-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.311 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5132a5fe-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.312 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.312 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.313 239969 DEBUG nova.network.neutron [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.372 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.374 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.376 239969 INFO os_vif [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:ae:87,bridge_name='br-int',has_traffic_filtering=True,id=5132a5fe-0b78-49d1-94f4-efde60d77ba6,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5132a5fe-0b')
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.375 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap99f86ca1-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.375 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.375 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap99f86ca1-70, col_values=(('external_ids', {'iface-id': '8cf3bd60-3f77-4742-a649-f7b9bc748f38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:54.376 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.446 239969 WARNING nova.network.neutron [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] 9d163776-1a3a-41f1-b832-ea2f79fc91bb already exists in list: networks containing: ['9d163776-1a3a-41f1-b832-ea2f79fc91bb']. ignoring it
Jan 26 15:56:54 compute-0 ceph-mon[75140]: pgmap v1309: 305 pgs: 305 active+clean; 334 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 1.3 MiB/s wr, 264 op/s
Jan 26 15:56:54 compute-0 nova_compute[239965]: 2026-01-26 15:56:54.544 239969 DEBUG nova.network.neutron [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:56:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.132 239969 DEBUG nova.compute.manager [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-changed-5dd9475a-0edb-4c96-9acb-f12a70fef9bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.133 239969 DEBUG nova.compute.manager [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Refreshing instance network info cache due to event network-changed-5dd9475a-0edb-4c96-9acb-f12a70fef9bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.133 239969 DEBUG oslo_concurrency.lockutils [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.248 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.649 239969 DEBUG nova.network.neutron [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updating instance_info_cache with network_info: [{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.754 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.755 239969 DEBUG nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Instance network_info: |[{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.759 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Start _get_guest_xml network_info=[{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.771 239969 WARNING nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.780 239969 DEBUG nova.virt.libvirt.host [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.782 239969 DEBUG nova.virt.libvirt.host [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.791 239969 DEBUG nova.virt.libvirt.host [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.792 239969 DEBUG nova.virt.libvirt.host [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.793 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:56:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 322 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 217 op/s
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.793 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.801 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.801 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.801 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.802 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.802 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.802 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.802 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.803 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.803 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.803 239969 DEBUG nova.virt.hardware [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.806 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.861 239969 INFO nova.virt.libvirt.driver [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Deleting instance files /var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459_del
Jan 26 15:56:55 compute-0 nova_compute[239965]: 2026-01-26 15:56:55.863 239969 INFO nova.virt.libvirt.driver [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Deletion of /var/lib/nova/instances/6f15e546-481b-42eb-ad9c-7c40e8bfe459_del complete
Jan 26 15:56:56 compute-0 nova_compute[239965]: 2026-01-26 15:56:56.143 239969 INFO nova.compute.manager [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Took 2.11 seconds to destroy the instance on the hypervisor.
Jan 26 15:56:56 compute-0 nova_compute[239965]: 2026-01-26 15:56:56.144 239969 DEBUG oslo.service.loopingcall [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:56:56 compute-0 nova_compute[239965]: 2026-01-26 15:56:56.145 239969 DEBUG nova.compute.manager [-] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:56:56 compute-0 nova_compute[239965]: 2026-01-26 15:56:56.145 239969 DEBUG nova.network.neutron [-] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:56:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3806722845' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:56 compute-0 nova_compute[239965]: 2026-01-26 15:56:56.523 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.717s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:56 compute-0 nova_compute[239965]: 2026-01-26 15:56:56.549 239969 DEBUG nova.storage.rbd_utils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:56 compute-0 nova_compute[239965]: 2026-01-26 15:56:56.554 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:56 compute-0 ceph-mon[75140]: pgmap v1310: 305 pgs: 305 active+clean; 322 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 1.8 MiB/s wr, 217 op/s
Jan 26 15:56:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3806722845' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:56:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/855358973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.195 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.198 239969 DEBUG nova.virt.libvirt.vif [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:56:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-457250150',display_name='tempest-tempest.common.compute-instance-457250150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-457250150',id=45,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-qhswce1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=0815f972-3907-402e-b6b4-f1f1fc93a8ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.198 239969 DEBUG nova.network.os_vif_util [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.199 239969 DEBUG nova.network.os_vif_util [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:f5:4d,bridge_name='br-int',has_traffic_filtering=True,id=2722b13b-6ac2-481f-93b0-a5f0e4664e24,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2722b13b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.201 239969 DEBUG nova.objects.instance [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_devices' on Instance uuid 0815f972-3907-402e-b6b4-f1f1fc93a8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.221 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <uuid>0815f972-3907-402e-b6b4-f1f1fc93a8ab</uuid>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <name>instance-0000002d</name>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <nova:name>tempest-tempest.common.compute-instance-457250150</nova:name>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:56:55</nova:creationTime>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <nova:port uuid="2722b13b-6ac2-481f-93b0-a5f0e4664e24">
Jan 26 15:56:57 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <system>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <entry name="serial">0815f972-3907-402e-b6b4-f1f1fc93a8ab</entry>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <entry name="uuid">0815f972-3907-402e-b6b4-f1f1fc93a8ab</entry>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </system>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <os>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </os>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <features>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </features>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk">
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk.config">
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       </source>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:56:57 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e2:f5:4d"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <target dev="tap2722b13b-6a"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/console.log" append="off"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <video>
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </video>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:56:57 compute-0 nova_compute[239965]: </domain>
Jan 26 15:56:57 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.234 239969 DEBUG nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Preparing to wait for external event network-vif-plugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.234 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.236 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.236 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.237 239969 DEBUG nova.virt.libvirt.vif [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:56:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-457250150',display_name='tempest-tempest.common.compute-instance-457250150',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-457250150',id=45,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-qhswce1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:56:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=0815f972-3907-402e-b6b4-f1f1fc93a8ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.237 239969 DEBUG nova.network.os_vif_util [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.238 239969 DEBUG nova.network.os_vif_util [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:f5:4d,bridge_name='br-int',has_traffic_filtering=True,id=2722b13b-6ac2-481f-93b0-a5f0e4664e24,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2722b13b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.238 239969 DEBUG os_vif [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:f5:4d,bridge_name='br-int',has_traffic_filtering=True,id=2722b13b-6ac2-481f-93b0-a5f0e4664e24,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2722b13b-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.240 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.240 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.244 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.246 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2722b13b-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.247 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2722b13b-6a, col_values=(('external_ids', {'iface-id': '2722b13b-6ac2-481f-93b0-a5f0e4664e24', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:f5:4d', 'vm-uuid': '0815f972-3907-402e-b6b4-f1f1fc93a8ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:57 compute-0 NetworkManager[48954]: <info>  [1769443017.2519] manager: (tap2722b13b-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/156)
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.257 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.258 239969 INFO os_vif [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:f5:4d,bridge_name='br-int',has_traffic_filtering=True,id=2722b13b-6ac2-481f-93b0-a5f0e4664e24,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2722b13b-6a')
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.334 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.334 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.334 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:e2:f5:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.335 239969 INFO nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Using config drive
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.355 239969 DEBUG nova.storage.rbd_utils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.658 239969 DEBUG nova.network.neutron [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Updating instance_info_cache with network_info: [{"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "address": "fa:16:3e:f9:3a:7c", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dd9475a-0e", "ovs_interfaceid": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.696 239969 DEBUG oslo_concurrency.lockutils [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Releasing lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.697 239969 DEBUG oslo_concurrency.lockutils [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.697 239969 DEBUG nova.network.neutron [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Refreshing network info cache for port 5dd9475a-0edb-4c96-9acb-f12a70fef9bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.702 239969 DEBUG nova.virt.libvirt.vif [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1836640296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1836640296',id=44,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e674ff15ecb74c11bb2de410f761c546',ramdisk_id='',reservation_id='r-40uwisnz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1575468404',owner_user_name='tempest-AttachInterfacesV270Test-1575468404-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:49Z,user_data=None,user_id='744e9066af094bd8b578059050760458',uuid=fcf10cc3-f719-42c2-8e4c-a1bd5264de60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "address": "fa:16:3e:f9:3a:7c", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dd9475a-0e", "ovs_interfaceid": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.703 239969 DEBUG nova.network.os_vif_util [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converting VIF {"id": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "address": "fa:16:3e:f9:3a:7c", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dd9475a-0e", "ovs_interfaceid": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.703 239969 DEBUG nova.network.os_vif_util [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:3a:7c,bridge_name='br-int',has_traffic_filtering=True,id=5dd9475a-0edb-4c96-9acb-f12a70fef9bf,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dd9475a-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.704 239969 DEBUG os_vif [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:3a:7c,bridge_name='br-int',has_traffic_filtering=True,id=5dd9475a-0edb-4c96-9acb-f12a70fef9bf,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dd9475a-0e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.704 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.704 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.705 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.707 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.708 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5dd9475a-0e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.709 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5dd9475a-0e, col_values=(('external_ids', {'iface-id': '5dd9475a-0edb-4c96-9acb-f12a70fef9bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f9:3a:7c', 'vm-uuid': 'fcf10cc3-f719-42c2-8e4c-a1bd5264de60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:57 compute-0 NetworkManager[48954]: <info>  [1769443017.7114] manager: (tap5dd9475a-0e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/157)
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.715 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.720 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.721 239969 INFO os_vif [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:3a:7c,bridge_name='br-int',has_traffic_filtering=True,id=5dd9475a-0edb-4c96-9acb-f12a70fef9bf,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dd9475a-0e')
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.722 239969 DEBUG nova.virt.libvirt.vif [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1836640296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1836640296',id=44,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='e674ff15ecb74c11bb2de410f761c546',ramdisk_id='',reservation_id='r-40uwisnz',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesV270Test-1575468404',owner_user_name='tempest-AttachInterfacesV270Test-1575468404-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:49Z,user_data=None,user_id='744e9066af094bd8b578059050760458',uuid=fcf10cc3-f719-42c2-8e4c-a1bd5264de60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "address": "fa:16:3e:f9:3a:7c", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dd9475a-0e", "ovs_interfaceid": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.722 239969 DEBUG nova.network.os_vif_util [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converting VIF {"id": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "address": "fa:16:3e:f9:3a:7c", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dd9475a-0e", "ovs_interfaceid": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.723 239969 DEBUG nova.network.os_vif_util [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:3a:7c,bridge_name='br-int',has_traffic_filtering=True,id=5dd9475a-0edb-4c96-9acb-f12a70fef9bf,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dd9475a-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.730 239969 DEBUG nova.virt.libvirt.guest [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] attach device xml: <interface type="ethernet">
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:f9:3a:7c"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <target dev="tap5dd9475a-0e"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]: </interface>
Jan 26 15:56:57 compute-0 nova_compute[239965]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 15:56:57 compute-0 kernel: tap5dd9475a-0e: entered promiscuous mode
Jan 26 15:56:57 compute-0 NetworkManager[48954]: <info>  [1769443017.7566] manager: (tap5dd9475a-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/158)
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 ovn_controller[146046]: 2026-01-26T15:56:57Z|00320|binding|INFO|Claiming lport 5dd9475a-0edb-4c96-9acb-f12a70fef9bf for this chassis.
Jan 26 15:56:57 compute-0 ovn_controller[146046]: 2026-01-26T15:56:57Z|00321|binding|INFO|5dd9475a-0edb-4c96-9acb-f12a70fef9bf: Claiming fa:16:3e:f9:3a:7c 10.100.0.14
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.777 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:3a:7c 10.100.0.14'], port_security=['fa:16:3e:f9:3a:7c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fcf10cc3-f719-42c2-8e4c-a1bd5264de60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e674ff15ecb74c11bb2de410f761c546', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e263bd5-47ac-4dbb-be18-c57841bc13c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cd36b8b-8b6c-4f97-9e9f-a494963538b8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5dd9475a-0edb-4c96-9acb-f12a70fef9bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.780 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5dd9475a-0edb-4c96-9acb-f12a70fef9bf in datapath 9d163776-1a3a-41f1-b832-ea2f79fc91bb bound to our chassis
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.782 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d163776-1a3a-41f1-b832-ea2f79fc91bb
Jan 26 15:56:57 compute-0 ovn_controller[146046]: 2026-01-26T15:56:57Z|00322|binding|INFO|Setting lport 5dd9475a-0edb-4c96-9acb-f12a70fef9bf ovn-installed in OVS
Jan 26 15:56:57 compute-0 ovn_controller[146046]: 2026-01-26T15:56:57Z|00323|binding|INFO|Setting lport 5dd9475a-0edb-4c96-9acb-f12a70fef9bf up in Southbound
Jan 26 15:56:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 281 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.5 MiB/s wr, 291 op/s
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.813 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c6378880-8d57-4664-9f94-9ccbe6eae038]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:57 compute-0 systemd-udevd[280158]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:56:57 compute-0 NetworkManager[48954]: <info>  [1769443017.8495] device (tap5dd9475a-0e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:57 compute-0 NetworkManager[48954]: <info>  [1769443017.8507] device (tap5dd9475a-0e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.851 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3053b86f-7985-45cb-a41f-15a6a4af772b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.858 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[70ae0289-ec77-4d1d-8fad-dfd7559e3ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.872 239969 DEBUG nova.virt.libvirt.driver [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.872 239969 DEBUG nova.virt.libvirt.driver [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.872 239969 DEBUG nova.virt.libvirt.driver [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] No VIF found with MAC fa:16:3e:bd:eb:b8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.873 239969 DEBUG nova.virt.libvirt.driver [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] No VIF found with MAC fa:16:3e:f9:3a:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.892 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7a5102f4-0735-4069-84d2-c458c397eea5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:57 compute-0 podman[280132]: 2026-01-26 15:56:57.894490321 +0000 UTC m=+0.090765199 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.900 239969 DEBUG nova.virt.libvirt.guest [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <nova:name>tempest-AttachInterfacesV270Test-server-1836640296</nova:name>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:56:57</nova:creationTime>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:user uuid="744e9066af094bd8b578059050760458">tempest-AttachInterfacesV270Test-1575468404-project-member</nova:user>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:project uuid="e674ff15ecb74c11bb2de410f761c546">tempest-AttachInterfacesV270Test-1575468404</nova:project>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:port uuid="df8d11ce-fef0-411d-abb7-9e35b7c01592">
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     <nova:port uuid="5dd9475a-0edb-4c96-9acb-f12a70fef9bf">
Jan 26 15:56:57 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:56:57 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:56:57 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:56:57 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:56:57 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.921 239969 DEBUG oslo_concurrency.lockutils [None req-84216e67-864c-41df-b8dc-a3393b073b68 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "interface-fcf10cc3-f719-42c2-8e4c-a1bd5264de60-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.921 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7e70c997-8461-497e-8396-9cea1ae235e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d163776-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:ab:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449924, 'reachable_time': 38452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280178, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/855358973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:56:57 compute-0 podman[280133]: 2026-01-26 15:56:57.939454305 +0000 UTC m=+0.141037583 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.941 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[84fe2617-b775-46da-afab-2c0c2978c5ac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9d163776-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449939, 'tstamp': 449939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280183, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d163776-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449942, 'tstamp': 449942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280183, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.943 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d163776-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.944 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 nova_compute[239965]: 2026-01-26 15:56:57.950 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.950 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d163776-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.951 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.951 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d163776-10, col_values=(('external_ids', {'iface-id': '9e411170-b5f9-42c8-91b3-0d3c529d75a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:57.951 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.100 239969 INFO nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Creating config drive at /var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/disk.config
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.108 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbzdw1afq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.158 239969 DEBUG nova.network.neutron [-] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.185 239969 INFO nova.compute.manager [-] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Took 2.04 seconds to deallocate network for instance.
Jan 26 15:56:58 compute-0 ovn_controller[146046]: 2026-01-26T15:56:58Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b7:54:a4 10.100.0.11
Jan 26 15:56:58 compute-0 ovn_controller[146046]: 2026-01-26T15:56:58Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b7:54:a4 10.100.0.11
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.252 239969 DEBUG oslo_concurrency.lockutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.252 239969 DEBUG oslo_concurrency.lockutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.253 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbzdw1afq" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.293 239969 DEBUG nova.storage.rbd_utils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] rbd image 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.302 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/disk.config 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.396 239969 DEBUG nova.compute.manager [req-2273eb08-48c4-4129-8cbc-848f9ec898bb req-4e747bec-fc24-4073-abf9-2398c4272a51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Received event network-vif-plugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.397 239969 DEBUG oslo_concurrency.lockutils [req-2273eb08-48c4-4129-8cbc-848f9ec898bb req-4e747bec-fc24-4073-abf9-2398c4272a51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.397 239969 DEBUG oslo_concurrency.lockutils [req-2273eb08-48c4-4129-8cbc-848f9ec898bb req-4e747bec-fc24-4073-abf9-2398c4272a51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.397 239969 DEBUG oslo_concurrency.lockutils [req-2273eb08-48c4-4129-8cbc-848f9ec898bb req-4e747bec-fc24-4073-abf9-2398c4272a51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.398 239969 DEBUG nova.compute.manager [req-2273eb08-48c4-4129-8cbc-848f9ec898bb req-4e747bec-fc24-4073-abf9-2398c4272a51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] No waiting events found dispatching network-vif-plugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.398 239969 WARNING nova.compute.manager [req-2273eb08-48c4-4129-8cbc-848f9ec898bb req-4e747bec-fc24-4073-abf9-2398c4272a51 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Received unexpected event network-vif-plugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 for instance with vm_state deleted and task_state None.
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.426 239969 DEBUG nova.compute.manager [req-f37ba3cf-7676-490a-b4a0-a7bb7234fb44 req-ef5f628e-e7ad-4fc7-905a-392b54142069 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-plugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.426 239969 DEBUG oslo_concurrency.lockutils [req-f37ba3cf-7676-490a-b4a0-a7bb7234fb44 req-ef5f628e-e7ad-4fc7-905a-392b54142069 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.427 239969 DEBUG oslo_concurrency.lockutils [req-f37ba3cf-7676-490a-b4a0-a7bb7234fb44 req-ef5f628e-e7ad-4fc7-905a-392b54142069 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.427 239969 DEBUG oslo_concurrency.lockutils [req-f37ba3cf-7676-490a-b4a0-a7bb7234fb44 req-ef5f628e-e7ad-4fc7-905a-392b54142069 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.427 239969 DEBUG nova.compute.manager [req-f37ba3cf-7676-490a-b4a0-a7bb7234fb44 req-ef5f628e-e7ad-4fc7-905a-392b54142069 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] No waiting events found dispatching network-vif-plugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.427 239969 WARNING nova.compute.manager [req-f37ba3cf-7676-490a-b4a0-a7bb7234fb44 req-ef5f628e-e7ad-4fc7-905a-392b54142069 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received unexpected event network-vif-plugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf for instance with vm_state active and task_state None.
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.498 239969 DEBUG oslo_concurrency.processutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.734 239969 DEBUG oslo_concurrency.processutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/disk.config 0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.735 239969 INFO nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Deleting local config drive /var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/disk.config because it was imported into RBD.
Jan 26 15:56:58 compute-0 kernel: tap2722b13b-6a: entered promiscuous mode
Jan 26 15:56:58 compute-0 NetworkManager[48954]: <info>  [1769443018.7818] manager: (tap2722b13b-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/159)
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:58 compute-0 ovn_controller[146046]: 2026-01-26T15:56:58Z|00324|binding|INFO|Claiming lport 2722b13b-6ac2-481f-93b0-a5f0e4664e24 for this chassis.
Jan 26 15:56:58 compute-0 ovn_controller[146046]: 2026-01-26T15:56:58Z|00325|binding|INFO|2722b13b-6ac2-481f-93b0-a5f0e4664e24: Claiming fa:16:3e:e2:f5:4d 10.100.0.13
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.801 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:f5:4d 10.100.0.13'], port_security=['fa:16:3e:e2:f5:4d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0815f972-3907-402e-b6b4-f1f1fc93a8ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be1b617b-295a-4675-91ed-17bc8ca1fead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2722b13b-6ac2-481f-93b0-a5f0e4664e24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.802 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2722b13b-6ac2-481f-93b0-a5f0e4664e24 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.803 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:56:58 compute-0 NetworkManager[48954]: <info>  [1769443018.8048] device (tap2722b13b-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:56:58 compute-0 NetworkManager[48954]: <info>  [1769443018.8055] device (tap2722b13b-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:56:58 compute-0 ovn_controller[146046]: 2026-01-26T15:56:58Z|00326|binding|INFO|Setting lport 2722b13b-6ac2-481f-93b0-a5f0e4664e24 ovn-installed in OVS
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:58 compute-0 ovn_controller[146046]: 2026-01-26T15:56:58Z|00327|binding|INFO|Setting lport 2722b13b-6ac2-481f-93b0-a5f0e4664e24 up in Southbound
Jan 26 15:56:58 compute-0 systemd-machined[208061]: New machine qemu-50-instance-0000002d.
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.820 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbc84a5-9c72-4a29-8d03-fa75fa253f3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:58 compute-0 systemd[1]: Started Virtual Machine qemu-50-instance-0000002d.
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.867 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6adb2f5b-3850-4489-87fa-aba61639d67b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.870 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fdf8eead-e3d3-441c-9bdd-611643b31e8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.907 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[646f368e-5a37-472e-8d78-c3ca6faef734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.929 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8fa980-c65c-4da1-9338-45c68faaf03e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448592, 'reachable_time': 26761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280270, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:58 compute-0 ceph-mon[75140]: pgmap v1311: 305 pgs: 305 active+clean; 281 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.5 MiB/s wr, 291 op/s
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.952 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c80c8d-878c-4154-b98c-8ab55eaf6958]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448604, 'tstamp': 448604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280271, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448607, 'tstamp': 448607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280271, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.954 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.956 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:58 compute-0 nova_compute[239965]: 2026-01-26 15:56:58.957 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.957 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.958 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.958 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:56:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:58.959 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:56:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:56:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3400736325' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.150 239969 DEBUG oslo_concurrency.processutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.160 239969 DEBUG nova.compute.provider_tree [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.178 239969 DEBUG nova.scheduler.client.report [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.205 239969 DEBUG oslo_concurrency.lockutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:59.218 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:56:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:59.219 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:56:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:56:59.220 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.236 239969 INFO nova.scheduler.client.report [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Deleted allocations for instance 6f15e546-481b-42eb-ad9c-7c40e8bfe459
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.318 239969 DEBUG oslo_concurrency.lockutils [None req-63ed1688-7a09-48ff-a3bd-e4ebe7cdceac e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.288s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.359 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443019.3588467, 0815f972-3907-402e-b6b4-f1f1fc93a8ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.359 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] VM Started (Lifecycle Event)
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.382 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.388 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443019.3615196, 0815f972-3907-402e-b6b4-f1f1fc93a8ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.389 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] VM Paused (Lifecycle Event)
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.412 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.418 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.448 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:56:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:56:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 281 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.5 MiB/s wr, 241 op/s
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.905 239969 DEBUG nova.network.neutron [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Updated VIF entry in instance network info cache for port 5dd9475a-0edb-4c96-9acb-f12a70fef9bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.905 239969 DEBUG nova.network.neutron [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Updating instance_info_cache with network_info: [{"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "address": "fa:16:3e:f9:3a:7c", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dd9475a-0e", "ovs_interfaceid": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.923 239969 DEBUG oslo_concurrency.lockutils [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-fcf10cc3-f719-42c2-8e4c-a1bd5264de60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.924 239969 DEBUG nova.compute.manager [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-changed-2722b13b-6ac2-481f-93b0-a5f0e4664e24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.924 239969 DEBUG nova.compute.manager [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing instance network info cache due to event network-changed-2722b13b-6ac2-481f-93b0-a5f0e4664e24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.925 239969 DEBUG oslo_concurrency.lockutils [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.925 239969 DEBUG oslo_concurrency.lockutils [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:56:59 compute-0 nova_compute[239965]: 2026-01-26 15:56:59.925 239969 DEBUG nova.network.neutron [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing network info cache for port 2722b13b-6ac2-481f-93b0-a5f0e4664e24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:56:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3400736325' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.057 239969 DEBUG oslo_concurrency.lockutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.058 239969 DEBUG oslo_concurrency.lockutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.058 239969 DEBUG oslo_concurrency.lockutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.058 239969 DEBUG oslo_concurrency.lockutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.058 239969 DEBUG oslo_concurrency.lockutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.060 239969 INFO nova.compute.manager [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Terminating instance
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.061 239969 DEBUG nova.compute.manager [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:57:00 compute-0 kernel: tapdf8d11ce-fe (unregistering): left promiscuous mode
Jan 26 15:57:00 compute-0 NetworkManager[48954]: <info>  [1769443020.1275] device (tapdf8d11ce-fe): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:57:00 compute-0 ovn_controller[146046]: 2026-01-26T15:57:00Z|00328|binding|INFO|Releasing lport df8d11ce-fef0-411d-abb7-9e35b7c01592 from this chassis (sb_readonly=0)
Jan 26 15:57:00 compute-0 ovn_controller[146046]: 2026-01-26T15:57:00Z|00329|binding|INFO|Setting lport df8d11ce-fef0-411d-abb7-9e35b7c01592 down in Southbound
Jan 26 15:57:00 compute-0 ovn_controller[146046]: 2026-01-26T15:57:00Z|00330|binding|INFO|Removing iface tapdf8d11ce-fe ovn-installed in OVS
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.134 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.145 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:eb:b8 10.100.0.13'], port_security=['fa:16:3e:bd:eb:b8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'fcf10cc3-f719-42c2-8e4c-a1bd5264de60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e674ff15ecb74c11bb2de410f761c546', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e263bd5-47ac-4dbb-be18-c57841bc13c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cd36b8b-8b6c-4f97-9e9f-a494963538b8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=df8d11ce-fef0-411d-abb7-9e35b7c01592) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.146 156105 INFO neutron.agent.ovn.metadata.agent [-] Port df8d11ce-fef0-411d-abb7-9e35b7c01592 in datapath 9d163776-1a3a-41f1-b832-ea2f79fc91bb unbound from our chassis
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.147 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d163776-1a3a-41f1-b832-ea2f79fc91bb
Jan 26 15:57:00 compute-0 kernel: tap5dd9475a-0e (unregistering): left promiscuous mode
Jan 26 15:57:00 compute-0 NetworkManager[48954]: <info>  [1769443020.1551] device (tap5dd9475a-0e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:57:00 compute-0 ovn_controller[146046]: 2026-01-26T15:57:00Z|00331|binding|INFO|Releasing lport 5dd9475a-0edb-4c96-9acb-f12a70fef9bf from this chassis (sb_readonly=0)
Jan 26 15:57:00 compute-0 ovn_controller[146046]: 2026-01-26T15:57:00Z|00332|binding|INFO|Setting lport 5dd9475a-0edb-4c96-9acb-f12a70fef9bf down in Southbound
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.161 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 ovn_controller[146046]: 2026-01-26T15:57:00Z|00333|binding|INFO|Removing iface tap5dd9475a-0e ovn-installed in OVS
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.163 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.167 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9d485a0e-12d7-454e-ba56-8f36ceee21d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.171 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f9:3a:7c 10.100.0.14'], port_security=['fa:16:3e:f9:3a:7c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'fcf10cc3-f719-42c2-8e4c-a1bd5264de60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e674ff15ecb74c11bb2de410f761c546', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e263bd5-47ac-4dbb-be18-c57841bc13c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3cd36b8b-8b6c-4f97-9e9f-a494963538b8, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5dd9475a-0edb-4c96-9acb-f12a70fef9bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.190 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.206 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bc42f3ba-7797-4c02-92d5-cf04343e9de2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:00 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Jan 26 15:57:00 compute-0 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Consumed 11.591s CPU time.
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.210 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9defeb1a-cde8-4b26-a379-3ca106e2b933]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:00 compute-0 systemd-machined[208061]: Machine qemu-49-instance-0000002c terminated.
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.238 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5462bb41-df57-4800-be1d-fdc02c9ef44f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.251 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.254 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c001c8-e555-49ff-8ba9-db19e4fd73e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d163776-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:95:ab:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 98], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449924, 'reachable_time': 38452, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280331, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.275 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b61ed1f-f781-448f-a702-d33fb3f26771]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9d163776-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449939, 'tstamp': 449939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280332, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9d163776-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 449942, 'tstamp': 449942}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280332, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.277 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d163776-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.279 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.287 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.288 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d163776-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.288 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.289 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d163776-10, col_values=(('external_ids', {'iface-id': '9e411170-b5f9-42c8-91b3-0d3c529d75a3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.289 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.292 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5dd9475a-0edb-4c96-9acb-f12a70fef9bf in datapath 9d163776-1a3a-41f1-b832-ea2f79fc91bb unbound from our chassis
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.293 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d163776-1a3a-41f1-b832-ea2f79fc91bb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.294 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 NetworkManager[48954]: <info>  [1769443020.2960] manager: (tap5dd9475a-0e): new Tun device (/org/freedesktop/NetworkManager/Devices/160)
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.294 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[77e9c9f9-b961-49ee-9361-775d8e2c24e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:00.295 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb namespace which is not needed anymore
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.313 239969 INFO nova.virt.libvirt.driver [-] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Instance destroyed successfully.
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.314 239969 DEBUG nova.objects.instance [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lazy-loading 'resources' on Instance uuid fcf10cc3-f719-42c2-8e4c-a1bd5264de60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.333 239969 DEBUG nova.virt.libvirt.vif [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1836640296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1836640296',id=44,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e674ff15ecb74c11bb2de410f761c546',ramdisk_id='',reservation_id='r-40uwisnz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proje
ct_name='tempest-AttachInterfacesV270Test-1575468404',owner_user_name='tempest-AttachInterfacesV270Test-1575468404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:49Z,user_data=None,user_id='744e9066af094bd8b578059050760458',uuid=fcf10cc3-f719-42c2-8e4c-a1bd5264de60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.333 239969 DEBUG nova.network.os_vif_util [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converting VIF {"id": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "address": "fa:16:3e:bd:eb:b8", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdf8d11ce-fe", "ovs_interfaceid": "df8d11ce-fef0-411d-abb7-9e35b7c01592", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.334 239969 DEBUG nova.network.os_vif_util [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:eb:b8,bridge_name='br-int',has_traffic_filtering=True,id=df8d11ce-fef0-411d-abb7-9e35b7c01592,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf8d11ce-fe') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.334 239969 DEBUG os_vif [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:eb:b8,bridge_name='br-int',has_traffic_filtering=True,id=df8d11ce-fef0-411d-abb7-9e35b7c01592,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf8d11ce-fe') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.335 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.336 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf8d11ce-fe, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.338 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.342 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.343 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.347 239969 INFO os_vif [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:eb:b8,bridge_name='br-int',has_traffic_filtering=True,id=df8d11ce-fef0-411d-abb7-9e35b7c01592,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdf8d11ce-fe')
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.348 239969 DEBUG nova.virt.libvirt.vif [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-AttachInterfacesV270Test-server-1836640296',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesv270test-server-1836640296',id=44,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e674ff15ecb74c11bb2de410f761c546',ramdisk_id='',reservation_id='r-40uwisnz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_proje
ct_name='tempest-AttachInterfacesV270Test-1575468404',owner_user_name='tempest-AttachInterfacesV270Test-1575468404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:49Z,user_data=None,user_id='744e9066af094bd8b578059050760458',uuid=fcf10cc3-f719-42c2-8e4c-a1bd5264de60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "address": "fa:16:3e:f9:3a:7c", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dd9475a-0e", "ovs_interfaceid": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.349 239969 DEBUG nova.network.os_vif_util [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converting VIF {"id": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "address": "fa:16:3e:f9:3a:7c", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dd9475a-0e", "ovs_interfaceid": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.350 239969 DEBUG nova.network.os_vif_util [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f9:3a:7c,bridge_name='br-int',has_traffic_filtering=True,id=5dd9475a-0edb-4c96-9acb-f12a70fef9bf,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dd9475a-0e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.351 239969 DEBUG os_vif [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:3a:7c,bridge_name='br-int',has_traffic_filtering=True,id=5dd9475a-0edb-4c96-9acb-f12a70fef9bf,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dd9475a-0e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.352 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.353 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dd9475a-0e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.355 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.358 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.361 239969 INFO os_vif [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f9:3a:7c,bridge_name='br-int',has_traffic_filtering=True,id=5dd9475a-0edb-4c96-9acb-f12a70fef9bf,network=Network(9d163776-1a3a-41f1-b832-ea2f79fc91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5dd9475a-0e')
Jan 26 15:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.532 239969 DEBUG oslo_concurrency.lockutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.534 239969 DEBUG oslo_concurrency.lockutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.534 239969 DEBUG oslo_concurrency.lockutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.534 239969 DEBUG oslo_concurrency.lockutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.534 239969 DEBUG oslo_concurrency.lockutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.535 239969 INFO nova.compute.manager [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Terminating instance
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.537 239969 DEBUG nova.compute.manager [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.620 239969 DEBUG nova.compute.manager [req-568d1e33-a1ed-41d1-a740-34e18ce44801 req-0e35361b-b9ff-43b6-a3ee-6a867b22120a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Received event network-vif-deleted-5132a5fe-0b78-49d1-94f4-efde60d77ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.666 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-plugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.667 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.667 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.667 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.668 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] No waiting events found dispatching network-vif-plugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.668 239969 WARNING nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received unexpected event network-vif-plugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf for instance with vm_state active and task_state deleting.
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.668 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-plugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.668 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.669 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.669 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.669 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Processing event network-vif-plugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.669 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-plugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.670 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.670 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.670 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.670 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] No waiting events found dispatching network-vif-plugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.671 239969 WARNING nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received unexpected event network-vif-plugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 for instance with vm_state building and task_state spawning.
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.671 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-unplugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.671 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.671 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.672 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.672 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] No waiting events found dispatching network-vif-unplugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.672 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-unplugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.672 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-plugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.673 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.673 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.673 239969 DEBUG oslo_concurrency.lockutils [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.673 239969 DEBUG nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] No waiting events found dispatching network-vif-plugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.674 239969 WARNING nova.compute.manager [req-b29b55a0-8872-4c0a-8758-219ac5f8c0ef req-82ebd041-dbcd-4264-8f54-cbeeccedff11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received unexpected event network-vif-plugged-df8d11ce-fef0-411d-abb7-9e35b7c01592 for instance with vm_state active and task_state deleting.
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.675 239969 DEBUG nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.678 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443020.6784763, 0815f972-3907-402e-b6b4-f1f1fc93a8ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.679 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] VM Resumed (Lifecycle Event)
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.681 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.684 239969 INFO nova.virt.libvirt.driver [-] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Instance spawned successfully.
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.685 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.702 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.711 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.714 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.715 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.715 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.716 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.716 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.716 239969 DEBUG nova.virt.libvirt.driver [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.744 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.791 239969 INFO nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Took 9.21 seconds to spawn the instance on the hypervisor.
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.792 239969 DEBUG nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.864 239969 INFO nova.compute.manager [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Took 10.55 seconds to build instance.
Jan 26 15:57:00 compute-0 nova_compute[239965]: 2026-01-26 15:57:00.883 239969 DEBUG oslo_concurrency.lockutils [None req-4b612414-cf4c-4c89-9983-95bfb36a57e5 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:01 compute-0 neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb[279735]: [NOTICE]   (279739) : haproxy version is 2.8.14-c23fe91
Jan 26 15:57:01 compute-0 neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb[279735]: [NOTICE]   (279739) : path to executable is /usr/sbin/haproxy
Jan 26 15:57:01 compute-0 neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb[279735]: [WARNING]  (279739) : Exiting Master process...
Jan 26 15:57:01 compute-0 neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb[279735]: [ALERT]    (279739) : Current worker (279741) exited with code 143 (Terminated)
Jan 26 15:57:01 compute-0 neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb[279735]: [WARNING]  (279739) : All workers exited. Exiting... (0)
Jan 26 15:57:01 compute-0 systemd[1]: libpod-8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60.scope: Deactivated successfully.
Jan 26 15:57:01 compute-0 podman[280391]: 2026-01-26 15:57:01.020291392 +0000 UTC m=+0.591988250 container died 8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 15:57:01 compute-0 ceph-mon[75140]: pgmap v1312: 305 pgs: 305 active+clean; 281 MiB data, 553 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.5 MiB/s wr, 241 op/s
Jan 26 15:57:01 compute-0 kernel: tap1c2767ea-eb (unregistering): left promiscuous mode
Jan 26 15:57:01 compute-0 NetworkManager[48954]: <info>  [1769443021.2887] device (tap1c2767ea-eb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:57:01 compute-0 ovn_controller[146046]: 2026-01-26T15:57:01Z|00334|binding|INFO|Releasing lport 1c2767ea-eb7b-4942-a452-31fe1017e883 from this chassis (sb_readonly=0)
Jan 26 15:57:01 compute-0 ovn_controller[146046]: 2026-01-26T15:57:01Z|00335|binding|INFO|Setting lport 1c2767ea-eb7b-4942-a452-31fe1017e883 down in Southbound
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.295 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:01 compute-0 ovn_controller[146046]: 2026-01-26T15:57:01Z|00336|binding|INFO|Removing iface tap1c2767ea-eb ovn-installed in OVS
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.302 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:01.312 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:cd:80 10.100.0.4'], port_security=['fa:16:3e:3a:cd:80 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd8c8962d-c696-4ea6-826d-466a804a7770', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b303275701ea4029a4a744bf25ce8726', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'bbe71d43-f65e-47cf-a850-0e359656056c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d6a88dad-0ef4-4088-9429-227b0056ef4f, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1c2767ea-eb7b-4942-a452-31fe1017e883) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.319 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:01 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000027.scope: Deactivated successfully.
Jan 26 15:57:01 compute-0 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d00000027.scope: Consumed 15.571s CPU time.
Jan 26 15:57:01 compute-0 systemd-machined[208061]: Machine qemu-48-instance-00000027 terminated.
Jan 26 15:57:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60-userdata-shm.mount: Deactivated successfully.
Jan 26 15:57:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2746cf5087b58effb4bac6645628daf3462f98162b93574f1d345d880b019084-merged.mount: Deactivated successfully.
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.578 239969 INFO nova.virt.libvirt.driver [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Instance destroyed successfully.
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.578 239969 DEBUG nova.objects.instance [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lazy-loading 'resources' on Instance uuid d8c8962d-c696-4ea6-826d-466a804a7770 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.594 239969 DEBUG nova.virt.libvirt.vif [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:55:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-2052998401',display_name='tempest-ListServerFiltersTestJSON-instance-2052998401',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-2052998401',id=39,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b303275701ea4029a4a744bf25ce8726',ramdisk_id='',reservation_id='r-xtx413wp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model=
'virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-31069689',owner_user_name='tempest-ListServerFiltersTestJSON-31069689-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:39Z,user_data=None,user_id='e19e9687c7784557a28b0297dc26ff06',uuid=d8c8962d-c696-4ea6-826d-466a804a7770,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.595 239969 DEBUG nova.network.os_vif_util [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converting VIF {"id": "1c2767ea-eb7b-4942-a452-31fe1017e883", "address": "fa:16:3e:3a:cd:80", "network": {"id": "99f86ca1-7e08-4ddd-92c4-0c2af6afae0c", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2144924519-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b303275701ea4029a4a744bf25ce8726", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c2767ea-eb", "ovs_interfaceid": "1c2767ea-eb7b-4942-a452-31fe1017e883", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.595 239969 DEBUG nova.network.os_vif_util [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.596 239969 DEBUG os_vif [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.597 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.597 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c2767ea-eb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.602 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.605 239969 INFO os_vif [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:cd:80,bridge_name='br-int',has_traffic_filtering=True,id=1c2767ea-eb7b-4942-a452-31fe1017e883,network=Network(99f86ca1-7e08-4ddd-92c4-0c2af6afae0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c2767ea-eb')
Jan 26 15:57:01 compute-0 podman[280391]: 2026-01-26 15:57:01.686951296 +0000 UTC m=+1.258648154 container cleanup 8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:57:01 compute-0 systemd[1]: libpod-conmon-8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60.scope: Deactivated successfully.
Jan 26 15:57:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 290 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 264 op/s
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.829 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:01 compute-0 podman[280456]: 2026-01-26 15:57:01.946625079 +0000 UTC m=+0.232716513 container remove 8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 26 15:57:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:01.953 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[05b4097e-f660-4a82-af01-256a0a1e7a47]: (4, ('Mon Jan 26 03:57:00 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb (8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60)\n8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60\nMon Jan 26 03:57:01 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb (8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60)\n8b76df5be9c8537b44bb27568c3381c33a451cfe66492e389bf72ecb726d7e60\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:01.954 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[73558c90-c88e-43d5-8bff-7d7b56c57283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:01.956 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d163776-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:01 compute-0 kernel: tap9d163776-10: left promiscuous mode
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.975 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.980 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:01.983 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3f666223-0a0b-4296-831e-fc95e8909129]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:01 compute-0 nova_compute[239965]: 2026-01-26 15:57:01.999 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.001 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[838d866d-1d04-484e-9d9d-5549eb9a3a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.003 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c67dda9c-d8cb-41d9-8418-da7533c0189b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.005 239969 DEBUG nova.network.neutron [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updated VIF entry in instance network info cache for port 2722b13b-6ac2-481f-93b0-a5f0e4664e24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.005 239969 DEBUG nova.network.neutron [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updating instance_info_cache with network_info: [{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.023 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[490926f3-c6d6-4978-ac24-8be6862096d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 449914, 'reachable_time': 25986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280471, 'error': None, 'target': 'ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d163776\x2d1a3a\x2d41f1\x2db832\x2dea2f79fc91bb.mount: Deactivated successfully.
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.029 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d163776-1a3a-41f1-b832-ea2f79fc91bb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.029 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[869a72ff-4383-4c54-b6dc-48c05b803a71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.030 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1c2767ea-eb7b-4942-a452-31fe1017e883 in datapath 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c unbound from our chassis
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.032 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 99f86ca1-7e08-4ddd-92c4-0c2af6afae0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.034 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[076c5089-0e32-42b1-960f-8305f4ae7118]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.035 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c namespace which is not needed anymore
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.095 239969 DEBUG oslo_concurrency.lockutils [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.096 239969 DEBUG nova.compute.manager [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Received event network-vif-unplugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.097 239969 DEBUG oslo_concurrency.lockutils [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.097 239969 DEBUG oslo_concurrency.lockutils [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.097 239969 DEBUG oslo_concurrency.lockutils [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6f15e546-481b-42eb-ad9c-7c40e8bfe459-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.098 239969 DEBUG nova.compute.manager [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] No waiting events found dispatching network-vif-unplugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.099 239969 DEBUG nova.compute.manager [req-edf3f9a0-7768-4e8f-9d1d-7d9441038ee9 req-bd6824a2-7ad1-40d8-9acd-c402ab3367d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Received event network-vif-unplugged-5132a5fe-0b78-49d1-94f4-efde60d77ba6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:57:02 compute-0 neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c[276656]: [NOTICE]   (276684) : haproxy version is 2.8.14-c23fe91
Jan 26 15:57:02 compute-0 neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c[276656]: [NOTICE]   (276684) : path to executable is /usr/sbin/haproxy
Jan 26 15:57:02 compute-0 neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c[276656]: [WARNING]  (276684) : Exiting Master process...
Jan 26 15:57:02 compute-0 neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c[276656]: [ALERT]    (276684) : Current worker (276689) exited with code 143 (Terminated)
Jan 26 15:57:02 compute-0 neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c[276656]: [WARNING]  (276684) : All workers exited. Exiting... (0)
Jan 26 15:57:02 compute-0 systemd[1]: libpod-dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9.scope: Deactivated successfully.
Jan 26 15:57:02 compute-0 podman[280493]: 2026-01-26 15:57:02.213106969 +0000 UTC m=+0.054759225 container died dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.228 239969 INFO nova.virt.libvirt.driver [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Deleting instance files /var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60_del
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.229 239969 INFO nova.virt.libvirt.driver [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Deletion of /var/lib/nova/instances/fcf10cc3-f719-42c2-8e4c-a1bd5264de60_del complete
Jan 26 15:57:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9-userdata-shm.mount: Deactivated successfully.
Jan 26 15:57:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-35bf16b9dba8646110cc88ddf690dc458b6f4c3643208264a7f62a038532b97e-merged.mount: Deactivated successfully.
Jan 26 15:57:02 compute-0 ceph-mon[75140]: pgmap v1313: 305 pgs: 305 active+clean; 290 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.9 MiB/s wr, 264 op/s
Jan 26 15:57:02 compute-0 podman[280493]: 2026-01-26 15:57:02.267926146 +0000 UTC m=+0.109578422 container cleanup dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 15:57:02 compute-0 systemd[1]: libpod-conmon-dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9.scope: Deactivated successfully.
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.297 239969 INFO nova.compute.manager [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Took 2.24 seconds to destroy the instance on the hypervisor.
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.297 239969 DEBUG oslo.service.loopingcall [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.298 239969 DEBUG nova.compute.manager [-] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.298 239969 DEBUG nova.network.neutron [-] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.400 239969 INFO nova.virt.libvirt.driver [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Deleting instance files /var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770_del
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.401 239969 INFO nova.virt.libvirt.driver [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Deletion of /var/lib/nova/instances/d8c8962d-c696-4ea6-826d-466a804a7770_del complete
Jan 26 15:57:02 compute-0 podman[280520]: 2026-01-26 15:57:02.416152753 +0000 UTC m=+0.121600435 container remove dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.424 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cda1d29e-8245-4af8-88b0-66c9e5053c19]: (4, ('Mon Jan 26 03:57:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c (dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9)\ndde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9\nMon Jan 26 03:57:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c (dde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9)\ndde07e8a7d5721a396ce289b3bbba4547c4b4d149b5bf260b7b48404f14b7eb9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.426 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[58dac608-beb1-4706-b646-a4a03747f21c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.427 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap99f86ca1-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.429 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:02 compute-0 kernel: tap99f86ca1-70: left promiscuous mode
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.450 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.453 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d352ff56-5664-48b3-af06-6997f4f27857]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.476 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[59fe13a8-ed66-4d14-8215-d0eeab36abb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.478 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[018b6ad7-7f7f-4f76-9582-3f2338080089]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.496 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee5f331-d762-45c2-ba47-594c208946d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 445508, 'reachable_time': 23401, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280533, 'error': None, 'target': 'ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.499 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-99f86ca1-7e08-4ddd-92c4-0c2af6afae0c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:57:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:02.500 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[1894780b-20d7-40ca-8fb6-d79d39673f0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.519 239969 INFO nova.compute.manager [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Took 1.98 seconds to destroy the instance on the hypervisor.
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.519 239969 DEBUG oslo.service.loopingcall [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.520 239969 DEBUG nova.compute.manager [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.520 239969 DEBUG nova.network.neutron [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:57:02 compute-0 systemd[1]: run-netns-ovnmeta\x2d99f86ca1\x2d7e08\x2d4ddd\x2d92c4\x2d0c2af6afae0c.mount: Deactivated successfully.
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.806 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-unplugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.807 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.807 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.807 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.808 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] No waiting events found dispatching network-vif-unplugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.808 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-unplugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.808 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-plugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.808 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.808 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.809 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.809 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] No waiting events found dispatching network-vif-plugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.809 239969 WARNING nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received unexpected event network-vif-plugged-5dd9475a-0edb-4c96-9acb-f12a70fef9bf for instance with vm_state active and task_state deleting.
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.809 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-unplugged-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.809 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.810 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.810 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.810 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] No waiting events found dispatching network-vif-unplugged-1c2767ea-eb7b-4942-a452-31fe1017e883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.810 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-unplugged-1c2767ea-eb7b-4942-a452-31fe1017e883 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.810 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.810 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.811 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.811 239969 DEBUG oslo_concurrency.lockutils [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.811 239969 DEBUG nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] No waiting events found dispatching network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:02 compute-0 nova_compute[239965]: 2026-01-26 15:57:02.811 239969 WARNING nova.compute.manager [req-ff5fb544-85fd-4b7d-9866-a16671dd5197 req-8982eeee-9666-4c0e-a698-b135ea916c60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received unexpected event network-vif-plugged-1c2767ea-eb7b-4942-a452-31fe1017e883 for instance with vm_state active and task_state deleting.
Jan 26 15:57:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 201 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.0 MiB/s wr, 311 op/s
Jan 26 15:57:03 compute-0 nova_compute[239965]: 2026-01-26 15:57:03.913 239969 DEBUG nova.compute.manager [req-7d549789-71b1-4eb5-8956-401db5273308 req-400bb462-aee4-45e8-b425-6955fcc17a28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-deleted-df8d11ce-fef0-411d-abb7-9e35b7c01592 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:03 compute-0 nova_compute[239965]: 2026-01-26 15:57:03.914 239969 INFO nova.compute.manager [req-7d549789-71b1-4eb5-8956-401db5273308 req-400bb462-aee4-45e8-b425-6955fcc17a28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Neutron deleted interface df8d11ce-fef0-411d-abb7-9e35b7c01592; detaching it from the instance and deleting it from the info cache
Jan 26 15:57:03 compute-0 nova_compute[239965]: 2026-01-26 15:57:03.915 239969 DEBUG nova.network.neutron [req-7d549789-71b1-4eb5-8956-401db5273308 req-400bb462-aee4-45e8-b425-6955fcc17a28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Updating instance_info_cache with network_info: [{"id": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "address": "fa:16:3e:f9:3a:7c", "network": {"id": "9d163776-1a3a-41f1-b832-ea2f79fc91bb", "bridge": "br-int", "label": "tempest-AttachInterfacesV270Test-75029303-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e674ff15ecb74c11bb2de410f761c546", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5dd9475a-0e", "ovs_interfaceid": "5dd9475a-0edb-4c96-9acb-f12a70fef9bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:03 compute-0 nova_compute[239965]: 2026-01-26 15:57:03.941 239969 DEBUG nova.compute.manager [req-7d549789-71b1-4eb5-8956-401db5273308 req-400bb462-aee4-45e8-b425-6955fcc17a28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Detach interface failed, port_id=df8d11ce-fef0-411d-abb7-9e35b7c01592, reason: Instance fcf10cc3-f719-42c2-8e4c-a1bd5264de60 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 15:57:04 compute-0 nova_compute[239965]: 2026-01-26 15:57:04.494 239969 DEBUG nova.network.neutron [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:04 compute-0 nova_compute[239965]: 2026-01-26 15:57:04.510 239969 INFO nova.compute.manager [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Took 1.99 seconds to deallocate network for instance.
Jan 26 15:57:04 compute-0 nova_compute[239965]: 2026-01-26 15:57:04.556 239969 DEBUG oslo_concurrency.lockutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:04 compute-0 nova_compute[239965]: 2026-01-26 15:57:04.557 239969 DEBUG oslo_concurrency.lockutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:04 compute-0 nova_compute[239965]: 2026-01-26 15:57:04.660 239969 DEBUG oslo_concurrency.processutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:04 compute-0 nova_compute[239965]: 2026-01-26 15:57:04.829 239969 DEBUG nova.network.neutron [-] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:04 compute-0 ceph-mon[75140]: pgmap v1314: 305 pgs: 305 active+clean; 201 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.0 MiB/s wr, 311 op/s
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.030 239969 INFO nova.compute.manager [-] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Took 2.73 seconds to deallocate network for instance.
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.085 239969 DEBUG oslo_concurrency.lockutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.252 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2905916646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.300 239969 DEBUG oslo_concurrency.processutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.310 239969 DEBUG nova.compute.provider_tree [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.335 239969 DEBUG nova.scheduler.client.report [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.360 239969 DEBUG oslo_concurrency.lockutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.363 239969 DEBUG oslo_concurrency.lockutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.278s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.395 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443010.3944743, f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.396 239969 INFO nova.compute.manager [-] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] VM Stopped (Lifecycle Event)
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.400 239969 INFO nova.scheduler.client.report [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Deleted allocations for instance d8c8962d-c696-4ea6-826d-466a804a7770
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.418 239969 DEBUG nova.compute.manager [None req-00fe6504-b147-4ef1-8b90-f3db3d87c464 - - - - - -] [instance: f4bbcda5-8a2d-4ffb-9aff-676b29cf56c1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.475 239969 DEBUG oslo_concurrency.lockutils [None req-8b5ce81c-18ff-47d7-9283-a9192f73d7bd e19e9687c7784557a28b0297dc26ff06 b303275701ea4029a4a744bf25ce8726 - - default default] Lock "d8c8962d-c696-4ea6-826d-466a804a7770" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:05 compute-0 nova_compute[239965]: 2026-01-26 15:57:05.481 239969 DEBUG oslo_concurrency.processutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 287 op/s
Jan 26 15:57:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2905916646' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513341815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.105 239969 DEBUG oslo_concurrency.processutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.111 239969 DEBUG nova.compute.provider_tree [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.127 239969 DEBUG nova.compute.manager [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Received event network-vif-deleted-1c2767ea-eb7b-4942-a452-31fe1017e883 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.128 239969 DEBUG nova.compute.manager [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.128 239969 DEBUG nova.compute.manager [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing instance network info cache due to event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.128 239969 DEBUG oslo_concurrency.lockutils [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.128 239969 DEBUG oslo_concurrency.lockutils [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.129 239969 DEBUG nova.network.neutron [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.131 239969 DEBUG nova.scheduler.client.report [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.153 239969 DEBUG oslo_concurrency.lockutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.181 239969 INFO nova.scheduler.client.report [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Deleted allocations for instance fcf10cc3-f719-42c2-8e4c-a1bd5264de60
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.242 239969 DEBUG oslo_concurrency.lockutils [None req-1febcd4a-f359-4c86-a914-c1898e02577e 744e9066af094bd8b578059050760458 e674ff15ecb74c11bb2de410f761c546 - - default default] Lock "fcf10cc3-f719-42c2-8e4c-a1bd5264de60" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:06 compute-0 nova_compute[239965]: 2026-01-26 15:57:06.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:06 compute-0 ceph-mon[75140]: pgmap v1315: 305 pgs: 305 active+clean; 167 MiB data, 526 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.4 MiB/s wr, 287 op/s
Jan 26 15:57:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3513341815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 262 op/s
Jan 26 15:57:07 compute-0 ovn_controller[146046]: 2026-01-26T15:57:07Z|00337|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:08 compute-0 ovn_controller[146046]: 2026-01-26T15:57:08Z|00338|binding|INFO|Releasing lport 0a445094-538b-4c97-83d4-6fbe61c31fbe from this chassis (sb_readonly=0)
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.757 239969 DEBUG nova.network.neutron [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updated VIF entry in instance network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.758 239969 DEBUG nova.network.neutron [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.781 239969 DEBUG oslo_concurrency.lockutils [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.782 239969 DEBUG nova.compute.manager [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-changed-2722b13b-6ac2-481f-93b0-a5f0e4664e24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.783 239969 DEBUG nova.compute.manager [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing instance network info cache due to event network-changed-2722b13b-6ac2-481f-93b0-a5f0e4664e24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.783 239969 DEBUG oslo_concurrency.lockutils [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.783 239969 DEBUG oslo_concurrency.lockutils [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.784 239969 DEBUG nova.network.neutron [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing network info cache for port 2722b13b-6ac2-481f-93b0-a5f0e4664e24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:08 compute-0 nova_compute[239965]: 2026-01-26 15:57:08.809 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:08 compute-0 ceph-mon[75140]: pgmap v1316: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.2 MiB/s wr, 262 op/s
Jan 26 15:57:09 compute-0 nova_compute[239965]: 2026-01-26 15:57:09.271 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443014.2699528, 6f15e546-481b-42eb-ad9c-7c40e8bfe459 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:09 compute-0 nova_compute[239965]: 2026-01-26 15:57:09.272 239969 INFO nova.compute.manager [-] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] VM Stopped (Lifecycle Event)
Jan 26 15:57:09 compute-0 nova_compute[239965]: 2026-01-26 15:57:09.294 239969 DEBUG nova.compute.manager [None req-0cf5d8c4-64ff-4496-9c0c-e732d28ee854 - - - - - -] [instance: 6f15e546-481b-42eb-ad9c-7c40e8bfe459] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:09 compute-0 nova_compute[239965]: 2026-01-26 15:57:09.327 239969 DEBUG nova.compute.manager [req-882b7404-91c3-4c63-b91a-3f50c413a9cf req-5ea20f91-645c-456d-9dac-f36a3f4b8e33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-changed-2722b13b-6ac2-481f-93b0-a5f0e4664e24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:09 compute-0 nova_compute[239965]: 2026-01-26 15:57:09.328 239969 DEBUG nova.compute.manager [req-882b7404-91c3-4c63-b91a-3f50c413a9cf req-5ea20f91-645c-456d-9dac-f36a3f4b8e33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing instance network info cache due to event network-changed-2722b13b-6ac2-481f-93b0-a5f0e4664e24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:09 compute-0 nova_compute[239965]: 2026-01-26 15:57:09.328 239969 DEBUG oslo_concurrency.lockutils [req-882b7404-91c3-4c63-b91a-3f50c413a9cf req-5ea20f91-645c-456d-9dac-f36a3f4b8e33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 492 KiB/s wr, 155 op/s
Jan 26 15:57:09 compute-0 nova_compute[239965]: 2026-01-26 15:57:09.899 239969 DEBUG oslo_concurrency.lockutils [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-d49e7eb2-dcdb-4410-a016-42e58d1d1416-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:09 compute-0 nova_compute[239965]: 2026-01-26 15:57:09.900 239969 DEBUG oslo_concurrency.lockutils [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-d49e7eb2-dcdb-4410-a016-42e58d1d1416-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:09 compute-0 nova_compute[239965]: 2026-01-26 15:57:09.901 239969 DEBUG nova.objects.instance [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid d49e7eb2-dcdb-4410-a016-42e58d1d1416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:10 compute-0 nova_compute[239965]: 2026-01-26 15:57:10.254 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:10 compute-0 nova_compute[239965]: 2026-01-26 15:57:10.816 239969 DEBUG nova.network.neutron [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updated VIF entry in instance network info cache for port 2722b13b-6ac2-481f-93b0-a5f0e4664e24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:10 compute-0 nova_compute[239965]: 2026-01-26 15:57:10.817 239969 DEBUG nova.network.neutron [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updating instance_info_cache with network_info: [{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:10 compute-0 nova_compute[239965]: 2026-01-26 15:57:10.853 239969 DEBUG oslo_concurrency.lockutils [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:10 compute-0 nova_compute[239965]: 2026-01-26 15:57:10.854 239969 DEBUG nova.compute.manager [req-43b5c617-8427-4cad-8aac-d0a48742a6ce req-371babb2-aa42-4411-a708-04c143672a47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Received event network-vif-deleted-5dd9475a-0edb-4c96-9acb-f12a70fef9bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:10 compute-0 nova_compute[239965]: 2026-01-26 15:57:10.855 239969 DEBUG oslo_concurrency.lockutils [req-882b7404-91c3-4c63-b91a-3f50c413a9cf req-5ea20f91-645c-456d-9dac-f36a3f4b8e33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:10 compute-0 nova_compute[239965]: 2026-01-26 15:57:10.855 239969 DEBUG nova.network.neutron [req-882b7404-91c3-4c63-b91a-3f50c413a9cf req-5ea20f91-645c-456d-9dac-f36a3f4b8e33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing network info cache for port 2722b13b-6ac2-481f-93b0-a5f0e4664e24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:10 compute-0 ceph-mon[75140]: pgmap v1317: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 492 KiB/s wr, 155 op/s
Jan 26 15:57:11 compute-0 nova_compute[239965]: 2026-01-26 15:57:11.077 239969 DEBUG nova.objects.instance [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_requests' on Instance uuid d49e7eb2-dcdb-4410-a016-42e58d1d1416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:11 compute-0 nova_compute[239965]: 2026-01-26 15:57:11.096 239969 DEBUG nova.network.neutron [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:57:11 compute-0 nova_compute[239965]: 2026-01-26 15:57:11.603 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:11 compute-0 nova_compute[239965]: 2026-01-26 15:57:11.701 239969 DEBUG nova.policy [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:57:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1318: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 492 KiB/s wr, 155 op/s
Jan 26 15:57:12 compute-0 nova_compute[239965]: 2026-01-26 15:57:12.774 239969 DEBUG nova.network.neutron [req-882b7404-91c3-4c63-b91a-3f50c413a9cf req-5ea20f91-645c-456d-9dac-f36a3f4b8e33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updated VIF entry in instance network info cache for port 2722b13b-6ac2-481f-93b0-a5f0e4664e24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:12 compute-0 nova_compute[239965]: 2026-01-26 15:57:12.775 239969 DEBUG nova.network.neutron [req-882b7404-91c3-4c63-b91a-3f50c413a9cf req-5ea20f91-645c-456d-9dac-f36a3f4b8e33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updating instance_info_cache with network_info: [{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:12 compute-0 nova_compute[239965]: 2026-01-26 15:57:12.799 239969 DEBUG oslo_concurrency.lockutils [req-882b7404-91c3-4c63-b91a-3f50c413a9cf req-5ea20f91-645c-456d-9dac-f36a3f4b8e33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:12 compute-0 nova_compute[239965]: 2026-01-26 15:57:12.937 239969 DEBUG nova.network.neutron [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Successfully updated port: a6ea89b5-e240-4a9d-bad7-8e99a81e6805 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:57:12 compute-0 nova_compute[239965]: 2026-01-26 15:57:12.952 239969 DEBUG oslo_concurrency.lockutils [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:12 compute-0 nova_compute[239965]: 2026-01-26 15:57:12.953 239969 DEBUG oslo_concurrency.lockutils [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:12 compute-0 nova_compute[239965]: 2026-01-26 15:57:12.953 239969 DEBUG nova.network.neutron [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:57:13 compute-0 ceph-mon[75140]: pgmap v1318: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 492 KiB/s wr, 155 op/s
Jan 26 15:57:13 compute-0 nova_compute[239965]: 2026-01-26 15:57:13.107 239969 WARNING nova.network.neutron [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] 91bcac0a-1926-4861-88ab-ae3c06f7e57e already exists in list: networks containing: ['91bcac0a-1926-4861-88ab-ae3c06f7e57e']. ignoring it
Jan 26 15:57:13 compute-0 nova_compute[239965]: 2026-01-26 15:57:13.609 239969 DEBUG nova.compute.manager [req-c41e4341-ee89-401a-a9ae-00a915bb0b76 req-8460832d-d613-4051-8338-6cd1a2b5e2e1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:13 compute-0 nova_compute[239965]: 2026-01-26 15:57:13.610 239969 DEBUG nova.compute.manager [req-c41e4341-ee89-401a-a9ae-00a915bb0b76 req-8460832d-d613-4051-8338-6cd1a2b5e2e1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing instance network info cache due to event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:13 compute-0 nova_compute[239965]: 2026-01-26 15:57:13.610 239969 DEBUG oslo_concurrency.lockutils [req-c41e4341-ee89-401a-a9ae-00a915bb0b76 req-8460832d-d613-4051-8338-6cd1a2b5e2e1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 117 KiB/s wr, 133 op/s
Jan 26 15:57:14 compute-0 ceph-mon[75140]: pgmap v1319: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 117 KiB/s wr, 133 op/s
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.312 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.312 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.340 239969 DEBUG nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.420 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.421 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.430 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.430 239969 INFO nova.compute.claims [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:57:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.645 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.743 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Acquiring lock "3940597d-f11c-4959-85c5-bb51a373cec5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.743 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "3940597d-f11c-4959-85c5-bb51a373cec5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.762 239969 DEBUG nova.compute.manager [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:57:14 compute-0 nova_compute[239965]: 2026-01-26 15:57:14.827 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.225 239969 DEBUG nova.network.neutron [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.241 239969 DEBUG oslo_concurrency.lockutils [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.242 239969 DEBUG oslo_concurrency.lockutils [req-c41e4341-ee89-401a-a9ae-00a915bb0b76 req-8460832d-d613-4051-8338-6cd1a2b5e2e1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.243 239969 DEBUG nova.network.neutron [req-c41e4341-ee89-401a-a9ae-00a915bb0b76 req-8460832d-d613-4051-8338-6cd1a2b5e2e1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.246 239969 DEBUG nova.virt.libvirt.vif [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044484805',display_name='tempest-tempest.common.compute-instance-2044484805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044484805',id=43,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-lr8w7kzm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=d49e7eb2-dcdb-4410-a016-42e58d1d1416,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.246 239969 DEBUG nova.network.os_vif_util [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.247 239969 DEBUG nova.network.os_vif_util [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.247 239969 DEBUG os_vif [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.248 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.248 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.249 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.251 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.251 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6ea89b5-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.251 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6ea89b5-e2, col_values=(('external_ids', {'iface-id': 'a6ea89b5-e240-4a9d-bad7-8e99a81e6805', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:92:c2', 'vm-uuid': 'd49e7eb2-dcdb-4410-a016-42e58d1d1416'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:15 compute-0 NetworkManager[48954]: <info>  [1769443035.2537] manager: (tapa6ea89b5-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.253 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.257 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:57:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006474636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.261 239969 INFO os_vif [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2')
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.261 239969 DEBUG nova.virt.libvirt.vif [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044484805',display_name='tempest-tempest.common.compute-instance-2044484805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044484805',id=43,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-lr8w7kzm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=d49e7eb2-dcdb-4410-a016-42e58d1d1416,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.261 239969 DEBUG nova.network.os_vif_util [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.262 239969 DEBUG nova.network.os_vif_util [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.264 239969 DEBUG nova.virt.libvirt.guest [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] attach device xml: <interface type="ethernet">
Jan 26 15:57:15 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:7e:92:c2"/>
Jan 26 15:57:15 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:57:15 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:57:15 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:57:15 compute-0 nova_compute[239965]:   <target dev="tapa6ea89b5-e2"/>
Jan 26 15:57:15 compute-0 nova_compute[239965]: </interface>
Jan 26 15:57:15 compute-0 nova_compute[239965]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 15:57:15 compute-0 kernel: tapa6ea89b5-e2: entered promiscuous mode
Jan 26 15:57:15 compute-0 NetworkManager[48954]: <info>  [1769443035.2794] manager: (tapa6ea89b5-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/162)
Jan 26 15:57:15 compute-0 ovn_controller[146046]: 2026-01-26T15:57:15Z|00339|binding|INFO|Claiming lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 for this chassis.
Jan 26 15:57:15 compute-0 ovn_controller[146046]: 2026-01-26T15:57:15Z|00340|binding|INFO|a6ea89b5-e240-4a9d-bad7-8e99a81e6805: Claiming fa:16:3e:7e:92:c2 10.100.0.14
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.284 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.287 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:92:c2 10.100.0.14'], port_security=['fa:16:3e:7e:92:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1213915953', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd49e7eb2-dcdb-4410-a016-42e58d1d1416', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1213915953', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a6ea89b5-e240-4a9d-bad7-8e99a81e6805) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.288 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a6ea89b5-e240-4a9d-bad7-8e99a81e6805 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.289 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.295 239969 DEBUG nova.compute.provider_tree [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.297 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:15 compute-0 ovn_controller[146046]: 2026-01-26T15:57:15Z|00341|binding|INFO|Setting lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 ovn-installed in OVS
Jan 26 15:57:15 compute-0 ovn_controller[146046]: 2026-01-26T15:57:15Z|00342|binding|INFO|Setting lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 up in Southbound
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.300 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.309 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443020.3081362, fcf10cc3-f719-42c2-8e4c-a1bd5264de60 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.309 239969 INFO nova.compute.manager [-] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] VM Stopped (Lifecycle Event)
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.310 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc3ba56-1b4e-4b68-a05f-73a62643fa4e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:15 compute-0 systemd-udevd[280608]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.313 239969 DEBUG nova.scheduler.client.report [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:57:15 compute-0 ovn_controller[146046]: 2026-01-26T15:57:15Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:f5:4d 10.100.0.13
Jan 26 15:57:15 compute-0 ovn_controller[146046]: 2026-01-26T15:57:15Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:f5:4d 10.100.0.13
Jan 26 15:57:15 compute-0 NetworkManager[48954]: <info>  [1769443035.3243] device (tapa6ea89b5-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:57:15 compute-0 NetworkManager[48954]: <info>  [1769443035.3249] device (tapa6ea89b5-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.331 239969 DEBUG nova.compute.manager [None req-4f342a82-1168-47e1-b7de-c93678c4689a - - - - - -] [instance: fcf10cc3-f719-42c2-8e4c-a1bd5264de60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.346 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5ff302fc-bedb-4ab4-9a14-c040f89d25a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.346 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.346 239969 DEBUG nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.349 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f1d82dd9-73c1-457a-a868-7b94591e1cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.349 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.357 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.358 239969 INFO nova.compute.claims [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.377 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd2514f-3380-4001-8f26-9aba31643ee2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.393 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[986f65f4-b3de-484d-9eb6-04f33298fb0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448592, 'reachable_time': 26761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280615, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.408 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5c3e25fe-a0e4-46f9-aec3-5ddd687472a1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448604, 'tstamp': 448604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280616, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448607, 'tstamp': 448607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280616, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.410 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.413 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.413 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.413 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:15.414 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.414 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.415 239969 DEBUG nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.415 239969 DEBUG nova.network.neutron [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.433 239969 INFO nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.452 239969 DEBUG nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.539 239969 DEBUG nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.540 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.541 239969 INFO nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Creating image(s)
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.582 239969 DEBUG nova.storage.rbd_utils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.602 239969 DEBUG nova.storage.rbd_utils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.623 239969 DEBUG nova.storage.rbd_utils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.626 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.656 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.697 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.698 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.698 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:15 compute-0 nova_compute[239965]: 2026-01-26 15:57:15.699 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4006474636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 171 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 146 KiB/s wr, 70 op/s
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.340 239969 DEBUG nova.storage.rbd_utils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.346 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.379 239969 DEBUG nova.policy [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fe6e9eb388ed48fc93bf3f4b984f8d1c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acad8b8490f840ba8d8c5d0d91874b79', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.388 239969 DEBUG nova.compute.manager [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-changed-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.388 239969 DEBUG nova.compute.manager [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing instance network info cache due to event network-changed-a6ea89b5-e240-4a9d-bad7-8e99a81e6805. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.388 239969 DEBUG oslo_concurrency.lockutils [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.426 239969 DEBUG nova.virt.libvirt.driver [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.427 239969 DEBUG nova.virt.libvirt.driver [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.427 239969 DEBUG nova.virt.libvirt.driver [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:b7:54:a4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.427 239969 DEBUG nova.virt.libvirt.driver [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:7e:92:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.455 239969 DEBUG nova.virt.libvirt.guest [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:16 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:16 compute-0 nova_compute[239965]:   <nova:name>tempest-tempest.common.compute-instance-2044484805</nova:name>
Jan 26 15:57:16 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:57:16</nova:creationTime>
Jan 26 15:57:16 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:57:16 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:16 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:57:16 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:57:16 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:57:16 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:16 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     <nova:port uuid="32baa4e5-2605-4a3f-8023-6a80ea71793d">
Jan 26 15:57:16 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     <nova:port uuid="a6ea89b5-e240-4a9d-bad7-8e99a81e6805">
Jan 26 15:57:16 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:57:16 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:16 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:57:16 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:57:16 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.480 239969 DEBUG oslo_concurrency.lockutils [None req-cf48ec59-a4e9-4721-8680-9287070d9972 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-d49e7eb2-dcdb-4410-a016-42e58d1d1416-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.576 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443021.575024, d8c8962d-c696-4ea6-826d-466a804a7770 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.576 239969 INFO nova.compute.manager [-] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] VM Stopped (Lifecycle Event)
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.598 239969 DEBUG nova.compute.manager [None req-21acd87f-1436-47c9-a695-b16f45c92928 - - - - - -] [instance: d8c8962d-c696-4ea6-826d-466a804a7770] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111701761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.891 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.234s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.896 239969 DEBUG nova.compute.provider_tree [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.913 239969 DEBUG nova.scheduler.client.report [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.941 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:16 compute-0 nova_compute[239965]: 2026-01-26 15:57:16.942 239969 DEBUG nova.compute.manager [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.004 239969 DEBUG nova.compute.manager [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.022 239969 INFO nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.045 239969 DEBUG nova.compute.manager [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.048 239969 DEBUG nova.network.neutron [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Successfully created port: e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:57:17 compute-0 ceph-mon[75140]: pgmap v1320: 305 pgs: 305 active+clean; 171 MiB data, 480 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 146 KiB/s wr, 70 op/s
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.155 239969 DEBUG nova.compute.manager [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.156 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.157 239969 INFO nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Creating image(s)
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.184 239969 DEBUG nova.storage.rbd_utils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] rbd image 3940597d-f11c-4959-85c5-bb51a373cec5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.229 239969 DEBUG nova.storage.rbd_utils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] rbd image 3940597d-f11c-4959-85c5-bb51a373cec5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.273 239969 DEBUG nova.storage.rbd_utils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] rbd image 3940597d-f11c-4959-85c5-bb51a373cec5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.279 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.354 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.355 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.356 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.356 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.383 239969 DEBUG nova.storage.rbd_utils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] rbd image 3940597d-f11c-4959-85c5-bb51a373cec5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.386 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3940597d-f11c-4959-85c5-bb51a373cec5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.473 239969 DEBUG oslo_concurrency.lockutils [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-d49e7eb2-dcdb-4410-a016-42e58d1d1416-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.474 239969 DEBUG oslo_concurrency.lockutils [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-d49e7eb2-dcdb-4410-a016-42e58d1d1416-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.493 239969 DEBUG nova.objects.instance [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid d49e7eb2-dcdb-4410-a016-42e58d1d1416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.521 239969 DEBUG nova.virt.libvirt.vif [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044484805',display_name='tempest-tempest.common.compute-instance-2044484805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044484805',id=43,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-lr8w7kzm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=d49e7eb2-dcdb-4410-a016-42e58d1d1416,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.521 239969 DEBUG nova.network.os_vif_util [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.522 239969 DEBUG nova.network.os_vif_util [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.525 239969 DEBUG nova.virt.libvirt.guest [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.528 239969 DEBUG nova.virt.libvirt.guest [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.530 239969 DEBUG nova.virt.libvirt.driver [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Attempting to detach device tapa6ea89b5-e2 from instance d49e7eb2-dcdb-4410-a016-42e58d1d1416 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.530 239969 DEBUG nova.virt.libvirt.guest [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] detach device xml: <interface type="ethernet">
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:7e:92:c2"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <target dev="tapa6ea89b5-e2"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]: </interface>
Jan 26 15:57:17 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.790 239969 DEBUG nova.network.neutron [req-c41e4341-ee89-401a-a9ae-00a915bb0b76 req-8460832d-d613-4051-8338-6cd1a2b5e2e1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updated VIF entry in instance network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.791 239969 DEBUG nova.network.neutron [req-c41e4341-ee89-401a-a9ae-00a915bb0b76 req-8460832d-d613-4051-8338-6cd1a2b5e2e1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 201 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.2 MiB/s wr, 57 op/s
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.808 239969 DEBUG oslo_concurrency.lockutils [req-c41e4341-ee89-401a-a9ae-00a915bb0b76 req-8460832d-d613-4051-8338-6cd1a2b5e2e1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.809 239969 DEBUG oslo_concurrency.lockutils [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.809 239969 DEBUG nova.network.neutron [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing network info cache for port a6ea89b5-e240-4a9d-bad7-8e99a81e6805 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.812 239969 DEBUG nova.compute.manager [req-a274a5cc-01d9-4933-a28a-d36c5164baa6 req-611c47e7-89a1-4894-9778-920c0bf6a983 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.813 239969 DEBUG oslo_concurrency.lockutils [req-a274a5cc-01d9-4933-a28a-d36c5164baa6 req-611c47e7-89a1-4894-9778-920c0bf6a983 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.813 239969 DEBUG oslo_concurrency.lockutils [req-a274a5cc-01d9-4933-a28a-d36c5164baa6 req-611c47e7-89a1-4894-9778-920c0bf6a983 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.813 239969 DEBUG oslo_concurrency.lockutils [req-a274a5cc-01d9-4933-a28a-d36c5164baa6 req-611c47e7-89a1-4894-9778-920c0bf6a983 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.813 239969 DEBUG nova.compute.manager [req-a274a5cc-01d9-4933-a28a-d36c5164baa6 req-611c47e7-89a1-4894-9778-920c0bf6a983 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] No waiting events found dispatching network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.813 239969 WARNING nova.compute.manager [req-a274a5cc-01d9-4933-a28a-d36c5164baa6 req-611c47e7-89a1-4894-9778-920c0bf6a983 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received unexpected event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 for instance with vm_state active and task_state None.
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.851 239969 DEBUG nova.virt.libvirt.guest [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.857 239969 DEBUG nova.virt.libvirt.guest [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface>not found in domain: <domain type='kvm' id='47'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <name>instance-0000002b</name>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <uuid>d49e7eb2-dcdb-4410-a016-42e58d1d1416</uuid>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:name>tempest-tempest.common.compute-instance-2044484805</nova:name>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:57:16</nova:creationTime>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:port uuid="32baa4e5-2605-4a3f-8023-6a80ea71793d">
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:port uuid="a6ea89b5-e240-4a9d-bad7-8e99a81e6805">
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:57:17 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <system>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='serial'>d49e7eb2-dcdb-4410-a016-42e58d1d1416</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='uuid'>d49e7eb2-dcdb-4410-a016-42e58d1d1416</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </system>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <os>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </os>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <features>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </features>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk' index='2'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk.config' index='1'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:b7:54:a4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target dev='tap32baa4e5-26'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:7e:92:c2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target dev='tapa6ea89b5-e2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='net1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/console.log' append='off'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </target>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/console.log' append='off'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </console>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <video>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </video>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c50,c792</label>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c50,c792</imagelabel>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:57:17 compute-0 nova_compute[239965]: </domain>
Jan 26 15:57:17 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.858 239969 INFO nova.virt.libvirt.driver [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully detached device tapa6ea89b5-e2 from instance d49e7eb2-dcdb-4410-a016-42e58d1d1416 from the persistent domain config.
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.858 239969 DEBUG nova.virt.libvirt.driver [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] (1/8): Attempting to detach device tapa6ea89b5-e2 with device alias net1 from instance d49e7eb2-dcdb-4410-a016-42e58d1d1416 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.859 239969 DEBUG nova.virt.libvirt.guest [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] detach device xml: <interface type="ethernet">
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:7e:92:c2"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <target dev="tapa6ea89b5-e2"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]: </interface>
Jan 26 15:57:17 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 15:57:17 compute-0 kernel: tapa6ea89b5-e2 (unregistering): left promiscuous mode
Jan 26 15:57:17 compute-0 NetworkManager[48954]: <info>  [1769443037.9542] device (tapa6ea89b5-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.956 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:17 compute-0 ovn_controller[146046]: 2026-01-26T15:57:17Z|00343|binding|INFO|Releasing lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 from this chassis (sb_readonly=0)
Jan 26 15:57:17 compute-0 ovn_controller[146046]: 2026-01-26T15:57:17Z|00344|binding|INFO|Setting lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 down in Southbound
Jan 26 15:57:17 compute-0 ovn_controller[146046]: 2026-01-26T15:57:17Z|00345|binding|INFO|Removing iface tapa6ea89b5-e2 ovn-installed in OVS
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.958 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.963 239969 DEBUG nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Received event <DeviceRemovedEvent: 1769443037.9630911, d49e7eb2-dcdb-4410-a016-42e58d1d1416 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.964 239969 DEBUG nova.virt.libvirt.driver [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Start waiting for the detach event from libvirt for device tapa6ea89b5-e2 with device alias net1 for instance d49e7eb2-dcdb-4410-a016-42e58d1d1416 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.965 239969 DEBUG nova.virt.libvirt.guest [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:57:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:17.965 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:92:c2 10.100.0.14'], port_security=['fa:16:3e:7e:92:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1213915953', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'd49e7eb2-dcdb-4410-a016-42e58d1d1416', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1213915953', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a6ea89b5-e240-4a9d-bad7-8e99a81e6805) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:17.967 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a6ea89b5-e240-4a9d-bad7-8e99a81e6805 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.967 239969 DEBUG nova.virt.libvirt.guest [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface>not found in domain: <domain type='kvm' id='47'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <name>instance-0000002b</name>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <uuid>d49e7eb2-dcdb-4410-a016-42e58d1d1416</uuid>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:name>tempest-tempest.common.compute-instance-2044484805</nova:name>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:57:16</nova:creationTime>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:port uuid="32baa4e5-2605-4a3f-8023-6a80ea71793d">
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:port uuid="a6ea89b5-e240-4a9d-bad7-8e99a81e6805">
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:57:17 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <system>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='serial'>d49e7eb2-dcdb-4410-a016-42e58d1d1416</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='uuid'>d49e7eb2-dcdb-4410-a016-42e58d1d1416</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </system>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <os>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </os>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <features>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </features>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk' index='2'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/d49e7eb2-dcdb-4410-a016-42e58d1d1416_disk.config' index='1'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:57:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:17.968 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:b7:54:a4'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target dev='tap32baa4e5-26'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/console.log' append='off'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       </target>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416/console.log' append='off'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </console>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <video>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </video>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c50,c792</label>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c50,c792</imagelabel>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:57:17 compute-0 nova_compute[239965]: </domain>
Jan 26 15:57:17 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.968 239969 INFO nova.virt.libvirt.driver [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully detached device tapa6ea89b5-e2 from instance d49e7eb2-dcdb-4410-a016-42e58d1d1416 from the live domain config.
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.968 239969 DEBUG nova.virt.libvirt.vif [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044484805',display_name='tempest-tempest.common.compute-instance-2044484805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044484805',id=43,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-lr8w7kzm',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=d49e7eb2-dcdb-4410-a016-42e58d1d1416,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.969 239969 DEBUG nova.network.os_vif_util [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.969 239969 DEBUG nova.network.os_vif_util [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.970 239969 DEBUG os_vif [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.971 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.971 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6ea89b5-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.973 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.976 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.977 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.980 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.982 239969 INFO os_vif [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2')
Jan 26 15:57:17 compute-0 nova_compute[239965]: 2026-01-26 15:57:17.983 239969 DEBUG nova.virt.libvirt.guest [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:name>tempest-tempest.common.compute-instance-2044484805</nova:name>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:57:17</nova:creationTime>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     <nova:port uuid="32baa4e5-2605-4a3f-8023-6a80ea71793d">
Jan 26 15:57:17 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:57:17 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:17 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:57:17 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:57:17 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:57:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:17.984 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[858eec14-e115-4013-a5b7-88a86483a088]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.017 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[274ddacb-c4a9-47a5-ab65-4383e9909326]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.021 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3033ca1c-f2b1-4504-bdbd-f33f6cc4f0c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.049 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6eaed4ec-0557-4c4c-95b4-5d4b9fe4dbca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.074 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e09173-9575-42a9-a89e-82e7adb0a24a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448592, 'reachable_time': 26761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280836, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.098 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b0895b-749f-4059-9f11-c89756fa0afa]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448604, 'tstamp': 448604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280837, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448607, 'tstamp': 448607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280837, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.100 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:18 compute-0 nova_compute[239965]: 2026-01-26 15:57:18.102 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:18 compute-0 nova_compute[239965]: 2026-01-26 15:57:18.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.104 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.104 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.105 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:18.106 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4111701761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:19 compute-0 nova_compute[239965]: 2026-01-26 15:57:19.547 239969 DEBUG nova.network.neutron [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Successfully updated port: e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:57:19 compute-0 nova_compute[239965]: 2026-01-26 15:57:19.564 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.219s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:19 compute-0 nova_compute[239965]: 2026-01-26 15:57:19.592 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:19 compute-0 nova_compute[239965]: 2026-01-26 15:57:19.592 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquired lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:19 compute-0 nova_compute[239965]: 2026-01-26 15:57:19.592 239969 DEBUG nova.network.neutron [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:57:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:19 compute-0 nova_compute[239965]: 2026-01-26 15:57:19.633 239969 DEBUG nova.storage.rbd_utils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] resizing rbd image d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:57:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 201 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.2 MiB/s wr, 56 op/s
Jan 26 15:57:19 compute-0 ceph-mon[75140]: pgmap v1321: 305 pgs: 305 active+clean; 201 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.2 MiB/s wr, 57 op/s
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.147 239969 DEBUG nova.compute.manager [req-7052cdf9-a4bd-47ea-bb9a-6d69c22a227a req-8b55ac5d-4e45-45c0-a32c-16cd0c377d84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received event network-changed-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.147 239969 DEBUG nova.compute.manager [req-7052cdf9-a4bd-47ea-bb9a-6d69c22a227a req-8b55ac5d-4e45-45c0-a32c-16cd0c377d84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Refreshing instance network info cache due to event network-changed-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.148 239969 DEBUG oslo_concurrency.lockutils [req-7052cdf9-a4bd-47ea-bb9a-6d69c22a227a req-8b55ac5d-4e45-45c0-a32c-16cd0c377d84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.181 239969 DEBUG nova.network.neutron [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.261 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.531 239969 DEBUG nova.network.neutron [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updated VIF entry in instance network info cache for port a6ea89b5-e240-4a9d-bad7-8e99a81e6805. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.531 239969 DEBUG nova.network.neutron [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.556 239969 DEBUG oslo_concurrency.lockutils [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.557 239969 DEBUG nova.compute.manager [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.557 239969 DEBUG oslo_concurrency.lockutils [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.557 239969 DEBUG oslo_concurrency.lockutils [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.558 239969 DEBUG oslo_concurrency.lockutils [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.558 239969 DEBUG nova.compute.manager [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] No waiting events found dispatching network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.558 239969 WARNING nova.compute.manager [req-a4ff1653-0eee-4b82-9104-ad79dd7455f8 req-3a64c88f-a322-484e-ab54-b1f7d06dac37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received unexpected event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 for instance with vm_state active and task_state None.
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.909 239969 DEBUG oslo_concurrency.lockutils [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.910 239969 DEBUG oslo_concurrency.lockutils [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:20 compute-0 nova_compute[239965]: 2026-01-26 15:57:20.910 239969 DEBUG nova.network.neutron [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:57:21 compute-0 nova_compute[239965]: 2026-01-26 15:57:21.634 239969 DEBUG nova.network.neutron [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Updating instance_info_cache with network_info: [{"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:21 compute-0 nova_compute[239965]: 2026-01-26 15:57:21.657 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Releasing lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:21 compute-0 nova_compute[239965]: 2026-01-26 15:57:21.658 239969 DEBUG nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Instance network_info: |[{"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:57:21 compute-0 nova_compute[239965]: 2026-01-26 15:57:21.658 239969 DEBUG oslo_concurrency.lockutils [req-7052cdf9-a4bd-47ea-bb9a-6d69c22a227a req-8b55ac5d-4e45-45c0-a32c-16cd0c377d84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:21 compute-0 nova_compute[239965]: 2026-01-26 15:57:21.658 239969 DEBUG nova.network.neutron [req-7052cdf9-a4bd-47ea-bb9a-6d69c22a227a req-8b55ac5d-4e45-45c0-a32c-16cd0c377d84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Refreshing network info cache for port e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 209 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.5 MiB/s wr, 59 op/s
Jan 26 15:57:21 compute-0 nova_compute[239965]: 2026-01-26 15:57:21.969 239969 DEBUG nova.compute.manager [req-d1da47d8-6352-4cbd-a67f-76e3a8316cd4 req-b046e64e-4fd4-417a-9aea-d8e592f46802 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:21 compute-0 nova_compute[239965]: 2026-01-26 15:57:21.969 239969 DEBUG nova.compute.manager [req-d1da47d8-6352-4cbd-a67f-76e3a8316cd4 req-b046e64e-4fd4-417a-9aea-d8e592f46802 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing instance network info cache due to event network-changed-32baa4e5-2605-4a3f-8023-6a80ea71793d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:21 compute-0 nova_compute[239965]: 2026-01-26 15:57:21.970 239969 DEBUG oslo_concurrency.lockutils [req-d1da47d8-6352-4cbd-a67f-76e3a8316cd4 req-b046e64e-4fd4-417a-9aea-d8e592f46802 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.272 239969 DEBUG nova.compute.manager [req-89a96c86-e510-41a2-939e-1bd3bc0ab986 req-edc03be0-1cca-402b-8ad5-5ad6d56ab8d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.273 239969 DEBUG oslo_concurrency.lockutils [req-89a96c86-e510-41a2-939e-1bd3bc0ab986 req-edc03be0-1cca-402b-8ad5-5ad6d56ab8d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.273 239969 DEBUG oslo_concurrency.lockutils [req-89a96c86-e510-41a2-939e-1bd3bc0ab986 req-edc03be0-1cca-402b-8ad5-5ad6d56ab8d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.273 239969 DEBUG oslo_concurrency.lockutils [req-89a96c86-e510-41a2-939e-1bd3bc0ab986 req-edc03be0-1cca-402b-8ad5-5ad6d56ab8d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.273 239969 DEBUG nova.compute.manager [req-89a96c86-e510-41a2-939e-1bd3bc0ab986 req-edc03be0-1cca-402b-8ad5-5ad6d56ab8d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] No waiting events found dispatching network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.273 239969 WARNING nova.compute.manager [req-89a96c86-e510-41a2-939e-1bd3bc0ab986 req-edc03be0-1cca-402b-8ad5-5ad6d56ab8d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received unexpected event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 for instance with vm_state active and task_state None.
Jan 26 15:57:22 compute-0 ceph-mon[75140]: pgmap v1322: 305 pgs: 305 active+clean; 201 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 2.2 MiB/s wr, 56 op/s
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.860 239969 INFO nova.network.neutron [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Port a6ea89b5-e240-4a9d-bad7-8e99a81e6805 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.860 239969 DEBUG nova.network.neutron [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.881 239969 DEBUG oslo_concurrency.lockutils [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.883 239969 DEBUG oslo_concurrency.lockutils [req-d1da47d8-6352-4cbd-a67f-76e3a8316cd4 req-b046e64e-4fd4-417a-9aea-d8e592f46802 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.883 239969 DEBUG nova.network.neutron [req-d1da47d8-6352-4cbd-a67f-76e3a8316cd4 req-b046e64e-4fd4-417a-9aea-d8e592f46802 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Refreshing network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.904 239969 DEBUG oslo_concurrency.lockutils [None req-db39690c-679e-4727-8920-4a6315c4458f 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-d49e7eb2-dcdb-4410-a016-42e58d1d1416-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 5.430s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:22 compute-0 nova_compute[239965]: 2026-01-26 15:57:22.974 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 232 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.3 MiB/s wr, 67 op/s
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.076 239969 DEBUG nova.compute.manager [req-356dad34-f1d8-4b18-bde9-89e276d88c21 req-2a2d11a0-cd6f-4244-a9cb-1e2d9cbc9e89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-changed-2722b13b-6ac2-481f-93b0-a5f0e4664e24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.077 239969 DEBUG nova.compute.manager [req-356dad34-f1d8-4b18-bde9-89e276d88c21 req-2a2d11a0-cd6f-4244-a9cb-1e2d9cbc9e89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing instance network info cache due to event network-changed-2722b13b-6ac2-481f-93b0-a5f0e4664e24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.077 239969 DEBUG oslo_concurrency.lockutils [req-356dad34-f1d8-4b18-bde9-89e276d88c21 req-2a2d11a0-cd6f-4244-a9cb-1e2d9cbc9e89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.077 239969 DEBUG oslo_concurrency.lockutils [req-356dad34-f1d8-4b18-bde9-89e276d88c21 req-2a2d11a0-cd6f-4244-a9cb-1e2d9cbc9e89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.077 239969 DEBUG nova.network.neutron [req-356dad34-f1d8-4b18-bde9-89e276d88c21 req-2a2d11a0-cd6f-4244-a9cb-1e2d9cbc9e89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing network info cache for port 2722b13b-6ac2-481f-93b0-a5f0e4664e24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:24 compute-0 ceph-mon[75140]: pgmap v1323: 305 pgs: 305 active+clean; 209 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.5 MiB/s wr, 59 op/s
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.479 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3940597d-f11c-4959-85c5-bb51a373cec5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 7.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.518 239969 DEBUG nova.objects.instance [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lazy-loading 'migration_context' on Instance uuid d9b6d871-11f5-46ea-b7f2-502cd8b12b21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.582 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.582 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Ensure instance console log exists: /var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.583 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.583 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.584 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.586 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Start _get_guest_xml network_info=[{"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.595 239969 DEBUG nova.storage.rbd_utils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] resizing rbd image 3940597d-f11c-4959-85c5-bb51a373cec5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:57:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.865 239969 WARNING nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.873 239969 DEBUG nova.virt.libvirt.host [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.874 239969 DEBUG nova.virt.libvirt.host [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.877 239969 DEBUG nova.virt.libvirt.host [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.878 239969 DEBUG nova.virt.libvirt.host [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.878 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.879 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.879 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.880 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.880 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.880 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.880 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.880 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.881 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.881 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.881 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.881 239969 DEBUG nova.virt.hardware [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:57:24 compute-0 nova_compute[239965]: 2026-01-26 15:57:24.885 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:25 compute-0 ceph-mon[75140]: pgmap v1324: 305 pgs: 305 active+clean; 232 MiB data, 523 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 3.3 MiB/s wr, 67 op/s
Jan 26 15:57:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:57:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1627060621' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.485 239969 DEBUG nova.objects.instance [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lazy-loading 'migration_context' on Instance uuid 3940597d-f11c-4959-85c5-bb51a373cec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.489 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.515 239969 DEBUG nova.storage.rbd_utils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.520 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.562 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.564 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Ensure instance console log exists: /var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.564 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.564 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.565 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.566 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.571 239969 WARNING nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.577 239969 DEBUG nova.virt.libvirt.host [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.578 239969 DEBUG nova.virt.libvirt.host [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.581 239969 DEBUG nova.virt.libvirt.host [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.582 239969 DEBUG nova.virt.libvirt.host [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.582 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.582 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.583 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.583 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.584 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.584 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.584 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.584 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.585 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.585 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.585 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.585 239969 DEBUG nova.virt.hardware [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.590 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 269 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 5.2 MiB/s wr, 102 op/s
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.841 239969 DEBUG nova.network.neutron [req-7052cdf9-a4bd-47ea-bb9a-6d69c22a227a req-8b55ac5d-4e45-45c0-a32c-16cd0c377d84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Updated VIF entry in instance network info cache for port e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.843 239969 DEBUG nova.network.neutron [req-7052cdf9-a4bd-47ea-bb9a-6d69c22a227a req-8b55ac5d-4e45-45c0-a32c-16cd0c377d84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Updating instance_info_cache with network_info: [{"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.859 239969 DEBUG oslo_concurrency.lockutils [req-7052cdf9-a4bd-47ea-bb9a-6d69c22a227a req-8b55ac5d-4e45-45c0-a32c-16cd0c377d84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.943 239969 DEBUG oslo_concurrency.lockutils [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-0815f972-3907-402e-b6b4-f1f1fc93a8ab-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.944 239969 DEBUG oslo_concurrency.lockutils [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-0815f972-3907-402e-b6b4-f1f1fc93a8ab-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:25 compute-0 nova_compute[239965]: 2026-01-26 15:57:25.945 239969 DEBUG nova.objects.instance [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid 0815f972-3907-402e-b6b4-f1f1fc93a8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:26 compute-0 sudo[281065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:57:26 compute-0 sudo[281065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:57:26 compute-0 sudo[281065]: pam_unix(sudo:session): session closed for user root
Jan 26 15:57:26 compute-0 sudo[281090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:57:26 compute-0 sudo[281090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080681418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.256 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.278 239969 DEBUG nova.virt.libvirt.vif [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:57:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-806579145',display_name='tempest-SecurityGroupsTestJSON-server-806579145',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-806579145',id=46,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acad8b8490f840ba8d8c5d0d91874b79',ramdisk_id='',reservation_id='r-wz9x4q6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-188832966',owner_user_name='tempest-SecurityGroupsTestJSON-188832966-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:57:15Z,user_data=None,user_id='fe6e9eb388ed48fc93bf3f4b984f8d1c',uuid=d9b6d871-11f5-46ea-b7f2-502cd8b12b21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.282 239969 DEBUG nova.network.os_vif_util [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converting VIF {"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.283 239969 DEBUG nova.network.os_vif_util [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:a8:be,bridge_name='br-int',has_traffic_filtering=True,id=e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2bbbc1d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.285 239969 DEBUG nova.objects.instance [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lazy-loading 'pci_devices' on Instance uuid d9b6d871-11f5-46ea-b7f2-502cd8b12b21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.302 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <uuid>d9b6d871-11f5-46ea-b7f2-502cd8b12b21</uuid>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <name>instance-0000002e</name>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <nova:name>tempest-SecurityGroupsTestJSON-server-806579145</nova:name>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:57:24</nova:creationTime>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <nova:user uuid="fe6e9eb388ed48fc93bf3f4b984f8d1c">tempest-SecurityGroupsTestJSON-188832966-project-member</nova:user>
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <nova:project uuid="acad8b8490f840ba8d8c5d0d91874b79">tempest-SecurityGroupsTestJSON-188832966</nova:project>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <nova:port uuid="e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93">
Jan 26 15:57:26 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <system>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <entry name="serial">d9b6d871-11f5-46ea-b7f2-502cd8b12b21</entry>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <entry name="uuid">d9b6d871-11f5-46ea-b7f2-502cd8b12b21</entry>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     </system>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <os>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   </os>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <features>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   </features>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk">
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk.config">
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:57:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:ba:a8:be"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <target dev="tape2bbbc1d-7d"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21/console.log" append="off"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <video>
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     </video>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:57:26 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:57:26 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:57:26 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:57:26 compute-0 nova_compute[239965]: </domain>
Jan 26 15:57:26 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.304 239969 DEBUG nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Preparing to wait for external event network-vif-plugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.304 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.304 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.305 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.305 239969 DEBUG nova.virt.libvirt.vif [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:57:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-806579145',display_name='tempest-SecurityGroupsTestJSON-server-806579145',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-806579145',id=46,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acad8b8490f840ba8d8c5d0d91874b79',ramdisk_id='',reservation_id='r-wz9x4q6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-188832966',owner_user_name='tempest-SecurityGroupsTestJSON-188832966-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:57:15Z,user_data=None,user_id='fe6e9eb388ed48fc93bf3f4b984f8d1c',uuid=d9b6d871-11f5-46ea-b7f2-502cd8b12b21,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.306 239969 DEBUG nova.network.os_vif_util [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converting VIF {"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.306 239969 DEBUG nova.network.os_vif_util [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ba:a8:be,bridge_name='br-int',has_traffic_filtering=True,id=e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2bbbc1d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.307 239969 DEBUG os_vif [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:a8:be,bridge_name='br-int',has_traffic_filtering=True,id=e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2bbbc1d-7d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/269523875' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.307 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.308 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.308 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.316 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape2bbbc1d-7d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.317 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape2bbbc1d-7d, col_values=(('external_ids', {'iface-id': 'e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ba:a8:be', 'vm-uuid': 'd9b6d871-11f5-46ea-b7f2-502cd8b12b21'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:26 compute-0 NetworkManager[48954]: <info>  [1769443046.3201] manager: (tape2bbbc1d-7d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/163)
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.327 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.329 239969 INFO os_vif [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ba:a8:be,bridge_name='br-int',has_traffic_filtering=True,id=e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2bbbc1d-7d')
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.331 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.742s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.366 239969 DEBUG nova.storage.rbd_utils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] rbd image 3940597d-f11c-4959-85c5-bb51a373cec5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.387 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1627060621' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:26 compute-0 ceph-mon[75140]: pgmap v1325: 305 pgs: 305 active+clean; 269 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 390 KiB/s rd, 5.2 MiB/s wr, 102 op/s
Jan 26 15:57:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3080681418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/269523875' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.511 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.512 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.513 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] No VIF found with MAC fa:16:3e:ba:a8:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.514 239969 INFO nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Using config drive
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.626 239969 DEBUG nova.storage.rbd_utils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:26 compute-0 sudo[281090]: pam_unix(sudo:session): session closed for user root
Jan 26 15:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.899 239969 DEBUG nova.network.neutron [req-d1da47d8-6352-4cbd-a67f-76e3a8316cd4 req-b046e64e-4fd4-417a-9aea-d8e592f46802 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updated VIF entry in instance network info cache for port 32baa4e5-2605-4a3f-8023-6a80ea71793d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.900 239969 DEBUG nova.network.neutron [req-d1da47d8-6352-4cbd-a67f-76e3a8316cd4 req-b046e64e-4fd4-417a-9aea-d8e592f46802 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [{"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.916 239969 DEBUG oslo_concurrency.lockutils [req-d1da47d8-6352-4cbd-a67f-76e3a8316cd4 req-b046e64e-4fd4-417a-9aea-d8e592f46802 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d49e7eb2-dcdb-4410-a016-42e58d1d1416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.919 239969 DEBUG nova.objects.instance [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'pci_requests' on Instance uuid 0815f972-3907-402e-b6b4-f1f1fc93a8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:26 compute-0 nova_compute[239965]: 2026-01-26 15:57:26.930 239969 DEBUG nova.network.neutron [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:57:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:57:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2954207680' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.073 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.075 239969 DEBUG nova.objects.instance [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lazy-loading 'pci_devices' on Instance uuid 3940597d-f11c-4959-85c5-bb51a373cec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.132 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <uuid>3940597d-f11c-4959-85c5-bb51a373cec5</uuid>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <name>instance-0000002f</name>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersAaction247Test-server-994219905</nova:name>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:57:25</nova:creationTime>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <nova:user uuid="09c34e06e37948cb8302a7f6cd04d4e5">tempest-ServersAaction247Test-923058504-project-member</nova:user>
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <nova:project uuid="92c1bb62f7f64e63a97025674c154318">tempest-ServersAaction247Test-923058504</nova:project>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <system>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <entry name="serial">3940597d-f11c-4959-85c5-bb51a373cec5</entry>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <entry name="uuid">3940597d-f11c-4959-85c5-bb51a373cec5</entry>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     </system>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <os>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   </os>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <features>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   </features>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3940597d-f11c-4959-85c5-bb51a373cec5_disk">
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3940597d-f11c-4959-85c5-bb51a373cec5_disk.config">
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:57:27 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5/console.log" append="off"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <video>
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     </video>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:57:27 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:57:27 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:57:27 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:57:27 compute-0 nova_compute[239965]: </domain>
Jan 26 15:57:27 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.169 239969 DEBUG nova.network.neutron [req-356dad34-f1d8-4b18-bde9-89e276d88c21 req-2a2d11a0-cd6f-4244-a9cb-1e2d9cbc9e89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updated VIF entry in instance network info cache for port 2722b13b-6ac2-481f-93b0-a5f0e4664e24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.169 239969 DEBUG nova.network.neutron [req-356dad34-f1d8-4b18-bde9-89e276d88c21 req-2a2d11a0-cd6f-4244-a9cb-1e2d9cbc9e89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updating instance_info_cache with network_info: [{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.182 239969 INFO nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Creating config drive at /var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21/disk.config
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.187 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9x2cqckg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:57:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:57:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:57:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:57:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:57:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:57:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.220 239969 DEBUG oslo_concurrency.lockutils [req-356dad34-f1d8-4b18-bde9-89e276d88c21 req-2a2d11a0-cd6f-4244-a9cb-1e2d9cbc9e89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:27 compute-0 sudo[281212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:57:27 compute-0 sudo[281212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:57:27 compute-0 sudo[281212]: pam_unix(sudo:session): session closed for user root
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.291 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.292 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.292 239969 INFO nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Using config drive
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.312 239969 DEBUG nova.storage.rbd_utils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] rbd image 3940597d-f11c-4959-85c5-bb51a373cec5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.322 239969 DEBUG nova.policy [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '700f80e28de6426c81d2be69a7fd8ffb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.327 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9x2cqckg" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:27 compute-0 sudo[281239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:57:27 compute-0 sudo[281239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.361 239969 DEBUG nova.storage.rbd_utils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.368 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21/disk.config d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.514 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.559 239969 INFO nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Creating config drive at /var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5/disk.config
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.566 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpczx31vey execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:57:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2954207680' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.720 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpczx31vey" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:27 compute-0 podman[281328]: 2026-01-26 15:57:27.647501727 +0000 UTC m=+0.023565099 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.749 239969 DEBUG nova.storage.rbd_utils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] rbd image 3940597d-f11c-4959-85c5-bb51a373cec5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:27 compute-0 nova_compute[239965]: 2026-01-26 15:57:27.754 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5/disk.config 3940597d-f11c-4959-85c5-bb51a373cec5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1326: 305 pgs: 305 active+clean; 293 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 5.5 MiB/s wr, 93 op/s
Jan 26 15:57:27 compute-0 podman[281328]: 2026-01-26 15:57:27.95035675 +0000 UTC m=+0.326420092 container create a4ec8f785dc633d225296a29cacca3a11db2188a337ffcc52b89fb456df47f20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bouman, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 15:57:28 compute-0 systemd[1]: Started libpod-conmon-a4ec8f785dc633d225296a29cacca3a11db2188a337ffcc52b89fb456df47f20.scope.
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.143 239969 DEBUG oslo_concurrency.processutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21/disk.config d9b6d871-11f5-46ea-b7f2-502cd8b12b21_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.144 239969 INFO nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Deleting local config drive /var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21/disk.config because it was imported into RBD.
Jan 26 15:57:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:57:28 compute-0 podman[281328]: 2026-01-26 15:57:28.178565571 +0000 UTC m=+0.554628943 container init a4ec8f785dc633d225296a29cacca3a11db2188a337ffcc52b89fb456df47f20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bouman, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:57:28 compute-0 podman[281328]: 2026-01-26 15:57:28.189427568 +0000 UTC m=+0.565490910 container start a4ec8f785dc633d225296a29cacca3a11db2188a337ffcc52b89fb456df47f20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bouman, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 15:57:28 compute-0 beautiful_bouman[281407]: 167 167
Jan 26 15:57:28 compute-0 podman[281380]: 2026-01-26 15:57:28.201782151 +0000 UTC m=+0.194243638 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 15:57:28 compute-0 NetworkManager[48954]: <info>  [1769443048.2195] manager: (tape2bbbc1d-7d): new Tun device (/org/freedesktop/NetworkManager/Devices/164)
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.193 239969 DEBUG oslo_concurrency.processutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5/disk.config 3940597d-f11c-4959-85c5-bb51a373cec5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:28 compute-0 kernel: tape2bbbc1d-7d: entered promiscuous mode
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.202 239969 INFO nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Deleting local config drive /var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5/disk.config because it was imported into RBD.
Jan 26 15:57:28 compute-0 systemd[1]: libpod-a4ec8f785dc633d225296a29cacca3a11db2188a337ffcc52b89fb456df47f20.scope: Deactivated successfully.
Jan 26 15:57:28 compute-0 podman[281328]: 2026-01-26 15:57:28.199177957 +0000 UTC m=+0.575241329 container attach a4ec8f785dc633d225296a29cacca3a11db2188a337ffcc52b89fb456df47f20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bouman, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:57:28 compute-0 podman[281328]: 2026-01-26 15:57:28.225438212 +0000 UTC m=+0.601501564 container died a4ec8f785dc633d225296a29cacca3a11db2188a337ffcc52b89fb456df47f20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:57:28 compute-0 ovn_controller[146046]: 2026-01-26T15:57:28Z|00346|binding|INFO|Claiming lport e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 for this chassis.
Jan 26 15:57:28 compute-0 ovn_controller[146046]: 2026-01-26T15:57:28Z|00347|binding|INFO|e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93: Claiming fa:16:3e:ba:a8:be 10.100.0.12
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.233 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.240 239969 DEBUG nova.network.neutron [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Successfully updated port: a6ea89b5-e240-4a9d-bad7-8e99a81e6805 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.245 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:a8:be 10.100.0.12'], port_security=['fa:16:3e:ba:a8:be 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd9b6d871-11f5-46ea-b7f2-502cd8b12b21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acad8b8490f840ba8d8c5d0d91874b79', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5fbd4415-78f2-4f82-97a8-e19004f35ccb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33add925-8073-4624-a365-97f1e7b74639, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.247 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 in datapath 3bc677e6-ddd2-49e0-822d-6df359232a0e bound to our chassis
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.248 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bc677e6-ddd2-49e0-822d-6df359232a0e
Jan 26 15:57:28 compute-0 ovn_controller[146046]: 2026-01-26T15:57:28Z|00348|binding|INFO|Setting lport e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 ovn-installed in OVS
Jan 26 15:57:28 compute-0 ovn_controller[146046]: 2026-01-26T15:57:28Z|00349|binding|INFO|Setting lport e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 up in Southbound
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.260 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.261 239969 DEBUG oslo_concurrency.lockutils [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.263 239969 DEBUG oslo_concurrency.lockutils [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.263 239969 DEBUG nova.network.neutron [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.269 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f33982-6878-484d-85e4-be6d4077375f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.271 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3bc677e6-d1 in ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f9f21afe8a0edd7961a09121ebf147184597ca42813aed01ac52ee57e851155-merged.mount: Deactivated successfully.
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.273 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3bc677e6-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.273 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8df2b457-1bbd-45b5-a812-c5f2ba8321ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 systemd-udevd[281457]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.279 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc2b2cd-7765-473d-8448-03f912397085]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 NetworkManager[48954]: <info>  [1769443048.2988] device (tape2bbbc1d-7d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:57:28 compute-0 NetworkManager[48954]: <info>  [1769443048.2999] device (tape2bbbc1d-7d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.304 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[871b07fa-2ca7-40f3-81f6-83f58cf73e59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 systemd-machined[208061]: New machine qemu-51-instance-0000002e.
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.329 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b28778b9-8a80-4acb-8009-7b21ed273f0b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 systemd[1]: Started Virtual Machine qemu-51-instance-0000002e.
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.337 239969 DEBUG nova.compute.manager [req-bb71a676-0f43-4e92-b8c0-adad872b3da3 req-c23e7547-7c22-4d49-b8a1-94d1af2acf87 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-changed-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.338 239969 DEBUG nova.compute.manager [req-bb71a676-0f43-4e92-b8c0-adad872b3da3 req-c23e7547-7c22-4d49-b8a1-94d1af2acf87 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing instance network info cache due to event network-changed-a6ea89b5-e240-4a9d-bad7-8e99a81e6805. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.338 239969 DEBUG oslo_concurrency.lockutils [req-bb71a676-0f43-4e92-b8c0-adad872b3da3 req-c23e7547-7c22-4d49-b8a1-94d1af2acf87 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:28 compute-0 systemd-machined[208061]: New machine qemu-52-instance-0000002f.
Jan 26 15:57:28 compute-0 systemd[1]: Started Virtual Machine qemu-52-instance-0000002f.
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.365 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[da752cb8-5cca-4b7d-8ea0-1257b0c5617d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 NetworkManager[48954]: <info>  [1769443048.3720] manager: (tap3bc677e6-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/165)
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.371 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[274f47ef-7c00-4bf9-94c0-1d0b8d019f70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.401 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b92fc5b9-93a9-44da-bb33-f70f0085a4bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.404 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c43ab184-6836-4ab1-999d-cd02200842f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 NetworkManager[48954]: <info>  [1769443048.4284] device (tap3bc677e6-d0): carrier: link connected
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.438 239969 DEBUG nova.compute.manager [req-cef990c6-7dc4-4bea-b660-25a8a578ff75 req-1402bbe0-c241-4f14-af1e-2f13c4013acb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received event network-vif-plugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.438 239969 DEBUG oslo_concurrency.lockutils [req-cef990c6-7dc4-4bea-b660-25a8a578ff75 req-1402bbe0-c241-4f14-af1e-2f13c4013acb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.439 239969 DEBUG oslo_concurrency.lockutils [req-cef990c6-7dc4-4bea-b660-25a8a578ff75 req-1402bbe0-c241-4f14-af1e-2f13c4013acb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.439 239969 DEBUG oslo_concurrency.lockutils [req-cef990c6-7dc4-4bea-b660-25a8a578ff75 req-1402bbe0-c241-4f14-af1e-2f13c4013acb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.439 239969 DEBUG nova.compute.manager [req-cef990c6-7dc4-4bea-b660-25a8a578ff75 req-1402bbe0-c241-4f14-af1e-2f13c4013acb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Processing event network-vif-plugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.435 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b60b460b-eaf6-4487-b0ee-3aa4f1ec1e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.455 239969 WARNING nova.network.neutron [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] 91bcac0a-1926-4861-88ab-ae3c06f7e57e already exists in list: networks containing: ['91bcac0a-1926-4861-88ab-ae3c06f7e57e']. ignoring it
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.460 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[89a4b1ef-96ba-4d11-b165-4c896c39f957]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bc677e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:3f:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453923, 'reachable_time': 37183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281504, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.480 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e5748716-a9d8-4ae8-877a-c2f23ae6abb4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:3fc8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453923, 'tstamp': 453923}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281505, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.501 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[493b9488-e7d4-4eef-afcd-188c589e7cca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bc677e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:3f:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453923, 'reachable_time': 37183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281506, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.533 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[63995c81-04f3-4fc8-8d96-86298e539368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:57:28
Jan 26 15:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms']
Jan 26 15:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.596 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2821c4-2dc2-46bb-9c0d-8b7a8ac61a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.598 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bc677e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.598 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.598 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bc677e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.600 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:28 compute-0 NetworkManager[48954]: <info>  [1769443048.6008] manager: (tap3bc677e6-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Jan 26 15:57:28 compute-0 kernel: tap3bc677e6-d0: entered promiscuous mode
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.602 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.604 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bc677e6-d0, col_values=(('external_ids', {'iface-id': '66f6ca50-b295-4776-bfa0-51b3ff3fef47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.605 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:28 compute-0 ovn_controller[146046]: 2026-01-26T15:57:28Z|00350|binding|INFO|Releasing lport 66f6ca50-b295-4776-bfa0-51b3ff3fef47 from this chassis (sb_readonly=0)
Jan 26 15:57:28 compute-0 nova_compute[239965]: 2026-01-26 15:57:28.623 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.625 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3bc677e6-ddd2-49e0-822d-6df359232a0e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3bc677e6-ddd2-49e0-822d-6df359232a0e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.626 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c0159507-ec0d-4b16-b9c6-623e28477f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.627 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-3bc677e6-ddd2-49e0-822d-6df359232a0e
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/3bc677e6-ddd2-49e0-822d-6df359232a0e.pid.haproxy
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 3bc677e6-ddd2-49e0-822d-6df359232a0e
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:57:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:28.628 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'env', 'PROCESS_TAG=haproxy-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3bc677e6-ddd2-49e0-822d-6df359232a0e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:57:28 compute-0 ceph-mon[75140]: pgmap v1326: 305 pgs: 305 active+clean; 293 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 5.5 MiB/s wr, 93 op/s
Jan 26 15:57:29 compute-0 podman[281328]: 2026-01-26 15:57:29.158853491 +0000 UTC m=+1.534916833 container remove a4ec8f785dc633d225296a29cacca3a11db2188a337ffcc52b89fb456df47f20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_bouman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:57:29 compute-0 podman[281381]: 2026-01-26 15:57:29.216344373 +0000 UTC m=+1.209000315 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:57:29 compute-0 systemd[1]: libpod-conmon-a4ec8f785dc633d225296a29cacca3a11db2188a337ffcc52b89fb456df47f20.scope: Deactivated successfully.
Jan 26 15:57:29 compute-0 podman[281594]: 2026-01-26 15:57:29.317714221 +0000 UTC m=+0.026840930 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.420 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443049.419489, d9b6d871-11f5-46ea-b7f2-502cd8b12b21 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.420 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] VM Started (Lifecycle Event)
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.423 239969 DEBUG nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.427 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.430 239969 INFO nova.virt.libvirt.driver [-] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Instance spawned successfully.
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.431 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.450 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.455 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.461 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.462 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.462 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.462 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.463 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.463 239969 DEBUG nova.virt.libvirt.driver [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.486 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.486 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443049.4197023, d9b6d871-11f5-46ea-b7f2-502cd8b12b21 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.487 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] VM Paused (Lifecycle Event)
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.510 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.513 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443049.425937, d9b6d871-11f5-46ea-b7f2-502cd8b12b21 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.513 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] VM Resumed (Lifecycle Event)
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.521 239969 INFO nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Took 13.98 seconds to spawn the instance on the hypervisor.
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.521 239969 DEBUG nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.540 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.542 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.564 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.589 239969 INFO nova.compute.manager [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Took 15.19 seconds to build instance.
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.606 239969 DEBUG oslo_concurrency.lockutils [None req-ab826e28-5af1-4889-9a66-9c37c3a0126c fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.729 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443049.7287838, 3940597d-f11c-4959-85c5-bb51a373cec5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.729 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] VM Resumed (Lifecycle Event)
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.733 239969 DEBUG nova.compute.manager [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.733 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.736 239969 INFO nova.virt.libvirt.driver [-] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Instance spawned successfully.
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.736 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.746 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.749 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.755 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.755 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.756 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.756 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.756 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.757 239969 DEBUG nova.virt.libvirt.driver [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.778 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.778 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443049.7326913, 3940597d-f11c-4959-85c5-bb51a373cec5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.778 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] VM Started (Lifecycle Event)
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.808 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.812 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:57:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.5 MiB/s wr, 69 op/s
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.818 239969 INFO nova.compute.manager [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Took 12.66 seconds to spawn the instance on the hypervisor.
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.819 239969 DEBUG nova.compute.manager [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.831 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:57:29 compute-0 podman[281610]: 2026-01-26 15:57:29.822509391 +0000 UTC m=+0.488722817 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.894 239969 INFO nova.compute.manager [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Took 15.08 seconds to build instance.
Jan 26 15:57:29 compute-0 nova_compute[239965]: 2026-01-26 15:57:29.913 239969 DEBUG oslo_concurrency.lockutils [None req-f804d919-2d9e-401f-b479-92b0ab6a2f3f 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "3940597d-f11c-4959-85c5-bb51a373cec5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:29 compute-0 podman[281594]: 2026-01-26 15:57:29.983713328 +0000 UTC m=+0.692839997 container create 56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:57:30 compute-0 systemd[1]: Started libpod-conmon-56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419.scope.
Jan 26 15:57:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8037dfb30be7ba7684d4c7710be473772804716265a5f6c6d201ef663dc581/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:30 compute-0 podman[281610]: 2026-01-26 15:57:30.220966561 +0000 UTC m=+0.887179967 container create e6f663339764cbb71b7b152c613fa538a079912a19a28034e609ff084106c3dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:57:30 compute-0 nova_compute[239965]: 2026-01-26 15:57:30.268 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:30 compute-0 ceph-mon[75140]: pgmap v1327: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.5 MiB/s wr, 69 op/s
Jan 26 15:57:30 compute-0 systemd[1]: Started libpod-conmon-e6f663339764cbb71b7b152c613fa538a079912a19a28034e609ff084106c3dd.scope.
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:57:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d73401bd6141f9a3fc2f621c1db68a4507533f80d6cd8b4f415ac4831840d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d73401bd6141f9a3fc2f621c1db68a4507533f80d6cd8b4f415ac4831840d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d73401bd6141f9a3fc2f621c1db68a4507533f80d6cd8b4f415ac4831840d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d73401bd6141f9a3fc2f621c1db68a4507533f80d6cd8b4f415ac4831840d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1d73401bd6141f9a3fc2f621c1db68a4507533f80d6cd8b4f415ac4831840d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:57:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:57:31 compute-0 podman[281594]: 2026-01-26 15:57:31.217642444 +0000 UTC m=+1.926769153 container init 56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:57:31 compute-0 podman[281594]: 2026-01-26 15:57:31.22726445 +0000 UTC m=+1.936391129 container start 56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:57:31 compute-0 neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e[281658]: [NOTICE]   (281667) : New worker (281669) forked
Jan 26 15:57:31 compute-0 neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e[281658]: [NOTICE]   (281667) : Loading success.
Jan 26 15:57:31 compute-0 podman[281610]: 2026-01-26 15:57:31.320186761 +0000 UTC m=+1.986400197 container init e6f663339764cbb71b7b152c613fa538a079912a19a28034e609ff084106c3dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wescoff, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.320 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:31 compute-0 podman[281610]: 2026-01-26 15:57:31.331938079 +0000 UTC m=+1.998151485 container start e6f663339764cbb71b7b152c613fa538a079912a19a28034e609ff084106c3dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wescoff, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.656 239969 DEBUG nova.network.neutron [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updating instance_info_cache with network_info: [{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.679 239969 DEBUG oslo_concurrency.lockutils [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.680 239969 DEBUG oslo_concurrency.lockutils [req-bb71a676-0f43-4e92-b8c0-adad872b3da3 req-c23e7547-7c22-4d49-b8a1-94d1af2acf87 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.680 239969 DEBUG nova.network.neutron [req-bb71a676-0f43-4e92-b8c0-adad872b3da3 req-c23e7547-7c22-4d49-b8a1-94d1af2acf87 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Refreshing network info cache for port a6ea89b5-e240-4a9d-bad7-8e99a81e6805 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.691 239969 DEBUG nova.virt.libvirt.vif [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-457250150',display_name='tempest-tempest.common.compute-instance-457250150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-457250150',id=45,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:57:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-qhswce1a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:57:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=0815f972-3907-402e-b6b4-f1f1fc93a8ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.695 239969 DEBUG nova.network.os_vif_util [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.700 239969 DEBUG nova.network.os_vif_util [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.702 239969 DEBUG os_vif [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.713 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.717 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.719 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.746 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.747 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6ea89b5-e2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.749 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6ea89b5-e2, col_values=(('external_ids', {'iface-id': 'a6ea89b5-e240-4a9d-bad7-8e99a81e6805', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:92:c2', 'vm-uuid': '0815f972-3907-402e-b6b4-f1f1fc93a8ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:31 compute-0 NetworkManager[48954]: <info>  [1769443051.7557] manager: (tapa6ea89b5-e2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.755 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.758 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.769 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.770 239969 INFO os_vif [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2')
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.771 239969 DEBUG nova.virt.libvirt.vif [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-457250150',display_name='tempest-tempest.common.compute-instance-457250150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-457250150',id=45,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:57:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-qhswce1a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:57:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=0815f972-3907-402e-b6b4-f1f1fc93a8ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.772 239969 DEBUG nova.network.os_vif_util [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.772 239969 DEBUG nova.network.os_vif_util [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.783 239969 DEBUG nova.virt.libvirt.guest [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] attach device xml: <interface type="ethernet">
Jan 26 15:57:31 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:7e:92:c2"/>
Jan 26 15:57:31 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:57:31 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:57:31 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:57:31 compute-0 nova_compute[239965]:   <target dev="tapa6ea89b5-e2"/>
Jan 26 15:57:31 compute-0 nova_compute[239965]: </interface>
Jan 26 15:57:31 compute-0 nova_compute[239965]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 15:57:31 compute-0 kernel: tapa6ea89b5-e2: entered promiscuous mode
Jan 26 15:57:31 compute-0 ovn_controller[146046]: 2026-01-26T15:57:31Z|00351|binding|INFO|Claiming lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 for this chassis.
Jan 26 15:57:31 compute-0 ovn_controller[146046]: 2026-01-26T15:57:31Z|00352|binding|INFO|a6ea89b5-e240-4a9d-bad7-8e99a81e6805: Claiming fa:16:3e:7e:92:c2 10.100.0.14
Jan 26 15:57:31 compute-0 NetworkManager[48954]: <info>  [1769443051.8023] manager: (tapa6ea89b5-e2): new Tun device (/org/freedesktop/NetworkManager/Devices/168)
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.805 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:31.809 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:92:c2 10.100.0.14'], port_security=['fa:16:3e:7e:92:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1213915953', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0815f972-3907-402e-b6b4-f1f1fc93a8ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1213915953', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a6ea89b5-e240-4a9d-bad7-8e99a81e6805) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 3.5 MiB/s wr, 81 op/s
Jan 26 15:57:31 compute-0 ovn_controller[146046]: 2026-01-26T15:57:31Z|00353|binding|INFO|Setting lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 ovn-installed in OVS
Jan 26 15:57:31 compute-0 ovn_controller[146046]: 2026-01-26T15:57:31Z|00354|binding|INFO|Setting lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 up in Southbound
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.836 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:31 compute-0 nova_compute[239965]: 2026-01-26 15:57:31.838 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:31.837 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a6ea89b5-e240-4a9d-bad7-8e99a81e6805 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e bound to our chassis
Jan 26 15:57:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:31.840 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:57:31 compute-0 systemd-udevd[281700]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:57:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:31.859 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[09e84173-84c6-4fd6-8e4a-7bc92d6e8a23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:31 compute-0 NetworkManager[48954]: <info>  [1769443051.8761] device (tapa6ea89b5-e2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:57:31 compute-0 NetworkManager[48954]: <info>  [1769443051.8770] device (tapa6ea89b5-e2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:57:31 compute-0 gracious_wescoff[281663]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:57:31 compute-0 gracious_wescoff[281663]: --> All data devices are unavailable
Jan 26 15:57:31 compute-0 systemd[1]: libpod-e6f663339764cbb71b7b152c613fa538a079912a19a28034e609ff084106c3dd.scope: Deactivated successfully.
Jan 26 15:57:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:31.930 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4d415e1f-500e-40e3-a948-8bfd6134a9f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:31.934 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fbc3ed9c-3148-4ce9-8750-1cd9c985faa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:31.965 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c86430-8b44-4272-b8c9-5447c9f588cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:31.985 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6ad850-2b2f-46d7-900c-34e94e889f0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448592, 'reachable_time': 26761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281720, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:32.001 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e43b2b-0d71-4f6d-91f0-b410c5078dd6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448604, 'tstamp': 448604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281721, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448607, 'tstamp': 448607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281721, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:32.002 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:32.005 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:32.006 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:32.006 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.006 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:32.006 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:32 compute-0 podman[281610]: 2026-01-26 15:57:32.013672832 +0000 UTC m=+2.679886258 container attach e6f663339764cbb71b7b152c613fa538a079912a19a28034e609ff084106c3dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 15:57:32 compute-0 podman[281610]: 2026-01-26 15:57:32.016271036 +0000 UTC m=+2.682484442 container died e6f663339764cbb71b7b152c613fa538a079912a19a28034e609ff084106c3dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wescoff, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.020 239969 DEBUG nova.virt.libvirt.driver [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.021 239969 DEBUG nova.virt.libvirt.driver [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.021 239969 DEBUG nova.virt.libvirt.driver [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:e2:f5:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.021 239969 DEBUG nova.virt.libvirt.driver [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] No VIF found with MAC fa:16:3e:7e:92:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.071 239969 DEBUG nova.virt.libvirt.guest [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:32 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:32 compute-0 nova_compute[239965]:   <nova:name>tempest-tempest.common.compute-instance-457250150</nova:name>
Jan 26 15:57:32 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:57:32</nova:creationTime>
Jan 26 15:57:32 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:57:32 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:32 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:57:32 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:57:32 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:57:32 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:32 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     <nova:port uuid="2722b13b-6ac2-481f-93b0-a5f0e4664e24">
Jan 26 15:57:32 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     <nova:port uuid="a6ea89b5-e240-4a9d-bad7-8e99a81e6805">
Jan 26 15:57:32 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:57:32 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:32 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:57:32 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:57:32 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.103 239969 DEBUG oslo_concurrency.lockutils [None req-10338237-5490-4876-ae6b-3b1803331593 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-0815f972-3907-402e-b6b4-f1f1fc93a8ab-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.568 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.844 239969 DEBUG nova.compute.manager [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:32 compute-0 ceph-mon[75140]: pgmap v1328: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 3.5 MiB/s wr, 81 op/s
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.916 239969 INFO nova.compute.manager [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] instance snapshotting
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.917 239969 DEBUG nova.objects.instance [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lazy-loading 'flavor' on Instance uuid 3940597d-f11c-4959-85c5-bb51a373cec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.963 239969 DEBUG nova.compute.manager [req-06fb1722-1252-4987-8dd9-934b25b3d096 req-b6b33b35-f966-48f7-9dbf-98ee39ab4928 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received event network-vif-plugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.963 239969 DEBUG oslo_concurrency.lockutils [req-06fb1722-1252-4987-8dd9-934b25b3d096 req-b6b33b35-f966-48f7-9dbf-98ee39ab4928 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.964 239969 DEBUG oslo_concurrency.lockutils [req-06fb1722-1252-4987-8dd9-934b25b3d096 req-b6b33b35-f966-48f7-9dbf-98ee39ab4928 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.964 239969 DEBUG oslo_concurrency.lockutils [req-06fb1722-1252-4987-8dd9-934b25b3d096 req-b6b33b35-f966-48f7-9dbf-98ee39ab4928 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.964 239969 DEBUG nova.compute.manager [req-06fb1722-1252-4987-8dd9-934b25b3d096 req-b6b33b35-f966-48f7-9dbf-98ee39ab4928 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] No waiting events found dispatching network-vif-plugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.964 239969 WARNING nova.compute.manager [req-06fb1722-1252-4987-8dd9-934b25b3d096 req-b6b33b35-f966-48f7-9dbf-98ee39ab4928 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received unexpected event network-vif-plugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 for instance with vm_state active and task_state None.
Jan 26 15:57:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:32.967 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:32 compute-0 nova_compute[239965]: 2026-01-26 15:57:32.967 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:32.968 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.093 239969 DEBUG nova.compute.manager [req-4823795a-92c5-41a5-9b68-b44447d35558 req-53a7dee6-0a62-42b0-aee8-d5ca3cc79464 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.093 239969 DEBUG oslo_concurrency.lockutils [req-4823795a-92c5-41a5-9b68-b44447d35558 req-53a7dee6-0a62-42b0-aee8-d5ca3cc79464 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.093 239969 DEBUG oslo_concurrency.lockutils [req-4823795a-92c5-41a5-9b68-b44447d35558 req-53a7dee6-0a62-42b0-aee8-d5ca3cc79464 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.093 239969 DEBUG oslo_concurrency.lockutils [req-4823795a-92c5-41a5-9b68-b44447d35558 req-53a7dee6-0a62-42b0-aee8-d5ca3cc79464 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.094 239969 DEBUG nova.compute.manager [req-4823795a-92c5-41a5-9b68-b44447d35558 req-53a7dee6-0a62-42b0-aee8-d5ca3cc79464 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] No waiting events found dispatching network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.094 239969 WARNING nova.compute.manager [req-4823795a-92c5-41a5-9b68-b44447d35558 req-53a7dee6-0a62-42b0-aee8-d5ca3cc79464 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received unexpected event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 for instance with vm_state active and task_state None.
Jan 26 15:57:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa1d73401bd6141f9a3fc2f621c1db68a4507533f80d6cd8b4f415ac4831840d-merged.mount: Deactivated successfully.
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.308 239969 INFO nova.virt.libvirt.driver [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Beginning live snapshot process
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.477 239969 DEBUG nova.virt.libvirt.imagebackend [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.547 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.549 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.549 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.549 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.620 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquiring lock "58e03608-6ade-4867-ba57-e5b723e5fa71" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.621 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.623 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Acquiring lock "3940597d-f11c-4959-85c5-bb51a373cec5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.623 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "3940597d-f11c-4959-85c5-bb51a373cec5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.623 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Acquiring lock "3940597d-f11c-4959-85c5-bb51a373cec5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.623 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "3940597d-f11c-4959-85c5-bb51a373cec5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.623 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "3940597d-f11c-4959-85c5-bb51a373cec5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.625 239969 INFO nova.compute.manager [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Terminating instance
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.625 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Acquiring lock "refresh_cache-3940597d-f11c-4959-85c5-bb51a373cec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.626 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Acquired lock "refresh_cache-3940597d-f11c-4959-85c5-bb51a373cec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.626 239969 DEBUG nova.network.neutron [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.643 239969 DEBUG nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.785 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.786 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.792 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.793 239969 INFO nova.compute.claims [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:57:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 142 op/s
Jan 26 15:57:33 compute-0 nova_compute[239965]: 2026-01-26 15:57:33.847 239969 DEBUG nova.storage.rbd_utils [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] creating snapshot(e81acc8981424085b7f014d93a19bce8) on rbd image(3940597d-f11c-4959-85c5-bb51a373cec5_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:57:33 compute-0 ovn_controller[146046]: 2026-01-26T15:57:33Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:92:c2 10.100.0.14
Jan 26 15:57:33 compute-0 ovn_controller[146046]: 2026-01-26T15:57:33Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:92:c2 10.100.0.14
Jan 26 15:57:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1540247526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.161 239969 DEBUG nova.network.neutron [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.170 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.199 239969 DEBUG oslo_concurrency.lockutils [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "interface-0815f972-3907-402e-b6b4-f1f1fc93a8ab-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.199 239969 DEBUG oslo_concurrency.lockutils [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-0815f972-3907-402e-b6b4-f1f1fc93a8ab-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.211 239969 DEBUG nova.scheduler.client.report [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.216 239969 DEBUG nova.objects.instance [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'flavor' on Instance uuid 0815f972-3907-402e-b6b4-f1f1fc93a8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.234 239969 DEBUG nova.scheduler.client.report [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.235 239969 DEBUG nova.compute.provider_tree [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.243 239969 DEBUG nova.virt.libvirt.vif [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-457250150',display_name='tempest-tempest.common.compute-instance-457250150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-457250150',id=45,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:57:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-qhswce1a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:57:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=0815f972-3907-402e-b6b4-f1f1fc93a8ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.244 239969 DEBUG nova.network.os_vif_util [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.244 239969 DEBUG nova.network.os_vif_util [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.247 239969 DEBUG nova.virt.libvirt.guest [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.249 239969 DEBUG nova.virt.libvirt.guest [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.252 239969 DEBUG nova.virt.libvirt.driver [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Attempting to detach device tapa6ea89b5-e2 from instance 0815f972-3907-402e-b6b4-f1f1fc93a8ab from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.252 239969 DEBUG nova.virt.libvirt.guest [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] detach device xml: <interface type="ethernet">
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:7e:92:c2"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <target dev="tapa6ea89b5-e2"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]: </interface>
Jan 26 15:57:34 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.257 239969 DEBUG nova.scheduler.client.report [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.265 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.266 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.303 239969 DEBUG nova.scheduler.client.report [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.402 239969 DEBUG nova.virt.libvirt.guest [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.406 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.441 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.441 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.442 239969 DEBUG nova.virt.libvirt.guest [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface>not found in domain: <domain type='kvm' id='50'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <name>instance-0000002d</name>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <uuid>0815f972-3907-402e-b6b4-f1f1fc93a8ab</uuid>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:name>tempest-tempest.common.compute-instance-457250150</nova:name>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:57:32</nova:creationTime>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:port uuid="2722b13b-6ac2-481f-93b0-a5f0e4664e24">
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:port uuid="a6ea89b5-e240-4a9d-bad7-8e99a81e6805">
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:57:34 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <system>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='serial'>0815f972-3907-402e-b6b4-f1f1fc93a8ab</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='uuid'>0815f972-3907-402e-b6b4-f1f1fc93a8ab</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </system>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <os>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </os>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <features>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </features>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk' index='2'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk.config' index='1'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:e2:f5:4d'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target dev='tap2722b13b-6a'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:7e:92:c2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target dev='tapa6ea89b5-e2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='net1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <source path='/dev/pts/2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/console.log' append='off'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </target>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/2'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <source path='/dev/pts/2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/console.log' append='off'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </console>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <video>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </video>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c188,c209</label>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c188,c209</imagelabel>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:57:34 compute-0 nova_compute[239965]: </domain>
Jan 26 15:57:34 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.442 239969 INFO nova.virt.libvirt.driver [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully detached device tapa6ea89b5-e2 from instance 0815f972-3907-402e-b6b4-f1f1fc93a8ab from the persistent domain config.
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.442 239969 DEBUG nova.virt.libvirt.driver [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] (1/8): Attempting to detach device tapa6ea89b5-e2 with device alias net1 from instance 0815f972-3907-402e-b6b4-f1f1fc93a8ab from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.443 239969 DEBUG nova.virt.libvirt.guest [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] detach device xml: <interface type="ethernet">
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:7e:92:c2"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <target dev="tapa6ea89b5-e2"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]: </interface>
Jan 26 15:57:34 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.456 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.456 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.460 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.461 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:57:34 compute-0 kernel: tapa6ea89b5-e2 (unregistering): left promiscuous mode
Jan 26 15:57:34 compute-0 NetworkManager[48954]: <info>  [1769443054.5191] device (tapa6ea89b5-e2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:57:34 compute-0 virtqemud[240263]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 26 15:57:34 compute-0 virtqemud[240263]: hostname: compute-0
Jan 26 15:57:34 compute-0 virtqemud[240263]: An error occurred, but the cause is unknown
Jan 26 15:57:34 compute-0 ovn_controller[146046]: 2026-01-26T15:57:34Z|00355|binding|INFO|Releasing lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 from this chassis (sb_readonly=0)
Jan 26 15:57:34 compute-0 ovn_controller[146046]: 2026-01-26T15:57:34Z|00356|binding|INFO|Setting lport a6ea89b5-e240-4a9d-bad7-8e99a81e6805 down in Southbound
Jan 26 15:57:34 compute-0 ovn_controller[146046]: 2026-01-26T15:57:34Z|00357|binding|INFO|Removing iface tapa6ea89b5-e2 ovn-installed in OVS
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.547 239969 DEBUG nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Received event <DeviceRemovedEvent: 1769443054.5451324, 0815f972-3907-402e-b6b4-f1f1fc93a8ab => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.548 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.550 239969 DEBUG nova.virt.libvirt.driver [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Start waiting for the detach event from libvirt for device tapa6ea89b5-e2 with device alias net1 for instance 0815f972-3907-402e-b6b4-f1f1fc93a8ab _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.550 239969 DEBUG nova.virt.libvirt.guest [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.554 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:92:c2 10.100.0.14'], port_security=['fa:16:3e:7e:92:c2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-1213915953', 'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0815f972-3907-402e-b6b4-f1f1fc93a8ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-1213915953', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'e6f59e25-9c42-4de8-8df2-2beb03c79af0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a6ea89b5-e240-4a9d-bad7-8e99a81e6805) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.555 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a6ea89b5-e240-4a9d-bad7-8e99a81e6805 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.557 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.560 239969 DEBUG nova.virt.libvirt.guest [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7e:92:c2"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapa6ea89b5-e2"/></interface>not found in domain: <domain type='kvm' id='50'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <name>instance-0000002d</name>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <uuid>0815f972-3907-402e-b6b4-f1f1fc93a8ab</uuid>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:name>tempest-tempest.common.compute-instance-457250150</nova:name>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:57:32</nova:creationTime>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:port uuid="2722b13b-6ac2-481f-93b0-a5f0e4664e24">
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:port uuid="a6ea89b5-e240-4a9d-bad7-8e99a81e6805">
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:57:34 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <resource>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </resource>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <system>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='serial'>0815f972-3907-402e-b6b4-f1f1fc93a8ab</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='uuid'>0815f972-3907-402e-b6b4-f1f1fc93a8ab</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </system>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <os>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </os>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <features>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </features>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk' index='2'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/0815f972-3907-402e-b6b4-f1f1fc93a8ab_disk.config' index='1'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </controller>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:e2:f5:4d'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target dev='tap2722b13b-6a'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <source path='/dev/pts/2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/console.log' append='off'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       </target>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/2'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <source path='/dev/pts/2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab/console.log' append='off'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </console>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </input>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5902' autoport='yes' listen='::0'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </graphics>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <video>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </video>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c188,c209</label>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c188,c209</imagelabel>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 15:57:34 compute-0 nova_compute[239965]: </domain>
Jan 26 15:57:34 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.563 239969 INFO nova.virt.libvirt.driver [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully detached device tapa6ea89b5-e2 from instance 0815f972-3907-402e-b6b4-f1f1fc93a8ab from the live domain config.
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.564 239969 DEBUG nova.virt.libvirt.vif [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-457250150',display_name='tempest-tempest.common.compute-instance-457250150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-457250150',id=45,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:57:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-qhswce1a',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:57:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=0815f972-3907-402e-b6b4-f1f1fc93a8ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.564 239969 DEBUG nova.network.os_vif_util [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.565 239969 DEBUG nova.network.os_vif_util [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.566 239969 DEBUG os_vif [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.568 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.569 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6ea89b5-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.572 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.577 239969 INFO os_vif [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2')
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.578 239969 DEBUG nova.virt.libvirt.guest [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:name>tempest-tempest.common.compute-instance-457250150</nova:name>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 15:57:34</nova:creationTime>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:user uuid="700f80e28de6426c81d2be69a7fd8ffb">tempest-AttachInterfacesTestJSON-1005120150-project-member</nova:user>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:project uuid="3ba6f10f9d9d4df0aa855feb2a00210a">tempest-AttachInterfacesTestJSON-1005120150</nova:project>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     <nova:port uuid="2722b13b-6ac2-481f-93b0-a5f0e4664e24">
Jan 26 15:57:34 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 15:57:34 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 15:57:34 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 15:57:34 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 15:57:34 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.582 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[382d8861-5b7a-40c1-baa4-51252083042f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.614 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2552d647-70e0-42c1-88d8-f2b443dad6fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.618 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdc91ee-b1c9-460e-803c-21e475d116de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.658 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[dce07461-ae40-4462-af76-62f24126e3d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.687 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[380c5015-9c14-4a29-84d3-7a8a09a9562e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448592, 'reachable_time': 26761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281826, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.713 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[05ea56f1-af0e-4c32-ab1b-e8c986afde0d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448604, 'tstamp': 448604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281827, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448607, 'tstamp': 448607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281827, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.715 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.717 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.719 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.719 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.720 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:34.721 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.783 239969 DEBUG nova.network.neutron [req-bb71a676-0f43-4e92-b8c0-adad872b3da3 req-c23e7547-7c22-4d49-b8a1-94d1af2acf87 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updated VIF entry in instance network info cache for port a6ea89b5-e240-4a9d-bad7-8e99a81e6805. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.784 239969 DEBUG nova.network.neutron [req-bb71a676-0f43-4e92-b8c0-adad872b3da3 req-c23e7547-7c22-4d49-b8a1-94d1af2acf87 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updating instance_info_cache with network_info: [{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.806 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.807 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3413MB free_disk=59.85536741372198GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.808 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:34 compute-0 nova_compute[239965]: 2026-01-26 15:57:34.820 239969 DEBUG oslo_concurrency.lockutils [req-bb71a676-0f43-4e92-b8c0-adad872b3da3 req-c23e7547-7c22-4d49-b8a1-94d1af2acf87 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:34 compute-0 podman[281610]: 2026-01-26 15:57:34.871152038 +0000 UTC m=+5.537365444 container remove e6f663339764cbb71b7b152c613fa538a079912a19a28034e609ff084106c3dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 15:57:34 compute-0 systemd[1]: libpod-conmon-e6f663339764cbb71b7b152c613fa538a079912a19a28034e609ff084106c3dd.scope: Deactivated successfully.
Jan 26 15:57:34 compute-0 sudo[281239]: pam_unix(sudo:session): session closed for user root
Jan 26 15:57:34 compute-0 sudo[281828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:57:34 compute-0 sudo[281828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:57:34 compute-0 sudo[281828]: pam_unix(sudo:session): session closed for user root
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.007 239969 DEBUG nova.network.neutron [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.028 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Releasing lock "refresh_cache-3940597d-f11c-4959-85c5-bb51a373cec5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.029 239969 DEBUG nova.compute.manager [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:57:35 compute-0 sudo[281853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:57:35 compute-0 sudo[281853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:57:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Jan 26 15:57:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4098744487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.117 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.710s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.123 239969 DEBUG nova.compute.provider_tree [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.140 239969 DEBUG nova.scheduler.client.report [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.164 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.165 239969 DEBUG nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.168 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.360s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.243 239969 DEBUG nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.244 239969 DEBUG nova.network.neutron [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.266 239969 INFO nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.270 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance d49e7eb2-dcdb-4410-a016-42e58d1d1416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.270 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 0815f972-3907-402e-b6b4-f1f1fc93a8ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.271 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance d9b6d871-11f5-46ea-b7f2-502cd8b12b21 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.271 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 3940597d-f11c-4959-85c5-bb51a373cec5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.271 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 58e03608-6ade-4867-ba57-e5b723e5fa71 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.271 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.271 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.276 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.296 239969 DEBUG nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:57:35 compute-0 ceph-mon[75140]: pgmap v1329: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 142 op/s
Jan 26 15:57:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1540247526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:35 compute-0 podman[281891]: 2026-01-26 15:57:35.326005522 +0000 UTC m=+0.024934333 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.467 239969 DEBUG nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.468 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.469 239969 INFO nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Creating image(s)
Jan 26 15:57:35 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Jan 26 15:57:35 compute-0 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002f.scope: Consumed 5.914s CPU time.
Jan 26 15:57:35 compute-0 systemd-machined[208061]: Machine qemu-52-instance-0000002f terminated.
Jan 26 15:57:35 compute-0 podman[281891]: 2026-01-26 15:57:35.644702185 +0000 UTC m=+0.343630976 container create 07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 15:57:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 206 op/s
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.884 239969 DEBUG nova.storage.rbd_utils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] rbd image 58e03608-6ade-4867-ba57-e5b723e5fa71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Jan 26 15:57:35 compute-0 nova_compute[239965]: 2026-01-26 15:57:35.909 239969 DEBUG nova.storage.rbd_utils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] rbd image 58e03608-6ade-4867-ba57-e5b723e5fa71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:35 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Jan 26 15:57:35 compute-0 systemd[1]: Started libpod-conmon-07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31.scope.
Jan 26 15:57:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.277 239969 DEBUG nova.storage.rbd_utils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] rbd image 58e03608-6ade-4867-ba57-e5b723e5fa71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.283 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.326 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.372 239969 DEBUG nova.policy [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '155281d948ac43f4bd24b54f8f81888c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2244b984e24a40ff8bdfbbe29dec2c25', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.379 239969 DEBUG nova.compute.manager [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.379 239969 DEBUG oslo_concurrency.lockutils [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.379 239969 DEBUG oslo_concurrency.lockutils [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.379 239969 DEBUG oslo_concurrency.lockutils [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.379 239969 DEBUG nova.compute.manager [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] No waiting events found dispatching network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.379 239969 WARNING nova.compute.manager [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received unexpected event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 for instance with vm_state active and task_state None.
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.380 239969 DEBUG nova.compute.manager [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-unplugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.380 239969 DEBUG oslo_concurrency.lockutils [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.380 239969 DEBUG oslo_concurrency.lockutils [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.380 239969 DEBUG oslo_concurrency.lockutils [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.380 239969 DEBUG nova.compute.manager [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] No waiting events found dispatching network-vif-unplugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.380 239969 WARNING nova.compute.manager [req-65f6630f-24e7-4b7f-8b1b-1c22585af596 req-406d33be-b6e8-46ed-8e0f-fcb96295fae3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received unexpected event network-vif-unplugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 for instance with vm_state active and task_state None.
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.382 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.383 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.383 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:36 compute-0 nova_compute[239965]: 2026-01-26 15:57:36.383 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:36 compute-0 podman[281891]: 2026-01-26 15:57:36.992617168 +0000 UTC m=+1.691545989 container init 07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 15:57:37 compute-0 podman[281891]: 2026-01-26 15:57:37.005638958 +0000 UTC m=+1.704567749 container start 07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_napier, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 15:57:37 compute-0 nova_compute[239965]: 2026-01-26 15:57:37.017 239969 DEBUG nova.storage.rbd_utils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] rbd image 58e03608-6ade-4867-ba57-e5b723e5fa71_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:37 compute-0 zen_napier[281962]: 167 167
Jan 26 15:57:37 compute-0 systemd[1]: libpod-07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31.scope: Deactivated successfully.
Jan 26 15:57:37 compute-0 conmon[281962]: conmon 07d3a53b28b1339d5fde <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31.scope/container/memory.events
Jan 26 15:57:37 compute-0 nova_compute[239965]: 2026-01-26 15:57:37.023 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 58e03608-6ade-4867-ba57-e5b723e5fa71_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4098744487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:37 compute-0 ceph-mon[75140]: pgmap v1330: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.4 MiB/s wr, 206 op/s
Jan 26 15:57:37 compute-0 ceph-mon[75140]: osdmap e227: 3 total, 3 up, 3 in
Jan 26 15:57:37 compute-0 nova_compute[239965]: 2026-01-26 15:57:37.064 239969 INFO nova.virt.libvirt.driver [-] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Instance destroyed successfully.
Jan 26 15:57:37 compute-0 nova_compute[239965]: 2026-01-26 15:57:37.065 239969 DEBUG nova.objects.instance [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lazy-loading 'resources' on Instance uuid 3940597d-f11c-4959-85c5-bb51a373cec5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:37 compute-0 podman[281891]: 2026-01-26 15:57:37.133079486 +0000 UTC m=+1.832008277 container attach 07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 15:57:37 compute-0 podman[281891]: 2026-01-26 15:57:37.133870055 +0000 UTC m=+1.832798866 container died 07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 15:57:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 50 KiB/s wr, 200 op/s
Jan 26 15:57:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1685866594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.313 239969 DEBUG nova.compute.manager [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.314 239969 DEBUG oslo_concurrency.lockutils [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.315 239969 DEBUG oslo_concurrency.lockutils [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.315 239969 DEBUG oslo_concurrency.lockutils [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.315 239969 DEBUG nova.compute.manager [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] No waiting events found dispatching network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.315 239969 WARNING nova.compute.manager [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received unexpected event network-vif-plugged-a6ea89b5-e240-4a9d-bad7-8e99a81e6805 for instance with vm_state active and task_state None.
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.316 239969 DEBUG nova.compute.manager [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received event network-changed-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.316 239969 DEBUG nova.compute.manager [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Refreshing instance network info cache due to event network-changed-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.316 239969 DEBUG oslo_concurrency.lockutils [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.317 239969 DEBUG oslo_concurrency.lockutils [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.317 239969 DEBUG nova.network.neutron [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Refreshing network info cache for port e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.322 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.996s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.328 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:57:38 compute-0 nova_compute[239965]: 2026-01-26 15:57:38.391 239969 DEBUG nova.storage.rbd_utils [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] cloning vms/3940597d-f11c-4959-85c5-bb51a373cec5_disk@e81acc8981424085b7f014d93a19bce8 to images/c60f8bda-58e3-40e1-94fc-f8040bc4f6cb clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 15:57:38 compute-0 ceph-mon[75140]: pgmap v1332: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 50 KiB/s wr, 200 op/s
Jan 26 15:57:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1685866594' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:38.970 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4def0ebba7fa9566affbd9bf248ef08acd6518d179d83855fc0a23f21972f08f-merged.mount: Deactivated successfully.
Jan 26 15:57:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 48 KiB/s wr, 186 op/s
Jan 26 15:57:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.871 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.881 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.936 239969 DEBUG oslo_concurrency.lockutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.936 239969 DEBUG oslo_concurrency.lockutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.936 239969 DEBUG oslo_concurrency.lockutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.937 239969 DEBUG oslo_concurrency.lockutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.937 239969 DEBUG oslo_concurrency.lockutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.938 239969 INFO nova.compute.manager [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Terminating instance
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.939 239969 DEBUG nova.compute.manager [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.940 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:57:39 compute-0 nova_compute[239965]: 2026-01-26 15:57:39.940 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:40 compute-0 podman[281891]: 2026-01-26 15:57:40.011468366 +0000 UTC m=+4.710397157 container remove 07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_napier, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 15:57:40 compute-0 nova_compute[239965]: 2026-01-26 15:57:40.040 239969 DEBUG nova.network.neutron [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Successfully created port: 83367d44-efa7-43f2-b6dd-eee666860388 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:57:40 compute-0 nova_compute[239965]: 2026-01-26 15:57:40.114 239969 DEBUG oslo_concurrency.lockutils [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:40 compute-0 nova_compute[239965]: 2026-01-26 15:57:40.115 239969 DEBUG oslo_concurrency.lockutils [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquired lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:40 compute-0 nova_compute[239965]: 2026-01-26 15:57:40.115 239969 DEBUG nova.network.neutron [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:57:40 compute-0 systemd[1]: libpod-conmon-07d3a53b28b1339d5fded154cefcce98c853c76a57e8a5dc7105d423c7f33e31.scope: Deactivated successfully.
Jan 26 15:57:40 compute-0 nova_compute[239965]: 2026-01-26 15:57:40.273 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:40 compute-0 podman[282102]: 2026-01-26 15:57:40.191230578 +0000 UTC m=+0.032103980 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:57:41 compute-0 nova_compute[239965]: 2026-01-26 15:57:41.043 239969 DEBUG nova.network.neutron [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Successfully updated port: 83367d44-efa7-43f2-b6dd-eee666860388 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:57:41 compute-0 nova_compute[239965]: 2026-01-26 15:57:41.067 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquiring lock "refresh_cache-58e03608-6ade-4867-ba57-e5b723e5fa71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:41 compute-0 nova_compute[239965]: 2026-01-26 15:57:41.068 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquired lock "refresh_cache-58e03608-6ade-4867-ba57-e5b723e5fa71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:41 compute-0 nova_compute[239965]: 2026-01-26 15:57:41.068 239969 DEBUG nova.network.neutron [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:57:41 compute-0 podman[282102]: 2026-01-26 15:57:41.10638509 +0000 UTC m=+0.947258482 container create 5f9bdbf53cd3b6e7c69729b3c09d83000d63ac317fc9b35b97ee85e625375b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 15:57:41 compute-0 systemd[1]: Started libpod-conmon-5f9bdbf53cd3b6e7c69729b3c09d83000d63ac317fc9b35b97ee85e625375b7d.scope.
Jan 26 15:57:41 compute-0 ceph-mon[75140]: pgmap v1333: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 48 KiB/s wr, 186 op/s
Jan 26 15:57:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5ba779dd4c4a4acef71a6a702ef5738303de08dc36e5b56f2587b25383be58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5ba779dd4c4a4acef71a6a702ef5738303de08dc36e5b56f2587b25383be58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5ba779dd4c4a4acef71a6a702ef5738303de08dc36e5b56f2587b25383be58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5ba779dd4c4a4acef71a6a702ef5738303de08dc36e5b56f2587b25383be58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 47 KiB/s wr, 176 op/s
Jan 26 15:57:41 compute-0 kernel: tap2722b13b-6a (unregistering): left promiscuous mode
Jan 26 15:57:41 compute-0 NetworkManager[48954]: <info>  [1769443061.9545] device (tap2722b13b-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:57:41 compute-0 ovn_controller[146046]: 2026-01-26T15:57:41Z|00358|binding|INFO|Releasing lport 2722b13b-6ac2-481f-93b0-a5f0e4664e24 from this chassis (sb_readonly=0)
Jan 26 15:57:41 compute-0 ovn_controller[146046]: 2026-01-26T15:57:41Z|00359|binding|INFO|Setting lport 2722b13b-6ac2-481f-93b0-a5f0e4664e24 down in Southbound
Jan 26 15:57:41 compute-0 ovn_controller[146046]: 2026-01-26T15:57:41Z|00360|binding|INFO|Removing iface tap2722b13b-6a ovn-installed in OVS
Jan 26 15:57:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:41.970 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:f5:4d 10.100.0.13'], port_security=['fa:16:3e:e2:f5:4d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0815f972-3907-402e-b6b4-f1f1fc93a8ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be1b617b-295a-4675-91ed-17bc8ca1fead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2722b13b-6ac2-481f-93b0-a5f0e4664e24) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:41.971 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2722b13b-6ac2-481f-93b0-a5f0e4664e24 in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:57:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:41.973 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e
Jan 26 15:57:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:41.990 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b92118c2-95bd-4e08-bcf3-6dff3986df30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:42 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Jan 26 15:57:42 compute-0 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002d.scope: Consumed 15.494s CPU time.
Jan 26 15:57:42 compute-0 systemd-machined[208061]: Machine qemu-50-instance-0000002d terminated.
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.018 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a74a8fc3-a545-495c-98c0-13fdfc0c9325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.021 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f3170df0-f175-455f-b4fd-585206088e18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.047 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5be7c515-1f16-4d4b-ad84-0b0014afecbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.064 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[34997290-08d9-4559-9905-84e09d5c7146]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap91bcac0a-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:d6:19'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448592, 'reachable_time': 26761, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282153, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.079 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d5c3f773-8be2-4868-9f16-e5e29c21ecb5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448604, 'tstamp': 448604}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282154, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap91bcac0a-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 448607, 'tstamp': 448607}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282154, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.080 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.086 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91bcac0a-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.086 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.087 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap91bcac0a-10, col_values=(('external_ids', {'iface-id': '0a445094-538b-4c97-83d4-6fbe61c31fbe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:42.087 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.952 239969 DEBUG nova.network.neutron [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.963 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.965 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.966 239969 DEBUG nova.compute.manager [req-865c1c63-f8de-493d-926d-32385c8d09db req-26ad6c37-3bcf-408b-81f2-aed1e8cb4ef6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received event network-changed-83367d44-efa7-43f2-b6dd-eee666860388 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.966 239969 DEBUG nova.compute.manager [req-865c1c63-f8de-493d-926d-32385c8d09db req-26ad6c37-3bcf-408b-81f2-aed1e8cb4ef6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Refreshing instance network info cache due to event network-changed-83367d44-efa7-43f2-b6dd-eee666860388. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.967 239969 DEBUG oslo_concurrency.lockutils [req-865c1c63-f8de-493d-926d-32385c8d09db req-26ad6c37-3bcf-408b-81f2-aed1e8cb4ef6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-58e03608-6ade-4867-ba57-e5b723e5fa71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.968 239969 DEBUG nova.compute.manager [req-88ad2942-663c-4888-968c-fa4ca551b8c6 req-2e1e1389-1a88-467e-8bd5-d914306eed82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-unplugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.968 239969 DEBUG oslo_concurrency.lockutils [req-88ad2942-663c-4888-968c-fa4ca551b8c6 req-2e1e1389-1a88-467e-8bd5-d914306eed82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.969 239969 DEBUG oslo_concurrency.lockutils [req-88ad2942-663c-4888-968c-fa4ca551b8c6 req-2e1e1389-1a88-467e-8bd5-d914306eed82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.969 239969 DEBUG oslo_concurrency.lockutils [req-88ad2942-663c-4888-968c-fa4ca551b8c6 req-2e1e1389-1a88-467e-8bd5-d914306eed82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.969 239969 DEBUG nova.compute.manager [req-88ad2942-663c-4888-968c-fa4ca551b8c6 req-2e1e1389-1a88-467e-8bd5-d914306eed82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] No waiting events found dispatching network-vif-unplugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.969 239969 DEBUG nova.compute.manager [req-88ad2942-663c-4888-968c-fa4ca551b8c6 req-2e1e1389-1a88-467e-8bd5-d914306eed82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-unplugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.970 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.975 239969 INFO nova.virt.libvirt.driver [-] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Instance destroyed successfully.
Jan 26 15:57:42 compute-0 nova_compute[239965]: 2026-01-26 15:57:42.975 239969 DEBUG nova.objects.instance [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'resources' on Instance uuid 0815f972-3907-402e-b6b4-f1f1fc93a8ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.000 239969 DEBUG nova.virt.libvirt.vif [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-457250150',display_name='tempest-tempest.common.compute-instance-457250150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-457250150',id=45,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:57:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-qhswce1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:57:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=0815f972-3907-402e-b6b4-f1f1fc93a8ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.001 239969 DEBUG nova.network.os_vif_util [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.001 239969 DEBUG nova.network.os_vif_util [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:f5:4d,bridge_name='br-int',has_traffic_filtering=True,id=2722b13b-6ac2-481f-93b0-a5f0e4664e24,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2722b13b-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.002 239969 DEBUG os_vif [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:f5:4d,bridge_name='br-int',has_traffic_filtering=True,id=2722b13b-6ac2-481f-93b0-a5f0e4664e24,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2722b13b-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.003 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.003 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2722b13b-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.006 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.008 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.008 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.008 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.008 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.010 239969 INFO os_vif [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:f5:4d,bridge_name='br-int',has_traffic_filtering=True,id=2722b13b-6ac2-481f-93b0-a5f0e4664e24,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2722b13b-6a')
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.011 239969 DEBUG nova.virt.libvirt.vif [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-457250150',display_name='tempest-tempest.common.compute-instance-457250150',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-457250150',id=45,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:57:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-qhswce1a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:57:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=0815f972-3907-402e-b6b4-f1f1fc93a8ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.012 239969 DEBUG nova.network.os_vif_util [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "address": "fa:16:3e:7e:92:c2", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6ea89b5-e2", "ovs_interfaceid": "a6ea89b5-e240-4a9d-bad7-8e99a81e6805", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.012 239969 DEBUG nova.network.os_vif_util [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.013 239969 DEBUG os_vif [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.014 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.014 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa6ea89b5-e2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.015 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:43 compute-0 nova_compute[239965]: 2026-01-26 15:57:43.017 239969 INFO os_vif [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:92:c2,bridge_name='br-int',has_traffic_filtering=True,id=a6ea89b5-e240-4a9d-bad7-8e99a81e6805,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapa6ea89b5-e2')
Jan 26 15:57:43 compute-0 podman[282102]: 2026-01-26 15:57:43.194333808 +0000 UTC m=+3.035207220 container init 5f9bdbf53cd3b6e7c69729b3c09d83000d63ac317fc9b35b97ee85e625375b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 15:57:43 compute-0 podman[282102]: 2026-01-26 15:57:43.202105588 +0000 UTC m=+3.042979010 container start 5f9bdbf53cd3b6e7c69729b3c09d83000d63ac317fc9b35b97ee85e625375b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:57:43 compute-0 cool_allen[282137]: {
Jan 26 15:57:43 compute-0 cool_allen[282137]:     "0": [
Jan 26 15:57:43 compute-0 cool_allen[282137]:         {
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "devices": [
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "/dev/loop3"
Jan 26 15:57:43 compute-0 cool_allen[282137]:             ],
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_name": "ceph_lv0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_size": "21470642176",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "name": "ceph_lv0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "tags": {
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.cluster_name": "ceph",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.crush_device_class": "",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.encrypted": "0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.objectstore": "bluestore",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.osd_id": "0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.type": "block",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.vdo": "0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.with_tpm": "0"
Jan 26 15:57:43 compute-0 cool_allen[282137]:             },
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "type": "block",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "vg_name": "ceph_vg0"
Jan 26 15:57:43 compute-0 cool_allen[282137]:         }
Jan 26 15:57:43 compute-0 cool_allen[282137]:     ],
Jan 26 15:57:43 compute-0 cool_allen[282137]:     "1": [
Jan 26 15:57:43 compute-0 cool_allen[282137]:         {
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "devices": [
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "/dev/loop4"
Jan 26 15:57:43 compute-0 cool_allen[282137]:             ],
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_name": "ceph_lv1",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_size": "21470642176",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "name": "ceph_lv1",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "tags": {
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.cluster_name": "ceph",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.crush_device_class": "",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.encrypted": "0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.objectstore": "bluestore",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.osd_id": "1",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.type": "block",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.vdo": "0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.with_tpm": "0"
Jan 26 15:57:43 compute-0 cool_allen[282137]:             },
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "type": "block",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "vg_name": "ceph_vg1"
Jan 26 15:57:43 compute-0 cool_allen[282137]:         }
Jan 26 15:57:43 compute-0 cool_allen[282137]:     ],
Jan 26 15:57:43 compute-0 cool_allen[282137]:     "2": [
Jan 26 15:57:43 compute-0 cool_allen[282137]:         {
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "devices": [
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "/dev/loop5"
Jan 26 15:57:43 compute-0 cool_allen[282137]:             ],
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_name": "ceph_lv2",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_size": "21470642176",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "name": "ceph_lv2",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "tags": {
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.cluster_name": "ceph",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.crush_device_class": "",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.encrypted": "0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.objectstore": "bluestore",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.osd_id": "2",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.type": "block",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.vdo": "0",
Jan 26 15:57:43 compute-0 cool_allen[282137]:                 "ceph.with_tpm": "0"
Jan 26 15:57:43 compute-0 cool_allen[282137]:             },
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "type": "block",
Jan 26 15:57:43 compute-0 cool_allen[282137]:             "vg_name": "ceph_vg2"
Jan 26 15:57:43 compute-0 cool_allen[282137]:         }
Jan 26 15:57:43 compute-0 cool_allen[282137]:     ]
Jan 26 15:57:43 compute-0 cool_allen[282137]: }
Jan 26 15:57:43 compute-0 systemd[1]: libpod-5f9bdbf53cd3b6e7c69729b3c09d83000d63ac317fc9b35b97ee85e625375b7d.scope: Deactivated successfully.
Jan 26 15:57:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 KiB/s wr, 100 op/s
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.170 239969 DEBUG nova.network.neutron [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Updated VIF entry in instance network info cache for port e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.171 239969 DEBUG nova.network.neutron [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Updating instance_info_cache with network_info: [{"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.244 239969 DEBUG oslo_concurrency.lockutils [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.245 239969 DEBUG nova.compute.manager [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received event network-changed-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.245 239969 DEBUG nova.compute.manager [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Refreshing instance network info cache due to event network-changed-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.245 239969 DEBUG oslo_concurrency.lockutils [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.245 239969 DEBUG oslo_concurrency.lockutils [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.246 239969 DEBUG nova.network.neutron [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Refreshing network info cache for port e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.396 239969 INFO nova.network.neutron [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Port a6ea89b5-e240-4a9d-bad7-8e99a81e6805 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.396 239969 DEBUG nova.network.neutron [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updating instance_info_cache with network_info: [{"id": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "address": "fa:16:3e:e2:f5:4d", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2722b13b-6a", "ovs_interfaceid": "2722b13b-6ac2-481f-93b0-a5f0e4664e24", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.423 239969 DEBUG oslo_concurrency.lockutils [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Releasing lock "refresh_cache-0815f972-3907-402e-b6b4-f1f1fc93a8ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.455 239969 DEBUG oslo_concurrency.lockutils [None req-98a67c10-6ab4-43a7-8eab-16ff9b1d9b3d 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "interface-0815f972-3907-402e-b6b4-f1f1fc93a8ab-a6ea89b5-e240-4a9d-bad7-8e99a81e6805" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 10.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.549 239969 DEBUG nova.compute.manager [req-660a8b59-b74c-40d2-9beb-94d5f8b7acb5 req-51728e6f-00a6-4e48-9f4f-12a3b7367050 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-plugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.549 239969 DEBUG oslo_concurrency.lockutils [req-660a8b59-b74c-40d2-9beb-94d5f8b7acb5 req-51728e6f-00a6-4e48-9f4f-12a3b7367050 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.550 239969 DEBUG oslo_concurrency.lockutils [req-660a8b59-b74c-40d2-9beb-94d5f8b7acb5 req-51728e6f-00a6-4e48-9f4f-12a3b7367050 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.550 239969 DEBUG oslo_concurrency.lockutils [req-660a8b59-b74c-40d2-9beb-94d5f8b7acb5 req-51728e6f-00a6-4e48-9f4f-12a3b7367050 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.550 239969 DEBUG nova.compute.manager [req-660a8b59-b74c-40d2-9beb-94d5f8b7acb5 req-51728e6f-00a6-4e48-9f4f-12a3b7367050 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] No waiting events found dispatching network-vif-plugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:44 compute-0 nova_compute[239965]: 2026-01-26 15:57:44.550 239969 WARNING nova.compute.manager [req-660a8b59-b74c-40d2-9beb-94d5f8b7acb5 req-51728e6f-00a6-4e48-9f4f-12a3b7367050 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received unexpected event network-vif-plugged-2722b13b-6ac2-481f-93b0-a5f0e4664e24 for instance with vm_state active and task_state deleting.
Jan 26 15:57:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:45 compute-0 nova_compute[239965]: 2026-01-26 15:57:45.094 239969 DEBUG nova.network.neutron [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Updating instance_info_cache with network_info: [{"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:45 compute-0 nova_compute[239965]: 2026-01-26 15:57:45.275 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:45 compute-0 nova_compute[239965]: 2026-01-26 15:57:45.354 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Releasing lock "refresh_cache-58e03608-6ade-4867-ba57-e5b723e5fa71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:45 compute-0 nova_compute[239965]: 2026-01-26 15:57:45.355 239969 DEBUG nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Instance network_info: |[{"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:57:45 compute-0 nova_compute[239965]: 2026-01-26 15:57:45.356 239969 DEBUG oslo_concurrency.lockutils [req-865c1c63-f8de-493d-926d-32385c8d09db req-26ad6c37-3bcf-408b-81f2-aed1e8cb4ef6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-58e03608-6ade-4867-ba57-e5b723e5fa71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:45 compute-0 nova_compute[239965]: 2026-01-26 15:57:45.356 239969 DEBUG nova.network.neutron [req-865c1c63-f8de-493d-926d-32385c8d09db req-26ad6c37-3bcf-408b-81f2-aed1e8cb4ef6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Refreshing network info cache for port 83367d44-efa7-43f2-b6dd-eee666860388 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 325 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Jan 26 15:57:45 compute-0 podman[282102]: 2026-01-26 15:57:45.895835255 +0000 UTC m=+5.736708667 container attach 5f9bdbf53cd3b6e7c69729b3c09d83000d63ac317fc9b35b97ee85e625375b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 15:57:45 compute-0 podman[282102]: 2026-01-26 15:57:45.897142437 +0000 UTC m=+5.738015829 container died 5f9bdbf53cd3b6e7c69729b3c09d83000d63ac317fc9b35b97ee85e625375b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 15:57:45 compute-0 nova_compute[239965]: 2026-01-26 15:57:45.904 239969 DEBUG nova.network.neutron [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Updated VIF entry in instance network info cache for port e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:45 compute-0 nova_compute[239965]: 2026-01-26 15:57:45.905 239969 DEBUG nova.network.neutron [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Updating instance_info_cache with network_info: [{"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:45 compute-0 nova_compute[239965]: 2026-01-26 15:57:45.922 239969 DEBUG oslo_concurrency.lockutils [req-59daa569-dda5-4e8f-967b-f00fa64a53f1 req-b5338d05-7783-4df7-adc2-b82347a0324b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d9b6d871-11f5-46ea-b7f2-502cd8b12b21" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:46 compute-0 nova_compute[239965]: 2026-01-26 15:57:46.045 239969 WARNING oslo.service.loopingcall [-] Function 'nova.storage.rbd_utils.RBDDriver._destroy_volume.<locals>._cleanup_vol' run outlasted interval by 2.72 sec
Jan 26 15:57:46 compute-0 nova_compute[239965]: 2026-01-26 15:57:46.654 239969 DEBUG nova.network.neutron [req-865c1c63-f8de-493d-926d-32385c8d09db req-26ad6c37-3bcf-408b-81f2-aed1e8cb4ef6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Updated VIF entry in instance network info cache for port 83367d44-efa7-43f2-b6dd-eee666860388. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:57:46 compute-0 nova_compute[239965]: 2026-01-26 15:57:46.655 239969 DEBUG nova.network.neutron [req-865c1c63-f8de-493d-926d-32385c8d09db req-26ad6c37-3bcf-408b-81f2-aed1e8cb4ef6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Updating instance_info_cache with network_info: [{"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:46 compute-0 nova_compute[239965]: 2026-01-26 15:57:46.672 239969 DEBUG oslo_concurrency.lockutils [req-865c1c63-f8de-493d-926d-32385c8d09db req-26ad6c37-3bcf-408b-81f2-aed1e8cb4ef6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-58e03608-6ade-4867-ba57-e5b723e5fa71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:47 compute-0 ceph-osd[86729]: bluestore(/var/lib/ceph/osd/ceph-1) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.935884953s, txc = 0x5563cca73b00, txc bytes = 1049942, txc ios = 17, txc cost = 12439942, txc onodes = 1, DB updates = 13, DB bytes = 4029, cost max = 110681080 on 2026-01-26T15:50:13.213302+0000, txc max = 104 on 2026-01-26T15:23:48.350298+0000
Jan 26 15:57:47 compute-0 ceph-mon[75140]: pgmap v1334: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 47 KiB/s wr, 176 op/s
Jan 26 15:57:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1337: 305 pgs: 305 active+clean; 325 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 1.3 MiB/s wr, 44 op/s
Jan 26 15:57:48 compute-0 nova_compute[239965]: 2026-01-26 15:57:48.005 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0025690454845822184 of space, bias 1.0, pg target 0.7707136453746655 quantized to 32 (current 32)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006690746003578376 of space, bias 1.0, pg target 0.2007223801073513 quantized to 32 (current 32)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.126600588051891e-07 of space, bias 4.0, pg target 0.000975192070566227 quantized to 16 (current 16)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:57:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:57:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:57:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3261024592' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:57:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:57:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3261024592' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:57:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b5ba779dd4c4a4acef71a6a702ef5738303de08dc36e5b56f2587b25383be58-merged.mount: Deactivated successfully.
Jan 26 15:57:49 compute-0 ceph-mon[75140]: pgmap v1335: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.3 KiB/s wr, 100 op/s
Jan 26 15:57:49 compute-0 ceph-mon[75140]: pgmap v1336: 305 pgs: 305 active+clean; 325 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.6 MiB/s wr, 53 op/s
Jan 26 15:57:49 compute-0 ceph-mon[75140]: pgmap v1337: 305 pgs: 305 active+clean; 325 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 1.3 MiB/s wr, 44 op/s
Jan 26 15:57:49 compute-0 nova_compute[239965]: 2026-01-26 15:57:49.012 239969 DEBUG nova.storage.rbd_utils [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] flattening images/c60f8bda-58e3-40e1-94fc-f8040bc4f6cb flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 15:57:49 compute-0 nova_compute[239965]: 2026-01-26 15:57:49.127 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 58e03608-6ade-4867-ba57-e5b723e5fa71_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 12.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:49 compute-0 podman[282102]: 2026-01-26 15:57:49.129123924 +0000 UTC m=+8.969997316 container remove 5f9bdbf53cd3b6e7c69729b3c09d83000d63ac317fc9b35b97ee85e625375b7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:57:49 compute-0 systemd[1]: libpod-conmon-5f9bdbf53cd3b6e7c69729b3c09d83000d63ac317fc9b35b97ee85e625375b7d.scope: Deactivated successfully.
Jan 26 15:57:49 compute-0 sudo[281853]: pam_unix(sudo:session): session closed for user root
Jan 26 15:57:49 compute-0 sudo[282257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:57:49 compute-0 sudo[282257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:57:49 compute-0 sudo[282257]: pam_unix(sudo:session): session closed for user root
Jan 26 15:57:49 compute-0 sudo[282282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:57:49 compute-0 sudo[282282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:57:49 compute-0 podman[282319]: 2026-01-26 15:57:49.651371852 +0000 UTC m=+0.026879140 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:57:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 348 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 1009 KiB/s rd, 2.7 MiB/s wr, 69 op/s
Jan 26 15:57:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e227 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:49 compute-0 ceph-mon[75140]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 26 15:57:50 compute-0 nova_compute[239965]: 2026-01-26 15:57:50.475 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:50 compute-0 podman[282319]: 2026-01-26 15:57:50.507739962 +0000 UTC m=+0.883247230 container create d760fbc1cf52287909748721a035a14aec850b4b8580671a17f20e3c42950758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mayer, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 15:57:50 compute-0 nova_compute[239965]: 2026-01-26 15:57:50.510 239969 DEBUG nova.storage.rbd_utils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] resizing rbd image 58e03608-6ade-4867-ba57-e5b723e5fa71_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:57:50 compute-0 systemd[1]: Started libpod-conmon-d760fbc1cf52287909748721a035a14aec850b4b8580671a17f20e3c42950758.scope.
Jan 26 15:57:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3261024592' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:57:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3261024592' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:57:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:57:51 compute-0 podman[282319]: 2026-01-26 15:57:51.065485041 +0000 UTC m=+1.440992309 container init d760fbc1cf52287909748721a035a14aec850b4b8580671a17f20e3c42950758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mayer, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:57:51 compute-0 podman[282319]: 2026-01-26 15:57:51.072807852 +0000 UTC m=+1.448315120 container start d760fbc1cf52287909748721a035a14aec850b4b8580671a17f20e3c42950758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 15:57:51 compute-0 naughty_mayer[282370]: 167 167
Jan 26 15:57:51 compute-0 systemd[1]: libpod-d760fbc1cf52287909748721a035a14aec850b4b8580671a17f20e3c42950758.scope: Deactivated successfully.
Jan 26 15:57:51 compute-0 podman[282319]: 2026-01-26 15:57:51.129755219 +0000 UTC m=+1.505262497 container attach d760fbc1cf52287909748721a035a14aec850b4b8580671a17f20e3c42950758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 15:57:51 compute-0 podman[282319]: 2026-01-26 15:57:51.130308693 +0000 UTC m=+1.505815971 container died d760fbc1cf52287909748721a035a14aec850b4b8580671a17f20e3c42950758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mayer, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 15:57:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ab12fc24f309a2677c821525dd288f1ba961492c4919baf3af9a43e3c334cf4-merged.mount: Deactivated successfully.
Jan 26 15:57:51 compute-0 podman[282319]: 2026-01-26 15:57:51.253017434 +0000 UTC m=+1.628524702 container remove d760fbc1cf52287909748721a035a14aec850b4b8580671a17f20e3c42950758 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:57:51 compute-0 systemd[1]: libpod-conmon-d760fbc1cf52287909748721a035a14aec850b4b8580671a17f20e3c42950758.scope: Deactivated successfully.
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.328 239969 DEBUG nova.objects.instance [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lazy-loading 'migration_context' on Instance uuid 58e03608-6ade-4867-ba57-e5b723e5fa71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.356 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.357 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Ensure instance console log exists: /var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.359 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.359 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.360 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.363 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Start _get_guest_xml network_info=[{"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.454 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443055.656509, 3940597d-f11c-4959-85c5-bb51a373cec5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.455 239969 INFO nova.compute.manager [-] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] VM Stopped (Lifecycle Event)
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.464 239969 WARNING nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:57:51 compute-0 podman[282438]: 2026-01-26 15:57:51.465328526 +0000 UTC m=+0.061527272 container create 186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.477 239969 DEBUG nova.virt.libvirt.host [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.478 239969 DEBUG nova.virt.libvirt.host [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.486 239969 DEBUG nova.virt.libvirt.host [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.486 239969 DEBUG nova.virt.libvirt.host [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.487 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.487 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.488 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.488 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.489 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.489 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.490 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.490 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.490 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.491 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.491 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.491 239969 DEBUG nova.virt.hardware [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.495 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:51 compute-0 podman[282438]: 2026-01-26 15:57:51.430629874 +0000 UTC m=+0.026828640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.532 239969 DEBUG nova.compute.manager [None req-5699e048-8f0d-4e24-9a6f-73554a9e1906 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.535 239969 DEBUG nova.storage.rbd_utils [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] removing snapshot(e81acc8981424085b7f014d93a19bce8) on rbd image(3940597d-f11c-4959-85c5-bb51a373cec5_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 15:57:51 compute-0 systemd[1]: Started libpod-conmon-186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587.scope.
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.540 239969 DEBUG nova.compute.manager [None req-5699e048-8f0d-4e24-9a6f-73554a9e1906 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: deleting, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:57:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.574 239969 INFO nova.compute.manager [None req-5699e048-8f0d-4e24-9a6f-73554a9e1906 - - - - - -] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 26 15:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b61b84b16b6e93f45beaaf4f18fa4cfc92a69000d47ff1ad016419f77084b1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b61b84b16b6e93f45beaaf4f18fa4cfc92a69000d47ff1ad016419f77084b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b61b84b16b6e93f45beaaf4f18fa4cfc92a69000d47ff1ad016419f77084b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b61b84b16b6e93f45beaaf4f18fa4cfc92a69000d47ff1ad016419f77084b1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:51 compute-0 podman[282438]: 2026-01-26 15:57:51.596757382 +0000 UTC m=+0.192956158 container init 186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lichterman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 15:57:51 compute-0 podman[282438]: 2026-01-26 15:57:51.610923249 +0000 UTC m=+0.207121995 container start 186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:57:51 compute-0 podman[282438]: 2026-01-26 15:57:51.615113112 +0000 UTC m=+0.211311888 container attach 186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:57:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 351 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 1014 KiB/s rd, 3.1 MiB/s wr, 74 op/s
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.840 239969 INFO nova.virt.libvirt.driver [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Deleting instance files /var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab_del
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.841 239969 INFO nova.virt.libvirt.driver [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Deletion of /var/lib/nova/instances/0815f972-3907-402e-b6b4-f1f1fc93a8ab_del complete
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.928 239969 INFO nova.compute.manager [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Took 11.99 seconds to destroy the instance on the hypervisor.
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.929 239969 DEBUG oslo.service.loopingcall [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.930 239969 DEBUG nova.compute.manager [-] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.931 239969 DEBUG nova.network.neutron [-] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:57:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Jan 26 15:57:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Jan 26 15:57:51 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Jan 26 15:57:51 compute-0 ceph-mon[75140]: pgmap v1338: 305 pgs: 305 active+clean; 348 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 1009 KiB/s rd, 2.7 MiB/s wr, 69 op/s
Jan 26 15:57:51 compute-0 ceph-mon[75140]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Jan 26 15:57:51 compute-0 nova_compute[239965]: 2026-01-26 15:57:51.977 239969 DEBUG nova.storage.rbd_utils [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] creating snapshot(snap) on rbd image(c60f8bda-58e3-40e1-94fc-f8040bc4f6cb) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 15:57:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:57:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1702725152' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:52 compute-0 nova_compute[239965]: 2026-01-26 15:57:52.203 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.708s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:52 compute-0 nova_compute[239965]: 2026-01-26 15:57:52.240 239969 DEBUG nova.storage.rbd_utils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] rbd image 58e03608-6ade-4867-ba57-e5b723e5fa71_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:52 compute-0 nova_compute[239965]: 2026-01-26 15:57:52.252 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:52 compute-0 lvm[282623]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:57:52 compute-0 lvm[282623]: VG ceph_vg0 finished
Jan 26 15:57:52 compute-0 lvm[282624]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:57:52 compute-0 lvm[282624]: VG ceph_vg1 finished
Jan 26 15:57:52 compute-0 lvm[282626]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:57:52 compute-0 lvm[282626]: VG ceph_vg2 finished
Jan 26 15:57:52 compute-0 adoring_lichterman[282468]: {}
Jan 26 15:57:52 compute-0 podman[282438]: 2026-01-26 15:57:52.609339245 +0000 UTC m=+1.205537991 container died 186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lichterman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Jan 26 15:57:52 compute-0 systemd[1]: libpod-186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587.scope: Deactivated successfully.
Jan 26 15:57:52 compute-0 systemd[1]: libpod-186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587.scope: Consumed 1.498s CPU time.
Jan 26 15:57:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b61b84b16b6e93f45beaaf4f18fa4cfc92a69000d47ff1ad016419f77084b1-merged.mount: Deactivated successfully.
Jan 26 15:57:52 compute-0 podman[282438]: 2026-01-26 15:57:52.661497935 +0000 UTC m=+1.257696681 container remove 186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:57:52 compute-0 systemd[1]: libpod-conmon-186ab2544bf8636b14830f2159411a9714d0ad4c65a009db14df62bf1c781587.scope: Deactivated successfully.
Jan 26 15:57:52 compute-0 sudo[282282]: pam_unix(sudo:session): session closed for user root
Jan 26 15:57:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:57:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:57:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:57:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:57:52 compute-0 sudo[282641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:57:52 compute-0 sudo[282641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:57:52 compute-0 sudo[282641]: pam_unix(sudo:session): session closed for user root
Jan 26 15:57:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:57:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055652631' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Jan 26 15:57:52 compute-0 ceph-mon[75140]: pgmap v1339: 305 pgs: 305 active+clean; 351 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 1014 KiB/s rd, 3.1 MiB/s wr, 74 op/s
Jan 26 15:57:52 compute-0 ceph-mon[75140]: osdmap e228: 3 total, 3 up, 3 in
Jan 26 15:57:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1702725152' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:57:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:57:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2055652631' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Jan 26 15:57:52 compute-0 ovn_controller[146046]: 2026-01-26T15:57:52Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ba:a8:be 10.100.0.12
Jan 26 15:57:52 compute-0 nova_compute[239965]: 2026-01-26 15:57:52.964 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.711s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:52 compute-0 nova_compute[239965]: 2026-01-26 15:57:52.968 239969 DEBUG nova.virt.libvirt.vif [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:57:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-515397940',display_name='tempest-ServersTestJSON-server-515397940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-515397940',id=48,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGvEARmGS9yuUjcEfIL6UGHjJsACA1MQKB2GnHfFXLx21DIis43sG2i48NNY4bGWnoXjQJNKpRXhfvIZkQM46nSh8ilif/x14L4ocfVD5eSGKGKuANtF/Z1sQbOUYPSLg==',key_name='tempest-keypair-886775122',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2244b984e24a40ff8bdfbbe29dec2c25',ramdisk_id='',reservation_id='r-j6anui0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1037582104',owner_user_name='tempest-ServersTestJSON-1037582104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:57:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='155281d948ac43f4bd24b54f8f81888c',uuid=58e03608-6ade-4867-ba57-e5b723e5fa71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:57:52 compute-0 ovn_controller[146046]: 2026-01-26T15:57:52Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ba:a8:be 10.100.0.12
Jan 26 15:57:52 compute-0 nova_compute[239965]: 2026-01-26 15:57:52.968 239969 DEBUG nova.network.os_vif_util [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Converting VIF {"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:52 compute-0 nova_compute[239965]: 2026-01-26 15:57:52.969 239969 DEBUG nova.network.os_vif_util [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:96,bridge_name='br-int',has_traffic_filtering=True,id=83367d44-efa7-43f2-b6dd-eee666860388,network=Network(e0f483e6-b774-4b05-865b-210bd6652bcd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83367d44-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:52 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Jan 26 15:57:52 compute-0 nova_compute[239965]: 2026-01-26 15:57:52.971 239969 DEBUG nova.objects.instance [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lazy-loading 'pci_devices' on Instance uuid 58e03608-6ade-4867-ba57-e5b723e5fa71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:52 compute-0 nova_compute[239965]: 2026-01-26 15:57:52.991 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <uuid>58e03608-6ade-4867-ba57-e5b723e5fa71</uuid>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <name>instance-00000030</name>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestJSON-server-515397940</nova:name>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:57:51</nova:creationTime>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <nova:user uuid="155281d948ac43f4bd24b54f8f81888c">tempest-ServersTestJSON-1037582104-project-member</nova:user>
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <nova:project uuid="2244b984e24a40ff8bdfbbe29dec2c25">tempest-ServersTestJSON-1037582104</nova:project>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <nova:port uuid="83367d44-efa7-43f2-b6dd-eee666860388">
Jan 26 15:57:52 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <system>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <entry name="serial">58e03608-6ade-4867-ba57-e5b723e5fa71</entry>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <entry name="uuid">58e03608-6ade-4867-ba57-e5b723e5fa71</entry>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     </system>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <os>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   </os>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <features>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   </features>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/58e03608-6ade-4867-ba57-e5b723e5fa71_disk">
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/58e03608-6ade-4867-ba57-e5b723e5fa71_disk.config">
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       </source>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:57:52 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:08:b6:96"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <target dev="tap83367d44-ef"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71/console.log" append="off"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <video>
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     </video>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:57:52 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:57:52 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:57:52 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:57:52 compute-0 nova_compute[239965]: </domain>
Jan 26 15:57:52 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.003 239969 DEBUG nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Preparing to wait for external event network-vif-plugged-83367d44-efa7-43f2-b6dd-eee666860388 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.004 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquiring lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.004 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.004 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.005 239969 DEBUG nova.virt.libvirt.vif [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:57:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-515397940',display_name='tempest-ServersTestJSON-server-515397940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-515397940',id=48,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGvEARmGS9yuUjcEfIL6UGHjJsACA1MQKB2GnHfFXLx21DIis43sG2i48NNY4bGWnoXjQJNKpRXhfvIZkQM46nSh8ilif/x14L4ocfVD5eSGKGKuANtF/Z1sQbOUYPSLg==',key_name='tempest-keypair-886775122',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2244b984e24a40ff8bdfbbe29dec2c25',ramdisk_id='',reservation_id='r-j6anui0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1037582104',owner_user_name='tempest-ServersTestJSON-1037582104-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:57:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='155281d948ac43f4bd24b54f8f81888c',uuid=58e03608-6ade-4867-ba57-e5b723e5fa71,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.005 239969 DEBUG nova.network.os_vif_util [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Converting VIF {"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.006 239969 DEBUG nova.network.os_vif_util [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:96,bridge_name='br-int',has_traffic_filtering=True,id=83367d44-efa7-43f2-b6dd-eee666860388,network=Network(e0f483e6-b774-4b05-865b-210bd6652bcd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83367d44-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.006 239969 DEBUG os_vif [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:96,bridge_name='br-int',has_traffic_filtering=True,id=83367d44-efa7-43f2-b6dd-eee666860388,network=Network(e0f483e6-b774-4b05-865b-210bd6652bcd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83367d44-ef') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.010 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.011 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.012 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.012 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.016 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.016 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83367d44-ef, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.017 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83367d44-ef, col_values=(('external_ids', {'iface-id': '83367d44-efa7-43f2-b6dd-eee666860388', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:b6:96', 'vm-uuid': '58e03608-6ade-4867-ba57-e5b723e5fa71'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.019 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:53 compute-0 NetworkManager[48954]: <info>  [1769443073.0200] manager: (tap83367d44-ef): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.021 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.029 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.030 239969 INFO os_vif [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:b6:96,bridge_name='br-int',has_traffic_filtering=True,id=83367d44-efa7-43f2-b6dd-eee666860388,network=Network(e0f483e6-b774-4b05-865b-210bd6652bcd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83367d44-ef')
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.217 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.218 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.218 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] No VIF found with MAC fa:16:3e:08:b6:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.219 239969 INFO nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Using config drive
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.246 239969 DEBUG nova.storage.rbd_utils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] rbd image 58e03608-6ade-4867-ba57-e5b723e5fa71_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.528 239969 DEBUG nova.network.neutron [-] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.560 239969 INFO nova.compute.manager [-] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Took 1.63 seconds to deallocate network for instance.
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.610 239969 DEBUG oslo_concurrency.lockutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.611 239969 DEBUG oslo_concurrency.lockutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.650 239969 INFO nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Creating config drive at /var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71/disk.config
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.655 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgcrb5a9b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.807 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgcrb5a9b" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 351 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.8 MiB/s wr, 56 op/s
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.833 239969 DEBUG nova.storage.rbd_utils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] rbd image 58e03608-6ade-4867-ba57-e5b723e5fa71_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.838 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71/disk.config 58e03608-6ade-4867-ba57-e5b723e5fa71_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:53 compute-0 nova_compute[239965]: 2026-01-26 15:57:53.871 239969 DEBUG oslo_concurrency.processutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:53 compute-0 ceph-mon[75140]: osdmap e229: 3 total, 3 up, 3 in
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.029 239969 DEBUG oslo_concurrency.processutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71/disk.config 58e03608-6ade-4867-ba57-e5b723e5fa71_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.030 239969 INFO nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Deleting local config drive /var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71/disk.config because it was imported into RBD.
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.102 239969 DEBUG nova.compute.manager [req-b3b1240e-31f5-4c27-838e-fe17604fabcf req-e0faf7bb-be58-4a33-bb45-73ed6d330122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Received event network-vif-deleted-2722b13b-6ac2-481f-93b0-a5f0e4664e24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:54 compute-0 kernel: tap83367d44-ef: entered promiscuous mode
Jan 26 15:57:54 compute-0 NetworkManager[48954]: <info>  [1769443074.1092] manager: (tap83367d44-ef): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Jan 26 15:57:54 compute-0 ovn_controller[146046]: 2026-01-26T15:57:54Z|00361|binding|INFO|Claiming lport 83367d44-efa7-43f2-b6dd-eee666860388 for this chassis.
Jan 26 15:57:54 compute-0 ovn_controller[146046]: 2026-01-26T15:57:54Z|00362|binding|INFO|83367d44-efa7-43f2-b6dd-eee666860388: Claiming fa:16:3e:08:b6:96 10.100.0.14
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.110 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:54 compute-0 systemd-udevd[282625]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.120 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:b6:96 10.100.0.14'], port_security=['fa:16:3e:08:b6:96 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58e03608-6ade-4867-ba57-e5b723e5fa71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0f483e6-b774-4b05-865b-210bd6652bcd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2244b984e24a40ff8bdfbbe29dec2c25', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7113c4c8-c402-431f-887d-4f75aaeda709', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eff8161-aacd-434f-98a2-0671c9df8619, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=83367d44-efa7-43f2-b6dd-eee666860388) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.121 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 83367d44-efa7-43f2-b6dd-eee666860388 in datapath e0f483e6-b774-4b05-865b-210bd6652bcd bound to our chassis
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.123 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e0f483e6-b774-4b05-865b-210bd6652bcd
Jan 26 15:57:54 compute-0 NetworkManager[48954]: <info>  [1769443074.1309] device (tap83367d44-ef): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:57:54 compute-0 NetworkManager[48954]: <info>  [1769443074.1320] device (tap83367d44-ef): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:54 compute-0 ovn_controller[146046]: 2026-01-26T15:57:54Z|00363|binding|INFO|Setting lport 83367d44-efa7-43f2-b6dd-eee666860388 ovn-installed in OVS
Jan 26 15:57:54 compute-0 ovn_controller[146046]: 2026-01-26T15:57:54Z|00364|binding|INFO|Setting lport 83367d44-efa7-43f2-b6dd-eee666860388 up in Southbound
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.142 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.139 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b0e544-03b6-45e8-b83e-2f9561413692]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.142 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape0f483e6-b1 in ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.144 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape0f483e6-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.144 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[24241665-5710-4643-b601-a19680ab447d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.146 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8e798a42-3a49-4ab6-8cf5-a4375f512b98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 systemd-machined[208061]: New machine qemu-53-instance-00000030.
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.160 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[e9366e6c-89f7-483c-aa05-c221b6fb46d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 systemd[1]: Started Virtual Machine qemu-53-instance-00000030.
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.178 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ae5227c8-43ff-4e2b-811a-30fd898efa17]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.224 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[958b0d19-18f4-4d1b-8ec1-b807224f0e9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.231 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[143cc355-35ff-48cf-87d2-9a71a494a946]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 NetworkManager[48954]: <info>  [1769443074.2356] manager: (tape0f483e6-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/171)
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.266 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[898a5320-bb6a-4122-87ca-2dbb45b9673e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.269 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc0b827-dd08-4ceb-becc-88b343c76b45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 NetworkManager[48954]: <info>  [1769443074.2931] device (tape0f483e6-b0): carrier: link connected
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.298 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ea6f8130-6176-4317-82c6-89fdeeefd53b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.320 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3e59ab50-65f7-42c9-8ba1-53f8f36d3258]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0f483e6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:13:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456509, 'reachable_time': 25726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282792, 'error': None, 'target': 'ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.336 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7b47c863-1aed-4c1e-95fc-52d17bc95500]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe87:1332'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 456509, 'tstamp': 456509}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282793, 'error': None, 'target': 'ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.358 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5d81da60-a216-4ecb-8417-0085eb896574]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape0f483e6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:87:13:32'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 112], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456509, 'reachable_time': 25726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282794, 'error': None, 'target': 'ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.396 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1da66824-1edf-41af-a6ce-e32a2295bffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.454 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8521c49c-145d-4323-bf62-a614be1b5a43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.455 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0f483e6-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.456 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.456 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape0f483e6-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:54 compute-0 kernel: tape0f483e6-b0: entered promiscuous mode
Jan 26 15:57:54 compute-0 NetworkManager[48954]: <info>  [1769443074.4592] manager: (tape0f483e6-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/172)
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.458 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.465 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape0f483e6-b0, col_values=(('external_ids', {'iface-id': '9f9fcf20-2e57-45c7-be90-f2c7c728896a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:54 compute-0 ovn_controller[146046]: 2026-01-26T15:57:54Z|00365|binding|INFO|Releasing lport 9f9fcf20-2e57-45c7-be90-f2c7c728896a from this chassis (sb_readonly=0)
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.467 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.469 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e0f483e6-b774-4b05-865b-210bd6652bcd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e0f483e6-b774-4b05-865b-210bd6652bcd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.470 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[992361e9-54b3-40f2-95ed-1e7457ae6064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.471 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-e0f483e6-b774-4b05-865b-210bd6652bcd
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/e0f483e6-b774-4b05-865b-210bd6652bcd.pid.haproxy
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID e0f483e6-b774-4b05-865b-210bd6652bcd
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:57:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:54.473 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd', 'env', 'PROCESS_TAG=haproxy-e0f483e6-b774-4b05-865b-210bd6652bcd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e0f483e6-b774-4b05-865b-210bd6652bcd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.483 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3209690235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.543 239969 DEBUG oslo_concurrency.processutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.553 239969 DEBUG nova.compute.provider_tree [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.575 239969 DEBUG nova.scheduler.client.report [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.611 239969 DEBUG oslo_concurrency.lockutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.637 239969 INFO nova.scheduler.client.report [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Deleted allocations for instance 0815f972-3907-402e-b6b4-f1f1fc93a8ab
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.728 239969 DEBUG oslo_concurrency.lockutils [None req-be137e02-9e81-493e-97c2-3ae85f58d076 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "0815f972-3907-402e-b6b4-f1f1fc93a8ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 14.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:54 compute-0 podman[282864]: 2026-01-26 15:57:54.936956816 +0000 UTC m=+0.086158006 container create 071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.948 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443074.9479182, 58e03608-6ade-4867-ba57-e5b723e5fa71 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.949 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] VM Started (Lifecycle Event)
Jan 26 15:57:54 compute-0 podman[282864]: 2026-01-26 15:57:54.876285846 +0000 UTC m=+0.025487056 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.972 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.983 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443074.9482493, 58e03608-6ade-4867-ba57-e5b723e5fa71 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:54 compute-0 nova_compute[239965]: 2026-01-26 15:57:54.983 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] VM Paused (Lifecycle Event)
Jan 26 15:57:54 compute-0 systemd[1]: Started libpod-conmon-071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6.scope.
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.007 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.013 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:57:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/265aac903de07c46e9249795eed246da6798169558045b2cb555d7d4212368ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.034 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.056 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.057 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.077 239969 DEBUG nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:57:55 compute-0 ceph-mon[75140]: pgmap v1342: 305 pgs: 305 active+clean; 351 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.8 MiB/s wr, 56 op/s
Jan 26 15:57:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3209690235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:55 compute-0 podman[282864]: 2026-01-26 15:57:55.083634575 +0000 UTC m=+0.232835785 container init 071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 15:57:55 compute-0 podman[282864]: 2026-01-26 15:57:55.090216277 +0000 UTC m=+0.239417487 container start 071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.108 239969 INFO nova.virt.libvirt.driver [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Snapshot image upload complete
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.108 239969 INFO nova.compute.manager [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Took 22.16 seconds to snapshot the instance on the hypervisor.
Jan 26 15:57:55 compute-0 neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd[282885]: [NOTICE]   (282889) : New worker (282891) forked
Jan 26 15:57:55 compute-0 neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd[282885]: [NOTICE]   (282889) : Loading success.
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.136 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.139 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.149 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.149 239969 INFO nova.compute.claims [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.153 239969 DEBUG nova.compute.manager [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Instance disappeared during snapshot _snapshot_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:4390
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.278 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.337 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.601 239969 DEBUG nova.compute.manager [None req-eb8eca38-6485-46e0-b759-e7e0aa7baacd 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 26 15:57:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 339 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 6.6 MiB/s wr, 245 op/s
Jan 26 15:57:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1371197555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.980 239969 DEBUG oslo_concurrency.lockutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.981 239969 DEBUG oslo_concurrency.lockutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.981 239969 DEBUG oslo_concurrency.lockutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.982 239969 DEBUG oslo_concurrency.lockutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.982 239969 DEBUG oslo_concurrency.lockutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.983 239969 INFO nova.compute.manager [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Terminating instance
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.986 239969 DEBUG nova.compute.manager [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.987 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:55 compute-0 nova_compute[239965]: 2026-01-26 15:57:55.995 239969 DEBUG nova.compute.provider_tree [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.014 239969 DEBUG nova.scheduler.client.report [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.039 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.040 239969 DEBUG nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:57:56 compute-0 kernel: tap32baa4e5-26 (unregistering): left promiscuous mode
Jan 26 15:57:56 compute-0 NetworkManager[48954]: <info>  [1769443076.0647] device (tap32baa4e5-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:57:56 compute-0 ovn_controller[146046]: 2026-01-26T15:57:56Z|00366|binding|INFO|Releasing lport 32baa4e5-2605-4a3f-8023-6a80ea71793d from this chassis (sb_readonly=0)
Jan 26 15:57:56 compute-0 ovn_controller[146046]: 2026-01-26T15:57:56Z|00367|binding|INFO|Setting lport 32baa4e5-2605-4a3f-8023-6a80ea71793d down in Southbound
Jan 26 15:57:56 compute-0 ovn_controller[146046]: 2026-01-26T15:57:56Z|00368|binding|INFO|Removing iface tap32baa4e5-26 ovn-installed in OVS
Jan 26 15:57:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:56.120 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:54:a4 10.100.0.11'], port_security=['fa:16:3e:b7:54:a4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'd49e7eb2-dcdb-4410-a016-42e58d1d1416', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ba6f10f9d9d4df0aa855feb2a00210a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be1b617b-295a-4675-91ed-17bc8ca1fead', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=37ca6388-333b-4632-8856-fdb17c1f909d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=32baa4e5-2605-4a3f-8023-6a80ea71793d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:57:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:56.121 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 32baa4e5-2605-4a3f-8023-6a80ea71793d in datapath 91bcac0a-1926-4861-88ab-ae3c06f7e57e unbound from our chassis
Jan 26 15:57:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:56.123 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 91bcac0a-1926-4861-88ab-ae3c06f7e57e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.115 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.118 239969 DEBUG nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.118 239969 DEBUG nova.network.neutron [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:57:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:56.124 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0e08c683-b825-4795-8190-6b4517e46ac6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:56.125 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e namespace which is not needed anymore
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.131 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.141 239969 INFO nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.159 239969 DEBUG nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:57:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1371197555' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:56 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Jan 26 15:57:56 compute-0 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002b.scope: Consumed 17.454s CPU time.
Jan 26 15:57:56 compute-0 systemd-machined[208061]: Machine qemu-47-instance-0000002b terminated.
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.214 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.222 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.234 239969 INFO nova.virt.libvirt.driver [-] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Instance destroyed successfully.
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.234 239969 DEBUG nova.objects.instance [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lazy-loading 'resources' on Instance uuid d49e7eb2-dcdb-4410-a016-42e58d1d1416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.245 239969 DEBUG nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.246 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.247 239969 INFO nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Creating image(s)
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.279 239969 DEBUG nova.storage.rbd_utils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:56 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[279120]: [NOTICE]   (279124) : haproxy version is 2.8.14-c23fe91
Jan 26 15:57:56 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[279120]: [NOTICE]   (279124) : path to executable is /usr/sbin/haproxy
Jan 26 15:57:56 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[279120]: [WARNING]  (279124) : Exiting Master process...
Jan 26 15:57:56 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[279120]: [WARNING]  (279124) : Exiting Master process...
Jan 26 15:57:56 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[279120]: [ALERT]    (279124) : Current worker (279126) exited with code 143 (Terminated)
Jan 26 15:57:56 compute-0 neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e[279120]: [WARNING]  (279124) : All workers exited. Exiting... (0)
Jan 26 15:57:56 compute-0 systemd[1]: libpod-6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f.scope: Deactivated successfully.
Jan 26 15:57:56 compute-0 podman[282953]: 2026-01-26 15:57:56.312714273 +0000 UTC m=+0.061884420 container died 6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.319 239969 DEBUG nova.storage.rbd_utils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.354 239969 DEBUG nova.storage.rbd_utils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.360 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f-userdata-shm.mount: Deactivated successfully.
Jan 26 15:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-504d325a10754e439dbdb5d417913becec40f93715783f0ee81ad258bb5de625-merged.mount: Deactivated successfully.
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.402 239969 DEBUG nova.compute.manager [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received event network-vif-plugged-83367d44-efa7-43f2-b6dd-eee666860388 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.403 239969 DEBUG oslo_concurrency.lockutils [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.403 239969 DEBUG oslo_concurrency.lockutils [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.404 239969 DEBUG oslo_concurrency.lockutils [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.404 239969 DEBUG nova.compute.manager [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Processing event network-vif-plugged-83367d44-efa7-43f2-b6dd-eee666860388 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.404 239969 DEBUG nova.compute.manager [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received event network-vif-plugged-83367d44-efa7-43f2-b6dd-eee666860388 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.404 239969 DEBUG oslo_concurrency.lockutils [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.405 239969 DEBUG oslo_concurrency.lockutils [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.405 239969 DEBUG oslo_concurrency.lockutils [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.405 239969 DEBUG nova.compute.manager [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] No waiting events found dispatching network-vif-plugged-83367d44-efa7-43f2-b6dd-eee666860388 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.405 239969 WARNING nova.compute.manager [req-c078d90f-ca30-46a0-b026-0dec5ff74b0b req-e4e74981-c7dd-4824-8ea3-4e7661a64d1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received unexpected event network-vif-plugged-83367d44-efa7-43f2-b6dd-eee666860388 for instance with vm_state building and task_state spawning.
Jan 26 15:57:56 compute-0 podman[282953]: 2026-01-26 15:57:56.40628404 +0000 UTC m=+0.155454177 container cleanup 6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.407 239969 DEBUG nova.virt.libvirt.vif [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:56:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2044484805',display_name='tempest-tempest.common.compute-instance-2044484805',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2044484805',id=43,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKeLSVrQjLVMrDly+731By+tGp5cL/q2RkfHkPZwnhscMO6Jhep7798XgwoAsogpRVdfhr++1ZBHm1/nTkA1YXe3jICWiVf8G7qT1QKZe6WXtrz4RsbEZ4GIF6fD2FL+/A==',key_name='tempest-keypair-1813699058',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:56:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ba6f10f9d9d4df0aa855feb2a00210a',ramdisk_id='',reservation_id='r-lr8w7kzm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1005120150',owner_user_name='tempest-AttachInterfacesTestJSON-1005120150-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:56:42Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='700f80e28de6426c81d2be69a7fd8ffb',uuid=d49e7eb2-dcdb-4410-a016-42e58d1d1416,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.408 239969 DEBUG nova.network.os_vif_util [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converting VIF {"id": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "address": "fa:16:3e:b7:54:a4", "network": {"id": "91bcac0a-1926-4861-88ab-ae3c06f7e57e", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-2009484795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ba6f10f9d9d4df0aa855feb2a00210a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap32baa4e5-26", "ovs_interfaceid": "32baa4e5-2605-4a3f-8023-6a80ea71793d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.409 239969 DEBUG nova.network.os_vif_util [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b7:54:a4,bridge_name='br-int',has_traffic_filtering=True,id=32baa4e5-2605-4a3f-8023-6a80ea71793d,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32baa4e5-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.409 239969 DEBUG os_vif [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:54:a4,bridge_name='br-int',has_traffic_filtering=True,id=32baa4e5-2605-4a3f-8023-6a80ea71793d,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32baa4e5-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.411 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.411 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap32baa4e5-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.413 239969 DEBUG nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.416 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.418 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.421 239969 INFO os_vif [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b7:54:a4,bridge_name='br-int',has_traffic_filtering=True,id=32baa4e5-2605-4a3f-8023-6a80ea71793d,network=Network(91bcac0a-1926-4861-88ab-ae3c06f7e57e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap32baa4e5-26')
Jan 26 15:57:56 compute-0 systemd[1]: libpod-conmon-6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f.scope: Deactivated successfully.
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.442 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443076.4227874, 58e03608-6ade-4867-ba57-e5b723e5fa71 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.442 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] VM Resumed (Lifecycle Event)
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.445 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.453 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.454 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.455 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.455 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.476 239969 DEBUG nova.storage.rbd_utils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.486 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.524 239969 DEBUG nova.policy [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fe6e9eb388ed48fc93bf3f4b984f8d1c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'acad8b8490f840ba8d8c5d0d91874b79', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.528 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.531 239969 INFO nova.virt.libvirt.driver [-] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Instance spawned successfully.
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.532 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.535 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.555 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.557 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.557 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.558 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.559 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.559 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.559 239969 DEBUG nova.virt.libvirt.driver [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.567 239969 DEBUG nova.compute.manager [req-9e00f7f9-e80b-4ff8-9512-9a1d8de77176 req-2ca9ff21-5971-4683-8be1-64cbd640b24d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-vif-unplugged-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.567 239969 DEBUG oslo_concurrency.lockutils [req-9e00f7f9-e80b-4ff8-9512-9a1d8de77176 req-2ca9ff21-5971-4683-8be1-64cbd640b24d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.568 239969 DEBUG oslo_concurrency.lockutils [req-9e00f7f9-e80b-4ff8-9512-9a1d8de77176 req-2ca9ff21-5971-4683-8be1-64cbd640b24d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.568 239969 DEBUG oslo_concurrency.lockutils [req-9e00f7f9-e80b-4ff8-9512-9a1d8de77176 req-2ca9ff21-5971-4683-8be1-64cbd640b24d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.568 239969 DEBUG nova.compute.manager [req-9e00f7f9-e80b-4ff8-9512-9a1d8de77176 req-2ca9ff21-5971-4683-8be1-64cbd640b24d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] No waiting events found dispatching network-vif-unplugged-32baa4e5-2605-4a3f-8023-6a80ea71793d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.568 239969 DEBUG nova.compute.manager [req-9e00f7f9-e80b-4ff8-9512-9a1d8de77176 req-2ca9ff21-5971-4683-8be1-64cbd640b24d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-vif-unplugged-32baa4e5-2605-4a3f-8023-6a80ea71793d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.630 239969 INFO nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Took 21.16 seconds to spawn the instance on the hypervisor.
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.630 239969 DEBUG nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.684 239969 INFO nova.compute.manager [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Took 22.97 seconds to build instance.
Jan 26 15:57:56 compute-0 nova_compute[239965]: 2026-01-26 15:57:56.698 239969 DEBUG oslo_concurrency.lockutils [None req-539092f7-2ea8-4020-8e45-43b00cb054b0 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.186 239969 DEBUG nova.network.neutron [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Successfully created port: 7b823eab-5ae0-4e07-ae00-150cce2fac3c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:57:57 compute-0 ceph-mon[75140]: pgmap v1343: 305 pgs: 305 active+clean; 339 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 6.6 MiB/s wr, 245 op/s
Jan 26 15:57:57 compute-0 podman[283035]: 2026-01-26 15:57:57.58251908 +0000 UTC m=+1.154050407 container remove 6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 15:57:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:57.591 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3cd72fed-daa5-4ab8-9e8b-756139b0401c]: (4, ('Mon Jan 26 03:57:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e (6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f)\n6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f\nMon Jan 26 03:57:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e (6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f)\n6af54679a61765c516d4a500cf6abcfa209b3c765eaea025ea060fdfc106774f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:57.600 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4b060b-d39d-49fe-9463-9136c8b07ff8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:57.602 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91bcac0a-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.655 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:57 compute-0 kernel: tap91bcac0a-10: left promiscuous mode
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.676 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:57:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:57.678 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6b511e3f-a043-4209-becf-624505b5d485]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:57.694 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d110c7ff-1518-441c-8f8e-b90834f4445c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:57.697 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c954f72-df80-4d0c-ab6c-9f1accf4d97a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:57.719 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a228df9c-7b23-4861-9986-10f2c97bde24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 448583, 'reachable_time': 34915, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283108, 'error': None, 'target': 'ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:57 compute-0 systemd[1]: run-netns-ovnmeta\x2d91bcac0a\x2d1926\x2d4861\x2d88ab\x2dae3c06f7e57e.mount: Deactivated successfully.
Jan 26 15:57:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:57.726 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-91bcac0a-1926-4861-88ab-ae3c06f7e57e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:57:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:57.727 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[0b326de9-49c7-43dd-b635-49eb66e6d207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:57:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 339 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.4 MiB/s wr, 207 op/s
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.915 239969 DEBUG nova.network.neutron [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Successfully updated port: 7b823eab-5ae0-4e07-ae00-150cce2fac3c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.930 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.930 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquired lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.931 239969 DEBUG nova.network.neutron [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.965 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443062.18302, 0815f972-3907-402e-b6b4-f1f1fc93a8ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.966 239969 INFO nova.compute.manager [-] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] VM Stopped (Lifecycle Event)
Jan 26 15:57:57 compute-0 nova_compute[239965]: 2026-01-26 15:57:57.984 239969 DEBUG nova.compute.manager [None req-7c508402-7dbd-4137-a707-d58d2dbe9aa5 - - - - - -] [instance: 0815f972-3907-402e-b6b4-f1f1fc93a8ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.151 239969 DEBUG nova.network.neutron [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.377 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.891s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:58 compute-0 podman[283109]: 2026-01-26 15:57:58.383039338 +0000 UTC m=+0.070378509 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.455 239969 DEBUG nova.compute.manager [req-38a1639d-3de4-4b6d-b3b6-80c4ea51ef68 req-d0664154-1a59-45eb-bf26-4dfd17eddd69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received event network-changed-83367d44-efa7-43f2-b6dd-eee666860388 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.456 239969 DEBUG nova.compute.manager [req-38a1639d-3de4-4b6d-b3b6-80c4ea51ef68 req-d0664154-1a59-45eb-bf26-4dfd17eddd69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Refreshing instance network info cache due to event network-changed-83367d44-efa7-43f2-b6dd-eee666860388. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.456 239969 DEBUG oslo_concurrency.lockutils [req-38a1639d-3de4-4b6d-b3b6-80c4ea51ef68 req-d0664154-1a59-45eb-bf26-4dfd17eddd69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-58e03608-6ade-4867-ba57-e5b723e5fa71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.457 239969 DEBUG oslo_concurrency.lockutils [req-38a1639d-3de4-4b6d-b3b6-80c4ea51ef68 req-d0664154-1a59-45eb-bf26-4dfd17eddd69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-58e03608-6ade-4867-ba57-e5b723e5fa71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.457 239969 DEBUG nova.network.neutron [req-38a1639d-3de4-4b6d-b3b6-80c4ea51ef68 req-d0664154-1a59-45eb-bf26-4dfd17eddd69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Refreshing network info cache for port 83367d44-efa7-43f2-b6dd-eee666860388 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.475 239969 INFO nova.virt.libvirt.driver [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Deleting instance files /var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5_del
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.476 239969 INFO nova.virt.libvirt.driver [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Deletion of /var/lib/nova/instances/3940597d-f11c-4959-85c5-bb51a373cec5_del complete
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.490 239969 DEBUG nova.storage.rbd_utils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] resizing rbd image 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.549 239969 INFO nova.compute.manager [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Took 23.52 seconds to destroy the instance on the hypervisor.
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.550 239969 DEBUG oslo.service.loopingcall [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.550 239969 DEBUG nova.compute.manager [-] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.550 239969 DEBUG nova.network.neutron [-] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:57:58 compute-0 ceph-mon[75140]: pgmap v1344: 305 pgs: 305 active+clean; 339 MiB data, 567 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.4 MiB/s wr, 207 op/s
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.635 239969 DEBUG nova.objects.instance [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lazy-loading 'migration_context' on Instance uuid 4f7de6be-0bd4-48af-852a-f93f9da50ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.656 239969 INFO nova.virt.libvirt.driver [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Deleting instance files /var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416_del
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.658 239969 INFO nova.virt.libvirt.driver [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Deletion of /var/lib/nova/instances/d49e7eb2-dcdb-4410-a016-42e58d1d1416_del complete
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.663 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.663 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Ensure instance console log exists: /var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.664 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.664 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.664 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.707 239969 INFO nova.compute.manager [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Took 2.72 seconds to destroy the instance on the hypervisor.
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.707 239969 DEBUG oslo.service.loopingcall [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.708 239969 DEBUG nova.compute.manager [-] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.708 239969 DEBUG nova.network.neutron [-] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.770 239969 DEBUG nova.network.neutron [-] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.794 239969 DEBUG nova.network.neutron [-] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.812 239969 INFO nova.compute.manager [-] [instance: 3940597d-f11c-4959-85c5-bb51a373cec5] Took 0.26 seconds to deallocate network for instance.
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.867 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:58 compute-0 nova_compute[239965]: 2026-01-26 15:57:58.868 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.024 239969 DEBUG oslo_concurrency.processutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.171 239969 DEBUG nova.compute.manager [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-vif-plugged-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.172 239969 DEBUG oslo_concurrency.lockutils [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.172 239969 DEBUG oslo_concurrency.lockutils [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.172 239969 DEBUG oslo_concurrency.lockutils [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.172 239969 DEBUG nova.compute.manager [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] No waiting events found dispatching network-vif-plugged-32baa4e5-2605-4a3f-8023-6a80ea71793d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.173 239969 WARNING nova.compute.manager [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received unexpected event network-vif-plugged-32baa4e5-2605-4a3f-8023-6a80ea71793d for instance with vm_state active and task_state deleting.
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.173 239969 DEBUG nova.compute.manager [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received event network-changed-7b823eab-5ae0-4e07-ae00-150cce2fac3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.173 239969 DEBUG nova.compute.manager [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Refreshing instance network info cache due to event network-changed-7b823eab-5ae0-4e07-ae00-150cce2fac3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.174 239969 DEBUG oslo_concurrency.lockutils [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.212 239969 DEBUG nova.network.neutron [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Updating instance_info_cache with network_info: [{"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:59.219 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:59.220 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:57:59.221 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.242 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Releasing lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.242 239969 DEBUG nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Instance network_info: |[{"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.243 239969 DEBUG oslo_concurrency.lockutils [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.243 239969 DEBUG nova.network.neutron [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Refreshing network info cache for port 7b823eab-5ae0-4e07-ae00-150cce2fac3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.246 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Start _get_guest_xml network_info=[{"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.252 239969 WARNING nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.262 239969 DEBUG nova.virt.libvirt.host [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.263 239969 DEBUG nova.virt.libvirt.host [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.270 239969 DEBUG nova.virt.libvirt.host [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.271 239969 DEBUG nova.virt.libvirt.host [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.271 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.271 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.272 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.273 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.273 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.273 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.273 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.274 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.274 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.275 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.275 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.275 239969 DEBUG nova.virt.hardware [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.279 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:59 compute-0 podman[283221]: 2026-01-26 15:57:59.407162715 +0000 UTC m=+0.094830629 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 15:57:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:57:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2468312816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.662 239969 DEBUG oslo_concurrency.processutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.673 239969 DEBUG nova.compute.provider_tree [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.678 239969 DEBUG nova.network.neutron [-] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.691 239969 DEBUG nova.scheduler.client.report [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.696 239969 INFO nova.compute.manager [-] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Took 0.99 seconds to deallocate network for instance.
Jan 26 15:57:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2468312816' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.715 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.748 239969 INFO nova.scheduler.client.report [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Deleted allocations for instance 3940597d-f11c-4959-85c5-bb51a373cec5
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.752 239969 DEBUG oslo_concurrency.lockutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.753 239969 DEBUG oslo_concurrency.lockutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.825 239969 DEBUG oslo_concurrency.lockutils [None req-83160aef-3d3f-4d49-875b-80e9a2a33172 09c34e06e37948cb8302a7f6cd04d4e5 92c1bb62f7f64e63a97025674c154318 - - default default] Lock "3940597d-f11c-4959-85c5-bb51a373cec5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 26.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:57:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1345: 305 pgs: 305 active+clean; 265 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.4 MiB/s wr, 349 op/s
Jan 26 15:57:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e229 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:57:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Jan 26 15:57:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Jan 26 15:57:59 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.870 239969 DEBUG oslo_concurrency.processutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:57:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:57:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3760072447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.905 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.937 239969 DEBUG nova.storage.rbd_utils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:57:59 compute-0 nova_compute[239965]: 2026-01-26 15:57:59.943 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.163 239969 DEBUG nova.network.neutron [req-38a1639d-3de4-4b6d-b3b6-80c4ea51ef68 req-d0664154-1a59-45eb-bf26-4dfd17eddd69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Updated VIF entry in instance network info cache for port 83367d44-efa7-43f2-b6dd-eee666860388. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.165 239969 DEBUG nova.network.neutron [req-38a1639d-3de4-4b6d-b3b6-80c4ea51ef68 req-d0664154-1a59-45eb-bf26-4dfd17eddd69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Updating instance_info_cache with network_info: [{"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.189 239969 DEBUG oslo_concurrency.lockutils [req-38a1639d-3de4-4b6d-b3b6-80c4ea51ef68 req-d0664154-1a59-45eb-bf26-4dfd17eddd69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-58e03608-6ade-4867-ba57-e5b723e5fa71" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:58:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788708190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.461 239969 DEBUG oslo_concurrency.processutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.475 239969 DEBUG nova.compute.provider_tree [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.500 239969 DEBUG nova.network.neutron [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Updated VIF entry in instance network info cache for port 7b823eab-5ae0-4e07-ae00-150cce2fac3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.500 239969 DEBUG nova.network.neutron [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Updating instance_info_cache with network_info: [{"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.504 239969 DEBUG nova.scheduler.client.report [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.507 239969 DEBUG nova.compute.manager [req-09fb8769-d8b5-48de-a5f0-2cf8ed006918 req-b43bdd3d-bc0c-49c2-9af7-15f2dd9d1749 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Received event network-vif-deleted-32baa4e5-2605-4a3f-8023-6a80ea71793d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:58:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/347845534' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.526 239969 DEBUG oslo_concurrency.lockutils [req-8c68af18-f050-470b-9252-dc250d6abdf3 req-1ea39f7d-d098-44e0-a2ed-c108a1965aac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.529 239969 DEBUG oslo_concurrency.lockutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.544 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.545 239969 DEBUG nova.virt.libvirt.vif [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:57:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1848216185',display_name='tempest-SecurityGroupsTestJSON-server-1848216185',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1848216185',id=49,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acad8b8490f840ba8d8c5d0d91874b79',ramdisk_id='',reservation_id='r-9q4ny5nc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-188832966',owner_user_name='tempest-SecurityGroupsTestJSON-188832966-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:57:56Z,user_data=None,user_id='fe6e9eb388ed48fc93bf3f4b984f8d1c',uuid=4f7de6be-0bd4-48af-852a-f93f9da50ef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.546 239969 DEBUG nova.network.os_vif_util [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converting VIF {"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.547 239969 DEBUG nova.network.os_vif_util [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.548 239969 DEBUG nova.objects.instance [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7de6be-0bd4-48af-852a-f93f9da50ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.559 239969 INFO nova.scheduler.client.report [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Deleted allocations for instance d49e7eb2-dcdb-4410-a016-42e58d1d1416
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.573 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <uuid>4f7de6be-0bd4-48af-852a-f93f9da50ef5</uuid>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <name>instance-00000031</name>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1848216185</nova:name>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:57:59</nova:creationTime>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <nova:user uuid="fe6e9eb388ed48fc93bf3f4b984f8d1c">tempest-SecurityGroupsTestJSON-188832966-project-member</nova:user>
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <nova:project uuid="acad8b8490f840ba8d8c5d0d91874b79">tempest-SecurityGroupsTestJSON-188832966</nova:project>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <nova:port uuid="7b823eab-5ae0-4e07-ae00-150cce2fac3c">
Jan 26 15:58:00 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <system>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <entry name="serial">4f7de6be-0bd4-48af-852a-f93f9da50ef5</entry>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <entry name="uuid">4f7de6be-0bd4-48af-852a-f93f9da50ef5</entry>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     </system>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <os>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   </os>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <features>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   </features>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk">
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       </source>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk.config">
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       </source>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:58:00 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:89:72:0d"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <target dev="tap7b823eab-5a"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5/console.log" append="off"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <video>
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     </video>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:58:00 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:58:00 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:58:00 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:58:00 compute-0 nova_compute[239965]: </domain>
Jan 26 15:58:00 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.579 239969 DEBUG nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Preparing to wait for external event network-vif-plugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.580 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.580 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.580 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.581 239969 DEBUG nova.virt.libvirt.vif [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:57:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1848216185',display_name='tempest-SecurityGroupsTestJSON-server-1848216185',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1848216185',id=49,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='acad8b8490f840ba8d8c5d0d91874b79',ramdisk_id='',reservation_id='r-9q4ny5nc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-188832966',owner_user_name='tempest-SecurityGroupsTestJSON-188832966-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:57:56Z,user_data=None,user_id='fe6e9eb388ed48fc93bf3f4b984f8d1c',uuid=4f7de6be-0bd4-48af-852a-f93f9da50ef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.581 239969 DEBUG nova.network.os_vif_util [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converting VIF {"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.582 239969 DEBUG nova.network.os_vif_util [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.583 239969 DEBUG os_vif [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.584 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.584 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.585 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.589 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.590 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b823eab-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.590 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b823eab-5a, col_values=(('external_ids', {'iface-id': '7b823eab-5ae0-4e07-ae00-150cce2fac3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:72:0d', 'vm-uuid': '4f7de6be-0bd4-48af-852a-f93f9da50ef5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.592 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:00 compute-0 NetworkManager[48954]: <info>  [1769443080.5934] manager: (tap7b823eab-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.595 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.600 239969 INFO os_vif [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a')
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.628 239969 DEBUG oslo_concurrency.lockutils [None req-c6ace3de-fea6-4ad5-98a6-ed093e064703 700f80e28de6426c81d2be69a7fd8ffb 3ba6f10f9d9d4df0aa855feb2a00210a - - default default] Lock "d49e7eb2-dcdb-4410-a016-42e58d1d1416" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.668 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.669 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.670 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] No VIF found with MAC fa:16:3e:89:72:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.670 239969 INFO nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Using config drive
Jan 26 15:58:00 compute-0 nova_compute[239965]: 2026-01-26 15:58:00.692 239969 DEBUG nova.storage.rbd_utils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:00 compute-0 ceph-mon[75140]: pgmap v1345: 305 pgs: 305 active+clean; 265 MiB data, 530 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.4 MiB/s wr, 349 op/s
Jan 26 15:58:00 compute-0 ceph-mon[75140]: osdmap e230: 3 total, 3 up, 3 in
Jan 26 15:58:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3760072447' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/788708190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/347845534' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:01 compute-0 nova_compute[239965]: 2026-01-26 15:58:01.078 239969 INFO nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Creating config drive at /var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5/disk.config
Jan 26 15:58:01 compute-0 nova_compute[239965]: 2026-01-26 15:58:01.085 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7c6ddr_d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:01 compute-0 nova_compute[239965]: 2026-01-26 15:58:01.233 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7c6ddr_d" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:01 compute-0 nova_compute[239965]: 2026-01-26 15:58:01.264 239969 DEBUG nova.storage.rbd_utils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] rbd image 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:01 compute-0 nova_compute[239965]: 2026-01-26 15:58:01.271 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5/disk.config 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 260 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.9 MiB/s wr, 372 op/s
Jan 26 15:58:01 compute-0 nova_compute[239965]: 2026-01-26 15:58:01.910 239969 DEBUG oslo_concurrency.processutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5/disk.config 4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:01 compute-0 nova_compute[239965]: 2026-01-26 15:58:01.912 239969 INFO nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Deleting local config drive /var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5/disk.config because it was imported into RBD.
Jan 26 15:58:01 compute-0 kernel: tap7b823eab-5a: entered promiscuous mode
Jan 26 15:58:01 compute-0 NetworkManager[48954]: <info>  [1769443081.9701] manager: (tap7b823eab-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Jan 26 15:58:02 compute-0 systemd-udevd[283402]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:58:02 compute-0 ovn_controller[146046]: 2026-01-26T15:58:02Z|00369|binding|INFO|Claiming lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c for this chassis.
Jan 26 15:58:02 compute-0 ovn_controller[146046]: 2026-01-26T15:58:02Z|00370|binding|INFO|7b823eab-5ae0-4e07-ae00-150cce2fac3c: Claiming fa:16:3e:89:72:0d 10.100.0.10
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.014 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:02 compute-0 NetworkManager[48954]: <info>  [1769443082.0317] device (tap7b823eab-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:58:02 compute-0 NetworkManager[48954]: <info>  [1769443082.0333] device (tap7b823eab-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:58:02 compute-0 ovn_controller[146046]: 2026-01-26T15:58:02Z|00371|binding|INFO|Setting lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c ovn-installed in OVS
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.047 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:02 compute-0 ovn_controller[146046]: 2026-01-26T15:58:02Z|00372|binding|INFO|Setting lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c up in Southbound
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.050 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:72:0d 10.100.0.10'], port_security=['fa:16:3e:89:72:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7de6be-0bd4-48af-852a-f93f9da50ef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acad8b8490f840ba8d8c5d0d91874b79', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5fbd4415-78f2-4f82-97a8-e19004f35ccb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33add925-8073-4624-a365-97f1e7b74639, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7b823eab-5ae0-4e07-ae00-150cce2fac3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.051 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7b823eab-5ae0-4e07-ae00-150cce2fac3c in datapath 3bc677e6-ddd2-49e0-822d-6df359232a0e bound to our chassis
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.053 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bc677e6-ddd2-49e0-822d-6df359232a0e
Jan 26 15:58:02 compute-0 systemd-machined[208061]: New machine qemu-54-instance-00000031.
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.071 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[223e2e0c-20e5-4124-8492-a8e0a81a3786]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:02 compute-0 systemd[1]: Started Virtual Machine qemu-54-instance-00000031.
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.102 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0db5f404-cbb5-4c88-b498-01baa8a7ccb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.106 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[be17cb3c-8757-4f84-b26a-d0c7e1ee277f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.138 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[82bc500c-2605-4dbc-9756-ed4fe90c159f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.157 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e949f76c-78d4-4a1e-84fe-5ecd40c3e4d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bc677e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:3f:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453923, 'reachable_time': 37183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283418, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.175 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c65ffc5a-cbd3-472e-9103-79abb2c67aa0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3bc677e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453936, 'tstamp': 453936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283420, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3bc677e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453939, 'tstamp': 453939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283420, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.177 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bc677e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.179 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.182 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bc677e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.182 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.183 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bc677e6-d0, col_values=(('external_ids', {'iface-id': '66f6ca50-b295-4776-bfa0-51b3ff3fef47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:02.183 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.627 239969 DEBUG nova.compute.manager [req-7f113fb0-dc63-4315-b394-3276167c4879 req-d81ee264-7e95-488e-8164-a911d71565b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received event network-vif-plugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.627 239969 DEBUG oslo_concurrency.lockutils [req-7f113fb0-dc63-4315-b394-3276167c4879 req-d81ee264-7e95-488e-8164-a911d71565b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.628 239969 DEBUG oslo_concurrency.lockutils [req-7f113fb0-dc63-4315-b394-3276167c4879 req-d81ee264-7e95-488e-8164-a911d71565b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.628 239969 DEBUG oslo_concurrency.lockutils [req-7f113fb0-dc63-4315-b394-3276167c4879 req-d81ee264-7e95-488e-8164-a911d71565b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.629 239969 DEBUG nova.compute.manager [req-7f113fb0-dc63-4315-b394-3276167c4879 req-d81ee264-7e95-488e-8164-a911d71565b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Processing event network-vif-plugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.795 239969 DEBUG nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.797 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443082.7944276, 4f7de6be-0bd4-48af-852a-f93f9da50ef5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.797 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] VM Started (Lifecycle Event)
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.803 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.813 239969 INFO nova.virt.libvirt.driver [-] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Instance spawned successfully.
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.816 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.823 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.830 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.844 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.845 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.846 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.846 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.847 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.847 239969 DEBUG nova.virt.libvirt.driver [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.853 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.855 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443082.796437, 4f7de6be-0bd4-48af-852a-f93f9da50ef5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.856 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] VM Paused (Lifecycle Event)
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.884 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.891 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443082.802578, 4f7de6be-0bd4-48af-852a-f93f9da50ef5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.891 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] VM Resumed (Lifecycle Event)
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.916 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.925 239969 INFO nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Took 6.68 seconds to spawn the instance on the hypervisor.
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.926 239969 DEBUG nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.929 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:58:02 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.964 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:58:02 compute-0 ceph-mon[75140]: pgmap v1347: 305 pgs: 305 active+clean; 260 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.9 MiB/s wr, 372 op/s
Jan 26 15:58:03 compute-0 nova_compute[239965]: 2026-01-26 15:58:02.999 239969 INFO nova.compute.manager [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Took 7.88 seconds to build instance.
Jan 26 15:58:03 compute-0 nova_compute[239965]: 2026-01-26 15:58:03.016 239969 DEBUG oslo_concurrency.lockutils [None req-f5690aa2-68ae-453c-ac8c-30a41b76649e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 260 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.2 MiB/s wr, 329 op/s
Jan 26 15:58:04 compute-0 ceph-mon[75140]: pgmap v1348: 305 pgs: 305 active+clean; 260 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.2 MiB/s wr, 329 op/s
Jan 26 15:58:04 compute-0 nova_compute[239965]: 2026-01-26 15:58:04.696 239969 DEBUG nova.compute.manager [req-4b0ae32b-819b-4faa-9c86-906df454d083 req-c5ae7c96-1d1c-4f6c-8b67-0590ffd24aa5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received event network-vif-plugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:04 compute-0 nova_compute[239965]: 2026-01-26 15:58:04.697 239969 DEBUG oslo_concurrency.lockutils [req-4b0ae32b-819b-4faa-9c86-906df454d083 req-c5ae7c96-1d1c-4f6c-8b67-0590ffd24aa5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:04 compute-0 nova_compute[239965]: 2026-01-26 15:58:04.697 239969 DEBUG oslo_concurrency.lockutils [req-4b0ae32b-819b-4faa-9c86-906df454d083 req-c5ae7c96-1d1c-4f6c-8b67-0590ffd24aa5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:04 compute-0 nova_compute[239965]: 2026-01-26 15:58:04.697 239969 DEBUG oslo_concurrency.lockutils [req-4b0ae32b-819b-4faa-9c86-906df454d083 req-c5ae7c96-1d1c-4f6c-8b67-0590ffd24aa5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:04 compute-0 nova_compute[239965]: 2026-01-26 15:58:04.697 239969 DEBUG nova.compute.manager [req-4b0ae32b-819b-4faa-9c86-906df454d083 req-c5ae7c96-1d1c-4f6c-8b67-0590ffd24aa5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] No waiting events found dispatching network-vif-plugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:04 compute-0 nova_compute[239965]: 2026-01-26 15:58:04.698 239969 WARNING nova.compute.manager [req-4b0ae32b-819b-4faa-9c86-906df454d083 req-c5ae7c96-1d1c-4f6c-8b67-0590ffd24aa5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received unexpected event network-vif-plugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c for instance with vm_state active and task_state None.
Jan 26 15:58:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:05 compute-0 nova_compute[239965]: 2026-01-26 15:58:05.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:05 compute-0 nova_compute[239965]: 2026-01-26 15:58:05.592 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:05 compute-0 nova_compute[239965]: 2026-01-26 15:58:05.717 239969 DEBUG oslo_concurrency.lockutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:05 compute-0 nova_compute[239965]: 2026-01-26 15:58:05.718 239969 DEBUG oslo_concurrency.lockutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:05 compute-0 nova_compute[239965]: 2026-01-26 15:58:05.718 239969 INFO nova.compute.manager [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Rebooting instance
Jan 26 15:58:05 compute-0 nova_compute[239965]: 2026-01-26 15:58:05.737 239969 DEBUG oslo_concurrency.lockutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:05 compute-0 nova_compute[239965]: 2026-01-26 15:58:05.737 239969 DEBUG oslo_concurrency.lockutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquired lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:05 compute-0 nova_compute[239965]: 2026-01-26 15:58:05.738 239969 DEBUG nova.network.neutron [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:58:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1349: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 266 op/s
Jan 26 15:58:06 compute-0 ovn_controller[146046]: 2026-01-26T15:58:06Z|00373|binding|INFO|Releasing lport 66f6ca50-b295-4776-bfa0-51b3ff3fef47 from this chassis (sb_readonly=0)
Jan 26 15:58:06 compute-0 ovn_controller[146046]: 2026-01-26T15:58:06Z|00374|binding|INFO|Releasing lport 9f9fcf20-2e57-45c7-be90-f2c7c728896a from this chassis (sb_readonly=0)
Jan 26 15:58:06 compute-0 nova_compute[239965]: 2026-01-26 15:58:06.218 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:06 compute-0 nova_compute[239965]: 2026-01-26 15:58:06.800 239969 DEBUG nova.compute.manager [req-aa7aa408-d731-4a1e-8602-fb1f65047b98 req-3f7e784b-9aed-4e15-8add-90b499134ca6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received event network-changed-7b823eab-5ae0-4e07-ae00-150cce2fac3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:06 compute-0 nova_compute[239965]: 2026-01-26 15:58:06.801 239969 DEBUG nova.compute.manager [req-aa7aa408-d731-4a1e-8602-fb1f65047b98 req-3f7e784b-9aed-4e15-8add-90b499134ca6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Refreshing instance network info cache due to event network-changed-7b823eab-5ae0-4e07-ae00-150cce2fac3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:58:06 compute-0 nova_compute[239965]: 2026-01-26 15:58:06.801 239969 DEBUG oslo_concurrency.lockutils [req-aa7aa408-d731-4a1e-8602-fb1f65047b98 req-3f7e784b-9aed-4e15-8add-90b499134ca6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:07 compute-0 ceph-mon[75140]: pgmap v1349: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 266 op/s
Jan 26 15:58:07 compute-0 nova_compute[239965]: 2026-01-26 15:58:07.530 239969 DEBUG nova.network.neutron [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Updating instance_info_cache with network_info: [{"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:07 compute-0 nova_compute[239965]: 2026-01-26 15:58:07.552 239969 DEBUG oslo_concurrency.lockutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Releasing lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:07 compute-0 nova_compute[239965]: 2026-01-26 15:58:07.554 239969 DEBUG nova.compute.manager [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:07 compute-0 nova_compute[239965]: 2026-01-26 15:58:07.554 239969 DEBUG oslo_concurrency.lockutils [req-aa7aa408-d731-4a1e-8602-fb1f65047b98 req-3f7e784b-9aed-4e15-8add-90b499134ca6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:07 compute-0 nova_compute[239965]: 2026-01-26 15:58:07.554 239969 DEBUG nova.network.neutron [req-aa7aa408-d731-4a1e-8602-fb1f65047b98 req-3f7e784b-9aed-4e15-8add-90b499134ca6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Refreshing network info cache for port 7b823eab-5ae0-4e07-ae00-150cce2fac3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:58:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 266 op/s
Jan 26 15:58:07 compute-0 nova_compute[239965]: 2026-01-26 15:58:07.968 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:08 compute-0 kernel: tap7b823eab-5a (unregistering): left promiscuous mode
Jan 26 15:58:08 compute-0 NetworkManager[48954]: <info>  [1769443088.1313] device (tap7b823eab-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.140 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:08 compute-0 ovn_controller[146046]: 2026-01-26T15:58:08Z|00375|binding|INFO|Releasing lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c from this chassis (sb_readonly=0)
Jan 26 15:58:08 compute-0 ovn_controller[146046]: 2026-01-26T15:58:08Z|00376|binding|INFO|Setting lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c down in Southbound
Jan 26 15:58:08 compute-0 ovn_controller[146046]: 2026-01-26T15:58:08Z|00377|binding|INFO|Removing iface tap7b823eab-5a ovn-installed in OVS
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.142 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.148 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:72:0d 10.100.0.10'], port_security=['fa:16:3e:89:72:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7de6be-0bd4-48af-852a-f93f9da50ef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acad8b8490f840ba8d8c5d0d91874b79', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5fbd4415-78f2-4f82-97a8-e19004f35ccb 650af045-ab14-4167-b904-a9de57b96a8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33add925-8073-4624-a365-97f1e7b74639, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7b823eab-5ae0-4e07-ae00-150cce2fac3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.150 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7b823eab-5ae0-4e07-ae00-150cce2fac3c in datapath 3bc677e6-ddd2-49e0-822d-6df359232a0e unbound from our chassis
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.151 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bc677e6-ddd2-49e0-822d-6df359232a0e
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.161 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.172 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6f3518e3-e25a-47ab-a3eb-29e1294851d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:08 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Deactivated successfully.
Jan 26 15:58:08 compute-0 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d00000031.scope: Consumed 5.718s CPU time.
Jan 26 15:58:08 compute-0 systemd-machined[208061]: Machine qemu-54-instance-00000031 terminated.
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.205 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6edc3eee-04f2-4f93-9d10-cbc01248a63a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.209 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1ef493-42e3-4f52-bff5-2a60a6e5ef88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.238 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d2757bac-309e-43ee-9add-e0429502f9ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.255 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b540626-e903-47bb-99fc-3413b7383269]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bc677e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:3f:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 616, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453923, 'reachable_time': 37183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283475, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:08 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 15:58:08 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.271 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[12a0f493-288f-4ca8-b74f-6e8a7a2ae43b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3bc677e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453936, 'tstamp': 453936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283476, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3bc677e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453939, 'tstamp': 453939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283476, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.273 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bc677e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.275 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.278 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.279 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bc677e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.279 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:08 compute-0 ceph-mon[75140]: pgmap v1350: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.2 MiB/s wr, 266 op/s
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.280 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bc677e6-d0, col_values=(('external_ids', {'iface-id': '66f6ca50-b295-4776-bfa0-51b3ff3fef47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:08.280 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.303 239969 INFO nova.virt.libvirt.driver [-] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Instance destroyed successfully.
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.303 239969 DEBUG nova.objects.instance [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lazy-loading 'resources' on Instance uuid 4f7de6be-0bd4-48af-852a-f93f9da50ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.327 239969 DEBUG nova.virt.libvirt.vif [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:57:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1848216185',display_name='tempest-SecurityGroupsTestJSON-server-1848216185',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1848216185',id=49,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:58:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acad8b8490f840ba8d8c5d0d91874b79',ramdisk_id='',reservation_id='r-9q4ny5nc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-188832966',owner_user_name='tempest-SecurityGroupsTestJSON-188832966-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:58:07Z,user_data=None,user_id='fe6e9eb388ed48fc93bf3f4b984f8d1c',uuid=4f7de6be-0bd4-48af-852a-f93f9da50ef5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.328 239969 DEBUG nova.network.os_vif_util [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converting VIF {"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.328 239969 DEBUG nova.network.os_vif_util [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.329 239969 DEBUG os_vif [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.331 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.331 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b823eab-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.334 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.337 239969 INFO os_vif [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a')
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.343 239969 DEBUG nova.virt.libvirt.driver [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Start _get_guest_xml network_info=[{"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.347 239969 WARNING nova.virt.libvirt.driver [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.353 239969 DEBUG nova.virt.libvirt.host [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.353 239969 DEBUG nova.virt.libvirt.host [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.360 239969 DEBUG nova.virt.libvirt.host [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.361 239969 DEBUG nova.virt.libvirt.host [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.361 239969 DEBUG nova.virt.libvirt.driver [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.361 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.362 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.362 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.362 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.362 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.362 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.362 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.363 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.363 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.363 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.363 239969 DEBUG nova.virt.hardware [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.363 239969 DEBUG nova.objects.instance [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4f7de6be-0bd4-48af-852a-f93f9da50ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.393 239969 DEBUG oslo_concurrency.processutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:58:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315380499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:08 compute-0 nova_compute[239965]: 2026-01-26 15:58:08.980 239969 DEBUG oslo_concurrency.processutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.015 239969 DEBUG oslo_concurrency.processutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.060 239969 DEBUG nova.network.neutron [req-aa7aa408-d731-4a1e-8602-fb1f65047b98 req-3f7e784b-9aed-4e15-8add-90b499134ca6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Updated VIF entry in instance network info cache for port 7b823eab-5ae0-4e07-ae00-150cce2fac3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.061 239969 DEBUG nova.network.neutron [req-aa7aa408-d731-4a1e-8602-fb1f65047b98 req-3f7e784b-9aed-4e15-8add-90b499134ca6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Updating instance_info_cache with network_info: [{"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.080 239969 DEBUG oslo_concurrency.lockutils [req-aa7aa408-d731-4a1e-8602-fb1f65047b98 req-3f7e784b-9aed-4e15-8add-90b499134ca6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:58:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194727570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.608 239969 DEBUG oslo_concurrency.processutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.610 239969 DEBUG nova.virt.libvirt.vif [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:57:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1848216185',display_name='tempest-SecurityGroupsTestJSON-server-1848216185',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1848216185',id=49,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:58:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acad8b8490f840ba8d8c5d0d91874b79',ramdisk_id='',reservation_id='r-9q4ny5nc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-188832966',owner_user_name='tempest-SecurityGroupsTestJSON-188832966-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:58:07Z,user_data=None,user_id='fe6e9eb388ed48fc93bf3f4b984f8d1c',uuid=4f7de6be-0bd4-48af-852a-f93f9da50ef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.611 239969 DEBUG nova.network.os_vif_util [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converting VIF {"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.611 239969 DEBUG nova.network.os_vif_util [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.612 239969 DEBUG nova.objects.instance [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f7de6be-0bd4-48af-852a-f93f9da50ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2315380499' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.630 239969 DEBUG nova.virt.libvirt.driver [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <uuid>4f7de6be-0bd4-48af-852a-f93f9da50ef5</uuid>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <name>instance-00000031</name>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <nova:name>tempest-SecurityGroupsTestJSON-server-1848216185</nova:name>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:58:08</nova:creationTime>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <nova:user uuid="fe6e9eb388ed48fc93bf3f4b984f8d1c">tempest-SecurityGroupsTestJSON-188832966-project-member</nova:user>
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <nova:project uuid="acad8b8490f840ba8d8c5d0d91874b79">tempest-SecurityGroupsTestJSON-188832966</nova:project>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <nova:port uuid="7b823eab-5ae0-4e07-ae00-150cce2fac3c">
Jan 26 15:58:09 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <system>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <entry name="serial">4f7de6be-0bd4-48af-852a-f93f9da50ef5</entry>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <entry name="uuid">4f7de6be-0bd4-48af-852a-f93f9da50ef5</entry>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     </system>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <os>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   </os>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <features>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   </features>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk">
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       </source>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/4f7de6be-0bd4-48af-852a-f93f9da50ef5_disk.config">
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       </source>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:58:09 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:89:72:0d"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <target dev="tap7b823eab-5a"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5/console.log" append="off"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <video>
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     </video>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:58:09 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:58:09 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:58:09 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:58:09 compute-0 nova_compute[239965]: </domain>
Jan 26 15:58:09 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.632 239969 DEBUG nova.virt.libvirt.driver [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.632 239969 DEBUG nova.virt.libvirt.driver [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.633 239969 DEBUG nova.virt.libvirt.vif [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:57:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1848216185',display_name='tempest-SecurityGroupsTestJSON-server-1848216185',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1848216185',id=49,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:58:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='acad8b8490f840ba8d8c5d0d91874b79',ramdisk_id='',reservation_id='r-9q4ny5nc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-188832966',owner_user_name='tempest-SecurityGroupsTestJSON-188832966-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:58:07Z,user_data=None,user_id='fe6e9eb388ed48fc93bf3f4b984f8d1c',uuid=4f7de6be-0bd4-48af-852a-f93f9da50ef5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.633 239969 DEBUG nova.network.os_vif_util [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converting VIF {"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.634 239969 DEBUG nova.network.os_vif_util [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.634 239969 DEBUG os_vif [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.634 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.635 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.635 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.639 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.639 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b823eab-5a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.639 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b823eab-5a, col_values=(('external_ids', {'iface-id': '7b823eab-5ae0-4e07-ae00-150cce2fac3c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:72:0d', 'vm-uuid': '4f7de6be-0bd4-48af-852a-f93f9da50ef5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.640 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:09 compute-0 NetworkManager[48954]: <info>  [1769443089.6416] manager: (tap7b823eab-5a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.643 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.645 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:09 compute-0 nova_compute[239965]: 2026-01-26 15:58:09.645 239969 INFO os_vif [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a')
Jan 26 15:58:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 927 KiB/s wr, 145 op/s
Jan 26 15:58:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:10 compute-0 kernel: tap7b823eab-5a: entered promiscuous mode
Jan 26 15:58:10 compute-0 NetworkManager[48954]: <info>  [1769443090.0682] manager: (tap7b823eab-5a): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Jan 26 15:58:10 compute-0 systemd-udevd[283468]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:58:10 compute-0 NetworkManager[48954]: <info>  [1769443090.0853] device (tap7b823eab-5a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:58:10 compute-0 NetworkManager[48954]: <info>  [1769443090.0867] device (tap7b823eab-5a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:58:10 compute-0 ovn_controller[146046]: 2026-01-26T15:58:10Z|00378|binding|INFO|Claiming lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c for this chassis.
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:10 compute-0 ovn_controller[146046]: 2026-01-26T15:58:10Z|00379|binding|INFO|7b823eab-5ae0-4e07-ae00-150cce2fac3c: Claiming fa:16:3e:89:72:0d 10.100.0.10
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.120 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:72:0d 10.100.0.10'], port_security=['fa:16:3e:89:72:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7de6be-0bd4-48af-852a-f93f9da50ef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acad8b8490f840ba8d8c5d0d91874b79', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5fbd4415-78f2-4f82-97a8-e19004f35ccb 650af045-ab14-4167-b904-a9de57b96a8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33add925-8073-4624-a365-97f1e7b74639, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7b823eab-5ae0-4e07-ae00-150cce2fac3c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.122 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7b823eab-5ae0-4e07-ae00-150cce2fac3c in datapath 3bc677e6-ddd2-49e0-822d-6df359232a0e bound to our chassis
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.123 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bc677e6-ddd2-49e0-822d-6df359232a0e
Jan 26 15:58:10 compute-0 ovn_controller[146046]: 2026-01-26T15:58:10Z|00380|binding|INFO|Setting lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c ovn-installed in OVS
Jan 26 15:58:10 compute-0 ovn_controller[146046]: 2026-01-26T15:58:10Z|00381|binding|INFO|Setting lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c up in Southbound
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.133 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.138 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fa1ff5c5-fd9d-48a7-bde7-8c66286f7ee4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:10 compute-0 systemd-machined[208061]: New machine qemu-55-instance-00000031.
Jan 26 15:58:10 compute-0 systemd[1]: Started Virtual Machine qemu-55-instance-00000031.
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.165 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5721007d-e7b8-4e78-aece-f1cde6d96850]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.169 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7a91e3f5-aaaa-40ef-b08f-b584b4d583b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.212 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7c824345-6f1a-4a55-ac91-e1087b62248c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.229 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[79031ff2-890c-4f65-8680-3d0115a41a8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bc677e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:3f:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 10, 'rx_bytes': 616, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453923, 'reachable_time': 37183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283579, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.249 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[46289497-2e54-4612-9ebe-2db93a3c633b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3bc677e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453936, 'tstamp': 453936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283580, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3bc677e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453939, 'tstamp': 453939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283580, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.251 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bc677e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.253 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.257 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bc677e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.257 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.257 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bc677e6-d0, col_values=(('external_ids', {'iface-id': '66f6ca50-b295-4776-bfa0-51b3ff3fef47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:10.258 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.289 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.535 239969 DEBUG nova.compute.manager [req-990a22bf-e26e-4a8b-9983-1d95ce325dec req-e10f074e-c3eb-4ab3-8d4b-ebc4f21b0d02 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received event network-vif-unplugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.536 239969 DEBUG oslo_concurrency.lockutils [req-990a22bf-e26e-4a8b-9983-1d95ce325dec req-e10f074e-c3eb-4ab3-8d4b-ebc4f21b0d02 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.536 239969 DEBUG oslo_concurrency.lockutils [req-990a22bf-e26e-4a8b-9983-1d95ce325dec req-e10f074e-c3eb-4ab3-8d4b-ebc4f21b0d02 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.536 239969 DEBUG oslo_concurrency.lockutils [req-990a22bf-e26e-4a8b-9983-1d95ce325dec req-e10f074e-c3eb-4ab3-8d4b-ebc4f21b0d02 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.536 239969 DEBUG nova.compute.manager [req-990a22bf-e26e-4a8b-9983-1d95ce325dec req-e10f074e-c3eb-4ab3-8d4b-ebc4f21b0d02 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] No waiting events found dispatching network-vif-unplugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.537 239969 WARNING nova.compute.manager [req-990a22bf-e26e-4a8b-9983-1d95ce325dec req-e10f074e-c3eb-4ab3-8d4b-ebc4f21b0d02 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received unexpected event network-vif-unplugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c for instance with vm_state active and task_state reboot_started_hard.
Jan 26 15:58:10 compute-0 nova_compute[239965]: 2026-01-26 15:58:10.722 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2194727570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:10 compute-0 ceph-mon[75140]: pgmap v1351: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 927 KiB/s wr, 145 op/s
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.040 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 4f7de6be-0bd4-48af-852a-f93f9da50ef5 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.041 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443091.0398283, 4f7de6be-0bd4-48af-852a-f93f9da50ef5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.041 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] VM Resumed (Lifecycle Event)
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.043 239969 DEBUG nova.compute.manager [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.049 239969 INFO nova.virt.libvirt.driver [-] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Instance rebooted successfully.
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.050 239969 DEBUG nova.compute.manager [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.063 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.067 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.095 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.095 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443091.0426, 4f7de6be-0bd4-48af-852a-f93f9da50ef5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.096 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] VM Started (Lifecycle Event)
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.118 239969 DEBUG oslo_concurrency.lockutils [None req-6353eb22-0023-469d-a644-3ab58d3601fc fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.400s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.121 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.125 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.231 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443076.23047, d49e7eb2-dcdb-4410-a016-42e58d1d1416 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.232 239969 INFO nova.compute.manager [-] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] VM Stopped (Lifecycle Event)
Jan 26 15:58:11 compute-0 nova_compute[239965]: 2026-01-26 15:58:11.256 239969 DEBUG nova.compute.manager [None req-af659f52-f1ec-4c7f-ae1e-a3847a484219 - - - - - -] [instance: d49e7eb2-dcdb-4410-a016-42e58d1d1416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 266 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.3 MiB/s wr, 129 op/s
Jan 26 15:58:11 compute-0 ovn_controller[146046]: 2026-01-26T15:58:11Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:b6:96 10.100.0.14
Jan 26 15:58:11 compute-0 ovn_controller[146046]: 2026-01-26T15:58:11Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:b6:96 10.100.0.14
Jan 26 15:58:12 compute-0 nova_compute[239965]: 2026-01-26 15:58:12.366 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:12 compute-0 nova_compute[239965]: 2026-01-26 15:58:12.688 239969 DEBUG nova.compute.manager [req-065cf3cb-8cbc-4f69-8e5b-6c7968a3eff2 req-fd80909d-569f-4b1f-8236-5d1d78bcccee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received event network-vif-plugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:12 compute-0 nova_compute[239965]: 2026-01-26 15:58:12.689 239969 DEBUG oslo_concurrency.lockutils [req-065cf3cb-8cbc-4f69-8e5b-6c7968a3eff2 req-fd80909d-569f-4b1f-8236-5d1d78bcccee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:12 compute-0 nova_compute[239965]: 2026-01-26 15:58:12.689 239969 DEBUG oslo_concurrency.lockutils [req-065cf3cb-8cbc-4f69-8e5b-6c7968a3eff2 req-fd80909d-569f-4b1f-8236-5d1d78bcccee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:12 compute-0 nova_compute[239965]: 2026-01-26 15:58:12.689 239969 DEBUG oslo_concurrency.lockutils [req-065cf3cb-8cbc-4f69-8e5b-6c7968a3eff2 req-fd80909d-569f-4b1f-8236-5d1d78bcccee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:12 compute-0 nova_compute[239965]: 2026-01-26 15:58:12.689 239969 DEBUG nova.compute.manager [req-065cf3cb-8cbc-4f69-8e5b-6c7968a3eff2 req-fd80909d-569f-4b1f-8236-5d1d78bcccee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] No waiting events found dispatching network-vif-plugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:12 compute-0 nova_compute[239965]: 2026-01-26 15:58:12.690 239969 WARNING nova.compute.manager [req-065cf3cb-8cbc-4f69-8e5b-6c7968a3eff2 req-fd80909d-569f-4b1f-8236-5d1d78bcccee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received unexpected event network-vif-plugged-7b823eab-5ae0-4e07-ae00-150cce2fac3c for instance with vm_state active and task_state None.
Jan 26 15:58:13 compute-0 ceph-mon[75140]: pgmap v1352: 305 pgs: 305 active+clean; 266 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.3 MiB/s wr, 129 op/s
Jan 26 15:58:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 266 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 595 KiB/s wr, 86 op/s
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.481 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquiring lock "37628c32-be19-4064-9b44-cc56c5c36b58" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.482 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.543 239969 DEBUG nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.642 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.681 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.681 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.690 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.691 239969 INFO nova.compute.claims [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.743 239969 DEBUG oslo_concurrency.lockutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.743 239969 DEBUG oslo_concurrency.lockutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.744 239969 DEBUG oslo_concurrency.lockutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.744 239969 DEBUG oslo_concurrency.lockutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.744 239969 DEBUG oslo_concurrency.lockutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.746 239969 INFO nova.compute.manager [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Terminating instance
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.748 239969 DEBUG nova.compute.manager [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.782 239969 DEBUG nova.compute.manager [req-45f19355-459b-4a39-bf16-7e3cc37edf7c req-c73e86df-5f59-4dfb-b0e2-857538bb0a9e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received event network-changed-7b823eab-5ae0-4e07-ae00-150cce2fac3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.783 239969 DEBUG nova.compute.manager [req-45f19355-459b-4a39-bf16-7e3cc37edf7c req-c73e86df-5f59-4dfb-b0e2-857538bb0a9e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Refreshing instance network info cache due to event network-changed-7b823eab-5ae0-4e07-ae00-150cce2fac3c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.783 239969 DEBUG oslo_concurrency.lockutils [req-45f19355-459b-4a39-bf16-7e3cc37edf7c req-c73e86df-5f59-4dfb-b0e2-857538bb0a9e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.783 239969 DEBUG oslo_concurrency.lockutils [req-45f19355-459b-4a39-bf16-7e3cc37edf7c req-c73e86df-5f59-4dfb-b0e2-857538bb0a9e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.783 239969 DEBUG nova.network.neutron [req-45f19355-459b-4a39-bf16-7e3cc37edf7c req-c73e86df-5f59-4dfb-b0e2-857538bb0a9e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Refreshing network info cache for port 7b823eab-5ae0-4e07-ae00-150cce2fac3c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:58:14 compute-0 kernel: tap7b823eab-5a (unregistering): left promiscuous mode
Jan 26 15:58:14 compute-0 NetworkManager[48954]: <info>  [1769443094.7980] device (tap7b823eab-5a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.806 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:14 compute-0 ovn_controller[146046]: 2026-01-26T15:58:14Z|00382|binding|INFO|Releasing lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c from this chassis (sb_readonly=0)
Jan 26 15:58:14 compute-0 ovn_controller[146046]: 2026-01-26T15:58:14Z|00383|binding|INFO|Setting lport 7b823eab-5ae0-4e07-ae00-150cce2fac3c down in Southbound
Jan 26 15:58:14 compute-0 ovn_controller[146046]: 2026-01-26T15:58:14Z|00384|binding|INFO|Removing iface tap7b823eab-5a ovn-installed in OVS
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.812 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.818 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:72:0d 10.100.0.10'], port_security=['fa:16:3e:89:72:0d 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '4f7de6be-0bd4-48af-852a-f93f9da50ef5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acad8b8490f840ba8d8c5d0d91874b79', 'neutron:revision_number': '7', 'neutron:security_group_ids': '5fbd4415-78f2-4f82-97a8-e19004f35ccb 650af045-ab14-4167-b904-a9de57b96a8a c9858ad9-0a78-4059-be1e-fcf2694011fc', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33add925-8073-4624-a365-97f1e7b74639, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7b823eab-5ae0-4e07-ae00-150cce2fac3c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.820 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7b823eab-5ae0-4e07-ae00-150cce2fac3c in datapath 3bc677e6-ddd2-49e0-822d-6df359232a0e unbound from our chassis
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.821 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3bc677e6-ddd2-49e0-822d-6df359232a0e
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.829 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.842 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8830d8c2-f851-454e-92a2-0d7472b2607b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:14 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Deactivated successfully.
Jan 26 15:58:14 compute-0 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000031.scope: Consumed 4.545s CPU time.
Jan 26 15:58:14 compute-0 systemd-machined[208061]: Machine qemu-55-instance-00000031 terminated.
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.873 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b29bafaf-b685-4b94-a9ee-4006333168c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.876 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8ab28a-0f04-48fd-9898-1ee68e457540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:14 compute-0 nova_compute[239965]: 2026-01-26 15:58:14.886 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.907 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[63bf3637-9603-44aa-a181-552d83431cd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.924 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[26ad1351-8404-469a-8736-d8e5dad1c969]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3bc677e6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:3f:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 12, 'rx_bytes': 616, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 108], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453923, 'reachable_time': 37183, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283635, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.945 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2187bfcb-2361-4132-9fda-52255f9a9afc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap3bc677e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453936, 'tstamp': 453936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283636, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap3bc677e6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 453939, 'tstamp': 453939}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283636, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:14.947 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bc677e6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.004 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.006 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:15.013 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bc677e6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:15.014 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:15.014 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3bc677e6-d0, col_values=(('external_ids', {'iface-id': '66f6ca50-b295-4776-bfa0-51b3ff3fef47'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:15.015 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.022 239969 INFO nova.virt.libvirt.driver [-] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Instance destroyed successfully.
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.023 239969 DEBUG nova.objects.instance [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lazy-loading 'resources' on Instance uuid 4f7de6be-0bd4-48af-852a-f93f9da50ef5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.057 239969 DEBUG nova.virt.libvirt.vif [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:57:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1848216185',display_name='tempest-SecurityGroupsTestJSON-server-1848216185',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1848216185',id=49,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:58:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acad8b8490f840ba8d8c5d0d91874b79',ramdisk_id='',reservation_id='r-9q4ny5nc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-188832966',owner_user_name='tempest-SecurityGroupsTestJSON-188832966-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:58:11Z,user_data=None,user_id='fe6e9eb388ed48fc93bf3f4b984f8d1c',uuid=4f7de6be-0bd4-48af-852a-f93f9da50ef5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.057 239969 DEBUG nova.network.os_vif_util [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converting VIF {"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.058 239969 DEBUG nova.network.os_vif_util [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.059 239969 DEBUG os_vif [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.061 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.061 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b823eab-5a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.063 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.064 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:15 compute-0 ceph-mon[75140]: pgmap v1353: 305 pgs: 305 active+clean; 266 MiB data, 527 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 595 KiB/s wr, 86 op/s
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.066 239969 INFO os_vif [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:72:0d,bridge_name='br-int',has_traffic_filtering=True,id=7b823eab-5ae0-4e07-ae00-150cce2fac3c,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b823eab-5a')
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.263 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.332 239969 INFO nova.virt.libvirt.driver [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Deleting instance files /var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5_del
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.334 239969 INFO nova.virt.libvirt.driver [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Deletion of /var/lib/nova/instances/4f7de6be-0bd4-48af-852a-f93f9da50ef5_del complete
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.428 239969 INFO nova.compute.manager [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.429 239969 DEBUG oslo.service.loopingcall [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.430 239969 DEBUG nova.compute.manager [-] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.430 239969 DEBUG nova.network.neutron [-] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:58:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941820368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.509 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.515 239969 DEBUG nova.compute.provider_tree [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.538 239969 DEBUG nova.scheduler.client.report [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.566 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.567 239969 DEBUG nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.675 239969 DEBUG nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.676 239969 DEBUG nova.network.neutron [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.721 239969 INFO nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.743 239969 DEBUG nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:58:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 273 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 212 op/s
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.837 239969 DEBUG nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.838 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.839 239969 INFO nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Creating image(s)
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.862 239969 DEBUG nova.storage.rbd_utils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] rbd image 37628c32-be19-4064-9b44-cc56c5c36b58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.888 239969 DEBUG nova.storage.rbd_utils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] rbd image 37628c32-be19-4064-9b44-cc56c5c36b58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.917 239969 DEBUG nova.storage.rbd_utils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] rbd image 37628c32-be19-4064-9b44-cc56c5c36b58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.921 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.953 239969 DEBUG nova.policy [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ae2a17f5ac914d809a867e7f6b85f618', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '84b323c4a02a4577a411dc13454f16e4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.991 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.992 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.992 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:15 compute-0 nova_compute[239965]: 2026-01-26 15:58:15.993 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.013 239969 DEBUG nova.storage.rbd_utils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] rbd image 37628c32-be19-4064-9b44-cc56c5c36b58_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.016 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 37628c32-be19-4064-9b44-cc56c5c36b58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.066 239969 DEBUG nova.network.neutron [-] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/941820368' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.089 239969 INFO nova.compute.manager [-] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Took 0.66 seconds to deallocate network for instance.
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.112 239969 DEBUG nova.compute.manager [req-c18c60ac-12f5-4567-8bac-8bdd4fade39a req-75728829-463d-4c7b-bc9e-0e65aaaced32 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Received event network-vif-deleted-7b823eab-5ae0-4e07-ae00-150cce2fac3c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.186 239969 DEBUG oslo_concurrency.lockutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.187 239969 DEBUG oslo_concurrency.lockutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.279 239969 DEBUG oslo_concurrency.processutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.320 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 37628c32-be19-4064-9b44-cc56c5c36b58_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.392 239969 DEBUG nova.storage.rbd_utils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] resizing rbd image 37628c32-be19-4064-9b44-cc56c5c36b58_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.556 239969 DEBUG nova.network.neutron [req-45f19355-459b-4a39-bf16-7e3cc37edf7c req-c73e86df-5f59-4dfb-b0e2-857538bb0a9e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Updated VIF entry in instance network info cache for port 7b823eab-5ae0-4e07-ae00-150cce2fac3c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.556 239969 DEBUG nova.network.neutron [req-45f19355-459b-4a39-bf16-7e3cc37edf7c req-c73e86df-5f59-4dfb-b0e2-857538bb0a9e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Updating instance_info_cache with network_info: [{"id": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "address": "fa:16:3e:89:72:0d", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b823eab-5a", "ovs_interfaceid": "7b823eab-5ae0-4e07-ae00-150cce2fac3c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.562 239969 DEBUG nova.objects.instance [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lazy-loading 'migration_context' on Instance uuid 37628c32-be19-4064-9b44-cc56c5c36b58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.806 239969 DEBUG oslo_concurrency.lockutils [req-45f19355-459b-4a39-bf16-7e3cc37edf7c req-c73e86df-5f59-4dfb-b0e2-857538bb0a9e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-4f7de6be-0bd4-48af-852a-f93f9da50ef5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.807 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.807 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Ensure instance console log exists: /var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.808 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.808 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.808 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2108343605' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.873 239969 DEBUG oslo_concurrency.processutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:16 compute-0 nova_compute[239965]: 2026-01-26 15:58:16.886 239969 DEBUG nova.compute.provider_tree [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:17 compute-0 nova_compute[239965]: 2026-01-26 15:58:17.083 239969 DEBUG nova.scheduler.client.report [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:17 compute-0 ceph-mon[75140]: pgmap v1354: 305 pgs: 305 active+clean; 273 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 212 op/s
Jan 26 15:58:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2108343605' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:17 compute-0 nova_compute[239965]: 2026-01-26 15:58:17.135 239969 DEBUG oslo_concurrency.lockutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:17 compute-0 nova_compute[239965]: 2026-01-26 15:58:17.277 239969 INFO nova.scheduler.client.report [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Deleted allocations for instance 4f7de6be-0bd4-48af-852a-f93f9da50ef5
Jan 26 15:58:17 compute-0 nova_compute[239965]: 2026-01-26 15:58:17.763 239969 DEBUG oslo_concurrency.lockutils [None req-4ccf1cdf-e070-4db9-93a6-4ab13890105e fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "4f7de6be-0bd4-48af-852a-f93f9da50ef5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.019s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 273 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Jan 26 15:58:17 compute-0 nova_compute[239965]: 2026-01-26 15:58:17.923 239969 DEBUG nova.network.neutron [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Successfully created port: 2fa36685-e638-4043-a2a1-4b66a063e440 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:58:18 compute-0 ceph-mon[75140]: pgmap v1355: 305 pgs: 305 active+clean; 273 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Jan 26 15:58:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1356: 305 pgs: 305 active+clean; 278 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 189 op/s
Jan 26 15:58:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.293 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.372 239969 DEBUG nova.network.neutron [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Successfully updated port: 2fa36685-e638-4043-a2a1-4b66a063e440 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.390 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquiring lock "refresh_cache-37628c32-be19-4064-9b44-cc56c5c36b58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.391 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquired lock "refresh_cache-37628c32-be19-4064-9b44-cc56c5c36b58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.391 239969 DEBUG nova.network.neutron [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.462 239969 DEBUG nova.compute.manager [req-272b54c8-2d80-401a-aec8-908da637d259 req-9f88524b-4142-4239-aa9b-d9f76aff9ba5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Received event network-changed-2fa36685-e638-4043-a2a1-4b66a063e440 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.462 239969 DEBUG nova.compute.manager [req-272b54c8-2d80-401a-aec8-908da637d259 req-9f88524b-4142-4239-aa9b-d9f76aff9ba5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Refreshing instance network info cache due to event network-changed-2fa36685-e638-4043-a2a1-4b66a063e440. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.462 239969 DEBUG oslo_concurrency.lockutils [req-272b54c8-2d80-401a-aec8-908da637d259 req-9f88524b-4142-4239-aa9b-d9f76aff9ba5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-37628c32-be19-4064-9b44-cc56c5c36b58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:20 compute-0 nova_compute[239965]: 2026-01-26 15:58:20.550 239969 DEBUG nova.network.neutron [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:58:21 compute-0 ceph-mon[75140]: pgmap v1356: 305 pgs: 305 active+clean; 278 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.1 MiB/s wr, 189 op/s
Jan 26 15:58:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 184 op/s
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.158 239969 DEBUG nova.network.neutron [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Updating instance_info_cache with network_info: [{"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.178 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Releasing lock "refresh_cache-37628c32-be19-4064-9b44-cc56c5c36b58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.179 239969 DEBUG nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Instance network_info: |[{"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.180 239969 DEBUG oslo_concurrency.lockutils [req-272b54c8-2d80-401a-aec8-908da637d259 req-9f88524b-4142-4239-aa9b-d9f76aff9ba5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-37628c32-be19-4064-9b44-cc56c5c36b58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.180 239969 DEBUG nova.network.neutron [req-272b54c8-2d80-401a-aec8-908da637d259 req-9f88524b-4142-4239-aa9b-d9f76aff9ba5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Refreshing network info cache for port 2fa36685-e638-4043-a2a1-4b66a063e440 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.184 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Start _get_guest_xml network_info=[{"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.188 239969 WARNING nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.195 239969 DEBUG nova.virt.libvirt.host [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.195 239969 DEBUG nova.virt.libvirt.host [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.201 239969 DEBUG nova.virt.libvirt.host [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.201 239969 DEBUG nova.virt.libvirt.host [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.202 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.202 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.203 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.203 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.203 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.203 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.203 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.204 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.204 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.204 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.204 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.204 239969 DEBUG nova.virt.hardware [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.207 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:22 compute-0 ceph-mon[75140]: pgmap v1357: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 184 op/s
Jan 26 15:58:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:58:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331396428' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.763 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.786 239969 DEBUG nova.storage.rbd_utils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] rbd image 37628c32-be19-4064-9b44-cc56c5c36b58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:22 compute-0 nova_compute[239965]: 2026-01-26 15:58:22.790 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3331396428' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:58:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/85488484' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.400 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.402 239969 DEBUG nova.virt.libvirt.vif [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:58:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1860662977',display_name='tempest-ServerAddressesNegativeTestJSON-server-1860662977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1860662977',id=50,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84b323c4a02a4577a411dc13454f16e4',ramdisk_id='',reservation_id='r-by7cxt8g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1124256172',owner_u
ser_name='tempest-ServerAddressesNegativeTestJSON-1124256172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:58:15Z,user_data=None,user_id='ae2a17f5ac914d809a867e7f6b85f618',uuid=37628c32-be19-4064-9b44-cc56c5c36b58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.402 239969 DEBUG nova.network.os_vif_util [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Converting VIF {"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.403 239969 DEBUG nova.network.os_vif_util [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:9a:7a,bridge_name='br-int',has_traffic_filtering=True,id=2fa36685-e638-4043-a2a1-4b66a063e440,network=Network(7faa272c-a0e7-48fc-9da1-fdcce2e1d68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fa36685-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.405 239969 DEBUG nova.objects.instance [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 37628c32-be19-4064-9b44-cc56c5c36b58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.423 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <uuid>37628c32-be19-4064-9b44-cc56c5c36b58</uuid>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <name>instance-00000032</name>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerAddressesNegativeTestJSON-server-1860662977</nova:name>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:58:22</nova:creationTime>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <nova:user uuid="ae2a17f5ac914d809a867e7f6b85f618">tempest-ServerAddressesNegativeTestJSON-1124256172-project-member</nova:user>
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <nova:project uuid="84b323c4a02a4577a411dc13454f16e4">tempest-ServerAddressesNegativeTestJSON-1124256172</nova:project>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <nova:port uuid="2fa36685-e638-4043-a2a1-4b66a063e440">
Jan 26 15:58:23 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <system>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <entry name="serial">37628c32-be19-4064-9b44-cc56c5c36b58</entry>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <entry name="uuid">37628c32-be19-4064-9b44-cc56c5c36b58</entry>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     </system>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <os>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   </os>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <features>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   </features>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/37628c32-be19-4064-9b44-cc56c5c36b58_disk">
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       </source>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/37628c32-be19-4064-9b44-cc56c5c36b58_disk.config">
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       </source>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:58:23 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:92:9a:7a"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <target dev="tap2fa36685-e6"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58/console.log" append="off"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <video>
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     </video>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:58:23 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:58:23 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:58:23 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:58:23 compute-0 nova_compute[239965]: </domain>
Jan 26 15:58:23 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.424 239969 DEBUG nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Preparing to wait for external event network-vif-plugged-2fa36685-e638-4043-a2a1-4b66a063e440 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.424 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquiring lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.424 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.425 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.425 239969 DEBUG nova.virt.libvirt.vif [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:58:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1860662977',display_name='tempest-ServerAddressesNegativeTestJSON-server-1860662977',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1860662977',id=50,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='84b323c4a02a4577a411dc13454f16e4',ramdisk_id='',reservation_id='r-by7cxt8g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1124256172',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1124256172-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:58:15Z,user_data=None,user_id='ae2a17f5ac914d809a867e7f6b85f618',uuid=37628c32-be19-4064-9b44-cc56c5c36b58,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.426 239969 DEBUG nova.network.os_vif_util [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Converting VIF {"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.426 239969 DEBUG nova.network.os_vif_util [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:9a:7a,bridge_name='br-int',has_traffic_filtering=True,id=2fa36685-e638-4043-a2a1-4b66a063e440,network=Network(7faa272c-a0e7-48fc-9da1-fdcce2e1d68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fa36685-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.427 239969 DEBUG os_vif [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:9a:7a,bridge_name='br-int',has_traffic_filtering=True,id=2fa36685-e638-4043-a2a1-4b66a063e440,network=Network(7faa272c-a0e7-48fc-9da1-fdcce2e1d68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fa36685-e6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.427 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.428 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.428 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.431 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.431 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fa36685-e6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.431 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fa36685-e6, col_values=(('external_ids', {'iface-id': '2fa36685-e638-4043-a2a1-4b66a063e440', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:92:9a:7a', 'vm-uuid': '37628c32-be19-4064-9b44-cc56c5c36b58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.433 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:23 compute-0 NetworkManager[48954]: <info>  [1769443103.4338] manager: (tap2fa36685-e6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/177)
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.435 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.438 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.438 239969 INFO os_vif [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:9a:7a,bridge_name='br-int',has_traffic_filtering=True,id=2fa36685-e638-4043-a2a1-4b66a063e440,network=Network(7faa272c-a0e7-48fc-9da1-fdcce2e1d68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fa36685-e6')
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.545 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.545 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.546 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] No VIF found with MAC fa:16:3e:92:9a:7a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.546 239969 INFO nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Using config drive
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.567 239969 DEBUG nova.storage.rbd_utils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] rbd image 37628c32-be19-4064-9b44-cc56c5c36b58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 177 op/s
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.980 239969 INFO nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Creating config drive at /var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58/disk.config
Jan 26 15:58:23 compute-0 nova_compute[239965]: 2026-01-26 15:58:23.986 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt3hru02x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.123 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt3hru02x" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.147 239969 DEBUG nova.storage.rbd_utils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] rbd image 37628c32-be19-4064-9b44-cc56c5c36b58_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.150 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58/disk.config 37628c32-be19-4064-9b44-cc56c5c36b58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/85488484' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:24 compute-0 ceph-mon[75140]: pgmap v1358: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 177 op/s
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.314 239969 DEBUG oslo_concurrency.processutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58/disk.config 37628c32-be19-4064-9b44-cc56c5c36b58_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.315 239969 INFO nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Deleting local config drive /var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58/disk.config because it was imported into RBD.
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.358 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.359 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:24 compute-0 kernel: tap2fa36685-e6: entered promiscuous mode
Jan 26 15:58:24 compute-0 NetworkManager[48954]: <info>  [1769443104.3722] manager: (tap2fa36685-e6): new Tun device (/org/freedesktop/NetworkManager/Devices/178)
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.374 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:24 compute-0 ovn_controller[146046]: 2026-01-26T15:58:24Z|00385|binding|INFO|Claiming lport 2fa36685-e638-4043-a2a1-4b66a063e440 for this chassis.
Jan 26 15:58:24 compute-0 ovn_controller[146046]: 2026-01-26T15:58:24Z|00386|binding|INFO|2fa36685-e638-4043-a2a1-4b66a063e440: Claiming fa:16:3e:92:9a:7a 10.100.0.8
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.387 239969 DEBUG nova.network.neutron [req-272b54c8-2d80-401a-aec8-908da637d259 req-9f88524b-4142-4239-aa9b-d9f76aff9ba5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Updated VIF entry in instance network info cache for port 2fa36685-e638-4043-a2a1-4b66a063e440. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.388 239969 DEBUG nova.network.neutron [req-272b54c8-2d80-401a-aec8-908da637d259 req-9f88524b-4142-4239-aa9b-d9f76aff9ba5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Updating instance_info_cache with network_info: [{"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:24 compute-0 ovn_controller[146046]: 2026-01-26T15:58:24Z|00387|binding|INFO|Setting lport 2fa36685-e638-4043-a2a1-4b66a063e440 ovn-installed in OVS
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.395 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:24 compute-0 systemd-udevd[284011]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:58:24 compute-0 systemd-machined[208061]: New machine qemu-56-instance-00000032.
Jan 26 15:58:24 compute-0 NetworkManager[48954]: <info>  [1769443104.4205] device (tap2fa36685-e6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:58:24 compute-0 NetworkManager[48954]: <info>  [1769443104.4214] device (tap2fa36685-e6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:58:24 compute-0 systemd[1]: Started Virtual Machine qemu-56-instance-00000032.
Jan 26 15:58:24 compute-0 ovn_controller[146046]: 2026-01-26T15:58:24Z|00388|binding|INFO|Setting lport 2fa36685-e638-4043-a2a1-4b66a063e440 up in Southbound
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.498 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:9a:7a 10.100.0.8'], port_security=['fa:16:3e:92:9a:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '37628c32-be19-4064-9b44-cc56c5c36b58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84b323c4a02a4577a411dc13454f16e4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9337c2ac-6b30-4b8f-8559-d87cc5278668', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=148fa925-0f03-452e-b62b-3ee12a03838e, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2fa36685-e638-4043-a2a1-4b66a063e440) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.499 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2fa36685-e638-4043-a2a1-4b66a063e440 in datapath 7faa272c-a0e7-48fc-9da1-fdcce2e1d68d bound to our chassis
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.501 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7faa272c-a0e7-48fc-9da1-fdcce2e1d68d
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.516 239969 DEBUG oslo_concurrency.lockutils [req-272b54c8-2d80-401a-aec8-908da637d259 req-9f88524b-4142-4239-aa9b-d9f76aff9ba5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-37628c32-be19-4064-9b44-cc56c5c36b58" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.518 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5fde8dcc-22fd-45be-b569-367c59786ca5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.519 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7faa272c-a1 in ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.522 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7faa272c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.522 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d71555be-2d41-4da8-aea7-ab060d6e82e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.523 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69e6d633-03ab-4c63-9217-79f5d9bda230]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.535 239969 DEBUG nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.541 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[eeeea540-a5c5-4bb8-b9b6-03f11e62d208]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.566 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f120ead2-160d-465b-a900-b9e01bef0fa7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.596 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[63e2b3c4-f9e4-4a5e-8281-745efd547622]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 NetworkManager[48954]: <info>  [1769443104.6057] manager: (tap7faa272c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/179)
Jan 26 15:58:24 compute-0 systemd-udevd[284013]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.604 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4f238f83-b28c-43c3-8bb4-9077ef4564d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.643 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c96b8f71-584c-4917-b75a-61495ab77472]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.646 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7e0124f2-8499-4e56-908d-ee1206492c8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 NetworkManager[48954]: <info>  [1769443104.6706] device (tap7faa272c-a0): carrier: link connected
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.677 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a689dc4a-d1a9-4e51-8aea-adbf51f0f442]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.697 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[012159ba-44b7-48bf-8fd1-4d3bc6f92671]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7faa272c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:b1:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459547, 'reachable_time': 41293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284044, 'error': None, 'target': 'ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.716 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3218a96e-55ea-450a-9e97-a14f38b1750b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef9:b12c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459547, 'tstamp': 459547}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284045, 'error': None, 'target': 'ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.724 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.724 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.732 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.733 239969 INFO nova.compute.claims [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.734 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[afdc60ff-ff4a-4ebb-8b33-111561ee8c1d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7faa272c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f9:b1:2c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 119], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459547, 'reachable_time': 41293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284046, 'error': None, 'target': 'ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.772 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0fa4a0ec-f377-4201-a85c-8ac2ffb702dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.836 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[350ee46e-473a-418d-b933-db71ef1c9f4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.838 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7faa272c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.838 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.839 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7faa272c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:24 compute-0 kernel: tap7faa272c-a0: entered promiscuous mode
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.841 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:24 compute-0 NetworkManager[48954]: <info>  [1769443104.8418] manager: (tap7faa272c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/180)
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.844 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7faa272c-a0, col_values=(('external_ids', {'iface-id': '3e785d50-9366-455f-9576-130ddb8e4eee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.846 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:24 compute-0 ovn_controller[146046]: 2026-01-26T15:58:24Z|00389|binding|INFO|Releasing lport 3e785d50-9366-455f-9576-130ddb8e4eee from this chassis (sb_readonly=0)
Jan 26 15:58:24 compute-0 nova_compute[239965]: 2026-01-26 15:58:24.861 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.862 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7faa272c-a0e7-48fc-9da1-fdcce2e1d68d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7faa272c-a0e7-48fc-9da1-fdcce2e1d68d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.864 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06a5d3a5-764e-466a-9ddb-1e124b247380]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.865 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/7faa272c-a0e7-48fc-9da1-fdcce2e1d68d.pid.haproxy
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 7faa272c-a0e7-48fc-9da1-fdcce2e1d68d
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:58:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:24.866 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d', 'env', 'PROCESS_TAG=haproxy-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7faa272c-a0e7-48fc-9da1-fdcce2e1d68d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:58:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.108 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443105.106709, 37628c32-be19-4064-9b44-cc56c5c36b58 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.109 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] VM Started (Lifecycle Event)
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.123 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.248 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.256 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443105.1068423, 37628c32-be19-4064-9b44-cc56c5c36b58 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.256 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] VM Paused (Lifecycle Event)
Jan 26 15:58:25 compute-0 podman[284121]: 2026-01-26 15:58:25.274555968 +0000 UTC m=+0.060836794 container create 964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.285 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.289 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.296 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:25 compute-0 systemd[1]: Started libpod-conmon-964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999.scope.
Jan 26 15:58:25 compute-0 podman[284121]: 2026-01-26 15:58:25.242325757 +0000 UTC m=+0.028606603 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:58:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:58:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07f91bfe99117baa74cd29740abcc322179c6034829d5bf515beba21c056d6cf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:25 compute-0 podman[284121]: 2026-01-26 15:58:25.379372681 +0000 UTC m=+0.165653527 container init 964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 15:58:25 compute-0 podman[284121]: 2026-01-26 15:58:25.387001708 +0000 UTC m=+0.173282534 container start 964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.409 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:58:25 compute-0 neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d[284156]: [NOTICE]   (284160) : New worker (284162) forked
Jan 26 15:58:25 compute-0 neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d[284156]: [NOTICE]   (284160) : Loading success.
Jan 26 15:58:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765155166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.719 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.724 239969 DEBUG nova.compute.provider_tree [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2765155166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1359: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 178 op/s
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.874 239969 DEBUG nova.scheduler.client.report [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.906 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.907 239969 DEBUG nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.958 239969 DEBUG nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.958 239969 DEBUG nova.network.neutron [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:58:25 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.980 239969 INFO nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:25.999 239969 DEBUG nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.143 239969 DEBUG nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.145 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.145 239969 INFO nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Creating image(s)
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.170 239969 DEBUG nova.storage.rbd_utils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] rbd image 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.195 239969 DEBUG nova.storage.rbd_utils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] rbd image 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.217 239969 DEBUG nova.storage.rbd_utils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] rbd image 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.222 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.397 239969 DEBUG nova.compute.manager [req-8a01d8d5-10d7-4c46-b112-f546c451ddcb req-d41252fd-3ffa-428e-8fd3-46e12ee781a9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Received event network-vif-plugged-2fa36685-e638-4043-a2a1-4b66a063e440 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.398 239969 DEBUG oslo_concurrency.lockutils [req-8a01d8d5-10d7-4c46-b112-f546c451ddcb req-d41252fd-3ffa-428e-8fd3-46e12ee781a9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.398 239969 DEBUG oslo_concurrency.lockutils [req-8a01d8d5-10d7-4c46-b112-f546c451ddcb req-d41252fd-3ffa-428e-8fd3-46e12ee781a9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.399 239969 DEBUG oslo_concurrency.lockutils [req-8a01d8d5-10d7-4c46-b112-f546c451ddcb req-d41252fd-3ffa-428e-8fd3-46e12ee781a9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.399 239969 DEBUG nova.compute.manager [req-8a01d8d5-10d7-4c46-b112-f546c451ddcb req-d41252fd-3ffa-428e-8fd3-46e12ee781a9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Processing event network-vif-plugged-2fa36685-e638-4043-a2a1-4b66a063e440 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.401 239969 DEBUG nova.policy [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39cbd4b3356d4e52beac8da6a7696c50', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2598d15187804c928a694f10598376ca', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.403 239969 DEBUG nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.404 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.406 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.406 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.407 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.436 239969 DEBUG nova.storage.rbd_utils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] rbd image 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.441 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.477 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443106.4083276, 37628c32-be19-4064-9b44-cc56c5c36b58 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.478 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] VM Resumed (Lifecycle Event)
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.482 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.488 239969 INFO nova.virt.libvirt.driver [-] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Instance spawned successfully.
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.489 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.513 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.519 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.520 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.520 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.521 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.521 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.522 239969 DEBUG nova.virt.libvirt.driver [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.528 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.573 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.593 239969 INFO nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Took 10.76 seconds to spawn the instance on the hypervisor.
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.594 239969 DEBUG nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.683 239969 INFO nova.compute.manager [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Took 12.03 seconds to build instance.
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.706 239969 DEBUG oslo_concurrency.lockutils [None req-e4005628-8db4-4de0-bca6-a690c88e73ec ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.224s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.859 239969 DEBUG oslo_concurrency.lockutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.860 239969 DEBUG oslo_concurrency.lockutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.860 239969 DEBUG oslo_concurrency.lockutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.860 239969 DEBUG oslo_concurrency.lockutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.861 239969 DEBUG oslo_concurrency.lockutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.862 239969 INFO nova.compute.manager [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Terminating instance
Jan 26 15:58:26 compute-0 nova_compute[239965]: 2026-01-26 15:58:26.863 239969 DEBUG nova.compute.manager [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:58:27 compute-0 ceph-mon[75140]: pgmap v1359: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.4 MiB/s wr, 178 op/s
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.387 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.945s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:27 compute-0 kernel: tape2bbbc1d-7d (unregistering): left promiscuous mode
Jan 26 15:58:27 compute-0 NetworkManager[48954]: <info>  [1769443107.4417] device (tape2bbbc1d-7d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:58:27 compute-0 ovn_controller[146046]: 2026-01-26T15:58:27Z|00390|binding|INFO|Releasing lport e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 from this chassis (sb_readonly=0)
Jan 26 15:58:27 compute-0 ovn_controller[146046]: 2026-01-26T15:58:27Z|00391|binding|INFO|Setting lport e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 down in Southbound
Jan 26 15:58:27 compute-0 ovn_controller[146046]: 2026-01-26T15:58:27Z|00392|binding|INFO|Removing iface tape2bbbc1d-7d ovn-installed in OVS
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.460 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ba:a8:be 10.100.0.12'], port_security=['fa:16:3e:ba:a8:be 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'd9b6d871-11f5-46ea-b7f2-502cd8b12b21', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'acad8b8490f840ba8d8c5d0d91874b79', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5fbd4415-78f2-4f82-97a8-e19004f35ccb d50b8879-a832-4bae-9588-58c8dbc7d4ab ee5cc15a-d279-4782-81a4-fc812119feb8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33add925-8073-4624-a365-97f1e7b74639, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.461 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 in datapath 3bc677e6-ddd2-49e0-822d-6df359232a0e unbound from our chassis
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.465 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3bc677e6-ddd2-49e0-822d-6df359232a0e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.469 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ca989c-4cf6-436a-9929-4981ee431c24]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.470 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e namespace which is not needed anymore
Jan 26 15:58:27 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Jan 26 15:58:27 compute-0 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002e.scope: Consumed 15.259s CPU time.
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.504 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:27 compute-0 systemd-machined[208061]: Machine qemu-51-instance-0000002e terminated.
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.515 239969 DEBUG nova.storage.rbd_utils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] resizing rbd image 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.560 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:58:27 compute-0 neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e[281658]: [NOTICE]   (281667) : haproxy version is 2.8.14-c23fe91
Jan 26 15:58:27 compute-0 neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e[281658]: [NOTICE]   (281667) : path to executable is /usr/sbin/haproxy
Jan 26 15:58:27 compute-0 neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e[281658]: [WARNING]  (281667) : Exiting Master process...
Jan 26 15:58:27 compute-0 neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e[281658]: [WARNING]  (281667) : Exiting Master process...
Jan 26 15:58:27 compute-0 neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e[281658]: [ALERT]    (281667) : Current worker (281669) exited with code 143 (Terminated)
Jan 26 15:58:27 compute-0 neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e[281658]: [WARNING]  (281667) : All workers exited. Exiting... (0)
Jan 26 15:58:27 compute-0 systemd[1]: libpod-56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419.scope: Deactivated successfully.
Jan 26 15:58:27 compute-0 podman[284343]: 2026-01-26 15:58:27.664348984 +0000 UTC m=+0.071967738 container died 56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.666 239969 DEBUG nova.network.neutron [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Successfully created port: 4ef2795d-4ecf-4161-b8d3-cf078d350d35 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.688 239969 DEBUG nova.objects.instance [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lazy-loading 'migration_context' on Instance uuid 6b4dbd52-cbad-4d8f-8446-e47fabdc170c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.691 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.699 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.703 239969 INFO nova.virt.libvirt.driver [-] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Instance destroyed successfully.
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.703 239969 DEBUG nova.objects.instance [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lazy-loading 'resources' on Instance uuid d9b6d871-11f5-46ea-b7f2-502cd8b12b21 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419-userdata-shm.mount: Deactivated successfully.
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.717 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.719 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Ensure instance console log exists: /var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.719 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d8037dfb30be7ba7684d4c7710be473772804716265a5f6c6d201ef663dc581-merged.mount: Deactivated successfully.
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.720 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.721 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.723 239969 DEBUG nova.virt.libvirt.vif [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:57:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-806579145',display_name='tempest-SecurityGroupsTestJSON-server-806579145',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-806579145',id=46,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:57:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='acad8b8490f840ba8d8c5d0d91874b79',ramdisk_id='',reservation_id='r-wz9x4q6o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-188832966',owner_user_name='tempest-SecurityGroupsTestJSON-188832966-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:57:29Z,user_data=None,user_id='fe6e9eb388ed48fc93bf3f4b984f8d1c',uuid=d9b6d871-11f5-46ea-b7f2-502cd8b12b21,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.724 239969 DEBUG nova.network.os_vif_util [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converting VIF {"id": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "address": "fa:16:3e:ba:a8:be", "network": {"id": "3bc677e6-ddd2-49e0-822d-6df359232a0e", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-1690230520-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "acad8b8490f840ba8d8c5d0d91874b79", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape2bbbc1d-7d", "ovs_interfaceid": "e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.726 239969 DEBUG nova.network.os_vif_util [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ba:a8:be,bridge_name='br-int',has_traffic_filtering=True,id=e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2bbbc1d-7d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.727 239969 DEBUG os_vif [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:a8:be,bridge_name='br-int',has_traffic_filtering=True,id=e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2bbbc1d-7d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.729 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.729 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape2bbbc1d-7d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.731 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:27 compute-0 podman[284343]: 2026-01-26 15:58:27.733675475 +0000 UTC m=+0.141294229 container cleanup 56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.734 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.738 239969 INFO os_vif [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ba:a8:be,bridge_name='br-int',has_traffic_filtering=True,id=e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93,network=Network(3bc677e6-ddd2-49e0-822d-6df359232a0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape2bbbc1d-7d')
Jan 26 15:58:27 compute-0 systemd[1]: libpod-conmon-56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419.scope: Deactivated successfully.
Jan 26 15:58:27 compute-0 podman[284409]: 2026-01-26 15:58:27.829280453 +0000 UTC m=+0.060112677 container remove 56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.833 239969 DEBUG nova.compute.manager [req-19da9cd9-0fb8-4b7a-a1a3-97f4d04c76f0 req-15bec7d2-097a-4c6f-9536-bf44dcee13f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received event network-vif-unplugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.833 239969 DEBUG oslo_concurrency.lockutils [req-19da9cd9-0fb8-4b7a-a1a3-97f4d04c76f0 req-15bec7d2-097a-4c6f-9536-bf44dcee13f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.834 239969 DEBUG oslo_concurrency.lockutils [req-19da9cd9-0fb8-4b7a-a1a3-97f4d04c76f0 req-15bec7d2-097a-4c6f-9536-bf44dcee13f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.834 239969 DEBUG oslo_concurrency.lockutils [req-19da9cd9-0fb8-4b7a-a1a3-97f4d04c76f0 req-15bec7d2-097a-4c6f-9536-bf44dcee13f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.834 239969 DEBUG nova.compute.manager [req-19da9cd9-0fb8-4b7a-a1a3-97f4d04c76f0 req-15bec7d2-097a-4c6f-9536-bf44dcee13f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] No waiting events found dispatching network-vif-unplugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.834 239969 DEBUG nova.compute.manager [req-19da9cd9-0fb8-4b7a-a1a3-97f4d04c76f0 req-15bec7d2-097a-4c6f-9536-bf44dcee13f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received event network-vif-unplugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.839 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3d837a54-1760-4ac3-9c02-21745687b21c]: (4, ('Mon Jan 26 03:58:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e (56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419)\n56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419\nMon Jan 26 03:58:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e (56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419)\n56c8a72be3ac4bbab1908fe19cea68470d5cb31e7bb539ecfce7027941e94419\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.842 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9bb014ee-a1df-4c2f-8b61-2802f7cc38b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.844 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bc677e6-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:27 compute-0 kernel: tap3bc677e6-d0: left promiscuous mode
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.857 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:27 compute-0 nova_compute[239965]: 2026-01-26 15:58:27.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.868 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d3b48966-06a4-4554-ac8e-e840c0d30b06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.885 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd5ab50-a04f-4a5a-874a-1d2bbe5e34ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.889 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ef543be5-ad90-42ec-b61f-a02b21a01402]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.912 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0c097de7-ba55-4b30-8ee7-666484b77b0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 453916, 'reachable_time': 40177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284435, 'error': None, 'target': 'ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d3bc677e6\x2dddd2\x2d49e0\x2d822d\x2d6df359232a0e.mount: Deactivated successfully.
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.917 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3bc677e6-ddd2-49e0-822d-6df359232a0e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:58:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:27.917 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[07316a87-c00d-498c-973c-1585fe215822]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.133 239969 DEBUG oslo_concurrency.lockutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquiring lock "37628c32-be19-4064-9b44-cc56c5c36b58" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.134 239969 DEBUG oslo_concurrency.lockutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.134 239969 DEBUG oslo_concurrency.lockutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquiring lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.134 239969 DEBUG oslo_concurrency.lockutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.135 239969 DEBUG oslo_concurrency.lockutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.136 239969 INFO nova.compute.manager [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Terminating instance
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.137 239969 DEBUG nova.compute.manager [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:58:28
Jan 26 15:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'backups', 'cephfs.cephfs.data', 'images', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.control']
Jan 26 15:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.919 239969 DEBUG nova.network.neutron [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Successfully updated port: 4ef2795d-4ecf-4161-b8d3-cf078d350d35 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.934 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.934 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquired lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.934 239969 DEBUG nova.network.neutron [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.973 239969 DEBUG nova.compute.manager [req-b0f8d71a-22b0-4a29-864a-ec65ddef43f0 req-3aa39699-fe68-4764-9949-35e7a2b72bc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Received event network-vif-plugged-2fa36685-e638-4043-a2a1-4b66a063e440 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.973 239969 DEBUG oslo_concurrency.lockutils [req-b0f8d71a-22b0-4a29-864a-ec65ddef43f0 req-3aa39699-fe68-4764-9949-35e7a2b72bc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.974 239969 DEBUG oslo_concurrency.lockutils [req-b0f8d71a-22b0-4a29-864a-ec65ddef43f0 req-3aa39699-fe68-4764-9949-35e7a2b72bc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.974 239969 DEBUG oslo_concurrency.lockutils [req-b0f8d71a-22b0-4a29-864a-ec65ddef43f0 req-3aa39699-fe68-4764-9949-35e7a2b72bc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.974 239969 DEBUG nova.compute.manager [req-b0f8d71a-22b0-4a29-864a-ec65ddef43f0 req-3aa39699-fe68-4764-9949-35e7a2b72bc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] No waiting events found dispatching network-vif-plugged-2fa36685-e638-4043-a2a1-4b66a063e440 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:28 compute-0 nova_compute[239965]: 2026-01-26 15:58:28.975 239969 WARNING nova.compute.manager [req-b0f8d71a-22b0-4a29-864a-ec65ddef43f0 req-3aa39699-fe68-4764-9949-35e7a2b72bc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Received unexpected event network-vif-plugged-2fa36685-e638-4043-a2a1-4b66a063e440 for instance with vm_state active and task_state deleting.
Jan 26 15:58:29 compute-0 kernel: tap2fa36685-e6 (unregistering): left promiscuous mode
Jan 26 15:58:29 compute-0 NetworkManager[48954]: <info>  [1769443109.1023] device (tap2fa36685-e6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:58:29 compute-0 ovn_controller[146046]: 2026-01-26T15:58:29Z|00393|binding|INFO|Releasing lport 2fa36685-e638-4043-a2a1-4b66a063e440 from this chassis (sb_readonly=0)
Jan 26 15:58:29 compute-0 ovn_controller[146046]: 2026-01-26T15:58:29Z|00394|binding|INFO|Setting lport 2fa36685-e638-4043-a2a1-4b66a063e440 down in Southbound
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.104 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 ovn_controller[146046]: 2026-01-26T15:58:29Z|00395|binding|INFO|Removing iface tap2fa36685-e6 ovn-installed in OVS
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.107 239969 DEBUG nova.network.neutron [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.109 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.111 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:9a:7a 10.100.0.8'], port_security=['fa:16:3e:92:9a:7a 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '37628c32-be19-4064-9b44-cc56c5c36b58', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '84b323c4a02a4577a411dc13454f16e4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9337c2ac-6b30-4b8f-8559-d87cc5278668', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=148fa925-0f03-452e-b62b-3ee12a03838e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2fa36685-e638-4043-a2a1-4b66a063e440) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.112 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2fa36685-e638-4043-a2a1-4b66a063e440 in datapath 7faa272c-a0e7-48fc-9da1-fdcce2e1d68d unbound from our chassis
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.113 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7faa272c-a0e7-48fc-9da1-fdcce2e1d68d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.114 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad8cc3c-04ed-4bf6-83d8-58f17b984961]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.114 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d namespace which is not needed anymore
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.124 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Deactivated successfully.
Jan 26 15:58:29 compute-0 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000032.scope: Consumed 2.313s CPU time.
Jan 26 15:58:29 compute-0 systemd-machined[208061]: Machine qemu-56-instance-00000032 terminated.
Jan 26 15:58:29 compute-0 podman[284439]: 2026-01-26 15:58:29.211903408 +0000 UTC m=+0.078871857 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 15:58:29 compute-0 ceph-mon[75140]: pgmap v1360: 305 pgs: 305 active+clean; 293 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 26 15:58:29 compute-0 neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d[284156]: [NOTICE]   (284160) : haproxy version is 2.8.14-c23fe91
Jan 26 15:58:29 compute-0 neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d[284156]: [NOTICE]   (284160) : path to executable is /usr/sbin/haproxy
Jan 26 15:58:29 compute-0 neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d[284156]: [WARNING]  (284160) : Exiting Master process...
Jan 26 15:58:29 compute-0 neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d[284156]: [WARNING]  (284160) : Exiting Master process...
Jan 26 15:58:29 compute-0 neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d[284156]: [ALERT]    (284160) : Current worker (284162) exited with code 143 (Terminated)
Jan 26 15:58:29 compute-0 neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d[284156]: [WARNING]  (284160) : All workers exited. Exiting... (0)
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.367 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 systemd[1]: libpod-964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999.scope: Deactivated successfully.
Jan 26 15:58:29 compute-0 podman[284479]: 2026-01-26 15:58:29.37371756 +0000 UTC m=+0.168889947 container died 964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.376 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.389 239969 INFO nova.virt.libvirt.driver [-] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Instance destroyed successfully.
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.390 239969 DEBUG nova.objects.instance [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lazy-loading 'resources' on Instance uuid 37628c32-be19-4064-9b44-cc56c5c36b58 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.406 239969 DEBUG nova.virt.libvirt.vif [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:58:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesNegativeTestJSON-server-1860662977',display_name='tempest-ServerAddressesNegativeTestJSON-server-1860662977',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressesnegativetestjson-server-1860662977',id=50,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:58:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='84b323c4a02a4577a411dc13454f16e4',ramdisk_id='',reservation_id='r-by7cxt8g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_h
w_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesNegativeTestJSON-1124256172',owner_user_name='tempest-ServerAddressesNegativeTestJSON-1124256172-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:58:26Z,user_data=None,user_id='ae2a17f5ac914d809a867e7f6b85f618',uuid=37628c32-be19-4064-9b44-cc56c5c36b58,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.407 239969 DEBUG nova.network.os_vif_util [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Converting VIF {"id": "2fa36685-e638-4043-a2a1-4b66a063e440", "address": "fa:16:3e:92:9a:7a", "network": {"id": "7faa272c-a0e7-48fc-9da1-fdcce2e1d68d", "bridge": "br-int", "label": "tempest-ServerAddressesNegativeTestJSON-1032431929-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "84b323c4a02a4577a411dc13454f16e4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fa36685-e6", "ovs_interfaceid": "2fa36685-e638-4043-a2a1-4b66a063e440", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.408 239969 DEBUG nova.network.os_vif_util [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:9a:7a,bridge_name='br-int',has_traffic_filtering=True,id=2fa36685-e638-4043-a2a1-4b66a063e440,network=Network(7faa272c-a0e7-48fc-9da1-fdcce2e1d68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fa36685-e6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.408 239969 DEBUG os_vif [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:9a:7a,bridge_name='br-int',has_traffic_filtering=True,id=2fa36685-e638-4043-a2a1-4b66a063e440,network=Network(7faa272c-a0e7-48fc-9da1-fdcce2e1d68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fa36685-e6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.409 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.410 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fa36685-e6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.411 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.413 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.416 239969 INFO os_vif [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:9a:7a,bridge_name='br-int',has_traffic_filtering=True,id=2fa36685-e638-4043-a2a1-4b66a063e440,network=Network(7faa272c-a0e7-48fc-9da1-fdcce2e1d68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2fa36685-e6')
Jan 26 15:58:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-07f91bfe99117baa74cd29740abcc322179c6034829d5bf515beba21c056d6cf-merged.mount: Deactivated successfully.
Jan 26 15:58:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999-userdata-shm.mount: Deactivated successfully.
Jan 26 15:58:29 compute-0 podman[284479]: 2026-01-26 15:58:29.653833705 +0000 UTC m=+0.449006102 container cleanup 964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:58:29 compute-0 systemd[1]: libpod-conmon-964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999.scope: Deactivated successfully.
Jan 26 15:58:29 compute-0 podman[284550]: 2026-01-26 15:58:29.736451683 +0000 UTC m=+0.060637550 container remove 964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.742 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[53375088-6b28-4dab-9fb8-38f8fff1ce73]: (4, ('Mon Jan 26 03:58:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d (964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999)\n964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999\nMon Jan 26 03:58:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d (964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999)\n964ad5a387b692ef9549b9546902bad4d6541077d0be854ce24c3893255d9999\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.745 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[79f11c94-c1cb-401e-871c-fc22a786edc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.748 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7faa272c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.750 239969 INFO nova.virt.libvirt.driver [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Deleting instance files /var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21_del
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.751 239969 INFO nova.virt.libvirt.driver [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Deletion of /var/lib/nova/instances/d9b6d871-11f5-46ea-b7f2-502cd8b12b21_del complete
Jan 26 15:58:29 compute-0 kernel: tap7faa272c-a0: left promiscuous mode
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.756 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.769 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.775 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[31fa71de-266a-4a92-a49f-acb250a7ab7e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:29 compute-0 podman[284538]: 2026-01-26 15:58:29.77706802 +0000 UTC m=+0.166971369 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.789 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3f828208-76c2-4bcb-8085-88afd47a2a62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.792 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bf51b9d3-fec4-4421-960d-2fbced2903cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.814 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[195ec31c-0306-4bf8-a76d-da7b17297a17]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459539, 'reachable_time': 30405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284578, 'error': None, 'target': 'ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.817 239969 INFO nova.compute.manager [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Took 2.95 seconds to destroy the instance on the hypervisor.
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.818 239969 DEBUG oslo.service.loopingcall [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.818 239969 DEBUG nova.compute.manager [-] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.818 239969 DEBUG nova.network.neutron [-] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:58:29 compute-0 systemd[1]: run-netns-ovnmeta\x2d7faa272c\x2da0e7\x2d48fc\x2d9da1\x2dfdcce2e1d68d.mount: Deactivated successfully.
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.819 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7faa272c-a0e7-48fc-9da1-fdcce2e1d68d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:58:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:29.819 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[273c1740-9b82-449e-baa6-f14d2bf160ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1361: 305 pgs: 305 active+clean; 271 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 132 op/s
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.916 239969 DEBUG nova.compute.manager [req-8ea11f42-7b1a-4a3a-8d46-ef96479ae288 req-cd5d0a14-af4e-46ea-92f8-c3c3bcd453c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received event network-vif-plugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.918 239969 DEBUG oslo_concurrency.lockutils [req-8ea11f42-7b1a-4a3a-8d46-ef96479ae288 req-cd5d0a14-af4e-46ea-92f8-c3c3bcd453c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.918 239969 DEBUG oslo_concurrency.lockutils [req-8ea11f42-7b1a-4a3a-8d46-ef96479ae288 req-cd5d0a14-af4e-46ea-92f8-c3c3bcd453c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.918 239969 DEBUG oslo_concurrency.lockutils [req-8ea11f42-7b1a-4a3a-8d46-ef96479ae288 req-cd5d0a14-af4e-46ea-92f8-c3c3bcd453c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.919 239969 DEBUG nova.compute.manager [req-8ea11f42-7b1a-4a3a-8d46-ef96479ae288 req-cd5d0a14-af4e-46ea-92f8-c3c3bcd453c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] No waiting events found dispatching network-vif-plugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.919 239969 WARNING nova.compute.manager [req-8ea11f42-7b1a-4a3a-8d46-ef96479ae288 req-cd5d0a14-af4e-46ea-92f8-c3c3bcd453c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received unexpected event network-vif-plugged-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 for instance with vm_state active and task_state deleting.
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.966 239969 INFO nova.virt.libvirt.driver [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Deleting instance files /var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58_del
Jan 26 15:58:29 compute-0 nova_compute[239965]: 2026-01-26 15:58:29.967 239969 INFO nova.virt.libvirt.driver [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Deletion of /var/lib/nova/instances/37628c32-be19-4064-9b44-cc56c5c36b58_del complete
Jan 26 15:58:30 compute-0 nova_compute[239965]: 2026-01-26 15:58:30.020 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443095.0205584, 4f7de6be-0bd4-48af-852a-f93f9da50ef5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:30 compute-0 nova_compute[239965]: 2026-01-26 15:58:30.021 239969 INFO nova.compute.manager [-] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] VM Stopped (Lifecycle Event)
Jan 26 15:58:30 compute-0 nova_compute[239965]: 2026-01-26 15:58:30.025 239969 INFO nova.compute.manager [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Took 1.89 seconds to destroy the instance on the hypervisor.
Jan 26 15:58:30 compute-0 nova_compute[239965]: 2026-01-26 15:58:30.025 239969 DEBUG oslo.service.loopingcall [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:58:30 compute-0 nova_compute[239965]: 2026-01-26 15:58:30.026 239969 DEBUG nova.compute.manager [-] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:58:30 compute-0 nova_compute[239965]: 2026-01-26 15:58:30.026 239969 DEBUG nova.network.neutron [-] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:58:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:30 compute-0 nova_compute[239965]: 2026-01-26 15:58:30.042 239969 DEBUG nova.compute.manager [None req-90f9d794-fffe-4451-95b9-88c7893e8bd7 - - - - - -] [instance: 4f7de6be-0bd4-48af-852a-f93f9da50ef5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:30 compute-0 nova_compute[239965]: 2026-01-26 15:58:30.295 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:30 compute-0 ceph-mon[75140]: pgmap v1361: 305 pgs: 305 active+clean; 271 MiB data, 538 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.4 MiB/s wr, 132 op/s
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:58:30 compute-0 nova_compute[239965]: 2026-01-26 15:58:30.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:58:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.164 239969 DEBUG nova.compute.manager [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-changed-4ef2795d-4ecf-4161-b8d3-cf078d350d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.165 239969 DEBUG nova.compute.manager [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Refreshing instance network info cache due to event network-changed-4ef2795d-4ecf-4161-b8d3-cf078d350d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.165 239969 DEBUG oslo_concurrency.lockutils [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.181 239969 DEBUG nova.network.neutron [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updating instance_info_cache with network_info: [{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.210 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Releasing lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.210 239969 DEBUG nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Instance network_info: |[{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.211 239969 DEBUG oslo_concurrency.lockutils [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.211 239969 DEBUG nova.network.neutron [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Refreshing network info cache for port 4ef2795d-4ecf-4161-b8d3-cf078d350d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.214 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Start _get_guest_xml network_info=[{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.220 239969 WARNING nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.225 239969 DEBUG nova.virt.libvirt.host [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.226 239969 DEBUG nova.virt.libvirt.host [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.234 239969 DEBUG nova.virt.libvirt.host [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.235 239969 DEBUG nova.virt.libvirt.host [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.235 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.235 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.236 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.236 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.236 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.236 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.236 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.236 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.237 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.237 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.237 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.237 239969 DEBUG nova.virt.hardware [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.240 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.376 239969 DEBUG nova.network.neutron [-] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.394 239969 INFO nova.compute.manager [-] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Took 1.58 seconds to deallocate network for instance.
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.465 239969 DEBUG oslo_concurrency.lockutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.466 239969 DEBUG oslo_concurrency.lockutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.586 239969 DEBUG oslo_concurrency.processutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.649 239969 DEBUG nova.network.neutron [-] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.668 239969 INFO nova.compute.manager [-] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Took 1.64 seconds to deallocate network for instance.
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.734 239969 DEBUG oslo_concurrency.lockutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1362: 305 pgs: 305 active+clean; 245 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 124 op/s
Jan 26 15:58:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:58:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968984043' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.886 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.916 239969 DEBUG nova.storage.rbd_utils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] rbd image 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:31 compute-0 nova_compute[239965]: 2026-01-26 15:58:31.920 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3968984043' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160394054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.218 239969 DEBUG oslo_concurrency.processutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.227 239969 DEBUG nova.compute.provider_tree [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.290 239969 DEBUG nova.scheduler.client.report [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.318 239969 DEBUG oslo_concurrency.lockutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.321 239969 DEBUG oslo_concurrency.lockutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.331 239969 DEBUG oslo_concurrency.lockutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquiring lock "58e03608-6ade-4867-ba57-e5b723e5fa71" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.331 239969 DEBUG oslo_concurrency.lockutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.331 239969 DEBUG oslo_concurrency.lockutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquiring lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.332 239969 DEBUG oslo_concurrency.lockutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.332 239969 DEBUG oslo_concurrency.lockutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.333 239969 INFO nova.compute.manager [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Terminating instance
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.334 239969 DEBUG nova.compute.manager [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.345 239969 INFO nova.scheduler.client.report [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Deleted allocations for instance d9b6d871-11f5-46ea-b7f2-502cd8b12b21
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.430 239969 DEBUG oslo_concurrency.lockutils [None req-00e675aa-abf8-46bd-9b67-b7832316860d fe6e9eb388ed48fc93bf3f4b984f8d1c acad8b8490f840ba8d8c5d0d91874b79 - - default default] Lock "d9b6d871-11f5-46ea-b7f2-502cd8b12b21" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.433 239969 DEBUG oslo_concurrency.processutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:58:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300829150' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.595 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.597 239969 DEBUG nova.virt.libvirt.vif [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-233245799',display_name='tempest-AttachInterfacesUnderV243Test-server-233245799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-233245799',id=51,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnkpdcgKw3ZxJcExi2kr0VnpomVnij75d3bR/pbfdCgGQcA+UwOyKukeTAdByLzJETxkaYGjczmjENGXW7/FyChm4ZWdLhT+jUfzqFeBkdkNxTH04DeT40B/qr/NF35Rw==',key_name='tempest-keypair-1898623706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2598d15187804c928a694f10598376ca',ramdisk_id='',reservation_id='r-7f0s9be4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-177389642',owner_user_name='tempest-AttachInterfacesUnderV243Test-177389642-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:58:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39cbd4b3356d4e52beac8da6a7696c50',uuid=6b4dbd52-cbad-4d8f-8446-e47fabdc170c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.597 239969 DEBUG nova.network.os_vif_util [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Converting VIF {"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.598 239969 DEBUG nova.network.os_vif_util [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:f2:2a,bridge_name='br-int',has_traffic_filtering=True,id=4ef2795d-4ecf-4161-b8d3-cf078d350d35,network=Network(8d2177c9-aa21-40e9-8927-6a75b9503d25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef2795d-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.600 239969 DEBUG nova.objects.instance [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lazy-loading 'pci_devices' on Instance uuid 6b4dbd52-cbad-4d8f-8446-e47fabdc170c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.602 239969 DEBUG nova.network.neutron [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updated VIF entry in instance network info cache for port 4ef2795d-4ecf-4161-b8d3-cf078d350d35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.602 239969 DEBUG nova.network.neutron [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updating instance_info_cache with network_info: [{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.624 239969 DEBUG oslo_concurrency.lockutils [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.624 239969 DEBUG nova.compute.manager [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Received event network-vif-unplugged-2fa36685-e638-4043-a2a1-4b66a063e440 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.625 239969 DEBUG oslo_concurrency.lockutils [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.625 239969 DEBUG oslo_concurrency.lockutils [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.625 239969 DEBUG oslo_concurrency.lockutils [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.625 239969 DEBUG nova.compute.manager [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] No waiting events found dispatching network-vif-unplugged-2fa36685-e638-4043-a2a1-4b66a063e440 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.625 239969 DEBUG nova.compute.manager [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Received event network-vif-unplugged-2fa36685-e638-4043-a2a1-4b66a063e440 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.626 239969 DEBUG nova.compute.manager [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Received event network-vif-plugged-2fa36685-e638-4043-a2a1-4b66a063e440 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.626 239969 DEBUG oslo_concurrency.lockutils [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.626 239969 DEBUG oslo_concurrency.lockutils [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.626 239969 DEBUG oslo_concurrency.lockutils [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.626 239969 DEBUG nova.compute.manager [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] No waiting events found dispatching network-vif-plugged-2fa36685-e638-4043-a2a1-4b66a063e440 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.627 239969 WARNING nova.compute.manager [req-9a209c5d-aed3-4f71-aad3-52f893eb464c req-8ec03cde-c1a6-4d20-ba14-a7df149cf71e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Received unexpected event network-vif-plugged-2fa36685-e638-4043-a2a1-4b66a063e440 for instance with vm_state active and task_state deleting.
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.629 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <uuid>6b4dbd52-cbad-4d8f-8446-e47fabdc170c</uuid>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <name>instance-00000033</name>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-233245799</nova:name>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:58:31</nova:creationTime>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <nova:user uuid="39cbd4b3356d4e52beac8da6a7696c50">tempest-AttachInterfacesUnderV243Test-177389642-project-member</nova:user>
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <nova:project uuid="2598d15187804c928a694f10598376ca">tempest-AttachInterfacesUnderV243Test-177389642</nova:project>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <nova:port uuid="4ef2795d-4ecf-4161-b8d3-cf078d350d35">
Jan 26 15:58:32 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <system>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <entry name="serial">6b4dbd52-cbad-4d8f-8446-e47fabdc170c</entry>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <entry name="uuid">6b4dbd52-cbad-4d8f-8446-e47fabdc170c</entry>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     </system>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <os>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   </os>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <features>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   </features>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk">
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       </source>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk.config">
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       </source>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:58:32 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:d2:f2:2a"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <target dev="tap4ef2795d-4e"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c/console.log" append="off"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <video>
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     </video>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:58:32 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:58:32 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:58:32 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:58:32 compute-0 nova_compute[239965]: </domain>
Jan 26 15:58:32 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.630 239969 DEBUG nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Preparing to wait for external event network-vif-plugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.630 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.631 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.631 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.631 239969 DEBUG nova.virt.libvirt.vif [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-233245799',display_name='tempest-AttachInterfacesUnderV243Test-server-233245799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-233245799',id=51,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnkpdcgKw3ZxJcExi2kr0VnpomVnij75d3bR/pbfdCgGQcA+UwOyKukeTAdByLzJETxkaYGjczmjENGXW7/FyChm4ZWdLhT+jUfzqFeBkdkNxTH04DeT40B/qr/NF35Rw==',key_name='tempest-keypair-1898623706',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2598d15187804c928a694f10598376ca',ramdisk_id='',reservation_id='r-7f0s9be4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-177389642',owner_user_name='tempest-AttachInterfacesUnderV243Test-177389642-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:58:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39cbd4b3356d4e52beac8da6a7696c50',uuid=6b4dbd52-cbad-4d8f-8446-e47fabdc170c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.632 239969 DEBUG nova.network.os_vif_util [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Converting VIF {"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.632 239969 DEBUG nova.network.os_vif_util [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:f2:2a,bridge_name='br-int',has_traffic_filtering=True,id=4ef2795d-4ecf-4161-b8d3-cf078d350d35,network=Network(8d2177c9-aa21-40e9-8927-6a75b9503d25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef2795d-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.633 239969 DEBUG os_vif [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:f2:2a,bridge_name='br-int',has_traffic_filtering=True,id=4ef2795d-4ecf-4161-b8d3-cf078d350d35,network=Network(8d2177c9-aa21-40e9-8927-6a75b9503d25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef2795d-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.633 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.634 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.634 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.637 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.637 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ef2795d-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.638 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ef2795d-4e, col_values=(('external_ids', {'iface-id': '4ef2795d-4ecf-4161-b8d3-cf078d350d35', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:f2:2a', 'vm-uuid': '6b4dbd52-cbad-4d8f-8446-e47fabdc170c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.639 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:32 compute-0 NetworkManager[48954]: <info>  [1769443112.6402] manager: (tap4ef2795d-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.642 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.644 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:32 compute-0 nova_compute[239965]: 2026-01-26 15:58:32.645 239969 INFO os_vif [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:f2:2a,bridge_name='br-int',has_traffic_filtering=True,id=4ef2795d-4ecf-4161-b8d3-cf078d350d35,network=Network(8d2177c9-aa21-40e9-8927-6a75b9503d25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef2795d-4e')
Jan 26 15:58:33 compute-0 kernel: tap83367d44-ef (unregistering): left promiscuous mode
Jan 26 15:58:33 compute-0 NetworkManager[48954]: <info>  [1769443113.1583] device (tap83367d44-ef): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.162 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:33 compute-0 ovn_controller[146046]: 2026-01-26T15:58:33Z|00396|binding|INFO|Releasing lport 83367d44-efa7-43f2-b6dd-eee666860388 from this chassis (sb_readonly=0)
Jan 26 15:58:33 compute-0 ovn_controller[146046]: 2026-01-26T15:58:33Z|00397|binding|INFO|Setting lport 83367d44-efa7-43f2-b6dd-eee666860388 down in Southbound
Jan 26 15:58:33 compute-0 ovn_controller[146046]: 2026-01-26T15:58:33Z|00398|binding|INFO|Removing iface tap83367d44-ef ovn-installed in OVS
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:33.183 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:b6:96 10.100.0.14'], port_security=['fa:16:3e:08:b6:96 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '58e03608-6ade-4867-ba57-e5b723e5fa71', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e0f483e6-b774-4b05-865b-210bd6652bcd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2244b984e24a40ff8bdfbbe29dec2c25', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7113c4c8-c402-431f-887d-4f75aaeda709', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2eff8161-aacd-434f-98a2-0671c9df8619, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=83367d44-efa7-43f2-b6dd-eee666860388) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:33.185 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 83367d44-efa7-43f2-b6dd-eee666860388 in datapath e0f483e6-b774-4b05-865b-210bd6652bcd unbound from our chassis
Jan 26 15:58:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:33.186 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e0f483e6-b774-4b05-865b-210bd6652bcd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:58:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:33.187 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca7920c-5470-4538-9c50-fed172b35ff8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:33.188 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd namespace which is not needed anymore
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.202 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:33 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Deactivated successfully.
Jan 26 15:58:33 compute-0 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d00000030.scope: Consumed 15.123s CPU time.
Jan 26 15:58:33 compute-0 systemd-machined[208061]: Machine qemu-53-instance-00000030 terminated.
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.262 239969 DEBUG nova.compute.manager [req-d23290a6-9493-410a-8627-221161672249 req-49851da5-fc37-4c49-a24b-45cf0e457793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Received event network-vif-deleted-e2bbbc1d-7dc9-4bc3-a0c9-01819e687e93 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.263 239969 DEBUG nova.compute.manager [req-d23290a6-9493-410a-8627-221161672249 req-49851da5-fc37-4c49-a24b-45cf0e457793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Received event network-vif-deleted-2fa36685-e638-4043-a2a1-4b66a063e440 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/741224505' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.341 239969 DEBUG oslo_concurrency.processutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.909s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.358 239969 DEBUG nova.compute.provider_tree [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.380 239969 DEBUG nova.scheduler.client.report [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.389 239969 INFO nova.virt.libvirt.driver [-] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Instance destroyed successfully.
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.391 239969 DEBUG nova.objects.instance [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lazy-loading 'resources' on Instance uuid 58e03608-6ade-4867-ba57-e5b723e5fa71 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.395 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.395 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.396 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] No VIF found with MAC fa:16:3e:d2:f2:2a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.396 239969 INFO nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Using config drive
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.417 239969 DEBUG nova.storage.rbd_utils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] rbd image 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.424 239969 DEBUG nova.virt.libvirt.vif [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:57:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-515397940',display_name='tempest-ServersTestJSON-server-515397940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-515397940',id=48,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFGvEARmGS9yuUjcEfIL6UGHjJsACA1MQKB2GnHfFXLx21DIis43sG2i48NNY4bGWnoXjQJNKpRXhfvIZkQM46nSh8ilif/x14L4ocfVD5eSGKGKuANtF/Z1sQbOUYPSLg==',key_name='tempest-keypair-886775122',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:57:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2244b984e24a40ff8bdfbbe29dec2c25',ramdisk_id='',reservation_id='r-j6anui0k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1037582104',owner_user_name='tempest-ServersTestJSON-1037582104-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:57:56Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='155281d948ac43f4bd24b54f8f81888c',uuid=58e03608-6ade-4867-ba57-e5b723e5fa71,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.425 239969 DEBUG nova.network.os_vif_util [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Converting VIF {"id": "83367d44-efa7-43f2-b6dd-eee666860388", "address": "fa:16:3e:08:b6:96", "network": {"id": "e0f483e6-b774-4b05-865b-210bd6652bcd", "bridge": "br-int", "label": "tempest-ServersTestJSON-1790730746-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2244b984e24a40ff8bdfbbe29dec2c25", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83367d44-ef", "ovs_interfaceid": "83367d44-efa7-43f2-b6dd-eee666860388", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.425 239969 DEBUG nova.network.os_vif_util [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:08:b6:96,bridge_name='br-int',has_traffic_filtering=True,id=83367d44-efa7-43f2-b6dd-eee666860388,network=Network(e0f483e6-b774-4b05-865b-210bd6652bcd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83367d44-ef') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.426 239969 DEBUG os_vif [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:b6:96,bridge_name='br-int',has_traffic_filtering=True,id=83367d44-efa7-43f2-b6dd-eee666860388,network=Network(e0f483e6-b774-4b05-865b-210bd6652bcd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83367d44-ef') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.427 239969 DEBUG oslo_concurrency.lockutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.429 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.430 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83367d44-ef, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.455 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.457 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.459 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.461 239969 INFO os_vif [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:08:b6:96,bridge_name='br-int',has_traffic_filtering=True,id=83367d44-efa7-43f2-b6dd-eee666860388,network=Network(e0f483e6-b774-4b05-865b-210bd6652bcd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83367d44-ef')
Jan 26 15:58:33 compute-0 ceph-mon[75140]: pgmap v1362: 305 pgs: 305 active+clean; 245 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 124 op/s
Jan 26 15:58:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4160394054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1300829150' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:58:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:33.777 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 245 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 26 15:58:33 compute-0 neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd[282885]: [NOTICE]   (282889) : haproxy version is 2.8.14-c23fe91
Jan 26 15:58:33 compute-0 neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd[282885]: [NOTICE]   (282889) : path to executable is /usr/sbin/haproxy
Jan 26 15:58:33 compute-0 neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd[282885]: [WARNING]  (282889) : Exiting Master process...
Jan 26 15:58:33 compute-0 neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd[282885]: [ALERT]    (282889) : Current worker (282891) exited with code 143 (Terminated)
Jan 26 15:58:33 compute-0 neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd[282885]: [WARNING]  (282889) : All workers exited. Exiting... (0)
Jan 26 15:58:33 compute-0 systemd[1]: libpod-071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6.scope: Deactivated successfully.
Jan 26 15:58:33 compute-0 podman[284713]: 2026-01-26 15:58:33.865455297 +0000 UTC m=+0.572330099 container died 071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.915 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.918 239969 INFO nova.scheduler.client.report [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Deleted allocations for instance 37628c32-be19-4064-9b44-cc56c5c36b58
Jan 26 15:58:33 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.925 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:33.999 239969 DEBUG oslo_concurrency.lockutils [None req-99b1a494-d44a-42b2-96f4-15a2a3abe1df ae2a17f5ac914d809a867e7f6b85f618 84b323c4a02a4577a411dc13454f16e4 - - default default] Lock "37628c32-be19-4064-9b44-cc56c5c36b58" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.137 239969 INFO nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Creating config drive at /var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c/disk.config
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.146 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0kwos5ra execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.296 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0kwos5ra" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.326 239969 DEBUG nova.storage.rbd_utils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] rbd image 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.331 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c/disk.config 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.539 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.539 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.540 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.541 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.564 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.565 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.565 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.565 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:58:34 compute-0 nova_compute[239965]: 2026-01-26 15:58:34.565 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.297 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.382 239969 DEBUG nova.compute.manager [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received event network-vif-unplugged-83367d44-efa7-43f2-b6dd-eee666860388 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.383 239969 DEBUG oslo_concurrency.lockutils [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.384 239969 DEBUG oslo_concurrency.lockutils [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.384 239969 DEBUG oslo_concurrency.lockutils [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.384 239969 DEBUG nova.compute.manager [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] No waiting events found dispatching network-vif-unplugged-83367d44-efa7-43f2-b6dd-eee666860388 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.385 239969 DEBUG nova.compute.manager [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received event network-vif-unplugged-83367d44-efa7-43f2-b6dd-eee666860388 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.385 239969 DEBUG nova.compute.manager [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received event network-vif-plugged-83367d44-efa7-43f2-b6dd-eee666860388 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.385 239969 DEBUG oslo_concurrency.lockutils [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.385 239969 DEBUG oslo_concurrency.lockutils [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.386 239969 DEBUG oslo_concurrency.lockutils [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.386 239969 DEBUG nova.compute.manager [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] No waiting events found dispatching network-vif-plugged-83367d44-efa7-43f2-b6dd-eee666860388 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.386 239969 WARNING nova.compute.manager [req-a5ca3cfe-9132-42eb-965f-0a61d044e2d8 req-fa8d2786-630b-4ff7-ba9a-3a0ff70512a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received unexpected event network-vif-plugged-83367d44-efa7-43f2-b6dd-eee666860388 for instance with vm_state active and task_state deleting.
Jan 26 15:58:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/741224505' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:35 compute-0 ceph-mon[75140]: pgmap v1363: 305 pgs: 305 active+clean; 245 MiB data, 524 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 26 15:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6-userdata-shm.mount: Deactivated successfully.
Jan 26 15:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-265aac903de07c46e9249795eed246da6798169558045b2cb555d7d4212368ac-merged.mount: Deactivated successfully.
Jan 26 15:58:35 compute-0 podman[284713]: 2026-01-26 15:58:35.708526414 +0000 UTC m=+2.415401216 container cleanup 071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.749 239969 DEBUG oslo_concurrency.processutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c/disk.config 6b4dbd52-cbad-4d8f-8446-e47fabdc170c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.751 239969 INFO nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Deleting local config drive /var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c/disk.config because it was imported into RBD.
Jan 26 15:58:35 compute-0 systemd[1]: libpod-conmon-071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6.scope: Deactivated successfully.
Jan 26 15:58:35 compute-0 podman[284852]: 2026-01-26 15:58:35.813318117 +0000 UTC m=+0.079738609 container remove 071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.820 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[114112e4-55fc-4e9e-a3d8-ee99ba97b7f1]: (4, ('Mon Jan 26 03:58:33 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd (071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6)\n071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6\nMon Jan 26 03:58:35 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd (071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6)\n071e881c06d091ee861b5df6c5488e19534c457e9ae88d0ad54a5b2f32f2dca6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.824 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ef29235f-5fda-4f25-86cd-05cb5c51ad5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.826 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape0f483e6-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:35 compute-0 kernel: tape0f483e6-b0: left promiscuous mode
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.829 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:35 compute-0 kernel: tap4ef2795d-4e: entered promiscuous mode
Jan 26 15:58:35 compute-0 NetworkManager[48954]: <info>  [1769443115.8370] manager: (tap4ef2795d-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Jan 26 15:58:35 compute-0 systemd-udevd[284693]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:58:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 213 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 26 15:58:35 compute-0 NetworkManager[48954]: <info>  [1769443115.8540] device (tap4ef2795d-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:58:35 compute-0 NetworkManager[48954]: <info>  [1769443115.8547] device (tap4ef2795d-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:58:35 compute-0 ovn_controller[146046]: 2026-01-26T15:58:35Z|00399|binding|INFO|Claiming lport 4ef2795d-4ecf-4161-b8d3-cf078d350d35 for this chassis.
Jan 26 15:58:35 compute-0 ovn_controller[146046]: 2026-01-26T15:58:35Z|00400|binding|INFO|4ef2795d-4ecf-4161-b8d3-cf078d350d35: Claiming fa:16:3e:d2:f2:2a 10.100.0.7
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.858 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.861 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c10f9d2c-77a4-4fcd-b279-9ebaa0a665ae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.864 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:f2:2a 10.100.0.7'], port_security=['fa:16:3e:d2:f2:2a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6b4dbd52-cbad-4d8f-8446-e47fabdc170c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d2177c9-aa21-40e9-8927-6a75b9503d25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2598d15187804c928a694f10598376ca', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f1c69a6-d773-456d-8e48-1beedd226a0d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e144102a-5edb-4f12-9913-268c4e68bd41, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4ef2795d-4ecf-4161-b8d3-cf078d350d35) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.873 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6a19b233-fcf3-4d89-8dc8-842d9a2c2f76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.876 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[65f2574a-b5e8-443b-b8ac-3b1a487f5dc9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_controller[146046]: 2026-01-26T15:58:35Z|00401|binding|INFO|Setting lport 4ef2795d-4ecf-4161-b8d3-cf078d350d35 ovn-installed in OVS
Jan 26 15:58:35 compute-0 ovn_controller[146046]: 2026-01-26T15:58:35Z|00402|binding|INFO|Setting lport 4ef2795d-4ecf-4161-b8d3-cf078d350d35 up in Southbound
Jan 26 15:58:35 compute-0 nova_compute[239965]: 2026-01-26 15:58:35.880 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.900 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[261f97ba-8cda-4bd6-9c13-d17f6aef0f6e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 456501, 'reachable_time': 27817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284885, 'error': None, 'target': 'ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 systemd-machined[208061]: New machine qemu-57-instance-00000033.
Jan 26 15:58:35 compute-0 systemd[1]: run-netns-ovnmeta\x2de0f483e6\x2db774\x2d4b05\x2d865b\x2d210bd6652bcd.mount: Deactivated successfully.
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.907 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e0f483e6-b774-4b05-865b-210bd6652bcd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.908 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d32f9e3f-6c27-4bb1-8491-c65d02283511]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.909 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.910 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.911 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4ef2795d-4ecf-4161-b8d3-cf078d350d35 in datapath 8d2177c9-aa21-40e9-8927-6a75b9503d25 unbound from our chassis
Jan 26 15:58:35 compute-0 systemd[1]: Started Virtual Machine qemu-57-instance-00000033.
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.920 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8d2177c9-aa21-40e9-8927-6a75b9503d25
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.933 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[24c618b3-df91-41cf-aebe-afd68c39ac44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.936 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8d2177c9-a1 in ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.941 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8d2177c9-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.941 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a9162072-bcdf-4925-860f-1bcaa42bf9f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.943 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdfbc12-7847-41d0-8af8-2f5f3c7fb315]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.961 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a1927e24-5d74-4cb5-90b5-7a9a889db007]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:35.978 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6e11b2c3-e781-4b99-a3f6-fad348f35c70]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.011 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9c323705-d18c-4d6a-b682-05b421967f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 NetworkManager[48954]: <info>  [1769443116.0193] manager: (tap8d2177c9-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.020 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[43d8be74-aa79-429e-9f46-00acf5e6134d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.054 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f0ce1835-1275-4c40-808a-999e525861e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.058 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3a06745f-6825-4a69-8515-f3cd65400bbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 NetworkManager[48954]: <info>  [1769443116.0857] device (tap8d2177c9-a0): carrier: link connected
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.094 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e497dd95-d3c4-4182-b337-01d0a27bd29f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.113 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a734b3ae-e97c-4380-90b6-e55f266def77]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d2177c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:be:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460689, 'reachable_time': 36509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284916, 'error': None, 'target': 'ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.135 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[53289dff-5a98-4aeb-b72e-51ad23968a97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee0:beb2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 460689, 'tstamp': 460689}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284917, 'error': None, 'target': 'ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.144 239969 INFO nova.virt.libvirt.driver [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Deleting instance files /var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71_del
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.145 239969 INFO nova.virt.libvirt.driver [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Deletion of /var/lib/nova/instances/58e03608-6ade-4867-ba57-e5b723e5fa71_del complete
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.152 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ed5159-33a9-4250-ac54-d7bd05bade09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8d2177c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e0:be:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 124], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460689, 'reachable_time': 36509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284918, 'error': None, 'target': 'ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3275568068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.178 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.185 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d900884c-a106-49f0-9d19-cf2b5fbbd920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.201 239969 INFO nova.compute.manager [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Took 3.87 seconds to destroy the instance on the hypervisor.
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.202 239969 DEBUG oslo.service.loopingcall [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.202 239969 DEBUG nova.compute.manager [-] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.202 239969 DEBUG nova.network.neutron [-] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.242 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3482eaf3-7550-485c-8123-bc5053b69d24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.244 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d2177c9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.244 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.245 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d2177c9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:36 compute-0 NetworkManager[48954]: <info>  [1769443116.2474] manager: (tap8d2177c9-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Jan 26 15:58:36 compute-0 kernel: tap8d2177c9-a0: entered promiscuous mode
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.251 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8d2177c9-a0, col_values=(('external_ids', {'iface-id': 'baa5727d-28fe-41d5-8694-960826c92bc9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.253 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.255 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8d2177c9-aa21-40e9-8927-6a75b9503d25.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8d2177c9-aa21-40e9-8927-6a75b9503d25.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.257 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9411a761-2174-48af-8af6-0b3b7264e27d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.258 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-8d2177c9-aa21-40e9-8927-6a75b9503d25
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/8d2177c9-aa21-40e9-8927-6a75b9503d25.pid.haproxy
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 8d2177c9-aa21-40e9-8927-6a75b9503d25
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:58:36 compute-0 ovn_controller[146046]: 2026-01-26T15:58:36Z|00403|binding|INFO|Releasing lport baa5727d-28fe-41d5-8694-960826c92bc9 from this chassis (sb_readonly=0)
Jan 26 15:58:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:36.259 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25', 'env', 'PROCESS_TAG=haproxy-8d2177c9-aa21-40e9-8927-6a75b9503d25', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8d2177c9-aa21-40e9-8927-6a75b9503d25.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.279 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.281 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.282 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.283 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Error from libvirt while getting description of instance-00000030: [Error Code 42] Domain not found: no domain with matching uuid '58e03608-6ade-4867-ba57-e5b723e5fa71' (instance-00000030): libvirt.libvirtError: Domain not found: no domain with matching uuid '58e03608-6ade-4867-ba57-e5b723e5fa71' (instance-00000030)
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.459 239969 DEBUG nova.compute.manager [req-412229f4-0250-4569-afc9-47e41cc48d07 req-c5dcba8c-7340-4cfb-ba8f-51b29461a962 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-vif-plugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.461 239969 DEBUG oslo_concurrency.lockutils [req-412229f4-0250-4569-afc9-47e41cc48d07 req-c5dcba8c-7340-4cfb-ba8f-51b29461a962 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.461 239969 DEBUG oslo_concurrency.lockutils [req-412229f4-0250-4569-afc9-47e41cc48d07 req-c5dcba8c-7340-4cfb-ba8f-51b29461a962 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.462 239969 DEBUG oslo_concurrency.lockutils [req-412229f4-0250-4569-afc9-47e41cc48d07 req-c5dcba8c-7340-4cfb-ba8f-51b29461a962 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.462 239969 DEBUG nova.compute.manager [req-412229f4-0250-4569-afc9-47e41cc48d07 req-c5dcba8c-7340-4cfb-ba8f-51b29461a962 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Processing event network-vif-plugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.534 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.535 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4080MB free_disk=59.90990134328604GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.536 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.601 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 58e03608-6ade-4867-ba57-e5b723e5fa71 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.602 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 6b4dbd52-cbad-4d8f-8446-e47fabdc170c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.602 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.602 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:58:36 compute-0 ceph-mon[75140]: pgmap v1364: 305 pgs: 305 active+clean; 213 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 161 op/s
Jan 26 15:58:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3275568068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:36 compute-0 podman[284951]: 2026-01-26 15:58:36.655031306 +0000 UTC m=+0.063187992 container create 996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.673 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:36 compute-0 systemd[1]: Started libpod-conmon-996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6.scope.
Jan 26 15:58:36 compute-0 podman[284951]: 2026-01-26 15:58:36.618621162 +0000 UTC m=+0.026777868 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:58:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:58:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cd1be7d226113edc2f23189c84b6b5a353091d16442d573261499877f7b8f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:36 compute-0 podman[284951]: 2026-01-26 15:58:36.761346226 +0000 UTC m=+0.169502932 container init 996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 15:58:36 compute-0 podman[284951]: 2026-01-26 15:58:36.769950187 +0000 UTC m=+0.178106873 container start 996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 15:58:36 compute-0 neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25[285002]: [NOTICE]   (285012) : New worker (285015) forked
Jan 26 15:58:36 compute-0 neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25[285002]: [NOTICE]   (285012) : Loading success.
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.809 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443116.8083358, 6b4dbd52-cbad-4d8f-8446-e47fabdc170c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.809 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] VM Started (Lifecycle Event)
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.813 239969 DEBUG nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.825 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.838 239969 INFO nova.virt.libvirt.driver [-] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Instance spawned successfully.
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.839 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.847 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.850 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.861 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.862 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.863 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.864 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.864 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.864 239969 DEBUG nova.virt.libvirt.driver [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.872 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.872 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443116.8120666, 6b4dbd52-cbad-4d8f-8446-e47fabdc170c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.873 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] VM Paused (Lifecycle Event)
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.903 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.907 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443116.8250365, 6b4dbd52-cbad-4d8f-8446-e47fabdc170c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.908 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] VM Resumed (Lifecycle Event)
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.965 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.968 239969 INFO nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Took 10.82 seconds to spawn the instance on the hypervisor.
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.968 239969 DEBUG nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.973 239969 DEBUG nova.network.neutron [-] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:36 compute-0 nova_compute[239965]: 2026-01-26 15:58:36.974 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:58:37 compute-0 ovn_controller[146046]: 2026-01-26T15:58:37Z|00404|binding|INFO|Releasing lport baa5727d-28fe-41d5-8694-960826c92bc9 from this chassis (sb_readonly=0)
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.016 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.025 239969 INFO nova.compute.manager [-] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Took 0.82 seconds to deallocate network for instance.
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.025 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.064 239969 INFO nova.compute.manager [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Took 12.37 seconds to build instance.
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.087 239969 DEBUG oslo_concurrency.lockutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.092 239969 DEBUG oslo_concurrency.lockutils [None req-3fbada1e-c3ab-4fb3-98b7-22f6fd9f24ad 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:37 compute-0 ovn_controller[146046]: 2026-01-26T15:58:37Z|00405|binding|INFO|Releasing lport baa5727d-28fe-41d5-8694-960826c92bc9 from this chassis (sb_readonly=0)
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.242 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3008530190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.354 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.681s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.360 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.387 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.411 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.412 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.413 239969 DEBUG oslo_concurrency.lockutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:37 compute-0 nova_compute[239965]: 2026-01-26 15:58:37.481 239969 DEBUG oslo_concurrency.processutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3008530190' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1365: 305 pgs: 305 active+clean; 213 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 26 15:58:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1399629144' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.163 239969 DEBUG oslo_concurrency.processutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.169 239969 DEBUG nova.compute.provider_tree [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.187 239969 DEBUG nova.scheduler.client.report [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.226 239969 DEBUG oslo_concurrency.lockutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.249 239969 INFO nova.scheduler.client.report [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Deleted allocations for instance 58e03608-6ade-4867-ba57-e5b723e5fa71
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.326 239969 DEBUG oslo_concurrency.lockutils [None req-e8a834a3-bae2-408b-a504-1f4ffdf12c81 155281d948ac43f4bd24b54f8f81888c 2244b984e24a40ff8bdfbbe29dec2c25 - - default default] Lock "58e03608-6ade-4867-ba57-e5b723e5fa71" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.407 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.409 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.409 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.457 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.546 239969 DEBUG nova.compute.manager [req-5d074fa1-a40a-48cc-a535-8c1fdac359df req-f6fe0283-4fe7-41c2-b49b-6afd64d89443 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-vif-plugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.547 239969 DEBUG oslo_concurrency.lockutils [req-5d074fa1-a40a-48cc-a535-8c1fdac359df req-f6fe0283-4fe7-41c2-b49b-6afd64d89443 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.547 239969 DEBUG oslo_concurrency.lockutils [req-5d074fa1-a40a-48cc-a535-8c1fdac359df req-f6fe0283-4fe7-41c2-b49b-6afd64d89443 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.548 239969 DEBUG oslo_concurrency.lockutils [req-5d074fa1-a40a-48cc-a535-8c1fdac359df req-f6fe0283-4fe7-41c2-b49b-6afd64d89443 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.548 239969 DEBUG nova.compute.manager [req-5d074fa1-a40a-48cc-a535-8c1fdac359df req-f6fe0283-4fe7-41c2-b49b-6afd64d89443 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] No waiting events found dispatching network-vif-plugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.548 239969 WARNING nova.compute.manager [req-5d074fa1-a40a-48cc-a535-8c1fdac359df req-f6fe0283-4fe7-41c2-b49b-6afd64d89443 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received unexpected event network-vif-plugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 for instance with vm_state active and task_state None.
Jan 26 15:58:38 compute-0 nova_compute[239965]: 2026-01-26 15:58:38.549 239969 DEBUG nova.compute.manager [req-5d074fa1-a40a-48cc-a535-8c1fdac359df req-f6fe0283-4fe7-41c2-b49b-6afd64d89443 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Received event network-vif-deleted-83367d44-efa7-43f2-b6dd-eee666860388 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:38 compute-0 ceph-mon[75140]: pgmap v1365: 305 pgs: 305 active+clean; 213 MiB data, 500 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Jan 26 15:58:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1399629144' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:39 compute-0 nova_compute[239965]: 2026-01-26 15:58:39.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:58:39 compute-0 nova_compute[239965]: 2026-01-26 15:58:39.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:39 compute-0 NetworkManager[48954]: <info>  [1769443119.5729] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Jan 26 15:58:39 compute-0 NetworkManager[48954]: <info>  [1769443119.5741] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Jan 26 15:58:39 compute-0 ovn_controller[146046]: 2026-01-26T15:58:39Z|00406|binding|INFO|Releasing lport baa5727d-28fe-41d5-8694-960826c92bc9 from this chassis (sb_readonly=0)
Jan 26 15:58:39 compute-0 nova_compute[239965]: 2026-01-26 15:58:39.620 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:39 compute-0 nova_compute[239965]: 2026-01-26 15:58:39.626 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 166 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Jan 26 15:58:40 compute-0 nova_compute[239965]: 2026-01-26 15:58:40.299 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:40 compute-0 nova_compute[239965]: 2026-01-26 15:58:40.671 239969 DEBUG nova.compute.manager [req-3a602aee-b534-4f27-abf7-bb76f42cff89 req-d8c27043-3c92-4611-88ca-9338a4ddd7d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-changed-4ef2795d-4ecf-4161-b8d3-cf078d350d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:40 compute-0 nova_compute[239965]: 2026-01-26 15:58:40.671 239969 DEBUG nova.compute.manager [req-3a602aee-b534-4f27-abf7-bb76f42cff89 req-d8c27043-3c92-4611-88ca-9338a4ddd7d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Refreshing instance network info cache due to event network-changed-4ef2795d-4ecf-4161-b8d3-cf078d350d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:58:40 compute-0 nova_compute[239965]: 2026-01-26 15:58:40.672 239969 DEBUG oslo_concurrency.lockutils [req-3a602aee-b534-4f27-abf7-bb76f42cff89 req-d8c27043-3c92-4611-88ca-9338a4ddd7d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:40 compute-0 nova_compute[239965]: 2026-01-26 15:58:40.672 239969 DEBUG oslo_concurrency.lockutils [req-3a602aee-b534-4f27-abf7-bb76f42cff89 req-d8c27043-3c92-4611-88ca-9338a4ddd7d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:40 compute-0 nova_compute[239965]: 2026-01-26 15:58:40.672 239969 DEBUG nova.network.neutron [req-3a602aee-b534-4f27-abf7-bb76f42cff89 req-d8c27043-3c92-4611-88ca-9338a4ddd7d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Refreshing network info cache for port 4ef2795d-4ecf-4161-b8d3-cf078d350d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:58:40 compute-0 ceph-mon[75140]: pgmap v1366: 305 pgs: 305 active+clean; 166 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 249 op/s
Jan 26 15:58:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 157 KiB/s wr, 177 op/s
Jan 26 15:58:42 compute-0 nova_compute[239965]: 2026-01-26 15:58:42.699 239969 DEBUG nova.network.neutron [req-3a602aee-b534-4f27-abf7-bb76f42cff89 req-d8c27043-3c92-4611-88ca-9338a4ddd7d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updated VIF entry in instance network info cache for port 4ef2795d-4ecf-4161-b8d3-cf078d350d35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:58:42 compute-0 nova_compute[239965]: 2026-01-26 15:58:42.700 239969 DEBUG nova.network.neutron [req-3a602aee-b534-4f27-abf7-bb76f42cff89 req-d8c27043-3c92-4611-88ca-9338a4ddd7d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updating instance_info_cache with network_info: [{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:42 compute-0 nova_compute[239965]: 2026-01-26 15:58:42.701 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443107.6984491, d9b6d871-11f5-46ea-b7f2-502cd8b12b21 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:42 compute-0 nova_compute[239965]: 2026-01-26 15:58:42.701 239969 INFO nova.compute.manager [-] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] VM Stopped (Lifecycle Event)
Jan 26 15:58:42 compute-0 nova_compute[239965]: 2026-01-26 15:58:42.734 239969 DEBUG nova.compute.manager [None req-106fc60a-742d-454b-969c-96f9b1f131d6 - - - - - -] [instance: d9b6d871-11f5-46ea-b7f2-502cd8b12b21] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:42 compute-0 nova_compute[239965]: 2026-01-26 15:58:42.739 239969 DEBUG oslo_concurrency.lockutils [req-3a602aee-b534-4f27-abf7-bb76f42cff89 req-d8c27043-3c92-4611-88ca-9338a4ddd7d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:42 compute-0 ovn_controller[146046]: 2026-01-26T15:58:42Z|00407|binding|INFO|Releasing lport baa5727d-28fe-41d5-8694-960826c92bc9 from this chassis (sb_readonly=0)
Jan 26 15:58:42 compute-0 nova_compute[239965]: 2026-01-26 15:58:42.860 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:43 compute-0 ceph-mon[75140]: pgmap v1367: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 157 KiB/s wr, 177 op/s
Jan 26 15:58:43 compute-0 nova_compute[239965]: 2026-01-26 15:58:43.459 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1368: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 133 op/s
Jan 26 15:58:44 compute-0 nova_compute[239965]: 2026-01-26 15:58:44.385 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443109.383582, 37628c32-be19-4064-9b44-cc56c5c36b58 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:44 compute-0 nova_compute[239965]: 2026-01-26 15:58:44.386 239969 INFO nova.compute.manager [-] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] VM Stopped (Lifecycle Event)
Jan 26 15:58:44 compute-0 nova_compute[239965]: 2026-01-26 15:58:44.405 239969 DEBUG nova.compute.manager [None req-098f8655-3789-4e1a-8477-0a02edb6349b - - - - - -] [instance: 37628c32-be19-4064-9b44-cc56c5c36b58] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:45 compute-0 ceph-mon[75140]: pgmap v1368: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 133 op/s
Jan 26 15:58:45 compute-0 nova_compute[239965]: 2026-01-26 15:58:45.301 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:45 compute-0 nova_compute[239965]: 2026-01-26 15:58:45.538 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 133 op/s
Jan 26 15:58:47 compute-0 ceph-mon[75140]: pgmap v1369: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 133 op/s
Jan 26 15:58:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1370: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 95 op/s
Jan 26 15:58:47 compute-0 nova_compute[239965]: 2026-01-26 15:58:47.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:48 compute-0 ceph-mon[75140]: pgmap v1370: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 95 op/s
Jan 26 15:58:48 compute-0 nova_compute[239965]: 2026-01-26 15:58:48.374 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443113.3733084, 58e03608-6ade-4867-ba57-e5b723e5fa71 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:58:48 compute-0 nova_compute[239965]: 2026-01-26 15:58:48.375 239969 INFO nova.compute.manager [-] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] VM Stopped (Lifecycle Event)
Jan 26 15:58:48 compute-0 nova_compute[239965]: 2026-01-26 15:58:48.451 239969 DEBUG nova.compute.manager [None req-778d0715-666e-4d23-8884-78b78e943c9e - - - - - -] [instance: 58e03608-6ade-4867-ba57-e5b723e5fa71] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:58:48 compute-0 nova_compute[239965]: 2026-01-26 15:58:48.461 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:58:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1386610345' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:58:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:58:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1386610345' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035507914811487253 of space, bias 1.0, pg target 0.10652374443446176 quantized to 32 (current 32)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010149608080978775 of space, bias 1.0, pg target 0.3044882424293633 quantized to 32 (current 32)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.015906802221707e-07 of space, bias 4.0, pg target 0.0009619088162666048 quantized to 16 (current 16)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:58:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:58:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1386610345' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:58:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1386610345' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:58:49 compute-0 nova_compute[239965]: 2026-01-26 15:58:49.551 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:49 compute-0 nova_compute[239965]: 2026-01-26 15:58:49.717 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1371: 305 pgs: 305 active+clean; 134 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Jan 26 15:58:50 compute-0 nova_compute[239965]: 2026-01-26 15:58:50.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:50 compute-0 ceph-mon[75140]: pgmap v1371: 305 pgs: 305 active+clean; 134 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Jan 26 15:58:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 137 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 197 KiB/s wr, 21 op/s
Jan 26 15:58:51 compute-0 nova_compute[239965]: 2026-01-26 15:58:51.913 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:52 compute-0 ovn_controller[146046]: 2026-01-26T15:58:52Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:f2:2a 10.100.0.7
Jan 26 15:58:52 compute-0 ovn_controller[146046]: 2026-01-26T15:58:52Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:f2:2a 10.100.0.7
Jan 26 15:58:52 compute-0 nova_compute[239965]: 2026-01-26 15:58:52.841 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:52 compute-0 ceph-mon[75140]: pgmap v1372: 305 pgs: 305 active+clean; 137 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 197 KiB/s wr, 21 op/s
Jan 26 15:58:52 compute-0 sudo[285068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:58:52 compute-0 sudo[285068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:58:52 compute-0 sudo[285068]: pam_unix(sudo:session): session closed for user root
Jan 26 15:58:53 compute-0 sudo[285093]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:58:53 compute-0 sudo[285093]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:58:53 compute-0 nova_compute[239965]: 2026-01-26 15:58:53.463 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:53 compute-0 sudo[285093]: pam_unix(sudo:session): session closed for user root
Jan 26 15:58:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:58:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:58:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:58:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:58:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:58:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:58:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:58:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:58:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:58:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:58:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:58:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:58:53 compute-0 sudo[285149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:58:53 compute-0 sudo[285149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:58:53 compute-0 sudo[285149]: pam_unix(sudo:session): session closed for user root
Jan 26 15:58:53 compute-0 sudo[285174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:58:53 compute-0 sudo[285174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:58:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1373: 305 pgs: 305 active+clean; 137 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 197 KiB/s wr, 14 op/s
Jan 26 15:58:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:58:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:58:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:58:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:58:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:58:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:58:54 compute-0 podman[285211]: 2026-01-26 15:58:54.105237265 +0000 UTC m=+0.025450765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:58:54 compute-0 podman[285211]: 2026-01-26 15:58:54.203158729 +0000 UTC m=+0.123372209 container create 3eedc6437958b4123c707b40216d8513177c6ad1e4abac81280792353e0f179b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:58:54 compute-0 systemd[1]: Started libpod-conmon-3eedc6437958b4123c707b40216d8513177c6ad1e4abac81280792353e0f179b.scope.
Jan 26 15:58:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:58:54 compute-0 podman[285211]: 2026-01-26 15:58:54.316320366 +0000 UTC m=+0.236533866 container init 3eedc6437958b4123c707b40216d8513177c6ad1e4abac81280792353e0f179b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 15:58:54 compute-0 podman[285211]: 2026-01-26 15:58:54.325547703 +0000 UTC m=+0.245761183 container start 3eedc6437958b4123c707b40216d8513177c6ad1e4abac81280792353e0f179b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 15:58:54 compute-0 keen_khorana[285228]: 167 167
Jan 26 15:58:54 compute-0 systemd[1]: libpod-3eedc6437958b4123c707b40216d8513177c6ad1e4abac81280792353e0f179b.scope: Deactivated successfully.
Jan 26 15:58:54 compute-0 podman[285211]: 2026-01-26 15:58:54.347634825 +0000 UTC m=+0.267848335 container attach 3eedc6437958b4123c707b40216d8513177c6ad1e4abac81280792353e0f179b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:58:54 compute-0 podman[285211]: 2026-01-26 15:58:54.348717281 +0000 UTC m=+0.268930761 container died 3eedc6437958b4123c707b40216d8513177c6ad1e4abac81280792353e0f179b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 15:58:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1c750d58962ff0455aca070da89d0d2f31e3724d2b0ee31aa21f74403abd341-merged.mount: Deactivated successfully.
Jan 26 15:58:54 compute-0 podman[285211]: 2026-01-26 15:58:54.442055313 +0000 UTC m=+0.362268793 container remove 3eedc6437958b4123c707b40216d8513177c6ad1e4abac81280792353e0f179b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:58:54 compute-0 systemd[1]: libpod-conmon-3eedc6437958b4123c707b40216d8513177c6ad1e4abac81280792353e0f179b.scope: Deactivated successfully.
Jan 26 15:58:54 compute-0 ovn_controller[146046]: 2026-01-26T15:58:54Z|00408|binding|INFO|Releasing lport baa5727d-28fe-41d5-8694-960826c92bc9 from this chassis (sb_readonly=0)
Jan 26 15:58:54 compute-0 nova_compute[239965]: 2026-01-26 15:58:54.584 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:54 compute-0 podman[285252]: 2026-01-26 15:58:54.639775375 +0000 UTC m=+0.059732847 container create c51d0da0ae7bae9a63fa7acab6726539feca1f4f84a66b108f2bed256111aa1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:58:54 compute-0 systemd[1]: Started libpod-conmon-c51d0da0ae7bae9a63fa7acab6726539feca1f4f84a66b108f2bed256111aa1a.scope.
Jan 26 15:58:54 compute-0 podman[285252]: 2026-01-26 15:58:54.607422121 +0000 UTC m=+0.027379613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:58:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3869eff3443fe8b4916211dadefc216f11807db81809bcbdab5b68fce484cc92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3869eff3443fe8b4916211dadefc216f11807db81809bcbdab5b68fce484cc92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3869eff3443fe8b4916211dadefc216f11807db81809bcbdab5b68fce484cc92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3869eff3443fe8b4916211dadefc216f11807db81809bcbdab5b68fce484cc92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3869eff3443fe8b4916211dadefc216f11807db81809bcbdab5b68fce484cc92/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:54 compute-0 podman[285252]: 2026-01-26 15:58:54.722571507 +0000 UTC m=+0.142529009 container init c51d0da0ae7bae9a63fa7acab6726539feca1f4f84a66b108f2bed256111aa1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:58:54 compute-0 sshd-session[285265]: Connection closed by 45.148.10.240 port 52098
Jan 26 15:58:54 compute-0 podman[285252]: 2026-01-26 15:58:54.730736768 +0000 UTC m=+0.150694240 container start c51d0da0ae7bae9a63fa7acab6726539feca1f4f84a66b108f2bed256111aa1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_panini, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 15:58:54 compute-0 podman[285252]: 2026-01-26 15:58:54.734827518 +0000 UTC m=+0.154784990 container attach c51d0da0ae7bae9a63fa7acab6726539feca1f4f84a66b108f2bed256111aa1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:58:54 compute-0 ceph-mon[75140]: pgmap v1373: 305 pgs: 305 active+clean; 137 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 197 KiB/s wr, 14 op/s
Jan 26 15:58:55 compute-0 sad_panini[285270]: --> passed data devices: 0 physical, 3 LVM
Jan 26 15:58:55 compute-0 sad_panini[285270]: --> All data devices are unavailable
Jan 26 15:58:55 compute-0 nova_compute[239965]: 2026-01-26 15:58:55.198 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquiring lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:55 compute-0 nova_compute[239965]: 2026-01-26 15:58:55.199 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:55 compute-0 systemd[1]: libpod-c51d0da0ae7bae9a63fa7acab6726539feca1f4f84a66b108f2bed256111aa1a.scope: Deactivated successfully.
Jan 26 15:58:55 compute-0 podman[285290]: 2026-01-26 15:58:55.270366873 +0000 UTC m=+0.025030295 container died c51d0da0ae7bae9a63fa7acab6726539feca1f4f84a66b108f2bed256111aa1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-3869eff3443fe8b4916211dadefc216f11807db81809bcbdab5b68fce484cc92-merged.mount: Deactivated successfully.
Jan 26 15:58:55 compute-0 nova_compute[239965]: 2026-01-26 15:58:55.305 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:55 compute-0 podman[285290]: 2026-01-26 15:58:55.310735863 +0000 UTC m=+0.065399265 container remove c51d0da0ae7bae9a63fa7acab6726539feca1f4f84a66b108f2bed256111aa1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_panini, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:58:55 compute-0 systemd[1]: libpod-conmon-c51d0da0ae7bae9a63fa7acab6726539feca1f4f84a66b108f2bed256111aa1a.scope: Deactivated successfully.
Jan 26 15:58:55 compute-0 sudo[285174]: pam_unix(sudo:session): session closed for user root
Jan 26 15:58:55 compute-0 nova_compute[239965]: 2026-01-26 15:58:55.394 239969 DEBUG nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:58:55 compute-0 sudo[285305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:58:55 compute-0 sudo[285305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:58:55 compute-0 sudo[285305]: pam_unix(sudo:session): session closed for user root
Jan 26 15:58:55 compute-0 nova_compute[239965]: 2026-01-26 15:58:55.476 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:55 compute-0 nova_compute[239965]: 2026-01-26 15:58:55.476 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:55 compute-0 nova_compute[239965]: 2026-01-26 15:58:55.483 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:58:55 compute-0 nova_compute[239965]: 2026-01-26 15:58:55.485 239969 INFO nova.compute.claims [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:58:55 compute-0 sudo[285330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 15:58:55 compute-0 sudo[285330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:58:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:58:55 compute-0 nova_compute[239965]: 2026-01-26 15:58:55.629 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:55 compute-0 podman[285368]: 2026-01-26 15:58:55.817578164 +0000 UTC m=+0.051427543 container create efa087d4dda8be2c9c1d6903def850a3f57b0a3cb5fbc26d4ebb841850ec010a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gates, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:58:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 15:58:55 compute-0 systemd[1]: Started libpod-conmon-efa087d4dda8be2c9c1d6903def850a3f57b0a3cb5fbc26d4ebb841850ec010a.scope.
Jan 26 15:58:55 compute-0 podman[285368]: 2026-01-26 15:58:55.794359144 +0000 UTC m=+0.028208543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:58:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:58:55 compute-0 podman[285368]: 2026-01-26 15:58:55.921687 +0000 UTC m=+0.155536409 container init efa087d4dda8be2c9c1d6903def850a3f57b0a3cb5fbc26d4ebb841850ec010a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 15:58:55 compute-0 podman[285368]: 2026-01-26 15:58:55.930462325 +0000 UTC m=+0.164311704 container start efa087d4dda8be2c9c1d6903def850a3f57b0a3cb5fbc26d4ebb841850ec010a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gates, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 15:58:55 compute-0 elated_gates[285403]: 167 167
Jan 26 15:58:55 compute-0 systemd[1]: libpod-efa087d4dda8be2c9c1d6903def850a3f57b0a3cb5fbc26d4ebb841850ec010a.scope: Deactivated successfully.
Jan 26 15:58:55 compute-0 podman[285368]: 2026-01-26 15:58:55.937440346 +0000 UTC m=+0.171289755 container attach efa087d4dda8be2c9c1d6903def850a3f57b0a3cb5fbc26d4ebb841850ec010a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gates, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 15:58:55 compute-0 podman[285368]: 2026-01-26 15:58:55.938268206 +0000 UTC m=+0.172117595 container died efa087d4dda8be2c9c1d6903def850a3f57b0a3cb5fbc26d4ebb841850ec010a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 15:58:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c81d12a6071fe111bb881d0d61a580e07ce0481d219c3e2c7d4774d29c8b8043-merged.mount: Deactivated successfully.
Jan 26 15:58:55 compute-0 podman[285368]: 2026-01-26 15:58:55.992264402 +0000 UTC m=+0.226113781 container remove efa087d4dda8be2c9c1d6903def850a3f57b0a3cb5fbc26d4ebb841850ec010a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:58:56 compute-0 systemd[1]: libpod-conmon-efa087d4dda8be2c9c1d6903def850a3f57b0a3cb5fbc26d4ebb841850ec010a.scope: Deactivated successfully.
Jan 26 15:58:56 compute-0 podman[285427]: 2026-01-26 15:58:56.186144081 +0000 UTC m=+0.048668266 container create 1ab477e33268faf8c7833ec75acd110f5df30a685b128538e3f75067e286e82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 15:58:56 compute-0 systemd[1]: Started libpod-conmon-1ab477e33268faf8c7833ec75acd110f5df30a685b128538e3f75067e286e82e.scope.
Jan 26 15:58:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1495374893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:56 compute-0 podman[285427]: 2026-01-26 15:58:56.163093004 +0000 UTC m=+0.025617229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.278 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48af9388e7ca548458c39b909c38a7042bb9faab189bbd450daaa98c414f07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48af9388e7ca548458c39b909c38a7042bb9faab189bbd450daaa98c414f07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48af9388e7ca548458c39b909c38a7042bb9faab189bbd450daaa98c414f07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f48af9388e7ca548458c39b909c38a7042bb9faab189bbd450daaa98c414f07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.295 239969 DEBUG nova.compute.provider_tree [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:56 compute-0 podman[285427]: 2026-01-26 15:58:56.305579252 +0000 UTC m=+0.168103467 container init 1ab477e33268faf8c7833ec75acd110f5df30a685b128538e3f75067e286e82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_greider, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.312 239969 DEBUG nova.scheduler.client.report [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:56 compute-0 podman[285427]: 2026-01-26 15:58:56.315832293 +0000 UTC m=+0.178356478 container start 1ab477e33268faf8c7833ec75acd110f5df30a685b128538e3f75067e286e82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 15:58:56 compute-0 podman[285427]: 2026-01-26 15:58:56.320475787 +0000 UTC m=+0.182999992 container attach 1ab477e33268faf8c7833ec75acd110f5df30a685b128538e3f75067e286e82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.340 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.341 239969 DEBUG nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.395 239969 DEBUG nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.395 239969 DEBUG nova.network.neutron [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.414 239969 INFO nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.432 239969 DEBUG nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.534 239969 DEBUG nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.535 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.536 239969 INFO nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Creating image(s)
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.566 239969 DEBUG nova.storage.rbd_utils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] rbd image 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.599 239969 DEBUG nova.storage.rbd_utils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] rbd image 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:56 compute-0 wizardly_greider[285443]: {
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:     "0": [
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:         {
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "devices": [
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "/dev/loop3"
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             ],
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_name": "ceph_lv0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_size": "21470642176",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "name": "ceph_lv0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "tags": {
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.cluster_name": "ceph",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.crush_device_class": "",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.encrypted": "0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.objectstore": "bluestore",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.osd_id": "0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.type": "block",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.vdo": "0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.with_tpm": "0"
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             },
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "type": "block",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "vg_name": "ceph_vg0"
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:         }
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:     ],
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:     "1": [
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:         {
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "devices": [
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "/dev/loop4"
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             ],
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_name": "ceph_lv1",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_size": "21470642176",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "name": "ceph_lv1",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "tags": {
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.cluster_name": "ceph",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.crush_device_class": "",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.encrypted": "0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.objectstore": "bluestore",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.osd_id": "1",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.type": "block",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.vdo": "0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.with_tpm": "0"
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             },
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "type": "block",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "vg_name": "ceph_vg1"
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:         }
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:     ],
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:     "2": [
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:         {
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "devices": [
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "/dev/loop5"
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             ],
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_name": "ceph_lv2",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_size": "21470642176",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "name": "ceph_lv2",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "tags": {
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.cluster_name": "ceph",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.crush_device_class": "",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.encrypted": "0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.objectstore": "bluestore",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.osd_id": "2",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.type": "block",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.vdo": "0",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:                 "ceph.with_tpm": "0"
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             },
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "type": "block",
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:             "vg_name": "ceph_vg2"
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:         }
Jan 26 15:58:56 compute-0 wizardly_greider[285443]:     ]
Jan 26 15:58:56 compute-0 wizardly_greider[285443]: }
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.628 239969 DEBUG nova.storage.rbd_utils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] rbd image 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.631 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:56 compute-0 systemd[1]: libpod-1ab477e33268faf8c7833ec75acd110f5df30a685b128538e3f75067e286e82e.scope: Deactivated successfully.
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.664 239969 DEBUG nova.policy [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '090e697091a94483b6e27027be4f5cdf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '47c190a68c774b85bbacc74c5c5eb036', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:58:56 compute-0 podman[285509]: 2026-01-26 15:58:56.701257403 +0000 UTC m=+0.032866497 container died 1ab477e33268faf8c7833ec75acd110f5df30a685b128538e3f75067e286e82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_greider, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.705 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.705 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.706 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.706 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.728 239969 DEBUG nova.storage.rbd_utils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] rbd image 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f48af9388e7ca548458c39b909c38a7042bb9faab189bbd450daaa98c414f07-merged.mount: Deactivated successfully.
Jan 26 15:58:56 compute-0 nova_compute[239965]: 2026-01-26 15:58:56.735 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:56 compute-0 podman[285509]: 2026-01-26 15:58:56.749801105 +0000 UTC m=+0.081410179 container remove 1ab477e33268faf8c7833ec75acd110f5df30a685b128538e3f75067e286e82e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_greider, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 15:58:56 compute-0 systemd[1]: libpod-conmon-1ab477e33268faf8c7833ec75acd110f5df30a685b128538e3f75067e286e82e.scope: Deactivated successfully.
Jan 26 15:58:56 compute-0 sudo[285330]: pam_unix(sudo:session): session closed for user root
Jan 26 15:58:56 compute-0 sudo[285560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:58:56 compute-0 sudo[285560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:58:56 compute-0 sudo[285560]: pam_unix(sudo:session): session closed for user root
Jan 26 15:58:56 compute-0 ceph-mon[75140]: pgmap v1374: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 15:58:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1495374893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:56 compute-0 sudo[285585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 15:58:56 compute-0 sudo[285585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.121 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.199 239969 DEBUG nova.storage.rbd_utils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] resizing rbd image 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.249 239969 DEBUG nova.network.neutron [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Successfully created port: 3271e8c2-ab59-4211-b954-f7747fedbb1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:58:57 compute-0 podman[285660]: 2026-01-26 15:58:57.252804921 +0000 UTC m=+0.050517951 container create d00f40b131127c72954f877642d0d775778a4bead4f53f7d1b0603e6c06f74b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.269 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.269 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:57 compute-0 systemd[1]: Started libpod-conmon-d00f40b131127c72954f877642d0d775778a4bead4f53f7d1b0603e6c06f74b0.scope.
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.312 239969 DEBUG nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.320 239969 DEBUG nova.objects.instance [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lazy-loading 'migration_context' on Instance uuid 5a8e158a-5615-4acc-89e3-48b6dbf159c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:57 compute-0 podman[285660]: 2026-01-26 15:58:57.233367374 +0000 UTC m=+0.031080414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:58:57 compute-0 podman[285660]: 2026-01-26 15:58:57.332510697 +0000 UTC m=+0.130223747 container init d00f40b131127c72954f877642d0d775778a4bead4f53f7d1b0603e6c06f74b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 15:58:57 compute-0 podman[285660]: 2026-01-26 15:58:57.343941478 +0000 UTC m=+0.141654508 container start d00f40b131127c72954f877642d0d775778a4bead4f53f7d1b0603e6c06f74b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.346 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.347 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Ensure instance console log exists: /var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.348 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:57 compute-0 podman[285660]: 2026-01-26 15:58:57.348665584 +0000 UTC m=+0.146378634 container attach d00f40b131127c72954f877642d0d775778a4bead4f53f7d1b0603e6c06f74b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.348 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.349 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:57 compute-0 festive_nobel[285708]: 167 167
Jan 26 15:58:57 compute-0 systemd[1]: libpod-d00f40b131127c72954f877642d0d775778a4bead4f53f7d1b0603e6c06f74b0.scope: Deactivated successfully.
Jan 26 15:58:57 compute-0 podman[285660]: 2026-01-26 15:58:57.350766115 +0000 UTC m=+0.148479155 container died d00f40b131127c72954f877642d0d775778a4bead4f53f7d1b0603e6c06f74b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 15:58:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bbb2a0b29d10c7ec34cfbf84026c4df2cb21d3f910b3a70bc049f5ac2d39633-merged.mount: Deactivated successfully.
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.385 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.385 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.396 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.396 239969 INFO nova.compute.claims [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:58:57 compute-0 podman[285660]: 2026-01-26 15:58:57.398387854 +0000 UTC m=+0.196100884 container remove d00f40b131127c72954f877642d0d775778a4bead4f53f7d1b0603e6c06f74b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 15:58:57 compute-0 systemd[1]: libpod-conmon-d00f40b131127c72954f877642d0d775778a4bead4f53f7d1b0603e6c06f74b0.scope: Deactivated successfully.
Jan 26 15:58:57 compute-0 nova_compute[239965]: 2026-01-26 15:58:57.557 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:57 compute-0 podman[285734]: 2026-01-26 15:58:57.587004254 +0000 UTC m=+0.045468777 container create d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 15:58:57 compute-0 systemd[1]: Started libpod-conmon-d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e.scope.
Jan 26 15:58:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:58:57 compute-0 podman[285734]: 2026-01-26 15:58:57.567057305 +0000 UTC m=+0.025521858 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 15:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52aa1831f5c561e8d4b1fe44fa656ceb4eaf970b4d33445c706f26b0c982909/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52aa1831f5c561e8d4b1fe44fa656ceb4eaf970b4d33445c706f26b0c982909/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52aa1831f5c561e8d4b1fe44fa656ceb4eaf970b4d33445c706f26b0c982909/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d52aa1831f5c561e8d4b1fe44fa656ceb4eaf970b4d33445c706f26b0c982909/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 15:58:57 compute-0 podman[285734]: 2026-01-26 15:58:57.678868329 +0000 UTC m=+0.137332862 container init d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 15:58:57 compute-0 podman[285734]: 2026-01-26 15:58:57.686403304 +0000 UTC m=+0.144867827 container start d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 15:58:57 compute-0 podman[285734]: 2026-01-26 15:58:57.691602012 +0000 UTC m=+0.150066535 container attach d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 15:58:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1375: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 15:58:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:58:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3088279207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.196 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.204 239969 DEBUG nova.compute.provider_tree [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.226 239969 DEBUG nova.scheduler.client.report [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.251 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.266 239969 DEBUG nova.network.neutron [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Successfully updated port: 3271e8c2-ab59-4211-b954-f7747fedbb1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.272 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "dc83b1f1-19f0-43fa-824f-08eb86d2818f" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.273 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "dc83b1f1-19f0-43fa-824f-08eb86d2818f" acquired by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.280 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquiring lock "refresh_cache-5a8e158a-5615-4acc-89e3-48b6dbf159c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.280 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquired lock "refresh_cache-5a8e158a-5615-4acc-89e3-48b6dbf159c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.281 239969 DEBUG nova.network.neutron [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.285 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "dc83b1f1-19f0-43fa-824f-08eb86d2818f" "released" by "nova.compute.manager.ComputeManager._validate_instance_group_policy.<locals>._do_validation" :: held 0.013s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.286 239969 DEBUG nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.339 239969 DEBUG nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.339 239969 DEBUG nova.network.neutron [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.359 239969 INFO nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.374 239969 DEBUG nova.compute.manager [req-9b6b168f-a16b-4271-bcd6-dc93a303fae7 req-535be4f1-6692-427c-9c69-b2134aa8946a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Received event network-changed-3271e8c2-ab59-4211-b954-f7747fedbb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.375 239969 DEBUG nova.compute.manager [req-9b6b168f-a16b-4271-bcd6-dc93a303fae7 req-535be4f1-6692-427c-9c69-b2134aa8946a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Refreshing instance network info cache due to event network-changed-3271e8c2-ab59-4211-b954-f7747fedbb1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.375 239969 DEBUG oslo_concurrency.lockutils [req-9b6b168f-a16b-4271-bcd6-dc93a303fae7 req-535be4f1-6692-427c-9c69-b2134aa8946a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5a8e158a-5615-4acc-89e3-48b6dbf159c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.378 239969 DEBUG nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:58:58 compute-0 lvm[285851]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 15:58:58 compute-0 lvm[285851]: VG ceph_vg0 finished
Jan 26 15:58:58 compute-0 lvm[285852]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 15:58:58 compute-0 lvm[285852]: VG ceph_vg1 finished
Jan 26 15:58:58 compute-0 lvm[285854]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 15:58:58 compute-0 lvm[285854]: VG ceph_vg2 finished
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.465 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.469 239969 DEBUG nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.470 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.470 239969 INFO nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Creating image(s)
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.494 239969 DEBUG nova.storage.rbd_utils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] rbd image 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:58 compute-0 frosty_noyce[285751]: {}
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.527 239969 DEBUG nova.storage.rbd_utils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] rbd image 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:58 compute-0 systemd[1]: libpod-d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e.scope: Deactivated successfully.
Jan 26 15:58:58 compute-0 systemd[1]: libpod-d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e.scope: Consumed 1.416s CPU time.
Jan 26 15:58:58 compute-0 podman[285734]: 2026-01-26 15:58:58.546623747 +0000 UTC m=+1.005088270 container died d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.559 239969 DEBUG nova.storage.rbd_utils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] rbd image 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.566 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d52aa1831f5c561e8d4b1fe44fa656ceb4eaf970b4d33445c706f26b0c982909-merged.mount: Deactivated successfully.
Jan 26 15:58:58 compute-0 podman[285734]: 2026-01-26 15:58:58.596496431 +0000 UTC m=+1.054960954 container remove d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_noyce, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.602 239969 DEBUG nova.network.neutron [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.608 239969 DEBUG nova.policy [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '244974d3583149b1aa203f20c9fe40f7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0afaeef8913647ed98f86bf42cba01d5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:58:58 compute-0 systemd[1]: libpod-conmon-d7f8ff42a84cb855a5a60a8f7a11121578ce19cda1a3cbbf1f967edafda5f59e.scope: Deactivated successfully.
Jan 26 15:58:58 compute-0 sudo[285585]: pam_unix(sudo:session): session closed for user root
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.646 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.647 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.648 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.648 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 15:58:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:58:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 15:58:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.672 239969 DEBUG nova.storage.rbd_utils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] rbd image 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.677 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:58 compute-0 sudo[285943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 15:58:58 compute-0 sudo[285943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:58:58 compute-0 sudo[285943]: pam_unix(sudo:session): session closed for user root
Jan 26 15:58:58 compute-0 ceph-mon[75140]: pgmap v1375: 305 pgs: 305 active+clean; 167 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 277 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 15:58:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3088279207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:58:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:58:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:58:58 compute-0 nova_compute[239965]: 2026-01-26 15:58:58.996 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.060 239969 DEBUG nova.storage.rbd_utils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] resizing rbd image 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.140 239969 DEBUG nova.objects.instance [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lazy-loading 'migration_context' on Instance uuid 82d2c240-6e7e-4cc1-a311-2bf4fea69b85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.157 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.158 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Ensure instance console log exists: /var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.158 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.158 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.159 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:59.221 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:58:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:59.221 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:58:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:58:59.222 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.315 239969 DEBUG nova.network.neutron [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Successfully created port: 773a2e95-3163-4978-8c27-a987d2f1e3f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:58:59 compute-0 podman[286059]: 2026-01-26 15:58:59.368298175 +0000 UTC m=+0.054067848 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.479 239969 DEBUG nova.network.neutron [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Updating instance_info_cache with network_info: [{"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.507 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Releasing lock "refresh_cache-5a8e158a-5615-4acc-89e3-48b6dbf159c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.507 239969 DEBUG nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Instance network_info: |[{"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.508 239969 DEBUG oslo_concurrency.lockutils [req-9b6b168f-a16b-4271-bcd6-dc93a303fae7 req-535be4f1-6692-427c-9c69-b2134aa8946a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5a8e158a-5615-4acc-89e3-48b6dbf159c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.508 239969 DEBUG nova.network.neutron [req-9b6b168f-a16b-4271-bcd6-dc93a303fae7 req-535be4f1-6692-427c-9c69-b2134aa8946a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Refreshing network info cache for port 3271e8c2-ab59-4211-b954-f7747fedbb1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.515 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Start _get_guest_xml network_info=[{"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.519 239969 WARNING nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.525 239969 DEBUG nova.virt.libvirt.host [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.526 239969 DEBUG nova.virt.libvirt.host [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.530 239969 DEBUG nova.virt.libvirt.host [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.530 239969 DEBUG nova.virt.libvirt.host [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.531 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.531 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.532 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.532 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.532 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.532 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.532 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.533 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.533 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.533 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.533 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.534 239969 DEBUG nova.virt.hardware [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:58:59 compute-0 nova_compute[239965]: 2026-01-26 15:58:59.537 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:58:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 235 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 4.7 MiB/s wr, 114 op/s
Jan 26 15:59:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2017199260' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.142 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.167 239969 DEBUG nova.storage.rbd_utils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] rbd image 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.171 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.307 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.369 239969 DEBUG nova.network.neutron [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Successfully updated port: 773a2e95-3163-4978-8c27-a987d2f1e3f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.383 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "refresh_cache-82d2c240-6e7e-4cc1-a311-2bf4fea69b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.383 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquired lock "refresh_cache-82d2c240-6e7e-4cc1-a311-2bf4fea69b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.383 239969 DEBUG nova.network.neutron [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:00 compute-0 podman[286119]: 2026-01-26 15:59:00.398327177 +0000 UTC m=+0.083841579 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.442 239969 DEBUG nova.compute.manager [req-5a976b06-a2df-4bc8-a716-496060b36433 req-93c83c32-44ee-4e94-85f2-9c4f39f021cf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Received event network-changed-773a2e95-3163-4978-8c27-a987d2f1e3f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.442 239969 DEBUG nova.compute.manager [req-5a976b06-a2df-4bc8-a716-496060b36433 req-93c83c32-44ee-4e94-85f2-9c4f39f021cf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Refreshing instance network info cache due to event network-changed-773a2e95-3163-4978-8c27-a987d2f1e3f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.442 239969 DEBUG oslo_concurrency.lockutils [req-5a976b06-a2df-4bc8-a716-496060b36433 req-93c83c32-44ee-4e94-85f2-9c4f39f021cf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-82d2c240-6e7e-4cc1-a311-2bf4fea69b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.523 239969 DEBUG nova.network.neutron [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:59:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4219388275' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.749 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.751 239969 DEBUG nova.virt.libvirt.vif [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-964626783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-964626783',id=52,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='47c190a68c774b85bbacc74c5c5eb036',ramdisk_id='',reservation_id='r-k8vqae0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-996490364',owner_user_name='tempest-InstanceActionsV221TestJSON-996490364-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:58:56Z,user_data=None,user_id='090e697091a94483b6e27027be4f5cdf',uuid=5a8e158a-5615-4acc-89e3-48b6dbf159c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.751 239969 DEBUG nova.network.os_vif_util [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Converting VIF {"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.752 239969 DEBUG nova.network.os_vif_util [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:ea:d8,bridge_name='br-int',has_traffic_filtering=True,id=3271e8c2-ab59-4211-b954-f7747fedbb1c,network=Network(6b13b2b9-56af-4413-8ced-0f8ccf249f76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3271e8c2-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.753 239969 DEBUG nova.objects.instance [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5a8e158a-5615-4acc-89e3-48b6dbf159c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.767 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <uuid>5a8e158a-5615-4acc-89e3-48b6dbf159c5</uuid>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <name>instance-00000034</name>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <nova:name>tempest-InstanceActionsV221TestJSON-server-964626783</nova:name>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:58:59</nova:creationTime>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <nova:user uuid="090e697091a94483b6e27027be4f5cdf">tempest-InstanceActionsV221TestJSON-996490364-project-member</nova:user>
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <nova:project uuid="47c190a68c774b85bbacc74c5c5eb036">tempest-InstanceActionsV221TestJSON-996490364</nova:project>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <nova:port uuid="3271e8c2-ab59-4211-b954-f7747fedbb1c">
Jan 26 15:59:00 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <system>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <entry name="serial">5a8e158a-5615-4acc-89e3-48b6dbf159c5</entry>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <entry name="uuid">5a8e158a-5615-4acc-89e3-48b6dbf159c5</entry>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     </system>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <os>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   </os>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <features>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   </features>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk">
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk.config">
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:00 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:19:ea:d8"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <target dev="tap3271e8c2-ab"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5/console.log" append="off"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <video>
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     </video>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:59:00 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:59:00 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:59:00 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:59:00 compute-0 nova_compute[239965]: </domain>
Jan 26 15:59:00 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.769 239969 DEBUG nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Preparing to wait for external event network-vif-plugged-3271e8c2-ab59-4211-b954-f7747fedbb1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.769 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquiring lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.769 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.770 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.771 239969 DEBUG nova.virt.libvirt.vif [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-964626783',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-964626783',id=52,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='47c190a68c774b85bbacc74c5c5eb036',ramdisk_id='',reservation_id='r-k8vqae0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsV221TestJSON-996490364',owner_user_name='tempest-InstanceActionsV221TestJSON-996490364-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:58:56Z,user_data=None,user_id='090e697091a94483b6e27027be4f5cdf',uuid=5a8e158a-5615-4acc-89e3-48b6dbf159c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.771 239969 DEBUG nova.network.os_vif_util [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Converting VIF {"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.772 239969 DEBUG nova.network.os_vif_util [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:ea:d8,bridge_name='br-int',has_traffic_filtering=True,id=3271e8c2-ab59-4211-b954-f7747fedbb1c,network=Network(6b13b2b9-56af-4413-8ced-0f8ccf249f76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3271e8c2-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.772 239969 DEBUG os_vif [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:ea:d8,bridge_name='br-int',has_traffic_filtering=True,id=3271e8c2-ab59-4211-b954-f7747fedbb1c,network=Network(6b13b2b9-56af-4413-8ced-0f8ccf249f76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3271e8c2-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.775 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.775 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.776 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.780 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.780 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3271e8c2-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.781 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3271e8c2-ab, col_values=(('external_ids', {'iface-id': '3271e8c2-ab59-4211-b954-f7747fedbb1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:19:ea:d8', 'vm-uuid': '5a8e158a-5615-4acc-89e3-48b6dbf159c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.782 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:00 compute-0 NetworkManager[48954]: <info>  [1769443140.7842] manager: (tap3271e8c2-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.785 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.792 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.794 239969 INFO os_vif [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:ea:d8,bridge_name='br-int',has_traffic_filtering=True,id=3271e8c2-ab59-4211-b954-f7747fedbb1c,network=Network(6b13b2b9-56af-4413-8ced-0f8ccf249f76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3271e8c2-ab')
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.894 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.895 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.895 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] No VIF found with MAC fa:16:3e:19:ea:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.895 239969 INFO nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Using config drive
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.916 239969 DEBUG nova.storage.rbd_utils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] rbd image 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.970 239969 DEBUG nova.network.neutron [req-9b6b168f-a16b-4271-bcd6-dc93a303fae7 req-535be4f1-6692-427c-9c69-b2134aa8946a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Updated VIF entry in instance network info cache for port 3271e8c2-ab59-4211-b954-f7747fedbb1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.970 239969 DEBUG nova.network.neutron [req-9b6b168f-a16b-4271-bcd6-dc93a303fae7 req-535be4f1-6692-427c-9c69-b2134aa8946a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Updating instance_info_cache with network_info: [{"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:00 compute-0 ceph-mon[75140]: pgmap v1376: 305 pgs: 305 active+clean; 235 MiB data, 509 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 4.7 MiB/s wr, 114 op/s
Jan 26 15:59:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2017199260' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4219388275' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:00 compute-0 nova_compute[239965]: 2026-01-26 15:59:00.989 239969 DEBUG oslo_concurrency.lockutils [req-9b6b168f-a16b-4271-bcd6-dc93a303fae7 req-535be4f1-6692-427c-9c69-b2134aa8946a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5a8e158a-5615-4acc-89e3-48b6dbf159c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.067 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.290 239969 DEBUG nova.network.neutron [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Updating instance_info_cache with network_info: [{"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.308 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Releasing lock "refresh_cache-82d2c240-6e7e-4cc1-a311-2bf4fea69b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.308 239969 DEBUG nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Instance network_info: |[{"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.308 239969 DEBUG oslo_concurrency.lockutils [req-5a976b06-a2df-4bc8-a716-496060b36433 req-93c83c32-44ee-4e94-85f2-9c4f39f021cf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-82d2c240-6e7e-4cc1-a311-2bf4fea69b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.309 239969 DEBUG nova.network.neutron [req-5a976b06-a2df-4bc8-a716-496060b36433 req-93c83c32-44ee-4e94-85f2-9c4f39f021cf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Refreshing network info cache for port 773a2e95-3163-4978-8c27-a987d2f1e3f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.311 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Start _get_guest_xml network_info=[{"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.315 239969 WARNING nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.319 239969 INFO nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Creating config drive at /var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5/disk.config
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.323 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6mg2tz28 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.364 239969 DEBUG nova.virt.libvirt.host [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.365 239969 DEBUG nova.virt.libvirt.host [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.367 239969 DEBUG nova.virt.libvirt.host [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.368 239969 DEBUG nova.virt.libvirt.host [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.368 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.368 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.369 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.369 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.369 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.369 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.369 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.370 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.370 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.370 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.370 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.370 239969 DEBUG nova.virt.hardware [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.373 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.460 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6mg2tz28" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.493 239969 DEBUG nova.storage.rbd_utils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] rbd image 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.498 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5/disk.config 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.730 239969 DEBUG oslo_concurrency.processutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5/disk.config 5a8e158a-5615-4acc-89e3-48b6dbf159c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.731 239969 INFO nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Deleting local config drive /var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5/disk.config because it was imported into RBD.
Jan 26 15:59:01 compute-0 kernel: tap3271e8c2-ab: entered promiscuous mode
Jan 26 15:59:01 compute-0 NetworkManager[48954]: <info>  [1769443141.7783] manager: (tap3271e8c2-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Jan 26 15:59:01 compute-0 ovn_controller[146046]: 2026-01-26T15:59:01Z|00409|binding|INFO|Claiming lport 3271e8c2-ab59-4211-b954-f7747fedbb1c for this chassis.
Jan 26 15:59:01 compute-0 ovn_controller[146046]: 2026-01-26T15:59:01Z|00410|binding|INFO|3271e8c2-ab59-4211-b954-f7747fedbb1c: Claiming fa:16:3e:19:ea:d8 10.100.0.3
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.782 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.789 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:ea:d8 10.100.0.3'], port_security=['fa:16:3e:19:ea:d8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a8e158a-5615-4acc-89e3-48b6dbf159c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b13b2b9-56af-4413-8ced-0f8ccf249f76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '47c190a68c774b85bbacc74c5c5eb036', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f3330e28-3804-4132-9262-7b03b784d26e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=272e110a-91e0-446d-bab3-0220a2b903ae, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3271e8c2-ab59-4211-b954-f7747fedbb1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.791 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3271e8c2-ab59-4211-b954-f7747fedbb1c in datapath 6b13b2b9-56af-4413-8ced-0f8ccf249f76 bound to our chassis
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.792 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6b13b2b9-56af-4413-8ced-0f8ccf249f76
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.808 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3149755c-2a0f-4b61-991c-24d093cc4326]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.809 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6b13b2b9-51 in ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.812 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.812 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6b13b2b9-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:59:01 compute-0 ovn_controller[146046]: 2026-01-26T15:59:01Z|00411|binding|INFO|Setting lport 3271e8c2-ab59-4211-b954-f7747fedbb1c ovn-installed in OVS
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.812 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9b021a29-26d5-43b8-ab3b-aaff06b362fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:01 compute-0 ovn_controller[146046]: 2026-01-26T15:59:01Z|00412|binding|INFO|Setting lport 3271e8c2-ab59-4211-b954-f7747fedbb1c up in Southbound
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.816 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:01 compute-0 systemd-udevd[286257]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.817 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06685c30-c6a6-4d73-8516-a200e200f9d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:01 compute-0 nova_compute[239965]: 2026-01-26 15:59:01.821 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:01 compute-0 NetworkManager[48954]: <info>  [1769443141.8318] device (tap3271e8c2-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:59:01 compute-0 NetworkManager[48954]: <info>  [1769443141.8325] device (tap3271e8c2-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.845 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ea562d32-d6e3-4271-8014-6cca903db85b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:01 compute-0 systemd-machined[208061]: New machine qemu-58-instance-00000034.
Jan 26 15:59:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1377: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 5.7 MiB/s wr, 113 op/s
Jan 26 15:59:01 compute-0 systemd[1]: Started Virtual Machine qemu-58-instance-00000034.
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.871 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa1cc30-3d72-4983-9ac2-9074687bd07c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.909 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[532d37b6-061d-46b8-b8c2-f50579fb4ba1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:01 compute-0 NetworkManager[48954]: <info>  [1769443141.9179] manager: (tap6b13b2b9-50): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.919 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8e0553-55f5-42f7-b398-1271e75d8009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:01 compute-0 systemd-udevd[286262]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.963 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1aead9ed-4c54-46bf-a8a5-45ce19b8f928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:01.967 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[20925380-dc07-4bff-beb7-2b84343a8fad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:01 compute-0 NetworkManager[48954]: <info>  [1769443141.9959] device (tap6b13b2b9-50): carrier: link connected
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.005 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0a5745-af94-40fe-8259-f9d753dbfd8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1391048198' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.023 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0c600ebe-b928-43fe-a635-59af759ab483]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b13b2b9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:b7:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463280, 'reachable_time': 19781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286295, 'error': None, 'target': 'ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.031 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.045 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fea96120-ad08-42b6-8fe5-3679ea58751e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb2:b7bf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463280, 'tstamp': 463280}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286297, 'error': None, 'target': 'ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.058 239969 DEBUG nova.storage.rbd_utils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] rbd image 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.064 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.067 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a006f312-8dab-4e81-a7e1-5a514074883c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6b13b2b9-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b2:b7:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 126], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463280, 'reachable_time': 19781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286313, 'error': None, 'target': 'ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.103 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[51fc24e1-ef2d-42a6-93e1-c1e4579ca199]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.171 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[413e242a-da9f-4c90-b912-618ad29faaf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.173 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b13b2b9-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.173 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.173 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b13b2b9-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:02 compute-0 NetworkManager[48954]: <info>  [1769443142.1765] manager: (tap6b13b2b9-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Jan 26 15:59:02 compute-0 kernel: tap6b13b2b9-50: entered promiscuous mode
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.180 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6b13b2b9-50, col_values=(('external_ids', {'iface-id': 'b7b929d8-5072-471d-b3c7-2631bc58e403'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.181 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:02 compute-0 ovn_controller[146046]: 2026-01-26T15:59:02Z|00413|binding|INFO|Releasing lport b7b929d8-5072-471d-b3c7-2631bc58e403 from this chassis (sb_readonly=0)
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.182 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.183 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6b13b2b9-56af-4413-8ced-0f8ccf249f76.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6b13b2b9-56af-4413-8ced-0f8ccf249f76.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.184 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cd545c19-9251-43df-b760-981348ddfc80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.185 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-6b13b2b9-56af-4413-8ced-0f8ccf249f76
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/6b13b2b9-56af-4413-8ced-0f8ccf249f76.pid.haproxy
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 6b13b2b9-56af-4413-8ced-0f8ccf249f76
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:59:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:02.186 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76', 'env', 'PROCESS_TAG=haproxy-6b13b2b9-56af-4413-8ced-0f8ccf249f76', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6b13b2b9-56af-4413-8ced-0f8ccf249f76.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.198 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.546 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443142.5457137, 5a8e158a-5615-4acc-89e3-48b6dbf159c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.546 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] VM Started (Lifecycle Event)
Jan 26 15:59:02 compute-0 podman[286407]: 2026-01-26 15:59:02.580787114 +0000 UTC m=+0.058788834 container create 0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.590 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.599 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443142.54587, 5a8e158a-5615-4acc-89e3-48b6dbf159c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.599 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] VM Paused (Lifecycle Event)
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.623 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:02 compute-0 systemd[1]: Started libpod-conmon-0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29.scope.
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.628 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:02 compute-0 podman[286407]: 2026-01-26 15:59:02.550203774 +0000 UTC m=+0.028205524 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.652 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:59:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da45a331572941f7213b16c39b1e6ce906deb08d421260f5b30d8ca76211daf1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:59:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028740451' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:02 compute-0 podman[286407]: 2026-01-26 15:59:02.671590003 +0000 UTC m=+0.149591753 container init 0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 15:59:02 compute-0 podman[286407]: 2026-01-26 15:59:02.678126853 +0000 UTC m=+0.156128573 container start 0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.691 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.692 239969 DEBUG nova.virt.libvirt.vif [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:58:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1021106214',display_name='tempest-ServerGroupTestJSON-server-1021106214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1021106214',id=53,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0afaeef8913647ed98f86bf42cba01d5',ramdisk_id='',reservation_id='r-bc3bmr8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-328223140',owner_user_name='tempest-ServerGroupTestJSON-328223140-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:58:58Z,user_data=None,user_id='244974d3583149b1aa203f20c9fe40f7',uuid=82d2c240-6e7e-4cc1-a311-2bf4fea69b85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.692 239969 DEBUG nova.network.os_vif_util [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Converting VIF {"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.693 239969 DEBUG nova.network.os_vif_util [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:18:04,bridge_name='br-int',has_traffic_filtering=True,id=773a2e95-3163-4978-8c27-a987d2f1e3f9,network=Network(032a6c56-95c9-4ce5-8343-dc29adfd3819),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap773a2e95-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.694 239969 DEBUG nova.objects.instance [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 82d2c240-6e7e-4cc1-a311-2bf4fea69b85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:02 compute-0 neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76[286423]: [NOTICE]   (286429) : New worker (286431) forked
Jan 26 15:59:02 compute-0 neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76[286423]: [NOTICE]   (286429) : Loading success.
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.709 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <uuid>82d2c240-6e7e-4cc1-a311-2bf4fea69b85</uuid>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <name>instance-00000035</name>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerGroupTestJSON-server-1021106214</nova:name>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:59:01</nova:creationTime>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <nova:user uuid="244974d3583149b1aa203f20c9fe40f7">tempest-ServerGroupTestJSON-328223140-project-member</nova:user>
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <nova:project uuid="0afaeef8913647ed98f86bf42cba01d5">tempest-ServerGroupTestJSON-328223140</nova:project>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <nova:port uuid="773a2e95-3163-4978-8c27-a987d2f1e3f9">
Jan 26 15:59:02 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <system>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <entry name="serial">82d2c240-6e7e-4cc1-a311-2bf4fea69b85</entry>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <entry name="uuid">82d2c240-6e7e-4cc1-a311-2bf4fea69b85</entry>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     </system>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <os>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   </os>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <features>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   </features>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk">
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk.config">
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:81:18:04"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <target dev="tap773a2e95-31"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85/console.log" append="off"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <video>
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     </video>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:59:02 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:59:02 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:59:02 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:59:02 compute-0 nova_compute[239965]: </domain>
Jan 26 15:59:02 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.714 239969 DEBUG nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Preparing to wait for external event network-vif-plugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.715 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.715 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.716 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.716 239969 DEBUG nova.virt.libvirt.vif [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:58:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1021106214',display_name='tempest-ServerGroupTestJSON-server-1021106214',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1021106214',id=53,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0afaeef8913647ed98f86bf42cba01d5',ramdisk_id='',reservation_id='r-bc3bmr8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerGroupTestJSON-328223140',owner_user_name='tempest-ServerGroupTestJSON-328223140-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:58:58Z,user_data=None,user_id='244974d3583149b1aa203f20c9fe40f7',uuid=82d2c240-6e7e-4cc1-a311-2bf4fea69b85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.717 239969 DEBUG nova.network.os_vif_util [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Converting VIF {"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.718 239969 DEBUG nova.network.os_vif_util [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:18:04,bridge_name='br-int',has_traffic_filtering=True,id=773a2e95-3163-4978-8c27-a987d2f1e3f9,network=Network(032a6c56-95c9-4ce5-8343-dc29adfd3819),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap773a2e95-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.718 239969 DEBUG os_vif [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:18:04,bridge_name='br-int',has_traffic_filtering=True,id=773a2e95-3163-4978-8c27-a987d2f1e3f9,network=Network(032a6c56-95c9-4ce5-8343-dc29adfd3819),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap773a2e95-31') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.719 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.719 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.720 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.722 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.722 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap773a2e95-31, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.723 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap773a2e95-31, col_values=(('external_ids', {'iface-id': '773a2e95-3163-4978-8c27-a987d2f1e3f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:81:18:04', 'vm-uuid': '82d2c240-6e7e-4cc1-a311-2bf4fea69b85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:02 compute-0 NetworkManager[48954]: <info>  [1769443142.7258] manager: (tap773a2e95-31): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.724 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.728 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.731 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.733 239969 INFO os_vif [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:18:04,bridge_name='br-int',has_traffic_filtering=True,id=773a2e95-3163-4978-8c27-a987d2f1e3f9,network=Network(032a6c56-95c9-4ce5-8343-dc29adfd3819),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap773a2e95-31')
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.785 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.786 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.786 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] No VIF found with MAC fa:16:3e:81:18:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.786 239969 INFO nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Using config drive
Jan 26 15:59:02 compute-0 nova_compute[239965]: 2026-01-26 15:59:02.811 239969 DEBUG nova.storage.rbd_utils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] rbd image 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:02 compute-0 ceph-mon[75140]: pgmap v1377: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 5.7 MiB/s wr, 113 op/s
Jan 26 15:59:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1391048198' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2028740451' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.085 239969 INFO nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Creating config drive at /var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85/disk.config
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.092 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc91euc0_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.229 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc91euc0_" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.263 239969 DEBUG nova.storage.rbd_utils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] rbd image 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.268 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85/disk.config 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.309 239969 DEBUG nova.network.neutron [req-5a976b06-a2df-4bc8-a716-496060b36433 req-93c83c32-44ee-4e94-85f2-9c4f39f021cf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Updated VIF entry in instance network info cache for port 773a2e95-3163-4978-8c27-a987d2f1e3f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.310 239969 DEBUG nova.network.neutron [req-5a976b06-a2df-4bc8-a716-496060b36433 req-93c83c32-44ee-4e94-85f2-9c4f39f021cf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Updating instance_info_cache with network_info: [{"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.328 239969 DEBUG oslo_concurrency.lockutils [req-5a976b06-a2df-4bc8-a716-496060b36433 req-93c83c32-44ee-4e94-85f2-9c4f39f021cf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-82d2c240-6e7e-4cc1-a311-2bf4fea69b85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.433 239969 DEBUG nova.compute.manager [req-d0ff893a-55cb-4acb-a2db-1147d9203a60 req-52550ca0-7aa2-4b48-aaab-e7aa9c7b8d18 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Received event network-vif-plugged-3271e8c2-ab59-4211-b954-f7747fedbb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.433 239969 DEBUG oslo_concurrency.lockutils [req-d0ff893a-55cb-4acb-a2db-1147d9203a60 req-52550ca0-7aa2-4b48-aaab-e7aa9c7b8d18 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.434 239969 DEBUG oslo_concurrency.lockutils [req-d0ff893a-55cb-4acb-a2db-1147d9203a60 req-52550ca0-7aa2-4b48-aaab-e7aa9c7b8d18 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.434 239969 DEBUG oslo_concurrency.lockutils [req-d0ff893a-55cb-4acb-a2db-1147d9203a60 req-52550ca0-7aa2-4b48-aaab-e7aa9c7b8d18 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.434 239969 DEBUG nova.compute.manager [req-d0ff893a-55cb-4acb-a2db-1147d9203a60 req-52550ca0-7aa2-4b48-aaab-e7aa9c7b8d18 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Processing event network-vif-plugged-3271e8c2-ab59-4211-b954-f7747fedbb1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.435 239969 DEBUG nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.439 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443143.4391031, 5a8e158a-5615-4acc-89e3-48b6dbf159c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.440 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] VM Resumed (Lifecycle Event)
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.442 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.445 239969 INFO nova.virt.libvirt.driver [-] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Instance spawned successfully.
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.445 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.460 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.467 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.471 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.472 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.472 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.473 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.473 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.474 239969 DEBUG nova.virt.libvirt.driver [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.484 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.528 239969 INFO nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Took 6.99 seconds to spawn the instance on the hypervisor.
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.529 239969 DEBUG nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.598 239969 INFO nova.compute.manager [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Took 8.15 seconds to build instance.
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.601 239969 DEBUG oslo_concurrency.processutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85/disk.config 82d2c240-6e7e-4cc1-a311-2bf4fea69b85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.602 239969 INFO nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Deleting local config drive /var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85/disk.config because it was imported into RBD.
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.618 239969 DEBUG oslo_concurrency.lockutils [None req-7da76726-b447-4546-ad4d-eccd63610fae 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.419s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:03 compute-0 kernel: tap773a2e95-31: entered promiscuous mode
Jan 26 15:59:03 compute-0 NetworkManager[48954]: <info>  [1769443143.6528] manager: (tap773a2e95-31): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Jan 26 15:59:03 compute-0 systemd-udevd[286283]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.653 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:03 compute-0 ovn_controller[146046]: 2026-01-26T15:59:03Z|00414|binding|INFO|Claiming lport 773a2e95-3163-4978-8c27-a987d2f1e3f9 for this chassis.
Jan 26 15:59:03 compute-0 ovn_controller[146046]: 2026-01-26T15:59:03Z|00415|binding|INFO|773a2e95-3163-4978-8c27-a987d2f1e3f9: Claiming fa:16:3e:81:18:04 10.100.0.5
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.661 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:18:04 10.100.0.5'], port_security=['fa:16:3e:81:18:04 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '82d2c240-6e7e-4cc1-a311-2bf4fea69b85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-032a6c56-95c9-4ce5-8343-dc29adfd3819', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0afaeef8913647ed98f86bf42cba01d5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72314c14-a597-4ad4-a884-4cf79dce484c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e05c9413-0c23-4741-a917-e9a18b3c1e0c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=773a2e95-3163-4978-8c27-a987d2f1e3f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.663 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 773a2e95-3163-4978-8c27-a987d2f1e3f9 in datapath 032a6c56-95c9-4ce5-8343-dc29adfd3819 bound to our chassis
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.665 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 032a6c56-95c9-4ce5-8343-dc29adfd3819
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.670 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:03 compute-0 ovn_controller[146046]: 2026-01-26T15:59:03Z|00416|binding|INFO|Setting lport 773a2e95-3163-4978-8c27-a987d2f1e3f9 ovn-installed in OVS
Jan 26 15:59:03 compute-0 ovn_controller[146046]: 2026-01-26T15:59:03Z|00417|binding|INFO|Setting lport 773a2e95-3163-4978-8c27-a987d2f1e3f9 up in Southbound
Jan 26 15:59:03 compute-0 nova_compute[239965]: 2026-01-26 15:59:03.673 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:03 compute-0 NetworkManager[48954]: <info>  [1769443143.6775] device (tap773a2e95-31): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:59:03 compute-0 NetworkManager[48954]: <info>  [1769443143.6787] device (tap773a2e95-31): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.677 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[140b9212-96d1-4f26-becd-1f8ce1d76387]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.678 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap032a6c56-91 in ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.680 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap032a6c56-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.681 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd49ca8-f9bc-42a6-98a8-c8748993023e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.682 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c0a7e7-2591-4026-a59c-12cc6fb9be16]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.698 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[6fcc1217-6dd0-420e-88dc-0519d41065f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 systemd-machined[208061]: New machine qemu-59-instance-00000035.
Jan 26 15:59:03 compute-0 systemd[1]: Started Virtual Machine qemu-59-instance-00000035.
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.719 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a1ddfba1-6d44-4e7d-8502-894727ba2105]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.757 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6776ac46-96ce-41bd-9cf3-1784b82d4100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.767 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb8c6e2-8f95-479e-8d7f-d3ae9eaf28a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 NetworkManager[48954]: <info>  [1769443143.7681] manager: (tap032a6c56-90): new Veth device (/org/freedesktop/NetworkManager/Devices/193)
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.807 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[abe3e6dd-cea5-466a-9b48-e718ce0795da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.812 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c5118c5a-e5d6-4e50-a0fb-ce3845db964e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 NetworkManager[48954]: <info>  [1769443143.8390] device (tap032a6c56-90): carrier: link connected
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.846 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bb208639-46df-4008-b03e-0827bf4db856]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 5.5 MiB/s wr, 103 op/s
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.864 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[92e4e378-c342-42e2-8672-c31e599c2da7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap032a6c56-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:48:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463464, 'reachable_time': 24310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286531, 'error': None, 'target': 'ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.884 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3d1dbd-ad52-4421-a6fd-db072037c170]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea7:48a1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 463464, 'tstamp': 463464}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286532, 'error': None, 'target': 'ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.905 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[312ada33-02e6-408c-8651-1f496f83b407]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap032a6c56-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a7:48:a1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 128], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463464, 'reachable_time': 24310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 286533, 'error': None, 'target': 'ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:03.945 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfc23bf-515e-41d9-aa27-33752d73bd41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.028 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca02e0b-806e-4654-b757-e1968f90ccf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.030 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap032a6c56-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.032 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.033 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap032a6c56-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:04 compute-0 kernel: tap032a6c56-90: entered promiscuous mode
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.035 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 NetworkManager[48954]: <info>  [1769443144.0390] manager: (tap032a6c56-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.042 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap032a6c56-90, col_values=(('external_ids', {'iface-id': 'f8276128-9d92-4cee-9abd-7bd1e0fffd41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.043 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 ovn_controller[146046]: 2026-01-26T15:59:04Z|00418|binding|INFO|Releasing lport f8276128-9d92-4cee-9abd-7bd1e0fffd41 from this chassis (sb_readonly=0)
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.044 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.046 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/032a6c56-95c9-4ce5-8343-dc29adfd3819.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/032a6c56-95c9-4ce5-8343-dc29adfd3819.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.047 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[167c3f8c-9a2f-4b0e-9ff1-2895df77861b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.048 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-032a6c56-95c9-4ce5-8343-dc29adfd3819
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/032a6c56-95c9-4ce5-8343-dc29adfd3819.pid.haproxy
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 032a6c56-95c9-4ce5-8343-dc29adfd3819
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.049 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819', 'env', 'PROCESS_TAG=haproxy-032a6c56-95c9-4ce5-8343-dc29adfd3819', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/032a6c56-95c9-4ce5-8343-dc29adfd3819.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.063 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.261 239969 DEBUG oslo_concurrency.lockutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquiring lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.261 239969 DEBUG oslo_concurrency.lockutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.261 239969 DEBUG oslo_concurrency.lockutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquiring lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.261 239969 DEBUG oslo_concurrency.lockutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.262 239969 DEBUG oslo_concurrency.lockutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.263 239969 INFO nova.compute.manager [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Terminating instance
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.263 239969 DEBUG nova.compute.manager [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:59:04 compute-0 kernel: tap3271e8c2-ab (unregistering): left promiscuous mode
Jan 26 15:59:04 compute-0 NetworkManager[48954]: <info>  [1769443144.3783] device (tap3271e8c2-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:59:04 compute-0 ovn_controller[146046]: 2026-01-26T15:59:04Z|00419|binding|INFO|Releasing lport 3271e8c2-ab59-4211-b954-f7747fedbb1c from this chassis (sb_readonly=0)
Jan 26 15:59:04 compute-0 ovn_controller[146046]: 2026-01-26T15:59:04Z|00420|binding|INFO|Setting lport 3271e8c2-ab59-4211-b954-f7747fedbb1c down in Southbound
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.380 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 ovn_controller[146046]: 2026-01-26T15:59:04Z|00421|binding|INFO|Removing iface tap3271e8c2-ab ovn-installed in OVS
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.384 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.391 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:19:ea:d8 10.100.0.3'], port_security=['fa:16:3e:19:ea:d8 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '5a8e158a-5615-4acc-89e3-48b6dbf159c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6b13b2b9-56af-4413-8ced-0f8ccf249f76', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '47c190a68c774b85bbacc74c5c5eb036', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f3330e28-3804-4132-9262-7b03b784d26e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=272e110a-91e0-446d-bab3-0220a2b903ae, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3271e8c2-ab59-4211-b954-f7747fedbb1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.411 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Deactivated successfully.
Jan 26 15:59:04 compute-0 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000034.scope: Consumed 1.408s CPU time.
Jan 26 15:59:04 compute-0 systemd-machined[208061]: Machine qemu-58-instance-00000034 terminated.
Jan 26 15:59:04 compute-0 podman[286609]: 2026-01-26 15:59:04.478766279 +0000 UTC m=+0.055845372 container create c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.490 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443144.486529, 82d2c240-6e7e-4cc1-a311-2bf4fea69b85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.491 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] VM Started (Lifecycle Event)
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.509 239969 INFO nova.virt.libvirt.driver [-] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Instance destroyed successfully.
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.511 239969 DEBUG nova.objects.instance [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lazy-loading 'resources' on Instance uuid 5a8e158a-5615-4acc-89e3-48b6dbf159c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.515 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.519 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443144.486676, 82d2c240-6e7e-4cc1-a311-2bf4fea69b85 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.520 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] VM Paused (Lifecycle Event)
Jan 26 15:59:04 compute-0 systemd[1]: Started libpod-conmon-c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976.scope.
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.535 239969 DEBUG nova.virt.libvirt.vif [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:58:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-InstanceActionsV221TestJSON-server-964626783',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsv221testjson-server-964626783',id=52,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:59:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='47c190a68c774b85bbacc74c5c5eb036',ramdisk_id='',reservation_id='r-k8vqae0m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_p
roject_name='tempest-InstanceActionsV221TestJSON-996490364',owner_user_name='tempest-InstanceActionsV221TestJSON-996490364-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:59:03Z,user_data=None,user_id='090e697091a94483b6e27027be4f5cdf',uuid=5a8e158a-5615-4acc-89e3-48b6dbf159c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.536 239969 DEBUG nova.network.os_vif_util [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Converting VIF {"id": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "address": "fa:16:3e:19:ea:d8", "network": {"id": "6b13b2b9-56af-4413-8ced-0f8ccf249f76", "bridge": "br-int", "label": "tempest-InstanceActionsV221TestJSON-425562732-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "47c190a68c774b85bbacc74c5c5eb036", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3271e8c2-ab", "ovs_interfaceid": "3271e8c2-ab59-4211-b954-f7747fedbb1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.537 239969 DEBUG nova.network.os_vif_util [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:19:ea:d8,bridge_name='br-int',has_traffic_filtering=True,id=3271e8c2-ab59-4211-b954-f7747fedbb1c,network=Network(6b13b2b9-56af-4413-8ced-0f8ccf249f76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3271e8c2-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.537 239969 DEBUG os_vif [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:ea:d8,bridge_name='br-int',has_traffic_filtering=True,id=3271e8c2-ab59-4211-b954-f7747fedbb1c,network=Network(6b13b2b9-56af-4413-8ced-0f8ccf249f76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3271e8c2-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.540 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.540 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3271e8c2-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.542 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.543 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 podman[286609]: 2026-01-26 15:59:04.44784659 +0000 UTC m=+0.024925723 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.545 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.549 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.551 239969 INFO os_vif [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:19:ea:d8,bridge_name='br-int',has_traffic_filtering=True,id=3271e8c2-ab59-4211-b954-f7747fedbb1c,network=Network(6b13b2b9-56af-4413-8ced-0f8ccf249f76),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3271e8c2-ab')
Jan 26 15:59:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:59:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99cf857cea5a083b92d778cabba88b4cbec7358f3118e29aa15ef1cd316d80c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:59:04 compute-0 nova_compute[239965]: 2026-01-26 15:59:04.606 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:04 compute-0 podman[286609]: 2026-01-26 15:59:04.650806102 +0000 UTC m=+0.227885225 container init c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 15:59:04 compute-0 podman[286609]: 2026-01-26 15:59:04.658389518 +0000 UTC m=+0.235468611 container start c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:59:04 compute-0 neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819[286635]: [NOTICE]   (286657) : New worker (286659) forked
Jan 26 15:59:04 compute-0 neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819[286635]: [NOTICE]   (286657) : Loading success.
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.738 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3271e8c2-ab59-4211-b954-f7747fedbb1c in datapath 6b13b2b9-56af-4413-8ced-0f8ccf249f76 unbound from our chassis
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.740 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6b13b2b9-56af-4413-8ced-0f8ccf249f76, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.741 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[65c1e782-c962-435f-8142-db769cded515]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:04.741 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76 namespace which is not needed anymore
Jan 26 15:59:05 compute-0 ceph-mon[75140]: pgmap v1378: 305 pgs: 305 active+clean; 260 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 270 KiB/s rd, 5.5 MiB/s wr, 103 op/s
Jan 26 15:59:05 compute-0 neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76[286423]: [NOTICE]   (286429) : haproxy version is 2.8.14-c23fe91
Jan 26 15:59:05 compute-0 neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76[286423]: [NOTICE]   (286429) : path to executable is /usr/sbin/haproxy
Jan 26 15:59:05 compute-0 neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76[286423]: [WARNING]  (286429) : Exiting Master process...
Jan 26 15:59:05 compute-0 neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76[286423]: [WARNING]  (286429) : Exiting Master process...
Jan 26 15:59:05 compute-0 neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76[286423]: [ALERT]    (286429) : Current worker (286431) exited with code 143 (Terminated)
Jan 26 15:59:05 compute-0 neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76[286423]: [WARNING]  (286429) : All workers exited. Exiting... (0)
Jan 26 15:59:05 compute-0 systemd[1]: libpod-0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29.scope: Deactivated successfully.
Jan 26 15:59:05 compute-0 podman[286684]: 2026-01-26 15:59:05.116109472 +0000 UTC m=+0.277392129 container died 0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:59:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29-userdata-shm.mount: Deactivated successfully.
Jan 26 15:59:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-da45a331572941f7213b16c39b1e6ce906deb08d421260f5b30d8ca76211daf1-merged.mount: Deactivated successfully.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.178 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:05 compute-0 podman[286684]: 2026-01-26 15:59:05.218933797 +0000 UTC m=+0.380216464 container cleanup 0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 15:59:05 compute-0 systemd[1]: libpod-conmon-0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29.scope: Deactivated successfully.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.237 239969 INFO nova.virt.libvirt.driver [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Deleting instance files /var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5_del
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.238 239969 INFO nova.virt.libvirt.driver [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Deletion of /var/lib/nova/instances/5a8e158a-5615-4acc-89e3-48b6dbf159c5_del complete
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.290 239969 INFO nova.compute.manager [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Took 1.03 seconds to destroy the instance on the hypervisor.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.290 239969 DEBUG oslo.service.loopingcall [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.291 239969 DEBUG nova.compute.manager [-] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.291 239969 DEBUG nova.network.neutron [-] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:59:05 compute-0 podman[286714]: 2026-01-26 15:59:05.298169351 +0000 UTC m=+0.049287251 container remove 0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:59:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:05.305 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[55c0db19-3c98-41c5-8b8e-c94c037b20c1]: (4, ('Mon Jan 26 03:59:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76 (0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29)\n0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29\nMon Jan 26 03:59:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76 (0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29)\n0a55ee32c6b231d369969bbdcffcc3c833ea7ddab868609c596dd575c4a90b29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:05.327 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7307b4ec-239d-4990-bae0-ad47ac40af85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:05.330 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b13b2b9-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.332 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:05 compute-0 kernel: tap6b13b2b9-50: left promiscuous mode
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.354 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:05.358 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ff697478-52c3-423b-b6ee-666cffe693c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:05.369 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cd430cd3-ce97-420f-8b83-abc3f004f181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:05.370 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[79b7baa8-8a64-46c0-9707-f5c972696a10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:05.387 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d80e9e4-5594-48c5-80b3-1cacfd18e9f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463271, 'reachable_time': 43873, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286727, 'error': None, 'target': 'ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:05.389 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6b13b2b9-56af-4413-8ced-0f8ccf249f76 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:59:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:05.390 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[fc6206c1-1de8-4c7a-b661-76fa9a85b155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d6b13b2b9\x2d56af\x2d4413\x2d8ced\x2d0f8ccf249f76.mount: Deactivated successfully.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.524 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Received event network-vif-plugged-3271e8c2-ab59-4211-b954-f7747fedbb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.524 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.524 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.524 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.524 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] No waiting events found dispatching network-vif-plugged-3271e8c2-ab59-4211-b954-f7747fedbb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.525 239969 WARNING nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Received unexpected event network-vif-plugged-3271e8c2-ab59-4211-b954-f7747fedbb1c for instance with vm_state active and task_state deleting.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.525 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Received event network-vif-plugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.525 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.525 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.525 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.525 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Processing event network-vif-plugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.526 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Received event network-vif-plugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.526 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.526 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.526 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.526 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] No waiting events found dispatching network-vif-plugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.526 239969 WARNING nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Received unexpected event network-vif-plugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 for instance with vm_state building and task_state spawning.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.527 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Received event network-vif-unplugged-3271e8c2-ab59-4211-b954-f7747fedbb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.527 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.527 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.527 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.527 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] No waiting events found dispatching network-vif-unplugged-3271e8c2-ab59-4211-b954-f7747fedbb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.527 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Received event network-vif-unplugged-3271e8c2-ab59-4211-b954-f7747fedbb1c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.527 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Received event network-vif-plugged-3271e8c2-ab59-4211-b954-f7747fedbb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.528 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.528 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.528 239969 DEBUG oslo_concurrency.lockutils [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.528 239969 DEBUG nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] No waiting events found dispatching network-vif-plugged-3271e8c2-ab59-4211-b954-f7747fedbb1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.528 239969 WARNING nova.compute.manager [req-1e5d2585-c26f-403c-8d56-7037b610e438 req-5762f2e7-e924-4a9c-aa9c-f1957e4802e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Received unexpected event network-vif-plugged-3271e8c2-ab59-4211-b954-f7747fedbb1c for instance with vm_state active and task_state deleting.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.529 239969 DEBUG nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.533 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443145.5333614, 82d2c240-6e7e-4cc1-a311-2bf4fea69b85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.533 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] VM Resumed (Lifecycle Event)
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.535 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.537 239969 INFO nova.virt.libvirt.driver [-] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Instance spawned successfully.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.538 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.557 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.561 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.561 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.561 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.562 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.562 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.562 239969 DEBUG nova.virt.libvirt.driver [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.566 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.593 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.624 239969 INFO nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Took 7.16 seconds to spawn the instance on the hypervisor.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.625 239969 DEBUG nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.685 239969 INFO nova.compute.manager [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Took 8.32 seconds to build instance.
Jan 26 15:59:05 compute-0 nova_compute[239965]: 2026-01-26 15:59:05.701 239969 DEBUG oslo_concurrency.lockutils [None req-cdc8ccfc-90f4-4daa-8af4-797912f9a3f9 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.432s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 305 active+clean; 243 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 5.5 MiB/s wr, 136 op/s
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.111682) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443146111747, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2144, "num_deletes": 254, "total_data_size": 3366164, "memory_usage": 3423440, "flush_reason": "Manual Compaction"}
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443146133372, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3301513, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26270, "largest_seqno": 28413, "table_properties": {"data_size": 3291802, "index_size": 6077, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20842, "raw_average_key_size": 20, "raw_value_size": 3272054, "raw_average_value_size": 3242, "num_data_blocks": 267, "num_entries": 1009, "num_filter_entries": 1009, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769442937, "oldest_key_time": 1769442937, "file_creation_time": 1769443146, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21740 microseconds, and 6845 cpu microseconds.
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.134 239969 DEBUG nova.network.neutron [-] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.133425) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3301513 bytes OK
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.133451) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.135969) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.136079) EVENT_LOG_v1 {"time_micros": 1769443146136064, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.136139) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3357034, prev total WAL file size 3357034, number of live WAL files 2.
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.137555) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3224KB)], [59(7500KB)]
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443146137638, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10982409, "oldest_snapshot_seqno": -1}
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.159 239969 INFO nova.compute.manager [-] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Took 0.87 seconds to deallocate network for instance.
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5521 keys, 9274736 bytes, temperature: kUnknown
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443146200819, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9274736, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9235523, "index_size": 24319, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 137300, "raw_average_key_size": 24, "raw_value_size": 9134018, "raw_average_value_size": 1654, "num_data_blocks": 994, "num_entries": 5521, "num_filter_entries": 5521, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769443146, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.201319) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9274736 bytes
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.202545) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 172.9 rd, 146.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.3 +0.0 blob) out(8.8 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 6044, records dropped: 523 output_compression: NoCompression
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.202565) EVENT_LOG_v1 {"time_micros": 1769443146202556, "job": 32, "event": "compaction_finished", "compaction_time_micros": 63523, "compaction_time_cpu_micros": 21340, "output_level": 6, "num_output_files": 1, "total_output_size": 9274736, "num_input_records": 6044, "num_output_records": 5521, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443146203311, "job": 32, "event": "table_file_deletion", "file_number": 61}
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443146204560, "job": 32, "event": "table_file_deletion", "file_number": 59}
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.137436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.204662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.204671) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.204673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.204675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:59:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-15:59:06.204677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.206 239969 DEBUG oslo_concurrency.lockutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.207 239969 DEBUG oslo_concurrency.lockutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.285 239969 DEBUG oslo_concurrency.processutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.342 239969 DEBUG nova.compute.manager [req-be88dde2-68a7-47b0-a814-b3c9ad594efe req-2b91e2f5-7a6e-48ba-a5ae-36f939caf16e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Received event network-vif-deleted-3271e8c2-ab59-4211-b954-f7747fedbb1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.508 239969 DEBUG oslo_concurrency.lockutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.509 239969 DEBUG oslo_concurrency.lockutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.514 239969 DEBUG oslo_concurrency.lockutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.515 239969 DEBUG oslo_concurrency.lockutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.515 239969 DEBUG oslo_concurrency.lockutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.518 239969 INFO nova.compute.manager [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Terminating instance
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.521 239969 DEBUG nova.compute.manager [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:59:06 compute-0 kernel: tap773a2e95-31 (unregistering): left promiscuous mode
Jan 26 15:59:06 compute-0 NetworkManager[48954]: <info>  [1769443146.5664] device (tap773a2e95-31): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:59:06 compute-0 ovn_controller[146046]: 2026-01-26T15:59:06Z|00422|binding|INFO|Releasing lport 773a2e95-3163-4978-8c27-a987d2f1e3f9 from this chassis (sb_readonly=0)
Jan 26 15:59:06 compute-0 ovn_controller[146046]: 2026-01-26T15:59:06Z|00423|binding|INFO|Setting lport 773a2e95-3163-4978-8c27-a987d2f1e3f9 down in Southbound
Jan 26 15:59:06 compute-0 ovn_controller[146046]: 2026-01-26T15:59:06Z|00424|binding|INFO|Removing iface tap773a2e95-31 ovn-installed in OVS
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.579 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.584 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:81:18:04 10.100.0.5'], port_security=['fa:16:3e:81:18:04 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '82d2c240-6e7e-4cc1-a311-2bf4fea69b85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-032a6c56-95c9-4ce5-8343-dc29adfd3819', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0afaeef8913647ed98f86bf42cba01d5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72314c14-a597-4ad4-a884-4cf79dce484c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e05c9413-0c23-4741-a917-e9a18b3c1e0c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=773a2e95-3163-4978-8c27-a987d2f1e3f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.585 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 773a2e95-3163-4978-8c27-a987d2f1e3f9 in datapath 032a6c56-95c9-4ce5-8343-dc29adfd3819 unbound from our chassis
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.586 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 032a6c56-95c9-4ce5-8343-dc29adfd3819, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.587 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b175cef9-3fed-4666-a06c-ee56fc2bbb9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.588 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819 namespace which is not needed anymore
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.601 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:06 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Deactivated successfully.
Jan 26 15:59:06 compute-0 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000035.scope: Consumed 1.604s CPU time.
Jan 26 15:59:06 compute-0 systemd-machined[208061]: Machine qemu-59-instance-00000035 terminated.
Jan 26 15:59:06 compute-0 neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819[286635]: [NOTICE]   (286657) : haproxy version is 2.8.14-c23fe91
Jan 26 15:59:06 compute-0 neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819[286635]: [NOTICE]   (286657) : path to executable is /usr/sbin/haproxy
Jan 26 15:59:06 compute-0 neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819[286635]: [WARNING]  (286657) : Exiting Master process...
Jan 26 15:59:06 compute-0 neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819[286635]: [ALERT]    (286657) : Current worker (286659) exited with code 143 (Terminated)
Jan 26 15:59:06 compute-0 neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819[286635]: [WARNING]  (286657) : All workers exited. Exiting... (0)
Jan 26 15:59:06 compute-0 systemd[1]: libpod-c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976.scope: Deactivated successfully.
Jan 26 15:59:06 compute-0 podman[286773]: 2026-01-26 15:59:06.750782305 +0000 UTC m=+0.063124490 container died c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.765 239969 INFO nova.virt.libvirt.driver [-] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Instance destroyed successfully.
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.766 239969 DEBUG nova.objects.instance [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lazy-loading 'resources' on Instance uuid 82d2c240-6e7e-4cc1-a311-2bf4fea69b85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976-userdata-shm.mount: Deactivated successfully.
Jan 26 15:59:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-99cf857cea5a083b92d778cabba88b4cbec7358f3118e29aa15ef1cd316d80c1-merged.mount: Deactivated successfully.
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.784 239969 DEBUG nova.virt.libvirt.vif [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:58:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerGroupTestJSON-server-1021106214',display_name='tempest-ServerGroupTestJSON-server-1021106214',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servergrouptestjson-server-1021106214',id=53,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:59:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0afaeef8913647ed98f86bf42cba01d5',ramdisk_id='',reservation_id='r-bc3bmr8y',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerGroupTestJSON-328223140',owner_user_name='tempest-ServerGroupTestJSON-328223140-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:59:05Z,user_data=None,user_id='244974d3583149b1aa203f20c9fe40f7',uuid=82d2c240-6e7e-4cc1-a311-2bf4fea69b85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.786 239969 DEBUG nova.network.os_vif_util [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Converting VIF {"id": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "address": "fa:16:3e:81:18:04", "network": {"id": "032a6c56-95c9-4ce5-8343-dc29adfd3819", "bridge": "br-int", "label": "tempest-ServerGroupTestJSON-743378819-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0afaeef8913647ed98f86bf42cba01d5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap773a2e95-31", "ovs_interfaceid": "773a2e95-3163-4978-8c27-a987d2f1e3f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.787 239969 DEBUG nova.network.os_vif_util [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:81:18:04,bridge_name='br-int',has_traffic_filtering=True,id=773a2e95-3163-4978-8c27-a987d2f1e3f9,network=Network(032a6c56-95c9-4ce5-8343-dc29adfd3819),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap773a2e95-31') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.787 239969 DEBUG os_vif [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:18:04,bridge_name='br-int',has_traffic_filtering=True,id=773a2e95-3163-4978-8c27-a987d2f1e3f9,network=Network(032a6c56-95c9-4ce5-8343-dc29adfd3819),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap773a2e95-31') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:06 compute-0 podman[286773]: 2026-01-26 15:59:06.792636553 +0000 UTC m=+0.104978738 container cleanup c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.799 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap773a2e95-31, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.806 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.810 239969 INFO os_vif [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:81:18:04,bridge_name='br-int',has_traffic_filtering=True,id=773a2e95-3163-4978-8c27-a987d2f1e3f9,network=Network(032a6c56-95c9-4ce5-8343-dc29adfd3819),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap773a2e95-31')
Jan 26 15:59:06 compute-0 systemd[1]: libpod-conmon-c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976.scope: Deactivated successfully.
Jan 26 15:59:06 compute-0 podman[286812]: 2026-01-26 15:59:06.872832451 +0000 UTC m=+0.049150668 container remove c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.880 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[73b47a36-4896-492e-8db7-f8df9c6fe2ad]: (4, ('Mon Jan 26 03:59:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819 (c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976)\nc0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976\nMon Jan 26 03:59:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819 (c0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976)\nc0cc27e94d458a2f9a8a133751b83b29355d8590eb45977c3335ead4d32d1976\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.883 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d10ed2b2-54b6-4e07-a953-640bec485b56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.884 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap032a6c56-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:06 compute-0 kernel: tap032a6c56-90: left promiscuous mode
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.888 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:06 compute-0 nova_compute[239965]: 2026-01-26 15:59:06.903 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.904 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e5135c62-4cff-4f86-9b03-7150fa8fd96c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.920 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7efafc29-9f81-48cd-a1a4-ef8d723b4b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.922 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[99edec8c-e0a7-412f-8b6d-125afb406f6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.944 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0b65fa8c-1e1c-41c1-9fe4-3c5325a30be2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 463455, 'reachable_time': 40087, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286846, 'error': None, 'target': 'ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d032a6c56\x2d95c9\x2d4ce5\x2d8343\x2ddc29adfd3819.mount: Deactivated successfully.
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.947 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-032a6c56-95c9-4ce5-8343-dc29adfd3819 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:59:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:06.947 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[fd89f94a-c035-4143-afdb-52737ab338b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676285442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.042 239969 DEBUG oslo_concurrency.processutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.757s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.050 239969 DEBUG nova.compute.provider_tree [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.063 239969 DEBUG nova.scheduler.client.report [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.092 239969 DEBUG oslo_concurrency.lockutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:07 compute-0 ceph-mon[75140]: pgmap v1379: 305 pgs: 305 active+clean; 243 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 545 KiB/s rd, 5.5 MiB/s wr, 136 op/s
Jan 26 15:59:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/676285442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.127 239969 INFO nova.scheduler.client.report [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Deleted allocations for instance 5a8e158a-5615-4acc-89e3-48b6dbf159c5
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.150 239969 INFO nova.virt.libvirt.driver [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Deleting instance files /var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85_del
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.151 239969 INFO nova.virt.libvirt.driver [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Deletion of /var/lib/nova/instances/82d2c240-6e7e-4cc1-a311-2bf4fea69b85_del complete
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.224 239969 DEBUG oslo_concurrency.lockutils [None req-514fa438-61bb-41b1-af84-28d9c3a2eb8a 090e697091a94483b6e27027be4f5cdf 47c190a68c774b85bbacc74c5c5eb036 - - default default] Lock "5a8e158a-5615-4acc-89e3-48b6dbf159c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.229 239969 INFO nova.compute.manager [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.231 239969 DEBUG oslo.service.loopingcall [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.231 239969 DEBUG nova.compute.manager [-] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.231 239969 DEBUG nova.network.neutron [-] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.623 239969 DEBUG nova.compute.manager [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Received event network-vif-unplugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.624 239969 DEBUG oslo_concurrency.lockutils [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.624 239969 DEBUG oslo_concurrency.lockutils [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.624 239969 DEBUG oslo_concurrency.lockutils [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.625 239969 DEBUG nova.compute.manager [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] No waiting events found dispatching network-vif-unplugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.625 239969 DEBUG nova.compute.manager [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Received event network-vif-unplugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.625 239969 DEBUG nova.compute.manager [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Received event network-vif-plugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.625 239969 DEBUG oslo_concurrency.lockutils [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.626 239969 DEBUG oslo_concurrency.lockutils [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.626 239969 DEBUG oslo_concurrency.lockutils [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.627 239969 DEBUG nova.compute.manager [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] No waiting events found dispatching network-vif-plugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.627 239969 WARNING nova.compute.manager [req-0f9dc364-18b5-4905-9eff-9b7504b4bd5f req-4975ed8b-4886-4233-a597-f82518f798b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Received unexpected event network-vif-plugged-773a2e95-3163-4978-8c27-a987d2f1e3f9 for instance with vm_state active and task_state deleting.
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.838 239969 DEBUG nova.network.neutron [-] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.858 239969 INFO nova.compute.manager [-] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Took 0.63 seconds to deallocate network for instance.
Jan 26 15:59:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1380: 305 pgs: 305 active+clean; 243 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 3.6 MiB/s wr, 85 op/s
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.897 239969 DEBUG oslo_concurrency.lockutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.898 239969 DEBUG oslo_concurrency.lockutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:07 compute-0 nova_compute[239965]: 2026-01-26 15:59:07.951 239969 DEBUG oslo_concurrency.processutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:08 compute-0 ceph-mon[75140]: pgmap v1380: 305 pgs: 305 active+clean; 243 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 3.6 MiB/s wr, 85 op/s
Jan 26 15:59:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1615314576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.540 239969 DEBUG oslo_concurrency.processutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.546 239969 DEBUG nova.compute.provider_tree [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.571 239969 DEBUG nova.scheduler.client.report [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.593 239969 DEBUG oslo_concurrency.lockutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.622 239969 INFO nova.scheduler.client.report [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Deleted allocations for instance 82d2c240-6e7e-4cc1-a311-2bf4fea69b85
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.711 239969 DEBUG oslo_concurrency.lockutils [None req-066184a2-c25f-43c4-b27b-d42fa68e71c1 244974d3583149b1aa203f20c9fe40f7 0afaeef8913647ed98f86bf42cba01d5 - - default default] Lock "82d2c240-6e7e-4cc1-a311-2bf4fea69b85" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.742 239969 DEBUG nova.compute.manager [req-5b6635c3-ddb2-4e4b-82b0-ea6142f24333 req-8b11a95a-36a6-4709-84d9-70cd1ec22ba7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Received event network-vif-deleted-773a2e95-3163-4978-8c27-a987d2f1e3f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.929 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "883e64be-1368-4217-b5a2-7353e3045e3c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.929 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:08 compute-0 nova_compute[239965]: 2026-01-26 15:59:08.946 239969 DEBUG nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.026 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.027 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.033 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.034 239969 INFO nova.compute.claims [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:59:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1615314576' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.191 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2179616462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.815 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.824 239969 DEBUG nova.compute.provider_tree [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.847 239969 DEBUG nova.scheduler.client.report [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 305 active+clean; 175 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 160 op/s
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.873 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.874 239969 DEBUG nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.918 239969 DEBUG nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.918 239969 DEBUG nova.network.neutron [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.936 239969 INFO nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:59:09 compute-0 nova_compute[239965]: 2026-01-26 15:59:09.953 239969 DEBUG nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.037 239969 DEBUG nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.038 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.038 239969 INFO nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Creating image(s)
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.062 239969 DEBUG nova.storage.rbd_utils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 883e64be-1368-4217-b5a2-7353e3045e3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.085 239969 DEBUG nova.storage.rbd_utils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 883e64be-1368-4217-b5a2-7353e3045e3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.105 239969 DEBUG nova.storage.rbd_utils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 883e64be-1368-4217-b5a2-7353e3045e3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.109 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.174 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.175 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.175 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.175 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2179616462' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:10 compute-0 ceph-mon[75140]: pgmap v1381: 305 pgs: 305 active+clean; 175 MiB data, 490 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.6 MiB/s wr, 160 op/s
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.207 239969 DEBUG nova.storage.rbd_utils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 883e64be-1368-4217-b5a2-7353e3045e3c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.211 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 883e64be-1368-4217-b5a2-7353e3045e3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.330 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.460 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 883e64be-1368-4217-b5a2-7353e3045e3c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.495 239969 DEBUG nova.policy [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c7cba75e9fd41599b1c9a3388447cdd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aee99e5b6af74088bd848cecc9592e82', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.530 239969 DEBUG nova.storage.rbd_utils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] resizing rbd image 883e64be-1368-4217-b5a2-7353e3045e3c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:59:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.684 239969 DEBUG nova.objects.instance [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'migration_context' on Instance uuid 883e64be-1368-4217-b5a2-7353e3045e3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.700 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.701 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Ensure instance console log exists: /var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.701 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.701 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:10 compute-0 nova_compute[239965]: 2026-01-26 15:59:10.701 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:11 compute-0 ovn_controller[146046]: 2026-01-26T15:59:11Z|00425|binding|INFO|Releasing lport baa5727d-28fe-41d5-8694-960826c92bc9 from this chassis (sb_readonly=0)
Jan 26 15:59:11 compute-0 nova_compute[239965]: 2026-01-26 15:59:11.418 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:11 compute-0 nova_compute[239965]: 2026-01-26 15:59:11.759 239969 DEBUG nova.network.neutron [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Successfully created port: 80f1be22-ff75-43ec-bf00-3b825b1932f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:59:11 compute-0 nova_compute[239965]: 2026-01-26 15:59:11.803 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 195 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 129 op/s
Jan 26 15:59:12 compute-0 ceph-mon[75140]: pgmap v1382: 305 pgs: 305 active+clean; 195 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.2 MiB/s wr, 129 op/s
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.316 239969 DEBUG nova.network.neutron [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Successfully updated port: 80f1be22-ff75-43ec-bf00-3b825b1932f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.340 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "refresh_cache-883e64be-1368-4217-b5a2-7353e3045e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.341 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquired lock "refresh_cache-883e64be-1368-4217-b5a2-7353e3045e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.341 239969 DEBUG nova.network.neutron [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.433 239969 DEBUG nova.compute.manager [req-32adb699-a1c8-4573-9b8e-bbb5c7f71f7c req-f653cf46-3329-4f66-aaab-69a68a275494 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Received event network-changed-80f1be22-ff75-43ec-bf00-3b825b1932f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.434 239969 DEBUG nova.compute.manager [req-32adb699-a1c8-4573-9b8e-bbb5c7f71f7c req-f653cf46-3329-4f66-aaab-69a68a275494 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Refreshing instance network info cache due to event network-changed-80f1be22-ff75-43ec-bf00-3b825b1932f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.434 239969 DEBUG oslo_concurrency.lockutils [req-32adb699-a1c8-4573-9b8e-bbb5c7f71f7c req-f653cf46-3329-4f66-aaab-69a68a275494 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-883e64be-1368-4217-b5a2-7353e3045e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.513 239969 DEBUG nova.objects.instance [None req-06dd81eb-24e9-495b-b665-4b940e4c9035 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lazy-loading 'flavor' on Instance uuid 6b4dbd52-cbad-4d8f-8446-e47fabdc170c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.535 239969 DEBUG oslo_concurrency.lockutils [None req-06dd81eb-24e9-495b-b665-4b940e4c9035 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.536 239969 DEBUG oslo_concurrency.lockutils [None req-06dd81eb-24e9-495b-b665-4b940e4c9035 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquired lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:13 compute-0 nova_compute[239965]: 2026-01-26 15:59:13.585 239969 DEBUG nova.network.neutron [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:59:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1383: 305 pgs: 305 active+clean; 195 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 126 op/s
Jan 26 15:59:14 compute-0 ceph-mon[75140]: pgmap v1383: 305 pgs: 305 active+clean; 195 MiB data, 478 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.2 MiB/s wr, 126 op/s
Jan 26 15:59:15 compute-0 nova_compute[239965]: 2026-01-26 15:59:15.365 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.418 239969 DEBUG nova.network.neutron [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Updating instance_info_cache with network_info: [{"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.443 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Releasing lock "refresh_cache-883e64be-1368-4217-b5a2-7353e3045e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.444 239969 DEBUG nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Instance network_info: |[{"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.445 239969 DEBUG oslo_concurrency.lockutils [req-32adb699-a1c8-4573-9b8e-bbb5c7f71f7c req-f653cf46-3329-4f66-aaab-69a68a275494 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-883e64be-1368-4217-b5a2-7353e3045e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.445 239969 DEBUG nova.network.neutron [req-32adb699-a1c8-4573-9b8e-bbb5c7f71f7c req-f653cf46-3329-4f66-aaab-69a68a275494 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Refreshing network info cache for port 80f1be22-ff75-43ec-bf00-3b825b1932f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.448 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Start _get_guest_xml network_info=[{"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.454 239969 WARNING nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.460 239969 DEBUG nova.virt.libvirt.host [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.461 239969 DEBUG nova.virt.libvirt.host [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.466 239969 DEBUG nova.virt.libvirt.host [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.467 239969 DEBUG nova.virt.libvirt.host [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.467 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.468 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.468 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.469 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.469 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.469 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.470 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.470 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.470 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.471 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.471 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.471 239969 DEBUG nova.virt.hardware [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.475 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.600 239969 DEBUG nova.network.neutron [None req-06dd81eb-24e9-495b-b665-4b940e4c9035 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:16 compute-0 ovn_controller[146046]: 2026-01-26T15:59:16Z|00426|binding|INFO|Releasing lport baa5727d-28fe-41d5-8694-960826c92bc9 from this chassis (sb_readonly=0)
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.741 239969 DEBUG nova.compute.manager [req-b5414fc1-9491-44ac-9437-ab687543eab4 req-6cbdd804-2b18-48bd-9fba-16770b05797e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-changed-4ef2795d-4ecf-4161-b8d3-cf078d350d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.742 239969 DEBUG nova.compute.manager [req-b5414fc1-9491-44ac-9437-ab687543eab4 req-6cbdd804-2b18-48bd-9fba-16770b05797e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Refreshing instance network info cache due to event network-changed-4ef2795d-4ecf-4161-b8d3-cf078d350d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.742 239969 DEBUG oslo_concurrency.lockutils [req-b5414fc1-9491-44ac-9437-ab687543eab4 req-6cbdd804-2b18-48bd-9fba-16770b05797e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.770 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:16 compute-0 nova_compute[239965]: 2026-01-26 15:59:16.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:17 compute-0 ceph-mon[75140]: pgmap v1384: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 151 op/s
Jan 26 15:59:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3480062164' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.096 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.117 239969 DEBUG nova.storage.rbd_utils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 883e64be-1368-4217-b5a2-7353e3045e3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.120 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4088271008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.744 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.746 239969 DEBUG nova.virt.libvirt.vif [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1678235978',display_name='tempest-DeleteServersTestJSON-server-1678235978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1678235978',id=54,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-9ap1zhmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:09Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=883e64be-1368-4217-b5a2-7353e3045e3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.746 239969 DEBUG nova.network.os_vif_util [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.747 239969 DEBUG nova.network.os_vif_util [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:0a:ef,bridge_name='br-int',has_traffic_filtering=True,id=80f1be22-ff75-43ec-bf00-3b825b1932f5,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f1be22-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.748 239969 DEBUG nova.objects.instance [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 883e64be-1368-4217-b5a2-7353e3045e3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.768 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <uuid>883e64be-1368-4217-b5a2-7353e3045e3c</uuid>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <name>instance-00000036</name>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <nova:name>tempest-DeleteServersTestJSON-server-1678235978</nova:name>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:59:16</nova:creationTime>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <nova:user uuid="3c7cba75e9fd41599b1c9a3388447cdd">tempest-DeleteServersTestJSON-234439961-project-member</nova:user>
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <nova:project uuid="aee99e5b6af74088bd848cecc9592e82">tempest-DeleteServersTestJSON-234439961</nova:project>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <nova:port uuid="80f1be22-ff75-43ec-bf00-3b825b1932f5">
Jan 26 15:59:17 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <system>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <entry name="serial">883e64be-1368-4217-b5a2-7353e3045e3c</entry>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <entry name="uuid">883e64be-1368-4217-b5a2-7353e3045e3c</entry>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     </system>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <os>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   </os>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <features>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   </features>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/883e64be-1368-4217-b5a2-7353e3045e3c_disk">
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/883e64be-1368-4217-b5a2-7353e3045e3c_disk.config">
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:17 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e5:0a:ef"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <target dev="tap80f1be22-ff"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c/console.log" append="off"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <video>
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     </video>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:59:17 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:59:17 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:59:17 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:59:17 compute-0 nova_compute[239965]: </domain>
Jan 26 15:59:17 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.770 239969 DEBUG nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Preparing to wait for external event network-vif-plugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.771 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.771 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.771 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.772 239969 DEBUG nova.virt.libvirt.vif [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1678235978',display_name='tempest-DeleteServersTestJSON-server-1678235978',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1678235978',id=54,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-9ap1zhmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:09Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=883e64be-1368-4217-b5a2-7353e3045e3c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.772 239969 DEBUG nova.network.os_vif_util [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.773 239969 DEBUG nova.network.os_vif_util [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:0a:ef,bridge_name='br-int',has_traffic_filtering=True,id=80f1be22-ff75-43ec-bf00-3b825b1932f5,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f1be22-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.774 239969 DEBUG os_vif [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:0a:ef,bridge_name='br-int',has_traffic_filtering=True,id=80f1be22-ff75-43ec-bf00-3b825b1932f5,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f1be22-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.774 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.775 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.775 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.779 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.779 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80f1be22-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.780 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap80f1be22-ff, col_values=(('external_ids', {'iface-id': '80f1be22-ff75-43ec-bf00-3b825b1932f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:0a:ef', 'vm-uuid': '883e64be-1368-4217-b5a2-7353e3045e3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:17 compute-0 NetworkManager[48954]: <info>  [1769443157.7837] manager: (tap80f1be22-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.786 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.789 239969 INFO os_vif [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:0a:ef,bridge_name='br-int',has_traffic_filtering=True,id=80f1be22-ff75-43ec-bf00-3b825b1932f5,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f1be22-ff')
Jan 26 15:59:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.892 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.892 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.892 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No VIF found with MAC fa:16:3e:e5:0a:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.893 239969 INFO nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Using config drive
Jan 26 15:59:17 compute-0 nova_compute[239965]: 2026-01-26 15:59:17.914 239969 DEBUG nova.storage.rbd_utils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 883e64be-1368-4217-b5a2-7353e3045e3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3480062164' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4088271008' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.027 239969 DEBUG nova.network.neutron [req-32adb699-a1c8-4573-9b8e-bbb5c7f71f7c req-f653cf46-3329-4f66-aaab-69a68a275494 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Updated VIF entry in instance network info cache for port 80f1be22-ff75-43ec-bf00-3b825b1932f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.027 239969 DEBUG nova.network.neutron [req-32adb699-a1c8-4573-9b8e-bbb5c7f71f7c req-f653cf46-3329-4f66-aaab-69a68a275494 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Updating instance_info_cache with network_info: [{"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.053 239969 DEBUG oslo_concurrency.lockutils [req-32adb699-a1c8-4573-9b8e-bbb5c7f71f7c req-f653cf46-3329-4f66-aaab-69a68a275494 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-883e64be-1368-4217-b5a2-7353e3045e3c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:19 compute-0 ceph-mon[75140]: pgmap v1385: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.608 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443144.5047283, 5a8e158a-5615-4acc-89e3-48b6dbf159c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.609 239969 INFO nova.compute.manager [-] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] VM Stopped (Lifecycle Event)
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.639 239969 DEBUG nova.compute.manager [None req-1b9f5ed9-00c1-44b9-be25-c2a21471ead1 - - - - - -] [instance: 5a8e158a-5615-4acc-89e3-48b6dbf159c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.847 239969 INFO nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Creating config drive at /var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c/disk.config
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.852 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb5iwg6s8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1386: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.881 239969 DEBUG nova.network.neutron [None req-06dd81eb-24e9-495b-b665-4b940e4c9035 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updating instance_info_cache with network_info: [{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.904 239969 DEBUG oslo_concurrency.lockutils [None req-06dd81eb-24e9-495b-b665-4b940e4c9035 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Releasing lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.904 239969 DEBUG nova.compute.manager [None req-06dd81eb-24e9-495b-b665-4b940e4c9035 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.905 239969 DEBUG nova.compute.manager [None req-06dd81eb-24e9-495b-b665-4b940e4c9035 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] network_info to inject: |[{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.908 239969 DEBUG oslo_concurrency.lockutils [req-b5414fc1-9491-44ac-9437-ab687543eab4 req-6cbdd804-2b18-48bd-9fba-16770b05797e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.908 239969 DEBUG nova.network.neutron [req-b5414fc1-9491-44ac-9437-ab687543eab4 req-6cbdd804-2b18-48bd-9fba-16770b05797e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Refreshing network info cache for port 4ef2795d-4ecf-4161-b8d3-cf078d350d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:59:19 compute-0 nova_compute[239965]: 2026-01-26 15:59:19.987 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb5iwg6s8" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.016 239969 DEBUG nova.storage.rbd_utils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 883e64be-1368-4217-b5a2-7353e3045e3c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.020 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c/disk.config 883e64be-1368-4217-b5a2-7353e3045e3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.176 239969 DEBUG oslo_concurrency.processutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c/disk.config 883e64be-1368-4217-b5a2-7353e3045e3c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.177 239969 INFO nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Deleting local config drive /var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c/disk.config because it was imported into RBD.
Jan 26 15:59:20 compute-0 kernel: tap80f1be22-ff: entered promiscuous mode
Jan 26 15:59:20 compute-0 ovn_controller[146046]: 2026-01-26T15:59:20Z|00427|binding|INFO|Claiming lport 80f1be22-ff75-43ec-bf00-3b825b1932f5 for this chassis.
Jan 26 15:59:20 compute-0 ovn_controller[146046]: 2026-01-26T15:59:20Z|00428|binding|INFO|80f1be22-ff75-43ec-bf00-3b825b1932f5: Claiming fa:16:3e:e5:0a:ef 10.100.0.11
Jan 26 15:59:20 compute-0 NetworkManager[48954]: <info>  [1769443160.2344] manager: (tap80f1be22-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.233 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.245 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:0a:ef 10.100.0.11'], port_security=['fa:16:3e:e5:0a:ef 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '883e64be-1368-4217-b5a2-7353e3045e3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=80f1be22-ff75-43ec-bf00-3b825b1932f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.246 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 80f1be22-ff75-43ec-bf00-3b825b1932f5 in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 bound to our chassis
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.248 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 15:59:20 compute-0 ovn_controller[146046]: 2026-01-26T15:59:20Z|00429|binding|INFO|Setting lport 80f1be22-ff75-43ec-bf00-3b825b1932f5 ovn-installed in OVS
Jan 26 15:59:20 compute-0 ovn_controller[146046]: 2026-01-26T15:59:20Z|00430|binding|INFO|Setting lport 80f1be22-ff75-43ec-bf00-3b825b1932f5 up in Southbound
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.253 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.255 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:20 compute-0 systemd-udevd[287196]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.262 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[12733202-75c6-4338-8fd9-6f1fec9781ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.263 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00eb7549-d1 in ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.265 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00eb7549-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.265 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[076a3cfc-d522-4a58-8d8c-eed0b3ab0422]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.267 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1af32d4b-294d-4c4b-9df6-8ddbb4a626c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 systemd-machined[208061]: New machine qemu-60-instance-00000036.
Jan 26 15:59:20 compute-0 NetworkManager[48954]: <info>  [1769443160.2751] device (tap80f1be22-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:59:20 compute-0 NetworkManager[48954]: <info>  [1769443160.2769] device (tap80f1be22-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:59:20 compute-0 systemd[1]: Started Virtual Machine qemu-60-instance-00000036.
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.279 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[6c1736d2-dbc1-484c-a549-044b48183735]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.293 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b50ab6-2da6-4472-91b6-a1e04a7e7c85]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.320 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6479f627-b427-4267-ba49-d0f7a6848ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.326 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[223bd8b3-1048-4df1-b1f6-d74cebf254bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 NetworkManager[48954]: <info>  [1769443160.3272] manager: (tap00eb7549-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/197)
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.358 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ffb7acd0-f6d4-4cbb-9a27-b7e1155bfcba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.360 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[75284e1a-d075-460f-8181-335cd2432de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.368 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:20 compute-0 NetworkManager[48954]: <info>  [1769443160.3827] device (tap00eb7549-d0): carrier: link connected
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.391 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[be12cc3c-48c7-4826-a178-8a75bfa091d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.407 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[74b8bcb8-ec30-42ff-acea-26dd6b231fa1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465118, 'reachable_time': 30840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287229, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.423 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7e850445-a22f-4598-a7f9-a1f85400e3b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:aa8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 465118, 'tstamp': 465118}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287230, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.438 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f16d7734-e1f5-4750-a863-ece011e54640]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 132], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465118, 'reachable_time': 30840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 287231, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.477 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ee4422-e90a-4d16-8244-a636206d6fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.537 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f80eb1fc-1f5d-44b9-b532-668c3d243ce0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.539 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.539 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.539 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00eb7549-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:20 compute-0 kernel: tap00eb7549-d0: entered promiscuous mode
Jan 26 15:59:20 compute-0 NetworkManager[48954]: <info>  [1769443160.5423] manager: (tap00eb7549-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.544 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.545 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00eb7549-d0, col_values=(('external_ids', {'iface-id': 'c26451f4-ab48-4e8d-b25b-9e4988573b7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:20 compute-0 ovn_controller[146046]: 2026-01-26T15:59:20Z|00431|binding|INFO|Releasing lport c26451f4-ab48-4e8d-b25b-9e4988573b7d from this chassis (sb_readonly=0)
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.550 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.549 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.551 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f32bf9ef-a9ef-49e9-bb91-b4ec82ffdbab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.552 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:59:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:20.552 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'env', 'PROCESS_TAG=haproxy-00eb7549-d24b-4657-b244-7664c8a34b20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00eb7549-d24b-4657-b244-7664c8a34b20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.565 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.775 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443160.7745674, 883e64be-1368-4217-b5a2-7353e3045e3c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.775 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] VM Started (Lifecycle Event)
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.813 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.817 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443160.7747402, 883e64be-1368-4217-b5a2-7353e3045e3c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.818 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] VM Paused (Lifecycle Event)
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.835 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.840 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:20 compute-0 nova_compute[239965]: 2026-01-26 15:59:20.867 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:20 compute-0 podman[287305]: 2026-01-26 15:59:20.955280499 +0000 UTC m=+0.056800816 container create 639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 15:59:20 compute-0 systemd[1]: Started libpod-conmon-639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3.scope.
Jan 26 15:59:21 compute-0 podman[287305]: 2026-01-26 15:59:20.926737979 +0000 UTC m=+0.028258316 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:59:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:59:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8a190053fed666ac4fecbf20789d26e6e2df4f6c35a5c72b4bd128480aca9f8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:59:21 compute-0 podman[287305]: 2026-01-26 15:59:21.048644721 +0000 UTC m=+0.150165058 container init 639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:59:21 compute-0 podman[287305]: 2026-01-26 15:59:21.05393403 +0000 UTC m=+0.155454337 container start 639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 15:59:21 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[287320]: [NOTICE]   (287324) : New worker (287326) forked
Jan 26 15:59:21 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[287320]: [NOTICE]   (287324) : Loading success.
Jan 26 15:59:21 compute-0 ceph-mon[75140]: pgmap v1386: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 118 op/s
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.763 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443146.762202, 82d2c240-6e7e-4cc1-a311-2bf4fea69b85 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.763 239969 INFO nova.compute.manager [-] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] VM Stopped (Lifecycle Event)
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.812 239969 DEBUG nova.compute.manager [req-6d8fa1b5-562f-4543-9c36-e2c101434e2d req-32c656ba-708d-4d1a-b242-c928ebcc31d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Received event network-vif-plugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.812 239969 DEBUG oslo_concurrency.lockutils [req-6d8fa1b5-562f-4543-9c36-e2c101434e2d req-32c656ba-708d-4d1a-b242-c928ebcc31d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.812 239969 DEBUG oslo_concurrency.lockutils [req-6d8fa1b5-562f-4543-9c36-e2c101434e2d req-32c656ba-708d-4d1a-b242-c928ebcc31d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.812 239969 DEBUG oslo_concurrency.lockutils [req-6d8fa1b5-562f-4543-9c36-e2c101434e2d req-32c656ba-708d-4d1a-b242-c928ebcc31d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.813 239969 DEBUG nova.compute.manager [req-6d8fa1b5-562f-4543-9c36-e2c101434e2d req-32c656ba-708d-4d1a-b242-c928ebcc31d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Processing event network-vif-plugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.813 239969 DEBUG nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.818 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443161.816428, 883e64be-1368-4217-b5a2-7353e3045e3c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.818 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] VM Resumed (Lifecycle Event)
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.821 239969 DEBUG nova.objects.instance [None req-2533f811-0180-4df0-9851-33778a9ed96c 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lazy-loading 'flavor' on Instance uuid 6b4dbd52-cbad-4d8f-8446-e47fabdc170c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.823 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.827 239969 INFO nova.virt.libvirt.driver [-] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Instance spawned successfully.
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.827 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.853 239969 DEBUG nova.compute.manager [None req-21c025eb-0c74-454b-a786-67789269e695 - - - - - -] [instance: 82d2c240-6e7e-4cc1-a311-2bf4fea69b85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.859 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1387: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.865 239969 DEBUG oslo_concurrency.lockutils [None req-2533f811-0180-4df0-9851-33778a9ed96c 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.868 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.872 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.872 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.872 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.873 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.873 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.874 239969 DEBUG nova.virt.libvirt.driver [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.900 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.930 239969 INFO nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Took 11.89 seconds to spawn the instance on the hypervisor.
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.930 239969 DEBUG nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:21 compute-0 nova_compute[239965]: 2026-01-26 15:59:21.987 239969 INFO nova.compute.manager [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Took 12.98 seconds to build instance.
Jan 26 15:59:22 compute-0 nova_compute[239965]: 2026-01-26 15:59:22.006 239969 DEBUG oslo_concurrency.lockutils [None req-e042db47-656f-4061-b669-7eff2743bde8 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:22 compute-0 nova_compute[239965]: 2026-01-26 15:59:22.624 239969 DEBUG nova.network.neutron [req-b5414fc1-9491-44ac-9437-ab687543eab4 req-6cbdd804-2b18-48bd-9fba-16770b05797e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updated VIF entry in instance network info cache for port 4ef2795d-4ecf-4161-b8d3-cf078d350d35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:22 compute-0 nova_compute[239965]: 2026-01-26 15:59:22.625 239969 DEBUG nova.network.neutron [req-b5414fc1-9491-44ac-9437-ab687543eab4 req-6cbdd804-2b18-48bd-9fba-16770b05797e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updating instance_info_cache with network_info: [{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:22 compute-0 nova_compute[239965]: 2026-01-26 15:59:22.642 239969 DEBUG oslo_concurrency.lockutils [req-b5414fc1-9491-44ac-9437-ab687543eab4 req-6cbdd804-2b18-48bd-9fba-16770b05797e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:22 compute-0 nova_compute[239965]: 2026-01-26 15:59:22.643 239969 DEBUG oslo_concurrency.lockutils [None req-2533f811-0180-4df0-9851-33778a9ed96c 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquired lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:22 compute-0 nova_compute[239965]: 2026-01-26 15:59:22.784 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:23 compute-0 ceph-mon[75140]: pgmap v1387: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.535 239969 DEBUG oslo_concurrency.lockutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "883e64be-1368-4217-b5a2-7353e3045e3c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.536 239969 DEBUG oslo_concurrency.lockutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.536 239969 DEBUG oslo_concurrency.lockutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.537 239969 DEBUG oslo_concurrency.lockutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.537 239969 DEBUG oslo_concurrency.lockutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.538 239969 INFO nova.compute.manager [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Terminating instance
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.540 239969 DEBUG nova.compute.manager [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:59:23 compute-0 kernel: tap80f1be22-ff (unregistering): left promiscuous mode
Jan 26 15:59:23 compute-0 NetworkManager[48954]: <info>  [1769443163.5807] device (tap80f1be22-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:59:23 compute-0 ovn_controller[146046]: 2026-01-26T15:59:23Z|00432|binding|INFO|Releasing lport 80f1be22-ff75-43ec-bf00-3b825b1932f5 from this chassis (sb_readonly=0)
Jan 26 15:59:23 compute-0 ovn_controller[146046]: 2026-01-26T15:59:23Z|00433|binding|INFO|Setting lport 80f1be22-ff75-43ec-bf00-3b825b1932f5 down in Southbound
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.594 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:23 compute-0 ovn_controller[146046]: 2026-01-26T15:59:23Z|00434|binding|INFO|Removing iface tap80f1be22-ff ovn-installed in OVS
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.603 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:0a:ef 10.100.0.11'], port_security=['fa:16:3e:e5:0a:ef 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '883e64be-1368-4217-b5a2-7353e3045e3c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=80f1be22-ff75-43ec-bf00-3b825b1932f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.604 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 80f1be22-ff75-43ec-bf00-3b825b1932f5 in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 unbound from our chassis
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.606 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00eb7549-d24b-4657-b244-7664c8a34b20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.607 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da77f6a8-d5d5-4104-9993-cff65d3a0c16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.607 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace which is not needed anymore
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.619 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:23 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000036.scope: Deactivated successfully.
Jan 26 15:59:23 compute-0 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000036.scope: Consumed 2.286s CPU time.
Jan 26 15:59:23 compute-0 systemd-machined[208061]: Machine qemu-60-instance-00000036 terminated.
Jan 26 15:59:23 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[287320]: [NOTICE]   (287324) : haproxy version is 2.8.14-c23fe91
Jan 26 15:59:23 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[287320]: [NOTICE]   (287324) : path to executable is /usr/sbin/haproxy
Jan 26 15:59:23 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[287320]: [ALERT]    (287324) : Current worker (287326) exited with code 143 (Terminated)
Jan 26 15:59:23 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[287320]: [WARNING]  (287324) : All workers exited. Exiting... (0)
Jan 26 15:59:23 compute-0 systemd[1]: libpod-639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3.scope: Deactivated successfully.
Jan 26 15:59:23 compute-0 podman[287358]: 2026-01-26 15:59:23.741303731 +0000 UTC m=+0.044710689 container died 639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.774 239969 INFO nova.virt.libvirt.driver [-] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Instance destroyed successfully.
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.775 239969 DEBUG nova.objects.instance [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'resources' on Instance uuid 883e64be-1368-4217-b5a2-7353e3045e3c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3-userdata-shm.mount: Deactivated successfully.
Jan 26 15:59:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8a190053fed666ac4fecbf20789d26e6e2df4f6c35a5c72b4bd128480aca9f8-merged.mount: Deactivated successfully.
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.789 239969 DEBUG nova.virt.libvirt.vif [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:59:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1678235978',display_name='tempest-DeleteServersTestJSON-server-1678235978',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1678235978',id=54,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:59:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-9ap1zhmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:59:21Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=883e64be-1368-4217-b5a2-7353e3045e3c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.790 239969 DEBUG nova.network.os_vif_util [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "address": "fa:16:3e:e5:0a:ef", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap80f1be22-ff", "ovs_interfaceid": "80f1be22-ff75-43ec-bf00-3b825b1932f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.791 239969 DEBUG nova.network.os_vif_util [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:0a:ef,bridge_name='br-int',has_traffic_filtering=True,id=80f1be22-ff75-43ec-bf00-3b825b1932f5,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f1be22-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.792 239969 DEBUG os_vif [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:0a:ef,bridge_name='br-int',has_traffic_filtering=True,id=80f1be22-ff75-43ec-bf00-3b825b1932f5,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f1be22-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:59:23 compute-0 podman[287358]: 2026-01-26 15:59:23.793882801 +0000 UTC m=+0.097289769 container cleanup 639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.794 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.794 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f1be22-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.796 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.798 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.801 239969 INFO os_vif [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:0a:ef,bridge_name='br-int',has_traffic_filtering=True,id=80f1be22-ff75-43ec-bf00-3b825b1932f5,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap80f1be22-ff')
Jan 26 15:59:23 compute-0 systemd[1]: libpod-conmon-639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3.scope: Deactivated successfully.
Jan 26 15:59:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 601 KiB/s wr, 25 op/s
Jan 26 15:59:23 compute-0 podman[287397]: 2026-01-26 15:59:23.868310849 +0000 UTC m=+0.051720391 container remove 639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.875 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5d0e16-fc26-4e35-8316-2f52cf23bf6d]: (4, ('Mon Jan 26 03:59:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3)\n639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3\nMon Jan 26 03:59:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3)\n639557d187ab8abfcec04b708c6e64b55b13e19f17629da4296ef361c88813c3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.877 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9b93c430-8b3c-433f-a3e6-d0a761d92ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.878 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.880 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:23 compute-0 kernel: tap00eb7549-d0: left promiscuous mode
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.882 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.885 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3b09aa12-e2d9-4634-b214-c7c101ecefbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:23 compute-0 nova_compute[239965]: 2026-01-26 15:59:23.899 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.904 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc785d3-afea-4a89-992d-848eafb29920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.905 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1c6dbc-aea1-4bbb-9ea3-fe89ea75af07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.921 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[054a69bb-ccd9-4569-9083-6452941f3466]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 465112, 'reachable_time': 18165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287428, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.923 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:59:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:23.924 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ae2db9f0-1f02-40a7-8df1-40fc9b90e93a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d00eb7549\x2dd24b\x2d4657\x2db244\x2d7664c8a34b20.mount: Deactivated successfully.
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.084 239969 INFO nova.virt.libvirt.driver [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Deleting instance files /var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c_del
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.085 239969 INFO nova.virt.libvirt.driver [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Deletion of /var/lib/nova/instances/883e64be-1368-4217-b5a2-7353e3045e3c_del complete
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.109 239969 DEBUG nova.compute.manager [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Received event network-vif-plugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.110 239969 DEBUG oslo_concurrency.lockutils [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.110 239969 DEBUG oslo_concurrency.lockutils [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.110 239969 DEBUG oslo_concurrency.lockutils [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.110 239969 DEBUG nova.compute.manager [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] No waiting events found dispatching network-vif-plugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.111 239969 WARNING nova.compute.manager [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Received unexpected event network-vif-plugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 for instance with vm_state active and task_state deleting.
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.111 239969 DEBUG nova.compute.manager [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Received event network-vif-unplugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.111 239969 DEBUG oslo_concurrency.lockutils [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.111 239969 DEBUG oslo_concurrency.lockutils [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.111 239969 DEBUG oslo_concurrency.lockutils [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.112 239969 DEBUG nova.compute.manager [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] No waiting events found dispatching network-vif-unplugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.112 239969 DEBUG nova.compute.manager [req-ada69426-1c0d-43f7-ad93-82257aac7d6d req-a5043331-3811-4c19-8287-658a727a0ed6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Received event network-vif-unplugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.140 239969 INFO nova.compute.manager [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.141 239969 DEBUG oslo.service.loopingcall [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.141 239969 DEBUG nova.compute.manager [-] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.141 239969 DEBUG nova.network.neutron [-] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.326 239969 DEBUG nova.network.neutron [None req-2533f811-0180-4df0-9851-33778a9ed96c 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.452 239969 DEBUG nova.compute.manager [req-6e528c2b-a176-4de3-94ac-c725efb5bbca req-47ad5cc3-ba79-4573-8fda-b60fcd55a380 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-changed-4ef2795d-4ecf-4161-b8d3-cf078d350d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.452 239969 DEBUG nova.compute.manager [req-6e528c2b-a176-4de3-94ac-c725efb5bbca req-47ad5cc3-ba79-4573-8fda-b60fcd55a380 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Refreshing instance network info cache due to event network-changed-4ef2795d-4ecf-4161-b8d3-cf078d350d35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.452 239969 DEBUG oslo_concurrency.lockutils [req-6e528c2b-a176-4de3-94ac-c725efb5bbca req-47ad5cc3-ba79-4573-8fda-b60fcd55a380 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.681 239969 DEBUG nova.network.neutron [-] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.702 239969 INFO nova.compute.manager [-] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Took 0.56 seconds to deallocate network for instance.
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.750 239969 DEBUG oslo_concurrency.lockutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.751 239969 DEBUG oslo_concurrency.lockutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:24 compute-0 nova_compute[239965]: 2026-01-26 15:59:24.843 239969 DEBUG oslo_concurrency.processutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:25 compute-0 nova_compute[239965]: 2026-01-26 15:59:25.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:25 compute-0 ceph-mon[75140]: pgmap v1388: 305 pgs: 305 active+clean; 213 MiB data, 499 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 601 KiB/s wr, 25 op/s
Jan 26 15:59:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372660541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:25 compute-0 nova_compute[239965]: 2026-01-26 15:59:25.413 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:25 compute-0 nova_compute[239965]: 2026-01-26 15:59:25.433 239969 DEBUG oslo_concurrency.processutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:25 compute-0 nova_compute[239965]: 2026-01-26 15:59:25.438 239969 DEBUG nova.compute.provider_tree [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:25 compute-0 nova_compute[239965]: 2026-01-26 15:59:25.453 239969 DEBUG nova.scheduler.client.report [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:25 compute-0 nova_compute[239965]: 2026-01-26 15:59:25.475 239969 DEBUG oslo_concurrency.lockutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:25 compute-0 nova_compute[239965]: 2026-01-26 15:59:25.503 239969 INFO nova.scheduler.client.report [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Deleted allocations for instance 883e64be-1368-4217-b5a2-7353e3045e3c
Jan 26 15:59:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:25 compute-0 nova_compute[239965]: 2026-01-26 15:59:25.595 239969 DEBUG oslo_concurrency.lockutils [None req-127d0cc9-b98f-411c-a01c-62dc8a9079c1 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 167 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 605 KiB/s wr, 125 op/s
Jan 26 15:59:25 compute-0 nova_compute[239965]: 2026-01-26 15:59:25.947 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1372660541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:26 compute-0 ceph-mon[75140]: pgmap v1389: 305 pgs: 305 active+clean; 167 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 605 KiB/s wr, 125 op/s
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.240 239969 DEBUG nova.compute.manager [req-e6dbd190-83dc-4f7c-b5da-6e552551318e req-2e9c5871-d8fb-4b1f-a039-1f31b76843e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Received event network-vif-plugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.241 239969 DEBUG oslo_concurrency.lockutils [req-e6dbd190-83dc-4f7c-b5da-6e552551318e req-2e9c5871-d8fb-4b1f-a039-1f31b76843e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.242 239969 DEBUG oslo_concurrency.lockutils [req-e6dbd190-83dc-4f7c-b5da-6e552551318e req-2e9c5871-d8fb-4b1f-a039-1f31b76843e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.242 239969 DEBUG oslo_concurrency.lockutils [req-e6dbd190-83dc-4f7c-b5da-6e552551318e req-2e9c5871-d8fb-4b1f-a039-1f31b76843e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "883e64be-1368-4217-b5a2-7353e3045e3c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.243 239969 DEBUG nova.compute.manager [req-e6dbd190-83dc-4f7c-b5da-6e552551318e req-2e9c5871-d8fb-4b1f-a039-1f31b76843e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] No waiting events found dispatching network-vif-plugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.243 239969 WARNING nova.compute.manager [req-e6dbd190-83dc-4f7c-b5da-6e552551318e req-2e9c5871-d8fb-4b1f-a039-1f31b76843e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Received unexpected event network-vif-plugged-80f1be22-ff75-43ec-bf00-3b825b1932f5 for instance with vm_state deleted and task_state None.
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.244 239969 DEBUG nova.compute.manager [req-e6dbd190-83dc-4f7c-b5da-6e552551318e req-2e9c5871-d8fb-4b1f-a039-1f31b76843e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Received event network-vif-deleted-80f1be22-ff75-43ec-bf00-3b825b1932f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.392 239969 DEBUG nova.network.neutron [None req-2533f811-0180-4df0-9851-33778a9ed96c 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updating instance_info_cache with network_info: [{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.425 239969 DEBUG oslo_concurrency.lockutils [None req-2533f811-0180-4df0-9851-33778a9ed96c 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Releasing lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.426 239969 DEBUG nova.compute.manager [None req-2533f811-0180-4df0-9851-33778a9ed96c 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.426 239969 DEBUG nova.compute.manager [None req-2533f811-0180-4df0-9851-33778a9ed96c 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] network_info to inject: |[{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.429 239969 DEBUG oslo_concurrency.lockutils [req-6e528c2b-a176-4de3-94ac-c725efb5bbca req-47ad5cc3-ba79-4573-8fda-b60fcd55a380 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.429 239969 DEBUG nova.network.neutron [req-6e528c2b-a176-4de3-94ac-c725efb5bbca req-47ad5cc3-ba79-4573-8fda-b60fcd55a380 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Refreshing network info cache for port 4ef2795d-4ecf-4161-b8d3-cf078d350d35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.675 239969 DEBUG oslo_concurrency.lockutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.675 239969 DEBUG oslo_concurrency.lockutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.676 239969 DEBUG oslo_concurrency.lockutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.676 239969 DEBUG oslo_concurrency.lockutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.676 239969 DEBUG oslo_concurrency.lockutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.677 239969 INFO nova.compute.manager [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Terminating instance
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.679 239969 DEBUG nova.compute.manager [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:59:26 compute-0 kernel: tap4ef2795d-4e (unregistering): left promiscuous mode
Jan 26 15:59:26 compute-0 NetworkManager[48954]: <info>  [1769443166.7276] device (tap4ef2795d-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:59:26 compute-0 ovn_controller[146046]: 2026-01-26T15:59:26Z|00435|binding|INFO|Releasing lport 4ef2795d-4ecf-4161-b8d3-cf078d350d35 from this chassis (sb_readonly=0)
Jan 26 15:59:26 compute-0 ovn_controller[146046]: 2026-01-26T15:59:26Z|00436|binding|INFO|Setting lport 4ef2795d-4ecf-4161-b8d3-cf078d350d35 down in Southbound
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:26 compute-0 ovn_controller[146046]: 2026-01-26T15:59:26Z|00437|binding|INFO|Removing iface tap4ef2795d-4e ovn-installed in OVS
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:26.746 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:f2:2a 10.100.0.7'], port_security=['fa:16:3e:d2:f2:2a 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6b4dbd52-cbad-4d8f-8446-e47fabdc170c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8d2177c9-aa21-40e9-8927-6a75b9503d25', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2598d15187804c928a694f10598376ca', 'neutron:revision_number': '6', 'neutron:security_group_ids': '8f1c69a6-d773-456d-8e48-1beedd226a0d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e144102a-5edb-4f12-9913-268c4e68bd41, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4ef2795d-4ecf-4161-b8d3-cf078d350d35) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:26.747 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4ef2795d-4ecf-4161-b8d3-cf078d350d35 in datapath 8d2177c9-aa21-40e9-8927-6a75b9503d25 unbound from our chassis
Jan 26 15:59:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:26.748 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8d2177c9-aa21-40e9-8927-6a75b9503d25, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:59:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:26.749 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[82377044-63a5-4755-a6a6-6b8037fe25c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:26.749 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25 namespace which is not needed anymore
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.758 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:26 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Deactivated successfully.
Jan 26 15:59:26 compute-0 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000033.scope: Consumed 15.754s CPU time.
Jan 26 15:59:26 compute-0 systemd-machined[208061]: Machine qemu-57-instance-00000033 terminated.
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.904 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:26 compute-0 neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25[285002]: [NOTICE]   (285012) : haproxy version is 2.8.14-c23fe91
Jan 26 15:59:26 compute-0 neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25[285002]: [NOTICE]   (285012) : path to executable is /usr/sbin/haproxy
Jan 26 15:59:26 compute-0 neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25[285002]: [WARNING]  (285012) : Exiting Master process...
Jan 26 15:59:26 compute-0 neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25[285002]: [WARNING]  (285012) : Exiting Master process...
Jan 26 15:59:26 compute-0 neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25[285002]: [ALERT]    (285012) : Current worker (285015) exited with code 143 (Terminated)
Jan 26 15:59:26 compute-0 neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25[285002]: [WARNING]  (285012) : All workers exited. Exiting... (0)
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.911 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:26 compute-0 systemd[1]: libpod-996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6.scope: Deactivated successfully.
Jan 26 15:59:26 compute-0 podman[287476]: 2026-01-26 15:59:26.920366469 +0000 UTC m=+0.088718128 container died 996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.924 239969 INFO nova.virt.libvirt.driver [-] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Instance destroyed successfully.
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.924 239969 DEBUG nova.objects.instance [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lazy-loading 'resources' on Instance uuid 6b4dbd52-cbad-4d8f-8446-e47fabdc170c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.936 239969 DEBUG nova.virt.libvirt.vif [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:58:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-233245799',display_name='tempest-AttachInterfacesUnderV243Test-server-233245799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-233245799',id=51,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBnkpdcgKw3ZxJcExi2kr0VnpomVnij75d3bR/pbfdCgGQcA+UwOyKukeTAdByLzJETxkaYGjczmjENGXW7/FyChm4ZWdLhT+jUfzqFeBkdkNxTH04DeT40B/qr/NF35Rw==',key_name='tempest-keypair-1898623706',keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:58:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2598d15187804c928a694f10598376ca',ramdisk_id='',reservation_id='r-7f0s9be4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-177389642',owner_user_name='tempest-AttachInterfacesUnderV243Test-177389642-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:59:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39cbd4b3356d4e52beac8da6a7696c50',uuid=6b4dbd52-cbad-4d8f-8446-e47fabdc170c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.937 239969 DEBUG nova.network.os_vif_util [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Converting VIF {"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.938 239969 DEBUG nova.network.os_vif_util [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:f2:2a,bridge_name='br-int',has_traffic_filtering=True,id=4ef2795d-4ecf-4161-b8d3-cf078d350d35,network=Network(8d2177c9-aa21-40e9-8927-6a75b9503d25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef2795d-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.939 239969 DEBUG os_vif [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:f2:2a,bridge_name='br-int',has_traffic_filtering=True,id=4ef2795d-4ecf-4161-b8d3-cf078d350d35,network=Network(8d2177c9-aa21-40e9-8927-6a75b9503d25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef2795d-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.940 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.940 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ef2795d-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.943 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:26 compute-0 nova_compute[239965]: 2026-01-26 15:59:26.946 239969 INFO os_vif [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:f2:2a,bridge_name='br-int',has_traffic_filtering=True,id=4ef2795d-4ecf-4161-b8d3-cf078d350d35,network=Network(8d2177c9-aa21-40e9-8927-6a75b9503d25),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ef2795d-4e')
Jan 26 15:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6-userdata-shm.mount: Deactivated successfully.
Jan 26 15:59:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5cd1be7d226113edc2f23189c84b6b5a353091d16442d573261499877f7b8f5-merged.mount: Deactivated successfully.
Jan 26 15:59:26 compute-0 podman[287476]: 2026-01-26 15:59:26.973207576 +0000 UTC m=+0.141559235 container cleanup 996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 15:59:26 compute-0 systemd[1]: libpod-conmon-996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6.scope: Deactivated successfully.
Jan 26 15:59:27 compute-0 podman[287528]: 2026-01-26 15:59:27.049333295 +0000 UTC m=+0.051265889 container remove 996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 15:59:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:27.058 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b2bb8c-8f06-472d-8792-9451b6ba25ee]: (4, ('Mon Jan 26 03:59:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25 (996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6)\n996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6\nMon Jan 26 03:59:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25 (996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6)\n996c9ceca59a6583d0d386af6708b2ffc3bf045153939db4da82e6de0ff8fde6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:27.060 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8fa06a83-eda1-4e66-acc5-08ced575667d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:27.061 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d2177c9-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.063 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:27 compute-0 kernel: tap8d2177c9-a0: left promiscuous mode
Jan 26 15:59:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:27.069 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9be507-4acd-4696-afd9-0cc45046b237]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.083 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:27.086 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[14e6d507-6df3-468b-a417-bfe90537c44f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:27.088 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b45c5809-46b4-4841-8f4c-9753f76e970c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:27.108 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[447727c6-437a-462d-9854-70506440b7db]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 460681, 'reachable_time': 36175, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287546, 'error': None, 'target': 'ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:27.111 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8d2177c9-aa21-40e9-8927-6a75b9503d25 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:59:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:27.111 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[29d6e58d-2531-4d85-a02e-5cbfe65f4294]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d8d2177c9\x2daa21\x2d40e9\x2d8927\x2d6a75b9503d25.mount: Deactivated successfully.
Jan 26 15:59:27 compute-0 rsyslogd[1006]: imjournal from <np0005595828:kernel>: begin to drop messages due to rate-limiting
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.369 239969 INFO nova.virt.libvirt.driver [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Deleting instance files /var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c_del
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.370 239969 INFO nova.virt.libvirt.driver [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Deletion of /var/lib/nova/instances/6b4dbd52-cbad-4d8f-8446-e47fabdc170c_del complete
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.417 239969 INFO nova.compute.manager [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.417 239969 DEBUG oslo.service.loopingcall [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.418 239969 DEBUG nova.compute.manager [-] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.418 239969 DEBUG nova.network.neutron [-] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.542 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 15:59:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 167 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 100 op/s
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.898 239969 DEBUG nova.network.neutron [req-6e528c2b-a176-4de3-94ac-c725efb5bbca req-47ad5cc3-ba79-4573-8fda-b60fcd55a380 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updated VIF entry in instance network info cache for port 4ef2795d-4ecf-4161-b8d3-cf078d350d35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.899 239969 DEBUG nova.network.neutron [req-6e528c2b-a176-4de3-94ac-c725efb5bbca req-47ad5cc3-ba79-4573-8fda-b60fcd55a380 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updating instance_info_cache with network_info: [{"id": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "address": "fa:16:3e:d2:f2:2a", "network": {"id": "8d2177c9-aa21-40e9-8927-6a75b9503d25", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-868370460-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2598d15187804c928a694f10598376ca", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ef2795d-4e", "ovs_interfaceid": "4ef2795d-4ecf-4161-b8d3-cf078d350d35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:27 compute-0 nova_compute[239965]: 2026-01-26 15:59:27.919 239969 DEBUG oslo_concurrency.lockutils [req-6e528c2b-a176-4de3-94ac-c725efb5bbca req-47ad5cc3-ba79-4573-8fda-b60fcd55a380 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-6b4dbd52-cbad-4d8f-8446-e47fabdc170c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.365 239969 DEBUG nova.compute.manager [req-8e5f6004-a48e-4993-97fd-63824df35580 req-225d82af-2bbd-4ed5-8539-1e16ce111cc4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-vif-unplugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.365 239969 DEBUG oslo_concurrency.lockutils [req-8e5f6004-a48e-4993-97fd-63824df35580 req-225d82af-2bbd-4ed5-8539-1e16ce111cc4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.365 239969 DEBUG oslo_concurrency.lockutils [req-8e5f6004-a48e-4993-97fd-63824df35580 req-225d82af-2bbd-4ed5-8539-1e16ce111cc4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.365 239969 DEBUG oslo_concurrency.lockutils [req-8e5f6004-a48e-4993-97fd-63824df35580 req-225d82af-2bbd-4ed5-8539-1e16ce111cc4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.366 239969 DEBUG nova.compute.manager [req-8e5f6004-a48e-4993-97fd-63824df35580 req-225d82af-2bbd-4ed5-8539-1e16ce111cc4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] No waiting events found dispatching network-vif-unplugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.366 239969 DEBUG nova.compute.manager [req-8e5f6004-a48e-4993-97fd-63824df35580 req-225d82af-2bbd-4ed5-8539-1e16ce111cc4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-vif-unplugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_15:59:28
Jan 26 15:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 15:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 15:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.control', '.rgw.root', 'volumes', 'vms', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 26 15:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.883 239969 DEBUG nova.network.neutron [-] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.903 239969 INFO nova.compute.manager [-] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Took 1.49 seconds to deallocate network for instance.
Jan 26 15:59:28 compute-0 ceph-mon[75140]: pgmap v1390: 305 pgs: 305 active+clean; 167 MiB data, 482 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 100 op/s
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.944 239969 DEBUG oslo_concurrency.lockutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:28 compute-0 nova_compute[239965]: 2026-01-26 15:59:28.944 239969 DEBUG oslo_concurrency.lockutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:29 compute-0 nova_compute[239965]: 2026-01-26 15:59:29.039 239969 DEBUG oslo_concurrency.processutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:29 compute-0 nova_compute[239965]: 2026-01-26 15:59:29.081 239969 DEBUG nova.compute.manager [req-9debd8d9-b889-4768-b9ad-2b5a20db2ae8 req-45e112df-15cc-4c8d-abc8-deee25dadc47 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-vif-deleted-4ef2795d-4ecf-4161-b8d3-cf078d350d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:29 compute-0 nova_compute[239965]: 2026-01-26 15:59:29.542 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227535624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:29 compute-0 nova_compute[239965]: 2026-01-26 15:59:29.666 239969 DEBUG oslo_concurrency.processutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:29 compute-0 nova_compute[239965]: 2026-01-26 15:59:29.671 239969 DEBUG nova.compute.provider_tree [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:29 compute-0 nova_compute[239965]: 2026-01-26 15:59:29.691 239969 DEBUG nova.scheduler.client.report [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:29 compute-0 nova_compute[239965]: 2026-01-26 15:59:29.716 239969 DEBUG oslo_concurrency.lockutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:29 compute-0 nova_compute[239965]: 2026-01-26 15:59:29.745 239969 INFO nova.scheduler.client.report [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Deleted allocations for instance 6b4dbd52-cbad-4d8f-8446-e47fabdc170c
Jan 26 15:59:29 compute-0 nova_compute[239965]: 2026-01-26 15:59:29.814 239969 DEBUG oslo_concurrency.lockutils [None req-f3fae5b4-e0ae-4dec-b9e3-ffce46de8a52 39cbd4b3356d4e52beac8da6a7696c50 2598d15187804c928a694f10598376ca - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1391: 305 pgs: 305 active+clean; 103 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 124 op/s
Jan 26 15:59:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3227535624' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:59:30 compute-0 podman[287570]: 2026-01-26 15:59:30.368937533 +0000 UTC m=+0.054274892 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.415 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.554 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.562 239969 DEBUG nova.compute.manager [req-4d749694-adac-451f-a1af-fa1f73adf2d5 req-82bfc8f1-5927-4a5d-8ee1-674baf2636d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received event network-vif-plugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.563 239969 DEBUG oslo_concurrency.lockutils [req-4d749694-adac-451f-a1af-fa1f73adf2d5 req-82bfc8f1-5927-4a5d-8ee1-674baf2636d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.563 239969 DEBUG oslo_concurrency.lockutils [req-4d749694-adac-451f-a1af-fa1f73adf2d5 req-82bfc8f1-5927-4a5d-8ee1-674baf2636d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.563 239969 DEBUG oslo_concurrency.lockutils [req-4d749694-adac-451f-a1af-fa1f73adf2d5 req-82bfc8f1-5927-4a5d-8ee1-674baf2636d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6b4dbd52-cbad-4d8f-8446-e47fabdc170c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.563 239969 DEBUG nova.compute.manager [req-4d749694-adac-451f-a1af-fa1f73adf2d5 req-82bfc8f1-5927-4a5d-8ee1-674baf2636d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] No waiting events found dispatching network-vif-plugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:30 compute-0 nova_compute[239965]: 2026-01-26 15:59:30.563 239969 WARNING nova.compute.manager [req-4d749694-adac-451f-a1af-fa1f73adf2d5 req-82bfc8f1-5927-4a5d-8ee1-674baf2636d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Received unexpected event network-vif-plugged-4ef2795d-4ecf-4161-b8d3-cf078d350d35 for instance with vm_state deleted and task_state None.
Jan 26 15:59:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:59:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 15:59:30 compute-0 ceph-mon[75140]: pgmap v1391: 305 pgs: 305 active+clean; 103 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 124 op/s
Jan 26 15:59:31 compute-0 podman[287590]: 2026-01-26 15:59:31.391613605 +0000 UTC m=+0.080783334 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 15:59:31 compute-0 nova_compute[239965]: 2026-01-26 15:59:31.449 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1392: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 128 op/s
Jan 26 15:59:31 compute-0 nova_compute[239965]: 2026-01-26 15:59:31.943 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.006 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.006 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.029 239969 DEBUG nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.102 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.103 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.113 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.113 239969 INFO nova.compute.claims [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.210 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/465224348' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.798 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.802 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.808 239969 DEBUG nova.compute.provider_tree [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.833 239969 DEBUG nova.scheduler.client.report [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.838 239969 WARNING nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.839 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Triggering sync for uuid 64435a76-a92d-4b8f-83e5-a6a85b70b10f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.839 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.858 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.860 239969 DEBUG nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.905 239969 DEBUG nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.906 239969 DEBUG nova.network.neutron [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.926 239969 INFO nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:59:32 compute-0 nova_compute[239965]: 2026-01-26 15:59:32.946 239969 DEBUG nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:59:32 compute-0 ceph-mon[75140]: pgmap v1392: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 17 KiB/s wr, 128 op/s
Jan 26 15:59:32 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/465224348' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.048 239969 DEBUG nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.049 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.049 239969 INFO nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Creating image(s)
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.074 239969 DEBUG nova.storage.rbd_utils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.100 239969 DEBUG nova.storage.rbd_utils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.123 239969 DEBUG nova.storage.rbd_utils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.127 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.156 239969 DEBUG nova.policy [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c7cba75e9fd41599b1c9a3388447cdd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aee99e5b6af74088bd848cecc9592e82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.195 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.196 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.197 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.197 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.221 239969 DEBUG nova.storage.rbd_utils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.225 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.535 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.310s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.602 239969 DEBUG nova.storage.rbd_utils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] resizing rbd image 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.693 239969 DEBUG nova.objects.instance [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'migration_context' on Instance uuid 64435a76-a92d-4b8f-83e5-a6a85b70b10f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.712 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.713 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Ensure instance console log exists: /var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.713 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.714 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.714 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.2 KiB/s wr, 127 op/s
Jan 26 15:59:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:33.960 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:33 compute-0 nova_compute[239965]: 2026-01-26 15:59:33.960 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:33.962 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.145 239969 DEBUG nova.network.neutron [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Successfully created port: 5010d02a-4286-4559-ab9e-1f1ee882ee9b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.230 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "66382f82-a38c-4066-8564-7866c37ccd0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.230 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.255 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.286 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.287 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.335 239969 DEBUG nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.338 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "af55c460-3670-4fa6-855f-386e2c58b312" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.338 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.373 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.381 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.381 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.413 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.414 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.419 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.420 239969 INFO nova.compute.claims [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.429 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.500 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.551 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.552 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.552 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.554 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.554 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.575 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.578 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:34 compute-0 nova_compute[239965]: 2026-01-26 15:59:34.648 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:35 compute-0 ceph-mon[75140]: pgmap v1393: 305 pgs: 305 active+clean; 88 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.2 KiB/s wr, 127 op/s
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.068 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1223185893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.253 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.260 239969 DEBUG nova.compute.provider_tree [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.284 239969 DEBUG nova.scheduler.client.report [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.315 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.316 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.319 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.324 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.327 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.327 239969 INFO nova.compute.claims [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.386 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.387 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.393 239969 DEBUG nova.network.neutron [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Successfully updated port: 5010d02a-4286-4559-ab9e-1f1ee882ee9b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.406 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.412 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "refresh_cache-64435a76-a92d-4b8f-83e5-a6a85b70b10f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.413 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquired lock "refresh_cache-64435a76-a92d-4b8f-83e5-a6a85b70b10f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.413 239969 DEBUG nova.network.neutron [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.418 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.435 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.494 239969 DEBUG nova.compute.manager [req-fd9de78e-2d95-4618-a9cb-099816c1339f req-d2f1c415-0b40-46b8-bb8e-5d5ef161e1d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Received event network-changed-5010d02a-4286-4559-ab9e-1f1ee882ee9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.495 239969 DEBUG nova.compute.manager [req-fd9de78e-2d95-4618-a9cb-099816c1339f req-d2f1c415-0b40-46b8-bb8e-5d5ef161e1d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Refreshing instance network info cache due to event network-changed-5010d02a-4286-4559-ab9e-1f1ee882ee9b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.495 239969 DEBUG oslo_concurrency.lockutils [req-fd9de78e-2d95-4618-a9cb-099816c1339f req-d2f1c415-0b40-46b8-bb8e-5d5ef161e1d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-64435a76-a92d-4b8f-83e5-a6a85b70b10f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.525 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.568 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.569 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.569 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Creating image(s)
Jan 26 15:59:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.592 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 66382f82-a38c-4066-8564-7866c37ccd0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.619 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 66382f82-a38c-4066-8564-7866c37ccd0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.641 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 66382f82-a38c-4066-8564-7866c37ccd0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.646 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.674 239969 DEBUG nova.network.neutron [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.679 239969 DEBUG nova.policy [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48264b1a141f4ef5b99a45972af1903a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.715 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.715 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.716 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.716 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.734 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 66382f82-a38c-4066-8564-7866c37ccd0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:35 compute-0 nova_compute[239965]: 2026-01-26 15:59:35.737 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 66382f82-a38c-4066-8564-7866c37ccd0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 134 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 26 15:59:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1223185893' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1372759787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.164 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.174 239969 DEBUG nova.compute.provider_tree [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.221 239969 DEBUG nova.scheduler.client.report [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.260 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 66382f82-a38c-4066-8564-7866c37ccd0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.346 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] resizing rbd image 66382f82-a38c-4066-8564-7866c37ccd0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.430 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.431 239969 DEBUG nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.433 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.879s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.440 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.440 239969 INFO nova.compute.claims [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.614 239969 DEBUG nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.615 239969 DEBUG nova.network.neutron [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.642 239969 INFO nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.685 239969 DEBUG nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.693 239969 DEBUG nova.objects.instance [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lazy-loading 'migration_context' on Instance uuid 66382f82-a38c-4066-8564-7866c37ccd0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.713 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.713 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Ensure instance console log exists: /var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.714 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.714 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.714 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.773 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.806 239969 DEBUG nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.809 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.809 239969 INFO nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Creating image(s)
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.836 239969 DEBUG nova.storage.rbd_utils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.863 239969 DEBUG nova.storage.rbd_utils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.884 239969 DEBUG nova.storage.rbd_utils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.888 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.946 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.963 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.964 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.965 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.965 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.986 239969 DEBUG nova.storage.rbd_utils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:36 compute-0 nova_compute[239965]: 2026-01-26 15:59:36.991 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 a4ddea97-0b71-4080-abed-fd5748d7244e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:37 compute-0 ceph-mon[75140]: pgmap v1394: 305 pgs: 305 active+clean; 134 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 155 op/s
Jan 26 15:59:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1372759787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.256 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 a4ddea97-0b71-4080-abed-fd5748d7244e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.310 239969 DEBUG nova.storage.rbd_utils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] resizing rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:59:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/827895136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.353 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.382 239969 DEBUG nova.objects.instance [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'migration_context' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.384 239969 DEBUG nova.compute.provider_tree [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.401 239969 DEBUG nova.scheduler.client.report [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.404 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.405 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Ensure instance console log exists: /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.405 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.406 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.406 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.422 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.989s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.423 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.425 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 2.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.432 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.433 239969 INFO nova.compute.claims [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.493 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.493 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.517 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.539 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.647 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.648 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.649 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Creating image(s)
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.670 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.696 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.719 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.722 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.753 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.795 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Successfully created port: 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.801 239969 DEBUG nova.policy [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48264b1a141f4ef5b99a45972af1903a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.805 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.806 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.806 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.807 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.835 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.841 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 134 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.880 239969 DEBUG nova.network.neutron [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Updating instance_info_cache with network_info: [{"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.886 239969 DEBUG nova.policy [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '10e6dce963804f98b45b6f58ef8d1b2e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c41676eb0a634807a0c639355e39a512', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.913 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Releasing lock "refresh_cache-64435a76-a92d-4b8f-83e5-a6a85b70b10f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.914 239969 DEBUG nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Instance network_info: |[{"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.915 239969 DEBUG oslo_concurrency.lockutils [req-fd9de78e-2d95-4618-a9cb-099816c1339f req-d2f1c415-0b40-46b8-bb8e-5d5ef161e1d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-64435a76-a92d-4b8f-83e5-a6a85b70b10f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.916 239969 DEBUG nova.network.neutron [req-fd9de78e-2d95-4618-a9cb-099816c1339f req-d2f1c415-0b40-46b8-bb8e-5d5ef161e1d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Refreshing network info cache for port 5010d02a-4286-4559-ab9e-1f1ee882ee9b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.919 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Start _get_guest_xml network_info=[{"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.926 239969 WARNING nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.936 239969 DEBUG nova.virt.libvirt.host [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.937 239969 DEBUG nova.virt.libvirt.host [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.942 239969 DEBUG nova.virt.libvirt.host [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.943 239969 DEBUG nova.virt.libvirt.host [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.944 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.944 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.945 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.945 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.945 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.945 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.946 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.946 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.946 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.947 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.947 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.947 239969 DEBUG nova.virt.hardware [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:59:37 compute-0 nova_compute[239965]: 2026-01-26 15:59:37.952 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:37.964 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/827895136' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.358 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.423 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] resizing rbd image 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:59:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433612846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.466 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.713s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.472 239969 DEBUG nova.compute.provider_tree [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.512 239969 DEBUG nova.scheduler.client.report [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.524 239969 DEBUG nova.objects.instance [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lazy-loading 'migration_context' on Instance uuid 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.546 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.547 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.550 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 3.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.550 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.550 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.551 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.591 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.592 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Ensure instance console log exists: /var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.595 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.596 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.596 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.635 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.636 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:59:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4070128336' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.656 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.668 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.716s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.691 239969 DEBUG nova.storage.rbd_utils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.695 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.735 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.773 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443163.772524, 883e64be-1368-4217-b5a2-7353e3045e3c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.774 239969 INFO nova.compute.manager [-] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] VM Stopped (Lifecycle Event)
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.799 239969 DEBUG nova.compute.manager [None req-76df0ee0-b964-4a07-af3e-44f218878c71 - - - - - -] [instance: 883e64be-1368-4217-b5a2-7353e3045e3c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.830 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.831 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.831 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Creating image(s)
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.851 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image af55c460-3670-4fa6-855f-386e2c58b312_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.877 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image af55c460-3670-4fa6-855f-386e2c58b312_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.910 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image af55c460-3670-4fa6-855f-386e2c58b312_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.916 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:38 compute-0 nova_compute[239965]: 2026-01-26 15:59:38.967 239969 DEBUG nova.policy [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '48264b1a141f4ef5b99a45972af1903a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.011 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.012 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.013 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.013 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.045 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image af55c460-3670-4fa6-855f-386e2c58b312_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.051 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 af55c460-3670-4fa6-855f-386e2c58b312_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/759721684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.225 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1594728260' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:39 compute-0 ceph-mon[75140]: pgmap v1395: 305 pgs: 305 active+clean; 134 MiB data, 453 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 54 op/s
Jan 26 15:59:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2433612846' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4070128336' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/759721684' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.363 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.668s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.365 239969 DEBUG nova.virt.libvirt.vif [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1702437232',display_name='tempest-DeleteServersTestJSON-server-1702437232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1702437232',id=55,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-7mjf8wcb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:32Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=64435a76-a92d-4b8f-83e5-a6a85b70b10f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.366 239969 DEBUG nova.network.os_vif_util [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.367 239969 DEBUG nova.network.os_vif_util [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:52:67,bridge_name='br-int',has_traffic_filtering=True,id=5010d02a-4286-4559-ab9e-1f1ee882ee9b,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5010d02a-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.368 239969 DEBUG nova.objects.instance [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 64435a76-a92d-4b8f-83e5-a6a85b70b10f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.384 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 af55c460-3670-4fa6-855f-386e2c58b312_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.418 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <uuid>64435a76-a92d-4b8f-83e5-a6a85b70b10f</uuid>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <name>instance-00000037</name>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <nova:name>tempest-DeleteServersTestJSON-server-1702437232</nova:name>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:59:37</nova:creationTime>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <nova:user uuid="3c7cba75e9fd41599b1c9a3388447cdd">tempest-DeleteServersTestJSON-234439961-project-member</nova:user>
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <nova:project uuid="aee99e5b6af74088bd848cecc9592e82">tempest-DeleteServersTestJSON-234439961</nova:project>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <nova:port uuid="5010d02a-4286-4559-ab9e-1f1ee882ee9b">
Jan 26 15:59:39 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <system>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <entry name="serial">64435a76-a92d-4b8f-83e5-a6a85b70b10f</entry>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <entry name="uuid">64435a76-a92d-4b8f-83e5-a6a85b70b10f</entry>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     </system>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <os>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   </os>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <features>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   </features>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk">
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk.config">
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:39 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:00:52:67"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <target dev="tap5010d02a-42"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f/console.log" append="off"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <video>
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     </video>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:59:39 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:59:39 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:59:39 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:59:39 compute-0 nova_compute[239965]: </domain>
Jan 26 15:59:39 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.419 239969 DEBUG nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Preparing to wait for external event network-vif-plugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.420 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.420 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.420 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.421 239969 DEBUG nova.virt.libvirt.vif [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1702437232',display_name='tempest-DeleteServersTestJSON-server-1702437232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1702437232',id=55,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-7mjf8wcb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServers
TestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:32Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=64435a76-a92d-4b8f-83e5-a6a85b70b10f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.421 239969 DEBUG nova.network.os_vif_util [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.421 239969 DEBUG nova.network.os_vif_util [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:52:67,bridge_name='br-int',has_traffic_filtering=True,id=5010d02a-4286-4559-ab9e-1f1ee882ee9b,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5010d02a-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.422 239969 DEBUG os_vif [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:52:67,bridge_name='br-int',has_traffic_filtering=True,id=5010d02a-4286-4559-ab9e-1f1ee882ee9b,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5010d02a-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.422 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.423 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.423 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.455 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.456 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5010d02a-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.456 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5010d02a-42, col_values=(('external_ids', {'iface-id': '5010d02a-4286-4559-ab9e-1f1ee882ee9b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:52:67', 'vm-uuid': '64435a76-a92d-4b8f-83e5-a6a85b70b10f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:39 compute-0 NetworkManager[48954]: <info>  [1769443179.4597] manager: (tap5010d02a-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.461 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.469 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] resizing rbd image af55c460-3670-4fa6-855f-386e2c58b312_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.499 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.501 239969 INFO os_vif [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:52:67,bridge_name='br-int',has_traffic_filtering=True,id=5010d02a-4286-4559-ab9e-1f1ee882ee9b,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5010d02a-42')
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.605 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.607 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4076MB free_disk=59.96713743638247GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.607 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.607 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.672 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 64435a76-a92d-4b8f-83e5-a6a85b70b10f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.672 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 66382f82-a38c-4066-8564-7866c37ccd0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.672 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance af55c460-3670-4fa6-855f-386e2c58b312 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.672 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.672 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance a4ddea97-0b71-4080-abed-fd5748d7244e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.672 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.673 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 15:59:39 compute-0 nova_compute[239965]: 2026-01-26 15:59:39.861 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1396: 305 pgs: 305 active+clean; 244 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 6.1 MiB/s wr, 121 op/s
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.014 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.015 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.015 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No VIF found with MAC fa:16:3e:00:52:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.015 239969 INFO nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Using config drive
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.032 239969 DEBUG nova.storage.rbd_utils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.139 239969 DEBUG nova.objects.instance [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lazy-loading 'migration_context' on Instance uuid af55c460-3670-4fa6-855f-386e2c58b312 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.155 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.156 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Ensure instance console log exists: /var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.156 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.157 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.157 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1594728260' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:40 compute-0 ceph-mon[75140]: pgmap v1396: 305 pgs: 305 active+clean; 244 MiB data, 522 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 6.1 MiB/s wr, 121 op/s
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.419 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1616321296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.451 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.456 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.472 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.493 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 15:59:40 compute-0 nova_compute[239965]: 2026-01-26 15:59:40.493 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.138 239969 INFO nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Creating config drive at /var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f/disk.config
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.143 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphz9buzhq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.279 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphz9buzhq" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.307 239969 DEBUG nova.storage.rbd_utils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.312 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f/disk.config 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1616321296' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.449 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.450 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.450 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.451 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.451 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.456 239969 DEBUG oslo_concurrency.processutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f/disk.config 64435a76-a92d-4b8f-83e5-a6a85b70b10f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.457 239969 INFO nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Deleting local config drive /var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f/disk.config because it was imported into RBD.
Jan 26 15:59:41 compute-0 kernel: tap5010d02a-42: entered promiscuous mode
Jan 26 15:59:41 compute-0 NetworkManager[48954]: <info>  [1769443181.5081] manager: (tap5010d02a-42): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Jan 26 15:59:41 compute-0 ovn_controller[146046]: 2026-01-26T15:59:41Z|00438|binding|INFO|Claiming lport 5010d02a-4286-4559-ab9e-1f1ee882ee9b for this chassis.
Jan 26 15:59:41 compute-0 ovn_controller[146046]: 2026-01-26T15:59:41Z|00439|binding|INFO|5010d02a-4286-4559-ab9e-1f1ee882ee9b: Claiming fa:16:3e:00:52:67 10.100.0.9
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.508 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.512 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.520 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:52:67 10.100.0.9'], port_security=['fa:16:3e:00:52:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '64435a76-a92d-4b8f-83e5-a6a85b70b10f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5010d02a-4286-4559-ab9e-1f1ee882ee9b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.522 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5010d02a-4286-4559-ab9e-1f1ee882ee9b in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 bound to our chassis
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.524 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.538 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[079ad0ec-8b1c-452e-b798-7ceaf124bd08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.539 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00eb7549-d1 in ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.541 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00eb7549-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.541 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f97907-a8d8-4726-98eb-50104947671b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 systemd-udevd[288738]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.542 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ef0d6ad6-9ad2-47a7-8ba6-33ed182a5ebe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.553 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Successfully created port: 360a1634-f2af-423f-8312-2d8623a1dab4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.558 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[fb5c9ced-5418-40c7-aff6-b436b21d1d77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 NetworkManager[48954]: <info>  [1769443181.5620] device (tap5010d02a-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:59:41 compute-0 NetworkManager[48954]: <info>  [1769443181.5631] device (tap5010d02a-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:59:41 compute-0 systemd-machined[208061]: New machine qemu-61-instance-00000037.
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.579 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Successfully created port: aa71bf39-b625-491e-9e6e-9a6b6bc4d087 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.587 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ad4b523c-4473-4b6a-9031-7f61e4e02f50]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.588 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:41 compute-0 systemd[1]: Started Virtual Machine qemu-61-instance-00000037.
Jan 26 15:59:41 compute-0 ovn_controller[146046]: 2026-01-26T15:59:41Z|00440|binding|INFO|Setting lport 5010d02a-4286-4559-ab9e-1f1ee882ee9b ovn-installed in OVS
Jan 26 15:59:41 compute-0 ovn_controller[146046]: 2026-01-26T15:59:41Z|00441|binding|INFO|Setting lport 5010d02a-4286-4559-ab9e-1f1ee882ee9b up in Southbound
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.617 239969 DEBUG nova.network.neutron [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Successfully created port: ad309073-b3c7-40b4-a937-5d72ec411ed7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.618 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[330db05a-0165-40f9-a118-4409a73ac78a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 NetworkManager[48954]: <info>  [1769443181.6268] manager: (tap00eb7549-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/201)
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.626 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[74f3a65c-ef25-48aa-a164-ebb73c97b9b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.661 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0e396d39-82a4-40b1-a3f3-2cb5eb465740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.665 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[44ec07e1-604c-4fc8-a86a-9e5f9926683e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 NetworkManager[48954]: <info>  [1769443181.6888] device (tap00eb7549-d0): carrier: link connected
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.697 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2b1bf778-e301-4163-af63-5dd615e3bc60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.716 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[abc88ada-4e22-4644-93c4-c741fb6e12d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467249, 'reachable_time': 25451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288771, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.732 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[caf98245-1156-4f69-9e99-ea61ad7ae154]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:aa8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467249, 'tstamp': 467249}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 288772, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.749 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ffacfca8-5713-4e39-a12d-5ab522b5099e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 136], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467249, 'reachable_time': 25451, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 288773, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.783 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[369d3c90-0d1a-4a0c-9931-c652cf971c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.812 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Successfully updated port: 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.835 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "refresh_cache-66382f82-a38c-4066-8564-7866c37ccd0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.835 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquired lock "refresh_cache-66382f82-a38c-4066-8564-7866c37ccd0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.835 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.856 239969 DEBUG nova.network.neutron [req-fd9de78e-2d95-4618-a9cb-099816c1339f req-d2f1c415-0b40-46b8-bb8e-5d5ef161e1d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Updated VIF entry in instance network info cache for port 5010d02a-4286-4559-ab9e-1f1ee882ee9b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.857 239969 DEBUG nova.network.neutron [req-fd9de78e-2d95-4618-a9cb-099816c1339f req-d2f1c415-0b40-46b8-bb8e-5d5ef161e1d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Updating instance_info_cache with network_info: [{"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.866 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4928ac4f-3df9-4f35-94f0-74381e8edfa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.867 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.868 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.868 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00eb7549-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.869 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:41 compute-0 NetworkManager[48954]: <info>  [1769443181.8706] manager: (tap00eb7549-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Jan 26 15:59:41 compute-0 kernel: tap00eb7549-d0: entered promiscuous mode
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.872 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00eb7549-d0, col_values=(('external_ids', {'iface-id': 'c26451f4-ab48-4e8d-b25b-9e4988573b7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 304 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 8.1 MiB/s wr, 127 op/s
Jan 26 15:59:41 compute-0 ovn_controller[146046]: 2026-01-26T15:59:41Z|00442|binding|INFO|Releasing lport c26451f4-ab48-4e8d-b25b-9e4988573b7d from this chassis (sb_readonly=0)
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.879 239969 DEBUG oslo_concurrency.lockutils [req-fd9de78e-2d95-4618-a9cb-099816c1339f req-d2f1c415-0b40-46b8-bb8e-5d5ef161e1d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-64435a76-a92d-4b8f-83e5-a6a85b70b10f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.889 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.890 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.891 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[19f8e436-a0c8-4b3a-a783-175a3e519ab0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.892 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:59:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:41.892 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'env', 'PROCESS_TAG=haproxy-00eb7549-d24b-4657-b244-7664c8a34b20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00eb7549-d24b-4657-b244-7664c8a34b20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.907 239969 DEBUG nova.compute.manager [req-b717cf88-26c6-4bf9-b77d-0c77d286f5e9 req-331c6612-1ffd-4f2b-b36f-c2676c81b24b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Received event network-vif-plugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.907 239969 DEBUG oslo_concurrency.lockutils [req-b717cf88-26c6-4bf9-b77d-0c77d286f5e9 req-331c6612-1ffd-4f2b-b36f-c2676c81b24b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.908 239969 DEBUG oslo_concurrency.lockutils [req-b717cf88-26c6-4bf9-b77d-0c77d286f5e9 req-331c6612-1ffd-4f2b-b36f-c2676c81b24b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.908 239969 DEBUG oslo_concurrency.lockutils [req-b717cf88-26c6-4bf9-b77d-0c77d286f5e9 req-331c6612-1ffd-4f2b-b36f-c2676c81b24b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.908 239969 DEBUG nova.compute.manager [req-b717cf88-26c6-4bf9-b77d-0c77d286f5e9 req-331c6612-1ffd-4f2b-b36f-c2676c81b24b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Processing event network-vif-plugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.919 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443166.9182334, 6b4dbd52-cbad-4d8f-8446-e47fabdc170c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.919 239969 INFO nova.compute.manager [-] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] VM Stopped (Lifecycle Event)
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.938 239969 DEBUG nova.compute.manager [None req-77817a86-66b6-49fb-b155-1acf2473f816 - - - - - -] [instance: 6b4dbd52-cbad-4d8f-8446-e47fabdc170c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.976 239969 DEBUG nova.compute.manager [req-baa68dd8-85e3-4530-806f-c51d3f180f50 req-a6246853-504e-4f14-9b53-197bf88c3b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Received event network-changed-9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.977 239969 DEBUG nova.compute.manager [req-baa68dd8-85e3-4530-806f-c51d3f180f50 req-a6246853-504e-4f14-9b53-197bf88c3b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Refreshing instance network info cache due to event network-changed-9fdb2907-6e42-42c6-ac8b-e9b435fa2c28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:59:41 compute-0 nova_compute[239965]: 2026-01-26 15:59:41.978 239969 DEBUG oslo_concurrency.lockutils [req-baa68dd8-85e3-4530-806f-c51d3f180f50 req-a6246853-504e-4f14-9b53-197bf88c3b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-66382f82-a38c-4066-8564-7866c37ccd0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.059 239969 DEBUG nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.060 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443182.0590196, 64435a76-a92d-4b8f-83e5-a6a85b70b10f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.060 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] VM Started (Lifecycle Event)
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.065 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.069 239969 INFO nova.virt.libvirt.driver [-] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Instance spawned successfully.
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.070 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.096 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.103 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.106 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.106 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.107 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.107 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.107 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.108 239969 DEBUG nova.virt.libvirt.driver [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.137 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.169 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.170 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443182.059225, 64435a76-a92d-4b8f-83e5-a6a85b70b10f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.170 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] VM Paused (Lifecycle Event)
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.195 239969 INFO nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Took 9.15 seconds to spawn the instance on the hypervisor.
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.196 239969 DEBUG nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.203 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.207 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443182.0627456, 64435a76-a92d-4b8f-83e5-a6a85b70b10f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.207 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] VM Resumed (Lifecycle Event)
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.266 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.271 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.294 239969 INFO nova.compute.manager [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Took 10.21 seconds to build instance.
Jan 26 15:59:42 compute-0 podman[288847]: 2026-01-26 15:59:42.309557912 +0000 UTC m=+0.052916570 container create 0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.312 239969 DEBUG oslo_concurrency.lockutils [None req-82365325-bfaf-4cce-8f87-0a0c4e12b8cf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.313 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 9.473s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:42 compute-0 ceph-mon[75140]: pgmap v1397: 305 pgs: 305 active+clean; 304 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 8.1 MiB/s wr, 127 op/s
Jan 26 15:59:42 compute-0 systemd[1]: Started libpod-conmon-0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0.scope.
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.360 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:42 compute-0 podman[288847]: 2026-01-26 15:59:42.280843617 +0000 UTC m=+0.024202295 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:59:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd73186a7f8a613ec0256a89f4b0bff995698b9d81cbcec291bee2220e3d2036/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:59:42 compute-0 podman[288847]: 2026-01-26 15:59:42.405788913 +0000 UTC m=+0.149147591 container init 0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:59:42 compute-0 podman[288847]: 2026-01-26 15:59:42.412057387 +0000 UTC m=+0.155416045 container start 0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 15:59:42 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[288863]: [NOTICE]   (288867) : New worker (288869) forked
Jan 26 15:59:42 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[288863]: [NOTICE]   (288867) : Loading success.
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:42 compute-0 nova_compute[239965]: 2026-01-26 15:59:42.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.195 239969 DEBUG nova.network.neutron [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Successfully updated port: ad309073-b3c7-40b4-a937-5d72ec411ed7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.222 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.223 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquired lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.223 239969 DEBUG nova.network.neutron [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.537 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Updating instance_info_cache with network_info: [{"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.561 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Releasing lock "refresh_cache-66382f82-a38c-4066-8564-7866c37ccd0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.562 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Instance network_info: |[{"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.562 239969 DEBUG oslo_concurrency.lockutils [req-baa68dd8-85e3-4530-806f-c51d3f180f50 req-a6246853-504e-4f14-9b53-197bf88c3b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-66382f82-a38c-4066-8564-7866c37ccd0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.563 239969 DEBUG nova.network.neutron [req-baa68dd8-85e3-4530-806f-c51d3f180f50 req-a6246853-504e-4f14-9b53-197bf88c3b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Refreshing network info cache for port 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.565 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Start _get_guest_xml network_info=[{"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.570 239969 WARNING nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.578 239969 DEBUG nova.network.neutron [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.581 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.582 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.591 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.592 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.592 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.593 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.593 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.594 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.594 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.594 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.595 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.595 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.595 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.596 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.596 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.596 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.599 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.672 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Successfully updated port: 360a1634-f2af-423f-8312-2d8623a1dab4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.688 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Successfully updated port: aa71bf39-b625-491e-9e6e-9a6b6bc4d087 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.691 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "refresh_cache-7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.691 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquired lock "refresh_cache-7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.692 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.707 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "refresh_cache-af55c460-3670-4fa6-855f-386e2c58b312" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.708 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquired lock "refresh_cache-af55c460-3670-4fa6-855f-386e2c58b312" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.709 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 304 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 8.1 MiB/s wr, 123 op/s
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.892 239969 INFO nova.compute.manager [None req-196d6726-7fde-4da1-a422-1df17d28fe34 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Pausing
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.894 239969 DEBUG nova.objects.instance [None req-196d6726-7fde-4da1-a422-1df17d28fe34 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'flavor' on Instance uuid 64435a76-a92d-4b8f-83e5-a6a85b70b10f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.922 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443183.9222448, 64435a76-a92d-4b8f-83e5-a6a85b70b10f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.923 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] VM Paused (Lifecycle Event)
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.925 239969 DEBUG nova.compute.manager [None req-196d6726-7fde-4da1-a422-1df17d28fe34 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.929 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:59:43 compute-0 nova_compute[239965]: 2026-01-26 15:59:43.936 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:59:44 compute-0 nova_compute[239965]: 2026-01-26 15:59:44.098 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:44 compute-0 nova_compute[239965]: 2026-01-26 15:59:44.103 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:44 compute-0 nova_compute[239965]: 2026-01-26 15:59:44.460 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.337 239969 DEBUG nova.compute.manager [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Received event network-vif-plugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.338 239969 DEBUG oslo_concurrency.lockutils [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.339 239969 DEBUG oslo_concurrency.lockutils [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.340 239969 DEBUG oslo_concurrency.lockutils [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.340 239969 DEBUG nova.compute.manager [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] No waiting events found dispatching network-vif-plugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.341 239969 WARNING nova.compute.manager [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Received unexpected event network-vif-plugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b for instance with vm_state paused and task_state None.
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.341 239969 DEBUG nova.compute.manager [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Received event network-changed-360a1634-f2af-423f-8312-2d8623a1dab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.341 239969 DEBUG nova.compute.manager [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Refreshing instance network info cache due to event network-changed-360a1634-f2af-423f-8312-2d8623a1dab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.342 239969 DEBUG oslo_concurrency.lockutils [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.419 239969 DEBUG nova.compute.manager [req-cd191ac5-4caf-44fe-8b82-12b00d4ca9e1 req-600f1407-6ee5-49e3-8111-50dc241f5790 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.420 239969 DEBUG nova.compute.manager [req-cd191ac5-4caf-44fe-8b82-12b00d4ca9e1 req-600f1407-6ee5-49e3-8111-50dc241f5790 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing instance network info cache due to event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.420 239969 DEBUG oslo_concurrency.lockutils [req-cd191ac5-4caf-44fe-8b82-12b00d4ca9e1 req-600f1407-6ee5-49e3-8111-50dc241f5790 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.422 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.718 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Updating instance_info_cache with network_info: [{"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.739 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Releasing lock "refresh_cache-af55c460-3670-4fa6-855f-386e2c58b312" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.740 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Instance network_info: |[{"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.742 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Start _get_guest_xml network_info=[{"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.745 239969 WARNING nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.751 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.751 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.755 239969 DEBUG nova.network.neutron [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Updating instance_info_cache with network_info: [{"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.757 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.758 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.758 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.758 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.759 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.759 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.759 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.759 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.759 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.760 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.760 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.760 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.760 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.760 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.763 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.799 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Releasing lock "refresh_cache-7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.801 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Instance network_info: |[{"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.802 239969 DEBUG oslo_concurrency.lockutils [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.803 239969 DEBUG nova.network.neutron [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Refreshing network info cache for port 360a1634-f2af-423f-8312-2d8623a1dab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.810 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Start _get_guest_xml network_info=[{"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.817 239969 WARNING nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.823 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.824 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.831 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.831 239969 DEBUG nova.virt.libvirt.host [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.832 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.832 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.833 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.833 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.833 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.834 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.834 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.834 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.834 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.835 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.835 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.835 239969 DEBUG nova.virt.hardware [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.840 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1399: 305 pgs: 305 active+clean; 319 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 8.9 MiB/s wr, 209 op/s
Jan 26 15:59:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/474998396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:45 compute-0 nova_compute[239965]: 2026-01-26 15:59:45.963 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:45 compute-0 ceph-mon[75140]: pgmap v1398: 305 pgs: 305 active+clean; 304 MiB data, 546 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 8.1 MiB/s wr, 123 op/s
Jan 26 15:59:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/474998396' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.001 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 66382f82-a38c-4066-8564-7866c37ccd0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.006 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.075 239969 DEBUG nova.network.neutron [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updating instance_info_cache with network_info: [{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.099 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Releasing lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.100 239969 DEBUG nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance network_info: |[{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.100 239969 DEBUG oslo_concurrency.lockutils [req-cd191ac5-4caf-44fe-8b82-12b00d4ca9e1 req-600f1407-6ee5-49e3-8111-50dc241f5790 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.100 239969 DEBUG nova.network.neutron [req-cd191ac5-4caf-44fe-8b82-12b00d4ca9e1 req-600f1407-6ee5-49e3-8111-50dc241f5790 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.103 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Start _get_guest_xml network_info=[{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.107 239969 WARNING nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.113 239969 DEBUG nova.virt.libvirt.host [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.114 239969 DEBUG nova.virt.libvirt.host [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.125 239969 DEBUG nova.virt.libvirt.host [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.127 239969 DEBUG nova.virt.libvirt.host [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.129 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.130 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.131 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.131 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.132 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.132 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.132 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.133 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.133 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.134 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.134 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.134 239969 DEBUG nova.virt.hardware [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.139 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2362276326' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.374 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.396 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image af55c460-3670-4fa6-855f-386e2c58b312_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.399 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2895407034' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.432 239969 DEBUG nova.network.neutron [req-baa68dd8-85e3-4530-806f-c51d3f180f50 req-a6246853-504e-4f14-9b53-197bf88c3b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Updated VIF entry in instance network info cache for port 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.433 239969 DEBUG nova.network.neutron [req-baa68dd8-85e3-4530-806f-c51d3f180f50 req-a6246853-504e-4f14-9b53-197bf88c3b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Updating instance_info_cache with network_info: [{"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.440 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.468 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.477 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.519 239969 DEBUG oslo_concurrency.lockutils [req-baa68dd8-85e3-4530-806f-c51d3f180f50 req-a6246853-504e-4f14-9b53-197bf88c3b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-66382f82-a38c-4066-8564-7866c37ccd0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.521 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 15:59:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338601244' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.604 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.606 239969 DEBUG nova.virt.libvirt.vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1762749817',display_name='tempest-ListServersNegativeTestJSON-server-1762749817-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1762749817-1',id=56,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07b927cfeb484a5d80d731f9e9abfa4f',ramdisk_id='',reservation_id='r-oeb19hac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1615111890',owner_user_name='te
mpest-ListServersNegativeTestJSON-1615111890-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:35Z,user_data=None,user_id='48264b1a141f4ef5b99a45972af1903a',uuid=66382f82-a38c-4066-8564-7866c37ccd0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.606 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converting VIF {"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.607 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:53:13,bridge_name='br-int',has_traffic_filtering=True,id=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fdb2907-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.608 239969 DEBUG nova.objects.instance [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 66382f82-a38c-4066-8564-7866c37ccd0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.625 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <uuid>66382f82-a38c-4066-8564-7866c37ccd0f</uuid>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <name>instance-00000038</name>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1762749817-1</nova:name>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:59:43</nova:creationTime>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <nova:user uuid="48264b1a141f4ef5b99a45972af1903a">tempest-ListServersNegativeTestJSON-1615111890-project-member</nova:user>
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <nova:project uuid="07b927cfeb484a5d80d731f9e9abfa4f">tempest-ListServersNegativeTestJSON-1615111890</nova:project>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <nova:port uuid="9fdb2907-6e42-42c6-ac8b-e9b435fa2c28">
Jan 26 15:59:46 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <system>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <entry name="serial">66382f82-a38c-4066-8564-7866c37ccd0f</entry>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <entry name="uuid">66382f82-a38c-4066-8564-7866c37ccd0f</entry>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     </system>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <os>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   </os>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <features>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   </features>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/66382f82-a38c-4066-8564-7866c37ccd0f_disk">
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/66382f82-a38c-4066-8564-7866c37ccd0f_disk.config">
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:1a:53:13"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <target dev="tap9fdb2907-6e"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f/console.log" append="off"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <video>
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     </video>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:59:46 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:59:46 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:59:46 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:59:46 compute-0 nova_compute[239965]: </domain>
Jan 26 15:59:46 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.632 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Preparing to wait for external event network-vif-plugged-9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.633 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.633 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.634 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.635 239969 DEBUG nova.virt.libvirt.vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1762749817',display_name='tempest-ListServersNegativeTestJSON-server-1762749817-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1762749817-1',id=56,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07b927cfeb484a5d80d731f9e9abfa4f',ramdisk_id='',reservation_id='r-oeb19hac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1615111890',owner_user_name='tempest-ListServersNegativeTestJSON-1615111890-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:35Z,user_data=None,user_id='48264b1a141f4ef5b99a45972af1903a',uuid=66382f82-a38c-4066-8564-7866c37ccd0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.635 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converting VIF {"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.636 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:53:13,bridge_name='br-int',has_traffic_filtering=True,id=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fdb2907-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.636 239969 DEBUG os_vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:53:13,bridge_name='br-int',has_traffic_filtering=True,id=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fdb2907-6e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.637 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.638 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.638 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.644 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.645 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fdb2907-6e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.646 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fdb2907-6e, col_values=(('external_ids', {'iface-id': '9fdb2907-6e42-42c6-ac8b-e9b435fa2c28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:53:13', 'vm-uuid': '66382f82-a38c-4066-8564-7866c37ccd0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.647 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:46 compute-0 NetworkManager[48954]: <info>  [1769443186.6494] manager: (tap9fdb2907-6e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.652 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.659 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.661 239969 INFO os_vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:53:13,bridge_name='br-int',has_traffic_filtering=True,id=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fdb2907-6e')
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.725 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.726 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.727 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] No VIF found with MAC fa:16:3e:1a:53:13, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.728 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Using config drive
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.757 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 66382f82-a38c-4066-8564-7866c37ccd0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3112117004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.792 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.653s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.814 239969 DEBUG nova.storage.rbd_utils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:46 compute-0 nova_compute[239965]: 2026-01-26 15:59:46.820 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:47 compute-0 ceph-mon[75140]: pgmap v1399: 305 pgs: 305 active+clean; 319 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 8.9 MiB/s wr, 209 op/s
Jan 26 15:59:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2362276326' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2895407034' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2338601244' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3112117004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638440723' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.038 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.040 239969 DEBUG nova.virt.libvirt.vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1762749817',display_name='tempest-ListServersNegativeTestJSON-server-1762749817-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1762749817-2',id=57,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07b927cfeb484a5d80d731f9e9abfa4f',ramdisk_id='',reservation_id='r-oeb19hac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1615111890',owner_user_name='tempest-ListServersNegativeTestJSON-1615111890-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:38Z,user_data=None,user_id='48264b1a141f4ef5b99a45972af1903a',uuid=af55c460-3670-4fa6-855f-386e2c58b312,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.040 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converting VIF {"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.041 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:05:48,bridge_name='br-int',has_traffic_filtering=True,id=aa71bf39-b625-491e-9e6e-9a6b6bc4d087,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa71bf39-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.042 239969 DEBUG nova.objects.instance [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lazy-loading 'pci_devices' on Instance uuid af55c460-3670-4fa6-855f-386e2c58b312 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.057 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <uuid>af55c460-3670-4fa6-855f-386e2c58b312</uuid>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <name>instance-00000039</name>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1762749817-2</nova:name>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:59:45</nova:creationTime>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:user uuid="48264b1a141f4ef5b99a45972af1903a">tempest-ListServersNegativeTestJSON-1615111890-project-member</nova:user>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:project uuid="07b927cfeb484a5d80d731f9e9abfa4f">tempest-ListServersNegativeTestJSON-1615111890</nova:project>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:port uuid="aa71bf39-b625-491e-9e6e-9a6b6bc4d087">
Jan 26 15:59:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <system>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="serial">af55c460-3670-4fa6-855f-386e2c58b312</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="uuid">af55c460-3670-4fa6-855f-386e2c58b312</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </system>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <os>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </os>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <features>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </features>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/af55c460-3670-4fa6-855f-386e2c58b312_disk">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/af55c460-3670-4fa6-855f-386e2c58b312_disk.config">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:ed:05:48"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <target dev="tapaa71bf39-b6"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312/console.log" append="off"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <video>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </video>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:59:47 compute-0 nova_compute[239965]: </domain>
Jan 26 15:59:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.063 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Preparing to wait for external event network-vif-plugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.064 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "af55c460-3670-4fa6-855f-386e2c58b312-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.064 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.065 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.066 239969 DEBUG nova.virt.libvirt.vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1762749817',display_name='tempest-ListServersNegativeTestJSON-server-1762749817-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1762749817-2',id=57,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07b927cfeb484a5d80d731f9e9abfa4f',ramdisk_id='',reservation_id='r-oeb19hac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1615111890',owner_user_name='tempest-ListServersNegativeTestJSON-1615111890-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:38Z,user_data=None,user_id='48264b1a141f4ef5b99a45972af1903a',uuid=af55c460-3670-4fa6-855f-386e2c58b312,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.066 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converting VIF {"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.067 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:05:48,bridge_name='br-int',has_traffic_filtering=True,id=aa71bf39-b625-491e-9e6e-9a6b6bc4d087,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa71bf39-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.067 239969 DEBUG os_vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:05:48,bridge_name='br-int',has_traffic_filtering=True,id=aa71bf39-b625-491e-9e6e-9a6b6bc4d087,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa71bf39-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.073 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.073 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2968440269' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.078 239969 DEBUG oslo_concurrency.lockutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.078 239969 DEBUG oslo_concurrency.lockutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.078 239969 DEBUG oslo_concurrency.lockutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.079 239969 DEBUG oslo_concurrency.lockutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.079 239969 DEBUG oslo_concurrency.lockutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.081 239969 INFO nova.compute.manager [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Terminating instance
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.082 239969 DEBUG nova.compute.manager [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.087 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa71bf39-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.089 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa71bf39-b6, col_values=(('external_ids', {'iface-id': 'aa71bf39-b625-491e-9e6e-9a6b6bc4d087', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:05:48', 'vm-uuid': 'af55c460-3670-4fa6-855f-386e2c58b312'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.090 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.0921] manager: (tapaa71bf39-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/204)
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.093 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.099 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.100 239969 INFO os_vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:05:48,bridge_name='br-int',has_traffic_filtering=True,id=aa71bf39-b625-491e-9e6e-9a6b6bc4d087,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa71bf39-b6')
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.102 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Creating config drive at /var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f/disk.config
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.107 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_1_kxdgw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.136 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.144 239969 DEBUG nova.virt.libvirt.vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1762749817',display_name='tempest-ListServersNegativeTestJSON-server-1762749817-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1762749817-3',id=58,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07b927cfeb484a5d80d731f9e9abfa4f',ramdisk_id='',reservation_id='r-oeb19hac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1615111890',owner_user_name='tempest-ListServersNegativeTestJSON-1615111890-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:37Z,user_data=None,user_id='48264b1a141f4ef5b99a45972af1903a',uuid=7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.145 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converting VIF {"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.146 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:d7:9a,bridge_name='br-int',has_traffic_filtering=True,id=360a1634-f2af-423f-8312-2d8623a1dab4,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap360a1634-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.149 239969 DEBUG nova.objects.instance [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:47 compute-0 kernel: tap5010d02a-42 (unregistering): left promiscuous mode
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.1686] device (tap5010d02a-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.174 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <uuid>7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97</uuid>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <name>instance-0000003a</name>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:name>tempest-ListServersNegativeTestJSON-server-1762749817-3</nova:name>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:59:45</nova:creationTime>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:user uuid="48264b1a141f4ef5b99a45972af1903a">tempest-ListServersNegativeTestJSON-1615111890-project-member</nova:user>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:project uuid="07b927cfeb484a5d80d731f9e9abfa4f">tempest-ListServersNegativeTestJSON-1615111890</nova:project>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:port uuid="360a1634-f2af-423f-8312-2d8623a1dab4">
Jan 26 15:59:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <system>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="serial">7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="uuid">7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </system>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <os>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </os>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <features>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </features>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk.config">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:2a:d7:9a"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <target dev="tap360a1634-f2"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97/console.log" append="off"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <video>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </video>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:59:47 compute-0 nova_compute[239965]: </domain>
Jan 26 15:59:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:59:47 compute-0 ovn_controller[146046]: 2026-01-26T15:59:47Z|00443|binding|INFO|Releasing lport 5010d02a-4286-4559-ab9e-1f1ee882ee9b from this chassis (sb_readonly=0)
Jan 26 15:59:47 compute-0 ovn_controller[146046]: 2026-01-26T15:59:47Z|00444|binding|INFO|Setting lport 5010d02a-4286-4559-ab9e-1f1ee882ee9b down in Southbound
Jan 26 15:59:47 compute-0 ovn_controller[146046]: 2026-01-26T15:59:47Z|00445|binding|INFO|Removing iface tap5010d02a-42 ovn-installed in OVS
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.185 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Preparing to wait for external event network-vif-plugged-360a1634-f2af-423f-8312-2d8623a1dab4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.186 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.186 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.187 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.188 239969 DEBUG nova.virt.libvirt.vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1762749817',display_name='tempest-ListServersNegativeTestJSON-server-1762749817-3',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1762749817-3',id=58,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=2,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='07b927cfeb484a5d80d731f9e9abfa4f',ramdisk_id='',reservation_id='r-oeb19hac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServersNegativeTestJSON-1615111890',owner_user_name='tempest-ListServersNegativeTestJSON-1615111890-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:37Z,user_data=None,user_id='48264b1a141f4ef5b99a45972af1903a',uuid=7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.189 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converting VIF {"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.190 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:52:67 10.100.0.9'], port_security=['fa:16:3e:00:52:67 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '64435a76-a92d-4b8f-83e5-a6a85b70b10f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5010d02a-4286-4559-ab9e-1f1ee882ee9b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.190 239969 DEBUG nova.network.os_vif_util [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:d7:9a,bridge_name='br-int',has_traffic_filtering=True,id=360a1634-f2af-423f-8312-2d8623a1dab4,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap360a1634-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.192 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5010d02a-4286-4559-ab9e-1f1ee882ee9b in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 unbound from our chassis
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.192 239969 DEBUG os_vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:d7:9a,bridge_name='br-int',has_traffic_filtering=True,id=360a1634-f2af-423f-8312-2d8623a1dab4,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap360a1634-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.193 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.193 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00eb7549-d24b-4657-b244-7664c8a34b20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.195 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.194 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[49d57452-7c15-42bd-8e13-6d128416f8b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.195 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace which is not needed anymore
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.195 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.196 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.198 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.198 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap360a1634-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.199 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap360a1634-f2, col_values=(('external_ids', {'iface-id': '360a1634-f2af-423f-8312-2d8623a1dab4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:d7:9a', 'vm-uuid': '7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.200 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.2018] manager: (tap360a1634-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.203 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.218 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Deactivated successfully.
Jan 26 15:59:47 compute-0 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000037.scope: Consumed 2.356s CPU time.
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 systemd-machined[208061]: Machine qemu-61-instance-00000037 terminated.
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.230 239969 INFO os_vif [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:d7:9a,bridge_name='br-int',has_traffic_filtering=True,id=360a1634-f2af-423f-8312-2d8623a1dab4,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap360a1634-f2')
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.244 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_1_kxdgw" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.279 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 66382f82-a38c-4066-8564-7866c37ccd0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.304 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f/disk.config 66382f82-a38c-4066-8564-7866c37ccd0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:47 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[288863]: [NOTICE]   (288867) : haproxy version is 2.8.14-c23fe91
Jan 26 15:59:47 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[288863]: [NOTICE]   (288867) : path to executable is /usr/sbin/haproxy
Jan 26 15:59:47 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[288863]: [WARNING]  (288867) : Exiting Master process...
Jan 26 15:59:47 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[288863]: [ALERT]    (288867) : Current worker (288869) exited with code 143 (Terminated)
Jan 26 15:59:47 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[288863]: [WARNING]  (288867) : All workers exited. Exiting... (0)
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.344 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.344 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.344 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] No VIF found with MAC fa:16:3e:ed:05:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.345 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Using config drive
Jan 26 15:59:47 compute-0 systemd[1]: libpod-0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0.scope: Deactivated successfully.
Jan 26 15:59:47 compute-0 podman[289197]: 2026-01-26 15:59:47.352123408 +0000 UTC m=+0.058439225 container died 0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.367 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image af55c460-3670-4fa6-855f-386e2c58b312_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.383 239969 INFO nova.virt.libvirt.driver [-] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Instance destroyed successfully.
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.384 239969 DEBUG nova.objects.instance [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'resources' on Instance uuid 64435a76-a92d-4b8f-83e5-a6a85b70b10f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.413 239969 DEBUG nova.virt.libvirt.vif [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:59:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1702437232',display_name='tempest-DeleteServersTestJSON-server-1702437232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1702437232',id=55,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:59:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-7mjf8wcb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:59:44Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=64435a76-a92d-4b8f-83e5-a6a85b70b10f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.414 239969 DEBUG nova.network.os_vif_util [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "address": "fa:16:3e:00:52:67", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5010d02a-42", "ovs_interfaceid": "5010d02a-4286-4559-ab9e-1f1ee882ee9b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.415 239969 DEBUG nova.network.os_vif_util [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:52:67,bridge_name='br-int',has_traffic_filtering=True,id=5010d02a-4286-4559-ab9e-1f1ee882ee9b,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5010d02a-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.415 239969 DEBUG os_vif [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:52:67,bridge_name='br-int',has_traffic_filtering=True,id=5010d02a-4286-4559-ab9e-1f1ee882ee9b,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5010d02a-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.420 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.420 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5010d02a-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.422 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.423 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 15:59:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1767988870' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.431 239969 DEBUG nova.compute.manager [req-7b8660ba-89b4-4cea-809c-ab861c410dec req-3bd2f663-e960-4b8d-af61-6015c4b1169e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Received event network-vif-unplugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.431 239969 DEBUG oslo_concurrency.lockutils [req-7b8660ba-89b4-4cea-809c-ab861c410dec req-3bd2f663-e960-4b8d-af61-6015c4b1169e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.432 239969 DEBUG oslo_concurrency.lockutils [req-7b8660ba-89b4-4cea-809c-ab861c410dec req-3bd2f663-e960-4b8d-af61-6015c4b1169e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.432 239969 DEBUG oslo_concurrency.lockutils [req-7b8660ba-89b4-4cea-809c-ab861c410dec req-3bd2f663-e960-4b8d-af61-6015c4b1169e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.433 239969 DEBUG nova.compute.manager [req-7b8660ba-89b4-4cea-809c-ab861c410dec req-3bd2f663-e960-4b8d-af61-6015c4b1169e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] No waiting events found dispatching network-vif-unplugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0-userdata-shm.mount: Deactivated successfully.
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.433 239969 DEBUG nova.compute.manager [req-7b8660ba-89b4-4cea-809c-ab861c410dec req-3bd2f663-e960-4b8d-af61-6015c4b1169e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Received event network-vif-unplugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 15:59:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd73186a7f8a613ec0256a89f4b0bff995698b9d81cbcec291bee2220e3d2036-merged.mount: Deactivated successfully.
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.440 239969 DEBUG nova.network.neutron [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Updated VIF entry in instance network info cache for port 360a1634-f2af-423f-8312-2d8623a1dab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.440 239969 DEBUG nova.network.neutron [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Updating instance_info_cache with network_info: [{"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.441 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.444 239969 INFO os_vif [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:52:67,bridge_name='br-int',has_traffic_filtering=True,id=5010d02a-4286-4559-ab9e-1f1ee882ee9b,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5010d02a-42')
Jan 26 15:59:47 compute-0 podman[289197]: 2026-01-26 15:59:47.45486082 +0000 UTC m=+0.161176637 container cleanup 0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.461 239969 DEBUG oslo_concurrency.lockutils [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.461 239969 DEBUG nova.compute.manager [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Received event network-changed-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.462 239969 DEBUG nova.compute.manager [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Refreshing instance network info cache due to event network-changed-aa71bf39-b625-491e-9e6e-9a6b6bc4d087. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.462 239969 DEBUG oslo_concurrency.lockutils [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-af55c460-3670-4fa6-855f-386e2c58b312" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.462 239969 DEBUG oslo_concurrency.lockutils [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-af55c460-3670-4fa6-855f-386e2c58b312" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.462 239969 DEBUG nova.network.neutron [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Refreshing network info cache for port aa71bf39-b625-491e-9e6e-9a6b6bc4d087 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 15:59:47 compute-0 systemd[1]: libpod-conmon-0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0.scope: Deactivated successfully.
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.464 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.466 239969 DEBUG nova.virt.libvirt.vif [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-881645873',display_name='tempest-ServerRescueTestJSONUnderV235-server-881645873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-881645873',id=59,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c41676eb0a634807a0c639355e39a512',ramdisk_id='',reservation_id='r-osdyj818',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-701891094',owner_user_name='tempest-ServerRescueTestJSONUnderV235-701891094-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:36Z,user_data=None,user_id='10e6dce963804f98b45b6f58ef8d1b2e',uuid=a4ddea97-0b71-4080-abed-fd5748d7244e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.467 239969 DEBUG nova.network.os_vif_util [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Converting VIF {"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.467 239969 DEBUG nova.network.os_vif_util [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d5:77,bridge_name='br-int',has_traffic_filtering=True,id=ad309073-b3c7-40b4-a937-5d72ec411ed7,network=Network(90c691e2-a785-487c-9a4f-dd30836b91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad309073-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.468 239969 DEBUG nova.objects.instance [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.474 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.474 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.474 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] No VIF found with MAC fa:16:3e:2a:d7:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.475 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Using config drive
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.498 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.503 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f/disk.config 66382f82-a38c-4066-8564-7866c37ccd0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.505 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Deleting local config drive /var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f/disk.config because it was imported into RBD.
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.511 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] End _get_guest_xml xml=<domain type="kvm">
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <uuid>a4ddea97-0b71-4080-abed-fd5748d7244e</uuid>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <name>instance-0000003b</name>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-881645873</nova:name>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 15:59:46</nova:creationTime>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:user uuid="10e6dce963804f98b45b6f58ef8d1b2e">tempest-ServerRescueTestJSONUnderV235-701891094-project-member</nova:user>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:project uuid="c41676eb0a634807a0c639355e39a512">tempest-ServerRescueTestJSONUnderV235-701891094</nova:project>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <nova:port uuid="ad309073-b3c7-40b4-a937-5d72ec411ed7">
Jan 26 15:59:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <system>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="serial">a4ddea97-0b71-4080-abed-fd5748d7244e</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="uuid">a4ddea97-0b71-4080-abed-fd5748d7244e</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </system>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <os>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </os>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <features>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </features>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/a4ddea97-0b71-4080-abed-fd5748d7244e_disk">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </source>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 15:59:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:3a:d5:77"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <target dev="tapad309073-b3"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/console.log" append="off"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <video>
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </video>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 15:59:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 15:59:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 15:59:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 15:59:47 compute-0 nova_compute[239965]: </domain>
Jan 26 15:59:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.513 239969 DEBUG nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Preparing to wait for external event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.513 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.513 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.514 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.515 239969 DEBUG nova.virt.libvirt.vif [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-881645873',display_name='tempest-ServerRescueTestJSONUnderV235-server-881645873',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-881645873',id=59,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c41676eb0a634807a0c639355e39a512',ramdisk_id='',reservation_id='r-osdyj818',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSONUnderV235-701891094',owner_user_name='tempest-ServerRescueTestJSONUnderV235-701891094-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:36Z,user_data=None,user_id='10e6dce963804f98b45b6f58ef8d1b2e',uuid=a4ddea97-0b71-4080-abed-fd5748d7244e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.519 239969 DEBUG nova.network.os_vif_util [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Converting VIF {"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.520 239969 DEBUG nova.network.os_vif_util [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d5:77,bridge_name='br-int',has_traffic_filtering=True,id=ad309073-b3c7-40b4-a937-5d72ec411ed7,network=Network(90c691e2-a785-487c-9a4f-dd30836b91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad309073-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.520 239969 DEBUG os_vif [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d5:77,bridge_name='br-int',has_traffic_filtering=True,id=ad309073-b3c7-40b4-a937-5d72ec411ed7,network=Network(90c691e2-a785-487c-9a4f-dd30836b91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad309073-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.522 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.522 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.522 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:47 compute-0 podman[289298]: 2026-01-26 15:59:47.530997769 +0000 UTC m=+0.051505415 container remove 0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.535 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.535 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad309073-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.536 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad309073-b3, col_values=(('external_ids', {'iface-id': 'ad309073-b3c7-40b4-a937-5d72ec411ed7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3a:d5:77', 'vm-uuid': 'a4ddea97-0b71-4080-abed-fd5748d7244e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.5382] manager: (tapad309073-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/206)
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.537 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.539 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.545 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8cb7f6-313c-4a37-8f2a-1433c5a1761e]: (4, ('Mon Jan 26 03:59:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0)\n0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0\nMon Jan 26 03:59:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0)\n0cb83635bf79f7fc715cd497678a2cd1b0ab893f05952694524644ef64fe00a0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.547 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1870ed55-d27b-425e-b07f-f665e9b5b047]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.548 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.550 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.568 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 kernel: tap00eb7549-d0: left promiscuous mode
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.571 239969 INFO os_vif [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3a:d5:77,bridge_name='br-int',has_traffic_filtering=True,id=ad309073-b3c7-40b4-a937-5d72ec411ed7,network=Network(90c691e2-a785-487c-9a4f-dd30836b91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad309073-b3')
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.5736] manager: (tap9fdb2907-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/207)
Jan 26 15:59:47 compute-0 kernel: tap9fdb2907-6e: entered promiscuous mode
Jan 26 15:59:47 compute-0 systemd-udevd[289161]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.582 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 ovn_controller[146046]: 2026-01-26T15:59:47Z|00446|binding|INFO|Claiming lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 for this chassis.
Jan 26 15:59:47 compute-0 ovn_controller[146046]: 2026-01-26T15:59:47Z|00447|binding|INFO|9fdb2907-6e42-42c6-ac8b-e9b435fa2c28: Claiming fa:16:3e:1a:53:13 10.100.0.3
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.585 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2612dfdf-93a4-4ead-9559-57e6db0f9340]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.5918] device (tap9fdb2907-6e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.5927] device (tap9fdb2907-6e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.595 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:53:13 10.100.0.3'], port_security=['fa:16:3e:1a:53:13 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '66382f82-a38c-4066-8564-7866c37ccd0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe677d76-a003-444b-839f-1500fe522371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb0b7b89-c66e-4945-ae39-9f6072eed95b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f020610-ced7-462f-92be-0953400a3e4b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.600 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[263a9427-ae7d-4d97-bf79-c47fb18d1739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.602 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a190d5a8-ded0-4768-8884-00ea5da9668a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.619 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4f5f8984-3a4a-4cce-9db5-90f628b87cd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467242, 'reachable_time': 21470, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289354, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 systemd-machined[208061]: New machine qemu-62-instance-00000038.
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.623 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.623 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d05623ec-1f89-4bdc-b22d-dbfa67056a29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.624 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 in datapath fe677d76-a003-444b-839f-1500fe522371 bound to our chassis
Jan 26 15:59:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d00eb7549\x2dd24b\x2d4657\x2db244\x2d7664c8a34b20.mount: Deactivated successfully.
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.625 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe677d76-a003-444b-839f-1500fe522371
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.638 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0b76c8-4ef6-41d3-bb44-f6f025f9eb9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.639 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfe677d76-a1 in ovnmeta-fe677d76-a003-444b-839f-1500fe522371 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.642 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfe677d76-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.642 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96f90ebf-cf5a-4799-bbd1-ac019a897847]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.643 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d17bbd39-bbab-4c7e-85a6-bf567434d23a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 systemd[1]: Started Virtual Machine qemu-62-instance-00000038.
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.656 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[02fe46be-0c4b-451b-87b2-cfe9c9011b47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.670 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f8963101-c83f-4341-ab94-8f81b7b8755d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.675 239969 DEBUG nova.network.neutron [req-cd191ac5-4caf-44fe-8b82-12b00d4ca9e1 req-600f1407-6ee5-49e3-8111-50dc241f5790 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updated VIF entry in instance network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.676 239969 DEBUG nova.network.neutron [req-cd191ac5-4caf-44fe-8b82-12b00d4ca9e1 req-600f1407-6ee5-49e3-8111-50dc241f5790 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updating instance_info_cache with network_info: [{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.677 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 ovn_controller[146046]: 2026-01-26T15:59:47Z|00448|binding|INFO|Setting lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 ovn-installed in OVS
Jan 26 15:59:47 compute-0 ovn_controller[146046]: 2026-01-26T15:59:47Z|00449|binding|INFO|Setting lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 up in Southbound
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.689 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.701 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[136a8045-b404-4f92-83e4-87ec7cd665a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.7092] manager: (tapfe677d76-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/208)
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.707 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7474a77a-0061-48fb-a600-bafdc8d84fe1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.711 239969 DEBUG oslo_concurrency.lockutils [req-cd191ac5-4caf-44fe-8b82-12b00d4ca9e1 req-600f1407-6ee5-49e3-8111-50dc241f5790 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.724 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.724 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.724 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] No VIF found with MAC fa:16:3e:3a:d5:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.725 239969 INFO nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Using config drive
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.746 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac13113-c141-4ee2-aadd-21f4be098895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.750 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5942f4f0-e24b-41d9-bf3f-c5982fa9822a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.756 239969 DEBUG nova.storage.rbd_utils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.7803] device (tapfe677d76-a0): carrier: link connected
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.786 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbd979f-f2f2-4f28-9654-5187673e3462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.805 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0edf15bc-84ba-4f54-8fa7-378dbd3def3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe677d76-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:11:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467858, 'reachable_time': 38560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289418, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.818 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Creating config drive at /var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312/disk.config
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.825 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6s5k9zqh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.825 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[118a0a10-2c1c-4b20-8cd0-209d0662cbc6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2d:11fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467858, 'tstamp': 467858}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289419, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.842 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0931aec3-2898-4375-940f-3e41b3261312]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe677d76-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:11:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467858, 'reachable_time': 38560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289420, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 319 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 182 op/s
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.882 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[44bd7e7f-b937-4c6e-ac7b-c805566254a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.893 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Creating config drive at /var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97/disk.config
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.900 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplj55lspn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.940 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8f6eb9b7-0225-4ca6-9aec-317287ee0bb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.942 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe677d76-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.942 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.943 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe677d76-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 NetworkManager[48954]: <info>  [1769443187.9461] manager: (tapfe677d76-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/209)
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.945 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 kernel: tapfe677d76-a0: entered promiscuous mode
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.958 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:47.960 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe677d76-a0, col_values=(('external_ids', {'iface-id': 'a04cf444-51ef-4a15-8985-77a2cc807a92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:47 compute-0 ovn_controller[146046]: 2026-01-26T15:59:47Z|00450|binding|INFO|Releasing lport a04cf444-51ef-4a15-8985-77a2cc807a92 from this chassis (sb_readonly=0)
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.967 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:47 compute-0 nova_compute[239965]: 2026-01-26 15:59:47.983 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6s5k9zqh" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.006 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fe677d76-a003-444b-839f-1500fe522371.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fe677d76-a003-444b-839f-1500fe522371.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.007 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3632d3-1c8d-41f6-9310-cbd2fc238ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.008 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: global
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-fe677d76-a003-444b-839f-1500fe522371
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/fe677d76-a003-444b-839f-1500fe522371.pid.haproxy
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID fe677d76-a003-444b-839f-1500fe522371
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.010 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'env', 'PROCESS_TAG=haproxy-fe677d76-a003-444b-839f-1500fe522371', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fe677d76-a003-444b-839f-1500fe522371.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.018 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image af55c460-3670-4fa6-855f-386e2c58b312_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.026 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312/disk.config af55c460-3670-4fa6-855f-386e2c58b312_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3638440723' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2968440269' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1767988870' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.076 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplj55lspn" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.113 239969 DEBUG nova.storage.rbd_utils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] rbd image 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.116 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97/disk.config 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.193 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312/disk.config af55c460-3670-4fa6-855f-386e2c58b312_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.194 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Deleting local config drive /var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312/disk.config because it was imported into RBD.
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.216 239969 INFO nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Creating config drive at /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.221 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xvhq7k7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.249 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443188.217387, 66382f82-a38c-4066-8564-7866c37ccd0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.250 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] VM Started (Lifecycle Event)
Jan 26 15:59:48 compute-0 systemd-udevd[289389]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:59:48 compute-0 kernel: tapaa71bf39-b6: entered promiscuous mode
Jan 26 15:59:48 compute-0 NetworkManager[48954]: <info>  [1769443188.2608] manager: (tapaa71bf39-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00451|binding|INFO|Claiming lport aa71bf39-b625-491e-9e6e-9a6b6bc4d087 for this chassis.
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00452|binding|INFO|aa71bf39-b625-491e-9e6e-9a6b6bc4d087: Claiming fa:16:3e:ed:05:48 10.100.0.11
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.269 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 NetworkManager[48954]: <info>  [1769443188.2717] device (tapaa71bf39-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:59:48 compute-0 NetworkManager[48954]: <info>  [1769443188.2724] device (tapaa71bf39-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.275 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:05:48 10.100.0.11'], port_security=['fa:16:3e:ed:05:48 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'af55c460-3670-4fa6-855f-386e2c58b312', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe677d76-a003-444b-839f-1500fe522371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb0b7b89-c66e-4945-ae39-9f6072eed95b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f020610-ced7-462f-92be-0953400a3e4b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=aa71bf39-b625-491e-9e6e-9a6b6bc4d087) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.276 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.289 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443188.2175465, 66382f82-a38c-4066-8564-7866c37ccd0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.291 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] VM Paused (Lifecycle Event)
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00453|binding|INFO|Setting lport aa71bf39-b625-491e-9e6e-9a6b6bc4d087 ovn-installed in OVS
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00454|binding|INFO|Setting lport aa71bf39-b625-491e-9e6e-9a6b6bc4d087 up in Southbound
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.297 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 systemd-machined[208061]: New machine qemu-63-instance-00000039.
Jan 26 15:59:48 compute-0 systemd[1]: Started Virtual Machine qemu-63-instance-00000039.
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.322 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.332 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.340 239969 DEBUG oslo_concurrency.processutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97/disk.config 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.340 239969 INFO nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Deleting local config drive /var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97/disk.config because it was imported into RBD.
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.352 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.356 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xvhq7k7" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.383 239969 DEBUG nova.storage.rbd_utils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.393 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:48 compute-0 NetworkManager[48954]: <info>  [1769443188.4013] manager: (tap360a1634-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/211)
Jan 26 15:59:48 compute-0 kernel: tap360a1634-f2: entered promiscuous mode
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00455|binding|INFO|Claiming lport 360a1634-f2af-423f-8312-2d8623a1dab4 for this chassis.
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00456|binding|INFO|360a1634-f2af-423f-8312-2d8623a1dab4: Claiming fa:16:3e:2a:d7:9a 10.100.0.12
Jan 26 15:59:48 compute-0 NetworkManager[48954]: <info>  [1769443188.4144] device (tap360a1634-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:59:48 compute-0 NetworkManager[48954]: <info>  [1769443188.4152] device (tap360a1634-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.422 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:d7:9a 10.100.0.12'], port_security=['fa:16:3e:2a:d7:9a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe677d76-a003-444b-839f-1500fe522371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eb0b7b89-c66e-4945-ae39-9f6072eed95b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f020610-ced7-462f-92be-0953400a3e4b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=360a1634-f2af-423f-8312-2d8623a1dab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00457|binding|INFO|Setting lport 360a1634-f2af-423f-8312-2d8623a1dab4 ovn-installed in OVS
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00458|binding|INFO|Setting lport 360a1634-f2af-423f-8312-2d8623a1dab4 up in Southbound
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.434 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.440 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 systemd-machined[208061]: New machine qemu-64-instance-0000003a.
Jan 26 15:59:48 compute-0 systemd[1]: Started Virtual Machine qemu-64-instance-0000003a.
Jan 26 15:59:48 compute-0 podman[289634]: 2026-01-26 15:59:48.483822245 +0000 UTC m=+0.094247294 container create e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.496 239969 INFO nova.virt.libvirt.driver [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Deleting instance files /var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f_del
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.498 239969 INFO nova.virt.libvirt.driver [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Deletion of /var/lib/nova/instances/64435a76-a92d-4b8f-83e5-a6a85b70b10f_del complete
Jan 26 15:59:48 compute-0 podman[289634]: 2026-01-26 15:59:48.425658918 +0000 UTC m=+0.036083977 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 15:59:48 compute-0 systemd[1]: Started libpod-conmon-e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d.scope.
Jan 26 15:59:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 15:59:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cea7d59da4509a10c18cbcd27b36e1e120ecdedbe5fd1b97da8ba0b096b797c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.571 239969 INFO nova.compute.manager [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Took 1.49 seconds to destroy the instance on the hypervisor.
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.572 239969 DEBUG oslo.service.loopingcall [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.572 239969 DEBUG nova.compute.manager [-] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.572 239969 DEBUG nova.network.neutron [-] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.577 239969 DEBUG oslo_concurrency.processutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.184s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.577 239969 INFO nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Deleting local config drive /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config because it was imported into RBD.
Jan 26 15:59:48 compute-0 podman[289634]: 2026-01-26 15:59:48.579131004 +0000 UTC m=+0.189556063 container init e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:59:48 compute-0 podman[289634]: 2026-01-26 15:59:48.584526047 +0000 UTC m=+0.194951086 container start e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 15:59:48 compute-0 neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371[289686]: [NOTICE]   (289693) : New worker (289698) forked
Jan 26 15:59:48 compute-0 neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371[289686]: [NOTICE]   (289693) : Loading success.
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.636 156105 INFO neutron.agent.ovn.metadata.agent [-] Port aa71bf39-b625-491e-9e6e-9a6b6bc4d087 in datapath fe677d76-a003-444b-839f-1500fe522371 unbound from our chassis
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.638 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe677d76-a003-444b-839f-1500fe522371
Jan 26 15:59:48 compute-0 kernel: tapad309073-b3: entered promiscuous mode
Jan 26 15:59:48 compute-0 NetworkManager[48954]: <info>  [1769443188.6432] manager: (tapad309073-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00459|binding|INFO|Claiming lport ad309073-b3c7-40b4-a937-5d72ec411ed7 for this chassis.
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00460|binding|INFO|ad309073-b3c7-40b4-a937-5d72ec411ed7: Claiming fa:16:3e:3a:d5:77 10.100.0.11
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.647 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.658 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:d5:77 10.100.0.11'], port_security=['fa:16:3e:3a:d5:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a4ddea97-0b71-4080-abed-fd5748d7244e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90c691e2-a785-487c-9a4f-dd30836b91bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41676eb0a634807a0c639355e39a512', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ff51ba6-72d9-4f5f-bcad-1bb60c354451', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30c9b54c-9e24-4f98-9fda-b4dce976db39, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=ad309073-b3c7-40b4-a937-5d72ec411ed7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:48 compute-0 NetworkManager[48954]: <info>  [1769443188.6612] device (tapad309073-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 15:59:48 compute-0 NetworkManager[48954]: <info>  [1769443188.6622] device (tapad309073-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.664 239969 DEBUG nova.compute.manager [req-66256d53-98aa-4f98-823c-f54f1b40cda5 req-60c3179e-eb74-4f5b-89f4-afe6af5f8045 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Received event network-vif-plugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.665 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a67b83cf-e983-4b95-ba31-4f58684b725d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.665 239969 DEBUG oslo_concurrency.lockutils [req-66256d53-98aa-4f98-823c-f54f1b40cda5 req-60c3179e-eb74-4f5b-89f4-afe6af5f8045 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af55c460-3670-4fa6-855f-386e2c58b312-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.666 239969 DEBUG oslo_concurrency.lockutils [req-66256d53-98aa-4f98-823c-f54f1b40cda5 req-60c3179e-eb74-4f5b-89f4-afe6af5f8045 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.666 239969 DEBUG oslo_concurrency.lockutils [req-66256d53-98aa-4f98-823c-f54f1b40cda5 req-60c3179e-eb74-4f5b-89f4-afe6af5f8045 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.666 239969 DEBUG nova.compute.manager [req-66256d53-98aa-4f98-823c-f54f1b40cda5 req-60c3179e-eb74-4f5b-89f4-afe6af5f8045 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Processing event network-vif-plugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:59:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 15:59:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2487247240' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:59:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 15:59:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2487247240' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:59:48 compute-0 systemd-machined[208061]: New machine qemu-65-instance-0000003b.
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.699 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1c286a8f-5907-42bf-8461-5ddcf8c63bed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.702 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[52bb151b-59fc-46e6-9d0a-e7d356ffabc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 systemd[1]: Started Virtual Machine qemu-65-instance-0000003b.
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.728 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.732 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ef73ff40-ebb0-4c3a-ba5f-37171ba6887e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00461|binding|INFO|Setting lport ad309073-b3c7-40b4-a937-5d72ec411ed7 ovn-installed in OVS
Jan 26 15:59:48 compute-0 ovn_controller[146046]: 2026-01-26T15:59:48Z|00462|binding|INFO|Setting lport ad309073-b3c7-40b4-a937-5d72ec411ed7 up in Southbound
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.736 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.756 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d073adf-7014-4b4c-b33a-748cd4659487]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe677d76-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:11:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467858, 'reachable_time': 38560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289722, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.783 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9ed29e4b-0e96-49ee-af9a-f92a04474b93]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467871, 'tstamp': 467871}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289726, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467874, 'tstamp': 467874}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289726, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.785 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe677d76-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001738653554343196 of space, bias 1.0, pg target 0.5215960663029588 quantized to 32 (current 32)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010149316675318182 of space, bias 1.0, pg target 0.30447950025954545 quantized to 32 (current 32)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.961879540694492e-07 of space, bias 4.0, pg target 0.0009554255448833391 quantized to 16 (current 16)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 15:59:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.787 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.790 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe677d76-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.791 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.792 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe677d76-a0, col_values=(('external_ids', {'iface-id': 'a04cf444-51ef-4a15-8985-77a2cc807a92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.792 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.794 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 360a1634-f2af-423f-8312-2d8623a1dab4 in datapath fe677d76-a003-444b-839f-1500fe522371 unbound from our chassis
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.796 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe677d76-a003-444b-839f-1500fe522371
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.812 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e7e048-3f59-48c8-bcd0-e95a0fa2ef20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.845 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5bb400-4ffb-4693-b574-273f060129b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.848 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d6ed56-96bf-4d77-a4c2-883e301e4201]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.878 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3a61f311-bbd1-48fd-af49-f542eedffdd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.894 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[67f2be79-8c70-408a-91fe-61e068e60ccd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe677d76-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:11:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 7, 'rx_bytes': 266, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467858, 'reachable_time': 38560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289753, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.916 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[21877de5-a3a5-4db2-a2bf-8ba2784fd7b4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467871, 'tstamp': 467871}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289768, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467874, 'tstamp': 467874}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289768, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.919 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe677d76-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:48 compute-0 nova_compute[239965]: 2026-01-26 15:59:48.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.923 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe677d76-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.924 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.924 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe677d76-a0, col_values=(('external_ids', {'iface-id': 'a04cf444-51ef-4a15-8985-77a2cc807a92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.925 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.926 156105 INFO neutron.agent.ovn.metadata.agent [-] Port ad309073-b3c7-40b4-a937-5d72ec411ed7 in datapath 90c691e2-a785-487c-9a4f-dd30836b91bb unbound from our chassis
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.927 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 90c691e2-a785-487c-9a4f-dd30836b91bb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 15:59:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:48.928 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cf75552d-a4a1-4a8b-994a-63ca967d1f17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.052 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443189.0514524, 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.052 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] VM Started (Lifecycle Event)
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.077 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:49 compute-0 ceph-mon[75140]: pgmap v1400: 305 pgs: 305 active+clean; 319 MiB data, 555 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.1 MiB/s wr, 182 op/s
Jan 26 15:59:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2487247240' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 15:59:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2487247240' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.083 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443189.0516763, 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.084 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] VM Paused (Lifecycle Event)
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.116 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.120 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.142 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.292 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443189.2924333, af55c460-3670-4fa6-855f-386e2c58b312 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.293 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] VM Started (Lifecycle Event)
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.295 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.298 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.301 239969 INFO nova.virt.libvirt.driver [-] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Instance spawned successfully.
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.301 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.313 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.318 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.321 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.321 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.322 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.322 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.323 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.323 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.354 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.354 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443189.2930603, af55c460-3670-4fa6-855f-386e2c58b312 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.355 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] VM Paused (Lifecycle Event)
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.374 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.382 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443189.2973852, af55c460-3670-4fa6-855f-386e2c58b312 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.383 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] VM Resumed (Lifecycle Event)
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.387 239969 INFO nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Took 10.56 seconds to spawn the instance on the hypervisor.
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.388 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.399 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.402 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.433 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.433 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443189.3980224, a4ddea97-0b71-4080-abed-fd5748d7244e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.434 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] VM Started (Lifecycle Event)
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.447 239969 INFO nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Took 14.98 seconds to build instance.
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.452 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.457 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443189.3987157, a4ddea97-0b71-4080-abed-fd5748d7244e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.457 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] VM Paused (Lifecycle Event)
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.460 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.122s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.472 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.475 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:49 compute-0 nova_compute[239965]: 2026-01-26 15:59:49.494 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1401: 305 pgs: 305 active+clean; 291 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.2 MiB/s wr, 224 op/s
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.384 239969 DEBUG nova.network.neutron [-] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.401 239969 INFO nova.compute.manager [-] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Took 1.83 seconds to deallocate network for instance.
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.423 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.452 239969 DEBUG oslo_concurrency.lockutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.452 239969 DEBUG oslo_concurrency.lockutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.569 239969 DEBUG oslo_concurrency.processutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.701 239969 DEBUG nova.network.neutron [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Updated VIF entry in instance network info cache for port aa71bf39-b625-491e-9e6e-9a6b6bc4d087. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.702 239969 DEBUG nova.network.neutron [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Updating instance_info_cache with network_info: [{"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.737 239969 DEBUG oslo_concurrency.lockutils [req-28e45cce-8434-4d91-a12e-21f79b76bd8e req-70fc5b29-8f23-4ff5-b92e-3521efe89787 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-af55c460-3670-4fa6-855f-386e2c58b312" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.798 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Received event network-vif-plugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.799 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.800 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.800 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.800 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] No waiting events found dispatching network-vif-plugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.801 239969 WARNING nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Received unexpected event network-vif-plugged-5010d02a-4286-4559-ab9e-1f1ee882ee9b for instance with vm_state deleted and task_state None.
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.801 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Received event network-vif-plugged-9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.801 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.802 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.802 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.802 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Processing event network-vif-plugged-9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.803 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Received event network-vif-plugged-9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.803 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.803 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.804 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.804 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] No waiting events found dispatching network-vif-plugged-9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.804 239969 WARNING nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Received unexpected event network-vif-plugged-9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 for instance with vm_state building and task_state spawning.
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.805 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Received event network-vif-plugged-360a1634-f2af-423f-8312-2d8623a1dab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.805 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.805 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.806 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.806 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Processing event network-vif-plugged-360a1634-f2af-423f-8312-2d8623a1dab4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.806 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Received event network-vif-plugged-360a1634-f2af-423f-8312-2d8623a1dab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.807 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.807 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.807 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.808 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] No waiting events found dispatching network-vif-plugged-360a1634-f2af-423f-8312-2d8623a1dab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.808 239969 WARNING nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Received unexpected event network-vif-plugged-360a1634-f2af-423f-8312-2d8623a1dab4 for instance with vm_state building and task_state spawning.
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.808 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.809 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.809 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.809 239969 DEBUG oslo_concurrency.lockutils [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.809 239969 DEBUG nova.compute.manager [req-9463c88b-5edf-4567-846e-c2caeebf4088 req-f5f4c46a-1b21-4e6d-8d76-4b492b3a3c53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Processing event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.810 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.811 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.811 239969 DEBUG nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.824 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443190.815355, a4ddea97-0b71-4080-abed-fd5748d7244e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.827 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] VM Resumed (Lifecycle Event)
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.839 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.840 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.840 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.858 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.858 239969 INFO nova.virt.libvirt.driver [-] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance spawned successfully.
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.859 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.873 239969 INFO nova.virt.libvirt.driver [-] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Instance spawned successfully.
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.874 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.878 239969 INFO nova.virt.libvirt.driver [-] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Instance spawned successfully.
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.879 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.881 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.913 239969 DEBUG nova.compute.manager [req-5f538e61-bbb5-4114-baba-c7a7f0994aca req-be2fe1b5-dd49-446f-9dba-813ce5064552 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Received event network-vif-plugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.914 239969 DEBUG oslo_concurrency.lockutils [req-5f538e61-bbb5-4114-baba-c7a7f0994aca req-be2fe1b5-dd49-446f-9dba-813ce5064552 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af55c460-3670-4fa6-855f-386e2c58b312-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.914 239969 DEBUG oslo_concurrency.lockutils [req-5f538e61-bbb5-4114-baba-c7a7f0994aca req-be2fe1b5-dd49-446f-9dba-813ce5064552 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.914 239969 DEBUG oslo_concurrency.lockutils [req-5f538e61-bbb5-4114-baba-c7a7f0994aca req-be2fe1b5-dd49-446f-9dba-813ce5064552 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.915 239969 DEBUG nova.compute.manager [req-5f538e61-bbb5-4114-baba-c7a7f0994aca req-be2fe1b5-dd49-446f-9dba-813ce5064552 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] No waiting events found dispatching network-vif-plugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.915 239969 WARNING nova.compute.manager [req-5f538e61-bbb5-4114-baba-c7a7f0994aca req-be2fe1b5-dd49-446f-9dba-813ce5064552 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Received unexpected event network-vif-plugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 for instance with vm_state active and task_state None.
Jan 26 15:59:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.970 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.971 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.971 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.971 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.972 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.972 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.980 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.981 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.981 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.981 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.982 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:50 compute-0 nova_compute[239965]: 2026-01-26 15:59:50.982 239969 DEBUG nova.virt.libvirt.driver [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.052 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.052 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443190.8176122, 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.052 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] VM Resumed (Lifecycle Event)
Jan 26 15:59:51 compute-0 ceph-mon[75140]: pgmap v1401: 305 pgs: 305 active+clean; 291 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 7.2 MiB/s wr, 224 op/s
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.113 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.116 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.120 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.120 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.120 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.121 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.121 239969 DEBUG nova.virt.libvirt.driver [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.128 239969 INFO nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Took 13.48 seconds to spawn the instance on the hypervisor.
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.128 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.135 239969 INFO nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Took 15.57 seconds to spawn the instance on the hypervisor.
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.135 239969 DEBUG nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.136 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2128459691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.163 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.163 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443190.8177195, 66382f82-a38c-4066-8564-7866c37ccd0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.163 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] VM Resumed (Lifecycle Event)
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.180 239969 DEBUG oslo_concurrency.processutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.185 239969 DEBUG nova.compute.provider_tree [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.208 239969 INFO nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Took 14.40 seconds to spawn the instance on the hypervisor.
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.208 239969 DEBUG nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.220 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.222 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.227 239969 DEBUG nova.scheduler.client.report [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.296 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.300 239969 INFO nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Took 16.97 seconds to build instance.
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.310 239969 INFO nova.compute.manager [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Took 16.78 seconds to build instance.
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.327 239969 DEBUG oslo_concurrency.lockutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.331 239969 INFO nova.compute.manager [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Took 16.91 seconds to build instance.
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.332 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.335 239969 DEBUG oslo_concurrency.lockutils [None req-1b818add-8f63-4fe3-86ef-9dd27d1711d3 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.350 239969 DEBUG oslo_concurrency.lockutils [None req-87432ed4-a2de-470c-9cb5-15f10a716ec0 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.362 239969 INFO nova.scheduler.client.report [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Deleted allocations for instance 64435a76-a92d-4b8f-83e5-a6a85b70b10f
Jan 26 15:59:51 compute-0 nova_compute[239965]: 2026-01-26 15:59:51.427 239969 DEBUG oslo_concurrency.lockutils [None req-841f6acc-2880-4762-98b2-22ef793c81ef 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "64435a76-a92d-4b8f-83e5-a6a85b70b10f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 273 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.8 MiB/s wr, 208 op/s
Jan 26 15:59:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2128459691' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:52 compute-0 nova_compute[239965]: 2026-01-26 15:59:52.520 239969 INFO nova.compute.manager [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Rescuing
Jan 26 15:59:52 compute-0 nova_compute[239965]: 2026-01-26 15:59:52.521 239969 DEBUG oslo_concurrency.lockutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:52 compute-0 nova_compute[239965]: 2026-01-26 15:59:52.522 239969 DEBUG oslo_concurrency.lockutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquired lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:52 compute-0 nova_compute[239965]: 2026-01-26 15:59:52.522 239969 DEBUG nova.network.neutron [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:52 compute-0 nova_compute[239965]: 2026-01-26 15:59:52.540 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:53 compute-0 ceph-mon[75140]: pgmap v1402: 305 pgs: 305 active+clean; 273 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.8 MiB/s wr, 208 op/s
Jan 26 15:59:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1403: 305 pgs: 305 active+clean; 273 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 863 KiB/s wr, 178 op/s
Jan 26 15:59:54 compute-0 nova_compute[239965]: 2026-01-26 15:59:54.078 239969 DEBUG nova.compute.manager [req-294332e4-9ba3-4020-b7b7-1b2469feda8c req-20128a8a-0de1-437f-852d-ff465e2bc5f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Received event network-vif-deleted-5010d02a-4286-4559-ab9e-1f1ee882ee9b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:54 compute-0 nova_compute[239965]: 2026-01-26 15:59:54.079 239969 DEBUG nova.compute.manager [req-294332e4-9ba3-4020-b7b7-1b2469feda8c req-20128a8a-0de1-437f-852d-ff465e2bc5f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:54 compute-0 nova_compute[239965]: 2026-01-26 15:59:54.080 239969 DEBUG oslo_concurrency.lockutils [req-294332e4-9ba3-4020-b7b7-1b2469feda8c req-20128a8a-0de1-437f-852d-ff465e2bc5f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:54 compute-0 nova_compute[239965]: 2026-01-26 15:59:54.080 239969 DEBUG oslo_concurrency.lockutils [req-294332e4-9ba3-4020-b7b7-1b2469feda8c req-20128a8a-0de1-437f-852d-ff465e2bc5f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:54 compute-0 nova_compute[239965]: 2026-01-26 15:59:54.081 239969 DEBUG oslo_concurrency.lockutils [req-294332e4-9ba3-4020-b7b7-1b2469feda8c req-20128a8a-0de1-437f-852d-ff465e2bc5f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:54 compute-0 nova_compute[239965]: 2026-01-26 15:59:54.081 239969 DEBUG nova.compute.manager [req-294332e4-9ba3-4020-b7b7-1b2469feda8c req-20128a8a-0de1-437f-852d-ff465e2bc5f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] No waiting events found dispatching network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 15:59:54 compute-0 nova_compute[239965]: 2026-01-26 15:59:54.081 239969 WARNING nova.compute.manager [req-294332e4-9ba3-4020-b7b7-1b2469feda8c req-20128a8a-0de1-437f-852d-ff465e2bc5f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received unexpected event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 for instance with vm_state active and task_state rescuing.
Jan 26 15:59:54 compute-0 ceph-mon[75140]: pgmap v1403: 305 pgs: 305 active+clean; 273 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 863 KiB/s wr, 178 op/s
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.115 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.116 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.141 239969 DEBUG nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.186 239969 DEBUG nova.network.neutron [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updating instance_info_cache with network_info: [{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.212 239969 DEBUG oslo_concurrency.lockutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Releasing lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.232 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.232 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.238 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.239 239969 INFO nova.compute.claims [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.422 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.458 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:55 compute-0 nova_compute[239965]: 2026-01-26 15:59:55.508 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 15:59:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 273 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 863 KiB/s wr, 406 op/s
Jan 26 15:59:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 15:59:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4084995198' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.088 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.095 239969 DEBUG nova.compute.provider_tree [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.121 239969 DEBUG nova.scheduler.client.report [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.159 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.160 239969 DEBUG nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.219 239969 DEBUG nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.220 239969 DEBUG nova.network.neutron [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.243 239969 INFO nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.283 239969 DEBUG nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.372 239969 DEBUG nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.374 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.374 239969 INFO nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Creating image(s)
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.394 239969 DEBUG nova.storage.rbd_utils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.417 239969 DEBUG nova.storage.rbd_utils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.445 239969 DEBUG nova.storage.rbd_utils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.457 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.505 239969 DEBUG nova.policy [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c7cba75e9fd41599b1c9a3388447cdd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aee99e5b6af74088bd848cecc9592e82', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.560 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.560 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.561 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.562 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.601 239969 DEBUG nova.storage.rbd_utils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.606 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.643 239969 DEBUG oslo_concurrency.lockutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "66382f82-a38c-4066-8564-7866c37ccd0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.644 239969 DEBUG oslo_concurrency.lockutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.644 239969 DEBUG oslo_concurrency.lockutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.644 239969 DEBUG oslo_concurrency.lockutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.645 239969 DEBUG oslo_concurrency.lockutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.647 239969 INFO nova.compute.manager [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Terminating instance
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.648 239969 DEBUG nova.compute.manager [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 15:59:56 compute-0 kernel: tap9fdb2907-6e (unregistering): left promiscuous mode
Jan 26 15:59:56 compute-0 NetworkManager[48954]: <info>  [1769443196.7413] device (tap9fdb2907-6e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00463|binding|INFO|Releasing lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 from this chassis (sb_readonly=0)
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00464|binding|INFO|Setting lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 down in Southbound
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00465|binding|INFO|Removing iface tap9fdb2907-6e ovn-installed in OVS
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.757 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:53:13 10.100.0.3'], port_security=['fa:16:3e:1a:53:13 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '66382f82-a38c-4066-8564-7866c37ccd0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe677d76-a003-444b-839f-1500fe522371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb0b7b89-c66e-4945-ae39-9f6072eed95b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f020610-ced7-462f-92be-0953400a3e4b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.759 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 in datapath fe677d76-a003-444b-839f-1500fe522371 unbound from our chassis
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.761 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe677d76-a003-444b-839f-1500fe522371
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.762 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.772 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:56 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000038.scope: Deactivated successfully.
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.782 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d28f9143-4c8b-4fee-b16f-873d6efedc4b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:56 compute-0 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000038.scope: Consumed 6.283s CPU time.
Jan 26 15:59:56 compute-0 systemd-machined[208061]: Machine qemu-62-instance-00000038 terminated.
Jan 26 15:59:56 compute-0 sshd-session[289907]: Connection closed by 154.117.199.5 port 62718 [preauth]
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.818 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[430e1f09-8f9b-4e94-8524-3eb6b486d517]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.830 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c50f1be2-4485-45c9-abcf-8854b5d96f47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:56 compute-0 NetworkManager[48954]: <info>  [1769443196.8744] manager: (tap9fdb2907-6e): new Tun device (/org/freedesktop/NetworkManager/Devices/213)
Jan 26 15:59:56 compute-0 systemd-udevd[290007]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 15:59:56 compute-0 kernel: tap9fdb2907-6e: entered promiscuous mode
Jan 26 15:59:56 compute-0 kernel: tap9fdb2907-6e (unregistering): left promiscuous mode
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00466|binding|INFO|Claiming lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 for this chassis.
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.878 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00467|binding|INFO|9fdb2907-6e42-42c6-ac8b-e9b435fa2c28: Claiming fa:16:3e:1a:53:13 10.100.0.3
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.882 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1066ba81-3f29-4883-95f5-ea7d09691438]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.886 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:53:13 10.100.0.3'], port_security=['fa:16:3e:1a:53:13 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '66382f82-a38c-4066-8564-7866c37ccd0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe677d76-a003-444b-839f-1500fe522371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb0b7b89-c66e-4945-ae39-9f6072eed95b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f020610-ced7-462f-92be-0953400a3e4b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00468|binding|INFO|Setting lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 ovn-installed in OVS
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.905 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00469|binding|INFO|Setting lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 up in Southbound
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.905 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[66ac80bf-c48f-49a7-b524-e1177a3b1146]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe677d76-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:11:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467858, 'reachable_time': 38560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290018, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00470|binding|INFO|Releasing lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 from this chassis (sb_readonly=1)
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.907 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00471|if_status|INFO|Not setting lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 down as sb is readonly
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00472|binding|INFO|Removing iface tap9fdb2907-6e ovn-installed in OVS
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.908 239969 INFO nova.virt.libvirt.driver [-] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Instance destroyed successfully.
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.909 239969 DEBUG nova.objects.instance [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lazy-loading 'resources' on Instance uuid 66382f82-a38c-4066-8564-7866c37ccd0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00473|binding|INFO|Releasing lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 from this chassis (sb_readonly=0)
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.910 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:56 compute-0 ovn_controller[146046]: 2026-01-26T15:59:56Z|00474|binding|INFO|Setting lport 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 down in Southbound
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.917 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:53:13 10.100.0.3'], port_security=['fa:16:3e:1a:53:13 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '66382f82-a38c-4066-8564-7866c37ccd0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe677d76-a003-444b-839f-1500fe522371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb0b7b89-c66e-4945-ae39-9f6072eed95b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f020610-ced7-462f-92be-0953400a3e4b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.923 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6c800f-3c3c-4656-b768-aca60313a65f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467871, 'tstamp': 467871}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290020, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467874, 'tstamp': 467874}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290020, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.925 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe677d76-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.930 239969 DEBUG nova.virt.libvirt.vif [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1762749817',display_name='tempest-ListServersNegativeTestJSON-server-1762749817-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1762749817-1',id=56,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:59:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='07b927cfeb484a5d80d731f9e9abfa4f',ramdisk_id='',reservation_id='r-oeb19hac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1615111890',owner_user_name='tempest-ListServersNegativeTestJSON-1615111890-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:59:51Z,user_data=None,user_id='48264b1a141f4ef5b99a45972af1903a',uuid=66382f82-a38c-4066-8564-7866c37ccd0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.930 239969 DEBUG nova.network.os_vif_util [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converting VIF {"id": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "address": "fa:16:3e:1a:53:13", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fdb2907-6e", "ovs_interfaceid": "9fdb2907-6e42-42c6-ac8b-e9b435fa2c28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.931 239969 DEBUG nova.network.os_vif_util [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:53:13,bridge_name='br-int',has_traffic_filtering=True,id=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fdb2907-6e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.931 239969 DEBUG os_vif [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:53:13,bridge_name='br-int',has_traffic_filtering=True,id=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fdb2907-6e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.933 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.934 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fdb2907-6e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.936 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe677d76-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.936 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.936 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe677d76-a0, col_values=(('external_ids', {'iface-id': 'a04cf444-51ef-4a15-8985-77a2cc807a92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.936 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.938 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.938 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 in datapath fe677d76-a003-444b-839f-1500fe522371 unbound from our chassis
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.940 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe677d76-a003-444b-839f-1500fe522371
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.941 239969 INFO os_vif [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:53:13,bridge_name='br-int',has_traffic_filtering=True,id=9fdb2907-6e42-42c6-ac8b-e9b435fa2c28,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fdb2907-6e')
Jan 26 15:59:56 compute-0 ceph-mon[75140]: pgmap v1404: 305 pgs: 305 active+clean; 273 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 863 KiB/s wr, 406 op/s
Jan 26 15:59:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4084995198' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.960 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ac457c6d-4066-424e-a7ea-6d35e5a3def7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:56 compute-0 nova_compute[239965]: 2026-01-26 15:59:56.982 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:56.996 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[80ef4b77-318f-48a1-9876-526708b521b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.004 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4255f7ba-8e88-4799-8c72-577f1cb3ec36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.043 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[20dbe7c2-e3f5-4825-9301-68fee5395f90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.066 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4a6500-c331-4ae2-922b-499821aee2ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe677d76-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:11:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467858, 'reachable_time': 38560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290067, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.121 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7e437d-4138-46a1-9703-f97a3626bf5d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467871, 'tstamp': 467871}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290080, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467874, 'tstamp': 467874}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290080, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.123 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe677d76-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.123 239969 DEBUG nova.storage.rbd_utils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] resizing rbd image d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.126 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe677d76-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.126 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.127 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe677d76-a0, col_values=(('external_ids', {'iface-id': 'a04cf444-51ef-4a15-8985-77a2cc807a92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.127 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.128 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 in datapath fe677d76-a003-444b-839f-1500fe522371 unbound from our chassis
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.129 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe677d76-a003-444b-839f-1500fe522371
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.149 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2dd46f-12b1-4f8c-a31e-b80e945888da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.164 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.194 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ad1a374d-7345-4d38-aa7b-e46511c9e836]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.197 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3b64f6-5297-4c57-95f2-939a1af03c2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.236 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[48e5463c-34ef-4690-9658-20f57be78ca5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.247 239969 DEBUG nova.objects.instance [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'migration_context' on Instance uuid d89afe65-4fc2-4cfa-8580-7ae5befd6a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.259 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[16cfebd3-ffa2-4adb-b673-e602a9381c82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe677d76-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:11:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467858, 'reachable_time': 38560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290125, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.280 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7cd98f94-61d1-40f2-8415-648af3238950]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467871, 'tstamp': 467871}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290126, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467874, 'tstamp': 467874}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290126, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.286 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe677d76-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.288 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.289 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.290 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe677d76-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.290 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.291 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe677d76-a0, col_values=(('external_ids', {'iface-id': 'a04cf444-51ef-4a15-8985-77a2cc807a92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 15:59:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:57.291 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.391 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.391 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Ensure instance console log exists: /var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.392 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.392 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.393 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.512 239969 DEBUG nova.network.neutron [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Successfully created port: 34a7ae82-747e-4fef-9081-0d0e28e80f28 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.556 239969 INFO nova.virt.libvirt.driver [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Deleting instance files /var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f_del
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.557 239969 INFO nova.virt.libvirt.driver [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Deletion of /var/lib/nova/instances/66382f82-a38c-4066-8564-7866c37ccd0f_del complete
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.617 239969 INFO nova.compute.manager [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Took 0.97 seconds to destroy the instance on the hypervisor.
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.618 239969 DEBUG oslo.service.loopingcall [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.618 239969 DEBUG nova.compute.manager [-] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 15:59:57 compute-0 nova_compute[239965]: 2026-01-26 15:59:57.618 239969 DEBUG nova.network.neutron [-] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 15:59:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1405: 305 pgs: 305 active+clean; 273 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 51 KiB/s wr, 321 op/s
Jan 26 15:59:58 compute-0 ceph-mon[75140]: pgmap v1405: 305 pgs: 305 active+clean; 273 MiB data, 535 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 51 KiB/s wr, 321 op/s
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.694 239969 DEBUG nova.network.neutron [-] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.711 239969 INFO nova.compute.manager [-] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Took 1.09 seconds to deallocate network for instance.
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.752 239969 DEBUG oslo_concurrency.lockutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.753 239969 DEBUG oslo_concurrency.lockutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.764 239969 DEBUG nova.compute.manager [req-4ca2307b-b734-4e4e-9bf4-c38456591fd0 req-7c327c5f-6d43-4c1c-be02-d434c2fb4678 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Received event network-vif-deleted-9fdb2907-6e42-42c6-ac8b-e9b435fa2c28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 15:59:58 compute-0 sudo[290128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:59:58 compute-0 sudo[290128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:59:58 compute-0 sudo[290128]: pam_unix(sudo:session): session closed for user root
Jan 26 15:59:58 compute-0 sudo[290153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 15:59:58 compute-0 sudo[290153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.931 239969 DEBUG oslo_concurrency.processutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.979 239969 DEBUG nova.network.neutron [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Successfully updated port: 34a7ae82-747e-4fef-9081-0d0e28e80f28 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.997 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.998 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquired lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 15:59:58 compute-0 nova_compute[239965]: 2026-01-26 15:59:58.998 239969 DEBUG nova.network.neutron [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 15:59:59 compute-0 nova_compute[239965]: 2026-01-26 15:59:59.219 239969 DEBUG nova.network.neutron [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 15:59:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:59.222 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 15:59:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:59.222 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 15:59:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 15:59:59.223 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:59 compute-0 sudo[290153]: pam_unix(sudo:session): session closed for user root
Jan 26 15:59:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:59:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:59:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 15:59:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:59:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 15:59:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 15:59:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3708263204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 15:59:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:59:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 15:59:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3708263204' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 15:59:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 15:59:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 15:59:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 15:59:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 15:59:59 compute-0 nova_compute[239965]: 2026-01-26 15:59:59.679 239969 DEBUG oslo_concurrency.processutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.748s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 15:59:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 15:59:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 15:59:59 compute-0 nova_compute[239965]: 2026-01-26 15:59:59.686 239969 DEBUG nova.compute.provider_tree [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 15:59:59 compute-0 nova_compute[239965]: 2026-01-26 15:59:59.711 239969 DEBUG nova.scheduler.client.report [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 15:59:59 compute-0 sudo[290230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 15:59:59 compute-0 sudo[290230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:59:59 compute-0 sudo[290230]: pam_unix(sudo:session): session closed for user root
Jan 26 15:59:59 compute-0 nova_compute[239965]: 2026-01-26 15:59:59.761 239969 DEBUG oslo_concurrency.lockutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 15:59:59 compute-0 nova_compute[239965]: 2026-01-26 15:59:59.817 239969 INFO nova.scheduler.client.report [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Deleted allocations for instance 66382f82-a38c-4066-8564-7866c37ccd0f
Jan 26 15:59:59 compute-0 sudo[290255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 15:59:59 compute-0 sudo[290255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 15:59:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 252 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 579 KiB/s wr, 359 op/s
Jan 26 15:59:59 compute-0 nova_compute[239965]: 2026-01-26 15:59:59.913 239969 DEBUG oslo_concurrency.lockutils [None req-1bdce1ff-5650-45e4-87f5-61c65182d23b 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "66382f82-a38c-4066-8564-7866c37ccd0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:00 compute-0 podman[290291]: 2026-01-26 16:00:00.143314504 +0000 UTC m=+0.065766136 container create 07196dab0ebcd8cc120a935a11fc4e22f821af8b0dd3f576f6c903db1428a2e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:00:00 compute-0 podman[290291]: 2026-01-26 16:00:00.104777297 +0000 UTC m=+0.027228959 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:00:00 compute-0 systemd[1]: Started libpod-conmon-07196dab0ebcd8cc120a935a11fc4e22f821af8b0dd3f576f6c903db1428a2e4.scope.
Jan 26 16:00:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:00 compute-0 podman[290291]: 2026-01-26 16:00:00.27640524 +0000 UTC m=+0.198856902 container init 07196dab0ebcd8cc120a935a11fc4e22f821af8b0dd3f576f6c903db1428a2e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:00:00 compute-0 podman[290291]: 2026-01-26 16:00:00.287537213 +0000 UTC m=+0.209988845 container start 07196dab0ebcd8cc120a935a11fc4e22f821af8b0dd3f576f6c903db1428a2e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_haslett, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 16:00:00 compute-0 hardcore_haslett[290307]: 167 167
Jan 26 16:00:00 compute-0 systemd[1]: libpod-07196dab0ebcd8cc120a935a11fc4e22f821af8b0dd3f576f6c903db1428a2e4.scope: Deactivated successfully.
Jan 26 16:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:00:00 compute-0 podman[290291]: 2026-01-26 16:00:00.707491501 +0000 UTC m=+0.629943133 container attach 07196dab0ebcd8cc120a935a11fc4e22f821af8b0dd3f576f6c903db1428a2e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:00:00 compute-0 podman[290291]: 2026-01-26 16:00:00.708001833 +0000 UTC m=+0.630453475 container died 07196dab0ebcd8cc120a935a11fc4e22f821af8b0dd3f576f6c903db1428a2e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.706 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:00:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:00:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:00:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:00:00 compute-0 ceph-mon[75140]: pgmap v1406: 305 pgs: 305 active+clean; 252 MiB data, 521 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 579 KiB/s wr, 359 op/s
Jan 26 16:00:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c51a45daebe771fa65c5c965d376e91aa27d67104350aa555a73e26f4f25eaff-merged.mount: Deactivated successfully.
Jan 26 16:00:00 compute-0 podman[290291]: 2026-01-26 16:00:00.805789784 +0000 UTC m=+0.728241416 container remove 07196dab0ebcd8cc120a935a11fc4e22f821af8b0dd3f576f6c903db1428a2e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:00:00 compute-0 systemd[1]: libpod-conmon-07196dab0ebcd8cc120a935a11fc4e22f821af8b0dd3f576f6c903db1428a2e4.scope: Deactivated successfully.
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.877 239969 DEBUG nova.compute.manager [req-06c91c64-c381-47dd-820f-2f94be88f387 req-a14e18a4-4c4f-4c7f-9d0a-78ec14d43fbc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Received event network-changed-34a7ae82-747e-4fef-9081-0d0e28e80f28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.878 239969 DEBUG nova.compute.manager [req-06c91c64-c381-47dd-820f-2f94be88f387 req-a14e18a4-4c4f-4c7f-9d0a-78ec14d43fbc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Refreshing instance network info cache due to event network-changed-34a7ae82-747e-4fef-9081-0d0e28e80f28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.878 239969 DEBUG oslo_concurrency.lockutils [req-06c91c64-c381-47dd-820f-2f94be88f387 req-a14e18a4-4c4f-4c7f-9d0a-78ec14d43fbc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:00 compute-0 podman[290323]: 2026-01-26 16:00:00.891878326 +0000 UTC m=+0.121999915 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.913 239969 DEBUG nova.network.neutron [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Updating instance_info_cache with network_info: [{"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.945 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Releasing lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.946 239969 DEBUG nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance network_info: |[{"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.948 239969 DEBUG oslo_concurrency.lockutils [req-06c91c64-c381-47dd-820f-2f94be88f387 req-a14e18a4-4c4f-4c7f-9d0a-78ec14d43fbc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.949 239969 DEBUG nova.network.neutron [req-06c91c64-c381-47dd-820f-2f94be88f387 req-a14e18a4-4c4f-4c7f-9d0a-78ec14d43fbc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Refreshing network info cache for port 34a7ae82-747e-4fef-9081-0d0e28e80f28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.953 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Start _get_guest_xml network_info=[{"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.959 239969 WARNING nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.977 239969 DEBUG nova.virt.libvirt.host [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.979 239969 DEBUG nova.virt.libvirt.host [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.985 239969 DEBUG nova.virt.libvirt.host [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.986 239969 DEBUG nova.virt.libvirt.host [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.987 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.987 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.988 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.988 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.989 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.989 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.989 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.990 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.990 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.990 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.990 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.990 239969 DEBUG nova.virt.hardware [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:00:00 compute-0 nova_compute[239965]: 2026-01-26 16:00:00.994 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:01 compute-0 podman[290346]: 2026-01-26 16:00:01.011167014 +0000 UTC m=+0.057340559 container create 013a3a2e77710495a6f0375cf948c03e14934a875cd491082378d115a9139441 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:00:01 compute-0 systemd[1]: Started libpod-conmon-013a3a2e77710495a6f0375cf948c03e14934a875cd491082378d115a9139441.scope.
Jan 26 16:00:01 compute-0 podman[290346]: 2026-01-26 16:00:00.987417381 +0000 UTC m=+0.033590946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:00:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ada94f86ce65b2f150b71b36e7d3bebf7c38dfa6045550d2ed3b1a1e039fc19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ada94f86ce65b2f150b71b36e7d3bebf7c38dfa6045550d2ed3b1a1e039fc19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ada94f86ce65b2f150b71b36e7d3bebf7c38dfa6045550d2ed3b1a1e039fc19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ada94f86ce65b2f150b71b36e7d3bebf7c38dfa6045550d2ed3b1a1e039fc19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ada94f86ce65b2f150b71b36e7d3bebf7c38dfa6045550d2ed3b1a1e039fc19/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:01 compute-0 podman[290346]: 2026-01-26 16:00:01.141563105 +0000 UTC m=+0.187736680 container init 013a3a2e77710495a6f0375cf948c03e14934a875cd491082378d115a9139441 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:00:01 compute-0 podman[290346]: 2026-01-26 16:00:01.151067328 +0000 UTC m=+0.197240873 container start 013a3a2e77710495a6f0375cf948c03e14934a875cd491082378d115a9139441 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:00:01 compute-0 podman[290346]: 2026-01-26 16:00:01.166713223 +0000 UTC m=+0.212886768 container attach 013a3a2e77710495a6f0375cf948c03e14934a875cd491082378d115a9139441 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:00:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2217824107' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:01 compute-0 nova_compute[239965]: 2026-01-26 16:00:01.648 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:01 compute-0 nova_compute[239965]: 2026-01-26 16:00:01.674 239969 DEBUG nova.storage.rbd_utils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:01 compute-0 nova_compute[239965]: 2026-01-26 16:00:01.681 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:01 compute-0 nifty_lumiere[290362]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:00:01 compute-0 nifty_lumiere[290362]: --> All data devices are unavailable
Jan 26 16:00:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2217824107' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:01 compute-0 systemd[1]: libpod-013a3a2e77710495a6f0375cf948c03e14934a875cd491082378d115a9139441.scope: Deactivated successfully.
Jan 26 16:00:01 compute-0 anacron[30880]: Job `cron.monthly' started
Jan 26 16:00:01 compute-0 podman[290422]: 2026-01-26 16:00:01.84861999 +0000 UTC m=+0.051458925 container died 013a3a2e77710495a6f0375cf948c03e14934a875cd491082378d115a9139441 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:00:01 compute-0 anacron[30880]: Job `cron.monthly' terminated
Jan 26 16:00:01 compute-0 anacron[30880]: Normal exit (3 jobs run)
Jan 26 16:00:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1407: 305 pgs: 305 active+clean; 273 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 332 op/s
Jan 26 16:00:01 compute-0 nova_compute[239965]: 2026-01-26 16:00:01.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-0ada94f86ce65b2f150b71b36e7d3bebf7c38dfa6045550d2ed3b1a1e039fc19-merged.mount: Deactivated successfully.
Jan 26 16:00:02 compute-0 podman[290422]: 2026-01-26 16:00:02.015537396 +0000 UTC m=+0.218376311 container remove 013a3a2e77710495a6f0375cf948c03e14934a875cd491082378d115a9139441 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:00:02 compute-0 systemd[1]: libpod-conmon-013a3a2e77710495a6f0375cf948c03e14934a875cd491082378d115a9139441.scope: Deactivated successfully.
Jan 26 16:00:02 compute-0 sudo[290255]: pam_unix(sudo:session): session closed for user root
Jan 26 16:00:02 compute-0 podman[290423]: 2026-01-26 16:00:02.118567415 +0000 UTC m=+0.295623437 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:00:02 compute-0 sudo[290479]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:00:02 compute-0 sudo[290479]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:00:02 compute-0 sudo[290479]: pam_unix(sudo:session): session closed for user root
Jan 26 16:00:02 compute-0 sudo[290504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:00:02 compute-0 sudo[290504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.342 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443187.326859, 64435a76-a92d-4b8f-83e5-a6a85b70b10f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.343 239969 INFO nova.compute.manager [-] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] VM Stopped (Lifecycle Event)
Jan 26 16:00:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1850153001' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.368 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.370 239969 DEBUG nova.virt.libvirt.vif [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2129100736',display_name='tempest-DeleteServersTestJSON-server-2129100736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2129100736',id=60,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-u8bn3cy7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:56Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=d89afe65-4fc2-4cfa-8580-7ae5befd6a4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.371 239969 DEBUG nova.network.os_vif_util [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.372 239969 DEBUG nova.network.os_vif_util [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:57:38,bridge_name='br-int',has_traffic_filtering=True,id=34a7ae82-747e-4fef-9081-0d0e28e80f28,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34a7ae82-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.373 239969 DEBUG nova.objects.instance [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'pci_devices' on Instance uuid d89afe65-4fc2-4cfa-8580-7ae5befd6a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.376 239969 DEBUG nova.compute.manager [None req-fa89fe86-3b71-439e-b1d3-3fcfe859a938 - - - - - -] [instance: 64435a76-a92d-4b8f-83e5-a6a85b70b10f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.388 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <uuid>d89afe65-4fc2-4cfa-8580-7ae5befd6a4d</uuid>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <name>instance-0000003c</name>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <nova:name>tempest-DeleteServersTestJSON-server-2129100736</nova:name>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:00:00</nova:creationTime>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <nova:user uuid="3c7cba75e9fd41599b1c9a3388447cdd">tempest-DeleteServersTestJSON-234439961-project-member</nova:user>
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <nova:project uuid="aee99e5b6af74088bd848cecc9592e82">tempest-DeleteServersTestJSON-234439961</nova:project>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <nova:port uuid="34a7ae82-747e-4fef-9081-0d0e28e80f28">
Jan 26 16:00:02 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <system>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <entry name="serial">d89afe65-4fc2-4cfa-8580-7ae5befd6a4d</entry>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <entry name="uuid">d89afe65-4fc2-4cfa-8580-7ae5befd6a4d</entry>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     </system>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <os>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   </os>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <features>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   </features>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk">
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk.config">
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:67:57:38"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <target dev="tap34a7ae82-74"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d/console.log" append="off"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <video>
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     </video>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:00:02 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:00:02 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:00:02 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:00:02 compute-0 nova_compute[239965]: </domain>
Jan 26 16:00:02 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.396 239969 DEBUG nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Preparing to wait for external event network-vif-plugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.396 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.397 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.397 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.398 239969 DEBUG nova.virt.libvirt.vif [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T15:59:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2129100736',display_name='tempest-DeleteServersTestJSON-server-2129100736',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2129100736',id=60,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-u8bn3cy7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:56Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=d89afe65-4fc2-4cfa-8580-7ae5befd6a4d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.398 239969 DEBUG nova.network.os_vif_util [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.399 239969 DEBUG nova.network.os_vif_util [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:57:38,bridge_name='br-int',has_traffic_filtering=True,id=34a7ae82-747e-4fef-9081-0d0e28e80f28,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34a7ae82-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.399 239969 DEBUG os_vif [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:57:38,bridge_name='br-int',has_traffic_filtering=True,id=34a7ae82-747e-4fef-9081-0d0e28e80f28,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34a7ae82-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.400 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.401 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.401 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.406 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.407 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap34a7ae82-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.407 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap34a7ae82-74, col_values=(('external_ids', {'iface-id': '34a7ae82-747e-4fef-9081-0d0e28e80f28', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:57:38', 'vm-uuid': 'd89afe65-4fc2-4cfa-8580-7ae5befd6a4d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.409 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:02 compute-0 NetworkManager[48954]: <info>  [1769443202.4116] manager: (tap34a7ae82-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/214)
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.411 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.416 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.417 239969 INFO os_vif [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:57:38,bridge_name='br-int',has_traffic_filtering=True,id=34a7ae82-747e-4fef-9081-0d0e28e80f28,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34a7ae82-74')
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.446 239969 DEBUG nova.network.neutron [req-06c91c64-c381-47dd-820f-2f94be88f387 req-a14e18a4-4c4f-4c7f-9d0a-78ec14d43fbc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Updated VIF entry in instance network info cache for port 34a7ae82-747e-4fef-9081-0d0e28e80f28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.447 239969 DEBUG nova.network.neutron [req-06c91c64-c381-47dd-820f-2f94be88f387 req-a14e18a4-4c4f-4c7f-9d0a-78ec14d43fbc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Updating instance_info_cache with network_info: [{"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.468 239969 DEBUG oslo_concurrency.lockutils [req-06c91c64-c381-47dd-820f-2f94be88f387 req-a14e18a4-4c4f-4c7f-9d0a-78ec14d43fbc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.483 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.484 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.484 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No VIF found with MAC fa:16:3e:67:57:38, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.485 239969 INFO nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Using config drive
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.514 239969 DEBUG nova.storage.rbd_utils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:02 compute-0 podman[290563]: 2026-01-26 16:00:02.626733008 +0000 UTC m=+0.056212431 container create a5f61f1d03fb6904aaacc491130bacaf23a57351f81c7c7cd4ae77205761cd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:00:02 compute-0 systemd[1]: Started libpod-conmon-a5f61f1d03fb6904aaacc491130bacaf23a57351f81c7c7cd4ae77205761cd8b.scope.
Jan 26 16:00:02 compute-0 podman[290563]: 2026-01-26 16:00:02.593244305 +0000 UTC m=+0.022723758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:00:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:02 compute-0 podman[290563]: 2026-01-26 16:00:02.716206404 +0000 UTC m=+0.145685897 container init a5f61f1d03fb6904aaacc491130bacaf23a57351f81c7c7cd4ae77205761cd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:00:02 compute-0 podman[290563]: 2026-01-26 16:00:02.723647297 +0000 UTC m=+0.153126750 container start a5f61f1d03fb6904aaacc491130bacaf23a57351f81c7c7cd4ae77205761cd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:00:02 compute-0 podman[290563]: 2026-01-26 16:00:02.728646589 +0000 UTC m=+0.158126042 container attach a5f61f1d03fb6904aaacc491130bacaf23a57351f81c7c7cd4ae77205761cd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:00:02 compute-0 funny_franklin[290579]: 167 167
Jan 26 16:00:02 compute-0 systemd[1]: libpod-a5f61f1d03fb6904aaacc491130bacaf23a57351f81c7c7cd4ae77205761cd8b.scope: Deactivated successfully.
Jan 26 16:00:02 compute-0 podman[290563]: 2026-01-26 16:00:02.730675959 +0000 UTC m=+0.160155402 container died a5f61f1d03fb6904aaacc491130bacaf23a57351f81c7c7cd4ae77205761cd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:00:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-df16d9409008113c6fbe4086877145fdedb3ae8e29620fba4bf2b37e5895d84b-merged.mount: Deactivated successfully.
Jan 26 16:00:02 compute-0 ceph-mon[75140]: pgmap v1407: 305 pgs: 305 active+clean; 273 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 1.8 MiB/s wr, 332 op/s
Jan 26 16:00:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1850153001' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:02 compute-0 podman[290563]: 2026-01-26 16:00:02.790426425 +0000 UTC m=+0.219905848 container remove a5f61f1d03fb6904aaacc491130bacaf23a57351f81c7c7cd4ae77205761cd8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:00:02 compute-0 systemd[1]: libpod-conmon-a5f61f1d03fb6904aaacc491130bacaf23a57351f81c7c7cd4ae77205761cd8b.scope: Deactivated successfully.
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.939 239969 INFO nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Creating config drive at /var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d/disk.config
Jan 26 16:00:02 compute-0 nova_compute[239965]: 2026-01-26 16:00:02.944 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp357en7if execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:03 compute-0 podman[290605]: 2026-01-26 16:00:03.002549262 +0000 UTC m=+0.050983233 container create dd68b9a27d379b0895d5d06f57c2654f8a3e40fabd5199f06653d3484a8de513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:00:03 compute-0 systemd[1]: Started libpod-conmon-dd68b9a27d379b0895d5d06f57c2654f8a3e40fabd5199f06653d3484a8de513.scope.
Jan 26 16:00:03 compute-0 podman[290605]: 2026-01-26 16:00:02.980597613 +0000 UTC m=+0.029031604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:00:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678d90e93c9f18cf77311563599dad510039735979d79f870de73f8931999625/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678d90e93c9f18cf77311563599dad510039735979d79f870de73f8931999625/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678d90e93c9f18cf77311563599dad510039735979d79f870de73f8931999625/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678d90e93c9f18cf77311563599dad510039735979d79f870de73f8931999625/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.084 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp357en7if" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:03 compute-0 podman[290605]: 2026-01-26 16:00:03.106524984 +0000 UTC m=+0.154958975 container init dd68b9a27d379b0895d5d06f57c2654f8a3e40fabd5199f06653d3484a8de513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:00:03 compute-0 podman[290605]: 2026-01-26 16:00:03.117166566 +0000 UTC m=+0.165600537 container start dd68b9a27d379b0895d5d06f57c2654f8a3e40fabd5199f06653d3484a8de513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.118 239969 DEBUG nova.storage.rbd_utils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:03 compute-0 podman[290605]: 2026-01-26 16:00:03.122775993 +0000 UTC m=+0.171209974 container attach dd68b9a27d379b0895d5d06f57c2654f8a3e40fabd5199f06653d3484a8de513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.126 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d/disk.config d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]: {
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:     "0": [
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:         {
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "devices": [
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "/dev/loop3"
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             ],
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_name": "ceph_lv0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_size": "21470642176",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "name": "ceph_lv0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "tags": {
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.cluster_name": "ceph",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.crush_device_class": "",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.encrypted": "0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.objectstore": "bluestore",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.osd_id": "0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.type": "block",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.vdo": "0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.with_tpm": "0"
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             },
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "type": "block",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "vg_name": "ceph_vg0"
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:         }
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:     ],
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:     "1": [
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:         {
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "devices": [
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "/dev/loop4"
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             ],
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_name": "ceph_lv1",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_size": "21470642176",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "name": "ceph_lv1",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "tags": {
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.cluster_name": "ceph",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.crush_device_class": "",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.encrypted": "0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.objectstore": "bluestore",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.osd_id": "1",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.type": "block",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.vdo": "0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.with_tpm": "0"
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             },
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "type": "block",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "vg_name": "ceph_vg1"
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:         }
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:     ],
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:     "2": [
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:         {
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "devices": [
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "/dev/loop5"
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             ],
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_name": "ceph_lv2",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_size": "21470642176",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "name": "ceph_lv2",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "tags": {
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.cluster_name": "ceph",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.crush_device_class": "",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.encrypted": "0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.objectstore": "bluestore",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.osd_id": "2",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.type": "block",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.vdo": "0",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:                 "ceph.with_tpm": "0"
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             },
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "type": "block",
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:             "vg_name": "ceph_vg2"
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:         }
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]:     ]
Jan 26 16:00:03 compute-0 focused_visvesvaraya[290625]: }
Jan 26 16:00:03 compute-0 systemd[1]: libpod-dd68b9a27d379b0895d5d06f57c2654f8a3e40fabd5199f06653d3484a8de513.scope: Deactivated successfully.
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.463 239969 DEBUG oslo_concurrency.processutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d/disk.config d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.464 239969 INFO nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Deleting local config drive /var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d/disk.config because it was imported into RBD.
Jan 26 16:00:03 compute-0 podman[290671]: 2026-01-26 16:00:03.513575455 +0000 UTC m=+0.033857062 container died dd68b9a27d379b0895d5d06f57c2654f8a3e40fabd5199f06653d3484a8de513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:00:03 compute-0 kernel: tap34a7ae82-74: entered promiscuous mode
Jan 26 16:00:03 compute-0 NetworkManager[48954]: <info>  [1769443203.5424] manager: (tap34a7ae82-74): new Tun device (/org/freedesktop/NetworkManager/Devices/215)
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:03 compute-0 ovn_controller[146046]: 2026-01-26T16:00:03Z|00475|binding|INFO|Claiming lport 34a7ae82-747e-4fef-9081-0d0e28e80f28 for this chassis.
Jan 26 16:00:03 compute-0 ovn_controller[146046]: 2026-01-26T16:00:03Z|00476|binding|INFO|34a7ae82-747e-4fef-9081-0d0e28e80f28: Claiming fa:16:3e:67:57:38 10.100.0.4
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.569 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:03 compute-0 ovn_controller[146046]: 2026-01-26T16:00:03Z|00477|binding|INFO|Setting lport 34a7ae82-747e-4fef-9081-0d0e28e80f28 ovn-installed in OVS
Jan 26 16:00:03 compute-0 ovn_controller[146046]: 2026-01-26T16:00:03Z|00478|binding|INFO|Setting lport 34a7ae82-747e-4fef-9081-0d0e28e80f28 up in Southbound
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.572 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.573 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:57:38 10.100.0.4'], port_security=['fa:16:3e:67:57:38 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd89afe65-4fc2-4cfa-8580-7ae5befd6a4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=34a7ae82-747e-4fef-9081-0d0e28e80f28) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.575 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 34a7ae82-747e-4fef-9081-0d0e28e80f28 in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 bound to our chassis
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.576 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 16:00:03 compute-0 systemd-udevd[290692]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:03 compute-0 systemd-machined[208061]: New machine qemu-66-instance-0000003c.
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.591 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[19c7dfac-6028-4779-bc44-56a46270b0a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.592 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00eb7549-d1 in ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.596 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00eb7549-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.596 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d4094ea8-0dcc-4801-821f-4cfcb2cff78a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.598 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[389b20ad-5d24-4833-b9e0-5c7941e2db5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 NetworkManager[48954]: <info>  [1769443203.6050] device (tap34a7ae82-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:00:03 compute-0 NetworkManager[48954]: <info>  [1769443203.6064] device (tap34a7ae82-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:00:03 compute-0 systemd[1]: Started Virtual Machine qemu-66-instance-0000003c.
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.612 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[3a86eee6-b2c5-47e7-aed2-8a4f09d5844c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.633 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2c94d2fa-d83f-4db6-9c50-8432bd941043]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.668 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[67083d51-73d1-4427-af15-5f9561a8653e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 systemd-udevd[290699]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:03 compute-0 NetworkManager[48954]: <info>  [1769443203.6752] manager: (tap00eb7549-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/216)
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.674 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[851c9ac6-30a4-46b7-82bf-9a9372ca3056]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-678d90e93c9f18cf77311563599dad510039735979d79f870de73f8931999625-merged.mount: Deactivated successfully.
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.710 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[af7598be-5579-47fb-ab77-483cf051530e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.715 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[eb88e040-b78f-4984-b564-2c08220ac0bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 NetworkManager[48954]: <info>  [1769443203.7378] device (tap00eb7549-d0): carrier: link connected
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.742 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9e1dbb-b859-4fce-b1fd-050b6b6ce6a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.763 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0a1fbdf2-a006-48ea-87bb-e2a1a018e3b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469454, 'reachable_time': 33101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290728, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.776 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f02a5def-50c0-4788-966d-6391a92d7bac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:aa8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 469454, 'tstamp': 469454}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290729, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.792 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fa22a5f4-b1b5-43df-bc9b-a9880582fc1e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469454, 'reachable_time': 33101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290730, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.838 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5d013a20-33e3-441c-bcb1-0453d28fbaf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 podman[290671]: 2026-01-26 16:00:03.860559902 +0000 UTC m=+0.380841489 container remove dd68b9a27d379b0895d5d06f57c2654f8a3e40fabd5199f06653d3484a8de513 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_visvesvaraya, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:00:03 compute-0 systemd[1]: libpod-conmon-dd68b9a27d379b0895d5d06f57c2654f8a3e40fabd5199f06653d3484a8de513.scope: Deactivated successfully.
Jan 26 16:00:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 273 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 1.8 MiB/s wr, 281 op/s
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.908 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7df2550b-6fcc-44dd-8a28-4963eb88390f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.910 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.910 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.910 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00eb7549-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:03 compute-0 kernel: tap00eb7549-d0: entered promiscuous mode
Jan 26 16:00:03 compute-0 NetworkManager[48954]: <info>  [1769443203.9128] manager: (tap00eb7549-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.915 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00eb7549-d0, col_values=(('external_ids', {'iface-id': 'c26451f4-ab48-4e8d-b25b-9e4988573b7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:03 compute-0 ovn_controller[146046]: 2026-01-26T16:00:03Z|00479|binding|INFO|Releasing lport c26451f4-ab48-4e8d-b25b-9e4988573b7d from this chassis (sb_readonly=0)
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.916 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:03 compute-0 nova_compute[239965]: 2026-01-26 16:00:03.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.935 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.936 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8b310857-9f15-4ff9-ada9-1d84ab74690b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.936 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:00:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:03.937 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'env', 'PROCESS_TAG=haproxy-00eb7549-d24b-4657-b244-7664c8a34b20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00eb7549-d24b-4657-b244-7664c8a34b20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:00:03 compute-0 sudo[290504]: pam_unix(sudo:session): session closed for user root
Jan 26 16:00:04 compute-0 sudo[290738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:00:04 compute-0 sudo[290738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:00:04 compute-0 sudo[290738]: pam_unix(sudo:session): session closed for user root
Jan 26 16:00:04 compute-0 sudo[290765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:00:04 compute-0 sudo[290765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:00:04 compute-0 ovn_controller[146046]: 2026-01-26T16:00:04Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ed:05:48 10.100.0.11
Jan 26 16:00:04 compute-0 ovn_controller[146046]: 2026-01-26T16:00:04Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ed:05:48 10.100.0.11
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.365 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443204.3645318, d89afe65-4fc2-4cfa-8580-7ae5befd6a4d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.366 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] VM Started (Lifecycle Event)
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.385 239969 DEBUG nova.compute.manager [req-1664a787-3243-42a3-9f09-fbfed09142ea req-0c40a276-6cce-45d6-9f8b-ef622c074c3e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Received event network-vif-plugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.386 239969 DEBUG oslo_concurrency.lockutils [req-1664a787-3243-42a3-9f09-fbfed09142ea req-0c40a276-6cce-45d6-9f8b-ef622c074c3e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.386 239969 DEBUG oslo_concurrency.lockutils [req-1664a787-3243-42a3-9f09-fbfed09142ea req-0c40a276-6cce-45d6-9f8b-ef622c074c3e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.386 239969 DEBUG oslo_concurrency.lockutils [req-1664a787-3243-42a3-9f09-fbfed09142ea req-0c40a276-6cce-45d6-9f8b-ef622c074c3e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.386 239969 DEBUG nova.compute.manager [req-1664a787-3243-42a3-9f09-fbfed09142ea req-0c40a276-6cce-45d6-9f8b-ef622c074c3e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Processing event network-vif-plugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.387 239969 DEBUG nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.391 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.395 239969 INFO nova.virt.libvirt.driver [-] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance spawned successfully.
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.396 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.399 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.402 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.423 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.424 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.424 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.425 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.425 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.425 239969 DEBUG nova.virt.libvirt.driver [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.433 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.434 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443204.3654156, d89afe65-4fc2-4cfa-8580-7ae5befd6a4d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.434 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] VM Paused (Lifecycle Event)
Jan 26 16:00:04 compute-0 podman[290851]: 2026-01-26 16:00:04.340335087 +0000 UTC m=+0.027236129 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:00:04 compute-0 podman[290851]: 2026-01-26 16:00:04.438303862 +0000 UTC m=+0.125204894 container create 2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.461 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.467 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443204.39162, d89afe65-4fc2-4cfa-8580-7ae5befd6a4d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.468 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] VM Resumed (Lifecycle Event)
Jan 26 16:00:04 compute-0 podman[290874]: 2026-01-26 16:00:04.495647069 +0000 UTC m=+0.061908200 container create f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_colden, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:00:04 compute-0 systemd[1]: Started libpod-conmon-2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95.scope.
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.516 239969 INFO nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Took 8.14 seconds to spawn the instance on the hypervisor.
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.516 239969 DEBUG nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.519 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.528 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:00:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:04 compute-0 systemd[1]: Started libpod-conmon-f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1.scope.
Jan 26 16:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/700a0d4790aed27f15275ced4cfc3bafc28e19cd9b4e03a5ab24f52278972c76/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:04 compute-0 podman[290874]: 2026-01-26 16:00:04.458706023 +0000 UTC m=+0.024967184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:00:04 compute-0 podman[290851]: 2026-01-26 16:00:04.570415525 +0000 UTC m=+0.257316587 container init 2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:00:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:04 compute-0 podman[290851]: 2026-01-26 16:00:04.57961046 +0000 UTC m=+0.266511482 container start 2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.588 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:00:04 compute-0 podman[290874]: 2026-01-26 16:00:04.607607668 +0000 UTC m=+0.173868829 container init f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_colden, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:00:04 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[290892]: [NOTICE]   (290901) : New worker (290904) forked
Jan 26 16:00:04 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[290892]: [NOTICE]   (290901) : Loading success.
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.615 239969 INFO nova.compute.manager [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Took 9.41 seconds to build instance.
Jan 26 16:00:04 compute-0 podman[290874]: 2026-01-26 16:00:04.617271035 +0000 UTC m=+0.183532166 container start f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_colden, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 16:00:04 compute-0 systemd[1]: libpod-f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1.scope: Deactivated successfully.
Jan 26 16:00:04 compute-0 inspiring_colden[290897]: 167 167
Jan 26 16:00:04 compute-0 conmon[290897]: conmon f3791ee65b0fba8b377a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1.scope/container/memory.events
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.634 239969 DEBUG oslo_concurrency.lockutils [None req-614557bd-31de-4687-9d0b-31b744d306d4 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.518s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:04 compute-0 podman[290874]: 2026-01-26 16:00:04.639402608 +0000 UTC m=+0.205663759 container attach f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_colden, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:00:04 compute-0 podman[290874]: 2026-01-26 16:00:04.642776401 +0000 UTC m=+0.209037532 container died f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_colden, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-944c22fd04ff5b23814dfe070c6137c31ed6bb7b8b221e2b4eea9c912c271c7c-merged.mount: Deactivated successfully.
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.671 239969 DEBUG oslo_concurrency.lockutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "af55c460-3670-4fa6-855f-386e2c58b312" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.671 239969 DEBUG oslo_concurrency.lockutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.672 239969 DEBUG oslo_concurrency.lockutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "af55c460-3670-4fa6-855f-386e2c58b312-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.672 239969 DEBUG oslo_concurrency.lockutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.672 239969 DEBUG oslo_concurrency.lockutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.673 239969 INFO nova.compute.manager [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Terminating instance
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.674 239969 DEBUG nova.compute.manager [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:00:04 compute-0 podman[290874]: 2026-01-26 16:00:04.687402476 +0000 UTC m=+0.253663607 container remove f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_colden, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 16:00:04 compute-0 systemd[1]: libpod-conmon-f3791ee65b0fba8b377ab6f1cb9d32fb83746a0f4863db08be5399df7c25a6b1.scope: Deactivated successfully.
Jan 26 16:00:04 compute-0 kernel: tapaa71bf39-b6 (unregistering): left promiscuous mode
Jan 26 16:00:04 compute-0 NetworkManager[48954]: <info>  [1769443204.7480] device (tapaa71bf39-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:00:04 compute-0 ovn_controller[146046]: 2026-01-26T16:00:04Z|00480|binding|INFO|Releasing lport aa71bf39-b625-491e-9e6e-9a6b6bc4d087 from this chassis (sb_readonly=0)
Jan 26 16:00:04 compute-0 ovn_controller[146046]: 2026-01-26T16:00:04Z|00481|binding|INFO|Setting lport aa71bf39-b625-491e-9e6e-9a6b6bc4d087 down in Southbound
Jan 26 16:00:04 compute-0 ovn_controller[146046]: 2026-01-26T16:00:04Z|00482|binding|INFO|Removing iface tapaa71bf39-b6 ovn-installed in OVS
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.762 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.769 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:05:48 10.100.0.11'], port_security=['fa:16:3e:ed:05:48 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'af55c460-3670-4fa6-855f-386e2c58b312', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe677d76-a003-444b-839f-1500fe522371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb0b7b89-c66e-4945-ae39-9f6072eed95b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f020610-ced7-462f-92be-0953400a3e4b, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=aa71bf39-b625-491e-9e6e-9a6b6bc4d087) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.774 156105 INFO neutron.agent.ovn.metadata.agent [-] Port aa71bf39-b625-491e-9e6e-9a6b6bc4d087 in datapath fe677d76-a003-444b-839f-1500fe522371 unbound from our chassis
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.777 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fe677d76-a003-444b-839f-1500fe522371
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.778 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.798 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c36d9c-5a2e-4dae-806b-8c7f621ceccc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:04 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000039.scope: Deactivated successfully.
Jan 26 16:00:04 compute-0 systemd[1]: machine-qemu\x2d63\x2dinstance\x2d00000039.scope: Consumed 14.055s CPU time.
Jan 26 16:00:04 compute-0 systemd-machined[208061]: Machine qemu-63-instance-00000039 terminated.
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.834 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e3d9b810-7ace-44d6-9758-11c7ec09a640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.836 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ef517d5d-1b75-4fdd-b734-2f641c6ee448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.867 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[38f9bd78-8cac-4581-bd39-266ad838d50e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:04 compute-0 NetworkManager[48954]: <info>  [1769443204.8892] manager: (tapaa71bf39-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.892 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7a66ee4c-3999-4f5e-b885-114c218bedc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfe677d76-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2d:11:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 139], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467858, 'reachable_time': 38560, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290949, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.904 239969 INFO nova.virt.libvirt.driver [-] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Instance destroyed successfully.
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.904 239969 DEBUG nova.objects.instance [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lazy-loading 'resources' on Instance uuid af55c460-3670-4fa6-855f-386e2c58b312 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.912 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c55f2861-230f-4762-982a-e18457f718e1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467871, 'tstamp': 467871}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290961, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfe677d76-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 467874, 'tstamp': 467874}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290961, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.913 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe677d76-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.915 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:04 compute-0 podman[290942]: 2026-01-26 16:00:04.918822756 +0000 UTC m=+0.053970015 container create aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_turing, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.920 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe677d76-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.920 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.920 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfe677d76-a0, col_values=(('external_ids', {'iface-id': 'a04cf444-51ef-4a15-8985-77a2cc807a92'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:04.921 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.922 239969 DEBUG nova.virt.libvirt.vif [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1762749817',display_name='tempest-ListServersNegativeTestJSON-server-1762749817-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1762749817-2',id=57,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-26T15:59:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='07b927cfeb484a5d80d731f9e9abfa4f',ramdisk_id='',reservation_id='r-oeb19hac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1615111890',owner_user_name='tempest-ListServersNegativeTestJSON-1615111890-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:59:49Z,user_data=None,user_id='48264b1a141f4ef5b99a45972af1903a',uuid=af55c460-3670-4fa6-855f-386e2c58b312,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.922 239969 DEBUG nova.network.os_vif_util [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converting VIF {"id": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "address": "fa:16:3e:ed:05:48", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa71bf39-b6", "ovs_interfaceid": "aa71bf39-b625-491e-9e6e-9a6b6bc4d087", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.923 239969 DEBUG nova.network.os_vif_util [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:05:48,bridge_name='br-int',has_traffic_filtering=True,id=aa71bf39-b625-491e-9e6e-9a6b6bc4d087,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa71bf39-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.924 239969 DEBUG os_vif [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:05:48,bridge_name='br-int',has_traffic_filtering=True,id=aa71bf39-b625-491e-9e6e-9a6b6bc4d087,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa71bf39-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.927 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa71bf39-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.928 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.932 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:04 compute-0 nova_compute[239965]: 2026-01-26 16:00:04.934 239969 INFO os_vif [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:05:48,bridge_name='br-int',has_traffic_filtering=True,id=aa71bf39-b625-491e-9e6e-9a6b6bc4d087,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa71bf39-b6')
Jan 26 16:00:04 compute-0 ceph-mon[75140]: pgmap v1408: 305 pgs: 305 active+clean; 273 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 1.8 MiB/s wr, 281 op/s
Jan 26 16:00:04 compute-0 systemd[1]: Started libpod-conmon-aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320.scope.
Jan 26 16:00:04 compute-0 podman[290942]: 2026-01-26 16:00:04.899294066 +0000 UTC m=+0.034441355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:00:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6babd022fd34f341cf3c71536c3a1a11f1e3223f41c2e396432e6b947a18d6b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6babd022fd34f341cf3c71536c3a1a11f1e3223f41c2e396432e6b947a18d6b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6babd022fd34f341cf3c71536c3a1a11f1e3223f41c2e396432e6b947a18d6b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6babd022fd34f341cf3c71536c3a1a11f1e3223f41c2e396432e6b947a18d6b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:05 compute-0 podman[290942]: 2026-01-26 16:00:05.091613058 +0000 UTC m=+0.226760347 container init aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_turing, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 16:00:05 compute-0 podman[290942]: 2026-01-26 16:00:05.10106075 +0000 UTC m=+0.236208009 container start aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:00:05 compute-0 podman[290942]: 2026-01-26 16:00:05.10641998 +0000 UTC m=+0.241567259 container attach aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_turing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:00:05 compute-0 ovn_controller[146046]: 2026-01-26T16:00:05Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2a:d7:9a 10.100.0.12
Jan 26 16:00:05 compute-0 ovn_controller[146046]: 2026-01-26T16:00:05Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2a:d7:9a 10.100.0.12
Jan 26 16:00:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:00:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Cumulative writes: 6381 writes, 28K keys, 6381 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6381 writes, 6381 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1668 writes, 7488 keys, 1668 commit groups, 1.0 writes per commit group, ingest: 10.19 MB, 0.02 MB/s
                                           Interval WAL: 1668 writes, 1668 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.3      0.35              0.10        16    0.022       0      0       0.0       0.0
                                             L6      1/0    8.85 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    154.4    127.4      0.92              0.28        15    0.061     72K   8322       0.0       0.0
                                            Sum      1/0    8.85 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    111.8    118.8      1.27              0.38        31    0.041     72K   8322       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8    130.1    134.0      0.33              0.11         8    0.042     23K   2599       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    154.4    127.4      0.92              0.28        15    0.061     72K   8322       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     97.5      0.34              0.10        15    0.023       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.033, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.15 GB write, 0.06 MB/s write, 0.14 GB read, 0.06 MB/s read, 1.3 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 16.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000186 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1015,15.54 MB,5.11322%) FilterBlock(32,209.30 KB,0.067234%) IndexBlock(32,388.64 KB,0.124846%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 16:00:05 compute-0 nova_compute[239965]: 2026-01-26 16:00:05.709 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:05 compute-0 nova_compute[239965]: 2026-01-26 16:00:05.757 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:00:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1409: 305 pgs: 305 active+clean; 320 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.0 MiB/s wr, 458 op/s
Jan 26 16:00:05 compute-0 lvm[291063]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:00:05 compute-0 lvm[291063]: VG ceph_vg0 finished
Jan 26 16:00:05 compute-0 lvm[291065]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:00:05 compute-0 lvm[291065]: VG ceph_vg1 finished
Jan 26 16:00:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:05 compute-0 lvm[291066]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:00:05 compute-0 lvm[291066]: VG ceph_vg2 finished
Jan 26 16:00:06 compute-0 hungry_turing[290985]: {}
Jan 26 16:00:06 compute-0 systemd[1]: libpod-aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320.scope: Deactivated successfully.
Jan 26 16:00:06 compute-0 systemd[1]: libpod-aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320.scope: Consumed 1.456s CPU time.
Jan 26 16:00:06 compute-0 podman[290942]: 2026-01-26 16:00:06.080380756 +0000 UTC m=+1.215528015 container died aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_turing, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.250 239969 DEBUG oslo_concurrency.lockutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.251 239969 DEBUG oslo_concurrency.lockutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.251 239969 DEBUG oslo_concurrency.lockutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.252 239969 DEBUG oslo_concurrency.lockutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.252 239969 DEBUG oslo_concurrency.lockutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.253 239969 INFO nova.compute.manager [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Terminating instance
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.254 239969 DEBUG nova.compute.manager [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-6babd022fd34f341cf3c71536c3a1a11f1e3223f41c2e396432e6b947a18d6b0-merged.mount: Deactivated successfully.
Jan 26 16:00:06 compute-0 kernel: tap360a1634-f2 (unregistering): left promiscuous mode
Jan 26 16:00:06 compute-0 NetworkManager[48954]: <info>  [1769443206.6062] device (tap360a1634-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:06 compute-0 ovn_controller[146046]: 2026-01-26T16:00:06Z|00483|binding|INFO|Releasing lport 360a1634-f2af-423f-8312-2d8623a1dab4 from this chassis (sb_readonly=0)
Jan 26 16:00:06 compute-0 ovn_controller[146046]: 2026-01-26T16:00:06Z|00484|binding|INFO|Setting lport 360a1634-f2af-423f-8312-2d8623a1dab4 down in Southbound
Jan 26 16:00:06 compute-0 ovn_controller[146046]: 2026-01-26T16:00:06Z|00485|binding|INFO|Removing iface tap360a1634-f2 ovn-installed in OVS
Jan 26 16:00:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:06.629 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:d7:9a 10.100.0.12'], port_security=['fa:16:3e:2a:d7:9a 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fe677d76-a003-444b-839f-1500fe522371', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '07b927cfeb484a5d80d731f9e9abfa4f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eb0b7b89-c66e-4945-ae39-9f6072eed95b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f020610-ced7-462f-92be-0953400a3e4b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=360a1634-f2af-423f-8312-2d8623a1dab4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:06.630 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 360a1634-f2af-423f-8312-2d8623a1dab4 in datapath fe677d76-a003-444b-839f-1500fe522371 unbound from our chassis
Jan 26 16:00:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:06.632 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fe677d76-a003-444b-839f-1500fe522371, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.633 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:06.633 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b958b51b-a756-4664-8d10-d9f1ff953c53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:06.634 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fe677d76-a003-444b-839f-1500fe522371 namespace which is not needed anymore
Jan 26 16:00:06 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000003a.scope: Deactivated successfully.
Jan 26 16:00:06 compute-0 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d0000003a.scope: Consumed 13.528s CPU time.
Jan 26 16:00:06 compute-0 systemd-machined[208061]: Machine qemu-64-instance-0000003a terminated.
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.710 239969 DEBUG oslo_concurrency.lockutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.711 239969 DEBUG oslo_concurrency.lockutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.711 239969 INFO nova.compute.manager [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Shelving
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.734 239969 DEBUG nova.virt.libvirt.driver [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.844 239969 DEBUG nova.compute.manager [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Received event network-vif-plugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.844 239969 DEBUG oslo_concurrency.lockutils [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.844 239969 DEBUG oslo_concurrency.lockutils [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.844 239969 DEBUG oslo_concurrency.lockutils [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.845 239969 DEBUG nova.compute.manager [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] No waiting events found dispatching network-vif-plugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.845 239969 WARNING nova.compute.manager [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Received unexpected event network-vif-plugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 for instance with vm_state active and task_state shelving.
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.845 239969 DEBUG nova.compute.manager [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Received event network-vif-unplugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.845 239969 DEBUG oslo_concurrency.lockutils [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af55c460-3670-4fa6-855f-386e2c58b312-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.845 239969 DEBUG oslo_concurrency.lockutils [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.845 239969 DEBUG oslo_concurrency.lockutils [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.846 239969 DEBUG nova.compute.manager [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] No waiting events found dispatching network-vif-unplugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.846 239969 DEBUG nova.compute.manager [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Received event network-vif-unplugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.846 239969 DEBUG nova.compute.manager [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Received event network-vif-plugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.846 239969 DEBUG oslo_concurrency.lockutils [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af55c460-3670-4fa6-855f-386e2c58b312-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.846 239969 DEBUG oslo_concurrency.lockutils [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.846 239969 DEBUG oslo_concurrency.lockutils [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.846 239969 DEBUG nova.compute.manager [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] No waiting events found dispatching network-vif-plugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.846 239969 WARNING nova.compute.manager [req-6ec7be59-093a-4670-a616-25d9b8f61e9e req-437539f7-c766-4909-9e99-bed9f74c8737 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Received unexpected event network-vif-plugged-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 for instance with vm_state active and task_state deleting.
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.889 239969 INFO nova.virt.libvirt.driver [-] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Instance destroyed successfully.
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.889 239969 DEBUG nova.objects.instance [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lazy-loading 'resources' on Instance uuid 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.905 239969 DEBUG nova.virt.libvirt.vif [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServersNegativeTestJSON-server-1762749817',display_name='tempest-ListServersNegativeTestJSON-server-1762749817-3',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserversnegativetestjson-server-1762749817-3',id=58,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=2,launched_at=2026-01-26T15:59:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='07b927cfeb484a5d80d731f9e9abfa4f',ramdisk_id='',reservation_id='r-oeb19hac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServersNegativeTestJSON-1615111890',owner_user_name='tempest-ListServersNegativeTestJSON-1615111890-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T15:59:51Z,user_data=None,user_id='48264b1a141f4ef5b99a45972af1903a',uuid=7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.905 239969 DEBUG nova.network.os_vif_util [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converting VIF {"id": "360a1634-f2af-423f-8312-2d8623a1dab4", "address": "fa:16:3e:2a:d7:9a", "network": {"id": "fe677d76-a003-444b-839f-1500fe522371", "bridge": "br-int", "label": "tempest-ListServersNegativeTestJSON-176828695-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "07b927cfeb484a5d80d731f9e9abfa4f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap360a1634-f2", "ovs_interfaceid": "360a1634-f2af-423f-8312-2d8623a1dab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.906 239969 DEBUG nova.network.os_vif_util [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:d7:9a,bridge_name='br-int',has_traffic_filtering=True,id=360a1634-f2af-423f-8312-2d8623a1dab4,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap360a1634-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.907 239969 DEBUG os_vif [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:d7:9a,bridge_name='br-int',has_traffic_filtering=True,id=360a1634-f2af-423f-8312-2d8623a1dab4,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap360a1634-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.909 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.909 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap360a1634-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.910 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:06 compute-0 nova_compute[239965]: 2026-01-26 16:00:06.916 239969 INFO os_vif [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:d7:9a,bridge_name='br-int',has_traffic_filtering=True,id=360a1634-f2af-423f-8312-2d8623a1dab4,network=Network(fe677d76-a003-444b-839f-1500fe522371),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap360a1634-f2')
Jan 26 16:00:07 compute-0 podman[290942]: 2026-01-26 16:00:07.191688773 +0000 UTC m=+2.326836032 container remove aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_turing, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:00:07 compute-0 sudo[290765]: pam_unix(sudo:session): session closed for user root
Jan 26 16:00:07 compute-0 systemd[1]: libpod-conmon-aa73692fdb46561af8eceed14ade196981895957e2a8fe573c5ac0ea689f2320.scope: Deactivated successfully.
Jan 26 16:00:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:00:07 compute-0 ceph-mon[75140]: pgmap v1409: 305 pgs: 305 active+clean; 320 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 8.0 MiB/s wr, 458 op/s
Jan 26 16:00:07 compute-0 neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371[289686]: [NOTICE]   (289693) : haproxy version is 2.8.14-c23fe91
Jan 26 16:00:07 compute-0 neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371[289686]: [NOTICE]   (289693) : path to executable is /usr/sbin/haproxy
Jan 26 16:00:07 compute-0 neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371[289686]: [ALERT]    (289693) : Current worker (289698) exited with code 143 (Terminated)
Jan 26 16:00:07 compute-0 neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371[289686]: [WARNING]  (289693) : All workers exited. Exiting... (0)
Jan 26 16:00:07 compute-0 systemd[1]: libpod-e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d.scope: Deactivated successfully.
Jan 26 16:00:07 compute-0 conmon[289686]: conmon e0f31eaa820f2f9faf86 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d.scope/container/memory.events
Jan 26 16:00:07 compute-0 podman[291134]: 2026-01-26 16:00:07.36838033 +0000 UTC m=+0.094322567 container died e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.554 239969 DEBUG nova.compute.manager [req-2a04d775-ee92-40da-90ba-b872944fcba0 req-b9290ad0-5e93-4079-aed7-134dba056344 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Received event network-vif-unplugged-360a1634-f2af-423f-8312-2d8623a1dab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.554 239969 DEBUG oslo_concurrency.lockutils [req-2a04d775-ee92-40da-90ba-b872944fcba0 req-b9290ad0-5e93-4079-aed7-134dba056344 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.555 239969 DEBUG oslo_concurrency.lockutils [req-2a04d775-ee92-40da-90ba-b872944fcba0 req-b9290ad0-5e93-4079-aed7-134dba056344 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.555 239969 DEBUG oslo_concurrency.lockutils [req-2a04d775-ee92-40da-90ba-b872944fcba0 req-b9290ad0-5e93-4079-aed7-134dba056344 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.555 239969 DEBUG nova.compute.manager [req-2a04d775-ee92-40da-90ba-b872944fcba0 req-b9290ad0-5e93-4079-aed7-134dba056344 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] No waiting events found dispatching network-vif-unplugged-360a1634-f2af-423f-8312-2d8623a1dab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.556 239969 DEBUG nova.compute.manager [req-2a04d775-ee92-40da-90ba-b872944fcba0 req-b9290ad0-5e93-4079-aed7-134dba056344 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Received event network-vif-unplugged-360a1634-f2af-423f-8312-2d8623a1dab4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:00:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:00:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:00:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d-userdata-shm.mount: Deactivated successfully.
Jan 26 16:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4cea7d59da4509a10c18cbcd27b36e1e120ecdedbe5fd1b97da8ba0b096b797c-merged.mount: Deactivated successfully.
Jan 26 16:00:07 compute-0 podman[291134]: 2026-01-26 16:00:07.742870871 +0000 UTC m=+0.468813108 container cleanup e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:00:07 compute-0 systemd[1]: libpod-conmon-e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d.scope: Deactivated successfully.
Jan 26 16:00:07 compute-0 sudo[291166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:00:07 compute-0 sudo[291166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:00:07 compute-0 sudo[291166]: pam_unix(sudo:session): session closed for user root
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.787 239969 INFO nova.virt.libvirt.driver [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Deleting instance files /var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312_del
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.788 239969 INFO nova.virt.libvirt.driver [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Deletion of /var/lib/nova/instances/af55c460-3670-4fa6-855f-386e2c58b312_del complete
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.819 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.820 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:07 compute-0 podman[291191]: 2026-01-26 16:00:07.828694408 +0000 UTC m=+0.049638630 container remove e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 16:00:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:07.834 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[27d30ce8-1a7f-4c79-9d94-79e44b4da1f7]: (4, ('Mon Jan 26 04:00:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371 (e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d)\ne0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d\nMon Jan 26 04:00:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fe677d76-a003-444b-839f-1500fe522371 (e0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d)\ne0f31eaa820f2f9faf8660b9ab93481c81ad789fc86f68906cd4358c8137b67d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:07.837 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7a1338-d197-4e9e-a3b3-6062c2d949a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:07.838 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe677d76-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:07 compute-0 kernel: tapfe677d76-a0: left promiscuous mode
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.840 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.857 239969 INFO nova.compute.manager [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Took 3.18 seconds to destroy the instance on the hypervisor.
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.857 239969 DEBUG oslo.service.loopingcall [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.858 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.859 239969 DEBUG nova.compute.manager [-] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.859 239969 DEBUG nova.network.neutron [-] [instance: af55c460-3670-4fa6-855f-386e2c58b312] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:00:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:07.862 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d48fc513-d770-46f7-a5f7-bee86fde7c32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.862 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:00:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:07.879 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[057fd130-8301-430e-8891-2d022d47157e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:07.881 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[99653bd3-1f89-4c35-9aff-2ebf2a5a7703]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 320 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.0 MiB/s wr, 230 op/s
Jan 26 16:00:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:07.902 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc67eb3-66ad-488e-b70f-c3b16eb2d44a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 467850, 'reachable_time': 23178, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291207, 'error': None, 'target': 'ovnmeta-fe677d76-a003-444b-839f-1500fe522371', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:07 compute-0 systemd[1]: run-netns-ovnmeta\x2dfe677d76\x2da003\x2d444b\x2d839f\x2d1500fe522371.mount: Deactivated successfully.
Jan 26 16:00:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:07.906 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fe677d76-a003-444b-839f-1500fe522371 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:00:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:07.906 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d31a1f-befe-4f25-bfdc-a3a9434fc35c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.984 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.985 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.994 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:00:07 compute-0 nova_compute[239965]: 2026-01-26 16:00:07.995 239969 INFO nova.compute.claims [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.035 239969 INFO nova.virt.libvirt.driver [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Deleting instance files /var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_del
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.036 239969 INFO nova.virt.libvirt.driver [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Deletion of /var/lib/nova/instances/7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97_del complete
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.099 239969 INFO nova.compute.manager [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Took 1.85 seconds to destroy the instance on the hypervisor.
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.100 239969 DEBUG oslo.service.loopingcall [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.100 239969 DEBUG nova.compute.manager [-] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.101 239969 DEBUG nova.network.neutron [-] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.219 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.540 239969 DEBUG nova.network.neutron [-] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.562 239969 INFO nova.compute.manager [-] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Took 0.70 seconds to deallocate network for instance.
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.630 239969 DEBUG oslo_concurrency.lockutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:00:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:00:08 compute-0 ceph-mon[75140]: pgmap v1410: 305 pgs: 305 active+clean; 320 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 8.0 MiB/s wr, 230 op/s
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.788 239969 DEBUG nova.network.neutron [-] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.804 239969 INFO nova.compute.manager [-] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Took 0.70 seconds to deallocate network for instance.
Jan 26 16:00:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/801719245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.855 239969 DEBUG oslo_concurrency.lockutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.856 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.864 239969 DEBUG nova.compute.provider_tree [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.887 239969 DEBUG nova.scheduler.client.report [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.917 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.917 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.919 239969 DEBUG oslo_concurrency.lockutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.289s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.938 239969 DEBUG nova.compute.manager [req-db43081a-6202-4cf5-8e6c-565677aae4ac req-cb73c46f-a492-40a1-a35a-8da5dff7f8b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Received event network-vif-deleted-aa71bf39-b625-491e-9e6e-9a6b6bc4d087 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:08 compute-0 nova_compute[239965]: 2026-01-26 16:00:08.938 239969 DEBUG nova.compute.manager [req-db43081a-6202-4cf5-8e6c-565677aae4ac req-cb73c46f-a492-40a1-a35a-8da5dff7f8b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Received event network-vif-deleted-360a1634-f2af-423f-8312-2d8623a1dab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.044 239969 DEBUG oslo_concurrency.processutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.081 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.082 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.114 239969 INFO nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.143 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.245 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.246 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.246 239969 INFO nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Creating image(s)
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.267 239969 DEBUG nova.storage.rbd_utils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 98bce7d4-177d-4ead-b03c-70f005006e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.289 239969 DEBUG nova.storage.rbd_utils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 98bce7d4-177d-4ead-b03c-70f005006e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.310 239969 DEBUG nova.storage.rbd_utils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 98bce7d4-177d-4ead-b03c-70f005006e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.313 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.380 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.382 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.382 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.383 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.402 239969 DEBUG nova.storage.rbd_utils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 98bce7d4-177d-4ead-b03c-70f005006e85_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.405 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 98bce7d4-177d-4ead-b03c-70f005006e85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:09 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 26 16:00:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2415640371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.671 239969 DEBUG oslo_concurrency.processutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.677 239969 DEBUG nova.compute.provider_tree [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/801719245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2415640371' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.707 239969 DEBUG nova.scheduler.client.report [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.728 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 98bce7d4-177d-4ead-b03c-70f005006e85_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.770 239969 DEBUG oslo_concurrency.lockutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.773 239969 DEBUG oslo_concurrency.lockutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.817 239969 INFO nova.scheduler.client.report [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Deleted allocations for instance af55c460-3670-4fa6-855f-386e2c58b312
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.828 239969 DEBUG nova.storage.rbd_utils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] resizing rbd image 98bce7d4-177d-4ead-b03c-70f005006e85_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.864 239969 DEBUG nova.policy [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9cc0fb8436b44d0499a7d55e0a7e7585', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3f9fff751f0a4a4aba88032259e9628c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:00:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1411: 305 pgs: 305 active+clean; 225 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 8.1 MiB/s wr, 339 op/s
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.920 239969 DEBUG nova.objects.instance [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lazy-loading 'migration_context' on Instance uuid 98bce7d4-177d-4ead-b03c-70f005006e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.935 239969 DEBUG oslo_concurrency.lockutils [None req-7e7ebab3-87fb-42b4-ae2b-64b61711daed 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "af55c460-3670-4fa6-855f-386e2c58b312" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.264s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.939 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.940 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Ensure instance console log exists: /var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.940 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.941 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.941 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:09 compute-0 nova_compute[239965]: 2026-01-26 16:00:09.979 239969 DEBUG oslo_concurrency.processutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:10 compute-0 kernel: tapad309073-b3 (unregistering): left promiscuous mode
Jan 26 16:00:10 compute-0 NetworkManager[48954]: <info>  [1769443210.2225] device (tapad309073-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:00:10 compute-0 ovn_controller[146046]: 2026-01-26T16:00:10Z|00486|binding|INFO|Releasing lport ad309073-b3c7-40b4-a937-5d72ec411ed7 from this chassis (sb_readonly=0)
Jan 26 16:00:10 compute-0 ovn_controller[146046]: 2026-01-26T16:00:10Z|00487|binding|INFO|Setting lport ad309073-b3c7-40b4-a937-5d72ec411ed7 down in Southbound
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.235 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:10 compute-0 ovn_controller[146046]: 2026-01-26T16:00:10Z|00488|binding|INFO|Removing iface tapad309073-b3 ovn-installed in OVS
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.237 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:10.244 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:d5:77 10.100.0.11'], port_security=['fa:16:3e:3a:d5:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a4ddea97-0b71-4080-abed-fd5748d7244e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90c691e2-a785-487c-9a4f-dd30836b91bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41676eb0a634807a0c639355e39a512', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ff51ba6-72d9-4f5f-bcad-1bb60c354451', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30c9b54c-9e24-4f98-9fda-b4dce976db39, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=ad309073-b3c7-40b4-a937-5d72ec411ed7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:10.246 156105 INFO neutron.agent.ovn.metadata.agent [-] Port ad309073-b3c7-40b4-a937-5d72ec411ed7 in datapath 90c691e2-a785-487c-9a4f-dd30836b91bb unbound from our chassis
Jan 26 16:00:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:10.247 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 90c691e2-a785-487c-9a4f-dd30836b91bb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:00:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:10.248 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[12e6c730-26b2-4c6c-891d-0b18ec021d09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.263 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:10 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Jan 26 16:00:10 compute-0 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d0000003b.scope: Consumed 14.371s CPU time.
Jan 26 16:00:10 compute-0 systemd-machined[208061]: Machine qemu-65-instance-0000003b terminated.
Jan 26 16:00:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/211893740' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.649 239969 DEBUG oslo_concurrency.processutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.659 239969 DEBUG nova.compute.provider_tree [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.677 239969 DEBUG nova.scheduler.client.report [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.699 239969 DEBUG oslo_concurrency.lockutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.712 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.732 239969 INFO nova.scheduler.client.report [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Deleted allocations for instance 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97
Jan 26 16:00:10 compute-0 ceph-mon[75140]: pgmap v1411: 305 pgs: 305 active+clean; 225 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 8.1 MiB/s wr, 339 op/s
Jan 26 16:00:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/211893740' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.803 239969 DEBUG oslo_concurrency.lockutils [None req-2ad6614b-81e9-40f3-9000-3ef98f3c9b9f 48264b1a141f4ef5b99a45972af1903a 07b927cfeb484a5d80d731f9e9abfa4f - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.874 239969 INFO nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance shutdown successfully after 15 seconds.
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.878 239969 DEBUG nova.compute.manager [req-5f5efe09-6264-4f32-9c7a-fb57327afc93 req-ac4fa40c-7f0b-4978-b848-ec12c95cc08f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Received event network-vif-plugged-360a1634-f2af-423f-8312-2d8623a1dab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.879 239969 DEBUG oslo_concurrency.lockutils [req-5f5efe09-6264-4f32-9c7a-fb57327afc93 req-ac4fa40c-7f0b-4978-b848-ec12c95cc08f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.879 239969 DEBUG oslo_concurrency.lockutils [req-5f5efe09-6264-4f32-9c7a-fb57327afc93 req-ac4fa40c-7f0b-4978-b848-ec12c95cc08f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.880 239969 DEBUG oslo_concurrency.lockutils [req-5f5efe09-6264-4f32-9c7a-fb57327afc93 req-ac4fa40c-7f0b-4978-b848-ec12c95cc08f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.880 239969 DEBUG nova.compute.manager [req-5f5efe09-6264-4f32-9c7a-fb57327afc93 req-ac4fa40c-7f0b-4978-b848-ec12c95cc08f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] No waiting events found dispatching network-vif-plugged-360a1634-f2af-423f-8312-2d8623a1dab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.880 239969 WARNING nova.compute.manager [req-5f5efe09-6264-4f32-9c7a-fb57327afc93 req-ac4fa40c-7f0b-4978-b848-ec12c95cc08f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Received unexpected event network-vif-plugged-360a1634-f2af-423f-8312-2d8623a1dab4 for instance with vm_state deleted and task_state None.
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.888 239969 INFO nova.virt.libvirt.driver [-] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance destroyed successfully.
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.889 239969 DEBUG nova.objects.instance [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'numa_topology' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.928 239969 INFO nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Attempting rescue
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.930 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 26 16:00:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.956 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.957 239969 INFO nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Creating image(s)
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.982 239969 DEBUG nova.storage.rbd_utils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:10 compute-0 nova_compute[239965]: 2026-01-26 16:00:10.987 239969 DEBUG nova.objects.instance [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.024 239969 DEBUG nova.storage.rbd_utils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.050 239969 DEBUG nova.storage.rbd_utils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.055 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.125 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.126 239969 DEBUG oslo_concurrency.lockutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.127 239969 DEBUG oslo_concurrency.lockutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.127 239969 DEBUG oslo_concurrency.lockutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.148 239969 DEBUG nova.storage.rbd_utils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.153 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 a4ddea97-0b71-4080-abed-fd5748d7244e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.381 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Successfully created port: c0a1fe12-63f4-4d07-9bc6-5613c9307343 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.472 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 a4ddea97-0b71-4080-abed-fd5748d7244e_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.473 239969 DEBUG nova.objects.instance [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'migration_context' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.491 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.492 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Start _get_guest_xml network_info=[{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "vif_mac": "fa:16:3e:3a:d5:77"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.493 239969 DEBUG nova.objects.instance [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'resources' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.513 239969 WARNING nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.519 239969 DEBUG nova.virt.libvirt.host [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.519 239969 DEBUG nova.virt.libvirt.host [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.521 239969 DEBUG nova.virt.libvirt.host [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.522 239969 DEBUG nova.virt.libvirt.host [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.522 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.522 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.523 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.523 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.523 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.523 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.523 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.523 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.524 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.524 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.524 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.524 239969 DEBUG nova.virt.hardware [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.524 239969 DEBUG nova.objects.instance [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.545 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 230 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.1 MiB/s wr, 324 op/s
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.906 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443196.9038267, 66382f82-a38c-4066-8564-7866c37ccd0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.907 239969 INFO nova.compute.manager [-] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] VM Stopped (Lifecycle Event)
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:11 compute-0 nova_compute[239965]: 2026-01-26 16:00:11.927 239969 DEBUG nova.compute.manager [None req-98b3e105-c1e1-4c36-9ecf-5ba34dfcafe8 - - - - - -] [instance: 66382f82-a38c-4066-8564-7866c37ccd0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:12 compute-0 nova_compute[239965]: 2026-01-26 16:00:12.096 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Successfully created port: 73773977-7e7a-4ec3-8051-43770cc2a237 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:00:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/960035416' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:12 compute-0 nova_compute[239965]: 2026-01-26 16:00:12.137 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:12 compute-0 nova_compute[239965]: 2026-01-26 16:00:12.138 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2083970865' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:12 compute-0 nova_compute[239965]: 2026-01-26 16:00:12.800 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:12 compute-0 nova_compute[239965]: 2026-01-26 16:00:12.801 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:12 compute-0 ceph-mon[75140]: pgmap v1412: 305 pgs: 305 active+clean; 230 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 8.1 MiB/s wr, 324 op/s
Jan 26 16:00:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/960035416' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2083970865' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.064 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Successfully created port: 1c0256a0-1409-4bc6-b14d-588ae40e6010 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.071 239969 DEBUG nova.compute.manager [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-unplugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.071 239969 DEBUG oslo_concurrency.lockutils [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.071 239969 DEBUG oslo_concurrency.lockutils [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.072 239969 DEBUG oslo_concurrency.lockutils [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.072 239969 DEBUG nova.compute.manager [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] No waiting events found dispatching network-vif-unplugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.072 239969 WARNING nova.compute.manager [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received unexpected event network-vif-unplugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 for instance with vm_state active and task_state rescuing.
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.072 239969 DEBUG nova.compute.manager [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.073 239969 DEBUG oslo_concurrency.lockutils [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.073 239969 DEBUG oslo_concurrency.lockutils [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.073 239969 DEBUG oslo_concurrency.lockutils [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.073 239969 DEBUG nova.compute.manager [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] No waiting events found dispatching network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.074 239969 WARNING nova.compute.manager [req-d8e2804f-c7f9-44e3-997c-4fa5c6359b95 req-9d65b358-e368-4bd9-8384-8fc96ab36491 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received unexpected event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 for instance with vm_state active and task_state rescuing.
Jan 26 16:00:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2220094973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.399 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.400 239969 DEBUG nova.virt.libvirt.vif [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-881645873',display_name='tempest-ServerRescueTestJSONUnderV235-server-881645873',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-881645873',id=59,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T15:59:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c41676eb0a634807a0c639355e39a512',ramdisk_id='',reservation_id='r-osdyj818',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-701891094',owner_user_name='tempest-ServerRescueTestJSONUnderV235-701891094-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T15:59:51Z,user_data=None,user_id='10e6dce963804f98b45b6f58ef8d1b2e',uuid=a4ddea97-0b71-4080-abed-fd5748d7244e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "vif_mac": "fa:16:3e:3a:d5:77"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.401 239969 DEBUG nova.network.os_vif_util [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Converting VIF {"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "vif_mac": "fa:16:3e:3a:d5:77"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.402 239969 DEBUG nova.network.os_vif_util [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:d5:77,bridge_name='br-int',has_traffic_filtering=True,id=ad309073-b3c7-40b4-a937-5d72ec411ed7,network=Network(90c691e2-a785-487c-9a4f-dd30836b91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad309073-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.403 239969 DEBUG nova.objects.instance [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'pci_devices' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.432 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <uuid>a4ddea97-0b71-4080-abed-fd5748d7244e</uuid>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <name>instance-0000003b</name>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerRescueTestJSONUnderV235-server-881645873</nova:name>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:00:11</nova:creationTime>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <nova:user uuid="10e6dce963804f98b45b6f58ef8d1b2e">tempest-ServerRescueTestJSONUnderV235-701891094-project-member</nova:user>
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <nova:project uuid="c41676eb0a634807a0c639355e39a512">tempest-ServerRescueTestJSONUnderV235-701891094</nova:project>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <nova:port uuid="ad309073-b3c7-40b4-a937-5d72ec411ed7">
Jan 26 16:00:13 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <system>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <entry name="serial">a4ddea97-0b71-4080-abed-fd5748d7244e</entry>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <entry name="uuid">a4ddea97-0b71-4080-abed-fd5748d7244e</entry>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </system>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <os>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   </os>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <features>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   </features>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/a4ddea97-0b71-4080-abed-fd5748d7244e_disk.rescue">
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/a4ddea97-0b71-4080-abed-fd5748d7244e_disk">
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <target dev="vdb" bus="virtio"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config.rescue">
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:3a:d5:77"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <target dev="tapad309073-b3"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/console.log" append="off"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <video>
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </video>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:00:13 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:00:13 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:00:13 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:00:13 compute-0 nova_compute[239965]: </domain>
Jan 26 16:00:13 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.441 239969 INFO nova.virt.libvirt.driver [-] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance destroyed successfully.
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.515 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.516 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.516 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.516 239969 DEBUG nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] No VIF found with MAC fa:16:3e:3a:d5:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.517 239969 INFO nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Using config drive
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.543 239969 DEBUG nova.storage.rbd_utils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.567 239969 DEBUG nova.objects.instance [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.600 239969 DEBUG nova.objects.instance [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'keypairs' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 230 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.9 MiB/s wr, 309 op/s
Jan 26 16:00:13 compute-0 nova_compute[239965]: 2026-01-26 16:00:13.967 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Successfully updated port: c0a1fe12-63f4-4d07-9bc6-5613c9307343 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:00:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2220094973' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.028 239969 INFO nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Creating config drive at /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config.rescue
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.033 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7yn0jau execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.178 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz7yn0jau" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.210 239969 DEBUG nova.storage.rbd_utils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] rbd image a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.228 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config.rescue a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.554 239969 DEBUG oslo_concurrency.processutils [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config.rescue a4ddea97-0b71-4080-abed-fd5748d7244e_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.326s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.555 239969 INFO nova.virt.libvirt.driver [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Deleting local config drive /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e/disk.config.rescue because it was imported into RBD.
Jan 26 16:00:14 compute-0 kernel: tapad309073-b3: entered promiscuous mode
Jan 26 16:00:14 compute-0 NetworkManager[48954]: <info>  [1769443214.6233] manager: (tapad309073-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/219)
Jan 26 16:00:14 compute-0 ovn_controller[146046]: 2026-01-26T16:00:14Z|00489|binding|INFO|Claiming lport ad309073-b3c7-40b4-a937-5d72ec411ed7 for this chassis.
Jan 26 16:00:14 compute-0 ovn_controller[146046]: 2026-01-26T16:00:14Z|00490|binding|INFO|ad309073-b3c7-40b4-a937-5d72ec411ed7: Claiming fa:16:3e:3a:d5:77 10.100.0.11
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.625 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:14.633 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:d5:77 10.100.0.11'], port_security=['fa:16:3e:3a:d5:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a4ddea97-0b71-4080-abed-fd5748d7244e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90c691e2-a785-487c-9a4f-dd30836b91bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41676eb0a634807a0c639355e39a512', 'neutron:revision_number': '5', 'neutron:security_group_ids': '9ff51ba6-72d9-4f5f-bcad-1bb60c354451', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30c9b54c-9e24-4f98-9fda-b4dce976db39, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=ad309073-b3c7-40b4-a937-5d72ec411ed7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:14.634 156105 INFO neutron.agent.ovn.metadata.agent [-] Port ad309073-b3c7-40b4-a937-5d72ec411ed7 in datapath 90c691e2-a785-487c-9a4f-dd30836b91bb bound to our chassis
Jan 26 16:00:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:14.636 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 90c691e2-a785-487c-9a4f-dd30836b91bb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:00:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:14.637 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[947101fc-81ce-46e0-b79d-cc129e462dcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:14 compute-0 ovn_controller[146046]: 2026-01-26T16:00:14Z|00491|binding|INFO|Setting lport ad309073-b3c7-40b4-a937-5d72ec411ed7 ovn-installed in OVS
Jan 26 16:00:14 compute-0 ovn_controller[146046]: 2026-01-26T16:00:14Z|00492|binding|INFO|Setting lport ad309073-b3c7-40b4-a937-5d72ec411ed7 up in Southbound
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.646 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:14 compute-0 nova_compute[239965]: 2026-01-26 16:00:14.652 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:14 compute-0 systemd-udevd[291688]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:14 compute-0 systemd-machined[208061]: New machine qemu-67-instance-0000003b.
Jan 26 16:00:14 compute-0 NetworkManager[48954]: <info>  [1769443214.6690] device (tapad309073-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:00:14 compute-0 NetworkManager[48954]: <info>  [1769443214.6701] device (tapad309073-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:00:14 compute-0 systemd[1]: Started Virtual Machine qemu-67-instance-0000003b.
Jan 26 16:00:14 compute-0 ceph-mon[75140]: pgmap v1413: 305 pgs: 305 active+clean; 230 MiB data, 582 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.9 MiB/s wr, 309 op/s
Jan 26 16:00:15 compute-0 ovn_controller[146046]: 2026-01-26T16:00:15Z|00493|binding|INFO|Releasing lport c26451f4-ab48-4e8d-b25b-9e4988573b7d from this chassis (sb_readonly=0)
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.129 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for a4ddea97-0b71-4080-abed-fd5748d7244e due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.130 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443215.1244552, a4ddea97-0b71-4080-abed-fd5748d7244e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.130 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] VM Resumed (Lifecycle Event)
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.142 239969 DEBUG nova.compute.manager [None req-0dee9e36-723f-4b2c-a4bc-08fb7b683f88 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.209 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.213 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.262 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.263 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443215.1390984, a4ddea97-0b71-4080-abed-fd5748d7244e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.264 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] VM Started (Lifecycle Event)
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.286 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.296 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.374 239969 DEBUG nova.compute.manager [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-changed-c0a1fe12-63f4-4d07-9bc6-5613c9307343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.374 239969 DEBUG nova.compute.manager [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Refreshing instance network info cache due to event network-changed-c0a1fe12-63f4-4d07-9bc6-5613c9307343. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.375 239969 DEBUG oslo_concurrency.lockutils [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.375 239969 DEBUG oslo_concurrency.lockutils [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.376 239969 DEBUG nova.network.neutron [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Refreshing network info cache for port c0a1fe12-63f4-4d07-9bc6-5613c9307343 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.593 239969 DEBUG nova.network.neutron [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.695 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Successfully updated port: 73773977-7e7a-4ec3-8051-43770cc2a237 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:00:15 compute-0 nova_compute[239965]: 2026-01-26 16:00:15.712 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 9.9 MiB/s wr, 354 op/s
Jan 26 16:00:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.313 239969 DEBUG nova.network.neutron [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.335 239969 DEBUG oslo_concurrency.lockutils [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.335 239969 DEBUG nova.compute.manager [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.336 239969 DEBUG oslo_concurrency.lockutils [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.336 239969 DEBUG oslo_concurrency.lockutils [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.336 239969 DEBUG oslo_concurrency.lockutils [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.336 239969 DEBUG nova.compute.manager [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] No waiting events found dispatching network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.336 239969 WARNING nova.compute.manager [req-359d0aa1-49e6-4930-bcb1-bf41265f403c req-69ecc81a-8be5-4e1a-af76-3091c6031334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received unexpected event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 for instance with vm_state rescued and task_state None.
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.855 239969 DEBUG nova.virt.libvirt.driver [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.916 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.923 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Successfully updated port: 1c0256a0-1409-4bc6-b14d-588ae40e6010 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.937 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.937 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquired lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:16 compute-0 nova_compute[239965]: 2026-01-26 16:00:16.938 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:00:17 compute-0 ceph-mon[75140]: pgmap v1414: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 9.9 MiB/s wr, 354 op/s
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.499 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.692 239969 DEBUG nova.compute.manager [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.692 239969 DEBUG oslo_concurrency.lockutils [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.692 239969 DEBUG oslo_concurrency.lockutils [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.693 239969 DEBUG oslo_concurrency.lockutils [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.693 239969 DEBUG nova.compute.manager [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] No waiting events found dispatching network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.693 239969 WARNING nova.compute.manager [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received unexpected event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 for instance with vm_state rescued and task_state None.
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.694 239969 DEBUG nova.compute.manager [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-changed-73773977-7e7a-4ec3-8051-43770cc2a237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.694 239969 DEBUG nova.compute.manager [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Refreshing instance network info cache due to event network-changed-73773977-7e7a-4ec3-8051-43770cc2a237. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:17 compute-0 nova_compute[239965]: 2026-01-26 16:00:17.694 239969 DEBUG oslo_concurrency.lockutils [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.7 MiB/s wr, 177 op/s
Jan 26 16:00:19 compute-0 ceph-mon[75140]: pgmap v1415: 305 pgs: 305 active+clean; 306 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.7 MiB/s wr, 177 op/s
Jan 26 16:00:19 compute-0 ovn_controller[146046]: 2026-01-26T16:00:19Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:57:38 10.100.0.4
Jan 26 16:00:19 compute-0 ovn_controller[146046]: 2026-01-26T16:00:19Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:57:38 10.100.0.4
Jan 26 16:00:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 328 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.9 MiB/s wr, 257 op/s
Jan 26 16:00:19 compute-0 nova_compute[239965]: 2026-01-26 16:00:19.900 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443204.9003186, af55c460-3670-4fa6-855f-386e2c58b312 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:19 compute-0 nova_compute[239965]: 2026-01-26 16:00:19.902 239969 INFO nova.compute.manager [-] [instance: af55c460-3670-4fa6-855f-386e2c58b312] VM Stopped (Lifecycle Event)
Jan 26 16:00:19 compute-0 nova_compute[239965]: 2026-01-26 16:00:19.922 239969 DEBUG nova.compute.manager [None req-31910db5-dfe1-48f4-8f7a-293048c679bd - - - - - -] [instance: af55c460-3670-4fa6-855f-386e2c58b312] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:20 compute-0 nova_compute[239965]: 2026-01-26 16:00:20.547 239969 DEBUG nova.compute.manager [req-55957508-a4e9-47c2-9290-176642b134c7 req-363f3ff2-68bd-4bd5-b2a2-ff08b1a9e2fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:20 compute-0 nova_compute[239965]: 2026-01-26 16:00:20.548 239969 DEBUG nova.compute.manager [req-55957508-a4e9-47c2-9290-176642b134c7 req-363f3ff2-68bd-4bd5-b2a2-ff08b1a9e2fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing instance network info cache due to event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:20 compute-0 nova_compute[239965]: 2026-01-26 16:00:20.548 239969 DEBUG oslo_concurrency.lockutils [req-55957508-a4e9-47c2-9290-176642b134c7 req-363f3ff2-68bd-4bd5-b2a2-ff08b1a9e2fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:20 compute-0 nova_compute[239965]: 2026-01-26 16:00:20.549 239969 DEBUG oslo_concurrency.lockutils [req-55957508-a4e9-47c2-9290-176642b134c7 req-363f3ff2-68bd-4bd5-b2a2-ff08b1a9e2fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:20 compute-0 nova_compute[239965]: 2026-01-26 16:00:20.549 239969 DEBUG nova.network.neutron [req-55957508-a4e9-47c2-9290-176642b134c7 req-363f3ff2-68bd-4bd5-b2a2-ff08b1a9e2fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:20 compute-0 nova_compute[239965]: 2026-01-26 16:00:20.713 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:20 compute-0 nova_compute[239965]: 2026-01-26 16:00:20.724 239969 DEBUG nova.compute.manager [req-c2c1580f-1228-420f-bc3e-4813722bd986 req-3e0ca533-5c32-4ca6-82bd-a5599e75279a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:20 compute-0 nova_compute[239965]: 2026-01-26 16:00:20.725 239969 DEBUG nova.compute.manager [req-c2c1580f-1228-420f-bc3e-4813722bd986 req-3e0ca533-5c32-4ca6-82bd-a5599e75279a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing instance network info cache due to event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:20 compute-0 nova_compute[239965]: 2026-01-26 16:00:20.725 239969 DEBUG oslo_concurrency.lockutils [req-c2c1580f-1228-420f-bc3e-4813722bd986 req-3e0ca533-5c32-4ca6-82bd-a5599e75279a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:21 compute-0 ceph-mon[75140]: pgmap v1416: 305 pgs: 305 active+clean; 328 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.9 MiB/s wr, 257 op/s
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.215 239969 DEBUG nova.network.neutron [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Updating instance_info_cache with network_info: [{"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.238 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Releasing lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.239 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Instance network_info: |[{"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.240 239969 DEBUG oslo_concurrency.lockutils [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.240 239969 DEBUG nova.network.neutron [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Refreshing network info cache for port 73773977-7e7a-4ec3-8051-43770cc2a237 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.245 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Start _get_guest_xml network_info=[{"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.255 239969 WARNING nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.262 239969 DEBUG nova.virt.libvirt.host [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.263 239969 DEBUG nova.virt.libvirt.host [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.272 239969 DEBUG nova.virt.libvirt.host [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.273 239969 DEBUG nova.virt.libvirt.host [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.273 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.273 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.274 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.274 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.275 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.275 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.275 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.276 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.276 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.276 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.276 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.277 239969 DEBUG nova.virt.hardware [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.279 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.887 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443206.8864276, 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.888 239969 INFO nova.compute.manager [-] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] VM Stopped (Lifecycle Event)
Jan 26 16:00:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1417: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 203 op/s
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.915 239969 DEBUG nova.compute.manager [None req-6ce4e712-3b0c-4e2e-a8fc-a5471aa6b718 - - - - - -] [instance: 7b0ac1f0-7a8b-4e5f-b077-fc854b2eaa97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774644915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.958 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.982 239969 DEBUG nova.storage.rbd_utils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 98bce7d4-177d-4ead-b03c-70f005006e85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:21 compute-0 nova_compute[239965]: 2026-01-26 16:00:21.987 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1774644915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013647699' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.649 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.651 239969 DEBUG nova.virt.libvirt.vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1097672540',display_name='tempest-ServersTestMultiNic-server-1097672540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1097672540',id=61,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-n2sgrrie',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:09Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=98bce7d4-177d-4ead-b03c-70f005006e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.651 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.652 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:31:28,bridge_name='br-int',has_traffic_filtering=True,id=c0a1fe12-63f4-4d07-9bc6-5613c9307343,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0a1fe12-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.653 239969 DEBUG nova.virt.libvirt.vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1097672540',display_name='tempest-ServersTestMultiNic-server-1097672540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1097672540',id=61,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-n2sgrrie',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:09Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=98bce7d4-177d-4ead-b03c-70f005006e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.653 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.654 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4b:c4,bridge_name='br-int',has_traffic_filtering=True,id=73773977-7e7a-4ec3-8051-43770cc2a237,network=Network(1b1727e5-0718-44ed-bec7-f0452326e7d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73773977-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.654 239969 DEBUG nova.virt.libvirt.vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1097672540',display_name='tempest-ServersTestMultiNic-server-1097672540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1097672540',id=61,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-n2sgrrie',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:09Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=98bce7d4-177d-4ead-b03c-70f005006e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.655 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.655 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:11:d3,bridge_name='br-int',has_traffic_filtering=True,id=1c0256a0-1409-4bc6-b14d-588ae40e6010,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c0256a0-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.656 239969 DEBUG nova.objects.instance [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lazy-loading 'pci_devices' on Instance uuid 98bce7d4-177d-4ead-b03c-70f005006e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.704 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <uuid>98bce7d4-177d-4ead-b03c-70f005006e85</uuid>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <name>instance-0000003d</name>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestMultiNic-server-1097672540</nova:name>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:00:21</nova:creationTime>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:user uuid="9cc0fb8436b44d0499a7d55e0a7e7585">tempest-ServersTestMultiNic-1401749103-project-member</nova:user>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:project uuid="3f9fff751f0a4a4aba88032259e9628c">tempest-ServersTestMultiNic-1401749103</nova:project>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:port uuid="c0a1fe12-63f4-4d07-9bc6-5613c9307343">
Jan 26 16:00:22 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:port uuid="73773977-7e7a-4ec3-8051-43770cc2a237">
Jan 26 16:00:22 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.1.66" ipVersion="4"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <nova:port uuid="1c0256a0-1409-4bc6-b14d-588ae40e6010">
Jan 26 16:00:22 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.218" ipVersion="4"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <system>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <entry name="serial">98bce7d4-177d-4ead-b03c-70f005006e85</entry>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <entry name="uuid">98bce7d4-177d-4ead-b03c-70f005006e85</entry>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </system>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <os>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   </os>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <features>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   </features>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/98bce7d4-177d-4ead-b03c-70f005006e85_disk">
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/98bce7d4-177d-4ead-b03c-70f005006e85_disk.config">
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:22 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:30:31:28"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <target dev="tapc0a1fe12-63"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:aa:4b:c4"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <target dev="tap73773977-7e"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:ea:11:d3"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <target dev="tap1c0256a0-14"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85/console.log" append="off"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <video>
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </video>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:00:22 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:00:22 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:00:22 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:00:22 compute-0 nova_compute[239965]: </domain>
Jan 26 16:00:22 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.709 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Preparing to wait for external event network-vif-plugged-c0a1fe12-63f4-4d07-9bc6-5613c9307343 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.710 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.710 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.711 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.711 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Preparing to wait for external event network-vif-plugged-73773977-7e7a-4ec3-8051-43770cc2a237 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.711 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.712 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.712 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.712 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Preparing to wait for external event network-vif-plugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.712 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.713 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.713 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.714 239969 DEBUG nova.virt.libvirt.vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1097672540',display_name='tempest-ServersTestMultiNic-server-1097672540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1097672540',id=61,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-n2sgrrie',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:09Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=98bce7d4-177d-4ead-b03c-70f005006e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.714 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.716 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:31:28,bridge_name='br-int',has_traffic_filtering=True,id=c0a1fe12-63f4-4d07-9bc6-5613c9307343,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0a1fe12-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.716 239969 DEBUG os_vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:31:28,bridge_name='br-int',has_traffic_filtering=True,id=c0a1fe12-63f4-4d07-9bc6-5613c9307343,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0a1fe12-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.719 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.720 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.721 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.726 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.727 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc0a1fe12-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.728 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc0a1fe12-63, col_values=(('external_ids', {'iface-id': 'c0a1fe12-63f4-4d07-9bc6-5613c9307343', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:31:28', 'vm-uuid': '98bce7d4-177d-4ead-b03c-70f005006e85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.729 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 NetworkManager[48954]: <info>  [1769443222.7313] manager: (tapc0a1fe12-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.732 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.736 239969 INFO os_vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:31:28,bridge_name='br-int',has_traffic_filtering=True,id=c0a1fe12-63f4-4d07-9bc6-5613c9307343,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0a1fe12-63')
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.737 239969 DEBUG nova.virt.libvirt.vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1097672540',display_name='tempest-ServersTestMultiNic-server-1097672540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1097672540',id=61,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-n2sgrrie',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:09Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=98bce7d4-177d-4ead-b03c-70f005006e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.737 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.738 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4b:c4,bridge_name='br-int',has_traffic_filtering=True,id=73773977-7e7a-4ec3-8051-43770cc2a237,network=Network(1b1727e5-0718-44ed-bec7-f0452326e7d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73773977-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.738 239969 DEBUG os_vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4b:c4,bridge_name='br-int',has_traffic_filtering=True,id=73773977-7e7a-4ec3-8051-43770cc2a237,network=Network(1b1727e5-0718-44ed-bec7-f0452326e7d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73773977-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.739 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.740 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.743 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.743 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73773977-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.744 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73773977-7e, col_values=(('external_ids', {'iface-id': '73773977-7e7a-4ec3-8051-43770cc2a237', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:4b:c4', 'vm-uuid': '98bce7d4-177d-4ead-b03c-70f005006e85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:22 compute-0 NetworkManager[48954]: <info>  [1769443222.7472] manager: (tap73773977-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.750 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.753 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.754 239969 INFO os_vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4b:c4,bridge_name='br-int',has_traffic_filtering=True,id=73773977-7e7a-4ec3-8051-43770cc2a237,network=Network(1b1727e5-0718-44ed-bec7-f0452326e7d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73773977-7e')
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.755 239969 DEBUG nova.virt.libvirt.vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1097672540',display_name='tempest-ServersTestMultiNic-server-1097672540',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1097672540',id=61,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-n2sgrrie',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:09Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=98bce7d4-177d-4ead-b03c-70f005006e85,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.755 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.756 239969 DEBUG nova.network.os_vif_util [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:11:d3,bridge_name='br-int',has_traffic_filtering=True,id=1c0256a0-1409-4bc6-b14d-588ae40e6010,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c0256a0-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.756 239969 DEBUG os_vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:11:d3,bridge_name='br-int',has_traffic_filtering=True,id=1c0256a0-1409-4bc6-b14d-588ae40e6010,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c0256a0-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.757 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.757 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.757 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.760 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.761 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c0256a0-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.761 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c0256a0-14, col_values=(('external_ids', {'iface-id': '1c0256a0-1409-4bc6-b14d-588ae40e6010', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ea:11:d3', 'vm-uuid': '98bce7d4-177d-4ead-b03c-70f005006e85'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.762 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 NetworkManager[48954]: <info>  [1769443222.7644] manager: (tap1c0256a0-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/222)
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.765 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.773 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.775 239969 INFO os_vif [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:11:d3,bridge_name='br-int',has_traffic_filtering=True,id=1c0256a0-1409-4bc6-b14d-588ae40e6010,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c0256a0-14')
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.849 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.849 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.849 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] No VIF found with MAC fa:16:3e:30:31:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.849 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] No VIF found with MAC fa:16:3e:aa:4b:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.850 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] No VIF found with MAC fa:16:3e:ea:11:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.850 239969 INFO nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Using config drive
Jan 26 16:00:22 compute-0 nova_compute[239965]: 2026-01-26 16:00:22.870 239969 DEBUG nova.storage.rbd_utils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 98bce7d4-177d-4ead-b03c-70f005006e85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.094 239969 DEBUG nova.network.neutron [req-55957508-a4e9-47c2-9290-176642b134c7 req-363f3ff2-68bd-4bd5-b2a2-ff08b1a9e2fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updated VIF entry in instance network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.095 239969 DEBUG nova.network.neutron [req-55957508-a4e9-47c2-9290-176642b134c7 req-363f3ff2-68bd-4bd5-b2a2-ff08b1a9e2fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updating instance_info_cache with network_info: [{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.124 239969 DEBUG oslo_concurrency.lockutils [req-55957508-a4e9-47c2-9290-176642b134c7 req-363f3ff2-68bd-4bd5-b2a2-ff08b1a9e2fc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.126 239969 DEBUG oslo_concurrency.lockutils [req-c2c1580f-1228-420f-bc3e-4813722bd986 req-3e0ca533-5c32-4ca6-82bd-a5599e75279a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.126 239969 DEBUG nova.network.neutron [req-c2c1580f-1228-420f-bc3e-4813722bd986 req-3e0ca533-5c32-4ca6-82bd-a5599e75279a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.196 239969 DEBUG nova.network.neutron [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Updated VIF entry in instance network info cache for port 73773977-7e7a-4ec3-8051-43770cc2a237. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.197 239969 DEBUG nova.network.neutron [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Updating instance_info_cache with network_info: [{"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:23 compute-0 ceph-mon[75140]: pgmap v1417: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 203 op/s
Jan 26 16:00:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4013647699' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.223 239969 DEBUG oslo_concurrency.lockutils [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.224 239969 DEBUG nova.compute.manager [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-changed-1c0256a0-1409-4bc6-b14d-588ae40e6010 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.224 239969 DEBUG nova.compute.manager [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Refreshing instance network info cache due to event network-changed-1c0256a0-1409-4bc6-b14d-588ae40e6010. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.225 239969 DEBUG oslo_concurrency.lockutils [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.225 239969 DEBUG oslo_concurrency.lockutils [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.225 239969 DEBUG nova.network.neutron [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Refreshing network info cache for port 1c0256a0-1409-4bc6-b14d-588ae40e6010 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.463 239969 INFO nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Creating config drive at /var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85/disk.config
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.470 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps0yadaw8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.617 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps0yadaw8" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.644 239969 DEBUG nova.storage.rbd_utils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 98bce7d4-177d-4ead-b03c-70f005006e85_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.650 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85/disk.config 98bce7d4-177d-4ead-b03c-70f005006e85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.888 239969 DEBUG oslo_concurrency.processutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85/disk.config 98bce7d4-177d-4ead-b03c-70f005006e85_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.238s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.889 239969 INFO nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Deleting local config drive /var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85/disk.config because it was imported into RBD.
Jan 26 16:00:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 180 op/s
Jan 26 16:00:23 compute-0 kernel: tapc0a1fe12-63: entered promiscuous mode
Jan 26 16:00:23 compute-0 NetworkManager[48954]: <info>  [1769443223.9429] manager: (tapc0a1fe12-63): new Tun device (/org/freedesktop/NetworkManager/Devices/223)
Jan 26 16:00:23 compute-0 ovn_controller[146046]: 2026-01-26T16:00:23Z|00494|binding|INFO|Claiming lport c0a1fe12-63f4-4d07-9bc6-5613c9307343 for this chassis.
Jan 26 16:00:23 compute-0 ovn_controller[146046]: 2026-01-26T16:00:23Z|00495|binding|INFO|c0a1fe12-63f4-4d07-9bc6-5613c9307343: Claiming fa:16:3e:30:31:28 10.100.0.28
Jan 26 16:00:23 compute-0 nova_compute[239965]: 2026-01-26 16:00:23.951 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:23.966 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:31:28 10.100.0.28'], port_security=['fa:16:3e:30:31:28 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/24', 'neutron:device_id': '98bce7d4-177d-4ead-b03c-70f005006e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2afbb601-33fd-4c59-ae5a-d2450ccc1602, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=c0a1fe12-63f4-4d07-9bc6-5613c9307343) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:23.967 156105 INFO neutron.agent.ovn.metadata.agent [-] Port c0a1fe12-63f4-4d07-9bc6-5613c9307343 in datapath 5a544c01-d2d6-42ef-8ce3-fe84f814eb0e bound to our chassis
Jan 26 16:00:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:23.969 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5a544c01-d2d6-42ef-8ce3-fe84f814eb0e
Jan 26 16:00:23 compute-0 NetworkManager[48954]: <info>  [1769443223.9744] manager: (tap73773977-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/224)
Jan 26 16:00:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:23.982 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cb52a5c8-4ea6-4975-902b-77a3a28e0f61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:23.983 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5a544c01-d1 in ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:00:23 compute-0 systemd-udevd[291904]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:23 compute-0 systemd-udevd[291906]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:23.987 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5a544c01-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:00:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:23.987 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fc778f2d-8c63-4e51-b201-8064a7fa5cfc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:23.989 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[598ebb4c-63d3-4e35-a130-74fb7de3f6ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:23 compute-0 NetworkManager[48954]: <info>  [1769443223.9901] manager: (tap1c0256a0-14): new Tun device (/org/freedesktop/NetworkManager/Devices/225)
Jan 26 16:00:23 compute-0 systemd-udevd[291910]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:24 compute-0 kernel: tap1c0256a0-14: entered promiscuous mode
Jan 26 16:00:24 compute-0 kernel: tap73773977-7e: entered promiscuous mode
Jan 26 16:00:24 compute-0 NetworkManager[48954]: <info>  [1769443224.0300] device (tapc0a1fe12-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:00:24 compute-0 NetworkManager[48954]: <info>  [1769443224.0309] device (tapc0a1fe12-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:00:24 compute-0 NetworkManager[48954]: <info>  [1769443224.0377] device (tap1c0256a0-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:00:24 compute-0 NetworkManager[48954]: <info>  [1769443224.0388] device (tap73773977-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:00:24 compute-0 NetworkManager[48954]: <info>  [1769443224.0396] device (tap1c0256a0-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.032 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[78be9033-0f4e-465e-8314-8c0e08beaae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 NetworkManager[48954]: <info>  [1769443224.0402] device (tap73773977-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00496|binding|INFO|Claiming lport 73773977-7e7a-4ec3-8051-43770cc2a237 for this chassis.
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00497|binding|INFO|73773977-7e7a-4ec3-8051-43770cc2a237: Claiming fa:16:3e:aa:4b:c4 10.100.1.66
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00498|binding|INFO|Claiming lport 1c0256a0-1409-4bc6-b14d-588ae40e6010 for this chassis.
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00499|binding|INFO|1c0256a0-1409-4bc6-b14d-588ae40e6010: Claiming fa:16:3e:ea:11:d3 10.100.0.218
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00500|binding|INFO|Setting lport c0a1fe12-63f4-4d07-9bc6-5613c9307343 ovn-installed in OVS
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.046 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.055 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:11:d3 10.100.0.218'], port_security=['fa:16:3e:ea:11:d3 10.100.0.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.218/24', 'neutron:device_id': '98bce7d4-177d-4ead-b03c-70f005006e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2afbb601-33fd-4c59-ae5a-d2450ccc1602, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1c0256a0-1409-4bc6-b14d-588ae40e6010) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:24 compute-0 systemd-machined[208061]: New machine qemu-68-instance-0000003d.
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.056 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:4b:c4 10.100.1.66'], port_security=['fa:16:3e:aa:4b:c4 10.100.1.66'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.66/24', 'neutron:device_id': '98bce7d4-177d-4ead-b03c-70f005006e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b1727e5-0718-44ed-bec7-f0452326e7d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc7c09ab-aeed-41a7-b7f3-83c23530d016, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=73773977-7e7a-4ec3-8051-43770cc2a237) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00501|binding|INFO|Setting lport c0a1fe12-63f4-4d07-9bc6-5613c9307343 up in Southbound
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.063 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa846f6-3d90-4d9b-a154-16432ec31598]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.087 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[131d0dd4-3a01-4b32-9de7-b9e9352798d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 systemd[1]: Started Virtual Machine qemu-68-instance-0000003d.
Jan 26 16:00:24 compute-0 NetworkManager[48954]: <info>  [1769443224.0970] manager: (tap5a544c01-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/226)
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.096 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a34e11d-2afe-4e7d-a382-6dfbec8274aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00502|binding|INFO|Setting lport 1c0256a0-1409-4bc6-b14d-588ae40e6010 ovn-installed in OVS
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00503|binding|INFO|Setting lport 1c0256a0-1409-4bc6-b14d-588ae40e6010 up in Southbound
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00504|binding|INFO|Setting lport 73773977-7e7a-4ec3-8051-43770cc2a237 ovn-installed in OVS
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00505|binding|INFO|Setting lport 73773977-7e7a-4ec3-8051-43770cc2a237 up in Southbound
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.127 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a76a1cdf-c9ff-468d-b696-554d29ff8a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.130 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6b0aee49-c44d-4bbe-9009-9eaf13244386]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 NetworkManager[48954]: <info>  [1769443224.1572] device (tap5a544c01-d0): carrier: link connected
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.164 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c24f6ae0-8fe8-4090-898e-4057f589dafe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.182 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0763e1df-2c07-4493-9d6e-4d6de8199521]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a544c01-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:58:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471496, 'reachable_time': 28523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 291943, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.198 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a96e1036-e68a-4fb6-b837-1da931ef6c9a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:58da'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471496, 'tstamp': 471496}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 291945, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ceph-mon[75140]: pgmap v1418: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 180 op/s
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.215 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c41c34-ff6f-4687-b337-abf6d1dab291]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a544c01-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:58:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471496, 'reachable_time': 28523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 291946, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.244 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9b1bdd-a550-4e5a-b797-b6da7823c990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.292 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69dc25ec-666d-4328-87e0-c9a6e1c9bcca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.294 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a544c01-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.295 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.295 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a544c01-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.297 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:24 compute-0 NetworkManager[48954]: <info>  [1769443224.2975] manager: (tap5a544c01-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/227)
Jan 26 16:00:24 compute-0 kernel: tap5a544c01-d0: entered promiscuous mode
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.299 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5a544c01-d0, col_values=(('external_ids', {'iface-id': 'c3d22444-4dc9-413c-81b4-c6273fa9f79d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:24 compute-0 ovn_controller[146046]: 2026-01-26T16:00:24Z|00506|binding|INFO|Releasing lport c3d22444-4dc9-413c-81b4-c6273fa9f79d from this chassis (sb_readonly=0)
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.317 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.318 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5a544c01-d2d6-42ef-8ce3-fe84f814eb0e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5a544c01-d2d6-42ef-8ce3-fe84f814eb0e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.319 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f550221b-07d4-4a78-aa32-c72e220838e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.320 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/5a544c01-d2d6-42ef-8ce3-fe84f814eb0e.pid.haproxy
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 5a544c01-d2d6-42ef-8ce3-fe84f814eb0e
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.321 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'env', 'PROCESS_TAG=haproxy-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5a544c01-d2d6-42ef-8ce3-fe84f814eb0e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.476 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443224.475876, 98bce7d4-177d-4ead-b03c-70f005006e85 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.483 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] VM Started (Lifecycle Event)
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.508 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.513 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443224.4760995, 98bce7d4-177d-4ead-b03c-70f005006e85 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.513 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] VM Paused (Lifecycle Event)
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.547 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.550 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:00:24 compute-0 podman[292020]: 2026-01-26 16:00:24.689659514 +0000 UTC m=+0.049977547 container create bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.690 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:00:24 compute-0 systemd[1]: Started libpod-conmon-bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e.scope.
Jan 26 16:00:24 compute-0 podman[292020]: 2026-01-26 16:00:24.662544399 +0000 UTC m=+0.022862452 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:00:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc24ca23c7f3f166309cde319ad43518a99275ba8ea425e3f27d56ef82180e53/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:24 compute-0 podman[292020]: 2026-01-26 16:00:24.782697044 +0000 UTC m=+0.143015097 container init bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:00:24 compute-0 podman[292020]: 2026-01-26 16:00:24.788198809 +0000 UTC m=+0.148516842 container start bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:00:24 compute-0 neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e[292036]: [NOTICE]   (292040) : New worker (292042) forked
Jan 26 16:00:24 compute-0 neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e[292036]: [NOTICE]   (292040) : Loading success.
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.865 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1c0256a0-1409-4bc6-b14d-588ae40e6010 in datapath 5a544c01-d2d6-42ef-8ce3-fe84f814eb0e unbound from our chassis
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.868 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5a544c01-d2d6-42ef-8ce3-fe84f814eb0e
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.886 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[315b1264-a553-4c80-ab06-60aba23f465f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.919 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8b013477-ff34-4ccd-b0eb-c936394da3de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.924 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6dbd0f19-0cab-4252-b9e4-39822bb71777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.952 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ca26626a-1fa3-4873-b5ea-470f293d32d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.971 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da22d977-16e0-44ba-9db7-ea1ba9973cf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a544c01-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:58:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 5, 'rx_bytes': 176, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471496, 'reachable_time': 28523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292056, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.990 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6444998c-3159-495c-a326-377aea5a2087]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5a544c01-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471507, 'tstamp': 471507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292057, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap5a544c01-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471509, 'tstamp': 471509}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292057, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.992 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a544c01-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:24 compute-0 nova_compute[239965]: 2026-01-26 16:00:24.994 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.995 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a544c01-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.996 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.996 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5a544c01-d0, col_values=(('external_ids', {'iface-id': 'c3d22444-4dc9-413c-81b4-c6273fa9f79d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.997 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.998 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 73773977-7e7a-4ec3-8051-43770cc2a237 in datapath 1b1727e5-0718-44ed-bec7-f0452326e7d0 unbound from our chassis
Jan 26 16:00:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:24.999 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1b1727e5-0718-44ed-bec7-f0452326e7d0
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.010 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5d39722b-52ef-40ba-98e6-a2f164430e10]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.011 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1b1727e5-01 in ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.012 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1b1727e5-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.013 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf9acc3-7993-4612-aefa-6d69aab24b71]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.013 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ad90142f-3a1a-4a10-8608-4f84dfb96e9b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.026 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[839205a3-0fd7-44b9-a3fa-c4f25e75301e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.040 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[00734925-df2b-45e3-afa2-dff0eda8d86f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.074 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[96ef8974-15a7-4a7e-9414-f26b2b8f65a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.082 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9ded94a0-dcda-40c7-a558-357083de2c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 NetworkManager[48954]: <info>  [1769443225.0836] manager: (tap1b1727e5-00): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Jan 26 16:00:25 compute-0 systemd-udevd[291933]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.121 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0a09a5-f285-4f93-ba49-4dd1262cdff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.124 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4356d0c4-bdd4-47ef-8b22-62e4774cdb00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 NetworkManager[48954]: <info>  [1769443225.1524] device (tap1b1727e5-00): carrier: link connected
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.151 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f1581e-dc05-4a80-a5ba-777b9680153c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.170 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4a626d-c393-45a7-9b20-d7b3557ed16c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b1727e5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:41:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471595, 'reachable_time': 15691, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292070, 'error': None, 'target': 'ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.189 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[821d4e6c-845d-4f5d-8787-74a608e186f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:4111'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471595, 'tstamp': 471595}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292071, 'error': None, 'target': 'ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.209 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[932b1551-6754-4903-8285-30e898a71822]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1b1727e5-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:41:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471595, 'reachable_time': 15691, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 292072, 'error': None, 'target': 'ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.237 239969 DEBUG nova.compute.manager [req-3426bf1f-e056-4c02-a88a-a9577ebb3a87 req-c1804c3c-173c-4fec-ba00-97161c45a055 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-plugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.238 239969 DEBUG oslo_concurrency.lockutils [req-3426bf1f-e056-4c02-a88a-a9577ebb3a87 req-c1804c3c-173c-4fec-ba00-97161c45a055 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.238 239969 DEBUG oslo_concurrency.lockutils [req-3426bf1f-e056-4c02-a88a-a9577ebb3a87 req-c1804c3c-173c-4fec-ba00-97161c45a055 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.239 239969 DEBUG oslo_concurrency.lockutils [req-3426bf1f-e056-4c02-a88a-a9577ebb3a87 req-c1804c3c-173c-4fec-ba00-97161c45a055 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.239 239969 DEBUG nova.compute.manager [req-3426bf1f-e056-4c02-a88a-a9577ebb3a87 req-c1804c3c-173c-4fec-ba00-97161c45a055 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Processing event network-vif-plugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.248 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a97715af-5949-40b9-8e6d-bd7c8333d7b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.279 239969 DEBUG nova.network.neutron [req-c2c1580f-1228-420f-bc3e-4813722bd986 req-3e0ca533-5c32-4ca6-82bd-a5599e75279a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updated VIF entry in instance network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.280 239969 DEBUG nova.network.neutron [req-c2c1580f-1228-420f-bc3e-4813722bd986 req-3e0ca533-5c32-4ca6-82bd-a5599e75279a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updating instance_info_cache with network_info: [{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.297 239969 DEBUG oslo_concurrency.lockutils [req-c2c1580f-1228-420f-bc3e-4813722bd986 req-3e0ca533-5c32-4ca6-82bd-a5599e75279a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.313 239969 DEBUG nova.compute.manager [req-4f4edfa4-a84b-4fbf-920c-85cd0627b055 req-4238b840-d8e8-4410-9641-038fa30800cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-plugged-c0a1fe12-63f4-4d07-9bc6-5613c9307343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.314 239969 DEBUG oslo_concurrency.lockutils [req-4f4edfa4-a84b-4fbf-920c-85cd0627b055 req-4238b840-d8e8-4410-9641-038fa30800cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.314 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a91e19ca-8af4-4689-ba7c-798ab5697157]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.315 239969 DEBUG oslo_concurrency.lockutils [req-4f4edfa4-a84b-4fbf-920c-85cd0627b055 req-4238b840-d8e8-4410-9641-038fa30800cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.315 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b1727e5-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.315 239969 DEBUG oslo_concurrency.lockutils [req-4f4edfa4-a84b-4fbf-920c-85cd0627b055 req-4238b840-d8e8-4410-9641-038fa30800cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.316 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.316 239969 DEBUG nova.compute.manager [req-4f4edfa4-a84b-4fbf-920c-85cd0627b055 req-4238b840-d8e8-4410-9641-038fa30800cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Processing event network-vif-plugged-c0a1fe12-63f4-4d07-9bc6-5613c9307343 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.316 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b1727e5-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.318 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:25 compute-0 NetworkManager[48954]: <info>  [1769443225.3193] manager: (tap1b1727e5-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Jan 26 16:00:25 compute-0 kernel: tap1b1727e5-00: entered promiscuous mode
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.323 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.324 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1b1727e5-00, col_values=(('external_ids', {'iface-id': '68b56ae3-86ad-46c0-a1ea-ff452334237b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:25 compute-0 ovn_controller[146046]: 2026-01-26T16:00:25Z|00507|binding|INFO|Releasing lport 68b56ae3-86ad-46c0-a1ea-ff452334237b from this chassis (sb_readonly=0)
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.326 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.350 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.352 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1b1727e5-0718-44ed-bec7-f0452326e7d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1b1727e5-0718-44ed-bec7-f0452326e7d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.353 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c5eb06e4-0ce2-4a0a-97bc-6af098fdcb51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.353 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-1b1727e5-0718-44ed-bec7-f0452326e7d0
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/1b1727e5-0718-44ed-bec7-f0452326e7d0.pid.haproxy
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 1b1727e5-0718-44ed-bec7-f0452326e7d0
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:00:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:25.354 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0', 'env', 'PROCESS_TAG=haproxy-1b1727e5-0718-44ed-bec7-f0452326e7d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1b1727e5-0718-44ed-bec7-f0452326e7d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.639 239969 DEBUG nova.network.neutron [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Updated VIF entry in instance network info cache for port 1c0256a0-1409-4bc6-b14d-588ae40e6010. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.640 239969 DEBUG nova.network.neutron [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Updating instance_info_cache with network_info: [{"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.661 239969 DEBUG oslo_concurrency.lockutils [req-05d7fc4b-f8f1-4ca5-912a-7900baa5e907 req-8c2e6be2-09c7-43f3-bd3b-c163399fca68 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-98bce7d4-177d-4ead-b03c-70f005006e85" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.715 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:25 compute-0 podman[292105]: 2026-01-26 16:00:25.779377358 +0000 UTC m=+0.084706078 container create 77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:00:25 compute-0 podman[292105]: 2026-01-26 16:00:25.7292945 +0000 UTC m=+0.034623250 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:00:25 compute-0 NetworkManager[48954]: <info>  [1769443225.8400] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/230)
Jan 26 16:00:25 compute-0 NetworkManager[48954]: <info>  [1769443225.8408] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/231)
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.839 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:25 compute-0 systemd[1]: Started libpod-conmon-77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb.scope.
Jan 26 16:00:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4542e5196a6013f02d0d67486457436c3b85900c9e0852d91200bf8d5c56a780/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 186 op/s
Jan 26 16:00:25 compute-0 podman[292105]: 2026-01-26 16:00:25.904332661 +0000 UTC m=+0.209661471 container init 77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:00:25 compute-0 podman[292105]: 2026-01-26 16:00:25.910221156 +0000 UTC m=+0.215549876 container start 77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:25 compute-0 ovn_controller[146046]: 2026-01-26T16:00:25Z|00508|binding|INFO|Releasing lport 68b56ae3-86ad-46c0-a1ea-ff452334237b from this chassis (sb_readonly=0)
Jan 26 16:00:25 compute-0 ovn_controller[146046]: 2026-01-26T16:00:25Z|00509|binding|INFO|Releasing lport c26451f4-ab48-4e8d-b25b-9e4988573b7d from this chassis (sb_readonly=0)
Jan 26 16:00:25 compute-0 ovn_controller[146046]: 2026-01-26T16:00:25Z|00510|binding|INFO|Releasing lport c3d22444-4dc9-413c-81b4-c6273fa9f79d from this chassis (sb_readonly=0)
Jan 26 16:00:25 compute-0 neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0[292120]: [NOTICE]   (292124) : New worker (292126) forked
Jan 26 16:00:25 compute-0 neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0[292120]: [NOTICE]   (292124) : Loading success.
Jan 26 16:00:25 compute-0 nova_compute[239965]: 2026-01-26 16:00:25.938 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:26 compute-0 ceph-mon[75140]: pgmap v1419: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.2 MiB/s wr, 186 op/s
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.763 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.904 239969 DEBUG nova.virt.libvirt.driver [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.924 239969 DEBUG nova.compute.manager [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-plugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.924 239969 DEBUG oslo_concurrency.lockutils [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.924 239969 DEBUG oslo_concurrency.lockutils [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.924 239969 DEBUG oslo_concurrency.lockutils [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.925 239969 DEBUG nova.compute.manager [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] No event matching network-vif-plugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 in dict_keys([('network-vif-plugged', '73773977-7e7a-4ec3-8051-43770cc2a237')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.925 239969 WARNING nova.compute.manager [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received unexpected event network-vif-plugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 for instance with vm_state building and task_state spawning.
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.925 239969 DEBUG nova.compute.manager [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.925 239969 DEBUG nova.compute.manager [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing instance network info cache due to event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.926 239969 DEBUG oslo_concurrency.lockutils [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.926 239969 DEBUG oslo_concurrency.lockutils [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.926 239969 DEBUG nova.network.neutron [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.994 239969 DEBUG nova.compute.manager [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-plugged-c0a1fe12-63f4-4d07-9bc6-5613c9307343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.994 239969 DEBUG oslo_concurrency.lockutils [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.995 239969 DEBUG oslo_concurrency.lockutils [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.995 239969 DEBUG oslo_concurrency.lockutils [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.995 239969 DEBUG nova.compute.manager [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] No event matching network-vif-plugged-c0a1fe12-63f4-4d07-9bc6-5613c9307343 in dict_keys([('network-vif-plugged', '73773977-7e7a-4ec3-8051-43770cc2a237')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.995 239969 WARNING nova.compute.manager [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received unexpected event network-vif-plugged-c0a1fe12-63f4-4d07-9bc6-5613c9307343 for instance with vm_state building and task_state spawning.
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.995 239969 DEBUG nova.compute.manager [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-plugged-73773977-7e7a-4ec3-8051-43770cc2a237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.995 239969 DEBUG oslo_concurrency.lockutils [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.996 239969 DEBUG oslo_concurrency.lockutils [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.996 239969 DEBUG oslo_concurrency.lockutils [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.996 239969 DEBUG nova.compute.manager [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Processing event network-vif-plugged-73773977-7e7a-4ec3-8051-43770cc2a237 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.996 239969 DEBUG nova.compute.manager [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-plugged-73773977-7e7a-4ec3-8051-43770cc2a237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.996 239969 DEBUG oslo_concurrency.lockutils [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.996 239969 DEBUG oslo_concurrency.lockutils [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.996 239969 DEBUG oslo_concurrency.lockutils [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.997 239969 DEBUG nova.compute.manager [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] No waiting events found dispatching network-vif-plugged-73773977-7e7a-4ec3-8051-43770cc2a237 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.997 239969 WARNING nova.compute.manager [req-e85e810d-d372-4f2a-b8ad-95c111861370 req-bb75e8f6-8fed-4da0-afe7-f1f55b15b9b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received unexpected event network-vif-plugged-73773977-7e7a-4ec3-8051-43770cc2a237 for instance with vm_state building and task_state spawning.
Jan 26 16:00:27 compute-0 nova_compute[239965]: 2026-01-26 16:00:27.997 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.001 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443228.0009027, 98bce7d4-177d-4ead-b03c-70f005006e85 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.001 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] VM Resumed (Lifecycle Event)
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.002 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.009 239969 INFO nova.virt.libvirt.driver [-] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Instance spawned successfully.
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.010 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.033 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.040 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.043 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.043 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.044 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.044 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.045 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.045 239969 DEBUG nova.virt.libvirt.driver [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.083 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.118 239969 INFO nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Took 18.87 seconds to spawn the instance on the hypervisor.
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.118 239969 DEBUG nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.211 239969 INFO nova.compute.manager [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Took 20.27 seconds to build instance.
Jan 26 16:00:28 compute-0 nova_compute[239965]: 2026-01-26 16:00:28.238 239969 DEBUG oslo_concurrency.lockutils [None req-2cf2585c-795d-4749-be2b-50f85366009a 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.418s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:00:28
Jan 26 16:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'images', '.mgr']
Jan 26 16:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:00:28 compute-0 ceph-mon[75140]: pgmap v1420: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.764 239969 DEBUG oslo_concurrency.lockutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.765 239969 DEBUG oslo_concurrency.lockutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.765 239969 DEBUG oslo_concurrency.lockutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.765 239969 DEBUG oslo_concurrency.lockutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.765 239969 DEBUG oslo_concurrency.lockutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.766 239969 INFO nova.compute.manager [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Terminating instance
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.767 239969 DEBUG nova.compute.manager [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:00:29 compute-0 kernel: tapc0a1fe12-63 (unregistering): left promiscuous mode
Jan 26 16:00:29 compute-0 NetworkManager[48954]: <info>  [1769443229.8297] device (tapc0a1fe12-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.836 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:29 compute-0 ovn_controller[146046]: 2026-01-26T16:00:29Z|00511|binding|INFO|Releasing lport c0a1fe12-63f4-4d07-9bc6-5613c9307343 from this chassis (sb_readonly=0)
Jan 26 16:00:29 compute-0 ovn_controller[146046]: 2026-01-26T16:00:29Z|00512|binding|INFO|Setting lport c0a1fe12-63f4-4d07-9bc6-5613c9307343 down in Southbound
Jan 26 16:00:29 compute-0 ovn_controller[146046]: 2026-01-26T16:00:29Z|00513|binding|INFO|Removing iface tapc0a1fe12-63 ovn-installed in OVS
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.839 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.849 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:31:28 10.100.0.28'], port_security=['fa:16:3e:30:31:28 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/24', 'neutron:device_id': '98bce7d4-177d-4ead-b03c-70f005006e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2afbb601-33fd-4c59-ae5a-d2450ccc1602, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=c0a1fe12-63f4-4d07-9bc6-5613c9307343) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.850 156105 INFO neutron.agent.ovn.metadata.agent [-] Port c0a1fe12-63f4-4d07-9bc6-5613c9307343 in datapath 5a544c01-d2d6-42ef-8ce3-fe84f814eb0e unbound from our chassis
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.852 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5a544c01-d2d6-42ef-8ce3-fe84f814eb0e
Jan 26 16:00:29 compute-0 kernel: tap73773977-7e (unregistering): left promiscuous mode
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.855 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:29 compute-0 NetworkManager[48954]: <info>  [1769443229.8604] device (tap73773977-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.865 239969 DEBUG nova.network.neutron [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updated VIF entry in instance network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.866 239969 DEBUG nova.network.neutron [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updating instance_info_cache with network_info: [{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.867 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5a36f38d-e4f1-477a-ac2e-aded3d85b3e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:29 compute-0 ovn_controller[146046]: 2026-01-26T16:00:29Z|00514|binding|INFO|Releasing lport 73773977-7e7a-4ec3-8051-43770cc2a237 from this chassis (sb_readonly=0)
Jan 26 16:00:29 compute-0 ovn_controller[146046]: 2026-01-26T16:00:29Z|00515|binding|INFO|Setting lport 73773977-7e7a-4ec3-8051-43770cc2a237 down in Southbound
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.869 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:29 compute-0 ovn_controller[146046]: 2026-01-26T16:00:29Z|00516|binding|INFO|Removing iface tap73773977-7e ovn-installed in OVS
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.871 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.876 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:4b:c4 10.100.1.66'], port_security=['fa:16:3e:aa:4b:c4 10.100.1.66'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.66/24', 'neutron:device_id': '98bce7d4-177d-4ead-b03c-70f005006e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1b1727e5-0718-44ed-bec7-f0452326e7d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cc7c09ab-aeed-41a7-b7f3-83c23530d016, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=73773977-7e7a-4ec3-8051-43770cc2a237) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:29 compute-0 kernel: tap1c0256a0-14 (unregistering): left promiscuous mode
Jan 26 16:00:29 compute-0 NetworkManager[48954]: <info>  [1769443229.8876] device (tap1c0256a0-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.886 239969 DEBUG oslo_concurrency.lockutils [req-8c226bb8-ffcf-42ca-9742-754be3ea1759 req-6248d194-78f8-4428-b728-013012cf2be3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 226 op/s
Jan 26 16:00:29 compute-0 ovn_controller[146046]: 2026-01-26T16:00:29Z|00517|binding|INFO|Releasing lport 1c0256a0-1409-4bc6-b14d-588ae40e6010 from this chassis (sb_readonly=0)
Jan 26 16:00:29 compute-0 ovn_controller[146046]: 2026-01-26T16:00:29Z|00518|binding|INFO|Setting lport 1c0256a0-1409-4bc6-b14d-588ae40e6010 down in Southbound
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.904 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:29 compute-0 ovn_controller[146046]: 2026-01-26T16:00:29Z|00519|binding|INFO|Removing iface tap1c0256a0-14 ovn-installed in OVS
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.906 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.912 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ea:11:d3 10.100.0.218'], port_security=['fa:16:3e:ea:11:d3 10.100.0.218'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.218/24', 'neutron:device_id': '98bce7d4-177d-4ead-b03c-70f005006e85', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2afbb601-33fd-4c59-ae5a-d2450ccc1602, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1c0256a0-1409-4bc6-b14d-588ae40e6010) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.912 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[caf4dcf5-c734-41d8-ab6c-284b8b32fb34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.917 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f2808836-0c8e-4b5f-ba09-43a1b1df5055]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:29 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Jan 26 16:00:29 compute-0 systemd[1]: machine-qemu\x2d68\x2dinstance\x2d0000003d.scope: Consumed 2.207s CPU time.
Jan 26 16:00:29 compute-0 systemd-machined[208061]: Machine qemu-68-instance-0000003d terminated.
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.953 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ec501647-1329-4c12-a4d7-8a672f4a1397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.976 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f4490ad8-f42e-4121-9b3b-a528d9be2699]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5a544c01-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:58:da'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471496, 'reachable_time': 28523, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292156, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.994 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a6aa8750-4395-483f-979c-26f8e57cd5b3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5a544c01-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471507, 'tstamp': 471507}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292159, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap5a544c01-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 471509, 'tstamp': 471509}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 292159, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:29.996 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a544c01-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:29 compute-0 NetworkManager[48954]: <info>  [1769443229.9980] manager: (tap73773977-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Jan 26 16:00:29 compute-0 nova_compute[239965]: 2026-01-26 16:00:29.997 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.007 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.008 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a544c01-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.009 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.009 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5a544c01-d0, col_values=(('external_ids', {'iface-id': 'c3d22444-4dc9-413c-81b4-c6273fa9f79d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.010 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:30 compute-0 NetworkManager[48954]: <info>  [1769443230.0109] manager: (tap1c0256a0-14): new Tun device (/org/freedesktop/NetworkManager/Devices/233)
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.011 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 73773977-7e7a-4ec3-8051-43770cc2a237 in datapath 1b1727e5-0718-44ed-bec7-f0452326e7d0 unbound from our chassis
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.013 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1b1727e5-0718-44ed-bec7-f0452326e7d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.014 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b80f96b0-c3a7-42a1-9d38-c5c6edb8d278]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.014 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0 namespace which is not needed anymore
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.033 239969 INFO nova.virt.libvirt.driver [-] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Instance destroyed successfully.
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.033 239969 DEBUG nova.objects.instance [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lazy-loading 'resources' on Instance uuid 98bce7d4-177d-4ead-b03c-70f005006e85 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.059 239969 DEBUG nova.virt.libvirt.vif [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:00:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1097672540',display_name='tempest-ServersTestMultiNic-server-1097672540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1097672540',id=61,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:00:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-n2sgrrie',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:00:28Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=98bce7d4-177d-4ead-b03c-70f005006e85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.059 239969 DEBUG nova.network.os_vif_util [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "address": "fa:16:3e:30:31:28", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc0a1fe12-63", "ovs_interfaceid": "c0a1fe12-63f4-4d07-9bc6-5613c9307343", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.060 239969 DEBUG nova.network.os_vif_util [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:31:28,bridge_name='br-int',has_traffic_filtering=True,id=c0a1fe12-63f4-4d07-9bc6-5613c9307343,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0a1fe12-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.060 239969 DEBUG os_vif [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:31:28,bridge_name='br-int',has_traffic_filtering=True,id=c0a1fe12-63f4-4d07-9bc6-5613c9307343,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0a1fe12-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.062 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc0a1fe12-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.064 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.066 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.075 239969 INFO os_vif [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:31:28,bridge_name='br-int',has_traffic_filtering=True,id=c0a1fe12-63f4-4d07-9bc6-5613c9307343,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc0a1fe12-63')
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.076 239969 DEBUG nova.virt.libvirt.vif [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:00:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1097672540',display_name='tempest-ServersTestMultiNic-server-1097672540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1097672540',id=61,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:00:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-n2sgrrie',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:00:28Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=98bce7d4-177d-4ead-b03c-70f005006e85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.077 239969 DEBUG nova.network.os_vif_util [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "73773977-7e7a-4ec3-8051-43770cc2a237", "address": "fa:16:3e:aa:4b:c4", "network": {"id": "1b1727e5-0718-44ed-bec7-f0452326e7d0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1054665073", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.66", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73773977-7e", "ovs_interfaceid": "73773977-7e7a-4ec3-8051-43770cc2a237", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.077 239969 DEBUG nova.network.os_vif_util [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4b:c4,bridge_name='br-int',has_traffic_filtering=True,id=73773977-7e7a-4ec3-8051-43770cc2a237,network=Network(1b1727e5-0718-44ed-bec7-f0452326e7d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73773977-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.077 239969 DEBUG os_vif [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4b:c4,bridge_name='br-int',has_traffic_filtering=True,id=73773977-7e7a-4ec3-8051-43770cc2a237,network=Network(1b1727e5-0718-44ed-bec7-f0452326e7d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73773977-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.078 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.079 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73773977-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.085 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.087 239969 INFO os_vif [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:4b:c4,bridge_name='br-int',has_traffic_filtering=True,id=73773977-7e7a-4ec3-8051-43770cc2a237,network=Network(1b1727e5-0718-44ed-bec7-f0452326e7d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73773977-7e')
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.088 239969 DEBUG nova.virt.libvirt.vif [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:00:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1097672540',display_name='tempest-ServersTestMultiNic-server-1097672540',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1097672540',id=61,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:00:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-n2sgrrie',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:00:28Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=98bce7d4-177d-4ead-b03c-70f005006e85,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.089 239969 DEBUG nova.network.os_vif_util [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "address": "fa:16:3e:ea:11:d3", "network": {"id": "5a544c01-d2d6-42ef-8ce3-fe84f814eb0e", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-131782315", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.218", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c0256a0-14", "ovs_interfaceid": "1c0256a0-1409-4bc6-b14d-588ae40e6010", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.089 239969 DEBUG nova.network.os_vif_util [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ea:11:d3,bridge_name='br-int',has_traffic_filtering=True,id=1c0256a0-1409-4bc6-b14d-588ae40e6010,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c0256a0-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.090 239969 DEBUG os_vif [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:11:d3,bridge_name='br-int',has_traffic_filtering=True,id=1c0256a0-1409-4bc6-b14d-588ae40e6010,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c0256a0-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.091 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.091 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c0256a0-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.092 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.095 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.121 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.125 239969 INFO os_vif [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ea:11:d3,bridge_name='br-int',has_traffic_filtering=True,id=1c0256a0-1409-4bc6-b14d-588ae40e6010,network=Network(5a544c01-d2d6-42ef-8ce3-fe84f814eb0e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c0256a0-14')
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0[292120]: [NOTICE]   (292124) : haproxy version is 2.8.14-c23fe91
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0[292120]: [NOTICE]   (292124) : path to executable is /usr/sbin/haproxy
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0[292120]: [WARNING]  (292124) : Exiting Master process...
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0[292120]: [WARNING]  (292124) : Exiting Master process...
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0[292120]: [ALERT]    (292124) : Current worker (292126) exited with code 143 (Terminated)
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0[292120]: [WARNING]  (292124) : All workers exited. Exiting... (0)
Jan 26 16:00:30 compute-0 kernel: tap34a7ae82-74 (unregistering): left promiscuous mode
Jan 26 16:00:30 compute-0 NetworkManager[48954]: <info>  [1769443230.1592] device (tap34a7ae82-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:00:30 compute-0 systemd[1]: libpod-77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb.scope: Deactivated successfully.
Jan 26 16:00:30 compute-0 podman[292214]: 2026-01-26 16:00:30.163240448 +0000 UTC m=+0.052061018 container died 77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:00:30 compute-0 ovn_controller[146046]: 2026-01-26T16:00:30Z|00520|binding|INFO|Releasing lport 34a7ae82-747e-4fef-9081-0d0e28e80f28 from this chassis (sb_readonly=0)
Jan 26 16:00:30 compute-0 ovn_controller[146046]: 2026-01-26T16:00:30Z|00521|binding|INFO|Setting lport 34a7ae82-747e-4fef-9081-0d0e28e80f28 down in Southbound
Jan 26 16:00:30 compute-0 ovn_controller[146046]: 2026-01-26T16:00:30Z|00522|binding|INFO|Removing iface tap34a7ae82-74 ovn-installed in OVS
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.164 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.166 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.173 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:57:38 10.100.0.4'], port_security=['fa:16:3e:67:57:38 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'd89afe65-4fc2-4cfa-8580-7ae5befd6a4d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=34a7ae82-747e-4fef-9081-0d0e28e80f28) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.186 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-4542e5196a6013f02d0d67486457436c3b85900c9e0852d91200bf8d5c56a780-merged.mount: Deactivated successfully.
Jan 26 16:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb-userdata-shm.mount: Deactivated successfully.
Jan 26 16:00:30 compute-0 podman[292214]: 2026-01-26 16:00:30.209526042 +0000 UTC m=+0.098346612 container cleanup 77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 16:00:30 compute-0 systemd[1]: libpod-conmon-77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb.scope: Deactivated successfully.
Jan 26 16:00:30 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Jan 26 16:00:30 compute-0 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d0000003c.scope: Consumed 14.741s CPU time.
Jan 26 16:00:30 compute-0 systemd-machined[208061]: Machine qemu-66-instance-0000003c terminated.
Jan 26 16:00:30 compute-0 podman[292262]: 2026-01-26 16:00:30.282695617 +0000 UTC m=+0.051571146 container remove 77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.288 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0e17f7a1-bbce-4815-83dc-e406ba277af2]: (4, ('Mon Jan 26 04:00:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0 (77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb)\n77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb\nMon Jan 26 04:00:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0 (77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb)\n77718c390217c31ecf3df79a9b5f127297d2bcd32610595d7f0802626d8690fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.290 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4e8105aa-b1b6-4392-bff5-368f9c34be56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.291 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b1727e5-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.293 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 kernel: tap1b1727e5-00: left promiscuous mode
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.318 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c26bbbc9-8001-4d29-89e5-69a8eea84e54]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.332 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[735338b0-f3fe-404e-8293-a4b53cfc5887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.335 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[958aec62-7abe-4fc5-8343-594ab95f6f5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.357 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a80f5281-c65a-4702-a3a4-7e35fcef14e8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471587, 'reachable_time': 30675, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292282, 'error': None, 'target': 'ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.360 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1b1727e5-0718-44ed-bec7-f0452326e7d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.360 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[6af5b927-eb80-4ca5-8e54-479f72ba0cba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.361 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1c0256a0-1409-4bc6-b14d-588ae40e6010 in datapath 5a544c01-d2d6-42ef-8ce3-fe84f814eb0e unbound from our chassis
Jan 26 16:00:30 compute-0 systemd[1]: run-netns-ovnmeta\x2d1b1727e5\x2d0718\x2d44ed\x2dbec7\x2df0452326e7d0.mount: Deactivated successfully.
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.363 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5a544c01-d2d6-42ef-8ce3-fe84f814eb0e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.364 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6758108c-6d32-46bc-90aa-39b7d204594e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.365 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e namespace which is not needed anymore
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.422 239969 INFO nova.virt.libvirt.driver [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Deleting instance files /var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85_del
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.423 239969 INFO nova.virt.libvirt.driver [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Deletion of /var/lib/nova/instances/98bce7d4-177d-4ead-b03c-70f005006e85_del complete
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e[292036]: [NOTICE]   (292040) : haproxy version is 2.8.14-c23fe91
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e[292036]: [NOTICE]   (292040) : path to executable is /usr/sbin/haproxy
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e[292036]: [WARNING]  (292040) : Exiting Master process...
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e[292036]: [WARNING]  (292040) : Exiting Master process...
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.486 239969 INFO nova.compute.manager [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.488 239969 DEBUG oslo.service.loopingcall [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e[292036]: [ALERT]    (292040) : Current worker (292042) exited with code 143 (Terminated)
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e[292036]: [WARNING]  (292040) : All workers exited. Exiting... (0)
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.488 239969 DEBUG nova.compute.manager [-] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.488 239969 DEBUG nova.network.neutron [-] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:00:30 compute-0 systemd[1]: libpod-bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e.scope: Deactivated successfully.
Jan 26 16:00:30 compute-0 podman[292312]: 2026-01-26 16:00:30.498236581 +0000 UTC m=+0.043063207 container died bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e-userdata-shm.mount: Deactivated successfully.
Jan 26 16:00:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc24ca23c7f3f166309cde319ad43518a99275ba8ea425e3f27d56ef82180e53-merged.mount: Deactivated successfully.
Jan 26 16:00:30 compute-0 podman[292312]: 2026-01-26 16:00:30.528865811 +0000 UTC m=+0.073692447 container cleanup bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:00:30 compute-0 systemd[1]: libpod-conmon-bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e.scope: Deactivated successfully.
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:00:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:00:30 compute-0 podman[292343]: 2026-01-26 16:00:30.596783556 +0000 UTC m=+0.042202455 container remove bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.601 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[872b3728-25fd-45af-adde-150096ebd6a5]: (4, ('Mon Jan 26 04:00:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e (bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e)\nbec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e\nMon Jan 26 04:00:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e (bec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e)\nbec10474df28312305c52f229e6253b9bbd503e254606a829da02c35bbfe295e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.602 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[37fb5ce5-dff3-40cb-9a8f-70ef22c89ef8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.603 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a544c01-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:30 compute-0 kernel: tap5a544c01-d0: left promiscuous mode
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.606 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.623 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.625 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[abc97f72-65ab-492f-8148-c3f3172a9246]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.644 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cd57dc18-04b5-4530-aad5-e8e0019661a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.645 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b252cde9-9d4e-49f4-b9c7-ebdeca71a6a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.660 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[339035b0-0a40-4ce8-93e4-89e632dd4ba9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 471489, 'reachable_time': 26273, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292359, 'error': None, 'target': 'ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.662 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5a544c01-d2d6-42ef-8ce3-fe84f814eb0e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.662 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[27d8bde3-a18a-44da-8269-95e77a9e2dcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.663 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 34a7ae82-747e-4fef-9081-0d0e28e80f28 in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 unbound from our chassis
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.664 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00eb7549-d24b-4657-b244-7664c8a34b20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.665 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc03c57-2f64-4d5b-b40f-d1433c9db5e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.665 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace which is not needed anymore
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.717 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[290892]: [NOTICE]   (290901) : haproxy version is 2.8.14-c23fe91
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[290892]: [NOTICE]   (290901) : path to executable is /usr/sbin/haproxy
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[290892]: [WARNING]  (290901) : Exiting Master process...
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[290892]: [ALERT]    (290901) : Current worker (290904) exited with code 143 (Terminated)
Jan 26 16:00:30 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[290892]: [WARNING]  (290901) : All workers exited. Exiting... (0)
Jan 26 16:00:30 compute-0 systemd[1]: libpod-2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95.scope: Deactivated successfully.
Jan 26 16:00:30 compute-0 conmon[290892]: conmon 2a93282732aafe3d6c33 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95.scope/container/memory.events
Jan 26 16:00:30 compute-0 podman[292377]: 2026-01-26 16:00:30.784289843 +0000 UTC m=+0.040879553 container died 2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:00:30 compute-0 podman[292377]: 2026-01-26 16:00:30.835067107 +0000 UTC m=+0.091656817 container cleanup 2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:00:30 compute-0 systemd[1]: libpod-conmon-2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95.scope: Deactivated successfully.
Jan 26 16:00:30 compute-0 podman[292409]: 2026-01-26 16:00:30.892936196 +0000 UTC m=+0.039463788 container remove 2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.898 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5b6598cd-c0f2-4695-9590-678a7f980834]: (4, ('Mon Jan 26 04:00:30 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95)\n2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95\nMon Jan 26 04:00:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95)\n2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.900 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba8708a3-10ed-4edf-bc58-fea70de18ec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.900 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.902 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 kernel: tap00eb7549-d0: left promiscuous mode
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.920 239969 INFO nova.virt.libvirt.driver [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance shutdown successfully after 24 seconds.
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.925 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7250f234-d969-4d16-b622-af34133443f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.928 239969 INFO nova.virt.libvirt.driver [-] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance destroyed successfully.
Jan 26 16:00:30 compute-0 nova_compute[239965]: 2026-01-26 16:00:30.929 239969 DEBUG nova.objects.instance [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'numa_topology' on Instance uuid d89afe65-4fc2-4cfa-8580-7ae5befd6a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.938 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[33218cf6-8ebf-44d5-8baa-045e89f9afeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.940 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[88f629af-454b-4ae6-a80b-ff37dda9a58b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.959 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e3f152-df6d-4ab2-b6fe-44c7f79e1d58]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 469446, 'reachable_time': 28239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292428, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.961 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:00:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:30.962 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6759f7-cc4f-4948-a381-4f08642e734a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:30 compute-0 ceph-mon[75140]: pgmap v1421: 305 pgs: 305 active+clean; 339 MiB data, 655 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.2 MiB/s wr, 226 op/s
Jan 26 16:00:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:31 compute-0 podman[292426]: 2026-01-26 16:00:31.03504611 +0000 UTC m=+0.076893276 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.118 239969 DEBUG nova.compute.manager [req-a81b1e42-abc4-45af-9fac-5fcda24d1cc5 req-967759b9-6370-4046-9339-8d181ae07d06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-unplugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.118 239969 DEBUG oslo_concurrency.lockutils [req-a81b1e42-abc4-45af-9fac-5fcda24d1cc5 req-967759b9-6370-4046-9339-8d181ae07d06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.119 239969 DEBUG oslo_concurrency.lockutils [req-a81b1e42-abc4-45af-9fac-5fcda24d1cc5 req-967759b9-6370-4046-9339-8d181ae07d06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.119 239969 DEBUG oslo_concurrency.lockutils [req-a81b1e42-abc4-45af-9fac-5fcda24d1cc5 req-967759b9-6370-4046-9339-8d181ae07d06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.119 239969 DEBUG nova.compute.manager [req-a81b1e42-abc4-45af-9fac-5fcda24d1cc5 req-967759b9-6370-4046-9339-8d181ae07d06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] No waiting events found dispatching network-vif-unplugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.120 239969 DEBUG nova.compute.manager [req-a81b1e42-abc4-45af-9fac-5fcda24d1cc5 req-967759b9-6370-4046-9339-8d181ae07d06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-unplugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:00:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d5a544c01\x2dd2d6\x2d42ef\x2d8ce3\x2dfe84f814eb0e.mount: Deactivated successfully.
Jan 26 16:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-700a0d4790aed27f15275ced4cfc3bafc28e19cd9b4e03a5ab24f52278972c76-merged.mount: Deactivated successfully.
Jan 26 16:00:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a93282732aafe3d6c33e3390cea54e4950d1ac665777d7ed9759e2ba7b84c95-userdata-shm.mount: Deactivated successfully.
Jan 26 16:00:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d00eb7549\x2dd24b\x2d4657\x2db244\x2d7664c8a34b20.mount: Deactivated successfully.
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.313 239969 INFO nova.virt.libvirt.driver [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Beginning cold snapshot process
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.480 239969 DEBUG nova.virt.libvirt.imagebackend [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:00:31 compute-0 nova_compute[239965]: 2026-01-26 16:00:31.757 239969 DEBUG nova.storage.rbd_utils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] creating snapshot(e8c6a5e5eae34df3a0432f43164d51a7) on rbd image(d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:00:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 322 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.0 MiB/s wr, 185 op/s
Jan 26 16:00:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Jan 26 16:00:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Jan 26 16:00:31 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Jan 26 16:00:32 compute-0 nova_compute[239965]: 2026-01-26 16:00:32.034 239969 DEBUG nova.storage.rbd_utils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] cloning vms/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk@e8c6a5e5eae34df3a0432f43164d51a7 to images/e86a5abc-f34a-4e04-b91e-26b6df03b83a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:00:32 compute-0 nova_compute[239965]: 2026-01-26 16:00:32.126 239969 DEBUG nova.storage.rbd_utils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] flattening images/e86a5abc-f34a-4e04-b91e-26b6df03b83a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:00:32 compute-0 nova_compute[239965]: 2026-01-26 16:00:32.423 239969 DEBUG nova.compute.manager [req-d31a1a85-8604-48c3-9466-53d6524567ed req-c8b6b017-67b6-4cee-905e-b00058ec983d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:32 compute-0 nova_compute[239965]: 2026-01-26 16:00:32.424 239969 DEBUG nova.compute.manager [req-d31a1a85-8604-48c3-9466-53d6524567ed req-c8b6b017-67b6-4cee-905e-b00058ec983d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing instance network info cache due to event network-changed-ad309073-b3c7-40b4-a937-5d72ec411ed7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:32 compute-0 nova_compute[239965]: 2026-01-26 16:00:32.424 239969 DEBUG oslo_concurrency.lockutils [req-d31a1a85-8604-48c3-9466-53d6524567ed req-c8b6b017-67b6-4cee-905e-b00058ec983d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:32 compute-0 nova_compute[239965]: 2026-01-26 16:00:32.424 239969 DEBUG oslo_concurrency.lockutils [req-d31a1a85-8604-48c3-9466-53d6524567ed req-c8b6b017-67b6-4cee-905e-b00058ec983d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:32 compute-0 nova_compute[239965]: 2026-01-26 16:00:32.424 239969 DEBUG nova.network.neutron [req-d31a1a85-8604-48c3-9466-53d6524567ed req-c8b6b017-67b6-4cee-905e-b00058ec983d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Refreshing network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:32 compute-0 podman[292552]: 2026-01-26 16:00:32.438797092 +0000 UTC m=+0.120364570 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 16:00:32 compute-0 nova_compute[239965]: 2026-01-26 16:00:32.589 239969 DEBUG nova.storage.rbd_utils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] removing snapshot(e8c6a5e5eae34df3a0432f43164d51a7) on rbd image(d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:00:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Jan 26 16:00:32 compute-0 ceph-mon[75140]: pgmap v1422: 305 pgs: 305 active+clean; 322 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.0 MiB/s wr, 185 op/s
Jan 26 16:00:32 compute-0 ceph-mon[75140]: osdmap e231: 3 total, 3 up, 3 in
Jan 26 16:00:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Jan 26 16:00:33 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.026 239969 DEBUG nova.storage.rbd_utils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] creating snapshot(snap) on rbd image(e86a5abc-f34a-4e04-b91e-26b6df03b83a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.172 239969 DEBUG nova.network.neutron [-] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.190 239969 INFO nova.compute.manager [-] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Took 2.70 seconds to deallocate network for instance.
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.232 239969 DEBUG oslo_concurrency.lockutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.233 239969 DEBUG oslo_concurrency.lockutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.325 239969 DEBUG oslo_concurrency.processutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.560 239969 DEBUG nova.compute.manager [req-13a08672-061b-48e4-9f1c-78e6c7dd5159 req-d821a03a-ee77-43d2-a629-4b364b8f1b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-plugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.561 239969 DEBUG oslo_concurrency.lockutils [req-13a08672-061b-48e4-9f1c-78e6c7dd5159 req-d821a03a-ee77-43d2-a629-4b364b8f1b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.561 239969 DEBUG oslo_concurrency.lockutils [req-13a08672-061b-48e4-9f1c-78e6c7dd5159 req-d821a03a-ee77-43d2-a629-4b364b8f1b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.562 239969 DEBUG oslo_concurrency.lockutils [req-13a08672-061b-48e4-9f1c-78e6c7dd5159 req-d821a03a-ee77-43d2-a629-4b364b8f1b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.562 239969 DEBUG nova.compute.manager [req-13a08672-061b-48e4-9f1c-78e6c7dd5159 req-d821a03a-ee77-43d2-a629-4b364b8f1b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] No waiting events found dispatching network-vif-plugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.563 239969 WARNING nova.compute.manager [req-13a08672-061b-48e4-9f1c-78e6c7dd5159 req-d821a03a-ee77-43d2-a629-4b364b8f1b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received unexpected event network-vif-plugged-1c0256a0-1409-4bc6-b14d-588ae40e6010 for instance with vm_state deleted and task_state None.
Jan 26 16:00:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2402897424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 322 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 44 KiB/s wr, 186 op/s
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.920 239969 DEBUG oslo_concurrency.processutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.927 239969 DEBUG nova.compute.provider_tree [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.945 239969 DEBUG nova.scheduler.client.report [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:33 compute-0 nova_compute[239965]: 2026-01-26 16:00:33.987 239969 DEBUG oslo_concurrency.lockutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Jan 26 16:00:34 compute-0 ceph-mon[75140]: osdmap e232: 3 total, 3 up, 3 in
Jan 26 16:00:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2402897424' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.016 239969 INFO nova.scheduler.client.report [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Deleted allocations for instance 98bce7d4-177d-4ead-b03c-70f005006e85
Jan 26 16:00:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Jan 26 16:00:34 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Jan 26 16:00:34 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.095 239969 DEBUG oslo_concurrency.lockutils [None req-9c3541ba-efc7-4255-b73d-f0919a6d8501 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "98bce7d4-177d-4ead-b03c-70f005006e85" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:34.110 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:34.110 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.111 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.344 239969 DEBUG nova.network.neutron [req-d31a1a85-8604-48c3-9466-53d6524567ed req-c8b6b017-67b6-4cee-905e-b00058ec983d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updated VIF entry in instance network info cache for port ad309073-b3c7-40b4-a937-5d72ec411ed7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.345 239969 DEBUG nova.network.neutron [req-d31a1a85-8604-48c3-9466-53d6524567ed req-c8b6b017-67b6-4cee-905e-b00058ec983d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updating instance_info_cache with network_info: [{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.364 239969 DEBUG oslo_concurrency.lockutils [req-d31a1a85-8604-48c3-9466-53d6524567ed req-c8b6b017-67b6-4cee-905e-b00058ec983d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.579 239969 DEBUG nova.compute.manager [req-41818a4a-5b3b-4a54-bb85-f3a7512d6499 req-007cfbc9-d774-4c9b-8efb-4f0bb796a0ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-deleted-c0a1fe12-63f4-4d07-9bc6-5613c9307343 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.580 239969 DEBUG nova.compute.manager [req-41818a4a-5b3b-4a54-bb85-f3a7512d6499 req-007cfbc9-d774-4c9b-8efb-4f0bb796a0ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-deleted-73773977-7e7a-4ec3-8051-43770cc2a237 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.580 239969 DEBUG nova.compute.manager [req-41818a4a-5b3b-4a54-bb85-f3a7512d6499 req-007cfbc9-d774-4c9b-8efb-4f0bb796a0ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Received event network-vif-deleted-1c0256a0-1409-4bc6-b14d-588ae40e6010 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.713 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.714 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.714 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:00:34 compute-0 nova_compute[239965]: 2026-01-26 16:00:34.714 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:35 compute-0 ceph-mon[75140]: pgmap v1425: 305 pgs: 305 active+clean; 322 MiB data, 656 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 44 KiB/s wr, 186 op/s
Jan 26 16:00:35 compute-0 ceph-mon[75140]: osdmap e233: 3 total, 3 up, 3 in
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.093 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.429 239969 DEBUG oslo_concurrency.lockutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.429 239969 DEBUG oslo_concurrency.lockutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.429 239969 DEBUG oslo_concurrency.lockutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.430 239969 DEBUG oslo_concurrency.lockutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.430 239969 DEBUG oslo_concurrency.lockutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.431 239969 INFO nova.compute.manager [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Terminating instance
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.432 239969 DEBUG nova.compute.manager [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:00:35 compute-0 kernel: tapad309073-b3 (unregistering): left promiscuous mode
Jan 26 16:00:35 compute-0 NetworkManager[48954]: <info>  [1769443235.4973] device (tapad309073-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.500 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:35 compute-0 ovn_controller[146046]: 2026-01-26T16:00:35Z|00523|binding|INFO|Releasing lport ad309073-b3c7-40b4-a937-5d72ec411ed7 from this chassis (sb_readonly=0)
Jan 26 16:00:35 compute-0 ovn_controller[146046]: 2026-01-26T16:00:35Z|00524|binding|INFO|Setting lport ad309073-b3c7-40b4-a937-5d72ec411ed7 down in Southbound
Jan 26 16:00:35 compute-0 ovn_controller[146046]: 2026-01-26T16:00:35Z|00525|binding|INFO|Removing iface tapad309073-b3 ovn-installed in OVS
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.511 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.512 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:35.521 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3a:d5:77 10.100.0.11'], port_security=['fa:16:3e:3a:d5:77 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'a4ddea97-0b71-4080-abed-fd5748d7244e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-90c691e2-a785-487c-9a4f-dd30836b91bb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c41676eb0a634807a0c639355e39a512', 'neutron:revision_number': '8', 'neutron:security_group_ids': '9ff51ba6-72d9-4f5f-bcad-1bb60c354451', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=30c9b54c-9e24-4f98-9fda-b4dce976db39, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=ad309073-b3c7-40b4-a937-5d72ec411ed7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:35.522 156105 INFO neutron.agent.ovn.metadata.agent [-] Port ad309073-b3c7-40b4-a937-5d72ec411ed7 in datapath 90c691e2-a785-487c-9a4f-dd30836b91bb unbound from our chassis
Jan 26 16:00:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:35.524 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 90c691e2-a785-487c-9a4f-dd30836b91bb or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:00:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:35.525 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[68f4204d-0932-4812-a40d-b3c5fc51c408]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.529 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:35 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Deactivated successfully.
Jan 26 16:00:35 compute-0 systemd[1]: machine-qemu\x2d67\x2dinstance\x2d0000003b.scope: Consumed 14.039s CPU time.
Jan 26 16:00:35 compute-0 systemd-machined[208061]: Machine qemu-67-instance-0000003b terminated.
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.669 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.675 239969 INFO nova.virt.libvirt.driver [-] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Instance destroyed successfully.
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.676 239969 DEBUG nova.objects.instance [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lazy-loading 'resources' on Instance uuid a4ddea97-0b71-4080-abed-fd5748d7244e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.715 239969 DEBUG nova.virt.libvirt.vif [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:59:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSONUnderV235-server-881645873',display_name='tempest-ServerRescueTestJSONUnderV235-server-881645873',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjsonunderv235-server-881645873',id=59,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:00:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c41676eb0a634807a0c639355e39a512',ramdisk_id='',reservation_id='r-osdyj818',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSONUnderV235-701891094',owner_user_name='tempest-ServerRescueTestJSONUnderV235-701891094-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:00:15Z,user_data=None,user_id='10e6dce963804f98b45b6f58ef8d1b2e',uuid=a4ddea97-0b71-4080-abed-fd5748d7244e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.715 239969 DEBUG nova.network.os_vif_util [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Converting VIF {"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.716 239969 DEBUG nova.network.os_vif_util [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3a:d5:77,bridge_name='br-int',has_traffic_filtering=True,id=ad309073-b3c7-40b4-a937-5d72ec411ed7,network=Network(90c691e2-a785-487c-9a4f-dd30836b91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad309073-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.716 239969 DEBUG os_vif [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:d5:77,bridge_name='br-int',has_traffic_filtering=True,id=ad309073-b3c7-40b4-a937-5d72ec411ed7,network=Network(90c691e2-a785-487c-9a4f-dd30836b91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad309073-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.718 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.719 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad309073-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.721 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.725 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:35 compute-0 nova_compute[239965]: 2026-01-26 16:00:35.729 239969 INFO os_vif [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3a:d5:77,bridge_name='br-int',has_traffic_filtering=True,id=ad309073-b3c7-40b4-a937-5d72ec411ed7,network=Network(90c691e2-a785-487c-9a4f-dd30836b91bb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad309073-b3')
Jan 26 16:00:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 374 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 7.8 MiB/s wr, 304 op/s
Jan 26 16:00:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.151 239969 INFO nova.virt.libvirt.driver [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Snapshot image upload complete
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.152 239969 DEBUG nova.compute.manager [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.233 239969 INFO nova.compute.manager [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Shelve offloading
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.241 239969 INFO nova.virt.libvirt.driver [-] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance destroyed successfully.
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.241 239969 DEBUG nova.compute.manager [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.244 239969 DEBUG oslo_concurrency.lockutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.244 239969 DEBUG oslo_concurrency.lockutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquired lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.245 239969 DEBUG nova.network.neutron [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.353 239969 INFO nova.virt.libvirt.driver [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Deleting instance files /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e_del
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.354 239969 INFO nova.virt.libvirt.driver [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Deletion of /var/lib/nova/instances/a4ddea97-0b71-4080-abed-fd5748d7244e_del complete
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.424 239969 INFO nova.compute.manager [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Took 0.99 seconds to destroy the instance on the hypervisor.
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.425 239969 DEBUG oslo.service.loopingcall [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.426 239969 DEBUG nova.compute.manager [-] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.426 239969 DEBUG nova.network.neutron [-] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.799 239969 DEBUG nova.compute.manager [req-da0792d0-b32e-4514-9178-10f881ae80b5 req-8560eaf7-9da1-46b0-9184-a14a62521d33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-unplugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.800 239969 DEBUG oslo_concurrency.lockutils [req-da0792d0-b32e-4514-9178-10f881ae80b5 req-8560eaf7-9da1-46b0-9184-a14a62521d33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.801 239969 DEBUG oslo_concurrency.lockutils [req-da0792d0-b32e-4514-9178-10f881ae80b5 req-8560eaf7-9da1-46b0-9184-a14a62521d33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.801 239969 DEBUG oslo_concurrency.lockutils [req-da0792d0-b32e-4514-9178-10f881ae80b5 req-8560eaf7-9da1-46b0-9184-a14a62521d33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.801 239969 DEBUG nova.compute.manager [req-da0792d0-b32e-4514-9178-10f881ae80b5 req-8560eaf7-9da1-46b0-9184-a14a62521d33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] No waiting events found dispatching network-vif-unplugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.801 239969 DEBUG nova.compute.manager [req-da0792d0-b32e-4514-9178-10f881ae80b5 req-8560eaf7-9da1-46b0-9184-a14a62521d33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-unplugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:36 compute-0 nova_compute[239965]: 2026-01-26 16:00:36.990 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updating instance_info_cache with network_info: [{"id": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "address": "fa:16:3e:3a:d5:77", "network": {"id": "90c691e2-a785-487c-9a4f-dd30836b91bb", "bridge": "br-int", "label": "tempest-ServerRescueTestJSONUnderV235-2030047543-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c41676eb0a634807a0c639355e39a512", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad309073-b3", "ovs_interfaceid": "ad309073-b3c7-40b4-a937-5d72ec411ed7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.011 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-a4ddea97-0b71-4080-abed-fd5748d7244e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.012 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.012 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.013 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.013 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.013 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:00:37 compute-0 ceph-mon[75140]: pgmap v1427: 305 pgs: 305 active+clean; 374 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 7.8 MiB/s wr, 304 op/s
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.033 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.034 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.034 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.035 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.035 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.310 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763503785' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.580 239969 DEBUG nova.compute.manager [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Received event network-vif-unplugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.580 239969 DEBUG oslo_concurrency.lockutils [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.581 239969 DEBUG oslo_concurrency.lockutils [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.581 239969 DEBUG oslo_concurrency.lockutils [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.581 239969 DEBUG nova.compute.manager [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] No waiting events found dispatching network-vif-unplugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.581 239969 WARNING nova.compute.manager [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Received unexpected event network-vif-unplugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 for instance with vm_state shelved and task_state shelving_offloading.
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.582 239969 DEBUG nova.compute.manager [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Received event network-vif-plugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.582 239969 DEBUG oslo_concurrency.lockutils [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.582 239969 DEBUG oslo_concurrency.lockutils [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.582 239969 DEBUG oslo_concurrency.lockutils [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.583 239969 DEBUG nova.compute.manager [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] No waiting events found dispatching network-vif-plugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.583 239969 WARNING nova.compute.manager [req-99d61148-b035-4083-9559-fec4ea32f79f req-9ca30acb-7a81-41d9-abad-73fa40bab97e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Received unexpected event network-vif-plugged-34a7ae82-747e-4fef-9081-0d0e28e80f28 for instance with vm_state shelved and task_state shelving_offloading.
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.600 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.669 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.669 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.716 239969 DEBUG nova.network.neutron [-] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.746 239969 INFO nova.compute.manager [-] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Took 1.32 seconds to deallocate network for instance.
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.794 239969 DEBUG oslo_concurrency.lockutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.795 239969 DEBUG oslo_concurrency.lockutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.857 239969 DEBUG oslo_concurrency.processutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.890 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.891 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3988MB free_disk=59.87582315597683GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:00:37 compute-0 nova_compute[239965]: 2026-01-26 16:00:37.892 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 374 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 7.8 MiB/s wr, 224 op/s
Jan 26 16:00:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2763503785' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:38.112 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079206593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.394 239969 DEBUG oslo_concurrency.processutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.400 239969 DEBUG nova.compute.provider_tree [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.415 239969 DEBUG nova.scheduler.client.report [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.434 239969 DEBUG oslo_concurrency.lockutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.437 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.483 239969 INFO nova.scheduler.client.report [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Deleted allocations for instance a4ddea97-0b71-4080-abed-fd5748d7244e
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.520 239969 DEBUG nova.network.neutron [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Updating instance_info_cache with network_info: [{"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.535 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance d89afe65-4fc2-4cfa-8580-7ae5befd6a4d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.535 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.535 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.567 239969 DEBUG oslo_concurrency.lockutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Releasing lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.594 239969 DEBUG oslo_concurrency.lockutils [None req-5b5d5526-f80b-4e85-bf6c-9f2d5f38b71c 10e6dce963804f98b45b6f58ef8d1b2e c41676eb0a634807a0c639355e39a512 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.600 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.921 239969 DEBUG nova.compute.manager [req-7f259f9c-0b25-4249-8ab6-42e990249d81 req-65931446-d589-4728-a3fd-bc44fdda3d60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.922 239969 DEBUG oslo_concurrency.lockutils [req-7f259f9c-0b25-4249-8ab6-42e990249d81 req-65931446-d589-4728-a3fd-bc44fdda3d60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.923 239969 DEBUG oslo_concurrency.lockutils [req-7f259f9c-0b25-4249-8ab6-42e990249d81 req-65931446-d589-4728-a3fd-bc44fdda3d60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.923 239969 DEBUG oslo_concurrency.lockutils [req-7f259f9c-0b25-4249-8ab6-42e990249d81 req-65931446-d589-4728-a3fd-bc44fdda3d60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a4ddea97-0b71-4080-abed-fd5748d7244e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.923 239969 DEBUG nova.compute.manager [req-7f259f9c-0b25-4249-8ab6-42e990249d81 req-65931446-d589-4728-a3fd-bc44fdda3d60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] No waiting events found dispatching network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.924 239969 WARNING nova.compute.manager [req-7f259f9c-0b25-4249-8ab6-42e990249d81 req-65931446-d589-4728-a3fd-bc44fdda3d60 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received unexpected event network-vif-plugged-ad309073-b3c7-40b4-a937-5d72ec411ed7 for instance with vm_state deleted and task_state None.
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.966 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.967 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:38 compute-0 nova_compute[239965]: 2026-01-26 16:00:38.988 239969 DEBUG nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:00:39 compute-0 ceph-mon[75140]: pgmap v1428: 305 pgs: 305 active+clean; 374 MiB data, 681 MiB used, 59 GiB / 60 GiB avail; 8.4 MiB/s rd, 7.8 MiB/s wr, 224 op/s
Jan 26 16:00:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1079206593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.086 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197439361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.206 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.213 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.230 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.256 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.256 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.257 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.264 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.264 239969 INFO nova.compute.claims [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.539 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.689 239969 DEBUG nova.compute.manager [req-6d40f3d0-40b9-4e41-babb-de5fd7603b17 req-18f1bee8-9fbc-48f0-b304-55b5082ad0ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Received event network-vif-deleted-ad309073-b3c7-40b4-a937-5d72ec411ed7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.875 239969 INFO nova.virt.libvirt.driver [-] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Instance destroyed successfully.
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.876 239969 DEBUG nova.objects.instance [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'resources' on Instance uuid d89afe65-4fc2-4cfa-8580-7ae5befd6a4d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.892 239969 DEBUG nova.virt.libvirt.vif [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T15:59:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2129100736',display_name='tempest-DeleteServersTestJSON-server-2129100736',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2129100736',id=60,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:00:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-u8bn3cy7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member',shelved_at='2026-01-26T16:00:36.152488',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='e86a5abc-f34a-4e04-b91e-26b6df03b83a'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:00:31Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=d89afe65-4fc2-4cfa-8580-7ae5befd6a4d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.893 239969 DEBUG nova.network.os_vif_util [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap34a7ae82-74", "ovs_interfaceid": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.894 239969 DEBUG nova.network.os_vif_util [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:57:38,bridge_name='br-int',has_traffic_filtering=True,id=34a7ae82-747e-4fef-9081-0d0e28e80f28,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34a7ae82-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.894 239969 DEBUG os_vif [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:57:38,bridge_name='br-int',has_traffic_filtering=True,id=34a7ae82-747e-4fef-9081-0d0e28e80f28,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34a7ae82-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.896 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap34a7ae82-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.898 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.901 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 291 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.9 MiB/s wr, 247 op/s
Jan 26 16:00:39 compute-0 nova_compute[239965]: 2026-01-26 16:00:39.903 239969 INFO os_vif [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:57:38,bridge_name='br-int',has_traffic_filtering=True,id=34a7ae82-747e-4fef-9081-0d0e28e80f28,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap34a7ae82-74')
Jan 26 16:00:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1197439361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111202783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.167 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.172 239969 DEBUG nova.compute.provider_tree [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.187 239969 DEBUG nova.scheduler.client.report [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.205 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.205 239969 DEBUG nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.249 239969 DEBUG nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.250 239969 DEBUG nova.network.neutron [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.270 239969 INFO nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.290 239969 DEBUG nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.394 239969 DEBUG nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.396 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.396 239969 INFO nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Creating image(s)
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.436 239969 DEBUG nova.storage.rbd_utils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image fbc49a44-5efe-4d87-aea7-b26f7232c224_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.464 239969 DEBUG nova.storage.rbd_utils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image fbc49a44-5efe-4d87-aea7-b26f7232c224_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.488 239969 DEBUG nova.storage.rbd_utils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image fbc49a44-5efe-4d87-aea7-b26f7232c224_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.493 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.560 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.561 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.561 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.562 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.585 239969 DEBUG nova.storage.rbd_utils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image fbc49a44-5efe-4d87-aea7-b26f7232c224_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.589 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 fbc49a44-5efe-4d87-aea7-b26f7232c224_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.721 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.890 239969 DEBUG nova.policy [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57040a375df6487fbc604a9b04389eeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.929 239969 INFO nova.virt.libvirt.driver [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Deleting instance files /var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_del
Jan 26 16:00:40 compute-0 nova_compute[239965]: 2026-01-26 16:00:40.930 239969 INFO nova.virt.libvirt.driver [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Deletion of /var/lib/nova/instances/d89afe65-4fc2-4cfa-8580-7ae5befd6a4d_del complete
Jan 26 16:00:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e233 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.250 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.250 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:00:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Jan 26 16:00:41 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Jan 26 16:00:41 compute-0 ceph-mon[75140]: pgmap v1429: 305 pgs: 305 active+clean; 291 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.9 MiB/s wr, 247 op/s
Jan 26 16:00:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4111202783' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.794 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 fbc49a44-5efe-4d87-aea7-b26f7232c224_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.851 239969 DEBUG nova.compute.manager [req-e8d6b58d-e901-4061-a590-0435c8457552 req-993b1071-2f47-42b8-b016-971828dee625 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Received event network-changed-34a7ae82-747e-4fef-9081-0d0e28e80f28 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.851 239969 DEBUG nova.compute.manager [req-e8d6b58d-e901-4061-a590-0435c8457552 req-993b1071-2f47-42b8-b016-971828dee625 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Refreshing instance network info cache due to event network-changed-34a7ae82-747e-4fef-9081-0d0e28e80f28. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.852 239969 DEBUG oslo_concurrency.lockutils [req-e8d6b58d-e901-4061-a590-0435c8457552 req-993b1071-2f47-42b8-b016-971828dee625 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.852 239969 DEBUG oslo_concurrency.lockutils [req-e8d6b58d-e901-4061-a590-0435c8457552 req-993b1071-2f47-42b8-b016-971828dee625 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.852 239969 DEBUG nova.network.neutron [req-e8d6b58d-e901-4061-a590-0435c8457552 req-993b1071-2f47-42b8-b016-971828dee625 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Refreshing network info cache for port 34a7ae82-747e-4fef-9081-0d0e28e80f28 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.858 239969 DEBUG nova.storage.rbd_utils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] resizing rbd image fbc49a44-5efe-4d87-aea7-b26f7232c224_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.896 239969 INFO nova.scheduler.client.report [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Deleted allocations for instance d89afe65-4fc2-4cfa-8580-7ae5befd6a4d
Jan 26 16:00:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 202 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.8 MiB/s wr, 291 op/s
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.938 239969 DEBUG oslo_concurrency.lockutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.939 239969 DEBUG oslo_concurrency.lockutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.943 239969 DEBUG nova.objects.instance [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'migration_context' on Instance uuid fbc49a44-5efe-4d87-aea7-b26f7232c224 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.964 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.965 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Ensure instance console log exists: /var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.965 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.965 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.966 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:41 compute-0 nova_compute[239965]: 2026-01-26 16:00:41.980 239969 DEBUG oslo_concurrency.processutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1094298297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:42 compute-0 nova_compute[239965]: 2026-01-26 16:00:42.529 239969 DEBUG oslo_concurrency.processutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:42 compute-0 nova_compute[239965]: 2026-01-26 16:00:42.536 239969 DEBUG nova.compute.provider_tree [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:42 compute-0 nova_compute[239965]: 2026-01-26 16:00:42.556 239969 DEBUG nova.scheduler.client.report [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:42 compute-0 nova_compute[239965]: 2026-01-26 16:00:42.586 239969 DEBUG oslo_concurrency.lockutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.647s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:42 compute-0 nova_compute[239965]: 2026-01-26 16:00:42.668 239969 DEBUG oslo_concurrency.lockutils [None req-1e61a068-a8b3-4517-80cf-9fc6756c8b3e 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 35.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:42 compute-0 nova_compute[239965]: 2026-01-26 16:00:42.749 239969 DEBUG nova.network.neutron [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Successfully created port: f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:00:43 compute-0 ceph-mon[75140]: osdmap e234: 3 total, 3 up, 3 in
Jan 26 16:00:43 compute-0 ceph-mon[75140]: pgmap v1431: 305 pgs: 305 active+clean; 202 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 5.8 MiB/s wr, 291 op/s
Jan 26 16:00:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1094298297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 202 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 235 op/s
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.600 239969 DEBUG nova.network.neutron [req-e8d6b58d-e901-4061-a590-0435c8457552 req-993b1071-2f47-42b8-b016-971828dee625 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Updated VIF entry in instance network info cache for port 34a7ae82-747e-4fef-9081-0d0e28e80f28. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.601 239969 DEBUG nova.network.neutron [req-e8d6b58d-e901-4061-a590-0435c8457552 req-993b1071-2f47-42b8-b016-971828dee625 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Updating instance_info_cache with network_info: [{"id": "34a7ae82-747e-4fef-9081-0d0e28e80f28", "address": "fa:16:3e:67:57:38", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": null, "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap34a7ae82-74", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.626 239969 DEBUG oslo_concurrency.lockutils [req-e8d6b58d-e901-4061-a590-0435c8457552 req-993b1071-2f47-42b8-b016-971828dee625 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d89afe65-4fc2-4cfa-8580-7ae5befd6a4d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.635 239969 DEBUG nova.network.neutron [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Successfully updated port: f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.650 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.650 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquired lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.650 239969 DEBUG nova.network.neutron [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.711 239969 DEBUG nova.compute.manager [req-6a1bc306-d698-4ede-a33e-024751c4e8a0 req-17fd2f0e-9c2b-4b64-9b0e-9b15ef90a2c9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-changed-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.712 239969 DEBUG nova.compute.manager [req-6a1bc306-d698-4ede-a33e-024751c4e8a0 req-17fd2f0e-9c2b-4b64-9b0e-9b15ef90a2c9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Refreshing instance network info cache due to event network-changed-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.713 239969 DEBUG oslo_concurrency.lockutils [req-6a1bc306-d698-4ede-a33e-024751c4e8a0 req-17fd2f0e-9c2b-4b64-9b0e-9b15ef90a2c9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.816 239969 DEBUG nova.network.neutron [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:00:44 compute-0 nova_compute[239965]: 2026-01-26 16:00:44.901 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Jan 26 16:00:45 compute-0 ceph-mon[75140]: pgmap v1432: 305 pgs: 305 active+clean; 202 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.7 MiB/s wr, 235 op/s
Jan 26 16:00:45 compute-0 nova_compute[239965]: 2026-01-26 16:00:45.030 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443230.0295024, 98bce7d4-177d-4ead-b03c-70f005006e85 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:45 compute-0 nova_compute[239965]: 2026-01-26 16:00:45.031 239969 INFO nova.compute.manager [-] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] VM Stopped (Lifecycle Event)
Jan 26 16:00:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Jan 26 16:00:45 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Jan 26 16:00:45 compute-0 nova_compute[239965]: 2026-01-26 16:00:45.050 239969 DEBUG nova.compute.manager [None req-dc6ffe74-3808-4320-bdb3-4b5da02c891d - - - - - -] [instance: 98bce7d4-177d-4ead-b03c-70f005006e85] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:45 compute-0 nova_compute[239965]: 2026-01-26 16:00:45.401 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443230.399966, d89afe65-4fc2-4cfa-8580-7ae5befd6a4d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:45 compute-0 nova_compute[239965]: 2026-01-26 16:00:45.402 239969 INFO nova.compute.manager [-] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] VM Stopped (Lifecycle Event)
Jan 26 16:00:45 compute-0 nova_compute[239965]: 2026-01-26 16:00:45.423 239969 DEBUG nova.compute.manager [None req-6893087a-3224-41e4-b0f2-2ffdd97bde23 - - - - - -] [instance: d89afe65-4fc2-4cfa-8580-7ae5befd6a4d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:45 compute-0 nova_compute[239965]: 2026-01-26 16:00:45.723 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 187 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 2.7 MiB/s wr, 176 op/s
Jan 26 16:00:46 compute-0 ceph-mon[75140]: osdmap e235: 3 total, 3 up, 3 in
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.161 239969 DEBUG nova.network.neutron [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Updating instance_info_cache with network_info: [{"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.188 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Releasing lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.189 239969 DEBUG nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Instance network_info: |[{"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.190 239969 DEBUG oslo_concurrency.lockutils [req-6a1bc306-d698-4ede-a33e-024751c4e8a0 req-17fd2f0e-9c2b-4b64-9b0e-9b15ef90a2c9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.190 239969 DEBUG nova.network.neutron [req-6a1bc306-d698-4ede-a33e-024751c4e8a0 req-17fd2f0e-9c2b-4b64-9b0e-9b15ef90a2c9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Refreshing network info cache for port f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.193 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Start _get_guest_xml network_info=[{"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.198 239969 WARNING nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.203 239969 DEBUG nova.virt.libvirt.host [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.204 239969 DEBUG nova.virt.libvirt.host [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:00:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.212 239969 DEBUG nova.virt.libvirt.host [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.214 239969 DEBUG nova.virt.libvirt.host [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.214 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.214 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.215 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.215 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.216 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.217 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.217 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.217 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.217 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.218 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.218 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.218 239969 DEBUG nova.virt.hardware [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.221 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/532608813' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.819 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.838 239969 DEBUG nova.storage.rbd_utils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image fbc49a44-5efe-4d87-aea7-b26f7232c224_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:46 compute-0 nova_compute[239965]: 2026-01-26 16:00:46.841 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:47 compute-0 ceph-mon[75140]: pgmap v1434: 305 pgs: 305 active+clean; 187 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 2.7 MiB/s wr, 176 op/s
Jan 26 16:00:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/532608813' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1302988565' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.421 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.423 239969 DEBUG nova.virt.libvirt.vif [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-439720396',display_name='tempest-ServerActionsTestOtherA-server-439720396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-439720396',id=62,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPEYI1ckjoLk7yIcgUD2N5pEXz7CpgCWmUDrDMsy81rcfUCI8L/DB4jj85dJvKgW1J8FZ3QYwC3Zv2Ccazu/ZH8R7JCX7KFR/iJYErtRCi4QLPrZS0N+skSfnxQ/qbE6Ew==',key_name='tempest-keypair-339418653',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-up098k7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57040a375df6487fbc604a9b04389eeb',uuid=fbc49a44-5efe-4d87-aea7-b26f7232c224,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.423 239969 DEBUG nova.network.os_vif_util [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.424 239969 DEBUG nova.network.os_vif_util [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:b9:39,bridge_name='br-int',has_traffic_filtering=True,id=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72df886-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.425 239969 DEBUG nova.objects.instance [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'pci_devices' on Instance uuid fbc49a44-5efe-4d87-aea7-b26f7232c224 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.440 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <uuid>fbc49a44-5efe-4d87-aea7-b26f7232c224</uuid>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <name>instance-0000003e</name>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestOtherA-server-439720396</nova:name>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:00:46</nova:creationTime>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <nova:user uuid="57040a375df6487fbc604a9b04389eeb">tempest-ServerActionsTestOtherA-980651809-project-member</nova:user>
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <nova:project uuid="4f5d05ded9184fea9b526a3522d47ea5">tempest-ServerActionsTestOtherA-980651809</nova:project>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <nova:port uuid="f72df886-4fcf-4dc1-8e52-62b2a9cbfb49">
Jan 26 16:00:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <system>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <entry name="serial">fbc49a44-5efe-4d87-aea7-b26f7232c224</entry>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <entry name="uuid">fbc49a44-5efe-4d87-aea7-b26f7232c224</entry>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     </system>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <os>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   </os>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <features>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   </features>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/fbc49a44-5efe-4d87-aea7-b26f7232c224_disk">
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/fbc49a44-5efe-4d87-aea7-b26f7232c224_disk.config">
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:65:b9:39"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <target dev="tapf72df886-4f"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224/console.log" append="off"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <video>
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     </video>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:00:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:00:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:00:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:00:47 compute-0 nova_compute[239965]: </domain>
Jan 26 16:00:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.442 239969 DEBUG nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Preparing to wait for external event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.442 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.442 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.442 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.443 239969 DEBUG nova.virt.libvirt.vif [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-439720396',display_name='tempest-ServerActionsTestOtherA-server-439720396',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-439720396',id=62,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPEYI1ckjoLk7yIcgUD2N5pEXz7CpgCWmUDrDMsy81rcfUCI8L/DB4jj85dJvKgW1J8FZ3QYwC3Zv2Ccazu/ZH8R7JCX7KFR/iJYErtRCi4QLPrZS0N+skSfnxQ/qbE6Ew==',key_name='tempest-keypair-339418653',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-up098k7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57040a375df6487fbc604a9b04389eeb',uuid=fbc49a44-5efe-4d87-aea7-b26f7232c224,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.443 239969 DEBUG nova.network.os_vif_util [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.444 239969 DEBUG nova.network.os_vif_util [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:b9:39,bridge_name='br-int',has_traffic_filtering=True,id=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72df886-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.444 239969 DEBUG os_vif [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:b9:39,bridge_name='br-int',has_traffic_filtering=True,id=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72df886-4f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.445 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.445 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.445 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.449 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.449 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf72df886-4f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.450 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf72df886-4f, col_values=(('external_ids', {'iface-id': 'f72df886-4fcf-4dc1-8e52-62b2a9cbfb49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:b9:39', 'vm-uuid': 'fbc49a44-5efe-4d87-aea7-b26f7232c224'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:47 compute-0 NetworkManager[48954]: <info>  [1769443247.4531] manager: (tapf72df886-4f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.454 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.457 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.457 239969 INFO os_vif [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:b9:39,bridge_name='br-int',has_traffic_filtering=True,id=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72df886-4f')
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.522 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.522 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.524 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No VIF found with MAC fa:16:3e:65:b9:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.525 239969 INFO nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Using config drive
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.545 239969 DEBUG nova.storage.rbd_utils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image fbc49a44-5efe-4d87-aea7-b26f7232c224_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.722 239969 DEBUG nova.network.neutron [req-6a1bc306-d698-4ede-a33e-024751c4e8a0 req-17fd2f0e-9c2b-4b64-9b0e-9b15ef90a2c9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Updated VIF entry in instance network info cache for port f72df886-4fcf-4dc1-8e52-62b2a9cbfb49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.723 239969 DEBUG nova.network.neutron [req-6a1bc306-d698-4ede-a33e-024751c4e8a0 req-17fd2f0e-9c2b-4b64-9b0e-9b15ef90a2c9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Updating instance_info_cache with network_info: [{"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:47 compute-0 nova_compute[239965]: 2026-01-26 16:00:47.740 239969 DEBUG oslo_concurrency.lockutils [req-6a1bc306-d698-4ede-a33e-024751c4e8a0 req-17fd2f0e-9c2b-4b64-9b0e-9b15ef90a2c9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 187 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 2.7 MiB/s wr, 101 op/s
Jan 26 16:00:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1302988565' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.063 239969 INFO nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Creating config drive at /var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224/disk.config
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.071 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpck5953cq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.217 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpck5953cq" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.244 239969 DEBUG nova.storage.rbd_utils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image fbc49a44-5efe-4d87-aea7-b26f7232c224_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.247 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224/disk.config fbc49a44-5efe-4d87-aea7-b26f7232c224_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.451 239969 DEBUG oslo_concurrency.processutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224/disk.config fbc49a44-5efe-4d87-aea7-b26f7232c224_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.452 239969 INFO nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Deleting local config drive /var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224/disk.config because it was imported into RBD.
Jan 26 16:00:48 compute-0 kernel: tapf72df886-4f: entered promiscuous mode
Jan 26 16:00:48 compute-0 NetworkManager[48954]: <info>  [1769443248.5127] manager: (tapf72df886-4f): new Tun device (/org/freedesktop/NetworkManager/Devices/235)
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.511 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:48 compute-0 ovn_controller[146046]: 2026-01-26T16:00:48Z|00526|binding|INFO|Claiming lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 for this chassis.
Jan 26 16:00:48 compute-0 ovn_controller[146046]: 2026-01-26T16:00:48Z|00527|binding|INFO|f72df886-4fcf-4dc1-8e52-62b2a9cbfb49: Claiming fa:16:3e:65:b9:39 10.100.0.10
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.517 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.526 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:b9:39 10.100.0.10'], port_security=['fa:16:3e:65:b9:39 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbc49a44-5efe-4d87-aea7-b26f7232c224', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0ea85f49-3fe1-4caf-b20f-790e82828475', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.528 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 bound to our chassis
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.529 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac4ec909-9164-40ab-a894-4d501ce12c59
Jan 26 16:00:48 compute-0 systemd-udevd[293111]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.542 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e39b1f13-df7a-4d66-874b-9f4faeee8500]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.544 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac4ec909-91 in ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.546 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac4ec909-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.546 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c5d3da0e-0b47-4bfd-a865-00b6d098f2d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.547 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d6957e33-defd-4b79-8805-f70ee7f6ad95]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 systemd-machined[208061]: New machine qemu-69-instance-0000003e.
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.559 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[99ff86ce-576b-42cd-ad28-c62f72e97eb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 NetworkManager[48954]: <info>  [1769443248.5633] device (tapf72df886-4f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:00:48 compute-0 NetworkManager[48954]: <info>  [1769443248.5644] device (tapf72df886-4f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:00:48 compute-0 systemd[1]: Started Virtual Machine qemu-69-instance-0000003e.
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.582 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dcb2578a-1b38-4487-9a1f-d02a88de50ac]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.612 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.613 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[96318d06-66ae-40be-ba6d-f17e53bbb940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ovn_controller[146046]: 2026-01-26T16:00:48Z|00528|binding|INFO|Setting lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 ovn-installed in OVS
Jan 26 16:00:48 compute-0 ovn_controller[146046]: 2026-01-26T16:00:48Z|00529|binding|INFO|Setting lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 up in Southbound
Jan 26 16:00:48 compute-0 NetworkManager[48954]: <info>  [1769443248.6212] manager: (tapac4ec909-90): new Veth device (/org/freedesktop/NetworkManager/Devices/236)
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.620 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[45a83ae9-f14e-4005-ab83-0ad22e259d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.620 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:48 compute-0 systemd-udevd[293114]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.653 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1076b377-e296-4a98-8406-d5a122e16d08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.657 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[db868cf7-9712-42cb-a31f-11e07809e1cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 NetworkManager[48954]: <info>  [1769443248.6797] device (tapac4ec909-90): carrier: link connected
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.685 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b4462912-24d0-4384-8c94-df6457531caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.708 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[62076878-a9f4-482c-bffe-87cdb03a7191]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ec909-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6b:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473948, 'reachable_time': 32074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293143, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:00:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755158747' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:00:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:00:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2755158747' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.728 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f0105abc-a2cb-4d04-b09e-6879e2b4812f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:6bf8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473948, 'tstamp': 473948}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293144, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.746 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b38cbe-5b25-4c26-af62-bfec1e33d6e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ec909-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6b:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473948, 'reachable_time': 32074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293145, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.786 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ca394a54-7149-44a0-8f63-a60ac180c0f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035452122348915313 of space, bias 1.0, pg target 0.10635636704674593 quantized to 32 (current 32)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0015353295007720469 of space, bias 1.0, pg target 0.46059885023161407 quantized to 32 (current 32)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.893724460779413e-07 of space, bias 4.0, pg target 0.0009472469352935296 quantized to 16 (current 16)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:00:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.855 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[33b33668-644a-4b51-b4cc-bf1018c9d4ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.858 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ec909-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.858 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.859 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac4ec909-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.861 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:48 compute-0 kernel: tapac4ec909-90: entered promiscuous mode
Jan 26 16:00:48 compute-0 NetworkManager[48954]: <info>  [1769443248.8620] manager: (tapac4ec909-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.866 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac4ec909-90, col_values=(('external_ids', {'iface-id': '47411fc6-7d46-43a0-b0c2-7c06b22cee9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.868 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:48 compute-0 ovn_controller[146046]: 2026-01-26T16:00:48Z|00530|binding|INFO|Releasing lport 47411fc6-7d46-43a0-b0c2-7c06b22cee9e from this chassis (sb_readonly=0)
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.871 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac4ec909-9164-40ab-a894-4d501ce12c59.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac4ec909-9164-40ab-a894-4d501ce12c59.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.873 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[02894da7-f9ba-4914-b1d9-2feea5f27860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.874 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-ac4ec909-9164-40ab-a894-4d501ce12c59
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/ac4ec909-9164-40ab-a894-4d501ce12c59.pid.haproxy
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID ac4ec909-9164-40ab-a894-4d501ce12c59
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:00:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:48.875 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'env', 'PROCESS_TAG=haproxy-ac4ec909-9164-40ab-a894-4d501ce12c59', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac4ec909-9164-40ab-a894-4d501ce12c59.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:00:48 compute-0 nova_compute[239965]: 2026-01-26 16:00:48.887 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:49 compute-0 nova_compute[239965]: 2026-01-26 16:00:49.005 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443249.0047538, fbc49a44-5efe-4d87-aea7-b26f7232c224 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:49 compute-0 nova_compute[239965]: 2026-01-26 16:00:49.006 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] VM Started (Lifecycle Event)
Jan 26 16:00:49 compute-0 nova_compute[239965]: 2026-01-26 16:00:49.030 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:49 compute-0 nova_compute[239965]: 2026-01-26 16:00:49.035 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443249.0052483, fbc49a44-5efe-4d87-aea7-b26f7232c224 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:49 compute-0 nova_compute[239965]: 2026-01-26 16:00:49.035 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] VM Paused (Lifecycle Event)
Jan 26 16:00:49 compute-0 nova_compute[239965]: 2026-01-26 16:00:49.058 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:49 compute-0 nova_compute[239965]: 2026-01-26 16:00:49.062 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:00:49 compute-0 nova_compute[239965]: 2026-01-26 16:00:49.081 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:00:49 compute-0 ceph-mon[75140]: pgmap v1435: 305 pgs: 305 active+clean; 187 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 2.7 MiB/s wr, 101 op/s
Jan 26 16:00:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2755158747' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:00:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2755158747' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:00:49 compute-0 podman[293219]: 2026-01-26 16:00:49.272426147 +0000 UTC m=+0.025491306 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:00:49 compute-0 podman[293219]: 2026-01-26 16:00:49.381280796 +0000 UTC m=+0.134345935 container create 34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:00:49 compute-0 systemd[1]: Started libpod-conmon-34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db.scope.
Jan 26 16:00:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d85cb6818267243b35c5a9f1f66cdcc099c755b83efb0fa92f1a673a8c9c87/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:49 compute-0 podman[293219]: 2026-01-26 16:00:49.487385947 +0000 UTC m=+0.240451116 container init 34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:00:49 compute-0 podman[293219]: 2026-01-26 16:00:49.494064101 +0000 UTC m=+0.247129240 container start 34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:00:49 compute-0 neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59[293234]: [NOTICE]   (293238) : New worker (293240) forked
Jan 26 16:00:49 compute-0 neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59[293234]: [NOTICE]   (293238) : Loading success.
Jan 26 16:00:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1436: 305 pgs: 305 active+clean; 134 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Jan 26 16:00:50 compute-0 ceph-mon[75140]: pgmap v1436: 305 pgs: 305 active+clean; 134 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 2.6 MiB/s wr, 93 op/s
Jan 26 16:00:50 compute-0 nova_compute[239965]: 2026-01-26 16:00:50.673 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443235.6725576, a4ddea97-0b71-4080-abed-fd5748d7244e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:50 compute-0 nova_compute[239965]: 2026-01-26 16:00:50.674 239969 INFO nova.compute.manager [-] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] VM Stopped (Lifecycle Event)
Jan 26 16:00:50 compute-0 nova_compute[239965]: 2026-01-26 16:00:50.698 239969 DEBUG nova.compute.manager [None req-122eb5e4-7e4c-4eb9-ae9e-3874044716d7 - - - - - -] [instance: a4ddea97-0b71-4080-abed-fd5748d7244e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:50 compute-0 nova_compute[239965]: 2026-01-26 16:00:50.725 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.011 239969 DEBUG nova.compute.manager [req-c6dfe55c-4d7a-43b6-8a1e-2b72bdd876e5 req-fd5e8082-0603-4d41-8c42-2ccaf0563cf0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.011 239969 DEBUG oslo_concurrency.lockutils [req-c6dfe55c-4d7a-43b6-8a1e-2b72bdd876e5 req-fd5e8082-0603-4d41-8c42-2ccaf0563cf0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.012 239969 DEBUG oslo_concurrency.lockutils [req-c6dfe55c-4d7a-43b6-8a1e-2b72bdd876e5 req-fd5e8082-0603-4d41-8c42-2ccaf0563cf0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.012 239969 DEBUG oslo_concurrency.lockutils [req-c6dfe55c-4d7a-43b6-8a1e-2b72bdd876e5 req-fd5e8082-0603-4d41-8c42-2ccaf0563cf0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.012 239969 DEBUG nova.compute.manager [req-c6dfe55c-4d7a-43b6-8a1e-2b72bdd876e5 req-fd5e8082-0603-4d41-8c42-2ccaf0563cf0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Processing event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.013 239969 DEBUG nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.017 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443251.0165687, fbc49a44-5efe-4d87-aea7-b26f7232c224 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.017 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] VM Resumed (Lifecycle Event)
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.019 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.023 239969 INFO nova.virt.libvirt.driver [-] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Instance spawned successfully.
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.024 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.038 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.045 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.054 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.055 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.056 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.056 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.057 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.057 239969 DEBUG nova.virt.libvirt.driver [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.067 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.126 239969 INFO nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Took 10.73 seconds to spawn the instance on the hypervisor.
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.126 239969 DEBUG nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.188 239969 INFO nova.compute.manager [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Took 12.12 seconds to build instance.
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.196 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "764b92e5-31ed-41bd-b90d-067a907d0206" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.197 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.205 239969 DEBUG oslo_concurrency.lockutils [None req-c678ef24-2dcf-4ee1-b1e0-853c34f67d10 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e235 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Jan 26 16:00:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Jan 26 16:00:51 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.221 239969 DEBUG nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.311 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.311 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.321 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.321 239969 INFO nova.compute.claims [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:00:51 compute-0 nova_compute[239965]: 2026-01-26 16:00:51.445 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 134 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Jan 26 16:00:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3368821920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.129 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.684s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.135 239969 DEBUG nova.compute.provider_tree [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.155 239969 DEBUG nova.scheduler.client.report [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.181 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.182 239969 DEBUG nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.189 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.190 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.218 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:00:52 compute-0 ceph-mon[75140]: osdmap e236: 3 total, 3 up, 3 in
Jan 26 16:00:52 compute-0 ceph-mon[75140]: pgmap v1438: 305 pgs: 305 active+clean; 134 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 2.7 MiB/s wr, 100 op/s
Jan 26 16:00:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3368821920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.251 239969 DEBUG nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.251 239969 DEBUG nova.network.neutron [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.282 239969 INFO nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.299 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.300 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.301 239969 DEBUG nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.309 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.309 239969 INFO nova.compute.claims [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.411 239969 DEBUG nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.413 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.413 239969 INFO nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Creating image(s)
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.438 239969 DEBUG nova.storage.rbd_utils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 764b92e5-31ed-41bd-b90d-067a907d0206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.462 239969 DEBUG nova.storage.rbd_utils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 764b92e5-31ed-41bd-b90d-067a907d0206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.485 239969 DEBUG nova.storage.rbd_utils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 764b92e5-31ed-41bd-b90d-067a907d0206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.489 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.519 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.555 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.555 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.556 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.556 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.577 239969 DEBUG nova.storage.rbd_utils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 764b92e5-31ed-41bd-b90d-067a907d0206_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.581 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 764b92e5-31ed-41bd-b90d-067a907d0206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.626 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:52 compute-0 nova_compute[239965]: 2026-01-26 16:00:52.707 239969 DEBUG nova.policy [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c7cba75e9fd41599b1c9a3388447cdd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aee99e5b6af74088bd848cecc9592e82', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.120 239969 DEBUG nova.compute.manager [req-e6ee9c62-156e-4cf4-9350-a809b6d4391f req-f853f8f6-557e-487e-8f69-9e708b941d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.121 239969 DEBUG oslo_concurrency.lockutils [req-e6ee9c62-156e-4cf4-9350-a809b6d4391f req-f853f8f6-557e-487e-8f69-9e708b941d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.121 239969 DEBUG oslo_concurrency.lockutils [req-e6ee9c62-156e-4cf4-9350-a809b6d4391f req-f853f8f6-557e-487e-8f69-9e708b941d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.122 239969 DEBUG oslo_concurrency.lockutils [req-e6ee9c62-156e-4cf4-9350-a809b6d4391f req-f853f8f6-557e-487e-8f69-9e708b941d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.122 239969 DEBUG nova.compute.manager [req-e6ee9c62-156e-4cf4-9350-a809b6d4391f req-f853f8f6-557e-487e-8f69-9e708b941d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] No waiting events found dispatching network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.122 239969 WARNING nova.compute.manager [req-e6ee9c62-156e-4cf4-9350-a809b6d4391f req-f853f8f6-557e-487e-8f69-9e708b941d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received unexpected event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 for instance with vm_state active and task_state None.
Jan 26 16:00:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:00:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2755031665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.274 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.283 239969 DEBUG nova.compute.provider_tree [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.301 239969 DEBUG nova.scheduler.client.report [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.307 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 764b92e5-31ed-41bd-b90d-067a907d0206_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.727s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2755031665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.350 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.351 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.390 239969 DEBUG nova.storage.rbd_utils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] resizing rbd image 764b92e5-31ed-41bd-b90d-067a907d0206_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.426 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.426 239969 DEBUG nova.network.neutron [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.449 239969 INFO nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.470 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.526 239969 DEBUG nova.objects.instance [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'migration_context' on Instance uuid 764b92e5-31ed-41bd-b90d-067a907d0206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.545 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.546 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Ensure instance console log exists: /var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.547 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.547 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.547 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.565 239969 DEBUG nova.network.neutron [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Successfully created port: e12546e3-3fac-40e3-88c0-ed9ad0bc3880 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.573 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.575 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.575 239969 INFO nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Creating image(s)
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.608 239969 DEBUG nova.storage.rbd_utils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 6c804bac-7ee8-45d4-821c-535fac124d08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:53 compute-0 NetworkManager[48954]: <info>  [1769443253.6370] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Jan 26 16:00:53 compute-0 NetworkManager[48954]: <info>  [1769443253.6380] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.660 239969 DEBUG nova.storage.rbd_utils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 6c804bac-7ee8-45d4-821c-535fac124d08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.700 239969 DEBUG nova.storage.rbd_utils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 6c804bac-7ee8-45d4-821c-535fac124d08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.707 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.750 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.757 239969 DEBUG nova.policy [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9cc0fb8436b44d0499a7d55e0a7e7585', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3f9fff751f0a4a4aba88032259e9628c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.789 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.790 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.790 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.791 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:53 compute-0 ovn_controller[146046]: 2026-01-26T16:00:53Z|00531|binding|INFO|Releasing lport 47411fc6-7d46-43a0-b0c2-7c06b22cee9e from this chassis (sb_readonly=0)
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.815 239969 DEBUG nova.storage.rbd_utils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 6c804bac-7ee8-45d4-821c-535fac124d08_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.832 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 6c804bac-7ee8-45d4-821c-535fac124d08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:53 compute-0 nova_compute[239965]: 2026-01-26 16:00:53.860 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 134 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 354 KiB/s wr, 61 op/s
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.146 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 6c804bac-7ee8-45d4-821c-535fac124d08_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.223 239969 DEBUG nova.storage.rbd_utils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] resizing rbd image 6c804bac-7ee8-45d4-821c-535fac124d08_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.319 239969 DEBUG nova.objects.instance [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lazy-loading 'migration_context' on Instance uuid 6c804bac-7ee8-45d4-821c-535fac124d08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:54 compute-0 ceph-mon[75140]: pgmap v1439: 305 pgs: 305 active+clean; 134 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 354 KiB/s wr, 61 op/s
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.339 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.340 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Ensure instance console log exists: /var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.341 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.341 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.341 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.350 239969 DEBUG nova.compute.manager [req-f1fc0c7d-4867-4354-bb62-445ebdfa3f43 req-b70d166d-d53e-40dc-9a83-334b4285c917 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-changed-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.350 239969 DEBUG nova.compute.manager [req-f1fc0c7d-4867-4354-bb62-445ebdfa3f43 req-b70d166d-d53e-40dc-9a83-334b4285c917 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Refreshing instance network info cache due to event network-changed-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.351 239969 DEBUG oslo_concurrency.lockutils [req-f1fc0c7d-4867-4354-bb62-445ebdfa3f43 req-b70d166d-d53e-40dc-9a83-334b4285c917 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.351 239969 DEBUG oslo_concurrency.lockutils [req-f1fc0c7d-4867-4354-bb62-445ebdfa3f43 req-b70d166d-d53e-40dc-9a83-334b4285c917 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.351 239969 DEBUG nova.network.neutron [req-f1fc0c7d-4867-4354-bb62-445ebdfa3f43 req-b70d166d-d53e-40dc-9a83-334b4285c917 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Refreshing network info cache for port f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.673 239969 DEBUG nova.network.neutron [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Successfully updated port: e12546e3-3fac-40e3-88c0-ed9ad0bc3880 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.740 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "refresh_cache-764b92e5-31ed-41bd-b90d-067a907d0206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.741 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquired lock "refresh_cache-764b92e5-31ed-41bd-b90d-067a907d0206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.741 239969 DEBUG nova.network.neutron [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:00:54 compute-0 nova_compute[239965]: 2026-01-26 16:00:54.880 239969 DEBUG nova.network.neutron [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.042 239969 DEBUG nova.network.neutron [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Successfully created port: a23e88a0-e02f-48ba-8c26-a51810b9f2ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.682 239969 DEBUG nova.network.neutron [req-f1fc0c7d-4867-4354-bb62-445ebdfa3f43 req-b70d166d-d53e-40dc-9a83-334b4285c917 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Updated VIF entry in instance network info cache for port f72df886-4fcf-4dc1-8e52-62b2a9cbfb49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.683 239969 DEBUG nova.network.neutron [req-f1fc0c7d-4867-4354-bb62-445ebdfa3f43 req-b70d166d-d53e-40dc-9a83-334b4285c917 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Updating instance_info_cache with network_info: [{"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.703 239969 DEBUG oslo_concurrency.lockutils [req-f1fc0c7d-4867-4354-bb62-445ebdfa3f43 req-b70d166d-d53e-40dc-9a83-334b4285c917 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.727 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.856 239969 DEBUG nova.network.neutron [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Successfully created port: 769188ca-56ed-49b7-8944-e9229697b5b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:00:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1440: 305 pgs: 305 active+clean; 227 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 174 op/s
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.923 239969 DEBUG nova.network.neutron [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Updating instance_info_cache with network_info: [{"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.946 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Releasing lock "refresh_cache-764b92e5-31ed-41bd-b90d-067a907d0206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.946 239969 DEBUG nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Instance network_info: |[{"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.949 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Start _get_guest_xml network_info=[{"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.953 239969 WARNING nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.961 239969 DEBUG nova.virt.libvirt.host [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.962 239969 DEBUG nova.virt.libvirt.host [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.965 239969 DEBUG nova.virt.libvirt.host [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.966 239969 DEBUG nova.virt.libvirt.host [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.967 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.967 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.968 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.968 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.968 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.969 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.969 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.969 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.969 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.970 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.970 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.970 239969 DEBUG nova.virt.hardware [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:00:55 compute-0 nova_compute[239965]: 2026-01-26 16:00:55.973 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:00:56 compute-0 nova_compute[239965]: 2026-01-26 16:00:56.449 239969 DEBUG nova.compute.manager [req-6e1117c5-c3e1-43ef-ae79-24482afa1474 req-df85409a-2db2-4063-8eb2-6a8beb5c103a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Received event network-changed-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:56 compute-0 nova_compute[239965]: 2026-01-26 16:00:56.449 239969 DEBUG nova.compute.manager [req-6e1117c5-c3e1-43ef-ae79-24482afa1474 req-df85409a-2db2-4063-8eb2-6a8beb5c103a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Refreshing instance network info cache due to event network-changed-e12546e3-3fac-40e3-88c0-ed9ad0bc3880. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:56 compute-0 nova_compute[239965]: 2026-01-26 16:00:56.450 239969 DEBUG oslo_concurrency.lockutils [req-6e1117c5-c3e1-43ef-ae79-24482afa1474 req-df85409a-2db2-4063-8eb2-6a8beb5c103a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-764b92e5-31ed-41bd-b90d-067a907d0206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:56 compute-0 nova_compute[239965]: 2026-01-26 16:00:56.450 239969 DEBUG oslo_concurrency.lockutils [req-6e1117c5-c3e1-43ef-ae79-24482afa1474 req-df85409a-2db2-4063-8eb2-6a8beb5c103a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-764b92e5-31ed-41bd-b90d-067a907d0206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:56 compute-0 nova_compute[239965]: 2026-01-26 16:00:56.450 239969 DEBUG nova.network.neutron [req-6e1117c5-c3e1-43ef-ae79-24482afa1474 req-df85409a-2db2-4063-8eb2-6a8beb5c103a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Refreshing network info cache for port e12546e3-3fac-40e3-88c0-ed9ad0bc3880 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:56 compute-0 nova_compute[239965]: 2026-01-26 16:00:56.566 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1340032704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:56 compute-0 nova_compute[239965]: 2026-01-26 16:00:56.600 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:56 compute-0 nova_compute[239965]: 2026-01-26 16:00:56.623 239969 DEBUG nova.storage.rbd_utils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 764b92e5-31ed-41bd-b90d-067a907d0206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:56 compute-0 nova_compute[239965]: 2026-01-26 16:00:56.627 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:56 compute-0 ceph-mon[75140]: pgmap v1440: 305 pgs: 305 active+clean; 227 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 174 op/s
Jan 26 16:00:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1340032704' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.170 239969 DEBUG nova.network.neutron [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Successfully updated port: a23e88a0-e02f-48ba-8c26-a51810b9f2ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:00:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:00:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/339097968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.253 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.626s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.254 239969 DEBUG nova.virt.libvirt.vif [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1801386944',display_name='tempest-DeleteServersTestJSON-server-1801386944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1801386944',id=63,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-0l8b8fhx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:52Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=764b92e5-31ed-41bd-b90d-067a907d0206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.255 239969 DEBUG nova.network.os_vif_util [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.256 239969 DEBUG nova.network.os_vif_util [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:8d:b0,bridge_name='br-int',has_traffic_filtering=True,id=e12546e3-3fac-40e3-88c0-ed9ad0bc3880,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape12546e3-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.257 239969 DEBUG nova.objects.instance [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 764b92e5-31ed-41bd-b90d-067a907d0206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.272 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <uuid>764b92e5-31ed-41bd-b90d-067a907d0206</uuid>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <name>instance-0000003f</name>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <nova:name>tempest-DeleteServersTestJSON-server-1801386944</nova:name>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:00:55</nova:creationTime>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <nova:user uuid="3c7cba75e9fd41599b1c9a3388447cdd">tempest-DeleteServersTestJSON-234439961-project-member</nova:user>
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <nova:project uuid="aee99e5b6af74088bd848cecc9592e82">tempest-DeleteServersTestJSON-234439961</nova:project>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <nova:port uuid="e12546e3-3fac-40e3-88c0-ed9ad0bc3880">
Jan 26 16:00:57 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <system>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <entry name="serial">764b92e5-31ed-41bd-b90d-067a907d0206</entry>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <entry name="uuid">764b92e5-31ed-41bd-b90d-067a907d0206</entry>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     </system>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <os>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   </os>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <features>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   </features>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/764b92e5-31ed-41bd-b90d-067a907d0206_disk">
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/764b92e5-31ed-41bd-b90d-067a907d0206_disk.config">
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       </source>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:00:57 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:a7:8d:b0"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <target dev="tape12546e3-3f"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206/console.log" append="off"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <video>
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     </video>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:00:57 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:00:57 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:00:57 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:00:57 compute-0 nova_compute[239965]: </domain>
Jan 26 16:00:57 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.277 239969 DEBUG nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Preparing to wait for external event network-vif-plugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.278 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.278 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.278 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.279 239969 DEBUG nova.virt.libvirt.vif [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1801386944',display_name='tempest-DeleteServersTestJSON-server-1801386944',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1801386944',id=63,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-0l8b8fhx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:52Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=764b92e5-31ed-41bd-b90d-067a907d0206,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.279 239969 DEBUG nova.network.os_vif_util [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.280 239969 DEBUG nova.network.os_vif_util [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:8d:b0,bridge_name='br-int',has_traffic_filtering=True,id=e12546e3-3fac-40e3-88c0-ed9ad0bc3880,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape12546e3-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.280 239969 DEBUG os_vif [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:8d:b0,bridge_name='br-int',has_traffic_filtering=True,id=e12546e3-3fac-40e3-88c0-ed9ad0bc3880,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape12546e3-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.281 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.282 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.282 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.286 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.286 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape12546e3-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.287 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape12546e3-3f, col_values=(('external_ids', {'iface-id': 'e12546e3-3fac-40e3-88c0-ed9ad0bc3880', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:8d:b0', 'vm-uuid': '764b92e5-31ed-41bd-b90d-067a907d0206'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.288 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:57 compute-0 NetworkManager[48954]: <info>  [1769443257.2891] manager: (tape12546e3-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/240)
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.300 239969 INFO os_vif [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:8d:b0,bridge_name='br-int',has_traffic_filtering=True,id=e12546e3-3fac-40e3-88c0-ed9ad0bc3880,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape12546e3-3f')
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.358 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.358 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.358 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No VIF found with MAC fa:16:3e:a7:8d:b0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.359 239969 INFO nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Using config drive
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.379 239969 DEBUG nova.storage.rbd_utils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 764b92e5-31ed-41bd-b90d-067a907d0206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.831 239969 DEBUG nova.compute.manager [req-aef670fd-0763-4ba0-b164-61da496b5dd0 req-c2b49017-1cc2-41fe-b701-1ded01aab49e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-changed-a23e88a0-e02f-48ba-8c26-a51810b9f2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.831 239969 DEBUG nova.compute.manager [req-aef670fd-0763-4ba0-b164-61da496b5dd0 req-c2b49017-1cc2-41fe-b701-1ded01aab49e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Refreshing instance network info cache due to event network-changed-a23e88a0-e02f-48ba-8c26-a51810b9f2ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.832 239969 DEBUG oslo_concurrency.lockutils [req-aef670fd-0763-4ba0-b164-61da496b5dd0 req-c2b49017-1cc2-41fe-b701-1ded01aab49e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-6c804bac-7ee8-45d4-821c-535fac124d08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.832 239969 DEBUG oslo_concurrency.lockutils [req-aef670fd-0763-4ba0-b164-61da496b5dd0 req-c2b49017-1cc2-41fe-b701-1ded01aab49e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-6c804bac-7ee8-45d4-821c-535fac124d08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.832 239969 DEBUG nova.network.neutron [req-aef670fd-0763-4ba0-b164-61da496b5dd0 req-c2b49017-1cc2-41fe-b701-1ded01aab49e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Refreshing network info cache for port a23e88a0-e02f-48ba-8c26-a51810b9f2ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.877 239969 INFO nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Creating config drive at /var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206/disk.config
Jan 26 16:00:57 compute-0 nova_compute[239965]: 2026-01-26 16:00:57.882 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0fi3t621 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 227 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 174 op/s
Jan 26 16:00:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/339097968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.002 239969 DEBUG nova.network.neutron [req-6e1117c5-c3e1-43ef-ae79-24482afa1474 req-df85409a-2db2-4063-8eb2-6a8beb5c103a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Updated VIF entry in instance network info cache for port e12546e3-3fac-40e3-88c0-ed9ad0bc3880. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.004 239969 DEBUG nova.network.neutron [req-6e1117c5-c3e1-43ef-ae79-24482afa1474 req-df85409a-2db2-4063-8eb2-6a8beb5c103a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Updating instance_info_cache with network_info: [{"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.019 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0fi3t621" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.046 239969 DEBUG nova.storage.rbd_utils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 764b92e5-31ed-41bd-b90d-067a907d0206_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.050 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206/disk.config 764b92e5-31ed-41bd-b90d-067a907d0206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.088 239969 DEBUG nova.network.neutron [req-aef670fd-0763-4ba0-b164-61da496b5dd0 req-c2b49017-1cc2-41fe-b701-1ded01aab49e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.095 239969 DEBUG oslo_concurrency.lockutils [req-6e1117c5-c3e1-43ef-ae79-24482afa1474 req-df85409a-2db2-4063-8eb2-6a8beb5c103a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-764b92e5-31ed-41bd-b90d-067a907d0206" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.330 239969 DEBUG oslo_concurrency.processutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206/disk.config 764b92e5-31ed-41bd-b90d-067a907d0206_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.331 239969 INFO nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Deleting local config drive /var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206/disk.config because it was imported into RBD.
Jan 26 16:00:58 compute-0 kernel: tape12546e3-3f: entered promiscuous mode
Jan 26 16:00:58 compute-0 NetworkManager[48954]: <info>  [1769443258.3904] manager: (tape12546e3-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Jan 26 16:00:58 compute-0 ovn_controller[146046]: 2026-01-26T16:00:58Z|00532|binding|INFO|Claiming lport e12546e3-3fac-40e3-88c0-ed9ad0bc3880 for this chassis.
Jan 26 16:00:58 compute-0 ovn_controller[146046]: 2026-01-26T16:00:58Z|00533|binding|INFO|e12546e3-3fac-40e3-88c0-ed9ad0bc3880: Claiming fa:16:3e:a7:8d:b0 10.100.0.14
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.392 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.400 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:8d:b0 10.100.0.14'], port_security=['fa:16:3e:a7:8d:b0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '764b92e5-31ed-41bd-b90d-067a907d0206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e12546e3-3fac-40e3-88c0-ed9ad0bc3880) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.401 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e12546e3-3fac-40e3-88c0-ed9ad0bc3880 in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 bound to our chassis
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.403 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 16:00:58 compute-0 ovn_controller[146046]: 2026-01-26T16:00:58Z|00534|binding|INFO|Setting lport e12546e3-3fac-40e3-88c0-ed9ad0bc3880 ovn-installed in OVS
Jan 26 16:00:58 compute-0 ovn_controller[146046]: 2026-01-26T16:00:58Z|00535|binding|INFO|Setting lport e12546e3-3fac-40e3-88c0-ed9ad0bc3880 up in Southbound
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.415 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.417 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.417 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7858ce-b565-4376-b433-2be320d7c459]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.419 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00eb7549-d1 in ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.422 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00eb7549-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.422 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a1284f9-08ec-46df-a31e-f31479563984]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.423 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4fce16b4-1691-4e8c-802d-f91b17ffef2e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.435 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[2ae73e5e-d6f5-430e-9aa1-7aa21ec18247]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 systemd-udevd[293762]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:58 compute-0 systemd-machined[208061]: New machine qemu-70-instance-0000003f.
Jan 26 16:00:58 compute-0 NetworkManager[48954]: <info>  [1769443258.4525] device (tape12546e3-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:00:58 compute-0 NetworkManager[48954]: <info>  [1769443258.4531] device (tape12546e3-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:00:58 compute-0 systemd[1]: Started Virtual Machine qemu-70-instance-0000003f.
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.466 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[90176915-e4c9-4ee8-8dfa-05da813d95f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.502 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc007f3-c40f-49d6-9c8a-fc87bece3c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 NetworkManager[48954]: <info>  [1769443258.5087] manager: (tap00eb7549-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/242)
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.507 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[62bd4483-c43e-4ab8-9dfb-69addeda4f25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 systemd-udevd[293766]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.563 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b086c602-cb7b-4a25-9ff2-b5f05081eca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.568 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8009c6-58f8-4c12-953d-614896ee3317]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 NetworkManager[48954]: <info>  [1769443258.5922] device (tap00eb7549-d0): carrier: link connected
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.598 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e7b0474c-0461-4536-80fd-b5c95a9eb3c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.618 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a5494e71-5441-4418-a23f-9fb18ff4aa3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474939, 'reachable_time': 38526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293794, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.636 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3bc32d76-7bbd-40b3-a12c-e3d5af6766fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:aa8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 474939, 'tstamp': 474939}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293795, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.652 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a60dcb11-da7e-4acf-b01e-929a230ffe63]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 163], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474939, 'reachable_time': 38526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293796, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.684 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[98be4eea-55ca-4281-a416-0de4594c7e02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.746 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1e4fff-f124-4815-a8b9-58177de0a2f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.747 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.747 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.748 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00eb7549-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:58 compute-0 kernel: tap00eb7549-d0: entered promiscuous mode
Jan 26 16:00:58 compute-0 NetworkManager[48954]: <info>  [1769443258.7524] manager: (tap00eb7549-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/243)
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.752 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.753 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00eb7549-d0, col_values=(('external_ids', {'iface-id': 'c26451f4-ab48-4e8d-b25b-9e4988573b7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.754 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:58 compute-0 ovn_controller[146046]: 2026-01-26T16:00:58Z|00536|binding|INFO|Releasing lport c26451f4-ab48-4e8d-b25b-9e4988573b7d from this chassis (sb_readonly=0)
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.771 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.771 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.772 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8cae36a3-e90c-4467-a9f0-412ed376fdb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.773 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:00:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:58.773 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'env', 'PROCESS_TAG=haproxy-00eb7549-d24b-4657-b244-7664c8a34b20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00eb7549-d24b-4657-b244-7664c8a34b20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.825 239969 DEBUG nova.network.neutron [req-aef670fd-0763-4ba0-b164-61da496b5dd0 req-c2b49017-1cc2-41fe-b701-1ded01aab49e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.844 239969 DEBUG oslo_concurrency.lockutils [req-aef670fd-0763-4ba0-b164-61da496b5dd0 req-c2b49017-1cc2-41fe-b701-1ded01aab49e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-6c804bac-7ee8-45d4-821c-535fac124d08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.875 239969 DEBUG nova.network.neutron [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Successfully updated port: 769188ca-56ed-49b7-8944-e9229697b5b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.896 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "refresh_cache-6c804bac-7ee8-45d4-821c-535fac124d08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.896 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquired lock "refresh_cache-6c804bac-7ee8-45d4-821c-535fac124d08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:00:58 compute-0 nova_compute[239965]: 2026-01-26 16:00:58.897 239969 DEBUG nova.network.neutron [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:00:59 compute-0 ceph-mon[75140]: pgmap v1441: 305 pgs: 305 active+clean; 227 MiB data, 587 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 174 op/s
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.076 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443259.0754986, 764b92e5-31ed-41bd-b90d-067a907d0206 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.076 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] VM Started (Lifecycle Event)
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.091 239969 DEBUG nova.network.neutron [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.097 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.101 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443259.0766563, 764b92e5-31ed-41bd-b90d-067a907d0206 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.102 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] VM Paused (Lifecycle Event)
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.124 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.127 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.145 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:00:59 compute-0 podman[293870]: 2026-01-26 16:00:59.170206781 +0000 UTC m=+0.051735260 container create f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:00:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:59.223 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:00:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:59.224 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:00:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:00:59.225 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:00:59 compute-0 podman[293870]: 2026-01-26 16:00:59.14160479 +0000 UTC m=+0.023133289 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:00:59 compute-0 systemd[1]: Started libpod-conmon-f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3.scope.
Jan 26 16:00:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:00:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a859baf1ad07b15b61ad3680dad20539e30c9c74453c3895fc6a6c8d53d98027/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:00:59 compute-0 podman[293870]: 2026-01-26 16:00:59.311416202 +0000 UTC m=+0.192944751 container init f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:00:59 compute-0 podman[293870]: 2026-01-26 16:00:59.317923822 +0000 UTC m=+0.199452301 container start f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:00:59 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[293886]: [NOTICE]   (293890) : New worker (293892) forked
Jan 26 16:00:59 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[293886]: [NOTICE]   (293890) : Loading success.
Jan 26 16:00:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 227 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 150 op/s
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.934 239969 DEBUG nova.compute.manager [req-ee9cdd0e-5f2b-4044-9243-dd68049d34e9 req-90b3bba3-4075-4429-9567-ffe0ade493ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-changed-769188ca-56ed-49b7-8944-e9229697b5b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.935 239969 DEBUG nova.compute.manager [req-ee9cdd0e-5f2b-4044-9243-dd68049d34e9 req-90b3bba3-4075-4429-9567-ffe0ade493ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Refreshing instance network info cache due to event network-changed-769188ca-56ed-49b7-8944-e9229697b5b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:00:59 compute-0 nova_compute[239965]: 2026-01-26 16:00:59.935 239969 DEBUG oslo_concurrency.lockutils [req-ee9cdd0e-5f2b-4044-9243-dd68049d34e9 req-90b3bba3-4075-4429-9567-ffe0ade493ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-6c804bac-7ee8-45d4-821c-535fac124d08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:00 compute-0 nova_compute[239965]: 2026-01-26 16:01:00.029 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:01:00 compute-0 nova_compute[239965]: 2026-01-26 16:01:00.730 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:01 compute-0 ceph-mon[75140]: pgmap v1442: 305 pgs: 305 active+clean; 227 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.3 MiB/s wr, 150 op/s
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.147 239969 DEBUG nova.network.neutron [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Updating instance_info_cache with network_info: [{"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.171 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Releasing lock "refresh_cache-6c804bac-7ee8-45d4-821c-535fac124d08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.172 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Instance network_info: |[{"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.172 239969 DEBUG oslo_concurrency.lockutils [req-ee9cdd0e-5f2b-4044-9243-dd68049d34e9 req-90b3bba3-4075-4429-9567-ffe0ade493ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-6c804bac-7ee8-45d4-821c-535fac124d08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.173 239969 DEBUG nova.network.neutron [req-ee9cdd0e-5f2b-4044-9243-dd68049d34e9 req-90b3bba3-4075-4429-9567-ffe0ade493ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Refreshing network info cache for port 769188ca-56ed-49b7-8944-e9229697b5b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.176 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Start _get_guest_xml network_info=[{"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.181 239969 WARNING nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.187 239969 DEBUG nova.virt.libvirt.host [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.189 239969 DEBUG nova.virt.libvirt.host [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.196 239969 DEBUG nova.virt.libvirt.host [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.196 239969 DEBUG nova.virt.libvirt.host [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.197 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.197 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.198 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.198 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.198 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.198 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.198 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.199 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.199 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.199 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.199 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.200 239969 DEBUG nova.virt.hardware [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.202 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:01 compute-0 podman[293902]: 2026-01-26 16:01:01.389424834 +0000 UTC m=+0.062559354 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:01:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155184874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.788 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.817 239969 DEBUG nova.storage.rbd_utils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 6c804bac-7ee8-45d4-821c-535fac124d08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:01 compute-0 nova_compute[239965]: 2026-01-26 16:01:01.821 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:01 compute-0 CROND[293961]: (root) CMD (run-parts /etc/cron.hourly)
Jan 26 16:01:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 227 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 143 op/s
Jan 26 16:01:01 compute-0 run-parts[293964]: (/etc/cron.hourly) starting 0anacron
Jan 26 16:01:01 compute-0 run-parts[293970]: (/etc/cron.hourly) finished 0anacron
Jan 26 16:01:01 compute-0 CROND[293960]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 26 16:01:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3155184874' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797283014' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.487 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.488 239969 DEBUG nova.virt.libvirt.vif [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1298183690',display_name='tempest-ServersTestMultiNic-server-1298183690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1298183690',id=64,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-a33k0b3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:53Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=6c804bac-7ee8-45d4-821c-535fac124d08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.489 239969 DEBUG nova.network.os_vif_util [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.490 239969 DEBUG nova.network.os_vif_util [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fd:6f,bridge_name='br-int',has_traffic_filtering=True,id=a23e88a0-e02f-48ba-8c26-a51810b9f2ed,network=Network(05612a26-4bfc-4999-b9cf-ae81a7d0e6a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa23e88a0-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.491 239969 DEBUG nova.virt.libvirt.vif [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1298183690',display_name='tempest-ServersTestMultiNic-server-1298183690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1298183690',id=64,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-a33k0b3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:53Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=6c804bac-7ee8-45d4-821c-535fac124d08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.491 239969 DEBUG nova.network.os_vif_util [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.492 239969 DEBUG nova.network.os_vif_util [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:e2:27,bridge_name='br-int',has_traffic_filtering=True,id=769188ca-56ed-49b7-8944-e9229697b5b3,network=Network(3f233816-7cc9-4554-a51f-5f05891ca43f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap769188ca-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.496 239969 DEBUG nova.objects.instance [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lazy-loading 'pci_devices' on Instance uuid 6c804bac-7ee8-45d4-821c-535fac124d08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.518 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <uuid>6c804bac-7ee8-45d4-821c-535fac124d08</uuid>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <name>instance-00000040</name>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestMultiNic-server-1298183690</nova:name>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:01:01</nova:creationTime>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <nova:user uuid="9cc0fb8436b44d0499a7d55e0a7e7585">tempest-ServersTestMultiNic-1401749103-project-member</nova:user>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <nova:project uuid="3f9fff751f0a4a4aba88032259e9628c">tempest-ServersTestMultiNic-1401749103</nova:project>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <nova:port uuid="a23e88a0-e02f-48ba-8c26-a51810b9f2ed">
Jan 26 16:01:02 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.30" ipVersion="4"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <nova:port uuid="769188ca-56ed-49b7-8944-e9229697b5b3">
Jan 26 16:01:02 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.1.193" ipVersion="4"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <system>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <entry name="serial">6c804bac-7ee8-45d4-821c-535fac124d08</entry>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <entry name="uuid">6c804bac-7ee8-45d4-821c-535fac124d08</entry>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </system>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <os>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   </os>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <features>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   </features>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/6c804bac-7ee8-45d4-821c-535fac124d08_disk">
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/6c804bac-7ee8-45d4-821c-535fac124d08_disk.config">
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:6f:fd:6f"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <target dev="tapa23e88a0-e0"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:c8:e2:27"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <target dev="tap769188ca-56"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08/console.log" append="off"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <video>
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </video>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:01:02 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:01:02 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:01:02 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:01:02 compute-0 nova_compute[239965]: </domain>
Jan 26 16:01:02 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.520 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Preparing to wait for external event network-vif-plugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.521 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.521 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.522 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.522 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Preparing to wait for external event network-vif-plugged-769188ca-56ed-49b7-8944-e9229697b5b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.522 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.523 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.523 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.524 239969 DEBUG nova.virt.libvirt.vif [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1298183690',display_name='tempest-ServersTestMultiNic-server-1298183690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1298183690',id=64,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-a33k0b3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:53Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=6c804bac-7ee8-45d4-821c-535fac124d08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.524 239969 DEBUG nova.network.os_vif_util [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.525 239969 DEBUG nova.network.os_vif_util [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fd:6f,bridge_name='br-int',has_traffic_filtering=True,id=a23e88a0-e02f-48ba-8c26-a51810b9f2ed,network=Network(05612a26-4bfc-4999-b9cf-ae81a7d0e6a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa23e88a0-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.526 239969 DEBUG os_vif [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fd:6f,bridge_name='br-int',has_traffic_filtering=True,id=a23e88a0-e02f-48ba-8c26-a51810b9f2ed,network=Network(05612a26-4bfc-4999-b9cf-ae81a7d0e6a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa23e88a0-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.526 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.527 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.527 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.532 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.532 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa23e88a0-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.533 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa23e88a0-e0, col_values=(('external_ids', {'iface-id': 'a23e88a0-e02f-48ba-8c26-a51810b9f2ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:fd:6f', 'vm-uuid': '6c804bac-7ee8-45d4-821c-535fac124d08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:02 compute-0 NetworkManager[48954]: <info>  [1769443262.5359] manager: (tapa23e88a0-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/244)
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.537 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.540 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.548 239969 INFO os_vif [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fd:6f,bridge_name='br-int',has_traffic_filtering=True,id=a23e88a0-e02f-48ba-8c26-a51810b9f2ed,network=Network(05612a26-4bfc-4999-b9cf-ae81a7d0e6a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa23e88a0-e0')
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.549 239969 DEBUG nova.virt.libvirt.vif [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:00:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1298183690',display_name='tempest-ServersTestMultiNic-server-1298183690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1298183690',id=64,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-a33k0b3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:00:53Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=6c804bac-7ee8-45d4-821c-535fac124d08,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.549 239969 DEBUG nova.network.os_vif_util [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.550 239969 DEBUG nova.network.os_vif_util [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:e2:27,bridge_name='br-int',has_traffic_filtering=True,id=769188ca-56ed-49b7-8944-e9229697b5b3,network=Network(3f233816-7cc9-4554-a51f-5f05891ca43f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap769188ca-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.550 239969 DEBUG os_vif [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:e2:27,bridge_name='br-int',has_traffic_filtering=True,id=769188ca-56ed-49b7-8944-e9229697b5b3,network=Network(3f233816-7cc9-4554-a51f-5f05891ca43f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap769188ca-56') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.551 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.551 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.551 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.553 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.553 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap769188ca-56, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.554 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap769188ca-56, col_values=(('external_ids', {'iface-id': '769188ca-56ed-49b7-8944-e9229697b5b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:e2:27', 'vm-uuid': '6c804bac-7ee8-45d4-821c-535fac124d08'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.555 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:02 compute-0 NetworkManager[48954]: <info>  [1769443262.5565] manager: (tap769188ca-56): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.557 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.562 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.563 239969 INFO os_vif [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:e2:27,bridge_name='br-int',has_traffic_filtering=True,id=769188ca-56ed-49b7-8944-e9229697b5b3,network=Network(3f233816-7cc9-4554-a51f-5f05891ca43f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap769188ca-56')
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.648 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.648 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.649 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] No VIF found with MAC fa:16:3e:6f:fd:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.649 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] No VIF found with MAC fa:16:3e:c8:e2:27, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.649 239969 INFO nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Using config drive
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.673 239969 DEBUG nova.storage.rbd_utils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 6c804bac-7ee8-45d4-821c-535fac124d08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.680 239969 DEBUG nova.network.neutron [req-ee9cdd0e-5f2b-4044-9243-dd68049d34e9 req-90b3bba3-4075-4429-9567-ffe0ade493ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Updated VIF entry in instance network info cache for port 769188ca-56ed-49b7-8944-e9229697b5b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.681 239969 DEBUG nova.network.neutron [req-ee9cdd0e-5f2b-4044-9243-dd68049d34e9 req-90b3bba3-4075-4429-9567-ffe0ade493ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Updating instance_info_cache with network_info: [{"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:02 compute-0 nova_compute[239965]: 2026-01-26 16:01:02.711 239969 DEBUG oslo_concurrency.lockutils [req-ee9cdd0e-5f2b-4044-9243-dd68049d34e9 req-90b3bba3-4075-4429-9567-ffe0ade493ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-6c804bac-7ee8-45d4-821c-535fac124d08" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:02 compute-0 podman[293998]: 2026-01-26 16:01:02.721995412 +0000 UTC m=+0.111332430 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.052 239969 INFO nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Creating config drive at /var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08/disk.config
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.059 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ujw5iyr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:03 compute-0 ceph-mon[75140]: pgmap v1443: 305 pgs: 305 active+clean; 227 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 143 op/s
Jan 26 16:01:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2797283014' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.209 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ujw5iyr" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.242 239969 DEBUG nova.storage.rbd_utils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] rbd image 6c804bac-7ee8-45d4-821c-535fac124d08_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.246 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08/disk.config 6c804bac-7ee8-45d4-821c-535fac124d08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.444 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.445 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.463 239969 DEBUG nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.539 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.540 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.547 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.547 239969 INFO nova.compute.claims [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:01:03 compute-0 nova_compute[239965]: 2026-01-26 16:01:03.746 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 227 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.190 239969 DEBUG oslo_concurrency.processutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08/disk.config 6c804bac-7ee8-45d4-821c-535fac124d08_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.944s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.192 239969 INFO nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Deleting local config drive /var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08/disk.config because it was imported into RBD.
Jan 26 16:01:04 compute-0 ceph-mon[75140]: pgmap v1444: 305 pgs: 305 active+clean; 227 MiB data, 601 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 127 op/s
Jan 26 16:01:04 compute-0 NetworkManager[48954]: <info>  [1769443264.2509] manager: (tapa23e88a0-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Jan 26 16:01:04 compute-0 kernel: tapa23e88a0-e0: entered promiscuous mode
Jan 26 16:01:04 compute-0 ovn_controller[146046]: 2026-01-26T16:01:04Z|00537|binding|INFO|Claiming lport a23e88a0-e02f-48ba-8c26-a51810b9f2ed for this chassis.
Jan 26 16:01:04 compute-0 ovn_controller[146046]: 2026-01-26T16:01:04Z|00538|binding|INFO|a23e88a0-e02f-48ba-8c26-a51810b9f2ed: Claiming fa:16:3e:6f:fd:6f 10.100.0.30
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.255 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.266 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:fd:6f 10.100.0.30'], port_security=['fa:16:3e:6f:fd:6f 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/24', 'neutron:device_id': '6c804bac-7ee8-45d4-821c-535fac124d08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e6d2521-36b2-4516-be03-ae71f0709642, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a23e88a0-e02f-48ba-8c26-a51810b9f2ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.267 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a23e88a0-e02f-48ba-8c26-a51810b9f2ed in datapath 05612a26-4bfc-4999-b9cf-ae81a7d0e6a9 bound to our chassis
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.271 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 05612a26-4bfc-4999-b9cf-ae81a7d0e6a9
Jan 26 16:01:04 compute-0 NetworkManager[48954]: <info>  [1769443264.2744] manager: (tap769188ca-56): new Tun device (/org/freedesktop/NetworkManager/Devices/247)
Jan 26 16:01:04 compute-0 sshd-session[294041]: Invalid user admin from 176.120.22.13 port 20512
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.291 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ae8c49be-c787-4492-aa98-b701476f163d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.293 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap05612a26-41 in ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:01:04 compute-0 systemd-udevd[294120]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:04 compute-0 systemd-udevd[294121]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.298 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap05612a26-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.298 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c70172f5-c847-4b08-b099-1e34f0a3bd70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.300 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[78e913fa-328d-4a2a-9f23-44b92c9b2361]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 NetworkManager[48954]: <info>  [1769443264.3096] device (tapa23e88a0-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:01:04 compute-0 NetworkManager[48954]: <info>  [1769443264.3106] device (tapa23e88a0-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.311 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d91f5105-c0b1-4831-8481-73f017a54bfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 kernel: tap769188ca-56: entered promiscuous mode
Jan 26 16:01:04 compute-0 NetworkManager[48954]: <info>  [1769443264.3186] device (tap769188ca-56): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.318 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:04 compute-0 ovn_controller[146046]: 2026-01-26T16:01:04Z|00539|binding|INFO|Claiming lport 769188ca-56ed-49b7-8944-e9229697b5b3 for this chassis.
Jan 26 16:01:04 compute-0 ovn_controller[146046]: 2026-01-26T16:01:04Z|00540|binding|INFO|769188ca-56ed-49b7-8944-e9229697b5b3: Claiming fa:16:3e:c8:e2:27 10.100.1.193
Jan 26 16:01:04 compute-0 NetworkManager[48954]: <info>  [1769443264.3208] device (tap769188ca-56): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:01:04 compute-0 systemd-machined[208061]: New machine qemu-71-instance-00000040.
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:04 compute-0 ovn_controller[146046]: 2026-01-26T16:01:04Z|00541|binding|INFO|Setting lport a23e88a0-e02f-48ba-8c26-a51810b9f2ed ovn-installed in OVS
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:04 compute-0 ovn_controller[146046]: 2026-01-26T16:01:04Z|00542|binding|INFO|Setting lport a23e88a0-e02f-48ba-8c26-a51810b9f2ed up in Southbound
Jan 26 16:01:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.332 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:e2:27 10.100.1.193'], port_security=['fa:16:3e:c8:e2:27 10.100.1.193'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.193/24', 'neutron:device_id': '6c804bac-7ee8-45d4-821c-535fac124d08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f233816-7cc9-4554-a51f-5f05891ca43f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f065695-583f-4b9a-b1e8-610dffd16ba4, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=769188ca-56ed-49b7-8944-e9229697b5b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609619046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.338 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e16f684b-c273-4758-8c92-203ceb2f2bfd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 systemd[1]: Started Virtual Machine qemu-71-instance-00000040.
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.358 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:04 compute-0 ovn_controller[146046]: 2026-01-26T16:01:04Z|00543|binding|INFO|Setting lport 769188ca-56ed-49b7-8944-e9229697b5b3 ovn-installed in OVS
Jan 26 16:01:04 compute-0 ovn_controller[146046]: 2026-01-26T16:01:04Z|00544|binding|INFO|Setting lport 769188ca-56ed-49b7-8944-e9229697b5b3 up in Southbound
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.375 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a53600-04ad-43af-8b06-0e005dd7d30a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.378 239969 DEBUG nova.compute.provider_tree [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:04 compute-0 NetworkManager[48954]: <info>  [1769443264.3836] manager: (tap05612a26-40): new Veth device (/org/freedesktop/NetworkManager/Devices/248)
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.384 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.385 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe2dd0d-4f25-48a7-b9b4-e90cc6f96645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.398 239969 DEBUG nova.scheduler.client.report [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.419 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[669f0fdb-dc42-4be1-88df-f68211c3e61c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.423 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3b51fa27-e31f-45a8-ba21-c289535dde90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.423 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.424 239969 DEBUG nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:01:04 compute-0 NetworkManager[48954]: <info>  [1769443264.4506] device (tap05612a26-40): carrier: link connected
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.455 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[91601e26-912f-4415-a7b7-a67e856794a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.468 239969 DEBUG nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.468 239969 DEBUG nova.network.neutron [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.474 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c256090c-017c-4996-8254-87a2ce499dae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05612a26-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:e6:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475525, 'reachable_time': 43654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294157, 'error': None, 'target': 'ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.491 239969 INFO nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.492 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a639359e-4d26-40b0-95c0-28c37e740bab]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb7:e623'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475525, 'tstamp': 475525}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294158, 'error': None, 'target': 'ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.507 239969 DEBUG nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.516 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[94f12d00-3d86-4b3f-b12e-d624ae802fd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap05612a26-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b7:e6:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 166], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475525, 'reachable_time': 43654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294159, 'error': None, 'target': 'ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.548 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[58edcca9-98fb-45cf-ab9e-574d35de540b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.584 239969 DEBUG nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.586 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.586 239969 INFO nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Creating image(s)
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.619 239969 DEBUG nova.storage.rbd_utils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.620 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[87b6b2b6-60d7-4684-ae91-9b857c60be47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.622 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05612a26-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.622 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.623 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05612a26-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:04 compute-0 NetworkManager[48954]: <info>  [1769443264.6257] manager: (tap05612a26-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Jan 26 16:01:04 compute-0 kernel: tap05612a26-40: entered promiscuous mode
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.628 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap05612a26-40, col_values=(('external_ids', {'iface-id': '88014fb8-c2b1-483d-b753-bf999d19c34d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:04 compute-0 ovn_controller[146046]: 2026-01-26T16:01:04Z|00545|binding|INFO|Releasing lport 88014fb8-c2b1-483d-b753-bf999d19c34d from this chassis (sb_readonly=0)
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.651 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/05612a26-4bfc-4999-b9cf-ae81a7d0e6a9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/05612a26-4bfc-4999-b9cf-ae81a7d0e6a9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.652 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3343974e-170e-402b-a8dd-0240d25228b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.654 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/05612a26-4bfc-4999-b9cf-ae81a7d0e6a9.pid.haproxy
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 05612a26-4bfc-4999-b9cf-ae81a7d0e6a9
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:01:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:04.656 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9', 'env', 'PROCESS_TAG=haproxy-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/05612a26-4bfc-4999-b9cf-ae81a7d0e6a9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:01:04 compute-0 sshd-session[294041]: Connection reset by invalid user admin 176.120.22.13 port 20512 [preauth]
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.679 239969 DEBUG nova.storage.rbd_utils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.717 239969 DEBUG nova.storage.rbd_utils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.725 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.761 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.808 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.808 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.809 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.809 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.831 239969 DEBUG nova.storage.rbd_utils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.835 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.923 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443264.9225166, 6c804bac-7ee8-45d4-821c-535fac124d08 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.923 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] VM Started (Lifecycle Event)
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.942 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.946 239969 DEBUG nova.policy [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b5ecf8e47344a3d9fec2e1bf4244aae', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1e098d809d8b4d0a8747af49a85560a1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.953 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443264.9226816, 6c804bac-7ee8-45d4-821c-535fac124d08 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.953 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] VM Paused (Lifecycle Event)
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.974 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.980 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:04 compute-0 nova_compute[239965]: 2026-01-26 16:01:04.997 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:05 compute-0 podman[294328]: 2026-01-26 16:01:05.040515909 +0000 UTC m=+0.024653475 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:01:05 compute-0 ovn_controller[146046]: 2026-01-26T16:01:05Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:b9:39 10.100.0.10
Jan 26 16:01:05 compute-0 ovn_controller[146046]: 2026-01-26T16:01:05Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:b9:39 10.100.0.10
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.731 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.851 239969 DEBUG nova.compute.manager [req-2f4241f6-c87b-4960-a3c0-febf465c88a9 req-2e185444-b4e0-4e0f-b99c-b6c580447f84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-plugged-769188ca-56ed-49b7-8944-e9229697b5b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.851 239969 DEBUG oslo_concurrency.lockutils [req-2f4241f6-c87b-4960-a3c0-febf465c88a9 req-2e185444-b4e0-4e0f-b99c-b6c580447f84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.852 239969 DEBUG oslo_concurrency.lockutils [req-2f4241f6-c87b-4960-a3c0-febf465c88a9 req-2e185444-b4e0-4e0f-b99c-b6c580447f84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.852 239969 DEBUG oslo_concurrency.lockutils [req-2f4241f6-c87b-4960-a3c0-febf465c88a9 req-2e185444-b4e0-4e0f-b99c-b6c580447f84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.852 239969 DEBUG nova.compute.manager [req-2f4241f6-c87b-4960-a3c0-febf465c88a9 req-2e185444-b4e0-4e0f-b99c-b6c580447f84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Processing event network-vif-plugged-769188ca-56ed-49b7-8944-e9229697b5b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:01:05 compute-0 sshd-session[294260]: Connection reset by authenticating user root 176.120.22.13 port 20526 [preauth]
Jan 26 16:01:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 260 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.0 MiB/s wr, 174 op/s
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.945 239969 DEBUG nova.compute.manager [req-cf51a80e-dea4-4d5c-80d6-f132773d7ddd req-4192df3e-59ad-48be-9559-75ea477ed86e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Received event network-vif-plugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.946 239969 DEBUG oslo_concurrency.lockutils [req-cf51a80e-dea4-4d5c-80d6-f132773d7ddd req-4192df3e-59ad-48be-9559-75ea477ed86e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.946 239969 DEBUG oslo_concurrency.lockutils [req-cf51a80e-dea4-4d5c-80d6-f132773d7ddd req-4192df3e-59ad-48be-9559-75ea477ed86e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.946 239969 DEBUG oslo_concurrency.lockutils [req-cf51a80e-dea4-4d5c-80d6-f132773d7ddd req-4192df3e-59ad-48be-9559-75ea477ed86e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.946 239969 DEBUG nova.compute.manager [req-cf51a80e-dea4-4d5c-80d6-f132773d7ddd req-4192df3e-59ad-48be-9559-75ea477ed86e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Processing event network-vif-plugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.947 239969 DEBUG nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.951 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443265.9508119, 764b92e5-31ed-41bd-b90d-067a907d0206 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.951 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] VM Resumed (Lifecycle Event)
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.953 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.956 239969 INFO nova.virt.libvirt.driver [-] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Instance spawned successfully.
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.957 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.980 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.985 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.989 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.989 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.990 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.990 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.990 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:05 compute-0 nova_compute[239965]: 2026-01-26 16:01:05.990 239969 DEBUG nova.virt.libvirt.driver [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.021 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.056 239969 INFO nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Took 13.64 seconds to spawn the instance on the hypervisor.
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.057 239969 DEBUG nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.125 239969 INFO nova.compute.manager [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Took 14.86 seconds to build instance.
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.145 239969 DEBUG oslo_concurrency.lockutils [None req-7693130f-e0f9-40f1-a595-e5bcc214e70b 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/609619046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:06 compute-0 podman[294328]: 2026-01-26 16:01:06.465175865 +0000 UTC m=+1.449313411 container create d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.506 239969 DEBUG nova.network.neutron [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Successfully created port: f01280ac-7e71-4b0c-bad0-c519d134f076 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.510 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:06 compute-0 systemd[1]: Started libpod-conmon-d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2.scope.
Jan 26 16:01:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9809e085d4e40059d1858085f3830acd68f1c8f36a90e9741be14002559d852c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:06 compute-0 podman[294328]: 2026-01-26 16:01:06.557716924 +0000 UTC m=+1.541854500 container init d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:01:06 compute-0 podman[294328]: 2026-01-26 16:01:06.564166121 +0000 UTC m=+1.548303677 container start d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:01:06 compute-0 neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9[294352]: [NOTICE]   (294367) : New worker (294385) forked
Jan 26 16:01:06 compute-0 neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9[294352]: [NOTICE]   (294367) : Loading success.
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.616 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.617 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.626 239969 DEBUG nova.storage.rbd_utils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] resizing rbd image f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.651 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 769188ca-56ed-49b7-8944-e9229697b5b3 in datapath 3f233816-7cc9-4554-a51f-5f05891ca43f unbound from our chassis
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.653 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3f233816-7cc9-4554-a51f-5f05891ca43f
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.668 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a98001d0-926b-4be9-b5a6-fae33539cef9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.669 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3f233816-71 in ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.671 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3f233816-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.671 239969 DEBUG nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.671 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a17aad-62f2-43b3-95b5-429805b8c6d8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.674 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[21a90969-af9d-44b6-8757-a7b0e89801e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.692 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[b391908e-3660-4242-9ef1-17867a57ea12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.716 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eef29583-d6c1-4267-bfb8-c1e4507ae5e5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.765 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3644f49f-7306-44e7-933c-7a88d029bcc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.774 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7b54df59-4d52-4532-b451-220ed924fba9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 NetworkManager[48954]: <info>  [1769443266.7764] manager: (tap3f233816-70): new Veth device (/org/freedesktop/NetworkManager/Devices/250)
Jan 26 16:01:06 compute-0 systemd-udevd[294142]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.778 239969 DEBUG nova.objects.instance [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'migration_context' on Instance uuid f50eea8d-23f7-4c65-99fa-f919c99ed80d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.781 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.782 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.791 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.792 239969 INFO nova.compute.claims [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.796 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.797 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Ensure instance console log exists: /var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.797 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.799 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.799 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.817 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0e885d59-92e5-40a2-9a65-b598699ee04c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.820 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1e2aba-cd6b-44e1-9508-70634f9b9625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 NetworkManager[48954]: <info>  [1769443266.8567] device (tap3f233816-70): carrier: link connected
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.862 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f0690f-bd24-4e41-b225-46e1eb39e497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.879 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6694d3-f945-4df3-bdbd-3d0cc247e66a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f233816-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:ae:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475766, 'reachable_time': 24975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294442, 'error': None, 'target': 'ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.898 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6311a080-d20f-42a1-85e3-c2022b193dde]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:aedf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 475766, 'tstamp': 475766}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294443, 'error': None, 'target': 'ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.914 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[97187ca4-fc0f-42c9-af8a-14d2d8310748]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3f233816-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:ae:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 167], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475766, 'reachable_time': 24975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294444, 'error': None, 'target': 'ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:06 compute-0 nova_compute[239965]: 2026-01-26 16:01:06.945 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:06.955 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[abafc5bc-9f8a-41e7-b35a-4d2680fd0eb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:07.020 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[83a36ab0-3b66-49ee-8cd5-806ffa77912c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:07.021 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f233816-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:07.022 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:07.022 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f233816-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:07 compute-0 kernel: tap3f233816-70: entered promiscuous mode
Jan 26 16:01:07 compute-0 NetworkManager[48954]: <info>  [1769443267.0250] manager: (tap3f233816-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:07.028 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3f233816-70, col_values=(('external_ids', {'iface-id': '091448f7-3d79-4160-bd6b-9f3fc4e69914'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.028 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:07 compute-0 ovn_controller[146046]: 2026-01-26T16:01:07Z|00546|binding|INFO|Releasing lport 091448f7-3d79-4160-bd6b-9f3fc4e69914 from this chassis (sb_readonly=0)
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.050 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:07.051 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3f233816-7cc9-4554-a51f-5f05891ca43f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3f233816-7cc9-4554-a51f-5f05891ca43f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:07.052 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[68deafbe-ec27-45cc-9eef-a7f1aaa8a343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:07.053 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-3f233816-7cc9-4554-a51f-5f05891ca43f
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/3f233816-7cc9-4554-a51f-5f05891ca43f.pid.haproxy
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 3f233816-7cc9-4554-a51f-5f05891ca43f
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:01:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:07.055 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f', 'env', 'PROCESS_TAG=haproxy-3f233816-7cc9-4554-a51f-5f05891ca43f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3f233816-7cc9-4554-a51f-5f05891ca43f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:01:07 compute-0 sshd-session[294341]: Connection reset by authenticating user root 176.120.22.13 port 20540 [preauth]
Jan 26 16:01:07 compute-0 ceph-mon[75140]: pgmap v1445: 305 pgs: 305 active+clean; 260 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.0 MiB/s wr, 174 op/s
Jan 26 16:01:07 compute-0 podman[294496]: 2026-01-26 16:01:07.480463965 +0000 UTC m=+0.053073252 container create f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 16:01:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219085708' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:07 compute-0 systemd[1]: Started libpod-conmon-f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf.scope.
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.548 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:07 compute-0 podman[294496]: 2026-01-26 16:01:07.456294081 +0000 UTC m=+0.028903378 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:01:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.555 239969 DEBUG nova.compute.provider_tree [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.557 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb1a129b729ba162691e2960cdf012997c424364dfd440a261094e27648e3f3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:07 compute-0 podman[294496]: 2026-01-26 16:01:07.571269481 +0000 UTC m=+0.143878798 container init f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.573 239969 DEBUG nova.scheduler.client.report [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:07 compute-0 podman[294496]: 2026-01-26 16:01:07.577010232 +0000 UTC m=+0.149619519 container start f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.603 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.604 239969 DEBUG nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:01:07 compute-0 neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f[294513]: [NOTICE]   (294518) : New worker (294520) forked
Jan 26 16:01:07 compute-0 neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f[294513]: [NOTICE]   (294518) : Loading success.
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.663 239969 DEBUG nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.663 239969 DEBUG nova.network.neutron [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.683 239969 INFO nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.701 239969 DEBUG nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.743 239969 DEBUG oslo_concurrency.lockutils [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "764b92e5-31ed-41bd-b90d-067a907d0206" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.744 239969 DEBUG oslo_concurrency.lockutils [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.744 239969 DEBUG nova.compute.manager [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.748 239969 DEBUG nova.compute.manager [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.748 239969 DEBUG nova.objects.instance [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'flavor' on Instance uuid 764b92e5-31ed-41bd-b90d-067a907d0206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.780 239969 DEBUG nova.virt.libvirt.driver [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.801 239969 DEBUG nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.803 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.803 239969 INFO nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Creating image(s)
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.827 239969 DEBUG nova.storage.rbd_utils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:07 compute-0 sudo[294529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:01:07 compute-0 sudo[294529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.851 239969 DEBUG nova.storage.rbd_utils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:07 compute-0 sudo[294529]: pam_unix(sudo:session): session closed for user root
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.880 239969 DEBUG nova.storage.rbd_utils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.886 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:07 compute-0 sudo[294590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:01:07 compute-0 sudo[294590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:01:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 260 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.921 239969 DEBUG nova.policy [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2b5ecf8e47344a3d9fec2e1bf4244aae', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1e098d809d8b4d0a8747af49a85560a1', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.964 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.964 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.965 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.966 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.989 239969 DEBUG nova.storage.rbd_utils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:07 compute-0 nova_compute[239965]: 2026-01-26 16:01:07.992 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.054 239969 DEBUG nova.compute.manager [req-0bffca78-58e9-4cc5-aa68-57197e292ae0 req-29dcd1c8-e3fc-45c9-84b4-acb38b197938 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-plugged-769188ca-56ed-49b7-8944-e9229697b5b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.055 239969 DEBUG oslo_concurrency.lockutils [req-0bffca78-58e9-4cc5-aa68-57197e292ae0 req-29dcd1c8-e3fc-45c9-84b4-acb38b197938 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.056 239969 DEBUG oslo_concurrency.lockutils [req-0bffca78-58e9-4cc5-aa68-57197e292ae0 req-29dcd1c8-e3fc-45c9-84b4-acb38b197938 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.057 239969 DEBUG oslo_concurrency.lockutils [req-0bffca78-58e9-4cc5-aa68-57197e292ae0 req-29dcd1c8-e3fc-45c9-84b4-acb38b197938 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.057 239969 DEBUG nova.compute.manager [req-0bffca78-58e9-4cc5-aa68-57197e292ae0 req-29dcd1c8-e3fc-45c9-84b4-acb38b197938 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] No event matching network-vif-plugged-769188ca-56ed-49b7-8944-e9229697b5b3 in dict_keys([('network-vif-plugged', 'a23e88a0-e02f-48ba-8c26-a51810b9f2ed')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.057 239969 WARNING nova.compute.manager [req-0bffca78-58e9-4cc5-aa68-57197e292ae0 req-29dcd1c8-e3fc-45c9-84b4-acb38b197938 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received unexpected event network-vif-plugged-769188ca-56ed-49b7-8944-e9229697b5b3 for instance with vm_state building and task_state spawning.
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.116 239969 DEBUG nova.compute.manager [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Received event network-vif-plugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.118 239969 DEBUG oslo_concurrency.lockutils [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.119 239969 DEBUG oslo_concurrency.lockutils [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.119 239969 DEBUG oslo_concurrency.lockutils [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.119 239969 DEBUG nova.compute.manager [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] No waiting events found dispatching network-vif-plugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.120 239969 WARNING nova.compute.manager [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Received unexpected event network-vif-plugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 for instance with vm_state active and task_state powering-off.
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.120 239969 DEBUG nova.compute.manager [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-plugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.120 239969 DEBUG oslo_concurrency.lockutils [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.124 239969 DEBUG oslo_concurrency.lockutils [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.125 239969 DEBUG oslo_concurrency.lockutils [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.126 239969 DEBUG nova.compute.manager [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Processing event network-vif-plugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.126 239969 DEBUG nova.compute.manager [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-plugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.127 239969 DEBUG oslo_concurrency.lockutils [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.128 239969 DEBUG oslo_concurrency.lockutils [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.128 239969 DEBUG oslo_concurrency.lockutils [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.128 239969 DEBUG nova.compute.manager [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] No waiting events found dispatching network-vif-plugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.129 239969 WARNING nova.compute.manager [req-0b3937cd-740f-4da9-b870-ecb5b9e48ea4 req-2c8989b8-d6df-4618-8a12-865f38cca773 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received unexpected event network-vif-plugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed for instance with vm_state building and task_state spawning.
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.131 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.140 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.141 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443268.1391072, 6c804bac-7ee8-45d4-821c-535fac124d08 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.142 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] VM Resumed (Lifecycle Event)
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.154 239969 INFO nova.virt.libvirt.driver [-] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Instance spawned successfully.
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.154 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.160 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.176 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.185 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.185 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.186 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.186 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.186 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.187 239969 DEBUG nova.virt.libvirt.driver [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.226 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.254 239969 INFO nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Took 14.68 seconds to spawn the instance on the hypervisor.
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.255 239969 DEBUG nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.273 239969 DEBUG nova.network.neutron [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Successfully updated port: f01280ac-7e71-4b0c-bad0-c519d134f076 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.289 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "refresh_cache-f50eea8d-23f7-4c65-99fa-f919c99ed80d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.290 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquired lock "refresh_cache-f50eea8d-23f7-4c65-99fa-f919c99ed80d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.290 239969 DEBUG nova.network.neutron [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.313 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.353 239969 INFO nova.compute.manager [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Took 16.07 seconds to build instance.
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.389 239969 DEBUG oslo_concurrency.lockutils [None req-fadf4b52-f330-44c3-91b3-242027edeecc 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.401 239969 DEBUG nova.storage.rbd_utils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] resizing rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:01:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1219085708' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:08 compute-0 ceph-mon[75140]: pgmap v1446: 305 pgs: 305 active+clean; 260 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 2.4 MiB/s wr, 59 op/s
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.510 239969 DEBUG nova.objects.instance [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.522 239969 DEBUG nova.network.neutron [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.536 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.537 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Ensure instance console log exists: /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.537 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.537 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.538 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:08 compute-0 nova_compute[239965]: 2026-01-26 16:01:08.598 239969 DEBUG nova.network.neutron [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Successfully created port: 819adf29-2d57-4e5d-81f7-041a9ac00baa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:01:08 compute-0 sudo[294590]: pam_unix(sudo:session): session closed for user root
Jan 26 16:01:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:01:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:01:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:01:08 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:01:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:01:08 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:01:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:01:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:01:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:01:08 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:01:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:01:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:01:08 compute-0 sudo[294775]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:01:08 compute-0 sudo[294775]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:01:08 compute-0 sudo[294775]: pam_unix(sudo:session): session closed for user root
Jan 26 16:01:08 compute-0 sshd-session[294490]: Connection reset by authenticating user root 176.120.22.13 port 20548 [preauth]
Jan 26 16:01:08 compute-0 sudo[294800]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:01:08 compute-0 sudo[294800]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:01:09 compute-0 podman[294836]: 2026-01-26 16:01:09.150767682 +0000 UTC m=+0.050070909 container create 5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 26 16:01:09 compute-0 systemd[1]: Started libpod-conmon-5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f.scope.
Jan 26 16:01:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:09 compute-0 podman[294836]: 2026-01-26 16:01:09.132121244 +0000 UTC m=+0.031424501 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:01:09 compute-0 podman[294836]: 2026-01-26 16:01:09.252640259 +0000 UTC m=+0.151943506 container init 5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_sinoussi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:01:09 compute-0 podman[294836]: 2026-01-26 16:01:09.263095215 +0000 UTC m=+0.162398442 container start 5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:01:09 compute-0 podman[294836]: 2026-01-26 16:01:09.26692073 +0000 UTC m=+0.166223957 container attach 5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:01:09 compute-0 systemd[1]: libpod-5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f.scope: Deactivated successfully.
Jan 26 16:01:09 compute-0 silly_sinoussi[294854]: 167 167
Jan 26 16:01:09 compute-0 conmon[294854]: conmon 5c9f0c0a0853dc47fb28 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f.scope/container/memory.events
Jan 26 16:01:09 compute-0 podman[294836]: 2026-01-26 16:01:09.274942315 +0000 UTC m=+0.174245542 container died 5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_sinoussi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:01:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-d649c91f0b1ad39dd97b0a2da8a3b30f81b71f7afa408c3340346792a2e310bf-merged.mount: Deactivated successfully.
Jan 26 16:01:09 compute-0 podman[294836]: 2026-01-26 16:01:09.319478457 +0000 UTC m=+0.218781684 container remove 5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_sinoussi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:01:09 compute-0 systemd[1]: libpod-conmon-5c9f0c0a0853dc47fb281b4f271357cea00eb08b08d6ae2b5678d370991c027f.scope: Deactivated successfully.
Jan 26 16:01:09 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:01:09 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:01:09 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:01:09 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:01:09 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:01:09 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:01:09 compute-0 podman[294877]: 2026-01-26 16:01:09.547213461 +0000 UTC m=+0.048031448 container create c79b6b4fa59aba9e05b7c9a1a8d858e56ce587c38d02d6e390d2a8509433ff3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:01:09 compute-0 systemd[1]: Started libpod-conmon-c79b6b4fa59aba9e05b7c9a1a8d858e56ce587c38d02d6e390d2a8509433ff3e.scope.
Jan 26 16:01:09 compute-0 podman[294877]: 2026-01-26 16:01:09.527391045 +0000 UTC m=+0.028209072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:01:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68a4b86f94b08d2d20601543d93fed17188978c3f5139af7a3c4d4ef7957c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68a4b86f94b08d2d20601543d93fed17188978c3f5139af7a3c4d4ef7957c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68a4b86f94b08d2d20601543d93fed17188978c3f5139af7a3c4d4ef7957c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68a4b86f94b08d2d20601543d93fed17188978c3f5139af7a3c4d4ef7957c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b68a4b86f94b08d2d20601543d93fed17188978c3f5139af7a3c4d4ef7957c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:09 compute-0 podman[294877]: 2026-01-26 16:01:09.656469999 +0000 UTC m=+0.157288026 container init c79b6b4fa59aba9e05b7c9a1a8d858e56ce587c38d02d6e390d2a8509433ff3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_leavitt, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:01:09 compute-0 podman[294877]: 2026-01-26 16:01:09.665199463 +0000 UTC m=+0.166017440 container start c79b6b4fa59aba9e05b7c9a1a8d858e56ce587c38d02d6e390d2a8509433ff3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_leavitt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:01:09 compute-0 podman[294877]: 2026-01-26 16:01:09.669017217 +0000 UTC m=+0.169835254 container attach c79b6b4fa59aba9e05b7c9a1a8d858e56ce587c38d02d6e390d2a8509433ff3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 16:01:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 350 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.7 MiB/s wr, 212 op/s
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.088 239969 DEBUG nova.network.neutron [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Updating instance_info_cache with network_info: [{"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.116 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Releasing lock "refresh_cache-f50eea8d-23f7-4c65-99fa-f919c99ed80d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.116 239969 DEBUG nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Instance network_info: |[{"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.119 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Start _get_guest_xml network_info=[{"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.126 239969 WARNING nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.133 239969 DEBUG nova.virt.libvirt.host [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.135 239969 DEBUG nova.virt.libvirt.host [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.138 239969 DEBUG nova.virt.libvirt.host [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.139 239969 DEBUG nova.virt.libvirt.host [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.139 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.140 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.140 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.140 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:01:10 compute-0 romantic_leavitt[294894]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.141 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.141 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.141 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:01:10 compute-0 romantic_leavitt[294894]: --> All data devices are unavailable
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.141 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.142 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.142 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.142 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.142 239969 DEBUG nova.virt.hardware [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.146 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:10 compute-0 systemd[1]: libpod-c79b6b4fa59aba9e05b7c9a1a8d858e56ce587c38d02d6e390d2a8509433ff3e.scope: Deactivated successfully.
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.208 239969 DEBUG nova.compute.manager [req-76043d9d-89ab-4c1e-ba80-0e0ea174e00b req-6fb3648e-eada-48ef-b9a2-81941aeb2a15 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Received event network-changed-f01280ac-7e71-4b0c-bad0-c519d134f076 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.211 239969 DEBUG nova.compute.manager [req-76043d9d-89ab-4c1e-ba80-0e0ea174e00b req-6fb3648e-eada-48ef-b9a2-81941aeb2a15 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Refreshing instance network info cache due to event network-changed-f01280ac-7e71-4b0c-bad0-c519d134f076. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.213 239969 DEBUG oslo_concurrency.lockutils [req-76043d9d-89ab-4c1e-ba80-0e0ea174e00b req-6fb3648e-eada-48ef-b9a2-81941aeb2a15 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-f50eea8d-23f7-4c65-99fa-f919c99ed80d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.216 239969 DEBUG oslo_concurrency.lockutils [req-76043d9d-89ab-4c1e-ba80-0e0ea174e00b req-6fb3648e-eada-48ef-b9a2-81941aeb2a15 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-f50eea8d-23f7-4c65-99fa-f919c99ed80d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.218 239969 DEBUG nova.network.neutron [req-76043d9d-89ab-4c1e-ba80-0e0ea174e00b req-6fb3648e-eada-48ef-b9a2-81941aeb2a15 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Refreshing network info cache for port f01280ac-7e71-4b0c-bad0-c519d134f076 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:01:10 compute-0 sshd-session[294825]: Invalid user sshadmin from 176.120.22.13 port 20568
Jan 26 16:01:10 compute-0 podman[294915]: 2026-01-26 16:01:10.233754721 +0000 UTC m=+0.036636319 container died c79b6b4fa59aba9e05b7c9a1a8d858e56ce587c38d02d6e390d2a8509433ff3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:01:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b68a4b86f94b08d2d20601543d93fed17188978c3f5139af7a3c4d4ef7957c3-merged.mount: Deactivated successfully.
Jan 26 16:01:10 compute-0 podman[294915]: 2026-01-26 16:01:10.314178973 +0000 UTC m=+0.117060561 container remove c79b6b4fa59aba9e05b7c9a1a8d858e56ce587c38d02d6e390d2a8509433ff3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:01:10 compute-0 systemd[1]: libpod-conmon-c79b6b4fa59aba9e05b7c9a1a8d858e56ce587c38d02d6e390d2a8509433ff3e.scope: Deactivated successfully.
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.339 239969 DEBUG oslo_concurrency.lockutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.340 239969 DEBUG oslo_concurrency.lockutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.341 239969 DEBUG oslo_concurrency.lockutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.342 239969 DEBUG oslo_concurrency.lockutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.344 239969 DEBUG oslo_concurrency.lockutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.351 239969 INFO nova.compute.manager [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Terminating instance
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.356 239969 DEBUG nova.compute.manager [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:01:10 compute-0 sudo[294800]: pam_unix(sudo:session): session closed for user root
Jan 26 16:01:10 compute-0 sudo[294948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:01:10 compute-0 sudo[294948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:01:10 compute-0 sudo[294948]: pam_unix(sudo:session): session closed for user root
Jan 26 16:01:10 compute-0 kernel: tapa23e88a0-e0 (unregistering): left promiscuous mode
Jan 26 16:01:10 compute-0 NetworkManager[48954]: <info>  [1769443270.4706] device (tapa23e88a0-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:01:10 compute-0 ovn_controller[146046]: 2026-01-26T16:01:10Z|00547|binding|INFO|Releasing lport a23e88a0-e02f-48ba-8c26-a51810b9f2ed from this chassis (sb_readonly=0)
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.482 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 ovn_controller[146046]: 2026-01-26T16:01:10Z|00548|binding|INFO|Setting lport a23e88a0-e02f-48ba-8c26-a51810b9f2ed down in Southbound
Jan 26 16:01:10 compute-0 ovn_controller[146046]: 2026-01-26T16:01:10Z|00549|binding|INFO|Removing iface tapa23e88a0-e0 ovn-installed in OVS
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.485 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.489 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:fd:6f 10.100.0.30'], port_security=['fa:16:3e:6f:fd:6f 10.100.0.30'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.30/24', 'neutron:device_id': '6c804bac-7ee8-45d4-821c-535fac124d08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9e6d2521-36b2-4516-be03-ae71f0709642, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a23e88a0-e02f-48ba-8c26-a51810b9f2ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.491 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a23e88a0-e02f-48ba-8c26-a51810b9f2ed in datapath 05612a26-4bfc-4999-b9cf-ae81a7d0e6a9 unbound from our chassis
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.493 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 05612a26-4bfc-4999-b9cf-ae81a7d0e6a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:01:10 compute-0 kernel: tap769188ca-56 (unregistering): left promiscuous mode
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.494 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[55d428df-9046-4117-8713-ea3a9afb00cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.495 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9 namespace which is not needed anymore
Jan 26 16:01:10 compute-0 NetworkManager[48954]: <info>  [1769443270.5012] device (tap769188ca-56): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:01:10 compute-0 ceph-mon[75140]: pgmap v1447: 305 pgs: 305 active+clean; 350 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.7 MiB/s wr, 212 op/s
Jan 26 16:01:10 compute-0 ovn_controller[146046]: 2026-01-26T16:01:10Z|00550|binding|INFO|Releasing lport 769188ca-56ed-49b7-8944-e9229697b5b3 from this chassis (sb_readonly=0)
Jan 26 16:01:10 compute-0 ovn_controller[146046]: 2026-01-26T16:01:10Z|00551|binding|INFO|Setting lport 769188ca-56ed-49b7-8944-e9229697b5b3 down in Southbound
Jan 26 16:01:10 compute-0 ovn_controller[146046]: 2026-01-26T16:01:10Z|00552|binding|INFO|Removing iface tap769188ca-56 ovn-installed in OVS
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.524 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.528 239969 DEBUG nova.network.neutron [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Successfully updated port: 819adf29-2d57-4e5d-81f7-041a9ac00baa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.531 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:e2:27 10.100.1.193'], port_security=['fa:16:3e:c8:e2:27 10.100.1.193'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.193/24', 'neutron:device_id': '6c804bac-7ee8-45d4-821c-535fac124d08', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3f233816-7cc9-4554-a51f-5f05891ca43f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3f9fff751f0a4a4aba88032259e9628c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2771346f-84f3-45ca-bd0b-10e35f7a7112', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5f065695-583f-4b9a-b1e8-610dffd16ba4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=769188ca-56ed-49b7-8944-e9229697b5b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:10 compute-0 sudo[294973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:01:10 compute-0 sudo[294973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.544 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "refresh_cache-96c386e9-e3b3-47c6-b6ca-863e894d5c49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.548 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquired lock "refresh_cache-96c386e9-e3b3-47c6-b6ca-863e894d5c49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.548 239969 DEBUG nova.network.neutron [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.550 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Deactivated successfully.
Jan 26 16:01:10 compute-0 systemd[1]: machine-qemu\x2d71\x2dinstance\x2d00000040.scope: Consumed 2.819s CPU time.
Jan 26 16:01:10 compute-0 sshd-session[294825]: Connection reset by invalid user sshadmin 176.120.22.13 port 20568 [preauth]
Jan 26 16:01:10 compute-0 systemd-machined[208061]: Machine qemu-71-instance-00000040 terminated.
Jan 26 16:01:10 compute-0 NetworkManager[48954]: <info>  [1769443270.5962] manager: (tap769188ca-56): new Tun device (/org/freedesktop/NetworkManager/Devices/252)
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.618 239969 INFO nova.virt.libvirt.driver [-] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Instance destroyed successfully.
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.618 239969 DEBUG nova.objects.instance [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lazy-loading 'resources' on Instance uuid 6c804bac-7ee8-45d4-821c-535fac124d08 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.652 239969 DEBUG nova.virt.libvirt.vif [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:00:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1298183690',display_name='tempest-ServersTestMultiNic-server-1298183690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1298183690',id=64,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-a33k0b3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:01:08Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=6c804bac-7ee8-45d4-821c-535fac124d08,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.653 239969 DEBUG nova.network.os_vif_util [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.654 239969 DEBUG nova.network.os_vif_util [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fd:6f,bridge_name='br-int',has_traffic_filtering=True,id=a23e88a0-e02f-48ba-8c26-a51810b9f2ed,network=Network(05612a26-4bfc-4999-b9cf-ae81a7d0e6a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa23e88a0-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.654 239969 DEBUG os_vif [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fd:6f,bridge_name='br-int',has_traffic_filtering=True,id=a23e88a0-e02f-48ba-8c26-a51810b9f2ed,network=Network(05612a26-4bfc-4999-b9cf-ae81a7d0e6a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa23e88a0-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.656 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.656 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa23e88a0-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.658 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.664 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.666 239969 INFO os_vif [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:fd:6f,bridge_name='br-int',has_traffic_filtering=True,id=a23e88a0-e02f-48ba-8c26-a51810b9f2ed,network=Network(05612a26-4bfc-4999-b9cf-ae81a7d0e6a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa23e88a0-e0')
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.667 239969 DEBUG nova.virt.libvirt.vif [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:00:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1298183690',display_name='tempest-ServersTestMultiNic-server-1298183690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1298183690',id=64,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3f9fff751f0a4a4aba88032259e9628c',ramdisk_id='',reservation_id='r-a33k0b3a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk=
'1',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1401749103',owner_user_name='tempest-ServersTestMultiNic-1401749103-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:01:08Z,user_data=None,user_id='9cc0fb8436b44d0499a7d55e0a7e7585',uuid=6c804bac-7ee8-45d4-821c-535fac124d08,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.667 239969 DEBUG nova.network.os_vif_util [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converting VIF {"id": "769188ca-56ed-49b7-8944-e9229697b5b3", "address": "fa:16:3e:c8:e2:27", "network": {"id": "3f233816-7cc9-4554-a51f-5f05891ca43f", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1224441887", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.193", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap769188ca-56", "ovs_interfaceid": "769188ca-56ed-49b7-8944-e9229697b5b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.668 239969 DEBUG nova.network.os_vif_util [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:e2:27,bridge_name='br-int',has_traffic_filtering=True,id=769188ca-56ed-49b7-8944-e9229697b5b3,network=Network(3f233816-7cc9-4554-a51f-5f05891ca43f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap769188ca-56') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.669 239969 DEBUG os_vif [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:e2:27,bridge_name='br-int',has_traffic_filtering=True,id=769188ca-56ed-49b7-8944-e9229697b5b3,network=Network(3f233816-7cc9-4554-a51f-5f05891ca43f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap769188ca-56') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.670 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.670 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap769188ca-56, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:10 compute-0 neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9[294352]: [NOTICE]   (294367) : haproxy version is 2.8.14-c23fe91
Jan 26 16:01:10 compute-0 neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9[294352]: [NOTICE]   (294367) : path to executable is /usr/sbin/haproxy
Jan 26 16:01:10 compute-0 neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9[294352]: [WARNING]  (294367) : Exiting Master process...
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.674 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9[294352]: [ALERT]    (294367) : Current worker (294385) exited with code 143 (Terminated)
Jan 26 16:01:10 compute-0 neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9[294352]: [WARNING]  (294367) : All workers exited. Exiting... (0)
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.676 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 systemd[1]: libpod-d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2.scope: Deactivated successfully.
Jan 26 16:01:10 compute-0 podman[295035]: 2026-01-26 16:01:10.682580594 +0000 UTC m=+0.061261183 container died d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.684 239969 INFO os_vif [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:e2:27,bridge_name='br-int',has_traffic_filtering=True,id=769188ca-56ed-49b7-8944-e9229697b5b3,network=Network(3f233816-7cc9-4554-a51f-5f05891ca43f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap769188ca-56')
Jan 26 16:01:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2-userdata-shm.mount: Deactivated successfully.
Jan 26 16:01:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-9809e085d4e40059d1858085f3830acd68f1c8f36a90e9741be14002559d852c-merged.mount: Deactivated successfully.
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.720 239969 DEBUG nova.compute.manager [req-095b33a3-8b60-4752-9017-f74bf6fcbd66 req-f5f09c71-1414-4c46-8c3c-b11855a90af5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-unplugged-769188ca-56ed-49b7-8944-e9229697b5b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.721 239969 DEBUG oslo_concurrency.lockutils [req-095b33a3-8b60-4752-9017-f74bf6fcbd66 req-f5f09c71-1414-4c46-8c3c-b11855a90af5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.721 239969 DEBUG oslo_concurrency.lockutils [req-095b33a3-8b60-4752-9017-f74bf6fcbd66 req-f5f09c71-1414-4c46-8c3c-b11855a90af5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.721 239969 DEBUG oslo_concurrency.lockutils [req-095b33a3-8b60-4752-9017-f74bf6fcbd66 req-f5f09c71-1414-4c46-8c3c-b11855a90af5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.722 239969 DEBUG nova.compute.manager [req-095b33a3-8b60-4752-9017-f74bf6fcbd66 req-f5f09c71-1414-4c46-8c3c-b11855a90af5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] No waiting events found dispatching network-vif-unplugged-769188ca-56ed-49b7-8944-e9229697b5b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.722 239969 DEBUG nova.compute.manager [req-095b33a3-8b60-4752-9017-f74bf6fcbd66 req-f5f09c71-1414-4c46-8c3c-b11855a90af5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-unplugged-769188ca-56ed-49b7-8944-e9229697b5b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:01:10 compute-0 podman[295035]: 2026-01-26 16:01:10.728872599 +0000 UTC m=+0.107553168 container cleanup d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 systemd[1]: libpod-conmon-d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2.scope: Deactivated successfully.
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.766 239969 DEBUG nova.network.neutron [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:01:10 compute-0 podman[295091]: 2026-01-26 16:01:10.80928304 +0000 UTC m=+0.051401421 container remove d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.818 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7002f7fa-9ea9-4a45-96ef-90bad6256a99]: (4, ('Mon Jan 26 04:01:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9 (d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2)\nd5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2\nMon Jan 26 04:01:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9 (d5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2)\nd5a71bba8cfd401cdfe4a112a2c0aa7281115eced8569dfdcb5678fc8ef4e1d2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853513770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.821 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0c25cc60-f0b6-4b90-b1ac-48b00e2ce3ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.822 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05612a26-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:10 compute-0 kernel: tap05612a26-40: left promiscuous mode
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.824 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.836 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[47e7fcd5-fde4-4ad8-b393-f7e0a1bceba2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.848 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.851 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2be32d65-e477-40f4-9cd5-61062679a299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.854 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[31ae0aa5-ce66-4a1c-ad62-e08a083a1f6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.863 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.717s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.872 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f5cec985-6a85-45c8-9154-6317691ebcbe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475517, 'reachable_time': 24370, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295130, 'error': None, 'target': 'ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d05612a26\x2d4bfc\x2d4999\x2db9cf\x2dae81a7d0e6a9.mount: Deactivated successfully.
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.875 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-05612a26-4bfc-4999-b9cf-ae81a7d0e6a9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.875 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a06a0693-9efc-48ff-98fd-e87067d10ec6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.877 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 769188ca-56ed-49b7-8944-e9229697b5b3 in datapath 3f233816-7cc9-4554-a51f-5f05891ca43f unbound from our chassis
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.879 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3f233816-7cc9-4554-a51f-5f05891ca43f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.880 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1d717172-5679-4488-a103-614262656c05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:10.881 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f namespace which is not needed anymore
Jan 26 16:01:10 compute-0 podman[295115]: 2026-01-26 16:01:10.883658553 +0000 UTC m=+0.048729725 container create 366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hertz, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.896 239969 DEBUG nova.storage.rbd_utils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:10 compute-0 nova_compute[239965]: 2026-01-26 16:01:10.916 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:10 compute-0 systemd[1]: Started libpod-conmon-366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51.scope.
Jan 26 16:01:10 compute-0 podman[295115]: 2026-01-26 16:01:10.862275589 +0000 UTC m=+0.027346781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:01:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:11 compute-0 podman[295115]: 2026-01-26 16:01:11.020315234 +0000 UTC m=+0.185386406 container init 366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hertz, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:01:11 compute-0 podman[295115]: 2026-01-26 16:01:11.027908279 +0000 UTC m=+0.192979451 container start 366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hertz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 16:01:11 compute-0 upbeat_hertz[295167]: 167 167
Jan 26 16:01:11 compute-0 systemd[1]: libpod-366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51.scope: Deactivated successfully.
Jan 26 16:01:11 compute-0 conmon[295167]: conmon 366770c252995fd4dca3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51.scope/container/memory.events
Jan 26 16:01:11 compute-0 podman[295115]: 2026-01-26 16:01:11.1017408 +0000 UTC m=+0.266811972 container attach 366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hertz, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:01:11 compute-0 podman[295115]: 2026-01-26 16:01:11.10254153 +0000 UTC m=+0.267612702 container died 366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hertz, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 16:01:11 compute-0 neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f[294513]: [NOTICE]   (294518) : haproxy version is 2.8.14-c23fe91
Jan 26 16:01:11 compute-0 neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f[294513]: [NOTICE]   (294518) : path to executable is /usr/sbin/haproxy
Jan 26 16:01:11 compute-0 neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f[294513]: [WARNING]  (294518) : Exiting Master process...
Jan 26 16:01:11 compute-0 neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f[294513]: [WARNING]  (294518) : Exiting Master process...
Jan 26 16:01:11 compute-0 neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f[294513]: [ALERT]    (294518) : Current worker (294520) exited with code 143 (Terminated)
Jan 26 16:01:11 compute-0 neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f[294513]: [WARNING]  (294518) : All workers exited. Exiting... (0)
Jan 26 16:01:11 compute-0 systemd[1]: libpod-f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf.scope: Deactivated successfully.
Jan 26 16:01:11 compute-0 conmon[294513]: conmon f7bd8625157b29f25df3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf.scope/container/memory.events
Jan 26 16:01:11 compute-0 podman[295115]: 2026-01-26 16:01:11.172684209 +0000 UTC m=+0.337755381 container remove 366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_hertz, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:01:11 compute-0 podman[295178]: 2026-01-26 16:01:11.176053361 +0000 UTC m=+0.189150518 container died f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 26 16:01:11 compute-0 systemd[1]: libpod-conmon-366770c252995fd4dca3851a35eafed11735eef739ecf8bd75638670866cfe51.scope: Deactivated successfully.
Jan 26 16:01:11 compute-0 podman[295178]: 2026-01-26 16:01:11.220203524 +0000 UTC m=+0.233300651 container cleanup f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:01:11 compute-0 systemd[1]: libpod-conmon-f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf.scope: Deactivated successfully.
Jan 26 16:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b63577198b37c08f32e74b007f6f9cf318b30b90799c30a9823d7a83da112f-merged.mount: Deactivated successfully.
Jan 26 16:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb1a129b729ba162691e2960cdf012997c424364dfd440a261094e27648e3f3f-merged.mount: Deactivated successfully.
Jan 26 16:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf-userdata-shm.mount: Deactivated successfully.
Jan 26 16:01:11 compute-0 podman[295242]: 2026-01-26 16:01:11.30118323 +0000 UTC m=+0.051585996 container remove f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:01:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:11.313 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4237201c-f35e-406f-98d4-9550e6f233c6]: (4, ('Mon Jan 26 04:01:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f (f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf)\nf7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf\nMon Jan 26 04:01:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f (f7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf)\nf7bd8625157b29f25df3329f7af8c66ccd9d6cceb9ae743ac8108316bfca92cf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.315 239969 INFO nova.virt.libvirt.driver [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Deleting instance files /var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08_del
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.316 239969 INFO nova.virt.libvirt.driver [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Deletion of /var/lib/nova/instances/6c804bac-7ee8-45d4-821c-535fac124d08_del complete
Jan 26 16:01:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:11.317 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[629e215f-04b7-48f3-9d4d-66d0e692e341]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:11.318 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3f233816-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.323 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:11 compute-0 kernel: tap3f233816-70: left promiscuous mode
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.337 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:11.344 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ca032002-b0e8-49f6-9d18-882414525d3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.347 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:11.357 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b16ae45-790d-4ff8-8e8b-fb92873ebe7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:11.360 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a8e9332b-2961-4558-9a30-de66d7d17301]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.372 239969 INFO nova.compute.manager [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Took 1.02 seconds to destroy the instance on the hypervisor.
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.372 239969 DEBUG oslo.service.loopingcall [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.373 239969 DEBUG nova.compute.manager [-] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.373 239969 DEBUG nova.network.neutron [-] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:01:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:11.379 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbe3680-58e3-47af-b9da-ca36ff128c93]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 475756, 'reachable_time': 42694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295266, 'error': None, 'target': 'ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d3f233816\x2d7cc9\x2d4554\x2da51f\x2d5f05891ca43f.mount: Deactivated successfully.
Jan 26 16:01:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:11.385 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3f233816-7cc9-4554-a51f-5f05891ca43f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:01:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:11.385 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[602d9df0-5353-485a-9032-3530620a86bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:11 compute-0 podman[295257]: 2026-01-26 16:01:11.418443353 +0000 UTC m=+0.068872589 container create 019a413adb1106a4cd03cf78c8f53e4d6697ed07ffa2dd9a634a85ced37c5aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_torvalds, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 16:01:11 compute-0 systemd[1]: Started libpod-conmon-019a413adb1106a4cd03cf78c8f53e4d6697ed07ffa2dd9a634a85ced37c5aa5.scope.
Jan 26 16:01:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:11 compute-0 podman[295257]: 2026-01-26 16:01:11.398166347 +0000 UTC m=+0.048595593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e026bb5985866798fd5c25abf2bde38add5ffc77c84db4a7a039a4e222e19c95/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e026bb5985866798fd5c25abf2bde38add5ffc77c84db4a7a039a4e222e19c95/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e026bb5985866798fd5c25abf2bde38add5ffc77c84db4a7a039a4e222e19c95/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e026bb5985866798fd5c25abf2bde38add5ffc77c84db4a7a039a4e222e19c95/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:11 compute-0 podman[295257]: 2026-01-26 16:01:11.50684279 +0000 UTC m=+0.157272046 container init 019a413adb1106a4cd03cf78c8f53e4d6697ed07ffa2dd9a634a85ced37c5aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:01:11 compute-0 podman[295257]: 2026-01-26 16:01:11.517723337 +0000 UTC m=+0.168152563 container start 019a413adb1106a4cd03cf78c8f53e4d6697ed07ffa2dd9a634a85ced37c5aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_torvalds, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 16:01:11 compute-0 podman[295257]: 2026-01-26 16:01:11.522656029 +0000 UTC m=+0.173085285 container attach 019a413adb1106a4cd03cf78c8f53e4d6697ed07ffa2dd9a634a85ced37c5aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:01:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1853513770' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1909678336' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.583 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.585 239969 DEBUG nova.virt.libvirt.vif [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2124285703',display_name='tempest-ServerRescueNegativeTestJSON-server-2124285703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2124285703',id=65,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e098d809d8b4d0a8747af49a85560a1',ramdisk_id='',reservation_id='r-8ym7t3nz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2133238807',owner_user_name='te
mpest-ServerRescueNegativeTestJSON-2133238807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:04Z,user_data=None,user_id='2b5ecf8e47344a3d9fec2e1bf4244aae',uuid=f50eea8d-23f7-4c65-99fa-f919c99ed80d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.585 239969 DEBUG nova.network.os_vif_util [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converting VIF {"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.585 239969 DEBUG nova.network.os_vif_util [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:52,bridge_name='br-int',has_traffic_filtering=True,id=f01280ac-7e71-4b0c-bad0-c519d134f076,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf01280ac-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.587 239969 DEBUG nova.objects.instance [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid f50eea8d-23f7-4c65-99fa-f919c99ed80d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.603 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <uuid>f50eea8d-23f7-4c65-99fa-f919c99ed80d</uuid>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <name>instance-00000041</name>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-2124285703</nova:name>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:01:10</nova:creationTime>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <nova:user uuid="2b5ecf8e47344a3d9fec2e1bf4244aae">tempest-ServerRescueNegativeTestJSON-2133238807-project-member</nova:user>
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <nova:project uuid="1e098d809d8b4d0a8747af49a85560a1">tempest-ServerRescueNegativeTestJSON-2133238807</nova:project>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <nova:port uuid="f01280ac-7e71-4b0c-bad0-c519d134f076">
Jan 26 16:01:11 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <system>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <entry name="serial">f50eea8d-23f7-4c65-99fa-f919c99ed80d</entry>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <entry name="uuid">f50eea8d-23f7-4c65-99fa-f919c99ed80d</entry>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     </system>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <os>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   </os>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <features>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   </features>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk">
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk.config">
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:11:6b:52"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <target dev="tapf01280ac-7e"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d/console.log" append="off"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <video>
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     </video>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:01:11 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:01:11 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:01:11 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:01:11 compute-0 nova_compute[239965]: </domain>
Jan 26 16:01:11 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.604 239969 DEBUG nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Preparing to wait for external event network-vif-plugged-f01280ac-7e71-4b0c-bad0-c519d134f076 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.604 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.604 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.604 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.605 239969 DEBUG nova.virt.libvirt.vif [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2124285703',display_name='tempest-ServerRescueNegativeTestJSON-server-2124285703',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2124285703',id=65,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e098d809d8b4d0a8747af49a85560a1',ramdisk_id='',reservation_id='r-8ym7t3nz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2133238807',owner_user_name='tempest-ServerRescueNegativeTestJSON-2133238807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:04Z,user_data=None,user_id='2b5ecf8e47344a3d9fec2e1bf4244aae',uuid=f50eea8d-23f7-4c65-99fa-f919c99ed80d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.605 239969 DEBUG nova.network.os_vif_util [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converting VIF {"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.606 239969 DEBUG nova.network.os_vif_util [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:52,bridge_name='br-int',has_traffic_filtering=True,id=f01280ac-7e71-4b0c-bad0-c519d134f076,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf01280ac-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.606 239969 DEBUG os_vif [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:52,bridge_name='br-int',has_traffic_filtering=True,id=f01280ac-7e71-4b0c-bad0-c519d134f076,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf01280ac-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.607 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.607 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.608 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.612 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.612 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf01280ac-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.612 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf01280ac-7e, col_values=(('external_ids', {'iface-id': 'f01280ac-7e71-4b0c-bad0-c519d134f076', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:6b:52', 'vm-uuid': 'f50eea8d-23f7-4c65-99fa-f919c99ed80d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.614 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:11 compute-0 NetworkManager[48954]: <info>  [1769443271.6151] manager: (tapf01280ac-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/253)
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.617 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.622 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.623 239969 INFO os_vif [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:52,bridge_name='br-int',has_traffic_filtering=True,id=f01280ac-7e71-4b0c-bad0-c519d134f076,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf01280ac-7e')
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.688 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.689 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.689 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No VIF found with MAC fa:16:3e:11:6b:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.690 239969 INFO nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Using config drive
Jan 26 16:01:11 compute-0 nova_compute[239965]: 2026-01-26 16:01:11.714 239969 DEBUG nova.storage.rbd_utils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:11 compute-0 sad_torvalds[295276]: {
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:     "0": [
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:         {
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "devices": [
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "/dev/loop3"
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             ],
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_name": "ceph_lv0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_size": "21470642176",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "name": "ceph_lv0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "tags": {
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.cluster_name": "ceph",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.crush_device_class": "",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.encrypted": "0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.objectstore": "bluestore",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.osd_id": "0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.type": "block",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.vdo": "0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.with_tpm": "0"
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             },
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "type": "block",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "vg_name": "ceph_vg0"
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:         }
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:     ],
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:     "1": [
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:         {
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "devices": [
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "/dev/loop4"
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             ],
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_name": "ceph_lv1",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_size": "21470642176",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "name": "ceph_lv1",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "tags": {
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.cluster_name": "ceph",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.crush_device_class": "",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.encrypted": "0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.objectstore": "bluestore",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.osd_id": "1",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.type": "block",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.vdo": "0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.with_tpm": "0"
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             },
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "type": "block",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "vg_name": "ceph_vg1"
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:         }
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:     ],
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:     "2": [
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:         {
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "devices": [
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "/dev/loop5"
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             ],
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_name": "ceph_lv2",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_size": "21470642176",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "name": "ceph_lv2",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "tags": {
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.cluster_name": "ceph",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.crush_device_class": "",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.encrypted": "0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.objectstore": "bluestore",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.osd_id": "2",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.type": "block",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.vdo": "0",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:                 "ceph.with_tpm": "0"
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             },
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "type": "block",
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:             "vg_name": "ceph_vg2"
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:         }
Jan 26 16:01:11 compute-0 sad_torvalds[295276]:     ]
Jan 26 16:01:11 compute-0 sad_torvalds[295276]: }
Jan 26 16:01:11 compute-0 systemd[1]: libpod-019a413adb1106a4cd03cf78c8f53e4d6697ed07ffa2dd9a634a85ced37c5aa5.scope: Deactivated successfully.
Jan 26 16:01:11 compute-0 podman[295307]: 2026-01-26 16:01:11.903755471 +0000 UTC m=+0.024383839 container died 019a413adb1106a4cd03cf78c8f53e4d6697ed07ffa2dd9a634a85ced37c5aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_torvalds, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:01:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 352 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.7 MiB/s wr, 232 op/s
Jan 26 16:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-e026bb5985866798fd5c25abf2bde38add5ffc77c84db4a7a039a4e222e19c95-merged.mount: Deactivated successfully.
Jan 26 16:01:11 compute-0 podman[295307]: 2026-01-26 16:01:11.95062709 +0000 UTC m=+0.071255428 container remove 019a413adb1106a4cd03cf78c8f53e4d6697ed07ffa2dd9a634a85ced37c5aa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_torvalds, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:01:11 compute-0 systemd[1]: libpod-conmon-019a413adb1106a4cd03cf78c8f53e4d6697ed07ffa2dd9a634a85ced37c5aa5.scope: Deactivated successfully.
Jan 26 16:01:12 compute-0 sudo[294973]: pam_unix(sudo:session): session closed for user root
Jan 26 16:01:12 compute-0 sudo[295322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:01:12 compute-0 sudo[295322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:01:12 compute-0 sudo[295322]: pam_unix(sudo:session): session closed for user root
Jan 26 16:01:12 compute-0 sudo[295347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:01:12 compute-0 sudo[295347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.256 239969 INFO nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Creating config drive at /var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d/disk.config
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.262 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg5g9wcio execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.322 239969 DEBUG nova.network.neutron [req-76043d9d-89ab-4c1e-ba80-0e0ea174e00b req-6fb3648e-eada-48ef-b9a2-81941aeb2a15 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Updated VIF entry in instance network info cache for port f01280ac-7e71-4b0c-bad0-c519d134f076. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.323 239969 DEBUG nova.network.neutron [req-76043d9d-89ab-4c1e-ba80-0e0ea174e00b req-6fb3648e-eada-48ef-b9a2-81941aeb2a15 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Updating instance_info_cache with network_info: [{"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.329 239969 DEBUG nova.compute.manager [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-changed-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.330 239969 DEBUG nova.compute.manager [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Refreshing instance network info cache due to event network-changed-819adf29-2d57-4e5d-81f7-041a9ac00baa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.330 239969 DEBUG oslo_concurrency.lockutils [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-96c386e9-e3b3-47c6-b6ca-863e894d5c49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.341 239969 DEBUG oslo_concurrency.lockutils [req-76043d9d-89ab-4c1e-ba80-0e0ea174e00b req-6fb3648e-eada-48ef-b9a2-81941aeb2a15 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-f50eea8d-23f7-4c65-99fa-f919c99ed80d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.377 239969 DEBUG nova.network.neutron [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Updating instance_info_cache with network_info: [{"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.404 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg5g9wcio" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.404 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Releasing lock "refresh_cache-96c386e9-e3b3-47c6-b6ca-863e894d5c49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.405 239969 DEBUG nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance network_info: |[{"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.428 239969 DEBUG nova.storage.rbd_utils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.431 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d/disk.config f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:12 compute-0 podman[295387]: 2026-01-26 16:01:12.436995224 +0000 UTC m=+0.058188308 container create 058a3f4ddf1675e0007faaf1410c24c6d72a992be13fc1a3b676760bd8b7f910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.460 239969 DEBUG oslo_concurrency.lockutils [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-96c386e9-e3b3-47c6-b6ca-863e894d5c49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.461 239969 DEBUG nova.network.neutron [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Refreshing network info cache for port 819adf29-2d57-4e5d-81f7-041a9ac00baa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.465 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Start _get_guest_xml network_info=[{"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.472 239969 WARNING nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:01:12 compute-0 systemd[1]: Started libpod-conmon-058a3f4ddf1675e0007faaf1410c24c6d72a992be13fc1a3b676760bd8b7f910.scope.
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.485 239969 DEBUG nova.virt.libvirt.host [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.486 239969 DEBUG nova.virt.libvirt.host [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.492 239969 DEBUG nova.virt.libvirt.host [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.493 239969 DEBUG nova.virt.libvirt.host [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.494 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.494 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.495 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.495 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.495 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.495 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.495 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.496 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.496 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.496 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.496 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.497 239969 DEBUG nova.virt.hardware [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:01:12 compute-0 podman[295387]: 2026-01-26 16:01:12.414463461 +0000 UTC m=+0.035656585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.500 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:12 compute-0 podman[295387]: 2026-01-26 16:01:12.527758699 +0000 UTC m=+0.148951803 container init 058a3f4ddf1675e0007faaf1410c24c6d72a992be13fc1a3b676760bd8b7f910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:01:12 compute-0 podman[295387]: 2026-01-26 16:01:12.53680917 +0000 UTC m=+0.158002254 container start 058a3f4ddf1675e0007faaf1410c24c6d72a992be13fc1a3b676760bd8b7f910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 16:01:12 compute-0 podman[295387]: 2026-01-26 16:01:12.540906871 +0000 UTC m=+0.162099985 container attach 058a3f4ddf1675e0007faaf1410c24c6d72a992be13fc1a3b676760bd8b7f910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 16:01:12 compute-0 nifty_payne[295422]: 167 167
Jan 26 16:01:12 compute-0 systemd[1]: libpod-058a3f4ddf1675e0007faaf1410c24c6d72a992be13fc1a3b676760bd8b7f910.scope: Deactivated successfully.
Jan 26 16:01:12 compute-0 podman[295387]: 2026-01-26 16:01:12.542771556 +0000 UTC m=+0.163964660 container died 058a3f4ddf1675e0007faaf1410c24c6d72a992be13fc1a3b676760bd8b7f910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:01:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1909678336' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:12 compute-0 ceph-mon[75140]: pgmap v1448: 305 pgs: 305 active+clean; 352 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.7 MiB/s wr, 232 op/s
Jan 26 16:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-78074a7633eb732452c2237f5b8912bbc07ddf23a836298f6a2ad8deabc2c325-merged.mount: Deactivated successfully.
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.780 239969 DEBUG nova.compute.manager [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-plugged-769188ca-56ed-49b7-8944-e9229697b5b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.781 239969 DEBUG oslo_concurrency.lockutils [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.781 239969 DEBUG oslo_concurrency.lockutils [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.781 239969 DEBUG oslo_concurrency.lockutils [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.781 239969 DEBUG nova.compute.manager [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] No waiting events found dispatching network-vif-plugged-769188ca-56ed-49b7-8944-e9229697b5b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.781 239969 WARNING nova.compute.manager [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received unexpected event network-vif-plugged-769188ca-56ed-49b7-8944-e9229697b5b3 for instance with vm_state active and task_state deleting.
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.785 239969 DEBUG nova.compute.manager [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-deleted-769188ca-56ed-49b7-8944-e9229697b5b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.785 239969 INFO nova.compute.manager [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Neutron deleted interface 769188ca-56ed-49b7-8944-e9229697b5b3; detaching it from the instance and deleting it from the info cache
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.785 239969 DEBUG nova.network.neutron [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Updating instance_info_cache with network_info: [{"id": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "address": "fa:16:3e:6f:fd:6f", "network": {"id": "05612a26-4bfc-4999-b9cf-ae81a7d0e6a9", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1066997338", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.30", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3f9fff751f0a4a4aba88032259e9628c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa23e88a0-e0", "ovs_interfaceid": "a23e88a0-e02f-48ba-8c26-a51810b9f2ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.810 239969 DEBUG nova.compute.manager [req-a898d1f6-579e-425b-a725-48b1b0a1f1d9 req-119dcdeb-83a6-46b4-8a5f-1a9ba733a070 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Detach interface failed, port_id=769188ca-56ed-49b7-8944-e9229697b5b3, reason: Instance 6c804bac-7ee8-45d4-821c-535fac124d08 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.875 239969 DEBUG nova.network.neutron [-] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.891 239969 INFO nova.compute.manager [-] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Took 1.52 seconds to deallocate network for instance.
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.934 239969 DEBUG oslo_concurrency.lockutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:12 compute-0 nova_compute[239965]: 2026-01-26 16:01:12.934 239969 DEBUG oslo_concurrency.lockutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.051 239969 DEBUG oslo_concurrency.processutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3781560802' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.086 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.112 239969 DEBUG nova.storage.rbd_utils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.117 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740263383' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4171708627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.696 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.697 239969 DEBUG oslo_concurrency.processutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.646s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.699 239969 DEBUG nova.virt.libvirt.vif [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1808774340',display_name='tempest-ServerRescueNegativeTestJSON-server-1808774340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1808774340',id=66,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e098d809d8b4d0a8747af49a85560a1',ramdisk_id='',reservation_id='r-xtk9l70v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2133238807',owner_user_name='te
mpest-ServerRescueNegativeTestJSON-2133238807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:07Z,user_data=None,user_id='2b5ecf8e47344a3d9fec2e1bf4244aae',uuid=96c386e9-e3b3-47c6-b6ca-863e894d5c49,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.699 239969 DEBUG nova.network.os_vif_util [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converting VIF {"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.701 239969 DEBUG nova.network.os_vif_util [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:10:75,bridge_name='br-int',has_traffic_filtering=True,id=819adf29-2d57-4e5d-81f7-041a9ac00baa,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap819adf29-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.702 239969 DEBUG nova.objects.instance [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.710 239969 DEBUG nova.compute.provider_tree [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.727 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <uuid>96c386e9-e3b3-47c6-b6ca-863e894d5c49</uuid>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <name>instance-00000042</name>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1808774340</nova:name>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:01:12</nova:creationTime>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <nova:user uuid="2b5ecf8e47344a3d9fec2e1bf4244aae">tempest-ServerRescueNegativeTestJSON-2133238807-project-member</nova:user>
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <nova:project uuid="1e098d809d8b4d0a8747af49a85560a1">tempest-ServerRescueNegativeTestJSON-2133238807</nova:project>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <nova:port uuid="819adf29-2d57-4e5d-81f7-041a9ac00baa">
Jan 26 16:01:13 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <system>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <entry name="serial">96c386e9-e3b3-47c6-b6ca-863e894d5c49</entry>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <entry name="uuid">96c386e9-e3b3-47c6-b6ca-863e894d5c49</entry>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     </system>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <os>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   </os>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <features>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   </features>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk">
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config">
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:cd:10:75"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <target dev="tap819adf29-2d"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/console.log" append="off"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <video>
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     </video>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:01:13 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:01:13 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:01:13 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:01:13 compute-0 nova_compute[239965]: </domain>
Jan 26 16:01:13 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.728 239969 DEBUG nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Preparing to wait for external event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.728 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.729 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.729 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.730 239969 DEBUG nova.virt.libvirt.vif [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1808774340',display_name='tempest-ServerRescueNegativeTestJSON-server-1808774340',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1808774340',id=66,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1e098d809d8b4d0a8747af49a85560a1',ramdisk_id='',reservation_id='r-xtk9l70v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueNegativeTestJSON-2133238807',owner_use
r_name='tempest-ServerRescueNegativeTestJSON-2133238807-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:07Z,user_data=None,user_id='2b5ecf8e47344a3d9fec2e1bf4244aae',uuid=96c386e9-e3b3-47c6-b6ca-863e894d5c49,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.730 239969 DEBUG nova.network.os_vif_util [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converting VIF {"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.731 239969 DEBUG nova.network.os_vif_util [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:10:75,bridge_name='br-int',has_traffic_filtering=True,id=819adf29-2d57-4e5d-81f7-041a9ac00baa,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap819adf29-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.731 239969 DEBUG os_vif [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:10:75,bridge_name='br-int',has_traffic_filtering=True,id=819adf29-2d57-4e5d-81f7-041a9ac00baa,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap819adf29-2d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.732 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.733 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.733 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.734 239969 DEBUG nova.scheduler.client.report [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.745 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.746 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap819adf29-2d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.746 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap819adf29-2d, col_values=(('external_ids', {'iface-id': '819adf29-2d57-4e5d-81f7-041a9ac00baa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:10:75', 'vm-uuid': '96c386e9-e3b3-47c6-b6ca-863e894d5c49'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.748 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:13 compute-0 NetworkManager[48954]: <info>  [1769443273.7492] manager: (tap819adf29-2d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/254)
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.750 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.759 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.760 239969 INFO os_vif [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:10:75,bridge_name='br-int',has_traffic_filtering=True,id=819adf29-2d57-4e5d-81f7-041a9ac00baa,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap819adf29-2d')
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.763 239969 DEBUG oslo_concurrency.lockutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.807 239969 INFO nova.scheduler.client.report [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Deleted allocations for instance 6c804bac-7ee8-45d4-821c-535fac124d08
Jan 26 16:01:13 compute-0 nova_compute[239965]: 2026-01-26 16:01:13.895 239969 DEBUG oslo_concurrency.lockutils [None req-f89cf2b3-34dd-46ea-abfc-1236c7786b8c 9cc0fb8436b44d0499a7d55e0a7e7585 3f9fff751f0a4a4aba88032259e9628c - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 352 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.7 MiB/s wr, 227 op/s
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.060 239969 DEBUG nova.network.neutron [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Updated VIF entry in instance network info cache for port 819adf29-2d57-4e5d-81f7-041a9ac00baa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.061 239969 DEBUG nova.network.neutron [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Updating instance_info_cache with network_info: [{"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.076 239969 DEBUG oslo_concurrency.lockutils [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-96c386e9-e3b3-47c6-b6ca-863e894d5c49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.076 239969 DEBUG nova.compute.manager [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-unplugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.077 239969 DEBUG oslo_concurrency.lockutils [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.077 239969 DEBUG oslo_concurrency.lockutils [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.077 239969 DEBUG oslo_concurrency.lockutils [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.077 239969 DEBUG nova.compute.manager [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] No waiting events found dispatching network-vif-unplugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.077 239969 DEBUG nova.compute.manager [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-unplugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.078 239969 DEBUG nova.compute.manager [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-plugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.078 239969 DEBUG oslo_concurrency.lockutils [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.079 239969 DEBUG oslo_concurrency.lockutils [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.079 239969 DEBUG oslo_concurrency.lockutils [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6c804bac-7ee8-45d4-821c-535fac124d08-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.079 239969 DEBUG nova.compute.manager [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] No waiting events found dispatching network-vif-plugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.079 239969 WARNING nova.compute.manager [req-f1d0c09b-0fb4-497e-b99d-a3c8216b75c0 req-36789909-5a37-4dc6-81f5-db4258be2cb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received unexpected event network-vif-plugged-a23e88a0-e02f-48ba-8c26-a51810b9f2ed for instance with vm_state active and task_state deleting.
Jan 26 16:01:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3781560802' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.126 239969 DEBUG oslo_concurrency.processutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d/disk.config f50eea8d-23f7-4c65-99fa-f919c99ed80d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.695s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.126 239969 INFO nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Deleting local config drive /var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d/disk.config because it was imported into RBD.
Jan 26 16:01:14 compute-0 podman[295387]: 2026-01-26 16:01:14.133668548 +0000 UTC m=+1.754861632 container remove 058a3f4ddf1675e0007faaf1410c24c6d72a992be13fc1a3b676760bd8b7f910 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_payne, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:01:14 compute-0 systemd[1]: libpod-conmon-058a3f4ddf1675e0007faaf1410c24c6d72a992be13fc1a3b676760bd8b7f910.scope: Deactivated successfully.
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.192 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.193 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.193 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No VIF found with MAC fa:16:3e:cd:10:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.194 239969 INFO nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Using config drive
Jan 26 16:01:14 compute-0 kernel: tapf01280ac-7e: entered promiscuous mode
Jan 26 16:01:14 compute-0 NetworkManager[48954]: <info>  [1769443274.2043] manager: (tapf01280ac-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/255)
Jan 26 16:01:14 compute-0 ovn_controller[146046]: 2026-01-26T16:01:14Z|00553|binding|INFO|Claiming lport f01280ac-7e71-4b0c-bad0-c519d134f076 for this chassis.
Jan 26 16:01:14 compute-0 ovn_controller[146046]: 2026-01-26T16:01:14Z|00554|binding|INFO|f01280ac-7e71-4b0c-bad0-c519d134f076: Claiming fa:16:3e:11:6b:52 10.100.0.13
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.218 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:6b:52 10.100.0.13'], port_security=['fa:16:3e:11:6b:52 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f50eea8d-23f7-4c65-99fa-f919c99ed80d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-677a455b-3478-4811-937e-e2d4184c4933', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e098d809d8b4d0a8747af49a85560a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43e1902e-755a-4b29-81ae-0ec275ea79a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=375e748b-fa99-415a-ab22-338d401b8bb9, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f01280ac-7e71-4b0c-bad0-c519d134f076) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.219 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f01280ac-7e71-4b0c-bad0-c519d134f076 in datapath 677a455b-3478-4811-937e-e2d4184c4933 bound to our chassis
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.221 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 677a455b-3478-4811-937e-e2d4184c4933
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.238 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[794e8057-8618-45a7-9a2a-f570039f5819]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.239 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap677a455b-31 in ovnmeta-677a455b-3478-4811-937e-e2d4184c4933 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:01:14 compute-0 ovn_controller[146046]: 2026-01-26T16:01:14Z|00555|binding|INFO|Setting lport f01280ac-7e71-4b0c-bad0-c519d134f076 ovn-installed in OVS
Jan 26 16:01:14 compute-0 ovn_controller[146046]: 2026-01-26T16:01:14Z|00556|binding|INFO|Setting lport f01280ac-7e71-4b0c-bad0-c519d134f076 up in Southbound
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.241 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap677a455b-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.242 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a96fa0-422d-49c8-a480-d4a35c2cbc33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.243 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aab4becd-2a9e-4c93-88bc-bed014843403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 systemd-udevd[295568]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.258 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a9df2c82-0a23-4da4-bf8d-f6de16c8bb4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 systemd-machined[208061]: New machine qemu-72-instance-00000041.
Jan 26 16:01:14 compute-0 NetworkManager[48954]: <info>  [1769443274.2694] device (tapf01280ac-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:01:14 compute-0 NetworkManager[48954]: <info>  [1769443274.2704] device (tapf01280ac-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.274 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5f63b4b8-0e45-4b37-b4c4-e4267790c1d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 systemd[1]: Started Virtual Machine qemu-72-instance-00000041.
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.277 239969 DEBUG nova.storage.rbd_utils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.299 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.316 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9405471b-101f-4ff4-abba-c2e22ce02b41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 NetworkManager[48954]: <info>  [1769443274.3265] manager: (tap677a455b-30): new Veth device (/org/freedesktop/NetworkManager/Devices/256)
Jan 26 16:01:14 compute-0 systemd-udevd[295584]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.325 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb69d1e-3384-4502-bacb-ee7cb1230ef7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.373 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7deeaf7c-8125-4671-aca0-24c7fee16428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 podman[295591]: 2026-01-26 16:01:14.375048505 +0000 UTC m=+0.071598547 container create ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.378 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c5af2ca5-d826-438f-b3d3-f72d0e0f909f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 NetworkManager[48954]: <info>  [1769443274.4088] device (tap677a455b-30): carrier: link connected
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.414 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6860df44-ab29-4769-8b2f-050f816605cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 systemd[1]: Started libpod-conmon-ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75.scope.
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.433 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8803d60d-9cb4-4e78-a657-a56955350db9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap677a455b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476521, 'reachable_time': 27062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295635, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 podman[295591]: 2026-01-26 16:01:14.351604219 +0000 UTC m=+0.048154281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.451 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7ac98855-6576-4311-86be-f2baf8ad3c50]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:e471'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476521, 'tstamp': 476521}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295639, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07434085e6f8dd0c0d3b4601dabeb36943d71dbfb9cc4a92272031b84a1d89ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.470 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[856d97b1-83ac-42e1-88f5-8ee0daf7ea53]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap677a455b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476521, 'reachable_time': 27062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 295640, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07434085e6f8dd0c0d3b4601dabeb36943d71dbfb9cc4a92272031b84a1d89ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07434085e6f8dd0c0d3b4601dabeb36943d71dbfb9cc4a92272031b84a1d89ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07434085e6f8dd0c0d3b4601dabeb36943d71dbfb9cc4a92272031b84a1d89ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:14 compute-0 podman[295591]: 2026-01-26 16:01:14.494723398 +0000 UTC m=+0.191273470 container init ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hamilton, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.496 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[60f9d076-8302-4c2a-a5ec-cc7194f75297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 podman[295591]: 2026-01-26 16:01:14.504064647 +0000 UTC m=+0.200614689 container start ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:01:14 compute-0 podman[295591]: 2026-01-26 16:01:14.508025484 +0000 UTC m=+0.204575526 container attach ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.560 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f5045744-677b-4663-9986-01fb0852be76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.561 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap677a455b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.561 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.562 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap677a455b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.564 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:14 compute-0 NetworkManager[48954]: <info>  [1769443274.5651] manager: (tap677a455b-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/257)
Jan 26 16:01:14 compute-0 kernel: tap677a455b-30: entered promiscuous mode
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.570 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.575 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap677a455b-30, col_values=(('external_ids', {'iface-id': 'be57e329-8724-4480-9b41-724e08bf9152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.577 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:14 compute-0 ovn_controller[146046]: 2026-01-26T16:01:14Z|00557|binding|INFO|Releasing lport be57e329-8724-4480-9b41-724e08bf9152 from this chassis (sb_readonly=0)
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.597 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.600 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.602 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/677a455b-3478-4811-937e-e2d4184c4933.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/677a455b-3478-4811-937e-e2d4184c4933.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.604 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ad51730a-2a6c-473e-a09b-64abfa638e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.605 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-677a455b-3478-4811-937e-e2d4184c4933
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/677a455b-3478-4811-937e-e2d4184c4933.pid.haproxy
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 677a455b-3478-4811-937e-e2d4184c4933
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:01:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:14.606 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'env', 'PROCESS_TAG=haproxy-677a455b-3478-4811-937e-e2d4184c4933', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/677a455b-3478-4811-937e-e2d4184c4933.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:01:14 compute-0 nova_compute[239965]: 2026-01-26 16:01:14.880 239969 DEBUG nova.compute.manager [req-2fcbccd3-afbe-433d-b52c-372f97eff34d req-f798fbf6-2ed2-416c-8c7c-ee5a8a53cf70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Received event network-vif-deleted-a23e88a0-e02f-48ba-8c26-a51810b9f2ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.011 239969 DEBUG nova.compute.manager [req-49ebd394-9356-418e-95d0-28d07b45df51 req-6cd14e2c-6b07-45c2-ab52-c00f7dd671f3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Received event network-vif-plugged-f01280ac-7e71-4b0c-bad0-c519d134f076 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.013 239969 DEBUG oslo_concurrency.lockutils [req-49ebd394-9356-418e-95d0-28d07b45df51 req-6cd14e2c-6b07-45c2-ab52-c00f7dd671f3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.013 239969 DEBUG oslo_concurrency.lockutils [req-49ebd394-9356-418e-95d0-28d07b45df51 req-6cd14e2c-6b07-45c2-ab52-c00f7dd671f3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.013 239969 DEBUG oslo_concurrency.lockutils [req-49ebd394-9356-418e-95d0-28d07b45df51 req-6cd14e2c-6b07-45c2-ab52-c00f7dd671f3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.013 239969 DEBUG nova.compute.manager [req-49ebd394-9356-418e-95d0-28d07b45df51 req-6cd14e2c-6b07-45c2-ab52-c00f7dd671f3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Processing event network-vif-plugged-f01280ac-7e71-4b0c-bad0-c519d134f076 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.015 239969 INFO nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Creating config drive at /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.021 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ev8zqy1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:15 compute-0 podman[295702]: 2026-01-26 16:01:15.041370799 +0000 UTC m=+0.083412956 container create 52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 16:01:15 compute-0 podman[295702]: 2026-01-26 16:01:14.983628184 +0000 UTC m=+0.025670371 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:01:15 compute-0 systemd[1]: Started libpod-conmon-52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800.scope.
Jan 26 16:01:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1740263383' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4171708627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:15 compute-0 ceph-mon[75140]: pgmap v1449: 305 pgs: 305 active+clean; 352 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 5.7 MiB/s wr, 227 op/s
Jan 26 16:01:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da10c2e72c3f60ab96ec9e8c064c2a64a269a78f4b8810eb9e0430e606911bfe/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:15 compute-0 podman[295702]: 2026-01-26 16:01:15.15071253 +0000 UTC m=+0.192754717 container init 52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 16:01:15 compute-0 podman[295702]: 2026-01-26 16:01:15.15808122 +0000 UTC m=+0.200123377 container start 52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.165 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ev8zqy1" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:15 compute-0 neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933[295745]: [NOTICE]   (295756) : New worker (295767) forked
Jan 26 16:01:15 compute-0 neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933[295745]: [NOTICE]   (295756) : Loading success.
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.196 239969 DEBUG nova.storage.rbd_utils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.199 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:15 compute-0 ovn_controller[146046]: 2026-01-26T16:01:15Z|00558|binding|INFO|Releasing lport be57e329-8724-4480-9b41-724e08bf9152 from this chassis (sb_readonly=0)
Jan 26 16:01:15 compute-0 ovn_controller[146046]: 2026-01-26T16:01:15Z|00559|binding|INFO|Releasing lport c26451f4-ab48-4e8d-b25b-9e4988573b7d from this chassis (sb_readonly=0)
Jan 26 16:01:15 compute-0 ovn_controller[146046]: 2026-01-26T16:01:15Z|00560|binding|INFO|Releasing lport 47411fc6-7d46-43a0-b0c2-7c06b22cee9e from this chassis (sb_readonly=0)
Jan 26 16:01:15 compute-0 lvm[295801]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:01:15 compute-0 lvm[295811]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:01:15 compute-0 lvm[295811]: VG ceph_vg1 finished
Jan 26 16:01:15 compute-0 lvm[295801]: VG ceph_vg0 finished
Jan 26 16:01:15 compute-0 lvm[295812]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:01:15 compute-0 lvm[295812]: VG ceph_vg2 finished
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.357 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:15 compute-0 cranky_hamilton[295636]: {}
Jan 26 16:01:15 compute-0 systemd[1]: libpod-ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75.scope: Deactivated successfully.
Jan 26 16:01:15 compute-0 systemd[1]: libpod-ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75.scope: Consumed 1.429s CPU time.
Jan 26 16:01:15 compute-0 podman[295591]: 2026-01-26 16:01:15.45748821 +0000 UTC m=+1.154038252 container died ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:01:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-07434085e6f8dd0c0d3b4601dabeb36943d71dbfb9cc4a92272031b84a1d89ec-merged.mount: Deactivated successfully.
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.505 239969 DEBUG oslo_concurrency.processutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.507 239969 INFO nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Deleting local config drive /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config because it was imported into RBD.
Jan 26 16:01:15 compute-0 podman[295591]: 2026-01-26 16:01:15.53212518 +0000 UTC m=+1.228675222 container remove ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hamilton, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:01:15 compute-0 systemd[1]: libpod-conmon-ae5e44ef6c143ad1af5b8b734cadb6733eecd28677bd3b9ffb90f97c32deef75.scope: Deactivated successfully.
Jan 26 16:01:15 compute-0 NetworkManager[48954]: <info>  [1769443275.5762] manager: (tap819adf29-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/258)
Jan 26 16:01:15 compute-0 kernel: tap819adf29-2d: entered promiscuous mode
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.579 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:15 compute-0 ovn_controller[146046]: 2026-01-26T16:01:15Z|00561|binding|INFO|Claiming lport 819adf29-2d57-4e5d-81f7-041a9ac00baa for this chassis.
Jan 26 16:01:15 compute-0 ovn_controller[146046]: 2026-01-26T16:01:15Z|00562|binding|INFO|819adf29-2d57-4e5d-81f7-041a9ac00baa: Claiming fa:16:3e:cd:10:75 10.100.0.14
Jan 26 16:01:15 compute-0 sudo[295347]: pam_unix(sudo:session): session closed for user root
Jan 26 16:01:15 compute-0 systemd-udevd[295616]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:01:15 compute-0 ovn_controller[146046]: 2026-01-26T16:01:15Z|00563|binding|INFO|Setting lport 819adf29-2d57-4e5d-81f7-041a9ac00baa ovn-installed in OVS
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.600 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.601 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:01:15 compute-0 NetworkManager[48954]: <info>  [1769443275.6046] device (tap819adf29-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:01:15 compute-0 NetworkManager[48954]: <info>  [1769443275.6052] device (tap819adf29-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:01:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:01:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:01:15 compute-0 systemd-machined[208061]: New machine qemu-73-instance-00000042.
Jan 26 16:01:15 compute-0 ovn_controller[146046]: 2026-01-26T16:01:15Z|00564|binding|INFO|Setting lport 819adf29-2d57-4e5d-81f7-041a9ac00baa up in Southbound
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.629 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:10:75 10.100.0.14'], port_security=['fa:16:3e:cd:10:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '96c386e9-e3b3-47c6-b6ca-863e894d5c49', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-677a455b-3478-4811-937e-e2d4184c4933', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e098d809d8b4d0a8747af49a85560a1', 'neutron:revision_number': '2', 'neutron:security_group_ids': '43e1902e-755a-4b29-81ae-0ec275ea79a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=375e748b-fa99-415a-ab22-338d401b8bb9, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=819adf29-2d57-4e5d-81f7-041a9ac00baa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.630 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 819adf29-2d57-4e5d-81f7-041a9ac00baa in datapath 677a455b-3478-4811-937e-e2d4184c4933 bound to our chassis
Jan 26 16:01:15 compute-0 systemd[1]: Started Virtual Machine qemu-73-instance-00000042.
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.632 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 677a455b-3478-4811-937e-e2d4184c4933
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.649 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f41e66ee-ac03-41d7-8d09-7d82bb2bab66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:15 compute-0 sudo[295846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:01:15 compute-0 sudo[295846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:01:15 compute-0 sudo[295846]: pam_unix(sudo:session): session closed for user root
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.692 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b92a209f-ff3d-443e-86d3-9043e7cae249]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.696 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[77bb92d5-9554-46b6-8aa7-b68d5347bee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.736 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[91f3bb71-447c-4f15-afca-cc89b4eb03db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.760 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[35436daf-677b-451e-bdfa-4d94ae7c13ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap677a455b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476521, 'reachable_time': 27062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 295882, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.784 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[97244973-0493-4e07-bf1d-1e4ad11a9610]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap677a455b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476533, 'tstamp': 476533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295883, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap677a455b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476536, 'tstamp': 476536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 295883, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.786 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap677a455b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:15 compute-0 nova_compute[239965]: 2026-01-26 16:01:15.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.791 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap677a455b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.792 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.792 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap677a455b-30, col_values=(('external_ids', {'iface-id': 'be57e329-8724-4480-9b41-724e08bf9152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:15.793 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 306 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 289 op/s
Jan 26 16:01:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.471 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443276.4711096, f50eea8d-23f7-4c65-99fa-f919c99ed80d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.473 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] VM Started (Lifecycle Event)
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.477 239969 DEBUG nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.480 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.484 239969 INFO nova.virt.libvirt.driver [-] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Instance spawned successfully.
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.485 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:01:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:01:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:01:16 compute-0 ceph-mon[75140]: pgmap v1450: 305 pgs: 305 active+clean; 306 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 289 op/s
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.641 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.647 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.651 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.652 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.653 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.653 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.653 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.654 239969 DEBUG nova.virt.libvirt.driver [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.685 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.686 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443276.4722285, f50eea8d-23f7-4c65-99fa-f919c99ed80d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.686 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] VM Paused (Lifecycle Event)
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.726 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.731 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443276.479603, f50eea8d-23f7-4c65-99fa-f919c99ed80d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.731 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] VM Resumed (Lifecycle Event)
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.738 239969 INFO nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Took 12.15 seconds to spawn the instance on the hypervisor.
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.739 239969 DEBUG nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.762 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.767 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.806 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.806 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443276.630472, 96c386e9-e3b3-47c6-b6ca-863e894d5c49 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.807 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] VM Started (Lifecycle Event)
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.828 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.830 239969 INFO nova.compute.manager [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Took 13.32 seconds to build instance.
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.834 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443276.6306112, 96c386e9-e3b3-47c6-b6ca-863e894d5c49 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.835 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] VM Paused (Lifecycle Event)
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.852 239969 DEBUG oslo_concurrency.lockutils [None req-ba0e06d2-fbbf-438b-942f-7c8fcf5557b9 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.407s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.853 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.856 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:16 compute-0 nova_compute[239965]: 2026-01-26 16:01:16.875 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.150 239969 DEBUG nova.compute.manager [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.151 239969 DEBUG oslo_concurrency.lockutils [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.152 239969 DEBUG oslo_concurrency.lockutils [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.152 239969 DEBUG oslo_concurrency.lockutils [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.152 239969 DEBUG nova.compute.manager [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Processing event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.153 239969 DEBUG nova.compute.manager [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.153 239969 DEBUG oslo_concurrency.lockutils [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.154 239969 DEBUG oslo_concurrency.lockutils [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.154 239969 DEBUG oslo_concurrency.lockutils [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.154 239969 DEBUG nova.compute.manager [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] No waiting events found dispatching network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.155 239969 WARNING nova.compute.manager [req-a6393049-75ca-4087-98a9-ef8740422375 req-55b3121d-09be-4d52-ae35-a9e08ef5ad6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received unexpected event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa for instance with vm_state building and task_state spawning.
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.156 239969 DEBUG nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.166 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443277.1587157, 96c386e9-e3b3-47c6-b6ca-863e894d5c49 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.167 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] VM Resumed (Lifecycle Event)
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.169 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.172 239969 INFO nova.virt.libvirt.driver [-] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance spawned successfully.
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.173 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.192 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.198 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.202 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.202 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.203 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.203 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.204 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.204 239969 DEBUG nova.virt.libvirt.driver [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.226 239969 DEBUG nova.compute.manager [req-1a03121d-d09f-4be2-978b-987a098a38cb req-fea1d073-8649-4613-8376-8a3784694c9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Received event network-vif-plugged-f01280ac-7e71-4b0c-bad0-c519d134f076 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.227 239969 DEBUG oslo_concurrency.lockutils [req-1a03121d-d09f-4be2-978b-987a098a38cb req-fea1d073-8649-4613-8376-8a3784694c9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.227 239969 DEBUG oslo_concurrency.lockutils [req-1a03121d-d09f-4be2-978b-987a098a38cb req-fea1d073-8649-4613-8376-8a3784694c9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.227 239969 DEBUG oslo_concurrency.lockutils [req-1a03121d-d09f-4be2-978b-987a098a38cb req-fea1d073-8649-4613-8376-8a3784694c9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.228 239969 DEBUG nova.compute.manager [req-1a03121d-d09f-4be2-978b-987a098a38cb req-fea1d073-8649-4613-8376-8a3784694c9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] No waiting events found dispatching network-vif-plugged-f01280ac-7e71-4b0c-bad0-c519d134f076 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.228 239969 WARNING nova.compute.manager [req-1a03121d-d09f-4be2-978b-987a098a38cb req-fea1d073-8649-4613-8376-8a3784694c9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Received unexpected event network-vif-plugged-f01280ac-7e71-4b0c-bad0-c519d134f076 for instance with vm_state active and task_state None.
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.230 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.281 239969 INFO nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Took 9.48 seconds to spawn the instance on the hypervisor.
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.281 239969 DEBUG nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.342 239969 INFO nova.compute.manager [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Took 10.59 seconds to build instance.
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.360 239969 DEBUG oslo_concurrency.lockutils [None req-db75a402-cc05-4073-83e2-7f614adc5856 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:17 compute-0 nova_compute[239965]: 2026-01-26 16:01:17.836 239969 DEBUG nova.virt.libvirt.driver [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:01:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 306 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.3 MiB/s wr, 242 op/s
Jan 26 16:01:18 compute-0 nova_compute[239965]: 2026-01-26 16:01:18.748 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:18 compute-0 nova_compute[239965]: 2026-01-26 16:01:18.793 239969 INFO nova.compute.manager [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Rescuing
Jan 26 16:01:18 compute-0 nova_compute[239965]: 2026-01-26 16:01:18.794 239969 DEBUG oslo_concurrency.lockutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "refresh_cache-96c386e9-e3b3-47c6-b6ca-863e894d5c49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:18 compute-0 nova_compute[239965]: 2026-01-26 16:01:18.795 239969 DEBUG oslo_concurrency.lockutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquired lock "refresh_cache-96c386e9-e3b3-47c6-b6ca-863e894d5c49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:18 compute-0 nova_compute[239965]: 2026-01-26 16:01:18.795 239969 DEBUG nova.network.neutron [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:01:19 compute-0 ceph-mon[75140]: pgmap v1451: 305 pgs: 305 active+clean; 306 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.3 MiB/s wr, 242 op/s
Jan 26 16:01:19 compute-0 ovn_controller[146046]: 2026-01-26T16:01:19Z|00565|binding|INFO|Releasing lport be57e329-8724-4480-9b41-724e08bf9152 from this chassis (sb_readonly=0)
Jan 26 16:01:19 compute-0 ovn_controller[146046]: 2026-01-26T16:01:19Z|00566|binding|INFO|Releasing lport c26451f4-ab48-4e8d-b25b-9e4988573b7d from this chassis (sb_readonly=0)
Jan 26 16:01:19 compute-0 ovn_controller[146046]: 2026-01-26T16:01:19Z|00567|binding|INFO|Releasing lport 47411fc6-7d46-43a0-b0c2-7c06b22cee9e from this chassis (sb_readonly=0)
Jan 26 16:01:19 compute-0 nova_compute[239965]: 2026-01-26 16:01:19.566 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 323 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 4.8 MiB/s wr, 382 op/s
Jan 26 16:01:20 compute-0 nova_compute[239965]: 2026-01-26 16:01:20.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:21 compute-0 ceph-mon[75140]: pgmap v1452: 305 pgs: 305 active+clean; 323 MiB data, 663 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 4.8 MiB/s wr, 382 op/s
Jan 26 16:01:21 compute-0 ovn_controller[146046]: 2026-01-26T16:01:21Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:8d:b0 10.100.0.14
Jan 26 16:01:21 compute-0 ovn_controller[146046]: 2026-01-26T16:01:21Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:8d:b0 10.100.0.14
Jan 26 16:01:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:21 compute-0 nova_compute[239965]: 2026-01-26 16:01:21.903 239969 DEBUG nova.network.neutron [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Updating instance_info_cache with network_info: [{"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 328 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.1 MiB/s wr, 258 op/s
Jan 26 16:01:21 compute-0 nova_compute[239965]: 2026-01-26 16:01:21.927 239969 DEBUG oslo_concurrency.lockutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Releasing lock "refresh_cache-96c386e9-e3b3-47c6-b6ca-863e894d5c49" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:22 compute-0 ceph-mon[75140]: pgmap v1453: 305 pgs: 305 active+clean; 328 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.1 MiB/s wr, 258 op/s
Jan 26 16:01:22 compute-0 nova_compute[239965]: 2026-01-26 16:01:22.336 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:01:23 compute-0 nova_compute[239965]: 2026-01-26 16:01:23.751 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 328 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.1 MiB/s wr, 230 op/s
Jan 26 16:01:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:01:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 21K writes, 87K keys, 21K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.04 MB/s
                                           Cumulative WAL: 21K writes, 7290 syncs, 3.01 writes per sync, written: 0.08 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 43.73 MB, 0.07 MB/s
                                           Interval WAL: 11K writes, 4406 syncs, 2.56 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:01:25 compute-0 ceph-mon[75140]: pgmap v1454: 305 pgs: 305 active+clean; 328 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.1 MiB/s wr, 230 op/s
Jan 26 16:01:25 compute-0 nova_compute[239965]: 2026-01-26 16:01:25.610 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443270.6086805, 6c804bac-7ee8-45d4-821c-535fac124d08 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:25 compute-0 nova_compute[239965]: 2026-01-26 16:01:25.612 239969 INFO nova.compute.manager [-] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] VM Stopped (Lifecycle Event)
Jan 26 16:01:25 compute-0 nova_compute[239965]: 2026-01-26 16:01:25.631 239969 DEBUG nova.compute.manager [None req-8b5175f2-1695-472c-84a5-e0ed90279a49 - - - - - -] [instance: 6c804bac-7ee8-45d4-821c-535fac124d08] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:25 compute-0 nova_compute[239965]: 2026-01-26 16:01:25.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.2 MiB/s wr, 263 op/s
Jan 26 16:01:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:27 compute-0 ceph-mon[75140]: pgmap v1455: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.2 MiB/s wr, 263 op/s
Jan 26 16:01:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Jan 26 16:01:28 compute-0 nova_compute[239965]: 2026-01-26 16:01:28.582 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:01:28
Jan 26 16:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', 'default.rgw.control', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', '.mgr']
Jan 26 16:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:01:28 compute-0 nova_compute[239965]: 2026-01-26 16:01:28.752 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:28 compute-0 nova_compute[239965]: 2026-01-26 16:01:28.889 239969 DEBUG nova.virt.libvirt.driver [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:01:29 compute-0 ceph-mon[75140]: pgmap v1456: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.1 MiB/s wr, 202 op/s
Jan 26 16:01:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:01:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.4 total, 600.0 interval
                                           Cumulative writes: 24K writes, 94K keys, 24K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.04 MB/s
                                           Cumulative WAL: 24K writes, 8202 syncs, 2.96 writes per sync, written: 0.09 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 44K keys, 11K commit groups, 1.0 writes per commit group, ingest: 45.72 MB, 0.08 MB/s
                                           Interval WAL: 11K writes, 4471 syncs, 2.59 writes per sync, written: 0.04 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:01:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 203 op/s
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:01:30 compute-0 nova_compute[239965]: 2026-01-26 16:01:30.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:01:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:01:30 compute-0 nova_compute[239965]: 2026-01-26 16:01:30.746 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:30 compute-0 nova_compute[239965]: 2026-01-26 16:01:30.984 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:30 compute-0 nova_compute[239965]: 2026-01-26 16:01:30.984 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.001 239969 DEBUG nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.098 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.099 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.109 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.110 239969 INFO nova.compute.claims [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:01:31 compute-0 ceph-mon[75140]: pgmap v1457: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 203 op/s
Jan 26 16:01:31 compute-0 kernel: tape12546e3-3f (unregistering): left promiscuous mode
Jan 26 16:01:31 compute-0 NetworkManager[48954]: <info>  [1769443291.2899] device (tape12546e3-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.295 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:31 compute-0 ovn_controller[146046]: 2026-01-26T16:01:31Z|00568|binding|INFO|Releasing lport e12546e3-3fac-40e3-88c0-ed9ad0bc3880 from this chassis (sb_readonly=0)
Jan 26 16:01:31 compute-0 ovn_controller[146046]: 2026-01-26T16:01:31Z|00569|binding|INFO|Setting lport e12546e3-3fac-40e3-88c0-ed9ad0bc3880 down in Southbound
Jan 26 16:01:31 compute-0 ovn_controller[146046]: 2026-01-26T16:01:31Z|00570|binding|INFO|Removing iface tape12546e3-3f ovn-installed in OVS
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.307 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:8d:b0 10.100.0.14'], port_security=['fa:16:3e:a7:8d:b0 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '764b92e5-31ed-41bd-b90d-067a907d0206', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e12546e3-3fac-40e3-88c0-ed9ad0bc3880) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.309 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e12546e3-3fac-40e3-88c0-ed9ad0bc3880 in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 unbound from our chassis
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.311 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00eb7549-d24b-4657-b244-7664c8a34b20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.312 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e512e4-00a4-4668-a0bd-95f5ae2254d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.313 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace which is not needed anymore
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.351 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:31 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Deactivated successfully.
Jan 26 16:01:31 compute-0 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003f.scope: Consumed 15.720s CPU time.
Jan 26 16:01:31 compute-0 systemd-machined[208061]: Machine qemu-70-instance-0000003f terminated.
Jan 26 16:01:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:31 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[293886]: [NOTICE]   (293890) : haproxy version is 2.8.14-c23fe91
Jan 26 16:01:31 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[293886]: [NOTICE]   (293890) : path to executable is /usr/sbin/haproxy
Jan 26 16:01:31 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[293886]: [WARNING]  (293890) : Exiting Master process...
Jan 26 16:01:31 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[293886]: [WARNING]  (293890) : Exiting Master process...
Jan 26 16:01:31 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[293886]: [ALERT]    (293890) : Current worker (293892) exited with code 143 (Terminated)
Jan 26 16:01:31 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[293886]: [WARNING]  (293890) : All workers exited. Exiting... (0)
Jan 26 16:01:31 compute-0 systemd[1]: libpod-f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3.scope: Deactivated successfully.
Jan 26 16:01:31 compute-0 podman[295991]: 2026-01-26 16:01:31.481504928 +0000 UTC m=+0.052183441 container died f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3-userdata-shm.mount: Deactivated successfully.
Jan 26 16:01:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a859baf1ad07b15b61ad3680dad20539e30c9c74453c3895fc6a6c8d53d98027-merged.mount: Deactivated successfully.
Jan 26 16:01:31 compute-0 podman[295991]: 2026-01-26 16:01:31.54808477 +0000 UTC m=+0.118763273 container cleanup f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 16:01:31 compute-0 systemd[1]: libpod-conmon-f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3.scope: Deactivated successfully.
Jan 26 16:01:31 compute-0 podman[296018]: 2026-01-26 16:01:31.599952882 +0000 UTC m=+0.095009431 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:01:31 compute-0 podman[296054]: 2026-01-26 16:01:31.642614208 +0000 UTC m=+0.070266074 container remove f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.658 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[65e5b6d3-27b6-4190-92d6-4bfb780ab708]: (4, ('Mon Jan 26 04:01:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3)\nf5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3\nMon Jan 26 04:01:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (f5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3)\nf5bded0635884f4c263538066779f93129f2c056f7649651334b52be802458b3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.660 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c3c2e244-c45f-4d29-9523-00f5d39672d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.662 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.664 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:31 compute-0 kernel: tap00eb7549-d0: left promiscuous mode
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.678 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.681 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[79d65340-e398-4f8d-a6f2-5a63935eae4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.691 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.700 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[73f9fd2e-5b46-4502-ad21-5abd9da703ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.702 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf0d2a0-f0eb-4b27-858a-735fa376595b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.719 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e2ae29-bcb1-441f-8e61-be7bcdc93925]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 474930, 'reachable_time': 39513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296082, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d00eb7549\x2dd24b\x2d4657\x2db244\x2d7664c8a34b20.mount: Deactivated successfully.
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.729 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:01:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:31.729 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[18bbdf83-cc73-4508-8951-5869e72dc445]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.902 239969 INFO nova.virt.libvirt.driver [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Instance shutdown successfully after 24 seconds.
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.910 239969 INFO nova.virt.libvirt.driver [-] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Instance destroyed successfully.
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.911 239969 DEBUG nova.objects.instance [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'numa_topology' on Instance uuid 764b92e5-31ed-41bd-b90d-067a907d0206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 801 KiB/s rd, 640 KiB/s wr, 66 op/s
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.927 239969 DEBUG nova.compute.manager [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:31 compute-0 nova_compute[239965]: 2026-01-26 16:01:31.978 239969 DEBUG oslo_concurrency.lockutils [None req-51e934ed-7b54-46e4-818e-f1fc13246169 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3747991931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.047 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.752s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.053 239969 DEBUG nova.compute.provider_tree [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.072 239969 DEBUG nova.scheduler.client.report [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.100 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.101 239969 DEBUG nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:01:32 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3747991931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.171 239969 DEBUG nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.172 239969 DEBUG nova.network.neutron [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.192 239969 INFO nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.211 239969 DEBUG nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.316 239969 DEBUG nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.318 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.318 239969 INFO nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Creating image(s)
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.342 239969 DEBUG nova.storage.rbd_utils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.366 239969 DEBUG nova.storage.rbd_utils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.398 239969 DEBUG nova.storage.rbd_utils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.402 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.440 239969 DEBUG nova.policy [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57040a375df6487fbc604a9b04389eeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.448 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.491 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.493 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.494 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.495 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.495 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.522 239969 DEBUG nova.storage.rbd_utils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.528 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9148cd85-c0ce-49ec-a252-00ee17051d61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.861 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9148cd85-c0ce-49ec-a252-00ee17051d61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:32 compute-0 nova_compute[239965]: 2026-01-26 16:01:32.934 239969 DEBUG nova.storage.rbd_utils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] resizing rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:01:33 compute-0 nova_compute[239965]: 2026-01-26 16:01:33.016 239969 DEBUG nova.objects.instance [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'migration_context' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:33 compute-0 nova_compute[239965]: 2026-01-26 16:01:33.030 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:01:33 compute-0 nova_compute[239965]: 2026-01-26 16:01:33.030 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Ensure instance console log exists: /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:01:33 compute-0 nova_compute[239965]: 2026-01-26 16:01:33.031 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:33 compute-0 nova_compute[239965]: 2026-01-26 16:01:33.031 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:33 compute-0 nova_compute[239965]: 2026-01-26 16:01:33.032 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:33 compute-0 ceph-mon[75140]: pgmap v1458: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 801 KiB/s rd, 640 KiB/s wr, 66 op/s
Jan 26 16:01:33 compute-0 ovn_controller[146046]: 2026-01-26T16:01:33Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:6b:52 10.100.0.13
Jan 26 16:01:33 compute-0 ovn_controller[146046]: 2026-01-26T16:01:33Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:6b:52 10.100.0.13
Jan 26 16:01:33 compute-0 podman[296253]: 2026-01-26 16:01:33.421711342 +0000 UTC m=+0.099232574 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:01:33 compute-0 nova_compute[239965]: 2026-01-26 16:01:33.754 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 120 KiB/s wr, 36 op/s
Jan 26 16:01:34 compute-0 ceph-mon[75140]: pgmap v1459: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 120 KiB/s wr, 36 op/s
Jan 26 16:01:34 compute-0 nova_compute[239965]: 2026-01-26 16:01:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:34 compute-0 nova_compute[239965]: 2026-01-26 16:01:34.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:01:34 compute-0 nova_compute[239965]: 2026-01-26 16:01:34.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:01:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:01:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 18K writes, 73K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.03 MB/s
                                           Cumulative WAL: 18K writes, 5789 syncs, 3.11 writes per sync, written: 0.07 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8479 writes, 34K keys, 8479 commit groups, 1.0 writes per commit group, ingest: 36.41 MB, 0.06 MB/s
                                           Interval WAL: 8479 writes, 3303 syncs, 2.57 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:01:35 compute-0 ovn_controller[146046]: 2026-01-26T16:01:35Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:10:75 10.100.0.14
Jan 26 16:01:35 compute-0 ovn_controller[146046]: 2026-01-26T16:01:35Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:10:75 10.100.0.14
Jan 26 16:01:35 compute-0 nova_compute[239965]: 2026-01-26 16:01:35.746 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:35 compute-0 nova_compute[239965]: 2026-01-26 16:01:35.865 239969 DEBUG nova.network.neutron [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Successfully created port: 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:01:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 445 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 828 KiB/s rd, 6.1 MiB/s wr, 180 op/s
Jan 26 16:01:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.905 239969 DEBUG oslo_concurrency.lockutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "764b92e5-31ed-41bd-b90d-067a907d0206" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.905 239969 DEBUG oslo_concurrency.lockutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.906 239969 DEBUG oslo_concurrency.lockutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.906 239969 DEBUG oslo_concurrency.lockutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.906 239969 DEBUG oslo_concurrency.lockutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.908 239969 INFO nova.compute.manager [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Terminating instance
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.909 239969 DEBUG nova.compute.manager [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.915 239969 INFO nova.virt.libvirt.driver [-] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Instance destroyed successfully.
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.915 239969 DEBUG nova.objects.instance [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'resources' on Instance uuid 764b92e5-31ed-41bd-b90d-067a907d0206 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.928 239969 DEBUG nova.virt.libvirt.vif [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:00:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1801386944',display_name='tempest-DeleteServersTestJSON-server-1801386944',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1801386944',id=63,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-0l8b8fhx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:01:31Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=764b92e5-31ed-41bd-b90d-067a907d0206,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.928 239969 DEBUG nova.network.os_vif_util [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "address": "fa:16:3e:a7:8d:b0", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape12546e3-3f", "ovs_interfaceid": "e12546e3-3fac-40e3-88c0-ed9ad0bc3880", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.929 239969 DEBUG nova.network.os_vif_util [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:8d:b0,bridge_name='br-int',has_traffic_filtering=True,id=e12546e3-3fac-40e3-88c0-ed9ad0bc3880,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape12546e3-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.929 239969 DEBUG os_vif [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:8d:b0,bridge_name='br-int',has_traffic_filtering=True,id=e12546e3-3fac-40e3-88c0-ed9ad0bc3880,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape12546e3-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.931 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.931 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape12546e3-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.934 239969 DEBUG nova.network.neutron [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Successfully updated port: 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.936 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.939 239969 INFO os_vif [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:8d:b0,bridge_name='br-int',has_traffic_filtering=True,id=e12546e3-3fac-40e3-88c0-ed9ad0bc3880,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape12546e3-3f')
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.953 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "refresh_cache-9148cd85-c0ce-49ec-a252-00ee17051d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.953 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquired lock "refresh_cache-9148cd85-c0ce-49ec-a252-00ee17051d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:36 compute-0 nova_compute[239965]: 2026-01-26 16:01:36.954 239969 DEBUG nova.network.neutron [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:01:36 compute-0 ceph-mon[75140]: pgmap v1460: 305 pgs: 305 active+clean; 445 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 828 KiB/s rd, 6.1 MiB/s wr, 180 op/s
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.082 239969 DEBUG nova.compute.manager [req-d778e2bf-bde9-47d2-b57f-725595e5a50e req-88cf966e-38cb-46ba-a79d-464845660200 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-changed-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.082 239969 DEBUG nova.compute.manager [req-d778e2bf-bde9-47d2-b57f-725595e5a50e req-88cf966e-38cb-46ba-a79d-464845660200 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Refreshing instance network info cache due to event network-changed-2d904ed5-936c-4546-9ac5-a1cb1ccd0266. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.083 239969 DEBUG oslo_concurrency.lockutils [req-d778e2bf-bde9-47d2-b57f-725595e5a50e req-88cf966e-38cb-46ba-a79d-464845660200 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-9148cd85-c0ce-49ec-a252-00ee17051d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.314 239969 INFO nova.virt.libvirt.driver [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Deleting instance files /var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206_del
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.315 239969 INFO nova.virt.libvirt.driver [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Deletion of /var/lib/nova/instances/764b92e5-31ed-41bd-b90d-067a907d0206_del complete
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.371 239969 INFO nova.compute.manager [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Took 0.46 seconds to destroy the instance on the hypervisor.
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.371 239969 DEBUG oslo.service.loopingcall [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.372 239969 DEBUG nova.compute.manager [-] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.372 239969 DEBUG nova.network.neutron [-] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.527 239969 DEBUG nova.network.neutron [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:01:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 445 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 6.0 MiB/s wr, 147 op/s
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.931 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "7653a9f9-e5bc-4478-b169-1240875c1675" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.932 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.955 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.965 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "1a3248c2-cd69-499a-8aec-8b03fa196e91" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.965 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:37 compute-0 nova_compute[239965]: 2026-01-26 16:01:37.994 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.049 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.049 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.059 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.059 239969 INFO nova.compute.claims [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.071 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.240 239969 DEBUG nova.network.neutron [-] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.259 239969 INFO nova.compute.manager [-] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Took 0.89 seconds to deallocate network for instance.
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.282 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.324 239969 DEBUG oslo_concurrency.lockutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.579 239969 DEBUG nova.network.neutron [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Updating instance_info_cache with network_info: [{"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.605 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Releasing lock "refresh_cache-9148cd85-c0ce-49ec-a252-00ee17051d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.605 239969 DEBUG nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance network_info: |[{"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.606 239969 DEBUG oslo_concurrency.lockutils [req-d778e2bf-bde9-47d2-b57f-725595e5a50e req-88cf966e-38cb-46ba-a79d-464845660200 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-9148cd85-c0ce-49ec-a252-00ee17051d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.606 239969 DEBUG nova.network.neutron [req-d778e2bf-bde9-47d2-b57f-725595e5a50e req-88cf966e-38cb-46ba-a79d-464845660200 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Refreshing network info cache for port 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.609 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Start _get_guest_xml network_info=[{"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.615 239969 WARNING nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.626 239969 DEBUG nova.virt.libvirt.host [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.627 239969 DEBUG nova.virt.libvirt.host [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.631 239969 DEBUG nova.virt.libvirt.host [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.632 239969 DEBUG nova.virt.libvirt.host [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.632 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.633 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.633 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.633 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.634 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.634 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.634 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.634 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.635 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.635 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.635 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.635 239969 DEBUG nova.virt.hardware [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.639 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3350576621' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.962 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.679s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.968 239969 DEBUG nova.compute.provider_tree [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:38 compute-0 nova_compute[239965]: 2026-01-26 16:01:38.986 239969 DEBUG nova.scheduler.client.report [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:38 compute-0 ceph-mon[75140]: pgmap v1461: 305 pgs: 305 active+clean; 445 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 650 KiB/s rd, 6.0 MiB/s wr, 147 op/s
Jan 26 16:01:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3350576621' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.013 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.964s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.014 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.016 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.022 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.023 239969 INFO nova.compute.claims [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.117 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.118 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.144 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.169 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.186 239969 DEBUG nova.compute.manager [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Received event network-vif-unplugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.186 239969 DEBUG oslo_concurrency.lockutils [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.187 239969 DEBUG oslo_concurrency.lockutils [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.187 239969 DEBUG oslo_concurrency.lockutils [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.188 239969 DEBUG nova.compute.manager [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] No waiting events found dispatching network-vif-unplugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.188 239969 WARNING nova.compute.manager [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Received unexpected event network-vif-unplugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 for instance with vm_state deleted and task_state None.
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.188 239969 DEBUG nova.compute.manager [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Received event network-vif-plugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.188 239969 DEBUG oslo_concurrency.lockutils [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.189 239969 DEBUG oslo_concurrency.lockutils [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.189 239969 DEBUG oslo_concurrency.lockutils [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.189 239969 DEBUG nova.compute.manager [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] No waiting events found dispatching network-vif-plugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.189 239969 WARNING nova.compute.manager [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Received unexpected event network-vif-plugged-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 for instance with vm_state deleted and task_state None.
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.190 239969 DEBUG nova.compute.manager [req-9424ddbf-c499-49e2-8231-a2b1eeb45df8 req-26c89cdb-b366-4e77-9915-50ce81b53f65 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Received event network-vif-deleted-e12546e3-3fac-40e3-88c0-ed9ad0bc3880 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843610831' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.263 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.286 239969 DEBUG nova.storage.rbd_utils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.289 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.325 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.326 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.326 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Creating image(s)
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.349 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7653a9f9-e5bc-4478-b169-1240875c1675_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.370 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7653a9f9-e5bc-4478-b169-1240875c1675_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.392 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7653a9f9-e5bc-4478-b169-1240875c1675_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.395 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.473 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.473 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.474 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.474 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.495 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7653a9f9-e5bc-4478-b169-1240875c1675_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.499 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7653a9f9-e5bc-4478-b169-1240875c1675_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.530 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.791 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7653a9f9-e5bc-4478-b169-1240875c1675_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.866 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] resizing rbd image 7653a9f9-e5bc-4478-b169-1240875c1675_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:01:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2539204849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.907 239969 DEBUG nova.policy [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '008d2b0a4ac3490f9a0fcea9e21be080', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.912 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.915 239969 DEBUG nova.virt.libvirt.vif [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-931503541',display_name='tempest-tempest.common.compute-instance-931503541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-931503541',id=67,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-6gturwy1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:32Z,user_data=None,user_id='57040a375df6487fbc604a9b04389eeb',uuid=9148cd85-c0ce-49ec-a252-00ee17051d61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.916 239969 DEBUG nova.network.os_vif_util [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.918 239969 DEBUG nova.network.os_vif_util [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.920 239969 DEBUG nova.objects.instance [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 6.1 MiB/s wr, 174 op/s
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.970 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <uuid>9148cd85-c0ce-49ec-a252-00ee17051d61</uuid>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <name>instance-00000043</name>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <nova:name>tempest-tempest.common.compute-instance-931503541</nova:name>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:01:38</nova:creationTime>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <nova:user uuid="57040a375df6487fbc604a9b04389eeb">tempest-ServerActionsTestOtherA-980651809-project-member</nova:user>
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <nova:project uuid="4f5d05ded9184fea9b526a3522d47ea5">tempest-ServerActionsTestOtherA-980651809</nova:project>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <nova:port uuid="2d904ed5-936c-4546-9ac5-a1cb1ccd0266">
Jan 26 16:01:39 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <system>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <entry name="serial">9148cd85-c0ce-49ec-a252-00ee17051d61</entry>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <entry name="uuid">9148cd85-c0ce-49ec-a252-00ee17051d61</entry>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     </system>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <os>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   </os>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <features>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   </features>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9148cd85-c0ce-49ec-a252-00ee17051d61_disk">
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config">
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:39 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e9:65:1d"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <target dev="tap2d904ed5-93"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/console.log" append="off"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <video>
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     </video>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:01:39 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:01:39 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:01:39 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:01:39 compute-0 nova_compute[239965]: </domain>
Jan 26 16:01:39 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.977 239969 DEBUG nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Preparing to wait for external event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.977 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.978 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.978 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.979 239969 DEBUG nova.virt.libvirt.vif [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-931503541',display_name='tempest-tempest.common.compute-instance-931503541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-931503541',id=67,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-6gturwy1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:32Z,user_data=None,user_id='57040a375df6487fbc604a9b04389eeb',uuid=9148cd85-c0ce-49ec-a252-00ee17051d61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.980 239969 DEBUG nova.network.os_vif_util [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.981 239969 DEBUG nova.network.os_vif_util [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.982 239969 DEBUG os_vif [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.983 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.983 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.984 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.992 239969 DEBUG nova.objects.instance [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'migration_context' on Instance uuid 7653a9f9-e5bc-4478-b169-1240875c1675 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.997 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.997 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d904ed5-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3843610831' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2539204849' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:39 compute-0 nova_compute[239965]: 2026-01-26 16:01:39.999 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d904ed5-93, col_values=(('external_ids', {'iface-id': '2d904ed5-936c-4546-9ac5-a1cb1ccd0266', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:65:1d', 'vm-uuid': '9148cd85-c0ce-49ec-a252-00ee17051d61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.000 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:40 compute-0 NetworkManager[48954]: <info>  [1769443300.0016] manager: (tap2d904ed5-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.004 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.007 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.008 239969 INFO os_vif [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93')
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.019 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.020 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Ensure instance console log exists: /var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.021 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.022 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.023 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.088 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.089 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.089 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No VIF found with MAC fa:16:3e:e9:65:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.090 239969 INFO nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Using config drive
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.125 239969 DEBUG nova.storage.rbd_utils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3183630657' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.232 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.702s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.239 239969 DEBUG nova.compute.provider_tree [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.264 239969 DEBUG nova.scheduler.client.report [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.292 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.293 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.296 239969 DEBUG oslo_concurrency.lockutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.359 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.360 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.381 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.396 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.472 239969 DEBUG oslo_concurrency.processutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.515 239969 DEBUG nova.network.neutron [req-d778e2bf-bde9-47d2-b57f-725595e5a50e req-88cf966e-38cb-46ba-a79d-464845660200 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Updated VIF entry in instance network info cache for port 2d904ed5-936c-4546-9ac5-a1cb1ccd0266. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.516 239969 DEBUG nova.network.neutron [req-d778e2bf-bde9-47d2-b57f-725595e5a50e req-88cf966e-38cb-46ba-a79d-464845660200 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Updating instance_info_cache with network_info: [{"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.518 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.520 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.520 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Creating image(s)
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.546 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.578 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.604 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.608 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.638 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Successfully created port: 9325f8b5-246d-4519-a1d0-f1b5d35375fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.645 239969 DEBUG nova.policy [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '008d2b0a4ac3490f9a0fcea9e21be080', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.650 239969 INFO nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Creating config drive at /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.656 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsxmohbmi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.689 239969 DEBUG oslo_concurrency.lockutils [req-d778e2bf-bde9-47d2-b57f-725595e5a50e req-88cf966e-38cb-46ba-a79d-464845660200 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-9148cd85-c0ce-49ec-a252-00ee17051d61" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.692 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.692 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.693 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.693 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.714 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.717 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.750 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.794 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsxmohbmi" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.819 239969 DEBUG nova.storage.rbd_utils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:40 compute-0 nova_compute[239965]: 2026-01-26 16:01:40.825 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:41 compute-0 ceph-mon[75140]: pgmap v1462: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 6.1 MiB/s wr, 174 op/s
Jan 26 16:01:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3183630657' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.019 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.301s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.052 239969 DEBUG oslo_concurrency.processutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.053 239969 INFO nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Deleting local config drive /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config because it was imported into RBD.
Jan 26 16:01:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1563503090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:41 compute-0 NetworkManager[48954]: <info>  [1769443301.1195] manager: (tap2d904ed5-93): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Jan 26 16:01:41 compute-0 kernel: tap2d904ed5-93: entered promiscuous mode
Jan 26 16:01:41 compute-0 ovn_controller[146046]: 2026-01-26T16:01:41Z|00571|binding|INFO|Claiming lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 for this chassis.
Jan 26 16:01:41 compute-0 ovn_controller[146046]: 2026-01-26T16:01:41Z|00572|binding|INFO|2d904ed5-936c-4546-9ac5-a1cb1ccd0266: Claiming fa:16:3e:e9:65:1d 10.100.0.5
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.131 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:65:1d 10.100.0.5'], port_security=['fa:16:3e:e9:65:1d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9148cd85-c0ce-49ec-a252-00ee17051d61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '93ada5a0-1112-43ef-8d69-92fe39df7192', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2d904ed5-936c-4546-9ac5-a1cb1ccd0266) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.132 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 bound to our chassis
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.133 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac4ec909-9164-40ab-a894-4d501ce12c59
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.134 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] resizing rbd image 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:01:41 compute-0 ovn_controller[146046]: 2026-01-26T16:01:41Z|00573|binding|INFO|Setting lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 ovn-installed in OVS
Jan 26 16:01:41 compute-0 ovn_controller[146046]: 2026-01-26T16:01:41Z|00574|binding|INFO|Setting lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 up in Southbound
Jan 26 16:01:41 compute-0 systemd-machined[208061]: New machine qemu-74-instance-00000043.
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.166 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5a3c41-ced0-406b-bcf2-2a3508358e11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.168 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:41 compute-0 systemd[1]: Started Virtual Machine qemu-74-instance-00000043.
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.172 239969 DEBUG oslo_concurrency.processutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.184 239969 DEBUG nova.compute.provider_tree [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:41 compute-0 systemd-udevd[296817]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.202 239969 DEBUG nova.scheduler.client.report [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.203 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0bdbad95-0c91-475e-9baa-e743134f0807]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.206 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[86f01dc8-d1d5-42c1-b27d-b43708680e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:41 compute-0 NetworkManager[48954]: <info>  [1769443301.2111] device (tap2d904ed5-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:01:41 compute-0 NetworkManager[48954]: <info>  [1769443301.2122] device (tap2d904ed5-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.235 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[57ea1fdb-abf1-46d8-93c1-298f4e46ae7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.247 239969 DEBUG oslo_concurrency.lockutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.249 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 2.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.249 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.250 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.250 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.257 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6e701ae0-37a2-494b-9bec-a357b06d3a3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ec909-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6b:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473948, 'reachable_time': 32074, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296843, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.276 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f7a1782d-3622-4acd-9e26-a1f8182f9e66]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473962, 'tstamp': 473962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296848, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473965, 'tstamp': 473965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296848, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.278 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ec909-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.284 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.285 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac4ec909-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.286 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.286 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac4ec909-90, col_values=(('external_ids', {'iface-id': '47411fc6-7d46-43a0-b0c2-7c06b22cee9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.287 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.294 239969 DEBUG nova.objects.instance [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'migration_context' on Instance uuid 1a3248c2-cd69-499a-8aec-8b03fa196e91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.310 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.311 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Ensure instance console log exists: /var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.311 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.312 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.312 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.313 239969 INFO nova.scheduler.client.report [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Deleted allocations for instance 764b92e5-31ed-41bd-b90d-067a907d0206
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.374 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.375 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:41.376 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.383 239969 DEBUG oslo_concurrency.lockutils [None req-1b9edd4e-566e-44fe-902d-b5963a8a43bf 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "764b92e5-31ed-41bd-b90d-067a907d0206" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.478s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.535 239969 DEBUG nova.compute.manager [req-6ae5b15d-d14c-4572-8ac2-f92dfd2f55e3 req-533c333c-0e3a-4964-9e9c-42233bf3376c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.536 239969 DEBUG oslo_concurrency.lockutils [req-6ae5b15d-d14c-4572-8ac2-f92dfd2f55e3 req-533c333c-0e3a-4964-9e9c-42233bf3376c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.536 239969 DEBUG oslo_concurrency.lockutils [req-6ae5b15d-d14c-4572-8ac2-f92dfd2f55e3 req-533c333c-0e3a-4964-9e9c-42233bf3376c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.536 239969 DEBUG oslo_concurrency.lockutils [req-6ae5b15d-d14c-4572-8ac2-f92dfd2f55e3 req-533c333c-0e3a-4964-9e9c-42233bf3376c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.536 239969 DEBUG nova.compute.manager [req-6ae5b15d-d14c-4572-8ac2-f92dfd2f55e3 req-533c333c-0e3a-4964-9e9c-42233bf3376c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Processing event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.624 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Successfully created port: 48c9a7b6-229f-4386-b150-b99a89d048a8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:01:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/200741880' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.808 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.879 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Successfully updated port: 9325f8b5-246d-4519-a1d0-f1b5d35375fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.898 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "refresh_cache-7653a9f9-e5bc-4478-b169-1240875c1675" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.898 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquired lock "refresh_cache-7653a9f9-e5bc-4478-b169-1240875c1675" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.898 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.907 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.908 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000041 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.915 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.915 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000042 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.919 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.919 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000043 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.924 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.924 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:01:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 701 KiB/s rd, 6.4 MiB/s wr, 186 op/s
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.980 239969 DEBUG nova.compute.manager [req-5600dffb-9372-403a-a34f-0e15ba0b8f62 req-3649b8dd-971d-4d14-9e48-e26ae17de4d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Received event network-changed-9325f8b5-246d-4519-a1d0-f1b5d35375fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.981 239969 DEBUG nova.compute.manager [req-5600dffb-9372-403a-a34f-0e15ba0b8f62 req-3649b8dd-971d-4d14-9e48-e26ae17de4d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Refreshing instance network info cache due to event network-changed-9325f8b5-246d-4519-a1d0-f1b5d35375fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:01:41 compute-0 nova_compute[239965]: 2026-01-26 16:01:41.981 239969 DEBUG oslo_concurrency.lockutils [req-5600dffb-9372-403a-a34f-0e15ba0b8f62 req-3649b8dd-971d-4d14-9e48-e26ae17de4d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7653a9f9-e5bc-4478-b169-1240875c1675" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1563503090' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/200741880' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.081 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.192 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.193 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3364MB free_disk=59.82385678309947GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.193 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.194 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.272 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance fbc49a44-5efe-4d87-aea7-b26f7232c224 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.272 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance f50eea8d-23f7-4c65-99fa-f919c99ed80d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.272 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 96c386e9-e3b3-47c6-b6ca-863e894d5c49 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.272 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 9148cd85-c0ce-49ec-a252-00ee17051d61 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.272 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 7653a9f9-e5bc-4478-b169-1240875c1675 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.272 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 1a3248c2-cd69-499a-8aec-8b03fa196e91 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.273 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.273 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.395 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.452 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443302.4521525, 9148cd85-c0ce-49ec-a252-00ee17051d61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.453 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] VM Started (Lifecycle Event)
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.456 239969 DEBUG nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.460 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.463 239969 INFO nova.virt.libvirt.driver [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance spawned successfully.
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.464 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.471 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.475 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.483 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.483 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.484 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.484 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.484 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.485 239969 DEBUG nova.virt.libvirt.driver [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.493 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.493 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443302.4555502, 9148cd85-c0ce-49ec-a252-00ee17051d61 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.493 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] VM Paused (Lifecycle Event)
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.519 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.522 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443302.4606085, 9148cd85-c0ce-49ec-a252-00ee17051d61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.522 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] VM Resumed (Lifecycle Event)
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.537 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.540 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.545 239969 INFO nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Took 10.23 seconds to spawn the instance on the hypervisor.
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.546 239969 DEBUG nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.578 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.626 239969 INFO nova.compute.manager [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Took 11.56 seconds to build instance.
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.648 239969 DEBUG oslo_concurrency.lockutils [None req-1a54f53e-ce74-4204-b592-993337c72f5f 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224813789' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.968 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:42 compute-0 nova_compute[239965]: 2026-01-26 16:01:42.991 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.007 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:43 compute-0 ceph-mon[75140]: pgmap v1463: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 701 KiB/s rd, 6.4 MiB/s wr, 186 op/s
Jan 26 16:01:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4224813789' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.039 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.040 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.052 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Successfully updated port: 48c9a7b6-229f-4386-b150-b99a89d048a8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.069 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "refresh_cache-1a3248c2-cd69-499a-8aec-8b03fa196e91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.070 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquired lock "refresh_cache-1a3248c2-cd69-499a-8aec-8b03fa196e91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.070 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.260 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.266 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Updating instance_info_cache with network_info: [{"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.289 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Releasing lock "refresh_cache-7653a9f9-e5bc-4478-b169-1240875c1675" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.290 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Instance network_info: |[{"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.290 239969 DEBUG oslo_concurrency.lockutils [req-5600dffb-9372-403a-a34f-0e15ba0b8f62 req-3649b8dd-971d-4d14-9e48-e26ae17de4d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7653a9f9-e5bc-4478-b169-1240875c1675" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.291 239969 DEBUG nova.network.neutron [req-5600dffb-9372-403a-a34f-0e15ba0b8f62 req-3649b8dd-971d-4d14-9e48-e26ae17de4d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Refreshing network info cache for port 9325f8b5-246d-4519-a1d0-f1b5d35375fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.294 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Start _get_guest_xml network_info=[{"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.301 239969 WARNING nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.306 239969 DEBUG nova.virt.libvirt.host [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.307 239969 DEBUG nova.virt.libvirt.host [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.316 239969 DEBUG nova.virt.libvirt.host [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.317 239969 DEBUG nova.virt.libvirt.host [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.318 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.318 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.319 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.319 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.319 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.320 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.320 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.320 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.321 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.321 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.321 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.322 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.326 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.769 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:01:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 700 KiB/s rd, 6.4 MiB/s wr, 184 op/s
Jan 26 16:01:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/885024616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.970 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:43 compute-0 nova_compute[239965]: 2026-01-26 16:01:43.998 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7653a9f9-e5bc-4478-b169-1240875c1675_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.003 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/885024616' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.068 239969 DEBUG nova.compute.manager [req-5fd52290-ce1a-4e6d-afcb-dd46a299bf12 req-15adce7e-ff21-4edb-a9b8-481c9389feb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.069 239969 DEBUG oslo_concurrency.lockutils [req-5fd52290-ce1a-4e6d-afcb-dd46a299bf12 req-15adce7e-ff21-4edb-a9b8-481c9389feb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.070 239969 DEBUG oslo_concurrency.lockutils [req-5fd52290-ce1a-4e6d-afcb-dd46a299bf12 req-15adce7e-ff21-4edb-a9b8-481c9389feb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.070 239969 DEBUG oslo_concurrency.lockutils [req-5fd52290-ce1a-4e6d-afcb-dd46a299bf12 req-15adce7e-ff21-4edb-a9b8-481c9389feb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.071 239969 DEBUG nova.compute.manager [req-5fd52290-ce1a-4e6d-afcb-dd46a299bf12 req-15adce7e-ff21-4edb-a9b8-481c9389feb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] No waiting events found dispatching network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.071 239969 WARNING nova.compute.manager [req-5fd52290-ce1a-4e6d-afcb-dd46a299bf12 req-15adce7e-ff21-4edb-a9b8-481c9389feb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received unexpected event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 for instance with vm_state active and task_state None.
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.178 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.179 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.191 239969 DEBUG nova.compute.manager [req-eda2a1f3-911d-4439-a3ba-c0785ed11c89 req-a2acf010-cb1d-437e-98e4-c97f4fa1cd64 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Received event network-changed-48c9a7b6-229f-4386-b150-b99a89d048a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.192 239969 DEBUG nova.compute.manager [req-eda2a1f3-911d-4439-a3ba-c0785ed11c89 req-a2acf010-cb1d-437e-98e4-c97f4fa1cd64 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Refreshing instance network info cache due to event network-changed-48c9a7b6-229f-4386-b150-b99a89d048a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.193 239969 DEBUG oslo_concurrency.lockutils [req-eda2a1f3-911d-4439-a3ba-c0785ed11c89 req-a2acf010-cb1d-437e-98e4-c97f4fa1cd64 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-1a3248c2-cd69-499a-8aec-8b03fa196e91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.205 239969 DEBUG nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.290 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.292 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.302 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.303 239969 INFO nova.compute.claims [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.518 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020571109' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.737 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.733s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.742 239969 DEBUG nova.virt.libvirt.vif [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659135611',display_name='tempest-tempest.common.compute-instance-659135611-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659135611-1',id=68,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-ze65gd0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCreateTestJSON-1803223404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:39Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=7653a9f9-e5bc-4478-b169-1240875c1675,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.743 239969 DEBUG nova.network.os_vif_util [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.746 239969 DEBUG nova.network.os_vif_util [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:47:bb,bridge_name='br-int',has_traffic_filtering=True,id=9325f8b5-246d-4519-a1d0-f1b5d35375fc,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9325f8b5-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.750 239969 DEBUG nova.objects.instance [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'pci_devices' on Instance uuid 7653a9f9-e5bc-4478-b169-1240875c1675 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.766 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <uuid>7653a9f9-e5bc-4478-b169-1240875c1675</uuid>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <name>instance-00000044</name>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <nova:name>tempest-tempest.common.compute-instance-659135611-1</nova:name>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:01:43</nova:creationTime>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <nova:user uuid="008d2b0a4ac3490f9a0fcea9e21be080">tempest-MultipleCreateTestJSON-1803223404-project-member</nova:user>
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <nova:project uuid="df35e1b31a074e6c9a56000deadb0acc">tempest-MultipleCreateTestJSON-1803223404</nova:project>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <nova:port uuid="9325f8b5-246d-4519-a1d0-f1b5d35375fc">
Jan 26 16:01:44 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <system>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <entry name="serial">7653a9f9-e5bc-4478-b169-1240875c1675</entry>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <entry name="uuid">7653a9f9-e5bc-4478-b169-1240875c1675</entry>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     </system>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <os>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   </os>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <features>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   </features>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7653a9f9-e5bc-4478-b169-1240875c1675_disk">
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7653a9f9-e5bc-4478-b169-1240875c1675_disk.config">
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:44 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:4e:47:bb"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <target dev="tap9325f8b5-24"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675/console.log" append="off"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <video>
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     </video>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:01:44 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:01:44 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:01:44 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:01:44 compute-0 nova_compute[239965]: </domain>
Jan 26 16:01:44 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.775 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Preparing to wait for external event network-vif-plugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.775 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.776 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.776 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.777 239969 DEBUG nova.virt.libvirt.vif [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659135611',display_name='tempest-tempest.common.compute-instance-659135611-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659135611-1',id=68,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-ze65gd0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCreateTestJSON-1803223404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:39Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=7653a9f9-e5bc-4478-b169-1240875c1675,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.778 239969 DEBUG nova.network.os_vif_util [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.779 239969 DEBUG nova.network.os_vif_util [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:47:bb,bridge_name='br-int',has_traffic_filtering=True,id=9325f8b5-246d-4519-a1d0-f1b5d35375fc,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9325f8b5-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.780 239969 DEBUG os_vif [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:47:bb,bridge_name='br-int',has_traffic_filtering=True,id=9325f8b5-246d-4519-a1d0-f1b5d35375fc,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9325f8b5-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.781 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.782 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.783 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.788 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9325f8b5-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.789 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9325f8b5-24, col_values=(('external_ids', {'iface-id': '9325f8b5-246d-4519-a1d0-f1b5d35375fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:47:bb', 'vm-uuid': '7653a9f9-e5bc-4478-b169-1240875c1675'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:44 compute-0 NetworkManager[48954]: <info>  [1769443304.7934] manager: (tap9325f8b5-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/261)
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.800 239969 INFO os_vif [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:47:bb,bridge_name='br-int',has_traffic_filtering=True,id=9325f8b5-246d-4519-a1d0-f1b5d35375fc,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9325f8b5-24')
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.886 239969 DEBUG nova.network.neutron [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Updating instance_info_cache with network_info: [{"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.900 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.900 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.901 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No VIF found with MAC fa:16:3e:4e:47:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.902 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Using config drive
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.949 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7653a9f9-e5bc-4478-b169-1240875c1675_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.990 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Releasing lock "refresh_cache-1a3248c2-cd69-499a-8aec-8b03fa196e91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.991 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Instance network_info: |[{"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.995 239969 DEBUG oslo_concurrency.lockutils [req-eda2a1f3-911d-4439-a3ba-c0785ed11c89 req-a2acf010-cb1d-437e-98e4-c97f4fa1cd64 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-1a3248c2-cd69-499a-8aec-8b03fa196e91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:44 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.995 239969 DEBUG nova.network.neutron [req-eda2a1f3-911d-4439-a3ba-c0785ed11c89 req-a2acf010-cb1d-437e-98e4-c97f4fa1cd64 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Refreshing network info cache for port 48c9a7b6-229f-4386-b150-b99a89d048a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:44.999 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Start _get_guest_xml network_info=[{"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.023 239969 DEBUG oslo_concurrency.lockutils [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.023 239969 DEBUG oslo_concurrency.lockutils [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.024 239969 DEBUG nova.compute.manager [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.026 239969 WARNING nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.031 239969 DEBUG nova.compute.manager [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.031 239969 DEBUG nova.objects.instance [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'flavor' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.034 239969 DEBUG nova.virt.libvirt.host [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.034 239969 DEBUG nova.virt.libvirt.host [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.038 239969 DEBUG nova.virt.libvirt.host [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.038 239969 DEBUG nova.virt.libvirt.host [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.039 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.039 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.039 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.040 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:01:45 compute-0 ceph-mon[75140]: pgmap v1464: 305 pgs: 305 active+clean; 385 MiB data, 707 MiB used, 59 GiB / 60 GiB avail; 700 KiB/s rd, 6.4 MiB/s wr, 184 op/s
Jan 26 16:01:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4020571109' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.040 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.040 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.040 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.041 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.041 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.041 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.041 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.041 239969 DEBUG nova.virt.hardware [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.044 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.088 239969 DEBUG nova.virt.libvirt.driver [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.147 239969 DEBUG nova.network.neutron [req-5600dffb-9372-403a-a34f-0e15ba0b8f62 req-3649b8dd-971d-4d14-9e48-e26ae17de4d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Updated VIF entry in instance network info cache for port 9325f8b5-246d-4519-a1d0-f1b5d35375fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.149 239969 DEBUG nova.network.neutron [req-5600dffb-9372-403a-a34f-0e15ba0b8f62 req-3649b8dd-971d-4d14-9e48-e26ae17de4d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Updating instance_info_cache with network_info: [{"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/173526898' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.164 239969 DEBUG oslo_concurrency.lockutils [req-5600dffb-9372-403a-a34f-0e15ba0b8f62 req-3649b8dd-971d-4d14-9e48-e26ae17de4d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7653a9f9-e5bc-4478-b169-1240875c1675" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.175 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.657s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.183 239969 DEBUG nova.compute.provider_tree [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.197 239969 DEBUG nova.scheduler.client.report [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.218 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.218 239969 DEBUG nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.270 239969 DEBUG nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.270 239969 DEBUG nova.network.neutron [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.290 239969 INFO nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.311 239969 DEBUG nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.407 239969 DEBUG nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.409 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.410 239969 INFO nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Creating image(s)
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.433 239969 DEBUG nova.storage.rbd_utils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.457 239969 DEBUG nova.storage.rbd_utils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.479 239969 DEBUG nova.storage.rbd_utils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.483 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.521 239969 DEBUG nova.policy [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3c7cba75e9fd41599b1c9a3388447cdd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aee99e5b6af74088bd848cecc9592e82', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.525 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Creating config drive at /var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675/disk.config
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.530 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm24nh9ab execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.569 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.570 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.571 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.571 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.598 239969 DEBUG nova.storage.rbd_utils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.602 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2959254863' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.661 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.688 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.695 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.751 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm24nh9ab" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.796 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7653a9f9-e5bc-4478-b169-1240875c1675_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.818 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675/disk.config 7653a9f9-e5bc-4478-b169-1240875c1675_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.858 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 465 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.6 MiB/s wr, 316 op/s
Jan 26 16:01:45 compute-0 nova_compute[239965]: 2026-01-26 16:01:45.942 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.005 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675/disk.config 7653a9f9-e5bc-4478-b169-1240875c1675_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.006 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Deleting local config drive /var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675/disk.config because it was imported into RBD.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.031 239969 DEBUG nova.storage.rbd_utils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] resizing rbd image 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:01:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/173526898' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2959254863' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:46 compute-0 kernel: tap819adf29-2d (unregistering): left promiscuous mode
Jan 26 16:01:46 compute-0 NetworkManager[48954]: <info>  [1769443306.0561] device (tap819adf29-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:01:46 compute-0 NetworkManager[48954]: <info>  [1769443306.0661] manager: (tap9325f8b5-24): new Tun device (/org/freedesktop/NetworkManager/Devices/262)
Jan 26 16:01:46 compute-0 kernel: tap9325f8b5-24: entered promiscuous mode
Jan 26 16:01:46 compute-0 ovn_controller[146046]: 2026-01-26T16:01:46Z|00575|binding|INFO|Releasing lport 819adf29-2d57-4e5d-81f7-041a9ac00baa from this chassis (sb_readonly=0)
Jan 26 16:01:46 compute-0 ovn_controller[146046]: 2026-01-26T16:01:46Z|00576|binding|INFO|Setting lport 819adf29-2d57-4e5d-81f7-041a9ac00baa down in Southbound
Jan 26 16:01:46 compute-0 ovn_controller[146046]: 2026-01-26T16:01:46Z|00577|binding|INFO|Claiming lport 9325f8b5-246d-4519-a1d0-f1b5d35375fc for this chassis.
Jan 26 16:01:46 compute-0 ovn_controller[146046]: 2026-01-26T16:01:46Z|00578|binding|INFO|9325f8b5-246d-4519-a1d0-f1b5d35375fc: Claiming fa:16:3e:4e:47:bb 10.100.0.9
Jan 26 16:01:46 compute-0 ovn_controller[146046]: 2026-01-26T16:01:46Z|00579|binding|INFO|Removing iface tap819adf29-2d ovn-installed in OVS
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.088 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:47:bb 10.100.0.9'], port_security=['fa:16:3e:4e:47:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7653a9f9-e5bc-4478-b169-1240875c1675', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eaa8c06b-92d0-4145-8d40-c4fadf59354c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5abb02-464d-4773-99b2-5341ce12ab00, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9325f8b5-246d-4519-a1d0-f1b5d35375fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.091 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:10:75 10.100.0.14'], port_security=['fa:16:3e:cd:10:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '96c386e9-e3b3-47c6-b6ca-863e894d5c49', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-677a455b-3478-4811-937e-e2d4184c4933', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e098d809d8b4d0a8747af49a85560a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43e1902e-755a-4b29-81ae-0ec275ea79a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=375e748b-fa99-415a-ab22-338d401b8bb9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=819adf29-2d57-4e5d-81f7-041a9ac00baa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.092 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9325f8b5-246d-4519-a1d0-f1b5d35375fc in datapath ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 unbound from our chassis
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.094 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca8d6d0f-aa78-49e5-bed8-8ca0c9363435
Jan 26 16:01:46 compute-0 systemd-udevd[297306]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.114 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[02b853aa-a917-42d3-bb3b-765b3b745e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.115 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca8d6d0f-a1 in ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.116 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 NetworkManager[48954]: <info>  [1769443306.1187] device (tap9325f8b5-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:01:46 compute-0 NetworkManager[48954]: <info>  [1769443306.1196] device (tap9325f8b5-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:01:46 compute-0 systemd-machined[208061]: New machine qemu-75-instance-00000044.
Jan 26 16:01:46 compute-0 systemd[1]: Started Virtual Machine qemu-75-instance-00000044.
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.117 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca8d6d0f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.120 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f5cc066c-99dc-47ba-bcd9-cdaa36b02624]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.125 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[65e53390-08ca-435c-8c4f-0d4350b12294]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000042.scope: Deactivated successfully.
Jan 26 16:01:46 compute-0 systemd[1]: machine-qemu\x2d73\x2dinstance\x2d00000042.scope: Consumed 18.668s CPU time.
Jan 26 16:01:46 compute-0 ovn_controller[146046]: 2026-01-26T16:01:46Z|00580|binding|INFO|Setting lport 9325f8b5-246d-4519-a1d0-f1b5d35375fc ovn-installed in OVS
Jan 26 16:01:46 compute-0 ovn_controller[146046]: 2026-01-26T16:01:46Z|00581|binding|INFO|Setting lport 9325f8b5-246d-4519-a1d0-f1b5d35375fc up in Southbound
Jan 26 16:01:46 compute-0 systemd-machined[208061]: Machine qemu-73-instance-00000042 terminated.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.135 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.145 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[945b7910-49b0-4b31-80f5-30126a6f15bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.158 239969 DEBUG nova.network.neutron [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Successfully created port: 7c9c333e-8516-448a-be45-a7c687b16c5a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.170 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1cf753fa-ab28-4d50-8df9-d50f48c16369]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.197 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0a3fc7bc-c198-4190-ba76-c7c0d5d37a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 systemd-udevd[297304]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:46 compute-0 NetworkManager[48954]: <info>  [1769443306.2097] manager: (tapca8d6d0f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/263)
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.208 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a0895f65-bbdb-4251-a708-eaca2c6c2d05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.216 239969 DEBUG nova.objects.instance [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'migration_context' on Instance uuid 458dfc7d-cb02-4d38-a6ba-a52234a1af2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.230 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.230 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Ensure instance console log exists: /var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.231 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.231 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.231 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.258 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[250095d9-5623-4538-83fd-2cbc1e05f00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.263 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[35a0f459-7b95-40ff-8c72-42f859b6a0d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 NetworkManager[48954]: <info>  [1769443306.2788] manager: (tap819adf29-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/264)
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.279 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.284 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 NetworkManager[48954]: <info>  [1769443306.2973] device (tapca8d6d0f-a0): carrier: link connected
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.304 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e358a0ae-a505-4b39-a837-8f196a3f7b2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.324 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c9b0ce22-128f-4e0e-bdae-fb3db3dc7346]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca8d6d0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a1:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479710, 'reachable_time': 24335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297372, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.342 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1f6ab404-ca1d-440f-baba-275baa5a3149]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:a1d3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479710, 'tstamp': 479710}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297373, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.346 239969 DEBUG nova.compute.manager [req-7cbef1d4-38a4-4fe3-a52e-998464c808ee req-e00de826-6b3d-440e-92ea-9adced4e502e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Received event network-vif-plugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.346 239969 DEBUG oslo_concurrency.lockutils [req-7cbef1d4-38a4-4fe3-a52e-998464c808ee req-e00de826-6b3d-440e-92ea-9adced4e502e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.346 239969 DEBUG oslo_concurrency.lockutils [req-7cbef1d4-38a4-4fe3-a52e-998464c808ee req-e00de826-6b3d-440e-92ea-9adced4e502e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.347 239969 DEBUG oslo_concurrency.lockutils [req-7cbef1d4-38a4-4fe3-a52e-998464c808ee req-e00de826-6b3d-440e-92ea-9adced4e502e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.347 239969 DEBUG nova.compute.manager [req-7cbef1d4-38a4-4fe3-a52e-998464c808ee req-e00de826-6b3d-440e-92ea-9adced4e502e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Processing event network-vif-plugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.363 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4541c9-9baa-4523-b23a-8fdcb6e48eae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca8d6d0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a1:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479710, 'reachable_time': 24335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 297381, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3323695779' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.406 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b202f5e4-faaa-4a75-a3c1-ef38e1ec1808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.415 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.720s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.416 239969 DEBUG nova.virt.libvirt.vif [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659135611',display_name='tempest-tempest.common.compute-instance-659135611-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659135611-2',id=69,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-ze65gd0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCre
ateTestJSON-1803223404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:40Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=1a3248c2-cd69-499a-8aec-8b03fa196e91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.416 239969 DEBUG nova.network.os_vif_util [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.417 239969 DEBUG nova.network.os_vif_util [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:7b:8c,bridge_name='br-int',has_traffic_filtering=True,id=48c9a7b6-229f-4386-b150-b99a89d048a8,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48c9a7b6-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.418 239969 DEBUG nova.objects.instance [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'pci_devices' on Instance uuid 1a3248c2-cd69-499a-8aec-8b03fa196e91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.424 239969 DEBUG nova.compute.manager [req-ce2d54ff-e3ae-4b52-9c84-35df7614c935 req-639bdd59-a03e-419f-8aa0-be418cedac33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-unplugged-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.425 239969 DEBUG oslo_concurrency.lockutils [req-ce2d54ff-e3ae-4b52-9c84-35df7614c935 req-639bdd59-a03e-419f-8aa0-be418cedac33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.425 239969 DEBUG oslo_concurrency.lockutils [req-ce2d54ff-e3ae-4b52-9c84-35df7614c935 req-639bdd59-a03e-419f-8aa0-be418cedac33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.426 239969 DEBUG oslo_concurrency.lockutils [req-ce2d54ff-e3ae-4b52-9c84-35df7614c935 req-639bdd59-a03e-419f-8aa0-be418cedac33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.426 239969 DEBUG nova.compute.manager [req-ce2d54ff-e3ae-4b52-9c84-35df7614c935 req-639bdd59-a03e-419f-8aa0-be418cedac33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] No waiting events found dispatching network-vif-unplugged-819adf29-2d57-4e5d-81f7-041a9ac00baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.426 239969 WARNING nova.compute.manager [req-ce2d54ff-e3ae-4b52-9c84-35df7614c935 req-639bdd59-a03e-419f-8aa0-be418cedac33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received unexpected event network-vif-unplugged-819adf29-2d57-4e5d-81f7-041a9ac00baa for instance with vm_state active and task_state rescuing.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.440 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <uuid>1a3248c2-cd69-499a-8aec-8b03fa196e91</uuid>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <name>instance-00000045</name>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <nova:name>tempest-tempest.common.compute-instance-659135611-2</nova:name>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:01:45</nova:creationTime>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <nova:user uuid="008d2b0a4ac3490f9a0fcea9e21be080">tempest-MultipleCreateTestJSON-1803223404-project-member</nova:user>
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <nova:project uuid="df35e1b31a074e6c9a56000deadb0acc">tempest-MultipleCreateTestJSON-1803223404</nova:project>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <nova:port uuid="48c9a7b6-229f-4386-b150-b99a89d048a8">
Jan 26 16:01:46 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <system>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <entry name="serial">1a3248c2-cd69-499a-8aec-8b03fa196e91</entry>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <entry name="uuid">1a3248c2-cd69-499a-8aec-8b03fa196e91</entry>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     </system>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <os>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   </os>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <features>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   </features>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/1a3248c2-cd69-499a-8aec-8b03fa196e91_disk">
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/1a3248c2-cd69-499a-8aec-8b03fa196e91_disk.config">
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:62:7b:8c"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <target dev="tap48c9a7b6-22"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91/console.log" append="off"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <video>
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     </video>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:01:46 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:01:46 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:01:46 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:01:46 compute-0 nova_compute[239965]: </domain>
Jan 26 16:01:46 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.441 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Preparing to wait for external event network-vif-plugged-48c9a7b6-229f-4386-b150-b99a89d048a8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.441 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.442 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.442 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.443 239969 DEBUG nova.virt.libvirt.vif [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659135611',display_name='tempest-tempest.common.compute-instance-659135611-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659135611-2',id=69,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-ze65gd0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-M
ultipleCreateTestJSON-1803223404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:40Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=1a3248c2-cd69-499a-8aec-8b03fa196e91,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.443 239969 DEBUG nova.network.os_vif_util [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.444 239969 DEBUG nova.network.os_vif_util [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:7b:8c,bridge_name='br-int',has_traffic_filtering=True,id=48c9a7b6-229f-4386-b150-b99a89d048a8,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48c9a7b6-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.445 239969 DEBUG os_vif [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:7b:8c,bridge_name='br-int',has_traffic_filtering=True,id=48c9a7b6-229f-4386-b150-b99a89d048a8,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48c9a7b6-22') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.445 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.446 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.446 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.449 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.450 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48c9a7b6-22, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.450 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap48c9a7b6-22, col_values=(('external_ids', {'iface-id': '48c9a7b6-229f-4386-b150-b99a89d048a8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:62:7b:8c', 'vm-uuid': '1a3248c2-cd69-499a-8aec-8b03fa196e91'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.451 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 NetworkManager[48954]: <info>  [1769443306.4526] manager: (tap48c9a7b6-22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/265)
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.453 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.461 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.463 239969 INFO os_vif [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:7b:8c,bridge_name='br-int',has_traffic_filtering=True,id=48c9a7b6-229f-4386-b150-b99a89d048a8,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48c9a7b6-22')
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.481 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[027f0f46-f8c8-4916-a122-a74e941ca224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.483 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca8d6d0f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.483 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.484 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca8d6d0f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 NetworkManager[48954]: <info>  [1769443306.4873] manager: (tapca8d6d0f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/266)
Jan 26 16:01:46 compute-0 kernel: tapca8d6d0f-a0: entered promiscuous mode
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.495 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.496 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca8d6d0f-a0, col_values=(('external_ids', {'iface-id': '20125a47-5d33-44a5-95b7-646e9eff5c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.498 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 ovn_controller[146046]: 2026-01-26T16:01:46Z|00582|binding|INFO|Releasing lport 20125a47-5d33-44a5-95b7-646e9eff5c4a from this chassis (sb_readonly=0)
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.520 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.526 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.527 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.527 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No VIF found with MAC fa:16:3e:62:7b:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.528 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Using config drive
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.535 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca8d6d0f-aa78-49e5-bed8-8ca0c9363435.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca8d6d0f-aa78-49e5-bed8-8ca0c9363435.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.536 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9f957f9e-a4d1-4f34-b5a7-3e00ac4f3887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.537 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/ca8d6d0f-aa78-49e5-bed8-8ca0c9363435.pid.haproxy
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID ca8d6d0f-aa78-49e5-bed8-8ca0c9363435
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:01:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:46.538 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'env', 'PROCESS_TAG=haproxy-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca8d6d0f-aa78-49e5-bed8-8ca0c9363435.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.559 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.566 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443306.5662267, 7653a9f9-e5bc-4478-b169-1240875c1675 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.567 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] VM Started (Lifecycle Event)
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.568 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.569 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443291.547234, 764b92e5-31ed-41bd-b90d-067a907d0206 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.569 239969 INFO nova.compute.manager [-] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] VM Stopped (Lifecycle Event)
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.571 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.584 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.588 239969 INFO nova.virt.libvirt.driver [-] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Instance spawned successfully.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.588 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.601 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.605 239969 DEBUG nova.compute.manager [None req-c3437243-80b9-4141-87cb-f7c14256e45d - - - - - -] [instance: 764b92e5-31ed-41bd-b90d-067a907d0206] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.609 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.612 239969 DEBUG nova.network.neutron [req-eda2a1f3-911d-4439-a3ba-c0785ed11c89 req-a2acf010-cb1d-437e-98e4-c97f4fa1cd64 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Updated VIF entry in instance network info cache for port 48c9a7b6-229f-4386-b150-b99a89d048a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.613 239969 DEBUG nova.network.neutron [req-eda2a1f3-911d-4439-a3ba-c0785ed11c89 req-a2acf010-cb1d-437e-98e4-c97f4fa1cd64 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Updating instance_info_cache with network_info: [{"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.615 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.615 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.616 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.616 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.617 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.617 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.654 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.655 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443306.566461, 7653a9f9-e5bc-4478-b169-1240875c1675 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.655 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] VM Paused (Lifecycle Event)
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.659 239969 DEBUG oslo_concurrency.lockutils [req-eda2a1f3-911d-4439-a3ba-c0785ed11c89 req-a2acf010-cb1d-437e-98e4-c97f4fa1cd64 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-1a3248c2-cd69-499a-8aec-8b03fa196e91" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.687 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.690 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443306.5745404, 7653a9f9-e5bc-4478-b169-1240875c1675 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.690 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] VM Resumed (Lifecycle Event)
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.696 239969 INFO nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Took 7.37 seconds to spawn the instance on the hypervisor.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.696 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.708 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.715 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.744 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.770 239969 INFO nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Took 8.75 seconds to build instance.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.788 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.888 239969 INFO nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance shutdown successfully after 24 seconds.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.893 239969 INFO nova.virt.libvirt.driver [-] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance destroyed successfully.
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.894 239969 DEBUG nova.objects.instance [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.915 239969 INFO nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Attempting rescue
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.916 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.920 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.921 239969 INFO nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Creating image(s)
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.952 239969 DEBUG nova.storage.rbd_utils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.961 239969 DEBUG nova.objects.instance [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.965 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Creating config drive at /var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91/disk.config
Jan 26 16:01:46 compute-0 nova_compute[239965]: 2026-01-26 16:01:46.970 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ukpzy0z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:47 compute-0 podman[297479]: 2026-01-26 16:01:47.025153506 +0000 UTC m=+0.108594943 container create 370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:01:47 compute-0 podman[297479]: 2026-01-26 16:01:46.942227053 +0000 UTC m=+0.025668500 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.057 239969 DEBUG nova.storage.rbd_utils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:47 compute-0 ceph-mon[75140]: pgmap v1465: 305 pgs: 305 active+clean; 465 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.6 MiB/s wr, 316 op/s
Jan 26 16:01:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3323695779' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:47 compute-0 systemd[1]: Started libpod-conmon-370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe.scope.
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.089 239969 DEBUG nova.storage.rbd_utils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.094 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c46d82d2ebeb4a788993426fe0e9c18afc2956c0347f5aa6dab7421511ac695/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.125 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7ukpzy0z" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.128 239969 DEBUG nova.network.neutron [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Successfully updated port: 7c9c333e-8516-448a-be45-a7c687b16c5a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.131 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.158 239969 DEBUG nova.storage.rbd_utils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.162 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91/disk.config 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:47 compute-0 podman[297479]: 2026-01-26 16:01:47.196428124 +0000 UTC m=+0.279869581 container init 370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.196 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.197 239969 DEBUG oslo_concurrency.lockutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.198 239969 DEBUG oslo_concurrency.lockutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.198 239969 DEBUG oslo_concurrency.lockutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:47 compute-0 podman[297479]: 2026-01-26 16:01:47.205757203 +0000 UTC m=+0.289198630 container start 370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.224 239969 DEBUG nova.storage.rbd_utils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:47 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[297548]: [NOTICE]   (297584) : New worker (297610) forked
Jan 26 16:01:47 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[297548]: [NOTICE]   (297584) : Loading success.
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.241 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.284 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 819adf29-2d57-4e5d-81f7-041a9ac00baa in datapath 677a455b-3478-4811-937e-e2d4184c4933 unbound from our chassis
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.287 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 677a455b-3478-4811-937e-e2d4184c4933
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.291 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "refresh_cache-458dfc7d-cb02-4d38-a6ba-a52234a1af2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.292 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquired lock "refresh_cache-458dfc7d-cb02-4d38-a6ba-a52234a1af2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.293 239969 DEBUG nova.network.neutron [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.306 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[67429a11-95c9-46e9-869f-a9c8532fac43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.339 239969 DEBUG oslo_concurrency.processutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91/disk.config 1a3248c2-cd69-499a-8aec-8b03fa196e91_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.340 239969 INFO nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Deleting local config drive /var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91/disk.config because it was imported into RBD.
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.343 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[645e851e-6619-4930-8262-7ebe21c98db3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.348 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[52d7902a-f62d-4ee2-be96-93259ba8003d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.384 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1d5f3035-dd76-4bc3-928f-05ef7665f3cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 NetworkManager[48954]: <info>  [1769443307.4069] manager: (tap48c9a7b6-22): new Tun device (/org/freedesktop/NetworkManager/Devices/267)
Jan 26 16:01:47 compute-0 systemd-udevd[297356]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:47 compute-0 kernel: tap48c9a7b6-22: entered promiscuous mode
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.416 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:47 compute-0 ovn_controller[146046]: 2026-01-26T16:01:47Z|00583|binding|INFO|Claiming lport 48c9a7b6-229f-4386-b150-b99a89d048a8 for this chassis.
Jan 26 16:01:47 compute-0 ovn_controller[146046]: 2026-01-26T16:01:47Z|00584|binding|INFO|48c9a7b6-229f-4386-b150-b99a89d048a8: Claiming fa:16:3e:62:7b:8c 10.100.0.7
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.425 239969 DEBUG nova.network.neutron [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.425 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:7b:8c 10.100.0.7'], port_security=['fa:16:3e:62:7b:8c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1a3248c2-cd69-499a-8aec-8b03fa196e91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eaa8c06b-92d0-4145-8d40-c4fadf59354c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5abb02-464d-4773-99b2-5341ce12ab00, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=48c9a7b6-229f-4386-b150-b99a89d048a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:47 compute-0 NetworkManager[48954]: <info>  [1769443307.4297] device (tap48c9a7b6-22): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:01:47 compute-0 NetworkManager[48954]: <info>  [1769443307.4303] device (tap48c9a7b6-22): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.434 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[26a57bf2-5906-42cd-bdbc-9b17132db0a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap677a455b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476521, 'reachable_time': 27062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297652, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_controller[146046]: 2026-01-26T16:01:47Z|00585|binding|INFO|Setting lport 48c9a7b6-229f-4386-b150-b99a89d048a8 ovn-installed in OVS
Jan 26 16:01:47 compute-0 ovn_controller[146046]: 2026-01-26T16:01:47Z|00586|binding|INFO|Setting lport 48c9a7b6-229f-4386-b150-b99a89d048a8 up in Southbound
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.446 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:47 compute-0 systemd-machined[208061]: New machine qemu-76-instance-00000045.
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.463 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[38335172-d179-4400-90a5-c6cc92176998]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap677a455b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476533, 'tstamp': 476533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297664, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap677a455b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476536, 'tstamp': 476536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297664, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.465 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap677a455b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.466 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:47 compute-0 systemd[1]: Started Virtual Machine qemu-76-instance-00000045.
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.472 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.474 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap677a455b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.476 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.477 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap677a455b-30, col_values=(('external_ids', {'iface-id': 'be57e329-8724-4480-9b41-724e08bf9152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.478 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.480 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 48c9a7b6-229f-4386-b150-b99a89d048a8 in datapath ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 unbound from our chassis
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.482 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca8d6d0f-aa78-49e5-bed8-8ca0c9363435
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.502 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7093d6-f309-428a-8c39-ea695e666c32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.538 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a4bdc1c3-cfde-4e5e-8cdc-e3b7fef1c5b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.541 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[91a6df22-f290-4c90-be0a-128700ea8d9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.584 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ec3a77c6-00b1-44da-b81f-a43f29b591b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.609 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[64778aae-dc59-441f-b0e6-6545b040b95f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca8d6d0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a1:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 176], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479710, 'reachable_time': 24335, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297677, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.628 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6c81665c-9851-4071-a52e-6dee5048c723]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapca8d6d0f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479724, 'tstamp': 479724}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297678, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapca8d6d0f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 479728, 'tstamp': 479728}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 297678, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.629 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca8d6d0f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.631 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.636 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.637 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca8d6d0f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.637 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.638 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca8d6d0f-a0, col_values=(('external_ids', {'iface-id': '20125a47-5d33-44a5-95b7-646e9eff5c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:47.638 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.643 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.402s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.644 239969 DEBUG nova.objects.instance [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'migration_context' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.669 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.672 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Start _get_guest_xml network_info=[{"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "vif_mac": "fa:16:3e:cd:10:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.672 239969 DEBUG nova.objects.instance [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'resources' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.701 239969 WARNING nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.707 239969 DEBUG nova.virt.libvirt.host [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.708 239969 DEBUG nova.virt.libvirt.host [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.712 239969 DEBUG nova.virt.libvirt.host [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.713 239969 DEBUG nova.virt.libvirt.host [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.714 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.714 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.715 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.715 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.715 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.716 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.716 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.716 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.717 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.717 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.717 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.718 239969 DEBUG nova.virt.hardware [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.718 239969 DEBUG nova.objects.instance [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.737 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 465 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 173 op/s
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.953 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443307.9522867, 1a3248c2-cd69-499a-8aec-8b03fa196e91 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.954 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] VM Started (Lifecycle Event)
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.974 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.984 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443307.9540732, 1a3248c2-cd69-499a-8aec-8b03fa196e91 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:47 compute-0 nova_compute[239965]: 2026-01-26 16:01:47.986 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] VM Paused (Lifecycle Event)
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.002 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.008 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.032 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.122 239969 DEBUG nova.network.neutron [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Updating instance_info_cache with network_info: [{"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.142 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Releasing lock "refresh_cache-458dfc7d-cb02-4d38-a6ba-a52234a1af2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.143 239969 DEBUG nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Instance network_info: |[{"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.145 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Start _get_guest_xml network_info=[{"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.148 239969 WARNING nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.153 239969 DEBUG nova.virt.libvirt.host [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.154 239969 DEBUG nova.virt.libvirt.host [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.159 239969 DEBUG nova.virt.libvirt.host [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.159 239969 DEBUG nova.virt.libvirt.host [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.159 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.160 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.160 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.161 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.161 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.161 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.162 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.162 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.162 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.163 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.163 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.163 239969 DEBUG nova.virt.hardware [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.166 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795924004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:48.379 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.406 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.413 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.497 239969 DEBUG nova.compute.manager [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Received event network-vif-plugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.498 239969 DEBUG oslo_concurrency.lockutils [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.499 239969 DEBUG oslo_concurrency.lockutils [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.499 239969 DEBUG oslo_concurrency.lockutils [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.499 239969 DEBUG nova.compute.manager [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] No waiting events found dispatching network-vif-plugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.500 239969 WARNING nova.compute.manager [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Received unexpected event network-vif-plugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc for instance with vm_state active and task_state None.
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.500 239969 DEBUG nova.compute.manager [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Received event network-changed-7c9c333e-8516-448a-be45-a7c687b16c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.500 239969 DEBUG nova.compute.manager [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Refreshing instance network info cache due to event network-changed-7c9c333e-8516-448a-be45-a7c687b16c5a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.500 239969 DEBUG oslo_concurrency.lockutils [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-458dfc7d-cb02-4d38-a6ba-a52234a1af2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.501 239969 DEBUG oslo_concurrency.lockutils [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-458dfc7d-cb02-4d38-a6ba-a52234a1af2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.501 239969 DEBUG nova.network.neutron [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Refreshing network info cache for port 7c9c333e-8516-448a-be45-a7c687b16c5a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.569 239969 DEBUG nova.compute.manager [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.570 239969 DEBUG oslo_concurrency.lockutils [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.570 239969 DEBUG oslo_concurrency.lockutils [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.570 239969 DEBUG oslo_concurrency.lockutils [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.571 239969 DEBUG nova.compute.manager [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] No waiting events found dispatching network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.571 239969 WARNING nova.compute.manager [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received unexpected event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa for instance with vm_state active and task_state rescuing.
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.571 239969 DEBUG nova.compute.manager [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Received event network-vif-plugged-48c9a7b6-229f-4386-b150-b99a89d048a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.571 239969 DEBUG oslo_concurrency.lockutils [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.571 239969 DEBUG oslo_concurrency.lockutils [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.572 239969 DEBUG oslo_concurrency.lockutils [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.572 239969 DEBUG nova.compute.manager [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Processing event network-vif-plugged-48c9a7b6-229f-4386-b150-b99a89d048a8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.572 239969 DEBUG nova.compute.manager [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Received event network-vif-plugged-48c9a7b6-229f-4386-b150-b99a89d048a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.572 239969 DEBUG oslo_concurrency.lockutils [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.572 239969 DEBUG oslo_concurrency.lockutils [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.573 239969 DEBUG oslo_concurrency.lockutils [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.573 239969 DEBUG nova.compute.manager [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] No waiting events found dispatching network-vif-plugged-48c9a7b6-229f-4386-b150-b99a89d048a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.573 239969 WARNING nova.compute.manager [req-b10fe928-7737-4912-8c9f-c54669f20fc7 req-0b00e18c-10d5-412c-8edc-9b765d143f8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Received unexpected event network-vif-plugged-48c9a7b6-229f-4386-b150-b99a89d048a8 for instance with vm_state building and task_state spawning.
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.574 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.578 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443308.5784705, 1a3248c2-cd69-499a-8aec-8b03fa196e91 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.580 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] VM Resumed (Lifecycle Event)
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.596 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.600 239969 INFO nova.virt.libvirt.driver [-] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Instance spawned successfully.
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.603 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.605 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.617 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.628 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.628 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.632 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.633 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.633 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.634 239969 DEBUG nova.virt.libvirt.driver [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.643 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:01:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3305324241' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:01:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:01:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3305324241' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.706 239969 INFO nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Took 8.19 seconds to spawn the instance on the hypervisor.
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.706 239969 DEBUG nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.769 239969 INFO nova.compute.manager [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Took 10.71 seconds to build instance.
Jan 26 16:01:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3123980269' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.786 239969 DEBUG oslo_concurrency.lockutils [None req-5feaf2e6-2f17-44d5-8bd4-b615722ecd69 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.801 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00332650157431416 of space, bias 1.0, pg target 0.9979504722942479 quantized to 32 (current 32)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001016213836913355 of space, bias 1.0, pg target 0.3048641510740065 quantized to 32 (current 32)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.791724717034068e-07 of space, bias 4.0, pg target 0.0009350069660440882 quantized to 16 (current 16)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:01:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.823 239969 DEBUG nova.storage.rbd_utils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:48 compute-0 nova_compute[239965]: 2026-01-26 16:01:48.828 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/441828946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:49 compute-0 ceph-mon[75140]: pgmap v1466: 305 pgs: 305 active+clean; 465 MiB data, 741 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 173 op/s
Jan 26 16:01:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2795924004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3305324241' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:01:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3305324241' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:01:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3123980269' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/441828946' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.074 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.075 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3085939121' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.447 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.448 239969 DEBUG nova.virt.libvirt.vif [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-472679170',display_name='tempest-DeleteServersTestJSON-server-472679170',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-472679170',id=70,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-jr4kh4b2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:45Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=458dfc7d-cb02-4d38-a6ba-a52234a1af2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.449 239969 DEBUG nova.network.os_vif_util [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.450 239969 DEBUG nova.network.os_vif_util [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fc:be,bridge_name='br-int',has_traffic_filtering=True,id=7c9c333e-8516-448a-be45-a7c687b16c5a,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c9c333e-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.451 239969 DEBUG nova.objects.instance [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 458dfc7d-cb02-4d38-a6ba-a52234a1af2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.469 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <uuid>458dfc7d-cb02-4d38-a6ba-a52234a1af2a</uuid>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <name>instance-00000046</name>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:name>tempest-DeleteServersTestJSON-server-472679170</nova:name>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:01:48</nova:creationTime>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:user uuid="3c7cba75e9fd41599b1c9a3388447cdd">tempest-DeleteServersTestJSON-234439961-project-member</nova:user>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:project uuid="aee99e5b6af74088bd848cecc9592e82">tempest-DeleteServersTestJSON-234439961</nova:project>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:port uuid="7c9c333e-8516-448a-be45-a7c687b16c5a">
Jan 26 16:01:49 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <system>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="serial">458dfc7d-cb02-4d38-a6ba-a52234a1af2a</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="uuid">458dfc7d-cb02-4d38-a6ba-a52234a1af2a</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </system>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <os>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </os>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <features>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </features>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk.config">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:2a:fc:be"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <target dev="tap7c9c333e-85"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a/console.log" append="off"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <video>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </video>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:01:49 compute-0 nova_compute[239965]: </domain>
Jan 26 16:01:49 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.470 239969 DEBUG nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Preparing to wait for external event network-vif-plugged-7c9c333e-8516-448a-be45-a7c687b16c5a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.470 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.471 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.471 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.472 239969 DEBUG nova.virt.libvirt.vif [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-472679170',display_name='tempest-DeleteServersTestJSON-server-472679170',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-472679170',id=70,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-jr4kh4b2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:45Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=458dfc7d-cb02-4d38-a6ba-a52234a1af2a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.472 239969 DEBUG nova.network.os_vif_util [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.472 239969 DEBUG nova.network.os_vif_util [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fc:be,bridge_name='br-int',has_traffic_filtering=True,id=7c9c333e-8516-448a-be45-a7c687b16c5a,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c9c333e-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.473 239969 DEBUG os_vif [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fc:be,bridge_name='br-int',has_traffic_filtering=True,id=7c9c333e-8516-448a-be45-a7c687b16c5a,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c9c333e-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.474 239969 DEBUG nova.network.neutron [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Updated VIF entry in instance network info cache for port 7c9c333e-8516-448a-be45-a7c687b16c5a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.474 239969 DEBUG nova.network.neutron [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Updating instance_info_cache with network_info: [{"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.476 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.476 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.477 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.481 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.482 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c9c333e-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.482 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7c9c333e-85, col_values=(('external_ids', {'iface-id': '7c9c333e-8516-448a-be45-a7c687b16c5a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:fc:be', 'vm-uuid': '458dfc7d-cb02-4d38-a6ba-a52234a1af2a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:49 compute-0 NetworkManager[48954]: <info>  [1769443309.4851] manager: (tap7c9c333e-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/268)
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.489 239969 DEBUG oslo_concurrency.lockutils [req-a9fc354c-949e-45b3-814f-df9672e40687 req-7691f364-0884-4af5-9a9d-92245b876e23 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-458dfc7d-cb02-4d38-a6ba-a52234a1af2a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.491 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.491 239969 INFO os_vif [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fc:be,bridge_name='br-int',has_traffic_filtering=True,id=7c9c333e-8516-448a-be45-a7c687b16c5a,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c9c333e-85')
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.544 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.544 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.544 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] No VIF found with MAC fa:16:3e:2a:fc:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.545 239969 INFO nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Using config drive
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.569 239969 DEBUG nova.storage.rbd_utils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:01:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315438385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.656 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.657 239969 DEBUG nova.virt.libvirt.vif [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:01:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1808774340',display_name='tempest-ServerRescueNegativeTestJSON-server-1808774340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1808774340',id=66,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1e098d809d8b4d0a8747af49a85560a1',ramdisk_id='',reservation_id='r-xtk9l70v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2133238807',owner_user_name='tempest-ServerRescueNegativeTestJSON-2133238807-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:17Z,user_data=None,user_id='2b5ecf8e47344a3d9fec2e1bf4244aae',uuid=96c386e9-e3b3-47c6-b6ca-863e894d5c49,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "vif_mac": "fa:16:3e:cd:10:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.658 239969 DEBUG nova.network.os_vif_util [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converting VIF {"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "vif_mac": "fa:16:3e:cd:10:75"}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.658 239969 DEBUG nova.network.os_vif_util [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:10:75,bridge_name='br-int',has_traffic_filtering=True,id=819adf29-2d57-4e5d-81f7-041a9ac00baa,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap819adf29-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.660 239969 DEBUG nova.objects.instance [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.679 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <uuid>96c386e9-e3b3-47c6-b6ca-863e894d5c49</uuid>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <name>instance-00000042</name>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerRescueNegativeTestJSON-server-1808774340</nova:name>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:01:47</nova:creationTime>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:user uuid="2b5ecf8e47344a3d9fec2e1bf4244aae">tempest-ServerRescueNegativeTestJSON-2133238807-project-member</nova:user>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:project uuid="1e098d809d8b4d0a8747af49a85560a1">tempest-ServerRescueNegativeTestJSON-2133238807</nova:project>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <nova:port uuid="819adf29-2d57-4e5d-81f7-041a9ac00baa">
Jan 26 16:01:49 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <system>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="serial">96c386e9-e3b3-47c6-b6ca-863e894d5c49</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="uuid">96c386e9-e3b3-47c6-b6ca-863e894d5c49</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </system>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <os>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </os>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <features>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </features>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.rescue">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <target dev="vdb" bus="virtio"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config.rescue">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:01:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:cd:10:75"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <target dev="tap819adf29-2d"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/console.log" append="off"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <video>
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </video>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:01:49 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:01:49 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:01:49 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:01:49 compute-0 nova_compute[239965]: </domain>
Jan 26 16:01:49 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.693 239969 INFO nova.virt.libvirt.driver [-] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance destroyed successfully.
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.836 239969 INFO nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Creating config drive at /var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a/disk.config
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.847 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7t919ly execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 532 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.1 MiB/s wr, 247 op/s
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.932 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.933 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.933 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.933 239969 DEBUG nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] No VIF found with MAC fa:16:3e:cd:10:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.934 239969 INFO nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Using config drive
Jan 26 16:01:49 compute-0 nova_compute[239965]: 2026-01-26 16:01:49.970 239969 DEBUG nova.storage.rbd_utils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.000 239969 DEBUG nova.objects.instance [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.005 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp7t919ly" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.037 239969 DEBUG nova.storage.rbd_utils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] rbd image 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.041 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a/disk.config 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.085 239969 DEBUG nova.objects.instance [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'keypairs' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3085939121' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2315438385' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.189 239969 DEBUG oslo_concurrency.processutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a/disk.config 458dfc7d-cb02-4d38-a6ba-a52234a1af2a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.189 239969 INFO nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Deleting local config drive /var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a/disk.config because it was imported into RBD.
Jan 26 16:01:50 compute-0 kernel: tap7c9c333e-85: entered promiscuous mode
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.2501] manager: (tap7c9c333e-85): new Tun device (/org/freedesktop/NetworkManager/Devices/269)
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00587|binding|INFO|Claiming lport 7c9c333e-8516-448a-be45-a7c687b16c5a for this chassis.
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00588|binding|INFO|7c9c333e-8516-448a-be45-a7c687b16c5a: Claiming fa:16:3e:2a:fc:be 10.100.0.6
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.256 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.263 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:fc:be 10.100.0.6'], port_security=['fa:16:3e:2a:fc:be 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '458dfc7d-cb02-4d38-a6ba-a52234a1af2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7c9c333e-8516-448a-be45-a7c687b16c5a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.265 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7c9c333e-8516-448a-be45-a7c687b16c5a in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 bound to our chassis
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.267 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.282 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f58c72d8-ad81-4107-ae97-e51456f7b6a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.283 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap00eb7549-d1 in ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00589|binding|INFO|Setting lport 7c9c333e-8516-448a-be45-a7c687b16c5a ovn-installed in OVS
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00590|binding|INFO|Setting lport 7c9c333e-8516-448a-be45-a7c687b16c5a up in Southbound
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.287 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap00eb7549-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.287 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7d6a734a-2250-4fb0-86e9-95497a88d110]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.290 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[50ad37b7-23f3-4c92-aa69-5f69f4efc734]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.292 239969 DEBUG oslo_concurrency.lockutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "7653a9f9-e5bc-4478-b169-1240875c1675" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.293 239969 DEBUG oslo_concurrency.lockutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.293 239969 DEBUG oslo_concurrency.lockutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.294 239969 DEBUG oslo_concurrency.lockutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.294 239969 DEBUG oslo_concurrency.lockutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:50 compute-0 systemd-udevd[297944]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.296 239969 INFO nova.compute.manager [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Terminating instance
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.300 239969 DEBUG nova.compute.manager [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.304 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[19d8add7-56b9-4493-83d4-deb499e23ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.3146] device (tap7c9c333e-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.3152] device (tap7c9c333e-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:01:50 compute-0 systemd-machined[208061]: New machine qemu-77-instance-00000046.
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.332 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[adf42f04-8064-45c6-85b1-1851d2058c3a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 systemd[1]: Started Virtual Machine qemu-77-instance-00000046.
Jan 26 16:01:50 compute-0 kernel: tap9325f8b5-24 (unregistering): left promiscuous mode
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.3633] device (tap9325f8b5-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.377 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00591|binding|INFO|Releasing lport 9325f8b5-246d-4519-a1d0-f1b5d35375fc from this chassis (sb_readonly=0)
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00592|binding|INFO|Setting lport 9325f8b5-246d-4519-a1d0-f1b5d35375fc down in Southbound
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00593|binding|INFO|Removing iface tap9325f8b5-24 ovn-installed in OVS
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.380 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.385 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:47:bb 10.100.0.9'], port_security=['fa:16:3e:4e:47:bb 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7653a9f9-e5bc-4478-b169-1240875c1675', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eaa8c06b-92d0-4145-8d40-c4fadf59354c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5abb02-464d-4773-99b2-5341ce12ab00, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9325f8b5-246d-4519-a1d0-f1b5d35375fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.387 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3ee7b645-777d-4877-9c45-7605df26db09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.396 239969 INFO nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Creating config drive at /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config.rescue
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.4013] manager: (tap00eb7549-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/270)
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.399 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bd485c11-22a6-4a63-966e-4f6ac0d08459]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.401 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1mignw2d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:50 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000044.scope: Deactivated successfully.
Jan 26 16:01:50 compute-0 systemd[1]: machine-qemu\x2d75\x2dinstance\x2d00000044.scope: Consumed 4.097s CPU time.
Jan 26 16:01:50 compute-0 systemd-machined[208061]: Machine qemu-75-instance-00000044 terminated.
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.445 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.459 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8587f586-b003-4479-8b55-6f123aa8275a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.463 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a4592903-b89e-4f58-a7bf-59fc7a9c3bcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.470 239969 DEBUG oslo_concurrency.lockutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "1a3248c2-cd69-499a-8aec-8b03fa196e91" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.471 239969 DEBUG oslo_concurrency.lockutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.472 239969 DEBUG oslo_concurrency.lockutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.472 239969 DEBUG oslo_concurrency.lockutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.472 239969 DEBUG oslo_concurrency.lockutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.474 239969 INFO nova.compute.manager [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Terminating instance
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.475 239969 DEBUG nova.compute.manager [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.4942] device (tap00eb7549-d0): carrier: link connected
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.501 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3f31298d-c19f-408d-8b74-582fb26558af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 kernel: tap48c9a7b6-22 (unregistering): left promiscuous mode
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.5180] device (tap48c9a7b6-22): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.527 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4e67c1-71a7-47fa-b834-6b031b953065]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480130, 'reachable_time': 44910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297994, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00594|binding|INFO|Releasing lport 48c9a7b6-229f-4386-b150-b99a89d048a8 from this chassis (sb_readonly=0)
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00595|binding|INFO|Setting lport 48c9a7b6-229f-4386-b150-b99a89d048a8 down in Southbound
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00596|binding|INFO|Removing iface tap48c9a7b6-22 ovn-installed in OVS
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.546 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:62:7b:8c 10.100.0.7'], port_security=['fa:16:3e:62:7b:8c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1a3248c2-cd69-499a-8aec-8b03fa196e91', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eaa8c06b-92d0-4145-8d40-c4fadf59354c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5abb02-464d-4773-99b2-5341ce12ab00, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=48c9a7b6-229f-4386-b150-b99a89d048a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.553 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1mignw2d" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.556 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[454e0ddc-52be-4677-973e-cf32c61b6583]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea9:aa8a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 480130, 'tstamp': 480130}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298004, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000045.scope: Deactivated successfully.
Jan 26 16:01:50 compute-0 systemd[1]: machine-qemu\x2d76\x2dinstance\x2d00000045.scope: Consumed 2.328s CPU time.
Jan 26 16:01:50 compute-0 systemd-machined[208061]: Machine qemu-76-instance-00000045 terminated.
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.580 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[94ca7287-962b-44a2-8e69-04db96c1c7b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap00eb7549-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a9:aa:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 180], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480130, 'reachable_time': 44910, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298029, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.598 239969 DEBUG nova.storage.rbd_utils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] rbd image 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.602 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config.rescue 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.617 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9ee851-59fe-4f31-b909-3d8f6fd30889]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.634 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.642 239969 INFO nova.virt.libvirt.driver [-] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Instance destroyed successfully.
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.643 239969 DEBUG nova.objects.instance [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'resources' on Instance uuid 7653a9f9-e5bc-4478-b169-1240875c1675 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.669 239969 DEBUG nova.virt.libvirt.vif [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659135611',display_name='tempest-tempest.common.compute-instance-659135611-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659135611-1',id=68,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-ze65gd0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCreateTestJSON-1803223404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:01:46Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=7653a9f9-e5bc-4478-b169-1240875c1675,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.670 239969 DEBUG nova.network.os_vif_util [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "address": "fa:16:3e:4e:47:bb", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9325f8b5-24", "ovs_interfaceid": "9325f8b5-246d-4519-a1d0-f1b5d35375fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.670 239969 DEBUG nova.network.os_vif_util [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:47:bb,bridge_name='br-int',has_traffic_filtering=True,id=9325f8b5-246d-4519-a1d0-f1b5d35375fc,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9325f8b5-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.671 239969 DEBUG os_vif [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:47:bb,bridge_name='br-int',has_traffic_filtering=True,id=9325f8b5-246d-4519-a1d0-f1b5d35375fc,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9325f8b5-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.672 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.673 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9325f8b5-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.674 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.675 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[27c931af-a95b-424e-9b18-9260ddc98d6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.676 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.677 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.677 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00eb7549-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.6788] manager: (tap00eb7549-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/271)
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.679 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.681 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.686 239969 INFO os_vif [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:47:bb,bridge_name='br-int',has_traffic_filtering=True,id=9325f8b5-246d-4519-a1d0-f1b5d35375fc,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9325f8b5-24')
Jan 26 16:01:50 compute-0 kernel: tap00eb7549-d0: entered promiscuous mode
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.6950] manager: (tap48c9a7b6-22): new Tun device (/org/freedesktop/NetworkManager/Devices/272)
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.696 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap00eb7549-d0, col_values=(('external_ids', {'iface-id': 'c26451f4-ab48-4e8d-b25b-9e4988573b7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00597|binding|INFO|Releasing lport c26451f4-ab48-4e8d-b25b-9e4988573b7d from this chassis (sb_readonly=0)
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.717 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.731 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.733 239969 INFO nova.virt.libvirt.driver [-] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Instance destroyed successfully.
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.733 239969 DEBUG nova.objects.instance [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'resources' on Instance uuid 1a3248c2-cd69-499a-8aec-8b03fa196e91 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.736 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.736 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.739 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[97b4863c-2aa7-4a7b-81d5-2da5a60ccbd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.740 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/00eb7549-d24b-4657-b244-7664c8a34b20.pid.haproxy
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 00eb7549-d24b-4657-b244-7664c8a34b20
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.741 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'env', 'PROCESS_TAG=haproxy-00eb7549-d24b-4657-b244-7664c8a34b20', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/00eb7549-d24b-4657-b244-7664c8a34b20.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.754 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.756 239969 DEBUG nova.virt.libvirt.vif [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:01:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-659135611',display_name='tempest-tempest.common.compute-instance-659135611-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-659135611-2',id=69,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-26T16:01:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-ze65gd0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio'
,image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCreateTestJSON-1803223404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:01:48Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=1a3248c2-cd69-499a-8aec-8b03fa196e91,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.756 239969 DEBUG nova.network.os_vif_util [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "48c9a7b6-229f-4386-b150-b99a89d048a8", "address": "fa:16:3e:62:7b:8c", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap48c9a7b6-22", "ovs_interfaceid": "48c9a7b6-229f-4386-b150-b99a89d048a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.757 239969 DEBUG nova.network.os_vif_util [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:62:7b:8c,bridge_name='br-int',has_traffic_filtering=True,id=48c9a7b6-229f-4386-b150-b99a89d048a8,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48c9a7b6-22') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.757 239969 DEBUG os_vif [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:7b:8c,bridge_name='br-int',has_traffic_filtering=True,id=48c9a7b6-229f-4386-b150-b99a89d048a8,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48c9a7b6-22') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.759 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.759 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48c9a7b6-22, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.763 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.766 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.769 239969 INFO os_vif [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:62:7b:8c,bridge_name='br-int',has_traffic_filtering=True,id=48c9a7b6-229f-4386-b150-b99a89d048a8,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap48c9a7b6-22')
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.822 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443310.819109, 458dfc7d-cb02-4d38-a6ba-a52234a1af2a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.822 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] VM Started (Lifecycle Event)
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.825 239969 DEBUG oslo_concurrency.processutils [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config.rescue 96c386e9-e3b3-47c6-b6ca-863e894d5c49_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.223s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.826 239969 INFO nova.virt.libvirt.driver [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Deleting local config drive /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49/disk.config.rescue because it was imported into RBD.
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.848 239969 DEBUG nova.compute.manager [req-06b9c9a9-1e13-49e1-87ad-1cff1d227bfc req-8b66c612-34f2-493b-b5d8-c7d09dfa25d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Received event network-vif-plugged-7c9c333e-8516-448a-be45-a7c687b16c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.848 239969 DEBUG oslo_concurrency.lockutils [req-06b9c9a9-1e13-49e1-87ad-1cff1d227bfc req-8b66c612-34f2-493b-b5d8-c7d09dfa25d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.848 239969 DEBUG oslo_concurrency.lockutils [req-06b9c9a9-1e13-49e1-87ad-1cff1d227bfc req-8b66c612-34f2-493b-b5d8-c7d09dfa25d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.849 239969 DEBUG oslo_concurrency.lockutils [req-06b9c9a9-1e13-49e1-87ad-1cff1d227bfc req-8b66c612-34f2-493b-b5d8-c7d09dfa25d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.849 239969 DEBUG nova.compute.manager [req-06b9c9a9-1e13-49e1-87ad-1cff1d227bfc req-8b66c612-34f2-493b-b5d8-c7d09dfa25d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Processing event network-vif-plugged-7c9c333e-8516-448a-be45-a7c687b16c5a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.850 239969 DEBUG nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.851 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.860 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.866 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.872 239969 INFO nova.virt.libvirt.driver [-] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Instance spawned successfully.
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.872 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.887 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.887 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443310.8192153, 458dfc7d-cb02-4d38-a6ba-a52234a1af2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.888 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] VM Paused (Lifecycle Event)
Jan 26 16:01:50 compute-0 kernel: tap819adf29-2d: entered promiscuous mode
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.8999] manager: (tap819adf29-2d): new Tun device (/org/freedesktop/NetworkManager/Devices/273)
Jan 26 16:01:50 compute-0 systemd-udevd[297977]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00598|binding|INFO|Claiming lport 819adf29-2d57-4e5d-81f7-041a9ac00baa for this chassis.
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00599|binding|INFO|819adf29-2d57-4e5d-81f7-041a9ac00baa: Claiming fa:16:3e:cd:10:75 10.100.0.14
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.903 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.914 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:50.911 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:10:75 10.100.0.14'], port_security=['fa:16:3e:cd:10:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '96c386e9-e3b3-47c6-b6ca-863e894d5c49', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-677a455b-3478-4811-937e-e2d4184c4933', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e098d809d8b4d0a8747af49a85560a1', 'neutron:revision_number': '5', 'neutron:security_group_ids': '43e1902e-755a-4b29-81ae-0ec275ea79a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=375e748b-fa99-415a-ab22-338d401b8bb9, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=819adf29-2d57-4e5d-81f7-041a9ac00baa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.9203] device (tap819adf29-2d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:01:50 compute-0 NetworkManager[48954]: <info>  [1769443310.9208] device (tap819adf29-2d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.919 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.920 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.920 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.921 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.921 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.922 239969 DEBUG nova.virt.libvirt.driver [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00600|binding|INFO|Setting lport 819adf29-2d57-4e5d-81f7-041a9ac00baa ovn-installed in OVS
Jan 26 16:01:50 compute-0 ovn_controller[146046]: 2026-01-26T16:01:50Z|00601|binding|INFO|Setting lport 819adf29-2d57-4e5d-81f7-041a9ac00baa up in Southbound
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.933 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.936 239969 DEBUG nova.compute.manager [req-f20fcf46-fba7-4ae3-8e2b-44182faba186 req-c26fdbf5-b312-4d78-87af-18d03c764032 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Received event network-vif-unplugged-48c9a7b6-229f-4386-b150-b99a89d048a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.936 239969 DEBUG oslo_concurrency.lockutils [req-f20fcf46-fba7-4ae3-8e2b-44182faba186 req-c26fdbf5-b312-4d78-87af-18d03c764032 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.937 239969 DEBUG oslo_concurrency.lockutils [req-f20fcf46-fba7-4ae3-8e2b-44182faba186 req-c26fdbf5-b312-4d78-87af-18d03c764032 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.937 239969 DEBUG oslo_concurrency.lockutils [req-f20fcf46-fba7-4ae3-8e2b-44182faba186 req-c26fdbf5-b312-4d78-87af-18d03c764032 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.937 239969 DEBUG nova.compute.manager [req-f20fcf46-fba7-4ae3-8e2b-44182faba186 req-c26fdbf5-b312-4d78-87af-18d03c764032 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] No waiting events found dispatching network-vif-unplugged-48c9a7b6-229f-4386-b150-b99a89d048a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.937 239969 DEBUG nova.compute.manager [req-f20fcf46-fba7-4ae3-8e2b-44182faba186 req-c26fdbf5-b312-4d78-87af-18d03c764032 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Received event network-vif-unplugged-48c9a7b6-229f-4386-b150-b99a89d048a8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.941 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443310.858767, 458dfc7d-cb02-4d38-a6ba-a52234a1af2a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.941 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] VM Resumed (Lifecycle Event)
Jan 26 16:01:50 compute-0 systemd-machined[208061]: New machine qemu-78-instance-00000042.
Jan 26 16:01:50 compute-0 systemd[1]: Started Virtual Machine qemu-78-instance-00000042.
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.971 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.976 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:50 compute-0 nova_compute[239965]: 2026-01-26 16:01:50.998 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.008 239969 INFO nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Took 5.60 seconds to spawn the instance on the hypervisor.
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.008 239969 DEBUG nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.085 239969 INFO nova.compute.manager [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Took 6.82 seconds to build instance.
Jan 26 16:01:51 compute-0 ceph-mon[75140]: pgmap v1467: 305 pgs: 305 active+clean; 532 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 6.1 MiB/s wr, 247 op/s
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.103 239969 DEBUG oslo_concurrency.lockutils [None req-5212d3bf-f6b5-4b75-8b02-2377e813268d 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.117 239969 INFO nova.virt.libvirt.driver [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Deleting instance files /var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675_del
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.118 239969 INFO nova.virt.libvirt.driver [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Deletion of /var/lib/nova/instances/7653a9f9-e5bc-4478-b169-1240875c1675_del complete
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.161 239969 INFO nova.virt.libvirt.driver [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Deleting instance files /var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91_del
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.162 239969 INFO nova.virt.libvirt.driver [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Deletion of /var/lib/nova/instances/1a3248c2-cd69-499a-8aec-8b03fa196e91_del complete
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.169 239969 INFO nova.compute.manager [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Took 0.87 seconds to destroy the instance on the hypervisor.
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.170 239969 DEBUG oslo.service.loopingcall [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.170 239969 DEBUG nova.compute.manager [-] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.171 239969 DEBUG nova.network.neutron [-] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.219 239969 INFO nova.compute.manager [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.219 239969 DEBUG oslo.service.loopingcall [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.220 239969 DEBUG nova.compute.manager [-] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.220 239969 DEBUG nova.network.neutron [-] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:01:51 compute-0 podman[298201]: 2026-01-26 16:01:51.226311527 +0000 UTC m=+0.102998816 container create 51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:01:51 compute-0 podman[298201]: 2026-01-26 16:01:51.155716896 +0000 UTC m=+0.032404205 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:01:51 compute-0 systemd[1]: Started libpod-conmon-51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78.scope.
Jan 26 16:01:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07cd6910301121926b1498b6af139c21c8808ed75b623f0c940c41bb88edd754/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:01:51 compute-0 podman[298201]: 2026-01-26 16:01:51.323842007 +0000 UTC m=+0.200529306 container init 51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:01:51 compute-0 podman[298201]: 2026-01-26 16:01:51.328977923 +0000 UTC m=+0.205665202 container start 51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 26 16:01:51 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[298216]: [NOTICE]   (298220) : New worker (298222) forked
Jan 26 16:01:51 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[298216]: [NOTICE]   (298220) : Loading success.
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.396 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9325f8b5-246d-4519-a1d0-f1b5d35375fc in datapath ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 unbound from our chassis
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.398 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:01:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.399 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[13a34d54-069f-4745-a739-f07815af059d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.400 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 namespace which is not needed anymore
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.565 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:01:51 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[297548]: [NOTICE]   (297584) : haproxy version is 2.8.14-c23fe91
Jan 26 16:01:51 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[297548]: [NOTICE]   (297584) : path to executable is /usr/sbin/haproxy
Jan 26 16:01:51 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[297548]: [WARNING]  (297584) : Exiting Master process...
Jan 26 16:01:51 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[297548]: [WARNING]  (297584) : Exiting Master process...
Jan 26 16:01:51 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[297548]: [ALERT]    (297584) : Current worker (297610) exited with code 143 (Terminated)
Jan 26 16:01:51 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[297548]: [WARNING]  (297584) : All workers exited. Exiting... (0)
Jan 26 16:01:51 compute-0 systemd[1]: libpod-370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe.scope: Deactivated successfully.
Jan 26 16:01:51 compute-0 podman[298247]: 2026-01-26 16:01:51.580247823 +0000 UTC m=+0.061364265 container died 370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe-userdata-shm.mount: Deactivated successfully.
Jan 26 16:01:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c46d82d2ebeb4a788993426fe0e9c18afc2956c0347f5aa6dab7421511ac695-merged.mount: Deactivated successfully.
Jan 26 16:01:51 compute-0 podman[298247]: 2026-01-26 16:01:51.627363758 +0000 UTC m=+0.108480190 container cleanup 370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:01:51 compute-0 systemd[1]: libpod-conmon-370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe.scope: Deactivated successfully.
Jan 26 16:01:51 compute-0 podman[298277]: 2026-01-26 16:01:51.692975587 +0000 UTC m=+0.043540558 container remove 370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.698 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[03971e77-b920-45cb-82e6-b0c0b621761b]: (4, ('Mon Jan 26 04:01:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 (370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe)\n370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe\nMon Jan 26 04:01:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 (370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe)\n370c740dafac5229ad9bf75d68b7d552253a75a1de2f9d066635d4ca48d82cfe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.699 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f89267d9-e84d-4dce-bba6-3d2e18f48b7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.700 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca8d6d0f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.702 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:51 compute-0 kernel: tapca8d6d0f-a0: left promiscuous mode
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.716 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[85030068-9d9a-4978-b3b7-8c109d85af32]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.719 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.726 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2c935f31-3bf9-4c8d-b310-caab8350e78c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.727 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ecd28d3a-de59-4df2-b76e-c1ecf53872b8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.744 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8d8272-9485-453b-a389-9a431556aa49]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 479700, 'reachable_time': 39895, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298291, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 systemd[1]: run-netns-ovnmeta\x2dca8d6d0f\x2daa78\x2d49e5\x2dbed8\x2d8ca0c9363435.mount: Deactivated successfully.
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.748 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.748 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[b3eb1234-c115-495e-9575-7cbebb089da8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.749 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 48c9a7b6-229f-4386-b150-b99a89d048a8 in datapath ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 unbound from our chassis
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.751 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.752 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5170f294-7d48-45e6-80e1-f8887e3b5bd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.752 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 819adf29-2d57-4e5d-81f7-041a9ac00baa in datapath 677a455b-3478-4811-937e-e2d4184c4933 unbound from our chassis
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.754 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 677a455b-3478-4811-937e-e2d4184c4933
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.769 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e77544-64c0-4128-a44e-5a0cb6e505be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.799 239969 DEBUG nova.objects.instance [None req-2178269c-1ba4-4152-a556-0faf90e20c29 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'pci_devices' on Instance uuid 458dfc7d-cb02-4d38-a6ba-a52234a1af2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.803 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[833f0bb3-c765-4bc4-9cf4-a59e5f65472d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.806 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[41169ee9-acb4-4294-b3f5-817677bb7b46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.829 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443311.8295214, 458dfc7d-cb02-4d38-a6ba-a52234a1af2a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.830 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] VM Paused (Lifecycle Event)
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.841 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[859c6e5e-63db-4fcc-8d5e-7149cf528852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.859 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.862 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.863 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e313c347-aaf6-4312-916d-7f4f01f615ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap677a455b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476521, 'reachable_time': 27062, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298300, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.882 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f30ea396-ac76-49bd-9e4a-90eeeefbd14a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap677a455b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476533, 'tstamp': 476533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298301, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap677a455b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476536, 'tstamp': 476536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298301, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.885 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap677a455b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.886 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.887 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.888 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.889 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap677a455b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.889 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.889 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap677a455b-30, col_values=(('external_ids', {'iface-id': 'be57e329-8724-4480-9b41-724e08bf9152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.890 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 558 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.2 MiB/s wr, 302 op/s
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.941 239969 DEBUG nova.network.neutron [-] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.960 239969 INFO nova.compute.manager [-] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Took 0.79 seconds to deallocate network for instance.
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.964 239969 DEBUG nova.network.neutron [-] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:51 compute-0 kernel: tap7c9c333e-85 (unregistering): left promiscuous mode
Jan 26 16:01:51 compute-0 NetworkManager[48954]: <info>  [1769443311.9726] device (tap7c9c333e-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.983 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:51 compute-0 ovn_controller[146046]: 2026-01-26T16:01:51Z|00602|binding|INFO|Releasing lport 7c9c333e-8516-448a-be45-a7c687b16c5a from this chassis (sb_readonly=0)
Jan 26 16:01:51 compute-0 ovn_controller[146046]: 2026-01-26T16:01:51Z|00603|binding|INFO|Setting lport 7c9c333e-8516-448a-be45-a7c687b16c5a down in Southbound
Jan 26 16:01:51 compute-0 ovn_controller[146046]: 2026-01-26T16:01:51Z|00604|binding|INFO|Removing iface tap7c9c333e-85 ovn-installed in OVS
Jan 26 16:01:51 compute-0 nova_compute[239965]: 2026-01-26 16:01:51.986 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.991 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:fc:be 10.100.0.6'], port_security=['fa:16:3e:2a:fc:be 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '458dfc7d-cb02-4d38-a6ba-a52234a1af2a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-00eb7549-d24b-4657-b244-7664c8a34b20', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aee99e5b6af74088bd848cecc9592e82', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6daa4359-88b3-4de7-a38c-e21562f82a46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=89244795-c322-42a2-be6b-f02f8ae6e05c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7c9c333e-8516-448a-be45-a7c687b16c5a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.993 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7c9c333e-8516-448a-be45-a7c687b16c5a in datapath 00eb7549-d24b-4657-b244-7664c8a34b20 unbound from our chassis
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.994 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 00eb7549-d24b-4657-b244-7664c8a34b20, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.995 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c4c0cb62-2ca7-4b2b-9364-4d4c4def4523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:51.996 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 namespace which is not needed anymore
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.003 239969 INFO nova.compute.manager [-] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Took 0.78 seconds to deallocate network for instance.
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.007 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.010 239969 DEBUG oslo_concurrency.lockutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.010 239969 DEBUG oslo_concurrency.lockutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:52 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000046.scope: Deactivated successfully.
Jan 26 16:01:52 compute-0 systemd[1]: machine-qemu\x2d77\x2dinstance\x2d00000046.scope: Consumed 1.273s CPU time.
Jan 26 16:01:52 compute-0 systemd-machined[208061]: Machine qemu-77-instance-00000046 terminated.
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.055 239969 DEBUG oslo_concurrency.lockutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:52 compute-0 sshd-session[298235]: Invalid user sol from 45.148.10.240 port 53650
Jan 26 16:01:52 compute-0 NetworkManager[48954]: <info>  [1769443312.1519] manager: (tap7c9c333e-85): new Tun device (/org/freedesktop/NetworkManager/Devices/274)
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.157 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.161 239969 DEBUG oslo_concurrency.processutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:52 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[298216]: [NOTICE]   (298220) : haproxy version is 2.8.14-c23fe91
Jan 26 16:01:52 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[298216]: [NOTICE]   (298220) : path to executable is /usr/sbin/haproxy
Jan 26 16:01:52 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[298216]: [WARNING]  (298220) : Exiting Master process...
Jan 26 16:01:52 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[298216]: [ALERT]    (298220) : Current worker (298222) exited with code 143 (Terminated)
Jan 26 16:01:52 compute-0 neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20[298216]: [WARNING]  (298220) : All workers exited. Exiting... (0)
Jan 26 16:01:52 compute-0 systemd[1]: libpod-51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78.scope: Deactivated successfully.
Jan 26 16:01:52 compute-0 podman[298329]: 2026-01-26 16:01:52.180118549 +0000 UTC m=+0.059252273 container died 51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:01:52 compute-0 sshd-session[298235]: Connection closed by invalid user sol 45.148.10.240 port 53650 [preauth]
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.243 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.245 239969 DEBUG nova.compute.manager [None req-2178269c-1ba4-4152-a556-0faf90e20c29 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78-userdata-shm.mount: Deactivated successfully.
Jan 26 16:01:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-07cd6910301121926b1498b6af139c21c8808ed75b623f0c940c41bb88edd754-merged.mount: Deactivated successfully.
Jan 26 16:01:52 compute-0 podman[298329]: 2026-01-26 16:01:52.270682529 +0000 UTC m=+0.149816233 container cleanup 51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:01:52 compute-0 systemd[1]: libpod-conmon-51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78.scope: Deactivated successfully.
Jan 26 16:01:52 compute-0 podman[298407]: 2026-01-26 16:01:52.365463023 +0000 UTC m=+0.063936608 container remove 51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:01:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:52.371 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1f206c43-52c8-4532-a76a-dc7c677dc562]: (4, ('Mon Jan 26 04:01:52 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78)\n51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78\nMon Jan 26 04:01:52 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 (51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78)\n51c1b1d5bc9750424068e73fc6cb8b2a02c144e99cc92601613b091ae48a3e78\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:52.372 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b3573b6e-8d79-473b-8716-45db3e888098]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:52.373 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00eb7549-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:52 compute-0 kernel: tap00eb7549-d0: left promiscuous mode
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.396 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:52.399 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8c542c-19e5-4665-9484-4d19a0544da5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:52.413 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[144141cd-bbee-4f00-9f2c-338ef1acb85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:52.414 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0fead02a-4798-4964-aedf-af877a369b4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.427 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 96c386e9-e3b3-47c6-b6ca-863e894d5c49 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.428 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443312.4268594, 96c386e9-e3b3-47c6-b6ca-863e894d5c49 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.428 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] VM Resumed (Lifecycle Event)
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.432 239969 DEBUG nova.compute.manager [None req-9d375190-f4b9-4d83-9f0e-aeb2a4192ecb 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:52.433 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e6cf0d46-dad0-44ec-8b52-e5d36ff2f341]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 480117, 'reachable_time': 20274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298447, 'error': None, 'target': 'ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:52.436 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-00eb7549-d24b-4657-b244-7664c8a34b20 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:01:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:52.436 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[cfe69c72-b39a-4424-90e9-fa43b1ace3a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.445 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.448 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.475 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.475 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443312.4299738, 96c386e9-e3b3-47c6-b6ca-863e894d5c49 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.476 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] VM Started (Lifecycle Event)
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.505 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.512 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d00eb7549\x2dd24b\x2d4657\x2db244\x2d7664c8a34b20.mount: Deactivated successfully.
Jan 26 16:01:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629473283' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.811 239969 DEBUG oslo_concurrency.processutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.819 239969 DEBUG nova.compute.provider_tree [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.835 239969 DEBUG nova.scheduler.client.report [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.861 239969 DEBUG oslo_concurrency.lockutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.868 239969 DEBUG oslo_concurrency.lockutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.894 239969 INFO nova.scheduler.client.report [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Deleted allocations for instance 7653a9f9-e5bc-4478-b169-1240875c1675
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.941 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Received event network-vif-plugged-7c9c333e-8516-448a-be45-a7c687b16c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.942 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.943 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.943 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.944 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] No waiting events found dispatching network-vif-plugged-7c9c333e-8516-448a-be45-a7c687b16c5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.945 239969 WARNING nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Received unexpected event network-vif-plugged-7c9c333e-8516-448a-be45-a7c687b16c5a for instance with vm_state suspended and task_state None.
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.945 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Received event network-vif-unplugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.946 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.946 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.947 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.948 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] No waiting events found dispatching network-vif-unplugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.949 239969 WARNING nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Received unexpected event network-vif-unplugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc for instance with vm_state deleted and task_state None.
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.949 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Received event network-vif-plugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.950 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.950 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.951 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.952 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] No waiting events found dispatching network-vif-plugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.953 239969 WARNING nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Received unexpected event network-vif-plugged-9325f8b5-246d-4519-a1d0-f1b5d35375fc for instance with vm_state deleted and task_state None.
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.953 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Received event network-vif-unplugged-7c9c333e-8516-448a-be45-a7c687b16c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.954 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.955 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.955 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.956 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] No waiting events found dispatching network-vif-unplugged-7c9c333e-8516-448a-be45-a7c687b16c5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.957 239969 WARNING nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Received unexpected event network-vif-unplugged-7c9c333e-8516-448a-be45-a7c687b16c5a for instance with vm_state suspended and task_state None.
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.957 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Received event network-vif-plugged-7c9c333e-8516-448a-be45-a7c687b16c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.957 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.958 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.958 239969 DEBUG oslo_concurrency.lockutils [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.959 239969 DEBUG nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] No waiting events found dispatching network-vif-plugged-7c9c333e-8516-448a-be45-a7c687b16c5a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.959 239969 WARNING nova.compute.manager [req-0ea767bd-30e1-454e-a800-88b6482a9d3e req-ebe10412-28ed-467b-9916-eedda7eef8b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Received unexpected event network-vif-plugged-7c9c333e-8516-448a-be45-a7c687b16c5a for instance with vm_state suspended and task_state None.
Jan 26 16:01:52 compute-0 nova_compute[239965]: 2026-01-26 16:01:52.977 239969 DEBUG oslo_concurrency.lockutils [None req-6c4e4cce-2211-4e07-a69c-171575460158 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7653a9f9-e5bc-4478-b169-1240875c1675" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.019 239969 DEBUG nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Received event network-vif-plugged-48c9a7b6-229f-4386-b150-b99a89d048a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.019 239969 DEBUG oslo_concurrency.lockutils [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.020 239969 DEBUG oslo_concurrency.lockutils [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.020 239969 DEBUG oslo_concurrency.lockutils [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.021 239969 DEBUG nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] No waiting events found dispatching network-vif-plugged-48c9a7b6-229f-4386-b150-b99a89d048a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.021 239969 WARNING nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Received unexpected event network-vif-plugged-48c9a7b6-229f-4386-b150-b99a89d048a8 for instance with vm_state deleted and task_state None.
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.021 239969 DEBUG nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.022 239969 DEBUG oslo_concurrency.lockutils [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.022 239969 DEBUG oslo_concurrency.lockutils [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.022 239969 DEBUG oslo_concurrency.lockutils [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.023 239969 DEBUG nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] No waiting events found dispatching network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.023 239969 WARNING nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received unexpected event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa for instance with vm_state rescued and task_state None.
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.023 239969 DEBUG nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.024 239969 DEBUG oslo_concurrency.lockutils [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.024 239969 DEBUG oslo_concurrency.lockutils [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.024 239969 DEBUG oslo_concurrency.lockutils [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.025 239969 DEBUG nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] No waiting events found dispatching network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.025 239969 WARNING nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received unexpected event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa for instance with vm_state rescued and task_state None.
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.025 239969 DEBUG nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Received event network-vif-deleted-9325f8b5-246d-4519-a1d0-f1b5d35375fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.026 239969 DEBUG nova.compute.manager [req-6ccea1d4-9680-4763-9c02-b0272476f852 req-ea157045-79cf-4b99-a1b9-8fb87638d53c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Received event network-vif-deleted-48c9a7b6-229f-4386-b150-b99a89d048a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.042 239969 DEBUG oslo_concurrency.processutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:53 compute-0 ceph-mon[75140]: pgmap v1468: 305 pgs: 305 active+clean; 558 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.2 MiB/s wr, 302 op/s
Jan 26 16:01:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/629473283' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/380354312' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.606 239969 DEBUG oslo_concurrency.processutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.614 239969 DEBUG nova.compute.provider_tree [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.656 239969 DEBUG nova.scheduler.client.report [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.682 239969 DEBUG oslo_concurrency.lockutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.716 239969 INFO nova.scheduler.client.report [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Deleted allocations for instance 1a3248c2-cd69-499a-8aec-8b03fa196e91
Jan 26 16:01:53 compute-0 nova_compute[239965]: 2026-01-26 16:01:53.783 239969 DEBUG oslo_concurrency.lockutils [None req-2bab38a2-1c63-4c59-baad-0fb58cf0c8a3 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "1a3248c2-cd69-499a-8aec-8b03fa196e91" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.312s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 558 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.8 MiB/s wr, 288 op/s
Jan 26 16:01:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/380354312' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.379 239969 DEBUG oslo_concurrency.lockutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.379 239969 DEBUG oslo_concurrency.lockutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.380 239969 DEBUG oslo_concurrency.lockutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.380 239969 DEBUG oslo_concurrency.lockutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.380 239969 DEBUG oslo_concurrency.lockutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.381 239969 INFO nova.compute.manager [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Terminating instance
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.382 239969 DEBUG nova.compute.manager [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.392 239969 INFO nova.virt.libvirt.driver [-] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Instance destroyed successfully.
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.394 239969 DEBUG nova.objects.instance [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lazy-loading 'resources' on Instance uuid 458dfc7d-cb02-4d38-a6ba-a52234a1af2a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.412 239969 DEBUG nova.virt.libvirt.vif [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:01:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-472679170',display_name='tempest-DeleteServersTestJSON-server-472679170',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-472679170',id=70,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='aee99e5b6af74088bd848cecc9592e82',ramdisk_id='',reservation_id='r-jr4kh4b2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_di
sk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-234439961',owner_user_name='tempest-DeleteServersTestJSON-234439961-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:01:52Z,user_data=None,user_id='3c7cba75e9fd41599b1c9a3388447cdd',uuid=458dfc7d-cb02-4d38-a6ba-a52234a1af2a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.414 239969 DEBUG nova.network.os_vif_util [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converting VIF {"id": "7c9c333e-8516-448a-be45-a7c687b16c5a", "address": "fa:16:3e:2a:fc:be", "network": {"id": "00eb7549-d24b-4657-b244-7664c8a34b20", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-615812633-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aee99e5b6af74088bd848cecc9592e82", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7c9c333e-85", "ovs_interfaceid": "7c9c333e-8516-448a-be45-a7c687b16c5a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.419 239969 DEBUG nova.network.os_vif_util [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fc:be,bridge_name='br-int',has_traffic_filtering=True,id=7c9c333e-8516-448a-be45-a7c687b16c5a,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c9c333e-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.424 239969 DEBUG os_vif [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fc:be,bridge_name='br-int',has_traffic_filtering=True,id=7c9c333e-8516-448a-be45-a7c687b16c5a,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c9c333e-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.433 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.437 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c9c333e-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.592 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.603 239969 INFO os_vif [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:fc:be,bridge_name='br-int',has_traffic_filtering=True,id=7c9c333e-8516-448a-be45-a7c687b16c5a,network=Network(00eb7549-d24b-4657-b244-7664c8a34b20),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7c9c333e-85')
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.652 239969 INFO nova.compute.manager [None req-89c072a8-f40a-4b67-b1d7-940558d51bf1 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Pausing
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.657 239969 DEBUG nova.objects.instance [None req-89c072a8-f40a-4b67-b1d7-940558d51bf1 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'flavor' on Instance uuid f50eea8d-23f7-4c65-99fa-f919c99ed80d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.692 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443314.6921716, f50eea8d-23f7-4c65-99fa-f919c99ed80d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.692 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] VM Paused (Lifecycle Event)
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.694 239969 DEBUG nova.compute.manager [None req-89c072a8-f40a-4b67-b1d7-940558d51bf1 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.718 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.720 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:54 compute-0 nova_compute[239965]: 2026-01-26 16:01:54.747 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.024 239969 INFO nova.virt.libvirt.driver [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Deleting instance files /var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a_del
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.026 239969 INFO nova.virt.libvirt.driver [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Deletion of /var/lib/nova/instances/458dfc7d-cb02-4d38-a6ba-a52234a1af2a_del complete
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.074 239969 INFO nova.compute.manager [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Took 0.69 seconds to destroy the instance on the hypervisor.
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.074 239969 DEBUG oslo.service.loopingcall [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.075 239969 DEBUG nova.compute.manager [-] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.075 239969 DEBUG nova.network.neutron [-] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:01:55 compute-0 ceph-mon[75140]: pgmap v1469: 305 pgs: 305 active+clean; 558 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 6.8 MiB/s wr, 288 op/s
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.351 239969 DEBUG nova.virt.libvirt.driver [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.790 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.841 239969 DEBUG nova.network.neutron [-] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.858 239969 INFO nova.compute.manager [-] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Took 0.78 seconds to deallocate network for instance.
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.895 239969 DEBUG oslo_concurrency.lockutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.896 239969 DEBUG oslo_concurrency.lockutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.901 239969 DEBUG nova.compute.manager [req-4ff22936-f8bd-4970-b2ef-fa2a167b6c4d req-7a66ca47-f04d-493e-8655-eee29ad98a20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Received event network-vif-deleted-7c9c333e-8516-448a-be45-a7c687b16c5a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 452 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 6.8 MiB/s wr, 492 op/s
Jan 26 16:01:55 compute-0 nova_compute[239965]: 2026-01-26 16:01:55.990 239969 DEBUG oslo_concurrency.processutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:01:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2743232480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.619 239969 DEBUG oslo_concurrency.processutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.626 239969 DEBUG nova.compute.provider_tree [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.650 239969 DEBUG nova.scheduler.client.report [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.685 239969 DEBUG oslo_concurrency.lockutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.708 239969 INFO nova.scheduler.client.report [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Deleted allocations for instance 458dfc7d-cb02-4d38-a6ba-a52234a1af2a
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.775 239969 INFO nova.compute.manager [None req-7415d7d5-42fe-4f19-9d52-3c91c146f338 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Unpausing
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.777 239969 DEBUG nova.objects.instance [None req-7415d7d5-42fe-4f19-9d52-3c91c146f338 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'flavor' on Instance uuid f50eea8d-23f7-4c65-99fa-f919c99ed80d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.815 239969 DEBUG oslo_concurrency.lockutils [None req-7da95451-ca79-40db-89fc-f88d740faa56 3c7cba75e9fd41599b1c9a3388447cdd aee99e5b6af74088bd848cecc9592e82 - - default default] Lock "458dfc7d-cb02-4d38-a6ba-a52234a1af2a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.825 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443316.8244894, f50eea8d-23f7-4c65-99fa-f919c99ed80d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.825 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] VM Resumed (Lifecycle Event)
Jan 26 16:01:56 compute-0 virtqemud[240263]: argument unsupported: QEMU guest agent is not configured
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.833 239969 DEBUG nova.virt.libvirt.guest [None req-7415d7d5-42fe-4f19-9d52-3c91c146f338 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.834 239969 DEBUG nova.compute.manager [None req-7415d7d5-42fe-4f19-9d52-3c91c146f338 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.860 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.864 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:01:56 compute-0 nova_compute[239965]: 2026-01-26 16:01:56.898 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] During sync_power_state the instance has a pending task (unpausing). Skip.
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.037 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.037 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.051 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.076 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "a84aa0aa-4479-4871-9773-a7ee17fa8565" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.077 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.100 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.117 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.118 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.124 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.124 239969 INFO nova.compute.claims [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:01:57 compute-0 ceph-mon[75140]: pgmap v1470: 305 pgs: 305 active+clean; 452 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 6.8 MiB/s wr, 492 op/s
Jan 26 16:01:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2743232480' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.172 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.328 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1726195617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.887 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.894 239969 DEBUG nova.compute.provider_tree [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.910 239969 DEBUG nova.scheduler.client.report [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 452 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 3.6 MiB/s wr, 359 op/s
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.957 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.958 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.962 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.968 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:01:57 compute-0 nova_compute[239965]: 2026-01-26 16:01:57.968 239969 INFO nova.compute.claims [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.020 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.020 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.045 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.070 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:01:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1726195617' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.159 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.160 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.160 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Creating image(s)
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.180 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.202 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.222 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.225 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.257 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.289 239969 DEBUG nova.policy [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '008d2b0a4ac3490f9a0fcea9e21be080', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.313 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.315 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.316 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.316 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.344 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.348 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.657 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.727 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] resizing rbd image 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.796 239969 DEBUG nova.objects.instance [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'migration_context' on Instance uuid 7748d19a-5c9b-456d-808c-b8f0c79a96ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.808 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.808 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Ensure instance console log exists: /var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.809 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.809 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.809 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:01:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3184033697' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.890 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.895 239969 DEBUG nova.compute.provider_tree [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.912 239969 DEBUG nova.scheduler.client.report [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.932 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.970s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.933 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.986 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:01:58 compute-0 nova_compute[239965]: 2026-01-26 16:01:58.987 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.009 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.069 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:01:59 compute-0 ovn_controller[146046]: 2026-01-26T16:01:59Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e9:65:1d 10.100.0.5
Jan 26 16:01:59 compute-0 ovn_controller[146046]: 2026-01-26T16:01:59Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e9:65:1d 10.100.0.5
Jan 26 16:01:59 compute-0 ceph-mon[75140]: pgmap v1471: 305 pgs: 305 active+clean; 452 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 3.6 MiB/s wr, 359 op/s
Jan 26 16:01:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3184033697' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.172 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.173 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.174 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Creating image(s)
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.193 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image a84aa0aa-4479-4871-9773-a7ee17fa8565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.216 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image a84aa0aa-4479-4871-9773-a7ee17fa8565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.223 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.224 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.224 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.237 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image a84aa0aa-4479-4871-9773-a7ee17fa8565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.240 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.292 239969 DEBUG oslo_concurrency.lockutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.292 239969 DEBUG oslo_concurrency.lockutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.293 239969 DEBUG oslo_concurrency.lockutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.293 239969 DEBUG oslo_concurrency.lockutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.293 239969 DEBUG oslo_concurrency.lockutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.295 239969 INFO nova.compute.manager [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Terminating instance
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.296 239969 DEBUG nova.compute.manager [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.315 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.315 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.316 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.316 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:59 compute-0 kernel: tap819adf29-2d (unregistering): left promiscuous mode
Jan 26 16:01:59 compute-0 NetworkManager[48954]: <info>  [1769443319.3334] device (tap819adf29-2d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.338 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image a84aa0aa-4479-4871-9773-a7ee17fa8565_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:01:59 compute-0 ovn_controller[146046]: 2026-01-26T16:01:59Z|00605|binding|INFO|Releasing lport 819adf29-2d57-4e5d-81f7-041a9ac00baa from this chassis (sb_readonly=0)
Jan 26 16:01:59 compute-0 ovn_controller[146046]: 2026-01-26T16:01:59Z|00606|binding|INFO|Setting lport 819adf29-2d57-4e5d-81f7-041a9ac00baa down in Southbound
Jan 26 16:01:59 compute-0 ovn_controller[146046]: 2026-01-26T16:01:59Z|00607|binding|INFO|Removing iface tap819adf29-2d ovn-installed in OVS
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.344 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 a84aa0aa-4479-4871-9773-a7ee17fa8565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.358 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:10:75 10.100.0.14'], port_security=['fa:16:3e:cd:10:75 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '96c386e9-e3b3-47c6-b6ca-863e894d5c49', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-677a455b-3478-4811-937e-e2d4184c4933', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e098d809d8b4d0a8747af49a85560a1', 'neutron:revision_number': '6', 'neutron:security_group_ids': '43e1902e-755a-4b29-81ae-0ec275ea79a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=375e748b-fa99-415a-ab22-338d401b8bb9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=819adf29-2d57-4e5d-81f7-041a9ac00baa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.360 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 819adf29-2d57-4e5d-81f7-041a9ac00baa in datapath 677a455b-3478-4811-937e-e2d4184c4933 unbound from our chassis
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.361 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 677a455b-3478-4811-937e-e2d4184c4933
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.375 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[66d86021-3f17-4221-b5ec-56dceb266057]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.379 239969 DEBUG nova.policy [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '008d2b0a4ac3490f9a0fcea9e21be080', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.382 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:59 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000042.scope: Deactivated successfully.
Jan 26 16:01:59 compute-0 systemd[1]: machine-qemu\x2d78\x2dinstance\x2d00000042.scope: Consumed 8.186s CPU time.
Jan 26 16:01:59 compute-0 systemd-machined[208061]: Machine qemu-78-instance-00000042 terminated.
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.407 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4acced23-717c-4f2d-a7d8-47aa117f2292]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.411 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[19f70908-d48e-4f50-9248-22ea99372b02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.439 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1115aa-7396-47ba-bbb4-4e7af9028a8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.458 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5e3989-1392-4148-b98d-9ae851bdd7f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap677a455b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:e4:71'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 171], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476521, 'reachable_time': 29363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298830, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.473 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e536640e-a6c1-43cf-8737-2235dfa63e06]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap677a455b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476533, 'tstamp': 476533}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298831, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap677a455b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 476536, 'tstamp': 476536}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298831, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.475 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap677a455b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.477 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.482 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.482 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap677a455b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.482 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.483 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap677a455b-30, col_values=(('external_ids', {'iface-id': 'be57e329-8724-4480-9b41-724e08bf9152'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:01:59.483 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.529 239969 INFO nova.virt.libvirt.driver [-] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Instance destroyed successfully.
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.530 239969 DEBUG nova.objects.instance [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'resources' on Instance uuid 96c386e9-e3b3-47c6-b6ca-863e894d5c49 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.543 239969 DEBUG nova.virt.libvirt.vif [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:01:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-1808774340',display_name='tempest-ServerRescueNegativeTestJSON-server-1808774340',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-1808774340',id=66,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1e098d809d8b4d0a8747af49a85560a1',ramdisk_id='',reservation_id='r-xtk9l70v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2133238807',owner_user_name='tempest-ServerRescueNegativeTestJSON-2133238807-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:01:52Z,user_data=None,user_id='2b5ecf8e47344a3d9fec2e1bf4244aae',uuid=96c386e9-e3b3-47c6-b6ca-863e894d5c49,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.543 239969 DEBUG nova.network.os_vif_util [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converting VIF {"id": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "address": "fa:16:3e:cd:10:75", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap819adf29-2d", "ovs_interfaceid": "819adf29-2d57-4e5d-81f7-041a9ac00baa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.544 239969 DEBUG nova.network.os_vif_util [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:10:75,bridge_name='br-int',has_traffic_filtering=True,id=819adf29-2d57-4e5d-81f7-041a9ac00baa,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap819adf29-2d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.544 239969 DEBUG os_vif [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:10:75,bridge_name='br-int',has_traffic_filtering=True,id=819adf29-2d57-4e5d-81f7-041a9ac00baa,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap819adf29-2d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.545 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.546 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap819adf29-2d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.549 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.551 239969 INFO os_vif [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:10:75,bridge_name='br-int',has_traffic_filtering=True,id=819adf29-2d57-4e5d-81f7-041a9ac00baa,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap819adf29-2d')
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.598 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Successfully created port: b493e4e4-7abf-4310-b538-b6b4d31a956c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.627 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 a84aa0aa-4479-4871-9773-a7ee17fa8565_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.690 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] resizing rbd image a84aa0aa-4479-4871-9773-a7ee17fa8565_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.777 239969 DEBUG nova.objects.instance [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'migration_context' on Instance uuid a84aa0aa-4479-4871-9773-a7ee17fa8565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.793 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.794 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Ensure instance console log exists: /var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.794 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.795 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.795 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.812 239969 DEBUG nova.compute.manager [req-a36d8615-4890-41db-922d-728553f592f3 req-882b669f-cc37-4d32-9a72-5f71e7991237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-unplugged-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.812 239969 DEBUG oslo_concurrency.lockutils [req-a36d8615-4890-41db-922d-728553f592f3 req-882b669f-cc37-4d32-9a72-5f71e7991237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.814 239969 DEBUG oslo_concurrency.lockutils [req-a36d8615-4890-41db-922d-728553f592f3 req-882b669f-cc37-4d32-9a72-5f71e7991237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.814 239969 DEBUG oslo_concurrency.lockutils [req-a36d8615-4890-41db-922d-728553f592f3 req-882b669f-cc37-4d32-9a72-5f71e7991237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.814 239969 DEBUG nova.compute.manager [req-a36d8615-4890-41db-922d-728553f592f3 req-882b669f-cc37-4d32-9a72-5f71e7991237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] No waiting events found dispatching network-vif-unplugged-819adf29-2d57-4e5d-81f7-041a9ac00baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:01:59 compute-0 nova_compute[239965]: 2026-01-26 16:01:59.815 239969 DEBUG nova.compute.manager [req-a36d8615-4890-41db-922d-728553f592f3 req-882b669f-cc37-4d32-9a72-5f71e7991237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-unplugged-819adf29-2d57-4e5d-81f7-041a9ac00baa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:01:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 477 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 7.0 MiB/s wr, 451 op/s
Jan 26 16:02:00 compute-0 nova_compute[239965]: 2026-01-26 16:02:00.066 239969 INFO nova.virt.libvirt.driver [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Deleting instance files /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49_del
Jan 26 16:02:00 compute-0 nova_compute[239965]: 2026-01-26 16:02:00.067 239969 INFO nova.virt.libvirt.driver [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Deletion of /var/lib/nova/instances/96c386e9-e3b3-47c6-b6ca-863e894d5c49_del complete
Jan 26 16:02:00 compute-0 nova_compute[239965]: 2026-01-26 16:02:00.116 239969 INFO nova.compute.manager [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Took 0.82 seconds to destroy the instance on the hypervisor.
Jan 26 16:02:00 compute-0 nova_compute[239965]: 2026-01-26 16:02:00.117 239969 DEBUG oslo.service.loopingcall [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:02:00 compute-0 nova_compute[239965]: 2026-01-26 16:02:00.117 239969 DEBUG nova.compute.manager [-] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:02:00 compute-0 nova_compute[239965]: 2026-01-26 16:02:00.117 239969 DEBUG nova.network.neutron [-] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:02:00 compute-0 ceph-mon[75140]: pgmap v1472: 305 pgs: 305 active+clean; 477 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 7.0 MiB/s wr, 451 op/s
Jan 26 16:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:02:00 compute-0 nova_compute[239965]: 2026-01-26 16:02:00.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.268 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Successfully created port: b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:02:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.880 239969 DEBUG nova.network.neutron [-] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.903 239969 INFO nova.compute.manager [-] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Took 1.79 seconds to deallocate network for instance.
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.923 239969 DEBUG nova.compute.manager [req-583e92b8-0d42-4e13-8cec-212aff447678 req-8081e75f-87cc-4151-a546-e26a0ba50a80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.924 239969 DEBUG oslo_concurrency.lockutils [req-583e92b8-0d42-4e13-8cec-212aff447678 req-8081e75f-87cc-4151-a546-e26a0ba50a80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.925 239969 DEBUG oslo_concurrency.lockutils [req-583e92b8-0d42-4e13-8cec-212aff447678 req-8081e75f-87cc-4151-a546-e26a0ba50a80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.925 239969 DEBUG oslo_concurrency.lockutils [req-583e92b8-0d42-4e13-8cec-212aff447678 req-8081e75f-87cc-4151-a546-e26a0ba50a80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.925 239969 DEBUG nova.compute.manager [req-583e92b8-0d42-4e13-8cec-212aff447678 req-8081e75f-87cc-4151-a546-e26a0ba50a80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] No waiting events found dispatching network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.925 239969 WARNING nova.compute.manager [req-583e92b8-0d42-4e13-8cec-212aff447678 req-8081e75f-87cc-4151-a546-e26a0ba50a80 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received unexpected event network-vif-plugged-819adf29-2d57-4e5d-81f7-041a9ac00baa for instance with vm_state rescued and task_state deleting.
Jan 26 16:02:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 493 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.7 MiB/s wr, 416 op/s
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.948 239969 DEBUG oslo_concurrency.lockutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:01 compute-0 nova_compute[239965]: 2026-01-26 16:02:01.948 239969 DEBUG oslo_concurrency.lockutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.066 239969 DEBUG oslo_concurrency.processutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.103 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Successfully updated port: b493e4e4-7abf-4310-b538-b6b4d31a956c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.117 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "refresh_cache-7748d19a-5c9b-456d-808c-b8f0c79a96ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.117 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquired lock "refresh_cache-7748d19a-5c9b-456d-808c-b8f0c79a96ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.117 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.297 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:02:02 compute-0 podman[298955]: 2026-01-26 16:02:02.395953049 +0000 UTC m=+0.084173655 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.664 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Successfully updated port: b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:02:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337768910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.684 239969 DEBUG oslo_concurrency.processutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.687 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "refresh_cache-a84aa0aa-4479-4871-9773-a7ee17fa8565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.687 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquired lock "refresh_cache-a84aa0aa-4479-4871-9773-a7ee17fa8565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.687 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.695 239969 DEBUG nova.compute.provider_tree [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.713 239969 DEBUG nova.scheduler.client.report [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.741 239969 DEBUG oslo_concurrency.lockutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.767 239969 INFO nova.scheduler.client.report [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Deleted allocations for instance 96c386e9-e3b3-47c6-b6ca-863e894d5c49
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.841 239969 DEBUG oslo_concurrency.lockutils [None req-aaef08ae-64e8-46f2-bdbc-aa57619b58ac 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "96c386e9-e3b3-47c6-b6ca-863e894d5c49" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:02 compute-0 nova_compute[239965]: 2026-01-26 16:02:02.900 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:02:03 compute-0 ceph-mon[75140]: pgmap v1473: 305 pgs: 305 active+clean; 493 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.7 MiB/s wr, 416 op/s
Jan 26 16:02:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2337768910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:03 compute-0 ovn_controller[146046]: 2026-01-26T16:02:03Z|00608|binding|INFO|Releasing lport be57e329-8724-4480-9b41-724e08bf9152 from this chassis (sb_readonly=0)
Jan 26 16:02:03 compute-0 ovn_controller[146046]: 2026-01-26T16:02:03Z|00609|binding|INFO|Releasing lport 47411fc6-7d46-43a0-b0c2-7c06b22cee9e from this chassis (sb_readonly=0)
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.585 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Updating instance_info_cache with network_info: [{"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.607 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Releasing lock "refresh_cache-7748d19a-5c9b-456d-808c-b8f0c79a96ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.608 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Instance network_info: |[{"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.611 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Start _get_guest_xml network_info=[{"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.615 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.619 239969 WARNING nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.633 239969 DEBUG nova.virt.libvirt.host [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.633 239969 DEBUG nova.virt.libvirt.host [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.637 239969 DEBUG nova.virt.libvirt.host [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.637 239969 DEBUG nova.virt.libvirt.host [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.638 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.638 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.638 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.639 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.639 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.639 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.639 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.639 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.640 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.640 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.640 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.640 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.644 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.821 239969 DEBUG oslo_concurrency.lockutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.822 239969 DEBUG oslo_concurrency.lockutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.823 239969 DEBUG oslo_concurrency.lockutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.823 239969 DEBUG oslo_concurrency.lockutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.824 239969 DEBUG oslo_concurrency.lockutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.826 239969 INFO nova.compute.manager [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Terminating instance
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.827 239969 DEBUG nova.compute.manager [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:02:03 compute-0 kernel: tapf01280ac-7e (unregistering): left promiscuous mode
Jan 26 16:02:03 compute-0 NetworkManager[48954]: <info>  [1769443323.9155] device (tapf01280ac-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:02:03 compute-0 ovn_controller[146046]: 2026-01-26T16:02:03Z|00610|binding|INFO|Releasing lport f01280ac-7e71-4b0c-bad0-c519d134f076 from this chassis (sb_readonly=0)
Jan 26 16:02:03 compute-0 ovn_controller[146046]: 2026-01-26T16:02:03Z|00611|binding|INFO|Setting lport f01280ac-7e71-4b0c-bad0-c519d134f076 down in Southbound
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.925 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:03 compute-0 ovn_controller[146046]: 2026-01-26T16:02:03Z|00612|binding|INFO|Removing iface tapf01280ac-7e ovn-installed in OVS
Jan 26 16:02:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:03.938 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:6b:52 10.100.0.13'], port_security=['fa:16:3e:11:6b:52 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'f50eea8d-23f7-4c65-99fa-f919c99ed80d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-677a455b-3478-4811-937e-e2d4184c4933', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1e098d809d8b4d0a8747af49a85560a1', 'neutron:revision_number': '4', 'neutron:security_group_ids': '43e1902e-755a-4b29-81ae-0ec275ea79a3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=375e748b-fa99-415a-ab22-338d401b8bb9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f01280ac-7e71-4b0c-bad0-c519d134f076) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:03.940 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f01280ac-7e71-4b0c-bad0-c519d134f076 in datapath 677a455b-3478-4811-937e-e2d4184c4933 unbound from our chassis
Jan 26 16:02:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:03.942 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 677a455b-3478-4811-937e-e2d4184c4933, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:02:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 493 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 335 op/s
Jan 26 16:02:03 compute-0 nova_compute[239965]: 2026-01-26 16:02:03.951 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:03.947 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[912d9b13-524d-4f4e-a4cc-7c47b7faea39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:03.952 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-677a455b-3478-4811-937e-e2d4184c4933 namespace which is not needed anymore
Jan 26 16:02:04 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000041.scope: Deactivated successfully.
Jan 26 16:02:04 compute-0 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d00000041.scope: Consumed 19.700s CPU time.
Jan 26 16:02:04 compute-0 systemd-machined[208061]: Machine qemu-72-instance-00000041 terminated.
Jan 26 16:02:04 compute-0 podman[299000]: 2026-01-26 16:02:04.049626009 +0000 UTC m=+0.103101548 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.067 239969 INFO nova.virt.libvirt.driver [-] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Instance destroyed successfully.
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.068 239969 DEBUG nova.objects.instance [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lazy-loading 'resources' on Instance uuid f50eea8d-23f7-4c65-99fa-f919c99ed80d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.081 239969 DEBUG nova.virt.libvirt.vif [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:01:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueNegativeTestJSON-server-2124285703',display_name='tempest-ServerRescueNegativeTestJSON-server-2124285703',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuenegativetestjson-server-2124285703',id=65,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1e098d809d8b4d0a8747af49a85560a1',ramdisk_id='',reservation_id='r-8ym7t3nz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueNegativeTestJSON-2133238807',owner_user_name='tempest-ServerRescueNegativeTestJSON-2133238807-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:01:56Z,user_data=None,user_id='2b5ecf8e47344a3d9fec2e1bf4244aae',uuid=f50eea8d-23f7-4c65-99fa-f919c99ed80d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.082 239969 DEBUG nova.network.os_vif_util [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converting VIF {"id": "f01280ac-7e71-4b0c-bad0-c519d134f076", "address": "fa:16:3e:11:6b:52", "network": {"id": "677a455b-3478-4811-937e-e2d4184c4933", "bridge": "br-int", "label": "tempest-ServerRescueNegativeTestJSON-2018064129-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1e098d809d8b4d0a8747af49a85560a1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf01280ac-7e", "ovs_interfaceid": "f01280ac-7e71-4b0c-bad0-c519d134f076", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.083 239969 DEBUG nova.network.os_vif_util [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:52,bridge_name='br-int',has_traffic_filtering=True,id=f01280ac-7e71-4b0c-bad0-c519d134f076,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf01280ac-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.083 239969 DEBUG os_vif [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:52,bridge_name='br-int',has_traffic_filtering=True,id=f01280ac-7e71-4b0c-bad0-c519d134f076,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf01280ac-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.085 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.085 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf01280ac-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.087 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933[295745]: [NOTICE]   (295756) : haproxy version is 2.8.14-c23fe91
Jan 26 16:02:04 compute-0 neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933[295745]: [NOTICE]   (295756) : path to executable is /usr/sbin/haproxy
Jan 26 16:02:04 compute-0 neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933[295745]: [WARNING]  (295756) : Exiting Master process...
Jan 26 16:02:04 compute-0 neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933[295745]: [WARNING]  (295756) : Exiting Master process...
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.089 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933[295745]: [ALERT]    (295756) : Current worker (295767) exited with code 143 (Terminated)
Jan 26 16:02:04 compute-0 neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933[295745]: [WARNING]  (295756) : All workers exited. Exiting... (0)
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.092 239969 INFO os_vif [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:6b:52,bridge_name='br-int',has_traffic_filtering=True,id=f01280ac-7e71-4b0c-bad0-c519d134f076,network=Network(677a455b-3478-4811-937e-e2d4184c4933),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf01280ac-7e')
Jan 26 16:02:04 compute-0 systemd[1]: libpod-52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800.scope: Deactivated successfully.
Jan 26 16:02:04 compute-0 podman[299045]: 2026-01-26 16:02:04.100154038 +0000 UTC m=+0.050131831 container died 52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800-userdata-shm.mount: Deactivated successfully.
Jan 26 16:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-da10c2e72c3f60ab96ec9e8c064c2a64a269a78f4b8810eb9e0430e606911bfe-merged.mount: Deactivated successfully.
Jan 26 16:02:04 compute-0 podman[299045]: 2026-01-26 16:02:04.151491066 +0000 UTC m=+0.101468859 container cleanup 52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:02:04 compute-0 systemd[1]: libpod-conmon-52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800.scope: Deactivated successfully.
Jan 26 16:02:04 compute-0 podman[299098]: 2026-01-26 16:02:04.219499134 +0000 UTC m=+0.044727087 container remove 52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 26 16:02:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:04.230 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c8507e62-e24b-4946-859d-563209a41e23]: (4, ('Mon Jan 26 04:02:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933 (52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800)\n52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800\nMon Jan 26 04:02:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-677a455b-3478-4811-937e-e2d4184c4933 (52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800)\n52b76f946ab270c5b7a8621739ebd2744478afbe55e29a3fdbd8af32526da800\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:04.232 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3e94eaaf-2353-44e6-85ff-53b4a5b41a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:04.233 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap677a455b-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.235 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 kernel: tap677a455b-30: left promiscuous mode
Jan 26 16:02:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/396945630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:04.248 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ce66390b-e576-4685-9605-b8d48ed39358]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.258 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:04.260 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0a928d09-b80b-491a-a9b3-a64a727ba646]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:04.261 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0d9c84-3670-4630-8a80-c0ebf10bb796]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.262 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:04.278 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[78682449-a4ea-4900-9476-9af4540c752b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 476511, 'reachable_time': 20726, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299116, 'error': None, 'target': 'ovnmeta-677a455b-3478-4811-937e-e2d4184c4933', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d677a455b\x2d3478\x2d4811\x2d937e\x2de2d4184c4933.mount: Deactivated successfully.
Jan 26 16:02:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:04.282 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-677a455b-3478-4811-937e-e2d4184c4933 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:02:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:04.282 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[56e8f717-47c9-422b-ba6d-08387c7c5b39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.282 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.286 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.412 239969 INFO nova.virt.libvirt.driver [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Deleting instance files /var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d_del
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.413 239969 INFO nova.virt.libvirt.driver [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Deletion of /var/lib/nova/instances/f50eea8d-23f7-4c65-99fa-f919c99ed80d_del complete
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.459 239969 INFO nova.compute.manager [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Took 0.63 seconds to destroy the instance on the hypervisor.
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.459 239969 DEBUG oslo.service.loopingcall [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.460 239969 DEBUG nova.compute.manager [-] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.460 239969 DEBUG nova.network.neutron [-] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:02:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3467189256' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.854 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.856 239969 DEBUG nova.virt.libvirt.vif [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-273040466',display_name='tempest-MultipleCreateTestJSON-server-273040466-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-273040466-1',id=71,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-y0wbg9mi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCreateTestJSON-1803223404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:58Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=7748d19a-5c9b-456d-808c-b8f0c79a96ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.857 239969 DEBUG nova.network.os_vif_util [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.858 239969 DEBUG nova.network.os_vif_util [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:6f:04,bridge_name='br-int',has_traffic_filtering=True,id=b493e4e4-7abf-4310-b538-b6b4d31a956c,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb493e4e4-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.859 239969 DEBUG nova.objects.instance [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'pci_devices' on Instance uuid 7748d19a-5c9b-456d-808c-b8f0c79a96ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.874 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <uuid>7748d19a-5c9b-456d-808c-b8f0c79a96ef</uuid>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <name>instance-00000047</name>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <nova:name>tempest-MultipleCreateTestJSON-server-273040466-1</nova:name>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:02:03</nova:creationTime>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <nova:user uuid="008d2b0a4ac3490f9a0fcea9e21be080">tempest-MultipleCreateTestJSON-1803223404-project-member</nova:user>
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <nova:project uuid="df35e1b31a074e6c9a56000deadb0acc">tempest-MultipleCreateTestJSON-1803223404</nova:project>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <nova:port uuid="b493e4e4-7abf-4310-b538-b6b4d31a956c">
Jan 26 16:02:04 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <system>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <entry name="serial">7748d19a-5c9b-456d-808c-b8f0c79a96ef</entry>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <entry name="uuid">7748d19a-5c9b-456d-808c-b8f0c79a96ef</entry>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     </system>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <os>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   </os>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <features>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   </features>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk">
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk.config">
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:04 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:8a:6f:04"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <target dev="tapb493e4e4-7a"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef/console.log" append="off"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <video>
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     </video>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:02:04 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:02:04 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:02:04 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:02:04 compute-0 nova_compute[239965]: </domain>
Jan 26 16:02:04 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.875 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Preparing to wait for external event network-vif-plugged-b493e4e4-7abf-4310-b538-b6b4d31a956c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.875 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.876 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.876 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.877 239969 DEBUG nova.virt.libvirt.vif [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-273040466',display_name='tempest-MultipleCreateTestJSON-server-273040466-1',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-273040466-1',id=71,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-y0wbg9mi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCreateTestJSON-1803223404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:58Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=7748d19a-5c9b-456d-808c-b8f0c79a96ef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.877 239969 DEBUG nova.network.os_vif_util [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.878 239969 DEBUG nova.network.os_vif_util [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:6f:04,bridge_name='br-int',has_traffic_filtering=True,id=b493e4e4-7abf-4310-b538-b6b4d31a956c,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb493e4e4-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.878 239969 DEBUG os_vif [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:6f:04,bridge_name='br-int',has_traffic_filtering=True,id=b493e4e4-7abf-4310-b538-b6b4d31a956c,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb493e4e4-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.879 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.880 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.882 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.882 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb493e4e4-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.883 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb493e4e4-7a, col_values=(('external_ids', {'iface-id': 'b493e4e4-7abf-4310-b538-b6b4d31a956c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:6f:04', 'vm-uuid': '7748d19a-5c9b-456d-808c-b8f0c79a96ef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:04 compute-0 NetworkManager[48954]: <info>  [1769443324.8854] manager: (tapb493e4e4-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/275)
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.886 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.889 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.890 239969 INFO os_vif [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:6f:04,bridge_name='br-int',has_traffic_filtering=True,id=b493e4e4-7abf-4310-b538-b6b4d31a956c,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb493e4e4-7a')
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.935 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.936 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.936 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No VIF found with MAC fa:16:3e:8a:6f:04, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.937 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Using config drive
Jan 26 16:02:04 compute-0 nova_compute[239965]: 2026-01-26 16:02:04.957 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.008 239969 DEBUG nova.network.neutron [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Updating instance_info_cache with network_info: [{"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.024 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Releasing lock "refresh_cache-a84aa0aa-4479-4871-9773-a7ee17fa8565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.025 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Instance network_info: |[{"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.027 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Start _get_guest_xml network_info=[{"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.031 239969 WARNING nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:02:05 compute-0 ceph-mon[75140]: pgmap v1474: 305 pgs: 305 active+clean; 493 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 4.5 MiB/s wr, 335 op/s
Jan 26 16:02:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/396945630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3467189256' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.039 239969 DEBUG nova.virt.libvirt.host [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.040 239969 DEBUG nova.virt.libvirt.host [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.043 239969 DEBUG nova.virt.libvirt.host [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.043 239969 DEBUG nova.virt.libvirt.host [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.044 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.044 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.044 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.045 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.045 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.045 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.045 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.045 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.045 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.046 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.046 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.046 239969 DEBUG nova.virt.hardware [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.049 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.554 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443310.5506127, 7653a9f9-e5bc-4478-b169-1240875c1675 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.555 239969 INFO nova.compute.manager [-] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] VM Stopped (Lifecycle Event)
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.581 239969 DEBUG nova.compute.manager [None req-a850ba15-ed3f-4b49-a4aa-09e482cf8194 - - - - - -] [instance: 7653a9f9-e5bc-4478-b169-1240875c1675] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4624764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.719 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.719 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443310.7033722, 1a3248c2-cd69-499a-8aec-8b03fa196e91 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.720 239969 INFO nova.compute.manager [-] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] VM Stopped (Lifecycle Event)
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.750 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image a84aa0aa-4479-4871-9773-a7ee17fa8565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.754 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.794 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.809 239969 DEBUG nova.compute.manager [None req-36ce59cc-ebb5-4147-adc1-c8ded78f9956 - - - - - -] [instance: 1a3248c2-cd69-499a-8aec-8b03fa196e91] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.942 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Creating config drive at /var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef/disk.config
Jan 26 16:02:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 363 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 398 op/s
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.949 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe_v9h995 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.984 239969 DEBUG nova.compute.manager [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Received event network-vif-deleted-819adf29-2d57-4e5d-81f7-041a9ac00baa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.984 239969 DEBUG nova.compute.manager [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Received event network-changed-b493e4e4-7abf-4310-b538-b6b4d31a956c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.985 239969 DEBUG nova.compute.manager [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Refreshing instance network info cache due to event network-changed-b493e4e4-7abf-4310-b538-b6b4d31a956c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.985 239969 DEBUG oslo_concurrency.lockutils [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7748d19a-5c9b-456d-808c-b8f0c79a96ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.985 239969 DEBUG oslo_concurrency.lockutils [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7748d19a-5c9b-456d-808c-b8f0c79a96ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:05 compute-0 nova_compute[239965]: 2026-01-26 16:02:05.986 239969 DEBUG nova.network.neutron [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Refreshing network info cache for port b493e4e4-7abf-4310-b538-b6b4d31a956c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.006 239969 DEBUG nova.compute.manager [req-8ca4f198-aa64-40d6-9ed3-beeefc001451 req-131eb1d4-359d-40b1-b71a-5d7d2bb0d771 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Received event network-vif-unplugged-f01280ac-7e71-4b0c-bad0-c519d134f076 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.006 239969 DEBUG oslo_concurrency.lockutils [req-8ca4f198-aa64-40d6-9ed3-beeefc001451 req-131eb1d4-359d-40b1-b71a-5d7d2bb0d771 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.007 239969 DEBUG oslo_concurrency.lockutils [req-8ca4f198-aa64-40d6-9ed3-beeefc001451 req-131eb1d4-359d-40b1-b71a-5d7d2bb0d771 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.007 239969 DEBUG oslo_concurrency.lockutils [req-8ca4f198-aa64-40d6-9ed3-beeefc001451 req-131eb1d4-359d-40b1-b71a-5d7d2bb0d771 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.007 239969 DEBUG nova.compute.manager [req-8ca4f198-aa64-40d6-9ed3-beeefc001451 req-131eb1d4-359d-40b1-b71a-5d7d2bb0d771 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] No waiting events found dispatching network-vif-unplugged-f01280ac-7e71-4b0c-bad0-c519d134f076 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.007 239969 DEBUG nova.compute.manager [req-8ca4f198-aa64-40d6-9ed3-beeefc001451 req-131eb1d4-359d-40b1-b71a-5d7d2bb0d771 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Received event network-vif-unplugged-f01280ac-7e71-4b0c-bad0-c519d134f076 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.085 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe_v9h995" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4624764' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.146 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.151 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef/disk.config 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.284 239969 DEBUG nova.network.neutron [-] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.289 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef/disk.config 7748d19a-5c9b-456d-808c-b8f0c79a96ef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.290 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Deleting local config drive /var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef/disk.config because it was imported into RBD.
Jan 26 16:02:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2535082003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.311 239969 INFO nova.compute.manager [-] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Took 1.85 seconds to deallocate network for instance.
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.325 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.327 239969 DEBUG nova.virt.libvirt.vif [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-273040466',display_name='tempest-MultipleCreateTestJSON-server-273040466-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-273040466-2',id=72,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-y0wbg9mi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCreateTestJSON-1803223404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:59Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=a84aa0aa-4479-4871-9773-a7ee17fa8565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.327 239969 DEBUG nova.network.os_vif_util [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.328 239969 DEBUG nova.network.os_vif_util [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:e4:64,bridge_name='br-int',has_traffic_filtering=True,id=b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1bc4eb5-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.329 239969 DEBUG nova.objects.instance [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'pci_devices' on Instance uuid a84aa0aa-4479-4871-9773-a7ee17fa8565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.348 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <uuid>a84aa0aa-4479-4871-9773-a7ee17fa8565</uuid>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <name>instance-00000048</name>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <nova:name>tempest-MultipleCreateTestJSON-server-273040466-2</nova:name>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:02:05</nova:creationTime>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <nova:user uuid="008d2b0a4ac3490f9a0fcea9e21be080">tempest-MultipleCreateTestJSON-1803223404-project-member</nova:user>
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <nova:project uuid="df35e1b31a074e6c9a56000deadb0acc">tempest-MultipleCreateTestJSON-1803223404</nova:project>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <nova:port uuid="b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4">
Jan 26 16:02:06 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <system>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <entry name="serial">a84aa0aa-4479-4871-9773-a7ee17fa8565</entry>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <entry name="uuid">a84aa0aa-4479-4871-9773-a7ee17fa8565</entry>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     </system>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <os>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   </os>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <features>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   </features>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/a84aa0aa-4479-4871-9773-a7ee17fa8565_disk">
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/a84aa0aa-4479-4871-9773-a7ee17fa8565_disk.config">
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:28:e4:64"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <target dev="tapb1bc4eb5-6a"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565/console.log" append="off"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <video>
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     </video>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:02:06 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:02:06 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:02:06 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:02:06 compute-0 nova_compute[239965]: </domain>
Jan 26 16:02:06 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.350 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Preparing to wait for external event network-vif-plugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.351 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.351 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.351 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.352 239969 DEBUG nova.virt.libvirt.vif [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:01:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-273040466',display_name='tempest-MultipleCreateTestJSON-server-273040466-2',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-273040466-2',id=72,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=1,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-y0wbg9mi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-Multipl
eCreateTestJSON-1803223404-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:01:59Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=a84aa0aa-4479-4871-9773-a7ee17fa8565,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.353 239969 DEBUG nova.network.os_vif_util [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.353 239969 DEBUG nova.network.os_vif_util [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:e4:64,bridge_name='br-int',has_traffic_filtering=True,id=b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1bc4eb5-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.354 239969 DEBUG os_vif [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:e4:64,bridge_name='br-int',has_traffic_filtering=True,id=b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1bc4eb5-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.354 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.355 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.355 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.360 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:06 compute-0 kernel: tapb493e4e4-7a: entered promiscuous mode
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.360 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1bc4eb5-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:06 compute-0 NetworkManager[48954]: <info>  [1769443326.3609] manager: (tapb493e4e4-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/276)
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.360 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb1bc4eb5-6a, col_values=(('external_ids', {'iface-id': 'b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:e4:64', 'vm-uuid': 'a84aa0aa-4479-4871-9773-a7ee17fa8565'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:06 compute-0 systemd-udevd[299011]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:02:06 compute-0 ovn_controller[146046]: 2026-01-26T16:02:06Z|00613|binding|INFO|Claiming lport b493e4e4-7abf-4310-b538-b6b4d31a956c for this chassis.
Jan 26 16:02:06 compute-0 ovn_controller[146046]: 2026-01-26T16:02:06Z|00614|binding|INFO|b493e4e4-7abf-4310-b538-b6b4d31a956c: Claiming fa:16:3e:8a:6f:04 10.100.0.10
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.363 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:06 compute-0 NetworkManager[48954]: <info>  [1769443326.3649] manager: (tapb1bc4eb5-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/277)
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.368 239969 DEBUG oslo_concurrency.lockutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.368 239969 DEBUG oslo_concurrency.lockutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.369 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.370 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:6f:04 10.100.0.10'], port_security=['fa:16:3e:8a:6f:04 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7748d19a-5c9b-456d-808c-b8f0c79a96ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eaa8c06b-92d0-4145-8d40-c4fadf59354c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5abb02-464d-4773-99b2-5341ce12ab00, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b493e4e4-7abf-4310-b538-b6b4d31a956c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.371 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b493e4e4-7abf-4310-b538-b6b4d31a956c in datapath ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 bound to our chassis
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.372 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca8d6d0f-aa78-49e5-bed8-8ca0c9363435
Jan 26 16:02:06 compute-0 NetworkManager[48954]: <info>  [1769443326.3805] device (tapb493e4e4-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:02:06 compute-0 NetworkManager[48954]: <info>  [1769443326.3810] device (tapb493e4e4-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:02:06 compute-0 ovn_controller[146046]: 2026-01-26T16:02:06Z|00615|binding|INFO|Setting lport b493e4e4-7abf-4310-b538-b6b4d31a956c ovn-installed in OVS
Jan 26 16:02:06 compute-0 ovn_controller[146046]: 2026-01-26T16:02:06Z|00616|binding|INFO|Setting lport b493e4e4-7abf-4310-b538-b6b4d31a956c up in Southbound
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.385 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0aedf409-b736-4b01-a523-e32a2735e39a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.386 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca8d6d0f-a1 in ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.388 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.389 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca8d6d0f-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.389 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6a04bf8a-8cd1-43e7-990b-e9fd319ea2fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.389 239969 INFO os_vif [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:e4:64,bridge_name='br-int',has_traffic_filtering=True,id=b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1bc4eb5-6a')
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.390 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3d213c20-c345-4979-9c44-e40edfbbd3cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.391 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.401 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[bcee21df-b6db-4d68-8300-a13d931ce67d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 systemd-machined[208061]: New machine qemu-79-instance-00000047.
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.426 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a5248b24-b87c-49d2-bd88-6c93abf9d158]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.428 239969 DEBUG nova.virt.libvirt.driver [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:02:06 compute-0 systemd[1]: Started Virtual Machine qemu-79-instance-00000047.
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.469 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1eca4d45-9283-4815-b57a-1aac4f92df68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.476 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fa303642-8b7f-4ed1-82dd-07189a0f209c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 NetworkManager[48954]: <info>  [1769443326.4771] manager: (tapca8d6d0f-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/278)
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.481 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.481 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.481 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] No VIF found with MAC fa:16:3e:28:e4:64, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.482 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Using config drive
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.507 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa9736f-bcae-4e02-abeb-44bbc9b42ab5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.511 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image a84aa0aa-4479-4871-9773-a7ee17fa8565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.511 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c9cd1f17-1da0-4f3f-b3f1-a760c72527a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 NetworkManager[48954]: <info>  [1769443326.5298] device (tapca8d6d0f-a0): carrier: link connected
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.534 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e5e8c2f6-7c7a-4891-ad41-9b7596cbf637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.537 239969 DEBUG oslo_concurrency.processutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.552 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba4effe8-e773-461f-89c9-30b045786859]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca8d6d0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a1:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481733, 'reachable_time': 37902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299346, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.577 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[05ee2bb9-029a-490e-954a-18ddc70d93ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3c:a1d3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481733, 'tstamp': 481733}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299348, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.595 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[41d4e76d-582d-43c2-ab7f-b0abe73ef455]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca8d6d0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a1:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481733, 'reachable_time': 37902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299349, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.635 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[abe8e690-351a-40e3-af89-aaf069212b92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.703 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a251be2a-024e-4bda-bbc0-54d3af1ad2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.704 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca8d6d0f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.704 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.705 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca8d6d0f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:06 compute-0 NetworkManager[48954]: <info>  [1769443326.7077] manager: (tapca8d6d0f-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/279)
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.707 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:06 compute-0 kernel: tapca8d6d0f-a0: entered promiscuous mode
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.713 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.713 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca8d6d0f-a0, col_values=(('external_ids', {'iface-id': '20125a47-5d33-44a5-95b7-646e9eff5c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:06 compute-0 ovn_controller[146046]: 2026-01-26T16:02:06Z|00617|binding|INFO|Releasing lport 20125a47-5d33-44a5-95b7-646e9eff5c4a from this chassis (sb_readonly=0)
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.733 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.739 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca8d6d0f-aa78-49e5-bed8-8ca0c9363435.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca8d6d0f-aa78-49e5-bed8-8ca0c9363435.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.740 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f55bc41b-f25f-4ca5-ab22-a99fb3eedcd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.741 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/ca8d6d0f-aa78-49e5-bed8-8ca0c9363435.pid.haproxy
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID ca8d6d0f-aa78-49e5-bed8-8ca0c9363435
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:02:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:06.742 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'env', 'PROCESS_TAG=haproxy-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca8d6d0f-aa78-49e5-bed8-8ca0c9363435.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.915 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443326.9152358, 7748d19a-5c9b-456d-808c-b8f0c79a96ef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.916 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] VM Started (Lifecycle Event)
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.943 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.949 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443326.9154663, 7748d19a-5c9b-456d-808c-b8f0c79a96ef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.950 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] VM Paused (Lifecycle Event)
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.974 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.979 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:06 compute-0 nova_compute[239965]: 2026-01-26 16:02:06.998 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:07 compute-0 ceph-mon[75140]: pgmap v1475: 305 pgs: 305 active+clean; 363 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 398 op/s
Jan 26 16:02:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2535082003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611004378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:07 compute-0 podman[299447]: 2026-01-26 16:02:07.173695505 +0000 UTC m=+0.079377276 container create 4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.178 239969 DEBUG oslo_concurrency.processutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.187 239969 DEBUG nova.compute.provider_tree [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.206 239969 DEBUG nova.scheduler.client.report [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:07 compute-0 podman[299447]: 2026-01-26 16:02:07.115229993 +0000 UTC m=+0.020911794 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:02:07 compute-0 systemd[1]: Started libpod-conmon-4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce.scope.
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.225 239969 DEBUG oslo_concurrency.lockutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.243 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443312.18083, 458dfc7d-cb02-4d38-a6ba-a52234a1af2a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.244 239969 INFO nova.compute.manager [-] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] VM Stopped (Lifecycle Event)
Jan 26 16:02:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6975722172448541105af90bf7eef1eab6ddd1082c50b099aaa10d4c5db3d37/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.267 239969 INFO nova.scheduler.client.report [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Deleted allocations for instance f50eea8d-23f7-4c65-99fa-f919c99ed80d
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.270 239969 DEBUG nova.compute.manager [None req-d183d30d-0d88-4c0b-b979-6d2bbbc7793b - - - - - -] [instance: 458dfc7d-cb02-4d38-a6ba-a52234a1af2a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:07 compute-0 podman[299447]: 2026-01-26 16:02:07.280592606 +0000 UTC m=+0.186274407 container init 4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:02:07 compute-0 podman[299447]: 2026-01-26 16:02:07.286515921 +0000 UTC m=+0.192197702 container start 4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:02:07 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[299465]: [NOTICE]   (299469) : New worker (299471) forked
Jan 26 16:02:07 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[299465]: [NOTICE]   (299469) : Loading success.
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.313 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Creating config drive at /var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565/disk.config
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.318 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8sj8e_da execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.352 239969 DEBUG oslo_concurrency.lockutils [None req-d97f8119-7838-4079-a619-51e7e5a6786c 2b5ecf8e47344a3d9fec2e1bf4244aae 1e098d809d8b4d0a8747af49a85560a1 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.456 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8sj8e_da" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.479 239969 DEBUG nova.storage.rbd_utils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] rbd image a84aa0aa-4479-4871-9773-a7ee17fa8565_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.483 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565/disk.config a84aa0aa-4479-4871-9773-a7ee17fa8565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.720 239969 DEBUG oslo_concurrency.processutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565/disk.config a84aa0aa-4479-4871-9773-a7ee17fa8565_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.721 239969 INFO nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Deleting local config drive /var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565/disk.config because it was imported into RBD.
Jan 26 16:02:07 compute-0 kernel: tapb1bc4eb5-6a: entered promiscuous mode
Jan 26 16:02:07 compute-0 NetworkManager[48954]: <info>  [1769443327.7915] manager: (tapb1bc4eb5-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/280)
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:07 compute-0 ovn_controller[146046]: 2026-01-26T16:02:07Z|00618|binding|INFO|Claiming lport b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 for this chassis.
Jan 26 16:02:07 compute-0 ovn_controller[146046]: 2026-01-26T16:02:07Z|00619|binding|INFO|b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4: Claiming fa:16:3e:28:e4:64 10.100.0.5
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.801 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:e4:64 10.100.0.5'], port_security=['fa:16:3e:28:e4:64 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a84aa0aa-4479-4871-9773-a7ee17fa8565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'eaa8c06b-92d0-4145-8d40-c4fadf59354c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5abb02-464d-4773-99b2-5341ce12ab00, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.802 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 in datapath ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 bound to our chassis
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.803 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca8d6d0f-aa78-49e5-bed8-8ca0c9363435
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.822 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[23b20ef5-b26b-4ff6-a2ca-74044b5fed86]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:07 compute-0 ovn_controller[146046]: 2026-01-26T16:02:07Z|00620|binding|INFO|Setting lport b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 ovn-installed in OVS
Jan 26 16:02:07 compute-0 ovn_controller[146046]: 2026-01-26T16:02:07Z|00621|binding|INFO|Setting lport b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 up in Southbound
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.831 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:07 compute-0 systemd-udevd[299533]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:02:07 compute-0 systemd-machined[208061]: New machine qemu-80-instance-00000048.
Jan 26 16:02:07 compute-0 NetworkManager[48954]: <info>  [1769443327.8503] device (tapb1bc4eb5-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:02:07 compute-0 NetworkManager[48954]: <info>  [1769443327.8510] device (tapb1bc4eb5-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.858 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[43f3a446-61ac-4a70-8e27-05587d80ab34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.861 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8b888aad-a365-437f-b182-da9c826c33b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:07 compute-0 systemd[1]: Started Virtual Machine qemu-80-instance-00000048.
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.884 239969 DEBUG nova.network.neutron [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Updated VIF entry in instance network info cache for port b493e4e4-7abf-4310-b538-b6b4d31a956c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.885 239969 DEBUG nova.network.neutron [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Updating instance_info_cache with network_info: [{"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.892 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4529948b-0395-4f73-92c2-b165e830c7ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.905 239969 DEBUG oslo_concurrency.lockutils [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7748d19a-5c9b-456d-808c-b8f0c79a96ef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.906 239969 DEBUG nova.compute.manager [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Received event network-changed-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.907 239969 DEBUG nova.compute.manager [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Refreshing instance network info cache due to event network-changed-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.907 239969 DEBUG oslo_concurrency.lockutils [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-a84aa0aa-4479-4871-9773-a7ee17fa8565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.908 239969 DEBUG oslo_concurrency.lockutils [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-a84aa0aa-4479-4871-9773-a7ee17fa8565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.908 239969 DEBUG nova.network.neutron [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Refreshing network info cache for port b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.908 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[373389bd-2e90-459d-8de7-073c8c661a8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca8d6d0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a1:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 5, 'rx_bytes': 266, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481733, 'reachable_time': 37902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299544, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.926 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7357bb97-1489-4301-bffc-2ed3ee2ed0c0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapca8d6d0f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481746, 'tstamp': 481746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299547, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapca8d6d0f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481750, 'tstamp': 481750}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299547, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.930 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca8d6d0f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.932 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:07 compute-0 nova_compute[239965]: 2026-01-26 16:02:07.933 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.933 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca8d6d0f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.934 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.934 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca8d6d0f-a0, col_values=(('external_ids', {'iface-id': '20125a47-5d33-44a5-95b7-646e9eff5c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:07.934 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 363 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 5.7 MiB/s wr, 194 op/s
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.039 239969 DEBUG nova.compute.manager [req-2aabb8fe-3945-4216-85f3-728d6f7b9242 req-6b0775de-c21f-4c15-a982-ce1a90044ce9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Received event network-vif-deleted-f01280ac-7e71-4b0c-bad0-c519d134f076 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.141 239969 DEBUG nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Received event network-vif-plugged-f01280ac-7e71-4b0c-bad0-c519d134f076 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.142 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.142 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.142 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f50eea8d-23f7-4c65-99fa-f919c99ed80d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.143 239969 DEBUG nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] No waiting events found dispatching network-vif-plugged-f01280ac-7e71-4b0c-bad0-c519d134f076 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.143 239969 WARNING nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Received unexpected event network-vif-plugged-f01280ac-7e71-4b0c-bad0-c519d134f076 for instance with vm_state deleted and task_state None.
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.144 239969 DEBUG nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Received event network-vif-plugged-b493e4e4-7abf-4310-b538-b6b4d31a956c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.144 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.144 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.145 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.145 239969 DEBUG nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Processing event network-vif-plugged-b493e4e4-7abf-4310-b538-b6b4d31a956c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.145 239969 DEBUG nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Received event network-vif-plugged-b493e4e4-7abf-4310-b538-b6b4d31a956c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.145 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.146 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.146 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.147 239969 DEBUG nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] No waiting events found dispatching network-vif-plugged-b493e4e4-7abf-4310-b538-b6b4d31a956c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.147 239969 WARNING nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Received unexpected event network-vif-plugged-b493e4e4-7abf-4310-b538-b6b4d31a956c for instance with vm_state building and task_state spawning.
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.148 239969 DEBUG nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Received event network-vif-plugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.148 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.149 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.150 239969 DEBUG oslo_concurrency.lockutils [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.150 239969 DEBUG nova.compute.manager [req-32ef4e4a-fd3b-4749-969c-c8f028b8826f req-f695bae3-06e7-473c-aa98-31c758dba1e4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Processing event network-vif-plugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.151 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:02:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1611004378' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.158 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443328.157422, 7748d19a-5c9b-456d-808c-b8f0c79a96ef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.159 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] VM Resumed (Lifecycle Event)
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.161 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.167 239969 INFO nova.virt.libvirt.driver [-] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Instance spawned successfully.
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.168 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.191 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.199 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.202 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.202 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.203 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.203 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.204 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.204 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.240 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.279 239969 INFO nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Took 10.12 seconds to spawn the instance on the hypervisor.
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.279 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.368 239969 INFO nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Took 11.27 seconds to build instance.
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.391 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:08 compute-0 kernel: tap2d904ed5-93 (unregistering): left promiscuous mode
Jan 26 16:02:08 compute-0 NetworkManager[48954]: <info>  [1769443328.6591] device (tap2d904ed5-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:02:08 compute-0 ovn_controller[146046]: 2026-01-26T16:02:08Z|00622|binding|INFO|Releasing lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 from this chassis (sb_readonly=0)
Jan 26 16:02:08 compute-0 ovn_controller[146046]: 2026-01-26T16:02:08Z|00623|binding|INFO|Setting lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 down in Southbound
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.665 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:08 compute-0 ovn_controller[146046]: 2026-01-26T16:02:08Z|00624|binding|INFO|Removing iface tap2d904ed5-93 ovn-installed in OVS
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.668 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.678 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:65:1d 10.100.0.5'], port_security=['fa:16:3e:e9:65:1d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9148cd85-c0ce-49ec-a252-00ee17051d61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '93ada5a0-1112-43ef-8d69-92fe39df7192', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2d904ed5-936c-4546-9ac5-a1cb1ccd0266) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.680 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 unbound from our chassis
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.681 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac4ec909-9164-40ab-a894-4d501ce12c59
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.688 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.699 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[73256d70-c778-4d01-9402-dd150981716a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.704 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443328.704193, a84aa0aa-4479-4871-9773-a7ee17fa8565 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.705 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] VM Started (Lifecycle Event)
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.708 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.711 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.715 239969 INFO nova.virt.libvirt.driver [-] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Instance spawned successfully.
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.716 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:02:08 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000043.scope: Deactivated successfully.
Jan 26 16:02:08 compute-0 systemd[1]: machine-qemu\x2d74\x2dinstance\x2d00000043.scope: Consumed 16.839s CPU time.
Jan 26 16:02:08 compute-0 systemd-machined[208061]: Machine qemu-74-instance-00000043 terminated.
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.735 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.740 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.744 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.744 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.745 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.746 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.746 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.747 239969 DEBUG nova.virt.libvirt.driver [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.751 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[aad67527-cf37-4fb1-80ae-3a33487edbc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.755 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[77e42145-30a3-4f3d-b468-9fb627d857b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.791 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2ca42a7c-6747-416b-93b9-cec2f39c7b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.809 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2bddc2-38dc-4a17-839c-bde30df0cdc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ec909-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6b:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473948, 'reachable_time': 43095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299599, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.825 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[672976da-1bf9-4f87-9a7f-8d9469505f8e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473962, 'tstamp': 473962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299600, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473965, 'tstamp': 473965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299600, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.826 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ec909-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.827 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.832 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.832 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac4ec909-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.833 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.833 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac4ec909-90, col_values=(('external_ids', {'iface-id': '47411fc6-7d46-43a0-b0c2-7c06b22cee9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:08.833 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.862 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.862 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443328.704284, a84aa0aa-4479-4871-9773-a7ee17fa8565 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:08 compute-0 nova_compute[239965]: 2026-01-26 16:02:08.862 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] VM Paused (Lifecycle Event)
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.020 239969 INFO nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Took 9.85 seconds to spawn the instance on the hypervisor.
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.020 239969 DEBUG nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.021 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.028 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443328.7117472, a84aa0aa-4479-4871-9773-a7ee17fa8565 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.028 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] VM Resumed (Lifecycle Event)
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.071 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.075 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:09 compute-0 ceph-mon[75140]: pgmap v1476: 305 pgs: 305 active+clean; 363 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 410 KiB/s rd, 5.7 MiB/s wr, 194 op/s
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.249 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.266 239969 INFO nova.compute.manager [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Took 12.11 seconds to build instance.
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.288 239969 DEBUG oslo_concurrency.lockutils [None req-6b6f939f-751f-4eb3-8ecf-d3a742a60b3a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:09 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:02:09 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.442 239969 INFO nova.virt.libvirt.driver [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance shutdown successfully after 24 seconds.
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.450 239969 INFO nova.virt.libvirt.driver [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance destroyed successfully.
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.451 239969 DEBUG nova.objects.instance [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.469 239969 DEBUG nova.compute.manager [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:09 compute-0 nova_compute[239965]: 2026-01-26 16:02:09.547 239969 DEBUG oslo_concurrency.lockutils [None req-d7b37705-d01c-404e-a834-1580ab882143 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 270 op/s
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.137 239969 DEBUG nova.network.neutron [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Updated VIF entry in instance network info cache for port b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.137 239969 DEBUG nova.network.neutron [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Updating instance_info_cache with network_info: [{"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.169 239969 DEBUG oslo_concurrency.lockutils [req-51106158-eba7-4ac9-a6e7-7550bc98485b req-da0d6e11-a49d-48d9-b060-4e0cea431805 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-a84aa0aa-4479-4871-9773-a7ee17fa8565" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.221 239969 DEBUG nova.compute.manager [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Received event network-vif-plugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.221 239969 DEBUG oslo_concurrency.lockutils [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.223 239969 DEBUG oslo_concurrency.lockutils [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.223 239969 DEBUG oslo_concurrency.lockutils [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.224 239969 DEBUG nova.compute.manager [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] No waiting events found dispatching network-vif-plugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.224 239969 WARNING nova.compute.manager [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Received unexpected event network-vif-plugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 for instance with vm_state active and task_state None.
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.225 239969 DEBUG nova.compute.manager [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-vif-unplugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.225 239969 DEBUG oslo_concurrency.lockutils [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.226 239969 DEBUG oslo_concurrency.lockutils [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.226 239969 DEBUG oslo_concurrency.lockutils [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.226 239969 DEBUG nova.compute.manager [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] No waiting events found dispatching network-vif-unplugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.227 239969 WARNING nova.compute.manager [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received unexpected event network-vif-unplugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 for instance with vm_state stopped and task_state None.
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.228 239969 DEBUG nova.compute.manager [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.228 239969 DEBUG oslo_concurrency.lockutils [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.228 239969 DEBUG oslo_concurrency.lockutils [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.229 239969 DEBUG oslo_concurrency.lockutils [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.229 239969 DEBUG nova.compute.manager [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] No waiting events found dispatching network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.229 239969 WARNING nova.compute.manager [req-128806c8-1a62-4026-98a3-4f6020ea20ae req-2e8db23a-6cca-4298-a4e3-a9c837cd5f56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received unexpected event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 for instance with vm_state stopped and task_state None.
Jan 26 16:02:10 compute-0 ceph-mon[75140]: pgmap v1477: 305 pgs: 305 active+clean; 339 MiB data, 673 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 270 op/s
Jan 26 16:02:10 compute-0 nova_compute[239965]: 2026-01-26 16:02:10.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.062 239969 INFO nova.compute.manager [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Rebuilding instance
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.310 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.325 239969 DEBUG nova.compute.manager [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.364 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.371 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.385 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.401 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'resources' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.415 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'migration_context' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.428 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.431 239969 INFO nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance already shutdown.
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.437 239969 INFO nova.virt.libvirt.driver [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance destroyed successfully.
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.440 239969 INFO nova.virt.libvirt.driver [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance destroyed successfully.
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.441 239969 DEBUG nova.virt.libvirt.vif [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-931503541',display_name='tempest-tempest.common.compute-instance-931503541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-931503541',id=67,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-6gturwy1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:10Z,user_data=None,user_id='57040a375df6487fbc604a9b04389eeb',uuid=9148cd85-c0ce-49ec-a252-00ee17051d61,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.441 239969 DEBUG nova.network.os_vif_util [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.442 239969 DEBUG nova.network.os_vif_util [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.443 239969 DEBUG os_vif [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.444 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.444 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d904ed5-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.445 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.447 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.449 239969 INFO os_vif [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93')
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.742 239969 INFO nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Deleting instance files /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61_del
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.743 239969 INFO nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Deletion of /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61_del complete
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.870 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.871 239969 INFO nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Creating image(s)
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.897 239969 DEBUG nova.storage.rbd_utils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.926 239969 DEBUG nova.storage.rbd_utils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 339 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 208 op/s
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.952 239969 DEBUG nova.storage.rbd_utils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:11 compute-0 nova_compute[239965]: 2026-01-26 16:02:11.956 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.052 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.052 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.053 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.054 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.084 239969 DEBUG nova.storage.rbd_utils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.090 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 9148cd85-c0ce-49ec-a252-00ee17051d61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.412 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 9148cd85-c0ce-49ec-a252-00ee17051d61_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.469 239969 DEBUG oslo_concurrency.lockutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.469 239969 DEBUG oslo_concurrency.lockutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.470 239969 DEBUG oslo_concurrency.lockutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.470 239969 DEBUG oslo_concurrency.lockutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.470 239969 DEBUG oslo_concurrency.lockutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.472 239969 INFO nova.compute.manager [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Terminating instance
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.473 239969 DEBUG nova.compute.manager [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.479 239969 DEBUG nova.storage.rbd_utils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] resizing rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:02:12 compute-0 kernel: tapb493e4e4-7a (unregistering): left promiscuous mode
Jan 26 16:02:12 compute-0 NetworkManager[48954]: <info>  [1769443332.5186] device (tapb493e4e4-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.524 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 ovn_controller[146046]: 2026-01-26T16:02:12Z|00625|binding|INFO|Releasing lport 20125a47-5d33-44a5-95b7-646e9eff5c4a from this chassis (sb_readonly=0)
Jan 26 16:02:12 compute-0 ovn_controller[146046]: 2026-01-26T16:02:12Z|00626|binding|INFO|Releasing lport 47411fc6-7d46-43a0-b0c2-7c06b22cee9e from this chassis (sb_readonly=0)
Jan 26 16:02:12 compute-0 ovn_controller[146046]: 2026-01-26T16:02:12Z|00627|binding|INFO|Releasing lport b493e4e4-7abf-4310-b538-b6b4d31a956c from this chassis (sb_readonly=0)
Jan 26 16:02:12 compute-0 ovn_controller[146046]: 2026-01-26T16:02:12Z|00628|binding|INFO|Setting lport b493e4e4-7abf-4310-b538-b6b4d31a956c down in Southbound
Jan 26 16:02:12 compute-0 ovn_controller[146046]: 2026-01-26T16:02:12Z|00629|binding|INFO|Removing iface tapb493e4e4-7a ovn-installed in OVS
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.553 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:6f:04 10.100.0.10'], port_security=['fa:16:3e:8a:6f:04 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '7748d19a-5c9b-456d-808c-b8f0c79a96ef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eaa8c06b-92d0-4145-8d40-c4fadf59354c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5abb02-464d-4773-99b2-5341ce12ab00, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b493e4e4-7abf-4310-b538-b6b4d31a956c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.555 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b493e4e4-7abf-4310-b538-b6b4d31a956c in datapath ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 unbound from our chassis
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.556 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca8d6d0f-aa78-49e5-bed8-8ca0c9363435
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.571 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ddb8c9-1b94-469d-9755-df96ef3f44a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:12 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000047.scope: Deactivated successfully.
Jan 26 16:02:12 compute-0 systemd[1]: machine-qemu\x2d79\x2dinstance\x2d00000047.scope: Consumed 4.799s CPU time.
Jan 26 16:02:12 compute-0 systemd-machined[208061]: Machine qemu-79-instance-00000047 terminated.
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.588 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.597 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.598 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Ensure instance console log exists: /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.598 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.599 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.599 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.601 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3514ef-beb0-44bf-9335-e83ae3cea0ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.602 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Start _get_guest_xml network_info=[{"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.605 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[01dba2fb-a74c-4a1d-b34d-9dcf4ce41609]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.606 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.610 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.615 239969 WARNING nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.623 239969 DEBUG nova.virt.libvirt.host [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.624 239969 DEBUG nova.virt.libvirt.host [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.629 239969 DEBUG nova.virt.libvirt.host [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.630 239969 DEBUG nova.virt.libvirt.host [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.630 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.631 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.631 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.631 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.632 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.632 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.632 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.632 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.632 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fb2e442f-58ba-4158-831c-364e1babff20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.633 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.633 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.633 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.633 239969 DEBUG nova.virt.hardware [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.634 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.636 239969 DEBUG oslo_concurrency.lockutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "a84aa0aa-4479-4871-9773-a7ee17fa8565" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.636 239969 DEBUG oslo_concurrency.lockutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.636 239969 DEBUG oslo_concurrency.lockutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.637 239969 DEBUG oslo_concurrency.lockutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.637 239969 DEBUG oslo_concurrency.lockutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.638 239969 INFO nova.compute.manager [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Terminating instance
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.639 239969 DEBUG nova.compute.manager [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.649 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.652 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4ff33a-bc6b-4c13-8d1a-9babc850c57e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca8d6d0f-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3c:a1:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 188], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481733, 'reachable_time': 37902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299808, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:12 compute-0 kernel: tapb1bc4eb5-6a (unregistering): left promiscuous mode
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.671 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[edd30921-29d7-47be-b644-c2f236537d39]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapca8d6d0f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481746, 'tstamp': 481746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299809, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapca8d6d0f-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 481750, 'tstamp': 481750}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299809, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.673 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca8d6d0f-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:12 compute-0 NetworkManager[48954]: <info>  [1769443332.6751] device (tapb1bc4eb5-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:02:12 compute-0 ovn_controller[146046]: 2026-01-26T16:02:12Z|00630|binding|INFO|Releasing lport b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 from this chassis (sb_readonly=0)
Jan 26 16:02:12 compute-0 ovn_controller[146046]: 2026-01-26T16:02:12Z|00631|binding|INFO|Setting lport b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 down in Southbound
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.685 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca8d6d0f-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.686 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.686 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca8d6d0f-a0, col_values=(('external_ids', {'iface-id': '20125a47-5d33-44a5-95b7-646e9eff5c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:12 compute-0 ovn_controller[146046]: 2026-01-26T16:02:12Z|00632|binding|INFO|Removing iface tapb1bc4eb5-6a ovn-installed in OVS
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.686 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.688 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.695 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:e4:64 10.100.0.5'], port_security=['fa:16:3e:28:e4:64 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a84aa0aa-4479-4871-9773-a7ee17fa8565', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df35e1b31a074e6c9a56000deadb0acc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'eaa8c06b-92d0-4145-8d40-c4fadf59354c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2a5abb02-464d-4773-99b2-5341ce12ab00, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.696 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 in datapath ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 unbound from our chassis
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.697 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.698 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5df04b-ff3b-41f8-83bb-c4dad6a8b040]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.699 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 namespace which is not needed anymore
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.709 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.715 239969 INFO nova.virt.libvirt.driver [-] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Instance destroyed successfully.
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.715 239969 DEBUG nova.objects.instance [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'resources' on Instance uuid 7748d19a-5c9b-456d-808c-b8f0c79a96ef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:12 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000048.scope: Deactivated successfully.
Jan 26 16:02:12 compute-0 systemd[1]: machine-qemu\x2d80\x2dinstance\x2d00000048.scope: Consumed 4.823s CPU time.
Jan 26 16:02:12 compute-0 systemd-machined[208061]: Machine qemu-80-instance-00000048 terminated.
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.733 239969 DEBUG nova.virt.libvirt.vif [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:01:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-273040466',display_name='tempest-MultipleCreateTestJSON-server-273040466-1',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-273040466-1',id=71,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:02:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-y0wbg9mi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCreateTestJSON-1803223404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:02:08Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=7748d19a-5c9b-456d-808c-b8f0c79a96ef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.734 239969 DEBUG nova.network.os_vif_util [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "address": "fa:16:3e:8a:6f:04", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb493e4e4-7a", "ovs_interfaceid": "b493e4e4-7abf-4310-b538-b6b4d31a956c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.735 239969 DEBUG nova.network.os_vif_util [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:6f:04,bridge_name='br-int',has_traffic_filtering=True,id=b493e4e4-7abf-4310-b538-b6b4d31a956c,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb493e4e4-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.735 239969 DEBUG os_vif [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:6f:04,bridge_name='br-int',has_traffic_filtering=True,id=b493e4e4-7abf-4310-b538-b6b4d31a956c,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb493e4e4-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.736 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.737 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb493e4e4-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.741 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.743 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.745 239969 INFO os_vif [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:6f:04,bridge_name='br-int',has_traffic_filtering=True,id=b493e4e4-7abf-4310-b538-b6b4d31a956c,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb493e4e4-7a')
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.813 239969 DEBUG nova.compute.manager [req-25846c2d-c67f-4066-a4d0-cbbbafbe8a89 req-6a3e1c54-c09e-4c2d-b532-5a3c76657430 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Received event network-vif-unplugged-b493e4e4-7abf-4310-b538-b6b4d31a956c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.814 239969 DEBUG oslo_concurrency.lockutils [req-25846c2d-c67f-4066-a4d0-cbbbafbe8a89 req-6a3e1c54-c09e-4c2d-b532-5a3c76657430 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.814 239969 DEBUG oslo_concurrency.lockutils [req-25846c2d-c67f-4066-a4d0-cbbbafbe8a89 req-6a3e1c54-c09e-4c2d-b532-5a3c76657430 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.814 239969 DEBUG oslo_concurrency.lockutils [req-25846c2d-c67f-4066-a4d0-cbbbafbe8a89 req-6a3e1c54-c09e-4c2d-b532-5a3c76657430 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.814 239969 DEBUG nova.compute.manager [req-25846c2d-c67f-4066-a4d0-cbbbafbe8a89 req-6a3e1c54-c09e-4c2d-b532-5a3c76657430 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] No waiting events found dispatching network-vif-unplugged-b493e4e4-7abf-4310-b538-b6b4d31a956c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.815 239969 DEBUG nova.compute.manager [req-25846c2d-c67f-4066-a4d0-cbbbafbe8a89 req-6a3e1c54-c09e-4c2d-b532-5a3c76657430 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Received event network-vif-unplugged-b493e4e4-7abf-4310-b538-b6b4d31a956c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:02:12 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[299465]: [NOTICE]   (299469) : haproxy version is 2.8.14-c23fe91
Jan 26 16:02:12 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[299465]: [NOTICE]   (299469) : path to executable is /usr/sbin/haproxy
Jan 26 16:02:12 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[299465]: [WARNING]  (299469) : Exiting Master process...
Jan 26 16:02:12 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[299465]: [ALERT]    (299469) : Current worker (299471) exited with code 143 (Terminated)
Jan 26 16:02:12 compute-0 neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435[299465]: [WARNING]  (299469) : All workers exited. Exiting... (0)
Jan 26 16:02:12 compute-0 systemd[1]: libpod-4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce.scope: Deactivated successfully.
Jan 26 16:02:12 compute-0 podman[299867]: 2026-01-26 16:02:12.843859699 +0000 UTC m=+0.048721226 container died 4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.859 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.867 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.876 239969 INFO nova.virt.libvirt.driver [-] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Instance destroyed successfully.
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.876 239969 DEBUG nova.objects.instance [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lazy-loading 'resources' on Instance uuid a84aa0aa-4479-4871-9773-a7ee17fa8565 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6975722172448541105af90bf7eef1eab6ddd1082c50b099aaa10d4c5db3d37-merged.mount: Deactivated successfully.
Jan 26 16:02:12 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce-userdata-shm.mount: Deactivated successfully.
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.892 239969 DEBUG nova.virt.libvirt.vif [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:01:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-MultipleCreateTestJSON-server-273040466',display_name='tempest-MultipleCreateTestJSON-server-273040466-2',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-multiplecreatetestjson-server-273040466-2',id=72,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=1,launched_at=2026-01-26T16:02:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df35e1b31a074e6c9a56000deadb0acc',ramdisk_id='',reservation_id='r-y0wbg9mi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-MultipleCreateTestJSON-1803223404',owner_user_name='tempest-MultipleCreateTestJSON-1803223404-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:02:09Z,user_data=None,user_id='008d2b0a4ac3490f9a0fcea9e21be080',uuid=a84aa0aa-4479-4871-9773-a7ee17fa8565,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.893 239969 DEBUG nova.network.os_vif_util [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converting VIF {"id": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "address": "fa:16:3e:28:e4:64", "network": {"id": "ca8d6d0f-aa78-49e5-bed8-8ca0c9363435", "bridge": "br-int", "label": "tempest-MultipleCreateTestJSON-610710835-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df35e1b31a074e6c9a56000deadb0acc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb1bc4eb5-6a", "ovs_interfaceid": "b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.894 239969 DEBUG nova.network.os_vif_util [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:e4:64,bridge_name='br-int',has_traffic_filtering=True,id=b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1bc4eb5-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.894 239969 DEBUG os_vif [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:e4:64,bridge_name='br-int',has_traffic_filtering=True,id=b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1bc4eb5-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 podman[299867]: 2026-01-26 16:02:12.896061478 +0000 UTC m=+0.100923015 container cleanup 4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.896 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1bc4eb5-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.897 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.898 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.899 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.902 239969 INFO os_vif [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:e4:64,bridge_name='br-int',has_traffic_filtering=True,id=b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4,network=Network(ca8d6d0f-aa78-49e5-bed8-8ca0c9363435),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb1bc4eb5-6a')
Jan 26 16:02:12 compute-0 systemd[1]: libpod-conmon-4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce.scope: Deactivated successfully.
Jan 26 16:02:12 compute-0 podman[299926]: 2026-01-26 16:02:12.972495883 +0000 UTC m=+0.052566760 container remove 4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.982 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cad89bff-7ccb-4c7e-b02d-e9e9cc0f976f]: (4, ('Mon Jan 26 04:02:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 (4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce)\n4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce\nMon Jan 26 04:02:12 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 (4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce)\n4bd363e8dc66b72966810d9e0ac19a8534e85c344ff2f5c23d64c40401fa72ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.985 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d146b1f-f12f-4023-a16e-c976596f3ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:12.986 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca8d6d0f-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:12 compute-0 nova_compute[239965]: 2026-01-26 16:02:12.988 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:12 compute-0 kernel: tapca8d6d0f-a0: left promiscuous mode
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.010 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:13 compute-0 ceph-mon[75140]: pgmap v1478: 305 pgs: 305 active+clean; 339 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 208 op/s
Jan 26 16:02:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:13.014 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[03efd0e7-69a7-4370-8a8e-f98b7aebb85b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:13.034 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[698be764-c26e-4421-a054-aa595d8c9d79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:13.036 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[843e7f9a-96ba-40e2-82fa-c39019dece4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:13.054 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[777f7f15-b680-4cc6-8877-5d305f3760c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 481727, 'reachable_time': 43155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299955, 'error': None, 'target': 'ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:13.057 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca8d6d0f-aa78-49e5-bed8-8ca0c9363435 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:02:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:13.057 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[fc3843d2-5eeb-4fa9-99d3-e574086b5c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:13 compute-0 systemd[1]: run-netns-ovnmeta\x2dca8d6d0f\x2daa78\x2d49e5\x2dbed8\x2d8ca0c9363435.mount: Deactivated successfully.
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.099 239969 INFO nova.virt.libvirt.driver [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Deleting instance files /var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef_del
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.100 239969 INFO nova.virt.libvirt.driver [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Deletion of /var/lib/nova/instances/7748d19a-5c9b-456d-808c-b8f0c79a96ef_del complete
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.159 239969 INFO nova.compute.manager [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Took 0.69 seconds to destroy the instance on the hypervisor.
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.160 239969 DEBUG oslo.service.loopingcall [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.160 239969 DEBUG nova.compute.manager [-] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.161 239969 DEBUG nova.network.neutron [-] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.212 239969 INFO nova.virt.libvirt.driver [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Deleting instance files /var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565_del
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.213 239969 INFO nova.virt.libvirt.driver [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Deletion of /var/lib/nova/instances/a84aa0aa-4479-4871-9773-a7ee17fa8565_del complete
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.260 239969 INFO nova.compute.manager [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.261 239969 DEBUG oslo.service.loopingcall [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.262 239969 DEBUG nova.compute.manager [-] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.262 239969 DEBUG nova.network.neutron [-] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:02:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4226974961' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.284 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.312 239969 DEBUG nova.storage.rbd_utils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.316 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/439369519' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.871 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.873 239969 DEBUG nova.virt.libvirt.vif [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-931503541',display_name='tempest-tempest.common.compute-instance-931503541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-931503541',id=67,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-6gturwy1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:11Z,user_data=None,user_id='57040a375df6487fbc604a9b04389eeb',uuid=9148cd85-c0ce-49ec-a252-00ee17051d61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.874 239969 DEBUG nova.network.os_vif_util [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.875 239969 DEBUG nova.network.os_vif_util [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.879 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <uuid>9148cd85-c0ce-49ec-a252-00ee17051d61</uuid>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <name>instance-00000043</name>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <nova:name>tempest-tempest.common.compute-instance-931503541</nova:name>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:02:12</nova:creationTime>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <nova:user uuid="57040a375df6487fbc604a9b04389eeb">tempest-ServerActionsTestOtherA-980651809-project-member</nova:user>
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <nova:project uuid="4f5d05ded9184fea9b526a3522d47ea5">tempest-ServerActionsTestOtherA-980651809</nova:project>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <nova:port uuid="2d904ed5-936c-4546-9ac5-a1cb1ccd0266">
Jan 26 16:02:13 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <system>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <entry name="serial">9148cd85-c0ce-49ec-a252-00ee17051d61</entry>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <entry name="uuid">9148cd85-c0ce-49ec-a252-00ee17051d61</entry>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     </system>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <os>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   </os>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <features>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   </features>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9148cd85-c0ce-49ec-a252-00ee17051d61_disk">
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config">
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e9:65:1d"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <target dev="tap2d904ed5-93"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/console.log" append="off"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <video>
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     </video>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:02:13 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:02:13 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:02:13 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:02:13 compute-0 nova_compute[239965]: </domain>
Jan 26 16:02:13 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.881 239969 DEBUG nova.compute.manager [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Preparing to wait for external event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.882 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.882 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.883 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.884 239969 DEBUG nova.virt.libvirt.vif [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-931503541',display_name='tempest-tempest.common.compute-instance-931503541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-931503541',id=67,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:01:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-6gturwy1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:11Z,user_data=None,user_id='57040a375df6487fbc604a9b04389eeb',uuid=9148cd85-c0ce-49ec-a252-00ee17051d61,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.884 239969 DEBUG nova.network.os_vif_util [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.885 239969 DEBUG nova.network.os_vif_util [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.885 239969 DEBUG os_vif [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.886 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.886 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.886 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.889 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.890 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d904ed5-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.890 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2d904ed5-93, col_values=(('external_ids', {'iface-id': '2d904ed5-936c-4546-9ac5-a1cb1ccd0266', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e9:65:1d', 'vm-uuid': '9148cd85-c0ce-49ec-a252-00ee17051d61'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:13 compute-0 NetworkManager[48954]: <info>  [1769443333.8928] manager: (tap2d904ed5-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/281)
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.894 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.897 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.898 239969 INFO os_vif [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93')
Jan 26 16:02:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 339 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 168 op/s
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.967 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.967 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.968 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No VIF found with MAC fa:16:3e:e9:65:1d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.969 239969 INFO nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Using config drive
Jan 26 16:02:13 compute-0 nova_compute[239965]: 2026-01-26 16:02:13.991 239969 DEBUG nova.storage.rbd_utils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.013 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4226974961' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/439369519' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.040 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'keypairs' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.399 239969 INFO nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Creating config drive at /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.404 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqin34v94 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.525 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443319.5236206, 96c386e9-e3b3-47c6-b6ca-863e894d5c49 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.525 239969 INFO nova.compute.manager [-] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] VM Stopped (Lifecycle Event)
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.548 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqin34v94" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.578 239969 DEBUG nova.storage.rbd_utils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.581 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.673 239969 DEBUG nova.compute.manager [None req-1a539330-55bc-4d9b-9bb6-a1ab1f13a791 - - - - - -] [instance: 96c386e9-e3b3-47c6-b6ca-863e894d5c49] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.687 239969 DEBUG nova.network.neutron [-] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.710 239969 INFO nova.compute.manager [-] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Took 1.55 seconds to deallocate network for instance.
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.718 239969 DEBUG oslo_concurrency.processutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config 9148cd85-c0ce-49ec-a252-00ee17051d61_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.719 239969 INFO nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Deleting local config drive /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61/disk.config because it was imported into RBD.
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.755 239969 DEBUG oslo_concurrency.lockutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.755 239969 DEBUG oslo_concurrency.lockutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:14 compute-0 kernel: tap2d904ed5-93: entered promiscuous mode
Jan 26 16:02:14 compute-0 systemd-udevd[299781]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:02:14 compute-0 NetworkManager[48954]: <info>  [1769443334.7708] manager: (tap2d904ed5-93): new Tun device (/org/freedesktop/NetworkManager/Devices/282)
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.771 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:14 compute-0 ovn_controller[146046]: 2026-01-26T16:02:14Z|00633|binding|INFO|Claiming lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 for this chassis.
Jan 26 16:02:14 compute-0 ovn_controller[146046]: 2026-01-26T16:02:14Z|00634|binding|INFO|2d904ed5-936c-4546-9ac5-a1cb1ccd0266: Claiming fa:16:3e:e9:65:1d 10.100.0.5
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.780 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:65:1d 10.100.0.5'], port_security=['fa:16:3e:e9:65:1d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9148cd85-c0ce-49ec-a252-00ee17051d61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '93ada5a0-1112-43ef-8d69-92fe39df7192', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2d904ed5-936c-4546-9ac5-a1cb1ccd0266) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.781 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 bound to our chassis
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.783 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac4ec909-9164-40ab-a894-4d501ce12c59
Jan 26 16:02:14 compute-0 NetworkManager[48954]: <info>  [1769443334.7854] device (tap2d904ed5-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:02:14 compute-0 NetworkManager[48954]: <info>  [1769443334.7861] device (tap2d904ed5-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.788 239969 DEBUG nova.network.neutron [-] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:14 compute-0 ovn_controller[146046]: 2026-01-26T16:02:14Z|00635|binding|INFO|Setting lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 ovn-installed in OVS
Jan 26 16:02:14 compute-0 ovn_controller[146046]: 2026-01-26T16:02:14Z|00636|binding|INFO|Setting lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 up in Southbound
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.790 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.803 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1a0bf2c5-f061-4550-8bc4-4c7af666749e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:14 compute-0 systemd-machined[208061]: New machine qemu-81-instance-00000043.
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.810 239969 INFO nova.compute.manager [-] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Took 1.55 seconds to deallocate network for instance.
Jan 26 16:02:14 compute-0 systemd[1]: Started Virtual Machine qemu-81-instance-00000043.
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.836 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ea97e5fd-3a5d-4466-8b0e-3646f7811beb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.839 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[025c0235-fca3-442c-921f-1bf9602b81a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.860 239969 DEBUG oslo_concurrency.lockutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.865 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b0fda195-04f9-4cdf-81a8-70286cc188e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.883 239969 DEBUG oslo_concurrency.processutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.890 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[48ecd646-b014-4839-a3b6-bdf69b3eb7f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ec909-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6b:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473948, 'reachable_time': 43095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300082, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.915 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fa0206e0-e280-4e2e-8809-fb66102bfbc1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473962, 'tstamp': 473962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300085, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473965, 'tstamp': 473965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300085, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.917 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ec909-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.920 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac4ec909-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.920 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.921 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac4ec909-90, col_values=(('external_ids', {'iface-id': '47411fc6-7d46-43a0-b0c2-7c06b22cee9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:14.921 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:14 compute-0 nova_compute[239965]: 2026-01-26 16:02:14.948 239969 DEBUG nova.compute.manager [req-d30b1505-599a-45e2-964d-1afa627a522f req-c7769580-185b-490f-8fc9-bd7751242813 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Received event network-vif-deleted-b493e4e4-7abf-4310-b538-b6b4d31a956c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.002 239969 DEBUG nova.compute.manager [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Received event network-vif-plugged-b493e4e4-7abf-4310-b538-b6b4d31a956c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.002 239969 DEBUG oslo_concurrency.lockutils [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.003 239969 DEBUG oslo_concurrency.lockutils [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.003 239969 DEBUG oslo_concurrency.lockutils [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.003 239969 DEBUG nova.compute.manager [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] No waiting events found dispatching network-vif-plugged-b493e4e4-7abf-4310-b538-b6b4d31a956c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.003 239969 WARNING nova.compute.manager [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Received unexpected event network-vif-plugged-b493e4e4-7abf-4310-b538-b6b4d31a956c for instance with vm_state deleted and task_state None.
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.004 239969 DEBUG nova.compute.manager [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Received event network-vif-unplugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.004 239969 DEBUG oslo_concurrency.lockutils [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.004 239969 DEBUG oslo_concurrency.lockutils [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.005 239969 DEBUG oslo_concurrency.lockutils [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.005 239969 DEBUG nova.compute.manager [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] No waiting events found dispatching network-vif-unplugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.005 239969 WARNING nova.compute.manager [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Received unexpected event network-vif-unplugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 for instance with vm_state deleted and task_state None.
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.005 239969 DEBUG nova.compute.manager [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Received event network-vif-plugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.006 239969 DEBUG oslo_concurrency.lockutils [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.006 239969 DEBUG oslo_concurrency.lockutils [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.006 239969 DEBUG oslo_concurrency.lockutils [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.006 239969 DEBUG nova.compute.manager [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] No waiting events found dispatching network-vif-plugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.007 239969 WARNING nova.compute.manager [req-a8f9fdf8-6c04-48cf-8986-3a6dd59e7e46 req-37b34ce0-78c5-40fd-9085-6457d98cb0bb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Received unexpected event network-vif-plugged-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 for instance with vm_state deleted and task_state None.
Jan 26 16:02:15 compute-0 ceph-mon[75140]: pgmap v1479: 305 pgs: 305 active+clean; 339 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 168 op/s
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.166 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.307 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 9148cd85-c0ce-49ec-a252-00ee17051d61 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.308 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443335.3061855, 9148cd85-c0ce-49ec-a252-00ee17051d61 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.308 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] VM Started (Lifecycle Event)
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.331 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.336 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443335.3070467, 9148cd85-c0ce-49ec-a252-00ee17051d61 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.337 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] VM Paused (Lifecycle Event)
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.361 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.364 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.383 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:02:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4064340910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.527 239969 DEBUG oslo_concurrency.processutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.533 239969 DEBUG nova.compute.provider_tree [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.546 239969 DEBUG nova.scheduler.client.report [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.565 239969 DEBUG oslo_concurrency.lockutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.567 239969 DEBUG oslo_concurrency.lockutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.590 239969 INFO nova.scheduler.client.report [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Deleted allocations for instance 7748d19a-5c9b-456d-808c-b8f0c79a96ef
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.670 239969 DEBUG oslo_concurrency.processutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.716 239969 DEBUG oslo_concurrency.lockutils [None req-b837587d-c980-42e9-a870-40857e4987fd 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "7748d19a-5c9b-456d-808c-b8f0c79a96ef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:15 compute-0 sudo[300150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:02:15 compute-0 sudo[300150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:02:15 compute-0 sudo[300150]: pam_unix(sudo:session): session closed for user root
Jan 26 16:02:15 compute-0 nova_compute[239965]: 2026-01-26 16:02:15.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:15 compute-0 sudo[300175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:02:15 compute-0 sudo[300175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:02:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 213 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.0 MiB/s wr, 340 op/s
Jan 26 16:02:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4064340910' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2080746719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:16 compute-0 nova_compute[239965]: 2026-01-26 16:02:16.246 239969 DEBUG oslo_concurrency.processutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:16 compute-0 nova_compute[239965]: 2026-01-26 16:02:16.253 239969 DEBUG nova.compute.provider_tree [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:16 compute-0 nova_compute[239965]: 2026-01-26 16:02:16.270 239969 DEBUG nova.scheduler.client.report [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:16 compute-0 nova_compute[239965]: 2026-01-26 16:02:16.292 239969 DEBUG oslo_concurrency.lockutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:16 compute-0 nova_compute[239965]: 2026-01-26 16:02:16.319 239969 INFO nova.scheduler.client.report [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Deleted allocations for instance a84aa0aa-4479-4871-9773-a7ee17fa8565
Jan 26 16:02:16 compute-0 nova_compute[239965]: 2026-01-26 16:02:16.396 239969 DEBUG oslo_concurrency.lockutils [None req-06799622-81a7-482b-b0fa-cabe1eeb396a 008d2b0a4ac3490f9a0fcea9e21be080 df35e1b31a074e6c9a56000deadb0acc - - default default] Lock "a84aa0aa-4479-4871-9773-a7ee17fa8565" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:16 compute-0 sudo[300175]: pam_unix(sudo:session): session closed for user root
Jan 26 16:02:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:02:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:02:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:02:16 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:02:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:02:16 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:02:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:02:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:02:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:02:16 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:02:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:02:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:02:16 compute-0 sudo[300252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:02:16 compute-0 sudo[300252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:02:16 compute-0 sudo[300252]: pam_unix(sudo:session): session closed for user root
Jan 26 16:02:16 compute-0 sudo[300277]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:02:16 compute-0 sudo[300277]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:02:16 compute-0 podman[300315]: 2026-01-26 16:02:16.903091931 +0000 UTC m=+0.047620868 container create 0f5320acd368e7faa3c75d5ada862964a968f40ed3151fbb150662c5108bc51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_nobel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:02:16 compute-0 systemd[1]: Started libpod-conmon-0f5320acd368e7faa3c75d5ada862964a968f40ed3151fbb150662c5108bc51c.scope.
Jan 26 16:02:16 compute-0 podman[300315]: 2026-01-26 16:02:16.882413055 +0000 UTC m=+0.026942022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:02:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:02:17 compute-0 podman[300315]: 2026-01-26 16:02:17.039574627 +0000 UTC m=+0.184103594 container init 0f5320acd368e7faa3c75d5ada862964a968f40ed3151fbb150662c5108bc51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:02:17 compute-0 ceph-mon[75140]: pgmap v1480: 305 pgs: 305 active+clean; 213 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.0 MiB/s wr, 340 op/s
Jan 26 16:02:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2080746719' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:02:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:02:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:02:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:02:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:02:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:02:17 compute-0 podman[300315]: 2026-01-26 16:02:17.052710049 +0000 UTC m=+0.197239026 container start 0f5320acd368e7faa3c75d5ada862964a968f40ed3151fbb150662c5108bc51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_nobel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:02:17 compute-0 podman[300315]: 2026-01-26 16:02:17.057746583 +0000 UTC m=+0.202275530 container attach 0f5320acd368e7faa3c75d5ada862964a968f40ed3151fbb150662c5108bc51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_nobel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:02:17 compute-0 unruffled_nobel[300331]: 167 167
Jan 26 16:02:17 compute-0 systemd[1]: libpod-0f5320acd368e7faa3c75d5ada862964a968f40ed3151fbb150662c5108bc51c.scope: Deactivated successfully.
Jan 26 16:02:17 compute-0 podman[300315]: 2026-01-26 16:02:17.065349139 +0000 UTC m=+0.209878076 container died 0f5320acd368e7faa3c75d5ada862964a968f40ed3151fbb150662c5108bc51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_nobel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5829478cce60dff74e17082f18aed2d09bf391adec1d1ed7da73e2cb0508359-merged.mount: Deactivated successfully.
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.109 239969 DEBUG nova.compute.manager [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.110 239969 DEBUG oslo_concurrency.lockutils [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.110 239969 DEBUG oslo_concurrency.lockutils [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.110 239969 DEBUG oslo_concurrency.lockutils [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.110 239969 DEBUG nova.compute.manager [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Processing event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.110 239969 DEBUG nova.compute.manager [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.111 239969 DEBUG oslo_concurrency.lockutils [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.111 239969 DEBUG oslo_concurrency.lockutils [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.111 239969 DEBUG oslo_concurrency.lockutils [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.111 239969 DEBUG nova.compute.manager [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] No waiting events found dispatching network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.111 239969 WARNING nova.compute.manager [req-f804c07a-f961-4f14-a1c9-2b0cad93cecf req-57e5e929-c29f-4006-b2ea-6c56ffdb42de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received unexpected event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 for instance with vm_state stopped and task_state rebuild_spawning.
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.112 239969 DEBUG nova.compute.manager [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:02:17 compute-0 podman[300315]: 2026-01-26 16:02:17.115314844 +0000 UTC m=+0.259843791 container remove 0f5320acd368e7faa3c75d5ada862964a968f40ed3151fbb150662c5108bc51c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_nobel, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.120 239969 DEBUG nova.compute.manager [req-7b8e1583-9bd4-4d42-bb6b-a10225081c0d req-77774488-347f-4d13-ba6d-a22b8f560718 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Received event network-vif-deleted-b1bc4eb5-6a87-4148-a73d-9386fbd7d2d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.121 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443337.1180913, 9148cd85-c0ce-49ec-a252-00ee17051d61 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.121 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] VM Resumed (Lifecycle Event)
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.123 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.129 239969 INFO nova.virt.libvirt.driver [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance spawned successfully.
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.130 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:02:17 compute-0 systemd[1]: libpod-conmon-0f5320acd368e7faa3c75d5ada862964a968f40ed3151fbb150662c5108bc51c.scope: Deactivated successfully.
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.141 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.148 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.152 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.152 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.152 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.153 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.153 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.153 239969 DEBUG nova.virt.libvirt.driver [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.184 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.211 239969 DEBUG nova.compute.manager [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.260 239969 INFO nova.compute.manager [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] bringing vm to original state: 'stopped'
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.325 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.327 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.327 239969 DEBUG nova.compute.manager [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.331 239969 DEBUG nova.compute.manager [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 26 16:02:17 compute-0 podman[300353]: 2026-01-26 16:02:17.340166136 +0000 UTC m=+0.054669581 container create 6a7c923ada94b48bcfabe641a076487c233aee08f4c8ae4170af7ebb87c26466 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:02:17 compute-0 kernel: tap2d904ed5-93 (unregistering): left promiscuous mode
Jan 26 16:02:17 compute-0 NetworkManager[48954]: <info>  [1769443337.3738] device (tap2d904ed5-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:02:17 compute-0 ovn_controller[146046]: 2026-01-26T16:02:17Z|00637|binding|INFO|Releasing lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 from this chassis (sb_readonly=0)
Jan 26 16:02:17 compute-0 ovn_controller[146046]: 2026-01-26T16:02:17Z|00638|binding|INFO|Setting lport 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 down in Southbound
Jan 26 16:02:17 compute-0 ovn_controller[146046]: 2026-01-26T16:02:17Z|00639|binding|INFO|Removing iface tap2d904ed5-93 ovn-installed in OVS
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.385 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.389 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:17 compute-0 systemd[1]: Started libpod-conmon-6a7c923ada94b48bcfabe641a076487c233aee08f4c8ae4170af7ebb87c26466.scope.
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.393 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e9:65:1d 10.100.0.5'], port_security=['fa:16:3e:e9:65:1d 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '9148cd85-c0ce-49ec-a252-00ee17051d61', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '93ada5a0-1112-43ef-8d69-92fe39df7192', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2d904ed5-936c-4546-9ac5-a1cb1ccd0266) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.394 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2d904ed5-936c-4546-9ac5-a1cb1ccd0266 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 unbound from our chassis
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.395 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac4ec909-9164-40ab-a894-4d501ce12c59
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.405 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:17 compute-0 podman[300353]: 2026-01-26 16:02:17.316095796 +0000 UTC m=+0.030599271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:02:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.424 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5d0c1b64-2a18-4661-bbd9-5845fa37bc0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdcba66c8083b082ebb0264c2fb4a4d70f545323efa27572563018665bd1b30/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdcba66c8083b082ebb0264c2fb4a4d70f545323efa27572563018665bd1b30/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdcba66c8083b082ebb0264c2fb4a4d70f545323efa27572563018665bd1b30/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdcba66c8083b082ebb0264c2fb4a4d70f545323efa27572563018665bd1b30/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fdcba66c8083b082ebb0264c2fb4a4d70f545323efa27572563018665bd1b30/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:17 compute-0 systemd[1]: machine-qemu\x2d81\x2dinstance\x2d00000043.scope: Deactivated successfully.
Jan 26 16:02:17 compute-0 systemd-machined[208061]: Machine qemu-81-instance-00000043 terminated.
Jan 26 16:02:17 compute-0 podman[300353]: 2026-01-26 16:02:17.452875859 +0000 UTC m=+0.167379324 container init 6a7c923ada94b48bcfabe641a076487c233aee08f4c8ae4170af7ebb87c26466 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:02:17 compute-0 podman[300353]: 2026-01-26 16:02:17.468380239 +0000 UTC m=+0.182883684 container start 6a7c923ada94b48bcfabe641a076487c233aee08f4c8ae4170af7ebb87c26466 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.467 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[08a41e66-d613-4103-8e5f-79cf0ea8cbc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:17 compute-0 podman[300353]: 2026-01-26 16:02:17.472844469 +0000 UTC m=+0.187347914 container attach 6a7c923ada94b48bcfabe641a076487c233aee08f4c8ae4170af7ebb87c26466 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.475 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[438ace50-02cc-411d-bdc4-bc5aa4ebdcca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.510 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[61ffa673-dc58-4a14-9bfb-0c6f760565ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.535 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[54bbbcae-8c53-437a-ac2e-0b66995188b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ec909-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6b:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473948, 'reachable_time': 43095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300386, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.568 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dc37ee01-11ed-4dc7-aef6-171ee864b1c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473962, 'tstamp': 473962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300387, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473965, 'tstamp': 473965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300387, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.570 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ec909-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.573 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.578 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.579 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac4ec909-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.580 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.580 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac4ec909-90, col_values=(('external_ids', {'iface-id': '47411fc6-7d46-43a0-b0c2-7c06b22cee9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:17.581 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.591 239969 INFO nova.virt.libvirt.driver [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance destroyed successfully.
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.592 239969 DEBUG nova.compute.manager [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.652 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.679 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.680 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.680 239969 DEBUG nova.objects.instance [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:02:17 compute-0 nova_compute[239965]: 2026-01-26 16:02:17.748 239969 DEBUG oslo_concurrency.lockutils [None req-4e0c9bf2-d887-417f-a2e9-b889666e0a7d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.068s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 213 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 277 op/s
Jan 26 16:02:18 compute-0 brave_solomon[300371]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:02:18 compute-0 brave_solomon[300371]: --> All data devices are unavailable
Jan 26 16:02:18 compute-0 systemd[1]: libpod-6a7c923ada94b48bcfabe641a076487c233aee08f4c8ae4170af7ebb87c26466.scope: Deactivated successfully.
Jan 26 16:02:18 compute-0 podman[300353]: 2026-01-26 16:02:18.067696731 +0000 UTC m=+0.782200176 container died 6a7c923ada94b48bcfabe641a076487c233aee08f4c8ae4170af7ebb87c26466 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fdcba66c8083b082ebb0264c2fb4a4d70f545323efa27572563018665bd1b30-merged.mount: Deactivated successfully.
Jan 26 16:02:18 compute-0 podman[300353]: 2026-01-26 16:02:18.118196689 +0000 UTC m=+0.832700134 container remove 6a7c923ada94b48bcfabe641a076487c233aee08f4c8ae4170af7ebb87c26466 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:02:18 compute-0 systemd[1]: libpod-conmon-6a7c923ada94b48bcfabe641a076487c233aee08f4c8ae4170af7ebb87c26466.scope: Deactivated successfully.
Jan 26 16:02:18 compute-0 sudo[300277]: pam_unix(sudo:session): session closed for user root
Jan 26 16:02:18 compute-0 sudo[300428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:02:18 compute-0 sudo[300428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:02:18 compute-0 sudo[300428]: pam_unix(sudo:session): session closed for user root
Jan 26 16:02:18 compute-0 sudo[300453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:02:18 compute-0 sudo[300453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:02:18 compute-0 podman[300488]: 2026-01-26 16:02:18.600547015 +0000 UTC m=+0.055403740 container create f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:02:18 compute-0 systemd[1]: Started libpod-conmon-f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54.scope.
Jan 26 16:02:18 compute-0 podman[300488]: 2026-01-26 16:02:18.573363698 +0000 UTC m=+0.028220453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:02:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:02:18 compute-0 podman[300488]: 2026-01-26 16:02:18.703113249 +0000 UTC m=+0.157970004 container init f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:02:18 compute-0 podman[300488]: 2026-01-26 16:02:18.715566104 +0000 UTC m=+0.170422829 container start f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:02:18 compute-0 blissful_banach[300505]: 167 167
Jan 26 16:02:18 compute-0 systemd[1]: libpod-f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54.scope: Deactivated successfully.
Jan 26 16:02:18 compute-0 conmon[300505]: conmon f480361db88454d2c1bc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54.scope/container/memory.events
Jan 26 16:02:18 compute-0 podman[300488]: 2026-01-26 16:02:18.7227583 +0000 UTC m=+0.177615025 container attach f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:02:18 compute-0 podman[300488]: 2026-01-26 16:02:18.72516925 +0000 UTC m=+0.180025995 container died f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd43eb52501da5fd5873ef7f41171d0df830d26b8e2f793cd5874b6da8feb556-merged.mount: Deactivated successfully.
Jan 26 16:02:18 compute-0 podman[300488]: 2026-01-26 16:02:18.770946581 +0000 UTC m=+0.225803316 container remove f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:02:18 compute-0 systemd[1]: libpod-conmon-f480361db88454d2c1bc70fd2caa51a11ccbfb882142e4e9e0fcda81fe21fa54.scope: Deactivated successfully.
Jan 26 16:02:18 compute-0 nova_compute[239965]: 2026-01-26 16:02:18.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:19 compute-0 podman[300529]: 2026-01-26 16:02:19.022782875 +0000 UTC m=+0.066434579 container create 7c49813e1d3fc80727008a4e8051fbc4c0dc662930321ac19830629cfe741e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hamilton, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:02:19 compute-0 ceph-mon[75140]: pgmap v1481: 305 pgs: 305 active+clean; 213 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 277 op/s
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.060 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443324.0584211, f50eea8d-23f7-4c65-99fa-f919c99ed80d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.062 239969 INFO nova.compute.manager [-] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] VM Stopped (Lifecycle Event)
Jan 26 16:02:19 compute-0 systemd[1]: Started libpod-conmon-7c49813e1d3fc80727008a4e8051fbc4c0dc662930321ac19830629cfe741e48.scope.
Jan 26 16:02:19 compute-0 podman[300529]: 2026-01-26 16:02:18.99522382 +0000 UTC m=+0.038875544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.091 239969 DEBUG nova.compute.manager [None req-ddb97098-ec15-4cac-a7e8-c6e90019736c - - - - - -] [instance: f50eea8d-23f7-4c65-99fa-f919c99ed80d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:02:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9a3f92239e718ee9b8dfc8dd02c6d7ce255cadfb88ee39a3018405c98b5ef3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9a3f92239e718ee9b8dfc8dd02c6d7ce255cadfb88ee39a3018405c98b5ef3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9a3f92239e718ee9b8dfc8dd02c6d7ce255cadfb88ee39a3018405c98b5ef3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f9a3f92239e718ee9b8dfc8dd02c6d7ce255cadfb88ee39a3018405c98b5ef3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:19 compute-0 podman[300529]: 2026-01-26 16:02:19.142931191 +0000 UTC m=+0.186582905 container init 7c49813e1d3fc80727008a4e8051fbc4c0dc662930321ac19830629cfe741e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:02:19 compute-0 podman[300529]: 2026-01-26 16:02:19.151080211 +0000 UTC m=+0.194731925 container start 7c49813e1d3fc80727008a4e8051fbc4c0dc662930321ac19830629cfe741e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hamilton, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 16:02:19 compute-0 podman[300529]: 2026-01-26 16:02:19.156134394 +0000 UTC m=+0.199786128 container attach 7c49813e1d3fc80727008a4e8051fbc4c0dc662930321ac19830629cfe741e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hamilton, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.219 239969 DEBUG nova.compute.manager [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-vif-unplugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.221 239969 DEBUG oslo_concurrency.lockutils [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.221 239969 DEBUG oslo_concurrency.lockutils [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.221 239969 DEBUG oslo_concurrency.lockutils [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.221 239969 DEBUG nova.compute.manager [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] No waiting events found dispatching network-vif-unplugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.221 239969 WARNING nova.compute.manager [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received unexpected event network-vif-unplugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 for instance with vm_state stopped and task_state None.
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.222 239969 DEBUG nova.compute.manager [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.222 239969 DEBUG oslo_concurrency.lockutils [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.222 239969 DEBUG oslo_concurrency.lockutils [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.222 239969 DEBUG oslo_concurrency.lockutils [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.222 239969 DEBUG nova.compute.manager [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] No waiting events found dispatching network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.223 239969 WARNING nova.compute.manager [req-46d00864-d1e2-4762-96e1-c2cdd3b0e1be req-0ae67322-f281-4dfe-9a0a-2c103b38b96b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received unexpected event network-vif-plugged-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 for instance with vm_state stopped and task_state None.
Jan 26 16:02:19 compute-0 epic_hamilton[300546]: {
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:     "0": [
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:         {
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "devices": [
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "/dev/loop3"
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             ],
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_name": "ceph_lv0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_size": "21470642176",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "name": "ceph_lv0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "tags": {
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.cluster_name": "ceph",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.crush_device_class": "",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.encrypted": "0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.objectstore": "bluestore",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.osd_id": "0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.type": "block",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.vdo": "0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.with_tpm": "0"
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             },
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "type": "block",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "vg_name": "ceph_vg0"
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:         }
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:     ],
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:     "1": [
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:         {
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "devices": [
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "/dev/loop4"
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             ],
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_name": "ceph_lv1",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_size": "21470642176",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "name": "ceph_lv1",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "tags": {
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.cluster_name": "ceph",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.crush_device_class": "",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.encrypted": "0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.objectstore": "bluestore",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.osd_id": "1",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.type": "block",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.vdo": "0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.with_tpm": "0"
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             },
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "type": "block",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "vg_name": "ceph_vg1"
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:         }
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:     ],
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:     "2": [
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:         {
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "devices": [
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "/dev/loop5"
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             ],
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_name": "ceph_lv2",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_size": "21470642176",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "name": "ceph_lv2",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "tags": {
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.cluster_name": "ceph",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.crush_device_class": "",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.encrypted": "0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.objectstore": "bluestore",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.osd_id": "2",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.type": "block",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.vdo": "0",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:                 "ceph.with_tpm": "0"
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             },
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "type": "block",
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:             "vg_name": "ceph_vg2"
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:         }
Jan 26 16:02:19 compute-0 epic_hamilton[300546]:     ]
Jan 26 16:02:19 compute-0 epic_hamilton[300546]: }
Jan 26 16:02:19 compute-0 systemd[1]: libpod-7c49813e1d3fc80727008a4e8051fbc4c0dc662930321ac19830629cfe741e48.scope: Deactivated successfully.
Jan 26 16:02:19 compute-0 podman[300555]: 2026-01-26 16:02:19.602415575 +0000 UTC m=+0.033352619 container died 7c49813e1d3fc80727008a4e8051fbc4c0dc662930321ac19830629cfe741e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hamilton, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:02:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f9a3f92239e718ee9b8dfc8dd02c6d7ce255cadfb88ee39a3018405c98b5ef3-merged.mount: Deactivated successfully.
Jan 26 16:02:19 compute-0 podman[300555]: 2026-01-26 16:02:19.656017919 +0000 UTC m=+0.086954933 container remove 7c49813e1d3fc80727008a4e8051fbc4c0dc662930321ac19830629cfe741e48 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:02:19 compute-0 systemd[1]: libpod-conmon-7c49813e1d3fc80727008a4e8051fbc4c0dc662930321ac19830629cfe741e48.scope: Deactivated successfully.
Jan 26 16:02:19 compute-0 sudo[300453]: pam_unix(sudo:session): session closed for user root
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.775 239969 DEBUG oslo_concurrency.lockutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.777 239969 DEBUG oslo_concurrency.lockutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.777 239969 DEBUG oslo_concurrency.lockutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.777 239969 DEBUG oslo_concurrency.lockutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.777 239969 DEBUG oslo_concurrency.lockutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.778 239969 INFO nova.compute.manager [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Terminating instance
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.779 239969 DEBUG nova.compute.manager [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.788 239969 INFO nova.virt.libvirt.driver [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Instance destroyed successfully.
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.788 239969 DEBUG nova.objects.instance [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'resources' on Instance uuid 9148cd85-c0ce-49ec-a252-00ee17051d61 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.803 239969 DEBUG nova.virt.libvirt.vif [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:01:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-931503541',display_name='tempest-tempest.common.compute-instance-931503541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-931503541',id=67,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:02:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-6gturwy1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:02:17Z,user_data=None,user_id='57040a375df6487fbc604a9b04389eeb',uuid=9148cd85-c0ce-49ec-a252-00ee17051d61,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.804 239969 DEBUG nova.network.os_vif_util [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "address": "fa:16:3e:e9:65:1d", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2d904ed5-93", "ovs_interfaceid": "2d904ed5-936c-4546-9ac5-a1cb1ccd0266", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.805 239969 DEBUG nova.network.os_vif_util [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.806 239969 DEBUG os_vif [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.809 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.809 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d904ed5-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.811 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.814 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:19 compute-0 nova_compute[239965]: 2026-01-26 16:02:19.820 239969 INFO os_vif [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e9:65:1d,bridge_name='br-int',has_traffic_filtering=True,id=2d904ed5-936c-4546-9ac5-a1cb1ccd0266,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2d904ed5-93')
Jan 26 16:02:19 compute-0 sudo[300567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:02:19 compute-0 sudo[300567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:02:19 compute-0 sudo[300567]: pam_unix(sudo:session): session closed for user root
Jan 26 16:02:19 compute-0 sudo[300607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:02:19 compute-0 sudo[300607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:02:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 213 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 286 op/s
Jan 26 16:02:20 compute-0 nova_compute[239965]: 2026-01-26 16:02:20.183 239969 INFO nova.virt.libvirt.driver [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Deleting instance files /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61_del
Jan 26 16:02:20 compute-0 nova_compute[239965]: 2026-01-26 16:02:20.184 239969 INFO nova.virt.libvirt.driver [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Deletion of /var/lib/nova/instances/9148cd85-c0ce-49ec-a252-00ee17051d61_del complete
Jan 26 16:02:20 compute-0 nova_compute[239965]: 2026-01-26 16:02:20.231 239969 INFO nova.compute.manager [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Took 0.45 seconds to destroy the instance on the hypervisor.
Jan 26 16:02:20 compute-0 nova_compute[239965]: 2026-01-26 16:02:20.231 239969 DEBUG oslo.service.loopingcall [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:02:20 compute-0 nova_compute[239965]: 2026-01-26 16:02:20.232 239969 DEBUG nova.compute.manager [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:02:20 compute-0 nova_compute[239965]: 2026-01-26 16:02:20.232 239969 DEBUG nova.network.neutron [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:02:20 compute-0 podman[300648]: 2026-01-26 16:02:20.286149257 +0000 UTC m=+0.053839041 container create bf9dbe936872ab9310d911d4004d0549ce11e88ad15dd3dd28b7a782288ce50b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 16:02:20 compute-0 systemd[1]: Started libpod-conmon-bf9dbe936872ab9310d911d4004d0549ce11e88ad15dd3dd28b7a782288ce50b.scope.
Jan 26 16:02:20 compute-0 podman[300648]: 2026-01-26 16:02:20.258094849 +0000 UTC m=+0.025784653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:02:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:02:20 compute-0 podman[300648]: 2026-01-26 16:02:20.393017147 +0000 UTC m=+0.160706951 container init bf9dbe936872ab9310d911d4004d0549ce11e88ad15dd3dd28b7a782288ce50b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 16:02:20 compute-0 podman[300648]: 2026-01-26 16:02:20.402101139 +0000 UTC m=+0.169790923 container start bf9dbe936872ab9310d911d4004d0549ce11e88ad15dd3dd28b7a782288ce50b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:02:20 compute-0 podman[300648]: 2026-01-26 16:02:20.407326367 +0000 UTC m=+0.175016161 container attach bf9dbe936872ab9310d911d4004d0549ce11e88ad15dd3dd28b7a782288ce50b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:02:20 compute-0 vibrant_ptolemy[300664]: 167 167
Jan 26 16:02:20 compute-0 systemd[1]: libpod-bf9dbe936872ab9310d911d4004d0549ce11e88ad15dd3dd28b7a782288ce50b.scope: Deactivated successfully.
Jan 26 16:02:20 compute-0 podman[300648]: 2026-01-26 16:02:20.408625089 +0000 UTC m=+0.176314883 container died bf9dbe936872ab9310d911d4004d0549ce11e88ad15dd3dd28b7a782288ce50b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:02:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-1747fa553f73060cbceb59fedb561e64cb19135e8fb3733118b532cffc7bf51b-merged.mount: Deactivated successfully.
Jan 26 16:02:20 compute-0 podman[300648]: 2026-01-26 16:02:20.45233529 +0000 UTC m=+0.220025074 container remove bf9dbe936872ab9310d911d4004d0549ce11e88ad15dd3dd28b7a782288ce50b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:02:20 compute-0 systemd[1]: libpod-conmon-bf9dbe936872ab9310d911d4004d0549ce11e88ad15dd3dd28b7a782288ce50b.scope: Deactivated successfully.
Jan 26 16:02:20 compute-0 ovn_controller[146046]: 2026-01-26T16:02:20Z|00640|binding|INFO|Releasing lport 47411fc6-7d46-43a0-b0c2-7c06b22cee9e from this chassis (sb_readonly=0)
Jan 26 16:02:20 compute-0 podman[300688]: 2026-01-26 16:02:20.660244958 +0000 UTC m=+0.060952576 container create ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 16:02:20 compute-0 nova_compute[239965]: 2026-01-26 16:02:20.667 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:20 compute-0 systemd[1]: Started libpod-conmon-ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c.scope.
Jan 26 16:02:20 compute-0 podman[300688]: 2026-01-26 16:02:20.625312671 +0000 UTC m=+0.026020299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:02:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d94a648efac38f402a8c80331f2eec1efdd27534f3e2c2f8301faf5464f7cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d94a648efac38f402a8c80331f2eec1efdd27534f3e2c2f8301faf5464f7cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d94a648efac38f402a8c80331f2eec1efdd27534f3e2c2f8301faf5464f7cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d94a648efac38f402a8c80331f2eec1efdd27534f3e2c2f8301faf5464f7cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:20 compute-0 podman[300688]: 2026-01-26 16:02:20.775850931 +0000 UTC m=+0.176558579 container init ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:02:20 compute-0 podman[300688]: 2026-01-26 16:02:20.78477171 +0000 UTC m=+0.185479328 container start ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:02:20 compute-0 podman[300688]: 2026-01-26 16:02:20.793619017 +0000 UTC m=+0.194326655 container attach ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:02:20 compute-0 nova_compute[239965]: 2026-01-26 16:02:20.809 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:21 compute-0 ceph-mon[75140]: pgmap v1482: 305 pgs: 305 active+clean; 213 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 286 op/s
Jan 26 16:02:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:21 compute-0 lvm[300782]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:02:21 compute-0 lvm[300782]: VG ceph_vg0 finished
Jan 26 16:02:21 compute-0 lvm[300783]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:02:21 compute-0 lvm[300783]: VG ceph_vg1 finished
Jan 26 16:02:21 compute-0 lvm[300785]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:02:21 compute-0 lvm[300785]: VG ceph_vg2 finished
Jan 26 16:02:21 compute-0 naughty_germain[300704]: {}
Jan 26 16:02:21 compute-0 systemd[1]: libpod-ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c.scope: Deactivated successfully.
Jan 26 16:02:21 compute-0 systemd[1]: libpod-ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c.scope: Consumed 1.642s CPU time.
Jan 26 16:02:21 compute-0 podman[300688]: 2026-01-26 16:02:21.771496349 +0000 UTC m=+1.172203967 container died ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4d94a648efac38f402a8c80331f2eec1efdd27534f3e2c2f8301faf5464f7cd-merged.mount: Deactivated successfully.
Jan 26 16:02:21 compute-0 podman[300688]: 2026-01-26 16:02:21.82738948 +0000 UTC m=+1.228097098 container remove ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:02:21 compute-0 systemd[1]: libpod-conmon-ea4069d1b67c7db2f25636f75220d7a1dd9eebdcfb98db8a18489386b46c1b9c.scope: Deactivated successfully.
Jan 26 16:02:21 compute-0 sudo[300607]: pam_unix(sudo:session): session closed for user root
Jan 26 16:02:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:02:21 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:02:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:02:21 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:02:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 193 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 211 op/s
Jan 26 16:02:21 compute-0 sudo[300802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:02:21 compute-0 sudo[300802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:02:21 compute-0 sudo[300802]: pam_unix(sudo:session): session closed for user root
Jan 26 16:02:22 compute-0 nova_compute[239965]: 2026-01-26 16:02:22.876 239969 DEBUG nova.network.neutron [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:02:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:02:22 compute-0 ceph-mon[75140]: pgmap v1483: 305 pgs: 305 active+clean; 193 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 211 op/s
Jan 26 16:02:22 compute-0 nova_compute[239965]: 2026-01-26 16:02:22.930 239969 INFO nova.compute.manager [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Took 2.70 seconds to deallocate network for instance.
Jan 26 16:02:22 compute-0 nova_compute[239965]: 2026-01-26 16:02:22.986 239969 DEBUG oslo_concurrency.lockutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:22 compute-0 nova_compute[239965]: 2026-01-26 16:02:22.986 239969 DEBUG oslo_concurrency.lockutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:23 compute-0 nova_compute[239965]: 2026-01-26 16:02:23.059 239969 DEBUG oslo_concurrency.processutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:23 compute-0 nova_compute[239965]: 2026-01-26 16:02:23.278 239969 DEBUG nova.compute.manager [req-eb49d8d8-9d55-4991-903d-803e32f63163 req-4f3da99b-0218-4212-ad98-b71f9176d14f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Received event network-vif-deleted-2d904ed5-936c-4546-9ac5-a1cb1ccd0266 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2467156271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:23 compute-0 nova_compute[239965]: 2026-01-26 16:02:23.701 239969 DEBUG oslo_concurrency.processutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:23 compute-0 nova_compute[239965]: 2026-01-26 16:02:23.709 239969 DEBUG nova.compute.provider_tree [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:23 compute-0 nova_compute[239965]: 2026-01-26 16:02:23.733 239969 DEBUG nova.scheduler.client.report [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:23 compute-0 nova_compute[239965]: 2026-01-26 16:02:23.761 239969 DEBUG oslo_concurrency.lockutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:23 compute-0 nova_compute[239965]: 2026-01-26 16:02:23.789 239969 INFO nova.scheduler.client.report [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Deleted allocations for instance 9148cd85-c0ce-49ec-a252-00ee17051d61
Jan 26 16:02:23 compute-0 nova_compute[239965]: 2026-01-26 16:02:23.859 239969 DEBUG oslo_concurrency.lockutils [None req-a739e652-8689-43d4-a52a-bb86573bab8d 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "9148cd85-c0ce-49ec-a252-00ee17051d61" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 193 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Jan 26 16:02:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2467156271' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:23 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Jan 26 16:02:23 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:23.975424) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:02:23 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Jan 26 16:02:23 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443343975468, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2386, "num_deletes": 509, "total_data_size": 3273368, "memory_usage": 3330272, "flush_reason": "Manual Compaction"}
Jan 26 16:02:23 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443344007390, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3198891, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28414, "largest_seqno": 30799, "table_properties": {"data_size": 3188778, "index_size": 5839, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3205, "raw_key_size": 25006, "raw_average_key_size": 19, "raw_value_size": 3165945, "raw_average_value_size": 2496, "num_data_blocks": 257, "num_entries": 1268, "num_filter_entries": 1268, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769443147, "oldest_key_time": 1769443147, "file_creation_time": 1769443343, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 32060 microseconds, and 7917 cpu microseconds.
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.007473) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3198891 bytes OK
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.007509) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.010251) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.010303) EVENT_LOG_v1 {"time_micros": 1769443344010286, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.010340) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3262253, prev total WAL file size 3262253, number of live WAL files 2.
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.011781) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3123KB)], [62(9057KB)]
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443344011877, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12473627, "oldest_snapshot_seqno": -1}
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5754 keys, 10643585 bytes, temperature: kUnknown
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443344084904, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10643585, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10601155, "index_size": 26960, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 144369, "raw_average_key_size": 25, "raw_value_size": 10493953, "raw_average_value_size": 1823, "num_data_blocks": 1098, "num_entries": 5754, "num_filter_entries": 5754, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769443344, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.085203) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10643585 bytes
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.086592) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.5 rd, 145.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.8 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 6789, records dropped: 1035 output_compression: NoCompression
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.086607) EVENT_LOG_v1 {"time_micros": 1769443344086599, "job": 34, "event": "compaction_finished", "compaction_time_micros": 73158, "compaction_time_cpu_micros": 26502, "output_level": 6, "num_output_files": 1, "total_output_size": 10643585, "num_input_records": 6789, "num_output_records": 5754, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443344087367, "job": 34, "event": "table_file_deletion", "file_number": 64}
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443344089058, "job": 34, "event": "table_file_deletion", "file_number": 62}
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.011584) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.089410) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.089416) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.089418) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.089420) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:24.089422) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:24 compute-0 nova_compute[239965]: 2026-01-26 16:02:24.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:24 compute-0 ceph-mon[75140]: pgmap v1484: 305 pgs: 305 active+clean; 193 MiB data, 592 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 182 op/s
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.459 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.460 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.484 239969 DEBUG nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.556 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.557 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.587 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.588 239969 INFO nova.compute.claims [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.595 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.764 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:25 compute-0 nova_compute[239965]: 2026-01-26 16:02:25.821 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 207 op/s
Jan 26 16:02:26 compute-0 ceph-mon[75140]: pgmap v1485: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 207 op/s
Jan 26 16:02:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2005510467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.402 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.409 239969 DEBUG nova.compute.provider_tree [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.430 239969 DEBUG nova.scheduler.client.report [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.468 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.912s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.469 239969 DEBUG nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.521 239969 DEBUG nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.521 239969 DEBUG nova.network.neutron [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.556 239969 INFO nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.578 239969 DEBUG nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.675 239969 DEBUG nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.677 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.678 239969 INFO nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Creating image(s)
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.712 239969 DEBUG nova.storage.rbd_utils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.789 239969 DEBUG nova.storage.rbd_utils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.813 239969 DEBUG nova.storage.rbd_utils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.817 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.885 239969 DEBUG nova.policy [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fc62be3f072e47058d3706393447ec4a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.900 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.902 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.903 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.903 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.929 239969 DEBUG nova.storage.rbd_utils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:26 compute-0 nova_compute[239965]: 2026-01-26 16:02:26.932 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 e264bffd-7e7a-43a6-b297-0bca74340744_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2005510467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.241 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 e264bffd-7e7a-43a6-b297-0bca74340744_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.308 239969 DEBUG nova.storage.rbd_utils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] resizing rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.395 239969 DEBUG nova.objects.instance [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'migration_context' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.411 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.411 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Ensure instance console log exists: /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.412 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.412 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.412 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.707 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443332.7059073, 7748d19a-5c9b-456d-808c-b8f0c79a96ef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.707 239969 INFO nova.compute.manager [-] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] VM Stopped (Lifecycle Event)
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.727 239969 DEBUG nova.compute.manager [None req-e3ca19b9-e9ba-4fd9-918d-245858a7cea8 - - - - - -] [instance: 7748d19a-5c9b-456d-808c-b8f0c79a96ef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.873 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443332.871559, a84aa0aa-4479-4871-9773-a7ee17fa8565 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.874 239969 INFO nova.compute.manager [-] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] VM Stopped (Lifecycle Event)
Jan 26 16:02:27 compute-0 nova_compute[239965]: 2026-01-26 16:02:27.891 239969 DEBUG nova.compute.manager [None req-fe0c91bf-de48-4733-920d-e155b7331ee4 - - - - - -] [instance: a84aa0aa-4479-4871-9773-a7ee17fa8565] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 26 16:02:28 compute-0 ceph-mon[75140]: pgmap v1486: 305 pgs: 305 active+clean; 167 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 14 KiB/s wr, 35 op/s
Jan 26 16:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:02:28
Jan 26 16:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'vms', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log']
Jan 26 16:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:02:29 compute-0 nova_compute[239965]: 2026-01-26 16:02:29.816 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 203 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.6 MiB/s wr, 59 op/s
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:02:30 compute-0 nova_compute[239965]: 2026-01-26 16:02:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:02:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:02:30 compute-0 nova_compute[239965]: 2026-01-26 16:02:30.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:30 compute-0 nova_compute[239965]: 2026-01-26 16:02:30.906 239969 DEBUG nova.network.neutron [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Successfully created port: a68cd767-391c-4870-8944-dec4f54e1591 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:02:31 compute-0 ceph-mon[75140]: pgmap v1487: 305 pgs: 305 active+clean; 203 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.6 MiB/s wr, 59 op/s
Jan 26 16:02:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 26 16:02:32 compute-0 nova_compute[239965]: 2026-01-26 16:02:32.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:02:32 compute-0 nova_compute[239965]: 2026-01-26 16:02:32.586 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443337.5855227, 9148cd85-c0ce-49ec-a252-00ee17051d61 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:32 compute-0 nova_compute[239965]: 2026-01-26 16:02:32.587 239969 INFO nova.compute.manager [-] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] VM Stopped (Lifecycle Event)
Jan 26 16:02:32 compute-0 nova_compute[239965]: 2026-01-26 16:02:32.605 239969 DEBUG nova.compute.manager [None req-1875a617-177a-4d38-acbf-32bc289c5596 - - - - - -] [instance: 9148cd85-c0ce-49ec-a252-00ee17051d61] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:32 compute-0 nova_compute[239965]: 2026-01-26 16:02:32.922 239969 DEBUG nova.network.neutron [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Successfully updated port: a68cd767-391c-4870-8944-dec4f54e1591 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:02:32 compute-0 nova_compute[239965]: 2026-01-26 16:02:32.939 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "refresh_cache-e264bffd-7e7a-43a6-b297-0bca74340744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:32 compute-0 nova_compute[239965]: 2026-01-26 16:02:32.940 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquired lock "refresh_cache-e264bffd-7e7a-43a6-b297-0bca74340744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:32 compute-0 nova_compute[239965]: 2026-01-26 16:02:32.940 239969 DEBUG nova.network.neutron [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:02:33 compute-0 ceph-mon[75140]: pgmap v1488: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 26 16:02:33 compute-0 podman[301038]: 2026-01-26 16:02:33.41464932 +0000 UTC m=+0.083705023 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:02:33 compute-0 nova_compute[239965]: 2026-01-26 16:02:33.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:02:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 26 16:02:33 compute-0 nova_compute[239965]: 2026-01-26 16:02:33.984 239969 DEBUG nova.network.neutron [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:02:34 compute-0 nova_compute[239965]: 2026-01-26 16:02:34.118 239969 DEBUG nova.compute.manager [req-6c6f5cb2-5158-4986-8090-ebd331cda3a8 req-e20ee13d-31f6-4f58-afd8-9b4c6b4f0b3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-changed-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:34 compute-0 nova_compute[239965]: 2026-01-26 16:02:34.119 239969 DEBUG nova.compute.manager [req-6c6f5cb2-5158-4986-8090-ebd331cda3a8 req-e20ee13d-31f6-4f58-afd8-9b4c6b4f0b3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Refreshing instance network info cache due to event network-changed-a68cd767-391c-4870-8944-dec4f54e1591. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:02:34 compute-0 nova_compute[239965]: 2026-01-26 16:02:34.119 239969 DEBUG oslo_concurrency.lockutils [req-6c6f5cb2-5158-4986-8090-ebd331cda3a8 req-e20ee13d-31f6-4f58-afd8-9b4c6b4f0b3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-e264bffd-7e7a-43a6-b297-0bca74340744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:34 compute-0 podman[301059]: 2026-01-26 16:02:34.4369364 +0000 UTC m=+0.120981306 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 16:02:34 compute-0 nova_compute[239965]: 2026-01-26 16:02:34.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.091 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:35 compute-0 ceph-mon[75140]: pgmap v1489: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.815 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.860 239969 DEBUG nova.network.neutron [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Updating instance_info_cache with network_info: [{"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.884 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Releasing lock "refresh_cache-e264bffd-7e7a-43a6-b297-0bca74340744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.884 239969 DEBUG nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance network_info: |[{"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.886 239969 DEBUG oslo_concurrency.lockutils [req-6c6f5cb2-5158-4986-8090-ebd331cda3a8 req-e20ee13d-31f6-4f58-afd8-9b4c6b4f0b3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-e264bffd-7e7a-43a6-b297-0bca74340744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.886 239969 DEBUG nova.network.neutron [req-6c6f5cb2-5158-4986-8090-ebd331cda3a8 req-e20ee13d-31f6-4f58-afd8-9b4c6b4f0b3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Refreshing network info cache for port a68cd767-391c-4870-8944-dec4f54e1591 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.891 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Start _get_guest_xml network_info=[{"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.899 239969 WARNING nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.913 239969 DEBUG nova.virt.libvirt.host [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.913 239969 DEBUG nova.virt.libvirt.host [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.918 239969 DEBUG nova.virt.libvirt.host [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.919 239969 DEBUG nova.virt.libvirt.host [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.919 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.920 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.920 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.920 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.920 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.921 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.921 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.921 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.921 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.921 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.922 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.922 239969 DEBUG nova.virt.hardware [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:02:35 compute-0 nova_compute[239965]: 2026-01-26 16:02:35.925 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.224 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "725537ef-e1ac-4cc7-bd87-fe04016df207" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.225 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.248 239969 DEBUG nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.341 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.341 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.351 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.351 239969 INFO nova.compute.claims [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:02:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.471 239969 DEBUG nova.scheduler.client.report [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.500 239969 DEBUG nova.scheduler.client.report [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.500 239969 DEBUG nova.compute.provider_tree [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:02:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2263754869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.518 239969 DEBUG nova.scheduler.client.report [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.531 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.552 239969 DEBUG nova.storage.rbd_utils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.558 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.601 239969 DEBUG nova.scheduler.client.report [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.609 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.609 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.677 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.781 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.782 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.782 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:02:36 compute-0 nova_compute[239965]: 2026-01-26 16:02:36.783 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fbc49a44-5efe-4d87-aea7-b26f7232c224 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:37 compute-0 ceph-mon[75140]: pgmap v1490: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Jan 26 16:02:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2263754869' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020335078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.203 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.204 239969 DEBUG nova.virt.libvirt.vif [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1731532228',display_name='tempest-ServerDiskConfigTestJSON-server-1731532228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1731532228',id=73,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-3pwxsvw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDisk
ConfigTestJSON-1476979264-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:26Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=e264bffd-7e7a-43a6-b297-0bca74340744,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.205 239969 DEBUG nova.network.os_vif_util [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.205 239969 DEBUG nova.network.os_vif_util [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.207 239969 DEBUG nova.objects.instance [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'pci_devices' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.224 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <uuid>e264bffd-7e7a-43a6-b297-0bca74340744</uuid>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <name>instance-00000049</name>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1731532228</nova:name>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:02:35</nova:creationTime>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <nova:user uuid="fc62be3f072e47058d3706393447ec4a">tempest-ServerDiskConfigTestJSON-1476979264-project-member</nova:user>
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <nova:project uuid="8368d1e77bbb4de59a4d3088dda0a707">tempest-ServerDiskConfigTestJSON-1476979264</nova:project>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <nova:port uuid="a68cd767-391c-4870-8944-dec4f54e1591">
Jan 26 16:02:37 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <system>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <entry name="serial">e264bffd-7e7a-43a6-b297-0bca74340744</entry>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <entry name="uuid">e264bffd-7e7a-43a6-b297-0bca74340744</entry>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     </system>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <os>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   </os>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <features>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   </features>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/e264bffd-7e7a-43a6-b297-0bca74340744_disk">
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/e264bffd-7e7a-43a6-b297-0bca74340744_disk.config">
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:37 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:d1:39:2f"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <target dev="tapa68cd767-39"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/console.log" append="off"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <video>
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     </video>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:02:37 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:02:37 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:02:37 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:02:37 compute-0 nova_compute[239965]: </domain>
Jan 26 16:02:37 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.226 239969 DEBUG nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Preparing to wait for external event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.226 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.227 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.227 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.228 239969 DEBUG nova.virt.libvirt.vif [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1731532228',display_name='tempest-ServerDiskConfigTestJSON-server-1731532228',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1731532228',id=73,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-3pwxsvw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDiskConfigTestJSON-1476979264-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:26Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=e264bffd-7e7a-43a6-b297-0bca74340744,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.228 239969 DEBUG nova.network.os_vif_util [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.229 239969 DEBUG nova.network.os_vif_util [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.229 239969 DEBUG os_vif [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.230 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.231 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.231 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.236 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.236 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa68cd767-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.237 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa68cd767-39, col_values=(('external_ids', {'iface-id': 'a68cd767-391c-4870-8944-dec4f54e1591', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:39:2f', 'vm-uuid': 'e264bffd-7e7a-43a6-b297-0bca74340744'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2536957553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:37 compute-0 NetworkManager[48954]: <info>  [1769443357.2810] manager: (tapa68cd767-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/283)
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.280 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.289 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.291 239969 INFO os_vif [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39')
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.321 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.644s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.329 239969 DEBUG nova.compute.provider_tree [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.344 239969 DEBUG nova.scheduler.client.report [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.367 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.368 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.368 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No VIF found with MAC fa:16:3e:d1:39:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.368 239969 INFO nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Using config drive
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.391 239969 DEBUG nova.storage.rbd_utils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.398 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.399 239969 DEBUG nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.452 239969 DEBUG nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.452 239969 DEBUG nova.network.neutron [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.457 239969 DEBUG nova.network.neutron [req-6c6f5cb2-5158-4986-8090-ebd331cda3a8 req-e20ee13d-31f6-4f58-afd8-9b4c6b4f0b3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Updated VIF entry in instance network info cache for port a68cd767-391c-4870-8944-dec4f54e1591. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.458 239969 DEBUG nova.network.neutron [req-6c6f5cb2-5158-4986-8090-ebd331cda3a8 req-e20ee13d-31f6-4f58-afd8-9b4c6b4f0b3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Updating instance_info_cache with network_info: [{"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.481 239969 INFO nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.488 239969 DEBUG oslo_concurrency.lockutils [req-6c6f5cb2-5158-4986-8090-ebd331cda3a8 req-e20ee13d-31f6-4f58-afd8-9b4c6b4f0b3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-e264bffd-7e7a-43a6-b297-0bca74340744" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.507 239969 DEBUG nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.598 239969 DEBUG nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.600 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.600 239969 INFO nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Creating image(s)
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.622 239969 DEBUG nova.storage.rbd_utils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 725537ef-e1ac-4cc7-bd87-fe04016df207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.650 239969 DEBUG nova.storage.rbd_utils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 725537ef-e1ac-4cc7-bd87-fe04016df207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.679 239969 DEBUG nova.storage.rbd_utils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 725537ef-e1ac-4cc7-bd87-fe04016df207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.683 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.757 239969 DEBUG nova.policy [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57040a375df6487fbc604a9b04389eeb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.761 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.761 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.762 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.762 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.787 239969 DEBUG nova.storage.rbd_utils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 725537ef-e1ac-4cc7-bd87-fe04016df207_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.793 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 725537ef-e1ac-4cc7-bd87-fe04016df207_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.969 239969 INFO nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Creating config drive at /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config
Jan 26 16:02:37 compute-0 nova_compute[239965]: 2026-01-26 16:02:37.982 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvesi_1uu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.088 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 725537ef-e1ac-4cc7-bd87-fe04016df207_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.127 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Updating instance_info_cache with network_info: [{"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2020335078' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2536957553' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.175 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-fbc49a44-5efe-4d87-aea7-b26f7232c224" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.176 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.177 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvesi_1uu" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.178 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.178 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.202 239969 DEBUG nova.storage.rbd_utils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.207 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config e264bffd-7e7a-43a6-b297-0bca74340744_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.261 239969 DEBUG nova.storage.rbd_utils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] resizing rbd image 725537ef-e1ac-4cc7-bd87-fe04016df207_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.479 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquiring lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.480 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.486 239969 DEBUG nova.objects.instance [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'migration_context' on Instance uuid 725537ef-e1ac-4cc7-bd87-fe04016df207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.493 239969 DEBUG oslo_concurrency.processutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config e264bffd-7e7a-43a6-b297-0bca74340744_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.493 239969 INFO nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Deleting local config drive /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config because it was imported into RBD.
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.515 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.516 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Ensure instance console log exists: /var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.516 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.516 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.517 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.533 239969 DEBUG nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:02:38 compute-0 kernel: tapa68cd767-39: entered promiscuous mode
Jan 26 16:02:38 compute-0 NetworkManager[48954]: <info>  [1769443358.5561] manager: (tapa68cd767-39): new Tun device (/org/freedesktop/NetworkManager/Devices/284)
Jan 26 16:02:38 compute-0 ovn_controller[146046]: 2026-01-26T16:02:38Z|00641|binding|INFO|Claiming lport a68cd767-391c-4870-8944-dec4f54e1591 for this chassis.
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.555 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:38 compute-0 ovn_controller[146046]: 2026-01-26T16:02:38Z|00642|binding|INFO|a68cd767-391c-4870-8944-dec4f54e1591: Claiming fa:16:3e:d1:39:2f 10.100.0.13
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.555 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.556 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.556 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.556 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.568 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:39:2f 10.100.0.13'], port_security=['fa:16:3e:d1:39:2f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e264bffd-7e7a-43a6-b297-0bca74340744', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a68cd767-391c-4870-8944-dec4f54e1591) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.570 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a68cd767-391c-4870-8944-dec4f54e1591 in datapath db63036e-9778-49aa-880f-9b900f3c2179 bound to our chassis
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.571 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:02:38 compute-0 ovn_controller[146046]: 2026-01-26T16:02:38Z|00643|binding|INFO|Setting lport a68cd767-391c-4870-8944-dec4f54e1591 up in Southbound
Jan 26 16:02:38 compute-0 ovn_controller[146046]: 2026-01-26T16:02:38Z|00644|binding|INFO|Setting lport a68cd767-391c-4870-8944-dec4f54e1591 ovn-installed in OVS
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.586 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3bb372cf-a12f-4153-9296-df025dcbd7a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.587 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdb63036e-91 in ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.590 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdb63036e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.590 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5ffed3-a1dd-43d9-8b13-e38942b413a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.593 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5b76bb-e2dd-4342-a51a-a9ed3adccea8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.597 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:38 compute-0 systemd-udevd[301412]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.611 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[17f05d77-8f1d-4c5d-b227-90ca73168238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 systemd-machined[208061]: New machine qemu-82-instance-00000049.
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.623 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.624 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:38 compute-0 NetworkManager[48954]: <info>  [1769443358.6248] device (tapa68cd767-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:02:38 compute-0 NetworkManager[48954]: <info>  [1769443358.6258] device (tapa68cd767-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:02:38 compute-0 systemd[1]: Started Virtual Machine qemu-82-instance-00000049.
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.634 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.634 239969 INFO nova.compute.claims [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.640 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a28f900-7885-461a-800e-99bde0651d29]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.670 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[83bb6f13-0b55-4825-9148-b6ad4d20268c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.676 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f035d215-e5b3-4ec4-a6c8-fa16d7413411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 NetworkManager[48954]: <info>  [1769443358.6779] manager: (tapdb63036e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/285)
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.679 239969 DEBUG nova.network.neutron [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Successfully created port: 4c176286-4795-480e-8ae2-30472ef381f7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.712 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[76b52c1f-f57b-49d8-9ae4-7ffb4fe1e14b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.716 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[78767c9f-6dac-49f2-8215-6e38df6de6c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 NetworkManager[48954]: <info>  [1769443358.7379] device (tapdb63036e-90): carrier: link connected
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.743 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f826e3f5-5952-4cfb-83d8-42fd6b565dde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.762 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e9eb3ae3-67f8-41a7-a09d-3031529f771f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484954, 'reachable_time': 25092, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301464, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.780 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8dab1dc8-43ac-4b2d-8adf-083159920d1e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:bb2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 484954, 'tstamp': 484954}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301465, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.798 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d1aa32d-3102-410f-9c68-31918597f620]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 196], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484954, 'reachable_time': 25092, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 301466, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.804 239969 DEBUG nova.compute.manager [req-e0704797-1422-4cb0-97dc-f1369f361213 req-59d9767b-6e21-4c4f-9968-00cf83a7f9e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.804 239969 DEBUG oslo_concurrency.lockutils [req-e0704797-1422-4cb0-97dc-f1369f361213 req-59d9767b-6e21-4c4f-9968-00cf83a7f9e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.804 239969 DEBUG oslo_concurrency.lockutils [req-e0704797-1422-4cb0-97dc-f1369f361213 req-59d9767b-6e21-4c4f-9968-00cf83a7f9e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.805 239969 DEBUG oslo_concurrency.lockutils [req-e0704797-1422-4cb0-97dc-f1369f361213 req-59d9767b-6e21-4c4f-9968-00cf83a7f9e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.805 239969 DEBUG nova.compute.manager [req-e0704797-1422-4cb0-97dc-f1369f361213 req-59d9767b-6e21-4c4f-9968-00cf83a7f9e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Processing event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.813 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.830 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c5c27a0b-6cec-4013-b747-0292654e979d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.905 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1a55f021-6771-48e2-ac62-e1d88cf1e690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.906 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.906 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.907 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb63036e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:38 compute-0 NetworkManager[48954]: <info>  [1769443358.9096] manager: (tapdb63036e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/286)
Jan 26 16:02:38 compute-0 kernel: tapdb63036e-90: entered promiscuous mode
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.912 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb63036e-90, col_values=(('external_ids', {'iface-id': 'c91d83e2-40e9-4d10-8b19-e75b3795dd04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.913 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:38 compute-0 ovn_controller[146046]: 2026-01-26T16:02:38Z|00645|binding|INFO|Releasing lport c91d83e2-40e9-4d10-8b19-e75b3795dd04 from this chassis (sb_readonly=0)
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.919 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.920 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11f2a620-6e38-4cbb-8637-1d356f6909bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.921 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:02:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:38.922 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'env', 'PROCESS_TAG=haproxy-db63036e-9778-49aa-880f-9b900f3c2179', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/db63036e-9778-49aa-880f-9b900f3c2179.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:02:38 compute-0 nova_compute[239965]: 2026-01-26 16:02:38.940 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.105 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443359.1047542, e264bffd-7e7a-43a6-b297-0bca74340744 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.105 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] VM Started (Lifecycle Event)
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.108 239969 DEBUG nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.138 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.139 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.150 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.153 239969 INFO nova.virt.libvirt.driver [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance spawned successfully.
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.154 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:02:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3366073166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:39 compute-0 ceph-mon[75140]: pgmap v1491: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:02:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3366073166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.181 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.182 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443359.1083097, e264bffd-7e7a-43a6-b297-0bca74340744 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.182 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] VM Paused (Lifecycle Event)
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.184 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.189 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.190 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.190 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.190 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.191 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.191 239969 DEBUG nova.virt.libvirt.driver [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.203 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.209 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443359.1110208, e264bffd-7e7a-43a6-b297-0bca74340744 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.209 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] VM Resumed (Lifecycle Event)
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.227 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.232 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.270 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.275 239969 INFO nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Took 12.60 seconds to spawn the instance on the hypervisor.
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.276 239969 DEBUG nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.325 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.326 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.330 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.330 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000049 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.363 239969 INFO nova.compute.manager [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Took 13.82 seconds to build instance.
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.386 239969 DEBUG oslo_concurrency.lockutils [None req-1a3adfa6-dcaf-4287-93bc-d8b5e17307d3 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:39 compute-0 podman[301563]: 2026-01-26 16:02:39.31517608 +0000 UTC m=+0.026037449 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:02:39 compute-0 podman[301563]: 2026-01-26 16:02:39.443193889 +0000 UTC m=+0.154055208 container create 817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:02:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3047732577' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.478 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.483 239969 DEBUG nova.compute.provider_tree [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:39 compute-0 systemd[1]: Started libpod-conmon-817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633.scope.
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.504 239969 DEBUG nova.scheduler.client.report [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8372a9d880b33a015622072b7a30c0ae772320af6ec5056c029f383793b2ebde/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.532 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.534 239969 DEBUG nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:02:39 compute-0 podman[301563]: 2026-01-26 16:02:39.549280339 +0000 UTC m=+0.260141678 container init 817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:02:39 compute-0 podman[301563]: 2026-01-26 16:02:39.556082915 +0000 UTC m=+0.266944234 container start 817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:02:39 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[301580]: [NOTICE]   (301584) : New worker (301586) forked
Jan 26 16:02:39 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[301580]: [NOTICE]   (301584) : Loading success.
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.603 239969 DEBUG nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.604 239969 DEBUG nova.network.neutron [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.626 239969 INFO nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.640 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.641 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3699MB free_disk=59.9213544819504GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.642 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.642 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.650 239969 DEBUG nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.745 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance fbc49a44-5efe-4d87-aea7-b26f7232c224 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.746 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance e264bffd-7e7a-43a6-b297-0bca74340744 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.746 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 725537ef-e1ac-4cc7-bd87-fe04016df207 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.746 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.746 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.746 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.765 239969 DEBUG nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.767 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.768 239969 INFO nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Creating image(s)
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.794 239969 DEBUG nova.storage.rbd_utils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] rbd image 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.817 239969 DEBUG nova.storage.rbd_utils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] rbd image 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.845 239969 DEBUG nova.storage.rbd_utils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] rbd image 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.849 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.888 239969 DEBUG nova.policy [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4f5fcc73d4b8415a9d00f19f2f851ab3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a88b6ae11bc04608ba445196f2238171', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.930 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.931 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.932 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.932 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.959 239969 DEBUG nova.storage.rbd_utils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] rbd image 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 250 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.2 MiB/s wr, 40 op/s
Jan 26 16:02:39 compute-0 nova_compute[239965]: 2026-01-26 16:02:39.964 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.016 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3047732577' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.242 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.314 239969 DEBUG nova.storage.rbd_utils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] resizing rbd image 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.412 239969 DEBUG nova.objects.instance [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lazy-loading 'migration_context' on Instance uuid 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.442 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.443 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Ensure instance console log exists: /var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.444 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.444 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.444 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1025387165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.654 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.638s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.660 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.697 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.726 239969 DEBUG nova.network.neutron [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Successfully updated port: 4c176286-4795-480e-8ae2-30472ef381f7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.741 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.741 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.746 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "refresh_cache-725537ef-e1ac-4cc7-bd87-fe04016df207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.747 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquired lock "refresh_cache-725537ef-e1ac-4cc7-bd87-fe04016df207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.747 239969 DEBUG nova.network.neutron [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.843 239969 DEBUG nova.compute.manager [req-5a0f40bc-c66a-48d7-8a46-33c1593439ea req-9a58c532-f3e2-4e10-a4e0-67c2849c1e69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Received event network-changed-4c176286-4795-480e-8ae2-30472ef381f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.844 239969 DEBUG nova.compute.manager [req-5a0f40bc-c66a-48d7-8a46-33c1593439ea req-9a58c532-f3e2-4e10-a4e0-67c2849c1e69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Refreshing instance network info cache due to event network-changed-4c176286-4795-480e-8ae2-30472ef381f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.844 239969 DEBUG oslo_concurrency.lockutils [req-5a0f40bc-c66a-48d7-8a46-33c1593439ea req-9a58c532-f3e2-4e10-a4e0-67c2849c1e69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-725537ef-e1ac-4cc7-bd87-fe04016df207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.922 239969 DEBUG nova.compute.manager [req-1fa1a2b2-235c-4d15-af84-efac2ff1d564 req-aaaa3197-61e3-4a3e-bf88-e529356c6ace a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.922 239969 DEBUG oslo_concurrency.lockutils [req-1fa1a2b2-235c-4d15-af84-efac2ff1d564 req-aaaa3197-61e3-4a3e-bf88-e529356c6ace a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.924 239969 DEBUG oslo_concurrency.lockutils [req-1fa1a2b2-235c-4d15-af84-efac2ff1d564 req-aaaa3197-61e3-4a3e-bf88-e529356c6ace a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.924 239969 DEBUG oslo_concurrency.lockutils [req-1fa1a2b2-235c-4d15-af84-efac2ff1d564 req-aaaa3197-61e3-4a3e-bf88-e529356c6ace a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.925 239969 DEBUG nova.compute.manager [req-1fa1a2b2-235c-4d15-af84-efac2ff1d564 req-aaaa3197-61e3-4a3e-bf88-e529356c6ace a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] No waiting events found dispatching network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:40 compute-0 nova_compute[239965]: 2026-01-26 16:02:40.925 239969 WARNING nova.compute.manager [req-1fa1a2b2-235c-4d15-af84-efac2ff1d564 req-aaaa3197-61e3-4a3e-bf88-e529356c6ace a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received unexpected event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 for instance with vm_state active and task_state None.
Jan 26 16:02:41 compute-0 nova_compute[239965]: 2026-01-26 16:02:41.050 239969 DEBUG nova.network.neutron [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Successfully created port: 047aa04e-5502-4cdf-ae6a-94aacc3c05ff _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:02:41 compute-0 nova_compute[239965]: 2026-01-26 16:02:41.054 239969 DEBUG nova.network.neutron [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:02:41 compute-0 ceph-mon[75140]: pgmap v1492: 305 pgs: 305 active+clean; 250 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.2 MiB/s wr, 40 op/s
Jan 26 16:02:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1025387165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:41.554 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:41.556 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:02:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:41.557 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:41 compute-0 nova_compute[239965]: 2026-01-26 16:02:41.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:41 compute-0 nova_compute[239965]: 2026-01-26 16:02:41.743 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:02:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 276 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 169 KiB/s rd, 2.3 MiB/s wr, 58 op/s
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.184 239969 DEBUG nova.network.neutron [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Updating instance_info_cache with network_info: [{"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.207 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Releasing lock "refresh_cache-725537ef-e1ac-4cc7-bd87-fe04016df207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.207 239969 DEBUG nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Instance network_info: |[{"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.208 239969 DEBUG oslo_concurrency.lockutils [req-5a0f40bc-c66a-48d7-8a46-33c1593439ea req-9a58c532-f3e2-4e10-a4e0-67c2849c1e69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-725537ef-e1ac-4cc7-bd87-fe04016df207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.208 239969 DEBUG nova.network.neutron [req-5a0f40bc-c66a-48d7-8a46-33c1593439ea req-9a58c532-f3e2-4e10-a4e0-67c2849c1e69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Refreshing network info cache for port 4c176286-4795-480e-8ae2-30472ef381f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.210 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Start _get_guest_xml network_info=[{"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.216 239969 WARNING nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.221 239969 DEBUG nova.virt.libvirt.host [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.222 239969 DEBUG nova.virt.libvirt.host [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.227 239969 DEBUG nova.virt.libvirt.host [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.228 239969 DEBUG nova.virt.libvirt.host [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.228 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.228 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.229 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.229 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.229 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.229 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.230 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.230 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.230 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.231 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.231 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.231 239969 DEBUG nova.virt.hardware [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.237 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:42 compute-0 ceph-mon[75140]: pgmap v1493: 305 pgs: 305 active+clean; 276 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 169 KiB/s rd, 2.3 MiB/s wr, 58 op/s
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.280 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.732 239969 INFO nova.compute.manager [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Rebuilding instance
Jan 26 16:02:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2635269331' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.791 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.822 239969 DEBUG nova.storage.rbd_utils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 725537ef-e1ac-4cc7-bd87-fe04016df207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.827 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:42 compute-0 nova_compute[239965]: 2026-01-26 16:02:42.979 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.009 239969 DEBUG nova.compute.manager [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.058 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'pci_requests' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.073 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'pci_devices' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.094 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'resources' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.106 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'migration_context' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.119 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.122 239969 DEBUG nova.network.neutron [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Successfully updated port: 047aa04e-5502-4cdf-ae6a-94aacc3c05ff _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.127 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.141 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquiring lock "refresh_cache-446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.142 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquired lock "refresh_cache-446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.142 239969 DEBUG nova.network.neutron [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.258 239969 DEBUG nova.compute.manager [req-29bf1bc5-002f-4173-9075-b8bbbd748550 req-ad256421-8568-4e18-a9d2-69107fe03b93 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Received event network-changed-047aa04e-5502-4cdf-ae6a-94aacc3c05ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.258 239969 DEBUG nova.compute.manager [req-29bf1bc5-002f-4173-9075-b8bbbd748550 req-ad256421-8568-4e18-a9d2-69107fe03b93 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Refreshing instance network info cache due to event network-changed-047aa04e-5502-4cdf-ae6a-94aacc3c05ff. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.258 239969 DEBUG oslo_concurrency.lockutils [req-29bf1bc5-002f-4173-9075-b8bbbd748550 req-ad256421-8568-4e18-a9d2-69107fe03b93 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2635269331' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.286 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295026203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.397 239969 DEBUG nova.network.neutron [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.412 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.414 239969 DEBUG nova.virt.libvirt.vif [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:02:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1011695261',display_name='tempest-ServerActionsTestOtherA-server-1011695261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1011695261',id=74,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-4y30pkcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:37Z,user_data=None,user_id='57040a375df6487fbc604a9b04389eeb',uuid=725537ef-e1ac-4cc7-bd87-fe04016df207,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.414 239969 DEBUG nova.network.os_vif_util [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.415 239969 DEBUG nova.network.os_vif_util [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:91:4e,bridge_name='br-int',has_traffic_filtering=True,id=4c176286-4795-480e-8ae2-30472ef381f7,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c176286-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.416 239969 DEBUG nova.objects.instance [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 725537ef-e1ac-4cc7-bd87-fe04016df207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.434 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <uuid>725537ef-e1ac-4cc7-bd87-fe04016df207</uuid>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <name>instance-0000004a</name>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestOtherA-server-1011695261</nova:name>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:02:42</nova:creationTime>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <nova:user uuid="57040a375df6487fbc604a9b04389eeb">tempest-ServerActionsTestOtherA-980651809-project-member</nova:user>
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <nova:project uuid="4f5d05ded9184fea9b526a3522d47ea5">tempest-ServerActionsTestOtherA-980651809</nova:project>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <nova:port uuid="4c176286-4795-480e-8ae2-30472ef381f7">
Jan 26 16:02:43 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <system>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <entry name="serial">725537ef-e1ac-4cc7-bd87-fe04016df207</entry>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <entry name="uuid">725537ef-e1ac-4cc7-bd87-fe04016df207</entry>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     </system>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <os>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   </os>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <features>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   </features>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/725537ef-e1ac-4cc7-bd87-fe04016df207_disk">
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/725537ef-e1ac-4cc7-bd87-fe04016df207_disk.config">
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:43 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:54:91:4e"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <target dev="tap4c176286-47"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207/console.log" append="off"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <video>
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     </video>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:02:43 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:02:43 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:02:43 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:02:43 compute-0 nova_compute[239965]: </domain>
Jan 26 16:02:43 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.436 239969 DEBUG nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Preparing to wait for external event network-vif-plugged-4c176286-4795-480e-8ae2-30472ef381f7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.436 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.437 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.437 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.438 239969 DEBUG nova.virt.libvirt.vif [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:02:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1011695261',display_name='tempest-ServerActionsTestOtherA-server-1011695261',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1011695261',id=74,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-4y30pkcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:37Z,user_data=None,user_id='57040a375df6487fbc604a9b04389eeb',uuid=725537ef-e1ac-4cc7-bd87-fe04016df207,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.438 239969 DEBUG nova.network.os_vif_util [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.439 239969 DEBUG nova.network.os_vif_util [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:91:4e,bridge_name='br-int',has_traffic_filtering=True,id=4c176286-4795-480e-8ae2-30472ef381f7,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c176286-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.440 239969 DEBUG os_vif [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:91:4e,bridge_name='br-int',has_traffic_filtering=True,id=4c176286-4795-480e-8ae2-30472ef381f7,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c176286-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.440 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.441 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.442 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.445 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.446 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c176286-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.446 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4c176286-47, col_values=(('external_ids', {'iface-id': '4c176286-4795-480e-8ae2-30472ef381f7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:54:91:4e', 'vm-uuid': '725537ef-e1ac-4cc7-bd87-fe04016df207'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.448 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:43 compute-0 NetworkManager[48954]: <info>  [1769443363.4494] manager: (tap4c176286-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/287)
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.450 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.457 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.459 239969 INFO os_vif [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:91:4e,bridge_name='br-int',has_traffic_filtering=True,id=4c176286-4795-480e-8ae2-30472ef381f7,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c176286-47')
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.510 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.511 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.511 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] No VIF found with MAC fa:16:3e:54:91:4e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.512 239969 INFO nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Using config drive
Jan 26 16:02:43 compute-0 nova_compute[239965]: 2026-01-26 16:02:43.534 239969 DEBUG nova.storage.rbd_utils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 725537ef-e1ac-4cc7-bd87-fe04016df207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 276 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 168 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 26 16:02:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2295026203' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:44 compute-0 ceph-mon[75140]: pgmap v1494: 305 pgs: 305 active+clean; 276 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 168 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.284 239969 INFO nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Creating config drive at /var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207/disk.config
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.302 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj03l2a_b execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.460 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj03l2a_b" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.490 239969 DEBUG nova.storage.rbd_utils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] rbd image 725537ef-e1ac-4cc7-bd87-fe04016df207_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.495 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207/disk.config 725537ef-e1ac-4cc7-bd87-fe04016df207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.539 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.542 239969 DEBUG nova.network.neutron [req-5a0f40bc-c66a-48d7-8a46-33c1593439ea req-9a58c532-f3e2-4e10-a4e0-67c2849c1e69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Updated VIF entry in instance network info cache for port 4c176286-4795-480e-8ae2-30472ef381f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.543 239969 DEBUG nova.network.neutron [req-5a0f40bc-c66a-48d7-8a46-33c1593439ea req-9a58c532-f3e2-4e10-a4e0-67c2849c1e69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Updating instance_info_cache with network_info: [{"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.558 239969 DEBUG oslo_concurrency.lockutils [req-5a0f40bc-c66a-48d7-8a46-33c1593439ea req-9a58c532-f3e2-4e10-a4e0-67c2849c1e69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-725537ef-e1ac-4cc7-bd87-fe04016df207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.671 239969 DEBUG oslo_concurrency.processutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207/disk.config 725537ef-e1ac-4cc7-bd87-fe04016df207_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.176s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.672 239969 INFO nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Deleting local config drive /var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207/disk.config because it was imported into RBD.
Jan 26 16:02:44 compute-0 kernel: tap4c176286-47: entered promiscuous mode
Jan 26 16:02:44 compute-0 NetworkManager[48954]: <info>  [1769443364.7265] manager: (tap4c176286-47): new Tun device (/org/freedesktop/NetworkManager/Devices/288)
Jan 26 16:02:44 compute-0 ovn_controller[146046]: 2026-01-26T16:02:44Z|00646|binding|INFO|Claiming lport 4c176286-4795-480e-8ae2-30472ef381f7 for this chassis.
Jan 26 16:02:44 compute-0 ovn_controller[146046]: 2026-01-26T16:02:44Z|00647|binding|INFO|4c176286-4795-480e-8ae2-30472ef381f7: Claiming fa:16:3e:54:91:4e 10.100.0.13
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.751 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:91:4e 10.100.0.13'], port_security=['fa:16:3e:54:91:4e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725537ef-e1ac-4cc7-bd87-fe04016df207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '93ada5a0-1112-43ef-8d69-92fe39df7192', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4c176286-4795-480e-8ae2-30472ef381f7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.752 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4c176286-4795-480e-8ae2-30472ef381f7 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 bound to our chassis
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.754 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac4ec909-9164-40ab-a894-4d501ce12c59
Jan 26 16:02:44 compute-0 ovn_controller[146046]: 2026-01-26T16:02:44Z|00648|binding|INFO|Setting lport 4c176286-4795-480e-8ae2-30472ef381f7 ovn-installed in OVS
Jan 26 16:02:44 compute-0 ovn_controller[146046]: 2026-01-26T16:02:44Z|00649|binding|INFO|Setting lport 4c176286-4795-480e-8ae2-30472ef381f7 up in Southbound
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.763 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:44 compute-0 systemd-udevd[301920]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.783 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7180dcc9-7a23-4c65-806b-482381662566]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:44 compute-0 systemd-machined[208061]: New machine qemu-83-instance-0000004a.
Jan 26 16:02:44 compute-0 NetworkManager[48954]: <info>  [1769443364.8032] device (tap4c176286-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:02:44 compute-0 NetworkManager[48954]: <info>  [1769443364.8038] device (tap4c176286-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:02:44 compute-0 systemd[1]: Started Virtual Machine qemu-83-instance-0000004a.
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.823 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b2e02593-188b-40dd-8f77-e55965fc9f22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.827 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4c2f275f-6bce-4cb2-b758-7d83aa7c407f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.860 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[10a5ad76-5cbd-4ecb-b7c0-e8254fedcde3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.880 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cae0aaac-5558-4a36-94af-d9516b4d387b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ec909-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6b:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473948, 'reachable_time': 43095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301932, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.904 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f4c25f0e-8ada-451e-9392-8be7ff11346a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473962, 'tstamp': 473962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301933, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473965, 'tstamp': 473965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 301933, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.908 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ec909-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.910 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:44 compute-0 nova_compute[239965]: 2026-01-26 16:02:44.911 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.911 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac4ec909-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.911 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.912 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac4ec909-90, col_values=(('external_ids', {'iface-id': '47411fc6-7d46-43a0-b0c2-7c06b22cee9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:44.912 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.323 239969 DEBUG nova.network.neutron [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Updating instance_info_cache with network_info: [{"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.352 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Releasing lock "refresh_cache-446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.353 239969 DEBUG nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Instance network_info: |[{"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.353 239969 DEBUG oslo_concurrency.lockutils [req-29bf1bc5-002f-4173-9075-b8bbbd748550 req-ad256421-8568-4e18-a9d2-69107fe03b93 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.354 239969 DEBUG nova.network.neutron [req-29bf1bc5-002f-4173-9075-b8bbbd748550 req-ad256421-8568-4e18-a9d2-69107fe03b93 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Refreshing network info cache for port 047aa04e-5502-4cdf-ae6a-94aacc3c05ff _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.358 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Start _get_guest_xml network_info=[{"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.362 239969 WARNING nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.366 239969 DEBUG nova.virt.libvirt.host [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.367 239969 DEBUG nova.virt.libvirt.host [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.370 239969 DEBUG nova.virt.libvirt.host [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.370 239969 DEBUG nova.virt.libvirt.host [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.370 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.371 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.371 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.371 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.372 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.372 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.372 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.372 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.373 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.373 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.373 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.373 239969 DEBUG nova.virt.hardware [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.376 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.418 239969 DEBUG nova.compute.manager [req-f7bfc69b-1881-4d10-8130-9b3a0169d6cd req-bedbd651-6f97-40db-9e4a-f645a792e508 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Received event network-vif-plugged-4c176286-4795-480e-8ae2-30472ef381f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.418 239969 DEBUG oslo_concurrency.lockutils [req-f7bfc69b-1881-4d10-8130-9b3a0169d6cd req-bedbd651-6f97-40db-9e4a-f645a792e508 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.419 239969 DEBUG oslo_concurrency.lockutils [req-f7bfc69b-1881-4d10-8130-9b3a0169d6cd req-bedbd651-6f97-40db-9e4a-f645a792e508 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.419 239969 DEBUG oslo_concurrency.lockutils [req-f7bfc69b-1881-4d10-8130-9b3a0169d6cd req-bedbd651-6f97-40db-9e4a-f645a792e508 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.419 239969 DEBUG nova.compute.manager [req-f7bfc69b-1881-4d10-8130-9b3a0169d6cd req-bedbd651-6f97-40db-9e4a-f645a792e508 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Processing event network-vif-plugged-4c176286-4795-480e-8ae2-30472ef381f7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.701 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443365.6999927, 725537ef-e1ac-4cc7-bd87-fe04016df207 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.701 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] VM Started (Lifecycle Event)
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.706 239969 DEBUG nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.710 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.713 239969 INFO nova.virt.libvirt.driver [-] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Instance spawned successfully.
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.714 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.733 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.736 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.737 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.737 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.737 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.738 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.739 239969 DEBUG nova.virt.libvirt.driver [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.743 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.808 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.809 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443365.7002347, 725537ef-e1ac-4cc7-bd87-fe04016df207 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.809 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] VM Paused (Lifecycle Event)
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.819 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.830 239969 INFO nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Took 8.23 seconds to spawn the instance on the hypervisor.
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.830 239969 DEBUG nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.832 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.842 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443365.709763, 725537ef-e1ac-4cc7-bd87-fe04016df207 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.843 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] VM Resumed (Lifecycle Event)
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.884 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.890 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.933 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2817322978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.957 239969 INFO nova.compute.manager [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Took 9.66 seconds to build instance.
Jan 26 16:02:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 306 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Jan 26 16:02:45 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.971 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:45.999 239969 DEBUG nova.storage.rbd_utils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] rbd image 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2817322978' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.004 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.052 239969 DEBUG oslo_concurrency.lockutils [None req-353043b5-6520-4454-ae26-b62fdb1da256 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.827s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.542 239969 DEBUG nova.network.neutron [req-29bf1bc5-002f-4173-9075-b8bbbd748550 req-ad256421-8568-4e18-a9d2-69107fe03b93 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Updated VIF entry in instance network info cache for port 047aa04e-5502-4cdf-ae6a-94aacc3c05ff. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.543 239969 DEBUG nova.network.neutron [req-29bf1bc5-002f-4173-9075-b8bbbd748550 req-ad256421-8568-4e18-a9d2-69107fe03b93 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Updating instance_info_cache with network_info: [{"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.561 239969 DEBUG oslo_concurrency.lockutils [req-29bf1bc5-002f-4173-9075-b8bbbd748550 req-ad256421-8568-4e18-a9d2-69107fe03b93 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476871821' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.656 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.658 239969 DEBUG nova.virt.libvirt.vif [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:02:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1608663305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1608663305',id=75,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a88b6ae11bc04608ba445196f2238171',ramdisk_id='',reservation_id='r-570e8ox0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-238093472',owner_user_name='tempest-ServerTagsTestJSON-238093472-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:39Z,user_data=None,user_id='4f5fcc73d4b8415a9d00f19f2f851ab3',uuid=446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.659 239969 DEBUG nova.network.os_vif_util [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Converting VIF {"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.660 239969 DEBUG nova.network.os_vif_util [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:1a:f7,bridge_name='br-int',has_traffic_filtering=True,id=047aa04e-5502-4cdf-ae6a-94aacc3c05ff,network=Network(300c36f8-0d84-4e5b-a546-a37b88a0fe3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap047aa04e-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.661 239969 DEBUG nova.objects.instance [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lazy-loading 'pci_devices' on Instance uuid 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.683 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <uuid>446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd</uuid>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <name>instance-0000004b</name>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerTagsTestJSON-server-1608663305</nova:name>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:02:45</nova:creationTime>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <nova:user uuid="4f5fcc73d4b8415a9d00f19f2f851ab3">tempest-ServerTagsTestJSON-238093472-project-member</nova:user>
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <nova:project uuid="a88b6ae11bc04608ba445196f2238171">tempest-ServerTagsTestJSON-238093472</nova:project>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <nova:port uuid="047aa04e-5502-4cdf-ae6a-94aacc3c05ff">
Jan 26 16:02:46 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <system>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <entry name="serial">446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd</entry>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <entry name="uuid">446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd</entry>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     </system>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <os>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   </os>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <features>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   </features>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk">
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk.config">
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:fe:1a:f7"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <target dev="tap047aa04e-55"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd/console.log" append="off"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <video>
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     </video>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:02:46 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:02:46 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:02:46 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:02:46 compute-0 nova_compute[239965]: </domain>
Jan 26 16:02:46 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.689 239969 DEBUG nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Preparing to wait for external event network-vif-plugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.689 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquiring lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.690 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.690 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.691 239969 DEBUG nova.virt.libvirt.vif [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:02:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1608663305',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1608663305',id=75,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a88b6ae11bc04608ba445196f2238171',ramdisk_id='',reservation_id='r-570e8ox0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-238093472',owner_user_name='tempest-ServerTagsTestJSON-238093472-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:39Z,user_data=None,user_id='4f5fcc73d4b8415a9d00f19f2f851ab3',uuid=446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.691 239969 DEBUG nova.network.os_vif_util [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Converting VIF {"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.692 239969 DEBUG nova.network.os_vif_util [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:1a:f7,bridge_name='br-int',has_traffic_filtering=True,id=047aa04e-5502-4cdf-ae6a-94aacc3c05ff,network=Network(300c36f8-0d84-4e5b-a546-a37b88a0fe3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap047aa04e-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.692 239969 DEBUG os_vif [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:1a:f7,bridge_name='br-int',has_traffic_filtering=True,id=047aa04e-5502-4cdf-ae6a-94aacc3c05ff,network=Network(300c36f8-0d84-4e5b-a546-a37b88a0fe3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap047aa04e-55') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.693 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.693 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.694 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.697 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.697 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap047aa04e-55, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.698 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap047aa04e-55, col_values=(('external_ids', {'iface-id': '047aa04e-5502-4cdf-ae6a-94aacc3c05ff', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:1a:f7', 'vm-uuid': '446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.699 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:46 compute-0 NetworkManager[48954]: <info>  [1769443366.7002] manager: (tap047aa04e-55): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/289)
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.704 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.706 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.707 239969 INFO os_vif [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:1a:f7,bridge_name='br-int',has_traffic_filtering=True,id=047aa04e-5502-4cdf-ae6a-94aacc3c05ff,network=Network(300c36f8-0d84-4e5b-a546-a37b88a0fe3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap047aa04e-55')
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.760 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.761 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.761 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] No VIF found with MAC fa:16:3e:fe:1a:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.762 239969 INFO nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Using config drive
Jan 26 16:02:46 compute-0 nova_compute[239965]: 2026-01-26 16:02:46.784 239969 DEBUG nova.storage.rbd_utils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] rbd image 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:47 compute-0 ceph-mon[75140]: pgmap v1495: 305 pgs: 305 active+clean; 306 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Jan 26 16:02:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2476871821' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.303 239969 INFO nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Creating config drive at /var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd/disk.config
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.309 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zymye1y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.451 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8zymye1y" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.485 239969 DEBUG nova.storage.rbd_utils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] rbd image 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.490 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd/disk.config 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.664 239969 DEBUG nova.compute.manager [req-613f0a10-96ab-46e0-8d35-c5feff298bc7 req-bb8bd5c9-f142-4455-a829-07c3149b5fb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Received event network-vif-plugged-4c176286-4795-480e-8ae2-30472ef381f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.666 239969 DEBUG oslo_concurrency.lockutils [req-613f0a10-96ab-46e0-8d35-c5feff298bc7 req-bb8bd5c9-f142-4455-a829-07c3149b5fb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.667 239969 DEBUG oslo_concurrency.lockutils [req-613f0a10-96ab-46e0-8d35-c5feff298bc7 req-bb8bd5c9-f142-4455-a829-07c3149b5fb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.667 239969 DEBUG oslo_concurrency.lockutils [req-613f0a10-96ab-46e0-8d35-c5feff298bc7 req-bb8bd5c9-f142-4455-a829-07c3149b5fb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.668 239969 DEBUG nova.compute.manager [req-613f0a10-96ab-46e0-8d35-c5feff298bc7 req-bb8bd5c9-f142-4455-a829-07c3149b5fb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] No waiting events found dispatching network-vif-plugged-4c176286-4795-480e-8ae2-30472ef381f7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.668 239969 WARNING nova.compute.manager [req-613f0a10-96ab-46e0-8d35-c5feff298bc7 req-bb8bd5c9-f142-4455-a829-07c3149b5fb1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Received unexpected event network-vif-plugged-4c176286-4795-480e-8ae2-30472ef381f7 for instance with vm_state active and task_state None.
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.670 239969 DEBUG oslo_concurrency.processutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd/disk.config 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.672 239969 DEBUG nova.compute.manager [req-26d1ede7-e415-4c02-8f8c-08a4ec6594e6 req-dc09fc31-096c-4d1e-b576-5820362d6ef2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Received event network-changed-4c176286-4795-480e-8ae2-30472ef381f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.673 239969 DEBUG nova.compute.manager [req-26d1ede7-e415-4c02-8f8c-08a4ec6594e6 req-dc09fc31-096c-4d1e-b576-5820362d6ef2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Refreshing instance network info cache due to event network-changed-4c176286-4795-480e-8ae2-30472ef381f7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.673 239969 DEBUG oslo_concurrency.lockutils [req-26d1ede7-e415-4c02-8f8c-08a4ec6594e6 req-dc09fc31-096c-4d1e-b576-5820362d6ef2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-725537ef-e1ac-4cc7-bd87-fe04016df207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.674 239969 DEBUG oslo_concurrency.lockutils [req-26d1ede7-e415-4c02-8f8c-08a4ec6594e6 req-dc09fc31-096c-4d1e-b576-5820362d6ef2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-725537ef-e1ac-4cc7-bd87-fe04016df207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.674 239969 DEBUG nova.network.neutron [req-26d1ede7-e415-4c02-8f8c-08a4ec6594e6 req-dc09fc31-096c-4d1e-b576-5820362d6ef2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Refreshing network info cache for port 4c176286-4795-480e-8ae2-30472ef381f7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.676 239969 INFO nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Deleting local config drive /var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd/disk.config because it was imported into RBD.
Jan 26 16:02:47 compute-0 kernel: tap047aa04e-55: entered promiscuous mode
Jan 26 16:02:47 compute-0 NetworkManager[48954]: <info>  [1769443367.7328] manager: (tap047aa04e-55): new Tun device (/org/freedesktop/NetworkManager/Devices/290)
Jan 26 16:02:47 compute-0 ovn_controller[146046]: 2026-01-26T16:02:47Z|00650|binding|INFO|Claiming lport 047aa04e-5502-4cdf-ae6a-94aacc3c05ff for this chassis.
Jan 26 16:02:47 compute-0 ovn_controller[146046]: 2026-01-26T16:02:47Z|00651|binding|INFO|047aa04e-5502-4cdf-ae6a-94aacc3c05ff: Claiming fa:16:3e:fe:1a:f7 10.100.0.12
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.736 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.741 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:1a:f7 10.100.0.12'], port_security=['fa:16:3e:fe:1a:f7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-300c36f8-0d84-4e5b-a546-a37b88a0fe3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a88b6ae11bc04608ba445196f2238171', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13fc89c1-b460-4ff8-a52c-ef6ce7230e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d2c5104-c2ab-4ee9-b3c3-a669570106da, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=047aa04e-5502-4cdf-ae6a-94aacc3c05ff) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.743 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 047aa04e-5502-4cdf-ae6a-94aacc3c05ff in datapath 300c36f8-0d84-4e5b-a546-a37b88a0fe3e bound to our chassis
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.751 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 300c36f8-0d84-4e5b-a546-a37b88a0fe3e
Jan 26 16:02:47 compute-0 ovn_controller[146046]: 2026-01-26T16:02:47Z|00652|binding|INFO|Setting lport 047aa04e-5502-4cdf-ae6a-94aacc3c05ff ovn-installed in OVS
Jan 26 16:02:47 compute-0 ovn_controller[146046]: 2026-01-26T16:02:47Z|00653|binding|INFO|Setting lport 047aa04e-5502-4cdf-ae6a-94aacc3c05ff up in Southbound
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.755 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:47 compute-0 nova_compute[239965]: 2026-01-26 16:02:47.757 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.768 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5a53177c-dc9d-484f-9e60-36be5176dd3e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.769 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap300c36f8-01 in ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:02:47 compute-0 systemd-udevd[302114]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:02:47 compute-0 systemd-machined[208061]: New machine qemu-84-instance-0000004b.
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.771 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap300c36f8-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.771 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb26e60-b26b-4d26-9605-8551588f1869]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.772 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e6f282ea-6e11-4747-930d-15d83c051af6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 NetworkManager[48954]: <info>  [1769443367.7828] device (tap047aa04e-55): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:02:47 compute-0 systemd[1]: Started Virtual Machine qemu-84-instance-0000004b.
Jan 26 16:02:47 compute-0 NetworkManager[48954]: <info>  [1769443367.7843] device (tap047aa04e-55): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.783 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ff5924b7-be79-4f76-adaf-648e72984aee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.801 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[61000069-9818-43db-8a94-72cae889bb2b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.836 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2fa7639f-0304-451b-83d6-30ce23895fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 systemd-udevd[302117]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:02:47 compute-0 NetworkManager[48954]: <info>  [1769443367.8470] manager: (tap300c36f8-00): new Veth device (/org/freedesktop/NetworkManager/Devices/291)
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.848 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[78a7b5cf-74f4-4dc4-9d55-fb5ee487323f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.883 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e3edd142-290e-4e21-bf28-c535f121727f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.889 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9166cc0b-31f0-4e61-93c8-e44e367942cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 NetworkManager[48954]: <info>  [1769443367.9147] device (tap300c36f8-00): carrier: link connected
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.922 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c4923fc0-0a14-457c-876f-b64b98e3e4ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.942 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c2f706-f43e-4a0c-935b-5bede08f9c45]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap300c36f8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:4f:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485872, 'reachable_time': 33527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302146, 'error': None, 'target': 'ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.961 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[41e0afa9-662b-46db-94ab-ee3b9544a62b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6c:4ff9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 485872, 'tstamp': 485872}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302147, 'error': None, 'target': 'ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 306 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Jan 26 16:02:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:47.980 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b00174df-f62a-402c-8066-5d28bb7226d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap300c36f8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6c:4f:f9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 199], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485872, 'reachable_time': 33527, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302148, 'error': None, 'target': 'ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.022 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[59c169fb-16d0-4c13-aabe-6004fe280821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.093 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[174d9323-9094-4e83-937e-4070fe4b090a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.094 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap300c36f8-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.095 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.095 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap300c36f8-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:48 compute-0 NetworkManager[48954]: <info>  [1769443368.0988] manager: (tap300c36f8-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/292)
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.098 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 kernel: tap300c36f8-00: entered promiscuous mode
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.106 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap300c36f8-00, col_values=(('external_ids', {'iface-id': 'd3deef9d-8218-4484-bf3d-19233618b3dd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:48 compute-0 ovn_controller[146046]: 2026-01-26T16:02:48Z|00654|binding|INFO|Releasing lport d3deef9d-8218-4484-bf3d-19233618b3dd from this chassis (sb_readonly=0)
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.107 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.127 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.130 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/300c36f8-0d84-4e5b-a546-a37b88a0fe3e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/300c36f8-0d84-4e5b-a546-a37b88a0fe3e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.132 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[763fb49f-f505-4fab-a181-ccec841eb9c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.133 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-300c36f8-0d84-4e5b-a546-a37b88a0fe3e
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/300c36f8-0d84-4e5b-a546-a37b88a0fe3e.pid.haproxy
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 300c36f8-0d84-4e5b-a546-a37b88a0fe3e
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.135 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e', 'env', 'PROCESS_TAG=haproxy-300c36f8-0d84-4e5b-a546-a37b88a0fe3e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/300c36f8-0d84-4e5b-a546-a37b88a0fe3e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.418 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443368.4176934, 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.419 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] VM Started (Lifecycle Event)
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.439 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.442 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443368.417818, 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.443 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] VM Paused (Lifecycle Event)
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.481 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.485 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.505 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:48 compute-0 podman[302219]: 2026-01-26 16:02:48.590467573 +0000 UTC m=+0.079002638 container create 71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:02:48 compute-0 podman[302219]: 2026-01-26 16:02:48.539271478 +0000 UTC m=+0.027806563 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:02:48 compute-0 systemd[1]: Started libpod-conmon-71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451.scope.
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.654 239969 DEBUG oslo_concurrency.lockutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "725537ef-e1ac-4cc7-bd87-fe04016df207" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.655 239969 DEBUG oslo_concurrency.lockutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.655 239969 DEBUG oslo_concurrency.lockutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.656 239969 DEBUG oslo_concurrency.lockutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.656 239969 DEBUG oslo_concurrency.lockutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.657 239969 INFO nova.compute.manager [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Terminating instance
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.659 239969 DEBUG nova.compute.manager [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:02:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95649a4b2388370a00a1210e8edabfde4557baec0e02660c42833fec2218311e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:02:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:02:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3418206725' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:02:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:02:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3418206725' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:02:48 compute-0 podman[302219]: 2026-01-26 16:02:48.686908508 +0000 UTC m=+0.175443593 container init 71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:02:48 compute-0 podman[302219]: 2026-01-26 16:02:48.693229042 +0000 UTC m=+0.181764107 container start 71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:02:48 compute-0 kernel: tap4c176286-47 (unregistering): left promiscuous mode
Jan 26 16:02:48 compute-0 NetworkManager[48954]: <info>  [1769443368.7117] device (tap4c176286-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:02:48 compute-0 ovn_controller[146046]: 2026-01-26T16:02:48Z|00655|binding|INFO|Releasing lport 4c176286-4795-480e-8ae2-30472ef381f7 from this chassis (sb_readonly=0)
Jan 26 16:02:48 compute-0 ovn_controller[146046]: 2026-01-26T16:02:48Z|00656|binding|INFO|Setting lport 4c176286-4795-480e-8ae2-30472ef381f7 down in Southbound
Jan 26 16:02:48 compute-0 ovn_controller[146046]: 2026-01-26T16:02:48Z|00657|binding|INFO|Removing iface tap4c176286-47 ovn-installed in OVS
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.720 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.728 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:54:91:4e 10.100.0.13'], port_security=['fa:16:3e:54:91:4e 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '725537ef-e1ac-4cc7-bd87-fe04016df207', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4c176286-4795-480e-8ae2-30472ef381f7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:48 compute-0 neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e[302234]: [NOTICE]   (302238) : New worker (302243) forked
Jan 26 16:02:48 compute-0 neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e[302234]: [NOTICE]   (302238) : Loading success.
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.754 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004a.scope: Deactivated successfully.
Jan 26 16:02:48 compute-0 systemd[1]: machine-qemu\x2d83\x2dinstance\x2d0000004a.scope: Consumed 3.835s CPU time.
Jan 26 16:02:48 compute-0 systemd-machined[208061]: Machine qemu-83-instance-0000004a terminated.
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018123664403866843 of space, bias 1.0, pg target 0.5437099321160053 quantized to 32 (current 32)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010162478678780872 of space, bias 1.0, pg target 0.30487436036342613 quantized to 32 (current 32)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.71611760104627e-07 of space, bias 4.0, pg target 0.0009259341121255524 quantized to 16 (current 16)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:02:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.837 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4c176286-4795-480e-8ae2-30472ef381f7 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 unbound from our chassis
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.839 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac4ec909-9164-40ab-a894-4d501ce12c59
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.857 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c01b63c4-61d5-4db8-9b49-a91b3256a056]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.884 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.890 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.900 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7cc8694c-27d2-41c4-a991-3f005965ecc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.905 239969 INFO nova.virt.libvirt.driver [-] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Instance destroyed successfully.
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.905 239969 DEBUG nova.objects.instance [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'resources' on Instance uuid 725537ef-e1ac-4cc7-bd87-fe04016df207 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.906 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[568fdc70-d653-41fe-924e-0b6b8f26b4be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.925 239969 DEBUG nova.virt.libvirt.vif [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-1011695261',display_name='tempest-ServerActionsTestOtherA-server-1011695261',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-1011695261',id=74,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:02:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-4y30pkcm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',ima
ge_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:02:45Z,user_data=None,user_id='57040a375df6487fbc604a9b04389eeb',uuid=725537ef-e1ac-4cc7-bd87-fe04016df207,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.926 239969 DEBUG nova.network.os_vif_util [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.927 239969 DEBUG nova.network.os_vif_util [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:54:91:4e,bridge_name='br-int',has_traffic_filtering=True,id=4c176286-4795-480e-8ae2-30472ef381f7,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c176286-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.927 239969 DEBUG os_vif [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:91:4e,bridge_name='br-int',has_traffic_filtering=True,id=4c176286-4795-480e-8ae2-30472ef381f7,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c176286-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.930 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.930 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c176286-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.937 239969 INFO os_vif [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:54:91:4e,bridge_name='br-int',has_traffic_filtering=True,id=4c176286-4795-480e-8ae2-30472ef381f7,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4c176286-47')
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.948 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c3301c1f-b72b-4132-b40b-d6e272f2b60f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.967 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[85f31533-317a-495c-af83-8a68f0484383]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac4ec909-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:6b:f8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 161], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473948, 'reachable_time': 43095, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302283, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.983 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba68c46f-3fc9-4e6d-8377-0b9ac0196ca8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473962, 'tstamp': 473962}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302287, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapac4ec909-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 473965, 'tstamp': 473965}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302287, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.985 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ec909-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.988 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac4ec909-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:48 compute-0 nova_compute[239965]: 2026-01-26 16:02:48.988 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.988 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.989 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac4ec909-90, col_values=(('external_ids', {'iface-id': '47411fc6-7d46-43a0-b0c2-7c06b22cee9e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:48.990 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:49 compute-0 ceph-mon[75140]: pgmap v1496: 305 pgs: 305 active+clean; 306 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 131 op/s
Jan 26 16:02:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3418206725' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:02:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3418206725' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.070 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.199 239969 DEBUG nova.network.neutron [req-26d1ede7-e415-4c02-8f8c-08a4ec6594e6 req-dc09fc31-096c-4d1e-b576-5820362d6ef2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Updated VIF entry in instance network info cache for port 4c176286-4795-480e-8ae2-30472ef381f7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.200 239969 DEBUG nova.network.neutron [req-26d1ede7-e415-4c02-8f8c-08a4ec6594e6 req-dc09fc31-096c-4d1e-b576-5820362d6ef2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Updating instance_info_cache with network_info: [{"id": "4c176286-4795-480e-8ae2-30472ef381f7", "address": "fa:16:3e:54:91:4e", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4c176286-47", "ovs_interfaceid": "4c176286-4795-480e-8ae2-30472ef381f7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.224 239969 DEBUG oslo_concurrency.lockutils [req-26d1ede7-e415-4c02-8f8c-08a4ec6594e6 req-dc09fc31-096c-4d1e-b576-5820362d6ef2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-725537ef-e1ac-4cc7-bd87-fe04016df207" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.337 239969 INFO nova.virt.libvirt.driver [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Deleting instance files /var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207_del
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.339 239969 INFO nova.virt.libvirt.driver [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Deletion of /var/lib/nova/instances/725537ef-e1ac-4cc7-bd87-fe04016df207_del complete
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.407 239969 INFO nova.compute.manager [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.408 239969 DEBUG oslo.service.loopingcall [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.408 239969 DEBUG nova.compute.manager [-] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.409 239969 DEBUG nova.network.neutron [-] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.740 239969 DEBUG nova.compute.manager [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Received event network-vif-plugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.741 239969 DEBUG oslo_concurrency.lockutils [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.741 239969 DEBUG oslo_concurrency.lockutils [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.741 239969 DEBUG oslo_concurrency.lockutils [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.742 239969 DEBUG nova.compute.manager [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Processing event network-vif-plugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.742 239969 DEBUG nova.compute.manager [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Received event network-vif-plugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.742 239969 DEBUG oslo_concurrency.lockutils [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.742 239969 DEBUG oslo_concurrency.lockutils [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.743 239969 DEBUG oslo_concurrency.lockutils [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.743 239969 DEBUG nova.compute.manager [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] No waiting events found dispatching network-vif-plugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.743 239969 WARNING nova.compute.manager [req-1b71c138-acc4-48fd-802d-a16316bc7394 req-21d23adf-65ab-4e2e-99e6-6e5738bbbb33 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Received unexpected event network-vif-plugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff for instance with vm_state building and task_state spawning.
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.744 239969 DEBUG nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.748 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443369.748442, 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.749 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] VM Resumed (Lifecycle Event)
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.753 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.756 239969 INFO nova.virt.libvirt.driver [-] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Instance spawned successfully.
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.757 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.774 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.780 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.782 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.783 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.783 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.784 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.784 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.784 239969 DEBUG nova.virt.libvirt.driver [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.812 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.843 239969 INFO nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Took 10.08 seconds to spawn the instance on the hypervisor.
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.843 239969 DEBUG nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.905 239969 INFO nova.compute.manager [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Took 11.30 seconds to build instance.
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.927 239969 DEBUG oslo_concurrency.lockutils [None req-004b7658-efaa-43de-830c-004d5b8a83e2 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.948 239969 DEBUG nova.network.neutron [-] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 269 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Jan 26 16:02:49 compute-0 nova_compute[239965]: 2026-01-26 16:02:49.981 239969 INFO nova.compute.manager [-] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Took 0.57 seconds to deallocate network for instance.
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.034 239969 DEBUG oslo_concurrency.lockutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.035 239969 DEBUG oslo_concurrency.lockutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.130 239969 DEBUG oslo_concurrency.processutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/930073544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.812 239969 DEBUG oslo_concurrency.processutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.819 239969 DEBUG nova.compute.provider_tree [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.824 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.843 239969 DEBUG nova.scheduler.client.report [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.872 239969 DEBUG oslo_concurrency.lockutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.898 239969 INFO nova.scheduler.client.report [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Deleted allocations for instance 725537ef-e1ac-4cc7-bd87-fe04016df207
Jan 26 16:02:50 compute-0 nova_compute[239965]: 2026-01-26 16:02:50.962 239969 DEBUG oslo_concurrency.lockutils [None req-22106db3-047c-4a11-b11e-0f0c7f65dd6e 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "725537ef-e1ac-4cc7-bd87-fe04016df207" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:51 compute-0 ceph-mon[75140]: pgmap v1497: 305 pgs: 305 active+clean; 269 MiB data, 648 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 195 op/s
Jan 26 16:02:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/930073544' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.475 239969 DEBUG oslo_concurrency.lockutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.476 239969 DEBUG oslo_concurrency.lockutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.476 239969 DEBUG oslo_concurrency.lockutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.477 239969 DEBUG oslo_concurrency.lockutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.477 239969 DEBUG oslo_concurrency.lockutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.479 239969 INFO nova.compute.manager [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Terminating instance
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.480 239969 DEBUG nova.compute.manager [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:02:51 compute-0 kernel: tapf72df886-4f (unregistering): left promiscuous mode
Jan 26 16:02:51 compute-0 NetworkManager[48954]: <info>  [1769443371.5633] device (tapf72df886-4f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00658|binding|INFO|Releasing lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 from this chassis (sb_readonly=0)
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00659|binding|INFO|Setting lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 down in Southbound
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00660|binding|INFO|Removing iface tapf72df886-4f ovn-installed in OVS
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.575 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.579 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.582 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:b9:39 10.100.0.10'], port_security=['fa:16:3e:65:b9:39 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbc49a44-5efe-4d87-aea7-b26f7232c224', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ea85f49-3fe1-4caf-b20f-790e82828475', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.583 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 unbound from our chassis
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.585 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac4ec909-9164-40ab-a894-4d501ce12c59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.587 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0d7e77-fc09-4685-b180-d69b33c454d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.587 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59 namespace which is not needed anymore
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Jan 26 16:02:51 compute-0 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003e.scope: Consumed 18.300s CPU time.
Jan 26 16:02:51 compute-0 systemd-machined[208061]: Machine qemu-69-instance-0000003e terminated.
Jan 26 16:02:51 compute-0 kernel: tapf72df886-4f: entered promiscuous mode
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00661|binding|INFO|Claiming lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 for this chassis.
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00662|binding|INFO|f72df886-4fcf-4dc1-8e52-62b2a9cbfb49: Claiming fa:16:3e:65:b9:39 10.100.0.10
Jan 26 16:02:51 compute-0 kernel: tapf72df886-4f (unregistering): left promiscuous mode
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.709 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.715 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:b9:39 10.100.0.10'], port_security=['fa:16:3e:65:b9:39 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbc49a44-5efe-4d87-aea7-b26f7232c224', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ea85f49-3fe1-4caf-b20f-790e82828475', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.720 239969 INFO nova.virt.libvirt.driver [-] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Instance destroyed successfully.
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.721 239969 DEBUG nova.objects.instance [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lazy-loading 'resources' on Instance uuid fbc49a44-5efe-4d87-aea7-b26f7232c224 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:51 compute-0 neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59[293234]: [NOTICE]   (293238) : haproxy version is 2.8.14-c23fe91
Jan 26 16:02:51 compute-0 neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59[293234]: [NOTICE]   (293238) : path to executable is /usr/sbin/haproxy
Jan 26 16:02:51 compute-0 neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59[293234]: [WARNING]  (293238) : Exiting Master process...
Jan 26 16:02:51 compute-0 neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59[293234]: [ALERT]    (293238) : Current worker (293240) exited with code 143 (Terminated)
Jan 26 16:02:51 compute-0 neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59[293234]: [WARNING]  (293238) : All workers exited. Exiting... (0)
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00663|binding|INFO|Setting lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 ovn-installed in OVS
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00664|binding|INFO|Setting lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 up in Southbound
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00665|binding|INFO|Releasing lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 from this chassis (sb_readonly=1)
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00666|if_status|INFO|Dropped 2 log messages in last 175 seconds (most recently, 175 seconds ago) due to excessive rate
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00667|if_status|INFO|Not setting lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 down as sb is readonly
Jan 26 16:02:51 compute-0 systemd[1]: libpod-34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db.scope: Deactivated successfully.
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00668|binding|INFO|Removing iface tapf72df886-4f ovn-installed in OVS
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.734 239969 DEBUG nova.virt.libvirt.vif [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:00:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-439720396',display_name='tempest-ServerActionsTestOtherA-server-439720396',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-439720396',id=62,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPEYI1ckjoLk7yIcgUD2N5pEXz7CpgCWmUDrDMsy81rcfUCI8L/DB4jj85dJvKgW1J8FZ3QYwC3Zv2Ccazu/ZH8R7JCX7KFR/iJYErtRCi4QLPrZS0N+skSfnxQ/qbE6Ew==',key_name='tempest-keypair-339418653',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:00:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4f5d05ded9184fea9b526a3522d47ea5',ramdisk_id='',reservation_id='r-up098k7k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-980651809',owner_user_name='tempest-ServerActionsTestOtherA-980651809-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:00:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='57040a375df6487fbc604a9b04389eeb',uuid=fbc49a44-5efe-4d87-aea7-b26f7232c224,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.735 239969 DEBUG nova.network.os_vif_util [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converting VIF {"id": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "address": "fa:16:3e:65:b9:39", "network": {"id": "ac4ec909-9164-40ab-a894-4d501ce12c59", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-721152760-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4f5d05ded9184fea9b526a3522d47ea5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf72df886-4f", "ovs_interfaceid": "f72df886-4fcf-4dc1-8e52-62b2a9cbfb49", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.735 239969 DEBUG nova.network.os_vif_util [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:b9:39,bridge_name='br-int',has_traffic_filtering=True,id=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72df886-4f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.736 239969 DEBUG os_vif [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:b9:39,bridge_name='br-int',has_traffic_filtering=True,id=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72df886-4f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00669|binding|INFO|Releasing lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 from this chassis (sb_readonly=0)
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 ovn_controller[146046]: 2026-01-26T16:02:51Z|00670|binding|INFO|Setting lport f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 down in Southbound
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.739 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf72df886-4f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.740 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 podman[302335]: 2026-01-26 16:02:51.743583362 +0000 UTC m=+0.057240135 container died 34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.744 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.747 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:b9:39 10.100.0.10'], port_security=['fa:16:3e:65:b9:39 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'fbc49a44-5efe-4d87-aea7-b26f7232c224', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac4ec909-9164-40ab-a894-4d501ce12c59', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4f5d05ded9184fea9b526a3522d47ea5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0ea85f49-3fe1-4caf-b20f-790e82828475', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.188'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=287381d1-9fc0-4e2d-b786-047adf5707ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.759 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.761 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.763 239969 INFO os_vif [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:b9:39,bridge_name='br-int',has_traffic_filtering=True,id=f72df886-4fcf-4dc1-8e52-62b2a9cbfb49,network=Network(ac4ec909-9164-40ab-a894-4d501ce12c59),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf72df886-4f')
Jan 26 16:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db-userdata-shm.mount: Deactivated successfully.
Jan 26 16:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8d85cb6818267243b35c5a9f1f66cdcc099c755b83efb0fa92f1a673a8c9c87-merged.mount: Deactivated successfully.
Jan 26 16:02:51 compute-0 podman[302335]: 2026-01-26 16:02:51.800086526 +0000 UTC m=+0.113743299 container cleanup 34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:02:51 compute-0 systemd[1]: libpod-conmon-34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db.scope: Deactivated successfully.
Jan 26 16:02:51 compute-0 podman[302387]: 2026-01-26 16:02:51.893492897 +0000 UTC m=+0.066616195 container remove 34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.902 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c662bd11-6c10-47a6-9eef-90e9c5071137]: (4, ('Mon Jan 26 04:02:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59 (34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db)\n34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db\nMon Jan 26 04:02:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59 (34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db)\n34dffa102c7c5239544e936159cc27de1839457c13160ee664add79420e031db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.909 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6c2b04c0-13d2-49fe-8473-c804688b44b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.912 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac4ec909-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:51 compute-0 kernel: tapac4ec909-90: left promiscuous mode
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.915 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.921 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7d53c461-20a9-4523-bf43-29b0700eaba7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.936 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d4872f10-5e28-4b4f-909b-80c14d962272]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:51 compute-0 nova_compute[239965]: 2026-01-26 16:02:51.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.938 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c46bdc27-b412-4120-8516-e297f78de03d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.957 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bdf256-b608-4e8d-af6d-67efa3b27b09]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 473941, 'reachable_time': 15653, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302405, 'error': None, 'target': 'ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:51 compute-0 systemd[1]: run-netns-ovnmeta\x2dac4ec909\x2d9164\x2d40ab\x2da894\x2d4d501ce12c59.mount: Deactivated successfully.
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.964 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac4ec909-9164-40ab-a894-4d501ce12c59 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.964 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[f687d804-8248-426e-b02c-98fdf31441ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.965 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 unbound from our chassis
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.967 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac4ec909-9164-40ab-a894-4d501ce12c59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:02:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 260 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 236 op/s
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.969 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[76956593-28cc-4f88-a82d-e5a4f6cbf3f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.970 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 in datapath ac4ec909-9164-40ab-a894-4d501ce12c59 unbound from our chassis
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.973 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac4ec909-9164-40ab-a894-4d501ce12c59, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:02:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:51.974 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[82dadcdf-c530-41b5-9c76-2c0af5d228c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.261 239969 INFO nova.virt.libvirt.driver [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Deleting instance files /var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224_del
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.262 239969 INFO nova.virt.libvirt.driver [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Deletion of /var/lib/nova/instances/fbc49a44-5efe-4d87-aea7-b26f7232c224_del complete
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.271 239969 DEBUG nova.compute.manager [req-98e525c8-9bfb-4575-a619-d1e150d0ef3e req-2abf40d5-a950-42b4-b916-fef0b486af50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Received event network-vif-deleted-4c176286-4795-480e-8ae2-30472ef381f7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.271 239969 DEBUG nova.compute.manager [req-98e525c8-9bfb-4575-a619-d1e150d0ef3e req-2abf40d5-a950-42b4-b916-fef0b486af50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-unplugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.272 239969 DEBUG oslo_concurrency.lockutils [req-98e525c8-9bfb-4575-a619-d1e150d0ef3e req-2abf40d5-a950-42b4-b916-fef0b486af50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.272 239969 DEBUG oslo_concurrency.lockutils [req-98e525c8-9bfb-4575-a619-d1e150d0ef3e req-2abf40d5-a950-42b4-b916-fef0b486af50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.273 239969 DEBUG oslo_concurrency.lockutils [req-98e525c8-9bfb-4575-a619-d1e150d0ef3e req-2abf40d5-a950-42b4-b916-fef0b486af50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.273 239969 DEBUG nova.compute.manager [req-98e525c8-9bfb-4575-a619-d1e150d0ef3e req-2abf40d5-a950-42b4-b916-fef0b486af50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] No waiting events found dispatching network-vif-unplugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.274 239969 DEBUG nova.compute.manager [req-98e525c8-9bfb-4575-a619-d1e150d0ef3e req-2abf40d5-a950-42b4-b916-fef0b486af50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-unplugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.325 239969 INFO nova.compute.manager [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Took 0.84 seconds to destroy the instance on the hypervisor.
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.326 239969 DEBUG oslo.service.loopingcall [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.327 239969 DEBUG nova.compute.manager [-] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:02:52 compute-0 nova_compute[239965]: 2026-01-26 16:02:52.327 239969 DEBUG nova.network.neutron [-] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:02:52 compute-0 ovn_controller[146046]: 2026-01-26T16:02:52Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:39:2f 10.100.0.13
Jan 26 16:02:52 compute-0 ovn_controller[146046]: 2026-01-26T16:02:52Z|00080|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:39:2f 10.100.0.13
Jan 26 16:02:53 compute-0 ceph-mon[75140]: pgmap v1498: 305 pgs: 305 active+clean; 260 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.2 MiB/s wr, 236 op/s
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.181 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.345 239969 DEBUG nova.network.neutron [-] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.365 239969 INFO nova.compute.manager [-] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Took 1.04 seconds to deallocate network for instance.
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.422 239969 DEBUG oslo_concurrency.lockutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.423 239969 DEBUG oslo_concurrency.lockutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.518 239969 DEBUG oslo_concurrency.processutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.597 239969 DEBUG nova.compute.manager [req-7e01ca07-8264-4d7d-9ea2-d1d93255762e req-defa5cff-9922-4837-b0cf-0fc98655795e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-deleted-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.958 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.959 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 260 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 194 op/s
Jan 26 16:02:53 compute-0 nova_compute[239965]: 2026-01-26 16:02:53.979 239969 DEBUG nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.047 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3901902193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.104 239969 DEBUG oslo_concurrency.processutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.112 239969 DEBUG nova.compute.provider_tree [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.131 239969 DEBUG nova.scheduler.client.report [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.164 239969 DEBUG oslo_concurrency.lockutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.741s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.166 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3901902193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.177 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.178 239969 INFO nova.compute.claims [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.191 239969 INFO nova.scheduler.client.report [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Deleted allocations for instance fbc49a44-5efe-4d87-aea7-b26f7232c224
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.282 239969 DEBUG oslo_concurrency.lockutils [None req-946c8aac-f57a-4d3e-8108-fbaec7209aa0 57040a375df6487fbc604a9b04389eeb 4f5d05ded9184fea9b526a3522d47ea5 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.362 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.404 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.404 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.405 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.405 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.405 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] No waiting events found dispatching network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.405 239969 WARNING nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received unexpected event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 for instance with vm_state deleted and task_state None.
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.405 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.406 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.406 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.406 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.406 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] No waiting events found dispatching network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.407 239969 WARNING nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received unexpected event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 for instance with vm_state deleted and task_state None.
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.407 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.407 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.407 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.408 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.408 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] No waiting events found dispatching network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.408 239969 WARNING nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received unexpected event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 for instance with vm_state deleted and task_state None.
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.408 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-unplugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.409 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.409 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.409 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.409 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] No waiting events found dispatching network-vif-unplugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.410 239969 WARNING nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received unexpected event network-vif-unplugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 for instance with vm_state deleted and task_state None.
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.410 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.410 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.411 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.411 239969 DEBUG oslo_concurrency.lockutils [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fbc49a44-5efe-4d87-aea7-b26f7232c224-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.411 239969 DEBUG nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] No waiting events found dispatching network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.411 239969 WARNING nova.compute.manager [req-11ebf5d9-b264-48ac-8ae7-18582d60f1ec req-a36ee71d-72d1-4ea0-867e-1ff6f0ec9122 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Received unexpected event network-vif-plugged-f72df886-4fcf-4dc1-8e52-62b2a9cbfb49 for instance with vm_state deleted and task_state None.
Jan 26 16:02:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3064933498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.947 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.954 239969 DEBUG nova.compute.provider_tree [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.974 239969 DEBUG nova.scheduler.client.report [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.998 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:54 compute-0 nova_compute[239965]: 2026-01-26 16:02:54.999 239969 DEBUG nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.057 239969 DEBUG nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.058 239969 DEBUG nova.network.neutron [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.081 239969 INFO nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.104 239969 DEBUG nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:02:55 compute-0 ceph-mon[75140]: pgmap v1499: 305 pgs: 305 active+clean; 260 MiB data, 643 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 1.4 MiB/s wr, 194 op/s
Jan 26 16:02:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3064933498' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.223 239969 DEBUG nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.225 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.226 239969 INFO nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Creating image(s)
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.253 239969 DEBUG nova.storage.rbd_utils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 492acb7a-42e1-4918-a973-352f9d8251d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.282 239969 DEBUG nova.storage.rbd_utils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 492acb7a-42e1-4918-a973-352f9d8251d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.314 239969 DEBUG nova.storage.rbd_utils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 492acb7a-42e1-4918-a973-352f9d8251d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.319 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.360 239969 DEBUG nova.policy [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59f8ee09903a4e0a812c3d9e013996bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94771806cd0d4b5db117956e09fea9e6', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.387 239969 DEBUG oslo_concurrency.lockutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquiring lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.388 239969 DEBUG oslo_concurrency.lockutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.388 239969 DEBUG oslo_concurrency.lockutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquiring lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.388 239969 DEBUG oslo_concurrency.lockutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.389 239969 DEBUG oslo_concurrency.lockutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.390 239969 INFO nova.compute.manager [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Terminating instance
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.392 239969 DEBUG nova.compute.manager [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.398 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.400 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.400 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.401 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:55 compute-0 kernel: tap047aa04e-55 (unregistering): left promiscuous mode
Jan 26 16:02:55 compute-0 NetworkManager[48954]: <info>  [1769443375.4267] device (tap047aa04e-55): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.430 239969 DEBUG nova.storage.rbd_utils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 492acb7a-42e1-4918-a973-352f9d8251d0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:55 compute-0 ovn_controller[146046]: 2026-01-26T16:02:55Z|00671|binding|INFO|Releasing lport 047aa04e-5502-4cdf-ae6a-94aacc3c05ff from this chassis (sb_readonly=0)
Jan 26 16:02:55 compute-0 ovn_controller[146046]: 2026-01-26T16:02:55Z|00672|binding|INFO|Setting lport 047aa04e-5502-4cdf-ae6a-94aacc3c05ff down in Southbound
Jan 26 16:02:55 compute-0 ovn_controller[146046]: 2026-01-26T16:02:55Z|00673|binding|INFO|Removing iface tap047aa04e-55 ovn-installed in OVS
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.442 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 492acb7a-42e1-4918-a973-352f9d8251d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.455 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:1a:f7 10.100.0.12'], port_security=['fa:16:3e:fe:1a:f7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-300c36f8-0d84-4e5b-a546-a37b88a0fe3e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a88b6ae11bc04608ba445196f2238171', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13fc89c1-b460-4ff8-a52c-ef6ce7230e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0d2c5104-c2ab-4ee9-b3c3-a669570106da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=047aa04e-5502-4cdf-ae6a-94aacc3c05ff) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.457 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 047aa04e-5502-4cdf-ae6a-94aacc3c05ff in datapath 300c36f8-0d84-4e5b-a546-a37b88a0fe3e unbound from our chassis
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.459 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 300c36f8-0d84-4e5b-a546-a37b88a0fe3e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.460 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[042f8222-1994-4e32-b985-56d91539713d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.461 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e namespace which is not needed anymore
Jan 26 16:02:55 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000004b.scope: Deactivated successfully.
Jan 26 16:02:55 compute-0 systemd[1]: machine-qemu\x2d84\x2dinstance\x2d0000004b.scope: Consumed 6.239s CPU time.
Jan 26 16:02:55 compute-0 systemd-machined[208061]: Machine qemu-84-instance-0000004b terminated.
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.481 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:55 compute-0 neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e[302234]: [NOTICE]   (302238) : haproxy version is 2.8.14-c23fe91
Jan 26 16:02:55 compute-0 neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e[302234]: [NOTICE]   (302238) : path to executable is /usr/sbin/haproxy
Jan 26 16:02:55 compute-0 neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e[302234]: [WARNING]  (302238) : Exiting Master process...
Jan 26 16:02:55 compute-0 neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e[302234]: [WARNING]  (302238) : Exiting Master process...
Jan 26 16:02:55 compute-0 neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e[302234]: [ALERT]    (302238) : Current worker (302243) exited with code 143 (Terminated)
Jan 26 16:02:55 compute-0 neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e[302234]: [WARNING]  (302238) : All workers exited. Exiting... (0)
Jan 26 16:02:55 compute-0 systemd[1]: libpod-71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451.scope: Deactivated successfully.
Jan 26 16:02:55 compute-0 podman[302563]: 2026-01-26 16:02:55.610235893 +0000 UTC m=+0.056575499 container died 71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.632 239969 INFO nova.virt.libvirt.driver [-] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Instance destroyed successfully.
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.632 239969 DEBUG nova.objects.instance [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lazy-loading 'resources' on Instance uuid 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.653 239969 DEBUG nova.virt.libvirt.vif [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:37Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-1608663305',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-1608663305',id=75,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:02:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a88b6ae11bc04608ba445196f2238171',ramdisk_id='',reservation_id='r-570e8ox0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tem
pest-ServerTagsTestJSON-238093472',owner_user_name='tempest-ServerTagsTestJSON-238093472-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:02:49Z,user_data=None,user_id='4f5fcc73d4b8415a9d00f19f2f851ab3',uuid=446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.654 239969 DEBUG nova.network.os_vif_util [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Converting VIF {"id": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "address": "fa:16:3e:fe:1a:f7", "network": {"id": "300c36f8-0d84-4e5b-a546-a37b88a0fe3e", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-233879439-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a88b6ae11bc04608ba445196f2238171", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap047aa04e-55", "ovs_interfaceid": "047aa04e-5502-4cdf-ae6a-94aacc3c05ff", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.655 239969 DEBUG nova.network.os_vif_util [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:1a:f7,bridge_name='br-int',has_traffic_filtering=True,id=047aa04e-5502-4cdf-ae6a-94aacc3c05ff,network=Network(300c36f8-0d84-4e5b-a546-a37b88a0fe3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap047aa04e-55') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.656 239969 DEBUG os_vif [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:1a:f7,bridge_name='br-int',has_traffic_filtering=True,id=047aa04e-5502-4cdf-ae6a-94aacc3c05ff,network=Network(300c36f8-0d84-4e5b-a546-a37b88a0fe3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap047aa04e-55') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.661 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.662 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap047aa04e-55, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.664 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.671 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.677 239969 INFO os_vif [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:1a:f7,bridge_name='br-int',has_traffic_filtering=True,id=047aa04e-5502-4cdf-ae6a-94aacc3c05ff,network=Network(300c36f8-0d84-4e5b-a546-a37b88a0fe3e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap047aa04e-55')
Jan 26 16:02:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-95649a4b2388370a00a1210e8edabfde4557baec0e02660c42833fec2218311e-merged.mount: Deactivated successfully.
Jan 26 16:02:55 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451-userdata-shm.mount: Deactivated successfully.
Jan 26 16:02:55 compute-0 podman[302563]: 2026-01-26 16:02:55.733375031 +0000 UTC m=+0.179714637 container cleanup 71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:02:55 compute-0 systemd[1]: libpod-conmon-71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451.scope: Deactivated successfully.
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.760 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 492acb7a-42e1-4918-a973-352f9d8251d0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.319s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:55 compute-0 podman[302625]: 2026-01-26 16:02:55.809842516 +0000 UTC m=+0.052565479 container remove 71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.817 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d70ac307-35fc-4312-bd71-998b20ce93d6]: (4, ('Mon Jan 26 04:02:55 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e (71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451)\n71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451\nMon Jan 26 04:02:55 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e (71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451)\n71058f096e0bdd6fe2ac7563e6db5f08f28673c30ac669126f40e2240e9f8451\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.820 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[17db708c-98ea-4745-9fb8-58b287945efa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.821 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap300c36f8-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:55 compute-0 kernel: tap300c36f8-00: left promiscuous mode
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.849 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a4280cc2-132e-4999-83ab-70bab6e7ad72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.862 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cbc785b0-48a9-4e9b-a3a1-926c9829da69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.867 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[35231bce-6881-440c-af41-566bfa4fe7a4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.876 239969 DEBUG nova.storage.rbd_utils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] resizing rbd image 492acb7a-42e1-4918-a973-352f9d8251d0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.884 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f93da772-ad68-4132-b3f3-8531297f0132]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 485863, 'reachable_time': 25042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302675, 'error': None, 'target': 'ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.887 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-300c36f8-0d84-4e5b-a546-a37b88a0fe3e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.887 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[8986cd62-2fb0-4257-a91b-dc40c3d5b4d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:55 compute-0 systemd[1]: run-netns-ovnmeta\x2d300c36f8\x2d0d84\x2d4e5b\x2da546\x2da37b88a0fe3e.mount: Deactivated successfully.
Jan 26 16:02:55 compute-0 kernel: tapa68cd767-39 (unregistering): left promiscuous mode
Jan 26 16:02:55 compute-0 NetworkManager[48954]: <info>  [1769443375.9473] device (tapa68cd767-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:02:55 compute-0 ovn_controller[146046]: 2026-01-26T16:02:55Z|00674|binding|INFO|Releasing lport a68cd767-391c-4870-8944-dec4f54e1591 from this chassis (sb_readonly=0)
Jan 26 16:02:55 compute-0 ovn_controller[146046]: 2026-01-26T16:02:55Z|00675|binding|INFO|Setting lport a68cd767-391c-4870-8944-dec4f54e1591 down in Southbound
Jan 26 16:02:55 compute-0 ovn_controller[146046]: 2026-01-26T16:02:55Z|00676|binding|INFO|Removing iface tapa68cd767-39 ovn-installed in OVS
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.966 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:39:2f 10.100.0.13'], port_security=['fa:16:3e:d1:39:2f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e264bffd-7e7a-43a6-b297-0bca74340744', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a68cd767-391c-4870-8944-dec4f54e1591) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.967 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a68cd767-391c-4870-8944-dec4f54e1591 in datapath db63036e-9778-49aa-880f-9b900f3c2179 unbound from our chassis
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.968 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db63036e-9778-49aa-880f-9b900f3c2179, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:02:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 227 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.0 MiB/s wr, 412 op/s
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.969 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cbdfe7c8-514e-4768-a1b2-07aa0a605348]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:55.970 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace which is not needed anymore
Jan 26 16:02:55 compute-0 nova_compute[239965]: 2026-01-26 16:02:55.992 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.001 239969 DEBUG nova.objects.instance [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:56 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000049.scope: Deactivated successfully.
Jan 26 16:02:56 compute-0 systemd[1]: machine-qemu\x2d82\x2dinstance\x2d00000049.scope: Consumed 14.342s CPU time.
Jan 26 16:02:56 compute-0 systemd-machined[208061]: Machine qemu-82-instance-00000049 terminated.
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.021 239969 INFO nova.virt.libvirt.driver [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Deleting instance files /var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_del
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.022 239969 INFO nova.virt.libvirt.driver [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Deletion of /var/lib/nova/instances/446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd_del complete
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.027 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.027 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Ensure instance console log exists: /var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.027 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.028 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.028 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.070 239969 INFO nova.compute.manager [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.071 239969 DEBUG oslo.service.loopingcall [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.071 239969 DEBUG nova.compute.manager [-] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.071 239969 DEBUG nova.network.neutron [-] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:02:56 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[301580]: [NOTICE]   (301584) : haproxy version is 2.8.14-c23fe91
Jan 26 16:02:56 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[301580]: [NOTICE]   (301584) : path to executable is /usr/sbin/haproxy
Jan 26 16:02:56 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[301580]: [WARNING]  (301584) : Exiting Master process...
Jan 26 16:02:56 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[301580]: [ALERT]    (301584) : Current worker (301586) exited with code 143 (Terminated)
Jan 26 16:02:56 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[301580]: [WARNING]  (301584) : All workers exited. Exiting... (0)
Jan 26 16:02:56 compute-0 systemd[1]: libpod-817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633.scope: Deactivated successfully.
Jan 26 16:02:56 compute-0 podman[302733]: 2026-01-26 16:02:56.108139618 +0000 UTC m=+0.043903617 container died 817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633-userdata-shm.mount: Deactivated successfully.
Jan 26 16:02:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-8372a9d880b33a015622072b7a30c0ae772320af6ec5056c029f383793b2ebde-merged.mount: Deactivated successfully.
Jan 26 16:02:56 compute-0 podman[302733]: 2026-01-26 16:02:56.145288549 +0000 UTC m=+0.081052538 container cleanup 817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:02:56 compute-0 systemd[1]: libpod-conmon-817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633.scope: Deactivated successfully.
Jan 26 16:02:56 compute-0 NetworkManager[48954]: <info>  [1769443376.1737] manager: (tapa68cd767-39): new Tun device (/org/freedesktop/NetworkManager/Devices/293)
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.175 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.182 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:56 compute-0 ceph-mon[75140]: pgmap v1500: 305 pgs: 305 active+clean; 227 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.0 MiB/s wr, 412 op/s
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.197 239969 INFO nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance shutdown successfully after 13 seconds.
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.205 239969 INFO nova.virt.libvirt.driver [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance destroyed successfully.
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.213 239969 INFO nova.virt.libvirt.driver [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance destroyed successfully.
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.214 239969 DEBUG nova.virt.libvirt.vif [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1731532228',display_name='tempest-ServerDiskConfigTestJSON-server-1731532228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1731532228',id=73,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:02:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-3pwxsvw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDiskConfigTestJSON-1476979264-project-me
mber'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:42Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=e264bffd-7e7a-43a6-b297-0bca74340744,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.214 239969 DEBUG nova.network.os_vif_util [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.215 239969 DEBUG nova.network.os_vif_util [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.215 239969 DEBUG os_vif [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.216 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.216 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa68cd767-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.218 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.222 239969 INFO os_vif [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39')
Jan 26 16:02:56 compute-0 podman[302762]: 2026-01-26 16:02:56.2318314 +0000 UTC m=+0.061881468 container remove 817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:02:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:56.239 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cf0d24c1-32a4-426b-b866-bbb39e26beda]: (4, ('Mon Jan 26 04:02:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633)\n817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633\nMon Jan 26 04:02:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633)\n817fce6be191565c0574d44f822d227760ee888253d2bf84bc5fe8c0413fd633\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:56.241 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6d491c8d-3ed4-4973-8d57-93cc670b2b1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:56.243 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:56 compute-0 kernel: tapdb63036e-90: left promiscuous mode
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.245 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:56.249 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cb860d8d-e7f1-4af4-a7fd-824d2960f85c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:56.271 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c9374d-f9ec-4dc8-a7d7-3e882e181d5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:56.272 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c6294447-c143-4f07-8cdd-44ddb1b3a39d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:56.291 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[82161e54-a999-489e-9067-d25b80ea5b74]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 484947, 'reachable_time': 37472, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302805, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:56.295 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:02:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:56.295 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[2416ee26-e166-4e8c-b92f-641a354a29ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:02:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.417910) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443376418093, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 540, "num_deletes": 252, "total_data_size": 496836, "memory_usage": 506392, "flush_reason": "Manual Compaction"}
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443376422030, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 354141, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30800, "largest_seqno": 31339, "table_properties": {"data_size": 351474, "index_size": 639, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7381, "raw_average_key_size": 20, "raw_value_size": 345979, "raw_average_value_size": 958, "num_data_blocks": 29, "num_entries": 361, "num_filter_entries": 361, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769443344, "oldest_key_time": 1769443344, "file_creation_time": 1769443376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 4145 microseconds, and 1655 cpu microseconds.
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.422056) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 354141 bytes OK
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.422070) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.424271) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.424285) EVENT_LOG_v1 {"time_micros": 1769443376424280, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.424298) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 493764, prev total WAL file size 493764, number of live WAL files 2.
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.424891) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323536' seq:0, type:0; will stop at (end)
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(345KB)], [65(10MB)]
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443376424929, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10997726, "oldest_snapshot_seqno": -1}
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5614 keys, 7747642 bytes, temperature: kUnknown
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443376485046, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7747642, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7710665, "index_size": 21827, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14085, "raw_key_size": 141685, "raw_average_key_size": 25, "raw_value_size": 7610425, "raw_average_value_size": 1355, "num_data_blocks": 884, "num_entries": 5614, "num_filter_entries": 5614, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769443376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.485291) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7747642 bytes
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.487020) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.8 rd, 128.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.2 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(52.9) write-amplify(21.9) OK, records in: 6115, records dropped: 501 output_compression: NoCompression
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.487042) EVENT_LOG_v1 {"time_micros": 1769443376487031, "job": 36, "event": "compaction_finished", "compaction_time_micros": 60176, "compaction_time_cpu_micros": 27933, "output_level": 6, "num_output_files": 1, "total_output_size": 7747642, "num_input_records": 6115, "num_output_records": 5614, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443376487383, "job": 36, "event": "table_file_deletion", "file_number": 67}
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443376489579, "job": 36, "event": "table_file_deletion", "file_number": 65}
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.424811) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.489731) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.489736) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.489738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.489740) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:56 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:02:56.489741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.523 239969 INFO nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Deleting instance files /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744_del
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.524 239969 INFO nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Deletion of /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744_del complete
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.541 239969 DEBUG nova.network.neutron [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Successfully created port: 2a2de29b-29c0-439c-8236-2f566c4c89aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.598 239969 DEBUG nova.compute.manager [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Received event network-vif-unplugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.599 239969 DEBUG oslo_concurrency.lockutils [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.599 239969 DEBUG oslo_concurrency.lockutils [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.599 239969 DEBUG oslo_concurrency.lockutils [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.599 239969 DEBUG nova.compute.manager [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] No waiting events found dispatching network-vif-unplugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.599 239969 DEBUG nova.compute.manager [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Received event network-vif-unplugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.599 239969 DEBUG nova.compute.manager [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Received event network-vif-plugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.600 239969 DEBUG oslo_concurrency.lockutils [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.600 239969 DEBUG oslo_concurrency.lockutils [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.600 239969 DEBUG oslo_concurrency.lockutils [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.600 239969 DEBUG nova.compute.manager [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] No waiting events found dispatching network-vif-plugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.600 239969 WARNING nova.compute.manager [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Received unexpected event network-vif-plugged-047aa04e-5502-4cdf-ae6a-94aacc3c05ff for instance with vm_state active and task_state deleting.
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.600 239969 DEBUG nova.compute.manager [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-unplugged-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.600 239969 DEBUG oslo_concurrency.lockutils [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.601 239969 DEBUG oslo_concurrency.lockutils [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.601 239969 DEBUG oslo_concurrency.lockutils [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.601 239969 DEBUG nova.compute.manager [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] No waiting events found dispatching network-vif-unplugged-a68cd767-391c-4870-8944-dec4f54e1591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.601 239969 WARNING nova.compute.manager [req-090d142f-e197-472d-9b0a-7ed452e9ff7d req-a8bb8981-3c61-4629-a8da-f618c3df828e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received unexpected event network-vif-unplugged-a68cd767-391c-4870-8944-dec4f54e1591 for instance with vm_state active and task_state rebuilding.
Jan 26 16:02:56 compute-0 systemd[1]: run-netns-ovnmeta\x2ddb63036e\x2d9778\x2d49aa\x2d880f\x2d9b900f3c2179.mount: Deactivated successfully.
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.711 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.711 239969 INFO nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Creating image(s)
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.732 239969 DEBUG nova.storage.rbd_utils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.751 239969 DEBUG nova.storage.rbd_utils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.772 239969 DEBUG nova.storage.rbd_utils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.776 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.845 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.846 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.847 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.847 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.867 239969 DEBUG nova.storage.rbd_utils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.872 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a e264bffd-7e7a-43a6-b297-0bca74340744_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.917 239969 DEBUG nova.network.neutron [-] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.940 239969 INFO nova.compute.manager [-] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Took 0.87 seconds to deallocate network for instance.
Jan 26 16:02:56 compute-0 nova_compute[239965]: 2026-01-26 16:02:56.969 239969 DEBUG nova.compute.manager [req-1607c71b-6385-4b71-a36a-14a10037a5ab req-178e844b-7522-4139-801c-502b5af1c4a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Received event network-vif-deleted-047aa04e-5502-4cdf-ae6a-94aacc3c05ff external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.009 239969 DEBUG oslo_concurrency.lockutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.010 239969 DEBUG oslo_concurrency.lockutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.086 239969 DEBUG oslo_concurrency.processutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.156 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a e264bffd-7e7a-43a6-b297-0bca74340744_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.284s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.215 239969 DEBUG nova.storage.rbd_utils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] resizing rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.300 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.300 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Ensure instance console log exists: /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.301 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.301 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.301 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.304 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Start _get_guest_xml network_info=[{"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.309 239969 WARNING nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.314 239969 DEBUG nova.virt.libvirt.host [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.314 239969 DEBUG nova.virt.libvirt.host [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.317 239969 DEBUG nova.virt.libvirt.host [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.317 239969 DEBUG nova.virt.libvirt.host [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.318 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.318 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.318 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.319 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.319 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.319 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.319 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.319 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.319 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.320 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.320 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.320 239969 DEBUG nova.virt.hardware [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.320 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.341 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:02:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/606332872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.644 239969 DEBUG oslo_concurrency.processutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.652 239969 DEBUG nova.compute.provider_tree [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:02:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/606332872' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.679 239969 DEBUG nova.scheduler.client.report [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.711 239969 DEBUG oslo_concurrency.lockutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.743 239969 INFO nova.scheduler.client.report [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Deleted allocations for instance 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.770 239969 DEBUG nova.network.neutron [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Successfully updated port: 2a2de29b-29c0-439c-8236-2f566c4c89aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.803 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.804 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.804 239969 DEBUG nova.network.neutron [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.831 239969 DEBUG oslo_concurrency.lockutils [None req-099dbd38-1924-4fff-9577-996f7bb7f2e5 4f5fcc73d4b8415a9d00f19f2f851ab3 a88b6ae11bc04608ba445196f2238171 - - default default] Lock "446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2321234968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.923 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.943 239969 DEBUG nova.storage.rbd_utils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.948 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:02:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 227 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 336 op/s
Jan 26 16:02:57 compute-0 nova_compute[239965]: 2026-01-26 16:02:57.980 239969 DEBUG nova.network.neutron [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:02:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:02:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1772878042' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.484 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.485 239969 DEBUG nova.virt.libvirt.vif [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1731532228',display_name='tempest-ServerDiskConfigTestJSON-server-1731532228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1731532228',id=73,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:02:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-3pwxsvw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-
ServerDiskConfigTestJSON-1476979264-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:56Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=e264bffd-7e7a-43a6-b297-0bca74340744,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.485 239969 DEBUG nova.network.os_vif_util [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.486 239969 DEBUG nova.network.os_vif_util [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.489 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <uuid>e264bffd-7e7a-43a6-b297-0bca74340744</uuid>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <name>instance-00000049</name>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1731532228</nova:name>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:02:57</nova:creationTime>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <nova:user uuid="fc62be3f072e47058d3706393447ec4a">tempest-ServerDiskConfigTestJSON-1476979264-project-member</nova:user>
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <nova:project uuid="8368d1e77bbb4de59a4d3088dda0a707">tempest-ServerDiskConfigTestJSON-1476979264</nova:project>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <nova:port uuid="a68cd767-391c-4870-8944-dec4f54e1591">
Jan 26 16:02:58 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <system>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <entry name="serial">e264bffd-7e7a-43a6-b297-0bca74340744</entry>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <entry name="uuid">e264bffd-7e7a-43a6-b297-0bca74340744</entry>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     </system>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <os>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   </os>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <features>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   </features>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/e264bffd-7e7a-43a6-b297-0bca74340744_disk">
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/e264bffd-7e7a-43a6-b297-0bca74340744_disk.config">
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       </source>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:02:58 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:d1:39:2f"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <target dev="tapa68cd767-39"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/console.log" append="off"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <video>
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     </video>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:02:58 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:02:58 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:02:58 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:02:58 compute-0 nova_compute[239965]: </domain>
Jan 26 16:02:58 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.489 239969 DEBUG nova.compute.manager [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Preparing to wait for external event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.489 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.489 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.490 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.490 239969 DEBUG nova.virt.libvirt.vif [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1731532228',display_name='tempest-ServerDiskConfigTestJSON-server-1731532228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1731532228',id=73,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:02:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-3pwxsvw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDiskConfigTestJSON-1476979264-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:56Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=e264bffd-7e7a-43a6-b297-0bca74340744,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.490 239969 DEBUG nova.network.os_vif_util [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.491 239969 DEBUG nova.network.os_vif_util [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.491 239969 DEBUG os_vif [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.492 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.493 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.495 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.495 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa68cd767-39, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.496 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa68cd767-39, col_values=(('external_ids', {'iface-id': 'a68cd767-391c-4870-8944-dec4f54e1591', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:39:2f', 'vm-uuid': 'e264bffd-7e7a-43a6-b297-0bca74340744'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.497 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:58 compute-0 NetworkManager[48954]: <info>  [1769443378.4983] manager: (tapa68cd767-39): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/294)
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.500 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.501 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.502 239969 INFO os_vif [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39')
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.606 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.607 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.607 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No VIF found with MAC fa:16:3e:d1:39:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.607 239969 INFO nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Using config drive
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.633 239969 DEBUG nova.storage.rbd_utils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.651 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.679 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'keypairs' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.696 239969 DEBUG nova.compute.manager [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.696 239969 DEBUG oslo_concurrency.lockutils [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.696 239969 DEBUG oslo_concurrency.lockutils [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.696 239969 DEBUG oslo_concurrency.lockutils [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.697 239969 DEBUG nova.compute.manager [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Processing event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.697 239969 DEBUG nova.compute.manager [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-changed-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.697 239969 DEBUG nova.compute.manager [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Refreshing instance network info cache due to event network-changed-2a2de29b-29c0-439c-8236-2f566c4c89aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:02:58 compute-0 nova_compute[239965]: 2026-01-26 16:02:58.697 239969 DEBUG oslo_concurrency.lockutils [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:02:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2321234968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:58 compute-0 ceph-mon[75140]: pgmap v1501: 305 pgs: 305 active+clean; 227 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.6 MiB/s wr, 336 op/s
Jan 26 16:02:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1772878042' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:02:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:59.224 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:02:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:59.225 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:02:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:02:59.225 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:02:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 192 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.8 MiB/s wr, 389 op/s
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.015 239969 INFO nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Creating config drive at /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.022 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdh4t1x6u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.162 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdh4t1x6u" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.186 239969 DEBUG nova.storage.rbd_utils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image e264bffd-7e7a-43a6-b297-0bca74340744_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.189 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config e264bffd-7e7a-43a6-b297-0bca74340744_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.426 239969 DEBUG oslo_concurrency.processutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config e264bffd-7e7a-43a6-b297-0bca74340744_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.237s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.427 239969 INFO nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Deleting local config drive /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744/disk.config because it was imported into RBD.
Jan 26 16:03:00 compute-0 NetworkManager[48954]: <info>  [1769443380.4979] manager: (tapa68cd767-39): new Tun device (/org/freedesktop/NetworkManager/Devices/295)
Jan 26 16:03:00 compute-0 kernel: tapa68cd767-39: entered promiscuous mode
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.502 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:00 compute-0 ovn_controller[146046]: 2026-01-26T16:03:00Z|00677|binding|INFO|Claiming lport a68cd767-391c-4870-8944-dec4f54e1591 for this chassis.
Jan 26 16:03:00 compute-0 ovn_controller[146046]: 2026-01-26T16:03:00Z|00678|binding|INFO|a68cd767-391c-4870-8944-dec4f54e1591: Claiming fa:16:3e:d1:39:2f 10.100.0.13
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.511 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:39:2f 10.100.0.13'], port_security=['fa:16:3e:d1:39:2f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e264bffd-7e7a-43a6-b297-0bca74340744', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a68cd767-391c-4870-8944-dec4f54e1591) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.513 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a68cd767-391c-4870-8944-dec4f54e1591 in datapath db63036e-9778-49aa-880f-9b900f3c2179 bound to our chassis
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.515 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:00 compute-0 ovn_controller[146046]: 2026-01-26T16:03:00Z|00679|binding|INFO|Setting lport a68cd767-391c-4870-8944-dec4f54e1591 ovn-installed in OVS
Jan 26 16:03:00 compute-0 ovn_controller[146046]: 2026-01-26T16:03:00Z|00680|binding|INFO|Setting lport a68cd767-391c-4870-8944-dec4f54e1591 up in Southbound
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.524 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.528 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[069f8532-7b4d-4463-b211-813b67d23a92]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.530 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdb63036e-91 in ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.533 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdb63036e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.533 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[622b5030-3977-4f8c-ab60-3271c36c192d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 systemd-machined[208061]: New machine qemu-85-instance-00000049.
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.535 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d56e4947-b29f-4349-ab54-42d2f08d3690]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.549 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[240a979f-f34e-4c02-85c9-087a85c83dec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 systemd[1]: Started Virtual Machine qemu-85-instance-00000049.
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.566 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[54c44798-90bf-44ee-9c9d-aba7cf8b18a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 systemd-udevd[303136]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.595 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[185780b3-8b13-4fa1-afec-7ba2271df071]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 NetworkManager[48954]: <info>  [1769443380.5972] device (tapa68cd767-39): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:03:00 compute-0 NetworkManager[48954]: <info>  [1769443380.5984] device (tapa68cd767-39): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:03:00 compute-0 NetworkManager[48954]: <info>  [1769443380.6042] manager: (tapdb63036e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/296)
Jan 26 16:03:00 compute-0 systemd-udevd[303140]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.603 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b73e4ebe-915c-4b6e-a570-784fa03bc773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.637 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fcf4d2-80ec-4244-bca4-db0eb60430b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.641 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c0974b10-add1-4a16-a28b-c865dc6b09d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 NetworkManager[48954]: <info>  [1769443380.6658] device (tapdb63036e-90): carrier: link connected
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.671 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[db8b6271-0a26-4bc6-84f1-866186b6aaaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.686 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[50e82b40-76ec-4fae-b657-751026a4533d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487147, 'reachable_time': 43350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303164, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.701 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8366ef-b4f5-4123-b3c2-844753353b14]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:bb2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487147, 'tstamp': 487147}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303165, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.727 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[95821943-a1a0-4e41-9d0a-6cf5340bf573]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 205], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487147, 'reachable_time': 43350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303166, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.764 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb8a59a8-033f-4488-9a76-b486a8041a02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.830 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cd9d17a3-f5a0-4083-8513-9b05afab51e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.831 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.831 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.832 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb63036e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:00 compute-0 NetworkManager[48954]: <info>  [1769443380.8348] manager: (tapdb63036e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/297)
Jan 26 16:03:00 compute-0 kernel: tapdb63036e-90: entered promiscuous mode
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.834 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.837 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb63036e-90, col_values=(('external_ids', {'iface-id': 'c91d83e2-40e9-4d10-8b19-e75b3795dd04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.839 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:00 compute-0 ovn_controller[146046]: 2026-01-26T16:03:00Z|00681|binding|INFO|Releasing lport c91d83e2-40e9-4d10-8b19-e75b3795dd04 from this chassis (sb_readonly=0)
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.875 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.876 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.877 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3e4f47bb-1d82-40b8-b370-38c1fe7a7fc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.878 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:03:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:00.879 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'env', 'PROCESS_TAG=haproxy-db63036e-9778-49aa-880f-9b900f3c2179', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/db63036e-9778-49aa-880f-9b900f3c2179.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.995 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for e264bffd-7e7a-43a6-b297-0bca74340744 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.999 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443380.9946322, e264bffd-7e7a-43a6-b297-0bca74340744 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:00 compute-0 nova_compute[239965]: 2026-01-26 16:03:00.999 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] VM Started (Lifecycle Event)
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.001 239969 DEBUG nova.compute.manager [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.006 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.012 239969 INFO nova.virt.libvirt.driver [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance spawned successfully.
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.012 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.030 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.045 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.046 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.047 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.048 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.050 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.051 239969 DEBUG nova.virt.libvirt.driver [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.060 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.096 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.097 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443380.9955523, e264bffd-7e7a-43a6-b297-0bca74340744 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.097 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] VM Paused (Lifecycle Event)
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.113 239969 DEBUG nova.compute.manager [req-cc3d4a13-6eed-426e-a412-b3ddcac7e0d0 req-540a0795-b910-4baa-b542-8609a927e72c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.113 239969 DEBUG oslo_concurrency.lockutils [req-cc3d4a13-6eed-426e-a412-b3ddcac7e0d0 req-540a0795-b910-4baa-b542-8609a927e72c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.114 239969 DEBUG oslo_concurrency.lockutils [req-cc3d4a13-6eed-426e-a412-b3ddcac7e0d0 req-540a0795-b910-4baa-b542-8609a927e72c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.115 239969 DEBUG oslo_concurrency.lockutils [req-cc3d4a13-6eed-426e-a412-b3ddcac7e0d0 req-540a0795-b910-4baa-b542-8609a927e72c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.115 239969 DEBUG nova.compute.manager [req-cc3d4a13-6eed-426e-a412-b3ddcac7e0d0 req-540a0795-b910-4baa-b542-8609a927e72c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] No waiting events found dispatching network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.116 239969 WARNING nova.compute.manager [req-cc3d4a13-6eed-426e-a412-b3ddcac7e0d0 req-540a0795-b910-4baa-b542-8609a927e72c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received unexpected event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 for instance with vm_state active and task_state rebuild_spawning.
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.127 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.132 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443381.0054958, e264bffd-7e7a-43a6-b297-0bca74340744 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.132 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] VM Resumed (Lifecycle Event)
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.138 239969 DEBUG nova.compute.manager [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.160 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.166 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.204 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:03:01 compute-0 ceph-mon[75140]: pgmap v1502: 305 pgs: 305 active+clean; 192 MiB data, 600 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.8 MiB/s wr, 389 op/s
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.223 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.223 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.224 239969 DEBUG nova.objects.instance [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.305 239969 DEBUG oslo_concurrency.lockutils [None req-c9969d36-4f6b-44e6-abd7-5804ae8be3c7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.081s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:01 compute-0 podman[303240]: 2026-01-26 16:03:01.32217639 +0000 UTC m=+0.058705220 container create 02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 16:03:01 compute-0 systemd[1]: Started libpod-conmon-02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534.scope.
Jan 26 16:03:01 compute-0 podman[303240]: 2026-01-26 16:03:01.291396965 +0000 UTC m=+0.027925805 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:03:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b11d72a6be8ecd5e3eb8c78fe0fd02045fca4be450bca27050c1cb5b5c0478fb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:01 compute-0 podman[303240]: 2026-01-26 16:03:01.488549819 +0000 UTC m=+0.225078719 container init 02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:03:01 compute-0 podman[303240]: 2026-01-26 16:03:01.498074682 +0000 UTC m=+0.234603532 container start 02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:03:01 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[303256]: [NOTICE]   (303260) : New worker (303262) forked
Jan 26 16:03:01 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[303256]: [NOTICE]   (303260) : Loading success.
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.775 239969 DEBUG nova.network.neutron [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.796 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.796 239969 DEBUG nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance network_info: |[{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.797 239969 DEBUG oslo_concurrency.lockutils [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.797 239969 DEBUG nova.network.neutron [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Refreshing network info cache for port 2a2de29b-29c0-439c-8236-2f566c4c89aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.800 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Start _get_guest_xml network_info=[{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.806 239969 WARNING nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.813 239969 DEBUG nova.virt.libvirt.host [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.814 239969 DEBUG nova.virt.libvirt.host [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.822 239969 DEBUG nova.virt.libvirt.host [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.823 239969 DEBUG nova.virt.libvirt.host [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.823 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.823 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.824 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.824 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.824 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.824 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.825 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.825 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.825 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.825 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.825 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.826 239969 DEBUG nova.virt.hardware [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:03:01 compute-0 nova_compute[239965]: 2026-01-26 16:03:01.829 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 180 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 369 op/s
Jan 26 16:03:02 compute-0 ceph-mon[75140]: pgmap v1503: 305 pgs: 305 active+clean; 180 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 5.7 MiB/s wr, 369 op/s
Jan 26 16:03:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/777356257' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:02 compute-0 nova_compute[239965]: 2026-01-26 16:03:02.504 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:02 compute-0 nova_compute[239965]: 2026-01-26 16:03:02.546 239969 DEBUG nova.storage.rbd_utils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 492acb7a-42e1-4918-a973-352f9d8251d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:02 compute-0 nova_compute[239965]: 2026-01-26 16:03:02.553 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2382055146' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.205 239969 DEBUG nova.compute.manager [req-1bbef2b3-cd92-40db-9eb2-af192165ba01 req-c03cb831-3b07-429b-8106-a89d08927a3b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.206 239969 DEBUG oslo_concurrency.lockutils [req-1bbef2b3-cd92-40db-9eb2-af192165ba01 req-c03cb831-3b07-429b-8106-a89d08927a3b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.206 239969 DEBUG oslo_concurrency.lockutils [req-1bbef2b3-cd92-40db-9eb2-af192165ba01 req-c03cb831-3b07-429b-8106-a89d08927a3b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.207 239969 DEBUG oslo_concurrency.lockutils [req-1bbef2b3-cd92-40db-9eb2-af192165ba01 req-c03cb831-3b07-429b-8106-a89d08927a3b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.207 239969 DEBUG nova.compute.manager [req-1bbef2b3-cd92-40db-9eb2-af192165ba01 req-c03cb831-3b07-429b-8106-a89d08927a3b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] No waiting events found dispatching network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.207 239969 WARNING nova.compute.manager [req-1bbef2b3-cd92-40db-9eb2-af192165ba01 req-c03cb831-3b07-429b-8106-a89d08927a3b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received unexpected event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 for instance with vm_state active and task_state None.
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.209 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.211 239969 DEBUG nova.virt.libvirt.vif [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.211 239969 DEBUG nova.network.os_vif_util [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.212 239969 DEBUG nova.network.os_vif_util [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.215 239969 DEBUG nova.objects.instance [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.234 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <uuid>492acb7a-42e1-4918-a973-352f9d8251d0</uuid>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <name>instance-0000004c</name>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestJSON-server-1205900142</nova:name>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:03:01</nova:creationTime>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <nova:user uuid="59f8ee09903a4e0a812c3d9e013996bd">tempest-ServerActionsTestJSON-274649005-project-member</nova:user>
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <nova:project uuid="94771806cd0d4b5db117956e09fea9e6">tempest-ServerActionsTestJSON-274649005</nova:project>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <nova:port uuid="2a2de29b-29c0-439c-8236-2f566c4c89aa">
Jan 26 16:03:03 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <system>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <entry name="serial">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <entry name="uuid">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     </system>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <os>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   </os>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <features>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   </features>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk">
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk.config">
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:f8:ca:89"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <target dev="tap2a2de29b-29"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/console.log" append="off"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <video>
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     </video>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:03:03 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:03:03 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:03:03 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:03:03 compute-0 nova_compute[239965]: </domain>
Jan 26 16:03:03 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.236 239969 DEBUG nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Preparing to wait for external event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.237 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.237 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.238 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.239 239969 DEBUG nova.virt.libvirt.vif [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:02:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.240 239969 DEBUG nova.network.os_vif_util [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.241 239969 DEBUG nova.network.os_vif_util [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.241 239969 DEBUG os_vif [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.242 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/777356257' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.243 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.244 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2382055146' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.246 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.247 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a2de29b-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.247 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a2de29b-29, col_values=(('external_ids', {'iface-id': '2a2de29b-29c0-439c-8236-2f566c4c89aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:ca:89', 'vm-uuid': '492acb7a-42e1-4918-a973-352f9d8251d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.249 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:03 compute-0 NetworkManager[48954]: <info>  [1769443383.2507] manager: (tap2a2de29b-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/298)
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.252 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.257 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.259 239969 INFO os_vif [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.315 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.316 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.316 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] No VIF found with MAC fa:16:3e:f8:ca:89, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.317 239969 INFO nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Using config drive
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.346 239969 DEBUG nova.storage.rbd_utils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 492acb7a-42e1-4918-a973-352f9d8251d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.902 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443368.9017205, 725537ef-e1ac-4cc7-bd87-fe04016df207 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.905 239969 INFO nova.compute.manager [-] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] VM Stopped (Lifecycle Event)
Jan 26 16:03:03 compute-0 nova_compute[239965]: 2026-01-26 16:03:03.923 239969 DEBUG nova.compute.manager [None req-f2b04b49-f35f-4154-89c4-59be5bdb4355 - - - - - -] [instance: 725537ef-e1ac-4cc7-bd87-fe04016df207] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 180 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.7 MiB/s wr, 315 op/s
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.176 239969 DEBUG oslo_concurrency.lockutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.177 239969 DEBUG oslo_concurrency.lockutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.177 239969 DEBUG oslo_concurrency.lockutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.178 239969 DEBUG oslo_concurrency.lockutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.178 239969 DEBUG oslo_concurrency.lockutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.180 239969 INFO nova.compute.manager [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Terminating instance
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.181 239969 DEBUG nova.compute.manager [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:03:04 compute-0 kernel: tapa68cd767-39 (unregistering): left promiscuous mode
Jan 26 16:03:04 compute-0 NetworkManager[48954]: <info>  [1769443384.2172] device (tapa68cd767-39): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.218 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 ovn_controller[146046]: 2026-01-26T16:03:04Z|00682|binding|INFO|Releasing lport a68cd767-391c-4870-8944-dec4f54e1591 from this chassis (sb_readonly=0)
Jan 26 16:03:04 compute-0 ovn_controller[146046]: 2026-01-26T16:03:04Z|00683|binding|INFO|Setting lport a68cd767-391c-4870-8944-dec4f54e1591 down in Southbound
Jan 26 16:03:04 compute-0 ovn_controller[146046]: 2026-01-26T16:03:04Z|00684|binding|INFO|Removing iface tapa68cd767-39 ovn-installed in OVS
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.234 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:39:2f 10.100.0.13'], port_security=['fa:16:3e:d1:39:2f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e264bffd-7e7a-43a6-b297-0bca74340744', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a68cd767-391c-4870-8944-dec4f54e1591) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.236 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a68cd767-391c-4870-8944-dec4f54e1591 in datapath db63036e-9778-49aa-880f-9b900f3c2179 unbound from our chassis
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.238 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db63036e-9778-49aa-880f-9b900f3c2179, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.239 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a38206-43e0-477c-8e2e-a049df4cf961]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.240 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace which is not needed anymore
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 ceph-mon[75140]: pgmap v1504: 305 pgs: 305 active+clean; 180 MiB data, 607 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.7 MiB/s wr, 315 op/s
Jan 26 16:03:04 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000049.scope: Deactivated successfully.
Jan 26 16:03:04 compute-0 systemd[1]: machine-qemu\x2d85\x2dinstance\x2d00000049.scope: Consumed 3.622s CPU time.
Jan 26 16:03:04 compute-0 systemd-machined[208061]: Machine qemu-85-instance-00000049 terminated.
Jan 26 16:03:04 compute-0 podman[303355]: 2026-01-26 16:03:04.29230559 +0000 UTC m=+0.054871366 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.316 239969 INFO nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Creating config drive at /var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/disk.config
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.321 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgf_bz7vb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:04 compute-0 ovn_controller[146046]: 2026-01-26T16:03:04Z|00685|binding|INFO|Releasing lport c91d83e2-40e9-4d10-8b19-e75b3795dd04 from this chassis (sb_readonly=0)
Jan 26 16:03:04 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[303256]: [NOTICE]   (303260) : haproxy version is 2.8.14-c23fe91
Jan 26 16:03:04 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[303256]: [NOTICE]   (303260) : path to executable is /usr/sbin/haproxy
Jan 26 16:03:04 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[303256]: [WARNING]  (303260) : Exiting Master process...
Jan 26 16:03:04 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[303256]: [ALERT]    (303260) : Current worker (303262) exited with code 143 (Terminated)
Jan 26 16:03:04 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[303256]: [WARNING]  (303260) : All workers exited. Exiting... (0)
Jan 26 16:03:04 compute-0 systemd[1]: libpod-02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534.scope: Deactivated successfully.
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.388 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 podman[303397]: 2026-01-26 16:03:04.396643748 +0000 UTC m=+0.071398171 container died 02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.428 239969 INFO nova.virt.libvirt.driver [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Instance destroyed successfully.
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.429 239969 DEBUG nova.objects.instance [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'resources' on Instance uuid e264bffd-7e7a-43a6-b297-0bca74340744 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534-userdata-shm.mount: Deactivated successfully.
Jan 26 16:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b11d72a6be8ecd5e3eb8c78fe0fd02045fca4be450bca27050c1cb5b5c0478fb-merged.mount: Deactivated successfully.
Jan 26 16:03:04 compute-0 podman[303397]: 2026-01-26 16:03:04.460919964 +0000 UTC m=+0.135674387 container cleanup 02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.460 239969 DEBUG nova.virt.libvirt.vif [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:02:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1731532228',display_name='tempest-ServerDiskConfigTestJSON-server-1731532228',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1731532228',id=73,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-3pwxsvw9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDiskConfigTestJSON-1476979264-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:03:01Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=e264bffd-7e7a-43a6-b297-0bca74340744,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.461 239969 DEBUG nova.network.os_vif_util [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "a68cd767-391c-4870-8944-dec4f54e1591", "address": "fa:16:3e:d1:39:2f", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa68cd767-39", "ovs_interfaceid": "a68cd767-391c-4870-8944-dec4f54e1591", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.462 239969 DEBUG nova.network.os_vif_util [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.463 239969 DEBUG os_vif [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.465 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.465 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa68cd767-39, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.466 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgf_bz7vb" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:04 compute-0 systemd[1]: libpod-conmon-02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534.scope: Deactivated successfully.
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.493 239969 DEBUG nova.storage.rbd_utils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 492acb7a-42e1-4918-a973-352f9d8251d0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.501 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/disk.config 492acb7a-42e1-4918-a973-352f9d8251d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.545 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:04 compute-0 podman[303441]: 2026-01-26 16:03:04.558002504 +0000 UTC m=+0.072817637 container remove 02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.560 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b12cee70-7862-4ca5-8b8e-8acf81c97273]: (4, ('Mon Jan 26 04:03:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534)\n02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534\nMon Jan 26 04:03:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534)\n02c49a311484c2e0d4619366fa72dbb9423f5fa7faea912be49b1b76ddfa0534\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.562 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6b128638-e7ef-4ede-be0a-4fa7e9a755ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.563 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.606 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 kernel: tapdb63036e-90: left promiscuous mode
Jan 26 16:03:04 compute-0 podman[303449]: 2026-01-26 16:03:04.686850842 +0000 UTC m=+0.176183440 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.690 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.693 239969 INFO os_vif [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:39:2f,bridge_name='br-int',has_traffic_filtering=True,id=a68cd767-391c-4870-8944-dec4f54e1591,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa68cd767-39')
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.695 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d4286059-377d-484b-baef-9b961fda666c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.711 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[64ea565a-2872-479c-837d-c1293db11a7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.712 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b9ec1323-a534-4d6e-8cad-6cb60e7ec6aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.724 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.727 239969 DEBUG nova.network.neutron [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updated VIF entry in instance network info cache for port 2a2de29b-29c0-439c-8236-2f566c4c89aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.728 239969 DEBUG nova.network.neutron [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.733 239969 DEBUG oslo_concurrency.processutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/disk.config 492acb7a-42e1-4918-a973-352f9d8251d0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.232s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.735 239969 INFO nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Deleting local config drive /var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/disk.config because it was imported into RBD.
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.735 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1d24e1c4-fa27-40b7-9e81-75329a09e7a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487139, 'reachable_time': 36746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303533, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.736 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.739 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.739 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[34644884-bc54-4312-b6dd-b3534c060b13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 systemd[1]: run-netns-ovnmeta\x2ddb63036e\x2d9778\x2d49aa\x2d880f\x2d9b900f3c2179.mount: Deactivated successfully.
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.761 239969 DEBUG oslo_concurrency.lockutils [req-baaf2516-d126-4083-a324-258995426c24 req-398acdf7-cfac-4116-acd9-1fd998641d0c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:04 compute-0 kernel: tap2a2de29b-29: entered promiscuous mode
Jan 26 16:03:04 compute-0 NetworkManager[48954]: <info>  [1769443384.8186] manager: (tap2a2de29b-29): new Tun device (/org/freedesktop/NetworkManager/Devices/299)
Jan 26 16:03:04 compute-0 systemd-udevd[303364]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:03:04 compute-0 ovn_controller[146046]: 2026-01-26T16:03:04Z|00686|binding|INFO|Claiming lport 2a2de29b-29c0-439c-8236-2f566c4c89aa for this chassis.
Jan 26 16:03:04 compute-0 ovn_controller[146046]: 2026-01-26T16:03:04Z|00687|binding|INFO|2a2de29b-29c0-439c-8236-2f566c4c89aa: Claiming fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.821 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.822 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.829 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.831 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d bound to our chassis
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.832 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:03:04 compute-0 NetworkManager[48954]: <info>  [1769443384.8347] device (tap2a2de29b-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:03:04 compute-0 NetworkManager[48954]: <info>  [1769443384.8354] device (tap2a2de29b-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.845 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[66fde49b-455f-4b6b-a02b-2af1ad0e8e45]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.847 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84c1ad93-11 in ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.849 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84c1ad93-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.849 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[01d9e196-9162-4d76-b3e3-82b041f507bc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.850 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d77cd7e9-1fe3-4dec-924b-fac4a05b96df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 systemd-machined[208061]: New machine qemu-86-instance-0000004c.
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.862 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[343ac1fd-031b-4bdd-85f1-267fd9e3e4ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 systemd[1]: Started Virtual Machine qemu-86-instance-0000004c.
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.889 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[149ef24c-08ed-49fa-be67-dcb3e9df24fb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.895 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 ovn_controller[146046]: 2026-01-26T16:03:04Z|00688|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa ovn-installed in OVS
Jan 26 16:03:04 compute-0 ovn_controller[146046]: 2026-01-26T16:03:04Z|00689|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa up in Southbound
Jan 26 16:03:04 compute-0 nova_compute[239965]: 2026-01-26 16:03:04.902 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.933 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1afadfb5-9157-45ba-92b0-0ef0445b057f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 NetworkManager[48954]: <info>  [1769443384.9417] manager: (tap84c1ad93-10): new Veth device (/org/freedesktop/NetworkManager/Devices/300)
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.940 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[efbd3672-a087-4c55-b966-36303cdfa39c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.978 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c61d29fa-1011-4cc2-a4a3-260e39211930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:04.981 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0d7039ab-9d9e-40c5-b7d0-865c75e07c7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:05 compute-0 NetworkManager[48954]: <info>  [1769443385.0058] device (tap84c1ad93-10): carrier: link connected
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.014 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d7d14904-4107-45bc-9beb-e62b4c9d4769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.037 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e8883ae7-ec1c-4990-9a97-d502769b41c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487581, 'reachable_time': 19008, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 303582, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.061 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0414607a-3e0f-425b-b8a6-3e862377d913]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:ad50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487581, 'tstamp': 487581}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 303583, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.082 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11d4abf1-d1a8-469d-8e50-ade2a82e45aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 208], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487581, 'reachable_time': 19008, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 303584, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.124 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[21857370-5339-4644-8949-139981213c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.194 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b7e152b1-a162-4d69-9693-e7518e97bf3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.195 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.195 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.195 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:05 compute-0 kernel: tap84c1ad93-10: entered promiscuous mode
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.197 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:05 compute-0 NetworkManager[48954]: <info>  [1769443385.1978] manager: (tap84c1ad93-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/301)
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.206 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.208 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:05 compute-0 ovn_controller[146046]: 2026-01-26T16:03:05Z|00690|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.212 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.213 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4e11a30d-d1f0-4504-9eb2-81414a283812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.214 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:03:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:05.214 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'env', 'PROCESS_TAG=haproxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.272 239969 INFO nova.virt.libvirt.driver [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Deleting instance files /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744_del
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.273 239969 INFO nova.virt.libvirt.driver [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Deletion of /var/lib/nova/instances/e264bffd-7e7a-43a6-b297-0bca74340744_del complete
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.279 239969 DEBUG nova.compute.manager [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-unplugged-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.280 239969 DEBUG oslo_concurrency.lockutils [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.280 239969 DEBUG oslo_concurrency.lockutils [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.280 239969 DEBUG oslo_concurrency.lockutils [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.281 239969 DEBUG nova.compute.manager [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] No waiting events found dispatching network-vif-unplugged-a68cd767-391c-4870-8944-dec4f54e1591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.281 239969 DEBUG nova.compute.manager [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-unplugged-a68cd767-391c-4870-8944-dec4f54e1591 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.281 239969 DEBUG nova.compute.manager [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.281 239969 DEBUG oslo_concurrency.lockutils [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.282 239969 DEBUG oslo_concurrency.lockutils [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.282 239969 DEBUG oslo_concurrency.lockutils [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.282 239969 DEBUG nova.compute.manager [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] No waiting events found dispatching network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.282 239969 WARNING nova.compute.manager [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received unexpected event network-vif-plugged-a68cd767-391c-4870-8944-dec4f54e1591 for instance with vm_state active and task_state deleting.
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.283 239969 DEBUG nova.compute.manager [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.283 239969 DEBUG oslo_concurrency.lockutils [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.283 239969 DEBUG oslo_concurrency.lockutils [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.283 239969 DEBUG oslo_concurrency.lockutils [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.284 239969 DEBUG nova.compute.manager [req-a5627061-6da0-435b-ad81-19337289bd20 req-2fe5d124-9036-431e-8581-95971989a104 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Processing event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.345 239969 INFO nova.compute.manager [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Took 1.16 seconds to destroy the instance on the hypervisor.
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.346 239969 DEBUG oslo.service.loopingcall [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.346 239969 DEBUG nova.compute.manager [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.346 239969 DEBUG nova.network.neutron [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.540 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443385.5401003, 492acb7a-42e1-4918-a973-352f9d8251d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.542 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Started (Lifecycle Event)
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.544 239969 DEBUG nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.549 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.553 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance spawned successfully.
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.553 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.560 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.563 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.573 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.573 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.574 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.574 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.575 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.575 239969 DEBUG nova.virt.libvirt.driver [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.581 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.581 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443385.5402317, 492acb7a-42e1-4918-a973-352f9d8251d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.582 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Paused (Lifecycle Event)
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.607 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.611 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443385.5481427, 492acb7a-42e1-4918-a973-352f9d8251d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.611 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Resumed (Lifecycle Event)
Jan 26 16:03:05 compute-0 podman[303658]: 2026-01-26 16:03:05.612107905 +0000 UTC m=+0.053418140 container create 53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.634 239969 INFO nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Took 10.41 seconds to spawn the instance on the hypervisor.
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.635 239969 DEBUG nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.636 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.643 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:05 compute-0 systemd[1]: Started libpod-conmon-53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f.scope.
Jan 26 16:03:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.679 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:03:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abb55a47e0b1ec3f3667d445598dcf5cfa16fa06d7754041fd6a36d850bc6a58/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:05 compute-0 podman[303658]: 2026-01-26 16:03:05.587514622 +0000 UTC m=+0.028824867 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:03:05 compute-0 podman[303658]: 2026-01-26 16:03:05.695838848 +0000 UTC m=+0.137149163 container init 53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 26 16:03:05 compute-0 podman[303658]: 2026-01-26 16:03:05.701537718 +0000 UTC m=+0.142847983 container start 53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.702 239969 INFO nova.compute.manager [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Took 11.67 seconds to build instance.
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.723 239969 DEBUG oslo_concurrency.lockutils [None req-fcfe724e-c6ac-4196-a476-f99e809ed53b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:05 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[303673]: [NOTICE]   (303677) : New worker (303679) forked
Jan 26 16:03:05 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[303673]: [NOTICE]   (303677) : Loading success.
Jan 26 16:03:05 compute-0 nova_compute[239965]: 2026-01-26 16:03:05.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 163 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 406 op/s
Jan 26 16:03:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:06 compute-0 nova_compute[239965]: 2026-01-26 16:03:06.718 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443371.7172575, fbc49a44-5efe-4d87-aea7-b26f7232c224 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:06 compute-0 nova_compute[239965]: 2026-01-26 16:03:06.719 239969 INFO nova.compute.manager [-] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] VM Stopped (Lifecycle Event)
Jan 26 16:03:06 compute-0 nova_compute[239965]: 2026-01-26 16:03:06.736 239969 DEBUG nova.compute.manager [None req-4ab1dca6-6466-44e2-9932-9107032b8efd - - - - - -] [instance: fbc49a44-5efe-4d87-aea7-b26f7232c224] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:07 compute-0 ceph-mon[75140]: pgmap v1505: 305 pgs: 305 active+clean; 163 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.7 MiB/s wr, 406 op/s
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.358 239969 DEBUG nova.compute.manager [req-6d48aa8d-ca8f-4eb6-a446-bed7dcaa7d33 req-1b3205da-bfe4-484f-b47f-b0844990217c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.359 239969 DEBUG oslo_concurrency.lockutils [req-6d48aa8d-ca8f-4eb6-a446-bed7dcaa7d33 req-1b3205da-bfe4-484f-b47f-b0844990217c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.359 239969 DEBUG oslo_concurrency.lockutils [req-6d48aa8d-ca8f-4eb6-a446-bed7dcaa7d33 req-1b3205da-bfe4-484f-b47f-b0844990217c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.359 239969 DEBUG oslo_concurrency.lockutils [req-6d48aa8d-ca8f-4eb6-a446-bed7dcaa7d33 req-1b3205da-bfe4-484f-b47f-b0844990217c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.360 239969 DEBUG nova.compute.manager [req-6d48aa8d-ca8f-4eb6-a446-bed7dcaa7d33 req-1b3205da-bfe4-484f-b47f-b0844990217c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.360 239969 WARNING nova.compute.manager [req-6d48aa8d-ca8f-4eb6-a446-bed7dcaa7d33 req-1b3205da-bfe4-484f-b47f-b0844990217c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.562 239969 DEBUG nova.network.neutron [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.583 239969 INFO nova.compute.manager [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Took 2.24 seconds to deallocate network for instance.
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.641 239969 DEBUG oslo_concurrency.lockutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.642 239969 DEBUG oslo_concurrency.lockutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.715 239969 DEBUG oslo_concurrency.processutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:07 compute-0 nova_compute[239965]: 2026-01-26 16:03:07.757 239969 DEBUG nova.compute.manager [req-9a88ad51-5913-438e-bcb6-c9d232450701 req-da8a7f97-28bd-4add-82b8-32529af02f6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Received event network-vif-deleted-a68cd767-391c-4870-8944-dec4f54e1591 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 163 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 188 op/s
Jan 26 16:03:08 compute-0 nova_compute[239965]: 2026-01-26 16:03:08.183 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:08 compute-0 NetworkManager[48954]: <info>  [1769443388.1836] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/302)
Jan 26 16:03:08 compute-0 NetworkManager[48954]: <info>  [1769443388.1846] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/303)
Jan 26 16:03:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:03:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/868002972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:08 compute-0 nova_compute[239965]: 2026-01-26 16:03:08.310 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:08 compute-0 ovn_controller[146046]: 2026-01-26T16:03:08Z|00691|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:03:08 compute-0 nova_compute[239965]: 2026-01-26 16:03:08.318 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:08 compute-0 nova_compute[239965]: 2026-01-26 16:03:08.323 239969 DEBUG oslo_concurrency.processutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:08 compute-0 nova_compute[239965]: 2026-01-26 16:03:08.330 239969 DEBUG nova.compute.provider_tree [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:03:08 compute-0 nova_compute[239965]: 2026-01-26 16:03:08.351 239969 DEBUG nova.scheduler.client.report [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:03:08 compute-0 nova_compute[239965]: 2026-01-26 16:03:08.393 239969 DEBUG oslo_concurrency.lockutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:08 compute-0 nova_compute[239965]: 2026-01-26 16:03:08.420 239969 INFO nova.scheduler.client.report [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Deleted allocations for instance e264bffd-7e7a-43a6-b297-0bca74340744
Jan 26 16:03:09 compute-0 ceph-mon[75140]: pgmap v1506: 305 pgs: 305 active+clean; 163 MiB data, 569 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.2 MiB/s wr, 188 op/s
Jan 26 16:03:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/868002972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.132 239969 DEBUG oslo_concurrency.lockutils [None req-b9ed89a3-a3e8-483e-80dd-1f87a088edc7 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "e264bffd-7e7a-43a6-b297-0bca74340744" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.195 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.196 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.220 239969 DEBUG nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.297 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.298 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.306 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.307 239969 INFO nova.compute.claims [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.458 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:09 compute-0 nova_compute[239965]: 2026-01-26 16:03:09.489 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 134 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 257 op/s
Jan 26 16:03:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:03:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3453502527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.038 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.045 239969 DEBUG nova.compute.provider_tree [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:03:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3453502527' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.054 239969 DEBUG nova.compute.manager [req-2c37d605-4eac-4e18-bc9e-541f71db60ab req-4b87addf-e601-4ada-91f8-d59bbedd21fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-changed-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.055 239969 DEBUG nova.compute.manager [req-2c37d605-4eac-4e18-bc9e-541f71db60ab req-4b87addf-e601-4ada-91f8-d59bbedd21fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Refreshing instance network info cache due to event network-changed-2a2de29b-29c0-439c-8236-2f566c4c89aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.056 239969 DEBUG oslo_concurrency.lockutils [req-2c37d605-4eac-4e18-bc9e-541f71db60ab req-4b87addf-e601-4ada-91f8-d59bbedd21fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.056 239969 DEBUG oslo_concurrency.lockutils [req-2c37d605-4eac-4e18-bc9e-541f71db60ab req-4b87addf-e601-4ada-91f8-d59bbedd21fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.056 239969 DEBUG nova.network.neutron [req-2c37d605-4eac-4e18-bc9e-541f71db60ab req-4b87addf-e601-4ada-91f8-d59bbedd21fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Refreshing network info cache for port 2a2de29b-29c0-439c-8236-2f566c4c89aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.063 239969 DEBUG nova.scheduler.client.report [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.113 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.114 239969 DEBUG nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.169 239969 DEBUG nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.171 239969 DEBUG nova.network.neutron [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.193 239969 INFO nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.232 239969 DEBUG nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.364 239969 DEBUG nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.366 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.367 239969 INFO nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Creating image(s)
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.394 239969 DEBUG nova.storage.rbd_utils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.427 239969 DEBUG nova.storage.rbd_utils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.452 239969 DEBUG nova.storage.rbd_utils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.456 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.530 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.532 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.532 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.533 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.554 239969 DEBUG nova.storage.rbd_utils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.559 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.628 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443375.626902, 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.629 239969 INFO nova.compute.manager [-] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] VM Stopped (Lifecycle Event)
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.663 239969 DEBUG nova.compute.manager [None req-499cd93d-6caa-4a49-a588-06ae8f42a59a - - - - - -] [instance: 446cb5d0-c6c5-4128-84ba-1ad1e7b00ccd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.793 239969 DEBUG nova.policy [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fc62be3f072e47058d3706393447ec4a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.857 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.885 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.918 239969 DEBUG nova.storage.rbd_utils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] resizing rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:03:10 compute-0 nova_compute[239965]: 2026-01-26 16:03:10.993 239969 DEBUG nova.objects.instance [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'migration_context' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:11 compute-0 nova_compute[239965]: 2026-01-26 16:03:11.013 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:03:11 compute-0 nova_compute[239965]: 2026-01-26 16:03:11.014 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Ensure instance console log exists: /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:03:11 compute-0 nova_compute[239965]: 2026-01-26 16:03:11.014 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:11 compute-0 nova_compute[239965]: 2026-01-26 16:03:11.014 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:11 compute-0 nova_compute[239965]: 2026-01-26 16:03:11.015 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:11 compute-0 ceph-mon[75140]: pgmap v1507: 305 pgs: 305 active+clean; 134 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.2 MiB/s wr, 257 op/s
Jan 26 16:03:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 141 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 215 op/s
Jan 26 16:03:12 compute-0 nova_compute[239965]: 2026-01-26 16:03:12.068 239969 DEBUG nova.network.neutron [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Successfully created port: fed98a41-4eee-4ff2-8fc1-4fa7a278799f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:03:12 compute-0 nova_compute[239965]: 2026-01-26 16:03:12.949 239969 DEBUG nova.network.neutron [req-2c37d605-4eac-4e18-bc9e-541f71db60ab req-4b87addf-e601-4ada-91f8-d59bbedd21fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updated VIF entry in instance network info cache for port 2a2de29b-29c0-439c-8236-2f566c4c89aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:03:12 compute-0 nova_compute[239965]: 2026-01-26 16:03:12.950 239969 DEBUG nova.network.neutron [req-2c37d605-4eac-4e18-bc9e-541f71db60ab req-4b87addf-e601-4ada-91f8-d59bbedd21fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:12 compute-0 nova_compute[239965]: 2026-01-26 16:03:12.971 239969 DEBUG oslo_concurrency.lockutils [req-2c37d605-4eac-4e18-bc9e-541f71db60ab req-4b87addf-e601-4ada-91f8-d59bbedd21fb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:13 compute-0 ceph-mon[75140]: pgmap v1508: 305 pgs: 305 active+clean; 141 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.1 MiB/s wr, 215 op/s
Jan 26 16:03:13 compute-0 nova_compute[239965]: 2026-01-26 16:03:13.753 239969 DEBUG nova.network.neutron [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Successfully updated port: fed98a41-4eee-4ff2-8fc1-4fa7a278799f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:03:13 compute-0 nova_compute[239965]: 2026-01-26 16:03:13.770 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "refresh_cache-7658ad77-dd99-4ff7-b88f-287f38b8344a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:13 compute-0 nova_compute[239965]: 2026-01-26 16:03:13.770 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquired lock "refresh_cache-7658ad77-dd99-4ff7-b88f-287f38b8344a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:13 compute-0 nova_compute[239965]: 2026-01-26 16:03:13.771 239969 DEBUG nova.network.neutron [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:03:13 compute-0 nova_compute[239965]: 2026-01-26 16:03:13.867 239969 DEBUG nova.compute.manager [req-e19d2423-9b58-420f-b0d7-fc7fff6f7e3c req-27dccd50-bdff-4ca1-8907-169208156f7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-changed-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:13 compute-0 nova_compute[239965]: 2026-01-26 16:03:13.868 239969 DEBUG nova.compute.manager [req-e19d2423-9b58-420f-b0d7-fc7fff6f7e3c req-27dccd50-bdff-4ca1-8907-169208156f7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Refreshing instance network info cache due to event network-changed-fed98a41-4eee-4ff2-8fc1-4fa7a278799f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:03:13 compute-0 nova_compute[239965]: 2026-01-26 16:03:13.868 239969 DEBUG oslo_concurrency.lockutils [req-e19d2423-9b58-420f-b0d7-fc7fff6f7e3c req-27dccd50-bdff-4ca1-8907-169208156f7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7658ad77-dd99-4ff7-b88f-287f38b8344a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 141 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 214 KiB/s wr, 171 op/s
Jan 26 16:03:14 compute-0 nova_compute[239965]: 2026-01-26 16:03:14.040 239969 DEBUG nova.network.neutron [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:03:14 compute-0 nova_compute[239965]: 2026-01-26 16:03:14.493 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:15 compute-0 ceph-mon[75140]: pgmap v1509: 305 pgs: 305 active+clean; 141 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 214 KiB/s wr, 171 op/s
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.813 239969 DEBUG nova.network.neutron [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Updating instance_info_cache with network_info: [{"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.836 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Releasing lock "refresh_cache-7658ad77-dd99-4ff7-b88f-287f38b8344a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.837 239969 DEBUG nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance network_info: |[{"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.839 239969 DEBUG oslo_concurrency.lockutils [req-e19d2423-9b58-420f-b0d7-fc7fff6f7e3c req-27dccd50-bdff-4ca1-8907-169208156f7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7658ad77-dd99-4ff7-b88f-287f38b8344a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.840 239969 DEBUG nova.network.neutron [req-e19d2423-9b58-420f-b0d7-fc7fff6f7e3c req-27dccd50-bdff-4ca1-8907-169208156f7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Refreshing network info cache for port fed98a41-4eee-4ff2-8fc1-4fa7a278799f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.843 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Start _get_guest_xml network_info=[{"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.849 239969 WARNING nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.861 239969 DEBUG nova.virt.libvirt.host [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.863 239969 DEBUG nova.virt.libvirt.host [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.869 239969 DEBUG nova.virt.libvirt.host [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.870 239969 DEBUG nova.virt.libvirt.host [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.872 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.872 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.873 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.873 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.874 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.874 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.874 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.874 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.875 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.875 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.876 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.876 239969 DEBUG nova.virt.hardware [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.879 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:15 compute-0 nova_compute[239965]: 2026-01-26 16:03:15.915 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 180 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Jan 26 16:03:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:16 compute-0 nova_compute[239965]: 2026-01-26 16:03:16.422 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2709611310' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:16 compute-0 nova_compute[239965]: 2026-01-26 16:03:16.456 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:16 compute-0 nova_compute[239965]: 2026-01-26 16:03:16.482 239969 DEBUG nova.storage.rbd_utils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:16 compute-0 nova_compute[239965]: 2026-01-26 16:03:16.487 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3031003709' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:17 compute-0 ceph-mon[75140]: pgmap v1510: 305 pgs: 305 active+clean; 180 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Jan 26 16:03:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2709611310' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3031003709' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.125 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.127 239969 DEBUG nova.virt.libvirt.vif [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-326782750',display_name='tempest-ServerDiskConfigTestJSON-server-326782750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-326782750',id=77,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-xijxih41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDiskConfigTestJSON-1476979264-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:03:10Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=7658ad77-dd99-4ff7-b88f-287f38b8344a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.128 239969 DEBUG nova.network.os_vif_util [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.129 239969 DEBUG nova.network.os_vif_util [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.131 239969 DEBUG nova.objects.instance [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.155 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <uuid>7658ad77-dd99-4ff7-b88f-287f38b8344a</uuid>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <name>instance-0000004d</name>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-326782750</nova:name>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:03:15</nova:creationTime>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <nova:user uuid="fc62be3f072e47058d3706393447ec4a">tempest-ServerDiskConfigTestJSON-1476979264-project-member</nova:user>
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <nova:project uuid="8368d1e77bbb4de59a4d3088dda0a707">tempest-ServerDiskConfigTestJSON-1476979264</nova:project>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <nova:port uuid="fed98a41-4eee-4ff2-8fc1-4fa7a278799f">
Jan 26 16:03:17 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <system>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <entry name="serial">7658ad77-dd99-4ff7-b88f-287f38b8344a</entry>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <entry name="uuid">7658ad77-dd99-4ff7-b88f-287f38b8344a</entry>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     </system>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <os>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   </os>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <features>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   </features>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7658ad77-dd99-4ff7-b88f-287f38b8344a_disk">
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config">
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:17 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:42:49:72"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <target dev="tapfed98a41-4e"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/console.log" append="off"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <video>
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     </video>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:03:17 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:03:17 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:03:17 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:03:17 compute-0 nova_compute[239965]: </domain>
Jan 26 16:03:17 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.156 239969 DEBUG nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Preparing to wait for external event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.157 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.158 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.158 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.160 239969 DEBUG nova.virt.libvirt.vif [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-326782750',display_name='tempest-ServerDiskConfigTestJSON-server-326782750',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-326782750',id=77,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-xijxih41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDiskConfigTestJSON-1476979264-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:03:10Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=7658ad77-dd99-4ff7-b88f-287f38b8344a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.160 239969 DEBUG nova.network.os_vif_util [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.161 239969 DEBUG nova.network.os_vif_util [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.162 239969 DEBUG os_vif [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.163 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.164 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.165 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.169 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.170 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfed98a41-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.171 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfed98a41-4e, col_values=(('external_ids', {'iface-id': 'fed98a41-4eee-4ff2-8fc1-4fa7a278799f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:49:72', 'vm-uuid': '7658ad77-dd99-4ff7-b88f-287f38b8344a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.173 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:17 compute-0 NetworkManager[48954]: <info>  [1769443397.1748] manager: (tapfed98a41-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/304)
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.182 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.183 239969 INFO os_vif [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e')
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.259 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.260 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.260 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No VIF found with MAC fa:16:3e:42:49:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.261 239969 INFO nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Using config drive
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.292 239969 DEBUG nova.storage.rbd_utils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.722 239969 DEBUG nova.network.neutron [req-e19d2423-9b58-420f-b0d7-fc7fff6f7e3c req-27dccd50-bdff-4ca1-8907-169208156f7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Updated VIF entry in instance network info cache for port fed98a41-4eee-4ff2-8fc1-4fa7a278799f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.723 239969 DEBUG nova.network.neutron [req-e19d2423-9b58-420f-b0d7-fc7fff6f7e3c req-27dccd50-bdff-4ca1-8907-169208156f7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Updating instance_info_cache with network_info: [{"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.740 239969 DEBUG oslo_concurrency.lockutils [req-e19d2423-9b58-420f-b0d7-fc7fff6f7e3c req-27dccd50-bdff-4ca1-8907-169208156f7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7658ad77-dd99-4ff7-b88f-287f38b8344a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.912 239969 INFO nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Creating config drive at /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config
Jan 26 16:03:17 compute-0 nova_compute[239965]: 2026-01-26 16:03:17.918 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprvq8kx_l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 180 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.066 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprvq8kx_l" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.096 239969 DEBUG nova.storage.rbd_utils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.101 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.260 239969 DEBUG oslo_concurrency.processutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.262 239969 INFO nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Deleting local config drive /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config because it was imported into RBD.
Jan 26 16:03:18 compute-0 NetworkManager[48954]: <info>  [1769443398.3232] manager: (tapfed98a41-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/305)
Jan 26 16:03:18 compute-0 kernel: tapfed98a41-4e: entered promiscuous mode
Jan 26 16:03:18 compute-0 ovn_controller[146046]: 2026-01-26T16:03:18Z|00692|binding|INFO|Claiming lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f for this chassis.
Jan 26 16:03:18 compute-0 ovn_controller[146046]: 2026-01-26T16:03:18Z|00693|binding|INFO|fed98a41-4eee-4ff2-8fc1-4fa7a278799f: Claiming fa:16:3e:42:49:72 10.100.0.6
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.331 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.345 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:49:72 10.100.0.6'], port_security=['fa:16:3e:42:49:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7658ad77-dd99-4ff7-b88f-287f38b8344a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=fed98a41-4eee-4ff2-8fc1-4fa7a278799f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.347 156105 INFO neutron.agent.ovn.metadata.agent [-] Port fed98a41-4eee-4ff2-8fc1-4fa7a278799f in datapath db63036e-9778-49aa-880f-9b900f3c2179 bound to our chassis
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.349 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:18 compute-0 ovn_controller[146046]: 2026-01-26T16:03:18Z|00694|binding|INFO|Setting lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f ovn-installed in OVS
Jan 26 16:03:18 compute-0 ovn_controller[146046]: 2026-01-26T16:03:18Z|00695|binding|INFO|Setting lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f up in Southbound
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.356 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.366 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0c89cb20-cea0-45a0-a9cf-ff105d100edf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.367 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdb63036e-91 in ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.370 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdb63036e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.370 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4357d1-4c4b-4dc5-9d5d-eec8334bf6d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.372 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[74ee8af6-6c36-488f-bc6e-8d43b5eefdd9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 systemd-machined[208061]: New machine qemu-87-instance-0000004d.
Jan 26 16:03:18 compute-0 systemd-udevd[304037]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.390 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f7fd97-9a58-4c57-a2dc-0d4654c16634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 NetworkManager[48954]: <info>  [1769443398.3987] device (tapfed98a41-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:03:18 compute-0 NetworkManager[48954]: <info>  [1769443398.3996] device (tapfed98a41-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:03:18 compute-0 systemd[1]: Started Virtual Machine qemu-87-instance-0000004d.
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.409 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[554c28d9-bb5a-4bae-883a-b8c8eaf2326b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_controller[146046]: 2026-01-26T16:03:18Z|00081|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:03:18 compute-0 ovn_controller[146046]: 2026-01-26T16:03:18Z|00082|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.447 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e9f0fd87-f7f8-446b-a1aa-fda10fa619b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 NetworkManager[48954]: <info>  [1769443398.4560] manager: (tapdb63036e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/306)
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.455 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[505c13f0-924a-44ef-a946-48af5e2b4d80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.496 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c998864d-40c7-437f-8d82-4701b71cfc08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.499 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[51be5f1a-2ca4-4bd3-a27a-ef087318dc08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 NetworkManager[48954]: <info>  [1769443398.5288] device (tapdb63036e-90): carrier: link connected
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.539 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b9cdec1b-f8be-4268-9eeb-b171a6adab56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.560 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[66a6487b-1802-4bf4-831b-d370c6684580]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488933, 'reachable_time': 19844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304068, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.581 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f54c793a-0288-4417-970a-8a96076a5fec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:bb2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 488933, 'tstamp': 488933}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 304069, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.608 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e8fbbe-4d8c-4f27-aead-dfbc369d9bbb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 210], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488933, 'reachable_time': 19844, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 304070, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.662 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fc527f79-fc10-4128-aae8-98a3f590207d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.781 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2c49f054-02d0-4bab-b7c4-880590621f2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.782 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.782 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.783 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb63036e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:18 compute-0 NetworkManager[48954]: <info>  [1769443398.7862] manager: (tapdb63036e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/307)
Jan 26 16:03:18 compute-0 kernel: tapdb63036e-90: entered promiscuous mode
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.787 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.789 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb63036e-90, col_values=(('external_ids', {'iface-id': 'c91d83e2-40e9-4d10-8b19-e75b3795dd04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.790 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:18 compute-0 ovn_controller[146046]: 2026-01-26T16:03:18Z|00696|binding|INFO|Releasing lport c91d83e2-40e9-4d10-8b19-e75b3795dd04 from this chassis (sb_readonly=0)
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.811 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.813 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.814 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e3b048-0b95-4d10-8313-d61d0bfa95d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.815 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:03:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:18.815 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'env', 'PROCESS_TAG=haproxy-db63036e-9778-49aa-880f-9b900f3c2179', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/db63036e-9778-49aa-880f-9b900f3c2179.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.903 239969 DEBUG nova.compute.manager [req-1f684040-b141-40ce-bf3b-3cb79a77508a req-6e9edee4-d01f-4c7b-9fce-dcc6117a0e50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.904 239969 DEBUG oslo_concurrency.lockutils [req-1f684040-b141-40ce-bf3b-3cb79a77508a req-6e9edee4-d01f-4c7b-9fce-dcc6117a0e50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.905 239969 DEBUG oslo_concurrency.lockutils [req-1f684040-b141-40ce-bf3b-3cb79a77508a req-6e9edee4-d01f-4c7b-9fce-dcc6117a0e50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.905 239969 DEBUG oslo_concurrency.lockutils [req-1f684040-b141-40ce-bf3b-3cb79a77508a req-6e9edee4-d01f-4c7b-9fce-dcc6117a0e50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:18 compute-0 nova_compute[239965]: 2026-01-26 16:03:18.905 239969 DEBUG nova.compute.manager [req-1f684040-b141-40ce-bf3b-3cb79a77508a req-6e9edee4-d01f-4c7b-9fce-dcc6117a0e50 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Processing event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.090 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443399.0898187, 7658ad77-dd99-4ff7-b88f-287f38b8344a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.091 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] VM Started (Lifecycle Event)
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.095 239969 DEBUG nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.100 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:03:19 compute-0 ceph-mon[75140]: pgmap v1511: 305 pgs: 305 active+clean; 180 MiB data, 580 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.103 239969 INFO nova.virt.libvirt.driver [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance spawned successfully.
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.104 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.126 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.132 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.136 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.137 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.137 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.137 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.138 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.138 239969 DEBUG nova.virt.libvirt.driver [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.171 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.172 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443399.0901473, 7658ad77-dd99-4ff7-b88f-287f38b8344a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.172 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] VM Paused (Lifecycle Event)
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.202 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.207 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443399.0986345, 7658ad77-dd99-4ff7-b88f-287f38b8344a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.207 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] VM Resumed (Lifecycle Event)
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.214 239969 INFO nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Took 8.85 seconds to spawn the instance on the hypervisor.
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.214 239969 DEBUG nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.252 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.256 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.289 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:03:19 compute-0 podman[304142]: 2026-01-26 16:03:19.298347222 +0000 UTC m=+0.070009978 container create 1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.319 239969 INFO nova.compute.manager [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Took 10.04 seconds to build instance.
Jan 26 16:03:19 compute-0 systemd[1]: Started libpod-conmon-1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a.scope.
Jan 26 16:03:19 compute-0 podman[304142]: 2026-01-26 16:03:19.256634059 +0000 UTC m=+0.028296815 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.364 239969 DEBUG oslo_concurrency.lockutils [None req-4a872037-73e9-4d3f-8ad1-38179785ba77 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77505c3e444effb8843160795090ac985f7bf630614189bfb6a9ee140ddc784/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:19 compute-0 podman[304142]: 2026-01-26 16:03:19.403439858 +0000 UTC m=+0.175102614 container init 1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:03:19 compute-0 podman[304142]: 2026-01-26 16:03:19.412402047 +0000 UTC m=+0.184064783 container start 1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.426 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443384.4246876, e264bffd-7e7a-43a6-b297-0bca74340744 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.426 239969 INFO nova.compute.manager [-] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] VM Stopped (Lifecycle Event)
Jan 26 16:03:19 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[304158]: [NOTICE]   (304162) : New worker (304164) forked
Jan 26 16:03:19 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[304158]: [NOTICE]   (304162) : Loading success.
Jan 26 16:03:19 compute-0 nova_compute[239965]: 2026-01-26 16:03:19.446 239969 DEBUG nova.compute.manager [None req-502f1361-c6c4-46af-82d9-a1008ce6f40d - - - - - -] [instance: e264bffd-7e7a-43a6-b297-0bca74340744] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 210 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 156 op/s
Jan 26 16:03:20 compute-0 nova_compute[239965]: 2026-01-26 16:03:20.262 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:20 compute-0 nova_compute[239965]: 2026-01-26 16:03:20.884 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:21 compute-0 ceph-mon[75140]: pgmap v1512: 305 pgs: 305 active+clean; 210 MiB data, 602 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 156 op/s
Jan 26 16:03:21 compute-0 nova_compute[239965]: 2026-01-26 16:03:21.290 239969 DEBUG nova.compute.manager [req-ce9ee5e6-4124-4426-9eb3-66a06ee58ebf req-4426cbc5-9d49-4c1b-8f9a-af87cb6e06ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:21 compute-0 nova_compute[239965]: 2026-01-26 16:03:21.291 239969 DEBUG oslo_concurrency.lockutils [req-ce9ee5e6-4124-4426-9eb3-66a06ee58ebf req-4426cbc5-9d49-4c1b-8f9a-af87cb6e06ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:21 compute-0 nova_compute[239965]: 2026-01-26 16:03:21.292 239969 DEBUG oslo_concurrency.lockutils [req-ce9ee5e6-4124-4426-9eb3-66a06ee58ebf req-4426cbc5-9d49-4c1b-8f9a-af87cb6e06ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:21 compute-0 nova_compute[239965]: 2026-01-26 16:03:21.292 239969 DEBUG oslo_concurrency.lockutils [req-ce9ee5e6-4124-4426-9eb3-66a06ee58ebf req-4426cbc5-9d49-4c1b-8f9a-af87cb6e06ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:21 compute-0 nova_compute[239965]: 2026-01-26 16:03:21.292 239969 DEBUG nova.compute.manager [req-ce9ee5e6-4124-4426-9eb3-66a06ee58ebf req-4426cbc5-9d49-4c1b-8f9a-af87cb6e06ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] No waiting events found dispatching network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:21 compute-0 nova_compute[239965]: 2026-01-26 16:03:21.293 239969 WARNING nova.compute.manager [req-ce9ee5e6-4124-4426-9eb3-66a06ee58ebf req-4426cbc5-9d49-4c1b-8f9a-af87cb6e06ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received unexpected event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f for instance with vm_state active and task_state None.
Jan 26 16:03:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Jan 26 16:03:22 compute-0 sudo[304173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:03:22 compute-0 sudo[304173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:22 compute-0 sudo[304173]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:22 compute-0 sudo[304198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 16:03:22 compute-0 sudo[304198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:22 compute-0 nova_compute[239965]: 2026-01-26 16:03:22.173 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:22 compute-0 ceph-mon[75140]: pgmap v1513: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.9 MiB/s wr, 130 op/s
Jan 26 16:03:22 compute-0 sudo[304198]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:03:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:03:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:22 compute-0 sudo[304242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:03:22 compute-0 sudo[304242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:22 compute-0 sudo[304242]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:22 compute-0 sudo[304267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:03:22 compute-0 sudo[304267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:23 compute-0 sudo[304267]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 16:03:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:03:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:03:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:03:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:03:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:03:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:03:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:03:23 compute-0 sudo[304323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:03:23 compute-0 sudo[304323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:23 compute-0 sudo[304323]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:23 compute-0 sudo[304348]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:03:23 compute-0 sudo[304348]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:03:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:03:23 compute-0 podman[304385]: 2026-01-26 16:03:23.847210907 +0000 UTC m=+0.056797634 container create 71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_spence, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:03:23 compute-0 systemd[1]: Started libpod-conmon-71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1.scope.
Jan 26 16:03:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:23 compute-0 podman[304385]: 2026-01-26 16:03:23.830856956 +0000 UTC m=+0.040443703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:03:23 compute-0 podman[304385]: 2026-01-26 16:03:23.937947351 +0000 UTC m=+0.147534078 container init 71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_spence, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:03:23 compute-0 podman[304385]: 2026-01-26 16:03:23.949860753 +0000 UTC m=+0.159447480 container start 71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:03:23 compute-0 podman[304385]: 2026-01-26 16:03:23.956029764 +0000 UTC m=+0.165616581 container attach 71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_spence, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:03:23 compute-0 modest_spence[304401]: 167 167
Jan 26 16:03:23 compute-0 systemd[1]: libpod-71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1.scope: Deactivated successfully.
Jan 26 16:03:23 compute-0 conmon[304401]: conmon 71b29921c82471751a22 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1.scope/container/memory.events
Jan 26 16:03:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 925 KiB/s rd, 3.7 MiB/s wr, 118 op/s
Jan 26 16:03:24 compute-0 podman[304407]: 2026-01-26 16:03:24.010665883 +0000 UTC m=+0.035989122 container died 71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_spence, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:03:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a854ffb2a7bca54da599bcc94da3b26a0f52c7925b3cc49ac857c024b693b446-merged.mount: Deactivated successfully.
Jan 26 16:03:24 compute-0 podman[304407]: 2026-01-26 16:03:24.061858629 +0000 UTC m=+0.087181788 container remove 71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:03:24 compute-0 systemd[1]: libpod-conmon-71b29921c82471751a2204ef81b0778b803ba6dbceab3ab9d25e106582effba1.scope: Deactivated successfully.
Jan 26 16:03:24 compute-0 podman[304429]: 2026-01-26 16:03:24.323756319 +0000 UTC m=+0.063712212 container create 4f9438b0980c96f5e3826108eafc3acc57882606401a8cfd417489376d1956e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hoover, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:03:24 compute-0 systemd[1]: Started libpod-conmon-4f9438b0980c96f5e3826108eafc3acc57882606401a8cfd417489376d1956e4.scope.
Jan 26 16:03:24 compute-0 podman[304429]: 2026-01-26 16:03:24.302946379 +0000 UTC m=+0.042902292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:03:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7182dc7f6d02955b279f3ad0b3a61a0f5a80c80ccda7ea63c1a28105d17f3e9e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7182dc7f6d02955b279f3ad0b3a61a0f5a80c80ccda7ea63c1a28105d17f3e9e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7182dc7f6d02955b279f3ad0b3a61a0f5a80c80ccda7ea63c1a28105d17f3e9e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7182dc7f6d02955b279f3ad0b3a61a0f5a80c80ccda7ea63c1a28105d17f3e9e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7182dc7f6d02955b279f3ad0b3a61a0f5a80c80ccda7ea63c1a28105d17f3e9e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:24 compute-0 podman[304429]: 2026-01-26 16:03:24.444740506 +0000 UTC m=+0.184696419 container init 4f9438b0980c96f5e3826108eafc3acc57882606401a8cfd417489376d1956e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:03:24 compute-0 podman[304429]: 2026-01-26 16:03:24.455202562 +0000 UTC m=+0.195158455 container start 4f9438b0980c96f5e3826108eafc3acc57882606401a8cfd417489376d1956e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:03:24 compute-0 podman[304429]: 2026-01-26 16:03:24.45964555 +0000 UTC m=+0.199601443 container attach 4f9438b0980c96f5e3826108eafc3acc57882606401a8cfd417489376d1956e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:03:24 compute-0 ceph-mon[75140]: pgmap v1514: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 925 KiB/s rd, 3.7 MiB/s wr, 118 op/s
Jan 26 16:03:24 compute-0 nova_compute[239965]: 2026-01-26 16:03:24.920 239969 INFO nova.compute.manager [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Rebuilding instance
Jan 26 16:03:25 compute-0 cranky_hoover[304446]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:03:25 compute-0 cranky_hoover[304446]: --> All data devices are unavailable
Jan 26 16:03:25 compute-0 systemd[1]: libpod-4f9438b0980c96f5e3826108eafc3acc57882606401a8cfd417489376d1956e4.scope: Deactivated successfully.
Jan 26 16:03:25 compute-0 podman[304429]: 2026-01-26 16:03:25.064601411 +0000 UTC m=+0.804557304 container died 4f9438b0980c96f5e3826108eafc3acc57882606401a8cfd417489376d1956e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:03:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-7182dc7f6d02955b279f3ad0b3a61a0f5a80c80ccda7ea63c1a28105d17f3e9e-merged.mount: Deactivated successfully.
Jan 26 16:03:25 compute-0 podman[304429]: 2026-01-26 16:03:25.116437492 +0000 UTC m=+0.856393385 container remove 4f9438b0980c96f5e3826108eafc3acc57882606401a8cfd417489376d1956e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_hoover, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:03:25 compute-0 systemd[1]: libpod-conmon-4f9438b0980c96f5e3826108eafc3acc57882606401a8cfd417489376d1956e4.scope: Deactivated successfully.
Jan 26 16:03:25 compute-0 sudo[304348]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.226 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.244 239969 DEBUG nova.compute.manager [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:25 compute-0 sudo[304478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:03:25 compute-0 sudo[304478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:25 compute-0 sudo[304478]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.293 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'pci_requests' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.306 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.325 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'resources' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:25 compute-0 sudo[304503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:03:25 compute-0 sudo[304503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.342 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'migration_context' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.356 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.361 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:03:25 compute-0 ovn_controller[146046]: 2026-01-26T16:03:25Z|00697|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:03:25 compute-0 ovn_controller[146046]: 2026-01-26T16:03:25Z|00698|binding|INFO|Releasing lport c91d83e2-40e9-4d10-8b19-e75b3795dd04 from this chassis (sb_readonly=0)
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.607 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:25 compute-0 podman[304540]: 2026-01-26 16:03:25.684547959 +0000 UTC m=+0.053826280 container create d518a219f35fb42b5f942a084db813f8801366423b9c2df51d17176a6486e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_boyd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:03:25 compute-0 systemd[1]: Started libpod-conmon-d518a219f35fb42b5f942a084db813f8801366423b9c2df51d17176a6486e05c.scope.
Jan 26 16:03:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:25 compute-0 podman[304540]: 2026-01-26 16:03:25.658359547 +0000 UTC m=+0.027637918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:03:25 compute-0 podman[304540]: 2026-01-26 16:03:25.765656437 +0000 UTC m=+0.134934778 container init d518a219f35fb42b5f942a084db813f8801366423b9c2df51d17176a6486e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 16:03:25 compute-0 podman[304540]: 2026-01-26 16:03:25.779561948 +0000 UTC m=+0.148840259 container start d518a219f35fb42b5f942a084db813f8801366423b9c2df51d17176a6486e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_boyd, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:03:25 compute-0 podman[304540]: 2026-01-26 16:03:25.784155501 +0000 UTC m=+0.153433862 container attach d518a219f35fb42b5f942a084db813f8801366423b9c2df51d17176a6486e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_boyd, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:03:25 compute-0 clever_boyd[304557]: 167 167
Jan 26 16:03:25 compute-0 systemd[1]: libpod-d518a219f35fb42b5f942a084db813f8801366423b9c2df51d17176a6486e05c.scope: Deactivated successfully.
Jan 26 16:03:25 compute-0 podman[304540]: 2026-01-26 16:03:25.788592509 +0000 UTC m=+0.157870860 container died d518a219f35fb42b5f942a084db813f8801366423b9c2df51d17176a6486e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 16:03:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b80ea66eecf7709256939e28246985543586c7a0c607506dd1189fe808e5d88-merged.mount: Deactivated successfully.
Jan 26 16:03:25 compute-0 podman[304540]: 2026-01-26 16:03:25.840475341 +0000 UTC m=+0.209753652 container remove d518a219f35fb42b5f942a084db813f8801366423b9c2df51d17176a6486e05c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_boyd, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:03:25 compute-0 systemd[1]: libpod-conmon-d518a219f35fb42b5f942a084db813f8801366423b9c2df51d17176a6486e05c.scope: Deactivated successfully.
Jan 26 16:03:25 compute-0 nova_compute[239965]: 2026-01-26 16:03:25.887 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 167 op/s
Jan 26 16:03:26 compute-0 podman[304584]: 2026-01-26 16:03:26.073622337 +0000 UTC m=+0.069569717 container create c76f8fe4cb3a414df5a881ccac56739747abf17eb5474e5b6d976f2baefc59a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:03:26 compute-0 nova_compute[239965]: 2026-01-26 16:03:26.108 239969 DEBUG oslo_concurrency.lockutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:26 compute-0 nova_compute[239965]: 2026-01-26 16:03:26.110 239969 DEBUG oslo_concurrency.lockutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:26 compute-0 nova_compute[239965]: 2026-01-26 16:03:26.111 239969 INFO nova.compute.manager [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Rebooting instance
Jan 26 16:03:26 compute-0 systemd[1]: Started libpod-conmon-c76f8fe4cb3a414df5a881ccac56739747abf17eb5474e5b6d976f2baefc59a8.scope.
Jan 26 16:03:26 compute-0 podman[304584]: 2026-01-26 16:03:26.048266896 +0000 UTC m=+0.044214296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:03:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fa91767edd06ff27939fc39ff906d57d67d277f58362f0f01e8727a81bee6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fa91767edd06ff27939fc39ff906d57d67d277f58362f0f01e8727a81bee6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fa91767edd06ff27939fc39ff906d57d67d277f58362f0f01e8727a81bee6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52fa91767edd06ff27939fc39ff906d57d67d277f58362f0f01e8727a81bee6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:26 compute-0 nova_compute[239965]: 2026-01-26 16:03:26.163 239969 DEBUG oslo_concurrency.lockutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:26 compute-0 nova_compute[239965]: 2026-01-26 16:03:26.165 239969 DEBUG oslo_concurrency.lockutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:26 compute-0 nova_compute[239965]: 2026-01-26 16:03:26.165 239969 DEBUG nova.network.neutron [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:03:26 compute-0 podman[304584]: 2026-01-26 16:03:26.168184135 +0000 UTC m=+0.164131515 container init c76f8fe4cb3a414df5a881ccac56739747abf17eb5474e5b6d976f2baefc59a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 16:03:26 compute-0 podman[304584]: 2026-01-26 16:03:26.174740626 +0000 UTC m=+0.170687986 container start c76f8fe4cb3a414df5a881ccac56739747abf17eb5474e5b6d976f2baefc59a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:03:26 compute-0 podman[304584]: 2026-01-26 16:03:26.178175741 +0000 UTC m=+0.174123091 container attach c76f8fe4cb3a414df5a881ccac56739747abf17eb5474e5b6d976f2baefc59a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:03:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]: {
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:     "0": [
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:         {
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "devices": [
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "/dev/loop3"
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             ],
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_name": "ceph_lv0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_size": "21470642176",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "name": "ceph_lv0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "tags": {
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.cluster_name": "ceph",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.crush_device_class": "",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.encrypted": "0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.objectstore": "bluestore",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.osd_id": "0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.type": "block",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.vdo": "0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.with_tpm": "0"
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             },
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "type": "block",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "vg_name": "ceph_vg0"
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:         }
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:     ],
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:     "1": [
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:         {
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "devices": [
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "/dev/loop4"
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             ],
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_name": "ceph_lv1",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_size": "21470642176",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "name": "ceph_lv1",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "tags": {
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.cluster_name": "ceph",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.crush_device_class": "",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.encrypted": "0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.objectstore": "bluestore",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.osd_id": "1",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.type": "block",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.vdo": "0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.with_tpm": "0"
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             },
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "type": "block",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "vg_name": "ceph_vg1"
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:         }
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:     ],
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:     "2": [
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:         {
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "devices": [
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "/dev/loop5"
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             ],
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_name": "ceph_lv2",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_size": "21470642176",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "name": "ceph_lv2",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "tags": {
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.cluster_name": "ceph",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.crush_device_class": "",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.encrypted": "0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.objectstore": "bluestore",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.osd_id": "2",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.type": "block",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.vdo": "0",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:                 "ceph.with_tpm": "0"
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             },
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "type": "block",
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:             "vg_name": "ceph_vg2"
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:         }
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]:     ]
Jan 26 16:03:26 compute-0 flamboyant_turing[304600]: }
Jan 26 16:03:26 compute-0 systemd[1]: libpod-c76f8fe4cb3a414df5a881ccac56739747abf17eb5474e5b6d976f2baefc59a8.scope: Deactivated successfully.
Jan 26 16:03:26 compute-0 podman[304584]: 2026-01-26 16:03:26.543925587 +0000 UTC m=+0.539872947 container died c76f8fe4cb3a414df5a881ccac56739747abf17eb5474e5b6d976f2baefc59a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-52fa91767edd06ff27939fc39ff906d57d67d277f58362f0f01e8727a81bee6a-merged.mount: Deactivated successfully.
Jan 26 16:03:26 compute-0 podman[304584]: 2026-01-26 16:03:26.595525732 +0000 UTC m=+0.591473092 container remove c76f8fe4cb3a414df5a881ccac56739747abf17eb5474e5b6d976f2baefc59a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:03:26 compute-0 systemd[1]: libpod-conmon-c76f8fe4cb3a414df5a881ccac56739747abf17eb5474e5b6d976f2baefc59a8.scope: Deactivated successfully.
Jan 26 16:03:26 compute-0 sudo[304503]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:26 compute-0 sudo[304622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:03:26 compute-0 sudo[304622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:26 compute-0 sudo[304622]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:26 compute-0 sudo[304647]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:03:26 compute-0 sudo[304647]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:27 compute-0 ceph-mon[75140]: pgmap v1515: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 167 op/s
Jan 26 16:03:27 compute-0 podman[304684]: 2026-01-26 16:03:27.1021034 +0000 UTC m=+0.047881005 container create 589e586947f07b516f9405f0a95fd1236295b33a9e5657161eff5ae5e61b33f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:03:27 compute-0 systemd[1]: Started libpod-conmon-589e586947f07b516f9405f0a95fd1236295b33a9e5657161eff5ae5e61b33f7.scope.
Jan 26 16:03:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:27 compute-0 podman[304684]: 2026-01-26 16:03:27.079936577 +0000 UTC m=+0.025714212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:03:27 compute-0 nova_compute[239965]: 2026-01-26 16:03:27.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:27 compute-0 podman[304684]: 2026-01-26 16:03:27.185067384 +0000 UTC m=+0.130845019 container init 589e586947f07b516f9405f0a95fd1236295b33a9e5657161eff5ae5e61b33f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:03:27 compute-0 podman[304684]: 2026-01-26 16:03:27.192493016 +0000 UTC m=+0.138270621 container start 589e586947f07b516f9405f0a95fd1236295b33a9e5657161eff5ae5e61b33f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:03:27 compute-0 podman[304684]: 2026-01-26 16:03:27.196603847 +0000 UTC m=+0.142381482 container attach 589e586947f07b516f9405f0a95fd1236295b33a9e5657161eff5ae5e61b33f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:03:27 compute-0 laughing_galileo[304701]: 167 167
Jan 26 16:03:27 compute-0 systemd[1]: libpod-589e586947f07b516f9405f0a95fd1236295b33a9e5657161eff5ae5e61b33f7.scope: Deactivated successfully.
Jan 26 16:03:27 compute-0 podman[304684]: 2026-01-26 16:03:27.199113749 +0000 UTC m=+0.144891354 container died 589e586947f07b516f9405f0a95fd1236295b33a9e5657161eff5ae5e61b33f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-10ea4649ebb938eff839547dacdf67dde82dc4b880c0b5350c213fd110ce705d-merged.mount: Deactivated successfully.
Jan 26 16:03:27 compute-0 podman[304684]: 2026-01-26 16:03:27.250012066 +0000 UTC m=+0.195789671 container remove 589e586947f07b516f9405f0a95fd1236295b33a9e5657161eff5ae5e61b33f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_galileo, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:03:27 compute-0 systemd[1]: libpod-conmon-589e586947f07b516f9405f0a95fd1236295b33a9e5657161eff5ae5e61b33f7.scope: Deactivated successfully.
Jan 26 16:03:27 compute-0 podman[304726]: 2026-01-26 16:03:27.489469556 +0000 UTC m=+0.060642797 container create 18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kilby, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 16:03:27 compute-0 systemd[1]: Started libpod-conmon-18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8.scope.
Jan 26 16:03:27 compute-0 podman[304726]: 2026-01-26 16:03:27.470604154 +0000 UTC m=+0.041777395 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:03:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca1843d799867ee9fc383d4d9b8f5b68d1efef40dbacfa42df9fddac20b4fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca1843d799867ee9fc383d4d9b8f5b68d1efef40dbacfa42df9fddac20b4fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca1843d799867ee9fc383d4d9b8f5b68d1efef40dbacfa42df9fddac20b4fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ca1843d799867ee9fc383d4d9b8f5b68d1efef40dbacfa42df9fddac20b4fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:27 compute-0 podman[304726]: 2026-01-26 16:03:27.614505761 +0000 UTC m=+0.185679032 container init 18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kilby, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:03:27 compute-0 podman[304726]: 2026-01-26 16:03:27.626052385 +0000 UTC m=+0.197225626 container start 18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:03:27 compute-0 podman[304726]: 2026-01-26 16:03:27.630234767 +0000 UTC m=+0.201408008 container attach 18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kilby, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:03:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 26 16:03:27 compute-0 nova_compute[239965]: 2026-01-26 16:03:27.987 239969 DEBUG nova.network.neutron [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.014 239969 DEBUG oslo_concurrency.lockutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.016 239969 DEBUG nova.compute.manager [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:28 compute-0 kernel: tap2a2de29b-29 (unregistering): left promiscuous mode
Jan 26 16:03:28 compute-0 NetworkManager[48954]: <info>  [1769443408.1737] device (tap2a2de29b-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:03:28 compute-0 ovn_controller[146046]: 2026-01-26T16:03:28Z|00699|binding|INFO|Releasing lport 2a2de29b-29c0-439c-8236-2f566c4c89aa from this chassis (sb_readonly=0)
Jan 26 16:03:28 compute-0 ovn_controller[146046]: 2026-01-26T16:03:28Z|00700|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa down in Southbound
Jan 26 16:03:28 compute-0 ovn_controller[146046]: 2026-01-26T16:03:28Z|00701|binding|INFO|Removing iface tap2a2de29b-29 ovn-installed in OVS
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.184 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.220 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.223 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.224 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.187 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.225 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[55602048-71a6-4b76-8ecd-cc5ecd2954ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.226 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace which is not needed anymore
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.235 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:28 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Jan 26 16:03:28 compute-0 systemd[1]: machine-qemu\x2d86\x2dinstance\x2d0000004c.scope: Consumed 14.212s CPU time.
Jan 26 16:03:28 compute-0 systemd-machined[208061]: Machine qemu-86-instance-0000004c terminated.
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.351 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.359 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance destroyed successfully.
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.360 239969 DEBUG nova.objects.instance [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'resources' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.375 239969 DEBUG nova.virt.libvirt.vif [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:03:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.376 239969 DEBUG nova.network.os_vif_util [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.377 239969 DEBUG nova.network.os_vif_util [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.377 239969 DEBUG os_vif [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.379 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.379 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a2de29b-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:28 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[303673]: [NOTICE]   (303677) : haproxy version is 2.8.14-c23fe91
Jan 26 16:03:28 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[303673]: [NOTICE]   (303677) : path to executable is /usr/sbin/haproxy
Jan 26 16:03:28 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[303673]: [WARNING]  (303677) : Exiting Master process...
Jan 26 16:03:28 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[303673]: [WARNING]  (303677) : Exiting Master process...
Jan 26 16:03:28 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[303673]: [ALERT]    (303677) : Current worker (303679) exited with code 143 (Terminated)
Jan 26 16:03:28 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[303673]: [WARNING]  (303677) : All workers exited. Exiting... (0)
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:28 compute-0 systemd[1]: libpod-53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f.scope: Deactivated successfully.
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.395 239969 INFO os_vif [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:03:28 compute-0 podman[304827]: 2026-01-26 16:03:28.396656576 +0000 UTC m=+0.054366574 container died 53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.405 239969 DEBUG nova.virt.libvirt.driver [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Start _get_guest_xml network_info=[{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.411 239969 WARNING nova.virt.libvirt.driver [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.420 239969 DEBUG nova.virt.libvirt.host [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.422 239969 DEBUG nova.virt.libvirt.host [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.427 239969 DEBUG nova.virt.libvirt.host [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.428 239969 DEBUG nova.virt.libvirt.host [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.428 239969 DEBUG nova.virt.libvirt.driver [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.428 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.429 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.429 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.430 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.430 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.430 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.430 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.431 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.431 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.431 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.431 239969 DEBUG nova.virt.hardware [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.432 239969 DEBUG nova.objects.instance [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f-userdata-shm.mount: Deactivated successfully.
Jan 26 16:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-abb55a47e0b1ec3f3667d445598dcf5cfa16fa06d7754041fd6a36d850bc6a58-merged.mount: Deactivated successfully.
Jan 26 16:03:28 compute-0 podman[304827]: 2026-01-26 16:03:28.449595584 +0000 UTC m=+0.107305592 container cleanup 53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.451 239969 DEBUG oslo_concurrency.processutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:28 compute-0 systemd[1]: libpod-conmon-53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f.scope: Deactivated successfully.
Jan 26 16:03:28 compute-0 lvm[304889]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:03:28 compute-0 lvm[304889]: VG ceph_vg0 finished
Jan 26 16:03:28 compute-0 podman[304875]: 2026-01-26 16:03:28.545555626 +0000 UTC m=+0.066821159 container remove 53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:03:28 compute-0 lvm[304892]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:03:28 compute-0 lvm[304892]: VG ceph_vg1 finished
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.560 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7c16cf-5fb4-4e3a-9ce8-7416ed06252b]: (4, ('Mon Jan 26 04:03:28 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f)\n53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f\nMon Jan 26 04:03:28 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f)\n53a5abbde69708796c2a7e4f9c294b803eb75834f613c1f3d87f7b3c031ac36f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.563 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ce58d8-ed2e-4a6c-aad8-767802393e7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.566 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:28 compute-0 kernel: tap84c1ad93-10: left promiscuous mode
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.573 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:28 compute-0 lvm[304894]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:03:28 compute-0 lvm[304894]: VG ceph_vg0 finished
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.581 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[16b69710-5ee0-4837-b78f-5ca7e3fce1ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:28 compute-0 lvm[304893]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:03:28 compute-0 lvm[304893]: VG ceph_vg2 finished
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.590 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.598 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d361ac6a-918f-443a-974a-32c02c91dd5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.603 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3af573f0-7416-4fb2-b144-434604b82820]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:03:28
Jan 26 16:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', 'volumes', 'backups', 'vms']
Jan 26 16:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.628 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e105c458-ceb4-442c-97e6-24e8b4486c48]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487573, 'reachable_time': 27317, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 304910, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:28 compute-0 systemd[1]: run-netns-ovnmeta\x2d84c1ad93\x2d16e1\x2d4751\x2dac9b\x2dceeaa2d50c1d.mount: Deactivated successfully.
Jan 26 16:03:28 compute-0 naughty_kilby[304743]: {}
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.634 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:03:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:28.634 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[51bf79d2-9a2e-43ed-8133-b0e8628810db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:28 compute-0 systemd[1]: libpod-18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8.scope: Deactivated successfully.
Jan 26 16:03:28 compute-0 systemd[1]: libpod-18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8.scope: Consumed 1.728s CPU time.
Jan 26 16:03:28 compute-0 podman[304726]: 2026-01-26 16:03:28.684274046 +0000 UTC m=+1.255447287 container died 18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kilby, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.934 239969 DEBUG nova.compute.manager [req-4c96290d-a4fd-423c-9d62-a83f5326d6b8 req-e5749c54-78c7-4ab9-a7be-97cca9b78267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.935 239969 DEBUG oslo_concurrency.lockutils [req-4c96290d-a4fd-423c-9d62-a83f5326d6b8 req-e5749c54-78c7-4ab9-a7be-97cca9b78267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.935 239969 DEBUG oslo_concurrency.lockutils [req-4c96290d-a4fd-423c-9d62-a83f5326d6b8 req-e5749c54-78c7-4ab9-a7be-97cca9b78267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.935 239969 DEBUG oslo_concurrency.lockutils [req-4c96290d-a4fd-423c-9d62-a83f5326d6b8 req-e5749c54-78c7-4ab9-a7be-97cca9b78267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.935 239969 DEBUG nova.compute.manager [req-4c96290d-a4fd-423c-9d62-a83f5326d6b8 req-e5749c54-78c7-4ab9-a7be-97cca9b78267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:28 compute-0 nova_compute[239965]: 2026-01-26 16:03:28.936 239969 WARNING nova.compute.manager [req-4c96290d-a4fd-423c-9d62-a83f5326d6b8 req-e5749c54-78c7-4ab9-a7be-97cca9b78267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state reboot_started_hard.
Jan 26 16:03:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-99ca1843d799867ee9fc383d4d9b8f5b68d1efef40dbacfa42df9fddac20b4fa-merged.mount: Deactivated successfully.
Jan 26 16:03:29 compute-0 podman[304726]: 2026-01-26 16:03:29.016816029 +0000 UTC m=+1.587989270 container remove 18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:03:29 compute-0 systemd[1]: libpod-conmon-18cef470517a31fddc256c3a4f447a624303ac1a77c7995098d22821ff0f59e8.scope: Deactivated successfully.
Jan 26 16:03:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/844040275' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:29 compute-0 ceph-mon[75140]: pgmap v1516: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 26 16:03:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/844040275' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:29 compute-0 sudo[304647]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:03:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:03:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.092 239969 DEBUG oslo_concurrency.processutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:29 compute-0 sudo[304935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.156 239969 DEBUG oslo_concurrency.processutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:29 compute-0 sudo[304935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:03:29 compute-0 sudo[304935]: pam_unix(sudo:session): session closed for user root
Jan 26 16:03:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2898751873' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.713 239969 DEBUG oslo_concurrency.processutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.716 239969 DEBUG nova.virt.libvirt.vif [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:03:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.717 239969 DEBUG nova.network.os_vif_util [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.718 239969 DEBUG nova.network.os_vif_util [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.719 239969 DEBUG nova.objects.instance [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.737 239969 DEBUG nova.virt.libvirt.driver [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <uuid>492acb7a-42e1-4918-a973-352f9d8251d0</uuid>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <name>instance-0000004c</name>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestJSON-server-1205900142</nova:name>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:03:28</nova:creationTime>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <nova:user uuid="59f8ee09903a4e0a812c3d9e013996bd">tempest-ServerActionsTestJSON-274649005-project-member</nova:user>
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <nova:project uuid="94771806cd0d4b5db117956e09fea9e6">tempest-ServerActionsTestJSON-274649005</nova:project>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <nova:port uuid="2a2de29b-29c0-439c-8236-2f566c4c89aa">
Jan 26 16:03:29 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <system>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <entry name="serial">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <entry name="uuid">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     </system>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <os>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   </os>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <features>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   </features>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk">
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk.config">
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:29 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:f8:ca:89"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <target dev="tap2a2de29b-29"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/console.log" append="off"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <video>
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     </video>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:03:29 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:03:29 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:03:29 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:03:29 compute-0 nova_compute[239965]: </domain>
Jan 26 16:03:29 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.739 239969 DEBUG nova.virt.libvirt.driver [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.739 239969 DEBUG nova.virt.libvirt.driver [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.740 239969 DEBUG nova.virt.libvirt.vif [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:03:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.741 239969 DEBUG nova.network.os_vif_util [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.741 239969 DEBUG nova.network.os_vif_util [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.742 239969 DEBUG os_vif [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.743 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.743 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.747 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.748 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a2de29b-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.748 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a2de29b-29, col_values=(('external_ids', {'iface-id': '2a2de29b-29c0-439c-8236-2f566c4c89aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:ca:89', 'vm-uuid': '492acb7a-42e1-4918-a973-352f9d8251d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.750 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:29 compute-0 NetworkManager[48954]: <info>  [1769443409.7511] manager: (tap2a2de29b-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/308)
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.753 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.757 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.758 239969 INFO os_vif [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:03:29 compute-0 kernel: tap2a2de29b-29: entered promiscuous mode
Jan 26 16:03:29 compute-0 NetworkManager[48954]: <info>  [1769443409.8525] manager: (tap2a2de29b-29): new Tun device (/org/freedesktop/NetworkManager/Devices/309)
Jan 26 16:03:29 compute-0 ovn_controller[146046]: 2026-01-26T16:03:29Z|00702|binding|INFO|Claiming lport 2a2de29b-29c0-439c-8236-2f566c4c89aa for this chassis.
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.854 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:29 compute-0 ovn_controller[146046]: 2026-01-26T16:03:29Z|00703|binding|INFO|2a2de29b-29c0-439c-8236-2f566c4c89aa: Claiming fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:03:29 compute-0 systemd-udevd[304896]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.863 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.866 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d bound to our chassis
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.868 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:03:29 compute-0 NetworkManager[48954]: <info>  [1769443409.8778] device (tap2a2de29b-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:03:29 compute-0 NetworkManager[48954]: <info>  [1769443409.8786] device (tap2a2de29b-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:29 compute-0 ovn_controller[146046]: 2026-01-26T16:03:29Z|00704|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa ovn-installed in OVS
Jan 26 16:03:29 compute-0 ovn_controller[146046]: 2026-01-26T16:03:29Z|00705|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa up in Southbound
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.881 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9e6ab0-cec1-4542-9ee0-34c8ac1aa348]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:29 compute-0 nova_compute[239965]: 2026-01-26 16:03:29.885 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.885 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84c1ad93-11 in ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.891 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84c1ad93-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.892 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c3cb4e88-3e2b-4d90-b4ec-b63366cbd533]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.893 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[afafd23a-fead-4f24-860c-15930700884f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:29 compute-0 systemd-machined[208061]: New machine qemu-88-instance-0000004c.
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.910 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[3c4312c1-3f99-49db-9cb7-d815b4d39e29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:29 compute-0 systemd[1]: Started Virtual Machine qemu-88-instance-0000004c.
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.926 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b9295ffc-e2f2-434b-943e-91e18cbe6582]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.962 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9782e85d-1f75-44f8-afde-01cfbe92640d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:29.968 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[af44d59c-2379-4162-9a00-a2511a5509e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:29 compute-0 NetworkManager[48954]: <info>  [1769443409.9696] manager: (tap84c1ad93-10): new Veth device (/org/freedesktop/NetworkManager/Devices/310)
Jan 26 16:03:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.005 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0df0accb-597a-49ff-bbba-bdb2c269884a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.008 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d5445646-a372-45d4-a0ec-d700b1948f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:30 compute-0 NetworkManager[48954]: <info>  [1769443410.0387] device (tap84c1ad93-10): carrier: link connected
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.045 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[95c00ba5-14c6-403c-a5b4-864d2d2efd6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.068 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3b8149-32e1-43b3-b5c9-42d2f798a65b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490084, 'reachable_time': 41644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305046, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:03:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2898751873' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.087 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a74b51b9-14e2-4a7d-a8ff-3bf528636827]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:ad50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 490084, 'tstamp': 490084}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305047, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.109 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ce96cf81-c3f0-4a55-bec0-0eeac8afcf8a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 213], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490084, 'reachable_time': 41644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305048, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.159 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8cbc416a-207b-451d-9dd4-f9a3a3015b85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.243 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e2200c9b-26a7-4b8b-ba91-373f2924aecc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.245 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.245 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.246 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.248 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:30 compute-0 kernel: tap84c1ad93-10: entered promiscuous mode
Jan 26 16:03:30 compute-0 NetworkManager[48954]: <info>  [1769443410.2492] manager: (tap84c1ad93-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/311)
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.251 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.252 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.254 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:30 compute-0 ovn_controller[146046]: 2026-01-26T16:03:30Z|00706|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.273 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.276 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.277 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a9208df-9ff5-4b3b-a1fe-628f2aec5e69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.278 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:03:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:30.279 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'env', 'PROCESS_TAG=haproxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:03:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:03:30 compute-0 podman[305119]: 2026-01-26 16:03:30.709034414 +0000 UTC m=+0.058870634 container create 4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:03:30 compute-0 systemd[1]: Started libpod-conmon-4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2.scope.
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.754 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 492acb7a-42e1-4918-a973-352f9d8251d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.755 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443410.7536213, 492acb7a-42e1-4918-a973-352f9d8251d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.756 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Resumed (Lifecycle Event)
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.758 239969 DEBUG nova.compute.manager [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.770 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance rebooted successfully.
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.771 239969 DEBUG nova.compute.manager [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:30 compute-0 podman[305119]: 2026-01-26 16:03:30.677803428 +0000 UTC m=+0.027639678 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.780 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.783 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ec3964b626954e753f1825d5ca298605851d862739f32769e4c9e78cdb52d9c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:30 compute-0 podman[305119]: 2026-01-26 16:03:30.805674963 +0000 UTC m=+0.155511203 container init 4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.807 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.808 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443410.7552395, 492acb7a-42e1-4918-a973-352f9d8251d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.808 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Started (Lifecycle Event)
Jan 26 16:03:30 compute-0 podman[305119]: 2026-01-26 16:03:30.811904445 +0000 UTC m=+0.161740665 container start 4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.837 239969 DEBUG oslo_concurrency.lockutils [None req-2dc35b24-85ae-43ae-82fe-6816e361fdb2 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:30 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[305136]: [NOTICE]   (305140) : New worker (305142) forked
Jan 26 16:03:30 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[305136]: [NOTICE]   (305140) : Loading success.
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.840 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.843 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:30 compute-0 nova_compute[239965]: 2026-01-26 16:03:30.890 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:31 compute-0 ceph-mon[75140]: pgmap v1517: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 143 op/s
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.126 239969 DEBUG nova.compute.manager [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.127 239969 DEBUG oslo_concurrency.lockutils [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.128 239969 DEBUG oslo_concurrency.lockutils [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.128 239969 DEBUG oslo_concurrency.lockutils [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.128 239969 DEBUG nova.compute.manager [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.128 239969 WARNING nova.compute.manager [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.129 239969 DEBUG nova.compute.manager [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.129 239969 DEBUG oslo_concurrency.lockutils [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.129 239969 DEBUG oslo_concurrency.lockutils [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.130 239969 DEBUG oslo_concurrency.lockutils [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.130 239969 DEBUG nova.compute.manager [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.130 239969 WARNING nova.compute.manager [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.130 239969 DEBUG nova.compute.manager [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.131 239969 DEBUG oslo_concurrency.lockutils [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.131 239969 DEBUG oslo_concurrency.lockutils [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.131 239969 DEBUG oslo_concurrency.lockutils [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.131 239969 DEBUG nova.compute.manager [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:31 compute-0 nova_compute[239965]: 2026-01-26 16:03:31.132 239969 WARNING nova.compute.manager [req-0e58ad25-34dc-44c1-91b3-6c99cb14f46a req-1630cec6-0bca-4625-824c-70b6adc0261d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:03:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 310 KiB/s wr, 94 op/s
Jan 26 16:03:32 compute-0 nova_compute[239965]: 2026-01-26 16:03:32.849 239969 INFO nova.compute.manager [None req-62df1b79-34cb-4422-9805-d29c2776b02f 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Get console output
Jan 26 16:03:32 compute-0 nova_compute[239965]: 2026-01-26 16:03:32.855 239969 INFO oslo.privsep.daemon [None req-62df1b79-34cb-4422-9805-d29c2776b02f 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpergfm198/privsep.sock']
Jan 26 16:03:32 compute-0 nova_compute[239965]: 2026-01-26 16:03:32.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:33 compute-0 ceph-mon[75140]: pgmap v1518: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 310 KiB/s wr, 94 op/s
Jan 26 16:03:33 compute-0 nova_compute[239965]: 2026-01-26 16:03:33.667 239969 INFO oslo.privsep.daemon [None req-62df1b79-34cb-4422-9805-d29c2776b02f 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Spawned new privsep daemon via rootwrap
Jan 26 16:03:33 compute-0 nova_compute[239965]: 2026-01-26 16:03:33.534 305155 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 26 16:03:33 compute-0 nova_compute[239965]: 2026-01-26 16:03:33.541 305155 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 26 16:03:33 compute-0 nova_compute[239965]: 2026-01-26 16:03:33.544 305155 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 26 16:03:33 compute-0 nova_compute[239965]: 2026-01-26 16:03:33.545 305155 INFO oslo.privsep.daemon [-] privsep daemon running as pid 305155
Jan 26 16:03:33 compute-0 ovn_controller[146046]: 2026-01-26T16:03:33Z|00083|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:42:49:72 10.100.0.6
Jan 26 16:03:33 compute-0 ovn_controller[146046]: 2026-01-26T16:03:33Z|00084|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:42:49:72 10.100.0.6
Jan 26 16:03:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 51 op/s
Jan 26 16:03:34 compute-0 nova_compute[239965]: 2026-01-26 16:03:34.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:34 compute-0 nova_compute[239965]: 2026-01-26 16:03:34.750 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:35 compute-0 ceph-mon[75140]: pgmap v1519: 305 pgs: 305 active+clean; 213 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 13 KiB/s wr, 51 op/s
Jan 26 16:03:35 compute-0 podman[305157]: 2026-01-26 16:03:35.387703711 +0000 UTC m=+0.059606482 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 16:03:35 compute-0 podman[305158]: 2026-01-26 16:03:35.419875889 +0000 UTC m=+0.094133968 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:03:35 compute-0 nova_compute[239965]: 2026-01-26 16:03:35.429 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:03:35 compute-0 nova_compute[239965]: 2026-01-26 16:03:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:35 compute-0 nova_compute[239965]: 2026-01-26 16:03:35.893 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 246 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 183 op/s
Jan 26 16:03:36 compute-0 nova_compute[239965]: 2026-01-26 16:03:36.259 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:37 compute-0 ceph-mon[75140]: pgmap v1520: 305 pgs: 305 active+clean; 246 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.1 MiB/s wr, 183 op/s
Jan 26 16:03:37 compute-0 nova_compute[239965]: 2026-01-26 16:03:37.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:37 compute-0 nova_compute[239965]: 2026-01-26 16:03:37.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:03:37 compute-0 nova_compute[239965]: 2026-01-26 16:03:37.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:03:37 compute-0 kernel: tapfed98a41-4e (unregistering): left promiscuous mode
Jan 26 16:03:37 compute-0 NetworkManager[48954]: <info>  [1769443417.7716] device (tapfed98a41-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:03:37 compute-0 ovn_controller[146046]: 2026-01-26T16:03:37Z|00707|binding|INFO|Releasing lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f from this chassis (sb_readonly=0)
Jan 26 16:03:37 compute-0 ovn_controller[146046]: 2026-01-26T16:03:37Z|00708|binding|INFO|Setting lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f down in Southbound
Jan 26 16:03:37 compute-0 ovn_controller[146046]: 2026-01-26T16:03:37Z|00709|binding|INFO|Removing iface tapfed98a41-4e ovn-installed in OVS
Jan 26 16:03:37 compute-0 nova_compute[239965]: 2026-01-26 16:03:37.779 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:37 compute-0 nova_compute[239965]: 2026-01-26 16:03:37.800 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:37 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Jan 26 16:03:37 compute-0 systemd[1]: machine-qemu\x2d87\x2dinstance\x2d0000004d.scope: Consumed 14.461s CPU time.
Jan 26 16:03:37 compute-0 systemd-machined[208061]: Machine qemu-87-instance-0000004d terminated.
Jan 26 16:03:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:37.838 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:49:72 10.100.0.6'], port_security=['fa:16:3e:42:49:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7658ad77-dd99-4ff7-b88f-287f38b8344a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=fed98a41-4eee-4ff2-8fc1-4fa7a278799f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:37.840 156105 INFO neutron.agent.ovn.metadata.agent [-] Port fed98a41-4eee-4ff2-8fc1-4fa7a278799f in datapath db63036e-9778-49aa-880f-9b900f3c2179 unbound from our chassis
Jan 26 16:03:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:37.841 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db63036e-9778-49aa-880f-9b900f3c2179, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:03:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:37.842 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3a4ecc34-9447-4bc2-8dfa-c6b25760c49b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:37.843 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace which is not needed anymore
Jan 26 16:03:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 246 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.007 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.017 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:38 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[304158]: [NOTICE]   (304162) : haproxy version is 2.8.14-c23fe91
Jan 26 16:03:38 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[304158]: [NOTICE]   (304162) : path to executable is /usr/sbin/haproxy
Jan 26 16:03:38 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[304158]: [WARNING]  (304162) : Exiting Master process...
Jan 26 16:03:38 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[304158]: [ALERT]    (304162) : Current worker (304164) exited with code 143 (Terminated)
Jan 26 16:03:38 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[304158]: [WARNING]  (304162) : All workers exited. Exiting... (0)
Jan 26 16:03:38 compute-0 systemd[1]: libpod-1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a.scope: Deactivated successfully.
Jan 26 16:03:38 compute-0 podman[305222]: 2026-01-26 16:03:38.036645959 +0000 UTC m=+0.106357878 container died 1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.281 239969 DEBUG nova.compute.manager [req-2b5f8cc8-4be0-4bc9-b317-bee68c06c2a3 req-c528c822-126f-4b02-9682-24c4e8cb3270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-vif-unplugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.281 239969 DEBUG oslo_concurrency.lockutils [req-2b5f8cc8-4be0-4bc9-b317-bee68c06c2a3 req-c528c822-126f-4b02-9682-24c4e8cb3270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.282 239969 DEBUG oslo_concurrency.lockutils [req-2b5f8cc8-4be0-4bc9-b317-bee68c06c2a3 req-c528c822-126f-4b02-9682-24c4e8cb3270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.282 239969 DEBUG oslo_concurrency.lockutils [req-2b5f8cc8-4be0-4bc9-b317-bee68c06c2a3 req-c528c822-126f-4b02-9682-24c4e8cb3270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.283 239969 DEBUG nova.compute.manager [req-2b5f8cc8-4be0-4bc9-b317-bee68c06c2a3 req-c528c822-126f-4b02-9682-24c4e8cb3270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] No waiting events found dispatching network-vif-unplugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.283 239969 WARNING nova.compute.manager [req-2b5f8cc8-4be0-4bc9-b317-bee68c06c2a3 req-c528c822-126f-4b02-9682-24c4e8cb3270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received unexpected event network-vif-unplugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f for instance with vm_state active and task_state rebuilding.
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.337 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.337 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.337 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.338 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:38 compute-0 ceph-mon[75140]: pgmap v1521: 305 pgs: 305 active+clean; 246 MiB data, 630 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Jan 26 16:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a-userdata-shm.mount: Deactivated successfully.
Jan 26 16:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f77505c3e444effb8843160795090ac985f7bf630614189bfb6a9ee140ddc784-merged.mount: Deactivated successfully.
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.450 239969 INFO nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance shutdown successfully after 13 seconds.
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.459 239969 INFO nova.virt.libvirt.driver [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance destroyed successfully.
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.465 239969 INFO nova.virt.libvirt.driver [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance destroyed successfully.
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.466 239969 DEBUG nova.virt.libvirt.vif [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-326782750',display_name='tempest-ServerDiskConfigTestJSON-server-326782750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-326782750',id=77,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-xijxih41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDiskConfigTestJSON-1476979264-project-memb
er'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:03:24Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=7658ad77-dd99-4ff7-b88f-287f38b8344a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.466 239969 DEBUG nova.network.os_vif_util [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.467 239969 DEBUG nova.network.os_vif_util [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.468 239969 DEBUG os_vif [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.470 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.470 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfed98a41-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.472 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.475 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.478 239969 INFO os_vif [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e')
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.498 239969 DEBUG oslo_concurrency.lockutils [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.498 239969 DEBUG oslo_concurrency.lockutils [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.499 239969 DEBUG nova.compute.manager [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.505 239969 DEBUG nova.compute.manager [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.506 239969 DEBUG nova.objects.instance [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'flavor' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:38 compute-0 podman[305222]: 2026-01-26 16:03:38.523502615 +0000 UTC m=+0.593214544 container cleanup 1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.535 239969 DEBUG nova.virt.libvirt.driver [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:03:38 compute-0 podman[305278]: 2026-01-26 16:03:38.756532057 +0000 UTC m=+0.208141733 container remove 1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:03:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:38.764 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[74f113cf-021c-4d28-9219-f63c01f942e0]: (4, ('Mon Jan 26 04:03:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a)\n1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a\nMon Jan 26 04:03:38 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a)\n1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:38.767 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[52531ed5-45b0-42a1-849f-04b49dbea158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:38.768 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:38 compute-0 kernel: tapdb63036e-90: left promiscuous mode
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.770 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:38 compute-0 systemd[1]: libpod-conmon-1759e441f9db0148b6c53bf0f84ee73fc0f957c70871bfda6a2463e77d7e473a.scope: Deactivated successfully.
Jan 26 16:03:38 compute-0 nova_compute[239965]: 2026-01-26 16:03:38.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:38.822 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[695c229a-2661-4441-9c46-bf4dd08cb882]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:38.840 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[953ad55f-7f53-433a-9933-ee698fcfe935]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:38.841 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eba7dd64-fe17-4762-9d8b-604649521ee5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:38.863 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[64e010f1-2eb5-4502-be37-e2bc4525daf5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 488924, 'reachable_time': 27660, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305293, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:38 compute-0 systemd[1]: run-netns-ovnmeta\x2ddb63036e\x2d9778\x2d49aa\x2d880f\x2d9b900f3c2179.mount: Deactivated successfully.
Jan 26 16:03:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:38.869 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:03:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:38.869 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[c755628b-0bb0-4982-9371-50bd85243565]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.017 239969 INFO nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Deleting instance files /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a_del
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.020 239969 INFO nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Deletion of /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a_del complete
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.246 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.247 239969 INFO nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Creating image(s)
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.274 239969 DEBUG nova.storage.rbd_utils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.304 239969 DEBUG nova.storage.rbd_utils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.329 239969 DEBUG nova.storage.rbd_utils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.333 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.401 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.402 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.402 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.403 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.421 239969 DEBUG nova.storage.rbd_utils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.424 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.651 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquiring lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.652 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.671 239969 DEBUG nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.702 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.779 239969 DEBUG nova.storage.rbd_utils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] resizing rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.819 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.820 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.832 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.833 239969 INFO nova.compute.claims [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.868 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.869 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Ensure instance console log exists: /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.869 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.870 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.870 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.873 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Start _get_guest_xml network_info=[{"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.880 239969 WARNING nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.885 239969 DEBUG nova.virt.libvirt.host [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.886 239969 DEBUG nova.virt.libvirt.host [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.890 239969 DEBUG nova.virt.libvirt.host [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.891 239969 DEBUG nova.virt.libvirt.host [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.891 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.892 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.892 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.893 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.893 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.893 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.894 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.894 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.895 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.895 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.895 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.896 239969 DEBUG nova.virt.hardware [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.896 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:39 compute-0 nova_compute[239965]: 2026-01-26 16:03:39.910 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 191 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.003 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054657928' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.466 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.492 239969 DEBUG nova.storage.rbd_utils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.497 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:03:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1324090742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.534 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.540 239969 DEBUG nova.compute.provider_tree [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.586 239969 DEBUG nova.compute.manager [req-78dfeba9-a58c-4623-8026-db68b9835788 req-6133d623-e17d-4b36-af60-6cccdc3b82a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.587 239969 DEBUG oslo_concurrency.lockutils [req-78dfeba9-a58c-4623-8026-db68b9835788 req-6133d623-e17d-4b36-af60-6cccdc3b82a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.588 239969 DEBUG oslo_concurrency.lockutils [req-78dfeba9-a58c-4623-8026-db68b9835788 req-6133d623-e17d-4b36-af60-6cccdc3b82a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.588 239969 DEBUG oslo_concurrency.lockutils [req-78dfeba9-a58c-4623-8026-db68b9835788 req-6133d623-e17d-4b36-af60-6cccdc3b82a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.588 239969 DEBUG nova.compute.manager [req-78dfeba9-a58c-4623-8026-db68b9835788 req-6133d623-e17d-4b36-af60-6cccdc3b82a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] No waiting events found dispatching network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.589 239969 WARNING nova.compute.manager [req-78dfeba9-a58c-4623-8026-db68b9835788 req-6133d623-e17d-4b36-af60-6cccdc3b82a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received unexpected event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f for instance with vm_state active and task_state rebuild_spawning.
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.612 239969 DEBUG nova.scheduler.client.report [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.642 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.644 239969 DEBUG nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:03:40 compute-0 nova_compute[239965]: 2026-01-26 16:03:40.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1821278388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.045 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.046 239969 DEBUG nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.047 239969 DEBUG nova.network.neutron [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.066 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:41 compute-0 ceph-mon[75140]: pgmap v1522: 305 pgs: 305 active+clean; 191 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.068 239969 DEBUG nova.virt.libvirt.vif [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-326782750',display_name='tempest-ServerDiskConfigTestJSON-server-326782750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-326782750',id=77,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-xijxih41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-Se
rverDiskConfigTestJSON-1476979264-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:03:39Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=7658ad77-dd99-4ff7-b88f-287f38b8344a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.069 239969 DEBUG nova.network.os_vif_util [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2054657928' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1324090742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1821278388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.070 239969 DEBUG nova.network.os_vif_util [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.074 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <uuid>7658ad77-dd99-4ff7-b88f-287f38b8344a</uuid>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <name>instance-0000004d</name>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-326782750</nova:name>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:03:39</nova:creationTime>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <nova:user uuid="fc62be3f072e47058d3706393447ec4a">tempest-ServerDiskConfigTestJSON-1476979264-project-member</nova:user>
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <nova:project uuid="8368d1e77bbb4de59a4d3088dda0a707">tempest-ServerDiskConfigTestJSON-1476979264</nova:project>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <nova:port uuid="fed98a41-4eee-4ff2-8fc1-4fa7a278799f">
Jan 26 16:03:41 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <system>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <entry name="serial">7658ad77-dd99-4ff7-b88f-287f38b8344a</entry>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <entry name="uuid">7658ad77-dd99-4ff7-b88f-287f38b8344a</entry>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     </system>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <os>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   </os>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <features>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   </features>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7658ad77-dd99-4ff7-b88f-287f38b8344a_disk">
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config">
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:41 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:42:49:72"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <target dev="tapfed98a41-4e"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/console.log" append="off"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <video>
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     </video>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:03:41 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:03:41 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:03:41 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:03:41 compute-0 nova_compute[239965]: </domain>
Jan 26 16:03:41 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.076 239969 DEBUG nova.compute.manager [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Preparing to wait for external event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.077 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.078 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.078 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.079 239969 DEBUG nova.virt.libvirt.vif [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-326782750',display_name='tempest-ServerDiskConfigTestJSON-server-326782750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-326782750',id=77,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-xijxih41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-Se
rverDiskConfigTestJSON-1476979264-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:03:39Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=7658ad77-dd99-4ff7-b88f-287f38b8344a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.079 239969 DEBUG nova.network.os_vif_util [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.080 239969 DEBUG nova.network.os_vif_util [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.080 239969 DEBUG os_vif [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.082 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.083 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.083 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.084 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.084 239969 INFO nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.089 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.090 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.090 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.091 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.092 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.093 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfed98a41-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.094 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfed98a41-4e, col_values=(('external_ids', {'iface-id': 'fed98a41-4eee-4ff2-8fc1-4fa7a278799f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:49:72', 'vm-uuid': '7658ad77-dd99-4ff7-b88f-287f38b8344a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:41 compute-0 NetworkManager[48954]: <info>  [1769443421.0967] manager: (tapfed98a41-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/312)
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.097 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.101 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.104 239969 INFO os_vif [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e')
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.107 239969 DEBUG nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.117 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.117 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.117 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.117 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.118 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.207 239969 DEBUG nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.209 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.209 239969 INFO nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Creating image(s)
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.237 239969 DEBUG nova.storage.rbd_utils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] rbd image c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.261 239969 DEBUG nova.storage.rbd_utils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] rbd image c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.289 239969 DEBUG nova.storage.rbd_utils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] rbd image c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.293 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.331 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.331 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.332 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No VIF found with MAC fa:16:3e:42:49:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.332 239969 INFO nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Using config drive
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.356 239969 DEBUG nova.storage.rbd_utils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.368 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.369 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.370 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.370 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.395 239969 DEBUG nova.storage.rbd_utils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] rbd image c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.398 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.430 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.461 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'keypairs' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.699 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.728 239969 DEBUG nova.policy [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '61cb7a992aab4e41af5fd2eccf26c5b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5355ee24fabf4f089cfa485487324f31', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:03:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:03:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468214242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.769 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.775 239969 DEBUG nova.storage.rbd_utils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] resizing rbd image c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.805 239969 INFO nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Creating config drive at /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.813 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpge4kqonu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.886 239969 DEBUG nova.objects.instance [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lazy-loading 'migration_context' on Instance uuid c85b4b5d-0f7d-4fb0-b02f-c841ee20544a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.906 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.906 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Ensure instance console log exists: /var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.907 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.907 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.908 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.943 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.943 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000004d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.947 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.947 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.951 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpge4kqonu" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.974 239969 DEBUG nova.storage.rbd_utils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:41 compute-0 nova_compute[239965]: 2026-01-26 16:03:41.978 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 176 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 163 op/s
Jan 26 16:03:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1468214242' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.166 239969 DEBUG oslo_concurrency.processutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config 7658ad77-dd99-4ff7-b88f-287f38b8344a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.169 239969 INFO nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Deleting local config drive /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a/disk.config because it was imported into RBD.
Jan 26 16:03:42 compute-0 kernel: tapfed98a41-4e: entered promiscuous mode
Jan 26 16:03:42 compute-0 ovn_controller[146046]: 2026-01-26T16:03:42Z|00710|binding|INFO|Claiming lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f for this chassis.
Jan 26 16:03:42 compute-0 ovn_controller[146046]: 2026-01-26T16:03:42Z|00711|binding|INFO|fed98a41-4eee-4ff2-8fc1-4fa7a278799f: Claiming fa:16:3e:42:49:72 10.100.0.6
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.226 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:42 compute-0 NetworkManager[48954]: <info>  [1769443422.2275] manager: (tapfed98a41-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/313)
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.236 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:49:72 10.100.0.6'], port_security=['fa:16:3e:42:49:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7658ad77-dd99-4ff7-b88f-287f38b8344a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=fed98a41-4eee-4ff2-8fc1-4fa7a278799f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.238 156105 INFO neutron.agent.ovn.metadata.agent [-] Port fed98a41-4eee-4ff2-8fc1-4fa7a278799f in datapath db63036e-9778-49aa-880f-9b900f3c2179 bound to our chassis
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.242 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:42 compute-0 ovn_controller[146046]: 2026-01-26T16:03:42Z|00712|binding|INFO|Setting lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f ovn-installed in OVS
Jan 26 16:03:42 compute-0 ovn_controller[146046]: 2026-01-26T16:03:42Z|00713|binding|INFO|Setting lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f up in Southbound
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.245 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.252 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.254 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c9275545-52a8-43ad-bf9c-93e16f599a6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.255 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdb63036e-91 in ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.257 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdb63036e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.258 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fa6a0cfa-240b-4016-acd0-7699e8b55408]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 systemd-udevd[305810]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.264 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da041858-8b31-4c66-882c-9150637370d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 systemd-machined[208061]: New machine qemu-89-instance-0000004d.
Jan 26 16:03:42 compute-0 NetworkManager[48954]: <info>  [1769443422.2747] device (tapfed98a41-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:03:42 compute-0 NetworkManager[48954]: <info>  [1769443422.2758] device (tapfed98a41-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.276 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[15384242-dc4d-4a8a-aed2-e73fb2c713a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 systemd[1]: Started Virtual Machine qemu-89-instance-0000004d.
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.297 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[730bbd97-862d-4f4d-83ac-f6e49d81ded5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.300 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.301 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3706MB free_disk=59.931604228913784GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.301 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.302 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.325 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[42a1f87b-9b46-4120-94ce-122d1e713ca1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 NetworkManager[48954]: <info>  [1769443422.3310] manager: (tapdb63036e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/314)
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.329 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d727ea1-bb89-4d90-835e-856bc074edd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.361 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[847fe500-0314-4b26-a4f5-82de84331e34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.365 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d58aa20e-8eb6-4b2a-b9ed-281bb55794fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.385 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 492acb7a-42e1-4918-a973-352f9d8251d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.386 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 7658ad77-dd99-4ff7-b88f-287f38b8344a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.386 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance c85b4b5d-0f7d-4fb0-b02f-c841ee20544a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.386 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.386 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:03:42 compute-0 NetworkManager[48954]: <info>  [1769443422.3887] device (tapdb63036e-90): carrier: link connected
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.396 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[89ab8c54-b52a-4ac9-b18e-0e4c5d3c1ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.422 239969 DEBUG nova.network.neutron [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Successfully created port: 3538690f-6beb-4617-a1af-4b0fa9a05da5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.423 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6767a255-c514-4cb2-a8e2-3b6c83a2c5eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491319, 'reachable_time': 30728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 305843, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.451 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[747d9e92-4f0d-4ce3-8dc0-144946cf15a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:bb2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491319, 'tstamp': 491319}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 305844, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.457 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.478 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c594fa64-eecd-4af4-b109-c6d8bf2fda13]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 216], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491319, 'reachable_time': 30728, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 305845, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.516 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[54a9c21a-98f1-48f5-8feb-518ac35ca328]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.676 239969 DEBUG nova.compute.manager [req-4e30ba42-798a-452e-b139-fe00bb886c9c req-a84facff-a9fe-4c17-9186-106265d474f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.676 239969 DEBUG oslo_concurrency.lockutils [req-4e30ba42-798a-452e-b139-fe00bb886c9c req-a84facff-a9fe-4c17-9186-106265d474f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.677 239969 DEBUG oslo_concurrency.lockutils [req-4e30ba42-798a-452e-b139-fe00bb886c9c req-a84facff-a9fe-4c17-9186-106265d474f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.677 239969 DEBUG oslo_concurrency.lockutils [req-4e30ba42-798a-452e-b139-fe00bb886c9c req-a84facff-a9fe-4c17-9186-106265d474f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.678 239969 DEBUG nova.compute.manager [req-4e30ba42-798a-452e-b139-fe00bb886c9c req-a84facff-a9fe-4c17-9186-106265d474f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Processing event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.702 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.703 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.740 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c3deaa87-d190-4061-aa0f-09e33ba6e6be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.742 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.742 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.743 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb63036e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:42 compute-0 kernel: tapdb63036e-90: entered promiscuous mode
Jan 26 16:03:42 compute-0 NetworkManager[48954]: <info>  [1769443422.7458] manager: (tapdb63036e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/315)
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.757 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb63036e-90, col_values=(('external_ids', {'iface-id': 'c91d83e2-40e9-4d10-8b19-e75b3795dd04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:42 compute-0 ovn_controller[146046]: 2026-01-26T16:03:42Z|00714|binding|INFO|Releasing lport c91d83e2-40e9-4d10-8b19-e75b3795dd04 from this chassis (sb_readonly=0)
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.760 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:42 compute-0 nova_compute[239965]: 2026-01-26 16:03:42.776 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.779 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.780 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a72db55e-85e1-45b1-9a48-c0ce1547f0f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.781 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:03:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:42.782 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'env', 'PROCESS_TAG=haproxy-db63036e-9778-49aa-880f-9b900f3c2179', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/db63036e-9778-49aa-880f-9b900f3c2179.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:03:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:03:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/292391973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.063 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.069 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:03:43 compute-0 ceph-mon[75140]: pgmap v1523: 305 pgs: 305 active+clean; 176 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 163 op/s
Jan 26 16:03:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/292391973' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.126 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.153 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.154 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:43 compute-0 podman[305940]: 2026-01-26 16:03:43.214197967 +0000 UTC m=+0.055694407 container create 4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.229 239969 DEBUG nova.compute.manager [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.230 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 7658ad77-dd99-4ff7-b88f-287f38b8344a due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.230 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443423.230069, 7658ad77-dd99-4ff7-b88f-287f38b8344a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.230 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] VM Started (Lifecycle Event)
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.236 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.239 239969 INFO nova.virt.libvirt.driver [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance spawned successfully.
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.239 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.257 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:43 compute-0 systemd[1]: Started libpod-conmon-4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b.scope.
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.261 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.271 239969 DEBUG nova.network.neutron [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Successfully updated port: 3538690f-6beb-4617-a1af-4b0fa9a05da5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.278 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.278 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.279 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.280 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.280 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:43 compute-0 podman[305940]: 2026-01-26 16:03:43.185314888 +0000 UTC m=+0.026811368 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.281 239969 DEBUG nova.virt.libvirt.driver [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.287 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.287 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443423.231088, 7658ad77-dd99-4ff7-b88f-287f38b8344a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.287 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] VM Paused (Lifecycle Event)
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.290 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquiring lock "refresh_cache-c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.290 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquired lock "refresh_cache-c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.291 239969 DEBUG nova.network.neutron [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:03:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e5838928621ca1f97c37b3ebbba1ef817a5311b7a5fca167b64e5af4084d1bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.312 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.315 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443423.2326753, 7658ad77-dd99-4ff7-b88f-287f38b8344a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.315 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] VM Resumed (Lifecycle Event)
Jan 26 16:03:43 compute-0 podman[305940]: 2026-01-26 16:03:43.316489534 +0000 UTC m=+0.157985984 container init 4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 26 16:03:43 compute-0 podman[305940]: 2026-01-26 16:03:43.322631904 +0000 UTC m=+0.164128344 container start 4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:03:43 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[305954]: [NOTICE]   (305959) : New worker (305961) forked
Jan 26 16:03:43 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[305954]: [NOTICE]   (305959) : Loading success.
Jan 26 16:03:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:43.383 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:03:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:43.384 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.387 239969 DEBUG nova.compute.manager [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.428 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.432 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.451 239969 DEBUG nova.compute.manager [req-a0dbb039-1268-4a80-8dd1-5ab0d007d79f req-6a654025-2b9e-49f5-88fb-b9be79c1dbf4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Received event network-changed-3538690f-6beb-4617-a1af-4b0fa9a05da5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.451 239969 DEBUG nova.compute.manager [req-a0dbb039-1268-4a80-8dd1-5ab0d007d79f req-6a654025-2b9e-49f5-88fb-b9be79c1dbf4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Refreshing instance network info cache due to event network-changed-3538690f-6beb-4617-a1af-4b0fa9a05da5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.451 239969 DEBUG oslo_concurrency.lockutils [req-a0dbb039-1268-4a80-8dd1-5ab0d007d79f req-6a654025-2b9e-49f5-88fb-b9be79c1dbf4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.467 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.468 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.468 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.468 239969 DEBUG nova.objects.instance [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.501 239969 DEBUG nova.network.neutron [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:03:43 compute-0 nova_compute[239965]: 2026-01-26 16:03:43.524 239969 DEBUG oslo_concurrency.lockutils [None req-d933528f-be02-4746-a390-0dd2046d65a8 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 176 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 162 op/s
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.133 239969 DEBUG nova.network.neutron [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Updating instance_info_cache with network_info: [{"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.147 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.155 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Releasing lock "refresh_cache-c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.156 239969 DEBUG nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Instance network_info: |[{"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.157 239969 DEBUG oslo_concurrency.lockutils [req-a0dbb039-1268-4a80-8dd1-5ab0d007d79f req-6a654025-2b9e-49f5-88fb-b9be79c1dbf4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.157 239969 DEBUG nova.network.neutron [req-a0dbb039-1268-4a80-8dd1-5ab0d007d79f req-6a654025-2b9e-49f5-88fb-b9be79c1dbf4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Refreshing network info cache for port 3538690f-6beb-4617-a1af-4b0fa9a05da5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.162 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Start _get_guest_xml network_info=[{"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.168 239969 WARNING nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.176 239969 DEBUG nova.virt.libvirt.host [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.177 239969 DEBUG nova.virt.libvirt.host [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.192 239969 DEBUG nova.virt.libvirt.host [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.193 239969 DEBUG nova.virt.libvirt.host [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.195 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.196 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.197 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.197 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.197 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.198 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.198 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.198 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.199 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.199 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.199 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.200 239969 DEBUG nova.virt.hardware [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.204 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174912858' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.757 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.778 239969 DEBUG nova.storage.rbd_utils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] rbd image c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.782 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.825 239969 DEBUG nova.compute.manager [req-251b51d8-bc2a-4356-8b85-eb919394c50b req-37517083-e38b-4289-898e-7f4220656d9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.825 239969 DEBUG oslo_concurrency.lockutils [req-251b51d8-bc2a-4356-8b85-eb919394c50b req-37517083-e38b-4289-898e-7f4220656d9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.826 239969 DEBUG oslo_concurrency.lockutils [req-251b51d8-bc2a-4356-8b85-eb919394c50b req-37517083-e38b-4289-898e-7f4220656d9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.826 239969 DEBUG oslo_concurrency.lockutils [req-251b51d8-bc2a-4356-8b85-eb919394c50b req-37517083-e38b-4289-898e-7f4220656d9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.826 239969 DEBUG nova.compute.manager [req-251b51d8-bc2a-4356-8b85-eb919394c50b req-37517083-e38b-4289-898e-7f4220656d9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] No waiting events found dispatching network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:44 compute-0 nova_compute[239965]: 2026-01-26 16:03:44.826 239969 WARNING nova.compute.manager [req-251b51d8-bc2a-4356-8b85-eb919394c50b req-37517083-e38b-4289-898e-7f4220656d9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received unexpected event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f for instance with vm_state active and task_state None.
Jan 26 16:03:45 compute-0 ceph-mon[75140]: pgmap v1524: 305 pgs: 305 active+clean; 176 MiB data, 622 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.3 MiB/s wr, 162 op/s
Jan 26 16:03:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/174912858' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:45 compute-0 ovn_controller[146046]: 2026-01-26T16:03:45Z|00085|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:03:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2337921148' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.331 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.333 239969 DEBUG nova.virt.libvirt.vif [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:03:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-823220195',display_name='tempest-ServerMetadataTestJSON-server-823220195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-823220195',id=78,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5355ee24fabf4f089cfa485487324f31',ramdisk_id='',reservation_id='r-lw0fv80x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1651266872',owner_user_name='tempest-ServerMetadataTestJSON-1651266872-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:03:41Z,user_data=None,user_id='61cb7a992aab4e41af5fd2eccf26c5b8',uuid=c85b4b5d-0f7d-4fb0-b02f-c841ee20544a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.333 239969 DEBUG nova.network.os_vif_util [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Converting VIF {"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.334 239969 DEBUG nova.network.os_vif_util [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:8d:df,bridge_name='br-int',has_traffic_filtering=True,id=3538690f-6beb-4617-a1af-4b0fa9a05da5,network=Network(806eb863-205a-4fb7-a077-13a4018c3841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3538690f-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.335 239969 DEBUG nova.objects.instance [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lazy-loading 'pci_devices' on Instance uuid c85b4b5d-0f7d-4fb0-b02f-c841ee20544a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.352 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <uuid>c85b4b5d-0f7d-4fb0-b02f-c841ee20544a</uuid>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <name>instance-0000004e</name>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerMetadataTestJSON-server-823220195</nova:name>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:03:44</nova:creationTime>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <nova:user uuid="61cb7a992aab4e41af5fd2eccf26c5b8">tempest-ServerMetadataTestJSON-1651266872-project-member</nova:user>
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <nova:project uuid="5355ee24fabf4f089cfa485487324f31">tempest-ServerMetadataTestJSON-1651266872</nova:project>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <nova:port uuid="3538690f-6beb-4617-a1af-4b0fa9a05da5">
Jan 26 16:03:45 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <system>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <entry name="serial">c85b4b5d-0f7d-4fb0-b02f-c841ee20544a</entry>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <entry name="uuid">c85b4b5d-0f7d-4fb0-b02f-c841ee20544a</entry>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     </system>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <os>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   </os>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <features>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   </features>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk">
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk.config">
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:1f:8d:df"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <target dev="tap3538690f-6b"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a/console.log" append="off"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <video>
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     </video>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:03:45 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:03:45 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:03:45 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:03:45 compute-0 nova_compute[239965]: </domain>
Jan 26 16:03:45 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.353 239969 DEBUG nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Preparing to wait for external event network-vif-plugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.354 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquiring lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.354 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.354 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.355 239969 DEBUG nova.virt.libvirt.vif [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:03:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-823220195',display_name='tempest-ServerMetadataTestJSON-server-823220195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-823220195',id=78,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5355ee24fabf4f089cfa485487324f31',ramdisk_id='',reservation_id='r-lw0fv80x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataTestJSON-1651266872',owner_user_name='tempest-ServerMetadataTestJSON-1651266872-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:03:41Z,user_data=None,user_id='61cb7a992aab4e41af5fd2eccf26c5b8',uuid=c85b4b5d-0f7d-4fb0-b02f-c841ee20544a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.355 239969 DEBUG nova.network.os_vif_util [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Converting VIF {"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.356 239969 DEBUG nova.network.os_vif_util [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:8d:df,bridge_name='br-int',has_traffic_filtering=True,id=3538690f-6beb-4617-a1af-4b0fa9a05da5,network=Network(806eb863-205a-4fb7-a077-13a4018c3841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3538690f-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.356 239969 DEBUG os_vif [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:8d:df,bridge_name='br-int',has_traffic_filtering=True,id=3538690f-6beb-4617-a1af-4b0fa9a05da5,network=Network(806eb863-205a-4fb7-a077-13a4018c3841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3538690f-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.357 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.358 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.358 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.361 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.362 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3538690f-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.362 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3538690f-6b, col_values=(('external_ids', {'iface-id': '3538690f-6beb-4617-a1af-4b0fa9a05da5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1f:8d:df', 'vm-uuid': 'c85b4b5d-0f7d-4fb0-b02f-c841ee20544a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.364 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:45 compute-0 NetworkManager[48954]: <info>  [1769443425.3647] manager: (tap3538690f-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/316)
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.366 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.370 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.371 239969 INFO os_vif [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:8d:df,bridge_name='br-int',has_traffic_filtering=True,id=3538690f-6beb-4617-a1af-4b0fa9a05da5,network=Network(806eb863-205a-4fb7-a077-13a4018c3841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3538690f-6b')
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.429 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.430 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.430 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] No VIF found with MAC fa:16:3e:1f:8d:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.431 239969 INFO nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Using config drive
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.449 239969 DEBUG nova.storage.rbd_utils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] rbd image c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.592 239969 DEBUG nova.network.neutron [req-a0dbb039-1268-4a80-8dd1-5ab0d007d79f req-6a654025-2b9e-49f5-88fb-b9be79c1dbf4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Updated VIF entry in instance network info cache for port 3538690f-6beb-4617-a1af-4b0fa9a05da5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.593 239969 DEBUG nova.network.neutron [req-a0dbb039-1268-4a80-8dd1-5ab0d007d79f req-6a654025-2b9e-49f5-88fb-b9be79c1dbf4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Updating instance_info_cache with network_info: [{"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.613 239969 DEBUG oslo_concurrency.lockutils [req-a0dbb039-1268-4a80-8dd1-5ab0d007d79f req-6a654025-2b9e-49f5-88fb-b9be79c1dbf4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.778 239969 DEBUG oslo_concurrency.lockutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.778 239969 DEBUG oslo_concurrency.lockutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.779 239969 DEBUG oslo_concurrency.lockutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.779 239969 DEBUG oslo_concurrency.lockutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.779 239969 DEBUG oslo_concurrency.lockutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.780 239969 INFO nova.compute.manager [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Terminating instance
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.781 239969 DEBUG nova.compute.manager [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:03:45 compute-0 kernel: tapfed98a41-4e (unregistering): left promiscuous mode
Jan 26 16:03:45 compute-0 NetworkManager[48954]: <info>  [1769443425.8197] device (tapfed98a41-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:03:45 compute-0 ovn_controller[146046]: 2026-01-26T16:03:45Z|00715|binding|INFO|Releasing lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f from this chassis (sb_readonly=0)
Jan 26 16:03:45 compute-0 ovn_controller[146046]: 2026-01-26T16:03:45Z|00716|binding|INFO|Setting lport fed98a41-4eee-4ff2-8fc1-4fa7a278799f down in Southbound
Jan 26 16:03:45 compute-0 ovn_controller[146046]: 2026-01-26T16:03:45Z|00717|binding|INFO|Removing iface tapfed98a41-4e ovn-installed in OVS
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.840 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:45 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:45.848 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:49:72 10.100.0.6'], port_security=['fa:16:3e:42:49:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7658ad77-dd99-4ff7-b88f-287f38b8344a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=fed98a41-4eee-4ff2-8fc1-4fa7a278799f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:45 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:45.850 156105 INFO neutron.agent.ovn.metadata.agent [-] Port fed98a41-4eee-4ff2-8fc1-4fa7a278799f in datapath db63036e-9778-49aa-880f-9b900f3c2179 unbound from our chassis
Jan 26 16:03:45 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:45.854 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db63036e-9778-49aa-880f-9b900f3c2179, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:03:45 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:45.855 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[85b9b9bd-f443-4716-a13e-797c12947429]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:45 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:45.856 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace which is not needed anymore
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.860 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:45 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004d.scope: Deactivated successfully.
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:45 compute-0 systemd[1]: machine-qemu\x2d89\x2dinstance\x2d0000004d.scope: Consumed 3.550s CPU time.
Jan 26 16:03:45 compute-0 systemd-machined[208061]: Machine qemu-89-instance-0000004d terminated.
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.935 239969 INFO nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Creating config drive at /var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a/disk.config
Jan 26 16:03:45 compute-0 nova_compute[239965]: 2026-01-26 16:03:45.939 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxl1eerg8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:45 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[305954]: [NOTICE]   (305959) : haproxy version is 2.8.14-c23fe91
Jan 26 16:03:45 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[305954]: [NOTICE]   (305959) : path to executable is /usr/sbin/haproxy
Jan 26 16:03:45 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[305954]: [WARNING]  (305959) : Exiting Master process...
Jan 26 16:03:45 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[305954]: [WARNING]  (305959) : Exiting Master process...
Jan 26 16:03:45 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[305954]: [ALERT]    (305959) : Current worker (305961) exited with code 143 (Terminated)
Jan 26 16:03:45 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[305954]: [WARNING]  (305959) : All workers exited. Exiting... (0)
Jan 26 16:03:45 compute-0 systemd[1]: libpod-4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b.scope: Deactivated successfully.
Jan 26 16:03:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 322 op/s
Jan 26 16:03:46 compute-0 podman[306077]: 2026-01-26 16:03:46.000153654 +0000 UTC m=+0.048222803 container died 4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.020 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.025 239969 INFO nova.virt.libvirt.driver [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Instance destroyed successfully.
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.025 239969 DEBUG nova.objects.instance [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'resources' on Instance uuid 7658ad77-dd99-4ff7-b88f-287f38b8344a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b-userdata-shm.mount: Deactivated successfully.
Jan 26 16:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e5838928621ca1f97c37b3ebbba1ef817a5311b7a5fca167b64e5af4084d1bb-merged.mount: Deactivated successfully.
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.044 239969 DEBUG nova.virt.libvirt.vif [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:03:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-326782750',display_name='tempest-ServerDiskConfigTestJSON-server-326782750',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-326782750',id=77,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-xijxih41',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDiskConfigTestJSON-1476979264-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:03:43Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=7658ad77-dd99-4ff7-b88f-287f38b8344a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.045 239969 DEBUG nova.network.os_vif_util [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "address": "fa:16:3e:42:49:72", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfed98a41-4e", "ovs_interfaceid": "fed98a41-4eee-4ff2-8fc1-4fa7a278799f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.045 239969 DEBUG nova.network.os_vif_util [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.046 239969 DEBUG os_vif [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.048 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.048 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfed98a41-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.051 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:46 compute-0 podman[306077]: 2026-01-26 16:03:46.053646835 +0000 UTC m=+0.101715974 container cleanup 4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.055 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.058 239969 INFO os_vif [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:49:72,bridge_name='br-int',has_traffic_filtering=True,id=fed98a41-4eee-4ff2-8fc1-4fa7a278799f,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfed98a41-4e')
Jan 26 16:03:46 compute-0 systemd[1]: libpod-conmon-4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b.scope: Deactivated successfully.
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.083 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxl1eerg8" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2337921148' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.121 239969 DEBUG nova.storage.rbd_utils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] rbd image c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.124 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a/disk.config c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:46 compute-0 podman[306118]: 2026-01-26 16:03:46.1272893 +0000 UTC m=+0.048163661 container remove 4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.134 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8e56ac04-6c0c-4907-8517-671148292e9d]: (4, ('Mon Jan 26 04:03:45 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b)\n4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b\nMon Jan 26 04:03:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b)\n4f9a5c7524bf20e125cc63ace6a42c3bb5714b9053a87616cf45bdb5652f924b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.136 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6a5e400c-01b6-45be-8b1f-3125650ebae4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.137 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:46 compute-0 kernel: tapdb63036e-90: left promiscuous mode
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.163 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.164 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[012207ad-bbc7-4cd7-b43a-7855cc4c31b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.178 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4573deff-1780-48a0-8224-ad48589f50a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.180 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fd98c2f1-d71c-499a-b3f4-a77ed9ff769d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.197 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[898f4b22-525e-4828-9d7c-9738148c455b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491312, 'reachable_time': 40145, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306171, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.200 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.200 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[af485cb5-9d1d-4b41-a0e4-8e140fa92366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 systemd[1]: run-netns-ovnmeta\x2ddb63036e\x2d9778\x2d49aa\x2d880f\x2d9b900f3c2179.mount: Deactivated successfully.
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.313 239969 DEBUG oslo_concurrency.processutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a/disk.config c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.314 239969 INFO nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Deleting local config drive /var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a/disk.config because it was imported into RBD.
Jan 26 16:03:46 compute-0 systemd-udevd[306057]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:03:46 compute-0 kernel: tap3538690f-6b: entered promiscuous mode
Jan 26 16:03:46 compute-0 NetworkManager[48954]: <info>  [1769443426.3739] manager: (tap3538690f-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/317)
Jan 26 16:03:46 compute-0 ovn_controller[146046]: 2026-01-26T16:03:46Z|00718|binding|INFO|Claiming lport 3538690f-6beb-4617-a1af-4b0fa9a05da5 for this chassis.
Jan 26 16:03:46 compute-0 ovn_controller[146046]: 2026-01-26T16:03:46Z|00719|binding|INFO|3538690f-6beb-4617-a1af-4b0fa9a05da5: Claiming fa:16:3e:1f:8d:df 10.100.0.6
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.373 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 NetworkManager[48954]: <info>  [1769443426.3817] device (tap3538690f-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:03:46 compute-0 NetworkManager[48954]: <info>  [1769443426.3833] device (tap3538690f-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.385 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:8d:df 10.100.0.6'], port_security=['fa:16:3e:1f:8d:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c85b4b5d-0f7d-4fb0-b02f-c841ee20544a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-806eb863-205a-4fb7-a077-13a4018c3841', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5355ee24fabf4f089cfa485487324f31', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9d9200e7-b3a7-4d47-9c95-0becf05c2390', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c31298d-164d-42d7-82ea-727e455374c1, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3538690f-6beb-4617-a1af-4b0fa9a05da5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.387 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3538690f-6beb-4617-a1af-4b0fa9a05da5 in datapath 806eb863-205a-4fb7-a077-13a4018c3841 bound to our chassis
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.388 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 806eb863-205a-4fb7-a077-13a4018c3841
Jan 26 16:03:46 compute-0 ovn_controller[146046]: 2026-01-26T16:03:46Z|00720|binding|INFO|Setting lport 3538690f-6beb-4617-a1af-4b0fa9a05da5 ovn-installed in OVS
Jan 26 16:03:46 compute-0 ovn_controller[146046]: 2026-01-26T16:03:46Z|00721|binding|INFO|Setting lport 3538690f-6beb-4617-a1af-4b0fa9a05da5 up in Southbound
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.394 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.396 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.404 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[629c9903-a5de-4aa6-9122-8ec46187115e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.405 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap806eb863-21 in ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.406 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap806eb863-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.406 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[be99b003-72b5-465e-bb83-b1de70ab077c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 systemd-machined[208061]: New machine qemu-90-instance-0000004e.
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.407 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6db2e056-9861-4803-a977-59e0b1bf4c2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 systemd[1]: Started Virtual Machine qemu-90-instance-0000004e.
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.422 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb7ac6f-a809-4ade-b5e3-88381bd0770c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.451 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6027d667-b18b-4829-922e-3fd7766f7fde]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.465 239969 INFO nova.virt.libvirt.driver [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Deleting instance files /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a_del
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.466 239969 INFO nova.virt.libvirt.driver [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Deletion of /var/lib/nova/instances/7658ad77-dd99-4ff7-b88f-287f38b8344a_del complete
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.487 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[eadb8849-d894-41c5-8ce3-c1170b0ff108]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 NetworkManager[48954]: <info>  [1769443426.4961] manager: (tap806eb863-20): new Veth device (/org/freedesktop/NetworkManager/Devices/318)
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.495 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b265ae-b0d6-4167-8394-d31bab2c4eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.524 239969 INFO nova.compute.manager [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Took 0.74 seconds to destroy the instance on the hypervisor.
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.524 239969 DEBUG oslo.service.loopingcall [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.528 239969 DEBUG nova.compute.manager [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.528 239969 DEBUG nova.network.neutron [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.533 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a3ce3075-3855-4197-8adb-ec2d269bc14b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.536 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[abda4438-e550-424b-a215-936a7e7bb6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 NetworkManager[48954]: <info>  [1769443426.5616] device (tap806eb863-20): carrier: link connected
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.576 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1324717a-e138-4f3d-a1de-38a3d8c8f9ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.596 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3063921b-6c52-46d9-b782-4b732a5131f6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap806eb863-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:a7:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491736, 'reachable_time': 24643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306235, 'error': None, 'target': 'ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.620 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e1789ea0-2f38-4ae6-bd6c-da2b31068029]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:a71d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 491736, 'tstamp': 491736}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306236, 'error': None, 'target': 'ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.649 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cb1352bc-da36-4b4b-9052-20c49a9d0cdb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap806eb863-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:a7:1d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 219], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491736, 'reachable_time': 24643, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306237, 'error': None, 'target': 'ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.689 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5fe09735-6185-4c41-9fed-6333a3ad2238]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.763 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[36be0069-5655-4340-afe8-77dd4f51b522]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.764 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap806eb863-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.764 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.765 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap806eb863-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.796 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 NetworkManager[48954]: <info>  [1769443426.7992] manager: (tap806eb863-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/319)
Jan 26 16:03:46 compute-0 kernel: tap806eb863-20: entered promiscuous mode
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.801 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.802 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap806eb863-20, col_values=(('external_ids', {'iface-id': '21d09a7a-d591-4711-88ff-4004532e1416'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 ovn_controller[146046]: 2026-01-26T16:03:46Z|00722|binding|INFO|Releasing lport 21d09a7a-d591-4711-88ff-4004532e1416 from this chassis (sb_readonly=0)
Jan 26 16:03:46 compute-0 nova_compute[239965]: 2026-01-26 16:03:46.836 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.836 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/806eb863-205a-4fb7-a077-13a4018c3841.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/806eb863-205a-4fb7-a077-13a4018c3841.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.837 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1d5173-e61e-44b5-b6f3-9a5288b28419]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.838 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-806eb863-205a-4fb7-a077-13a4018c3841
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/806eb863-205a-4fb7-a077-13a4018c3841.pid.haproxy
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 806eb863-205a-4fb7-a077-13a4018c3841
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:03:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:46.838 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841', 'env', 'PROCESS_TAG=haproxy-806eb863-205a-4fb7-a077-13a4018c3841', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/806eb863-205a-4fb7-a077-13a4018c3841.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.098 239969 DEBUG nova.network.neutron [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.118 239969 INFO nova.compute.manager [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Took 0.59 seconds to deallocate network for instance.
Jan 26 16:03:47 compute-0 ceph-mon[75140]: pgmap v1525: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.7 MiB/s wr, 322 op/s
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.125 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443427.1250224, c85b4b5d-0f7d-4fb0-b02f-c841ee20544a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.125 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] VM Started (Lifecycle Event)
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.153 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.158 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443427.1251752, c85b4b5d-0f7d-4fb0-b02f-c841ee20544a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.159 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] VM Paused (Lifecycle Event)
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.185 239969 DEBUG oslo_concurrency.lockutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.186 239969 DEBUG oslo_concurrency.lockutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.188 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.197 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.204 239969 DEBUG nova.compute.manager [req-e7a94dae-3d26-4188-ae54-82596805a015 req-65b2effc-5485-4f91-8fea-fba880a5256c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Received event network-vif-plugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.205 239969 DEBUG oslo_concurrency.lockutils [req-e7a94dae-3d26-4188-ae54-82596805a015 req-65b2effc-5485-4f91-8fea-fba880a5256c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.205 239969 DEBUG oslo_concurrency.lockutils [req-e7a94dae-3d26-4188-ae54-82596805a015 req-65b2effc-5485-4f91-8fea-fba880a5256c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.205 239969 DEBUG oslo_concurrency.lockutils [req-e7a94dae-3d26-4188-ae54-82596805a015 req-65b2effc-5485-4f91-8fea-fba880a5256c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.205 239969 DEBUG nova.compute.manager [req-e7a94dae-3d26-4188-ae54-82596805a015 req-65b2effc-5485-4f91-8fea-fba880a5256c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Processing event network-vif-plugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.206 239969 DEBUG nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.210 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.217 239969 INFO nova.virt.libvirt.driver [-] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Instance spawned successfully.
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.218 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.224 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.225 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443427.209806, c85b4b5d-0f7d-4fb0-b02f-c841ee20544a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.225 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] VM Resumed (Lifecycle Event)
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.252 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.252 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.253 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.253 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.253 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.254 239969 DEBUG nova.virt.libvirt.driver [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.261 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:47 compute-0 podman[306310]: 2026-01-26 16:03:47.262617602 +0000 UTC m=+0.105065127 container create 245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.267 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:47 compute-0 podman[306310]: 2026-01-26 16:03:47.178757106 +0000 UTC m=+0.021204641 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.276 239969 DEBUG oslo_concurrency.processutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:47 compute-0 systemd[1]: Started libpod-conmon-245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745.scope.
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.339 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.348 239969 INFO nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Took 6.14 seconds to spawn the instance on the hypervisor.
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.349 239969 DEBUG nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cde2af03ff9b6d5654bbbd9b2c4589859a34d219dff751d4d64f5171a6963ca1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:47 compute-0 podman[306310]: 2026-01-26 16:03:47.376609947 +0000 UTC m=+0.219057482 container init 245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:03:47 compute-0 podman[306310]: 2026-01-26 16:03:47.382817309 +0000 UTC m=+0.225264824 container start 245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:03:47 compute-0 neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841[306325]: [NOTICE]   (306329) : New worker (306333) forked
Jan 26 16:03:47 compute-0 neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841[306325]: [NOTICE]   (306329) : Loading success.
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.421 239969 INFO nova.compute.manager [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Took 7.69 seconds to build instance.
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.441 239969 DEBUG oslo_concurrency.lockutils [None req-5010e557-9058-4326-b747-4b0222ede42e 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.789s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:03:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2605710440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.868 239969 DEBUG oslo_concurrency.processutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.876 239969 DEBUG nova.compute.provider_tree [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.896 239969 DEBUG nova.scheduler.client.report [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.920 239969 DEBUG oslo_concurrency.lockutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:47 compute-0 nova_compute[239965]: 2026-01-26 16:03:47.956 239969 INFO nova.scheduler.client.report [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Deleted allocations for instance 7658ad77-dd99-4ff7-b88f-287f38b8344a
Jan 26 16:03:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.011 239969 DEBUG oslo_concurrency.lockutils [None req-77b0cd84-c2c2-45d7-b95f-dc9a6b268376 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.233s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.059 239969 DEBUG nova.compute.manager [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-vif-unplugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.059 239969 DEBUG oslo_concurrency.lockutils [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.060 239969 DEBUG oslo_concurrency.lockutils [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.060 239969 DEBUG oslo_concurrency.lockutils [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.060 239969 DEBUG nova.compute.manager [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] No waiting events found dispatching network-vif-unplugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.061 239969 WARNING nova.compute.manager [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received unexpected event network-vif-unplugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f for instance with vm_state deleted and task_state None.
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.061 239969 DEBUG nova.compute.manager [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.062 239969 DEBUG oslo_concurrency.lockutils [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.062 239969 DEBUG oslo_concurrency.lockutils [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.062 239969 DEBUG oslo_concurrency.lockutils [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7658ad77-dd99-4ff7-b88f-287f38b8344a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.062 239969 DEBUG nova.compute.manager [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] No waiting events found dispatching network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.063 239969 WARNING nova.compute.manager [req-384e2383-511d-4be5-a28a-44f88b45511b req-840d2aeb-697c-457c-b85b-ed4b79098731 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received unexpected event network-vif-plugged-fed98a41-4eee-4ff2-8fc1-4fa7a278799f for instance with vm_state deleted and task_state None.
Jan 26 16:03:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2605710440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.501 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.501 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.528 239969 DEBUG nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.586 239969 DEBUG nova.virt.libvirt.driver [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.597 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.599 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.612 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.612 239969 INFO nova.compute.claims [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:03:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:03:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3136200671' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:03:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:03:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3136200671' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:03:48 compute-0 nova_compute[239965]: 2026-01-26 16:03:48.760 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00146405159856547 of space, bias 1.0, pg target 0.439215479569641 quantized to 32 (current 32)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010162546678610036 of space, bias 1.0, pg target 0.30487640035830105 quantized to 32 (current 32)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.676528659409949e-07 of space, bias 4.0, pg target 0.000921183439129194 quantized to 16 (current 16)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:03:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:03:49 compute-0 ceph-mon[75140]: pgmap v1526: 305 pgs: 305 active+clean; 260 MiB data, 627 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 190 op/s
Jan 26 16:03:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3136200671' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:03:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3136200671' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:03:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:03:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265826095' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.338 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.344 239969 DEBUG nova.compute.provider_tree [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.447 239969 DEBUG nova.compute.manager [req-0ff2c0ab-3b33-4fbd-b1f3-f92c52c9c94b req-17c177bb-3e8b-4d75-943d-f8d11f197067 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Received event network-vif-plugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.448 239969 DEBUG oslo_concurrency.lockutils [req-0ff2c0ab-3b33-4fbd-b1f3-f92c52c9c94b req-17c177bb-3e8b-4d75-943d-f8d11f197067 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.448 239969 DEBUG oslo_concurrency.lockutils [req-0ff2c0ab-3b33-4fbd-b1f3-f92c52c9c94b req-17c177bb-3e8b-4d75-943d-f8d11f197067 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.448 239969 DEBUG oslo_concurrency.lockutils [req-0ff2c0ab-3b33-4fbd-b1f3-f92c52c9c94b req-17c177bb-3e8b-4d75-943d-f8d11f197067 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.448 239969 DEBUG nova.compute.manager [req-0ff2c0ab-3b33-4fbd-b1f3-f92c52c9c94b req-17c177bb-3e8b-4d75-943d-f8d11f197067 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] No waiting events found dispatching network-vif-plugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.448 239969 WARNING nova.compute.manager [req-0ff2c0ab-3b33-4fbd-b1f3-f92c52c9c94b req-17c177bb-3e8b-4d75-943d-f8d11f197067 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Received unexpected event network-vif-plugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 for instance with vm_state active and task_state None.
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.461 239969 DEBUG nova.scheduler.client.report [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.484 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.885s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.485 239969 DEBUG nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.543 239969 DEBUG nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.545 239969 DEBUG nova.network.neutron [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.567 239969 INFO nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.593 239969 DEBUG nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.696 239969 DEBUG nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.697 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.697 239969 INFO nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Creating image(s)
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.728 239969 DEBUG nova.storage.rbd_utils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.765 239969 DEBUG nova.storage.rbd_utils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.802 239969 DEBUG nova.storage.rbd_utils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.812 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.899 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.900 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.901 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.901 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.931 239969 DEBUG nova.storage.rbd_utils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.939 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:49 compute-0 nova_compute[239965]: 2026-01-26 16:03:49.988 239969 DEBUG nova.policy [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fc62be3f072e47058d3706393447ec4a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:03:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 225 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 288 op/s
Jan 26 16:03:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3265826095' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.158 239969 DEBUG nova.compute.manager [req-2cf7f162-9ab1-432b-91ea-15e2b9028732 req-0806b8c1-0fdb-4ee2-ab73-15e40dab2384 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Received event network-vif-deleted-fed98a41-4eee-4ff2-8fc1-4fa7a278799f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.275 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.356 239969 DEBUG nova.storage.rbd_utils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] resizing rbd image 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.456 239969 DEBUG nova.objects.instance [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'migration_context' on Instance uuid 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.473 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.474 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Ensure instance console log exists: /var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.474 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.474 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.475 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.565 239969 DEBUG nova.network.neutron [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Successfully created port: ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:03:50 compute-0 kernel: tap2a2de29b-29 (unregistering): left promiscuous mode
Jan 26 16:03:50 compute-0 NetworkManager[48954]: <info>  [1769443430.8190] device (tap2a2de29b-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:03:50 compute-0 ovn_controller[146046]: 2026-01-26T16:03:50Z|00723|binding|INFO|Releasing lport 2a2de29b-29c0-439c-8236-2f566c4c89aa from this chassis (sb_readonly=0)
Jan 26 16:03:50 compute-0 ovn_controller[146046]: 2026-01-26T16:03:50Z|00724|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa down in Southbound
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.825 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:50 compute-0 ovn_controller[146046]: 2026-01-26T16:03:50Z|00725|binding|INFO|Removing iface tap2a2de29b-29 ovn-installed in OVS
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.827 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:50.835 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:50.837 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:03:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:50.838 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:03:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:50.839 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[595a0844-9f95-4087-a98f-73c931cf4f12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:50.840 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace which is not needed anymore
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.847 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:50 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Jan 26 16:03:50 compute-0 systemd[1]: machine-qemu\x2d88\x2dinstance\x2d0000004c.scope: Consumed 14.581s CPU time.
Jan 26 16:03:50 compute-0 systemd-machined[208061]: Machine qemu-88-instance-0000004c terminated.
Jan 26 16:03:50 compute-0 nova_compute[239965]: 2026-01-26 16:03:50.898 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:51 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[305136]: [NOTICE]   (305140) : haproxy version is 2.8.14-c23fe91
Jan 26 16:03:51 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[305136]: [NOTICE]   (305140) : path to executable is /usr/sbin/haproxy
Jan 26 16:03:51 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[305136]: [ALERT]    (305140) : Current worker (305142) exited with code 143 (Terminated)
Jan 26 16:03:51 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[305136]: [WARNING]  (305140) : All workers exited. Exiting... (0)
Jan 26 16:03:51 compute-0 systemd[1]: libpod-4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2.scope: Deactivated successfully.
Jan 26 16:03:51 compute-0 podman[306572]: 2026-01-26 16:03:51.017655166 +0000 UTC m=+0.055120482 container died 4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2-userdata-shm.mount: Deactivated successfully.
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.056 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.060 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ec3964b626954e753f1825d5ca298605851d862739f32769e4c9e78cdb52d9c-merged.mount: Deactivated successfully.
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:51 compute-0 podman[306572]: 2026-01-26 16:03:51.072884271 +0000 UTC m=+0.110349577 container cleanup 4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:03:51 compute-0 systemd[1]: libpod-conmon-4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2.scope: Deactivated successfully.
Jan 26 16:03:51 compute-0 ceph-mon[75140]: pgmap v1527: 305 pgs: 305 active+clean; 225 MiB data, 608 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.6 MiB/s wr, 288 op/s
Jan 26 16:03:51 compute-0 podman[306604]: 2026-01-26 16:03:51.158797536 +0000 UTC m=+0.052915198 container remove 4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 26 16:03:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:51.167 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[76b5e92b-731e-4abb-bd6b-ef5280ac1282]: (4, ('Mon Jan 26 04:03:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2)\n4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2\nMon Jan 26 04:03:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2)\n4ff117a2831cfdaec4d53dbb091d7c7f0928e12d18e4d8472a7de62fa3fc76b2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:51.170 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd398f3-2d38-4bb0-b82d-4c70bfb8362f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:51.171 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.173 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:51 compute-0 kernel: tap84c1ad93-10: left promiscuous mode
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.192 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.199 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:51.202 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[28543715-8190-4e04-963a-80a3ca644b58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:51.222 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[98c70bea-11c6-420b-bc27-2ac1a6c068da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:51.225 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[20d1725a-9e62-4721-aa14-826b61825912]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:51.256 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ad982d78-a5ef-48ad-a48f-b0589d362183]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 490076, 'reachable_time': 23840, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306622, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d84c1ad93\x2d16e1\x2d4751\x2dac9b\x2dceeaa2d50c1d.mount: Deactivated successfully.
Jan 26 16:03:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:51.262 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:03:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:51.262 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[b842fd1c-3767-4c7e-ae59-92ff1258e97d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.509 239969 DEBUG nova.network.neutron [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Successfully updated port: ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.526 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "refresh_cache-05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.527 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquired lock "refresh_cache-05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.527 239969 DEBUG nova.network.neutron [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.607 239969 INFO nova.virt.libvirt.driver [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance shutdown successfully after 13 seconds.
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.615 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance destroyed successfully.
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.615 239969 DEBUG nova.objects.instance [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.627 239969 DEBUG nova.compute.manager [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.678 239969 DEBUG oslo_concurrency.lockutils [None req-76fddabd-bc1f-4939-9e5c-2f0c6a06f92b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:51 compute-0 nova_compute[239965]: 2026-01-26 16:03:51.737 239969 DEBUG nova.network.neutron [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:03:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 221 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.8 MiB/s wr, 300 op/s
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.076 239969 DEBUG nova.compute.manager [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.077 239969 DEBUG oslo_concurrency.lockutils [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.077 239969 DEBUG oslo_concurrency.lockutils [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.077 239969 DEBUG oslo_concurrency.lockutils [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.077 239969 DEBUG nova.compute.manager [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.078 239969 WARNING nova.compute.manager [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state stopped and task_state None.
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.078 239969 DEBUG nova.compute.manager [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.078 239969 DEBUG oslo_concurrency.lockutils [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.078 239969 DEBUG oslo_concurrency.lockutils [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.078 239969 DEBUG oslo_concurrency.lockutils [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.079 239969 DEBUG nova.compute.manager [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.079 239969 WARNING nova.compute.manager [req-393b688c-8ca8-484d-9dca-7c0cbe782a2a req-8e2950bd-51fd-4d4e-bd4c-d9b9c284129f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state stopped and task_state None.
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.246 239969 DEBUG nova.compute.manager [req-5f315463-9a22-44f6-b274-384c04ed2253 req-f6e3df9b-81d8-4341-8fbd-12542630ec17 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Received event network-changed-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.247 239969 DEBUG nova.compute.manager [req-5f315463-9a22-44f6-b274-384c04ed2253 req-f6e3df9b-81d8-4341-8fbd-12542630ec17 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Refreshing instance network info cache due to event network-changed-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.247 239969 DEBUG oslo_concurrency.lockutils [req-5f315463-9a22-44f6-b274-384c04ed2253 req-f6e3df9b-81d8-4341-8fbd-12542630ec17 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.620 239969 DEBUG nova.network.neutron [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Updating instance_info_cache with network_info: [{"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.644 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Releasing lock "refresh_cache-05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.645 239969 DEBUG nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Instance network_info: |[{"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.645 239969 DEBUG oslo_concurrency.lockutils [req-5f315463-9a22-44f6-b274-384c04ed2253 req-f6e3df9b-81d8-4341-8fbd-12542630ec17 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.645 239969 DEBUG nova.network.neutron [req-5f315463-9a22-44f6-b274-384c04ed2253 req-f6e3df9b-81d8-4341-8fbd-12542630ec17 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Refreshing network info cache for port ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.648 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Start _get_guest_xml network_info=[{"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.653 239969 WARNING nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.658 239969 DEBUG nova.virt.libvirt.host [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.660 239969 DEBUG nova.virt.libvirt.host [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.669 239969 DEBUG nova.virt.libvirt.host [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.670 239969 DEBUG nova.virt.libvirt.host [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.670 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.670 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.670 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.671 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.671 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.671 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.671 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.671 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.671 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.672 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.672 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.672 239969 DEBUG nova.virt.hardware [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:03:52 compute-0 nova_compute[239965]: 2026-01-26 16:03:52.675 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:53 compute-0 ceph-mon[75140]: pgmap v1528: 305 pgs: 305 active+clean; 221 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.8 MiB/s wr, 300 op/s
Jan 26 16:03:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2362440029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.335 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.362 239969 DEBUG nova.storage.rbd_utils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.367 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.731 239969 DEBUG oslo_concurrency.lockutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquiring lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.732 239969 DEBUG oslo_concurrency.lockutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.732 239969 DEBUG oslo_concurrency.lockutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquiring lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.732 239969 DEBUG oslo_concurrency.lockutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.733 239969 DEBUG oslo_concurrency.lockutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.735 239969 INFO nova.compute.manager [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Terminating instance
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.736 239969 DEBUG nova.compute.manager [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:03:53 compute-0 kernel: tap3538690f-6b (unregistering): left promiscuous mode
Jan 26 16:03:53 compute-0 NetworkManager[48954]: <info>  [1769443433.7865] device (tap3538690f-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.794 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:53 compute-0 ovn_controller[146046]: 2026-01-26T16:03:53Z|00726|binding|INFO|Releasing lport 3538690f-6beb-4617-a1af-4b0fa9a05da5 from this chassis (sb_readonly=0)
Jan 26 16:03:53 compute-0 ovn_controller[146046]: 2026-01-26T16:03:53Z|00727|binding|INFO|Setting lport 3538690f-6beb-4617-a1af-4b0fa9a05da5 down in Southbound
Jan 26 16:03:53 compute-0 ovn_controller[146046]: 2026-01-26T16:03:53Z|00728|binding|INFO|Removing iface tap3538690f-6b ovn-installed in OVS
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.805 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:53.814 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1f:8d:df 10.100.0.6'], port_security=['fa:16:3e:1f:8d:df 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c85b4b5d-0f7d-4fb0-b02f-c841ee20544a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-806eb863-205a-4fb7-a077-13a4018c3841', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5355ee24fabf4f089cfa485487324f31', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9d9200e7-b3a7-4d47-9c95-0becf05c2390', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c31298d-164d-42d7-82ea-727e455374c1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3538690f-6beb-4617-a1af-4b0fa9a05da5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:53.817 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3538690f-6beb-4617-a1af-4b0fa9a05da5 in datapath 806eb863-205a-4fb7-a077-13a4018c3841 unbound from our chassis
Jan 26 16:03:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:53.819 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 806eb863-205a-4fb7-a077-13a4018c3841, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:03:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:53.820 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8e7c4f-3706-4323-a431-6d069edc6359]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:53.821 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841 namespace which is not needed anymore
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.829 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:53 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Jan 26 16:03:53 compute-0 systemd[1]: machine-qemu\x2d90\x2dinstance\x2d0000004e.scope: Consumed 7.356s CPU time.
Jan 26 16:03:53 compute-0 systemd-machined[208061]: Machine qemu-90-instance-0000004e terminated.
Jan 26 16:03:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209254915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:53 compute-0 NetworkManager[48954]: <info>  [1769443433.9637] manager: (tap3538690f-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/320)
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.981 239969 INFO nova.virt.libvirt.driver [-] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Instance destroyed successfully.
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.982 239969 DEBUG nova.objects.instance [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lazy-loading 'resources' on Instance uuid c85b4b5d-0f7d-4fb0-b02f-c841ee20544a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:53 compute-0 neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841[306325]: [NOTICE]   (306329) : haproxy version is 2.8.14-c23fe91
Jan 26 16:03:53 compute-0 neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841[306325]: [NOTICE]   (306329) : path to executable is /usr/sbin/haproxy
Jan 26 16:03:53 compute-0 neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841[306325]: [WARNING]  (306329) : Exiting Master process...
Jan 26 16:03:53 compute-0 neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841[306325]: [ALERT]    (306329) : Current worker (306333) exited with code 143 (Terminated)
Jan 26 16:03:53 compute-0 neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841[306325]: [WARNING]  (306329) : All workers exited. Exiting... (0)
Jan 26 16:03:53 compute-0 systemd[1]: libpod-245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745.scope: Deactivated successfully.
Jan 26 16:03:53 compute-0 nova_compute[239965]: 2026-01-26 16:03:53.992 239969 DEBUG nova.objects.instance [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'flavor' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:53 compute-0 podman[306706]: 2026-01-26 16:03:53.998514442 +0000 UTC m=+0.066634324 container died 245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.001 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.004 239969 DEBUG nova.virt.libvirt.vif [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:03:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1165411368',display_name='tempest-ServerDiskConfigTestJSON-server-1165411368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1165411368',id=79,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-a6bffdi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDisk
ConfigTestJSON-1476979264-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:03:49Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=05ffa55e-3746-4da3-bf2d-0f3a64dd44a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.005 239969 DEBUG nova.network.os_vif_util [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.006 239969 DEBUG nova.network.os_vif_util [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:c7:6c,bridge_name='br-int',has_traffic_filtering=True,id=ddfbd3d2-79f1-43d0-bbbd-d89258fd9580,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddfbd3d2-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.008 239969 DEBUG nova.objects.instance [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'pci_devices' on Instance uuid 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.012 239969 DEBUG nova.virt.libvirt.vif [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:03:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataTestJSON-server-823220195',display_name='tempest-ServerMetadataTestJSON-server-823220195',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatatestjson-server-823220195',id=78,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={key1='alt1',key2='value2',key3='value3'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5355ee24fabf4f089cfa485487324f31',ramdisk_id='',reservation_id='r-lw0fv80x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio
',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataTestJSON-1651266872',owner_user_name='tempest-ServerMetadataTestJSON-1651266872-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:03:53Z,user_data=None,user_id='61cb7a992aab4e41af5fd2eccf26c5b8',uuid=c85b4b5d-0f7d-4fb0-b02f-c841ee20544a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:03:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 221 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 290 op/s
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.015 239969 DEBUG nova.network.os_vif_util [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Converting VIF {"id": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "address": "fa:16:3e:1f:8d:df", "network": {"id": "806eb863-205a-4fb7-a077-13a4018c3841", "bridge": "br-int", "label": "tempest-ServerMetadataTestJSON-1258358432-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5355ee24fabf4f089cfa485487324f31", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3538690f-6b", "ovs_interfaceid": "3538690f-6beb-4617-a1af-4b0fa9a05da5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.016 239969 DEBUG nova.network.os_vif_util [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1f:8d:df,bridge_name='br-int',has_traffic_filtering=True,id=3538690f-6beb-4617-a1af-4b0fa9a05da5,network=Network(806eb863-205a-4fb7-a077-13a4018c3841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3538690f-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.017 239969 DEBUG os_vif [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:8d:df,bridge_name='br-int',has_traffic_filtering=True,id=3538690f-6beb-4617-a1af-4b0fa9a05da5,network=Network(806eb863-205a-4fb7-a077-13a4018c3841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3538690f-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.021 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.021 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3538690f-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.025 239969 DEBUG oslo_concurrency.lockutils [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.025 239969 DEBUG oslo_concurrency.lockutils [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.025 239969 DEBUG nova.network.neutron [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.026 239969 DEBUG nova.objects.instance [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'info_cache' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745-userdata-shm.mount: Deactivated successfully.
Jan 26 16:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-cde2af03ff9b6d5654bbbd9b2c4589859a34d219dff751d4d64f5171a6963ca1-merged.mount: Deactivated successfully.
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.032 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.036 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <uuid>05ffa55e-3746-4da3-bf2d-0f3a64dd44a6</uuid>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <name>instance-0000004f</name>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerDiskConfigTestJSON-server-1165411368</nova:name>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:03:52</nova:creationTime>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <nova:user uuid="fc62be3f072e47058d3706393447ec4a">tempest-ServerDiskConfigTestJSON-1476979264-project-member</nova:user>
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <nova:project uuid="8368d1e77bbb4de59a4d3088dda0a707">tempest-ServerDiskConfigTestJSON-1476979264</nova:project>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <nova:port uuid="ddfbd3d2-79f1-43d0-bbbd-d89258fd9580">
Jan 26 16:03:54 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <system>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <entry name="serial">05ffa55e-3746-4da3-bf2d-0f3a64dd44a6</entry>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <entry name="uuid">05ffa55e-3746-4da3-bf2d-0f3a64dd44a6</entry>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     </system>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <os>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   </os>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <features>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   </features>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk">
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk.config">
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:54 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:76:c7:6c"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <target dev="tapddfbd3d2-79"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6/console.log" append="off"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <video>
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     </video>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:03:54 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:03:54 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:03:54 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:03:54 compute-0 nova_compute[239965]: </domain>
Jan 26 16:03:54 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.037 239969 DEBUG nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Preparing to wait for external event network-vif-plugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.038 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.038 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.038 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.039 239969 DEBUG nova.virt.libvirt.vif [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:03:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1165411368',display_name='tempest-ServerDiskConfigTestJSON-server-1165411368',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1165411368',id=79,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-a6bffdi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-
ServerDiskConfigTestJSON-1476979264-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:03:49Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=05ffa55e-3746-4da3-bf2d-0f3a64dd44a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.040 239969 DEBUG nova.network.os_vif_util [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.040 239969 DEBUG nova.network.os_vif_util [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:c7:6c,bridge_name='br-int',has_traffic_filtering=True,id=ddfbd3d2-79f1-43d0-bbbd-d89258fd9580,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddfbd3d2-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.041 239969 DEBUG os_vif [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:c7:6c,bridge_name='br-int',has_traffic_filtering=True,id=ddfbd3d2-79f1-43d0-bbbd-d89258fd9580,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddfbd3d2-79') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:03:54 compute-0 podman[306706]: 2026-01-26 16:03:54.042062529 +0000 UTC m=+0.110182401 container cleanup 245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.043 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.043 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.044 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.049 239969 INFO os_vif [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1f:8d:df,bridge_name='br-int',has_traffic_filtering=True,id=3538690f-6beb-4617-a1af-4b0fa9a05da5,network=Network(806eb863-205a-4fb7-a077-13a4018c3841),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3538690f-6b')
Jan 26 16:03:54 compute-0 systemd[1]: libpod-conmon-245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745.scope: Deactivated successfully.
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.074 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapddfbd3d2-79, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.074 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapddfbd3d2-79, col_values=(('external_ids', {'iface-id': 'ddfbd3d2-79f1-43d0-bbbd-d89258fd9580', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:76:c7:6c', 'vm-uuid': '05ffa55e-3746-4da3-bf2d-0f3a64dd44a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:54 compute-0 NetworkManager[48954]: <info>  [1769443434.0775] manager: (tapddfbd3d2-79): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/321)
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.087 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.087 239969 INFO os_vif [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:c7:6c,bridge_name='br-int',has_traffic_filtering=True,id=ddfbd3d2-79f1-43d0-bbbd-d89258fd9580,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddfbd3d2-79')
Jan 26 16:03:54 compute-0 podman[306750]: 2026-01-26 16:03:54.128421987 +0000 UTC m=+0.060111405 container remove 245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:03:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:54.139 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f25ee4e7-752c-4292-8d47-01a4ac531975]: (4, ('Mon Jan 26 04:03:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841 (245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745)\n245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745\nMon Jan 26 04:03:54 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841 (245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745)\n245e712e193b455c3e0a602125584067ed806cf7c9af4d093a554a9c56107745\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:54.141 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae7ea60-02f1-4c7e-ad0b-db325a266391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:54.142 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap806eb863-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.145 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:54 compute-0 kernel: tap806eb863-20: left promiscuous mode
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.155 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.156 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.157 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] No VIF found with MAC fa:16:3e:76:c7:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.157 239969 INFO nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Using config drive
Jan 26 16:03:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2362440029' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/209254915' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:54.178 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3163d248-3a68-427b-ae6e-3ba3be724cd1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:54.193 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b2dd1446-d02b-4622-ba46-674c09970478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:54.195 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[75c030fa-e8d1-4796-80bd-4e90bf4ffdfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.195 239969 DEBUG nova.storage.rbd_utils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.206 239969 DEBUG nova.network.neutron [req-5f315463-9a22-44f6-b274-384c04ed2253 req-f6e3df9b-81d8-4341-8fbd-12542630ec17 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Updated VIF entry in instance network info cache for port ddfbd3d2-79f1-43d0-bbbd-d89258fd9580. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.207 239969 DEBUG nova.network.neutron [req-5f315463-9a22-44f6-b274-384c04ed2253 req-f6e3df9b-81d8-4341-8fbd-12542630ec17 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Updating instance_info_cache with network_info: [{"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.209 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:54.216 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a99fb5b1-5158-4f39-a686-66ab590f89b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 491728, 'reachable_time': 40422, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306810, 'error': None, 'target': 'ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:54.220 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-806eb863-205a-4fb7-a077-13a4018c3841 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:03:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:54.220 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[79b6eea0-ce34-4527-9d09-714242336bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d806eb863\x2d205a\x2d4fb7\x2da077\x2d13a4018c3841.mount: Deactivated successfully.
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.344 239969 DEBUG oslo_concurrency.lockutils [req-5f315463-9a22-44f6-b274-384c04ed2253 req-f6e3df9b-81d8-4341-8fbd-12542630ec17 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.382 239969 INFO nova.virt.libvirt.driver [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Deleting instance files /var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_del
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.384 239969 INFO nova.virt.libvirt.driver [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Deletion of /var/lib/nova/instances/c85b4b5d-0f7d-4fb0-b02f-c841ee20544a_del complete
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.437 239969 INFO nova.compute.manager [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Took 0.70 seconds to destroy the instance on the hypervisor.
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.438 239969 DEBUG oslo.service.loopingcall [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.438 239969 DEBUG nova.compute.manager [-] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.439 239969 DEBUG nova.network.neutron [-] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.631 239969 DEBUG nova.compute.manager [req-3dc15343-079e-4853-b75d-0726f67b9903 req-ac42aacb-8db7-4b95-b50f-c81514c6ee83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Received event network-vif-unplugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.632 239969 DEBUG oslo_concurrency.lockutils [req-3dc15343-079e-4853-b75d-0726f67b9903 req-ac42aacb-8db7-4b95-b50f-c81514c6ee83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.632 239969 DEBUG oslo_concurrency.lockutils [req-3dc15343-079e-4853-b75d-0726f67b9903 req-ac42aacb-8db7-4b95-b50f-c81514c6ee83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.632 239969 DEBUG oslo_concurrency.lockutils [req-3dc15343-079e-4853-b75d-0726f67b9903 req-ac42aacb-8db7-4b95-b50f-c81514c6ee83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.633 239969 DEBUG nova.compute.manager [req-3dc15343-079e-4853-b75d-0726f67b9903 req-ac42aacb-8db7-4b95-b50f-c81514c6ee83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] No waiting events found dispatching network-vif-unplugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.633 239969 DEBUG nova.compute.manager [req-3dc15343-079e-4853-b75d-0726f67b9903 req-ac42aacb-8db7-4b95-b50f-c81514c6ee83 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Received event network-vif-unplugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.803 239969 INFO nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Creating config drive at /var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6/disk.config
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.812 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapx074v2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.960 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapx074v2" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:54 compute-0 nova_compute[239965]: 2026-01-26 16:03:54.994 239969 DEBUG nova.storage.rbd_utils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] rbd image 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.000 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6/disk.config 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:55 compute-0 ceph-mon[75140]: pgmap v1529: 305 pgs: 305 active+clean; 221 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 3.6 MiB/s wr, 290 op/s
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.191 239969 DEBUG oslo_concurrency.processutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6/disk.config 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.192 239969 INFO nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Deleting local config drive /var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6/disk.config because it was imported into RBD.
Jan 26 16:03:55 compute-0 kernel: tapddfbd3d2-79: entered promiscuous mode
Jan 26 16:03:55 compute-0 NetworkManager[48954]: <info>  [1769443435.2585] manager: (tapddfbd3d2-79): new Tun device (/org/freedesktop/NetworkManager/Devices/322)
Jan 26 16:03:55 compute-0 systemd-udevd[306724]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:03:55 compute-0 ovn_controller[146046]: 2026-01-26T16:03:55Z|00729|binding|INFO|Claiming lport ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 for this chassis.
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.263 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:55 compute-0 ovn_controller[146046]: 2026-01-26T16:03:55Z|00730|binding|INFO|ddfbd3d2-79f1-43d0-bbbd-d89258fd9580: Claiming fa:16:3e:76:c7:6c 10.100.0.11
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.272 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:c7:6c 10.100.0.11'], port_security=['fa:16:3e:76:c7:6c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '05ffa55e-3746-4da3-bf2d-0f3a64dd44a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=ddfbd3d2-79f1-43d0-bbbd-d89258fd9580) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.274 156105 INFO neutron.agent.ovn.metadata.agent [-] Port ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 in datapath db63036e-9778-49aa-880f-9b900f3c2179 bound to our chassis
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.276 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:55 compute-0 NetworkManager[48954]: <info>  [1769443435.2792] device (tapddfbd3d2-79): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:03:55 compute-0 NetworkManager[48954]: <info>  [1769443435.2798] device (tapddfbd3d2-79): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:03:55 compute-0 ovn_controller[146046]: 2026-01-26T16:03:55Z|00731|binding|INFO|Setting lport ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 ovn-installed in OVS
Jan 26 16:03:55 compute-0 ovn_controller[146046]: 2026-01-26T16:03:55Z|00732|binding|INFO|Setting lport ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 up in Southbound
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.289 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.294 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.297 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ac9e91d6-3f22-478d-9cd8-04a7f913b31d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.298 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdb63036e-91 in ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.300 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdb63036e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.301 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7143125e-a81b-439b-a3a2-e0856474491a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.302 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[68a5fee9-94df-4e4d-8baa-05066d3f8696]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 systemd-machined[208061]: New machine qemu-91-instance-0000004f.
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.322 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee68b1f-59f8-4618-81f9-13fcdcd7b31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 systemd[1]: Started Virtual Machine qemu-91-instance-0000004f.
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.344 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f35fe086-b0ed-474f-aa27-d56da511e22f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.378 239969 DEBUG nova.network.neutron [-] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.395 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[63dcc4df-ad06-4e3b-8bd1-67c682b7b12b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 NetworkManager[48954]: <info>  [1769443435.4035] manager: (tapdb63036e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/323)
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.403 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bc67dd8c-4dba-43fa-bb3d-c7f4b9ad6f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.416 239969 INFO nova.compute.manager [-] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Took 0.98 seconds to deallocate network for instance.
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.442 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5996e8-5942-42b7-8146-8a0eda7bd9f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.447 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a72c5b19-21a3-46ae-a4a8-81d94a4cf899]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.475 239969 DEBUG oslo_concurrency.lockutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:55 compute-0 NetworkManager[48954]: <info>  [1769443435.4767] device (tapdb63036e-90): carrier: link connected
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.477 239969 DEBUG oslo_concurrency.lockutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.482 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[be30b6e0-9e93-4210-a022-7149a930bc3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.512 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9685c894-040b-42c4-a463-d75c7bca3456]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492628, 'reachable_time': 15054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 306903, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.544 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[86a78001-584d-4a5b-91fb-c99a51246696]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:bb2a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492628, 'tstamp': 492628}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 306904, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.576 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0191373a-14c6-484b-85b8-00e931ccabbc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdb63036e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:bb:2a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 223], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492628, 'reachable_time': 15054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 306905, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.599 239969 DEBUG oslo_concurrency.processutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.621 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[372266e6-2d91-458d-a8e7-316651db7ee0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.653 239969 DEBUG nova.compute.manager [req-13b94cd4-9087-417e-8d51-48c618e86015 req-97e78ad6-21b0-4ac2-9b1d-ac10a6bd6085 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Received event network-vif-plugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.654 239969 DEBUG oslo_concurrency.lockutils [req-13b94cd4-9087-417e-8d51-48c618e86015 req-97e78ad6-21b0-4ac2-9b1d-ac10a6bd6085 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.654 239969 DEBUG oslo_concurrency.lockutils [req-13b94cd4-9087-417e-8d51-48c618e86015 req-97e78ad6-21b0-4ac2-9b1d-ac10a6bd6085 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.654 239969 DEBUG oslo_concurrency.lockutils [req-13b94cd4-9087-417e-8d51-48c618e86015 req-97e78ad6-21b0-4ac2-9b1d-ac10a6bd6085 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.655 239969 DEBUG nova.compute.manager [req-13b94cd4-9087-417e-8d51-48c618e86015 req-97e78ad6-21b0-4ac2-9b1d-ac10a6bd6085 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Processing event network-vif-plugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.699 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[37afca69-8b2a-43c8-85cb-24832999c339]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.701 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.701 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.701 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdb63036e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:55 compute-0 NetworkManager[48954]: <info>  [1769443435.7044] manager: (tapdb63036e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/324)
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.705 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:55 compute-0 kernel: tapdb63036e-90: entered promiscuous mode
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.708 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.709 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdb63036e-90, col_values=(('external_ids', {'iface-id': 'c91d83e2-40e9-4d10-8b19-e75b3795dd04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:55 compute-0 ovn_controller[146046]: 2026-01-26T16:03:55Z|00733|binding|INFO|Releasing lport c91d83e2-40e9-4d10-8b19-e75b3795dd04 from this chassis (sb_readonly=0)
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.710 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.727 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.730 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.732 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1117784d-a087-44b8-aba1-ddd98e870167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.733 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/db63036e-9778-49aa-880f-9b900f3c2179.pid.haproxy
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID db63036e-9778-49aa-880f-9b900f3c2179
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:03:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:55.736 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'env', 'PROCESS_TAG=haproxy-db63036e-9778-49aa-880f-9b900f3c2179', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/db63036e-9778-49aa-880f-9b900f3c2179.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.787 239969 DEBUG nova.network.neutron [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.829 239969 DEBUG oslo_concurrency.lockutils [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.855 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance destroyed successfully.
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.856 239969 DEBUG nova.objects.instance [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.869 239969 DEBUG nova.objects.instance [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'resources' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.885 239969 DEBUG nova.virt.libvirt.vif [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:03:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.885 239969 DEBUG nova.network.os_vif_util [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.886 239969 DEBUG nova.network.os_vif_util [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.886 239969 DEBUG os_vif [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.888 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.889 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a2de29b-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.898 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.907 239969 INFO os_vif [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.916 239969 DEBUG nova.virt.libvirt.driver [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Start _get_guest_xml network_info=[{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.920 239969 WARNING nova.virt.libvirt.driver [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.927 239969 DEBUG nova.virt.libvirt.host [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.928 239969 DEBUG nova.virt.libvirt.host [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.933 239969 DEBUG nova.virt.libvirt.host [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.934 239969 DEBUG nova.virt.libvirt.host [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.934 239969 DEBUG nova.virt.libvirt.driver [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.934 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.935 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.935 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.935 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.936 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.936 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.936 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.937 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.937 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.937 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.937 239969 DEBUG nova.virt.hardware [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.938 239969 DEBUG nova.objects.instance [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:55 compute-0 nova_compute[239965]: 2026-01-26 16:03:55.958 239969 DEBUG oslo_concurrency.processutils [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 225 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.2 MiB/s wr, 328 op/s
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.111 239969 DEBUG nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.112 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443436.1107023, 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.112 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] VM Started (Lifecycle Event)
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.117 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.120 239969 INFO nova.virt.libvirt.driver [-] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Instance spawned successfully.
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.121 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.164 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.168 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:56 compute-0 podman[307004]: 2026-01-26 16:03:56.182591734 +0000 UTC m=+0.064784389 container create 3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.188 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.189 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.189 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.190 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.190 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.190 239969 DEBUG nova.virt.libvirt.driver [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:03:56 compute-0 systemd[1]: Started libpod-conmon-3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5.scope.
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.232 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.232 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443436.1115327, 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.233 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] VM Paused (Lifecycle Event)
Jan 26 16:03:56 compute-0 podman[307004]: 2026-01-26 16:03:56.151145234 +0000 UTC m=+0.033337909 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:03:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9a40707375177e6d3303ccb4f2b3cc6862564373d13351b2d3b6aea2c6bf1e7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:03:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405732883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:56 compute-0 podman[307004]: 2026-01-26 16:03:56.274932908 +0000 UTC m=+0.157125593 container init 3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 26 16:03:56 compute-0 podman[307004]: 2026-01-26 16:03:56.281239473 +0000 UTC m=+0.163432128 container start 3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.286 239969 DEBUG oslo_concurrency.processutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.294 239969 DEBUG nova.compute.provider_tree [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:03:56 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[307037]: [NOTICE]   (307043) : New worker (307045) forked
Jan 26 16:03:56 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[307037]: [NOTICE]   (307043) : Loading success.
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.378 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.383 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443436.1142113, 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.384 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] VM Resumed (Lifecycle Event)
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.391 239969 DEBUG nova.scheduler.client.report [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.409 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.412 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.445 239969 INFO nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Took 6.75 seconds to spawn the instance on the hypervisor.
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.446 239969 DEBUG nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3073697249' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.627 239969 DEBUG oslo_concurrency.processutils [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.658 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.659 239969 DEBUG oslo_concurrency.lockutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.668 239969 DEBUG oslo_concurrency.processutils [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.732 239969 INFO nova.compute.manager [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Took 8.16 seconds to build instance.
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.744 239969 INFO nova.scheduler.client.report [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Deleted allocations for instance c85b4b5d-0f7d-4fb0-b02f-c841ee20544a
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.764 239969 DEBUG oslo_concurrency.lockutils [None req-abdc8f64-7c26-40bf-85ae-85734c5d8d00 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.263s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.815 239969 DEBUG oslo_concurrency.lockutils [None req-a696132b-efc6-48de-be95-d77376831c60 61cb7a992aab4e41af5fd2eccf26c5b8 5355ee24fabf4f089cfa485487324f31 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.908 239969 DEBUG nova.compute.manager [req-a51792e4-dcb8-45e7-8d7f-2d5e05e4039f req-ea1d7fce-ae43-4282-a588-919b1f99bea8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Received event network-vif-plugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.910 239969 DEBUG oslo_concurrency.lockutils [req-a51792e4-dcb8-45e7-8d7f-2d5e05e4039f req-ea1d7fce-ae43-4282-a588-919b1f99bea8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.910 239969 DEBUG oslo_concurrency.lockutils [req-a51792e4-dcb8-45e7-8d7f-2d5e05e4039f req-ea1d7fce-ae43-4282-a588-919b1f99bea8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.911 239969 DEBUG oslo_concurrency.lockutils [req-a51792e4-dcb8-45e7-8d7f-2d5e05e4039f req-ea1d7fce-ae43-4282-a588-919b1f99bea8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c85b4b5d-0f7d-4fb0-b02f-c841ee20544a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.911 239969 DEBUG nova.compute.manager [req-a51792e4-dcb8-45e7-8d7f-2d5e05e4039f req-ea1d7fce-ae43-4282-a588-919b1f99bea8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] No waiting events found dispatching network-vif-plugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.912 239969 WARNING nova.compute.manager [req-a51792e4-dcb8-45e7-8d7f-2d5e05e4039f req-ea1d7fce-ae43-4282-a588-919b1f99bea8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Received unexpected event network-vif-plugged-3538690f-6beb-4617-a1af-4b0fa9a05da5 for instance with vm_state deleted and task_state None.
Jan 26 16:03:56 compute-0 nova_compute[239965]: 2026-01-26 16:03:56.912 239969 DEBUG nova.compute.manager [req-a51792e4-dcb8-45e7-8d7f-2d5e05e4039f req-ea1d7fce-ae43-4282-a588-919b1f99bea8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Received event network-vif-deleted-3538690f-6beb-4617-a1af-4b0fa9a05da5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:57 compute-0 ceph-mon[75140]: pgmap v1530: 305 pgs: 305 active+clean; 225 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 5.2 MiB/s wr, 328 op/s
Jan 26 16:03:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/405732883' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:03:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3073697249' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:03:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/485458630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.290 239969 DEBUG oslo_concurrency.processutils [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.292 239969 DEBUG nova.virt.libvirt.vif [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:03:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.293 239969 DEBUG nova.network.os_vif_util [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.294 239969 DEBUG nova.network.os_vif_util [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.296 239969 DEBUG nova.objects.instance [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.314 239969 DEBUG nova.virt.libvirt.driver [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <uuid>492acb7a-42e1-4918-a973-352f9d8251d0</uuid>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <name>instance-0000004c</name>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestJSON-server-1205900142</nova:name>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:03:55</nova:creationTime>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <nova:user uuid="59f8ee09903a4e0a812c3d9e013996bd">tempest-ServerActionsTestJSON-274649005-project-member</nova:user>
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <nova:project uuid="94771806cd0d4b5db117956e09fea9e6">tempest-ServerActionsTestJSON-274649005</nova:project>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <nova:port uuid="2a2de29b-29c0-439c-8236-2f566c4c89aa">
Jan 26 16:03:57 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <system>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <entry name="serial">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <entry name="uuid">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     </system>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <os>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   </os>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <features>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   </features>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk">
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk.config">
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       </source>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:03:57 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:f8:ca:89"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <target dev="tap2a2de29b-29"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/console.log" append="off"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <video>
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     </video>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:03:57 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:03:57 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:03:57 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:03:57 compute-0 nova_compute[239965]: </domain>
Jan 26 16:03:57 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.321 239969 DEBUG nova.virt.libvirt.driver [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.321 239969 DEBUG nova.virt.libvirt.driver [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.322 239969 DEBUG nova.virt.libvirt.vif [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:03:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.323 239969 DEBUG nova.network.os_vif_util [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.323 239969 DEBUG nova.network.os_vif_util [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.324 239969 DEBUG os_vif [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.326 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.326 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.330 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.330 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a2de29b-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.331 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a2de29b-29, col_values=(('external_ids', {'iface-id': '2a2de29b-29c0-439c-8236-2f566c4c89aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:ca:89', 'vm-uuid': '492acb7a-42e1-4918-a973-352f9d8251d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:57 compute-0 NetworkManager[48954]: <info>  [1769443437.3354] manager: (tap2a2de29b-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/325)
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.337 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.347 239969 INFO os_vif [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:03:57 compute-0 kernel: tap2a2de29b-29: entered promiscuous mode
Jan 26 16:03:57 compute-0 NetworkManager[48954]: <info>  [1769443437.4310] manager: (tap2a2de29b-29): new Tun device (/org/freedesktop/NetworkManager/Devices/326)
Jan 26 16:03:57 compute-0 ovn_controller[146046]: 2026-01-26T16:03:57Z|00734|binding|INFO|Claiming lport 2a2de29b-29c0-439c-8236-2f566c4c89aa for this chassis.
Jan 26 16:03:57 compute-0 ovn_controller[146046]: 2026-01-26T16:03:57Z|00735|binding|INFO|2a2de29b-29c0-439c-8236-2f566c4c89aa: Claiming fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.440 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.452 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '7', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.455 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d bound to our chassis
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.457 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:03:57 compute-0 ovn_controller[146046]: 2026-01-26T16:03:57Z|00736|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa ovn-installed in OVS
Jan 26 16:03:57 compute-0 ovn_controller[146046]: 2026-01-26T16:03:57Z|00737|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa up in Southbound
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.463 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.471 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6476d26d-2414-4e4a-8cae-9a0129fc2410]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.473 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84c1ad93-11 in ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.475 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84c1ad93-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.475 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c6168cae-429b-49c5-8f23-4c400b6948c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.475 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fc663319-9863-4c8c-b2fe-4a57be8729e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 systemd-machined[208061]: New machine qemu-92-instance-0000004c.
Jan 26 16:03:57 compute-0 systemd-udevd[307112]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:03:57 compute-0 systemd[1]: Started Virtual Machine qemu-92-instance-0000004c.
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.493 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[bb7c4f5e-0e3c-4afc-8981-7b4c35b99aad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 NetworkManager[48954]: <info>  [1769443437.4994] device (tap2a2de29b-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:03:57 compute-0 NetworkManager[48954]: <info>  [1769443437.5002] device (tap2a2de29b-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.520 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[61ae13db-92c1-49e5-bfcd-75ca12071c6d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.548 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9b4a1ced-c91e-4922-9d86-94dafa86ff3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 NetworkManager[48954]: <info>  [1769443437.5544] manager: (tap84c1ad93-10): new Veth device (/org/freedesktop/NetworkManager/Devices/327)
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.554 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3a519505-a924-4ba3-a4b9-29aa0ef751ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.593 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[efcfa65b-cb0e-4198-a9a7-d7b3b5aac02b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.597 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f6d92fe2-79fd-4938-86a8-9770fd1d937e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 NetworkManager[48954]: <info>  [1769443437.6263] device (tap84c1ad93-10): carrier: link connected
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.637 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f75449ff-07fb-48a7-b6ee-94a3f69cea80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.662 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cbed320b-b4c5-4f35-a008-8d59a838b7c8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492843, 'reachable_time': 35072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307144, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.687 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[581a6018-bda9-4730-b956-cb87bdedf16d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:ad50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 492843, 'tstamp': 492843}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307145, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.713 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[059d042c-cda7-4ca3-80b3-9ae4ebabf190]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 225], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492843, 'reachable_time': 35072, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307146, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.750 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a2c1c7-e61a-4ccb-ac70-933c1a097ca8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.813 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[359432ff-1534-4229-8225-259dd8722a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.815 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.815 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.816 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 NetworkManager[48954]: <info>  [1769443437.8193] manager: (tap84c1ad93-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/328)
Jan 26 16:03:57 compute-0 kernel: tap84c1ad93-10: entered promiscuous mode
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.820 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.825 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.826 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 ovn_controller[146046]: 2026-01-26T16:03:57Z|00738|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.827 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.830 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.831 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[18252594-3ca7-42c1-b17c-6a63346d0ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.832 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:03:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:57.832 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'env', 'PROCESS_TAG=haproxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.846 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.971 239969 DEBUG nova.compute.manager [req-7d268b72-d08e-4915-9431-ba87e748a342 req-da37f401-7b04-42a7-b8bf-bd4f8785145e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Received event network-vif-plugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.973 239969 DEBUG oslo_concurrency.lockutils [req-7d268b72-d08e-4915-9431-ba87e748a342 req-da37f401-7b04-42a7-b8bf-bd4f8785145e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.973 239969 DEBUG oslo_concurrency.lockutils [req-7d268b72-d08e-4915-9431-ba87e748a342 req-da37f401-7b04-42a7-b8bf-bd4f8785145e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.974 239969 DEBUG oslo_concurrency.lockutils [req-7d268b72-d08e-4915-9431-ba87e748a342 req-da37f401-7b04-42a7-b8bf-bd4f8785145e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.974 239969 DEBUG nova.compute.manager [req-7d268b72-d08e-4915-9431-ba87e748a342 req-da37f401-7b04-42a7-b8bf-bd4f8785145e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] No waiting events found dispatching network-vif-plugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:57 compute-0 nova_compute[239965]: 2026-01-26 16:03:57.974 239969 WARNING nova.compute.manager [req-7d268b72-d08e-4915-9431-ba87e748a342 req-da37f401-7b04-42a7-b8bf-bd4f8785145e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Received unexpected event network-vif-plugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 for instance with vm_state active and task_state None.
Jan 26 16:03:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 225 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.076 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 492acb7a-42e1-4918-a973-352f9d8251d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.077 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443438.0758748, 492acb7a-42e1-4918-a973-352f9d8251d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.077 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Resumed (Lifecycle Event)
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.079 239969 DEBUG nova.compute.manager [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.084 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance rebooted successfully.
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.085 239969 DEBUG nova.compute.manager [None req-e799a52e-bf5f-4aa8-aa6a-68c14b295716 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.099 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.103 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.125 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.125 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443438.078378, 492acb7a-42e1-4918-a973-352f9d8251d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.126 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Started (Lifecycle Event)
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.149 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:03:58 compute-0 nova_compute[239965]: 2026-01-26 16:03:58.159 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:03:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/485458630' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:03:58 compute-0 podman[307219]: 2026-01-26 16:03:58.244176184 +0000 UTC m=+0.056859335 container create ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:03:58 compute-0 systemd[1]: Started libpod-conmon-ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f.scope.
Jan 26 16:03:58 compute-0 podman[307219]: 2026-01-26 16:03:58.214083026 +0000 UTC m=+0.026766207 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:03:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/530507c75358c5565d0421790d2d7de197514982f74979ea07d179c4a706c9d0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:03:58 compute-0 podman[307219]: 2026-01-26 16:03:58.345872446 +0000 UTC m=+0.158555597 container init ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 26 16:03:58 compute-0 podman[307219]: 2026-01-26 16:03:58.351863603 +0000 UTC m=+0.164546754 container start ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:03:58 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307235]: [NOTICE]   (307239) : New worker (307241) forked
Jan 26 16:03:58 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307235]: [NOTICE]   (307239) : Loading success.
Jan 26 16:03:59 compute-0 ceph-mon[75140]: pgmap v1531: 305 pgs: 305 active+clean; 225 MiB data, 623 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 169 op/s
Jan 26 16:03:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:59.225 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:59.226 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:03:59.227 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.399 239969 DEBUG nova.compute.manager [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.399 239969 DEBUG oslo_concurrency.lockutils [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.399 239969 DEBUG oslo_concurrency.lockutils [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.399 239969 DEBUG oslo_concurrency.lockutils [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.400 239969 DEBUG nova.compute.manager [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.400 239969 WARNING nova.compute.manager [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.400 239969 DEBUG nova.compute.manager [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.400 239969 DEBUG oslo_concurrency.lockutils [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.400 239969 DEBUG oslo_concurrency.lockutils [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.401 239969 DEBUG oslo_concurrency.lockutils [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.401 239969 DEBUG nova.compute.manager [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:03:59 compute-0 nova_compute[239965]: 2026-01-26 16:03:59.401 239969 WARNING nova.compute.manager [req-8de28bbe-4a27-4108-b86e-693cd21437ee req-2e56389e-c32e-47a7-b1a8-c29778130fab a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:04:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.8 MiB/s wr, 296 op/s
Jan 26 16:04:00 compute-0 ceph-mon[75140]: pgmap v1532: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.8 MiB/s wr, 296 op/s
Jan 26 16:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:04:00 compute-0 ovn_controller[146046]: 2026-01-26T16:04:00Z|00739|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:04:00 compute-0 ovn_controller[146046]: 2026-01-26T16:04:00Z|00740|binding|INFO|Releasing lport c91d83e2-40e9-4d10-8b19-e75b3795dd04 from this chassis (sb_readonly=0)
Jan 26 16:04:00 compute-0 nova_compute[239965]: 2026-01-26 16:04:00.617 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:00 compute-0 nova_compute[239965]: 2026-01-26 16:04:00.904 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:01 compute-0 nova_compute[239965]: 2026-01-26 16:04:01.022 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443426.0203643, 7658ad77-dd99-4ff7-b88f-287f38b8344a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:01 compute-0 nova_compute[239965]: 2026-01-26 16:04:01.022 239969 INFO nova.compute.manager [-] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] VM Stopped (Lifecycle Event)
Jan 26 16:04:01 compute-0 nova_compute[239965]: 2026-01-26 16:04:01.042 239969 DEBUG nova.compute.manager [None req-a0fcac72-78a7-4574-9879-b5c0b6ae3e71 - - - - - -] [instance: 7658ad77-dd99-4ff7-b88f-287f38b8344a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Jan 26 16:04:02 compute-0 rsyslogd[1006]: imjournal: 16531 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 26 16:04:02 compute-0 nova_compute[239965]: 2026-01-26 16:04:02.335 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:03 compute-0 ceph-mon[75140]: pgmap v1533: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Jan 26 16:04:03 compute-0 nova_compute[239965]: 2026-01-26 16:04:03.992 239969 DEBUG oslo_concurrency.lockutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:03 compute-0 nova_compute[239965]: 2026-01-26 16:04:03.993 239969 DEBUG oslo_concurrency.lockutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:03 compute-0 nova_compute[239965]: 2026-01-26 16:04:03.993 239969 DEBUG oslo_concurrency.lockutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:03 compute-0 nova_compute[239965]: 2026-01-26 16:04:03.994 239969 DEBUG oslo_concurrency.lockutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:03 compute-0 nova_compute[239965]: 2026-01-26 16:04:03.994 239969 DEBUG oslo_concurrency.lockutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:03 compute-0 nova_compute[239965]: 2026-01-26 16:04:03.996 239969 INFO nova.compute.manager [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Terminating instance
Jan 26 16:04:03 compute-0 nova_compute[239965]: 2026-01-26 16:04:03.997 239969 DEBUG nova.compute.manager [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:04:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 185 op/s
Jan 26 16:04:04 compute-0 kernel: tapddfbd3d2-79 (unregistering): left promiscuous mode
Jan 26 16:04:04 compute-0 NetworkManager[48954]: <info>  [1769443444.0562] device (tapddfbd3d2-79): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:04 compute-0 ovn_controller[146046]: 2026-01-26T16:04:04Z|00741|binding|INFO|Releasing lport ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 from this chassis (sb_readonly=0)
Jan 26 16:04:04 compute-0 ovn_controller[146046]: 2026-01-26T16:04:04Z|00742|binding|INFO|Setting lport ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 down in Southbound
Jan 26 16:04:04 compute-0 ovn_controller[146046]: 2026-01-26T16:04:04Z|00743|binding|INFO|Removing iface tapddfbd3d2-79 ovn-installed in OVS
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.083 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.099 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:c7:6c 10.100.0.11'], port_security=['fa:16:3e:76:c7:6c 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '05ffa55e-3746-4da3-bf2d-0f3a64dd44a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-db63036e-9778-49aa-880f-9b900f3c2179', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8368d1e77bbb4de59a4d3088dda0a707', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'dcfedff7-784c-47e9-a3f0-b01264b35f17', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=859c58dc-99b6-4246-b01e-ede78d7a88e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=ddfbd3d2-79f1-43d0-bbbd-d89258fd9580) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.103 156105 INFO neutron.agent.ovn.metadata.agent [-] Port ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 in datapath db63036e-9778-49aa-880f-9b900f3c2179 unbound from our chassis
Jan 26 16:04:04 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004f.scope: Deactivated successfully.
Jan 26 16:04:04 compute-0 systemd[1]: machine-qemu\x2d91\x2dinstance\x2d0000004f.scope: Consumed 8.830s CPU time.
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.108 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network db63036e-9778-49aa-880f-9b900f3c2179, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:04:04 compute-0 systemd-machined[208061]: Machine qemu-91-instance-0000004f terminated.
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.109 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d03a9b-4030-4341-b872-dbd8a0af449b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.111 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 namespace which is not needed anymore
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.236 239969 INFO nova.virt.libvirt.driver [-] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Instance destroyed successfully.
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.237 239969 DEBUG nova.objects.instance [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lazy-loading 'resources' on Instance uuid 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.259 239969 DEBUG nova.virt.libvirt.vif [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:03:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1165411368',display_name='tempest-ServerDiskConfigTestJSON-server-1165411368',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1165411368',id=79,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8368d1e77bbb4de59a4d3088dda0a707',ramdisk_id='',reservation_id='r-a6bffdi0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',
image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-1476979264',owner_user_name='tempest-ServerDiskConfigTestJSON-1476979264-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:04:01Z,user_data=None,user_id='fc62be3f072e47058d3706393447ec4a',uuid=05ffa55e-3746-4da3-bf2d-0f3a64dd44a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.260 239969 DEBUG nova.network.os_vif_util [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converting VIF {"id": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "address": "fa:16:3e:76:c7:6c", "network": {"id": "db63036e-9778-49aa-880f-9b900f3c2179", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-365239627-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8368d1e77bbb4de59a4d3088dda0a707", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapddfbd3d2-79", "ovs_interfaceid": "ddfbd3d2-79f1-43d0-bbbd-d89258fd9580", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.261 239969 DEBUG nova.network.os_vif_util [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:76:c7:6c,bridge_name='br-int',has_traffic_filtering=True,id=ddfbd3d2-79f1-43d0-bbbd-d89258fd9580,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddfbd3d2-79') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.261 239969 DEBUG os_vif [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:c7:6c,bridge_name='br-int',has_traffic_filtering=True,id=ddfbd3d2-79f1-43d0-bbbd-d89258fd9580,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddfbd3d2-79') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.263 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.263 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapddfbd3d2-79, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.266 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.268 239969 INFO os_vif [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:76:c7:6c,bridge_name='br-int',has_traffic_filtering=True,id=ddfbd3d2-79f1-43d0-bbbd-d89258fd9580,network=Network(db63036e-9778-49aa-880f-9b900f3c2179),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapddfbd3d2-79')
Jan 26 16:04:04 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[307037]: [NOTICE]   (307043) : haproxy version is 2.8.14-c23fe91
Jan 26 16:04:04 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[307037]: [NOTICE]   (307043) : path to executable is /usr/sbin/haproxy
Jan 26 16:04:04 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[307037]: [ALERT]    (307043) : Current worker (307045) exited with code 143 (Terminated)
Jan 26 16:04:04 compute-0 neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179[307037]: [WARNING]  (307043) : All workers exited. Exiting... (0)
Jan 26 16:04:04 compute-0 systemd[1]: libpod-3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5.scope: Deactivated successfully.
Jan 26 16:04:04 compute-0 podman[307274]: 2026-01-26 16:04:04.301243472 +0000 UTC m=+0.077538032 container stop 3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:04:04 compute-0 podman[307274]: 2026-01-26 16:04:04.332396816 +0000 UTC m=+0.108691556 container died 3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5-userdata-shm.mount: Deactivated successfully.
Jan 26 16:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9a40707375177e6d3303ccb4f2b3cc6862564373d13351b2d3b6aea2c6bf1e7-merged.mount: Deactivated successfully.
Jan 26 16:04:04 compute-0 podman[307274]: 2026-01-26 16:04:04.382032192 +0000 UTC m=+0.158326752 container cleanup 3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:04:04 compute-0 systemd[1]: libpod-conmon-3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5.scope: Deactivated successfully.
Jan 26 16:04:04 compute-0 podman[307334]: 2026-01-26 16:04:04.46473957 +0000 UTC m=+0.059450138 container remove 3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.471 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3ffc6b-dddc-406d-93d6-f088228a0d69]: (4, ('Mon Jan 26 04:04:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5)\n3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5\nMon Jan 26 04:04:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 (3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5)\n3ed84669998c193bd702aa9a962a9049e6d9bba7f188b5197a02ec7db4fbb1a5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.474 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc091af-1eea-443c-8ef9-aa286af200ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.475 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdb63036e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:04 compute-0 kernel: tapdb63036e-90: left promiscuous mode
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.505 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.508 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f45579f3-e0b0-48ef-8e79-cdaa575fb631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.527 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d7d0ff-3efc-4d96-99f2-a3b58b16ddbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.529 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[402ef663-d0c3-451e-83ac-06bcecb026ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.550 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e2ddd490-4be6-47d8-979e-c5a895339cd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492619, 'reachable_time': 18363, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307350, 'error': None, 'target': 'ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:04 compute-0 systemd[1]: run-netns-ovnmeta\x2ddb63036e\x2d9778\x2d49aa\x2d880f\x2d9b900f3c2179.mount: Deactivated successfully.
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.556 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-db63036e-9778-49aa-880f-9b900f3c2179 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:04:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:04.556 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[eb3923d8-9c11-4c91-aede-672455b1fa6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.585 239969 INFO nova.virt.libvirt.driver [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Deleting instance files /var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_del
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.587 239969 INFO nova.virt.libvirt.driver [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Deletion of /var/lib/nova/instances/05ffa55e-3746-4da3-bf2d-0f3a64dd44a6_del complete
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.639 239969 INFO nova.compute.manager [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Took 0.64 seconds to destroy the instance on the hypervisor.
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.640 239969 DEBUG oslo.service.loopingcall [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.640 239969 DEBUG nova.compute.manager [-] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:04:04 compute-0 nova_compute[239965]: 2026-01-26 16:04:04.641 239969 DEBUG nova.network.neutron [-] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:04:05 compute-0 ceph-mon[75140]: pgmap v1534: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.6 MiB/s wr, 185 op/s
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.320 239969 DEBUG nova.compute.manager [req-f060c27a-39f1-4e0d-aaa5-513199d586e3 req-20e59190-866f-448a-a6ed-c895efca7767 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Received event network-vif-unplugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.321 239969 DEBUG oslo_concurrency.lockutils [req-f060c27a-39f1-4e0d-aaa5-513199d586e3 req-20e59190-866f-448a-a6ed-c895efca7767 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.321 239969 DEBUG oslo_concurrency.lockutils [req-f060c27a-39f1-4e0d-aaa5-513199d586e3 req-20e59190-866f-448a-a6ed-c895efca7767 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.321 239969 DEBUG oslo_concurrency.lockutils [req-f060c27a-39f1-4e0d-aaa5-513199d586e3 req-20e59190-866f-448a-a6ed-c895efca7767 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.322 239969 DEBUG nova.compute.manager [req-f060c27a-39f1-4e0d-aaa5-513199d586e3 req-20e59190-866f-448a-a6ed-c895efca7767 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] No waiting events found dispatching network-vif-unplugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.322 239969 DEBUG nova.compute.manager [req-f060c27a-39f1-4e0d-aaa5-513199d586e3 req-20e59190-866f-448a-a6ed-c895efca7767 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Received event network-vif-unplugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.644 239969 DEBUG nova.network.neutron [-] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.659 239969 INFO nova.compute.manager [-] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Took 1.02 seconds to deallocate network for instance.
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.697 239969 INFO nova.compute.manager [None req-62ba705e-9885-45fc-967f-44fe60e518d7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Pausing
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.698 239969 DEBUG nova.objects.instance [None req-62ba705e-9885-45fc-967f-44fe60e518d7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'flavor' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.711 239969 DEBUG oslo_concurrency.lockutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.711 239969 DEBUG oslo_concurrency.lockutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.732 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443445.7322176, 492acb7a-42e1-4918-a973-352f9d8251d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.732 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Paused (Lifecycle Event)
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.734 239969 DEBUG nova.compute.manager [None req-62ba705e-9885-45fc-967f-44fe60e518d7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.752 239969 DEBUG nova.compute.manager [req-196be369-c615-477f-8764-5a54fa513dc7 req-fe99e52d-5ecf-462a-9482-c041f2f1e7ba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Received event network-vif-deleted-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.765 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.770 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.803 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.811 239969 DEBUG oslo_concurrency.processutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:05 compute-0 nova_compute[239965]: 2026-01-26 16:04:05.906 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 210 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 195 op/s
Jan 26 16:04:06 compute-0 podman[307371]: 2026-01-26 16:04:06.388966903 +0000 UTC m=+0.066709487 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:04:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:04:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/918416863' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:06 compute-0 nova_compute[239965]: 2026-01-26 16:04:06.443 239969 DEBUG oslo_concurrency.processutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:06 compute-0 nova_compute[239965]: 2026-01-26 16:04:06.448 239969 DEBUG nova.compute.provider_tree [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:04:06 compute-0 podman[307372]: 2026-01-26 16:04:06.450945572 +0000 UTC m=+0.133232407 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:04:06 compute-0 nova_compute[239965]: 2026-01-26 16:04:06.465 239969 DEBUG nova.scheduler.client.report [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:04:06 compute-0 nova_compute[239965]: 2026-01-26 16:04:06.487 239969 DEBUG oslo_concurrency.lockutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:06 compute-0 nova_compute[239965]: 2026-01-26 16:04:06.521 239969 INFO nova.scheduler.client.report [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Deleted allocations for instance 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6
Jan 26 16:04:06 compute-0 nova_compute[239965]: 2026-01-26 16:04:06.582 239969 DEBUG oslo_concurrency.lockutils [None req-98aa9a04-d319-4b53-8c14-0ca7e3e24740 fc62be3f072e47058d3706393447ec4a 8368d1e77bbb4de59a4d3088dda0a707 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:07 compute-0 ceph-mon[75140]: pgmap v1535: 305 pgs: 305 active+clean; 210 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 195 op/s
Jan 26 16:04:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/918416863' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:07 compute-0 nova_compute[239965]: 2026-01-26 16:04:07.506 239969 DEBUG nova.compute.manager [req-a7cc0da2-6c0b-454a-97f0-2b47a2f32be4 req-33f01a6a-a093-455d-8aa4-6130f276ca7b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Received event network-vif-plugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:07 compute-0 nova_compute[239965]: 2026-01-26 16:04:07.507 239969 DEBUG oslo_concurrency.lockutils [req-a7cc0da2-6c0b-454a-97f0-2b47a2f32be4 req-33f01a6a-a093-455d-8aa4-6130f276ca7b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:07 compute-0 nova_compute[239965]: 2026-01-26 16:04:07.507 239969 DEBUG oslo_concurrency.lockutils [req-a7cc0da2-6c0b-454a-97f0-2b47a2f32be4 req-33f01a6a-a093-455d-8aa4-6130f276ca7b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:07 compute-0 nova_compute[239965]: 2026-01-26 16:04:07.508 239969 DEBUG oslo_concurrency.lockutils [req-a7cc0da2-6c0b-454a-97f0-2b47a2f32be4 req-33f01a6a-a093-455d-8aa4-6130f276ca7b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "05ffa55e-3746-4da3-bf2d-0f3a64dd44a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:07 compute-0 nova_compute[239965]: 2026-01-26 16:04:07.508 239969 DEBUG nova.compute.manager [req-a7cc0da2-6c0b-454a-97f0-2b47a2f32be4 req-33f01a6a-a093-455d-8aa4-6130f276ca7b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] No waiting events found dispatching network-vif-plugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:07 compute-0 nova_compute[239965]: 2026-01-26 16:04:07.508 239969 WARNING nova.compute.manager [req-a7cc0da2-6c0b-454a-97f0-2b47a2f32be4 req-33f01a6a-a093-455d-8aa4-6130f276ca7b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Received unexpected event network-vif-plugged-ddfbd3d2-79f1-43d0-bbbd-d89258fd9580 for instance with vm_state deleted and task_state None.
Jan 26 16:04:07 compute-0 nova_compute[239965]: 2026-01-26 16:04:07.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 210 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 KiB/s wr, 157 op/s
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.099 239969 INFO nova.compute.manager [None req-c2e3f1f9-5508-4ae4-9412-6ccb0c0b90e4 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Unpausing
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.101 239969 DEBUG nova.objects.instance [None req-c2e3f1f9-5508-4ae4-9412-6ccb0c0b90e4 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'flavor' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.132 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443448.132137, 492acb7a-42e1-4918-a973-352f9d8251d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.133 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Resumed (Lifecycle Event)
Jan 26 16:04:08 compute-0 virtqemud[240263]: argument unsupported: QEMU guest agent is not configured
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.141 239969 DEBUG nova.virt.libvirt.guest [None req-c2e3f1f9-5508-4ae4-9412-6ccb0c0b90e4 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.141 239969 DEBUG nova.compute.manager [None req-c2e3f1f9-5508-4ae4-9412-6ccb0c0b90e4 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.164 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.168 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.193 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (unpausing). Skip.
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.979 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443433.9783847, c85b4b5d-0f7d-4fb0-b02f-c841ee20544a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:08 compute-0 nova_compute[239965]: 2026-01-26 16:04:08.979 239969 INFO nova.compute.manager [-] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] VM Stopped (Lifecycle Event)
Jan 26 16:04:09 compute-0 ceph-mon[75140]: pgmap v1536: 305 pgs: 305 active+clean; 210 MiB data, 603 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.2 KiB/s wr, 157 op/s
Jan 26 16:04:09 compute-0 nova_compute[239965]: 2026-01-26 16:04:09.167 239969 DEBUG nova.compute.manager [None req-368c4cb2-8314-49a2-8a56-df81f8bf1d64 - - - - - -] [instance: c85b4b5d-0f7d-4fb0-b02f-c841ee20544a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:09 compute-0 nova_compute[239965]: 2026-01-26 16:04:09.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 KiB/s wr, 174 op/s
Jan 26 16:04:10 compute-0 nova_compute[239965]: 2026-01-26 16:04:10.908 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:11 compute-0 ceph-mon[75140]: pgmap v1537: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 KiB/s wr, 174 op/s
Jan 26 16:04:11 compute-0 nova_compute[239965]: 2026-01-26 16:04:11.352 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:11 compute-0 ovn_controller[146046]: 2026-01-26T16:04:11Z|00744|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:04:11 compute-0 nova_compute[239965]: 2026-01-26 16:04:11.536 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 1.2 KiB/s wr, 47 op/s
Jan 26 16:04:12 compute-0 sshd-session[307418]: Invalid user solana from 45.148.10.240 port 56608
Jan 26 16:04:12 compute-0 sshd-session[307418]: Connection closed by invalid user solana 45.148.10.240 port 56608 [preauth]
Jan 26 16:04:13 compute-0 ceph-mon[75140]: pgmap v1538: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 583 KiB/s rd, 1.2 KiB/s wr, 47 op/s
Jan 26 16:04:13 compute-0 ovn_controller[146046]: 2026-01-26T16:04:13Z|00086|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:04:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:04:14 compute-0 nova_compute[239965]: 2026-01-26 16:04:14.304 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:15 compute-0 ceph-mon[75140]: pgmap v1539: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:04:15 compute-0 nova_compute[239965]: 2026-01-26 16:04:15.910 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 584 KiB/s rd, 12 KiB/s wr, 72 op/s
Jan 26 16:04:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:17 compute-0 ceph-mon[75140]: pgmap v1540: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 584 KiB/s rd, 12 KiB/s wr, 72 op/s
Jan 26 16:04:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 12 KiB/s wr, 62 op/s
Jan 26 16:04:18 compute-0 ceph-mon[75140]: pgmap v1541: 305 pgs: 305 active+clean; 169 MiB data, 586 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 12 KiB/s wr, 62 op/s
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.234 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443444.2335923, 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.235 239969 INFO nova.compute.manager [-] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] VM Stopped (Lifecycle Event)
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.258 239969 DEBUG nova.compute.manager [None req-0b467327-36f0-4342-94cf-a73cec66f755 - - - - - -] [instance: 05ffa55e-3746-4da3-bf2d-0f3a64dd44a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.322 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.322 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.348 239969 DEBUG nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.445 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.446 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.457 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:04:19 compute-0 nova_compute[239965]: 2026-01-26 16:04:19.458 239969 INFO nova.compute.claims [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:04:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 169 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 13 KiB/s wr, 62 op/s
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.038 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:04:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/918806470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.609 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.615 239969 DEBUG nova.compute.provider_tree [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.619 239969 DEBUG oslo_concurrency.lockutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.620 239969 DEBUG oslo_concurrency.lockutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.620 239969 INFO nova.compute.manager [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Rebooting instance
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.637 239969 DEBUG nova.scheduler.client.report [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.642 239969 DEBUG oslo_concurrency.lockutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.642 239969 DEBUG oslo_concurrency.lockutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.642 239969 DEBUG nova.network.neutron [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.681 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.682 239969 DEBUG nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.746 239969 DEBUG nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.746 239969 DEBUG nova.network.neutron [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.767 239969 INFO nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.784 239969 DEBUG nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.869 239969 DEBUG nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.870 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.871 239969 INFO nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Creating image(s)
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.892 239969 DEBUG nova.storage.rbd_utils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.916 239969 DEBUG nova.storage.rbd_utils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.938 239969 DEBUG nova.storage.rbd_utils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.941 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:20 compute-0 nova_compute[239965]: 2026-01-26 16:04:20.979 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.025 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.026 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.026 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.026 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.047 239969 DEBUG nova.storage.rbd_utils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.051 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 477642fb-5695-4387-944d-587e17cf3ed8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:21 compute-0 ceph-mon[75140]: pgmap v1542: 305 pgs: 305 active+clean; 169 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 544 KiB/s rd, 13 KiB/s wr, 62 op/s
Jan 26 16:04:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/918806470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.214 239969 DEBUG nova.policy [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a6914254de3499da1b987d99b6e0ff6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd91082a727584818a01c3c60e7f7ef73', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.325 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 477642fb-5695-4387-944d-587e17cf3ed8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.274s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.380 239969 DEBUG nova.storage.rbd_utils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] resizing rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:04:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.459 239969 DEBUG nova.objects.instance [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'migration_context' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.474 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.474 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Ensure instance console log exists: /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.475 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.475 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:21 compute-0 nova_compute[239965]: 2026-01-26 16:04:21.476 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 169 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 26 16:04:22 compute-0 nova_compute[239965]: 2026-01-26 16:04:22.420 239969 DEBUG nova.network.neutron [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Successfully created port: 5083d232-fbe1-4440-91de-c2c31fae4b6b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:04:22 compute-0 nova_compute[239965]: 2026-01-26 16:04:22.630 239969 DEBUG nova.network.neutron [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:04:22 compute-0 nova_compute[239965]: 2026-01-26 16:04:22.702 239969 DEBUG oslo_concurrency.lockutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:04:22 compute-0 nova_compute[239965]: 2026-01-26 16:04:22.703 239969 DEBUG nova.compute.manager [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:22 compute-0 kernel: tap2a2de29b-29 (unregistering): left promiscuous mode
Jan 26 16:04:22 compute-0 NetworkManager[48954]: <info>  [1769443462.8909] device (tap2a2de29b-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:04:22 compute-0 nova_compute[239965]: 2026-01-26 16:04:22.904 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:22 compute-0 ovn_controller[146046]: 2026-01-26T16:04:22Z|00745|binding|INFO|Releasing lport 2a2de29b-29c0-439c-8236-2f566c4c89aa from this chassis (sb_readonly=0)
Jan 26 16:04:22 compute-0 ovn_controller[146046]: 2026-01-26T16:04:22Z|00746|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa down in Southbound
Jan 26 16:04:22 compute-0 ovn_controller[146046]: 2026-01-26T16:04:22Z|00747|binding|INFO|Removing iface tap2a2de29b-29 ovn-installed in OVS
Jan 26 16:04:22 compute-0 nova_compute[239965]: 2026-01-26 16:04:22.906 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:22 compute-0 nova_compute[239965]: 2026-01-26 16:04:22.906 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:22 compute-0 nova_compute[239965]: 2026-01-26 16:04:22.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:22 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Jan 26 16:04:22 compute-0 systemd[1]: machine-qemu\x2d92\x2dinstance\x2d0000004c.scope: Consumed 13.727s CPU time.
Jan 26 16:04:22 compute-0 systemd-machined[208061]: Machine qemu-92-instance-0000004c terminated.
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.076 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance destroyed successfully.
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.077 239969 DEBUG nova.objects.instance [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'resources' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.084 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.085 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.086 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.087 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[78e2f019-80cc-47ab-9819-779793106aa5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.088 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace which is not needed anymore
Jan 26 16:04:23 compute-0 ceph-mon[75140]: pgmap v1543: 305 pgs: 305 active+clean; 169 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.099 239969 DEBUG nova.virt.libvirt.vif [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:04:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.100 239969 DEBUG nova.network.os_vif_util [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.101 239969 DEBUG nova.network.os_vif_util [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.101 239969 DEBUG os_vif [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.104 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a2de29b-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.106 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.108 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.110 239969 INFO os_vif [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.116 239969 DEBUG nova.virt.libvirt.driver [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Start _get_guest_xml network_info=[{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.121 239969 WARNING nova.virt.libvirt.driver [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.139 239969 DEBUG nova.virt.libvirt.host [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.140 239969 DEBUG nova.virt.libvirt.host [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.145 239969 DEBUG nova.virt.libvirt.host [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.145 239969 DEBUG nova.virt.libvirt.host [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.145 239969 DEBUG nova.virt.libvirt.driver [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.146 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.146 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.147 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.147 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.147 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.147 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.148 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.148 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.148 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.148 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.149 239969 DEBUG nova.virt.hardware [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.149 239969 DEBUG nova.objects.instance [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.172 239969 DEBUG oslo_concurrency.processutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:23 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307235]: [NOTICE]   (307239) : haproxy version is 2.8.14-c23fe91
Jan 26 16:04:23 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307235]: [NOTICE]   (307239) : path to executable is /usr/sbin/haproxy
Jan 26 16:04:23 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307235]: [ALERT]    (307239) : Current worker (307241) exited with code 143 (Terminated)
Jan 26 16:04:23 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307235]: [WARNING]  (307239) : All workers exited. Exiting... (0)
Jan 26 16:04:23 compute-0 systemd[1]: libpod-ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f.scope: Deactivated successfully.
Jan 26 16:04:23 compute-0 conmon[307235]: conmon ec1c4c0708ddf65a2479 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f.scope/container/memory.events
Jan 26 16:04:23 compute-0 podman[307644]: 2026-01-26 16:04:23.256071716 +0000 UTC m=+0.053799310 container died ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f-userdata-shm.mount: Deactivated successfully.
Jan 26 16:04:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-530507c75358c5565d0421790d2d7de197514982f74979ea07d179c4a706c9d0-merged.mount: Deactivated successfully.
Jan 26 16:04:23 compute-0 podman[307644]: 2026-01-26 16:04:23.299003668 +0000 UTC m=+0.096731303 container cleanup ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:04:23 compute-0 systemd[1]: libpod-conmon-ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f.scope: Deactivated successfully.
Jan 26 16:04:23 compute-0 podman[307675]: 2026-01-26 16:04:23.368347479 +0000 UTC m=+0.047482855 container remove ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.375 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b1ffe59a-a24c-4641-988b-01628c9d5bc2]: (4, ('Mon Jan 26 04:04:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f)\nec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f\nMon Jan 26 04:04:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (ec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f)\nec1c4c0708ddf65a247975200cb97374b9e5c9e06a97314cf0e136f454424a6f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.378 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2419b1b1-2a8b-43b3-a5c9-def204336205]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.378 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.380 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:23 compute-0 kernel: tap84c1ad93-10: left promiscuous mode
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.389 239969 DEBUG nova.compute.manager [req-a605b1f8-7999-44f0-b669-966f4249dd3b req-a5e82bf2-307a-4e63-9a4e-fb7f5e528270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.390 239969 DEBUG oslo_concurrency.lockutils [req-a605b1f8-7999-44f0-b669-966f4249dd3b req-a5e82bf2-307a-4e63-9a4e-fb7f5e528270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.390 239969 DEBUG oslo_concurrency.lockutils [req-a605b1f8-7999-44f0-b669-966f4249dd3b req-a5e82bf2-307a-4e63-9a4e-fb7f5e528270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.390 239969 DEBUG oslo_concurrency.lockutils [req-a605b1f8-7999-44f0-b669-966f4249dd3b req-a5e82bf2-307a-4e63-9a4e-fb7f5e528270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.391 239969 DEBUG nova.compute.manager [req-a605b1f8-7999-44f0-b669-966f4249dd3b req-a5e82bf2-307a-4e63-9a4e-fb7f5e528270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.391 239969 WARNING nova.compute.manager [req-a605b1f8-7999-44f0-b669-966f4249dd3b req-a5e82bf2-307a-4e63-9a4e-fb7f5e528270 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state reboot_started_hard.
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.395 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.396 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.399 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a727919f-64f6-4929-9d2c-9f5f541aefc4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.417 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6654a0ef-41cc-40e8-b880-b98806a93665]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.418 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ccdbd56d-528e-47e8-9f7d-d953b0f89acb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.435 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[73fba6fa-500b-4751-8cd8-bc3c9940e119]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 492834, 'reachable_time': 36155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307704, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d84c1ad93\x2d16e1\x2d4751\x2dac9b\x2dceeaa2d50c1d.mount: Deactivated successfully.
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.440 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:04:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:23.441 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[b0bcf1c0-d97c-41c5-a1f5-3546798fc8eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3476342848' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.735 239969 DEBUG oslo_concurrency.processutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:23 compute-0 nova_compute[239965]: 2026-01-26 16:04:23.769 239969 DEBUG oslo_concurrency.processutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 169 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 26 16:04:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3476342848' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.257 239969 DEBUG nova.network.neutron [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Successfully updated port: 5083d232-fbe1-4440-91de-c2c31fae4b6b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.272 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "refresh_cache-477642fb-5695-4387-944d-587e17cf3ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.272 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquired lock "refresh_cache-477642fb-5695-4387-944d-587e17cf3ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.272 239969 DEBUG nova.network.neutron [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:04:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802762074' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.323 239969 DEBUG oslo_concurrency.processutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.325 239969 DEBUG nova.virt.libvirt.vif [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:04:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.325 239969 DEBUG nova.network.os_vif_util [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.326 239969 DEBUG nova.network.os_vif_util [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.327 239969 DEBUG nova.objects.instance [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.343 239969 DEBUG nova.virt.libvirt.driver [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <uuid>492acb7a-42e1-4918-a973-352f9d8251d0</uuid>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <name>instance-0000004c</name>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestJSON-server-1205900142</nova:name>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:04:23</nova:creationTime>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <nova:user uuid="59f8ee09903a4e0a812c3d9e013996bd">tempest-ServerActionsTestJSON-274649005-project-member</nova:user>
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <nova:project uuid="94771806cd0d4b5db117956e09fea9e6">tempest-ServerActionsTestJSON-274649005</nova:project>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <nova:port uuid="2a2de29b-29c0-439c-8236-2f566c4c89aa">
Jan 26 16:04:24 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <system>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <entry name="serial">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <entry name="uuid">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     </system>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <os>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   </os>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <features>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   </features>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk">
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk.config">
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:24 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:f8:ca:89"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <target dev="tap2a2de29b-29"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/console.log" append="off"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <video>
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     </video>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:04:24 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:04:24 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:04:24 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:04:24 compute-0 nova_compute[239965]: </domain>
Jan 26 16:04:24 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.345 239969 DEBUG nova.virt.libvirt.driver [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.346 239969 DEBUG nova.virt.libvirt.driver [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.348 239969 DEBUG nova.virt.libvirt.vif [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:04:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.349 239969 DEBUG nova.network.os_vif_util [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.350 239969 DEBUG nova.network.os_vif_util [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.351 239969 DEBUG os_vif [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.352 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.354 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.355 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.359 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.360 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a2de29b-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.361 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a2de29b-29, col_values=(('external_ids', {'iface-id': '2a2de29b-29c0-439c-8236-2f566c4c89aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:ca:89', 'vm-uuid': '492acb7a-42e1-4918-a973-352f9d8251d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:24 compute-0 NetworkManager[48954]: <info>  [1769443464.3827] manager: (tap2a2de29b-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/329)
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.385 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.388 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.389 239969 INFO os_vif [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:04:24 compute-0 kernel: tap2a2de29b-29: entered promiscuous mode
Jan 26 16:04:24 compute-0 systemd-udevd[307610]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:04:24 compute-0 NetworkManager[48954]: <info>  [1769443464.4725] manager: (tap2a2de29b-29): new Tun device (/org/freedesktop/NetworkManager/Devices/330)
Jan 26 16:04:24 compute-0 ovn_controller[146046]: 2026-01-26T16:04:24Z|00748|binding|INFO|Claiming lport 2a2de29b-29c0-439c-8236-2f566c4c89aa for this chassis.
Jan 26 16:04:24 compute-0 ovn_controller[146046]: 2026-01-26T16:04:24Z|00749|binding|INFO|2a2de29b-29c0-439c-8236-2f566c4c89aa: Claiming fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.472 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.482 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '9', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.483 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d bound to our chassis
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.484 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:04:24 compute-0 NetworkManager[48954]: <info>  [1769443464.4882] device (tap2a2de29b-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:04:24 compute-0 NetworkManager[48954]: <info>  [1769443464.4891] device (tap2a2de29b-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:04:24 compute-0 ovn_controller[146046]: 2026-01-26T16:04:24Z|00750|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa ovn-installed in OVS
Jan 26 16:04:24 compute-0 ovn_controller[146046]: 2026-01-26T16:04:24Z|00751|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa up in Southbound
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.494 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.499 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb903ccb-b716-403e-bb3d-54e51db6e669]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.500 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84c1ad93-11 in ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.502 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84c1ad93-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.502 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7e72f751-5a74-4ff3-a274-0da5f4208930]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.503 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[03e1202b-d4c8-40b3-9498-767ff4939ed5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 systemd-machined[208061]: New machine qemu-93-instance-0000004c.
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.516 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[6d9e8610-399b-4781-a99b-858d1486bc89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 systemd[1]: Started Virtual Machine qemu-93-instance-0000004c.
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.544 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[59eabcf2-1751-4f53-99cb-e8c0158a4ce7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.574 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3bf222-c53a-4c72-9b08-27a1539b43a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.581 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[83eb3a80-1120-44aa-8bc0-5d4ee06deab2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 NetworkManager[48954]: <info>  [1769443464.5823] manager: (tap84c1ad93-10): new Veth device (/org/freedesktop/NetworkManager/Devices/331)
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.583 239969 DEBUG nova.network.neutron [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.612 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2f88de5f-4ca8-4695-afb7-962125c9a782]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.615 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[604fb07f-1228-4c2e-9eed-dd4837e23ef4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 NetworkManager[48954]: <info>  [1769443464.6432] device (tap84c1ad93-10): carrier: link connected
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.647 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[589c90dd-e214-43cc-9921-a22890a11b94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.671 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0a5772ea-993a-46e6-a157-669b44a0177c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495544, 'reachable_time': 41449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307793, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.693 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[497eccfb-481c-489f-9284-89fc547a091f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:ad50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495544, 'tstamp': 495544}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307794, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.716 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4fbe2538-327f-429b-a20b-02e0afab2d9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495544, 'reachable_time': 41449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307803, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.759 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b56d20b0-6ffc-4f50-80da-4c78a27aaf55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.820 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e66e4f28-04cc-4ca3-8cca-1b7cb9ec5bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.822 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.822 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.823 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:24 compute-0 kernel: tap84c1ad93-10: entered promiscuous mode
Jan 26 16:04:24 compute-0 NetworkManager[48954]: <info>  [1769443464.8258] manager: (tap84c1ad93-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/332)
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.825 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.828 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:24 compute-0 ovn_controller[146046]: 2026-01-26T16:04:24Z|00752|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.829 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.846 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.846 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.847 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ab7298-3af8-4a89-bc66-3c117b481f96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.848 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:04:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:24.849 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'env', 'PROCESS_TAG=haproxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.879 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 492acb7a-42e1-4918-a973-352f9d8251d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.880 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443464.8789394, 492acb7a-42e1-4918-a973-352f9d8251d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.880 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Resumed (Lifecycle Event)
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.883 239969 DEBUG nova.compute.manager [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.890 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance rebooted successfully.
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.891 239969 DEBUG nova.compute.manager [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.902 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.906 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.932 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.933 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443464.8803349, 492acb7a-42e1-4918-a973-352f9d8251d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.933 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Started (Lifecycle Event)
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.965 239969 DEBUG oslo_concurrency.lockutils [None req-7c528422-bc7d-4380-8668-e37866f20d43 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.968 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:24 compute-0 nova_compute[239965]: 2026-01-26 16:04:24.972 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:25 compute-0 ceph-mon[75140]: pgmap v1544: 305 pgs: 305 active+clean; 169 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 26 KiB/s wr, 45 op/s
Jan 26 16:04:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3802762074' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:25 compute-0 podman[307869]: 2026-01-26 16:04:25.266465391 +0000 UTC m=+0.057756437 container create ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:04:25 compute-0 systemd[1]: Started libpod-conmon-ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16.scope.
Jan 26 16:04:25 compute-0 podman[307869]: 2026-01-26 16:04:25.234439085 +0000 UTC m=+0.025730211 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:04:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420d68e67bdcbb5059b11ef9cc6a5bda0c5358702ff5c1d0625d7e0103e04fe2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.355 239969 DEBUG nova.network.neutron [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Updating instance_info_cache with network_info: [{"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:04:25 compute-0 podman[307869]: 2026-01-26 16:04:25.365773535 +0000 UTC m=+0.157064581 container init ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:04:25 compute-0 podman[307869]: 2026-01-26 16:04:25.377363969 +0000 UTC m=+0.168655005 container start ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.397 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Releasing lock "refresh_cache-477642fb-5695-4387-944d-587e17cf3ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.398 239969 DEBUG nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance network_info: |[{"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.400 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Start _get_guest_xml network_info=[{"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:04:25 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307884]: [NOTICE]   (307888) : New worker (307890) forked
Jan 26 16:04:25 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307884]: [NOTICE]   (307888) : Loading success.
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.405 239969 WARNING nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.412 239969 DEBUG nova.virt.libvirt.host [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.413 239969 DEBUG nova.virt.libvirt.host [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.418 239969 DEBUG nova.virt.libvirt.host [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.419 239969 DEBUG nova.virt.libvirt.host [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.420 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.420 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.420 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.421 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.421 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.421 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.421 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.421 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.422 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.422 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.422 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.423 239969 DEBUG nova.virt.hardware [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.426 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.475 239969 DEBUG nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.475 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.476 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.476 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.476 239969 DEBUG nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.476 239969 WARNING nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.476 239969 DEBUG nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-changed-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.476 239969 DEBUG nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Refreshing instance network info cache due to event network-changed-5083d232-fbe1-4440-91de-c2c31fae4b6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.477 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-477642fb-5695-4387-944d-587e17cf3ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.477 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-477642fb-5695-4387-944d-587e17cf3ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.477 239969 DEBUG nova.network.neutron [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Refreshing network info cache for port 5083d232-fbe1-4440-91de-c2c31fae4b6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:04:25 compute-0 nova_compute[239965]: 2026-01-26 16:04:25.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464026170' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:26 compute-0 nova_compute[239965]: 2026-01-26 16:04:26.016 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 902 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 26 16:04:26 compute-0 nova_compute[239965]: 2026-01-26 16:04:26.040 239969 DEBUG nova.storage.rbd_utils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:26 compute-0 nova_compute[239965]: 2026-01-26 16:04:26.044 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/464026170' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3826231483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:26 compute-0 nova_compute[239965]: 2026-01-26 16:04:26.586 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:26 compute-0 nova_compute[239965]: 2026-01-26 16:04:26.588 239969 DEBUG nova.virt.libvirt.vif [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-5920657',display_name='tempest-ServerRescueTestJSON-server-5920657',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-5920657',id=80,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d91082a727584818a01c3c60e7f7ef73',ramdisk_id='',reservation_id='r-415rexkh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1881999610',owner_user_name='tempest-ServerRescueTestJSON-1881999610-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:04:20Z,user_data=None,user_id='7a6914254de3499da1b987d99b6e0ff6',uuid=477642fb-5695-4387-944d-587e17cf3ed8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:04:26 compute-0 nova_compute[239965]: 2026-01-26 16:04:26.588 239969 DEBUG nova.network.os_vif_util [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converting VIF {"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:26 compute-0 nova_compute[239965]: 2026-01-26 16:04:26.589 239969 DEBUG nova.network.os_vif_util [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:14:3e,bridge_name='br-int',has_traffic_filtering=True,id=5083d232-fbe1-4440-91de-c2c31fae4b6b,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5083d232-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:26 compute-0 nova_compute[239965]: 2026-01-26 16:04:26.591 239969 DEBUG nova.objects.instance [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:27 compute-0 ceph-mon[75140]: pgmap v1545: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 902 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 26 16:04:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3826231483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:27 compute-0 nova_compute[239965]: 2026-01-26 16:04:27.364 239969 DEBUG nova.network.neutron [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Updated VIF entry in instance network info cache for port 5083d232-fbe1-4440-91de-c2c31fae4b6b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:04:27 compute-0 nova_compute[239965]: 2026-01-26 16:04:27.364 239969 DEBUG nova.network.neutron [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Updating instance_info_cache with network_info: [{"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:04:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.614 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <uuid>477642fb-5695-4387-944d-587e17cf3ed8</uuid>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <name>instance-00000050</name>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerRescueTestJSON-server-5920657</nova:name>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:04:25</nova:creationTime>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <nova:user uuid="7a6914254de3499da1b987d99b6e0ff6">tempest-ServerRescueTestJSON-1881999610-project-member</nova:user>
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <nova:project uuid="d91082a727584818a01c3c60e7f7ef73">tempest-ServerRescueTestJSON-1881999610</nova:project>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <nova:port uuid="5083d232-fbe1-4440-91de-c2c31fae4b6b">
Jan 26 16:04:28 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <system>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <entry name="serial">477642fb-5695-4387-944d-587e17cf3ed8</entry>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <entry name="uuid">477642fb-5695-4387-944d-587e17cf3ed8</entry>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     </system>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <os>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   </os>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <features>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   </features>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/477642fb-5695-4387-944d-587e17cf3ed8_disk">
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/477642fb-5695-4387-944d-587e17cf3ed8_disk.config">
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:28 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e7:14:3e"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <target dev="tap5083d232-fb"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/console.log" append="off"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <video>
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     </video>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:04:28 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:04:28 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:04:28 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:04:28 compute-0 nova_compute[239965]: </domain>
Jan 26 16:04:28 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.614 239969 DEBUG nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Preparing to wait for external event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.614 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.615 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.615 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.616 239969 DEBUG nova.virt.libvirt.vif [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-5920657',display_name='tempest-ServerRescueTestJSON-server-5920657',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-5920657',id=80,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d91082a727584818a01c3c60e7f7ef73',ramdisk_id='',reservation_id='r-415rexkh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1881999610',owner_user_name='tempest-ServerRescueTestJSON-1881999610-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:04:20Z,user_data=None,user_id='7a6914254de3499da1b987d99b6e0ff6',uuid=477642fb-5695-4387-944d-587e17cf3ed8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.616 239969 DEBUG nova.network.os_vif_util [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converting VIF {"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.617 239969 DEBUG nova.network.os_vif_util [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:14:3e,bridge_name='br-int',has_traffic_filtering=True,id=5083d232-fbe1-4440-91de-c2c31fae4b6b,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5083d232-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.617 239969 DEBUG os_vif [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:14:3e,bridge_name='br-int',has_traffic_filtering=True,id=5083d232-fbe1-4440-91de-c2c31fae4b6b,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5083d232-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.619 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.619 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.620 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.622 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-477642fb-5695-4387-944d-587e17cf3ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:04:28
Jan 26 16:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', '.mgr', 'vms', 'volumes']
Jan 26 16:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.622 239969 DEBUG nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.623 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.623 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.623 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.623 239969 DEBUG nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.624 239969 WARNING nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.624 239969 DEBUG nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.624 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.624 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.625 239969 DEBUG oslo_concurrency.lockutils [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.625 239969 DEBUG nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.625 239969 WARNING nova.compute.manager [req-fb752ffa-c3c0-45b5-8c22-bcc2bbce0413 req-b0e4ff90-dbc0-445a-bdf1-56f722aea73a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.627 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.627 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5083d232-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.628 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5083d232-fb, col_values=(('external_ids', {'iface-id': '5083d232-fbe1-4440-91de-c2c31fae4b6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:14:3e', 'vm-uuid': '477642fb-5695-4387-944d-587e17cf3ed8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.629 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:28 compute-0 NetworkManager[48954]: <info>  [1769443468.6307] manager: (tap5083d232-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/333)
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.634 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.637 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.638 239969 INFO os_vif [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:14:3e,bridge_name='br-int',has_traffic_filtering=True,id=5083d232-fbe1-4440-91de-c2c31fae4b6b,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5083d232-fb')
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.695 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.695 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.695 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No VIF found with MAC fa:16:3e:e7:14:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.696 239969 INFO nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Using config drive
Jan 26 16:04:28 compute-0 nova_compute[239965]: 2026-01-26 16:04:28.719 239969 DEBUG nova.storage.rbd_utils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:29 compute-0 nova_compute[239965]: 2026-01-26 16:04:29.021 239969 INFO nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Creating config drive at /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config
Jan 26 16:04:29 compute-0 nova_compute[239965]: 2026-01-26 16:04:29.026 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu4l848x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:29 compute-0 ceph-mon[75140]: pgmap v1546: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Jan 26 16:04:29 compute-0 nova_compute[239965]: 2026-01-26 16:04:29.169 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmu4l848x" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:29 compute-0 nova_compute[239965]: 2026-01-26 16:04:29.198 239969 DEBUG nova.storage.rbd_utils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:29 compute-0 nova_compute[239965]: 2026-01-26 16:04:29.203 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config 477642fb-5695-4387-944d-587e17cf3ed8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:29 compute-0 sudo[307995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:04:29 compute-0 sudo[307995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:29 compute-0 sudo[307995]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:29 compute-0 sudo[308029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 16:04:29 compute-0 sudo[308029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:29 compute-0 nova_compute[239965]: 2026-01-26 16:04:29.357 239969 DEBUG oslo_concurrency.processutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config 477642fb-5695-4387-944d-587e17cf3ed8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:29 compute-0 nova_compute[239965]: 2026-01-26 16:04:29.359 239969 INFO nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Deleting local config drive /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config because it was imported into RBD.
Jan 26 16:04:29 compute-0 NetworkManager[48954]: <info>  [1769443469.4171] manager: (tap5083d232-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/334)
Jan 26 16:04:29 compute-0 kernel: tap5083d232-fb: entered promiscuous mode
Jan 26 16:04:29 compute-0 nova_compute[239965]: 2026-01-26 16:04:29.422 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:29 compute-0 ovn_controller[146046]: 2026-01-26T16:04:29Z|00753|binding|INFO|Claiming lport 5083d232-fbe1-4440-91de-c2c31fae4b6b for this chassis.
Jan 26 16:04:29 compute-0 ovn_controller[146046]: 2026-01-26T16:04:29Z|00754|binding|INFO|5083d232-fbe1-4440-91de-c2c31fae4b6b: Claiming fa:16:3e:e7:14:3e 10.100.0.7
Jan 26 16:04:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:29.435 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:14:3e 10.100.0.7'], port_security=['fa:16:3e:e7:14:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '477642fb-5695-4387-944d-587e17cf3ed8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5083d232-fbe1-4440-91de-c2c31fae4b6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:29.437 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5083d232-fbe1-4440-91de-c2c31fae4b6b in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 bound to our chassis
Jan 26 16:04:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:29.437 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:04:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:29.438 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c54fa378-7df6-46b3-85dc-2b9b42063a47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:29 compute-0 ovn_controller[146046]: 2026-01-26T16:04:29Z|00755|binding|INFO|Setting lport 5083d232-fbe1-4440-91de-c2c31fae4b6b up in Southbound
Jan 26 16:04:29 compute-0 ovn_controller[146046]: 2026-01-26T16:04:29Z|00756|binding|INFO|Setting lport 5083d232-fbe1-4440-91de-c2c31fae4b6b ovn-installed in OVS
Jan 26 16:04:29 compute-0 nova_compute[239965]: 2026-01-26 16:04:29.448 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:29 compute-0 systemd-udevd[308084]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:04:29 compute-0 systemd-machined[208061]: New machine qemu-94-instance-00000050.
Jan 26 16:04:29 compute-0 NetworkManager[48954]: <info>  [1769443469.4748] device (tap5083d232-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:04:29 compute-0 NetworkManager[48954]: <info>  [1769443469.4755] device (tap5083d232-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:04:29 compute-0 systemd[1]: Started Virtual Machine qemu-94-instance-00000050.
Jan 26 16:04:29 compute-0 podman[308132]: 2026-01-26 16:04:29.786779875 +0000 UTC m=+0.066943362 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:04:29 compute-0 podman[308132]: 2026-01-26 16:04:29.921796525 +0000 UTC m=+0.201959982 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.047 239969 DEBUG nova.compute.manager [req-4f2f0df8-630c-4350-91f3-b7a3c7d0135a req-8f25373e-0d3c-4aed-8d74-4ef8e6580fec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.048 239969 DEBUG oslo_concurrency.lockutils [req-4f2f0df8-630c-4350-91f3-b7a3c7d0135a req-8f25373e-0d3c-4aed-8d74-4ef8e6580fec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.049 239969 DEBUG oslo_concurrency.lockutils [req-4f2f0df8-630c-4350-91f3-b7a3c7d0135a req-8f25373e-0d3c-4aed-8d74-4ef8e6580fec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.049 239969 DEBUG oslo_concurrency.lockutils [req-4f2f0df8-630c-4350-91f3-b7a3c7d0135a req-8f25373e-0d3c-4aed-8d74-4ef8e6580fec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.049 239969 DEBUG nova.compute.manager [req-4f2f0df8-630c-4350-91f3-b7a3c7d0135a req-8f25373e-0d3c-4aed-8d74-4ef8e6580fec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Processing event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.207 239969 DEBUG nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.208 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443470.2071648, 477642fb-5695-4387-944d-587e17cf3ed8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.208 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] VM Started (Lifecycle Event)
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.211 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.214 239969 INFO nova.virt.libvirt.driver [-] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance spawned successfully.
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.214 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.233 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.238 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.247 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.247 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.252 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.253 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.253 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.254 239969 DEBUG nova.virt.libvirt.driver [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.259 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.260 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443470.20834, 477642fb-5695-4387-944d-587e17cf3ed8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.260 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] VM Paused (Lifecycle Event)
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.313 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.318 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443470.2111776, 477642fb-5695-4387-944d-587e17cf3ed8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.318 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] VM Resumed (Lifecycle Event)
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.350 239969 INFO nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Took 9.48 seconds to spawn the instance on the hypervisor.
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.350 239969 DEBUG nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.352 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.360 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.387 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.418 239969 INFO nova.compute.manager [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Took 11.01 seconds to build instance.
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.434 239969 DEBUG oslo_concurrency.lockutils [None req-b3c3e45f-7ffd-4cf6-a68b-57e296e12a8b 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:04:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:04:30 compute-0 sudo[308029]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:04:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:04:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:30 compute-0 sudo[308357]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:04:30 compute-0 sudo[308357]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:30 compute-0 sudo[308357]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:30 compute-0 sudo[308382]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:04:30 compute-0 sudo[308382]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:30 compute-0 nova_compute[239965]: 2026-01-26 16:04:30.933 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:31 compute-0 ceph-mon[75140]: pgmap v1547: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 16:04:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:31 compute-0 nova_compute[239965]: 2026-01-26 16:04:31.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:31 compute-0 sudo[308382]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:04:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:04:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:04:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:04:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:04:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:04:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:04:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:04:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:04:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:04:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:04:31 compute-0 sudo[308437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:04:31 compute-0 sudo[308437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:31 compute-0 sudo[308437]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:31 compute-0 sudo[308462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:04:31 compute-0 sudo[308462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:31 compute-0 nova_compute[239965]: 2026-01-26 16:04:31.774 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:32 compute-0 podman[308500]: 2026-01-26 16:04:32.008692495 +0000 UTC m=+0.054588459 container create 8949731c28739f27aa361d9edbc5b5563242dd6ed8b92a0e45ea82bf597f4d59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:04:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 26 16:04:32 compute-0 systemd[1]: Started libpod-conmon-8949731c28739f27aa361d9edbc5b5563242dd6ed8b92a0e45ea82bf597f4d59.scope.
Jan 26 16:04:32 compute-0 podman[308500]: 2026-01-26 16:04:31.988772066 +0000 UTC m=+0.034668050 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:04:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:04:32 compute-0 podman[308500]: 2026-01-26 16:04:32.107768673 +0000 UTC m=+0.153664657 container init 8949731c28739f27aa361d9edbc5b5563242dd6ed8b92a0e45ea82bf597f4d59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 16:04:32 compute-0 podman[308500]: 2026-01-26 16:04:32.116497318 +0000 UTC m=+0.162393282 container start 8949731c28739f27aa361d9edbc5b5563242dd6ed8b92a0e45ea82bf597f4d59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_sanderson, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 16:04:32 compute-0 podman[308500]: 2026-01-26 16:04:32.12027108 +0000 UTC m=+0.166167074 container attach 8949731c28739f27aa361d9edbc5b5563242dd6ed8b92a0e45ea82bf597f4d59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:04:32 compute-0 reverent_sanderson[308516]: 167 167
Jan 26 16:04:32 compute-0 systemd[1]: libpod-8949731c28739f27aa361d9edbc5b5563242dd6ed8b92a0e45ea82bf597f4d59.scope: Deactivated successfully.
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.134 239969 DEBUG nova.compute.manager [req-18345f27-ac91-48fd-9dc2-54f8cc2171c8 req-36b106bd-7128-45f3-8e67-1094c6aac1d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.135 239969 DEBUG oslo_concurrency.lockutils [req-18345f27-ac91-48fd-9dc2-54f8cc2171c8 req-36b106bd-7128-45f3-8e67-1094c6aac1d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.135 239969 DEBUG oslo_concurrency.lockutils [req-18345f27-ac91-48fd-9dc2-54f8cc2171c8 req-36b106bd-7128-45f3-8e67-1094c6aac1d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.135 239969 DEBUG oslo_concurrency.lockutils [req-18345f27-ac91-48fd-9dc2-54f8cc2171c8 req-36b106bd-7128-45f3-8e67-1094c6aac1d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.135 239969 DEBUG nova.compute.manager [req-18345f27-ac91-48fd-9dc2-54f8cc2171c8 req-36b106bd-7128-45f3-8e67-1094c6aac1d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] No waiting events found dispatching network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.136 239969 WARNING nova.compute.manager [req-18345f27-ac91-48fd-9dc2-54f8cc2171c8 req-36b106bd-7128-45f3-8e67-1094c6aac1d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received unexpected event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b for instance with vm_state active and task_state None.
Jan 26 16:04:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:04:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:04:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:04:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:04:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:04:32 compute-0 podman[308521]: 2026-01-26 16:04:32.174054249 +0000 UTC m=+0.027748422 container died 8949731c28739f27aa361d9edbc5b5563242dd6ed8b92a0e45ea82bf597f4d59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e7b38ca05f716359833cf67727535e1245822aba2cb09ca5ad2bd50d6a66bc8-merged.mount: Deactivated successfully.
Jan 26 16:04:32 compute-0 podman[308521]: 2026-01-26 16:04:32.217899903 +0000 UTC m=+0.071594066 container remove 8949731c28739f27aa361d9edbc5b5563242dd6ed8b92a0e45ea82bf597f4d59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.221 239969 INFO nova.compute.manager [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Rescuing
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.221 239969 DEBUG oslo_concurrency.lockutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "refresh_cache-477642fb-5695-4387-944d-587e17cf3ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.221 239969 DEBUG oslo_concurrency.lockutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquired lock "refresh_cache-477642fb-5695-4387-944d-587e17cf3ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:04:32 compute-0 nova_compute[239965]: 2026-01-26 16:04:32.222 239969 DEBUG nova.network.neutron [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:04:32 compute-0 systemd[1]: libpod-conmon-8949731c28739f27aa361d9edbc5b5563242dd6ed8b92a0e45ea82bf597f4d59.scope: Deactivated successfully.
Jan 26 16:04:32 compute-0 podman[308543]: 2026-01-26 16:04:32.413266983 +0000 UTC m=+0.051155815 container create 6aa68ca518bcb4a8cad0eef2babae254aa10c86c1531813530b8ccfe691b7ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mahavira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:04:32 compute-0 systemd[1]: Started libpod-conmon-6aa68ca518bcb4a8cad0eef2babae254aa10c86c1531813530b8ccfe691b7ac1.scope.
Jan 26 16:04:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:04:32 compute-0 podman[308543]: 2026-01-26 16:04:32.383707508 +0000 UTC m=+0.021596360 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4df169a0ce76a038746526ba922f795fec856c78b1577e091e6275b8394fbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4df169a0ce76a038746526ba922f795fec856c78b1577e091e6275b8394fbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4df169a0ce76a038746526ba922f795fec856c78b1577e091e6275b8394fbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4df169a0ce76a038746526ba922f795fec856c78b1577e091e6275b8394fbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae4df169a0ce76a038746526ba922f795fec856c78b1577e091e6275b8394fbf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:32 compute-0 podman[308543]: 2026-01-26 16:04:32.500110722 +0000 UTC m=+0.137999574 container init 6aa68ca518bcb4a8cad0eef2babae254aa10c86c1531813530b8ccfe691b7ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mahavira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:04:32 compute-0 podman[308543]: 2026-01-26 16:04:32.508795215 +0000 UTC m=+0.146684047 container start 6aa68ca518bcb4a8cad0eef2babae254aa10c86c1531813530b8ccfe691b7ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:04:32 compute-0 podman[308543]: 2026-01-26 16:04:32.513348326 +0000 UTC m=+0.151237168 container attach 6aa68ca518bcb4a8cad0eef2babae254aa10c86c1531813530b8ccfe691b7ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mahavira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:04:32 compute-0 affectionate_mahavira[308560]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:04:32 compute-0 affectionate_mahavira[308560]: --> All data devices are unavailable
Jan 26 16:04:33 compute-0 systemd[1]: libpod-6aa68ca518bcb4a8cad0eef2babae254aa10c86c1531813530b8ccfe691b7ac1.scope: Deactivated successfully.
Jan 26 16:04:33 compute-0 podman[308543]: 2026-01-26 16:04:33.000319444 +0000 UTC m=+0.638208286 container died 6aa68ca518bcb4a8cad0eef2babae254aa10c86c1531813530b8ccfe691b7ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mahavira, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae4df169a0ce76a038746526ba922f795fec856c78b1577e091e6275b8394fbf-merged.mount: Deactivated successfully.
Jan 26 16:04:33 compute-0 podman[308543]: 2026-01-26 16:04:33.04501994 +0000 UTC m=+0.682908772 container remove 6aa68ca518bcb4a8cad0eef2babae254aa10c86c1531813530b8ccfe691b7ac1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mahavira, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 16:04:33 compute-0 systemd[1]: libpod-conmon-6aa68ca518bcb4a8cad0eef2babae254aa10c86c1531813530b8ccfe691b7ac1.scope: Deactivated successfully.
Jan 26 16:04:33 compute-0 sudo[308462]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:33 compute-0 sudo[308591]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:04:33 compute-0 sudo[308591]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:33 compute-0 sudo[308591]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:33 compute-0 ceph-mon[75140]: pgmap v1548: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 26 16:04:33 compute-0 sudo[308616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:04:33 compute-0 sudo[308616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:33 compute-0 nova_compute[239965]: 2026-01-26 16:04:33.379 239969 DEBUG nova.network.neutron [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Updating instance_info_cache with network_info: [{"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:04:33 compute-0 nova_compute[239965]: 2026-01-26 16:04:33.406 239969 DEBUG oslo_concurrency.lockutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Releasing lock "refresh_cache-477642fb-5695-4387-944d-587e17cf3ed8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:04:33 compute-0 podman[308653]: 2026-01-26 16:04:33.519125612 +0000 UTC m=+0.053259706 container create 0f05faa92b4003052d52d713933116485360633439d414085fc0d1b70584be56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:04:33 compute-0 systemd[1]: Started libpod-conmon-0f05faa92b4003052d52d713933116485360633439d414085fc0d1b70584be56.scope.
Jan 26 16:04:33 compute-0 podman[308653]: 2026-01-26 16:04:33.489268191 +0000 UTC m=+0.023402265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:04:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:04:33 compute-0 podman[308653]: 2026-01-26 16:04:33.627725525 +0000 UTC m=+0.161859599 container init 0f05faa92b4003052d52d713933116485360633439d414085fc0d1b70584be56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_easley, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:04:33 compute-0 nova_compute[239965]: 2026-01-26 16:04:33.630 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:33 compute-0 podman[308653]: 2026-01-26 16:04:33.646624617 +0000 UTC m=+0.180758671 container start 0f05faa92b4003052d52d713933116485360633439d414085fc0d1b70584be56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_easley, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:04:33 compute-0 podman[308653]: 2026-01-26 16:04:33.650247147 +0000 UTC m=+0.184381201 container attach 0f05faa92b4003052d52d713933116485360633439d414085fc0d1b70584be56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:04:33 compute-0 infallible_easley[308668]: 167 167
Jan 26 16:04:33 compute-0 systemd[1]: libpod-0f05faa92b4003052d52d713933116485360633439d414085fc0d1b70584be56.scope: Deactivated successfully.
Jan 26 16:04:33 compute-0 podman[308653]: 2026-01-26 16:04:33.653207959 +0000 UTC m=+0.187342013 container died 0f05faa92b4003052d52d713933116485360633439d414085fc0d1b70584be56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_easley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc862ea344f079ac0a9065f2cf39fc8025922d60b07b7b6e0aebaa3ea2e5f419-merged.mount: Deactivated successfully.
Jan 26 16:04:33 compute-0 podman[308653]: 2026-01-26 16:04:33.693301323 +0000 UTC m=+0.227435377 container remove 0f05faa92b4003052d52d713933116485360633439d414085fc0d1b70584be56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_easley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:04:33 compute-0 systemd[1]: libpod-conmon-0f05faa92b4003052d52d713933116485360633439d414085fc0d1b70584be56.scope: Deactivated successfully.
Jan 26 16:04:33 compute-0 podman[308692]: 2026-01-26 16:04:33.910722123 +0000 UTC m=+0.048226023 container create 799c3e52e0cd899cd7173a1b042cf4cc66d7f449c99763457d036e7f4a306587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:04:33 compute-0 systemd[1]: Started libpod-conmon-799c3e52e0cd899cd7173a1b042cf4cc66d7f449c99763457d036e7f4a306587.scope.
Jan 26 16:04:33 compute-0 nova_compute[239965]: 2026-01-26 16:04:33.973 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:04:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f001015677b1f5309aeb37ebb005dbd3ca76ea2c18ebff9c2e8bdc290ca812ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:33 compute-0 podman[308692]: 2026-01-26 16:04:33.89024293 +0000 UTC m=+0.027746850 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f001015677b1f5309aeb37ebb005dbd3ca76ea2c18ebff9c2e8bdc290ca812ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f001015677b1f5309aeb37ebb005dbd3ca76ea2c18ebff9c2e8bdc290ca812ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f001015677b1f5309aeb37ebb005dbd3ca76ea2c18ebff9c2e8bdc290ca812ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:34 compute-0 podman[308692]: 2026-01-26 16:04:34.010796326 +0000 UTC m=+0.148300266 container init 799c3e52e0cd899cd7173a1b042cf4cc66d7f449c99763457d036e7f4a306587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:04:34 compute-0 podman[308692]: 2026-01-26 16:04:34.019901139 +0000 UTC m=+0.157405039 container start 799c3e52e0cd899cd7173a1b042cf4cc66d7f449c99763457d036e7f4a306587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 16:04:34 compute-0 podman[308692]: 2026-01-26 16:04:34.024244575 +0000 UTC m=+0.161748535 container attach 799c3e52e0cd899cd7173a1b042cf4cc66d7f449c99763457d036e7f4a306587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:04:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]: {
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:     "0": [
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:         {
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "devices": [
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "/dev/loop3"
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             ],
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_name": "ceph_lv0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_size": "21470642176",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "name": "ceph_lv0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "tags": {
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.cluster_name": "ceph",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.crush_device_class": "",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.encrypted": "0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.objectstore": "bluestore",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.osd_id": "0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.type": "block",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.vdo": "0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.with_tpm": "0"
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             },
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "type": "block",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "vg_name": "ceph_vg0"
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:         }
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:     ],
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:     "1": [
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:         {
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "devices": [
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "/dev/loop4"
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             ],
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_name": "ceph_lv1",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_size": "21470642176",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "name": "ceph_lv1",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "tags": {
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.cluster_name": "ceph",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.crush_device_class": "",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.encrypted": "0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.objectstore": "bluestore",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.osd_id": "1",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.type": "block",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.vdo": "0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.with_tpm": "0"
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             },
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "type": "block",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "vg_name": "ceph_vg1"
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:         }
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:     ],
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:     "2": [
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:         {
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "devices": [
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "/dev/loop5"
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             ],
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_name": "ceph_lv2",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_size": "21470642176",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "name": "ceph_lv2",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "tags": {
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.cluster_name": "ceph",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.crush_device_class": "",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.encrypted": "0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.objectstore": "bluestore",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.osd_id": "2",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.type": "block",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.vdo": "0",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:                 "ceph.with_tpm": "0"
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             },
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "type": "block",
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:             "vg_name": "ceph_vg2"
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:         }
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]:     ]
Jan 26 16:04:34 compute-0 exciting_bardeen[308708]: }
Jan 26 16:04:34 compute-0 systemd[1]: libpod-799c3e52e0cd899cd7173a1b042cf4cc66d7f449c99763457d036e7f4a306587.scope: Deactivated successfully.
Jan 26 16:04:34 compute-0 podman[308692]: 2026-01-26 16:04:34.314786579 +0000 UTC m=+0.452290499 container died 799c3e52e0cd899cd7173a1b042cf4cc66d7f449c99763457d036e7f4a306587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f001015677b1f5309aeb37ebb005dbd3ca76ea2c18ebff9c2e8bdc290ca812ff-merged.mount: Deactivated successfully.
Jan 26 16:04:34 compute-0 podman[308692]: 2026-01-26 16:04:34.363600415 +0000 UTC m=+0.501104315 container remove 799c3e52e0cd899cd7173a1b042cf4cc66d7f449c99763457d036e7f4a306587 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_bardeen, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 16:04:34 compute-0 systemd[1]: libpod-conmon-799c3e52e0cd899cd7173a1b042cf4cc66d7f449c99763457d036e7f4a306587.scope: Deactivated successfully.
Jan 26 16:04:34 compute-0 sudo[308616]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:34 compute-0 sudo[308727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:04:34 compute-0 sudo[308727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:34 compute-0 sudo[308727]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:34 compute-0 sudo[308752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:04:34 compute-0 sudo[308752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:34 compute-0 podman[308790]: 2026-01-26 16:04:34.826495733 +0000 UTC m=+0.039461479 container create a31fa78850449062a9c4adfa8a9cf1a18adf9855b40927497ab8cd183f56bba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:04:34 compute-0 systemd[1]: Started libpod-conmon-a31fa78850449062a9c4adfa8a9cf1a18adf9855b40927497ab8cd183f56bba7.scope.
Jan 26 16:04:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:04:34 compute-0 podman[308790]: 2026-01-26 16:04:34.80882956 +0000 UTC m=+0.021795316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:04:34 compute-0 podman[308790]: 2026-01-26 16:04:34.910954093 +0000 UTC m=+0.123919839 container init a31fa78850449062a9c4adfa8a9cf1a18adf9855b40927497ab8cd183f56bba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:04:34 compute-0 podman[308790]: 2026-01-26 16:04:34.917829161 +0000 UTC m=+0.130794897 container start a31fa78850449062a9c4adfa8a9cf1a18adf9855b40927497ab8cd183f56bba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:04:34 compute-0 podman[308790]: 2026-01-26 16:04:34.921384189 +0000 UTC m=+0.134349935 container attach a31fa78850449062a9c4adfa8a9cf1a18adf9855b40927497ab8cd183f56bba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 16:04:34 compute-0 thirsty_diffie[308806]: 167 167
Jan 26 16:04:34 compute-0 systemd[1]: libpod-a31fa78850449062a9c4adfa8a9cf1a18adf9855b40927497ab8cd183f56bba7.scope: Deactivated successfully.
Jan 26 16:04:34 compute-0 podman[308790]: 2026-01-26 16:04:34.928720019 +0000 UTC m=+0.141685765 container died a31fa78850449062a9c4adfa8a9cf1a18adf9855b40927497ab8cd183f56bba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-1567df5d7cc86f337884c74eb7013c7bb214c43ab812efec670202459c184a55-merged.mount: Deactivated successfully.
Jan 26 16:04:34 compute-0 podman[308790]: 2026-01-26 16:04:34.978600682 +0000 UTC m=+0.191566458 container remove a31fa78850449062a9c4adfa8a9cf1a18adf9855b40927497ab8cd183f56bba7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:04:34 compute-0 systemd[1]: libpod-conmon-a31fa78850449062a9c4adfa8a9cf1a18adf9855b40927497ab8cd183f56bba7.scope: Deactivated successfully.
Jan 26 16:04:35 compute-0 ceph-mon[75140]: pgmap v1549: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 106 op/s
Jan 26 16:04:35 compute-0 podman[308828]: 2026-01-26 16:04:35.176522684 +0000 UTC m=+0.044844251 container create bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:04:35 compute-0 systemd[1]: Started libpod-conmon-bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543.scope.
Jan 26 16:04:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8d9fc41a2431f3c369515ee2bbc9b4a3d9735659f69d1ba0028b191256a67e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8d9fc41a2431f3c369515ee2bbc9b4a3d9735659f69d1ba0028b191256a67e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8d9fc41a2431f3c369515ee2bbc9b4a3d9735659f69d1ba0028b191256a67e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa8d9fc41a2431f3c369515ee2bbc9b4a3d9735659f69d1ba0028b191256a67e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:35 compute-0 podman[308828]: 2026-01-26 16:04:35.158068511 +0000 UTC m=+0.026390108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:04:35 compute-0 podman[308828]: 2026-01-26 16:04:35.253785587 +0000 UTC m=+0.122107184 container init bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hertz, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 16:04:35 compute-0 podman[308828]: 2026-01-26 16:04:35.260362409 +0000 UTC m=+0.128683976 container start bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hertz, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:04:35 compute-0 podman[308828]: 2026-01-26 16:04:35.264874859 +0000 UTC m=+0.133196426 container attach bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:04:35 compute-0 nova_compute[239965]: 2026-01-26 16:04:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:35 compute-0 nova_compute[239965]: 2026-01-26 16:04:35.935 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:36 compute-0 lvm[308922]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:04:36 compute-0 lvm[308924]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:04:36 compute-0 lvm[308922]: VG ceph_vg0 finished
Jan 26 16:04:36 compute-0 lvm[308924]: VG ceph_vg1 finished
Jan 26 16:04:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 26 16:04:36 compute-0 lvm[308926]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:04:36 compute-0 lvm[308926]: VG ceph_vg2 finished
Jan 26 16:04:36 compute-0 eager_hertz[308845]: {}
Jan 26 16:04:36 compute-0 systemd[1]: libpod-bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543.scope: Deactivated successfully.
Jan 26 16:04:36 compute-0 systemd[1]: libpod-bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543.scope: Consumed 1.411s CPU time.
Jan 26 16:04:36 compute-0 podman[308828]: 2026-01-26 16:04:36.195336781 +0000 UTC m=+1.063658388 container died bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hertz, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 16:04:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa8d9fc41a2431f3c369515ee2bbc9b4a3d9735659f69d1ba0028b191256a67e-merged.mount: Deactivated successfully.
Jan 26 16:04:36 compute-0 podman[308828]: 2026-01-26 16:04:36.255606405 +0000 UTC m=+1.123927962 container remove bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_hertz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:04:36 compute-0 systemd[1]: libpod-conmon-bdfad20cdd9552f8802a8c64a01142d296358c0f3c7840135c1e0a0a6ed28543.scope: Deactivated successfully.
Jan 26 16:04:36 compute-0 sudo[308752]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:04:36 compute-0 nova_compute[239965]: 2026-01-26 16:04:36.344 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:36 compute-0 nova_compute[239965]: 2026-01-26 16:04:36.346 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:04:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:36 compute-0 nova_compute[239965]: 2026-01-26 16:04:36.363 239969 DEBUG nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:04:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:36 compute-0 sudo[308941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:04:36 compute-0 sudo[308941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:04:36 compute-0 sudo[308941]: pam_unix(sudo:session): session closed for user root
Jan 26 16:04:36 compute-0 podman[308965]: 2026-01-26 16:04:36.55712375 +0000 UTC m=+0.072749970 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:04:36 compute-0 nova_compute[239965]: 2026-01-26 16:04:36.579 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:36 compute-0 nova_compute[239965]: 2026-01-26 16:04:36.580 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:36 compute-0 nova_compute[239965]: 2026-01-26 16:04:36.587 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:04:36 compute-0 nova_compute[239965]: 2026-01-26 16:04:36.587 239969 INFO nova.compute.claims [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:04:36 compute-0 podman[308966]: 2026-01-26 16:04:36.606281653 +0000 UTC m=+0.117399643 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 16:04:37 compute-0 ceph-mon[75140]: pgmap v1550: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Jan 26 16:04:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:04:37 compute-0 nova_compute[239965]: 2026-01-26 16:04:37.313 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:37 compute-0 nova_compute[239965]: 2026-01-26 16:04:37.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:37 compute-0 ovn_controller[146046]: 2026-01-26T16:04:37Z|00087|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:04:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:04:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3938139767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:37 compute-0 nova_compute[239965]: 2026-01-26 16:04:37.877 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:37 compute-0 nova_compute[239965]: 2026-01-26 16:04:37.883 239969 DEBUG nova.compute.provider_tree [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:04:37 compute-0 nova_compute[239965]: 2026-01-26 16:04:37.902 239969 DEBUG nova.scheduler.client.report [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:04:37 compute-0 nova_compute[239965]: 2026-01-26 16:04:37.935 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:37 compute-0 nova_compute[239965]: 2026-01-26 16:04:37.936 239969 DEBUG nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:04:37 compute-0 nova_compute[239965]: 2026-01-26 16:04:37.989 239969 DEBUG nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:04:37 compute-0 nova_compute[239965]: 2026-01-26 16:04:37.990 239969 DEBUG nova.network.neutron [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.007 239969 INFO nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.025 239969 DEBUG nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:04:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 12 KiB/s wr, 131 op/s
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.110 239969 DEBUG nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.111 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.112 239969 INFO nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Creating image(s)
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.137 239969 DEBUG nova.storage.rbd_utils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] rbd image d7f67ae9-2751-4c70-a750-ccc1940478de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.163 239969 DEBUG nova.storage.rbd_utils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] rbd image d7f67ae9-2751-4c70-a750-ccc1940478de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3938139767' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.203 239969 DEBUG nova.storage.rbd_utils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] rbd image d7f67ae9-2751-4c70-a750-ccc1940478de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.208 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.260 239969 DEBUG nova.policy [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fb2bc5dfd3a74496b6982288700ab306', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5527f6a0b0004aa790a89b7a57853ace', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.310 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.311 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.311 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.312 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.337 239969 DEBUG nova.storage.rbd_utils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] rbd image d7f67ae9-2751-4c70-a750-ccc1940478de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.342 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d7f67ae9-2751-4c70-a750-ccc1940478de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.632 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.670 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d7f67ae9-2751-4c70-a750-ccc1940478de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.726 239969 DEBUG nova.storage.rbd_utils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] resizing rbd image d7f67ae9-2751-4c70-a750-ccc1940478de_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.795 239969 DEBUG nova.objects.instance [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lazy-loading 'migration_context' on Instance uuid d7f67ae9-2751-4c70-a750-ccc1940478de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.810 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.810 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Ensure instance console log exists: /var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.810 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.811 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:38 compute-0 nova_compute[239965]: 2026-01-26 16:04:38.811 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:39 compute-0 ceph-mon[75140]: pgmap v1551: 305 pgs: 305 active+clean; 215 MiB data, 605 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 12 KiB/s wr, 131 op/s
Jan 26 16:04:39 compute-0 nova_compute[239965]: 2026-01-26 16:04:39.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:39 compute-0 nova_compute[239965]: 2026-01-26 16:04:39.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:39 compute-0 nova_compute[239965]: 2026-01-26 16:04:39.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:04:39 compute-0 nova_compute[239965]: 2026-01-26 16:04:39.532 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:04:39 compute-0 nova_compute[239965]: 2026-01-26 16:04:39.533 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:39 compute-0 nova_compute[239965]: 2026-01-26 16:04:39.533 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:39 compute-0 nova_compute[239965]: 2026-01-26 16:04:39.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:04:39 compute-0 nova_compute[239965]: 2026-01-26 16:04:39.830 239969 DEBUG nova.network.neutron [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Successfully created port: 1704d3ae-5330-41ec-8f1c-05df19fdb33c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:04:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 254 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.7 MiB/s wr, 181 op/s
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.538 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.539 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.748 239969 DEBUG nova.network.neutron [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Successfully updated port: 1704d3ae-5330-41ec-8f1c-05df19fdb33c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.766 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "refresh_cache-d7f67ae9-2751-4c70-a750-ccc1940478de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.767 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquired lock "refresh_cache-d7f67ae9-2751-4c70-a750-ccc1940478de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.767 239969 DEBUG nova.network.neutron [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.856 239969 DEBUG nova.compute.manager [req-005dd6d9-b4db-4750-9b29-af2ce8588139 req-d08c1c97-dc30-4400-82d4-fd750b4276d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-changed-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.857 239969 DEBUG nova.compute.manager [req-005dd6d9-b4db-4750-9b29-af2ce8588139 req-d08c1c97-dc30-4400-82d4-fd750b4276d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Refreshing instance network info cache due to event network-changed-1704d3ae-5330-41ec-8f1c-05df19fdb33c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.857 239969 DEBUG oslo_concurrency.lockutils [req-005dd6d9-b4db-4750-9b29-af2ce8588139 req-d08c1c97-dc30-4400-82d4-fd750b4276d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d7f67ae9-2751-4c70-a750-ccc1940478de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.923 239969 DEBUG nova.network.neutron [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:04:40 compute-0 nova_compute[239965]: 2026-01-26 16:04:40.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:04:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4282463874' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.107 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:41 compute-0 ceph-mon[75140]: pgmap v1552: 305 pgs: 305 active+clean; 254 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.7 MiB/s wr, 181 op/s
Jan 26 16:04:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4282463874' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.248 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.248 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.251 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.252 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.406 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.407 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.901002537459135GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.407 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.408 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.474 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 492acb7a-42e1-4918-a973-352f9d8251d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.475 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 477642fb-5695-4387-944d-587e17cf3ed8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.475 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance d7f67ae9-2751-4c70-a750-ccc1940478de actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.475 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.475 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:04:41 compute-0 nova_compute[239965]: 2026-01-26 16:04:41.544 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 262 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Jan 26 16:04:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:04:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2117434149' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.162 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.168 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.198 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:04:42 compute-0 ceph-mon[75140]: pgmap v1553: 305 pgs: 305 active+clean; 262 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.8 MiB/s wr, 144 op/s
Jan 26 16:04:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2117434149' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.229 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.229 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.230 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.230 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.234 239969 DEBUG nova.network.neutron [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Updating instance_info_cache with network_info: [{"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.255 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.260 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Releasing lock "refresh_cache-d7f67ae9-2751-4c70-a750-ccc1940478de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.261 239969 DEBUG nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Instance network_info: |[{"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.261 239969 DEBUG oslo_concurrency.lockutils [req-005dd6d9-b4db-4750-9b29-af2ce8588139 req-d08c1c97-dc30-4400-82d4-fd750b4276d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d7f67ae9-2751-4c70-a750-ccc1940478de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.262 239969 DEBUG nova.network.neutron [req-005dd6d9-b4db-4750-9b29-af2ce8588139 req-d08c1c97-dc30-4400-82d4-fd750b4276d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Refreshing network info cache for port 1704d3ae-5330-41ec-8f1c-05df19fdb33c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.269 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Start _get_guest_xml network_info=[{"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.273 239969 WARNING nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.289 239969 DEBUG nova.virt.libvirt.host [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.290 239969 DEBUG nova.virt.libvirt.host [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.298 239969 DEBUG nova.virt.libvirt.host [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.299 239969 DEBUG nova.virt.libvirt.host [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.300 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.301 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.301 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.302 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.302 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.302 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.303 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.303 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.303 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.304 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.304 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.304 239969 DEBUG nova.virt.hardware [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.311 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2638013350' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.892 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.919 239969 DEBUG nova.storage.rbd_utils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] rbd image d7f67ae9-2751-4c70-a750-ccc1940478de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:42 compute-0 nova_compute[239965]: 2026-01-26 16:04:42.923 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2638013350' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3482220227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.522 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.525 239969 DEBUG nova.virt.libvirt.vif [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:04:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1664472187',display_name='tempest-InstanceActionsTestJSON-server-1664472187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1664472187',id=81,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5527f6a0b0004aa790a89b7a57853ace',ramdisk_id='',reservation_id='r-81i0kmes',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1033896084',owner_user_name='tempest-InstanceActionsTestJSON-1033896084-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:04:38Z,user_data=None,user_id='fb2bc5dfd3a74496b6982288700ab306',uuid=d7f67ae9-2751-4c70-a750-ccc1940478de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.525 239969 DEBUG nova.network.os_vif_util [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converting VIF {"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.527 239969 DEBUG nova.network.os_vif_util [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.528 239969 DEBUG nova.objects.instance [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lazy-loading 'pci_devices' on Instance uuid d7f67ae9-2751-4c70-a750-ccc1940478de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.546 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <uuid>d7f67ae9-2751-4c70-a750-ccc1940478de</uuid>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <name>instance-00000051</name>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <nova:name>tempest-InstanceActionsTestJSON-server-1664472187</nova:name>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:04:42</nova:creationTime>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <nova:user uuid="fb2bc5dfd3a74496b6982288700ab306">tempest-InstanceActionsTestJSON-1033896084-project-member</nova:user>
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <nova:project uuid="5527f6a0b0004aa790a89b7a57853ace">tempest-InstanceActionsTestJSON-1033896084</nova:project>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <nova:port uuid="1704d3ae-5330-41ec-8f1c-05df19fdb33c">
Jan 26 16:04:43 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <system>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <entry name="serial">d7f67ae9-2751-4c70-a750-ccc1940478de</entry>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <entry name="uuid">d7f67ae9-2751-4c70-a750-ccc1940478de</entry>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     </system>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <os>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   </os>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <features>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   </features>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d7f67ae9-2751-4c70-a750-ccc1940478de_disk">
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d7f67ae9-2751-4c70-a750-ccc1940478de_disk.config">
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:43 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:df:77:4c"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <target dev="tap1704d3ae-53"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de/console.log" append="off"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <video>
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     </video>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:04:43 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:04:43 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:04:43 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:04:43 compute-0 nova_compute[239965]: </domain>
Jan 26 16:04:43 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.547 239969 DEBUG nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Preparing to wait for external event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.547 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.547 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.548 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.548 239969 DEBUG nova.virt.libvirt.vif [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:04:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1664472187',display_name='tempest-InstanceActionsTestJSON-server-1664472187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1664472187',id=81,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5527f6a0b0004aa790a89b7a57853ace',ramdisk_id='',reservation_id='r-81i0kmes',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-1033896084',owner_user_name='tempest-Inst
anceActionsTestJSON-1033896084-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:04:38Z,user_data=None,user_id='fb2bc5dfd3a74496b6982288700ab306',uuid=d7f67ae9-2751-4c70-a750-ccc1940478de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.549 239969 DEBUG nova.network.os_vif_util [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converting VIF {"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.549 239969 DEBUG nova.network.os_vif_util [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.550 239969 DEBUG os_vif [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.550 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.551 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.552 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.556 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.556 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1704d3ae-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.557 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1704d3ae-53, col_values=(('external_ids', {'iface-id': '1704d3ae-5330-41ec-8f1c-05df19fdb33c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:77:4c', 'vm-uuid': 'd7f67ae9-2751-4c70-a750-ccc1940478de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:43 compute-0 NetworkManager[48954]: <info>  [1769443483.5607] manager: (tap1704d3ae-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/335)
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.568 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.571 239969 INFO os_vif [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53')
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.616 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.616 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.617 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] No VIF found with MAC fa:16:3e:df:77:4c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.617 239969 INFO nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Using config drive
Jan 26 16:04:43 compute-0 nova_compute[239965]: 2026-01-26 16:04:43.636 239969 DEBUG nova.storage.rbd_utils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] rbd image d7f67ae9-2751-4c70-a750-ccc1940478de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 262 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.038 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.049 239969 INFO nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Creating config drive at /var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de/disk.config
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.055 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyth5kbgk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.200 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyth5kbgk" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.237 239969 DEBUG nova.storage.rbd_utils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] rbd image d7f67ae9-2751-4c70-a750-ccc1940478de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.243 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de/disk.config d7f67ae9-2751-4c70-a750-ccc1940478de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3482220227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:44 compute-0 ceph-mon[75140]: pgmap v1554: 305 pgs: 305 active+clean; 262 MiB data, 621 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.280 239969 DEBUG nova.network.neutron [req-005dd6d9-b4db-4750-9b29-af2ce8588139 req-d08c1c97-dc30-4400-82d4-fd750b4276d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Updated VIF entry in instance network info cache for port 1704d3ae-5330-41ec-8f1c-05df19fdb33c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.281 239969 DEBUG nova.network.neutron [req-005dd6d9-b4db-4750-9b29-af2ce8588139 req-d08c1c97-dc30-4400-82d4-fd750b4276d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Updating instance_info_cache with network_info: [{"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.297 239969 DEBUG oslo_concurrency.lockutils [req-005dd6d9-b4db-4750-9b29-af2ce8588139 req-d08c1c97-dc30-4400-82d4-fd750b4276d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d7f67ae9-2751-4c70-a750-ccc1940478de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.397 239969 DEBUG oslo_concurrency.processutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de/disk.config d7f67ae9-2751-4c70-a750-ccc1940478de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.398 239969 INFO nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Deleting local config drive /var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de/disk.config because it was imported into RBD.
Jan 26 16:04:44 compute-0 NetworkManager[48954]: <info>  [1769443484.4608] manager: (tap1704d3ae-53): new Tun device (/org/freedesktop/NetworkManager/Devices/336)
Jan 26 16:04:44 compute-0 kernel: tap1704d3ae-53: entered promiscuous mode
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.467 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:44 compute-0 ovn_controller[146046]: 2026-01-26T16:04:44Z|00757|binding|INFO|Claiming lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c for this chassis.
Jan 26 16:04:44 compute-0 ovn_controller[146046]: 2026-01-26T16:04:44Z|00758|binding|INFO|1704d3ae-5330-41ec-8f1c-05df19fdb33c: Claiming fa:16:3e:df:77:4c 10.100.0.3
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.475 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:77:4c 10.100.0.3'], port_security=['fa:16:3e:df:77:4c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd7f67ae9-2751-4c70-a750-ccc1940478de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5527f6a0b0004aa790a89b7a57853ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e9916c46-5670-4687-89a4-433216e0fc8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bce504e2-87b2-44d9-9048-27581ae62fa5, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1704d3ae-5330-41ec-8f1c-05df19fdb33c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.476 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1704d3ae-5330-41ec-8f1c-05df19fdb33c in datapath 2d7bbe96-2c16-44c7-ad13-09585bbd834e bound to our chassis
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.479 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d7bbe96-2c16-44c7-ad13-09585bbd834e
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.490 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:44 compute-0 ovn_controller[146046]: 2026-01-26T16:04:44Z|00759|binding|INFO|Setting lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c ovn-installed in OVS
Jan 26 16:04:44 compute-0 ovn_controller[146046]: 2026-01-26T16:04:44Z|00760|binding|INFO|Setting lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c up in Southbound
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.494 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6af4ed-5757-4bb9-94f5-9ed3c92e3abe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.494 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d7bbe96-21 in ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.497 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d7bbe96-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.498 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5993a961-7c45-4af9-81a9-c248027a0b85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.499 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1a89220a-01cf-46d8-aa7c-ae890211a8b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 systemd-udevd[309382]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:04:44 compute-0 systemd-machined[208061]: New machine qemu-95-instance-00000051.
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.537 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[515abd4c-c4b7-4bd1-bdda-a272d53f2c39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 NetworkManager[48954]: <info>  [1769443484.5443] device (tap1704d3ae-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:04:44 compute-0 NetworkManager[48954]: <info>  [1769443484.5454] device (tap1704d3ae-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:04:44 compute-0 systemd[1]: Started Virtual Machine qemu-95-instance-00000051.
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.568 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f8e5f1-538b-490a-897f-55995b1c6a00]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.598 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6d43fd42-f48c-4d04-9e8d-46d3f30b9b00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 NetworkManager[48954]: <info>  [1769443484.6045] manager: (tap2d7bbe96-20): new Veth device (/org/freedesktop/NetworkManager/Devices/337)
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.604 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f4013ed7-2dfc-4aa4-a056-e781c73cbf4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.636 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a527b46d-1bdf-4f12-a309-504aa28e3492]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.640 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d13519c8-96ae-41f0-9bfd-0a48eec43ad3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 NetworkManager[48954]: <info>  [1769443484.6622] device (tap2d7bbe96-20): carrier: link connected
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.668 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae52696-5aa1-43e0-86b6-85184754986e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.691 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6f9fbf-b7aa-43f5-9b1a-d2607322177a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d7bbe96-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:c4:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497546, 'reachable_time': 30483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309413, 'error': None, 'target': 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.707 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c1facb05-02ef-4a82-ba7e-c9b1de082850]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:c4c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 497546, 'tstamp': 497546}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309414, 'error': None, 'target': 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.723 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b1151e84-0e1a-47a6-9e6c-a2112f03069f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d7bbe96-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:c4:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 232], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497546, 'reachable_time': 30483, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309415, 'error': None, 'target': 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.749 239969 DEBUG nova.compute.manager [req-66b4ce5c-f2c2-4f11-9328-e2eab4859140 req-597ac4b3-5d66-4c11-acbf-b52f5e546712 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.750 239969 DEBUG oslo_concurrency.lockutils [req-66b4ce5c-f2c2-4f11-9328-e2eab4859140 req-597ac4b3-5d66-4c11-acbf-b52f5e546712 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.750 239969 DEBUG oslo_concurrency.lockutils [req-66b4ce5c-f2c2-4f11-9328-e2eab4859140 req-597ac4b3-5d66-4c11-acbf-b52f5e546712 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.750 239969 DEBUG oslo_concurrency.lockutils [req-66b4ce5c-f2c2-4f11-9328-e2eab4859140 req-597ac4b3-5d66-4c11-acbf-b52f5e546712 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.750 239969 DEBUG nova.compute.manager [req-66b4ce5c-f2c2-4f11-9328-e2eab4859140 req-597ac4b3-5d66-4c11-acbf-b52f5e546712 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Processing event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.760 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[29bc5e45-a913-4a95-889f-05bf85d50225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.844 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11301c40-28c9-4056-b2b8-56f19e17abfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.846 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d7bbe96-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.847 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.848 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d7bbe96-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.850 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:44 compute-0 kernel: tap2d7bbe96-20: entered promiscuous mode
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.852 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:44 compute-0 NetworkManager[48954]: <info>  [1769443484.8536] manager: (tap2d7bbe96-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/338)
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.860 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d7bbe96-20, col_values=(('external_ids', {'iface-id': '06a7b7d1-8946-4364-b107-f4545efac613'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.862 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:44 compute-0 ovn_controller[146046]: 2026-01-26T16:04:44Z|00761|binding|INFO|Releasing lport 06a7b7d1-8946-4364-b107-f4545efac613 from this chassis (sb_readonly=0)
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.864 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d7bbe96-2c16-44c7-ad13-09585bbd834e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d7bbe96-2c16-44c7-ad13-09585bbd834e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.866 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9abbe21a-ee27-4f3a-a66b-61eeeab021e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.868 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-2d7bbe96-2c16-44c7-ad13-09585bbd834e
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/2d7bbe96-2c16-44c7-ad13-09585bbd834e.pid.haproxy
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 2d7bbe96-2c16-44c7-ad13-09585bbd834e
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:04:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:44.869 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'env', 'PROCESS_TAG=haproxy-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d7bbe96-2c16-44c7-ad13-09585bbd834e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:04:44 compute-0 nova_compute[239965]: 2026-01-26 16:04:44.886 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Jan 26 16:04:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Jan 26 16:04:45 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.291 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443485.290424, d7f67ae9-2751-4c70-a750-ccc1940478de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.291 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] VM Started (Lifecycle Event)
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.292 239969 DEBUG nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.296 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.300 239969 INFO nova.virt.libvirt.driver [-] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Instance spawned successfully.
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.300 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:04:45 compute-0 podman[309487]: 2026-01-26 16:04:45.300687718 +0000 UTC m=+0.063172326 container create 26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:04:45 compute-0 systemd[1]: Started libpod-conmon-26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e.scope.
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.358 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:45 compute-0 podman[309487]: 2026-01-26 16:04:45.267726081 +0000 UTC m=+0.030210709 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.361 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:04:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6252641b2c7ce5dc2935749989ec2c37cea8b2506948c0f0e1147afd24cfa67/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:45 compute-0 podman[309487]: 2026-01-26 16:04:45.382531639 +0000 UTC m=+0.145016267 container init 26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:04:45 compute-0 podman[309487]: 2026-01-26 16:04:45.390055374 +0000 UTC m=+0.152539982 container start 26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:04:45 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[309502]: [NOTICE]   (309506) : New worker (309508) forked
Jan 26 16:04:45 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[309502]: [NOTICE]   (309506) : Loading success.
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.451 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.451 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443485.2906392, d7f67ae9-2751-4c70-a750-ccc1940478de => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.451 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] VM Paused (Lifecycle Event)
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.457 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.458 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.458 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.459 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.460 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.460 239969 DEBUG nova.virt.libvirt.driver [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.488 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.492 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443485.2960987, d7f67ae9-2751-4c70-a750-ccc1940478de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.492 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] VM Resumed (Lifecycle Event)
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.540 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.542 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.676 239969 INFO nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Took 7.57 seconds to spawn the instance on the hypervisor.
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.677 239969 DEBUG nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.679 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.835 239969 INFO nova.compute.manager [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Took 9.40 seconds to build instance.
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.930 239969 DEBUG oslo_concurrency.lockutils [None req-91c40a00-5dd7-4119-85ee-50a142192e6d fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:45 compute-0 nova_compute[239965]: 2026-01-26 16:04:45.939 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 294 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 4.7 MiB/s wr, 176 op/s
Jan 26 16:04:46 compute-0 ceph-mon[75140]: osdmap e237: 3 total, 3 up, 3 in
Jan 26 16:04:46 compute-0 ceph-mon[75140]: pgmap v1556: 305 pgs: 305 active+clean; 294 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 4.7 MiB/s wr, 176 op/s
Jan 26 16:04:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:46 compute-0 nova_compute[239965]: 2026-01-26 16:04:46.526 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:46 compute-0 kernel: tap5083d232-fb (unregistering): left promiscuous mode
Jan 26 16:04:46 compute-0 NetworkManager[48954]: <info>  [1769443486.7269] device (tap5083d232-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:04:46 compute-0 ovn_controller[146046]: 2026-01-26T16:04:46Z|00762|binding|INFO|Releasing lport 5083d232-fbe1-4440-91de-c2c31fae4b6b from this chassis (sb_readonly=0)
Jan 26 16:04:46 compute-0 ovn_controller[146046]: 2026-01-26T16:04:46Z|00763|binding|INFO|Setting lport 5083d232-fbe1-4440-91de-c2c31fae4b6b down in Southbound
Jan 26 16:04:46 compute-0 ovn_controller[146046]: 2026-01-26T16:04:46Z|00764|binding|INFO|Removing iface tap5083d232-fb ovn-installed in OVS
Jan 26 16:04:46 compute-0 nova_compute[239965]: 2026-01-26 16:04:46.737 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:46 compute-0 nova_compute[239965]: 2026-01-26 16:04:46.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:46 compute-0 nova_compute[239965]: 2026-01-26 16:04:46.760 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:46.766 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:14:3e 10.100.0.7'], port_security=['fa:16:3e:e7:14:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '477642fb-5695-4387-944d-587e17cf3ed8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5083d232-fbe1-4440-91de-c2c31fae4b6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:46.768 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5083d232-fbe1-4440-91de-c2c31fae4b6b in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 unbound from our chassis
Jan 26 16:04:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:46.769 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:04:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:46.770 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[35791e75-3f1d-4005-a858-233a4fa80a42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:46 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000050.scope: Deactivated successfully.
Jan 26 16:04:46 compute-0 systemd[1]: machine-qemu\x2d94\x2dinstance\x2d00000050.scope: Consumed 14.379s CPU time.
Jan 26 16:04:46 compute-0 systemd-machined[208061]: Machine qemu-94-instance-00000050 terminated.
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.043 239969 DEBUG nova.compute.manager [req-5a7a808f-79e3-4dfb-a3d7-137141cde08c req-6eb419d2-da23-4325-94af-cad8443630bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.045 239969 DEBUG oslo_concurrency.lockutils [req-5a7a808f-79e3-4dfb-a3d7-137141cde08c req-6eb419d2-da23-4325-94af-cad8443630bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.045 239969 DEBUG oslo_concurrency.lockutils [req-5a7a808f-79e3-4dfb-a3d7-137141cde08c req-6eb419d2-da23-4325-94af-cad8443630bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.045 239969 DEBUG oslo_concurrency.lockutils [req-5a7a808f-79e3-4dfb-a3d7-137141cde08c req-6eb419d2-da23-4325-94af-cad8443630bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.046 239969 DEBUG nova.compute.manager [req-5a7a808f-79e3-4dfb-a3d7-137141cde08c req-6eb419d2-da23-4325-94af-cad8443630bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] No waiting events found dispatching network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.046 239969 WARNING nova.compute.manager [req-5a7a808f-79e3-4dfb-a3d7-137141cde08c req-6eb419d2-da23-4325-94af-cad8443630bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received unexpected event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c for instance with vm_state active and task_state None.
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.053 239969 INFO nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance shutdown successfully after 13 seconds.
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.058 239969 INFO nova.virt.libvirt.driver [-] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance destroyed successfully.
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.059 239969 DEBUG nova.objects.instance [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.093 239969 INFO nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Attempting rescue
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.095 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.099 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.100 239969 INFO nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Creating image(s)
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.127 239969 DEBUG nova.storage.rbd_utils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.133 239969 DEBUG nova.objects.instance [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.138 239969 DEBUG nova.compute.manager [req-6f92330d-7a8f-4599-9e95-c07d7d070be7 req-3a0ba7bf-f582-4fe9-a8a8-53404e64f8e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-unplugged-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.138 239969 DEBUG oslo_concurrency.lockutils [req-6f92330d-7a8f-4599-9e95-c07d7d070be7 req-3a0ba7bf-f582-4fe9-a8a8-53404e64f8e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.139 239969 DEBUG oslo_concurrency.lockutils [req-6f92330d-7a8f-4599-9e95-c07d7d070be7 req-3a0ba7bf-f582-4fe9-a8a8-53404e64f8e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.139 239969 DEBUG oslo_concurrency.lockutils [req-6f92330d-7a8f-4599-9e95-c07d7d070be7 req-3a0ba7bf-f582-4fe9-a8a8-53404e64f8e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.139 239969 DEBUG nova.compute.manager [req-6f92330d-7a8f-4599-9e95-c07d7d070be7 req-3a0ba7bf-f582-4fe9-a8a8-53404e64f8e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] No waiting events found dispatching network-vif-unplugged-5083d232-fbe1-4440-91de-c2c31fae4b6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.140 239969 WARNING nova.compute.manager [req-6f92330d-7a8f-4599-9e95-c07d7d070be7 req-3a0ba7bf-f582-4fe9-a8a8-53404e64f8e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received unexpected event network-vif-unplugged-5083d232-fbe1-4440-91de-c2c31fae4b6b for instance with vm_state active and task_state rescuing.
Jan 26 16:04:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:47.165 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:47.166 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.173 239969 DEBUG nova.storage.rbd_utils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.199 239969 DEBUG nova.storage.rbd_utils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.203 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.234 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.278 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.279 239969 DEBUG oslo_concurrency.lockutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.280 239969 DEBUG oslo_concurrency.lockutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.280 239969 DEBUG oslo_concurrency.lockutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.302 239969 DEBUG nova.storage.rbd_utils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.310 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 477642fb-5695-4387-944d-587e17cf3ed8_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.420 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Acquiring lock "824e5887-8524-4231-9b97-19fb738d44c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.421 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "824e5887-8524-4231-9b97-19fb738d44c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.448 239969 DEBUG nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.456 239969 DEBUG oslo_concurrency.lockutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.456 239969 DEBUG oslo_concurrency.lockutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.457 239969 INFO nova.compute.manager [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Rebooting instance
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.478 239969 DEBUG oslo_concurrency.lockutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "refresh_cache-d7f67ae9-2751-4c70-a750-ccc1940478de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.479 239969 DEBUG oslo_concurrency.lockutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquired lock "refresh_cache-d7f67ae9-2751-4c70-a750-ccc1940478de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.479 239969 DEBUG nova.network.neutron [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.544 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.545 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.552 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.552 239969 INFO nova.compute.claims [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.626 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 477642fb-5695-4387-944d-587e17cf3ed8_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.627 239969 DEBUG nova.objects.instance [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'migration_context' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.643 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.644 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Start _get_guest_xml network_info=[{"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1538815050-network", "vif_mac": "fa:16:3e:e7:14:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.645 239969 DEBUG nova.objects.instance [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'resources' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.671 239969 WARNING nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.676 239969 DEBUG nova.virt.libvirt.host [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.677 239969 DEBUG nova.virt.libvirt.host [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.679 239969 DEBUG nova.virt.libvirt.host [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.680 239969 DEBUG nova.virt.libvirt.host [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.680 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.681 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.681 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.682 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.682 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.682 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.683 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.683 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.683 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.684 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.684 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.684 239969 DEBUG nova.virt.hardware [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.685 239969 DEBUG nova.objects.instance [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.706 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:47 compute-0 nova_compute[239965]: 2026-01-26 16:04:47.800 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 294 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 4.7 MiB/s wr, 176 op/s
Jan 26 16:04:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2164675406' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.292 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.294 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:04:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1731504224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.469 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.479 239969 DEBUG nova.compute.provider_tree [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.502 239969 DEBUG nova.scheduler.client.report [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.532 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.534 239969 DEBUG nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.559 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.592 239969 DEBUG nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.593 239969 DEBUG nova.network.neutron [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.616 239969 INFO nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.640 239969 DEBUG nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:04:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:04:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1135943739' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:04:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:04:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1135943739' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.723 239969 DEBUG nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.725 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.725 239969 INFO nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Creating image(s)
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.748 239969 DEBUG nova.storage.rbd_utils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] rbd image 824e5887-8524-4231-9b97-19fb738d44c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.774 239969 DEBUG nova.storage.rbd_utils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] rbd image 824e5887-8524-4231-9b97-19fb738d44c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.796 239969 DEBUG nova.storage.rbd_utils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] rbd image 824e5887-8524-4231-9b97-19fb738d44c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.800 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Acquiring lock "97237c0cf6e87fd02732432046b59f36d5396ac9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.801 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "97237c0cf6e87fd02732432046b59f36d5396ac9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3639997451' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018748338624447857 of space, bias 1.0, pg target 0.5624501587334357 quantized to 32 (current 32)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010162914778141876 of space, bias 1.0, pg target 0.30488744334425627 quantized to 32 (current 32)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.660072079749361e-07 of space, bias 4.0, pg target 0.0009192086495699234 quantized to 16 (current 16)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:04:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.865 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:48 compute-0 nova_compute[239965]: 2026-01-26 16:04:48.866 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.006 239969 DEBUG nova.virt.libvirt.imagebackend [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Image locations are: [{'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/1f22d538-7a57-47f9-99d8-a9b97ed410f2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/1f22d538-7a57-47f9-99d8-a9b97ed410f2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.071 239969 DEBUG nova.virt.libvirt.imagebackend [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Selected location: {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/1f22d538-7a57-47f9-99d8-a9b97ed410f2/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.072 239969 DEBUG nova.storage.rbd_utils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] cloning images/1f22d538-7a57-47f9-99d8-a9b97ed410f2@snap to None/824e5887-8524-4231-9b97-19fb738d44c2_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:04:49 compute-0 ceph-mon[75140]: pgmap v1557: 305 pgs: 305 active+clean; 294 MiB data, 669 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 4.7 MiB/s wr, 176 op/s
Jan 26 16:04:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2164675406' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1731504224' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1135943739' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:04:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1135943739' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:04:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3639997451' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.113 239969 DEBUG nova.network.neutron [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.114 239969 DEBUG nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.226 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "97237c0cf6e87fd02732432046b59f36d5396ac9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.365 239969 DEBUG nova.storage.rbd_utils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] resizing rbd image 824e5887-8524-4231-9b97-19fb738d44c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:04:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3839032438' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.472 239969 DEBUG nova.network.neutron [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Updating instance_info_cache with network_info: [{"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.479 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.481 239969 DEBUG nova.virt.libvirt.vif [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-5920657',display_name='tempest-ServerRescueTestJSON-server-5920657',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-5920657',id=80,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:04:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d91082a727584818a01c3c60e7f7ef73',ramdisk_id='',reservation_id='r-415rexkh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1881999610',owner_user_name='tempest-ServerRescueTestJSON-1881999610-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:04:30Z,user_data=None,user_id='7a6914254de3499da1b987d99b6e0ff6',uuid=477642fb-5695-4387-944d-587e17cf3ed8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1538815050-network", "vif_mac": "fa:16:3e:e7:14:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.481 239969 DEBUG nova.network.os_vif_util [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converting VIF {"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1538815050-network", "vif_mac": "fa:16:3e:e7:14:3e"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.482 239969 DEBUG nova.network.os_vif_util [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:14:3e,bridge_name='br-int',has_traffic_filtering=True,id=5083d232-fbe1-4440-91de-c2c31fae4b6b,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5083d232-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.483 239969 DEBUG nova.objects.instance [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.498 239969 DEBUG oslo_concurrency.lockutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Releasing lock "refresh_cache-d7f67ae9-2751-4c70-a750-ccc1940478de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.500 239969 DEBUG nova.compute.manager [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.503 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <uuid>477642fb-5695-4387-944d-587e17cf3ed8</uuid>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <name>instance-00000050</name>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerRescueTestJSON-server-5920657</nova:name>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:04:47</nova:creationTime>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <nova:user uuid="7a6914254de3499da1b987d99b6e0ff6">tempest-ServerRescueTestJSON-1881999610-project-member</nova:user>
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <nova:project uuid="d91082a727584818a01c3c60e7f7ef73">tempest-ServerRescueTestJSON-1881999610</nova:project>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <nova:port uuid="5083d232-fbe1-4440-91de-c2c31fae4b6b">
Jan 26 16:04:49 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <system>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <entry name="serial">477642fb-5695-4387-944d-587e17cf3ed8</entry>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <entry name="uuid">477642fb-5695-4387-944d-587e17cf3ed8</entry>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </system>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <os>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   </os>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <features>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   </features>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/477642fb-5695-4387-944d-587e17cf3ed8_disk.rescue">
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/477642fb-5695-4387-944d-587e17cf3ed8_disk">
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <target dev="vdb" bus="virtio"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/477642fb-5695-4387-944d-587e17cf3ed8_disk.config.rescue">
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e7:14:3e"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <target dev="tap5083d232-fb"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/console.log" append="off"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <video>
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </video>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:04:49 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:04:49 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:04:49 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:04:49 compute-0 nova_compute[239965]: </domain>
Jan 26 16:04:49 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.516 239969 INFO nova.virt.libvirt.driver [-] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance destroyed successfully.
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.612 239969 DEBUG nova.objects.instance [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lazy-loading 'migration_context' on Instance uuid 824e5887-8524-4231-9b97-19fb738d44c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.642 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.643 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Ensure instance console log exists: /var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.646 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.647 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.648 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.651 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='be5301a07ba9b91e7deb687a374b22f0',container_format='bare',created_at=2026-01-26T16:04:44Z,direct_url=<?>,disk_format='raw',id=1f22d538-7a57-47f9-99d8-a9b97ed410f2,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1919828229',owner='0ef852851e4e435f82709929f3babca1',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-26T16:04:45Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': '1f22d538-7a57-47f9-99d8-a9b97ed410f2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.658 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.658 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.658 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.659 239969 DEBUG nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No VIF found with MAC fa:16:3e:e7:14:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.659 239969 INFO nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Using config drive
Jan 26 16:04:49 compute-0 kernel: tap1704d3ae-53 (unregistering): left promiscuous mode
Jan 26 16:04:49 compute-0 NetworkManager[48954]: <info>  [1769443489.6695] device (tap1704d3ae-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:04:49 compute-0 ovn_controller[146046]: 2026-01-26T16:04:49Z|00765|binding|INFO|Releasing lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c from this chassis (sb_readonly=0)
Jan 26 16:04:49 compute-0 ovn_controller[146046]: 2026-01-26T16:04:49Z|00766|binding|INFO|Setting lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c down in Southbound
Jan 26 16:04:49 compute-0 ovn_controller[146046]: 2026-01-26T16:04:49Z|00767|binding|INFO|Removing iface tap1704d3ae-53 ovn-installed in OVS
Jan 26 16:04:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:49.696 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:77:4c 10.100.0.3'], port_security=['fa:16:3e:df:77:4c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd7f67ae9-2751-4c70-a750-ccc1940478de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5527f6a0b0004aa790a89b7a57853ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e9916c46-5670-4687-89a4-433216e0fc8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bce504e2-87b2-44d9-9048-27581ae62fa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1704d3ae-5330-41ec-8f1c-05df19fdb33c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:49.698 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1704d3ae-5330-41ec-8f1c-05df19fdb33c in datapath 2d7bbe96-2c16-44c7-ad13-09585bbd834e unbound from our chassis
Jan 26 16:04:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:49.700 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d7bbe96-2c16-44c7-ad13-09585bbd834e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:04:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:49.701 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[940b9da3-3ea9-47cc-8a2b-97cf2ec14536]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:49.702 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e namespace which is not needed anymore
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.707 239969 DEBUG nova.storage.rbd_utils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.720 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.723 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:49 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000051.scope: Deactivated successfully.
Jan 26 16:04:49 compute-0 systemd[1]: machine-qemu\x2d95\x2dinstance\x2d00000051.scope: Consumed 5.121s CPU time.
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.728 239969 WARNING nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:04:49 compute-0 systemd-machined[208061]: Machine qemu-95-instance-00000051 terminated.
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.733 239969 DEBUG nova.virt.libvirt.host [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.733 239969 DEBUG nova.virt.libvirt.host [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.738 239969 DEBUG nova.virt.libvirt.host [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.738 239969 DEBUG nova.virt.libvirt.host [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.738 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.739 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='be5301a07ba9b91e7deb687a374b22f0',container_format='bare',created_at=2026-01-26T16:04:44Z,direct_url=<?>,disk_format='raw',id=1f22d538-7a57-47f9-99d8-a9b97ed410f2,min_disk=0,min_ram=0,name='tempest-image-dependency-test-1919828229',owner='0ef852851e4e435f82709929f3babca1',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2026-01-26T16:04:45Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.739 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.739 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.740 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.740 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.740 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.740 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.741 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.741 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.741 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.741 239969 DEBUG nova.virt.hardware [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.745 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.793 239969 DEBUG nova.objects.instance [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.825 239969 DEBUG nova.objects.instance [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'keypairs' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.871 239969 INFO nova.virt.libvirt.driver [-] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Instance destroyed successfully.
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.871 239969 DEBUG nova.objects.instance [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lazy-loading 'resources' on Instance uuid d7f67ae9-2751-4c70-a750-ccc1940478de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:49 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[309502]: [NOTICE]   (309506) : haproxy version is 2.8.14-c23fe91
Jan 26 16:04:49 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[309502]: [NOTICE]   (309506) : path to executable is /usr/sbin/haproxy
Jan 26 16:04:49 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[309502]: [WARNING]  (309506) : Exiting Master process...
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.882 239969 DEBUG nova.virt.libvirt.vif [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:04:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1664472187',display_name='tempest-InstanceActionsTestJSON-server-1664472187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1664472187',id=81,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:04:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5527f6a0b0004aa790a89b7a57853ace',ramdisk_id='',reservation_id='r-81i0kmes',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1033896084',owner_user_name='tempest-InstanceActionsTestJSON-1033896084-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:04:49Z,user_data=None,user_id='fb2bc5dfd3a74496b6982288700ab306',uuid=d7f67ae9-2751-4c70-a750-ccc1940478de,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.883 239969 DEBUG nova.network.os_vif_util [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converting VIF {"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.883 239969 DEBUG nova.network.os_vif_util [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.884 239969 DEBUG os_vif [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:04:49 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[309502]: [ALERT]    (309506) : Current worker (309508) exited with code 143 (Terminated)
Jan 26 16:04:49 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[309502]: [WARNING]  (309506) : All workers exited. Exiting... (0)
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.885 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.886 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1704d3ae-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:49 compute-0 systemd[1]: libpod-26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e.scope: Deactivated successfully.
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.889 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.890 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.893 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:49 compute-0 podman[309955]: 2026-01-26 16:04:49.895546629 +0000 UTC m=+0.071438299 container died 26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.900 239969 INFO os_vif [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53')
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.908 239969 DEBUG nova.virt.libvirt.driver [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Start _get_guest_xml network_info=[{"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.914 239969 WARNING nova.virt.libvirt.driver [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.917 239969 DEBUG nova.virt.libvirt.host [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.918 239969 DEBUG nova.virt.libvirt.host [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.923 239969 DEBUG nova.virt.libvirt.host [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.923 239969 DEBUG nova.virt.libvirt.host [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.923 239969 DEBUG nova.virt.libvirt.driver [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.924 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.924 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.924 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.924 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.924 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.925 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.925 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.925 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.925 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.925 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.925 239969 DEBUG nova.virt.hardware [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.926 239969 DEBUG nova.objects.instance [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lazy-loading 'vcpu_model' on Instance uuid d7f67ae9-2751-4c70-a750-ccc1940478de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e-userdata-shm.mount: Deactivated successfully.
Jan 26 16:04:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6252641b2c7ce5dc2935749989ec2c37cea8b2506948c0f0e1147afd24cfa67-merged.mount: Deactivated successfully.
Jan 26 16:04:49 compute-0 nova_compute[239965]: 2026-01-26 16:04:49.948 239969 DEBUG oslo_concurrency.processutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:49 compute-0 podman[309955]: 2026-01-26 16:04:49.950765429 +0000 UTC m=+0.126657109 container cleanup 26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:04:49 compute-0 systemd[1]: libpod-conmon-26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e.scope: Deactivated successfully.
Jan 26 16:04:50 compute-0 podman[310017]: 2026-01-26 16:04:50.033255157 +0000 UTC m=+0.058598835 container remove 26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:04:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 336 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.5 MiB/s wr, 209 op/s
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.040 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[694c0b26-074d-4452-915d-e6174469b633]: (4, ('Mon Jan 26 04:04:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e (26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e)\n26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e\nMon Jan 26 04:04:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e (26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e)\n26869f62d462503632206b46e89cac419d14de23c6226d77ce4b91932280109e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.043 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11b75201-8356-4fc4-b141-3a477900ac78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.044 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d7bbe96-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:50 compute-0 kernel: tap2d7bbe96-20: left promiscuous mode
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.046 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.067 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.070 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e542d5b6-5125-4d82-af8c-cda4dc008954]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.084 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[64aeeecc-a192-4a77-ab39-7b042ebf348e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.087 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5d1f64-29e9-439a-88b9-b91d9579e6d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.113 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c2bb14c9-07c1-4529-9fc5-736d5e702989]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 497540, 'reachable_time': 19147, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310043, 'error': None, 'target': 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d7bbe96\x2d2c16\x2d44c7\x2dad13\x2d09585bbd834e.mount: Deactivated successfully.
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.118 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.118 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[c3daa29e-062a-4d85-aca3-dbb25bdc8005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3839032438' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.135 239969 DEBUG nova.compute.manager [req-4c253c31-80bf-46cd-b9fb-6163aaf303be req-2b1957c1-c516-4b35-9ac9-ad508a0591cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.135 239969 DEBUG oslo_concurrency.lockutils [req-4c253c31-80bf-46cd-b9fb-6163aaf303be req-2b1957c1-c516-4b35-9ac9-ad508a0591cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.136 239969 DEBUG oslo_concurrency.lockutils [req-4c253c31-80bf-46cd-b9fb-6163aaf303be req-2b1957c1-c516-4b35-9ac9-ad508a0591cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.136 239969 DEBUG oslo_concurrency.lockutils [req-4c253c31-80bf-46cd-b9fb-6163aaf303be req-2b1957c1-c516-4b35-9ac9-ad508a0591cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.136 239969 DEBUG nova.compute.manager [req-4c253c31-80bf-46cd-b9fb-6163aaf303be req-2b1957c1-c516-4b35-9ac9-ad508a0591cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] No waiting events found dispatching network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.136 239969 WARNING nova.compute.manager [req-4c253c31-80bf-46cd-b9fb-6163aaf303be req-2b1957c1-c516-4b35-9ac9-ad508a0591cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received unexpected event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b for instance with vm_state active and task_state rescuing.
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.245 239969 INFO nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Creating config drive at /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config.rescue
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.254 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3f37hu2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2403826674' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.383 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.412 239969 DEBUG nova.storage.rbd_utils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] rbd image 824e5887-8524-4231-9b97-19fb738d44c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.417 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.450 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp3f37hu2" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.476 239969 DEBUG nova.storage.rbd_utils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 477642fb-5695-4387-944d-587e17cf3ed8_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.480 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config.rescue 477642fb-5695-4387-944d-587e17cf3ed8_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.519 239969 DEBUG nova.compute.manager [req-5d4b6726-65dd-4687-a72b-19d78c8922b0 req-d5591469-3eeb-429e-8107-d08a8cac0014 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-unplugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.519 239969 DEBUG oslo_concurrency.lockutils [req-5d4b6726-65dd-4687-a72b-19d78c8922b0 req-d5591469-3eeb-429e-8107-d08a8cac0014 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.520 239969 DEBUG oslo_concurrency.lockutils [req-5d4b6726-65dd-4687-a72b-19d78c8922b0 req-d5591469-3eeb-429e-8107-d08a8cac0014 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.520 239969 DEBUG oslo_concurrency.lockutils [req-5d4b6726-65dd-4687-a72b-19d78c8922b0 req-d5591469-3eeb-429e-8107-d08a8cac0014 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.520 239969 DEBUG nova.compute.manager [req-5d4b6726-65dd-4687-a72b-19d78c8922b0 req-d5591469-3eeb-429e-8107-d08a8cac0014 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] No waiting events found dispatching network-vif-unplugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.521 239969 WARNING nova.compute.manager [req-5d4b6726-65dd-4687-a72b-19d78c8922b0 req-d5591469-3eeb-429e-8107-d08a8cac0014 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received unexpected event network-vif-unplugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c for instance with vm_state active and task_state reboot_started_hard.
Jan 26 16:04:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2054150951' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.613 239969 DEBUG oslo_concurrency.processutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.665s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.641 239969 DEBUG oslo_concurrency.processutils [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config.rescue 477642fb-5695-4387-944d-587e17cf3ed8_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.642 239969 INFO nova.virt.libvirt.driver [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Deleting local config drive /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8/disk.config.rescue because it was imported into RBD.
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.647 239969 DEBUG oslo_concurrency.processutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:50 compute-0 kernel: tap5083d232-fb: entered promiscuous mode
Jan 26 16:04:50 compute-0 systemd-udevd[309934]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.712 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:50 compute-0 ovn_controller[146046]: 2026-01-26T16:04:50Z|00768|binding|INFO|Claiming lport 5083d232-fbe1-4440-91de-c2c31fae4b6b for this chassis.
Jan 26 16:04:50 compute-0 ovn_controller[146046]: 2026-01-26T16:04:50Z|00769|binding|INFO|5083d232-fbe1-4440-91de-c2c31fae4b6b: Claiming fa:16:3e:e7:14:3e 10.100.0.7
Jan 26 16:04:50 compute-0 NetworkManager[48954]: <info>  [1769443490.7153] manager: (tap5083d232-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/339)
Jan 26 16:04:50 compute-0 NetworkManager[48954]: <info>  [1769443490.7251] device (tap5083d232-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:04:50 compute-0 NetworkManager[48954]: <info>  [1769443490.7257] device (tap5083d232-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.734 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:14:3e 10.100.0.7'], port_security=['fa:16:3e:e7:14:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '477642fb-5695-4387-944d-587e17cf3ed8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5083d232-fbe1-4440-91de-c2c31fae4b6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.736 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5083d232-fbe1-4440-91de-c2c31fae4b6b in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 bound to our chassis
Jan 26 16:04:50 compute-0 ovn_controller[146046]: 2026-01-26T16:04:50Z|00770|binding|INFO|Setting lport 5083d232-fbe1-4440-91de-c2c31fae4b6b ovn-installed in OVS
Jan 26 16:04:50 compute-0 ovn_controller[146046]: 2026-01-26T16:04:50Z|00771|binding|INFO|Setting lport 5083d232-fbe1-4440-91de-c2c31fae4b6b up in Southbound
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.736 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.737 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:50.737 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[326263ec-0a1c-4c95-8fdc-322ceee46ac2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.740 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:50 compute-0 systemd-machined[208061]: New machine qemu-96-instance-00000050.
Jan 26 16:04:50 compute-0 systemd[1]: Started Virtual Machine qemu-96-instance-00000050.
Jan 26 16:04:50 compute-0 nova_compute[239965]: 2026-01-26 16:04:50.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626523687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.033 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.035 239969 DEBUG nova.objects.instance [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 824e5887-8524-4231-9b97-19fb738d44c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.061 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <uuid>824e5887-8524-4231-9b97-19fb738d44c2</uuid>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <name>instance-00000052</name>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:name>instance-depend-image</nova:name>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:04:49</nova:creationTime>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:user uuid="893b3bc39877452fa2e3799a61f3e715">tempest-ImageDependencyTests-1544866620-project-member</nova:user>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:project uuid="0ef852851e4e435f82709929f3babca1">tempest-ImageDependencyTests-1544866620</nova:project>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="1f22d538-7a57-47f9-99d8-a9b97ed410f2"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <system>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="serial">824e5887-8524-4231-9b97-19fb738d44c2</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="uuid">824e5887-8524-4231-9b97-19fb738d44c2</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </system>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <os>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </os>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <features>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </features>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/824e5887-8524-4231-9b97-19fb738d44c2_disk">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/824e5887-8524-4231-9b97-19fb738d44c2_disk.config">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2/console.log" append="off"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <video>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </video>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:04:51 compute-0 nova_compute[239965]: </domain>
Jan 26 16:04:51 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.114 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.114 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.115 239969 INFO nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Using config drive
Jan 26 16:04:51 compute-0 ceph-mon[75140]: pgmap v1558: 305 pgs: 305 active+clean; 336 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.5 MiB/s wr, 209 op/s
Jan 26 16:04:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2403826674' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2054150951' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1626523687' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.145 239969 DEBUG nova.storage.rbd_utils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] rbd image 824e5887-8524-4231-9b97-19fb738d44c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.284 239969 INFO nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Creating config drive at /var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2/disk.config
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.309 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaa6c4wmg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:04:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554237135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.353 239969 DEBUG oslo_concurrency.processutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.355 239969 DEBUG nova.virt.libvirt.vif [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:04:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1664472187',display_name='tempest-InstanceActionsTestJSON-server-1664472187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1664472187',id=81,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:04:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5527f6a0b0004aa790a89b7a57853ace',ramdisk_id='',reservation_id='r-81i0kmes',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1033896084',owner_user_name='tempest-InstanceActionsTestJSON-1033896084-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:04:49Z,user_data=None,user_id='fb2bc5dfd3a74496b6982288700ab306',uuid=d7f67ae9-2751-4c70-a750-ccc1940478de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.356 239969 DEBUG nova.network.os_vif_util [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converting VIF {"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.356 239969 DEBUG nova.network.os_vif_util [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.357 239969 DEBUG nova.objects.instance [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lazy-loading 'pci_devices' on Instance uuid d7f67ae9-2751-4c70-a750-ccc1940478de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.368 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 477642fb-5695-4387-944d-587e17cf3ed8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.369 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443491.367382, 477642fb-5695-4387-944d-587e17cf3ed8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.369 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] VM Resumed (Lifecycle Event)
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.375 239969 DEBUG nova.virt.libvirt.driver [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <uuid>d7f67ae9-2751-4c70-a750-ccc1940478de</uuid>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <name>instance-00000051</name>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:name>tempest-InstanceActionsTestJSON-server-1664472187</nova:name>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:04:49</nova:creationTime>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:user uuid="fb2bc5dfd3a74496b6982288700ab306">tempest-InstanceActionsTestJSON-1033896084-project-member</nova:user>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:project uuid="5527f6a0b0004aa790a89b7a57853ace">tempest-InstanceActionsTestJSON-1033896084</nova:project>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <nova:port uuid="1704d3ae-5330-41ec-8f1c-05df19fdb33c">
Jan 26 16:04:51 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <system>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="serial">d7f67ae9-2751-4c70-a750-ccc1940478de</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="uuid">d7f67ae9-2751-4c70-a750-ccc1940478de</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </system>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <os>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </os>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <features>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </features>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d7f67ae9-2751-4c70-a750-ccc1940478de_disk">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d7f67ae9-2751-4c70-a750-ccc1940478de_disk.config">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </source>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:04:51 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:df:77:4c"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <target dev="tap1704d3ae-53"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de/console.log" append="off"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <video>
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </video>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:04:51 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:04:51 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:04:51 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:04:51 compute-0 nova_compute[239965]: </domain>
Jan 26 16:04:51 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.376 239969 DEBUG nova.virt.libvirt.driver [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.376 239969 DEBUG nova.virt.libvirt.driver [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] skipping disk for instance-00000051 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.377 239969 DEBUG nova.virt.libvirt.vif [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:04:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1664472187',display_name='tempest-InstanceActionsTestJSON-server-1664472187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1664472187',id=81,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:04:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='5527f6a0b0004aa790a89b7a57853ace',ramdisk_id='',reservation_id='r-81i0kmes',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1033896084',owner_user_name='tempest-InstanceActionsTestJSON-1033896084-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:04:49Z,user_data=None,user_id='fb2bc5dfd3a74496b6982288700ab306',uuid=d7f67ae9-2751-4c70-a750-ccc1940478de,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.378 239969 DEBUG nova.network.os_vif_util [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converting VIF {"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.379 239969 DEBUG nova.network.os_vif_util [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.379 239969 DEBUG os_vif [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.380 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.381 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.381 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.384 239969 DEBUG nova.compute.manager [None req-717b164f-e152-4118-b11d-b54f55c5edb7 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.386 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.386 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1704d3ae-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.386 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1704d3ae-53, col_values=(('external_ids', {'iface-id': '1704d3ae-5330-41ec-8f1c-05df19fdb33c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:df:77:4c', 'vm-uuid': 'd7f67ae9-2751-4c70-a750-ccc1940478de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:51 compute-0 NetworkManager[48954]: <info>  [1769443491.3895] manager: (tap1704d3ae-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/340)
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.388 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.390 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.393 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.404 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.407 239969 INFO os_vif [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53')
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.411 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.451 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.452 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443491.3726811, 477642fb-5695-4387-944d-587e17cf3ed8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.452 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] VM Started (Lifecycle Event)
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.454 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaa6c4wmg" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.486 239969 DEBUG nova.storage.rbd_utils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] rbd image 824e5887-8524-4231-9b97-19fb738d44c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.495 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2/disk.config 824e5887-8524-4231-9b97-19fb738d44c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:51 compute-0 NetworkManager[48954]: <info>  [1769443491.5033] manager: (tap1704d3ae-53): new Tun device (/org/freedesktop/NetworkManager/Devices/341)
Jan 26 16:04:51 compute-0 kernel: tap1704d3ae-53: entered promiscuous mode
Jan 26 16:04:51 compute-0 ovn_controller[146046]: 2026-01-26T16:04:51Z|00772|binding|INFO|Claiming lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c for this chassis.
Jan 26 16:04:51 compute-0 ovn_controller[146046]: 2026-01-26T16:04:51Z|00773|binding|INFO|1704d3ae-5330-41ec-8f1c-05df19fdb33c: Claiming fa:16:3e:df:77:4c 10.100.0.3
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.514 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:77:4c 10.100.0.3'], port_security=['fa:16:3e:df:77:4c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd7f67ae9-2751-4c70-a750-ccc1940478de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5527f6a0b0004aa790a89b7a57853ace', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'e9916c46-5670-4687-89a4-433216e0fc8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bce504e2-87b2-44d9-9048-27581ae62fa5, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1704d3ae-5330-41ec-8f1c-05df19fdb33c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.516 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1704d3ae-5330-41ec-8f1c-05df19fdb33c in datapath 2d7bbe96-2c16-44c7-ad13-09585bbd834e bound to our chassis
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.518 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2d7bbe96-2c16-44c7-ad13-09585bbd834e
Jan 26 16:04:51 compute-0 NetworkManager[48954]: <info>  [1769443491.5206] device (tap1704d3ae-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:04:51 compute-0 NetworkManager[48954]: <info>  [1769443491.5212] device (tap1704d3ae-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:04:51 compute-0 ovn_controller[146046]: 2026-01-26T16:04:51Z|00774|binding|INFO|Setting lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c ovn-installed in OVS
Jan 26 16:04:51 compute-0 ovn_controller[146046]: 2026-01-26T16:04:51Z|00775|binding|INFO|Setting lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c up in Southbound
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.537 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[55d6ebb3-4096-4d88-a208-4b934ad563cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.538 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2d7bbe96-21 in ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.540 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2d7bbe96-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.540 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8c80fa21-e8a3-45cd-a163-9ffa68120626]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.540 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.542 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6227a8e0-ba49-4092-8f0d-a5e99a4d83fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 systemd-machined[208061]: New machine qemu-97-instance-00000051.
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.559 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[56bc78e1-2f28-4e3b-80b8-ebcf17eb8d00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.560 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.571 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:51 compute-0 systemd[1]: Started Virtual Machine qemu-97-instance-00000051.
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.592 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0513ef-4801-4493-bd3b-6e30d742882e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.633 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[386be2de-5eec-4b0a-8b64-9df12a221e9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.638 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8ce5d627-295b-4ead-ab8f-ca2ab847a0e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 NetworkManager[48954]: <info>  [1769443491.6397] manager: (tap2d7bbe96-20): new Veth device (/org/freedesktop/NetworkManager/Devices/342)
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.673 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a7ca9086-11e8-4264-b9fa-d78fb3ee4913]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.677 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f98ac3-be86-40c6-8788-d31fa26e8a6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.698 239969 DEBUG oslo_concurrency.processutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2/disk.config 824e5887-8524-4231-9b97-19fb738d44c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.698 239969 INFO nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Deleting local config drive /var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2/disk.config because it was imported into RBD.
Jan 26 16:04:51 compute-0 NetworkManager[48954]: <info>  [1769443491.7018] device (tap2d7bbe96-20): carrier: link connected
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.709 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[43d88651-b7d9-4c3a-83d8-78aaeb9133bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.732 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1c2826dd-68c7-48b6-a08c-cd3492ed78ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d7bbe96-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:c4:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498250, 'reachable_time': 26239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310360, 'error': None, 'target': 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.751 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[18ba6b3e-1de4-4d31-b667-a02492f3d8c5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef6:c4c7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 498250, 'tstamp': 498250}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310372, 'error': None, 'target': 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 systemd-machined[208061]: New machine qemu-98-instance-00000052.
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.771 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c35b7428-c142-4925-955f-de92ed59f71b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2d7bbe96-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f6:c4:c7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 237], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498250, 'reachable_time': 26239, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310385, 'error': None, 'target': 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 systemd[1]: Started Virtual Machine qemu-98-instance-00000052.
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.809 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0830526a-9a9a-4753-9cfc-3fa55f16265e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.887 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1fcb0ed5-0f0d-4867-b582-ff2305812c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.890 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d7bbe96-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.890 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.891 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2d7bbe96-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.893 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 NetworkManager[48954]: <info>  [1769443491.8944] manager: (tap2d7bbe96-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/343)
Jan 26 16:04:51 compute-0 kernel: tap2d7bbe96-20: entered promiscuous mode
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.900 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2d7bbe96-20, col_values=(('external_ids', {'iface-id': '06a7b7d1-8946-4364-b107-f4545efac613'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.901 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 ovn_controller[146046]: 2026-01-26T16:04:51Z|00776|binding|INFO|Releasing lport 06a7b7d1-8946-4364-b107-f4545efac613 from this chassis (sb_readonly=0)
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.902 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.903 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2d7bbe96-2c16-44c7-ad13-09585bbd834e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2d7bbe96-2c16-44c7-ad13-09585bbd834e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.904 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0cf64da7-5f14-4df9-908e-e314ae710175]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.905 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-2d7bbe96-2c16-44c7-ad13-09585bbd834e
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/2d7bbe96-2c16-44c7-ad13-09585bbd834e.pid.haproxy
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 2d7bbe96-2c16-44c7-ad13-09585bbd834e
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:04:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:51.907 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'env', 'PROCESS_TAG=haproxy-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2d7bbe96-2c16-44c7-ad13-09585bbd834e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.920 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.954 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for d7f67ae9-2751-4c70-a750-ccc1940478de due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.955 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443491.9541163, d7f67ae9-2751-4c70-a750-ccc1940478de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.955 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] VM Resumed (Lifecycle Event)
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.961 239969 DEBUG nova.compute.manager [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.966 239969 INFO nova.virt.libvirt.driver [-] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Instance rebooted successfully.
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.967 239969 DEBUG nova.compute.manager [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.974 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:51 compute-0 nova_compute[239965]: 2026-01-26 16:04:51.981 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.005 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.005 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443491.9598672, d7f67ae9-2751-4c70-a750-ccc1940478de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.005 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] VM Started (Lifecycle Event)
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.032 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.037 239969 DEBUG oslo_concurrency.lockutils [None req-8ad3ac5f-c5d2-4f97-82de-04730cff9992 fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.580s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 341 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 241 op/s
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.043 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3554237135' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.233 239969 DEBUG nova.compute.manager [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.233 239969 DEBUG oslo_concurrency.lockutils [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.234 239969 DEBUG oslo_concurrency.lockutils [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.234 239969 DEBUG oslo_concurrency.lockutils [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.234 239969 DEBUG nova.compute.manager [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] No waiting events found dispatching network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.234 239969 WARNING nova.compute.manager [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received unexpected event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b for instance with vm_state rescued and task_state None.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.234 239969 DEBUG nova.compute.manager [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.235 239969 DEBUG oslo_concurrency.lockutils [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.235 239969 DEBUG oslo_concurrency.lockutils [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.235 239969 DEBUG oslo_concurrency.lockutils [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.235 239969 DEBUG nova.compute.manager [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] No waiting events found dispatching network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.235 239969 WARNING nova.compute.manager [req-06673ff4-fa8f-4fe0-aeb5-6be5167cfd14 req-00771915-c2c1-4f48-a5b3-f5b865ec3c44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received unexpected event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b for instance with vm_state rescued and task_state None.
Jan 26 16:04:52 compute-0 podman[310448]: 2026-01-26 16:04:52.330098312 +0000 UTC m=+0.059900125 container create 01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:04:52 compute-0 systemd[1]: Started libpod-conmon-01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118.scope.
Jan 26 16:04:52 compute-0 podman[310448]: 2026-01-26 16:04:52.29890773 +0000 UTC m=+0.028709573 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:04:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:04:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2e92a08ac0db6f327168936ba3dd1e548cab9adf560fa638fd3a6a5662454c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:04:52 compute-0 podman[310448]: 2026-01-26 16:04:52.427515276 +0000 UTC m=+0.157317109 container init 01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:04:52 compute-0 podman[310448]: 2026-01-26 16:04:52.434414485 +0000 UTC m=+0.164216298 container start 01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:04:52 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[310494]: [NOTICE]   (310505) : New worker (310507) forked
Jan 26 16:04:52 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[310494]: [NOTICE]   (310505) : Loading success.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.506 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443492.5065591, 824e5887-8524-4231-9b97-19fb738d44c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.507 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] VM Resumed (Lifecycle Event)
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.511 239969 DEBUG nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.511 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.515 239969 INFO nova.virt.libvirt.driver [-] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Instance spawned successfully.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.515 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.530 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.539 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.543 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.543 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.544 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.544 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.545 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.545 239969 DEBUG nova.virt.libvirt.driver [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.552 239969 DEBUG nova.compute.manager [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.553 239969 DEBUG oslo_concurrency.lockutils [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.553 239969 DEBUG oslo_concurrency.lockutils [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.553 239969 DEBUG oslo_concurrency.lockutils [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.553 239969 DEBUG nova.compute.manager [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] No waiting events found dispatching network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.554 239969 WARNING nova.compute.manager [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received unexpected event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c for instance with vm_state active and task_state None.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.554 239969 DEBUG nova.compute.manager [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.554 239969 DEBUG oslo_concurrency.lockutils [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.554 239969 DEBUG oslo_concurrency.lockutils [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.555 239969 DEBUG oslo_concurrency.lockutils [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.555 239969 DEBUG nova.compute.manager [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] No waiting events found dispatching network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.555 239969 WARNING nova.compute.manager [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received unexpected event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c for instance with vm_state active and task_state None.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.555 239969 DEBUG nova.compute.manager [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.555 239969 DEBUG oslo_concurrency.lockutils [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.556 239969 DEBUG oslo_concurrency.lockutils [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.556 239969 DEBUG oslo_concurrency.lockutils [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.556 239969 DEBUG nova.compute.manager [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] No waiting events found dispatching network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.556 239969 WARNING nova.compute.manager [req-78251c74-14ad-48ee-880a-898d122c818e req-b406a699-77b1-4fb6-821e-9ccadf463762 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received unexpected event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c for instance with vm_state active and task_state None.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.578 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.579 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443492.5098166, 824e5887-8524-4231-9b97-19fb738d44c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.579 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] VM Started (Lifecycle Event)
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.607 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.611 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.619 239969 INFO nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Took 3.89 seconds to spawn the instance on the hypervisor.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.619 239969 DEBUG nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.629 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.684 239969 INFO nova.compute.manager [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Took 5.18 seconds to build instance.
Jan 26 16:04:52 compute-0 nova_compute[239965]: 2026-01-26 16:04:52.703 239969 DEBUG oslo_concurrency.lockutils [None req-718fde78-800e-4e87-aa10-cf07ce15d25f 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "824e5887-8524-4231-9b97-19fb738d44c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:53 compute-0 ceph-mon[75140]: pgmap v1559: 305 pgs: 305 active+clean; 341 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 241 op/s
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.466 239969 DEBUG oslo_concurrency.lockutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.467 239969 DEBUG oslo_concurrency.lockutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.467 239969 DEBUG oslo_concurrency.lockutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.468 239969 DEBUG oslo_concurrency.lockutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.468 239969 DEBUG oslo_concurrency.lockutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.470 239969 INFO nova.compute.manager [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Terminating instance
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.471 239969 DEBUG nova.compute.manager [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:04:53 compute-0 kernel: tap1704d3ae-53 (unregistering): left promiscuous mode
Jan 26 16:04:53 compute-0 NetworkManager[48954]: <info>  [1769443493.5103] device (tap1704d3ae-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.525 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:53 compute-0 ovn_controller[146046]: 2026-01-26T16:04:53Z|00777|binding|INFO|Releasing lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c from this chassis (sb_readonly=0)
Jan 26 16:04:53 compute-0 ovn_controller[146046]: 2026-01-26T16:04:53Z|00778|binding|INFO|Setting lport 1704d3ae-5330-41ec-8f1c-05df19fdb33c down in Southbound
Jan 26 16:04:53 compute-0 ovn_controller[146046]: 2026-01-26T16:04:53Z|00779|binding|INFO|Removing iface tap1704d3ae-53 ovn-installed in OVS
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.529 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.534 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:df:77:4c 10.100.0.3'], port_security=['fa:16:3e:df:77:4c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd7f67ae9-2751-4c70-a750-ccc1940478de', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5527f6a0b0004aa790a89b7a57853ace', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'e9916c46-5670-4687-89a4-433216e0fc8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bce504e2-87b2-44d9-9048-27581ae62fa5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1704d3ae-5330-41ec-8f1c-05df19fdb33c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.536 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1704d3ae-5330-41ec-8f1c-05df19fdb33c in datapath 2d7bbe96-2c16-44c7-ad13-09585bbd834e unbound from our chassis
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.538 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2d7bbe96-2c16-44c7-ad13-09585bbd834e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.539 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[28f2771b-bbcb-4d44-8b97-94bafb746144]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.539 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e namespace which is not needed anymore
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.553 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:53 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000051.scope: Deactivated successfully.
Jan 26 16:04:53 compute-0 systemd[1]: machine-qemu\x2d97\x2dinstance\x2d00000051.scope: Consumed 1.959s CPU time.
Jan 26 16:04:53 compute-0 systemd-machined[208061]: Machine qemu-97-instance-00000051 terminated.
Jan 26 16:04:53 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[310494]: [NOTICE]   (310505) : haproxy version is 2.8.14-c23fe91
Jan 26 16:04:53 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[310494]: [NOTICE]   (310505) : path to executable is /usr/sbin/haproxy
Jan 26 16:04:53 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[310494]: [WARNING]  (310505) : Exiting Master process...
Jan 26 16:04:53 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[310494]: [ALERT]    (310505) : Current worker (310507) exited with code 143 (Terminated)
Jan 26 16:04:53 compute-0 neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e[310494]: [WARNING]  (310505) : All workers exited. Exiting... (0)
Jan 26 16:04:53 compute-0 systemd[1]: libpod-01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118.scope: Deactivated successfully.
Jan 26 16:04:53 compute-0 podman[310538]: 2026-01-26 16:04:53.704708379 +0000 UTC m=+0.049668926 container died 01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.713 239969 INFO nova.virt.libvirt.driver [-] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Instance destroyed successfully.
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.715 239969 DEBUG nova.objects.instance [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lazy-loading 'resources' on Instance uuid d7f67ae9-2751-4c70-a750-ccc1940478de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118-userdata-shm.mount: Deactivated successfully.
Jan 26 16:04:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e2e92a08ac0db6f327168936ba3dd1e548cab9adf560fa638fd3a6a5662454c-merged.mount: Deactivated successfully.
Jan 26 16:04:53 compute-0 podman[310538]: 2026-01-26 16:04:53.75954175 +0000 UTC m=+0.104502267 container cleanup 01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:04:53 compute-0 systemd[1]: libpod-conmon-01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118.scope: Deactivated successfully.
Jan 26 16:04:53 compute-0 podman[310581]: 2026-01-26 16:04:53.825528345 +0000 UTC m=+0.042069191 container remove 01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.827 239969 DEBUG nova.virt.libvirt.vif [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:04:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1664472187',display_name='tempest-InstanceActionsTestJSON-server-1664472187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1664472187',id=81,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:04:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5527f6a0b0004aa790a89b7a57853ace',ramdisk_id='',reservation_id='r-81i0kmes',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-1033896084',owner_user_name='tempest-InstanceActionsTestJSON-1033896084-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:04:52Z,user_data=None,user_id='fb2bc5dfd3a74496b6982288700ab306',uuid=d7f67ae9-2751-4c70-a750-ccc1940478de,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.827 239969 DEBUG nova.network.os_vif_util [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converting VIF {"id": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "address": "fa:16:3e:df:77:4c", "network": {"id": "2d7bbe96-2c16-44c7-ad13-09585bbd834e", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-2068556559-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5527f6a0b0004aa790a89b7a57853ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1704d3ae-53", "ovs_interfaceid": "1704d3ae-5330-41ec-8f1c-05df19fdb33c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.828 239969 DEBUG nova.network.os_vif_util [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.828 239969 DEBUG os_vif [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.830 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.830 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1704d3ae-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.831 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.834 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.837 239969 INFO os_vif [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:df:77:4c,bridge_name='br-int',has_traffic_filtering=True,id=1704d3ae-5330-41ec-8f1c-05df19fdb33c,network=Network(2d7bbe96-2c16-44c7-ad13-09585bbd834e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1704d3ae-53')
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.843 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bc06c6ca-907a-4e79-af96-4445c3ff38b7]: (4, ('Mon Jan 26 04:04:53 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e (01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118)\n01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118\nMon Jan 26 04:04:53 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e (01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118)\n01fdce2fc2249168c830cf8eb8a3f63c2538de6e87f282edd4a34081aab49118\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.845 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e52ea6f6-4355-4b51-ae1e-aa371a3b2808]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.846 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2d7bbe96-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:53 compute-0 kernel: tap2d7bbe96-20: left promiscuous mode
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.867 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[af8ae760-9fb0-4da3-b069-53d21f4d2e06]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.872 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:53 compute-0 nova_compute[239965]: 2026-01-26 16:04:53.882 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.884 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2c497b6e-4c8b-4edd-9269-b7de7a8ed775]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.885 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[08e304aa-d2af-437b-88d8-03d659cd6ae0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.908 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bc814303-80ed-46bb-8c3f-4eb32dea99f0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 498243, 'reachable_time': 39887, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310611, 'error': None, 'target': 'ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d2d7bbe96\x2d2c16\x2d44c7\x2dad13\x2d09585bbd834e.mount: Deactivated successfully.
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.918 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2d7bbe96-2c16-44c7-ad13-09585bbd834e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:04:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:53.918 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[b61eed4b-d5e4-43fc-9a49-d330ae5b7e51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:04:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 341 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 241 op/s
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.139 239969 INFO nova.virt.libvirt.driver [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Deleting instance files /var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de_del
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.140 239969 INFO nova.virt.libvirt.driver [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Deletion of /var/lib/nova/instances/d7f67ae9-2751-4c70-a750-ccc1940478de_del complete
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.167 239969 DEBUG nova.compute.manager [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:04:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:54.169 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.223 239969 INFO nova.compute.manager [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.224 239969 DEBUG oslo.service.loopingcall [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.224 239969 DEBUG nova.compute.manager [-] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.224 239969 DEBUG nova.network.neutron [-] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.228 239969 INFO nova.compute.manager [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] instance snapshotting
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.533 239969 INFO nova.virt.libvirt.driver [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Beginning live snapshot process
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.679 239969 DEBUG nova.storage.rbd_utils [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] creating snapshot(e9a7223e60914e2b802046d19aa4d8b0) on rbd image(824e5887-8524-4231-9b97-19fb738d44c2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.721 239969 DEBUG nova.compute.manager [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-unplugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.722 239969 DEBUG oslo_concurrency.lockutils [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.723 239969 DEBUG oslo_concurrency.lockutils [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.723 239969 DEBUG oslo_concurrency.lockutils [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.723 239969 DEBUG nova.compute.manager [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] No waiting events found dispatching network-vif-unplugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.723 239969 DEBUG nova.compute.manager [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-unplugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.724 239969 DEBUG nova.compute.manager [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.724 239969 DEBUG oslo_concurrency.lockutils [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.724 239969 DEBUG oslo_concurrency.lockutils [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.725 239969 DEBUG oslo_concurrency.lockutils [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.725 239969 DEBUG nova.compute.manager [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] No waiting events found dispatching network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:04:54 compute-0 nova_compute[239965]: 2026-01-26 16:04:54.726 239969 WARNING nova.compute.manager [req-d77c4b34-fc4f-489d-944a-532b57609927 req-ba9a8c94-c352-4bfe-97bd-95622b4b712a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received unexpected event network-vif-plugged-1704d3ae-5330-41ec-8f1c-05df19fdb33c for instance with vm_state active and task_state deleting.
Jan 26 16:04:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Jan 26 16:04:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Jan 26 16:04:55 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Jan 26 16:04:55 compute-0 ceph-mon[75140]: pgmap v1560: 305 pgs: 305 active+clean; 341 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.8 MiB/s wr, 241 op/s
Jan 26 16:04:55 compute-0 nova_compute[239965]: 2026-01-26 16:04:55.214 239969 DEBUG nova.storage.rbd_utils [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] cloning vms/824e5887-8524-4231-9b97-19fb738d44c2_disk@e9a7223e60914e2b802046d19aa4d8b0 to images/5d36e375-d5a7-43e8-8194-b14c65c46692 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:04:55 compute-0 nova_compute[239965]: 2026-01-26 16:04:55.257 239969 DEBUG nova.network.neutron [-] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:04:55 compute-0 nova_compute[239965]: 2026-01-26 16:04:55.279 239969 INFO nova.compute.manager [-] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Took 1.06 seconds to deallocate network for instance.
Jan 26 16:04:55 compute-0 nova_compute[239965]: 2026-01-26 16:04:55.327 239969 DEBUG nova.storage.rbd_utils [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] flattening images/5d36e375-d5a7-43e8-8194-b14c65c46692 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:04:55 compute-0 nova_compute[239965]: 2026-01-26 16:04:55.468 239969 DEBUG nova.storage.rbd_utils [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] removing snapshot(e9a7223e60914e2b802046d19aa4d8b0) on rbd image(824e5887-8524-4231-9b97-19fb738d44c2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:04:55 compute-0 nova_compute[239965]: 2026-01-26 16:04:55.557 239969 DEBUG oslo_concurrency.lockutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:55 compute-0 nova_compute[239965]: 2026-01-26 16:04:55.558 239969 DEBUG oslo_concurrency.lockutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:55 compute-0 nova_compute[239965]: 2026-01-26 16:04:55.673 239969 DEBUG oslo_concurrency.processutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:55 compute-0 nova_compute[239965]: 2026-01-26 16:04:55.943 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 295 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.2 MiB/s wr, 420 op/s
Jan 26 16:04:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Jan 26 16:04:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Jan 26 16:04:56 compute-0 ceph-mon[75140]: osdmap e238: 3 total, 3 up, 3 in
Jan 26 16:04:56 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.223 239969 DEBUG nova.storage.rbd_utils [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] creating snapshot(snap) on rbd image(5d36e375-d5a7-43e8-8194-b14c65c46692) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.265 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.266 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.288 239969 DEBUG nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:04:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:04:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187048984' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.348 239969 DEBUG oslo_concurrency.processutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.360 239969 DEBUG nova.compute.provider_tree [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.373 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.375 239969 DEBUG nova.scheduler.client.report [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.394 239969 DEBUG oslo_concurrency.lockutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.836s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.397 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.404 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.404 239969 INFO nova.compute.claims [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.421 239969 INFO nova.scheduler.client.report [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Deleted allocations for instance d7f67ae9-2751-4c70-a750-ccc1940478de
Jan 26 16:04:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.502 239969 DEBUG oslo_concurrency.lockutils [None req-98176201-951c-41f1-a593-0075f6b7d5ce fb2bc5dfd3a74496b6982288700ab306 5527f6a0b0004aa790a89b7a57853ace - - default default] Lock "d7f67ae9-2751-4c70-a750-ccc1940478de" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.560 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:56 compute-0 nova_compute[239965]: 2026-01-26 16:04:56.926 239969 DEBUG nova.compute.manager [req-2fc4cfe7-a399-446b-afd1-fff659cd578c req-8fec0444-4e4d-4d98-899c-15578acf520f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Received event network-vif-deleted-1704d3ae-5330-41ec-8f1c-05df19fdb33c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:04:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:04:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4223661430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.127 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.134 239969 DEBUG nova.compute.provider_tree [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.150 239969 DEBUG nova.scheduler.client.report [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.169 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.170 239969 DEBUG nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:04:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Jan 26 16:04:57 compute-0 ceph-mon[75140]: pgmap v1562: 305 pgs: 305 active+clean; 295 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 2.2 MiB/s wr, 420 op/s
Jan 26 16:04:57 compute-0 ceph-mon[75140]: osdmap e239: 3 total, 3 up, 3 in
Jan 26 16:04:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2187048984' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4223661430' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:04:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Jan 26 16:04:57 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.226 239969 DEBUG nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.227 239969 DEBUG nova.network.neutron [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.246 239969 INFO nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.267 239969 DEBUG nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.381 239969 DEBUG nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.383 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.383 239969 INFO nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Creating image(s)
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.406 239969 DEBUG nova.storage.rbd_utils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.433 239969 DEBUG nova.storage.rbd_utils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.463 239969 DEBUG nova.storage.rbd_utils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.471 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.515 239969 DEBUG nova.policy [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7a6914254de3499da1b987d99b6e0ff6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd91082a727584818a01c3c60e7f7ef73', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.560 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.561 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.561 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.562 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.589 239969 DEBUG nova.storage.rbd_utils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.594 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 08d4e028-235a-4222-9acb-e07b6652d9ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.866 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 08d4e028-235a-4222-9acb-e07b6652d9ee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.272s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.924 239969 DEBUG nova.storage.rbd_utils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] resizing rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:04:57 compute-0 nova_compute[239965]: 2026-01-26 16:04:57.991 239969 DEBUG nova.objects.instance [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'migration_context' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:04:58 compute-0 nova_compute[239965]: 2026-01-26 16:04:58.007 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:04:58 compute-0 nova_compute[239965]: 2026-01-26 16:04:58.008 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Ensure instance console log exists: /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:04:58 compute-0 nova_compute[239965]: 2026-01-26 16:04:58.008 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:58 compute-0 nova_compute[239965]: 2026-01-26 16:04:58.009 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:58 compute-0 nova_compute[239965]: 2026-01-26 16:04:58.009 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:04:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 295 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 47 KiB/s wr, 448 op/s
Jan 26 16:04:58 compute-0 ceph-mon[75140]: osdmap e240: 3 total, 3 up, 3 in
Jan 26 16:04:58 compute-0 nova_compute[239965]: 2026-01-26 16:04:58.817 239969 DEBUG nova.network.neutron [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Successfully created port: 29825953-a53f-4dd1-bf0d-b670cc439160 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:04:58 compute-0 nova_compute[239965]: 2026-01-26 16:04:58.832 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:04:58 compute-0 nova_compute[239965]: 2026-01-26 16:04:58.868 239969 INFO nova.virt.libvirt.driver [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Snapshot image upload complete
Jan 26 16:04:58 compute-0 nova_compute[239965]: 2026-01-26 16:04:58.869 239969 INFO nova.compute.manager [None req-8dd4461e-43e8-4931-a05d-4f3553dbdfc7 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Took 4.64 seconds to snapshot the instance on the hypervisor.
Jan 26 16:04:59 compute-0 ceph-mon[75140]: pgmap v1565: 305 pgs: 305 active+clean; 295 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 47 KiB/s wr, 448 op/s
Jan 26 16:04:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:59.227 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:04:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:59.227 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:04:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:04:59.228 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 330 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 2.5 MiB/s wr, 519 op/s
Jan 26 16:05:00 compute-0 ovn_controller[146046]: 2026-01-26T16:05:00Z|00780|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.179 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Jan 26 16:05:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Jan 26 16:05:00 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Jan 26 16:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.637 239969 DEBUG nova.network.neutron [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Successfully updated port: 29825953-a53f-4dd1-bf0d-b670cc439160 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.652 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.653 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquired lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.653 239969 DEBUG nova.network.neutron [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.868 239969 DEBUG nova.network.neutron [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.946 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.964 239969 DEBUG nova.compute.manager [req-b72e0bef-99e2-4389-93ac-7988df0c5daf req-3fc2eb06-0912-419b-aeeb-6239c0b24b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-changed-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.964 239969 DEBUG nova.compute.manager [req-b72e0bef-99e2-4389-93ac-7988df0c5daf req-3fc2eb06-0912-419b-aeeb-6239c0b24b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Refreshing instance network info cache due to event network-changed-29825953-a53f-4dd1-bf0d-b670cc439160. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.965 239969 DEBUG oslo_concurrency.lockutils [req-b72e0bef-99e2-4389-93ac-7988df0c5daf req-3fc2eb06-0912-419b-aeeb-6239c0b24b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.969 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Acquiring lock "824e5887-8524-4231-9b97-19fb738d44c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.970 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "824e5887-8524-4231-9b97-19fb738d44c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.970 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Acquiring lock "824e5887-8524-4231-9b97-19fb738d44c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.970 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "824e5887-8524-4231-9b97-19fb738d44c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.971 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "824e5887-8524-4231-9b97-19fb738d44c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.972 239969 INFO nova.compute.manager [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Terminating instance
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.973 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Acquiring lock "refresh_cache-824e5887-8524-4231-9b97-19fb738d44c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.973 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Acquired lock "refresh_cache-824e5887-8524-4231-9b97-19fb738d44c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:00 compute-0 nova_compute[239965]: 2026-01-26 16:05:00.973 239969 DEBUG nova.network.neutron [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:05:01 compute-0 ceph-mon[75140]: pgmap v1566: 305 pgs: 305 active+clean; 330 MiB data, 685 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 2.5 MiB/s wr, 519 op/s
Jan 26 16:05:01 compute-0 ceph-mon[75140]: osdmap e241: 3 total, 3 up, 3 in
Jan 26 16:05:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:01 compute-0 nova_compute[239965]: 2026-01-26 16:05:01.656 239969 DEBUG nova.network.neutron [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:05:01 compute-0 nova_compute[239965]: 2026-01-26 16:05:01.909 239969 DEBUG nova.network.neutron [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:01 compute-0 nova_compute[239965]: 2026-01-26 16:05:01.929 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Releasing lock "refresh_cache-824e5887-8524-4231-9b97-19fb738d44c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:01 compute-0 nova_compute[239965]: 2026-01-26 16:05:01.930 239969 DEBUG nova.compute.manager [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:05:01 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000052.scope: Deactivated successfully.
Jan 26 16:05:01 compute-0 systemd[1]: machine-qemu\x2d98\x2dinstance\x2d00000052.scope: Consumed 1.190s CPU time.
Jan 26 16:05:01 compute-0 systemd-machined[208061]: Machine qemu-98-instance-00000052 terminated.
Jan 26 16:05:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 341 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 3.5 MiB/s wr, 153 op/s
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.154 239969 INFO nova.virt.libvirt.driver [-] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Instance destroyed successfully.
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.155 239969 DEBUG nova.objects.instance [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lazy-loading 'resources' on Instance uuid 824e5887-8524-4231-9b97-19fb738d44c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:02 compute-0 ceph-mon[75140]: pgmap v1568: 305 pgs: 305 active+clean; 341 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 3.5 MiB/s wr, 153 op/s
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.412 239969 DEBUG nova.network.neutron [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Updating instance_info_cache with network_info: [{"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.436 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Releasing lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.437 239969 DEBUG nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance network_info: |[{"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.439 239969 DEBUG oslo_concurrency.lockutils [req-b72e0bef-99e2-4389-93ac-7988df0c5daf req-3fc2eb06-0912-419b-aeeb-6239c0b24b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.439 239969 DEBUG nova.network.neutron [req-b72e0bef-99e2-4389-93ac-7988df0c5daf req-3fc2eb06-0912-419b-aeeb-6239c0b24b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Refreshing network info cache for port 29825953-a53f-4dd1-bf0d-b670cc439160 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.446 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Start _get_guest_xml network_info=[{"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.455 239969 WARNING nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.461 239969 DEBUG nova.virt.libvirt.host [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.463 239969 DEBUG nova.virt.libvirt.host [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.471 239969 DEBUG nova.virt.libvirt.host [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.472 239969 DEBUG nova.virt.libvirt.host [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.473 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.473 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.474 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.474 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.474 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.474 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.475 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.475 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.475 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.475 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.476 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.476 239969 DEBUG nova.virt.hardware [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:05:02 compute-0 nova_compute[239965]: 2026-01-26 16:05:02.479 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3634637142' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.051 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.077 239969 DEBUG nova.storage.rbd_utils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.081 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Jan 26 16:05:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Jan 26 16:05:03 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Jan 26 16:05:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3634637142' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.521 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.523 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.541 239969 DEBUG nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.553 239969 INFO nova.virt.libvirt.driver [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Deleting instance files /var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2_del
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.555 239969 INFO nova.virt.libvirt.driver [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Deletion of /var/lib/nova/instances/824e5887-8524-4231-9b97-19fb738d44c2_del complete
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.611 239969 INFO nova.compute.manager [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Took 1.68 seconds to destroy the instance on the hypervisor.
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.613 239969 DEBUG oslo.service.loopingcall [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.615 239969 DEBUG nova.compute.manager [-] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.615 239969 DEBUG nova.network.neutron [-] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.652 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.652 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.658 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.658 239969 INFO nova.compute.claims [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:05:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2902450566' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.728 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.731 239969 DEBUG nova.virt.libvirt.vif [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:04:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1716043766',display_name='tempest-ServerRescueTestJSON-server-1716043766',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1716043766',id=83,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d91082a727584818a01c3c60e7f7ef73',ramdisk_id='',reservation_id='r-d9mj2wj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1881999610',owner_user_name='tempest-ServerRescueTestJSON-18819
99610-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:04:57Z,user_data=None,user_id='7a6914254de3499da1b987d99b6e0ff6',uuid=08d4e028-235a-4222-9acb-e07b6652d9ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.731 239969 DEBUG nova.network.os_vif_util [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converting VIF {"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.733 239969 DEBUG nova.network.os_vif_util [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:bc:39,bridge_name='br-int',has_traffic_filtering=True,id=29825953-a53f-4dd1-bf0d-b670cc439160,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29825953-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.735 239969 DEBUG nova.objects.instance [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.749 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <uuid>08d4e028-235a-4222-9acb-e07b6652d9ee</uuid>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <name>instance-00000053</name>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerRescueTestJSON-server-1716043766</nova:name>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:05:02</nova:creationTime>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <nova:user uuid="7a6914254de3499da1b987d99b6e0ff6">tempest-ServerRescueTestJSON-1881999610-project-member</nova:user>
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <nova:project uuid="d91082a727584818a01c3c60e7f7ef73">tempest-ServerRescueTestJSON-1881999610</nova:project>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <nova:port uuid="29825953-a53f-4dd1-bf0d-b670cc439160">
Jan 26 16:05:03 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <system>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <entry name="serial">08d4e028-235a-4222-9acb-e07b6652d9ee</entry>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <entry name="uuid">08d4e028-235a-4222-9acb-e07b6652d9ee</entry>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     </system>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <os>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   </os>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <features>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   </features>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/08d4e028-235a-4222-9acb-e07b6652d9ee_disk">
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config">
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:10:bc:39"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <target dev="tap29825953-a5"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/console.log" append="off"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <video>
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     </video>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:05:03 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:05:03 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:05:03 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:05:03 compute-0 nova_compute[239965]: </domain>
Jan 26 16:05:03 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.750 239969 DEBUG nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Preparing to wait for external event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.750 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.750 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.751 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.752 239969 DEBUG nova.virt.libvirt.vif [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:04:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1716043766',display_name='tempest-ServerRescueTestJSON-server-1716043766',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1716043766',id=83,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d91082a727584818a01c3c60e7f7ef73',ramdisk_id='',reservation_id='r-d9mj2wj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerRescueTestJSON-1881999610',owner_user_name='tempest-ServerRescueTest
JSON-1881999610-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:04:57Z,user_data=None,user_id='7a6914254de3499da1b987d99b6e0ff6',uuid=08d4e028-235a-4222-9acb-e07b6652d9ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.753 239969 DEBUG nova.network.os_vif_util [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converting VIF {"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.753 239969 DEBUG nova.network.os_vif_util [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:bc:39,bridge_name='br-int',has_traffic_filtering=True,id=29825953-a53f-4dd1-bf0d-b670cc439160,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29825953-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.754 239969 DEBUG os_vif [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:bc:39,bridge_name='br-int',has_traffic_filtering=True,id=29825953-a53f-4dd1-bf0d-b670cc439160,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29825953-a5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.758 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.759 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.760 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.764 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29825953-a5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.764 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29825953-a5, col_values=(('external_ids', {'iface-id': '29825953-a53f-4dd1-bf0d-b670cc439160', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:bc:39', 'vm-uuid': '08d4e028-235a-4222-9acb-e07b6652d9ee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.766 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:03 compute-0 NetworkManager[48954]: <info>  [1769443503.7670] manager: (tap29825953-a5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/344)
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.769 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.778 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.780 239969 INFO os_vif [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:bc:39,bridge_name='br-int',has_traffic_filtering=True,id=29825953-a53f-4dd1-bf0d-b670cc439160,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29825953-a5')
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.782 239969 DEBUG nova.network.neutron [-] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.796 239969 DEBUG nova.network.neutron [-] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.812 239969 INFO nova.compute.manager [-] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Took 0.20 seconds to deallocate network for instance.
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.829 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.875 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.884 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.885 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.886 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No VIF found with MAC fa:16:3e:10:bc:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.887 239969 INFO nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Using config drive
Jan 26 16:05:03 compute-0 nova_compute[239965]: 2026-01-26 16:05:03.913 239969 DEBUG nova.storage.rbd_utils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.034 239969 DEBUG nova.network.neutron [req-b72e0bef-99e2-4389-93ac-7988df0c5daf req-3fc2eb06-0912-419b-aeeb-6239c0b24b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Updated VIF entry in instance network info cache for port 29825953-a53f-4dd1-bf0d-b670cc439160. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.035 239969 DEBUG nova.network.neutron [req-b72e0bef-99e2-4389-93ac-7988df0c5daf req-3fc2eb06-0912-419b-aeeb-6239c0b24b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Updating instance_info_cache with network_info: [{"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 341 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.1 MiB/s wr, 134 op/s
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.053 239969 DEBUG oslo_concurrency.lockutils [req-b72e0bef-99e2-4389-93ac-7988df0c5daf req-3fc2eb06-0912-419b-aeeb-6239c0b24b95 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.199 239969 INFO nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Creating config drive at /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.205 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnb4prpe4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:04 compute-0 ceph-mon[75140]: osdmap e242: 3 total, 3 up, 3 in
Jan 26 16:05:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2902450566' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:04 compute-0 ceph-mon[75140]: pgmap v1570: 305 pgs: 305 active+clean; 341 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 3.1 MiB/s wr, 134 op/s
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.356 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnb4prpe4" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.395 239969 DEBUG nova.storage.rbd_utils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.400 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:05:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166786093' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.495 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.666s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.504 239969 DEBUG nova.compute.provider_tree [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.522 239969 DEBUG nova.scheduler.client.report [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.549 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.550 239969 DEBUG nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.552 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.564 239969 DEBUG oslo_concurrency.processutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.565 239969 INFO nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Deleting local config drive /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config because it was imported into RBD.
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.601 239969 DEBUG nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.601 239969 DEBUG nova.network.neutron [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:05:04 compute-0 kernel: tap29825953-a5: entered promiscuous mode
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:04 compute-0 systemd-udevd[310969]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:05:04 compute-0 ovn_controller[146046]: 2026-01-26T16:05:04Z|00781|binding|INFO|Claiming lport 29825953-a53f-4dd1-bf0d-b670cc439160 for this chassis.
Jan 26 16:05:04 compute-0 ovn_controller[146046]: 2026-01-26T16:05:04Z|00782|binding|INFO|29825953-a53f-4dd1-bf0d-b670cc439160: Claiming fa:16:3e:10:bc:39 10.100.0.14
Jan 26 16:05:04 compute-0 NetworkManager[48954]: <info>  [1769443504.6236] manager: (tap29825953-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/345)
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.624 239969 INFO nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:05:04 compute-0 ovn_controller[146046]: 2026-01-26T16:05:04Z|00783|binding|INFO|Setting lport 29825953-a53f-4dd1-bf0d-b670cc439160 ovn-installed in OVS
Jan 26 16:05:04 compute-0 ovn_controller[146046]: 2026-01-26T16:05:04Z|00784|binding|INFO|Setting lport 29825953-a53f-4dd1-bf0d-b670cc439160 up in Southbound
Jan 26 16:05:04 compute-0 NetworkManager[48954]: <info>  [1769443504.6390] device (tap29825953-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:05:04 compute-0 NetworkManager[48954]: <info>  [1769443504.6400] device (tap29825953-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:05:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:04.638 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:bc:39 10.100.0.14'], port_security=['fa:16:3e:10:bc:39 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08d4e028-235a-4222-9acb-e07b6652d9ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=29825953-a53f-4dd1-bf0d-b670cc439160) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:04.640 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 29825953-a53f-4dd1-bf0d-b670cc439160 in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 bound to our chassis
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.640 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:04.641 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.642 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:04.642 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8ffe00ed-0f23-4192-b0b5-b993e87c26bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.650 239969 DEBUG nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:05:04 compute-0 systemd-machined[208061]: New machine qemu-99-instance-00000053.
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.674 239969 DEBUG oslo_concurrency.processutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:04 compute-0 systemd[1]: Started Virtual Machine qemu-99-instance-00000053.
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.743 239969 DEBUG nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.745 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.746 239969 INFO nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Creating image(s)
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.775 239969 DEBUG nova.storage.rbd_utils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.816 239969 DEBUG nova.storage.rbd_utils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.848 239969 DEBUG nova.storage.rbd_utils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.867 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.954 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.956 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.958 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.959 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.993 239969 DEBUG nova.storage.rbd_utils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:04 compute-0 nova_compute[239965]: 2026-01-26 16:05:04.998 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 76321bf3-65b2-429d-9eb3-27916690a101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.067 239969 DEBUG nova.policy [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59f8ee09903a4e0a812c3d9e013996bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '94771806cd0d4b5db117956e09fea9e6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.247 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443505.245896, 08d4e028-235a-4222-9acb-e07b6652d9ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.248 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] VM Started (Lifecycle Event)
Jan 26 16:05:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:05:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/357074946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.262 239969 DEBUG nova.compute.manager [req-074cf32d-1487-4c4b-ae7e-afe14be5b1cc req-79d43af5-bc1d-4ca9-8bb3-67496d18e4a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.263 239969 DEBUG oslo_concurrency.lockutils [req-074cf32d-1487-4c4b-ae7e-afe14be5b1cc req-79d43af5-bc1d-4ca9-8bb3-67496d18e4a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.263 239969 DEBUG oslo_concurrency.lockutils [req-074cf32d-1487-4c4b-ae7e-afe14be5b1cc req-79d43af5-bc1d-4ca9-8bb3-67496d18e4a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.263 239969 DEBUG oslo_concurrency.lockutils [req-074cf32d-1487-4c4b-ae7e-afe14be5b1cc req-79d43af5-bc1d-4ca9-8bb3-67496d18e4a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.263 239969 DEBUG nova.compute.manager [req-074cf32d-1487-4c4b-ae7e-afe14be5b1cc req-79d43af5-bc1d-4ca9-8bb3-67496d18e4a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Processing event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.264 239969 DEBUG nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.268 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.271 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.273 239969 INFO nova.virt.libvirt.driver [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance spawned successfully.
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.273 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.276 239969 DEBUG oslo_concurrency.processutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.278 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.284 239969 DEBUG nova.compute.provider_tree [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.292 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 76321bf3-65b2-429d-9eb3-27916690a101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4166786093' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/357074946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.328 239969 DEBUG nova.scheduler.client.report [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.331 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.331 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443505.2460625, 08d4e028-235a-4222-9acb-e07b6652d9ee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.332 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] VM Paused (Lifecycle Event)
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.357 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.357 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.358 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.358 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.359 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.359 239969 DEBUG nova.virt.libvirt.driver [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.362 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.363 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.368 239969 DEBUG nova.storage.rbd_utils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] resizing rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.393 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443505.2675352, 08d4e028-235a-4222-9acb-e07b6652d9ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.393 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] VM Resumed (Lifecycle Event)
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.436 239969 INFO nova.scheduler.client.report [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Deleted allocations for instance 824e5887-8524-4231-9b97-19fb738d44c2
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.437 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.438 239969 INFO nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Took 8.06 seconds to spawn the instance on the hypervisor.
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.439 239969 DEBUG nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.444 239969 DEBUG nova.objects.instance [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.446 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.457 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.457 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Ensure instance console log exists: /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.457 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.458 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.458 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.482 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.548 239969 DEBUG oslo_concurrency.lockutils [None req-15216c09-f29b-4aec-8f1c-2896e82c46e9 893b3bc39877452fa2e3799a61f3e715 0ef852851e4e435f82709929f3babca1 - - default default] Lock "824e5887-8524-4231-9b97-19fb738d44c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.550 239969 INFO nova.compute.manager [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Took 9.20 seconds to build instance.
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.573 239969 DEBUG oslo_concurrency.lockutils [None req-5e8d7305-acfa-49c7-9eb2-f4a78cd5159d 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.308s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:05 compute-0 nova_compute[239965]: 2026-01-26 16:05:05.950 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1571: 305 pgs: 305 active+clean; 351 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 772 KiB/s rd, 3.4 MiB/s wr, 282 op/s
Jan 26 16:05:06 compute-0 nova_compute[239965]: 2026-01-26 16:05:06.115 239969 DEBUG nova.network.neutron [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Successfully created port: cafc30f3-a313-4723-9612-f0b29a9513bb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:05:06 compute-0 ceph-mon[75140]: pgmap v1571: 305 pgs: 305 active+clean; 351 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 772 KiB/s rd, 3.4 MiB/s wr, 282 op/s
Jan 26 16:05:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Jan 26 16:05:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Jan 26 16:05:06 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Jan 26 16:05:06 compute-0 nova_compute[239965]: 2026-01-26 16:05:06.762 239969 INFO nova.compute.manager [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Rescuing
Jan 26 16:05:06 compute-0 nova_compute[239965]: 2026-01-26 16:05:06.763 239969 DEBUG oslo_concurrency.lockutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:06 compute-0 nova_compute[239965]: 2026-01-26 16:05:06.763 239969 DEBUG oslo_concurrency.lockutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquired lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:06 compute-0 nova_compute[239965]: 2026-01-26 16:05:06.763 239969 DEBUG nova.network.neutron [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.136 239969 DEBUG nova.network.neutron [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Successfully updated port: cafc30f3-a313-4723-9612-f0b29a9513bb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.167 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "refresh_cache-76321bf3-65b2-429d-9eb3-27916690a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.168 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquired lock "refresh_cache-76321bf3-65b2-429d-9eb3-27916690a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.169 239969 DEBUG nova.network.neutron [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:05:07 compute-0 podman[311387]: 2026-01-26 16:05:07.383194826 +0000 UTC m=+0.066331704 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.388 239969 DEBUG nova.compute.manager [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.389 239969 DEBUG oslo_concurrency.lockutils [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.389 239969 DEBUG oslo_concurrency.lockutils [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.389 239969 DEBUG oslo_concurrency.lockutils [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.390 239969 DEBUG nova.compute.manager [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.390 239969 WARNING nova.compute.manager [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state active and task_state rescuing.
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.390 239969 DEBUG nova.compute.manager [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-changed-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.390 239969 DEBUG nova.compute.manager [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Refreshing instance network info cache due to event network-changed-cafc30f3-a313-4723-9612-f0b29a9513bb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.390 239969 DEBUG oslo_concurrency.lockutils [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-76321bf3-65b2-429d-9eb3-27916690a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:07 compute-0 podman[311388]: 2026-01-26 16:05:07.418081188 +0000 UTC m=+0.100628881 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:05:07 compute-0 ceph-mon[75140]: osdmap e243: 3 total, 3 up, 3 in
Jan 26 16:05:07 compute-0 nova_compute[239965]: 2026-01-26 16:05:07.492 239969 DEBUG nova.network.neutron [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:05:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1573: 305 pgs: 305 active+clean; 351 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 750 KiB/s rd, 1.6 MiB/s wr, 233 op/s
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.214 239969 DEBUG nova.network.neutron [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Updating instance_info_cache with network_info: [{"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.239 239969 DEBUG oslo_concurrency.lockutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Releasing lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.334 239969 DEBUG nova.network.neutron [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Updating instance_info_cache with network_info: [{"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.415 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Releasing lock "refresh_cache-76321bf3-65b2-429d-9eb3-27916690a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.415 239969 DEBUG nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance network_info: |[{"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.416 239969 DEBUG oslo_concurrency.lockutils [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-76321bf3-65b2-429d-9eb3-27916690a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.416 239969 DEBUG nova.network.neutron [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Refreshing network info cache for port cafc30f3-a313-4723-9612-f0b29a9513bb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.421 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Start _get_guest_xml network_info=[{"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.427 239969 WARNING nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.432 239969 DEBUG nova.virt.libvirt.host [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.433 239969 DEBUG nova.virt.libvirt.host [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.436 239969 DEBUG nova.virt.libvirt.host [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.436 239969 DEBUG nova.virt.libvirt.host [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.437 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.437 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.437 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.438 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.438 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.438 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.438 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.438 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.439 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.439 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.439 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.439 239969 DEBUG nova.virt.hardware [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.443 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:08 compute-0 ceph-mon[75140]: pgmap v1573: 305 pgs: 305 active+clean; 351 MiB data, 691 MiB used, 59 GiB / 60 GiB avail; 750 KiB/s rd, 1.6 MiB/s wr, 233 op/s
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.639 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.710 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443493.7097592, d7f67ae9-2751-4c70-a750-ccc1940478de => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.711 239969 INFO nova.compute.manager [-] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] VM Stopped (Lifecycle Event)
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.735 239969 DEBUG nova.compute.manager [None req-b149e1f4-31f6-443e-af2d-ba4517529b3a - - - - - -] [instance: d7f67ae9-2751-4c70-a750-ccc1940478de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:08 compute-0 nova_compute[239965]: 2026-01-26 16:05:08.767 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1696954335' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.048 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.081 239969 DEBUG nova.storage.rbd_utils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.107 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.337 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1696954335' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/668158522' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.730 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.731 239969 DEBUG nova.virt.libvirt.vif [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1105506242',display_name='tempest-tempest.common.compute-instance-1105506242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1105506242',id=84,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-drlg4l02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsT
estJSON-274649005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:05:04Z,user_data=None,user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=76321bf3-65b2-429d-9eb3-27916690a101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.731 239969 DEBUG nova.network.os_vif_util [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.732 239969 DEBUG nova.network.os_vif_util [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.735 239969 DEBUG nova.objects.instance [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.753 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <uuid>76321bf3-65b2-429d-9eb3-27916690a101</uuid>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <name>instance-00000054</name>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <nova:name>tempest-tempest.common.compute-instance-1105506242</nova:name>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:05:08</nova:creationTime>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <nova:user uuid="59f8ee09903a4e0a812c3d9e013996bd">tempest-ServerActionsTestJSON-274649005-project-member</nova:user>
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <nova:project uuid="94771806cd0d4b5db117956e09fea9e6">tempest-ServerActionsTestJSON-274649005</nova:project>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <nova:port uuid="cafc30f3-a313-4723-9612-f0b29a9513bb">
Jan 26 16:05:09 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <system>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <entry name="serial">76321bf3-65b2-429d-9eb3-27916690a101</entry>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <entry name="uuid">76321bf3-65b2-429d-9eb3-27916690a101</entry>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     </system>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <os>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   </os>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <features>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   </features>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/76321bf3-65b2-429d-9eb3-27916690a101_disk">
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/76321bf3-65b2-429d-9eb3-27916690a101_disk.config">
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:09 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:43:90:b1"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <target dev="tapcafc30f3-a3"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/console.log" append="off"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <video>
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     </video>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:05:09 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:05:09 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:05:09 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:05:09 compute-0 nova_compute[239965]: </domain>
Jan 26 16:05:09 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.755 239969 DEBUG nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Preparing to wait for external event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.756 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.756 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.756 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.757 239969 DEBUG nova.virt.libvirt.vif [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1105506242',display_name='tempest-tempest.common.compute-instance-1105506242',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1105506242',id=84,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-drlg4l02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-Serv
erActionsTestJSON-274649005-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:05:04Z,user_data=None,user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=76321bf3-65b2-429d-9eb3-27916690a101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.758 239969 DEBUG nova.network.os_vif_util [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.758 239969 DEBUG nova.network.os_vif_util [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.759 239969 DEBUG os_vif [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.760 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.760 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.760 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.764 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcafc30f3-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.764 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcafc30f3-a3, col_values=(('external_ids', {'iface-id': 'cafc30f3-a313-4723-9612-f0b29a9513bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:90:b1', 'vm-uuid': '76321bf3-65b2-429d-9eb3-27916690a101'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:09 compute-0 NetworkManager[48954]: <info>  [1769443509.8053] manager: (tapcafc30f3-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/346)
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.809 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.814 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.815 239969 INFO os_vif [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3')
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.869 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.870 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.870 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] No VIF found with MAC fa:16:3e:43:90:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.871 239969 INFO nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Using config drive
Jan 26 16:05:09 compute-0 nova_compute[239965]: 2026-01-26 16:05:09.894 239969 DEBUG nova.storage.rbd_utils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.008 239969 DEBUG nova.network.neutron [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Updated VIF entry in instance network info cache for port cafc30f3-a313-4723-9612-f0b29a9513bb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.009 239969 DEBUG nova.network.neutron [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Updating instance_info_cache with network_info: [{"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.023 239969 DEBUG oslo_concurrency.lockutils [req-b64fb337-c0c8-4c86-8631-dd2b75399dc8 req-d40a606c-8753-4892-a02c-49eecb41f94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-76321bf3-65b2-429d-9eb3-27916690a101" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 387 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.7 MiB/s wr, 291 op/s
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.355 239969 INFO nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Creating config drive at /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.361 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgtvladx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/668158522' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:10 compute-0 ceph-mon[75140]: pgmap v1574: 305 pgs: 305 active+clean; 387 MiB data, 725 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.7 MiB/s wr, 291 op/s
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.507 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcgtvladx" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.535 239969 DEBUG nova.storage.rbd_utils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.539 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config 76321bf3-65b2-429d-9eb3-27916690a101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.708 239969 DEBUG oslo_concurrency.processutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config 76321bf3-65b2-429d-9eb3-27916690a101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.709 239969 INFO nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Deleting local config drive /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config because it was imported into RBD.
Jan 26 16:05:10 compute-0 kernel: tapcafc30f3-a3: entered promiscuous mode
Jan 26 16:05:10 compute-0 ovn_controller[146046]: 2026-01-26T16:05:10Z|00785|binding|INFO|Claiming lport cafc30f3-a313-4723-9612-f0b29a9513bb for this chassis.
Jan 26 16:05:10 compute-0 ovn_controller[146046]: 2026-01-26T16:05:10Z|00786|binding|INFO|cafc30f3-a313-4723-9612-f0b29a9513bb: Claiming fa:16:3e:43:90:b1 10.100.0.9
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.770 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:90:b1 10.100.0.9'], port_security=['fa:16:3e:43:90:b1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '76321bf3-65b2-429d-9eb3-27916690a101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'adbf641d-e58d-4501-869f-a3fe43494930', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=cafc30f3-a313-4723-9612-f0b29a9513bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:10 compute-0 NetworkManager[48954]: <info>  [1769443510.7719] manager: (tapcafc30f3-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/347)
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.773 156105 INFO neutron.agent.ovn.metadata.agent [-] Port cafc30f3-a313-4723-9612-f0b29a9513bb in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d bound to our chassis
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.778 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.778 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:05:10 compute-0 ovn_controller[146046]: 2026-01-26T16:05:10Z|00787|binding|INFO|Setting lport cafc30f3-a313-4723-9612-f0b29a9513bb ovn-installed in OVS
Jan 26 16:05:10 compute-0 ovn_controller[146046]: 2026-01-26T16:05:10Z|00788|binding|INFO|Setting lport cafc30f3-a313-4723-9612-f0b29a9513bb up in Southbound
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.781 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:10 compute-0 systemd-machined[208061]: New machine qemu-100-instance-00000054.
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.800 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ffba60b7-e87b-4094-a69a-96cc91a8dc28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:10 compute-0 systemd-udevd[311566]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:05:10 compute-0 systemd[1]: Started Virtual Machine qemu-100-instance-00000054.
Jan 26 16:05:10 compute-0 NetworkManager[48954]: <info>  [1769443510.8173] device (tapcafc30f3-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:05:10 compute-0 NetworkManager[48954]: <info>  [1769443510.8180] device (tapcafc30f3-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.846 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b0f28d61-05f5-4bc5-8038-2e04605577bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.852 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e6587fb1-1591-4cd0-a1d4-c168202ada52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.886 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e466e7-7bf9-44d8-8336-449cc9c188b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.910 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9a394f61-4cfe-4b5d-aab0-1a906232fe4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495544, 'reachable_time': 41449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311579, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.930 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c1a834bd-3316-4ee1-8cb5-000950a66f4c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84c1ad93-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495559, 'tstamp': 495559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311580, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84c1ad93-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495562, 'tstamp': 495562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311580, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.933 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.936 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.938 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.938 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.939 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:10.940 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:10 compute-0 nova_compute[239965]: 2026-01-26 16:05:10.950 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:11 compute-0 nova_compute[239965]: 2026-01-26 16:05:11.121 239969 DEBUG nova.compute.manager [req-157ce4eb-e9e5-4604-8762-1fc960b7fd91 req-75adbd7a-9217-4e5e-85af-430f88a8b1eb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:11 compute-0 nova_compute[239965]: 2026-01-26 16:05:11.122 239969 DEBUG oslo_concurrency.lockutils [req-157ce4eb-e9e5-4604-8762-1fc960b7fd91 req-75adbd7a-9217-4e5e-85af-430f88a8b1eb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:11 compute-0 nova_compute[239965]: 2026-01-26 16:05:11.122 239969 DEBUG oslo_concurrency.lockutils [req-157ce4eb-e9e5-4604-8762-1fc960b7fd91 req-75adbd7a-9217-4e5e-85af-430f88a8b1eb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:11 compute-0 nova_compute[239965]: 2026-01-26 16:05:11.122 239969 DEBUG oslo_concurrency.lockutils [req-157ce4eb-e9e5-4604-8762-1fc960b7fd91 req-75adbd7a-9217-4e5e-85af-430f88a8b1eb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:11 compute-0 nova_compute[239965]: 2026-01-26 16:05:11.123 239969 DEBUG nova.compute.manager [req-157ce4eb-e9e5-4604-8762-1fc960b7fd91 req-75adbd7a-9217-4e5e-85af-430f88a8b1eb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Processing event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:05:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 289 op/s
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.108 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:13 compute-0 ceph-mon[75140]: pgmap v1575: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 MiB/s wr, 289 op/s
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.476 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443513.4752266, 76321bf3-65b2-429d-9eb3-27916690a101 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.476 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] VM Started (Lifecycle Event)
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.479 239969 DEBUG nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.484 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.487 239969 INFO nova.virt.libvirt.driver [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance spawned successfully.
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.488 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.509 239969 DEBUG nova.compute.manager [req-05ce4275-d71d-4c2e-a73b-942498eab821 req-a2d33d72-755f-4c00-b3e6-f8ab1a2c162b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.510 239969 DEBUG oslo_concurrency.lockutils [req-05ce4275-d71d-4c2e-a73b-942498eab821 req-a2d33d72-755f-4c00-b3e6-f8ab1a2c162b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.511 239969 DEBUG oslo_concurrency.lockutils [req-05ce4275-d71d-4c2e-a73b-942498eab821 req-a2d33d72-755f-4c00-b3e6-f8ab1a2c162b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.512 239969 DEBUG oslo_concurrency.lockutils [req-05ce4275-d71d-4c2e-a73b-942498eab821 req-a2d33d72-755f-4c00-b3e6-f8ab1a2c162b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.513 239969 DEBUG nova.compute.manager [req-05ce4275-d71d-4c2e-a73b-942498eab821 req-a2d33d72-755f-4c00-b3e6-f8ab1a2c162b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] No waiting events found dispatching network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.513 239969 WARNING nova.compute.manager [req-05ce4275-d71d-4c2e-a73b-942498eab821 req-a2d33d72-755f-4c00-b3e6-f8ab1a2c162b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received unexpected event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb for instance with vm_state building and task_state spawning.
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.536 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.543 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.544 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.545 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.546 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.547 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.547 239969 DEBUG nova.virt.libvirt.driver [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.555 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.591 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.592 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443513.475592, 76321bf3-65b2-429d-9eb3-27916690a101 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.592 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] VM Paused (Lifecycle Event)
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.623 239969 INFO nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Took 8.88 seconds to spawn the instance on the hypervisor.
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.624 239969 DEBUG nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.625 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.640 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443513.4829469, 76321bf3-65b2-429d-9eb3-27916690a101 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.641 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] VM Resumed (Lifecycle Event)
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.667 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.672 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.691 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.704 239969 INFO nova.compute.manager [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Took 10.10 seconds to build instance.
Jan 26 16:05:13 compute-0 nova_compute[239965]: 2026-01-26 16:05:13.723 239969 DEBUG oslo_concurrency.lockutils [None req-5984cb6e-747f-49b8-9471-1417b9feb316 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1576: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 253 op/s
Jan 26 16:05:14 compute-0 nova_compute[239965]: 2026-01-26 16:05:14.805 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:15 compute-0 ceph-mon[75140]: pgmap v1576: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.2 MiB/s wr, 253 op/s
Jan 26 16:05:15 compute-0 nova_compute[239965]: 2026-01-26 16:05:15.952 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 167 op/s
Jan 26 16:05:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:16 compute-0 nova_compute[239965]: 2026-01-26 16:05:16.730 239969 INFO nova.compute.manager [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Rebuilding instance
Jan 26 16:05:16 compute-0 nova_compute[239965]: 2026-01-26 16:05:16.962 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:16 compute-0 nova_compute[239965]: 2026-01-26 16:05:16.979 239969 DEBUG nova.compute.manager [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:17 compute-0 nova_compute[239965]: 2026-01-26 16:05:17.022 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_requests' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:17 compute-0 nova_compute[239965]: 2026-01-26 16:05:17.035 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:17 compute-0 nova_compute[239965]: 2026-01-26 16:05:17.047 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'resources' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:17 compute-0 nova_compute[239965]: 2026-01-26 16:05:17.058 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'migration_context' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:17 compute-0 nova_compute[239965]: 2026-01-26 16:05:17.073 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:05:17 compute-0 nova_compute[239965]: 2026-01-26 16:05:17.078 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:05:17 compute-0 ceph-mon[75140]: pgmap v1577: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.6 MiB/s wr, 167 op/s
Jan 26 16:05:17 compute-0 nova_compute[239965]: 2026-01-26 16:05:17.153 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443502.1527565, 824e5887-8524-4231-9b97-19fb738d44c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:17 compute-0 nova_compute[239965]: 2026-01-26 16:05:17.155 239969 INFO nova.compute.manager [-] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] VM Stopped (Lifecycle Event)
Jan 26 16:05:17 compute-0 nova_compute[239965]: 2026-01-26 16:05:17.180 239969 DEBUG nova.compute.manager [None req-2e9a86d6-4f32-4e14-b2d3-ee8f01c5c428 - - - - - -] [instance: 824e5887-8524-4231-9b97-19fb738d44c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.4 MiB/s wr, 144 op/s
Jan 26 16:05:18 compute-0 nova_compute[239965]: 2026-01-26 16:05:18.663 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:18 compute-0 nova_compute[239965]: 2026-01-26 16:05:18.664 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:18 compute-0 nova_compute[239965]: 2026-01-26 16:05:18.686 239969 DEBUG nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:05:18 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 26 16:05:18 compute-0 nova_compute[239965]: 2026-01-26 16:05:18.745 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:05:18 compute-0 nova_compute[239965]: 2026-01-26 16:05:18.763 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:18 compute-0 nova_compute[239965]: 2026-01-26 16:05:18.764 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:18 compute-0 nova_compute[239965]: 2026-01-26 16:05:18.772 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:05:18 compute-0 nova_compute[239965]: 2026-01-26 16:05:18.772 239969 INFO nova.compute.claims [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:05:18 compute-0 nova_compute[239965]: 2026-01-26 16:05:18.932 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:19 compute-0 ceph-mon[75140]: pgmap v1578: 305 pgs: 305 active+clean; 389 MiB data, 730 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.4 MiB/s wr, 144 op/s
Jan 26 16:05:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:05:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3854891469' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.516 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.522 239969 DEBUG nova.compute.provider_tree [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.538 239969 DEBUG nova.scheduler.client.report [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.584 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.586 239969 DEBUG nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.679 239969 DEBUG nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.680 239969 DEBUG nova.network.neutron [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.733 239969 INFO nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.750 239969 DEBUG nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.967 239969 DEBUG nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.970 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.971 239969 INFO nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Creating image(s)
Jan 26 16:05:19 compute-0 nova_compute[239965]: 2026-01-26 16:05:19.999 239969 DEBUG nova.storage.rbd_utils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.031 239969 DEBUG nova.storage.rbd_utils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 404 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 205 op/s
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.067 239969 DEBUG nova.storage.rbd_utils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.073 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.120 239969 DEBUG nova.policy [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ad9c6196af60436caf20747e96ad8388', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56f8818d291f4e738d868673048ce025', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:05:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3854891469' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.154 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.157 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.158 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.158 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.185 239969 DEBUG nova.storage.rbd_utils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.191 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 571cd10f-e22e-43b6-9523-e3539d047e4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.555 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 571cd10f-e22e-43b6-9523-e3539d047e4a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.611 239969 DEBUG nova.storage.rbd_utils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] resizing rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.701 239969 DEBUG nova.objects.instance [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'migration_context' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.736 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.737 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Ensure instance console log exists: /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.737 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.738 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.738 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:20 compute-0 nova_compute[239965]: 2026-01-26 16:05:20.954 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:21 compute-0 ceph-mon[75140]: pgmap v1579: 305 pgs: 305 active+clean; 404 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.0 MiB/s wr, 205 op/s
Jan 26 16:05:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1580: 305 pgs: 305 active+clean; 429 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 161 op/s
Jan 26 16:05:22 compute-0 nova_compute[239965]: 2026-01-26 16:05:22.293 239969 DEBUG nova.network.neutron [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Successfully created port: 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:05:23 compute-0 ceph-mon[75140]: pgmap v1580: 305 pgs: 305 active+clean; 429 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.6 MiB/s wr, 161 op/s
Jan 26 16:05:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 429 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 143 op/s
Jan 26 16:05:24 compute-0 nova_compute[239965]: 2026-01-26 16:05:24.718 239969 DEBUG nova.network.neutron [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Successfully updated port: 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:05:24 compute-0 nova_compute[239965]: 2026-01-26 16:05:24.742 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:24 compute-0 nova_compute[239965]: 2026-01-26 16:05:24.742 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquired lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:24 compute-0 nova_compute[239965]: 2026-01-26 16:05:24.742 239969 DEBUG nova.network.neutron [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:05:24 compute-0 nova_compute[239965]: 2026-01-26 16:05:24.838 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:25 compute-0 ceph-mon[75140]: pgmap v1581: 305 pgs: 305 active+clean; 429 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.6 MiB/s wr, 143 op/s
Jan 26 16:05:25 compute-0 nova_compute[239965]: 2026-01-26 16:05:25.204 239969 DEBUG nova.compute.manager [req-8f0c6ed5-9a39-49c2-9f24-f31b58c303d6 req-46f0fbf2-a931-42be-be41-d1f6eca0ab44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-changed-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:25 compute-0 nova_compute[239965]: 2026-01-26 16:05:25.204 239969 DEBUG nova.compute.manager [req-8f0c6ed5-9a39-49c2-9f24-f31b58c303d6 req-46f0fbf2-a931-42be-be41-d1f6eca0ab44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Refreshing instance network info cache due to event network-changed-4dce6184-01c6-4ef8-a5d3-7fc88f784d89. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:05:25 compute-0 nova_compute[239965]: 2026-01-26 16:05:25.204 239969 DEBUG oslo_concurrency.lockutils [req-8f0c6ed5-9a39-49c2-9f24-f31b58c303d6 req-46f0fbf2-a931-42be-be41-d1f6eca0ab44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:25 compute-0 nova_compute[239965]: 2026-01-26 16:05:25.956 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 469 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 26 16:05:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:26 compute-0 nova_compute[239965]: 2026-01-26 16:05:26.862 239969 DEBUG nova.network.neutron [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:05:27 compute-0 nova_compute[239965]: 2026-01-26 16:05:27.155 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:05:27 compute-0 ceph-mon[75140]: pgmap v1582: 305 pgs: 305 active+clean; 469 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Jan 26 16:05:27 compute-0 nova_compute[239965]: 2026-01-26 16:05:27.912 239969 DEBUG nova.network.neutron [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 469 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.234 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Releasing lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.235 239969 DEBUG nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance network_info: |[{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.235 239969 DEBUG oslo_concurrency.lockutils [req-8f0c6ed5-9a39-49c2-9f24-f31b58c303d6 req-46f0fbf2-a931-42be-be41-d1f6eca0ab44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.235 239969 DEBUG nova.network.neutron [req-8f0c6ed5-9a39-49c2-9f24-f31b58c303d6 req-46f0fbf2-a931-42be-be41-d1f6eca0ab44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Refreshing network info cache for port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.238 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Start _get_guest_xml network_info=[{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.242 239969 WARNING nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.248 239969 DEBUG nova.virt.libvirt.host [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.249 239969 DEBUG nova.virt.libvirt.host [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.253 239969 DEBUG nova.virt.libvirt.host [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.254 239969 DEBUG nova.virt.libvirt.host [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.255 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.255 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.256 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.256 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.257 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.257 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.258 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.258 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.259 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.259 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.259 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.260 239969 DEBUG nova.virt.hardware [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.264 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:28 compute-0 ceph-mon[75140]: pgmap v1583: 305 pgs: 305 active+clean; 469 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 125 op/s
Jan 26 16:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:05:28
Jan 26 16:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'backups', 'default.rgw.log', 'images', 'volumes', 'default.rgw.control', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr']
Jan 26 16:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:05:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3511780114' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.928 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.956 239969 DEBUG nova.storage.rbd_utils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:28 compute-0 nova_compute[239965]: 2026-01-26 16:05:28.960 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3511780114' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/800150668' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.589 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.591 239969 DEBUG nova.virt.libvirt.vif [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:05:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2114557834',display_name='tempest-ServerActionsTestOtherB-server-2114557834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2114557834',id=85,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1KacXv8geoEy+28E7FY8ymb1wNpeorh0ig3qqxM26aclzAUqPX8+7CUPQ9iESN1O5fiTgrITEiydlNm6ZYPZvODzMzw3Vex9NYYyqIj3iZQ6pC0nbgSJxWQCDOHEzqIA==',key_name='tempest-keypair-1024411567',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-1wqrxcju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:05:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ad9c6196af60436caf20747e96ad8388',uuid=571cd10f-e22e-43b6-9523-e3539d047e4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.592 239969 DEBUG nova.network.os_vif_util [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.592 239969 DEBUG nova.network.os_vif_util [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.593 239969 DEBUG nova.objects.instance [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'pci_devices' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.609 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <uuid>571cd10f-e22e-43b6-9523-e3539d047e4a</uuid>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <name>instance-00000055</name>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestOtherB-server-2114557834</nova:name>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:05:28</nova:creationTime>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <nova:user uuid="ad9c6196af60436caf20747e96ad8388">tempest-ServerActionsTestOtherB-1778121066-project-member</nova:user>
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <nova:project uuid="56f8818d291f4e738d868673048ce025">tempest-ServerActionsTestOtherB-1778121066</nova:project>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <nova:port uuid="4dce6184-01c6-4ef8-a5d3-7fc88f784d89">
Jan 26 16:05:29 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <system>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <entry name="serial">571cd10f-e22e-43b6-9523-e3539d047e4a</entry>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <entry name="uuid">571cd10f-e22e-43b6-9523-e3539d047e4a</entry>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     </system>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <os>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   </os>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <features>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   </features>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk">
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config">
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:29 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:72:ea:5b"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <target dev="tap4dce6184-01"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/console.log" append="off"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <video>
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     </video>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:05:29 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:05:29 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:05:29 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:05:29 compute-0 nova_compute[239965]: </domain>
Jan 26 16:05:29 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.610 239969 DEBUG nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Preparing to wait for external event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.610 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.610 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.610 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.611 239969 DEBUG nova.virt.libvirt.vif [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:05:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2114557834',display_name='tempest-ServerActionsTestOtherB-server-2114557834',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2114557834',id=85,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1KacXv8geoEy+28E7FY8ymb1wNpeorh0ig3qqxM26aclzAUqPX8+7CUPQ9iESN1O5fiTgrITEiydlNm6ZYPZvODzMzw3Vex9NYYyqIj3iZQ6pC0nbgSJxWQCDOHEzqIA==',key_name='tempest-keypair-1024411567',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-1wqrxcju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:05:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ad9c6196af60436caf20747e96ad8388',uuid=571cd10f-e22e-43b6-9523-e3539d047e4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.611 239969 DEBUG nova.network.os_vif_util [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.612 239969 DEBUG nova.network.os_vif_util [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.612 239969 DEBUG os_vif [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.613 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.613 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.614 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.616 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dce6184-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.617 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4dce6184-01, col_values=(('external_ids', {'iface-id': '4dce6184-01c6-4ef8-a5d3-7fc88f784d89', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:ea:5b', 'vm-uuid': '571cd10f-e22e-43b6-9523-e3539d047e4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.618 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:29 compute-0 NetworkManager[48954]: <info>  [1769443529.6196] manager: (tap4dce6184-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/348)
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.624 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.626 239969 INFO os_vif [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01')
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.689 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.690 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.690 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No VIF found with MAC fa:16:3e:72:ea:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.690 239969 INFO nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Using config drive
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.712 239969 DEBUG nova.storage.rbd_utils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:29 compute-0 nova_compute[239965]: 2026-01-26 16:05:29.792 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:05:29 compute-0 ovn_controller[146046]: 2026-01-26T16:05:29Z|00088|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:43:90:b1 10.100.0.9
Jan 26 16:05:29 compute-0 ovn_controller[146046]: 2026-01-26T16:05:29Z|00089|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:43:90:b1 10.100.0.9
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1584: 305 pgs: 305 active+clean; 483 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 154 op/s
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.150 239969 INFO nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Creating config drive at /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.161 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ulnc49_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/800150668' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:30 compute-0 ceph-mon[75140]: pgmap v1584: 305 pgs: 305 active+clean; 483 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.4 MiB/s wr, 154 op/s
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.328 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0ulnc49_" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.360 239969 DEBUG nova.storage.rbd_utils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.364 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.519 239969 DEBUG oslo_concurrency.processutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.520 239969 INFO nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Deleting local config drive /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config because it was imported into RBD.
Jan 26 16:05:30 compute-0 kernel: tap4dce6184-01: entered promiscuous mode
Jan 26 16:05:30 compute-0 NetworkManager[48954]: <info>  [1769443530.5703] manager: (tap4dce6184-01): new Tun device (/org/freedesktop/NetworkManager/Devices/349)
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.572 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:30 compute-0 ovn_controller[146046]: 2026-01-26T16:05:30Z|00789|binding|INFO|Claiming lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 for this chassis.
Jan 26 16:05:30 compute-0 ovn_controller[146046]: 2026-01-26T16:05:30Z|00790|binding|INFO|4dce6184-01c6-4ef8-a5d3-7fc88f784d89: Claiming fa:16:3e:72:ea:5b 10.100.0.6
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.580 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:ea:5b 10.100.0.6'], port_security=['fa:16:3e:72:ea:5b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '571cd10f-e22e-43b6-9523-e3539d047e4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8357fb8f-8173-42b8-8413-682bfcbe9368', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4dce6184-01c6-4ef8-a5d3-7fc88f784d89) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.581 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d bound to our chassis
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.582 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:05:30 compute-0 ovn_controller[146046]: 2026-01-26T16:05:30Z|00791|binding|INFO|Setting lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 ovn-installed in OVS
Jan 26 16:05:30 compute-0 ovn_controller[146046]: 2026-01-26T16:05:30Z|00792|binding|INFO|Setting lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 up in Southbound
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.593 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.594 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.599 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4b7913de-268b-418d-b10f-bde3eabcf69f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.600 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap67ce52d3-41 in ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.602 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap67ce52d3-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.602 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d3e6f61d-ffc5-4943-b931-f20fd2ed9e66]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.603 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da859e2c-b98f-40b3-892d-0873d2eba461]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 systemd-machined[208061]: New machine qemu-101-instance-00000055.
Jan 26 16:05:30 compute-0 systemd-udevd[311949]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:05:30 compute-0 NetworkManager[48954]: <info>  [1769443530.6169] device (tap4dce6184-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:05:30 compute-0 systemd[1]: Started Virtual Machine qemu-101-instance-00000055.
Jan 26 16:05:30 compute-0 NetworkManager[48954]: <info>  [1769443530.6185] device (tap4dce6184-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.619 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a66b4a0f-488f-43fe-a91b-f70beaaf9db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:05:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.643 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d21495e6-6be8-41dc-a55b-5d42b86586bf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.674 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7db8cd7d-f747-47a9-8b39-a7a6d16304fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.679 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[77bdf9df-6782-4188-bc6d-21d1db8bc481]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 systemd-udevd[311952]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:05:30 compute-0 NetworkManager[48954]: <info>  [1769443530.6808] manager: (tap67ce52d3-40): new Veth device (/org/freedesktop/NetworkManager/Devices/350)
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.714 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cc42e858-8de5-4663-b460-cf66d6408a32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.717 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[08369a96-3b64-488d-b073-795792139545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 NetworkManager[48954]: <info>  [1769443530.7422] device (tap67ce52d3-40): carrier: link connected
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.748 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ac22f5f6-f6c6-42a6-8f16-9938f26f1684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.780 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[480d4cf3-d567-4ca2-9a5b-55d8819eebe9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 29665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311981, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.796 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f67d0217-0b94-499f-9c6b-d5cbba3b0da4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:dd8f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502154, 'tstamp': 502154}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311982, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.823 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bb9942ba-6988-4181-adfd-bcdab29eb4a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 29665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 311983, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.858 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[be76e174-dae9-4fc7-bf23-f0ff7b56ca7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.958 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.960 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5fffd839-aac2-48a0-a0eb-5e3890935c14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.964 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.964 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.965 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67ce52d3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.967 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:30 compute-0 kernel: tap67ce52d3-40: entered promiscuous mode
Jan 26 16:05:30 compute-0 NetworkManager[48954]: <info>  [1769443530.9691] manager: (tap67ce52d3-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/351)
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.972 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67ce52d3-40, col_values=(('external_ids', {'iface-id': '8d8bd19b-594d-4000-99fc-eb8020a59c23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.973 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:30 compute-0 ovn_controller[146046]: 2026-01-26T16:05:30Z|00793|binding|INFO|Releasing lport 8d8bd19b-594d-4000-99fc-eb8020a59c23 from this chassis (sb_readonly=0)
Jan 26 16:05:30 compute-0 nova_compute[239965]: 2026-01-26 16:05:30.996 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.998 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/67ce52d3-4d78-44cc-ab83-0fab051ec68d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/67ce52d3-4d78-44cc-ab83-0fab051ec68d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:30.999 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e18b69f-706e-4036-aa9a-8a35d15e86fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:31.000 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/67ce52d3-4d78-44cc-ab83-0fab051ec68d.pid.haproxy
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:05:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:31.000 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'env', 'PROCESS_TAG=haproxy-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/67ce52d3-4d78-44cc-ab83-0fab051ec68d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.140 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443531.1392474, 571cd10f-e22e-43b6-9523-e3539d047e4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.140 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] VM Started (Lifecycle Event)
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.162 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.168 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443531.1395996, 571cd10f-e22e-43b6-9523-e3539d047e4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.169 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] VM Paused (Lifecycle Event)
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.189 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.194 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.214 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.236 239969 DEBUG nova.compute.manager [req-6472e0a4-3428-4a43-ba93-0ba482499ece req-3d9759e7-e9e4-4e30-93ea-4b6b4aadf2ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.237 239969 DEBUG oslo_concurrency.lockutils [req-6472e0a4-3428-4a43-ba93-0ba482499ece req-3d9759e7-e9e4-4e30-93ea-4b6b4aadf2ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.237 239969 DEBUG oslo_concurrency.lockutils [req-6472e0a4-3428-4a43-ba93-0ba482499ece req-3d9759e7-e9e4-4e30-93ea-4b6b4aadf2ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.237 239969 DEBUG oslo_concurrency.lockutils [req-6472e0a4-3428-4a43-ba93-0ba482499ece req-3d9759e7-e9e4-4e30-93ea-4b6b4aadf2ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.238 239969 DEBUG nova.compute.manager [req-6472e0a4-3428-4a43-ba93-0ba482499ece req-3d9759e7-e9e4-4e30-93ea-4b6b4aadf2ae a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Processing event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.238 239969 DEBUG nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.241 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443531.241612, 571cd10f-e22e-43b6-9523-e3539d047e4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.242 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] VM Resumed (Lifecycle Event)
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.244 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.248 239969 INFO nova.virt.libvirt.driver [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance spawned successfully.
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.248 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.263 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.281 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.286 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.287 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.288 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.288 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.288 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.289 239969 DEBUG nova.virt.libvirt.driver [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.300 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.345 239969 INFO nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Took 11.38 seconds to spawn the instance on the hypervisor.
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.345 239969 DEBUG nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.388 239969 DEBUG nova.network.neutron [req-8f0c6ed5-9a39-49c2-9f24-f31b58c303d6 req-46f0fbf2-a931-42be-be41-d1f6eca0ab44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updated VIF entry in instance network info cache for port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.388 239969 DEBUG nova.network.neutron [req-8f0c6ed5-9a39-49c2-9f24-f31b58c303d6 req-46f0fbf2-a931-42be-be41-d1f6eca0ab44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.419 239969 DEBUG oslo_concurrency.lockutils [req-8f0c6ed5-9a39-49c2-9f24-f31b58c303d6 req-46f0fbf2-a931-42be-be41-d1f6eca0ab44 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.430 239969 INFO nova.compute.manager [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Took 12.68 seconds to build instance.
Jan 26 16:05:31 compute-0 podman[312057]: 2026-01-26 16:05:31.441747083 +0000 UTC m=+0.085552435 container create d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.450 239969 DEBUG oslo_concurrency.lockutils [None req-de116c8c-a39d-4b35-96d5-510c8dc48100 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:31 compute-0 podman[312057]: 2026-01-26 16:05:31.385906097 +0000 UTC m=+0.029711519 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:05:31 compute-0 systemd[1]: Started libpod-conmon-d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e.scope.
Jan 26 16:05:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:05:31 compute-0 nova_compute[239965]: 2026-01-26 16:05:31.523 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/627e3fb86124d8cc8f1e97b53597d3b8cc738188852576428221fd2d173f4280/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:31 compute-0 podman[312057]: 2026-01-26 16:05:31.544784643 +0000 UTC m=+0.188590015 container init d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:05:31 compute-0 podman[312057]: 2026-01-26 16:05:31.551072057 +0000 UTC m=+0.194877409 container start d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:05:31 compute-0 neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d[312073]: [NOTICE]   (312077) : New worker (312079) forked
Jan 26 16:05:31 compute-0 neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d[312073]: [NOTICE]   (312077) : Loading success.
Jan 26 16:05:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 493 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 4.3 MiB/s wr, 105 op/s
Jan 26 16:05:32 compute-0 kernel: tap29825953-a5 (unregistering): left promiscuous mode
Jan 26 16:05:32 compute-0 NetworkManager[48954]: <info>  [1769443532.0663] device (tap29825953-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:05:32 compute-0 ovn_controller[146046]: 2026-01-26T16:05:32Z|00794|binding|INFO|Releasing lport 29825953-a53f-4dd1-bf0d-b670cc439160 from this chassis (sb_readonly=0)
Jan 26 16:05:32 compute-0 ovn_controller[146046]: 2026-01-26T16:05:32Z|00795|binding|INFO|Setting lport 29825953-a53f-4dd1-bf0d-b670cc439160 down in Southbound
Jan 26 16:05:32 compute-0 ovn_controller[146046]: 2026-01-26T16:05:32Z|00796|binding|INFO|Removing iface tap29825953-a5 ovn-installed in OVS
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.075 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:32.085 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:bc:39 10.100.0.14'], port_security=['fa:16:3e:10:bc:39 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08d4e028-235a-4222-9acb-e07b6652d9ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=29825953-a53f-4dd1-bf0d-b670cc439160) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:32.086 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 29825953-a53f-4dd1-bf0d-b670cc439160 in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 unbound from our chassis
Jan 26 16:05:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:32.087 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:05:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:32.088 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[49e270dc-b2da-409b-a5ac-d067ff87431e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.093 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:32 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000053.scope: Deactivated successfully.
Jan 26 16:05:32 compute-0 systemd[1]: machine-qemu\x2d99\x2dinstance\x2d00000053.scope: Consumed 15.226s CPU time.
Jan 26 16:05:32 compute-0 systemd-machined[208061]: Machine qemu-99-instance-00000053 terminated.
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.474 239969 DEBUG nova.compute.manager [req-fde29214-43af-4739-8cd6-d626547ef0eb req-0a46caa6-fd12-444d-9350-cf6cfbce63b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-unplugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.475 239969 DEBUG oslo_concurrency.lockutils [req-fde29214-43af-4739-8cd6-d626547ef0eb req-0a46caa6-fd12-444d-9350-cf6cfbce63b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.475 239969 DEBUG oslo_concurrency.lockutils [req-fde29214-43af-4739-8cd6-d626547ef0eb req-0a46caa6-fd12-444d-9350-cf6cfbce63b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.477 239969 DEBUG oslo_concurrency.lockutils [req-fde29214-43af-4739-8cd6-d626547ef0eb req-0a46caa6-fd12-444d-9350-cf6cfbce63b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.477 239969 DEBUG nova.compute.manager [req-fde29214-43af-4739-8cd6-d626547ef0eb req-0a46caa6-fd12-444d-9350-cf6cfbce63b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-unplugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.477 239969 WARNING nova.compute.manager [req-fde29214-43af-4739-8cd6-d626547ef0eb req-0a46caa6-fd12-444d-9350-cf6cfbce63b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-unplugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state active and task_state rescuing.
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.810 239969 INFO nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance shutdown successfully after 24 seconds.
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.816 239969 INFO nova.virt.libvirt.driver [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance destroyed successfully.
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.816 239969 DEBUG nova.objects.instance [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.833 239969 INFO nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Attempting rescue
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.834 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rescue generated disk_info: {'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} rescue /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4314
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.837 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance directory exists: not creating _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4719
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.838 239969 INFO nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Creating image(s)
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.861 239969 DEBUG nova.storage.rbd_utils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.864 239969 DEBUG nova.objects.instance [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.901 239969 DEBUG nova.storage.rbd_utils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.923 239969 DEBUG nova.storage.rbd_utils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:32 compute-0 nova_compute[239965]: 2026-01-26 16:05:32.926 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.003 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.004 239969 DEBUG oslo_concurrency.lockutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.005 239969 DEBUG oslo_concurrency.lockutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.005 239969 DEBUG oslo_concurrency.lockutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.025 239969 DEBUG nova.storage.rbd_utils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.029 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:33 compute-0 ceph-mon[75140]: pgmap v1585: 305 pgs: 305 active+clean; 493 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 4.3 MiB/s wr, 105 op/s
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.304 239969 DEBUG nova.compute.manager [req-b5728258-0345-4a29-851a-a8fc9488d55d req-025d4c0b-46b3-4731-9939-a6ebfc409b0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.304 239969 DEBUG oslo_concurrency.lockutils [req-b5728258-0345-4a29-851a-a8fc9488d55d req-025d4c0b-46b3-4731-9939-a6ebfc409b0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.304 239969 DEBUG oslo_concurrency.lockutils [req-b5728258-0345-4a29-851a-a8fc9488d55d req-025d4c0b-46b3-4731-9939-a6ebfc409b0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.305 239969 DEBUG oslo_concurrency.lockutils [req-b5728258-0345-4a29-851a-a8fc9488d55d req-025d4c0b-46b3-4731-9939-a6ebfc409b0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.305 239969 DEBUG nova.compute.manager [req-b5728258-0345-4a29-851a-a8fc9488d55d req-025d4c0b-46b3-4731-9939-a6ebfc409b0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] No waiting events found dispatching network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.305 239969 WARNING nova.compute.manager [req-b5728258-0345-4a29-851a-a8fc9488d55d req-025d4c0b-46b3-4731-9939-a6ebfc409b0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received unexpected event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 for instance with vm_state active and task_state None.
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.384 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.384 239969 DEBUG nova.objects.instance [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'migration_context' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.413 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.414 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Start _get_guest_xml network_info=[{"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1538815050-network", "vif_mac": "fa:16:3e:10:bc:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'disk.rescue': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config.rescue': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue={'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba', 'kernel_id': '', 'ramdisk_id': ''} block_device_info=None _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.414 239969 DEBUG nova.objects.instance [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'resources' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.436 239969 WARNING nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.443 239969 DEBUG nova.virt.libvirt.host [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.443 239969 DEBUG nova.virt.libvirt.host [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.448 239969 DEBUG nova.virt.libvirt.host [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.448 239969 DEBUG nova.virt.libvirt.host [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.449 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.449 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.449 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.449 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.450 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.450 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.450 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.450 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.450 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.450 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.451 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.451 239969 DEBUG nova.virt.hardware [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.451 239969 DEBUG nova.objects.instance [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:33 compute-0 nova_compute[239965]: 2026-01-26 16:05:33.472 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 493 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 3.4 MiB/s wr, 66 op/s
Jan 26 16:05:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3270760622' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.084 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.085 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3270760622' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.619 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2918215809' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.806 239969 DEBUG nova.compute.manager [req-3d245736-c8a4-453e-9607-e8dcf942d4e0 req-08c7cbf1-3079-4e4c-a5d1-a45313aff224 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.807 239969 DEBUG oslo_concurrency.lockutils [req-3d245736-c8a4-453e-9607-e8dcf942d4e0 req-08c7cbf1-3079-4e4c-a5d1-a45313aff224 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.807 239969 DEBUG oslo_concurrency.lockutils [req-3d245736-c8a4-453e-9607-e8dcf942d4e0 req-08c7cbf1-3079-4e4c-a5d1-a45313aff224 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.807 239969 DEBUG oslo_concurrency.lockutils [req-3d245736-c8a4-453e-9607-e8dcf942d4e0 req-08c7cbf1-3079-4e4c-a5d1-a45313aff224 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.807 239969 DEBUG nova.compute.manager [req-3d245736-c8a4-453e-9607-e8dcf942d4e0 req-08c7cbf1-3079-4e4c-a5d1-a45313aff224 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.808 239969 WARNING nova.compute.manager [req-3d245736-c8a4-453e-9607-e8dcf942d4e0 req-08c7cbf1-3079-4e4c-a5d1-a45313aff224 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state active and task_state rescuing.
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.808 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.723s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:34 compute-0 nova_compute[239965]: 2026-01-26 16:05:34.809 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738407021' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.368 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.370 239969 DEBUG nova.virt.libvirt.vif [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:04:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1716043766',display_name='tempest-ServerRescueTestJSON-server-1716043766',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1716043766',id=83,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:05:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d91082a727584818a01c3c60e7f7ef73',ramdisk_id='',reservation_id='r-d9mj2wj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1881999610',owner_user_name='tempest-ServerRescueTestJSON-1881999610-project-member'},tags=<?>,task_state='rescuing',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:05:05Z,user_data=None,user_id='7a6914254de3499da1b987d99b6e0ff6',uuid=08d4e028-235a-4222-9acb-e07b6652d9ee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1538815050-network", "vif_mac": "fa:16:3e:10:bc:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.371 239969 DEBUG nova.network.os_vif_util [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converting VIF {"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [], "label": "tempest-ServerRescueTestJSON-1538815050-network", "vif_mac": "fa:16:3e:10:bc:39"}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.372 239969 DEBUG nova.network.os_vif_util [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:bc:39,bridge_name='br-int',has_traffic_filtering=True,id=29825953-a53f-4dd1-bf0d-b670cc439160,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29825953-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.373 239969 DEBUG nova.objects.instance [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'pci_devices' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:35 compute-0 ceph-mon[75140]: pgmap v1586: 305 pgs: 305 active+clean; 493 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 180 KiB/s rd, 3.4 MiB/s wr, 66 op/s
Jan 26 16:05:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2918215809' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1738407021' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.610 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <uuid>08d4e028-235a-4222-9acb-e07b6652d9ee</uuid>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <name>instance-00000053</name>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerRescueTestJSON-server-1716043766</nova:name>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:05:33</nova:creationTime>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <nova:user uuid="7a6914254de3499da1b987d99b6e0ff6">tempest-ServerRescueTestJSON-1881999610-project-member</nova:user>
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <nova:project uuid="d91082a727584818a01c3c60e7f7ef73">tempest-ServerRescueTestJSON-1881999610</nova:project>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <nova:port uuid="29825953-a53f-4dd1-bf0d-b670cc439160">
Jan 26 16:05:35 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <system>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <entry name="serial">08d4e028-235a-4222-9acb-e07b6652d9ee</entry>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <entry name="uuid">08d4e028-235a-4222-9acb-e07b6652d9ee</entry>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </system>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <os>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   </os>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <features>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   </features>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/08d4e028-235a-4222-9acb-e07b6652d9ee_disk.rescue">
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/08d4e028-235a-4222-9acb-e07b6652d9ee_disk">
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <target dev="vdb" bus="virtio"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config.rescue">
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:35 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:10:bc:39"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <target dev="tap29825953-a5"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/console.log" append="off"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <video>
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </video>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:05:35 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:05:35 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:05:35 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:05:35 compute-0 nova_compute[239965]: </domain>
Jan 26 16:05:35 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.619 239969 INFO nova.virt.libvirt.driver [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance destroyed successfully.
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.706 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.707 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.707 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.707 239969 DEBUG nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] No VIF found with MAC fa:16:3e:10:bc:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.708 239969 INFO nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Using config drive
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.736 239969 DEBUG nova.storage.rbd_utils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.937 239969 DEBUG nova.objects.instance [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:35 compute-0 nova_compute[239965]: 2026-01-26 16:05:35.961 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1587: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.3 MiB/s wr, 176 op/s
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.061 239969 DEBUG nova.objects.instance [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'keypairs' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.457 239969 INFO nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Creating config drive at /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config.rescue
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.465 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7k5p3gtv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:36 compute-0 ceph-mon[75140]: pgmap v1587: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.3 MiB/s wr, 176 op/s
Jan 26 16:05:36 compute-0 sudo[312283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:05:36 compute-0 sudo[312283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:05:36 compute-0 sudo[312283]: pam_unix(sudo:session): session closed for user root
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.607 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config.rescue -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7k5p3gtv" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:36 compute-0 sudo[312310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:05:36 compute-0 sudo[312310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.637 239969 DEBUG nova.storage.rbd_utils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] rbd image 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config.rescue does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.646 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config.rescue 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.819 239969 DEBUG oslo_concurrency.processutils [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config.rescue 08d4e028-235a-4222-9acb-e07b6652d9ee_disk.config.rescue --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.820 239969 INFO nova.virt.libvirt.driver [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Deleting local config drive /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee/disk.config.rescue because it was imported into RBD.
Jan 26 16:05:36 compute-0 kernel: tap29825953-a5: entered promiscuous mode
Jan 26 16:05:36 compute-0 NetworkManager[48954]: <info>  [1769443536.9128] manager: (tap29825953-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/352)
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:36 compute-0 ovn_controller[146046]: 2026-01-26T16:05:36Z|00797|binding|INFO|Claiming lport 29825953-a53f-4dd1-bf0d-b670cc439160 for this chassis.
Jan 26 16:05:36 compute-0 ovn_controller[146046]: 2026-01-26T16:05:36Z|00798|binding|INFO|29825953-a53f-4dd1-bf0d-b670cc439160: Claiming fa:16:3e:10:bc:39 10.100.0.14
Jan 26 16:05:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:36.936 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:bc:39 10.100.0.14'], port_security=['fa:16:3e:10:bc:39 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08d4e028-235a-4222-9acb-e07b6652d9ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '5', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=29825953-a53f-4dd1-bf0d-b670cc439160) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:36.939 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 29825953-a53f-4dd1-bf0d-b670cc439160 in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 bound to our chassis
Jan 26 16:05:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:36.940 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.940 239969 DEBUG nova.compute.manager [req-3c402364-9cbf-42f9-af4e-f3571f51621b req-04b1b2d0-57c6-4b38-9395-ebb3cb227fa4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-changed-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.940 239969 DEBUG nova.compute.manager [req-3c402364-9cbf-42f9-af4e-f3571f51621b req-04b1b2d0-57c6-4b38-9395-ebb3cb227fa4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Refreshing instance network info cache due to event network-changed-4dce6184-01c6-4ef8-a5d3-7fc88f784d89. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.941 239969 DEBUG oslo_concurrency.lockutils [req-3c402364-9cbf-42f9-af4e-f3571f51621b req-04b1b2d0-57c6-4b38-9395-ebb3cb227fa4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.941 239969 DEBUG oslo_concurrency.lockutils [req-3c402364-9cbf-42f9-af4e-f3571f51621b req-04b1b2d0-57c6-4b38-9395-ebb3cb227fa4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.942 239969 DEBUG nova.network.neutron [req-3c402364-9cbf-42f9-af4e-f3571f51621b req-04b1b2d0-57c6-4b38-9395-ebb3cb227fa4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Refreshing network info cache for port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:05:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:36.941 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e4c1697d-f991-4a1b-b25c-747f2addfd53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:36 compute-0 ovn_controller[146046]: 2026-01-26T16:05:36Z|00799|binding|INFO|Setting lport 29825953-a53f-4dd1-bf0d-b670cc439160 ovn-installed in OVS
Jan 26 16:05:36 compute-0 ovn_controller[146046]: 2026-01-26T16:05:36Z|00800|binding|INFO|Setting lport 29825953-a53f-4dd1-bf0d-b670cc439160 up in Southbound
Jan 26 16:05:36 compute-0 systemd-udevd[312397]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:05:36 compute-0 nova_compute[239965]: 2026-01-26 16:05:36.948 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:36 compute-0 NetworkManager[48954]: <info>  [1769443536.9717] device (tap29825953-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:05:36 compute-0 NetworkManager[48954]: <info>  [1769443536.9727] device (tap29825953-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:05:36 compute-0 systemd-machined[208061]: New machine qemu-102-instance-00000053.
Jan 26 16:05:36 compute-0 systemd[1]: Started Virtual Machine qemu-102-instance-00000053.
Jan 26 16:05:37 compute-0 sudo[312310]: pam_unix(sudo:session): session closed for user root
Jan 26 16:05:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:05:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:05:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:05:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:05:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:05:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:05:37 compute-0 nova_compute[239965]: 2026-01-26 16:05:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:05:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:05:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:05:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:05:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:05:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:05:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:05:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:05:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:05:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:05:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:05:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:05:37 compute-0 sudo[312426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:05:37 compute-0 sudo[312426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:05:37 compute-0 sudo[312426]: pam_unix(sudo:session): session closed for user root
Jan 26 16:05:37 compute-0 sudo[312463]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:05:37 compute-0 sudo[312463]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:05:37 compute-0 podman[312450]: 2026-01-26 16:05:37.682828423 +0000 UTC m=+0.064130550 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:05:37 compute-0 podman[312451]: 2026-01-26 16:05:37.774558168 +0000 UTC m=+0.155777483 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 16:05:37 compute-0 podman[312545]: 2026-01-26 16:05:37.970036679 +0000 UTC m=+0.047576755 container create 7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kepler, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:05:38 compute-0 systemd[1]: Started libpod-conmon-7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8.scope.
Jan 26 16:05:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:05:38 compute-0 podman[312545]: 2026-01-26 16:05:37.951152128 +0000 UTC m=+0.028692224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:05:38 compute-0 podman[312545]: 2026-01-26 16:05:38.058099193 +0000 UTC m=+0.135639309 container init 7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 16:05:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 26 16:05:38 compute-0 podman[312545]: 2026-01-26 16:05:38.069484502 +0000 UTC m=+0.147024578 container start 7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:05:38 compute-0 podman[312545]: 2026-01-26 16:05:38.073870319 +0000 UTC m=+0.151410395 container attach 7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:05:38 compute-0 reverent_kepler[312601]: 167 167
Jan 26 16:05:38 compute-0 systemd[1]: libpod-7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8.scope: Deactivated successfully.
Jan 26 16:05:38 compute-0 conmon[312601]: conmon 7cab89f86f7867d15ae3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8.scope/container/memory.events
Jan 26 16:05:38 compute-0 podman[312545]: 2026-01-26 16:05:38.081460185 +0000 UTC m=+0.159000251 container died 7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:05:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f03d2a5de4ef3a2aa0d902a431c817a304a3b50101f200a136e6fabe9fa764-merged.mount: Deactivated successfully.
Jan 26 16:05:38 compute-0 podman[312545]: 2026-01-26 16:05:38.128770272 +0000 UTC m=+0.206310358 container remove 7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kepler, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:05:38 compute-0 systemd[1]: libpod-conmon-7cab89f86f7867d15ae3e38f455776db898c4dece2e85d3b1cf7ba4c5b3179b8.scope: Deactivated successfully.
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.149 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 08d4e028-235a-4222-9acb-e07b6652d9ee due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.150 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443538.1487012, 08d4e028-235a-4222-9acb-e07b6652d9ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.150 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] VM Resumed (Lifecycle Event)
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.154 239969 DEBUG nova.compute.manager [None req-271d091d-fec8-4185-aec6-7b04f88d37e5 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.182 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.187 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.213 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] During sync_power_state the instance has a pending task (rescuing). Skip.
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.213 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443538.1523473, 08d4e028-235a-4222-9acb-e07b6652d9ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.213 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] VM Started (Lifecycle Event)
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.241 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.246 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:38 compute-0 nova_compute[239965]: 2026-01-26 16:05:38.256 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:05:38 compute-0 podman[312629]: 2026-01-26 16:05:38.328387115 +0000 UTC m=+0.043292100 container create 3da5d3ae7b6d90a4a186041e8d413d6e789546f93e43a099681f5ab0016332da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:05:38 compute-0 systemd[1]: Started libpod-conmon-3da5d3ae7b6d90a4a186041e8d413d6e789546f93e43a099681f5ab0016332da.scope.
Jan 26 16:05:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e88076df1a41dd98b40dbd4ad4a8e5507adbb0c5fb0cd20b7e66cd162f8c08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:38 compute-0 podman[312629]: 2026-01-26 16:05:38.307799291 +0000 UTC m=+0.022704316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e88076df1a41dd98b40dbd4ad4a8e5507adbb0c5fb0cd20b7e66cd162f8c08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e88076df1a41dd98b40dbd4ad4a8e5507adbb0c5fb0cd20b7e66cd162f8c08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e88076df1a41dd98b40dbd4ad4a8e5507adbb0c5fb0cd20b7e66cd162f8c08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07e88076df1a41dd98b40dbd4ad4a8e5507adbb0c5fb0cd20b7e66cd162f8c08/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:38 compute-0 podman[312629]: 2026-01-26 16:05:38.415750692 +0000 UTC m=+0.130655697 container init 3da5d3ae7b6d90a4a186041e8d413d6e789546f93e43a099681f5ab0016332da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hoover, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:05:38 compute-0 podman[312629]: 2026-01-26 16:05:38.426608368 +0000 UTC m=+0.141513353 container start 3da5d3ae7b6d90a4a186041e8d413d6e789546f93e43a099681f5ab0016332da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hoover, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:05:38 compute-0 podman[312629]: 2026-01-26 16:05:38.430848171 +0000 UTC m=+0.145753156 container attach 3da5d3ae7b6d90a4a186041e8d413d6e789546f93e43a099681f5ab0016332da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hoover, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:05:38 compute-0 ceph-mon[75140]: pgmap v1588: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Jan 26 16:05:39 compute-0 fervent_hoover[312644]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:05:39 compute-0 fervent_hoover[312644]: --> All data devices are unavailable
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.019 239969 DEBUG nova.compute.manager [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.020 239969 DEBUG oslo_concurrency.lockutils [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.020 239969 DEBUG oslo_concurrency.lockutils [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.020 239969 DEBUG oslo_concurrency.lockutils [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.020 239969 DEBUG nova.compute.manager [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.021 239969 WARNING nova.compute.manager [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state rescued and task_state None.
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.021 239969 DEBUG nova.compute.manager [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.021 239969 DEBUG oslo_concurrency.lockutils [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.021 239969 DEBUG oslo_concurrency.lockutils [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.022 239969 DEBUG oslo_concurrency.lockutils [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.022 239969 DEBUG nova.compute.manager [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.022 239969 WARNING nova.compute.manager [req-2192bd42-3642-450a-b4e9-0d7e99d9ad02 req-07a3ed53-02b5-46fa-863e-2e6d4ea54862 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state rescued and task_state None.
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.060 239969 DEBUG nova.network.neutron [req-3c402364-9cbf-42f9-af4e-f3571f51621b req-04b1b2d0-57c6-4b38-9395-ebb3cb227fa4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updated VIF entry in instance network info cache for port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.060 239969 DEBUG nova.network.neutron [req-3c402364-9cbf-42f9-af4e-f3571f51621b req-04b1b2d0-57c6-4b38-9395-ebb3cb227fa4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:39 compute-0 systemd[1]: libpod-3da5d3ae7b6d90a4a186041e8d413d6e789546f93e43a099681f5ab0016332da.scope: Deactivated successfully.
Jan 26 16:05:39 compute-0 podman[312629]: 2026-01-26 16:05:39.078942835 +0000 UTC m=+0.793847820 container died 3da5d3ae7b6d90a4a186041e8d413d6e789546f93e43a099681f5ab0016332da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.083 239969 DEBUG oslo_concurrency.lockutils [req-3c402364-9cbf-42f9-af4e-f3571f51621b req-04b1b2d0-57c6-4b38-9395-ebb3cb227fa4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-07e88076df1a41dd98b40dbd4ad4a8e5507adbb0c5fb0cd20b7e66cd162f8c08-merged.mount: Deactivated successfully.
Jan 26 16:05:39 compute-0 podman[312629]: 2026-01-26 16:05:39.126072109 +0000 UTC m=+0.840977094 container remove 3da5d3ae7b6d90a4a186041e8d413d6e789546f93e43a099681f5ab0016332da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:05:39 compute-0 systemd[1]: libpod-conmon-3da5d3ae7b6d90a4a186041e8d413d6e789546f93e43a099681f5ab0016332da.scope: Deactivated successfully.
Jan 26 16:05:39 compute-0 sudo[312463]: pam_unix(sudo:session): session closed for user root
Jan 26 16:05:39 compute-0 sudo[312676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:05:39 compute-0 sudo[312676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:05:39 compute-0 sudo[312676]: pam_unix(sudo:session): session closed for user root
Jan 26 16:05:39 compute-0 sudo[312701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:05:39 compute-0 sudo[312701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.418 239969 INFO nova.compute.manager [None req-a68fbe53-994c-44e8-b0f7-a7dede672153 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Unrescuing
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.420 239969 DEBUG oslo_concurrency.lockutils [None req-a68fbe53-994c-44e8-b0f7-a7dede672153 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.420 239969 DEBUG oslo_concurrency.lockutils [None req-a68fbe53-994c-44e8-b0f7-a7dede672153 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquired lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.420 239969 DEBUG nova.network.neutron [None req-a68fbe53-994c-44e8-b0f7-a7dede672153 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.504 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:39 compute-0 nova_compute[239965]: 2026-01-26 16:05:39.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:39 compute-0 podman[312738]: 2026-01-26 16:05:39.624015799 +0000 UTC m=+0.035643723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:05:40 compute-0 podman[312738]: 2026-01-26 16:05:40.034144162 +0000 UTC m=+0.445772106 container create a8b901ab391c69d5a31444b5ca2a4cbdec4ce9efe0dbeda6d86029513d6112c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 16:05:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1589: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Jan 26 16:05:40 compute-0 systemd[1]: Started libpod-conmon-a8b901ab391c69d5a31444b5ca2a4cbdec4ce9efe0dbeda6d86029513d6112c1.scope.
Jan 26 16:05:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:05:40 compute-0 nova_compute[239965]: 2026-01-26 16:05:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:40 compute-0 nova_compute[239965]: 2026-01-26 16:05:40.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:05:40 compute-0 nova_compute[239965]: 2026-01-26 16:05:40.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:05:40 compute-0 nova_compute[239965]: 2026-01-26 16:05:40.914 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:05:40 compute-0 nova_compute[239965]: 2026-01-26 16:05:40.916 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:05:40 compute-0 nova_compute[239965]: 2026-01-26 16:05:40.916 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:05:40 compute-0 nova_compute[239965]: 2026-01-26 16:05:40.916 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:40 compute-0 nova_compute[239965]: 2026-01-26 16:05:40.965 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:41 compute-0 podman[312738]: 2026-01-26 16:05:41.075284741 +0000 UTC m=+1.486912675 container init a8b901ab391c69d5a31444b5ca2a4cbdec4ce9efe0dbeda6d86029513d6112c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_euclid, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 16:05:41 compute-0 podman[312738]: 2026-01-26 16:05:41.084072665 +0000 UTC m=+1.495700569 container start a8b901ab391c69d5a31444b5ca2a4cbdec4ce9efe0dbeda6d86029513d6112c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 26 16:05:41 compute-0 bold_euclid[312754]: 167 167
Jan 26 16:05:41 compute-0 systemd[1]: libpod-a8b901ab391c69d5a31444b5ca2a4cbdec4ce9efe0dbeda6d86029513d6112c1.scope: Deactivated successfully.
Jan 26 16:05:41 compute-0 nova_compute[239965]: 2026-01-26 16:05:41.167 239969 DEBUG nova.network.neutron [None req-a68fbe53-994c-44e8-b0f7-a7dede672153 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Updating instance_info_cache with network_info: [{"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:41 compute-0 nova_compute[239965]: 2026-01-26 16:05:41.187 239969 DEBUG oslo_concurrency.lockutils [None req-a68fbe53-994c-44e8-b0f7-a7dede672153 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Releasing lock "refresh_cache-08d4e028-235a-4222-9acb-e07b6652d9ee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:41 compute-0 nova_compute[239965]: 2026-01-26 16:05:41.188 239969 DEBUG nova.objects.instance [None req-a68fbe53-994c-44e8-b0f7-a7dede672153 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'flavor' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:41 compute-0 nova_compute[239965]: 2026-01-26 16:05:41.275 239969 INFO nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance shutdown successfully after 24 seconds.
Jan 26 16:05:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.5 MiB/s wr, 182 op/s
Jan 26 16:05:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:42 compute-0 podman[312738]: 2026-01-26 16:05:42.582753787 +0000 UTC m=+2.994381781 container attach a8b901ab391c69d5a31444b5ca2a4cbdec4ce9efe0dbeda6d86029513d6112c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:05:42 compute-0 podman[312738]: 2026-01-26 16:05:42.584449319 +0000 UTC m=+2.996077233 container died a8b901ab391c69d5a31444b5ca2a4cbdec4ce9efe0dbeda6d86029513d6112c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:05:42 compute-0 ceph-mon[75140]: pgmap v1589: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 4.0 MiB/s wr, 188 op/s
Jan 26 16:05:42 compute-0 kernel: tapcafc30f3-a3 (unregistering): left promiscuous mode
Jan 26 16:05:42 compute-0 NetworkManager[48954]: <info>  [1769443542.6544] device (tapcafc30f3-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:05:42 compute-0 ovn_controller[146046]: 2026-01-26T16:05:42Z|00801|binding|INFO|Releasing lport cafc30f3-a313-4723-9612-f0b29a9513bb from this chassis (sb_readonly=0)
Jan 26 16:05:42 compute-0 ovn_controller[146046]: 2026-01-26T16:05:42Z|00802|binding|INFO|Setting lport cafc30f3-a313-4723-9612-f0b29a9513bb down in Southbound
Jan 26 16:05:42 compute-0 ovn_controller[146046]: 2026-01-26T16:05:42Z|00803|binding|INFO|Removing iface tapcafc30f3-a3 ovn-installed in OVS
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:42.666232) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443542666263, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1827, "num_deletes": 253, "total_data_size": 2817446, "memory_usage": 2857792, "flush_reason": "Manual Compaction"}
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.663 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.669 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.672 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:90:b1 10.100.0.9'], port_security=['fa:16:3e:43:90:b1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '76321bf3-65b2-429d-9eb3-27916690a101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'adbf641d-e58d-4501-869f-a3fe43494930', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=cafc30f3-a313-4723-9612-f0b29a9513bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d94bc254de43882b1999bfc582a2ea39fa12687463f4dea96ac6fef4ec67db3-merged.mount: Deactivated successfully.
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.678 156105 INFO neutron.agent.ovn.metadata.agent [-] Port cafc30f3-a313-4723-9612-f0b29a9513bb in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.680 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.694 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443542697626, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2740253, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31340, "largest_seqno": 33166, "table_properties": {"data_size": 2731932, "index_size": 5007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18040, "raw_average_key_size": 20, "raw_value_size": 2714943, "raw_average_value_size": 3081, "num_data_blocks": 221, "num_entries": 881, "num_filter_entries": 881, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769443377, "oldest_key_time": 1769443377, "file_creation_time": 1769443542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 31467 microseconds, and 6970 cpu microseconds.
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:42.697692) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2740253 bytes OK
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:42.697714) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:42.701798) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:42.701826) EVENT_LOG_v1 {"time_micros": 1769443542701819, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:42.701848) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2809552, prev total WAL file size 2809552, number of live WAL files 2.
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:42.703716) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2676KB)], [68(7566KB)]
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443542703774, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 10487895, "oldest_snapshot_seqno": -1}
Jan 26 16:05:42 compute-0 kernel: tap29825953-a5 (unregistering): left promiscuous mode
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.711 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d1eb890a-bb2f-4a99-b9e4-17bbc0ef215a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:42 compute-0 NetworkManager[48954]: <info>  [1769443542.7298] device (tap29825953-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:05:42 compute-0 ovn_controller[146046]: 2026-01-26T16:05:42Z|00804|binding|INFO|Releasing lport 29825953-a53f-4dd1-bf0d-b670cc439160 from this chassis (sb_readonly=0)
Jan 26 16:05:42 compute-0 ovn_controller[146046]: 2026-01-26T16:05:42Z|00805|binding|INFO|Setting lport 29825953-a53f-4dd1-bf0d-b670cc439160 down in Southbound
Jan 26 16:05:42 compute-0 ovn_controller[146046]: 2026-01-26T16:05:42Z|00806|binding|INFO|Removing iface tap29825953-a5 ovn-installed in OVS
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.729 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.738 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:bc:39 10.100.0.14'], port_security=['fa:16:3e:10:bc:39 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08d4e028-235a-4222-9acb-e07b6652d9ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=29825953-a53f-4dd1-bf0d-b670cc439160) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.754 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000054.scope: Deactivated successfully.
Jan 26 16:05:42 compute-0 systemd[1]: machine-qemu\x2d100\x2dinstance\x2d00000054.scope: Consumed 19.650s CPU time.
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.761 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[70512004-b29f-48fc-97f8-d137a8dbc4a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:42 compute-0 systemd-machined[208061]: Machine qemu-100-instance-00000054 terminated.
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.765 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3383f3e1-c791-4d81-b3f8-9e680ce489b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:42 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000053.scope: Deactivated successfully.
Jan 26 16:05:42 compute-0 systemd[1]: machine-qemu\x2d102\x2dinstance\x2d00000053.scope: Consumed 4.146s CPU time.
Jan 26 16:05:42 compute-0 systemd-machined[208061]: Machine qemu-102-instance-00000053 terminated.
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.798 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ba43758b-0ced-46c5-a550-c21b1467ae4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.818 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a72fab36-1ee5-46ec-9595-9d922418d131]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495544, 'reachable_time': 41449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312788, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.835 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[984934a3-637e-4ab5-a97b-2241ee4cfb0c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84c1ad93-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495559, 'tstamp': 495559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312789, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84c1ad93-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495562, 'tstamp': 495562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312789, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.838 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.840 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.847 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.848 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.850 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.852 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.852 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.855 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 29825953-a53f-4dd1-bf0d-b670cc439160 in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 unbound from our chassis
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.857 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:05:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:42.858 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a459158-4b2b-4172-8cf2-7f993d91ced8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.868 239969 INFO nova.virt.libvirt.driver [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance destroyed successfully.
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.869 239969 DEBUG nova.objects.instance [None req-a68fbe53-994c-44e8-b0f7-a7dede672153 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'numa_topology' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:42 compute-0 NetworkManager[48954]: <info>  [1769443542.9074] manager: (tapcafc30f3-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/353)
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.916 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.929 239969 INFO nova.virt.libvirt.driver [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance destroyed successfully.
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.933 239969 INFO nova.virt.libvirt.driver [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance destroyed successfully.
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.934 239969 DEBUG nova.virt.libvirt.vif [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1105506242',display_name='tempest-ServerActionsTestJSON-server-1040479726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1105506242',id=84,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:05:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-drlg4l02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-proj
ect-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:05:16Z,user_data=None,user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=76321bf3-65b2-429d-9eb3-27916690a101,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.934 239969 DEBUG nova.network.os_vif_util [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.935 239969 DEBUG nova.network.os_vif_util [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.936 239969 DEBUG os_vif [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.938 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcafc30f3-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.939 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.945 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:42 compute-0 nova_compute[239965]: 2026-01-26 16:05:42.947 239969 INFO os_vif [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3')
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 5973 keys, 8784245 bytes, temperature: kUnknown
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443542984793, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 8784245, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8743978, "index_size": 24221, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14981, "raw_key_size": 150002, "raw_average_key_size": 25, "raw_value_size": 8636559, "raw_average_value_size": 1445, "num_data_blocks": 981, "num_entries": 5973, "num_filter_entries": 5973, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769443542, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:05:42 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:05:43 compute-0 podman[312738]: 2026-01-26 16:05:43.308351657 +0000 UTC m=+3.719979581 container remove a8b901ab391c69d5a31444b5ca2a4cbdec4ce9efe0dbeda6d86029513d6112c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_euclid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:42.985323) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 8784245 bytes
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:43.311016) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 37.3 rd, 31.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 7.4 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 6495, records dropped: 522 output_compression: NoCompression
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:43.311059) EVENT_LOG_v1 {"time_micros": 1769443543311041, "job": 38, "event": "compaction_finished", "compaction_time_micros": 281373, "compaction_time_cpu_micros": 23723, "output_level": 6, "num_output_files": 1, "total_output_size": 8784245, "num_input_records": 6495, "num_output_records": 5973, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443543311883, "job": 38, "event": "table_file_deletion", "file_number": 70}
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443543314174, "job": 38, "event": "table_file_deletion", "file_number": 68}
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:42.703611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:43.314265) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:43.314272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:43.314275) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:43.314278) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:05:43 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:05:43.314281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:05:43 compute-0 systemd[1]: libpod-conmon-a8b901ab391c69d5a31444b5ca2a4cbdec4ce9efe0dbeda6d86029513d6112c1.scope: Deactivated successfully.
Jan 26 16:05:43 compute-0 podman[312839]: 2026-01-26 16:05:43.59588547 +0000 UTC m=+0.093096008 container create d896fc6cb294454584c87b6f43a2c03f389e23f6a25fe3d6c1cb4644a44b97c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:05:43 compute-0 ceph-mon[75140]: pgmap v1590: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.5 MiB/s wr, 182 op/s
Jan 26 16:05:43 compute-0 podman[312839]: 2026-01-26 16:05:43.539476421 +0000 UTC m=+0.036687049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:05:43 compute-0 kernel: tap29825953-a5: entered promiscuous mode
Jan 26 16:05:43 compute-0 systemd-udevd[312775]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:05:43 compute-0 nova_compute[239965]: 2026-01-26 16:05:43.662 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:43 compute-0 ovn_controller[146046]: 2026-01-26T16:05:43Z|00807|binding|INFO|Claiming lport 29825953-a53f-4dd1-bf0d-b670cc439160 for this chassis.
Jan 26 16:05:43 compute-0 ovn_controller[146046]: 2026-01-26T16:05:43Z|00808|binding|INFO|29825953-a53f-4dd1-bf0d-b670cc439160: Claiming fa:16:3e:10:bc:39 10.100.0.14
Jan 26 16:05:43 compute-0 NetworkManager[48954]: <info>  [1769443543.6681] manager: (tap29825953-a5): new Tun device (/org/freedesktop/NetworkManager/Devices/354)
Jan 26 16:05:43 compute-0 systemd[1]: Started libpod-conmon-d896fc6cb294454584c87b6f43a2c03f389e23f6a25fe3d6c1cb4644a44b97c6.scope.
Jan 26 16:05:43 compute-0 NetworkManager[48954]: <info>  [1769443543.6812] device (tap29825953-a5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:05:43 compute-0 NetworkManager[48954]: <info>  [1769443543.6817] device (tap29825953-a5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:05:43 compute-0 ovn_controller[146046]: 2026-01-26T16:05:43Z|00809|binding|INFO|Setting lport 29825953-a53f-4dd1-bf0d-b670cc439160 ovn-installed in OVS
Jan 26 16:05:43 compute-0 nova_compute[239965]: 2026-01-26 16:05:43.692 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:05:43 compute-0 systemd-machined[208061]: New machine qemu-103-instance-00000053.
Jan 26 16:05:43 compute-0 systemd[1]: Started Virtual Machine qemu-103-instance-00000053.
Jan 26 16:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0e6091c4021605835f378adda4e6a0f8cf8c1fba445e1a5cc2e8d77d4e0b6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0e6091c4021605835f378adda4e6a0f8cf8c1fba445e1a5cc2e8d77d4e0b6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0e6091c4021605835f378adda4e6a0f8cf8c1fba445e1a5cc2e8d77d4e0b6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb0e6091c4021605835f378adda4e6a0f8cf8c1fba445e1a5cc2e8d77d4e0b6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:43 compute-0 ovn_controller[146046]: 2026-01-26T16:05:43Z|00810|binding|INFO|Setting lport 29825953-a53f-4dd1-bf0d-b670cc439160 up in Southbound
Jan 26 16:05:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:43.739 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:bc:39 10.100.0.14'], port_security=['fa:16:3e:10:bc:39 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08d4e028-235a-4222-9acb-e07b6652d9ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=29825953-a53f-4dd1-bf0d-b670cc439160) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:43.740 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 29825953-a53f-4dd1-bf0d-b670cc439160 in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 bound to our chassis
Jan 26 16:05:43 compute-0 nova_compute[239965]: 2026-01-26 16:05:43.739 239969 DEBUG nova.compute.manager [req-b06537a6-b485-4f35-a68a-7416a2b2d3af req-cadec82a-eabf-47b7-a6ae-258e9f1e3165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-unplugged-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:43 compute-0 nova_compute[239965]: 2026-01-26 16:05:43.739 239969 DEBUG oslo_concurrency.lockutils [req-b06537a6-b485-4f35-a68a-7416a2b2d3af req-cadec82a-eabf-47b7-a6ae-258e9f1e3165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:43 compute-0 nova_compute[239965]: 2026-01-26 16:05:43.740 239969 DEBUG oslo_concurrency.lockutils [req-b06537a6-b485-4f35-a68a-7416a2b2d3af req-cadec82a-eabf-47b7-a6ae-258e9f1e3165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:43 compute-0 nova_compute[239965]: 2026-01-26 16:05:43.740 239969 DEBUG oslo_concurrency.lockutils [req-b06537a6-b485-4f35-a68a-7416a2b2d3af req-cadec82a-eabf-47b7-a6ae-258e9f1e3165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:43 compute-0 nova_compute[239965]: 2026-01-26 16:05:43.740 239969 DEBUG nova.compute.manager [req-b06537a6-b485-4f35-a68a-7416a2b2d3af req-cadec82a-eabf-47b7-a6ae-258e9f1e3165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] No waiting events found dispatching network-vif-unplugged-cafc30f3-a313-4723-9612-f0b29a9513bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:43 compute-0 nova_compute[239965]: 2026-01-26 16:05:43.740 239969 WARNING nova.compute.manager [req-b06537a6-b485-4f35-a68a-7416a2b2d3af req-cadec82a-eabf-47b7-a6ae-258e9f1e3165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received unexpected event network-vif-unplugged-cafc30f3-a313-4723-9612-f0b29a9513bb for instance with vm_state active and task_state rebuilding.
Jan 26 16:05:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:43.741 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:05:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:43.743 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ae24ce5f-78c9-4a61-b25a-f01c63ab1e4c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:43 compute-0 podman[312839]: 2026-01-26 16:05:43.790320957 +0000 UTC m=+0.287531505 container init d896fc6cb294454584c87b6f43a2c03f389e23f6a25fe3d6c1cb4644a44b97c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cerf, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:05:43 compute-0 podman[312839]: 2026-01-26 16:05:43.801412218 +0000 UTC m=+0.298622766 container start d896fc6cb294454584c87b6f43a2c03f389e23f6a25fe3d6c1cb4644a44b97c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 16:05:43 compute-0 podman[312839]: 2026-01-26 16:05:43.805621341 +0000 UTC m=+0.302831879 container attach d896fc6cb294454584c87b6f43a2c03f389e23f6a25fe3d6c1cb4644a44b97c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cerf, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.035 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.9 MiB/s wr, 166 op/s
Jan 26 16:05:44 compute-0 competent_cerf[312866]: {
Jan 26 16:05:44 compute-0 competent_cerf[312866]:     "0": [
Jan 26 16:05:44 compute-0 competent_cerf[312866]:         {
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "devices": [
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "/dev/loop3"
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             ],
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_name": "ceph_lv0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_size": "21470642176",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "name": "ceph_lv0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "tags": {
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.cluster_name": "ceph",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.crush_device_class": "",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.encrypted": "0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.objectstore": "bluestore",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.osd_id": "0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.type": "block",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.vdo": "0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.with_tpm": "0"
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             },
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "type": "block",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "vg_name": "ceph_vg0"
Jan 26 16:05:44 compute-0 competent_cerf[312866]:         }
Jan 26 16:05:44 compute-0 competent_cerf[312866]:     ],
Jan 26 16:05:44 compute-0 competent_cerf[312866]:     "1": [
Jan 26 16:05:44 compute-0 competent_cerf[312866]:         {
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "devices": [
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "/dev/loop4"
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             ],
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_name": "ceph_lv1",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_size": "21470642176",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "name": "ceph_lv1",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "tags": {
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.cluster_name": "ceph",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.crush_device_class": "",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.encrypted": "0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.objectstore": "bluestore",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.osd_id": "1",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.type": "block",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.vdo": "0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.with_tpm": "0"
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             },
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "type": "block",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "vg_name": "ceph_vg1"
Jan 26 16:05:44 compute-0 competent_cerf[312866]:         }
Jan 26 16:05:44 compute-0 competent_cerf[312866]:     ],
Jan 26 16:05:44 compute-0 competent_cerf[312866]:     "2": [
Jan 26 16:05:44 compute-0 competent_cerf[312866]:         {
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "devices": [
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "/dev/loop5"
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             ],
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_name": "ceph_lv2",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_size": "21470642176",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "name": "ceph_lv2",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "tags": {
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.cluster_name": "ceph",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.crush_device_class": "",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.encrypted": "0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.objectstore": "bluestore",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.osd_id": "2",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.type": "block",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.vdo": "0",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:                 "ceph.with_tpm": "0"
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             },
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "type": "block",
Jan 26 16:05:44 compute-0 competent_cerf[312866]:             "vg_name": "ceph_vg2"
Jan 26 16:05:44 compute-0 competent_cerf[312866]:         }
Jan 26 16:05:44 compute-0 competent_cerf[312866]:     ]
Jan 26 16:05:44 compute-0 competent_cerf[312866]: }
Jan 26 16:05:44 compute-0 systemd[1]: libpod-d896fc6cb294454584c87b6f43a2c03f389e23f6a25fe3d6c1cb4644a44b97c6.scope: Deactivated successfully.
Jan 26 16:05:44 compute-0 podman[312922]: 2026-01-26 16:05:44.17677891 +0000 UTC m=+0.023543907 container died d896fc6cb294454584c87b6f43a2c03f389e23f6a25fe3d6c1cb4644a44b97c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cerf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.257 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 08d4e028-235a-4222-9acb-e07b6652d9ee due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.258 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443544.2573466, 08d4e028-235a-4222-9acb-e07b6652d9ee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.258 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] VM Resumed (Lifecycle Event)
Jan 26 16:05:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb0e6091c4021605835f378adda4e6a0f8cf8c1fba445e1a5cc2e8d77d4e0b6f-merged.mount: Deactivated successfully.
Jan 26 16:05:44 compute-0 podman[312922]: 2026-01-26 16:05:44.337934092 +0000 UTC m=+0.184699059 container remove d896fc6cb294454584c87b6f43a2c03f389e23f6a25fe3d6c1cb4644a44b97c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_cerf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:05:44 compute-0 systemd[1]: libpod-conmon-d896fc6cb294454584c87b6f43a2c03f389e23f6a25fe3d6c1cb4644a44b97c6.scope: Deactivated successfully.
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.362 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.365 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.365 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.366 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.366 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.367 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.367 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.382 239969 INFO nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Deleting instance files /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101_del
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.384 239969 INFO nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Deletion of /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101_del complete
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.393 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:44 compute-0 sudo[312701]: pam_unix(sudo:session): session closed for user root
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.418 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.419 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.420 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.420 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.420 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.463 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.464 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443544.259521, 08d4e028-235a-4222-9acb-e07b6652d9ee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.465 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] VM Started (Lifecycle Event)
Jan 26 16:05:44 compute-0 sudo[312961]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:05:44 compute-0 sudo[312961]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:05:44 compute-0 sudo[312961]: pam_unix(sudo:session): session closed for user root
Jan 26 16:05:44 compute-0 sudo[312987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:05:44 compute-0 sudo[312987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:05:44 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 26 16:05:44 compute-0 ceph-mon[75140]: pgmap v1591: 305 pgs: 305 active+clean; 548 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.9 MiB/s wr, 166 op/s
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.717 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.722 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: rescued, current task_state: unrescuing, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.749 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] During sync_power_state the instance has a pending task (unrescuing). Skip.
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.813 239969 DEBUG nova.compute.manager [None req-a68fbe53-994c-44e8-b0f7-a7dede672153 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:44 compute-0 podman[313044]: 2026-01-26 16:05:44.882614977 +0000 UTC m=+0.057750084 container create e9e319acc45ba14f56dd8755d7b55c97a7a9417b0b2e357eda2c089d5bf10209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.926 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.927 239969 INFO nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Creating image(s)
Jan 26 16:05:44 compute-0 systemd[1]: Started libpod-conmon-e9e319acc45ba14f56dd8755d7b55c97a7a9417b0b2e357eda2c089d5bf10209.scope.
Jan 26 16:05:44 compute-0 podman[313044]: 2026-01-26 16:05:44.849789184 +0000 UTC m=+0.024924311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:05:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:05:44 compute-0 nova_compute[239965]: 2026-01-26 16:05:44.967 239969 DEBUG nova.storage.rbd_utils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:44 compute-0 podman[313044]: 2026-01-26 16:05:44.976348309 +0000 UTC m=+0.151483436 container init e9e319acc45ba14f56dd8755d7b55c97a7a9417b0b2e357eda2c089d5bf10209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_euler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:05:44 compute-0 podman[313044]: 2026-01-26 16:05:44.984896368 +0000 UTC m=+0.160031465 container start e9e319acc45ba14f56dd8755d7b55c97a7a9417b0b2e357eda2c089d5bf10209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_euler, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:05:44 compute-0 podman[313044]: 2026-01-26 16:05:44.988400745 +0000 UTC m=+0.163535852 container attach e9e319acc45ba14f56dd8755d7b55c97a7a9417b0b2e357eda2c089d5bf10209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_euler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:05:44 compute-0 epic_euler[313061]: 167 167
Jan 26 16:05:44 compute-0 systemd[1]: libpod-e9e319acc45ba14f56dd8755d7b55c97a7a9417b0b2e357eda2c089d5bf10209.scope: Deactivated successfully.
Jan 26 16:05:44 compute-0 podman[313044]: 2026-01-26 16:05:44.991767577 +0000 UTC m=+0.166902684 container died e9e319acc45ba14f56dd8755d7b55c97a7a9417b0b2e357eda2c089d5bf10209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_euler, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 16:05:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-742c13024894281927dae5dfc5116779b81ee289aeedb024c42071aa28154773-merged.mount: Deactivated successfully.
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.023 239969 DEBUG nova.storage.rbd_utils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:45 compute-0 podman[313044]: 2026-01-26 16:05:45.038662534 +0000 UTC m=+0.213797641 container remove e9e319acc45ba14f56dd8755d7b55c97a7a9417b0b2e357eda2c089d5bf10209 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Jan 26 16:05:45 compute-0 systemd[1]: libpod-conmon-e9e319acc45ba14f56dd8755d7b55c97a7a9417b0b2e357eda2c089d5bf10209.scope: Deactivated successfully.
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.070 239969 DEBUG nova.storage.rbd_utils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:05:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698614389' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.076 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.118 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.175 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.176 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.176 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.177 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.201 239969 DEBUG nova.storage.rbd_utils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.214 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 76321bf3-65b2-429d-9eb3-27916690a101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:45 compute-0 podman[313163]: 2026-01-26 16:05:45.261500385 +0000 UTC m=+0.046044117 container create 85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 16:05:45 compute-0 systemd[1]: Started libpod-conmon-85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14.scope.
Jan 26 16:05:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b35504fcbd38f12c6e7f5fdc10644ca91ce27371598df017bb246734c13fa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b35504fcbd38f12c6e7f5fdc10644ca91ce27371598df017bb246734c13fa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b35504fcbd38f12c6e7f5fdc10644ca91ce27371598df017bb246734c13fa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b35504fcbd38f12c6e7f5fdc10644ca91ce27371598df017bb246734c13fa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:05:45 compute-0 podman[313163]: 2026-01-26 16:05:45.24084211 +0000 UTC m=+0.025385872 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:05:45 compute-0 podman[313163]: 2026-01-26 16:05:45.34922377 +0000 UTC m=+0.133767522 container init 85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_aryabhata, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:05:45 compute-0 podman[313163]: 2026-01-26 16:05:45.358212921 +0000 UTC m=+0.142756653 container start 85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_aryabhata, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:05:45 compute-0 podman[313163]: 2026-01-26 16:05:45.363999882 +0000 UTC m=+0.148543614 container attach 85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_aryabhata, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.385 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.387 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.394 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.395 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000053 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.409 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.409 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.409 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000050 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.417 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.417 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.610 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 76321bf3-65b2-429d-9eb3-27916690a101_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1698614389' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.694 239969 DEBUG nova.storage.rbd_utils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] resizing rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.886 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.886 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3141MB free_disk=59.74267577752471GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.887 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.887 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.935 239969 DEBUG nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.936 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.936 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.936 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.937 239969 DEBUG nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] No waiting events found dispatching network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.937 239969 WARNING nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received unexpected event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb for instance with vm_state active and task_state rebuild_spawning.
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.937 239969 DEBUG nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-unplugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.937 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.938 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.938 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.938 239969 DEBUG nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-unplugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.941 239969 WARNING nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-unplugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state active and task_state None.
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.941 239969 DEBUG nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.942 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.942 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.942 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.942 239969 DEBUG nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.943 239969 WARNING nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state active and task_state None.
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.943 239969 DEBUG nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.943 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.943 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.943 239969 DEBUG oslo_concurrency.lockutils [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.944 239969 DEBUG nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.944 239969 WARNING nova.compute.manager [req-73ec7ab7-85fa-45ca-9b02-872c0e5a7b90 req-869c1581-b362-4de2-aa65-d3490a93c1ea a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state active and task_state None.
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.964 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.964 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Ensure instance console log exists: /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.965 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.965 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.965 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.968 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Start _get_guest_xml network_info=[{"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.968 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.974 239969 WARNING nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.978 239969 DEBUG nova.virt.libvirt.host [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.980 239969 DEBUG nova.virt.libvirt.host [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.987 239969 DEBUG nova.virt.libvirt.host [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.988 239969 DEBUG nova.virt.libvirt.host [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.988 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.988 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.989 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.989 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.989 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.989 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.989 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.989 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.990 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.990 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.990 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.990 239969 DEBUG nova.virt.hardware [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:05:45 compute-0 nova_compute[239965]: 2026-01-26 16:05:45.990 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1592: 305 pgs: 305 active+clean; 470 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.9 MiB/s wr, 256 op/s
Jan 26 16:05:46 compute-0 lvm[313348]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:05:46 compute-0 lvm[313349]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:05:46 compute-0 lvm[313348]: VG ceph_vg0 finished
Jan 26 16:05:46 compute-0 lvm[313349]: VG ceph_vg1 finished
Jan 26 16:05:46 compute-0 lvm[313351]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:05:46 compute-0 lvm[313351]: VG ceph_vg2 finished
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.150 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.236 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 492acb7a-42e1-4918-a973-352f9d8251d0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.238 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 477642fb-5695-4387-944d-587e17cf3ed8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.238 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 08d4e028-235a-4222-9acb-e07b6652d9ee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.238 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 76321bf3-65b2-429d-9eb3-27916690a101 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.239 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 571cd10f-e22e-43b6-9523-e3539d047e4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.239 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.239 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:05:46 compute-0 loving_aryabhata[313194]: {}
Jan 26 16:05:46 compute-0 systemd[1]: libpod-85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14.scope: Deactivated successfully.
Jan 26 16:05:46 compute-0 systemd[1]: libpod-85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14.scope: Consumed 1.335s CPU time.
Jan 26 16:05:46 compute-0 podman[313163]: 2026-01-26 16:05:46.293514731 +0000 UTC m=+1.078058463 container died 85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_aryabhata, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.409 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-20b35504fcbd38f12c6e7f5fdc10644ca91ce27371598df017bb246734c13fa1-merged.mount: Deactivated successfully.
Jan 26 16:05:46 compute-0 ceph-mon[75140]: pgmap v1592: 305 pgs: 305 active+clean; 470 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.9 MiB/s wr, 256 op/s
Jan 26 16:05:46 compute-0 podman[313163]: 2026-01-26 16:05:46.738392074 +0000 UTC m=+1.522935796 container remove 85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_aryabhata, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:05:46 compute-0 systemd[1]: libpod-conmon-85a587a5fd89d746a0854ba56afbb573e4f80ff82f5259f4aa9532431289ad14.scope: Deactivated successfully.
Jan 26 16:05:46 compute-0 sudo[312987]: pam_unix(sudo:session): session closed for user root
Jan 26 16:05:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:05:46 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:05:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:05:46 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:05:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2401409283' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.860 239969 DEBUG oslo_concurrency.lockutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.861 239969 DEBUG oslo_concurrency.lockutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.861 239969 DEBUG oslo_concurrency.lockutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.861 239969 DEBUG oslo_concurrency.lockutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.861 239969 DEBUG oslo_concurrency.lockutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.862 239969 INFO nova.compute.manager [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Terminating instance
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.863 239969 DEBUG nova.compute.manager [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:05:46 compute-0 ovn_controller[146046]: 2026-01-26T16:05:46Z|00090|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:ea:5b 10.100.0.6
Jan 26 16:05:46 compute-0 ovn_controller[146046]: 2026-01-26T16:05:46Z|00091|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:ea:5b 10.100.0.6
Jan 26 16:05:46 compute-0 sudo[313404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:05:46 compute-0 sudo[313404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:05:46 compute-0 sudo[313404]: pam_unix(sudo:session): session closed for user root
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.884 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.733s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.927 239969 DEBUG nova.storage.rbd_utils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:46 compute-0 nova_compute[239965]: 2026-01-26 16:05:46.949 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:05:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2020180960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.110 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.700s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.116 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.131 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:05:47 compute-0 kernel: tap29825953-a5 (unregistering): left promiscuous mode
Jan 26 16:05:47 compute-0 NetworkManager[48954]: <info>  [1769443547.2769] device (tap29825953-a5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:05:47 compute-0 ovn_controller[146046]: 2026-01-26T16:05:47Z|00811|binding|INFO|Releasing lport 29825953-a53f-4dd1-bf0d-b670cc439160 from this chassis (sb_readonly=0)
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.295 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 ovn_controller[146046]: 2026-01-26T16:05:47Z|00812|binding|INFO|Setting lport 29825953-a53f-4dd1-bf0d-b670cc439160 down in Southbound
Jan 26 16:05:47 compute-0 ovn_controller[146046]: 2026-01-26T16:05:47Z|00813|binding|INFO|Removing iface tap29825953-a5 ovn-installed in OVS
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Deactivated successfully.
Jan 26 16:05:47 compute-0 systemd[1]: machine-qemu\x2d103\x2dinstance\x2d00000053.scope: Consumed 3.002s CPU time.
Jan 26 16:05:47 compute-0 systemd-machined[208061]: Machine qemu-103-instance-00000053 terminated.
Jan 26 16:05:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:47.404 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:bc:39 10.100.0.14'], port_security=['fa:16:3e:10:bc:39 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08d4e028-235a-4222-9acb-e07b6652d9ee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '8', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=29825953-a53f-4dd1-bf0d-b670cc439160) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:47.406 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 29825953-a53f-4dd1-bf0d-b670cc439160 in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 unbound from our chassis
Jan 26 16:05:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:47.406 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:05:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:47.408 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9b5b7d-f46f-4748-ac4f-878b5cfdc8bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.418 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.418 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.531s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.505 239969 INFO nova.virt.libvirt.driver [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Instance destroyed successfully.
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.506 239969 DEBUG nova.objects.instance [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'resources' on Instance uuid 08d4e028-235a-4222-9acb-e07b6652d9ee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:05:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1307502113' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.546 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.548 239969 DEBUG nova.virt.libvirt.vif [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1105506242',display_name='tempest-ServerActionsTestJSON-server-1040479726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1105506242',id=84,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:05:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-drlg4l02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:05:44Z,user_data=None,user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=76321bf3-65b2-429d-9eb3-27916690a101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.549 239969 DEBUG nova.network.os_vif_util [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.550 239969 DEBUG nova.network.os_vif_util [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.554 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <uuid>76321bf3-65b2-429d-9eb3-27916690a101</uuid>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <name>instance-00000054</name>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestJSON-server-1040479726</nova:name>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:05:45</nova:creationTime>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <nova:user uuid="59f8ee09903a4e0a812c3d9e013996bd">tempest-ServerActionsTestJSON-274649005-project-member</nova:user>
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <nova:project uuid="94771806cd0d4b5db117956e09fea9e6">tempest-ServerActionsTestJSON-274649005</nova:project>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <nova:port uuid="cafc30f3-a313-4723-9612-f0b29a9513bb">
Jan 26 16:05:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <system>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <entry name="serial">76321bf3-65b2-429d-9eb3-27916690a101</entry>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <entry name="uuid">76321bf3-65b2-429d-9eb3-27916690a101</entry>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     </system>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <os>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   </os>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <features>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   </features>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/76321bf3-65b2-429d-9eb3-27916690a101_disk">
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/76321bf3-65b2-429d-9eb3-27916690a101_disk.config">
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:05:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:43:90:b1"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <target dev="tapcafc30f3-a3"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/console.log" append="off"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <video>
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     </video>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:05:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:05:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:05:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:05:47 compute-0 nova_compute[239965]: </domain>
Jan 26 16:05:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.555 239969 DEBUG nova.compute.manager [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Preparing to wait for external event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.555 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.555 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.556 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.557 239969 DEBUG nova.virt.libvirt.vif [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1105506242',display_name='tempest-ServerActionsTestJSON-server-1040479726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1105506242',id=84,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:05:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-drlg4l02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:05:44Z,user_data=None,user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=76321bf3-65b2-429d-9eb3-27916690a101,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.557 239969 DEBUG nova.network.os_vif_util [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.558 239969 DEBUG nova.network.os_vif_util [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.558 239969 DEBUG os_vif [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.559 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.560 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.560 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.564 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.565 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcafc30f3-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.565 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcafc30f3-a3, col_values=(('external_ids', {'iface-id': 'cafc30f3-a313-4723-9612-f0b29a9513bb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:43:90:b1', 'vm-uuid': '76321bf3-65b2-429d-9eb3-27916690a101'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.568 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 NetworkManager[48954]: <info>  [1769443547.5695] manager: (tapcafc30f3-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/355)
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.577 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.578 239969 INFO os_vif [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3')
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.729 239969 DEBUG nova.virt.libvirt.vif [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:04:55Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-1716043766',display_name='tempest-ServerRescueTestJSON-server-1716043766',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-1716043766',id=83,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:05:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d91082a727584818a01c3c60e7f7ef73',ramdisk_id='',reservation_id='r-d9mj2wj8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1881999610',owner_user_name='tempest-ServerRescueTestJSON-1881999610-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:05:44Z,user_data=None,user_id='7a6914254de3499da1b987d99b6e0ff6',uuid=08d4e028-235a-4222-9acb-e07b6652d9ee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.729 239969 DEBUG nova.network.os_vif_util [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converting VIF {"id": "29825953-a53f-4dd1-bf0d-b670cc439160", "address": "fa:16:3e:10:bc:39", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29825953-a5", "ovs_interfaceid": "29825953-a53f-4dd1-bf0d-b670cc439160", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.731 239969 DEBUG nova.network.os_vif_util [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:bc:39,bridge_name='br-int',has_traffic_filtering=True,id=29825953-a53f-4dd1-bf0d-b670cc439160,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29825953-a5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.731 239969 DEBUG os_vif [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:bc:39,bridge_name='br-int',has_traffic_filtering=True,id=29825953-a53f-4dd1-bf0d-b670cc439160,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29825953-a5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.735 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29825953-a5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.740 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.743 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.747 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.748 239969 INFO os_vif [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:bc:39,bridge_name='br-int',has_traffic_filtering=True,id=29825953-a53f-4dd1-bf0d-b670cc439160,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29825953-a5')
Jan 26 16:05:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:47.747 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:47.749 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.767 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.768 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.768 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] No VIF found with MAC fa:16:3e:43:90:b1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.769 239969 INFO nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Using config drive
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.788 239969 DEBUG nova.storage.rbd_utils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:05:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:05:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2401409283' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2020180960' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1307502113' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.816 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:47 compute-0 nova_compute[239965]: 2026-01-26 16:05:47.925 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'keypairs' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 305 active+clean; 470 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.0 MiB/s wr, 146 op/s
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.127 239969 INFO nova.virt.libvirt.driver [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Deleting instance files /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee_del
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.128 239969 INFO nova.virt.libvirt.driver [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Deletion of /var/lib/nova/instances/08d4e028-235a-4222-9acb-e07b6652d9ee_del complete
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.241 239969 DEBUG nova.compute.manager [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.242 239969 DEBUG oslo_concurrency.lockutils [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.242 239969 DEBUG oslo_concurrency.lockutils [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.242 239969 DEBUG oslo_concurrency.lockutils [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.242 239969 DEBUG nova.compute.manager [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.242 239969 WARNING nova.compute.manager [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state active and task_state deleting.
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.243 239969 DEBUG nova.compute.manager [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-unplugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.243 239969 DEBUG oslo_concurrency.lockutils [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.243 239969 DEBUG oslo_concurrency.lockutils [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.243 239969 DEBUG oslo_concurrency.lockutils [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.243 239969 DEBUG nova.compute.manager [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-unplugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.243 239969 DEBUG nova.compute.manager [req-5281641a-1483-4d9e-8626-a0fce88486b7 req-d0fa8948-e4a0-42c0-9a15-4789641a8b62 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-unplugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.286 239969 INFO nova.compute.manager [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Took 1.42 seconds to destroy the instance on the hypervisor.
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.286 239969 DEBUG oslo.service.loopingcall [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.287 239969 DEBUG nova.compute.manager [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.287 239969 DEBUG nova.network.neutron [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:05:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:05:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2548068434' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:05:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:05:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2548068434' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.736 239969 INFO nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Creating config drive at /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.747 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgk8zau25 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:48 compute-0 ceph-mon[75140]: pgmap v1593: 305 pgs: 305 active+clean; 470 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.0 MiB/s wr, 146 op/s
Jan 26 16:05:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2548068434' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:05:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2548068434' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003447959515749884 of space, bias 1.0, pg target 1.0343878547249652 quantized to 32 (current 32)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010162461601198206 of space, bias 1.0, pg target 0.30385760187582633 quantized to 32 (current 32)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.706181552949311e-07 of space, bias 4.0, pg target 0.0009216593137327376 quantized to 16 (current 16)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:05:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.899 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgk8zau25" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.924 239969 DEBUG nova.storage.rbd_utils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] rbd image 76321bf3-65b2-429d-9eb3-27916690a101_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:05:48 compute-0 nova_compute[239965]: 2026-01-26 16:05:48.928 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config 76321bf3-65b2-429d-9eb3-27916690a101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.235 239969 DEBUG oslo_concurrency.processutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config 76321bf3-65b2-429d-9eb3-27916690a101_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.237 239969 INFO nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Deleting local config drive /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101/disk.config because it was imported into RBD.
Jan 26 16:05:49 compute-0 kernel: tapcafc30f3-a3: entered promiscuous mode
Jan 26 16:05:49 compute-0 NetworkManager[48954]: <info>  [1769443549.3030] manager: (tapcafc30f3-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/356)
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:49 compute-0 ovn_controller[146046]: 2026-01-26T16:05:49Z|00814|binding|INFO|Claiming lport cafc30f3-a313-4723-9612-f0b29a9513bb for this chassis.
Jan 26 16:05:49 compute-0 ovn_controller[146046]: 2026-01-26T16:05:49Z|00815|binding|INFO|cafc30f3-a313-4723-9612-f0b29a9513bb: Claiming fa:16:3e:43:90:b1 10.100.0.9
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.332 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:49 compute-0 systemd-udevd[313581]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:05:49 compute-0 ovn_controller[146046]: 2026-01-26T16:05:49Z|00816|binding|INFO|Setting lport cafc30f3-a313-4723-9612-f0b29a9513bb ovn-installed in OVS
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.337 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:49 compute-0 systemd-machined[208061]: New machine qemu-104-instance-00000054.
Jan 26 16:05:49 compute-0 NetworkManager[48954]: <info>  [1769443549.3515] device (tapcafc30f3-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:05:49 compute-0 NetworkManager[48954]: <info>  [1769443549.3523] device (tapcafc30f3-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:05:49 compute-0 systemd[1]: Started Virtual Machine qemu-104-instance-00000054.
Jan 26 16:05:49 compute-0 ovn_controller[146046]: 2026-01-26T16:05:49Z|00817|binding|INFO|Setting lport cafc30f3-a313-4723-9612-f0b29a9513bb up in Southbound
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.378 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:90:b1 10.100.0.9'], port_security=['fa:16:3e:43:90:b1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '76321bf3-65b2-429d-9eb3-27916690a101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'adbf641d-e58d-4501-869f-a3fe43494930', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=cafc30f3-a313-4723-9612-f0b29a9513bb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.380 156105 INFO neutron.agent.ovn.metadata.agent [-] Port cafc30f3-a313-4723-9612-f0b29a9513bb in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d bound to our chassis
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.382 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.403 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[19fb3f70-9f57-4f76-b6d8-5b2101c01fa9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.436 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b5af3d7b-3f32-4984-bdf8-7e06184841e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.439 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fedafd1e-e2b0-44de-b495-d49f30b93382]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.473 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[25e5dba8-2b91-4445-a7e8-1e2698250ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.500 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ab1f12fa-0ba1-46fe-a6b8-069c702f6cf7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495544, 'reachable_time': 41449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313596, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.519 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b3844e8f-a1a8-4e13-a2a8-9c37731e1405]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84c1ad93-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495559, 'tstamp': 495559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313597, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84c1ad93-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495562, 'tstamp': 495562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313597, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.520 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.522 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.523 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.523 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.524 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.524 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:49.524 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.877 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 76321bf3-65b2-429d-9eb3-27916690a101 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.878 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443549.875522, 76321bf3-65b2-429d-9eb3-27916690a101 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.878 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] VM Started (Lifecycle Event)
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.919 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.924 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443549.876093, 76321bf3-65b2-429d-9eb3-27916690a101 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.925 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] VM Paused (Lifecycle Event)
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.941 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.946 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:49 compute-0 nova_compute[239965]: 2026-01-26 16:05:49.972 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:05:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1594: 305 pgs: 305 active+clean; 440 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 311 op/s
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.322 239969 DEBUG nova.network.neutron [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.418 239969 DEBUG nova.compute.manager [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.419 239969 DEBUG oslo_concurrency.lockutils [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.419 239969 DEBUG oslo_concurrency.lockutils [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.420 239969 DEBUG oslo_concurrency.lockutils [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.420 239969 DEBUG nova.compute.manager [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] No waiting events found dispatching network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.420 239969 WARNING nova.compute.manager [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received unexpected event network-vif-plugged-29825953-a53f-4dd1-bf0d-b670cc439160 for instance with vm_state active and task_state deleting.
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.421 239969 DEBUG nova.compute.manager [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.421 239969 DEBUG oslo_concurrency.lockutils [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.422 239969 DEBUG oslo_concurrency.lockutils [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.422 239969 DEBUG oslo_concurrency.lockutils [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.422 239969 DEBUG nova.compute.manager [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Processing event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.423 239969 DEBUG nova.compute.manager [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Received event network-vif-deleted-29825953-a53f-4dd1-bf0d-b670cc439160 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.423 239969 INFO nova.compute.manager [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Neutron deleted interface 29825953-a53f-4dd1-bf0d-b670cc439160; detaching it from the instance and deleting it from the info cache
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.424 239969 DEBUG nova.network.neutron [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.425 239969 DEBUG nova.compute.manager [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.431 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443550.4308634, 76321bf3-65b2-429d-9eb3-27916690a101 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.431 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] VM Resumed (Lifecycle Event)
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.434 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.439 239969 INFO nova.virt.libvirt.driver [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance spawned successfully.
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.440 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.448 239969 INFO nova.compute.manager [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Took 2.16 seconds to deallocate network for instance.
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.469 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.475 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.483 239969 DEBUG nova.compute.manager [req-af4cf2d8-5018-426e-a858-124937e107ee req-375625cb-6921-4eba-b9b7-625396b478d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Detach interface failed, port_id=29825953-a53f-4dd1-bf0d-b670cc439160, reason: Instance 08d4e028-235a-4222-9acb-e07b6652d9ee could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.490 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.490 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.491 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.491 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.492 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.492 239969 DEBUG nova.virt.libvirt.driver [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.502 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.521 239969 DEBUG oslo_concurrency.lockutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.521 239969 DEBUG oslo_concurrency.lockutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.559 239969 DEBUG nova.compute.manager [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.622 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.633 239969 DEBUG oslo_concurrency.processutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:50 compute-0 nova_compute[239965]: 2026-01-26 16:05:50.969 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:51 compute-0 ceph-mon[75140]: pgmap v1594: 305 pgs: 305 active+clean; 440 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 311 op/s
Jan 26 16:05:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:05:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687795253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.234 239969 DEBUG oslo_concurrency.processutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.240 239969 DEBUG nova.compute.provider_tree [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.260 239969 DEBUG nova.scheduler.client.report [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.279 239969 DEBUG oslo_concurrency.lockutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.282 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.660s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.282 239969 DEBUG nova.objects.instance [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.327 239969 INFO nova.scheduler.client.report [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Deleted allocations for instance 08d4e028-235a-4222-9acb-e07b6652d9ee
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.356 239969 DEBUG oslo_concurrency.lockutils [None req-e44cb05d-5178-42a7-a5f9-b7b22e2fb5de 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.073s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.394 239969 DEBUG oslo_concurrency.lockutils [None req-2dad116d-2046-4b28-b165-1b3849a4c852 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "08d4e028-235a-4222-9acb-e07b6652d9ee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.534s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:51 compute-0 nova_compute[239965]: 2026-01-26 16:05:51.563 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:51.750 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 305 active+clean; 422 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 297 op/s
Jan 26 16:05:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3687795253' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.499 239969 DEBUG nova.compute.manager [req-2afc1e15-94f0-45a1-903c-fc6891a6288c req-95216d06-58b3-41d9-acec-ff3efcd69409 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.499 239969 DEBUG oslo_concurrency.lockutils [req-2afc1e15-94f0-45a1-903c-fc6891a6288c req-95216d06-58b3-41d9-acec-ff3efcd69409 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.500 239969 DEBUG oslo_concurrency.lockutils [req-2afc1e15-94f0-45a1-903c-fc6891a6288c req-95216d06-58b3-41d9-acec-ff3efcd69409 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.500 239969 DEBUG oslo_concurrency.lockutils [req-2afc1e15-94f0-45a1-903c-fc6891a6288c req-95216d06-58b3-41d9-acec-ff3efcd69409 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.500 239969 DEBUG nova.compute.manager [req-2afc1e15-94f0-45a1-903c-fc6891a6288c req-95216d06-58b3-41d9-acec-ff3efcd69409 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] No waiting events found dispatching network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.500 239969 WARNING nova.compute.manager [req-2afc1e15-94f0-45a1-903c-fc6891a6288c req-95216d06-58b3-41d9-acec-ff3efcd69409 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received unexpected event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb for instance with vm_state active and task_state None.
Jan 26 16:05:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.792 239969 DEBUG oslo_concurrency.lockutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.792 239969 DEBUG oslo_concurrency.lockutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.793 239969 DEBUG oslo_concurrency.lockutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.793 239969 DEBUG oslo_concurrency.lockutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.793 239969 DEBUG oslo_concurrency.lockutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.794 239969 INFO nova.compute.manager [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Terminating instance
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.795 239969 DEBUG nova.compute.manager [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:05:52 compute-0 kernel: tap5083d232-fb (unregistering): left promiscuous mode
Jan 26 16:05:52 compute-0 NetworkManager[48954]: <info>  [1769443552.8551] device (tap5083d232-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:05:52 compute-0 ovn_controller[146046]: 2026-01-26T16:05:52Z|00818|binding|INFO|Releasing lport 5083d232-fbe1-4440-91de-c2c31fae4b6b from this chassis (sb_readonly=0)
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:52 compute-0 ovn_controller[146046]: 2026-01-26T16:05:52Z|00819|binding|INFO|Setting lport 5083d232-fbe1-4440-91de-c2c31fae4b6b down in Southbound
Jan 26 16:05:52 compute-0 ovn_controller[146046]: 2026-01-26T16:05:52Z|00820|binding|INFO|Removing iface tap5083d232-fb ovn-installed in OVS
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.868 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:52 compute-0 nova_compute[239965]: 2026-01-26 16:05:52.889 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:52 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000050.scope: Deactivated successfully.
Jan 26 16:05:52 compute-0 systemd[1]: machine-qemu\x2d96\x2dinstance\x2d00000050.scope: Consumed 15.804s CPU time.
Jan 26 16:05:52 compute-0 systemd-machined[208061]: Machine qemu-96-instance-00000050 terminated.
Jan 26 16:05:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:52.979 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:14:3e 10.100.0.7'], port_security=['fa:16:3e:e7:14:3e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '477642fb-5695-4387-944d-587e17cf3ed8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3f3d947-073d-4aff-ad60-49414846f6e5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd91082a727584818a01c3c60e7f7ef73', 'neutron:revision_number': '6', 'neutron:security_group_ids': '7fafebe7-bf1c-4fd6-83e0-4f8a0494b535', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fcf049c-f6b3-4162-b939-ed99b129811b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5083d232-fbe1-4440-91de-c2c31fae4b6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:52.980 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5083d232-fbe1-4440-91de-c2c31fae4b6b in datapath c3f3d947-073d-4aff-ad60-49414846f6e5 unbound from our chassis
Jan 26 16:05:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:52.981 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network c3f3d947-073d-4aff-ad60-49414846f6e5 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:05:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:52.982 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[afca7414-47a4-4cfa-b5a4-2bc2ca63a9a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.030 239969 INFO nova.virt.libvirt.driver [-] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Instance destroyed successfully.
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.030 239969 DEBUG nova.objects.instance [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lazy-loading 'resources' on Instance uuid 477642fb-5695-4387-944d-587e17cf3ed8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.044 239969 DEBUG nova.virt.libvirt.vif [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:04:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerRescueTestJSON-server-5920657',display_name='tempest-ServerRescueTestJSON-server-5920657',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverrescuetestjson-server-5920657',id=80,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:04:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d91082a727584818a01c3c60e7f7ef73',ramdisk_id='',reservation_id='r-415rexkh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerRescueTestJSON-1881999610',owner_user_name='tempest-ServerRescueTestJSON-1881999610-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:04:51Z,user_data=None,user_id='7a6914254de3499da1b987d99b6e0ff6',uuid=477642fb-5695-4387-944d-587e17cf3ed8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='rescued') vif={"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.044 239969 DEBUG nova.network.os_vif_util [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converting VIF {"id": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "address": "fa:16:3e:e7:14:3e", "network": {"id": "c3f3d947-073d-4aff-ad60-49414846f6e5", "bridge": "br-int", "label": "tempest-ServerRescueTestJSON-1538815050-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "d91082a727584818a01c3c60e7f7ef73", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5083d232-fb", "ovs_interfaceid": "5083d232-fbe1-4440-91de-c2c31fae4b6b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.045 239969 DEBUG nova.network.os_vif_util [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e7:14:3e,bridge_name='br-int',has_traffic_filtering=True,id=5083d232-fbe1-4440-91de-c2c31fae4b6b,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5083d232-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.045 239969 DEBUG os_vif [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:14:3e,bridge_name='br-int',has_traffic_filtering=True,id=5083d232-fbe1-4440-91de-c2c31fae4b6b,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5083d232-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.047 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.047 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5083d232-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.050 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.054 239969 INFO os_vif [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e7:14:3e,bridge_name='br-int',has_traffic_filtering=True,id=5083d232-fbe1-4440-91de-c2c31fae4b6b,network=Network(c3f3d947-073d-4aff-ad60-49414846f6e5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5083d232-fb')
Jan 26 16:05:53 compute-0 ceph-mon[75140]: pgmap v1595: 305 pgs: 305 active+clean; 422 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.9 MiB/s wr, 297 op/s
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.598 239969 INFO nova.virt.libvirt.driver [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Deleting instance files /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8_del
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.599 239969 INFO nova.virt.libvirt.driver [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Deletion of /var/lib/nova/instances/477642fb-5695-4387-944d-587e17cf3ed8_del complete
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.670 239969 DEBUG oslo_concurrency.lockutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.670 239969 DEBUG oslo_concurrency.lockutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.671 239969 DEBUG oslo_concurrency.lockutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.671 239969 DEBUG oslo_concurrency.lockutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.671 239969 DEBUG oslo_concurrency.lockutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.672 239969 INFO nova.compute.manager [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Terminating instance
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.673 239969 DEBUG nova.compute.manager [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.676 239969 INFO nova.compute.manager [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Took 0.88 seconds to destroy the instance on the hypervisor.
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.677 239969 DEBUG oslo.service.loopingcall [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.677 239969 DEBUG nova.compute.manager [-] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.677 239969 DEBUG nova.network.neutron [-] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:05:53 compute-0 kernel: tapcafc30f3-a3 (unregistering): left promiscuous mode
Jan 26 16:05:53 compute-0 NetworkManager[48954]: <info>  [1769443553.7145] device (tapcafc30f3-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:05:53 compute-0 ovn_controller[146046]: 2026-01-26T16:05:53Z|00821|binding|INFO|Releasing lport cafc30f3-a313-4723-9612-f0b29a9513bb from this chassis (sb_readonly=0)
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.720 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 ovn_controller[146046]: 2026-01-26T16:05:53Z|00822|binding|INFO|Setting lport cafc30f3-a313-4723-9612-f0b29a9513bb down in Southbound
Jan 26 16:05:53 compute-0 ovn_controller[146046]: 2026-01-26T16:05:53Z|00823|binding|INFO|Removing iface tapcafc30f3-a3 ovn-installed in OVS
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.722 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.730 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:43:90:b1 10.100.0.9'], port_security=['fa:16:3e:43:90:b1 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '76321bf3-65b2-429d-9eb3-27916690a101', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'adbf641d-e58d-4501-869f-a3fe43494930', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=cafc30f3-a313-4723-9612-f0b29a9513bb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.731 156105 INFO neutron.agent.ovn.metadata.agent [-] Port cafc30f3-a313-4723-9612-f0b29a9513bb in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.733 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.737 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.748 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[21733621-bc43-4bcc-b728-121c97569942]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:53 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Deactivated successfully.
Jan 26 16:05:53 compute-0 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000054.scope: Consumed 3.682s CPU time.
Jan 26 16:05:53 compute-0 systemd-machined[208061]: Machine qemu-104-instance-00000054 terminated.
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.777 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fc872507-f777-448b-b13c-1228fe941d55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.781 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d484930e-cbdc-489f-9848-cfbe093d2d5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.807 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae56af9-8efd-469c-9c16-1a4fb942f3d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.824 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6db7225e-63f9-4e31-966e-0e4086970425]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 229], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495544, 'reachable_time': 41449, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313706, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.841 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3d096b3e-34bd-4629-8252-c5d512eda875]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap84c1ad93-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495559, 'tstamp': 495559}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313707, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap84c1ad93-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 495562, 'tstamp': 495562}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313707, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.842 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.844 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.848 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.849 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.850 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.850 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:53.851 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.917 239969 INFO nova.virt.libvirt.driver [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Instance destroyed successfully.
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.917 239969 DEBUG nova.objects.instance [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'resources' on Instance uuid 76321bf3-65b2-429d-9eb3-27916690a101 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.935 239969 DEBUG nova.virt.libvirt.vif [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:05:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-1105506242',display_name='tempest-ServerActionsTestJSON-server-1040479726',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-1105506242',id=84,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:05:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={rebuild='server'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-drlg4l02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:05:51Z,user_data=None,user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=76321bf3-65b2-429d-9eb3-27916690a101,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.936 239969 DEBUG nova.network.os_vif_util [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "cafc30f3-a313-4723-9612-f0b29a9513bb", "address": "fa:16:3e:43:90:b1", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcafc30f3-a3", "ovs_interfaceid": "cafc30f3-a313-4723-9612-f0b29a9513bb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.936 239969 DEBUG nova.network.os_vif_util [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.937 239969 DEBUG os_vif [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.938 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.939 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcafc30f3-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.943 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:53 compute-0 nova_compute[239965]: 2026-01-26 16:05:53.945 239969 INFO os_vif [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:43:90:b1,bridge_name='br-int',has_traffic_filtering=True,id=cafc30f3-a313-4723-9612-f0b29a9513bb,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcafc30f3-a3')
Jan 26 16:05:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 305 active+clean; 422 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 273 op/s
Jan 26 16:05:54 compute-0 nova_compute[239965]: 2026-01-26 16:05:54.228 239969 INFO nova.virt.libvirt.driver [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Deleting instance files /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101_del
Jan 26 16:05:54 compute-0 nova_compute[239965]: 2026-01-26 16:05:54.228 239969 INFO nova.virt.libvirt.driver [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Deletion of /var/lib/nova/instances/76321bf3-65b2-429d-9eb3-27916690a101_del complete
Jan 26 16:05:54 compute-0 nova_compute[239965]: 2026-01-26 16:05:54.273 239969 INFO nova.compute.manager [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 26 16:05:54 compute-0 nova_compute[239965]: 2026-01-26 16:05:54.273 239969 DEBUG oslo.service.loopingcall [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:05:54 compute-0 nova_compute[239965]: 2026-01-26 16:05:54.273 239969 DEBUG nova.compute.manager [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:05:54 compute-0 nova_compute[239965]: 2026-01-26 16:05:54.273 239969 DEBUG nova.network.neutron [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:05:54 compute-0 nova_compute[239965]: 2026-01-26 16:05:54.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:05:55 compute-0 ceph-mon[75140]: pgmap v1596: 305 pgs: 305 active+clean; 422 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.9 MiB/s wr, 273 op/s
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.901 239969 DEBUG nova.compute.manager [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-unplugged-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.902 239969 DEBUG oslo_concurrency.lockutils [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.902 239969 DEBUG oslo_concurrency.lockutils [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.903 239969 DEBUG oslo_concurrency.lockutils [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.903 239969 DEBUG nova.compute.manager [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] No waiting events found dispatching network-vif-unplugged-5083d232-fbe1-4440-91de-c2c31fae4b6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.903 239969 DEBUG nova.compute.manager [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-unplugged-5083d232-fbe1-4440-91de-c2c31fae4b6b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.904 239969 DEBUG nova.compute.manager [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.904 239969 DEBUG oslo_concurrency.lockutils [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "477642fb-5695-4387-944d-587e17cf3ed8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.904 239969 DEBUG oslo_concurrency.lockutils [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.905 239969 DEBUG oslo_concurrency.lockutils [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.905 239969 DEBUG nova.compute.manager [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] No waiting events found dispatching network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.906 239969 WARNING nova.compute.manager [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received unexpected event network-vif-plugged-5083d232-fbe1-4440-91de-c2c31fae4b6b for instance with vm_state rescued and task_state deleting.
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.906 239969 DEBUG nova.compute.manager [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-unplugged-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.906 239969 DEBUG oslo_concurrency.lockutils [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.907 239969 DEBUG oslo_concurrency.lockutils [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.907 239969 DEBUG oslo_concurrency.lockutils [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.907 239969 DEBUG nova.compute.manager [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] No waiting events found dispatching network-vif-unplugged-cafc30f3-a313-4723-9612-f0b29a9513bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.909 239969 DEBUG nova.compute.manager [req-a0b45982-86c6-4670-bb9c-5eba00b9feb8 req-9a94022b-b580-4982-b2c7-36f96721429f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-unplugged-cafc30f3-a313-4723-9612-f0b29a9513bb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:05:55 compute-0 nova_compute[239965]: 2026-01-26 16:05:55.972 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1597: 305 pgs: 305 active+clean; 248 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 420 op/s
Jan 26 16:05:56 compute-0 nova_compute[239965]: 2026-01-26 16:05:56.946 239969 DEBUG nova.network.neutron [-] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:56 compute-0 nova_compute[239965]: 2026-01-26 16:05:56.965 239969 INFO nova.compute.manager [-] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Took 3.29 seconds to deallocate network for instance.
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.144 239969 DEBUG oslo_concurrency.lockutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.145 239969 DEBUG oslo_concurrency.lockutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.161 239969 DEBUG nova.network.neutron [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:05:57 compute-0 ceph-mon[75140]: pgmap v1597: 305 pgs: 305 active+clean; 248 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 420 op/s
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.184 239969 INFO nova.compute.manager [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Took 2.91 seconds to deallocate network for instance.
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.225 239969 DEBUG oslo_concurrency.lockutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.248 239969 DEBUG oslo_concurrency.processutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:05:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:05:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3120133782' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.785 239969 DEBUG oslo_concurrency.processutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.792 239969 DEBUG nova.compute.provider_tree [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.810 239969 DEBUG nova.scheduler.client.report [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.831 239969 DEBUG oslo_concurrency.lockutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.834 239969 DEBUG oslo_concurrency.lockutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.609s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.856 239969 INFO nova.scheduler.client.report [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Deleted allocations for instance 477642fb-5695-4387-944d-587e17cf3ed8
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.911 239969 DEBUG oslo_concurrency.processutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:05:57 compute-0 nova_compute[239965]: 2026-01-26 16:05:57.946 239969 DEBUG oslo_concurrency.lockutils [None req-ffb4ad74-d471-4a37-884c-d04c2b30457e 7a6914254de3499da1b987d99b6e0ff6 d91082a727584818a01c3c60e7f7ef73 - - default default] Lock "477642fb-5695-4387-944d-587e17cf3ed8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 305 active+clean; 248 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 330 op/s
Jan 26 16:05:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3120133782' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.219 239969 DEBUG nova.compute.manager [req-f9ced66b-93be-44ab-b0d5-0d64010a0480 req-c2764775-11bf-49a9-a75b-a17fc80acb04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.219 239969 DEBUG oslo_concurrency.lockutils [req-f9ced66b-93be-44ab-b0d5-0d64010a0480 req-c2764775-11bf-49a9-a75b-a17fc80acb04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "76321bf3-65b2-429d-9eb3-27916690a101-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.219 239969 DEBUG oslo_concurrency.lockutils [req-f9ced66b-93be-44ab-b0d5-0d64010a0480 req-c2764775-11bf-49a9-a75b-a17fc80acb04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.219 239969 DEBUG oslo_concurrency.lockutils [req-f9ced66b-93be-44ab-b0d5-0d64010a0480 req-c2764775-11bf-49a9-a75b-a17fc80acb04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.220 239969 DEBUG nova.compute.manager [req-f9ced66b-93be-44ab-b0d5-0d64010a0480 req-c2764775-11bf-49a9-a75b-a17fc80acb04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] No waiting events found dispatching network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.220 239969 WARNING nova.compute.manager [req-f9ced66b-93be-44ab-b0d5-0d64010a0480 req-c2764775-11bf-49a9-a75b-a17fc80acb04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received unexpected event network-vif-plugged-cafc30f3-a313-4723-9612-f0b29a9513bb for instance with vm_state deleted and task_state None.
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.220 239969 DEBUG nova.compute.manager [req-f9ced66b-93be-44ab-b0d5-0d64010a0480 req-c2764775-11bf-49a9-a75b-a17fc80acb04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Received event network-vif-deleted-5083d232-fbe1-4440-91de-c2c31fae4b6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.220 239969 DEBUG nova.compute.manager [req-f9ced66b-93be-44ab-b0d5-0d64010a0480 req-c2764775-11bf-49a9-a75b-a17fc80acb04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Received event network-vif-deleted-cafc30f3-a313-4723-9612-f0b29a9513bb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:05:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:05:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208563394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.490 239969 DEBUG oslo_concurrency.processutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.496 239969 DEBUG nova.compute.provider_tree [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.514 239969 DEBUG nova.scheduler.client.report [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.531 239969 DEBUG oslo_concurrency.lockutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.697s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.569 239969 INFO nova.scheduler.client.report [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Deleted allocations for instance 76321bf3-65b2-429d-9eb3-27916690a101
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.646 239969 DEBUG oslo_concurrency.lockutils [None req-8de801f9-e9f9-424a-b3d1-d0c82e738a01 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "76321bf3-65b2-429d-9eb3-27916690a101" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:05:58 compute-0 nova_compute[239965]: 2026-01-26 16:05:58.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:05:59 compute-0 ceph-mon[75140]: pgmap v1598: 305 pgs: 305 active+clean; 248 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 330 op/s
Jan 26 16:05:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3208563394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:05:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:59.227 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:05:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:59.228 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:05:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:05:59.229 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1599: 305 pgs: 305 active+clean; 248 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 330 op/s
Jan 26 16:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:06:00 compute-0 nova_compute[239965]: 2026-01-26 16:06:00.974 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:01 compute-0 ceph-mon[75140]: pgmap v1599: 305 pgs: 305 active+clean; 248 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.0 MiB/s wr, 330 op/s
Jan 26 16:06:02 compute-0 nova_compute[239965]: 2026-01-26 16:06:02.017 239969 DEBUG oslo_concurrency.lockutils [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:02 compute-0 nova_compute[239965]: 2026-01-26 16:06:02.018 239969 DEBUG oslo_concurrency.lockutils [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:02 compute-0 nova_compute[239965]: 2026-01-26 16:06:02.019 239969 DEBUG nova.compute.manager [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:02 compute-0 nova_compute[239965]: 2026-01-26 16:06:02.026 239969 DEBUG nova.compute.manager [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 26 16:06:02 compute-0 nova_compute[239965]: 2026-01-26 16:06:02.027 239969 DEBUG nova.objects.instance [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'flavor' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:02 compute-0 nova_compute[239965]: 2026-01-26 16:06:02.056 239969 DEBUG nova.virt.libvirt.driver [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:06:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 305 active+clean; 248 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 165 op/s
Jan 26 16:06:02 compute-0 nova_compute[239965]: 2026-01-26 16:06:02.500 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443547.4991744, 08d4e028-235a-4222-9acb-e07b6652d9ee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:02 compute-0 nova_compute[239965]: 2026-01-26 16:06:02.500 239969 INFO nova.compute.manager [-] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] VM Stopped (Lifecycle Event)
Jan 26 16:06:02 compute-0 nova_compute[239965]: 2026-01-26 16:06:02.519 239969 DEBUG nova.compute.manager [None req-aa8bb083-39cd-42f7-94f2-e3feba7dcdad - - - - - -] [instance: 08d4e028-235a-4222-9acb-e07b6652d9ee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:02 compute-0 ovn_controller[146046]: 2026-01-26T16:06:02Z|00824|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:06:02 compute-0 ovn_controller[146046]: 2026-01-26T16:06:02Z|00825|binding|INFO|Releasing lport 8d8bd19b-594d-4000-99fc-eb8020a59c23 from this chassis (sb_readonly=0)
Jan 26 16:06:03 compute-0 nova_compute[239965]: 2026-01-26 16:06:03.014 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:03 compute-0 ceph-mon[75140]: pgmap v1600: 305 pgs: 305 active+clean; 248 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 165 op/s
Jan 26 16:06:03 compute-0 nova_compute[239965]: 2026-01-26 16:06:03.946 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1601: 305 pgs: 305 active+clean; 248 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 147 op/s
Jan 26 16:06:04 compute-0 ceph-mon[75140]: pgmap v1601: 305 pgs: 305 active+clean; 248 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 18 KiB/s wr, 147 op/s
Jan 26 16:06:04 compute-0 kernel: tap2a2de29b-29 (unregistering): left promiscuous mode
Jan 26 16:06:04 compute-0 NetworkManager[48954]: <info>  [1769443564.3121] device (tap2a2de29b-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.317 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:04 compute-0 ovn_controller[146046]: 2026-01-26T16:06:04Z|00826|binding|INFO|Releasing lport 2a2de29b-29c0-439c-8236-2f566c4c89aa from this chassis (sb_readonly=0)
Jan 26 16:06:04 compute-0 ovn_controller[146046]: 2026-01-26T16:06:04Z|00827|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa down in Southbound
Jan 26 16:06:04 compute-0 ovn_controller[146046]: 2026-01-26T16:06:04Z|00828|binding|INFO|Removing iface tap2a2de29b-29 ovn-installed in OVS
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.318 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.328 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '10', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.330 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.331 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.332 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc82fa6-92b4-411a-8e62-98bb93603960]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.333 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace which is not needed anymore
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.334 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:04 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Jan 26 16:06:04 compute-0 systemd[1]: machine-qemu\x2d93\x2dinstance\x2d0000004c.scope: Consumed 17.157s CPU time.
Jan 26 16:06:04 compute-0 systemd-machined[208061]: Machine qemu-93-instance-0000004c terminated.
Jan 26 16:06:04 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307884]: [NOTICE]   (307888) : haproxy version is 2.8.14-c23fe91
Jan 26 16:06:04 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307884]: [NOTICE]   (307888) : path to executable is /usr/sbin/haproxy
Jan 26 16:06:04 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307884]: [WARNING]  (307888) : Exiting Master process...
Jan 26 16:06:04 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307884]: [ALERT]    (307888) : Current worker (307890) exited with code 143 (Terminated)
Jan 26 16:06:04 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[307884]: [WARNING]  (307888) : All workers exited. Exiting... (0)
Jan 26 16:06:04 compute-0 systemd[1]: libpod-ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16.scope: Deactivated successfully.
Jan 26 16:06:04 compute-0 podman[313808]: 2026-01-26 16:06:04.460728443 +0000 UTC m=+0.045102034 container died ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16-userdata-shm.mount: Deactivated successfully.
Jan 26 16:06:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-420d68e67bdcbb5059b11ef9cc6a5bda0c5358702ff5c1d0625d7e0103e04fe2-merged.mount: Deactivated successfully.
Jan 26 16:06:04 compute-0 podman[313808]: 2026-01-26 16:06:04.502381742 +0000 UTC m=+0.086755333 container cleanup ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:06:04 compute-0 systemd[1]: libpod-conmon-ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16.scope: Deactivated successfully.
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.540 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.548 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.551 239969 DEBUG nova.compute.manager [req-0d3fd522-769f-40c6-8f21-76a074c4d246 req-cc471f0f-d78e-4be2-8c1d-12fc24033167 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.551 239969 DEBUG oslo_concurrency.lockutils [req-0d3fd522-769f-40c6-8f21-76a074c4d246 req-cc471f0f-d78e-4be2-8c1d-12fc24033167 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.551 239969 DEBUG oslo_concurrency.lockutils [req-0d3fd522-769f-40c6-8f21-76a074c4d246 req-cc471f0f-d78e-4be2-8c1d-12fc24033167 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.552 239969 DEBUG oslo_concurrency.lockutils [req-0d3fd522-769f-40c6-8f21-76a074c4d246 req-cc471f0f-d78e-4be2-8c1d-12fc24033167 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.552 239969 DEBUG nova.compute.manager [req-0d3fd522-769f-40c6-8f21-76a074c4d246 req-cc471f0f-d78e-4be2-8c1d-12fc24033167 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.552 239969 WARNING nova.compute.manager [req-0d3fd522-769f-40c6-8f21-76a074c4d246 req-cc471f0f-d78e-4be2-8c1d-12fc24033167 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state powering-off.
Jan 26 16:06:04 compute-0 podman[313839]: 2026-01-26 16:06:04.580735389 +0000 UTC m=+0.060314776 container remove ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.589 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7be75b35-281e-403d-a5e1-ddfb25439982]: (4, ('Mon Jan 26 04:06:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16)\nff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16\nMon Jan 26 04:06:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (ff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16)\nff25274bbc427090624282cff22aab1c98c551f638fb93c3858b4c5ce3456b16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.591 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[830a63dd-423e-4346-9d84-a94275dccc3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.592 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.594 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:04 compute-0 kernel: tap84c1ad93-10: left promiscuous mode
Jan 26 16:06:04 compute-0 nova_compute[239965]: 2026-01-26 16:06:04.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.619 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3d5a6e-0a0a-4d05-9e52-2a929bdb8206]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.631 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ce31ef53-ad0d-4e2c-8355-4480b514046c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.632 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b4b4eb-59e0-4e1a-8eb0-b4766eb19350]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.648 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7db2082f-1ba6-4c03-b98b-9ea14025f38a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 495537, 'reachable_time': 17109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313866, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d84c1ad93\x2d16e1\x2d4751\x2dac9b\x2dceeaa2d50c1d.mount: Deactivated successfully.
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.652 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:06:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:04.652 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[b84433f5-2e00-4589-acee-694371a1d5d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:05 compute-0 nova_compute[239965]: 2026-01-26 16:06:05.079 239969 INFO nova.virt.libvirt.driver [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance shutdown successfully after 3 seconds.
Jan 26 16:06:05 compute-0 nova_compute[239965]: 2026-01-26 16:06:05.086 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance destroyed successfully.
Jan 26 16:06:05 compute-0 nova_compute[239965]: 2026-01-26 16:06:05.087 239969 DEBUG nova.objects.instance [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:05 compute-0 nova_compute[239965]: 2026-01-26 16:06:05.106 239969 DEBUG nova.compute.manager [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:05 compute-0 nova_compute[239965]: 2026-01-26 16:06:05.148 239969 DEBUG oslo_concurrency.lockutils [None req-53d6c2d9-a616-43b3-9d6c-c737ea9c9de7 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:05 compute-0 nova_compute[239965]: 2026-01-26 16:06:05.976 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 22 KiB/s wr, 149 op/s
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.431 239969 DEBUG nova.objects.instance [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'flavor' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.455 239969 DEBUG oslo_concurrency.lockutils [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.456 239969 DEBUG oslo_concurrency.lockutils [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.459 239969 DEBUG nova.network.neutron [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.460 239969 DEBUG nova.objects.instance [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'info_cache' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.630 239969 DEBUG nova.compute.manager [req-e4f53776-b40e-4607-8cbb-e8866feac770 req-951b3f3c-67e9-4d74-bc13-daf118f4f1db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.631 239969 DEBUG oslo_concurrency.lockutils [req-e4f53776-b40e-4607-8cbb-e8866feac770 req-951b3f3c-67e9-4d74-bc13-daf118f4f1db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.631 239969 DEBUG oslo_concurrency.lockutils [req-e4f53776-b40e-4607-8cbb-e8866feac770 req-951b3f3c-67e9-4d74-bc13-daf118f4f1db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.631 239969 DEBUG oslo_concurrency.lockutils [req-e4f53776-b40e-4607-8cbb-e8866feac770 req-951b3f3c-67e9-4d74-bc13-daf118f4f1db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.631 239969 DEBUG nova.compute.manager [req-e4f53776-b40e-4607-8cbb-e8866feac770 req-951b3f3c-67e9-4d74-bc13-daf118f4f1db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:06 compute-0 nova_compute[239965]: 2026-01-26 16:06:06.632 239969 WARNING nova.compute.manager [req-e4f53776-b40e-4607-8cbb-e8866feac770 req-951b3f3c-67e9-4d74-bc13-daf118f4f1db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state stopped and task_state powering-on.
Jan 26 16:06:07 compute-0 ceph-mon[75140]: pgmap v1602: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 22 KiB/s wr, 149 op/s
Jan 26 16:06:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.027 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443553.0266917, 477642fb-5695-4387-944d-587e17cf3ed8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.028 239969 INFO nova.compute.manager [-] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] VM Stopped (Lifecycle Event)
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.052 239969 DEBUG nova.compute.manager [None req-3cb8b34d-a11c-42c2-b8fd-0d7954dc1a01 - - - - - -] [instance: 477642fb-5695-4387-944d-587e17cf3ed8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1603: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.105 239969 DEBUG nova.network.neutron [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.129 239969 DEBUG oslo_concurrency.lockutils [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.152 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance destroyed successfully.
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.153 239969 DEBUG nova.objects.instance [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.168 239969 DEBUG nova.objects.instance [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'resources' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.188 239969 DEBUG nova.virt.libvirt.vif [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:06:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.189 239969 DEBUG nova.network.os_vif_util [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.190 239969 DEBUG nova.network.os_vif_util [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.190 239969 DEBUG os_vif [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.193 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.193 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a2de29b-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.197 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.200 239969 INFO os_vif [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.209 239969 DEBUG nova.virt.libvirt.driver [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Start _get_guest_xml network_info=[{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.214 239969 WARNING nova.virt.libvirt.driver [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.224 239969 DEBUG nova.virt.libvirt.host [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.225 239969 DEBUG nova.virt.libvirt.host [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.228 239969 DEBUG nova.virt.libvirt.host [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.229 239969 DEBUG nova.virt.libvirt.host [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.229 239969 DEBUG nova.virt.libvirt.driver [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.229 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.230 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.230 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.230 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.231 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.231 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.231 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.231 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.232 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.232 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.232 239969 DEBUG nova.virt.hardware [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.232 239969 DEBUG nova.objects.instance [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.249 239969 DEBUG oslo_concurrency.processutils [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:08 compute-0 podman[313868]: 2026-01-26 16:06:08.369771089 +0000 UTC m=+0.052134586 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 16:06:08 compute-0 podman[313869]: 2026-01-26 16:06:08.409915061 +0000 UTC m=+0.089003259 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:06:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:06:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/960566150' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.889 239969 DEBUG oslo_concurrency.processutils [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.640s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.917 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443553.914243, 76321bf3-65b2-429d-9eb3-27916690a101 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.917 239969 INFO nova.compute.manager [-] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] VM Stopped (Lifecycle Event)
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.923 239969 DEBUG oslo_concurrency.processutils [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:08 compute-0 nova_compute[239965]: 2026-01-26 16:06:08.957 239969 DEBUG nova.compute.manager [None req-ef57da89-9e07-4d20-9dc5-a78a33014b44 - - - - - -] [instance: 76321bf3-65b2-429d-9eb3-27916690a101] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:09 compute-0 ceph-mon[75140]: pgmap v1603: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Jan 26 16:06:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/960566150' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:06:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/956153006' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.510 239969 DEBUG oslo_concurrency.processutils [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.512 239969 DEBUG nova.virt.libvirt.vif [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:06:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.512 239969 DEBUG nova.network.os_vif_util [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.513 239969 DEBUG nova.network.os_vif_util [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.515 239969 DEBUG nova.objects.instance [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.603 239969 DEBUG nova.virt.libvirt.driver [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <uuid>492acb7a-42e1-4918-a973-352f9d8251d0</uuid>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <name>instance-0000004c</name>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestJSON-server-1205900142</nova:name>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:06:08</nova:creationTime>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <nova:user uuid="59f8ee09903a4e0a812c3d9e013996bd">tempest-ServerActionsTestJSON-274649005-project-member</nova:user>
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <nova:project uuid="94771806cd0d4b5db117956e09fea9e6">tempest-ServerActionsTestJSON-274649005</nova:project>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <nova:port uuid="2a2de29b-29c0-439c-8236-2f566c4c89aa">
Jan 26 16:06:09 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <system>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <entry name="serial">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <entry name="uuid">492acb7a-42e1-4918-a973-352f9d8251d0</entry>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     </system>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <os>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   </os>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <features>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   </features>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk">
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       </source>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/492acb7a-42e1-4918-a973-352f9d8251d0_disk.config">
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       </source>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:06:09 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:f8:ca:89"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <target dev="tap2a2de29b-29"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0/console.log" append="off"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <video>
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     </video>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:06:09 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:06:09 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:06:09 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:06:09 compute-0 nova_compute[239965]: </domain>
Jan 26 16:06:09 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.606 239969 DEBUG nova.virt.libvirt.driver [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.607 239969 DEBUG nova.virt.libvirt.driver [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] skipping disk for instance-0000004c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.608 239969 DEBUG nova.virt.libvirt.vif [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:06:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.608 239969 DEBUG nova.network.os_vif_util [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.609 239969 DEBUG nova.network.os_vif_util [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.609 239969 DEBUG os_vif [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.610 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.611 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.611 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.618 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.619 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a2de29b-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.620 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a2de29b-29, col_values=(('external_ids', {'iface-id': '2a2de29b-29c0-439c-8236-2f566c4c89aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:ca:89', 'vm-uuid': '492acb7a-42e1-4918-a973-352f9d8251d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:09 compute-0 NetworkManager[48954]: <info>  [1769443569.6224] manager: (tap2a2de29b-29): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/357)
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.623 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.627 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.628 239969 INFO os_vif [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.648 239969 DEBUG nova.compute.manager [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.694 239969 INFO nova.compute.manager [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] instance snapshotting
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.695 239969 DEBUG nova.objects.instance [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'flavor' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:09 compute-0 kernel: tap2a2de29b-29: entered promiscuous mode
Jan 26 16:06:09 compute-0 NetworkManager[48954]: <info>  [1769443569.7210] manager: (tap2a2de29b-29): new Tun device (/org/freedesktop/NetworkManager/Devices/358)
Jan 26 16:06:09 compute-0 ovn_controller[146046]: 2026-01-26T16:06:09Z|00829|binding|INFO|Claiming lport 2a2de29b-29c0-439c-8236-2f566c4c89aa for this chassis.
Jan 26 16:06:09 compute-0 ovn_controller[146046]: 2026-01-26T16:06:09Z|00830|binding|INFO|2a2de29b-29c0-439c-8236-2f566c4c89aa: Claiming fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.721 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.731 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '11', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.732 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d bound to our chassis
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.733 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:09 compute-0 ovn_controller[146046]: 2026-01-26T16:06:09Z|00831|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa ovn-installed in OVS
Jan 26 16:06:09 compute-0 ovn_controller[146046]: 2026-01-26T16:06:09Z|00832|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa up in Southbound
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.743 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.749 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5e55317d-602d-4e55-9118-c0f721cf8b62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.750 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84c1ad93-11 in ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.752 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84c1ad93-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.752 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2e355a-b25d-4070-b3e7-52d5ab0cdc8e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.753 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee025d0-b951-4104-875d-5dfef62b5e39]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 systemd-machined[208061]: New machine qemu-105-instance-0000004c.
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.763 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d54a03cf-478d-403f-8022-c256c8aca627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 systemd[1]: Started Virtual Machine qemu-105-instance-0000004c.
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.788 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aaff8307-530f-4d6b-8bc9-3889231934f7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 systemd-udevd[313993]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:06:09 compute-0 NetworkManager[48954]: <info>  [1769443569.8142] device (tap2a2de29b-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:06:09 compute-0 NetworkManager[48954]: <info>  [1769443569.8155] device (tap2a2de29b-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.822 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a2ceba42-bcd7-4a42-9d5c-6b1f81235be9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.830 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[19d14c85-833b-4903-972b-3f8bf5966b84]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 NetworkManager[48954]: <info>  [1769443569.8319] manager: (tap84c1ad93-10): new Veth device (/org/freedesktop/NetworkManager/Devices/359)
Jan 26 16:06:09 compute-0 systemd-udevd[314000]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.866 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1765680c-ad20-4dbf-be6c-8534c60a8452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.869 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ec82f890-bdf5-4fff-8e3a-6074a521f4ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 NetworkManager[48954]: <info>  [1769443569.8985] device (tap84c1ad93-10): carrier: link connected
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.904 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8e33ae-107b-4121-b1c4-6d06740e2152]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.924 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0c606e97-bb4f-4c71-a4ae-e96e849b5f04]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506070, 'reachable_time': 23040, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314021, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.925 239969 DEBUG nova.compute.manager [req-b4b7cda7-9dd8-4fe9-ba39-27702f588619 req-4032c808-c5da-4e5f-b086-0be4e86fff6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.925 239969 DEBUG oslo_concurrency.lockutils [req-b4b7cda7-9dd8-4fe9-ba39-27702f588619 req-4032c808-c5da-4e5f-b086-0be4e86fff6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.926 239969 DEBUG oslo_concurrency.lockutils [req-b4b7cda7-9dd8-4fe9-ba39-27702f588619 req-4032c808-c5da-4e5f-b086-0be4e86fff6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.926 239969 DEBUG oslo_concurrency.lockutils [req-b4b7cda7-9dd8-4fe9-ba39-27702f588619 req-4032c808-c5da-4e5f-b086-0be4e86fff6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.926 239969 DEBUG nova.compute.manager [req-b4b7cda7-9dd8-4fe9-ba39-27702f588619 req-4032c808-c5da-4e5f-b086-0be4e86fff6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.926 239969 WARNING nova.compute.manager [req-b4b7cda7-9dd8-4fe9-ba39-27702f588619 req-4032c808-c5da-4e5f-b086-0be4e86fff6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state stopped and task_state powering-on.
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.951 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11b784fe-fc5b-4793-8f05-08108ce02014]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:ad50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 506070, 'tstamp': 506070}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314022, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:09.975 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb1f0b6-34ec-4a0e-811b-8bb5205a22cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 254], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506070, 'reachable_time': 23040, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314023, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:09 compute-0 nova_compute[239965]: 2026-01-26 16:06:09.988 239969 INFO nova.virt.libvirt.driver [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Beginning live snapshot process
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.020 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee0fe43-225d-4398-b987-3605fc9e243f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1604: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 6.7 KiB/s wr, 4 op/s
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.106 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bd093a90-e0d3-4d58-bf4e-52b7ab5081da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.107 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.107 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.108 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:10 compute-0 NetworkManager[48954]: <info>  [1769443570.1116] manager: (tap84c1ad93-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/360)
Jan 26 16:06:10 compute-0 kernel: tap84c1ad93-10: entered promiscuous mode
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.113 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:10 compute-0 ovn_controller[146046]: 2026-01-26T16:06:10Z|00833|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.132 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.133 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b1e5cb-bc20-4655-8dd9-216846b4b058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.134 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:06:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:10.135 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'env', 'PROCESS_TAG=haproxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.135 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.144 239969 DEBUG nova.virt.libvirt.imagebackend [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 16:06:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/956153006' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.341 239969 DEBUG nova.storage.rbd_utils [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(3b112174a9ce421fa567d0401ed87c86) on rbd image(571cd10f-e22e-43b6-9523-e3539d047e4a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.408 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 492acb7a-42e1-4918-a973-352f9d8251d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.408 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443570.4076738, 492acb7a-42e1-4918-a973-352f9d8251d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.408 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Resumed (Lifecycle Event)
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.410 239969 DEBUG nova.compute.manager [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.414 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance rebooted successfully.
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.414 239969 DEBUG nova.compute.manager [None req-f99be5e8-bc0c-4205-a7df-babc92f9b26b 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.425 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.429 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.455 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (powering-on). Skip.
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.456 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443570.4104033, 492acb7a-42e1-4918-a973-352f9d8251d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.456 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Started (Lifecycle Event)
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.479 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.484 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:10 compute-0 podman[314148]: 2026-01-26 16:06:10.549966931 +0000 UTC m=+0.060037500 container create e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 16:06:10 compute-0 systemd[1]: Started libpod-conmon-e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a.scope.
Jan 26 16:06:10 compute-0 podman[314148]: 2026-01-26 16:06:10.517398455 +0000 UTC m=+0.027469044 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:06:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:06:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/649160ebea3198cf490063aa586c92e27c14f472c37ceab80bec1ee585b0d8ee/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:10 compute-0 podman[314148]: 2026-01-26 16:06:10.633841483 +0000 UTC m=+0.143912052 container init e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 16:06:10 compute-0 podman[314148]: 2026-01-26 16:06:10.640180778 +0000 UTC m=+0.150251347 container start e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:06:10 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314163]: [NOTICE]   (314167) : New worker (314169) forked
Jan 26 16:06:10 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314163]: [NOTICE]   (314167) : Loading success.
Jan 26 16:06:10 compute-0 nova_compute[239965]: 2026-01-26 16:06:10.979 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Jan 26 16:06:11 compute-0 ceph-mon[75140]: pgmap v1604: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 6.7 KiB/s wr, 4 op/s
Jan 26 16:06:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Jan 26 16:06:11 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Jan 26 16:06:11 compute-0 nova_compute[239965]: 2026-01-26 16:06:11.285 239969 DEBUG nova.storage.rbd_utils [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] cloning vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk@3b112174a9ce421fa567d0401ed87c86 to images/e3f1b4e6-3b75-40cd-9d76-1d94417e28f9 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:06:11 compute-0 nova_compute[239965]: 2026-01-26 16:06:11.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:11 compute-0 nova_compute[239965]: 2026-01-26 16:06:11.386 239969 DEBUG nova.storage.rbd_utils [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] flattening images/e3f1b4e6-3b75-40cd-9d76-1d94417e28f9 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:06:11 compute-0 nova_compute[239965]: 2026-01-26 16:06:11.798 239969 DEBUG nova.storage.rbd_utils [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] removing snapshot(3b112174a9ce421fa567d0401ed87c86) on rbd image(571cd10f-e22e-43b6-9523-e3539d047e4a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:06:12 compute-0 nova_compute[239965]: 2026-01-26 16:06:11.999 239969 DEBUG nova.compute.manager [req-4c14863b-1fcb-4c2c-be8c-3fd662132698 req-7e017a50-cc8a-4eed-b920-a3cf9be22aeb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:12 compute-0 nova_compute[239965]: 2026-01-26 16:06:12.000 239969 DEBUG oslo_concurrency.lockutils [req-4c14863b-1fcb-4c2c-be8c-3fd662132698 req-7e017a50-cc8a-4eed-b920-a3cf9be22aeb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:12 compute-0 nova_compute[239965]: 2026-01-26 16:06:12.000 239969 DEBUG oslo_concurrency.lockutils [req-4c14863b-1fcb-4c2c-be8c-3fd662132698 req-7e017a50-cc8a-4eed-b920-a3cf9be22aeb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:12 compute-0 nova_compute[239965]: 2026-01-26 16:06:12.000 239969 DEBUG oslo_concurrency.lockutils [req-4c14863b-1fcb-4c2c-be8c-3fd662132698 req-7e017a50-cc8a-4eed-b920-a3cf9be22aeb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:12 compute-0 nova_compute[239965]: 2026-01-26 16:06:12.000 239969 DEBUG nova.compute.manager [req-4c14863b-1fcb-4c2c-be8c-3fd662132698 req-7e017a50-cc8a-4eed-b920-a3cf9be22aeb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:12 compute-0 nova_compute[239965]: 2026-01-26 16:06:12.001 239969 WARNING nova.compute.manager [req-4c14863b-1fcb-4c2c-be8c-3fd662132698 req-7e017a50-cc8a-4eed-b920-a3cf9be22aeb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state None.
Jan 26 16:06:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1606: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 6.4 KiB/s wr, 10 op/s
Jan 26 16:06:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Jan 26 16:06:12 compute-0 ceph-mon[75140]: osdmap e244: 3 total, 3 up, 3 in
Jan 26 16:06:12 compute-0 ceph-mon[75140]: pgmap v1606: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 6.4 KiB/s wr, 10 op/s
Jan 26 16:06:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Jan 26 16:06:12 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Jan 26 16:06:12 compute-0 nova_compute[239965]: 2026-01-26 16:06:12.298 239969 DEBUG nova.storage.rbd_utils [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(snap) on rbd image(e3f1b4e6-3b75-40cd-9d76-1d94417e28f9) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:06:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Jan 26 16:06:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Jan 26 16:06:13 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Jan 26 16:06:13 compute-0 ceph-mon[75140]: osdmap e245: 3 total, 3 up, 3 in
Jan 26 16:06:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1609: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 KiB/s wr, 14 op/s
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.315 239969 DEBUG nova.objects.instance [None req-f56c8b0d-0b89-4a83-9b76-3ac59f01cbd5 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.342 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443574.342044, 492acb7a-42e1-4918-a973-352f9d8251d0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.342 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Paused (Lifecycle Event)
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.368 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.375 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.399 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.622 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:14 compute-0 ceph-mon[75140]: osdmap e246: 3 total, 3 up, 3 in
Jan 26 16:06:14 compute-0 ceph-mon[75140]: pgmap v1609: 305 pgs: 305 active+clean; 248 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 KiB/s wr, 14 op/s
Jan 26 16:06:14 compute-0 kernel: tap2a2de29b-29 (unregistering): left promiscuous mode
Jan 26 16:06:14 compute-0 NetworkManager[48954]: <info>  [1769443574.8987] device (tap2a2de29b-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.906 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:14 compute-0 ovn_controller[146046]: 2026-01-26T16:06:14Z|00834|binding|INFO|Releasing lport 2a2de29b-29c0-439c-8236-2f566c4c89aa from this chassis (sb_readonly=0)
Jan 26 16:06:14 compute-0 ovn_controller[146046]: 2026-01-26T16:06:14Z|00835|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa down in Southbound
Jan 26 16:06:14 compute-0 ovn_controller[146046]: 2026-01-26T16:06:14Z|00836|binding|INFO|Removing iface tap2a2de29b-29 ovn-installed in OVS
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.908 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:14.912 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '12', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:14.913 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:06:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:14.914 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:06:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:14.915 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[24fdcbd7-1b70-4fb7-acf5-036ef5c7a083]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:14.916 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace which is not needed anymore
Jan 26 16:06:14 compute-0 nova_compute[239965]: 2026-01-26 16:06:14.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:14 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Jan 26 16:06:14 compute-0 systemd[1]: machine-qemu\x2d105\x2dinstance\x2d0000004c.scope: Consumed 4.628s CPU time.
Jan 26 16:06:14 compute-0 systemd-machined[208061]: Machine qemu-105-instance-0000004c terminated.
Jan 26 16:06:15 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314163]: [NOTICE]   (314167) : haproxy version is 2.8.14-c23fe91
Jan 26 16:06:15 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314163]: [NOTICE]   (314167) : path to executable is /usr/sbin/haproxy
Jan 26 16:06:15 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314163]: [WARNING]  (314167) : Exiting Master process...
Jan 26 16:06:15 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314163]: [ALERT]    (314167) : Current worker (314169) exited with code 143 (Terminated)
Jan 26 16:06:15 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314163]: [WARNING]  (314167) : All workers exited. Exiting... (0)
Jan 26 16:06:15 compute-0 systemd[1]: libpod-e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a.scope: Deactivated successfully.
Jan 26 16:06:15 compute-0 podman[314295]: 2026-01-26 16:06:15.056386089 +0000 UTC m=+0.051467260 container died e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:15 compute-0 kernel: tap2a2de29b-29: entered promiscuous mode
Jan 26 16:06:15 compute-0 NetworkManager[48954]: <info>  [1769443575.0839] manager: (tap2a2de29b-29): new Tun device (/org/freedesktop/NetworkManager/Devices/361)
Jan 26 16:06:15 compute-0 systemd-udevd[314276]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00837|binding|INFO|Claiming lport 2a2de29b-29c0-439c-8236-2f566c4c89aa for this chassis.
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00838|binding|INFO|2a2de29b-29c0-439c-8236-2f566c4c89aa: Claiming fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.083 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:15 compute-0 kernel: tap2a2de29b-29 (unregistering): left promiscuous mode
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.092 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '12', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.101 239969 DEBUG nova.compute.manager [None req-f56c8b0d-0b89-4a83-9b76-3ac59f01cbd5 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00839|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa ovn-installed in OVS
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.118 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00840|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa up in Southbound
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00841|binding|INFO|Releasing lport 2a2de29b-29c0-439c-8236-2f566c4c89aa from this chassis (sb_readonly=1)
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.120 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00842|if_status|INFO|Dropped 2 log messages in last 203 seconds (most recently, 203 seconds ago) due to excessive rate
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00843|if_status|INFO|Not setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa down as sb is readonly
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00844|binding|INFO|Removing iface tap2a2de29b-29 ovn-installed in OVS
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.123 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00845|binding|INFO|Releasing lport 2a2de29b-29c0-439c-8236-2f566c4c89aa from this chassis (sb_readonly=0)
Jan 26 16:06:15 compute-0 ovn_controller[146046]: 2026-01-26T16:06:15Z|00846|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa down in Southbound
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.133 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '12', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.144 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.211 239969 INFO nova.virt.libvirt.driver [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Snapshot image upload complete
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.211 239969 INFO nova.compute.manager [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Took 5.49 seconds to snapshot the instance on the hypervisor.
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.507 239969 DEBUG nova.compute.manager [None req-8e2df28c-c1d8-4584-a8e2-1bbac0b6c89f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Found 1 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 26 16:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a-userdata-shm.mount: Deactivated successfully.
Jan 26 16:06:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-649160ebea3198cf490063aa586c92e27c14f472c37ceab80bec1ee585b0d8ee-merged.mount: Deactivated successfully.
Jan 26 16:06:15 compute-0 podman[314295]: 2026-01-26 16:06:15.827267016 +0000 UTC m=+0.822348167 container cleanup e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:06:15 compute-0 systemd[1]: libpod-conmon-e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a.scope: Deactivated successfully.
Jan 26 16:06:15 compute-0 podman[314328]: 2026-01-26 16:06:15.895362082 +0000 UTC m=+0.042804639 container remove e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.901 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[050bbb2d-d0ad-48d2-b162-fa20cabb34b6]: (4, ('Mon Jan 26 04:06:14 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a)\ne02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a\nMon Jan 26 04:06:15 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (e02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a)\ne02cff722345ce2103c4574eeacec5a245e09bee9fba58236fb31cfb8a98f03a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.902 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c342862a-695b-429f-95b1-b6845774e1de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.903 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.905 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:15 compute-0 kernel: tap84c1ad93-10: left promiscuous mode
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.927 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1c903287-f23b-4dc1-897b-b7a682210ac5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.944 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96ce2723-713d-4e98-8a02-bc3798561295]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.946 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[345566a1-b4d3-4fa2-8cf5-4e99901b6b14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.960 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7d378f0f-5362-46d2-b12d-e1b6ed09668b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 506062, 'reachable_time': 19008, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314344, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.963 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.963 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6ed207-4b2b-4d90-af28-545a494af1e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:15 compute-0 systemd[1]: run-netns-ovnmeta\x2d84c1ad93\x2d16e1\x2d4751\x2dac9b\x2dceeaa2d50c1d.mount: Deactivated successfully.
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.965 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.967 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.968 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bb105a3a-53d6-4078-8608-47c1fc5800e1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.969 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.970 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:06:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:15.971 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3b703362-6449-463d-a8f5-935bc1849d30]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:15 compute-0 nova_compute[239965]: 2026-01-26 16:06:15.980 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 305 active+clean; 328 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 297 op/s
Jan 26 16:06:16 compute-0 nova_compute[239965]: 2026-01-26 16:06:16.650 239969 DEBUG nova.compute.manager [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:16 compute-0 nova_compute[239965]: 2026-01-26 16:06:16.690 239969 INFO nova.compute.manager [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] instance snapshotting
Jan 26 16:06:16 compute-0 nova_compute[239965]: 2026-01-26 16:06:16.691 239969 DEBUG nova.objects.instance [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'flavor' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:16 compute-0 nova_compute[239965]: 2026-01-26 16:06:16.826 239969 INFO nova.compute.manager [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Resuming
Jan 26 16:06:16 compute-0 nova_compute[239965]: 2026-01-26 16:06:16.827 239969 DEBUG nova.objects.instance [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'flavor' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:16 compute-0 nova_compute[239965]: 2026-01-26 16:06:16.855 239969 DEBUG oslo_concurrency.lockutils [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:06:16 compute-0 nova_compute[239965]: 2026-01-26 16:06:16.856 239969 DEBUG oslo_concurrency.lockutils [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquired lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:06:16 compute-0 nova_compute[239965]: 2026-01-26 16:06:16.856 239969 DEBUG nova.network.neutron [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:06:16 compute-0 nova_compute[239965]: 2026-01-26 16:06:16.916 239969 INFO nova.virt.libvirt.driver [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Beginning live snapshot process
Jan 26 16:06:17 compute-0 nova_compute[239965]: 2026-01-26 16:06:17.090 239969 DEBUG nova.virt.libvirt.imagebackend [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 16:06:17 compute-0 ceph-mon[75140]: pgmap v1610: 305 pgs: 305 active+clean; 328 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 297 op/s
Jan 26 16:06:17 compute-0 nova_compute[239965]: 2026-01-26 16:06:17.236 239969 DEBUG nova.storage.rbd_utils [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(ac656605295c4d89ae2d5acd474917f5) on rbd image(571cd10f-e22e-43b6-9523-e3539d047e4a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:06:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Jan 26 16:06:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Jan 26 16:06:17 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Jan 26 16:06:17 compute-0 nova_compute[239965]: 2026-01-26 16:06:17.605 239969 DEBUG nova.storage.rbd_utils [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] cloning vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk@ac656605295c4d89ae2d5acd474917f5 to images/6b93c8ad-2a0b-4232-afca-273adb0f4234 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:06:17 compute-0 nova_compute[239965]: 2026-01-26 16:06:17.697 239969 DEBUG nova.storage.rbd_utils [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] flattening images/6b93c8ad-2a0b-4232-afca-273adb0f4234 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:06:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 328 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 287 op/s
Jan 26 16:06:18 compute-0 nova_compute[239965]: 2026-01-26 16:06:18.116 239969 DEBUG nova.storage.rbd_utils [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] removing snapshot(ac656605295c4d89ae2d5acd474917f5) on rbd image(571cd10f-e22e-43b6-9523-e3539d047e4a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:06:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Jan 26 16:06:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Jan 26 16:06:18 compute-0 ceph-mon[75140]: osdmap e247: 3 total, 3 up, 3 in
Jan 26 16:06:18 compute-0 ceph-mon[75140]: pgmap v1612: 305 pgs: 305 active+clean; 328 MiB data, 766 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 287 op/s
Jan 26 16:06:18 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Jan 26 16:06:18 compute-0 nova_compute[239965]: 2026-01-26 16:06:18.585 239969 DEBUG nova.storage.rbd_utils [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(snap) on rbd image(6b93c8ad-2a0b-4232-afca-273adb0f4234) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:06:19 compute-0 nova_compute[239965]: 2026-01-26 16:06:19.626 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Jan 26 16:06:19 compute-0 ceph-mon[75140]: osdmap e248: 3 total, 3 up, 3 in
Jan 26 16:06:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Jan 26 16:06:19 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Jan 26 16:06:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1615: 305 pgs: 305 active+clean; 388 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 13 MiB/s wr, 394 op/s
Jan 26 16:06:20 compute-0 nova_compute[239965]: 2026-01-26 16:06:20.127 239969 DEBUG nova.compute.manager [req-dd3ed3e9-480d-4523-b4b7-2e9496e4a8c6 req-b887ad60-d58e-4e20-b205-8c5644aae2db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:20 compute-0 nova_compute[239965]: 2026-01-26 16:06:20.128 239969 DEBUG oslo_concurrency.lockutils [req-dd3ed3e9-480d-4523-b4b7-2e9496e4a8c6 req-b887ad60-d58e-4e20-b205-8c5644aae2db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:20 compute-0 nova_compute[239965]: 2026-01-26 16:06:20.128 239969 DEBUG oslo_concurrency.lockutils [req-dd3ed3e9-480d-4523-b4b7-2e9496e4a8c6 req-b887ad60-d58e-4e20-b205-8c5644aae2db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:20 compute-0 nova_compute[239965]: 2026-01-26 16:06:20.128 239969 DEBUG oslo_concurrency.lockutils [req-dd3ed3e9-480d-4523-b4b7-2e9496e4a8c6 req-b887ad60-d58e-4e20-b205-8c5644aae2db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:20 compute-0 nova_compute[239965]: 2026-01-26 16:06:20.128 239969 DEBUG nova.compute.manager [req-dd3ed3e9-480d-4523-b4b7-2e9496e4a8c6 req-b887ad60-d58e-4e20-b205-8c5644aae2db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:20 compute-0 nova_compute[239965]: 2026-01-26 16:06:20.129 239969 WARNING nova.compute.manager [req-dd3ed3e9-480d-4523-b4b7-2e9496e4a8c6 req-b887ad60-d58e-4e20-b205-8c5644aae2db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state suspended and task_state resuming.
Jan 26 16:06:20 compute-0 ceph-mon[75140]: osdmap e249: 3 total, 3 up, 3 in
Jan 26 16:06:20 compute-0 ceph-mon[75140]: pgmap v1615: 305 pgs: 305 active+clean; 388 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 17 MiB/s rd, 13 MiB/s wr, 394 op/s
Jan 26 16:06:20 compute-0 nova_compute[239965]: 2026-01-26 16:06:20.825 239969 INFO nova.virt.libvirt.driver [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Snapshot image upload complete
Jan 26 16:06:20 compute-0 nova_compute[239965]: 2026-01-26 16:06:20.826 239969 INFO nova.compute.manager [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Took 4.11 seconds to snapshot the instance on the hypervisor.
Jan 26 16:06:20 compute-0 nova_compute[239965]: 2026-01-26 16:06:20.983 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.028 239969 DEBUG nova.network.neutron [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [{"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.047 239969 DEBUG oslo_concurrency.lockutils [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Releasing lock "refresh_cache-492acb7a-42e1-4918-a973-352f9d8251d0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.051 239969 DEBUG nova.virt.libvirt.vif [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:06:15Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.051 239969 DEBUG nova.network.os_vif_util [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.052 239969 DEBUG nova.network.os_vif_util [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.052 239969 DEBUG os_vif [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.053 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.053 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.055 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.055 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2a2de29b-29, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.055 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2a2de29b-29, col_values=(('external_ids', {'iface-id': '2a2de29b-29c0-439c-8236-2f566c4c89aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f8:ca:89', 'vm-uuid': '492acb7a-42e1-4918-a973-352f9d8251d0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.056 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.056 239969 INFO os_vif [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.076 239969 DEBUG nova.objects.instance [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'numa_topology' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.087 239969 DEBUG nova.compute.manager [None req-273ee841-29ed-4e0f-836e-dc07a25dcd63 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Found 2 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 26 16:06:21 compute-0 kernel: tap2a2de29b-29: entered promiscuous mode
Jan 26 16:06:21 compute-0 NetworkManager[48954]: <info>  [1769443581.1571] manager: (tap2a2de29b-29): new Tun device (/org/freedesktop/NetworkManager/Devices/362)
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.199 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:21 compute-0 ovn_controller[146046]: 2026-01-26T16:06:21Z|00847|binding|INFO|Claiming lport 2a2de29b-29c0-439c-8236-2f566c4c89aa for this chassis.
Jan 26 16:06:21 compute-0 ovn_controller[146046]: 2026-01-26T16:06:21Z|00848|binding|INFO|2a2de29b-29c0-439c-8236-2f566c4c89aa: Claiming fa:16:3e:f8:ca:89 10.100.0.14
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.207 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '15', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.208 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d bound to our chassis
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.210 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.223 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fb5507-098f-48c6-9c1f-671d66ef0109]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.224 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap84c1ad93-11 in ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:06:21 compute-0 ovn_controller[146046]: 2026-01-26T16:06:21Z|00849|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa ovn-installed in OVS
Jan 26 16:06:21 compute-0 ovn_controller[146046]: 2026-01-26T16:06:21Z|00850|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa up in Southbound
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.225 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap84c1ad93-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.225 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[799c5fae-6d9c-4e65-a80b-5cce38cbdada]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.227 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:21 compute-0 systemd-udevd[314501]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.226 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[00a7aee5-67bd-42ac-8e58-d05ba084cabc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.239 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[b91715d6-5ed4-4416-9aed-415db21502d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 NetworkManager[48954]: <info>  [1769443581.2419] device (tap2a2de29b-29): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:06:21 compute-0 NetworkManager[48954]: <info>  [1769443581.2429] device (tap2a2de29b-29): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:06:21 compute-0 systemd-machined[208061]: New machine qemu-106-instance-0000004c.
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.254 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9ce15f40-6351-4f29-a8aa-cbad20d93949]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 systemd[1]: Started Virtual Machine qemu-106-instance-0000004c.
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.287 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[02e5bced-4d6b-40b8-91d0-a95951f717cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.291 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd1c584-da0e-4ea7-8fcf-66a8424b7cf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 NetworkManager[48954]: <info>  [1769443581.2929] manager: (tap84c1ad93-10): new Veth device (/org/freedesktop/NetworkManager/Devices/363)
Jan 26 16:06:21 compute-0 systemd-udevd[314506]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.329 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[caf640ab-dbc3-4d10-a6ab-88b142095fef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.333 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[47edcf36-e93e-4703-a25e-96584f0e7fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 NetworkManager[48954]: <info>  [1769443581.3615] device (tap84c1ad93-10): carrier: link connected
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.370 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[aee12841-217e-4621-8bda-b54bdce80378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.394 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7206c061-6ed7-43d3-8ea4-dd72b2b3ef24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507216, 'reachable_time': 42985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314535, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.415 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[59916721-1b71-45b1-b33a-0ce5508833dc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:ad50'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507216, 'tstamp': 507216}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314536, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.441 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[81a9fe79-53d3-400c-a9ac-ef6cb08c61fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap84c1ad93-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:50'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 257], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507216, 'reachable_time': 42985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314537, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.477 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b9dbeef0-208e-407b-bae4-36090e975428]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.551 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[465a9603-b664-4272-9167-48f939439e9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.553 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.553 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.554 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84c1ad93-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:21 compute-0 kernel: tap84c1ad93-10: entered promiscuous mode
Jan 26 16:06:21 compute-0 NetworkManager[48954]: <info>  [1769443581.5574] manager: (tap84c1ad93-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/364)
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.561 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap84c1ad93-10, col_values=(('external_ids', {'iface-id': '61ede7e0-2ac4-4ab0-a8d4-11a64add8da0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:21 compute-0 ovn_controller[146046]: 2026-01-26T16:06:21Z|00851|binding|INFO|Releasing lport 61ede7e0-2ac4-4ab0-a8d4-11a64add8da0 from this chassis (sb_readonly=0)
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.567 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.568 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fa35cf43-3614-4dac-9723-8a891484e186]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.569 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.pid.haproxy
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:06:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:21.570 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'env', 'PROCESS_TAG=haproxy-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.569 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:21 compute-0 nova_compute[239965]: 2026-01-26 16:06:21.585 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:21 compute-0 podman[314570]: 2026-01-26 16:06:21.943680218 +0000 UTC m=+0.047578315 container create 4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 16:06:21 compute-0 systemd[1]: Started libpod-conmon-4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6.scope.
Jan 26 16:06:22 compute-0 podman[314570]: 2026-01-26 16:06:21.919183608 +0000 UTC m=+0.023081735 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:06:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:06:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/53954f81fc0860a0c36e820dd6b1e417f78d89f78b97faa86a17933ff35c93af/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:22 compute-0 podman[314570]: 2026-01-26 16:06:22.046473252 +0000 UTC m=+0.150371389 container init 4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.051 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.052 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:22 compute-0 podman[314570]: 2026-01-26 16:06:22.053207207 +0000 UTC m=+0.157105314 container start 4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.073 239969 DEBUG nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:06:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 407 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 166 op/s
Jan 26 16:06:22 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314586]: [NOTICE]   (314590) : New worker (314607) forked
Jan 26 16:06:22 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314586]: [NOTICE]   (314590) : Loading success.
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.156 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.156 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.163 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.163 239969 INFO nova.compute.claims [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.224 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.225 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.225 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.226 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.226 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.226 239969 WARNING nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state suspended and task_state resuming.
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.227 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.227 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.228 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.229 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.229 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.229 239969 WARNING nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state suspended and task_state resuming.
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.230 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.230 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.230 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.230 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.231 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.231 239969 WARNING nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state suspended and task_state resuming.
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.231 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.231 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.232 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.232 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.232 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.232 239969 WARNING nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state suspended and task_state resuming.
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.233 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.233 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.233 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.233 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.233 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.234 239969 WARNING nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state suspended and task_state resuming.
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.234 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.234 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.234 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.235 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.235 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.235 239969 WARNING nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state suspended and task_state resuming.
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.235 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.235 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.236 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.236 239969 DEBUG oslo_concurrency.lockutils [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.236 239969 DEBUG nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.236 239969 WARNING nova.compute.manager [req-2ac15f01-c7c9-475b-b1f8-ac3d7d87063f req-fe1e5e90-5233-406d-b246-05da6645c6d5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state suspended and task_state resuming.
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.309 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.348 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 492acb7a-42e1-4918-a973-352f9d8251d0 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.349 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443582.3273993, 492acb7a-42e1-4918-a973-352f9d8251d0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.349 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Started (Lifecycle Event)
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.352 239969 DEBUG nova.compute.manager [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.354 239969 DEBUG nova.objects.instance [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.392 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.398 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance running successfully.
Jan 26 16:06:22 compute-0 virtqemud[240263]: argument unsupported: QEMU guest agent is not configured
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.402 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.405 239969 DEBUG nova.virt.libvirt.guest [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.405 239969 DEBUG nova.compute.manager [None req-c14f7c2e-0f95-4ffe-a472-54d478288ada 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.431 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.431 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443582.3278468, 492acb7a-42e1-4918-a973-352f9d8251d0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.432 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Resumed (Lifecycle Event)
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.476 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.492 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e249 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Jan 26 16:06:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Jan 26 16:06:22 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.565552) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443582565632, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 689, "num_deletes": 257, "total_data_size": 766059, "memory_usage": 778816, "flush_reason": "Manual Compaction"}
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443582574895, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 755156, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33167, "largest_seqno": 33855, "table_properties": {"data_size": 751453, "index_size": 1483, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8403, "raw_average_key_size": 19, "raw_value_size": 743925, "raw_average_value_size": 1702, "num_data_blocks": 65, "num_entries": 437, "num_filter_entries": 437, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769443543, "oldest_key_time": 1769443543, "file_creation_time": 1769443582, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 9408 microseconds, and 4474 cpu microseconds.
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.574955) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 755156 bytes OK
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.575000) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.577025) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.577043) EVENT_LOG_v1 {"time_micros": 1769443582577036, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.577072) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 762383, prev total WAL file size 762383, number of live WAL files 2.
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.577927) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303032' seq:72057594037927935, type:22 .. '6C6F676D0031323533' seq:0, type:0; will stop at (end)
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(737KB)], [71(8578KB)]
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443582577998, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 9539401, "oldest_snapshot_seqno": -1}
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 5879 keys, 9406158 bytes, temperature: kUnknown
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443582640624, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 9406158, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9365249, "index_size": 25119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14725, "raw_key_size": 149024, "raw_average_key_size": 25, "raw_value_size": 9258261, "raw_average_value_size": 1574, "num_data_blocks": 1015, "num_entries": 5879, "num_filter_entries": 5879, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769443582, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.640859) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 9406158 bytes
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.642738) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.1 rd, 150.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 8.4 +0.0 blob) out(9.0 +0.0 blob), read-write-amplify(25.1) write-amplify(12.5) OK, records in: 6410, records dropped: 531 output_compression: NoCompression
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.642753) EVENT_LOG_v1 {"time_micros": 1769443582642745, "job": 40, "event": "compaction_finished", "compaction_time_micros": 62714, "compaction_time_cpu_micros": 24379, "output_level": 6, "num_output_files": 1, "total_output_size": 9406158, "num_input_records": 6410, "num_output_records": 5879, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443582642968, "job": 40, "event": "table_file_deletion", "file_number": 73}
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443582645383, "job": 40, "event": "table_file_deletion", "file_number": 71}
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.577813) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.645489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.645499) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.645502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.645504) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:06:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:06:22.645506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.791 239969 DEBUG nova.compute.manager [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.843 239969 INFO nova.compute.manager [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] instance snapshotting
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.845 239969 DEBUG nova.objects.instance [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'flavor' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:06:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1824302632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.899 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.907 239969 DEBUG nova.compute.provider_tree [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.928 239969 DEBUG nova.scheduler.client.report [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.966 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:22 compute-0 nova_compute[239965]: 2026-01-26 16:06:22.968 239969 DEBUG nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.027 239969 DEBUG nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.027 239969 DEBUG nova.network.neutron [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.046 239969 INFO nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.064 239969 DEBUG nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.079 239969 INFO nova.virt.libvirt.driver [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Beginning live snapshot process
Jan 26 16:06:23 compute-0 ceph-mon[75140]: pgmap v1616: 305 pgs: 305 active+clean; 407 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 166 op/s
Jan 26 16:06:23 compute-0 ceph-mon[75140]: osdmap e250: 3 total, 3 up, 3 in
Jan 26 16:06:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1824302632' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.262 239969 DEBUG nova.policy [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '392bf4c554724bd3b097b990cec964ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3d3c26abe454a90816833e484abbbd5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.265 239969 DEBUG nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.266 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.266 239969 INFO nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Creating image(s)
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.289 239969 DEBUG nova.storage.rbd_utils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.315 239969 DEBUG nova.storage.rbd_utils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.347 239969 DEBUG nova.storage.rbd_utils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.351 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.387 239969 DEBUG nova.virt.libvirt.imagebackend [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.422 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.422 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.423 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.423 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.449 239969 DEBUG nova.storage.rbd_utils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.453 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.605 239969 DEBUG nova.storage.rbd_utils [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(0d536903d4ab4470aa3e27a927c21d19) on rbd image(571cd10f-e22e-43b6-9523-e3539d047e4a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:06:23 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.759 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.306s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.811 239969 DEBUG nova.storage.rbd_utils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] resizing rbd image 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.929 239969 DEBUG nova.network.neutron [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Successfully created port: 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.938 239969 DEBUG nova.objects.instance [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c5a4474-3020-4bb1-96e5-41b02d0eb962 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.954 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.955 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Ensure instance console log exists: /var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.955 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.956 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:23 compute-0 nova_compute[239965]: 2026-01-26 16:06:23.956 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 407 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 166 op/s
Jan 26 16:06:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Jan 26 16:06:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Jan 26 16:06:24 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.186 239969 DEBUG nova.storage.rbd_utils [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] cloning vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk@0d536903d4ab4470aa3e27a927c21d19 to images/81876c85-13a0-427f-8da6-a495ac46a80a clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.311 239969 DEBUG nova.storage.rbd_utils [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] flattening images/81876c85-13a0-427f-8da6-a495ac46a80a flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.604 239969 DEBUG nova.network.neutron [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Successfully updated port: 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.618 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "refresh_cache-1c5a4474-3020-4bb1-96e5-41b02d0eb962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.619 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquired lock "refresh_cache-1c5a4474-3020-4bb1-96e5-41b02d0eb962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.619 239969 DEBUG nova.network.neutron [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.629 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.688 239969 DEBUG oslo_concurrency.lockutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.689 239969 DEBUG oslo_concurrency.lockutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.689 239969 DEBUG oslo_concurrency.lockutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.689 239969 DEBUG oslo_concurrency.lockutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.689 239969 DEBUG oslo_concurrency.lockutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.691 239969 INFO nova.compute.manager [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Terminating instance
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.692 239969 DEBUG nova.compute.manager [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.764 239969 DEBUG nova.compute.manager [req-03b145b2-c777-4f9e-914b-f9cd08f87059 req-443feb36-6caf-4b03-b417-08b2a11e7dd5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Received event network-changed-3cbece9c-4828-4d27-8ede-b67dab0fc2b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.765 239969 DEBUG nova.compute.manager [req-03b145b2-c777-4f9e-914b-f9cd08f87059 req-443feb36-6caf-4b03-b417-08b2a11e7dd5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Refreshing instance network info cache due to event network-changed-3cbece9c-4828-4d27-8ede-b67dab0fc2b0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.765 239969 DEBUG oslo_concurrency.lockutils [req-03b145b2-c777-4f9e-914b-f9cd08f87059 req-443feb36-6caf-4b03-b417-08b2a11e7dd5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-1c5a4474-3020-4bb1-96e5-41b02d0eb962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.773 239969 DEBUG nova.network.neutron [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:06:24 compute-0 kernel: tap2a2de29b-29 (unregistering): left promiscuous mode
Jan 26 16:06:24 compute-0 NetworkManager[48954]: <info>  [1769443584.8273] device (tap2a2de29b-29): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:06:24 compute-0 ovn_controller[146046]: 2026-01-26T16:06:24Z|00852|binding|INFO|Releasing lport 2a2de29b-29c0-439c-8236-2f566c4c89aa from this chassis (sb_readonly=0)
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.839 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:24 compute-0 ovn_controller[146046]: 2026-01-26T16:06:24Z|00853|binding|INFO|Setting lport 2a2de29b-29c0-439c-8236-2f566c4c89aa down in Southbound
Jan 26 16:06:24 compute-0 ovn_controller[146046]: 2026-01-26T16:06:24Z|00854|binding|INFO|Removing iface tap2a2de29b-29 ovn-installed in OVS
Jan 26 16:06:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:24.847 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f8:ca:89 10.100.0.14'], port_security=['fa:16:3e:f8:ca:89 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '492acb7a-42e1-4918-a973-352f9d8251d0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '94771806cd0d4b5db117956e09fea9e6', 'neutron:revision_number': '16', 'neutron:security_group_ids': '2b6630f0-082e-491e-b663-f8af00e04b60', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9cb6658a-2da0-4b82-9cf1-51e6dbeaf3ff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=2a2de29b-29c0-439c-8236-2f566c4c89aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:24.848 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 2a2de29b-29c0-439c-8236-2f566c4c89aa in datapath 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d unbound from our chassis
Jan 26 16:06:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:24.849 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:06:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:24.850 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[28569d31-5245-4c94-bb38-e0534d5e6de4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:24.851 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d namespace which is not needed anymore
Jan 26 16:06:24 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000004c.scope: Deactivated successfully.
Jan 26 16:06:24 compute-0 systemd[1]: machine-qemu\x2d106\x2dinstance\x2d0000004c.scope: Consumed 3.399s CPU time.
Jan 26 16:06:24 compute-0 systemd-machined[208061]: Machine qemu-106-instance-0000004c terminated.
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.891 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.906 239969 DEBUG nova.storage.rbd_utils [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] removing snapshot(0d536903d4ab4470aa3e27a927c21d19) on rbd image(571cd10f-e22e-43b6-9523-e3539d047e4a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.949 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.958 239969 INFO nova.virt.libvirt.driver [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Instance destroyed successfully.
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.958 239969 DEBUG nova.objects.instance [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lazy-loading 'resources' on Instance uuid 492acb7a-42e1-4918-a973-352f9d8251d0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.970 239969 DEBUG nova.virt.libvirt.vif [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:02:53Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1205900142',display_name='tempest-ServerActionsTestJSON-server-1205900142',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1205900142',id=76,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLJC9bPt4tHtAdEqahH8dDewWtV4sFfSIyn3+BOhF1ZOSNVcrw8cqQ/JKx3C2TWjp9SjIuYmlVUH9MiH1RWG1NVSVOkV3L9+W0mVxNAs5TvmU8qKvHIWoJY7lntFnq7MLA==',key_name='tempest-keypair-1155267326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:03:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='94771806cd0d4b5db117956e09fea9e6',ramdisk_id='',reservation_id='r-plaaamng',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-274649005',owner_user_name='tempest-ServerActionsTestJSON-274649005-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:06:22Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='59f8ee09903a4e0a812c3d9e013996bd',uuid=492acb7a-42e1-4918-a973-352f9d8251d0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.970 239969 DEBUG nova.network.os_vif_util [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converting VIF {"id": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "address": "fa:16:3e:f8:ca:89", "network": {"id": "84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-583950921-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "94771806cd0d4b5db117956e09fea9e6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2a2de29b-29", "ovs_interfaceid": "2a2de29b-29c0-439c-8236-2f566c4c89aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.971 239969 DEBUG nova.network.os_vif_util [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.971 239969 DEBUG os_vif [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.972 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.973 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2a2de29b-29, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.974 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.977 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:06:24 compute-0 nova_compute[239965]: 2026-01-26 16:06:24.978 239969 INFO os_vif [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f8:ca:89,bridge_name='br-int',has_traffic_filtering=True,id=2a2de29b-29c0-439c-8236-2f566c4c89aa,network=Network(84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2a2de29b-29')
Jan 26 16:06:25 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314586]: [NOTICE]   (314590) : haproxy version is 2.8.14-c23fe91
Jan 26 16:06:25 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314586]: [NOTICE]   (314590) : path to executable is /usr/sbin/haproxy
Jan 26 16:06:25 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314586]: [WARNING]  (314590) : Exiting Master process...
Jan 26 16:06:25 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314586]: [ALERT]    (314590) : Current worker (314607) exited with code 143 (Terminated)
Jan 26 16:06:25 compute-0 neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d[314586]: [WARNING]  (314590) : All workers exited. Exiting... (0)
Jan 26 16:06:25 compute-0 systemd[1]: libpod-4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6.scope: Deactivated successfully.
Jan 26 16:06:25 compute-0 podman[314986]: 2026-01-26 16:06:25.021360714 +0000 UTC m=+0.048148699 container died 4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 16:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6-userdata-shm.mount: Deactivated successfully.
Jan 26 16:06:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-53954f81fc0860a0c36e820dd6b1e417f78d89f78b97faa86a17933ff35c93af-merged.mount: Deactivated successfully.
Jan 26 16:06:25 compute-0 podman[314986]: 2026-01-26 16:06:25.068143738 +0000 UTC m=+0.094931723 container cleanup 4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:06:25 compute-0 systemd[1]: libpod-conmon-4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6.scope: Deactivated successfully.
Jan 26 16:06:25 compute-0 podman[315034]: 2026-01-26 16:06:25.134324088 +0000 UTC m=+0.046524130 container remove 4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:06:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:25.140 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9954ce59-2181-40ab-8598-2d7a0aacdf34]: (4, ('Mon Jan 26 04:06:24 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6)\n4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6\nMon Jan 26 04:06:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d (4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6)\n4052865902c33b00aab3ca08325b930e05cd91e3aebfde9b4913db36382218b6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:25.142 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[050f6a92-8c51-4f5c-a84a-f2555445177d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:25.145 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84c1ad93-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:25 compute-0 kernel: tap84c1ad93-10: left promiscuous mode
Jan 26 16:06:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:25 compute-0 ceph-mon[75140]: pgmap v1618: 305 pgs: 305 active+clean; 407 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 166 op/s
Jan 26 16:06:25 compute-0 ceph-mon[75140]: osdmap e251: 3 total, 3 up, 3 in
Jan 26 16:06:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.164 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:25 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Jan 26 16:06:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:25.166 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f5be0b3-c724-46fd-8817-1156a60bbeb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:25.180 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[030293b2-ef99-4992-bbf2-eacee6b56885]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:25.182 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d155be3c-a752-4c9c-8223-510242270aae]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.189 239969 DEBUG nova.storage.rbd_utils [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(snap) on rbd image(81876c85-13a0-427f-8da6-a495ac46a80a) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:06:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:25.203 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eb7570bd-c12f-45ad-92da-a20dde78f53d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507208, 'reachable_time': 31450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315048, 'error': None, 'target': 'ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:25.208 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-84c1ad93-16e1-4751-ac9b-ceeaa2d50c1d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:06:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:25.208 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff3c36e-68b1-41cb-8de3-37edb660a28b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d84c1ad93\x2d16e1\x2d4751\x2dac9b\x2dceeaa2d50c1d.mount: Deactivated successfully.
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.300 239969 INFO nova.virt.libvirt.driver [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Deleting instance files /var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0_del
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.301 239969 INFO nova.virt.libvirt.driver [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Deletion of /var/lib/nova/instances/492acb7a-42e1-4918-a973-352f9d8251d0_del complete
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.483 239969 INFO nova.compute.manager [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.483 239969 DEBUG oslo.service.loopingcall [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.484 239969 DEBUG nova.compute.manager [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.484 239969 DEBUG nova.network.neutron [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:06:25 compute-0 nova_compute[239965]: 2026-01-26 16:06:25.985 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 465 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 8.4 MiB/s wr, 239 op/s
Jan 26 16:06:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Jan 26 16:06:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Jan 26 16:06:26 compute-0 ceph-mon[75140]: osdmap e252: 3 total, 3 up, 3 in
Jan 26 16:06:26 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.799 239969 DEBUG nova.network.neutron [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Updating instance_info_cache with network_info: [{"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.823 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Releasing lock "refresh_cache-1c5a4474-3020-4bb1-96e5-41b02d0eb962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.823 239969 DEBUG nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Instance network_info: |[{"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.824 239969 DEBUG oslo_concurrency.lockutils [req-03b145b2-c777-4f9e-914b-f9cd08f87059 req-443feb36-6caf-4b03-b417-08b2a11e7dd5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-1c5a4474-3020-4bb1-96e5-41b02d0eb962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.824 239969 DEBUG nova.network.neutron [req-03b145b2-c777-4f9e-914b-f9cd08f87059 req-443feb36-6caf-4b03-b417-08b2a11e7dd5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Refreshing network info cache for port 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.828 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Start _get_guest_xml network_info=[{"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.833 239969 WARNING nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.841 239969 DEBUG nova.compute.manager [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.841 239969 DEBUG oslo_concurrency.lockutils [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.841 239969 DEBUG oslo_concurrency.lockutils [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.842 239969 DEBUG oslo_concurrency.lockutils [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.842 239969 DEBUG nova.compute.manager [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.842 239969 DEBUG nova.compute.manager [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-unplugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.842 239969 DEBUG nova.compute.manager [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.843 239969 DEBUG oslo_concurrency.lockutils [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.843 239969 DEBUG oslo_concurrency.lockutils [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.843 239969 DEBUG oslo_concurrency.lockutils [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.843 239969 DEBUG nova.compute.manager [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] No waiting events found dispatching network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.843 239969 WARNING nova.compute.manager [req-01d5d5f3-8933-44c2-8113-eb463f201f06 req-405d032b-d145-4167-b05a-2e8c43180a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received unexpected event network-vif-plugged-2a2de29b-29c0-439c-8236-2f566c4c89aa for instance with vm_state active and task_state deleting.
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.847 239969 DEBUG nova.virt.libvirt.host [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.847 239969 DEBUG nova.virt.libvirt.host [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.851 239969 DEBUG nova.virt.libvirt.host [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.852 239969 DEBUG nova.virt.libvirt.host [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.852 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.852 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.853 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.853 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.853 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.853 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.854 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.854 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.854 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.854 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.855 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.855 239969 DEBUG nova.virt.hardware [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:06:26 compute-0 nova_compute[239965]: 2026-01-26 16:06:26.857 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:27 compute-0 ceph-mon[75140]: pgmap v1621: 305 pgs: 305 active+clean; 465 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 8.4 MiB/s wr, 239 op/s
Jan 26 16:06:27 compute-0 ceph-mon[75140]: osdmap e253: 3 total, 3 up, 3 in
Jan 26 16:06:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:06:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232329427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.420 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.448 239969 DEBUG nova.storage.rbd_utils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.457 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.506 239969 INFO nova.virt.libvirt.driver [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Snapshot image upload complete
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.507 239969 INFO nova.compute.manager [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Took 4.64 seconds to snapshot the instance on the hypervisor.
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.537 239969 DEBUG nova.network.neutron [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.579 239969 INFO nova.compute.manager [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Took 2.09 seconds to deallocate network for instance.
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.624 239969 DEBUG oslo_concurrency.lockutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.624 239969 DEBUG oslo_concurrency.lockutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:27 compute-0 nova_compute[239965]: 2026-01-26 16:06:27.728 239969 DEBUG oslo_concurrency.processutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.014 239969 DEBUG nova.compute.manager [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Found 3 images (rotation: 2) _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4450
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.015 239969 DEBUG nova.compute.manager [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Rotating out 1 backups _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4458
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.015 239969 DEBUG nova.compute.manager [None req-a46600be-5b87-4bc6-93da-2b315d766fc7 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Deleting image e3f1b4e6-3b75-40cd-9d76-1d94417e28f9 _rotate_backups /usr/lib/python3.9/site-packages/nova/compute/manager.py:4463
Jan 26 16:06:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:06:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/575534881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.057 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.058 239969 DEBUG nova.virt.libvirt.vif [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:06:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-108293772',display_name='tempest-₡-108293772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--108293772',id=86,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-4cw1yfux',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=Non
e,updated_at=2026-01-26T16:06:23Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=1c5a4474-3020-4bb1-96e5-41b02d0eb962,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.059 239969 DEBUG nova.network.os_vif_util [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.059 239969 DEBUG nova.network.os_vif_util [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:8a:67,bridge_name='br-int',has_traffic_filtering=True,id=3cbece9c-4828-4d27-8ede-b67dab0fc2b0,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbece9c-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.061 239969 DEBUG nova.objects.instance [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c5a4474-3020-4bb1-96e5-41b02d0eb962 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 465 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.7 MiB/s wr, 196 op/s
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.081 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <uuid>1c5a4474-3020-4bb1-96e5-41b02d0eb962</uuid>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <name>instance-00000056</name>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <nova:name>tempest-₡-108293772</nova:name>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:06:26</nova:creationTime>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <nova:user uuid="392bf4c554724bd3b097b990cec964ac">tempest-ServersTestJSON-190839520-project-member</nova:user>
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <nova:project uuid="e3d3c26abe454a90816833e484abbbd5">tempest-ServersTestJSON-190839520</nova:project>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <nova:port uuid="3cbece9c-4828-4d27-8ede-b67dab0fc2b0">
Jan 26 16:06:28 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <system>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <entry name="serial">1c5a4474-3020-4bb1-96e5-41b02d0eb962</entry>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <entry name="uuid">1c5a4474-3020-4bb1-96e5-41b02d0eb962</entry>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     </system>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <os>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   </os>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <features>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   </features>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk">
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       </source>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk.config">
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       </source>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:06:28 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:2d:8a:67"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <target dev="tap3cbece9c-48"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962/console.log" append="off"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <video>
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     </video>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:06:28 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:06:28 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:06:28 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:06:28 compute-0 nova_compute[239965]: </domain>
Jan 26 16:06:28 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.082 239969 DEBUG nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Preparing to wait for external event network-vif-plugged-3cbece9c-4828-4d27-8ede-b67dab0fc2b0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.082 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.082 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.082 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.083 239969 DEBUG nova.virt.libvirt.vif [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:06:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-₡-108293772',display_name='tempest-₡-108293772',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--108293772',id=86,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-4cw1yfux',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:06:23Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=1c5a4474-3020-4bb1-96e5-41b02d0eb962,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.083 239969 DEBUG nova.network.os_vif_util [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.084 239969 DEBUG nova.network.os_vif_util [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:8a:67,bridge_name='br-int',has_traffic_filtering=True,id=3cbece9c-4828-4d27-8ede-b67dab0fc2b0,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbece9c-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.084 239969 DEBUG os_vif [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:8a:67,bridge_name='br-int',has_traffic_filtering=True,id=3cbece9c-4828-4d27-8ede-b67dab0fc2b0,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbece9c-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.084 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.085 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.085 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.087 239969 DEBUG nova.network.neutron [req-03b145b2-c777-4f9e-914b-f9cd08f87059 req-443feb36-6caf-4b03-b417-08b2a11e7dd5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Updated VIF entry in instance network info cache for port 3cbece9c-4828-4d27-8ede-b67dab0fc2b0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.087 239969 DEBUG nova.network.neutron [req-03b145b2-c777-4f9e-914b-f9cd08f87059 req-443feb36-6caf-4b03-b417-08b2a11e7dd5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Updating instance_info_cache with network_info: [{"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.094 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.094 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cbece9c-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.094 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cbece9c-48, col_values=(('external_ids', {'iface-id': '3cbece9c-4828-4d27-8ede-b67dab0fc2b0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:8a:67', 'vm-uuid': '1c5a4474-3020-4bb1-96e5-41b02d0eb962'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.096 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:28 compute-0 NetworkManager[48954]: <info>  [1769443588.0970] manager: (tap3cbece9c-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/365)
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.098 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.100 239969 DEBUG oslo_concurrency.lockutils [req-03b145b2-c777-4f9e-914b-f9cd08f87059 req-443feb36-6caf-4b03-b417-08b2a11e7dd5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-1c5a4474-3020-4bb1-96e5-41b02d0eb962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.103 239969 INFO os_vif [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:8a:67,bridge_name='br-int',has_traffic_filtering=True,id=3cbece9c-4828-4d27-8ede-b67dab0fc2b0,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbece9c-48')
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.156 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.156 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.157 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No VIF found with MAC fa:16:3e:2d:8a:67, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.157 239969 INFO nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Using config drive
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.180 239969 DEBUG nova.storage.rbd_utils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/232329427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/575534881' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:06:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3371547848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.299 239969 DEBUG oslo_concurrency.processutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.305 239969 DEBUG nova.compute.provider_tree [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.318 239969 DEBUG nova.scheduler.client.report [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.335 239969 DEBUG oslo_concurrency.lockutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.357 239969 INFO nova.scheduler.client.report [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Deleted allocations for instance 492acb7a-42e1-4918-a973-352f9d8251d0
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.419 239969 DEBUG oslo_concurrency.lockutils [None req-b2106192-5e73-43f1-8477-b1915049d026 59f8ee09903a4e0a812c3d9e013996bd 94771806cd0d4b5db117956e09fea9e6 - - default default] Lock "492acb7a-42e1-4918-a973-352f9d8251d0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.494 239969 INFO nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Creating config drive at /var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962/disk.config
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.502 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6coog4hl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:06:28
Jan 26 16:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Jan 26 16:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.641 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6coog4hl" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.671 239969 DEBUG nova.storage.rbd_utils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.678 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962/disk.config 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.849 239969 DEBUG oslo_concurrency.processutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962/disk.config 1c5a4474-3020-4bb1-96e5-41b02d0eb962_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.850 239969 INFO nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Deleting local config drive /var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962/disk.config because it was imported into RBD.
Jan 26 16:06:28 compute-0 kernel: tap3cbece9c-48: entered promiscuous mode
Jan 26 16:06:28 compute-0 ovn_controller[146046]: 2026-01-26T16:06:28Z|00855|binding|INFO|Claiming lport 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 for this chassis.
Jan 26 16:06:28 compute-0 ovn_controller[146046]: 2026-01-26T16:06:28Z|00856|binding|INFO|3cbece9c-4828-4d27-8ede-b67dab0fc2b0: Claiming fa:16:3e:2d:8a:67 10.100.0.5
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.903 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:28 compute-0 NetworkManager[48954]: <info>  [1769443588.9063] manager: (tap3cbece9c-48): new Tun device (/org/freedesktop/NetworkManager/Devices/366)
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.913 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:8a:67 10.100.0.5'], port_security=['fa:16:3e:2d:8a:67 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1c5a4474-3020-4bb1-96e5-41b02d0eb962', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3cbece9c-4828-4d27-8ede-b67dab0fc2b0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.916 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 bound to our chassis
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.918 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:06:28 compute-0 ovn_controller[146046]: 2026-01-26T16:06:28Z|00857|binding|INFO|Setting lport 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 ovn-installed in OVS
Jan 26 16:06:28 compute-0 ovn_controller[146046]: 2026-01-26T16:06:28Z|00858|binding|INFO|Setting lport 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 up in Southbound
Jan 26 16:06:28 compute-0 nova_compute[239965]: 2026-01-26 16:06:28.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.931 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b2084f45-7fa9-43eb-a1ab-6a71c14ae3f1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.932 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape7ef659c-21 in ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:06:28 compute-0 systemd-udevd[315225]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.935 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape7ef659c-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.935 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f0fb387d-52f5-4315-9d93-ce00a36a66a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.937 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[516ad564-fbe3-4ac0-a043-4d24678439be]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:28 compute-0 systemd-machined[208061]: New machine qemu-107-instance-00000056.
Jan 26 16:06:28 compute-0 NetworkManager[48954]: <info>  [1769443588.9475] device (tap3cbece9c-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:06:28 compute-0 NetworkManager[48954]: <info>  [1769443588.9482] device (tap3cbece9c-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:06:28 compute-0 systemd[1]: Started Virtual Machine qemu-107-instance-00000056.
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.955 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[eb295bde-de52-42ee-8971-7da2f5cd1433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:28.984 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4d62f2-9012-4306-9a4e-cac68ce2ffe2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.024 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf4c2c6-e214-4c47-b17f-4f0845d3b3c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.025 239969 DEBUG nova.compute.manager [req-8935a8f4-2fad-4a94-a43e-77ef0ead37b6 req-cde17d0d-5c54-4128-98df-c8f278dc7a7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Received event network-vif-deleted-2a2de29b-29c0-439c-8236-2f566c4c89aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:29 compute-0 NetworkManager[48954]: <info>  [1769443589.0341] manager: (tape7ef659c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/367)
Jan 26 16:06:29 compute-0 systemd-udevd[315228]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.034 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e459a6f1-99b6-441a-8984-d7f312be9e83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.078 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b1968531-d70f-4087-b254-8672d99bb541]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.081 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfb1a2e-afa7-4e4d-b8d1-edb67a5a2acb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 NetworkManager[48954]: <info>  [1769443589.1128] device (tape7ef659c-20): carrier: link connected
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.122 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8002cf0c-9f74-4098-bf77-90628df6221a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.145 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[074882dc-15e3-4ab3-a40c-f7e0f638d4e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 18624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 315257, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.166 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7a70a7a2-ea08-4a93-bb3e-13624e5a09f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:1b47'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 507991, 'tstamp': 507991}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 315258, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.194 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[18f74c0a-1ad1-4ca9-a0f8-e3bea8f5048a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 18624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 315259, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ceph-mon[75140]: pgmap v1623: 305 pgs: 305 active+clean; 465 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.7 MiB/s wr, 196 op/s
Jan 26 16:06:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3371547848' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.208 239969 DEBUG nova.compute.manager [req-d0cd9374-cc09-41e9-b96d-be64529dca2b req-06e54e61-8fc6-4389-9e19-34813943d672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Received event network-vif-plugged-3cbece9c-4828-4d27-8ede-b67dab0fc2b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:29 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.213 239969 DEBUG oslo_concurrency.lockutils [req-d0cd9374-cc09-41e9-b96d-be64529dca2b req-06e54e61-8fc6-4389-9e19-34813943d672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.214 239969 DEBUG oslo_concurrency.lockutils [req-d0cd9374-cc09-41e9-b96d-be64529dca2b req-06e54e61-8fc6-4389-9e19-34813943d672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.214 239969 DEBUG oslo_concurrency.lockutils [req-d0cd9374-cc09-41e9-b96d-be64529dca2b req-06e54e61-8fc6-4389-9e19-34813943d672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.215 239969 DEBUG nova.compute.manager [req-d0cd9374-cc09-41e9-b96d-be64529dca2b req-06e54e61-8fc6-4389-9e19-34813943d672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Processing event network-vif-plugged-3cbece9c-4828-4d27-8ede-b67dab0fc2b0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.234 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f2fd261d-5828-4143-8303-e3a23b1d63bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.297 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b68e07-1d5b-452b-92a9-e084d4cb008b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.299 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.299 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.300 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.302 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:29 compute-0 NetworkManager[48954]: <info>  [1769443589.3025] manager: (tape7ef659c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/368)
Jan 26 16:06:29 compute-0 kernel: tape7ef659c-20: entered promiscuous mode
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.305 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:29 compute-0 ovn_controller[146046]: 2026-01-26T16:06:29Z|00859|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.324 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.325 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e7ef659c-2c7b-4cdf-8956-267cdfeff707.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e7ef659c-2c7b-4cdf-8956-267cdfeff707.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.326 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9acd1287-5b3e-4c2a-be0b-00b1a054a11b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.326 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/e7ef659c-2c7b-4cdf-8956-267cdfeff707.pid.haproxy
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:06:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:29.327 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'env', 'PROCESS_TAG=haproxy-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e7ef659c-2c7b-4cdf-8956-267cdfeff707.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.576 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443589.575092, 1c5a4474-3020-4bb1-96e5-41b02d0eb962 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.576 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] VM Started (Lifecycle Event)
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.579 239969 DEBUG nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.585 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.590 239969 INFO nova.virt.libvirt.driver [-] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Instance spawned successfully.
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.591 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.609 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.614 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.617 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.617 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.618 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.618 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.619 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.619 239969 DEBUG nova.virt.libvirt.driver [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.647 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.647 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443589.5754201, 1c5a4474-3020-4bb1-96e5-41b02d0eb962 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.648 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] VM Paused (Lifecycle Event)
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.681 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.684 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443589.5849128, 1c5a4474-3020-4bb1-96e5-41b02d0eb962 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.685 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] VM Resumed (Lifecycle Event)
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.695 239969 INFO nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Took 6.43 seconds to spawn the instance on the hypervisor.
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.696 239969 DEBUG nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.708 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.711 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:29 compute-0 podman[315333]: 2026-01-26 16:06:29.728618364 +0000 UTC m=+0.058491042 container create da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.743 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:06:29 compute-0 systemd[1]: Started libpod-conmon-da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb.scope.
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.785 239969 INFO nova.compute.manager [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Took 7.66 seconds to build instance.
Jan 26 16:06:29 compute-0 podman[315333]: 2026-01-26 16:06:29.704497274 +0000 UTC m=+0.034369952 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:06:29 compute-0 nova_compute[239965]: 2026-01-26 16:06:29.803 239969 DEBUG oslo_concurrency.lockutils [None req-9bed234a-9799-4160-8558-fa38b6782de2 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:06:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f0b6f31645d9f0ece3c6711209f5dc1bb8c997ef0ec06a8ab3ee4af6c1c6c9a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:29 compute-0 podman[315333]: 2026-01-26 16:06:29.83677753 +0000 UTC m=+0.166650238 container init da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:06:29 compute-0 podman[315333]: 2026-01-26 16:06:29.844799016 +0000 UTC m=+0.174671694 container start da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:06:29 compute-0 neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707[315348]: [NOTICE]   (315352) : New worker (315354) forked
Jan 26 16:06:29 compute-0 neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707[315348]: [NOTICE]   (315352) : Loading success.
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 451 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 11 MiB/s wr, 335 op/s
Jan 26 16:06:30 compute-0 ceph-mon[75140]: osdmap e254: 3 total, 3 up, 3 in
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:06:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:06:30 compute-0 nova_compute[239965]: 2026-01-26 16:06:30.988 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Jan 26 16:06:31 compute-0 ceph-mon[75140]: pgmap v1625: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 451 MiB data, 851 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 11 MiB/s wr, 335 op/s
Jan 26 16:06:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Jan 26 16:06:31 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Jan 26 16:06:31 compute-0 nova_compute[239965]: 2026-01-26 16:06:31.286 239969 DEBUG nova.compute.manager [req-e43a53d3-a6ac-43dc-9a43-be008d489eb4 req-995088ab-58aa-4cf9-b5ca-c721bd58ca66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Received event network-vif-plugged-3cbece9c-4828-4d27-8ede-b67dab0fc2b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:31 compute-0 nova_compute[239965]: 2026-01-26 16:06:31.286 239969 DEBUG oslo_concurrency.lockutils [req-e43a53d3-a6ac-43dc-9a43-be008d489eb4 req-995088ab-58aa-4cf9-b5ca-c721bd58ca66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:31 compute-0 nova_compute[239965]: 2026-01-26 16:06:31.287 239969 DEBUG oslo_concurrency.lockutils [req-e43a53d3-a6ac-43dc-9a43-be008d489eb4 req-995088ab-58aa-4cf9-b5ca-c721bd58ca66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:31 compute-0 nova_compute[239965]: 2026-01-26 16:06:31.287 239969 DEBUG oslo_concurrency.lockutils [req-e43a53d3-a6ac-43dc-9a43-be008d489eb4 req-995088ab-58aa-4cf9-b5ca-c721bd58ca66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:31 compute-0 nova_compute[239965]: 2026-01-26 16:06:31.287 239969 DEBUG nova.compute.manager [req-e43a53d3-a6ac-43dc-9a43-be008d489eb4 req-995088ab-58aa-4cf9-b5ca-c721bd58ca66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] No waiting events found dispatching network-vif-plugged-3cbece9c-4828-4d27-8ede-b67dab0fc2b0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:31 compute-0 nova_compute[239965]: 2026-01-26 16:06:31.287 239969 WARNING nova.compute.manager [req-e43a53d3-a6ac-43dc-9a43-be008d489eb4 req-995088ab-58aa-4cf9-b5ca-c721bd58ca66 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Received unexpected event network-vif-plugged-3cbece9c-4828-4d27-8ede-b67dab0fc2b0 for instance with vm_state active and task_state None.
Jan 26 16:06:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1627: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 417 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 5.2 MiB/s wr, 224 op/s
Jan 26 16:06:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Jan 26 16:06:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Jan 26 16:06:32 compute-0 ceph-mon[75140]: osdmap e255: 3 total, 3 up, 3 in
Jan 26 16:06:32 compute-0 ceph-mon[75140]: pgmap v1627: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 417 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 5.2 MiB/s wr, 224 op/s
Jan 26 16:06:32 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Jan 26 16:06:32 compute-0 ovn_controller[146046]: 2026-01-26T16:06:32Z|00860|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:06:32 compute-0 ovn_controller[146046]: 2026-01-26T16:06:32Z|00861|binding|INFO|Releasing lport 8d8bd19b-594d-4000-99fc-eb8020a59c23 from this chassis (sb_readonly=0)
Jan 26 16:06:32 compute-0 nova_compute[239965]: 2026-01-26 16:06:32.490 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e256 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Jan 26 16:06:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Jan 26 16:06:32 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Jan 26 16:06:33 compute-0 nova_compute[239965]: 2026-01-26 16:06:33.096 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:33 compute-0 ceph-mon[75140]: osdmap e256: 3 total, 3 up, 3 in
Jan 26 16:06:33 compute-0 ceph-mon[75140]: osdmap e257: 3 total, 3 up, 3 in
Jan 26 16:06:33 compute-0 nova_compute[239965]: 2026-01-26 16:06:33.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:06:33 compute-0 sshd-session[315363]: Invalid user sol from 45.148.10.240 port 57124
Jan 26 16:06:33 compute-0 sshd-session[315363]: Connection closed by invalid user sol 45.148.10.240 port 57124 [preauth]
Jan 26 16:06:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 378 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.4 MiB/s wr, 367 op/s
Jan 26 16:06:34 compute-0 ceph-mon[75140]: pgmap v1630: 305 pgs: 305 active+clean; 378 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.4 MiB/s wr, 367 op/s
Jan 26 16:06:35 compute-0 nova_compute[239965]: 2026-01-26 16:06:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:06:35 compute-0 nova_compute[239965]: 2026-01-26 16:06:35.988 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 225 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.3 KiB/s wr, 257 op/s
Jan 26 16:06:37 compute-0 ceph-mon[75140]: pgmap v1631: 305 pgs: 305 active+clean; 225 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 6.3 KiB/s wr, 257 op/s
Jan 26 16:06:37 compute-0 nova_compute[239965]: 2026-01-26 16:06:37.262 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "2260a3d1-973c-4364-af38-ebd9acd287f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:37 compute-0 nova_compute[239965]: 2026-01-26 16:06:37.262 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:37 compute-0 nova_compute[239965]: 2026-01-26 16:06:37.279 239969 DEBUG nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:06:37 compute-0 nova_compute[239965]: 2026-01-26 16:06:37.390 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:37 compute-0 nova_compute[239965]: 2026-01-26 16:06:37.391 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:37 compute-0 nova_compute[239965]: 2026-01-26 16:06:37.399 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:06:37 compute-0 nova_compute[239965]: 2026-01-26 16:06:37.400 239969 INFO nova.compute.claims [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:06:37 compute-0 nova_compute[239965]: 2026-01-26 16:06:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:06:37 compute-0 nova_compute[239965]: 2026-01-26 16:06:37.546 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Jan 26 16:06:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Jan 26 16:06:37 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Jan 26 16:06:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1633: 305 pgs: 305 active+clean; 213 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 23 KiB/s wr, 206 op/s
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.098 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:06:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/421505830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.181 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.635s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.187 239969 DEBUG nova.compute.provider_tree [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.208 239969 DEBUG nova.scheduler.client.report [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.232 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.233 239969 DEBUG nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.294 239969 DEBUG nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.294 239969 DEBUG nova.network.neutron [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.317 239969 INFO nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:06:38 compute-0 nova_compute[239965]: 2026-01-26 16:06:38.362 239969 DEBUG nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:06:38 compute-0 ceph-mon[75140]: osdmap e258: 3 total, 3 up, 3 in
Jan 26 16:06:38 compute-0 ceph-mon[75140]: pgmap v1633: 305 pgs: 305 active+clean; 213 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 23 KiB/s wr, 206 op/s
Jan 26 16:06:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/421505830' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.093 239969 DEBUG nova.policy [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '392bf4c554724bd3b097b990cec964ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3d3c26abe454a90816833e484abbbd5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:06:39 compute-0 podman[315387]: 2026-01-26 16:06:39.369920873 +0000 UTC m=+0.055775185 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 26 16:06:39 compute-0 podman[315388]: 2026-01-26 16:06:39.400747337 +0000 UTC m=+0.084487318 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.682 239969 DEBUG nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.683 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.684 239969 INFO nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Creating image(s)
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.709 239969 DEBUG nova.storage.rbd_utils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2260a3d1-973c-4364-af38-ebd9acd287f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.737 239969 DEBUG nova.storage.rbd_utils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2260a3d1-973c-4364-af38-ebd9acd287f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.762 239969 DEBUG nova.storage.rbd_utils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2260a3d1-973c-4364-af38-ebd9acd287f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.765 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.839 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.840 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.841 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.841 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.868 239969 DEBUG nova.storage.rbd_utils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2260a3d1-973c-4364-af38-ebd9acd287f8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.871 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2260a3d1-973c-4364-af38-ebd9acd287f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.957 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443584.9557946, 492acb7a-42e1-4918-a973-352f9d8251d0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.958 239969 INFO nova.compute.manager [-] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] VM Stopped (Lifecycle Event)
Jan 26 16:06:39 compute-0 nova_compute[239965]: 2026-01-26 16:06:39.987 239969 DEBUG nova.compute.manager [None req-e1261ff9-2301-4328-b2ce-1847d083c372 - - - - - -] [instance: 492acb7a-42e1-4918-a973-352f9d8251d0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 213 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 16 KiB/s wr, 101 op/s
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.302 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2260a3d1-973c-4364-af38-ebd9acd287f8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.362 239969 DEBUG nova.storage.rbd_utils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] resizing rbd image 2260a3d1-973c-4364-af38-ebd9acd287f8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.444 239969 DEBUG nova.objects.instance [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'migration_context' on Instance uuid 2260a3d1-973c-4364-af38-ebd9acd287f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.459 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.459 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Ensure instance console log exists: /var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.460 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.460 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.460 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.795 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.795 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.796 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:06:40 compute-0 nova_compute[239965]: 2026-01-26 16:06:40.991 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:41 compute-0 ceph-mon[75140]: pgmap v1634: 305 pgs: 305 active+clean; 213 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 16 KiB/s wr, 101 op/s
Jan 26 16:06:41 compute-0 nova_compute[239965]: 2026-01-26 16:06:41.230 239969 DEBUG nova.network.neutron [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Successfully created port: b7f76f11-77ec-41f2-8177-93de5bf467a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:06:41 compute-0 nova_compute[239965]: 2026-01-26 16:06:41.459 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:41 compute-0 nova_compute[239965]: 2026-01-26 16:06:41.460 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:41 compute-0 nova_compute[239965]: 2026-01-26 16:06:41.477 239969 DEBUG nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:06:41 compute-0 nova_compute[239965]: 2026-01-26 16:06:41.538 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:41 compute-0 nova_compute[239965]: 2026-01-26 16:06:41.538 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:41 compute-0 nova_compute[239965]: 2026-01-26 16:06:41.543 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:06:41 compute-0 nova_compute[239965]: 2026-01-26 16:06:41.543 239969 INFO nova.compute.claims [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:06:41 compute-0 nova_compute[239965]: 2026-01-26 16:06:41.705 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1635: 305 pgs: 305 active+clean; 225 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 863 KiB/s rd, 380 KiB/s wr, 87 op/s
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.243 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.259 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.260 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.260 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.261 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.261 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:06:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:06:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3478646628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.429 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.724s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.436 239969 DEBUG nova.compute.provider_tree [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.452 239969 DEBUG nova.scheduler.client.report [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:06:42 compute-0 ceph-mon[75140]: pgmap v1635: 305 pgs: 305 active+clean; 225 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 863 KiB/s rd, 380 KiB/s wr, 87 op/s
Jan 26 16:06:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3478646628' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.478 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.940s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.479 239969 DEBUG nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.532 239969 DEBUG nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.533 239969 DEBUG nova.network.neutron [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.549 239969 INFO nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.571 239969 DEBUG nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:06:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Jan 26 16:06:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Jan 26 16:06:42 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.662 239969 DEBUG nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.665 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.665 239969 INFO nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Creating image(s)
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.692 239969 DEBUG nova.storage.rbd_utils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.718 239969 DEBUG nova.storage.rbd_utils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.741 239969 DEBUG nova.storage.rbd_utils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.744 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.823 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.824 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.824 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.825 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.843 239969 DEBUG nova.storage.rbd_utils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:42 compute-0 nova_compute[239965]: 2026-01-26 16:06:42.846 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.101 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.134 239969 DEBUG nova.policy [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ad9c6196af60436caf20747e96ad8388', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56f8818d291f4e738d868673048ce025', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.337 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.373 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.420 239969 DEBUG nova.storage.rbd_utils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] resizing rbd image 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.472 239969 DEBUG nova.network.neutron [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Successfully updated port: b7f76f11-77ec-41f2-8177-93de5bf467a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.497 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "refresh_cache-2260a3d1-973c-4364-af38-ebd9acd287f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.498 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquired lock "refresh_cache-2260a3d1-973c-4364-af38-ebd9acd287f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.498 239969 DEBUG nova.network.neutron [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.544 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.545 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:43 compute-0 ceph-mon[75140]: osdmap e259: 3 total, 3 up, 3 in
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.614 239969 DEBUG nova.objects.instance [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'migration_context' on Instance uuid 8dcfffb0-b36f-463e-b49a-9954483b8f5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.639 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.640 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Ensure instance console log exists: /var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.640 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.641 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.641 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.771 239969 DEBUG nova.compute.manager [req-11629b7e-2ace-44e9-864c-c882090e1b3e req-a9627bc1-f860-4a0b-8824-6191a190fc06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Received event network-changed-b7f76f11-77ec-41f2-8177-93de5bf467a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.772 239969 DEBUG nova.compute.manager [req-11629b7e-2ace-44e9-864c-c882090e1b3e req-a9627bc1-f860-4a0b-8824-6191a190fc06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Refreshing instance network info cache due to event network-changed-b7f76f11-77ec-41f2-8177-93de5bf467a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:06:43 compute-0 nova_compute[239965]: 2026-01-26 16:06:43.772 239969 DEBUG oslo_concurrency.lockutils [req-11629b7e-2ace-44e9-864c-c882090e1b3e req-a9627bc1-f860-4a0b-8824-6191a190fc06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-2260a3d1-973c-4364-af38-ebd9acd287f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:06:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 274 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 5.0 MiB/s wr, 112 op/s
Jan 26 16:06:44 compute-0 ovn_controller[146046]: 2026-01-26T16:06:44Z|00092|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:8a:67 10.100.0.5
Jan 26 16:06:44 compute-0 ovn_controller[146046]: 2026-01-26T16:06:44Z|00093|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:8a:67 10.100.0.5
Jan 26 16:06:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:06:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/532363195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.131 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.249 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.250 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000055 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.254 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.254 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.436 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.437 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3353MB free_disk=59.91782923322171GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.437 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.437 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.451 239969 DEBUG nova.network.neutron [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.544 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 571cd10f-e22e-43b6-9523-e3539d047e4a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.544 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 1c5a4474-3020-4bb1-96e5-41b02d0eb962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.545 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 2260a3d1-973c-4364-af38-ebd9acd287f8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.545 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 8dcfffb0-b36f-463e-b49a-9954483b8f5b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.545 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.545 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.662 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:44 compute-0 ceph-mon[75140]: pgmap v1637: 305 pgs: 305 active+clean; 274 MiB data, 743 MiB used, 59 GiB / 60 GiB avail; 425 KiB/s rd, 5.0 MiB/s wr, 112 op/s
Jan 26 16:06:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/532363195' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:44 compute-0 nova_compute[239965]: 2026-01-26 16:06:44.867 239969 DEBUG nova.network.neutron [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Successfully created port: d0f5820b-f3ad-4a26-84c4-c20c732dbcfd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:06:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:06:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3330741401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:45 compute-0 nova_compute[239965]: 2026-01-26 16:06:45.237 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:45 compute-0 nova_compute[239965]: 2026-01-26 16:06:45.242 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:06:45 compute-0 nova_compute[239965]: 2026-01-26 16:06:45.392 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:06:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3330741401' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:46 compute-0 nova_compute[239965]: 2026-01-26 16:06:46.029 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1638: 305 pgs: 305 active+clean; 303 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 471 KiB/s rd, 6.5 MiB/s wr, 128 op/s
Jan 26 16:06:46 compute-0 ceph-mon[75140]: pgmap v1638: 305 pgs: 305 active+clean; 303 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 471 KiB/s rd, 6.5 MiB/s wr, 128 op/s
Jan 26 16:06:46 compute-0 sudo[315831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:06:46 compute-0 sudo[315831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:06:46 compute-0 sudo[315831]: pam_unix(sudo:session): session closed for user root
Jan 26 16:06:47 compute-0 sudo[315856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:06:47 compute-0 sudo[315856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:06:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:47 compute-0 sudo[315856]: pam_unix(sudo:session): session closed for user root
Jan 26 16:06:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:06:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:06:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:06:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:06:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:06:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:06:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:06:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:06:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:06:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:06:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:06:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:06:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:06:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:06:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:06:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:06:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:06:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:06:47 compute-0 sudo[315914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:06:47 compute-0 sudo[315914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:06:47 compute-0 sudo[315914]: pam_unix(sudo:session): session closed for user root
Jan 26 16:06:47 compute-0 sudo[315939]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:06:47 compute-0 sudo[315939]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 6.8 MiB/s wr, 144 op/s
Jan 26 16:06:48 compute-0 nova_compute[239965]: 2026-01-26 16:06:48.105 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:48 compute-0 podman[315977]: 2026-01-26 16:06:48.119201232 +0000 UTC m=+0.048474477 container create 279f6f14fb92b2527eaf093a52072ac4664f2d77f5076fc7085230fab80fe0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:06:48 compute-0 systemd[1]: Started libpod-conmon-279f6f14fb92b2527eaf093a52072ac4664f2d77f5076fc7085230fab80fe0fa.scope.
Jan 26 16:06:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:06:48 compute-0 podman[315977]: 2026-01-26 16:06:48.095284397 +0000 UTC m=+0.024557652 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:06:48 compute-0 podman[315977]: 2026-01-26 16:06:48.202569351 +0000 UTC m=+0.131842606 container init 279f6f14fb92b2527eaf093a52072ac4664f2d77f5076fc7085230fab80fe0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_kowalevski, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:06:48 compute-0 podman[315977]: 2026-01-26 16:06:48.211619943 +0000 UTC m=+0.140893168 container start 279f6f14fb92b2527eaf093a52072ac4664f2d77f5076fc7085230fab80fe0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:06:48 compute-0 podman[315977]: 2026-01-26 16:06:48.21601923 +0000 UTC m=+0.145292455 container attach 279f6f14fb92b2527eaf093a52072ac4664f2d77f5076fc7085230fab80fe0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_kowalevski, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:06:48 compute-0 priceless_kowalevski[315994]: 167 167
Jan 26 16:06:48 compute-0 systemd[1]: libpod-279f6f14fb92b2527eaf093a52072ac4664f2d77f5076fc7085230fab80fe0fa.scope: Deactivated successfully.
Jan 26 16:06:48 compute-0 podman[315977]: 2026-01-26 16:06:48.220411337 +0000 UTC m=+0.149684572 container died 279f6f14fb92b2527eaf093a52072ac4664f2d77f5076fc7085230fab80fe0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_kowalevski, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:06:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aa81be060c8a4d169aa72b5d880546e5a928b5d8e0dace335dbb1602e42a84d-merged.mount: Deactivated successfully.
Jan 26 16:06:48 compute-0 podman[315977]: 2026-01-26 16:06:48.268170336 +0000 UTC m=+0.197443561 container remove 279f6f14fb92b2527eaf093a52072ac4664f2d77f5076fc7085230fab80fe0fa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_kowalevski, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:06:48 compute-0 systemd[1]: libpod-conmon-279f6f14fb92b2527eaf093a52072ac4664f2d77f5076fc7085230fab80fe0fa.scope: Deactivated successfully.
Jan 26 16:06:48 compute-0 podman[316018]: 2026-01-26 16:06:48.467075911 +0000 UTC m=+0.053501149 container create ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:06:48 compute-0 systemd[1]: Started libpod-conmon-ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202.scope.
Jan 26 16:06:48 compute-0 podman[316018]: 2026-01-26 16:06:48.446022176 +0000 UTC m=+0.032447444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:06:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1aa176f330251d26327048b939c6f3e89d7cb6f31c0781dd64b548db13cb984/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1aa176f330251d26327048b939c6f3e89d7cb6f31c0781dd64b548db13cb984/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1aa176f330251d26327048b939c6f3e89d7cb6f31c0781dd64b548db13cb984/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1aa176f330251d26327048b939c6f3e89d7cb6f31c0781dd64b548db13cb984/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1aa176f330251d26327048b939c6f3e89d7cb6f31c0781dd64b548db13cb984/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:48 compute-0 podman[316018]: 2026-01-26 16:06:48.565263113 +0000 UTC m=+0.151688381 container init ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendeleev, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 16:06:48 compute-0 podman[316018]: 2026-01-26 16:06:48.572590692 +0000 UTC m=+0.159015930 container start ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 16:06:48 compute-0 podman[316018]: 2026-01-26 16:06:48.576347344 +0000 UTC m=+0.162772602 container attach ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendeleev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 16:06:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:06:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/325323126' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:06:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:06:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/325323126' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:06:48 compute-0 ceph-mon[75140]: pgmap v1639: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 6.8 MiB/s wr, 144 op/s
Jan 26 16:06:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/325323126' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:06:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/325323126' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:06:48 compute-0 nova_compute[239965]: 2026-01-26 16:06:48.783 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:06:48 compute-0 nova_compute[239965]: 2026-01-26 16:06:48.783 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 4.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022185332857243612 of space, bias 1.0, pg target 0.6655599857173083 quantized to 32 (current 32)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001016172121036423 of space, bias 1.0, pg target 0.3048516363109269 quantized to 32 (current 32)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.985788156427799e-07 of space, bias 4.0, pg target 0.0009582945787713359 quantized to 16 (current 16)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:06:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:06:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:48.966 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:48 compute-0 nova_compute[239965]: 2026-01-26 16:06:48.968 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:48.969 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:06:49 compute-0 festive_mendeleev[316035]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:06:49 compute-0 festive_mendeleev[316035]: --> All data devices are unavailable
Jan 26 16:06:49 compute-0 systemd[1]: libpod-ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202.scope: Deactivated successfully.
Jan 26 16:06:49 compute-0 conmon[316035]: conmon ce3f3927651cf2a9f34f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202.scope/container/memory.events
Jan 26 16:06:49 compute-0 podman[316018]: 2026-01-26 16:06:49.152471708 +0000 UTC m=+0.738896946 container died ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1aa176f330251d26327048b939c6f3e89d7cb6f31c0781dd64b548db13cb984-merged.mount: Deactivated successfully.
Jan 26 16:06:49 compute-0 podman[316018]: 2026-01-26 16:06:49.200861191 +0000 UTC m=+0.787286469 container remove ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:06:49 compute-0 systemd[1]: libpod-conmon-ce3f3927651cf2a9f34f57cf264d22ca41ca0f5d5008c7a54d3be56c9408e202.scope: Deactivated successfully.
Jan 26 16:06:49 compute-0 sudo[315939]: pam_unix(sudo:session): session closed for user root
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.281 239969 DEBUG nova.network.neutron [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Updating instance_info_cache with network_info: [{"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.304 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Releasing lock "refresh_cache-2260a3d1-973c-4364-af38-ebd9acd287f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.304 239969 DEBUG nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Instance network_info: |[{"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.304 239969 DEBUG oslo_concurrency.lockutils [req-11629b7e-2ace-44e9-864c-c882090e1b3e req-a9627bc1-f860-4a0b-8824-6191a190fc06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-2260a3d1-973c-4364-af38-ebd9acd287f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.305 239969 DEBUG nova.network.neutron [req-11629b7e-2ace-44e9-864c-c882090e1b3e req-a9627bc1-f860-4a0b-8824-6191a190fc06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Refreshing network info cache for port b7f76f11-77ec-41f2-8177-93de5bf467a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.308 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Start _get_guest_xml network_info=[{"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:06:49 compute-0 sudo[316069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:06:49 compute-0 sudo[316069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.314 239969 WARNING nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:06:49 compute-0 sudo[316069]: pam_unix(sudo:session): session closed for user root
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.323 239969 DEBUG nova.virt.libvirt.host [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.325 239969 DEBUG nova.virt.libvirt.host [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.329 239969 DEBUG nova.virt.libvirt.host [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.330 239969 DEBUG nova.virt.libvirt.host [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.331 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.331 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.331 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.331 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.332 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.332 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.332 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.332 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.333 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.333 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.333 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.333 239969 DEBUG nova.virt.hardware [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.337 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:49 compute-0 sudo[316094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:06:49 compute-0 sudo[316094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:06:49 compute-0 podman[316150]: 2026-01-26 16:06:49.677768358 +0000 UTC m=+0.044977351 container create f13dbe8991c6334fd27437e1b245f9ad06fe6c1b10399ecbc7341ea4c2385911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_raman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:06:49 compute-0 systemd[1]: Started libpod-conmon-f13dbe8991c6334fd27437e1b245f9ad06fe6c1b10399ecbc7341ea4c2385911.scope.
Jan 26 16:06:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:06:49 compute-0 podman[316150]: 2026-01-26 16:06:49.65866833 +0000 UTC m=+0.025877343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:06:49 compute-0 podman[316150]: 2026-01-26 16:06:49.75634019 +0000 UTC m=+0.123549203 container init f13dbe8991c6334fd27437e1b245f9ad06fe6c1b10399ecbc7341ea4c2385911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:06:49 compute-0 podman[316150]: 2026-01-26 16:06:49.765310299 +0000 UTC m=+0.132519292 container start f13dbe8991c6334fd27437e1b245f9ad06fe6c1b10399ecbc7341ea4c2385911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_raman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 16:06:49 compute-0 podman[316150]: 2026-01-26 16:06:49.770126427 +0000 UTC m=+0.137335450 container attach f13dbe8991c6334fd27437e1b245f9ad06fe6c1b10399ecbc7341ea4c2385911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:06:49 compute-0 gracious_raman[316166]: 167 167
Jan 26 16:06:49 compute-0 systemd[1]: libpod-f13dbe8991c6334fd27437e1b245f9ad06fe6c1b10399ecbc7341ea4c2385911.scope: Deactivated successfully.
Jan 26 16:06:49 compute-0 podman[316150]: 2026-01-26 16:06:49.773609452 +0000 UTC m=+0.140818445 container died f13dbe8991c6334fd27437e1b245f9ad06fe6c1b10399ecbc7341ea4c2385911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:06:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-2db2b628bcd95e4e6ed36e79d24a0d6d1a3058350e2d7d78763b847980a32973-merged.mount: Deactivated successfully.
Jan 26 16:06:49 compute-0 podman[316150]: 2026-01-26 16:06:49.813667222 +0000 UTC m=+0.180876215 container remove f13dbe8991c6334fd27437e1b245f9ad06fe6c1b10399ecbc7341ea4c2385911 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_raman, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:06:49 compute-0 systemd[1]: libpod-conmon-f13dbe8991c6334fd27437e1b245f9ad06fe6c1b10399ecbc7341ea4c2385911.scope: Deactivated successfully.
Jan 26 16:06:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:06:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/853311848' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.904 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.924 239969 DEBUG nova.storage.rbd_utils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2260a3d1-973c-4364-af38-ebd9acd287f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:49 compute-0 nova_compute[239965]: 2026-01-26 16:06:49.928 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/853311848' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:50 compute-0 podman[316211]: 2026-01-26 16:06:50.005185667 +0000 UTC m=+0.044764056 container create 90e1c20ba8130e4f66315397f7e3a949f95dda98043e844d09c686b1d7b9401c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jang, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 16:06:50 compute-0 systemd[1]: Started libpod-conmon-90e1c20ba8130e4f66315397f7e3a949f95dda98043e844d09c686b1d7b9401c.scope.
Jan 26 16:06:50 compute-0 podman[316211]: 2026-01-26 16:06:49.983438325 +0000 UTC m=+0.023016744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:06:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8667e7cc44f15ca306e1c288c8df76fabb8e98eba5796110ce17cd1ec69ccf27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8667e7cc44f15ca306e1c288c8df76fabb8e98eba5796110ce17cd1ec69ccf27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8667e7cc44f15ca306e1c288c8df76fabb8e98eba5796110ce17cd1ec69ccf27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8667e7cc44f15ca306e1c288c8df76fabb8e98eba5796110ce17cd1ec69ccf27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1640: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 6.8 MiB/s wr, 144 op/s
Jan 26 16:06:50 compute-0 podman[316211]: 2026-01-26 16:06:50.102080287 +0000 UTC m=+0.141658696 container init 90e1c20ba8130e4f66315397f7e3a949f95dda98043e844d09c686b1d7b9401c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jang, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:06:50 compute-0 podman[316211]: 2026-01-26 16:06:50.10833542 +0000 UTC m=+0.147913829 container start 90e1c20ba8130e4f66315397f7e3a949f95dda98043e844d09c686b1d7b9401c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jang, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:06:50 compute-0 podman[316211]: 2026-01-26 16:06:50.113100467 +0000 UTC m=+0.152678876 container attach 90e1c20ba8130e4f66315397f7e3a949f95dda98043e844d09c686b1d7b9401c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.174 239969 DEBUG nova.network.neutron [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Successfully updated port: d0f5820b-f3ad-4a26-84c4-c20c732dbcfd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.195 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "refresh_cache-8dcfffb0-b36f-463e-b49a-9954483b8f5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.196 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquired lock "refresh_cache-8dcfffb0-b36f-463e-b49a-9954483b8f5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.196 239969 DEBUG nova.network.neutron [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:06:50 compute-0 nice_jang[316228]: {
Jan 26 16:06:50 compute-0 nice_jang[316228]:     "0": [
Jan 26 16:06:50 compute-0 nice_jang[316228]:         {
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "devices": [
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "/dev/loop3"
Jan 26 16:06:50 compute-0 nice_jang[316228]:             ],
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_name": "ceph_lv0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_size": "21470642176",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "name": "ceph_lv0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "tags": {
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.cluster_name": "ceph",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.crush_device_class": "",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.encrypted": "0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.objectstore": "bluestore",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.osd_id": "0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.type": "block",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.vdo": "0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.with_tpm": "0"
Jan 26 16:06:50 compute-0 nice_jang[316228]:             },
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "type": "block",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "vg_name": "ceph_vg0"
Jan 26 16:06:50 compute-0 nice_jang[316228]:         }
Jan 26 16:06:50 compute-0 nice_jang[316228]:     ],
Jan 26 16:06:50 compute-0 nice_jang[316228]:     "1": [
Jan 26 16:06:50 compute-0 nice_jang[316228]:         {
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "devices": [
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "/dev/loop4"
Jan 26 16:06:50 compute-0 nice_jang[316228]:             ],
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_name": "ceph_lv1",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_size": "21470642176",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "name": "ceph_lv1",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "tags": {
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.cluster_name": "ceph",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.crush_device_class": "",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.encrypted": "0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.objectstore": "bluestore",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.osd_id": "1",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.type": "block",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.vdo": "0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.with_tpm": "0"
Jan 26 16:06:50 compute-0 nice_jang[316228]:             },
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "type": "block",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "vg_name": "ceph_vg1"
Jan 26 16:06:50 compute-0 nice_jang[316228]:         }
Jan 26 16:06:50 compute-0 nice_jang[316228]:     ],
Jan 26 16:06:50 compute-0 nice_jang[316228]:     "2": [
Jan 26 16:06:50 compute-0 nice_jang[316228]:         {
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "devices": [
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "/dev/loop5"
Jan 26 16:06:50 compute-0 nice_jang[316228]:             ],
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_name": "ceph_lv2",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_size": "21470642176",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "name": "ceph_lv2",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "tags": {
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.cluster_name": "ceph",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.crush_device_class": "",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.encrypted": "0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.objectstore": "bluestore",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.osd_id": "2",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.type": "block",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.vdo": "0",
Jan 26 16:06:50 compute-0 nice_jang[316228]:                 "ceph.with_tpm": "0"
Jan 26 16:06:50 compute-0 nice_jang[316228]:             },
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "type": "block",
Jan 26 16:06:50 compute-0 nice_jang[316228]:             "vg_name": "ceph_vg2"
Jan 26 16:06:50 compute-0 nice_jang[316228]:         }
Jan 26 16:06:50 compute-0 nice_jang[316228]:     ]
Jan 26 16:06:50 compute-0 nice_jang[316228]: }
Jan 26 16:06:50 compute-0 systemd[1]: libpod-90e1c20ba8130e4f66315397f7e3a949f95dda98043e844d09c686b1d7b9401c.scope: Deactivated successfully.
Jan 26 16:06:50 compute-0 podman[316211]: 2026-01-26 16:06:50.414715385 +0000 UTC m=+0.454293814 container died 90e1c20ba8130e4f66315397f7e3a949f95dda98043e844d09c686b1d7b9401c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jang, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.443 239969 DEBUG nova.network.neutron [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:06:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-8667e7cc44f15ca306e1c288c8df76fabb8e98eba5796110ce17cd1ec69ccf27-merged.mount: Deactivated successfully.
Jan 26 16:06:50 compute-0 podman[316211]: 2026-01-26 16:06:50.46645133 +0000 UTC m=+0.506029759 container remove 90e1c20ba8130e4f66315397f7e3a949f95dda98043e844d09c686b1d7b9401c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.470 239969 DEBUG nova.compute.manager [req-4b482db0-af49-434c-b997-ac6ae888832c req-f34e9323-8e47-4494-ba53-fe914ae62742 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Received event network-changed-d0f5820b-f3ad-4a26-84c4-c20c732dbcfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.471 239969 DEBUG nova.compute.manager [req-4b482db0-af49-434c-b997-ac6ae888832c req-f34e9323-8e47-4494-ba53-fe914ae62742 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Refreshing instance network info cache due to event network-changed-d0f5820b-f3ad-4a26-84c4-c20c732dbcfd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.472 239969 DEBUG oslo_concurrency.lockutils [req-4b482db0-af49-434c-b997-ac6ae888832c req-f34e9323-8e47-4494-ba53-fe914ae62742 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-8dcfffb0-b36f-463e-b49a-9954483b8f5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:06:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:06:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686768246' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:50 compute-0 systemd[1]: libpod-conmon-90e1c20ba8130e4f66315397f7e3a949f95dda98043e844d09c686b1d7b9401c.scope: Deactivated successfully.
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.498 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.500 239969 DEBUG nova.virt.libvirt.vif [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:06:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1539940225',display_name='tempest-ServersTestJSON-server-1539940225',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1539940225',id=87,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-y4k33z3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:06:38Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=2260a3d1-973c-4364-af38-ebd9acd287f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.501 239969 DEBUG nova.network.os_vif_util [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.502 239969 DEBUG nova.network.os_vif_util [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:3a:ef,bridge_name='br-int',has_traffic_filtering=True,id=b7f76f11-77ec-41f2-8177-93de5bf467a9,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f76f11-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.503 239969 DEBUG nova.objects.instance [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2260a3d1-973c-4364-af38-ebd9acd287f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:50 compute-0 sudo[316094]: pam_unix(sudo:session): session closed for user root
Jan 26 16:06:50 compute-0 sudo[316272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:06:50 compute-0 sudo[316272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:06:50 compute-0 sudo[316272]: pam_unix(sudo:session): session closed for user root
Jan 26 16:06:50 compute-0 sudo[316297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:06:50 compute-0 sudo[316297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.892 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <uuid>2260a3d1-973c-4364-af38-ebd9acd287f8</uuid>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <name>instance-00000057</name>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestJSON-server-1539940225</nova:name>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:06:49</nova:creationTime>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <nova:user uuid="392bf4c554724bd3b097b990cec964ac">tempest-ServersTestJSON-190839520-project-member</nova:user>
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <nova:project uuid="e3d3c26abe454a90816833e484abbbd5">tempest-ServersTestJSON-190839520</nova:project>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <nova:port uuid="b7f76f11-77ec-41f2-8177-93de5bf467a9">
Jan 26 16:06:50 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <system>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <entry name="serial">2260a3d1-973c-4364-af38-ebd9acd287f8</entry>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <entry name="uuid">2260a3d1-973c-4364-af38-ebd9acd287f8</entry>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     </system>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <os>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   </os>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <features>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   </features>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2260a3d1-973c-4364-af38-ebd9acd287f8_disk">
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       </source>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2260a3d1-973c-4364-af38-ebd9acd287f8_disk.config">
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       </source>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:06:50 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:60:3a:ef"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <target dev="tapb7f76f11-77"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8/console.log" append="off"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <video>
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     </video>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:06:50 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:06:50 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:06:50 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:06:50 compute-0 nova_compute[239965]: </domain>
Jan 26 16:06:50 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.892 239969 DEBUG nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Preparing to wait for external event network-vif-plugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.893 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.893 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.893 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.894 239969 DEBUG nova.virt.libvirt.vif [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:06:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1539940225',display_name='tempest-ServersTestJSON-server-1539940225',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1539940225',id=87,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-y4k33z3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:06:38Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=2260a3d1-973c-4364-af38-ebd9acd287f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.894 239969 DEBUG nova.network.os_vif_util [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.895 239969 DEBUG nova.network.os_vif_util [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:3a:ef,bridge_name='br-int',has_traffic_filtering=True,id=b7f76f11-77ec-41f2-8177-93de5bf467a9,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f76f11-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.895 239969 DEBUG os_vif [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:3a:ef,bridge_name='br-int',has_traffic_filtering=True,id=b7f76f11-77ec-41f2-8177-93de5bf467a9,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f76f11-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.896 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.896 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.902 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.902 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7f76f11-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.903 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7f76f11-77, col_values=(('external_ids', {'iface-id': 'b7f76f11-77ec-41f2-8177-93de5bf467a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:60:3a:ef', 'vm-uuid': '2260a3d1-973c-4364-af38-ebd9acd287f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.905 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:50 compute-0 NetworkManager[48954]: <info>  [1769443610.9062] manager: (tapb7f76f11-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/369)
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.906 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.919 239969 INFO os_vif [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:3a:ef,bridge_name='br-int',has_traffic_filtering=True,id=b7f76f11-77ec-41f2-8177-93de5bf467a9,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f76f11-77')
Jan 26 16:06:50 compute-0 podman[316333]: 2026-01-26 16:06:50.942768882 +0000 UTC m=+0.062457048 container create 4d73de4a0d35ec749dced018491b68e689fd474874343af07aa3e5234f7ff4c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:06:50 compute-0 ceph-mon[75140]: pgmap v1640: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 537 KiB/s rd, 6.8 MiB/s wr, 144 op/s
Jan 26 16:06:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3686768246' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:50.971 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.978 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.979 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.980 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No VIF found with MAC fa:16:3e:60:3a:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:06:50 compute-0 nova_compute[239965]: 2026-01-26 16:06:50.980 239969 INFO nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Using config drive
Jan 26 16:06:50 compute-0 systemd[1]: Started libpod-conmon-4d73de4a0d35ec749dced018491b68e689fd474874343af07aa3e5234f7ff4c7.scope.
Jan 26 16:06:51 compute-0 nova_compute[239965]: 2026-01-26 16:06:51.003 239969 DEBUG nova.storage.rbd_utils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2260a3d1-973c-4364-af38-ebd9acd287f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:06:51 compute-0 podman[316333]: 2026-01-26 16:06:50.917892493 +0000 UTC m=+0.037580719 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:06:51 compute-0 podman[316333]: 2026-01-26 16:06:51.026653144 +0000 UTC m=+0.146341310 container init 4d73de4a0d35ec749dced018491b68e689fd474874343af07aa3e5234f7ff4c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:06:51 compute-0 nova_compute[239965]: 2026-01-26 16:06:51.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:51 compute-0 podman[316333]: 2026-01-26 16:06:51.03423564 +0000 UTC m=+0.153923806 container start 4d73de4a0d35ec749dced018491b68e689fd474874343af07aa3e5234f7ff4c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:06:51 compute-0 podman[316333]: 2026-01-26 16:06:51.038259108 +0000 UTC m=+0.157947304 container attach 4d73de4a0d35ec749dced018491b68e689fd474874343af07aa3e5234f7ff4c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:06:51 compute-0 nifty_shtern[316355]: 167 167
Jan 26 16:06:51 compute-0 systemd[1]: libpod-4d73de4a0d35ec749dced018491b68e689fd474874343af07aa3e5234f7ff4c7.scope: Deactivated successfully.
Jan 26 16:06:51 compute-0 podman[316333]: 2026-01-26 16:06:51.040061482 +0000 UTC m=+0.159749648 container died 4d73de4a0d35ec749dced018491b68e689fd474874343af07aa3e5234f7ff4c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:06:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-2232a444e2549c6242706e90a697bd8e4adb64ad7365d4820e53eedd15435e14-merged.mount: Deactivated successfully.
Jan 26 16:06:51 compute-0 podman[316333]: 2026-01-26 16:06:51.080293016 +0000 UTC m=+0.199981182 container remove 4d73de4a0d35ec749dced018491b68e689fd474874343af07aa3e5234f7ff4c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 16:06:51 compute-0 systemd[1]: libpod-conmon-4d73de4a0d35ec749dced018491b68e689fd474874343af07aa3e5234f7ff4c7.scope: Deactivated successfully.
Jan 26 16:06:51 compute-0 podman[316394]: 2026-01-26 16:06:51.261314845 +0000 UTC m=+0.042381759 container create 16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 26 16:06:51 compute-0 systemd[1]: Started libpod-conmon-16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935.scope.
Jan 26 16:06:51 compute-0 podman[316394]: 2026-01-26 16:06:51.241819638 +0000 UTC m=+0.022886562 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:06:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:06:51 compute-0 nova_compute[239965]: 2026-01-26 16:06:51.341 239969 INFO nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Creating config drive at /var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8/disk.config
Jan 26 16:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28e2d3c1324351de6785722a954ccfc259367d3327e896a42eea89823cbaf28/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28e2d3c1324351de6785722a954ccfc259367d3327e896a42eea89823cbaf28/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28e2d3c1324351de6785722a954ccfc259367d3327e896a42eea89823cbaf28/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e28e2d3c1324351de6785722a954ccfc259367d3327e896a42eea89823cbaf28/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:06:51 compute-0 nova_compute[239965]: 2026-01-26 16:06:51.350 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplh7_aa9m execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:51 compute-0 podman[316394]: 2026-01-26 16:06:51.36416774 +0000 UTC m=+0.145234664 container init 16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ritchie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:06:51 compute-0 podman[316394]: 2026-01-26 16:06:51.375260922 +0000 UTC m=+0.156327826 container start 16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:06:51 compute-0 podman[316394]: 2026-01-26 16:06:51.378990163 +0000 UTC m=+0.160057097 container attach 16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:06:51 compute-0 nova_compute[239965]: 2026-01-26 16:06:51.507 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplh7_aa9m" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:51 compute-0 nova_compute[239965]: 2026-01-26 16:06:51.541 239969 DEBUG nova.storage.rbd_utils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2260a3d1-973c-4364-af38-ebd9acd287f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:51 compute-0 nova_compute[239965]: 2026-01-26 16:06:51.546 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8/disk.config 2260a3d1-973c-4364-af38-ebd9acd287f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 536 KiB/s rd, 6.5 MiB/s wr, 142 op/s
Jan 26 16:06:52 compute-0 lvm[316524]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:06:52 compute-0 lvm[316524]: VG ceph_vg0 finished
Jan 26 16:06:52 compute-0 lvm[316528]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:06:52 compute-0 lvm[316528]: VG ceph_vg2 finished
Jan 26 16:06:52 compute-0 lvm[316527]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:06:52 compute-0 lvm[316527]: VG ceph_vg1 finished
Jan 26 16:06:52 compute-0 recursing_ritchie[316410]: {}
Jan 26 16:06:52 compute-0 systemd[1]: libpod-16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935.scope: Deactivated successfully.
Jan 26 16:06:52 compute-0 systemd[1]: libpod-16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935.scope: Consumed 1.506s CPU time.
Jan 26 16:06:52 compute-0 ceph-mon[75140]: pgmap v1641: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 536 KiB/s rd, 6.5 MiB/s wr, 142 op/s
Jan 26 16:06:52 compute-0 podman[316534]: 2026-01-26 16:06:52.327503116 +0000 UTC m=+0.031131813 container died 16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ritchie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:06:52 compute-0 nova_compute[239965]: 2026-01-26 16:06:52.343 239969 DEBUG oslo_concurrency.processutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8/disk.config 2260a3d1-973c-4364-af38-ebd9acd287f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.797s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:52 compute-0 nova_compute[239965]: 2026-01-26 16:06:52.345 239969 INFO nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Deleting local config drive /var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8/disk.config because it was imported into RBD.
Jan 26 16:06:52 compute-0 kernel: tapb7f76f11-77: entered promiscuous mode
Jan 26 16:06:52 compute-0 NetworkManager[48954]: <info>  [1769443612.4156] manager: (tapb7f76f11-77): new Tun device (/org/freedesktop/NetworkManager/Devices/370)
Jan 26 16:06:52 compute-0 systemd-udevd[316526]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:06:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e28e2d3c1324351de6785722a954ccfc259367d3327e896a42eea89823cbaf28-merged.mount: Deactivated successfully.
Jan 26 16:06:52 compute-0 nova_compute[239965]: 2026-01-26 16:06:52.420 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:52 compute-0 ovn_controller[146046]: 2026-01-26T16:06:52Z|00862|binding|INFO|Claiming lport b7f76f11-77ec-41f2-8177-93de5bf467a9 for this chassis.
Jan 26 16:06:52 compute-0 ovn_controller[146046]: 2026-01-26T16:06:52Z|00863|binding|INFO|b7f76f11-77ec-41f2-8177-93de5bf467a9: Claiming fa:16:3e:60:3a:ef 10.100.0.7
Jan 26 16:06:52 compute-0 NetworkManager[48954]: <info>  [1769443612.4319] device (tapb7f76f11-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:06:52 compute-0 NetworkManager[48954]: <info>  [1769443612.4329] device (tapb7f76f11-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:06:52 compute-0 ovn_controller[146046]: 2026-01-26T16:06:52Z|00864|binding|INFO|Setting lport b7f76f11-77ec-41f2-8177-93de5bf467a9 ovn-installed in OVS
Jan 26 16:06:52 compute-0 nova_compute[239965]: 2026-01-26 16:06:52.442 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:52 compute-0 nova_compute[239965]: 2026-01-26 16:06:52.447 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:52 compute-0 systemd-machined[208061]: New machine qemu-108-instance-00000057.
Jan 26 16:06:52 compute-0 podman[316534]: 2026-01-26 16:06:52.455159518 +0000 UTC m=+0.158788185 container remove 16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:06:52 compute-0 systemd[1]: Started Virtual Machine qemu-108-instance-00000057.
Jan 26 16:06:52 compute-0 systemd[1]: libpod-conmon-16c7dfb294b6e00dc26a3f13012960677b43b8e96bc6d7576085c30dc9f33935.scope: Deactivated successfully.
Jan 26 16:06:52 compute-0 sudo[316297]: pam_unix(sudo:session): session closed for user root
Jan 26 16:06:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:06:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:06:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:06:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:06:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:52 compute-0 sudo[316570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:06:52 compute-0 sudo[316570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:06:52 compute-0 sudo[316570]: pam_unix(sudo:session): session closed for user root
Jan 26 16:06:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:52.931 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:3a:ef 10.100.0.7'], port_security=['fa:16:3e:60:3a:ef 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2260a3d1-973c-4364-af38-ebd9acd287f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b7f76f11-77ec-41f2-8177-93de5bf467a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:52 compute-0 ovn_controller[146046]: 2026-01-26T16:06:52Z|00865|binding|INFO|Setting lport b7f76f11-77ec-41f2-8177-93de5bf467a9 up in Southbound
Jan 26 16:06:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:52.932 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b7f76f11-77ec-41f2-8177-93de5bf467a9 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 bound to our chassis
Jan 26 16:06:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:52.934 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:06:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:52.966 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1dad7618-8a8c-405e-a36c-b31b9d21d763]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.006 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[55d5c8f3-d3e0-410e-8bce-19e5bd448544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.010 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[97132049-337e-435b-8daa-694d2c74592e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.043 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[40a19a45-c1cd-4898-9ca3-b278c9afe49d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.062 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7cccc340-af1c-439f-a156-adf379732474]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 18624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316600, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.086 239969 DEBUG nova.network.neutron [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Updating instance_info_cache with network_info: [{"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.093 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbed7f6-4710-49f9-af0e-80d1c4a83da1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316601, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316601, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.095 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.097 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.099 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.099 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.099 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:53.100 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:53 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.106 239969 DEBUG nova.network.neutron [req-11629b7e-2ace-44e9-864c-c882090e1b3e req-a9627bc1-f860-4a0b-8824-6191a190fc06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Updated VIF entry in instance network info cache for port b7f76f11-77ec-41f2-8177-93de5bf467a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.107 239969 DEBUG nova.network.neutron [req-11629b7e-2ace-44e9-864c-c882090e1b3e req-a9627bc1-f860-4a0b-8824-6191a190fc06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Updating instance_info_cache with network_info: [{"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.111 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Releasing lock "refresh_cache-8dcfffb0-b36f-463e-b49a-9954483b8f5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.111 239969 DEBUG nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Instance network_info: |[{"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.111 239969 DEBUG oslo_concurrency.lockutils [req-4b482db0-af49-434c-b997-ac6ae888832c req-f34e9323-8e47-4494-ba53-fe914ae62742 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-8dcfffb0-b36f-463e-b49a-9954483b8f5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.112 239969 DEBUG nova.network.neutron [req-4b482db0-af49-434c-b997-ac6ae888832c req-f34e9323-8e47-4494-ba53-fe914ae62742 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Refreshing network info cache for port d0f5820b-f3ad-4a26-84c4-c20c732dbcfd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.115 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Start _get_guest_xml network_info=[{"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.122 239969 WARNING nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.126 239969 DEBUG oslo_concurrency.lockutils [req-11629b7e-2ace-44e9-864c-c882090e1b3e req-a9627bc1-f860-4a0b-8824-6191a190fc06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-2260a3d1-973c-4364-af38-ebd9acd287f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.129 239969 DEBUG nova.virt.libvirt.host [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.130 239969 DEBUG nova.virt.libvirt.host [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.140 239969 DEBUG nova.virt.libvirt.host [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.142 239969 DEBUG nova.virt.libvirt.host [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.142 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.142 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.143 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.143 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.143 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.143 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.144 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.144 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.144 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.145 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.145 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.145 239969 DEBUG nova.virt.hardware [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.151 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.325 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443613.3248482, 2260a3d1-973c-4364-af38-ebd9acd287f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.325 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] VM Started (Lifecycle Event)
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.369 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.373 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443613.3277411, 2260a3d1-973c-4364-af38-ebd9acd287f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.373 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] VM Paused (Lifecycle Event)
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.399 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.402 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.427 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:06:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:06:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:06:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:06:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2966394209' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.752 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.785 239969 DEBUG nova.storage.rbd_utils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:53 compute-0 nova_compute[239965]: 2026-01-26 16:06:53.790 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1642: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Jan 26 16:06:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:06:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3654438576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.442 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.652s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.443 239969 DEBUG nova.virt.libvirt.vif [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1343019912',display_name='tempest-ServerActionsTestOtherB-server-1343019912',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1343019912',id=88,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-iovk31tw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:06:42Z,user_data=None,user_id='ad9c6196af60436caf20747e96ad8388',uuid=8dcfffb0-b36f-463e-b49a-9954483b8f5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.444 239969 DEBUG nova.network.os_vif_util [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.445 239969 DEBUG nova.network.os_vif_util [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:71:25,bridge_name='br-int',has_traffic_filtering=True,id=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0f5820b-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.446 239969 DEBUG nova.objects.instance [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8dcfffb0-b36f-463e-b49a-9954483b8f5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:06:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2966394209' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:54 compute-0 ceph-mon[75140]: pgmap v1642: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 2.5 MiB/s wr, 63 op/s
Jan 26 16:06:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3654438576' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.746 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <uuid>8dcfffb0-b36f-463e-b49a-9954483b8f5b</uuid>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <name>instance-00000058</name>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestOtherB-server-1343019912</nova:name>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:06:53</nova:creationTime>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <nova:user uuid="ad9c6196af60436caf20747e96ad8388">tempest-ServerActionsTestOtherB-1778121066-project-member</nova:user>
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <nova:project uuid="56f8818d291f4e738d868673048ce025">tempest-ServerActionsTestOtherB-1778121066</nova:project>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <nova:port uuid="d0f5820b-f3ad-4a26-84c4-c20c732dbcfd">
Jan 26 16:06:54 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <system>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <entry name="serial">8dcfffb0-b36f-463e-b49a-9954483b8f5b</entry>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <entry name="uuid">8dcfffb0-b36f-463e-b49a-9954483b8f5b</entry>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     </system>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <os>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   </os>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <features>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   </features>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk">
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       </source>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk.config">
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       </source>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:06:54 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:75:71:25"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <target dev="tapd0f5820b-f3"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b/console.log" append="off"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <video>
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     </video>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:06:54 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:06:54 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:06:54 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:06:54 compute-0 nova_compute[239965]: </domain>
Jan 26 16:06:54 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.746 239969 DEBUG nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Preparing to wait for external event network-vif-plugged-d0f5820b-f3ad-4a26-84c4-c20c732dbcfd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.746 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.747 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.747 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.747 239969 DEBUG nova.virt.libvirt.vif [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1343019912',display_name='tempest-ServerActionsTestOtherB-server-1343019912',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1343019912',id=88,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-iovk31tw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:06:42Z,user_data=None,user_id='ad9c6196af60436caf20747e96ad8388',uuid=8dcfffb0-b36f-463e-b49a-9954483b8f5b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.748 239969 DEBUG nova.network.os_vif_util [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.749 239969 DEBUG nova.network.os_vif_util [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:71:25,bridge_name='br-int',has_traffic_filtering=True,id=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0f5820b-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.749 239969 DEBUG os_vif [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:71:25,bridge_name='br-int',has_traffic_filtering=True,id=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0f5820b-f3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.750 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.750 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.751 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.753 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.754 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0f5820b-f3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.754 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd0f5820b-f3, col_values=(('external_ids', {'iface-id': 'd0f5820b-f3ad-4a26-84c4-c20c732dbcfd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:75:71:25', 'vm-uuid': '8dcfffb0-b36f-463e-b49a-9954483b8f5b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.756 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:54 compute-0 NetworkManager[48954]: <info>  [1769443614.7572] manager: (tapd0f5820b-f3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/371)
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.759 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.766 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.767 239969 INFO os_vif [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:71:25,bridge_name='br-int',has_traffic_filtering=True,id=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0f5820b-f3')
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.829 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.830 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.830 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No VIF found with MAC fa:16:3e:75:71:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.831 239969 INFO nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Using config drive
Jan 26 16:06:54 compute-0 nova_compute[239965]: 2026-01-26 16:06:54.859 239969 DEBUG nova.storage.rbd_utils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.186 239969 INFO nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Creating config drive at /var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b/disk.config
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.193 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprv86r81f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.270 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.341 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprv86r81f" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.371 239969 DEBUG nova.storage.rbd_utils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.376 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b/disk.config 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.750 239969 DEBUG nova.network.neutron [req-4b482db0-af49-434c-b997-ac6ae888832c req-f34e9323-8e47-4494-ba53-fe914ae62742 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Updated VIF entry in instance network info cache for port d0f5820b-f3ad-4a26-84c4-c20c732dbcfd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.751 239969 DEBUG nova.network.neutron [req-4b482db0-af49-434c-b997-ac6ae888832c req-f34e9323-8e47-4494-ba53-fe914ae62742 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Updating instance_info_cache with network_info: [{"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.769 239969 DEBUG oslo_concurrency.lockutils [req-4b482db0-af49-434c-b997-ac6ae888832c req-f34e9323-8e47-4494-ba53-fe914ae62742 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-8dcfffb0-b36f-463e-b49a-9954483b8f5b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.770 239969 DEBUG oslo_concurrency.processutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b/disk.config 8dcfffb0-b36f-463e-b49a-9954483b8f5b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.771 239969 INFO nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Deleting local config drive /var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b/disk.config because it was imported into RBD.
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.783 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:06:55 compute-0 kernel: tapd0f5820b-f3: entered promiscuous mode
Jan 26 16:06:55 compute-0 NetworkManager[48954]: <info>  [1769443615.8286] manager: (tapd0f5820b-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/372)
Jan 26 16:06:55 compute-0 ovn_controller[146046]: 2026-01-26T16:06:55Z|00866|binding|INFO|Claiming lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd for this chassis.
Jan 26 16:06:55 compute-0 ovn_controller[146046]: 2026-01-26T16:06:55Z|00867|binding|INFO|d0f5820b-f3ad-4a26-84c4-c20c732dbcfd: Claiming fa:16:3e:75:71:25 10.100.0.11
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.831 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.839 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:71:25 10.100.0.11'], port_security=['fa:16:3e:75:71:25 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dcfffb0-b36f-463e-b49a-9954483b8f5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5352fc20-0398-48ca-8854-47c9620f712b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.841 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d0f5820b-f3ad-4a26-84c4-c20c732dbcfd in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d bound to our chassis
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.842 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:06:55 compute-0 ovn_controller[146046]: 2026-01-26T16:06:55Z|00868|binding|INFO|Setting lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd ovn-installed in OVS
Jan 26 16:06:55 compute-0 ovn_controller[146046]: 2026-01-26T16:06:55Z|00869|binding|INFO|Setting lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd up in Southbound
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.849 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.851 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.859 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc2283e-dd48-491a-8309-b769bb27e53a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:55 compute-0 systemd-machined[208061]: New machine qemu-109-instance-00000058.
Jan 26 16:06:55 compute-0 systemd[1]: Started Virtual Machine qemu-109-instance-00000058.
Jan 26 16:06:55 compute-0 systemd-udevd[316784]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:06:55 compute-0 NetworkManager[48954]: <info>  [1769443615.8902] device (tapd0f5820b-f3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:06:55 compute-0 NetworkManager[48954]: <info>  [1769443615.8908] device (tapd0f5820b-f3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.891 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c8329c05-147f-4deb-8bf3-489f211c5092]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.895 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0c85a8e0-f013-4876-b1b0-2f37bfc9baf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.929 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a9dd78aa-efae-483c-af6e-a845d52dbfad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.949 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3816f031-dd78-4829-b0a2-967e06d0290f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 29665, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316795, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.971 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a1159356-39b8-429e-b026-9bbeb4fef6a2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502171, 'tstamp': 502171}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316796, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502175, 'tstamp': 502175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316796, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.973 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.975 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:55 compute-0 nova_compute[239965]: 2026-01-26 16:06:55.976 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.978 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67ce52d3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.978 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.978 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67ce52d3-40, col_values=(('external_ids', {'iface-id': '8d8bd19b-594d-4000-99fc-eb8020a59c23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:06:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:55.979 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.060 239969 DEBUG nova.compute.manager [req-806a9945-941d-45c6-9be0-47edc86b28e5 req-68bef0ce-bcd7-4d83-bbfb-6647cb2fe8ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Received event network-vif-plugged-d0f5820b-f3ad-4a26-84c4-c20c732dbcfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.061 239969 DEBUG oslo_concurrency.lockutils [req-806a9945-941d-45c6-9be0-47edc86b28e5 req-68bef0ce-bcd7-4d83-bbfb-6647cb2fe8ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.061 239969 DEBUG oslo_concurrency.lockutils [req-806a9945-941d-45c6-9be0-47edc86b28e5 req-68bef0ce-bcd7-4d83-bbfb-6647cb2fe8ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.061 239969 DEBUG oslo_concurrency.lockutils [req-806a9945-941d-45c6-9be0-47edc86b28e5 req-68bef0ce-bcd7-4d83-bbfb-6647cb2fe8ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.062 239969 DEBUG nova.compute.manager [req-806a9945-941d-45c6-9be0-47edc86b28e5 req-68bef0ce-bcd7-4d83-bbfb-6647cb2fe8ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Processing event network-vif-plugged-d0f5820b-f3ad-4a26-84c4-c20c732dbcfd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:06:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1643: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 2.4 MiB/s wr, 61 op/s
Jan 26 16:06:56 compute-0 ceph-mon[75140]: pgmap v1643: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 2.4 MiB/s wr, 61 op/s
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.483 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443616.483122, 8dcfffb0-b36f-463e-b49a-9954483b8f5b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.484 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] VM Started (Lifecycle Event)
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.488 239969 DEBUG nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.493 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.498 239969 INFO nova.virt.libvirt.driver [-] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Instance spawned successfully.
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.498 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.508 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.512 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.520 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.521 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.521 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.521 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.522 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.522 239969 DEBUG nova.virt.libvirt.driver [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.530 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.530 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443616.4833593, 8dcfffb0-b36f-463e-b49a-9954483b8f5b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.530 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] VM Paused (Lifecycle Event)
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.570 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.574 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443616.4922028, 8dcfffb0-b36f-463e-b49a-9954483b8f5b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.574 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] VM Resumed (Lifecycle Event)
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.606 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.613 239969 INFO nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Took 13.95 seconds to spawn the instance on the hypervisor.
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.613 239969 DEBUG nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.616 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.650 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.679 239969 INFO nova.compute.manager [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Took 15.16 seconds to build instance.
Jan 26 16:06:56 compute-0 nova_compute[239965]: 2026-01-26 16:06:56.701 239969 DEBUG oslo_concurrency.lockutils [None req-3e16b649-ab05-4965-b669-12f5142dafb1 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:06:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.136 239969 DEBUG nova.compute.manager [req-698e8065-86f3-4988-b156-5e8c44dd3315 req-5f223634-93ac-4129-96b8-f5bfabd5b997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Received event network-vif-plugged-d0f5820b-f3ad-4a26-84c4-c20c732dbcfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.137 239969 DEBUG oslo_concurrency.lockutils [req-698e8065-86f3-4988-b156-5e8c44dd3315 req-5f223634-93ac-4129-96b8-f5bfabd5b997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.137 239969 DEBUG oslo_concurrency.lockutils [req-698e8065-86f3-4988-b156-5e8c44dd3315 req-5f223634-93ac-4129-96b8-f5bfabd5b997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.137 239969 DEBUG oslo_concurrency.lockutils [req-698e8065-86f3-4988-b156-5e8c44dd3315 req-5f223634-93ac-4129-96b8-f5bfabd5b997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.137 239969 DEBUG nova.compute.manager [req-698e8065-86f3-4988-b156-5e8c44dd3315 req-5f223634-93ac-4129-96b8-f5bfabd5b997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] No waiting events found dispatching network-vif-plugged-d0f5820b-f3ad-4a26-84c4-c20c732dbcfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.137 239969 WARNING nova.compute.manager [req-698e8065-86f3-4988-b156-5e8c44dd3315 req-5f223634-93ac-4129-96b8-f5bfabd5b997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Received unexpected event network-vif-plugged-d0f5820b-f3ad-4a26-84c4-c20c732dbcfd for instance with vm_state active and task_state None.
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.275 239969 INFO nova.compute.manager [None req-b530690b-d8b3-455e-9aec-afd6e2f7ac64 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Get console output
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.280 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.334 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquiring lock "09b47418-b02a-4498-9491-407d4065793a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.335 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.358 239969 DEBUG nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.429 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.430 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.436 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.436 239969 INFO nova.compute.claims [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.620 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.803 239969 DEBUG nova.compute.manager [req-09b5c628-004c-4fec-b6c5-1c8905650516 req-17bc89ef-5b47-4f6e-82fe-c0630c1a2108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Received event network-vif-plugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.804 239969 DEBUG oslo_concurrency.lockutils [req-09b5c628-004c-4fec-b6c5-1c8905650516 req-17bc89ef-5b47-4f6e-82fe-c0630c1a2108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.804 239969 DEBUG oslo_concurrency.lockutils [req-09b5c628-004c-4fec-b6c5-1c8905650516 req-17bc89ef-5b47-4f6e-82fe-c0630c1a2108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.804 239969 DEBUG oslo_concurrency.lockutils [req-09b5c628-004c-4fec-b6c5-1c8905650516 req-17bc89ef-5b47-4f6e-82fe-c0630c1a2108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.804 239969 DEBUG nova.compute.manager [req-09b5c628-004c-4fec-b6c5-1c8905650516 req-17bc89ef-5b47-4f6e-82fe-c0630c1a2108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Processing event network-vif-plugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.805 239969 DEBUG nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.808 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443618.8082736, 2260a3d1-973c-4364-af38-ebd9acd287f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.808 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] VM Resumed (Lifecycle Event)
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.815 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.818 239969 INFO nova.virt.libvirt.driver [-] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Instance spawned successfully.
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.818 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.827 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.831 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.843 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.844 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.844 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.844 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.845 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.845 239969 DEBUG nova.virt.libvirt.driver [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.849 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.904 239969 INFO nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Took 19.22 seconds to spawn the instance on the hypervisor.
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.905 239969 DEBUG nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:06:58 compute-0 nova_compute[239965]: 2026-01-26 16:06:58.984 239969 INFO nova.compute.manager [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Took 21.65 seconds to build instance.
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.001 239969 DEBUG oslo_concurrency.lockutils [None req-d9f6bb4a-dff0-4bc9-86f2-7ec49e8cde92 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:06:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2570839363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.158 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.163 239969 DEBUG nova.compute.provider_tree [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.178 239969 DEBUG nova.scheduler.client.report [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:06:59 compute-0 ceph-mon[75140]: pgmap v1644: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 163 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Jan 26 16:06:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2570839363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.205 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.206 239969 DEBUG nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:06:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:59.228 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:06:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:59.228 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:06:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:06:59.229 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.248 239969 DEBUG nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.249 239969 DEBUG nova.network.neutron [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.384 239969 INFO nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.410 239969 DEBUG nova.policy [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e943bd4be5214b3ebebfb57bf7a244e6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '236577cf59a24cc38d6d1fc4abf38e0f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.660 239969 INFO nova.compute.manager [None req-ad849449-6e63-4c51-ab8c-c641eb299776 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Get console output
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.666 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:06:59 compute-0 nova_compute[239965]: 2026-01-26 16:06:59.755 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1645: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 40 KiB/s wr, 21 op/s
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.117 239969 DEBUG nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.217 239969 DEBUG nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.218 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.219 239969 INFO nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Creating image(s)
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.235 239969 DEBUG nova.storage.rbd_utils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] rbd image 09b47418-b02a-4498-9491-407d4065793a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.258 239969 DEBUG nova.storage.rbd_utils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] rbd image 09b47418-b02a-4498-9491-407d4065793a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.277 239969 DEBUG nova.storage.rbd_utils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] rbd image 09b47418-b02a-4498-9491-407d4065793a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.280 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.350 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.351 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.352 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.352 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.373 239969 DEBUG nova.storage.rbd_utils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] rbd image 09b47418-b02a-4498-9491-407d4065793a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.375 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 09b47418-b02a-4498-9491-407d4065793a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.803 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 09b47418-b02a-4498-9491-407d4065793a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.866 239969 DEBUG nova.storage.rbd_utils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] resizing rbd image 09b47418-b02a-4498-9491-407d4065793a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.984 239969 DEBUG nova.objects.instance [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lazy-loading 'migration_context' on Instance uuid 09b47418-b02a-4498-9491-407d4065793a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:00 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.999 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:00.999 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Ensure instance console log exists: /var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.000 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.000 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.001 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.035 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:01 compute-0 ceph-mon[75140]: pgmap v1645: 305 pgs: 305 active+clean; 339 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 40 KiB/s wr, 21 op/s
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.279 239969 DEBUG nova.compute.manager [req-54a1365d-5b03-4f37-b65a-7d3d3bd87a8f req-45c38649-6bc5-40c4-bdf8-fa9d47092b4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Received event network-vif-plugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.280 239969 DEBUG oslo_concurrency.lockutils [req-54a1365d-5b03-4f37-b65a-7d3d3bd87a8f req-45c38649-6bc5-40c4-bdf8-fa9d47092b4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.281 239969 DEBUG oslo_concurrency.lockutils [req-54a1365d-5b03-4f37-b65a-7d3d3bd87a8f req-45c38649-6bc5-40c4-bdf8-fa9d47092b4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.281 239969 DEBUG oslo_concurrency.lockutils [req-54a1365d-5b03-4f37-b65a-7d3d3bd87a8f req-45c38649-6bc5-40c4-bdf8-fa9d47092b4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.282 239969 DEBUG nova.compute.manager [req-54a1365d-5b03-4f37-b65a-7d3d3bd87a8f req-45c38649-6bc5-40c4-bdf8-fa9d47092b4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] No waiting events found dispatching network-vif-plugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.282 239969 WARNING nova.compute.manager [req-54a1365d-5b03-4f37-b65a-7d3d3bd87a8f req-45c38649-6bc5-40c4-bdf8-fa9d47092b4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Received unexpected event network-vif-plugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 for instance with vm_state active and task_state None.
Jan 26 16:07:01 compute-0 nova_compute[239965]: 2026-01-26 16:07:01.569 239969 DEBUG nova.network.neutron [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Successfully created port: d54701d9-2aca-43e6-b340-8bc2ede7baf6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:07:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 368 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.1 MiB/s wr, 121 op/s
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.328 239969 DEBUG oslo_concurrency.lockutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "2260a3d1-973c-4364-af38-ebd9acd287f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.329 239969 DEBUG oslo_concurrency.lockutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.329 239969 DEBUG oslo_concurrency.lockutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.329 239969 DEBUG oslo_concurrency.lockutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.330 239969 DEBUG oslo_concurrency.lockutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.331 239969 INFO nova.compute.manager [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Terminating instance
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.332 239969 DEBUG nova.compute.manager [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:07:02 compute-0 kernel: tapb7f76f11-77 (unregistering): left promiscuous mode
Jan 26 16:07:02 compute-0 NetworkManager[48954]: <info>  [1769443622.4163] device (tapb7f76f11-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.427 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 ovn_controller[146046]: 2026-01-26T16:07:02Z|00870|binding|INFO|Releasing lport b7f76f11-77ec-41f2-8177-93de5bf467a9 from this chassis (sb_readonly=0)
Jan 26 16:07:02 compute-0 ovn_controller[146046]: 2026-01-26T16:07:02Z|00871|binding|INFO|Setting lport b7f76f11-77ec-41f2-8177-93de5bf467a9 down in Southbound
Jan 26 16:07:02 compute-0 ovn_controller[146046]: 2026-01-26T16:07:02Z|00872|binding|INFO|Removing iface tapb7f76f11-77 ovn-installed in OVS
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.431 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.441 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:3a:ef 10.100.0.7'], port_security=['fa:16:3e:60:3a:ef 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2260a3d1-973c-4364-af38-ebd9acd287f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b7f76f11-77ec-41f2-8177-93de5bf467a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.442 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b7f76f11-77ec-41f2-8177-93de5bf467a9 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 unbound from our chassis
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.444 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.455 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.468 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e8c176-8c42-4f68-aaff-e69d6a50041e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000057.scope: Deactivated successfully.
Jan 26 16:07:02 compute-0 systemd[1]: machine-qemu\x2d108\x2dinstance\x2d00000057.scope: Consumed 4.353s CPU time.
Jan 26 16:07:02 compute-0 systemd-machined[208061]: Machine qemu-108-instance-00000057 terminated.
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.500 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[47c1b404-3c4e-4279-b2ef-0c2570c06b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.503 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[79d7644b-fc37-4074-9917-1b92e3fbb439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.530 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[45917679-fde7-4599-9cb6-7d40e80094bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 kernel: tapb7f76f11-77: entered promiscuous mode
Jan 26 16:07:02 compute-0 NetworkManager[48954]: <info>  [1769443622.5545] manager: (tapb7f76f11-77): new Tun device (/org/freedesktop/NetworkManager/Devices/373)
Jan 26 16:07:02 compute-0 kernel: tapb7f76f11-77 (unregistering): left promiscuous mode
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.555 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba79b680-d473-4c39-b3e8-3f28b0667607]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 18624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317039, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_controller[146046]: 2026-01-26T16:07:02Z|00873|binding|INFO|Claiming lport b7f76f11-77ec-41f2-8177-93de5bf467a9 for this chassis.
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.562 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 ovn_controller[146046]: 2026-01-26T16:07:02Z|00874|binding|INFO|b7f76f11-77ec-41f2-8177-93de5bf467a9: Claiming fa:16:3e:60:3a:ef 10.100.0.7
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.571 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:3a:ef 10.100.0.7'], port_security=['fa:16:3e:60:3a:ef 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2260a3d1-973c-4364-af38-ebd9acd287f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b7f76f11-77ec-41f2-8177-93de5bf467a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.578 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[979ca638-268c-4e32-a083-c25b1d1248fc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317045, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317045, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.580 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.581 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.585 239969 INFO nova.virt.libvirt.driver [-] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Instance destroyed successfully.
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.585 239969 DEBUG nova.objects.instance [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'resources' on Instance uuid 2260a3d1-973c-4364-af38-ebd9acd287f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.588 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 ovn_controller[146046]: 2026-01-26T16:07:02Z|00875|binding|INFO|Releasing lport b7f76f11-77ec-41f2-8177-93de5bf467a9 from this chassis (sb_readonly=0)
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.599 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:60:3a:ef 10.100.0.7'], port_security=['fa:16:3e:60:3a:ef 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '2260a3d1-973c-4364-af38-ebd9acd287f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b7f76f11-77ec-41f2-8177-93de5bf467a9) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.608 239969 DEBUG nova.virt.libvirt.vif [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:06:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1539940225',display_name='tempest-ServersTestJSON-server-1539940225',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1539940225',id=87,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:06:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-y4k33z3m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',
image_min_ram='0',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:06:58Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=2260a3d1-973c-4364-af38-ebd9acd287f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.609 239969 DEBUG nova.network.os_vif_util [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "address": "fa:16:3e:60:3a:ef", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7f76f11-77", "ovs_interfaceid": "b7f76f11-77ec-41f2-8177-93de5bf467a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.610 239969 DEBUG nova.network.os_vif_util [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:60:3a:ef,bridge_name='br-int',has_traffic_filtering=True,id=b7f76f11-77ec-41f2-8177-93de5bf467a9,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f76f11-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.610 239969 DEBUG os_vif [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:3a:ef,bridge_name='br-int',has_traffic_filtering=True,id=b7f76f11-77ec-41f2-8177-93de5bf467a9,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f76f11-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:07:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.613 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.613 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.613 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.614 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.614 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.615 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7f76f11-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.615 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b7f76f11-77ec-41f2-8177-93de5bf467a9 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 unbound from our chassis
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.616 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.617 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.619 239969 INFO os_vif [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:60:3a:ef,bridge_name='br-int',has_traffic_filtering=True,id=b7f76f11-77ec-41f2-8177-93de5bf467a9,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7f76f11-77')
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.634 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a094084-abbe-47b6-8604-b4db5ce1c737]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.660 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bbd80931-5a4d-4d0a-a2b9-8e033e8ace58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.663 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb3af74-f3d2-4a1b-b821-e2bfc38b1ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.690 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a59edeef-d891-4b15-b5fc-6aa2404f2a72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.710 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f26a75d6-47bb-4086-9d25-57bad6814622]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 18624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317073, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.730 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[836bd2df-d9c4-4538-8221-ab89e3269718]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317074, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317074, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.732 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.734 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.736 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.737 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.738 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.739 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.741 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b7f76f11-77ec-41f2-8177-93de5bf467a9 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 unbound from our chassis
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.744 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.766 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e47dbae8-7ec7-4489-a725-5813157ef7ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.802 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2b273751-cad2-427e-9e36-c78d658f6708]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.806 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0caf9249-9b6b-4ebc-b779-5310217353b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.845 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9010f044-60dc-42da-8239-256b35526c96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.864 239969 DEBUG nova.network.neutron [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Successfully updated port: d54701d9-2aca-43e6-b340-8bc2ede7baf6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.866 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ee7e7967-5fd0-4ea8-ae0d-ee3422338423]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 616, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 18624, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317080, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.879 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquiring lock "refresh_cache-09b47418-b02a-4498-9491-407d4065793a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.879 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquired lock "refresh_cache-09b47418-b02a-4498-9491-407d4065793a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.880 239969 DEBUG nova.network.neutron [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.885 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d4453878-f988-4aab-960e-0c8639dfce5e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317081, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317081, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.887 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 nova_compute[239965]: 2026-01-26 16:07:02.889 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.890 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.891 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.892 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:02.892 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.040 239969 DEBUG nova.network.neutron [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:07:03 compute-0 ceph-mon[75140]: pgmap v1646: 305 pgs: 305 active+clean; 368 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.1 MiB/s wr, 121 op/s
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.364 239969 DEBUG nova.compute.manager [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Received event network-vif-unplugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.367 239969 DEBUG oslo_concurrency.lockutils [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.368 239969 DEBUG oslo_concurrency.lockutils [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.369 239969 DEBUG oslo_concurrency.lockutils [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.370 239969 DEBUG nova.compute.manager [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] No waiting events found dispatching network-vif-unplugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.370 239969 DEBUG nova.compute.manager [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Received event network-vif-unplugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.371 239969 DEBUG nova.compute.manager [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Received event network-changed-d54701d9-2aca-43e6-b340-8bc2ede7baf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.372 239969 DEBUG nova.compute.manager [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Refreshing instance network info cache due to event network-changed-d54701d9-2aca-43e6-b340-8bc2ede7baf6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.373 239969 DEBUG oslo_concurrency.lockutils [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-09b47418-b02a-4498-9491-407d4065793a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.676 239969 INFO nova.virt.libvirt.driver [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Deleting instance files /var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8_del
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.676 239969 INFO nova.virt.libvirt.driver [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Deletion of /var/lib/nova/instances/2260a3d1-973c-4364-af38-ebd9acd287f8_del complete
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.724 239969 INFO nova.compute.manager [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Took 1.39 seconds to destroy the instance on the hypervisor.
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.725 239969 DEBUG oslo.service.loopingcall [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.725 239969 DEBUG nova.compute.manager [-] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:07:03 compute-0 nova_compute[239965]: 2026-01-26 16:07:03.725 239969 DEBUG nova.network.neutron [-] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:07:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1647: 305 pgs: 305 active+clean; 377 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 173 op/s
Jan 26 16:07:04 compute-0 ceph-mon[75140]: pgmap v1647: 305 pgs: 305 active+clean; 377 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.5 MiB/s wr, 173 op/s
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.160 239969 DEBUG nova.network.neutron [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Updating instance_info_cache with network_info: [{"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.183 239969 DEBUG nova.network.neutron [-] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.188 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Releasing lock "refresh_cache-09b47418-b02a-4498-9491-407d4065793a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.188 239969 DEBUG nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Instance network_info: |[{"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.188 239969 DEBUG oslo_concurrency.lockutils [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-09b47418-b02a-4498-9491-407d4065793a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.189 239969 DEBUG nova.network.neutron [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Refreshing network info cache for port d54701d9-2aca-43e6-b340-8bc2ede7baf6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.192 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Start _get_guest_xml network_info=[{"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.196 239969 WARNING nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.205 239969 INFO nova.compute.manager [-] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Took 1.48 seconds to deallocate network for instance.
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.212 239969 DEBUG nova.virt.libvirt.host [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.213 239969 DEBUG nova.virt.libvirt.host [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.217 239969 DEBUG nova.virt.libvirt.host [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.218 239969 DEBUG nova.virt.libvirt.host [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.218 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.218 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.219 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.219 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.219 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.220 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.220 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.220 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.220 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.221 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.221 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.221 239969 DEBUG nova.virt.hardware [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.224 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.261 239969 DEBUG oslo_concurrency.lockutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.262 239969 DEBUG oslo_concurrency.lockutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.380 239969 DEBUG oslo_concurrency.processutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2943282776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.795 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.818 239969 DEBUG nova.storage.rbd_utils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] rbd image 09b47418-b02a-4498-9491-407d4065793a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.822 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2943282776' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.940 239969 DEBUG nova.compute.manager [req-7a7e7ade-09c6-44bb-a76d-b6f5fa259fb3 req-f423c2ea-a2f2-4ba3-a362-b72de2c4747e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Received event network-vif-deleted-b7f76f11-77ec-41f2-8177-93de5bf467a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1936599184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.960 239969 DEBUG oslo_concurrency.processutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:05 compute-0 nova_compute[239965]: 2026-01-26 16:07:05.968 239969 DEBUG nova.compute.provider_tree [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.038 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 305 active+clean; 374 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.172 239969 DEBUG nova.scheduler.client.report [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.205 239969 DEBUG oslo_concurrency.lockutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.235 239969 INFO nova.scheduler.client.report [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Deleted allocations for instance 2260a3d1-973c-4364-af38-ebd9acd287f8
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.300 239969 DEBUG oslo_concurrency.lockutils [None req-86ddb2b8-c02e-4d3f-8a5f-37412591a18d 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.324 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "8c136f06-4df5-4491-a67c-fb799c6cf053" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.324 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.338 239969 DEBUG nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.402 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.403 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.409 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.409 239969 INFO nova.compute.claims [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:07:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3341958760' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.443 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.444 239969 DEBUG nova.virt.libvirt.vif [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:06:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-650598483',display_name='tempest-ServerAddressesTestJSON-server-650598483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-650598483',id=89,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='236577cf59a24cc38d6d1fc4abf38e0f',ramdisk_id='',reservation_id='r-5nsidvs8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1395906254',owner_user_name='tempest-ServerAddressesTestJSON-1395906254-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:00Z,user_data=None,user_id='e943bd4be5214b3ebebfb57bf7a244e6',uuid=09b47418-b02a-4498-9491-407d4065793a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.444 239969 DEBUG nova.network.os_vif_util [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Converting VIF {"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.445 239969 DEBUG nova.network.os_vif_util [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:3a:60,bridge_name='br-int',has_traffic_filtering=True,id=d54701d9-2aca-43e6-b340-8bc2ede7baf6,network=Network(a5539ab3-1cad-4d8d-8185-acc169a052f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd54701d9-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.446 239969 DEBUG nova.objects.instance [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lazy-loading 'pci_devices' on Instance uuid 09b47418-b02a-4498-9491-407d4065793a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.462 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <uuid>09b47418-b02a-4498-9491-407d4065793a</uuid>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <name>instance-00000059</name>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerAddressesTestJSON-server-650598483</nova:name>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:07:05</nova:creationTime>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <nova:user uuid="e943bd4be5214b3ebebfb57bf7a244e6">tempest-ServerAddressesTestJSON-1395906254-project-member</nova:user>
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <nova:project uuid="236577cf59a24cc38d6d1fc4abf38e0f">tempest-ServerAddressesTestJSON-1395906254</nova:project>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <nova:port uuid="d54701d9-2aca-43e6-b340-8bc2ede7baf6">
Jan 26 16:07:06 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <system>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <entry name="serial">09b47418-b02a-4498-9491-407d4065793a</entry>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <entry name="uuid">09b47418-b02a-4498-9491-407d4065793a</entry>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     </system>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <os>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   </os>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <features>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   </features>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/09b47418-b02a-4498-9491-407d4065793a_disk">
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/09b47418-b02a-4498-9491-407d4065793a_disk.config">
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:bd:3a:60"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <target dev="tapd54701d9-2a"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a/console.log" append="off"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <video>
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     </video>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:07:06 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:07:06 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:07:06 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:07:06 compute-0 nova_compute[239965]: </domain>
Jan 26 16:07:06 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.463 239969 DEBUG nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Preparing to wait for external event network-vif-plugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.463 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquiring lock "09b47418-b02a-4498-9491-407d4065793a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.464 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.464 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.465 239969 DEBUG nova.virt.libvirt.vif [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:06:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-650598483',display_name='tempest-ServerAddressesTestJSON-server-650598483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-650598483',id=89,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='236577cf59a24cc38d6d1fc4abf38e0f',ramdisk_id='',reservation_id='r-5nsidvs8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-1395906254',owner_user_name='tempest-ServerAddressesTestJSON-1395906254-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:00Z,user_data=None,user_id='e943bd4be5214b3ebebfb57bf7a244e6',uuid=09b47418-b02a-4498-9491-407d4065793a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.465 239969 DEBUG nova.network.os_vif_util [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Converting VIF {"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.466 239969 DEBUG nova.network.os_vif_util [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:3a:60,bridge_name='br-int',has_traffic_filtering=True,id=d54701d9-2aca-43e6-b340-8bc2ede7baf6,network=Network(a5539ab3-1cad-4d8d-8185-acc169a052f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd54701d9-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.466 239969 DEBUG os_vif [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:3a:60,bridge_name='br-int',has_traffic_filtering=True,id=d54701d9-2aca-43e6-b340-8bc2ede7baf6,network=Network(a5539ab3-1cad-4d8d-8185-acc169a052f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd54701d9-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.466 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.467 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.467 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.471 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.472 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd54701d9-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.472 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd54701d9-2a, col_values=(('external_ids', {'iface-id': 'd54701d9-2aca-43e6-b340-8bc2ede7baf6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:3a:60', 'vm-uuid': '09b47418-b02a-4498-9491-407d4065793a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.474 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:06 compute-0 NetworkManager[48954]: <info>  [1769443626.4750] manager: (tapd54701d9-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/374)
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.479 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.480 239969 INFO os_vif [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:3a:60,bridge_name='br-int',has_traffic_filtering=True,id=d54701d9-2aca-43e6-b340-8bc2ede7baf6,network=Network(a5539ab3-1cad-4d8d-8185-acc169a052f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd54701d9-2a')
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.529 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.529 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.530 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] No VIF found with MAC fa:16:3e:bd:3a:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.530 239969 INFO nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Using config drive
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.551 239969 DEBUG nova.storage.rbd_utils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] rbd image 09b47418-b02a-4498-9491-407d4065793a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.624 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1936599184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:06 compute-0 ceph-mon[75140]: pgmap v1648: 305 pgs: 305 active+clean; 374 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 172 op/s
Jan 26 16:07:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3341958760' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.888 239969 INFO nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Creating config drive at /var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a/disk.config
Jan 26 16:07:06 compute-0 nova_compute[239965]: 2026-01-26 16:07:06.893 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7j7asfy8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.032 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7j7asfy8" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.058 239969 DEBUG nova.storage.rbd_utils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] rbd image 09b47418-b02a-4498-9491-407d4065793a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.062 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a/disk.config 09b47418-b02a-4498-9491-407d4065793a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.114 239969 DEBUG nova.network.neutron [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Updated VIF entry in instance network info cache for port d54701d9-2aca-43e6-b340-8bc2ede7baf6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.115 239969 DEBUG nova.network.neutron [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Updating instance_info_cache with network_info: [{"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.145 239969 DEBUG oslo_concurrency.lockutils [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-09b47418-b02a-4498-9491-407d4065793a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.145 239969 DEBUG nova.compute.manager [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Received event network-vif-plugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.146 239969 DEBUG oslo_concurrency.lockutils [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.146 239969 DEBUG oslo_concurrency.lockutils [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.146 239969 DEBUG oslo_concurrency.lockutils [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2260a3d1-973c-4364-af38-ebd9acd287f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.146 239969 DEBUG nova.compute.manager [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] No waiting events found dispatching network-vif-plugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.147 239969 WARNING nova.compute.manager [req-e9cdd51b-c847-49a5-9fe5-2ee98d1a19dc req-c353e194-3c0d-4660-941b-7587223d2483 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Received unexpected event network-vif-plugged-b7f76f11-77ec-41f2-8177-93de5bf467a9 for instance with vm_state active and task_state deleting.
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.196 239969 DEBUG oslo_concurrency.processutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a/disk.config 09b47418-b02a-4498-9491-407d4065793a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.197 239969 INFO nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Deleting local config drive /var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a/disk.config because it was imported into RBD.
Jan 26 16:07:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/346897097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.251 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.257 239969 DEBUG nova.compute.provider_tree [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:07 compute-0 kernel: tapd54701d9-2a: entered promiscuous mode
Jan 26 16:07:07 compute-0 NetworkManager[48954]: <info>  [1769443627.2656] manager: (tapd54701d9-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/375)
Jan 26 16:07:07 compute-0 systemd-udevd[317259]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:07:07 compute-0 ovn_controller[146046]: 2026-01-26T16:07:07Z|00876|binding|INFO|Claiming lport d54701d9-2aca-43e6-b340-8bc2ede7baf6 for this chassis.
Jan 26 16:07:07 compute-0 ovn_controller[146046]: 2026-01-26T16:07:07Z|00877|binding|INFO|d54701d9-2aca-43e6-b340-8bc2ede7baf6: Claiming fa:16:3e:bd:3a:60 10.100.0.13
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.316 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:07 compute-0 NetworkManager[48954]: <info>  [1769443627.3261] device (tapd54701d9-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:07:07 compute-0 NetworkManager[48954]: <info>  [1769443627.3272] device (tapd54701d9-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:07:07 compute-0 ovn_controller[146046]: 2026-01-26T16:07:07Z|00878|binding|INFO|Setting lport d54701d9-2aca-43e6-b340-8bc2ede7baf6 ovn-installed in OVS
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.330 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.332 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:07 compute-0 systemd-machined[208061]: New machine qemu-110-instance-00000059.
Jan 26 16:07:07 compute-0 systemd[1]: Started Virtual Machine qemu-110-instance-00000059.
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.411 239969 DEBUG nova.scheduler.client.report [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:07 compute-0 ovn_controller[146046]: 2026-01-26T16:07:07Z|00879|binding|INFO|Setting lport d54701d9-2aca-43e6-b340-8bc2ede7baf6 up in Southbound
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.417 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:3a:60 10.100.0.13'], port_security=['fa:16:3e:bd:3a:60 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '09b47418-b02a-4498-9491-407d4065793a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5539ab3-1cad-4d8d-8185-acc169a052f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '236577cf59a24cc38d6d1fc4abf38e0f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6d373652-359c-43a2-9552-2291fa12e4d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfcc5e8e-9f38-449a-b1b1-8702b4419005, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d54701d9-2aca-43e6-b340-8bc2ede7baf6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.418 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d54701d9-2aca-43e6-b340-8bc2ede7baf6 in datapath a5539ab3-1cad-4d8d-8185-acc169a052f8 bound to our chassis
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.419 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a5539ab3-1cad-4d8d-8185-acc169a052f8
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.432 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8582cd6c-c97a-4bdc-9e60-a106afc5b0c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.433 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa5539ab3-11 in ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.435 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa5539ab3-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.435 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cea60b7c-9d64-48b5-9a3e-bd258d584c3c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.436 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[812bd409-3c8b-4c05-bc5c-c1a7f0a5be6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.436 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.033s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.437 239969 DEBUG nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.448 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[0eed4ee0-ddb5-409d-81cf-245d1f0f491f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.464 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8700e8-1446-4926-a6f2-7a2031218dc4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.487 239969 DEBUG nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.488 239969 DEBUG nova.network.neutron [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.498 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cc1e45ab-df2b-46e2-b4f7-d1d74e5aa4e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.505 239969 INFO nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:07:07 compute-0 NetworkManager[48954]: <info>  [1769443627.5059] manager: (tapa5539ab3-10): new Veth device (/org/freedesktop/NetworkManager/Devices/376)
Jan 26 16:07:07 compute-0 systemd-udevd[317263]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.505 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5c397e-14cc-41c1-b280-c06ac873e8a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.525 239969 DEBUG nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.542 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a8ab2758-f1b5-4d36-beda-c83a40ee734d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.545 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[49944b73-cd88-4440-a100-886bc4626a0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 NetworkManager[48954]: <info>  [1769443627.5685] device (tapa5539ab3-10): carrier: link connected
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.574 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2c199b-4f24-4433-80b0-bfd22785cce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.594 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed2b68b-c998-4a97-921f-daa4c4cd5acf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5539ab3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c3:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511837, 'reachable_time': 35911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317295, 'error': None, 'target': 'ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.614 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dadb1d57-b471-4bc2-a2c9-11bffaf1695f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe78:c3f7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511837, 'tstamp': 511837}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317296, 'error': None, 'target': 'ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.624 239969 DEBUG nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.626 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.627 239969 INFO nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Creating image(s)
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.633 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11934fe4-5dc3-4618-84b2-81c6acd13645]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5539ab3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:78:c3:f7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 265], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511837, 'reachable_time': 35911, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317297, 'error': None, 'target': 'ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.649 239969 DEBUG nova.storage.rbd_utils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8c136f06-4df5-4491-a67c-fb799c6cf053_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.668 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[99a747bf-7af2-4631-bcfb-c9ba853c6183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.678 239969 DEBUG nova.storage.rbd_utils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8c136f06-4df5-4491-a67c-fb799c6cf053_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.703 239969 DEBUG nova.storage.rbd_utils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8c136f06-4df5-4491-a67c-fb799c6cf053_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.713 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.731 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9284d8-7ff3-41c6-9ad1-e97502ab952b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.732 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5539ab3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.732 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.733 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5539ab3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:07 compute-0 NetworkManager[48954]: <info>  [1769443627.7352] manager: (tapa5539ab3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/377)
Jan 26 16:07:07 compute-0 kernel: tapa5539ab3-10: entered promiscuous mode
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.738 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa5539ab3-10, col_values=(('external_ids', {'iface-id': '9670571e-0e33-4629-985f-d62cc8e3c1ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.740 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a5539ab3-1cad-4d8d-8185-acc169a052f8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a5539ab3-1cad-4d8d-8185-acc169a052f8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.741 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[40678691-0b94-4407-b033-44a9a09a0499]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:07 compute-0 ovn_controller[146046]: 2026-01-26T16:07:07Z|00880|binding|INFO|Releasing lport 9670571e-0e33-4629-985f-d62cc8e3c1ac from this chassis (sb_readonly=0)
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.741 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-a5539ab3-1cad-4d8d-8185-acc169a052f8
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/a5539ab3-1cad-4d8d-8185-acc169a052f8.pid.haproxy
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID a5539ab3-1cad-4d8d-8185-acc169a052f8
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:07:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:07.742 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8', 'env', 'PROCESS_TAG=haproxy-a5539ab3-1cad-4d8d-8185-acc169a052f8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a5539ab3-1cad-4d8d-8185-acc169a052f8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.756 239969 DEBUG nova.policy [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ad9c6196af60436caf20747e96ad8388', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '56f8818d291f4e738d868673048ce025', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.758 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.763 239969 DEBUG nova.compute.manager [req-87c1a750-a2ad-4fa2-88a9-856cb3616896 req-4b65c83e-8f28-4cb7-adbe-7270b24eb43d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Received event network-vif-plugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.764 239969 DEBUG oslo_concurrency.lockutils [req-87c1a750-a2ad-4fa2-88a9-856cb3616896 req-4b65c83e-8f28-4cb7-adbe-7270b24eb43d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "09b47418-b02a-4498-9491-407d4065793a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.764 239969 DEBUG oslo_concurrency.lockutils [req-87c1a750-a2ad-4fa2-88a9-856cb3616896 req-4b65c83e-8f28-4cb7-adbe-7270b24eb43d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.764 239969 DEBUG oslo_concurrency.lockutils [req-87c1a750-a2ad-4fa2-88a9-856cb3616896 req-4b65c83e-8f28-4cb7-adbe-7270b24eb43d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.764 239969 DEBUG nova.compute.manager [req-87c1a750-a2ad-4fa2-88a9-856cb3616896 req-4b65c83e-8f28-4cb7-adbe-7270b24eb43d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Processing event network-vif-plugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.798 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.799 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.799 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.800 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.820 239969 DEBUG nova.storage.rbd_utils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8c136f06-4df5-4491-a67c-fb799c6cf053_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.824 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8c136f06-4df5-4491-a67c-fb799c6cf053_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/346897097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.854 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443627.8504913, 09b47418-b02a-4498-9491-407d4065793a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.855 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] VM Started (Lifecycle Event)
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.858 239969 DEBUG nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.863 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.867 239969 INFO nova.virt.libvirt.driver [-] [instance: 09b47418-b02a-4498-9491-407d4065793a] Instance spawned successfully.
Jan 26 16:07:07 compute-0 nova_compute[239965]: 2026-01-26 16:07:07.867 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:07:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1649: 305 pgs: 305 active+clean; 339 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.165 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8c136f06-4df5-4491-a67c-fb799c6cf053_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:08 compute-0 podman[317465]: 2026-01-26 16:07:08.240307568 +0000 UTC m=+0.096930822 container create 80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.256 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.261 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.261 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.262 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.262 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.262 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.263 239969 DEBUG nova.virt.libvirt.driver [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:08 compute-0 podman[317465]: 2026-01-26 16:07:08.17660409 +0000 UTC m=+0.033227364 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.278 239969 DEBUG nova.storage.rbd_utils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] resizing rbd image 8c136f06-4df5-4491-a67c-fb799c6cf053_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:07:08 compute-0 systemd[1]: Started libpod-conmon-80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b.scope.
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.306 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:07:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2285e3f94246d433a8040bd1609662d62910143c4959a1262c774302f74144d3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:08 compute-0 podman[317465]: 2026-01-26 16:07:08.352660207 +0000 UTC m=+0.209283481 container init 80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:07:08 compute-0 podman[317465]: 2026-01-26 16:07:08.359641258 +0000 UTC m=+0.216264512 container start 80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:07:08 compute-0 neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8[317532]: [NOTICE]   (317552) : New worker (317558) forked
Jan 26 16:07:08 compute-0 neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8[317532]: [NOTICE]   (317552) : Loading success.
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.393 239969 DEBUG nova.objects.instance [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'migration_context' on Instance uuid 8c136f06-4df5-4491-a67c-fb799c6cf053 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.421 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.421 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Ensure instance console log exists: /var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.422 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.422 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.422 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.436 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.436 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443627.8507223, 09b47418-b02a-4498-9491-407d4065793a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.437 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] VM Paused (Lifecycle Event)
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.458 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.463 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443627.863162, 09b47418-b02a-4498-9491-407d4065793a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.463 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] VM Resumed (Lifecycle Event)
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.474 239969 INFO nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Took 8.26 seconds to spawn the instance on the hypervisor.
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.474 239969 DEBUG nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.482 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.485 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.516 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.543 239969 INFO nova.compute.manager [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Took 10.14 seconds to build instance.
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.565 239969 DEBUG oslo_concurrency.lockutils [None req-406514e2-4362-4880-8fe5-80a659e3368d e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:08 compute-0 nova_compute[239965]: 2026-01-26 16:07:08.627 239969 DEBUG nova.network.neutron [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Successfully created port: 9d6fc516-d19e-408e-bbf8-ef2cf5754216 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:07:08 compute-0 ceph-mon[75140]: pgmap v1649: 305 pgs: 305 active+clean; 339 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 198 op/s
Jan 26 16:07:09 compute-0 ovn_controller[146046]: 2026-01-26T16:07:09Z|00094|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:75:71:25 10.100.0.11
Jan 26 16:07:09 compute-0 ovn_controller[146046]: 2026-01-26T16:07:09Z|00095|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:75:71:25 10.100.0.11
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.357 239969 DEBUG nova.network.neutron [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Successfully updated port: 9d6fc516-d19e-408e-bbf8-ef2cf5754216 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.372 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.372 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquired lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.372 239969 DEBUG nova.network.neutron [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.504 239969 DEBUG nova.compute.manager [req-b0f1abab-3e5d-4dd6-98e5-7dc1d934f0e7 req-b0ad4888-c414-475c-a751-db3d02fd7f10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Received event network-changed-9d6fc516-d19e-408e-bbf8-ef2cf5754216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.504 239969 DEBUG nova.compute.manager [req-b0f1abab-3e5d-4dd6-98e5-7dc1d934f0e7 req-b0ad4888-c414-475c-a751-db3d02fd7f10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Refreshing instance network info cache due to event network-changed-9d6fc516-d19e-408e-bbf8-ef2cf5754216. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.504 239969 DEBUG oslo_concurrency.lockutils [req-b0f1abab-3e5d-4dd6-98e5-7dc1d934f0e7 req-b0ad4888-c414-475c-a751-db3d02fd7f10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.642 239969 DEBUG oslo_concurrency.lockutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquiring lock "09b47418-b02a-4498-9491-407d4065793a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.642 239969 DEBUG oslo_concurrency.lockutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.643 239969 DEBUG oslo_concurrency.lockutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquiring lock "09b47418-b02a-4498-9491-407d4065793a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.644 239969 DEBUG oslo_concurrency.lockutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.644 239969 DEBUG oslo_concurrency.lockutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.645 239969 INFO nova.compute.manager [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Terminating instance
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.646 239969 DEBUG nova.compute.manager [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:07:09 compute-0 kernel: tapd54701d9-2a (unregistering): left promiscuous mode
Jan 26 16:07:09 compute-0 NetworkManager[48954]: <info>  [1769443629.6824] device (tapd54701d9-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.682 239969 DEBUG nova.network.neutron [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:07:09 compute-0 ovn_controller[146046]: 2026-01-26T16:07:09Z|00881|binding|INFO|Releasing lport d54701d9-2aca-43e6-b340-8bc2ede7baf6 from this chassis (sb_readonly=0)
Jan 26 16:07:09 compute-0 ovn_controller[146046]: 2026-01-26T16:07:09Z|00882|binding|INFO|Setting lport d54701d9-2aca-43e6-b340-8bc2ede7baf6 down in Southbound
Jan 26 16:07:09 compute-0 ovn_controller[146046]: 2026-01-26T16:07:09Z|00883|binding|INFO|Removing iface tapd54701d9-2a ovn-installed in OVS
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.732 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:09.741 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:3a:60 10.100.0.13'], port_security=['fa:16:3e:bd:3a:60 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '09b47418-b02a-4498-9491-407d4065793a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5539ab3-1cad-4d8d-8185-acc169a052f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '236577cf59a24cc38d6d1fc4abf38e0f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6d373652-359c-43a2-9552-2291fa12e4d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bfcc5e8e-9f38-449a-b1b1-8702b4419005, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d54701d9-2aca-43e6-b340-8bc2ede7baf6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:09.743 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d54701d9-2aca-43e6-b340-8bc2ede7baf6 in datapath a5539ab3-1cad-4d8d-8185-acc169a052f8 unbound from our chassis
Jan 26 16:07:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:09.744 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5539ab3-1cad-4d8d-8185-acc169a052f8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:07:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:09.745 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9a46f1aa-581c-4c91-8ce6-15668266ae4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:09.745 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8 namespace which is not needed anymore
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:09 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Deactivated successfully.
Jan 26 16:07:09 compute-0 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Consumed 2.268s CPU time.
Jan 26 16:07:09 compute-0 systemd-machined[208061]: Machine qemu-110-instance-00000059 terminated.
Jan 26 16:07:09 compute-0 podman[317567]: 2026-01-26 16:07:09.803905077 +0000 UTC m=+0.060047420 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:07:09 compute-0 podman[317572]: 2026-01-26 16:07:09.83509311 +0000 UTC m=+0.127201962 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 16:07:09 compute-0 neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8[317532]: [NOTICE]   (317552) : haproxy version is 2.8.14-c23fe91
Jan 26 16:07:09 compute-0 neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8[317532]: [NOTICE]   (317552) : path to executable is /usr/sbin/haproxy
Jan 26 16:07:09 compute-0 neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8[317532]: [WARNING]  (317552) : Exiting Master process...
Jan 26 16:07:09 compute-0 neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8[317532]: [ALERT]    (317552) : Current worker (317558) exited with code 143 (Terminated)
Jan 26 16:07:09 compute-0 neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8[317532]: [WARNING]  (317552) : All workers exited. Exiting... (0)
Jan 26 16:07:09 compute-0 systemd[1]: libpod-80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b.scope: Deactivated successfully.
Jan 26 16:07:09 compute-0 podman[317629]: 2026-01-26 16:07:09.888486137 +0000 UTC m=+0.050640041 container died 80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.892 239969 INFO nova.virt.libvirt.driver [-] [instance: 09b47418-b02a-4498-9491-407d4065793a] Instance destroyed successfully.
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.894 239969 DEBUG nova.objects.instance [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lazy-loading 'resources' on Instance uuid 09b47418-b02a-4498-9491-407d4065793a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.912 239969 DEBUG nova.virt.libvirt.vif [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:06:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-650598483',display_name='tempest-ServerAddressesTestJSON-server-650598483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-650598483',id=89,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:07:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='236577cf59a24cc38d6d1fc4abf38e0f',ramdisk_id='',reservation_id='r-5nsidvs8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-1395906254',owner_user_name='tempest-ServerAddressesTestJSON-1395906254-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:07:08Z,user_data=None,user_id='e943bd4be5214b3ebebfb57bf7a244e6',uuid=09b47418-b02a-4498-9491-407d4065793a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.912 239969 DEBUG nova.network.os_vif_util [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Converting VIF {"id": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "address": "fa:16:3e:bd:3a:60", "network": {"id": "a5539ab3-1cad-4d8d-8185-acc169a052f8", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-1325251939-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "236577cf59a24cc38d6d1fc4abf38e0f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd54701d9-2a", "ovs_interfaceid": "d54701d9-2aca-43e6-b340-8bc2ede7baf6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.913 239969 DEBUG nova.network.os_vif_util [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:3a:60,bridge_name='br-int',has_traffic_filtering=True,id=d54701d9-2aca-43e6-b340-8bc2ede7baf6,network=Network(a5539ab3-1cad-4d8d-8185-acc169a052f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd54701d9-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.914 239969 DEBUG os_vif [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:3a:60,bridge_name='br-int',has_traffic_filtering=True,id=d54701d9-2aca-43e6-b340-8bc2ede7baf6,network=Network(a5539ab3-1cad-4d8d-8185-acc169a052f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd54701d9-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.916 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.916 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd54701d9-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b-userdata-shm.mount: Deactivated successfully.
Jan 26 16:07:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-2285e3f94246d433a8040bd1609662d62910143c4959a1262c774302f74144d3-merged.mount: Deactivated successfully.
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:09 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.929 239969 INFO os_vif [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:3a:60,bridge_name='br-int',has_traffic_filtering=True,id=d54701d9-2aca-43e6-b340-8bc2ede7baf6,network=Network(a5539ab3-1cad-4d8d-8185-acc169a052f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd54701d9-2a')
Jan 26 16:07:09 compute-0 podman[317629]: 2026-01-26 16:07:09.936632454 +0000 UTC m=+0.098786338 container cleanup 80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:07:09 compute-0 systemd[1]: libpod-conmon-80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b.scope: Deactivated successfully.
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:09.999 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "dfbd3acb-76a4-4675-8c8c-eed121af14af" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.000 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:10 compute-0 podman[317684]: 2026-01-26 16:07:10.01085737 +0000 UTC m=+0.051233105 container remove 80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.015 239969 DEBUG nova.compute.manager [req-280fda2a-80a7-4b8d-95a1-2721720d73ee req-3fc21cee-81da-47cf-bcea-ba5e0cdedc96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Received event network-vif-plugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.017 239969 DEBUG oslo_concurrency.lockutils [req-280fda2a-80a7-4b8d-95a1-2721720d73ee req-3fc21cee-81da-47cf-bcea-ba5e0cdedc96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "09b47418-b02a-4498-9491-407d4065793a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.017 239969 DEBUG oslo_concurrency.lockutils [req-280fda2a-80a7-4b8d-95a1-2721720d73ee req-3fc21cee-81da-47cf-bcea-ba5e0cdedc96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.018 239969 DEBUG oslo_concurrency.lockutils [req-280fda2a-80a7-4b8d-95a1-2721720d73ee req-3fc21cee-81da-47cf-bcea-ba5e0cdedc96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:10.017 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0b82edb2-ef75-4448-9287-6db955f7d7f5]: (4, ('Mon Jan 26 04:07:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8 (80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b)\n80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b\nMon Jan 26 04:07:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8 (80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b)\n80c1a1b449ff6c2c74856f4ac02fbd03aa1c45091f1d3c0316ed31a042010b4b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.018 239969 DEBUG nova.compute.manager [req-280fda2a-80a7-4b8d-95a1-2721720d73ee req-3fc21cee-81da-47cf-bcea-ba5e0cdedc96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] No waiting events found dispatching network-vif-plugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.018 239969 WARNING nova.compute.manager [req-280fda2a-80a7-4b8d-95a1-2721720d73ee req-3fc21cee-81da-47cf-bcea-ba5e0cdedc96 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Received unexpected event network-vif-plugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 for instance with vm_state active and task_state deleting.
Jan 26 16:07:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:10.018 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ff230aa3-a94e-462e-ae6b-3475cb630eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:10.019 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5539ab3-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.021 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:10 compute-0 kernel: tapa5539ab3-10: left promiscuous mode
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.027 239969 DEBUG nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.041 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:10.045 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[711e69af-aa34-4391-851e-96c5c53066a9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:10.058 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[382ceb13-cd98-40d4-addc-7f434d2aa90f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:10.060 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdad546-30a7-4e40-8399-d1091b1f45a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:10.081 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[29489c1b-0628-48b4-912e-bc4cf0c41ece]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511829, 'reachable_time': 22597, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317702, 'error': None, 'target': 'ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:10.084 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a5539ab3-1cad-4d8d-8185-acc169a052f8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:07:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:10.084 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a3eab93d-d43e-471f-be76-8652b314c7e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:10 compute-0 systemd[1]: run-netns-ovnmeta\x2da5539ab3\x2d1cad\x2d4d8d\x2d8185\x2dacc169a052f8.mount: Deactivated successfully.
Jan 26 16:07:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1650: 305 pgs: 305 active+clean; 339 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.104 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.104 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.110 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.110 239969 INFO nova.compute.claims [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.202 239969 INFO nova.virt.libvirt.driver [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Deleting instance files /var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a_del
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.202 239969 INFO nova.virt.libvirt.driver [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Deletion of /var/lib/nova/instances/09b47418-b02a-4498-9491-407d4065793a_del complete
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.255 239969 INFO nova.compute.manager [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Took 0.61 seconds to destroy the instance on the hypervisor.
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.255 239969 DEBUG oslo.service.loopingcall [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.256 239969 DEBUG nova.compute.manager [-] [instance: 09b47418-b02a-4498-9491-407d4065793a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.256 239969 DEBUG nova.network.neutron [-] [instance: 09b47418-b02a-4498-9491-407d4065793a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.279 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3980849873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.828 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.834 239969 DEBUG nova.compute.provider_tree [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.853 239969 DEBUG nova.scheduler.client.report [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.872 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.872 239969 DEBUG nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.913 239969 DEBUG nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.913 239969 DEBUG nova.network.neutron [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.933 239969 INFO nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:07:10 compute-0 nova_compute[239965]: 2026-01-26 16:07:10.954 239969 DEBUG nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.039 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.075 239969 DEBUG nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.076 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.077 239969 INFO nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Creating image(s)
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.098 239969 DEBUG nova.storage.rbd_utils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image dfbd3acb-76a4-4675-8c8c-eed121af14af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.123 239969 DEBUG nova.storage.rbd_utils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image dfbd3acb-76a4-4675-8c8c-eed121af14af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.142 239969 DEBUG nova.storage.rbd_utils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image dfbd3acb-76a4-4675-8c8c-eed121af14af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.145 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:11 compute-0 ceph-mon[75140]: pgmap v1650: 305 pgs: 305 active+clean; 339 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 181 op/s
Jan 26 16:07:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3980849873' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.175 239969 DEBUG nova.network.neutron [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Updating instance_info_cache with network_info: [{"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.188 239969 DEBUG nova.policy [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '392bf4c554724bd3b097b990cec964ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3d3c26abe454a90816833e484abbbd5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.201 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Releasing lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.202 239969 DEBUG nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Instance network_info: |[{"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.202 239969 DEBUG oslo_concurrency.lockutils [req-b0f1abab-3e5d-4dd6-98e5-7dc1d934f0e7 req-b0ad4888-c414-475c-a751-db3d02fd7f10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.202 239969 DEBUG nova.network.neutron [req-b0f1abab-3e5d-4dd6-98e5-7dc1d934f0e7 req-b0ad4888-c414-475c-a751-db3d02fd7f10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Refreshing network info cache for port 9d6fc516-d19e-408e-bbf8-ef2cf5754216 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.205 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Start _get_guest_xml network_info=[{"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.210 239969 WARNING nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.216 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.216 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.217 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.217 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.235 239969 DEBUG nova.storage.rbd_utils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image dfbd3acb-76a4-4675-8c8c-eed121af14af_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.240 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dfbd3acb-76a4-4675-8c8c-eed121af14af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.270 239969 DEBUG nova.virt.libvirt.host [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.271 239969 DEBUG nova.virt.libvirt.host [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.280 239969 DEBUG nova.virt.libvirt.host [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.281 239969 DEBUG nova.virt.libvirt.host [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.281 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.281 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.282 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.282 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.283 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.283 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.283 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.283 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.284 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.284 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.284 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.284 239969 DEBUG nova.virt.hardware [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.289 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.382 239969 DEBUG nova.network.neutron [-] [instance: 09b47418-b02a-4498-9491-407d4065793a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.404 239969 INFO nova.compute.manager [-] [instance: 09b47418-b02a-4498-9491-407d4065793a] Took 1.15 seconds to deallocate network for instance.
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.444 239969 DEBUG oslo_concurrency.lockutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.445 239969 DEBUG oslo_concurrency.lockutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.606 239969 DEBUG oslo_concurrency.processutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.730 239969 DEBUG nova.compute.manager [req-2a5f395d-555a-485f-b3d0-bb9da1066128 req-cd518400-d1b1-4a66-bd55-0c783ceaf320 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Received event network-vif-deleted-d54701d9-2aca-43e6-b340-8bc2ede7baf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4233682787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.883 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.910 239969 DEBUG nova.storage.rbd_utils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8c136f06-4df5-4491-a67c-fb799c6cf053_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.916 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:11 compute-0 nova_compute[239965]: 2026-01-26 16:07:11.978 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dfbd3acb-76a4-4675-8c8c-eed121af14af_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.738s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.038 239969 DEBUG nova.storage.rbd_utils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] resizing rbd image dfbd3acb-76a4-4675-8c8c-eed121af14af_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:07:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 305 active+clean; 363 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 5.0 MiB/s wr, 324 op/s
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.123 239969 DEBUG nova.compute.manager [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Received event network-vif-unplugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.123 239969 DEBUG oslo_concurrency.lockutils [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "09b47418-b02a-4498-9491-407d4065793a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.124 239969 DEBUG oslo_concurrency.lockutils [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.125 239969 DEBUG oslo_concurrency.lockutils [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.125 239969 DEBUG nova.compute.manager [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] No waiting events found dispatching network-vif-unplugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.125 239969 WARNING nova.compute.manager [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Received unexpected event network-vif-unplugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 for instance with vm_state deleted and task_state None.
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.125 239969 DEBUG nova.compute.manager [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Received event network-vif-plugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.126 239969 DEBUG oslo_concurrency.lockutils [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "09b47418-b02a-4498-9491-407d4065793a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.126 239969 DEBUG oslo_concurrency.lockutils [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.126 239969 DEBUG oslo_concurrency.lockutils [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.126 239969 DEBUG nova.compute.manager [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] No waiting events found dispatching network-vif-plugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.126 239969 WARNING nova.compute.manager [req-6527c500-78da-4817-a797-0b8ec95ee5b0 req-dafcc952-679e-48aa-9167-d02f8484ae07 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 09b47418-b02a-4498-9491-407d4065793a] Received unexpected event network-vif-plugged-d54701d9-2aca-43e6-b340-8bc2ede7baf6 for instance with vm_state deleted and task_state None.
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.131 239969 DEBUG nova.objects.instance [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'migration_context' on Instance uuid dfbd3acb-76a4-4675-8c8c-eed121af14af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.148 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.148 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Ensure instance console log exists: /var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.149 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.149 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.149 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4233682787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564375184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.233 239969 DEBUG oslo_concurrency.processutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.237 239969 DEBUG nova.compute.provider_tree [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.253 239969 DEBUG nova.scheduler.client.report [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.284 239969 DEBUG oslo_concurrency.lockutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.311 239969 INFO nova.scheduler.client.report [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Deleted allocations for instance 09b47418-b02a-4498-9491-407d4065793a
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.371 239969 DEBUG nova.network.neutron [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Successfully created port: b56d21dd-1402-48a6-b04b-a5cf207e9244 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.384 239969 DEBUG oslo_concurrency.lockutils [None req-f9d51ad2-9d13-44b3-b3a7-9e29f5753294 e943bd4be5214b3ebebfb57bf7a244e6 236577cf59a24cc38d6d1fc4abf38e0f - - default default] Lock "09b47418-b02a-4498-9491-407d4065793a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.742s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2795438647' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.478 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.479 239969 DEBUG nova.virt.libvirt.vif [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1263243890',display_name='tempest-ServerActionsTestOtherB-server-1263243890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1263243890',id=90,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-v10bi8tk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:07Z,user_data=None,user_id='ad9c6196af60436caf20747e96ad8388',uuid=8c136f06-4df5-4491-a67c-fb799c6cf053,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.479 239969 DEBUG nova.network.os_vif_util [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.480 239969 DEBUG nova.network.os_vif_util [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:7b:b5,bridge_name='br-int',has_traffic_filtering=True,id=9d6fc516-d19e-408e-bbf8-ef2cf5754216,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6fc516-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.481 239969 DEBUG nova.objects.instance [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8c136f06-4df5-4491-a67c-fb799c6cf053 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.496 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <uuid>8c136f06-4df5-4491-a67c-fb799c6cf053</uuid>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <name>instance-0000005a</name>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestOtherB-server-1263243890</nova:name>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:07:11</nova:creationTime>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <nova:user uuid="ad9c6196af60436caf20747e96ad8388">tempest-ServerActionsTestOtherB-1778121066-project-member</nova:user>
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <nova:project uuid="56f8818d291f4e738d868673048ce025">tempest-ServerActionsTestOtherB-1778121066</nova:project>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <nova:port uuid="9d6fc516-d19e-408e-bbf8-ef2cf5754216">
Jan 26 16:07:12 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <system>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <entry name="serial">8c136f06-4df5-4491-a67c-fb799c6cf053</entry>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <entry name="uuid">8c136f06-4df5-4491-a67c-fb799c6cf053</entry>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     </system>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <os>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   </os>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <features>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   </features>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8c136f06-4df5-4491-a67c-fb799c6cf053_disk">
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8c136f06-4df5-4491-a67c-fb799c6cf053_disk.config">
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:12 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:de:7b:b5"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <target dev="tap9d6fc516-d1"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053/console.log" append="off"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <video>
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     </video>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:07:12 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:07:12 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:07:12 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:07:12 compute-0 nova_compute[239965]: </domain>
Jan 26 16:07:12 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.496 239969 DEBUG nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Preparing to wait for external event network-vif-plugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.497 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.497 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.497 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.497 239969 DEBUG nova.virt.libvirt.vif [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1263243890',display_name='tempest-ServerActionsTestOtherB-server-1263243890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1263243890',id=90,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-v10bi8tk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:07Z,user_data=None,user_id='ad9c6196af60436caf20747e96ad8388',uuid=8c136f06-4df5-4491-a67c-fb799c6cf053,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.498 239969 DEBUG nova.network.os_vif_util [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.498 239969 DEBUG nova.network.os_vif_util [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:7b:b5,bridge_name='br-int',has_traffic_filtering=True,id=9d6fc516-d19e-408e-bbf8-ef2cf5754216,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6fc516-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.498 239969 DEBUG os_vif [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:7b:b5,bridge_name='br-int',has_traffic_filtering=True,id=9d6fc516-d19e-408e-bbf8-ef2cf5754216,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6fc516-d1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.499 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.499 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.500 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.502 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.503 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d6fc516-d1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.503 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9d6fc516-d1, col_values=(('external_ids', {'iface-id': '9d6fc516-d19e-408e-bbf8-ef2cf5754216', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:7b:b5', 'vm-uuid': '8c136f06-4df5-4491-a67c-fb799c6cf053'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.506 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:12 compute-0 NetworkManager[48954]: <info>  [1769443632.5064] manager: (tap9d6fc516-d1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/378)
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.510 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.514 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.515 239969 INFO os_vif [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:7b:b5,bridge_name='br-int',has_traffic_filtering=True,id=9d6fc516-d19e-408e-bbf8-ef2cf5754216,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6fc516-d1')
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.575 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.575 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.576 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No VIF found with MAC fa:16:3e:de:7b:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.576 239969 INFO nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Using config drive
Jan 26 16:07:12 compute-0 nova_compute[239965]: 2026-01-26 16:07:12.598 239969 DEBUG nova.storage.rbd_utils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8c136f06-4df5-4491-a67c-fb799c6cf053_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:13 compute-0 ceph-mon[75140]: pgmap v1651: 305 pgs: 305 active+clean; 363 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 5.0 MiB/s wr, 324 op/s
Jan 26 16:07:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2564375184' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2795438647' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.266 239969 INFO nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Creating config drive at /var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053/disk.config
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.271 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8p0z8gvn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.336 239969 DEBUG nova.network.neutron [req-b0f1abab-3e5d-4dd6-98e5-7dc1d934f0e7 req-b0ad4888-c414-475c-a751-db3d02fd7f10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Updated VIF entry in instance network info cache for port 9d6fc516-d19e-408e-bbf8-ef2cf5754216. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.337 239969 DEBUG nova.network.neutron [req-b0f1abab-3e5d-4dd6-98e5-7dc1d934f0e7 req-b0ad4888-c414-475c-a751-db3d02fd7f10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Updating instance_info_cache with network_info: [{"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.359 239969 DEBUG oslo_concurrency.lockutils [req-b0f1abab-3e5d-4dd6-98e5-7dc1d934f0e7 req-b0ad4888-c414-475c-a751-db3d02fd7f10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.432 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8p0z8gvn" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.456 239969 DEBUG nova.storage.rbd_utils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 8c136f06-4df5-4491-a67c-fb799c6cf053_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.460 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053/disk.config 8c136f06-4df5-4491-a67c-fb799c6cf053_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.710 239969 DEBUG nova.network.neutron [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Successfully updated port: b56d21dd-1402-48a6-b04b-a5cf207e9244 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.724 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "refresh_cache-dfbd3acb-76a4-4675-8c8c-eed121af14af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.724 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquired lock "refresh_cache-dfbd3acb-76a4-4675-8c8c-eed121af14af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.725 239969 DEBUG nova.network.neutron [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.741 239969 DEBUG oslo_concurrency.processutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053/disk.config 8c136f06-4df5-4491-a67c-fb799c6cf053_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.742 239969 INFO nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Deleting local config drive /var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053/disk.config because it was imported into RBD.
Jan 26 16:07:13 compute-0 kernel: tap9d6fc516-d1: entered promiscuous mode
Jan 26 16:07:13 compute-0 NetworkManager[48954]: <info>  [1769443633.7992] manager: (tap9d6fc516-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/379)
Jan 26 16:07:13 compute-0 ovn_controller[146046]: 2026-01-26T16:07:13Z|00884|binding|INFO|Claiming lport 9d6fc516-d19e-408e-bbf8-ef2cf5754216 for this chassis.
Jan 26 16:07:13 compute-0 ovn_controller[146046]: 2026-01-26T16:07:13Z|00885|binding|INFO|9d6fc516-d19e-408e-bbf8-ef2cf5754216: Claiming fa:16:3e:de:7b:b5 10.100.0.3
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.801 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.806 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:7b:b5 10.100.0.3'], port_security=['fa:16:3e:de:7b:b5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8c136f06-4df5-4491-a67c-fb799c6cf053', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5352fc20-0398-48ca-8854-47c9620f712b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9d6fc516-d19e-408e-bbf8-ef2cf5754216) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.807 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9d6fc516-d19e-408e-bbf8-ef2cf5754216 in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d bound to our chassis
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.808 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:07:13 compute-0 ovn_controller[146046]: 2026-01-26T16:07:13Z|00886|binding|INFO|Setting lport 9d6fc516-d19e-408e-bbf8-ef2cf5754216 ovn-installed in OVS
Jan 26 16:07:13 compute-0 ovn_controller[146046]: 2026-01-26T16:07:13Z|00887|binding|INFO|Setting lport 9d6fc516-d19e-408e-bbf8-ef2cf5754216 up in Southbound
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.828 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3c412752-7675-4571-b887-0f82598a9b9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:13 compute-0 systemd-udevd[318051]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:07:13 compute-0 systemd-machined[208061]: New machine qemu-111-instance-0000005a.
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.835 239969 DEBUG nova.compute.manager [req-a30685f4-0fce-4456-9bda-05eb06f963cf req-30b81a6a-79b6-4589-a97a-5a907c39f629 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Received event network-changed-b56d21dd-1402-48a6-b04b-a5cf207e9244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.836 239969 DEBUG nova.compute.manager [req-a30685f4-0fce-4456-9bda-05eb06f963cf req-30b81a6a-79b6-4589-a97a-5a907c39f629 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Refreshing instance network info cache due to event network-changed-b56d21dd-1402-48a6-b04b-a5cf207e9244. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.836 239969 DEBUG oslo_concurrency.lockutils [req-a30685f4-0fce-4456-9bda-05eb06f963cf req-30b81a6a-79b6-4589-a97a-5a907c39f629 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-dfbd3acb-76a4-4675-8c8c-eed121af14af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:13 compute-0 NetworkManager[48954]: <info>  [1769443633.8446] device (tap9d6fc516-d1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:07:13 compute-0 NetworkManager[48954]: <info>  [1769443633.8453] device (tap9d6fc516-d1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:07:13 compute-0 systemd[1]: Started Virtual Machine qemu-111-instance-0000005a.
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.862 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[78a58c21-ff8e-4718-86f0-544cce8ca0a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.865 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[91d6bd78-bc02-45d1-ac5f-5585c4a61ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.893 239969 DEBUG nova.network.neutron [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.898 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cafd0016-c168-4f6c-aac1-8c97f0bf84be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.924 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d59b8c52-4c06-4cc4-b669-9464c5c21e0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 34738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318063, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.948 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[98ba262f-430c-4295-84f4-01a115915c3c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502171, 'tstamp': 502171}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318065, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502175, 'tstamp': 502175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318065, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.950 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:13 compute-0 nova_compute[239965]: 2026-01-26 16:07:13.952 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.953 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67ce52d3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.953 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.954 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67ce52d3-40, col_values=(('external_ids', {'iface-id': '8d8bd19b-594d-4000-99fc-eb8020a59c23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:13.954 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 402 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 283 op/s
Jan 26 16:07:14 compute-0 nova_compute[239965]: 2026-01-26 16:07:14.957 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443634.9565642, 8c136f06-4df5-4491-a67c-fb799c6cf053 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:14 compute-0 nova_compute[239965]: 2026-01-26 16:07:14.958 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] VM Started (Lifecycle Event)
Jan 26 16:07:14 compute-0 nova_compute[239965]: 2026-01-26 16:07:14.974 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:14 compute-0 nova_compute[239965]: 2026-01-26 16:07:14.978 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443634.9578793, 8c136f06-4df5-4491-a67c-fb799c6cf053 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:14 compute-0 nova_compute[239965]: 2026-01-26 16:07:14.979 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] VM Paused (Lifecycle Event)
Jan 26 16:07:14 compute-0 nova_compute[239965]: 2026-01-26 16:07:14.997 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:14.999 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.025 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:15 compute-0 ceph-mon[75140]: pgmap v1652: 305 pgs: 305 active+clean; 402 MiB data, 816 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.9 MiB/s wr, 283 op/s
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.558 239969 DEBUG nova.network.neutron [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Updating instance_info_cache with network_info: [{"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.581 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Releasing lock "refresh_cache-dfbd3acb-76a4-4675-8c8c-eed121af14af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.582 239969 DEBUG nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Instance network_info: |[{"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.582 239969 DEBUG oslo_concurrency.lockutils [req-a30685f4-0fce-4456-9bda-05eb06f963cf req-30b81a6a-79b6-4589-a97a-5a907c39f629 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-dfbd3acb-76a4-4675-8c8c-eed121af14af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.582 239969 DEBUG nova.network.neutron [req-a30685f4-0fce-4456-9bda-05eb06f963cf req-30b81a6a-79b6-4589-a97a-5a907c39f629 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Refreshing network info cache for port b56d21dd-1402-48a6-b04b-a5cf207e9244 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.586 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Start _get_guest_xml network_info=[{"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.591 239969 WARNING nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.596 239969 DEBUG nova.virt.libvirt.host [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.597 239969 DEBUG nova.virt.libvirt.host [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.602 239969 DEBUG nova.virt.libvirt.host [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.603 239969 DEBUG nova.virt.libvirt.host [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.603 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.603 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.604 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.604 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.604 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.604 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.604 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.605 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.605 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.605 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.605 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.605 239969 DEBUG nova.virt.hardware [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.608 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.924 239969 DEBUG nova.compute.manager [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Received event network-vif-plugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.924 239969 DEBUG oslo_concurrency.lockutils [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.928 239969 DEBUG oslo_concurrency.lockutils [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.928 239969 DEBUG oslo_concurrency.lockutils [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.929 239969 DEBUG nova.compute.manager [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Processing event network-vif-plugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.929 239969 DEBUG nova.compute.manager [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Received event network-vif-plugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.929 239969 DEBUG oslo_concurrency.lockutils [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.930 239969 DEBUG oslo_concurrency.lockutils [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.930 239969 DEBUG oslo_concurrency.lockutils [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.930 239969 DEBUG nova.compute.manager [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] No waiting events found dispatching network-vif-plugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.930 239969 WARNING nova.compute.manager [req-2d00c07e-f27a-41a2-bf37-723e2bf2b766 req-01670b5b-20a0-4b08-9a81-9a31c6e7eb06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Received unexpected event network-vif-plugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 for instance with vm_state building and task_state spawning.
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.931 239969 DEBUG nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.935 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443635.934681, 8c136f06-4df5-4491-a67c-fb799c6cf053 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.935 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] VM Resumed (Lifecycle Event)
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.938 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.944 239969 INFO nova.virt.libvirt.driver [-] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Instance spawned successfully.
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.945 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.958 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.962 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.967 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.967 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.967 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.968 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.968 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.969 239969 DEBUG nova.virt.libvirt.driver [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:15 compute-0 nova_compute[239965]: 2026-01-26 16:07:15.990 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.016 239969 INFO nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Took 8.39 seconds to spawn the instance on the hypervisor.
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.017 239969 DEBUG nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1653: 305 pgs: 305 active+clean; 418 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 249 op/s
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.108 239969 INFO nova.compute.manager [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Took 9.72 seconds to build instance.
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.126 239969 DEBUG oslo_concurrency.lockutils [None req-edf645b2-9e51-4e9c-ae55-9d7fcd5e1f90 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3213821181' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3213821181' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.199 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.220 239969 DEBUG nova.storage.rbd_utils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image dfbd3acb-76a4-4675-8c8c-eed121af14af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.232 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.620 239969 INFO nova.compute.manager [None req-f845c1f9-5675-4400-ae67-5bf14cbed91d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Pausing
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.621 239969 DEBUG nova.objects.instance [None req-f845c1f9-5675-4400-ae67-5bf14cbed91d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'flavor' on Instance uuid 8c136f06-4df5-4491-a67c-fb799c6cf053 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.648 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443636.6477942, 8c136f06-4df5-4491-a67c-fb799c6cf053 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.648 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] VM Paused (Lifecycle Event)
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.649 239969 DEBUG nova.compute.manager [None req-f845c1f9-5675-4400-ae67-5bf14cbed91d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.668 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.671 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2708727357' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.842 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.843 239969 DEBUG nova.virt.libvirt.vif [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-187686196',display_name='tempest-ServersTestJSON-server-187686196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-187686196',id=91,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLPqvmqsyjxiXwMdiCH0fcGqBFvSkUnjArvtpXSc2QwVnBKlPzs0YLzWZT2X7wh9b2DvuZ/UWgEsqB48/Y+cV6xFDzZPxsgveEuEhaifWlOgDKNK4G4ISHMsV44PcS+oYg==',key_name='tempest-key-265585767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-2kcdkozu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:10Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=dfbd3acb-76a4-4675-8c8c-eed121af14af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.843 239969 DEBUG nova.network.os_vif_util [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.844 239969 DEBUG nova.network.os_vif_util [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:73:aa,bridge_name='br-int',has_traffic_filtering=True,id=b56d21dd-1402-48a6-b04b-a5cf207e9244,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56d21dd-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.845 239969 DEBUG nova.objects.instance [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'pci_devices' on Instance uuid dfbd3acb-76a4-4675-8c8c-eed121af14af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.859 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <uuid>dfbd3acb-76a4-4675-8c8c-eed121af14af</uuid>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <name>instance-0000005b</name>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestJSON-server-187686196</nova:name>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:07:15</nova:creationTime>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <nova:user uuid="392bf4c554724bd3b097b990cec964ac">tempest-ServersTestJSON-190839520-project-member</nova:user>
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <nova:project uuid="e3d3c26abe454a90816833e484abbbd5">tempest-ServersTestJSON-190839520</nova:project>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <nova:port uuid="b56d21dd-1402-48a6-b04b-a5cf207e9244">
Jan 26 16:07:16 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <system>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <entry name="serial">dfbd3acb-76a4-4675-8c8c-eed121af14af</entry>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <entry name="uuid">dfbd3acb-76a4-4675-8c8c-eed121af14af</entry>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     </system>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <os>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   </os>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <features>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   </features>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dfbd3acb-76a4-4675-8c8c-eed121af14af_disk">
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dfbd3acb-76a4-4675-8c8c-eed121af14af_disk.config">
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:16 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:0c:73:aa"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <target dev="tapb56d21dd-14"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af/console.log" append="off"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <video>
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     </video>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:07:16 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:07:16 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:07:16 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:07:16 compute-0 nova_compute[239965]: </domain>
Jan 26 16:07:16 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.860 239969 DEBUG nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Preparing to wait for external event network-vif-plugged-b56d21dd-1402-48a6-b04b-a5cf207e9244 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.860 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.860 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.860 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.861 239969 DEBUG nova.virt.libvirt.vif [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-187686196',display_name='tempest-ServersTestJSON-server-187686196',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-187686196',id=91,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLPqvmqsyjxiXwMdiCH0fcGqBFvSkUnjArvtpXSc2QwVnBKlPzs0YLzWZT2X7wh9b2DvuZ/UWgEsqB48/Y+cV6xFDzZPxsgveEuEhaifWlOgDKNK4G4ISHMsV44PcS+oYg==',key_name='tempest-key-265585767',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-2kcdkozu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:10Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=dfbd3acb-76a4-4675-8c8c-eed121af14af,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.861 239969 DEBUG nova.network.os_vif_util [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.862 239969 DEBUG nova.network.os_vif_util [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:73:aa,bridge_name='br-int',has_traffic_filtering=True,id=b56d21dd-1402-48a6-b04b-a5cf207e9244,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56d21dd-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.862 239969 DEBUG os_vif [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:73:aa,bridge_name='br-int',has_traffic_filtering=True,id=b56d21dd-1402-48a6-b04b-a5cf207e9244,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56d21dd-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.863 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.863 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.866 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.866 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb56d21dd-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.866 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb56d21dd-14, col_values=(('external_ids', {'iface-id': 'b56d21dd-1402-48a6-b04b-a5cf207e9244', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:73:aa', 'vm-uuid': 'dfbd3acb-76a4-4675-8c8c-eed121af14af'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:16 compute-0 NetworkManager[48954]: <info>  [1769443636.8686] manager: (tapb56d21dd-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/380)
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.867 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.871 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.874 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.875 239969 INFO os_vif [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:73:aa,bridge_name='br-int',has_traffic_filtering=True,id=b56d21dd-1402-48a6-b04b-a5cf207e9244,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56d21dd-14')
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.921 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.921 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.922 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No VIF found with MAC fa:16:3e:0c:73:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.922 239969 INFO nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Using config drive
Jan 26 16:07:16 compute-0 nova_compute[239965]: 2026-01-26 16:07:16.940 239969 DEBUG nova.storage.rbd_utils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image dfbd3acb-76a4-4675-8c8c-eed121af14af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:17 compute-0 ceph-mon[75140]: pgmap v1653: 305 pgs: 305 active+clean; 418 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.0 MiB/s wr, 249 op/s
Jan 26 16:07:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2708727357' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.365 239969 INFO nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Creating config drive at /var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af/disk.config
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.374 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplrsth8d5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.517 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplrsth8d5" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.545 239969 DEBUG nova.storage.rbd_utils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image dfbd3acb-76a4-4675-8c8c-eed121af14af_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.549 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af/disk.config dfbd3acb-76a4-4675-8c8c-eed121af14af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.585 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443622.5782657, 2260a3d1-973c-4364-af38-ebd9acd287f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.586 239969 INFO nova.compute.manager [-] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] VM Stopped (Lifecycle Event)
Jan 26 16:07:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.695 239969 DEBUG oslo_concurrency.processutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af/disk.config dfbd3acb-76a4-4675-8c8c-eed121af14af_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.696 239969 INFO nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Deleting local config drive /var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af/disk.config because it was imported into RBD.
Jan 26 16:07:17 compute-0 kernel: tapb56d21dd-14: entered promiscuous mode
Jan 26 16:07:17 compute-0 NetworkManager[48954]: <info>  [1769443637.7437] manager: (tapb56d21dd-14): new Tun device (/org/freedesktop/NetworkManager/Devices/381)
Jan 26 16:07:17 compute-0 ovn_controller[146046]: 2026-01-26T16:07:17Z|00888|binding|INFO|Claiming lport b56d21dd-1402-48a6-b04b-a5cf207e9244 for this chassis.
Jan 26 16:07:17 compute-0 ovn_controller[146046]: 2026-01-26T16:07:17Z|00889|binding|INFO|b56d21dd-1402-48a6-b04b-a5cf207e9244: Claiming fa:16:3e:0c:73:aa 10.100.0.7
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.746 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:17 compute-0 ovn_controller[146046]: 2026-01-26T16:07:17Z|00890|binding|INFO|Setting lport b56d21dd-1402-48a6-b04b-a5cf207e9244 ovn-installed in OVS
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.767 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:17 compute-0 systemd-machined[208061]: New machine qemu-112-instance-0000005b.
Jan 26 16:07:17 compute-0 systemd-udevd[318243]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:07:17 compute-0 NetworkManager[48954]: <info>  [1769443637.7885] device (tapb56d21dd-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:07:17 compute-0 NetworkManager[48954]: <info>  [1769443637.7890] device (tapb56d21dd-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:07:17 compute-0 systemd[1]: Started Virtual Machine qemu-112-instance-0000005b.
Jan 26 16:07:17 compute-0 nova_compute[239965]: 2026-01-26 16:07:17.796 239969 DEBUG nova.compute.manager [None req-23db5f4c-5ad3-4272-b013-a411871504a7 - - - - - -] [instance: 2260a3d1-973c-4364-af38-ebd9acd287f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1654: 305 pgs: 305 active+clean; 418 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 259 op/s
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.204 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443638.203927, dfbd3acb-76a4-4675-8c8c-eed121af14af => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.204 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] VM Started (Lifecycle Event)
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.321 239969 DEBUG nova.network.neutron [req-a30685f4-0fce-4456-9bda-05eb06f963cf req-30b81a6a-79b6-4589-a97a-5a907c39f629 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Updated VIF entry in instance network info cache for port b56d21dd-1402-48a6-b04b-a5cf207e9244. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.321 239969 DEBUG nova.network.neutron [req-a30685f4-0fce-4456-9bda-05eb06f963cf req-30b81a6a-79b6-4589-a97a-5a907c39f629 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Updating instance_info_cache with network_info: [{"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:18 compute-0 ovn_controller[146046]: 2026-01-26T16:07:18Z|00891|binding|INFO|Setting lport b56d21dd-1402-48a6-b04b-a5cf207e9244 up in Southbound
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.371 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:73:aa 10.100.0.7'], port_security=['fa:16:3e:0c:73:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dfbd3acb-76a4-4675-8c8c-eed121af14af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b56d21dd-1402-48a6-b04b-a5cf207e9244) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.373 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b56d21dd-1402-48a6-b04b-a5cf207e9244 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 bound to our chassis
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.374 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.394 239969 DEBUG oslo_concurrency.lockutils [req-a30685f4-0fce-4456-9bda-05eb06f963cf req-30b81a6a-79b6-4589-a97a-5a907c39f629 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-dfbd3acb-76a4-4675-8c8c-eed121af14af" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.394 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[36d35070-8e2d-4d0a-b183-2e947f6c0c22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.399 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.404 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443638.2061791, dfbd3acb-76a4-4675-8c8c-eed121af14af => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.404 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] VM Paused (Lifecycle Event)
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.420 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.423 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.431 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0881e704-0276-44dc-8721-87cdcc4b347d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.435 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6a9851d0-b813-4d53-aea2-bd4bdd35af86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.446 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.473 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[46630fb9-3b87-4f96-a445-bccb8ef69c05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.496 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[78b626ad-983c-4a08-80ef-259445f115cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 13, 'rx_bytes': 616, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318299, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:18 compute-0 ovn_controller[146046]: 2026-01-26T16:07:18Z|00892|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:07:18 compute-0 ovn_controller[146046]: 2026-01-26T16:07:18Z|00893|binding|INFO|Releasing lport 8d8bd19b-594d-4000-99fc-eb8020a59c23 from this chassis (sb_readonly=0)
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.516 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d3eec82-d598-47fc-af96-8ae6be5f4cc6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318300, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318300, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.518 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.520 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.575 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.576 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.577 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.578 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:18.578 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.579 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.685 239969 DEBUG nova.compute.manager [req-25e262a5-30f6-41d6-b759-2a29e6029772 req-84045643-1a90-43c6-9746-e00f9c1c58ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Received event network-vif-plugged-b56d21dd-1402-48a6-b04b-a5cf207e9244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.685 239969 DEBUG oslo_concurrency.lockutils [req-25e262a5-30f6-41d6-b759-2a29e6029772 req-84045643-1a90-43c6-9746-e00f9c1c58ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.686 239969 DEBUG oslo_concurrency.lockutils [req-25e262a5-30f6-41d6-b759-2a29e6029772 req-84045643-1a90-43c6-9746-e00f9c1c58ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.686 239969 DEBUG oslo_concurrency.lockutils [req-25e262a5-30f6-41d6-b759-2a29e6029772 req-84045643-1a90-43c6-9746-e00f9c1c58ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.686 239969 DEBUG nova.compute.manager [req-25e262a5-30f6-41d6-b759-2a29e6029772 req-84045643-1a90-43c6-9746-e00f9c1c58ce a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Processing event network-vif-plugged-b56d21dd-1402-48a6-b04b-a5cf207e9244 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.687 239969 DEBUG nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.692 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443638.6912975, dfbd3acb-76a4-4675-8c8c-eed121af14af => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.692 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] VM Resumed (Lifecycle Event)
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.694 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.698 239969 INFO nova.virt.libvirt.driver [-] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Instance spawned successfully.
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.698 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.715 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.721 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.726 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.726 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.727 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.727 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.728 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.728 239969 DEBUG nova.virt.libvirt.driver [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.741 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.796 239969 INFO nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Took 7.72 seconds to spawn the instance on the hypervisor.
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.797 239969 DEBUG nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.859 239969 INFO nova.compute.manager [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Took 8.78 seconds to build instance.
Jan 26 16:07:18 compute-0 nova_compute[239965]: 2026-01-26 16:07:18.873 239969 DEBUG oslo_concurrency.lockutils [None req-2dca6430-adaa-4ed8-8251-a778f9cf635f 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.874s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.152 239969 DEBUG oslo_concurrency.lockutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "8c136f06-4df5-4491-a67c-fb799c6cf053" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.153 239969 DEBUG oslo_concurrency.lockutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.153 239969 INFO nova.compute.manager [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Shelving
Jan 26 16:07:19 compute-0 kernel: tap9d6fc516-d1 (unregistering): left promiscuous mode
Jan 26 16:07:19 compute-0 NetworkManager[48954]: <info>  [1769443639.2078] device (tap9d6fc516-d1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.214 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:19 compute-0 ovn_controller[146046]: 2026-01-26T16:07:19Z|00894|binding|INFO|Releasing lport 9d6fc516-d19e-408e-bbf8-ef2cf5754216 from this chassis (sb_readonly=0)
Jan 26 16:07:19 compute-0 ovn_controller[146046]: 2026-01-26T16:07:19Z|00895|binding|INFO|Setting lport 9d6fc516-d19e-408e-bbf8-ef2cf5754216 down in Southbound
Jan 26 16:07:19 compute-0 ovn_controller[146046]: 2026-01-26T16:07:19Z|00896|binding|INFO|Removing iface tap9d6fc516-d1 ovn-installed in OVS
Jan 26 16:07:19 compute-0 ceph-mon[75140]: pgmap v1654: 305 pgs: 305 active+clean; 418 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 259 op/s
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.222 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:7b:b5 10.100.0.3'], port_security=['fa:16:3e:de:7b:b5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '8c136f06-4df5-4491-a67c-fb799c6cf053', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5352fc20-0398-48ca-8854-47c9620f712b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9d6fc516-d19e-408e-bbf8-ef2cf5754216) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.223 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9d6fc516-d19e-408e-bbf8-ef2cf5754216 in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d unbound from our chassis
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.224 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.226 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.242 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.246 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2e4ed3-c6cc-4497-88d9-c5f40783597f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:19 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Jan 26 16:07:19 compute-0 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Consumed 1.759s CPU time.
Jan 26 16:07:19 compute-0 systemd-machined[208061]: Machine qemu-111-instance-0000005a terminated.
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.280 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4fcbee03-b54f-4b89-8117-9a756e77d6e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.284 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[92ed61a4-8d8d-4584-8eb2-aede340f2d86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.317 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c3067a6d-f5e5-4231-b635-f6b207dae011]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.337 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd5186b-1452-49bf-a0da-b1837911b281]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 34738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318311, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.366 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1e0a86d5-b009-4a60-9949-79db8e583370]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502171, 'tstamp': 502171}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318312, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502175, 'tstamp': 502175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318312, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.369 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.371 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.377 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.378 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67ce52d3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.379 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.379 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67ce52d3-40, col_values=(('external_ids', {'iface-id': '8d8bd19b-594d-4000-99fc-eb8020a59c23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:19.379 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:19 compute-0 NetworkManager[48954]: <info>  [1769443639.3945] manager: (tap9d6fc516-d1): new Tun device (/org/freedesktop/NetworkManager/Devices/382)
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.400 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.407 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.422 239969 INFO nova.virt.libvirt.driver [-] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Instance destroyed successfully.
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.423 239969 DEBUG nova.objects.instance [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'numa_topology' on Instance uuid 8c136f06-4df5-4491-a67c-fb799c6cf053 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.520 239969 DEBUG nova.compute.manager [req-1c39e2bc-2bd6-42b9-8947-ce671b7e6351 req-eaa5a878-4605-4ba8-ac5e-2ab8810e0af2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Received event network-vif-unplugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.522 239969 DEBUG oslo_concurrency.lockutils [req-1c39e2bc-2bd6-42b9-8947-ce671b7e6351 req-eaa5a878-4605-4ba8-ac5e-2ab8810e0af2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.522 239969 DEBUG oslo_concurrency.lockutils [req-1c39e2bc-2bd6-42b9-8947-ce671b7e6351 req-eaa5a878-4605-4ba8-ac5e-2ab8810e0af2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.523 239969 DEBUG oslo_concurrency.lockutils [req-1c39e2bc-2bd6-42b9-8947-ce671b7e6351 req-eaa5a878-4605-4ba8-ac5e-2ab8810e0af2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.523 239969 DEBUG nova.compute.manager [req-1c39e2bc-2bd6-42b9-8947-ce671b7e6351 req-eaa5a878-4605-4ba8-ac5e-2ab8810e0af2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] No waiting events found dispatching network-vif-unplugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.523 239969 WARNING nova.compute.manager [req-1c39e2bc-2bd6-42b9-8947-ce671b7e6351 req-eaa5a878-4605-4ba8-ac5e-2ab8810e0af2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Received unexpected event network-vif-unplugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 for instance with vm_state paused and task_state shelving.
Jan 26 16:07:19 compute-0 nova_compute[239965]: 2026-01-26 16:07:19.915 239969 INFO nova.virt.libvirt.driver [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Beginning cold snapshot process
Jan 26 16:07:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 418 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 233 op/s
Jan 26 16:07:20 compute-0 nova_compute[239965]: 2026-01-26 16:07:20.502 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:20 compute-0 nova_compute[239965]: 2026-01-26 16:07:20.565 239969 DEBUG nova.virt.libvirt.imagebackend [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 16:07:20 compute-0 nova_compute[239965]: 2026-01-26 16:07:20.877 239969 DEBUG nova.storage.rbd_utils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(6b40aff1fafe4991ba4a05c4319493a9) on rbd image(8c136f06-4df5-4491-a67c-fb799c6cf053_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.040 239969 DEBUG nova.compute.manager [req-35c721c8-db61-439e-bdbf-579456778a58 req-3a9c840f-7dd5-4275-a66b-8169e2dfef32 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Received event network-vif-plugged-b56d21dd-1402-48a6-b04b-a5cf207e9244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.041 239969 DEBUG oslo_concurrency.lockutils [req-35c721c8-db61-439e-bdbf-579456778a58 req-3a9c840f-7dd5-4275-a66b-8169e2dfef32 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.041 239969 DEBUG oslo_concurrency.lockutils [req-35c721c8-db61-439e-bdbf-579456778a58 req-3a9c840f-7dd5-4275-a66b-8169e2dfef32 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.041 239969 DEBUG oslo_concurrency.lockutils [req-35c721c8-db61-439e-bdbf-579456778a58 req-3a9c840f-7dd5-4275-a66b-8169e2dfef32 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.041 239969 DEBUG nova.compute.manager [req-35c721c8-db61-439e-bdbf-579456778a58 req-3a9c840f-7dd5-4275-a66b-8169e2dfef32 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] No waiting events found dispatching network-vif-plugged-b56d21dd-1402-48a6-b04b-a5cf207e9244 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.042 239969 WARNING nova.compute.manager [req-35c721c8-db61-439e-bdbf-579456778a58 req-3a9c840f-7dd5-4275-a66b-8169e2dfef32 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Received unexpected event network-vif-plugged-b56d21dd-1402-48a6-b04b-a5cf207e9244 for instance with vm_state active and task_state None.
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.044 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.276 239969 DEBUG oslo_concurrency.lockutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "dfbd3acb-76a4-4675-8c8c-eed121af14af" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.276 239969 DEBUG oslo_concurrency.lockutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.277 239969 DEBUG oslo_concurrency.lockutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.277 239969 DEBUG oslo_concurrency.lockutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.277 239969 DEBUG oslo_concurrency.lockutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.278 239969 INFO nova.compute.manager [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Terminating instance
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.279 239969 DEBUG nova.compute.manager [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:07:21 compute-0 kernel: tapb56d21dd-14 (unregistering): left promiscuous mode
Jan 26 16:07:21 compute-0 NetworkManager[48954]: <info>  [1769443641.3197] device (tapb56d21dd-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:07:21 compute-0 ovn_controller[146046]: 2026-01-26T16:07:21Z|00897|binding|INFO|Releasing lport b56d21dd-1402-48a6-b04b-a5cf207e9244 from this chassis (sb_readonly=0)
Jan 26 16:07:21 compute-0 ovn_controller[146046]: 2026-01-26T16:07:21Z|00898|binding|INFO|Setting lport b56d21dd-1402-48a6-b04b-a5cf207e9244 down in Southbound
Jan 26 16:07:21 compute-0 ovn_controller[146046]: 2026-01-26T16:07:21Z|00899|binding|INFO|Removing iface tapb56d21dd-14 ovn-installed in OVS
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.331 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.332 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.338 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:73:aa 10.100.0.7'], port_security=['fa:16:3e:0c:73:aa 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dfbd3acb-76a4-4675-8c8c-eed121af14af', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b56d21dd-1402-48a6-b04b-a5cf207e9244) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.339 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b56d21dd-1402-48a6-b04b-a5cf207e9244 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 unbound from our chassis
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.341 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.352 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.357 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[25897783-a029-430d-af06-fb2812e0a5de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:21 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Jan 26 16:07:21 compute-0 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Consumed 3.095s CPU time.
Jan 26 16:07:21 compute-0 systemd-machined[208061]: Machine qemu-112-instance-0000005b terminated.
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.389 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ca235ff7-c8aa-4599-91fa-770edf8a710e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.392 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[55923508-3f5d-4f02-8823-2efd66db6953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.424 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5ced03bd-a161-4872-a101-40a9b9126ce2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.440 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8770bb9d-8644-4745-b009-fc007010a566]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 15, 'rx_bytes': 616, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318382, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Jan 26 16:07:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Jan 26 16:07:21 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Jan 26 16:07:21 compute-0 ceph-mon[75140]: pgmap v1655: 305 pgs: 305 active+clean; 418 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.7 MiB/s wr, 233 op/s
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.462 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[382bddf6-6b78-488b-a8e2-2489feab736b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318383, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318383, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.464 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.465 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.472 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.473 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.473 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.473 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:21.474 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.510 239969 DEBUG nova.storage.rbd_utils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] cloning vms/8c136f06-4df5-4491-a67c-fb799c6cf053_disk@6b40aff1fafe4991ba4a05c4319493a9 to images/6bac8476-457e-4939-8475-ebfc03e92edd clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.540 239969 INFO nova.virt.libvirt.driver [-] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Instance destroyed successfully.
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.540 239969 DEBUG nova.objects.instance [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'resources' on Instance uuid dfbd3acb-76a4-4675-8c8c-eed121af14af obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.553 239969 DEBUG nova.virt.libvirt.vif [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:07:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-187686196',display_name='tempest-ServersTestJSON-server-187686196',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-187686196',id=91,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLPqvmqsyjxiXwMdiCH0fcGqBFvSkUnjArvtpXSc2QwVnBKlPzs0YLzWZT2X7wh9b2DvuZ/UWgEsqB48/Y+cV6xFDzZPxsgveEuEhaifWlOgDKNK4G4ISHMsV44PcS+oYg==',key_name='tempest-key-265585767',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:07:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-2kcdkozu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:07:18Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=dfbd3acb-76a4-4675-8c8c-eed121af14af,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.553 239969 DEBUG nova.network.os_vif_util [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "address": "fa:16:3e:0c:73:aa", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb56d21dd-14", "ovs_interfaceid": "b56d21dd-1402-48a6-b04b-a5cf207e9244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.554 239969 DEBUG nova.network.os_vif_util [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:73:aa,bridge_name='br-int',has_traffic_filtering=True,id=b56d21dd-1402-48a6-b04b-a5cf207e9244,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56d21dd-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.554 239969 DEBUG os_vif [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:73:aa,bridge_name='br-int',has_traffic_filtering=True,id=b56d21dd-1402-48a6-b04b-a5cf207e9244,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56d21dd-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.556 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.556 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb56d21dd-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.561 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.563 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.566 239969 INFO os_vif [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:73:aa,bridge_name='br-int',has_traffic_filtering=True,id=b56d21dd-1402-48a6-b04b-a5cf207e9244,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb56d21dd-14')
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.604 239969 DEBUG nova.storage.rbd_utils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] flattening images/6bac8476-457e-4939-8475-ebfc03e92edd flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.647 239969 DEBUG nova.compute.manager [req-f059bbb4-da95-4866-a903-5f8e7e78308a req-0a71813c-5593-4fab-8f88-e94bc0be4e84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Received event network-vif-plugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.648 239969 DEBUG oslo_concurrency.lockutils [req-f059bbb4-da95-4866-a903-5f8e7e78308a req-0a71813c-5593-4fab-8f88-e94bc0be4e84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.648 239969 DEBUG oslo_concurrency.lockutils [req-f059bbb4-da95-4866-a903-5f8e7e78308a req-0a71813c-5593-4fab-8f88-e94bc0be4e84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.648 239969 DEBUG oslo_concurrency.lockutils [req-f059bbb4-da95-4866-a903-5f8e7e78308a req-0a71813c-5593-4fab-8f88-e94bc0be4e84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.648 239969 DEBUG nova.compute.manager [req-f059bbb4-da95-4866-a903-5f8e7e78308a req-0a71813c-5593-4fab-8f88-e94bc0be4e84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] No waiting events found dispatching network-vif-plugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.649 239969 WARNING nova.compute.manager [req-f059bbb4-da95-4866-a903-5f8e7e78308a req-0a71813c-5593-4fab-8f88-e94bc0be4e84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Received unexpected event network-vif-plugged-9d6fc516-d19e-408e-bbf8-ef2cf5754216 for instance with vm_state paused and task_state shelving_image_uploading.
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.864 239969 DEBUG nova.storage.rbd_utils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] removing snapshot(6b40aff1fafe4991ba4a05c4319493a9) on rbd image(8c136f06-4df5-4491-a67c-fb799c6cf053_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.970 239969 INFO nova.virt.libvirt.driver [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Deleting instance files /var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af_del
Jan 26 16:07:21 compute-0 nova_compute[239965]: 2026-01-26 16:07:21.972 239969 INFO nova.virt.libvirt.driver [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Deletion of /var/lib/nova/instances/dfbd3acb-76a4-4675-8c8c-eed121af14af_del complete
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.067 239969 INFO nova.compute.manager [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.067 239969 DEBUG oslo.service.loopingcall [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.067 239969 DEBUG nova.compute.manager [-] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.068 239969 DEBUG nova.network.neutron [-] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:07:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1657: 305 pgs: 305 active+clean; 416 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 198 op/s
Jan 26 16:07:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Jan 26 16:07:22 compute-0 ceph-mon[75140]: osdmap e260: 3 total, 3 up, 3 in
Jan 26 16:07:22 compute-0 ceph-mon[75140]: pgmap v1657: 305 pgs: 305 active+clean; 416 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.3 MiB/s wr, 198 op/s
Jan 26 16:07:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Jan 26 16:07:22 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.501 239969 DEBUG nova.storage.rbd_utils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(snap) on rbd image(6bac8476-457e-4939-8475-ebfc03e92edd) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:07:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.764 239969 DEBUG nova.network.neutron [-] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.780 239969 INFO nova.compute.manager [-] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Took 0.71 seconds to deallocate network for instance.
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.843 239969 DEBUG oslo_concurrency.lockutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.843 239969 DEBUG oslo_concurrency.lockutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:22 compute-0 nova_compute[239965]: 2026-01-26 16:07:22.956 239969 DEBUG oslo_concurrency.processutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.145 239969 DEBUG nova.compute.manager [req-eed2de33-44b1-45e2-a607-3b08403b5dcb req-5e55f021-5ba8-4d71-8e4c-2e2b2fab35bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Received event network-vif-plugged-b56d21dd-1402-48a6-b04b-a5cf207e9244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.145 239969 DEBUG oslo_concurrency.lockutils [req-eed2de33-44b1-45e2-a607-3b08403b5dcb req-5e55f021-5ba8-4d71-8e4c-2e2b2fab35bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.145 239969 DEBUG oslo_concurrency.lockutils [req-eed2de33-44b1-45e2-a607-3b08403b5dcb req-5e55f021-5ba8-4d71-8e4c-2e2b2fab35bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.146 239969 DEBUG oslo_concurrency.lockutils [req-eed2de33-44b1-45e2-a607-3b08403b5dcb req-5e55f021-5ba8-4d71-8e4c-2e2b2fab35bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.146 239969 DEBUG nova.compute.manager [req-eed2de33-44b1-45e2-a607-3b08403b5dcb req-5e55f021-5ba8-4d71-8e4c-2e2b2fab35bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] No waiting events found dispatching network-vif-plugged-b56d21dd-1402-48a6-b04b-a5cf207e9244 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.146 239969 WARNING nova.compute.manager [req-eed2de33-44b1-45e2-a607-3b08403b5dcb req-5e55f021-5ba8-4d71-8e4c-2e2b2fab35bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Received unexpected event network-vif-plugged-b56d21dd-1402-48a6-b04b-a5cf207e9244 for instance with vm_state deleted and task_state None.
Jan 26 16:07:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Jan 26 16:07:23 compute-0 ceph-mon[75140]: osdmap e261: 3 total, 3 up, 3 in
Jan 26 16:07:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Jan 26 16:07:23 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Jan 26 16:07:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1515020200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.517 239969 DEBUG oslo_concurrency.processutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.523 239969 DEBUG nova.compute.provider_tree [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.672 239969 DEBUG nova.scheduler.client.report [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.697 239969 DEBUG oslo_concurrency.lockutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.732 239969 INFO nova.scheduler.client.report [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Deleted allocations for instance dfbd3acb-76a4-4675-8c8c-eed121af14af
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.735 239969 DEBUG nova.compute.manager [req-e89ada94-3432-4d8b-b714-3dfe4feca9d7 req-a34c6ea8-da66-4efa-8177-8cf47d626bb8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Received event network-vif-deleted-b56d21dd-1402-48a6-b04b-a5cf207e9244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.823 239969 DEBUG oslo_concurrency.lockutils [None req-0bf915f5-ea5d-4d5c-b6eb-142f29f176fe 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "dfbd3acb-76a4-4675-8c8c-eed121af14af" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:23 compute-0 nova_compute[239965]: 2026-01-26 16:07:23.848 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 416 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.5 MiB/s wr, 275 op/s
Jan 26 16:07:24 compute-0 ceph-mon[75140]: osdmap e262: 3 total, 3 up, 3 in
Jan 26 16:07:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1515020200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:24 compute-0 ceph-mon[75140]: pgmap v1660: 305 pgs: 305 active+clean; 416 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 1.5 MiB/s wr, 275 op/s
Jan 26 16:07:24 compute-0 nova_compute[239965]: 2026-01-26 16:07:24.889 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443629.8881643, 09b47418-b02a-4498-9491-407d4065793a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:24 compute-0 nova_compute[239965]: 2026-01-26 16:07:24.889 239969 INFO nova.compute.manager [-] [instance: 09b47418-b02a-4498-9491-407d4065793a] VM Stopped (Lifecycle Event)
Jan 26 16:07:24 compute-0 nova_compute[239965]: 2026-01-26 16:07:24.912 239969 DEBUG nova.compute.manager [None req-0abdcd2c-8f5c-4572-ad6b-751f1df86095 - - - - - -] [instance: 09b47418-b02a-4498-9491-407d4065793a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:25 compute-0 nova_compute[239965]: 2026-01-26 16:07:25.226 239969 INFO nova.virt.libvirt.driver [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Snapshot image upload complete
Jan 26 16:07:25 compute-0 nova_compute[239965]: 2026-01-26 16:07:25.227 239969 DEBUG nova.compute.manager [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:25 compute-0 nova_compute[239965]: 2026-01-26 16:07:25.284 239969 INFO nova.compute.manager [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Shelve offloading
Jan 26 16:07:25 compute-0 nova_compute[239965]: 2026-01-26 16:07:25.293 239969 INFO nova.virt.libvirt.driver [-] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Instance destroyed successfully.
Jan 26 16:07:25 compute-0 nova_compute[239965]: 2026-01-26 16:07:25.294 239969 DEBUG nova.compute.manager [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:25 compute-0 nova_compute[239965]: 2026-01-26 16:07:25.296 239969 DEBUG oslo_concurrency.lockutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:25 compute-0 nova_compute[239965]: 2026-01-26 16:07:25.297 239969 DEBUG oslo_concurrency.lockutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquired lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:25 compute-0 nova_compute[239965]: 2026-01-26 16:07:25.297 239969 DEBUG nova.network.neutron [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:07:26 compute-0 nova_compute[239965]: 2026-01-26 16:07:26.047 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1661: 305 pgs: 305 active+clean; 418 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 3.6 MiB/s wr, 304 op/s
Jan 26 16:07:26 compute-0 ceph-mon[75140]: pgmap v1661: 305 pgs: 305 active+clean; 418 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 3.6 MiB/s wr, 304 op/s
Jan 26 16:07:26 compute-0 nova_compute[239965]: 2026-01-26 16:07:26.558 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:26 compute-0 nova_compute[239965]: 2026-01-26 16:07:26.951 239969 DEBUG nova.network.neutron [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Updating instance_info_cache with network_info: [{"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:26 compute-0 nova_compute[239965]: 2026-01-26 16:07:26.970 239969 DEBUG oslo_concurrency.lockutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Releasing lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:26 compute-0 nova_compute[239965]: 2026-01-26 16:07:26.978 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:27 compute-0 nova_compute[239965]: 2026-01-26 16:07:27.376 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:27 compute-0 nova_compute[239965]: 2026-01-26 16:07:27.377 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:27 compute-0 nova_compute[239965]: 2026-01-26 16:07:27.394 239969 DEBUG nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:07:27 compute-0 nova_compute[239965]: 2026-01-26 16:07:27.452 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:27 compute-0 nova_compute[239965]: 2026-01-26 16:07:27.453 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:27 compute-0 nova_compute[239965]: 2026-01-26 16:07:27.459 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:07:27 compute-0 nova_compute[239965]: 2026-01-26 16:07:27.460 239969 INFO nova.compute.claims [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:07:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:27 compute-0 nova_compute[239965]: 2026-01-26 16:07:27.635 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.073 239969 INFO nova.virt.libvirt.driver [-] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Instance destroyed successfully.
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.074 239969 DEBUG nova.objects.instance [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'resources' on Instance uuid 8c136f06-4df5-4491-a67c-fb799c6cf053 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.088 239969 DEBUG nova.virt.libvirt.vif [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:07:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1263243890',display_name='tempest-ServerActionsTestOtherB-server-1263243890',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1263243890',id=90,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:07:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-v10bi8tk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member',shelved_at='2026-01-26T16:07:25.227683',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6bac8476-457e-4939-8475-ebfc03e92edd'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:07:20Z,user_data=None,user_id='ad9c6196af60436caf20747e96ad8388',uuid=8c136f06-4df5-4491-a67c-fb799c6cf053,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.088 239969 DEBUG nova.network.os_vif_util [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.089 239969 DEBUG nova.network.os_vif_util [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:7b:b5,bridge_name='br-int',has_traffic_filtering=True,id=9d6fc516-d19e-408e-bbf8-ef2cf5754216,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6fc516-d1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.090 239969 DEBUG os_vif [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:7b:b5,bridge_name='br-int',has_traffic_filtering=True,id=9d6fc516-d19e-408e-bbf8-ef2cf5754216,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6fc516-d1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.092 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.093 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d6fc516-d1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.094 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.096 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.098 239969 INFO os_vif [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:7b:b5,bridge_name='br-int',has_traffic_filtering=True,id=9d6fc516-d19e-408e-bbf8-ef2cf5754216,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9d6fc516-d1')
Jan 26 16:07:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 418 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 3.2 MiB/s wr, 253 op/s
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.135 239969 DEBUG nova.compute.manager [req-6d4d3675-dbd3-4b88-8cd2-50bdd82c8e90 req-3c8428fd-b71f-4526-af54-50ee2e15aed9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Received event network-changed-9d6fc516-d19e-408e-bbf8-ef2cf5754216 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.136 239969 DEBUG nova.compute.manager [req-6d4d3675-dbd3-4b88-8cd2-50bdd82c8e90 req-3c8428fd-b71f-4526-af54-50ee2e15aed9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Refreshing instance network info cache due to event network-changed-9d6fc516-d19e-408e-bbf8-ef2cf5754216. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.136 239969 DEBUG oslo_concurrency.lockutils [req-6d4d3675-dbd3-4b88-8cd2-50bdd82c8e90 req-3c8428fd-b71f-4526-af54-50ee2e15aed9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.137 239969 DEBUG oslo_concurrency.lockutils [req-6d4d3675-dbd3-4b88-8cd2-50bdd82c8e90 req-3c8428fd-b71f-4526-af54-50ee2e15aed9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.137 239969 DEBUG nova.network.neutron [req-6d4d3675-dbd3-4b88-8cd2-50bdd82c8e90 req-3c8428fd-b71f-4526-af54-50ee2e15aed9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Refreshing network info cache for port 9d6fc516-d19e-408e-bbf8-ef2cf5754216 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:07:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603134597' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.292 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.658s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.298 239969 DEBUG nova.compute.provider_tree [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.312 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.315 239969 DEBUG nova.scheduler.client.report [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.338 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.340 239969 DEBUG nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.398 239969 DEBUG nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.399 239969 DEBUG nova.network.neutron [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.410 239969 INFO nova.virt.libvirt.driver [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Deleting instance files /var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053_del
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.411 239969 INFO nova.virt.libvirt.driver [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Deletion of /var/lib/nova/instances/8c136f06-4df5-4491-a67c-fb799c6cf053_del complete
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.417 239969 INFO nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.439 239969 DEBUG nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.498 239969 INFO nova.scheduler.client.report [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Deleted allocations for instance 8c136f06-4df5-4491-a67c-fb799c6cf053
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.546 239969 DEBUG oslo_concurrency.lockutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.547 239969 DEBUG oslo_concurrency.lockutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.562 239969 DEBUG nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.563 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.564 239969 INFO nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Creating image(s)
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.584 239969 DEBUG nova.storage.rbd_utils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.604 239969 DEBUG nova.storage.rbd_utils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.626 239969 DEBUG nova.storage.rbd_utils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.630 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:07:28
Jan 26 16:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'volumes', '.rgw.root', 'default.rgw.log']
Jan 26 16:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.668 239969 DEBUG nova.policy [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '392bf4c554724bd3b097b990cec964ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3d3c26abe454a90816833e484abbbd5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.721 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.722 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.722 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.723 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.743 239969 DEBUG nova.storage.rbd_utils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.746 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:28 compute-0 nova_compute[239965]: 2026-01-26 16:07:28.781 239969 DEBUG oslo_concurrency.processutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.128 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.382s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:29 compute-0 ceph-mon[75140]: pgmap v1662: 305 pgs: 305 active+clean; 418 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 5.7 MiB/s rd, 3.2 MiB/s wr, 253 op/s
Jan 26 16:07:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3603134597' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.207 239969 DEBUG nova.storage.rbd_utils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] resizing rbd image 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.329 239969 DEBUG nova.objects.instance [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'migration_context' on Instance uuid 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.343 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.343 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Ensure instance console log exists: /var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.344 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.344 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.344 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1237484045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.450 239969 DEBUG oslo_concurrency.processutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.455 239969 DEBUG nova.compute.provider_tree [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.468 239969 DEBUG nova.scheduler.client.report [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.489 239969 DEBUG oslo_concurrency.lockutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.530 239969 DEBUG oslo_concurrency.lockutils [None req-6fa9ff8d-a555-432f-8515-b3cd6ef7d78d ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8c136f06-4df5-4491-a67c-fb799c6cf053" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 10.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.706 239969 DEBUG nova.network.neutron [req-6d4d3675-dbd3-4b88-8cd2-50bdd82c8e90 req-3c8428fd-b71f-4526-af54-50ee2e15aed9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Updated VIF entry in instance network info cache for port 9d6fc516-d19e-408e-bbf8-ef2cf5754216. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.706 239969 DEBUG nova.network.neutron [req-6d4d3675-dbd3-4b88-8cd2-50bdd82c8e90 req-3c8428fd-b71f-4526-af54-50ee2e15aed9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Updating instance_info_cache with network_info: [{"id": "9d6fc516-d19e-408e-bbf8-ef2cf5754216", "address": "fa:16:3e:de:7b:b5", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap9d6fc516-d1", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.725 239969 DEBUG oslo_concurrency.lockutils [req-6d4d3675-dbd3-4b88-8cd2-50bdd82c8e90 req-3c8428fd-b71f-4526-af54-50ee2e15aed9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-8c136f06-4df5-4491-a67c-fb799c6cf053" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:29 compute-0 nova_compute[239965]: 2026-01-26 16:07:29.800 239969 DEBUG nova.network.neutron [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Successfully created port: 73d8ee40-94a4-4fc4-9646-1e6cb8410022 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1663: 305 pgs: 305 active+clean; 418 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 137 op/s
Jan 26 16:07:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1237484045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:07:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:07:30 compute-0 nova_compute[239965]: 2026-01-26 16:07:30.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:31 compute-0 nova_compute[239965]: 2026-01-26 16:07:31.050 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:31 compute-0 ceph-mon[75140]: pgmap v1663: 305 pgs: 305 active+clean; 418 MiB data, 827 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.4 MiB/s wr, 137 op/s
Jan 26 16:07:31 compute-0 nova_compute[239965]: 2026-01-26 16:07:31.437 239969 DEBUG nova.network.neutron [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Successfully updated port: 73d8ee40-94a4-4fc4-9646-1e6cb8410022 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:07:31 compute-0 nova_compute[239965]: 2026-01-26 16:07:31.454 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "refresh_cache-2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:31 compute-0 nova_compute[239965]: 2026-01-26 16:07:31.455 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquired lock "refresh_cache-2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:31 compute-0 nova_compute[239965]: 2026-01-26 16:07:31.455 239969 DEBUG nova.network.neutron [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:07:31 compute-0 nova_compute[239965]: 2026-01-26 16:07:31.503 239969 DEBUG nova.compute.manager [req-c1bf8be6-5502-4937-8a02-fb6cc1cce9bb req-71d2935e-821f-47f0-8b5a-4eee5b12eec6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Received event network-changed-73d8ee40-94a4-4fc4-9646-1e6cb8410022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:31 compute-0 nova_compute[239965]: 2026-01-26 16:07:31.503 239969 DEBUG nova.compute.manager [req-c1bf8be6-5502-4937-8a02-fb6cc1cce9bb req-71d2935e-821f-47f0-8b5a-4eee5b12eec6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Refreshing instance network info cache due to event network-changed-73d8ee40-94a4-4fc4-9646-1e6cb8410022. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:07:31 compute-0 nova_compute[239965]: 2026-01-26 16:07:31.503 239969 DEBUG oslo_concurrency.lockutils [req-c1bf8be6-5502-4937-8a02-fb6cc1cce9bb req-71d2935e-821f-47f0-8b5a-4eee5b12eec6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:31 compute-0 nova_compute[239965]: 2026-01-26 16:07:31.605 239969 DEBUG nova.network.neutron [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:07:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 425 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 645 KiB/s rd, 2.0 MiB/s wr, 76 op/s
Jan 26 16:07:32 compute-0 ceph-mon[75140]: pgmap v1664: 305 pgs: 305 active+clean; 425 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 645 KiB/s rd, 2.0 MiB/s wr, 76 op/s
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.391 239969 DEBUG nova.network.neutron [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Updating instance_info_cache with network_info: [{"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.412 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Releasing lock "refresh_cache-2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.412 239969 DEBUG nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Instance network_info: |[{"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.413 239969 DEBUG oslo_concurrency.lockutils [req-c1bf8be6-5502-4937-8a02-fb6cc1cce9bb req-71d2935e-821f-47f0-8b5a-4eee5b12eec6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.413 239969 DEBUG nova.network.neutron [req-c1bf8be6-5502-4937-8a02-fb6cc1cce9bb req-71d2935e-821f-47f0-8b5a-4eee5b12eec6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Refreshing network info cache for port 73d8ee40-94a4-4fc4-9646-1e6cb8410022 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.417 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Start _get_guest_xml network_info=[{"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.422 239969 WARNING nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.430 239969 DEBUG nova.virt.libvirt.host [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.431 239969 DEBUG nova.virt.libvirt.host [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.440 239969 DEBUG nova.virt.libvirt.host [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.441 239969 DEBUG nova.virt.libvirt.host [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.441 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.442 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.442 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.442 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.442 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.443 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.443 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.443 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.443 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.443 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.444 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.444 239969 DEBUG nova.virt.hardware [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:07:32 compute-0 nova_compute[239965]: 2026-01-26 16:07:32.447 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Jan 26 16:07:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Jan 26 16:07:32 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Jan 26 16:07:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4177296700' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.010 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.107 239969 DEBUG nova.storage.rbd_utils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.111 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.141 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.688 239969 DEBUG nova.network.neutron [req-c1bf8be6-5502-4937-8a02-fb6cc1cce9bb req-71d2935e-821f-47f0-8b5a-4eee5b12eec6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Updated VIF entry in instance network info cache for port 73d8ee40-94a4-4fc4-9646-1e6cb8410022. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.689 239969 DEBUG nova.network.neutron [req-c1bf8be6-5502-4937-8a02-fb6cc1cce9bb req-71d2935e-821f-47f0-8b5a-4eee5b12eec6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Updating instance_info_cache with network_info: [{"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.710 239969 DEBUG oslo_concurrency.lockutils [req-c1bf8be6-5502-4937-8a02-fb6cc1cce9bb req-71d2935e-821f-47f0-8b5a-4eee5b12eec6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145583416' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:33 compute-0 ceph-mon[75140]: osdmap e263: 3 total, 3 up, 3 in
Jan 26 16:07:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4177296700' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.750 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.751 239969 DEBUG nova.virt.libvirt.vif [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-174634227',display_name='tempest-ServersTestJSON-server-174634227',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-174634227',id=92,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-z08va5rh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:28Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.752 239969 DEBUG nova.network.os_vif_util [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.753 239969 DEBUG nova.network.os_vif_util [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:06:0d,bridge_name='br-int',has_traffic_filtering=True,id=73d8ee40-94a4-4fc4-9646-1e6cb8410022,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d8ee40-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.754 239969 DEBUG nova.objects.instance [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.772 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquiring lock "5669b8af-7f11-43be-a56d-983f8b8b859e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.773 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.778 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <uuid>2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e</uuid>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <name>instance-0000005c</name>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestJSON-server-174634227</nova:name>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:07:32</nova:creationTime>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <nova:user uuid="392bf4c554724bd3b097b990cec964ac">tempest-ServersTestJSON-190839520-project-member</nova:user>
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <nova:project uuid="e3d3c26abe454a90816833e484abbbd5">tempest-ServersTestJSON-190839520</nova:project>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <nova:port uuid="73d8ee40-94a4-4fc4-9646-1e6cb8410022">
Jan 26 16:07:33 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <system>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <entry name="serial">2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e</entry>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <entry name="uuid">2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e</entry>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     </system>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <os>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   </os>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <features>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   </features>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk">
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk.config">
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:33 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:b0:06:0d"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <target dev="tap73d8ee40-94"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e/console.log" append="off"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <video>
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     </video>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:07:33 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:07:33 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:07:33 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:07:33 compute-0 nova_compute[239965]: </domain>
Jan 26 16:07:33 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.779 239969 DEBUG nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Preparing to wait for external event network-vif-plugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.779 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.780 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.780 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.781 239969 DEBUG nova.virt.libvirt.vif [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-174634227',display_name='tempest-ServersTestJSON-server-174634227',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-174634227',id=92,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-z08va5rh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:28Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.781 239969 DEBUG nova.network.os_vif_util [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.782 239969 DEBUG nova.network.os_vif_util [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:06:0d,bridge_name='br-int',has_traffic_filtering=True,id=73d8ee40-94a4-4fc4-9646-1e6cb8410022,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d8ee40-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.782 239969 DEBUG os_vif [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:06:0d,bridge_name='br-int',has_traffic_filtering=True,id=73d8ee40-94a4-4fc4-9646-1e6cb8410022,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d8ee40-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.783 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.783 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.784 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.787 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.787 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73d8ee40-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.788 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73d8ee40-94, col_values=(('external_ids', {'iface-id': '73d8ee40-94a4-4fc4-9646-1e6cb8410022', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b0:06:0d', 'vm-uuid': '2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.795 239969 DEBUG nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:33 compute-0 NetworkManager[48954]: <info>  [1769443653.8187] manager: (tap73d8ee40-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.821 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.823 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.824 239969 INFO os_vif [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:06:0d,bridge_name='br-int',has_traffic_filtering=True,id=73d8ee40-94a4-4fc4-9646-1e6cb8410022,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d8ee40-94')
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.875 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.876 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.882 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.882 239969 INFO nova.compute.claims [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.915 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.915 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.915 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No VIF found with MAC fa:16:3e:b0:06:0d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.916 239969 INFO nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Using config drive
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.938 239969 DEBUG nova.storage.rbd_utils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.946 239969 DEBUG oslo_concurrency.lockutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.946 239969 DEBUG oslo_concurrency.lockutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.947 239969 INFO nova.compute.manager [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Shelving
Jan 26 16:07:33 compute-0 nova_compute[239965]: 2026-01-26 16:07:33.979 239969 DEBUG nova.virt.libvirt.driver [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.072 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 418 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 638 KiB/s rd, 3.4 MiB/s wr, 101 op/s
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.315 239969 INFO nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Creating config drive at /var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e/disk.config
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.321 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7byeq049 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.418 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443639.4175339, 8c136f06-4df5-4491-a67c-fb799c6cf053 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.419 239969 INFO nova.compute.manager [-] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] VM Stopped (Lifecycle Event)
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.444 239969 DEBUG nova.compute.manager [None req-6a5c8918-1f5d-429d-b3b1-b3374ab26986 - - - - - -] [instance: 8c136f06-4df5-4491-a67c-fb799c6cf053] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.466 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7byeq049" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.491 239969 DEBUG nova.storage.rbd_utils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.494 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e/disk.config 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.525 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3105446787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.665 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.670 239969 DEBUG nova.compute.provider_tree [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.688 239969 DEBUG nova.scheduler.client.report [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.718 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.719 239969 DEBUG nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.762 239969 DEBUG nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.762 239969 DEBUG nova.network.neutron [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.784 239969 DEBUG oslo_concurrency.processutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e/disk.config 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.290s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.785 239969 INFO nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Deleting local config drive /var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e/disk.config because it was imported into RBD.
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.788 239969 INFO nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.806 239969 DEBUG nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:07:34 compute-0 kernel: tap73d8ee40-94: entered promiscuous mode
Jan 26 16:07:34 compute-0 NetworkManager[48954]: <info>  [1769443654.8406] manager: (tap73d8ee40-94): new Tun device (/org/freedesktop/NetworkManager/Devices/384)
Jan 26 16:07:34 compute-0 ovn_controller[146046]: 2026-01-26T16:07:34Z|00900|binding|INFO|Claiming lport 73d8ee40-94a4-4fc4-9646-1e6cb8410022 for this chassis.
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.842 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:34 compute-0 ovn_controller[146046]: 2026-01-26T16:07:34Z|00901|binding|INFO|73d8ee40-94a4-4fc4-9646-1e6cb8410022: Claiming fa:16:3e:b0:06:0d 10.100.0.12
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.852 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:06:0d 10.100.0.12'], port_security=['fa:16:3e:b0:06:0d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=73d8ee40-94a4-4fc4-9646-1e6cb8410022) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.854 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 73d8ee40-94a4-4fc4-9646-1e6cb8410022 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 bound to our chassis
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.856 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:07:34 compute-0 ovn_controller[146046]: 2026-01-26T16:07:34Z|00902|binding|INFO|Setting lport 73d8ee40-94a4-4fc4-9646-1e6cb8410022 ovn-installed in OVS
Jan 26 16:07:34 compute-0 ovn_controller[146046]: 2026-01-26T16:07:34Z|00903|binding|INFO|Setting lport 73d8ee40-94a4-4fc4-9646-1e6cb8410022 up in Southbound
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.866 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:34 compute-0 systemd-udevd[318918]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.874 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a492c51-f881-4d88-be31-436e7cea32a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:34 compute-0 systemd-machined[208061]: New machine qemu-113-instance-0000005c.
Jan 26 16:07:34 compute-0 NetworkManager[48954]: <info>  [1769443654.8836] device (tap73d8ee40-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:07:34 compute-0 NetworkManager[48954]: <info>  [1769443654.8842] device (tap73d8ee40-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:07:34 compute-0 systemd[1]: Started Virtual Machine qemu-113-instance-0000005c.
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.893 239969 DEBUG nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.894 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.894 239969 INFO nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Creating image(s)
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.902 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0a649f-054d-4e45-8b8f-2bd2c9e76478]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.905 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[776eb73f-6ea1-459f-9631-f370b87c91d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.915 239969 DEBUG nova.storage.rbd_utils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] rbd image 5669b8af-7f11-43be-a56d-983f8b8b859e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.933 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[164d107b-eabe-4892-97fb-95d0eaca1a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.939 239969 DEBUG nova.storage.rbd_utils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] rbd image 5669b8af-7f11-43be-a56d-983f8b8b859e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.952 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5999c638-2766-4644-b044-2bdccc312702]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 17, 'rx_bytes': 616, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318965, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.963 239969 DEBUG nova.storage.rbd_utils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] rbd image 5669b8af-7f11-43be-a56d-983f8b8b859e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.966 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.966 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab03bb6-7c6e-412d-9a48-6ee641dd013f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318984, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318984, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.968 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.970 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.970 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.971 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:34.971 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:34 compute-0 nova_compute[239965]: 2026-01-26 16:07:34.998 239969 DEBUG nova.policy [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '62734cf5470643439afdf87f571cc432', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cc85b4e0fa164ecdb2e65bce2778c06c', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:07:35 compute-0 nova_compute[239965]: 2026-01-26 16:07:35.002 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:35 compute-0 nova_compute[239965]: 2026-01-26 16:07:35.038 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:35 compute-0 nova_compute[239965]: 2026-01-26 16:07:35.039 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:35 compute-0 nova_compute[239965]: 2026-01-26 16:07:35.040 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:35 compute-0 nova_compute[239965]: 2026-01-26 16:07:35.040 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3145583416' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:35 compute-0 ceph-mon[75140]: pgmap v1666: 305 pgs: 305 active+clean; 418 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 638 KiB/s rd, 3.4 MiB/s wr, 101 op/s
Jan 26 16:07:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3105446787' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:35 compute-0 nova_compute[239965]: 2026-01-26 16:07:35.098 239969 DEBUG nova.storage.rbd_utils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] rbd image 5669b8af-7f11-43be-a56d-983f8b8b859e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:35 compute-0 nova_compute[239965]: 2026-01-26 16:07:35.102 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5669b8af-7f11-43be-a56d-983f8b8b859e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:35 compute-0 nova_compute[239965]: 2026-01-26 16:07:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1667: 305 pgs: 305 active+clean; 423 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Jan 26 16:07:36 compute-0 ceph-mon[75140]: pgmap v1667: 305 pgs: 305 active+clean; 423 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 64 KiB/s rd, 2.2 MiB/s wr, 93 op/s
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.389 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5669b8af-7f11-43be-a56d-983f8b8b859e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.451 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443656.441474, 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.452 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] VM Started (Lifecycle Event)
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.459 239969 DEBUG nova.storage.rbd_utils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] resizing rbd image 5669b8af-7f11-43be-a56d-983f8b8b859e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.521 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.522 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443641.5060337, dfbd3acb-76a4-4675-8c8c-eed121af14af => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.522 239969 INFO nova.compute.manager [-] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] VM Stopped (Lifecycle Event)
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.528 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443656.4415941, 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.528 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] VM Paused (Lifecycle Event)
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.549 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:36 compute-0 kernel: tap4dce6184-01 (unregistering): left promiscuous mode
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.550 239969 DEBUG nova.compute.manager [None req-1c09e499-c4dd-4911-ba59-9891f99474f0 - - - - - -] [instance: dfbd3acb-76a4-4675-8c8c-eed121af14af] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:36 compute-0 NetworkManager[48954]: <info>  [1769443656.5544] device (tap4dce6184-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.554 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:36 compute-0 ovn_controller[146046]: 2026-01-26T16:07:36Z|00904|binding|INFO|Releasing lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 from this chassis (sb_readonly=0)
Jan 26 16:07:36 compute-0 ovn_controller[146046]: 2026-01-26T16:07:36Z|00905|binding|INFO|Setting lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 down in Southbound
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.564 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:36 compute-0 ovn_controller[146046]: 2026-01-26T16:07:36Z|00906|binding|INFO|Removing iface tap4dce6184-01 ovn-installed in OVS
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.565 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.571 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:ea:5b 10.100.0.6'], port_security=['fa:16:3e:72:ea:5b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '571cd10f-e22e-43b6-9523-e3539d047e4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8357fb8f-8173-42b8-8413-682bfcbe9368', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4dce6184-01c6-4ef8-a5d3-7fc88f784d89) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.572 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d unbound from our chassis
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.574 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.578 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.586 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.590 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b72b3ffa-ee44-4adc-8c5e-1eaf6f7c3903]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.616 239969 DEBUG nova.compute.manager [req-f273c4ce-35b3-4a3d-a6f6-8e39bab4a779 req-4ee3212f-c1b5-49a8-b9df-c9ace65ca4f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Received event network-vif-plugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.617 239969 DEBUG oslo_concurrency.lockutils [req-f273c4ce-35b3-4a3d-a6f6-8e39bab4a779 req-4ee3212f-c1b5-49a8-b9df-c9ace65ca4f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.617 239969 DEBUG oslo_concurrency.lockutils [req-f273c4ce-35b3-4a3d-a6f6-8e39bab4a779 req-4ee3212f-c1b5-49a8-b9df-c9ace65ca4f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.617 239969 DEBUG oslo_concurrency.lockutils [req-f273c4ce-35b3-4a3d-a6f6-8e39bab4a779 req-4ee3212f-c1b5-49a8-b9df-c9ace65ca4f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.617 239969 DEBUG nova.compute.manager [req-f273c4ce-35b3-4a3d-a6f6-8e39bab4a779 req-4ee3212f-c1b5-49a8-b9df-c9ace65ca4f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Processing event network-vif-plugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.619 239969 DEBUG nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:07:36 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000055.scope: Deactivated successfully.
Jan 26 16:07:36 compute-0 systemd[1]: machine-qemu\x2d101\x2dinstance\x2d00000055.scope: Consumed 19.692s CPU time.
Jan 26 16:07:36 compute-0 systemd-machined[208061]: Machine qemu-101-instance-00000055 terminated.
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.623 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b61bc232-d211-45a8-a1cc-94c231af2e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.629 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ad624038-3480-4013-aec3-ab86eec64923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.649 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443656.6225953, 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.650 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] VM Resumed (Lifecycle Event)
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.652 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.658 239969 DEBUG nova.objects.instance [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lazy-loading 'migration_context' on Instance uuid 5669b8af-7f11-43be-a56d-983f8b8b859e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.658 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6f0c87-1fb9-414b-a825-e30a0f217a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.660 239969 INFO nova.virt.libvirt.driver [-] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Instance spawned successfully.
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.660 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.674 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[136b3c07-47a5-4f04-bf50-3999a0301b62]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 700, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 34738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319151, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.680 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.681 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.681 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Ensure instance console log exists: /var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.682 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.682 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.682 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.686 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.690 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.690 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.690 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.690 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.691 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.691 239969 DEBUG nova.virt.libvirt.driver [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.691 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2483496a-eb13-49fb-a990-44170fc99703]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502171, 'tstamp': 502171}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319152, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502175, 'tstamp': 502175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319152, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.692 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.694 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.697 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.697 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67ce52d3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.697 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.697 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67ce52d3-40, col_values=(('external_ids', {'iface-id': '8d8bd19b-594d-4000-99fc-eb8020a59c23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:36.698 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.716 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.755 239969 INFO nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Took 8.19 seconds to spawn the instance on the hypervisor.
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.755 239969 DEBUG nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.782 239969 DEBUG nova.compute.manager [req-1934688b-ef11-4734-b0e4-9cb83c0bf234 req-12c2eca4-e332-470d-b41f-799d9da43ac5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-unplugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.783 239969 DEBUG oslo_concurrency.lockutils [req-1934688b-ef11-4734-b0e4-9cb83c0bf234 req-12c2eca4-e332-470d-b41f-799d9da43ac5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.783 239969 DEBUG oslo_concurrency.lockutils [req-1934688b-ef11-4734-b0e4-9cb83c0bf234 req-12c2eca4-e332-470d-b41f-799d9da43ac5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.783 239969 DEBUG oslo_concurrency.lockutils [req-1934688b-ef11-4734-b0e4-9cb83c0bf234 req-12c2eca4-e332-470d-b41f-799d9da43ac5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.783 239969 DEBUG nova.compute.manager [req-1934688b-ef11-4734-b0e4-9cb83c0bf234 req-12c2eca4-e332-470d-b41f-799d9da43ac5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] No waiting events found dispatching network-vif-unplugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.784 239969 WARNING nova.compute.manager [req-1934688b-ef11-4734-b0e4-9cb83c0bf234 req-12c2eca4-e332-470d-b41f-799d9da43ac5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received unexpected event network-vif-unplugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 for instance with vm_state active and task_state shelving.
Jan 26 16:07:36 compute-0 NetworkManager[48954]: <info>  [1769443656.8042] manager: (tap4dce6184-01): new Tun device (/org/freedesktop/NetworkManager/Devices/385)
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.814 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.821 239969 INFO nova.compute.manager [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Took 9.38 seconds to build instance.
Jan 26 16:07:36 compute-0 nova_compute[239965]: 2026-01-26 16:07:36.840 239969 DEBUG oslo_concurrency.lockutils [None req-91b4d40c-42a0-4108-a407-ded6e017dcff 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.017 239969 INFO nova.virt.libvirt.driver [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance shutdown successfully after 3 seconds.
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.023 239969 INFO nova.virt.libvirt.driver [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance destroyed successfully.
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.023 239969 DEBUG nova.objects.instance [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'numa_topology' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.287 239969 INFO nova.virt.libvirt.driver [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Beginning cold snapshot process
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.297 239969 DEBUG nova.network.neutron [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Successfully created port: 36f0ff9c-f877-4ec9-b21e-324f84b23986 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:07:37 compute-0 ovn_controller[146046]: 2026-01-26T16:07:37Z|00907|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:07:37 compute-0 ovn_controller[146046]: 2026-01-26T16:07:37Z|00908|binding|INFO|Releasing lport 8d8bd19b-594d-4000-99fc-eb8020a59c23 from this chassis (sb_readonly=0)
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.425 239969 DEBUG nova.virt.libvirt.imagebackend [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.466 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.643 239969 DEBUG nova.storage.rbd_utils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(121167d4474f4bbdbb7d2e89638b3bfb) on rbd image(571cd10f-e22e-43b6-9523-e3539d047e4a_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:07:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Jan 26 16:07:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Jan 26 16:07:37 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.794 239969 DEBUG nova.storage.rbd_utils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] cloning vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk@121167d4474f4bbdbb7d2e89638b3bfb to images/8c93b0ea-6f08-4c34-a2a1-57cec764f188 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:07:37 compute-0 nova_compute[239965]: 2026-01-26 16:07:37.878 239969 DEBUG nova.storage.rbd_utils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] flattening images/8c93b0ea-6f08-4c34-a2a1-57cec764f188 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:07:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1669: 305 pgs: 305 active+clean; 455 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 4.6 MiB/s wr, 127 op/s
Jan 26 16:07:38 compute-0 nova_compute[239965]: 2026-01-26 16:07:38.348 239969 DEBUG nova.storage.rbd_utils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] removing snapshot(121167d4474f4bbdbb7d2e89638b3bfb) on rbd image(571cd10f-e22e-43b6-9523-e3539d047e4a_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:07:38 compute-0 nova_compute[239965]: 2026-01-26 16:07:38.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Jan 26 16:07:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Jan 26 16:07:38 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Jan 26 16:07:38 compute-0 ceph-mon[75140]: osdmap e264: 3 total, 3 up, 3 in
Jan 26 16:07:38 compute-0 ceph-mon[75140]: pgmap v1669: 305 pgs: 305 active+clean; 455 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 4.6 MiB/s wr, 127 op/s
Jan 26 16:07:38 compute-0 nova_compute[239965]: 2026-01-26 16:07:38.784 239969 DEBUG nova.storage.rbd_utils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] creating snapshot(snap) on rbd image(8c93b0ea-6f08-4c34-a2a1-57cec764f188) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:07:38 compute-0 nova_compute[239965]: 2026-01-26 16:07:38.820 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.366 239969 DEBUG nova.compute.manager [req-81c06c99-034c-46a4-9fa1-a7eddd673670 req-5a051126-a412-4ea3-8925-5caabc1a8711 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Received event network-vif-plugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.367 239969 DEBUG oslo_concurrency.lockutils [req-81c06c99-034c-46a4-9fa1-a7eddd673670 req-5a051126-a412-4ea3-8925-5caabc1a8711 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.367 239969 DEBUG oslo_concurrency.lockutils [req-81c06c99-034c-46a4-9fa1-a7eddd673670 req-5a051126-a412-4ea3-8925-5caabc1a8711 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.367 239969 DEBUG oslo_concurrency.lockutils [req-81c06c99-034c-46a4-9fa1-a7eddd673670 req-5a051126-a412-4ea3-8925-5caabc1a8711 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.368 239969 DEBUG nova.compute.manager [req-81c06c99-034c-46a4-9fa1-a7eddd673670 req-5a051126-a412-4ea3-8925-5caabc1a8711 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] No waiting events found dispatching network-vif-plugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.368 239969 WARNING nova.compute.manager [req-81c06c99-034c-46a4-9fa1-a7eddd673670 req-5a051126-a412-4ea3-8925-5caabc1a8711 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Received unexpected event network-vif-plugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 for instance with vm_state active and task_state None.
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.376 239969 DEBUG nova.network.neutron [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Successfully updated port: 36f0ff9c-f877-4ec9-b21e-324f84b23986 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.390 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquiring lock "refresh_cache-5669b8af-7f11-43be-a56d-983f8b8b859e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.390 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquired lock "refresh_cache-5669b8af-7f11-43be-a56d-983f8b8b859e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.390 239969 DEBUG nova.network.neutron [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.441 239969 DEBUG nova.compute.manager [req-8a19153a-c15a-4643-a233-8d47b58a9860 req-681ac3cf-3ec3-4822-ba1a-92f183b10ce7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.441 239969 DEBUG oslo_concurrency.lockutils [req-8a19153a-c15a-4643-a233-8d47b58a9860 req-681ac3cf-3ec3-4822-ba1a-92f183b10ce7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.441 239969 DEBUG oslo_concurrency.lockutils [req-8a19153a-c15a-4643-a233-8d47b58a9860 req-681ac3cf-3ec3-4822-ba1a-92f183b10ce7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.442 239969 DEBUG oslo_concurrency.lockutils [req-8a19153a-c15a-4643-a233-8d47b58a9860 req-681ac3cf-3ec3-4822-ba1a-92f183b10ce7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.442 239969 DEBUG nova.compute.manager [req-8a19153a-c15a-4643-a233-8d47b58a9860 req-681ac3cf-3ec3-4822-ba1a-92f183b10ce7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] No waiting events found dispatching network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.442 239969 WARNING nova.compute.manager [req-8a19153a-c15a-4643-a233-8d47b58a9860 req-681ac3cf-3ec3-4822-ba1a-92f183b10ce7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received unexpected event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 for instance with vm_state active and task_state shelving_image_uploading.
Jan 26 16:07:39 compute-0 nova_compute[239965]: 2026-01-26 16:07:39.560 239969 DEBUG nova.network.neutron [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:07:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Jan 26 16:07:39 compute-0 ceph-mon[75140]: osdmap e265: 3 total, 3 up, 3 in
Jan 26 16:07:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Jan 26 16:07:39 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Jan 26 16:07:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1672: 305 pgs: 305 active+clean; 455 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.6 MiB/s wr, 60 op/s
Jan 26 16:07:40 compute-0 podman[319305]: 2026-01-26 16:07:40.390127277 +0000 UTC m=+0.065830682 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:07:40 compute-0 podman[319306]: 2026-01-26 16:07:40.423044832 +0000 UTC m=+0.099135716 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS)
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.484 239969 DEBUG nova.network.neutron [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Updating instance_info_cache with network_info: [{"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.505 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Releasing lock "refresh_cache-5669b8af-7f11-43be-a56d-983f8b8b859e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.505 239969 DEBUG nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Instance network_info: |[{"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.508 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Start _get_guest_xml network_info=[{"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.513 239969 WARNING nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.518 239969 DEBUG nova.virt.libvirt.host [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.519 239969 DEBUG nova.virt.libvirt.host [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.522 239969 DEBUG nova.virt.libvirt.host [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.523 239969 DEBUG nova.virt.libvirt.host [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.523 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.523 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.524 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.524 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.524 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.524 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.525 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.525 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.525 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.525 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.526 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.526 239969 DEBUG nova.virt.hardware [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:07:40 compute-0 nova_compute[239965]: 2026-01-26 16:07:40.529 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:40 compute-0 ceph-mon[75140]: osdmap e266: 3 total, 3 up, 3 in
Jan 26 16:07:40 compute-0 ceph-mon[75140]: pgmap v1672: 305 pgs: 305 active+clean; 455 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 2.6 MiB/s wr, 60 op/s
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.055 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1941767020' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.140 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.168 239969 DEBUG nova.storage.rbd_utils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] rbd image 5669b8af-7f11-43be-a56d-983f8b8b859e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.173 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.362 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "f3d29c29-2c6a-4e49-bd86-415f47139844" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.362 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.380 239969 DEBUG nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.385 239969 INFO nova.virt.libvirt.driver [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Snapshot image upload complete
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.386 239969 DEBUG nova.compute.manager [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.460 239969 INFO nova.compute.manager [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Shelve offloading
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.472 239969 DEBUG nova.compute.manager [req-5296a339-c074-46ee-958a-3aef33779925 req-638f02b6-b296-47e6-a7fa-e5d3a28a7d2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Received event network-changed-36f0ff9c-f877-4ec9-b21e-324f84b23986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.473 239969 DEBUG nova.compute.manager [req-5296a339-c074-46ee-958a-3aef33779925 req-638f02b6-b296-47e6-a7fa-e5d3a28a7d2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Refreshing instance network info cache due to event network-changed-36f0ff9c-f877-4ec9-b21e-324f84b23986. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.473 239969 DEBUG oslo_concurrency.lockutils [req-5296a339-c074-46ee-958a-3aef33779925 req-638f02b6-b296-47e6-a7fa-e5d3a28a7d2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5669b8af-7f11-43be-a56d-983f8b8b859e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.473 239969 DEBUG oslo_concurrency.lockutils [req-5296a339-c074-46ee-958a-3aef33779925 req-638f02b6-b296-47e6-a7fa-e5d3a28a7d2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5669b8af-7f11-43be-a56d-983f8b8b859e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.473 239969 DEBUG nova.network.neutron [req-5296a339-c074-46ee-958a-3aef33779925 req-638f02b6-b296-47e6-a7fa-e5d3a28a7d2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Refreshing network info cache for port 36f0ff9c-f877-4ec9-b21e-324f84b23986 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.477 239969 INFO nova.virt.libvirt.driver [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance destroyed successfully.
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.477 239969 DEBUG nova.compute.manager [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.481 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.482 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.484 239969 DEBUG oslo_concurrency.lockutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.485 239969 DEBUG oslo_concurrency.lockutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquired lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.485 239969 DEBUG nova.network.neutron [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.492 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.493 239969 INFO nova.compute.claims [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.542 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.583 239969 DEBUG nova.scheduler.client.report [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.601 239969 DEBUG nova.scheduler.client.report [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.601 239969 DEBUG nova.compute.provider_tree [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.617 239969 DEBUG nova.scheduler.client.report [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.643 239969 DEBUG nova.scheduler.client.report [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:07:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2028247966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.755 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.757 239969 DEBUG nova.virt.libvirt.vif [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-546794690',display_name='tempest-ServerPasswordTestJSON-server-546794690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-546794690',id=93,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc85b4e0fa164ecdb2e65bce2778c06c',ramdisk_id='',reservation_id='r-qii47trp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1026791849',owner_user_name='tempest-ServerPasswordTestJSO
N-1026791849-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:34Z,user_data=None,user_id='62734cf5470643439afdf87f571cc432',uuid=5669b8af-7f11-43be-a56d-983f8b8b859e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.757 239969 DEBUG nova.network.os_vif_util [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Converting VIF {"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.758 239969 DEBUG nova.network.os_vif_util [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:bd:f5,bridge_name='br-int',has_traffic_filtering=True,id=36f0ff9c-f877-4ec9-b21e-324f84b23986,network=Network(30d87fd0-73b9-42de-a5d9-83ff4cda4f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f0ff9c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.759 239969 DEBUG nova.objects.instance [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lazy-loading 'pci_devices' on Instance uuid 5669b8af-7f11-43be-a56d-983f8b8b859e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.779 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <uuid>5669b8af-7f11-43be-a56d-983f8b8b859e</uuid>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <name>instance-0000005d</name>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerPasswordTestJSON-server-546794690</nova:name>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:07:40</nova:creationTime>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <nova:user uuid="62734cf5470643439afdf87f571cc432">tempest-ServerPasswordTestJSON-1026791849-project-member</nova:user>
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <nova:project uuid="cc85b4e0fa164ecdb2e65bce2778c06c">tempest-ServerPasswordTestJSON-1026791849</nova:project>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <nova:port uuid="36f0ff9c-f877-4ec9-b21e-324f84b23986">
Jan 26 16:07:41 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <system>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <entry name="serial">5669b8af-7f11-43be-a56d-983f8b8b859e</entry>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <entry name="uuid">5669b8af-7f11-43be-a56d-983f8b8b859e</entry>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     </system>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <os>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   </os>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <features>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   </features>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5669b8af-7f11-43be-a56d-983f8b8b859e_disk">
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5669b8af-7f11-43be-a56d-983f8b8b859e_disk.config">
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:41 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:3f:bd:f5"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <target dev="tap36f0ff9c-f8"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e/console.log" append="off"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <video>
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     </video>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:07:41 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:07:41 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:07:41 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:07:41 compute-0 nova_compute[239965]: </domain>
Jan 26 16:07:41 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.781 239969 DEBUG nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Preparing to wait for external event network-vif-plugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.781 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquiring lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.782 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.782 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.783 239969 DEBUG nova.virt.libvirt.vif [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-546794690',display_name='tempest-ServerPasswordTestJSON-server-546794690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-546794690',id=93,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cc85b4e0fa164ecdb2e65bce2778c06c',ramdisk_id='',reservation_id='r-qii47trp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-1026791849',owner_user_name='tempest-ServerPasswordTestJSON-1026791849-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:34Z,user_data=None,user_id='62734cf5470643439afdf87f571cc432',uuid=5669b8af-7f11-43be-a56d-983f8b8b859e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.783 239969 DEBUG nova.network.os_vif_util [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Converting VIF {"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.784 239969 DEBUG nova.network.os_vif_util [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:bd:f5,bridge_name='br-int',has_traffic_filtering=True,id=36f0ff9c-f877-4ec9-b21e-324f84b23986,network=Network(30d87fd0-73b9-42de-a5d9-83ff4cda4f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f0ff9c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.784 239969 DEBUG os_vif [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:bd:f5,bridge_name='br-int',has_traffic_filtering=True,id=36f0ff9c-f877-4ec9-b21e-324f84b23986,network=Network(30d87fd0-73b9-42de-a5d9-83ff4cda4f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f0ff9c-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.785 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.820 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.821 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.828 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.828 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap36f0ff9c-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.829 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap36f0ff9c-f8, col_values=(('external_ids', {'iface-id': '36f0ff9c-f877-4ec9-b21e-324f84b23986', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:bd:f5', 'vm-uuid': '5669b8af-7f11-43be-a56d-983f8b8b859e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.830 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:41 compute-0 NetworkManager[48954]: <info>  [1769443661.8317] manager: (tap36f0ff9c-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/386)
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.839 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.840 239969 INFO os_vif [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:bd:f5,bridge_name='br-int',has_traffic_filtering=True,id=36f0ff9c-f877-4ec9-b21e-324f84b23986,network=Network(30d87fd0-73b9-42de-a5d9-83ff4cda4f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f0ff9c-f8')
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.897 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.898 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.898 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] No VIF found with MAC fa:16:3e:3f:bd:f5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.899 239969 INFO nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Using config drive
Jan 26 16:07:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1941767020' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2028247966' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:41 compute-0 nova_compute[239965]: 2026-01-26 16:07:41.969 239969 DEBUG nova.storage.rbd_utils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] rbd image 5669b8af-7f11-43be-a56d-983f8b8b859e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 520 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.8 MiB/s wr, 267 op/s
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.293 239969 INFO nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Creating config drive at /var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e/disk.config
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.299 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv3d2bl46 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2552577116' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.382 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.597s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.392 239969 DEBUG nova.compute.provider_tree [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.413 239969 DEBUG nova.scheduler.client.report [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.439 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.439 239969 DEBUG nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.444 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv3d2bl46" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.469 239969 DEBUG nova.storage.rbd_utils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] rbd image 5669b8af-7f11-43be-a56d-983f8b8b859e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.472 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e/disk.config 5669b8af-7f11-43be-a56d-983f8b8b859e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.508 239969 DEBUG nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.509 239969 DEBUG nova.network.neutron [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.529 239969 INFO nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.548 239969 DEBUG nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:07:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.655 239969 DEBUG nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.656 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.656 239969 INFO nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Creating image(s)
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.677 239969 DEBUG nova.storage.rbd_utils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image f3d29c29-2c6a-4e49-bd86-415f47139844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.701 239969 DEBUG nova.storage.rbd_utils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image f3d29c29-2c6a-4e49-bd86-415f47139844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.724 239969 DEBUG nova.storage.rbd_utils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image f3d29c29-2c6a-4e49-bd86-415f47139844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.727 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.769 239969 DEBUG nova.policy [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '392bf4c554724bd3b097b990cec964ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3d3c26abe454a90816833e484abbbd5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.806 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.806 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.807 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.807 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.829 239969 DEBUG nova.storage.rbd_utils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image f3d29c29-2c6a-4e49-bd86-415f47139844_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.832 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 f3d29c29-2c6a-4e49-bd86-415f47139844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.869 239969 DEBUG oslo_concurrency.processutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e/disk.config 5669b8af-7f11-43be-a56d-983f8b8b859e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.397s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.870 239969 INFO nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Deleting local config drive /var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e/disk.config because it was imported into RBD.
Jan 26 16:07:42 compute-0 ovn_controller[146046]: 2026-01-26T16:07:42Z|00909|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:07:42 compute-0 ovn_controller[146046]: 2026-01-26T16:07:42Z|00910|binding|INFO|Releasing lport 8d8bd19b-594d-4000-99fc-eb8020a59c23 from this chassis (sb_readonly=0)
Jan 26 16:07:42 compute-0 NetworkManager[48954]: <info>  [1769443662.9322] manager: (tap36f0ff9c-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/387)
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.945 239969 DEBUG nova.network.neutron [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:42 compute-0 kernel: tap36f0ff9c-f8: entered promiscuous mode
Jan 26 16:07:42 compute-0 ovn_controller[146046]: 2026-01-26T16:07:42Z|00911|binding|INFO|Claiming lport 36f0ff9c-f877-4ec9-b21e-324f84b23986 for this chassis.
Jan 26 16:07:42 compute-0 ovn_controller[146046]: 2026-01-26T16:07:42Z|00912|binding|INFO|36f0ff9c-f877-4ec9-b21e-324f84b23986: Claiming fa:16:3e:3f:bd:f5 10.100.0.13
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.966 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:42 compute-0 systemd-udevd[319594]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.970 239969 DEBUG oslo_concurrency.lockutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Releasing lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:42 compute-0 systemd-machined[208061]: New machine qemu-114-instance-0000005d.
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.973 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.973 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.974 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.974 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:42.975 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:bd:f5 10.100.0.13'], port_security=['fa:16:3e:3f:bd:f5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5669b8af-7f11-43be-a56d-983f8b8b859e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30d87fd0-73b9-42de-a5d9-83ff4cda4f31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc85b4e0fa164ecdb2e65bce2778c06c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1ec0095f-e6fe-4ff2-94ae-d985f817d927', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=950aabde-a90a-4eae-adf9-ed4cacba1e0b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=36f0ff9c-f877-4ec9-b21e-324f84b23986) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:42.976 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 36f0ff9c-f877-4ec9-b21e-324f84b23986 in datapath 30d87fd0-73b9-42de-a5d9-83ff4cda4f31 bound to our chassis
Jan 26 16:07:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:42.977 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 30d87fd0-73b9-42de-a5d9-83ff4cda4f31
Jan 26 16:07:42 compute-0 systemd[1]: Started Virtual Machine qemu-114-instance-0000005d.
Jan 26 16:07:42 compute-0 NetworkManager[48954]: <info>  [1769443662.9831] device (tap36f0ff9c-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:07:42 compute-0 NetworkManager[48954]: <info>  [1769443662.9844] device (tap36f0ff9c-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:07:42 compute-0 ovn_controller[146046]: 2026-01-26T16:07:42Z|00913|binding|INFO|Setting lport 36f0ff9c-f877-4ec9-b21e-324f84b23986 ovn-installed in OVS
Jan 26 16:07:42 compute-0 ovn_controller[146046]: 2026-01-26T16:07:42Z|00914|binding|INFO|Setting lport 36f0ff9c-f877-4ec9-b21e-324f84b23986 up in Southbound
Jan 26 16:07:42 compute-0 nova_compute[239965]: 2026-01-26 16:07:42.991 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:42.991 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[32cec8c6-65e9-4bbc-9520-fce2419d8c33]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:42.993 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap30d87fd0-71 in ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:07:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:42.995 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap30d87fd0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:07:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:42.995 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c769c946-1e92-4bf0-beea-b30c5d80d37c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:42.996 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d872477-ccaf-4efa-9082-4f5b612c7cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.008 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9139b1-c6ea-4150-b79b-3cd23b9ee6ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ceph-mon[75140]: pgmap v1673: 305 pgs: 305 active+clean; 520 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.8 MiB/s wr, 267 op/s
Jan 26 16:07:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2552577116' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.034 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba590859-83e1-464d-9426-67b070469b4a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.038 239969 DEBUG nova.network.neutron [req-5296a339-c074-46ee-958a-3aef33779925 req-638f02b6-b296-47e6-a7fa-e5d3a28a7d2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Updated VIF entry in instance network info cache for port 36f0ff9c-f877-4ec9-b21e-324f84b23986. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.039 239969 DEBUG nova.network.neutron [req-5296a339-c074-46ee-958a-3aef33779925 req-638f02b6-b296-47e6-a7fa-e5d3a28a7d2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Updating instance_info_cache with network_info: [{"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.054 239969 DEBUG oslo_concurrency.lockutils [req-5296a339-c074-46ee-958a-3aef33779925 req-638f02b6-b296-47e6-a7fa-e5d3a28a7d2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5669b8af-7f11-43be-a56d-983f8b8b859e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.071 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a903e9be-c513-4534-9888-307977c5efc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 NetworkManager[48954]: <info>  [1769443663.0780] manager: (tap30d87fd0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/388)
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.076 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9c1e08fc-e776-4403-b2f2-fe268c623b53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.125 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2b54f7a0-e4a1-4ac7-82b7-11a50907374c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.128 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ab7913b8-384c-4084-8fbe-f3a0b1b01497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 NetworkManager[48954]: <info>  [1769443663.1515] device (tap30d87fd0-70): carrier: link connected
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.156 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[227dbf20-5975-460c-b525-3b7b85d1a9d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.171 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[22ae9798-c14c-41ec-a530-c34871597298]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30d87fd0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:a4:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 274], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515395, 'reachable_time': 33086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319635, 'error': None, 'target': 'ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.186 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[790f9c0a-f2af-40dd-a9fc-c2e81cf9e599]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:a4fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 515395, 'tstamp': 515395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 319636, 'error': None, 'target': 'ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.203 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[14eb0d85-b5b1-471b-9ce6-7e1abd37ed05]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap30d87fd0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:a4:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 274], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515395, 'reachable_time': 33086, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 319637, 'error': None, 'target': 'ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.235 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[04e1413c-8c21-4668-ac9c-15a7d5dcdf6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.306 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb9eb10-d561-4283-ac2a-3b74a946e101]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.307 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30d87fd0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.307 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.307 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30d87fd0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:43 compute-0 NetworkManager[48954]: <info>  [1769443663.3099] manager: (tap30d87fd0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/389)
Jan 26 16:07:43 compute-0 kernel: tap30d87fd0-70: entered promiscuous mode
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.314 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap30d87fd0-70, col_values=(('external_ids', {'iface-id': 'cd8a2284-438b-4e2d-a111-a26cb6736148'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:43 compute-0 ovn_controller[146046]: 2026-01-26T16:07:43Z|00915|binding|INFO|Releasing lport cd8a2284-438b-4e2d-a111-a26cb6736148 from this chassis (sb_readonly=0)
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.334 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.334 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.335 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/30d87fd0-73b9-42de-a5d9-83ff4cda4f31.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/30d87fd0-73b9-42de-a5d9-83ff4cda4f31.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.336 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ec17365b-e69e-4369-89a3-89c835c89eda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.337 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-30d87fd0-73b9-42de-a5d9-83ff4cda4f31
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/30d87fd0-73b9-42de-a5d9-83ff4cda4f31.pid.haproxy
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 30d87fd0-73b9-42de-a5d9-83ff4cda4f31
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:07:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:43.337 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31', 'env', 'PROCESS_TAG=haproxy-30d87fd0-73b9-42de-a5d9-83ff4cda4f31', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/30d87fd0-73b9-42de-a5d9-83ff4cda4f31.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.475 239969 DEBUG nova.network.neutron [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Successfully created port: 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.626 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 f3d29c29-2c6a-4e49-bd86-415f47139844_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.794s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.711 239969 DEBUG nova.storage.rbd_utils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] resizing rbd image f3d29c29-2c6a-4e49-bd86-415f47139844_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:07:43 compute-0 podman[319686]: 2026-01-26 16:07:43.757848529 +0000 UTC m=+0.083416502 container create f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:07:43 compute-0 podman[319686]: 2026-01-26 16:07:43.7010555 +0000 UTC m=+0.026623493 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:07:43 compute-0 systemd[1]: Started libpod-conmon-f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9.scope.
Jan 26 16:07:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:07:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebc5d63ce0e21ab508611f83cdf2f48787cb21013afdcaaf52623acaa4bf3265/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:43 compute-0 podman[319686]: 2026-01-26 16:07:43.884467216 +0000 UTC m=+0.210035219 container init f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 16:07:43 compute-0 podman[319686]: 2026-01-26 16:07:43.893471126 +0000 UTC m=+0.219039099 container start f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.905 239969 DEBUG nova.objects.instance [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'migration_context' on Instance uuid f3d29c29-2c6a-4e49-bd86-415f47139844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:43 compute-0 neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31[319739]: [NOTICE]   (319761) : New worker (319763) forked
Jan 26 16:07:43 compute-0 neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31[319739]: [NOTICE]   (319761) : Loading success.
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.923 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.924 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Ensure instance console log exists: /var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.924 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.925 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:43 compute-0 nova_compute[239965]: 2026-01-26 16:07:43.925 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1674: 305 pgs: 305 active+clean; 544 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 8.3 MiB/s wr, 295 op/s
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.139 239969 DEBUG nova.compute.manager [req-d8e2dc37-3824-4588-86b1-982a31332351 req-c4d90b33-a127-4f50-a746-273a36da9589 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Received event network-vif-plugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.140 239969 DEBUG oslo_concurrency.lockutils [req-d8e2dc37-3824-4588-86b1-982a31332351 req-c4d90b33-a127-4f50-a746-273a36da9589 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.140 239969 DEBUG oslo_concurrency.lockutils [req-d8e2dc37-3824-4588-86b1-982a31332351 req-c4d90b33-a127-4f50-a746-273a36da9589 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.141 239969 DEBUG oslo_concurrency.lockutils [req-d8e2dc37-3824-4588-86b1-982a31332351 req-c4d90b33-a127-4f50-a746-273a36da9589 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.141 239969 DEBUG nova.compute.manager [req-d8e2dc37-3824-4588-86b1-982a31332351 req-c4d90b33-a127-4f50-a746-273a36da9589 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Processing event network-vif-plugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.193 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443664.1930048, 5669b8af-7f11-43be-a56d-983f8b8b859e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.193 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] VM Started (Lifecycle Event)
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.195 239969 DEBUG nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.205 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.209 239969 INFO nova.virt.libvirt.driver [-] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Instance spawned successfully.
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.209 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.211 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.214 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.230 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.230 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.230 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.231 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.231 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.231 239969 DEBUG nova.virt.libvirt.driver [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.240 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.241 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443664.1953015, 5669b8af-7f11-43be-a56d-983f8b8b859e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.241 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] VM Paused (Lifecycle Event)
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.268 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.271 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443664.2047498, 5669b8af-7f11-43be-a56d-983f8b8b859e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.271 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] VM Resumed (Lifecycle Event)
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.292 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.295 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.299 239969 INFO nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Took 9.41 seconds to spawn the instance on the hypervisor.
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.299 239969 DEBUG nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.327 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:44 compute-0 ceph-mon[75140]: pgmap v1674: 305 pgs: 305 active+clean; 544 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 8.3 MiB/s wr, 295 op/s
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.360 239969 INFO nova.compute.manager [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Took 10.51 seconds to build instance.
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.374 239969 DEBUG oslo_concurrency.lockutils [None req-4b51a782-e7c9-4f9e-bfc5-6bef45c732fc 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.746 239969 DEBUG nova.network.neutron [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Successfully updated port: 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.762 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "refresh_cache-f3d29c29-2c6a-4e49-bd86-415f47139844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.762 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquired lock "refresh_cache-f3d29c29-2c6a-4e49-bd86-415f47139844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.762 239969 DEBUG nova.network.neutron [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.833 239969 DEBUG nova.compute.manager [req-4c779574-8759-45a9-a063-b1ed43bfb5c5 req-c4a73124-93df-4a88-9253-b3b502fc26a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Received event network-changed-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.833 239969 DEBUG nova.compute.manager [req-4c779574-8759-45a9-a063-b1ed43bfb5c5 req-c4a73124-93df-4a88-9253-b3b502fc26a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Refreshing instance network info cache due to event network-changed-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.834 239969 DEBUG oslo_concurrency.lockutils [req-4c779574-8759-45a9-a063-b1ed43bfb5c5 req-c4a73124-93df-4a88-9253-b3b502fc26a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-f3d29c29-2c6a-4e49-bd86-415f47139844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.908 239969 INFO nova.virt.libvirt.driver [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance destroyed successfully.
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.908 239969 DEBUG nova.objects.instance [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'resources' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.923 239969 DEBUG nova.virt.libvirt.vif [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:05:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2114557834',display_name='tempest-ServerActionsTestOtherB-server-2114557834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2114557834',id=85,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1KacXv8geoEy+28E7FY8ymb1wNpeorh0ig3qqxM26aclzAUqPX8+7CUPQ9iESN1O5fiTgrITEiydlNm6ZYPZvODzMzw3Vex9NYYyqIj3iZQ6pC0nbgSJxWQCDOHEzqIA==',key_name='tempest-keypair-1024411567',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:05:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-1wqrxcju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member',shelved_at='2026-01-26T16:07:41.386071',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='8c93b0ea-6f08-4c34-a2a1-57cec764f188'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:07:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ad9c6196af60436caf20747e96ad8388',uuid=571cd10f-e22e-43b6-9523-e3539d047e4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.924 239969 DEBUG nova.network.os_vif_util [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.925 239969 DEBUG nova.network.os_vif_util [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.925 239969 DEBUG os_vif [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.927 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dce6184-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.935 239969 DEBUG nova.network.neutron [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:44 compute-0 nova_compute[239965]: 2026-01-26 16:07:44.941 239969 INFO os_vif [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01')
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.239 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.257 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.257 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.258 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.258 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.258 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.258 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.277 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.277 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.277 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.277 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.277 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.384 239969 INFO nova.virt.libvirt.driver [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Deleting instance files /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a_del
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.386 239969 INFO nova.virt.libvirt.driver [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Deletion of /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a_del complete
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.469 239969 INFO nova.scheduler.client.report [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Deleted allocations for instance 571cd10f-e22e-43b6-9523-e3539d047e4a
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.522 239969 DEBUG oslo_concurrency.lockutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.523 239969 DEBUG oslo_concurrency.lockutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.629 239969 DEBUG oslo_concurrency.processutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2302509538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:45 compute-0 nova_compute[239965]: 2026-01-26 16:07:45.926 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2302509538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.029 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.029 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.033 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.033 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.042 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.042 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.046 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.046 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.058 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 529 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 7.1 MiB/s wr, 310 op/s
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.190 239969 DEBUG nova.network.neutron [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Updating instance_info_cache with network_info: [{"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.210 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Releasing lock "refresh_cache-f3d29c29-2c6a-4e49-bd86-415f47139844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.211 239969 DEBUG nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Instance network_info: |[{"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.212 239969 DEBUG oslo_concurrency.lockutils [req-4c779574-8759-45a9-a063-b1ed43bfb5c5 req-c4a73124-93df-4a88-9253-b3b502fc26a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-f3d29c29-2c6a-4e49-bd86-415f47139844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.212 239969 DEBUG nova.network.neutron [req-4c779574-8759-45a9-a063-b1ed43bfb5c5 req-c4a73124-93df-4a88-9253-b3b502fc26a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Refreshing network info cache for port 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.216 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Start _get_guest_xml network_info=[{"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.220 239969 WARNING nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.227 239969 DEBUG nova.virt.libvirt.host [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.227 239969 DEBUG nova.virt.libvirt.host [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.238 239969 DEBUG nova.compute.manager [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Received event network-vif-plugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.238 239969 DEBUG oslo_concurrency.lockutils [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.238 239969 DEBUG oslo_concurrency.lockutils [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.239 239969 DEBUG oslo_concurrency.lockutils [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.239 239969 DEBUG nova.compute.manager [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] No waiting events found dispatching network-vif-plugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.239 239969 WARNING nova.compute.manager [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Received unexpected event network-vif-plugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 for instance with vm_state active and task_state None.
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.239 239969 DEBUG nova.compute.manager [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-changed-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.239 239969 DEBUG nova.compute.manager [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Refreshing instance network info cache due to event network-changed-4dce6184-01c6-4ef8-a5d3-7fc88f784d89. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.240 239969 DEBUG oslo_concurrency.lockutils [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.240 239969 DEBUG oslo_concurrency.lockutils [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.240 239969 DEBUG nova.network.neutron [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Refreshing network info cache for port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.242 239969 DEBUG nova.virt.libvirt.host [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.243 239969 DEBUG nova.virt.libvirt.host [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.243 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.244 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.244 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.244 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.244 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.245 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.245 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.245 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.245 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.245 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.246 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.246 239969 DEBUG nova.virt.hardware [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.249 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3840250166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.363 239969 DEBUG oslo_concurrency.processutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.734s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.371 239969 DEBUG nova.compute.provider_tree [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.403 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.404 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3215MB free_disk=59.809279021807015GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.404 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.442 239969 DEBUG nova.scheduler.client.report [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.464 239969 DEBUG oslo_concurrency.lockutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.466 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.556 239969 DEBUG oslo_concurrency.lockutils [None req-d456a9cd-3e8c-4876-8108-a8edc621e20b ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 12.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.587 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 1c5a4474-3020-4bb1-96e5-41b02d0eb962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.588 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 8dcfffb0-b36f-463e-b49a-9954483b8f5b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.588 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.588 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 5669b8af-7f11-43be-a56d-983f8b8b859e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.588 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance f3d29c29-2c6a-4e49-bd86-415f47139844 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.588 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.589 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.705 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.891 239969 DEBUG oslo_concurrency.lockutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquiring lock "5669b8af-7f11-43be-a56d-983f8b8b859e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.892 239969 DEBUG oslo_concurrency.lockutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.892 239969 DEBUG oslo_concurrency.lockutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquiring lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.892 239969 DEBUG oslo_concurrency.lockutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.893 239969 DEBUG oslo_concurrency.lockutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.894 239969 INFO nova.compute.manager [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Terminating instance
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.896 239969 DEBUG nova.compute.manager [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:07:46 compute-0 kernel: tap36f0ff9c-f8 (unregistering): left promiscuous mode
Jan 26 16:07:46 compute-0 NetworkManager[48954]: <info>  [1769443666.9430] device (tap36f0ff9c-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:07:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4220987359' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.954 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:46 compute-0 ovn_controller[146046]: 2026-01-26T16:07:46Z|00916|binding|INFO|Releasing lport 36f0ff9c-f877-4ec9-b21e-324f84b23986 from this chassis (sb_readonly=0)
Jan 26 16:07:46 compute-0 ovn_controller[146046]: 2026-01-26T16:07:46Z|00917|binding|INFO|Setting lport 36f0ff9c-f877-4ec9-b21e-324f84b23986 down in Southbound
Jan 26 16:07:46 compute-0 ovn_controller[146046]: 2026-01-26T16:07:46Z|00918|binding|INFO|Removing iface tap36f0ff9c-f8 ovn-installed in OVS
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.957 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:46.964 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:bd:f5 10.100.0.13'], port_security=['fa:16:3e:3f:bd:f5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5669b8af-7f11-43be-a56d-983f8b8b859e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-30d87fd0-73b9-42de-a5d9-83ff4cda4f31', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cc85b4e0fa164ecdb2e65bce2778c06c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1ec0095f-e6fe-4ff2-94ae-d985f817d927', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=950aabde-a90a-4eae-adf9-ed4cacba1e0b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=36f0ff9c-f877-4ec9-b21e-324f84b23986) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:46.968 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 36f0ff9c-f877-4ec9-b21e-324f84b23986 in datapath 30d87fd0-73b9-42de-a5d9-83ff4cda4f31 unbound from our chassis
Jan 26 16:07:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:46.970 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 30d87fd0-73b9-42de-a5d9-83ff4cda4f31, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:07:46 compute-0 ceph-mon[75140]: pgmap v1675: 305 pgs: 305 active+clean; 529 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 9.8 MiB/s rd, 7.1 MiB/s wr, 310 op/s
Jan 26 16:07:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3840250166' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4220987359' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.975 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:46.976 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[219c9d72-1b34-4b3b-8538-169294bea324]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:46.977 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31 namespace which is not needed anymore
Jan 26 16:07:46 compute-0 nova_compute[239965]: 2026-01-26 16:07:46.996 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:47 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Jan 26 16:07:47 compute-0 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005d.scope: Consumed 3.881s CPU time.
Jan 26 16:07:47 compute-0 systemd-machined[208061]: Machine qemu-114-instance-0000005d terminated.
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.033 239969 DEBUG nova.storage.rbd_utils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image f3d29c29-2c6a-4e49-bd86-415f47139844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.037 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:47 compute-0 neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31[319739]: [NOTICE]   (319761) : haproxy version is 2.8.14-c23fe91
Jan 26 16:07:47 compute-0 neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31[319739]: [NOTICE]   (319761) : path to executable is /usr/sbin/haproxy
Jan 26 16:07:47 compute-0 neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31[319739]: [WARNING]  (319761) : Exiting Master process...
Jan 26 16:07:47 compute-0 neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31[319739]: [WARNING]  (319761) : Exiting Master process...
Jan 26 16:07:47 compute-0 neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31[319739]: [ALERT]    (319761) : Current worker (319763) exited with code 143 (Terminated)
Jan 26 16:07:47 compute-0 neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31[319739]: [WARNING]  (319761) : All workers exited. Exiting... (0)
Jan 26 16:07:47 compute-0 systemd[1]: libpod-f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9.scope: Deactivated successfully.
Jan 26 16:07:47 compute-0 podman[319964]: 2026-01-26 16:07:47.127867937 +0000 UTC m=+0.056158585 container died f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.137 239969 INFO nova.virt.libvirt.driver [-] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Instance destroyed successfully.
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.138 239969 DEBUG nova.objects.instance [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lazy-loading 'resources' on Instance uuid 5669b8af-7f11-43be-a56d-983f8b8b859e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.158 239969 DEBUG nova.virt.libvirt.vif [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-546794690',display_name='tempest-ServerPasswordTestJSON-server-546794690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-546794690',id=93,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:07:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cc85b4e0fa164ecdb2e65bce2778c06c',ramdisk_id='',reservation_id='r-qii47trp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-1026791849',owner_user_name='tempest-ServerPasswordTestJSON-1026791849-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:07:46Z,user_data=None,user_id='62734cf5470643439afdf87f571cc432',uuid=5669b8af-7f11-43be-a56d-983f8b8b859e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.159 239969 DEBUG nova.network.os_vif_util [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Converting VIF {"id": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "address": "fa:16:3e:3f:bd:f5", "network": {"id": "30d87fd0-73b9-42de-a5d9-83ff4cda4f31", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-1994172965-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cc85b4e0fa164ecdb2e65bce2778c06c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap36f0ff9c-f8", "ovs_interfaceid": "36f0ff9c-f877-4ec9-b21e-324f84b23986", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9-userdata-shm.mount: Deactivated successfully.
Jan 26 16:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebc5d63ce0e21ab508611f83cdf2f48787cb21013afdcaaf52623acaa4bf3265-merged.mount: Deactivated successfully.
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.162 239969 DEBUG nova.network.os_vif_util [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:bd:f5,bridge_name='br-int',has_traffic_filtering=True,id=36f0ff9c-f877-4ec9-b21e-324f84b23986,network=Network(30d87fd0-73b9-42de-a5d9-83ff4cda4f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f0ff9c-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.170 239969 DEBUG os_vif [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:bd:f5,bridge_name='br-int',has_traffic_filtering=True,id=36f0ff9c-f877-4ec9-b21e-324f84b23986,network=Network(30d87fd0-73b9-42de-a5d9-83ff4cda4f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f0ff9c-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.172 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.173 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap36f0ff9c-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.175 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.179 239969 INFO os_vif [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:bd:f5,bridge_name='br-int',has_traffic_filtering=True,id=36f0ff9c-f877-4ec9-b21e-324f84b23986,network=Network(30d87fd0-73b9-42de-a5d9-83ff4cda4f31),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap36f0ff9c-f8')
Jan 26 16:07:47 compute-0 podman[319964]: 2026-01-26 16:07:47.180636278 +0000 UTC m=+0.108926926 container cleanup f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 26 16:07:47 compute-0 systemd[1]: libpod-conmon-f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9.scope: Deactivated successfully.
Jan 26 16:07:47 compute-0 podman[320022]: 2026-01-26 16:07:47.267245477 +0000 UTC m=+0.063765842 container remove f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:07:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:47.277 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[321b39eb-dfe0-4dbd-ab2a-5a6e7eb73fd4]: (4, ('Mon Jan 26 04:07:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31 (f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9)\nf583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9\nMon Jan 26 04:07:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31 (f583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9)\nf583118b51689087b1849dc5846bd4fabb5ce396dd6d88d6a3269c63996e98b9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:47.279 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3f876032-c410-4cbd-b564-6b21e673109c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:47.280 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30d87fd0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:47 compute-0 kernel: tap30d87fd0-70: left promiscuous mode
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.282 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:47.309 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1812a9cd-da44-4d5b-9466-bd72b959cc6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:47.324 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f136f0-b052-4e0c-8911-c23c707291f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:47.327 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5f59ec45-15ae-4379-b031-0448d4e1b4be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958474969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:47.347 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[142ef1e1-4fa2-4df8-90bc-909b939aaca8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 515387, 'reachable_time': 26695, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320054, 'error': None, 'target': 'ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:47.351 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-30d87fd0-73b9-42de-a5d9-83ff4cda4f31 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:07:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:47.351 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[2eccb16d-504f-406b-b81f-6b2caac5fc4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d30d87fd0\x2d73b9\x2d42de\x2da5d9\x2d83ff4cda4f31.mount: Deactivated successfully.
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.361 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.367 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.390 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.420 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.420 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.954s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.531 239969 INFO nova.virt.libvirt.driver [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Deleting instance files /var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e_del
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.532 239969 INFO nova.virt.libvirt.driver [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Deletion of /var/lib/nova/instances/5669b8af-7f11-43be-a56d-983f8b8b859e_del complete
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.578 239969 INFO nova.compute.manager [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.578 239969 DEBUG oslo.service.loopingcall [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.579 239969 DEBUG nova.compute.manager [-] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.579 239969 DEBUG nova.network.neutron [-] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:07:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Jan 26 16:07:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Jan 26 16:07:47 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Jan 26 16:07:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544823965' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.659 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.661 239969 DEBUG nova.virt.libvirt.vif [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-174634227',display_name='tempest-ServersTestJSON-server-174634227',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-174634227',id=94,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-vwsfpbv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=Ta
gList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:42Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=f3d29c29-2c6a-4e49-bd86-415f47139844,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.662 239969 DEBUG nova.network.os_vif_util [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.663 239969 DEBUG nova.network.os_vif_util [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:ad,bridge_name='br-int',has_traffic_filtering=True,id=3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c42bcf9-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.665 239969 DEBUG nova.objects.instance [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'pci_devices' on Instance uuid f3d29c29-2c6a-4e49-bd86-415f47139844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.681 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <uuid>f3d29c29-2c6a-4e49-bd86-415f47139844</uuid>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <name>instance-0000005e</name>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestJSON-server-174634227</nova:name>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:07:46</nova:creationTime>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <nova:user uuid="392bf4c554724bd3b097b990cec964ac">tempest-ServersTestJSON-190839520-project-member</nova:user>
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <nova:project uuid="e3d3c26abe454a90816833e484abbbd5">tempest-ServersTestJSON-190839520</nova:project>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <nova:port uuid="3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8">
Jan 26 16:07:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <system>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <entry name="serial">f3d29c29-2c6a-4e49-bd86-415f47139844</entry>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <entry name="uuid">f3d29c29-2c6a-4e49-bd86-415f47139844</entry>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     </system>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <os>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   </os>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <features>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   </features>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/f3d29c29-2c6a-4e49-bd86-415f47139844_disk">
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/f3d29c29-2c6a-4e49-bd86-415f47139844_disk.config">
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:99:f6:ad"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <target dev="tap3c42bcf9-b5"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844/console.log" append="off"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <video>
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     </video>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:07:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:07:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:07:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:07:47 compute-0 nova_compute[239965]: </domain>
Jan 26 16:07:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.682 239969 DEBUG nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Preparing to wait for external event network-vif-plugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.683 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.683 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.684 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.685 239969 DEBUG nova.virt.libvirt.vif [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:07:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-174634227',display_name='tempest-ServersTestJSON-server-174634227',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-174634227',id=94,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-vwsfpbv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:42Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=f3d29c29-2c6a-4e49-bd86-415f47139844,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.685 239969 DEBUG nova.network.os_vif_util [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.686 239969 DEBUG nova.network.os_vif_util [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:ad,bridge_name='br-int',has_traffic_filtering=True,id=3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c42bcf9-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.687 239969 DEBUG os_vif [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:ad,bridge_name='br-int',has_traffic_filtering=True,id=3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c42bcf9-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.688 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.688 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.689 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.696 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.697 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3c42bcf9-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.697 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3c42bcf9-b5, col_values=(('external_ids', {'iface-id': '3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:99:f6:ad', 'vm-uuid': 'f3d29c29-2c6a-4e49-bd86-415f47139844'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.700 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:47 compute-0 NetworkManager[48954]: <info>  [1769443667.7031] manager: (tap3c42bcf9-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.704 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.707 239969 INFO os_vif [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:ad,bridge_name='br-int',has_traffic_filtering=True,id=3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c42bcf9-b5')
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.774 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.774 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.774 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No VIF found with MAC fa:16:3e:99:f6:ad, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.775 239969 INFO nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Using config drive
Jan 26 16:07:47 compute-0 nova_compute[239965]: 2026-01-26 16:07:47.808 239969 DEBUG nova.storage.rbd_utils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image f3d29c29-2c6a-4e49-bd86-415f47139844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1958474969' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:47 compute-0 ceph-mon[75140]: osdmap e267: 3 total, 3 up, 3 in
Jan 26 16:07:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/544823965' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.110 239969 DEBUG nova.network.neutron [req-4c779574-8759-45a9-a063-b1ed43bfb5c5 req-c4a73124-93df-4a88-9253-b3b502fc26a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Updated VIF entry in instance network info cache for port 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.110 239969 DEBUG nova.network.neutron [req-4c779574-8759-45a9-a063-b1ed43bfb5c5 req-c4a73124-93df-4a88-9253-b3b502fc26a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Updating instance_info_cache with network_info: [{"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1677: 305 pgs: 305 active+clean; 511 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 9.0 MiB/s wr, 416 op/s
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.127 239969 DEBUG oslo_concurrency.lockutils [req-4c779574-8759-45a9-a063-b1ed43bfb5c5 req-c4a73124-93df-4a88-9253-b3b502fc26a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-f3d29c29-2c6a-4e49-bd86-415f47139844" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.289 239969 INFO nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Creating config drive at /var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844/disk.config
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.295 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzv4pirc5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.394 239969 DEBUG nova.compute.manager [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Received event network-vif-unplugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.395 239969 DEBUG oslo_concurrency.lockutils [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.395 239969 DEBUG oslo_concurrency.lockutils [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.395 239969 DEBUG oslo_concurrency.lockutils [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.395 239969 DEBUG nova.compute.manager [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] No waiting events found dispatching network-vif-unplugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.396 239969 DEBUG nova.compute.manager [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Received event network-vif-unplugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.396 239969 DEBUG nova.compute.manager [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Received event network-vif-plugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.396 239969 DEBUG oslo_concurrency.lockutils [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.396 239969 DEBUG oslo_concurrency.lockutils [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.396 239969 DEBUG oslo_concurrency.lockutils [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.396 239969 DEBUG nova.compute.manager [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] No waiting events found dispatching network-vif-plugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.397 239969 WARNING nova.compute.manager [req-292394b7-3f5b-4615-a65b-1787cae67dc4 req-cd64ae99-6d24-4582-8336-16beb4d59fd0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Received unexpected event network-vif-plugged-36f0ff9c-f877-4ec9-b21e-324f84b23986 for instance with vm_state active and task_state deleting.
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.438 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzv4pirc5" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.461 239969 DEBUG nova.storage.rbd_utils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image f3d29c29-2c6a-4e49-bd86-415f47139844_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.464 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844/disk.config f3d29c29-2c6a-4e49-bd86-415f47139844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.524 239969 DEBUG nova.network.neutron [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updated VIF entry in instance network info cache for port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.525 239969 DEBUG nova.network.neutron [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": null, "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap4dce6184-01", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.529 239969 DEBUG nova.network.neutron [-] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.555 239969 DEBUG oslo_concurrency.lockutils [req-f0718f93-df82-4e6c-a813-29932fdadc05 req-a732a12f-0313-45b3-989c-273dc463f706 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.557 239969 INFO nova.compute.manager [-] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Took 0.98 seconds to deallocate network for instance.
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.603 239969 DEBUG oslo_concurrency.lockutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.603 239969 DEBUG oslo_concurrency.lockutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.670 239969 DEBUG oslo_concurrency.processutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844/disk.config f3d29c29-2c6a-4e49-bd86-415f47139844_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.670 239969 INFO nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Deleting local config drive /var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844/disk.config because it was imported into RBD.
Jan 26 16:07:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:07:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/74387035' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:07:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:07:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/74387035' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:07:48 compute-0 kernel: tap3c42bcf9-b5: entered promiscuous mode
Jan 26 16:07:48 compute-0 systemd-udevd[319926]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:07:48 compute-0 NetworkManager[48954]: <info>  [1769443668.7236] manager: (tap3c42bcf9-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.724 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:48 compute-0 ovn_controller[146046]: 2026-01-26T16:07:48Z|00919|binding|INFO|Claiming lport 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 for this chassis.
Jan 26 16:07:48 compute-0 ovn_controller[146046]: 2026-01-26T16:07:48Z|00920|binding|INFO|3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8: Claiming fa:16:3e:99:f6:ad 10.100.0.7
Jan 26 16:07:48 compute-0 NetworkManager[48954]: <info>  [1769443668.7340] device (tap3c42bcf9-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.733 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:f6:ad 10.100.0.7'], port_security=['fa:16:3e:99:f6:ad 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f3d29c29-2c6a-4e49-bd86-415f47139844', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:48 compute-0 NetworkManager[48954]: <info>  [1769443668.7347] device (tap3c42bcf9-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.735 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 bound to our chassis
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.736 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.745 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:48 compute-0 ovn_controller[146046]: 2026-01-26T16:07:48Z|00921|binding|INFO|Setting lport 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 ovn-installed in OVS
Jan 26 16:07:48 compute-0 ovn_controller[146046]: 2026-01-26T16:07:48Z|00922|binding|INFO|Setting lport 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 up in Southbound
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.748 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.750 239969 DEBUG oslo_concurrency.processutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:48 compute-0 systemd-machined[208061]: New machine qemu-115-instance-0000005e.
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.758 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[18add460-e174-4e43-b8c8-3316895709bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:48 compute-0 systemd[1]: Started Virtual Machine qemu-115-instance-0000005e.
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.797 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c15014a5-defa-456c-be6f-6b070e5193bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.801 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d8302735-8261-4c56-90c9-34ef5d2714ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.836 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b21b578e-1336-4fe6-8e91-4a9328ec96f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.867 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d68ddb9-7410-4cb7-9030-038946edd9ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 19, 'rx_bytes': 616, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 19, 'rx_bytes': 616, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320147, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0025713807663865066 of space, bias 1.0, pg target 0.771414229915952 quantized to 32 (current 32)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0021202459910996934 of space, bias 1.0, pg target 0.636073797329908 quantized to 32 (current 32)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.952409244852078e-07 of space, bias 4.0, pg target 0.0009542891093822494 quantized to 16 (current 16)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:07:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.889 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5248ec41-49dc-4cbf-b380-8228efd567e5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320148, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320148, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.891 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:48 compute-0 nova_compute[239965]: 2026-01-26 16:07:48.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.894 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.895 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.895 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:48.895 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:48 compute-0 ceph-mon[75140]: pgmap v1677: 305 pgs: 305 active+clean; 511 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 9.0 MiB/s wr, 416 op/s
Jan 26 16:07:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/74387035' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:07:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/74387035' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.255 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443669.2543683, f3d29c29-2c6a-4e49-bd86-415f47139844 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.255 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] VM Started (Lifecycle Event)
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.275 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.278 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443669.2545006, f3d29c29-2c6a-4e49-bd86-415f47139844 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.279 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] VM Paused (Lifecycle Event)
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.304 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.310 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.334 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3824712519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.359 239969 DEBUG oslo_concurrency.processutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.365 239969 DEBUG nova.compute.provider_tree [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.380 239969 DEBUG nova.scheduler.client.report [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.398 239969 DEBUG oslo_concurrency.lockutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.420 239969 INFO nova.scheduler.client.report [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Deleted allocations for instance 5669b8af-7f11-43be-a56d-983f8b8b859e
Jan 26 16:07:49 compute-0 nova_compute[239965]: 2026-01-26 16:07:49.483 239969 DEBUG oslo_concurrency.lockutils [None req-8fa56f7c-7358-49ea-8acc-0d8d07006aa3 62734cf5470643439afdf87f571cc432 cc85b4e0fa164ecdb2e65bce2778c06c - - default default] Lock "5669b8af-7f11-43be-a56d-983f8b8b859e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:49 compute-0 ovn_controller[146046]: 2026-01-26T16:07:49Z|00096|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b0:06:0d 10.100.0.12
Jan 26 16:07:49 compute-0 ovn_controller[146046]: 2026-01-26T16:07:49Z|00097|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b0:06:0d 10.100.0.12
Jan 26 16:07:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3824712519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 511 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 7.4 MiB/s wr, 345 op/s
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.469 239969 DEBUG nova.compute.manager [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Received event network-vif-deleted-36f0ff9c-f877-4ec9-b21e-324f84b23986 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.469 239969 DEBUG nova.compute.manager [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Received event network-vif-plugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.469 239969 DEBUG oslo_concurrency.lockutils [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.470 239969 DEBUG oslo_concurrency.lockutils [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.470 239969 DEBUG oslo_concurrency.lockutils [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.470 239969 DEBUG nova.compute.manager [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Processing event network-vif-plugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.470 239969 DEBUG nova.compute.manager [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Received event network-vif-plugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.471 239969 DEBUG oslo_concurrency.lockutils [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.471 239969 DEBUG oslo_concurrency.lockutils [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.471 239969 DEBUG oslo_concurrency.lockutils [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.471 239969 DEBUG nova.compute.manager [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] No waiting events found dispatching network-vif-plugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.471 239969 WARNING nova.compute.manager [req-7f7afd19-957d-4826-bdb8-e914580af66f req-6cfca7ae-bf86-45c7-8721-33fb886392cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Received unexpected event network-vif-plugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 for instance with vm_state building and task_state spawning.
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.472 239969 DEBUG nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.477 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.478 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443670.4783046, f3d29c29-2c6a-4e49-bd86-415f47139844 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.478 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] VM Resumed (Lifecycle Event)
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.484 239969 INFO nova.virt.libvirt.driver [-] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Instance spawned successfully.
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.485 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.502 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.509 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.513 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.513 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.514 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.514 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.514 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.515 239969 DEBUG nova.virt.libvirt.driver [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.545 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.576 239969 INFO nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Took 7.92 seconds to spawn the instance on the hypervisor.
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.576 239969 DEBUG nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.655 239969 INFO nova.compute.manager [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Took 9.20 seconds to build instance.
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.673 239969 DEBUG oslo_concurrency.lockutils [None req-afd823db-8461-47f3-aceb-c7a1937916b8 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:50 compute-0 nova_compute[239965]: 2026-01-26 16:07:50.989 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:50.990 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:50.992 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:07:51 compute-0 nova_compute[239965]: 2026-01-26 16:07:51.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:51 compute-0 ceph-mon[75140]: pgmap v1678: 305 pgs: 305 active+clean; 511 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 9.3 MiB/s rd, 7.4 MiB/s wr, 345 op/s
Jan 26 16:07:51 compute-0 nova_compute[239965]: 2026-01-26 16:07:51.830 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443656.8290763, 571cd10f-e22e-43b6-9523-e3539d047e4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:51 compute-0 nova_compute[239965]: 2026-01-26 16:07:51.831 239969 INFO nova.compute.manager [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] VM Stopped (Lifecycle Event)
Jan 26 16:07:51 compute-0 nova_compute[239965]: 2026-01-26 16:07:51.849 239969 DEBUG nova.compute.manager [None req-f487a499-d24b-489e-97e1-3411168e0b3a - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1679: 305 pgs: 305 active+clean; 500 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.5 MiB/s wr, 309 op/s
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.296 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.296 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.297 239969 INFO nova.compute.manager [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Unshelving
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.380 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.381 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.386 239969 DEBUG nova.objects.instance [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'pci_requests' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.400 239969 DEBUG nova.objects.instance [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'numa_topology' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.414 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.415 239969 INFO nova.compute.claims [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.561 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.700 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:52 compute-0 sudo[320213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:07:52 compute-0 sudo[320213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:07:52 compute-0 sudo[320213]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:52 compute-0 sudo[320257]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:07:52 compute-0 sudo[320257]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.812 239969 DEBUG oslo_concurrency.lockutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "f3d29c29-2c6a-4e49-bd86-415f47139844" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.813 239969 DEBUG oslo_concurrency.lockutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.814 239969 DEBUG oslo_concurrency.lockutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.814 239969 DEBUG oslo_concurrency.lockutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.814 239969 DEBUG oslo_concurrency.lockutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.816 239969 INFO nova.compute.manager [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Terminating instance
Jan 26 16:07:52 compute-0 nova_compute[239965]: 2026-01-26 16:07:52.817 239969 DEBUG nova.compute.manager [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:07:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:52.995 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:53 compute-0 ovn_controller[146046]: 2026-01-26T16:07:53Z|00923|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:07:53 compute-0 ovn_controller[146046]: 2026-01-26T16:07:53Z|00924|binding|INFO|Releasing lport 8d8bd19b-594d-4000-99fc-eb8020a59c23 from this chassis (sb_readonly=0)
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.215 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3008018602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:53 compute-0 ceph-mon[75140]: pgmap v1679: 305 pgs: 305 active+clean; 500 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.5 MiB/s wr, 309 op/s
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.270 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.709s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:53 compute-0 kernel: tap3c42bcf9-b5 (unregistering): left promiscuous mode
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.276 239969 DEBUG nova.compute.provider_tree [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:53 compute-0 NetworkManager[48954]: <info>  [1769443673.2790] device (tap3c42bcf9-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:07:53 compute-0 ovn_controller[146046]: 2026-01-26T16:07:53Z|00925|binding|INFO|Releasing lport 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 from this chassis (sb_readonly=0)
Jan 26 16:07:53 compute-0 ovn_controller[146046]: 2026-01-26T16:07:53Z|00926|binding|INFO|Setting lport 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 down in Southbound
Jan 26 16:07:53 compute-0 ovn_controller[146046]: 2026-01-26T16:07:53Z|00927|binding|INFO|Removing iface tap3c42bcf9-b5 ovn-installed in OVS
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.291 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.294 239969 DEBUG nova.scheduler.client.report [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.297 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:99:f6:ad 10.100.0.7'], port_security=['fa:16:3e:99:f6:ad 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'f3d29c29-2c6a-4e49-bd86-415f47139844', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.298 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 unbound from our chassis
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.300 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.316 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:53 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Jan 26 16:07:53 compute-0 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005e.scope: Consumed 2.868s CPU time.
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.326 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0036e0ea-26f6-4d0a-b34c-9d99e047afce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:53 compute-0 systemd-machined[208061]: Machine qemu-115-instance-0000005e terminated.
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.355 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4c623b1f-6677-4d7b-908c-6458027e2852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.358 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7e17fa01-efca-4142-a343-3192746c7266]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.395 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4aa77422-725d-4b0f-adfd-643d675b352a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:53 compute-0 sudo[320257]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.414 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8db868d3-123e-4362-8b97-829832f0a7eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 21, 'rx_bytes': 616, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 21, 'rx_bytes': 616, 'tx_bytes': 1026, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320328, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.429 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b87d8ba-6324-46b8-b2b3-9f808318b5ae]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320329, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320329, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.431 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.433 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.440 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.441 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.441 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.441 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:53.442 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.452 239969 INFO nova.virt.libvirt.driver [-] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Instance destroyed successfully.
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.452 239969 DEBUG nova.objects.instance [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'resources' on Instance uuid f3d29c29-2c6a-4e49-bd86-415f47139844 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:07:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.468 239969 DEBUG nova.virt.libvirt.vif [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:07:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-174634227',display_name='tempest-ServersTestJSON-server-174634227',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-174634227',id=94,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:07:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-vwsfpbv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:07:50Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=f3d29c29-2c6a-4e49-bd86-415f47139844,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:07:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.469 239969 DEBUG nova.network.os_vif_util [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "address": "fa:16:3e:99:f6:ad", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3c42bcf9-b5", "ovs_interfaceid": "3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.469 239969 DEBUG nova.network.os_vif_util [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:ad,bridge_name='br-int',has_traffic_filtering=True,id=3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c42bcf9-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.470 239969 DEBUG os_vif [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:ad,bridge_name='br-int',has_traffic_filtering=True,id=3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c42bcf9-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:07:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.471 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.472 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3c42bcf9-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.510 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.512 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.513 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.516 239969 INFO os_vif [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:99:f6:ad,bridge_name='br-int',has_traffic_filtering=True,id=3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3c42bcf9-b5')
Jan 26 16:07:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:07:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:07:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:07:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:07:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:07:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.537 239969 INFO nova.network.neutron [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 26 16:07:53 compute-0 sudo[320356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:07:53 compute-0 sudo[320356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:07:53 compute-0 sudo[320356]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.659 239969 DEBUG nova.compute.manager [req-e23cc637-a52d-4321-bf72-ebf102327b88 req-3a21d347-f820-4445-a5a4-a7257e58b3dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Received event network-vif-unplugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.659 239969 DEBUG oslo_concurrency.lockutils [req-e23cc637-a52d-4321-bf72-ebf102327b88 req-3a21d347-f820-4445-a5a4-a7257e58b3dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.659 239969 DEBUG oslo_concurrency.lockutils [req-e23cc637-a52d-4321-bf72-ebf102327b88 req-3a21d347-f820-4445-a5a4-a7257e58b3dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.660 239969 DEBUG oslo_concurrency.lockutils [req-e23cc637-a52d-4321-bf72-ebf102327b88 req-3a21d347-f820-4445-a5a4-a7257e58b3dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.660 239969 DEBUG nova.compute.manager [req-e23cc637-a52d-4321-bf72-ebf102327b88 req-3a21d347-f820-4445-a5a4-a7257e58b3dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] No waiting events found dispatching network-vif-unplugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.660 239969 DEBUG nova.compute.manager [req-e23cc637-a52d-4321-bf72-ebf102327b88 req-3a21d347-f820-4445-a5a4-a7257e58b3dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Received event network-vif-unplugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:07:53 compute-0 sudo[320384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:07:53 compute-0 sudo[320384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.672 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.766 239969 INFO nova.virt.libvirt.driver [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Deleting instance files /var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844_del
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.766 239969 INFO nova.virt.libvirt.driver [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Deletion of /var/lib/nova/instances/f3d29c29-2c6a-4e49-bd86-415f47139844_del complete
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.822 239969 INFO nova.compute.manager [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Took 1.00 seconds to destroy the instance on the hypervisor.
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.822 239969 DEBUG oslo.service.loopingcall [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.823 239969 DEBUG nova.compute.manager [-] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:07:53 compute-0 nova_compute[239965]: 2026-01-26 16:07:53.823 239969 DEBUG nova.network.neutron [-] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:07:53 compute-0 podman[320423]: 2026-01-26 16:07:53.985775397 +0000 UTC m=+0.046609961 container create ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_proskuriakova, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:07:54 compute-0 systemd[1]: Started libpod-conmon-ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247.scope.
Jan 26 16:07:54 compute-0 podman[320423]: 2026-01-26 16:07:53.964848065 +0000 UTC m=+0.025682669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:07:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:07:54 compute-0 podman[320423]: 2026-01-26 16:07:54.094711272 +0000 UTC m=+0.155545846 container init ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_proskuriakova, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:07:54 compute-0 podman[320423]: 2026-01-26 16:07:54.101815465 +0000 UTC m=+0.162650029 container start ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_proskuriakova, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 16:07:54 compute-0 podman[320423]: 2026-01-26 16:07:54.105068276 +0000 UTC m=+0.165902850 container attach ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:07:54 compute-0 goofy_proskuriakova[320439]: 167 167
Jan 26 16:07:54 compute-0 systemd[1]: libpod-ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247.scope: Deactivated successfully.
Jan 26 16:07:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 497 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 360 op/s
Jan 26 16:07:54 compute-0 conmon[320439]: conmon ccff240f62d038d8e4e1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247.scope/container/memory.events
Jan 26 16:07:54 compute-0 podman[320423]: 2026-01-26 16:07:54.118698088 +0000 UTC m=+0.179532652 container died ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_proskuriakova, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d24d35c61439f82a21e669b0282574be8c1779df8d00af3293da6211ff0a5e52-merged.mount: Deactivated successfully.
Jan 26 16:07:54 compute-0 podman[320423]: 2026-01-26 16:07:54.161410394 +0000 UTC m=+0.222244958 container remove ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_proskuriakova, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:07:54 compute-0 systemd[1]: libpod-conmon-ccff240f62d038d8e4e1f9922b6311b951cda9ffe18f54454ec12b684a791247.scope: Deactivated successfully.
Jan 26 16:07:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3008018602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:07:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:07:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:07:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:07:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:07:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:07:54 compute-0 ceph-mon[75140]: pgmap v1680: 305 pgs: 305 active+clean; 497 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 360 op/s
Jan 26 16:07:54 compute-0 podman[320463]: 2026-01-26 16:07:54.357340507 +0000 UTC m=+0.041374044 container create 4271d933722ae71eabddcd4c6e20569cafb8108df6971ab9ed110cf13ef3c2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wing, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:07:54 compute-0 nova_compute[239965]: 2026-01-26 16:07:54.376 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:54 compute-0 nova_compute[239965]: 2026-01-26 16:07:54.376 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquired lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:54 compute-0 nova_compute[239965]: 2026-01-26 16:07:54.376 239969 DEBUG nova.network.neutron [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:07:54 compute-0 systemd[1]: Started libpod-conmon-4271d933722ae71eabddcd4c6e20569cafb8108df6971ab9ed110cf13ef3c2b2.scope.
Jan 26 16:07:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f452788d365de5f36a0f2cfc6c60fac00ebe17355a92bd2c2ed6ae7524ee97d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f452788d365de5f36a0f2cfc6c60fac00ebe17355a92bd2c2ed6ae7524ee97d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f452788d365de5f36a0f2cfc6c60fac00ebe17355a92bd2c2ed6ae7524ee97d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f452788d365de5f36a0f2cfc6c60fac00ebe17355a92bd2c2ed6ae7524ee97d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f452788d365de5f36a0f2cfc6c60fac00ebe17355a92bd2c2ed6ae7524ee97d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:54 compute-0 podman[320463]: 2026-01-26 16:07:54.338784513 +0000 UTC m=+0.022818040 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:07:54 compute-0 podman[320463]: 2026-01-26 16:07:54.444236062 +0000 UTC m=+0.128269619 container init 4271d933722ae71eabddcd4c6e20569cafb8108df6971ab9ed110cf13ef3c2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wing, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:07:54 compute-0 podman[320463]: 2026-01-26 16:07:54.451830928 +0000 UTC m=+0.135864435 container start 4271d933722ae71eabddcd4c6e20569cafb8108df6971ab9ed110cf13ef3c2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:07:54 compute-0 podman[320463]: 2026-01-26 16:07:54.456038441 +0000 UTC m=+0.140071948 container attach 4271d933722ae71eabddcd4c6e20569cafb8108df6971ab9ed110cf13ef3c2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wing, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:07:54 compute-0 determined_wing[320480]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:07:54 compute-0 determined_wing[320480]: --> All data devices are unavailable
Jan 26 16:07:54 compute-0 systemd[1]: libpod-4271d933722ae71eabddcd4c6e20569cafb8108df6971ab9ed110cf13ef3c2b2.scope: Deactivated successfully.
Jan 26 16:07:54 compute-0 podman[320463]: 2026-01-26 16:07:54.949151353 +0000 UTC m=+0.633184900 container died 4271d933722ae71eabddcd4c6e20569cafb8108df6971ab9ed110cf13ef3c2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wing, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 16:07:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f452788d365de5f36a0f2cfc6c60fac00ebe17355a92bd2c2ed6ae7524ee97d-merged.mount: Deactivated successfully.
Jan 26 16:07:55 compute-0 podman[320463]: 2026-01-26 16:07:55.004721573 +0000 UTC m=+0.688755080 container remove 4271d933722ae71eabddcd4c6e20569cafb8108df6971ab9ed110cf13ef3c2b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 16:07:55 compute-0 systemd[1]: libpod-conmon-4271d933722ae71eabddcd4c6e20569cafb8108df6971ab9ed110cf13ef3c2b2.scope: Deactivated successfully.
Jan 26 16:07:55 compute-0 sudo[320384]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.045 239969 DEBUG nova.network.neutron [-] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.067 239969 INFO nova.compute.manager [-] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Took 1.24 seconds to deallocate network for instance.
Jan 26 16:07:55 compute-0 sudo[320514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:07:55 compute-0 sudo[320514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:07:55 compute-0 sudo[320514]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.114 239969 DEBUG oslo_concurrency.lockutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.115 239969 DEBUG oslo_concurrency.lockutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:55 compute-0 sudo[320539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:07:55 compute-0 sudo[320539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.220 239969 DEBUG nova.compute.manager [req-59ee3b34-5e20-4fc7-8301-7d4d541554b5 req-3aade414-5d67-4fe2-956a-e98e9aa82a7f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Received event network-vif-deleted-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.223 239969 DEBUG oslo_concurrency.processutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:55 compute-0 podman[320596]: 2026-01-26 16:07:55.467254848 +0000 UTC m=+0.051686376 container create 84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bohr, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:07:55 compute-0 systemd[1]: Started libpod-conmon-84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08.scope.
Jan 26 16:07:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:07:55 compute-0 podman[320596]: 2026-01-26 16:07:55.443324832 +0000 UTC m=+0.027756390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:07:55 compute-0 podman[320596]: 2026-01-26 16:07:55.545639135 +0000 UTC m=+0.130070683 container init 84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bohr, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 16:07:55 compute-0 podman[320596]: 2026-01-26 16:07:55.552380039 +0000 UTC m=+0.136811567 container start 84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:07:55 compute-0 podman[320596]: 2026-01-26 16:07:55.556027099 +0000 UTC m=+0.140458657 container attach 84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:07:55 compute-0 elastic_bohr[320612]: 167 167
Jan 26 16:07:55 compute-0 systemd[1]: libpod-84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08.scope: Deactivated successfully.
Jan 26 16:07:55 compute-0 conmon[320612]: conmon 84f8f489869fd03266ee <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08.scope/container/memory.events
Jan 26 16:07:55 compute-0 podman[320596]: 2026-01-26 16:07:55.558598562 +0000 UTC m=+0.143030100 container died 84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bohr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:07:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-dffb25478a68baf780d2faaf12ccfd812bd5ca86f2afb266a58324fed4598d1d-merged.mount: Deactivated successfully.
Jan 26 16:07:55 compute-0 podman[320596]: 2026-01-26 16:07:55.594831679 +0000 UTC m=+0.179263207 container remove 84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:07:55 compute-0 systemd[1]: libpod-conmon-84f8f489869fd03266ee671d4913c422120fee33e73bcc33840174f507734c08.scope: Deactivated successfully.
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.657 239969 DEBUG nova.network.neutron [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.681 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Releasing lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.683 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.683 239969 INFO nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Creating image(s)
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.707 239969 DEBUG nova.storage.rbd_utils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.711 239969 DEBUG nova.objects.instance [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.749 239969 DEBUG nova.storage.rbd_utils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.780 239969 DEBUG nova.storage.rbd_utils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.790 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "e9ec201d1d48b93802d938d21fbfe302ca4f2cdb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.791 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "e9ec201d1d48b93802d938d21fbfe302ca4f2cdb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/919174070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.798 239969 DEBUG nova.compute.manager [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Received event network-vif-plugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.798 239969 DEBUG oslo_concurrency.lockutils [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.798 239969 DEBUG oslo_concurrency.lockutils [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.799 239969 DEBUG oslo_concurrency.lockutils [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.799 239969 DEBUG nova.compute.manager [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] No waiting events found dispatching network-vif-plugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.799 239969 WARNING nova.compute.manager [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Received unexpected event network-vif-plugged-3c42bcf9-b5e8-41aa-a5a7-c22c3e24dba8 for instance with vm_state deleted and task_state None.
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.799 239969 DEBUG nova.compute.manager [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-changed-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.800 239969 DEBUG nova.compute.manager [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Refreshing instance network info cache due to event network-changed-4dce6184-01c6-4ef8-a5d3-7fc88f784d89. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.800 239969 DEBUG oslo_concurrency.lockutils [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.800 239969 DEBUG oslo_concurrency.lockutils [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.800 239969 DEBUG nova.network.neutron [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Refreshing network info cache for port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:07:55 compute-0 podman[320670]: 2026-01-26 16:07:55.803505103 +0000 UTC m=+0.044016228 container create 744124a81e0e6105480c55af8494c69a9ebe1b92dd3c468e93316503e950b786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ptolemy, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.819 239969 DEBUG oslo_concurrency.processutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.825 239969 DEBUG nova.compute.provider_tree [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:55 compute-0 systemd[1]: Started libpod-conmon-744124a81e0e6105480c55af8494c69a9ebe1b92dd3c468e93316503e950b786.scope.
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.843 239969 DEBUG nova.scheduler.client.report [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/919174070' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3d105f2c19569190e764b5a8e52b5fc77387d6fdd4ab21dd19a8b41d7d67b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.869 239969 DEBUG oslo_concurrency.lockutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3d105f2c19569190e764b5a8e52b5fc77387d6fdd4ab21dd19a8b41d7d67b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3d105f2c19569190e764b5a8e52b5fc77387d6fdd4ab21dd19a8b41d7d67b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f3d105f2c19569190e764b5a8e52b5fc77387d6fdd4ab21dd19a8b41d7d67b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:55 compute-0 podman[320670]: 2026-01-26 16:07:55.881612243 +0000 UTC m=+0.122123398 container init 744124a81e0e6105480c55af8494c69a9ebe1b92dd3c468e93316503e950b786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:07:55 compute-0 podman[320670]: 2026-01-26 16:07:55.786768854 +0000 UTC m=+0.027279999 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:07:55 compute-0 podman[320670]: 2026-01-26 16:07:55.888156363 +0000 UTC m=+0.128667488 container start 744124a81e0e6105480c55af8494c69a9ebe1b92dd3c468e93316503e950b786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ptolemy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:07:55 compute-0 podman[320670]: 2026-01-26 16:07:55.890824599 +0000 UTC m=+0.131335814 container attach 744124a81e0e6105480c55af8494c69a9ebe1b92dd3c468e93316503e950b786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ptolemy, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.892 239969 INFO nova.scheduler.client.report [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Deleted allocations for instance f3d29c29-2c6a-4e49-bd86-415f47139844
Jan 26 16:07:55 compute-0 nova_compute[239965]: 2026-01-26 16:07:55.956 239969 DEBUG oslo_concurrency.lockutils [None req-7449bef1-c650-45a0-a5ae-84c95cee82e4 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "f3d29c29-2c6a-4e49-bd86-415f47139844" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1681: 305 pgs: 305 active+clean; 491 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 324 op/s
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]: {
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:     "0": [
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:         {
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "devices": [
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "/dev/loop3"
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             ],
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_name": "ceph_lv0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_size": "21470642176",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "name": "ceph_lv0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "tags": {
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.cluster_name": "ceph",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.crush_device_class": "",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.encrypted": "0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.objectstore": "bluestore",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.osd_id": "0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.type": "block",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.vdo": "0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.with_tpm": "0"
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             },
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "type": "block",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "vg_name": "ceph_vg0"
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:         }
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:     ],
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:     "1": [
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:         {
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "devices": [
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "/dev/loop4"
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             ],
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_name": "ceph_lv1",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_size": "21470642176",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "name": "ceph_lv1",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "tags": {
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.cluster_name": "ceph",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.crush_device_class": "",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.encrypted": "0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.objectstore": "bluestore",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.osd_id": "1",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.type": "block",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.vdo": "0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.with_tpm": "0"
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             },
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "type": "block",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "vg_name": "ceph_vg1"
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:         }
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:     ],
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:     "2": [
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:         {
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "devices": [
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "/dev/loop5"
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             ],
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_name": "ceph_lv2",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_size": "21470642176",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "name": "ceph_lv2",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "tags": {
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.cluster_name": "ceph",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.crush_device_class": "",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.encrypted": "0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.objectstore": "bluestore",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.osd_id": "2",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.type": "block",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.vdo": "0",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:                 "ceph.with_tpm": "0"
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             },
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "type": "block",
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:             "vg_name": "ceph_vg2"
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:         }
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]:     ]
Jan 26 16:07:56 compute-0 brave_ptolemy[320706]: }
Jan 26 16:07:56 compute-0 systemd[1]: libpod-744124a81e0e6105480c55af8494c69a9ebe1b92dd3c468e93316503e950b786.scope: Deactivated successfully.
Jan 26 16:07:56 compute-0 podman[320670]: 2026-01-26 16:07:56.190309575 +0000 UTC m=+0.430820720 container died 744124a81e0e6105480c55af8494c69a9ebe1b92dd3c468e93316503e950b786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ptolemy, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.213 239969 DEBUG nova.virt.libvirt.imagebackend [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Image locations are: [{'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/8c93b0ea-6f08-4c34-a2a1-57cec764f188/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/8c93b0ea-6f08-4c34-a2a1-57cec764f188/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 16:07:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f3d105f2c19569190e764b5a8e52b5fc77387d6fdd4ab21dd19a8b41d7d67b6-merged.mount: Deactivated successfully.
Jan 26 16:07:56 compute-0 podman[320670]: 2026-01-26 16:07:56.253216104 +0000 UTC m=+0.493727229 container remove 744124a81e0e6105480c55af8494c69a9ebe1b92dd3c468e93316503e950b786 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_ptolemy, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:07:56 compute-0 systemd[1]: libpod-conmon-744124a81e0e6105480c55af8494c69a9ebe1b92dd3c468e93316503e950b786.scope: Deactivated successfully.
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.274 239969 DEBUG oslo_concurrency.lockutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.274 239969 DEBUG oslo_concurrency.lockutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.275 239969 DEBUG oslo_concurrency.lockutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.275 239969 DEBUG oslo_concurrency.lockutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.275 239969 DEBUG oslo_concurrency.lockutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.276 239969 INFO nova.compute.manager [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Terminating instance
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.277 239969 DEBUG nova.compute.manager [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.284 239969 DEBUG nova.virt.libvirt.imagebackend [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Selected location: {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/8c93b0ea-6f08-4c34-a2a1-57cec764f188/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.285 239969 DEBUG nova.storage.rbd_utils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] cloning images/8c93b0ea-6f08-4c34-a2a1-57cec764f188@snap to None/571cd10f-e22e-43b6-9523-e3539d047e4a_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:07:56 compute-0 sudo[320539]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:56 compute-0 kernel: tap73d8ee40-94 (unregistering): left promiscuous mode
Jan 26 16:07:56 compute-0 NetworkManager[48954]: <info>  [1769443676.3654] device (tap73d8ee40-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:56 compute-0 ovn_controller[146046]: 2026-01-26T16:07:56Z|00928|binding|INFO|Releasing lport 73d8ee40-94a4-4fc4-9646-1e6cb8410022 from this chassis (sb_readonly=0)
Jan 26 16:07:56 compute-0 ovn_controller[146046]: 2026-01-26T16:07:56Z|00929|binding|INFO|Setting lport 73d8ee40-94a4-4fc4-9646-1e6cb8410022 down in Southbound
Jan 26 16:07:56 compute-0 ovn_controller[146046]: 2026-01-26T16:07:56Z|00930|binding|INFO|Removing iface tap73d8ee40-94 ovn-installed in OVS
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.377 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:56 compute-0 sudo[320791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.387 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b0:06:0d 10.100.0.12'], port_security=['fa:16:3e:b0:06:0d 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=73d8ee40-94a4-4fc4-9646-1e6cb8410022) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:56 compute-0 sudo[320791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.389 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 73d8ee40-94a4-4fc4-9646-1e6cb8410022 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 unbound from our chassis
Jan 26 16:07:56 compute-0 sudo[320791]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.390 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.416 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[330b922b-e4a4-4c9a-941b-397e89300f4c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.417 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "e9ec201d1d48b93802d938d21fbfe302ca4f2cdb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:56 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Jan 26 16:07:56 compute-0 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000005c.scope: Consumed 13.906s CPU time.
Jan 26 16:07:56 compute-0 systemd-machined[208061]: Machine qemu-113-instance-0000005c terminated.
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.445 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6779ec15-0925-4124-be09-9aa41378c15f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.448 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e90db8cc-1dbc-42cb-823b-9553485c0b6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:56 compute-0 sudo[320824]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:07:56 compute-0 sudo[320824]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.460 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.484 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8d30ea2c-96c8-4924-bf5d-2ea41e0dab72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.504 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[468dc8f2-2f6c-4853-ba79-fa1d299a990c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 23, 'rx_bytes': 658, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 23, 'rx_bytes': 658, 'tx_bytes': 1110, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320895, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.522 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[86da737a-c822-416d-a3c6-7f174e2d7721]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320896, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320896, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.524 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.531 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.532 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.532 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:56.532 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.566 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.572 239969 DEBUG nova.objects.instance [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'migration_context' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.574 239969 INFO nova.virt.libvirt.driver [-] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Instance destroyed successfully.
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.574 239969 DEBUG nova.objects.instance [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'resources' on Instance uuid 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.595 239969 DEBUG nova.virt.libvirt.vif [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:07:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-174634227',display_name='tempest-ServersTestJSON-server-174634227',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-174634227',id=92,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:07:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-z08va5rh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_r
am='0',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:07:36Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.595 239969 DEBUG nova.network.os_vif_util [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "address": "fa:16:3e:b0:06:0d", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d8ee40-94", "ovs_interfaceid": "73d8ee40-94a4-4fc4-9646-1e6cb8410022", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.596 239969 DEBUG nova.network.os_vif_util [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b0:06:0d,bridge_name='br-int',has_traffic_filtering=True,id=73d8ee40-94a4-4fc4-9646-1e6cb8410022,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d8ee40-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.596 239969 DEBUG os_vif [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:06:0d,bridge_name='br-int',has_traffic_filtering=True,id=73d8ee40-94a4-4fc4-9646-1e6cb8410022,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d8ee40-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.632 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.632 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73d8ee40-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.635 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.637 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.639 239969 INFO os_vif [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b0:06:0d,bridge_name='br-int',has_traffic_filtering=True,id=73d8ee40-94a4-4fc4-9646-1e6cb8410022,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d8ee40-94')
Jan 26 16:07:56 compute-0 nova_compute[239965]: 2026-01-26 16:07:56.656 239969 DEBUG nova.storage.rbd_utils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] flattening vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:07:56 compute-0 podman[320989]: 2026-01-26 16:07:56.781303302 +0000 UTC m=+0.063799752 container create 04191437be0188711dd467cd56cbfddfe82ac842b8d69ff301107023140d5196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:07:56 compute-0 podman[320989]: 2026-01-26 16:07:56.741013456 +0000 UTC m=+0.023509916 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:07:56 compute-0 systemd[1]: Started libpod-conmon-04191437be0188711dd467cd56cbfddfe82ac842b8d69ff301107023140d5196.scope.
Jan 26 16:07:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:07:56 compute-0 ceph-mon[75140]: pgmap v1681: 305 pgs: 305 active+clean; 491 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 324 op/s
Jan 26 16:07:56 compute-0 podman[320989]: 2026-01-26 16:07:56.957596095 +0000 UTC m=+0.240092555 container init 04191437be0188711dd467cd56cbfddfe82ac842b8d69ff301107023140d5196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_roentgen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:07:56 compute-0 podman[320989]: 2026-01-26 16:07:56.97296023 +0000 UTC m=+0.255456700 container start 04191437be0188711dd467cd56cbfddfe82ac842b8d69ff301107023140d5196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_roentgen, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:07:56 compute-0 crazy_roentgen[321011]: 167 167
Jan 26 16:07:56 compute-0 systemd[1]: libpod-04191437be0188711dd467cd56cbfddfe82ac842b8d69ff301107023140d5196.scope: Deactivated successfully.
Jan 26 16:07:56 compute-0 podman[320989]: 2026-01-26 16:07:56.994855196 +0000 UTC m=+0.277351646 container attach 04191437be0188711dd467cd56cbfddfe82ac842b8d69ff301107023140d5196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:07:56 compute-0 podman[320989]: 2026-01-26 16:07:56.996669851 +0000 UTC m=+0.279166341 container died 04191437be0188711dd467cd56cbfddfe82ac842b8d69ff301107023140d5196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_roentgen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.037 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Image rbd:vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.038 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.038 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Ensure instance console log exists: /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.039 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ac361468db59ea5bab62486ca20f1b80dfca36e1e167f63dfec350d5d2ee83f-merged.mount: Deactivated successfully.
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.039 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.039 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.042 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Start _get_guest_xml network_info=[{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T16:07:33Z,direct_url=<?>,disk_format='raw',id=8c93b0ea-6f08-4c34-a2a1-57cec764f188,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-2114557834-shelved',owner='56f8818d291f4e738d868673048ce025',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T16:07:41Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.046 239969 WARNING nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.052 239969 DEBUG nova.virt.libvirt.host [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.053 239969 DEBUG nova.virt.libvirt.host [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.056 239969 DEBUG nova.virt.libvirt.host [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.057 239969 DEBUG nova.virt.libvirt.host [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.058 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.058 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T16:07:33Z,direct_url=<?>,disk_format='raw',id=8c93b0ea-6f08-4c34-a2a1-57cec764f188,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-2114557834-shelved',owner='56f8818d291f4e738d868673048ce025',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T16:07:41Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.059 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.059 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.059 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.060 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.060 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.060 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.060 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.061 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.061 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.061 239969 DEBUG nova.virt.hardware [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.061 239969 DEBUG nova.objects.instance [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:57 compute-0 podman[320989]: 2026-01-26 16:07:57.070227639 +0000 UTC m=+0.352724079 container remove 04191437be0188711dd467cd56cbfddfe82ac842b8d69ff301107023140d5196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_roentgen, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:07:57 compute-0 systemd[1]: libpod-conmon-04191437be0188711dd467cd56cbfddfe82ac842b8d69ff301107023140d5196.scope: Deactivated successfully.
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.079 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:57 compute-0 podman[321039]: 2026-01-26 16:07:57.260655048 +0000 UTC m=+0.047282617 container create 7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.275 239969 INFO nova.virt.libvirt.driver [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Deleting instance files /var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_del
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.276 239969 INFO nova.virt.libvirt.driver [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Deletion of /var/lib/nova/instances/2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e_del complete
Jan 26 16:07:57 compute-0 systemd[1]: Started libpod-conmon-7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a.scope.
Jan 26 16:07:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e1e6a800795dc0d4bea82e06dc38c4b77ea47b3e59def1e561d35d979fa1a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e1e6a800795dc0d4bea82e06dc38c4b77ea47b3e59def1e561d35d979fa1a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e1e6a800795dc0d4bea82e06dc38c4b77ea47b3e59def1e561d35d979fa1a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6e1e6a800795dc0d4bea82e06dc38c4b77ea47b3e59def1e561d35d979fa1a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:07:57 compute-0 podman[321039]: 2026-01-26 16:07:57.242532284 +0000 UTC m=+0.029159883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:07:57 compute-0 podman[321039]: 2026-01-26 16:07:57.347714508 +0000 UTC m=+0.134342097 container init 7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:07:57 compute-0 podman[321039]: 2026-01-26 16:07:57.354900214 +0000 UTC m=+0.141527783 container start 7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_nobel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:07:57 compute-0 podman[321039]: 2026-01-26 16:07:57.35846856 +0000 UTC m=+0.145096149 container attach 7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.378 239969 INFO nova.compute.manager [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Took 1.10 seconds to destroy the instance on the hypervisor.
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.379 239969 DEBUG oslo.service.loopingcall [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.379 239969 DEBUG nova.compute.manager [-] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.379 239969 DEBUG nova.network.neutron [-] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.417 239969 DEBUG nova.compute.manager [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Received event network-vif-unplugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.417 239969 DEBUG oslo_concurrency.lockutils [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.421 239969 DEBUG oslo_concurrency.lockutils [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.421 239969 DEBUG oslo_concurrency.lockutils [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.421 239969 DEBUG nova.compute.manager [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] No waiting events found dispatching network-vif-unplugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.422 239969 DEBUG nova.compute.manager [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Received event network-vif-unplugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.422 239969 DEBUG nova.compute.manager [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Received event network-vif-plugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.422 239969 DEBUG oslo_concurrency.lockutils [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.422 239969 DEBUG oslo_concurrency.lockutils [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.423 239969 DEBUG oslo_concurrency.lockutils [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.423 239969 DEBUG nova.compute.manager [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] No waiting events found dispatching network-vif-plugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.423 239969 WARNING nova.compute.manager [req-f8dca8d6-711a-41e6-adbf-928fa3ff3b52 req-764e6902-31d3-44c6-8c52-6900f8b5d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Received unexpected event network-vif-plugged-73d8ee40-94a4-4fc4-9646-1e6cb8410022 for instance with vm_state active and task_state deleting.
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.618 239969 DEBUG nova.network.neutron [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updated VIF entry in instance network info cache for port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.618 239969 DEBUG nova.network.neutron [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [{"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:07:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1984238173' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.640 239969 DEBUG oslo_concurrency.lockutils [req-cc2e39b0-3b6a-4319-ac2e-1730bf13dec3 req-93e7b071-4f2e-46de-a38a-37a621b80990 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-571cd10f-e22e-43b6-9523-e3539d047e4a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.653 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.678 239969 DEBUG nova.storage.rbd_utils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.683 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.760 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1984238173' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:57 compute-0 nova_compute[239965]: 2026-01-26 16:07:57.989 239969 DEBUG nova.network.neutron [-] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.010 239969 INFO nova.compute.manager [-] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Took 0.63 seconds to deallocate network for instance.
Jan 26 16:07:58 compute-0 lvm[321189]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:07:58 compute-0 lvm[321189]: VG ceph_vg0 finished
Jan 26 16:07:58 compute-0 lvm[321191]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:07:58 compute-0 lvm[321191]: VG ceph_vg1 finished
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.062 239969 DEBUG oslo_concurrency.lockutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.062 239969 DEBUG oslo_concurrency.lockutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:58 compute-0 lvm[321193]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:07:58 compute-0 lvm[321193]: VG ceph_vg2 finished
Jan 26 16:07:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1682: 305 pgs: 305 active+clean; 492 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.2 MiB/s wr, 281 op/s
Jan 26 16:07:58 compute-0 jolly_nobel[321074]: {}
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.219 239969 DEBUG oslo_concurrency.processutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:58 compute-0 systemd[1]: libpod-7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a.scope: Deactivated successfully.
Jan 26 16:07:58 compute-0 systemd[1]: libpod-7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a.scope: Consumed 1.342s CPU time.
Jan 26 16:07:58 compute-0 podman[321039]: 2026-01-26 16:07:58.228152915 +0000 UTC m=+1.014780484 container died 7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_nobel, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 16:07:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6e1e6a800795dc0d4bea82e06dc38c4b77ea47b3e59def1e561d35d979fa1a4-merged.mount: Deactivated successfully.
Jan 26 16:07:58 compute-0 podman[321039]: 2026-01-26 16:07:58.272306395 +0000 UTC m=+1.058933964 container remove 7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:07:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:07:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1018343910' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:58 compute-0 systemd[1]: libpod-conmon-7788734939ace0d7707cb981f5e66157e0f996655efcb44f48dac0028f19126a.scope: Deactivated successfully.
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.302 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.304 239969 DEBUG nova.virt.libvirt.vif [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:05:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2114557834',display_name='tempest-ServerActionsTestOtherB-server-2114557834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2114557834',id=85,image_ref='8c93b0ea-6f08-4c34-a2a1-57cec764f188',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1024411567',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:05:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-1wqrxcju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member',shelved_at='2026-01-26T16:07:41.386071',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='8c93b0ea-6f08-4c34-a2a1-57cec764f188'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ad9c6196af60436caf20747e96ad8388',uuid=571cd10f-e22e-43b6-9523-e3539d047e4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.304 239969 DEBUG nova.network.os_vif_util [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.305 239969 DEBUG nova.network.os_vif_util [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.307 239969 DEBUG nova.objects.instance [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'pci_devices' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:58 compute-0 sudo[320824]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.329 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <uuid>571cd10f-e22e-43b6-9523-e3539d047e4a</uuid>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <name>instance-00000055</name>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerActionsTestOtherB-server-2114557834</nova:name>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:07:57</nova:creationTime>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <nova:user uuid="ad9c6196af60436caf20747e96ad8388">tempest-ServerActionsTestOtherB-1778121066-project-member</nova:user>
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <nova:project uuid="56f8818d291f4e738d868673048ce025">tempest-ServerActionsTestOtherB-1778121066</nova:project>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="8c93b0ea-6f08-4c34-a2a1-57cec764f188"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <nova:port uuid="4dce6184-01c6-4ef8-a5d3-7fc88f784d89">
Jan 26 16:07:58 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <system>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <entry name="serial">571cd10f-e22e-43b6-9523-e3539d047e4a</entry>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <entry name="uuid">571cd10f-e22e-43b6-9523-e3539d047e4a</entry>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     </system>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <os>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   </os>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <features>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   </features>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk">
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config">
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       </source>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:07:58 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:72:ea:5b"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <target dev="tap4dce6184-01"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/console.log" append="off"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <video>
Jan 26 16:07:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     </video>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:07:58 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:07:58 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:07:58 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:07:58 compute-0 nova_compute[239965]: </domain>
Jan 26 16:07:58 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.330 239969 DEBUG nova.compute.manager [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Preparing to wait for external event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.330 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.330 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.330 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.331 239969 DEBUG nova.virt.libvirt.vif [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:05:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2114557834',display_name='tempest-ServerActionsTestOtherB-server-2114557834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2114557834',id=85,image_ref='8c93b0ea-6f08-4c34-a2a1-57cec764f188',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1024411567',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:05:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-1wqrxcju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member',shelved_at='2026-01-26T16:07:41.386071',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='8c93b0ea-6f08-4c34-a2a1-57cec764f188'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:07:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ad9c6196af60436caf20747e96ad8388',uuid=571cd10f-e22e-43b6-9523-e3539d047e4a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:07:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.332 239969 DEBUG nova.network.os_vif_util [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.333 239969 DEBUG nova.network.os_vif_util [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.333 239969 DEBUG os_vif [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.334 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.334 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.335 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.338 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.338 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4dce6184-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.339 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4dce6184-01, col_values=(('external_ids', {'iface-id': '4dce6184-01c6-4ef8-a5d3-7fc88f784d89', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:ea:5b', 'vm-uuid': '571cd10f-e22e-43b6-9523-e3539d047e4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.340 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:58 compute-0 NetworkManager[48954]: <info>  [1769443678.3417] manager: (tap4dce6184-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/392)
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.343 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.348 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.349 239969 INFO os_vif [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01')
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.404 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.405 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.405 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] No VIF found with MAC fa:16:3e:72:ea:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.405 239969 INFO nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Using config drive
Jan 26 16:07:58 compute-0 sudo[321212]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:07:58 compute-0 sudo[321212]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:07:58 compute-0 sudo[321212]: pam_unix(sudo:session): session closed for user root
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.426 239969 DEBUG nova.storage.rbd_utils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.443 239969 DEBUG nova.objects.instance [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.497 239969 DEBUG nova.objects.instance [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'keypairs' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.825 239969 INFO nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Creating config drive at /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.831 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpui0ff0wf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:07:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3921772806' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.866 239969 DEBUG oslo_concurrency.processutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.873 239969 DEBUG nova.compute.provider_tree [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.889 239969 DEBUG nova.scheduler.client.report [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.917 239969 DEBUG oslo_concurrency.lockutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:58 compute-0 ceph-mon[75140]: pgmap v1682: 305 pgs: 305 active+clean; 492 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 6.2 MiB/s wr, 281 op/s
Jan 26 16:07:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1018343910' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:07:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:07:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:07:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3921772806' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.969 239969 INFO nova.scheduler.client.report [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Deleted allocations for instance 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e
Jan 26 16:07:58 compute-0 nova_compute[239965]: 2026-01-26 16:07:58.975 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpui0ff0wf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.002 239969 DEBUG nova.storage.rbd_utils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] rbd image 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.006 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.109 239969 DEBUG oslo_concurrency.lockutils [None req-a04dc214-2e1a-416c-9756-c1660bad3c99 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.164 239969 DEBUG oslo_concurrency.processutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config 571cd10f-e22e-43b6-9523-e3539d047e4a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.165 239969 INFO nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Deleting local config drive /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a/disk.config because it was imported into RBD.
Jan 26 16:07:59 compute-0 kernel: tap4dce6184-01: entered promiscuous mode
Jan 26 16:07:59 compute-0 NetworkManager[48954]: <info>  [1769443679.2183] manager: (tap4dce6184-01): new Tun device (/org/freedesktop/NetworkManager/Devices/393)
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.218 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:59 compute-0 ovn_controller[146046]: 2026-01-26T16:07:59Z|00931|binding|INFO|Claiming lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 for this chassis.
Jan 26 16:07:59 compute-0 ovn_controller[146046]: 2026-01-26T16:07:59Z|00932|binding|INFO|4dce6184-01c6-4ef8-a5d3-7fc88f784d89: Claiming fa:16:3e:72:ea:5b 10.100.0.6
Jan 26 16:07:59 compute-0 systemd-udevd[320830]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.226 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:ea:5b 10.100.0.6'], port_security=['fa:16:3e:72:ea:5b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '571cd10f-e22e-43b6-9523-e3539d047e4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '7', 'neutron:security_group_ids': '8357fb8f-8173-42b8-8413-682bfcbe9368', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4dce6184-01c6-4ef8-a5d3-7fc88f784d89) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.227 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d bound to our chassis
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.228 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.229 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.229 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.230 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:07:59 compute-0 NetworkManager[48954]: <info>  [1769443679.2348] device (tap4dce6184-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:07:59 compute-0 NetworkManager[48954]: <info>  [1769443679.2354] device (tap4dce6184-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:07:59 compute-0 ovn_controller[146046]: 2026-01-26T16:07:59Z|00933|binding|INFO|Setting lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 ovn-installed in OVS
Jan 26 16:07:59 compute-0 ovn_controller[146046]: 2026-01-26T16:07:59Z|00934|binding|INFO|Setting lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 up in Southbound
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.243 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.249 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[67995b11-6559-4221-9869-9b02e5ca3100]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:59 compute-0 systemd-machined[208061]: New machine qemu-116-instance-00000055.
Jan 26 16:07:59 compute-0 systemd[1]: Started Virtual Machine qemu-116-instance-00000055.
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.277 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c805bfd6-a2d2-46d5-b843-99aa3aa431d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.280 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[34698ec8-2002-4ff3-a2a9-d819bf76bb46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.306 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[25fabd40-4883-470b-94ea-908faa027447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.324 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da9dc47b-d77b-4d2f-bc09-bafb55a8c7c1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 700, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 34738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321340, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.342 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[00ff4e43-7975-421d-afaa-db8744ea6e41]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502171, 'tstamp': 502171}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321342, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502175, 'tstamp': 502175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321342, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.344 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.450 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.452 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.453 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67ce52d3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.453 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.454 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67ce52d3-40, col_values=(('external_ids', {'iface-id': '8d8bd19b-594d-4000-99fc-eb8020a59c23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:07:59.454 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.931 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443679.9312682, 571cd10f-e22e-43b6-9523-e3539d047e4a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.932 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] VM Started (Lifecycle Event)
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.948 239969 DEBUG nova.compute.manager [req-8a155242-e59e-490e-ad77-c842cb56f85c req-860b5358-4100-4e02-925e-c7542ffa0ba8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Received event network-vif-deleted-73d8ee40-94a4-4fc4-9646-1e6cb8410022 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.952 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.957 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443679.9335966, 571cd10f-e22e-43b6-9523-e3539d047e4a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.957 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] VM Paused (Lifecycle Event)
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.977 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:07:59 compute-0 nova_compute[239965]: 2026-01-26 16:07:59.981 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.003 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 492 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.4 MiB/s wr, 246 op/s
Jan 26 16:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.492 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "ffae5575-2de8-4386-8619-8c1fe66b887a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.493 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "ffae5575-2de8-4386-8619-8c1fe66b887a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.513 239969 DEBUG nova.compute.manager [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.577 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.578 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.588 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.589 239969 INFO nova.compute.claims [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.664 239969 DEBUG nova.compute.manager [req-69e02e24-3841-4b89-b4fd-e6e0184cc51b req-b331df86-7f3d-461c-a701-b7b52837a274 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.665 239969 DEBUG oslo_concurrency.lockutils [req-69e02e24-3841-4b89-b4fd-e6e0184cc51b req-b331df86-7f3d-461c-a701-b7b52837a274 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.665 239969 DEBUG oslo_concurrency.lockutils [req-69e02e24-3841-4b89-b4fd-e6e0184cc51b req-b331df86-7f3d-461c-a701-b7b52837a274 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.666 239969 DEBUG oslo_concurrency.lockutils [req-69e02e24-3841-4b89-b4fd-e6e0184cc51b req-b331df86-7f3d-461c-a701-b7b52837a274 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.667 239969 DEBUG nova.compute.manager [req-69e02e24-3841-4b89-b4fd-e6e0184cc51b req-b331df86-7f3d-461c-a701-b7b52837a274 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Processing event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.670 239969 DEBUG nova.compute.manager [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.674 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443680.6743321, 571cd10f-e22e-43b6-9523-e3539d047e4a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.675 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] VM Resumed (Lifecycle Event)
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.679 239969 DEBUG nova.virt.libvirt.driver [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.684 239969 INFO nova.virt.libvirt.driver [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance spawned successfully.
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.702 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.706 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.729 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.775 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:00 compute-0 nova_compute[239965]: 2026-01-26 16:08:00.825 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.066 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Jan 26 16:08:01 compute-0 ceph-mon[75140]: pgmap v1683: 305 pgs: 305 active+clean; 492 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.4 MiB/s wr, 246 op/s
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.183 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "dc214d0d-327d-4a44-b811-2c04d4ccc9de" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.183 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "dc214d0d-327d-4a44-b811-2c04d4ccc9de" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Jan 26 16:08:01 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.209 239969 DEBUG nova.compute.manager [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.293 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1007087012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.494 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.719s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.499 239969 DEBUG nova.compute.provider_tree [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.518 239969 DEBUG nova.scheduler.client.report [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.522 239969 DEBUG nova.compute.manager [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.546 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.968s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.547 239969 DEBUG nova.compute.manager [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.549 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.554 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.554 239969 INFO nova.compute.claims [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.630 239969 DEBUG oslo_concurrency.lockutils [None req-41cb8488-ef64-477f-aac7-d2e6ce5808f6 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 9.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.639 239969 DEBUG nova.compute.manager [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.674 239969 INFO nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.693 239969 DEBUG nova.compute.manager [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.729 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "0e27c936-f272-4dff-b393-9602b77da425" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.730 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.747 239969 DEBUG nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.805 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.842 239969 DEBUG nova.compute.manager [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.845 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.846 239969 INFO nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Creating image(s)
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.870 239969 DEBUG nova.storage.rbd_utils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image ffae5575-2de8-4386-8619-8c1fe66b887a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.894 239969 DEBUG nova.storage.rbd_utils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image ffae5575-2de8-4386-8619-8c1fe66b887a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.916 239969 DEBUG nova.storage.rbd_utils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image ffae5575-2de8-4386-8619-8c1fe66b887a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.919 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.971 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.991 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.992 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.992 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:01 compute-0 nova_compute[239965]: 2026-01-26 16:08:01.993 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.012 239969 DEBUG nova.storage.rbd_utils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image ffae5575-2de8-4386-8619-8c1fe66b887a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.016 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 ffae5575-2de8-4386-8619-8c1fe66b887a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1685: 305 pgs: 305 active+clean; 410 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 5.4 MiB/s wr, 323 op/s
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.131 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443667.1273594, 5669b8af-7f11-43be-a56d-983f8b8b859e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.134 239969 INFO nova.compute.manager [-] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] VM Stopped (Lifecycle Event)
Jan 26 16:08:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:02.148 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:ea:76 10.100.0.2 2001:db8::f816:3eff:fe35:ea76'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe35:ea76/64', 'neutron:device_id': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afff7a52-f9b6-46d8-88e0-ad54d3e2a5be, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=4fab6bb9-7be8-461a-b3cf-098c70ef4666) old=Port_Binding(mac=['fa:16:3e:35:ea:76 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:02.149 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 4fab6bb9-7be8-461a-b3cf-098c70ef4666 in datapath 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 updated
Jan 26 16:08:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:02.150 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:08:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:02.151 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f27673cf-43af-4c54-b9c9-1932429195a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.168 239969 DEBUG nova.compute.manager [None req-67d84142-07fb-40ed-bcb5-0c44bfaac7a1 - - - - - -] [instance: 5669b8af-7f11-43be-a56d-983f8b8b859e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:02 compute-0 ceph-mon[75140]: osdmap e268: 3 total, 3 up, 3 in
Jan 26 16:08:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1007087012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.293 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 ffae5575-2de8-4386-8619-8c1fe66b887a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.357 239969 DEBUG nova.storage.rbd_utils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] resizing rbd image ffae5575-2de8-4386-8619-8c1fe66b887a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:08:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3386067840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.402 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.431 239969 DEBUG nova.objects.instance [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'migration_context' on Instance uuid ffae5575-2de8-4386-8619-8c1fe66b887a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.435 239969 DEBUG nova.compute.provider_tree [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.450 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.450 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Ensure instance console log exists: /var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.450 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.451 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.451 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.453 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.454 239969 DEBUG nova.scheduler.client.report [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.461 239969 WARNING nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.465 239969 DEBUG nova.virt.libvirt.host [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.466 239969 DEBUG nova.virt.libvirt.host [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.470 239969 DEBUG nova.virt.libvirt.host [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.470 239969 DEBUG nova.virt.libvirt.host [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.470 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.471 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.471 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.471 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.472 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.472 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.472 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.472 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.472 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.473 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.473 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.473 239969 DEBUG nova.virt.hardware [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.476 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.507 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.508 239969 DEBUG nova.compute.manager [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.513 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.518 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.519 239969 INFO nova.compute.claims [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.575 239969 DEBUG nova.compute.manager [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.610 239969 INFO nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:08:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.629 239969 DEBUG nova.compute.manager [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.744 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.783 239969 DEBUG nova.compute.manager [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.784 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.784 239969 INFO nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Creating image(s)
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.808 239969 DEBUG nova.storage.rbd_utils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.831 239969 DEBUG nova.storage.rbd_utils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.860 239969 DEBUG nova.storage.rbd_utils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.864 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.898 239969 DEBUG nova.compute.manager [req-dd5b710a-f725-4d60-a677-a1b2455ed641 req-eb48adcb-3b51-4243-aa65-20600aab59ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.898 239969 DEBUG oslo_concurrency.lockutils [req-dd5b710a-f725-4d60-a677-a1b2455ed641 req-eb48adcb-3b51-4243-aa65-20600aab59ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.899 239969 DEBUG oslo_concurrency.lockutils [req-dd5b710a-f725-4d60-a677-a1b2455ed641 req-eb48adcb-3b51-4243-aa65-20600aab59ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.899 239969 DEBUG oslo_concurrency.lockutils [req-dd5b710a-f725-4d60-a677-a1b2455ed641 req-eb48adcb-3b51-4243-aa65-20600aab59ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.900 239969 DEBUG nova.compute.manager [req-dd5b710a-f725-4d60-a677-a1b2455ed641 req-eb48adcb-3b51-4243-aa65-20600aab59ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] No waiting events found dispatching network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.900 239969 WARNING nova.compute.manager [req-dd5b710a-f725-4d60-a677-a1b2455ed641 req-eb48adcb-3b51-4243-aa65-20600aab59ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received unexpected event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 for instance with vm_state active and task_state None.
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.938 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.939 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.939 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.940 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.961 239969 DEBUG nova.storage.rbd_utils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:02 compute-0 nova_compute[239965]: 2026-01-26 16:08:02.966 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/228421867' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.095 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.143 239969 DEBUG nova.storage.rbd_utils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image ffae5575-2de8-4386-8619-8c1fe66b887a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.148 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:03 compute-0 ceph-mon[75140]: pgmap v1685: 305 pgs: 305 active+clean; 410 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 5.4 MiB/s wr, 323 op/s
Jan 26 16:08:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3386067840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/228421867' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.251 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.285s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.310 239969 DEBUG nova.storage.rbd_utils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] resizing rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.343 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255966487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.392 239969 DEBUG nova.objects.instance [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'migration_context' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.394 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.398 239969 DEBUG nova.compute.provider_tree [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.411 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.411 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Ensure instance console log exists: /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.411 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.412 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.412 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.414 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.416 239969 DEBUG nova.scheduler.client.report [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.423 239969 WARNING nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.429 239969 DEBUG nova.virt.libvirt.host [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.429 239969 DEBUG nova.virt.libvirt.host [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.432 239969 DEBUG nova.virt.libvirt.host [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.432 239969 DEBUG nova.virt.libvirt.host [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.432 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.433 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.434 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.434 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.434 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.434 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.434 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.435 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.435 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.435 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.436 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.436 239969 DEBUG nova.virt.hardware [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.439 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.484 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.972s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.486 239969 DEBUG nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.545 239969 DEBUG nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.546 239969 DEBUG nova.network.neutron [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.574 239969 INFO nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.593 239969 DEBUG nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.690 239969 DEBUG nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.691 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.692 239969 INFO nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Creating image(s)
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.718 239969 DEBUG nova.storage.rbd_utils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 0e27c936-f272-4dff-b393-9602b77da425_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.750 239969 DEBUG nova.storage.rbd_utils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 0e27c936-f272-4dff-b393-9602b77da425_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1771112443' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.778 239969 DEBUG nova.storage.rbd_utils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 0e27c936-f272-4dff-b393-9602b77da425_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.783 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.818 239969 DEBUG nova.policy [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '392bf4c554724bd3b097b990cec964ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3d3c26abe454a90816833e484abbbd5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.822 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.824 239969 DEBUG nova.objects.instance [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'pci_devices' on Instance uuid ffae5575-2de8-4386-8619-8c1fe66b887a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.843 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <uuid>ffae5575-2de8-4386-8619-8c1fe66b887a</uuid>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <name>instance-0000005f</name>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerShowV247Test-server-379321192</nova:name>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:08:02</nova:creationTime>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <nova:user uuid="82e03105001a493dbe4acc52bc208148">tempest-ServerShowV247Test-477144177-project-member</nova:user>
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <nova:project uuid="3fe1ff70a12f48e19752112548c850be">tempest-ServerShowV247Test-477144177</nova:project>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <system>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <entry name="serial">ffae5575-2de8-4386-8619-8c1fe66b887a</entry>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <entry name="uuid">ffae5575-2de8-4386-8619-8c1fe66b887a</entry>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     </system>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <os>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   </os>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <features>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   </features>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/ffae5575-2de8-4386-8619-8c1fe66b887a_disk">
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/ffae5575-2de8-4386-8619-8c1fe66b887a_disk.config">
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:03 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a/console.log" append="off"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <video>
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     </video>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:08:03 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:08:03 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:08:03 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:08:03 compute-0 nova_compute[239965]: </domain>
Jan 26 16:08:03 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.861 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.862 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.862 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.862 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.885 239969 DEBUG nova.storage.rbd_utils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 0e27c936-f272-4dff-b393-9602b77da425_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.889 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0e27c936-f272-4dff-b393-9602b77da425_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.953 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.954 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.955 239969 INFO nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Using config drive
Jan 26 16:08:03 compute-0 nova_compute[239965]: 2026-01-26 16:08:03.975 239969 DEBUG nova.storage.rbd_utils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image ffae5575-2de8-4386-8619-8c1fe66b887a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1610359295' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.103 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 400 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 5.6 MiB/s wr, 264 op/s
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.123 239969 DEBUG nova.storage.rbd_utils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.127 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.194 239969 INFO nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Creating config drive at /var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a/disk.config
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.200 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebtoz6c6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Jan 26 16:08:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Jan 26 16:08:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2255966487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1771112443' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1610359295' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:04 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.242 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0e27c936-f272-4dff-b393-9602b77da425_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.338 239969 DEBUG nova.storage.rbd_utils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] resizing rbd image 0e27c936-f272-4dff-b393-9602b77da425_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.371 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebtoz6c6" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.410 239969 DEBUG nova.storage.rbd_utils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image ffae5575-2de8-4386-8619-8c1fe66b887a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.418 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a/disk.config ffae5575-2de8-4386-8619-8c1fe66b887a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.467 239969 DEBUG nova.network.neutron [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Successfully created port: 6bca1cb5-f515-4008-9c5f-ccd488d46f98 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.522 239969 DEBUG nova.objects.instance [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'migration_context' on Instance uuid 0e27c936-f272-4dff-b393-9602b77da425 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.541 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.541 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Ensure instance console log exists: /var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.542 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.542 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.542 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.576 239969 DEBUG oslo_concurrency.processutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a/disk.config ffae5575-2de8-4386-8619-8c1fe66b887a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.576 239969 INFO nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Deleting local config drive /var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a/disk.config because it was imported into RBD.
Jan 26 16:08:04 compute-0 systemd-machined[208061]: New machine qemu-117-instance-0000005f.
Jan 26 16:08:04 compute-0 systemd[1]: Started Virtual Machine qemu-117-instance-0000005f.
Jan 26 16:08:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2346638801' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.750 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.623s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.752 239969 DEBUG nova.objects.instance [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'pci_devices' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.770 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <uuid>dc214d0d-327d-4a44-b811-2c04d4ccc9de</uuid>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <name>instance-00000060</name>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerShowV247Test-server-382537760</nova:name>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:08:03</nova:creationTime>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <nova:user uuid="82e03105001a493dbe4acc52bc208148">tempest-ServerShowV247Test-477144177-project-member</nova:user>
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <nova:project uuid="3fe1ff70a12f48e19752112548c850be">tempest-ServerShowV247Test-477144177</nova:project>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <system>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <entry name="serial">dc214d0d-327d-4a44-b811-2c04d4ccc9de</entry>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <entry name="uuid">dc214d0d-327d-4a44-b811-2c04d4ccc9de</entry>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     </system>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <os>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   </os>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <features>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   </features>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk">
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config">
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:04 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/console.log" append="off"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <video>
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     </video>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:08:04 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:08:04 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:08:04 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:08:04 compute-0 nova_compute[239965]: </domain>
Jan 26 16:08:04 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.818 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.819 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.819 239969 INFO nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Using config drive
Jan 26 16:08:04 compute-0 nova_compute[239965]: 2026-01-26 16:08:04.841 239969 DEBUG nova.storage.rbd_utils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.115 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443685.1147215, ffae5575-2de8-4386-8619-8c1fe66b887a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.115 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] VM Resumed (Lifecycle Event)
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.119 239969 DEBUG nova.compute.manager [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.120 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.123 239969 INFO nova.virt.libvirt.driver [-] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Instance spawned successfully.
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.124 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.176 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.179 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.180 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.180 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.180 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.181 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.181 239969 DEBUG nova.virt.libvirt.driver [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.185 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.228 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.229 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443685.1188073, ffae5575-2de8-4386-8619-8c1fe66b887a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.229 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] VM Started (Lifecycle Event)
Jan 26 16:08:05 compute-0 ceph-mon[75140]: pgmap v1686: 305 pgs: 305 active+clean; 400 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 7.1 MiB/s rd, 5.6 MiB/s wr, 264 op/s
Jan 26 16:08:05 compute-0 ceph-mon[75140]: osdmap e269: 3 total, 3 up, 3 in
Jan 26 16:08:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2346638801' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.256 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.259 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.264 239969 INFO nova.compute.manager [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Took 3.42 seconds to spawn the instance on the hypervisor.
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.264 239969 DEBUG nova.compute.manager [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.291 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.319 239969 INFO nova.compute.manager [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Took 4.76 seconds to build instance.
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.344 239969 DEBUG oslo_concurrency.lockutils [None req-83d4445c-e2ac-4adf-ad1b-dafc66d5bf2b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "ffae5575-2de8-4386-8619-8c1fe66b887a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.378 239969 INFO nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Creating config drive at /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.384 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfmt1re20 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.525 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpfmt1re20" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.562 239969 DEBUG nova.storage.rbd_utils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.567 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.700 239969 DEBUG nova.network.neutron [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Successfully updated port: 6bca1cb5-f515-4008-9c5f-ccd488d46f98 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.721 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "refresh_cache-0e27c936-f272-4dff-b393-9602b77da425" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.721 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquired lock "refresh_cache-0e27c936-f272-4dff-b393-9602b77da425" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.721 239969 DEBUG nova.network.neutron [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.725 239969 DEBUG oslo_concurrency.processutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.726 239969 INFO nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Deleting local config drive /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config because it was imported into RBD.
Jan 26 16:08:05 compute-0 systemd-machined[208061]: New machine qemu-118-instance-00000060.
Jan 26 16:08:05 compute-0 systemd[1]: Started Virtual Machine qemu-118-instance-00000060.
Jan 26 16:08:05 compute-0 nova_compute[239965]: 2026-01-26 16:08:05.845 239969 DEBUG nova.network.neutron [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.071 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 402 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.1 MiB/s wr, 292 op/s
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.353 239969 DEBUG oslo_concurrency.lockutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.354 239969 DEBUG oslo_concurrency.lockutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.354 239969 DEBUG oslo_concurrency.lockutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.354 239969 DEBUG oslo_concurrency.lockutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.355 239969 DEBUG oslo_concurrency.lockutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.357 239969 INFO nova.compute.manager [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Terminating instance
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.358 239969 DEBUG nova.compute.manager [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.394 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "59b6d7ef-0e86-41af-af94-b94def8570b0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.394 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:06 compute-0 kernel: tapd0f5820b-f3 (unregistering): left promiscuous mode
Jan 26 16:08:06 compute-0 NetworkManager[48954]: <info>  [1769443686.4078] device (tapd0f5820b-f3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.410 239969 DEBUG nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.422 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00935|binding|INFO|Releasing lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd from this chassis (sb_readonly=0)
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00936|binding|INFO|Setting lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd down in Southbound
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00937|binding|INFO|Removing iface tapd0f5820b-f3 ovn-installed in OVS
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.425 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.429 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:71:25 10.100.0.11'], port_security=['fa:16:3e:75:71:25 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dcfffb0-b36f-463e-b49a-9954483b8f5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5352fc20-0398-48ca-8854-47c9620f712b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.430 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d0f5820b-f3ad-4a26-84c4-c20c732dbcfd in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d unbound from our chassis
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.432 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.445 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.453 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5b543db1-dd7c-44a4-acf5-7f27bf76a872]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Deactivated successfully.
Jan 26 16:08:06 compute-0 systemd[1]: machine-qemu\x2d109\x2dinstance\x2d00000058.scope: Consumed 15.362s CPU time.
Jan 26 16:08:06 compute-0 systemd-machined[208061]: Machine qemu-109-instance-00000058 terminated.
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.487 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.488 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.493 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.494 239969 INFO nova.compute.claims [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.501 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[18500b30-0e3c-4778-bf12-05a02efcb20d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.507 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9222f892-25a8-4422-8a80-e304cb93d528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.543 239969 DEBUG nova.network.neutron [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Updating instance_info_cache with network_info: [{"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.552 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f7d463d7-5f9b-4c6f-9035-6d3d62986f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.578 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Releasing lock "refresh_cache-0e27c936-f272-4dff-b393-9602b77da425" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.579 239969 DEBUG nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Instance network_info: |[{"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.582 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Start _get_guest_xml network_info=[{"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:08:06 compute-0 kernel: tapd0f5820b-f3: entered promiscuous mode
Jan 26 16:08:06 compute-0 NetworkManager[48954]: <info>  [1769443686.5863] manager: (tapd0f5820b-f3): new Tun device (/org/freedesktop/NetworkManager/Devices/394)
Jan 26 16:08:06 compute-0 systemd-udevd[322207]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.583 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a70fd4b-2471-4c72-b2eb-670c52a3778f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 700, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 34738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322275, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.587 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 kernel: tapd0f5820b-f3 (unregistering): left promiscuous mode
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00938|binding|INFO|Claiming lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd for this chassis.
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00939|binding|INFO|d0f5820b-f3ad-4a26-84c4-c20c732dbcfd: Claiming fa:16:3e:75:71:25 10.100.0.11
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.608 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:71:25 10.100.0.11'], port_security=['fa:16:3e:75:71:25 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dcfffb0-b36f-463e-b49a-9954483b8f5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5352fc20-0398-48ca-8854-47c9620f712b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.611 239969 WARNING nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.623 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed26454-4f80-40ab-836d-a375016192e2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502171, 'tstamp': 502171}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322279, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502175, 'tstamp': 502175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322279, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.626 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00940|binding|INFO|Setting lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd ovn-installed in OVS
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00941|binding|INFO|Setting lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd up in Southbound
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00942|binding|INFO|Releasing lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd from this chassis (sb_readonly=1)
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00943|binding|INFO|Removing iface tapd0f5820b-f3 ovn-installed in OVS
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00944|if_status|INFO|Dropped 2 log messages in last 112 seconds (most recently, 112 seconds ago) due to excessive rate
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00945|if_status|INFO|Not setting lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd down as sb is readonly
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.629 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.629 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.630 239969 DEBUG nova.virt.libvirt.host [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00946|binding|INFO|Releasing lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd from this chassis (sb_readonly=0)
Jan 26 16:08:06 compute-0 ovn_controller[146046]: 2026-01-26T16:08:06Z|00947|binding|INFO|Setting lport d0f5820b-f3ad-4a26-84c4-c20c732dbcfd down in Southbound
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.632 239969 DEBUG nova.virt.libvirt.host [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.632 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.637 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:75:71:25 10.100.0.11'], port_security=['fa:16:3e:75:71:25 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '8dcfffb0-b36f-463e-b49a-9954483b8f5b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5352fc20-0398-48ca-8854-47c9620f712b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.639 239969 INFO nova.virt.libvirt.driver [-] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Instance destroyed successfully.
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.641 239969 DEBUG nova.objects.instance [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'resources' on Instance uuid 8dcfffb0-b36f-463e-b49a-9954483b8f5b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.644 239969 DEBUG nova.virt.libvirt.host [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.646 239969 DEBUG nova.virt.libvirt.host [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.650 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.650 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.651 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.652 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.652 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.652 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.652 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.652 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.653 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.653 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.653 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.653 239969 DEBUG nova.virt.hardware [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.653 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67ce52d3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.654 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.655 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67ce52d3-40, col_values=(('external_ids', {'iface-id': '8d8bd19b-594d-4000-99fc-eb8020a59c23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.655 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.656 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d0f5820b-f3ad-4a26-84c4-c20c732dbcfd in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d unbound from our chassis
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.657 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.658 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.687 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a732ceba-1a45-4b03-9364-8039a3203e34]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.713 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.723 239969 DEBUG nova.virt.libvirt.vif [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:06:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1343019912',display_name='tempest-ServerActionsTestOtherB-server-1343019912',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1343019912',id=88,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:06:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-iovk31tw',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:06:56Z,user_data=None,user_id='ad9c6196af60436caf20747e96ad8388',uuid=8dcfffb0-b36f-463e-b49a-9954483b8f5b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.724 239969 DEBUG nova.network.os_vif_util [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "address": "fa:16:3e:75:71:25", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd0f5820b-f3", "ovs_interfaceid": "d0f5820b-f3ad-4a26-84c4-c20c732dbcfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.725 239969 DEBUG nova.network.os_vif_util [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:75:71:25,bridge_name='br-int',has_traffic_filtering=True,id=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0f5820b-f3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.726 239969 DEBUG os_vif [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:71:25,bridge_name='br-int',has_traffic_filtering=True,id=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0f5820b-f3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.729 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.729 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0f5820b-f3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.732 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.734 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.738 239969 INFO os_vif [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:75:71:25,bridge_name='br-int',has_traffic_filtering=True,id=d0f5820b-f3ad-4a26-84c4-c20c732dbcfd,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd0f5820b-f3')
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.741 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d3ad50fa-9686-4927-bea7-bba6959c86ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.749 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[147d2378-c46e-4501-a1a7-ce2a41b45d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.791 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6412b113-075f-4cd1-a443-4079022ceaf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.819 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.821 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dc153206-1b33-4773-a41d-4b8da2228005]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 17, 'rx_bytes': 700, 'tx_bytes': 858, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 34738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322306, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.844 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef2eaf8-01b3-4e13-9ae0-88656429569b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502171, 'tstamp': 502171}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322332, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502175, 'tstamp': 502175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322332, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.857 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.861 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67ce52d3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.862 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.863 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67ce52d3-40, col_values=(('external_ids', {'iface-id': '8d8bd19b-594d-4000-99fc-eb8020a59c23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.863 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:06 compute-0 nova_compute[239965]: 2026-01-26 16:08:06.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.865 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d0f5820b-f3ad-4a26-84c4-c20c732dbcfd in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d unbound from our chassis
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.867 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.894 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e704c3af-d63f-4674-b956-3157b1f21219]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.923 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[90292259-397d-495d-b008-9aa2cf9bca9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.928 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[faf3f82d-63bb-4355-9571-c8c47b9d8e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:06.979 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2fc6052e-dfd4-43cc-9f49-515362586a59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:07.004 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f905139d-deb2-4b0b-b962-7b7f259208dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap67ce52d3-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:dd:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 19, 'rx_bytes': 700, 'tx_bytes': 942, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 242], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502154, 'reachable_time': 34738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322374, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:07.032 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9490b8-1aa4-438d-a45c-b5b8533a599e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502171, 'tstamp': 502171}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322393, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap67ce52d3-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 502175, 'tstamp': 502175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322393, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:07.036 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:07.039 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap67ce52d3-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:07.040 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.040 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:07.041 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap67ce52d3-40, col_values=(('external_ids', {'iface-id': '8d8bd19b-594d-4000-99fc-eb8020a59c23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:07.042 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.052 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443687.0475614, dc214d0d-327d-4a44-b811-2c04d4ccc9de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.053 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] VM Resumed (Lifecycle Event)
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.056 239969 DEBUG nova.compute.manager [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.057 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.062 239969 INFO nova.virt.libvirt.driver [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance spawned successfully.
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.063 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.078 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.085 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.089 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.089 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.090 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.090 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.091 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.091 239969 DEBUG nova.virt.libvirt.driver [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.112 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.113 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443687.0477273, dc214d0d-327d-4a44-b811-2c04d4ccc9de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.113 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] VM Started (Lifecycle Event)
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.143 239969 INFO nova.virt.libvirt.driver [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Deleting instance files /var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b_del
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.144 239969 INFO nova.virt.libvirt.driver [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Deletion of /var/lib/nova/instances/8dcfffb0-b36f-463e-b49a-9954483b8f5b_del complete
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.149 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.153 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.157 239969 INFO nova.compute.manager [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Took 4.37 seconds to spawn the instance on the hypervisor.
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.158 239969 DEBUG nova.compute.manager [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.197 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.238 239969 INFO nova.compute.manager [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Took 5.98 seconds to build instance.
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.244 239969 INFO nova.compute.manager [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Took 0.89 seconds to destroy the instance on the hypervisor.
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.244 239969 DEBUG oslo.service.loopingcall [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.244 239969 DEBUG nova.compute.manager [-] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.245 239969 DEBUG nova.network.neutron [-] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:08:07 compute-0 ceph-mon[75140]: pgmap v1688: 305 pgs: 305 active+clean; 402 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.1 MiB/s wr, 292 op/s
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.254 239969 DEBUG oslo_concurrency.lockutils [None req-b4b377a0-d616-4bf1-b94e-3782f44428ba 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "dc214d0d-327d-4a44-b811-2c04d4ccc9de" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857070454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.361 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.386 239969 DEBUG nova.storage.rbd_utils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 0e27c936-f272-4dff-b393-9602b77da425_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.393 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797932050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.524 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.532 239969 DEBUG nova.compute.provider_tree [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.549 239969 DEBUG nova.scheduler.client.report [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.573 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.574 239969 DEBUG nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:08:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Jan 26 16:08:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Jan 26 16:08:07 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.640 239969 DEBUG nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.641 239969 DEBUG nova.network.neutron [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.662 239969 INFO nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.680 239969 DEBUG nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.763 239969 DEBUG nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.766 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.766 239969 INFO nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Creating image(s)
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.801 239969 DEBUG nova.storage.rbd_utils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59b6d7ef-0e86-41af-af94-b94def8570b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.835 239969 DEBUG nova.storage.rbd_utils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59b6d7ef-0e86-41af-af94-b94def8570b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.863 239969 DEBUG nova.storage.rbd_utils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59b6d7ef-0e86-41af-af94-b94def8570b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.868 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.923 239969 DEBUG nova.policy [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.943 239969 DEBUG nova.network.neutron [-] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.969 239969 INFO nova.compute.manager [-] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Took 0.72 seconds to deallocate network for instance.
Jan 26 16:08:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4153946357' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.976 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.977 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.978 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:07 compute-0 nova_compute[239965]: 2026-01-26 16:08:07.979 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.005 239969 DEBUG nova.storage.rbd_utils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59b6d7ef-0e86-41af-af94-b94def8570b0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.010 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 59b6d7ef-0e86-41af-af94-b94def8570b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.056 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.060 239969 DEBUG nova.compute.manager [req-b3eb3cc9-461f-439f-9e34-99704f866d65 req-5b46d310-a106-416c-a73b-ef1b8bef8d3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Received event network-changed-6bca1cb5-f515-4008-9c5f-ccd488d46f98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.060 239969 DEBUG nova.compute.manager [req-b3eb3cc9-461f-439f-9e34-99704f866d65 req-5b46d310-a106-416c-a73b-ef1b8bef8d3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Refreshing instance network info cache due to event network-changed-6bca1cb5-f515-4008-9c5f-ccd488d46f98. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.060 239969 DEBUG oslo_concurrency.lockutils [req-b3eb3cc9-461f-439f-9e34-99704f866d65 req-5b46d310-a106-416c-a73b-ef1b8bef8d3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0e27c936-f272-4dff-b393-9602b77da425" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.061 239969 DEBUG oslo_concurrency.lockutils [req-b3eb3cc9-461f-439f-9e34-99704f866d65 req-5b46d310-a106-416c-a73b-ef1b8bef8d3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0e27c936-f272-4dff-b393-9602b77da425" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.061 239969 DEBUG nova.network.neutron [req-b3eb3cc9-461f-439f-9e34-99704f866d65 req-5b46d310-a106-416c-a73b-ef1b8bef8d3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Refreshing network info cache for port 6bca1cb5-f515-4008-9c5f-ccd488d46f98 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.063 239969 DEBUG nova.virt.libvirt.vif [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1705397455',display_name='tempest-ServersTestJSON-server-1705397455',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1705397455',id=97,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-x3s4kata',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:03Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=0e27c936-f272-4dff-b393-9602b77da425,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.063 239969 DEBUG nova.network.os_vif_util [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.064 239969 DEBUG nova.network.os_vif_util [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:31:a8,bridge_name='br-int',has_traffic_filtering=True,id=6bca1cb5-f515-4008-9c5f-ccd488d46f98,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bca1cb5-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.065 239969 DEBUG nova.objects.instance [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0e27c936-f272-4dff-b393-9602b77da425 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.067 239969 DEBUG oslo_concurrency.lockutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.067 239969 DEBUG oslo_concurrency.lockutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.089 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <uuid>0e27c936-f272-4dff-b393-9602b77da425</uuid>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <name>instance-00000061</name>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestJSON-server-1705397455</nova:name>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:08:06</nova:creationTime>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <nova:user uuid="392bf4c554724bd3b097b990cec964ac">tempest-ServersTestJSON-190839520-project-member</nova:user>
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <nova:project uuid="e3d3c26abe454a90816833e484abbbd5">tempest-ServersTestJSON-190839520</nova:project>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <nova:port uuid="6bca1cb5-f515-4008-9c5f-ccd488d46f98">
Jan 26 16:08:08 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <system>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <entry name="serial">0e27c936-f272-4dff-b393-9602b77da425</entry>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <entry name="uuid">0e27c936-f272-4dff-b393-9602b77da425</entry>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     </system>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <os>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   </os>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <features>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   </features>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0e27c936-f272-4dff-b393-9602b77da425_disk">
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0e27c936-f272-4dff-b393-9602b77da425_disk.config">
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:08 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:a5:31:a8"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <target dev="tap6bca1cb5-f5"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425/console.log" append="off"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <video>
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     </video>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:08:08 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:08:08 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:08:08 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:08:08 compute-0 nova_compute[239965]: </domain>
Jan 26 16:08:08 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.091 239969 DEBUG nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Preparing to wait for external event network-vif-plugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.091 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "0e27c936-f272-4dff-b393-9602b77da425-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.091 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.091 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.092 239969 DEBUG nova.virt.libvirt.vif [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1705397455',display_name='tempest-ServersTestJSON-server-1705397455',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1705397455',id=97,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-x3s4kata',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-mem
ber'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:03Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=0e27c936-f272-4dff-b393-9602b77da425,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.092 239969 DEBUG nova.network.os_vif_util [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.093 239969 DEBUG nova.network.os_vif_util [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:31:a8,bridge_name='br-int',has_traffic_filtering=True,id=6bca1cb5-f515-4008-9c5f-ccd488d46f98,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bca1cb5-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.093 239969 DEBUG os_vif [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:31:a8,bridge_name='br-int',has_traffic_filtering=True,id=6bca1cb5-f515-4008-9c5f-ccd488d46f98,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bca1cb5-f5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.094 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.094 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.095 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.097 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.098 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6bca1cb5-f5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.098 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6bca1cb5-f5, col_values=(('external_ids', {'iface-id': '6bca1cb5-f515-4008-9c5f-ccd488d46f98', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:31:a8', 'vm-uuid': '0e27c936-f272-4dff-b393-9602b77da425'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.099 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:08 compute-0 NetworkManager[48954]: <info>  [1769443688.1005] manager: (tap6bca1cb5-f5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.102 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.107 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.108 239969 INFO os_vif [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:31:a8,bridge_name='br-int',has_traffic_filtering=True,id=6bca1cb5-f515-4008-9c5f-ccd488d46f98,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bca1cb5-f5')
Jan 26 16:08:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 412 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 9.3 MiB/s wr, 528 op/s
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.211 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.211 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.211 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No VIF found with MAC fa:16:3e:a5:31:a8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.212 239969 INFO nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Using config drive
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.248 239969 DEBUG nova.storage.rbd_utils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 0e27c936-f272-4dff-b393-9602b77da425_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.258 239969 DEBUG oslo_concurrency.processutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3857070454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2797932050' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:08 compute-0 ceph-mon[75140]: osdmap e270: 3 total, 3 up, 3 in
Jan 26 16:08:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4153946357' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.356 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 59b6d7ef-0e86-41af-af94-b94def8570b0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.448 239969 DEBUG nova.storage.rbd_utils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 59b6d7ef-0e86-41af-af94-b94def8570b0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.476 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443673.449741, f3d29c29-2c6a-4e49-bd86-415f47139844 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.476 239969 INFO nova.compute.manager [-] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] VM Stopped (Lifecycle Event)
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.532 239969 DEBUG nova.compute.manager [None req-911356f4-406d-498a-a1be-083f731214b4 - - - - - -] [instance: f3d29c29-2c6a-4e49-bd86-415f47139844] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.541 239969 DEBUG nova.objects.instance [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 59b6d7ef-0e86-41af-af94-b94def8570b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.713 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.713 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Ensure instance console log exists: /var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.714 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.714 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.714 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/964163739' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.824 239969 DEBUG oslo_concurrency.processutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.830 239969 DEBUG nova.compute.provider_tree [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.863 239969 DEBUG nova.scheduler.client.report [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.896 239969 DEBUG oslo_concurrency.lockutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:08 compute-0 nova_compute[239965]: 2026-01-26 16:08:08.926 239969 INFO nova.scheduler.client.report [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Deleted allocations for instance 8dcfffb0-b36f-463e-b49a-9954483b8f5b
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.002 239969 DEBUG oslo_concurrency.lockutils [None req-dba2761a-9e81-4e39-a2cf-e3721075606f ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "8dcfffb0-b36f-463e-b49a-9954483b8f5b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:09 compute-0 ceph-mon[75140]: pgmap v1690: 305 pgs: 305 active+clean; 412 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 9.3 MiB/s wr, 528 op/s
Jan 26 16:08:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/964163739' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.530 239969 INFO nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Creating config drive at /var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425/disk.config
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.534 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtxkkkk8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.675 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtxkkkk8" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.710 239969 DEBUG nova.storage.rbd_utils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 0e27c936-f272-4dff-b393-9602b77da425_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.715 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425/disk.config 0e27c936-f272-4dff-b393-9602b77da425_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.811 239969 INFO nova.compute.manager [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Rebuilding instance
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.871 239969 DEBUG oslo_concurrency.processutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425/disk.config 0e27c936-f272-4dff-b393-9602b77da425_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.872 239969 INFO nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Deleting local config drive /var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425/disk.config because it was imported into RBD.
Jan 26 16:08:09 compute-0 kernel: tap6bca1cb5-f5: entered promiscuous mode
Jan 26 16:08:09 compute-0 NetworkManager[48954]: <info>  [1769443689.9240] manager: (tap6bca1cb5-f5): new Tun device (/org/freedesktop/NetworkManager/Devices/396)
Jan 26 16:08:09 compute-0 ovn_controller[146046]: 2026-01-26T16:08:09Z|00948|binding|INFO|Claiming lport 6bca1cb5-f515-4008-9c5f-ccd488d46f98 for this chassis.
Jan 26 16:08:09 compute-0 ovn_controller[146046]: 2026-01-26T16:08:09Z|00949|binding|INFO|6bca1cb5-f515-4008-9c5f-ccd488d46f98: Claiming fa:16:3e:a5:31:a8 10.100.0.6
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.930 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.935 239969 DEBUG oslo_concurrency.lockutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.936 239969 DEBUG oslo_concurrency.lockutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.936 239969 DEBUG oslo_concurrency.lockutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.937 239969 DEBUG oslo_concurrency.lockutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.937 239969 DEBUG oslo_concurrency.lockutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.938 239969 INFO nova.compute.manager [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Terminating instance
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.939 239969 DEBUG nova.compute.manager [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:08:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:09.939 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:31:a8 10.100.0.6'], port_security=['fa:16:3e:a5:31:a8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e27c936-f272-4dff-b393-9602b77da425', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=6bca1cb5-f515-4008-9c5f-ccd488d46f98) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:09.941 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 6bca1cb5-f515-4008-9c5f-ccd488d46f98 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 bound to our chassis
Jan 26 16:08:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:09.944 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:08:09 compute-0 systemd-udevd[322703]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:08:09 compute-0 systemd-machined[208061]: New machine qemu-119-instance-00000061.
Jan 26 16:08:09 compute-0 ovn_controller[146046]: 2026-01-26T16:08:09Z|00950|binding|INFO|Setting lport 6bca1cb5-f515-4008-9c5f-ccd488d46f98 ovn-installed in OVS
Jan 26 16:08:09 compute-0 ovn_controller[146046]: 2026-01-26T16:08:09Z|00951|binding|INFO|Setting lport 6bca1cb5-f515-4008-9c5f-ccd488d46f98 up in Southbound
Jan 26 16:08:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:09.963 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[961ce4b4-5ff8-4dcb-872b-8e81627ad8fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:09 compute-0 nova_compute[239965]: 2026-01-26 16:08:09.963 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:09 compute-0 systemd[1]: Started Virtual Machine qemu-119-instance-00000061.
Jan 26 16:08:09 compute-0 NetworkManager[48954]: <info>  [1769443689.9731] device (tap6bca1cb5-f5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:08:09 compute-0 NetworkManager[48954]: <info>  [1769443689.9743] device (tap6bca1cb5-f5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:08:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:09.995 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bef95b6b-c4d2-4dd3-b483-a9e778f449fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:09.998 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[09d8b0fc-55f4-439d-bc3f-7f4c361dffb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 kernel: tap4dce6184-01 (unregistering): left promiscuous mode
Jan 26 16:08:10 compute-0 NetworkManager[48954]: <info>  [1769443690.0035] device (tap4dce6184-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 ovn_controller[146046]: 2026-01-26T16:08:10Z|00952|binding|INFO|Releasing lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 from this chassis (sb_readonly=0)
Jan 26 16:08:10 compute-0 ovn_controller[146046]: 2026-01-26T16:08:10Z|00953|binding|INFO|Setting lport 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 down in Southbound
Jan 26 16:08:10 compute-0 ovn_controller[146046]: 2026-01-26T16:08:10Z|00954|binding|INFO|Removing iface tap4dce6184-01 ovn-installed in OVS
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.015 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.019 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:ea:5b 10.100.0.6'], port_security=['fa:16:3e:72:ea:5b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '571cd10f-e22e-43b6-9523-e3539d047e4a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '56f8818d291f4e738d868673048ce025', 'neutron:revision_number': '9', 'neutron:security_group_ids': '8357fb8f-8173-42b8-8413-682bfcbe9368', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=112ebdb6-08cf-4ff4-bdfb-063b37c813d0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4dce6184-01c6-4ef8-a5d3-7fc88f784d89) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.040 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b684df8d-a04f-4a69-b540-40f2b3b0165f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000055.scope: Deactivated successfully.
Jan 26 16:08:10 compute-0 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d00000055.scope: Consumed 10.100s CPU time.
Jan 26 16:08:10 compute-0 systemd-machined[208061]: Machine qemu-116-instance-00000055 terminated.
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.060 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4be359-2460-453c-94e0-440760ccbb46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 25, 'rx_bytes': 658, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 25, 'rx_bytes': 658, 'tx_bytes': 1194, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322719, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.087 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1528c06d-58ae-4b1f-a0c8-9f2b9a30966c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322720, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322720, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.088 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.089 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.093 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.094 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.095 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.095 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.095 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.097 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4dce6184-01c6-4ef8-a5d3-7fc88f784d89 in datapath 67ce52d3-4d78-44cc-ab83-0fab051ec68d unbound from our chassis
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.098 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 67ce52d3-4d78-44cc-ab83-0fab051ec68d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.098 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[185a3918-2df9-48c1-aede-0d7aebded004]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.099 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d namespace which is not needed anymore
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.116 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'trusted_certs' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1691: 305 pgs: 305 active+clean; 412 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 8.0 MiB/s wr, 392 op/s
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.143 239969 DEBUG nova.compute.manager [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.185 239969 INFO nova.virt.libvirt.driver [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Instance destroyed successfully.
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.185 239969 DEBUG nova.objects.instance [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lazy-loading 'resources' on Instance uuid 571cd10f-e22e-43b6-9523-e3539d047e4a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.203 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'pci_requests' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.204 239969 DEBUG nova.virt.libvirt.vif [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:05:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-2114557834',display_name='tempest-ServerActionsTestOtherB-server-2114557834',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-2114557834',id=85,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH1KacXv8geoEy+28E7FY8ymb1wNpeorh0ig3qqxM26aclzAUqPX8+7CUPQ9iESN1O5fiTgrITEiydlNm6ZYPZvODzMzw3Vex9NYYyqIj3iZQ6pC0nbgSJxWQCDOHEzqIA==',key_name='tempest-keypair-1024411567',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:08:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='56f8818d291f4e738d868673048ce025',ramdisk_id='',reservation_id='r-1wqrxcju',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1778121066',owner_user_name='tempest-ServerActionsTestOtherB-1778121066-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:08:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ad9c6196af60436caf20747e96ad8388',uuid=571cd10f-e22e-43b6-9523-e3539d047e4a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.204 239969 DEBUG nova.network.os_vif_util [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converting VIF {"id": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "address": "fa:16:3e:72:ea:5b", "network": {"id": "67ce52d3-4d78-44cc-ab83-0fab051ec68d", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-1239214195-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "56f8818d291f4e738d868673048ce025", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4dce6184-01", "ovs_interfaceid": "4dce6184-01c6-4ef8-a5d3-7fc88f784d89", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.205 239969 DEBUG nova.network.os_vif_util [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.206 239969 DEBUG os_vif [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.207 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.208 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4dce6184-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.209 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.212 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.214 239969 INFO os_vif [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:ea:5b,bridge_name='br-int',has_traffic_filtering=True,id=4dce6184-01c6-4ef8-a5d3-7fc88f784d89,network=Network(67ce52d3-4d78-44cc-ab83-0fab051ec68d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4dce6184-01')
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.234 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'pci_devices' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.247 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'resources' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:10 compute-0 neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d[312073]: [NOTICE]   (312077) : haproxy version is 2.8.14-c23fe91
Jan 26 16:08:10 compute-0 neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d[312073]: [NOTICE]   (312077) : path to executable is /usr/sbin/haproxy
Jan 26 16:08:10 compute-0 neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d[312073]: [ALERT]    (312077) : Current worker (312079) exited with code 143 (Terminated)
Jan 26 16:08:10 compute-0 neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d[312073]: [WARNING]  (312077) : All workers exited. Exiting... (0)
Jan 26 16:08:10 compute-0 systemd[1]: libpod-d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e.scope: Deactivated successfully.
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.258 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'migration_context' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:10 compute-0 podman[322741]: 2026-01-26 16:08:10.259463559 +0000 UTC m=+0.063372271 container died d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.279 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.285 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e-userdata-shm.mount: Deactivated successfully.
Jan 26 16:08:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-627e3fb86124d8cc8f1e97b53597d3b8cc738188852576428221fd2d173f4280-merged.mount: Deactivated successfully.
Jan 26 16:08:10 compute-0 ceph-mon[75140]: pgmap v1691: 305 pgs: 305 active+clean; 412 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 8.0 MiB/s wr, 392 op/s
Jan 26 16:08:10 compute-0 podman[322741]: 2026-01-26 16:08:10.316400712 +0000 UTC m=+0.120309424 container cleanup d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:08:10 compute-0 systemd[1]: libpod-conmon-d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e.scope: Deactivated successfully.
Jan 26 16:08:10 compute-0 podman[322804]: 2026-01-26 16:08:10.402376365 +0000 UTC m=+0.056188676 container remove d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.409 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4204aae6-6add-44da-9ed4-6f9839c7c0c1]: (4, ('Mon Jan 26 04:08:10 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d (d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e)\nd4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e\nMon Jan 26 04:08:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d (d4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e)\nd4e7f1af173924e1831d49d02d154f1888f6725f24bfdf3b3c561074865eed1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.411 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[42ccbd6b-c3fb-40e6-9c31-ee691f1cac86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.412 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap67ce52d3-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.414 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 kernel: tap67ce52d3-40: left promiscuous mode
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.436 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.442 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.445 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[edff87b9-8837-4987-a8d1-fad412bae445]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.462 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e67c5d61-8346-4204-acab-72777575dea9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.463 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[40036c93-8a2d-43b3-a1cb-8dc5c0b5b33c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.480 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[894cb908-6b17-44c6-81cf-e20cb534b922]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 502147, 'reachable_time': 19055, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322864, 'error': None, 'target': 'ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d67ce52d3\x2d4d78\x2d44cc\x2dab83\x2d0fab051ec68d.mount: Deactivated successfully.
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.485 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-67ce52d3-4d78-44cc-ab83-0fab051ec68d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:08:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:10.485 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f7881d-8dfd-4ded-9fba-d1869da796fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.523 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443690.5209856, 0e27c936-f272-4dff-b393-9602b77da425 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.523 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] VM Started (Lifecycle Event)
Jan 26 16:08:10 compute-0 podman[322845]: 2026-01-26 16:08:10.524605435 +0000 UTC m=+0.075967000 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.542 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.547 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443690.5211241, 0e27c936-f272-4dff-b393-9602b77da425 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.547 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] VM Paused (Lifecycle Event)
Jan 26 16:08:10 compute-0 podman[322846]: 2026-01-26 16:08:10.555118441 +0000 UTC m=+0.106603819 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.567 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.569 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.577 239969 INFO nova.virt.libvirt.driver [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Deleting instance files /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a_del
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.577 239969 INFO nova.virt.libvirt.driver [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Deletion of /var/lib/nova/instances/571cd10f-e22e-43b6-9523-e3539d047e4a_del complete
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.592 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.633 239969 INFO nova.compute.manager [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Took 0.69 seconds to destroy the instance on the hypervisor.
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.634 239969 DEBUG oslo.service.loopingcall [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.634 239969 DEBUG nova.compute.manager [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:08:10 compute-0 nova_compute[239965]: 2026-01-26 16:08:10.634 239969 DEBUG nova.network.neutron [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.107 239969 DEBUG nova.network.neutron [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Successfully created port: a8973089-7e12-4dd1-be5a-417921b2ddf9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.566 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443676.5530648, 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.566 239969 INFO nova.compute.manager [-] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] VM Stopped (Lifecycle Event)
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.586 239969 DEBUG nova.compute.manager [None req-9bcfbe38-8ccd-403d-82d0-d36da36a70d3 - - - - - -] [instance: 2e2288e8-d2d5-4aa6-9b07-a9976dd31e6e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.756 239969 DEBUG nova.compute.manager [req-7c01901b-1853-489d-9bac-b79083b1c627 req-77ef862c-feec-45d7-a13d-d2db9b115450 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-unplugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.756 239969 DEBUG oslo_concurrency.lockutils [req-7c01901b-1853-489d-9bac-b79083b1c627 req-77ef862c-feec-45d7-a13d-d2db9b115450 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.757 239969 DEBUG oslo_concurrency.lockutils [req-7c01901b-1853-489d-9bac-b79083b1c627 req-77ef862c-feec-45d7-a13d-d2db9b115450 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.757 239969 DEBUG oslo_concurrency.lockutils [req-7c01901b-1853-489d-9bac-b79083b1c627 req-77ef862c-feec-45d7-a13d-d2db9b115450 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.757 239969 DEBUG nova.compute.manager [req-7c01901b-1853-489d-9bac-b79083b1c627 req-77ef862c-feec-45d7-a13d-d2db9b115450 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] No waiting events found dispatching network-vif-unplugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.758 239969 DEBUG nova.compute.manager [req-7c01901b-1853-489d-9bac-b79083b1c627 req-77ef862c-feec-45d7-a13d-d2db9b115450 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-unplugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.791 239969 DEBUG nova.network.neutron [req-b3eb3cc9-461f-439f-9e34-99704f866d65 req-5b46d310-a106-416c-a73b-ef1b8bef8d3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Updated VIF entry in instance network info cache for port 6bca1cb5-f515-4008-9c5f-ccd488d46f98. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.791 239969 DEBUG nova.network.neutron [req-b3eb3cc9-461f-439f-9e34-99704f866d65 req-5b46d310-a106-416c-a73b-ef1b8bef8d3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Updating instance_info_cache with network_info: [{"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.828 239969 DEBUG oslo_concurrency.lockutils [req-b3eb3cc9-461f-439f-9e34-99704f866d65 req-5b46d310-a106-416c-a73b-ef1b8bef8d3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0e27c936-f272-4dff-b393-9602b77da425" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:11 compute-0 nova_compute[239965]: 2026-01-26 16:08:11.829 239969 DEBUG nova.compute.manager [req-b3eb3cc9-461f-439f-9e34-99704f866d65 req-5b46d310-a106-416c-a73b-ef1b8bef8d3d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Received event network-vif-deleted-d0f5820b-f3ad-4a26-84c4-c20c732dbcfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.091 239969 DEBUG nova.compute.manager [req-51a0e458-3bfe-4994-94a3-67bd1042aa52 req-137c2642-93eb-4459-9f3e-5ec2ee1d71d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Received event network-vif-plugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.091 239969 DEBUG oslo_concurrency.lockutils [req-51a0e458-3bfe-4994-94a3-67bd1042aa52 req-137c2642-93eb-4459-9f3e-5ec2ee1d71d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0e27c936-f272-4dff-b393-9602b77da425-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.091 239969 DEBUG oslo_concurrency.lockutils [req-51a0e458-3bfe-4994-94a3-67bd1042aa52 req-137c2642-93eb-4459-9f3e-5ec2ee1d71d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.091 239969 DEBUG oslo_concurrency.lockutils [req-51a0e458-3bfe-4994-94a3-67bd1042aa52 req-137c2642-93eb-4459-9f3e-5ec2ee1d71d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.092 239969 DEBUG nova.compute.manager [req-51a0e458-3bfe-4994-94a3-67bd1042aa52 req-137c2642-93eb-4459-9f3e-5ec2ee1d71d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Processing event network-vif-plugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.092 239969 DEBUG nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.096 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.100 239969 INFO nova.virt.libvirt.driver [-] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Instance spawned successfully.
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.101 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443692.1008587, 0e27c936-f272-4dff-b393-9602b77da425 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.101 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] VM Resumed (Lifecycle Event)
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.102 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:08:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 360 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 8.0 MiB/s wr, 480 op/s
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.127 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.133 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.136 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.137 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.137 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.138 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.138 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.139 239969 DEBUG nova.virt.libvirt.driver [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.164 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.194 239969 INFO nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Took 8.50 seconds to spawn the instance on the hypervisor.
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.195 239969 DEBUG nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.252 239969 INFO nova.compute.manager [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Took 10.30 seconds to build instance.
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.267 239969 DEBUG oslo_concurrency.lockutils [None req-fccfa361-a4a8-476e-b6be-f46bb0e26af0 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.537s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Jan 26 16:08:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Jan 26 16:08:12 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.851 239969 DEBUG nova.network.neutron [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:12 compute-0 nova_compute[239965]: 2026-01-26 16:08:12.872 239969 INFO nova.compute.manager [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Took 2.24 seconds to deallocate network for instance.
Jan 26 16:08:13 compute-0 nova_compute[239965]: 2026-01-26 16:08:13.126 239969 DEBUG oslo_concurrency.lockutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:13 compute-0 nova_compute[239965]: 2026-01-26 16:08:13.126 239969 DEBUG oslo_concurrency.lockutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:13 compute-0 ceph-mon[75140]: pgmap v1692: 305 pgs: 305 active+clean; 360 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 8.0 MiB/s wr, 480 op/s
Jan 26 16:08:13 compute-0 ceph-mon[75140]: osdmap e271: 3 total, 3 up, 3 in
Jan 26 16:08:13 compute-0 nova_compute[239965]: 2026-01-26 16:08:13.292 239969 DEBUG oslo_concurrency.processutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1586846008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:13 compute-0 nova_compute[239965]: 2026-01-26 16:08:13.902 239969 DEBUG oslo_concurrency.processutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:13 compute-0 nova_compute[239965]: 2026-01-26 16:08:13.907 239969 DEBUG nova.compute.provider_tree [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:13 compute-0 nova_compute[239965]: 2026-01-26 16:08:13.938 239969 DEBUG nova.scheduler.client.report [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:13 compute-0 nova_compute[239965]: 2026-01-26 16:08:13.958 239969 DEBUG oslo_concurrency.lockutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.832s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:13 compute-0 nova_compute[239965]: 2026-01-26 16:08:13.990 239969 INFO nova.scheduler.client.report [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Deleted allocations for instance 571cd10f-e22e-43b6-9523-e3539d047e4a
Jan 26 16:08:14 compute-0 nova_compute[239965]: 2026-01-26 16:08:14.047 239969 DEBUG nova.network.neutron [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Successfully updated port: a8973089-7e12-4dd1-be5a-417921b2ddf9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:08:14 compute-0 nova_compute[239965]: 2026-01-26 16:08:14.093 239969 DEBUG oslo_concurrency.lockutils [None req-b9174c45-92dd-441d-aaeb-0e0d33231838 ad9c6196af60436caf20747e96ad8388 56f8818d291f4e738d868673048ce025 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1694: 305 pgs: 305 active+clean; 353 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 7.5 MiB/s wr, 462 op/s
Jan 26 16:08:14 compute-0 nova_compute[239965]: 2026-01-26 16:08:14.174 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:14 compute-0 nova_compute[239965]: 2026-01-26 16:08:14.174 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:14 compute-0 nova_compute[239965]: 2026-01-26 16:08:14.175 239969 DEBUG nova.network.neutron [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:08:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1586846008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.061 239969 DEBUG nova.network.neutron [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.130 239969 DEBUG nova.compute.manager [req-262c0e56-ecaa-4dee-81c1-71087b878e58 req-b7446f7f-2127-4ea2-b803-e04c3aff2616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.131 239969 DEBUG oslo_concurrency.lockutils [req-262c0e56-ecaa-4dee-81c1-71087b878e58 req-b7446f7f-2127-4ea2-b803-e04c3aff2616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.131 239969 DEBUG oslo_concurrency.lockutils [req-262c0e56-ecaa-4dee-81c1-71087b878e58 req-b7446f7f-2127-4ea2-b803-e04c3aff2616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.131 239969 DEBUG oslo_concurrency.lockutils [req-262c0e56-ecaa-4dee-81c1-71087b878e58 req-b7446f7f-2127-4ea2-b803-e04c3aff2616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "571cd10f-e22e-43b6-9523-e3539d047e4a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.132 239969 DEBUG nova.compute.manager [req-262c0e56-ecaa-4dee-81c1-71087b878e58 req-b7446f7f-2127-4ea2-b803-e04c3aff2616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] No waiting events found dispatching network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.132 239969 WARNING nova.compute.manager [req-262c0e56-ecaa-4dee-81c1-71087b878e58 req-b7446f7f-2127-4ea2-b803-e04c3aff2616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received unexpected event network-vif-plugged-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 for instance with vm_state deleted and task_state None.
Jan 26 16:08:15 compute-0 ceph-mon[75140]: pgmap v1694: 305 pgs: 305 active+clean; 353 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 7.5 MiB/s wr, 462 op/s
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.210 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.217 239969 DEBUG nova.compute.manager [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Received event network-vif-plugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.217 239969 DEBUG oslo_concurrency.lockutils [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0e27c936-f272-4dff-b393-9602b77da425-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.218 239969 DEBUG oslo_concurrency.lockutils [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.218 239969 DEBUG oslo_concurrency.lockutils [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.218 239969 DEBUG nova.compute.manager [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] No waiting events found dispatching network-vif-plugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.218 239969 WARNING nova.compute.manager [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Received unexpected event network-vif-plugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 for instance with vm_state active and task_state None.
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.218 239969 DEBUG nova.compute.manager [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Received event network-vif-deleted-4dce6184-01c6-4ef8-a5d3-7fc88f784d89 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.219 239969 DEBUG nova.compute.manager [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received event network-changed-a8973089-7e12-4dd1-be5a-417921b2ddf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.219 239969 DEBUG nova.compute.manager [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Refreshing instance network info cache due to event network-changed-a8973089-7e12-4dd1-be5a-417921b2ddf9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:08:15 compute-0 nova_compute[239965]: 2026-01-26 16:08:15.219 239969 DEBUG oslo_concurrency.lockutils [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.076 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 353 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 218 op/s
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.475 239969 DEBUG nova.network.neutron [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Updating instance_info_cache with network_info: [{"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.614 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.614 239969 DEBUG nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Instance network_info: |[{"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.615 239969 DEBUG oslo_concurrency.lockutils [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.616 239969 DEBUG nova.network.neutron [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Refreshing network info cache for port a8973089-7e12-4dd1-be5a-417921b2ddf9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.619 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Start _get_guest_xml network_info=[{"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.624 239969 WARNING nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.630 239969 DEBUG nova.virt.libvirt.host [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.630 239969 DEBUG nova.virt.libvirt.host [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.639 239969 DEBUG nova.virt.libvirt.host [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.640 239969 DEBUG nova.virt.libvirt.host [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.641 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.642 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.642 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.643 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.643 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.643 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.643 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.644 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.644 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.644 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.645 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.645 239969 DEBUG nova.virt.hardware [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.649 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.792 239969 DEBUG oslo_concurrency.lockutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "0e27c936-f272-4dff-b393-9602b77da425" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.793 239969 DEBUG oslo_concurrency.lockutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.794 239969 DEBUG oslo_concurrency.lockutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "0e27c936-f272-4dff-b393-9602b77da425-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.794 239969 DEBUG oslo_concurrency.lockutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.794 239969 DEBUG oslo_concurrency.lockutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.796 239969 INFO nova.compute.manager [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Terminating instance
Jan 26 16:08:16 compute-0 nova_compute[239965]: 2026-01-26 16:08:16.797 239969 DEBUG nova.compute.manager [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:08:17 compute-0 kernel: tap6bca1cb5-f5 (unregistering): left promiscuous mode
Jan 26 16:08:17 compute-0 NetworkManager[48954]: <info>  [1769443697.1012] device (tap6bca1cb5-f5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:08:17 compute-0 ovn_controller[146046]: 2026-01-26T16:08:17Z|00955|binding|INFO|Releasing lport 6bca1cb5-f515-4008-9c5f-ccd488d46f98 from this chassis (sb_readonly=0)
Jan 26 16:08:17 compute-0 ovn_controller[146046]: 2026-01-26T16:08:17Z|00956|binding|INFO|Setting lport 6bca1cb5-f515-4008-9c5f-ccd488d46f98 down in Southbound
Jan 26 16:08:17 compute-0 ovn_controller[146046]: 2026-01-26T16:08:17Z|00957|binding|INFO|Removing iface tap6bca1cb5-f5 ovn-installed in OVS
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.119 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.136 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.158 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:31:a8 10.100.0.6'], port_security=['fa:16:3e:a5:31:a8 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '0e27c936-f272-4dff-b393-9602b77da425', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=6bca1cb5-f515-4008-9c5f-ccd488d46f98) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.159 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 6bca1cb5-f515-4008-9c5f-ccd488d46f98 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 unbound from our chassis
Jan 26 16:08:17 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Deactivated successfully.
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.160 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:08:17 compute-0 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000061.scope: Consumed 5.298s CPU time.
Jan 26 16:08:17 compute-0 systemd-machined[208061]: Machine qemu-119-instance-00000061 terminated.
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.182 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[27f36fde-1dc2-4dd7-b375-958f214f0a45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:17 compute-0 ceph-mon[75140]: pgmap v1695: 305 pgs: 305 active+clean; 353 MiB data, 779 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 218 op/s
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.220 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f1629fc1-a89c-4b29-91a6-c92bd3af211b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.227 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.226 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[452b8b76-0af3-4971-a719-836be24cfd61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.249 239969 INFO nova.virt.libvirt.driver [-] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Instance destroyed successfully.
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.251 239969 DEBUG nova.objects.instance [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'resources' on Instance uuid 0e27c936-f272-4dff-b393-9602b77da425 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.266 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0045d6dc-cec4-42be-b988-7be594f81c43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.289 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[37256b68-19fd-40ab-8e21-e64ab60c21bc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 27, 'rx_bytes': 658, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 27, 'rx_bytes': 658, 'tx_bytes': 1278, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322954, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.305 239969 DEBUG nova.virt.libvirt.vif [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:08:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1705397455',display_name='tempest-ServersTestJSON-server-1705397455',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1705397455',id=97,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:08:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-x3s4kata',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:08:15Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=0e27c936-f272-4dff-b393-9602b77da425,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.306 239969 DEBUG nova.network.os_vif_util [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "address": "fa:16:3e:a5:31:a8", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6bca1cb5-f5", "ovs_interfaceid": "6bca1cb5-f515-4008-9c5f-ccd488d46f98", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.306 239969 DEBUG nova.network.os_vif_util [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:31:a8,bridge_name='br-int',has_traffic_filtering=True,id=6bca1cb5-f515-4008-9c5f-ccd488d46f98,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bca1cb5-f5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.307 239969 DEBUG os_vif [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:31:a8,bridge_name='br-int',has_traffic_filtering=True,id=6bca1cb5-f515-4008-9c5f-ccd488d46f98,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bca1cb5-f5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.309 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6bca1cb5-f5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.310 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[85c6f32f-8ac6-4c53-bd58-9d68079352c0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322955, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322955, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.313 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.313 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.315 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.315 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.316 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.315 239969 INFO os_vif [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:31:a8,bridge_name='br-int',has_traffic_filtering=True,id=6bca1cb5-f515-4008-9c5f-ccd488d46f98,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6bca1cb5-f5')
Jan 26 16:08:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:17.316 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1677678004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.371 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.392 239969 DEBUG nova.storage.rbd_utils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59b6d7ef-0e86-41af-af94-b94def8570b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.397 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.572 239969 DEBUG nova.compute.manager [req-9ec1e072-25e9-4092-8f7b-a9ab28b2a0d2 req-b9acac24-6fdf-41ac-991b-eb2061dc1b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Received event network-vif-unplugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.574 239969 DEBUG oslo_concurrency.lockutils [req-9ec1e072-25e9-4092-8f7b-a9ab28b2a0d2 req-b9acac24-6fdf-41ac-991b-eb2061dc1b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0e27c936-f272-4dff-b393-9602b77da425-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.574 239969 DEBUG oslo_concurrency.lockutils [req-9ec1e072-25e9-4092-8f7b-a9ab28b2a0d2 req-b9acac24-6fdf-41ac-991b-eb2061dc1b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.574 239969 DEBUG oslo_concurrency.lockutils [req-9ec1e072-25e9-4092-8f7b-a9ab28b2a0d2 req-b9acac24-6fdf-41ac-991b-eb2061dc1b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.575 239969 DEBUG nova.compute.manager [req-9ec1e072-25e9-4092-8f7b-a9ab28b2a0d2 req-b9acac24-6fdf-41ac-991b-eb2061dc1b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] No waiting events found dispatching network-vif-unplugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.575 239969 DEBUG nova.compute.manager [req-9ec1e072-25e9-4092-8f7b-a9ab28b2a0d2 req-b9acac24-6fdf-41ac-991b-eb2061dc1b67 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Received event network-vif-unplugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:08:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.646 239969 INFO nova.virt.libvirt.driver [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Deleting instance files /var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425_del
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.647 239969 INFO nova.virt.libvirt.driver [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Deletion of /var/lib/nova/instances/0e27c936-f272-4dff-b393-9602b77da425_del complete
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.736 239969 INFO nova.compute.manager [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Took 0.94 seconds to destroy the instance on the hypervisor.
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.737 239969 DEBUG oslo.service.loopingcall [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.737 239969 DEBUG nova.compute.manager [-] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:08:17 compute-0 nova_compute[239965]: 2026-01-26 16:08:17.738 239969 DEBUG nova.network.neutron [-] [instance: 0e27c936-f272-4dff-b393-9602b77da425] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.048 239969 DEBUG nova.network.neutron [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Updated VIF entry in instance network info cache for port a8973089-7e12-4dd1-be5a-417921b2ddf9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.049 239969 DEBUG nova.network.neutron [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Updating instance_info_cache with network_info: [{"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425677604' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.072 239969 DEBUG oslo_concurrency.lockutils [req-b0754fb6-1f95-4783-ab10-96f11112c559 req-5d1a5ee0-1f3b-4b61-81d2-8ba0aeca813e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.085 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.689s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.087 239969 DEBUG nova.virt.libvirt.vif [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-491867800',display_name='tempest-TestGettingAddress-server-491867800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-491867800',id=98,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOs4HnVuAJnvNp8CfE4X6ZDqYLND76tO1hTE0oumV/FRhtgz3i41mfhLwvewrya4uSND6SmxBSCPXJNfLvwF+32ywxOiO4XFvZqstko0NAzPzy/X3VMwFSRh9eMG+0ToA==',key_name='tempest-TestGettingAddress-841928054',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-vhhodg3c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:07Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=59b6d7ef-0e86-41af-af94-b94def8570b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.087 239969 DEBUG nova.network.os_vif_util [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.089 239969 DEBUG nova.network.os_vif_util [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:71:19,bridge_name='br-int',has_traffic_filtering=True,id=a8973089-7e12-4dd1-be5a-417921b2ddf9,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8973089-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.090 239969 DEBUG nova.objects.instance [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 59b6d7ef-0e86-41af-af94-b94def8570b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.105 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <uuid>59b6d7ef-0e86-41af-af94-b94def8570b0</uuid>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <name>instance-00000062</name>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-491867800</nova:name>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:08:16</nova:creationTime>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <nova:port uuid="a8973089-7e12-4dd1-be5a-417921b2ddf9">
Jan 26 16:08:18 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5b:7119" ipVersion="6"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <system>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <entry name="serial">59b6d7ef-0e86-41af-af94-b94def8570b0</entry>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <entry name="uuid">59b6d7ef-0e86-41af-af94-b94def8570b0</entry>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     </system>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <os>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   </os>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <features>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   </features>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/59b6d7ef-0e86-41af-af94-b94def8570b0_disk">
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/59b6d7ef-0e86-41af-af94-b94def8570b0_disk.config">
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:18 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:5b:71:19"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <target dev="tapa8973089-7e"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0/console.log" append="off"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <video>
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     </video>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:08:18 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:08:18 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:08:18 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:08:18 compute-0 nova_compute[239965]: </domain>
Jan 26 16:08:18 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.111 239969 DEBUG nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Preparing to wait for external event network-vif-plugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.111 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.112 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.112 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.113 239969 DEBUG nova.virt.libvirt.vif [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-491867800',display_name='tempest-TestGettingAddress-server-491867800',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-491867800',id=98,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOs4HnVuAJnvNp8CfE4X6ZDqYLND76tO1hTE0oumV/FRhtgz3i41mfhLwvewrya4uSND6SmxBSCPXJNfLvwF+32ywxOiO4XFvZqstko0NAzPzy/X3VMwFSRh9eMG+0ToA==',key_name='tempest-TestGettingAddress-841928054',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-vhhodg3c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:07Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=59b6d7ef-0e86-41af-af94-b94def8570b0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.114 239969 DEBUG nova.network.os_vif_util [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.115 239969 DEBUG nova.network.os_vif_util [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5b:71:19,bridge_name='br-int',has_traffic_filtering=True,id=a8973089-7e12-4dd1-be5a-417921b2ddf9,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8973089-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.116 239969 DEBUG os_vif [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:71:19,bridge_name='br-int',has_traffic_filtering=True,id=a8973089-7e12-4dd1-be5a-417921b2ddf9,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8973089-7e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.117 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.118 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.118 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.123 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.124 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa8973089-7e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1696: 305 pgs: 305 active+clean; 353 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.125 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa8973089-7e, col_values=(('external_ids', {'iface-id': 'a8973089-7e12-4dd1-be5a-417921b2ddf9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5b:71:19', 'vm-uuid': '59b6d7ef-0e86-41af-af94-b94def8570b0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.127 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:18 compute-0 NetworkManager[48954]: <info>  [1769443698.1285] manager: (tapa8973089-7e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.132 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.133 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.133 239969 INFO os_vif [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5b:71:19,bridge_name='br-int',has_traffic_filtering=True,id=a8973089-7e12-4dd1-be5a-417921b2ddf9,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8973089-7e')
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.197 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.198 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.198 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:5b:71:19, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.199 239969 INFO nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Using config drive
Jan 26 16:08:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1677678004' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3425677604' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.232 239969 DEBUG nova.storage.rbd_utils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59b6d7ef-0e86-41af-af94-b94def8570b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.294 239969 DEBUG nova.network.neutron [-] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.322 239969 INFO nova.compute.manager [-] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Took 0.58 seconds to deallocate network for instance.
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.369 239969 DEBUG oslo_concurrency.lockutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.370 239969 DEBUG oslo_concurrency.lockutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:18 compute-0 nova_compute[239965]: 2026-01-26 16:08:18.491 239969 DEBUG oslo_concurrency.processutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740059587' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.097 239969 DEBUG oslo_concurrency.processutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.105 239969 DEBUG nova.compute.provider_tree [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:19 compute-0 ceph-mon[75140]: pgmap v1696: 305 pgs: 305 active+clean; 353 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 26 16:08:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1740059587' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.643 239969 DEBUG nova.compute.manager [req-c934a5e9-cb0e-47e9-ba51-512504134940 req-d72cc519-d73d-48bd-ac90-534ebdc201e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Received event network-vif-deleted-6bca1cb5-f515-4008-9c5f-ccd488d46f98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.652 239969 DEBUG nova.scheduler.client.report [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.680 239969 DEBUG oslo_concurrency.lockutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.788 239969 INFO nova.scheduler.client.report [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Deleted allocations for instance 0e27c936-f272-4dff-b393-9602b77da425
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.812 239969 INFO nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Creating config drive at /var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0/disk.config
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.818 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaa195c6g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.970 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaa195c6g" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.993 239969 DEBUG nova.storage.rbd_utils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59b6d7ef-0e86-41af-af94-b94def8570b0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:19 compute-0 nova_compute[239965]: 2026-01-26 16:08:19.996 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0/disk.config 59b6d7ef-0e86-41af-af94-b94def8570b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.035 239969 DEBUG nova.compute.manager [req-291c10a8-a812-4c41-bf81-2171937e0e95 req-1e27fd60-5d06-4337-aacb-264bfd6e883d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Received event network-vif-plugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.036 239969 DEBUG oslo_concurrency.lockutils [req-291c10a8-a812-4c41-bf81-2171937e0e95 req-1e27fd60-5d06-4337-aacb-264bfd6e883d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0e27c936-f272-4dff-b393-9602b77da425-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.036 239969 DEBUG oslo_concurrency.lockutils [req-291c10a8-a812-4c41-bf81-2171937e0e95 req-1e27fd60-5d06-4337-aacb-264bfd6e883d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.037 239969 DEBUG oslo_concurrency.lockutils [req-291c10a8-a812-4c41-bf81-2171937e0e95 req-1e27fd60-5d06-4337-aacb-264bfd6e883d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.037 239969 DEBUG nova.compute.manager [req-291c10a8-a812-4c41-bf81-2171937e0e95 req-1e27fd60-5d06-4337-aacb-264bfd6e883d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] No waiting events found dispatching network-vif-plugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.037 239969 WARNING nova.compute.manager [req-291c10a8-a812-4c41-bf81-2171937e0e95 req-1e27fd60-5d06-4337-aacb-264bfd6e883d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Received unexpected event network-vif-plugged-6bca1cb5-f515-4008-9c5f-ccd488d46f98 for instance with vm_state deleted and task_state None.
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.039 239969 DEBUG oslo_concurrency.lockutils [None req-cf5c2617-1106-4f7b-938c-cb12961cf8c5 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "0e27c936-f272-4dff-b393-9602b77da425" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1697: 305 pgs: 305 active+clean; 353 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.127 239969 DEBUG oslo_concurrency.processutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0/disk.config 59b6d7ef-0e86-41af-af94-b94def8570b0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.127 239969 INFO nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Deleting local config drive /var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0/disk.config because it was imported into RBD.
Jan 26 16:08:20 compute-0 kernel: tapa8973089-7e: entered promiscuous mode
Jan 26 16:08:20 compute-0 NetworkManager[48954]: <info>  [1769443700.1749] manager: (tapa8973089-7e): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Jan 26 16:08:20 compute-0 ovn_controller[146046]: 2026-01-26T16:08:20Z|00958|binding|INFO|Claiming lport a8973089-7e12-4dd1-be5a-417921b2ddf9 for this chassis.
Jan 26 16:08:20 compute-0 ovn_controller[146046]: 2026-01-26T16:08:20Z|00959|binding|INFO|a8973089-7e12-4dd1-be5a-417921b2ddf9: Claiming fa:16:3e:5b:71:19 10.100.0.11 2001:db8::f816:3eff:fe5b:7119
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.182 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:20 compute-0 ovn_controller[146046]: 2026-01-26T16:08:20Z|00960|binding|INFO|Setting lport a8973089-7e12-4dd1-be5a-417921b2ddf9 ovn-installed in OVS
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.200 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:20 compute-0 systemd-udevd[323110]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:08:20 compute-0 NetworkManager[48954]: <info>  [1769443700.2114] device (tapa8973089-7e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:08:20 compute-0 NetworkManager[48954]: <info>  [1769443700.2119] device (tapa8973089-7e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:08:20 compute-0 systemd-machined[208061]: New machine qemu-120-instance-00000062.
Jan 26 16:08:20 compute-0 systemd[1]: Started Virtual Machine qemu-120-instance-00000062.
Jan 26 16:08:20 compute-0 ovn_controller[146046]: 2026-01-26T16:08:20Z|00961|binding|INFO|Setting lport a8973089-7e12-4dd1-be5a-417921b2ddf9 up in Southbound
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.311 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:71:19 10.100.0.11 2001:db8::f816:3eff:fe5b:7119'], port_security=['fa:16:3e:5b:71:19 10.100.0.11 2001:db8::f816:3eff:fe5b:7119'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe5b:7119/64', 'neutron:device_id': '59b6d7ef-0e86-41af-af94-b94def8570b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2a2f41a-90e4-40a5-bfe6-8f9c75570ebd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afff7a52-f9b6-46d8-88e0-ad54d3e2a5be, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a8973089-7e12-4dd1-be5a-417921b2ddf9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.312 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a8973089-7e12-4dd1-be5a-417921b2ddf9 in datapath 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 bound to our chassis
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.314 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.327 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ea402bb6-050b-4147-a963-541edd9c4b9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.328 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8b5a2723-71 in ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.330 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8b5a2723-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.330 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[45060e32-0dfe-4c15-bb85-14587ee358f5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.331 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4dd13e73-3a30-4488-8dc7-e776f9894572]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.348 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[6802514a-4308-41ca-a198-e5da1ac77edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.361 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.374 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1349db0d-aa9f-4fe6-9b1a-11abdfb30538]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.411 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b7247416-6326-4c18-956a-04c4555fb595]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 NetworkManager[48954]: <info>  [1769443700.4212] manager: (tap8b5a2723-70): new Veth device (/org/freedesktop/NetworkManager/Devices/399)
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.417 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[24c5214a-dd53-4024-a4f7-61e1da524c3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.463 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f54d0991-949d-40e3-ba48-db2c6637aad4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.466 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdc0c74-d6d4-4c92-a4d5-821a2b7aae51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 NetworkManager[48954]: <info>  [1769443700.4947] device (tap8b5a2723-70): carrier: link connected
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.504 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[184caa1d-fc72-44c6-8b76-90bf131d86dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.522 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1101f6-88c8-4064-86f6-4f05a9ed29f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b5a2723-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:ea:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519130, 'reachable_time': 41400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323181, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.542 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[21fc71f1-e2cd-4d1a-9572-2539889ca2f2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe35:ea76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519130, 'tstamp': 519130}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323186, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.561 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f34c59-5c05-4ab5-b2ba-da8812b6e100]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b5a2723-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:ea:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519130, 'reachable_time': 41400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 323188, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.600 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3f4e2076-3c4c-4031-bfd6-570d2e4c9fa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.621 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443700.6205072, 59b6d7ef-0e86-41af-af94-b94def8570b0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.621 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] VM Started (Lifecycle Event)
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.642 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.645 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443700.620715, 59b6d7ef-0e86-41af-af94-b94def8570b0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.645 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] VM Paused (Lifecycle Event)
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.670 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[277741a3-fce4-40d1-8445-bd2902c85b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.671 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b5a2723-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.672 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.672 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b5a2723-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.674 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:20 compute-0 NetworkManager[48954]: <info>  [1769443700.6751] manager: (tap8b5a2723-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Jan 26 16:08:20 compute-0 kernel: tap8b5a2723-70: entered promiscuous mode
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.676 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.681 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b5a2723-70, col_values=(('external_ids', {'iface-id': '4fab6bb9-7be8-461a-b3cf-098c70ef4666'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:20 compute-0 ovn_controller[146046]: 2026-01-26T16:08:20Z|00962|binding|INFO|Releasing lport 4fab6bb9-7be8-461a-b3cf-098c70ef4666 from this chassis (sb_readonly=0)
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.682 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.683 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.685 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8b5a2723-70e1-4acc-a601-a4f0bf0d77a4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8b5a2723-70e1-4acc-a601-a4f0bf0d77a4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.685 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[28f58bce-dd4b-4caa-8ae3-189466243a1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.686 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/8b5a2723-70e1-4acc-a601-a4f0bf0d77a4.pid.haproxy
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:08:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:20.687 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'env', 'PROCESS_TAG=haproxy-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8b5a2723-70e1-4acc-a601-a4f0bf0d77a4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:08:20 compute-0 ovn_controller[146046]: 2026-01-26T16:08:20Z|00963|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.688 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:20 compute-0 ovn_controller[146046]: 2026-01-26T16:08:20Z|00964|binding|INFO|Releasing lport 4fab6bb9-7be8-461a-b3cf-098c70ef4666 from this chassis (sb_readonly=0)
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.694 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.697 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.705 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.770 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:20 compute-0 ovn_controller[146046]: 2026-01-26T16:08:20Z|00965|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:08:20 compute-0 ovn_controller[146046]: 2026-01-26T16:08:20Z|00966|binding|INFO|Releasing lport 4fab6bb9-7be8-461a-b3cf-098c70ef4666 from this chassis (sb_readonly=0)
Jan 26 16:08:20 compute-0 nova_compute[239965]: 2026-01-26 16:08:20.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:21 compute-0 nova_compute[239965]: 2026-01-26 16:08:21.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:21 compute-0 podman[323221]: 2026-01-26 16:08:21.017787822 +0000 UTC m=+0.020344328 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:08:21 compute-0 ceph-mon[75140]: pgmap v1697: 305 pgs: 305 active+clean; 353 MiB data, 776 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 240 op/s
Jan 26 16:08:21 compute-0 podman[323221]: 2026-01-26 16:08:21.30468498 +0000 UTC m=+0.307241476 container create 24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:08:21 compute-0 systemd[1]: Started libpod-conmon-24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc.scope.
Jan 26 16:08:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:08:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c6552f4e883ae6f8f0ba3dcddf7dce17cef332686611fb36173eb8a9b10d37d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:08:21 compute-0 podman[323221]: 2026-01-26 16:08:21.545237985 +0000 UTC m=+0.547794501 container init 24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:08:21 compute-0 podman[323221]: 2026-01-26 16:08:21.555036095 +0000 UTC m=+0.557592611 container start 24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:08:21 compute-0 neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4[323237]: [NOTICE]   (323241) : New worker (323243) forked
Jan 26 16:08:21 compute-0 neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4[323237]: [NOTICE]   (323241) : Loading success.
Jan 26 16:08:21 compute-0 nova_compute[239965]: 2026-01-26 16:08:21.623 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443686.620296, 8dcfffb0-b36f-463e-b49a-9954483b8f5b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:21 compute-0 nova_compute[239965]: 2026-01-26 16:08:21.623 239969 INFO nova.compute.manager [-] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] VM Stopped (Lifecycle Event)
Jan 26 16:08:21 compute-0 nova_compute[239965]: 2026-01-26 16:08:21.674 239969 DEBUG nova.compute.manager [None req-904b1121-6848-4536-ad4e-561e836a9fdb - - - - - -] [instance: 8dcfffb0-b36f-463e-b49a-9954483b8f5b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.124 239969 DEBUG nova.compute.manager [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received event network-vif-plugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.124 239969 DEBUG oslo_concurrency.lockutils [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.124 239969 DEBUG oslo_concurrency.lockutils [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.125 239969 DEBUG oslo_concurrency.lockutils [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.125 239969 DEBUG nova.compute.manager [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Processing event network-vif-plugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.125 239969 DEBUG nova.compute.manager [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received event network-vif-plugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.125 239969 DEBUG oslo_concurrency.lockutils [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.125 239969 DEBUG oslo_concurrency.lockutils [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.126 239969 DEBUG oslo_concurrency.lockutils [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.126 239969 DEBUG nova.compute.manager [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] No waiting events found dispatching network-vif-plugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.126 239969 WARNING nova.compute.manager [req-a8e94313-1c62-4f94-b61b-604f2fd4c58f req-32c0d1e8-2c56-4fd7-8614-2f00b91b56ca a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received unexpected event network-vif-plugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 for instance with vm_state building and task_state spawning.
Jan 26 16:08:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 305 active+clean; 368 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.5 MiB/s wr, 256 op/s
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.127 239969 DEBUG nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.130 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443702.1300755, 59b6d7ef-0e86-41af-af94-b94def8570b0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.130 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] VM Resumed (Lifecycle Event)
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.132 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.137 239969 INFO nova.virt.libvirt.driver [-] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Instance spawned successfully.
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.137 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.158 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.161 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.172 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.172 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.173 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.173 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.173 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.174 239969 DEBUG nova.virt.libvirt.driver [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.185 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.267 239969 INFO nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Took 14.50 seconds to spawn the instance on the hypervisor.
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.267 239969 DEBUG nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:22 compute-0 ceph-mon[75140]: pgmap v1698: 305 pgs: 305 active+clean; 368 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 5.5 MiB/s wr, 256 op/s
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.348 239969 INFO nova.compute.manager [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Took 15.89 seconds to build instance.
Jan 26 16:08:22 compute-0 nova_compute[239965]: 2026-01-26 16:08:22.378 239969 DEBUG oslo_concurrency.lockutils [None req-42dbbcc1-3827-41aa-aac7-149315a3fc3b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.984s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:23 compute-0 nova_compute[239965]: 2026-01-26 16:08:23.129 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:23 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Deactivated successfully.
Jan 26 16:08:23 compute-0 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d00000060.scope: Consumed 14.134s CPU time.
Jan 26 16:08:23 compute-0 systemd-machined[208061]: Machine qemu-118-instance-00000060 terminated.
Jan 26 16:08:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 305 active+clean; 372 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.5 MiB/s wr, 237 op/s
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.383 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.384 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.386 239969 INFO nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance shutdown successfully after 14 seconds.
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.391 239969 INFO nova.virt.libvirt.driver [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance destroyed successfully.
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.397 239969 INFO nova.virt.libvirt.driver [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance destroyed successfully.
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.421 239969 DEBUG nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.478 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.478 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.486 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.486 239969 INFO nova.compute.claims [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.685 239969 INFO nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Deleting instance files /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de_del
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.686 239969 INFO nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Deletion of /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de_del complete
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.879 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.911 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.912 239969 INFO nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Creating image(s)
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.935 239969 DEBUG nova.storage.rbd_utils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.960 239969 DEBUG nova.storage.rbd_utils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.988 239969 DEBUG nova.storage.rbd_utils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:24 compute-0 nova_compute[239965]: 2026-01-26 16:08:24.992 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.068 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.070 239969 DEBUG oslo_concurrency.lockutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.070 239969 DEBUG oslo_concurrency.lockutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.071 239969 DEBUG oslo_concurrency.lockutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.097 239969 DEBUG nova.storage.rbd_utils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.101 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.182 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443690.1801722, 571cd10f-e22e-43b6-9523-e3539d047e4a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.183 239969 INFO nova.compute.manager [-] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] VM Stopped (Lifecycle Event)
Jan 26 16:08:25 compute-0 ceph-mon[75140]: pgmap v1699: 305 pgs: 305 active+clean; 372 MiB data, 805 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.5 MiB/s wr, 237 op/s
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.206 239969 DEBUG nova.compute.manager [None req-793b20dc-e2c4-41b8-a8a0-a66c5d261604 - - - - - -] [instance: 571cd10f-e22e-43b6-9523-e3539d047e4a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835751153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.466 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.472 239969 DEBUG nova.compute.provider_tree [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.486 239969 DEBUG nova.scheduler.client.report [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.504 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.504 239969 DEBUG nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.531 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.584 239969 DEBUG nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.585 239969 DEBUG nova.network.neutron [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.593 239969 DEBUG nova.storage.rbd_utils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] resizing rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.621 239969 INFO nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.659 239969 DEBUG nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.665 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.666 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Ensure instance console log exists: /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.666 239969 DEBUG oslo_concurrency.lockutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.667 239969 DEBUG oslo_concurrency.lockutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.667 239969 DEBUG oslo_concurrency.lockutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.668 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.671 239969 WARNING nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.675 239969 DEBUG nova.virt.libvirt.host [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.675 239969 DEBUG nova.virt.libvirt.host [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.678 239969 DEBUG nova.virt.libvirt.host [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.678 239969 DEBUG nova.virt.libvirt.host [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.679 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.679 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.679 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.679 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.680 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.680 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.680 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.680 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.681 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.681 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.681 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.681 239969 DEBUG nova.virt.hardware [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.681 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'vcpu_model' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.699 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.796 239969 DEBUG nova.policy [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '392bf4c554724bd3b097b990cec964ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e3d3c26abe454a90816833e484abbbd5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.801 239969 DEBUG nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.803 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.803 239969 INFO nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Creating image(s)
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.825 239969 DEBUG nova.storage.rbd_utils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.846 239969 DEBUG nova.storage.rbd_utils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.875 239969 DEBUG nova.storage.rbd_utils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.878 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.949 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.950 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.950 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.950 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.970 239969 DEBUG nova.storage.rbd_utils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:25 compute-0 nova_compute[239965]: 2026-01-26 16:08:25.974 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.081 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 366 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 260 op/s
Jan 26 16:08:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3835751153' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.282 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/58981422' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.315 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.338 239969 DEBUG nova.storage.rbd_utils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.342 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:26 compute-0 NetworkManager[48954]: <info>  [1769443706.3780] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/401)
Jan 26 16:08:26 compute-0 NetworkManager[48954]: <info>  [1769443706.3789] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/402)
Jan 26 16:08:26 compute-0 ovn_controller[146046]: 2026-01-26T16:08:26Z|00967|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:08:26 compute-0 ovn_controller[146046]: 2026-01-26T16:08:26Z|00968|binding|INFO|Releasing lport 4fab6bb9-7be8-461a-b3cf-098c70ef4666 from this chassis (sb_readonly=0)
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:26 compute-0 ovn_controller[146046]: 2026-01-26T16:08:26Z|00969|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:08:26 compute-0 ovn_controller[146046]: 2026-01-26T16:08:26Z|00970|binding|INFO|Releasing lport 4fab6bb9-7be8-461a-b3cf-098c70ef4666 from this chassis (sb_readonly=0)
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.432 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.437 239969 DEBUG nova.storage.rbd_utils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] resizing rbd image 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.537 239969 DEBUG nova.objects.instance [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'migration_context' on Instance uuid 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.560 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.561 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Ensure instance console log exists: /var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.561 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.561 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.562 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.691 239969 DEBUG nova.network.neutron [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Successfully created port: 5c47e905-ff93-4134-b724-0c779bfdf9a1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:08:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425926953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.951 239969 DEBUG nova.compute.manager [req-ead2b6f9-fa28-4750-89cb-25df46861bea req-32724e2a-12b3-412a-976e-70e1e837c69c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received event network-changed-a8973089-7e12-4dd1-be5a-417921b2ddf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.952 239969 DEBUG nova.compute.manager [req-ead2b6f9-fa28-4750-89cb-25df46861bea req-32724e2a-12b3-412a-976e-70e1e837c69c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Refreshing instance network info cache due to event network-changed-a8973089-7e12-4dd1-be5a-417921b2ddf9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.952 239969 DEBUG oslo_concurrency.lockutils [req-ead2b6f9-fa28-4750-89cb-25df46861bea req-32724e2a-12b3-412a-976e-70e1e837c69c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.952 239969 DEBUG oslo_concurrency.lockutils [req-ead2b6f9-fa28-4750-89cb-25df46861bea req-32724e2a-12b3-412a-976e-70e1e837c69c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.953 239969 DEBUG nova.network.neutron [req-ead2b6f9-fa28-4750-89cb-25df46861bea req-32724e2a-12b3-412a-976e-70e1e837c69c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Refreshing network info cache for port a8973089-7e12-4dd1-be5a-417921b2ddf9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.961 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:26 compute-0 nova_compute[239965]: 2026-01-26 16:08:26.963 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <uuid>dc214d0d-327d-4a44-b811-2c04d4ccc9de</uuid>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <name>instance-00000060</name>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerShowV247Test-server-382537760</nova:name>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:08:25</nova:creationTime>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <nova:user uuid="82e03105001a493dbe4acc52bc208148">tempest-ServerShowV247Test-477144177-project-member</nova:user>
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <nova:project uuid="3fe1ff70a12f48e19752112548c850be">tempest-ServerShowV247Test-477144177</nova:project>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <system>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <entry name="serial">dc214d0d-327d-4a44-b811-2c04d4ccc9de</entry>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <entry name="uuid">dc214d0d-327d-4a44-b811-2c04d4ccc9de</entry>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     </system>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <os>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   </os>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <features>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   </features>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk">
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config">
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/console.log" append="off"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <video>
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     </video>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:08:26 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:08:26 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:08:26 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:08:26 compute-0 nova_compute[239965]: </domain>
Jan 26 16:08:26 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:08:27 compute-0 ceph-mon[75140]: pgmap v1700: 305 pgs: 305 active+clean; 366 MiB data, 793 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.7 MiB/s wr, 260 op/s
Jan 26 16:08:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/58981422' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1425926953' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.320 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.321 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.321 239969 INFO nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Using config drive
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.338 239969 DEBUG nova.storage.rbd_utils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.377 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'ec2_ids' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.438 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'keypairs' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.645 239969 INFO nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Creating config drive at /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.649 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkbq0d5_e execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.793 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkbq0d5_e" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.819 239969 DEBUG nova.storage.rbd_utils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] rbd image dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.822 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.990 239969 DEBUG oslo_concurrency.processutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config dc214d0d-327d-4a44-b811-2c04d4ccc9de_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:27 compute-0 nova_compute[239965]: 2026-01-26 16:08:27.991 239969 INFO nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Deleting local config drive /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de/disk.config because it was imported into RBD.
Jan 26 16:08:28 compute-0 systemd-machined[208061]: New machine qemu-121-instance-00000060.
Jan 26 16:08:28 compute-0 systemd[1]: Started Virtual Machine qemu-121-instance-00000060.
Jan 26 16:08:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1701: 305 pgs: 305 active+clean; 378 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 351 op/s
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.132 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.430 239969 DEBUG nova.network.neutron [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Successfully updated port: 5c47e905-ff93-4134-b724-0c779bfdf9a1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.450 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "refresh_cache-08bf6991-2ba2-47d1-865a-8f6ddfd9a067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.452 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquired lock "refresh_cache-08bf6991-2ba2-47d1-865a-8f6ddfd9a067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.452 239969 DEBUG nova.network.neutron [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:08:28
Jan 26 16:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'backups', 'images', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta']
Jan 26 16:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.831 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for dc214d0d-327d-4a44-b811-2c04d4ccc9de due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.832 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443708.8313565, dc214d0d-327d-4a44-b811-2c04d4ccc9de => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.832 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] VM Resumed (Lifecycle Event)
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.837 239969 DEBUG nova.compute.manager [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.837 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.843 239969 INFO nova.virt.libvirt.driver [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance spawned successfully.
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.844 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.852 239969 DEBUG nova.network.neutron [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.983 239969 DEBUG nova.compute.manager [req-6262343d-ee2d-460a-821d-6b79b39237eb req-74c8c4a1-572e-4d1a-bdf2-94e0422b7fd4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Received event network-changed-5c47e905-ff93-4134-b724-0c779bfdf9a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.983 239969 DEBUG nova.compute.manager [req-6262343d-ee2d-460a-821d-6b79b39237eb req-74c8c4a1-572e-4d1a-bdf2-94e0422b7fd4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Refreshing instance network info cache due to event network-changed-5c47e905-ff93-4134-b724-0c779bfdf9a1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:08:28 compute-0 nova_compute[239965]: 2026-01-26 16:08:28.983 239969 DEBUG oslo_concurrency.lockutils [req-6262343d-ee2d-460a-821d-6b79b39237eb req-74c8c4a1-572e-4d1a-bdf2-94e0422b7fd4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-08bf6991-2ba2-47d1-865a-8f6ddfd9a067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:29 compute-0 ceph-mon[75140]: pgmap v1701: 305 pgs: 305 active+clean; 378 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 7.4 MiB/s wr, 351 op/s
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.388 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.395 239969 DEBUG nova.network.neutron [req-ead2b6f9-fa28-4750-89cb-25df46861bea req-32724e2a-12b3-412a-976e-70e1e837c69c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Updated VIF entry in instance network info cache for port a8973089-7e12-4dd1-be5a-417921b2ddf9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.396 239969 DEBUG nova.network.neutron [req-ead2b6f9-fa28-4750-89cb-25df46861bea req-32724e2a-12b3-412a-976e-70e1e837c69c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Updating instance_info_cache with network_info: [{"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.398 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.400 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.401 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.401 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.402 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.402 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.403 239969 DEBUG nova.virt.libvirt.driver [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.515 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.515 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443708.831601, dc214d0d-327d-4a44-b811-2c04d4ccc9de => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.516 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] VM Started (Lifecycle Event)
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.601 239969 DEBUG nova.network.neutron [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Updating instance_info_cache with network_info: [{"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.747 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.751 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.897 239969 DEBUG nova.compute.manager [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.898 239969 DEBUG oslo_concurrency.lockutils [req-ead2b6f9-fa28-4750-89cb-25df46861bea req-32724e2a-12b3-412a-976e-70e1e837c69c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.899 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Releasing lock "refresh_cache-08bf6991-2ba2-47d1-865a-8f6ddfd9a067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.899 239969 DEBUG nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Instance network_info: |[{"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.900 239969 DEBUG oslo_concurrency.lockutils [req-6262343d-ee2d-460a-821d-6b79b39237eb req-74c8c4a1-572e-4d1a-bdf2-94e0422b7fd4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-08bf6991-2ba2-47d1-865a-8f6ddfd9a067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.901 239969 DEBUG nova.network.neutron [req-6262343d-ee2d-460a-821d-6b79b39237eb req-74c8c4a1-572e-4d1a-bdf2-94e0422b7fd4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Refreshing network info cache for port 5c47e905-ff93-4134-b724-0c779bfdf9a1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.903 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Start _get_guest_xml network_info=[{"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.904 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.912 239969 WARNING nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.920 239969 DEBUG nova.virt.libvirt.host [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.921 239969 DEBUG nova.virt.libvirt.host [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.924 239969 DEBUG nova.virt.libvirt.host [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.925 239969 DEBUG nova.virt.libvirt.host [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.925 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.925 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.926 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.926 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.927 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.927 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.927 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.927 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.928 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.928 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.928 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.929 239969 DEBUG nova.virt.hardware [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:08:29 compute-0 nova_compute[239965]: 2026-01-26 16:08:29.932 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:30 compute-0 nova_compute[239965]: 2026-01-26 16:08:30.039 239969 DEBUG oslo_concurrency.lockutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:30 compute-0 nova_compute[239965]: 2026-01-26 16:08:30.041 239969 DEBUG oslo_concurrency.lockutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:30 compute-0 nova_compute[239965]: 2026-01-26 16:08:30.042 239969 DEBUG nova.objects.instance [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1702: 305 pgs: 305 active+clean; 378 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.4 MiB/s wr, 305 op/s
Jan 26 16:08:30 compute-0 nova_compute[239965]: 2026-01-26 16:08:30.160 239969 DEBUG oslo_concurrency.lockutils [None req-b1bb50e6-663c-4e37-b4c5-b298032a08f0 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:08:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/376907713' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:30 compute-0 nova_compute[239965]: 2026-01-26 16:08:30.571 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:30 compute-0 nova_compute[239965]: 2026-01-26 16:08:30.604 239969 DEBUG nova.storage.rbd_utils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:30 compute-0 nova_compute[239965]: 2026-01-26 16:08:30.610 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:08:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.084 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/629221214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.225 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.227 239969 DEBUG nova.virt.libvirt.vif [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1632372869',display_name='tempest-ServersTestJSON-server-1632372869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1632372869',id=99,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-zl73sh7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:25Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=08bf6991-2ba2-47d1-865a-8f6ddfd9a067,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.227 239969 DEBUG nova.network.os_vif_util [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.229 239969 DEBUG nova.network.os_vif_util [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:1d:8b,bridge_name='br-int',has_traffic_filtering=True,id=5c47e905-ff93-4134-b724-0c779bfdf9a1,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c47e905-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.230 239969 DEBUG nova.objects.instance [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:31 compute-0 ceph-mon[75140]: pgmap v1702: 305 pgs: 305 active+clean; 378 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.4 MiB/s wr, 305 op/s
Jan 26 16:08:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/376907713' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/629221214' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.405 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <uuid>08bf6991-2ba2-47d1-865a-8f6ddfd9a067</uuid>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <name>instance-00000063</name>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersTestJSON-server-1632372869</nova:name>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:08:29</nova:creationTime>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <nova:user uuid="392bf4c554724bd3b097b990cec964ac">tempest-ServersTestJSON-190839520-project-member</nova:user>
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <nova:project uuid="e3d3c26abe454a90816833e484abbbd5">tempest-ServersTestJSON-190839520</nova:project>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <nova:port uuid="5c47e905-ff93-4134-b724-0c779bfdf9a1">
Jan 26 16:08:31 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <system>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <entry name="serial">08bf6991-2ba2-47d1-865a-8f6ddfd9a067</entry>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <entry name="uuid">08bf6991-2ba2-47d1-865a-8f6ddfd9a067</entry>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     </system>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <os>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   </os>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <features>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   </features>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk">
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk.config">
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:31 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:67:1d:8b"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <target dev="tap5c47e905-ff"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067/console.log" append="off"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <video>
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     </video>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:08:31 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:08:31 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:08:31 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:08:31 compute-0 nova_compute[239965]: </domain>
Jan 26 16:08:31 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.413 239969 DEBUG nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Preparing to wait for external event network-vif-plugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.413 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.414 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.414 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.414 239969 DEBUG nova.virt.libvirt.vif [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1632372869',display_name='tempest-ServersTestJSON-server-1632372869',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1632372869',id=99,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-zl73sh7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:25Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=08bf6991-2ba2-47d1-865a-8f6ddfd9a067,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.414 239969 DEBUG nova.network.os_vif_util [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.415 239969 DEBUG nova.network.os_vif_util [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:1d:8b,bridge_name='br-int',has_traffic_filtering=True,id=5c47e905-ff93-4134-b724-0c779bfdf9a1,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c47e905-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.415 239969 DEBUG os_vif [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:1d:8b,bridge_name='br-int',has_traffic_filtering=True,id=5c47e905-ff93-4134-b724-0c779bfdf9a1,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c47e905-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.416 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.416 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.416 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.420 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.421 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5c47e905-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.421 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5c47e905-ff, col_values=(('external_ids', {'iface-id': '5c47e905-ff93-4134-b724-0c779bfdf9a1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:67:1d:8b', 'vm-uuid': '08bf6991-2ba2-47d1-865a-8f6ddfd9a067'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:31 compute-0 NetworkManager[48954]: <info>  [1769443711.4750] manager: (tap5c47e905-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.475 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.485 239969 INFO os_vif [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:1d:8b,bridge_name='br-int',has_traffic_filtering=True,id=5c47e905-ff93-4134-b724-0c779bfdf9a1,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c47e905-ff')
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.831 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "dc214d0d-327d-4a44-b811-2c04d4ccc9de" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.832 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "dc214d0d-327d-4a44-b811-2c04d4ccc9de" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.833 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "dc214d0d-327d-4a44-b811-2c04d4ccc9de-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.833 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "dc214d0d-327d-4a44-b811-2c04d4ccc9de-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.834 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "dc214d0d-327d-4a44-b811-2c04d4ccc9de-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.835 239969 INFO nova.compute.manager [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Terminating instance
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.836 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "refresh_cache-dc214d0d-327d-4a44-b811-2c04d4ccc9de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.837 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquired lock "refresh_cache-dc214d0d-327d-4a44-b811-2c04d4ccc9de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.837 239969 DEBUG nova.network.neutron [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.843 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.843 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.844 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] No VIF found with MAC fa:16:3e:67:1d:8b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.844 239969 INFO nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Using config drive
Jan 26 16:08:31 compute-0 nova_compute[239965]: 2026-01-26 16:08:31.873 239969 DEBUG nova.storage.rbd_utils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.073 239969 DEBUG nova.network.neutron [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:08:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 385 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.9 MiB/s wr, 361 op/s
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.141 239969 DEBUG nova.network.neutron [req-6262343d-ee2d-460a-821d-6b79b39237eb req-74c8c4a1-572e-4d1a-bdf2-94e0422b7fd4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Updated VIF entry in instance network info cache for port 5c47e905-ff93-4134-b724-0c779bfdf9a1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.142 239969 DEBUG nova.network.neutron [req-6262343d-ee2d-460a-821d-6b79b39237eb req-74c8c4a1-572e-4d1a-bdf2-94e0422b7fd4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Updating instance_info_cache with network_info: [{"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.243 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443697.2419674, 0e27c936-f272-4dff-b393-9602b77da425 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.244 239969 INFO nova.compute.manager [-] [instance: 0e27c936-f272-4dff-b393-9602b77da425] VM Stopped (Lifecycle Event)
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.293 239969 INFO nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Creating config drive at /var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067/disk.config
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.303 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wdjjhve execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.353 239969 DEBUG nova.network.neutron [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.461 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1wdjjhve" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.834 239969 DEBUG nova.storage.rbd_utils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] rbd image 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.838 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067/disk.config 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:32 compute-0 ceph-mon[75140]: pgmap v1703: 305 pgs: 305 active+clean; 385 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 7.9 MiB/s wr, 361 op/s
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.873 239969 DEBUG oslo_concurrency.lockutils [req-6262343d-ee2d-460a-821d-6b79b39237eb req-74c8c4a1-572e-4d1a-bdf2-94e0422b7fd4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-08bf6991-2ba2-47d1-865a-8f6ddfd9a067" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.988 239969 DEBUG oslo_concurrency.processutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067/disk.config 08bf6991-2ba2-47d1-865a-8f6ddfd9a067_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:32 compute-0 nova_compute[239965]: 2026-01-26 16:08:32.989 239969 INFO nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Deleting local config drive /var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067/disk.config because it was imported into RBD.
Jan 26 16:08:33 compute-0 kernel: tap5c47e905-ff: entered promiscuous mode
Jan 26 16:08:33 compute-0 NetworkManager[48954]: <info>  [1769443713.0426] manager: (tap5c47e905-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/404)
Jan 26 16:08:33 compute-0 ovn_controller[146046]: 2026-01-26T16:08:33Z|00971|binding|INFO|Claiming lport 5c47e905-ff93-4134-b724-0c779bfdf9a1 for this chassis.
Jan 26 16:08:33 compute-0 ovn_controller[146046]: 2026-01-26T16:08:33Z|00972|binding|INFO|5c47e905-ff93-4134-b724-0c779bfdf9a1: Claiming fa:16:3e:67:1d:8b 10.100.0.14
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.045 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:33 compute-0 systemd-udevd[323942]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:08:33 compute-0 ovn_controller[146046]: 2026-01-26T16:08:33Z|00973|binding|INFO|Setting lport 5c47e905-ff93-4134-b724-0c779bfdf9a1 ovn-installed in OVS
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:33 compute-0 systemd-machined[208061]: New machine qemu-122-instance-00000063.
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:33 compute-0 systemd[1]: Started Virtual Machine qemu-122-instance-00000063.
Jan 26 16:08:33 compute-0 NetworkManager[48954]: <info>  [1769443713.0976] device (tap5c47e905-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:08:33 compute-0 NetworkManager[48954]: <info>  [1769443713.0987] device (tap5c47e905-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:08:33 compute-0 ovn_controller[146046]: 2026-01-26T16:08:33Z|00974|binding|INFO|Setting lport 5c47e905-ff93-4134-b724-0c779bfdf9a1 up in Southbound
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.309 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:1d:8b 10.100.0.14'], port_security=['fa:16:3e:67:1d:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08bf6991-2ba2-47d1-865a-8f6ddfd9a067', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5c47e905-ff93-4134-b724-0c779bfdf9a1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.310 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5c47e905-ff93-4134-b724-0c779bfdf9a1 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 bound to our chassis
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.311 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.328 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc20cdd-ce91-4adb-9bd0-cbf267c69afa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.329 239969 DEBUG nova.compute.manager [None req-bc12b41f-b4a8-4148-b968-a0f55e15805b - - - - - -] [instance: 0e27c936-f272-4dff-b393-9602b77da425] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.330 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Releasing lock "refresh_cache-dc214d0d-327d-4a44-b811-2c04d4ccc9de" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.331 239969 DEBUG nova.compute.manager [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:08:33 compute-0 ovn_controller[146046]: 2026-01-26T16:08:33Z|00975|binding|INFO|Releasing lport 86107515-3deb-4a4b-a7ca-2f22eed5cb66 from this chassis (sb_readonly=0)
Jan 26 16:08:33 compute-0 ovn_controller[146046]: 2026-01-26T16:08:33Z|00976|binding|INFO|Releasing lport 4fab6bb9-7be8-461a-b3cf-098c70ef4666 from this chassis (sb_readonly=0)
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.366 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[395c93f1-fce6-4205-835f-c6ba3f3a09cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.369 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[28ae7b8a-e308-4cb1-b890-2ca38d2ace70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.396 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.398 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8a8e48ef-e065-4cb7-9b26-7c7a1a1eda12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.417 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4ce55382-dad6-457a-8157-8dac884ee40c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 29, 'rx_bytes': 658, 'tx_bytes': 1362, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 29, 'rx_bytes': 658, 'tx_bytes': 1362, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323957, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:33 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000060.scope: Deactivated successfully.
Jan 26 16:08:33 compute-0 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000060.scope: Consumed 5.315s CPU time.
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.436 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f60af6c5-a3f9-4787-a733-0e193247aee5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323958, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323958, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.437 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:33 compute-0 systemd-machined[208061]: Machine qemu-121-instance-00000060 terminated.
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.439 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.440 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.440 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.440 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.441 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:33 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:33.441 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.562 239969 INFO nova.virt.libvirt.driver [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance destroyed successfully.
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.563 239969 DEBUG nova.objects.instance [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'resources' on Instance uuid dc214d0d-327d-4a44-b811-2c04d4ccc9de obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.666 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443713.6663606, 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.667 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] VM Started (Lifecycle Event)
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.714 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.719 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443713.6672363, 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.719 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] VM Paused (Lifecycle Event)
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.745 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.752 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.775 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.849 239969 INFO nova.virt.libvirt.driver [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Deleting instance files /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de_del
Jan 26 16:08:33 compute-0 nova_compute[239965]: 2026-01-26 16:08:33.850 239969 INFO nova.virt.libvirt.driver [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Deletion of /var/lib/nova/instances/dc214d0d-327d-4a44-b811-2c04d4ccc9de_del complete
Jan 26 16:08:34 compute-0 nova_compute[239965]: 2026-01-26 16:08:34.119 239969 INFO nova.compute.manager [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 26 16:08:34 compute-0 nova_compute[239965]: 2026-01-26 16:08:34.120 239969 DEBUG oslo.service.loopingcall [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:08:34 compute-0 nova_compute[239965]: 2026-01-26 16:08:34.121 239969 DEBUG nova.compute.manager [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:08:34 compute-0 nova_compute[239965]: 2026-01-26 16:08:34.121 239969 DEBUG nova.network.neutron [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:08:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1704: 305 pgs: 305 active+clean; 385 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 272 op/s
Jan 26 16:08:34 compute-0 nova_compute[239965]: 2026-01-26 16:08:34.356 239969 DEBUG nova.network.neutron [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:08:34 compute-0 nova_compute[239965]: 2026-01-26 16:08:34.423 239969 DEBUG nova.network.neutron [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:35 compute-0 nova_compute[239965]: 2026-01-26 16:08:35.132 239969 INFO nova.compute.manager [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Took 1.01 seconds to deallocate network for instance.
Jan 26 16:08:35 compute-0 ceph-mon[75140]: pgmap v1704: 305 pgs: 305 active+clean; 385 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.3 MiB/s wr, 272 op/s
Jan 26 16:08:35 compute-0 nova_compute[239965]: 2026-01-26 16:08:35.198 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:35 compute-0 nova_compute[239965]: 2026-01-26 16:08:35.198 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:35 compute-0 nova_compute[239965]: 2026-01-26 16:08:35.317 239969 DEBUG oslo_concurrency.processutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:35 compute-0 ovn_controller[146046]: 2026-01-26T16:08:35Z|00098|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5b:71:19 10.100.0.11
Jan 26 16:08:35 compute-0 ovn_controller[146046]: 2026-01-26T16:08:35Z|00099|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5b:71:19 10.100.0.11
Jan 26 16:08:35 compute-0 nova_compute[239965]: 2026-01-26 16:08:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:08:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/226731662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:35 compute-0 nova_compute[239965]: 2026-01-26 16:08:35.875 239969 DEBUG oslo_concurrency.processutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:35 compute-0 nova_compute[239965]: 2026-01-26 16:08:35.880 239969 DEBUG nova.compute.provider_tree [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:35 compute-0 nova_compute[239965]: 2026-01-26 16:08:35.970 239969 DEBUG nova.scheduler.client.report [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:35 compute-0 nova_compute[239965]: 2026-01-26 16:08:35.990 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.019 239969 INFO nova.scheduler.client.report [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Deleted allocations for instance dc214d0d-327d-4a44-b811-2c04d4ccc9de
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.094 239969 DEBUG oslo_concurrency.lockutils [None req-4f62c480-0e38-473a-b64f-6195a5de0cab 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "dc214d0d-327d-4a44-b811-2c04d4ccc9de" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.131 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1705: 305 pgs: 305 active+clean; 370 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.1 MiB/s wr, 255 op/s
Jan 26 16:08:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/226731662' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.474 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.660 239969 DEBUG nova.compute.manager [req-91a067c7-8b50-42cf-bb37-9fa29a6affa8 req-831c7859-7b8c-4649-b3fc-cc5395185d70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Received event network-vif-plugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.660 239969 DEBUG oslo_concurrency.lockutils [req-91a067c7-8b50-42cf-bb37-9fa29a6affa8 req-831c7859-7b8c-4649-b3fc-cc5395185d70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.661 239969 DEBUG oslo_concurrency.lockutils [req-91a067c7-8b50-42cf-bb37-9fa29a6affa8 req-831c7859-7b8c-4649-b3fc-cc5395185d70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.661 239969 DEBUG oslo_concurrency.lockutils [req-91a067c7-8b50-42cf-bb37-9fa29a6affa8 req-831c7859-7b8c-4649-b3fc-cc5395185d70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.662 239969 DEBUG nova.compute.manager [req-91a067c7-8b50-42cf-bb37-9fa29a6affa8 req-831c7859-7b8c-4649-b3fc-cc5395185d70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Processing event network-vif-plugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.663 239969 DEBUG nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.667 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443716.6665666, 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.668 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] VM Resumed (Lifecycle Event)
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.672 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.677 239969 INFO nova.virt.libvirt.driver [-] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Instance spawned successfully.
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.678 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.704 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.708 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.717 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.717 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.717 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.718 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.718 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.719 239969 DEBUG nova.virt.libvirt.driver [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.755 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.803 239969 INFO nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Took 11.00 seconds to spawn the instance on the hypervisor.
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.803 239969 DEBUG nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.912 239969 INFO nova.compute.manager [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Took 12.45 seconds to build instance.
Jan 26 16:08:36 compute-0 nova_compute[239965]: 2026-01-26 16:08:36.927 239969 DEBUG oslo_concurrency.lockutils [None req-9f96caae-4a94-48b4-a9cb-15fd02c58667 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.543s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:37 compute-0 ceph-mon[75140]: pgmap v1705: 305 pgs: 305 active+clean; 370 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 4.1 MiB/s wr, 255 op/s
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.340 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "ffae5575-2de8-4386-8619-8c1fe66b887a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.341 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "ffae5575-2de8-4386-8619-8c1fe66b887a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.341 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "ffae5575-2de8-4386-8619-8c1fe66b887a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.341 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "ffae5575-2de8-4386-8619-8c1fe66b887a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.342 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "ffae5575-2de8-4386-8619-8c1fe66b887a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.343 239969 INFO nova.compute.manager [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Terminating instance
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.344 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "refresh_cache-ffae5575-2de8-4386-8619-8c1fe66b887a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.344 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquired lock "refresh_cache-ffae5575-2de8-4386-8619-8c1fe66b887a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.345 239969 DEBUG nova.network.neutron [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:08:37 compute-0 nova_compute[239965]: 2026-01-26 16:08:37.635 239969 DEBUG nova.network.neutron [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:08:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.011 239969 DEBUG nova.network.neutron [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 372 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 287 op/s
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.232 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Releasing lock "refresh_cache-ffae5575-2de8-4386-8619-8c1fe66b887a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.233 239969 DEBUG nova.compute.manager [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:08:38 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Jan 26 16:08:38 compute-0 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005f.scope: Consumed 14.564s CPU time.
Jan 26 16:08:38 compute-0 systemd-machined[208061]: Machine qemu-117-instance-0000005f terminated.
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.453 239969 INFO nova.virt.libvirt.driver [-] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Instance destroyed successfully.
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.454 239969 DEBUG nova.objects.instance [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lazy-loading 'resources' on Instance uuid ffae5575-2de8-4386-8619-8c1fe66b887a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.694 239969 INFO nova.virt.libvirt.driver [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Deleting instance files /var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a_del
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.695 239969 INFO nova.virt.libvirt.driver [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Deletion of /var/lib/nova/instances/ffae5575-2de8-4386-8619-8c1fe66b887a_del complete
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.747 239969 INFO nova.compute.manager [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Took 0.51 seconds to destroy the instance on the hypervisor.
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.748 239969 DEBUG oslo.service.loopingcall [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.748 239969 DEBUG nova.compute.manager [-] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:08:38 compute-0 nova_compute[239965]: 2026-01-26 16:08:38.748 239969 DEBUG nova.network.neutron [-] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.060 239969 DEBUG nova.network.neutron [-] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.079 239969 DEBUG nova.network.neutron [-] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.098 239969 INFO nova.compute.manager [-] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Took 0.35 seconds to deallocate network for instance.
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.130 239969 DEBUG nova.compute.manager [req-16468f8b-8e55-4c79-bd7f-48586b54e065 req-816243dc-6160-4c7e-9dc7-af43d9fe1935 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Received event network-vif-plugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.130 239969 DEBUG oslo_concurrency.lockutils [req-16468f8b-8e55-4c79-bd7f-48586b54e065 req-816243dc-6160-4c7e-9dc7-af43d9fe1935 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.130 239969 DEBUG oslo_concurrency.lockutils [req-16468f8b-8e55-4c79-bd7f-48586b54e065 req-816243dc-6160-4c7e-9dc7-af43d9fe1935 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.131 239969 DEBUG oslo_concurrency.lockutils [req-16468f8b-8e55-4c79-bd7f-48586b54e065 req-816243dc-6160-4c7e-9dc7-af43d9fe1935 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.131 239969 DEBUG nova.compute.manager [req-16468f8b-8e55-4c79-bd7f-48586b54e065 req-816243dc-6160-4c7e-9dc7-af43d9fe1935 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] No waiting events found dispatching network-vif-plugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.132 239969 WARNING nova.compute.manager [req-16468f8b-8e55-4c79-bd7f-48586b54e065 req-816243dc-6160-4c7e-9dc7-af43d9fe1935 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Received unexpected event network-vif-plugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 for instance with vm_state active and task_state None.
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.155 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.155 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:39 compute-0 ceph-mon[75140]: pgmap v1706: 305 pgs: 305 active+clean; 372 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.3 MiB/s wr, 287 op/s
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.303 239969 DEBUG oslo_concurrency.processutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3412186842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.851 239969 DEBUG oslo_concurrency.processutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.858 239969 DEBUG nova.compute.provider_tree [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.874 239969 DEBUG nova.scheduler.client.report [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.898 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.930 239969 INFO nova.scheduler.client.report [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Deleted allocations for instance ffae5575-2de8-4386-8619-8c1fe66b887a
Jan 26 16:08:39 compute-0 nova_compute[239965]: 2026-01-26 16:08:39.993 239969 DEBUG oslo_concurrency.lockutils [None req-c7184c50-b004-4dd9-bbe9-6f9f78134a1b 82e03105001a493dbe4acc52bc208148 3fe1ff70a12f48e19752112548c850be - - default default] Lock "ffae5575-2de8-4386-8619-8c1fe66b887a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1707: 305 pgs: 305 active+clean; 372 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 177 op/s
Jan 26 16:08:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3412186842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:40 compute-0 nova_compute[239965]: 2026-01-26 16:08:40.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:08:40 compute-0 nova_compute[239965]: 2026-01-26 16:08:40.966 239969 DEBUG oslo_concurrency.lockutils [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:40 compute-0 nova_compute[239965]: 2026-01-26 16:08:40.967 239969 DEBUG oslo_concurrency.lockutils [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:40 compute-0 nova_compute[239965]: 2026-01-26 16:08:40.967 239969 DEBUG nova.compute.manager [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:40 compute-0 nova_compute[239965]: 2026-01-26 16:08:40.971 239969 DEBUG nova.compute.manager [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 26 16:08:40 compute-0 nova_compute[239965]: 2026-01-26 16:08:40.972 239969 DEBUG nova.objects.instance [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'flavor' on Instance uuid 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:40 compute-0 nova_compute[239965]: 2026-01-26 16:08:40.995 239969 DEBUG nova.virt.libvirt.driver [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:08:41 compute-0 nova_compute[239965]: 2026-01-26 16:08:41.134 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:41 compute-0 ceph-mon[75140]: pgmap v1707: 305 pgs: 305 active+clean; 372 MiB data, 807 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.6 MiB/s wr, 177 op/s
Jan 26 16:08:41 compute-0 podman[324088]: 2026-01-26 16:08:41.386859446 +0000 UTC m=+0.068115818 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 16:08:41 compute-0 podman[324089]: 2026-01-26 16:08:41.423909122 +0000 UTC m=+0.104347313 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:08:41 compute-0 nova_compute[239965]: 2026-01-26 16:08:41.476 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:41 compute-0 nova_compute[239965]: 2026-01-26 16:08:41.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:08:41 compute-0 nova_compute[239965]: 2026-01-26 16:08:41.951 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 316 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.6 MiB/s wr, 213 op/s
Jan 26 16:08:42 compute-0 nova_compute[239965]: 2026-01-26 16:08:42.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:08:42 compute-0 nova_compute[239965]: 2026-01-26 16:08:42.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:08:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:42 compute-0 nova_compute[239965]: 2026-01-26 16:08:42.932 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-1c5a4474-3020-4bb1-96e5-41b02d0eb962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:42 compute-0 nova_compute[239965]: 2026-01-26 16:08:42.932 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-1c5a4474-3020-4bb1-96e5-41b02d0eb962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:42 compute-0 nova_compute[239965]: 2026-01-26 16:08:42.933 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:08:43 compute-0 ceph-mon[75140]: pgmap v1708: 305 pgs: 305 active+clean; 316 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 2.6 MiB/s wr, 213 op/s
Jan 26 16:08:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 293 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 211 op/s
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.493937) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443724494063, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 1803, "num_deletes": 257, "total_data_size": 2607005, "memory_usage": 2663296, "flush_reason": "Manual Compaction"}
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Jan 26 16:08:44 compute-0 ceph-mon[75140]: pgmap v1709: 305 pgs: 305 active+clean; 293 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.2 MiB/s wr, 211 op/s
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443724510432, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 2553735, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33856, "largest_seqno": 35658, "table_properties": {"data_size": 2545413, "index_size": 5008, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18368, "raw_average_key_size": 20, "raw_value_size": 2528322, "raw_average_value_size": 2869, "num_data_blocks": 220, "num_entries": 881, "num_filter_entries": 881, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769443583, "oldest_key_time": 1769443583, "file_creation_time": 1769443724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 16555 microseconds, and 7814 cpu microseconds.
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.510498) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 2553735 bytes OK
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.510523) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.512608) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.512629) EVENT_LOG_v1 {"time_micros": 1769443724512622, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.512650) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 2599083, prev total WAL file size 2599083, number of live WAL files 2.
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.513624) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(2493KB)], [74(9185KB)]
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443724513663, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 11959893, "oldest_snapshot_seqno": -1}
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6234 keys, 10301251 bytes, temperature: kUnknown
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443724574951, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 10301251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10256905, "index_size": 27660, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157481, "raw_average_key_size": 25, "raw_value_size": 10142641, "raw_average_value_size": 1626, "num_data_blocks": 1116, "num_entries": 6234, "num_filter_entries": 6234, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769443724, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.575168) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 10301251 bytes
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.576541) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 194.9 rd, 167.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.0 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 6760, records dropped: 526 output_compression: NoCompression
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.576555) EVENT_LOG_v1 {"time_micros": 1769443724576548, "job": 42, "event": "compaction_finished", "compaction_time_micros": 61369, "compaction_time_cpu_micros": 29781, "output_level": 6, "num_output_files": 1, "total_output_size": 10301251, "num_input_records": 6760, "num_output_records": 6234, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443724577021, "job": 42, "event": "table_file_deletion", "file_number": 76}
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443724578341, "job": 42, "event": "table_file_deletion", "file_number": 74}
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.513544) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.578387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.578392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.578394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.578395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:08:44 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:08:44.578397) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:08:45 compute-0 nova_compute[239965]: 2026-01-26 16:08:45.437 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Updating instance_info_cache with network_info: [{"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:45 compute-0 nova_compute[239965]: 2026-01-26 16:08:45.456 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-1c5a4474-3020-4bb1-96e5-41b02d0eb962" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:45 compute-0 nova_compute[239965]: 2026-01-26 16:08:45.457 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:08:45 compute-0 nova_compute[239965]: 2026-01-26 16:08:45.457 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:08:45 compute-0 nova_compute[239965]: 2026-01-26 16:08:45.457 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:08:45 compute-0 nova_compute[239965]: 2026-01-26 16:08:45.458 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:08:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1710: 305 pgs: 305 active+clean; 293 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.534 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.535 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.577 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "441048cd-ded0-43ff-93b7-bacf44bd6c17" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.577 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.579 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquiring lock "efde3b30-f743-42a2-bc38-f27b2b0909dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.580 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.598 239969 DEBUG nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.602 239969 DEBUG nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.687 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.688 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.693 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.693 239969 INFO nova.compute.claims [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.698 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:46 compute-0 nova_compute[239965]: 2026-01-26 16:08:46.869 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853654474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.090 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.168 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.168 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000056 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.171 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.171 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000062 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.174 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.175 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000063 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:08:47 compute-0 ceph-mon[75140]: pgmap v1710: 305 pgs: 305 active+clean; 293 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Jan 26 16:08:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1853654474' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.372 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.373 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3227MB free_disk=59.87556330114603GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.373 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2262318200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.476 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.480 239969 DEBUG nova.compute.provider_tree [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.550 239969 DEBUG nova.scheduler.client.report [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.882 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.883 239969 DEBUG nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.886 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.895 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:08:47 compute-0 nova_compute[239965]: 2026-01-26 16:08:47.895 239969 INFO nova.compute.claims [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.006 239969 DEBUG nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.007 239969 DEBUG nova.network.neutron [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.034 239969 INFO nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.064 239969 DEBUG nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.130 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 293 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 154 op/s
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.166 239969 DEBUG nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.168 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.168 239969 INFO nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Creating image(s)
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.191 239969 DEBUG nova.storage.rbd_utils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] rbd image efde3b30-f743-42a2-bc38-f27b2b0909dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2262318200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.218 239969 DEBUG nova.storage.rbd_utils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] rbd image efde3b30-f743-42a2-bc38-f27b2b0909dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.238 239969 DEBUG nova.storage.rbd_utils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] rbd image efde3b30-f743-42a2-bc38-f27b2b0909dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.241 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.332 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.333 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.333 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.334 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.356 239969 DEBUG nova.storage.rbd_utils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] rbd image efde3b30-f743-42a2-bc38-f27b2b0909dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.360 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 efde3b30-f743-42a2-bc38-f27b2b0909dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.393 239969 DEBUG nova.policy [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8e477503e0564b80a186bf69c65a8b50', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c702a646b6024e939e7a909ee2dea3a5', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.560 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443713.559461, dc214d0d-327d-4a44-b811-2c04d4ccc9de => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.561 239969 INFO nova.compute.manager [-] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] VM Stopped (Lifecycle Event)
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.580 239969 DEBUG nova.compute.manager [None req-0a785707-71da-4a72-84ac-c64ed9d3c72e - - - - - -] [instance: dc214d0d-327d-4a44-b811-2c04d4ccc9de] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.626 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 efde3b30-f743-42a2-bc38-f27b2b0909dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.708 239969 DEBUG nova.storage.rbd_utils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] resizing rbd image efde3b30-f743-42a2-bc38-f27b2b0909dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:08:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:08:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2817113281' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:08:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:08:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2817113281' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:08:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/609165357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.764 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.768 239969 DEBUG nova.compute.provider_tree [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.786 239969 DEBUG nova.scheduler.client.report [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.809 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.811 239969 DEBUG nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.817 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 1.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.874 239969 DEBUG nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.875 239969 DEBUG nova.network.neutron [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00187899947298406 of space, bias 1.0, pg target 0.563699841895218 quantized to 32 (current 32)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010170085034100597 of space, bias 1.0, pg target 0.3051025510230179 quantized to 32 (current 32)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.909870538936972e-07 of space, bias 4.0, pg target 0.0009491844646724366 quantized to 16 (current 16)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:08:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.917 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 1c5a4474-3020-4bb1-96e5-41b02d0eb962 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.917 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 59b6d7ef-0e86-41af-af94-b94def8570b0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.917 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.917 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance efde3b30-f743-42a2-bc38-f27b2b0909dc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.918 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 441048cd-ded0-43ff-93b7-bacf44bd6c17 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.918 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.918 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.920 239969 INFO nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.926 239969 DEBUG nova.objects.instance [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lazy-loading 'migration_context' on Instance uuid efde3b30-f743-42a2-bc38-f27b2b0909dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.942 239969 DEBUG nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.944 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.945 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Ensure instance console log exists: /var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.945 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.945 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:48 compute-0 nova_compute[239965]: 2026-01-26 16:08:48.945 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.028 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.064 239969 DEBUG nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.067 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.067 239969 INFO nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Creating image(s)
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.094 239969 DEBUG nova.storage.rbd_utils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.124 239969 DEBUG nova.storage.rbd_utils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.152 239969 DEBUG nova.storage.rbd_utils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.155 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.187 239969 DEBUG nova.policy [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:08:49 compute-0 ceph-mon[75140]: pgmap v1711: 305 pgs: 305 active+clean; 293 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 154 op/s
Jan 26 16:08:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2817113281' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:08:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2817113281' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:08:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/609165357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.226 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.227 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.228 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.228 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.249 239969 DEBUG nova.storage.rbd_utils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.255 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.528 239969 DEBUG nova.network.neutron [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Successfully created port: 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.557 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:08:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4265771199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.634 239969 DEBUG nova.storage.rbd_utils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.664 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.673 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.719 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.727 239969 DEBUG nova.objects.instance [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 441048cd-ded0-43ff-93b7-bacf44bd6c17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.743 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.743 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.926s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.744 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.744 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Ensure instance console log exists: /var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.745 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.745 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:49 compute-0 nova_compute[239965]: 2026-01-26 16:08:49.745 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 293 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Jan 26 16:08:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4265771199' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:08:50 compute-0 ovn_controller[146046]: 2026-01-26T16:08:50Z|00100|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:67:1d:8b 10.100.0.14
Jan 26 16:08:50 compute-0 ovn_controller[146046]: 2026-01-26T16:08:50Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:67:1d:8b 10.100.0.14
Jan 26 16:08:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:51.065 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:51 compute-0 nova_compute[239965]: 2026-01-26 16:08:51.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:51.066 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:08:51 compute-0 nova_compute[239965]: 2026-01-26 16:08:51.108 239969 DEBUG nova.network.neutron [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Successfully created port: f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:08:51 compute-0 nova_compute[239965]: 2026-01-26 16:08:51.139 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:51 compute-0 nova_compute[239965]: 2026-01-26 16:08:51.201 239969 DEBUG nova.virt.libvirt.driver [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:08:51 compute-0 ceph-mon[75140]: pgmap v1712: 305 pgs: 305 active+clean; 293 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 89 op/s
Jan 26 16:08:51 compute-0 nova_compute[239965]: 2026-01-26 16:08:51.481 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:51 compute-0 nova_compute[239965]: 2026-01-26 16:08:51.939 239969 DEBUG nova.network.neutron [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Successfully updated port: 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:08:52 compute-0 nova_compute[239965]: 2026-01-26 16:08:52.066 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquiring lock "refresh_cache-efde3b30-f743-42a2-bc38-f27b2b0909dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:52 compute-0 nova_compute[239965]: 2026-01-26 16:08:52.066 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquired lock "refresh_cache-efde3b30-f743-42a2-bc38-f27b2b0909dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:52 compute-0 nova_compute[239965]: 2026-01-26 16:08:52.066 239969 DEBUG nova.network.neutron [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:08:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1713: 305 pgs: 305 active+clean; 373 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 161 op/s
Jan 26 16:08:52 compute-0 nova_compute[239965]: 2026-01-26 16:08:52.703 239969 DEBUG nova.compute.manager [req-4eef6163-82be-4cf1-af5f-27ca65e44362 req-e7f0bb03-32fb-4ce6-9dbf-8bea82b8a165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Received event network-changed-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:52 compute-0 nova_compute[239965]: 2026-01-26 16:08:52.703 239969 DEBUG nova.compute.manager [req-4eef6163-82be-4cf1-af5f-27ca65e44362 req-e7f0bb03-32fb-4ce6-9dbf-8bea82b8a165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Refreshing instance network info cache due to event network-changed-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:08:52 compute-0 nova_compute[239965]: 2026-01-26 16:08:52.703 239969 DEBUG oslo_concurrency.lockutils [req-4eef6163-82be-4cf1-af5f-27ca65e44362 req-e7f0bb03-32fb-4ce6-9dbf-8bea82b8a165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-efde3b30-f743-42a2-bc38-f27b2b0909dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.069 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.101 239969 DEBUG nova.network.neutron [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:08:53 compute-0 ceph-mon[75140]: pgmap v1713: 305 pgs: 305 active+clean; 373 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.4 MiB/s wr, 161 op/s
Jan 26 16:08:53 compute-0 kernel: tap5c47e905-ff (unregistering): left promiscuous mode
Jan 26 16:08:53 compute-0 NetworkManager[48954]: <info>  [1769443733.4354] device (tap5c47e905-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.452 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443718.4514356, ffae5575-2de8-4386-8619-8c1fe66b887a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.453 239969 INFO nova.compute.manager [-] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] VM Stopped (Lifecycle Event)
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.456 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:53 compute-0 ovn_controller[146046]: 2026-01-26T16:08:53Z|00977|binding|INFO|Releasing lport 5c47e905-ff93-4134-b724-0c779bfdf9a1 from this chassis (sb_readonly=0)
Jan 26 16:08:53 compute-0 ovn_controller[146046]: 2026-01-26T16:08:53Z|00978|binding|INFO|Setting lport 5c47e905-ff93-4134-b724-0c779bfdf9a1 down in Southbound
Jan 26 16:08:53 compute-0 ovn_controller[146046]: 2026-01-26T16:08:53Z|00979|binding|INFO|Removing iface tap5c47e905-ff ovn-installed in OVS
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.462 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.467 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:67:1d:8b 10.100.0.14'], port_security=['fa:16:3e:67:1d:8b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '08bf6991-2ba2-47d1-865a-8f6ddfd9a067', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5c47e905-ff93-4134-b724-0c779bfdf9a1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.468 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5c47e905-ff93-4134-b724-0c779bfdf9a1 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 unbound from our chassis
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.470 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e7ef659c-2c7b-4cdf-8956-267cdfeff707
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.475 239969 DEBUG nova.compute.manager [None req-7472e185-d32a-4319-83d1-b6a309f958b8 - - - - - -] [instance: ffae5575-2de8-4386-8619-8c1fe66b887a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.486 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8b164fe4-ac4a-4481-9d25-056792dc088b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:53 compute-0 sshd-session[324553]: Invalid user ubuntu from 45.148.10.240 port 42386
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.523 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ec1e9b7f-ddb2-4542-afb4-ed29063f4d69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:53 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Deactivated successfully.
Jan 26 16:08:53 compute-0 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Consumed 13.904s CPU time.
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.527 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[99697851-5b01-4c93-b24d-b4d54497e56c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:53 compute-0 systemd-machined[208061]: Machine qemu-122-instance-00000063 terminated.
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.566 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f6b6d112-bd8e-4b25-9acb-e2c3d2501a9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.588 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3caa9b4c-e219-41fc-8296-5418311d89e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape7ef659c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:1b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 31, 'rx_bytes': 658, 'tx_bytes': 1446, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 31, 'rx_bytes': 658, 'tx_bytes': 1446, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 260], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507991, 'reachable_time': 15716, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324567, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.603 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[241f8ab3-f3ef-498a-b40b-775c282f8690]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508006, 'tstamp': 508006}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324568, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape7ef659c-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 508010, 'tstamp': 508010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324568, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.604 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.605 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.610 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.611 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape7ef659c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.611 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.612 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape7ef659c-20, col_values=(('external_ids', {'iface-id': '86107515-3deb-4a4b-a7ca-2f22eed5cb66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:53.612 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:53 compute-0 sshd-session[324553]: Connection closed by invalid user ubuntu 45.148.10.240 port 42386 [preauth]
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.909 239969 DEBUG nova.compute.manager [req-442a6d82-9bf5-4de9-8c58-5d89a0d3d614 req-602e73ad-d8d5-4a04-8107-bbc76c091d0a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Received event network-vif-unplugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.911 239969 DEBUG oslo_concurrency.lockutils [req-442a6d82-9bf5-4de9-8c58-5d89a0d3d614 req-602e73ad-d8d5-4a04-8107-bbc76c091d0a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.911 239969 DEBUG oslo_concurrency.lockutils [req-442a6d82-9bf5-4de9-8c58-5d89a0d3d614 req-602e73ad-d8d5-4a04-8107-bbc76c091d0a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.911 239969 DEBUG oslo_concurrency.lockutils [req-442a6d82-9bf5-4de9-8c58-5d89a0d3d614 req-602e73ad-d8d5-4a04-8107-bbc76c091d0a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.912 239969 DEBUG nova.compute.manager [req-442a6d82-9bf5-4de9-8c58-5d89a0d3d614 req-602e73ad-d8d5-4a04-8107-bbc76c091d0a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] No waiting events found dispatching network-vif-unplugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:53 compute-0 nova_compute[239965]: 2026-01-26 16:08:53.912 239969 WARNING nova.compute.manager [req-442a6d82-9bf5-4de9-8c58-5d89a0d3d614 req-602e73ad-d8d5-4a04-8107-bbc76c091d0a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Received unexpected event network-vif-unplugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 for instance with vm_state active and task_state powering-off.
Jan 26 16:08:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.7 MiB/s wr, 173 op/s
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.153 239969 DEBUG nova.network.neutron [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Successfully updated port: f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.176 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.176 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.176 239969 DEBUG nova.network.neutron [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.214 239969 INFO nova.virt.libvirt.driver [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Instance shutdown successfully after 13 seconds.
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.221 239969 INFO nova.virt.libvirt.driver [-] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Instance destroyed successfully.
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.222 239969 DEBUG nova.objects.instance [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'numa_topology' on Instance uuid 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.237 239969 DEBUG nova.compute.manager [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.330 239969 DEBUG oslo_concurrency.lockutils [None req-5f2a1e62-8a9c-4d69-adb3-f8497d5dbb3a 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.508 239969 DEBUG nova.network.neutron [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.695 239969 DEBUG nova.network.neutron [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Updating instance_info_cache with network_info: [{"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.717 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Releasing lock "refresh_cache-efde3b30-f743-42a2-bc38-f27b2b0909dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.717 239969 DEBUG nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Instance network_info: |[{"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.718 239969 DEBUG oslo_concurrency.lockutils [req-4eef6163-82be-4cf1-af5f-27ca65e44362 req-e7f0bb03-32fb-4ce6-9dbf-8bea82b8a165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-efde3b30-f743-42a2-bc38-f27b2b0909dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.718 239969 DEBUG nova.network.neutron [req-4eef6163-82be-4cf1-af5f-27ca65e44362 req-e7f0bb03-32fb-4ce6-9dbf-8bea82b8a165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Refreshing network info cache for port 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.721 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Start _get_guest_xml network_info=[{"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.725 239969 WARNING nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.729 239969 DEBUG nova.virt.libvirt.host [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.730 239969 DEBUG nova.virt.libvirt.host [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.736 239969 DEBUG nova.virt.libvirt.host [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.737 239969 DEBUG nova.virt.libvirt.host [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.738 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.738 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.738 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.738 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.739 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.739 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.739 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.739 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.740 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.740 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.740 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.740 239969 DEBUG nova.virt.hardware [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.743 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:54 compute-0 sshd-session[324581]: Received disconnect from 45.148.10.141 port 15970:11:  [preauth]
Jan 26 16:08:54 compute-0 sshd-session[324581]: Disconnected from authenticating user root 45.148.10.141 port 15970 [preauth]
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.924 239969 DEBUG nova.compute.manager [req-1cd6dac4-1903-413f-b740-f9036f20b2b8 req-0858020f-6824-47e6-a751-bcca28fbdf90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received event network-changed-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.924 239969 DEBUG nova.compute.manager [req-1cd6dac4-1903-413f-b740-f9036f20b2b8 req-0858020f-6824-47e6-a751-bcca28fbdf90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Refreshing instance network info cache due to event network-changed-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:08:54 compute-0 nova_compute[239965]: 2026-01-26 16:08:54.925 239969 DEBUG oslo_concurrency.lockutils [req-1cd6dac4-1903-413f-b740-f9036f20b2b8 req-0858020f-6824-47e6-a751-bcca28fbdf90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:08:55 compute-0 ceph-mon[75140]: pgmap v1714: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.7 MiB/s wr, 173 op/s
Jan 26 16:08:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3790567570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.331 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.351 239969 DEBUG nova.storage.rbd_utils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] rbd image efde3b30-f743-42a2-bc38-f27b2b0909dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.354 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.746 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.829 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3912250198' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.896 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.897 239969 DEBUG nova.virt.libvirt.vif [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1925595817',display_name='tempest-ServerMetadataNegativeTestJSON-server-1925595817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1925595817',id=100,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c702a646b6024e939e7a909ee2dea3a5',ramdisk_id='',reservation_id='r-b1a0qk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-606192248',owner_user_name='tempest-ServerMetadataNegativeTestJSON-606192248-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:48Z,user_data=None,user_id='8e477503e0564b80a186bf69c65a8b50',uuid=efde3b30-f743-42a2-bc38-f27b2b0909dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.898 239969 DEBUG nova.network.os_vif_util [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Converting VIF {"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.899 239969 DEBUG nova.network.os_vif_util [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:d4:6d,bridge_name='br-int',has_traffic_filtering=True,id=3b9c9999-9b1a-4f39-bf2b-dcde55e0c481,network=Network(ffefe096-4dca-4912-b372-1a2319ae67f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9c9999-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.900 239969 DEBUG nova.objects.instance [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid efde3b30-f743-42a2-bc38-f27b2b0909dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.915 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <uuid>efde3b30-f743-42a2-bc38-f27b2b0909dc</uuid>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <name>instance-00000064</name>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerMetadataNegativeTestJSON-server-1925595817</nova:name>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:08:54</nova:creationTime>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <nova:user uuid="8e477503e0564b80a186bf69c65a8b50">tempest-ServerMetadataNegativeTestJSON-606192248-project-member</nova:user>
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <nova:project uuid="c702a646b6024e939e7a909ee2dea3a5">tempest-ServerMetadataNegativeTestJSON-606192248</nova:project>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <nova:port uuid="3b9c9999-9b1a-4f39-bf2b-dcde55e0c481">
Jan 26 16:08:55 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <system>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <entry name="serial">efde3b30-f743-42a2-bc38-f27b2b0909dc</entry>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <entry name="uuid">efde3b30-f743-42a2-bc38-f27b2b0909dc</entry>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     </system>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <os>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   </os>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <features>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   </features>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/efde3b30-f743-42a2-bc38-f27b2b0909dc_disk">
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/efde3b30-f743-42a2-bc38-f27b2b0909dc_disk.config">
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:55 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:66:d4:6d"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <target dev="tap3b9c9999-9b"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc/console.log" append="off"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <video>
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     </video>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:08:55 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:08:55 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:08:55 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:08:55 compute-0 nova_compute[239965]: </domain>
Jan 26 16:08:55 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.916 239969 DEBUG nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Preparing to wait for external event network-vif-plugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.917 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquiring lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.917 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.917 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.918 239969 DEBUG nova.virt.libvirt.vif [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1925595817',display_name='tempest-ServerMetadataNegativeTestJSON-server-1925595817',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1925595817',id=100,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c702a646b6024e939e7a909ee2dea3a5',ramdisk_id='',reservation_id='r-b1a0qk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerMetadataNegativeTestJSON-606192248',owner_user_name='tempest-ServerMetadataNegativeTestJSON-606192248-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:48Z,user_data=None,user_id='8e477503e0564b80a186bf69c65a8b50',uuid=efde3b30-f743-42a2-bc38-f27b2b0909dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.918 239969 DEBUG nova.network.os_vif_util [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Converting VIF {"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.919 239969 DEBUG nova.network.os_vif_util [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:d4:6d,bridge_name='br-int',has_traffic_filtering=True,id=3b9c9999-9b1a-4f39-bf2b-dcde55e0c481,network=Network(ffefe096-4dca-4912-b372-1a2319ae67f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9c9999-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.919 239969 DEBUG os_vif [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:d4:6d,bridge_name='br-int',has_traffic_filtering=True,id=3b9c9999-9b1a-4f39-bf2b-dcde55e0c481,network=Network(ffefe096-4dca-4912-b372-1a2319ae67f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9c9999-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.920 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.920 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.921 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.923 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3b9c9999-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.924 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3b9c9999-9b, col_values=(('external_ids', {'iface-id': '3b9c9999-9b1a-4f39-bf2b-dcde55e0c481', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:d4:6d', 'vm-uuid': 'efde3b30-f743-42a2-bc38-f27b2b0909dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.925 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:55 compute-0 NetworkManager[48954]: <info>  [1769443735.9269] manager: (tap3b9c9999-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/405)
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.929 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.935 239969 INFO os_vif [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:d4:6d,bridge_name='br-int',has_traffic_filtering=True,id=3b9c9999-9b1a-4f39-bf2b-dcde55e0c481,network=Network(ffefe096-4dca-4912-b372-1a2319ae67f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9c9999-9b')
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.987 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.987 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.988 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] No VIF found with MAC fa:16:3e:66:d4:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:08:55 compute-0 nova_compute[239965]: 2026-01-26 16:08:55.988 239969 INFO nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Using config drive
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.008 239969 DEBUG nova.storage.rbd_utils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] rbd image efde3b30-f743-42a2-bc38-f27b2b0909dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.016 239969 DEBUG nova.compute.manager [req-98fac83f-3e0f-4897-a198-f9ab9455ef99 req-db41c2c2-1a69-463a-b8f5-f1f95655d3d6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Received event network-vif-plugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.016 239969 DEBUG oslo_concurrency.lockutils [req-98fac83f-3e0f-4897-a198-f9ab9455ef99 req-db41c2c2-1a69-463a-b8f5-f1f95655d3d6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.016 239969 DEBUG oslo_concurrency.lockutils [req-98fac83f-3e0f-4897-a198-f9ab9455ef99 req-db41c2c2-1a69-463a-b8f5-f1f95655d3d6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.016 239969 DEBUG oslo_concurrency.lockutils [req-98fac83f-3e0f-4897-a198-f9ab9455ef99 req-db41c2c2-1a69-463a-b8f5-f1f95655d3d6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.016 239969 DEBUG nova.compute.manager [req-98fac83f-3e0f-4897-a198-f9ab9455ef99 req-db41c2c2-1a69-463a-b8f5-f1f95655d3d6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] No waiting events found dispatching network-vif-plugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.017 239969 WARNING nova.compute.manager [req-98fac83f-3e0f-4897-a198-f9ab9455ef99 req-db41c2c2-1a69-463a-b8f5-f1f95655d3d6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Received unexpected event network-vif-plugged-5c47e905-ff93-4134-b724-0c779bfdf9a1 for instance with vm_state stopped and task_state None.
Jan 26 16:08:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.141 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3790567570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3912250198' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.837 239969 DEBUG nova.network.neutron [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Updating instance_info_cache with network_info: [{"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.934 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.935 239969 DEBUG nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Instance network_info: |[{"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.935 239969 DEBUG oslo_concurrency.lockutils [req-1cd6dac4-1903-413f-b740-f9036f20b2b8 req-0858020f-6824-47e6-a751-bcca28fbdf90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.935 239969 DEBUG nova.network.neutron [req-1cd6dac4-1903-413f-b740-f9036f20b2b8 req-0858020f-6824-47e6-a751-bcca28fbdf90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Refreshing network info cache for port f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.938 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Start _get_guest_xml network_info=[{"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.945 239969 WARNING nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.951 239969 DEBUG nova.virt.libvirt.host [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.952 239969 DEBUG nova.virt.libvirt.host [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.959 239969 DEBUG nova.virt.libvirt.host [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.960 239969 DEBUG nova.virt.libvirt.host [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.960 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.960 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.960 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.961 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.961 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.961 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.961 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.961 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.962 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.962 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.962 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.962 239969 DEBUG nova.virt.hardware [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:08:56 compute-0 nova_compute[239965]: 2026-01-26 16:08:56.965 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.025 239969 INFO nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Creating config drive at /var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc/disk.config
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.030 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3govi9v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.168 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf3govi9v" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.197 239969 DEBUG nova.storage.rbd_utils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] rbd image efde3b30-f743-42a2-bc38-f27b2b0909dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.202 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc/disk.config efde3b30-f743-42a2-bc38-f27b2b0909dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.247 239969 DEBUG nova.network.neutron [req-4eef6163-82be-4cf1-af5f-27ca65e44362 req-e7f0bb03-32fb-4ce6-9dbf-8bea82b8a165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Updated VIF entry in instance network info cache for port 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.248 239969 DEBUG nova.network.neutron [req-4eef6163-82be-4cf1-af5f-27ca65e44362 req-e7f0bb03-32fb-4ce6-9dbf-8bea82b8a165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Updating instance_info_cache with network_info: [{"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:08:57 compute-0 ceph-mon[75140]: pgmap v1715: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.270 239969 DEBUG oslo_concurrency.lockutils [req-4eef6163-82be-4cf1-af5f-27ca65e44362 req-e7f0bb03-32fb-4ce6-9dbf-8bea82b8a165 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-efde3b30-f743-42a2-bc38-f27b2b0909dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.369 239969 DEBUG oslo_concurrency.processutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc/disk.config efde3b30-f743-42a2-bc38-f27b2b0909dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.370 239969 INFO nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Deleting local config drive /var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc/disk.config because it was imported into RBD.
Jan 26 16:08:57 compute-0 kernel: tap3b9c9999-9b: entered promiscuous mode
Jan 26 16:08:57 compute-0 NetworkManager[48954]: <info>  [1769443737.4221] manager: (tap3b9c9999-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/406)
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.426 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:57 compute-0 ovn_controller[146046]: 2026-01-26T16:08:57Z|00980|binding|INFO|Claiming lport 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 for this chassis.
Jan 26 16:08:57 compute-0 ovn_controller[146046]: 2026-01-26T16:08:57Z|00981|binding|INFO|3b9c9999-9b1a-4f39-bf2b-dcde55e0c481: Claiming fa:16:3e:66:d4:6d 10.100.0.14
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.433 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:d4:6d 10.100.0.14'], port_security=['fa:16:3e:66:d4:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'efde3b30-f743-42a2-bc38-f27b2b0909dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffefe096-4dca-4912-b372-1a2319ae67f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c702a646b6024e939e7a909ee2dea3a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ef054cc7-0f73-4149-b7e7-6ef73bb26ad3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0d25665-6733-43f8-800f-97f6ce914748, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3b9c9999-9b1a-4f39-bf2b-dcde55e0c481) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.434 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 in datapath ffefe096-4dca-4912-b372-1a2319ae67f2 bound to our chassis
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.436 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ffefe096-4dca-4912-b372-1a2319ae67f2
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.449 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[040c99f1-e25e-4c78-a54c-da5f4911dab9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.450 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapffefe096-41 in ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.453 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapffefe096-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.453 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[afea7bdc-ade5-44ce-905b-dde43255648b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_controller[146046]: 2026-01-26T16:08:57Z|00982|binding|INFO|Setting lport 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 ovn-installed in OVS
Jan 26 16:08:57 compute-0 ovn_controller[146046]: 2026-01-26T16:08:57Z|00983|binding|INFO|Setting lport 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 up in Southbound
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.454 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.454 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3093e2d6-a763-4689-b3c0-a84d9b7ea3e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 systemd-udevd[324744]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.468 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.467 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[1c31d24e-4e3e-42b8-b0ca-bed4b47c9e6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 systemd-machined[208061]: New machine qemu-123-instance-00000064.
Jan 26 16:08:57 compute-0 NetworkManager[48954]: <info>  [1769443737.4801] device (tap3b9c9999-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:08:57 compute-0 NetworkManager[48954]: <info>  [1769443737.4808] device (tap3b9c9999-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:08:57 compute-0 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.496 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[65ad2238-be46-4c88-ba16-6c70f3d7ec67]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3286972126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.537 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[dd1aa56c-9772-4bb5-a069-0ac6a25a485f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.540 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.544 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba5be5e-5921-4e5e-a54d-5f3f4d23f546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 NetworkManager[48954]: <info>  [1769443737.5457] manager: (tapffefe096-40): new Veth device (/org/freedesktop/NetworkManager/Devices/407)
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.572 239969 DEBUG nova.storage.rbd_utils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.576 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.582 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a504b0-7106-4f04-b081-09ef09e26580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.585 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1c07ae00-d44c-4499-b77b-356c597ce447]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 NetworkManager[48954]: <info>  [1769443737.6109] device (tapffefe096-40): carrier: link connected
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.616 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[08468466-fac4-4be7-8027-33fe7d88db6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.636 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[41079d8b-c04e-44f6-bf40-71b5307f7824]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapffefe096-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:11:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 289], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522841, 'reachable_time': 27482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324800, 'error': None, 'target': 'ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.654 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[16bbdd33-d69e-42b8-b6aa-feb91b331fa2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:11d5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 522841, 'tstamp': 522841}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324801, 'error': None, 'target': 'ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.676 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b926b484-703b-4010-bdcf-1105952636be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapffefe096-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:11:d5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 289], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522841, 'reachable_time': 27482, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 324802, 'error': None, 'target': 'ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.708 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7aaa8360-f0a6-4ef2-805a-5c8618ef16cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.779 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1d7faf-be4d-4165-b8de-da4756860c14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.780 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffefe096-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.780 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.781 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapffefe096-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:57 compute-0 NetworkManager[48954]: <info>  [1769443737.7834] manager: (tapffefe096-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.782 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:57 compute-0 kernel: tapffefe096-40: entered promiscuous mode
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.788 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapffefe096-40, col_values=(('external_ids', {'iface-id': 'cf764e38-cd17-440e-ba3b-d7cd6f1b645c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:57 compute-0 ovn_controller[146046]: 2026-01-26T16:08:57Z|00984|binding|INFO|Releasing lport cf764e38-cd17-440e-ba3b-d7cd6f1b645c from this chassis (sb_readonly=0)
Jan 26 16:08:57 compute-0 nova_compute[239965]: 2026-01-26 16:08:57.809 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.810 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ffefe096-4dca-4912-b372-1a2319ae67f2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ffefe096-4dca-4912-b372-1a2319ae67f2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.811 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06ac6d2e-c45d-48f6-beea-f3ce2715352c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.812 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-ffefe096-4dca-4912-b372-1a2319ae67f2
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/ffefe096-4dca-4912-b372-1a2319ae67f2.pid.haproxy
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID ffefe096-4dca-4912-b372-1a2319ae67f2
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:08:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:57.812 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2', 'env', 'PROCESS_TAG=haproxy-ffefe096-4dca-4912-b372-1a2319ae67f2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ffefe096-4dca-4912-b372-1a2319ae67f2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:08:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:08:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1716: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Jan 26 16:08:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:08:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319988907' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:58 compute-0 podman[324856]: 2026-01-26 16:08:58.194018759 +0000 UTC m=+0.052580308 container create b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.207 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.208 239969 DEBUG nova.virt.libvirt.vif [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-718393457',display_name='tempest-TestGettingAddress-server-718393457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-718393457',id=101,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOs4HnVuAJnvNp8CfE4X6ZDqYLND76tO1hTE0oumV/FRhtgz3i41mfhLwvewrya4uSND6SmxBSCPXJNfLvwF+32ywxOiO4XFvZqstko0NAzPzy/X3VMwFSRh9eMG+0ToA==',key_name='tempest-TestGettingAddress-841928054',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-rfau3ynl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:48Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=441048cd-ded0-43ff-93b7-bacf44bd6c17,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.208 239969 DEBUG nova.network.os_vif_util [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.209 239969 DEBUG nova.network.os_vif_util [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:11:50,bridge_name='br-int',has_traffic_filtering=True,id=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92a9aeb-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.210 239969 DEBUG nova.objects.instance [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 441048cd-ded0-43ff-93b7-bacf44bd6c17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:08:58 compute-0 systemd[1]: Started libpod-conmon-b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502.scope.
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.240 239969 DEBUG nova.compute.manager [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Received event network-vif-plugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.242 239969 DEBUG oslo_concurrency.lockutils [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.242 239969 DEBUG oslo_concurrency.lockutils [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.242 239969 DEBUG oslo_concurrency.lockutils [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.242 239969 DEBUG nova.compute.manager [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Processing event network-vif-plugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.243 239969 DEBUG nova.compute.manager [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Received event network-vif-plugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.243 239969 DEBUG oslo_concurrency.lockutils [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.243 239969 DEBUG oslo_concurrency.lockutils [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.243 239969 DEBUG oslo_concurrency.lockutils [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.243 239969 DEBUG nova.compute.manager [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] No waiting events found dispatching network-vif-plugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.244 239969 WARNING nova.compute.manager [req-881cf294-6f69-4732-8646-c16449685afc req-1250a083-a79f-4c11-9750-282dc225ad10 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Received unexpected event network-vif-plugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 for instance with vm_state building and task_state spawning.
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.251 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <uuid>441048cd-ded0-43ff-93b7-bacf44bd6c17</uuid>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <name>instance-00000065</name>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-718393457</nova:name>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:08:56</nova:creationTime>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <nova:port uuid="f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d">
Jan 26 16:08:58 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe1a:1150" ipVersion="6"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <system>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <entry name="serial">441048cd-ded0-43ff-93b7-bacf44bd6c17</entry>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <entry name="uuid">441048cd-ded0-43ff-93b7-bacf44bd6c17</entry>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     </system>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <os>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   </os>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <features>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   </features>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/441048cd-ded0-43ff-93b7-bacf44bd6c17_disk">
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/441048cd-ded0-43ff-93b7-bacf44bd6c17_disk.config">
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       </source>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:08:58 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:1a:11:50"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <target dev="tapf92a9aeb-bc"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17/console.log" append="off"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <video>
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     </video>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:08:58 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:08:58 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:08:58 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:08:58 compute-0 nova_compute[239965]: </domain>
Jan 26 16:08:58 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.252 239969 DEBUG nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Preparing to wait for external event network-vif-plugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.252 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.252 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.252 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.253 239969 DEBUG nova.virt.libvirt.vif [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-718393457',display_name='tempest-TestGettingAddress-server-718393457',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-718393457',id=101,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOs4HnVuAJnvNp8CfE4X6ZDqYLND76tO1hTE0oumV/FRhtgz3i41mfhLwvewrya4uSND6SmxBSCPXJNfLvwF+32ywxOiO4XFvZqstko0NAzPzy/X3VMwFSRh9eMG+0ToA==',key_name='tempest-TestGettingAddress-841928054',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-rfau3ynl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:08:48Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=441048cd-ded0-43ff-93b7-bacf44bd6c17,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.254 239969 DEBUG nova.network.os_vif_util [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.255 239969 DEBUG nova.network.os_vif_util [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1a:11:50,bridge_name='br-int',has_traffic_filtering=True,id=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92a9aeb-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.255 239969 DEBUG os_vif [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:11:50,bridge_name='br-int',has_traffic_filtering=True,id=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92a9aeb-bc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.256 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.257 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.257 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.262 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3286972126' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.263 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf92a9aeb-bc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1319988907' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.263 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf92a9aeb-bc, col_values=(('external_ids', {'iface-id': 'f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1a:11:50', 'vm-uuid': '441048cd-ded0-43ff-93b7-bacf44bd6c17'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:58 compute-0 podman[324856]: 2026-01-26 16:08:58.166073335 +0000 UTC m=+0.024634894 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:08:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:08:58 compute-0 NetworkManager[48954]: <info>  [1769443738.4635] manager: (tapf92a9aeb-bc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/409)
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.463 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.466 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:08:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4850b5c41cbeecbc0602196d19c82797ce02b5cd6c2542d3b4681ba2b029619d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.475 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.476 239969 INFO os_vif [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1a:11:50,bridge_name='br-int',has_traffic_filtering=True,id=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92a9aeb-bc')
Jan 26 16:08:58 compute-0 podman[324856]: 2026-01-26 16:08:58.489246331 +0000 UTC m=+0.347807890 container init b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 26 16:08:58 compute-0 podman[324856]: 2026-01-26 16:08:58.49537154 +0000 UTC m=+0.353933079 container start b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:08:58 compute-0 neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2[324873]: [NOTICE]   (324902) : New worker (324905) forked
Jan 26 16:08:58 compute-0 neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2[324873]: [NOTICE]   (324902) : Loading success.
Jan 26 16:08:58 compute-0 sudo[324876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:08:58 compute-0 sudo[324876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.527 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.528 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.528 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:1a:11:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.529 239969 INFO nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Using config drive
Jan 26 16:08:58 compute-0 sudo[324876]: pam_unix(sudo:session): session closed for user root
Jan 26 16:08:58 compute-0 nova_compute[239965]: 2026-01-26 16:08:58.553 239969 DEBUG nova.storage.rbd_utils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:58 compute-0 sudo[324922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:08:58 compute-0 sudo[324922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:08:59 compute-0 sudo[324922]: pam_unix(sudo:session): session closed for user root
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.230 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.230 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.231 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:08:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:08:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:08:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:08:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:08:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:08:59 compute-0 ceph-mon[75140]: pgmap v1716: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Jan 26 16:08:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:08:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:08:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:08:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:08:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:08:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:08:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:08:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:08:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.317 239969 INFO nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Creating config drive at /var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17/disk.config
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.322 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_m9wf26k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:59 compute-0 sudo[325034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:08:59 compute-0 sudo[325034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:08:59 compute-0 sudo[325034]: pam_unix(sudo:session): session closed for user root
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.356 239969 DEBUG nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.358 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443739.3576632, efde3b30-f743-42a2-bc38-f27b2b0909dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.358 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] VM Started (Lifecycle Event)
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.368 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.372 239969 INFO nova.virt.libvirt.driver [-] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Instance spawned successfully.
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.372 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:08:59 compute-0 sudo[325061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:08:59 compute-0 sudo[325061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.403 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.411 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.415 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.415 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.416 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.416 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.417 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.417 239969 DEBUG nova.virt.libvirt.driver [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.443 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.444 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443739.3641412, efde3b30-f743-42a2-bc38-f27b2b0909dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.444 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] VM Paused (Lifecycle Event)
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.465 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_m9wf26k" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.494 239969 DEBUG nova.storage.rbd_utils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.499 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17/disk.config 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.543 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.545 239969 INFO nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Took 11.38 seconds to spawn the instance on the hypervisor.
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.546 239969 DEBUG nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.549 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443739.368491, efde3b30-f743-42a2-bc38-f27b2b0909dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.550 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] VM Resumed (Lifecycle Event)
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.588 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.593 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.610 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.631 239969 INFO nova.compute.manager [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Took 12.97 seconds to build instance.
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.649 239969 DEBUG oslo_concurrency.lockutils [None req-017bbd0f-6788-4270-a615-321a8f531f1a 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.070s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.661 239969 DEBUG oslo_concurrency.processutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17/disk.config 441048cd-ded0-43ff-93b7-bacf44bd6c17_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.661 239969 INFO nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Deleting local config drive /var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17/disk.config because it was imported into RBD.
Jan 26 16:08:59 compute-0 podman[325136]: 2026-01-26 16:08:59.711767576 +0000 UTC m=+0.060574822 container create 4a43966e46615d4a5e28a734d46eecc2ed71ab8d4710154603d960114d2ade6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:08:59 compute-0 NetworkManager[48954]: <info>  [1769443739.7719] manager: (tapf92a9aeb-bc): new Tun device (/org/freedesktop/NetworkManager/Devices/410)
Jan 26 16:08:59 compute-0 kernel: tapf92a9aeb-bc: entered promiscuous mode
Jan 26 16:08:59 compute-0 systemd-udevd[324791]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:08:59 compute-0 podman[325136]: 2026-01-26 16:08:59.681111316 +0000 UTC m=+0.029918582 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.780 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:59 compute-0 ovn_controller[146046]: 2026-01-26T16:08:59Z|00985|binding|INFO|Claiming lport f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d for this chassis.
Jan 26 16:08:59 compute-0 ovn_controller[146046]: 2026-01-26T16:08:59Z|00986|binding|INFO|f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d: Claiming fa:16:3e:1a:11:50 10.100.0.5 2001:db8::f816:3eff:fe1a:1150
Jan 26 16:08:59 compute-0 NetworkManager[48954]: <info>  [1769443739.7868] device (tapf92a9aeb-bc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:08:59 compute-0 NetworkManager[48954]: <info>  [1769443739.7878] device (tapf92a9aeb-bc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.792 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:11:50 10.100.0.5 2001:db8::f816:3eff:fe1a:1150'], port_security=['fa:16:3e:1a:11:50 10.100.0.5 2001:db8::f816:3eff:fe1a:1150'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8::f816:3eff:fe1a:1150/64', 'neutron:device_id': '441048cd-ded0-43ff-93b7-bacf44bd6c17', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a2a2f41a-90e4-40a5-bfe6-8f9c75570ebd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afff7a52-f9b6-46d8-88e0-ad54d3e2a5be, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.793 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d in datapath 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 bound to our chassis
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.795 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4
Jan 26 16:08:59 compute-0 systemd[1]: Started libpod-conmon-4a43966e46615d4a5e28a734d46eecc2ed71ab8d4710154603d960114d2ade6f.scope.
Jan 26 16:08:59 compute-0 ovn_controller[146046]: 2026-01-26T16:08:59Z|00987|binding|INFO|Setting lport f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d ovn-installed in OVS
Jan 26 16:08:59 compute-0 ovn_controller[146046]: 2026-01-26T16:08:59Z|00988|binding|INFO|Setting lport f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d up in Southbound
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.817 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bd2a2b-8bf7-4961-8659-aeb27a81463d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:08:59 compute-0 systemd-machined[208061]: New machine qemu-124-instance-00000065.
Jan 26 16:08:59 compute-0 systemd[1]: Started Virtual Machine qemu-124-instance-00000065.
Jan 26 16:08:59 compute-0 podman[325136]: 2026-01-26 16:08:59.859004048 +0000 UTC m=+0.207811294 container init 4a43966e46615d4a5e28a734d46eecc2ed71ab8d4710154603d960114d2ade6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.858 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cd129681-0ee1-4f9e-aa01-6c9f46f9943f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.862 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f68fcf-2743-4cab-9adb-f91f386ad0be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:59 compute-0 podman[325136]: 2026-01-26 16:08:59.868433989 +0000 UTC m=+0.217241235 container start 4a43966e46615d4a5e28a734d46eecc2ed71ab8d4710154603d960114d2ade6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:08:59 compute-0 podman[325136]: 2026-01-26 16:08:59.874180129 +0000 UTC m=+0.222987405 container attach 4a43966e46615d4a5e28a734d46eecc2ed71ab8d4710154603d960114d2ade6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:08:59 compute-0 sharp_taussig[325166]: 167 167
Jan 26 16:08:59 compute-0 systemd[1]: libpod-4a43966e46615d4a5e28a734d46eecc2ed71ab8d4710154603d960114d2ade6f.scope: Deactivated successfully.
Jan 26 16:08:59 compute-0 podman[325136]: 2026-01-26 16:08:59.876028084 +0000 UTC m=+0.224835320 container died 4a43966e46615d4a5e28a734d46eecc2ed71ab8d4710154603d960114d2ade6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.899 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f09feff2-6f5a-4ac7-a2b5-05f2060c35c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7bda9c8e37d4f53a0c34ac8be692b5a4fe5b9d9f1c1a14bc9d24b16835fee9c-merged.mount: Deactivated successfully.
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.925 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[26cdeb6b-a9fa-4b74-a062-ff55fcd10f0a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b5a2723-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:ea:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 20, 'tx_packets': 6, 'rx_bytes': 1656, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519130, 'reachable_time': 41400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325194, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:59 compute-0 podman[325136]: 2026-01-26 16:08:59.929220895 +0000 UTC m=+0.278028141 container remove 4a43966e46615d4a5e28a734d46eecc2ed71ab8d4710154603d960114d2ade6f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:08:59 compute-0 systemd[1]: libpod-conmon-4a43966e46615d4a5e28a734d46eecc2ed71ab8d4710154603d960114d2ade6f.scope: Deactivated successfully.
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.950 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b90308d6-e529-49b8-b749-f1faa8d534d7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8b5a2723-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519143, 'tstamp': 519143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325196, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8b5a2723-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519146, 'tstamp': 519146}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325196, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.952 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b5a2723-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.953 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:59 compute-0 nova_compute[239965]: 2026-01-26 16:08:59.959 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.960 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b5a2723-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.960 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.961 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b5a2723-70, col_values=(('external_ids', {'iface-id': '4fab6bb9-7be8-461a-b3cf-098c70ef4666'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:08:59.961 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:09:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.142 239969 DEBUG oslo_concurrency.lockutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.144 239969 DEBUG oslo_concurrency.lockutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.144 239969 DEBUG oslo_concurrency.lockutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.144 239969 DEBUG oslo_concurrency.lockutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.145 239969 DEBUG oslo_concurrency.lockutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.146 239969 INFO nova.compute.manager [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Terminating instance
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.147 239969 DEBUG nova.compute.manager [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:09:00 compute-0 podman[325205]: 2026-01-26 16:09:00.149402181 +0000 UTC m=+0.053176062 container create 64b404d2d88a31f2e37e64f33eaef2158753ff21bdade75c53a3adcf74327fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.160 239969 INFO nova.virt.libvirt.driver [-] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Instance destroyed successfully.
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.160 239969 DEBUG nova.objects.instance [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'resources' on Instance uuid 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.173 239969 DEBUG nova.virt.libvirt.vif [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:08:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1632372869',display_name='tempest-Íñstáñcé-1077116605',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1632372869',id=99,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:08:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-zl73sh7t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:08:57Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=08bf6991-2ba2-47d1-865a-8f6ddfd9a067,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.173 239969 DEBUG nova.network.os_vif_util [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "address": "fa:16:3e:67:1d:8b", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5c47e905-ff", "ovs_interfaceid": "5c47e905-ff93-4134-b724-0c779bfdf9a1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.174 239969 DEBUG nova.network.os_vif_util [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:67:1d:8b,bridge_name='br-int',has_traffic_filtering=True,id=5c47e905-ff93-4134-b724-0c779bfdf9a1,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c47e905-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.174 239969 DEBUG os_vif [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:1d:8b,bridge_name='br-int',has_traffic_filtering=True,id=5c47e905-ff93-4134-b724-0c779bfdf9a1,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c47e905-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.177 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5c47e905-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.178 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.180 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.182 239969 INFO os_vif [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:67:1d:8b,bridge_name='br-int',has_traffic_filtering=True,id=5c47e905-ff93-4134-b724-0c779bfdf9a1,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5c47e905-ff')
Jan 26 16:09:00 compute-0 systemd[1]: Started libpod-conmon-64b404d2d88a31f2e37e64f33eaef2158753ff21bdade75c53a3adcf74327fa7.scope.
Jan 26 16:09:00 compute-0 podman[325205]: 2026-01-26 16:09:00.128341606 +0000 UTC m=+0.032115497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:09:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1518f9543fc5c79ae6b7b59ba4c990020e250039df64919c3190de6cc85937/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1518f9543fc5c79ae6b7b59ba4c990020e250039df64919c3190de6cc85937/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1518f9543fc5c79ae6b7b59ba4c990020e250039df64919c3190de6cc85937/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1518f9543fc5c79ae6b7b59ba4c990020e250039df64919c3190de6cc85937/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d1518f9543fc5c79ae6b7b59ba4c990020e250039df64919c3190de6cc85937/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:00 compute-0 podman[325205]: 2026-01-26 16:09:00.256615594 +0000 UTC m=+0.160389505 container init 64b404d2d88a31f2e37e64f33eaef2158753ff21bdade75c53a3adcf74327fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hellman, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:09:00 compute-0 podman[325205]: 2026-01-26 16:09:00.266333232 +0000 UTC m=+0.170107113 container start 64b404d2d88a31f2e37e64f33eaef2158753ff21bdade75c53a3adcf74327fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hellman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:09:00 compute-0 podman[325205]: 2026-01-26 16:09:00.271799206 +0000 UTC m=+0.175573087 container attach 64b404d2d88a31f2e37e64f33eaef2158753ff21bdade75c53a3adcf74327fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hellman, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:09:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:09:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:09:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:09:00 compute-0 ceph-mon[75140]: pgmap v1717: 305 pgs: 305 active+clean; 418 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 361 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Jan 26 16:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.441 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443740.4409604, 441048cd-ded0-43ff-93b7-bacf44bd6c17 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.441 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] VM Started (Lifecycle Event)
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.458 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.465 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443740.441057, 441048cd-ded0-43ff-93b7-bacf44bd6c17 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.465 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] VM Paused (Lifecycle Event)
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.481 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.485 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.508 239969 INFO nova.virt.libvirt.driver [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Deleting instance files /var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067_del
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.509 239969 INFO nova.virt.libvirt.driver [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Deletion of /var/lib/nova/instances/08bf6991-2ba2-47d1-865a-8f6ddfd9a067_del complete
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.513 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.560 239969 INFO nova.compute.manager [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Took 0.41 seconds to destroy the instance on the hypervisor.
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.560 239969 DEBUG oslo.service.loopingcall [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.561 239969 DEBUG nova.compute.manager [-] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:09:00 compute-0 nova_compute[239965]: 2026-01-26 16:09:00.561 239969 DEBUG nova.network.neutron [-] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:09:00 compute-0 sad_hellman[325231]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:09:00 compute-0 sad_hellman[325231]: --> All data devices are unavailable
Jan 26 16:09:00 compute-0 podman[325205]: 2026-01-26 16:09:00.815077536 +0000 UTC m=+0.718851417 container died 64b404d2d88a31f2e37e64f33eaef2158753ff21bdade75c53a3adcf74327fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:09:00 compute-0 systemd[1]: libpod-64b404d2d88a31f2e37e64f33eaef2158753ff21bdade75c53a3adcf74327fa7.scope: Deactivated successfully.
Jan 26 16:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d1518f9543fc5c79ae6b7b59ba4c990020e250039df64919c3190de6cc85937-merged.mount: Deactivated successfully.
Jan 26 16:09:00 compute-0 podman[325205]: 2026-01-26 16:09:00.870238505 +0000 UTC m=+0.774012386 container remove 64b404d2d88a31f2e37e64f33eaef2158753ff21bdade75c53a3adcf74327fa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hellman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:09:00 compute-0 systemd[1]: libpod-conmon-64b404d2d88a31f2e37e64f33eaef2158753ff21bdade75c53a3adcf74327fa7.scope: Deactivated successfully.
Jan 26 16:09:00 compute-0 sudo[325061]: pam_unix(sudo:session): session closed for user root
Jan 26 16:09:00 compute-0 sudo[325313]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:09:00 compute-0 sudo[325313]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:09:00 compute-0 sudo[325313]: pam_unix(sudo:session): session closed for user root
Jan 26 16:09:01 compute-0 sudo[325338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:09:01 compute-0 sudo[325338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.143 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.184 239969 DEBUG nova.network.neutron [req-1cd6dac4-1903-413f-b740-f9036f20b2b8 req-0858020f-6824-47e6-a751-bcca28fbdf90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Updated VIF entry in instance network info cache for port f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.185 239969 DEBUG nova.network.neutron [req-1cd6dac4-1903-413f-b740-f9036f20b2b8 req-0858020f-6824-47e6-a751-bcca28fbdf90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Updating instance_info_cache with network_info: [{"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.211 239969 DEBUG oslo_concurrency.lockutils [req-1cd6dac4-1903-413f-b740-f9036f20b2b8 req-0858020f-6824-47e6-a751-bcca28fbdf90 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:09:01 compute-0 podman[325375]: 2026-01-26 16:09:01.278127663 +0000 UTC m=+0.035993522 container create bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haibt, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:09:01 compute-0 systemd[1]: Started libpod-conmon-bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8.scope.
Jan 26 16:09:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:09:01 compute-0 podman[325375]: 2026-01-26 16:09:01.262801458 +0000 UTC m=+0.020667347 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:09:01 compute-0 podman[325375]: 2026-01-26 16:09:01.363405598 +0000 UTC m=+0.121271487 container init bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:09:01 compute-0 podman[325375]: 2026-01-26 16:09:01.371320992 +0000 UTC m=+0.129186861 container start bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 16:09:01 compute-0 podman[325375]: 2026-01-26 16:09:01.374571411 +0000 UTC m=+0.132437300 container attach bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haibt, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:09:01 compute-0 upbeat_haibt[325391]: 167 167
Jan 26 16:09:01 compute-0 systemd[1]: libpod-bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8.scope: Deactivated successfully.
Jan 26 16:09:01 compute-0 conmon[325391]: conmon bdecddc63b8ab13471b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8.scope/container/memory.events
Jan 26 16:09:01 compute-0 podman[325375]: 2026-01-26 16:09:01.377652978 +0000 UTC m=+0.135518847 container died bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:09:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae84deeccf7bc1bf0f82a35974f8dda4dacb7f5675e5c93a4451749388e52a28-merged.mount: Deactivated successfully.
Jan 26 16:09:01 compute-0 podman[325375]: 2026-01-26 16:09:01.417271646 +0000 UTC m=+0.175137535 container remove bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haibt, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:09:01 compute-0 systemd[1]: libpod-conmon-bdecddc63b8ab13471b9394df48242b14c0ce52eb53f3950e1f8074e65fc8cb8.scope: Deactivated successfully.
Jan 26 16:09:01 compute-0 podman[325414]: 2026-01-26 16:09:01.656229582 +0000 UTC m=+0.060323606 container create f65b662998007f23a09341532b2a7177e890218e34aa57bb6ee257fad587f36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jennings, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:09:01 compute-0 systemd[1]: Started libpod-conmon-f65b662998007f23a09341532b2a7177e890218e34aa57bb6ee257fad587f36a.scope.
Jan 26 16:09:01 compute-0 podman[325414]: 2026-01-26 16:09:01.629504669 +0000 UTC m=+0.033598713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:09:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63aa8631002ed2c2416203edbb952eb59c76e3400467816cc5579e00fbd90983/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63aa8631002ed2c2416203edbb952eb59c76e3400467816cc5579e00fbd90983/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63aa8631002ed2c2416203edbb952eb59c76e3400467816cc5579e00fbd90983/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63aa8631002ed2c2416203edbb952eb59c76e3400467816cc5579e00fbd90983/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:01 compute-0 podman[325414]: 2026-01-26 16:09:01.768259703 +0000 UTC m=+0.172353747 container init f65b662998007f23a09341532b2a7177e890218e34aa57bb6ee257fad587f36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:09:01 compute-0 podman[325414]: 2026-01-26 16:09:01.773288045 +0000 UTC m=+0.177382069 container start f65b662998007f23a09341532b2a7177e890218e34aa57bb6ee257fad587f36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:09:01 compute-0 podman[325414]: 2026-01-26 16:09:01.776483863 +0000 UTC m=+0.180577917 container attach f65b662998007f23a09341532b2a7177e890218e34aa57bb6ee257fad587f36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jennings, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.909 239969 DEBUG nova.compute.manager [req-ab3c8b7f-1d33-425e-a2d4-6d206772cb35 req-2abc4b85-c9e0-4e87-9346-a43e7efd3fa7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received event network-vif-plugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.909 239969 DEBUG oslo_concurrency.lockutils [req-ab3c8b7f-1d33-425e-a2d4-6d206772cb35 req-2abc4b85-c9e0-4e87-9346-a43e7efd3fa7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.909 239969 DEBUG oslo_concurrency.lockutils [req-ab3c8b7f-1d33-425e-a2d4-6d206772cb35 req-2abc4b85-c9e0-4e87-9346-a43e7efd3fa7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.910 239969 DEBUG oslo_concurrency.lockutils [req-ab3c8b7f-1d33-425e-a2d4-6d206772cb35 req-2abc4b85-c9e0-4e87-9346-a43e7efd3fa7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.910 239969 DEBUG nova.compute.manager [req-ab3c8b7f-1d33-425e-a2d4-6d206772cb35 req-2abc4b85-c9e0-4e87-9346-a43e7efd3fa7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Processing event network-vif-plugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.910 239969 DEBUG nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.931 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443741.9312594, 441048cd-ded0-43ff-93b7-bacf44bd6c17 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.932 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] VM Resumed (Lifecycle Event)
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.933 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.938 239969 INFO nova.virt.libvirt.driver [-] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Instance spawned successfully.
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.939 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.958 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.966 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.970 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.970 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.971 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.972 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.972 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.972 239969 DEBUG nova.virt.libvirt.driver [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:01 compute-0 nova_compute[239965]: 2026-01-26 16:09:01.996 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:09:02 compute-0 nova_compute[239965]: 2026-01-26 16:09:02.032 239969 INFO nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Took 12.97 seconds to spawn the instance on the hypervisor.
Jan 26 16:09:02 compute-0 nova_compute[239965]: 2026-01-26 16:09:02.032 239969 DEBUG nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:02 compute-0 happy_jennings[325430]: {
Jan 26 16:09:02 compute-0 happy_jennings[325430]:     "0": [
Jan 26 16:09:02 compute-0 happy_jennings[325430]:         {
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "devices": [
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "/dev/loop3"
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             ],
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_name": "ceph_lv0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_size": "21470642176",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "name": "ceph_lv0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "tags": {
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.cluster_name": "ceph",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.crush_device_class": "",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.encrypted": "0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.objectstore": "bluestore",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.osd_id": "0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.type": "block",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.vdo": "0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.with_tpm": "0"
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             },
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "type": "block",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "vg_name": "ceph_vg0"
Jan 26 16:09:02 compute-0 happy_jennings[325430]:         }
Jan 26 16:09:02 compute-0 happy_jennings[325430]:     ],
Jan 26 16:09:02 compute-0 happy_jennings[325430]:     "1": [
Jan 26 16:09:02 compute-0 happy_jennings[325430]:         {
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "devices": [
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "/dev/loop4"
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             ],
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_name": "ceph_lv1",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_size": "21470642176",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "name": "ceph_lv1",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "tags": {
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.cluster_name": "ceph",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.crush_device_class": "",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.encrypted": "0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.objectstore": "bluestore",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.osd_id": "1",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.type": "block",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.vdo": "0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.with_tpm": "0"
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             },
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "type": "block",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "vg_name": "ceph_vg1"
Jan 26 16:09:02 compute-0 happy_jennings[325430]:         }
Jan 26 16:09:02 compute-0 happy_jennings[325430]:     ],
Jan 26 16:09:02 compute-0 happy_jennings[325430]:     "2": [
Jan 26 16:09:02 compute-0 happy_jennings[325430]:         {
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "devices": [
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "/dev/loop5"
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             ],
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_name": "ceph_lv2",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_size": "21470642176",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "name": "ceph_lv2",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "tags": {
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.cluster_name": "ceph",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.crush_device_class": "",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.encrypted": "0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.objectstore": "bluestore",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.osd_id": "2",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.type": "block",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.vdo": "0",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:                 "ceph.with_tpm": "0"
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             },
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "type": "block",
Jan 26 16:09:02 compute-0 happy_jennings[325430]:             "vg_name": "ceph_vg2"
Jan 26 16:09:02 compute-0 happy_jennings[325430]:         }
Jan 26 16:09:02 compute-0 happy_jennings[325430]:     ]
Jan 26 16:09:02 compute-0 happy_jennings[325430]: }
Jan 26 16:09:02 compute-0 systemd[1]: libpod-f65b662998007f23a09341532b2a7177e890218e34aa57bb6ee257fad587f36a.scope: Deactivated successfully.
Jan 26 16:09:02 compute-0 podman[325414]: 2026-01-26 16:09:02.090318781 +0000 UTC m=+0.494412815 container died f65b662998007f23a09341532b2a7177e890218e34aa57bb6ee257fad587f36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jennings, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-63aa8631002ed2c2416203edbb952eb59c76e3400467816cc5579e00fbd90983-merged.mount: Deactivated successfully.
Jan 26 16:09:02 compute-0 podman[325414]: 2026-01-26 16:09:02.130054613 +0000 UTC m=+0.534148657 container remove f65b662998007f23a09341532b2a7177e890218e34aa57bb6ee257fad587f36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:09:02 compute-0 systemd[1]: libpod-conmon-f65b662998007f23a09341532b2a7177e890218e34aa57bb6ee257fad587f36a.scope: Deactivated successfully.
Jan 26 16:09:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1718: 305 pgs: 305 active+clean; 374 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.7 MiB/s wr, 181 op/s
Jan 26 16:09:02 compute-0 sudo[325338]: pam_unix(sudo:session): session closed for user root
Jan 26 16:09:02 compute-0 nova_compute[239965]: 2026-01-26 16:09:02.244 239969 INFO nova.compute.manager [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Took 15.59 seconds to build instance.
Jan 26 16:09:02 compute-0 sudo[325450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:09:02 compute-0 sudo[325450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:09:02 compute-0 sudo[325450]: pam_unix(sudo:session): session closed for user root
Jan 26 16:09:02 compute-0 nova_compute[239965]: 2026-01-26 16:09:02.259 239969 DEBUG oslo_concurrency.lockutils [None req-c458d321-4f1c-4a39-81eb-73143c492643 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:02 compute-0 sudo[325475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:09:02 compute-0 sudo[325475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:09:02 compute-0 podman[325512]: 2026-01-26 16:09:02.621172177 +0000 UTC m=+0.040359379 container create 1d109f9c61aca849b56740f7d069687f7caec240dcfed6acc9e589eb662e9299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:09:02 compute-0 systemd[1]: Started libpod-conmon-1d109f9c61aca849b56740f7d069687f7caec240dcfed6acc9e589eb662e9299.scope.
Jan 26 16:09:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:09:02 compute-0 podman[325512]: 2026-01-26 16:09:02.604294624 +0000 UTC m=+0.023481856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:09:02 compute-0 podman[325512]: 2026-01-26 16:09:02.713352222 +0000 UTC m=+0.132539444 container init 1d109f9c61aca849b56740f7d069687f7caec240dcfed6acc9e589eb662e9299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bell, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:09:02 compute-0 podman[325512]: 2026-01-26 16:09:02.721318396 +0000 UTC m=+0.140505608 container start 1d109f9c61aca849b56740f7d069687f7caec240dcfed6acc9e589eb662e9299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:09:02 compute-0 dreamy_bell[325529]: 167 167
Jan 26 16:09:02 compute-0 podman[325512]: 2026-01-26 16:09:02.725687493 +0000 UTC m=+0.144874765 container attach 1d109f9c61aca849b56740f7d069687f7caec240dcfed6acc9e589eb662e9299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bell, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:09:02 compute-0 systemd[1]: libpod-1d109f9c61aca849b56740f7d069687f7caec240dcfed6acc9e589eb662e9299.scope: Deactivated successfully.
Jan 26 16:09:02 compute-0 podman[325512]: 2026-01-26 16:09:02.726076113 +0000 UTC m=+0.145263315 container died 1d109f9c61aca849b56740f7d069687f7caec240dcfed6acc9e589eb662e9299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc5b937586b74271e050726f19d913a847b1dd66b7e82cbda5d920a9e68eaf16-merged.mount: Deactivated successfully.
Jan 26 16:09:02 compute-0 podman[325512]: 2026-01-26 16:09:02.769173807 +0000 UTC m=+0.188361009 container remove 1d109f9c61aca849b56740f7d069687f7caec240dcfed6acc9e589eb662e9299 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_bell, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:09:02 compute-0 systemd[1]: libpod-conmon-1d109f9c61aca849b56740f7d069687f7caec240dcfed6acc9e589eb662e9299.scope: Deactivated successfully.
Jan 26 16:09:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:02 compute-0 nova_compute[239965]: 2026-01-26 16:09:02.888 239969 DEBUG nova.network.neutron [-] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:02 compute-0 nova_compute[239965]: 2026-01-26 16:09:02.908 239969 INFO nova.compute.manager [-] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Took 2.35 seconds to deallocate network for instance.
Jan 26 16:09:02 compute-0 nova_compute[239965]: 2026-01-26 16:09:02.968 239969 DEBUG oslo_concurrency.lockutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:02 compute-0 nova_compute[239965]: 2026-01-26 16:09:02.969 239969 DEBUG oslo_concurrency.lockutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:02 compute-0 podman[325554]: 2026-01-26 16:09:02.973138626 +0000 UTC m=+0.039174049 container create 5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:09:03 compute-0 systemd[1]: Started libpod-conmon-5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575.scope.
Jan 26 16:09:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:09:03 compute-0 podman[325554]: 2026-01-26 16:09:02.956028138 +0000 UTC m=+0.022063601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88fdd83ebbb2a6b5ceb094c3b7e6d9fc74a23026a62a667f00615ce5be872b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88fdd83ebbb2a6b5ceb094c3b7e6d9fc74a23026a62a667f00615ce5be872b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88fdd83ebbb2a6b5ceb094c3b7e6d9fc74a23026a62a667f00615ce5be872b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c88fdd83ebbb2a6b5ceb094c3b7e6d9fc74a23026a62a667f00615ce5be872b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:03 compute-0 podman[325554]: 2026-01-26 16:09:03.07426931 +0000 UTC m=+0.140304763 container init 5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:09:03 compute-0 podman[325554]: 2026-01-26 16:09:03.08367 +0000 UTC m=+0.149705433 container start 5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:09:03 compute-0 podman[325554]: 2026-01-26 16:09:03.087833352 +0000 UTC m=+0.153868825 container attach 5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:09:03 compute-0 nova_compute[239965]: 2026-01-26 16:09:03.127 239969 DEBUG oslo_concurrency.processutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:03 compute-0 ceph-mon[75140]: pgmap v1718: 305 pgs: 305 active+clean; 374 MiB data, 820 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 5.7 MiB/s wr, 181 op/s
Jan 26 16:09:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4034375019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:03 compute-0 nova_compute[239965]: 2026-01-26 16:09:03.778 239969 DEBUG oslo_concurrency.processutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:03 compute-0 nova_compute[239965]: 2026-01-26 16:09:03.787 239969 DEBUG nova.compute.provider_tree [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:03 compute-0 nova_compute[239965]: 2026-01-26 16:09:03.806 239969 DEBUG nova.scheduler.client.report [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:03 compute-0 nova_compute[239965]: 2026-01-26 16:09:03.830 239969 DEBUG oslo_concurrency.lockutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:03 compute-0 lvm[325672]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:09:03 compute-0 lvm[325672]: VG ceph_vg0 finished
Jan 26 16:09:03 compute-0 lvm[325673]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:09:03 compute-0 lvm[325673]: VG ceph_vg1 finished
Jan 26 16:09:03 compute-0 lvm[325675]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:09:03 compute-0 lvm[325675]: VG ceph_vg2 finished
Jan 26 16:09:03 compute-0 nova_compute[239965]: 2026-01-26 16:09:03.882 239969 INFO nova.scheduler.client.report [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Deleted allocations for instance 08bf6991-2ba2-47d1-865a-8f6ddfd9a067
Jan 26 16:09:03 compute-0 strange_agnesi[325568]: {}
Jan 26 16:09:03 compute-0 nova_compute[239965]: 2026-01-26 16:09:03.963 239969 DEBUG oslo_concurrency.lockutils [None req-cb210c14-bd83-41f4-9f46-7e8d746d8040 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "08bf6991-2ba2-47d1-865a-8f6ddfd9a067" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.819s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:03 compute-0 systemd[1]: libpod-5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575.scope: Deactivated successfully.
Jan 26 16:09:03 compute-0 systemd[1]: libpod-5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575.scope: Consumed 1.398s CPU time.
Jan 26 16:09:03 compute-0 podman[325554]: 2026-01-26 16:09:03.971679213 +0000 UTC m=+1.037714646 container died 5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:09:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c88fdd83ebbb2a6b5ceb094c3b7e6d9fc74a23026a62a667f00615ce5be872b5-merged.mount: Deactivated successfully.
Jan 26 16:09:04 compute-0 nova_compute[239965]: 2026-01-26 16:09:04.011 239969 DEBUG nova.compute.manager [req-c9da2916-5c02-4e74-81eb-dd6ba5fe8e12 req-4459e913-1f4e-430d-949c-573c3ea299d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received event network-vif-plugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:04 compute-0 nova_compute[239965]: 2026-01-26 16:09:04.012 239969 DEBUG oslo_concurrency.lockutils [req-c9da2916-5c02-4e74-81eb-dd6ba5fe8e12 req-4459e913-1f4e-430d-949c-573c3ea299d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:04 compute-0 nova_compute[239965]: 2026-01-26 16:09:04.012 239969 DEBUG oslo_concurrency.lockutils [req-c9da2916-5c02-4e74-81eb-dd6ba5fe8e12 req-4459e913-1f4e-430d-949c-573c3ea299d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:04 compute-0 nova_compute[239965]: 2026-01-26 16:09:04.012 239969 DEBUG oslo_concurrency.lockutils [req-c9da2916-5c02-4e74-81eb-dd6ba5fe8e12 req-4459e913-1f4e-430d-949c-573c3ea299d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:04 compute-0 nova_compute[239965]: 2026-01-26 16:09:04.012 239969 DEBUG nova.compute.manager [req-c9da2916-5c02-4e74-81eb-dd6ba5fe8e12 req-4459e913-1f4e-430d-949c-573c3ea299d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] No waiting events found dispatching network-vif-plugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:04 compute-0 nova_compute[239965]: 2026-01-26 16:09:04.012 239969 WARNING nova.compute.manager [req-c9da2916-5c02-4e74-81eb-dd6ba5fe8e12 req-4459e913-1f4e-430d-949c-573c3ea299d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received unexpected event network-vif-plugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d for instance with vm_state active and task_state None.
Jan 26 16:09:04 compute-0 nova_compute[239965]: 2026-01-26 16:09:04.012 239969 DEBUG nova.compute.manager [req-c9da2916-5c02-4e74-81eb-dd6ba5fe8e12 req-4459e913-1f4e-430d-949c-573c3ea299d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Received event network-vif-deleted-5c47e905-ff93-4134-b724-0c779bfdf9a1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:04 compute-0 podman[325554]: 2026-01-26 16:09:04.026023462 +0000 UTC m=+1.092058895 container remove 5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:09:04 compute-0 systemd[1]: libpod-conmon-5e1a0cf146c6d44ad967c81ffddec4627b7c6511926d8d82705413dcbef91575.scope: Deactivated successfully.
Jan 26 16:09:04 compute-0 sudo[325475]: pam_unix(sudo:session): session closed for user root
Jan 26 16:09:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:09:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:09:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:09:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:09:04 compute-0 sudo[325690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:09:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 339 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 159 op/s
Jan 26 16:09:04 compute-0 sudo[325690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:09:04 compute-0 sudo[325690]: pam_unix(sudo:session): session closed for user root
Jan 26 16:09:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4034375019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:04 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:09:04 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:09:05 compute-0 nova_compute[239965]: 2026-01-26 16:09:05.179 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:05 compute-0 ceph-mon[75140]: pgmap v1719: 305 pgs: 305 active+clean; 339 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.4 MiB/s wr, 159 op/s
Jan 26 16:09:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1720: 305 pgs: 305 active+clean; 339 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 115 op/s
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.492 239969 DEBUG oslo_concurrency.lockutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.493 239969 DEBUG oslo_concurrency.lockutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.493 239969 DEBUG oslo_concurrency.lockutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.494 239969 DEBUG oslo_concurrency.lockutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.494 239969 DEBUG oslo_concurrency.lockutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.496 239969 INFO nova.compute.manager [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Terminating instance
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.497 239969 DEBUG nova.compute.manager [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:09:06 compute-0 kernel: tap3cbece9c-48 (unregistering): left promiscuous mode
Jan 26 16:09:06 compute-0 NetworkManager[48954]: <info>  [1769443746.5467] device (tap3cbece9c-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:09:06 compute-0 ovn_controller[146046]: 2026-01-26T16:09:06Z|00989|binding|INFO|Releasing lport 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 from this chassis (sb_readonly=0)
Jan 26 16:09:06 compute-0 ovn_controller[146046]: 2026-01-26T16:09:06Z|00990|binding|INFO|Setting lport 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 down in Southbound
Jan 26 16:09:06 compute-0 ovn_controller[146046]: 2026-01-26T16:09:06Z|00991|binding|INFO|Removing iface tap3cbece9c-48 ovn-installed in OVS
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.556 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.562 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:8a:67 10.100.0.5'], port_security=['fa:16:3e:2d:8a:67 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1c5a4474-3020-4bb1-96e5-41b02d0eb962', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e3d3c26abe454a90816833e484abbbd5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd1788d42-3ab8-4af6-9d96-b2f289468a39', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=866be92a-2c15-4b84-ae39-2a0041a8b635, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3cbece9c-4828-4d27-8ede-b67dab0fc2b0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.563 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3cbece9c-4828-4d27-8ede-b67dab0fc2b0 in datapath e7ef659c-2c7b-4cdf-8956-267cdfeff707 unbound from our chassis
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.565 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e7ef659c-2c7b-4cdf-8956-267cdfeff707, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.566 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[30f641b9-9a52-4a9d-9c38-bbd1b4e32ee3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.567 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707 namespace which is not needed anymore
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.584 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000056.scope: Deactivated successfully.
Jan 26 16:09:06 compute-0 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000056.scope: Consumed 19.322s CPU time.
Jan 26 16:09:06 compute-0 systemd-machined[208061]: Machine qemu-107-instance-00000056 terminated.
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.647 239969 DEBUG oslo_concurrency.lockutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquiring lock "efde3b30-f743-42a2-bc38-f27b2b0909dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.648 239969 DEBUG oslo_concurrency.lockutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.648 239969 DEBUG oslo_concurrency.lockutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquiring lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.648 239969 DEBUG oslo_concurrency.lockutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.648 239969 DEBUG oslo_concurrency.lockutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.649 239969 INFO nova.compute.manager [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Terminating instance
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.650 239969 DEBUG nova.compute.manager [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:09:06 compute-0 kernel: tap3b9c9999-9b (unregistering): left promiscuous mode
Jan 26 16:09:06 compute-0 NetworkManager[48954]: <info>  [1769443746.6864] device (tap3b9c9999-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:09:06 compute-0 ovn_controller[146046]: 2026-01-26T16:09:06Z|00992|binding|INFO|Releasing lport 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 from this chassis (sb_readonly=0)
Jan 26 16:09:06 compute-0 ovn_controller[146046]: 2026-01-26T16:09:06Z|00993|binding|INFO|Setting lport 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 down in Southbound
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.704 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 ovn_controller[146046]: 2026-01-26T16:09:06Z|00994|binding|INFO|Removing iface tap3b9c9999-9b ovn-installed in OVS
Jan 26 16:09:06 compute-0 neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707[315348]: [NOTICE]   (315352) : haproxy version is 2.8.14-c23fe91
Jan 26 16:09:06 compute-0 neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707[315348]: [NOTICE]   (315352) : path to executable is /usr/sbin/haproxy
Jan 26 16:09:06 compute-0 neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707[315348]: [WARNING]  (315352) : Exiting Master process...
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.710 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707[315348]: [ALERT]    (315352) : Current worker (315354) exited with code 143 (Terminated)
Jan 26 16:09:06 compute-0 neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707[315348]: [WARNING]  (315352) : All workers exited. Exiting... (0)
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.719 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:d4:6d 10.100.0.14'], port_security=['fa:16:3e:66:d4:6d 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'efde3b30-f743-42a2-bc38-f27b2b0909dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ffefe096-4dca-4912-b372-1a2319ae67f2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c702a646b6024e939e7a909ee2dea3a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ef054cc7-0f73-4149-b7e7-6ef73bb26ad3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b0d25665-6733-43f8-800f-97f6ce914748, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3b9c9999-9b1a-4f39-bf2b-dcde55e0c481) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:09:06 compute-0 systemd[1]: libpod-da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb.scope: Deactivated successfully.
Jan 26 16:09:06 compute-0 podman[325736]: 2026-01-26 16:09:06.721358696 +0000 UTC m=+0.055902269 container died da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.756 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.764 239969 INFO nova.virt.libvirt.driver [-] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Instance destroyed successfully.
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.764 239969 DEBUG nova.objects.instance [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lazy-loading 'resources' on Instance uuid 1c5a4474-3020-4bb1-96e5-41b02d0eb962 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:06 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Jan 26 16:09:06 compute-0 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 9.272s CPU time.
Jan 26 16:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb-userdata-shm.mount: Deactivated successfully.
Jan 26 16:09:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f0b6f31645d9f0ece3c6711209f5dc1bb8c997ef0ec06a8ab3ee4af6c1c6c9a-merged.mount: Deactivated successfully.
Jan 26 16:09:06 compute-0 systemd-machined[208061]: Machine qemu-123-instance-00000064 terminated.
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.782 239969 DEBUG nova.virt.libvirt.vif [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:06:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-108293772',display_name='tempest-₡-108293772',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--108293772',id=86,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:06:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='e3d3c26abe454a90816833e484abbbd5',ramdisk_id='',reservation_id='r-4cw1yfux',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-190839520',owner_user_name='tempest-ServersTestJSON-190839520-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:06:29Z,user_data=None,user_id='392bf4c554724bd3b097b990cec964ac',uuid=1c5a4474-3020-4bb1-96e5-41b02d0eb962,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.782 239969 DEBUG nova.network.os_vif_util [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converting VIF {"id": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "address": "fa:16:3e:2d:8a:67", "network": {"id": "e7ef659c-2c7b-4cdf-8956-267cdfeff707", "bridge": "br-int", "label": "tempest-ServersTestJSON-1007794324-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e3d3c26abe454a90816833e484abbbd5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cbece9c-48", "ovs_interfaceid": "3cbece9c-4828-4d27-8ede-b67dab0fc2b0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.783 239969 DEBUG nova.network.os_vif_util [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2d:8a:67,bridge_name='br-int',has_traffic_filtering=True,id=3cbece9c-4828-4d27-8ede-b67dab0fc2b0,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbece9c-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.784 239969 DEBUG os_vif [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:8a:67,bridge_name='br-int',has_traffic_filtering=True,id=3cbece9c-4828-4d27-8ede-b67dab0fc2b0,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbece9c-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.785 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 podman[325736]: 2026-01-26 16:09:06.786388837 +0000 UTC m=+0.120932410 container cleanup da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.787 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cbece9c-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.792 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.796 239969 INFO os_vif [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2d:8a:67,bridge_name='br-int',has_traffic_filtering=True,id=3cbece9c-4828-4d27-8ede-b67dab0fc2b0,network=Network(e7ef659c-2c7b-4cdf-8956-267cdfeff707),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cbece9c-48')
Jan 26 16:09:06 compute-0 systemd[1]: libpod-conmon-da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb.scope: Deactivated successfully.
Jan 26 16:09:06 compute-0 podman[325781]: 2026-01-26 16:09:06.864821366 +0000 UTC m=+0.050781554 container remove da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.874 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[58647f57-5a76-45b6-96ba-11fe05eb1262]: (4, ('Mon Jan 26 04:09:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707 (da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb)\nda1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb\nMon Jan 26 04:09:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707 (da1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb)\nda1254d4eadee8317f58eeb17eb10b0f8014ab39fb55b0c4f3629947ab8bfeeb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.876 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[09750adc-c38b-4c46-911e-62b53a35cd47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.878 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape7ef659c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.880 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 kernel: tape7ef659c-20: left promiscuous mode
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.884 239969 INFO nova.virt.libvirt.driver [-] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Instance destroyed successfully.
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.885 239969 DEBUG nova.objects.instance [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lazy-loading 'resources' on Instance uuid efde3b30-f743-42a2-bc38-f27b2b0909dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.901 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.904 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1431f328-c922-4d80-90ba-c503a32c79c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.918 239969 DEBUG nova.virt.libvirt.vif [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-1925595817',display_name='tempest-ServerMetadataNegativeTestJSON-server-1925595817',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-1925595817',id=100,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:08:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c702a646b6024e939e7a909ee2dea3a5',ramdisk_id='',reservation_id='r-b1a0qk0i',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-606192248',owner_user_name='tempest-ServerMetadataNegativeTestJSON-606192248-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:08:59Z,user_data=None,user_id='8e477503e0564b80a186bf69c65a8b50',uuid=efde3b30-f743-42a2-bc38-f27b2b0909dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.918 239969 DEBUG nova.network.os_vif_util [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Converting VIF {"id": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "address": "fa:16:3e:66:d4:6d", "network": {"id": "ffefe096-4dca-4912-b372-1a2319ae67f2", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-95484242-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c702a646b6024e939e7a909ee2dea3a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3b9c9999-9b", "ovs_interfaceid": "3b9c9999-9b1a-4f39-bf2b-dcde55e0c481", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.919 239969 DEBUG nova.network.os_vif_util [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:d4:6d,bridge_name='br-int',has_traffic_filtering=True,id=3b9c9999-9b1a-4f39-bf2b-dcde55e0c481,network=Network(ffefe096-4dca-4912-b372-1a2319ae67f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9c9999-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.919 239969 DEBUG os_vif [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:d4:6d,bridge_name='br-int',has_traffic_filtering=True,id=3b9c9999-9b1a-4f39-bf2b-dcde55e0c481,network=Network(ffefe096-4dca-4912-b372-1a2319ae67f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9c9999-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.921 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3b9c9999-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.923 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[30505a84-f870-446b-a219-80351df21138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.925 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8feac858-43ee-4e82-9208-e3e4aa48aa0e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:06 compute-0 nova_compute[239965]: 2026-01-26 16:09:06.926 239969 INFO os_vif [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:d4:6d,bridge_name='br-int',has_traffic_filtering=True,id=3b9c9999-9b1a-4f39-bf2b-dcde55e0c481,network=Network(ffefe096-4dca-4912-b372-1a2319ae67f2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3b9c9999-9b')
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.942 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[878d9fbb-8dcc-4238-a326-c1fc6929a94d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 507982, 'reachable_time': 40112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325827, 'error': None, 'target': 'ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:06 compute-0 systemd[1]: run-netns-ovnmeta\x2de7ef659c\x2d2c7b\x2d4cdf\x2d8956\x2d267cdfeff707.mount: Deactivated successfully.
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.948 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e7ef659c-2c7b-4cdf-8956-267cdfeff707 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.949 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[3021538e-2e35-430a-9196-2c002eb84544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.950 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 in datapath ffefe096-4dca-4912-b372-1a2319ae67f2 unbound from our chassis
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.952 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ffefe096-4dca-4912-b372-1a2319ae67f2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.953 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7e5f9a5e-1765-494c-a696-f641ae3cc2d7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:06.954 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2 namespace which is not needed anymore
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.048 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.127 239969 INFO nova.virt.libvirt.driver [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Deleting instance files /var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962_del
Jan 26 16:09:07 compute-0 neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2[324873]: [NOTICE]   (324902) : haproxy version is 2.8.14-c23fe91
Jan 26 16:09:07 compute-0 neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2[324873]: [NOTICE]   (324902) : path to executable is /usr/sbin/haproxy
Jan 26 16:09:07 compute-0 neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2[324873]: [WARNING]  (324902) : Exiting Master process...
Jan 26 16:09:07 compute-0 neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2[324873]: [WARNING]  (324902) : Exiting Master process...
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.129 239969 INFO nova.virt.libvirt.driver [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Deletion of /var/lib/nova/instances/1c5a4474-3020-4bb1-96e5-41b02d0eb962_del complete
Jan 26 16:09:07 compute-0 neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2[324873]: [ALERT]    (324902) : Current worker (324905) exited with code 143 (Terminated)
Jan 26 16:09:07 compute-0 neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2[324873]: [WARNING]  (324902) : All workers exited. Exiting... (0)
Jan 26 16:09:07 compute-0 systemd[1]: libpod-b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502.scope: Deactivated successfully.
Jan 26 16:09:07 compute-0 podman[325865]: 2026-01-26 16:09:07.141552855 +0000 UTC m=+0.084446587 container died b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502-userdata-shm.mount: Deactivated successfully.
Jan 26 16:09:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-4850b5c41cbeecbc0602196d19c82797ce02b5cd6c2542d3b4681ba2b029619d-merged.mount: Deactivated successfully.
Jan 26 16:09:07 compute-0 podman[325865]: 2026-01-26 16:09:07.179019131 +0000 UTC m=+0.121912863 container cleanup b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:09:07 compute-0 systemd[1]: libpod-conmon-b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502.scope: Deactivated successfully.
Jan 26 16:09:07 compute-0 ceph-mon[75140]: pgmap v1720: 305 pgs: 305 active+clean; 339 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 26 KiB/s wr, 115 op/s
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.216 239969 INFO nova.compute.manager [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Took 0.72 seconds to destroy the instance on the hypervisor.
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.217 239969 DEBUG oslo.service.loopingcall [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.218 239969 DEBUG nova.compute.manager [-] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.218 239969 DEBUG nova.network.neutron [-] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.248 239969 INFO nova.virt.libvirt.driver [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Deleting instance files /var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc_del
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.248 239969 INFO nova.virt.libvirt.driver [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Deletion of /var/lib/nova/instances/efde3b30-f743-42a2-bc38-f27b2b0909dc_del complete
Jan 26 16:09:07 compute-0 podman[325897]: 2026-01-26 16:09:07.253248817 +0000 UTC m=+0.052000253 container remove b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 16:09:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:07.259 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e712195a-b3ec-4de8-b15d-732814f34b37]: (4, ('Mon Jan 26 04:09:07 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2 (b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502)\nb4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502\nMon Jan 26 04:09:07 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2 (b4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502)\nb4861057002aea21d0f3bfe350f2eb17ae61882cc11db4141b154f1285cc6502\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:07.261 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[af4c01af-299d-45fe-bed2-6075371ec7ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:07.261 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapffefe096-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.263 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:07 compute-0 kernel: tapffefe096-40: left promiscuous mode
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:07.288 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f364a59a-c1d2-4dc5-9851-d497fa85f82c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:07.310 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e60698da-dadc-463b-911a-c520aa72e283]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:07.311 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ae71f1b8-8615-41a3-9bfc-c8229de4e364]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:07.328 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4ff86c-7e64-40ef-9353-97b61f597802]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 522833, 'reachable_time': 40729, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325912, 'error': None, 'target': 'ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:07.330 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ffefe096-4dca-4912-b372-1a2319ae67f2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:09:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:07.331 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[f3aec9c4-3003-4542-bcb8-d3f9dff844d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.420 239969 DEBUG nova.compute.manager [req-df13a10f-6693-442b-a162-950b3a3047ca req-d452df74-0a3b-4f54-b210-beb3b9bdf4f0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Received event network-vif-unplugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.420 239969 DEBUG oslo_concurrency.lockutils [req-df13a10f-6693-442b-a162-950b3a3047ca req-d452df74-0a3b-4f54-b210-beb3b9bdf4f0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.421 239969 DEBUG oslo_concurrency.lockutils [req-df13a10f-6693-442b-a162-950b3a3047ca req-d452df74-0a3b-4f54-b210-beb3b9bdf4f0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.421 239969 DEBUG oslo_concurrency.lockutils [req-df13a10f-6693-442b-a162-950b3a3047ca req-d452df74-0a3b-4f54-b210-beb3b9bdf4f0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.421 239969 DEBUG nova.compute.manager [req-df13a10f-6693-442b-a162-950b3a3047ca req-d452df74-0a3b-4f54-b210-beb3b9bdf4f0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] No waiting events found dispatching network-vif-unplugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.422 239969 DEBUG nova.compute.manager [req-df13a10f-6693-442b-a162-950b3a3047ca req-d452df74-0a3b-4f54-b210-beb3b9bdf4f0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Received event network-vif-unplugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.424 239969 INFO nova.compute.manager [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Took 0.77 seconds to destroy the instance on the hypervisor.
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.424 239969 DEBUG oslo.service.loopingcall [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.424 239969 DEBUG nova.compute.manager [-] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.424 239969 DEBUG nova.network.neutron [-] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:09:07 compute-0 systemd[1]: run-netns-ovnmeta\x2dffefe096\x2d4dca\x2d4912\x2db372\x2d1a2319ae67f2.mount: Deactivated successfully.
Jan 26 16:09:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.880 239969 DEBUG nova.network.neutron [-] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.902 239969 INFO nova.compute.manager [-] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Took 0.68 seconds to deallocate network for instance.
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.941 239969 DEBUG nova.compute.manager [req-ca22e626-90b9-4e22-af45-a6ee37006bd4 req-18fea064-9e72-4688-9450-d9a99026067f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Received event network-vif-deleted-3cbece9c-4828-4d27-8ede-b67dab0fc2b0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.954 239969 DEBUG oslo_concurrency.lockutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:07 compute-0 nova_compute[239965]: 2026-01-26 16:09:07.955 239969 DEBUG oslo_concurrency.lockutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.053 239969 DEBUG oslo_concurrency.processutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.116 239969 DEBUG nova.network.neutron [-] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.136 239969 INFO nova.compute.manager [-] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Took 0.71 seconds to deallocate network for instance.
Jan 26 16:09:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 255 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 217 op/s
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.187 239969 DEBUG oslo_concurrency.lockutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562102742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.647 239969 DEBUG oslo_concurrency.processutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.652 239969 DEBUG nova.compute.provider_tree [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.690 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443733.6894484, 08bf6991-2ba2-47d1-865a-8f6ddfd9a067 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.691 239969 INFO nova.compute.manager [-] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] VM Stopped (Lifecycle Event)
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.879 239969 DEBUG nova.compute.manager [None req-d76f489c-45dd-4502-91c6-7b09c6dd231c - - - - - -] [instance: 08bf6991-2ba2-47d1-865a-8f6ddfd9a067] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:08 compute-0 nova_compute[239965]: 2026-01-26 16:09:08.885 239969 DEBUG nova.scheduler.client.report [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.013 239969 DEBUG oslo_concurrency.lockutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.017 239969 DEBUG oslo_concurrency.lockutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.094 239969 INFO nova.scheduler.client.report [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Deleted allocations for instance 1c5a4474-3020-4bb1-96e5-41b02d0eb962
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.105 239969 DEBUG oslo_concurrency.processutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:09 compute-0 ceph-mon[75140]: pgmap v1721: 305 pgs: 305 active+clean; 255 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 217 op/s
Jan 26 16:09:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/562102742' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.332 239969 DEBUG oslo_concurrency.lockutils [None req-ea7b8cbd-8336-451c-9fa6-c6a1b6171484 392bf4c554724bd3b097b990cec964ac e3d3c26abe454a90816833e484abbbd5 - - default default] Lock "1c5a4474-3020-4bb1-96e5-41b02d0eb962" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/848434949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.683 239969 DEBUG oslo_concurrency.processutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.690 239969 DEBUG nova.compute.provider_tree [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.697 239969 DEBUG nova.compute.manager [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Received event network-vif-plugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.698 239969 DEBUG oslo_concurrency.lockutils [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.699 239969 DEBUG oslo_concurrency.lockutils [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.699 239969 DEBUG oslo_concurrency.lockutils [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.699 239969 DEBUG nova.compute.manager [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] No waiting events found dispatching network-vif-plugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.700 239969 WARNING nova.compute.manager [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Received unexpected event network-vif-plugged-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 for instance with vm_state deleted and task_state None.
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.700 239969 DEBUG nova.compute.manager [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received event network-changed-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.700 239969 DEBUG nova.compute.manager [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Refreshing instance network info cache due to event network-changed-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.701 239969 DEBUG oslo_concurrency.lockutils [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.701 239969 DEBUG oslo_concurrency.lockutils [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.701 239969 DEBUG nova.network.neutron [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Refreshing network info cache for port f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.705 239969 DEBUG nova.scheduler.client.report [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.869 239969 DEBUG oslo_concurrency.lockutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:09 compute-0 nova_compute[239965]: 2026-01-26 16:09:09.899 239969 INFO nova.scheduler.client.report [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Deleted allocations for instance efde3b30-f743-42a2-bc38-f27b2b0909dc
Jan 26 16:09:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 255 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 217 op/s
Jan 26 16:09:10 compute-0 nova_compute[239965]: 2026-01-26 16:09:10.184 239969 DEBUG oslo_concurrency.lockutils [None req-e5117cc1-9b55-426a-9d4b-26221b686f27 8e477503e0564b80a186bf69c65a8b50 c702a646b6024e939e7a909ee2dea3a5 - - default default] Lock "efde3b30-f743-42a2-bc38-f27b2b0909dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.536s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:10 compute-0 nova_compute[239965]: 2026-01-26 16:09:10.187 239969 DEBUG nova.compute.manager [req-a5db1cd0-617e-47b2-9b91-f87f1c23ed0f req-617aa731-2183-4841-99b2-55fb12738e38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Received event network-vif-deleted-3b9c9999-9b1a-4f39-bf2b-dcde55e0c481 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/848434949' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:11 compute-0 ceph-mon[75140]: pgmap v1722: 305 pgs: 305 active+clean; 255 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 30 KiB/s wr, 217 op/s
Jan 26 16:09:11 compute-0 nova_compute[239965]: 2026-01-26 16:09:11.338 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:11 compute-0 nova_compute[239965]: 2026-01-26 16:09:11.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:11 compute-0 nova_compute[239965]: 2026-01-26 16:09:11.970 239969 DEBUG nova.network.neutron [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Updated VIF entry in instance network info cache for port f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:09:11 compute-0 nova_compute[239965]: 2026-01-26 16:09:11.971 239969 DEBUG nova.network.neutron [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Updating instance_info_cache with network_info: [{"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 213 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 229 op/s
Jan 26 16:09:12 compute-0 nova_compute[239965]: 2026-01-26 16:09:12.260 239969 DEBUG oslo_concurrency.lockutils [req-139744be-9477-46ca-8ad3-53ea4619893b req-7490ce06-7156-4578-9201-07e339bee5b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:09:12 compute-0 podman[325957]: 2026-01-26 16:09:12.378622356 +0000 UTC m=+0.059310483 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:09:12 compute-0 podman[325958]: 2026-01-26 16:09:12.439259389 +0000 UTC m=+0.118797097 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 16:09:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:13 compute-0 ceph-mon[75140]: pgmap v1723: 305 pgs: 305 active+clean; 213 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 229 op/s
Jan 26 16:09:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 213 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 18 KiB/s wr, 168 op/s
Jan 26 16:09:14 compute-0 ovn_controller[146046]: 2026-01-26T16:09:14Z|00995|binding|INFO|Releasing lport 4fab6bb9-7be8-461a-b3cf-098c70ef4666 from this chassis (sb_readonly=0)
Jan 26 16:09:14 compute-0 nova_compute[239965]: 2026-01-26 16:09:14.406 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:14 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 26 16:09:14 compute-0 ceph-mon[75140]: pgmap v1724: 305 pgs: 305 active+clean; 213 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 18 KiB/s wr, 168 op/s
Jan 26 16:09:15 compute-0 nova_compute[239965]: 2026-01-26 16:09:15.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:16 compute-0 ovn_controller[146046]: 2026-01-26T16:09:16Z|00996|binding|INFO|Releasing lport 4fab6bb9-7be8-461a-b3cf-098c70ef4666 from this chassis (sb_readonly=0)
Jan 26 16:09:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 223 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 582 KiB/s wr, 131 op/s
Jan 26 16:09:16 compute-0 nova_compute[239965]: 2026-01-26 16:09:16.165 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:16 compute-0 ovn_controller[146046]: 2026-01-26T16:09:16Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1a:11:50 10.100.0.5
Jan 26 16:09:16 compute-0 ovn_controller[146046]: 2026-01-26T16:09:16Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1a:11:50 10.100.0.5
Jan 26 16:09:16 compute-0 nova_compute[239965]: 2026-01-26 16:09:16.340 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:16 compute-0 nova_compute[239965]: 2026-01-26 16:09:16.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:17 compute-0 ceph-mon[75140]: pgmap v1725: 305 pgs: 305 active+clean; 223 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 582 KiB/s wr, 131 op/s
Jan 26 16:09:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 246 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 177 op/s
Jan 26 16:09:18 compute-0 nova_compute[239965]: 2026-01-26 16:09:18.217 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:19 compute-0 ceph-mon[75140]: pgmap v1726: 305 pgs: 305 active+clean; 246 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 177 op/s
Jan 26 16:09:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1727: 305 pgs: 305 active+clean; 246 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 26 16:09:20 compute-0 ceph-mon[75140]: pgmap v1727: 305 pgs: 305 active+clean; 246 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Jan 26 16:09:20 compute-0 nova_compute[239965]: 2026-01-26 16:09:20.687 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "82fb4b02-e957-47a0-a2f8-6d93f039e7d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:20 compute-0 nova_compute[239965]: 2026-01-26 16:09:20.688 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "82fb4b02-e957-47a0-a2f8-6d93f039e7d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:20 compute-0 nova_compute[239965]: 2026-01-26 16:09:20.706 239969 DEBUG nova.compute.manager [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:09:20 compute-0 nova_compute[239965]: 2026-01-26 16:09:20.775 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:20 compute-0 nova_compute[239965]: 2026-01-26 16:09:20.775 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:20 compute-0 nova_compute[239965]: 2026-01-26 16:09:20.784 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:09:20 compute-0 nova_compute[239965]: 2026-01-26 16:09:20.784 239969 INFO nova.compute.claims [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:09:20 compute-0 nova_compute[239965]: 2026-01-26 16:09:20.917 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.248 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquiring lock "6a424452-c5eb-42aa-9bed-46777c6790eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.248 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.262 239969 DEBUG nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.344 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3589550519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.485 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.491 239969 DEBUG nova.compute.provider_tree [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.507 239969 DEBUG nova.scheduler.client.report [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.529 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.529 239969 DEBUG nova.compute.manager [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.532 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.538 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.538 239969 INFO nova.compute.claims [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.578 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.579 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.599 239969 DEBUG nova.compute.manager [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.604 239969 DEBUG nova.compute.manager [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.627 239969 INFO nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.656 239969 DEBUG nova.compute.manager [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.671 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3589550519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.732 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.773 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443746.758549, 1c5a4474-3020-4bb1-96e5-41b02d0eb962 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.773 239969 INFO nova.compute.manager [-] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] VM Stopped (Lifecycle Event)
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.776 239969 DEBUG nova.compute.manager [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.777 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.778 239969 INFO nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Creating image(s)
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.801 239969 DEBUG nova.storage.rbd_utils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.828 239969 DEBUG nova.storage.rbd_utils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.855 239969 DEBUG nova.storage.rbd_utils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.859 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.899 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443746.8795898, efde3b30-f743-42a2-bc38-f27b2b0909dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.899 239969 INFO nova.compute.manager [-] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] VM Stopped (Lifecycle Event)
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.920 239969 DEBUG nova.compute.manager [None req-23dc07eb-1695-415c-81f3-608eab46b0f9 - - - - - -] [instance: 1c5a4474-3020-4bb1-96e5-41b02d0eb962] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.922 239969 DEBUG nova.compute.manager [None req-30257014-f525-44d3-9daf-9a076e6ec8c7 - - - - - -] [instance: efde3b30-f743-42a2-bc38-f27b2b0909dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.929 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.936 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.937 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.938 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.938 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.964 239969 DEBUG nova.storage.rbd_utils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:21 compute-0 nova_compute[239965]: 2026-01-26 16:09:21.969 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 246 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 26 16:09:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3712637824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.366 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.373 239969 DEBUG nova.compute.provider_tree [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.393 239969 DEBUG nova.scheduler.client.report [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.445 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.514 239969 DEBUG nova.storage.rbd_utils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] resizing rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.604 239969 DEBUG nova.objects.instance [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'migration_context' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.610 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.611 239969 DEBUG nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.613 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.647 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.648 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Ensure instance console log exists: /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.648 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.649 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.649 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.651 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.654 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.654 239969 INFO nova.compute.claims [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.661 239969 WARNING nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.669 239969 DEBUG nova.virt.libvirt.host [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.669 239969 DEBUG nova.virt.libvirt.host [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:09:22 compute-0 ceph-mon[75140]: pgmap v1728: 305 pgs: 305 active+clean; 246 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Jan 26 16:09:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3712637824' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.681 239969 DEBUG nova.virt.libvirt.host [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.682 239969 DEBUG nova.virt.libvirt.host [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.682 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.682 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.683 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.683 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.683 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.683 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.683 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.684 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.684 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.684 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.684 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.684 239969 DEBUG nova.virt.hardware [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.688 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.751 239969 DEBUG nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.751 239969 DEBUG nova.network.neutron [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.780 239969 INFO nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.807 239969 DEBUG nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:09:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.909 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.961 239969 DEBUG nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.964 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.964 239969 INFO nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Creating image(s)
Jan 26 16:09:22 compute-0 nova_compute[239965]: 2026-01-26 16:09:22.994 239969 DEBUG nova.storage.rbd_utils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] rbd image 6a424452-c5eb-42aa-9bed-46777c6790eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.022 239969 DEBUG nova.storage.rbd_utils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] rbd image 6a424452-c5eb-42aa-9bed-46777c6790eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.050 239969 DEBUG nova.storage.rbd_utils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] rbd image 6a424452-c5eb-42aa-9bed-46777c6790eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.054 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.086 239969 DEBUG nova.policy [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '51e88d4b3c0c4ad0ab5e64b26bdfbb97', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7e72d6cbe2ea4b8fb797418b35695443', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.124 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.125 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.125 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.126 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.146 239969 DEBUG nova.storage.rbd_utils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] rbd image 6a424452-c5eb-42aa-9bed-46777c6790eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.149 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 6a424452-c5eb-42aa-9bed-46777c6790eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2001677773' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.286 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.334 239969 DEBUG nova.storage.rbd_utils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.349 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.422 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 6a424452-c5eb-42aa-9bed-46777c6790eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.473 239969 DEBUG nova.storage.rbd_utils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] resizing rbd image 6a424452-c5eb-42aa-9bed-46777c6790eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:09:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/482501623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.500 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.504 239969 DEBUG nova.compute.provider_tree [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.542 239969 DEBUG nova.scheduler.client.report [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.551 239969 DEBUG nova.objects.instance [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a424452-c5eb-42aa-9bed-46777c6790eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.568 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.569 239969 DEBUG nova.compute.manager [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.573 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.573 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Ensure instance console log exists: /var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.574 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.574 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.574 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.649 239969 DEBUG nova.compute.manager [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.664 239969 INFO nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.686 239969 DEBUG nova.compute.manager [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:09:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2001677773' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/482501623' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.790 239969 DEBUG nova.compute.manager [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.791 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.792 239969 INFO nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Creating image(s)
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.811 239969 DEBUG nova.storage.rbd_utils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.831 239969 DEBUG nova.storage.rbd_utils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.852 239969 DEBUG nova.storage.rbd_utils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.856 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.884 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.923 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.924 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.925 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.925 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.942 239969 DEBUG nova.storage.rbd_utils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.945 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2158836531' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.980 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.982 239969 DEBUG nova.objects.instance [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'pci_devices' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:23 compute-0 nova_compute[239965]: 2026-01-26 16:09:23.995 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <uuid>82fb4b02-e957-47a0-a2f8-6d93f039e7d1</uuid>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <name>instance-00000066</name>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerShowV257Test-server-2145599240</nova:name>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:09:22</nova:creationTime>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <nova:user uuid="bae7b9cd55cc4d358ac0f897d8cc1ae9">tempest-ServerShowV257Test-1554770872-project-member</nova:user>
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <nova:project uuid="d74dfc414eb3415c833bdd6ec3a75187">tempest-ServerShowV257Test-1554770872</nova:project>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <system>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <entry name="serial">82fb4b02-e957-47a0-a2f8-6d93f039e7d1</entry>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <entry name="uuid">82fb4b02-e957-47a0-a2f8-6d93f039e7d1</entry>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     </system>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <os>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   </os>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <features>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   </features>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk">
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config">
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:23 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/console.log" append="off"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <video>
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     </video>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:09:23 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:09:23 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:09:23 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:09:23 compute-0 nova_compute[239965]: </domain>
Jan 26 16:09:23 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.051 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.052 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.052 239969 INFO nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Using config drive
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.082 239969 DEBUG nova.storage.rbd_utils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 250 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.229 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.283s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.281 239969 DEBUG nova.storage.rbd_utils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] resizing rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.360 239969 DEBUG nova.objects.instance [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'migration_context' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.377 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.378 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Ensure instance console log exists: /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.378 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.379 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.379 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.381 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.386 239969 WARNING nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.390 239969 DEBUG nova.virt.libvirt.host [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.391 239969 DEBUG nova.virt.libvirt.host [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.394 239969 DEBUG nova.virt.libvirt.host [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.395 239969 DEBUG nova.virt.libvirt.host [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.396 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.396 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.397 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.398 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.398 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.398 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.399 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.399 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.400 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.400 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.401 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.401 239969 DEBUG nova.virt.hardware [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.405 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2158836531' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:24 compute-0 ceph-mon[75140]: pgmap v1729: 305 pgs: 305 active+clean; 250 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 16:09:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3754426255' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:24 compute-0 nova_compute[239965]: 2026-01-26 16:09:24.990 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.025 239969 DEBUG nova.storage.rbd_utils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.029 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.178 239969 INFO nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Creating config drive at /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.183 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx8k55dkt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.322 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx8k55dkt" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.347 239969 DEBUG nova.storage.rbd_utils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.351 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.486 239969 DEBUG oslo_concurrency.processutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.487 239969 INFO nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Deleting local config drive /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config because it was imported into RBD.
Jan 26 16:09:25 compute-0 systemd-machined[208061]: New machine qemu-125-instance-00000066.
Jan 26 16:09:25 compute-0 systemd[1]: Started Virtual Machine qemu-125-instance-00000066.
Jan 26 16:09:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4143135823' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.601 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.602 239969 DEBUG nova.objects.instance [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.624 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <uuid>2266fbd6-0bef-4f98-a9ce-5ff705dc5121</uuid>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <name>instance-00000068</name>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerShowV254Test-server-916688478</nova:name>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:09:24</nova:creationTime>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <nova:user uuid="60f17e2d4e40423689513bf5b4ed0697">tempest-ServerShowV254Test-1162693505-project-member</nova:user>
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <nova:project uuid="67b84c37a6f04b6fbba155d82794bda6">tempest-ServerShowV254Test-1162693505</nova:project>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <system>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <entry name="serial">2266fbd6-0bef-4f98-a9ce-5ff705dc5121</entry>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <entry name="uuid">2266fbd6-0bef-4f98-a9ce-5ff705dc5121</entry>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     </system>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <os>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   </os>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <features>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   </features>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk">
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config">
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:25 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/console.log" append="off"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <video>
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     </video>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:09:25 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:09:25 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:09:25 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:09:25 compute-0 nova_compute[239965]: </domain>
Jan 26 16:09:25 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.781 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.782 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.782 239969 INFO nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Using config drive
Jan 26 16:09:25 compute-0 nova_compute[239965]: 2026-01-26 16:09:25.979 239969 DEBUG nova.storage.rbd_utils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3754426255' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4143135823' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.119 239969 INFO nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Creating config drive at /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.125 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_f1co9f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1730: 305 pgs: 305 active+clean; 289 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 4.1 MiB/s wr, 95 op/s
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.284 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_f1co9f" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.314 239969 DEBUG nova.storage.rbd_utils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.318 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.353 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.426 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443766.425925, 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.426 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] VM Resumed (Lifecycle Event)
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.429 239969 DEBUG nova.compute.manager [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.430 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.433 239969 INFO nova.virt.libvirt.driver [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance spawned successfully.
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.433 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.451 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.456 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.461 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.461 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.462 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.463 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.463 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.463 239969 DEBUG nova.virt.libvirt.driver [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.474 239969 DEBUG nova.network.neutron [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Successfully created port: 60632de1-afb5-4319-9c73-66daf4c5ffd0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.498 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.498 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443766.4288096, 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.498 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] VM Started (Lifecycle Event)
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.529 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.536 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.541 239969 INFO nova.compute.manager [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Took 4.76 seconds to spawn the instance on the hypervisor.
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.542 239969 DEBUG nova.compute.manager [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.578 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.586 239969 DEBUG nova.compute.manager [req-139763eb-2fcc-4250-b04f-5e1c9e8e82c7 req-7dae35bb-8c05-4eae-9727-b26455083108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received event network-changed-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.586 239969 DEBUG nova.compute.manager [req-139763eb-2fcc-4250-b04f-5e1c9e8e82c7 req-7dae35bb-8c05-4eae-9727-b26455083108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Refreshing instance network info cache due to event network-changed-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.587 239969 DEBUG oslo_concurrency.lockutils [req-139763eb-2fcc-4250-b04f-5e1c9e8e82c7 req-7dae35bb-8c05-4eae-9727-b26455083108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.587 239969 DEBUG oslo_concurrency.lockutils [req-139763eb-2fcc-4250-b04f-5e1c9e8e82c7 req-7dae35bb-8c05-4eae-9727-b26455083108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.587 239969 DEBUG nova.network.neutron [req-139763eb-2fcc-4250-b04f-5e1c9e8e82c7 req-7dae35bb-8c05-4eae-9727-b26455083108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Refreshing network info cache for port f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.617 239969 DEBUG oslo_concurrency.lockutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "441048cd-ded0-43ff-93b7-bacf44bd6c17" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.617 239969 DEBUG oslo_concurrency.lockutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.618 239969 DEBUG oslo_concurrency.lockutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.618 239969 DEBUG oslo_concurrency.lockutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.618 239969 DEBUG oslo_concurrency.lockutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.619 239969 INFO nova.compute.manager [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Terminating instance
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.620 239969 DEBUG nova.compute.manager [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.622 239969 INFO nova.compute.manager [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Took 5.87 seconds to build instance.
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.638 239969 DEBUG oslo_concurrency.lockutils [None req-70f88f92-3302-42d8-bfad-021f4945fd7c bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "82fb4b02-e957-47a0-a2f8-6d93f039e7d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.931 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:26 compute-0 kernel: tapf92a9aeb-bc (unregistering): left promiscuous mode
Jan 26 16:09:26 compute-0 NetworkManager[48954]: <info>  [1769443766.9596] device (tapf92a9aeb-bc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:09:26 compute-0 ovn_controller[146046]: 2026-01-26T16:09:26Z|00997|binding|INFO|Releasing lport f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d from this chassis (sb_readonly=0)
Jan 26 16:09:26 compute-0 ovn_controller[146046]: 2026-01-26T16:09:26Z|00998|binding|INFO|Setting lport f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d down in Southbound
Jan 26 16:09:26 compute-0 ovn_controller[146046]: 2026-01-26T16:09:26Z|00999|binding|INFO|Removing iface tapf92a9aeb-bc ovn-installed in OVS
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.967 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.970 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:26.974 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1a:11:50 10.100.0.5 2001:db8::f816:3eff:fe1a:1150'], port_security=['fa:16:3e:1a:11:50 10.100.0.5 2001:db8::f816:3eff:fe1a:1150'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8::f816:3eff:fe1a:1150/64', 'neutron:device_id': '441048cd-ded0-43ff-93b7-bacf44bd6c17', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2a2f41a-90e4-40a5-bfe6-8f9c75570ebd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afff7a52-f9b6-46d8-88e0-ad54d3e2a5be, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:09:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:26.976 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d in datapath 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 unbound from our chassis
Jan 26 16:09:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:26.978 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4
Jan 26 16:09:26 compute-0 nova_compute[239965]: 2026-01-26 16:09:26.986 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.003 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f4284bd1-c3c3-43e6-aa5b-9640c217e4e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:27 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000065.scope: Deactivated successfully.
Jan 26 16:09:27 compute-0 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000065.scope: Consumed 14.350s CPU time.
Jan 26 16:09:27 compute-0 systemd-machined[208061]: Machine qemu-124-instance-00000065 terminated.
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.033 239969 DEBUG oslo_concurrency.processutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.715s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.034 239969 INFO nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Deleting local config drive /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config because it was imported into RBD.
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.041 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c2ce1a17-6dbd-4c13-adc2-3403fe9f1010]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.045 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7e025849-c476-4310-906d-bda39d16c8ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.057 239969 INFO nova.virt.libvirt.driver [-] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Instance destroyed successfully.
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.058 239969 DEBUG nova.objects.instance [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 441048cd-ded0-43ff-93b7-bacf44bd6c17 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:27 compute-0 ceph-mon[75140]: pgmap v1730: 305 pgs: 305 active+clean; 289 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 4.1 MiB/s wr, 95 op/s
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.071 239969 DEBUG nova.virt.libvirt.vif [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:08:45Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-718393457',display_name='tempest-TestGettingAddress-server-718393457',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-718393457',id=101,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOs4HnVuAJnvNp8CfE4X6ZDqYLND76tO1hTE0oumV/FRhtgz3i41mfhLwvewrya4uSND6SmxBSCPXJNfLvwF+32ywxOiO4XFvZqstko0NAzPzy/X3VMwFSRh9eMG+0ToA==',key_name='tempest-TestGettingAddress-841928054',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:09:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-rfau3ynl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:09:02Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=441048cd-ded0-43ff-93b7-bacf44bd6c17,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.072 239969 DEBUG nova.network.os_vif_util [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.073 239969 DEBUG nova.network.os_vif_util [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1a:11:50,bridge_name='br-int',has_traffic_filtering=True,id=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92a9aeb-bc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.073 239969 DEBUG os_vif [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:11:50,bridge_name='br-int',has_traffic_filtering=True,id=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92a9aeb-bc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.075 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.075 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf92a9aeb-bc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.077 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.084 239969 INFO os_vif [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1a:11:50,bridge_name='br-int',has_traffic_filtering=True,id=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf92a9aeb-bc')
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.084 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[78baf824-6ceb-4a03-b28c-c355930e6d48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.105 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9c79bb67-9224-4047-97ae-129d82bfeacb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8b5a2723-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:35:ea:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 34, 'tx_packets': 8, 'rx_bytes': 2780, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519130, 'reachable_time': 41400, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326891, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:27 compute-0 systemd-machined[208061]: New machine qemu-126-instance-00000068.
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.129 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ff3a81b6-f575-40fc-8d13-043eb7e45ba3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8b5a2723-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519143, 'tstamp': 519143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326910, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8b5a2723-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 519146, 'tstamp': 519146}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326910, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:27 compute-0 systemd[1]: Started Virtual Machine qemu-126-instance-00000068.
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.132 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b5a2723-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.135 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.138 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8b5a2723-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.139 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.140 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8b5a2723-70, col_values=(('external_ids', {'iface-id': '4fab6bb9-7be8-461a-b3cf-098c70ef4666'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:27.141 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.342 239969 INFO nova.virt.libvirt.driver [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Deleting instance files /var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17_del
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.342 239969 INFO nova.virt.libvirt.driver [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Deletion of /var/lib/nova/instances/441048cd-ded0-43ff-93b7-bacf44bd6c17_del complete
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.399 239969 INFO nova.compute.manager [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Took 0.78 seconds to destroy the instance on the hypervisor.
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.399 239969 DEBUG oslo.service.loopingcall [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.399 239969 DEBUG nova.compute.manager [-] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.399 239969 DEBUG nova.network.neutron [-] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:09:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:27 compute-0 nova_compute[239965]: 2026-01-26 16:09:27.983 239969 DEBUG nova.network.neutron [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Successfully updated port: 60632de1-afb5-4319-9c73-66daf4c5ffd0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.012 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquiring lock "refresh_cache-6a424452-c5eb-42aa-9bed-46777c6790eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.012 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquired lock "refresh_cache-6a424452-c5eb-42aa-9bed-46777c6790eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.012 239969 DEBUG nova.network.neutron [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:09:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 362 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 6.9 MiB/s wr, 171 op/s
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.193 239969 DEBUG nova.network.neutron [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.581 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443768.581673, 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.582 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] VM Resumed (Lifecycle Event)
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.589 239969 DEBUG nova.compute.manager [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.590 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.594 239969 INFO nova.virt.libvirt.driver [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance spawned successfully.
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.594 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.612 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.619 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.621 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.622 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.622 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.622 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.623 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.623 239969 DEBUG nova.virt.libvirt.driver [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:09:28
Jan 26 16:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.control', '.rgw.root', 'vms', 'images', 'backups', 'default.rgw.meta', 'volumes', 'default.rgw.log']
Jan 26 16:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.652 239969 DEBUG nova.network.neutron [-] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.655 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.655 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443768.5890117, 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.656 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] VM Started (Lifecycle Event)
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.700 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.703 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.775 239969 INFO nova.compute.manager [-] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Took 1.38 seconds to deallocate network for instance.
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.806 239969 INFO nova.compute.manager [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Took 5.02 seconds to spawn the instance on the hypervisor.
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.807 239969 DEBUG nova.compute.manager [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.807 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.841 239969 DEBUG nova.network.neutron [req-139763eb-2fcc-4250-b04f-5e1c9e8e82c7 req-7dae35bb-8c05-4eae-9727-b26455083108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Updated VIF entry in instance network info cache for port f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.842 239969 DEBUG nova.network.neutron [req-139763eb-2fcc-4250-b04f-5e1c9e8e82c7 req-7dae35bb-8c05-4eae-9727-b26455083108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Updating instance_info_cache with network_info: [{"id": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "address": "fa:16:3e:1a:11:50", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe1a:1150", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf92a9aeb-bc", "ovs_interfaceid": "f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.876 239969 DEBUG oslo_concurrency.lockutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.876 239969 DEBUG oslo_concurrency.lockutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.890 239969 DEBUG oslo_concurrency.lockutils [req-139763eb-2fcc-4250-b04f-5e1c9e8e82c7 req-7dae35bb-8c05-4eae-9727-b26455083108 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-441048cd-ded0-43ff-93b7-bacf44bd6c17" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.892 239969 INFO nova.compute.manager [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Took 7.25 seconds to build instance.
Jan 26 16:09:28 compute-0 nova_compute[239965]: 2026-01-26 16:09:28.916 239969 DEBUG oslo_concurrency.lockutils [None req-04a228b3-2288-4071-a90b-9effcdc5b6fe 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.337s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.015 239969 DEBUG nova.compute.manager [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received event network-vif-unplugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.015 239969 DEBUG oslo_concurrency.lockutils [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.015 239969 DEBUG oslo_concurrency.lockutils [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.016 239969 DEBUG oslo_concurrency.lockutils [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.016 239969 DEBUG nova.compute.manager [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] No waiting events found dispatching network-vif-unplugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.016 239969 WARNING nova.compute.manager [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received unexpected event network-vif-unplugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d for instance with vm_state deleted and task_state None.
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.016 239969 DEBUG nova.compute.manager [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received event network-vif-plugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.017 239969 DEBUG oslo_concurrency.lockutils [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.017 239969 DEBUG oslo_concurrency.lockutils [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.017 239969 DEBUG oslo_concurrency.lockutils [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.017 239969 DEBUG nova.compute.manager [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] No waiting events found dispatching network-vif-plugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.018 239969 WARNING nova.compute.manager [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received unexpected event network-vif-plugged-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d for instance with vm_state deleted and task_state None.
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.018 239969 DEBUG nova.compute.manager [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Received event network-changed-60632de1-afb5-4319-9c73-66daf4c5ffd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.018 239969 DEBUG nova.compute.manager [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Refreshing instance network info cache due to event network-changed-60632de1-afb5-4319-9c73-66daf4c5ffd0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.018 239969 DEBUG oslo_concurrency.lockutils [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-6a424452-c5eb-42aa-9bed-46777c6790eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.020 239969 DEBUG oslo_concurrency.processutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.059 239969 INFO nova.compute.manager [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Rebuilding instance
Jan 26 16:09:29 compute-0 ceph-mon[75140]: pgmap v1731: 305 pgs: 305 active+clean; 362 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 474 KiB/s rd, 6.9 MiB/s wr, 171 op/s
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.257 239969 DEBUG nova.network.neutron [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Updating instance_info_cache with network_info: [{"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.276 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Releasing lock "refresh_cache-6a424452-c5eb-42aa-9bed-46777c6790eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.277 239969 DEBUG nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Instance network_info: |[{"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.278 239969 DEBUG oslo_concurrency.lockutils [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-6a424452-c5eb-42aa-9bed-46777c6790eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.278 239969 DEBUG nova.network.neutron [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Refreshing network info cache for port 60632de1-afb5-4319-9c73-66daf4c5ffd0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.282 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Start _get_guest_xml network_info=[{"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.300 239969 WARNING nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.303 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.307 239969 DEBUG nova.virt.libvirt.host [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.308 239969 DEBUG nova.virt.libvirt.host [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.314 239969 DEBUG nova.virt.libvirt.host [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.315 239969 DEBUG nova.virt.libvirt.host [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.316 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.316 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.317 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.317 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.318 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.318 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.319 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.319 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.320 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.320 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.320 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.321 239969 DEBUG nova.virt.hardware [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.324 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.371 239969 DEBUG nova.compute.manager [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.417 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'pci_requests' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.431 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'pci_devices' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.443 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'resources' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.455 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'migration_context' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.467 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.471 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:09:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002665850' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.694 239969 DEBUG oslo_concurrency.processutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.675s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.700 239969 DEBUG nova.compute.provider_tree [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.717 239969 DEBUG nova.scheduler.client.report [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.739 239969 DEBUG oslo_concurrency.lockutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.779 239969 INFO nova.scheduler.client.report [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 441048cd-ded0-43ff-93b7-bacf44bd6c17
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.851 239969 DEBUG oslo_concurrency.lockutils [None req-bfacfe82-1e10-4c50-85f8-2072a8ffdadc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "441048cd-ded0-43ff-93b7-bacf44bd6c17" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/56303517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:29 compute-0 nova_compute[239965]: 2026-01-26 16:09:29.994 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.017 239969 DEBUG nova.storage.rbd_utils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] rbd image 6a424452-c5eb-42aa-9bed-46777c6790eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.021 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 362 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 5.4 MiB/s wr, 121 op/s
Jan 26 16:09:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4002665850' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/56303517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:09:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1019343774' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.619 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.620 239969 DEBUG nova.virt.libvirt.vif [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:09:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-580633455',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-580633455',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-580633455',id=103,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e72d6cbe2ea4b8fb797418b35695443',ramdisk_id='',reservation_id='r-ltbho49l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-118530518',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-118530518-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:09:22Z,user_data=None,user_id='51e88d4b3c0c4ad0ab5e64b26bdfbb97',uuid=6a424452-c5eb-42aa-9bed-46777c6790eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.621 239969 DEBUG nova.network.os_vif_util [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Converting VIF {"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.622 239969 DEBUG nova.network.os_vif_util [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:04:57,bridge_name='br-int',has_traffic_filtering=True,id=60632de1-afb5-4319-9c73-66daf4c5ffd0,network=Network(8adc6d2b-03da-4ff3-8b51-594f2e5ddd81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60632de1-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.623 239969 DEBUG nova.objects.instance [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a424452-c5eb-42aa-9bed-46777c6790eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.637 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <uuid>6a424452-c5eb-42aa-9bed-46777c6790eb</uuid>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <name>instance-00000067</name>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-580633455</nova:name>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:09:29</nova:creationTime>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <nova:user uuid="51e88d4b3c0c4ad0ab5e64b26bdfbb97">tempest-ServersNegativeTestMultiTenantJSON-118530518-project-member</nova:user>
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <nova:project uuid="7e72d6cbe2ea4b8fb797418b35695443">tempest-ServersNegativeTestMultiTenantJSON-118530518</nova:project>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <nova:port uuid="60632de1-afb5-4319-9c73-66daf4c5ffd0">
Jan 26 16:09:30 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <system>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <entry name="serial">6a424452-c5eb-42aa-9bed-46777c6790eb</entry>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <entry name="uuid">6a424452-c5eb-42aa-9bed-46777c6790eb</entry>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     </system>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <os>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   </os>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <features>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   </features>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/6a424452-c5eb-42aa-9bed-46777c6790eb_disk">
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/6a424452-c5eb-42aa-9bed-46777c6790eb_disk.config">
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:30 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:a4:04:57"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <target dev="tap60632de1-af"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb/console.log" append="off"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <video>
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     </video>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:09:30 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:09:30 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:09:30 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:09:30 compute-0 nova_compute[239965]: </domain>
Jan 26 16:09:30 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.643 239969 DEBUG nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Preparing to wait for external event network-vif-plugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.644 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquiring lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.644 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.644 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.645 239969 DEBUG nova.virt.libvirt.vif [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:09:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-580633455',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-580633455',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-580633455',id=103,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e72d6cbe2ea4b8fb797418b35695443',ramdisk_id='',reservation_id='r-ltbho49l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-118530518',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-118530518-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:09:22Z,user_data=None,user_id='51e88d4b3c0c4ad0ab5e64b26bdfbb97',uuid=6a424452-c5eb-42aa-9bed-46777c6790eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.646 239969 DEBUG nova.network.os_vif_util [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Converting VIF {"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.646 239969 DEBUG nova.network.os_vif_util [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:04:57,bridge_name='br-int',has_traffic_filtering=True,id=60632de1-afb5-4319-9c73-66daf4c5ffd0,network=Network(8adc6d2b-03da-4ff3-8b51-594f2e5ddd81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60632de1-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.647 239969 DEBUG os_vif [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:04:57,bridge_name='br-int',has_traffic_filtering=True,id=60632de1-afb5-4319-9c73-66daf4c5ffd0,network=Network(8adc6d2b-03da-4ff3-8b51-594f2e5ddd81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60632de1-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.647 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.648 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.649 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:09:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.653 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.654 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60632de1-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.654 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60632de1-af, col_values=(('external_ids', {'iface-id': '60632de1-afb5-4319-9c73-66daf4c5ffd0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:04:57', 'vm-uuid': '6a424452-c5eb-42aa-9bed-46777c6790eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.656 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:30 compute-0 NetworkManager[48954]: <info>  [1769443770.6567] manager: (tap60632de1-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.658 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.661 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.662 239969 INFO os_vif [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:04:57,bridge_name='br-int',has_traffic_filtering=True,id=60632de1-afb5-4319-9c73-66daf4c5ffd0,network=Network(8adc6d2b-03da-4ff3-8b51-594f2e5ddd81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60632de1-af')
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.730 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.730 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.730 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] No VIF found with MAC fa:16:3e:a4:04:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.731 239969 INFO nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Using config drive
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.751 239969 DEBUG nova.storage.rbd_utils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] rbd image 6a424452-c5eb-42aa-9bed-46777c6790eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.892 239969 DEBUG nova.network.neutron [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Updated VIF entry in instance network info cache for port 60632de1-afb5-4319-9c73-66daf4c5ffd0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.893 239969 DEBUG nova.network.neutron [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Updating instance_info_cache with network_info: [{"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:30 compute-0 nova_compute[239965]: 2026-01-26 16:09:30.907 239969 DEBUG oslo_concurrency.lockutils [req-07e5c6d1-ac26-451d-befd-f5cfe2a44ea1 req-3ecda44c-7ea5-4c3c-b6f6-0a9cdb3607d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-6a424452-c5eb-42aa-9bed-46777c6790eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.009 239969 INFO nova.compute.manager [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Rebuilding instance
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.069 239969 INFO nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Creating config drive at /var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb/disk.config
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.074 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsiy5jj6a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.111 239969 DEBUG nova.compute.manager [req-5d7cb685-c6e0-4498-899a-720f0df2ac3b req-a3ef6306-c075-44d7-9b9e-e1de54e78b7f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Received event network-vif-deleted-f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.112 239969 INFO nova.compute.manager [req-5d7cb685-c6e0-4498-899a-720f0df2ac3b req-a3ef6306-c075-44d7-9b9e-e1de54e78b7f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Neutron deleted interface f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d; detaching it from the instance and deleting it from the info cache
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.113 239969 DEBUG nova.network.neutron [req-5d7cb685-c6e0-4498-899a-720f0df2ac3b req-a3ef6306-c075-44d7-9b9e-e1de54e78b7f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.119 239969 DEBUG nova.compute.manager [req-5d7cb685-c6e0-4498-899a-720f0df2ac3b req-a3ef6306-c075-44d7-9b9e-e1de54e78b7f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Detach interface failed, port_id=f92a9aeb-bc84-4ae4-a711-4c5ff5c0446d, reason: Instance 441048cd-ded0-43ff-93b7-bacf44bd6c17 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.215 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsiy5jj6a" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:31 compute-0 ceph-mon[75140]: pgmap v1732: 305 pgs: 305 active+clean; 362 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 5.4 MiB/s wr, 121 op/s
Jan 26 16:09:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1019343774' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.251 239969 DEBUG nova.storage.rbd_utils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] rbd image 6a424452-c5eb-42aa-9bed-46777c6790eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.255 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb/disk.config 6a424452-c5eb-42aa-9bed-46777c6790eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.293 239969 DEBUG nova.objects.instance [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.315 239969 DEBUG nova.compute.manager [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.359 239969 DEBUG nova.objects.instance [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'pci_requests' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.373 239969 DEBUG nova.objects.instance [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.385 239969 DEBUG nova.objects.instance [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'resources' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.389 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.395 239969 DEBUG nova.objects.instance [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'migration_context' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.407 239969 DEBUG nova.objects.instance [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.412 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.437 239969 DEBUG oslo_concurrency.processutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb/disk.config 6a424452-c5eb-42aa-9bed-46777c6790eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.437 239969 INFO nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Deleting local config drive /var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb/disk.config because it was imported into RBD.
Jan 26 16:09:31 compute-0 kernel: tap60632de1-af: entered promiscuous mode
Jan 26 16:09:31 compute-0 NetworkManager[48954]: <info>  [1769443771.4919] manager: (tap60632de1-af): new Tun device (/org/freedesktop/NetworkManager/Devices/412)
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.494 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:31 compute-0 ovn_controller[146046]: 2026-01-26T16:09:31Z|01000|binding|INFO|Claiming lport 60632de1-afb5-4319-9c73-66daf4c5ffd0 for this chassis.
Jan 26 16:09:31 compute-0 ovn_controller[146046]: 2026-01-26T16:09:31Z|01001|binding|INFO|60632de1-afb5-4319-9c73-66daf4c5ffd0: Claiming fa:16:3e:a4:04:57 10.100.0.11
Jan 26 16:09:31 compute-0 systemd-udevd[327118]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:09:31 compute-0 systemd-machined[208061]: New machine qemu-127-instance-00000067.
Jan 26 16:09:31 compute-0 ovn_controller[146046]: 2026-01-26T16:09:31Z|01002|binding|INFO|Setting lport 60632de1-afb5-4319-9c73-66daf4c5ffd0 ovn-installed in OVS
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.524 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.527 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:31 compute-0 NetworkManager[48954]: <info>  [1769443771.5319] device (tap60632de1-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:09:31 compute-0 NetworkManager[48954]: <info>  [1769443771.5331] device (tap60632de1-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:09:31 compute-0 systemd[1]: Started Virtual Machine qemu-127-instance-00000067.
Jan 26 16:09:31 compute-0 ovn_controller[146046]: 2026-01-26T16:09:31Z|01003|binding|INFO|Setting lport 60632de1-afb5-4319-9c73-66daf4c5ffd0 up in Southbound
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.583 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:04:57 10.100.0.11'], port_security=['fa:16:3e:a4:04:57 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6a424452-c5eb-42aa-9bed-46777c6790eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e72d6cbe2ea4b8fb797418b35695443', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1513c325-31f8-4295-985d-299c911a2ed6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17d5dc96-8a66-4f58-8003-f923455547d5, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=60632de1-afb5-4319-9c73-66daf4c5ffd0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.584 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 60632de1-afb5-4319-9c73-66daf4c5ffd0 in datapath 8adc6d2b-03da-4ff3-8b51-594f2e5ddd81 bound to our chassis
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.585 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8adc6d2b-03da-4ff3-8b51-594f2e5ddd81
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.598 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1c16d55f-fca1-461e-ba7e-cf6c337f0f7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.599 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8adc6d2b-01 in ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.600 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8adc6d2b-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.600 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc9b17c-1486-47bc-8852-cd09ad1a7418]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.601 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9a5bbff0-a944-47ee-94fc-5fcee4ba5304]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.617 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ad51ae07-5327-4b5a-bc5a-504baf57eee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.642 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[667e38f5-7da3-4246-8158-a39b7fa2b9b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.667 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f4047606-6a0c-4fdc-94eb-49104ae2dc99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.673 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0ec4f566-bfe0-4d51-9bb8-4a27ccceb89d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 NetworkManager[48954]: <info>  [1769443771.6747] manager: (tap8adc6d2b-00): new Veth device (/org/freedesktop/NetworkManager/Devices/413)
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.707 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[66d5dbac-53b3-44b3-af79-fac5ff136d31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.710 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[76de4a21-a5fa-4f63-b55a-573910235aeb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 NetworkManager[48954]: <info>  [1769443771.7329] device (tap8adc6d2b-00): carrier: link connected
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.739 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7fbc8a9f-b900-4c02-a918-1de586cc0610]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.758 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4c4c97-7c6e-4156-9d39-73eaf5cfe8b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8adc6d2b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:99:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 295], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526253, 'reachable_time': 44126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327152, 'error': None, 'target': 'ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.782 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[46c71ad6-8f3e-4341-b08e-5202ca881271]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedc:99ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526253, 'tstamp': 526253}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 327153, 'error': None, 'target': 'ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.799 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c1357700-5ba3-4c92-8846-8251a01762ab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8adc6d2b-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:dc:99:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 295], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526253, 'reachable_time': 44126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 327154, 'error': None, 'target': 'ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.831 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f7acfb88-699f-4c67-8c09-8da55269642f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.899 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b8bb15-3385-4bb2-a469-15dcca0c12b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.900 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8adc6d2b-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.900 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.901 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8adc6d2b-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:31 compute-0 NetworkManager[48954]: <info>  [1769443771.9032] manager: (tap8adc6d2b-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/414)
Jan 26 16:09:31 compute-0 kernel: tap8adc6d2b-00: entered promiscuous mode
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.904 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.907 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8adc6d2b-00, col_values=(('external_ids', {'iface-id': 'fc8764ed-f8f3-43de-bb01-1cf76743b6bd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.908 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:31 compute-0 ovn_controller[146046]: 2026-01-26T16:09:31Z|01004|binding|INFO|Releasing lport fc8764ed-f8f3-43de-bb01-1cf76743b6bd from this chassis (sb_readonly=0)
Jan 26 16:09:31 compute-0 nova_compute[239965]: 2026-01-26 16:09:31.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.928 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8adc6d2b-03da-4ff3-8b51-594f2e5ddd81.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8adc6d2b-03da-4ff3-8b51-594f2e5ddd81.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.929 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e90fc22-d6cb-4e3e-8bf6-dcd0a677e50e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.930 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/8adc6d2b-03da-4ff3-8b51-594f2e5ddd81.pid.haproxy
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 8adc6d2b-03da-4ff3-8b51-594f2e5ddd81
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:09:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:31.931 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81', 'env', 'PROCESS_TAG=haproxy-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8adc6d2b-03da-4ff3-8b51-594f2e5ddd81.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:09:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1733: 305 pgs: 305 active+clean; 306 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 202 op/s
Jan 26 16:09:32 compute-0 podman[327185]: 2026-01-26 16:09:32.326876996 +0000 UTC m=+0.049786880 container create 51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 16:09:32 compute-0 systemd[1]: Started libpod-conmon-51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968.scope.
Jan 26 16:09:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:09:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52e715615bf1bd7386651a64d1c194124b3478be1bc30d086ad658e771998402/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:09:32 compute-0 podman[327185]: 2026-01-26 16:09:32.301042554 +0000 UTC m=+0.023952458 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:09:32 compute-0 podman[327185]: 2026-01-26 16:09:32.405738694 +0000 UTC m=+0.128648578 container init 51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 26 16:09:32 compute-0 podman[327185]: 2026-01-26 16:09:32.411405583 +0000 UTC m=+0.134315467 container start 51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.424 239969 DEBUG oslo_concurrency.lockutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "59b6d7ef-0e86-41af-af94-b94def8570b0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.424 239969 DEBUG oslo_concurrency.lockutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.425 239969 DEBUG oslo_concurrency.lockutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.425 239969 DEBUG oslo_concurrency.lockutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.425 239969 DEBUG oslo_concurrency.lockutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.426 239969 INFO nova.compute.manager [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Terminating instance
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.427 239969 DEBUG nova.compute.manager [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:09:32 compute-0 neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81[327200]: [NOTICE]   (327204) : New worker (327206) forked
Jan 26 16:09:32 compute-0 neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81[327200]: [NOTICE]   (327204) : Loading success.
Jan 26 16:09:32 compute-0 kernel: tapa8973089-7e (unregistering): left promiscuous mode
Jan 26 16:09:32 compute-0 NetworkManager[48954]: <info>  [1769443772.4742] device (tapa8973089-7e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.476 239969 DEBUG nova.compute.manager [req-7c93b89a-019c-4ada-9a79-177216d9f2e2 req-e8b7d73b-e934-4c3d-9f5b-f702a4d0edb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Received event network-vif-plugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.477 239969 DEBUG oslo_concurrency.lockutils [req-7c93b89a-019c-4ada-9a79-177216d9f2e2 req-e8b7d73b-e934-4c3d-9f5b-f702a4d0edb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.477 239969 DEBUG oslo_concurrency.lockutils [req-7c93b89a-019c-4ada-9a79-177216d9f2e2 req-e8b7d73b-e934-4c3d-9f5b-f702a4d0edb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.477 239969 DEBUG oslo_concurrency.lockutils [req-7c93b89a-019c-4ada-9a79-177216d9f2e2 req-e8b7d73b-e934-4c3d-9f5b-f702a4d0edb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.477 239969 DEBUG nova.compute.manager [req-7c93b89a-019c-4ada-9a79-177216d9f2e2 req-e8b7d73b-e934-4c3d-9f5b-f702a4d0edb5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Processing event network-vif-plugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.524 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:32 compute-0 ovn_controller[146046]: 2026-01-26T16:09:32Z|01005|binding|INFO|Releasing lport a8973089-7e12-4dd1-be5a-417921b2ddf9 from this chassis (sb_readonly=0)
Jan 26 16:09:32 compute-0 ovn_controller[146046]: 2026-01-26T16:09:32Z|01006|binding|INFO|Setting lport a8973089-7e12-4dd1-be5a-417921b2ddf9 down in Southbound
Jan 26 16:09:32 compute-0 ovn_controller[146046]: 2026-01-26T16:09:32Z|01007|binding|INFO|Removing iface tapa8973089-7e ovn-installed in OVS
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.525 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.531 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5b:71:19 10.100.0.11 2001:db8::f816:3eff:fe5b:7119'], port_security=['fa:16:3e:5b:71:19 10.100.0.11 2001:db8::f816:3eff:fe5b:7119'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe5b:7119/64', 'neutron:device_id': '59b6d7ef-0e86-41af-af94-b94def8570b0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a2a2f41a-90e4-40a5-bfe6-8f9c75570ebd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=afff7a52-f9b6-46d8-88e0-ad54d3e2a5be, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a8973089-7e12-4dd1-be5a-417921b2ddf9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.533 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a8973089-7e12-4dd1-be5a-417921b2ddf9 in datapath 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 unbound from our chassis
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.534 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8b5a2723-70e1-4acc-a601-a4f0bf0d77a4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.535 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f757d9c3-ffa0-497f-836b-aa322e3860af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.535 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 namespace which is not needed anymore
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:32 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Deactivated successfully.
Jan 26 16:09:32 compute-0 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000062.scope: Consumed 15.837s CPU time.
Jan 26 16:09:32 compute-0 systemd-machined[208061]: Machine qemu-120-instance-00000062 terminated.
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.662 239969 INFO nova.virt.libvirt.driver [-] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Instance destroyed successfully.
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.664 239969 DEBUG nova.objects.instance [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 59b6d7ef-0e86-41af-af94-b94def8570b0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.680 239969 DEBUG nova.virt.libvirt.vif [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:08:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-491867800',display_name='tempest-TestGettingAddress-server-491867800',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-491867800',id=98,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMOs4HnVuAJnvNp8CfE4X6ZDqYLND76tO1hTE0oumV/FRhtgz3i41mfhLwvewrya4uSND6SmxBSCPXJNfLvwF+32ywxOiO4XFvZqstko0NAzPzy/X3VMwFSRh9eMG+0ToA==',key_name='tempest-TestGettingAddress-841928054',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:08:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-vhhodg3c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:08:22Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=59b6d7ef-0e86-41af-af94-b94def8570b0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.681 239969 DEBUG nova.network.os_vif_util [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.681 239969 DEBUG nova.network.os_vif_util [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5b:71:19,bridge_name='br-int',has_traffic_filtering=True,id=a8973089-7e12-4dd1-be5a-417921b2ddf9,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8973089-7e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.682 239969 DEBUG os_vif [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:71:19,bridge_name='br-int',has_traffic_filtering=True,id=a8973089-7e12-4dd1-be5a-417921b2ddf9,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8973089-7e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.683 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.683 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa8973089-7e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:32 compute-0 neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4[323237]: [NOTICE]   (323241) : haproxy version is 2.8.14-c23fe91
Jan 26 16:09:32 compute-0 neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4[323237]: [NOTICE]   (323241) : path to executable is /usr/sbin/haproxy
Jan 26 16:09:32 compute-0 neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4[323237]: [WARNING]  (323241) : Exiting Master process...
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.685 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.686 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:32 compute-0 neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4[323237]: [WARNING]  (323241) : Exiting Master process...
Jan 26 16:09:32 compute-0 neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4[323237]: [ALERT]    (323241) : Current worker (323243) exited with code 143 (Terminated)
Jan 26 16:09:32 compute-0 neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4[323237]: [WARNING]  (323241) : All workers exited. Exiting... (0)
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.688 239969 INFO os_vif [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5b:71:19,bridge_name='br-int',has_traffic_filtering=True,id=a8973089-7e12-4dd1-be5a-417921b2ddf9,network=Network(8b5a2723-70e1-4acc-a601-a4f0bf0d77a4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa8973089-7e')
Jan 26 16:09:32 compute-0 systemd[1]: libpod-24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc.scope: Deactivated successfully.
Jan 26 16:09:32 compute-0 podman[327257]: 2026-01-26 16:09:32.699714546 +0000 UTC m=+0.067584534 container died 24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc-userdata-shm.mount: Deactivated successfully.
Jan 26 16:09:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c6552f4e883ae6f8f0ba3dcddf7dce17cef332686611fb36173eb8a9b10d37d-merged.mount: Deactivated successfully.
Jan 26 16:09:32 compute-0 podman[327257]: 2026-01-26 16:09:32.739693604 +0000 UTC m=+0.107563592 container cleanup 24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:09:32 compute-0 systemd[1]: libpod-conmon-24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc.scope: Deactivated successfully.
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.757 239969 DEBUG nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.764 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443772.7644339, 6a424452-c5eb-42aa-9bed-46777c6790eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.764 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] VM Started (Lifecycle Event)
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.766 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.773 239969 INFO nova.virt.libvirt.driver [-] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Instance spawned successfully.
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.773 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.785 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.792 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.796 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.797 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.797 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.798 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.798 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.799 239969 DEBUG nova.virt.libvirt.driver [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:32 compute-0 podman[327330]: 2026-01-26 16:09:32.820760587 +0000 UTC m=+0.054209107 container remove 24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.828 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1f457f89-bdfe-4508-8763-57e587a3e144]: (4, ('Mon Jan 26 04:09:32 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 (24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc)\n24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc\nMon Jan 26 04:09:32 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 (24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc)\n24fcca172d778adae485c6ed135bb1b427b776e81d645dfc85026a8f25d287cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.830 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[33c01ccb-d746-455a-8521-2b07bfffcdf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.831 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8b5a2723-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.833 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.834 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443772.76627, 6a424452-c5eb-42aa-9bed-46777c6790eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.834 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] VM Paused (Lifecycle Event)
Jan 26 16:09:32 compute-0 kernel: tap8b5a2723-70: left promiscuous mode
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.838 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.847 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ae24ca84-1d89-4b97-9a37-823027698e81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.856 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.860 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[89baa952-84b7-4cd3-aa15-56a9822e4639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.865 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.867 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a275f427-ce5b-4bca-96b1-bfbd85a1560b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.873 239969 INFO nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Took 9.91 seconds to spawn the instance on the hypervisor.
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.873 239969 DEBUG nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.876 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443772.7663636, 6a424452-c5eb-42aa-9bed-46777c6790eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.877 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] VM Resumed (Lifecycle Event)
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.893 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5289f9bb-e1f6-4217-a86e-38ce147651da]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 519121, 'reachable_time': 37219, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327344, 'error': None, 'target': 'ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:32 compute-0 systemd[1]: run-netns-ovnmeta\x2d8b5a2723\x2d70e1\x2d4acc\x2da601\x2da4f0bf0d77a4.mount: Deactivated successfully.
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.900 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8b5a2723-70e1-4acc-a601-a4f0bf0d77a4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:09:32 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:32.900 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[62dad111-afd9-45c4-973c-b2c84f667497]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.916 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.921 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.949 239969 INFO nova.compute.manager [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Took 11.64 seconds to build instance.
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.969 239969 DEBUG oslo_concurrency.lockutils [None req-9bbc5c28-fed3-4888-a1ef-ab965db98d1a 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.721s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.985 239969 INFO nova.virt.libvirt.driver [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Deleting instance files /var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0_del
Jan 26 16:09:32 compute-0 nova_compute[239965]: 2026-01-26 16:09:32.986 239969 INFO nova.virt.libvirt.driver [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Deletion of /var/lib/nova/instances/59b6d7ef-0e86-41af-af94-b94def8570b0_del complete
Jan 26 16:09:33 compute-0 nova_compute[239965]: 2026-01-26 16:09:33.036 239969 INFO nova.compute.manager [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Took 0.61 seconds to destroy the instance on the hypervisor.
Jan 26 16:09:33 compute-0 nova_compute[239965]: 2026-01-26 16:09:33.036 239969 DEBUG oslo.service.loopingcall [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:09:33 compute-0 nova_compute[239965]: 2026-01-26 16:09:33.037 239969 DEBUG nova.compute.manager [-] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:09:33 compute-0 nova_compute[239965]: 2026-01-26 16:09:33.037 239969 DEBUG nova.network.neutron [-] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:09:33 compute-0 nova_compute[239965]: 2026-01-26 16:09:33.182 239969 DEBUG nova.compute.manager [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received event network-changed-a8973089-7e12-4dd1-be5a-417921b2ddf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:33 compute-0 nova_compute[239965]: 2026-01-26 16:09:33.182 239969 DEBUG nova.compute.manager [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Refreshing instance network info cache due to event network-changed-a8973089-7e12-4dd1-be5a-417921b2ddf9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:09:33 compute-0 nova_compute[239965]: 2026-01-26 16:09:33.183 239969 DEBUG oslo_concurrency.lockutils [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:09:33 compute-0 nova_compute[239965]: 2026-01-26 16:09:33.183 239969 DEBUG oslo_concurrency.lockutils [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:09:33 compute-0 nova_compute[239965]: 2026-01-26 16:09:33.183 239969 DEBUG nova.network.neutron [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Refreshing network info cache for port a8973089-7e12-4dd1-be5a-417921b2ddf9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:09:33 compute-0 ceph-mon[75140]: pgmap v1733: 305 pgs: 305 active+clean; 306 MiB data, 803 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.4 MiB/s wr, 202 op/s
Jan 26 16:09:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1734: 305 pgs: 305 active+clean; 306 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.4 MiB/s wr, 262 op/s
Jan 26 16:09:35 compute-0 ceph-mon[75140]: pgmap v1734: 305 pgs: 305 active+clean; 306 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.4 MiB/s wr, 262 op/s
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.370 239969 DEBUG nova.compute.manager [req-10f313d2-29a7-47ef-8493-fc1d47f0daf2 req-e2c9e62a-4231-4a71-a939-68c07436c9cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Received event network-vif-plugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.372 239969 DEBUG oslo_concurrency.lockutils [req-10f313d2-29a7-47ef-8493-fc1d47f0daf2 req-e2c9e62a-4231-4a71-a939-68c07436c9cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.372 239969 DEBUG oslo_concurrency.lockutils [req-10f313d2-29a7-47ef-8493-fc1d47f0daf2 req-e2c9e62a-4231-4a71-a939-68c07436c9cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.372 239969 DEBUG oslo_concurrency.lockutils [req-10f313d2-29a7-47ef-8493-fc1d47f0daf2 req-e2c9e62a-4231-4a71-a939-68c07436c9cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.373 239969 DEBUG nova.compute.manager [req-10f313d2-29a7-47ef-8493-fc1d47f0daf2 req-e2c9e62a-4231-4a71-a939-68c07436c9cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] No waiting events found dispatching network-vif-plugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.373 239969 WARNING nova.compute.manager [req-10f313d2-29a7-47ef-8493-fc1d47f0daf2 req-e2c9e62a-4231-4a71-a939-68c07436c9cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Received unexpected event network-vif-plugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 for instance with vm_state active and task_state None.
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.400 239969 DEBUG nova.network.neutron [-] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.428 239969 INFO nova.compute.manager [-] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Took 2.39 seconds to deallocate network for instance.
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.472 239969 DEBUG oslo_concurrency.lockutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.473 239969 DEBUG oslo_concurrency.lockutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.542 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.599 239969 DEBUG oslo_concurrency.processutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:35 compute-0 nova_compute[239965]: 2026-01-26 16:09:35.966 239969 DEBUG nova.compute.manager [req-f3cb4df4-86d9-4a30-9e47-38cba91636a1 req-c4020018-06d1-48fe-8a32-dc444e96c2d7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received event network-vif-deleted-a8973089-7e12-4dd1-be5a-417921b2ddf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 289 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.3 MiB/s wr, 293 op/s
Jan 26 16:09:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097573707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.182 239969 DEBUG oslo_concurrency.processutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.187 239969 DEBUG nova.compute.provider_tree [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1097573707' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.272 239969 DEBUG nova.scheduler.client.report [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.301 239969 DEBUG oslo_concurrency.lockutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.359 239969 DEBUG nova.network.neutron [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Updated VIF entry in instance network info cache for port a8973089-7e12-4dd1-be5a-417921b2ddf9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.360 239969 DEBUG nova.network.neutron [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Updating instance_info_cache with network_info: [{"id": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "address": "fa:16:3e:5b:71:19", "network": {"id": "8b5a2723-70e1-4acc-a601-a4f0bf0d77a4", "bridge": "br-int", "label": "tempest-network-smoke--559162150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5b:7119", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa8973089-7e", "ovs_interfaceid": "a8973089-7e12-4dd1-be5a-417921b2ddf9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.389 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.401 239969 DEBUG oslo_concurrency.lockutils [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-59b6d7ef-0e86-41af-af94-b94def8570b0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.401 239969 DEBUG nova.compute.manager [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received event network-vif-unplugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.402 239969 DEBUG oslo_concurrency.lockutils [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.402 239969 DEBUG oslo_concurrency.lockutils [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.403 239969 DEBUG oslo_concurrency.lockutils [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.403 239969 DEBUG nova.compute.manager [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] No waiting events found dispatching network-vif-unplugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.403 239969 DEBUG nova.compute.manager [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received event network-vif-unplugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.403 239969 DEBUG nova.compute.manager [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received event network-vif-plugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.404 239969 DEBUG oslo_concurrency.lockutils [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.404 239969 DEBUG oslo_concurrency.lockutils [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.404 239969 DEBUG oslo_concurrency.lockutils [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.405 239969 DEBUG nova.compute.manager [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] No waiting events found dispatching network-vif-plugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.405 239969 WARNING nova.compute.manager [req-fd9cbc79-3254-43c2-94bb-c6013ee9453a req-539f639c-9382-465b-aab3-e1b8b8def92e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Received unexpected event network-vif-plugged-a8973089-7e12-4dd1-be5a-417921b2ddf9 for instance with vm_state active and task_state deleting.
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.477 239969 INFO nova.scheduler.client.report [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 59b6d7ef-0e86-41af-af94-b94def8570b0
Jan 26 16:09:36 compute-0 nova_compute[239965]: 2026-01-26 16:09:36.537 239969 DEBUG oslo_concurrency.lockutils [None req-36b573a4-788f-4bba-9270-a9fccf7d5dff 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59b6d7ef-0e86-41af-af94-b94def8570b0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:37 compute-0 ceph-mon[75140]: pgmap v1735: 305 pgs: 305 active+clean; 289 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 5.3 MiB/s wr, 293 op/s
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.480 239969 DEBUG oslo_concurrency.lockutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquiring lock "6a424452-c5eb-42aa-9bed-46777c6790eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.482 239969 DEBUG oslo_concurrency.lockutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.482 239969 DEBUG oslo_concurrency.lockutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquiring lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.482 239969 DEBUG oslo_concurrency.lockutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.483 239969 DEBUG oslo_concurrency.lockutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.484 239969 INFO nova.compute.manager [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Terminating instance
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.485 239969 DEBUG nova.compute.manager [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:09:37 compute-0 kernel: tap60632de1-af (unregistering): left promiscuous mode
Jan 26 16:09:37 compute-0 NetworkManager[48954]: <info>  [1769443777.5256] device (tap60632de1-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:09:37 compute-0 ovn_controller[146046]: 2026-01-26T16:09:37Z|01008|binding|INFO|Releasing lport 60632de1-afb5-4319-9c73-66daf4c5ffd0 from this chassis (sb_readonly=0)
Jan 26 16:09:37 compute-0 ovn_controller[146046]: 2026-01-26T16:09:37Z|01009|binding|INFO|Setting lport 60632de1-afb5-4319-9c73-66daf4c5ffd0 down in Southbound
Jan 26 16:09:37 compute-0 ovn_controller[146046]: 2026-01-26T16:09:37Z|01010|binding|INFO|Removing iface tap60632de1-af ovn-installed in OVS
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.542 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.546 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:04:57 10.100.0.11'], port_security=['fa:16:3e:a4:04:57 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6a424452-c5eb-42aa-9bed-46777c6790eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e72d6cbe2ea4b8fb797418b35695443', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1513c325-31f8-4295-985d-299c911a2ed6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17d5dc96-8a66-4f58-8003-f923455547d5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=60632de1-afb5-4319-9c73-66daf4c5ffd0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.547 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 60632de1-afb5-4319-9c73-66daf4c5ffd0 in datapath 8adc6d2b-03da-4ff3-8b51-594f2e5ddd81 unbound from our chassis
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.548 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8adc6d2b-03da-4ff3-8b51-594f2e5ddd81, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.549 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[90a9b786-6852-4248-b992-db5c8d16c5d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.550 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81 namespace which is not needed anymore
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.563 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Deactivated successfully.
Jan 26 16:09:37 compute-0 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000067.scope: Consumed 5.941s CPU time.
Jan 26 16:09:37 compute-0 systemd-machined[208061]: Machine qemu-127-instance-00000067 terminated.
Jan 26 16:09:37 compute-0 neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81[327200]: [NOTICE]   (327204) : haproxy version is 2.8.14-c23fe91
Jan 26 16:09:37 compute-0 neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81[327200]: [NOTICE]   (327204) : path to executable is /usr/sbin/haproxy
Jan 26 16:09:37 compute-0 neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81[327200]: [WARNING]  (327204) : Exiting Master process...
Jan 26 16:09:37 compute-0 neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81[327200]: [ALERT]    (327204) : Current worker (327206) exited with code 143 (Terminated)
Jan 26 16:09:37 compute-0 neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81[327200]: [WARNING]  (327204) : All workers exited. Exiting... (0)
Jan 26 16:09:37 compute-0 systemd[1]: libpod-51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968.scope: Deactivated successfully.
Jan 26 16:09:37 compute-0 podman[327391]: 2026-01-26 16:09:37.681965813 +0000 UTC m=+0.044612722 container died 51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.685 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.705 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968-userdata-shm.mount: Deactivated successfully.
Jan 26 16:09:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-52e715615bf1bd7386651a64d1c194124b3478be1bc30d086ad658e771998402-merged.mount: Deactivated successfully.
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.715 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.723 239969 INFO nova.virt.libvirt.driver [-] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Instance destroyed successfully.
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.724 239969 DEBUG nova.objects.instance [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lazy-loading 'resources' on Instance uuid 6a424452-c5eb-42aa-9bed-46777c6790eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:37 compute-0 podman[327391]: 2026-01-26 16:09:37.725081627 +0000 UTC m=+0.087728526 container cleanup 51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:09:37 compute-0 systemd[1]: libpod-conmon-51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968.scope: Deactivated successfully.
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.737 239969 DEBUG nova.virt.libvirt.vif [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:09:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-580633455',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-580633455',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-580633455',id=103,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:09:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7e72d6cbe2ea4b8fb797418b35695443',ramdisk_id='',reservation_id='r-ltbho49l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-118530518',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-118530518-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:09:32Z,user_data=None,user_id='51e88d4b3c0c4ad0ab5e64b26bdfbb97',uuid=6a424452-c5eb-42aa-9bed-46777c6790eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.738 239969 DEBUG nova.network.os_vif_util [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Converting VIF {"id": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "address": "fa:16:3e:a4:04:57", "network": {"id": "8adc6d2b-03da-4ff3-8b51-594f2e5ddd81", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1013695405-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e72d6cbe2ea4b8fb797418b35695443", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60632de1-af", "ovs_interfaceid": "60632de1-afb5-4319-9c73-66daf4c5ffd0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.739 239969 DEBUG nova.network.os_vif_util [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:04:57,bridge_name='br-int',has_traffic_filtering=True,id=60632de1-afb5-4319-9c73-66daf4c5ffd0,network=Network(8adc6d2b-03da-4ff3-8b51-594f2e5ddd81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60632de1-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.740 239969 DEBUG os_vif [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:04:57,bridge_name='br-int',has_traffic_filtering=True,id=60632de1-afb5-4319-9c73-66daf4c5ffd0,network=Network(8adc6d2b-03da-4ff3-8b51-594f2e5ddd81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60632de1-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.743 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60632de1-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.744 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.745 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.748 239969 INFO os_vif [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:04:57,bridge_name='br-int',has_traffic_filtering=True,id=60632de1-afb5-4319-9c73-66daf4c5ffd0,network=Network(8adc6d2b-03da-4ff3-8b51-594f2e5ddd81),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60632de1-af')
Jan 26 16:09:37 compute-0 podman[327428]: 2026-01-26 16:09:37.791172985 +0000 UTC m=+0.043347431 container remove 51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.796 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[34943bf1-ef68-43f4-b1fb-960b8041e11f]: (4, ('Mon Jan 26 04:09:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81 (51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968)\n51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968\nMon Jan 26 04:09:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81 (51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968)\n51398962040de1ae92318b58f232e37586ef30fad30d38c7340e5dfa22405968\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.797 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3b9d7044-0023-4eea-bce7-495a5349cca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.798 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8adc6d2b-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.800 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 kernel: tap8adc6d2b-00: left promiscuous mode
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.801 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.806 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[63c2c8da-b655-4ea9-8cf4-d3c3029b75d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.821 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1c7585b2-1d93-4212-a35c-1f7ed41b8483]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:37 compute-0 nova_compute[239965]: 2026-01-26 16:09:37.822 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.823 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[571d217a-e30f-4990-90e4-eec978511e7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.841 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f5adbd1e-0be0-4139-b841-5f728f02d2e6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526246, 'reachable_time': 40247, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 327458, 'error': None, 'target': 'ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d8adc6d2b\x2d03da\x2d4ff3\x2d8b51\x2d594f2e5ddd81.mount: Deactivated successfully.
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.845 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8adc6d2b-03da-4ff3-8b51-594f2e5ddd81 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:09:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:37.845 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ab900e4f-12ad-433f-af1e-fdf8b8054164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:09:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1736: 305 pgs: 305 active+clean; 227 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.4 MiB/s wr, 327 op/s
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.181 239969 INFO nova.virt.libvirt.driver [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Deleting instance files /var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb_del
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.182 239969 INFO nova.virt.libvirt.driver [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Deletion of /var/lib/nova/instances/6a424452-c5eb-42aa-9bed-46777c6790eb_del complete
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.244 239969 INFO nova.compute.manager [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Took 0.76 seconds to destroy the instance on the hypervisor.
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.245 239969 DEBUG oslo.service.loopingcall [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.245 239969 DEBUG nova.compute.manager [-] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.246 239969 DEBUG nova.network.neutron [-] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.583 239969 DEBUG nova.compute.manager [req-84fbd230-2254-48f1-8d21-86d5476d5def req-e36879bb-673f-48d3-a567-052fe69afe55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Received event network-vif-unplugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.584 239969 DEBUG oslo_concurrency.lockutils [req-84fbd230-2254-48f1-8d21-86d5476d5def req-e36879bb-673f-48d3-a567-052fe69afe55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.584 239969 DEBUG oslo_concurrency.lockutils [req-84fbd230-2254-48f1-8d21-86d5476d5def req-e36879bb-673f-48d3-a567-052fe69afe55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.584 239969 DEBUG oslo_concurrency.lockutils [req-84fbd230-2254-48f1-8d21-86d5476d5def req-e36879bb-673f-48d3-a567-052fe69afe55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.584 239969 DEBUG nova.compute.manager [req-84fbd230-2254-48f1-8d21-86d5476d5def req-e36879bb-673f-48d3-a567-052fe69afe55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] No waiting events found dispatching network-vif-unplugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.585 239969 DEBUG nova.compute.manager [req-84fbd230-2254-48f1-8d21-86d5476d5def req-e36879bb-673f-48d3-a567-052fe69afe55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Received event network-vif-unplugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.856 239969 DEBUG nova.network.neutron [-] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.873 239969 INFO nova.compute.manager [-] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Took 0.63 seconds to deallocate network for instance.
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.912 239969 DEBUG nova.compute.manager [req-cc8cc047-e8e6-445c-b951-62a128aad4be req-2f57257e-a6ad-4e80-9441-b228562b6abf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Received event network-vif-deleted-60632de1-afb5-4319-9c73-66daf4c5ffd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.915 239969 DEBUG oslo_concurrency.lockutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:38 compute-0 nova_compute[239965]: 2026-01-26 16:09:38.916 239969 DEBUG oslo_concurrency.lockutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:39 compute-0 nova_compute[239965]: 2026-01-26 16:09:39.127 239969 DEBUG oslo_concurrency.processutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:39 compute-0 ceph-mon[75140]: pgmap v1736: 305 pgs: 305 active+clean; 227 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.4 MiB/s wr, 327 op/s
Jan 26 16:09:39 compute-0 nova_compute[239965]: 2026-01-26 16:09:39.574 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:09:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/709864339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:39 compute-0 nova_compute[239965]: 2026-01-26 16:09:39.782 239969 DEBUG oslo_concurrency.processutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:39 compute-0 nova_compute[239965]: 2026-01-26 16:09:39.788 239969 DEBUG nova.compute.provider_tree [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:39 compute-0 nova_compute[239965]: 2026-01-26 16:09:39.815 239969 DEBUG nova.scheduler.client.report [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:39 compute-0 nova_compute[239965]: 2026-01-26 16:09:39.837 239969 DEBUG oslo_concurrency.lockutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:39 compute-0 nova_compute[239965]: 2026-01-26 16:09:39.871 239969 INFO nova.scheduler.client.report [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Deleted allocations for instance 6a424452-c5eb-42aa-9bed-46777c6790eb
Jan 26 16:09:39 compute-0 nova_compute[239965]: 2026-01-26 16:09:39.960 239969 DEBUG oslo_concurrency.lockutils [None req-3d4f91ed-cdf7-4cc4-85de-fbd124bfe4a6 51e88d4b3c0c4ad0ab5e64b26bdfbb97 7e72d6cbe2ea4b8fb797418b35695443 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 227 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 17 KiB/s wr, 238 op/s
Jan 26 16:09:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/709864339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:40 compute-0 nova_compute[239965]: 2026-01-26 16:09:40.681 239969 DEBUG nova.compute.manager [req-110c65c9-5671-4576-b62a-17f9f4ff5c95 req-012529a7-a301-4417-b2dc-2f61aa60bab9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Received event network-vif-plugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:09:40 compute-0 nova_compute[239965]: 2026-01-26 16:09:40.682 239969 DEBUG oslo_concurrency.lockutils [req-110c65c9-5671-4576-b62a-17f9f4ff5c95 req-012529a7-a301-4417-b2dc-2f61aa60bab9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:40 compute-0 nova_compute[239965]: 2026-01-26 16:09:40.682 239969 DEBUG oslo_concurrency.lockutils [req-110c65c9-5671-4576-b62a-17f9f4ff5c95 req-012529a7-a301-4417-b2dc-2f61aa60bab9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:40 compute-0 nova_compute[239965]: 2026-01-26 16:09:40.682 239969 DEBUG oslo_concurrency.lockutils [req-110c65c9-5671-4576-b62a-17f9f4ff5c95 req-012529a7-a301-4417-b2dc-2f61aa60bab9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "6a424452-c5eb-42aa-9bed-46777c6790eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:40 compute-0 nova_compute[239965]: 2026-01-26 16:09:40.683 239969 DEBUG nova.compute.manager [req-110c65c9-5671-4576-b62a-17f9f4ff5c95 req-012529a7-a301-4417-b2dc-2f61aa60bab9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] No waiting events found dispatching network-vif-plugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:09:40 compute-0 nova_compute[239965]: 2026-01-26 16:09:40.683 239969 WARNING nova.compute.manager [req-110c65c9-5671-4576-b62a-17f9f4ff5c95 req-012529a7-a301-4417-b2dc-2f61aa60bab9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Received unexpected event network-vif-plugged-60632de1-afb5-4319-9c73-66daf4c5ffd0 for instance with vm_state deleted and task_state None.
Jan 26 16:09:41 compute-0 ceph-mon[75140]: pgmap v1737: 305 pgs: 305 active+clean; 227 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 17 KiB/s wr, 238 op/s
Jan 26 16:09:41 compute-0 nova_compute[239965]: 2026-01-26 16:09:41.391 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:41 compute-0 nova_compute[239965]: 2026-01-26 16:09:41.457 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Jan 26 16:09:41 compute-0 nova_compute[239965]: 2026-01-26 16:09:41.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:41 compute-0 nova_compute[239965]: 2026-01-26 16:09:41.713 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:41 compute-0 nova_compute[239965]: 2026-01-26 16:09:41.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:42 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 26 16:09:42 compute-0 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000066.scope: Consumed 13.393s CPU time.
Jan 26 16:09:42 compute-0 systemd-machined[208061]: Machine qemu-125-instance-00000066 terminated.
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.054 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443767.0535948, 441048cd-ded0-43ff-93b7-bacf44bd6c17 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.055 239969 INFO nova.compute.manager [-] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] VM Stopped (Lifecycle Event)
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.091 239969 DEBUG nova.compute.manager [None req-4898935d-afd5-43f0-a119-499358d11d6b - - - - - -] [instance: 441048cd-ded0-43ff-93b7-bacf44bd6c17] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 233 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 334 op/s
Jan 26 16:09:42 compute-0 ceph-mon[75140]: pgmap v1738: 305 pgs: 305 active+clean; 233 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 2.6 MiB/s wr, 334 op/s
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.588 239969 INFO nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance shutdown successfully after 13 seconds.
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.593 239969 INFO nova.virt.libvirt.driver [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance destroyed successfully.
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.597 239969 INFO nova.virt.libvirt.driver [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance destroyed successfully.
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.883 239969 INFO nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Deleting instance files /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1_del
Jan 26 16:09:42 compute-0 nova_compute[239965]: 2026-01-26 16:09:42.885 239969 INFO nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Deletion of /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1_del complete
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.026 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.026 239969 INFO nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Creating image(s)
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.058 239969 DEBUG nova.storage.rbd_utils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.094 239969 DEBUG nova.storage.rbd_utils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.129 239969 DEBUG nova.storage.rbd_utils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.133 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.213 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.215 239969 DEBUG oslo_concurrency.lockutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.216 239969 DEBUG oslo_concurrency.lockutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.217 239969 DEBUG oslo_concurrency.lockutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.250 239969 DEBUG nova.storage.rbd_utils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.255 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:43 compute-0 podman[327581]: 2026-01-26 16:09:43.415888508 +0000 UTC m=+0.093130979 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:09:43 compute-0 podman[327582]: 2026-01-26 16:09:43.417331054 +0000 UTC m=+0.095744673 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.577 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.668 239969 DEBUG nova.storage.rbd_utils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] resizing rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.777 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.778 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Ensure instance console log exists: /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.779 239969 DEBUG oslo_concurrency.lockutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.780 239969 DEBUG oslo_concurrency.lockutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.781 239969 DEBUG oslo_concurrency.lockutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.784 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.790 239969 WARNING nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.798 239969 DEBUG nova.virt.libvirt.host [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.799 239969 DEBUG nova.virt.libvirt.host [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.804 239969 DEBUG nova.virt.libvirt.host [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.804 239969 DEBUG nova.virt.libvirt.host [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.805 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.805 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.806 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.806 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.807 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.807 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.807 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.808 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.808 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.808 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.809 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.809 239969 DEBUG nova.virt.hardware [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.809 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:43 compute-0 nova_compute[239965]: 2026-01-26 16:09:43.836 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:43 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000068.scope: Deactivated successfully.
Jan 26 16:09:43 compute-0 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000068.scope: Consumed 13.820s CPU time.
Jan 26 16:09:43 compute-0 systemd-machined[208061]: Machine qemu-126-instance-00000068 terminated.
Jan 26 16:09:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 245 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 314 op/s
Jan 26 16:09:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2125972337' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.449 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.477 239969 DEBUG nova.storage.rbd_utils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.482 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.518 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.518 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.524 239969 INFO nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance shutdown successfully after 13 seconds.
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.530 239969 INFO nova.virt.libvirt.driver [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance destroyed successfully.
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.535 239969 INFO nova.virt.libvirt.driver [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance destroyed successfully.
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.575 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.576 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.803 239969 INFO nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Deleting instance files /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121_del
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.804 239969 INFO nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Deletion of /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121_del complete
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.934 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.935 239969 INFO nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Creating image(s)
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.953 239969 DEBUG nova.storage.rbd_utils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.973 239969 DEBUG nova.storage.rbd_utils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.992 239969 DEBUG nova.storage.rbd_utils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:44 compute-0 nova_compute[239965]: 2026-01-26 16:09:44.996 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3068772940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.053 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.056 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <uuid>82fb4b02-e957-47a0-a2f8-6d93f039e7d1</uuid>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <name>instance-00000066</name>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerShowV257Test-server-2145599240</nova:name>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:09:43</nova:creationTime>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <nova:user uuid="bae7b9cd55cc4d358ac0f897d8cc1ae9">tempest-ServerShowV257Test-1554770872-project-member</nova:user>
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <nova:project uuid="d74dfc414eb3415c833bdd6ec3a75187">tempest-ServerShowV257Test-1554770872</nova:project>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <system>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <entry name="serial">82fb4b02-e957-47a0-a2f8-6d93f039e7d1</entry>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <entry name="uuid">82fb4b02-e957-47a0-a2f8-6d93f039e7d1</entry>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     </system>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <os>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   </os>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <features>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   </features>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk">
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config">
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:45 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/console.log" append="off"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <video>
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     </video>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:09:45 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:09:45 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:09:45 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:09:45 compute-0 nova_compute[239965]: </domain>
Jan 26 16:09:45 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.072 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.073 239969 DEBUG oslo_concurrency.lockutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.073 239969 DEBUG oslo_concurrency.lockutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.074 239969 DEBUG oslo_concurrency.lockutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.093 239969 DEBUG nova.storage.rbd_utils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.096 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.156 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.157 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.158 239969 INFO nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Using config drive
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.179 239969 DEBUG nova.storage.rbd_utils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.198 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:45 compute-0 ceph-mon[75140]: pgmap v1739: 305 pgs: 305 active+clean; 245 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.3 MiB/s wr, 314 op/s
Jan 26 16:09:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2125972337' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3068772940' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.230 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'keypairs' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.366 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.418 239969 DEBUG nova.storage.rbd_utils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] resizing rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.481 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.482 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Ensure instance console log exists: /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.482 239969 DEBUG oslo_concurrency.lockutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.482 239969 DEBUG oslo_concurrency.lockutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.483 239969 DEBUG oslo_concurrency.lockutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.484 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.487 239969 WARNING nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.516 239969 DEBUG nova.virt.libvirt.host [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.517 239969 DEBUG nova.virt.libvirt.host [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.520 239969 DEBUG nova.virt.libvirt.host [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.521 239969 DEBUG nova.virt.libvirt.host [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.521 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.521 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.522 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.522 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.522 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.522 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.522 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.522 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.523 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.523 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.523 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.523 239969 DEBUG nova.virt.hardware [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.524 239969 DEBUG nova.objects.instance [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.545 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.879 239969 INFO nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Creating config drive at /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config
Jan 26 16:09:45 compute-0 nova_compute[239965]: 2026-01-26 16:09:45.889 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpegm0dxlt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.031 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpegm0dxlt" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.056 239969 DEBUG nova.storage.rbd_utils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] rbd image 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.060 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3791568578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.129 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.152 239969 DEBUG nova.storage.rbd_utils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.158 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 219 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.8 MiB/s wr, 297 op/s
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.216 239969 DEBUG oslo_concurrency.processutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config 82fb4b02-e957-47a0-a2f8-6d93f039e7d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.217 239969 INFO nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Deleting local config drive /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1/disk.config because it was imported into RBD.
Jan 26 16:09:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3791568578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:46 compute-0 systemd-machined[208061]: New machine qemu-128-instance-00000066.
Jan 26 16:09:46 compute-0 systemd[1]: Started Virtual Machine qemu-128-instance-00000066.
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.393 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:09:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1939198049' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.774 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.774 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443786.7732396, 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.775 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] VM Resumed (Lifecycle Event)
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.779 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.785 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <uuid>2266fbd6-0bef-4f98-a9ce-5ff705dc5121</uuid>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <name>instance-00000068</name>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <nova:name>tempest-ServerShowV254Test-server-916688478</nova:name>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:09:45</nova:creationTime>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <nova:user uuid="60f17e2d4e40423689513bf5b4ed0697">tempest-ServerShowV254Test-1162693505-project-member</nova:user>
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <nova:project uuid="67b84c37a6f04b6fbba155d82794bda6">tempest-ServerShowV254Test-1162693505</nova:project>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <system>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <entry name="serial">2266fbd6-0bef-4f98-a9ce-5ff705dc5121</entry>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <entry name="uuid">2266fbd6-0bef-4f98-a9ce-5ff705dc5121</entry>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     </system>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <os>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   </os>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <features>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   </features>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk">
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config">
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:09:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/console.log" append="off"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <video>
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     </video>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:09:46 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:09:46 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:09:46 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:09:46 compute-0 nova_compute[239965]: </domain>
Jan 26 16:09:46 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.788 239969 DEBUG nova.compute.manager [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.788 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.796 239969 INFO nova.virt.libvirt.driver [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance spawned successfully.
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.797 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.807 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.810 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.835 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.836 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.836 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.836 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.837 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.837 239969 DEBUG nova.virt.libvirt.driver [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.850 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.851 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443786.7782395, 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.851 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] VM Started (Lifecycle Event)
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.881 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.883 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.883 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.884 239969 INFO nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Using config drive
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.904 239969 DEBUG nova.storage.rbd_utils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.911 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.914 239969 DEBUG nova.compute.manager [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.948 239969 DEBUG nova.objects.instance [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:46 compute-0 nova_compute[239965]: 2026-01-26 16:09:46.951 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.008 239969 DEBUG oslo_concurrency.lockutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.008 239969 DEBUG oslo_concurrency.lockutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.008 239969 DEBUG nova.objects.instance [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.084 239969 DEBUG oslo_concurrency.lockutils [None req-2fc7d4b6-c2b5-44bb-b21f-41839e2bdb76 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.157 239969 INFO nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Creating config drive at /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.164 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptp5o8ptb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:47 compute-0 ceph-mon[75140]: pgmap v1740: 305 pgs: 305 active+clean; 219 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 5.8 MiB/s wr, 297 op/s
Jan 26 16:09:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1939198049' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.333 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptp5o8ptb" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.359 239969 DEBUG nova.storage.rbd_utils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] rbd image 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.362 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.484 239969 DEBUG oslo_concurrency.processutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config 2266fbd6-0bef-4f98-a9ce-5ff705dc5121_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.485 239969 INFO nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Deleting local config drive /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121/disk.config because it was imported into RBD.
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.544 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.544 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:47 compute-0 systemd-machined[208061]: New machine qemu-129-instance-00000068.
Jan 26 16:09:47 compute-0 systemd[1]: Started Virtual Machine qemu-129-instance-00000068.
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.657 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443772.6567874, 59b6d7ef-0e86-41af-af94-b94def8570b0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.658 239969 INFO nova.compute.manager [-] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] VM Stopped (Lifecycle Event)
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.683 239969 DEBUG nova.compute.manager [None req-7bf3f845-b79d-4776-9066-138731d5bf93 - - - - - -] [instance: 59b6d7ef-0e86-41af-af94-b94def8570b0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:47 compute-0 nova_compute[239965]: 2026-01-26 16:09:47.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.009 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.010 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443788.0092556, 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.010 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] VM Resumed (Lifecycle Event)
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.012 239969 DEBUG nova.compute.manager [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.012 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.018 239969 INFO nova.virt.libvirt.driver [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance spawned successfully.
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.018 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.029 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.034 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.041 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.042 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.042 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.043 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.043 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.044 239969 DEBUG nova.virt.libvirt.driver [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.054 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.054 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443788.0102851, 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.055 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] VM Started (Lifecycle Event)
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.076 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.080 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.105 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.110 239969 DEBUG nova.compute.manager [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3236684178' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.159 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 180 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.9 MiB/s wr, 341 op/s
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.174 239969 DEBUG oslo_concurrency.lockutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.175 239969 DEBUG oslo_concurrency.lockutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.176 239969 DEBUG nova.objects.instance [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.269 239969 DEBUG oslo_concurrency.lockutils [None req-25c3830c-929d-4604-9362-37eba9337268 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.282 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.282 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000068 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.288 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.289 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:09:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3236684178' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.412 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "82fb4b02-e957-47a0-a2f8-6d93f039e7d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.412 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "82fb4b02-e957-47a0-a2f8-6d93f039e7d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.413 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "82fb4b02-e957-47a0-a2f8-6d93f039e7d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.413 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "82fb4b02-e957-47a0-a2f8-6d93f039e7d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.413 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "82fb4b02-e957-47a0-a2f8-6d93f039e7d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.414 239969 INFO nova.compute.manager [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Terminating instance
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.415 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "refresh_cache-82fb4b02-e957-47a0-a2f8-6d93f039e7d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.415 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquired lock "refresh_cache-82fb4b02-e957-47a0-a2f8-6d93f039e7d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.415 239969 DEBUG nova.network.neutron [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.450 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.451 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3560MB free_disk=59.913671306334436GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.451 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.451 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.550 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.551 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.551 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.551 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.581 239969 DEBUG nova.network.neutron [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:09:48 compute-0 nova_compute[239965]: 2026-01-26 16:09:48.625 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:09:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1017807123' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:09:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:09:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1017807123' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000707072718593535 of space, bias 1.0, pg target 0.2121218155780605 quantized to 32 (current 32)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001017008239483782 of space, bias 1.0, pg target 0.3051024718451346 quantized to 32 (current 32)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.861121802961266e-07 of space, bias 4.0, pg target 0.000943334616355352 quantized to 16 (current 16)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:09:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:09:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4086932899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.214 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.219 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.232 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.258 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.258 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.259 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.259 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.272 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:09:49 compute-0 ceph-mon[75140]: pgmap v1741: 305 pgs: 305 active+clean; 180 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 7.9 MiB/s wr, 341 op/s
Jan 26 16:09:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1017807123' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:09:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1017807123' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:09:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4086932899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.443 239969 DEBUG nova.network.neutron [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.462 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Releasing lock "refresh_cache-82fb4b02-e957-47a0-a2f8-6d93f039e7d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.463 239969 DEBUG nova.compute.manager [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:09:49 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Deactivated successfully.
Jan 26 16:09:49 compute-0 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000066.scope: Consumed 3.134s CPU time.
Jan 26 16:09:49 compute-0 systemd-machined[208061]: Machine qemu-128-instance-00000066 terminated.
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.685 239969 INFO nova.virt.libvirt.driver [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance destroyed successfully.
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.686 239969 DEBUG nova.objects.instance [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lazy-loading 'resources' on Instance uuid 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:49 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.950 239969 INFO nova.virt.libvirt.driver [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Deleting instance files /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1_del
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.951 239969 INFO nova.virt.libvirt.driver [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Deletion of /var/lib/nova/instances/82fb4b02-e957-47a0-a2f8-6d93f039e7d1_del complete
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.993 239969 INFO nova.compute.manager [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Took 0.53 seconds to destroy the instance on the hypervisor.
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.994 239969 DEBUG oslo.service.loopingcall [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.994 239969 DEBUG nova.compute.manager [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:09:49 compute-0 nova_compute[239965]: 2026-01-26 16:09:49.995 239969 DEBUG nova.network.neutron [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:09:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1742: 305 pgs: 305 active+clean; 180 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 7.9 MiB/s wr, 278 op/s
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.224 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.225 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.226 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.226 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.226 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.227 239969 INFO nova.compute.manager [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Terminating instance
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.228 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "refresh_cache-2266fbd6-0bef-4f98-a9ce-5ff705dc5121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.228 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquired lock "refresh_cache-2266fbd6-0bef-4f98-a9ce-5ff705dc5121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.229 239969 DEBUG nova.network.neutron [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:09:50 compute-0 ceph-mon[75140]: pgmap v1742: 305 pgs: 305 active+clean; 180 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 7.9 MiB/s wr, 278 op/s
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.425 239969 DEBUG nova.network.neutron [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.490 239969 DEBUG nova.network.neutron [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.505 239969 DEBUG nova.network.neutron [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.522 239969 INFO nova.compute.manager [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Took 0.53 seconds to deallocate network for instance.
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.578 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.579 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.637 239969 DEBUG oslo_concurrency.processutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.677 239969 DEBUG nova.network.neutron [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.697 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Releasing lock "refresh_cache-2266fbd6-0bef-4f98-a9ce-5ff705dc5121" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.698 239969 DEBUG nova.compute.manager [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:09:50 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000068.scope: Deactivated successfully.
Jan 26 16:09:50 compute-0 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000068.scope: Consumed 2.982s CPU time.
Jan 26 16:09:50 compute-0 systemd-machined[208061]: Machine qemu-129-instance-00000068 terminated.
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.918 239969 INFO nova.virt.libvirt.driver [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance destroyed successfully.
Jan 26 16:09:50 compute-0 nova_compute[239965]: 2026-01-26 16:09:50.919 239969 DEBUG nova.objects.instance [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lazy-loading 'resources' on Instance uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.165 239969 INFO nova.virt.libvirt.driver [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Deleting instance files /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121_del
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.166 239969 INFO nova.virt.libvirt.driver [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Deletion of /var/lib/nova/instances/2266fbd6-0bef-4f98-a9ce-5ff705dc5121_del complete
Jan 26 16:09:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4003509181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.220 239969 INFO nova.compute.manager [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Took 0.52 seconds to destroy the instance on the hypervisor.
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.221 239969 DEBUG oslo.service.loopingcall [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.221 239969 DEBUG nova.compute.manager [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.221 239969 DEBUG nova.network.neutron [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.225 239969 DEBUG oslo_concurrency.processutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:51.231 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.232 239969 DEBUG nova.compute.provider_tree [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:51.233 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.235 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.253 239969 DEBUG nova.scheduler.client.report [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.281 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.318 239969 INFO nova.scheduler.client.report [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Deleted allocations for instance 82fb4b02-e957-47a0-a2f8-6d93f039e7d1
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.388 239969 DEBUG oslo_concurrency.lockutils [None req-704d9246-5a93-42fc-9d52-3ad17eb863f0 bae7b9cd55cc4d358ac0f897d8cc1ae9 d74dfc414eb3415c833bdd6ec3a75187 - - default default] Lock "82fb4b02-e957-47a0-a2f8-6d93f039e7d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.976s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.394 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4003509181' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.528 239969 WARNING nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] While synchronizing instance power states, found 1 instances in the database and 0 instances on the hypervisor.
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.529 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Triggering sync for uuid 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.529 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.529 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.543 239969 DEBUG nova.network.neutron [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.556 239969 DEBUG nova.network.neutron [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.569 239969 INFO nova.compute.manager [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Took 0.35 seconds to deallocate network for instance.
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.616 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.616 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:51 compute-0 nova_compute[239965]: 2026-01-26 16:09:51.673 239969 DEBUG oslo_concurrency.processutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:09:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 128 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.9 MiB/s wr, 404 op/s
Jan 26 16:09:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:09:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1249619525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.245 239969 DEBUG oslo_concurrency.processutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.252 239969 DEBUG nova.compute.provider_tree [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.289 239969 DEBUG nova.scheduler.client.report [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.315 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.342 239969 INFO nova.scheduler.client.report [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Deleted allocations for instance 2266fbd6-0bef-4f98-a9ce-5ff705dc5121
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.410 239969 DEBUG oslo_concurrency.lockutils [None req-6048c9ad-5f3a-455a-8efb-03f779f0fddd 60f17e2d4e40423689513bf5b4ed0697 67b84c37a6f04b6fbba155d82794bda6 - - default default] Lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.411 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.412 239969 INFO nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.412 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "2266fbd6-0bef-4f98-a9ce-5ff705dc5121" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:52 compute-0 ceph-mon[75140]: pgmap v1743: 305 pgs: 305 active+clean; 128 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.9 MiB/s wr, 404 op/s
Jan 26 16:09:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1249619525' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.539 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.721 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443777.7199898, 6a424452-c5eb-42aa-9bed-46777c6790eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.721 239969 INFO nova.compute.manager [-] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] VM Stopped (Lifecycle Event)
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.748 239969 DEBUG nova.compute.manager [None req-212258ab-10c4-4ae9-bd6c-f56ad0b9d745 - - - - - -] [instance: 6a424452-c5eb-42aa-9bed-46777c6790eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:09:52 compute-0 nova_compute[239965]: 2026-01-26 16:09:52.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 97 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.3 MiB/s wr, 365 op/s
Jan 26 16:09:55 compute-0 ceph-mon[75140]: pgmap v1744: 305 pgs: 305 active+clean; 97 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 5.3 MiB/s wr, 365 op/s
Jan 26 16:09:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1745: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 310 op/s
Jan 26 16:09:56 compute-0 nova_compute[239965]: 2026-01-26 16:09:56.396 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:57 compute-0 ceph-mon[75140]: pgmap v1745: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 310 op/s
Jan 26 16:09:57 compute-0 nova_compute[239965]: 2026-01-26 16:09:57.796 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:09:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:09:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Jan 26 16:09:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:59.230 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:09:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:59.231 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:09:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:09:59.231 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:09:59 compute-0 ceph-mon[75140]: pgmap v1746: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.1 MiB/s wr, 266 op/s
Jan 26 16:10:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 188 op/s
Jan 26 16:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:10:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:01.234 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:01 compute-0 ceph-mon[75140]: pgmap v1747: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 188 op/s
Jan 26 16:10:01 compute-0 nova_compute[239965]: 2026-01-26 16:10:01.398 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:01 compute-0 nova_compute[239965]: 2026-01-26 16:10:01.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 188 op/s
Jan 26 16:10:02 compute-0 nova_compute[239965]: 2026-01-26 16:10:02.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:03 compute-0 ceph-mon[75140]: pgmap v1748: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 188 op/s
Jan 26 16:10:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 818 KiB/s rd, 2.1 KiB/s wr, 63 op/s
Jan 26 16:10:04 compute-0 sudo[328392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:10:04 compute-0 sudo[328392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:10:04 compute-0 sudo[328392]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:04 compute-0 sudo[328417]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:10:04 compute-0 sudo[328417]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:10:04 compute-0 ceph-mon[75140]: pgmap v1749: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 818 KiB/s rd, 2.1 KiB/s wr, 63 op/s
Jan 26 16:10:04 compute-0 nova_compute[239965]: 2026-01-26 16:10:04.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:04 compute-0 nova_compute[239965]: 2026-01-26 16:10:04.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:10:04 compute-0 nova_compute[239965]: 2026-01-26 16:10:04.684 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443789.6833243, 82fb4b02-e957-47a0-a2f8-6d93f039e7d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:04 compute-0 nova_compute[239965]: 2026-01-26 16:10:04.684 239969 INFO nova.compute.manager [-] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] VM Stopped (Lifecycle Event)
Jan 26 16:10:04 compute-0 nova_compute[239965]: 2026-01-26 16:10:04.699 239969 DEBUG nova.compute.manager [None req-3dadf88a-7150-4e65-b4b2-00121890a73a - - - - - -] [instance: 82fb4b02-e957-47a0-a2f8-6d93f039e7d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:04 compute-0 sudo[328417]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:10:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:10:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:10:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:10:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:10:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:10:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:10:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:10:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:10:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:10:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:10:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:10:04 compute-0 sudo[328473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:10:04 compute-0 sudo[328473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:10:04 compute-0 sudo[328473]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:04 compute-0 sudo[328498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:10:04 compute-0 sudo[328498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:10:05 compute-0 podman[328535]: 2026-01-26 16:10:05.250011267 +0000 UTC m=+0.040216605 container create 325f1466bec95756e34407510608e3fb96f1e8c4912eb7ce4291e698e17cae58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:10:05 compute-0 systemd[1]: Started libpod-conmon-325f1466bec95756e34407510608e3fb96f1e8c4912eb7ce4291e698e17cae58.scope.
Jan 26 16:10:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:10:05 compute-0 podman[328535]: 2026-01-26 16:10:05.231903934 +0000 UTC m=+0.022109292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:10:05 compute-0 podman[328535]: 2026-01-26 16:10:05.334711539 +0000 UTC m=+0.124916917 container init 325f1466bec95756e34407510608e3fb96f1e8c4912eb7ce4291e698e17cae58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:10:05 compute-0 podman[328535]: 2026-01-26 16:10:05.343458863 +0000 UTC m=+0.133664201 container start 325f1466bec95756e34407510608e3fb96f1e8c4912eb7ce4291e698e17cae58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:10:05 compute-0 podman[328535]: 2026-01-26 16:10:05.347191375 +0000 UTC m=+0.137396753 container attach 325f1466bec95756e34407510608e3fb96f1e8c4912eb7ce4291e698e17cae58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:10:05 compute-0 festive_goldstine[328549]: 167 167
Jan 26 16:10:05 compute-0 systemd[1]: libpod-325f1466bec95756e34407510608e3fb96f1e8c4912eb7ce4291e698e17cae58.scope: Deactivated successfully.
Jan 26 16:10:05 compute-0 podman[328535]: 2026-01-26 16:10:05.351438309 +0000 UTC m=+0.141643687 container died 325f1466bec95756e34407510608e3fb96f1e8c4912eb7ce4291e698e17cae58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-41de41dc3ff317ac45a73719565995609d4771b543c830cc97fc6ee925f0e35e-merged.mount: Deactivated successfully.
Jan 26 16:10:05 compute-0 podman[328535]: 2026-01-26 16:10:05.397945356 +0000 UTC m=+0.188150684 container remove 325f1466bec95756e34407510608e3fb96f1e8c4912eb7ce4291e698e17cae58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_goldstine, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:10:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:10:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Cumulative writes: 7986 writes, 36K keys, 7986 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 7986 writes, 7986 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1605 writes, 7449 keys, 1605 commit groups, 1.0 writes per commit group, ingest: 9.82 MB, 0.02 MB/s
                                           Interval WAL: 1605 writes, 1605 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.6      0.44              0.13        21    0.021       0      0       0.0       0.0
                                             L6      1/0    9.82 MB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   3.8    133.6    110.9      1.46              0.41        20    0.073    105K    11K       0.0       0.0
                                            Sum      1/0    9.82 MB   0.0      0.2     0.0      0.1       0.2      0.1       0.0   4.8    102.4    107.6      1.90              0.54        41    0.046    105K    11K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.9     83.6     85.2      0.63              0.16        10    0.063     32K   3115       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.1       0.2      0.0       0.0   0.0    133.6    110.9      1.46              0.41        20    0.073    105K    11K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     97.6      0.44              0.13        20    0.022       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.042, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.20 GB write, 0.07 MB/s write, 0.19 GB read, 0.06 MB/s read, 1.9 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 23.12 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000163 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1461,22.29 MB,7.33379%) FilterBlock(42,299.61 KB,0.0962458%) IndexBlock(42,545.55 KB,0.17525%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 16:10:05 compute-0 systemd[1]: libpod-conmon-325f1466bec95756e34407510608e3fb96f1e8c4912eb7ce4291e698e17cae58.scope: Deactivated successfully.
Jan 26 16:10:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:10:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:10:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:10:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:10:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:10:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:10:05 compute-0 podman[328574]: 2026-01-26 16:10:05.601828943 +0000 UTC m=+0.062400727 container create 32b881ea426f17b379b57b6f535597b99b374e16e8da76633be34f832df4c2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:10:05 compute-0 systemd[1]: Started libpod-conmon-32b881ea426f17b379b57b6f535597b99b374e16e8da76633be34f832df4c2f4.scope.
Jan 26 16:10:05 compute-0 podman[328574]: 2026-01-26 16:10:05.575097179 +0000 UTC m=+0.035669023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:10:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1307a837a76635b36291bcfc909261a88b69e97bde060eb4e9c4ee2ffc3ad63a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1307a837a76635b36291bcfc909261a88b69e97bde060eb4e9c4ee2ffc3ad63a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1307a837a76635b36291bcfc909261a88b69e97bde060eb4e9c4ee2ffc3ad63a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1307a837a76635b36291bcfc909261a88b69e97bde060eb4e9c4ee2ffc3ad63a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1307a837a76635b36291bcfc909261a88b69e97bde060eb4e9c4ee2ffc3ad63a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:05 compute-0 podman[328574]: 2026-01-26 16:10:05.696235383 +0000 UTC m=+0.156807197 container init 32b881ea426f17b379b57b6f535597b99b374e16e8da76633be34f832df4c2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:10:05 compute-0 podman[328574]: 2026-01-26 16:10:05.702025485 +0000 UTC m=+0.162597269 container start 32b881ea426f17b379b57b6f535597b99b374e16e8da76633be34f832df4c2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:10:05 compute-0 podman[328574]: 2026-01-26 16:10:05.705535191 +0000 UTC m=+0.166106975 container attach 32b881ea426f17b379b57b6f535597b99b374e16e8da76633be34f832df4c2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:10:05 compute-0 nova_compute[239965]: 2026-01-26 16:10:05.917 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443790.916099, 2266fbd6-0bef-4f98-a9ce-5ff705dc5121 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:05 compute-0 nova_compute[239965]: 2026-01-26 16:10:05.919 239969 INFO nova.compute.manager [-] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] VM Stopped (Lifecycle Event)
Jan 26 16:10:05 compute-0 nova_compute[239965]: 2026-01-26 16:10:05.939 239969 DEBUG nova.compute.manager [None req-fdd230f0-022b-4e82-88b3-cb73ad2db332 - - - - - -] [instance: 2266fbd6-0bef-4f98-a9ce-5ff705dc5121] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:06 compute-0 gracious_hugle[328591]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:10:06 compute-0 gracious_hugle[328591]: --> All data devices are unavailable
Jan 26 16:10:06 compute-0 systemd[1]: libpod-32b881ea426f17b379b57b6f535597b99b374e16e8da76633be34f832df4c2f4.scope: Deactivated successfully.
Jan 26 16:10:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 341 B/s wr, 6 op/s
Jan 26 16:10:06 compute-0 podman[328611]: 2026-01-26 16:10:06.205622484 +0000 UTC m=+0.023975037 container died 32b881ea426f17b379b57b6f535597b99b374e16e8da76633be34f832df4c2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1307a837a76635b36291bcfc909261a88b69e97bde060eb4e9c4ee2ffc3ad63a-merged.mount: Deactivated successfully.
Jan 26 16:10:06 compute-0 podman[328611]: 2026-01-26 16:10:06.245458498 +0000 UTC m=+0.063811031 container remove 32b881ea426f17b379b57b6f535597b99b374e16e8da76633be34f832df4c2f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_hugle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:10:06 compute-0 systemd[1]: libpod-conmon-32b881ea426f17b379b57b6f535597b99b374e16e8da76633be34f832df4c2f4.scope: Deactivated successfully.
Jan 26 16:10:06 compute-0 sudo[328498]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:06 compute-0 sudo[328625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:10:06 compute-0 sudo[328625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:10:06 compute-0 sudo[328625]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:06 compute-0 nova_compute[239965]: 2026-01-26 16:10:06.399 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:06 compute-0 sudo[328650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:10:06 compute-0 sudo[328650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:10:06 compute-0 ceph-mon[75140]: pgmap v1750: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 341 B/s wr, 6 op/s
Jan 26 16:10:06 compute-0 podman[328687]: 2026-01-26 16:10:06.670383713 +0000 UTC m=+0.038173155 container create 3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:10:06 compute-0 systemd[1]: Started libpod-conmon-3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65.scope.
Jan 26 16:10:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:10:06 compute-0 podman[328687]: 2026-01-26 16:10:06.655826927 +0000 UTC m=+0.023616389 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:10:06 compute-0 podman[328687]: 2026-01-26 16:10:06.769715183 +0000 UTC m=+0.137504645 container init 3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:10:06 compute-0 podman[328687]: 2026-01-26 16:10:06.777846131 +0000 UTC m=+0.145635603 container start 3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brown, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:10:06 compute-0 podman[328687]: 2026-01-26 16:10:06.781919021 +0000 UTC m=+0.149708523 container attach 3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 16:10:06 compute-0 angry_brown[328704]: 167 167
Jan 26 16:10:06 compute-0 systemd[1]: libpod-3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65.scope: Deactivated successfully.
Jan 26 16:10:06 compute-0 conmon[328704]: conmon 3dfdd491135be510d906 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65.scope/container/memory.events
Jan 26 16:10:06 compute-0 podman[328687]: 2026-01-26 16:10:06.786550525 +0000 UTC m=+0.154340017 container died 3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e91357fdd2e8dba7b66bea0d70396797637836f99227010581506d1222c8c70-merged.mount: Deactivated successfully.
Jan 26 16:10:06 compute-0 podman[328687]: 2026-01-26 16:10:06.82968873 +0000 UTC m=+0.197478172 container remove 3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_brown, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:10:06 compute-0 nova_compute[239965]: 2026-01-26 16:10:06.833 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:06 compute-0 nova_compute[239965]: 2026-01-26 16:10:06.834 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:06 compute-0 nova_compute[239965]: 2026-01-26 16:10:06.850 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:10:06 compute-0 systemd[1]: libpod-conmon-3dfdd491135be510d9062e2c9068e3c2d582d1aa636963fbf1fcb2b25534ae65.scope: Deactivated successfully.
Jan 26 16:10:06 compute-0 nova_compute[239965]: 2026-01-26 16:10:06.933 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:06 compute-0 nova_compute[239965]: 2026-01-26 16:10:06.935 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:06 compute-0 nova_compute[239965]: 2026-01-26 16:10:06.940 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:10:06 compute-0 nova_compute[239965]: 2026-01-26 16:10:06.941 239969 INFO nova.compute.claims [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:10:07 compute-0 podman[328727]: 2026-01-26 16:10:07.028581445 +0000 UTC m=+0.068790064 container create 48e4465fa504a4bfa2cae838d37580454174123d5b227fe52436edda7785a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rosalind, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.043 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:07 compute-0 systemd[1]: Started libpod-conmon-48e4465fa504a4bfa2cae838d37580454174123d5b227fe52436edda7785a706.scope.
Jan 26 16:10:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91271d0e63eac3fc105256fd777c418a843db27a973f6415a084dbf597d865eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:07 compute-0 podman[328727]: 2026-01-26 16:10:07.003368728 +0000 UTC m=+0.043577427 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91271d0e63eac3fc105256fd777c418a843db27a973f6415a084dbf597d865eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91271d0e63eac3fc105256fd777c418a843db27a973f6415a084dbf597d865eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91271d0e63eac3fc105256fd777c418a843db27a973f6415a084dbf597d865eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:07 compute-0 podman[328727]: 2026-01-26 16:10:07.114883537 +0000 UTC m=+0.155092196 container init 48e4465fa504a4bfa2cae838d37580454174123d5b227fe52436edda7785a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rosalind, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:10:07 compute-0 podman[328727]: 2026-01-26 16:10:07.1219967 +0000 UTC m=+0.162205319 container start 48e4465fa504a4bfa2cae838d37580454174123d5b227fe52436edda7785a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rosalind, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:10:07 compute-0 podman[328727]: 2026-01-26 16:10:07.125965298 +0000 UTC m=+0.166173967 container attach 48e4465fa504a4bfa2cae838d37580454174123d5b227fe52436edda7785a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rosalind, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]: {
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:     "0": [
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:         {
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "devices": [
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "/dev/loop3"
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             ],
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_name": "ceph_lv0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_size": "21470642176",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "name": "ceph_lv0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "tags": {
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.cluster_name": "ceph",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.crush_device_class": "",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.encrypted": "0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.objectstore": "bluestore",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.osd_id": "0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.type": "block",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.vdo": "0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.with_tpm": "0"
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             },
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "type": "block",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "vg_name": "ceph_vg0"
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:         }
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:     ],
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:     "1": [
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:         {
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "devices": [
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "/dev/loop4"
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             ],
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_name": "ceph_lv1",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_size": "21470642176",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "name": "ceph_lv1",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "tags": {
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.cluster_name": "ceph",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.crush_device_class": "",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.encrypted": "0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.objectstore": "bluestore",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.osd_id": "1",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.type": "block",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.vdo": "0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.with_tpm": "0"
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             },
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "type": "block",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "vg_name": "ceph_vg1"
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:         }
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:     ],
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:     "2": [
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:         {
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "devices": [
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "/dev/loop5"
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             ],
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_name": "ceph_lv2",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_size": "21470642176",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "name": "ceph_lv2",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "tags": {
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.cluster_name": "ceph",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.crush_device_class": "",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.encrypted": "0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.objectstore": "bluestore",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.osd_id": "2",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.type": "block",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.vdo": "0",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:                 "ceph.with_tpm": "0"
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             },
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "type": "block",
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:             "vg_name": "ceph_vg2"
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:         }
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]:     ]
Jan 26 16:10:07 compute-0 youthful_rosalind[328744]: }
Jan 26 16:10:07 compute-0 systemd[1]: libpod-48e4465fa504a4bfa2cae838d37580454174123d5b227fe52436edda7785a706.scope: Deactivated successfully.
Jan 26 16:10:07 compute-0 podman[328727]: 2026-01-26 16:10:07.467172325 +0000 UTC m=+0.507380944 container died 48e4465fa504a4bfa2cae838d37580454174123d5b227fe52436edda7785a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rosalind, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-91271d0e63eac3fc105256fd777c418a843db27a973f6415a084dbf597d865eb-merged.mount: Deactivated successfully.
Jan 26 16:10:07 compute-0 podman[328727]: 2026-01-26 16:10:07.527384478 +0000 UTC m=+0.567593087 container remove 48e4465fa504a4bfa2cae838d37580454174123d5b227fe52436edda7785a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_rosalind, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:10:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:10:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2860203241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:07 compute-0 systemd[1]: libpod-conmon-48e4465fa504a4bfa2cae838d37580454174123d5b227fe52436edda7785a706.scope: Deactivated successfully.
Jan 26 16:10:07 compute-0 sudo[328650]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.587 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.596 239969 DEBUG nova.compute.provider_tree [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:10:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2860203241' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.617 239969 DEBUG nova.scheduler.client.report [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:10:07 compute-0 sudo[328786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.639 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.640 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:10:07 compute-0 sudo[328786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:10:07 compute-0 sudo[328786]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:07 compute-0 sudo[328811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:10:07 compute-0 sudo[328811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.704 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.704 239969 DEBUG nova.network.neutron [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.732 239969 INFO nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.753 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.800 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.854 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.856 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.856 239969 INFO nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Creating image(s)
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.886 239969 DEBUG nova.storage.rbd_utils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 7d660383-738b-422b-af54-1a15464bf8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.918 239969 DEBUG nova.storage.rbd_utils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 7d660383-738b-422b-af54-1a15464bf8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.948 239969 DEBUG nova.storage.rbd_utils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 7d660383-738b-422b-af54-1a15464bf8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:07 compute-0 nova_compute[239965]: 2026-01-26 16:10:07.953 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:08 compute-0 podman[328892]: 2026-01-26 16:10:08.004638742 +0000 UTC m=+0.050901696 container create fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_johnson, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.025 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.026 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.027 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.027 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:08 compute-0 systemd[1]: Started libpod-conmon-fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d.scope.
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.047 239969 DEBUG nova.storage.rbd_utils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 7d660383-738b-422b-af54-1a15464bf8e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.051 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7d660383-738b-422b-af54-1a15464bf8e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:10:08 compute-0 podman[328892]: 2026-01-26 16:10:07.983121895 +0000 UTC m=+0.029384899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:10:08 compute-0 podman[328892]: 2026-01-26 16:10:08.083319837 +0000 UTC m=+0.129582811 container init fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:10:08 compute-0 podman[328892]: 2026-01-26 16:10:08.089450907 +0000 UTC m=+0.135713861 container start fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_johnson, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:10:08 compute-0 podman[328892]: 2026-01-26 16:10:08.092798089 +0000 UTC m=+0.139061063 container attach fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_johnson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:10:08 compute-0 elastic_johnson[328934]: 167 167
Jan 26 16:10:08 compute-0 systemd[1]: libpod-fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d.scope: Deactivated successfully.
Jan 26 16:10:08 compute-0 conmon[328934]: conmon fc623bf3d7a4eaf337a1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d.scope/container/memory.events
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.099 239969 DEBUG nova.policy [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:10:08 compute-0 podman[328943]: 2026-01-26 16:10:08.133155295 +0000 UTC m=+0.025040963 container died fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:10:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8433585d0e42554a5b89847fb6bf8a384be0aac8a98a1d833cc21f147072ef53-merged.mount: Deactivated successfully.
Jan 26 16:10:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:10:08 compute-0 podman[328943]: 2026-01-26 16:10:08.207163366 +0000 UTC m=+0.099049014 container remove fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_johnson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:10:08 compute-0 systemd[1]: libpod-conmon-fc623bf3d7a4eaf337a1e60aa92bf0adbf4a9a29cfc014216d74804f0055b65d.scope: Deactivated successfully.
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.305 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7d660383-738b-422b-af54-1a15464bf8e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.253s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.366 239969 DEBUG nova.storage.rbd_utils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 7d660383-738b-422b-af54-1a15464bf8e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:10:08 compute-0 podman[329001]: 2026-01-26 16:10:08.385679933 +0000 UTC m=+0.042771047 container create 4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_fermat, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:10:08 compute-0 systemd[1]: Started libpod-conmon-4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad.scope.
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.452 239969 DEBUG nova.objects.instance [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 7d660383-738b-422b-af54-1a15464bf8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:10:08 compute-0 podman[329001]: 2026-01-26 16:10:08.36511463 +0000 UTC m=+0.022205764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1fec4fb22a1fd6c5850246c863430aa89daf209c1db2a0565995fcca123887/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1fec4fb22a1fd6c5850246c863430aa89daf209c1db2a0565995fcca123887/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1fec4fb22a1fd6c5850246c863430aa89daf209c1db2a0565995fcca123887/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be1fec4fb22a1fd6c5850246c863430aa89daf209c1db2a0565995fcca123887/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.469 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.469 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Ensure instance console log exists: /var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.470 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.470 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.470 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:08 compute-0 podman[329001]: 2026-01-26 16:10:08.474620589 +0000 UTC m=+0.131711723 container init 4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_fermat, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:10:08 compute-0 podman[329001]: 2026-01-26 16:10:08.489644716 +0000 UTC m=+0.146735830 container start 4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_fermat, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:10:08 compute-0 podman[329001]: 2026-01-26 16:10:08.493542902 +0000 UTC m=+0.150634046 container attach 4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_fermat, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:10:08 compute-0 ceph-mon[75140]: pgmap v1751: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.686 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.687 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.709 239969 DEBUG nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.802 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.803 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.811 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.811 239969 INFO nova.compute.claims [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:10:08 compute-0 nova_compute[239965]: 2026-01-26 16:10:08.932 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.109 239969 DEBUG nova.network.neutron [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Successfully created port: 3a527bbf-13af-4ef8-8703-f32cc8169291 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:10:09 compute-0 lvm[329169]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:10:09 compute-0 lvm[329171]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:10:09 compute-0 lvm[329169]: VG ceph_vg0 finished
Jan 26 16:10:09 compute-0 lvm[329171]: VG ceph_vg1 finished
Jan 26 16:10:09 compute-0 lvm[329173]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:10:09 compute-0 lvm[329173]: VG ceph_vg2 finished
Jan 26 16:10:09 compute-0 flamboyant_fermat[329064]: {}
Jan 26 16:10:09 compute-0 systemd[1]: libpod-4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad.scope: Deactivated successfully.
Jan 26 16:10:09 compute-0 systemd[1]: libpod-4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad.scope: Consumed 1.296s CPU time.
Jan 26 16:10:09 compute-0 conmon[329064]: conmon 4e33f7281b3c7d5e762b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad.scope/container/memory.events
Jan 26 16:10:09 compute-0 podman[329176]: 2026-01-26 16:10:09.311345657 +0000 UTC m=+0.023801663 container died 4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_fermat, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Jan 26 16:10:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-be1fec4fb22a1fd6c5850246c863430aa89daf209c1db2a0565995fcca123887-merged.mount: Deactivated successfully.
Jan 26 16:10:09 compute-0 podman[329176]: 2026-01-26 16:10:09.351801587 +0000 UTC m=+0.064257573 container remove 4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:10:09 compute-0 systemd[1]: libpod-conmon-4e33f7281b3c7d5e762b32f31e396b184ab184307f1a4c8ffcb8f722c60087ad.scope: Deactivated successfully.
Jan 26 16:10:09 compute-0 sudo[328811]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:10:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:10:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:10:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:10:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:10:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3345962837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:09 compute-0 sudo[329191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:10:09 compute-0 sudo[329191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:10:09 compute-0 sudo[329191]: pam_unix(sudo:session): session closed for user root
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.492 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.501 239969 DEBUG nova.compute.provider_tree [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.521 239969 DEBUG nova.scheduler.client.report [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.548 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.549 239969 DEBUG nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.603 239969 DEBUG nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.604 239969 DEBUG nova.network.neutron [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.621 239969 INFO nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.640 239969 DEBUG nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.737 239969 DEBUG nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.739 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.739 239969 INFO nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Creating image(s)
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.760 239969 DEBUG nova.storage.rbd_utils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.783 239969 DEBUG nova.storage.rbd_utils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.807 239969 DEBUG nova.storage.rbd_utils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.812 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.888 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.889 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.890 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.890 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.911 239969 DEBUG nova.storage.rbd_utils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.915 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.954 239969 DEBUG nova.policy [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2d9b11d846a34b6e892fb80186bedba5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:10:09 compute-0 nova_compute[239965]: 2026-01-26 16:10:09.957 239969 DEBUG nova.network.neutron [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Successfully created port: 9ade0424-9307-41d2-99d9-0bbc7146bd66 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:10:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.181 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.243 239969 DEBUG nova.storage.rbd_utils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] resizing rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.326 239969 DEBUG nova.objects.instance [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'migration_context' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.346 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.346 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Ensure instance console log exists: /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.347 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.347 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.347 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:10:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:10:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3345962837' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:10 compute-0 ceph-mon[75140]: pgmap v1752: 305 pgs: 305 active+clean; 88 MiB data, 715 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.890 239969 DEBUG nova.network.neutron [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Successfully updated port: 3a527bbf-13af-4ef8-8703-f32cc8169291 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:10:10 compute-0 nova_compute[239965]: 2026-01-26 16:10:10.932 239969 DEBUG nova.network.neutron [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Successfully created port: d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:10:11 compute-0 nova_compute[239965]: 2026-01-26 16:10:11.451 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:11 compute-0 nova_compute[239965]: 2026-01-26 16:10:11.601 239969 DEBUG nova.network.neutron [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Successfully updated port: 9ade0424-9307-41d2-99d9-0bbc7146bd66 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:10:11 compute-0 nova_compute[239965]: 2026-01-26 16:10:11.617 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:11 compute-0 nova_compute[239965]: 2026-01-26 16:10:11.618 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:11 compute-0 nova_compute[239965]: 2026-01-26 16:10:11.619 239969 DEBUG nova.network.neutron [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:10:11 compute-0 nova_compute[239965]: 2026-01-26 16:10:11.825 239969 DEBUG nova.compute.manager [req-5ddfe1d1-2982-4eaf-9ff9-409cf568af1c req-e988811c-cf1e-4447-9896-9086a16f1f77 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-changed-3a527bbf-13af-4ef8-8703-f32cc8169291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:11 compute-0 nova_compute[239965]: 2026-01-26 16:10:11.825 239969 DEBUG nova.compute.manager [req-5ddfe1d1-2982-4eaf-9ff9-409cf568af1c req-e988811c-cf1e-4447-9896-9086a16f1f77 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Refreshing instance network info cache due to event network-changed-3a527bbf-13af-4ef8-8703-f32cc8169291. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:10:11 compute-0 nova_compute[239965]: 2026-01-26 16:10:11.826 239969 DEBUG oslo_concurrency.lockutils [req-5ddfe1d1-2982-4eaf-9ff9-409cf568af1c req-e988811c-cf1e-4447-9896-9086a16f1f77 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:11 compute-0 nova_compute[239965]: 2026-01-26 16:10:11.894 239969 DEBUG nova.network.neutron [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:10:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 152 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 MiB/s wr, 41 op/s
Jan 26 16:10:12 compute-0 nova_compute[239965]: 2026-01-26 16:10:12.368 239969 DEBUG nova.network.neutron [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Successfully updated port: d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:10:12 compute-0 nova_compute[239965]: 2026-01-26 16:10:12.385 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:12 compute-0 nova_compute[239965]: 2026-01-26 16:10:12.386 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquired lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:12 compute-0 nova_compute[239965]: 2026-01-26 16:10:12.386 239969 DEBUG nova.network.neutron [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:10:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:12 compute-0 nova_compute[239965]: 2026-01-26 16:10:12.851 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:12 compute-0 nova_compute[239965]: 2026-01-26 16:10:12.976 239969 DEBUG nova.compute.manager [req-077ce362-a3f4-45fe-ad88-136d663a2b8e req-f93ec5d1-3d0d-4b96-a0d2-49b9fb25206a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-changed-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:12 compute-0 nova_compute[239965]: 2026-01-26 16:10:12.976 239969 DEBUG nova.compute.manager [req-077ce362-a3f4-45fe-ad88-136d663a2b8e req-f93ec5d1-3d0d-4b96-a0d2-49b9fb25206a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Refreshing instance network info cache due to event network-changed-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:10:12 compute-0 nova_compute[239965]: 2026-01-26 16:10:12.977 239969 DEBUG oslo_concurrency.lockutils [req-077ce362-a3f4-45fe-ad88-136d663a2b8e req-f93ec5d1-3d0d-4b96-a0d2-49b9fb25206a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:13 compute-0 nova_compute[239965]: 2026-01-26 16:10:13.229 239969 DEBUG nova.network.neutron [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:10:13 compute-0 ceph-mon[75140]: pgmap v1753: 305 pgs: 305 active+clean; 152 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 MiB/s wr, 41 op/s
Jan 26 16:10:13 compute-0 nova_compute[239965]: 2026-01-26 16:10:13.956 239969 DEBUG nova.compute.manager [req-048aab69-e46a-496d-91c7-6c0266c17483 req-b5136a9e-2361-4c44-9050-ae55e8b6a997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-changed-9ade0424-9307-41d2-99d9-0bbc7146bd66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:13 compute-0 nova_compute[239965]: 2026-01-26 16:10:13.956 239969 DEBUG nova.compute.manager [req-048aab69-e46a-496d-91c7-6c0266c17483 req-b5136a9e-2361-4c44-9050-ae55e8b6a997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Refreshing instance network info cache due to event network-changed-9ade0424-9307-41d2-99d9-0bbc7146bd66. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:10:13 compute-0 nova_compute[239965]: 2026-01-26 16:10:13.956 239969 DEBUG oslo_concurrency.lockutils [req-048aab69-e46a-496d-91c7-6c0266c17483 req-b5136a9e-2361-4c44-9050-ae55e8b6a997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 180 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.357 239969 DEBUG nova.network.neutron [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updating instance_info_cache with network_info: [{"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.367 239969 DEBUG nova.network.neutron [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating instance_info_cache with network_info: [{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:14 compute-0 podman[329384]: 2026-01-26 16:10:14.37880031 +0000 UTC m=+0.060749207 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.385 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.386 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Instance network_info: |[{"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.386 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Releasing lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.386 239969 DEBUG nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance network_info: |[{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.386 239969 DEBUG oslo_concurrency.lockutils [req-5ddfe1d1-2982-4eaf-9ff9-409cf568af1c req-e988811c-cf1e-4447-9896-9086a16f1f77 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.387 239969 DEBUG nova.network.neutron [req-5ddfe1d1-2982-4eaf-9ff9-409cf568af1c req-e988811c-cf1e-4447-9896-9086a16f1f77 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Refreshing network info cache for port 3a527bbf-13af-4ef8-8703-f32cc8169291 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.390 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Start _get_guest_xml network_info=[{"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.391 239969 DEBUG oslo_concurrency.lockutils [req-077ce362-a3f4-45fe-ad88-136d663a2b8e req-f93ec5d1-3d0d-4b96-a0d2-49b9fb25206a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.391 239969 DEBUG nova.network.neutron [req-077ce362-a3f4-45fe-ad88-136d663a2b8e req-f93ec5d1-3d0d-4b96-a0d2-49b9fb25206a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Refreshing network info cache for port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.393 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Start _get_guest_xml network_info=[{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.399 239969 WARNING nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.401 239969 WARNING nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.404 239969 DEBUG nova.virt.libvirt.host [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.405 239969 DEBUG nova.virt.libvirt.host [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.405 239969 DEBUG nova.virt.libvirt.host [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.406 239969 DEBUG nova.virt.libvirt.host [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:10:14 compute-0 podman[329385]: 2026-01-26 16:10:14.410793323 +0000 UTC m=+0.093095269 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.419 239969 DEBUG nova.virt.libvirt.host [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.420 239969 DEBUG nova.virt.libvirt.host [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.421 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.421 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.422 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.422 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.422 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.422 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.423 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.423 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.423 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.423 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.424 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.424 239969 DEBUG nova.virt.hardware [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.427 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.475 239969 DEBUG nova.virt.libvirt.host [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.477 239969 DEBUG nova.virt.libvirt.host [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.477 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.478 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.478 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.479 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.479 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.479 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.480 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.480 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.481 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.481 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.481 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.482 239969 DEBUG nova.virt.hardware [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:10:14 compute-0 nova_compute[239965]: 2026-01-26 16:10:14.487 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:10:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3926875885' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.016 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.040 239969 DEBUG nova.storage.rbd_utils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 7d660383-738b-422b-af54-1a15464bf8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.045 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:10:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1593691859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.096 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.119 239969 DEBUG nova.storage.rbd_utils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.123 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:15 compute-0 ceph-mon[75140]: pgmap v1754: 305 pgs: 305 active+clean; 180 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 26 16:10:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3926875885' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1593691859' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:10:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3542247039' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.633 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.635 239969 DEBUG nova.virt.libvirt.vif [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-66572341',display_name='tempest-TestGettingAddress-server-66572341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-66572341',id=105,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-bfx0ple1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:07Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=7d660383-738b-422b-af54-1a15464bf8e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.635 239969 DEBUG nova.network.os_vif_util [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.636 239969 DEBUG nova.network.os_vif_util [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:92:94,bridge_name='br-int',has_traffic_filtering=True,id=3a527bbf-13af-4ef8-8703-f32cc8169291,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a527bbf-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.637 239969 DEBUG nova.virt.libvirt.vif [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-66572341',display_name='tempest-TestGettingAddress-server-66572341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-66572341',id=105,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-bfx0ple1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:07Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=7d660383-738b-422b-af54-1a15464bf8e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.637 239969 DEBUG nova.network.os_vif_util [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.638 239969 DEBUG nova.network.os_vif_util [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:d5:b2,bridge_name='br-int',has_traffic_filtering=True,id=9ade0424-9307-41d2-99d9-0bbc7146bd66,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ade0424-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.638 239969 DEBUG nova.objects.instance [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 7d660383-738b-422b-af54-1a15464bf8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.655 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <uuid>7d660383-738b-422b-af54-1a15464bf8e3</uuid>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <name>instance-00000069</name>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-66572341</nova:name>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:10:14</nova:creationTime>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:port uuid="3a527bbf-13af-4ef8-8703-f32cc8169291">
Jan 26 16:10:15 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:port uuid="9ade0424-9307-41d2-99d9-0bbc7146bd66">
Jan 26 16:10:15 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:feb9:d5b2" ipVersion="6"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <system>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="serial">7d660383-738b-422b-af54-1a15464bf8e3</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="uuid">7d660383-738b-422b-af54-1a15464bf8e3</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </system>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <os>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </os>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <features>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </features>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7d660383-738b-422b-af54-1a15464bf8e3_disk">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </source>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7d660383-738b-422b-af54-1a15464bf8e3_disk.config">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </source>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:69:92:94"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <target dev="tap3a527bbf-13"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:b9:d5:b2"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <target dev="tap9ade0424-93"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3/console.log" append="off"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <video>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </video>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:10:15 compute-0 nova_compute[239965]: </domain>
Jan 26 16:10:15 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.657 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Preparing to wait for external event network-vif-plugged-3a527bbf-13af-4ef8-8703-f32cc8169291 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.657 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.658 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.658 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.658 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Preparing to wait for external event network-vif-plugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.658 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.659 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.659 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.660 239969 DEBUG nova.virt.libvirt.vif [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-66572341',display_name='tempest-TestGettingAddress-server-66572341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-66572341',id=105,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-bfx0ple1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:07Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=7d660383-738b-422b-af54-1a15464bf8e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.660 239969 DEBUG nova.network.os_vif_util [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.661 239969 DEBUG nova.network.os_vif_util [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:69:92:94,bridge_name='br-int',has_traffic_filtering=True,id=3a527bbf-13af-4ef8-8703-f32cc8169291,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a527bbf-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.661 239969 DEBUG os_vif [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:92:94,bridge_name='br-int',has_traffic_filtering=True,id=3a527bbf-13af-4ef8-8703-f32cc8169291,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a527bbf-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.662 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.662 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.663 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.667 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.667 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3a527bbf-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.668 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3a527bbf-13, col_values=(('external_ids', {'iface-id': '3a527bbf-13af-4ef8-8703-f32cc8169291', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:69:92:94', 'vm-uuid': '7d660383-738b-422b-af54-1a15464bf8e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.670 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 NetworkManager[48954]: <info>  [1769443815.6708] manager: (tap3a527bbf-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.674 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.677 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.678 239969 INFO os_vif [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:69:92:94,bridge_name='br-int',has_traffic_filtering=True,id=3a527bbf-13af-4ef8-8703-f32cc8169291,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a527bbf-13')
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.679 239969 DEBUG nova.virt.libvirt.vif [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-66572341',display_name='tempest-TestGettingAddress-server-66572341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-66572341',id=105,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-bfx0ple1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:07Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=7d660383-738b-422b-af54-1a15464bf8e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.680 239969 DEBUG nova.network.os_vif_util [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.680 239969 DEBUG nova.network.os_vif_util [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:d5:b2,bridge_name='br-int',has_traffic_filtering=True,id=9ade0424-9307-41d2-99d9-0bbc7146bd66,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ade0424-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.681 239969 DEBUG os_vif [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:d5:b2,bridge_name='br-int',has_traffic_filtering=True,id=9ade0424-9307-41d2-99d9-0bbc7146bd66,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ade0424-93') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.682 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.682 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.682 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.685 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.685 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ade0424-93, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.685 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9ade0424-93, col_values=(('external_ids', {'iface-id': '9ade0424-9307-41d2-99d9-0bbc7146bd66', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:d5:b2', 'vm-uuid': '7d660383-738b-422b-af54-1a15464bf8e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.686 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 NetworkManager[48954]: <info>  [1769443815.6876] manager: (tap9ade0424-93): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.689 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.693 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.693 239969 INFO os_vif [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:d5:b2,bridge_name='br-int',has_traffic_filtering=True,id=9ade0424-9307-41d2-99d9-0bbc7146bd66,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ade0424-93')
Jan 26 16:10:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:10:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3945427286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.734 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.735 239969 DEBUG nova.virt.libvirt.vif [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-944879613',display_name='tempest-ServersNegativeTestJSON-server-944879613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-944879613',id=106,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-np89kd30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-ServersNegativeT
estJSON-1013867593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:09Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.735 239969 DEBUG nova.network.os_vif_util [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.736 239969 DEBUG nova.network.os_vif_util [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.737 239969 DEBUG nova.objects.instance [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.754 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.755 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.755 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:69:92:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.755 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:b9:d5:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.756 239969 INFO nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Using config drive
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.774 239969 DEBUG nova.storage.rbd_utils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 7d660383-738b-422b-af54-1a15464bf8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.780 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <uuid>20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4</uuid>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <name>instance-0000006a</name>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersNegativeTestJSON-server-944879613</nova:name>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:10:14</nova:creationTime>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:user uuid="2d9b11d846a34b6e892fb80186bedba5">tempest-ServersNegativeTestJSON-1013867593-project-member</nova:user>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:project uuid="d8d1cdeaf6004b94bba2d3c727959e51">tempest-ServersNegativeTestJSON-1013867593</nova:project>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <nova:port uuid="d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3">
Jan 26 16:10:15 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <system>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="serial">20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="uuid">20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </system>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <os>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </os>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <features>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </features>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </source>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </source>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:10:15 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:74:58:77"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <target dev="tapd8d3ce43-ab"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/console.log" append="off"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <video>
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </video>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:10:15 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:10:15 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:10:15 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:10:15 compute-0 nova_compute[239965]: </domain>
Jan 26 16:10:15 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.781 239969 DEBUG nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Preparing to wait for external event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.781 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.781 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.781 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.782 239969 DEBUG nova.virt.libvirt.vif [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-944879613',display_name='tempest-ServersNegativeTestJSON-server-944879613',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-944879613',id=106,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-np89kd30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-ServersNegativeTestJSON-1013867593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:09Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.782 239969 DEBUG nova.network.os_vif_util [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.783 239969 DEBUG nova.network.os_vif_util [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.783 239969 DEBUG os_vif [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.784 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.784 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.784 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.787 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.787 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8d3ce43-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.787 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8d3ce43-ab, col_values=(('external_ids', {'iface-id': 'd8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:58:77', 'vm-uuid': '20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 NetworkManager[48954]: <info>  [1769443815.7898] manager: (tapd8d3ce43-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/417)
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.800 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.801 239969 INFO os_vif [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab')
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.847 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.848 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.848 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No VIF found with MAC fa:16:3e:74:58:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.848 239969 INFO nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Using config drive
Jan 26 16:10:15 compute-0 nova_compute[239965]: 2026-01-26 16:10:15.868 239969 DEBUG nova.storage.rbd_utils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 180 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 26 16:10:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3542247039' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3945427286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.457 239969 INFO nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Creating config drive at /var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3/disk.config
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.464 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9n0_sddf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.505 239969 INFO nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Creating config drive at /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.510 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptv2xwru5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.540 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.612 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9n0_sddf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.650 239969 DEBUG nova.storage.rbd_utils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 7d660383-738b-422b-af54-1a15464bf8e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.657 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3/disk.config 7d660383-738b-422b-af54-1a15464bf8e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.712 239969 DEBUG nova.network.neutron [req-5ddfe1d1-2982-4eaf-9ff9-409cf568af1c req-e988811c-cf1e-4447-9896-9086a16f1f77 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updated VIF entry in instance network info cache for port 3a527bbf-13af-4ef8-8703-f32cc8169291. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.714 239969 DEBUG nova.network.neutron [req-5ddfe1d1-2982-4eaf-9ff9-409cf568af1c req-e988811c-cf1e-4447-9896-9086a16f1f77 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updating instance_info_cache with network_info: [{"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.717 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptv2xwru5" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.747 239969 DEBUG nova.storage.rbd_utils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.752 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.798 239969 DEBUG oslo_concurrency.lockutils [req-5ddfe1d1-2982-4eaf-9ff9-409cf568af1c req-e988811c-cf1e-4447-9896-9086a16f1f77 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.799 239969 DEBUG oslo_concurrency.lockutils [req-048aab69-e46a-496d-91c7-6c0266c17483 req-b5136a9e-2361-4c44-9050-ae55e8b6a997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.799 239969 DEBUG nova.network.neutron [req-048aab69-e46a-496d-91c7-6c0266c17483 req-b5136a9e-2361-4c44-9050-ae55e8b6a997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Refreshing network info cache for port 9ade0424-9307-41d2-99d9-0bbc7146bd66 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.844 239969 DEBUG oslo_concurrency.processutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3/disk.config 7d660383-738b-422b-af54-1a15464bf8e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.844 239969 INFO nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Deleting local config drive /var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3/disk.config because it was imported into RBD.
Jan 26 16:10:16 compute-0 NetworkManager[48954]: <info>  [1769443816.9076] manager: (tap3a527bbf-13): new Tun device (/org/freedesktop/NetworkManager/Devices/418)
Jan 26 16:10:16 compute-0 kernel: tap3a527bbf-13: entered promiscuous mode
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.911 239969 DEBUG oslo_concurrency.processutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.912 239969 INFO nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Deleting local config drive /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config because it was imported into RBD.
Jan 26 16:10:16 compute-0 ovn_controller[146046]: 2026-01-26T16:10:16Z|01011|binding|INFO|Claiming lport 3a527bbf-13af-4ef8-8703-f32cc8169291 for this chassis.
Jan 26 16:10:16 compute-0 nova_compute[239965]: 2026-01-26 16:10:16.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:16 compute-0 ovn_controller[146046]: 2026-01-26T16:10:16Z|01012|binding|INFO|3a527bbf-13af-4ef8-8703-f32cc8169291: Claiming fa:16:3e:69:92:94 10.100.0.4
Jan 26 16:10:16 compute-0 NetworkManager[48954]: <info>  [1769443816.9233] manager: (tap9ade0424-93): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Jan 26 16:10:16 compute-0 kernel: tap9ade0424-93: entered promiscuous mode
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.934 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:92:94 10.100.0.4'], port_security=['fa:16:3e:69:92:94 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7d660383-738b-422b-af54-1a15464bf8e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-065353ad-2d91-4f6b-899c-49c1351ef25d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64fb00c1-6bf2-4adc-ad05-b75da367e5b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9ceb6e3-7336-40c9-9bd9-e17d330deadf, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3a527bbf-13af-4ef8-8703-f32cc8169291) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.935 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3a527bbf-13af-4ef8-8703-f32cc8169291 in datapath 065353ad-2d91-4f6b-899c-49c1351ef25d bound to our chassis
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.936 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 065353ad-2d91-4f6b-899c-49c1351ef25d
Jan 26 16:10:16 compute-0 systemd-udevd[329698]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:10:16 compute-0 systemd-udevd[329700]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.948 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[44217b39-e3d6-4106-adc0-9a7c42e0666e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.949 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap065353ad-21 in ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.951 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap065353ad-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.951 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[022be9bb-4462-4b8e-bdec-ae6f145b7feb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.952 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[51359508-4795-4c16-9b3e-7ceae110af9c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:16 compute-0 NetworkManager[48954]: <info>  [1769443816.9585] device (tap9ade0424-93): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:10:16 compute-0 NetworkManager[48954]: <info>  [1769443816.9592] device (tap9ade0424-93): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:10:16 compute-0 NetworkManager[48954]: <info>  [1769443816.9600] device (tap3a527bbf-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:10:16 compute-0 NetworkManager[48954]: <info>  [1769443816.9606] device (tap3a527bbf-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.966 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[43494fef-9d16-405d-aa37-f5a46ecd6670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:16 compute-0 NetworkManager[48954]: <info>  [1769443816.9815] manager: (tapd8d3ce43-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/420)
Jan 26 16:10:16 compute-0 systemd-machined[208061]: New machine qemu-130-instance-00000069.
Jan 26 16:10:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:16.990 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ebadab55-94fa-4c16-b06c-cb971042cadf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:16 compute-0 systemd[1]: Started Virtual Machine qemu-130-instance-00000069.
Jan 26 16:10:17 compute-0 systemd-machined[208061]: New machine qemu-131-instance-0000006a.
Jan 26 16:10:17 compute-0 systemd[1]: Started Virtual Machine qemu-131-instance-0000006a.
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.017 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d1fa68d2-37d9-4948-9786-fab808f4903c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.023 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8511df-d9a3-4d81-b367-ee5163693895]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 NetworkManager[48954]: <info>  [1769443817.0244] manager: (tap065353ad-20): new Veth device (/org/freedesktop/NetworkManager/Devices/421)
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01013|binding|INFO|Claiming lport 9ade0424-9307-41d2-99d9-0bbc7146bd66 for this chassis.
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01014|binding|INFO|9ade0424-9307-41d2-99d9-0bbc7146bd66: Claiming fa:16:3e:b9:d5:b2 2001:db8::f816:3eff:feb9:d5b2
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.026 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:17 compute-0 kernel: tapd8d3ce43-ab: entered promiscuous mode
Jan 26 16:10:17 compute-0 NetworkManager[48954]: <info>  [1769443817.0304] device (tapd8d3ce43-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:10:17 compute-0 NetworkManager[48954]: <info>  [1769443817.0314] device (tapd8d3ce43-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.031 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.033 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:d5:b2 2001:db8::f816:3eff:feb9:d5b2'], port_security=['fa:16:3e:b9:d5:b2 2001:db8::f816:3eff:feb9:d5b2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:d5b2/64', 'neutron:device_id': '7d660383-738b-422b-af54-1a15464bf8e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64fb00c1-6bf2-4adc-ad05-b75da367e5b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63bbc7aa-7f64-4a2d-a32f-7de8c6f03798, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9ade0424-9307-41d2-99d9-0bbc7146bd66) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01015|if_status|INFO|Not updating pb chassis for d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 now as sb is readonly
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01016|binding|INFO|Claiming lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for this chassis.
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01017|binding|INFO|d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3: Claiming fa:16:3e:74:58:77 10.100.0.6
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01018|binding|INFO|Setting lport 3a527bbf-13af-4ef8-8703-f32cc8169291 ovn-installed in OVS
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.056 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:58:77 10.100.0.6'], port_security=['fa:16:3e:74:58:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb0a5e0f-b1d5-4f6c-9884-3b66f10a59d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=568e2bde-137f-4351-b2df-67acb89970b2, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.056 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[98c71439-285d-480b-bf33-1f2fffb5293d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01019|binding|INFO|Setting lport 3a527bbf-13af-4ef8-8703-f32cc8169291 up in Southbound
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.060 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[963459e2-9113-40f8-80df-bcde930da630]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 NetworkManager[48954]: <info>  [1769443817.0815] device (tap065353ad-20): carrier: link connected
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.085 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[510fa755-da9b-4166-a2f1-b45bc61754b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.103 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[55e49858-22ef-44b4-bb29-024c0bd0941e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap065353ad-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530788, 'reachable_time': 19462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329749, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.114 239969 DEBUG nova.network.neutron [req-077ce362-a3f4-45fe-ad88-136d663a2b8e req-f93ec5d1-3d0d-4b96-a0d2-49b9fb25206a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updated VIF entry in instance network info cache for port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.114 239969 DEBUG nova.network.neutron [req-077ce362-a3f4-45fe-ad88-136d663a2b8e req-f93ec5d1-3d0d-4b96-a0d2-49b9fb25206a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating instance_info_cache with network_info: [{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.116 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2eb5d981-a741-494d-9f30-dcb2a1f52857]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:4fae'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530788, 'tstamp': 530788}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329752, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.134 239969 DEBUG oslo_concurrency.lockutils [req-077ce362-a3f4-45fe-ad88-136d663a2b8e req-f93ec5d1-3d0d-4b96-a0d2-49b9fb25206a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.135 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7b5bc767-19e8-4884-a780-63ae8e318373]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap065353ad-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530788, 'reachable_time': 19462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329753, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01020|binding|INFO|Setting lport 9ade0424-9307-41d2-99d9-0bbc7146bd66 ovn-installed in OVS
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01021|binding|INFO|Setting lport 9ade0424-9307-41d2-99d9-0bbc7146bd66 up in Southbound
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01022|binding|INFO|Setting lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 ovn-installed in OVS
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01023|binding|INFO|Setting lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 up in Southbound
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.163 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c98b17b9-1d2f-4e7a-8350-36a1620422f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.238 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ec4568ad-03ee-41ec-88dd-e424b1001631]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.240 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap065353ad-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.240 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.241 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap065353ad-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:17 compute-0 kernel: tap065353ad-20: entered promiscuous mode
Jan 26 16:10:17 compute-0 NetworkManager[48954]: <info>  [1769443817.2440] manager: (tap065353ad-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/422)
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.243 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.248 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap065353ad-20, col_values=(('external_ids', {'iface-id': '37d312e5-a280-4644-9ef2-86d934dd9b95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.250 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:17 compute-0 ovn_controller[146046]: 2026-01-26T16:10:17Z|01024|binding|INFO|Releasing lport 37d312e5-a280-4644-9ef2-86d934dd9b95 from this chassis (sb_readonly=0)
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.253 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/065353ad-2d91-4f6b-899c-49c1351ef25d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/065353ad-2d91-4f6b-899c-49c1351ef25d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.254 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f397b22b-64d7-49d1-9776-52874f2ae564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.255 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-065353ad-2d91-4f6b-899c-49c1351ef25d
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/065353ad-2d91-4f6b-899c-49c1351ef25d.pid.haproxy
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 065353ad-2d91-4f6b-899c-49c1351ef25d
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.258 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'env', 'PROCESS_TAG=haproxy-065353ad-2d91-4f6b-899c-49c1351ef25d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/065353ad-2d91-4f6b-899c-49c1351ef25d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:10:17 compute-0 ceph-mon[75140]: pgmap v1755: 305 pgs: 305 active+clean; 180 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.266 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.379 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443817.378164, 7d660383-738b-422b-af54-1a15464bf8e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.379 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] VM Started (Lifecycle Event)
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.402 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.407 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443817.3784297, 7d660383-738b-422b-af54-1a15464bf8e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.407 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] VM Paused (Lifecycle Event)
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.466 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.470 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.494 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.604 239969 DEBUG nova.compute.manager [req-214c49fa-d4dc-40a0-ab49-dfafda4dec3a req-79d4c150-fbfb-40b2-86be-c3ae62f5108c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-plugged-3a527bbf-13af-4ef8-8703-f32cc8169291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.606 239969 DEBUG oslo_concurrency.lockutils [req-214c49fa-d4dc-40a0-ab49-dfafda4dec3a req-79d4c150-fbfb-40b2-86be-c3ae62f5108c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.607 239969 DEBUG oslo_concurrency.lockutils [req-214c49fa-d4dc-40a0-ab49-dfafda4dec3a req-79d4c150-fbfb-40b2-86be-c3ae62f5108c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.608 239969 DEBUG oslo_concurrency.lockutils [req-214c49fa-d4dc-40a0-ab49-dfafda4dec3a req-79d4c150-fbfb-40b2-86be-c3ae62f5108c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:17 compute-0 nova_compute[239965]: 2026-01-26 16:10:17.608 239969 DEBUG nova.compute.manager [req-214c49fa-d4dc-40a0-ab49-dfafda4dec3a req-79d4c150-fbfb-40b2-86be-c3ae62f5108c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Processing event network-vif-plugged-3a527bbf-13af-4ef8-8703-f32cc8169291 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:10:17 compute-0 podman[329828]: 2026-01-26 16:10:17.66549731 +0000 UTC m=+0.043181727 container create 0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:10:17 compute-0 systemd[1]: Started libpod-conmon-0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84.scope.
Jan 26 16:10:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:10:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06c64fe4e75d4d1b301e5111a733b4311ad3775d975484b229f35438bfbfdcad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:17 compute-0 podman[329828]: 2026-01-26 16:10:17.644967188 +0000 UTC m=+0.022651605 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:10:17 compute-0 podman[329828]: 2026-01-26 16:10:17.751522735 +0000 UTC m=+0.129207182 container init 0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:10:17 compute-0 podman[329828]: 2026-01-26 16:10:17.757649694 +0000 UTC m=+0.135334111 container start 0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:10:17 compute-0 neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d[329843]: [NOTICE]   (329847) : New worker (329849) forked
Jan 26 16:10:17 compute-0 neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d[329843]: [NOTICE]   (329847) : Loading success.
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.822 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9ade0424-9307-41d2-99d9-0bbc7146bd66 in datapath 1f659c36-b294-4de4-81d7-0a221d4d63ba unbound from our chassis
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.825 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f659c36-b294-4de4-81d7-0a221d4d63ba
Jan 26 16:10:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.840 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b81bfbb7-e639-4b18-a76c-39b93de175f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.841 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1f659c36-b1 in ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.844 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1f659c36-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.844 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[71d1daae-016c-42e0-b385-2927da21abee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.844 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3a06f557-2186-4d53-ba28-57f42350166b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.860 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[1d94fa8a-6fb4-470c-9078-4b412b231465]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.890 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6193d260-076a-4882-95f2-68459c725381]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.915 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[47cae758-fcd6-4738-9b9d-0bd5c656b7c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.920 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6213b0-b1cd-4485-8270-db401b65f1ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 systemd-udevd[329725]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:10:17 compute-0 NetworkManager[48954]: <info>  [1769443817.9216] manager: (tap1f659c36-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.961 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cf5987e6-4764-4c69-9239-2da45426cb0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:17.964 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d494e4f6-7926-4b22-a155-01dbc1841c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 NetworkManager[48954]: <info>  [1769443818.0044] device (tap1f659c36-b0): carrier: link connected
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.011 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ac3d64-6542-4827-832d-b892da2d1284]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.034 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6f74353e-765a-495e-bb32-9ee03cf08495]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f659c36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:8f:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530881, 'reachable_time': 44944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329910, 'error': None, 'target': 'ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.040 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443818.040253, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.041 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Started (Lifecycle Event)
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.056 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ef334c3f-a776-4787-81d3-4e536eaba4cc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:8f93'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530881, 'tstamp': 530881}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329911, 'error': None, 'target': 'ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.063 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.067 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443818.0404427, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.067 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Paused (Lifecycle Event)
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.081 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ee85da4e-ff36-48e8-9f87-802d8828eac1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f659c36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:8f:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530881, 'reachable_time': 44944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329912, 'error': None, 'target': 'ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.085 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.090 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.117 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.124 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a49c5fc5-f130-4dca-b0a6-0f20e7678719]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.162 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0f48ef-5cf7-461f-83d2-14bf3eed5673]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.163 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f659c36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.163 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.164 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f659c36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.165 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:18 compute-0 NetworkManager[48954]: <info>  [1769443818.1663] manager: (tap1f659c36-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Jan 26 16:10:18 compute-0 kernel: tap1f659c36-b0: entered promiscuous mode
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.168 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f659c36-b0, col_values=(('external_ids', {'iface-id': 'a41f6fa4-45ca-4584-9219-78408cd0f9f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.169 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:18 compute-0 ovn_controller[146046]: 2026-01-26T16:10:18Z|01025|binding|INFO|Releasing lport a41f6fa4-45ca-4584-9219-78408cd0f9f7 from this chassis (sb_readonly=0)
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.171 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.172 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1f659c36-b294-4de4-81d7-0a221d4d63ba.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1f659c36-b294-4de4-81d7-0a221d4d63ba.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.173 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea7b145-7d67-41a8-86e9-79c6bd58c6b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.175 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-1f659c36-b294-4de4-81d7-0a221d4d63ba
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/1f659c36-b294-4de4-81d7-0a221d4d63ba.pid.haproxy
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 1f659c36-b294-4de4-81d7-0a221d4d63ba
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.177 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'env', 'PROCESS_TAG=haproxy-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1f659c36-b294-4de4-81d7-0a221d4d63ba.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.200 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.542 239969 DEBUG nova.network.neutron [req-048aab69-e46a-496d-91c7-6c0266c17483 req-b5136a9e-2361-4c44-9050-ae55e8b6a997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updated VIF entry in instance network info cache for port 9ade0424-9307-41d2-99d9-0bbc7146bd66. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.542 239969 DEBUG nova.network.neutron [req-048aab69-e46a-496d-91c7-6c0266c17483 req-b5136a9e-2361-4c44-9050-ae55e8b6a997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updating instance_info_cache with network_info: [{"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:18 compute-0 nova_compute[239965]: 2026-01-26 16:10:18.577 239969 DEBUG oslo_concurrency.lockutils [req-048aab69-e46a-496d-91c7-6c0266c17483 req-b5136a9e-2361-4c44-9050-ae55e8b6a997 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:18 compute-0 podman[329942]: 2026-01-26 16:10:18.611549672 +0000 UTC m=+0.079494215 container create a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 26 16:10:18 compute-0 systemd[1]: Started libpod-conmon-a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a.scope.
Jan 26 16:10:18 compute-0 podman[329942]: 2026-01-26 16:10:18.575798788 +0000 UTC m=+0.043743411 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:10:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:10:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9600c7b8092f5310b0c01bcd8304fd51f9ae738056d59266a1d9afe95de38d56/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:18 compute-0 podman[329942]: 2026-01-26 16:10:18.70627264 +0000 UTC m=+0.174217203 container init a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 16:10:18 compute-0 podman[329942]: 2026-01-26 16:10:18.711774355 +0000 UTC m=+0.179718928 container start a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 16:10:18 compute-0 neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba[329957]: [NOTICE]   (329961) : New worker (329963) forked
Jan 26 16:10:18 compute-0 neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba[329957]: [NOTICE]   (329961) : Loading success.
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.765 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 in datapath 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 unbound from our chassis
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.767 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.781 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[569adbd0-48ec-4153-aa88-2a6836a18dff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.782 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74a6493e-91 in ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.785 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74a6493e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.785 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[07c031c1-84b1-4823-904a-ca91950abe1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.786 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[47abc9a9-4f8e-4efc-b243-2f2878d26142]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.798 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[1bbdb434-45e8-4c95-b43a-d346a4425aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.822 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9d272667-ed67-48b5-8bec-c4a3e76d7af4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.850 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[51892116-8708-4fa0-ad79-ea1102ebe42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 systemd-udevd[329731]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:10:18 compute-0 NetworkManager[48954]: <info>  [1769443818.8575] manager: (tap74a6493e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/425)
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.856 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f17f17df-3c31-4ab8-b463-0cdcea8a2038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.900 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ff2598c2-8551-47fb-82dd-1912561829bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.903 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6e61f24e-9021-4819-879b-0b482934f5fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 NetworkManager[48954]: <info>  [1769443818.9313] device (tap74a6493e-90): carrier: link connected
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.937 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[df66e57c-a364-42cc-8888-5960e296b4cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.965 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4e8827-75c4-4d6e-95c3-d7b767f5feb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74a6493e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:10:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530973, 'reachable_time': 18504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 329982, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:18.984 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cdda9ccb-f1da-4bb8-96cb-fade619d408a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:10c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530973, 'tstamp': 530973}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 329983, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.004 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5ffb9f54-c80e-4b73-b73e-e1068b74a148]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74a6493e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:10:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 180, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530973, 'reachable_time': 18504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 329984, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.042 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[94aa7cfc-7457-46ee-bdee-1a3cefb89d8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.113 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d655c3e6-2350-4893-9326-628831cd70f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.114 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a6493e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.115 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.116 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74a6493e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:19 compute-0 NetworkManager[48954]: <info>  [1769443819.1187] manager: (tap74a6493e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/426)
Jan 26 16:10:19 compute-0 nova_compute[239965]: 2026-01-26 16:10:19.118 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:19 compute-0 kernel: tap74a6493e-90: entered promiscuous mode
Jan 26 16:10:19 compute-0 nova_compute[239965]: 2026-01-26 16:10:19.122 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.123 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74a6493e-90, col_values=(('external_ids', {'iface-id': 'fb7d9679-6ae1-476e-99df-96ff9d9ad5e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:19 compute-0 nova_compute[239965]: 2026-01-26 16:10:19.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:19 compute-0 ovn_controller[146046]: 2026-01-26T16:10:19Z|01026|binding|INFO|Releasing lport fb7d9679-6ae1-476e-99df-96ff9d9ad5e9 from this chassis (sb_readonly=0)
Jan 26 16:10:19 compute-0 nova_compute[239965]: 2026-01-26 16:10:19.126 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.127 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.127 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[10979fbd-7f6b-4d26-ad7e-e04aca31ab7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.128 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.pid.haproxy
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:10:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:19.129 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'env', 'PROCESS_TAG=haproxy-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:10:19 compute-0 nova_compute[239965]: 2026-01-26 16:10:19.143 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:19 compute-0 ceph-mon[75140]: pgmap v1756: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Jan 26 16:10:19 compute-0 podman[330016]: 2026-01-26 16:10:19.482087948 +0000 UTC m=+0.050855015 container create 45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:10:19 compute-0 systemd[1]: Started libpod-conmon-45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe.scope.
Jan 26 16:10:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:10:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f74924d9bc06e12748245418289a7f9d8aef6773dd7e93b7ee67e5c6b59b839/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:10:19 compute-0 podman[330016]: 2026-01-26 16:10:19.455642161 +0000 UTC m=+0.024409248 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:10:19 compute-0 podman[330016]: 2026-01-26 16:10:19.558361653 +0000 UTC m=+0.127128740 container init 45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 26 16:10:19 compute-0 podman[330016]: 2026-01-26 16:10:19.563554061 +0000 UTC m=+0.132321128 container start 45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:10:19 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[330031]: [NOTICE]   (330035) : New worker (330037) forked
Jan 26 16:10:19 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[330031]: [NOTICE]   (330035) : Loading success.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.096 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-plugged-3a527bbf-13af-4ef8-8703-f32cc8169291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.097 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.097 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.098 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.098 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] No event matching network-vif-plugged-3a527bbf-13af-4ef8-8703-f32cc8169291 in dict_keys([('network-vif-plugged', '9ade0424-9307-41d2-99d9-0bbc7146bd66')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.098 239969 WARNING nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received unexpected event network-vif-plugged-3a527bbf-13af-4ef8-8703-f32cc8169291 for instance with vm_state building and task_state spawning.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.098 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.099 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.099 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.099 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.099 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Processing event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.099 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.100 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.100 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.100 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.100 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.100 239969 WARNING nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received unexpected event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with vm_state building and task_state spawning.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.101 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-plugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.101 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.101 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.102 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.102 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Processing event network-vif-plugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.102 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-plugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.102 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.102 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.103 239969 DEBUG oslo_concurrency.lockutils [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.103 239969 DEBUG nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] No waiting events found dispatching network-vif-plugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.103 239969 WARNING nova.compute.manager [req-f14e56fe-0373-4f75-b161-3007ce5b8269 req-3b4b85b6-6906-427e-9a64-1fa9d3695467 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received unexpected event network-vif-plugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 for instance with vm_state building and task_state spawning.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.104 239969 DEBUG nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.104 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.108 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443820.1077018, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.109 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Resumed (Lifecycle Event)
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.112 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.112 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.116 239969 INFO nova.virt.libvirt.driver [-] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Instance spawned successfully.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.117 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.118 239969 INFO nova.virt.libvirt.driver [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance spawned successfully.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.119 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.147 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.157 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.161 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.161 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.162 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.162 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.163 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.163 239969 DEBUG nova.virt.libvirt.driver [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.168 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.168 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.169 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.169 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.170 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.170 239969 DEBUG nova.virt.libvirt.driver [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.207 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.208 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443820.109035, 7d660383-738b-422b-af54-1a15464bf8e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.208 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] VM Resumed (Lifecycle Event)
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.248 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.253 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.262 239969 INFO nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Took 10.52 seconds to spawn the instance on the hypervisor.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.263 239969 DEBUG nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.270 239969 INFO nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Took 12.42 seconds to spawn the instance on the hypervisor.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.271 239969 DEBUG nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.279 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.625 239969 INFO nova.compute.manager [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Took 11.87 seconds to build instance.
Jan 26 16:10:20 compute-0 nova_compute[239965]: 2026-01-26 16:10:20.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:21 compute-0 ceph-mon[75140]: pgmap v1757: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 3.6 MiB/s wr, 58 op/s
Jan 26 16:10:21 compute-0 nova_compute[239965]: 2026-01-26 16:10:21.517 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 143 op/s
Jan 26 16:10:22 compute-0 ceph-mon[75140]: pgmap v1758: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.6 MiB/s wr, 143 op/s
Jan 26 16:10:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:23 compute-0 nova_compute[239965]: 2026-01-26 16:10:23.008 239969 INFO nova.compute.manager [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Took 16.10 seconds to build instance.
Jan 26 16:10:23 compute-0 nova_compute[239965]: 2026-01-26 16:10:23.045 239969 DEBUG oslo_concurrency.lockutils [None req-1f9e8988-8fb2-4511-97a7-5110d4db83e5 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:23 compute-0 nova_compute[239965]: 2026-01-26 16:10:23.054 239969 DEBUG oslo_concurrency.lockutils [None req-7ec08986-449c-4c8c-8c51-ffec0ed087d3 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 140 op/s
Jan 26 16:10:24 compute-0 ceph-mon[75140]: pgmap v1759: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.0 MiB/s wr, 140 op/s
Jan 26 16:10:25 compute-0 ovn_controller[146046]: 2026-01-26T16:10:25Z|01027|binding|INFO|Releasing lport 37d312e5-a280-4644-9ef2-86d934dd9b95 from this chassis (sb_readonly=0)
Jan 26 16:10:25 compute-0 ovn_controller[146046]: 2026-01-26T16:10:25Z|01028|binding|INFO|Releasing lport a41f6fa4-45ca-4584-9219-78408cd0f9f7 from this chassis (sb_readonly=0)
Jan 26 16:10:25 compute-0 ovn_controller[146046]: 2026-01-26T16:10:25Z|01029|binding|INFO|Releasing lport fb7d9679-6ae1-476e-99df-96ff9d9ad5e9 from this chassis (sb_readonly=0)
Jan 26 16:10:25 compute-0 NetworkManager[48954]: <info>  [1769443825.4907] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/427)
Jan 26 16:10:25 compute-0 NetworkManager[48954]: <info>  [1769443825.4915] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Jan 26 16:10:25 compute-0 nova_compute[239965]: 2026-01-26 16:10:25.489 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:25 compute-0 ovn_controller[146046]: 2026-01-26T16:10:25Z|01030|binding|INFO|Releasing lport 37d312e5-a280-4644-9ef2-86d934dd9b95 from this chassis (sb_readonly=0)
Jan 26 16:10:25 compute-0 ovn_controller[146046]: 2026-01-26T16:10:25Z|01031|binding|INFO|Releasing lport a41f6fa4-45ca-4584-9219-78408cd0f9f7 from this chassis (sb_readonly=0)
Jan 26 16:10:25 compute-0 ovn_controller[146046]: 2026-01-26T16:10:25Z|01032|binding|INFO|Releasing lport fb7d9679-6ae1-476e-99df-96ff9d9ad5e9 from this chassis (sb_readonly=0)
Jan 26 16:10:25 compute-0 nova_compute[239965]: 2026-01-26 16:10:25.502 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:25 compute-0 nova_compute[239965]: 2026-01-26 16:10:25.792 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:25 compute-0 nova_compute[239965]: 2026-01-26 16:10:25.855 239969 DEBUG nova.compute.manager [req-033de9d0-8fa6-42e2-b7c7-995e20e23c9f req-c72ffb59-2876-48fc-a059-7ab0089b1262 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-changed-3a527bbf-13af-4ef8-8703-f32cc8169291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:25 compute-0 nova_compute[239965]: 2026-01-26 16:10:25.855 239969 DEBUG nova.compute.manager [req-033de9d0-8fa6-42e2-b7c7-995e20e23c9f req-c72ffb59-2876-48fc-a059-7ab0089b1262 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Refreshing instance network info cache due to event network-changed-3a527bbf-13af-4ef8-8703-f32cc8169291. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:10:25 compute-0 nova_compute[239965]: 2026-01-26 16:10:25.856 239969 DEBUG oslo_concurrency.lockutils [req-033de9d0-8fa6-42e2-b7c7-995e20e23c9f req-c72ffb59-2876-48fc-a059-7ab0089b1262 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:25 compute-0 nova_compute[239965]: 2026-01-26 16:10:25.856 239969 DEBUG oslo_concurrency.lockutils [req-033de9d0-8fa6-42e2-b7c7-995e20e23c9f req-c72ffb59-2876-48fc-a059-7ab0089b1262 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:25 compute-0 nova_compute[239965]: 2026-01-26 16:10:25.856 239969 DEBUG nova.network.neutron [req-033de9d0-8fa6-42e2-b7c7-995e20e23c9f req-c72ffb59-2876-48fc-a059-7ab0089b1262 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Refreshing network info cache for port 3a527bbf-13af-4ef8-8703-f32cc8169291 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:10:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 26 16:10:26 compute-0 nova_compute[239965]: 2026-01-26 16:10:26.580 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:27 compute-0 ceph-mon[75140]: pgmap v1760: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 26 16:10:27 compute-0 nova_compute[239965]: 2026-01-26 16:10:27.651 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "8adc68cc-08d4-4743-a049-2ebd5b030fef" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:27 compute-0 nova_compute[239965]: 2026-01-26 16:10:27.652 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:27 compute-0 nova_compute[239965]: 2026-01-26 16:10:27.668 239969 DEBUG nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:10:27 compute-0 nova_compute[239965]: 2026-01-26 16:10:27.740 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:27 compute-0 nova_compute[239965]: 2026-01-26 16:10:27.740 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:27 compute-0 nova_compute[239965]: 2026-01-26 16:10:27.749 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:10:27 compute-0 nova_compute[239965]: 2026-01-26 16:10:27.750 239969 INFO nova.compute.claims [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:10:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:27 compute-0 nova_compute[239965]: 2026-01-26 16:10:27.874 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.181 239969 DEBUG nova.network.neutron [req-033de9d0-8fa6-42e2-b7c7-995e20e23c9f req-c72ffb59-2876-48fc-a059-7ab0089b1262 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updated VIF entry in instance network info cache for port 3a527bbf-13af-4ef8-8703-f32cc8169291. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.181 239969 DEBUG nova.network.neutron [req-033de9d0-8fa6-42e2-b7c7-995e20e23c9f req-c72ffb59-2876-48fc-a059-7ab0089b1262 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updating instance_info_cache with network_info: [{"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.209 239969 DEBUG oslo_concurrency.lockutils [req-033de9d0-8fa6-42e2-b7c7-995e20e23c9f req-c72ffb59-2876-48fc-a059-7ab0089b1262 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:10:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1218647314' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.456 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.461 239969 DEBUG nova.compute.provider_tree [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.478 239969 DEBUG nova.scheduler.client.report [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.496 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.496 239969 DEBUG nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.532 239969 DEBUG nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.532 239969 DEBUG nova.network.neutron [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.554 239969 INFO nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.575 239969 DEBUG nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:10:28
Jan 26 16:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'images']
Jan 26 16:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.666 239969 DEBUG nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.667 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.668 239969 INFO nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Creating image(s)
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.690 239969 DEBUG nova.storage.rbd_utils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.712 239969 DEBUG nova.storage.rbd_utils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.731 239969 DEBUG nova.storage.rbd_utils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.736 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.820 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.821 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.822 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.822 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.842 239969 DEBUG nova.storage.rbd_utils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.845 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:28 compute-0 nova_compute[239965]: 2026-01-26 16:10:28.895 239969 DEBUG nova.policy [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2d9b11d846a34b6e892fb80186bedba5', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:10:29 compute-0 nova_compute[239965]: 2026-01-26 16:10:29.103 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:29 compute-0 nova_compute[239965]: 2026-01-26 16:10:29.160 239969 DEBUG nova.storage.rbd_utils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] resizing rbd image 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:10:29 compute-0 nova_compute[239965]: 2026-01-26 16:10:29.235 239969 DEBUG nova.objects.instance [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'migration_context' on Instance uuid 8adc68cc-08d4-4743-a049-2ebd5b030fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:29 compute-0 ceph-mon[75140]: pgmap v1761: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 25 KiB/s wr, 147 op/s
Jan 26 16:10:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1218647314' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:29 compute-0 nova_compute[239965]: 2026-01-26 16:10:29.250 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:10:29 compute-0 nova_compute[239965]: 2026-01-26 16:10:29.250 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Ensure instance console log exists: /var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:10:29 compute-0 nova_compute[239965]: 2026-01-26 16:10:29.251 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:29 compute-0 nova_compute[239965]: 2026-01-26 16:10:29.251 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:29 compute-0 nova_compute[239965]: 2026-01-26 16:10:29.252 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:29 compute-0 nova_compute[239965]: 2026-01-26 16:10:29.628 239969 DEBUG nova.network.neutron [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Successfully created port: 8feda26e-e9fc-4f52-8036-f1c850cfd359 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 170 B/s wr, 143 op/s
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:10:30 compute-0 nova_compute[239965]: 2026-01-26 16:10:30.591 239969 DEBUG nova.network.neutron [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Successfully updated port: 8feda26e-e9fc-4f52-8036-f1c850cfd359 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:10:30 compute-0 nova_compute[239965]: 2026-01-26 16:10:30.609 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "refresh_cache-8adc68cc-08d4-4743-a049-2ebd5b030fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:30 compute-0 nova_compute[239965]: 2026-01-26 16:10:30.610 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquired lock "refresh_cache-8adc68cc-08d4-4743-a049-2ebd5b030fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:30 compute-0 nova_compute[239965]: 2026-01-26 16:10:30.610 239969 DEBUG nova.network.neutron [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:10:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:10:30 compute-0 nova_compute[239965]: 2026-01-26 16:10:30.795 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:30 compute-0 nova_compute[239965]: 2026-01-26 16:10:30.800 239969 DEBUG nova.compute.manager [req-133f08c0-6977-4510-8474-42cdfb047267 req-c5dda337-f06c-4889-a285-5ea338122569 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Received event network-changed-8feda26e-e9fc-4f52-8036-f1c850cfd359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:30 compute-0 nova_compute[239965]: 2026-01-26 16:10:30.800 239969 DEBUG nova.compute.manager [req-133f08c0-6977-4510-8474-42cdfb047267 req-c5dda337-f06c-4889-a285-5ea338122569 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Refreshing instance network info cache due to event network-changed-8feda26e-e9fc-4f52-8036-f1c850cfd359. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:10:30 compute-0 nova_compute[239965]: 2026-01-26 16:10:30.800 239969 DEBUG oslo_concurrency.lockutils [req-133f08c0-6977-4510-8474-42cdfb047267 req-c5dda337-f06c-4889-a285-5ea338122569 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-8adc68cc-08d4-4743-a049-2ebd5b030fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:31 compute-0 nova_compute[239965]: 2026-01-26 16:10:31.080 239969 DEBUG nova.network.neutron [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:10:31 compute-0 ceph-mon[75140]: pgmap v1762: 305 pgs: 305 active+clean; 181 MiB data, 758 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 170 B/s wr, 143 op/s
Jan 26 16:10:31 compute-0 nova_compute[239965]: 2026-01-26 16:10:31.703 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 216 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.654 239969 DEBUG nova.network.neutron [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Updating instance_info_cache with network_info: [{"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.676 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Releasing lock "refresh_cache-8adc68cc-08d4-4743-a049-2ebd5b030fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.696 239969 DEBUG nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Instance network_info: |[{"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.698 239969 DEBUG oslo_concurrency.lockutils [req-133f08c0-6977-4510-8474-42cdfb047267 req-c5dda337-f06c-4889-a285-5ea338122569 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-8adc68cc-08d4-4743-a049-2ebd5b030fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.698 239969 DEBUG nova.network.neutron [req-133f08c0-6977-4510-8474-42cdfb047267 req-c5dda337-f06c-4889-a285-5ea338122569 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Refreshing network info cache for port 8feda26e-e9fc-4f52-8036-f1c850cfd359 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.701 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Start _get_guest_xml network_info=[{"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.705 239969 WARNING nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.711 239969 DEBUG nova.virt.libvirt.host [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.711 239969 DEBUG nova.virt.libvirt.host [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.716 239969 DEBUG nova.virt.libvirt.host [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.717 239969 DEBUG nova.virt.libvirt.host [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.717 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.717 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.718 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.718 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.718 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.718 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.718 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.719 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.719 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.719 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.719 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.719 239969 DEBUG nova.virt.hardware [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:10:32 compute-0 nova_compute[239965]: 2026-01-26 16:10:32.722 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:32 compute-0 ovn_controller[146046]: 2026-01-26T16:10:32Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:74:58:77 10.100.0.6
Jan 26 16:10:32 compute-0 ovn_controller[146046]: 2026-01-26T16:10:32Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:58:77 10.100.0.6
Jan 26 16:10:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:33 compute-0 ovn_controller[146046]: 2026-01-26T16:10:33Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:69:92:94 10.100.0.4
Jan 26 16:10:33 compute-0 ovn_controller[146046]: 2026-01-26T16:10:33Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:69:92:94 10.100.0.4
Jan 26 16:10:33 compute-0 ceph-mon[75140]: pgmap v1763: 305 pgs: 305 active+clean; 216 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Jan 26 16:10:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:10:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455751071' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.311 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.332 239969 DEBUG nova.storage.rbd_utils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.336 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:10:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1435721036' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.888 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.890 239969 DEBUG nova.virt.libvirt.vif [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-820130843',display_name='tempest-ServersNegativeTestJSON-server-820130843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-820130843',id=107,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-u96z07pg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-ServersNegativeT
estJSON-1013867593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:28Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=8adc68cc-08d4-4743-a049-2ebd5b030fef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.890 239969 DEBUG nova.network.os_vif_util [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.891 239969 DEBUG nova.network.os_vif_util [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:40:83,bridge_name='br-int',has_traffic_filtering=True,id=8feda26e-e9fc-4f52-8036-f1c850cfd359,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feda26e-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.892 239969 DEBUG nova.objects.instance [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'pci_devices' on Instance uuid 8adc68cc-08d4-4743-a049-2ebd5b030fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.905 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <uuid>8adc68cc-08d4-4743-a049-2ebd5b030fef</uuid>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <name>instance-0000006b</name>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersNegativeTestJSON-server-820130843</nova:name>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:10:32</nova:creationTime>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <nova:user uuid="2d9b11d846a34b6e892fb80186bedba5">tempest-ServersNegativeTestJSON-1013867593-project-member</nova:user>
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <nova:project uuid="d8d1cdeaf6004b94bba2d3c727959e51">tempest-ServersNegativeTestJSON-1013867593</nova:project>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <nova:port uuid="8feda26e-e9fc-4f52-8036-f1c850cfd359">
Jan 26 16:10:33 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <system>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <entry name="serial">8adc68cc-08d4-4743-a049-2ebd5b030fef</entry>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <entry name="uuid">8adc68cc-08d4-4743-a049-2ebd5b030fef</entry>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     </system>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <os>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   </os>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <features>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   </features>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8adc68cc-08d4-4743-a049-2ebd5b030fef_disk">
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       </source>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/8adc68cc-08d4-4743-a049-2ebd5b030fef_disk.config">
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       </source>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:10:33 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:1b:40:83"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <target dev="tap8feda26e-e9"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef/console.log" append="off"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <video>
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     </video>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:10:33 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:10:33 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:10:33 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:10:33 compute-0 nova_compute[239965]: </domain>
Jan 26 16:10:33 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.907 239969 DEBUG nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Preparing to wait for external event network-vif-plugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.907 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.908 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.908 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.909 239969 DEBUG nova.virt.libvirt.vif [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-820130843',display_name='tempest-ServersNegativeTestJSON-server-820130843',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-820130843',id=107,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-u96z07pg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-Server
sNegativeTestJSON-1013867593-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:28Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=8adc68cc-08d4-4743-a049-2ebd5b030fef,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.909 239969 DEBUG nova.network.os_vif_util [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.910 239969 DEBUG nova.network.os_vif_util [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:40:83,bridge_name='br-int',has_traffic_filtering=True,id=8feda26e-e9fc-4f52-8036-f1c850cfd359,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feda26e-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.910 239969 DEBUG os_vif [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:40:83,bridge_name='br-int',has_traffic_filtering=True,id=8feda26e-e9fc-4f52-8036-f1c850cfd359,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feda26e-e9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.911 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.911 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.911 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.914 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8feda26e-e9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.914 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8feda26e-e9, col_values=(('external_ids', {'iface-id': '8feda26e-e9fc-4f52-8036-f1c850cfd359', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1b:40:83', 'vm-uuid': '8adc68cc-08d4-4743-a049-2ebd5b030fef'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:33 compute-0 NetworkManager[48954]: <info>  [1769443833.9168] manager: (tap8feda26e-e9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.925 239969 INFO os_vif [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:40:83,bridge_name='br-int',has_traffic_filtering=True,id=8feda26e-e9fc-4f52-8036-f1c850cfd359,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feda26e-e9')
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.976 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.977 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.977 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No VIF found with MAC fa:16:3e:1b:40:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.977 239969 INFO nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Using config drive
Jan 26 16:10:33 compute-0 nova_compute[239965]: 2026-01-26 16:10:33.997 239969 DEBUG nova.storage.rbd_utils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 254 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 116 op/s
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.254 239969 DEBUG nova.network.neutron [req-133f08c0-6977-4510-8474-42cdfb047267 req-c5dda337-f06c-4889-a285-5ea338122569 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Updated VIF entry in instance network info cache for port 8feda26e-e9fc-4f52-8036-f1c850cfd359. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.254 239969 DEBUG nova.network.neutron [req-133f08c0-6977-4510-8474-42cdfb047267 req-c5dda337-f06c-4889-a285-5ea338122569 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Updating instance_info_cache with network_info: [{"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.275 239969 DEBUG oslo_concurrency.lockutils [req-133f08c0-6977-4510-8474-42cdfb047267 req-c5dda337-f06c-4889-a285-5ea338122569 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-8adc68cc-08d4-4743-a049-2ebd5b030fef" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3455751071' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1435721036' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.351 239969 INFO nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Creating config drive at /var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef/disk.config
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.356 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnbgvcrmm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.497 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnbgvcrmm" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.523 239969 DEBUG nova.storage.rbd_utils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.528 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef/disk.config 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.688 239969 DEBUG oslo_concurrency.processutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef/disk.config 8adc68cc-08d4-4743-a049-2ebd5b030fef_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.689 239969 INFO nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Deleting local config drive /var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef/disk.config because it was imported into RBD.
Jan 26 16:10:34 compute-0 kernel: tap8feda26e-e9: entered promiscuous mode
Jan 26 16:10:34 compute-0 NetworkManager[48954]: <info>  [1769443834.7454] manager: (tap8feda26e-e9): new Tun device (/org/freedesktop/NetworkManager/Devices/430)
Jan 26 16:10:34 compute-0 ovn_controller[146046]: 2026-01-26T16:10:34Z|01033|binding|INFO|Claiming lport 8feda26e-e9fc-4f52-8036-f1c850cfd359 for this chassis.
Jan 26 16:10:34 compute-0 ovn_controller[146046]: 2026-01-26T16:10:34Z|01034|binding|INFO|8feda26e-e9fc-4f52-8036-f1c850cfd359: Claiming fa:16:3e:1b:40:83 10.100.0.7
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.748 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.759 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:40:83 10.100.0.7'], port_security=['fa:16:3e:1b:40:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8adc68cc-08d4-4743-a049-2ebd5b030fef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb0a5e0f-b1d5-4f6c-9884-3b66f10a59d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=568e2bde-137f-4351-b2df-67acb89970b2, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=8feda26e-e9fc-4f52-8036-f1c850cfd359) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.760 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 8feda26e-e9fc-4f52-8036-f1c850cfd359 in datapath 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 bound to our chassis
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.761 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.767 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:34 compute-0 ovn_controller[146046]: 2026-01-26T16:10:34Z|01035|binding|INFO|Setting lport 8feda26e-e9fc-4f52-8036-f1c850cfd359 ovn-installed in OVS
Jan 26 16:10:34 compute-0 ovn_controller[146046]: 2026-01-26T16:10:34Z|01036|binding|INFO|Setting lport 8feda26e-e9fc-4f52-8036-f1c850cfd359 up in Southbound
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.769 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:34 compute-0 systemd-udevd[330371]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:10:34 compute-0 systemd-machined[208061]: New machine qemu-132-instance-0000006b.
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.779 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2fd0b8-d727-4e8a-a507-85c64407fef2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:34 compute-0 NetworkManager[48954]: <info>  [1769443834.7873] device (tap8feda26e-e9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:10:34 compute-0 NetworkManager[48954]: <info>  [1769443834.7878] device (tap8feda26e-e9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:10:34 compute-0 systemd[1]: Started Virtual Machine qemu-132-instance-0000006b.
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.811 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a1cc55c6-f3f2-4b5b-822f-4b3be5471ba7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.814 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[efd2c7c2-1214-4ec3-b620-25b99fe137ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.840 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[824696f6-bd4b-4935-8647-6736e9ce1f40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.857 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[72a59657-7f16-45c8-9f87-9615ffdb49ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74a6493e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:10:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 6, 'rx_bytes': 532, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530973, 'reachable_time': 18504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330383, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.874 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba1d3d3c-45a2-4faa-ac90-15e6ab1b2c32]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74a6493e-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530987, 'tstamp': 530987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330385, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74a6493e-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530991, 'tstamp': 530991}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330385, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.876 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a6493e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.877 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:34 compute-0 nova_compute[239965]: 2026-01-26 16:10:34.878 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.880 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74a6493e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.881 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.881 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74a6493e-90, col_values=(('external_ids', {'iface-id': 'fb7d9679-6ae1-476e-99df-96ff9d9ad5e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:34.882 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.173 239969 DEBUG nova.compute.manager [req-7cbe9606-dda3-4321-b24e-370161124c05 req-b59423f6-c47a-4eeb-ac92-873c710df66f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Received event network-vif-plugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.174 239969 DEBUG oslo_concurrency.lockutils [req-7cbe9606-dda3-4321-b24e-370161124c05 req-b59423f6-c47a-4eeb-ac92-873c710df66f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.174 239969 DEBUG oslo_concurrency.lockutils [req-7cbe9606-dda3-4321-b24e-370161124c05 req-b59423f6-c47a-4eeb-ac92-873c710df66f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.174 239969 DEBUG oslo_concurrency.lockutils [req-7cbe9606-dda3-4321-b24e-370161124c05 req-b59423f6-c47a-4eeb-ac92-873c710df66f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.174 239969 DEBUG nova.compute.manager [req-7cbe9606-dda3-4321-b24e-370161124c05 req-b59423f6-c47a-4eeb-ac92-873c710df66f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Processing event network-vif-plugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.286 239969 DEBUG nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.289 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443835.2857492, 8adc68cc-08d4-4743-a049-2ebd5b030fef => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.289 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] VM Started (Lifecycle Event)
Jan 26 16:10:35 compute-0 ceph-mon[75140]: pgmap v1764: 305 pgs: 305 active+clean; 254 MiB data, 781 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 116 op/s
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.294 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.299 239969 INFO nova.virt.libvirt.driver [-] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Instance spawned successfully.
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.300 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.320 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.324 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.384 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.385 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443835.2863476, 8adc68cc-08d4-4743-a049-2ebd5b030fef => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.385 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] VM Paused (Lifecycle Event)
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.392 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.393 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.393 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.393 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.394 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.394 239969 DEBUG nova.virt.libvirt.driver [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.416 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.421 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443835.2938066, 8adc68cc-08d4-4743-a049-2ebd5b030fef => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.421 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] VM Resumed (Lifecycle Event)
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.453 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.457 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.463 239969 INFO nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Took 6.80 seconds to spawn the instance on the hypervisor.
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.463 239969 DEBUG nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.477 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.537 239969 INFO nova.compute.manager [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Took 7.82 seconds to build instance.
Jan 26 16:10:35 compute-0 nova_compute[239965]: 2026-01-26 16:10:35.554 239969 DEBUG oslo_concurrency.lockutils [None req-f29f3c22-a781-4d29-85e0-449f9ebc1463 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 259 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 965 KiB/s rd, 4.8 MiB/s wr, 117 op/s
Jan 26 16:10:36 compute-0 ceph-mon[75140]: pgmap v1765: 305 pgs: 305 active+clean; 259 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 965 KiB/s rd, 4.8 MiB/s wr, 117 op/s
Jan 26 16:10:36 compute-0 nova_compute[239965]: 2026-01-26 16:10:36.703 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.331 239969 DEBUG nova.compute.manager [req-e5304880-7d05-4d2d-870f-777782d385c7 req-09a93679-9c9c-4200-ab74-b98b39d46f31 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Received event network-vif-plugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.331 239969 DEBUG oslo_concurrency.lockutils [req-e5304880-7d05-4d2d-870f-777782d385c7 req-09a93679-9c9c-4200-ab74-b98b39d46f31 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.332 239969 DEBUG oslo_concurrency.lockutils [req-e5304880-7d05-4d2d-870f-777782d385c7 req-09a93679-9c9c-4200-ab74-b98b39d46f31 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.332 239969 DEBUG oslo_concurrency.lockutils [req-e5304880-7d05-4d2d-870f-777782d385c7 req-09a93679-9c9c-4200-ab74-b98b39d46f31 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.332 239969 DEBUG nova.compute.manager [req-e5304880-7d05-4d2d-870f-777782d385c7 req-09a93679-9c9c-4200-ab74-b98b39d46f31 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] No waiting events found dispatching network-vif-plugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.332 239969 WARNING nova.compute.manager [req-e5304880-7d05-4d2d-870f-777782d385c7 req-09a93679-9c9c-4200-ab74-b98b39d46f31 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Received unexpected event network-vif-plugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 for instance with vm_state active and task_state None.
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.544 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.805 239969 DEBUG oslo_concurrency.lockutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "8adc68cc-08d4-4743-a049-2ebd5b030fef" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.806 239969 DEBUG oslo_concurrency.lockutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.806 239969 DEBUG oslo_concurrency.lockutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.806 239969 DEBUG oslo_concurrency.lockutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.806 239969 DEBUG oslo_concurrency.lockutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.807 239969 INFO nova.compute.manager [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Terminating instance
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.808 239969 DEBUG nova.compute.manager [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:10:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:37 compute-0 kernel: tap8feda26e-e9 (unregistering): left promiscuous mode
Jan 26 16:10:37 compute-0 NetworkManager[48954]: <info>  [1769443837.8536] device (tap8feda26e-e9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.861 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:37 compute-0 ovn_controller[146046]: 2026-01-26T16:10:37Z|01037|binding|INFO|Releasing lport 8feda26e-e9fc-4f52-8036-f1c850cfd359 from this chassis (sb_readonly=0)
Jan 26 16:10:37 compute-0 ovn_controller[146046]: 2026-01-26T16:10:37Z|01038|binding|INFO|Setting lport 8feda26e-e9fc-4f52-8036-f1c850cfd359 down in Southbound
Jan 26 16:10:37 compute-0 ovn_controller[146046]: 2026-01-26T16:10:37Z|01039|binding|INFO|Removing iface tap8feda26e-e9 ovn-installed in OVS
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.869 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1b:40:83 10.100.0.7'], port_security=['fa:16:3e:1b:40:83 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '8adc68cc-08d4-4743-a049-2ebd5b030fef', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb0a5e0f-b1d5-4f6c-9884-3b66f10a59d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=568e2bde-137f-4351-b2df-67acb89970b2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=8feda26e-e9fc-4f52-8036-f1c850cfd359) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.870 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 8feda26e-e9fc-4f52-8036-f1c850cfd359 in datapath 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 unbound from our chassis
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.871 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:10:37 compute-0 nova_compute[239965]: 2026-01-26 16:10:37.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.887 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fccb6826-4fb6-4621-9654-890753c9b84f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:37 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Jan 26 16:10:37 compute-0 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d0000006b.scope: Consumed 3.127s CPU time.
Jan 26 16:10:37 compute-0 systemd-machined[208061]: Machine qemu-132-instance-0000006b terminated.
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.918 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[58225f57-3260-4996-ae3d-3f5a64b54220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.921 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f688458d-48d4-48ff-b817-c5087dea83e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.961 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[842408c7-5324-4c99-9d2a-e4254f09c806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.980 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[283f621e-f296-448f-a18f-efe1bb4bfee0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74a6493e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:10:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 8, 'rx_bytes': 532, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 303], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530973, 'reachable_time': 18504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330439, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.996 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c8df3f83-3c94-4272-afc3-f400c758a977]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap74a6493e-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530987, 'tstamp': 530987}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330440, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap74a6493e-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530991, 'tstamp': 530991}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330440, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:37.997 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a6493e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.008 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.014 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:38.015 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74a6493e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:38.015 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:38.016 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74a6493e-90, col_values=(('external_ids', {'iface-id': 'fb7d9679-6ae1-476e-99df-96ff9d9ad5e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:38.016 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.046 239969 INFO nova.virt.libvirt.driver [-] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Instance destroyed successfully.
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.047 239969 DEBUG nova.objects.instance [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'resources' on Instance uuid 8adc68cc-08d4-4743-a049-2ebd5b030fef obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.065 239969 DEBUG nova.virt.libvirt.vif [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:10:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-820130843',display_name='tempest-ServersNegativeTestJSON-server-820130843',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-820130843',id=107,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:10:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-u96z07pg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-ServersNegativeTestJSON-1013867593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:10:35Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=8adc68cc-08d4-4743-a049-2ebd5b030fef,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.065 239969 DEBUG nova.network.os_vif_util [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "address": "fa:16:3e:1b:40:83", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8feda26e-e9", "ovs_interfaceid": "8feda26e-e9fc-4f52-8036-f1c850cfd359", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.066 239969 DEBUG nova.network.os_vif_util [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1b:40:83,bridge_name='br-int',has_traffic_filtering=True,id=8feda26e-e9fc-4f52-8036-f1c850cfd359,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feda26e-e9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.066 239969 DEBUG os_vif [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:40:83,bridge_name='br-int',has_traffic_filtering=True,id=8feda26e-e9fc-4f52-8036-f1c850cfd359,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feda26e-e9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.068 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.069 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8feda26e-e9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.070 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.071 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:38 compute-0 nova_compute[239965]: 2026-01-26 16:10:38.073 239969 INFO os_vif [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1b:40:83,bridge_name='br-int',has_traffic_filtering=True,id=8feda26e-e9fc-4f52-8036-f1c850cfd359,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8feda26e-e9')
Jan 26 16:10:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Jan 26 16:10:38 compute-0 ceph-mon[75140]: pgmap v1766: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.529 239969 DEBUG nova.compute.manager [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Received event network-vif-unplugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.530 239969 DEBUG oslo_concurrency.lockutils [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.530 239969 DEBUG oslo_concurrency.lockutils [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.530 239969 DEBUG oslo_concurrency.lockutils [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.530 239969 DEBUG nova.compute.manager [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] No waiting events found dispatching network-vif-unplugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.530 239969 DEBUG nova.compute.manager [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Received event network-vif-unplugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.531 239969 DEBUG nova.compute.manager [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Received event network-vif-plugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.531 239969 DEBUG oslo_concurrency.lockutils [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.531 239969 DEBUG oslo_concurrency.lockutils [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.531 239969 DEBUG oslo_concurrency.lockutils [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.531 239969 DEBUG nova.compute.manager [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] No waiting events found dispatching network-vif-plugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.531 239969 WARNING nova.compute.manager [req-9c640213-222b-4cc3-8e78-8027adebce0d req-24204007-685d-4dd9-9195-63d7ea5ec8d8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Received unexpected event network-vif-plugged-8feda26e-e9fc-4f52-8036-f1c850cfd359 for instance with vm_state active and task_state deleting.
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.868 239969 INFO nova.virt.libvirt.driver [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Deleting instance files /var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef_del
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.869 239969 INFO nova.virt.libvirt.driver [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Deletion of /var/lib/nova/instances/8adc68cc-08d4-4743-a049-2ebd5b030fef_del complete
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.930 239969 INFO nova.compute.manager [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Took 2.12 seconds to destroy the instance on the hypervisor.
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.931 239969 DEBUG oslo.service.loopingcall [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.931 239969 DEBUG nova.compute.manager [-] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:10:39 compute-0 nova_compute[239965]: 2026-01-26 16:10:39.931 239969 DEBUG nova.network.neutron [-] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:10:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Jan 26 16:10:40 compute-0 nova_compute[239965]: 2026-01-26 16:10:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:41 compute-0 nova_compute[239965]: 2026-01-26 16:10:41.302 239969 DEBUG nova.network.neutron [-] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:41 compute-0 nova_compute[239965]: 2026-01-26 16:10:41.319 239969 INFO nova.compute.manager [-] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Took 1.39 seconds to deallocate network for instance.
Jan 26 16:10:41 compute-0 nova_compute[239965]: 2026-01-26 16:10:41.404 239969 DEBUG oslo_concurrency.lockutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:41 compute-0 nova_compute[239965]: 2026-01-26 16:10:41.404 239969 DEBUG oslo_concurrency.lockutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:41 compute-0 ceph-mon[75140]: pgmap v1767: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.0 MiB/s wr, 215 op/s
Jan 26 16:10:41 compute-0 nova_compute[239965]: 2026-01-26 16:10:41.475 239969 DEBUG oslo_concurrency.processutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:41 compute-0 nova_compute[239965]: 2026-01-26 16:10:41.621 239969 DEBUG nova.compute.manager [req-e1a84160-d723-45c5-9f85-2539e910c7d8 req-2862ae44-5253-4777-8c88-a5a8fcd0ebf1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Received event network-vif-deleted-8feda26e-e9fc-4f52-8036-f1c850cfd359 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:41 compute-0 nova_compute[239965]: 2026-01-26 16:10:41.706 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:10:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884424151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:42 compute-0 nova_compute[239965]: 2026-01-26 16:10:42.071 239969 DEBUG oslo_concurrency.processutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:42 compute-0 nova_compute[239965]: 2026-01-26 16:10:42.076 239969 DEBUG nova.compute.provider_tree [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:10:42 compute-0 nova_compute[239965]: 2026-01-26 16:10:42.100 239969 DEBUG nova.scheduler.client.report [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:10:42 compute-0 nova_compute[239965]: 2026-01-26 16:10:42.123 239969 DEBUG oslo_concurrency.lockutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:42 compute-0 nova_compute[239965]: 2026-01-26 16:10:42.172 239969 INFO nova.scheduler.client.report [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Deleted allocations for instance 8adc68cc-08d4-4743-a049-2ebd5b030fef
Jan 26 16:10:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 268 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 239 op/s
Jan 26 16:10:42 compute-0 nova_compute[239965]: 2026-01-26 16:10:42.239 239969 DEBUG oslo_concurrency.lockutils [None req-b3535779-f41d-45e3-93cc-7f2ab34dd546 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "8adc68cc-08d4-4743-a049-2ebd5b030fef" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/884424151' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:42 compute-0 ceph-mon[75140]: pgmap v1768: 305 pgs: 305 active+clean; 268 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.1 MiB/s wr, 239 op/s
Jan 26 16:10:42 compute-0 nova_compute[239965]: 2026-01-26 16:10:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:42 compute-0 nova_compute[239965]: 2026-01-26 16:10:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:42 compute-0 nova_compute[239965]: 2026-01-26 16:10:42.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:10:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:43 compute-0 nova_compute[239965]: 2026-01-26 16:10:43.071 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:43 compute-0 nova_compute[239965]: 2026-01-26 16:10:43.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 246 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 237 op/s
Jan 26 16:10:44 compute-0 nova_compute[239965]: 2026-01-26 16:10:44.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:44 compute-0 nova_compute[239965]: 2026-01-26 16:10:44.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:10:44 compute-0 nova_compute[239965]: 2026-01-26 16:10:44.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:10:44 compute-0 nova_compute[239965]: 2026-01-26 16:10:44.971 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:44 compute-0 nova_compute[239965]: 2026-01-26 16:10:44.971 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:44 compute-0 nova_compute[239965]: 2026-01-26 16:10:44.972 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:10:44 compute-0 nova_compute[239965]: 2026-01-26 16:10:44.972 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7d660383-738b-422b-af54-1a15464bf8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:45 compute-0 nova_compute[239965]: 2026-01-26 16:10:45.167 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:45 compute-0 nova_compute[239965]: 2026-01-26 16:10:45.168 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:45 compute-0 nova_compute[239965]: 2026-01-26 16:10:45.185 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:10:45 compute-0 nova_compute[239965]: 2026-01-26 16:10:45.258 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:45 compute-0 nova_compute[239965]: 2026-01-26 16:10:45.259 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:45 compute-0 nova_compute[239965]: 2026-01-26 16:10:45.267 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:10:45 compute-0 nova_compute[239965]: 2026-01-26 16:10:45.267 239969 INFO nova.compute.claims [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:10:45 compute-0 ceph-mon[75140]: pgmap v1769: 305 pgs: 305 active+clean; 246 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 237 op/s
Jan 26 16:10:45 compute-0 podman[330494]: 2026-01-26 16:10:45.374787502 +0000 UTC m=+0.058213045 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:10:45 compute-0 podman[330495]: 2026-01-26 16:10:45.41270803 +0000 UTC m=+0.096020151 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:10:45 compute-0 nova_compute[239965]: 2026-01-26 16:10:45.416 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:10:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831472440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.017 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.022 239969 DEBUG nova.compute.provider_tree [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.040 239969 DEBUG nova.scheduler.client.report [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.059 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.060 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.105 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.105 239969 DEBUG nova.network.neutron [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.123 239969 INFO nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.140 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:10:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 246 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 194 op/s
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.217 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.218 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.219 239969 INFO nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Creating image(s)
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.238 239969 DEBUG nova.storage.rbd_utils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b16e73f5-93cb-4c68-8f59-291227c8aa43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.261 239969 DEBUG nova.storage.rbd_utils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b16e73f5-93cb-4c68-8f59-291227c8aa43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.295 239969 DEBUG nova.storage.rbd_utils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b16e73f5-93cb-4c68-8f59-291227c8aa43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.298 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.329 239969 DEBUG nova.policy [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:10:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1831472440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.369 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.370 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.370 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.371 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.392 239969 DEBUG nova.storage.rbd_utils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b16e73f5-93cb-4c68-8f59-291227c8aa43_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.395 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b16e73f5-93cb-4c68-8f59-291227c8aa43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.668 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b16e73f5-93cb-4c68-8f59-291227c8aa43_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.729 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.736 239969 DEBUG nova.storage.rbd_utils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image b16e73f5-93cb-4c68-8f59-291227c8aa43_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.806 239969 DEBUG nova.objects.instance [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid b16e73f5-93cb-4c68-8f59-291227c8aa43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.820 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.821 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Ensure instance console log exists: /var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.821 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.822 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:46 compute-0 nova_compute[239965]: 2026-01-26 16:10:46.822 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:47 compute-0 ceph-mon[75140]: pgmap v1770: 305 pgs: 305 active+clean; 246 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.9 MiB/s wr, 194 op/s
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.764 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updating instance_info_cache with network_info: [{"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.790 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.790 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.791 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.791 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.809 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.810 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.810 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.810 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:10:47 compute-0 nova_compute[239965]: 2026-01-26 16:10:47.810 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 276 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 170 op/s
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.337 239969 DEBUG nova.network.neutron [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Successfully created port: 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:10:48 compute-0 ceph-mon[75140]: pgmap v1771: 305 pgs: 305 active+clean; 276 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.4 MiB/s wr, 170 op/s
Jan 26 16:10:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:10:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3835421014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.373 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.469 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.469 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.473 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.473 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.657 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.658 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3390MB free_disk=59.883053357712924GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.658 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.659 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:10:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2794532057' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:10:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:10:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2794532057' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.735 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 7d660383-738b-422b-af54-1a15464bf8e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.736 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.736 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance b16e73f5-93cb-4c68-8f59-291227c8aa43 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.736 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.736 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:10:48 compute-0 nova_compute[239965]: 2026-01-26 16:10:48.828 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017541408104116256 of space, bias 1.0, pg target 0.5262422431234877 quantized to 32 (current 32)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010169049045835737 of space, bias 1.0, pg target 0.3050714713750721 quantized to 32 (current 32)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.838765694743109e-07 of space, bias 4.0, pg target 0.0009406518833691731 quantized to 16 (current 16)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:10:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:10:49 compute-0 nova_compute[239965]: 2026-01-26 16:10:49.036 239969 DEBUG nova.network.neutron [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Successfully created port: f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:10:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3835421014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2794532057' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:10:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2794532057' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:10:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:10:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2997004526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:49 compute-0 nova_compute[239965]: 2026-01-26 16:10:49.435 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:49 compute-0 nova_compute[239965]: 2026-01-26 16:10:49.442 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:10:49 compute-0 nova_compute[239965]: 2026-01-26 16:10:49.458 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:10:49 compute-0 nova_compute[239965]: 2026-01-26 16:10:49.492 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:10:49 compute-0 nova_compute[239965]: 2026-01-26 16:10:49.493 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:49 compute-0 nova_compute[239965]: 2026-01-26 16:10:49.909 239969 DEBUG nova.network.neutron [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Successfully updated port: 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:10:50 compute-0 nova_compute[239965]: 2026-01-26 16:10:50.045 239969 DEBUG nova.compute.manager [req-29d8c75a-a8eb-4f87-8ac7-b587442d1ddd req-0467bb6f-b502-4a7c-a049-f2d91db201b5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-changed-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:50 compute-0 nova_compute[239965]: 2026-01-26 16:10:50.046 239969 DEBUG nova.compute.manager [req-29d8c75a-a8eb-4f87-8ac7-b587442d1ddd req-0467bb6f-b502-4a7c-a049-f2d91db201b5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Refreshing instance network info cache due to event network-changed-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:10:50 compute-0 nova_compute[239965]: 2026-01-26 16:10:50.046 239969 DEBUG oslo_concurrency.lockutils [req-29d8c75a-a8eb-4f87-8ac7-b587442d1ddd req-0467bb6f-b502-4a7c-a049-f2d91db201b5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:50 compute-0 nova_compute[239965]: 2026-01-26 16:10:50.046 239969 DEBUG oslo_concurrency.lockutils [req-29d8c75a-a8eb-4f87-8ac7-b587442d1ddd req-0467bb6f-b502-4a7c-a049-f2d91db201b5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:50 compute-0 nova_compute[239965]: 2026-01-26 16:10:50.047 239969 DEBUG nova.network.neutron [req-29d8c75a-a8eb-4f87-8ac7-b587442d1ddd req-0467bb6f-b502-4a7c-a049-f2d91db201b5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Refreshing network info cache for port 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:10:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 276 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 1.2 MiB/s wr, 52 op/s
Jan 26 16:10:50 compute-0 nova_compute[239965]: 2026-01-26 16:10:50.268 239969 DEBUG nova.network.neutron [req-29d8c75a-a8eb-4f87-8ac7-b587442d1ddd req-0467bb6f-b502-4a7c-a049-f2d91db201b5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:10:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2997004526' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:10:50 compute-0 ceph-mon[75140]: pgmap v1772: 305 pgs: 305 active+clean; 276 MiB data, 822 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 1.2 MiB/s wr, 52 op/s
Jan 26 16:10:50 compute-0 nova_compute[239965]: 2026-01-26 16:10:50.897 239969 DEBUG nova.network.neutron [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Successfully updated port: f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:10:50 compute-0 nova_compute[239965]: 2026-01-26 16:10:50.910 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:51 compute-0 nova_compute[239965]: 2026-01-26 16:10:51.028 239969 DEBUG nova.network.neutron [req-29d8c75a-a8eb-4f87-8ac7-b587442d1ddd req-0467bb6f-b502-4a7c-a049-f2d91db201b5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:51 compute-0 nova_compute[239965]: 2026-01-26 16:10:51.049 239969 DEBUG oslo_concurrency.lockutils [req-29d8c75a-a8eb-4f87-8ac7-b587442d1ddd req-0467bb6f-b502-4a7c-a049-f2d91db201b5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:51 compute-0 nova_compute[239965]: 2026-01-26 16:10:51.050 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:51 compute-0 nova_compute[239965]: 2026-01-26 16:10:51.050 239969 DEBUG nova.network.neutron [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:10:51 compute-0 nova_compute[239965]: 2026-01-26 16:10:51.336 239969 DEBUG nova.network.neutron [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:10:51 compute-0 nova_compute[239965]: 2026-01-26 16:10:51.713 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Jan 26 16:10:52 compute-0 nova_compute[239965]: 2026-01-26 16:10:52.235 239969 DEBUG nova.compute.manager [req-3c5ed436-0a93-49ad-a4b9-fdaa9d2783e5 req-47ee21dd-6c61-461a-9fa5-ffe95f3a6f27 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-changed-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:52 compute-0 nova_compute[239965]: 2026-01-26 16:10:52.235 239969 DEBUG nova.compute.manager [req-3c5ed436-0a93-49ad-a4b9-fdaa9d2783e5 req-47ee21dd-6c61-461a-9fa5-ffe95f3a6f27 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Refreshing instance network info cache due to event network-changed-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:10:52 compute-0 nova_compute[239965]: 2026-01-26 16:10:52.236 239969 DEBUG oslo_concurrency.lockutils [req-3c5ed436-0a93-49ad-a4b9-fdaa9d2783e5 req-47ee21dd-6c61-461a-9fa5-ffe95f3a6f27 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:10:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:52 compute-0 nova_compute[239965]: 2026-01-26 16:10:52.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:52.918 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:10:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:52.919 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.045 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443838.0442004, 8adc68cc-08d4-4743-a049-2ebd5b030fef => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.046 239969 INFO nova.compute.manager [-] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] VM Stopped (Lifecycle Event)
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.068 239969 DEBUG nova.compute.manager [None req-af7037fd-e092-4aaf-8bc5-8e0e503aa5cb - - - - - -] [instance: 8adc68cc-08d4-4743-a049-2ebd5b030fef] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.074 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:53 compute-0 ceph-mon[75140]: pgmap v1773: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.578 239969 DEBUG nova.network.neutron [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updating instance_info_cache with network_info: [{"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.610 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.610 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Instance network_info: |[{"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.611 239969 DEBUG oslo_concurrency.lockutils [req-3c5ed436-0a93-49ad-a4b9-fdaa9d2783e5 req-47ee21dd-6c61-461a-9fa5-ffe95f3a6f27 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.611 239969 DEBUG nova.network.neutron [req-3c5ed436-0a93-49ad-a4b9-fdaa9d2783e5 req-47ee21dd-6c61-461a-9fa5-ffe95f3a6f27 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Refreshing network info cache for port f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.615 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Start _get_guest_xml network_info=[{"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.621 239969 WARNING nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.631 239969 DEBUG nova.virt.libvirt.host [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.633 239969 DEBUG nova.virt.libvirt.host [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.639 239969 DEBUG nova.virt.libvirt.host [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.639 239969 DEBUG nova.virt.libvirt.host [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.640 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.640 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.641 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.641 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.641 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.641 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.642 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.642 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.643 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.643 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.643 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.644 239969 DEBUG nova.virt.hardware [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:10:53 compute-0 nova_compute[239965]: 2026-01-26 16:10:53.648 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 26 16:10:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:10:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338236455' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.250 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.271 239969 DEBUG nova.storage.rbd_utils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b16e73f5-93cb-4c68-8f59-291227c8aa43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.276 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:10:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1963846853' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.838 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.840 239969 DEBUG nova.virt.libvirt.vif [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1472034940',display_name='tempest-TestGettingAddress-server-1472034940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1472034940',id=108,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxdshte6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:46Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b16e73f5-93cb-4c68-8f59-291227c8aa43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.841 239969 DEBUG nova.network.os_vif_util [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.841 239969 DEBUG nova.network.os_vif_util [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:1a:f6,bridge_name='br-int',has_traffic_filtering=True,id=95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d6c1b0-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.842 239969 DEBUG nova.virt.libvirt.vif [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1472034940',display_name='tempest-TestGettingAddress-server-1472034940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1472034940',id=108,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxdshte6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:46Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b16e73f5-93cb-4c68-8f59-291227c8aa43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.842 239969 DEBUG nova.network.os_vif_util [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.843 239969 DEBUG nova.network.os_vif_util [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:a9,bridge_name='br-int',has_traffic_filtering=True,id=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf07a4c1a-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.843 239969 DEBUG nova.objects.instance [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid b16e73f5-93cb-4c68-8f59-291227c8aa43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.858 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <uuid>b16e73f5-93cb-4c68-8f59-291227c8aa43</uuid>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <name>instance-0000006c</name>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-1472034940</nova:name>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:10:53</nova:creationTime>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <nova:port uuid="95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5">
Jan 26 16:10:54 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <nova:port uuid="f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4">
Jan 26 16:10:54 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fef6:78a9" ipVersion="6"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <system>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <entry name="serial">b16e73f5-93cb-4c68-8f59-291227c8aa43</entry>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <entry name="uuid">b16e73f5-93cb-4c68-8f59-291227c8aa43</entry>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </system>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <os>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   </os>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <features>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   </features>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b16e73f5-93cb-4c68-8f59-291227c8aa43_disk">
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       </source>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b16e73f5-93cb-4c68-8f59-291227c8aa43_disk.config">
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       </source>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:10:54 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:98:1a:f6"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <target dev="tap95d6c1b0-dd"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:f6:78:a9"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <target dev="tapf07a4c1a-26"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43/console.log" append="off"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <video>
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </video>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:10:54 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:10:54 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:10:54 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:10:54 compute-0 nova_compute[239965]: </domain>
Jan 26 16:10:54 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.858 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Preparing to wait for external event network-vif-plugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.858 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.858 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.859 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.859 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Preparing to wait for external event network-vif-plugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.859 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.859 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.859 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.860 239969 DEBUG nova.virt.libvirt.vif [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1472034940',display_name='tempest-TestGettingAddress-server-1472034940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1472034940',id=108,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxdshte6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:46Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b16e73f5-93cb-4c68-8f59-291227c8aa43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.860 239969 DEBUG nova.network.os_vif_util [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.861 239969 DEBUG nova.network.os_vif_util [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:1a:f6,bridge_name='br-int',has_traffic_filtering=True,id=95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d6c1b0-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.861 239969 DEBUG os_vif [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:1a:f6,bridge_name='br-int',has_traffic_filtering=True,id=95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d6c1b0-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.861 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.862 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.862 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.865 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.865 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95d6c1b0-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.865 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95d6c1b0-dd, col_values=(('external_ids', {'iface-id': '95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:1a:f6', 'vm-uuid': 'b16e73f5-93cb-4c68-8f59-291227c8aa43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:54 compute-0 ceph-mon[75140]: pgmap v1774: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 40 op/s
Jan 26 16:10:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2338236455' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.908 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:54 compute-0 NetworkManager[48954]: <info>  [1769443854.9095] manager: (tap95d6c1b0-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.917 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.918 239969 INFO os_vif [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:1a:f6,bridge_name='br-int',has_traffic_filtering=True,id=95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d6c1b0-dd')
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.919 239969 DEBUG nova.virt.libvirt.vif [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:10:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1472034940',display_name='tempest-TestGettingAddress-server-1472034940',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1472034940',id=108,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxdshte6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:10:46Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b16e73f5-93cb-4c68-8f59-291227c8aa43,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.919 239969 DEBUG nova.network.os_vif_util [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.919 239969 DEBUG nova.network.os_vif_util [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:a9,bridge_name='br-int',has_traffic_filtering=True,id=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf07a4c1a-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.920 239969 DEBUG os_vif [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:a9,bridge_name='br-int',has_traffic_filtering=True,id=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf07a4c1a-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.920 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.920 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.921 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.922 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf07a4c1a-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.923 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf07a4c1a-26, col_values=(('external_ids', {'iface-id': 'f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:78:a9', 'vm-uuid': 'b16e73f5-93cb-4c68-8f59-291227c8aa43'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:54 compute-0 NetworkManager[48954]: <info>  [1769443854.9249] manager: (tapf07a4c1a-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/432)
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.933 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:54 compute-0 nova_compute[239965]: 2026-01-26 16:10:54.934 239969 INFO os_vif [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:a9,bridge_name='br-int',has_traffic_filtering=True,id=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf07a4c1a-26')
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.005 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.006 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.006 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:98:1a:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.006 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:f6:78:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.007 239969 INFO nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Using config drive
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.031 239969 DEBUG nova.storage.rbd_utils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b16e73f5-93cb-4c68-8f59-291227c8aa43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.512 239969 DEBUG nova.network.neutron [req-3c5ed436-0a93-49ad-a4b9-fdaa9d2783e5 req-47ee21dd-6c61-461a-9fa5-ffe95f3a6f27 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updated VIF entry in instance network info cache for port f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.513 239969 DEBUG nova.network.neutron [req-3c5ed436-0a93-49ad-a4b9-fdaa9d2783e5 req-47ee21dd-6c61-461a-9fa5-ffe95f3a6f27 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updating instance_info_cache with network_info: [{"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.531 239969 DEBUG oslo_concurrency.lockutils [req-3c5ed436-0a93-49ad-a4b9-fdaa9d2783e5 req-47ee21dd-6c61-461a-9fa5-ffe95f3a6f27 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:10:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1963846853' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.946 239969 INFO nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Creating config drive at /var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43/disk.config
Jan 26 16:10:55 compute-0 nova_compute[239965]: 2026-01-26 16:10:55.953 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpplm7jp9y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.093 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpplm7jp9y" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.116 239969 DEBUG nova.storage.rbd_utils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b16e73f5-93cb-4c68-8f59-291227c8aa43_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.120 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43/disk.config b16e73f5-93cb-4c68-8f59-291227c8aa43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:10:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.212 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.254 239969 DEBUG oslo_concurrency.processutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43/disk.config b16e73f5-93cb-4c68-8f59-291227c8aa43_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.255 239969 INFO nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Deleting local config drive /var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43/disk.config because it was imported into RBD.
Jan 26 16:10:56 compute-0 kernel: tap95d6c1b0-dd: entered promiscuous mode
Jan 26 16:10:56 compute-0 NetworkManager[48954]: <info>  [1769443856.3330] manager: (tap95d6c1b0-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/433)
Jan 26 16:10:56 compute-0 ovn_controller[146046]: 2026-01-26T16:10:56Z|01040|binding|INFO|Claiming lport 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 for this chassis.
Jan 26 16:10:56 compute-0 ovn_controller[146046]: 2026-01-26T16:10:56Z|01041|binding|INFO|95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5: Claiming fa:16:3e:98:1a:f6 10.100.0.5
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.335 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.344 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:1a:f6 10.100.0.5'], port_security=['fa:16:3e:98:1a:f6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b16e73f5-93cb-4c68-8f59-291227c8aa43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-065353ad-2d91-4f6b-899c-49c1351ef25d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64fb00c1-6bf2-4adc-ad05-b75da367e5b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9ceb6e3-7336-40c9-9bd9-e17d330deadf, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.346 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 in datapath 065353ad-2d91-4f6b-899c-49c1351ef25d bound to our chassis
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.348 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 065353ad-2d91-4f6b-899c-49c1351ef25d
Jan 26 16:10:56 compute-0 ovn_controller[146046]: 2026-01-26T16:10:56Z|01042|binding|INFO|Setting lport 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 ovn-installed in OVS
Jan 26 16:10:56 compute-0 ovn_controller[146046]: 2026-01-26T16:10:56Z|01043|binding|INFO|Setting lport 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 up in Southbound
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.358 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 NetworkManager[48954]: <info>  [1769443856.3626] manager: (tapf07a4c1a-26): new Tun device (/org/freedesktop/NetworkManager/Devices/434)
Jan 26 16:10:56 compute-0 kernel: tapf07a4c1a-26: entered promiscuous mode
Jan 26 16:10:56 compute-0 systemd-udevd[330914]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:10:56 compute-0 systemd-udevd[330913]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:10:56 compute-0 ovn_controller[146046]: 2026-01-26T16:10:56Z|01044|binding|INFO|Claiming lport f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 for this chassis.
Jan 26 16:10:56 compute-0 ovn_controller[146046]: 2026-01-26T16:10:56Z|01045|binding|INFO|f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4: Claiming fa:16:3e:f6:78:a9 2001:db8::f816:3eff:fef6:78a9
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.374 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.373 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ceb748-388a-4c3c-aa97-e29cce9d17c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.375 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:78:a9 2001:db8::f816:3eff:fef6:78a9'], port_security=['fa:16:3e:f6:78:a9 2001:db8::f816:3eff:fef6:78a9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef6:78a9/64', 'neutron:device_id': 'b16e73f5-93cb-4c68-8f59-291227c8aa43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64fb00c1-6bf2-4adc-ad05-b75da367e5b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63bbc7aa-7f64-4a2d-a32f-7de8c6f03798, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:10:56 compute-0 NetworkManager[48954]: <info>  [1769443856.3822] device (tap95d6c1b0-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:10:56 compute-0 NetworkManager[48954]: <info>  [1769443856.3830] device (tap95d6c1b0-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:10:56 compute-0 NetworkManager[48954]: <info>  [1769443856.3835] device (tapf07a4c1a-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:10:56 compute-0 NetworkManager[48954]: <info>  [1769443856.3840] device (tapf07a4c1a-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:10:56 compute-0 ovn_controller[146046]: 2026-01-26T16:10:56Z|01046|binding|INFO|Setting lport f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 ovn-installed in OVS
Jan 26 16:10:56 compute-0 ovn_controller[146046]: 2026-01-26T16:10:56Z|01047|binding|INFO|Setting lport f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 up in Southbound
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.388 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.390 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.404 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[887d4707-fcb7-4bf9-b52b-b4db44d3cd23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.407 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2f9f70-06d9-4498-84eb-d53a29d226a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 systemd-machined[208061]: New machine qemu-133-instance-0000006c.
Jan 26 16:10:56 compute-0 systemd[1]: Started Virtual Machine qemu-133-instance-0000006c.
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.440 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d85ed52c-c898-46fa-9661-f90dd5054ec8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.458 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a039c055-2a0d-43b3-bd44-485913653e2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap065353ad-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530788, 'reachable_time': 19462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330928, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.480 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e6024705-da44-4141-a2b7-6891d3870187]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap065353ad-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530799, 'tstamp': 530799}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330932, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap065353ad-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530803, 'tstamp': 530803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330932, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.481 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap065353ad-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.483 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.485 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap065353ad-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.485 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.486 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap065353ad-20, col_values=(('external_ids', {'iface-id': '37d312e5-a280-4644-9ef2-86d934dd9b95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.486 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.488 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 in datapath 1f659c36-b294-4de4-81d7-0a221d4d63ba unbound from our chassis
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.489 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f659c36-b294-4de4-81d7-0a221d4d63ba
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.503 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed2b101-bd22-41b2-8708-eb5921d501cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.535 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[28bc847f-f1ed-4a39-afa3-439eb48f7a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.538 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a619cb4f-f2f7-42a4-86f6-b1ffc498019a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.571 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7b321614-0d37-4ad3-8ee1-d56c039112b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.589 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c70e755-71ef-46d9-843d-4fa0884aa4ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f659c36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:8f:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 18, 'tx_packets': 4, 'rx_bytes': 1572, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530881, 'reachable_time': 44944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 18, 'inoctets': 1320, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 18, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1320, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 18, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 330939, 'error': None, 'target': 'ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.604 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe24034-f4c1-4f21-9dd6-2e4cd76cc0fa]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f659c36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530896, 'tstamp': 530896}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 330947, 'error': None, 'target': 'ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.606 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f659c36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.607 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.609 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.609 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f659c36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.609 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.609 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f659c36-b0, col_values=(('external_ids', {'iface-id': 'a41f6fa4-45ca-4584-9219-78408cd0f9f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:56.610 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.615 239969 DEBUG nova.compute.manager [req-16657305-d141-4c1c-9e83-9dccc88a625f req-60bd77dd-edf6-4c80-9d2a-913659c198e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-plugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.616 239969 DEBUG oslo_concurrency.lockutils [req-16657305-d141-4c1c-9e83-9dccc88a625f req-60bd77dd-edf6-4c80-9d2a-913659c198e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.616 239969 DEBUG oslo_concurrency.lockutils [req-16657305-d141-4c1c-9e83-9dccc88a625f req-60bd77dd-edf6-4c80-9d2a-913659c198e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.617 239969 DEBUG oslo_concurrency.lockutils [req-16657305-d141-4c1c-9e83-9dccc88a625f req-60bd77dd-edf6-4c80-9d2a-913659c198e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.617 239969 DEBUG nova.compute.manager [req-16657305-d141-4c1c-9e83-9dccc88a625f req-60bd77dd-edf6-4c80-9d2a-913659c198e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Processing event network-vif-plugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.702 239969 DEBUG nova.compute.manager [req-b66449f7-2243-43c5-9244-7ea8a065b00c req-89733e4b-189f-49b7-9361-81132a444c4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-plugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.703 239969 DEBUG oslo_concurrency.lockutils [req-b66449f7-2243-43c5-9244-7ea8a065b00c req-89733e4b-189f-49b7-9361-81132a444c4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.703 239969 DEBUG oslo_concurrency.lockutils [req-b66449f7-2243-43c5-9244-7ea8a065b00c req-89733e4b-189f-49b7-9361-81132a444c4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.703 239969 DEBUG oslo_concurrency.lockutils [req-b66449f7-2243-43c5-9244-7ea8a065b00c req-89733e4b-189f-49b7-9361-81132a444c4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.703 239969 DEBUG nova.compute.manager [req-b66449f7-2243-43c5-9244-7ea8a065b00c req-89733e4b-189f-49b7-9361-81132a444c4d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Processing event network-vif-plugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.712 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.754 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.754 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443856.753568, b16e73f5-93cb-4c68-8f59-291227c8aa43 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.755 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] VM Started (Lifecycle Event)
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.760 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.763 239969 INFO nova.virt.libvirt.driver [-] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Instance spawned successfully.
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.764 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.775 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.778 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.787 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.788 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.788 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.789 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.789 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.789 239969 DEBUG nova.virt.libvirt.driver [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.796 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.796 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443856.7558541, b16e73f5-93cb-4c68-8f59-291227c8aa43 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.796 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] VM Paused (Lifecycle Event)
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.821 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.824 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443856.759134, b16e73f5-93cb-4c68-8f59-291227c8aa43 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.824 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] VM Resumed (Lifecycle Event)
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.848 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.851 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.857 239969 INFO nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Took 10.64 seconds to spawn the instance on the hypervisor.
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.858 239969 DEBUG nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.868 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:10:56 compute-0 ceph-mon[75140]: pgmap v1775: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.917 239969 INFO nova.compute.manager [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Took 11.68 seconds to build instance.
Jan 26 16:10:56 compute-0 nova_compute[239965]: 2026-01-26 16:10:56.935 239969 DEBUG oslo_concurrency.lockutils [None req-e5dcf8bd-b960-4e05-895e-d5303e96a130 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:10:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:57.920 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:10:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 16:10:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:59.231 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:59.232 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:10:59.232 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:59 compute-0 ceph-mon[75140]: pgmap v1776: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.372 239969 DEBUG nova.compute.manager [req-80b5c847-153e-411f-8c55-b7468d1afe7e req-185be197-9695-4f26-af96-b2242ea3c301 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-plugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.372 239969 DEBUG oslo_concurrency.lockutils [req-80b5c847-153e-411f-8c55-b7468d1afe7e req-185be197-9695-4f26-af96-b2242ea3c301 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.373 239969 DEBUG oslo_concurrency.lockutils [req-80b5c847-153e-411f-8c55-b7468d1afe7e req-185be197-9695-4f26-af96-b2242ea3c301 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.373 239969 DEBUG oslo_concurrency.lockutils [req-80b5c847-153e-411f-8c55-b7468d1afe7e req-185be197-9695-4f26-af96-b2242ea3c301 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.373 239969 DEBUG nova.compute.manager [req-80b5c847-153e-411f-8c55-b7468d1afe7e req-185be197-9695-4f26-af96-b2242ea3c301 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] No waiting events found dispatching network-vif-plugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.373 239969 WARNING nova.compute.manager [req-80b5c847-153e-411f-8c55-b7468d1afe7e req-185be197-9695-4f26-af96-b2242ea3c301 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received unexpected event network-vif-plugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 for instance with vm_state active and task_state None.
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.497 239969 DEBUG nova.compute.manager [req-5e43e782-4d3d-41c2-8f3b-edf4e5416e80 req-e69ca936-8142-4247-bb6c-4b57e8438bfc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-plugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.497 239969 DEBUG oslo_concurrency.lockutils [req-5e43e782-4d3d-41c2-8f3b-edf4e5416e80 req-e69ca936-8142-4247-bb6c-4b57e8438bfc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.498 239969 DEBUG oslo_concurrency.lockutils [req-5e43e782-4d3d-41c2-8f3b-edf4e5416e80 req-e69ca936-8142-4247-bb6c-4b57e8438bfc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.498 239969 DEBUG oslo_concurrency.lockutils [req-5e43e782-4d3d-41c2-8f3b-edf4e5416e80 req-e69ca936-8142-4247-bb6c-4b57e8438bfc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.498 239969 DEBUG nova.compute.manager [req-5e43e782-4d3d-41c2-8f3b-edf4e5416e80 req-e69ca936-8142-4247-bb6c-4b57e8438bfc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] No waiting events found dispatching network-vif-plugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.498 239969 WARNING nova.compute.manager [req-5e43e782-4d3d-41c2-8f3b-edf4e5416e80 req-e69ca936-8142-4247-bb6c-4b57e8438bfc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received unexpected event network-vif-plugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 for instance with vm_state active and task_state None.
Jan 26 16:10:59 compute-0 nova_compute[239965]: 2026-01-26 16:10:59.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 641 KiB/s wr, 21 op/s
Jan 26 16:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:11:01 compute-0 ceph-mon[75140]: pgmap v1777: 305 pgs: 305 active+clean; 292 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 641 KiB/s wr, 21 op/s
Jan 26 16:11:01 compute-0 nova_compute[239965]: 2026-01-26 16:11:01.512 239969 DEBUG nova.compute.manager [req-eb1f5622-d72a-4247-b992-ff95131a1245 req-972ced37-c7b5-459b-9893-5533e5872d1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-changed-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:01 compute-0 nova_compute[239965]: 2026-01-26 16:11:01.512 239969 DEBUG nova.compute.manager [req-eb1f5622-d72a-4247-b992-ff95131a1245 req-972ced37-c7b5-459b-9893-5533e5872d1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Refreshing instance network info cache due to event network-changed-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:11:01 compute-0 nova_compute[239965]: 2026-01-26 16:11:01.513 239969 DEBUG oslo_concurrency.lockutils [req-eb1f5622-d72a-4247-b992-ff95131a1245 req-972ced37-c7b5-459b-9893-5533e5872d1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:11:01 compute-0 nova_compute[239965]: 2026-01-26 16:11:01.513 239969 DEBUG oslo_concurrency.lockutils [req-eb1f5622-d72a-4247-b992-ff95131a1245 req-972ced37-c7b5-459b-9893-5533e5872d1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:11:01 compute-0 nova_compute[239965]: 2026-01-26 16:11:01.513 239969 DEBUG nova.network.neutron [req-eb1f5622-d72a-4247-b992-ff95131a1245 req-972ced37-c7b5-459b-9893-5533e5872d1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Refreshing network info cache for port 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:11:01 compute-0 nova_compute[239965]: 2026-01-26 16:11:01.714 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 653 KiB/s wr, 60 op/s
Jan 26 16:11:02 compute-0 ceph-mon[75140]: pgmap v1778: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 653 KiB/s wr, 60 op/s
Jan 26 16:11:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:03 compute-0 nova_compute[239965]: 2026-01-26 16:11:03.489 239969 DEBUG nova.network.neutron [req-eb1f5622-d72a-4247-b992-ff95131a1245 req-972ced37-c7b5-459b-9893-5533e5872d1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updated VIF entry in instance network info cache for port 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:11:03 compute-0 nova_compute[239965]: 2026-01-26 16:11:03.490 239969 DEBUG nova.network.neutron [req-eb1f5622-d72a-4247-b992-ff95131a1245 req-972ced37-c7b5-459b-9893-5533e5872d1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updating instance_info_cache with network_info: [{"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:11:03 compute-0 nova_compute[239965]: 2026-01-26 16:11:03.518 239969 DEBUG oslo_concurrency.lockutils [req-eb1f5622-d72a-4247-b992-ff95131a1245 req-972ced37-c7b5-459b-9893-5533e5872d1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:11:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 26 16:11:04 compute-0 nova_compute[239965]: 2026-01-26 16:11:04.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:05 compute-0 ceph-mon[75140]: pgmap v1779: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Jan 26 16:11:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 26 16:11:06 compute-0 nova_compute[239965]: 2026-01-26 16:11:06.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:07 compute-0 ceph-mon[75140]: pgmap v1780: 305 pgs: 305 active+clean; 293 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 26 16:11:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Jan 26 16:11:08 compute-0 sshd-session[330985]: Invalid user ubuntu from 45.148.10.240 port 56042
Jan 26 16:11:08 compute-0 sshd-session[330985]: Connection closed by invalid user ubuntu 45.148.10.240 port 56042 [preauth]
Jan 26 16:11:09 compute-0 ceph-mon[75140]: pgmap v1781: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 74 op/s
Jan 26 16:11:09 compute-0 ovn_controller[146046]: 2026-01-26T16:11:09Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:1a:f6 10.100.0.5
Jan 26 16:11:09 compute-0 ovn_controller[146046]: 2026-01-26T16:11:09Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:1a:f6 10.100.0.5
Jan 26 16:11:09 compute-0 sudo[330989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:11:09 compute-0 sudo[330989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:11:09 compute-0 sudo[330989]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:09 compute-0 sudo[331014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:11:09 compute-0 sudo[331014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:11:09 compute-0 nova_compute[239965]: 2026-01-26 16:11:09.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 65 op/s
Jan 26 16:11:10 compute-0 sudo[331014]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:11:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:11:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:11:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:11:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:11:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:11:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:11:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:11:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:11:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:11:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:11:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:11:10 compute-0 sudo[331069]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:11:10 compute-0 sudo[331069]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:11:10 compute-0 sudo[331069]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:10 compute-0 sudo[331094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:11:10 compute-0 sudo[331094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:11:10 compute-0 podman[331131]: 2026-01-26 16:11:10.703481012 +0000 UTC m=+0.042088971 container create 62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:11:10 compute-0 systemd[1]: Started libpod-conmon-62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0.scope.
Jan 26 16:11:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:11:10 compute-0 podman[331131]: 2026-01-26 16:11:10.686043175 +0000 UTC m=+0.024651154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:11:10 compute-0 podman[331131]: 2026-01-26 16:11:10.790237084 +0000 UTC m=+0.128845063 container init 62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:11:10 compute-0 podman[331131]: 2026-01-26 16:11:10.798068695 +0000 UTC m=+0.136676654 container start 62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:11:10 compute-0 podman[331131]: 2026-01-26 16:11:10.801815197 +0000 UTC m=+0.140423156 container attach 62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 16:11:10 compute-0 systemd[1]: libpod-62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0.scope: Deactivated successfully.
Jan 26 16:11:10 compute-0 awesome_shamir[331148]: 167 167
Jan 26 16:11:10 compute-0 podman[331131]: 2026-01-26 16:11:10.805236711 +0000 UTC m=+0.143844670 container died 62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:11:10 compute-0 conmon[331148]: conmon 62dbda3df66cdf634a81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0.scope/container/memory.events
Jan 26 16:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-fadd1f75562da8ada15c5ace0e6cd321f7dcc92bfcc8b674479ad596c54454a6-merged.mount: Deactivated successfully.
Jan 26 16:11:10 compute-0 podman[331131]: 2026-01-26 16:11:10.873536471 +0000 UTC m=+0.212144430 container remove 62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shamir, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:11:10 compute-0 systemd[1]: libpod-conmon-62dbda3df66cdf634a81d0d8ea003a68feefe38a8d4233343f05ec8524462bf0.scope: Deactivated successfully.
Jan 26 16:11:11 compute-0 podman[331172]: 2026-01-26 16:11:11.075794599 +0000 UTC m=+0.043870493 container create f1078044f6837ff7073069bce1ff09cee2654a620111739b7339cf0145a79986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 16:11:11 compute-0 systemd[1]: Started libpod-conmon-f1078044f6837ff7073069bce1ff09cee2654a620111739b7339cf0145a79986.scope.
Jan 26 16:11:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cecdbcbfeec6a5fbe7852742745b771457e31f3e33fd9e447b6336e40d32b2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cecdbcbfeec6a5fbe7852742745b771457e31f3e33fd9e447b6336e40d32b2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cecdbcbfeec6a5fbe7852742745b771457e31f3e33fd9e447b6336e40d32b2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:11 compute-0 podman[331172]: 2026-01-26 16:11:11.057214985 +0000 UTC m=+0.025290889 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cecdbcbfeec6a5fbe7852742745b771457e31f3e33fd9e447b6336e40d32b2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8cecdbcbfeec6a5fbe7852742745b771457e31f3e33fd9e447b6336e40d32b2b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:11 compute-0 podman[331172]: 2026-01-26 16:11:11.16453213 +0000 UTC m=+0.132608044 container init f1078044f6837ff7073069bce1ff09cee2654a620111739b7339cf0145a79986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:11:11 compute-0 podman[331172]: 2026-01-26 16:11:11.175253202 +0000 UTC m=+0.143329096 container start f1078044f6837ff7073069bce1ff09cee2654a620111739b7339cf0145a79986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:11:11 compute-0 podman[331172]: 2026-01-26 16:11:11.182626433 +0000 UTC m=+0.150702357 container attach f1078044f6837ff7073069bce1ff09cee2654a620111739b7339cf0145a79986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:11:11 compute-0 ceph-mon[75140]: pgmap v1782: 305 pgs: 305 active+clean; 293 MiB data, 830 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 65 op/s
Jan 26 16:11:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:11:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:11:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:11:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:11:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:11:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:11:11 compute-0 sleepy_sanderson[331188]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:11:11 compute-0 sleepy_sanderson[331188]: --> All data devices are unavailable
Jan 26 16:11:11 compute-0 systemd[1]: libpod-f1078044f6837ff7073069bce1ff09cee2654a620111739b7339cf0145a79986.scope: Deactivated successfully.
Jan 26 16:11:11 compute-0 podman[331172]: 2026-01-26 16:11:11.652325683 +0000 UTC m=+0.620401607 container died f1078044f6837ff7073069bce1ff09cee2654a620111739b7339cf0145a79986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-8cecdbcbfeec6a5fbe7852742745b771457e31f3e33fd9e447b6336e40d32b2b-merged.mount: Deactivated successfully.
Jan 26 16:11:11 compute-0 podman[331172]: 2026-01-26 16:11:11.703263239 +0000 UTC m=+0.671339133 container remove f1078044f6837ff7073069bce1ff09cee2654a620111739b7339cf0145a79986 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:11:11 compute-0 systemd[1]: libpod-conmon-f1078044f6837ff7073069bce1ff09cee2654a620111739b7339cf0145a79986.scope: Deactivated successfully.
Jan 26 16:11:11 compute-0 sudo[331094]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:11 compute-0 nova_compute[239965]: 2026-01-26 16:11:11.794 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:11 compute-0 sudo[331222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:11:11 compute-0 sudo[331222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:11:11 compute-0 sudo[331222]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:11 compute-0 sudo[331247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:11:11 compute-0 sudo[331247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:11:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 318 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 26 16:11:12 compute-0 podman[331284]: 2026-01-26 16:11:12.217514828 +0000 UTC m=+0.055779306 container create 66c978faf9c8a8e135a9f42b5e11361ad32cf08da499734515e9bee048d38121 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 16:11:12 compute-0 systemd[1]: Started libpod-conmon-66c978faf9c8a8e135a9f42b5e11361ad32cf08da499734515e9bee048d38121.scope.
Jan 26 16:11:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:11:12 compute-0 podman[331284]: 2026-01-26 16:11:12.192874166 +0000 UTC m=+0.031138704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:11:12 compute-0 podman[331284]: 2026-01-26 16:11:12.299197247 +0000 UTC m=+0.137461745 container init 66c978faf9c8a8e135a9f42b5e11361ad32cf08da499734515e9bee048d38121 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:11:12 compute-0 podman[331284]: 2026-01-26 16:11:12.305351497 +0000 UTC m=+0.143615975 container start 66c978faf9c8a8e135a9f42b5e11361ad32cf08da499734515e9bee048d38121 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:11:12 compute-0 podman[331284]: 2026-01-26 16:11:12.308419792 +0000 UTC m=+0.146684270 container attach 66c978faf9c8a8e135a9f42b5e11361ad32cf08da499734515e9bee048d38121 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:11:12 compute-0 friendly_dubinsky[331301]: 167 167
Jan 26 16:11:12 compute-0 systemd[1]: libpod-66c978faf9c8a8e135a9f42b5e11361ad32cf08da499734515e9bee048d38121.scope: Deactivated successfully.
Jan 26 16:11:12 compute-0 podman[331284]: 2026-01-26 16:11:12.310519734 +0000 UTC m=+0.148784222 container died 66c978faf9c8a8e135a9f42b5e11361ad32cf08da499734515e9bee048d38121 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 16:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-11cf7d365be39248bb6ef9a27f5bced9f2ff9f0afae781274c94805229245f75-merged.mount: Deactivated successfully.
Jan 26 16:11:12 compute-0 podman[331284]: 2026-01-26 16:11:12.34718197 +0000 UTC m=+0.185446448 container remove 66c978faf9c8a8e135a9f42b5e11361ad32cf08da499734515e9bee048d38121 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_dubinsky, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:11:12 compute-0 systemd[1]: libpod-conmon-66c978faf9c8a8e135a9f42b5e11361ad32cf08da499734515e9bee048d38121.scope: Deactivated successfully.
Jan 26 16:11:12 compute-0 podman[331323]: 2026-01-26 16:11:12.543554454 +0000 UTC m=+0.043120316 container create 84a067b0af45feae3741114178e616129372db2f775ce48317899594aa7f7da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ganguly, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 16:11:12 compute-0 systemd[1]: Started libpod-conmon-84a067b0af45feae3741114178e616129372db2f775ce48317899594aa7f7da0.scope.
Jan 26 16:11:12 compute-0 podman[331323]: 2026-01-26 16:11:12.526385814 +0000 UTC m=+0.025951706 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:11:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6599587085fe9e7ce2f892b2750067e7c297b63b15926503924aad70055cc54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6599587085fe9e7ce2f892b2750067e7c297b63b15926503924aad70055cc54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6599587085fe9e7ce2f892b2750067e7c297b63b15926503924aad70055cc54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6599587085fe9e7ce2f892b2750067e7c297b63b15926503924aad70055cc54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:12 compute-0 podman[331323]: 2026-01-26 16:11:12.640778193 +0000 UTC m=+0.140344075 container init 84a067b0af45feae3741114178e616129372db2f775ce48317899594aa7f7da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ganguly, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:11:12 compute-0 podman[331323]: 2026-01-26 16:11:12.648286636 +0000 UTC m=+0.147852498 container start 84a067b0af45feae3741114178e616129372db2f775ce48317899594aa7f7da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:11:12 compute-0 podman[331323]: 2026-01-26 16:11:12.651660568 +0000 UTC m=+0.151226430 container attach 84a067b0af45feae3741114178e616129372db2f775ce48317899594aa7f7da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ganguly, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:11:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:12 compute-0 sad_ganguly[331339]: {
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:     "0": [
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:         {
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "devices": [
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "/dev/loop3"
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             ],
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_name": "ceph_lv0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_size": "21470642176",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "name": "ceph_lv0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "tags": {
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.cluster_name": "ceph",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.crush_device_class": "",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.encrypted": "0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.objectstore": "bluestore",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.osd_id": "0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.type": "block",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.vdo": "0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.with_tpm": "0"
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             },
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "type": "block",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "vg_name": "ceph_vg0"
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:         }
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:     ],
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:     "1": [
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:         {
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "devices": [
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "/dev/loop4"
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             ],
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_name": "ceph_lv1",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_size": "21470642176",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "name": "ceph_lv1",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "tags": {
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.cluster_name": "ceph",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.crush_device_class": "",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.encrypted": "0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.objectstore": "bluestore",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.osd_id": "1",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.type": "block",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.vdo": "0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.with_tpm": "0"
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             },
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "type": "block",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "vg_name": "ceph_vg1"
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:         }
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:     ],
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:     "2": [
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:         {
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "devices": [
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "/dev/loop5"
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             ],
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_name": "ceph_lv2",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_size": "21470642176",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "name": "ceph_lv2",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "tags": {
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.cluster_name": "ceph",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.crush_device_class": "",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.encrypted": "0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.objectstore": "bluestore",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.osd_id": "2",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.type": "block",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.vdo": "0",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:                 "ceph.with_tpm": "0"
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             },
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "type": "block",
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:             "vg_name": "ceph_vg2"
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:         }
Jan 26 16:11:12 compute-0 sad_ganguly[331339]:     ]
Jan 26 16:11:12 compute-0 sad_ganguly[331339]: }
Jan 26 16:11:12 compute-0 systemd[1]: libpod-84a067b0af45feae3741114178e616129372db2f775ce48317899594aa7f7da0.scope: Deactivated successfully.
Jan 26 16:11:12 compute-0 podman[331323]: 2026-01-26 16:11:12.979582521 +0000 UTC m=+0.479148383 container died 84a067b0af45feae3741114178e616129372db2f775ce48317899594aa7f7da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 16:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-c6599587085fe9e7ce2f892b2750067e7c297b63b15926503924aad70055cc54-merged.mount: Deactivated successfully.
Jan 26 16:11:13 compute-0 podman[331323]: 2026-01-26 16:11:13.024397767 +0000 UTC m=+0.523963629 container remove 84a067b0af45feae3741114178e616129372db2f775ce48317899594aa7f7da0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 26 16:11:13 compute-0 systemd[1]: libpod-conmon-84a067b0af45feae3741114178e616129372db2f775ce48317899594aa7f7da0.scope: Deactivated successfully.
Jan 26 16:11:13 compute-0 sudo[331247]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:13 compute-0 sudo[331360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:11:13 compute-0 sudo[331360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:11:13 compute-0 sudo[331360]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:13 compute-0 sudo[331385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:11:13 compute-0 sudo[331385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:11:13 compute-0 ceph-mon[75140]: pgmap v1783: 305 pgs: 305 active+clean; 318 MiB data, 870 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 122 op/s
Jan 26 16:11:13 compute-0 podman[331420]: 2026-01-26 16:11:13.454459237 +0000 UTC m=+0.039870437 container create c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 16:11:13 compute-0 systemd[1]: Started libpod-conmon-c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e.scope.
Jan 26 16:11:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:11:13 compute-0 podman[331420]: 2026-01-26 16:11:13.513240715 +0000 UTC m=+0.098651935 container init c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:11:13 compute-0 podman[331420]: 2026-01-26 16:11:13.524167883 +0000 UTC m=+0.109579083 container start c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:11:13 compute-0 podman[331420]: 2026-01-26 16:11:13.527860262 +0000 UTC m=+0.113271482 container attach c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 16:11:13 compute-0 nice_gould[331437]: 167 167
Jan 26 16:11:13 compute-0 systemd[1]: libpod-c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e.scope: Deactivated successfully.
Jan 26 16:11:13 compute-0 conmon[331437]: conmon c14218dcb7800cc408ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e.scope/container/memory.events
Jan 26 16:11:13 compute-0 podman[331420]: 2026-01-26 16:11:13.530362804 +0000 UTC m=+0.115774004 container died c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:11:13 compute-0 podman[331420]: 2026-01-26 16:11:13.438127557 +0000 UTC m=+0.023538777 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:11:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e6398d9c95339f7cfe2cb7d300849cc1ffb1835c315ff3409549ef879196da4-merged.mount: Deactivated successfully.
Jan 26 16:11:13 compute-0 podman[331420]: 2026-01-26 16:11:13.568575258 +0000 UTC m=+0.153986498 container remove c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:11:13 compute-0 systemd[1]: libpod-conmon-c14218dcb7800cc408cab451c96aa0339124f79bdea8b9927fc112a9dcdaf37e.scope: Deactivated successfully.
Jan 26 16:11:13 compute-0 podman[331460]: 2026-01-26 16:11:13.785267089 +0000 UTC m=+0.060218214 container create caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:11:13 compute-0 systemd[1]: Started libpod-conmon-caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6.scope.
Jan 26 16:11:13 compute-0 podman[331460]: 2026-01-26 16:11:13.752553759 +0000 UTC m=+0.027504974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:11:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36f103248fae78927406d3b3dff3565d706e551fcbbeb37c542c8c34f79dbbf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36f103248fae78927406d3b3dff3565d706e551fcbbeb37c542c8c34f79dbbf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36f103248fae78927406d3b3dff3565d706e551fcbbeb37c542c8c34f79dbbf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b36f103248fae78927406d3b3dff3565d706e551fcbbeb37c542c8c34f79dbbf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:11:13 compute-0 podman[331460]: 2026-01-26 16:11:13.904491515 +0000 UTC m=+0.179442660 container init caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_engelbart, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:11:13 compute-0 podman[331460]: 2026-01-26 16:11:13.913115826 +0000 UTC m=+0.188066961 container start caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_engelbart, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:11:13 compute-0 podman[331460]: 2026-01-26 16:11:13.917545304 +0000 UTC m=+0.192496439 container attach caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 16:11:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 325 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 26 16:11:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Jan 26 16:11:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Jan 26 16:11:14 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Jan 26 16:11:14 compute-0 lvm[331555]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:11:14 compute-0 lvm[331555]: VG ceph_vg0 finished
Jan 26 16:11:14 compute-0 lvm[331556]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:11:14 compute-0 lvm[331556]: VG ceph_vg1 finished
Jan 26 16:11:14 compute-0 lvm[331558]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:11:14 compute-0 lvm[331558]: VG ceph_vg2 finished
Jan 26 16:11:14 compute-0 lvm[331559]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:11:14 compute-0 lvm[331559]: VG ceph_vg0 finished
Jan 26 16:11:14 compute-0 strange_engelbart[331477]: {}
Jan 26 16:11:14 compute-0 systemd[1]: libpod-caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6.scope: Deactivated successfully.
Jan 26 16:11:14 compute-0 systemd[1]: libpod-caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6.scope: Consumed 1.436s CPU time.
Jan 26 16:11:14 compute-0 podman[331460]: 2026-01-26 16:11:14.877559879 +0000 UTC m=+1.152511004 container died caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:11:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b36f103248fae78927406d3b3dff3565d706e551fcbbeb37c542c8c34f79dbbf-merged.mount: Deactivated successfully.
Jan 26 16:11:14 compute-0 podman[331460]: 2026-01-26 16:11:14.922894949 +0000 UTC m=+1.197846074 container remove caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 26 16:11:14 compute-0 nova_compute[239965]: 2026-01-26 16:11:14.929 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:14 compute-0 systemd[1]: libpod-conmon-caf123688810fad54315d29b0bbe977a034fbdc274f62171d93eb8b91fead7a6.scope: Deactivated successfully.
Jan 26 16:11:14 compute-0 sudo[331385]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:11:14 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:11:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:11:14 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:11:15 compute-0 sudo[331574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:11:15 compute-0 sudo[331574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:11:15 compute-0 sudo[331574]: pam_unix(sudo:session): session closed for user root
Jan 26 16:11:15 compute-0 ceph-mon[75140]: pgmap v1784: 305 pgs: 305 active+clean; 325 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.1 MiB/s wr, 90 op/s
Jan 26 16:11:15 compute-0 ceph-mon[75140]: osdmap e272: 3 total, 3 up, 3 in
Jan 26 16:11:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:11:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:11:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Jan 26 16:11:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Jan 26 16:11:15 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Jan 26 16:11:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 325 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 3.2 MiB/s wr, 123 op/s
Jan 26 16:11:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Jan 26 16:11:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Jan 26 16:11:16 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Jan 26 16:11:16 compute-0 ceph-mon[75140]: osdmap e273: 3 total, 3 up, 3 in
Jan 26 16:11:16 compute-0 ceph-mon[75140]: pgmap v1787: 305 pgs: 305 active+clean; 325 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 509 KiB/s rd, 3.2 MiB/s wr, 123 op/s
Jan 26 16:11:16 compute-0 podman[331600]: 2026-01-26 16:11:16.438853701 +0000 UTC m=+0.112057162 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:11:16 compute-0 podman[331599]: 2026-01-26 16:11:16.438997825 +0000 UTC m=+0.111688363 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:11:16 compute-0 nova_compute[239965]: 2026-01-26 16:11:16.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:17 compute-0 ceph-mon[75140]: osdmap e274: 3 total, 3 up, 3 in
Jan 26 16:11:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e274 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.856643) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443877856693, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1629, "num_deletes": 250, "total_data_size": 2491005, "memory_usage": 2526064, "flush_reason": "Manual Compaction"}
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443877870360, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 1487921, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35659, "largest_seqno": 37287, "table_properties": {"data_size": 1482280, "index_size": 2717, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15306, "raw_average_key_size": 20, "raw_value_size": 1469660, "raw_average_value_size": 2013, "num_data_blocks": 123, "num_entries": 730, "num_filter_entries": 730, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769443726, "oldest_key_time": 1769443726, "file_creation_time": 1769443877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 13773 microseconds, and 7751 cpu microseconds.
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.870414) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 1487921 bytes OK
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.870436) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.871857) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.871873) EVENT_LOG_v1 {"time_micros": 1769443877871868, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.871891) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2483876, prev total WAL file size 2483876, number of live WAL files 2.
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.872913) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323535' seq:72057594037927935, type:22 .. '6D6772737461740031353036' seq:0, type:0; will stop at (end)
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(1453KB)], [77(10059KB)]
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443877872963, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 11789172, "oldest_snapshot_seqno": -1}
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6518 keys, 9343823 bytes, temperature: kUnknown
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443877920819, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 9343823, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9300304, "index_size": 26129, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16325, "raw_key_size": 163686, "raw_average_key_size": 25, "raw_value_size": 9183777, "raw_average_value_size": 1408, "num_data_blocks": 1058, "num_entries": 6518, "num_filter_entries": 6518, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769443877, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.921085) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 9343823 bytes
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.922221) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.1 rd, 195.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 9.8 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(14.2) write-amplify(6.3) OK, records in: 6964, records dropped: 446 output_compression: NoCompression
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.922236) EVENT_LOG_v1 {"time_micros": 1769443877922229, "job": 44, "event": "compaction_finished", "compaction_time_micros": 47901, "compaction_time_cpu_micros": 20890, "output_level": 6, "num_output_files": 1, "total_output_size": 9343823, "num_input_records": 6964, "num_output_records": 6518, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443877922554, "job": 44, "event": "table_file_deletion", "file_number": 79}
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443877924316, "job": 44, "event": "table_file_deletion", "file_number": 77}
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.872818) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.924378) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.924383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.924385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.924387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:17.924389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 326 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 389 KiB/s wr, 86 op/s
Jan 26 16:11:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Jan 26 16:11:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Jan 26 16:11:18 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Jan 26 16:11:18 compute-0 ceph-mon[75140]: pgmap v1789: 305 pgs: 305 active+clean; 326 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 72 KiB/s rd, 389 KiB/s wr, 86 op/s
Jan 26 16:11:18 compute-0 ceph-mon[75140]: osdmap e275: 3 total, 3 up, 3 in
Jan 26 16:11:19 compute-0 nova_compute[239965]: 2026-01-26 16:11:19.574 239969 INFO nova.compute.manager [None req-476fa7f9-a907-4c5a-b40e-335c49153de3 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Pausing
Jan 26 16:11:19 compute-0 nova_compute[239965]: 2026-01-26 16:11:19.574 239969 DEBUG nova.objects.instance [None req-476fa7f9-a907-4c5a-b40e-335c49153de3 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'flavor' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:11:19 compute-0 nova_compute[239965]: 2026-01-26 16:11:19.876 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443879.876036, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:11:19 compute-0 nova_compute[239965]: 2026-01-26 16:11:19.877 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Paused (Lifecycle Event)
Jan 26 16:11:19 compute-0 nova_compute[239965]: 2026-01-26 16:11:19.879 239969 DEBUG nova.compute.manager [None req-476fa7f9-a907-4c5a-b40e-335c49153de3 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:11:19 compute-0 nova_compute[239965]: 2026-01-26 16:11:19.917 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:11:19 compute-0 nova_compute[239965]: 2026-01-26 16:11:19.921 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:11:19 compute-0 nova_compute[239965]: 2026-01-26 16:11:19.948 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 26 16:11:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Jan 26 16:11:19 compute-0 nova_compute[239965]: 2026-01-26 16:11:19.965 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Jan 26 16:11:19 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Jan 26 16:11:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 326 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 41 KiB/s wr, 91 op/s
Jan 26 16:11:20 compute-0 nova_compute[239965]: 2026-01-26 16:11:20.882 239969 DEBUG oslo_concurrency.lockutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:20 compute-0 nova_compute[239965]: 2026-01-26 16:11:20.882 239969 DEBUG oslo_concurrency.lockutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:20 compute-0 nova_compute[239965]: 2026-01-26 16:11:20.882 239969 DEBUG oslo_concurrency.lockutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:20 compute-0 nova_compute[239965]: 2026-01-26 16:11:20.883 239969 DEBUG oslo_concurrency.lockutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:20 compute-0 nova_compute[239965]: 2026-01-26 16:11:20.883 239969 DEBUG oslo_concurrency.lockutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:20 compute-0 nova_compute[239965]: 2026-01-26 16:11:20.884 239969 INFO nova.compute.manager [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Terminating instance
Jan 26 16:11:20 compute-0 nova_compute[239965]: 2026-01-26 16:11:20.885 239969 DEBUG nova.compute.manager [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:11:20 compute-0 kernel: tap95d6c1b0-dd (unregistering): left promiscuous mode
Jan 26 16:11:20 compute-0 NetworkManager[48954]: <info>  [1769443880.9338] device (tap95d6c1b0-dd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:11:20 compute-0 ovn_controller[146046]: 2026-01-26T16:11:20Z|01048|binding|INFO|Releasing lport 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 from this chassis (sb_readonly=0)
Jan 26 16:11:20 compute-0 ovn_controller[146046]: 2026-01-26T16:11:20Z|01049|binding|INFO|Setting lport 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 down in Southbound
Jan 26 16:11:20 compute-0 nova_compute[239965]: 2026-01-26 16:11:20.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:20 compute-0 ovn_controller[146046]: 2026-01-26T16:11:20Z|01050|binding|INFO|Removing iface tap95d6c1b0-dd ovn-installed in OVS
Jan 26 16:11:20 compute-0 nova_compute[239965]: 2026-01-26 16:11:20.945 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:20.957 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:1a:f6 10.100.0.5'], port_security=['fa:16:3e:98:1a:f6 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b16e73f5-93cb-4c68-8f59-291227c8aa43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-065353ad-2d91-4f6b-899c-49c1351ef25d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64fb00c1-6bf2-4adc-ad05-b75da367e5b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9ceb6e3-7336-40c9-9bd9-e17d330deadf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:11:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:20.958 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 in datapath 065353ad-2d91-4f6b-899c-49c1351ef25d unbound from our chassis
Jan 26 16:11:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:20.959 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 065353ad-2d91-4f6b-899c-49c1351ef25d
Jan 26 16:11:20 compute-0 kernel: tapf07a4c1a-26 (unregistering): left promiscuous mode
Jan 26 16:11:20 compute-0 NetworkManager[48954]: <info>  [1769443880.9811] device (tapf07a4c1a-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:11:20 compute-0 ceph-mon[75140]: osdmap e276: 3 total, 3 up, 3 in
Jan 26 16:11:20 compute-0 ceph-mon[75140]: pgmap v1792: 305 pgs: 305 active+clean; 326 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 41 KiB/s wr, 91 op/s
Jan 26 16:11:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:20.985 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[08b23040-67a0-4de5-81eb-dd5fe1eb2401]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.017 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3b1ec773-7254-4a5b-980b-1725dab2c5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.019 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[487575cd-fec7-4620-b2bf-6d31a3968e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_controller[146046]: 2026-01-26T16:11:21Z|01051|binding|INFO|Releasing lport f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 from this chassis (sb_readonly=0)
Jan 26 16:11:21 compute-0 ovn_controller[146046]: 2026-01-26T16:11:21Z|01052|binding|INFO|Setting lport f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 down in Southbound
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.044 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 ovn_controller[146046]: 2026-01-26T16:11:21Z|01053|binding|INFO|Removing iface tapf07a4c1a-26 ovn-installed in OVS
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.045 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[43cd009e-ffcd-4ea3-8e1b-6c726197c8e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.047 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.061 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Jan 26 16:11:21 compute-0 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006c.scope: Consumed 13.811s CPU time.
Jan 26 16:11:21 compute-0 systemd-machined[208061]: Machine qemu-133-instance-0000006c terminated.
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.070 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e2a96b41-752b-4408-8d2f-e5068c54f747]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap065353ad-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:4f:ae'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530788, 'reachable_time': 19462, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331657, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.090 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da1a3f53-2787-4fe8-abee-f2b3dde88ab1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap065353ad-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530799, 'tstamp': 530799}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331658, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap065353ad-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530803, 'tstamp': 530803}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331658, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.093 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap065353ad-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.095 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.102 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.103 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap065353ad-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.103 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.104 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap065353ad-20, col_values=(('external_ids', {'iface-id': '37d312e5-a280-4644-9ef2-86d934dd9b95'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.104 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.112 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.119 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 NetworkManager[48954]: <info>  [1769443881.1292] manager: (tapf07a4c1a-26): new Tun device (/org/freedesktop/NetworkManager/Devices/435)
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.146 239969 INFO nova.virt.libvirt.driver [-] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Instance destroyed successfully.
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.147 239969 DEBUG nova.objects.instance [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid b16e73f5-93cb-4c68-8f59-291227c8aa43 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.251 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:78:a9 2001:db8::f816:3eff:fef6:78a9'], port_security=['fa:16:3e:f6:78:a9 2001:db8::f816:3eff:fef6:78a9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fef6:78a9/64', 'neutron:device_id': 'b16e73f5-93cb-4c68-8f59-291227c8aa43', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64fb00c1-6bf2-4adc-ad05-b75da367e5b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63bbc7aa-7f64-4a2d-a32f-7de8c6f03798, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.252 156105 INFO neutron.agent.ovn.metadata.agent [-] Port f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 in datapath 1f659c36-b294-4de4-81d7-0a221d4d63ba unbound from our chassis
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.253 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1f659c36-b294-4de4-81d7-0a221d4d63ba
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.268 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba68f4fb-a924-462a-80de-fd735f642e84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.297 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0966b8-59ac-431f-87f4-42c5fdbc32c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.300 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f35e94-cfa8-4e74-83e8-a2ddcd1f1954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.334 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8160c547-d3ea-4f67-864f-d542666ab42e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.351 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f82c6a8c-15cf-43af-9f50-a86a4c58d6b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1f659c36-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:8f:93'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2612, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 302], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530881, 'reachable_time': 44944, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2192, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2192, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331690, 'error': None, 'target': 'ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.369 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a271c9-62c7-4eb3-ae82-9b25e5fa71c1]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1f659c36-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 530896, 'tstamp': 530896}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 331691, 'error': None, 'target': 'ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.371 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f659c36-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.382 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1f659c36-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.382 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.382 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1f659c36-b0, col_values=(('external_ids', {'iface-id': 'a41f6fa4-45ca-4584-9219-78408cd0f9f7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:21 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:21.383 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.251 239969 DEBUG nova.compute.manager [req-635c8254-f5dd-4444-9655-fcb2124d18a6 req-a7f646e9-7ff6-4f66-9fb9-d88316fb9b00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-changed-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.391 239969 DEBUG nova.compute.manager [req-635c8254-f5dd-4444-9655-fcb2124d18a6 req-a7f646e9-7ff6-4f66-9fb9-d88316fb9b00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Refreshing instance network info cache due to event network-changed-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.391 239969 DEBUG oslo_concurrency.lockutils [req-635c8254-f5dd-4444-9655-fcb2124d18a6 req-a7f646e9-7ff6-4f66-9fb9-d88316fb9b00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.391 239969 DEBUG oslo_concurrency.lockutils [req-635c8254-f5dd-4444-9655-fcb2124d18a6 req-a7f646e9-7ff6-4f66-9fb9-d88316fb9b00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.391 239969 DEBUG nova.network.neutron [req-635c8254-f5dd-4444-9655-fcb2124d18a6 req-a7f646e9-7ff6-4f66-9fb9-d88316fb9b00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Refreshing network info cache for port 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.392 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.394 239969 DEBUG nova.virt.libvirt.vif [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:10:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1472034940',display_name='tempest-TestGettingAddress-server-1472034940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1472034940',id=108,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxdshte6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:10:56Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b16e73f5-93cb-4c68-8f59-291227c8aa43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.394 239969 DEBUG nova.network.os_vif_util [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.395 239969 DEBUG nova.network.os_vif_util [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:1a:f6,bridge_name='br-int',has_traffic_filtering=True,id=95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d6c1b0-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.395 239969 DEBUG os_vif [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:1a:f6,bridge_name='br-int',has_traffic_filtering=True,id=95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d6c1b0-dd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.397 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.397 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95d6c1b0-dd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.399 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.400 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.402 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.405 239969 INFO os_vif [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:1a:f6,bridge_name='br-int',has_traffic_filtering=True,id=95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95d6c1b0-dd')
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.405 239969 DEBUG nova.virt.libvirt.vif [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:10:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1472034940',display_name='tempest-TestGettingAddress-server-1472034940',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1472034940',id=108,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:10:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxdshte6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:10:56Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b16e73f5-93cb-4c68-8f59-291227c8aa43,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.406 239969 DEBUG nova.network.os_vif_util [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.406 239969 DEBUG nova.network.os_vif_util [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:a9,bridge_name='br-int',has_traffic_filtering=True,id=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf07a4c1a-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.406 239969 DEBUG os_vif [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:a9,bridge_name='br-int',has_traffic_filtering=True,id=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf07a4c1a-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.407 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.407 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf07a4c1a-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.408 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.411 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.413 239969 INFO os_vif [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:78:a9,bridge_name='br-int',has_traffic_filtering=True,id=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf07a4c1a-26')
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.436 239969 DEBUG nova.compute.manager [req-3c4c0782-fe0f-4c3c-8688-6c4dac5b8cb9 req-920deed7-3851-40b6-a294-03a4004612c1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-unplugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.437 239969 DEBUG oslo_concurrency.lockutils [req-3c4c0782-fe0f-4c3c-8688-6c4dac5b8cb9 req-920deed7-3851-40b6-a294-03a4004612c1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.437 239969 DEBUG oslo_concurrency.lockutils [req-3c4c0782-fe0f-4c3c-8688-6c4dac5b8cb9 req-920deed7-3851-40b6-a294-03a4004612c1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.437 239969 DEBUG oslo_concurrency.lockutils [req-3c4c0782-fe0f-4c3c-8688-6c4dac5b8cb9 req-920deed7-3851-40b6-a294-03a4004612c1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.437 239969 DEBUG nova.compute.manager [req-3c4c0782-fe0f-4c3c-8688-6c4dac5b8cb9 req-920deed7-3851-40b6-a294-03a4004612c1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] No waiting events found dispatching network-vif-unplugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.438 239969 DEBUG nova.compute.manager [req-3c4c0782-fe0f-4c3c-8688-6c4dac5b8cb9 req-920deed7-3851-40b6-a294-03a4004612c1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-unplugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:11:21 compute-0 nova_compute[239965]: 2026-01-26 16:11:21.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Jan 26 16:11:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Jan 26 16:11:22 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Jan 26 16:11:22 compute-0 nova_compute[239965]: 2026-01-26 16:11:22.091 239969 INFO nova.virt.libvirt.driver [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Deleting instance files /var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43_del
Jan 26 16:11:22 compute-0 nova_compute[239965]: 2026-01-26 16:11:22.092 239969 INFO nova.virt.libvirt.driver [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Deletion of /var/lib/nova/instances/b16e73f5-93cb-4c68-8f59-291227c8aa43_del complete
Jan 26 16:11:22 compute-0 nova_compute[239965]: 2026-01-26 16:11:22.152 239969 INFO nova.compute.manager [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Took 1.27 seconds to destroy the instance on the hypervisor.
Jan 26 16:11:22 compute-0 nova_compute[239965]: 2026-01-26 16:11:22.153 239969 DEBUG oslo.service.loopingcall [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:11:22 compute-0 nova_compute[239965]: 2026-01-26 16:11:22.153 239969 DEBUG nova.compute.manager [-] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:11:22 compute-0 nova_compute[239965]: 2026-01-26 16:11:22.153 239969 DEBUG nova.network.neutron [-] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:11:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 295 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 50 KiB/s wr, 85 op/s
Jan 26 16:11:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:23 compute-0 ceph-mon[75140]: osdmap e277: 3 total, 3 up, 3 in
Jan 26 16:11:23 compute-0 ceph-mon[75140]: pgmap v1794: 305 pgs: 305 active+clean; 295 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 77 KiB/s rd, 50 KiB/s wr, 85 op/s
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.338 239969 DEBUG nova.network.neutron [req-635c8254-f5dd-4444-9655-fcb2124d18a6 req-a7f646e9-7ff6-4f66-9fb9-d88316fb9b00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updated VIF entry in instance network info cache for port 95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.338 239969 DEBUG nova.network.neutron [req-635c8254-f5dd-4444-9655-fcb2124d18a6 req-a7f646e9-7ff6-4f66-9fb9-d88316fb9b00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updating instance_info_cache with network_info: [{"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "address": "fa:16:3e:f6:78:a9", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef6:78a9", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf07a4c1a-26", "ovs_interfaceid": "f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.365 239969 DEBUG oslo_concurrency.lockutils [req-635c8254-f5dd-4444-9655-fcb2124d18a6 req-a7f646e9-7ff6-4f66-9fb9-d88316fb9b00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b16e73f5-93cb-4c68-8f59-291227c8aa43" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.388 239969 INFO nova.compute.manager [None req-f94cef62-618c-46ab-87d1-d147395fd19b 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Unpausing
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.389 239969 DEBUG nova.objects.instance [None req-f94cef62-618c-46ab-87d1-d147395fd19b 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'flavor' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.416 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443883.416318, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.416 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Resumed (Lifecycle Event)
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.419 239969 DEBUG nova.compute.manager [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-unplugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.420 239969 DEBUG oslo_concurrency.lockutils [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.420 239969 DEBUG oslo_concurrency.lockutils [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.420 239969 DEBUG oslo_concurrency.lockutils [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.420 239969 DEBUG nova.compute.manager [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] No waiting events found dispatching network-vif-unplugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.420 239969 DEBUG nova.compute.manager [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-unplugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.420 239969 DEBUG nova.compute.manager [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-plugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.421 239969 DEBUG oslo_concurrency.lockutils [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.421 239969 DEBUG oslo_concurrency.lockutils [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.421 239969 DEBUG oslo_concurrency.lockutils [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.421 239969 DEBUG nova.compute.manager [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] No waiting events found dispatching network-vif-plugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.421 239969 WARNING nova.compute.manager [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received unexpected event network-vif-plugged-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 for instance with vm_state active and task_state deleting.
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.422 239969 DEBUG nova.compute.manager [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-deleted-f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.422 239969 INFO nova.compute.manager [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Neutron deleted interface f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4; detaching it from the instance and deleting it from the info cache
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.422 239969 DEBUG nova.network.neutron [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updating instance_info_cache with network_info: [{"id": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "address": "fa:16:3e:98:1a:f6", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95d6c1b0-dd", "ovs_interfaceid": "95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:11:23 compute-0 virtqemud[240263]: argument unsupported: QEMU guest agent is not configured
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.425 239969 DEBUG nova.virt.libvirt.guest [None req-f94cef62-618c-46ab-87d1-d147395fd19b 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.425 239969 DEBUG nova.compute.manager [None req-f94cef62-618c-46ab-87d1-d147395fd19b 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.472 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.474 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.479 239969 DEBUG nova.compute.manager [req-d3625dbf-0ff1-4f2f-bf3d-a51cd778e69d req-245b62d9-6fb2-4fb7-90bb-70635447a7bf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Detach interface failed, port_id=f07a4c1a-2641-4c21-b2b4-9a6700e6ebf4, reason: Instance b16e73f5-93cb-4c68-8f59-291227c8aa43 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:11:23 compute-0 nova_compute[239965]: 2026-01-26 16:11:23.504 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] During sync_power_state the instance has a pending task (unpausing). Skip.
Jan 26 16:11:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 271 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 49 KiB/s wr, 119 op/s
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.323 239969 DEBUG nova.network.neutron [-] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.356 239969 INFO nova.compute.manager [-] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Took 2.20 seconds to deallocate network for instance.
Jan 26 16:11:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:11:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 32K writes, 129K keys, 32K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
                                           Cumulative WAL: 32K writes, 11K syncs, 2.84 writes per sync, written: 0.13 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 45.06 MB, 0.08 MB/s
                                           Interval WAL: 10K writes, 4121 syncs, 2.55 writes per sync, written: 0.04 GB, 0.08 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.380 239969 DEBUG nova.compute.manager [req-939609fc-0cf8-4919-aa8e-4b0829f705e6 req-f8b61ecd-bf52-47a7-9ab8-902c2646c7db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-plugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.381 239969 DEBUG oslo_concurrency.lockutils [req-939609fc-0cf8-4919-aa8e-4b0829f705e6 req-f8b61ecd-bf52-47a7-9ab8-902c2646c7db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.381 239969 DEBUG oslo_concurrency.lockutils [req-939609fc-0cf8-4919-aa8e-4b0829f705e6 req-f8b61ecd-bf52-47a7-9ab8-902c2646c7db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.381 239969 DEBUG oslo_concurrency.lockutils [req-939609fc-0cf8-4919-aa8e-4b0829f705e6 req-f8b61ecd-bf52-47a7-9ab8-902c2646c7db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.382 239969 DEBUG nova.compute.manager [req-939609fc-0cf8-4919-aa8e-4b0829f705e6 req-f8b61ecd-bf52-47a7-9ab8-902c2646c7db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] No waiting events found dispatching network-vif-plugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.382 239969 WARNING nova.compute.manager [req-939609fc-0cf8-4919-aa8e-4b0829f705e6 req-f8b61ecd-bf52-47a7-9ab8-902c2646c7db a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received unexpected event network-vif-plugged-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 for instance with vm_state active and task_state deleting.
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.408 239969 DEBUG oslo_concurrency.lockutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.408 239969 DEBUG oslo_concurrency.lockutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:24 compute-0 ceph-mon[75140]: pgmap v1795: 305 pgs: 305 active+clean; 271 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 49 KiB/s wr, 119 op/s
Jan 26 16:11:24 compute-0 nova_compute[239965]: 2026-01-26 16:11:24.503 239969 DEBUG oslo_concurrency.processutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:11:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:11:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2104253656' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:25 compute-0 nova_compute[239965]: 2026-01-26 16:11:25.092 239969 DEBUG oslo_concurrency.processutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:11:25 compute-0 nova_compute[239965]: 2026-01-26 16:11:25.098 239969 DEBUG nova.compute.provider_tree [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:11:25 compute-0 nova_compute[239965]: 2026-01-26 16:11:25.114 239969 DEBUG nova.scheduler.client.report [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:11:25 compute-0 nova_compute[239965]: 2026-01-26 16:11:25.139 239969 DEBUG oslo_concurrency.lockutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:25 compute-0 nova_compute[239965]: 2026-01-26 16:11:25.161 239969 INFO nova.scheduler.client.report [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance b16e73f5-93cb-4c68-8f59-291227c8aa43
Jan 26 16:11:25 compute-0 nova_compute[239965]: 2026-01-26 16:11:25.235 239969 DEBUG oslo_concurrency.lockutils [None req-81109863-be79-45fa-912a-a0b481223cbc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b16e73f5-93cb-4c68-8f59-291227c8aa43" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2104253656' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:25 compute-0 nova_compute[239965]: 2026-01-26 16:11:25.582 239969 DEBUG nova.compute.manager [req-9d00f44d-110c-403f-a937-0da6d9b101e4 req-3f65b4f9-fef5-4923-99fe-7aa7f3885707 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Received event network-vif-deleted-95d6c1b0-dd6a-41b7-9d15-b8b8c4a4f4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 246 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 39 KiB/s wr, 107 op/s
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.408 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Jan 26 16:11:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Jan 26 16:11:26 compute-0 ceph-mon[75140]: pgmap v1796: 305 pgs: 305 active+clean; 246 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 39 KiB/s wr, 107 op/s
Jan 26 16:11:26 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.663 239969 DEBUG oslo_concurrency.lockutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.664 239969 DEBUG oslo_concurrency.lockutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.664 239969 DEBUG oslo_concurrency.lockutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.664 239969 DEBUG oslo_concurrency.lockutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.664 239969 DEBUG oslo_concurrency.lockutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.665 239969 INFO nova.compute.manager [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Terminating instance
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.667 239969 DEBUG nova.compute.manager [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:11:26 compute-0 kernel: tap3a527bbf-13 (unregistering): left promiscuous mode
Jan 26 16:11:26 compute-0 NetworkManager[48954]: <info>  [1769443886.7121] device (tap3a527bbf-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:11:26 compute-0 ovn_controller[146046]: 2026-01-26T16:11:26Z|01054|binding|INFO|Releasing lport 3a527bbf-13af-4ef8-8703-f32cc8169291 from this chassis (sb_readonly=0)
Jan 26 16:11:26 compute-0 ovn_controller[146046]: 2026-01-26T16:11:26Z|01055|binding|INFO|Setting lport 3a527bbf-13af-4ef8-8703-f32cc8169291 down in Southbound
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.720 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 ovn_controller[146046]: 2026-01-26T16:11:26Z|01056|binding|INFO|Removing iface tap3a527bbf-13 ovn-installed in OVS
Jan 26 16:11:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:26.727 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:69:92:94 10.100.0.4'], port_security=['fa:16:3e:69:92:94 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '7d660383-738b-422b-af54-1a15464bf8e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-065353ad-2d91-4f6b-899c-49c1351ef25d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64fb00c1-6bf2-4adc-ad05-b75da367e5b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f9ceb6e3-7336-40c9-9bd9-e17d330deadf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3a527bbf-13af-4ef8-8703-f32cc8169291) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:11:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:26.729 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3a527bbf-13af-4ef8-8703-f32cc8169291 in datapath 065353ad-2d91-4f6b-899c-49c1351ef25d unbound from our chassis
Jan 26 16:11:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:26.730 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 065353ad-2d91-4f6b-899c-49c1351ef25d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:11:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:26.730 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dc350a8b-1b18-4fa1-9d71-a022838f449b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:26.731 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d namespace which is not needed anymore
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.736 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 kernel: tap9ade0424-93 (unregistering): left promiscuous mode
Jan 26 16:11:26 compute-0 NetworkManager[48954]: <info>  [1769443886.7416] device (tap9ade0424-93): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 ovn_controller[146046]: 2026-01-26T16:11:26Z|01057|binding|INFO|Releasing lport 9ade0424-9307-41d2-99d9-0bbc7146bd66 from this chassis (sb_readonly=0)
Jan 26 16:11:26 compute-0 ovn_controller[146046]: 2026-01-26T16:11:26Z|01058|binding|INFO|Setting lport 9ade0424-9307-41d2-99d9-0bbc7146bd66 down in Southbound
Jan 26 16:11:26 compute-0 ovn_controller[146046]: 2026-01-26T16:11:26Z|01059|binding|INFO|Removing iface tap9ade0424-93 ovn-installed in OVS
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.751 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.754 239969 DEBUG nova.compute.manager [req-51b17b2d-f17e-4d6f-8126-47f7249007f5 req-23135deb-898f-46ae-844e-92de7eb5e519 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-changed-3a527bbf-13af-4ef8-8703-f32cc8169291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.754 239969 DEBUG nova.compute.manager [req-51b17b2d-f17e-4d6f-8126-47f7249007f5 req-23135deb-898f-46ae-844e-92de7eb5e519 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Refreshing instance network info cache due to event network-changed-3a527bbf-13af-4ef8-8703-f32cc8169291. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.755 239969 DEBUG oslo_concurrency.lockutils [req-51b17b2d-f17e-4d6f-8126-47f7249007f5 req-23135deb-898f-46ae-844e-92de7eb5e519 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.755 239969 DEBUG oslo_concurrency.lockutils [req-51b17b2d-f17e-4d6f-8126-47f7249007f5 req-23135deb-898f-46ae-844e-92de7eb5e519 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.755 239969 DEBUG nova.network.neutron [req-51b17b2d-f17e-4d6f-8126-47f7249007f5 req-23135deb-898f-46ae-844e-92de7eb5e519 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Refreshing network info cache for port 3a527bbf-13af-4ef8-8703-f32cc8169291 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:11:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:26.755 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:d5:b2 2001:db8::f816:3eff:feb9:d5b2'], port_security=['fa:16:3e:b9:d5:b2 2001:db8::f816:3eff:feb9:d5b2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb9:d5b2/64', 'neutron:device_id': '7d660383-738b-422b-af54-1a15464bf8e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64fb00c1-6bf2-4adc-ad05-b75da367e5b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=63bbc7aa-7f64-4a2d-a32f-7de8c6f03798, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9ade0424-9307-41d2-99d9-0bbc7146bd66) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.768 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.800 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Deactivated successfully.
Jan 26 16:11:26 compute-0 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000069.scope: Consumed 16.303s CPU time.
Jan 26 16:11:26 compute-0 systemd-machined[208061]: Machine qemu-130-instance-00000069 terminated.
Jan 26 16:11:26 compute-0 neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d[329843]: [NOTICE]   (329847) : haproxy version is 2.8.14-c23fe91
Jan 26 16:11:26 compute-0 neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d[329843]: [NOTICE]   (329847) : path to executable is /usr/sbin/haproxy
Jan 26 16:11:26 compute-0 neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d[329843]: [WARNING]  (329847) : Exiting Master process...
Jan 26 16:11:26 compute-0 neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d[329843]: [WARNING]  (329847) : Exiting Master process...
Jan 26 16:11:26 compute-0 neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d[329843]: [ALERT]    (329847) : Current worker (329849) exited with code 143 (Terminated)
Jan 26 16:11:26 compute-0 neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d[329843]: [WARNING]  (329847) : All workers exited. Exiting... (0)
Jan 26 16:11:26 compute-0 systemd[1]: libpod-0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84.scope: Deactivated successfully.
Jan 26 16:11:26 compute-0 podman[331763]: 2026-01-26 16:11:26.880260864 +0000 UTC m=+0.049488033 container died 0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:11:26 compute-0 NetworkManager[48954]: <info>  [1769443886.8956] manager: (tap9ade0424-93): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Jan 26 16:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84-userdata-shm.mount: Deactivated successfully.
Jan 26 16:11:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-06c64fe4e75d4d1b301e5111a733b4311ad3775d975484b229f35438bfbfdcad-merged.mount: Deactivated successfully.
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.916 239969 INFO nova.virt.libvirt.driver [-] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Instance destroyed successfully.
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.916 239969 DEBUG nova.objects.instance [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 7d660383-738b-422b-af54-1a15464bf8e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:11:26 compute-0 podman[331763]: 2026-01-26 16:11:26.922291542 +0000 UTC m=+0.091518711 container cleanup 0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.929 239969 DEBUG nova.virt.libvirt.vif [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-66572341',display_name='tempest-TestGettingAddress-server-66572341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-66572341',id=105,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-bfx0ple1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:10:20Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=7d660383-738b-422b-af54-1a15464bf8e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.930 239969 DEBUG nova.network.os_vif_util [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:11:26 compute-0 systemd[1]: libpod-conmon-0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84.scope: Deactivated successfully.
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.930 239969 DEBUG nova.network.os_vif_util [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:69:92:94,bridge_name='br-int',has_traffic_filtering=True,id=3a527bbf-13af-4ef8-8703-f32cc8169291,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a527bbf-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.931 239969 DEBUG os_vif [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:92:94,bridge_name='br-int',has_traffic_filtering=True,id=3a527bbf-13af-4ef8-8703-f32cc8169291,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a527bbf-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.933 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.933 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3a527bbf-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.935 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.941 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.944 239969 INFO os_vif [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:69:92:94,bridge_name='br-int',has_traffic_filtering=True,id=3a527bbf-13af-4ef8-8703-f32cc8169291,network=Network(065353ad-2d91-4f6b-899c-49c1351ef25d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3a527bbf-13')
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.945 239969 DEBUG nova.virt.libvirt.vif [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:10:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-66572341',display_name='tempest-TestGettingAddress-server-66572341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-66572341',id=105,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAEzKe6a2VldUJeYUz82U7kuWR1IjLUu+uday8/rBf6kWo6w7hcHpRo5TRunJYM1KjlHcyvxH5YSbKzdUCWpXcwe9doLwR4k7RaPJluwXLyon4b8Yo66Pr00SerXQJms4Q==',key_name='tempest-TestGettingAddress-1487907947',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-bfx0ple1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:10:20Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=7d660383-738b-422b-af54-1a15464bf8e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.945 239969 DEBUG nova.network.os_vif_util [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.946 239969 DEBUG nova.network.os_vif_util [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:d5:b2,bridge_name='br-int',has_traffic_filtering=True,id=9ade0424-9307-41d2-99d9-0bbc7146bd66,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ade0424-93') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.946 239969 DEBUG os_vif [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:d5:b2,bridge_name='br-int',has_traffic_filtering=True,id=9ade0424-9307-41d2-99d9-0bbc7146bd66,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ade0424-93') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.947 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.947 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ade0424-93, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.948 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.950 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:26 compute-0 nova_compute[239965]: 2026-01-26 16:11:26.952 239969 INFO os_vif [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:d5:b2,bridge_name='br-int',has_traffic_filtering=True,id=9ade0424-9307-41d2-99d9-0bbc7146bd66,network=Network(1f659c36-b294-4de4-81d7-0a221d4d63ba),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9ade0424-93')
Jan 26 16:11:27 compute-0 podman[331812]: 2026-01-26 16:11:27.009450894 +0000 UTC m=+0.060411529 container remove 0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.016 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ac4fc854-2f46-41ae-96fc-5daf2915ef7c]: (4, ('Mon Jan 26 04:11:26 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d (0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84)\n0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84\nMon Jan 26 04:11:26 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d (0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84)\n0b4fc7d1bc5f674fd371bc99c435094574d95e62999e4f446656025f2d1f8f84\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.018 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ead47f68-284f-41d5-a62a-ec3ea7b3dd8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.019 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap065353ad-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:27 compute-0 kernel: tap065353ad-20: left promiscuous mode
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.021 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.038 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.042 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3c373a8a-403b-4e13-9b7a-f42843b2a278]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.066 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[50a924e9-e123-4bc6-a6cd-4a018f53c859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.068 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e0daad0b-bea3-4410-8913-e0c3c4c113c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.092 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[25845795-4218-406d-8fd0-2bd482b8455b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530781, 'reachable_time': 16180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331845, 'error': None, 'target': 'ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.095 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-065353ad-2d91-4f6b-899c-49c1351ef25d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.095 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[592138c1-a56c-4742-ab6d-76a54599b79a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.096 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9ade0424-9307-41d2-99d9-0bbc7146bd66 in datapath 1f659c36-b294-4de4-81d7-0a221d4d63ba unbound from our chassis
Jan 26 16:11:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d065353ad\x2d2d91\x2d4f6b\x2d899c\x2d49c1351ef25d.mount: Deactivated successfully.
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.097 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1f659c36-b294-4de4-81d7-0a221d4d63ba, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.098 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aa5b230f-a177-4448-9d91-84b876962410]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.098 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba namespace which is not needed anymore
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.241 239969 INFO nova.virt.libvirt.driver [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Deleting instance files /var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3_del
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.241 239969 INFO nova.virt.libvirt.driver [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Deletion of /var/lib/nova/instances/7d660383-738b-422b-af54-1a15464bf8e3_del complete
Jan 26 16:11:27 compute-0 neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba[329957]: [NOTICE]   (329961) : haproxy version is 2.8.14-c23fe91
Jan 26 16:11:27 compute-0 neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba[329957]: [NOTICE]   (329961) : path to executable is /usr/sbin/haproxy
Jan 26 16:11:27 compute-0 neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba[329957]: [WARNING]  (329961) : Exiting Master process...
Jan 26 16:11:27 compute-0 neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba[329957]: [ALERT]    (329961) : Current worker (329963) exited with code 143 (Terminated)
Jan 26 16:11:27 compute-0 neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba[329957]: [WARNING]  (329961) : All workers exited. Exiting... (0)
Jan 26 16:11:27 compute-0 systemd[1]: libpod-a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a.scope: Deactivated successfully.
Jan 26 16:11:27 compute-0 podman[331864]: 2026-01-26 16:11:27.254762034 +0000 UTC m=+0.046905648 container died a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:11:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a-userdata-shm.mount: Deactivated successfully.
Jan 26 16:11:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-9600c7b8092f5310b0c01bcd8304fd51f9ae738056d59266a1d9afe95de38d56-merged.mount: Deactivated successfully.
Jan 26 16:11:27 compute-0 podman[331864]: 2026-01-26 16:11:27.2864639 +0000 UTC m=+0.078607514 container cleanup a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.297 239969 INFO nova.compute.manager [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Took 0.63 seconds to destroy the instance on the hypervisor.
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.297 239969 DEBUG oslo.service.loopingcall [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.298 239969 DEBUG nova.compute.manager [-] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.298 239969 DEBUG nova.network.neutron [-] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:11:27 compute-0 systemd[1]: libpod-conmon-a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a.scope: Deactivated successfully.
Jan 26 16:11:27 compute-0 podman[331893]: 2026-01-26 16:11:27.344407597 +0000 UTC m=+0.039139388 container remove a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.352 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c9dba1ca-5882-4692-8a51-3992e56db77a]: (4, ('Mon Jan 26 04:11:27 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba (a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a)\na0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a\nMon Jan 26 04:11:27 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba (a0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a)\na0a48d905440f4646b9b138fa16e6bbba6de1fe305ed9fc55794e7b66a05572a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.353 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[048ada56-dd06-40bf-b334-6b1c71025bb9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.354 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1f659c36-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.356 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:27 compute-0 kernel: tap1f659c36-b0: left promiscuous mode
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.371 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.373 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bbfba7a1-ca56-46cf-b785-4375950f6cce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.394 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b7681a43-d480-4931-b5d4-dd1dbbf8b64b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.395 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3ed2b4-a2ec-4fe5-bc34-f7879d6eade5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.414 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7827e8f5-8984-4d22-8a81-89f8de8e6a8b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530871, 'reachable_time': 27533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 331909, 'error': None, 'target': 'ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.415 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1f659c36-b294-4de4-81d7-0a221d4d63ba deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:11:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:27.415 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[26f16489-aa6a-42f5-a4e2-060da29d8b9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Jan 26 16:11:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Jan 26 16:11:27 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Jan 26 16:11:27 compute-0 ceph-mon[75140]: osdmap e278: 3 total, 3 up, 3 in
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.841 239969 DEBUG nova.compute.manager [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-unplugged-3a527bbf-13af-4ef8-8703-f32cc8169291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.841 239969 DEBUG oslo_concurrency.lockutils [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.841 239969 DEBUG oslo_concurrency.lockutils [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.841 239969 DEBUG oslo_concurrency.lockutils [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.841 239969 DEBUG nova.compute.manager [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] No waiting events found dispatching network-vif-unplugged-3a527bbf-13af-4ef8-8703-f32cc8169291 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.842 239969 DEBUG nova.compute.manager [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-unplugged-3a527bbf-13af-4ef8-8703-f32cc8169291 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.842 239969 DEBUG nova.compute.manager [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-plugged-3a527bbf-13af-4ef8-8703-f32cc8169291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.842 239969 DEBUG oslo_concurrency.lockutils [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.842 239969 DEBUG oslo_concurrency.lockutils [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.842 239969 DEBUG oslo_concurrency.lockutils [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.842 239969 DEBUG nova.compute.manager [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] No waiting events found dispatching network-vif-plugged-3a527bbf-13af-4ef8-8703-f32cc8169291 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.843 239969 WARNING nova.compute.manager [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received unexpected event network-vif-plugged-3a527bbf-13af-4ef8-8703-f32cc8169291 for instance with vm_state active and task_state deleting.
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.843 239969 DEBUG nova.compute.manager [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-unplugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.843 239969 DEBUG oslo_concurrency.lockutils [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.843 239969 DEBUG oslo_concurrency.lockutils [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.843 239969 DEBUG oslo_concurrency.lockutils [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.843 239969 DEBUG nova.compute.manager [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] No waiting events found dispatching network-vif-unplugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:27 compute-0 nova_compute[239965]: 2026-01-26 16:11:27.844 239969 DEBUG nova.compute.manager [req-5be5800c-1dce-4390-8da7-affe65ada363 req-b3cb74ae-db4c-4bf8-a3ae-43d8007916e8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-unplugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:11:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e279 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:27 compute-0 systemd[1]: run-netns-ovnmeta\x2d1f659c36\x2db294\x2d4de4\x2d81d7\x2d0a221d4d63ba.mount: Deactivated successfully.
Jan 26 16:11:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 182 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 58 KiB/s wr, 170 op/s
Jan 26 16:11:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Jan 26 16:11:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Jan 26 16:11:28 compute-0 ceph-mon[75140]: osdmap e279: 3 total, 3 up, 3 in
Jan 26 16:11:28 compute-0 ceph-mon[75140]: pgmap v1799: 305 pgs: 305 active+clean; 182 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 135 KiB/s rd, 58 KiB/s wr, 170 op/s
Jan 26 16:11:28 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Jan 26 16:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:11:28
Jan 26 16:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'volumes', '.mgr', 'default.rgw.control', 'images', '.rgw.root', 'backups']
Jan 26 16:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:11:28 compute-0 nova_compute[239965]: 2026-01-26 16:11:28.762 239969 DEBUG nova.network.neutron [req-51b17b2d-f17e-4d6f-8126-47f7249007f5 req-23135deb-898f-46ae-844e-92de7eb5e519 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updated VIF entry in instance network info cache for port 3a527bbf-13af-4ef8-8703-f32cc8169291. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:11:28 compute-0 nova_compute[239965]: 2026-01-26 16:11:28.763 239969 DEBUG nova.network.neutron [req-51b17b2d-f17e-4d6f-8126-47f7249007f5 req-23135deb-898f-46ae-844e-92de7eb5e519 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updating instance_info_cache with network_info: [{"id": "3a527bbf-13af-4ef8-8703-f32cc8169291", "address": "fa:16:3e:69:92:94", "network": {"id": "065353ad-2d91-4f6b-899c-49c1351ef25d", "bridge": "br-int", "label": "tempest-network-smoke--819464943", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3a527bbf-13", "ovs_interfaceid": "3a527bbf-13af-4ef8-8703-f32cc8169291", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "address": "fa:16:3e:b9:d5:b2", "network": {"id": "1f659c36-b294-4de4-81d7-0a221d4d63ba", "bridge": "br-int", "label": "tempest-network-smoke--85396435", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb9:d5b2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9ade0424-93", "ovs_interfaceid": "9ade0424-9307-41d2-99d9-0bbc7146bd66", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:11:28 compute-0 nova_compute[239965]: 2026-01-26 16:11:28.791 239969 DEBUG oslo_concurrency.lockutils [req-51b17b2d-f17e-4d6f-8126-47f7249007f5 req-23135deb-898f-46ae-844e-92de7eb5e519 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7d660383-738b-422b-af54-1a15464bf8e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:11:28 compute-0 nova_compute[239965]: 2026-01-26 16:11:28.974 239969 DEBUG nova.network.neutron [-] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.000 239969 INFO nova.compute.manager [-] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Took 1.70 seconds to deallocate network for instance.
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.053 239969 DEBUG oslo_concurrency.lockutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.053 239969 DEBUG oslo_concurrency.lockutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.156 239969 DEBUG oslo_concurrency.processutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:11:29 compute-0 ceph-mon[75140]: osdmap e280: 3 total, 3 up, 3 in
Jan 26 16:11:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:11:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.4 total, 600.0 interval
                                           Cumulative writes: 35K writes, 134K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.04 MB/s
                                           Cumulative WAL: 35K writes, 12K syncs, 2.77 writes per sync, written: 0.12 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 11K writes, 40K keys, 11K commit groups, 1.0 writes per commit group, ingest: 39.46 MB, 0.07 MB/s
                                           Interval WAL: 11K writes, 4607 syncs, 2.41 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:11:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:11:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3898033535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.758 239969 DEBUG oslo_concurrency.processutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.764 239969 DEBUG nova.compute.provider_tree [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.786 239969 DEBUG nova.scheduler.client.report [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.814 239969 DEBUG oslo_concurrency.lockutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.853 239969 INFO nova.scheduler.client.report [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 7d660383-738b-422b-af54-1a15464bf8e3
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.919 239969 DEBUG oslo_concurrency.lockutils [None req-518a82f5-cf7b-4c20-9a38-7f162998d1a9 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.938 239969 DEBUG nova.compute.manager [req-6f97d6f6-afd4-49c8-a9cb-dee946892570 req-18473489-e60a-41d6-9e21-61ee4c8956d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-plugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.938 239969 DEBUG oslo_concurrency.lockutils [req-6f97d6f6-afd4-49c8-a9cb-dee946892570 req-18473489-e60a-41d6-9e21-61ee4c8956d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7d660383-738b-422b-af54-1a15464bf8e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.939 239969 DEBUG oslo_concurrency.lockutils [req-6f97d6f6-afd4-49c8-a9cb-dee946892570 req-18473489-e60a-41d6-9e21-61ee4c8956d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.939 239969 DEBUG oslo_concurrency.lockutils [req-6f97d6f6-afd4-49c8-a9cb-dee946892570 req-18473489-e60a-41d6-9e21-61ee4c8956d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7d660383-738b-422b-af54-1a15464bf8e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.939 239969 DEBUG nova.compute.manager [req-6f97d6f6-afd4-49c8-a9cb-dee946892570 req-18473489-e60a-41d6-9e21-61ee4c8956d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] No waiting events found dispatching network-vif-plugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.939 239969 WARNING nova.compute.manager [req-6f97d6f6-afd4-49c8-a9cb-dee946892570 req-18473489-e60a-41d6-9e21-61ee4c8956d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received unexpected event network-vif-plugged-9ade0424-9307-41d2-99d9-0bbc7146bd66 for instance with vm_state deleted and task_state None.
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.940 239969 DEBUG nova.compute.manager [req-6f97d6f6-afd4-49c8-a9cb-dee946892570 req-18473489-e60a-41d6-9e21-61ee4c8956d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-deleted-9ade0424-9307-41d2-99d9-0bbc7146bd66 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:29 compute-0 nova_compute[239965]: 2026-01-26 16:11:29.940 239969 DEBUG nova.compute.manager [req-6f97d6f6-afd4-49c8-a9cb-dee946892570 req-18473489-e60a-41d6-9e21-61ee4c8956d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Received event network-vif-deleted-3a527bbf-13af-4ef8-8703-f32cc8169291 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 182 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 13 KiB/s wr, 70 op/s
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:11:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Jan 26 16:11:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Jan 26 16:11:30 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Jan 26 16:11:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3898033535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:30 compute-0 ceph-mon[75140]: pgmap v1801: 305 pgs: 305 active+clean; 182 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 13 KiB/s wr, 70 op/s
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:11:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:11:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Jan 26 16:11:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Jan 26 16:11:31 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Jan 26 16:11:31 compute-0 ceph-mon[75140]: osdmap e281: 3 total, 3 up, 3 in
Jan 26 16:11:31 compute-0 nova_compute[239965]: 2026-01-26 16:11:31.803 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:31 compute-0 nova_compute[239965]: 2026-01-26 16:11:31.949 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 167 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 9.9 KiB/s wr, 159 op/s
Jan 26 16:11:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Jan 26 16:11:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Jan 26 16:11:32 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Jan 26 16:11:32 compute-0 ceph-mon[75140]: osdmap e282: 3 total, 3 up, 3 in
Jan 26 16:11:32 compute-0 ceph-mon[75140]: pgmap v1804: 305 pgs: 305 active+clean; 167 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 110 KiB/s rd, 9.9 KiB/s wr, 159 op/s
Jan 26 16:11:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Jan 26 16:11:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Jan 26 16:11:32 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Jan 26 16:11:33 compute-0 ceph-mon[75140]: osdmap e283: 3 total, 3 up, 3 in
Jan 26 16:11:33 compute-0 ceph-mon[75140]: osdmap e284: 3 total, 3 up, 3 in
Jan 26 16:11:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 15 KiB/s wr, 281 op/s
Jan 26 16:11:34 compute-0 ceph-mon[75140]: pgmap v1807: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 201 KiB/s rd, 15 KiB/s wr, 281 op/s
Jan 26 16:11:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:11:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 26K writes, 104K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s
                                           Cumulative WAL: 26K writes, 9398 syncs, 2.83 writes per sync, written: 0.10 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8587 writes, 30K keys, 8587 commit groups, 1.0 writes per commit group, ingest: 30.00 MB, 0.05 MB/s
                                           Interval WAL: 8587 writes, 3609 syncs, 2.38 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:11:34 compute-0 ovn_controller[146046]: 2026-01-26T16:11:34Z|01060|binding|INFO|Releasing lport fb7d9679-6ae1-476e-99df-96ff9d9ad5e9 from this chassis (sb_readonly=0)
Jan 26 16:11:34 compute-0 nova_compute[239965]: 2026-01-26 16:11:34.689 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:34 compute-0 ovn_controller[146046]: 2026-01-26T16:11:34Z|01061|binding|INFO|Releasing lport fb7d9679-6ae1-476e-99df-96ff9d9ad5e9 from this chassis (sb_readonly=0)
Jan 26 16:11:34 compute-0 nova_compute[239965]: 2026-01-26 16:11:34.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:36 compute-0 nova_compute[239965]: 2026-01-26 16:11:36.146 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443881.1438944, b16e73f5-93cb-4c68-8f59-291227c8aa43 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:11:36 compute-0 nova_compute[239965]: 2026-01-26 16:11:36.146 239969 INFO nova.compute.manager [-] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] VM Stopped (Lifecycle Event)
Jan 26 16:11:36 compute-0 nova_compute[239965]: 2026-01-26 16:11:36.189 239969 DEBUG nova.compute.manager [None req-27e63a43-855f-4156-b01f-2fb18c4bf489 - - - - - -] [instance: b16e73f5-93cb-4c68-8f59-291227c8aa43] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:11:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 191 KiB/s rd, 13 KiB/s wr, 260 op/s
Jan 26 16:11:36 compute-0 nova_compute[239965]: 2026-01-26 16:11:36.805 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:36 compute-0 nova_compute[239965]: 2026-01-26 16:11:36.951 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:37 compute-0 ceph-mon[75140]: pgmap v1808: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 191 KiB/s rd, 13 KiB/s wr, 260 op/s
Jan 26 16:11:37 compute-0 nova_compute[239965]: 2026-01-26 16:11:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:11:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Jan 26 16:11:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Jan 26 16:11:37 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Jan 26 16:11:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 5.3 KiB/s wr, 130 op/s
Jan 26 16:11:38 compute-0 ceph-mon[75140]: osdmap e285: 3 total, 3 up, 3 in
Jan 26 16:11:38 compute-0 ceph-mon[75140]: pgmap v1810: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 5.3 KiB/s wr, 130 op/s
Jan 26 16:11:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 16:11:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.5 KiB/s wr, 54 op/s
Jan 26 16:11:40 compute-0 ceph-mon[75140]: pgmap v1811: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.5 KiB/s wr, 54 op/s
Jan 26 16:11:40 compute-0 nova_compute[239965]: 2026-01-26 16:11:40.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:11:41 compute-0 nova_compute[239965]: 2026-01-26 16:11:41.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:41 compute-0 nova_compute[239965]: 2026-01-26 16:11:41.909 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443886.907766, 7d660383-738b-422b-af54-1a15464bf8e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:11:41 compute-0 nova_compute[239965]: 2026-01-26 16:11:41.909 239969 INFO nova.compute.manager [-] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] VM Stopped (Lifecycle Event)
Jan 26 16:11:41 compute-0 nova_compute[239965]: 2026-01-26 16:11:41.926 239969 DEBUG nova.compute.manager [None req-6b302fea-c947-4c03-8224-4449b91253d0 - - - - - -] [instance: 7d660383-738b-422b-af54-1a15464bf8e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:11:41 compute-0 nova_compute[239965]: 2026-01-26 16:11:41.953 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Jan 26 16:11:42 compute-0 nova_compute[239965]: 2026-01-26 16:11:42.276 239969 DEBUG oslo_concurrency.lockutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:42 compute-0 nova_compute[239965]: 2026-01-26 16:11:42.276 239969 DEBUG oslo_concurrency.lockutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:42 compute-0 nova_compute[239965]: 2026-01-26 16:11:42.276 239969 INFO nova.compute.manager [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Shelving
Jan 26 16:11:42 compute-0 nova_compute[239965]: 2026-01-26 16:11:42.296 239969 DEBUG nova.virt.libvirt.driver [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:11:42 compute-0 nova_compute[239965]: 2026-01-26 16:11:42.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:11:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Jan 26 16:11:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Jan 26 16:11:42 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Jan 26 16:11:43 compute-0 ceph-mon[75140]: pgmap v1812: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 2.0 KiB/s wr, 44 op/s
Jan 26 16:11:43 compute-0 ceph-mon[75140]: osdmap e286: 3 total, 3 up, 3 in
Jan 26 16:11:43 compute-0 nova_compute[239965]: 2026-01-26 16:11:43.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:11:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4.9 KiB/s wr, 7 op/s
Jan 26 16:11:44 compute-0 nova_compute[239965]: 2026-01-26 16:11:44.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:11:44 compute-0 nova_compute[239965]: 2026-01-26 16:11:44.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:11:44 compute-0 kernel: tapd8d3ce43-ab (unregistering): left promiscuous mode
Jan 26 16:11:44 compute-0 NetworkManager[48954]: <info>  [1769443904.5255] device (tapd8d3ce43-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:11:44 compute-0 ovn_controller[146046]: 2026-01-26T16:11:44Z|01062|binding|INFO|Releasing lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 from this chassis (sb_readonly=0)
Jan 26 16:11:44 compute-0 ovn_controller[146046]: 2026-01-26T16:11:44Z|01063|binding|INFO|Setting lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 down in Southbound
Jan 26 16:11:44 compute-0 nova_compute[239965]: 2026-01-26 16:11:44.535 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:44 compute-0 ovn_controller[146046]: 2026-01-26T16:11:44Z|01064|binding|INFO|Removing iface tapd8d3ce43-ab ovn-installed in OVS
Jan 26 16:11:44 compute-0 nova_compute[239965]: 2026-01-26 16:11:44.537 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.545 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:58:77 10.100.0.6'], port_security=['fa:16:3e:74:58:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb0a5e0f-b1d5-4f6c-9884-3b66f10a59d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=568e2bde-137f-4351-b2df-67acb89970b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.546 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 in datapath 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 unbound from our chassis
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.547 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.549 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[05a759ee-62f3-47f3-a84f-ad91dc97ee94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.549 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 namespace which is not needed anymore
Jan 26 16:11:44 compute-0 nova_compute[239965]: 2026-01-26 16:11:44.561 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:44 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 26 16:11:44 compute-0 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d0000006a.scope: Consumed 16.755s CPU time.
Jan 26 16:11:44 compute-0 systemd-machined[208061]: Machine qemu-131-instance-0000006a terminated.
Jan 26 16:11:44 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[330031]: [NOTICE]   (330035) : haproxy version is 2.8.14-c23fe91
Jan 26 16:11:44 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[330031]: [NOTICE]   (330035) : path to executable is /usr/sbin/haproxy
Jan 26 16:11:44 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[330031]: [WARNING]  (330035) : Exiting Master process...
Jan 26 16:11:44 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[330031]: [WARNING]  (330035) : Exiting Master process...
Jan 26 16:11:44 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[330031]: [ALERT]    (330035) : Current worker (330037) exited with code 143 (Terminated)
Jan 26 16:11:44 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[330031]: [WARNING]  (330035) : All workers exited. Exiting... (0)
Jan 26 16:11:44 compute-0 systemd[1]: libpod-45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe.scope: Deactivated successfully.
Jan 26 16:11:44 compute-0 podman[331957]: 2026-01-26 16:11:44.67270819 +0000 UTC m=+0.042102631 container died 45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 16:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe-userdata-shm.mount: Deactivated successfully.
Jan 26 16:11:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f74924d9bc06e12748245418289a7f9d8aef6773dd7e93b7ee67e5c6b59b839-merged.mount: Deactivated successfully.
Jan 26 16:11:44 compute-0 podman[331957]: 2026-01-26 16:11:44.708986847 +0000 UTC m=+0.078381288 container cleanup 45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 16:11:44 compute-0 systemd[1]: libpod-conmon-45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe.scope: Deactivated successfully.
Jan 26 16:11:44 compute-0 nova_compute[239965]: 2026-01-26 16:11:44.753 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:44 compute-0 nova_compute[239965]: 2026-01-26 16:11:44.756 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:44 compute-0 podman[331986]: 2026-01-26 16:11:44.764335762 +0000 UTC m=+0.038084034 container remove 45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.769 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0be665c1-501f-4e54-82d7-74edf3da481b]: (4, ('Mon Jan 26 04:11:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 (45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe)\n45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe\nMon Jan 26 04:11:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 (45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe)\n45d58dd047e34e80962bb183fdd846a6cf080628808c49b4edce38a05ae2eabe\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.770 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[79437797-3dcb-4f43-9629-c0c92a32539e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.771 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a6493e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:44 compute-0 nova_compute[239965]: 2026-01-26 16:11:44.772 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:44 compute-0 kernel: tap74a6493e-90: left promiscuous mode
Jan 26 16:11:44 compute-0 nova_compute[239965]: 2026-01-26 16:11:44.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.793 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c637bcba-04f8-4037-a72c-798e4bcc595b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.803 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[01d0b16d-6a76-4d97-af07-ec84347cbcdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.804 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1a041672-86b0-4ef4-894f-386c5965504b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.818 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[10dc7298-4f72-410a-b975-3bb2494f95c0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 530965, 'reachable_time': 30965, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332014, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.820 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:11:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:44.820 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[0003b30b-9796-4607-8853-01181b739ab4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d74a6493e\x2d9050\x2d4f8b\x2d9dc9\x2de7bb3992eeb6.mount: Deactivated successfully.
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.048 239969 DEBUG nova.compute.manager [req-4f5f8502-5a39-4657-a30c-a2e6e22d05ca req-b6191bc9-2f70-4792-b79d-74785a1e0dd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-unplugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.049 239969 DEBUG oslo_concurrency.lockutils [req-4f5f8502-5a39-4657-a30c-a2e6e22d05ca req-b6191bc9-2f70-4792-b79d-74785a1e0dd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.049 239969 DEBUG oslo_concurrency.lockutils [req-4f5f8502-5a39-4657-a30c-a2e6e22d05ca req-b6191bc9-2f70-4792-b79d-74785a1e0dd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.049 239969 DEBUG oslo_concurrency.lockutils [req-4f5f8502-5a39-4657-a30c-a2e6e22d05ca req-b6191bc9-2f70-4792-b79d-74785a1e0dd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.049 239969 DEBUG nova.compute.manager [req-4f5f8502-5a39-4657-a30c-a2e6e22d05ca req-b6191bc9-2f70-4792-b79d-74785a1e0dd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-unplugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.049 239969 WARNING nova.compute.manager [req-4f5f8502-5a39-4657-a30c-a2e6e22d05ca req-b6191bc9-2f70-4792-b79d-74785a1e0dd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received unexpected event network-vif-unplugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with vm_state active and task_state shelving.
Jan 26 16:11:45 compute-0 ceph-mon[75140]: pgmap v1814: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 4.2 KiB/s rd, 4.9 KiB/s wr, 7 op/s
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.313 239969 INFO nova.virt.libvirt.driver [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance shutdown successfully after 3 seconds.
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.318 239969 INFO nova.virt.libvirt.driver [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance destroyed successfully.
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.319 239969 DEBUG nova.objects.instance [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.657 239969 INFO nova.virt.libvirt.driver [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Beginning cold snapshot process
Jan 26 16:11:45 compute-0 nova_compute[239965]: 2026-01-26 16:11:45.803 239969 DEBUG nova.virt.libvirt.imagebackend [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 16:11:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 1.6 KiB/s rd, 3.8 KiB/s wr, 1 op/s
Jan 26 16:11:46 compute-0 nova_compute[239965]: 2026-01-26 16:11:46.271 239969 DEBUG nova.storage.rbd_utils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] creating snapshot(c2163d14e324494bbc28d214cd0b81c8) on rbd image(20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:11:46 compute-0 nova_compute[239965]: 2026-01-26 16:11:46.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:11:46 compute-0 nova_compute[239965]: 2026-01-26 16:11:46.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:11:46 compute-0 nova_compute[239965]: 2026-01-26 16:11:46.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:11:46 compute-0 nova_compute[239965]: 2026-01-26 16:11:46.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:11:46 compute-0 nova_compute[239965]: 2026-01-26 16:11:46.535 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:11:46 compute-0 nova_compute[239965]: 2026-01-26 16:11:46.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:46 compute-0 nova_compute[239965]: 2026-01-26 16:11:46.955 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Jan 26 16:11:47 compute-0 ceph-mon[75140]: pgmap v1815: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 1.6 KiB/s rd, 3.8 KiB/s wr, 1 op/s
Jan 26 16:11:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Jan 26 16:11:47 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Jan 26 16:11:47 compute-0 nova_compute[239965]: 2026-01-26 16:11:47.326 239969 DEBUG nova.storage.rbd_utils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] cloning vms/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk@c2163d14e324494bbc28d214cd0b81c8 to images/d2348ec7-2ac6-402c-ae5a-aa351210d1c0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:11:47 compute-0 podman[332066]: 2026-01-26 16:11:47.370835002 +0000 UTC m=+0.053284615 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:11:47 compute-0 podman[332067]: 2026-01-26 16:11:47.398042758 +0000 UTC m=+0.080355688 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:11:47 compute-0 nova_compute[239965]: 2026-01-26 16:11:47.406 239969 DEBUG nova.storage.rbd_utils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] flattening images/d2348ec7-2ac6-402c-ae5a-aa351210d1c0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:11:47 compute-0 nova_compute[239965]: 2026-01-26 16:11:47.441 239969 DEBUG nova.compute.manager [req-3b233e9b-35a6-4a3f-8c24-cf9290b928dd req-f050baa4-2939-4d6a-8745-875c3565cc98 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:47 compute-0 nova_compute[239965]: 2026-01-26 16:11:47.442 239969 DEBUG oslo_concurrency.lockutils [req-3b233e9b-35a6-4a3f-8c24-cf9290b928dd req-f050baa4-2939-4d6a-8745-875c3565cc98 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:47 compute-0 nova_compute[239965]: 2026-01-26 16:11:47.442 239969 DEBUG oslo_concurrency.lockutils [req-3b233e9b-35a6-4a3f-8c24-cf9290b928dd req-f050baa4-2939-4d6a-8745-875c3565cc98 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:47 compute-0 nova_compute[239965]: 2026-01-26 16:11:47.442 239969 DEBUG oslo_concurrency.lockutils [req-3b233e9b-35a6-4a3f-8c24-cf9290b928dd req-f050baa4-2939-4d6a-8745-875c3565cc98 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:47 compute-0 nova_compute[239965]: 2026-01-26 16:11:47.442 239969 DEBUG nova.compute.manager [req-3b233e9b-35a6-4a3f-8c24-cf9290b928dd req-f050baa4-2939-4d6a-8745-875c3565cc98 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:11:47 compute-0 nova_compute[239965]: 2026-01-26 16:11:47.443 239969 WARNING nova.compute.manager [req-3b233e9b-35a6-4a3f-8c24-cf9290b928dd req-f050baa4-2939-4d6a-8745-875c3565cc98 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received unexpected event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with vm_state active and task_state shelving_image_uploading.
Jan 26 16:11:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:48 compute-0 nova_compute[239965]: 2026-01-26 16:11:48.173 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating instance_info_cache with network_info: [{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:11:48 compute-0 nova_compute[239965]: 2026-01-26 16:11:48.191 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:11:48 compute-0 nova_compute[239965]: 2026-01-26 16:11:48.192 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:11:48 compute-0 nova_compute[239965]: 2026-01-26 16:11:48.192 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 13 KiB/s wr, 11 op/s
Jan 26 16:11:48 compute-0 ceph-mon[75140]: osdmap e287: 3 total, 3 up, 3 in
Jan 26 16:11:48 compute-0 nova_compute[239965]: 2026-01-26 16:11:48.353 239969 DEBUG nova.storage.rbd_utils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] removing snapshot(c2163d14e324494bbc28d214cd0b81c8) on rbd image(20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:11:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:11:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4091242868' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:11:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:11:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4091242868' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007723187720270263 of space, bias 1.0, pg target 0.23169563160810788 quantized to 32 (current 32)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010176400324170723 of space, bias 1.0, pg target 0.30529200972512166 quantized to 32 (current 32)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.363387893604225e-07 of space, bias 4.0, pg target 0.000883606547232507 quantized to 16 (current 16)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:11:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:11:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Jan 26 16:11:49 compute-0 ceph-mon[75140]: pgmap v1817: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 13 KiB/s wr, 11 op/s
Jan 26 16:11:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4091242868' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:11:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4091242868' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:11:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Jan 26 16:11:49 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Jan 26 16:11:49 compute-0 nova_compute[239965]: 2026-01-26 16:11:49.344 239969 DEBUG nova.storage.rbd_utils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] creating snapshot(snap) on rbd image(d2348ec7-2ac6-402c-ae5a-aa351210d1c0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:11:49 compute-0 nova_compute[239965]: 2026-01-26 16:11:49.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:11:49 compute-0 nova_compute[239965]: 2026-01-26 16:11:49.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:49 compute-0 nova_compute[239965]: 2026-01-26 16:11:49.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:49 compute-0 nova_compute[239965]: 2026-01-26 16:11:49.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:49 compute-0 nova_compute[239965]: 2026-01-26 16:11:49.535 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:11:49 compute-0 nova_compute[239965]: 2026-01-26 16:11:49.535 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:11:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:11:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1713981478' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.081 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.172 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.173 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:11:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 9.7 KiB/s wr, 11 op/s
Jan 26 16:11:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Jan 26 16:11:50 compute-0 ceph-mon[75140]: osdmap e288: 3 total, 3 up, 3 in
Jan 26 16:11:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1713981478' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:50 compute-0 ceph-mon[75140]: pgmap v1819: 305 pgs: 305 active+clean; 167 MiB data, 780 MiB used, 59 GiB / 60 GiB avail; 8.9 KiB/s rd, 9.7 KiB/s wr, 11 op/s
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.320 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.321 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3731MB free_disk=59.94195117428899GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.322 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.322 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Jan 26 16:11:50 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.405 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.405 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.405 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:11:50 compute-0 nova_compute[239965]: 2026-01-26 16:11:50.475 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:11:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:11:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4274087315' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:51 compute-0 nova_compute[239965]: 2026-01-26 16:11:51.066 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:11:51 compute-0 nova_compute[239965]: 2026-01-26 16:11:51.072 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:11:51 compute-0 nova_compute[239965]: 2026-01-26 16:11:51.096 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:11:51 compute-0 nova_compute[239965]: 2026-01-26 16:11:51.120 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:11:51 compute-0 nova_compute[239965]: 2026-01-26 16:11:51.121 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:51 compute-0 ceph-mon[75140]: osdmap e289: 3 total, 3 up, 3 in
Jan 26 16:11:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4274087315' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:51 compute-0 nova_compute[239965]: 2026-01-26 16:11:51.809 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:51 compute-0 nova_compute[239965]: 2026-01-26 16:11:51.956 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:52.085 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:07:b5:a8 2001:db8:0:1:f816:3eff:fe07:b5a8 2001:db8::f816:3eff:fe07:b5a8'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe07:b5a8/64 2001:db8::f816:3eff:fe07:b5a8/64', 'neutron:device_id': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46f31be5-6cff-44a8-9f7f-e0dfbc4df89f, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a1098b06-e219-4f9c-9e25-e366022a5f05) old=Port_Binding(mac=['fa:16:3e:07:b5:a8 2001:db8::f816:3eff:fe07:b5a8'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe07:b5a8/64', 'neutron:device_id': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:11:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:52.087 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a1098b06-e219-4f9c-9e25-e366022a5f05 in datapath ce7228b3-b329-4b48-a563-2dfcf286a94e updated
Jan 26 16:11:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:52.089 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce7228b3-b329-4b48-a563-2dfcf286a94e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:11:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:52.090 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da36afc8-d387-4c96-892d-8ce12cb47f9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:11:52 compute-0 nova_compute[239965]: 2026-01-26 16:11:52.116 239969 INFO nova.virt.libvirt.driver [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Snapshot image upload complete
Jan 26 16:11:52 compute-0 nova_compute[239965]: 2026-01-26 16:11:52.117 239969 DEBUG nova.compute.manager [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:11:52 compute-0 nova_compute[239965]: 2026-01-26 16:11:52.164 239969 INFO nova.compute.manager [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Shelve offloading
Jan 26 16:11:52 compute-0 nova_compute[239965]: 2026-01-26 16:11:52.170 239969 INFO nova.virt.libvirt.driver [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance destroyed successfully.
Jan 26 16:11:52 compute-0 nova_compute[239965]: 2026-01-26 16:11:52.170 239969 DEBUG nova.compute.manager [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:11:52 compute-0 nova_compute[239965]: 2026-01-26 16:11:52.172 239969 DEBUG oslo_concurrency.lockutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:11:52 compute-0 nova_compute[239965]: 2026-01-26 16:11:52.173 239969 DEBUG oslo_concurrency.lockutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquired lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:11:52 compute-0 nova_compute[239965]: 2026-01-26 16:11:52.173 239969 DEBUG nova.network.neutron [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:11:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 216 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.7 MiB/s wr, 148 op/s
Jan 26 16:11:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Jan 26 16:11:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Jan 26 16:11:52 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Jan 26 16:11:52 compute-0 ceph-mon[75140]: pgmap v1821: 305 pgs: 305 active+clean; 216 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.7 MiB/s wr, 148 op/s
Jan 26 16:11:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e290 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:53.231 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:11:53 compute-0 nova_compute[239965]: 2026-01-26 16:11:53.232 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:53.233 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:11:53 compute-0 ceph-mon[75140]: osdmap e290: 3 total, 3 up, 3 in
Jan 26 16:11:54 compute-0 nova_compute[239965]: 2026-01-26 16:11:54.173 239969 DEBUG nova.network.neutron [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating instance_info_cache with network_info: [{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:11:54 compute-0 nova_compute[239965]: 2026-01-26 16:11:54.196 239969 DEBUG oslo_concurrency.lockutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Releasing lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:11:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 246 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 200 op/s
Jan 26 16:11:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Jan 26 16:11:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Jan 26 16:11:54 compute-0 ceph-mon[75140]: pgmap v1823: 305 pgs: 305 active+clean; 246 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 200 op/s
Jan 26 16:11:54 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Jan 26 16:11:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Jan 26 16:11:55 compute-0 ceph-mon[75140]: osdmap e291: 3 total, 3 up, 3 in
Jan 26 16:11:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Jan 26 16:11:55 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.704 239969 INFO nova.virt.libvirt.driver [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance destroyed successfully.
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.705 239969 DEBUG nova.objects.instance [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'resources' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.719 239969 DEBUG nova.virt.libvirt.vif [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-944879613',display_name='tempest-ServersNegativeTestJSON-server-944879613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-944879613',id=106,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-np89kd30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-ServersNegativeTestJSON-1013867593-project-member',shelved_at='2026-01-26T16:11:52.117060',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='d2348ec7-2ac6-402c-ae5a-aa351210d1c0'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:11:45Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.720 239969 DEBUG nova.network.os_vif_util [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.721 239969 DEBUG nova.network.os_vif_util [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.722 239969 DEBUG os_vif [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.725 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.726 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8d3ce43-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.728 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.730 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.733 239969 INFO os_vif [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab')
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.820 239969 DEBUG nova.compute.manager [req-9beb1959-146f-4db7-b75a-352bbbd0182d req-b22151f3-6e59-4529-bd28-1b91b94dbe1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-changed-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.820 239969 DEBUG nova.compute.manager [req-9beb1959-146f-4db7-b75a-352bbbd0182d req-b22151f3-6e59-4529-bd28-1b91b94dbe1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Refreshing instance network info cache due to event network-changed-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.821 239969 DEBUG oslo_concurrency.lockutils [req-9beb1959-146f-4db7-b75a-352bbbd0182d req-b22151f3-6e59-4529-bd28-1b91b94dbe1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.821 239969 DEBUG oslo_concurrency.lockutils [req-9beb1959-146f-4db7-b75a-352bbbd0182d req-b22151f3-6e59-4529-bd28-1b91b94dbe1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:11:55 compute-0 nova_compute[239965]: 2026-01-26 16:11:55.821 239969 DEBUG nova.network.neutron [req-9beb1959-146f-4db7-b75a-352bbbd0182d req-b22151f3-6e59-4529-bd28-1b91b94dbe1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Refreshing network info cache for port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.039 239969 INFO nova.virt.libvirt.driver [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Deleting instance files /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_del
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.040 239969 INFO nova.virt.libvirt.driver [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Deletion of /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_del complete
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.121 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.179 239969 INFO nova.scheduler.client.report [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Deleted allocations for instance 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4
Jan 26 16:11:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 230 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 7.9 MiB/s wr, 289 op/s
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.228 239969 DEBUG oslo_concurrency.lockutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.228 239969 DEBUG oslo_concurrency.lockutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.250 239969 DEBUG oslo_concurrency.processutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:11:56 compute-0 ceph-mon[75140]: osdmap e292: 3 total, 3 up, 3 in
Jan 26 16:11:56 compute-0 ceph-mon[75140]: pgmap v1826: 305 pgs: 305 active+clean; 230 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 7.9 MiB/s wr, 289 op/s
Jan 26 16:11:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:11:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3662912829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.810 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.826 239969 DEBUG oslo_concurrency.processutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.833 239969 DEBUG nova.compute.provider_tree [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.852 239969 DEBUG nova.scheduler.client.report [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.873 239969 DEBUG oslo_concurrency.lockutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:56 compute-0 nova_compute[239965]: 2026-01-26 16:11:56.917 239969 DEBUG oslo_concurrency.lockutils [None req-02ad6c6b-9c1d-4b74-b44d-c9eea72d6f36 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3662912829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:57 compute-0 nova_compute[239965]: 2026-01-26 16:11:57.730 239969 DEBUG nova.network.neutron [req-9beb1959-146f-4db7-b75a-352bbbd0182d req-b22151f3-6e59-4529-bd28-1b91b94dbe1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updated VIF entry in instance network info cache for port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:11:57 compute-0 nova_compute[239965]: 2026-01-26 16:11:57.731 239969 DEBUG nova.network.neutron [req-9beb1959-146f-4db7-b75a-352bbbd0182d req-b22151f3-6e59-4529-bd28-1b91b94dbe1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating instance_info_cache with network_info: [{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:11:57 compute-0 nova_compute[239965]: 2026-01-26 16:11:57.756 239969 DEBUG oslo_concurrency.lockutils [req-9beb1959-146f-4db7-b75a-352bbbd0182d req-b22151f3-6e59-4529-bd28-1b91b94dbe1d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:11:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:11:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Jan 26 16:11:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Jan 26 16:11:57 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Jan 26 16:11:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 244 op/s
Jan 26 16:11:58 compute-0 nova_compute[239965]: 2026-01-26 16:11:58.700 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:58 compute-0 nova_compute[239965]: 2026-01-26 16:11:58.700 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:58 compute-0 nova_compute[239965]: 2026-01-26 16:11:58.719 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:11:58 compute-0 nova_compute[239965]: 2026-01-26 16:11:58.779 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:58 compute-0 nova_compute[239965]: 2026-01-26 16:11:58.780 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:58 compute-0 nova_compute[239965]: 2026-01-26 16:11:58.786 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:11:58 compute-0 nova_compute[239965]: 2026-01-26 16:11:58.787 239969 INFO nova.compute.claims [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.890521) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443918890569, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 804, "num_deletes": 253, "total_data_size": 927691, "memory_usage": 943480, "flush_reason": "Manual Compaction"}
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Jan 26 16:11:58 compute-0 ceph-mon[75140]: osdmap e293: 3 total, 3 up, 3 in
Jan 26 16:11:58 compute-0 ceph-mon[75140]: pgmap v1828: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 244 op/s
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443918900590, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 912860, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37288, "largest_seqno": 38091, "table_properties": {"data_size": 908568, "index_size": 2008, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9376, "raw_average_key_size": 19, "raw_value_size": 900091, "raw_average_value_size": 1911, "num_data_blocks": 88, "num_entries": 471, "num_filter_entries": 471, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769443878, "oldest_key_time": 1769443878, "file_creation_time": 1769443918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 10106 microseconds, and 6184 cpu microseconds.
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.900630) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 912860 bytes OK
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.900648) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.902329) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.902343) EVENT_LOG_v1 {"time_micros": 1769443918902338, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.902360) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 923574, prev total WAL file size 923574, number of live WAL files 2.
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.902960) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(891KB)], [80(9124KB)]
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443918903110, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 10256683, "oldest_snapshot_seqno": -1}
Jan 26 16:11:58 compute-0 nova_compute[239965]: 2026-01-26 16:11:58.917 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6472 keys, 8589568 bytes, temperature: kUnknown
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443918957708, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8589568, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8546652, "index_size": 25654, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 163547, "raw_average_key_size": 25, "raw_value_size": 8431102, "raw_average_value_size": 1302, "num_data_blocks": 1029, "num_entries": 6472, "num_filter_entries": 6472, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769443918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.958163) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8589568 bytes
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.960278) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.4 rd, 157.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.9 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(20.6) write-amplify(9.4) OK, records in: 6989, records dropped: 517 output_compression: NoCompression
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.960333) EVENT_LOG_v1 {"time_micros": 1769443918960310, "job": 46, "event": "compaction_finished", "compaction_time_micros": 54721, "compaction_time_cpu_micros": 23293, "output_level": 6, "num_output_files": 1, "total_output_size": 8589568, "num_input_records": 6989, "num_output_records": 6472, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443918960820, "job": 46, "event": "table_file_deletion", "file_number": 82}
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443918963967, "job": 46, "event": "table_file_deletion", "file_number": 80}
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.902863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.964036) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.964041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.964044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.964047) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:11:58.964049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:11:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:59.233 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:59.234 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:11:59.234 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:11:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3620610503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.468 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.474 239969 DEBUG nova.compute.provider_tree [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.498 239969 DEBUG nova.scheduler.client.report [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.590 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.592 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.641 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.642 239969 DEBUG nova.network.neutron [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.675 239969 INFO nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.700 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.765 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443904.7647502, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.766 239969 INFO nova.compute.manager [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Stopped (Lifecycle Event)
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.792 239969 DEBUG nova.compute.manager [None req-42ab4a78-b0b7-4ce7-9970-5c51fd94ca38 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.797 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.798 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.799 239969 INFO nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Creating image(s)
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.826 239969 DEBUG nova.storage.rbd_utils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.851 239969 DEBUG nova.storage.rbd_utils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.883 239969 DEBUG nova.storage.rbd_utils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.887 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:11:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3620610503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.931 239969 DEBUG nova.policy [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.968 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.969 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.970 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.971 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.995 239969 DEBUG nova.storage.rbd_utils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:11:59 compute-0 nova_compute[239965]: 2026-01-26 16:11:59.999 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 9.0 KiB/s wr, 174 op/s
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.299 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.300s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.355 239969 DEBUG nova.storage.rbd_utils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.422 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.422 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.423 239969 INFO nova.compute.manager [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Unshelving
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.428 239969 DEBUG nova.objects.instance [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 67eb236e-aeee-43e6-9b43-41c17fcc37a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.445 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.445 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Ensure instance console log exists: /var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.446 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.446 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.446 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.537 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.538 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.542 239969 DEBUG nova.objects.instance [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'pci_requests' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.557 239969 DEBUG nova.objects.instance [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.569 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.570 239969 INFO nova.compute.claims [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.705 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.751 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:00 compute-0 nova_compute[239965]: 2026-01-26 16:12:00.779 239969 DEBUG nova.network.neutron [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Successfully created port: 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:12:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Jan 26 16:12:00 compute-0 ceph-mon[75140]: pgmap v1829: 305 pgs: 305 active+clean; 167 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 9.0 KiB/s wr, 174 op/s
Jan 26 16:12:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Jan 26 16:12:01 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Jan 26 16:12:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:12:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162950821' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:01 compute-0 nova_compute[239965]: 2026-01-26 16:12:01.285 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:01 compute-0 nova_compute[239965]: 2026-01-26 16:12:01.292 239969 DEBUG nova.compute.provider_tree [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:12:01 compute-0 nova_compute[239965]: 2026-01-26 16:12:01.296 239969 DEBUG nova.network.neutron [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Successfully created port: 5e1d227f-594c-4a1e-8f28-56469a20df56 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:12:01 compute-0 nova_compute[239965]: 2026-01-26 16:12:01.305 239969 DEBUG nova.scheduler.client.report [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:12:01 compute-0 nova_compute[239965]: 2026-01-26 16:12:01.328 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:01 compute-0 nova_compute[239965]: 2026-01-26 16:12:01.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:01 compute-0 nova_compute[239965]: 2026-01-26 16:12:01.578 239969 INFO nova.network.neutron [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 26 16:12:01 compute-0 nova_compute[239965]: 2026-01-26 16:12:01.811 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:02 compute-0 ceph-mon[75140]: osdmap e294: 3 total, 3 up, 3 in
Jan 26 16:12:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4162950821' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 202 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.0 MiB/s wr, 175 op/s
Jan 26 16:12:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Jan 26 16:12:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Jan 26 16:12:02 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Jan 26 16:12:03 compute-0 ceph-mon[75140]: pgmap v1831: 305 pgs: 305 active+clean; 202 MiB data, 794 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 2.0 MiB/s wr, 175 op/s
Jan 26 16:12:03 compute-0 ceph-mon[75140]: osdmap e295: 3 total, 3 up, 3 in
Jan 26 16:12:03 compute-0 nova_compute[239965]: 2026-01-26 16:12:03.196 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:03 compute-0 nova_compute[239965]: 2026-01-26 16:12:03.197 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquired lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:03 compute-0 nova_compute[239965]: 2026-01-26 16:12:03.197 239969 DEBUG nova.network.neutron [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:12:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:03.234 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:03 compute-0 nova_compute[239965]: 2026-01-26 16:12:03.257 239969 DEBUG nova.network.neutron [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Successfully updated port: 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:12:03 compute-0 nova_compute[239965]: 2026-01-26 16:12:03.305 239969 DEBUG nova.compute.manager [req-769ee9ca-7c1b-45f0-a149-199dec643334 req-78afc6d0-7b98-4850-ae69-c89b4a58bad9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-changed-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:03 compute-0 nova_compute[239965]: 2026-01-26 16:12:03.306 239969 DEBUG nova.compute.manager [req-769ee9ca-7c1b-45f0-a149-199dec643334 req-78afc6d0-7b98-4850-ae69-c89b4a58bad9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Refreshing instance network info cache due to event network-changed-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:12:03 compute-0 nova_compute[239965]: 2026-01-26 16:12:03.307 239969 DEBUG oslo_concurrency.lockutils [req-769ee9ca-7c1b-45f0-a149-199dec643334 req-78afc6d0-7b98-4850-ae69-c89b4a58bad9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 213 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.4 MiB/s wr, 75 op/s
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.583 239969 DEBUG nova.network.neutron [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Successfully updated port: 5e1d227f-594c-4a1e-8f28-56469a20df56 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.603 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.604 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.604 239969 DEBUG nova.network.neutron [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.766 239969 DEBUG nova.network.neutron [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating instance_info_cache with network_info: [{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.787 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Releasing lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.788 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.789 239969 INFO nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Creating image(s)
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.809 239969 DEBUG nova.storage.rbd_utils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.813 239969 DEBUG nova.objects.instance [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.814 239969 DEBUG oslo_concurrency.lockutils [req-769ee9ca-7c1b-45f0-a149-199dec643334 req-78afc6d0-7b98-4850-ae69-c89b4a58bad9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.814 239969 DEBUG nova.network.neutron [req-769ee9ca-7c1b-45f0-a149-199dec643334 req-78afc6d0-7b98-4850-ae69-c89b4a58bad9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Refreshing network info cache for port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.850 239969 DEBUG nova.storage.rbd_utils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.879 239969 DEBUG nova.storage.rbd_utils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.883 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "56637848f0d0f5d3f31cbd5361defebcca0d0baa" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.884 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "56637848f0d0f5d3f31cbd5361defebcca0d0baa" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:04 compute-0 nova_compute[239965]: 2026-01-26 16:12:04.888 239969 DEBUG nova.network.neutron [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.262 239969 DEBUG nova.virt.libvirt.imagebackend [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Image locations are: [{'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/d2348ec7-2ac6-402c-ae5a-aa351210d1c0/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/d2348ec7-2ac6-402c-ae5a-aa351210d1c0/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 16:12:05 compute-0 ceph-mon[75140]: pgmap v1833: 305 pgs: 305 active+clean; 213 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.4 MiB/s wr, 75 op/s
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.316 239969 DEBUG nova.virt.libvirt.imagebackend [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Selected location: {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/d2348ec7-2ac6-402c-ae5a-aa351210d1c0/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.317 239969 DEBUG nova.storage.rbd_utils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] cloning images/d2348ec7-2ac6-402c-ae5a-aa351210d1c0@snap to None/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.407 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "56637848f0d0f5d3f31cbd5361defebcca0d0baa" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.440 239969 DEBUG nova.compute.manager [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-changed-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.441 239969 DEBUG nova.compute.manager [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Refreshing instance network info cache due to event network-changed-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.441 239969 DEBUG oslo_concurrency.lockutils [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.524 239969 DEBUG nova.objects.instance [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'migration_context' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.629 239969 DEBUG nova.storage.rbd_utils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] flattening vms/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:12:05 compute-0 nova_compute[239965]: 2026-01-26 16:12:05.753 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:05 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.149 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Image rbd:vms/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.150 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.150 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Ensure instance console log exists: /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.151 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.151 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.152 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.155 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Start _get_guest_xml network_info=[{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T16:11:42Z,direct_url=<?>,disk_format='raw',id=d2348ec7-2ac6-402c-ae5a-aa351210d1c0,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-944879613-shelved',owner='d8d1cdeaf6004b94bba2d3c727959e51',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T16:11:51Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.160 239969 WARNING nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.166 239969 DEBUG nova.virt.libvirt.host [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.167 239969 DEBUG nova.virt.libvirt.host [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.171 239969 DEBUG nova.virt.libvirt.host [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.172 239969 DEBUG nova.virt.libvirt.host [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.172 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.172 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T16:11:42Z,direct_url=<?>,disk_format='raw',id=d2348ec7-2ac6-402c-ae5a-aa351210d1c0,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-944879613-shelved',owner='d8d1cdeaf6004b94bba2d3c727959e51',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T16:11:51Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.173 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.173 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.174 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.174 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.174 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.174 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.175 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.175 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.175 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.175 239969 DEBUG nova.virt.hardware [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.175 239969 DEBUG nova.objects.instance [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.194 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 258 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 8.0 MiB/s wr, 80 op/s
Jan 26 16:12:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:12:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2551630691' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.752 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.778 239969 DEBUG nova.storage.rbd_utils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.782 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:06 compute-0 nova_compute[239965]: 2026-01-26 16:12:06.820 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Jan 26 16:12:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Jan 26 16:12:07 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Jan 26 16:12:07 compute-0 ceph-mon[75140]: pgmap v1834: 305 pgs: 305 active+clean; 258 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 8.0 MiB/s wr, 80 op/s
Jan 26 16:12:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2551630691' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:12:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2474438054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.372 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.375 239969 DEBUG nova.virt.libvirt.vif [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-944879613',display_name='tempest-ServersNegativeTestJSON-server-944879613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-944879613',id=106,image_ref='d2348ec7-2ac6-402c-ae5a-aa351210d1c0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-np89kd30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-ServersNegativeTestJSON-1013867593-project-member',shelved_at='2026-01-26T16:11:52.117060',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='d2348ec7-2ac6-402c-ae5a-aa351210d1c0'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:12:00Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.375 239969 DEBUG nova.network.os_vif_util [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.376 239969 DEBUG nova.network.os_vif_util [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.378 239969 DEBUG nova.objects.instance [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.399 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <uuid>20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4</uuid>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <name>instance-0000006a</name>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <nova:name>tempest-ServersNegativeTestJSON-server-944879613</nova:name>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:12:06</nova:creationTime>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <nova:user uuid="2d9b11d846a34b6e892fb80186bedba5">tempest-ServersNegativeTestJSON-1013867593-project-member</nova:user>
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <nova:project uuid="d8d1cdeaf6004b94bba2d3c727959e51">tempest-ServersNegativeTestJSON-1013867593</nova:project>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="d2348ec7-2ac6-402c-ae5a-aa351210d1c0"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <nova:port uuid="d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3">
Jan 26 16:12:07 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <system>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <entry name="serial">20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4</entry>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <entry name="uuid">20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4</entry>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     </system>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <os>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   </os>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <features>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   </features>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk">
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       </source>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config">
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       </source>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:12:07 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:74:58:77"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <target dev="tapd8d3ce43-ab"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/console.log" append="off"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <video>
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     </video>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:12:07 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:12:07 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:12:07 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:12:07 compute-0 nova_compute[239965]: </domain>
Jan 26 16:12:07 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.402 239969 DEBUG nova.compute.manager [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Preparing to wait for external event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.402 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.403 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.403 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.404 239969 DEBUG nova.virt.libvirt.vif [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-944879613',display_name='tempest-ServersNegativeTestJSON-server-944879613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-944879613',id=106,image_ref='d2348ec7-2ac6-402c-ae5a-aa351210d1c0',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:10:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-np89kd30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-ServersNegativeTestJSON-1013867593-project-member',shelved_at='2026-01-26T16:11:52.117060',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='d2348ec7-2ac6-402c-ae5a-aa351210d1c0'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:12:00Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.405 239969 DEBUG nova.network.os_vif_util [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.406 239969 DEBUG nova.network.os_vif_util [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.406 239969 DEBUG os_vif [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.407 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.408 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.408 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.414 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.414 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8d3ce43-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.415 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8d3ce43-ab, col_values=(('external_ids', {'iface-id': 'd8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:58:77', 'vm-uuid': '20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.417 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:07 compute-0 NetworkManager[48954]: <info>  [1769443927.4182] manager: (tapd8d3ce43-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/437)
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.418 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.423 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.424 239969 INFO os_vif [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab')
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.473 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.474 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.474 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] No VIF found with MAC fa:16:3e:74:58:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.475 239969 INFO nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Using config drive
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.504 239969 DEBUG nova.storage.rbd_utils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.533 239969 DEBUG nova.objects.instance [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:07 compute-0 rsyslogd[1006]: imjournal from <np0005595828:nova_compute>: begin to drop messages due to rate-limiting
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.592 239969 DEBUG nova.network.neutron [req-769ee9ca-7c1b-45f0-a149-199dec643334 req-78afc6d0-7b98-4850-ae69-c89b4a58bad9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updated VIF entry in instance network info cache for port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.594 239969 DEBUG nova.network.neutron [req-769ee9ca-7c1b-45f0-a149-199dec643334 req-78afc6d0-7b98-4850-ae69-c89b4a58bad9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating instance_info_cache with network_info: [{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.611 239969 DEBUG oslo_concurrency.lockutils [req-769ee9ca-7c1b-45f0-a149-199dec643334 req-78afc6d0-7b98-4850-ae69-c89b4a58bad9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.612 239969 DEBUG nova.objects.instance [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'keypairs' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.978 239969 INFO nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Creating config drive at /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config
Jan 26 16:12:07 compute-0 nova_compute[239965]: 2026-01-26 16:12:07.983 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg3bhy3uk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.126 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg3bhy3uk" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.154 239969 DEBUG nova.storage.rbd_utils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] rbd image 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.158 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 385 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 22 MiB/s wr, 229 op/s
Jan 26 16:12:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.290 239969 DEBUG oslo_concurrency.processutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.291 239969 INFO nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Deleting local config drive /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4/disk.config because it was imported into RBD.
Jan 26 16:12:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Jan 26 16:12:08 compute-0 ceph-mon[75140]: osdmap e296: 3 total, 3 up, 3 in
Jan 26 16:12:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2474438054' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:08 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Jan 26 16:12:08 compute-0 kernel: tapd8d3ce43-ab: entered promiscuous mode
Jan 26 16:12:08 compute-0 NetworkManager[48954]: <info>  [1769443928.3430] manager: (tapd8d3ce43-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/438)
Jan 26 16:12:08 compute-0 ovn_controller[146046]: 2026-01-26T16:12:08Z|01065|binding|INFO|Claiming lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for this chassis.
Jan 26 16:12:08 compute-0 ovn_controller[146046]: 2026-01-26T16:12:08Z|01066|binding|INFO|d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3: Claiming fa:16:3e:74:58:77 10.100.0.6
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.350 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.359 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:58:77 10.100.0.6'], port_security=['fa:16:3e:74:58:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fb0a5e0f-b1d5-4f6c-9884-3b66f10a59d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=568e2bde-137f-4351-b2df-67acb89970b2, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.360 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 in datapath 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 bound to our chassis
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.362 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:12:08 compute-0 ovn_controller[146046]: 2026-01-26T16:12:08Z|01067|binding|INFO|Setting lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 ovn-installed in OVS
Jan 26 16:12:08 compute-0 ovn_controller[146046]: 2026-01-26T16:12:08Z|01068|binding|INFO|Setting lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 up in Southbound
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.367 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.374 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[71c4f6ad-2bb6-454f-bf92-8345ed42ed7c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.374 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74a6493e-91 in ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:12:08 compute-0 systemd-udevd[332843]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.376 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74a6493e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.376 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d36c6b0-8380-405a-9e0d-0a9d4c5ed4b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.376 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[44eda513-dfe5-4939-9b8b-ce7210a9843b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.387 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[0a73990a-35e9-4e73-9221-6ec30486be9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 NetworkManager[48954]: <info>  [1769443928.3904] device (tapd8d3ce43-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:12:08 compute-0 NetworkManager[48954]: <info>  [1769443928.3913] device (tapd8d3ce43-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:12:08 compute-0 systemd-machined[208061]: New machine qemu-134-instance-0000006a.
Jan 26 16:12:08 compute-0 systemd[1]: Started Virtual Machine qemu-134-instance-0000006a.
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.416 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7d9c07-6cd6-4326-889c-e669b20becf3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.444 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[71424f82-fdc4-4da0-ad21-6a719310ef5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.452 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7ed486ae-8c5b-4019-8d0f-8b498b0ec5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 systemd-udevd[332848]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:12:08 compute-0 NetworkManager[48954]: <info>  [1769443928.4546] manager: (tap74a6493e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/439)
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.485 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ea150ebb-942a-4f8a-bd4f-0274707d9bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.489 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4917a1e9-33a2-44a8-8ecd-d8bc31f21e62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 NetworkManager[48954]: <info>  [1769443928.5116] device (tap74a6493e-90): carrier: link connected
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.516 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d1d8f62c-1d17-42bc-9f26-d4dabc19301e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.540 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a5303e-4b9f-4461-9dac-93fcbd8c614a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74a6493e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:10:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541931, 'reachable_time': 30748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 332878, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.559 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8bf9f14e-ddac-4526-834c-da438c3fcd2a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:10c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 541931, 'tstamp': 541931}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 332879, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.579 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fe97eb34-a909-41bd-8abf-86fd38293a3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74a6493e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:10:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 314], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541931, 'reachable_time': 30748, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 332880, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.618 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7982f2-11c7-4752-9f88-2ea29a9aa0e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.675 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a579f3ed-39fe-43a6-986c-e31a42a2e47d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.678 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a6493e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.679 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.679 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74a6493e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.681 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:08 compute-0 kernel: tap74a6493e-90: entered promiscuous mode
Jan 26 16:12:08 compute-0 NetworkManager[48954]: <info>  [1769443928.6824] manager: (tap74a6493e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/440)
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.690 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74a6493e-90, col_values=(('external_ids', {'iface-id': 'fb7d9679-6ae1-476e-99df-96ff9d9ad5e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.691 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.692 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:08 compute-0 ovn_controller[146046]: 2026-01-26T16:12:08Z|01069|binding|INFO|Releasing lport fb7d9679-6ae1-476e-99df-96ff9d9ad5e9 from this chassis (sb_readonly=0)
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.707 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.706 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.708 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[50a95674-4fc9-4937-9fd5-155a52002ab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.709 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.pid.haproxy
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:12:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:08.709 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'env', 'PROCESS_TAG=haproxy-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.722 239969 DEBUG nova.network.neutron [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updating instance_info_cache with network_info: [{"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.741 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.742 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Instance network_info: |[{"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.742 239969 DEBUG oslo_concurrency.lockutils [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.742 239969 DEBUG nova.network.neutron [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Refreshing network info cache for port 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.745 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Start _get_guest_xml network_info=[{"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.750 239969 WARNING nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.755 239969 DEBUG nova.virt.libvirt.host [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.756 239969 DEBUG nova.virt.libvirt.host [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.765 239969 DEBUG nova.virt.libvirt.host [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.766 239969 DEBUG nova.virt.libvirt.host [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.766 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.766 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.767 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.767 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.767 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.768 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.768 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.768 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.768 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.769 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.769 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.769 239969 DEBUG nova.virt.hardware [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.773 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.897 239969 DEBUG nova.compute.manager [req-56f388d9-94c4-46d6-abc5-7c9d357541f7 req-8d194293-8f19-4186-97bf-636362b296e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.898 239969 DEBUG oslo_concurrency.lockutils [req-56f388d9-94c4-46d6-abc5-7c9d357541f7 req-8d194293-8f19-4186-97bf-636362b296e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.898 239969 DEBUG oslo_concurrency.lockutils [req-56f388d9-94c4-46d6-abc5-7c9d357541f7 req-8d194293-8f19-4186-97bf-636362b296e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.898 239969 DEBUG oslo_concurrency.lockutils [req-56f388d9-94c4-46d6-abc5-7c9d357541f7 req-8d194293-8f19-4186-97bf-636362b296e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:08 compute-0 nova_compute[239965]: 2026-01-26 16:12:08.898 239969 DEBUG nova.compute.manager [req-56f388d9-94c4-46d6-abc5-7c9d357541f7 req-8d194293-8f19-4186-97bf-636362b296e0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Processing event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:12:09 compute-0 podman[332932]: 2026-01-26 16:12:09.084611642 +0000 UTC m=+0.055386536 container create 0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:12:09 compute-0 systemd[1]: Started libpod-conmon-0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce.scope.
Jan 26 16:12:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/916d4919df42cf4a959d90cbb43e8408e43f913698ef2d202c4ebfe0715e1555/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:09 compute-0 podman[332932]: 2026-01-26 16:12:09.057495068 +0000 UTC m=+0.028269982 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:12:09 compute-0 podman[332932]: 2026-01-26 16:12:09.277558911 +0000 UTC m=+0.248333835 container init 0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:12:09 compute-0 podman[332932]: 2026-01-26 16:12:09.28482553 +0000 UTC m=+0.255600424 container start 0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:12:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Jan 26 16:12:09 compute-0 ceph-mon[75140]: pgmap v1836: 305 pgs: 305 active+clean; 385 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 22 MiB/s wr, 229 op/s
Jan 26 16:12:09 compute-0 ceph-mon[75140]: osdmap e297: 3 total, 3 up, 3 in
Jan 26 16:12:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:12:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1429065568' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Jan 26 16:12:09 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Jan 26 16:12:09 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[332947]: [NOTICE]   (332951) : New worker (332962) forked
Jan 26 16:12:09 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[332947]: [NOTICE]   (332951) : Loading success.
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.335 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.363 239969 DEBUG nova.storage.rbd_utils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.368 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.520 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443929.5188522, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.520 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Started (Lifecycle Event)
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.524 239969 DEBUG nova.compute.manager [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.528 239969 DEBUG nova.virt.libvirt.driver [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.542 239969 INFO nova.virt.libvirt.driver [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance spawned successfully.
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.546 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.550 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.572 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.573 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443929.5191135, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.573 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Paused (Lifecycle Event)
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.593 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.597 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443929.5275066, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.597 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Resumed (Lifecycle Event)
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.622 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.626 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.654 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:12:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:12:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4068183313' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.984 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.985 239969 DEBUG nova.virt.libvirt.vif [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:11:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-4860381',display_name='tempest-TestGettingAddress-server-4860381',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-4860381',id=109,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxkt7s3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:11:59Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=67eb236e-aeee-43e6-9b43-41c17fcc37a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.985 239969 DEBUG nova.network.os_vif_util [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.986 239969 DEBUG nova.network.os_vif_util [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:8d:bc,bridge_name='br-int',has_traffic_filtering=True,id=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24c5ed41-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.987 239969 DEBUG nova.virt.libvirt.vif [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:11:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-4860381',display_name='tempest-TestGettingAddress-server-4860381',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-4860381',id=109,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxkt7s3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:11:59Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=67eb236e-aeee-43e6-9b43-41c17fcc37a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.987 239969 DEBUG nova.network.os_vif_util [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.988 239969 DEBUG nova.network.os_vif_util [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:4d,bridge_name='br-int',has_traffic_filtering=True,id=5e1d227f-594c-4a1e-8f28-56469a20df56,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e1d227f-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:09 compute-0 nova_compute[239965]: 2026-01-26 16:12:09.989 239969 DEBUG nova.objects.instance [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 67eb236e-aeee-43e6-9b43-41c17fcc37a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.006 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <uuid>67eb236e-aeee-43e6-9b43-41c17fcc37a5</uuid>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <name>instance-0000006d</name>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-4860381</nova:name>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:12:08</nova:creationTime>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <nova:port uuid="24c5ed41-3d5f-47a8-a80b-a40478dcb4b3">
Jan 26 16:12:10 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <nova:port uuid="5e1d227f-594c-4a1e-8f28-56469a20df56">
Jan 26 16:12:10 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe42:834d" ipVersion="6"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe42:834d" ipVersion="6"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <system>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <entry name="serial">67eb236e-aeee-43e6-9b43-41c17fcc37a5</entry>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <entry name="uuid">67eb236e-aeee-43e6-9b43-41c17fcc37a5</entry>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </system>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <os>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   </os>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <features>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   </features>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk">
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       </source>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk.config">
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       </source>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:12:10 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e8:8d:bc"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <target dev="tap24c5ed41-3d"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:42:83:4d"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <target dev="tap5e1d227f-59"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5/console.log" append="off"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <video>
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </video>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:12:10 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:12:10 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:12:10 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:12:10 compute-0 nova_compute[239965]: </domain>
Jan 26 16:12:10 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.007 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Preparing to wait for external event network-vif-plugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.007 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.007 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.008 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.008 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Preparing to wait for external event network-vif-plugged-5e1d227f-594c-4a1e-8f28-56469a20df56 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.008 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.008 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.008 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.009 239969 DEBUG nova.virt.libvirt.vif [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:11:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-4860381',display_name='tempest-TestGettingAddress-server-4860381',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-4860381',id=109,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxkt7s3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:11:59Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=67eb236e-aeee-43e6-9b43-41c17fcc37a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.009 239969 DEBUG nova.network.os_vif_util [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.009 239969 DEBUG nova.network.os_vif_util [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e8:8d:bc,bridge_name='br-int',has_traffic_filtering=True,id=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24c5ed41-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.010 239969 DEBUG os_vif [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:8d:bc,bridge_name='br-int',has_traffic_filtering=True,id=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24c5ed41-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.010 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.011 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.011 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.014 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap24c5ed41-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.014 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap24c5ed41-3d, col_values=(('external_ids', {'iface-id': '24c5ed41-3d5f-47a8-a80b-a40478dcb4b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e8:8d:bc', 'vm-uuid': '67eb236e-aeee-43e6-9b43-41c17fcc37a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.015 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 NetworkManager[48954]: <info>  [1769443930.0169] manager: (tap24c5ed41-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.018 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.022 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.023 239969 INFO os_vif [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e8:8d:bc,bridge_name='br-int',has_traffic_filtering=True,id=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24c5ed41-3d')
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.024 239969 DEBUG nova.virt.libvirt.vif [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:11:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-4860381',display_name='tempest-TestGettingAddress-server-4860381',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-4860381',id=109,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxkt7s3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:11:59Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=67eb236e-aeee-43e6-9b43-41c17fcc37a5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.024 239969 DEBUG nova.network.os_vif_util [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.025 239969 DEBUG nova.network.os_vif_util [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:4d,bridge_name='br-int',has_traffic_filtering=True,id=5e1d227f-594c-4a1e-8f28-56469a20df56,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e1d227f-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.025 239969 DEBUG os_vif [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:4d,bridge_name='br-int',has_traffic_filtering=True,id=5e1d227f-594c-4a1e-8f28-56469a20df56,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e1d227f-59') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.026 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.026 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.026 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.030 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e1d227f-59, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.030 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5e1d227f-59, col_values=(('external_ids', {'iface-id': '5e1d227f-594c-4a1e-8f28-56469a20df56', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:42:83:4d', 'vm-uuid': '67eb236e-aeee-43e6-9b43-41c17fcc37a5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.031 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 NetworkManager[48954]: <info>  [1769443930.0323] manager: (tap5e1d227f-59): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.038 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.039 239969 INFO os_vif [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:42:83:4d,bridge_name='br-int',has_traffic_filtering=True,id=5e1d227f-594c-4a1e-8f28-56469a20df56,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e1d227f-59')
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.095 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.096 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.096 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:e8:8d:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.096 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:42:83:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.097 239969 INFO nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Using config drive
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.119 239969 DEBUG nova.storage.rbd_utils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.137 239969 DEBUG nova.network.neutron [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updated VIF entry in instance network info cache for port 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.138 239969 DEBUG nova.network.neutron [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updating instance_info_cache with network_info: [{"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.197 239969 DEBUG oslo_concurrency.lockutils [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.197 239969 DEBUG nova.compute.manager [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-changed-5e1d227f-594c-4a1e-8f28-56469a20df56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.197 239969 DEBUG nova.compute.manager [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Refreshing instance network info cache due to event network-changed-5e1d227f-594c-4a1e-8f28-56469a20df56. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.198 239969 DEBUG oslo_concurrency.lockutils [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.198 239969 DEBUG oslo_concurrency.lockutils [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.198 239969 DEBUG nova.network.neutron [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Refreshing network info cache for port 5e1d227f-594c-4a1e-8f28-56469a20df56 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:12:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 385 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 24 MiB/s wr, 197 op/s
Jan 26 16:12:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1429065568' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:10 compute-0 ceph-mon[75140]: osdmap e298: 3 total, 3 up, 3 in
Jan 26 16:12:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4068183313' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Jan 26 16:12:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Jan 26 16:12:10 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.561 239969 INFO nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Creating config drive at /var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5/disk.config
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.567 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjz8tyxh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.711 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdjz8tyxh" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.736 239969 DEBUG nova.storage.rbd_utils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.740 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5/disk.config 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.862 239969 DEBUG oslo_concurrency.processutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5/disk.config 67eb236e-aeee-43e6-9b43-41c17fcc37a5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.863 239969 INFO nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Deleting local config drive /var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5/disk.config because it was imported into RBD.
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.920 239969 DEBUG nova.compute.manager [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:10 compute-0 NetworkManager[48954]: <info>  [1769443930.9332] manager: (tap24c5ed41-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Jan 26 16:12:10 compute-0 kernel: tap24c5ed41-3d: entered promiscuous mode
Jan 26 16:12:10 compute-0 systemd-udevd[332863]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:12:10 compute-0 ovn_controller[146046]: 2026-01-26T16:12:10Z|01070|binding|INFO|Claiming lport 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 for this chassis.
Jan 26 16:12:10 compute-0 ovn_controller[146046]: 2026-01-26T16:12:10Z|01071|binding|INFO|24c5ed41-3d5f-47a8-a80b-a40478dcb4b3: Claiming fa:16:3e:e8:8d:bc 10.100.0.13
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.948 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 NetworkManager[48954]: <info>  [1769443930.9515] manager: (tap5e1d227f-59): new Tun device (/org/freedesktop/NetworkManager/Devices/444)
Jan 26 16:12:10 compute-0 kernel: tap5e1d227f-59: entered promiscuous mode
Jan 26 16:12:10 compute-0 ovn_controller[146046]: 2026-01-26T16:12:10Z|01072|if_status|INFO|Not updating pb chassis for 5e1d227f-594c-4a1e-8f28-56469a20df56 now as sb is readonly
Jan 26 16:12:10 compute-0 nova_compute[239965]: 2026-01-26 16:12:10.955 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:10 compute-0 ovn_controller[146046]: 2026-01-26T16:12:10Z|01073|binding|INFO|Claiming lport 5e1d227f-594c-4a1e-8f28-56469a20df56 for this chassis.
Jan 26 16:12:10 compute-0 ovn_controller[146046]: 2026-01-26T16:12:10Z|01074|binding|INFO|5e1d227f-594c-4a1e-8f28-56469a20df56: Claiming fa:16:3e:42:83:4d 2001:db8:0:1:f816:3eff:fe42:834d 2001:db8::f816:3eff:fe42:834d
Jan 26 16:12:10 compute-0 NetworkManager[48954]: <info>  [1769443930.9615] device (tap24c5ed41-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:12:10 compute-0 NetworkManager[48954]: <info>  [1769443930.9636] device (tap24c5ed41-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:12:10 compute-0 NetworkManager[48954]: <info>  [1769443930.9642] device (tap5e1d227f-59): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:12:10 compute-0 NetworkManager[48954]: <info>  [1769443930.9652] device (tap5e1d227f-59): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.962 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:8d:bc 10.100.0.13'], port_security=['fa:16:3e:e8:8d:bc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '67eb236e-aeee-43e6-9b43-41c17fcc37a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '85d3d038-2bc9-4f80-8a4a-2f5ebd3d53a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7117378c-aa96-48b6-b2fa-6c8b5ccb02bd, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.963 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 in datapath daa791d6-dfc2-418f-92aa-2771eaf5b1a9 bound to our chassis
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.966 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daa791d6-dfc2-418f-92aa-2771eaf5b1a9
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.968 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:83:4d 2001:db8:0:1:f816:3eff:fe42:834d 2001:db8::f816:3eff:fe42:834d'], port_security=['fa:16:3e:42:83:4d 2001:db8:0:1:f816:3eff:fe42:834d 2001:db8::f816:3eff:fe42:834d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe42:834d/64 2001:db8::f816:3eff:fe42:834d/64', 'neutron:device_id': '67eb236e-aeee-43e6-9b43-41c17fcc37a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '85d3d038-2bc9-4f80-8a4a-2f5ebd3d53a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46f31be5-6cff-44a8-9f7f-e0dfbc4df89f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5e1d227f-594c-4a1e-8f28-56469a20df56) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.979 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d69b683-d54d-4660-baf1-c44cdbdc3d41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.980 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdaa791d6-d1 in ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.982 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdaa791d6-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.982 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[097d5f43-0633-49f7-ad1a-6c1c50bd422b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.983 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7a4b5e-ae6e-40a5-9983-218963e5a2e2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:10.995 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[5c67ca53-7b7e-4c0e-9271-e997233036d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:10 compute-0 systemd-machined[208061]: New machine qemu-135-instance-0000006d.
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.000 239969 DEBUG oslo_concurrency.lockutils [None req-84e07bea-6033-4fa1-b439-5f3e34299c4d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.022 239969 DEBUG nova.compute.manager [req-0de9c01a-a13b-4e6c-a997-efadffda0331 req-06b3847f-8808-4c96-8349-40b681234dcc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.023 239969 DEBUG oslo_concurrency.lockutils [req-0de9c01a-a13b-4e6c-a997-efadffda0331 req-06b3847f-8808-4c96-8349-40b681234dcc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.023 239969 DEBUG oslo_concurrency.lockutils [req-0de9c01a-a13b-4e6c-a997-efadffda0331 req-06b3847f-8808-4c96-8349-40b681234dcc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.023 239969 DEBUG oslo_concurrency.lockutils [req-0de9c01a-a13b-4e6c-a997-efadffda0331 req-06b3847f-8808-4c96-8349-40b681234dcc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:11 compute-0 systemd[1]: Started Virtual Machine qemu-135-instance-0000006d.
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.023 239969 DEBUG nova.compute.manager [req-0de9c01a-a13b-4e6c-a997-efadffda0331 req-06b3847f-8808-4c96-8349-40b681234dcc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.023 239969 WARNING nova.compute.manager [req-0de9c01a-a13b-4e6c-a997-efadffda0331 req-06b3847f-8808-4c96-8349-40b681234dcc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received unexpected event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with vm_state active and task_state None.
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.026 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[23907af0-6512-4167-a08e-b45d3910d5a6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.058 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[29013f7d-1304-4aa2-92ae-69f26493388c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.061 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.066 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[33cc190c-c5d8-4cf3-8e17-1c9e885e1450]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 NetworkManager[48954]: <info>  [1769443931.0668] manager: (tapdaa791d6-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/445)
Jan 26 16:12:11 compute-0 ovn_controller[146046]: 2026-01-26T16:12:11Z|01075|binding|INFO|Setting lport 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 ovn-installed in OVS
Jan 26 16:12:11 compute-0 ovn_controller[146046]: 2026-01-26T16:12:11Z|01076|binding|INFO|Setting lport 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 up in Southbound
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:11 compute-0 ovn_controller[146046]: 2026-01-26T16:12:11Z|01077|binding|INFO|Setting lport 5e1d227f-594c-4a1e-8f28-56469a20df56 ovn-installed in OVS
Jan 26 16:12:11 compute-0 ovn_controller[146046]: 2026-01-26T16:12:11Z|01078|binding|INFO|Setting lport 5e1d227f-594c-4a1e-8f28-56469a20df56 up in Southbound
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.089 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.098 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[554b2b27-75e6-48c3-be1c-eb26c55a6830]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.101 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a087c1a7-893b-46e8-a1b2-21a172d62571]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 NetworkManager[48954]: <info>  [1769443931.1267] device (tapdaa791d6-d0): carrier: link connected
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.133 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f525d98f-2d6e-4c41-88c6-c78a12514b9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.150 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[542aad2d-cfbd-4d16-b60d-691a3536d504]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaa791d6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:4e:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542193, 'reachable_time': 23754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333145, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.168 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a8bd226a-71fe-43a5-9f0b-ad98004ab810]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe21:4e4f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542193, 'tstamp': 542193}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333146, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.187 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[28f669a2-ed49-4103-9766-08cf620cdd24]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaa791d6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:4e:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542193, 'reachable_time': 23754, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333147, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.218 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c63fe0b7-f32f-4404-8ae5-c775ca0fd5a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.275 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4b5a2502-7b16-4d77-9f79-d7661836f696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.277 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaa791d6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.277 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.278 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaa791d6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:11 compute-0 NetworkManager[48954]: <info>  [1769443931.2805] manager: (tapdaa791d6-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Jan 26 16:12:11 compute-0 kernel: tapdaa791d6-d0: entered promiscuous mode
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.279 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.284 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaa791d6-d0, col_values=(('external_ids', {'iface-id': 'e0fa1a12-fa6c-4878-8114-78f4badaec8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:11 compute-0 ovn_controller[146046]: 2026-01-26T16:12:11Z|01079|binding|INFO|Releasing lport e0fa1a12-fa6c-4878-8114-78f4badaec8c from this chassis (sb_readonly=0)
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.288 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/daa791d6-dfc2-418f-92aa-2771eaf5b1a9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/daa791d6-dfc2-418f-92aa-2771eaf5b1a9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.289 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a637d8a-3c04-4bf3-8142-922a0a7633c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.290 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-daa791d6-dfc2-418f-92aa-2771eaf5b1a9
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/daa791d6-dfc2-418f-92aa-2771eaf5b1a9.pid.haproxy
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID daa791d6-dfc2-418f-92aa-2771eaf5b1a9
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:12:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:11.292 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'env', 'PROCESS_TAG=haproxy-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/daa791d6-dfc2-418f-92aa-2771eaf5b1a9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.302 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:11 compute-0 ceph-mon[75140]: pgmap v1839: 305 pgs: 305 active+clean; 385 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 24 MiB/s wr, 197 op/s
Jan 26 16:12:11 compute-0 ceph-mon[75140]: osdmap e299: 3 total, 3 up, 3 in
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.315 239969 DEBUG nova.compute.manager [req-ff09bf3a-3d57-4d92-a52d-e5a8da5b817c req-af43026a-5862-4436-ba36-d1f5c89e8ffd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-plugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.316 239969 DEBUG oslo_concurrency.lockutils [req-ff09bf3a-3d57-4d92-a52d-e5a8da5b817c req-af43026a-5862-4436-ba36-d1f5c89e8ffd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.316 239969 DEBUG oslo_concurrency.lockutils [req-ff09bf3a-3d57-4d92-a52d-e5a8da5b817c req-af43026a-5862-4436-ba36-d1f5c89e8ffd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.316 239969 DEBUG oslo_concurrency.lockutils [req-ff09bf3a-3d57-4d92-a52d-e5a8da5b817c req-af43026a-5862-4436-ba36-d1f5c89e8ffd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.317 239969 DEBUG nova.compute.manager [req-ff09bf3a-3d57-4d92-a52d-e5a8da5b817c req-af43026a-5862-4436-ba36-d1f5c89e8ffd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Processing event network-vif-plugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.398 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443931.3977315, 67eb236e-aeee-43e6-9b43-41c17fcc37a5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.398 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] VM Started (Lifecycle Event)
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.422 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.426 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443931.4007342, 67eb236e-aeee-43e6-9b43-41c17fcc37a5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.426 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] VM Paused (Lifecycle Event)
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.443 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.446 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.463 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:12:11 compute-0 podman[333222]: 2026-01-26 16:12:11.699841587 +0000 UTC m=+0.060899731 container create 82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:12:11 compute-0 systemd[1]: Started libpod-conmon-82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31.scope.
Jan 26 16:12:11 compute-0 podman[333222]: 2026-01-26 16:12:11.660928676 +0000 UTC m=+0.021986850 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:12:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/508b12c7449f8fbf75b588c75bea5f21b772b5078df81919fef273408af7d4c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:11 compute-0 nova_compute[239965]: 2026-01-26 16:12:11.849 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:12 compute-0 podman[333222]: 2026-01-26 16:12:12.103319848 +0000 UTC m=+0.464378022 container init 82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:12:12 compute-0 podman[333222]: 2026-01-26 16:12:12.109503528 +0000 UTC m=+0.470561672 container start 82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:12:12 compute-0 neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9[333238]: [NOTICE]   (333242) : New worker (333244) forked
Jan 26 16:12:12 compute-0 neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9[333238]: [NOTICE]   (333242) : Loading success.
Jan 26 16:12:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 291 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 17 MiB/s wr, 417 op/s
Jan 26 16:12:12 compute-0 nova_compute[239965]: 2026-01-26 16:12:12.299 239969 DEBUG nova.network.neutron [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updated VIF entry in instance network info cache for port 5e1d227f-594c-4a1e-8f28-56469a20df56. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:12:12 compute-0 nova_compute[239965]: 2026-01-26 16:12:12.300 239969 DEBUG nova.network.neutron [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updating instance_info_cache with network_info: [{"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:12 compute-0 nova_compute[239965]: 2026-01-26 16:12:12.316 239969 DEBUG oslo_concurrency.lockutils [req-205a0c1f-09ef-4fa2-ba6f-d4471ca50470 req-538317b4-c350-4d26-bb22-b2529ff795f7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.617 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5e1d227f-594c-4a1e-8f28-56469a20df56 in datapath ce7228b3-b329-4b48-a563-2dfcf286a94e unbound from our chassis
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.619 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce7228b3-b329-4b48-a563-2dfcf286a94e
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.634 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[00daf731-61a5-49da-b547-f42e13084c0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.635 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapce7228b3-b1 in ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.637 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapce7228b3-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.637 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[423efba6-f8ec-4522-a770-7d645c00a145]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.638 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4ada6af0-1067-4372-aa37-bcbfd129c007]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.656 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[2494d38a-6342-48f6-9edb-206665126217]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ceph-mon[75140]: pgmap v1841: 305 pgs: 305 active+clean; 291 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 6.7 MiB/s rd, 17 MiB/s wr, 417 op/s
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.678 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[edf014be-e21e-4581-9483-e538c53e4e3e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.714 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a6ad5778-180d-4523-93ab-b04eacbe4662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 NetworkManager[48954]: <info>  [1769443932.7223] manager: (tapce7228b3-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/447)
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.721 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a2669df1-e3ce-4d28-8a42-b2241853a703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 systemd-udevd[333260]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.757 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[eebade9f-53a6-477e-b667-7ce3cba5f0ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.760 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[39822a3d-74d7-4d36-bcc8-dd1c0def8226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 NetworkManager[48954]: <info>  [1769443932.7877] device (tapce7228b3-b0): carrier: link connected
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.793 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3f3e146a-7650-4d2a-a60d-c823105e2b67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.811 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[430e7673-94ec-43c4-acf2-87999b8ce52a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce7228b3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:b5:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542359, 'reachable_time': 27262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333279, 'error': None, 'target': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.828 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5b63c7ee-c5a6-4faf-a339-1f6605702588]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe07:b5a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542359, 'tstamp': 542359}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 333281, 'error': None, 'target': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.849 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[637a1532-1767-425b-957d-40acc73c2c19]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce7228b3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:b5:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542359, 'reachable_time': 27262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 333282, 'error': None, 'target': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.881 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[05edbda6-50fa-42ad-8519-4fdf97ef2b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.911 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4ac4da44-86d8-4f08-8d46-9dbbcac8e31f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.913 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce7228b3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.913 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.913 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce7228b3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:12 compute-0 NetworkManager[48954]: <info>  [1769443932.9591] manager: (tapce7228b3-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/448)
Jan 26 16:12:12 compute-0 nova_compute[239965]: 2026-01-26 16:12:12.958 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:12 compute-0 kernel: tapce7228b3-b0: entered promiscuous mode
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.967 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce7228b3-b0, col_values=(('external_ids', {'iface-id': 'a1098b06-e219-4f9c-9e25-e366022a5f05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:12 compute-0 ovn_controller[146046]: 2026-01-26T16:12:12Z|01080|binding|INFO|Releasing lport a1098b06-e219-4f9c-9e25-e366022a5f05 from this chassis (sb_readonly=0)
Jan 26 16:12:12 compute-0 nova_compute[239965]: 2026-01-26 16:12:12.968 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.972 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ce7228b3-b329-4b48-a563-2dfcf286a94e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ce7228b3-b329-4b48-a563-2dfcf286a94e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.973 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[53f1b28e-9f7c-4656-9c09-c87f580a4b55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.974 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-ce7228b3-b329-4b48-a563-2dfcf286a94e
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/ce7228b3-b329-4b48-a563-2dfcf286a94e.pid.haproxy
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID ce7228b3-b329-4b48-a563-2dfcf286a94e
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:12:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:12.975 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'env', 'PROCESS_TAG=haproxy-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ce7228b3-b329-4b48-a563-2dfcf286a94e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:12:12 compute-0 nova_compute[239965]: 2026-01-26 16:12:12.987 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:13 compute-0 podman[333313]: 2026-01-26 16:12:13.320177724 +0000 UTC m=+0.055784136 container create 83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:12:13 compute-0 systemd[1]: Started libpod-conmon-83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675.scope.
Jan 26 16:12:13 compute-0 podman[333313]: 2026-01-26 16:12:13.290455987 +0000 UTC m=+0.026062388 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:12:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c4ac20c357ebc113ba7c867c1582900451508994f989a7a372df93c277c8f21/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:13 compute-0 podman[333313]: 2026-01-26 16:12:13.423558563 +0000 UTC m=+0.159164974 container init 83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:12:13 compute-0 podman[333313]: 2026-01-26 16:12:13.430614866 +0000 UTC m=+0.166221247 container start 83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:12:13 compute-0 neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e[333329]: [NOTICE]   (333333) : New worker (333335) forked
Jan 26 16:12:13 compute-0 neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e[333329]: [NOTICE]   (333333) : Loading success.
Jan 26 16:12:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.8 MiB/s wr, 310 op/s
Jan 26 16:12:14 compute-0 ceph-mon[75140]: pgmap v1842: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.8 MiB/s wr, 310 op/s
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.032 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:15 compute-0 sudo[333344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:12:15 compute-0 sudo[333344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:12:15 compute-0 sudo[333344]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:15 compute-0 sudo[333369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:12:15 compute-0 sudo[333369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.450 239969 DEBUG nova.compute.manager [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-plugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.451 239969 DEBUG oslo_concurrency.lockutils [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.451 239969 DEBUG oslo_concurrency.lockutils [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.452 239969 DEBUG oslo_concurrency.lockutils [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.452 239969 DEBUG nova.compute.manager [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] No event matching network-vif-plugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 in dict_keys([('network-vif-plugged', '5e1d227f-594c-4a1e-8f28-56469a20df56')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.452 239969 WARNING nova.compute.manager [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received unexpected event network-vif-plugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 for instance with vm_state building and task_state spawning.
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.453 239969 DEBUG nova.compute.manager [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-plugged-5e1d227f-594c-4a1e-8f28-56469a20df56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.453 239969 DEBUG oslo_concurrency.lockutils [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.453 239969 DEBUG oslo_concurrency.lockutils [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.454 239969 DEBUG oslo_concurrency.lockutils [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.454 239969 DEBUG nova.compute.manager [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Processing event network-vif-plugged-5e1d227f-594c-4a1e-8f28-56469a20df56 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.454 239969 DEBUG nova.compute.manager [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-plugged-5e1d227f-594c-4a1e-8f28-56469a20df56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.454 239969 DEBUG oslo_concurrency.lockutils [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.455 239969 DEBUG oslo_concurrency.lockutils [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.455 239969 DEBUG oslo_concurrency.lockutils [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.455 239969 DEBUG nova.compute.manager [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] No waiting events found dispatching network-vif-plugged-5e1d227f-594c-4a1e-8f28-56469a20df56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.456 239969 WARNING nova.compute.manager [req-40d8b3f7-a602-4a99-aa7b-5227c4138ee6 req-c467109b-2367-495a-8d22-5db08fc5c952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received unexpected event network-vif-plugged-5e1d227f-594c-4a1e-8f28-56469a20df56 for instance with vm_state building and task_state spawning.
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.456 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.461 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443935.460867, 67eb236e-aeee-43e6-9b43-41c17fcc37a5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.462 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] VM Resumed (Lifecycle Event)
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.469 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.473 239969 INFO nova.virt.libvirt.driver [-] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Instance spawned successfully.
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.474 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.479 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.482 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.512 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.516 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.516 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.517 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.517 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.518 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.518 239969 DEBUG nova.virt.libvirt.driver [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.588 239969 INFO nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Took 15.79 seconds to spawn the instance on the hypervisor.
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.589 239969 DEBUG nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.687 239969 INFO nova.compute.manager [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Took 16.92 seconds to build instance.
Jan 26 16:12:15 compute-0 nova_compute[239965]: 2026-01-26 16:12:15.703 239969 DEBUG oslo_concurrency.lockutils [None req-7ee40c1e-66e0-4f54-a88d-09b2d3392098 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:15 compute-0 sudo[333369]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:12:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:12:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:12:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:12:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:12:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:12:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:12:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:12:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:12:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:12:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:12:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:12:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:12:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:12:16 compute-0 sudo[333423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:12:16 compute-0 sudo[333423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:12:16 compute-0 sudo[333423]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:16 compute-0 sudo[333448]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:12:16 compute-0 sudo[333448]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:12:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 235 op/s
Jan 26 16:12:16 compute-0 podman[333484]: 2026-01-26 16:12:16.378093038 +0000 UTC m=+0.046411746 container create 0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:12:16 compute-0 systemd[1]: Started libpod-conmon-0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c.scope.
Jan 26 16:12:16 compute-0 podman[333484]: 2026-01-26 16:12:16.354939021 +0000 UTC m=+0.023257749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:12:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:16 compute-0 podman[333484]: 2026-01-26 16:12:16.528839235 +0000 UTC m=+0.197157963 container init 0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:12:16 compute-0 podman[333484]: 2026-01-26 16:12:16.541611707 +0000 UTC m=+0.209930415 container start 0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:12:16 compute-0 podman[333484]: 2026-01-26 16:12:16.545479792 +0000 UTC m=+0.213798500 container attach 0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:12:16 compute-0 systemd[1]: libpod-0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c.scope: Deactivated successfully.
Jan 26 16:12:16 compute-0 goofy_panini[333500]: 167 167
Jan 26 16:12:16 compute-0 conmon[333500]: conmon 0255be48dc564f9b8929 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c.scope/container/memory.events
Jan 26 16:12:16 compute-0 podman[333484]: 2026-01-26 16:12:16.552176596 +0000 UTC m=+0.220495304 container died 0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:12:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-c20d9d314382603d318b72980b739f434ac5e6495b64029993d96ff7eb9125d7-merged.mount: Deactivated successfully.
Jan 26 16:12:16 compute-0 nova_compute[239965]: 2026-01-26 16:12:16.852 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:16 compute-0 podman[333484]: 2026-01-26 16:12:16.959040929 +0000 UTC m=+0.627359637 container remove 0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_panini, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 16:12:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:12:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:12:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:12:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:12:16 compute-0 ceph-mon[75140]: pgmap v1843: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.1 MiB/s wr, 235 op/s
Jan 26 16:12:17 compute-0 systemd[1]: libpod-conmon-0255be48dc564f9b892964dabd2d5cd12b2670373466f23dd58948809ae2e13c.scope: Deactivated successfully.
Jan 26 16:12:17 compute-0 podman[333523]: 2026-01-26 16:12:17.16551787 +0000 UTC m=+0.022586004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:12:17 compute-0 podman[333523]: 2026-01-26 16:12:17.270733483 +0000 UTC m=+0.127801597 container create ceb5090985f21e1d8639322dddbf590f489fbadfe27457158307e77c17b77404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:12:17 compute-0 systemd[1]: Started libpod-conmon-ceb5090985f21e1d8639322dddbf590f489fbadfe27457158307e77c17b77404.scope.
Jan 26 16:12:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edd8e66c0fcfd1e2ac7a693c08bb464a7b12f26c270008746c9b0810ee93b41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edd8e66c0fcfd1e2ac7a693c08bb464a7b12f26c270008746c9b0810ee93b41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edd8e66c0fcfd1e2ac7a693c08bb464a7b12f26c270008746c9b0810ee93b41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edd8e66c0fcfd1e2ac7a693c08bb464a7b12f26c270008746c9b0810ee93b41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5edd8e66c0fcfd1e2ac7a693c08bb464a7b12f26c270008746c9b0810ee93b41/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:17 compute-0 podman[333523]: 2026-01-26 16:12:17.388053834 +0000 UTC m=+0.245121988 container init ceb5090985f21e1d8639322dddbf590f489fbadfe27457158307e77c17b77404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:12:17 compute-0 podman[333523]: 2026-01-26 16:12:17.393866616 +0000 UTC m=+0.250934730 container start ceb5090985f21e1d8639322dddbf590f489fbadfe27457158307e77c17b77404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:12:17 compute-0 podman[333523]: 2026-01-26 16:12:17.465278303 +0000 UTC m=+0.322346417 container attach ceb5090985f21e1d8639322dddbf590f489fbadfe27457158307e77c17b77404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_yonath, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:12:17 compute-0 romantic_yonath[333539]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:12:17 compute-0 romantic_yonath[333539]: --> All data devices are unavailable
Jan 26 16:12:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Jan 26 16:12:17 compute-0 systemd[1]: libpod-ceb5090985f21e1d8639322dddbf590f489fbadfe27457158307e77c17b77404.scope: Deactivated successfully.
Jan 26 16:12:17 compute-0 podman[333523]: 2026-01-26 16:12:17.885867522 +0000 UTC m=+0.742935636 container died ceb5090985f21e1d8639322dddbf590f489fbadfe27457158307e77c17b77404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_yonath, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:12:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Jan 26 16:12:17 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Jan 26 16:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-5edd8e66c0fcfd1e2ac7a693c08bb464a7b12f26c270008746c9b0810ee93b41-merged.mount: Deactivated successfully.
Jan 26 16:12:17 compute-0 podman[333523]: 2026-01-26 16:12:17.95652241 +0000 UTC m=+0.813590524 container remove ceb5090985f21e1d8639322dddbf590f489fbadfe27457158307e77c17b77404 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_yonath, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:12:17 compute-0 systemd[1]: libpod-conmon-ceb5090985f21e1d8639322dddbf590f489fbadfe27457158307e77c17b77404.scope: Deactivated successfully.
Jan 26 16:12:18 compute-0 sudo[333448]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:18 compute-0 podman[333560]: 2026-01-26 16:12:18.033207636 +0000 UTC m=+0.116636915 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:12:18 compute-0 podman[333567]: 2026-01-26 16:12:18.038583487 +0000 UTC m=+0.122019526 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:12:18 compute-0 sudo[333607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:12:18 compute-0 sudo[333607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:12:18 compute-0 sudo[333607]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:18 compute-0 sudo[333639]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:12:18 compute-0 sudo[333639]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:12:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 289 op/s
Jan 26 16:12:18 compute-0 ovn_controller[146046]: 2026-01-26T16:12:18Z|01081|binding|INFO|Releasing lport e0fa1a12-fa6c-4878-8114-78f4badaec8c from this chassis (sb_readonly=0)
Jan 26 16:12:18 compute-0 ovn_controller[146046]: 2026-01-26T16:12:18Z|01082|binding|INFO|Releasing lport fb7d9679-6ae1-476e-99df-96ff9d9ad5e9 from this chassis (sb_readonly=0)
Jan 26 16:12:18 compute-0 ovn_controller[146046]: 2026-01-26T16:12:18Z|01083|binding|INFO|Releasing lport a1098b06-e219-4f9c-9e25-e366022a5f05 from this chassis (sb_readonly=0)
Jan 26 16:12:18 compute-0 nova_compute[239965]: 2026-01-26 16:12:18.378 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:18 compute-0 NetworkManager[48954]: <info>  [1769443938.3789] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/449)
Jan 26 16:12:18 compute-0 NetworkManager[48954]: <info>  [1769443938.3799] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Jan 26 16:12:18 compute-0 ovn_controller[146046]: 2026-01-26T16:12:18Z|01084|binding|INFO|Releasing lport e0fa1a12-fa6c-4878-8114-78f4badaec8c from this chassis (sb_readonly=0)
Jan 26 16:12:18 compute-0 ovn_controller[146046]: 2026-01-26T16:12:18Z|01085|binding|INFO|Releasing lport fb7d9679-6ae1-476e-99df-96ff9d9ad5e9 from this chassis (sb_readonly=0)
Jan 26 16:12:18 compute-0 ovn_controller[146046]: 2026-01-26T16:12:18Z|01086|binding|INFO|Releasing lport a1098b06-e219-4f9c-9e25-e366022a5f05 from this chassis (sb_readonly=0)
Jan 26 16:12:18 compute-0 nova_compute[239965]: 2026-01-26 16:12:18.438 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:18 compute-0 podman[333675]: 2026-01-26 16:12:18.410387972 +0000 UTC m=+0.024036579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:12:18 compute-0 podman[333675]: 2026-01-26 16:12:18.515504593 +0000 UTC m=+0.129153210 container create e17915d49e5f59ac63b2aabeee219749c07c74c3563d8f94691aa100c5bfe3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:12:18 compute-0 systemd[1]: Started libpod-conmon-e17915d49e5f59ac63b2aabeee219749c07c74c3563d8f94691aa100c5bfe3b6.scope.
Jan 26 16:12:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:18 compute-0 podman[333675]: 2026-01-26 16:12:18.610477827 +0000 UTC m=+0.224126454 container init e17915d49e5f59ac63b2aabeee219749c07c74c3563d8f94691aa100c5bfe3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 16:12:18 compute-0 podman[333675]: 2026-01-26 16:12:18.620616354 +0000 UTC m=+0.234264931 container start e17915d49e5f59ac63b2aabeee219749c07c74c3563d8f94691aa100c5bfe3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:12:18 compute-0 podman[333675]: 2026-01-26 16:12:18.625057813 +0000 UTC m=+0.238706420 container attach e17915d49e5f59ac63b2aabeee219749c07c74c3563d8f94691aa100c5bfe3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 16:12:18 compute-0 jolly_curie[333692]: 167 167
Jan 26 16:12:18 compute-0 systemd[1]: libpod-e17915d49e5f59ac63b2aabeee219749c07c74c3563d8f94691aa100c5bfe3b6.scope: Deactivated successfully.
Jan 26 16:12:18 compute-0 podman[333675]: 2026-01-26 16:12:18.627930844 +0000 UTC m=+0.241579431 container died e17915d49e5f59ac63b2aabeee219749c07c74c3563d8f94691aa100c5bfe3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd43784f38cde5e54d23f1f351f07a044f78c51d7711408f18e3f7de9266927c-merged.mount: Deactivated successfully.
Jan 26 16:12:18 compute-0 podman[333675]: 2026-01-26 16:12:18.669709755 +0000 UTC m=+0.283358342 container remove e17915d49e5f59ac63b2aabeee219749c07c74c3563d8f94691aa100c5bfe3b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:12:18 compute-0 systemd[1]: libpod-conmon-e17915d49e5f59ac63b2aabeee219749c07c74c3563d8f94691aa100c5bfe3b6.scope: Deactivated successfully.
Jan 26 16:12:18 compute-0 podman[333715]: 2026-01-26 16:12:18.844236315 +0000 UTC m=+0.024878900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.045 239969 DEBUG nova.compute.manager [req-e5274b7b-8b21-4c07-9415-1dac412f2db2 req-b1f91a6f-2fad-4042-b749-dfd46cc2ae22 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-changed-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.046 239969 DEBUG nova.compute.manager [req-e5274b7b-8b21-4c07-9415-1dac412f2db2 req-b1f91a6f-2fad-4042-b749-dfd46cc2ae22 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Refreshing instance network info cache due to event network-changed-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.046 239969 DEBUG oslo_concurrency.lockutils [req-e5274b7b-8b21-4c07-9415-1dac412f2db2 req-b1f91a6f-2fad-4042-b749-dfd46cc2ae22 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.046 239969 DEBUG oslo_concurrency.lockutils [req-e5274b7b-8b21-4c07-9415-1dac412f2db2 req-b1f91a6f-2fad-4042-b749-dfd46cc2ae22 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.047 239969 DEBUG nova.network.neutron [req-e5274b7b-8b21-4c07-9415-1dac412f2db2 req-b1f91a6f-2fad-4042-b749-dfd46cc2ae22 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Refreshing network info cache for port 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:12:19 compute-0 ceph-mon[75140]: osdmap e300: 3 total, 3 up, 3 in
Jan 26 16:12:19 compute-0 ceph-mon[75140]: pgmap v1845: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.1 MiB/s wr, 289 op/s
Jan 26 16:12:19 compute-0 podman[333715]: 2026-01-26 16:12:19.183952306 +0000 UTC m=+0.364594871 container create abe38034c97ff14ede2e599b6c5eec3733c8042196f14692c8bfed3fa259f006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 16:12:19 compute-0 systemd[1]: Started libpod-conmon-abe38034c97ff14ede2e599b6c5eec3733c8042196f14692c8bfed3fa259f006.scope.
Jan 26 16:12:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72ed1124f52a262ec6a5ed45aee4d1a19d730a78d5c6ce06afd18773eb6caef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72ed1124f52a262ec6a5ed45aee4d1a19d730a78d5c6ce06afd18773eb6caef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72ed1124f52a262ec6a5ed45aee4d1a19d730a78d5c6ce06afd18773eb6caef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d72ed1124f52a262ec6a5ed45aee4d1a19d730a78d5c6ce06afd18773eb6caef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:19 compute-0 podman[333715]: 2026-01-26 16:12:19.307956809 +0000 UTC m=+0.488599404 container init abe38034c97ff14ede2e599b6c5eec3733c8042196f14692c8bfed3fa259f006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:12:19 compute-0 podman[333715]: 2026-01-26 16:12:19.317287607 +0000 UTC m=+0.497930192 container start abe38034c97ff14ede2e599b6c5eec3733c8042196f14692c8bfed3fa259f006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lichterman, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:12:19 compute-0 podman[333715]: 2026-01-26 16:12:19.327926647 +0000 UTC m=+0.508569222 container attach abe38034c97ff14ede2e599b6c5eec3733c8042196f14692c8bfed3fa259f006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lichterman, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.550 239969 DEBUG nova.objects.instance [None req-546e0173-9a2a-4cf0-9325-23663b38ca3d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.578 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443939.577886, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.578 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Paused (Lifecycle Event)
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]: {
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:     "0": [
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:         {
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "devices": [
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "/dev/loop3"
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             ],
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_name": "ceph_lv0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_size": "21470642176",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "name": "ceph_lv0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "tags": {
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.cluster_name": "ceph",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.crush_device_class": "",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.encrypted": "0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.objectstore": "bluestore",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.osd_id": "0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.type": "block",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.vdo": "0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.with_tpm": "0"
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             },
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "type": "block",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "vg_name": "ceph_vg0"
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:         }
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:     ],
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:     "1": [
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:         {
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "devices": [
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "/dev/loop4"
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             ],
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_name": "ceph_lv1",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_size": "21470642176",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "name": "ceph_lv1",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "tags": {
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.cluster_name": "ceph",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.crush_device_class": "",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.encrypted": "0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.objectstore": "bluestore",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.osd_id": "1",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.type": "block",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.vdo": "0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.with_tpm": "0"
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             },
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "type": "block",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "vg_name": "ceph_vg1"
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:         }
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:     ],
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:     "2": [
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:         {
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "devices": [
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "/dev/loop5"
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             ],
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_name": "ceph_lv2",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_size": "21470642176",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "name": "ceph_lv2",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "tags": {
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.cluster_name": "ceph",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.crush_device_class": "",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.encrypted": "0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.objectstore": "bluestore",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.osd_id": "2",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.type": "block",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.vdo": "0",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:                 "ceph.with_tpm": "0"
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             },
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "type": "block",
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:             "vg_name": "ceph_vg2"
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:         }
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]:     ]
Jan 26 16:12:19 compute-0 fervent_lichterman[333731]: }
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.602 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.611 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:12:19 compute-0 systemd[1]: libpod-abe38034c97ff14ede2e599b6c5eec3733c8042196f14692c8bfed3fa259f006.scope: Deactivated successfully.
Jan 26 16:12:19 compute-0 podman[333715]: 2026-01-26 16:12:19.617079161 +0000 UTC m=+0.797721726 container died abe38034c97ff14ede2e599b6c5eec3733c8042196f14692c8bfed3fa259f006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:12:19 compute-0 nova_compute[239965]: 2026-01-26 16:12:19.633 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-d72ed1124f52a262ec6a5ed45aee4d1a19d730a78d5c6ce06afd18773eb6caef-merged.mount: Deactivated successfully.
Jan 26 16:12:20 compute-0 podman[333715]: 2026-01-26 16:12:20.204137111 +0000 UTC m=+1.384779676 container remove abe38034c97ff14ede2e599b6c5eec3733c8042196f14692c8bfed3fa259f006 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_lichterman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:12:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.7 MiB/s wr, 233 op/s
Jan 26 16:12:20 compute-0 kernel: tapd8d3ce43-ab (unregistering): left promiscuous mode
Jan 26 16:12:20 compute-0 NetworkManager[48954]: <info>  [1769443940.2345] device (tapd8d3ce43-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.243 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:20 compute-0 systemd[1]: libpod-conmon-abe38034c97ff14ede2e599b6c5eec3733c8042196f14692c8bfed3fa259f006.scope: Deactivated successfully.
Jan 26 16:12:20 compute-0 ovn_controller[146046]: 2026-01-26T16:12:20Z|01087|binding|INFO|Releasing lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 from this chassis (sb_readonly=0)
Jan 26 16:12:20 compute-0 ovn_controller[146046]: 2026-01-26T16:12:20Z|01088|binding|INFO|Setting lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 down in Southbound
Jan 26 16:12:20 compute-0 ovn_controller[146046]: 2026-01-26T16:12:20Z|01089|binding|INFO|Removing iface tapd8d3ce43-ab ovn-installed in OVS
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.250 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:58:77 10.100.0.6'], port_security=['fa:16:3e:74:58:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'fb0a5e0f-b1d5-4f6c-9884-3b66f10a59d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=568e2bde-137f-4351-b2df-67acb89970b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.252 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 in datapath 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 unbound from our chassis
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.254 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.255 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1593703f-a0fd-4ac3-8fa4-bc452b6b65c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.257 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 namespace which is not needed anymore
Jan 26 16:12:20 compute-0 sudo[333639]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.279 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:20 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 26 16:12:20 compute-0 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006a.scope: Consumed 11.606s CPU time.
Jan 26 16:12:20 compute-0 systemd-machined[208061]: Machine qemu-134-instance-0000006a terminated.
Jan 26 16:12:20 compute-0 sudo[333762]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:12:20 compute-0 sudo[333762]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:12:20 compute-0 sudo[333762]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:20 compute-0 sudo[333802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:12:20 compute-0 sudo[333802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:12:20 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[332947]: [NOTICE]   (332951) : haproxy version is 2.8.14-c23fe91
Jan 26 16:12:20 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[332947]: [NOTICE]   (332951) : path to executable is /usr/sbin/haproxy
Jan 26 16:12:20 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[332947]: [WARNING]  (332951) : Exiting Master process...
Jan 26 16:12:20 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[332947]: [WARNING]  (332951) : Exiting Master process...
Jan 26 16:12:20 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[332947]: [ALERT]    (332951) : Current worker (332962) exited with code 143 (Terminated)
Jan 26 16:12:20 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[332947]: [WARNING]  (332951) : All workers exited. Exiting... (0)
Jan 26 16:12:20 compute-0 systemd[1]: libpod-0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce.scope: Deactivated successfully.
Jan 26 16:12:20 compute-0 podman[333807]: 2026-01-26 16:12:20.415763778 +0000 UTC m=+0.050636430 container died 0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.421 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.427 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.434 239969 DEBUG nova.compute.manager [None req-546e0173-9a2a-4cf0-9325-23663b38ca3d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce-userdata-shm.mount: Deactivated successfully.
Jan 26 16:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-916d4919df42cf4a959d90cbb43e8408e43f913698ef2d202c4ebfe0715e1555-merged.mount: Deactivated successfully.
Jan 26 16:12:20 compute-0 podman[333807]: 2026-01-26 16:12:20.46527526 +0000 UTC m=+0.100147892 container cleanup 0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:12:20 compute-0 systemd[1]: libpod-conmon-0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce.scope: Deactivated successfully.
Jan 26 16:12:20 compute-0 podman[333866]: 2026-01-26 16:12:20.532420082 +0000 UTC m=+0.046678563 container remove 0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.540 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba29eac6-1091-4db5-8261-44f360f64be5]: (4, ('Mon Jan 26 04:12:20 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 (0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce)\n0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce\nMon Jan 26 04:12:20 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 (0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce)\n0d12ed90dd34fd444d64d4d46247f627cf844e45c24e3ce91b42f682c703edce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.542 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1aea20-dc1c-4d75-9883-2e8a950d0c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.543 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a6493e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:20 compute-0 kernel: tap74a6493e-90: left promiscuous mode
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.567 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.571 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2c7d6250-eca9-4b8b-b36e-7d7851d45c9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.597 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ec14d6-e760-4e22-aedd-48ff857e29f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.599 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8438778d-873e-4168-9edf-a62f2f074cba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.621 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2f9613ae-b51a-46d0-9c03-f4d5e7a0dbc5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 541924, 'reachable_time': 26031, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 333883, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d74a6493e\x2d9050\x2d4f8b\x2d9dc9\x2de7bb3992eeb6.mount: Deactivated successfully.
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.625 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:12:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:20.625 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[182f3808-7bee-464d-9637-d5f2331a1d8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.683 239969 DEBUG nova.network.neutron [req-e5274b7b-8b21-4c07-9415-1dac412f2db2 req-b1f91a6f-2fad-4042-b749-dfd46cc2ae22 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updated VIF entry in instance network info cache for port 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.684 239969 DEBUG nova.network.neutron [req-e5274b7b-8b21-4c07-9415-1dac412f2db2 req-b1f91a6f-2fad-4042-b749-dfd46cc2ae22 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updating instance_info_cache with network_info: [{"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:20 compute-0 nova_compute[239965]: 2026-01-26 16:12:20.701 239969 DEBUG oslo_concurrency.lockutils [req-e5274b7b-8b21-4c07-9415-1dac412f2db2 req-b1f91a6f-2fad-4042-b749-dfd46cc2ae22 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:20 compute-0 podman[333896]: 2026-01-26 16:12:20.722685657 +0000 UTC m=+0.051113762 container create 60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_euler, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:12:20 compute-0 systemd[1]: Started libpod-conmon-60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81.scope.
Jan 26 16:12:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:20 compute-0 podman[333896]: 2026-01-26 16:12:20.702193435 +0000 UTC m=+0.030621570 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:12:20 compute-0 podman[333896]: 2026-01-26 16:12:20.812383981 +0000 UTC m=+0.140812186 container init 60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 26 16:12:20 compute-0 podman[333896]: 2026-01-26 16:12:20.823458092 +0000 UTC m=+0.151886197 container start 60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:12:20 compute-0 systemd[1]: libpod-60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81.scope: Deactivated successfully.
Jan 26 16:12:20 compute-0 optimistic_euler[333913]: 167 167
Jan 26 16:12:20 compute-0 conmon[333913]: conmon 60dffa02948015996035 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81.scope/container/memory.events
Jan 26 16:12:20 compute-0 podman[333896]: 2026-01-26 16:12:20.836933681 +0000 UTC m=+0.165361786 container attach 60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:12:20 compute-0 podman[333896]: 2026-01-26 16:12:20.837386122 +0000 UTC m=+0.165814237 container died 60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_euler, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:12:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f0f341eeba1f0688024388fca088baff5f83156eef4c44af0118d7c3239d960-merged.mount: Deactivated successfully.
Jan 26 16:12:20 compute-0 podman[333896]: 2026-01-26 16:12:20.874659864 +0000 UTC m=+0.203087969 container remove 60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_euler, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:12:20 compute-0 systemd[1]: libpod-conmon-60dffa029480159960355e558a9500640b240669b5d67a9de234b832bcb25e81.scope: Deactivated successfully.
Jan 26 16:12:21 compute-0 podman[333936]: 2026-01-26 16:12:21.058501111 +0000 UTC m=+0.042216504 container create 3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:12:21 compute-0 systemd[1]: Started libpod-conmon-3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089.scope.
Jan 26 16:12:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2540d65a614509e06874933e96d5fe07597636043218740f1da8000830a33d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2540d65a614509e06874933e96d5fe07597636043218740f1da8000830a33d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2540d65a614509e06874933e96d5fe07597636043218740f1da8000830a33d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a2540d65a614509e06874933e96d5fe07597636043218740f1da8000830a33d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:21 compute-0 podman[333936]: 2026-01-26 16:12:21.040383298 +0000 UTC m=+0.024098711 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.173 239969 DEBUG nova.compute.manager [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-unplugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.176 239969 DEBUG oslo_concurrency.lockutils [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.176 239969 DEBUG oslo_concurrency.lockutils [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.177 239969 DEBUG oslo_concurrency.lockutils [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.177 239969 DEBUG nova.compute.manager [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-unplugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.178 239969 WARNING nova.compute.manager [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received unexpected event network-vif-unplugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with vm_state suspended and task_state None.
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.178 239969 DEBUG nova.compute.manager [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.179 239969 DEBUG oslo_concurrency.lockutils [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.179 239969 DEBUG oslo_concurrency.lockutils [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.180 239969 DEBUG oslo_concurrency.lockutils [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.180 239969 DEBUG nova.compute.manager [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.181 239969 WARNING nova.compute.manager [req-e524c492-a6ce-409c-8a2e-f9fedef5e7f8 req-c563773a-13f6-4d26-81d0-efec0d41fc6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received unexpected event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with vm_state suspended and task_state None.
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.854 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.930 239969 INFO nova.compute.manager [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Resuming
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.932 239969 DEBUG nova.objects.instance [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'flavor' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.970 239969 DEBUG oslo_concurrency.lockutils [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.971 239969 DEBUG oslo_concurrency.lockutils [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquired lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:21 compute-0 nova_compute[239965]: 2026-01-26 16:12:21.971 239969 DEBUG nova.network.neutron [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:12:22 compute-0 podman[333936]: 2026-01-26 16:12:22.027724641 +0000 UTC m=+1.011440144 container init 3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:12:22 compute-0 podman[333936]: 2026-01-26 16:12:22.046155781 +0000 UTC m=+1.029871174 container start 3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:12:22 compute-0 ceph-mon[75140]: pgmap v1846: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.7 MiB/s wr, 233 op/s
Jan 26 16:12:22 compute-0 podman[333936]: 2026-01-26 16:12:22.140894499 +0000 UTC m=+1.124609932 container attach 3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:12:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 31 KiB/s wr, 128 op/s
Jan 26 16:12:22 compute-0 lvm[334027]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:12:22 compute-0 lvm[334027]: VG ceph_vg0 finished
Jan 26 16:12:22 compute-0 lvm[334029]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:12:22 compute-0 lvm[334029]: VG ceph_vg1 finished
Jan 26 16:12:22 compute-0 lvm[334030]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:12:22 compute-0 lvm[334030]: VG ceph_vg2 finished
Jan 26 16:12:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:22 compute-0 inspiring_herschel[333952]: {}
Jan 26 16:12:22 compute-0 systemd[1]: libpod-3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089.scope: Deactivated successfully.
Jan 26 16:12:22 compute-0 systemd[1]: libpod-3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089.scope: Consumed 1.310s CPU time.
Jan 26 16:12:22 compute-0 podman[334033]: 2026-01-26 16:12:22.993576158 +0000 UTC m=+0.025852154 container died 3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:12:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a2540d65a614509e06874933e96d5fe07597636043218740f1da8000830a33d5-merged.mount: Deactivated successfully.
Jan 26 16:12:23 compute-0 podman[334033]: 2026-01-26 16:12:23.070311614 +0000 UTC m=+0.102587600 container remove 3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:12:23 compute-0 systemd[1]: libpod-conmon-3a9187f439df84cd2f77678b4b6733ece0f7912829a961e31d8ed73f2e8f1089.scope: Deactivated successfully.
Jan 26 16:12:23 compute-0 sudo[333802]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:12:23 compute-0 ceph-mon[75140]: pgmap v1847: 305 pgs: 305 active+clean; 213 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 31 KiB/s wr, 128 op/s
Jan 26 16:12:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:12:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:12:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:12:23 compute-0 sudo[334048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:12:23 compute-0 sudo[334048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:12:23 compute-0 sudo[334048]: pam_unix(sudo:session): session closed for user root
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.659 239969 DEBUG nova.network.neutron [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating instance_info_cache with network_info: [{"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.683 239969 DEBUG oslo_concurrency.lockutils [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Releasing lock "refresh_cache-20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.689 239969 DEBUG nova.virt.libvirt.vif [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-944879613',display_name='tempest-ServersNegativeTestJSON-server-944879613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-944879613',id=106,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:12:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-np89kd30',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-ServersNegativeTestJSON-1013867593-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:12:20Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.690 239969 DEBUG nova.network.os_vif_util [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.691 239969 DEBUG nova.network.os_vif_util [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.691 239969 DEBUG os_vif [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.692 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.692 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.693 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.696 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.696 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8d3ce43-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.696 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8d3ce43-ab, col_values=(('external_ids', {'iface-id': 'd8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:74:58:77', 'vm-uuid': '20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.697 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.697 239969 INFO os_vif [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab')
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.789 239969 DEBUG nova.objects.instance [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'numa_topology' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:23 compute-0 kernel: tapd8d3ce43-ab: entered promiscuous mode
Jan 26 16:12:23 compute-0 NetworkManager[48954]: <info>  [1769443943.9105] manager: (tapd8d3ce43-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/451)
Jan 26 16:12:23 compute-0 ovn_controller[146046]: 2026-01-26T16:12:23Z|01090|binding|INFO|Claiming lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for this chassis.
Jan 26 16:12:23 compute-0 ovn_controller[146046]: 2026-01-26T16:12:23Z|01091|binding|INFO|d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3: Claiming fa:16:3e:74:58:77 10.100.0.6
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.923 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:58:77 10.100.0.6'], port_security=['fa:16:3e:74:58:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'neutron:revision_number': '10', 'neutron:security_group_ids': 'fb0a5e0f-b1d5-4f6c-9884-3b66f10a59d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=568e2bde-137f-4351-b2df-67acb89970b2, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.924 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 in datapath 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 bound to our chassis
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.926 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.943 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[54d549c3-da7f-436b-97f2-609cef427737]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.944 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap74a6493e-91 in ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.947 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap74a6493e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.947 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd15e1d-dac1-4e74-a1a1-d6a6b2ad9569]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.948 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[18ca5031-a820-41fc-8f3b-e1a9577000c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:23 compute-0 ovn_controller[146046]: 2026-01-26T16:12:23Z|01092|binding|INFO|Setting lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 ovn-installed in OVS
Jan 26 16:12:23 compute-0 ovn_controller[146046]: 2026-01-26T16:12:23Z|01093|binding|INFO|Setting lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 up in Southbound
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.956 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:23 compute-0 nova_compute[239965]: 2026-01-26 16:12:23.963 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:23 compute-0 systemd-machined[208061]: New machine qemu-136-instance-0000006a.
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.967 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[47eef109-3d63-4a9e-91c5-fdeeba5cdb70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:23 compute-0 systemd[1]: Started Virtual Machine qemu-136-instance-0000006a.
Jan 26 16:12:23 compute-0 systemd-udevd[334090]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:12:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:23.994 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ad7cb751-2ee3-49ec-bab4-9fa9e473e54f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 NetworkManager[48954]: <info>  [1769443944.0008] device (tapd8d3ce43-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:12:24 compute-0 NetworkManager[48954]: <info>  [1769443944.0017] device (tapd8d3ce43-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.023 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4c8a48b4-cd73-4120-bc0c-030997eded50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 NetworkManager[48954]: <info>  [1769443944.0298] manager: (tap74a6493e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/452)
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.028 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[76c3c278-1401-4040-ba8f-a8523da1377a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.061 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ca25c3-e853-42a2-b2d0-f26ff3ad81b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.064 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[29890a76-e909-4a16-ba92-8c61babff575]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 NetworkManager[48954]: <info>  [1769443944.0864] device (tap74a6493e-90): carrier: link connected
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.091 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff7f91a-a300-4b7a-8815-7f66058db619]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.108 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d2e4801-cf71-4914-89dc-12179b4a6ee7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74a6493e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:10:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543489, 'reachable_time': 21711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334120, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.123 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a6002776-b675-4357-9fe3-619cf6378308]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5f:10c6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 543489, 'tstamp': 543489}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334121, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.147 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5e9c0ee2-3b0f-4123-93bf-1418693e32d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap74a6493e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5f:10:c6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 321], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543489, 'reachable_time': 21711, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 334122, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:12:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.181 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e5cd562a-b11d-44f4-bf6f-f7f9ffc3304d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 op/s
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.243 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c21dcccc-3b5c-48ea-9f70-62fca0ba5cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.244 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a6493e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.244 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.245 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap74a6493e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.246 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:24 compute-0 NetworkManager[48954]: <info>  [1769443944.2472] manager: (tap74a6493e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Jan 26 16:12:24 compute-0 kernel: tap74a6493e-90: entered promiscuous mode
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.250 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.251 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap74a6493e-90, col_values=(('external_ids', {'iface-id': 'fb7d9679-6ae1-476e-99df-96ff9d9ad5e9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.252 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:24 compute-0 ovn_controller[146046]: 2026-01-26T16:12:24Z|01094|binding|INFO|Releasing lport fb7d9679-6ae1-476e-99df-96ff9d9ad5e9 from this chassis (sb_readonly=0)
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.268 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.272 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.273 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.274 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[373f947f-4a5e-4086-bbba-084d8fbd9374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.274 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.pid.haproxy
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:12:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:24.275 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'env', 'PROCESS_TAG=haproxy-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/74a6493e-9050-4f8b-9dc9-e7bb3992eeb6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.589 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.589 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443944.5887148, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.590 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Started (Lifecycle Event)
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.615 239969 DEBUG nova.compute.manager [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:12:24 compute-0 nova_compute[239965]: 2026-01-26 16:12:24.616 239969 DEBUG nova.objects.instance [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'pci_devices' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:24 compute-0 podman[334195]: 2026-01-26 16:12:24.660382481 +0000 UTC m=+0.023553517 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.004 239969 DEBUG nova.compute.manager [req-905fbc64-0f29-4003-8822-7abebd5b0f08 req-05a904cc-fdc0-48cd-ba54-97773e4892a1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.005 239969 DEBUG oslo_concurrency.lockutils [req-905fbc64-0f29-4003-8822-7abebd5b0f08 req-05a904cc-fdc0-48cd-ba54-97773e4892a1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.005 239969 DEBUG oslo_concurrency.lockutils [req-905fbc64-0f29-4003-8822-7abebd5b0f08 req-05a904cc-fdc0-48cd-ba54-97773e4892a1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.005 239969 DEBUG oslo_concurrency.lockutils [req-905fbc64-0f29-4003-8822-7abebd5b0f08 req-05a904cc-fdc0-48cd-ba54-97773e4892a1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.006 239969 DEBUG nova.compute.manager [req-905fbc64-0f29-4003-8822-7abebd5b0f08 req-05a904cc-fdc0-48cd-ba54-97773e4892a1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.006 239969 WARNING nova.compute.manager [req-905fbc64-0f29-4003-8822-7abebd5b0f08 req-05a904cc-fdc0-48cd-ba54-97773e4892a1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received unexpected event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with vm_state suspended and task_state resuming.
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.034 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:25 compute-0 podman[334195]: 2026-01-26 16:12:25.057074075 +0000 UTC m=+0.420245091 container create 2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.069 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.076 239969 INFO nova.virt.libvirt.driver [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance running successfully.
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.077 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:12:25 compute-0 virtqemud[240263]: argument unsupported: QEMU guest agent is not configured
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.083 239969 DEBUG nova.virt.libvirt.guest [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.084 239969 DEBUG nova.compute.manager [None req-daa027dd-e355-4e72-8dee-247f79c8511d 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:25 compute-0 systemd[1]: Started libpod-conmon-2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88.scope.
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.136 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.138 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443944.5946825, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.138 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Resumed (Lifecycle Event)
Jan 26 16:12:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:12:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49303e142a648da9ee5583699b806c54a76eb8b1b9941da6b9f60301f91d9f39/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.163 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.166 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:12:25 compute-0 ceph-mon[75140]: pgmap v1848: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 op/s
Jan 26 16:12:25 compute-0 podman[334195]: 2026-01-26 16:12:25.175911612 +0000 UTC m=+0.539082648 container init 2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:12:25 compute-0 podman[334195]: 2026-01-26 16:12:25.182815911 +0000 UTC m=+0.545986927 container start 2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:12:25 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[334210]: [NOTICE]   (334214) : New worker (334216) forked
Jan 26 16:12:25 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[334210]: [NOTICE]   (334214) : Loading success.
Jan 26 16:12:25 compute-0 nova_compute[239965]: 2026-01-26 16:12:25.210 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 26 16:12:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 op/s
Jan 26 16:12:26 compute-0 nova_compute[239965]: 2026-01-26 16:12:26.856 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:27 compute-0 nova_compute[239965]: 2026-01-26 16:12:27.306 239969 DEBUG nova.compute.manager [req-98d96de2-5866-4a74-bd67-940bd0406658 req-7ab4d902-6348-43be-bd5b-f77dd59e26a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:27 compute-0 nova_compute[239965]: 2026-01-26 16:12:27.307 239969 DEBUG oslo_concurrency.lockutils [req-98d96de2-5866-4a74-bd67-940bd0406658 req-7ab4d902-6348-43be-bd5b-f77dd59e26a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:27 compute-0 nova_compute[239965]: 2026-01-26 16:12:27.307 239969 DEBUG oslo_concurrency.lockutils [req-98d96de2-5866-4a74-bd67-940bd0406658 req-7ab4d902-6348-43be-bd5b-f77dd59e26a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:27 compute-0 nova_compute[239965]: 2026-01-26 16:12:27.307 239969 DEBUG oslo_concurrency.lockutils [req-98d96de2-5866-4a74-bd67-940bd0406658 req-7ab4d902-6348-43be-bd5b-f77dd59e26a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:27 compute-0 nova_compute[239965]: 2026-01-26 16:12:27.307 239969 DEBUG nova.compute.manager [req-98d96de2-5866-4a74-bd67-940bd0406658 req-7ab4d902-6348-43be-bd5b-f77dd59e26a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:12:27 compute-0 nova_compute[239965]: 2026-01-26 16:12:27.308 239969 WARNING nova.compute.manager [req-98d96de2-5866-4a74-bd67-940bd0406658 req-7ab4d902-6348-43be-bd5b-f77dd59e26a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received unexpected event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with vm_state active and task_state None.
Jan 26 16:12:27 compute-0 ceph-mon[75140]: pgmap v1849: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 op/s
Jan 26 16:12:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.909112) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443947909133, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 604, "num_deletes": 260, "total_data_size": 594676, "memory_usage": 606120, "flush_reason": "Manual Compaction"}
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443947913647, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 586283, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38092, "largest_seqno": 38695, "table_properties": {"data_size": 582976, "index_size": 1215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7840, "raw_average_key_size": 19, "raw_value_size": 576117, "raw_average_value_size": 1412, "num_data_blocks": 53, "num_entries": 408, "num_filter_entries": 408, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769443919, "oldest_key_time": 1769443919, "file_creation_time": 1769443947, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 4582 microseconds, and 2021 cpu microseconds.
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.913693) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 586283 bytes OK
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.913708) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.915496) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.915509) EVENT_LOG_v1 {"time_micros": 1769443947915505, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.915524) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 591280, prev total WAL file size 591280, number of live WAL files 2.
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.915961) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323532' seq:72057594037927935, type:22 .. '6C6F676D0031353036' seq:0, type:0; will stop at (end)
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(572KB)], [83(8388KB)]
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443947916007, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 9175851, "oldest_snapshot_seqno": -1}
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6345 keys, 9041324 bytes, temperature: kUnknown
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443947969516, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 9041324, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8998142, "index_size": 26240, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 161926, "raw_average_key_size": 25, "raw_value_size": 8883708, "raw_average_value_size": 1400, "num_data_blocks": 1051, "num_entries": 6345, "num_filter_entries": 6345, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769443947, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.969697) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 9041324 bytes
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.970887) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.3 rd, 168.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 8.2 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(31.1) write-amplify(15.4) OK, records in: 6880, records dropped: 535 output_compression: NoCompression
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.970903) EVENT_LOG_v1 {"time_micros": 1769443947970895, "job": 48, "event": "compaction_finished", "compaction_time_micros": 53569, "compaction_time_cpu_micros": 20321, "output_level": 6, "num_output_files": 1, "total_output_size": 9041324, "num_input_records": 6880, "num_output_records": 6345, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443947971091, "job": 48, "event": "table_file_deletion", "file_number": 85}
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769443947972439, "job": 48, "event": "table_file_deletion", "file_number": 83}
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.915909) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.972463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.972467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.972468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.972470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:12:27 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:12:27.972471) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:12:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.5 KiB/s wr, 65 op/s
Jan 26 16:12:28 compute-0 ovn_controller[146046]: 2026-01-26T16:12:28Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e8:8d:bc 10.100.0.13
Jan 26 16:12:28 compute-0 ovn_controller[146046]: 2026-01-26T16:12:28Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e8:8d:bc 10.100.0.13
Jan 26 16:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:12:28
Jan 26 16:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', '.mgr', '.rgw.root', 'volumes', 'backups', 'images', 'cephfs.cephfs.meta']
Jan 26 16:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:12:28 compute-0 ovn_controller[146046]: 2026-01-26T16:12:28Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:74:58:77 10.100.0.6
Jan 26 16:12:28 compute-0 ceph-mon[75140]: pgmap v1850: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.5 KiB/s wr, 65 op/s
Jan 26 16:12:30 compute-0 nova_compute[239965]: 2026-01-26 16:12:30.037 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1022 KiB/s rd, 1.3 KiB/s wr, 56 op/s
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:12:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:12:31 compute-0 ceph-mon[75140]: pgmap v1851: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1022 KiB/s rd, 1.3 KiB/s wr, 56 op/s
Jan 26 16:12:31 compute-0 nova_compute[239965]: 2026-01-26 16:12:31.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 240 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.7 MiB/s wr, 109 op/s
Jan 26 16:12:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:33 compute-0 ceph-mon[75140]: pgmap v1852: 305 pgs: 305 active+clean; 240 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.7 MiB/s wr, 109 op/s
Jan 26 16:12:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 858 KiB/s rd, 2.2 MiB/s wr, 113 op/s
Jan 26 16:12:34 compute-0 ceph-mon[75140]: pgmap v1853: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 858 KiB/s rd, 2.2 MiB/s wr, 113 op/s
Jan 26 16:12:35 compute-0 nova_compute[239965]: 2026-01-26 16:12:35.039 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 248 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 858 KiB/s rd, 2.2 MiB/s wr, 113 op/s
Jan 26 16:12:36 compute-0 nova_compute[239965]: 2026-01-26 16:12:36.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:37 compute-0 ceph-mon[75140]: pgmap v1854: 305 pgs: 305 active+clean; 248 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 858 KiB/s rd, 2.2 MiB/s wr, 113 op/s
Jan 26 16:12:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 248 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 858 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Jan 26 16:12:38 compute-0 nova_compute[239965]: 2026-01-26 16:12:38.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:39 compute-0 ceph-mon[75140]: pgmap v1855: 305 pgs: 305 active+clean; 248 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 858 KiB/s rd, 2.2 MiB/s wr, 114 op/s
Jan 26 16:12:39 compute-0 nova_compute[239965]: 2026-01-26 16:12:39.498 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:39 compute-0 nova_compute[239965]: 2026-01-26 16:12:39.498 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:39 compute-0 nova_compute[239965]: 2026-01-26 16:12:39.521 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:12:39 compute-0 nova_compute[239965]: 2026-01-26 16:12:39.612 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:39 compute-0 nova_compute[239965]: 2026-01-26 16:12:39.613 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:39 compute-0 nova_compute[239965]: 2026-01-26 16:12:39.622 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:12:39 compute-0 nova_compute[239965]: 2026-01-26 16:12:39.622 239969 INFO nova.compute.claims [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:12:39 compute-0 nova_compute[239965]: 2026-01-26 16:12:39.777 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.040 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 248 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Jan 26 16:12:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:12:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1327090972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.353 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.360 239969 DEBUG nova.compute.provider_tree [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.386 239969 DEBUG nova.scheduler.client.report [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.425 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.426 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.501 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.501 239969 DEBUG nova.network.neutron [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.521 239969 INFO nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.548 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.658 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.660 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.661 239969 INFO nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Creating image(s)
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.688 239969 DEBUG nova.storage.rbd_utils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.716 239969 DEBUG nova.storage.rbd_utils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.744 239969 DEBUG nova.storage.rbd_utils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.749 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.833 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.834 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.835 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.836 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.858 239969 DEBUG nova.storage.rbd_utils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.862 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:40 compute-0 nova_compute[239965]: 2026-01-26 16:12:40.900 239969 DEBUG nova.policy [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.135 239969 DEBUG oslo_concurrency.lockutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.135 239969 DEBUG oslo_concurrency.lockutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.136 239969 DEBUG oslo_concurrency.lockutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.136 239969 DEBUG oslo_concurrency.lockutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.136 239969 DEBUG oslo_concurrency.lockutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.137 239969 INFO nova.compute.manager [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Terminating instance
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.138 239969 DEBUG nova.compute.manager [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.151 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.289s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:41 compute-0 kernel: tapd8d3ce43-ab (unregistering): left promiscuous mode
Jan 26 16:12:41 compute-0 NetworkManager[48954]: <info>  [1769443961.1870] device (tapd8d3ce43-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:12:41 compute-0 ovn_controller[146046]: 2026-01-26T16:12:41Z|01095|binding|INFO|Releasing lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 from this chassis (sb_readonly=0)
Jan 26 16:12:41 compute-0 ovn_controller[146046]: 2026-01-26T16:12:41Z|01096|binding|INFO|Setting lport d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 down in Southbound
Jan 26 16:12:41 compute-0 ovn_controller[146046]: 2026-01-26T16:12:41Z|01097|binding|INFO|Removing iface tapd8d3ce43-ab ovn-installed in OVS
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.222 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.234 239969 DEBUG nova.storage.rbd_utils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.252 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:74:58:77 10.100.0.6'], port_security=['fa:16:3e:74:58:77 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd8d1cdeaf6004b94bba2d3c727959e51', 'neutron:revision_number': '11', 'neutron:security_group_ids': 'fb0a5e0f-b1d5-4f6c-9884-3b66f10a59d8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=568e2bde-137f-4351-b2df-67acb89970b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.254 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 in datapath 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 unbound from our chassis
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.255 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:12:41 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Jan 26 16:12:41 compute-0 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006a.scope: Consumed 4.190s CPU time.
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.257 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[24ff57c7-8493-4ce3-9674-337fe8206fee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.259 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 namespace which is not needed anymore
Jan 26 16:12:41 compute-0 systemd-machined[208061]: Machine qemu-136-instance-0000006a terminated.
Jan 26 16:12:41 compute-0 ceph-mon[75140]: pgmap v1856: 305 pgs: 305 active+clean; 248 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 649 KiB/s rd, 2.2 MiB/s wr, 84 op/s
Jan 26 16:12:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1327090972' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.328 239969 DEBUG nova.objects.instance [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid b4c8aa81-ea1c-4786-b53e-082bc9fadd9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.345 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.346 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Ensure instance console log exists: /var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.347 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.347 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.348 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.368 239969 INFO nova.virt.libvirt.driver [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Instance destroyed successfully.
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.369 239969 DEBUG nova.objects.instance [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lazy-loading 'resources' on Instance uuid 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:41 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[334210]: [NOTICE]   (334214) : haproxy version is 2.8.14-c23fe91
Jan 26 16:12:41 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[334210]: [NOTICE]   (334214) : path to executable is /usr/sbin/haproxy
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.390 239969 DEBUG nova.virt.libvirt.vif [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:10:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-944879613',display_name='tempest-ServersNegativeTestJSON-server-944879613',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-944879613',id=106,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:12:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d8d1cdeaf6004b94bba2d3c727959e51',ramdisk_id='',reservation_id='r-np89kd30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1013867593',owner_user_name='tempest-ServersNegativeTestJSON-1013867593-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:12:25Z,user_data=None,user_id='2d9b11d846a34b6e892fb80186bedba5',uuid=20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:12:41 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[334210]: [WARNING]  (334214) : Exiting Master process...
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.390 239969 DEBUG nova.network.os_vif_util [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converting VIF {"id": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "address": "fa:16:3e:74:58:77", "network": {"id": "74a6493e-9050-4f8b-9dc9-e7bb3992eeb6", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1044665582-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d8d1cdeaf6004b94bba2d3c727959e51", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8d3ce43-ab", "ovs_interfaceid": "d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.391 239969 DEBUG nova.network.os_vif_util [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:41 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[334210]: [ALERT]    (334214) : Current worker (334216) exited with code 143 (Terminated)
Jan 26 16:12:41 compute-0 neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6[334210]: [WARNING]  (334214) : All workers exited. Exiting... (0)
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.391 239969 DEBUG os_vif [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.393 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.393 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8d3ce43-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:41 compute-0 systemd[1]: libpod-2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88.scope: Deactivated successfully.
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.395 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.397 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.399 239969 INFO os_vif [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:74:58:77,bridge_name='br-int',has_traffic_filtering=True,id=d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3,network=Network(74a6493e-9050-4f8b-9dc9-e7bb3992eeb6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8d3ce43-ab')
Jan 26 16:12:41 compute-0 podman[334436]: 2026-01-26 16:12:41.401231401 +0000 UTC m=+0.045758241 container died 2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88-userdata-shm.mount: Deactivated successfully.
Jan 26 16:12:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-49303e142a648da9ee5583699b806c54a76eb8b1b9941da6b9f60301f91d9f39-merged.mount: Deactivated successfully.
Jan 26 16:12:41 compute-0 podman[334436]: 2026-01-26 16:12:41.436293068 +0000 UTC m=+0.080819908 container cleanup 2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:12:41 compute-0 systemd[1]: libpod-conmon-2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88.scope: Deactivated successfully.
Jan 26 16:12:41 compute-0 podman[334491]: 2026-01-26 16:12:41.522914016 +0000 UTC m=+0.066281532 container remove 2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.529 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1ac4eb5d-e30e-49be-a235-f2b246d64202]: (4, ('Mon Jan 26 04:12:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 (2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88)\n2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88\nMon Jan 26 04:12:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 (2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88)\n2a6fc9eb8bf37bb77ac70b57afd44749ac25621fe663dfe0fd068ca46083fe88\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.531 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[62eb06a6-c4ae-4290-b296-74016a3f7864]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.532 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap74a6493e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.535 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:41 compute-0 kernel: tap74a6493e-90: left promiscuous mode
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.550 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.553 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4497c0f0-2b69-4a44-96ea-7edbcbe65dc2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.563 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b9e5e033-99ae-40b9-9041-3cfec5a080bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.565 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c0fa16a4-d901-4b2d-bdc1-04056e509c30]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.582 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2a401800-b4b2-487f-9688-78254e24dc63]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 543482, 'reachable_time': 40655, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334507, 'error': None, 'target': 'ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.585 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-74a6493e-9050-4f8b-9dc9-e7bb3992eeb6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:12:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:41.585 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[e312ced9-b303-4c45-9c99-c109c4f48e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d74a6493e\x2d9050\x2d4f8b\x2d9dc9\x2de7bb3992eeb6.mount: Deactivated successfully.
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.663 239969 INFO nova.virt.libvirt.driver [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Deleting instance files /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_del
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.664 239969 INFO nova.virt.libvirt.driver [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Deletion of /var/lib/nova/instances/20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4_del complete
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.715 239969 INFO nova.compute.manager [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Took 0.58 seconds to destroy the instance on the hypervisor.
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.716 239969 DEBUG oslo.service.loopingcall [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.716 239969 DEBUG nova.compute.manager [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.716 239969 DEBUG nova.network.neutron [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.925 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.955 239969 DEBUG nova.compute.manager [req-58178a99-aa52-45a3-99bf-f9e1e8524f7d req-2b08fabb-a579-43c6-b9bf-e73184b83c1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-unplugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.956 239969 DEBUG oslo_concurrency.lockutils [req-58178a99-aa52-45a3-99bf-f9e1e8524f7d req-2b08fabb-a579-43c6-b9bf-e73184b83c1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.956 239969 DEBUG oslo_concurrency.lockutils [req-58178a99-aa52-45a3-99bf-f9e1e8524f7d req-2b08fabb-a579-43c6-b9bf-e73184b83c1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.956 239969 DEBUG oslo_concurrency.lockutils [req-58178a99-aa52-45a3-99bf-f9e1e8524f7d req-2b08fabb-a579-43c6-b9bf-e73184b83c1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.957 239969 DEBUG nova.compute.manager [req-58178a99-aa52-45a3-99bf-f9e1e8524f7d req-2b08fabb-a579-43c6-b9bf-e73184b83c1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-unplugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:12:41 compute-0 nova_compute[239965]: 2026-01-26 16:12:41.957 239969 DEBUG nova.compute.manager [req-58178a99-aa52-45a3-99bf-f9e1e8524f7d req-2b08fabb-a579-43c6-b9bf-e73184b83c1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-unplugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:12:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 242 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 3.1 MiB/s wr, 102 op/s
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.675 239969 DEBUG nova.network.neutron [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Successfully created port: e4857015-0634-48e6-ba17-182f06530971 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.833 239969 DEBUG nova.network.neutron [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.848 239969 INFO nova.compute.manager [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Took 1.13 seconds to deallocate network for instance.
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.891 239969 DEBUG oslo_concurrency.lockutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.891 239969 DEBUG oslo_concurrency.lockutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.911 239969 DEBUG nova.scheduler.client.report [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.928 239969 DEBUG nova.scheduler.client.report [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.929 239969 DEBUG nova.compute.provider_tree [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.944 239969 DEBUG nova.scheduler.client.report [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:12:42 compute-0 nova_compute[239965]: 2026-01-26 16:12:42.968 239969 DEBUG nova.scheduler.client.report [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:12:43 compute-0 nova_compute[239965]: 2026-01-26 16:12:43.035 239969 DEBUG oslo_concurrency.processutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:43 compute-0 ceph-mon[75140]: pgmap v1857: 305 pgs: 305 active+clean; 242 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 660 KiB/s rd, 3.1 MiB/s wr, 102 op/s
Jan 26 16:12:43 compute-0 nova_compute[239965]: 2026-01-26 16:12:43.533 239969 DEBUG nova.network.neutron [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Successfully created port: d55c5196-61aa-4fe8-81d6-2c1ea20ab825 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:12:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:12:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343617172' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:43 compute-0 nova_compute[239965]: 2026-01-26 16:12:43.568 239969 DEBUG oslo_concurrency.processutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:43 compute-0 nova_compute[239965]: 2026-01-26 16:12:43.575 239969 DEBUG nova.compute.provider_tree [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:12:43 compute-0 nova_compute[239965]: 2026-01-26 16:12:43.598 239969 DEBUG nova.scheduler.client.report [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:12:43 compute-0 nova_compute[239965]: 2026-01-26 16:12:43.630 239969 DEBUG oslo_concurrency.lockutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:43 compute-0 nova_compute[239965]: 2026-01-26 16:12:43.656 239969 INFO nova.scheduler.client.report [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Deleted allocations for instance 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4
Jan 26 16:12:43 compute-0 nova_compute[239965]: 2026-01-26 16:12:43.717 239969 DEBUG oslo_concurrency.lockutils [None req-182bd4ce-3c54-4507-8dd7-f6db7f9f721e 2d9b11d846a34b6e892fb80186bedba5 d8d1cdeaf6004b94bba2d3c727959e51 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:44 compute-0 nova_compute[239965]: 2026-01-26 16:12:44.042 239969 DEBUG nova.compute.manager [req-0ad838bf-3d01-467e-830c-b7fb303157ee req-c74f1faf-2881-4887-be40-d34c1e8cf5ee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:44 compute-0 nova_compute[239965]: 2026-01-26 16:12:44.042 239969 DEBUG oslo_concurrency.lockutils [req-0ad838bf-3d01-467e-830c-b7fb303157ee req-c74f1faf-2881-4887-be40-d34c1e8cf5ee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:44 compute-0 nova_compute[239965]: 2026-01-26 16:12:44.042 239969 DEBUG oslo_concurrency.lockutils [req-0ad838bf-3d01-467e-830c-b7fb303157ee req-c74f1faf-2881-4887-be40-d34c1e8cf5ee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:44 compute-0 nova_compute[239965]: 2026-01-26 16:12:44.043 239969 DEBUG oslo_concurrency.lockutils [req-0ad838bf-3d01-467e-830c-b7fb303157ee req-c74f1faf-2881-4887-be40-d34c1e8cf5ee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:44 compute-0 nova_compute[239965]: 2026-01-26 16:12:44.043 239969 DEBUG nova.compute.manager [req-0ad838bf-3d01-467e-830c-b7fb303157ee req-c74f1faf-2881-4887-be40-d34c1e8cf5ee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] No waiting events found dispatching network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:12:44 compute-0 nova_compute[239965]: 2026-01-26 16:12:44.043 239969 WARNING nova.compute.manager [req-0ad838bf-3d01-467e-830c-b7fb303157ee req-c74f1faf-2881-4887-be40-d34c1e8cf5ee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received unexpected event network-vif-plugged-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 for instance with vm_state deleted and task_state None.
Jan 26 16:12:44 compute-0 nova_compute[239965]: 2026-01-26 16:12:44.043 239969 DEBUG nova.compute.manager [req-0ad838bf-3d01-467e-830c-b7fb303157ee req-c74f1faf-2881-4887-be40-d34c1e8cf5ee a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Received event network-vif-deleted-d8d3ce43-ab49-49dd-8b31-8ebbdcaec8b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 221 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 26 16:12:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1343617172' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:44 compute-0 nova_compute[239965]: 2026-01-26 16:12:44.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:45 compute-0 ceph-mon[75140]: pgmap v1858: 305 pgs: 305 active+clean; 221 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 245 KiB/s rd, 1.8 MiB/s wr, 75 op/s
Jan 26 16:12:45 compute-0 nova_compute[239965]: 2026-01-26 16:12:45.463 239969 DEBUG nova.network.neutron [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Successfully updated port: e4857015-0634-48e6-ba17-182f06530971 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:12:45 compute-0 nova_compute[239965]: 2026-01-26 16:12:45.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:45 compute-0 nova_compute[239965]: 2026-01-26 16:12:45.617 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.166 239969 DEBUG nova.compute.manager [req-0f6caee1-7e31-4f9e-93f7-388703dff0cd req-38cde5ba-b7c6-45e0-8a54-bf885a3b07e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-changed-e4857015-0634-48e6-ba17-182f06530971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.167 239969 DEBUG nova.compute.manager [req-0f6caee1-7e31-4f9e-93f7-388703dff0cd req-38cde5ba-b7c6-45e0-8a54-bf885a3b07e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Refreshing instance network info cache due to event network-changed-e4857015-0634-48e6-ba17-182f06530971. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.167 239969 DEBUG oslo_concurrency.lockutils [req-0f6caee1-7e31-4f9e-93f7-388703dff0cd req-38cde5ba-b7c6-45e0-8a54-bf885a3b07e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.167 239969 DEBUG oslo_concurrency.lockutils [req-0f6caee1-7e31-4f9e-93f7-388703dff0cd req-38cde5ba-b7c6-45e0-8a54-bf885a3b07e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.168 239969 DEBUG nova.network.neutron [req-0f6caee1-7e31-4f9e-93f7-388703dff0cd req-38cde5ba-b7c6-45e0-8a54-bf885a3b07e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Refreshing network info cache for port e4857015-0634-48e6-ba17-182f06530971 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:12:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 213 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.337 239969 DEBUG nova.network.neutron [req-0f6caee1-7e31-4f9e-93f7-388703dff0cd req-38cde5ba-b7c6-45e0-8a54-bf885a3b07e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:12:46 compute-0 ceph-mon[75140]: pgmap v1859: 305 pgs: 305 active+clean; 213 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.395 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.528 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.699 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.699 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.699 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.700 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 67eb236e-aeee-43e6-9b43-41c17fcc37a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.769 239969 DEBUG nova.network.neutron [req-0f6caee1-7e31-4f9e-93f7-388703dff0cd req-38cde5ba-b7c6-45e0-8a54-bf885a3b07e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.785 239969 DEBUG oslo_concurrency.lockutils [req-0f6caee1-7e31-4f9e-93f7-388703dff0cd req-38cde5ba-b7c6-45e0-8a54-bf885a3b07e9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:46 compute-0 nova_compute[239965]: 2026-01-26 16:12:46.974 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:47 compute-0 nova_compute[239965]: 2026-01-26 16:12:47.508 239969 DEBUG nova.network.neutron [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Successfully updated port: d55c5196-61aa-4fe8-81d6-2c1ea20ab825 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:12:47 compute-0 nova_compute[239965]: 2026-01-26 16:12:47.528 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:47 compute-0 nova_compute[239965]: 2026-01-26 16:12:47.528 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:47 compute-0 nova_compute[239965]: 2026-01-26 16:12:47.529 239969 DEBUG nova.network.neutron [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:12:47 compute-0 nova_compute[239965]: 2026-01-26 16:12:47.868 239969 DEBUG nova.network.neutron [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:12:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 213 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 26 16:12:48 compute-0 nova_compute[239965]: 2026-01-26 16:12:48.291 239969 DEBUG nova.compute.manager [req-b3224184-a0a6-4074-870b-5c0608e15c6b req-4087497c-3a3f-447b-8f4f-45e2dfe4d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-changed-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:48 compute-0 nova_compute[239965]: 2026-01-26 16:12:48.291 239969 DEBUG nova.compute.manager [req-b3224184-a0a6-4074-870b-5c0608e15c6b req-4087497c-3a3f-447b-8f4f-45e2dfe4d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Refreshing instance network info cache due to event network-changed-d55c5196-61aa-4fe8-81d6-2c1ea20ab825. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:12:48 compute-0 nova_compute[239965]: 2026-01-26 16:12:48.292 239969 DEBUG oslo_concurrency.lockutils [req-b3224184-a0a6-4074-870b-5c0608e15c6b req-4087497c-3a3f-447b-8f4f-45e2dfe4d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:12:48 compute-0 podman[334531]: 2026-01-26 16:12:48.380724374 +0000 UTC m=+0.059457295 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 16:12:48 compute-0 podman[334532]: 2026-01-26 16:12:48.41080081 +0000 UTC m=+0.089130792 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:12:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:12:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/68447672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:12:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:12:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/68447672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011188412439220002 of space, bias 1.0, pg target 0.33565237317660007 quantized to 32 (current 32)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010183882478889236 of space, bias 1.0, pg target 0.3055164743666771 quantized to 32 (current 32)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.401113826222366e-07 of space, bias 4.0, pg target 0.0008881336591466839 quantized to 16 (current 16)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:12:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:12:49 compute-0 ceph-mon[75140]: pgmap v1860: 305 pgs: 305 active+clean; 213 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 26 16:12:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/68447672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:12:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/68447672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:12:49 compute-0 nova_compute[239965]: 2026-01-26 16:12:49.710 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 213 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 26 16:12:50 compute-0 ceph-mon[75140]: pgmap v1861: 305 pgs: 305 active+clean; 213 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Jan 26 16:12:51 compute-0 ovn_controller[146046]: 2026-01-26T16:12:51Z|01098|binding|INFO|Releasing lport e0fa1a12-fa6c-4878-8114-78f4badaec8c from this chassis (sb_readonly=0)
Jan 26 16:12:51 compute-0 ovn_controller[146046]: 2026-01-26T16:12:51Z|01099|binding|INFO|Releasing lport a1098b06-e219-4f9c-9e25-e366022a5f05 from this chassis (sb_readonly=0)
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.098 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.397 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.415 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updating instance_info_cache with network_info: [{"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.442 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.442 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.442 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.443 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.443 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.456 239969 DEBUG nova.network.neutron [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updating instance_info_cache with network_info: [{"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.487 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.487 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Instance network_info: |[{"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.488 239969 DEBUG oslo_concurrency.lockutils [req-b3224184-a0a6-4074-870b-5c0608e15c6b req-4087497c-3a3f-447b-8f4f-45e2dfe4d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.488 239969 DEBUG nova.network.neutron [req-b3224184-a0a6-4074-870b-5c0608e15c6b req-4087497c-3a3f-447b-8f4f-45e2dfe4d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Refreshing network info cache for port d55c5196-61aa-4fe8-81d6-2c1ea20ab825 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.493 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Start _get_guest_xml network_info=[{"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.500 239969 WARNING nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.515 239969 DEBUG nova.virt.libvirt.host [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.516 239969 DEBUG nova.virt.libvirt.host [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.523 239969 DEBUG nova.virt.libvirt.host [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.523 239969 DEBUG nova.virt.libvirt.host [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.524 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.524 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.525 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.525 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.525 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.525 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.525 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.526 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.526 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.526 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.526 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.527 239969 DEBUG nova.virt.hardware [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.531 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.597 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.598 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.598 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.598 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.599 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:51 compute-0 nova_compute[239965]: 2026-01-26 16:12:51.981 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 ovn_controller[146046]: 2026-01-26T16:12:52Z|01100|binding|INFO|Releasing lport e0fa1a12-fa6c-4878-8114-78f4badaec8c from this chassis (sb_readonly=0)
Jan 26 16:12:52 compute-0 ovn_controller[146046]: 2026-01-26T16:12:52Z|01101|binding|INFO|Releasing lport a1098b06-e219-4f9c-9e25-e366022a5f05 from this chassis (sb_readonly=0)
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:12:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1172494141' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.101 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.132 239969 DEBUG nova.storage.rbd_utils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.136 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 213 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 26 16:12:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:12:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/663829323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1172494141' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.271 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.672s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.507 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.508 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.699 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.701 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3489MB free_disk=59.92116388678551GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.701 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.702 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:12:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3324043812' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.743 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.745 239969 DEBUG nova.virt.libvirt.vif [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:12:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1287169184',display_name='tempest-TestGettingAddress-server-1287169184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1287169184',id=110,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-u6w8t0q6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:12:40Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b4c8aa81-ea1c-4786-b53e-082bc9fadd9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.745 239969 DEBUG nova.network.os_vif_util [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.746 239969 DEBUG nova.network.os_vif_util [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:62:9b,bridge_name='br-int',has_traffic_filtering=True,id=e4857015-0634-48e6-ba17-182f06530971,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4857015-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.747 239969 DEBUG nova.virt.libvirt.vif [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:12:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1287169184',display_name='tempest-TestGettingAddress-server-1287169184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1287169184',id=110,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-u6w8t0q6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:12:40Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b4c8aa81-ea1c-4786-b53e-082bc9fadd9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.747 239969 DEBUG nova.network.os_vif_util [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.748 239969 DEBUG nova.network.os_vif_util [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:c5:3e,bridge_name='br-int',has_traffic_filtering=True,id=d55c5196-61aa-4fe8-81d6-2c1ea20ab825,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd55c5196-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.749 239969 DEBUG nova.objects.instance [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid b4c8aa81-ea1c-4786-b53e-082bc9fadd9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.762 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <uuid>b4c8aa81-ea1c-4786-b53e-082bc9fadd9a</uuid>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <name>instance-0000006e</name>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-1287169184</nova:name>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:12:51</nova:creationTime>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <nova:port uuid="e4857015-0634-48e6-ba17-182f06530971">
Jan 26 16:12:52 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <nova:port uuid="d55c5196-61aa-4fe8-81d6-2c1ea20ab825">
Jan 26 16:12:52 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fec7:c53e" ipVersion="6"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fec7:c53e" ipVersion="6"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <system>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <entry name="serial">b4c8aa81-ea1c-4786-b53e-082bc9fadd9a</entry>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <entry name="uuid">b4c8aa81-ea1c-4786-b53e-082bc9fadd9a</entry>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </system>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <os>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   </os>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <features>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   </features>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk">
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       </source>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk.config">
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       </source>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:12:52 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:fd:62:9b"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <target dev="tape4857015-06"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:c7:c5:3e"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <target dev="tapd55c5196-61"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a/console.log" append="off"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <video>
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </video>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:12:52 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:12:52 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:12:52 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:12:52 compute-0 nova_compute[239965]: </domain>
Jan 26 16:12:52 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.764 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Preparing to wait for external event network-vif-plugged-e4857015-0634-48e6-ba17-182f06530971 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.764 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.765 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.765 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.765 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Preparing to wait for external event network-vif-plugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.765 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.765 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.766 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.766 239969 DEBUG nova.virt.libvirt.vif [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:12:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1287169184',display_name='tempest-TestGettingAddress-server-1287169184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1287169184',id=110,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-u6w8t0q6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:12:40Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b4c8aa81-ea1c-4786-b53e-082bc9fadd9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.767 239969 DEBUG nova.network.os_vif_util [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.767 239969 DEBUG nova.network.os_vif_util [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fd:62:9b,bridge_name='br-int',has_traffic_filtering=True,id=e4857015-0634-48e6-ba17-182f06530971,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4857015-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.768 239969 DEBUG os_vif [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:62:9b,bridge_name='br-int',has_traffic_filtering=True,id=e4857015-0634-48e6-ba17-182f06530971,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4857015-06') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.769 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.770 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.770 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.774 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.774 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4857015-06, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.774 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4857015-06, col_values=(('external_ids', {'iface-id': 'e4857015-0634-48e6-ba17-182f06530971', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fd:62:9b', 'vm-uuid': 'b4c8aa81-ea1c-4786-b53e-082bc9fadd9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.776 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 NetworkManager[48954]: <info>  [1769443972.7774] manager: (tape4857015-06): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/454)
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.779 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.781 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.782 239969 INFO os_vif [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fd:62:9b,bridge_name='br-int',has_traffic_filtering=True,id=e4857015-0634-48e6-ba17-182f06530971,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4857015-06')
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.783 239969 DEBUG nova.virt.libvirt.vif [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:12:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1287169184',display_name='tempest-TestGettingAddress-server-1287169184',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1287169184',id=110,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-u6w8t0q6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:12:40Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b4c8aa81-ea1c-4786-b53e-082bc9fadd9a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.783 239969 DEBUG nova.network.os_vif_util [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.784 239969 DEBUG nova.network.os_vif_util [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:c5:3e,bridge_name='br-int',has_traffic_filtering=True,id=d55c5196-61aa-4fe8-81d6-2c1ea20ab825,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd55c5196-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.784 239969 DEBUG os_vif [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:c5:3e,bridge_name='br-int',has_traffic_filtering=True,id=d55c5196-61aa-4fe8-81d6-2c1ea20ab825,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd55c5196-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.785 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.785 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.785 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.787 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.787 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd55c5196-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.788 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd55c5196-61, col_values=(('external_ids', {'iface-id': 'd55c5196-61aa-4fe8-81d6-2c1ea20ab825', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:c5:3e', 'vm-uuid': 'b4c8aa81-ea1c-4786-b53e-082bc9fadd9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.790 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 NetworkManager[48954]: <info>  [1769443972.7911] manager: (tapd55c5196-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/455)
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.792 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.796 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 67eb236e-aeee-43e6-9b43-41c17fcc37a5 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.796 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance b4c8aa81-ea1c-4786-b53e-082bc9fadd9a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.796 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.797 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.801 239969 INFO os_vif [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:c5:3e,bridge_name='br-int',has_traffic_filtering=True,id=d55c5196-61aa-4fe8-81d6-2c1ea20ab825,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd55c5196-61')
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.857 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.857 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.858 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:fd:62:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.858 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:c7:c5:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.858 239969 INFO nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Using config drive
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.882 239969 DEBUG nova.storage.rbd_utils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:52 compute-0 nova_compute[239965]: 2026-01-26 16:12:52.926 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:53 compute-0 ceph-mon[75140]: pgmap v1862: 305 pgs: 305 active+clean; 213 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 26 16:12:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/663829323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3324043812' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:12:53 compute-0 ovn_controller[146046]: 2026-01-26T16:12:53Z|01102|binding|INFO|Releasing lport e0fa1a12-fa6c-4878-8114-78f4badaec8c from this chassis (sb_readonly=0)
Jan 26 16:12:53 compute-0 ovn_controller[146046]: 2026-01-26T16:12:53Z|01103|binding|INFO|Releasing lport a1098b06-e219-4f9c-9e25-e366022a5f05 from this chassis (sb_readonly=0)
Jan 26 16:12:53 compute-0 nova_compute[239965]: 2026-01-26 16:12:53.429 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:12:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2932604084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:53 compute-0 nova_compute[239965]: 2026-01-26 16:12:53.463 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:53 compute-0 nova_compute[239965]: 2026-01-26 16:12:53.468 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:12:53 compute-0 nova_compute[239965]: 2026-01-26 16:12:53.484 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:12:53 compute-0 nova_compute[239965]: 2026-01-26 16:12:53.506 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:12:53 compute-0 nova_compute[239965]: 2026-01-26 16:12:53.506 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:53 compute-0 nova_compute[239965]: 2026-01-26 16:12:53.876 239969 INFO nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Creating config drive at /var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a/disk.config
Jan 26 16:12:53 compute-0 nova_compute[239965]: 2026-01-26 16:12:53.887 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd7xu2r11 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.029 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd7xu2r11" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.058 239969 DEBUG nova.storage.rbd_utils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.067 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a/disk.config b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.191 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.191 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.193 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:12:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 889 KiB/s wr, 44 op/s
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.244 239969 DEBUG oslo_concurrency.processutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a/disk.config b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.245 239969 INFO nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Deleting local config drive /var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a/disk.config because it was imported into RBD.
Jan 26 16:12:54 compute-0 kernel: tape4857015-06: entered promiscuous mode
Jan 26 16:12:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2932604084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:12:54 compute-0 NetworkManager[48954]: <info>  [1769443974.3227] manager: (tape4857015-06): new Tun device (/org/freedesktop/NetworkManager/Devices/456)
Jan 26 16:12:54 compute-0 ovn_controller[146046]: 2026-01-26T16:12:54Z|01104|binding|INFO|Claiming lport e4857015-0634-48e6-ba17-182f06530971 for this chassis.
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.327 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:54 compute-0 ovn_controller[146046]: 2026-01-26T16:12:54Z|01105|binding|INFO|e4857015-0634-48e6-ba17-182f06530971: Claiming fa:16:3e:fd:62:9b 10.100.0.7
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.340 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:62:9b 10.100.0.7'], port_security=['fa:16:3e:fd:62:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b4c8aa81-ea1c-4786-b53e-082bc9fadd9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '85d3d038-2bc9-4f80-8a4a-2f5ebd3d53a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7117378c-aa96-48b6-b2fa-6c8b5ccb02bd, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e4857015-0634-48e6-ba17-182f06530971) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.341 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e4857015-0634-48e6-ba17-182f06530971 in datapath daa791d6-dfc2-418f-92aa-2771eaf5b1a9 bound to our chassis
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.342 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daa791d6-dfc2-418f-92aa-2771eaf5b1a9
Jan 26 16:12:54 compute-0 NetworkManager[48954]: <info>  [1769443974.3489] manager: (tapd55c5196-61): new Tun device (/org/freedesktop/NetworkManager/Devices/457)
Jan 26 16:12:54 compute-0 kernel: tapd55c5196-61: entered promiscuous mode
Jan 26 16:12:54 compute-0 systemd-udevd[334760]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:12:54 compute-0 ovn_controller[146046]: 2026-01-26T16:12:54Z|01106|binding|INFO|Setting lport e4857015-0634-48e6-ba17-182f06530971 ovn-installed in OVS
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.356 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:54 compute-0 ovn_controller[146046]: 2026-01-26T16:12:54Z|01107|binding|INFO|Setting lport e4857015-0634-48e6-ba17-182f06530971 up in Southbound
Jan 26 16:12:54 compute-0 systemd-udevd[334762]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:12:54 compute-0 ovn_controller[146046]: 2026-01-26T16:12:54Z|01108|if_status|INFO|Not updating pb chassis for d55c5196-61aa-4fe8-81d6-2c1ea20ab825 now as sb is readonly
Jan 26 16:12:54 compute-0 ovn_controller[146046]: 2026-01-26T16:12:54Z|01109|binding|INFO|Claiming lport d55c5196-61aa-4fe8-81d6-2c1ea20ab825 for this chassis.
Jan 26 16:12:54 compute-0 ovn_controller[146046]: 2026-01-26T16:12:54Z|01110|binding|INFO|d55c5196-61aa-4fe8-81d6-2c1ea20ab825: Claiming fa:16:3e:c7:c5:3e 2001:db8:0:1:f816:3eff:fec7:c53e 2001:db8::f816:3eff:fec7:c53e
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.363 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[60522874-508e-4d54-b141-47e0151d25a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.371 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:c5:3e 2001:db8:0:1:f816:3eff:fec7:c53e 2001:db8::f816:3eff:fec7:c53e'], port_security=['fa:16:3e:c7:c5:3e 2001:db8:0:1:f816:3eff:fec7:c53e 2001:db8::f816:3eff:fec7:c53e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fec7:c53e/64 2001:db8::f816:3eff:fec7:c53e/64', 'neutron:device_id': 'b4c8aa81-ea1c-4786-b53e-082bc9fadd9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '85d3d038-2bc9-4f80-8a4a-2f5ebd3d53a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46f31be5-6cff-44a8-9f7f-e0dfbc4df89f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d55c5196-61aa-4fe8-81d6-2c1ea20ab825) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:12:54 compute-0 ovn_controller[146046]: 2026-01-26T16:12:54Z|01111|binding|INFO|Setting lport d55c5196-61aa-4fe8-81d6-2c1ea20ab825 ovn-installed in OVS
Jan 26 16:12:54 compute-0 ovn_controller[146046]: 2026-01-26T16:12:54Z|01112|binding|INFO|Setting lport d55c5196-61aa-4fe8-81d6-2c1ea20ab825 up in Southbound
Jan 26 16:12:54 compute-0 NetworkManager[48954]: <info>  [1769443974.3737] device (tapd55c5196-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:12:54 compute-0 NetworkManager[48954]: <info>  [1769443974.3748] device (tapd55c5196-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:54 compute-0 NetworkManager[48954]: <info>  [1769443974.3864] device (tape4857015-06): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:12:54 compute-0 NetworkManager[48954]: <info>  [1769443974.3876] device (tape4857015-06): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:12:54 compute-0 systemd-machined[208061]: New machine qemu-137-instance-0000006e.
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.413 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[47cffb4b-8720-4201-b2b6-292298f2bd44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.416 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[db0dc6f9-5c07-491b-9370-4d3ba763e4c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 systemd[1]: Started Virtual Machine qemu-137-instance-0000006e.
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.455 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1d162aab-90db-46a8-9e99-0996452974d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.474 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6b80f62f-235e-4206-bb0d-e41eb8c32289]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaa791d6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:4e:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542193, 'reachable_time': 15395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334773, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.493 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e8bb0a67-98a0-4b07-b5b0-96676614443c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdaa791d6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542204, 'tstamp': 542204}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334778, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdaa791d6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542207, 'tstamp': 542207}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334778, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.494 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaa791d6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.496 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.497 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaa791d6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.498 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.498 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaa791d6-d0, col_values=(('external_ids', {'iface-id': 'e0fa1a12-fa6c-4878-8114-78f4badaec8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.499 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.500 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d55c5196-61aa-4fe8-81d6-2c1ea20ab825 in datapath ce7228b3-b329-4b48-a563-2dfcf286a94e unbound from our chassis
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.502 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce7228b3-b329-4b48-a563-2dfcf286a94e
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.516 239969 DEBUG nova.network.neutron [req-b3224184-a0a6-4074-870b-5c0608e15c6b req-4087497c-3a3f-447b-8f4f-45e2dfe4d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updated VIF entry in instance network info cache for port d55c5196-61aa-4fe8-81d6-2c1ea20ab825. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.517 239969 DEBUG nova.network.neutron [req-b3224184-a0a6-4074-870b-5c0608e15c6b req-4087497c-3a3f-447b-8f4f-45e2dfe4d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updating instance_info_cache with network_info: [{"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.518 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6078a9d0-8ff3-475a-b588-a9f9de3737c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.536 239969 DEBUG oslo_concurrency.lockutils [req-b3224184-a0a6-4074-870b-5c0608e15c6b req-4087497c-3a3f-447b-8f4f-45e2dfe4d458 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.548 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9240296c-0a5f-4a4c-82ab-7516352d3bfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.552 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6388d9-78b2-4ab2-b87b-0b3339bc9b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.583 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9016627d-735b-4930-b533-4059bfa98c0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.601 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7e0f9d-6ba0-4c04-bc4c-288f812d4984]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce7228b3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:b5:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 4, 'rx_bytes': 1916, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542359, 'reachable_time': 27262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 334785, 'error': None, 'target': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.619 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba28fcd3-bf7a-4ec9-b61a-adcd71de93c2]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce7228b3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542371, 'tstamp': 542371}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 334786, 'error': None, 'target': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.621 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce7228b3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.622 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.623 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.623 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce7228b3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.624 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.624 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce7228b3-b0, col_values=(('external_ids', {'iface-id': 'a1098b06-e219-4f9c-9e25-e366022a5f05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:12:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:54.624 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.808 239969 DEBUG nova.compute.manager [req-b19cc615-cec7-4d95-935e-cfc9bbefee68 req-9864f9d6-c9db-482b-bcf7-ff00da6b9ef3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-plugged-e4857015-0634-48e6-ba17-182f06530971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.808 239969 DEBUG oslo_concurrency.lockutils [req-b19cc615-cec7-4d95-935e-cfc9bbefee68 req-9864f9d6-c9db-482b-bcf7-ff00da6b9ef3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.809 239969 DEBUG oslo_concurrency.lockutils [req-b19cc615-cec7-4d95-935e-cfc9bbefee68 req-9864f9d6-c9db-482b-bcf7-ff00da6b9ef3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.809 239969 DEBUG oslo_concurrency.lockutils [req-b19cc615-cec7-4d95-935e-cfc9bbefee68 req-9864f9d6-c9db-482b-bcf7-ff00da6b9ef3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.809 239969 DEBUG nova.compute.manager [req-b19cc615-cec7-4d95-935e-cfc9bbefee68 req-9864f9d6-c9db-482b-bcf7-ff00da6b9ef3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Processing event network-vif-plugged-e4857015-0634-48e6-ba17-182f06530971 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.888 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443974.887833, b4c8aa81-ea1c-4786-b53e-082bc9fadd9a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.888 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] VM Started (Lifecycle Event)
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.917 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.921 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443974.8905735, b4c8aa81-ea1c-4786-b53e-082bc9fadd9a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.921 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] VM Paused (Lifecycle Event)
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.942 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.945 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:12:54 compute-0 nova_compute[239965]: 2026-01-26 16:12:54.974 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:12:55 compute-0 ceph-mon[75140]: pgmap v1863: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 889 KiB/s wr, 44 op/s
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 477 KiB/s wr, 38 op/s
Jan 26 16:12:56 compute-0 ceph-mon[75140]: pgmap v1864: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 477 KiB/s wr, 38 op/s
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.367 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769443961.3657014, 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.368 239969 INFO nova.compute.manager [-] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] VM Stopped (Lifecycle Event)
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.398 239969 DEBUG nova.compute.manager [None req-5c1cee79-d8f7-4a08-a51b-c8713e4414a9 - - - - - -] [instance: 20bcad9b-aeb2-4d82-b1e9-7f2e1c67dae4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.506 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.897 239969 DEBUG nova.compute.manager [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-plugged-e4857015-0634-48e6-ba17-182f06530971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.898 239969 DEBUG oslo_concurrency.lockutils [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.898 239969 DEBUG oslo_concurrency.lockutils [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.898 239969 DEBUG oslo_concurrency.lockutils [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.899 239969 DEBUG nova.compute.manager [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] No event matching network-vif-plugged-e4857015-0634-48e6-ba17-182f06530971 in dict_keys([('network-vif-plugged', 'd55c5196-61aa-4fe8-81d6-2c1ea20ab825')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.899 239969 WARNING nova.compute.manager [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received unexpected event network-vif-plugged-e4857015-0634-48e6-ba17-182f06530971 for instance with vm_state building and task_state spawning.
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.899 239969 DEBUG nova.compute.manager [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-plugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.899 239969 DEBUG oslo_concurrency.lockutils [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.899 239969 DEBUG oslo_concurrency.lockutils [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.900 239969 DEBUG oslo_concurrency.lockutils [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.900 239969 DEBUG nova.compute.manager [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Processing event network-vif-plugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.900 239969 DEBUG nova.compute.manager [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-plugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.900 239969 DEBUG oslo_concurrency.lockutils [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.900 239969 DEBUG oslo_concurrency.lockutils [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.901 239969 DEBUG oslo_concurrency.lockutils [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.901 239969 DEBUG nova.compute.manager [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] No waiting events found dispatching network-vif-plugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.901 239969 WARNING nova.compute.manager [req-f440dfa0-877f-408a-8aa9-3cc607ba35a0 req-509cb7c9-58db-458f-a4b5-ac2d33b60e45 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received unexpected event network-vif-plugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 for instance with vm_state building and task_state spawning.
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.902 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.907 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769443976.9070683, b4c8aa81-ea1c-4786-b53e-082bc9fadd9a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.907 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] VM Resumed (Lifecycle Event)
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.909 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.913 239969 INFO nova.virt.libvirt.driver [-] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Instance spawned successfully.
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.913 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.931 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.938 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.942 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.942 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.943 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.943 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.944 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.944 239969 DEBUG nova.virt.libvirt.driver [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.978 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:12:56 compute-0 nova_compute[239965]: 2026-01-26 16:12:56.981 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:57 compute-0 nova_compute[239965]: 2026-01-26 16:12:57.016 239969 INFO nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Took 16.36 seconds to spawn the instance on the hypervisor.
Jan 26 16:12:57 compute-0 nova_compute[239965]: 2026-01-26 16:12:57.017 239969 DEBUG nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:12:57 compute-0 nova_compute[239965]: 2026-01-26 16:12:57.099 239969 INFO nova.compute.manager [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Took 17.52 seconds to build instance.
Jan 26 16:12:57 compute-0 nova_compute[239965]: 2026-01-26 16:12:57.119 239969 DEBUG oslo_concurrency.lockutils [None req-2abc1f2e-d8be-4379-92c6-cb769feea904 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:57 compute-0 nova_compute[239965]: 2026-01-26 16:12:57.790 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:12:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:12:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 13 KiB/s wr, 69 op/s
Jan 26 16:12:58 compute-0 ceph-mon[75140]: pgmap v1865: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 13 KiB/s wr, 69 op/s
Jan 26 16:12:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:59.235 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:12:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:59.236 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:12:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:12:59.237 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:12:59 compute-0 nova_compute[239965]: 2026-01-26 16:12:59.607 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 69 op/s
Jan 26 16:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:13:00 compute-0 ceph-mon[75140]: pgmap v1866: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 12 KiB/s wr, 69 op/s
Jan 26 16:13:01 compute-0 nova_compute[239965]: 2026-01-26 16:13:01.985 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 119 op/s
Jan 26 16:13:02 compute-0 nova_compute[239965]: 2026-01-26 16:13:02.792 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:03.195 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:03 compute-0 ceph-mon[75140]: pgmap v1867: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 119 op/s
Jan 26 16:13:03 compute-0 ovn_controller[146046]: 2026-01-26T16:13:03Z|01113|binding|INFO|Releasing lport e0fa1a12-fa6c-4878-8114-78f4badaec8c from this chassis (sb_readonly=0)
Jan 26 16:13:03 compute-0 ovn_controller[146046]: 2026-01-26T16:13:03Z|01114|binding|INFO|Releasing lport a1098b06-e219-4f9c-9e25-e366022a5f05 from this chassis (sb_readonly=0)
Jan 26 16:13:03 compute-0 nova_compute[239965]: 2026-01-26 16:13:03.424 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:04 compute-0 nova_compute[239965]: 2026-01-26 16:13:04.092 239969 DEBUG nova.compute.manager [req-b058bf45-d901-4169-b762-0a24d2925890 req-7d90b636-4ae0-4c32-b90b-b2f532277440 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-changed-e4857015-0634-48e6-ba17-182f06530971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:04 compute-0 nova_compute[239965]: 2026-01-26 16:13:04.093 239969 DEBUG nova.compute.manager [req-b058bf45-d901-4169-b762-0a24d2925890 req-7d90b636-4ae0-4c32-b90b-b2f532277440 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Refreshing instance network info cache due to event network-changed-e4857015-0634-48e6-ba17-182f06530971. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:13:04 compute-0 nova_compute[239965]: 2026-01-26 16:13:04.093 239969 DEBUG oslo_concurrency.lockutils [req-b058bf45-d901-4169-b762-0a24d2925890 req-7d90b636-4ae0-4c32-b90b-b2f532277440 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:13:04 compute-0 nova_compute[239965]: 2026-01-26 16:13:04.093 239969 DEBUG oslo_concurrency.lockutils [req-b058bf45-d901-4169-b762-0a24d2925890 req-7d90b636-4ae0-4c32-b90b-b2f532277440 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:13:04 compute-0 nova_compute[239965]: 2026-01-26 16:13:04.093 239969 DEBUG nova.network.neutron [req-b058bf45-d901-4169-b762-0a24d2925890 req-7d90b636-4ae0-4c32-b90b-b2f532277440 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Refreshing network info cache for port e4857015-0634-48e6-ba17-182f06530971 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:13:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 129 op/s
Jan 26 16:13:05 compute-0 ovn_controller[146046]: 2026-01-26T16:13:05Z|01115|binding|INFO|Releasing lport e0fa1a12-fa6c-4878-8114-78f4badaec8c from this chassis (sb_readonly=0)
Jan 26 16:13:05 compute-0 ovn_controller[146046]: 2026-01-26T16:13:05Z|01116|binding|INFO|Releasing lport a1098b06-e219-4f9c-9e25-e366022a5f05 from this chassis (sb_readonly=0)
Jan 26 16:13:05 compute-0 nova_compute[239965]: 2026-01-26 16:13:05.092 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:05 compute-0 ceph-mon[75140]: pgmap v1868: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 129 op/s
Jan 26 16:13:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Jan 26 16:13:06 compute-0 ceph-mon[75140]: pgmap v1869: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 126 op/s
Jan 26 16:13:06 compute-0 nova_compute[239965]: 2026-01-26 16:13:06.988 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:07 compute-0 nova_compute[239965]: 2026-01-26 16:13:07.179 239969 DEBUG nova.network.neutron [req-b058bf45-d901-4169-b762-0a24d2925890 req-7d90b636-4ae0-4c32-b90b-b2f532277440 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updated VIF entry in instance network info cache for port e4857015-0634-48e6-ba17-182f06530971. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:13:07 compute-0 nova_compute[239965]: 2026-01-26 16:13:07.180 239969 DEBUG nova.network.neutron [req-b058bf45-d901-4169-b762-0a24d2925890 req-7d90b636-4ae0-4c32-b90b-b2f532277440 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updating instance_info_cache with network_info: [{"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:13:07 compute-0 nova_compute[239965]: 2026-01-26 16:13:07.198 239969 DEBUG oslo_concurrency.lockutils [req-b058bf45-d901-4169-b762-0a24d2925890 req-7d90b636-4ae0-4c32-b90b-b2f532277440 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:13:07 compute-0 nova_compute[239965]: 2026-01-26 16:13:07.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 106 op/s
Jan 26 16:13:09 compute-0 ceph-mon[75140]: pgmap v1870: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 106 op/s
Jan 26 16:13:09 compute-0 ovn_controller[146046]: 2026-01-26T16:13:09Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:fd:62:9b 10.100.0.7
Jan 26 16:13:09 compute-0 ovn_controller[146046]: 2026-01-26T16:13:09Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:fd:62:9b 10.100.0.7
Jan 26 16:13:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Jan 26 16:13:10 compute-0 sshd-session[334832]: Invalid user sol from 45.148.10.240 port 33428
Jan 26 16:13:10 compute-0 sshd-session[334832]: Connection closed by invalid user sol 45.148.10.240 port 33428 [preauth]
Jan 26 16:13:11 compute-0 ceph-mon[75140]: pgmap v1871: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.3 KiB/s wr, 64 op/s
Jan 26 16:13:11 compute-0 nova_compute[239965]: 2026-01-26 16:13:11.991 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 232 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Jan 26 16:13:12 compute-0 nova_compute[239965]: 2026-01-26 16:13:12.795 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:12 compute-0 ceph-mon[75140]: pgmap v1872: 305 pgs: 305 active+clean; 232 MiB data, 834 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 115 op/s
Jan 26 16:13:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Jan 26 16:13:15 compute-0 ceph-mon[75140]: pgmap v1873: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Jan 26 16:13:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 16:13:16 compute-0 nova_compute[239965]: 2026-01-26 16:13:16.994 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:17 compute-0 ceph-mon[75140]: pgmap v1874: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 16:13:17 compute-0 nova_compute[239965]: 2026-01-26 16:13:17.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 16:13:18 compute-0 ceph-mon[75140]: pgmap v1875: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 16:13:19 compute-0 podman[334834]: 2026-01-26 16:13:19.388607962 +0000 UTC m=+0.067835049 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:13:19 compute-0 podman[334835]: 2026-01-26 16:13:19.419116366 +0000 UTC m=+0.098229141 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:13:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 16:13:21 compute-0 ceph-mon[75140]: pgmap v1876: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Jan 26 16:13:21 compute-0 nova_compute[239965]: 2026-01-26 16:13:21.996 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 26 16:13:22 compute-0 nova_compute[239965]: 2026-01-26 16:13:22.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:23 compute-0 sudo[334877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:13:23 compute-0 sudo[334877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:23 compute-0 sudo[334877]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:23 compute-0 ceph-mon[75140]: pgmap v1877: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 26 16:13:23 compute-0 sudo[334902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 16:13:23 compute-0 sudo[334902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:23 compute-0 sudo[334902]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:13:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:13:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.755 239969 DEBUG nova.compute.manager [req-b660528b-b6d7-4bf3-abc7-fe27b43dbcbd req-199c7b64-d296-4e59-a69d-7991645e5a8b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-changed-e4857015-0634-48e6-ba17-182f06530971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.755 239969 DEBUG nova.compute.manager [req-b660528b-b6d7-4bf3-abc7-fe27b43dbcbd req-199c7b64-d296-4e59-a69d-7991645e5a8b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Refreshing instance network info cache due to event network-changed-e4857015-0634-48e6-ba17-182f06530971. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.755 239969 DEBUG oslo_concurrency.lockutils [req-b660528b-b6d7-4bf3-abc7-fe27b43dbcbd req-199c7b64-d296-4e59-a69d-7991645e5a8b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.755 239969 DEBUG oslo_concurrency.lockutils [req-b660528b-b6d7-4bf3-abc7-fe27b43dbcbd req-199c7b64-d296-4e59-a69d-7991645e5a8b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.755 239969 DEBUG nova.network.neutron [req-b660528b-b6d7-4bf3-abc7-fe27b43dbcbd req-199c7b64-d296-4e59-a69d-7991645e5a8b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Refreshing network info cache for port e4857015-0634-48e6-ba17-182f06530971 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:13:23 compute-0 sudo[334947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:13:23 compute-0 sudo[334947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:23 compute-0 sudo[334947]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.839 239969 DEBUG oslo_concurrency.lockutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.840 239969 DEBUG oslo_concurrency.lockutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.840 239969 DEBUG oslo_concurrency.lockutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.840 239969 DEBUG oslo_concurrency.lockutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.840 239969 DEBUG oslo_concurrency.lockutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.841 239969 INFO nova.compute.manager [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Terminating instance
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.842 239969 DEBUG nova.compute.manager [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:13:23 compute-0 sudo[334972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:13:23 compute-0 sudo[334972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:23 compute-0 kernel: tape4857015-06 (unregistering): left promiscuous mode
Jan 26 16:13:23 compute-0 NetworkManager[48954]: <info>  [1769444003.8894] device (tape4857015-06): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:23 compute-0 ovn_controller[146046]: 2026-01-26T16:13:23Z|01117|binding|INFO|Releasing lport e4857015-0634-48e6-ba17-182f06530971 from this chassis (sb_readonly=0)
Jan 26 16:13:23 compute-0 kernel: tapd55c5196-61 (unregistering): left promiscuous mode
Jan 26 16:13:23 compute-0 ovn_controller[146046]: 2026-01-26T16:13:23Z|01118|binding|INFO|Setting lport e4857015-0634-48e6-ba17-182f06530971 down in Southbound
Jan 26 16:13:23 compute-0 ovn_controller[146046]: 2026-01-26T16:13:23Z|01119|binding|INFO|Removing iface tape4857015-06 ovn-installed in OVS
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:23 compute-0 NetworkManager[48954]: <info>  [1769444003.9233] device (tapd55c5196-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:13:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:23.927 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fd:62:9b 10.100.0.7'], port_security=['fa:16:3e:fd:62:9b 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b4c8aa81-ea1c-4786-b53e-082bc9fadd9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '85d3d038-2bc9-4f80-8a4a-2f5ebd3d53a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7117378c-aa96-48b6-b2fa-6c8b5ccb02bd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e4857015-0634-48e6-ba17-182f06530971) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:13:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:23.929 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e4857015-0634-48e6-ba17-182f06530971 in datapath daa791d6-dfc2-418f-92aa-2771eaf5b1a9 unbound from our chassis
Jan 26 16:13:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:23.930 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network daa791d6-dfc2-418f-92aa-2771eaf5b1a9
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:23.948 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ee46f94c-4f7d-46c6-8a34-86e0f1fd45af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:23 compute-0 ovn_controller[146046]: 2026-01-26T16:13:23Z|01120|binding|INFO|Releasing lport d55c5196-61aa-4fe8-81d6-2c1ea20ab825 from this chassis (sb_readonly=0)
Jan 26 16:13:23 compute-0 ovn_controller[146046]: 2026-01-26T16:13:23Z|01121|binding|INFO|Setting lport d55c5196-61aa-4fe8-81d6-2c1ea20ab825 down in Southbound
Jan 26 16:13:23 compute-0 ovn_controller[146046]: 2026-01-26T16:13:23Z|01122|binding|INFO|Removing iface tapd55c5196-61 ovn-installed in OVS
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.958 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:23.959 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:c5:3e 2001:db8:0:1:f816:3eff:fec7:c53e 2001:db8::f816:3eff:fec7:c53e'], port_security=['fa:16:3e:c7:c5:3e 2001:db8:0:1:f816:3eff:fec7:c53e 2001:db8::f816:3eff:fec7:c53e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fec7:c53e/64 2001:db8::f816:3eff:fec7:c53e/64', 'neutron:device_id': 'b4c8aa81-ea1c-4786-b53e-082bc9fadd9a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '85d3d038-2bc9-4f80-8a4a-2f5ebd3d53a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46f31be5-6cff-44a8-9f7f-e0dfbc4df89f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d55c5196-61aa-4fe8-81d6-2c1ea20ab825) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:13:23 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Jan 26 16:13:23 compute-0 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006e.scope: Consumed 14.297s CPU time.
Jan 26 16:13:23 compute-0 systemd-machined[208061]: Machine qemu-137-instance-0000006e terminated.
Jan 26 16:13:23 compute-0 nova_compute[239965]: 2026-01-26 16:13:23.993 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:23.995 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[034e12fd-04bf-4f67-a212-bf953dc6f17a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:23.998 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e175458d-0e5e-494a-acda-2bea1a66274f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.025 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7f980090-791c-44ae-b249-6a0c2626f087]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.058 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8db4cd3e-8f99-474e-a768-84272a7f8e41]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdaa791d6-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:21:4e:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542193, 'reachable_time': 15395, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335013, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.069 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 NetworkManager[48954]: <info>  [1769444004.0750] manager: (tapd55c5196-61): new Tun device (/org/freedesktop/NetworkManager/Devices/458)
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.077 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8909c767-8efc-43ef-bb20-cf9efeab6b63]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdaa791d6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542204, 'tstamp': 542204}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335017, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdaa791d6-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542207, 'tstamp': 542207}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335017, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.078 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaa791d6-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.083 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.094 239969 INFO nova.virt.libvirt.driver [-] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Instance destroyed successfully.
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.095 239969 DEBUG nova.objects.instance [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid b4c8aa81-ea1c-4786-b53e-082bc9fadd9a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.096 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.097 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdaa791d6-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.097 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.097 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdaa791d6-d0, col_values=(('external_ids', {'iface-id': 'e0fa1a12-fa6c-4878-8114-78f4badaec8c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.098 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.099 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d55c5196-61aa-4fe8-81d6-2c1ea20ab825 in datapath ce7228b3-b329-4b48-a563-2dfcf286a94e unbound from our chassis
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.100 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ce7228b3-b329-4b48-a563-2dfcf286a94e
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.108 239969 DEBUG nova.virt.libvirt.vif [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:12:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1287169184',display_name='tempest-TestGettingAddress-server-1287169184',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1287169184',id=110,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:12:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-u6w8t0q6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:12:57Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b4c8aa81-ea1c-4786-b53e-082bc9fadd9a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.108 239969 DEBUG nova.network.os_vif_util [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.109 239969 DEBUG nova.network.os_vif_util [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:fd:62:9b,bridge_name='br-int',has_traffic_filtering=True,id=e4857015-0634-48e6-ba17-182f06530971,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4857015-06') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.109 239969 DEBUG os_vif [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:62:9b,bridge_name='br-int',has_traffic_filtering=True,id=e4857015-0634-48e6-ba17-182f06530971,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4857015-06') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.111 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.111 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4857015-06, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.115 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.116 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.117 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7958bac4-5368-4fec-884a-1afd213e095b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.119 239969 INFO os_vif [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:fd:62:9b,bridge_name='br-int',has_traffic_filtering=True,id=e4857015-0634-48e6-ba17-182f06530971,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4857015-06')
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.120 239969 DEBUG nova.virt.libvirt.vif [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:12:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1287169184',display_name='tempest-TestGettingAddress-server-1287169184',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1287169184',id=110,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:12:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-u6w8t0q6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:12:57Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=b4c8aa81-ea1c-4786-b53e-082bc9fadd9a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.121 239969 DEBUG nova.network.os_vif_util [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.122 239969 DEBUG nova.network.os_vif_util [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:c5:3e,bridge_name='br-int',has_traffic_filtering=True,id=d55c5196-61aa-4fe8-81d6-2c1ea20ab825,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd55c5196-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.122 239969 DEBUG os_vif [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:c5:3e,bridge_name='br-int',has_traffic_filtering=True,id=d55c5196-61aa-4fe8-81d6-2c1ea20ab825,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd55c5196-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.124 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.124 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd55c5196-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.126 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.128 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.130 239969 INFO os_vif [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:c5:3e,bridge_name='br-int',has_traffic_filtering=True,id=d55c5196-61aa-4fe8-81d6-2c1ea20ab825,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd55c5196-61')
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.150 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0ef2c212-fc68-49c9-a2cf-84984f9b49cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.153 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5575812f-bd61-485e-8a81-2fab526e0d65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.185 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7d121d-5ca8-48f8-852a-6212b36165f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.207 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[912c6597-e44d-42cf-9e6d-044e4b7cd2f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapce7228b3-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:07:b5:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 318], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542359, 'reachable_time': 27262, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335076, 'error': None, 'target': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.225 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e62da75e-71c4-48ec-804e-452d06bd8187]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapce7228b3-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 542371, 'tstamp': 542371}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 335077, 'error': None, 'target': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.228 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce7228b3-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.230 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.233 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapce7228b3-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.233 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.233 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapce7228b3-b0, col_values=(('external_ids', {'iface-id': 'a1098b06-e219-4f9c-9e25-e366022a5f05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:24.234 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:13:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 889 KiB/s wr, 17 op/s
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.392 239969 INFO nova.virt.libvirt.driver [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Deleting instance files /var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_del
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.393 239969 INFO nova.virt.libvirt.driver [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Deletion of /var/lib/nova/instances/b4c8aa81-ea1c-4786-b53e-082bc9fadd9a_del complete
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.465 239969 INFO nova.compute.manager [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.466 239969 DEBUG oslo.service.loopingcall [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.466 239969 DEBUG nova.compute.manager [-] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:13:24 compute-0 nova_compute[239965]: 2026-01-26 16:13:24.466 239969 DEBUG nova.network.neutron [-] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:13:24 compute-0 sudo[334972]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 16:13:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:13:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:13:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:13:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:13:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:13:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:13:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:13:24 compute-0 sudo[335096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:13:24 compute-0 sudo[335096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:24 compute-0 sudo[335096]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:24 compute-0 sudo[335121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:13:24 compute-0 sudo[335121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:24 compute-0 ceph-mon[75140]: pgmap v1878: 305 pgs: 305 active+clean; 246 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 889 KiB/s wr, 17 op/s
Jan 26 16:13:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:13:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:13:25 compute-0 podman[335158]: 2026-01-26 16:13:25.0671707 +0000 UTC m=+0.049679025 container create 6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:13:25 compute-0 systemd[1]: Started libpod-conmon-6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526.scope.
Jan 26 16:13:25 compute-0 podman[335158]: 2026-01-26 16:13:25.043766158 +0000 UTC m=+0.026274513 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:13:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:13:25 compute-0 podman[335158]: 2026-01-26 16:13:25.164805145 +0000 UTC m=+0.147313490 container init 6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_kalam, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 26 16:13:25 compute-0 podman[335158]: 2026-01-26 16:13:25.17196436 +0000 UTC m=+0.154472685 container start 6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:13:25 compute-0 podman[335158]: 2026-01-26 16:13:25.175242491 +0000 UTC m=+0.157750816 container attach 6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_kalam, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:13:25 compute-0 relaxed_kalam[335175]: 167 167
Jan 26 16:13:25 compute-0 systemd[1]: libpod-6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526.scope: Deactivated successfully.
Jan 26 16:13:25 compute-0 conmon[335175]: conmon 6bd98c15e369d88a0a10 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526.scope/container/memory.events
Jan 26 16:13:25 compute-0 podman[335158]: 2026-01-26 16:13:25.182655401 +0000 UTC m=+0.165163736 container died 6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_kalam, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:13:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8723e93b69a2325cc2d06b2fc9c4e234b8b24925cd798d2542175bac7467257c-merged.mount: Deactivated successfully.
Jan 26 16:13:25 compute-0 podman[335158]: 2026-01-26 16:13:25.228738507 +0000 UTC m=+0.211246862 container remove 6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_kalam, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:13:25 compute-0 systemd[1]: libpod-conmon-6bd98c15e369d88a0a10a81c3a859620ea5d1e7f815fb1752a91305a306d5526.scope: Deactivated successfully.
Jan 26 16:13:25 compute-0 podman[335197]: 2026-01-26 16:13:25.461213337 +0000 UTC m=+0.071375475 container create 99921e9ef9c57ed283414344f7a969efd943200c8f0c55b7383a646b6e2181e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hugle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:13:25 compute-0 systemd[1]: Started libpod-conmon-99921e9ef9c57ed283414344f7a969efd943200c8f0c55b7383a646b6e2181e6.scope.
Jan 26 16:13:25 compute-0 podman[335197]: 2026-01-26 16:13:25.434362311 +0000 UTC m=+0.044524529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:13:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/103662c64e6f449a68ce6b12e3e8a243228f0b1a230ee6117c14b11979284e9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/103662c64e6f449a68ce6b12e3e8a243228f0b1a230ee6117c14b11979284e9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/103662c64e6f449a68ce6b12e3e8a243228f0b1a230ee6117c14b11979284e9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/103662c64e6f449a68ce6b12e3e8a243228f0b1a230ee6117c14b11979284e9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/103662c64e6f449a68ce6b12e3e8a243228f0b1a230ee6117c14b11979284e9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:25 compute-0 podman[335197]: 2026-01-26 16:13:25.560624545 +0000 UTC m=+0.170786693 container init 99921e9ef9c57ed283414344f7a969efd943200c8f0c55b7383a646b6e2181e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:13:25 compute-0 podman[335197]: 2026-01-26 16:13:25.574014622 +0000 UTC m=+0.184176760 container start 99921e9ef9c57ed283414344f7a969efd943200c8f0c55b7383a646b6e2181e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hugle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:13:25 compute-0 podman[335197]: 2026-01-26 16:13:25.57682409 +0000 UTC m=+0.186986228 container attach 99921e9ef9c57ed283414344f7a969efd943200c8f0c55b7383a646b6e2181e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.852 239969 DEBUG nova.compute.manager [req-ec714bcb-b461-4d33-8966-2984bbb3de21 req-593cd57d-f604-4628-9c44-4516cccdb011 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-deleted-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.854 239969 INFO nova.compute.manager [req-ec714bcb-b461-4d33-8966-2984bbb3de21 req-593cd57d-f604-4628-9c44-4516cccdb011 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Neutron deleted interface d55c5196-61aa-4fe8-81d6-2c1ea20ab825; detaching it from the instance and deleting it from the info cache
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.854 239969 DEBUG nova.network.neutron [req-ec714bcb-b461-4d33-8966-2984bbb3de21 req-593cd57d-f604-4628-9c44-4516cccdb011 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updating instance_info_cache with network_info: [{"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.883 239969 DEBUG nova.compute.manager [req-ec714bcb-b461-4d33-8966-2984bbb3de21 req-593cd57d-f604-4628-9c44-4516cccdb011 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Detach interface failed, port_id=d55c5196-61aa-4fe8-81d6-2c1ea20ab825, reason: Instance b4c8aa81-ea1c-4786-b53e-082bc9fadd9a could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.904 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-unplugged-e4857015-0634-48e6-ba17-182f06530971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.904 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.904 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.905 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.905 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] No waiting events found dispatching network-vif-unplugged-e4857015-0634-48e6-ba17-182f06530971 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.905 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-unplugged-e4857015-0634-48e6-ba17-182f06530971 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.905 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-plugged-e4857015-0634-48e6-ba17-182f06530971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.905 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.906 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.906 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.906 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] No waiting events found dispatching network-vif-plugged-e4857015-0634-48e6-ba17-182f06530971 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.906 239969 WARNING nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received unexpected event network-vif-plugged-e4857015-0634-48e6-ba17-182f06530971 for instance with vm_state active and task_state deleting.
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.907 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-unplugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.907 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.907 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.907 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.908 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] No waiting events found dispatching network-vif-unplugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.908 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-unplugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.908 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-plugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.908 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.909 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.909 239969 DEBUG oslo_concurrency.lockutils [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.909 239969 DEBUG nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] No waiting events found dispatching network-vif-plugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:13:25 compute-0 nova_compute[239965]: 2026-01-26 16:13:25.909 239969 WARNING nova.compute.manager [req-3690d622-dfd0-45d2-ba6c-f93f6eefbe4e req-0a639839-c39c-4e1b-b628-7ee198b72da7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received unexpected event network-vif-plugged-d55c5196-61aa-4fe8-81d6-2c1ea20ab825 for instance with vm_state active and task_state deleting.
Jan 26 16:13:26 compute-0 nova_compute[239965]: 2026-01-26 16:13:26.051 239969 DEBUG nova.network.neutron [req-b660528b-b6d7-4bf3-abc7-fe27b43dbcbd req-199c7b64-d296-4e59-a69d-7991645e5a8b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updated VIF entry in instance network info cache for port e4857015-0634-48e6-ba17-182f06530971. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:13:26 compute-0 nova_compute[239965]: 2026-01-26 16:13:26.052 239969 DEBUG nova.network.neutron [req-b660528b-b6d7-4bf3-abc7-fe27b43dbcbd req-199c7b64-d296-4e59-a69d-7991645e5a8b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updating instance_info_cache with network_info: [{"id": "e4857015-0634-48e6-ba17-182f06530971", "address": "fa:16:3e:fd:62:9b", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4857015-06", "ovs_interfaceid": "e4857015-0634-48e6-ba17-182f06530971", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "address": "fa:16:3e:c7:c5:3e", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec7:c53e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd55c5196-61", "ovs_interfaceid": "d55c5196-61aa-4fe8-81d6-2c1ea20ab825", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:13:26 compute-0 magical_hugle[335214]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:13:26 compute-0 magical_hugle[335214]: --> All data devices are unavailable
Jan 26 16:13:26 compute-0 nova_compute[239965]: 2026-01-26 16:13:26.075 239969 DEBUG oslo_concurrency.lockutils [req-b660528b-b6d7-4bf3-abc7-fe27b43dbcbd req-199c7b64-d296-4e59-a69d-7991645e5a8b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:13:26 compute-0 systemd[1]: libpod-99921e9ef9c57ed283414344f7a969efd943200c8f0c55b7383a646b6e2181e6.scope: Deactivated successfully.
Jan 26 16:13:26 compute-0 podman[335234]: 2026-01-26 16:13:26.128663332 +0000 UTC m=+0.026156219 container died 99921e9ef9c57ed283414344f7a969efd943200c8f0c55b7383a646b6e2181e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-103662c64e6f449a68ce6b12e3e8a243228f0b1a230ee6117c14b11979284e9f-merged.mount: Deactivated successfully.
Jan 26 16:13:26 compute-0 podman[335234]: 2026-01-26 16:13:26.182938758 +0000 UTC m=+0.080431625 container remove 99921e9ef9c57ed283414344f7a969efd943200c8f0c55b7383a646b6e2181e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_hugle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:13:26 compute-0 systemd[1]: libpod-conmon-99921e9ef9c57ed283414344f7a969efd943200c8f0c55b7383a646b6e2181e6.scope: Deactivated successfully.
Jan 26 16:13:26 compute-0 sudo[335121]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 221 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 12 KiB/s wr, 11 op/s
Jan 26 16:13:26 compute-0 sudo[335250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:13:26 compute-0 sudo[335250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:26 compute-0 sudo[335250]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:26 compute-0 nova_compute[239965]: 2026-01-26 16:13:26.305 239969 DEBUG nova.network.neutron [-] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:13:26 compute-0 nova_compute[239965]: 2026-01-26 16:13:26.335 239969 INFO nova.compute.manager [-] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Took 1.87 seconds to deallocate network for instance.
Jan 26 16:13:26 compute-0 sudo[335275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:13:26 compute-0 sudo[335275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:26 compute-0 ceph-mon[75140]: pgmap v1879: 305 pgs: 305 active+clean; 221 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 12 KiB/s wr, 11 op/s
Jan 26 16:13:26 compute-0 nova_compute[239965]: 2026-01-26 16:13:26.400 239969 DEBUG oslo_concurrency.lockutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:26 compute-0 nova_compute[239965]: 2026-01-26 16:13:26.400 239969 DEBUG oslo_concurrency.lockutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:26 compute-0 nova_compute[239965]: 2026-01-26 16:13:26.493 239969 DEBUG oslo_concurrency.processutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:13:26 compute-0 podman[335314]: 2026-01-26 16:13:26.661679194 +0000 UTC m=+0.042291044 container create 6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:13:26 compute-0 systemd[1]: Started libpod-conmon-6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2.scope.
Jan 26 16:13:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:13:26 compute-0 podman[335314]: 2026-01-26 16:13:26.731187332 +0000 UTC m=+0.111799212 container init 6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:13:26 compute-0 podman[335314]: 2026-01-26 16:13:26.643323356 +0000 UTC m=+0.023935226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:13:26 compute-0 podman[335314]: 2026-01-26 16:13:26.741003172 +0000 UTC m=+0.121615022 container start 6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khayyam, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:13:26 compute-0 systemd[1]: libpod-6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2.scope: Deactivated successfully.
Jan 26 16:13:26 compute-0 frosty_khayyam[335349]: 167 167
Jan 26 16:13:26 compute-0 conmon[335349]: conmon 6724e02cba880e12f76a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2.scope/container/memory.events
Jan 26 16:13:26 compute-0 podman[335314]: 2026-01-26 16:13:26.843135847 +0000 UTC m=+0.223747697 container attach 6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:13:26 compute-0 podman[335314]: 2026-01-26 16:13:26.84409256 +0000 UTC m=+0.224704430 container died 6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 16:13:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2611bdb0f52828669905ed1105e0c3cd6357091ed641c5ede585c821d693207d-merged.mount: Deactivated successfully.
Jan 26 16:13:26 compute-0 podman[335314]: 2026-01-26 16:13:26.905095431 +0000 UTC m=+0.285707291 container remove 6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_khayyam, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:13:26 compute-0 systemd[1]: libpod-conmon-6724e02cba880e12f76ad818dedf097bdbe7c554f2d47c232695fdffe7fee0b2.scope: Deactivated successfully.
Jan 26 16:13:27 compute-0 nova_compute[239965]: 2026-01-26 16:13:26.998 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:13:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/114288839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:13:27 compute-0 nova_compute[239965]: 2026-01-26 16:13:27.099 239969 DEBUG oslo_concurrency.processutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:13:27 compute-0 nova_compute[239965]: 2026-01-26 16:13:27.110 239969 DEBUG nova.compute.provider_tree [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:13:27 compute-0 nova_compute[239965]: 2026-01-26 16:13:27.134 239969 DEBUG nova.scheduler.client.report [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:13:27 compute-0 nova_compute[239965]: 2026-01-26 16:13:27.157 239969 DEBUG oslo_concurrency.lockutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:27 compute-0 podman[335372]: 2026-01-26 16:13:27.175397294 +0000 UTC m=+0.112620892 container create 9917c4191bbfc4963836e6f9c3beade59825d418f636723a4174b5f9495dde81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dubinsky, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:13:27 compute-0 nova_compute[239965]: 2026-01-26 16:13:27.192 239969 INFO nova.scheduler.client.report [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance b4c8aa81-ea1c-4786-b53e-082bc9fadd9a
Jan 26 16:13:27 compute-0 podman[335372]: 2026-01-26 16:13:27.105611859 +0000 UTC m=+0.042835547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:13:27 compute-0 systemd[1]: Started libpod-conmon-9917c4191bbfc4963836e6f9c3beade59825d418f636723a4174b5f9495dde81.scope.
Jan 26 16:13:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:13:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d739c792adc4c6e6241390e6b9b63c5fddacb58fa4e7d57f3b0bd92e29dd0eae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d739c792adc4c6e6241390e6b9b63c5fddacb58fa4e7d57f3b0bd92e29dd0eae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d739c792adc4c6e6241390e6b9b63c5fddacb58fa4e7d57f3b0bd92e29dd0eae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d739c792adc4c6e6241390e6b9b63c5fddacb58fa4e7d57f3b0bd92e29dd0eae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:27 compute-0 nova_compute[239965]: 2026-01-26 16:13:27.277 239969 DEBUG oslo_concurrency.lockutils [None req-3c8f37f2-d0f5-4f53-baf1-92f1ce6a78af 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "b4c8aa81-ea1c-4786-b53e-082bc9fadd9a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:27 compute-0 podman[335372]: 2026-01-26 16:13:27.279888137 +0000 UTC m=+0.217111765 container init 9917c4191bbfc4963836e6f9c3beade59825d418f636723a4174b5f9495dde81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dubinsky, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:13:27 compute-0 podman[335372]: 2026-01-26 16:13:27.287669717 +0000 UTC m=+0.224893305 container start 9917c4191bbfc4963836e6f9c3beade59825d418f636723a4174b5f9495dde81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dubinsky, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:13:27 compute-0 podman[335372]: 2026-01-26 16:13:27.291809008 +0000 UTC m=+0.229032596 container attach 9917c4191bbfc4963836e6f9c3beade59825d418f636723a4174b5f9495dde81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:13:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/114288839' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]: {
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:     "0": [
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:         {
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "devices": [
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "/dev/loop3"
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             ],
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_name": "ceph_lv0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_size": "21470642176",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "name": "ceph_lv0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "tags": {
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.cluster_name": "ceph",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.crush_device_class": "",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.encrypted": "0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.objectstore": "bluestore",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.osd_id": "0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.type": "block",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.vdo": "0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.with_tpm": "0"
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             },
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "type": "block",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "vg_name": "ceph_vg0"
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:         }
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:     ],
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:     "1": [
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:         {
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "devices": [
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "/dev/loop4"
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             ],
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_name": "ceph_lv1",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_size": "21470642176",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "name": "ceph_lv1",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "tags": {
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.cluster_name": "ceph",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.crush_device_class": "",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.encrypted": "0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.objectstore": "bluestore",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.osd_id": "1",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.type": "block",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.vdo": "0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.with_tpm": "0"
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             },
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "type": "block",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "vg_name": "ceph_vg1"
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:         }
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:     ],
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:     "2": [
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:         {
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "devices": [
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "/dev/loop5"
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             ],
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_name": "ceph_lv2",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_size": "21470642176",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "name": "ceph_lv2",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "tags": {
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.cluster_name": "ceph",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.crush_device_class": "",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.encrypted": "0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.objectstore": "bluestore",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.osd_id": "2",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.type": "block",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.vdo": "0",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:                 "ceph.with_tpm": "0"
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             },
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "type": "block",
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:             "vg_name": "ceph_vg2"
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:         }
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]:     ]
Jan 26 16:13:27 compute-0 frosty_dubinsky[335391]: }
Jan 26 16:13:27 compute-0 systemd[1]: libpod-9917c4191bbfc4963836e6f9c3beade59825d418f636723a4174b5f9495dde81.scope: Deactivated successfully.
Jan 26 16:13:27 compute-0 podman[335372]: 2026-01-26 16:13:27.617560517 +0000 UTC m=+0.554784115 container died 9917c4191bbfc4963836e6f9c3beade59825d418f636723a4174b5f9495dde81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dubinsky, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:13:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d739c792adc4c6e6241390e6b9b63c5fddacb58fa4e7d57f3b0bd92e29dd0eae-merged.mount: Deactivated successfully.
Jan 26 16:13:27 compute-0 podman[335372]: 2026-01-26 16:13:27.672696633 +0000 UTC m=+0.609920231 container remove 9917c4191bbfc4963836e6f9c3beade59825d418f636723a4174b5f9495dde81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_dubinsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:13:27 compute-0 systemd[1]: libpod-conmon-9917c4191bbfc4963836e6f9c3beade59825d418f636723a4174b5f9495dde81.scope: Deactivated successfully.
Jan 26 16:13:27 compute-0 sudo[335275]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:27 compute-0 sudo[335411]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:13:27 compute-0 sudo[335411]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:27 compute-0 sudo[335411]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:27 compute-0 sudo[335436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:13:27 compute-0 sudo[335436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:28 compute-0 nova_compute[239965]: 2026-01-26 16:13:28.072 239969 DEBUG nova.compute.manager [req-238c771d-4961-4e8b-a64d-9d82fcdc0c98 req-0af1c8ea-f6c1-45a4-87c7-ee7409881392 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Received event network-vif-deleted-e4857015-0634-48e6-ba17-182f06530971 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:28 compute-0 podman[335473]: 2026-01-26 16:13:28.14245202 +0000 UTC m=+0.040514281 container create c3d15bd04a1c624cda76e54737096436ca131a85ff2859cb0daf84ecef83d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chaum, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 16:13:28 compute-0 systemd[1]: Started libpod-conmon-c3d15bd04a1c624cda76e54737096436ca131a85ff2859cb0daf84ecef83d7df.scope.
Jan 26 16:13:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:13:28 compute-0 podman[335473]: 2026-01-26 16:13:28.216853038 +0000 UTC m=+0.114915289 container init c3d15bd04a1c624cda76e54737096436ca131a85ff2859cb0daf84ecef83d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 16:13:28 compute-0 podman[335473]: 2026-01-26 16:13:28.124632165 +0000 UTC m=+0.022694436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:13:28 compute-0 podman[335473]: 2026-01-26 16:13:28.224497854 +0000 UTC m=+0.122560105 container start c3d15bd04a1c624cda76e54737096436ca131a85ff2859cb0daf84ecef83d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chaum, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:13:28 compute-0 serene_chaum[335490]: 167 167
Jan 26 16:13:28 compute-0 systemd[1]: libpod-c3d15bd04a1c624cda76e54737096436ca131a85ff2859cb0daf84ecef83d7df.scope: Deactivated successfully.
Jan 26 16:13:28 compute-0 podman[335473]: 2026-01-26 16:13:28.228927613 +0000 UTC m=+0.126989874 container attach c3d15bd04a1c624cda76e54737096436ca131a85ff2859cb0daf84ecef83d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chaum, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 16:13:28 compute-0 podman[335473]: 2026-01-26 16:13:28.22922259 +0000 UTC m=+0.127284841 container died c3d15bd04a1c624cda76e54737096436ca131a85ff2859cb0daf84ecef83d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chaum, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:13:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 30 op/s
Jan 26 16:13:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bca9b5d535c7965666c05a9825bb0fe6d228c0c1076784d7084aeb8fc19f0c1-merged.mount: Deactivated successfully.
Jan 26 16:13:28 compute-0 podman[335473]: 2026-01-26 16:13:28.270467577 +0000 UTC m=+0.168529828 container remove c3d15bd04a1c624cda76e54737096436ca131a85ff2859cb0daf84ecef83d7df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_chaum, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:13:28 compute-0 systemd[1]: libpod-conmon-c3d15bd04a1c624cda76e54737096436ca131a85ff2859cb0daf84ecef83d7df.scope: Deactivated successfully.
Jan 26 16:13:28 compute-0 ceph-mon[75140]: pgmap v1880: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 30 op/s
Jan 26 16:13:28 compute-0 podman[335512]: 2026-01-26 16:13:28.449227725 +0000 UTC m=+0.049539932 container create 979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:13:28 compute-0 systemd[1]: Started libpod-conmon-979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c.scope.
Jan 26 16:13:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:13:28 compute-0 podman[335512]: 2026-01-26 16:13:28.425020083 +0000 UTC m=+0.025332340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8911e6ccf396fc37090cc7cbcc25292698ed745095131a244cac2ca54938d3e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8911e6ccf396fc37090cc7cbcc25292698ed745095131a244cac2ca54938d3e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8911e6ccf396fc37090cc7cbcc25292698ed745095131a244cac2ca54938d3e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8911e6ccf396fc37090cc7cbcc25292698ed745095131a244cac2ca54938d3e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:13:28 compute-0 podman[335512]: 2026-01-26 16:13:28.543761364 +0000 UTC m=+0.144073631 container init 979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_euclid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:13:28 compute-0 podman[335512]: 2026-01-26 16:13:28.550634242 +0000 UTC m=+0.150946429 container start 979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:13:28 compute-0 podman[335512]: 2026-01-26 16:13:28.554090307 +0000 UTC m=+0.154402524 container attach 979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_euclid, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:13:28
Jan 26 16:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'images', '.rgw.root', '.mgr', 'vms', 'default.rgw.control']
Jan 26 16:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.128 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 lvm[335608]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:13:29 compute-0 lvm[335609]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:13:29 compute-0 lvm[335608]: VG ceph_vg0 finished
Jan 26 16:13:29 compute-0 lvm[335609]: VG ceph_vg1 finished
Jan 26 16:13:29 compute-0 lvm[335611]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:13:29 compute-0 lvm[335611]: VG ceph_vg2 finished
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.368 239969 DEBUG nova.compute.manager [req-0772b162-982b-4099-8a3b-e665ba1e33d4 req-b8b6bc53-245e-4cc0-93b2-b2d0d230c818 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-changed-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.368 239969 DEBUG nova.compute.manager [req-0772b162-982b-4099-8a3b-e665ba1e33d4 req-b8b6bc53-245e-4cc0-93b2-b2d0d230c818 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Refreshing instance network info cache due to event network-changed-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.368 239969 DEBUG oslo_concurrency.lockutils [req-0772b162-982b-4099-8a3b-e665ba1e33d4 req-b8b6bc53-245e-4cc0-93b2-b2d0d230c818 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.368 239969 DEBUG oslo_concurrency.lockutils [req-0772b162-982b-4099-8a3b-e665ba1e33d4 req-b8b6bc53-245e-4cc0-93b2-b2d0d230c818 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.368 239969 DEBUG nova.network.neutron [req-0772b162-982b-4099-8a3b-e665ba1e33d4 req-b8b6bc53-245e-4cc0-93b2-b2d0d230c818 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Refreshing network info cache for port 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:13:29 compute-0 stoic_euclid[335529]: {}
Jan 26 16:13:29 compute-0 systemd[1]: libpod-979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c.scope: Deactivated successfully.
Jan 26 16:13:29 compute-0 podman[335512]: 2026-01-26 16:13:29.406722707 +0000 UTC m=+1.007034884 container died 979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_euclid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 26 16:13:29 compute-0 systemd[1]: libpod-979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c.scope: Consumed 1.355s CPU time.
Jan 26 16:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-8911e6ccf396fc37090cc7cbcc25292698ed745095131a244cac2ca54938d3e5-merged.mount: Deactivated successfully.
Jan 26 16:13:29 compute-0 podman[335512]: 2026-01-26 16:13:29.449871741 +0000 UTC m=+1.050183918 container remove 979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 26 16:13:29 compute-0 systemd[1]: libpod-conmon-979883b4436da25d376cdbc3bdb1df511b6fc1567a26816e841ccd7febb6002c.scope: Deactivated successfully.
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.462 239969 DEBUG oslo_concurrency.lockutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.463 239969 DEBUG oslo_concurrency.lockutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.463 239969 DEBUG oslo_concurrency.lockutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.463 239969 DEBUG oslo_concurrency.lockutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.463 239969 DEBUG oslo_concurrency.lockutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.464 239969 INFO nova.compute.manager [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Terminating instance
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.465 239969 DEBUG nova.compute.manager [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:13:29 compute-0 sudo[335436]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:13:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:13:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:29 compute-0 kernel: tap24c5ed41-3d (unregistering): left promiscuous mode
Jan 26 16:13:29 compute-0 NetworkManager[48954]: <info>  [1769444009.5142] device (tap24c5ed41-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.523 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 ovn_controller[146046]: 2026-01-26T16:13:29Z|01123|binding|INFO|Releasing lport 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 from this chassis (sb_readonly=0)
Jan 26 16:13:29 compute-0 ovn_controller[146046]: 2026-01-26T16:13:29Z|01124|binding|INFO|Setting lport 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 down in Southbound
Jan 26 16:13:29 compute-0 ovn_controller[146046]: 2026-01-26T16:13:29Z|01125|binding|INFO|Removing iface tap24c5ed41-3d ovn-installed in OVS
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.526 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.530 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e8:8d:bc 10.100.0.13'], port_security=['fa:16:3e:e8:8d:bc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '67eb236e-aeee-43e6-9b43-41c17fcc37a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '85d3d038-2bc9-4f80-8a4a-2f5ebd3d53a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7117378c-aa96-48b6-b2fa-6c8b5ccb02bd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.531 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 in datapath daa791d6-dfc2-418f-92aa-2771eaf5b1a9 unbound from our chassis
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.532 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network daa791d6-dfc2-418f-92aa-2771eaf5b1a9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.533 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7109e905-850e-46c2-be8f-db61457a8653]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.534 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9 namespace which is not needed anymore
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.539 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 kernel: tap5e1d227f-59 (unregistering): left promiscuous mode
Jan 26 16:13:29 compute-0 NetworkManager[48954]: <info>  [1769444009.5505] device (tap5e1d227f-59): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.553 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 ovn_controller[146046]: 2026-01-26T16:13:29Z|01126|binding|INFO|Releasing lport 5e1d227f-594c-4a1e-8f28-56469a20df56 from this chassis (sb_readonly=0)
Jan 26 16:13:29 compute-0 ovn_controller[146046]: 2026-01-26T16:13:29Z|01127|binding|INFO|Setting lport 5e1d227f-594c-4a1e-8f28-56469a20df56 down in Southbound
Jan 26 16:13:29 compute-0 ovn_controller[146046]: 2026-01-26T16:13:29Z|01128|binding|INFO|Removing iface tap5e1d227f-59 ovn-installed in OVS
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.558 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.561 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.572 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:42:83:4d 2001:db8:0:1:f816:3eff:fe42:834d 2001:db8::f816:3eff:fe42:834d'], port_security=['fa:16:3e:42:83:4d 2001:db8:0:1:f816:3eff:fe42:834d 2001:db8::f816:3eff:fe42:834d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe42:834d/64 2001:db8::f816:3eff:fe42:834d/64', 'neutron:device_id': '67eb236e-aeee-43e6-9b43-41c17fcc37a5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '85d3d038-2bc9-4f80-8a4a-2f5ebd3d53a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46f31be5-6cff-44a8-9f7f-e0dfbc4df89f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=5e1d227f-594c-4a1e-8f28-56469a20df56) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:13:29 compute-0 sudo[335624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.576 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 sudo[335624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:13:29 compute-0 sudo[335624]: pam_unix(sudo:session): session closed for user root
Jan 26 16:13:29 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Jan 26 16:13:29 compute-0 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006d.scope: Consumed 15.987s CPU time.
Jan 26 16:13:29 compute-0 systemd-machined[208061]: Machine qemu-135-instance-0000006d terminated.
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9[333238]: [NOTICE]   (333242) : haproxy version is 2.8.14-c23fe91
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9[333238]: [NOTICE]   (333242) : path to executable is /usr/sbin/haproxy
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9[333238]: [WARNING]  (333242) : Exiting Master process...
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9[333238]: [ALERT]    (333242) : Current worker (333244) exited with code 143 (Terminated)
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9[333238]: [WARNING]  (333242) : All workers exited. Exiting... (0)
Jan 26 16:13:29 compute-0 systemd[1]: libpod-82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31.scope: Deactivated successfully.
Jan 26 16:13:29 compute-0 podman[335672]: 2026-01-26 16:13:29.675192545 +0000 UTC m=+0.047309126 container died 82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:13:29 compute-0 NetworkManager[48954]: <info>  [1769444009.6861] manager: (tap24c5ed41-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/459)
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.690 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.697 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 NetworkManager[48954]: <info>  [1769444009.7007] manager: (tap5e1d227f-59): new Tun device (/org/freedesktop/NetworkManager/Devices/460)
Jan 26 16:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31-userdata-shm.mount: Deactivated successfully.
Jan 26 16:13:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-508b12c7449f8fbf75b588c75bea5f21b772b5078df81919fef273408af7d4c9-merged.mount: Deactivated successfully.
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.718 239969 INFO nova.virt.libvirt.driver [-] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Instance destroyed successfully.
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.718 239969 DEBUG nova.objects.instance [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 67eb236e-aeee-43e6-9b43-41c17fcc37a5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:13:29 compute-0 podman[335672]: 2026-01-26 16:13:29.728766124 +0000 UTC m=+0.100882705 container cleanup 82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.732 239969 DEBUG nova.virt.libvirt.vif [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:11:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-4860381',display_name='tempest-TestGettingAddress-server-4860381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-4860381',id=109,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:12:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxkt7s3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:12:15Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=67eb236e-aeee-43e6-9b43-41c17fcc37a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.732 239969 DEBUG nova.network.os_vif_util [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.733 239969 DEBUG nova.network.os_vif_util [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e8:8d:bc,bridge_name='br-int',has_traffic_filtering=True,id=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24c5ed41-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.734 239969 DEBUG os_vif [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:8d:bc,bridge_name='br-int',has_traffic_filtering=True,id=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24c5ed41-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.736 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 systemd[1]: libpod-conmon-82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31.scope: Deactivated successfully.
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.736 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap24c5ed41-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.781 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.783 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.785 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.787 239969 INFO os_vif [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e8:8d:bc,bridge_name='br-int',has_traffic_filtering=True,id=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3,network=Network(daa791d6-dfc2-418f-92aa-2771eaf5b1a9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap24c5ed41-3d')
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.788 239969 DEBUG nova.virt.libvirt.vif [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:11:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-4860381',display_name='tempest-TestGettingAddress-server-4860381',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-4860381',id=109,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBISau4/UN40WP+ptUkwpIsWNn3J4Wp77nPpgbhcl7w/8Nb1TcxykSmejoZt2JELHRJfHb9mvljAwUSziemePss6yCvaAr9GOGZ0K8nlDi6voyMAfJj2IwlWWxnJ7+bk9WQ==',key_name='tempest-TestGettingAddress-360952793',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:12:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-lxkt7s3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:12:15Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=67eb236e-aeee-43e6-9b43-41c17fcc37a5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.788 239969 DEBUG nova.network.os_vif_util [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.789 239969 DEBUG nova.network.os_vif_util [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:42:83:4d,bridge_name='br-int',has_traffic_filtering=True,id=5e1d227f-594c-4a1e-8f28-56469a20df56,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e1d227f-59') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.789 239969 DEBUG os_vif [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:83:4d,bridge_name='br-int',has_traffic_filtering=True,id=5e1d227f-594c-4a1e-8f28-56469a20df56,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e1d227f-59') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.790 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.790 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e1d227f-59, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.792 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.794 239969 INFO os_vif [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:42:83:4d,bridge_name='br-int',has_traffic_filtering=True,id=5e1d227f-594c-4a1e-8f28-56469a20df56,network=Network(ce7228b3-b329-4b48-a563-2dfcf286a94e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5e1d227f-59')
Jan 26 16:13:29 compute-0 podman[335718]: 2026-01-26 16:13:29.802577487 +0000 UTC m=+0.050118945 container remove 82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.808 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[265cff9a-1a69-4d2f-b83c-347ada88ec85]: (4, ('Mon Jan 26 04:13:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9 (82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31)\n82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31\nMon Jan 26 04:13:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9 (82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31)\n82b901f8c5f88d8615579877d3b5b5cbde18a82d7193fb00eaa59bfe23d39a31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.810 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6bc6c3-e3f3-4b50-8746-ba80b0c0c06e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.811 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdaa791d6-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 kernel: tapdaa791d6-d0: left promiscuous mode
Jan 26 16:13:29 compute-0 nova_compute[239965]: 2026-01-26 16:13:29.833 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.835 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a04b7bfe-415d-4fb3-8ce7-54940264bdee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.854 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1eea8545-0d6d-4de4-ba7e-9305ae60fe5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.855 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c429246d-fcc6-4edb-89b5-249f5066bf94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.871 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[866e3a02-914f-4b68-b699-9c676742597f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542185, 'reachable_time': 26513, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335752, 'error': None, 'target': 'ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:29 compute-0 systemd[1]: run-netns-ovnmeta\x2ddaa791d6\x2ddfc2\x2d418f\x2d92aa\x2d2771eaf5b1a9.mount: Deactivated successfully.
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.873 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-daa791d6-dfc2-418f-92aa-2771eaf5b1a9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.874 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[227e8fbc-b637-4c09-8aa9-db4a433eef01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.875 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 5e1d227f-594c-4a1e-8f28-56469a20df56 in datapath ce7228b3-b329-4b48-a563-2dfcf286a94e unbound from our chassis
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.877 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce7228b3-b329-4b48-a563-2dfcf286a94e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.878 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d10b655-e12e-4803-b5e9-e144a74cad6f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:29.879 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e namespace which is not needed anymore
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e[333329]: [NOTICE]   (333333) : haproxy version is 2.8.14-c23fe91
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e[333329]: [NOTICE]   (333333) : path to executable is /usr/sbin/haproxy
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e[333329]: [WARNING]  (333333) : Exiting Master process...
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e[333329]: [WARNING]  (333333) : Exiting Master process...
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e[333329]: [ALERT]    (333333) : Current worker (333335) exited with code 143 (Terminated)
Jan 26 16:13:29 compute-0 neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e[333329]: [WARNING]  (333333) : All workers exited. Exiting... (0)
Jan 26 16:13:30 compute-0 systemd[1]: libpod-83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675.scope: Deactivated successfully.
Jan 26 16:13:30 compute-0 podman[335771]: 2026-01-26 16:13:30.00817574 +0000 UTC m=+0.043260718 container died 83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 26 16:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675-userdata-shm.mount: Deactivated successfully.
Jan 26 16:13:30 compute-0 podman[335771]: 2026-01-26 16:13:30.046860715 +0000 UTC m=+0.081945693 container cleanup 83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.057 239969 INFO nova.virt.libvirt.driver [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Deleting instance files /var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5_del
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.058 239969 INFO nova.virt.libvirt.driver [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Deletion of /var/lib/nova/instances/67eb236e-aeee-43e6-9b43-41c17fcc37a5_del complete
Jan 26 16:13:30 compute-0 systemd[1]: libpod-conmon-83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675.scope: Deactivated successfully.
Jan 26 16:13:30 compute-0 podman[335800]: 2026-01-26 16:13:30.108947782 +0000 UTC m=+0.042229773 container remove 83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:13:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:30.116 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a3711a80-8d95-4579-8127-9654fdb21dc4]: (4, ('Mon Jan 26 04:13:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e (83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675)\n83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675\nMon Jan 26 04:13:30 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e (83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675)\n83d055a6fdb7277fa5d4c9b5f31948532da9141c1b3fcfc1c618844dd0f6c675\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.117 239969 INFO nova.compute.manager [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Took 0.65 seconds to destroy the instance on the hypervisor.
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.117 239969 DEBUG oslo.service.loopingcall [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:13:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:30.117 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7b98d5-2b33-461f-a06d-cbf8717a7b66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.117 239969 DEBUG nova.compute.manager [-] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.118 239969 DEBUG nova.network.neutron [-] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:13:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:30.118 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapce7228b3-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.120 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:30 compute-0 kernel: tapce7228b3-b0: left promiscuous mode
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:30.141 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cf39491c-9341-4d0d-93b3-8a83e602aff8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:30.156 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3c2a4fce-e338-4865-95cd-d7dddef1e750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:30.157 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcf8f24-2166-46e9-bf66-d2f8605c32e2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:30.176 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d5bdee6-59e2-45a9-bb45-4b6f5d84ae93]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 542351, 'reachable_time': 27756, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 335815, 'error': None, 'target': 'ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:30.178 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ce7228b3-b329-4b48-a563-2dfcf286a94e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:13:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:30.178 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[bc0b148a-1841-4ee2-862c-4a2cfd7b473c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 6.0 KiB/s wr, 29 op/s
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:13:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c4ac20c357ebc113ba7c867c1582900451508994f989a7a372df93c277c8f21-merged.mount: Deactivated successfully.
Jan 26 16:13:30 compute-0 systemd[1]: run-netns-ovnmeta\x2dce7228b3\x2db329\x2d4b48\x2da563\x2d2dfcf286a94e.mount: Deactivated successfully.
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.474 239969 DEBUG nova.compute.manager [req-d109969c-01b8-48c3-bca5-72adb40809c7 req-096f8286-b827-4740-a940-7359977eeeb3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-unplugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.474 239969 DEBUG oslo_concurrency.lockutils [req-d109969c-01b8-48c3-bca5-72adb40809c7 req-096f8286-b827-4740-a940-7359977eeeb3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.474 239969 DEBUG oslo_concurrency.lockutils [req-d109969c-01b8-48c3-bca5-72adb40809c7 req-096f8286-b827-4740-a940-7359977eeeb3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.474 239969 DEBUG oslo_concurrency.lockutils [req-d109969c-01b8-48c3-bca5-72adb40809c7 req-096f8286-b827-4740-a940-7359977eeeb3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.475 239969 DEBUG nova.compute.manager [req-d109969c-01b8-48c3-bca5-72adb40809c7 req-096f8286-b827-4740-a940-7359977eeeb3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] No waiting events found dispatching network-vif-unplugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:13:30 compute-0 nova_compute[239965]: 2026-01-26 16:13:30.475 239969 DEBUG nova.compute.manager [req-d109969c-01b8-48c3-bca5-72adb40809c7 req-096f8286-b827-4740-a940-7359977eeeb3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-unplugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:13:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:13:30 compute-0 ceph-mon[75140]: pgmap v1881: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 6.0 KiB/s wr, 29 op/s
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:13:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.004 239969 DEBUG nova.network.neutron [-] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.021 239969 INFO nova.compute.manager [-] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Took 1.90 seconds to deallocate network for instance.
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.038 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.082 239969 DEBUG oslo_concurrency.lockutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.083 239969 DEBUG oslo_concurrency.lockutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.137 239969 DEBUG oslo_concurrency.processutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:13:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 105 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 6.0 KiB/s wr, 41 op/s
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.597 239969 DEBUG nova.network.neutron [req-0772b162-982b-4099-8a3b-e665ba1e33d4 req-b8b6bc53-245e-4cc0-93b2-b2d0d230c818 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updated VIF entry in instance network info cache for port 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.598 239969 DEBUG nova.network.neutron [req-0772b162-982b-4099-8a3b-e665ba1e33d4 req-b8b6bc53-245e-4cc0-93b2-b2d0d230c818 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updating instance_info_cache with network_info: [{"id": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "address": "fa:16:3e:e8:8d:bc", "network": {"id": "daa791d6-dfc2-418f-92aa-2771eaf5b1a9", "bridge": "br-int", "label": "tempest-network-smoke--1986447094", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap24c5ed41-3d", "ovs_interfaceid": "24c5ed41-3d5f-47a8-a80b-a40478dcb4b3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.623 239969 DEBUG oslo_concurrency.lockutils [req-0772b162-982b-4099-8a3b-e665ba1e33d4 req-b8b6bc53-245e-4cc0-93b2-b2d0d230c818 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-67eb236e-aeee-43e6-9b43-41c17fcc37a5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.646 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-plugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.646 239969 DEBUG oslo_concurrency.lockutils [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.646 239969 DEBUG oslo_concurrency.lockutils [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.647 239969 DEBUG oslo_concurrency.lockutils [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.647 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] No waiting events found dispatching network-vif-plugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.647 239969 WARNING nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received unexpected event network-vif-plugged-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 for instance with vm_state deleted and task_state None.
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.647 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-unplugged-5e1d227f-594c-4a1e-8f28-56469a20df56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.647 239969 DEBUG oslo_concurrency.lockutils [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.648 239969 DEBUG oslo_concurrency.lockutils [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.648 239969 DEBUG oslo_concurrency.lockutils [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.648 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] No waiting events found dispatching network-vif-unplugged-5e1d227f-594c-4a1e-8f28-56469a20df56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.648 239969 WARNING nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received unexpected event network-vif-unplugged-5e1d227f-594c-4a1e-8f28-56469a20df56 for instance with vm_state deleted and task_state None.
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.648 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-plugged-5e1d227f-594c-4a1e-8f28-56469a20df56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.648 239969 DEBUG oslo_concurrency.lockutils [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.649 239969 DEBUG oslo_concurrency.lockutils [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.649 239969 DEBUG oslo_concurrency.lockutils [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.649 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] No waiting events found dispatching network-vif-plugged-5e1d227f-594c-4a1e-8f28-56469a20df56 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.649 239969 WARNING nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received unexpected event network-vif-plugged-5e1d227f-594c-4a1e-8f28-56469a20df56 for instance with vm_state deleted and task_state None.
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.649 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-deleted-24c5ed41-3d5f-47a8-a80b-a40478dcb4b3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.650 239969 INFO nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Neutron deleted interface 24c5ed41-3d5f-47a8-a80b-a40478dcb4b3; detaching it from the instance and deleting it from the info cache
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.650 239969 DEBUG nova.network.neutron [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updating instance_info_cache with network_info: [{"id": "5e1d227f-594c-4a1e-8f28-56469a20df56", "address": "fa:16:3e:42:83:4d", "network": {"id": "ce7228b3-b329-4b48-a563-2dfcf286a94e", "bridge": "br-int", "label": "tempest-network-smoke--724648161", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe42:834d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5e1d227f-59", "ovs_interfaceid": "5e1d227f-594c-4a1e-8f28-56469a20df56", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.676 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Detach interface failed, port_id=24c5ed41-3d5f-47a8-a80b-a40478dcb4b3, reason: Instance 67eb236e-aeee-43e6-9b43-41c17fcc37a5 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.676 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Received event network-vif-deleted-5e1d227f-594c-4a1e-8f28-56469a20df56 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.676 239969 INFO nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Neutron deleted interface 5e1d227f-594c-4a1e-8f28-56469a20df56; detaching it from the instance and deleting it from the info cache
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.677 239969 DEBUG nova.network.neutron [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.702 239969 DEBUG nova.compute.manager [req-421ceec1-07fe-413f-aead-474eedea4008 req-fa13e719-a344-424f-b2e7-24ce4f98b44c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Detach interface failed, port_id=5e1d227f-594c-4a1e-8f28-56469a20df56, reason: Instance 67eb236e-aeee-43e6-9b43-41c17fcc37a5 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:13:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:13:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3499210814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.756 239969 DEBUG oslo_concurrency.processutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.762 239969 DEBUG nova.compute.provider_tree [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.790 239969 DEBUG nova.scheduler.client.report [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.822 239969 DEBUG oslo_concurrency.lockutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.846 239969 INFO nova.scheduler.client.report [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 67eb236e-aeee-43e6-9b43-41c17fcc37a5
Jan 26 16:13:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:32 compute-0 nova_compute[239965]: 2026-01-26 16:13:32.942 239969 DEBUG oslo_concurrency.lockutils [None req-cb40d6e3-2b0d-4c1a-87f0-6551fb8a5c58 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "67eb236e-aeee-43e6-9b43-41c17fcc37a5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.479s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:33 compute-0 ceph-mon[75140]: pgmap v1882: 305 pgs: 305 active+clean; 105 MiB data, 764 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 6.0 KiB/s wr, 41 op/s
Jan 26 16:13:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3499210814' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:13:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.2 KiB/s wr, 56 op/s
Jan 26 16:13:34 compute-0 nova_compute[239965]: 2026-01-26 16:13:34.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:35 compute-0 ceph-mon[75140]: pgmap v1883: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.2 KiB/s wr, 56 op/s
Jan 26 16:13:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.1 KiB/s wr, 55 op/s
Jan 26 16:13:36 compute-0 ceph-mon[75140]: pgmap v1884: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 7.1 KiB/s wr, 55 op/s
Jan 26 16:13:37 compute-0 nova_compute[239965]: 2026-01-26 16:13:37.040 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 7.1 KiB/s wr, 46 op/s
Jan 26 16:13:38 compute-0 nova_compute[239965]: 2026-01-26 16:13:38.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:13:39 compute-0 nova_compute[239965]: 2026-01-26 16:13:39.093 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444004.0926564, b4c8aa81-ea1c-4786-b53e-082bc9fadd9a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:13:39 compute-0 nova_compute[239965]: 2026-01-26 16:13:39.094 239969 INFO nova.compute.manager [-] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] VM Stopped (Lifecycle Event)
Jan 26 16:13:39 compute-0 nova_compute[239965]: 2026-01-26 16:13:39.129 239969 DEBUG nova.compute.manager [None req-88e09d3e-c280-4114-8d99-33ecec8dcf5a - - - - - -] [instance: b4c8aa81-ea1c-4786-b53e-082bc9fadd9a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:13:39 compute-0 ceph-mon[75140]: pgmap v1885: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 7.1 KiB/s wr, 46 op/s
Jan 26 16:13:39 compute-0 nova_compute[239965]: 2026-01-26 16:13:39.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:13:41 compute-0 ceph-mon[75140]: pgmap v1886: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:13:41 compute-0 nova_compute[239965]: 2026-01-26 16:13:41.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:13:42 compute-0 nova_compute[239965]: 2026-01-26 16:13:42.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:13:42 compute-0 ceph-mon[75140]: pgmap v1887: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:13:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:42 compute-0 nova_compute[239965]: 2026-01-26 16:13:42.983 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:43 compute-0 nova_compute[239965]: 2026-01-26 16:13:43.199 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Jan 26 16:13:44 compute-0 nova_compute[239965]: 2026-01-26 16:13:44.712 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444009.7113786, 67eb236e-aeee-43e6-9b43-41c17fcc37a5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:13:44 compute-0 nova_compute[239965]: 2026-01-26 16:13:44.713 239969 INFO nova.compute.manager [-] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] VM Stopped (Lifecycle Event)
Jan 26 16:13:44 compute-0 nova_compute[239965]: 2026-01-26 16:13:44.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:44 compute-0 nova_compute[239965]: 2026-01-26 16:13:44.803 239969 DEBUG nova.compute.manager [None req-5da06c69-0e4c-4c7d-a3bd-32b90e3a0aaf - - - - - -] [instance: 67eb236e-aeee-43e6-9b43-41c17fcc37a5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:13:45 compute-0 ceph-mon[75140]: pgmap v1888: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.2 KiB/s wr, 16 op/s
Jan 26 16:13:45 compute-0 nova_compute[239965]: 2026-01-26 16:13:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:13:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:46 compute-0 nova_compute[239965]: 2026-01-26 16:13:46.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:13:46 compute-0 nova_compute[239965]: 2026-01-26 16:13:46.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:13:46 compute-0 nova_compute[239965]: 2026-01-26 16:13:46.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:13:46 compute-0 nova_compute[239965]: 2026-01-26 16:13:46.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:13:46 compute-0 nova_compute[239965]: 2026-01-26 16:13:46.531 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:13:46 compute-0 nova_compute[239965]: 2026-01-26 16:13:46.531 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:13:46 compute-0 nova_compute[239965]: 2026-01-26 16:13:46.531 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:13:47 compute-0 nova_compute[239965]: 2026-01-26 16:13:47.044 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:47 compute-0 ceph-mon[75140]: pgmap v1889: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:47 compute-0 nova_compute[239965]: 2026-01-26 16:13:47.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:13:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:13:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107750973' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:13:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:13:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3107750973' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.336875088848142e-05 of space, bias 1.0, pg target 0.004010625266544426 quantized to 32 (current 32)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010183769611592885 of space, bias 1.0, pg target 0.30551308834778657 quantized to 32 (current 32)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.389004267604197e-07 of space, bias 4.0, pg target 0.0008866805121125036 quantized to 16 (current 16)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:13:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:13:49 compute-0 ceph-mon[75140]: pgmap v1890: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3107750973' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:13:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3107750973' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:13:49 compute-0 nova_compute[239965]: 2026-01-26 16:13:49.803 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:50 compute-0 ceph-mon[75140]: pgmap v1891: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:50 compute-0 podman[335839]: 2026-01-26 16:13:50.373878239 +0000 UTC m=+0.055981949 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:13:50 compute-0 podman[335840]: 2026-01-26 16:13:50.421694227 +0000 UTC m=+0.103104380 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 16:13:51 compute-0 nova_compute[239965]: 2026-01-26 16:13:51.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:13:51 compute-0 nova_compute[239965]: 2026-01-26 16:13:51.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:51 compute-0 nova_compute[239965]: 2026-01-26 16:13:51.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:51 compute-0 nova_compute[239965]: 2026-01-26 16:13:51.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:51 compute-0 nova_compute[239965]: 2026-01-26 16:13:51.541 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:13:51 compute-0 nova_compute[239965]: 2026-01-26 16:13:51.541 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.047 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:13:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849259727' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.112 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:13:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/849259727' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:13:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.273 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.275 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3752MB free_disk=59.98747928161174GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.275 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.275 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.355 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.356 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.377 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:13:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:13:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3335754882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:13:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.937 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.943 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:13:52 compute-0 nova_compute[239965]: 2026-01-26 16:13:52.962 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:13:53 compute-0 nova_compute[239965]: 2026-01-26 16:13:53.184 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:13:53 compute-0 nova_compute[239965]: 2026-01-26 16:13:53.184 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.909s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:53 compute-0 ceph-mon[75140]: pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3335754882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:13:53 compute-0 nova_compute[239965]: 2026-01-26 16:13:53.483 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:53.484 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:13:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:53.486 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:13:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:54 compute-0 ceph-mon[75140]: pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:54 compute-0 nova_compute[239965]: 2026-01-26 16:13:54.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:56 compute-0 ceph-mon[75140]: pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:57 compute-0 nova_compute[239965]: 2026-01-26 16:13:57.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:13:57 compute-0 nova_compute[239965]: 2026-01-26 16:13:57.185 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:13:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:13:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:13:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:59.236 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:13:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:59.236 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:13:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:13:59.236 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:13:59 compute-0 nova_compute[239965]: 2026-01-26 16:13:59.811 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:00 compute-0 ceph-mon[75140]: pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:14:01 compute-0 ceph-mon[75140]: pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:02 compute-0 nova_compute[239965]: 2026-01-26 16:14:02.053 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:02 compute-0 ceph-mon[75140]: pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:02.488 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:03 compute-0 rsyslogd[1006]: imjournal: 2789 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 26 16:14:03 compute-0 nova_compute[239965]: 2026-01-26 16:14:03.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:04 compute-0 nova_compute[239965]: 2026-01-26 16:14:04.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:05 compute-0 ceph-mon[75140]: pgmap v1898: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:05.435 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:05:1e:88 2001:db8:0:1:f816:3eff:fe05:1e88 2001:db8::f816:3eff:fe05:1e88'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe05:1e88/64 2001:db8::f816:3eff:fe05:1e88/64', 'neutron:device_id': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef5475f2-541c-4651-b877-e2e4325b6344', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dbc549b-a211-495a-b7d4-5846cf98bef7, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=05f31558-fab2-41bc-9c60-661571b4fe12) old=Port_Binding(mac=['fa:16:3e:05:1e:88 2001:db8::f816:3eff:fe05:1e88'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe05:1e88/64', 'neutron:device_id': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef5475f2-541c-4651-b877-e2e4325b6344', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:14:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:05.436 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 05f31558-fab2-41bc-9c60-661571b4fe12 in datapath ef5475f2-541c-4651-b877-e2e4325b6344 updated
Jan 26 16:14:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:05.437 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef5475f2-541c-4651-b877-e2e4325b6344, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:14:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:05.439 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[47e51b13-e8c7-4efa-9f38-ee55c6f38361]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:06 compute-0 ceph-mon[75140]: pgmap v1899: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:07 compute-0 nova_compute[239965]: 2026-01-26 16:14:07.055 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:09 compute-0 ceph-mon[75140]: pgmap v1900: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:09 compute-0 nova_compute[239965]: 2026-01-26 16:14:09.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:11 compute-0 ceph-mon[75140]: pgmap v1901: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:12 compute-0 nova_compute[239965]: 2026-01-26 16:14:12.057 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:12 compute-0 ceph-mon[75140]: pgmap v1902: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:13 compute-0 nova_compute[239965]: 2026-01-26 16:14:13.360 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:13 compute-0 nova_compute[239965]: 2026-01-26 16:14:13.360 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:13 compute-0 nova_compute[239965]: 2026-01-26 16:14:13.396 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:14:13 compute-0 nova_compute[239965]: 2026-01-26 16:14:13.490 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:13 compute-0 nova_compute[239965]: 2026-01-26 16:14:13.491 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:13 compute-0 nova_compute[239965]: 2026-01-26 16:14:13.499 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:14:13 compute-0 nova_compute[239965]: 2026-01-26 16:14:13.500 239969 INFO nova.compute.claims [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:14:13 compute-0 nova_compute[239965]: 2026-01-26 16:14:13.647 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:14:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:14:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1226774405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.234 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.240 239969 DEBUG nova.compute.provider_tree [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:14:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1226774405' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:14:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.279 239969 DEBUG nova.scheduler.client.report [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.313 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.314 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.388 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.389 239969 DEBUG nova.network.neutron [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.426 239969 INFO nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.460 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.595 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.597 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.598 239969 INFO nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Creating image(s)
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.632 239969 DEBUG nova.storage.rbd_utils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.662 239969 DEBUG nova.storage.rbd_utils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.690 239969 DEBUG nova.storage.rbd_utils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.695 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.776 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.778 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.779 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.779 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.800 239969 DEBUG nova.storage.rbd_utils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.804 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:14:14 compute-0 nova_compute[239965]: 2026-01-26 16:14:14.846 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:15 compute-0 nova_compute[239965]: 2026-01-26 16:14:15.010 239969 DEBUG nova.policy [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:14:15 compute-0 nova_compute[239965]: 2026-01-26 16:14:15.168 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:14:15 compute-0 nova_compute[239965]: 2026-01-26 16:14:15.218 239969 DEBUG nova.storage.rbd_utils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:14:15 compute-0 ceph-mon[75140]: pgmap v1903: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:15 compute-0 nova_compute[239965]: 2026-01-26 16:14:15.285 239969 DEBUG nova.objects.instance [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid ad5cd088-59a3-4894-8741-f0daa1fb4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:14:15 compute-0 nova_compute[239965]: 2026-01-26 16:14:15.317 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:14:15 compute-0 nova_compute[239965]: 2026-01-26 16:14:15.318 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Ensure instance console log exists: /var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:14:15 compute-0 nova_compute[239965]: 2026-01-26 16:14:15.318 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:15 compute-0 nova_compute[239965]: 2026-01-26 16:14:15.319 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:15 compute-0 nova_compute[239965]: 2026-01-26 16:14:15.319 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 99 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 290 KiB/s wr, 1 op/s
Jan 26 16:14:16 compute-0 nova_compute[239965]: 2026-01-26 16:14:16.395 239969 DEBUG nova.network.neutron [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Successfully created port: 1883cc75-9bba-4da7-bb9d-e038a9254a10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:14:17 compute-0 nova_compute[239965]: 2026-01-26 16:14:17.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:17 compute-0 nova_compute[239965]: 2026-01-26 16:14:17.132 239969 DEBUG nova.network.neutron [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Successfully created port: b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:14:17 compute-0 ceph-mon[75140]: pgmap v1904: 305 pgs: 305 active+clean; 99 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 597 B/s rd, 290 KiB/s wr, 1 op/s
Jan 26 16:14:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:18 compute-0 nova_compute[239965]: 2026-01-26 16:14:18.628 239969 DEBUG nova.network.neutron [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Successfully updated port: 1883cc75-9bba-4da7-bb9d-e038a9254a10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:14:18 compute-0 nova_compute[239965]: 2026-01-26 16:14:18.923 239969 DEBUG nova.compute.manager [req-ab7e7fa2-c719-4302-a2eb-3ce238682a0d req-ce9201bd-7c2b-4410-aeca-e4d7e4802183 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-changed-1883cc75-9bba-4da7-bb9d-e038a9254a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:14:18 compute-0 nova_compute[239965]: 2026-01-26 16:14:18.924 239969 DEBUG nova.compute.manager [req-ab7e7fa2-c719-4302-a2eb-3ce238682a0d req-ce9201bd-7c2b-4410-aeca-e4d7e4802183 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Refreshing instance network info cache due to event network-changed-1883cc75-9bba-4da7-bb9d-e038a9254a10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:14:18 compute-0 nova_compute[239965]: 2026-01-26 16:14:18.924 239969 DEBUG oslo_concurrency.lockutils [req-ab7e7fa2-c719-4302-a2eb-3ce238682a0d req-ce9201bd-7c2b-4410-aeca-e4d7e4802183 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:14:18 compute-0 nova_compute[239965]: 2026-01-26 16:14:18.924 239969 DEBUG oslo_concurrency.lockutils [req-ab7e7fa2-c719-4302-a2eb-3ce238682a0d req-ce9201bd-7c2b-4410-aeca-e4d7e4802183 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:14:18 compute-0 nova_compute[239965]: 2026-01-26 16:14:18.925 239969 DEBUG nova.network.neutron [req-ab7e7fa2-c719-4302-a2eb-3ce238682a0d req-ce9201bd-7c2b-4410-aeca-e4d7e4802183 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Refreshing network info cache for port 1883cc75-9bba-4da7-bb9d-e038a9254a10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:14:19 compute-0 nova_compute[239965]: 2026-01-26 16:14:19.223 239969 DEBUG nova.network.neutron [req-ab7e7fa2-c719-4302-a2eb-3ce238682a0d req-ce9201bd-7c2b-4410-aeca-e4d7e4802183 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:14:19 compute-0 ceph-mon[75140]: pgmap v1905: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:19 compute-0 nova_compute[239965]: 2026-01-26 16:14:19.738 239969 DEBUG nova.network.neutron [req-ab7e7fa2-c719-4302-a2eb-3ce238682a0d req-ce9201bd-7c2b-4410-aeca-e4d7e4802183 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:14:19 compute-0 nova_compute[239965]: 2026-01-26 16:14:19.754 239969 DEBUG oslo_concurrency.lockutils [req-ab7e7fa2-c719-4302-a2eb-3ce238682a0d req-ce9201bd-7c2b-4410-aeca-e4d7e4802183 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:14:19 compute-0 nova_compute[239965]: 2026-01-26 16:14:19.851 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:20 compute-0 nova_compute[239965]: 2026-01-26 16:14:20.542 239969 DEBUG nova.network.neutron [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Successfully updated port: b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:14:20 compute-0 nova_compute[239965]: 2026-01-26 16:14:20.556 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:14:20 compute-0 nova_compute[239965]: 2026-01-26 16:14:20.556 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:14:20 compute-0 nova_compute[239965]: 2026-01-26 16:14:20.556 239969 DEBUG nova.network.neutron [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:14:20 compute-0 nova_compute[239965]: 2026-01-26 16:14:20.900 239969 DEBUG nova.network.neutron [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:14:21 compute-0 nova_compute[239965]: 2026-01-26 16:14:21.220 239969 DEBUG nova.compute.manager [req-79533b47-cd26-48f5-8abe-e226ea890d0b req-a075a9c7-9da0-42b3-9c6f-1398ec2c0dc7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-changed-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:14:21 compute-0 nova_compute[239965]: 2026-01-26 16:14:21.220 239969 DEBUG nova.compute.manager [req-79533b47-cd26-48f5-8abe-e226ea890d0b req-a075a9c7-9da0-42b3-9c6f-1398ec2c0dc7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Refreshing instance network info cache due to event network-changed-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:14:21 compute-0 nova_compute[239965]: 2026-01-26 16:14:21.220 239969 DEBUG oslo_concurrency.lockutils [req-79533b47-cd26-48f5-8abe-e226ea890d0b req-a075a9c7-9da0-42b3-9c6f-1398ec2c0dc7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:14:21 compute-0 ceph-mon[75140]: pgmap v1906: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:21 compute-0 podman[336115]: 2026-01-26 16:14:21.359943728 +0000 UTC m=+0.051815407 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 26 16:14:21 compute-0 podman[336116]: 2026-01-26 16:14:21.418050598 +0000 UTC m=+0.106703918 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, 
container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller)
Jan 26 16:14:22 compute-0 nova_compute[239965]: 2026-01-26 16:14:22.061 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:23 compute-0 ceph-mon[75140]: pgmap v1907: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:24 compute-0 ceph-mon[75140]: pgmap v1908: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:24 compute-0 nova_compute[239965]: 2026-01-26 16:14:24.855 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.057 239969 DEBUG nova.network.neutron [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updating instance_info_cache with network_info: [{"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": 
"2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.078 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.078 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Instance network_info: |[{"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.079 239969 DEBUG oslo_concurrency.lockutils [req-79533b47-cd26-48f5-8abe-e226ea890d0b req-a075a9c7-9da0-42b3-9c6f-1398ec2c0dc7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.079 239969 DEBUG nova.network.neutron [req-79533b47-cd26-48f5-8abe-e226ea890d0b req-a075a9c7-9da0-42b3-9c6f-1398ec2c0dc7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Refreshing network info cache for port b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.083 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Start _get_guest_xml network_info=[{"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.087 239969 WARNING nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.095 239969 DEBUG nova.virt.libvirt.host [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.096 239969 DEBUG nova.virt.libvirt.host [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.103 239969 DEBUG nova.virt.libvirt.host [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.104 239969 DEBUG nova.virt.libvirt.host [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.105 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.105 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.105 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.105 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.106 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.106 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.106 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.106 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.107 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.107 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.107 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.107 239969 DEBUG nova.virt.hardware [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.110 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:14:25 compute-0 ovn_controller[146046]: 2026-01-26T16:14:25Z|01129|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 26 16:14:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:14:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3227372347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.668 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.690 239969 DEBUG nova.storage.rbd_utils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:14:25 compute-0 nova_compute[239965]: 2026-01-26 16:14:25.695 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:14:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3227372347' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:14:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:14:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4010135541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:14:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.278 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.280 239969 DEBUG nova.virt.libvirt.vif [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-777747036',display_name='tempest-TestGettingAddress-server-777747036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-777747036',id=111,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-wuhqw1ae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:14:14Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=ad5cd088-59a3-4894-8741-f0daa1fb4ff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.281 239969 DEBUG nova.network.os_vif_util [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.282 239969 DEBUG nova.network.os_vif_util [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:5d:a9,bridge_name='br-int',has_traffic_filtering=True,id=1883cc75-9bba-4da7-bb9d-e038a9254a10,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1883cc75-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.283 239969 DEBUG nova.virt.libvirt.vif [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-777747036',display_name='tempest-TestGettingAddress-server-777747036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-777747036',id=111,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-wuhqw1ae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:14:14Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=ad5cd088-59a3-4894-8741-f0daa1fb4ff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.283 239969 DEBUG nova.network.os_vif_util [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.284 239969 DEBUG nova.network.os_vif_util [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:61:7d,bridge_name='br-int',has_traffic_filtering=True,id=b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4f3ee90-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.285 239969 DEBUG nova.objects.instance [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid ad5cd088-59a3-4894-8741-f0daa1fb4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.301 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <uuid>ad5cd088-59a3-4894-8741-f0daa1fb4ff9</uuid>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <name>instance-0000006f</name>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-777747036</nova:name>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:14:25</nova:creationTime>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <nova:port uuid="1883cc75-9bba-4da7-bb9d-e038a9254a10">
Jan 26 16:14:26 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <nova:port uuid="b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c">
Jan 26 16:14:26 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fec4:617d" ipVersion="6"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fec4:617d" ipVersion="6"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <system>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <entry name="serial">ad5cd088-59a3-4894-8741-f0daa1fb4ff9</entry>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <entry name="uuid">ad5cd088-59a3-4894-8741-f0daa1fb4ff9</entry>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </system>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <os>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   </os>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <features>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   </features>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk">
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk.config">
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:14:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:11:5d:a9"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <target dev="tap1883cc75-9b"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:c4:61:7d"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <target dev="tapb4f3ee90-85"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9/console.log" append="off"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <video>
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </video>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:14:26 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:14:26 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:14:26 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:14:26 compute-0 nova_compute[239965]: </domain>
Jan 26 16:14:26 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.302 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Preparing to wait for external event network-vif-plugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.303 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.303 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.303 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.303 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Preparing to wait for external event network-vif-plugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.303 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.303 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.304 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.304 239969 DEBUG nova.virt.libvirt.vif [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-777747036',display_name='tempest-TestGettingAddress-server-777747036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-777747036',id=111,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-wuhqw1ae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:14:14Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=ad5cd088-59a3-4894-8741-f0daa1fb4ff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.304 239969 DEBUG nova.network.os_vif_util [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.305 239969 DEBUG nova.network.os_vif_util [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:11:5d:a9,bridge_name='br-int',has_traffic_filtering=True,id=1883cc75-9bba-4da7-bb9d-e038a9254a10,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1883cc75-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.305 239969 DEBUG os_vif [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:5d:a9,bridge_name='br-int',has_traffic_filtering=True,id=1883cc75-9bba-4da7-bb9d-e038a9254a10,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1883cc75-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.307 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.307 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.310 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.310 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1883cc75-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.310 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1883cc75-9b, col_values=(('external_ids', {'iface-id': '1883cc75-9bba-4da7-bb9d-e038a9254a10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:11:5d:a9', 'vm-uuid': 'ad5cd088-59a3-4894-8741-f0daa1fb4ff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:26 compute-0 NetworkManager[48954]: <info>  [1769444066.3126] manager: (tap1883cc75-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/461)
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.316 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.317 239969 INFO os_vif [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:11:5d:a9,bridge_name='br-int',has_traffic_filtering=True,id=1883cc75-9bba-4da7-bb9d-e038a9254a10,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1883cc75-9b')
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.318 239969 DEBUG nova.virt.libvirt.vif [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-777747036',display_name='tempest-TestGettingAddress-server-777747036',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-777747036',id=111,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-wuhqw1ae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:14:14Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=ad5cd088-59a3-4894-8741-f0daa1fb4ff9,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.319 239969 DEBUG nova.network.os_vif_util [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.320 239969 DEBUG nova.network.os_vif_util [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:61:7d,bridge_name='br-int',has_traffic_filtering=True,id=b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4f3ee90-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.320 239969 DEBUG os_vif [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:61:7d,bridge_name='br-int',has_traffic_filtering=True,id=b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4f3ee90-85') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.321 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.321 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.322 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.324 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.324 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4f3ee90-85, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.325 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4f3ee90-85, col_values=(('external_ids', {'iface-id': 'b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:61:7d', 'vm-uuid': 'ad5cd088-59a3-4894-8741-f0daa1fb4ff9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.326 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:26 compute-0 NetworkManager[48954]: <info>  [1769444066.3266] manager: (tapb4f3ee90-85): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.332 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.332 239969 INFO os_vif [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:61:7d,bridge_name='br-int',has_traffic_filtering=True,id=b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4f3ee90-85')
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.385 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.385 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.386 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:11:5d:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.386 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:c4:61:7d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.386 239969 INFO nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Using config drive
Jan 26 16:14:26 compute-0 nova_compute[239965]: 2026-01-26 16:14:26.407 239969 DEBUG nova.storage.rbd_utils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:14:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4010135541' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:14:26 compute-0 ceph-mon[75140]: pgmap v1909: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.063 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.153 239969 INFO nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Creating config drive at /var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9/disk.config
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.157 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm0fiun0x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.294 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm0fiun0x" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.319 239969 DEBUG nova.storage.rbd_utils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.323 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9/disk.config ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.454 239969 DEBUG oslo_concurrency.processutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9/disk.config ad5cd088-59a3-4894-8741-f0daa1fb4ff9_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.455 239969 INFO nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Deleting local config drive /var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9/disk.config because it was imported into RBD.
Jan 26 16:14:27 compute-0 NetworkManager[48954]: <info>  [1769444067.5186] manager: (tap1883cc75-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/463)
Jan 26 16:14:27 compute-0 kernel: tap1883cc75-9b: entered promiscuous mode
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.523 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01130|binding|INFO|Claiming lport 1883cc75-9bba-4da7-bb9d-e038a9254a10 for this chassis.
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01131|binding|INFO|1883cc75-9bba-4da7-bb9d-e038a9254a10: Claiming fa:16:3e:11:5d:a9 10.100.0.10
Jan 26 16:14:27 compute-0 kernel: tapb4f3ee90-85: entered promiscuous mode
Jan 26 16:14:27 compute-0 NetworkManager[48954]: <info>  [1769444067.5391] manager: (tapb4f3ee90-85): new Tun device (/org/freedesktop/NetworkManager/Devices/464)
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.542 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.547 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01132|if_status|INFO|Dropped 1 log messages in last 93 seconds (most recently, 93 seconds ago) due to excessive rate
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01133|if_status|INFO|Not updating pb chassis for b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c now as sb is readonly
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01134|binding|INFO|Claiming lport b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c for this chassis.
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01135|binding|INFO|b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c: Claiming fa:16:3e:c4:61:7d 2001:db8:0:1:f816:3eff:fec4:617d 2001:db8::f816:3eff:fec4:617d
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.558 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:5d:a9 10.100.0.10'], port_security=['fa:16:3e:11:5d:a9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ad5cd088-59a3-4894-8741-f0daa1fb4ff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bd0b480-793e-4d46-83dd-5e27335992d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9ccd144-fbed-4376-90b2-5da9708cb290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c38b8b6-821e-4ed4-9284-993e44902e19, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1883cc75-9bba-4da7-bb9d-e038a9254a10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.559 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1883cc75-9bba-4da7-bb9d-e038a9254a10 in datapath 7bd0b480-793e-4d46-83dd-5e27335992d9 bound to our chassis
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.560 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bd0b480-793e-4d46-83dd-5e27335992d9
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.566 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:61:7d 2001:db8:0:1:f816:3eff:fec4:617d 2001:db8::f816:3eff:fec4:617d'], port_security=['fa:16:3e:c4:61:7d 2001:db8:0:1:f816:3eff:fec4:617d 2001:db8::f816:3eff:fec4:617d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fec4:617d/64 2001:db8::f816:3eff:fec4:617d/64', 'neutron:device_id': 'ad5cd088-59a3-4894-8741-f0daa1fb4ff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef5475f2-541c-4651-b877-e2e4325b6344', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9ccd144-fbed-4376-90b2-5da9708cb290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dbc549b-a211-495a-b7d4-5846cf98bef7, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:14:27 compute-0 systemd-udevd[336306]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:14:27 compute-0 systemd-udevd[336307]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.578 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ebbc4963-95ea-4bf0-9f92-a684e16f3661]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.579 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7bd0b480-71 in ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.580 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7bd0b480-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.581 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[48a0eaaa-5ea5-4d5c-8e50-724e03a42ef7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.581 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b66f7551-ab89-4d03-a465-32dca6e45084]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 systemd-machined[208061]: New machine qemu-138-instance-0000006f.
Jan 26 16:14:27 compute-0 NetworkManager[48954]: <info>  [1769444067.5895] device (tap1883cc75-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:14:27 compute-0 NetworkManager[48954]: <info>  [1769444067.5901] device (tap1883cc75-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.595 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[adba243a-3492-4024-abf2-a890f8671c7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 NetworkManager[48954]: <info>  [1769444067.5985] device (tapb4f3ee90-85): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:14:27 compute-0 systemd[1]: Started Virtual Machine qemu-138-instance-0000006f.
Jan 26 16:14:27 compute-0 NetworkManager[48954]: <info>  [1769444067.5995] device (tapb4f3ee90-85): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.620 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a87ba6d-6320-49a2-9e44-ddedcd5822c2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01136|binding|INFO|Setting lport 1883cc75-9bba-4da7-bb9d-e038a9254a10 ovn-installed in OVS
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01137|binding|INFO|Setting lport 1883cc75-9bba-4da7-bb9d-e038a9254a10 up in Southbound
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.631 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01138|binding|INFO|Setting lport b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c ovn-installed in OVS
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01139|binding|INFO|Setting lport b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c up in Southbound
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.643 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.653 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[40e7ce88-6ea1-46bc-8a99-00f45ecabea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 NetworkManager[48954]: <info>  [1769444067.6588] manager: (tap7bd0b480-70): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.658 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c9eb17-699e-4792-a2e1-5e2f4c46f1d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.689 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[216fbe7d-7988-4246-ae5b-ba13c98e5f71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.692 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5bfe42-3489-4a74-b911-1234a3c3e7fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 NetworkManager[48954]: <info>  [1769444067.7137] device (tap7bd0b480-70): carrier: link connected
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.719 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0061d36d-0480-487b-b19e-e0425ebe8cc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.734 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ca0265bb-4570-4844-b6dd-f2a3dc876836]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bd0b480-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:07:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555852, 'reachable_time': 30920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336339, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.749 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[575c02b5-c350-4c76-bd5b-3d413c7c63a8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:74f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555852, 'tstamp': 555852}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336340, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.764 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7f50a86e-5449-4b42-8b0b-ee2b334c71ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bd0b480-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:07:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555852, 'reachable_time': 30920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336341, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.794 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0e06aaf2-7f02-458e-ae66-bb374e9f75a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.861 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[90a57373-536d-4a49-b7a8-c12f8ec1bbcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.862 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bd0b480-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.862 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.863 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bd0b480-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:27 compute-0 NetworkManager[48954]: <info>  [1769444067.8652] manager: (tap7bd0b480-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Jan 26 16:14:27 compute-0 kernel: tap7bd0b480-70: entered promiscuous mode
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.868 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bd0b480-70, col_values=(('external_ids', {'iface-id': '43b09bd8-2cf9-447b-ad81-d331160a2f5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:27 compute-0 ovn_controller[146046]: 2026-01-26T16:14:27Z|01140|binding|INFO|Releasing lport 43b09bd8-2cf9-447b-ad81-d331160a2f5f from this chassis (sb_readonly=0)
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.871 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7bd0b480-793e-4d46-83dd-5e27335992d9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7bd0b480-793e-4d46-83dd-5e27335992d9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.872 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[25d2f574-6445-4972-a509-e5df0e7c18fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.873 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-7bd0b480-793e-4d46-83dd-5e27335992d9
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/7bd0b480-793e-4d46-83dd-5e27335992d9.pid.haproxy
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 7bd0b480-793e-4d46-83dd-5e27335992d9
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:14:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:27.874 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'env', 'PROCESS_TAG=haproxy-7bd0b480-793e-4d46-83dd-5e27335992d9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7bd0b480-793e-4d46-83dd-5e27335992d9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.884 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.963 239969 DEBUG nova.network.neutron [req-79533b47-cd26-48f5-8abe-e226ea890d0b req-a075a9c7-9da0-42b3-9c6f-1398ec2c0dc7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updated VIF entry in instance network info cache for port b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.963 239969 DEBUG nova.network.neutron [req-79533b47-cd26-48f5-8abe-e226ea890d0b req-a075a9c7-9da0-42b3-9c6f-1398ec2c0dc7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updating instance_info_cache with network_info: [{"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": 
[], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:14:27 compute-0 nova_compute[239965]: 2026-01-26 16:14:27.978 239969 DEBUG oslo_concurrency.lockutils [req-79533b47-cd26-48f5-8abe-e226ea890d0b req-a075a9c7-9da0-42b3-9c6f-1398ec2c0dc7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.021 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444068.02069, ad5cd088-59a3-4894-8741-f0daa1fb4ff9 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.021 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] VM Started (Lifecycle Event)
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.042 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.045 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444068.020921, ad5cd088-59a3-4894-8741-f0daa1fb4ff9 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.046 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] VM Paused (Lifecycle Event)
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.067 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.071 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.090 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.152 239969 DEBUG nova.compute.manager [req-59112616-14f2-4b41-bb1b-c65eef842e4d req-e89ea186-d322-48c6-a78e-bcd2570976f9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-plugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.152 239969 DEBUG oslo_concurrency.lockutils [req-59112616-14f2-4b41-bb1b-c65eef842e4d req-e89ea186-d322-48c6-a78e-bcd2570976f9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.153 239969 DEBUG oslo_concurrency.lockutils [req-59112616-14f2-4b41-bb1b-c65eef842e4d req-e89ea186-d322-48c6-a78e-bcd2570976f9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.153 239969 DEBUG oslo_concurrency.lockutils [req-59112616-14f2-4b41-bb1b-c65eef842e4d req-e89ea186-d322-48c6-a78e-bcd2570976f9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.153 239969 DEBUG nova.compute.manager [req-59112616-14f2-4b41-bb1b-c65eef842e4d req-e89ea186-d322-48c6-a78e-bcd2570976f9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Processing event network-vif-plugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:14:28 compute-0 podman[336414]: 2026-01-26 16:14:28.231294848 +0000 UTC m=+0.050625198 container create 2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 26 16:14:28 compute-0 systemd[1]: Started libpod-conmon-2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48.scope.
Jan 26 16:14:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 25 op/s
Jan 26 16:14:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:14:28 compute-0 podman[336414]: 2026-01-26 16:14:28.203439207 +0000 UTC m=+0.022769577 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:14:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1042739d239caf2c40ad5eed4380d715d121fe90e2de4e65df17b0ffd2e62eca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:28 compute-0 podman[336414]: 2026-01-26 16:14:28.318262752 +0000 UTC m=+0.137593122 container init 2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:14:28 compute-0 podman[336414]: 2026-01-26 16:14:28.324140975 +0000 UTC m=+0.143471325 container start 2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:14:28 compute-0 neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9[336429]: [NOTICE]   (336433) : New worker (336435) forked
Jan 26 16:14:28 compute-0 neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9[336429]: [NOTICE]   (336433) : Loading success.
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.380 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c in datapath ef5475f2-541c-4651-b877-e2e4325b6344 unbound from our chassis
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.382 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef5475f2-541c-4651-b877-e2e4325b6344
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.396 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96cde0a0-0258-4ca8-9ade-e013e53b594c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.397 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapef5475f2-51 in ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.399 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapef5475f2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.399 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1155db12-4ea2-4912-accc-a53d2e431e3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.400 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[721fc779-4f9a-489a-8aa5-17eab1e9c351]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.411 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9a5c94-66fc-4b4f-a306-36a9f5d71a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.423 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[72c3fed5-74a1-4bf4-94c9-089e63624a82]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.454 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[81e95c60-0c2f-4def-a956-a2381293b4a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 NetworkManager[48954]: <info>  [1769444068.4648] manager: (tapef5475f2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/467)
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.464 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[49e5a8c6-bd79-4352-9923-d5e0f88b1d7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.500 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7e93ecaf-1a11-4132-965c-7af5731058b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.503 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1e3f5889-d41b-4373-8d2b-18ac78a9ea80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 NetworkManager[48954]: <info>  [1769444068.5342] device (tapef5475f2-50): carrier: link connected
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.543 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[55829470-2910-4c61-95c8-1fdb011fc92b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.564 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e3a995ed-b3fc-452b-bfd6-2dea0bd697ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef5475f2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:1e:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555934, 'reachable_time': 43394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 336456, 'error': None, 'target': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.587 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f20a68-963e-4404-a8c6-59970a3f013c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:1e88'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555934, 'tstamp': 555934}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 336457, 'error': None, 'target': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.608 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bddc0001-c8b7-490a-89c6-27eda16f7fe2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef5475f2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:1e:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555934, 'reachable_time': 43394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 336458, 'error': None, 'target': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:14:28
Jan 26 16:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'vms', 'volumes', 'backups', 'default.rgw.log', '.mgr', '.rgw.root']
Jan 26 16:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.652 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b019306-d444-4469-b652-220c5d2c341d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.694 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[82872c31-0f53-4f7c-abc1-302e48a1b3d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.696 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef5475f2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.696 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.697 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef5475f2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.699 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:28 compute-0 kernel: tapef5475f2-50: entered promiscuous mode
Jan 26 16:14:28 compute-0 NetworkManager[48954]: <info>  [1769444068.7001] manager: (tapef5475f2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.701 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef5475f2-50, col_values=(('external_ids', {'iface-id': '05f31558-fab2-41bc-9c60-661571b4fe12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:14:28 compute-0 ovn_controller[146046]: 2026-01-26T16:14:28Z|01141|binding|INFO|Releasing lport 05f31558-fab2-41bc-9c60-661571b4fe12 from this chassis (sb_readonly=0)
Jan 26 16:14:28 compute-0 nova_compute[239965]: 2026-01-26 16:14:28.716 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.717 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ef5475f2-541c-4651-b877-e2e4325b6344.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ef5475f2-541c-4651-b877-e2e4325b6344.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.717 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ab83ee34-7faa-448e-9722-1e07012effaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.718 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-ef5475f2-541c-4651-b877-e2e4325b6344
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/ef5475f2-541c-4651-b877-e2e4325b6344.pid.haproxy
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID ef5475f2-541c-4651-b877-e2e4325b6344
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:14:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:28.719 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'env', 'PROCESS_TAG=haproxy-ef5475f2-541c-4651-b877-e2e4325b6344', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ef5475f2-541c-4651-b877-e2e4325b6344.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:14:29 compute-0 podman[336488]: 2026-01-26 16:14:29.079617502 +0000 UTC m=+0.061484033 container create 32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 16:14:29 compute-0 systemd[1]: Started libpod-conmon-32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e.scope.
Jan 26 16:14:29 compute-0 podman[336488]: 2026-01-26 16:14:29.044909214 +0000 UTC m=+0.026775785 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:14:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6702055633131226b04612cd58b0f344045e7fa03232491f9fa25db6581d7cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:29 compute-0 podman[336488]: 2026-01-26 16:14:29.167066028 +0000 UTC m=+0.148932569 container init 32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:14:29 compute-0 podman[336488]: 2026-01-26 16:14:29.172906602 +0000 UTC m=+0.154773113 container start 32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:14:29 compute-0 neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344[336503]: [NOTICE]   (336507) : New worker (336509) forked
Jan 26 16:14:29 compute-0 neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344[336503]: [NOTICE]   (336507) : Loading success.
Jan 26 16:14:29 compute-0 ceph-mon[75140]: pgmap v1910: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.5 MiB/s wr, 25 op/s
Jan 26 16:14:29 compute-0 sudo[336518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:14:29 compute-0 sudo[336518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:14:29 compute-0 sudo[336518]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:29 compute-0 sudo[336543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:14:29 compute-0 sudo[336543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:30 compute-0 sudo[336543]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:14:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:14:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:14:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:14:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:14:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:14:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:14:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:14:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:14:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:14:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:14:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:14:30 compute-0 ceph-mon[75140]: pgmap v1911: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:14:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:14:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:14:30 compute-0 sudo[336599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:14:30 compute-0 sudo[336599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:14:30 compute-0 sudo[336599]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:30 compute-0 nova_compute[239965]: 2026-01-26 16:14:30.526 239969 DEBUG nova.compute.manager [req-fc1f5565-105a-4f12-a079-9839d95358b3 req-b5901642-f8b1-47c3-ab43-b9c0d3b9e88a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-plugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:14:30 compute-0 nova_compute[239965]: 2026-01-26 16:14:30.526 239969 DEBUG oslo_concurrency.lockutils [req-fc1f5565-105a-4f12-a079-9839d95358b3 req-b5901642-f8b1-47c3-ab43-b9c0d3b9e88a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:30 compute-0 nova_compute[239965]: 2026-01-26 16:14:30.527 239969 DEBUG oslo_concurrency.lockutils [req-fc1f5565-105a-4f12-a079-9839d95358b3 req-b5901642-f8b1-47c3-ab43-b9c0d3b9e88a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:30 compute-0 nova_compute[239965]: 2026-01-26 16:14:30.527 239969 DEBUG oslo_concurrency.lockutils [req-fc1f5565-105a-4f12-a079-9839d95358b3 req-b5901642-f8b1-47c3-ab43-b9c0d3b9e88a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:30 compute-0 nova_compute[239965]: 2026-01-26 16:14:30.527 239969 DEBUG nova.compute.manager [req-fc1f5565-105a-4f12-a079-9839d95358b3 req-b5901642-f8b1-47c3-ab43-b9c0d3b9e88a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] No event matching network-vif-plugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c in dict_keys([('network-vif-plugged', '1883cc75-9bba-4da7-bb9d-e038a9254a10')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:14:30 compute-0 nova_compute[239965]: 2026-01-26 16:14:30.527 239969 WARNING nova.compute.manager [req-fc1f5565-105a-4f12-a079-9839d95358b3 req-b5901642-f8b1-47c3-ab43-b9c0d3b9e88a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received unexpected event network-vif-plugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c for instance with vm_state building and task_state spawning.
Jan 26 16:14:30 compute-0 sudo[336624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:14:30 compute-0 sudo[336624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:14:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:14:30 compute-0 podman[336661]: 2026-01-26 16:14:30.803751513 +0000 UTC m=+0.036818850 container create 86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_robinson, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:14:30 compute-0 systemd[1]: Started libpod-conmon-86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177.scope.
Jan 26 16:14:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:14:30 compute-0 podman[336661]: 2026-01-26 16:14:30.880239602 +0000 UTC m=+0.113306959 container init 86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_robinson, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:14:30 compute-0 podman[336661]: 2026-01-26 16:14:30.787521297 +0000 UTC m=+0.020588654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:14:30 compute-0 podman[336661]: 2026-01-26 16:14:30.890623585 +0000 UTC m=+0.123690922 container start 86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_robinson, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:14:30 compute-0 podman[336661]: 2026-01-26 16:14:30.894372227 +0000 UTC m=+0.127439574 container attach 86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 16:14:30 compute-0 great_robinson[336677]: 167 167
Jan 26 16:14:30 compute-0 systemd[1]: libpod-86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177.scope: Deactivated successfully.
Jan 26 16:14:30 compute-0 conmon[336677]: conmon 86532c9e6b6c844347ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177.scope/container/memory.events
Jan 26 16:14:30 compute-0 podman[336661]: 2026-01-26 16:14:30.898184911 +0000 UTC m=+0.131252248 container died 86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_robinson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5de9c4190e4d4235a0fdfe81085d5d5525e8484fc6e744a19d5f1924563e3f84-merged.mount: Deactivated successfully.
Jan 26 16:14:30 compute-0 podman[336661]: 2026-01-26 16:14:30.935233945 +0000 UTC m=+0.168301272 container remove 86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:14:30 compute-0 systemd[1]: libpod-conmon-86532c9e6b6c844347cabc653fb8723fcb96ff1c3cf21ee0963ee142b8a07177.scope: Deactivated successfully.
Jan 26 16:14:31 compute-0 podman[336700]: 2026-01-26 16:14:31.109265658 +0000 UTC m=+0.047749489 container create cfc8c98039fe8f4028d96098f1e30cf0d10d0ba1c7e6f35e3a253cb40523f5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:14:31 compute-0 systemd[1]: Started libpod-conmon-cfc8c98039fe8f4028d96098f1e30cf0d10d0ba1c7e6f35e3a253cb40523f5e8.scope.
Jan 26 16:14:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cb90e1cebdd7673d633cdc81c85bc0839c1ccedfa2f3ed5b4b13e714b96f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:31 compute-0 podman[336700]: 2026-01-26 16:14:31.09055149 +0000 UTC m=+0.029035351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cb90e1cebdd7673d633cdc81c85bc0839c1ccedfa2f3ed5b4b13e714b96f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cb90e1cebdd7673d633cdc81c85bc0839c1ccedfa2f3ed5b4b13e714b96f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cb90e1cebdd7673d633cdc81c85bc0839c1ccedfa2f3ed5b4b13e714b96f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/40cb90e1cebdd7673d633cdc81c85bc0839c1ccedfa2f3ed5b4b13e714b96f64/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:31 compute-0 podman[336700]: 2026-01-26 16:14:31.199162084 +0000 UTC m=+0.137645945 container init cfc8c98039fe8f4028d96098f1e30cf0d10d0ba1c7e6f35e3a253cb40523f5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:14:31 compute-0 podman[336700]: 2026-01-26 16:14:31.209019034 +0000 UTC m=+0.147502865 container start cfc8c98039fe8f4028d96098f1e30cf0d10d0ba1c7e6f35e3a253cb40523f5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:14:31 compute-0 podman[336700]: 2026-01-26 16:14:31.212793106 +0000 UTC m=+0.151276947 container attach cfc8c98039fe8f4028d96098f1e30cf0d10d0ba1c7e6f35e3a253cb40523f5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:14:31 compute-0 nova_compute[239965]: 2026-01-26 16:14:31.327 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:14:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:14:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:14:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:14:31 compute-0 gracious_cray[336717]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:14:31 compute-0 gracious_cray[336717]: --> All data devices are unavailable
Jan 26 16:14:31 compute-0 systemd[1]: libpod-cfc8c98039fe8f4028d96098f1e30cf0d10d0ba1c7e6f35e3a253cb40523f5e8.scope: Deactivated successfully.
Jan 26 16:14:31 compute-0 podman[336700]: 2026-01-26 16:14:31.67956917 +0000 UTC m=+0.618053011 container died cfc8c98039fe8f4028d96098f1e30cf0d10d0ba1c7e6f35e3a253cb40523f5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:14:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-40cb90e1cebdd7673d633cdc81c85bc0839c1ccedfa2f3ed5b4b13e714b96f64-merged.mount: Deactivated successfully.
Jan 26 16:14:31 compute-0 podman[336700]: 2026-01-26 16:14:31.718636085 +0000 UTC m=+0.657119916 container remove cfc8c98039fe8f4028d96098f1e30cf0d10d0ba1c7e6f35e3a253cb40523f5e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_cray, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:14:31 compute-0 systemd[1]: libpod-conmon-cfc8c98039fe8f4028d96098f1e30cf0d10d0ba1c7e6f35e3a253cb40523f5e8.scope: Deactivated successfully.
Jan 26 16:14:31 compute-0 sudo[336624]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:31 compute-0 sudo[336748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:14:31 compute-0 sudo[336748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:14:31 compute-0 sudo[336748]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:31 compute-0 sudo[336773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:14:31 compute-0 sudo[336773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:14:32 compute-0 nova_compute[239965]: 2026-01-26 16:14:32.067 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:32 compute-0 podman[336808]: 2026-01-26 16:14:32.169660363 +0000 UTC m=+0.043887923 container create 84afe73af436f2c7fbc365ea8be6f1c4834197698a047d9f5ec49be43510e474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_varahamihira, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:14:32 compute-0 systemd[1]: Started libpod-conmon-84afe73af436f2c7fbc365ea8be6f1c4834197698a047d9f5ec49be43510e474.scope.
Jan 26 16:14:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:14:32 compute-0 podman[336808]: 2026-01-26 16:14:32.152594536 +0000 UTC m=+0.026822116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:14:32 compute-0 podman[336808]: 2026-01-26 16:14:32.2567333 +0000 UTC m=+0.130960880 container init 84afe73af436f2c7fbc365ea8be6f1c4834197698a047d9f5ec49be43510e474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_varahamihira, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:14:32 compute-0 podman[336808]: 2026-01-26 16:14:32.262390548 +0000 UTC m=+0.136618108 container start 84afe73af436f2c7fbc365ea8be6f1c4834197698a047d9f5ec49be43510e474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:14:32 compute-0 podman[336808]: 2026-01-26 16:14:32.26533441 +0000 UTC m=+0.139561960 container attach 84afe73af436f2c7fbc365ea8be6f1c4834197698a047d9f5ec49be43510e474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:14:32 compute-0 objective_varahamihira[336824]: 167 167
Jan 26 16:14:32 compute-0 systemd[1]: libpod-84afe73af436f2c7fbc365ea8be6f1c4834197698a047d9f5ec49be43510e474.scope: Deactivated successfully.
Jan 26 16:14:32 compute-0 podman[336808]: 2026-01-26 16:14:32.269084052 +0000 UTC m=+0.143311642 container died 84afe73af436f2c7fbc365ea8be6f1c4834197698a047d9f5ec49be43510e474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:14:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 2 op/s
Jan 26 16:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a42b6c22a65b06540e7144c740463f04c64874b7f672aa380ee00030728d05b2-merged.mount: Deactivated successfully.
Jan 26 16:14:32 compute-0 podman[336808]: 2026-01-26 16:14:32.314639294 +0000 UTC m=+0.188866844 container remove 84afe73af436f2c7fbc365ea8be6f1c4834197698a047d9f5ec49be43510e474 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_varahamihira, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:14:32 compute-0 systemd[1]: libpod-conmon-84afe73af436f2c7fbc365ea8be6f1c4834197698a047d9f5ec49be43510e474.scope: Deactivated successfully.
Jan 26 16:14:32 compute-0 ceph-mon[75140]: pgmap v1912: 305 pgs: 305 active+clean; 134 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 2 op/s
Jan 26 16:14:32 compute-0 podman[336849]: 2026-01-26 16:14:32.481517952 +0000 UTC m=+0.035621672 container create 7d5ea01c44183dbef2653270807b56bcae6ca579b592253c9a06dbf066a065e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:14:32 compute-0 systemd[1]: Started libpod-conmon-7d5ea01c44183dbef2653270807b56bcae6ca579b592253c9a06dbf066a065e4.scope.
Jan 26 16:14:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716621fbbfdfa1b06879de7e3140bbcb0d74f15f436c1cc6434ab58c5dd700e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716621fbbfdfa1b06879de7e3140bbcb0d74f15f436c1cc6434ab58c5dd700e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716621fbbfdfa1b06879de7e3140bbcb0d74f15f436c1cc6434ab58c5dd700e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716621fbbfdfa1b06879de7e3140bbcb0d74f15f436c1cc6434ab58c5dd700e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:32 compute-0 podman[336849]: 2026-01-26 16:14:32.467310514 +0000 UTC m=+0.021414264 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:14:32 compute-0 podman[336849]: 2026-01-26 16:14:32.564246073 +0000 UTC m=+0.118349803 container init 7d5ea01c44183dbef2653270807b56bcae6ca579b592253c9a06dbf066a065e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:14:32 compute-0 podman[336849]: 2026-01-26 16:14:32.571772727 +0000 UTC m=+0.125876447 container start 7d5ea01c44183dbef2653270807b56bcae6ca579b592253c9a06dbf066a065e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:14:32 compute-0 podman[336849]: 2026-01-26 16:14:32.575551359 +0000 UTC m=+0.129655109 container attach 7d5ea01c44183dbef2653270807b56bcae6ca579b592253c9a06dbf066a065e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]: {
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:     "0": [
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:         {
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "devices": [
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "/dev/loop3"
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             ],
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_name": "ceph_lv0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_size": "21470642176",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "name": "ceph_lv0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "tags": {
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.cluster_name": "ceph",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.crush_device_class": "",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.encrypted": "0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.objectstore": "bluestore",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.osd_id": "0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.type": "block",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.vdo": "0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.with_tpm": "0"
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             },
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "type": "block",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "vg_name": "ceph_vg0"
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:         }
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:     ],
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:     "1": [
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:         {
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "devices": [
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "/dev/loop4"
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             ],
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_name": "ceph_lv1",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_size": "21470642176",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "name": "ceph_lv1",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "tags": {
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.cluster_name": "ceph",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.crush_device_class": "",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.encrypted": "0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.objectstore": "bluestore",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.osd_id": "1",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.type": "block",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.vdo": "0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.with_tpm": "0"
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             },
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "type": "block",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "vg_name": "ceph_vg1"
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:         }
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:     ],
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:     "2": [
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:         {
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "devices": [
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "/dev/loop5"
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             ],
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_name": "ceph_lv2",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_size": "21470642176",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "name": "ceph_lv2",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "tags": {
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.cluster_name": "ceph",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.crush_device_class": "",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.encrypted": "0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.objectstore": "bluestore",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.osd_id": "2",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.type": "block",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.vdo": "0",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:                 "ceph.with_tpm": "0"
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             },
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "type": "block",
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:             "vg_name": "ceph_vg2"
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:         }
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]:     ]
Jan 26 16:14:32 compute-0 heuristic_babbage[336866]: }
Jan 26 16:14:32 compute-0 systemd[1]: libpod-7d5ea01c44183dbef2653270807b56bcae6ca579b592253c9a06dbf066a065e4.scope: Deactivated successfully.
Jan 26 16:14:32 compute-0 podman[336849]: 2026-01-26 16:14:32.873530549 +0000 UTC m=+0.427634289 container died 7d5ea01c44183dbef2653270807b56bcae6ca579b592253c9a06dbf066a065e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:14:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-716621fbbfdfa1b06879de7e3140bbcb0d74f15f436c1cc6434ab58c5dd700e8-merged.mount: Deactivated successfully.
Jan 26 16:14:32 compute-0 podman[336849]: 2026-01-26 16:14:32.920149218 +0000 UTC m=+0.474252958 container remove 7d5ea01c44183dbef2653270807b56bcae6ca579b592253c9a06dbf066a065e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_babbage, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 26 16:14:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:32 compute-0 systemd[1]: libpod-conmon-7d5ea01c44183dbef2653270807b56bcae6ca579b592253c9a06dbf066a065e4.scope: Deactivated successfully.
Jan 26 16:14:32 compute-0 sudo[336773]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:33 compute-0 sudo[336886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:14:33 compute-0 sudo[336886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:14:33 compute-0 sudo[336886]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:33 compute-0 sudo[336911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:14:33 compute-0 sudo[336911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.191 239969 DEBUG nova.compute.manager [req-a6620ff4-2b54-443d-973b-617cdc6e0f28 req-9418803e-8892-4d22-97d5-18382fb35ea0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-plugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.192 239969 DEBUG oslo_concurrency.lockutils [req-a6620ff4-2b54-443d-973b-617cdc6e0f28 req-9418803e-8892-4d22-97d5-18382fb35ea0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.192 239969 DEBUG oslo_concurrency.lockutils [req-a6620ff4-2b54-443d-973b-617cdc6e0f28 req-9418803e-8892-4d22-97d5-18382fb35ea0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.193 239969 DEBUG oslo_concurrency.lockutils [req-a6620ff4-2b54-443d-973b-617cdc6e0f28 req-9418803e-8892-4d22-97d5-18382fb35ea0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.193 239969 DEBUG nova.compute.manager [req-a6620ff4-2b54-443d-973b-617cdc6e0f28 req-9418803e-8892-4d22-97d5-18382fb35ea0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Processing event network-vif-plugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.194 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Instance event wait completed in 5 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.198 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444073.1981158, ad5cd088-59a3-4894-8741-f0daa1fb4ff9 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.198 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] VM Resumed (Lifecycle Event)
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.200 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.203 239969 INFO nova.virt.libvirt.driver [-] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Instance spawned successfully.
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.204 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.230 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.235 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.236 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.237 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.237 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.238 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.239 239969 DEBUG nova.virt.libvirt.driver [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.242 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.274 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:14:33 compute-0 podman[336948]: 2026-01-26 16:14:33.398501644 +0000 UTC m=+0.036149694 container create 805c40a6f48e7c2fe5e4621a74831af279eee6dc82737fe8ad8390f3958a4711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_snyder, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 16:14:33 compute-0 systemd[1]: Started libpod-conmon-805c40a6f48e7c2fe5e4621a74831af279eee6dc82737fe8ad8390f3958a4711.scope.
Jan 26 16:14:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:14:33 compute-0 podman[336948]: 2026-01-26 16:14:33.477433752 +0000 UTC m=+0.115081832 container init 805c40a6f48e7c2fe5e4621a74831af279eee6dc82737fe8ad8390f3958a4711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_snyder, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:14:33 compute-0 podman[336948]: 2026-01-26 16:14:33.383960278 +0000 UTC m=+0.021608348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:14:33 compute-0 podman[336948]: 2026-01-26 16:14:33.484532266 +0000 UTC m=+0.122180316 container start 805c40a6f48e7c2fe5e4621a74831af279eee6dc82737fe8ad8390f3958a4711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:14:33 compute-0 podman[336948]: 2026-01-26 16:14:33.487912138 +0000 UTC m=+0.125560208 container attach 805c40a6f48e7c2fe5e4621a74831af279eee6dc82737fe8ad8390f3958a4711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:14:33 compute-0 nice_snyder[336964]: 167 167
Jan 26 16:14:33 compute-0 systemd[1]: libpod-805c40a6f48e7c2fe5e4621a74831af279eee6dc82737fe8ad8390f3958a4711.scope: Deactivated successfully.
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.517 239969 INFO nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Took 18.92 seconds to spawn the instance on the hypervisor.
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.519 239969 DEBUG nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:14:33 compute-0 podman[336969]: 2026-01-26 16:14:33.527838194 +0000 UTC m=+0.024351266 container died 805c40a6f48e7c2fe5e4621a74831af279eee6dc82737fe8ad8390f3958a4711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_snyder, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:14:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4071fc14460b7e94e00e72abe483056926e14557c717ee9592b11e456189b09f-merged.mount: Deactivated successfully.
Jan 26 16:14:33 compute-0 podman[336969]: 2026-01-26 16:14:33.566674633 +0000 UTC m=+0.063187685 container remove 805c40a6f48e7c2fe5e4621a74831af279eee6dc82737fe8ad8390f3958a4711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:14:33 compute-0 systemd[1]: libpod-conmon-805c40a6f48e7c2fe5e4621a74831af279eee6dc82737fe8ad8390f3958a4711.scope: Deactivated successfully.
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.583 239969 INFO nova.compute.manager [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Took 20.12 seconds to build instance.
Jan 26 16:14:33 compute-0 nova_compute[239965]: 2026-01-26 16:14:33.602 239969 DEBUG oslo_concurrency.lockutils [None req-87b65faa-9d38-4fd3-b004-fb6677b72ec5 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:33 compute-0 podman[336991]: 2026-01-26 16:14:33.73727654 +0000 UTC m=+0.040316765 container create 96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:14:33 compute-0 systemd[1]: Started libpod-conmon-96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90.scope.
Jan 26 16:14:33 compute-0 podman[336991]: 2026-01-26 16:14:33.718900991 +0000 UTC m=+0.021941246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:14:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c45d3a1bd7d356faab614d28f4dc3d25a0658d98f110bb66280692cb9ecdd2eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c45d3a1bd7d356faab614d28f4dc3d25a0658d98f110bb66280692cb9ecdd2eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c45d3a1bd7d356faab614d28f4dc3d25a0658d98f110bb66280692cb9ecdd2eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c45d3a1bd7d356faab614d28f4dc3d25a0658d98f110bb66280692cb9ecdd2eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:14:33 compute-0 podman[336991]: 2026-01-26 16:14:33.844747155 +0000 UTC m=+0.147787430 container init 96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 16:14:33 compute-0 podman[336991]: 2026-01-26 16:14:33.853211542 +0000 UTC m=+0.156251777 container start 96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 16:14:33 compute-0 podman[336991]: 2026-01-26 16:14:33.858659815 +0000 UTC m=+0.161700050 container attach 96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:14:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 26 16:14:34 compute-0 lvm[337084]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:14:34 compute-0 lvm[337084]: VG ceph_vg0 finished
Jan 26 16:14:34 compute-0 lvm[337087]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:14:34 compute-0 lvm[337087]: VG ceph_vg1 finished
Jan 26 16:14:34 compute-0 lvm[337089]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:14:34 compute-0 lvm[337089]: VG ceph_vg2 finished
Jan 26 16:14:34 compute-0 lvm[337090]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:14:34 compute-0 lvm[337090]: VG ceph_vg0 finished
Jan 26 16:14:34 compute-0 sad_cartwright[337007]: {}
Jan 26 16:14:34 compute-0 systemd[1]: libpod-96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90.scope: Deactivated successfully.
Jan 26 16:14:34 compute-0 systemd[1]: libpod-96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90.scope: Consumed 1.336s CPU time.
Jan 26 16:14:34 compute-0 podman[336991]: 2026-01-26 16:14:34.730054734 +0000 UTC m=+1.033094989 container died 96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:14:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-c45d3a1bd7d356faab614d28f4dc3d25a0658d98f110bb66280692cb9ecdd2eb-merged.mount: Deactivated successfully.
Jan 26 16:14:34 compute-0 podman[336991]: 2026-01-26 16:14:34.769786375 +0000 UTC m=+1.072826610 container remove 96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_cartwright, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Jan 26 16:14:34 compute-0 systemd[1]: libpod-conmon-96ebc8a74edda2ca5a41fec5fbd5552bda0060ca61408a2c35fbb62a72be4f90.scope: Deactivated successfully.
Jan 26 16:14:34 compute-0 sudo[336911]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:14:34 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:14:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:14:34 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:14:34 compute-0 sudo[337106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:14:34 compute-0 sudo[337106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:14:34 compute-0 sudo[337106]: pam_unix(sudo:session): session closed for user root
Jan 26 16:14:35 compute-0 nova_compute[239965]: 2026-01-26 16:14:35.321 239969 DEBUG nova.compute.manager [req-d99d069d-e3fb-49f5-95fa-152c8e4d8ecd req-589614c4-1ae0-4fe9-9704-d98b313cd87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-plugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:14:35 compute-0 nova_compute[239965]: 2026-01-26 16:14:35.322 239969 DEBUG oslo_concurrency.lockutils [req-d99d069d-e3fb-49f5-95fa-152c8e4d8ecd req-589614c4-1ae0-4fe9-9704-d98b313cd87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:35 compute-0 nova_compute[239965]: 2026-01-26 16:14:35.322 239969 DEBUG oslo_concurrency.lockutils [req-d99d069d-e3fb-49f5-95fa-152c8e4d8ecd req-589614c4-1ae0-4fe9-9704-d98b313cd87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:35 compute-0 nova_compute[239965]: 2026-01-26 16:14:35.322 239969 DEBUG oslo_concurrency.lockutils [req-d99d069d-e3fb-49f5-95fa-152c8e4d8ecd req-589614c4-1ae0-4fe9-9704-d98b313cd87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:35 compute-0 nova_compute[239965]: 2026-01-26 16:14:35.323 239969 DEBUG nova.compute.manager [req-d99d069d-e3fb-49f5-95fa-152c8e4d8ecd req-589614c4-1ae0-4fe9-9704-d98b313cd87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] No waiting events found dispatching network-vif-plugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:14:35 compute-0 nova_compute[239965]: 2026-01-26 16:14:35.323 239969 WARNING nova.compute.manager [req-d99d069d-e3fb-49f5-95fa-152c8e4d8ecd req-589614c4-1ae0-4fe9-9704-d98b313cd87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received unexpected event network-vif-plugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 for instance with vm_state active and task_state None.
Jan 26 16:14:35 compute-0 ceph-mon[75140]: pgmap v1913: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Jan 26 16:14:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:14:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:14:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 12 KiB/s wr, 22 op/s
Jan 26 16:14:36 compute-0 nova_compute[239965]: 2026-01-26 16:14:36.331 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:37 compute-0 nova_compute[239965]: 2026-01-26 16:14:37.068 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:37 compute-0 ceph-mon[75140]: pgmap v1914: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 12 KiB/s wr, 22 op/s
Jan 26 16:14:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:14:38 compute-0 ceph-mon[75140]: pgmap v1915: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:14:40 compute-0 ovn_controller[146046]: 2026-01-26T16:14:40Z|01142|binding|INFO|Releasing lport 43b09bd8-2cf9-447b-ad81-d331160a2f5f from this chassis (sb_readonly=0)
Jan 26 16:14:40 compute-0 ovn_controller[146046]: 2026-01-26T16:14:40Z|01143|binding|INFO|Releasing lport 05f31558-fab2-41bc-9c60-661571b4fe12 from this chassis (sb_readonly=0)
Jan 26 16:14:40 compute-0 NetworkManager[48954]: <info>  [1769444080.2647] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/469)
Jan 26 16:14:40 compute-0 NetworkManager[48954]: <info>  [1769444080.2662] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/470)
Jan 26 16:14:40 compute-0 nova_compute[239965]: 2026-01-26 16:14:40.278 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:14:40 compute-0 ovn_controller[146046]: 2026-01-26T16:14:40Z|01144|binding|INFO|Releasing lport 43b09bd8-2cf9-447b-ad81-d331160a2f5f from this chassis (sb_readonly=0)
Jan 26 16:14:40 compute-0 ovn_controller[146046]: 2026-01-26T16:14:40Z|01145|binding|INFO|Releasing lport 05f31558-fab2-41bc-9c60-661571b4fe12 from this chassis (sb_readonly=0)
Jan 26 16:14:40 compute-0 nova_compute[239965]: 2026-01-26 16:14:40.296 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:40 compute-0 nova_compute[239965]: 2026-01-26 16:14:40.305 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:40 compute-0 nova_compute[239965]: 2026-01-26 16:14:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:41 compute-0 nova_compute[239965]: 2026-01-26 16:14:41.334 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:41 compute-0 ceph-mon[75140]: pgmap v1916: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:14:42 compute-0 nova_compute[239965]: 2026-01-26 16:14:42.100 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:14:42 compute-0 nova_compute[239965]: 2026-01-26 16:14:42.415 239969 DEBUG nova.compute.manager [req-91db8571-6986-4470-8a26-bfcf50f34ebe req-c8b2b3e6-3c75-4aaf-b2b9-a5c91ce293b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-changed-1883cc75-9bba-4da7-bb9d-e038a9254a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:14:42 compute-0 nova_compute[239965]: 2026-01-26 16:14:42.415 239969 DEBUG nova.compute.manager [req-91db8571-6986-4470-8a26-bfcf50f34ebe req-c8b2b3e6-3c75-4aaf-b2b9-a5c91ce293b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Refreshing instance network info cache due to event network-changed-1883cc75-9bba-4da7-bb9d-e038a9254a10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:14:42 compute-0 nova_compute[239965]: 2026-01-26 16:14:42.416 239969 DEBUG oslo_concurrency.lockutils [req-91db8571-6986-4470-8a26-bfcf50f34ebe req-c8b2b3e6-3c75-4aaf-b2b9-a5c91ce293b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:14:42 compute-0 nova_compute[239965]: 2026-01-26 16:14:42.416 239969 DEBUG oslo_concurrency.lockutils [req-91db8571-6986-4470-8a26-bfcf50f34ebe req-c8b2b3e6-3c75-4aaf-b2b9-a5c91ce293b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:14:42 compute-0 nova_compute[239965]: 2026-01-26 16:14:42.416 239969 DEBUG nova.network.neutron [req-91db8571-6986-4470-8a26-bfcf50f34ebe req-c8b2b3e6-3c75-4aaf-b2b9-a5c91ce293b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Refreshing network info cache for port 1883cc75-9bba-4da7-bb9d-e038a9254a10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:14:42 compute-0 ceph-mon[75140]: pgmap v1917: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:14:42 compute-0 nova_compute[239965]: 2026-01-26 16:14:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 26 16:14:44 compute-0 nova_compute[239965]: 2026-01-26 16:14:44.908 239969 DEBUG nova.network.neutron [req-91db8571-6986-4470-8a26-bfcf50f34ebe req-c8b2b3e6-3c75-4aaf-b2b9-a5c91ce293b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updated VIF entry in instance network info cache for port 1883cc75-9bba-4da7-bb9d-e038a9254a10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:14:44 compute-0 nova_compute[239965]: 2026-01-26 16:14:44.908 239969 DEBUG nova.network.neutron [req-91db8571-6986-4470-8a26-bfcf50f34ebe req-c8b2b3e6-3c75-4aaf-b2b9-a5c91ce293b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updating instance_info_cache with network_info: [{"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:14:44 compute-0 nova_compute[239965]: 2026-01-26 16:14:44.933 239969 DEBUG oslo_concurrency.lockutils [req-91db8571-6986-4470-8a26-bfcf50f34ebe req-c8b2b3e6-3c75-4aaf-b2b9-a5c91ce293b0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:14:45 compute-0 ceph-mon[75140]: pgmap v1918: 305 pgs: 305 active+clean; 134 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Jan 26 16:14:45 compute-0 nova_compute[239965]: 2026-01-26 16:14:45.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 139 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 182 KiB/s wr, 67 op/s
Jan 26 16:14:46 compute-0 nova_compute[239965]: 2026-01-26 16:14:46.336 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:46 compute-0 ceph-mon[75140]: pgmap v1919: 305 pgs: 305 active+clean; 139 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 182 KiB/s wr, 67 op/s
Jan 26 16:14:46 compute-0 nova_compute[239965]: 2026-01-26 16:14:46.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:46 compute-0 nova_compute[239965]: 2026-01-26 16:14:46.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:14:46 compute-0 ovn_controller[146046]: 2026-01-26T16:14:46Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:11:5d:a9 10.100.0.10
Jan 26 16:14:46 compute-0 ovn_controller[146046]: 2026-01-26T16:14:46Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:11:5d:a9 10.100.0.10
Jan 26 16:14:47 compute-0 nova_compute[239965]: 2026-01-26 16:14:47.104 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 163 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Jan 26 16:14:48 compute-0 nova_compute[239965]: 2026-01-26 16:14:48.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:48 compute-0 nova_compute[239965]: 2026-01-26 16:14:48.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:48 compute-0 nova_compute[239965]: 2026-01-26 16:14:48.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:14:48 compute-0 nova_compute[239965]: 2026-01-26 16:14:48.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:14:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:14:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130504378' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:14:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:14:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2130504378' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:14:48 compute-0 nova_compute[239965]: 2026-01-26 16:14:48.774 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:14:48 compute-0 nova_compute[239965]: 2026-01-26 16:14:48.774 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:14:48 compute-0 nova_compute[239965]: 2026-01-26 16:14:48.774 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:14:48 compute-0 nova_compute[239965]: 2026-01-26 16:14:48.775 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ad5cd088-59a3-4894-8741-f0daa1fb4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007615966272748463 of space, bias 1.0, pg target 0.2284789881824539 quantized to 32 (current 32)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010183854999506218 of space, bias 1.0, pg target 0.30551564998518654 quantized to 32 (current 32)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.389004267604197e-07 of space, bias 4.0, pg target 0.0008866805121125036 quantized to 16 (current 16)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:14:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:14:49 compute-0 ceph-mon[75140]: pgmap v1920: 305 pgs: 305 active+clean; 163 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 112 op/s
Jan 26 16:14:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2130504378' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:14:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2130504378' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:14:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 163 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 16:14:50 compute-0 ceph-mon[75140]: pgmap v1921: 305 pgs: 305 active+clean; 163 MiB data, 797 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 16:14:51 compute-0 nova_compute[239965]: 2026-01-26 16:14:51.339 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.106 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.114 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updating instance_info_cache with network_info: [{"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.137 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.138 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.139 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:14:52 compute-0 podman[337133]: 2026-01-26 16:14:52.36295584 +0000 UTC m=+0.053168320 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:14:52 compute-0 podman[337134]: 2026-01-26 16:14:52.395881595 +0000 UTC m=+0.085746877 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.531 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.531 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.531 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.531 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:14:52 compute-0 nova_compute[239965]: 2026-01-26 16:14:52.532 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:14:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:14:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951291118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.091 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.172 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.173 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:14:53 compute-0 ceph-mon[75140]: pgmap v1922: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:14:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2951291118' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.349 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.350 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3550MB free_disk=59.94209083728492GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.350 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.351 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.608 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance ad5cd088-59a3-4894-8741-f0daa1fb4ff9 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.609 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.609 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:14:53 compute-0 nova_compute[239965]: 2026-01-26 16:14:53.758 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:14:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:14:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:14:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839473738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:14:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Jan 26 16:14:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Jan 26 16:14:54 compute-0 nova_compute[239965]: 2026-01-26 16:14:54.352 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:14:54 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Jan 26 16:14:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1839473738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:14:54 compute-0 nova_compute[239965]: 2026-01-26 16:14:54.358 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:14:54 compute-0 nova_compute[239965]: 2026-01-26 16:14:54.375 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:14:54 compute-0 nova_compute[239965]: 2026-01-26 16:14:54.398 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:14:54 compute-0 nova_compute[239965]: 2026-01-26 16:14:54.398 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:54 compute-0 nova_compute[239965]: 2026-01-26 16:14:54.399 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:54 compute-0 nova_compute[239965]: 2026-01-26 16:14:54.519 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:54 compute-0 nova_compute[239965]: 2026-01-26 16:14:54.520 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:14:54 compute-0 nova_compute[239965]: 2026-01-26 16:14:54.536 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:14:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Jan 26 16:14:55 compute-0 ceph-mon[75140]: pgmap v1923: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:14:55 compute-0 ceph-mon[75140]: osdmap e301: 3 total, 3 up, 3 in
Jan 26 16:14:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Jan 26 16:14:55 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Jan 26 16:14:55 compute-0 nova_compute[239965]: 2026-01-26 16:14:55.528 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:14:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:55.594 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:14:55 compute-0 nova_compute[239965]: 2026-01-26 16:14:55.595 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:55.596 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:14:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 91 KiB/s wr, 15 op/s
Jan 26 16:14:56 compute-0 nova_compute[239965]: 2026-01-26 16:14:56.341 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:56 compute-0 ceph-mon[75140]: osdmap e302: 3 total, 3 up, 3 in
Jan 26 16:14:56 compute-0 ceph-mon[75140]: pgmap v1926: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 9.1 KiB/s rd, 91 KiB/s wr, 15 op/s
Jan 26 16:14:57 compute-0 nova_compute[239965]: 2026-01-26 16:14:57.107 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:14:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:14:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 93 KiB/s wr, 54 op/s
Jan 26 16:14:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:59.237 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:14:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:59.238 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:14:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:14:59.238 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:14:59 compute-0 ceph-mon[75140]: pgmap v1927: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 93 KiB/s wr, 54 op/s
Jan 26 16:15:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 22 KiB/s wr, 50 op/s
Jan 26 16:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:15:00 compute-0 ceph-mon[75140]: pgmap v1928: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 22 KiB/s wr, 50 op/s
Jan 26 16:15:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:00.598 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:01 compute-0 nova_compute[239965]: 2026-01-26 16:15:01.344 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:01 compute-0 nova_compute[239965]: 2026-01-26 16:15:01.904 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:01 compute-0 nova_compute[239965]: 2026-01-26 16:15:01.904 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:01 compute-0 nova_compute[239965]: 2026-01-26 16:15:01.926 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.000 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.001 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.008 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.008 239969 INFO nova.compute.claims [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.109 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.133 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 22 KiB/s wr, 50 op/s
Jan 26 16:15:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:15:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4193682127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.689 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.696 239969 DEBUG nova.compute.provider_tree [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.719 239969 DEBUG nova.scheduler.client.report [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.751 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.752 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.804 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.804 239969 DEBUG nova.network.neutron [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.836 239969 INFO nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.857 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:15:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Jan 26 16:15:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Jan 26 16:15:02 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.968 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.969 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.969 239969 INFO nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Creating image(s)
Jan 26 16:15:02 compute-0 nova_compute[239965]: 2026-01-26 16:15:02.997 239969 DEBUG nova.storage.rbd_utils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.018 239969 DEBUG nova.storage.rbd_utils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.039 239969 DEBUG nova.storage.rbd_utils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.043 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.091 239969 DEBUG nova.policy [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.142 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.143 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.143 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.144 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.166 239969 DEBUG nova.storage.rbd_utils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.170 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:03 compute-0 ceph-mon[75140]: pgmap v1929: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 22 KiB/s wr, 50 op/s
Jan 26 16:15:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4193682127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:03 compute-0 ceph-mon[75140]: osdmap e303: 3 total, 3 up, 3 in
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.439 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.498 239969 DEBUG nova.storage.rbd_utils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.574 239969 DEBUG nova.objects.instance [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 46b779d5-01aa-4ab2-bbcf-cb9c344284bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.593 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.594 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Ensure instance console log exists: /var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.594 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.594 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.595 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:03 compute-0 nova_compute[239965]: 2026-01-26 16:15:03.818 239969 DEBUG nova.network.neutron [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Successfully created port: 8167513e-a387-4e59-ae09-285d12c97d90 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:15:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 20 KiB/s wr, 45 op/s
Jan 26 16:15:04 compute-0 ceph-mon[75140]: pgmap v1931: 305 pgs: 305 active+clean; 167 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 20 KiB/s wr, 45 op/s
Jan 26 16:15:04 compute-0 nova_compute[239965]: 2026-01-26 16:15:04.769 239969 DEBUG nova.network.neutron [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Successfully created port: 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:15:05 compute-0 sshd-session[337411]: Invalid user solana from 45.148.10.240 port 60050
Jan 26 16:15:05 compute-0 sshd-session[337411]: Connection closed by invalid user solana 45.148.10.240 port 60050 [preauth]
Jan 26 16:15:05 compute-0 nova_compute[239965]: 2026-01-26 16:15:05.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:05 compute-0 nova_compute[239965]: 2026-01-26 16:15:05.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:15:05 compute-0 nova_compute[239965]: 2026-01-26 16:15:05.522 239969 DEBUG nova.network.neutron [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Successfully updated port: 8167513e-a387-4e59-ae09-285d12c97d90 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:15:05 compute-0 nova_compute[239965]: 2026-01-26 16:15:05.679 239969 DEBUG nova.compute.manager [req-970bbf4b-d733-43d7-807f-0fcca61482aa req-1cc81661-34ec-4c88-9a10-4493b6a29a70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-changed-8167513e-a387-4e59-ae09-285d12c97d90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:05 compute-0 nova_compute[239965]: 2026-01-26 16:15:05.680 239969 DEBUG nova.compute.manager [req-970bbf4b-d733-43d7-807f-0fcca61482aa req-1cc81661-34ec-4c88-9a10-4493b6a29a70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Refreshing instance network info cache due to event network-changed-8167513e-a387-4e59-ae09-285d12c97d90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:15:05 compute-0 nova_compute[239965]: 2026-01-26 16:15:05.680 239969 DEBUG oslo_concurrency.lockutils [req-970bbf4b-d733-43d7-807f-0fcca61482aa req-1cc81661-34ec-4c88-9a10-4493b6a29a70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:15:05 compute-0 nova_compute[239965]: 2026-01-26 16:15:05.680 239969 DEBUG oslo_concurrency.lockutils [req-970bbf4b-d733-43d7-807f-0fcca61482aa req-1cc81661-34ec-4c88-9a10-4493b6a29a70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:15:05 compute-0 nova_compute[239965]: 2026-01-26 16:15:05.681 239969 DEBUG nova.network.neutron [req-970bbf4b-d733-43d7-807f-0fcca61482aa req-1cc81661-34ec-4c88-9a10-4493b6a29a70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Refreshing network info cache for port 8167513e-a387-4e59-ae09-285d12c97d90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:15:05 compute-0 nova_compute[239965]: 2026-01-26 16:15:05.889 239969 DEBUG nova.network.neutron [req-970bbf4b-d733-43d7-807f-0fcca61482aa req-1cc81661-34ec-4c88-9a10-4493b6a29a70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:15:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 175 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 365 KiB/s wr, 46 op/s
Jan 26 16:15:06 compute-0 nova_compute[239965]: 2026-01-26 16:15:06.321 239969 DEBUG nova.network.neutron [req-970bbf4b-d733-43d7-807f-0fcca61482aa req-1cc81661-34ec-4c88-9a10-4493b6a29a70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:15:06 compute-0 nova_compute[239965]: 2026-01-26 16:15:06.346 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:06 compute-0 nova_compute[239965]: 2026-01-26 16:15:06.413 239969 DEBUG nova.network.neutron [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Successfully updated port: 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:15:06 compute-0 nova_compute[239965]: 2026-01-26 16:15:06.428 239969 DEBUG oslo_concurrency.lockutils [req-970bbf4b-d733-43d7-807f-0fcca61482aa req-1cc81661-34ec-4c88-9a10-4493b6a29a70 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:15:06 compute-0 nova_compute[239965]: 2026-01-26 16:15:06.430 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:15:06 compute-0 nova_compute[239965]: 2026-01-26 16:15:06.430 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:15:06 compute-0 nova_compute[239965]: 2026-01-26 16:15:06.430 239969 DEBUG nova.network.neutron [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:15:06 compute-0 nova_compute[239965]: 2026-01-26 16:15:06.595 239969 DEBUG nova.network.neutron [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:15:07 compute-0 nova_compute[239965]: 2026-01-26 16:15:07.111 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:07 compute-0 ceph-mon[75140]: pgmap v1932: 305 pgs: 305 active+clean; 175 MiB data, 801 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 365 KiB/s wr, 46 op/s
Jan 26 16:15:07 compute-0 nova_compute[239965]: 2026-01-26 16:15:07.806 239969 DEBUG nova.compute.manager [req-c1517075-f3d8-4d0c-b406-e74a04999a28 req-105c6004-009c-4e86-ae48-cc5f5e95928f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-changed-91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:07 compute-0 nova_compute[239965]: 2026-01-26 16:15:07.807 239969 DEBUG nova.compute.manager [req-c1517075-f3d8-4d0c-b406-e74a04999a28 req-105c6004-009c-4e86-ae48-cc5f5e95928f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Refreshing instance network info cache due to event network-changed-91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:15:07 compute-0 nova_compute[239965]: 2026-01-26 16:15:07.807 239969 DEBUG oslo_concurrency.lockutils [req-c1517075-f3d8-4d0c-b406-e74a04999a28 req-105c6004-009c-4e86-ae48-cc5f5e95928f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:15:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 26 16:15:08 compute-0 ceph-mon[75140]: pgmap v1933: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.697 239969 DEBUG nova.network.neutron [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updating instance_info_cache with network_info: [{"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.719 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.720 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Instance network_info: |[{"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.721 239969 DEBUG oslo_concurrency.lockutils [req-c1517075-f3d8-4d0c-b406-e74a04999a28 req-105c6004-009c-4e86-ae48-cc5f5e95928f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.721 239969 DEBUG nova.network.neutron [req-c1517075-f3d8-4d0c-b406-e74a04999a28 req-105c6004-009c-4e86-ae48-cc5f5e95928f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Refreshing network info cache for port 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.726 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Start _get_guest_xml network_info=[{"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.731 239969 WARNING nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.746 239969 DEBUG nova.virt.libvirt.host [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.747 239969 DEBUG nova.virt.libvirt.host [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.753 239969 DEBUG nova.virt.libvirt.host [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.754 239969 DEBUG nova.virt.libvirt.host [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.754 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.754 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.755 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.755 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.755 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.755 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.756 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.756 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.756 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.756 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.756 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.757 239969 DEBUG nova.virt.hardware [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:15:08 compute-0 nova_compute[239965]: 2026-01-26 16:15:08.760 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:15:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2978792359' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.390 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.416 239969 DEBUG nova.storage.rbd_utils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.421 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2978792359' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:15:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:15:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2646441990' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.989 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.992 239969 DEBUG nova.virt.libvirt.vif [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:15:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-404697000',display_name='tempest-TestGettingAddress-server-404697000',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-404697000',id=112,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-50zm6634',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:15:02Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=46b779d5-01aa-4ab2-bbcf-cb9c344284bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.993 239969 DEBUG nova.network.os_vif_util [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.995 239969 DEBUG nova.network.os_vif_util [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=8167513e-a387-4e59-ae09-285d12c97d90,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8167513e-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.997 239969 DEBUG nova.virt.libvirt.vif [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:15:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-404697000',display_name='tempest-TestGettingAddress-server-404697000',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-404697000',id=112,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-50zm6634',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:15:02Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=46b779d5-01aa-4ab2-bbcf-cb9c344284bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.997 239969 DEBUG nova.network.os_vif_util [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:15:09 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.998 239969 DEBUG nova.network.os_vif_util [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:fd:ac,bridge_name='br-int',has_traffic_filtering=True,id=91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3e5dd-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:09.999 239969 DEBUG nova.objects.instance [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 46b779d5-01aa-4ab2-bbcf-cb9c344284bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.026 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <uuid>46b779d5-01aa-4ab2-bbcf-cb9c344284bf</uuid>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <name>instance-00000070</name>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-404697000</nova:name>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:15:08</nova:creationTime>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <nova:port uuid="8167513e-a387-4e59-ae09-285d12c97d90">
Jan 26 16:15:10 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <nova:port uuid="91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc">
Jan 26 16:15:10 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe37:fdac" ipVersion="6"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe37:fdac" ipVersion="6"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <system>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <entry name="serial">46b779d5-01aa-4ab2-bbcf-cb9c344284bf</entry>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <entry name="uuid">46b779d5-01aa-4ab2-bbcf-cb9c344284bf</entry>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </system>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <os>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   </os>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <features>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   </features>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk">
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       </source>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk.config">
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       </source>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:15:10 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:c1:a8:72"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <target dev="tap8167513e-a3"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:37:fd:ac"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <target dev="tap91e3e5dd-a1"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf/console.log" append="off"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <video>
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </video>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:15:10 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:15:10 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:15:10 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:15:10 compute-0 nova_compute[239965]: </domain>
Jan 26 16:15:10 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
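The domain XML that nova-compute logs above ends here; the disk targets and interface MACs it declares are what the later `_build_disk_metadata` / `_build_interface_metadata` lines refer back to. As an annotation only (not part of the log), a minimal sketch using Python's stdlib `ElementTree` to pull those values out of a trimmed reproduction of the logged XML — the `DOMAIN_XML` string below keeps only the elements needed for the illustration, not the full domain document:

```python
import xml.etree.ElementTree as ET

# Trimmed reproduction of the <devices> section logged above; values are
# copied from the log, but the document is cut down for illustration.
DOMAIN_XML = """
<domain>
  <devices>
    <disk type="network" device="disk">
      <source protocol="rbd" name="vms/46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk"/>
      <target dev="vda" bus="virtio"/>
    </disk>
    <disk type="network" device="cdrom">
      <source protocol="rbd" name="vms/46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk.config"/>
      <target dev="sda" bus="sata"/>
    </disk>
    <interface type="ethernet">
      <mac address="fa:16:3e:c1:a8:72"/>
      <target dev="tap8167513e-a3"/>
    </interface>
    <interface type="ethernet">
      <mac address="fa:16:3e:37:fd:ac"/>
      <target dev="tap91e3e5dd-a1"/>
    </interface>
  </devices>
</domain>
"""

def disk_targets(xml_text):
    """Return (device kind, guest target dev) for every <disk> element."""
    root = ET.fromstring(xml_text)
    return [(d.get("device"), d.find("target").get("dev"))
            for d in root.iter("disk")]

def vif_macs(xml_text):
    """Return the MAC address declared by every <interface> element."""
    root = ET.fromstring(xml_text)
    return [i.find("mac").get("address") for i in root.iter("interface")]
```

Running these helpers against the snippet yields the `vda`/`sda` targets and the two `fa:16:3e:…` MACs that reappear in the metadata-building debug lines further down.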
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.027 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Preparing to wait for external event network-vif-plugged-8167513e-a387-4e59-ae09-285d12c97d90 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.028 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.028 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.029 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.029 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Preparing to wait for external event network-vif-plugged-91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.030 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.030 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.031 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.032 239969 DEBUG nova.virt.libvirt.vif [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:15:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-404697000',display_name='tempest-TestGettingAddress-server-404697000',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-404697000',id=112,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-50zm6634',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:15:02Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=46b779d5-01aa-4ab2-bbcf-cb9c344284bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.032 239969 DEBUG nova.network.os_vif_util [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.033 239969 DEBUG nova.network.os_vif_util [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c1:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=8167513e-a387-4e59-ae09-285d12c97d90,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8167513e-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.034 239969 DEBUG os_vif [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=8167513e-a387-4e59-ae09-285d12c97d90,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8167513e-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.035 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.035 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.036 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.040 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.040 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8167513e-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.041 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8167513e-a3, col_values=(('external_ids', {'iface-id': '8167513e-a387-4e59-ae09-285d12c97d90', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c1:a8:72', 'vm-uuid': '46b779d5-01aa-4ab2-bbcf-cb9c344284bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.043 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:10 compute-0 NetworkManager[48954]: <info>  [1769444110.0438] manager: (tap8167513e-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/471)
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.045 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.050 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.051 239969 INFO os_vif [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c1:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=8167513e-a387-4e59-ae09-285d12c97d90,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8167513e-a3')
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.054 239969 DEBUG nova.virt.libvirt.vif [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:15:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-404697000',display_name='tempest-TestGettingAddress-server-404697000',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-404697000',id=112,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-50zm6634',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:15:02Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=46b779d5-01aa-4ab2-bbcf-cb9c344284bf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.054 239969 DEBUG nova.network.os_vif_util [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.056 239969 DEBUG nova.network.os_vif_util [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:fd:ac,bridge_name='br-int',has_traffic_filtering=True,id=91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3e5dd-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.057 239969 DEBUG os_vif [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:fd:ac,bridge_name='br-int',has_traffic_filtering=True,id=91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3e5dd-a1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.058 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.059 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.060 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.063 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.063 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap91e3e5dd-a1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.064 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap91e3e5dd-a1, col_values=(('external_ids', {'iface-id': '91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:fd:ac', 'vm-uuid': '46b779d5-01aa-4ab2-bbcf-cb9c344284bf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:10 compute-0 NetworkManager[48954]: <info>  [1769444110.0666] manager: (tap91e3e5dd-a1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.069 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.073 239969 INFO os_vif [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:fd:ac,bridge_name='br-int',has_traffic_filtering=True,id=91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3e5dd-a1')
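Both VIFs are now plugged: for each tap device, os-vif ran an `AddPortCommand` on `br-int` followed by a `DbSetCommand` writing `external_ids` onto the Interface record, which is how OVN (the bound driver in the VIF details) learns which Neutron port the interface carries. As an annotation, a sketch of that mapping with values taken from the log — the helper names are illustrative, not Nova's or os-vif's actual API:

```python
# Sketch of the external_ids written by the DbSetCommand transactions above
# (illustrative helpers; values copied from the log lines).
def ovs_external_ids(port_id, mac, vm_uuid):
    return {
        "iface-id": port_id,        # Neutron port UUID; OVN binds on this
        "iface-status": "active",
        "attached-mac": mac,
        "vm-uuid": vm_uuid,
    }

def ovs_vsctl_set_cmd(tap, ids):
    """Render the roughly equivalent `ovs-vsctl set Interface` invocation."""
    kv = " ".join(f"external_ids:{k}={v}" for k, v in sorted(ids.items()))
    return f"ovs-vsctl set Interface {tap} {kv}"
```

For the first port this produces an `ovs-vsctl set Interface tap8167513e-a3 …` command matching the column values shown in the `DbSetCommand` debug line.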
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.120 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.121 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.121 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:c1:a8:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.121 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:37:fd:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.122 239969 INFO nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Using config drive
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.141 239969 DEBUG nova.storage.rbd_utils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:15:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 26 16:15:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2646441990' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:15:10 compute-0 ceph-mon[75140]: pgmap v1934: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.584 239969 INFO nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Creating config drive at /var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf/disk.config
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.588 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsbx9ldx1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.734 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsbx9ldx1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.757 239969 DEBUG nova.storage.rbd_utils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.761 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf/disk.config 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.801 239969 DEBUG nova.network.neutron [req-c1517075-f3d8-4d0c-b406-e74a04999a28 req-105c6004-009c-4e86-ae48-cc5f5e95928f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updated VIF entry in instance network info cache for port 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.803 239969 DEBUG nova.network.neutron [req-c1517075-f3d8-4d0c-b406-e74a04999a28 req-105c6004-009c-4e86-ae48-cc5f5e95928f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updating instance_info_cache with network_info: [{"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.825 239969 DEBUG oslo_concurrency.lockutils [req-c1517075-f3d8-4d0c-b406-e74a04999a28 req-105c6004-009c-4e86-ae48-cc5f5e95928f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.904 239969 DEBUG oslo_concurrency.processutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf/disk.config 46b779d5-01aa-4ab2-bbcf-cb9c344284bf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:10 compute-0 nova_compute[239965]: 2026-01-26 16:15:10.905 239969 INFO nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Deleting local config drive /var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf/disk.config because it was imported into RBD.
Jan 26 16:15:10 compute-0 NetworkManager[48954]: <info>  [1769444110.9534] manager: (tap8167513e-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Jan 26 16:15:10 compute-0 kernel: tap8167513e-a3: entered promiscuous mode
Jan 26 16:15:11 compute-0 systemd-udevd[337548]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:15:11 compute-0 ovn_controller[146046]: 2026-01-26T16:15:11Z|01146|binding|INFO|Claiming lport 8167513e-a387-4e59-ae09-285d12c97d90 for this chassis.
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.008 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:11 compute-0 ovn_controller[146046]: 2026-01-26T16:15:11Z|01147|binding|INFO|8167513e-a387-4e59-ae09-285d12c97d90: Claiming fa:16:3e:c1:a8:72 10.100.0.4
Jan 26 16:15:11 compute-0 NetworkManager[48954]: <info>  [1769444111.0147] manager: (tap91e3e5dd-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/474)
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.015 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:a8:72 10.100.0.4'], port_security=['fa:16:3e:c1:a8:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '46b779d5-01aa-4ab2-bbcf-cb9c344284bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bd0b480-793e-4d46-83dd-5e27335992d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9ccd144-fbed-4376-90b2-5da9708cb290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c38b8b6-821e-4ed4-9284-993e44902e19, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=8167513e-a387-4e59-ae09-285d12c97d90) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:11 compute-0 systemd-udevd[337552]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.017 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 8167513e-a387-4e59-ae09-285d12c97d90 in datapath 7bd0b480-793e-4d46-83dd-5e27335992d9 bound to our chassis
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.018 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bd0b480-793e-4d46-83dd-5e27335992d9
Jan 26 16:15:11 compute-0 kernel: tap91e3e5dd-a1: entered promiscuous mode
Jan 26 16:15:11 compute-0 ovn_controller[146046]: 2026-01-26T16:15:11Z|01148|binding|INFO|Setting lport 8167513e-a387-4e59-ae09-285d12c97d90 ovn-installed in OVS
Jan 26 16:15:11 compute-0 ovn_controller[146046]: 2026-01-26T16:15:11Z|01149|binding|INFO|Setting lport 8167513e-a387-4e59-ae09-285d12c97d90 up in Southbound
Jan 26 16:15:11 compute-0 NetworkManager[48954]: <info>  [1769444111.0259] device (tap8167513e-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.026 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:11 compute-0 NetworkManager[48954]: <info>  [1769444111.0272] device (tap8167513e-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:15:11 compute-0 ovn_controller[146046]: 2026-01-26T16:15:11Z|01150|if_status|INFO|Not updating pb chassis for 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc now as sb is readonly
Jan 26 16:15:11 compute-0 ovn_controller[146046]: 2026-01-26T16:15:11Z|01151|binding|INFO|Claiming lport 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc for this chassis.
Jan 26 16:15:11 compute-0 ovn_controller[146046]: 2026-01-26T16:15:11Z|01152|binding|INFO|91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc: Claiming fa:16:3e:37:fd:ac 2001:db8:0:1:f816:3eff:fe37:fdac 2001:db8::f816:3eff:fe37:fdac
Jan 26 16:15:11 compute-0 NetworkManager[48954]: <info>  [1769444111.0322] device (tap91e3e5dd-a1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:15:11 compute-0 NetworkManager[48954]: <info>  [1769444111.0336] device (tap91e3e5dd-a1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.036 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:fd:ac 2001:db8:0:1:f816:3eff:fe37:fdac 2001:db8::f816:3eff:fe37:fdac'], port_security=['fa:16:3e:37:fd:ac 2001:db8:0:1:f816:3eff:fe37:fdac 2001:db8::f816:3eff:fe37:fdac'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe37:fdac/64 2001:db8::f816:3eff:fe37:fdac/64', 'neutron:device_id': '46b779d5-01aa-4ab2-bbcf-cb9c344284bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef5475f2-541c-4651-b877-e2e4325b6344', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a9ccd144-fbed-4376-90b2-5da9708cb290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dbc549b-a211-495a-b7d4-5846cf98bef7, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.039 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ac5a979a-d475-488e-940d-3f0697a82e5b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_controller[146046]: 2026-01-26T16:15:11Z|01153|binding|INFO|Setting lport 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc ovn-installed in OVS
Jan 26 16:15:11 compute-0 ovn_controller[146046]: 2026-01-26T16:15:11Z|01154|binding|INFO|Setting lport 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc up in Southbound
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.041 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:11 compute-0 systemd-machined[208061]: New machine qemu-139-instance-00000070.
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.066 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[63696b19-6fc8-4ece-800c-c08c9400c46f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 systemd[1]: Started Virtual Machine qemu-139-instance-00000070.
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.068 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[58534ea1-137a-4941-9da9-8d67fea62043]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.098 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7127d744-6591-47f3-ad0d-c7b81d92c07c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.116 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0559e8b9-bdff-4bbd-a227-ccc35a05cbba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bd0b480-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:07:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555852, 'reachable_time': 30920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337566, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.131 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7005d6b6-1a80-4cb0-b5bf-350e1e5e3a89]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7bd0b480-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555862, 'tstamp': 555862}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337570, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7bd0b480-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555866, 'tstamp': 555866}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337570, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.132 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bd0b480-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.134 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.135 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.135 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bd0b480-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.135 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.135 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bd0b480-70, col_values=(('external_ids', {'iface-id': '43b09bd8-2cf9-447b-ad81-d331160a2f5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.136 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.137 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc in datapath ef5475f2-541c-4651-b877-e2e4325b6344 unbound from our chassis
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.138 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef5475f2-541c-4651-b877-e2e4325b6344
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.153 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6b8e66be-1fa4-4f73-ba62-d93b9890687b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.179 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6156512e-4b31-409d-b157-f0f6ae873f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.183 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6f3508-8d1f-4c65-94ed-cd9578913275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.211 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[31ad23ac-419c-4789-aa08-02c39d97036b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.219 239969 DEBUG nova.compute.manager [req-4e775687-e224-4c41-9808-0f64fe91a951 req-cc3c32a3-0048-4e5c-bcae-e3a3a4e73752 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-vif-plugged-8167513e-a387-4e59-ae09-285d12c97d90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.220 239969 DEBUG oslo_concurrency.lockutils [req-4e775687-e224-4c41-9808-0f64fe91a951 req-cc3c32a3-0048-4e5c-bcae-e3a3a4e73752 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.220 239969 DEBUG oslo_concurrency.lockutils [req-4e775687-e224-4c41-9808-0f64fe91a951 req-cc3c32a3-0048-4e5c-bcae-e3a3a4e73752 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.221 239969 DEBUG oslo_concurrency.lockutils [req-4e775687-e224-4c41-9808-0f64fe91a951 req-cc3c32a3-0048-4e5c-bcae-e3a3a4e73752 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.221 239969 DEBUG nova.compute.manager [req-4e775687-e224-4c41-9808-0f64fe91a951 req-cc3c32a3-0048-4e5c-bcae-e3a3a4e73752 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Processing event network-vif-plugged-8167513e-a387-4e59-ae09-285d12c97d90 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.227 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8bd730b7-a782-4d8d-9d98-c6b4f1d1bfab]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef5475f2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:1e:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 4, 'rx_bytes': 1846, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555934, 'reachable_time': 43394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 21, 'inoctets': 1552, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 21, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1552, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 21, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 337577, 'error': None, 'target': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.247 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[89c87d79-5431-42a7-9ee6-65b3d636bc4d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef5475f2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555949, 'tstamp': 555949}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 337578, 'error': None, 'target': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.249 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef5475f2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.251 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.252 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.252 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef5475f2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.252 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.253 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef5475f2-50, col_values=(('external_ids', {'iface-id': '05f31558-fab2-41bc-9c60-661571b4fe12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:11.253 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.516 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444111.5162005, 46b779d5-01aa-4ab2-bbcf-cb9c344284bf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.517 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] VM Started (Lifecycle Event)
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.544 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.548 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444111.517253, 46b779d5-01aa-4ab2-bbcf-cb9c344284bf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.549 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] VM Paused (Lifecycle Event)
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.572 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.578 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:15:11 compute-0 nova_compute[239965]: 2026-01-26 16:15:11.599 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:15:12 compute-0 nova_compute[239965]: 2026-01-26 16:15:12.116 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Jan 26 16:15:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.313 239969 DEBUG nova.compute.manager [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-vif-plugged-8167513e-a387-4e59-ae09-285d12c97d90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.313 239969 DEBUG oslo_concurrency.lockutils [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.313 239969 DEBUG oslo_concurrency.lockutils [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.314 239969 DEBUG oslo_concurrency.lockutils [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.314 239969 DEBUG nova.compute.manager [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] No event matching network-vif-plugged-8167513e-a387-4e59-ae09-285d12c97d90 in dict_keys([('network-vif-plugged', '91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.314 239969 WARNING nova.compute.manager [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received unexpected event network-vif-plugged-8167513e-a387-4e59-ae09-285d12c97d90 for instance with vm_state building and task_state spawning.
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.314 239969 DEBUG nova.compute.manager [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-vif-plugged-91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.315 239969 DEBUG oslo_concurrency.lockutils [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.315 239969 DEBUG oslo_concurrency.lockutils [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.315 239969 DEBUG oslo_concurrency.lockutils [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.315 239969 DEBUG nova.compute.manager [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Processing event network-vif-plugged-91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.316 239969 DEBUG nova.compute.manager [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-vif-plugged-91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.316 239969 DEBUG oslo_concurrency.lockutils [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.316 239969 DEBUG oslo_concurrency.lockutils [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.316 239969 DEBUG oslo_concurrency.lockutils [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.317 239969 DEBUG nova.compute.manager [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] No waiting events found dispatching network-vif-plugged-91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.317 239969 WARNING nova.compute.manager [req-af5dfe2d-452a-4160-8be6-3941341917cd req-2b75d280-e8fd-4ad7-87ad-c5a4e6fe4225 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received unexpected event network-vif-plugged-91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc for instance with vm_state building and task_state spawning.
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.318 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.321 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444113.3216834, 46b779d5-01aa-4ab2-bbcf-cb9c344284bf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.322 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] VM Resumed (Lifecycle Event)
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.323 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.327 239969 INFO nova.virt.libvirt.driver [-] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Instance spawned successfully.
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.327 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.352 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.359 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.365 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.365 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.366 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.366 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.367 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.368 239969 DEBUG nova.virt.libvirt.driver [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:15:13 compute-0 ceph-mon[75140]: pgmap v1935: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.1 MiB/s wr, 35 op/s
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.398 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.433 239969 INFO nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Took 10.46 seconds to spawn the instance on the hypervisor.
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.433 239969 DEBUG nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.487 239969 INFO nova.compute.manager [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Took 11.51 seconds to build instance.
Jan 26 16:15:13 compute-0 nova_compute[239965]: 2026-01-26 16:15:13.505 239969 DEBUG oslo_concurrency.lockutils [None req-e13c42ed-bb91-47f5-98e7-88ebaa7c15ca 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Jan 26 16:15:14 compute-0 ceph-mon[75140]: pgmap v1936: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 MiB/s wr, 36 op/s
Jan 26 16:15:15 compute-0 nova_compute[239965]: 2026-01-26 16:15:15.069 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 26 16:15:17 compute-0 nova_compute[239965]: 2026-01-26 16:15:17.117 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:17 compute-0 ceph-mon[75140]: pgmap v1937: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 1.8 MiB/s wr, 47 op/s
Jan 26 16:15:17 compute-0 nova_compute[239965]: 2026-01-26 16:15:17.858 239969 DEBUG nova.compute.manager [req-f4a0c45a-deee-4163-8708-93e0ed15adf9 req-c3b2d77e-3259-49a5-b48a-f14451599616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-changed-8167513e-a387-4e59-ae09-285d12c97d90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:17 compute-0 nova_compute[239965]: 2026-01-26 16:15:17.859 239969 DEBUG nova.compute.manager [req-f4a0c45a-deee-4163-8708-93e0ed15adf9 req-c3b2d77e-3259-49a5-b48a-f14451599616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Refreshing instance network info cache due to event network-changed-8167513e-a387-4e59-ae09-285d12c97d90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:15:17 compute-0 nova_compute[239965]: 2026-01-26 16:15:17.860 239969 DEBUG oslo_concurrency.lockutils [req-f4a0c45a-deee-4163-8708-93e0ed15adf9 req-c3b2d77e-3259-49a5-b48a-f14451599616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:15:17 compute-0 nova_compute[239965]: 2026-01-26 16:15:17.860 239969 DEBUG oslo_concurrency.lockutils [req-f4a0c45a-deee-4163-8708-93e0ed15adf9 req-c3b2d77e-3259-49a5-b48a-f14451599616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:15:17 compute-0 nova_compute[239965]: 2026-01-26 16:15:17.861 239969 DEBUG nova.network.neutron [req-f4a0c45a-deee-4163-8708-93e0ed15adf9 req-c3b2d77e-3259-49a5-b48a-f14451599616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Refreshing network info cache for port 8167513e-a387-4e59-ae09-285d12c97d90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:15:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 89 op/s
Jan 26 16:15:18 compute-0 ceph-mon[75140]: pgmap v1938: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 89 op/s
Jan 26 16:15:19 compute-0 nova_compute[239965]: 2026-01-26 16:15:19.148 239969 DEBUG nova.network.neutron [req-f4a0c45a-deee-4163-8708-93e0ed15adf9 req-c3b2d77e-3259-49a5-b48a-f14451599616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updated VIF entry in instance network info cache for port 8167513e-a387-4e59-ae09-285d12c97d90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:15:19 compute-0 nova_compute[239965]: 2026-01-26 16:15:19.149 239969 DEBUG nova.network.neutron [req-f4a0c45a-deee-4163-8708-93e0ed15adf9 req-c3b2d77e-3259-49a5-b48a-f14451599616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updating instance_info_cache with network_info: [{"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:15:19 compute-0 nova_compute[239965]: 2026-01-26 16:15:19.180 239969 DEBUG oslo_concurrency.lockutils [req-f4a0c45a-deee-4163-8708-93e0ed15adf9 req-c3b2d77e-3259-49a5-b48a-f14451599616 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:15:20 compute-0 nova_compute[239965]: 2026-01-26 16:15:20.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Jan 26 16:15:21 compute-0 ceph-mon[75140]: pgmap v1939: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Jan 26 16:15:22 compute-0 nova_compute[239965]: 2026-01-26 16:15:22.119 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 26 16:15:22 compute-0 ceph-mon[75140]: pgmap v1940: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Jan 26 16:15:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:23 compute-0 podman[337622]: 2026-01-26 16:15:23.376985792 +0000 UTC m=+0.060733186 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:15:23 compute-0 podman[337623]: 2026-01-26 16:15:23.410722066 +0000 UTC m=+0.094468500 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:15:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 72 op/s
Jan 26 16:15:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:25.067 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2 2001:db8::f816:3eff:fed3:c060'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b148aee5-31c9-4b40-aded-fc877d3044d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e2c04fa-ed19-4e90-8fe2-e33d24918520) old=Port_Binding(mac=['fa:16:3e:d3:c0:60 2001:db8::f816:3eff:fed3:c060'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:25.069 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e2c04fa-ed19-4e90-8fe2-e33d24918520 in datapath 50ca1210-9784-41a5-b102-be1704bf4f24 updated
Jan 26 16:15:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:25.070 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ca1210-9784-41a5-b102-be1704bf4f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:25.073 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[984af171-3c4c-4d14-b2cd-3512f9034dfd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:25 compute-0 nova_compute[239965]: 2026-01-26 16:15:25.076 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:25 compute-0 ceph-mon[75140]: pgmap v1941: 305 pgs: 305 active+clean; 213 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 KiB/s wr, 72 op/s
Jan 26 16:15:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 221 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 821 KiB/s wr, 72 op/s
Jan 26 16:15:26 compute-0 ceph-mon[75140]: pgmap v1942: 305 pgs: 305 active+clean; 221 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 821 KiB/s wr, 72 op/s
Jan 26 16:15:26 compute-0 ovn_controller[146046]: 2026-01-26T16:15:26Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c1:a8:72 10.100.0.4
Jan 26 16:15:26 compute-0 ovn_controller[146046]: 2026-01-26T16:15:26Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c1:a8:72 10.100.0.4
Jan 26 16:15:27 compute-0 nova_compute[239965]: 2026-01-26 16:15:27.121 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 245 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Jan 26 16:15:28 compute-0 ceph-mon[75140]: pgmap v1943: 305 pgs: 305 active+clean; 245 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Jan 26 16:15:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:28.639 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:c0:60 2001:db8::f816:3eff:fed3:c060'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b148aee5-31c9-4b40-aded-fc877d3044d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e2c04fa-ed19-4e90-8fe2-e33d24918520) old=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2 2001:db8::f816:3eff:fed3:c060'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:28.642 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e2c04fa-ed19-4e90-8fe2-e33d24918520 in datapath 50ca1210-9784-41a5-b102-be1704bf4f24 updated
Jan 26 16:15:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:28.643 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ca1210-9784-41a5-b102-be1704bf4f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:28.644 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[abe87a2e-aead-4c51-a55a-196337aac397]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:15:28
Jan 26 16:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', '.mgr', 'volumes', 'vms']
Jan 26 16:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:15:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:29.594 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2 2001:db8::f816:3eff:fed3:c060'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b148aee5-31c9-4b40-aded-fc877d3044d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e2c04fa-ed19-4e90-8fe2-e33d24918520) old=Port_Binding(mac=['fa:16:3e:d3:c0:60 2001:db8::f816:3eff:fed3:c060'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:29.596 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e2c04fa-ed19-4e90-8fe2-e33d24918520 in datapath 50ca1210-9784-41a5-b102-be1704bf4f24 updated
Jan 26 16:15:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:29.597 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ca1210-9784-41a5-b102-be1704bf4f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:29.598 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c8dbfa13-fc01-4b03-a55a-574345f41867]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:30 compute-0 nova_compute[239965]: 2026-01-26 16:15:30.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 245 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:15:30 compute-0 ceph-mon[75140]: pgmap v1944: 305 pgs: 305 active+clean; 245 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:15:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:15:32 compute-0 nova_compute[239965]: 2026-01-26 16:15:32.124 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:15:32 compute-0 ceph-mon[75140]: pgmap v1945: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:15:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:15:34 compute-0 ceph-mon[75140]: pgmap v1946: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:15:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:34.531 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b148aee5-31c9-4b40-aded-fc877d3044d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e2c04fa-ed19-4e90-8fe2-e33d24918520) old=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2 2001:db8::f816:3eff:fed3:c060'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:34.532 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e2c04fa-ed19-4e90-8fe2-e33d24918520 in datapath 50ca1210-9784-41a5-b102-be1704bf4f24 updated
Jan 26 16:15:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:34.534 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ca1210-9784-41a5-b102-be1704bf4f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:34.534 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[afac883b-1c9b-4f08-ac38-c9616462d9ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:34 compute-0 sudo[337665]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:15:34 compute-0 sudo[337665]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:34 compute-0 sudo[337665]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:35 compute-0 sudo[337690]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 16:15:35 compute-0 sudo[337690]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:35 compute-0 nova_compute[239965]: 2026-01-26 16:15:35.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:35.511 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2 2001:db8::f816:3eff:fed3:c060'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b148aee5-31c9-4b40-aded-fc877d3044d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e2c04fa-ed19-4e90-8fe2-e33d24918520) old=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:35.513 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e2c04fa-ed19-4e90-8fe2-e33d24918520 in datapath 50ca1210-9784-41a5-b102-be1704bf4f24 updated
Jan 26 16:15:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:35.514 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ca1210-9784-41a5-b102-be1704bf4f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:35.515 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e6e4af7b-be5d-4ac8-84fb-860bf8173d22]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:35 compute-0 podman[337759]: 2026-01-26 16:15:35.552808771 +0000 UTC m=+0.058554022 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:15:35 compute-0 podman[337759]: 2026-01-26 16:15:35.656358131 +0000 UTC m=+0.162103332 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:15:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.369320) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444136369367, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1770, "num_deletes": 252, "total_data_size": 2991611, "memory_usage": 3044576, "flush_reason": "Manual Compaction"}
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444136391080, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2925518, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38696, "largest_seqno": 40465, "table_properties": {"data_size": 2917270, "index_size": 5062, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 16832, "raw_average_key_size": 20, "raw_value_size": 2900852, "raw_average_value_size": 3478, "num_data_blocks": 224, "num_entries": 834, "num_filter_entries": 834, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769443948, "oldest_key_time": 1769443948, "file_creation_time": 1769444136, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 21894 microseconds, and 11768 cpu microseconds.
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.391216) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2925518 bytes OK
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.391276) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.393177) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.393193) EVENT_LOG_v1 {"time_micros": 1769444136393188, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.393210) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2984021, prev total WAL file size 2984021, number of live WAL files 2.
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.394324) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2856KB)], [86(8829KB)]
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444136394381, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11966842, "oldest_snapshot_seqno": -1}
Jan 26 16:15:36 compute-0 ceph-mon[75140]: pgmap v1947: 305 pgs: 305 active+clean; 246 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:15:36 compute-0 sudo[337690]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6659 keys, 10258138 bytes, temperature: kUnknown
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444136457251, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 10258138, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10211668, "index_size": 28694, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 169129, "raw_average_key_size": 25, "raw_value_size": 10090687, "raw_average_value_size": 1515, "num_data_blocks": 1150, "num_entries": 6659, "num_filter_entries": 6659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444136, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.457606) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 10258138 bytes
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.460088) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 190.0 rd, 162.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.6 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 7179, records dropped: 520 output_compression: NoCompression
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.460126) EVENT_LOG_v1 {"time_micros": 1769444136460108, "job": 50, "event": "compaction_finished", "compaction_time_micros": 62967, "compaction_time_cpu_micros": 23851, "output_level": 6, "num_output_files": 1, "total_output_size": 10258138, "num_input_records": 7179, "num_output_records": 6659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444136461741, "job": 50, "event": "table_file_deletion", "file_number": 88}
Jan 26 16:15:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444136465829, "job": 50, "event": "table_file_deletion", "file_number": 86}
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.394250) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.465907) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.466041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.466046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.466049) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:15:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:15:36.466052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:15:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:15:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:36 compute-0 sudo[337942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:15:36 compute-0 sudo[337942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:36 compute-0 sudo[337942]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:36 compute-0 sudo[337967]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:15:36 compute-0 sudo[337967]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.779 239969 DEBUG nova.compute.manager [req-6d5fb777-f7e7-4943-a5db-579995a9373c req-2967b4cb-7405-417f-8df8-15fd9cc17b6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-changed-8167513e-a387-4e59-ae09-285d12c97d90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.780 239969 DEBUG nova.compute.manager [req-6d5fb777-f7e7-4943-a5db-579995a9373c req-2967b4cb-7405-417f-8df8-15fd9cc17b6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Refreshing instance network info cache due to event network-changed-8167513e-a387-4e59-ae09-285d12c97d90. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.780 239969 DEBUG oslo_concurrency.lockutils [req-6d5fb777-f7e7-4943-a5db-579995a9373c req-2967b4cb-7405-417f-8df8-15fd9cc17b6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.780 239969 DEBUG oslo_concurrency.lockutils [req-6d5fb777-f7e7-4943-a5db-579995a9373c req-2967b4cb-7405-417f-8df8-15fd9cc17b6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.780 239969 DEBUG nova.network.neutron [req-6d5fb777-f7e7-4943-a5db-579995a9373c req-2967b4cb-7405-417f-8df8-15fd9cc17b6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Refreshing network info cache for port 8167513e-a387-4e59-ae09-285d12c97d90 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.873 239969 DEBUG oslo_concurrency.lockutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.873 239969 DEBUG oslo_concurrency.lockutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.874 239969 DEBUG oslo_concurrency.lockutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.874 239969 DEBUG oslo_concurrency.lockutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.875 239969 DEBUG oslo_concurrency.lockutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.876 239969 INFO nova.compute.manager [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Terminating instance
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.878 239969 DEBUG nova.compute.manager [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:15:36 compute-0 kernel: tap8167513e-a3 (unregistering): left promiscuous mode
Jan 26 16:15:36 compute-0 NetworkManager[48954]: <info>  [1769444136.9310] device (tap8167513e-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.936 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:36 compute-0 ovn_controller[146046]: 2026-01-26T16:15:36Z|01155|binding|INFO|Releasing lport 8167513e-a387-4e59-ae09-285d12c97d90 from this chassis (sb_readonly=0)
Jan 26 16:15:36 compute-0 ovn_controller[146046]: 2026-01-26T16:15:36Z|01156|binding|INFO|Setting lport 8167513e-a387-4e59-ae09-285d12c97d90 down in Southbound
Jan 26 16:15:36 compute-0 ovn_controller[146046]: 2026-01-26T16:15:36Z|01157|binding|INFO|Removing iface tap8167513e-a3 ovn-installed in OVS
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:36.946 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c1:a8:72 10.100.0.4'], port_security=['fa:16:3e:c1:a8:72 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '46b779d5-01aa-4ab2-bbcf-cb9c344284bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bd0b480-793e-4d46-83dd-5e27335992d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9ccd144-fbed-4376-90b2-5da9708cb290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c38b8b6-821e-4ed4-9284-993e44902e19, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=8167513e-a387-4e59-ae09-285d12c97d90) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:36.947 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 8167513e-a387-4e59-ae09-285d12c97d90 in datapath 7bd0b480-793e-4d46-83dd-5e27335992d9 unbound from our chassis
Jan 26 16:15:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:36.948 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7bd0b480-793e-4d46-83dd-5e27335992d9
Jan 26 16:15:36 compute-0 kernel: tap91e3e5dd-a1 (unregistering): left promiscuous mode
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.959 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:36 compute-0 NetworkManager[48954]: <info>  [1769444136.9624] device (tap91e3e5dd-a1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:15:36 compute-0 ovn_controller[146046]: 2026-01-26T16:15:36Z|01158|binding|INFO|Releasing lport 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc from this chassis (sb_readonly=0)
Jan 26 16:15:36 compute-0 ovn_controller[146046]: 2026-01-26T16:15:36Z|01159|binding|INFO|Setting lport 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc down in Southbound
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.970 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:36 compute-0 ovn_controller[146046]: 2026-01-26T16:15:36Z|01160|binding|INFO|Removing iface tap91e3e5dd-a1 ovn-installed in OVS
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.971 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:36.977 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:fd:ac 2001:db8:0:1:f816:3eff:fe37:fdac 2001:db8::f816:3eff:fe37:fdac'], port_security=['fa:16:3e:37:fd:ac 2001:db8:0:1:f816:3eff:fe37:fdac 2001:db8::f816:3eff:fe37:fdac'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe37:fdac/64 2001:db8::f816:3eff:fe37:fdac/64', 'neutron:device_id': '46b779d5-01aa-4ab2-bbcf-cb9c344284bf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef5475f2-541c-4651-b877-e2e4325b6344', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9ccd144-fbed-4376-90b2-5da9708cb290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dbc549b-a211-495a-b7d4-5846cf98bef7, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:36.978 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[db57677b-e4f4-49dd-966c-eb792374205d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:36 compute-0 nova_compute[239965]: 2026-01-26 16:15:36.987 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.018 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b2eb0c29-e09f-4f47-bada-7a9999599f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.021 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d07ae6bf-c5f3-4e33-ad13-1a00698d8136]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000070.scope: Deactivated successfully.
Jan 26 16:15:37 compute-0 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d00000070.scope: Consumed 14.249s CPU time.
Jan 26 16:15:37 compute-0 systemd-machined[208061]: Machine qemu-139-instance-00000070 terminated.
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.054 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1e7c348e-61bb-482e-b1a5-37dd0f4db627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.074 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9d1aa839-e586-40df-a937-ad8395afd359]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7bd0b480-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:07:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555852, 'reachable_time': 30920, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338026, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.092 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da20ed2c-8249-4671-8e1a-998e9358abfb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7bd0b480-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555862, 'tstamp': 555862}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338027, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7bd0b480-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555866, 'tstamp': 555866}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338027, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.094 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bd0b480-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.114 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:37 compute-0 NetworkManager[48954]: <info>  [1769444137.1265] manager: (tap91e3e5dd-a1): new Tun device (/org/freedesktop/NetworkManager/Devices/475)
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.133 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.136 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7bd0b480-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.136 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.137 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7bd0b480-70, col_values=(('external_ids', {'iface-id': '43b09bd8-2cf9-447b-ad81-d331160a2f5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.137 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.139 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc in datapath ef5475f2-541c-4651-b877-e2e4325b6344 unbound from our chassis
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.140 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ef5475f2-541c-4651-b877-e2e4325b6344
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.144 239969 INFO nova.virt.libvirt.driver [-] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Instance destroyed successfully.
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.145 239969 DEBUG nova.objects.instance [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 46b779d5-01aa-4ab2-bbcf-cb9c344284bf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.158 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e970930a-4c54-4e3f-a464-74620a62f05e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.166 239969 DEBUG nova.virt.libvirt.vif [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:15:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-404697000',display_name='tempest-TestGettingAddress-server-404697000',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-404697000',id=112,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-50zm6634',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:15:13Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=46b779d5-01aa-4ab2-bbcf-cb9c344284bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.167 239969 DEBUG nova.network.os_vif_util [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.168 239969 DEBUG nova.network.os_vif_util [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c1:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=8167513e-a387-4e59-ae09-285d12c97d90,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8167513e-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.168 239969 DEBUG os_vif [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=8167513e-a387-4e59-ae09-285d12c97d90,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8167513e-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.170 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.171 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8167513e-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.172 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.174 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.179 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.183 239969 INFO os_vif [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c1:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=8167513e-a387-4e59-ae09-285d12c97d90,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8167513e-a3')
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.183 239969 DEBUG nova.virt.libvirt.vif [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:15:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-404697000',display_name='tempest-TestGettingAddress-server-404697000',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-404697000',id=112,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-50zm6634',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:15:13Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=46b779d5-01aa-4ab2-bbcf-cb9c344284bf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.184 239969 DEBUG nova.network.os_vif_util [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.185 239969 DEBUG nova.network.os_vif_util [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:fd:ac,bridge_name='br-int',has_traffic_filtering=True,id=91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3e5dd-a1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.185 239969 DEBUG os_vif [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:fd:ac,bridge_name='br-int',has_traffic_filtering=True,id=91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3e5dd-a1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.186 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.186 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap91e3e5dd-a1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.188 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.187 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7489e9fd-a9a7-46fc-b87f-d6332049fd78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.190 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8cb69afc-d790-43f7-9274-d5e6717246de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.191 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.193 239969 INFO os_vif [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:fd:ac,bridge_name='br-int',has_traffic_filtering=True,id=91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap91e3e5dd-a1')
Jan 26 16:15:37 compute-0 sudo[337967]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.219 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9ff76d21-d0c1-465a-9059-192b8d6b17e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.235 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc67b39-f2c2-43e6-8932-7cc943972f2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapef5475f2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:1e:88'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 5, 'rx_bytes': 3160, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 332], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555934, 'reachable_time': 43394, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 36, 'inoctets': 2656, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 36, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2656, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 36, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338084, 'error': None, 'target': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.249 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11078b64-f570-4c11-ab1f-d76210bdd200]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapef5475f2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 555949, 'tstamp': 555949}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 338088, 'error': None, 'target': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.251 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef5475f2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.254 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef5475f2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.254 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.254 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.255 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapef5475f2-50, col_values=(('external_ids', {'iface-id': '05f31558-fab2-41bc-9c60-661571b4fe12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:37.255 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:15:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:15:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:15:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:15:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:15:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:15:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:15:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:15:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:15:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:15:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:15:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:15:37 compute-0 sudo[338089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:15:37 compute-0 sudo[338089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:37 compute-0 sudo[338089]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:37 compute-0 sudo[338115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:15:37 compute-0 sudo[338115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.448 239969 INFO nova.virt.libvirt.driver [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Deleting instance files /var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf_del
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.449 239969 INFO nova.virt.libvirt.driver [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Deletion of /var/lib/nova/instances/46b779d5-01aa-4ab2-bbcf-cb9c344284bf_del complete
Jan 26 16:15:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:15:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:15:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:15:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:15:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.511 239969 INFO nova.compute.manager [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Took 0.63 seconds to destroy the instance on the hypervisor.
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.512 239969 DEBUG oslo.service.loopingcall [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.512 239969 DEBUG nova.compute.manager [-] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:15:37 compute-0 nova_compute[239965]: 2026-01-26 16:15:37.512 239969 DEBUG nova.network.neutron [-] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:15:37 compute-0 podman[338151]: 2026-01-26 16:15:37.676314629 +0000 UTC m=+0.037867906 container create 4274e466f41e43005da14574ca33a306b8cb174dde80099b4c35faf5b7949cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:15:37 compute-0 systemd[1]: Started libpod-conmon-4274e466f41e43005da14574ca33a306b8cb174dde80099b4c35faf5b7949cc5.scope.
Jan 26 16:15:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:15:37 compute-0 podman[338151]: 2026-01-26 16:15:37.660578144 +0000 UTC m=+0.022131411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:15:37 compute-0 podman[338151]: 2026-01-26 16:15:37.760524126 +0000 UTC m=+0.122077403 container init 4274e466f41e43005da14574ca33a306b8cb174dde80099b4c35faf5b7949cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hypatia, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 16:15:37 compute-0 podman[338151]: 2026-01-26 16:15:37.770394477 +0000 UTC m=+0.131947734 container start 4274e466f41e43005da14574ca33a306b8cb174dde80099b4c35faf5b7949cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:15:37 compute-0 podman[338151]: 2026-01-26 16:15:37.774092787 +0000 UTC m=+0.135646044 container attach 4274e466f41e43005da14574ca33a306b8cb174dde80099b4c35faf5b7949cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hypatia, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:15:37 compute-0 clever_hypatia[338167]: 167 167
Jan 26 16:15:37 compute-0 systemd[1]: libpod-4274e466f41e43005da14574ca33a306b8cb174dde80099b4c35faf5b7949cc5.scope: Deactivated successfully.
Jan 26 16:15:37 compute-0 podman[338151]: 2026-01-26 16:15:37.776600668 +0000 UTC m=+0.138153925 container died 4274e466f41e43005da14574ca33a306b8cb174dde80099b4c35faf5b7949cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:15:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba1a261a214477fc27ca1baf0dfe95e1ddbf7e82f6e7ca43b80b7ffcd1d2f70b-merged.mount: Deactivated successfully.
Jan 26 16:15:37 compute-0 podman[338151]: 2026-01-26 16:15:37.815780546 +0000 UTC m=+0.177333803 container remove 4274e466f41e43005da14574ca33a306b8cb174dde80099b4c35faf5b7949cc5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 16:15:37 compute-0 systemd[1]: libpod-conmon-4274e466f41e43005da14574ca33a306b8cb174dde80099b4c35faf5b7949cc5.scope: Deactivated successfully.
Jan 26 16:15:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:37 compute-0 podman[338191]: 2026-01-26 16:15:37.977533158 +0000 UTC m=+0.041360792 container create ce7a5fc8c0685fe185cdab2aa6ad052ddddffd156e4775af795b8687ebe0d22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:15:38 compute-0 systemd[1]: Started libpod-conmon-ce7a5fc8c0685fe185cdab2aa6ad052ddddffd156e4775af795b8687ebe0d22f.scope.
Jan 26 16:15:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ceb83ee0c017741edde5bee8d36d9d68df8930df51d7a5a92e9de9be2a0ef9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ceb83ee0c017741edde5bee8d36d9d68df8930df51d7a5a92e9de9be2a0ef9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ceb83ee0c017741edde5bee8d36d9d68df8930df51d7a5a92e9de9be2a0ef9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ceb83ee0c017741edde5bee8d36d9d68df8930df51d7a5a92e9de9be2a0ef9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0ceb83ee0c017741edde5bee8d36d9d68df8930df51d7a5a92e9de9be2a0ef9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:38 compute-0 podman[338191]: 2026-01-26 16:15:37.957826766 +0000 UTC m=+0.021654400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:15:38 compute-0 podman[338191]: 2026-01-26 16:15:38.053854092 +0000 UTC m=+0.117681776 container init ce7a5fc8c0685fe185cdab2aa6ad052ddddffd156e4775af795b8687ebe0d22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:15:38 compute-0 podman[338191]: 2026-01-26 16:15:38.059288135 +0000 UTC m=+0.123115779 container start ce7a5fc8c0685fe185cdab2aa6ad052ddddffd156e4775af795b8687ebe0d22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:15:38 compute-0 podman[338191]: 2026-01-26 16:15:38.063383445 +0000 UTC m=+0.127211149 container attach ce7a5fc8c0685fe185cdab2aa6ad052ddddffd156e4775af795b8687ebe0d22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:15:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 228 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 1.3 MiB/s wr, 66 op/s
Jan 26 16:15:38 compute-0 nova_compute[239965]: 2026-01-26 16:15:38.361 239969 DEBUG nova.network.neutron [req-6d5fb777-f7e7-4943-a5db-579995a9373c req-2967b4cb-7405-417f-8df8-15fd9cc17b6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updated VIF entry in instance network info cache for port 8167513e-a387-4e59-ae09-285d12c97d90. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:15:38 compute-0 nova_compute[239965]: 2026-01-26 16:15:38.362 239969 DEBUG nova.network.neutron [req-6d5fb777-f7e7-4943-a5db-579995a9373c req-2967b4cb-7405-417f-8df8-15fd9cc17b6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updating instance_info_cache with network_info: [{"id": "8167513e-a387-4e59-ae09-285d12c97d90", "address": "fa:16:3e:c1:a8:72", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8167513e-a3", "ovs_interfaceid": "8167513e-a387-4e59-ae09-285d12c97d90", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:15:38 compute-0 nova_compute[239965]: 2026-01-26 16:15:38.387 239969 DEBUG oslo_concurrency.lockutils [req-6d5fb777-f7e7-4943-a5db-579995a9373c req-2967b4cb-7405-417f-8df8-15fd9cc17b6f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-46b779d5-01aa-4ab2-bbcf-cb9c344284bf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:15:38 compute-0 ceph-mon[75140]: pgmap v1948: 305 pgs: 305 active+clean; 228 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 1.3 MiB/s wr, 66 op/s
Jan 26 16:15:38 compute-0 agitated_ritchie[338208]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:15:38 compute-0 agitated_ritchie[338208]: --> All data devices are unavailable
Jan 26 16:15:38 compute-0 systemd[1]: libpod-ce7a5fc8c0685fe185cdab2aa6ad052ddddffd156e4775af795b8687ebe0d22f.scope: Deactivated successfully.
Jan 26 16:15:38 compute-0 podman[338191]: 2026-01-26 16:15:38.579215536 +0000 UTC m=+0.643043160 container died ce7a5fc8c0685fe185cdab2aa6ad052ddddffd156e4775af795b8687ebe0d22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 16:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a0ceb83ee0c017741edde5bee8d36d9d68df8930df51d7a5a92e9de9be2a0ef9-merged.mount: Deactivated successfully.
Jan 26 16:15:38 compute-0 podman[338191]: 2026-01-26 16:15:38.625315923 +0000 UTC m=+0.689143527 container remove ce7a5fc8c0685fe185cdab2aa6ad052ddddffd156e4775af795b8687ebe0d22f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:15:38 compute-0 systemd[1]: libpod-conmon-ce7a5fc8c0685fe185cdab2aa6ad052ddddffd156e4775af795b8687ebe0d22f.scope: Deactivated successfully.
Jan 26 16:15:38 compute-0 sudo[338115]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:38 compute-0 sudo[338240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:15:38 compute-0 sudo[338240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:38 compute-0 sudo[338240]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:38 compute-0 sudo[338265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:15:38 compute-0 nova_compute[239965]: 2026-01-26 16:15:38.775 239969 DEBUG nova.compute.manager [req-a71b9616-47e8-471a-a506-f34adf0080bb req-8e573d78-26f1-48ee-8440-cb9872976267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-vif-deleted-8167513e-a387-4e59-ae09-285d12c97d90 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:38 compute-0 nova_compute[239965]: 2026-01-26 16:15:38.776 239969 INFO nova.compute.manager [req-a71b9616-47e8-471a-a506-f34adf0080bb req-8e573d78-26f1-48ee-8440-cb9872976267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Neutron deleted interface 8167513e-a387-4e59-ae09-285d12c97d90; detaching it from the instance and deleting it from the info cache
Jan 26 16:15:38 compute-0 nova_compute[239965]: 2026-01-26 16:15:38.776 239969 DEBUG nova.network.neutron [req-a71b9616-47e8-471a-a506-f34adf0080bb req-8e573d78-26f1-48ee-8440-cb9872976267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updating instance_info_cache with network_info: [{"id": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "address": "fa:16:3e:37:fd:ac", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:fdac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap91e3e5dd-a1", "ovs_interfaceid": "91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:15:38 compute-0 sudo[338265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:38 compute-0 nova_compute[239965]: 2026-01-26 16:15:38.802 239969 DEBUG nova.compute.manager [req-a71b9616-47e8-471a-a506-f34adf0080bb req-8e573d78-26f1-48ee-8440-cb9872976267 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Detach interface failed, port_id=8167513e-a387-4e59-ae09-285d12c97d90, reason: Instance 46b779d5-01aa-4ab2-bbcf-cb9c344284bf could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:15:39 compute-0 podman[338301]: 2026-01-26 16:15:39.059202903 +0000 UTC m=+0.046883856 container create 06eaee1f56f341d9ff42e3d1b44e2118bf5010dd7aa5568f6df474eeb00f4fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:15:39 compute-0 systemd[1]: Started libpod-conmon-06eaee1f56f341d9ff42e3d1b44e2118bf5010dd7aa5568f6df474eeb00f4fd4.scope.
Jan 26 16:15:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:15:39 compute-0 podman[338301]: 2026-01-26 16:15:39.130457254 +0000 UTC m=+0.118138227 container init 06eaee1f56f341d9ff42e3d1b44e2118bf5010dd7aa5568f6df474eeb00f4fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lamarr, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 16:15:39 compute-0 podman[338301]: 2026-01-26 16:15:39.040424334 +0000 UTC m=+0.028105377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:15:39 compute-0 podman[338301]: 2026-01-26 16:15:39.138025689 +0000 UTC m=+0.125706632 container start 06eaee1f56f341d9ff42e3d1b44e2118bf5010dd7aa5568f6df474eeb00f4fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:15:39 compute-0 podman[338301]: 2026-01-26 16:15:39.141041733 +0000 UTC m=+0.128722706 container attach 06eaee1f56f341d9ff42e3d1b44e2118bf5010dd7aa5568f6df474eeb00f4fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lamarr, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:15:39 compute-0 hungry_lamarr[338318]: 167 167
Jan 26 16:15:39 compute-0 systemd[1]: libpod-06eaee1f56f341d9ff42e3d1b44e2118bf5010dd7aa5568f6df474eeb00f4fd4.scope: Deactivated successfully.
Jan 26 16:15:39 compute-0 podman[338301]: 2026-01-26 16:15:39.141869262 +0000 UTC m=+0.129550205 container died 06eaee1f56f341d9ff42e3d1b44e2118bf5010dd7aa5568f6df474eeb00f4fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lamarr, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c97284d7fb5e528d60aeb380517f47bf9118fb6f2b38bd62527e1c8d473ccb46-merged.mount: Deactivated successfully.
Jan 26 16:15:39 compute-0 podman[338301]: 2026-01-26 16:15:39.180456185 +0000 UTC m=+0.168137138 container remove 06eaee1f56f341d9ff42e3d1b44e2118bf5010dd7aa5568f6df474eeb00f4fd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:15:39 compute-0 systemd[1]: libpod-conmon-06eaee1f56f341d9ff42e3d1b44e2118bf5010dd7aa5568f6df474eeb00f4fd4.scope: Deactivated successfully.
Jan 26 16:15:39 compute-0 podman[338341]: 2026-01-26 16:15:39.397031567 +0000 UTC m=+0.062753615 container create 926131f90e1163697f838f38a36597915a74839f02a83c70b7f25500e0419f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:15:39 compute-0 systemd[1]: Started libpod-conmon-926131f90e1163697f838f38a36597915a74839f02a83c70b7f25500e0419f8a.scope.
Jan 26 16:15:39 compute-0 nova_compute[239965]: 2026-01-26 16:15:39.463 239969 DEBUG nova.network.neutron [-] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:15:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:15:39 compute-0 podman[338341]: 2026-01-26 16:15:39.376151477 +0000 UTC m=+0.041873565 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ef52823dff2b57f28fa7157a1a8c2c36888c7b285862afe680871d5d6749c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ef52823dff2b57f28fa7157a1a8c2c36888c7b285862afe680871d5d6749c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ef52823dff2b57f28fa7157a1a8c2c36888c7b285862afe680871d5d6749c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6a7ef52823dff2b57f28fa7157a1a8c2c36888c7b285862afe680871d5d6749c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:39 compute-0 nova_compute[239965]: 2026-01-26 16:15:39.478 239969 INFO nova.compute.manager [-] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Took 1.97 seconds to deallocate network for instance.
Jan 26 16:15:39 compute-0 podman[338341]: 2026-01-26 16:15:39.490294825 +0000 UTC m=+0.156016903 container init 926131f90e1163697f838f38a36597915a74839f02a83c70b7f25500e0419f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cannon, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:15:39 compute-0 podman[338341]: 2026-01-26 16:15:39.496826555 +0000 UTC m=+0.162548613 container start 926131f90e1163697f838f38a36597915a74839f02a83c70b7f25500e0419f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cannon, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:15:39 compute-0 podman[338341]: 2026-01-26 16:15:39.501306084 +0000 UTC m=+0.167028162 container attach 926131f90e1163697f838f38a36597915a74839f02a83c70b7f25500e0419f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cannon, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:15:39 compute-0 nova_compute[239965]: 2026-01-26 16:15:39.531 239969 DEBUG oslo_concurrency.lockutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:39 compute-0 nova_compute[239965]: 2026-01-26 16:15:39.531 239969 DEBUG oslo_concurrency.lockutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:39.596 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b148aee5-31c9-4b40-aded-fc877d3044d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e2c04fa-ed19-4e90-8fe2-e33d24918520) old=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2 2001:db8::f816:3eff:fed3:c060'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches _log_matched /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:39.599 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e2c04fa-ed19-4e90-8fe2-e33d24918520 in datapath 50ca1210-9784-41a5-b102-be1704bf4f24 updated
Jan 26 16:15:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:39.600 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ca1210-9784-41a5-b102-be1704bf4f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:39 compute-0 nova_compute[239965]: 2026-01-26 16:15:39.600 239969 DEBUG oslo_concurrency.processutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:39.601 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c87bff4e-8314-4643-bbce-0f0525d7969e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:39 compute-0 focused_cannon[338357]: {
Jan 26 16:15:39 compute-0 focused_cannon[338357]:     "0": [
Jan 26 16:15:39 compute-0 focused_cannon[338357]:         {
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "devices": [
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "/dev/loop3"
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             ],
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_name": "ceph_lv0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_size": "21470642176",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "name": "ceph_lv0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "tags": {
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.cluster_name": "ceph",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.crush_device_class": "",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.encrypted": "0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.objectstore": "bluestore",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.osd_id": "0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.type": "block",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.vdo": "0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.with_tpm": "0"
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             },
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "type": "block",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "vg_name": "ceph_vg0"
Jan 26 16:15:39 compute-0 focused_cannon[338357]:         }
Jan 26 16:15:39 compute-0 focused_cannon[338357]:     ],
Jan 26 16:15:39 compute-0 focused_cannon[338357]:     "1": [
Jan 26 16:15:39 compute-0 focused_cannon[338357]:         {
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "devices": [
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "/dev/loop4"
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             ],
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_name": "ceph_lv1",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_size": "21470642176",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "name": "ceph_lv1",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "tags": {
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.cluster_name": "ceph",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.crush_device_class": "",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.encrypted": "0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.objectstore": "bluestore",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.osd_id": "1",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.type": "block",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.vdo": "0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.with_tpm": "0"
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             },
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "type": "block",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "vg_name": "ceph_vg1"
Jan 26 16:15:39 compute-0 focused_cannon[338357]:         }
Jan 26 16:15:39 compute-0 focused_cannon[338357]:     ],
Jan 26 16:15:39 compute-0 focused_cannon[338357]:     "2": [
Jan 26 16:15:39 compute-0 focused_cannon[338357]:         {
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "devices": [
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "/dev/loop5"
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             ],
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_name": "ceph_lv2",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_size": "21470642176",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "name": "ceph_lv2",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "tags": {
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.cluster_name": "ceph",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.crush_device_class": "",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.encrypted": "0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.objectstore": "bluestore",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.osd_id": "2",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.type": "block",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.vdo": "0",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:                 "ceph.with_tpm": "0"
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             },
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "type": "block",
Jan 26 16:15:39 compute-0 focused_cannon[338357]:             "vg_name": "ceph_vg2"
Jan 26 16:15:39 compute-0 focused_cannon[338357]:         }
Jan 26 16:15:39 compute-0 focused_cannon[338357]:     ]
Jan 26 16:15:39 compute-0 focused_cannon[338357]: }
Jan 26 16:15:39 compute-0 systemd[1]: libpod-926131f90e1163697f838f38a36597915a74839f02a83c70b7f25500e0419f8a.scope: Deactivated successfully.
Jan 26 16:15:39 compute-0 podman[338341]: 2026-01-26 16:15:39.836337728 +0000 UTC m=+0.502059816 container died 926131f90e1163697f838f38a36597915a74839f02a83c70b7f25500e0419f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:15:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a7ef52823dff2b57f28fa7157a1a8c2c36888c7b285862afe680871d5d6749c-merged.mount: Deactivated successfully.
Jan 26 16:15:39 compute-0 podman[338341]: 2026-01-26 16:15:39.884134216 +0000 UTC m=+0.549856264 container remove 926131f90e1163697f838f38a36597915a74839f02a83c70b7f25500e0419f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_cannon, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:15:39 compute-0 systemd[1]: libpod-conmon-926131f90e1163697f838f38a36597915a74839f02a83c70b7f25500e0419f8a.scope: Deactivated successfully.
Jan 26 16:15:39 compute-0 sudo[338265]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:39 compute-0 sudo[338398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:15:39 compute-0 sudo[338398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:39 compute-0 sudo[338398]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:40 compute-0 sudo[338423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:15:40 compute-0 sudo[338423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:15:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3002949285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:40 compute-0 nova_compute[239965]: 2026-01-26 16:15:40.206 239969 DEBUG oslo_concurrency.processutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:40 compute-0 nova_compute[239965]: 2026-01-26 16:15:40.214 239969 DEBUG nova.compute.provider_tree [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:15:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3002949285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:40 compute-0 nova_compute[239965]: 2026-01-26 16:15:40.233 239969 DEBUG nova.scheduler.client.report [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:15:40 compute-0 nova_compute[239965]: 2026-01-26 16:15:40.255 239969 DEBUG oslo_concurrency.lockutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:40 compute-0 nova_compute[239965]: 2026-01-26 16:15:40.289 239969 INFO nova.scheduler.client.report [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 46b779d5-01aa-4ab2-bbcf-cb9c344284bf
Jan 26 16:15:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 228 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 8 op/s
Jan 26 16:15:40 compute-0 podman[338463]: 2026-01-26 16:15:40.328539463 +0000 UTC m=+0.042873118 container create 2f9450d12405576d9ef141978686b37d74d92f1c640f2f5b9314bf5d2a82d211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:15:40 compute-0 systemd[1]: Started libpod-conmon-2f9450d12405576d9ef141978686b37d74d92f1c640f2f5b9314bf5d2a82d211.scope.
Jan 26 16:15:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:40.369 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2 2001:db8::f816:3eff:fed3:c060'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b148aee5-31c9-4b40-aded-fc877d3044d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e2c04fa-ed19-4e90-8fe2-e33d24918520) old=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:40.370 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e2c04fa-ed19-4e90-8fe2-e33d24918520 in datapath 50ca1210-9784-41a5-b102-be1704bf4f24 updated
Jan 26 16:15:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:40.372 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ca1210-9784-41a5-b102-be1704bf4f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:40.373 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[84f2c27a-a9d6-4b2a-a648-e272eebd230e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:40 compute-0 nova_compute[239965]: 2026-01-26 16:15:40.380 239969 DEBUG oslo_concurrency.lockutils [None req-557bd600-3112-48c5-af80-4c4f489b829d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "46b779d5-01aa-4ab2-bbcf-cb9c344284bf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.506s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:15:40 compute-0 podman[338463]: 2026-01-26 16:15:40.400219034 +0000 UTC m=+0.114552699 container init 2f9450d12405576d9ef141978686b37d74d92f1c640f2f5b9314bf5d2a82d211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:15:40 compute-0 podman[338463]: 2026-01-26 16:15:40.407085382 +0000 UTC m=+0.121419027 container start 2f9450d12405576d9ef141978686b37d74d92f1c640f2f5b9314bf5d2a82d211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 16:15:40 compute-0 podman[338463]: 2026-01-26 16:15:40.312886511 +0000 UTC m=+0.027220176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:15:40 compute-0 podman[338463]: 2026-01-26 16:15:40.410545876 +0000 UTC m=+0.124879521 container attach 2f9450d12405576d9ef141978686b37d74d92f1c640f2f5b9314bf5d2a82d211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:15:40 compute-0 flamboyant_hofstadter[338479]: 167 167
Jan 26 16:15:40 compute-0 podman[338463]: 2026-01-26 16:15:40.411710315 +0000 UTC m=+0.126043960 container died 2f9450d12405576d9ef141978686b37d74d92f1c640f2f5b9314bf5d2a82d211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 16:15:40 compute-0 systemd[1]: libpod-2f9450d12405576d9ef141978686b37d74d92f1c640f2f5b9314bf5d2a82d211.scope: Deactivated successfully.
Jan 26 16:15:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-93636fce8ca4a7fa766814b42086b11a9a21f955111d9bb1e1d96bc9559040da-merged.mount: Deactivated successfully.
Jan 26 16:15:40 compute-0 podman[338463]: 2026-01-26 16:15:40.451388985 +0000 UTC m=+0.165722640 container remove 2f9450d12405576d9ef141978686b37d74d92f1c640f2f5b9314bf5d2a82d211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_hofstadter, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:15:40 compute-0 systemd[1]: libpod-conmon-2f9450d12405576d9ef141978686b37d74d92f1c640f2f5b9314bf5d2a82d211.scope: Deactivated successfully.
Jan 26 16:15:40 compute-0 podman[338502]: 2026-01-26 16:15:40.628586893 +0000 UTC m=+0.040321096 container create 47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:15:40 compute-0 systemd[1]: Started libpod-conmon-47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003.scope.
Jan 26 16:15:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741fd9be3a6fb28112014c09a823be6a3681ca9470b2b10770d17dc212a28186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741fd9be3a6fb28112014c09a823be6a3681ca9470b2b10770d17dc212a28186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741fd9be3a6fb28112014c09a823be6a3681ca9470b2b10770d17dc212a28186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/741fd9be3a6fb28112014c09a823be6a3681ca9470b2b10770d17dc212a28186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:15:40 compute-0 podman[338502]: 2026-01-26 16:15:40.701185337 +0000 UTC m=+0.112919540 container init 47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 16:15:40 compute-0 podman[338502]: 2026-01-26 16:15:40.611427984 +0000 UTC m=+0.023162207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:15:40 compute-0 podman[338502]: 2026-01-26 16:15:40.708635329 +0000 UTC m=+0.120369532 container start 47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wing, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:15:40 compute-0 podman[338502]: 2026-01-26 16:15:40.712658167 +0000 UTC m=+0.124392370 container attach 47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wing, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 16:15:40 compute-0 nova_compute[239965]: 2026-01-26 16:15:40.863 239969 DEBUG nova.compute.manager [req-43bfaac2-1fc0-4da2-abe5-abdd2ec9a7a8 req-4abab7be-e683-4a48-bd58-499ec35c24b3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Received event network-vif-deleted-91e3e5dd-a1b9-4c66-b0b4-08c7720a6ebc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:41 compute-0 ceph-mon[75140]: pgmap v1949: 305 pgs: 305 active+clean; 228 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 13 KiB/s wr, 8 op/s
Jan 26 16:15:41 compute-0 lvm[338597]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:15:41 compute-0 lvm[338597]: VG ceph_vg0 finished
Jan 26 16:15:41 compute-0 lvm[338598]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:15:41 compute-0 lvm[338598]: VG ceph_vg1 finished
Jan 26 16:15:41 compute-0 lvm[338600]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:15:41 compute-0 lvm[338600]: VG ceph_vg2 finished
Jan 26 16:15:41 compute-0 laughing_wing[338518]: {}
Jan 26 16:15:41 compute-0 systemd[1]: libpod-47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003.scope: Deactivated successfully.
Jan 26 16:15:41 compute-0 systemd[1]: libpod-47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003.scope: Consumed 1.273s CPU time.
Jan 26 16:15:41 compute-0 podman[338502]: 2026-01-26 16:15:41.50934429 +0000 UTC m=+0.921078583 container died 47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.525 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-741fd9be3a6fb28112014c09a823be6a3681ca9470b2b10770d17dc212a28186-merged.mount: Deactivated successfully.
Jan 26 16:15:41 compute-0 podman[338502]: 2026-01-26 16:15:41.564297803 +0000 UTC m=+0.976031996 container remove 47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_wing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:15:41 compute-0 systemd[1]: libpod-conmon-47a8ffdec444bf0659bd218b2d5f628f5f433180c59818727031b5fb9c65d003.scope: Deactivated successfully.
Jan 26 16:15:41 compute-0 sudo[338423]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:15:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:15:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:41 compute-0 sudo[338616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:15:41 compute-0 sudo[338616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:15:41 compute-0 sudo[338616]: pam_unix(sudo:session): session closed for user root
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.773 239969 DEBUG oslo_concurrency.lockutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.774 239969 DEBUG oslo_concurrency.lockutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.775 239969 DEBUG oslo_concurrency.lockutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.775 239969 DEBUG oslo_concurrency.lockutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.776 239969 DEBUG oslo_concurrency.lockutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.776 239969 INFO nova.compute.manager [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Terminating instance
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.777 239969 DEBUG nova.compute.manager [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:15:41 compute-0 kernel: tap1883cc75-9b (unregistering): left promiscuous mode
Jan 26 16:15:41 compute-0 NetworkManager[48954]: <info>  [1769444141.8390] device (tap1883cc75-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.840 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:41 compute-0 ovn_controller[146046]: 2026-01-26T16:15:41Z|01161|binding|INFO|Releasing lport 1883cc75-9bba-4da7-bb9d-e038a9254a10 from this chassis (sb_readonly=0)
Jan 26 16:15:41 compute-0 ovn_controller[146046]: 2026-01-26T16:15:41Z|01162|binding|INFO|Setting lport 1883cc75-9bba-4da7-bb9d-e038a9254a10 down in Southbound
Jan 26 16:15:41 compute-0 ovn_controller[146046]: 2026-01-26T16:15:41Z|01163|binding|INFO|Removing iface tap1883cc75-9b ovn-installed in OVS
Jan 26 16:15:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:41.847 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:11:5d:a9 10.100.0.10'], port_security=['fa:16:3e:11:5d:a9 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'ad5cd088-59a3-4894-8741-f0daa1fb4ff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7bd0b480-793e-4d46-83dd-5e27335992d9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9ccd144-fbed-4376-90b2-5da9708cb290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7c38b8b6-821e-4ed4-9284-993e44902e19, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1883cc75-9bba-4da7-bb9d-e038a9254a10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:41.849 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1883cc75-9bba-4da7-bb9d-e038a9254a10 in datapath 7bd0b480-793e-4d46-83dd-5e27335992d9 unbound from our chassis
Jan 26 16:15:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:41.851 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7bd0b480-793e-4d46-83dd-5e27335992d9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:41.851 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0496d937-83fc-40eb-b30f-4c9db654c5e5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:41.852 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9 namespace which is not needed anymore
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.873 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:41 compute-0 kernel: tapb4f3ee90-85 (unregistering): left promiscuous mode
Jan 26 16:15:41 compute-0 NetworkManager[48954]: <info>  [1769444141.8827] device (tapb4f3ee90-85): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:15:41 compute-0 ovn_controller[146046]: 2026-01-26T16:15:41Z|01164|binding|INFO|Releasing lport b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c from this chassis (sb_readonly=0)
Jan 26 16:15:41 compute-0 ovn_controller[146046]: 2026-01-26T16:15:41Z|01165|binding|INFO|Setting lport b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c down in Southbound
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.893 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:41 compute-0 ovn_controller[146046]: 2026-01-26T16:15:41Z|01166|binding|INFO|Removing iface tapb4f3ee90-85 ovn-installed in OVS
Jan 26 16:15:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:41.899 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:61:7d 2001:db8:0:1:f816:3eff:fec4:617d 2001:db8::f816:3eff:fec4:617d'], port_security=['fa:16:3e:c4:61:7d 2001:db8:0:1:f816:3eff:fec4:617d 2001:db8::f816:3eff:fec4:617d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fec4:617d/64 2001:db8::f816:3eff:fec4:617d/64', 'neutron:device_id': 'ad5cd088-59a3-4894-8741-f0daa1fb4ff9', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ef5475f2-541c-4651-b877-e2e4325b6344', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a9ccd144-fbed-4376-90b2-5da9708cb290', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2dbc549b-a211-495a-b7d4-5846cf98bef7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:41 compute-0 nova_compute[239965]: 2026-01-26 16:15:41.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:41 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Jan 26 16:15:41 compute-0 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006f.scope: Consumed 16.132s CPU time.
Jan 26 16:15:41 compute-0 systemd-machined[208061]: Machine qemu-138-instance-0000006f terminated.
Jan 26 16:15:41 compute-0 NetworkManager[48954]: <info>  [1769444141.9971] manager: (tap1883cc75-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/476)
Jan 26 16:15:42 compute-0 NetworkManager[48954]: <info>  [1769444142.0095] manager: (tapb4f3ee90-85): new Tun device (/org/freedesktop/NetworkManager/Devices/477)
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.033 239969 INFO nova.virt.libvirt.driver [-] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Instance destroyed successfully.
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.034 239969 DEBUG nova.objects.instance [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid ad5cd088-59a3-4894-8741-f0daa1fb4ff9 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.049 239969 DEBUG nova.virt.libvirt.vif [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-777747036',display_name='tempest-TestGettingAddress-server-777747036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-777747036',id=111,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:14:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-wuhqw1ae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:14:33Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=ad5cd088-59a3-4894-8741-f0daa1fb4ff9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.049 239969 DEBUG nova.network.os_vif_util [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.050 239969 DEBUG nova.network.os_vif_util [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:11:5d:a9,bridge_name='br-int',has_traffic_filtering=True,id=1883cc75-9bba-4da7-bb9d-e038a9254a10,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1883cc75-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.051 239969 DEBUG os_vif [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:5d:a9,bridge_name='br-int',has_traffic_filtering=True,id=1883cc75-9bba-4da7-bb9d-e038a9254a10,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1883cc75-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.053 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1883cc75-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.054 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.056 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9[336429]: [NOTICE]   (336433) : haproxy version is 2.8.14-c23fe91
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9[336429]: [NOTICE]   (336433) : path to executable is /usr/sbin/haproxy
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9[336429]: [WARNING]  (336433) : Exiting Master process...
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9[336429]: [ALERT]    (336433) : Current worker (336435) exited with code 143 (Terminated)
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9[336429]: [WARNING]  (336433) : All workers exited. Exiting... (0)
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.060 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 systemd[1]: libpod-2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48.scope: Deactivated successfully.
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.063 239969 INFO os_vif [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:11:5d:a9,bridge_name='br-int',has_traffic_filtering=True,id=1883cc75-9bba-4da7-bb9d-e038a9254a10,network=Network(7bd0b480-793e-4d46-83dd-5e27335992d9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1883cc75-9b')
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.063 239969 DEBUG nova.virt.libvirt.vif [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:14:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-777747036',display_name='tempest-TestGettingAddress-server-777747036',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-777747036',id=111,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFbULeaJxwcnozcWoyvQ50FWCotzgPVSIhLPca1F8fB5NEsE8naHpO1jQEaSARRlz4I3VrQe48iPgEoQDNyMeK3lJ4jF1ej588mNcPtcDyL+pxJcccFJcRHS+XeHWriRmg==',key_name='tempest-TestGettingAddress-1924722263',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:14:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-wuhqw1ae',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:14:33Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=ad5cd088-59a3-4894-8741-f0daa1fb4ff9,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": 
"2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.064 239969 DEBUG nova.network.os_vif_util [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.064 239969 DEBUG nova.network.os_vif_util [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c4:61:7d,bridge_name='br-int',has_traffic_filtering=True,id=b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4f3ee90-85') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.065 239969 DEBUG os_vif [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:61:7d,bridge_name='br-int',has_traffic_filtering=True,id=b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4f3ee90-85') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.066 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.066 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4f3ee90-85, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.067 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 podman[338667]: 2026-01-26 16:15:42.067743392 +0000 UTC m=+0.078931598 container died 2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.070 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.072 239969 INFO os_vif [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c4:61:7d,bridge_name='br-int',has_traffic_filtering=True,id=b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c,network=Network(ef5475f2-541c-4651-b877-e2e4325b6344),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4f3ee90-85')
Jan 26 16:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48-userdata-shm.mount: Deactivated successfully.
Jan 26 16:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1042739d239caf2c40ad5eed4380d715d121fe90e2de4e65df17b0ffd2e62eca-merged.mount: Deactivated successfully.
Jan 26 16:15:42 compute-0 podman[338667]: 2026-01-26 16:15:42.107575365 +0000 UTC m=+0.118763551 container cleanup 2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 16:15:42 compute-0 systemd[1]: libpod-conmon-2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48.scope: Deactivated successfully.
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.144 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 podman[338739]: 2026-01-26 16:15:42.17814513 +0000 UTC m=+0.048705481 container remove 2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.184 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[602a13d1-4331-4831-956b-f1b8690a1819]: (4, ('Mon Jan 26 04:15:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9 (2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48)\n2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48\nMon Jan 26 04:15:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9 (2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48)\n2aae7421ee210bd6c426825eb74ca377b84638df6cec4a05ba8031a5400c6b48\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.187 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[660ab5ac-1365-4476-82a4-fb5a89144c5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.188 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7bd0b480-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:42 compute-0 kernel: tap7bd0b480-70: left promiscuous mode
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.201 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.203 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e5eb8803-9911-4535-aab1-8f184812d09c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.230 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[40871c62-e6bf-4ea2-baed-a2596e289ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.232 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1ff6bf07-1cc8-4532-ba86-7a87434d0636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.253 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6c609b-261d-4875-ab4e-aabd4da9bee9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555845, 'reachable_time': 18803, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338754, 'error': None, 'target': 'ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.255 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7bd0b480-793e-4d46-83dd-5e27335992d9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:15:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d7bd0b480\x2d793e\x2d4d46\x2d83dd\x2d5e27335992d9.mount: Deactivated successfully.
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.256 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[96c38780-c848-4d91-8727-85a9814e6e68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.258 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c in datapath ef5475f2-541c-4651-b877-e2e4325b6344 unbound from our chassis
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.259 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ef5475f2-541c-4651-b877-e2e4325b6344, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.260 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf90fcb-988c-4406-8958-ff48372cb69d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.261 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344 namespace which is not needed anymore
Jan 26 16:15:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 139 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 30 op/s
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.330 239969 INFO nova.virt.libvirt.driver [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Deleting instance files /var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9_del
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.331 239969 INFO nova.virt.libvirt.driver [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Deletion of /var/lib/nova/instances/ad5cd088-59a3-4894-8741-f0daa1fb4ff9_del complete
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344[336503]: [NOTICE]   (336507) : haproxy version is 2.8.14-c23fe91
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344[336503]: [NOTICE]   (336507) : path to executable is /usr/sbin/haproxy
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344[336503]: [WARNING]  (336507) : Exiting Master process...
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344[336503]: [WARNING]  (336507) : Exiting Master process...
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344[336503]: [ALERT]    (336507) : Current worker (336509) exited with code 143 (Terminated)
Jan 26 16:15:42 compute-0 neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344[336503]: [WARNING]  (336507) : All workers exited. Exiting... (0)
Jan 26 16:15:42 compute-0 systemd[1]: libpod-32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e.scope: Deactivated successfully.
Jan 26 16:15:42 compute-0 podman[338772]: 2026-01-26 16:15:42.392738382 +0000 UTC m=+0.044046837 container died 32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.398 239969 INFO nova.compute.manager [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.398 239969 DEBUG oslo.service.loopingcall [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.398 239969 DEBUG nova.compute.manager [-] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.399 239969 DEBUG nova.network.neutron [-] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e-userdata-shm.mount: Deactivated successfully.
Jan 26 16:15:42 compute-0 podman[338772]: 2026-01-26 16:15:42.426272001 +0000 UTC m=+0.077580456 container cleanup 32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:15:42 compute-0 systemd[1]: libpod-conmon-32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e.scope: Deactivated successfully.
Jan 26 16:15:42 compute-0 podman[338802]: 2026-01-26 16:15:42.484856242 +0000 UTC m=+0.038073440 container remove 32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.491 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5171200e-74d4-4619-b79c-987492636011]: (4, ('Mon Jan 26 04:15:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344 (32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e)\n32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e\nMon Jan 26 04:15:42 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344 (32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e)\n32d9b72e4865331a22798dca21f35e6257b607e7fdfce891cf284128c9fd5f3e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.492 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ec793d07-1c59-4fcf-a23f-03db42d6fdc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.494 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef5475f2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.496 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 kernel: tapef5475f2-50: left promiscuous mode
Jan 26 16:15:42 compute-0 nova_compute[239965]: 2026-01-26 16:15:42.508 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.511 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7264e133-1099-4987-9797-f13cd713c936]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.521 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5bff397d-f111-434c-8239-2d1df009479a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.522 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f64c3acb-1c95-4fc0-8098-e801b16057e0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.539 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[10a5d502-b554-43ad-9a02-4e83eb14822e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 555925, 'reachable_time': 34041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 338817, 'error': None, 'target': 'ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.541 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ef5475f2-541c-4651-b877-e2e4325b6344 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:15:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:42.541 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[09076cde-faa6-4f51-9e5d-cfaac553ba55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6702055633131226b04612cd58b0f344045e7fa03232491f9fa25db6581d7cd-merged.mount: Deactivated successfully.
Jan 26 16:15:42 compute-0 systemd[1]: run-netns-ovnmeta\x2def5475f2\x2d541c\x2d4651\x2db877\x2de2e4325b6344.mount: Deactivated successfully.
Jan 26 16:15:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:42 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:15:42 compute-0 ceph-mon[75140]: pgmap v1950: 305 pgs: 305 active+clean; 139 MiB data, 832 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 18 KiB/s wr, 30 op/s
Jan 26 16:15:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:43 compute-0 nova_compute[239965]: 2026-01-26 16:15:43.392 239969 DEBUG nova.compute.manager [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-changed-1883cc75-9bba-4da7-bb9d-e038a9254a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:43 compute-0 nova_compute[239965]: 2026-01-26 16:15:43.392 239969 DEBUG nova.compute.manager [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Refreshing instance network info cache due to event network-changed-1883cc75-9bba-4da7-bb9d-e038a9254a10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:15:43 compute-0 nova_compute[239965]: 2026-01-26 16:15:43.393 239969 DEBUG oslo_concurrency.lockutils [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:15:43 compute-0 nova_compute[239965]: 2026-01-26 16:15:43.393 239969 DEBUG oslo_concurrency.lockutils [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:15:43 compute-0 nova_compute[239965]: 2026-01-26 16:15:43.393 239969 DEBUG nova.network.neutron [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Refreshing network info cache for port 1883cc75-9bba-4da7-bb9d-e038a9254a10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:15:43 compute-0 nova_compute[239965]: 2026-01-26 16:15:43.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.054 239969 DEBUG nova.compute.manager [req-e97ebdf2-0f3e-442c-b842-473929761939 req-559ed353-887b-4f30-a695-205fe59c74cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-unplugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.054 239969 DEBUG oslo_concurrency.lockutils [req-e97ebdf2-0f3e-442c-b842-473929761939 req-559ed353-887b-4f30-a695-205fe59c74cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.055 239969 DEBUG oslo_concurrency.lockutils [req-e97ebdf2-0f3e-442c-b842-473929761939 req-559ed353-887b-4f30-a695-205fe59c74cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.056 239969 DEBUG oslo_concurrency.lockutils [req-e97ebdf2-0f3e-442c-b842-473929761939 req-559ed353-887b-4f30-a695-205fe59c74cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.056 239969 DEBUG nova.compute.manager [req-e97ebdf2-0f3e-442c-b842-473929761939 req-559ed353-887b-4f30-a695-205fe59c74cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] No waiting events found dispatching network-vif-unplugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.057 239969 DEBUG nova.compute.manager [req-e97ebdf2-0f3e-442c-b842-473929761939 req-559ed353-887b-4f30-a695-205fe59c74cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-unplugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:15:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 108 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 19 KiB/s wr, 52 op/s
Jan 26 16:15:44 compute-0 ceph-mon[75140]: pgmap v1951: 305 pgs: 305 active+clean; 108 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 19 KiB/s wr, 52 op/s
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.875 239969 DEBUG nova.network.neutron [-] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.902 239969 INFO nova.compute.manager [-] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Took 2.50 seconds to deallocate network for instance.
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.966 239969 DEBUG oslo_concurrency.lockutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:44 compute-0 nova_compute[239965]: 2026-01-26 16:15:44.967 239969 DEBUG oslo_concurrency.lockutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.011 239969 DEBUG oslo_concurrency.processutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.051 239969 DEBUG nova.network.neutron [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updated VIF entry in instance network info cache for port 1883cc75-9bba-4da7-bb9d-e038a9254a10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.052 239969 DEBUG nova.network.neutron [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Updating instance_info_cache with network_info: [{"id": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "address": "fa:16:3e:11:5d:a9", "network": {"id": "7bd0b480-793e-4d46-83dd-5e27335992d9", "bridge": "br-int", "label": "tempest-network-smoke--1805440024", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1883cc75-9b", "ovs_interfaceid": "1883cc75-9bba-4da7-bb9d-e038a9254a10", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "address": "fa:16:3e:c4:61:7d", "network": {"id": "ef5475f2-541c-4651-b877-e2e4325b6344", "bridge": "br-int", "label": "tempest-network-smoke--2108232030", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], 
"gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fec4:617d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4f3ee90-85", "ovs_interfaceid": "b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.057 239969 DEBUG nova.compute.manager [req-429d9538-d9e2-4172-82cf-17a040a21444 req-d29db745-aa5a-4f43-b157-5af19c28abaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-plugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.058 239969 DEBUG oslo_concurrency.lockutils [req-429d9538-d9e2-4172-82cf-17a040a21444 req-d29db745-aa5a-4f43-b157-5af19c28abaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.059 239969 DEBUG oslo_concurrency.lockutils [req-429d9538-d9e2-4172-82cf-17a040a21444 req-d29db745-aa5a-4f43-b157-5af19c28abaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.059 239969 DEBUG oslo_concurrency.lockutils [req-429d9538-d9e2-4172-82cf-17a040a21444 req-d29db745-aa5a-4f43-b157-5af19c28abaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.059 239969 DEBUG nova.compute.manager [req-429d9538-d9e2-4172-82cf-17a040a21444 req-d29db745-aa5a-4f43-b157-5af19c28abaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] No waiting events found dispatching network-vif-plugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.060 239969 WARNING nova.compute.manager [req-429d9538-d9e2-4172-82cf-17a040a21444 req-d29db745-aa5a-4f43-b157-5af19c28abaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received unexpected event network-vif-plugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c for instance with vm_state deleted and task_state None.
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.060 239969 DEBUG nova.compute.manager [req-429d9538-d9e2-4172-82cf-17a040a21444 req-d29db745-aa5a-4f43-b157-5af19c28abaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-deleted-1883cc75-9bba-4da7-bb9d-e038a9254a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.061 239969 DEBUG nova.compute.manager [req-429d9538-d9e2-4172-82cf-17a040a21444 req-d29db745-aa5a-4f43-b157-5af19c28abaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-deleted-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.083 239969 DEBUG oslo_concurrency.lockutils [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-ad5cd088-59a3-4894-8741-f0daa1fb4ff9" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.083 239969 DEBUG nova.compute.manager [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-unplugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.084 239969 DEBUG oslo_concurrency.lockutils [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.084 239969 DEBUG oslo_concurrency.lockutils [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.085 239969 DEBUG oslo_concurrency.lockutils [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.085 239969 DEBUG nova.compute.manager [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] No waiting events found dispatching network-vif-unplugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.086 239969 DEBUG nova.compute.manager [req-33b2a680-8cb9-4fa3-a173-f71ce2d54534 req-6dfcb707-5066-4fd5-8c7e-037bb9b3728c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-unplugged-b4f3ee90-85ed-4fa3-89cc-c5921d4f6f1c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:15:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:15:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2204163950' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.555 239969 DEBUG oslo_concurrency.processutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.561 239969 DEBUG nova.compute.provider_tree [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.575 239969 DEBUG nova.scheduler.client.report [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.597 239969 DEBUG oslo_concurrency.lockutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.647 239969 INFO nova.scheduler.client.report [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance ad5cd088-59a3-4894-8741-f0daa1fb4ff9
Jan 26 16:15:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2204163950' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:45 compute-0 nova_compute[239965]: 2026-01-26 16:15:45.719 239969 DEBUG oslo_concurrency.lockutils [None req-0b295b00-d08f-4beb-a1f0-8e15af28586b 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.945s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:46 compute-0 nova_compute[239965]: 2026-01-26 16:15:46.133 239969 DEBUG nova.compute.manager [req-235c4ffb-71d6-471e-a991-207e479a60d6 req-a6c1a627-a57a-41a1-8f60-d05b96bde6cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received event network-vif-plugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:15:46 compute-0 nova_compute[239965]: 2026-01-26 16:15:46.134 239969 DEBUG oslo_concurrency.lockutils [req-235c4ffb-71d6-471e-a991-207e479a60d6 req-a6c1a627-a57a-41a1-8f60-d05b96bde6cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:46 compute-0 nova_compute[239965]: 2026-01-26 16:15:46.134 239969 DEBUG oslo_concurrency.lockutils [req-235c4ffb-71d6-471e-a991-207e479a60d6 req-a6c1a627-a57a-41a1-8f60-d05b96bde6cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:46 compute-0 nova_compute[239965]: 2026-01-26 16:15:46.134 239969 DEBUG oslo_concurrency.lockutils [req-235c4ffb-71d6-471e-a991-207e479a60d6 req-a6c1a627-a57a-41a1-8f60-d05b96bde6cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ad5cd088-59a3-4894-8741-f0daa1fb4ff9-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:46 compute-0 nova_compute[239965]: 2026-01-26 16:15:46.135 239969 DEBUG nova.compute.manager [req-235c4ffb-71d6-471e-a991-207e479a60d6 req-a6c1a627-a57a-41a1-8f60-d05b96bde6cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] No waiting events found dispatching network-vif-plugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:15:46 compute-0 nova_compute[239965]: 2026-01-26 16:15:46.135 239969 WARNING nova.compute.manager [req-235c4ffb-71d6-471e-a991-207e479a60d6 req-a6c1a627-a57a-41a1-8f60-d05b96bde6cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Received unexpected event network-vif-plugged-1883cc75-9bba-4da7-bb9d-e038a9254a10 for instance with vm_state deleted and task_state None.
Jan 26 16:15:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 88 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 19 KiB/s wr, 54 op/s
Jan 26 16:15:46 compute-0 ceph-mon[75140]: pgmap v1952: 305 pgs: 305 active+clean; 88 MiB data, 813 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 19 KiB/s wr, 54 op/s
Jan 26 16:15:47 compute-0 nova_compute[239965]: 2026-01-26 16:15:47.069 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:47 compute-0 nova_compute[239965]: 2026-01-26 16:15:47.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:47 compute-0 nova_compute[239965]: 2026-01-26 16:15:47.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:47.961 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:c0:60 2001:db8::f816:3eff:fed3:c060'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b148aee5-31c9-4b40-aded-fc877d3044d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e2c04fa-ed19-4e90-8fe2-e33d24918520) old=Port_Binding(mac=['fa:16:3e:d3:c0:60 10.100.0.2 2001:db8::f816:3eff:fed3:c060'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:47.962 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e2c04fa-ed19-4e90-8fe2-e33d24918520 in datapath 50ca1210-9784-41a5-b102-be1704bf4f24 updated
Jan 26 16:15:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:47.963 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ca1210-9784-41a5-b102-be1704bf4f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:15:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:47.964 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[88ce79c5-b71c-48ad-be27-21e41b27efc5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 19 KiB/s wr, 57 op/s
Jan 26 16:15:48 compute-0 ceph-mon[75140]: pgmap v1953: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 19 KiB/s wr, 57 op/s
Jan 26 16:15:48 compute-0 nova_compute[239965]: 2026-01-26 16:15:48.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:48 compute-0 nova_compute[239965]: 2026-01-26 16:15:48.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:48 compute-0 nova_compute[239965]: 2026-01-26 16:15:48.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:15:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:15:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3718883696' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:15:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:15:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3718883696' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.3518598913843685e-05 of space, bias 1.0, pg target 0.004055579674153106 quantized to 32 (current 32)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00101844679294732 of space, bias 1.0, pg target 0.305534037884196 quantized to 32 (current 32)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.120575718234788e-07 of space, bias 4.0, pg target 0.0008544690861881746 quantized to 16 (current 16)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:15:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:15:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3718883696' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:15:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3718883696' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:15:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 6.7 KiB/s wr, 50 op/s
Jan 26 16:15:50 compute-0 ceph-mon[75140]: pgmap v1954: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 6.7 KiB/s wr, 50 op/s
Jan 26 16:15:50 compute-0 nova_compute[239965]: 2026-01-26 16:15:50.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:50 compute-0 nova_compute[239965]: 2026-01-26 16:15:50.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:50 compute-0 nova_compute[239965]: 2026-01-26 16:15:50.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:15:50 compute-0 nova_compute[239965]: 2026-01-26 16:15:50.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:15:50 compute-0 nova_compute[239965]: 2026-01-26 16:15:50.535 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.143 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444137.1422255, 46b779d5-01aa-4ab2-bbcf-cb9c344284bf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.143 239969 INFO nova.compute.manager [-] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] VM Stopped (Lifecycle Event)
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.147 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.164 239969 DEBUG nova.compute.manager [None req-04cc78f6-b306-4c89-9650-4cbb0807bc53 - - - - - -] [instance: 46b779d5-01aa-4ab2-bbcf-cb9c344284bf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:15:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 6.7 KiB/s wr, 50 op/s
Jan 26 16:15:52 compute-0 ceph-mon[75140]: pgmap v1955: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 6.7 KiB/s wr, 50 op/s
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.532 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.533 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:15:52 compute-0 nova_compute[239965]: 2026-01-26 16:15:52.534 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:15:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1607229758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:53 compute-0 nova_compute[239965]: 2026-01-26 16:15:53.146 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:53 compute-0 nova_compute[239965]: 2026-01-26 16:15:53.316 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:15:53 compute-0 nova_compute[239965]: 2026-01-26 16:15:53.318 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3784MB free_disk=59.98747029248625GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:15:53 compute-0 nova_compute[239965]: 2026-01-26 16:15:53.318 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:53 compute-0 nova_compute[239965]: 2026-01-26 16:15:53.319 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:53 compute-0 nova_compute[239965]: 2026-01-26 16:15:53.385 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:15:53 compute-0 nova_compute[239965]: 2026-01-26 16:15:53.386 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:15:53 compute-0 nova_compute[239965]: 2026-01-26 16:15:53.415 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:15:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1607229758' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:15:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868200850' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:54 compute-0 nova_compute[239965]: 2026-01-26 16:15:54.033 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:15:54 compute-0 nova_compute[239965]: 2026-01-26 16:15:54.041 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:15:54 compute-0 nova_compute[239965]: 2026-01-26 16:15:54.062 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:15:54 compute-0 nova_compute[239965]: 2026-01-26 16:15:54.094 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:15:54 compute-0 nova_compute[239965]: 2026-01-26 16:15:54.094 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:15:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Jan 26 16:15:54 compute-0 podman[338885]: 2026-01-26 16:15:54.393931175 +0000 UTC m=+0.071172381 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 16:15:54 compute-0 podman[338886]: 2026-01-26 16:15:54.41215538 +0000 UTC m=+0.095588457 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Jan 26 16:15:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1868200850' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:15:54 compute-0 ceph-mon[75140]: pgmap v1956: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.7 KiB/s wr, 28 op/s
Jan 26 16:15:55 compute-0 nova_compute[239965]: 2026-01-26 16:15:55.054 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:55 compute-0 nova_compute[239965]: 2026-01-26 16:15:55.148 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:55.981 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:15:55 compute-0 nova_compute[239965]: 2026-01-26 16:15:55.981 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:55.982 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:15:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 341 B/s wr, 5 op/s
Jan 26 16:15:56 compute-0 ceph-mon[75140]: pgmap v1957: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 341 B/s wr, 5 op/s
Jan 26 16:15:57 compute-0 nova_compute[239965]: 2026-01-26 16:15:57.028 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444142.0263054, ad5cd088-59a3-4894-8741-f0daa1fb4ff9 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:15:57 compute-0 nova_compute[239965]: 2026-01-26 16:15:57.028 239969 INFO nova.compute.manager [-] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] VM Stopped (Lifecycle Event)
Jan 26 16:15:57 compute-0 nova_compute[239965]: 2026-01-26 16:15:57.047 239969 DEBUG nova.compute.manager [None req-ab6b5ef6-3205-47af-b9bd-6158f22197ee - - - - - -] [instance: ad5cd088-59a3-4894-8741-f0daa1fb4ff9] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:15:57 compute-0 nova_compute[239965]: 2026-01-26 16:15:57.076 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:57 compute-0 nova_compute[239965]: 2026-01-26 16:15:57.094 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:15:57 compute-0 nova_compute[239965]: 2026-01-26 16:15:57.185 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:15:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:15:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Jan 26 16:15:58 compute-0 ceph-mon[75140]: pgmap v1958: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 341 B/s wr, 3 op/s
Jan 26 16:15:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:59.238 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:15:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:59.239 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:15:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:15:59.239 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:16:00 compute-0 ceph-mon[75140]: pgmap v1959: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:02 compute-0 nova_compute[239965]: 2026-01-26 16:16:02.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:02 compute-0 nova_compute[239965]: 2026-01-26 16:16:02.186 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:02 compute-0 ceph-mon[75140]: pgmap v1960: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:04 compute-0 ceph-mon[75140]: pgmap v1961: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:05.983 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:06 compute-0 ceph-mon[75140]: pgmap v1962: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:06 compute-0 nova_compute[239965]: 2026-01-26 16:16:06.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:07 compute-0 nova_compute[239965]: 2026-01-26 16:16:07.083 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:07 compute-0 nova_compute[239965]: 2026-01-26 16:16:07.187 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:08 compute-0 ceph-mon[75140]: pgmap v1963: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:10 compute-0 ceph-mon[75140]: pgmap v1964: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:12 compute-0 nova_compute[239965]: 2026-01-26 16:16:12.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:12 compute-0 nova_compute[239965]: 2026-01-26 16:16:12.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:12 compute-0 ceph-mon[75140]: pgmap v1965: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:14 compute-0 ceph-mon[75140]: pgmap v1966: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:16 compute-0 ceph-mon[75140]: pgmap v1967: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:17 compute-0 nova_compute[239965]: 2026-01-26 16:16:17.089 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:17 compute-0 nova_compute[239965]: 2026-01-26 16:16:17.191 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:18 compute-0 ceph-mon[75140]: pgmap v1968: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:20 compute-0 ceph-mon[75140]: pgmap v1969: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:22 compute-0 nova_compute[239965]: 2026-01-26 16:16:22.092 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:22 compute-0 nova_compute[239965]: 2026-01-26 16:16:22.193 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:22 compute-0 ceph-mon[75140]: pgmap v1970: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:23 compute-0 nova_compute[239965]: 2026-01-26 16:16:23.377 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:23 compute-0 nova_compute[239965]: 2026-01-26 16:16:23.377 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:23 compute-0 nova_compute[239965]: 2026-01-26 16:16:23.409 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:16:23 compute-0 nova_compute[239965]: 2026-01-26 16:16:23.508 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:23 compute-0 nova_compute[239965]: 2026-01-26 16:16:23.508 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:23 compute-0 nova_compute[239965]: 2026-01-26 16:16:23.518 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:16:23 compute-0 nova_compute[239965]: 2026-01-26 16:16:23.518 239969 INFO nova.compute.claims [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:16:23 compute-0 nova_compute[239965]: 2026-01-26 16:16:23.633 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:16:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:16:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291037618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.202 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.211 239969 DEBUG nova.compute.provider_tree [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.230 239969 DEBUG nova.scheduler.client.report [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:16:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/291037618' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.267 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.269 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:16:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.339 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.340 239969 DEBUG nova.network.neutron [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.365 239969 INFO nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:16:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:24.381 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:c0:60 2001:db8:0:1:f816:3eff:fed3:c060'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b148aee5-31c9-4b40-aded-fc877d3044d0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=5e2c04fa-ed19-4e90-8fe2-e33d24918520) old=Port_Binding(mac=['fa:16:3e:d3:c0:60 2001:db8::f816:3eff:fed3:c060'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fed3:c060/64', 'neutron:device_id': 'ovnmeta-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-50ca1210-9784-41a5-b102-be1704bf4f24', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b1d43589ffaf4e34b56d4e46ba740df4', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:16:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:24.384 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 5e2c04fa-ed19-4e90-8fe2-e33d24918520 in datapath 50ca1210-9784-41a5-b102-be1704bf4f24 updated
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.387 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:16:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:24.387 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 50ca1210-9784-41a5-b102-be1704bf4f24, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:16:24 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:24.389 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1d1e1261-59a2-4deb-9c28-345056bf111b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.478 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.481 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.482 239969 INFO nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Creating image(s)
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.511 239969 DEBUG nova.storage.rbd_utils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.533 239969 DEBUG nova.storage.rbd_utils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.555 239969 DEBUG nova.storage.rbd_utils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.558 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.590 239969 DEBUG nova.policy [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.626 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.627 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.628 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.628 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.656 239969 DEBUG nova.storage.rbd_utils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.659 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:16:24 compute-0 podman[339010]: 2026-01-26 16:16:24.671962006 +0000 UTC m=+0.045565795 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:16:24 compute-0 podman[339011]: 2026-01-26 16:16:24.701473867 +0000 UTC m=+0.069768846 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.928 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:16:24 compute-0 nova_compute[239965]: 2026-01-26 16:16:24.989 239969 DEBUG nova.storage.rbd_utils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:16:25 compute-0 nova_compute[239965]: 2026-01-26 16:16:25.077 239969 DEBUG nova.objects.instance [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 89d77454-1c5d-4556-88b6-e5d4023c8c36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:16:25 compute-0 nova_compute[239965]: 2026-01-26 16:16:25.095 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:16:25 compute-0 nova_compute[239965]: 2026-01-26 16:16:25.095 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Ensure instance console log exists: /var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:16:25 compute-0 nova_compute[239965]: 2026-01-26 16:16:25.096 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:25 compute-0 nova_compute[239965]: 2026-01-26 16:16:25.096 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:25 compute-0 nova_compute[239965]: 2026-01-26 16:16:25.097 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:25 compute-0 ceph-mon[75140]: pgmap v1971: 305 pgs: 305 active+clean; 88 MiB data, 771 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:16:25 compute-0 nova_compute[239965]: 2026-01-26 16:16:25.534 239969 DEBUG nova.network.neutron [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Successfully created port: 1751925c-4e5b-4436-af71-dae4e4c07755 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:16:26 compute-0 nova_compute[239965]: 2026-01-26 16:16:26.139 239969 DEBUG nova.network.neutron [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Successfully created port: e462f031-2502-4089-9f71-ba99ec9f5c8d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:16:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 101 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 440 KiB/s wr, 12 op/s
Jan 26 16:16:26 compute-0 ceph-mon[75140]: pgmap v1972: 305 pgs: 305 active+clean; 101 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 440 KiB/s wr, 12 op/s
Jan 26 16:16:27 compute-0 nova_compute[239965]: 2026-01-26 16:16:27.094 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:27 compute-0 nova_compute[239965]: 2026-01-26 16:16:27.196 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:27 compute-0 nova_compute[239965]: 2026-01-26 16:16:27.786 239969 DEBUG nova.network.neutron [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Successfully updated port: 1751925c-4e5b-4436-af71-dae4e4c07755 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:16:27 compute-0 nova_compute[239965]: 2026-01-26 16:16:27.939 239969 DEBUG nova.compute.manager [req-69e8410c-0204-4a5c-8df9-03882c80444f req-56ac8057-2767-45b1-883e-26bdadd65ee5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-changed-1751925c-4e5b-4436-af71-dae4e4c07755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:16:27 compute-0 nova_compute[239965]: 2026-01-26 16:16:27.940 239969 DEBUG nova.compute.manager [req-69e8410c-0204-4a5c-8df9-03882c80444f req-56ac8057-2767-45b1-883e-26bdadd65ee5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Refreshing instance network info cache due to event network-changed-1751925c-4e5b-4436-af71-dae4e4c07755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:16:27 compute-0 nova_compute[239965]: 2026-01-26 16:16:27.940 239969 DEBUG oslo_concurrency.lockutils [req-69e8410c-0204-4a5c-8df9-03882c80444f req-56ac8057-2767-45b1-883e-26bdadd65ee5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:16:27 compute-0 nova_compute[239965]: 2026-01-26 16:16:27.940 239969 DEBUG oslo_concurrency.lockutils [req-69e8410c-0204-4a5c-8df9-03882c80444f req-56ac8057-2767-45b1-883e-26bdadd65ee5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:16:27 compute-0 nova_compute[239965]: 2026-01-26 16:16:27.941 239969 DEBUG nova.network.neutron [req-69e8410c-0204-4a5c-8df9-03882c80444f req-56ac8057-2767-45b1-883e-26bdadd65ee5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Refreshing network info cache for port 1751925c-4e5b-4436-af71-dae4e4c07755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:16:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:28 compute-0 sshd-session[339165]: Received disconnect from 45.148.10.151 port 62862:11:  [preauth]
Jan 26 16:16:28 compute-0 sshd-session[339165]: Disconnected from authenticating user root 45.148.10.151 port 62862 [preauth]
Jan 26 16:16:28 compute-0 nova_compute[239965]: 2026-01-26 16:16:28.189 239969 DEBUG nova.network.neutron [req-69e8410c-0204-4a5c-8df9-03882c80444f req-56ac8057-2767-45b1-883e-26bdadd65ee5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:16:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:28 compute-0 ceph-mon[75140]: pgmap v1973: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:16:28
Jan 26 16:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', 'volumes', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'backups']
Jan 26 16:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:16:28 compute-0 nova_compute[239965]: 2026-01-26 16:16:28.671 239969 DEBUG nova.network.neutron [req-69e8410c-0204-4a5c-8df9-03882c80444f req-56ac8057-2767-45b1-883e-26bdadd65ee5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:16:28 compute-0 nova_compute[239965]: 2026-01-26 16:16:28.686 239969 DEBUG oslo_concurrency.lockutils [req-69e8410c-0204-4a5c-8df9-03882c80444f req-56ac8057-2767-45b1-883e-26bdadd65ee5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:16:28 compute-0 nova_compute[239965]: 2026-01-26 16:16:28.850 239969 DEBUG nova.network.neutron [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Successfully updated port: e462f031-2502-4089-9f71-ba99ec9f5c8d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:16:28 compute-0 nova_compute[239965]: 2026-01-26 16:16:28.867 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:16:28 compute-0 nova_compute[239965]: 2026-01-26 16:16:28.868 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:16:28 compute-0 nova_compute[239965]: 2026-01-26 16:16:28.868 239969 DEBUG nova.network.neutron [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:16:29 compute-0 nova_compute[239965]: 2026-01-26 16:16:29.059 239969 DEBUG nova.network.neutron [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:16:30 compute-0 nova_compute[239965]: 2026-01-26 16:16:30.062 239969 DEBUG nova.compute.manager [req-ffec95bd-1def-4e3b-9c65-a51719a9bf03 req-11ef9dfc-8ca0-45f9-a299-833282de0cd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-changed-e462f031-2502-4089-9f71-ba99ec9f5c8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:16:30 compute-0 nova_compute[239965]: 2026-01-26 16:16:30.063 239969 DEBUG nova.compute.manager [req-ffec95bd-1def-4e3b-9c65-a51719a9bf03 req-11ef9dfc-8ca0-45f9-a299-833282de0cd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Refreshing instance network info cache due to event network-changed-e462f031-2502-4089-9f71-ba99ec9f5c8d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:16:30 compute-0 nova_compute[239965]: 2026-01-26 16:16:30.063 239969 DEBUG oslo_concurrency.lockutils [req-ffec95bd-1def-4e3b-9c65-a51719a9bf03 req-11ef9dfc-8ca0-45f9-a299-833282de0cd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:16:30 compute-0 ceph-mon[75140]: pgmap v1974: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:16:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:16:31 compute-0 ovn_controller[146046]: 2026-01-26T16:16:31Z|01167|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.779 239969 DEBUG nova.network.neutron [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updating instance_info_cache with network_info: [{"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.811 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.812 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Instance network_info: |[{"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.812 239969 DEBUG oslo_concurrency.lockutils [req-ffec95bd-1def-4e3b-9c65-a51719a9bf03 req-11ef9dfc-8ca0-45f9-a299-833282de0cd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.812 239969 DEBUG nova.network.neutron [req-ffec95bd-1def-4e3b-9c65-a51719a9bf03 req-11ef9dfc-8ca0-45f9-a299-833282de0cd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Refreshing network info cache for port e462f031-2502-4089-9f71-ba99ec9f5c8d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.816 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Start _get_guest_xml network_info=[{"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.820 239969 WARNING nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.825 239969 DEBUG nova.virt.libvirt.host [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.826 239969 DEBUG nova.virt.libvirt.host [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.831 239969 DEBUG nova.virt.libvirt.host [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.832 239969 DEBUG nova.virt.libvirt.host [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.832 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.833 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.833 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.833 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.833 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.834 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.834 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.834 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.834 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.834 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.835 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.835 239969 DEBUG nova.virt.hardware [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:16:31 compute-0 nova_compute[239965]: 2026-01-26 16:16:31.837 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.097 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.198 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:16:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/83619113' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.413 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:16:32 compute-0 ceph-mon[75140]: pgmap v1975: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:32 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/83619113' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.450 239969 DEBUG nova.storage.rbd_utils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.454 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:16:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:16:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1904616513' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.982 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.985 239969 DEBUG nova.virt.libvirt.vif [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-9655236',display_name='tempest-TestGettingAddress-server-9655236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-9655236',id=113,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-kezm37dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:16:24Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=89d77454-1c5d-4556-88b6-e5d4023c8c36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.985 239969 DEBUG nova.network.os_vif_util [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.987 239969 DEBUG nova.network.os_vif_util [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=1751925c-4e5b-4436-af71-dae4e4c07755,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1751925c-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.988 239969 DEBUG nova.virt.libvirt.vif [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-9655236',display_name='tempest-TestGettingAddress-server-9655236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-9655236',id=113,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-kezm37dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:16:24Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=89d77454-1c5d-4556-88b6-e5d4023c8c36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.989 239969 DEBUG nova.network.os_vif_util [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.989 239969 DEBUG nova.network.os_vif_util [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:11:fa,bridge_name='br-int',has_traffic_filtering=True,id=e462f031-2502-4089-9f71-ba99ec9f5c8d,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape462f031-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:16:32 compute-0 nova_compute[239965]: 2026-01-26 16:16:32.991 239969 DEBUG nova.objects.instance [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 89d77454-1c5d-4556-88b6-e5d4023c8c36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.009 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <uuid>89d77454-1c5d-4556-88b6-e5d4023c8c36</uuid>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <name>instance-00000071</name>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-9655236</nova:name>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:16:31</nova:creationTime>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <nova:port uuid="1751925c-4e5b-4436-af71-dae4e4c07755">
Jan 26 16:16:33 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <nova:port uuid="e462f031-2502-4089-9f71-ba99ec9f5c8d">
Jan 26 16:16:33 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe10:11fa" ipVersion="6"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <system>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <entry name="serial">89d77454-1c5d-4556-88b6-e5d4023c8c36</entry>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <entry name="uuid">89d77454-1c5d-4556-88b6-e5d4023c8c36</entry>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </system>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <os>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   </os>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <features>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   </features>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/89d77454-1c5d-4556-88b6-e5d4023c8c36_disk">
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       </source>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/89d77454-1c5d-4556-88b6-e5d4023c8c36_disk.config">
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       </source>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:16:33 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:92:3c:80"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <target dev="tap1751925c-4e"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:10:11:fa"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <target dev="tape462f031-25"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36/console.log" append="off"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <video>
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </video>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:16:33 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:16:33 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:16:33 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:16:33 compute-0 nova_compute[239965]: </domain>
Jan 26 16:16:33 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.011 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Preparing to wait for external event network-vif-plugged-1751925c-4e5b-4436-af71-dae4e4c07755 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.013 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.013 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.014 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.015 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Preparing to wait for external event network-vif-plugged-e462f031-2502-4089-9f71-ba99ec9f5c8d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.015 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.016 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.017 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.018 239969 DEBUG nova.virt.libvirt.vif [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-9655236',display_name='tempest-TestGettingAddress-server-9655236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-9655236',id=113,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-kezm37dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:16:24Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=89d77454-1c5d-4556-88b6-e5d4023c8c36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.019 239969 DEBUG nova.network.os_vif_util [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.021 239969 DEBUG nova.network.os_vif_util [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:92:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=1751925c-4e5b-4436-af71-dae4e4c07755,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1751925c-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.022 239969 DEBUG os_vif [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=1751925c-4e5b-4436-af71-dae4e4c07755,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1751925c-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.023 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.024 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.026 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.031 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1751925c-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.032 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1751925c-4e, col_values=(('external_ids', {'iface-id': '1751925c-4e5b-4436-af71-dae4e4c07755', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:92:3c:80', 'vm-uuid': '89d77454-1c5d-4556-88b6-e5d4023c8c36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:33 compute-0 NetworkManager[48954]: <info>  [1769444193.0354] manager: (tap1751925c-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/478)
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.034 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.038 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.041 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.042 239969 INFO os_vif [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:92:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=1751925c-4e5b-4436-af71-dae4e4c07755,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1751925c-4e')
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.043 239969 DEBUG nova.virt.libvirt.vif [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-9655236',display_name='tempest-TestGettingAddress-server-9655236',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-9655236',id=113,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-kezm37dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:16:24Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=89d77454-1c5d-4556-88b6-e5d4023c8c36,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.044 239969 DEBUG nova.network.os_vif_util [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.045 239969 DEBUG nova.network.os_vif_util [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:10:11:fa,bridge_name='br-int',has_traffic_filtering=True,id=e462f031-2502-4089-9f71-ba99ec9f5c8d,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape462f031-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.045 239969 DEBUG os_vif [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:11:fa,bridge_name='br-int',has_traffic_filtering=True,id=e462f031-2502-4089-9f71-ba99ec9f5c8d,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape462f031-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.046 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.046 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.047 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.050 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape462f031-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.050 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape462f031-25, col_values=(('external_ids', {'iface-id': 'e462f031-2502-4089-9f71-ba99ec9f5c8d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:10:11:fa', 'vm-uuid': '89d77454-1c5d-4556-88b6-e5d4023c8c36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:33 compute-0 NetworkManager[48954]: <info>  [1769444193.0532] manager: (tape462f031-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/479)
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.055 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.064 239969 INFO os_vif [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:10:11:fa,bridge_name='br-int',has_traffic_filtering=True,id=e462f031-2502-4089-9f71-ba99ec9f5c8d,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape462f031-25')
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.118 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.118 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.119 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:92:3c:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.119 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:10:11:fa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.119 239969 INFO nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Using config drive
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.141 239969 DEBUG nova.storage.rbd_utils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.246 239969 DEBUG nova.network.neutron [req-ffec95bd-1def-4e3b-9c65-a51719a9bf03 req-11ef9dfc-8ca0-45f9-a299-833282de0cd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updated VIF entry in instance network info cache for port e462f031-2502-4089-9f71-ba99ec9f5c8d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.246 239969 DEBUG nova.network.neutron [req-ffec95bd-1def-4e3b-9c65-a51719a9bf03 req-11ef9dfc-8ca0-45f9-a299-833282de0cd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updating instance_info_cache with network_info: [{"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:16:33 compute-0 nova_compute[239965]: 2026-01-26 16:16:33.261 239969 DEBUG oslo_concurrency.lockutils [req-ffec95bd-1def-4e3b-9c65-a51719a9bf03 req-11ef9dfc-8ca0-45f9-a299-833282de0cd9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:16:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1904616513' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:16:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:34 compute-0 ceph-mon[75140]: pgmap v1976: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:35 compute-0 nova_compute[239965]: 2026-01-26 16:16:35.448 239969 INFO nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Creating config drive at /var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36/disk.config
Jan 26 16:16:35 compute-0 nova_compute[239965]: 2026-01-26 16:16:35.453 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7vpnmlcn execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:16:35 compute-0 nova_compute[239965]: 2026-01-26 16:16:35.598 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7vpnmlcn" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:16:35 compute-0 nova_compute[239965]: 2026-01-26 16:16:35.622 239969 DEBUG nova.storage.rbd_utils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:16:35 compute-0 nova_compute[239965]: 2026-01-26 16:16:35.625 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36/disk.config 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:16:35 compute-0 nova_compute[239965]: 2026-01-26 16:16:35.837 239969 DEBUG oslo_concurrency.processutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36/disk.config 89d77454-1c5d-4556-88b6-e5d4023c8c36_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.212s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:16:35 compute-0 nova_compute[239965]: 2026-01-26 16:16:35.839 239969 INFO nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Deleting local config drive /var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36/disk.config because it was imported into RBD.
Jan 26 16:16:35 compute-0 kernel: tap1751925c-4e: entered promiscuous mode
Jan 26 16:16:35 compute-0 NetworkManager[48954]: <info>  [1769444195.8974] manager: (tap1751925c-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/480)
Jan 26 16:16:35 compute-0 ovn_controller[146046]: 2026-01-26T16:16:35Z|01168|binding|INFO|Claiming lport 1751925c-4e5b-4436-af71-dae4e4c07755 for this chassis.
Jan 26 16:16:35 compute-0 ovn_controller[146046]: 2026-01-26T16:16:35Z|01169|binding|INFO|1751925c-4e5b-4436-af71-dae4e4c07755: Claiming fa:16:3e:92:3c:80 10.100.0.10
Jan 26 16:16:35 compute-0 nova_compute[239965]: 2026-01-26 16:16:35.902 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:35 compute-0 NetworkManager[48954]: <info>  [1769444195.9196] manager: (tape462f031-25): new Tun device (/org/freedesktop/NetworkManager/Devices/481)
Jan 26 16:16:35 compute-0 systemd-udevd[339307]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:16:35 compute-0 systemd-udevd[339308]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:16:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:35.953 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:3c:80 10.100.0.10'], port_security=['fa:16:3e:92:3c:80 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '89d77454-1c5d-4556-88b6-e5d4023c8c36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e31774a-f33d-4386-89f9-6e7af3d4c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e91b07fc-e506-4aaa-a11a-7d624ebfb418, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1751925c-4e5b-4436-af71-dae4e4c07755) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:16:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:35.955 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1751925c-4e5b-4436-af71-dae4e4c07755 in datapath 96c2fafe-7dfa-4123-80bb-6807f1870aaa bound to our chassis
Jan 26 16:16:35 compute-0 NetworkManager[48954]: <info>  [1769444195.9574] device (tap1751925c-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:16:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:35.957 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96c2fafe-7dfa-4123-80bb-6807f1870aaa
Jan 26 16:16:35 compute-0 NetworkManager[48954]: <info>  [1769444195.9604] device (tap1751925c-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:16:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:35.969 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9ca2c399-3fe0-4a2b-8cc5-87105b7106e8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:35.970 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap96c2fafe-71 in ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:16:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:35.972 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap96c2fafe-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:16:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:35.972 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcda704-9d3b-4407-98bc-93cd5d908481]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:35.973 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9cd6c0b4-c2fc-4368-ace0-a5671f2311ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:35.984 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8b369e-2277-4d51-a125-29284c1b940d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:35 compute-0 NetworkManager[48954]: <info>  [1769444195.9965] device (tape462f031-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:16:35 compute-0 kernel: tape462f031-25: entered promiscuous mode
Jan 26 16:16:35 compute-0 nova_compute[239965]: 2026-01-26 16:16:35.995 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:35 compute-0 NetworkManager[48954]: <info>  [1769444195.9979] device (tape462f031-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:16:35 compute-0 ovn_controller[146046]: 2026-01-26T16:16:35Z|01170|binding|INFO|Claiming lport e462f031-2502-4089-9f71-ba99ec9f5c8d for this chassis.
Jan 26 16:16:35 compute-0 ovn_controller[146046]: 2026-01-26T16:16:35Z|01171|binding|INFO|e462f031-2502-4089-9f71-ba99ec9f5c8d: Claiming fa:16:3e:10:11:fa 2001:db8::f816:3eff:fe10:11fa
Jan 26 16:16:36 compute-0 ovn_controller[146046]: 2026-01-26T16:16:36Z|01172|binding|INFO|Setting lport 1751925c-4e5b-4436-af71-dae4e4c07755 ovn-installed in OVS
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.009 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.011 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c3f1f60a-80b2-49b2-8988-ff7cfcb2255c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_controller[146046]: 2026-01-26T16:16:36Z|01173|binding|INFO|Setting lport e462f031-2502-4089-9f71-ba99ec9f5c8d ovn-installed in OVS
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.019 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:36 compute-0 systemd-machined[208061]: New machine qemu-140-instance-00000071.
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.046 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6360c701-18f8-40b0-9a49-7f6db4c6db87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 systemd[1]: Started Virtual Machine qemu-140-instance-00000071.
Jan 26 16:16:36 compute-0 NetworkManager[48954]: <info>  [1769444196.0543] manager: (tap96c2fafe-70): new Veth device (/org/freedesktop/NetworkManager/Devices/482)
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.052 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[480690ab-9571-4187-af8f-b1e731c06afa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_controller[146046]: 2026-01-26T16:16:36Z|01174|binding|INFO|Setting lport 1751925c-4e5b-4436-af71-dae4e4c07755 up in Southbound
Jan 26 16:16:36 compute-0 ovn_controller[146046]: 2026-01-26T16:16:36Z|01175|binding|INFO|Setting lport e462f031-2502-4089-9f71-ba99ec9f5c8d up in Southbound
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.073 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:11:fa 2001:db8::f816:3eff:fe10:11fa'], port_security=['fa:16:3e:10:11:fa 2001:db8::f816:3eff:fe10:11fa'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe10:11fa/64', 'neutron:device_id': '89d77454-1c5d-4556-88b6-e5d4023c8c36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e31774a-f33d-4386-89f9-6e7af3d4c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a8893da-ccb7-4082-92ad-5c7938360303, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e462f031-2502-4089-9f71-ba99ec9f5c8d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.094 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a9e44b09-8ae8-45ee-9cdd-175fd24e7aa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.096 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2f5442-abe5-486b-9bc9-413f795f2126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 NetworkManager[48954]: <info>  [1769444196.1241] device (tap96c2fafe-70): carrier: link connected
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.130 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[36ede397-fcf8-4867-9a15-90344c7ac3d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.147 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[83064b17-23b3-4063-b111-9a348db5ac7a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96c2fafe-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:00:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568693, 'reachable_time': 15322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339344, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.164 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a11aca40-bce8-4593-af78-e6ffe6fb200e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568693, 'tstamp': 568693}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339345, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.187 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ae698d15-bf6b-448a-a787-25c17997100d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96c2fafe-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:00:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568693, 'reachable_time': 15322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339346, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.227 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a34d1d1d-84d5-4610-9eaa-2d1ff97e97fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.297 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[085dd44b-72e8-4cee-9547-a672a6c4bf70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.298 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96c2fafe-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.299 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.299 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96c2fafe-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:36 compute-0 NetworkManager[48954]: <info>  [1769444196.3026] manager: (tap96c2fafe-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/483)
Jan 26 16:16:36 compute-0 kernel: tap96c2fafe-70: entered promiscuous mode
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.302 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.304 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96c2fafe-70, col_values=(('external_ids', {'iface-id': 'f840b217-53ba-4982-acb3-2dea2bac979e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.305 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:36 compute-0 ovn_controller[146046]: 2026-01-26T16:16:36Z|01176|binding|INFO|Releasing lport f840b217-53ba-4982-acb3-2dea2bac979e from this chassis (sb_readonly=0)
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.317 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.318 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/96c2fafe-7dfa-4123-80bb-6807f1870aaa.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/96c2fafe-7dfa-4123-80bb-6807f1870aaa.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.320 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f22c68c0-ccba-4272-b7db-4f78721b1b1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.320 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-96c2fafe-7dfa-4123-80bb-6807f1870aaa
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/96c2fafe-7dfa-4123-80bb-6807f1870aaa.pid.haproxy
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 96c2fafe-7dfa-4123-80bb-6807f1870aaa
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:16:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:36.321 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'env', 'PROCESS_TAG=haproxy-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/96c2fafe-7dfa-4123-80bb-6807f1870aaa.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:16:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.445 239969 DEBUG nova.compute.manager [req-91c0df80-1f7b-41bb-8aad-0b6e342c9dab req-aece73f3-161b-44de-ac4e-1980e02be7dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-plugged-1751925c-4e5b-4436-af71-dae4e4c07755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.445 239969 DEBUG oslo_concurrency.lockutils [req-91c0df80-1f7b-41bb-8aad-0b6e342c9dab req-aece73f3-161b-44de-ac4e-1980e02be7dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.446 239969 DEBUG oslo_concurrency.lockutils [req-91c0df80-1f7b-41bb-8aad-0b6e342c9dab req-aece73f3-161b-44de-ac4e-1980e02be7dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.446 239969 DEBUG oslo_concurrency.lockutils [req-91c0df80-1f7b-41bb-8aad-0b6e342c9dab req-aece73f3-161b-44de-ac4e-1980e02be7dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.446 239969 DEBUG nova.compute.manager [req-91c0df80-1f7b-41bb-8aad-0b6e342c9dab req-aece73f3-161b-44de-ac4e-1980e02be7dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Processing event network-vif-plugged-1751925c-4e5b-4436-af71-dae4e4c07755 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:16:36 compute-0 ceph-mon[75140]: pgmap v1977: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.492 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444196.4908555, 89d77454-1c5d-4556-88b6-e5d4023c8c36 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.493 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] VM Started (Lifecycle Event)
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.522 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.528 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444196.4910977, 89d77454-1c5d-4556-88b6-e5d4023c8c36 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.528 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] VM Paused (Lifecycle Event)
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.571 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.575 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:16:36 compute-0 nova_compute[239965]: 2026-01-26 16:16:36.603 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:16:36 compute-0 podman[339421]: 2026-01-26 16:16:36.765470143 +0000 UTC m=+0.096786535 container create f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:16:36 compute-0 podman[339421]: 2026-01-26 16:16:36.697151274 +0000 UTC m=+0.028467696 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:16:36 compute-0 systemd[1]: Started libpod-conmon-f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c.scope.
Jan 26 16:16:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5628a522201256fbf13874cb4c9c09a6e67fd51712a4a84f6cb8b2e3b4491700/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:36 compute-0 podman[339421]: 2026-01-26 16:16:36.943219786 +0000 UTC m=+0.274536198 container init f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 16:16:36 compute-0 podman[339421]: 2026-01-26 16:16:36.954368888 +0000 UTC m=+0.285685280 container start f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:16:36 compute-0 neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa[339436]: [NOTICE]   (339440) : New worker (339442) forked
Jan 26 16:16:36 compute-0 neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa[339436]: [NOTICE]   (339440) : Loading success.
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.039 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e462f031-2502-4089-9f71-ba99ec9f5c8d in datapath 4b542b76-8e6a-4678-aea6-ecc3e7938bb4 unbound from our chassis
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.040 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b542b76-8e6a-4678-aea6-ecc3e7938bb4
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.056 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0097e664-0339-405c-8acc-445daa6dde2a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.057 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4b542b76-81 in ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.059 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4b542b76-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.059 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[12977472-61c8-4599-8984-3038b7bd9afe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.060 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2408671a-3ef7-47d6-aa24-9c4a613ea154]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.077 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[431ca233-e585-442c-bc4c-fe55e5327dce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.106 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0893ac03-782f-4244-a004-83fad44bf3af]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.148 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[567e1c20-3ac1-4976-b32e-54be30793656]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.157 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6db4643e-3058-407b-a32f-601f3eb9b79a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 NetworkManager[48954]: <info>  [1769444197.1592] manager: (tap4b542b76-80): new Veth device (/org/freedesktop/NetworkManager/Devices/484)
Jan 26 16:16:37 compute-0 systemd-udevd[339326]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.193 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[91d9885c-716e-4062-aa3d-2c2f44b5e39f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.196 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b1389696-60a3-446b-b746-402c618624c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 nova_compute[239965]: 2026-01-26 16:16:37.200 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:37 compute-0 NetworkManager[48954]: <info>  [1769444197.2280] device (tap4b542b76-80): carrier: link connected
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.237 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5b487708-82da-485b-9433-9205bc97e891]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.254 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[92e0f915-99ba-41b7-bd89-80fcfc0abbfc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b542b76-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:7a:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568803, 'reachable_time': 19258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 339461, 'error': None, 'target': 'ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.271 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[08a8d7dd-6e52-404b-84b8-93c5a88efd36]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:7afe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568803, 'tstamp': 568803}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 339462, 'error': None, 'target': 'ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.292 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a324a693-00c7-49e5-ab6e-9c0b58d17877]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b542b76-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:7a:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568803, 'reachable_time': 19258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 339463, 'error': None, 'target': 'ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.328 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[47c231d0-4eaf-44ca-baf5-03ef5eff0097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.365 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[209c75e9-35a1-4fe4-bcc1-f213fa5f9b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.367 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b542b76-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.367 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.368 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b542b76-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:37 compute-0 kernel: tap4b542b76-80: entered promiscuous mode
Jan 26 16:16:37 compute-0 NetworkManager[48954]: <info>  [1769444197.4089] manager: (tap4b542b76-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Jan 26 16:16:37 compute-0 nova_compute[239965]: 2026-01-26 16:16:37.408 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:37 compute-0 nova_compute[239965]: 2026-01-26 16:16:37.410 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.413 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b542b76-80, col_values=(('external_ids', {'iface-id': '61b5c2b4-c9ba-4399-9641-b64f1ba83437'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:16:37 compute-0 nova_compute[239965]: 2026-01-26 16:16:37.415 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:37 compute-0 ovn_controller[146046]: 2026-01-26T16:16:37Z|01177|binding|INFO|Releasing lport 61b5c2b4-c9ba-4399-9641-b64f1ba83437 from this chassis (sb_readonly=0)
Jan 26 16:16:37 compute-0 nova_compute[239965]: 2026-01-26 16:16:37.417 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.418 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4b542b76-8e6a-4678-aea6-ecc3e7938bb4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4b542b76-8e6a-4678-aea6-ecc3e7938bb4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.419 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[81fa8859-8123-4beb-9fce-b1b8e988eb91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.420 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-4b542b76-8e6a-4678-aea6-ecc3e7938bb4
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/4b542b76-8e6a-4678-aea6-ecc3e7938bb4.pid.haproxy
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 4b542b76-8e6a-4678-aea6-ecc3e7938bb4
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:16:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:37.421 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'env', 'PROCESS_TAG=haproxy-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4b542b76-8e6a-4678-aea6-ecc3e7938bb4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:16:37 compute-0 nova_compute[239965]: 2026-01-26 16:16:37.434 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:37 compute-0 podman[339494]: 2026-01-26 16:16:37.920403819 +0000 UTC m=+0.121733055 container create a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:16:37 compute-0 podman[339494]: 2026-01-26 16:16:37.836440308 +0000 UTC m=+0.037769624 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:16:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:37 compute-0 systemd[1]: Started libpod-conmon-a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d.scope.
Jan 26 16:16:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:16:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cf1f6239a7568407f7bccf46ede043b5503f750377318541ec00af8b0dd4316/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:38 compute-0 podman[339494]: 2026-01-26 16:16:38.038872163 +0000 UTC m=+0.240201479 container init a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:16:38 compute-0 podman[339494]: 2026-01-26 16:16:38.048853177 +0000 UTC m=+0.250182443 container start a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:38 compute-0 neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4[339510]: [NOTICE]   (339514) : New worker (339516) forked
Jan 26 16:16:38 compute-0 neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4[339510]: [NOTICE]   (339514) : Loading success.
Jan 26 16:16:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 MiB/s wr, 22 op/s
Jan 26 16:16:38 compute-0 ceph-mon[75140]: pgmap v1978: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.4 MiB/s wr, 22 op/s
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.616 239969 DEBUG nova.compute.manager [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-plugged-1751925c-4e5b-4436-af71-dae4e4c07755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.617 239969 DEBUG oslo_concurrency.lockutils [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.617 239969 DEBUG oslo_concurrency.lockutils [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.617 239969 DEBUG oslo_concurrency.lockutils [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.618 239969 DEBUG nova.compute.manager [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] No event matching network-vif-plugged-1751925c-4e5b-4436-af71-dae4e4c07755 in dict_keys([('network-vif-plugged', 'e462f031-2502-4089-9f71-ba99ec9f5c8d')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.618 239969 WARNING nova.compute.manager [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received unexpected event network-vif-plugged-1751925c-4e5b-4436-af71-dae4e4c07755 for instance with vm_state building and task_state spawning.
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.618 239969 DEBUG nova.compute.manager [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-plugged-e462f031-2502-4089-9f71-ba99ec9f5c8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.618 239969 DEBUG oslo_concurrency.lockutils [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.619 239969 DEBUG oslo_concurrency.lockutils [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.619 239969 DEBUG oslo_concurrency.lockutils [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.619 239969 DEBUG nova.compute.manager [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Processing event network-vif-plugged-e462f031-2502-4089-9f71-ba99ec9f5c8d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.620 239969 DEBUG nova.compute.manager [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-plugged-e462f031-2502-4089-9f71-ba99ec9f5c8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.620 239969 DEBUG oslo_concurrency.lockutils [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.620 239969 DEBUG oslo_concurrency.lockutils [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.620 239969 DEBUG oslo_concurrency.lockutils [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.620 239969 DEBUG nova.compute.manager [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] No waiting events found dispatching network-vif-plugged-e462f031-2502-4089-9f71-ba99ec9f5c8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.621 239969 WARNING nova.compute.manager [req-959e8bf1-9292-474c-a7c9-0f2ccebe1f0b req-3bf067e5-194e-417f-b804-2a00059ab1c4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received unexpected event network-vif-plugged-e462f031-2502-4089-9f71-ba99ec9f5c8d for instance with vm_state building and task_state spawning.
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.623 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.630 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444198.6297054, 89d77454-1c5d-4556-88b6-e5d4023c8c36 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.630 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] VM Resumed (Lifecycle Event)
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.632 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.638 239969 INFO nova.virt.libvirt.driver [-] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Instance spawned successfully.
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.638 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.661 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.671 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.675 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.675 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.676 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.676 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.677 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.677 239969 DEBUG nova.virt.libvirt.driver [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.704 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.740 239969 INFO nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Took 14.26 seconds to spawn the instance on the hypervisor.
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.741 239969 DEBUG nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.815 239969 INFO nova.compute.manager [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Took 15.34 seconds to build instance.
Jan 26 16:16:38 compute-0 nova_compute[239965]: 2026-01-26 16:16:38.850 239969 DEBUG oslo_concurrency.lockutils [None req-734541ce-a043-4b59-8426-d3aff00f7d99 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 7 op/s
Jan 26 16:16:40 compute-0 ceph-mon[75140]: pgmap v1979: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 5.1 KiB/s rd, 12 KiB/s wr, 7 op/s
Jan 26 16:16:41 compute-0 nova_compute[239965]: 2026-01-26 16:16:41.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:41 compute-0 sudo[339525]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:16:41 compute-0 sudo[339525]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:16:41 compute-0 sudo[339525]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:41 compute-0 sudo[339550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:16:41 compute-0 sudo[339550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:16:42 compute-0 nova_compute[239965]: 2026-01-26 16:16:42.202 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 26 16:16:42 compute-0 sudo[339550]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:42 compute-0 ceph-mon[75140]: pgmap v1980: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 12 KiB/s wr, 31 op/s
Jan 26 16:16:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:16:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:16:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:16:42 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:16:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:16:42 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:16:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:16:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:16:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:16:42 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:16:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:16:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:16:42 compute-0 sudo[339606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:16:42 compute-0 sudo[339606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:16:42 compute-0 sudo[339606]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:42 compute-0 sudo[339631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:16:42 compute-0 sudo[339631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:16:42 compute-0 podman[339668]: 2026-01-26 16:16:42.870592023 +0000 UTC m=+0.037567489 container create fe08c83e0ce478b1b5e7bd0aa3f21944029c7a1360b3986d4e275f20d0278f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:16:42 compute-0 systemd[1]: Started libpod-conmon-fe08c83e0ce478b1b5e7bd0aa3f21944029c7a1360b3986d4e275f20d0278f90.scope.
Jan 26 16:16:42 compute-0 podman[339668]: 2026-01-26 16:16:42.853172928 +0000 UTC m=+0.020148394 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:16:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:16:43 compute-0 podman[339668]: 2026-01-26 16:16:43.005202882 +0000 UTC m=+0.172178348 container init fe08c83e0ce478b1b5e7bd0aa3f21944029c7a1360b3986d4e275f20d0278f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 16:16:43 compute-0 podman[339668]: 2026-01-26 16:16:43.012691805 +0000 UTC m=+0.179667251 container start fe08c83e0ce478b1b5e7bd0aa3f21944029c7a1360b3986d4e275f20d0278f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:16:43 compute-0 podman[339668]: 2026-01-26 16:16:43.016467637 +0000 UTC m=+0.183443103 container attach fe08c83e0ce478b1b5e7bd0aa3f21944029c7a1360b3986d4e275f20d0278f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:16:43 compute-0 great_noether[339684]: 167 167
Jan 26 16:16:43 compute-0 systemd[1]: libpod-fe08c83e0ce478b1b5e7bd0aa3f21944029c7a1360b3986d4e275f20d0278f90.scope: Deactivated successfully.
Jan 26 16:16:43 compute-0 podman[339668]: 2026-01-26 16:16:43.020542156 +0000 UTC m=+0.187517602 container died fe08c83e0ce478b1b5e7bd0aa3f21944029c7a1360b3986d4e275f20d0278f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:16:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f268532e25da543ce3ff1516b8d25a9fa645889ca72217a2c7e6303fee975f-merged.mount: Deactivated successfully.
Jan 26 16:16:43 compute-0 nova_compute[239965]: 2026-01-26 16:16:43.055 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:43 compute-0 podman[339668]: 2026-01-26 16:16:43.09635911 +0000 UTC m=+0.263334576 container remove fe08c83e0ce478b1b5e7bd0aa3f21944029c7a1360b3986d4e275f20d0278f90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_noether, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:16:43 compute-0 systemd[1]: libpod-conmon-fe08c83e0ce478b1b5e7bd0aa3f21944029c7a1360b3986d4e275f20d0278f90.scope: Deactivated successfully.
Jan 26 16:16:43 compute-0 podman[339708]: 2026-01-26 16:16:43.316847126 +0000 UTC m=+0.074167533 container create a9e6c51f13ae8a5daeeace0c6be7243fbc9d6bfea980769677493ea0a58948d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_chaum, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:16:43 compute-0 systemd[1]: Started libpod-conmon-a9e6c51f13ae8a5daeeace0c6be7243fbc9d6bfea980769677493ea0a58948d6.scope.
Jan 26 16:16:43 compute-0 ovn_controller[146046]: 2026-01-26T16:16:43Z|01178|binding|INFO|Releasing lport f840b217-53ba-4982-acb3-2dea2bac979e from this chassis (sb_readonly=0)
Jan 26 16:16:43 compute-0 ovn_controller[146046]: 2026-01-26T16:16:43Z|01179|binding|INFO|Releasing lport 61b5c2b4-c9ba-4399-9641-b64f1ba83437 from this chassis (sb_readonly=0)
Jan 26 16:16:43 compute-0 nova_compute[239965]: 2026-01-26 16:16:43.362 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:43 compute-0 NetworkManager[48954]: <info>  [1769444203.3647] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/486)
Jan 26 16:16:43 compute-0 NetworkManager[48954]: <info>  [1769444203.3667] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/487)
Jan 26 16:16:43 compute-0 podman[339708]: 2026-01-26 16:16:43.270459343 +0000 UTC m=+0.027779770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:16:43 compute-0 ovn_controller[146046]: 2026-01-26T16:16:43Z|01180|binding|INFO|Releasing lport f840b217-53ba-4982-acb3-2dea2bac979e from this chassis (sb_readonly=0)
Jan 26 16:16:43 compute-0 ovn_controller[146046]: 2026-01-26T16:16:43Z|01181|binding|INFO|Releasing lport 61b5c2b4-c9ba-4399-9641-b64f1ba83437 from this chassis (sb_readonly=0)
Jan 26 16:16:43 compute-0 nova_compute[239965]: 2026-01-26 16:16:43.404 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:16:43 compute-0 nova_compute[239965]: 2026-01-26 16:16:43.411 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3fbf29819ff6538e08bef0f8b9243a64c2f833fd94ae51c6997099507536c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3fbf29819ff6538e08bef0f8b9243a64c2f833fd94ae51c6997099507536c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3fbf29819ff6538e08bef0f8b9243a64c2f833fd94ae51c6997099507536c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3fbf29819ff6538e08bef0f8b9243a64c2f833fd94ae51c6997099507536c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f3fbf29819ff6538e08bef0f8b9243a64c2f833fd94ae51c6997099507536c7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:16:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:16:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:16:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:16:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:16:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:16:43 compute-0 podman[339708]: 2026-01-26 16:16:43.471883924 +0000 UTC m=+0.229204421 container init a9e6c51f13ae8a5daeeace0c6be7243fbc9d6bfea980769677493ea0a58948d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_chaum, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:16:43 compute-0 podman[339708]: 2026-01-26 16:16:43.480903223 +0000 UTC m=+0.238223630 container start a9e6c51f13ae8a5daeeace0c6be7243fbc9d6bfea980769677493ea0a58948d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_chaum, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 16:16:43 compute-0 podman[339708]: 2026-01-26 16:16:43.484695166 +0000 UTC m=+0.242015603 container attach a9e6c51f13ae8a5daeeace0c6be7243fbc9d6bfea980769677493ea0a58948d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:16:43 compute-0 clever_chaum[339725]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:16:43 compute-0 clever_chaum[339725]: --> All data devices are unavailable
Jan 26 16:16:43 compute-0 systemd[1]: libpod-a9e6c51f13ae8a5daeeace0c6be7243fbc9d6bfea980769677493ea0a58948d6.scope: Deactivated successfully.
Jan 26 16:16:43 compute-0 podman[339708]: 2026-01-26 16:16:43.952376392 +0000 UTC m=+0.709696799 container died a9e6c51f13ae8a5daeeace0c6be7243fbc9d6bfea980769677493ea0a58948d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_chaum, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 16:16:44 compute-0 nova_compute[239965]: 2026-01-26 16:16:44.091 239969 DEBUG nova.compute.manager [req-dfda87c3-30d1-4c6b-ac4d-4bff4cfe6789 req-52b381e1-52a0-455c-85db-ed3b143c3dfd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-changed-1751925c-4e5b-4436-af71-dae4e4c07755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:16:44 compute-0 nova_compute[239965]: 2026-01-26 16:16:44.091 239969 DEBUG nova.compute.manager [req-dfda87c3-30d1-4c6b-ac4d-4bff4cfe6789 req-52b381e1-52a0-455c-85db-ed3b143c3dfd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Refreshing instance network info cache due to event network-changed-1751925c-4e5b-4436-af71-dae4e4c07755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:16:44 compute-0 nova_compute[239965]: 2026-01-26 16:16:44.091 239969 DEBUG oslo_concurrency.lockutils [req-dfda87c3-30d1-4c6b-ac4d-4bff4cfe6789 req-52b381e1-52a0-455c-85db-ed3b143c3dfd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:16:44 compute-0 nova_compute[239965]: 2026-01-26 16:16:44.092 239969 DEBUG oslo_concurrency.lockutils [req-dfda87c3-30d1-4c6b-ac4d-4bff4cfe6789 req-52b381e1-52a0-455c-85db-ed3b143c3dfd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:16:44 compute-0 nova_compute[239965]: 2026-01-26 16:16:44.092 239969 DEBUG nova.network.neutron [req-dfda87c3-30d1-4c6b-ac4d-4bff4cfe6789 req-52b381e1-52a0-455c-85db-ed3b143c3dfd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Refreshing network info cache for port 1751925c-4e5b-4436-af71-dae4e4c07755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:16:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f3fbf29819ff6538e08bef0f8b9243a64c2f833fd94ae51c6997099507536c7-merged.mount: Deactivated successfully.
Jan 26 16:16:45 compute-0 ceph-mon[75140]: pgmap v1981: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:16:45 compute-0 podman[339708]: 2026-01-26 16:16:45.381324991 +0000 UTC m=+2.138645438 container remove a9e6c51f13ae8a5daeeace0c6be7243fbc9d6bfea980769677493ea0a58948d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_chaum, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:16:45 compute-0 systemd[1]: libpod-conmon-a9e6c51f13ae8a5daeeace0c6be7243fbc9d6bfea980769677493ea0a58948d6.scope: Deactivated successfully.
Jan 26 16:16:45 compute-0 sudo[339631]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:45 compute-0 sudo[339757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:16:45 compute-0 sudo[339757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:16:45 compute-0 sudo[339757]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:45 compute-0 nova_compute[239965]: 2026-01-26 16:16:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:45 compute-0 sudo[339782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:16:45 compute-0 sudo[339782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:16:45 compute-0 podman[339819]: 2026-01-26 16:16:45.890949531 +0000 UTC m=+0.082368992 container create cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 16:16:45 compute-0 systemd[1]: Started libpod-conmon-cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb.scope.
Jan 26 16:16:45 compute-0 podman[339819]: 2026-01-26 16:16:45.839043323 +0000 UTC m=+0.030462864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:16:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:16:45 compute-0 podman[339819]: 2026-01-26 16:16:45.973267973 +0000 UTC m=+0.164687464 container init cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:16:45 compute-0 podman[339819]: 2026-01-26 16:16:45.980484069 +0000 UTC m=+0.171903530 container start cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:16:45 compute-0 podman[339819]: 2026-01-26 16:16:45.983930173 +0000 UTC m=+0.175349664 container attach cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:16:45 compute-0 optimistic_dubinsky[339836]: 167 167
Jan 26 16:16:45 compute-0 systemd[1]: libpod-cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb.scope: Deactivated successfully.
Jan 26 16:16:45 compute-0 conmon[339836]: conmon cd0b90491e51e7d5614a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb.scope/container/memory.events
Jan 26 16:16:45 compute-0 podman[339819]: 2026-01-26 16:16:45.988988406 +0000 UTC m=+0.180407887 container died cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-29c37cdff633b231f4d642bd6a6e640572d05d5d27ff7133d1c0066bea2c6bf6-merged.mount: Deactivated successfully.
Jan 26 16:16:46 compute-0 podman[339819]: 2026-01-26 16:16:46.023804777 +0000 UTC m=+0.215224268 container remove cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:16:46 compute-0 systemd[1]: libpod-conmon-cd0b90491e51e7d5614a2e36c485a440c04f9088ba2821b01ebf75ed51de6adb.scope: Deactivated successfully.
Jan 26 16:16:46 compute-0 podman[339860]: 2026-01-26 16:16:46.264659921 +0000 UTC m=+0.086896974 container create 462dae8141d93496a72570300031d9f58138481e10a00754d8c2a3a8ea25449a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ishizaka, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:16:46 compute-0 podman[339860]: 2026-01-26 16:16:46.201617051 +0000 UTC m=+0.023854134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:16:46 compute-0 systemd[1]: Started libpod-conmon-462dae8141d93496a72570300031d9f58138481e10a00754d8c2a3a8ea25449a.scope.
Jan 26 16:16:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69a62dde3b04435cf254b856cd316be2e79d85da8b7d125de2d91e75fe4fedf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69a62dde3b04435cf254b856cd316be2e79d85da8b7d125de2d91e75fe4fedf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69a62dde3b04435cf254b856cd316be2e79d85da8b7d125de2d91e75fe4fedf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a69a62dde3b04435cf254b856cd316be2e79d85da8b7d125de2d91e75fe4fedf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:16:46 compute-0 podman[339860]: 2026-01-26 16:16:46.349122685 +0000 UTC m=+0.171359758 container init 462dae8141d93496a72570300031d9f58138481e10a00754d8c2a3a8ea25449a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:16:46 compute-0 podman[339860]: 2026-01-26 16:16:46.355419279 +0000 UTC m=+0.177656332 container start 462dae8141d93496a72570300031d9f58138481e10a00754d8c2a3a8ea25449a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:16:46 compute-0 podman[339860]: 2026-01-26 16:16:46.359722414 +0000 UTC m=+0.181959487 container attach 462dae8141d93496a72570300031d9f58138481e10a00754d8c2a3a8ea25449a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ishizaka, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:16:46 compute-0 ceph-mon[75140]: pgmap v1982: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]: {
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:     "0": [
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:         {
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "devices": [
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "/dev/loop3"
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             ],
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_name": "ceph_lv0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_size": "21470642176",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "name": "ceph_lv0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "tags": {
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.cluster_name": "ceph",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.crush_device_class": "",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.encrypted": "0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.objectstore": "bluestore",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.osd_id": "0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.type": "block",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.vdo": "0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.with_tpm": "0"
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             },
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "type": "block",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "vg_name": "ceph_vg0"
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:         }
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:     ],
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:     "1": [
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:         {
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "devices": [
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "/dev/loop4"
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             ],
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_name": "ceph_lv1",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_size": "21470642176",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "name": "ceph_lv1",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "tags": {
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.cluster_name": "ceph",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.crush_device_class": "",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.encrypted": "0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.objectstore": "bluestore",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.osd_id": "1",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.type": "block",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.vdo": "0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.with_tpm": "0"
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             },
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "type": "block",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "vg_name": "ceph_vg1"
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:         }
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:     ],
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:     "2": [
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:         {
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "devices": [
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "/dev/loop5"
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             ],
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_name": "ceph_lv2",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_size": "21470642176",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "name": "ceph_lv2",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "tags": {
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.cluster_name": "ceph",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.crush_device_class": "",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.encrypted": "0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.objectstore": "bluestore",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.osd_id": "2",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.type": "block",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.vdo": "0",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:                 "ceph.with_tpm": "0"
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             },
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "type": "block",
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:             "vg_name": "ceph_vg2"
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:         }
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]:     ]
Jan 26 16:16:46 compute-0 exciting_ishizaka[339877]: }
Jan 26 16:16:46 compute-0 systemd[1]: libpod-462dae8141d93496a72570300031d9f58138481e10a00754d8c2a3a8ea25449a.scope: Deactivated successfully.
Jan 26 16:16:46 compute-0 podman[339860]: 2026-01-26 16:16:46.656638707 +0000 UTC m=+0.478875760 container died 462dae8141d93496a72570300031d9f58138481e10a00754d8c2a3a8ea25449a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ishizaka, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:16:46 compute-0 nova_compute[239965]: 2026-01-26 16:16:46.698 239969 DEBUG nova.network.neutron [req-dfda87c3-30d1-4c6b-ac4d-4bff4cfe6789 req-52b381e1-52a0-455c-85db-ed3b143c3dfd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updated VIF entry in instance network info cache for port 1751925c-4e5b-4436-af71-dae4e4c07755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:16:46 compute-0 nova_compute[239965]: 2026-01-26 16:16:46.700 239969 DEBUG nova.network.neutron [req-dfda87c3-30d1-4c6b-ac4d-4bff4cfe6789 req-52b381e1-52a0-455c-85db-ed3b143c3dfd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updating instance_info_cache with network_info: [{"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:16:46 compute-0 nova_compute[239965]: 2026-01-26 16:16:46.763 239969 DEBUG oslo_concurrency.lockutils [req-dfda87c3-30d1-4c6b-ac4d-4bff4cfe6789 req-52b381e1-52a0-455c-85db-ed3b143c3dfd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:16:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-a69a62dde3b04435cf254b856cd316be2e79d85da8b7d125de2d91e75fe4fedf-merged.mount: Deactivated successfully.
Jan 26 16:16:46 compute-0 podman[339860]: 2026-01-26 16:16:46.997102475 +0000 UTC m=+0.819339528 container remove 462dae8141d93496a72570300031d9f58138481e10a00754d8c2a3a8ea25449a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 16:16:47 compute-0 systemd[1]: libpod-conmon-462dae8141d93496a72570300031d9f58138481e10a00754d8c2a3a8ea25449a.scope: Deactivated successfully.
Jan 26 16:16:47 compute-0 sudo[339782]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:47 compute-0 sudo[339897]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:16:47 compute-0 sudo[339897]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:16:47 compute-0 sudo[339897]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:47 compute-0 sudo[339922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:16:47 compute-0 sudo[339922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:16:47 compute-0 nova_compute[239965]: 2026-01-26 16:16:47.204 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:47 compute-0 nova_compute[239965]: 2026-01-26 16:16:47.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:47 compute-0 podman[339960]: 2026-01-26 16:16:47.53759072 +0000 UTC m=+0.111743081 container create 39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wing, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:16:47 compute-0 podman[339960]: 2026-01-26 16:16:47.455179377 +0000 UTC m=+0.029331768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:16:47 compute-0 systemd[1]: Started libpod-conmon-39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3.scope.
Jan 26 16:16:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:16:47 compute-0 podman[339960]: 2026-01-26 16:16:47.639215782 +0000 UTC m=+0.213368163 container init 39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 26 16:16:47 compute-0 podman[339960]: 2026-01-26 16:16:47.64812595 +0000 UTC m=+0.222278311 container start 39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wing, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:16:47 compute-0 podman[339960]: 2026-01-26 16:16:47.651920003 +0000 UTC m=+0.226072364 container attach 39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wing, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:16:47 compute-0 bold_wing[339976]: 167 167
Jan 26 16:16:47 compute-0 systemd[1]: libpod-39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3.scope: Deactivated successfully.
Jan 26 16:16:47 compute-0 conmon[339976]: conmon 39a1c09188caa87605a4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3.scope/container/memory.events
Jan 26 16:16:47 compute-0 podman[339960]: 2026-01-26 16:16:47.655629924 +0000 UTC m=+0.229782285 container died 39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 16:16:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c1a84172ff965f2a6729ed31b4704586209930a328b594c48185e6d7d48f5953-merged.mount: Deactivated successfully.
Jan 26 16:16:47 compute-0 podman[339960]: 2026-01-26 16:16:47.742111127 +0000 UTC m=+0.316263488 container remove 39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wing, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:16:47 compute-0 systemd[1]: libpod-conmon-39a1c09188caa87605a40e91a5e736d483073c4168f8e3b9e6f8094933dc02c3.scope: Deactivated successfully.
Jan 26 16:16:47 compute-0 podman[340000]: 2026-01-26 16:16:47.919248404 +0000 UTC m=+0.046254321 container create e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_rubin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:16:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:47 compute-0 systemd[1]: Started libpod-conmon-e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20.scope.
Jan 26 16:16:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc35507a513da06ca22401d0a698caccc2f1e38db50e663cecf8dfcdcd9d6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc35507a513da06ca22401d0a698caccc2f1e38db50e663cecf8dfcdcd9d6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:47 compute-0 podman[340000]: 2026-01-26 16:16:47.898678641 +0000 UTC m=+0.025684598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc35507a513da06ca22401d0a698caccc2f1e38db50e663cecf8dfcdcd9d6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc35507a513da06ca22401d0a698caccc2f1e38db50e663cecf8dfcdcd9d6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:16:48 compute-0 podman[340000]: 2026-01-26 16:16:48.02225482 +0000 UTC m=+0.149260767 container init e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_rubin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 16:16:48 compute-0 podman[340000]: 2026-01-26 16:16:48.030865541 +0000 UTC m=+0.157871448 container start e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_rubin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:16:48 compute-0 podman[340000]: 2026-01-26 16:16:48.035860942 +0000 UTC m=+0.162866869 container attach e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_rubin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:16:48 compute-0 nova_compute[239965]: 2026-01-26 16:16:48.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 72 op/s
Jan 26 16:16:48 compute-0 ceph-mon[75140]: pgmap v1983: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 426 B/s wr, 72 op/s
Jan 26 16:16:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:16:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2519177321' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:16:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:16:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2519177321' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:16:48 compute-0 lvm[340095]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:16:48 compute-0 lvm[340095]: VG ceph_vg0 finished
Jan 26 16:16:48 compute-0 lvm[340096]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:16:48 compute-0 lvm[340096]: VG ceph_vg1 finished
Jan 26 16:16:48 compute-0 lvm[340098]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:16:48 compute-0 lvm[340098]: VG ceph_vg2 finished
Jan 26 16:16:48 compute-0 sweet_rubin[340017]: {}
Jan 26 16:16:48 compute-0 systemd[1]: libpod-e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20.scope: Deactivated successfully.
Jan 26 16:16:48 compute-0 podman[340000]: 2026-01-26 16:16:48.935580653 +0000 UTC m=+1.062586560 container died e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_rubin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:16:48 compute-0 systemd[1]: libpod-e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20.scope: Consumed 1.373s CPU time.
Jan 26 16:16:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5cc35507a513da06ca22401d0a698caccc2f1e38db50e663cecf8dfcdcd9d6f-merged.mount: Deactivated successfully.
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003618790689309283 of space, bias 1.0, pg target 0.1085637206792785 quantized to 32 (current 32)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010184452404398048 of space, bias 1.0, pg target 0.30553357213194143 quantized to 32 (current 32)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.117936455459033e-07 of space, bias 4.0, pg target 0.000854152374655084 quantized to 16 (current 16)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:16:48 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:16:49 compute-0 podman[340000]: 2026-01-26 16:16:49.045415236 +0000 UTC m=+1.172421153 container remove e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_rubin, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:16:49 compute-0 systemd[1]: libpod-conmon-e83bad857c8c3a2d1236835b27f3b13629d3af62c197a51c876d3a1b5d4b2a20.scope: Deactivated successfully.
Jan 26 16:16:49 compute-0 sudo[339922]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:16:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:16:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:16:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:16:49 compute-0 sudo[340115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:16:49 compute-0 sudo[340115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:16:49 compute-0 sudo[340115]: pam_unix(sudo:session): session closed for user root
Jan 26 16:16:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2519177321' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:16:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2519177321' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:16:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:16:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:16:49 compute-0 nova_compute[239965]: 2026-01-26 16:16:49.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Jan 26 16:16:50 compute-0 ceph-mon[75140]: pgmap v1984: 305 pgs: 305 active+clean; 134 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Jan 26 16:16:50 compute-0 nova_compute[239965]: 2026-01-26 16:16:50.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:50 compute-0 nova_compute[239965]: 2026-01-26 16:16:50.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:16:51 compute-0 nova_compute[239965]: 2026-01-26 16:16:51.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:51 compute-0 nova_compute[239965]: 2026-01-26 16:16:51.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:51 compute-0 nova_compute[239965]: 2026-01-26 16:16:51.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:16:51 compute-0 nova_compute[239965]: 2026-01-26 16:16:51.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:16:52 compute-0 nova_compute[239965]: 2026-01-26 16:16:52.146 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:16:52 compute-0 nova_compute[239965]: 2026-01-26 16:16:52.146 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:16:52 compute-0 nova_compute[239965]: 2026-01-26 16:16:52.146 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:16:52 compute-0 nova_compute[239965]: 2026-01-26 16:16:52.147 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 89d77454-1c5d-4556-88b6-e5d4023c8c36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:16:52 compute-0 nova_compute[239965]: 2026-01-26 16:16:52.206 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 148 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 86 op/s
Jan 26 16:16:52 compute-0 ceph-mon[75140]: pgmap v1985: 305 pgs: 305 active+clean; 148 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 86 op/s
Jan 26 16:16:52 compute-0 ovn_controller[146046]: 2026-01-26T16:16:52Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:92:3c:80 10.100.0.10
Jan 26 16:16:52 compute-0 ovn_controller[146046]: 2026-01-26T16:16:52Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:92:3c:80 10.100.0.10
Jan 26 16:16:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:53 compute-0 nova_compute[239965]: 2026-01-26 16:16:53.011 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:53 compute-0 nova_compute[239965]: 2026-01-26 16:16:53.061 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 160 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 26 16:16:54 compute-0 ceph-mon[75140]: pgmap v1986: 305 pgs: 305 active+clean; 160 MiB data, 808 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Jan 26 16:16:55 compute-0 podman[340140]: 2026-01-26 16:16:55.451582601 +0000 UTC m=+0.129018163 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:16:55 compute-0 podman[340141]: 2026-01-26 16:16:55.484143006 +0000 UTC m=+0.164099870 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.564 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updating instance_info_cache with network_info: [{"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", 
"mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.589 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.589 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.590 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.590 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.614 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.614 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.614 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.614 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:16:55 compute-0 nova_compute[239965]: 2026-01-26 16:16:55.615 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:16:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:16:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2402416045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.158 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:16:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2402416045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.223 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.224 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:16:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.395 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:56.394 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:16:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:56.396 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.413 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.415 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3571MB free_disk=59.942139610648155GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.415 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.415 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.494 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 89d77454-1c5d-4556-88b6-e5d4023c8c36 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.495 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.495 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:16:56 compute-0 nova_compute[239965]: 2026-01-26 16:16:56.543 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:16:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:16:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020380706' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:16:57 compute-0 nova_compute[239965]: 2026-01-26 16:16:57.186 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:16:57 compute-0 nova_compute[239965]: 2026-01-26 16:16:57.191 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:16:57 compute-0 ceph-mon[75140]: pgmap v1987: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 324 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:16:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4020380706' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:16:57 compute-0 nova_compute[239965]: 2026-01-26 16:16:57.207 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:16:57 compute-0 nova_compute[239965]: 2026-01-26 16:16:57.211 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:57 compute-0 nova_compute[239965]: 2026-01-26 16:16:57.233 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:16:57 compute-0 nova_compute[239965]: 2026-01-26 16:16:57.233 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:16:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:16:58 compute-0 nova_compute[239965]: 2026-01-26 16:16:58.063 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:16:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:16:58 compute-0 ceph-mon[75140]: pgmap v1988: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:16:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:59.240 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:16:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:59.240 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:16:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:16:59.241 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:00 compute-0 nova_compute[239965]: 2026-01-26 16:17:00.163 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:17:00 compute-0 ceph-mon[75140]: pgmap v1989: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:17:02 compute-0 nova_compute[239965]: 2026-01-26 16:17:02.212 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:17:02 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:02.398 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:03 compute-0 ceph-mon[75140]: pgmap v1990: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.091 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.092 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.113 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.216 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.217 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.224 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.224 239969 INFO nova.compute.claims [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.348 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:17:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/205267108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.922 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.929 239969 DEBUG nova.compute.provider_tree [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.948 239969 DEBUG nova.scheduler.client.report [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.973 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.756s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:03 compute-0 nova_compute[239965]: 2026-01-26 16:17:03.973 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.032 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.033 239969 DEBUG nova.network.neutron [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.052 239969 INFO nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.067 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:17:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/205267108' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.145 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.146 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.146 239969 INFO nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Creating image(s)
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.165 239969 DEBUG nova.storage.rbd_utils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 311f59c2-3926-4794-b450-819f63642a6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.188 239969 DEBUG nova.storage.rbd_utils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 311f59c2-3926-4794-b450-819f63642a6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.210 239969 DEBUG nova.storage.rbd_utils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 311f59c2-3926-4794-b450-819f63642a6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.213 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.296 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.297 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.298 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.298 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.317 239969 DEBUG nova.storage.rbd_utils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 311f59c2-3926-4794-b450-819f63642a6b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.321 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 311f59c2-3926-4794-b450-819f63642a6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 941 KiB/s wr, 44 op/s
Jan 26 16:17:04 compute-0 nova_compute[239965]: 2026-01-26 16:17:04.359 239969 DEBUG nova.policy [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:17:05 compute-0 ceph-mon[75140]: pgmap v1991: 305 pgs: 305 active+clean; 167 MiB data, 815 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 941 KiB/s wr, 44 op/s
Jan 26 16:17:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 175 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 298 KiB/s wr, 5 op/s
Jan 26 16:17:06 compute-0 ceph-mon[75140]: pgmap v1992: 305 pgs: 305 active+clean; 175 MiB data, 819 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 298 KiB/s wr, 5 op/s
Jan 26 16:17:06 compute-0 nova_compute[239965]: 2026-01-26 16:17:06.660 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 311f59c2-3926-4794-b450-819f63642a6b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:06 compute-0 nova_compute[239965]: 2026-01-26 16:17:06.714 239969 DEBUG nova.storage.rbd_utils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 311f59c2-3926-4794-b450-819f63642a6b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:17:06 compute-0 nova_compute[239965]: 2026-01-26 16:17:06.949 239969 DEBUG nova.network.neutron [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Successfully created port: 623e70ba-12bf-443e-abb2-f259acba930d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:17:06 compute-0 nova_compute[239965]: 2026-01-26 16:17:06.956 239969 DEBUG nova.objects.instance [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 311f59c2-3926-4794-b450-819f63642a6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:17:06 compute-0 nova_compute[239965]: 2026-01-26 16:17:06.976 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:17:06 compute-0 nova_compute[239965]: 2026-01-26 16:17:06.977 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Ensure instance console log exists: /var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:17:06 compute-0 nova_compute[239965]: 2026-01-26 16:17:06.978 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:06 compute-0 nova_compute[239965]: 2026-01-26 16:17:06.978 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:06 compute-0 nova_compute[239965]: 2026-01-26 16:17:06.978 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.216 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.218 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.218 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.259 239969 DEBUG nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.388 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.389 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.395 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.395 239969 INFO nova.compute.claims [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.547 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:07 compute-0 nova_compute[239965]: 2026-01-26 16:17:07.878 239969 DEBUG nova.network.neutron [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Successfully created port: 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:17:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.067 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:17:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2060226324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.107 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.114 239969 DEBUG nova.compute.provider_tree [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.132 239969 DEBUG nova.scheduler.client.report [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.195 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.196 239969 DEBUG nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.272 239969 DEBUG nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.273 239969 DEBUG nova.network.neutron [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.309 239969 INFO nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:17:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 213 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.345 239969 DEBUG nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.487 239969 DEBUG nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.488 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.489 239969 INFO nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Creating image(s)
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.510 239969 DEBUG nova.storage.rbd_utils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.530 239969 DEBUG nova.storage.rbd_utils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.549 239969 DEBUG nova.storage.rbd_utils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.553 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.601 239969 DEBUG nova.policy [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.642 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.643 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.644 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.644 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2060226324' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:08 compute-0 ceph-mon[75140]: pgmap v1993: 305 pgs: 305 active+clean; 213 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.678 239969 DEBUG nova.storage.rbd_utils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:08 compute-0 nova_compute[239965]: 2026-01-26 16:17:08.681 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:09 compute-0 nova_compute[239965]: 2026-01-26 16:17:09.193 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:09 compute-0 nova_compute[239965]: 2026-01-26 16:17:09.250 239969 DEBUG nova.storage.rbd_utils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:17:09 compute-0 nova_compute[239965]: 2026-01-26 16:17:09.571 239969 DEBUG nova.objects.instance [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:17:09 compute-0 nova_compute[239965]: 2026-01-26 16:17:09.592 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:17:09 compute-0 nova_compute[239965]: 2026-01-26 16:17:09.593 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Ensure instance console log exists: /var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:17:09 compute-0 nova_compute[239965]: 2026-01-26 16:17:09.594 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:09 compute-0 nova_compute[239965]: 2026-01-26 16:17:09.595 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:09 compute-0 nova_compute[239965]: 2026-01-26 16:17:09.595 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:10 compute-0 nova_compute[239965]: 2026-01-26 16:17:10.135 239969 DEBUG nova.network.neutron [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Successfully updated port: 623e70ba-12bf-443e-abb2-f259acba930d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:17:10 compute-0 nova_compute[239965]: 2026-01-26 16:17:10.239 239969 DEBUG nova.compute.manager [req-42e3d061-41c7-41f8-832f-21973fd19acb req-1f455528-d739-47f4-ba74-b937f004390a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-changed-623e70ba-12bf-443e-abb2-f259acba930d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:10 compute-0 nova_compute[239965]: 2026-01-26 16:17:10.239 239969 DEBUG nova.compute.manager [req-42e3d061-41c7-41f8-832f-21973fd19acb req-1f455528-d739-47f4-ba74-b937f004390a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Refreshing instance network info cache due to event network-changed-623e70ba-12bf-443e-abb2-f259acba930d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:17:10 compute-0 nova_compute[239965]: 2026-01-26 16:17:10.240 239969 DEBUG oslo_concurrency.lockutils [req-42e3d061-41c7-41f8-832f-21973fd19acb req-1f455528-d739-47f4-ba74-b937f004390a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:10 compute-0 nova_compute[239965]: 2026-01-26 16:17:10.240 239969 DEBUG oslo_concurrency.lockutils [req-42e3d061-41c7-41f8-832f-21973fd19acb req-1f455528-d739-47f4-ba74-b937f004390a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:10 compute-0 nova_compute[239965]: 2026-01-26 16:17:10.240 239969 DEBUG nova.network.neutron [req-42e3d061-41c7-41f8-832f-21973fd19acb req-1f455528-d739-47f4-ba74-b937f004390a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Refreshing network info cache for port 623e70ba-12bf-443e-abb2-f259acba930d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:17:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 213 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:17:10 compute-0 ceph-mon[75140]: pgmap v1994: 305 pgs: 305 active+clean; 213 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:17:10 compute-0 nova_compute[239965]: 2026-01-26 16:17:10.490 239969 DEBUG nova.network.neutron [req-42e3d061-41c7-41f8-832f-21973fd19acb req-1f455528-d739-47f4-ba74-b937f004390a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:17:10 compute-0 nova_compute[239965]: 2026-01-26 16:17:10.940 239969 DEBUG nova.network.neutron [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Successfully created port: d7a7b654-8380-4796-9567-cf4032213582 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.221 239969 DEBUG nova.network.neutron [req-42e3d061-41c7-41f8-832f-21973fd19acb req-1f455528-d739-47f4-ba74-b937f004390a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.242 239969 DEBUG oslo_concurrency.lockutils [req-42e3d061-41c7-41f8-832f-21973fd19acb req-1f455528-d739-47f4-ba74-b937f004390a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.524 239969 DEBUG nova.network.neutron [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Successfully updated port: 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.542 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.542 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.542 239969 DEBUG nova.network.neutron [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.701 239969 DEBUG nova.network.neutron [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.888 239969 DEBUG nova.network.neutron [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Successfully updated port: d7a7b654-8380-4796-9567-cf4032213582 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.903 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.904 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:11 compute-0 nova_compute[239965]: 2026-01-26 16:17:11.904 239969 DEBUG nova.network.neutron [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:17:12 compute-0 nova_compute[239965]: 2026-01-26 16:17:12.001 239969 DEBUG nova.compute.manager [req-49275e3a-5806-404d-9c23-ac32e9eafc46 req-b11c2a54-f960-4d22-8cbe-f45ff3a38a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received event network-changed-d7a7b654-8380-4796-9567-cf4032213582 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:12 compute-0 nova_compute[239965]: 2026-01-26 16:17:12.001 239969 DEBUG nova.compute.manager [req-49275e3a-5806-404d-9c23-ac32e9eafc46 req-b11c2a54-f960-4d22-8cbe-f45ff3a38a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Refreshing instance network info cache due to event network-changed-d7a7b654-8380-4796-9567-cf4032213582. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:17:12 compute-0 nova_compute[239965]: 2026-01-26 16:17:12.002 239969 DEBUG oslo_concurrency.lockutils [req-49275e3a-5806-404d-9c23-ac32e9eafc46 req-b11c2a54-f960-4d22-8cbe-f45ff3a38a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:12 compute-0 nova_compute[239965]: 2026-01-26 16:17:12.115 239969 DEBUG nova.network.neutron [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:17:12 compute-0 nova_compute[239965]: 2026-01-26 16:17:12.217 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 250 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.3 MiB/s wr, 32 op/s
Jan 26 16:17:12 compute-0 nova_compute[239965]: 2026-01-26 16:17:12.415 239969 DEBUG nova.compute.manager [req-b5de2fd8-9e2a-4e0a-988f-96dd8e5dd304 req-777a38e7-c7e3-4dee-9cb6-86089ec6c974 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-changed-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:12 compute-0 nova_compute[239965]: 2026-01-26 16:17:12.415 239969 DEBUG nova.compute.manager [req-b5de2fd8-9e2a-4e0a-988f-96dd8e5dd304 req-777a38e7-c7e3-4dee-9cb6-86089ec6c974 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Refreshing instance network info cache due to event network-changed-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:17:12 compute-0 nova_compute[239965]: 2026-01-26 16:17:12.416 239969 DEBUG oslo_concurrency.lockutils [req-b5de2fd8-9e2a-4e0a-988f-96dd8e5dd304 req-777a38e7-c7e3-4dee-9cb6-86089ec6c974 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:12 compute-0 ceph-mon[75140]: pgmap v1995: 305 pgs: 305 active+clean; 250 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 3.3 MiB/s wr, 32 op/s
Jan 26 16:17:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.069 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.162 239969 DEBUG nova.network.neutron [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updating instance_info_cache with network_info: [{"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.200 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.201 239969 DEBUG nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Instance network_info: |[{"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.201 239969 DEBUG oslo_concurrency.lockutils [req-49275e3a-5806-404d-9c23-ac32e9eafc46 req-b11c2a54-f960-4d22-8cbe-f45ff3a38a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.201 239969 DEBUG nova.network.neutron [req-49275e3a-5806-404d-9c23-ac32e9eafc46 req-b11c2a54-f960-4d22-8cbe-f45ff3a38a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Refreshing network info cache for port d7a7b654-8380-4796-9567-cf4032213582 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.204 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Start _get_guest_xml network_info=[{"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.209 239969 WARNING nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.216 239969 DEBUG nova.virt.libvirt.host [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.216 239969 DEBUG nova.virt.libvirt.host [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.224 239969 DEBUG nova.virt.libvirt.host [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.224 239969 DEBUG nova.virt.libvirt.host [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.225 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.225 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.225 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.226 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.226 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.226 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.226 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.227 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.227 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.227 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.227 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.228 239969 DEBUG nova.virt.hardware [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.231 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:17:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/232684062' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.852 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.878 239969 DEBUG nova.storage.rbd_utils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/232684062' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:13 compute-0 nova_compute[239965]: 2026-01-26 16:17:13.882 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.315 239969 DEBUG nova.network.neutron [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updating instance_info_cache with network_info: [{"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 259 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.366 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.367 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Instance network_info: |[{"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.368 239969 DEBUG oslo_concurrency.lockutils [req-b5de2fd8-9e2a-4e0a-988f-96dd8e5dd304 req-777a38e7-c7e3-4dee-9cb6-86089ec6c974 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.369 239969 DEBUG nova.network.neutron [req-b5de2fd8-9e2a-4e0a-988f-96dd8e5dd304 req-777a38e7-c7e3-4dee-9cb6-86089ec6c974 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Refreshing network info cache for port 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.373 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Start _get_guest_xml network_info=[{"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.378 239969 WARNING nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.383 239969 DEBUG nova.virt.libvirt.host [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.384 239969 DEBUG nova.virt.libvirt.host [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.386 239969 DEBUG nova.virt.libvirt.host [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.387 239969 DEBUG nova.virt.libvirt.host [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.387 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.388 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.388 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.389 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.389 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.389 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.389 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.390 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.390 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.390 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.391 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.391 239969 DEBUG nova.virt.hardware [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.395 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:17:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/600161024' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.445 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.447 239969 DEBUG nova.virt.libvirt.vif [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:17:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1990905637',display_name='tempest-TestNetworkBasicOps-server-1990905637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1990905637',id=115,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLAifdPei62MC3HGgHOnaTQp7rfviEisVt2PdWf3OjLKLpNk2wvhm4+Db9zsv5q7m0fc3VIYgFAw6bwPWmCYoSQtS/nYYTstdh3EXd4AVP6AmkFUIUkttraekRLDIhOIjw==',key_name='tempest-TestNetworkBasicOps-555094853',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-mnzeqw2c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:17:08Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.447 239969 DEBUG nova.network.os_vif_util [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.448 239969 DEBUG nova.network.os_vif_util [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:a1:c3,bridge_name='br-int',has_traffic_filtering=True,id=d7a7b654-8380-4796-9567-cf4032213582,network=Network(e76f9602-5b6a-4dfd-b8a5-93cc21546e0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7a7b654-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.449 239969 DEBUG nova.objects.instance [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.466 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <uuid>dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271</uuid>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <name>instance-00000073</name>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-1990905637</nova:name>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:17:13</nova:creationTime>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <nova:port uuid="d7a7b654-8380-4796-9567-cf4032213582">
Jan 26 16:17:14 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <system>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <entry name="serial">dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271</entry>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <entry name="uuid">dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271</entry>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     </system>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <os>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   </os>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <features>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   </features>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk">
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       </source>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk.config">
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       </source>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:17:14 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:b1:a1:c3"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <target dev="tapd7a7b654-83"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271/console.log" append="off"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <video>
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     </video>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:17:14 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:17:14 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:17:14 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:17:14 compute-0 nova_compute[239965]: </domain>
Jan 26 16:17:14 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.472 239969 DEBUG nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Preparing to wait for external event network-vif-plugged-d7a7b654-8380-4796-9567-cf4032213582 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.472 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.473 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.473 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.474 239969 DEBUG nova.virt.libvirt.vif [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:17:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1990905637',display_name='tempest-TestNetworkBasicOps-server-1990905637',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1990905637',id=115,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLAifdPei62MC3HGgHOnaTQp7rfviEisVt2PdWf3OjLKLpNk2wvhm4+Db9zsv5q7m0fc3VIYgFAw6bwPWmCYoSQtS/nYYTstdh3EXd4AVP6AmkFUIUkttraekRLDIhOIjw==',key_name='tempest-TestNetworkBasicOps-555094853',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-mnzeqw2c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:17:08Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.474 239969 DEBUG nova.network.os_vif_util [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.475 239969 DEBUG nova.network.os_vif_util [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:a1:c3,bridge_name='br-int',has_traffic_filtering=True,id=d7a7b654-8380-4796-9567-cf4032213582,network=Network(e76f9602-5b6a-4dfd-b8a5-93cc21546e0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7a7b654-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.475 239969 DEBUG os_vif [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:a1:c3,bridge_name='br-int',has_traffic_filtering=True,id=d7a7b654-8380-4796-9567-cf4032213582,network=Network(e76f9602-5b6a-4dfd-b8a5-93cc21546e0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7a7b654-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.476 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.476 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.477 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.480 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.481 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7a7b654-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.481 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd7a7b654-83, col_values=(('external_ids', {'iface-id': 'd7a7b654-8380-4796-9567-cf4032213582', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:a1:c3', 'vm-uuid': 'dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.483 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:14 compute-0 NetworkManager[48954]: <info>  [1769444234.4842] manager: (tapd7a7b654-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.491 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.492 239969 INFO os_vif [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:a1:c3,bridge_name='br-int',has_traffic_filtering=True,id=d7a7b654-8380-4796-9567-cf4032213582,network=Network(e76f9602-5b6a-4dfd-b8a5-93cc21546e0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7a7b654-83')
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.660 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.660 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.661 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:b1:a1:c3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.661 239969 INFO nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Using config drive
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.683 239969 DEBUG nova.storage.rbd_utils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:14 compute-0 ceph-mon[75140]: pgmap v1996: 305 pgs: 305 active+clean; 259 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.6 MiB/s wr, 54 op/s
Jan 26 16:17:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/600161024' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:17:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3106060898' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.963 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.981 239969 DEBUG nova.storage.rbd_utils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 311f59c2-3926-4794-b450-819f63642a6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:14 compute-0 nova_compute[239965]: 2026-01-26 16:17:14.984 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:17:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239949922' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.527 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.529 239969 DEBUG nova.virt.libvirt.vif [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:17:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-16933863',display_name='tempest-TestGettingAddress-server-16933863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-16933863',id=114,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-zluaa16m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:17:04Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=311f59c2-3926-4794-b450-819f63642a6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.530 239969 DEBUG nova.network.os_vif_util [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.531 239969 DEBUG nova.network.os_vif_util [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:02:a7,bridge_name='br-int',has_traffic_filtering=True,id=623e70ba-12bf-443e-abb2-f259acba930d,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623e70ba-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.532 239969 DEBUG nova.virt.libvirt.vif [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:17:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-16933863',display_name='tempest-TestGettingAddress-server-16933863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-16933863',id=114,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-zluaa16m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:17:04Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=311f59c2-3926-4794-b450-819f63642a6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.532 239969 DEBUG nova.network.os_vif_util [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.533 239969 DEBUG nova.network.os_vif_util [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:5b:31,bridge_name='br-int',has_traffic_filtering=True,id=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e934ddc-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.534 239969 DEBUG nova.objects.instance [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 311f59c2-3926-4794-b450-819f63642a6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.552 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <uuid>311f59c2-3926-4794-b450-819f63642a6b</uuid>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <name>instance-00000072</name>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-16933863</nova:name>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:17:14</nova:creationTime>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <nova:port uuid="623e70ba-12bf-443e-abb2-f259acba930d">
Jan 26 16:17:15 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <nova:port uuid="8e934ddc-7506-48e6-9e44-13ca6c5d5bb2">
Jan 26 16:17:15 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:febc:5b31" ipVersion="6"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <system>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <entry name="serial">311f59c2-3926-4794-b450-819f63642a6b</entry>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <entry name="uuid">311f59c2-3926-4794-b450-819f63642a6b</entry>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </system>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <os>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   </os>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <features>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   </features>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/311f59c2-3926-4794-b450-819f63642a6b_disk">
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       </source>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/311f59c2-3926-4794-b450-819f63642a6b_disk.config">
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       </source>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:17:15 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:4b:02:a7"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <target dev="tap623e70ba-12"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:bc:5b:31"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <target dev="tap8e934ddc-75"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b/console.log" append="off"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <video>
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </video>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:17:15 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:17:15 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:17:15 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:17:15 compute-0 nova_compute[239965]: </domain>
Jan 26 16:17:15 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.554 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Preparing to wait for external event network-vif-plugged-623e70ba-12bf-443e-abb2-f259acba930d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.555 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.555 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.555 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.555 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Preparing to wait for external event network-vif-plugged-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.556 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.556 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.556 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.557 239969 DEBUG nova.virt.libvirt.vif [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:17:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-16933863',display_name='tempest-TestGettingAddress-server-16933863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-16933863',id=114,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-zluaa16m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:17:04Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=311f59c2-3926-4794-b450-819f63642a6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.557 239969 DEBUG nova.network.os_vif_util [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.558 239969 DEBUG nova.network.os_vif_util [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:02:a7,bridge_name='br-int',has_traffic_filtering=True,id=623e70ba-12bf-443e-abb2-f259acba930d,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623e70ba-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.558 239969 DEBUG os_vif [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:02:a7,bridge_name='br-int',has_traffic_filtering=True,id=623e70ba-12bf-443e-abb2-f259acba930d,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623e70ba-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.559 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.559 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.560 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.562 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.562 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap623e70ba-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.563 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap623e70ba-12, col_values=(('external_ids', {'iface-id': '623e70ba-12bf-443e-abb2-f259acba930d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:02:a7', 'vm-uuid': '311f59c2-3926-4794-b450-819f63642a6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:15 compute-0 NetworkManager[48954]: <info>  [1769444235.5652] manager: (tap623e70ba-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.566 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.572 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.575 239969 INFO os_vif [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:02:a7,bridge_name='br-int',has_traffic_filtering=True,id=623e70ba-12bf-443e-abb2-f259acba930d,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623e70ba-12')
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.576 239969 DEBUG nova.virt.libvirt.vif [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:17:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-16933863',display_name='tempest-TestGettingAddress-server-16933863',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-16933863',id=114,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-zluaa16m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:17:04Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=311f59c2-3926-4794-b450-819f63642a6b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.576 239969 DEBUG nova.network.os_vif_util [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.578 239969 DEBUG nova.network.os_vif_util [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:5b:31,bridge_name='br-int',has_traffic_filtering=True,id=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e934ddc-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.578 239969 DEBUG os_vif [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:5b:31,bridge_name='br-int',has_traffic_filtering=True,id=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e934ddc-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.579 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.579 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.580 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.582 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.582 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e934ddc-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.583 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e934ddc-75, col_values=(('external_ids', {'iface-id': '8e934ddc-7506-48e6-9e44-13ca6c5d5bb2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bc:5b:31', 'vm-uuid': '311f59c2-3926-4794-b450-819f63642a6b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.584 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:15 compute-0 NetworkManager[48954]: <info>  [1769444235.5855] manager: (tap8e934ddc-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.588 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.597 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.598 239969 INFO os_vif [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:5b:31,bridge_name='br-int',has_traffic_filtering=True,id=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e934ddc-75')
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.655 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.655 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.655 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:4b:02:a7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.656 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:bc:5b:31, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.656 239969 INFO nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Using config drive
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.680 239969 DEBUG nova.storage.rbd_utils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 311f59c2-3926-4794-b450-819f63642a6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.760 239969 INFO nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Creating config drive at /var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271/disk.config
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.767 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkleof65 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.814 239969 DEBUG nova.network.neutron [req-49275e3a-5806-404d-9c23-ac32e9eafc46 req-b11c2a54-f960-4d22-8cbe-f45ff3a38a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updated VIF entry in instance network info cache for port d7a7b654-8380-4796-9567-cf4032213582. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.815 239969 DEBUG nova.network.neutron [req-49275e3a-5806-404d-9c23-ac32e9eafc46 req-b11c2a54-f960-4d22-8cbe-f45ff3a38a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updating instance_info_cache with network_info: [{"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.838 239969 DEBUG oslo_concurrency.lockutils [req-49275e3a-5806-404d-9c23-ac32e9eafc46 req-b11c2a54-f960-4d22-8cbe-f45ff3a38a2e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:15 compute-0 nova_compute[239965]: 2026-01-26 16:17:15.924 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmkleof65" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3106060898' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4239949922' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.148 239969 DEBUG nova.storage.rbd_utils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.154 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271/disk.config dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.330 239969 DEBUG oslo_concurrency.processutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271/disk.config dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.331 239969 INFO nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Deleting local config drive /var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271/disk.config because it was imported into RBD.
Jan 26 16:17:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 259 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.4093] manager: (tapd7a7b654-83): new Tun device (/org/freedesktop/NetworkManager/Devices/491)
Jan 26 16:17:16 compute-0 kernel: tapd7a7b654-83: entered promiscuous mode
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.445 239969 INFO nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Creating config drive at /var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b/disk.config
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01182|binding|INFO|Claiming lport d7a7b654-8380-4796-9567-cf4032213582 for this chassis.
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01183|binding|INFO|d7a7b654-8380-4796-9567-cf4032213582: Claiming fa:16:3e:b1:a1:c3 10.100.0.7
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.454 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpndz59f1h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:16 compute-0 systemd-udevd[340827]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.467 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:a1:c3 10.100.0.7'], port_security=['fa:16:3e:b1:a1:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4cb5d445-077f-4868-a619-d07393d42134', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4515f149-8897-4e5a-8e7f-088b776346f8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d7a7b654-8380-4796-9567-cf4032213582) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.470 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d7a7b654-8380-4796-9567-cf4032213582 in datapath e76f9602-5b6a-4dfd-b8a5-93cc21546e0c bound to our chassis
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.472 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e76f9602-5b6a-4dfd-b8a5-93cc21546e0c
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.4736] device (tapd7a7b654-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.4742] device (tapd7a7b654-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01184|binding|INFO|Setting lport d7a7b654-8380-4796-9567-cf4032213582 ovn-installed in OVS
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01185|binding|INFO|Setting lport d7a7b654-8380-4796-9567-cf4032213582 up in Southbound
Jan 26 16:17:16 compute-0 systemd-machined[208061]: New machine qemu-141-instance-00000073.
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.487 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[89a5ff37-5ec0-4d75-b4cd-ab3b56f0173d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.488 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape76f9602-51 in ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.490 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape76f9602-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.490 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[04c1a08e-a2b3-435c-bb3e-1204aab84ea3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.492 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[892e4914-f77e-4422-a1c5-f6beb0f70416]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.493 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:16 compute-0 systemd[1]: Started Virtual Machine qemu-141-instance-00000073.
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.506 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[4b2cb293-38e5-465b-bdbd-51a00c35b379]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.521 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d39aa4-cae2-4032-948d-057c532565a9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.559 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[16161944-bcd6-454d-81fb-1b85326d649b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 systemd-udevd[340834]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.567 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d2e8415-d0cf-40ed-a859-4f54213ffd6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.5692] manager: (tape76f9602-50): new Veth device (/org/freedesktop/NetworkManager/Devices/492)
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.600 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpndz59f1h" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.606 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[68cd75be-70d5-4b32-843a-feb651a9100f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.609 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[281eb6fd-02af-4298-a21c-4133aa46e334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.634 239969 DEBUG nova.storage.rbd_utils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 311f59c2-3926-4794-b450-819f63642a6b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.6373] device (tape76f9602-50): carrier: link connected
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.639 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b/disk.config 311f59c2-3926-4794-b450-819f63642a6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.641 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c35ac205-b55f-4418-8ff9-dac2db38caba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.657 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69d09042-32a7-46fd-91eb-5377734c0e6d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape76f9602-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:fc:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572744, 'reachable_time': 38293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 340892, 'error': None, 'target': 'ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.674 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9b2430ce-f395-4ad8-9f15-756ee9a1f87e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb3:fc8f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 572744, 'tstamp': 572744}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 340894, 'error': None, 'target': 'ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.698 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5d113c71-80e7-4783-b80e-a68180bb3e2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape76f9602-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b3:fc:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 344], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572744, 'reachable_time': 38293, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 340895, 'error': None, 'target': 'ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.729 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc9a8f7-f0e5-48cf-8aa5-e0346361eee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.787 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cb07ff15-2141-4df1-9388-484cb5c97e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.788 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape76f9602-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.789 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.789 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape76f9602-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.7926] manager: (tape76f9602-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/493)
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:16 compute-0 kernel: tape76f9602-50: entered promiscuous mode
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.798 239969 DEBUG oslo_concurrency.processutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b/disk.config 311f59c2-3926-4794-b450-819f63642a6b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.799 239969 INFO nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Deleting local config drive /var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b/disk.config because it was imported into RBD.
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.805 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.806 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape76f9602-50, col_values=(('external_ids', {'iface-id': 'f7bb788d-e36c-4f9a-b442-9f826e16e4d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01186|binding|INFO|Releasing lport f7bb788d-e36c-4f9a-b442-9f826e16e4d4 from this chassis (sb_readonly=0)
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.827 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.836 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e76f9602-5b6a-4dfd-b8a5-93cc21546e0c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e76f9602-5b6a-4dfd-b8a5-93cc21546e0c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.837 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3250fb-9bc4-4059-a46e-9f9c45e3020f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.837 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/e76f9602-5b6a-4dfd-b8a5-93cc21546e0c.pid.haproxy
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID e76f9602-5b6a-4dfd-b8a5-93cc21546e0c
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.838 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c', 'env', 'PROCESS_TAG=haproxy-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e76f9602-5b6a-4dfd-b8a5-93cc21546e0c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:17:16 compute-0 systemd-udevd[340854]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.8571] manager: (tap623e70ba-12): new Tun device (/org/freedesktop/NetworkManager/Devices/494)
Jan 26 16:17:16 compute-0 kernel: tap623e70ba-12: entered promiscuous mode
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.8664] device (tap623e70ba-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.8678] device (tap623e70ba-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01187|binding|INFO|Claiming lport 623e70ba-12bf-443e-abb2-f259acba930d for this chassis.
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01188|binding|INFO|623e70ba-12bf-443e-abb2-f259acba930d: Claiming fa:16:3e:4b:02:a7 10.100.0.4
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.865 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.8702] manager: (tap8e934ddc-75): new Tun device (/org/freedesktop/NetworkManager/Devices/495)
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.878 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:02:a7 10.100.0.4'], port_security=['fa:16:3e:4b:02:a7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '311f59c2-3926-4794-b450-819f63642a6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e31774a-f33d-4386-89f9-6e7af3d4c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e91b07fc-e506-4aaa-a11a-7d624ebfb418, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=623e70ba-12bf-443e-abb2-f259acba930d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:17:16 compute-0 kernel: tap8e934ddc-75: entered promiscuous mode
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.8969] device (tap8e934ddc-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01189|binding|INFO|Setting lport 623e70ba-12bf-443e-abb2-f259acba930d ovn-installed in OVS
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01190|binding|INFO|Setting lport 623e70ba-12bf-443e-abb2-f259acba930d up in Southbound
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.899 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01191|if_status|INFO|Dropped 1 log messages in last 126 seconds (most recently, 126 seconds ago) due to excessive rate
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01192|if_status|INFO|Not updating pb chassis for 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 now as sb is readonly
Jan 26 16:17:16 compute-0 NetworkManager[48954]: <info>  [1769444236.9015] device (tap8e934ddc-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01193|binding|INFO|Claiming lport 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 for this chassis.
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01194|binding|INFO|8e934ddc-7506-48e6-9e44-13ca6c5d5bb2: Claiming fa:16:3e:bc:5b:31 2001:db8::f816:3eff:febc:5b31
Jan 26 16:17:16 compute-0 systemd-machined[208061]: New machine qemu-142-instance-00000072.
Jan 26 16:17:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:16.911 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:5b:31 2001:db8::f816:3eff:febc:5b31'], port_security=['fa:16:3e:bc:5b:31 2001:db8::f816:3eff:febc:5b31'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:febc:5b31/64', 'neutron:device_id': '311f59c2-3926-4794-b450-819f63642a6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0e31774a-f33d-4386-89f9-6e7af3d4c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a8893da-ccb7-4082-92ad-5c7938360303, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:17:16 compute-0 systemd[1]: Started Virtual Machine qemu-142-instance-00000072.
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01195|binding|INFO|Setting lport 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 up in Southbound
Jan 26 16:17:16 compute-0 ovn_controller[146046]: 2026-01-26T16:17:16Z|01196|binding|INFO|Setting lport 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 ovn-installed in OVS
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.920 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:16 compute-0 nova_compute[239965]: 2026-01-26 16:17:16.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:17 compute-0 podman[340993]: 2026-01-26 16:17:17.190063367 +0000 UTC m=+0.024210703 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:17:17 compute-0 ceph-mon[75140]: pgmap v1997: 305 pgs: 305 active+clean; 259 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Jan 26 16:17:17 compute-0 podman[340993]: 2026-01-26 16:17:17.344654323 +0000 UTC m=+0.178801679 container create 360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:17:17 compute-0 systemd[1]: Started libpod-conmon-360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a.scope.
Jan 26 16:17:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:17:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d99a85948953b380d8f892be725b6b1ed95a4686e35ede87c397775d8d4af5eb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.453 239969 DEBUG nova.compute.manager [req-0a178524-d1a1-4fe7-ac69-65b67c8d1d4f req-df97b55c-1e6c-4c5c-8205-93db7541c549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received event network-vif-plugged-d7a7b654-8380-4796-9567-cf4032213582 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.454 239969 DEBUG oslo_concurrency.lockutils [req-0a178524-d1a1-4fe7-ac69-65b67c8d1d4f req-df97b55c-1e6c-4c5c-8205-93db7541c549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.455 239969 DEBUG oslo_concurrency.lockutils [req-0a178524-d1a1-4fe7-ac69-65b67c8d1d4f req-df97b55c-1e6c-4c5c-8205-93db7541c549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.455 239969 DEBUG oslo_concurrency.lockutils [req-0a178524-d1a1-4fe7-ac69-65b67c8d1d4f req-df97b55c-1e6c-4c5c-8205-93db7541c549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.455 239969 DEBUG nova.compute.manager [req-0a178524-d1a1-4fe7-ac69-65b67c8d1d4f req-df97b55c-1e6c-4c5c-8205-93db7541c549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Processing event network-vif-plugged-d7a7b654-8380-4796-9567-cf4032213582 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:17:17 compute-0 podman[340993]: 2026-01-26 16:17:17.459098509 +0000 UTC m=+0.293245855 container init 360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:17:17 compute-0 podman[340993]: 2026-01-26 16:17:17.464734447 +0000 UTC m=+0.298881773 container start 360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:17:17 compute-0 neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c[341064]: [NOTICE]   (341078) : New worker (341081) forked
Jan 26 16:17:17 compute-0 neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c[341064]: [NOTICE]   (341078) : Loading success.
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.501 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444237.5010948, dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.502 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] VM Started (Lifecycle Event)
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.505 239969 DEBUG nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.508 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.512 239969 INFO nova.virt.libvirt.driver [-] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Instance spawned successfully.
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.513 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.538 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 623e70ba-12bf-443e-abb2-f259acba930d in datapath 96c2fafe-7dfa-4123-80bb-6807f1870aaa unbound from our chassis
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.539 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96c2fafe-7dfa-4123-80bb-6807f1870aaa
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.544 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.550 239969 DEBUG nova.network.neutron [req-b5de2fd8-9e2a-4e0a-988f-96dd8e5dd304 req-777a38e7-c7e3-4dee-9cb6-86089ec6c974 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updated VIF entry in instance network info cache for port 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.551 239969 DEBUG nova.network.neutron [req-b5de2fd8-9e2a-4e0a-988f-96dd8e5dd304 req-777a38e7-c7e3-4dee-9cb6-86089ec6c974 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updating instance_info_cache with network_info: [{"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.553 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.554 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d9e0aebb-69b3-47b0-88ee-ccbaa915f82c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.561 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.561 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.562 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.562 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.562 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.563 239969 DEBUG nova.virt.libvirt.driver [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.573 239969 DEBUG oslo_concurrency.lockutils [req-b5de2fd8-9e2a-4e0a-988f-96dd8e5dd304 req-777a38e7-c7e3-4dee-9cb6-86089ec6c974 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.590 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.591 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444237.5014155, dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.591 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] VM Paused (Lifecycle Event)
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.593 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b707713b-b6ae-4210-b8d0-a320f6bd7cc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.595 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5d7dc6c3-a115-4314-92b4-74398fd250a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.611 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.614 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444237.5045643, 311f59c2-3926-4794-b450-819f63642a6b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.615 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] VM Started (Lifecycle Event)
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.622 239969 INFO nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Took 9.13 seconds to spawn the instance on the hypervisor.
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.622 239969 DEBUG nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.627 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c4d1b54c-3e00-438f-bdf1-6cd8e2398187]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.645 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.650 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444237.5058832, 311f59c2-3926-4794-b450-819f63642a6b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.651 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] VM Paused (Lifecycle Event)
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.661 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3db47f28-3a3c-4399-8049-98a08ea86c80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96c2fafe-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:00:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568693, 'reachable_time': 15322, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341095, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.678 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f31959e7-3621-4d25-ba8f-cc21c912e255]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap96c2fafe-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568706, 'tstamp': 568706}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341096, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap96c2fafe-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568709, 'tstamp': 568709}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341096, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.680 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96c2fafe-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.682 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.682 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.683 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96c2fafe-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.683 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.684 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96c2fafe-70, col_values=(('external_ids', {'iface-id': 'f840b217-53ba-4982-acb3-2dea2bac979e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.684 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.685 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.685 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 in datapath 4b542b76-8e6a-4678-aea6-ecc3e7938bb4 unbound from our chassis
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.687 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b542b76-8e6a-4678-aea6-ecc3e7938bb4
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.702 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4d9e712c-2b77-4f32-9f16-8b245f266e02]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.707 239969 INFO nova.compute.manager [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Took 10.33 seconds to build instance.
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.712 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.712 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444237.507666, dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.712 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] VM Resumed (Lifecycle Event)
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.729 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.731 239969 DEBUG oslo_concurrency.lockutils [None req-e104cf8f-6bed-4e61-935c-3444829e3618 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.732 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.732 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cd1dec1b-cc3e-4470-a0d5-80a83dda2cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.736 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cc950d36-d404-48b8-912d-46eba286630c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.763 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[92807ca6-e136-4e76-9ae3-8415a54fc4e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.787 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd5561b-0ccc-40be-8a5b-34987f960406]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b542b76-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:7a:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 17, 'tx_packets': 4, 'rx_bytes': 1502, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568803, 'reachable_time': 19258, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341102, 'error': None, 'target': 'ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.806 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[376adc0a-1efa-4614-88c9-d1903d6c0bb3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4b542b76-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568816, 'tstamp': 568816}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341103, 'error': None, 'target': 'ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.808 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b542b76-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.809 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:17 compute-0 nova_compute[239965]: 2026-01-26 16:17:17.811 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.811 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b542b76-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.811 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.812 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b542b76-80, col_values=(('external_ids', {'iface-id': '61b5c2b4-c9ba-4399-9641-b64f1ba83437'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:17.812 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 MiB/s wr, 61 op/s
Jan 26 16:17:18 compute-0 ceph-mon[75140]: pgmap v1998: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 3.3 MiB/s wr, 61 op/s
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.614 239969 DEBUG nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received event network-vif-plugged-d7a7b654-8380-4796-9567-cf4032213582 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.615 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.615 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.615 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.615 239969 DEBUG nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] No waiting events found dispatching network-vif-plugged-d7a7b654-8380-4796-9567-cf4032213582 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.615 239969 WARNING nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received unexpected event network-vif-plugged-d7a7b654-8380-4796-9567-cf4032213582 for instance with vm_state active and task_state None.
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.615 239969 DEBUG nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-vif-plugged-623e70ba-12bf-443e-abb2-f259acba930d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.616 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.616 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.616 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.616 239969 DEBUG nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Processing event network-vif-plugged-623e70ba-12bf-443e-abb2-f259acba930d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.616 239969 DEBUG nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-vif-plugged-623e70ba-12bf-443e-abb2-f259acba930d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.616 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.616 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.617 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.617 239969 DEBUG nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] No event matching network-vif-plugged-623e70ba-12bf-443e-abb2-f259acba930d in dict_keys([('network-vif-plugged', '8e934ddc-7506-48e6-9e44-13ca6c5d5bb2')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.617 239969 WARNING nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received unexpected event network-vif-plugged-623e70ba-12bf-443e-abb2-f259acba930d for instance with vm_state building and task_state spawning.
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.617 239969 DEBUG nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-vif-plugged-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.617 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.618 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.618 239969 DEBUG oslo_concurrency.lockutils [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.618 239969 DEBUG nova.compute.manager [req-ced1408b-3c8c-4696-bfe3-116cc27f3694 req-4ab371c3-3703-4cc8-92ad-022360c736df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Processing event network-vif-plugged-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.619 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.632 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444239.6218753, 311f59c2-3926-4794-b450-819f63642a6b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.633 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] VM Resumed (Lifecycle Event)
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.637 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.645 239969 INFO nova.virt.libvirt.driver [-] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Instance spawned successfully.
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.646 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.673 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.679 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.680 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.680 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.681 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.682 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.683 239969 DEBUG nova.virt.libvirt.driver [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.700 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.750 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.785 239969 INFO nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Took 15.64 seconds to spawn the instance on the hypervisor.
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.785 239969 DEBUG nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.851 239969 INFO nova.compute.manager [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Took 16.66 seconds to build instance.
Jan 26 16:17:19 compute-0 nova_compute[239965]: 2026-01-26 16:17:19.869 239969 DEBUG oslo_concurrency.lockutils [None req-94d79e66-cdd6-457a-a9b3-4f32640d6d0d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 26 16:17:20 compute-0 ceph-mon[75140]: pgmap v1999: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Jan 26 16:17:20 compute-0 nova_compute[239965]: 2026-01-26 16:17:20.588 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:21 compute-0 sshd-session[341104]: Invalid user solana from 45.148.10.240 port 48558
Jan 26 16:17:21 compute-0 sshd-session[341104]: Connection closed by invalid user solana 45.148.10.240 port 48558 [preauth]
Jan 26 16:17:22 compute-0 nova_compute[239965]: 2026-01-26 16:17:22.032 239969 DEBUG nova.compute.manager [req-ca92575f-d31b-4acb-81b1-27583defef60 req-15bb19b2-076b-4a8d-8766-90e3668082a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-vif-plugged-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:22 compute-0 nova_compute[239965]: 2026-01-26 16:17:22.033 239969 DEBUG oslo_concurrency.lockutils [req-ca92575f-d31b-4acb-81b1-27583defef60 req-15bb19b2-076b-4a8d-8766-90e3668082a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:22 compute-0 nova_compute[239965]: 2026-01-26 16:17:22.033 239969 DEBUG oslo_concurrency.lockutils [req-ca92575f-d31b-4acb-81b1-27583defef60 req-15bb19b2-076b-4a8d-8766-90e3668082a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:22 compute-0 nova_compute[239965]: 2026-01-26 16:17:22.033 239969 DEBUG oslo_concurrency.lockutils [req-ca92575f-d31b-4acb-81b1-27583defef60 req-15bb19b2-076b-4a8d-8766-90e3668082a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:22 compute-0 nova_compute[239965]: 2026-01-26 16:17:22.033 239969 DEBUG nova.compute.manager [req-ca92575f-d31b-4acb-81b1-27583defef60 req-15bb19b2-076b-4a8d-8766-90e3668082a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] No waiting events found dispatching network-vif-plugged-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:17:22 compute-0 nova_compute[239965]: 2026-01-26 16:17:22.033 239969 WARNING nova.compute.manager [req-ca92575f-d31b-4acb-81b1-27583defef60 req-15bb19b2-076b-4a8d-8766-90e3668082a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received unexpected event network-vif-plugged-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 for instance with vm_state active and task_state None.
Jan 26 16:17:22 compute-0 nova_compute[239965]: 2026-01-26 16:17:22.222 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 26 16:17:22 compute-0 ceph-mon[75140]: pgmap v2000: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 139 op/s
Jan 26 16:17:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:23 compute-0 nova_compute[239965]: 2026-01-26 16:17:23.610 239969 DEBUG nova.compute.manager [req-8030b733-1f99-4b16-a9ff-08e6f3a2fc46 req-2b44f2e9-afff-4d60-8569-25530208ba36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received event network-changed-d7a7b654-8380-4796-9567-cf4032213582 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:23 compute-0 nova_compute[239965]: 2026-01-26 16:17:23.610 239969 DEBUG nova.compute.manager [req-8030b733-1f99-4b16-a9ff-08e6f3a2fc46 req-2b44f2e9-afff-4d60-8569-25530208ba36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Refreshing instance network info cache due to event network-changed-d7a7b654-8380-4796-9567-cf4032213582. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:17:23 compute-0 nova_compute[239965]: 2026-01-26 16:17:23.611 239969 DEBUG oslo_concurrency.lockutils [req-8030b733-1f99-4b16-a9ff-08e6f3a2fc46 req-2b44f2e9-afff-4d60-8569-25530208ba36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:23 compute-0 nova_compute[239965]: 2026-01-26 16:17:23.611 239969 DEBUG oslo_concurrency.lockutils [req-8030b733-1f99-4b16-a9ff-08e6f3a2fc46 req-2b44f2e9-afff-4d60-8569-25530208ba36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:23 compute-0 nova_compute[239965]: 2026-01-26 16:17:23.611 239969 DEBUG nova.network.neutron [req-8030b733-1f99-4b16-a9ff-08e6f3a2fc46 req-2b44f2e9-afff-4d60-8569-25530208ba36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Refreshing network info cache for port d7a7b654-8380-4796-9567-cf4032213582 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:17:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 246 KiB/s wr, 169 op/s
Jan 26 16:17:24 compute-0 ceph-mon[75140]: pgmap v2001: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 246 KiB/s wr, 169 op/s
Jan 26 16:17:24 compute-0 nova_compute[239965]: 2026-01-26 16:17:24.813 239969 DEBUG nova.network.neutron [req-8030b733-1f99-4b16-a9ff-08e6f3a2fc46 req-2b44f2e9-afff-4d60-8569-25530208ba36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updated VIF entry in instance network info cache for port d7a7b654-8380-4796-9567-cf4032213582. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:17:24 compute-0 nova_compute[239965]: 2026-01-26 16:17:24.814 239969 DEBUG nova.network.neutron [req-8030b733-1f99-4b16-a9ff-08e6f3a2fc46 req-2b44f2e9-afff-4d60-8569-25530208ba36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updating instance_info_cache with network_info: [{"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:24 compute-0 nova_compute[239965]: 2026-01-26 16:17:24.840 239969 DEBUG oslo_concurrency.lockutils [req-8030b733-1f99-4b16-a9ff-08e6f3a2fc46 req-2b44f2e9-afff-4d60-8569-25530208ba36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:25 compute-0 nova_compute[239965]: 2026-01-26 16:17:25.590 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 147 op/s
Jan 26 16:17:26 compute-0 podman[341106]: 2026-01-26 16:17:26.374236209 +0000 UTC m=+0.053032687 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:17:26 compute-0 podman[341107]: 2026-01-26 16:17:26.40294473 +0000 UTC m=+0.081301567 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:17:26 compute-0 ceph-mon[75140]: pgmap v2002: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 26 KiB/s wr, 147 op/s
Jan 26 16:17:26 compute-0 nova_compute[239965]: 2026-01-26 16:17:26.893 239969 DEBUG nova.compute.manager [req-f1a82ddd-fb7c-4001-a209-4aaee7908b3d req-369a117b-3ba1-4775-b8cf-0ff2b702cd4f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-changed-623e70ba-12bf-443e-abb2-f259acba930d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:26 compute-0 nova_compute[239965]: 2026-01-26 16:17:26.894 239969 DEBUG nova.compute.manager [req-f1a82ddd-fb7c-4001-a209-4aaee7908b3d req-369a117b-3ba1-4775-b8cf-0ff2b702cd4f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Refreshing instance network info cache due to event network-changed-623e70ba-12bf-443e-abb2-f259acba930d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:17:26 compute-0 nova_compute[239965]: 2026-01-26 16:17:26.894 239969 DEBUG oslo_concurrency.lockutils [req-f1a82ddd-fb7c-4001-a209-4aaee7908b3d req-369a117b-3ba1-4775-b8cf-0ff2b702cd4f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:26 compute-0 nova_compute[239965]: 2026-01-26 16:17:26.894 239969 DEBUG oslo_concurrency.lockutils [req-f1a82ddd-fb7c-4001-a209-4aaee7908b3d req-369a117b-3ba1-4775-b8cf-0ff2b702cd4f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:26 compute-0 nova_compute[239965]: 2026-01-26 16:17:26.895 239969 DEBUG nova.network.neutron [req-f1a82ddd-fb7c-4001-a209-4aaee7908b3d req-369a117b-3ba1-4775-b8cf-0ff2b702cd4f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Refreshing network info cache for port 623e70ba-12bf-443e-abb2-f259acba930d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:17:27 compute-0 nova_compute[239965]: 2026-01-26 16:17:27.225 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Jan 26 16:17:28 compute-0 ceph-mon[75140]: pgmap v2003: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 147 op/s
Jan 26 16:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:17:28
Jan 26 16:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'backups', 'images', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data']
Jan 26 16:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:17:28 compute-0 nova_compute[239965]: 2026-01-26 16:17:28.809 239969 DEBUG nova.network.neutron [req-f1a82ddd-fb7c-4001-a209-4aaee7908b3d req-369a117b-3ba1-4775-b8cf-0ff2b702cd4f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updated VIF entry in instance network info cache for port 623e70ba-12bf-443e-abb2-f259acba930d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:17:28 compute-0 nova_compute[239965]: 2026-01-26 16:17:28.810 239969 DEBUG nova.network.neutron [req-f1a82ddd-fb7c-4001-a209-4aaee7908b3d req-369a117b-3ba1-4775-b8cf-0ff2b702cd4f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updating instance_info_cache with network_info: [{"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:28 compute-0 nova_compute[239965]: 2026-01-26 16:17:28.828 239969 DEBUG oslo_concurrency.lockutils [req-f1a82ddd-fb7c-4001-a209-4aaee7908b3d req-369a117b-3ba1-4775-b8cf-0ff2b702cd4f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.7 KiB/s wr, 140 op/s
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:17:30 compute-0 ceph-mon[75140]: pgmap v2004: 305 pgs: 305 active+clean; 260 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 2.7 KiB/s wr, 140 op/s
Jan 26 16:17:30 compute-0 nova_compute[239965]: 2026-01-26 16:17:30.591 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:17:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:17:31 compute-0 ovn_controller[146046]: 2026-01-26T16:17:31Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:a1:c3 10.100.0.7
Jan 26 16:17:31 compute-0 ovn_controller[146046]: 2026-01-26T16:17:31Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:a1:c3 10.100.0.7
Jan 26 16:17:32 compute-0 nova_compute[239965]: 2026-01-26 16:17:32.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 268 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 909 KiB/s wr, 176 op/s
Jan 26 16:17:32 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 26 16:17:32 compute-0 ceph-mon[75140]: pgmap v2005: 305 pgs: 305 active+clean; 268 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 909 KiB/s wr, 176 op/s
Jan 26 16:17:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 299 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 MiB/s wr, 106 op/s
Jan 26 16:17:34 compute-0 ceph-mon[75140]: pgmap v2006: 305 pgs: 305 active+clean; 299 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.7 MiB/s wr, 106 op/s
Jan 26 16:17:34 compute-0 ovn_controller[146046]: 2026-01-26T16:17:34Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:02:a7 10.100.0.4
Jan 26 16:17:34 compute-0 ovn_controller[146046]: 2026-01-26T16:17:34Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:02:a7 10.100.0.4
Jan 26 16:17:35 compute-0 nova_compute[239965]: 2026-01-26 16:17:35.594 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 311 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 81 op/s
Jan 26 16:17:36 compute-0 ceph-mon[75140]: pgmap v2007: 305 pgs: 305 active+clean; 311 MiB data, 885 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 81 op/s
Jan 26 16:17:37 compute-0 nova_compute[239965]: 2026-01-26 16:17:37.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:38 compute-0 sshd-session[341150]: Connection closed by 45.249.247.124 port 36070
Jan 26 16:17:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 26 16:17:38 compute-0 nova_compute[239965]: 2026-01-26 16:17:38.419 239969 INFO nova.compute.manager [None req-110c6608-8df9-4ee5-af71-682d0cd63d1a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Get console output
Jan 26 16:17:38 compute-0 nova_compute[239965]: 2026-01-26 16:17:38.424 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:17:38 compute-0 ceph-mon[75140]: pgmap v2008: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 26 16:17:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 26 16:17:40 compute-0 ceph-mon[75140]: pgmap v2009: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 26 16:17:40 compute-0 nova_compute[239965]: 2026-01-26 16:17:40.596 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 26 16:17:42 compute-0 nova_compute[239965]: 2026-01-26 16:17:42.471 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:42 compute-0 ceph-mon[75140]: pgmap v2010: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 675 KiB/s rd, 4.3 MiB/s wr, 129 op/s
Jan 26 16:17:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:43 compute-0 nova_compute[239965]: 2026-01-26 16:17:43.153 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:17:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 3.4 MiB/s wr, 94 op/s
Jan 26 16:17:45 compute-0 ceph-mon[75140]: pgmap v2011: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 451 KiB/s rd, 3.4 MiB/s wr, 94 op/s
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.868 239969 DEBUG nova.compute.manager [req-c6faf016-31c4-4ae8-96db-2ee968da1c93 req-dc2e49ae-0486-4cd6-ba94-3e9f23bda46d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-changed-623e70ba-12bf-443e-abb2-f259acba930d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.868 239969 DEBUG nova.compute.manager [req-c6faf016-31c4-4ae8-96db-2ee968da1c93 req-dc2e49ae-0486-4cd6-ba94-3e9f23bda46d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Refreshing instance network info cache due to event network-changed-623e70ba-12bf-443e-abb2-f259acba930d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.869 239969 DEBUG oslo_concurrency.lockutils [req-c6faf016-31c4-4ae8-96db-2ee968da1c93 req-dc2e49ae-0486-4cd6-ba94-3e9f23bda46d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.869 239969 DEBUG oslo_concurrency.lockutils [req-c6faf016-31c4-4ae8-96db-2ee968da1c93 req-dc2e49ae-0486-4cd6-ba94-3e9f23bda46d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.869 239969 DEBUG nova.network.neutron [req-c6faf016-31c4-4ae8-96db-2ee968da1c93 req-dc2e49ae-0486-4cd6-ba94-3e9f23bda46d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Refreshing network info cache for port 623e70ba-12bf-443e-abb2-f259acba930d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.942 239969 DEBUG oslo_concurrency.lockutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.943 239969 DEBUG oslo_concurrency.lockutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.943 239969 DEBUG oslo_concurrency.lockutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.943 239969 DEBUG oslo_concurrency.lockutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.943 239969 DEBUG oslo_concurrency.lockutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.944 239969 INFO nova.compute.manager [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Terminating instance
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.946 239969 DEBUG nova.compute.manager [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:17:45 compute-0 kernel: tap623e70ba-12 (unregistering): left promiscuous mode
Jan 26 16:17:45 compute-0 NetworkManager[48954]: <info>  [1769444265.9900] device (tap623e70ba-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:17:45 compute-0 nova_compute[239965]: 2026-01-26 16:17:45.998 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:45 compute-0 ovn_controller[146046]: 2026-01-26T16:17:45Z|01197|binding|INFO|Releasing lport 623e70ba-12bf-443e-abb2-f259acba930d from this chassis (sb_readonly=0)
Jan 26 16:17:45 compute-0 ovn_controller[146046]: 2026-01-26T16:17:45Z|01198|binding|INFO|Setting lport 623e70ba-12bf-443e-abb2-f259acba930d down in Southbound
Jan 26 16:17:46 compute-0 ovn_controller[146046]: 2026-01-26T16:17:46Z|01199|binding|INFO|Removing iface tap623e70ba-12 ovn-installed in OVS
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.001 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.009 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:02:a7 10.100.0.4'], port_security=['fa:16:3e:4b:02:a7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '311f59c2-3926-4794-b450-819f63642a6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e31774a-f33d-4386-89f9-6e7af3d4c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e91b07fc-e506-4aaa-a11a-7d624ebfb418, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=623e70ba-12bf-443e-abb2-f259acba930d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.010 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 623e70ba-12bf-443e-abb2-f259acba930d in datapath 96c2fafe-7dfa-4123-80bb-6807f1870aaa unbound from our chassis
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.011 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 96c2fafe-7dfa-4123-80bb-6807f1870aaa
Jan 26 16:17:46 compute-0 kernel: tap8e934ddc-75 (unregistering): left promiscuous mode
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.018 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 NetworkManager[48954]: <info>  [1769444266.0212] device (tap8e934ddc-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.028 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[477fd892-bb13-43f8-afa8-64012be6af81]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 ovn_controller[146046]: 2026-01-26T16:17:46Z|01200|binding|INFO|Releasing lport 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 from this chassis (sb_readonly=0)
Jan 26 16:17:46 compute-0 ovn_controller[146046]: 2026-01-26T16:17:46Z|01201|binding|INFO|Setting lport 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 down in Southbound
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.032 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 ovn_controller[146046]: 2026-01-26T16:17:46Z|01202|binding|INFO|Removing iface tap8e934ddc-75 ovn-installed in OVS
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.035 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.040 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bc:5b:31 2001:db8::f816:3eff:febc:5b31'], port_security=['fa:16:3e:bc:5b:31 2001:db8::f816:3eff:febc:5b31'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:febc:5b31/64', 'neutron:device_id': '311f59c2-3926-4794-b450-819f63642a6b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e31774a-f33d-4386-89f9-6e7af3d4c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a8893da-ccb7-4082-92ad-5c7938360303, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.047 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.064 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[33f42482-8de5-4de4-b46b-c361c6a51de4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.068 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[caa76962-ffa3-46b2-90f9-62227b822b9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000072.scope: Deactivated successfully.
Jan 26 16:17:46 compute-0 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000072.scope: Consumed 14.059s CPU time.
Jan 26 16:17:46 compute-0 systemd-machined[208061]: Machine qemu-142-instance-00000072 terminated.
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.101 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9f014e-606a-4aa0-bb58-b983a8a47ce6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.120 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[401eb64a-4f9d-4c03-b9b5-fc3fc95aec8f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap96c2fafe-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:00:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 341], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568693, 'reachable_time': 41308, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341167, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.136 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[edd25ce5-ae76-4812-b3f7-575df63a4a5a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap96c2fafe-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568706, 'tstamp': 568706}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341168, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap96c2fafe-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568709, 'tstamp': 568709}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341168, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.137 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96c2fafe-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.144 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.145 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap96c2fafe-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.146 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.146 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap96c2fafe-70, col_values=(('external_ids', {'iface-id': 'f840b217-53ba-4982-acb3-2dea2bac979e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.146 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.148 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 in datapath 4b542b76-8e6a-4678-aea6-ecc3e7938bb4 unbound from our chassis
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.150 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4b542b76-8e6a-4678-aea6-ecc3e7938bb4
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.164 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[be4a7bac-626c-416d-840a-076e4c4d04b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 NetworkManager[48954]: <info>  [1769444266.1851] manager: (tap8e934ddc-75): new Tun device (/org/freedesktop/NetworkManager/Devices/496)
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.193 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e2f20e73-fbb1-4559-9b22-b78205041d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.196 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[902ce3e2-3d21-4b0c-9db1-7dd07a5cb4c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.199 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.202 239969 INFO nova.virt.libvirt.driver [-] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Instance destroyed successfully.
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.202 239969 DEBUG nova.objects.instance [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 311f59c2-3926-4794-b450-819f63642a6b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.217 239969 DEBUG nova.virt.libvirt.vif [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:17:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-16933863',display_name='tempest-TestGettingAddress-server-16933863',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-16933863',id=114,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:17:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-zluaa16m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:17:19Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=311f59c2-3926-4794-b450-819f63642a6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.218 239969 DEBUG nova.network.os_vif_util [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.218 239969 DEBUG nova.network.os_vif_util [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4b:02:a7,bridge_name='br-int',has_traffic_filtering=True,id=623e70ba-12bf-443e-abb2-f259acba930d,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623e70ba-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.219 239969 DEBUG os_vif [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:02:a7,bridge_name='br-int',has_traffic_filtering=True,id=623e70ba-12bf-443e-abb2-f259acba930d,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623e70ba-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.221 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.221 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap623e70ba-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.223 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.224 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.226 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.226 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bd320991-42b1-49c5-8ced-030291275c7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.228 239969 INFO os_vif [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4b:02:a7,bridge_name='br-int',has_traffic_filtering=True,id=623e70ba-12bf-443e-abb2-f259acba930d,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap623e70ba-12')
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.229 239969 DEBUG nova.virt.libvirt.vif [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:17:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-16933863',display_name='tempest-TestGettingAddress-server-16933863',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-16933863',id=114,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:17:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-zluaa16m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:17:19Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=311f59c2-3926-4794-b450-819f63642a6b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.229 239969 DEBUG nova.network.os_vif_util [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.230 239969 DEBUG nova.network.os_vif_util [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bc:5b:31,bridge_name='br-int',has_traffic_filtering=True,id=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e934ddc-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.230 239969 DEBUG os_vif [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:5b:31,bridge_name='br-int',has_traffic_filtering=True,id=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e934ddc-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.231 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e934ddc-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.235 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.236 239969 INFO os_vif [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bc:5b:31,bridge_name='br-int',has_traffic_filtering=True,id=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e934ddc-75')
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.243 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[357b18c5-5c90-46b6-9723-ae152618891a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4b542b76-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:7a:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 28, 'tx_packets': 5, 'rx_bytes': 2472, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568803, 'reachable_time': 27330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341200, 'error': None, 'target': 'ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.262 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0aa956-daf5-455d-a51f-465fe0d86ff4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4b542b76-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 568816, 'tstamp': 568816}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 341211, 'error': None, 'target': 'ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.264 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b542b76-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.266 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4b542b76-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.266 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.267 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4b542b76-80, col_values=(('external_ids', {'iface-id': '61b5c2b4-c9ba-4399-9641-b64f1ba83437'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:46.267 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 611 KiB/s wr, 60 op/s
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.495 239969 INFO nova.virt.libvirt.driver [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Deleting instance files /var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b_del
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.496 239969 INFO nova.virt.libvirt.driver [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Deletion of /var/lib/nova/instances/311f59c2-3926-4794-b450-819f63642a6b_del complete
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.536 239969 INFO nova.compute.manager [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Took 0.59 seconds to destroy the instance on the hypervisor.
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.536 239969 DEBUG oslo.service.loopingcall [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.537 239969 DEBUG nova.compute.manager [-] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:17:46 compute-0 nova_compute[239965]: 2026-01-26 16:17:46.537 239969 DEBUG nova.network.neutron [-] [instance: 311f59c2-3926-4794-b450-819f63642a6b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.382 239969 DEBUG nova.compute.manager [req-5e0155a1-96d0-4d53-9012-2d08aa458833 req-87525e3b-2d45-4d3c-9b1f-35e687fc41a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-vif-deleted-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.383 239969 INFO nova.compute.manager [req-5e0155a1-96d0-4d53-9012-2d08aa458833 req-87525e3b-2d45-4d3c-9b1f-35e687fc41a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Neutron deleted interface 8e934ddc-7506-48e6-9e44-13ca6c5d5bb2; detaching it from the instance and deleting it from the info cache
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.383 239969 DEBUG nova.network.neutron [req-5e0155a1-96d0-4d53-9012-2d08aa458833 req-87525e3b-2d45-4d3c-9b1f-35e687fc41a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updating instance_info_cache with network_info: [{"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.211", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.408 239969 DEBUG nova.compute.manager [req-5e0155a1-96d0-4d53-9012-2d08aa458833 req-87525e3b-2d45-4d3c-9b1f-35e687fc41a7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Detach interface failed, port_id=8e934ddc-7506-48e6-9e44-13ca6c5d5bb2, reason: Instance 311f59c2-3926-4794-b450-819f63642a6b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:17:47 compute-0 ceph-mon[75140]: pgmap v2012: 305 pgs: 305 active+clean; 326 MiB data, 926 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 611 KiB/s wr, 60 op/s
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.475 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.784 239969 DEBUG nova.network.neutron [-] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.801 239969 INFO nova.compute.manager [-] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Took 1.26 seconds to deallocate network for instance.
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.851 239969 DEBUG oslo_concurrency.lockutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.851 239969 DEBUG oslo_concurrency.lockutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.886 239969 DEBUG nova.scheduler.client.report [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.909 239969 DEBUG nova.scheduler.client.report [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.910 239969 DEBUG nova.compute.provider_tree [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.931 239969 DEBUG nova.scheduler.client.report [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.955 239969 DEBUG nova.scheduler.client.report [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.964 239969 DEBUG nova.network.neutron [req-c6faf016-31c4-4ae8-96db-2ee968da1c93 req-dc2e49ae-0486-4cd6-ba94-3e9f23bda46d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updated VIF entry in instance network info cache for port 623e70ba-12bf-443e-abb2-f259acba930d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.964 239969 DEBUG nova.network.neutron [req-c6faf016-31c4-4ae8-96db-2ee968da1c93 req-dc2e49ae-0486-4cd6-ba94-3e9f23bda46d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Updating instance_info_cache with network_info: [{"id": "623e70ba-12bf-443e-abb2-f259acba930d", "address": "fa:16:3e:4b:02:a7", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap623e70ba-12", "ovs_interfaceid": "623e70ba-12bf-443e-abb2-f259acba930d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "address": "fa:16:3e:bc:5b:31", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:febc:5b31", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e934ddc-75", "ovs_interfaceid": "8e934ddc-7506-48e6-9e44-13ca6c5d5bb2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:47 compute-0 nova_compute[239965]: 2026-01-26 16:17:47.994 239969 DEBUG oslo_concurrency.lockutils [req-c6faf016-31c4-4ae8-96db-2ee968da1c93 req-dc2e49ae-0486-4cd6-ba94-3e9f23bda46d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-311f59c2-3926-4794-b450-819f63642a6b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.022 239969 DEBUG oslo_concurrency.processutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.186 239969 DEBUG nova.compute.manager [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-vif-unplugged-623e70ba-12bf-443e-abb2-f259acba930d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.186 239969 DEBUG oslo_concurrency.lockutils [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.186 239969 DEBUG oslo_concurrency.lockutils [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.186 239969 DEBUG oslo_concurrency.lockutils [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.187 239969 DEBUG nova.compute.manager [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] No waiting events found dispatching network-vif-unplugged-623e70ba-12bf-443e-abb2-f259acba930d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.187 239969 WARNING nova.compute.manager [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received unexpected event network-vif-unplugged-623e70ba-12bf-443e-abb2-f259acba930d for instance with vm_state deleted and task_state None.
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.187 239969 DEBUG nova.compute.manager [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-vif-plugged-623e70ba-12bf-443e-abb2-f259acba930d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.187 239969 DEBUG oslo_concurrency.lockutils [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.187 239969 DEBUG oslo_concurrency.lockutils [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.187 239969 DEBUG oslo_concurrency.lockutils [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.188 239969 DEBUG nova.compute.manager [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] No waiting events found dispatching network-vif-plugged-623e70ba-12bf-443e-abb2-f259acba930d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.188 239969 WARNING nova.compute.manager [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received unexpected event network-vif-plugged-623e70ba-12bf-443e-abb2-f259acba930d for instance with vm_state deleted and task_state None.
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.188 239969 DEBUG nova.compute.manager [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-vif-plugged-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.188 239969 DEBUG oslo_concurrency.lockutils [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "311f59c2-3926-4794-b450-819f63642a6b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.188 239969 DEBUG oslo_concurrency.lockutils [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.188 239969 DEBUG oslo_concurrency.lockutils [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.188 239969 DEBUG nova.compute.manager [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] No waiting events found dispatching network-vif-plugged-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.188 239969 WARNING nova.compute.manager [req-773c47c1-4c79-4804-9941-749e80126cdf req-dea2c87b-de6e-4b87-a51b-2ef3f1ca639f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received unexpected event network-vif-plugged-8e934ddc-7506-48e6-9e44-13ca6c5d5bb2 for instance with vm_state deleted and task_state None.
Jan 26 16:17:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 259 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 396 KiB/s wr, 76 op/s
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:17:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:17:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1113477160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.588 239969 DEBUG oslo_concurrency.processutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.595 239969 DEBUG nova.compute.provider_tree [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.612 239969 DEBUG nova.scheduler.client.report [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.637 239969 DEBUG oslo_concurrency.lockutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.661 239969 INFO nova.scheduler.client.report [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 311f59c2-3926-4794-b450-819f63642a6b
Jan 26 16:17:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:17:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/937790106' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:17:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:17:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/937790106' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.743 239969 DEBUG oslo_concurrency.lockutils [None req-b32b4ce3-0145-4bf2-a046-e8763c7ea637 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "311f59c2-3926-4794-b450-819f63642a6b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.823 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "d37c9efe-0572-4b48-9bce-6af473740fc4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.823 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.844 239969 DEBUG nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.964 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.964 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.970 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:17:48 compute-0 nova_compute[239965]: 2026-01-26 16:17:48.971 239969 INFO nova.compute.claims [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001620649305600599 of space, bias 1.0, pg target 0.48619479168017965 quantized to 32 (current 32)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010184526614257271 of space, bias 1.0, pg target 0.3055357984277181 quantized to 32 (current 32)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.110794920889344e-07 of space, bias 4.0, pg target 0.0008532953905067214 quantized to 16 (current 16)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.173 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:49 compute-0 sudo[341244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:17:49 compute-0 sudo[341244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:17:49 compute-0 sudo[341244]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:49 compute-0 sudo[341269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:17:49 compute-0 sudo[341269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:17:49 compute-0 ceph-mon[75140]: pgmap v2013: 305 pgs: 305 active+clean; 259 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 396 KiB/s wr, 76 op/s
Jan 26 16:17:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1113477160' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/937790106' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:17:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/937790106' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.476 239969 DEBUG nova.compute.manager [req-c128393f-f675-4f85-9af0-7bdf267bd530 req-32069560-f221-44e9-895e-d43ba518d59d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Received event network-vif-deleted-623e70ba-12bf-443e-abb2-f259acba930d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.477 239969 INFO nova.compute.manager [req-c128393f-f675-4f85-9af0-7bdf267bd530 req-32069560-f221-44e9-895e-d43ba518d59d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Neutron deleted interface 623e70ba-12bf-443e-abb2-f259acba930d; detaching it from the instance and deleting it from the info cache
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.477 239969 DEBUG nova.network.neutron [req-c128393f-f675-4f85-9af0-7bdf267bd530 req-32069560-f221-44e9-895e-d43ba518d59d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.479 239969 DEBUG nova.compute.manager [req-c128393f-f675-4f85-9af0-7bdf267bd530 req-32069560-f221-44e9-895e-d43ba518d59d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Detach interface failed, port_id=623e70ba-12bf-443e-abb2-f259acba930d, reason: Instance 311f59c2-3926-4794-b450-819f63642a6b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:17:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:17:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3584151978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.788 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.794 239969 DEBUG nova.compute.provider_tree [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.809 239969 DEBUG nova.scheduler.client.report [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.832 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.868s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.833 239969 DEBUG nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.874 239969 DEBUG nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.874 239969 DEBUG nova.network.neutron [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.895 239969 INFO nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:17:49 compute-0 nova_compute[239965]: 2026-01-26 16:17:49.911 239969 DEBUG nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:17:49 compute-0 sudo[341269]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:17:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:17:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:17:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:17:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:17:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.007 239969 DEBUG nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:17:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:17:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.008 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.009 239969 INFO nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Creating image(s)
Jan 26 16:17:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:17:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:17:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:17:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.032 239969 DEBUG nova.storage.rbd_utils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image d37c9efe-0572-4b48-9bce-6af473740fc4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.055 239969 DEBUG nova.storage.rbd_utils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image d37c9efe-0572-4b48-9bce-6af473740fc4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:50 compute-0 sudo[341352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:17:50 compute-0 sudo[341352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.079 239969 DEBUG nova.storage.rbd_utils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image d37c9efe-0572-4b48-9bce-6af473740fc4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:50 compute-0 sudo[341352]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.083 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.121 239969 DEBUG nova.policy [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:17:50 compute-0 sudo[341425]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:17:50 compute-0 sudo[341425]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.161 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.162 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.162 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.163 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.183 239969 DEBUG nova.storage.rbd_utils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image d37c9efe-0572-4b48-9bce-6af473740fc4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.186 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d37c9efe-0572-4b48-9bce-6af473740fc4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 259 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 34 KiB/s wr, 27 op/s
Jan 26 16:17:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3584151978' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:17:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:17:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:17:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:17:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:17:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:17:50 compute-0 podman[341501]: 2026-01-26 16:17:50.559780857 +0000 UTC m=+0.108948573 container create 1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:17:50 compute-0 podman[341501]: 2026-01-26 16:17:50.474620187 +0000 UTC m=+0.023787933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.591 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 d37c9efe-0572-4b48-9bce-6af473740fc4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.406s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:50 compute-0 systemd[1]: Started libpod-conmon-1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187.scope.
Jan 26 16:17:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.656 239969 DEBUG nova.storage.rbd_utils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image d37c9efe-0572-4b48-9bce-6af473740fc4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:17:50 compute-0 podman[341501]: 2026-01-26 16:17:50.664036003 +0000 UTC m=+0.213203739 container init 1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:17:50 compute-0 podman[341501]: 2026-01-26 16:17:50.671511397 +0000 UTC m=+0.220679113 container start 1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:17:50 compute-0 podman[341501]: 2026-01-26 16:17:50.676360665 +0000 UTC m=+0.225528401 container attach 1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:17:50 compute-0 cool_swartz[341533]: 167 167
Jan 26 16:17:50 compute-0 systemd[1]: libpod-1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187.scope: Deactivated successfully.
Jan 26 16:17:50 compute-0 conmon[341533]: conmon 1810631ac3d9d2a19b49 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187.scope/container/memory.events
Jan 26 16:17:50 compute-0 podman[341576]: 2026-01-26 16:17:50.730336794 +0000 UTC m=+0.029927853 container died 1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.743 239969 DEBUG nova.objects.instance [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid d37c9efe-0572-4b48-9bce-6af473740fc4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-76bde7fb7ebe7b1cc099bbad4c5cc520e4b0e0f6a18d2454891a1d3030bf78a3-merged.mount: Deactivated successfully.
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.758 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.759 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Ensure instance console log exists: /var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.760 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.761 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:50 compute-0 nova_compute[239965]: 2026-01-26 16:17:50.761 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:50 compute-0 podman[341576]: 2026-01-26 16:17:50.778329546 +0000 UTC m=+0.077920585 container remove 1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:17:50 compute-0 systemd[1]: libpod-conmon-1810631ac3d9d2a19b49e42504977f1d815667d36397e1152b74291a512c3187.scope: Deactivated successfully.
Jan 26 16:17:50 compute-0 podman[341616]: 2026-01-26 16:17:50.986700397 +0000 UTC m=+0.044954769 container create 7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:17:51 compute-0 systemd[1]: Started libpod-conmon-7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1.scope.
Jan 26 16:17:51 compute-0 podman[341616]: 2026-01-26 16:17:50.967430366 +0000 UTC m=+0.025684758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:17:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4792122174da2504f25beb4269a410a1ee54be11d2fc499b2d432b0517fd33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4792122174da2504f25beb4269a410a1ee54be11d2fc499b2d432b0517fd33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4792122174da2504f25beb4269a410a1ee54be11d2fc499b2d432b0517fd33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4792122174da2504f25beb4269a410a1ee54be11d2fc499b2d432b0517fd33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f4792122174da2504f25beb4269a410a1ee54be11d2fc499b2d432b0517fd33/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:51 compute-0 podman[341616]: 2026-01-26 16:17:51.088121304 +0000 UTC m=+0.146375706 container init 7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:17:51 compute-0 podman[341616]: 2026-01-26 16:17:51.097283618 +0000 UTC m=+0.155538000 container start 7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:17:51 compute-0 podman[341616]: 2026-01-26 16:17:51.102810893 +0000 UTC m=+0.161065275 container attach 7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.231 239969 DEBUG nova.compute.manager [req-3b10737f-664d-4395-857b-01605064336b req-c1a69b7a-f47f-4301-98b1-bb75ac38e7c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-changed-1751925c-4e5b-4436-af71-dae4e4c07755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.283 239969 DEBUG nova.compute.manager [req-3b10737f-664d-4395-857b-01605064336b req-c1a69b7a-f47f-4301-98b1-bb75ac38e7c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Refreshing instance network info cache due to event network-changed-1751925c-4e5b-4436-af71-dae4e4c07755. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.284 239969 DEBUG oslo_concurrency.lockutils [req-3b10737f-664d-4395-857b-01605064336b req-c1a69b7a-f47f-4301-98b1-bb75ac38e7c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.284 239969 DEBUG oslo_concurrency.lockutils [req-3b10737f-664d-4395-857b-01605064336b req-c1a69b7a-f47f-4301-98b1-bb75ac38e7c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.285 239969 DEBUG nova.network.neutron [req-3b10737f-664d-4395-857b-01605064336b req-c1a69b7a-f47f-4301-98b1-bb75ac38e7c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Refreshing network info cache for port 1751925c-4e5b-4436-af71-dae4e4c07755 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.286 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.310 239969 DEBUG oslo_concurrency.lockutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.311 239969 DEBUG oslo_concurrency.lockutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.316 239969 DEBUG oslo_concurrency.lockutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.316 239969 DEBUG oslo_concurrency.lockutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.316 239969 DEBUG oslo_concurrency.lockutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.318 239969 INFO nova.compute.manager [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Terminating instance
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.319 239969 DEBUG nova.compute.manager [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:17:51 compute-0 kernel: tap1751925c-4e (unregistering): left promiscuous mode
Jan 26 16:17:51 compute-0 NetworkManager[48954]: <info>  [1769444271.3669] device (tap1751925c-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 ovn_controller[146046]: 2026-01-26T16:17:51Z|01203|binding|INFO|Releasing lport 1751925c-4e5b-4436-af71-dae4e4c07755 from this chassis (sb_readonly=0)
Jan 26 16:17:51 compute-0 ovn_controller[146046]: 2026-01-26T16:17:51Z|01204|binding|INFO|Setting lport 1751925c-4e5b-4436-af71-dae4e4c07755 down in Southbound
Jan 26 16:17:51 compute-0 ovn_controller[146046]: 2026-01-26T16:17:51Z|01205|binding|INFO|Removing iface tap1751925c-4e ovn-installed in OVS
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.379 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.384 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:92:3c:80 10.100.0.10'], port_security=['fa:16:3e:92:3c:80 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '89d77454-1c5d-4556-88b6-e5d4023c8c36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e31774a-f33d-4386-89f9-6e7af3d4c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e91b07fc-e506-4aaa-a11a-7d624ebfb418, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1751925c-4e5b-4436-af71-dae4e4c07755) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.386 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1751925c-4e5b-4436-af71-dae4e4c07755 in datapath 96c2fafe-7dfa-4123-80bb-6807f1870aaa unbound from our chassis
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.388 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 96c2fafe-7dfa-4123-80bb-6807f1870aaa, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.390 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[686cf374-5f52-408a-9fd7-7ec0b4cd928a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.390 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa namespace which is not needed anymore
Jan 26 16:17:51 compute-0 kernel: tape462f031-25 (unregistering): left promiscuous mode
Jan 26 16:17:51 compute-0 NetworkManager[48954]: <info>  [1769444271.3977] device (tape462f031-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.400 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.411 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 ovn_controller[146046]: 2026-01-26T16:17:51Z|01206|binding|INFO|Releasing lport e462f031-2502-4089-9f71-ba99ec9f5c8d from this chassis (sb_readonly=0)
Jan 26 16:17:51 compute-0 ovn_controller[146046]: 2026-01-26T16:17:51Z|01207|binding|INFO|Setting lport e462f031-2502-4089-9f71-ba99ec9f5c8d down in Southbound
Jan 26 16:17:51 compute-0 ovn_controller[146046]: 2026-01-26T16:17:51Z|01208|binding|INFO|Removing iface tape462f031-25 ovn-installed in OVS
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.413 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.420 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:10:11:fa 2001:db8::f816:3eff:fe10:11fa'], port_security=['fa:16:3e:10:11:fa 2001:db8::f816:3eff:fe10:11fa'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe10:11fa/64', 'neutron:device_id': '89d77454-1c5d-4556-88b6-e5d4023c8c36', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0e31774a-f33d-4386-89f9-6e7af3d4c795', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7a8893da-ccb7-4082-92ad-5c7938360303, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=e462f031-2502-4089-9f71-ba99ec9f5c8d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.434 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Deactivated successfully.
Jan 26 16:17:51 compute-0 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Consumed 16.231s CPU time.
Jan 26 16:17:51 compute-0 systemd-machined[208061]: Machine qemu-140-instance-00000071 terminated.
Jan 26 16:17:51 compute-0 ceph-mon[75140]: pgmap v2014: 305 pgs: 305 active+clean; 259 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 34 KiB/s wr, 27 op/s
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.532 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa[339436]: [NOTICE]   (339440) : haproxy version is 2.8.14-c23fe91
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa[339436]: [NOTICE]   (339440) : path to executable is /usr/sbin/haproxy
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa[339436]: [WARNING]  (339440) : Exiting Master process...
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa[339436]: [WARNING]  (339440) : Exiting Master process...
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa[339436]: [ALERT]    (339440) : Current worker (339442) exited with code 143 (Terminated)
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa[339436]: [WARNING]  (339440) : All workers exited. Exiting... (0)
Jan 26 16:17:51 compute-0 systemd[1]: libpod-f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c.scope: Deactivated successfully.
Jan 26 16:17:51 compute-0 podman[341675]: 2026-01-26 16:17:51.546936613 +0000 UTC m=+0.052247177 container died f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:17:51 compute-0 NetworkManager[48954]: <info>  [1769444271.5588] manager: (tape462f031-25): new Tun device (/org/freedesktop/NetworkManager/Devices/497)
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.580 239969 INFO nova.virt.libvirt.driver [-] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Instance destroyed successfully.
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.583 239969 DEBUG nova.objects.instance [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 89d77454-1c5d-4556-88b6-e5d4023c8c36 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:17:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c-userdata-shm.mount: Deactivated successfully.
Jan 26 16:17:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-5628a522201256fbf13874cb4c9c09a6e67fd51712a4a84f6cb8b2e3b4491700-merged.mount: Deactivated successfully.
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.598 239969 DEBUG nova.virt.libvirt.vif [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-9655236',display_name='tempest-TestGettingAddress-server-9655236',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-9655236',id=113,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:16:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-kezm37dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:16:38Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=89d77454-1c5d-4556-88b6-e5d4023c8c36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.599 239969 DEBUG nova.network.os_vif_util [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.600 239969 DEBUG nova.network.os_vif_util [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:92:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=1751925c-4e5b-4436-af71-dae4e4c07755,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1751925c-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.600 239969 DEBUG os_vif [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=1751925c-4e5b-4436-af71-dae4e4c07755,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1751925c-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:17:51 compute-0 podman[341675]: 2026-01-26 16:17:51.601394654 +0000 UTC m=+0.106705208 container cleanup f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.602 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.603 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1751925c-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.608 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 systemd[1]: libpod-conmon-f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c.scope: Deactivated successfully.
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.612 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.613 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.616 239969 INFO os_vif [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:92:3c:80,bridge_name='br-int',has_traffic_filtering=True,id=1751925c-4e5b-4436-af71-dae4e4c07755,network=Network(96c2fafe-7dfa-4123-80bb-6807f1870aaa),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1751925c-4e')
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.617 239969 DEBUG nova.virt.libvirt.vif [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:16:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-9655236',display_name='tempest-TestGettingAddress-server-9655236',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-9655236',id=113,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBH15mfCf163A1QbfTaXuR54BflHpaWebtcCFKLbElmwMma07W8Dr24ff/LIXiYDE7MCaURfuMy4ql5GPdDHSAVAPtvXLIqBDdEa7wAHGpsKoM0uA4YWynEtBDWpo6lK7nw==',key_name='tempest-TestGettingAddress-1300583488',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:16:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-kezm37dn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:16:38Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=89d77454-1c5d-4556-88b6-e5d4023c8c36,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.617 239969 DEBUG nova.network.os_vif_util [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.619 239969 DEBUG nova.network.os_vif_util [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:10:11:fa,bridge_name='br-int',has_traffic_filtering=True,id=e462f031-2502-4089-9f71-ba99ec9f5c8d,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape462f031-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.619 239969 DEBUG os_vif [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:11:fa,bridge_name='br-int',has_traffic_filtering=True,id=e462f031-2502-4089-9f71-ba99ec9f5c8d,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape462f031-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.620 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 admiring_vaughan[341632]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:17:51 compute-0 admiring_vaughan[341632]: --> All data devices are unavailable
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.621 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape462f031-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.624 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.627 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.630 239969 INFO os_vif [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:10:11:fa,bridge_name='br-int',has_traffic_filtering=True,id=e462f031-2502-4089-9f71-ba99ec9f5c8d,network=Network(4b542b76-8e6a-4678-aea6-ecc3e7938bb4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape462f031-25')
Jan 26 16:17:51 compute-0 systemd[1]: libpod-7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1.scope: Deactivated successfully.
Jan 26 16:17:51 compute-0 conmon[341632]: conmon 7b34734ead3cd5774885 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1.scope/container/memory.events
Jan 26 16:17:51 compute-0 podman[341616]: 2026-01-26 16:17:51.660408406 +0000 UTC m=+0.718662798 container died 7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_vaughan, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:17:51 compute-0 podman[341730]: 2026-01-26 16:17:51.675636258 +0000 UTC m=+0.046753134 container remove f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.681 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5770aa-8265-4e90-b800-185b911fcae3]: (4, ('Mon Jan 26 04:17:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa (f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c)\nf60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c\nMon Jan 26 04:17:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa (f60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c)\nf60a9622a43bad992d8e762060bd32189e593b2fd8c7e3117b66b7385be8c15c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.684 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1540da82-2e50-444b-8a63-680fc398aaa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.685 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap96c2fafe-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.688 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f4792122174da2504f25beb4269a410a1ee54be11d2fc499b2d432b0517fd33-merged.mount: Deactivated successfully.
Jan 26 16:17:51 compute-0 kernel: tap96c2fafe-70: left promiscuous mode
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.706 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.710 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[27f4e7e9-78c3-4e23-9880-a051eb2d9e5f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:51 compute-0 podman[341616]: 2026-01-26 16:17:51.720489733 +0000 UTC m=+0.778744105 container remove 7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_vaughan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 16:17:51 compute-0 systemd[1]: libpod-conmon-7b34734ead3cd5774885af846dbdc0df6cf52efc71c3f0e1b62a0f47dfbd34e1.scope: Deactivated successfully.
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.732 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0b03f1e3-0a70-4e90-9a47-7297d88e1760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.734 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2ddf7a76-e86e-4454-83b2-e90a22eb38b4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.768 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5c120fe8-5c90-443a-a859-150eec76e126]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568684, 'reachable_time': 17599, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341776, 'error': None, 'target': 'ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.770 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-96c2fafe-7dfa-4123-80bb-6807f1870aaa deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.771 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[8a4db7d4-1b58-4290-bfe5-5e021497c66b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d96c2fafe\x2d7dfa\x2d4123\x2d80bb\x2d6807f1870aaa.mount: Deactivated successfully.
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.771 156105 INFO neutron.agent.ovn.metadata.agent [-] Port e462f031-2502-4089-9f71-ba99ec9f5c8d in datapath 4b542b76-8e6a-4678-aea6-ecc3e7938bb4 unbound from our chassis
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.773 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4b542b76-8e6a-4678-aea6-ecc3e7938bb4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.774 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cc07979f-7428-48ed-bfa5-a8b3aba0192e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:51.774 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4 namespace which is not needed anymore
Jan 26 16:17:51 compute-0 sudo[341425]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:51 compute-0 sudo[341786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.893 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.894 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:51 compute-0 sudo[341786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.894 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.895 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:17:51 compute-0 sudo[341786]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.903 239969 DEBUG nova.compute.manager [req-57c1bdd4-f6e8-4421-b58c-ea6912f2db7e req-807a27ee-e61a-487b-939f-e6c2c47cbe06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-unplugged-1751925c-4e5b-4436-af71-dae4e4c07755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.904 239969 DEBUG oslo_concurrency.lockutils [req-57c1bdd4-f6e8-4421-b58c-ea6912f2db7e req-807a27ee-e61a-487b-939f-e6c2c47cbe06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.905 239969 DEBUG oslo_concurrency.lockutils [req-57c1bdd4-f6e8-4421-b58c-ea6912f2db7e req-807a27ee-e61a-487b-939f-e6c2c47cbe06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.905 239969 DEBUG oslo_concurrency.lockutils [req-57c1bdd4-f6e8-4421-b58c-ea6912f2db7e req-807a27ee-e61a-487b-939f-e6c2c47cbe06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.905 239969 DEBUG nova.compute.manager [req-57c1bdd4-f6e8-4421-b58c-ea6912f2db7e req-807a27ee-e61a-487b-939f-e6c2c47cbe06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] No waiting events found dispatching network-vif-unplugged-1751925c-4e5b-4436-af71-dae4e4c07755 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.905 239969 DEBUG nova.compute.manager [req-57c1bdd4-f6e8-4421-b58c-ea6912f2db7e req-807a27ee-e61a-487b-939f-e6c2c47cbe06 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-unplugged-1751925c-4e5b-4436-af71-dae4e4c07755 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4[339510]: [NOTICE]   (339514) : haproxy version is 2.8.14-c23fe91
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4[339510]: [NOTICE]   (339514) : path to executable is /usr/sbin/haproxy
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4[339510]: [WARNING]  (339514) : Exiting Master process...
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4[339510]: [ALERT]    (339514) : Current worker (339516) exited with code 143 (Terminated)
Jan 26 16:17:51 compute-0 neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4[339510]: [WARNING]  (339514) : All workers exited. Exiting... (0)
Jan 26 16:17:51 compute-0 systemd[1]: libpod-a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d.scope: Deactivated successfully.
Jan 26 16:17:51 compute-0 podman[341815]: 2026-01-26 16:17:51.925496952 +0000 UTC m=+0.050564157 container died a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:17:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cf1f6239a7568407f7bccf46ede043b5503f750377318541ec00af8b0dd4316-merged.mount: Deactivated successfully.
Jan 26 16:17:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d-userdata-shm.mount: Deactivated successfully.
Jan 26 16:17:51 compute-0 sudo[341828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:17:51 compute-0 podman[341815]: 2026-01-26 16:17:51.967767784 +0000 UTC m=+0.092834989 container cleanup a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 16:17:51 compute-0 sudo[341828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:17:51 compute-0 systemd[1]: libpod-conmon-a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d.scope: Deactivated successfully.
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.982 239969 INFO nova.virt.libvirt.driver [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Deleting instance files /var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36_del
Jan 26 16:17:51 compute-0 nova_compute[239965]: 2026-01-26 16:17:51.982 239969 INFO nova.virt.libvirt.driver [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Deletion of /var/lib/nova/instances/89d77454-1c5d-4556-88b6-e5d4023c8c36_del complete
Jan 26 16:17:52 compute-0 podman[341871]: 2026-01-26 16:17:52.037573409 +0000 UTC m=+0.043396131 container remove a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:17:52 compute-0 nova_compute[239965]: 2026-01-26 16:17:52.046 239969 INFO nova.compute.manager [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Took 0.73 seconds to destroy the instance on the hypervisor.
Jan 26 16:17:52 compute-0 nova_compute[239965]: 2026-01-26 16:17:52.046 239969 DEBUG oslo.service.loopingcall [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:17:52 compute-0 nova_compute[239965]: 2026-01-26 16:17:52.046 239969 DEBUG nova.compute.manager [-] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:17:52 compute-0 nova_compute[239965]: 2026-01-26 16:17:52.047 239969 DEBUG nova.network.neutron [-] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:17:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:52.045 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a1fe8bf9-5159-4912-a7a9-59136c307ad3]: (4, ('Mon Jan 26 04:17:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4 (a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d)\na70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d\nMon Jan 26 04:17:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4 (a70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d)\na70ecf8962db209cce96a03f0a1b8907e30bc89ebe33547bcfb4f68867d2f61d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:52.053 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cc514545-3876-4630-98b2-49e341af9704]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:52.054 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4b542b76-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:52 compute-0 kernel: tap4b542b76-80: left promiscuous mode
Jan 26 16:17:52 compute-0 nova_compute[239965]: 2026-01-26 16:17:52.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:52 compute-0 nova_compute[239965]: 2026-01-26 16:17:52.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:52.075 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8778484d-cb90-40c9-a9bb-ca02aa2073b2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:52.091 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6356b6fb-3520-43f1-a765-f5c9879dfb90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:52.092 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06718b3a-0897-46c9-b297-72b63cf60d11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:52.111 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[51ac19c8-950e-4d53-948d-0a44d0ef3568]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 568794, 'reachable_time': 27930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 341886, 'error': None, 'target': 'ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:52.113 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4b542b76-8e6a-4678-aea6-ecc3e7938bb4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:17:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:52.113 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[2dc633f5-be0c-42be-b491-88e3cac36a3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:52 compute-0 podman[341898]: 2026-01-26 16:17:52.278790543 +0000 UTC m=+0.038095822 container create 7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 26 16:17:52 compute-0 systemd[1]: Started libpod-conmon-7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822.scope.
Jan 26 16:17:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:17:52 compute-0 podman[341898]: 2026-01-26 16:17:52.355476466 +0000 UTC m=+0.114781745 container init 7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:17:52 compute-0 podman[341898]: 2026-01-26 16:17:52.262197198 +0000 UTC m=+0.021502487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:17:52 compute-0 podman[341898]: 2026-01-26 16:17:52.36670667 +0000 UTC m=+0.126011939 container start 7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:17:52 compute-0 podman[341898]: 2026-01-26 16:17:52.369880178 +0000 UTC m=+0.129185447 container attach 7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:17:52 compute-0 busy_thompson[341915]: 167 167
Jan 26 16:17:52 compute-0 systemd[1]: libpod-7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822.scope: Deactivated successfully.
Jan 26 16:17:52 compute-0 conmon[341915]: conmon 7eaacfe217bcba52389f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822.scope/container/memory.events
Jan 26 16:17:52 compute-0 podman[341898]: 2026-01-26 16:17:52.373346753 +0000 UTC m=+0.132652042 container died 7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:17:52 compute-0 podman[341898]: 2026-01-26 16:17:52.409630669 +0000 UTC m=+0.168935938 container remove 7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:17:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 240 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 26 16:17:52 compute-0 systemd[1]: libpod-conmon-7eaacfe217bcba52389f419fd23fa9b35b2ecc2c6aad3de8aab5c592b7b32822.scope: Deactivated successfully.
Jan 26 16:17:52 compute-0 nova_compute[239965]: 2026-01-26 16:17:52.475 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:52 compute-0 ceph-mon[75140]: pgmap v2015: 305 pgs: 305 active+clean; 240 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 77 op/s
Jan 26 16:17:52 compute-0 podman[341940]: 2026-01-26 16:17:52.576747402 +0000 UTC m=+0.035699813 container create 78bdb0df0771da1d37ad8ba853264f85cf37422f25d9dc4a87971462fe98304a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_archimedes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:17:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d4b542b76\x2d8e6a\x2d4678\x2daea6\x2decc3e7938bb4.mount: Deactivated successfully.
Jan 26 16:17:52 compute-0 systemd[1]: Started libpod-conmon-78bdb0df0771da1d37ad8ba853264f85cf37422f25d9dc4a87971462fe98304a.scope.
Jan 26 16:17:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b398a336760659fbb64799f120dd4319adb43328308e49291e71a8775d7fad5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b398a336760659fbb64799f120dd4319adb43328308e49291e71a8775d7fad5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b398a336760659fbb64799f120dd4319adb43328308e49291e71a8775d7fad5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b398a336760659fbb64799f120dd4319adb43328308e49291e71a8775d7fad5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:52 compute-0 podman[341940]: 2026-01-26 16:17:52.562271698 +0000 UTC m=+0.021224129 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:17:52 compute-0 podman[341940]: 2026-01-26 16:17:52.665449139 +0000 UTC m=+0.124401550 container init 78bdb0df0771da1d37ad8ba853264f85cf37422f25d9dc4a87971462fe98304a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:17:52 compute-0 podman[341940]: 2026-01-26 16:17:52.671399144 +0000 UTC m=+0.130351555 container start 78bdb0df0771da1d37ad8ba853264f85cf37422f25d9dc4a87971462fe98304a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_archimedes, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:17:52 compute-0 podman[341940]: 2026-01-26 16:17:52.674680614 +0000 UTC m=+0.133633025 container attach 78bdb0df0771da1d37ad8ba853264f85cf37422f25d9dc4a87971462fe98304a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_archimedes, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]: {
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:     "0": [
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:         {
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "devices": [
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "/dev/loop3"
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             ],
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_name": "ceph_lv0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_size": "21470642176",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "name": "ceph_lv0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "tags": {
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.cluster_name": "ceph",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.crush_device_class": "",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.encrypted": "0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.objectstore": "bluestore",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.osd_id": "0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.type": "block",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.vdo": "0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.with_tpm": "0"
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             },
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "type": "block",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "vg_name": "ceph_vg0"
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:         }
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:     ],
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:     "1": [
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:         {
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "devices": [
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "/dev/loop4"
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             ],
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_name": "ceph_lv1",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_size": "21470642176",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "name": "ceph_lv1",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "tags": {
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.cluster_name": "ceph",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.crush_device_class": "",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.encrypted": "0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.objectstore": "bluestore",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.osd_id": "1",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.type": "block",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.vdo": "0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.with_tpm": "0"
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             },
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "type": "block",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "vg_name": "ceph_vg1"
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:         }
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:     ],
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:     "2": [
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:         {
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "devices": [
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "/dev/loop5"
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             ],
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_name": "ceph_lv2",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_size": "21470642176",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "name": "ceph_lv2",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "tags": {
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.cluster_name": "ceph",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.crush_device_class": "",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.encrypted": "0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.objectstore": "bluestore",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.osd_id": "2",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.type": "block",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.vdo": "0",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:                 "ceph.with_tpm": "0"
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             },
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "type": "block",
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:             "vg_name": "ceph_vg2"
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:         }
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]:     ]
Jan 26 16:17:52 compute-0 optimistic_archimedes[341957]: }
Jan 26 16:17:53 compute-0 systemd[1]: libpod-78bdb0df0771da1d37ad8ba853264f85cf37422f25d9dc4a87971462fe98304a.scope: Deactivated successfully.
Jan 26 16:17:53 compute-0 podman[341940]: 2026-01-26 16:17:53.009224517 +0000 UTC m=+0.468176928 container died 78bdb0df0771da1d37ad8ba853264f85cf37422f25d9dc4a87971462fe98304a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_archimedes, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b398a336760659fbb64799f120dd4319adb43328308e49291e71a8775d7fad5e-merged.mount: Deactivated successfully.
Jan 26 16:17:53 compute-0 podman[341940]: 2026-01-26 16:17:53.04577171 +0000 UTC m=+0.504724131 container remove 78bdb0df0771da1d37ad8ba853264f85cf37422f25d9dc4a87971462fe98304a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:17:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:53 compute-0 systemd[1]: libpod-conmon-78bdb0df0771da1d37ad8ba853264f85cf37422f25d9dc4a87971462fe98304a.scope: Deactivated successfully.
Jan 26 16:17:53 compute-0 sudo[341828]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:53 compute-0 sudo[341979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:17:53 compute-0 sudo[341979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:17:53 compute-0 sudo[341979]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:53 compute-0 sudo[342004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:17:53 compute-0 sudo[342004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:17:53 compute-0 nova_compute[239965]: 2026-01-26 16:17:53.473 239969 DEBUG nova.network.neutron [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Successfully created port: 0b8b7ee7-a378-4a72-992b-23094d7e73c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:17:53 compute-0 podman[342041]: 2026-01-26 16:17:53.528654527 +0000 UTC m=+0.044313723 container create 2d714f67719dbfb55494b07dccf5bb04828d31d56dc13c65a9ec419996117bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:17:53 compute-0 systemd[1]: Started libpod-conmon-2d714f67719dbfb55494b07dccf5bb04828d31d56dc13c65a9ec419996117bb1.scope.
Jan 26 16:17:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:17:53 compute-0 podman[342041]: 2026-01-26 16:17:53.510245128 +0000 UTC m=+0.025904424 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:17:53 compute-0 podman[342041]: 2026-01-26 16:17:53.616866442 +0000 UTC m=+0.132525688 container init 2d714f67719dbfb55494b07dccf5bb04828d31d56dc13c65a9ec419996117bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wozniak, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 16:17:53 compute-0 podman[342041]: 2026-01-26 16:17:53.625045512 +0000 UTC m=+0.140704718 container start 2d714f67719dbfb55494b07dccf5bb04828d31d56dc13c65a9ec419996117bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wozniak, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:17:53 compute-0 podman[342041]: 2026-01-26 16:17:53.629198794 +0000 UTC m=+0.144857990 container attach 2d714f67719dbfb55494b07dccf5bb04828d31d56dc13c65a9ec419996117bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:17:53 compute-0 eager_wozniak[342058]: 167 167
Jan 26 16:17:53 compute-0 systemd[1]: libpod-2d714f67719dbfb55494b07dccf5bb04828d31d56dc13c65a9ec419996117bb1.scope: Deactivated successfully.
Jan 26 16:17:53 compute-0 podman[342041]: 2026-01-26 16:17:53.631515741 +0000 UTC m=+0.147174937 container died 2d714f67719dbfb55494b07dccf5bb04828d31d56dc13c65a9ec419996117bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:17:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-694519ffd1301e39da1b3834f37d9e011fb9d3e97fe9c2efebae4bd8c6fa60df-merged.mount: Deactivated successfully.
Jan 26 16:17:53 compute-0 podman[342041]: 2026-01-26 16:17:53.668861362 +0000 UTC m=+0.184520558 container remove 2d714f67719dbfb55494b07dccf5bb04828d31d56dc13c65a9ec419996117bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wozniak, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:17:53 compute-0 systemd[1]: libpod-conmon-2d714f67719dbfb55494b07dccf5bb04828d31d56dc13c65a9ec419996117bb1.scope: Deactivated successfully.
Jan 26 16:17:53 compute-0 podman[342080]: 2026-01-26 16:17:53.868089 +0000 UTC m=+0.041531626 container create b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_snyder, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:17:53 compute-0 systemd[1]: Started libpod-conmon-b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03.scope.
Jan 26 16:17:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:17:53 compute-0 podman[342080]: 2026-01-26 16:17:53.851275419 +0000 UTC m=+0.024718065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28e3fda4bba990b5897c499be56e075e623c776e8fb6b31922ccbd4180ec196/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28e3fda4bba990b5897c499be56e075e623c776e8fb6b31922ccbd4180ec196/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28e3fda4bba990b5897c499be56e075e623c776e8fb6b31922ccbd4180ec196/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c28e3fda4bba990b5897c499be56e075e623c776e8fb6b31922ccbd4180ec196/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:53 compute-0 podman[342080]: 2026-01-26 16:17:53.96718559 +0000 UTC m=+0.140628236 container init b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:17:53 compute-0 podman[342080]: 2026-01-26 16:17:53.972588043 +0000 UTC m=+0.146030669 container start b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_snyder, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:17:53 compute-0 podman[342080]: 2026-01-26 16:17:53.975585546 +0000 UTC m=+0.149028162 container attach b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.385 239969 DEBUG nova.network.neutron [req-3b10737f-664d-4395-857b-01605064336b req-c1a69b7a-f47f-4301-98b1-bb75ac38e7c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updated VIF entry in instance network info cache for port 1751925c-4e5b-4436-af71-dae4e4c07755. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.386 239969 DEBUG nova.network.neutron [req-3b10737f-664d-4395-857b-01605064336b req-c1a69b7a-f47f-4301-98b1-bb75ac38e7c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updating instance_info_cache with network_info: [{"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "address": "fa:16:3e:10:11:fa", "network": {"id": "4b542b76-8e6a-4678-aea6-ecc3e7938bb4", "bridge": "br-int", "label": "tempest-network-smoke--1329365431", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe10:11fa", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape462f031-25", "ovs_interfaceid": "e462f031-2502-4089-9f71-ba99ec9f5c8d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.415 239969 DEBUG oslo_concurrency.lockutils [req-3b10737f-664d-4395-857b-01605064336b req-c1a69b7a-f47f-4301-98b1-bb75ac38e7c7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-89d77454-1c5d-4556-88b6-e5d4023c8c36" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 240 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 26 16:17:54 compute-0 ceph-mon[75140]: pgmap v2016: 305 pgs: 305 active+clean; 240 MiB data, 914 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 26 16:17:54 compute-0 lvm[342177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:17:54 compute-0 lvm[342177]: VG ceph_vg2 finished
Jan 26 16:17:54 compute-0 lvm[342175]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:17:54 compute-0 lvm[342175]: VG ceph_vg1 finished
Jan 26 16:17:54 compute-0 lvm[342174]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:17:54 compute-0 lvm[342174]: VG ceph_vg0 finished
Jan 26 16:17:54 compute-0 compassionate_snyder[342096]: {}
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.745 239969 DEBUG nova.compute.manager [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-plugged-1751925c-4e5b-4436-af71-dae4e4c07755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.746 239969 DEBUG oslo_concurrency.lockutils [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.746 239969 DEBUG oslo_concurrency.lockutils [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.746 239969 DEBUG oslo_concurrency.lockutils [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.746 239969 DEBUG nova.compute.manager [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] No waiting events found dispatching network-vif-plugged-1751925c-4e5b-4436-af71-dae4e4c07755 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.747 239969 WARNING nova.compute.manager [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received unexpected event network-vif-plugged-1751925c-4e5b-4436-af71-dae4e4c07755 for instance with vm_state active and task_state deleting.
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.747 239969 DEBUG nova.compute.manager [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-unplugged-e462f031-2502-4089-9f71-ba99ec9f5c8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.747 239969 DEBUG oslo_concurrency.lockutils [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.747 239969 DEBUG oslo_concurrency.lockutils [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.748 239969 DEBUG oslo_concurrency.lockutils [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.748 239969 DEBUG nova.compute.manager [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] No waiting events found dispatching network-vif-unplugged-e462f031-2502-4089-9f71-ba99ec9f5c8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.748 239969 DEBUG nova.compute.manager [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-unplugged-e462f031-2502-4089-9f71-ba99ec9f5c8d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.748 239969 DEBUG nova.compute.manager [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-plugged-e462f031-2502-4089-9f71-ba99ec9f5c8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.748 239969 DEBUG oslo_concurrency.lockutils [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.749 239969 DEBUG oslo_concurrency.lockutils [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.749 239969 DEBUG oslo_concurrency.lockutils [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.749 239969 DEBUG nova.compute.manager [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] No waiting events found dispatching network-vif-plugged-e462f031-2502-4089-9f71-ba99ec9f5c8d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.749 239969 WARNING nova.compute.manager [req-8c26b067-200b-484c-8b75-116bce2eef2c req-2aae3b15-2fb9-48bb-81c0-2f95355296ad a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received unexpected event network-vif-plugged-e462f031-2502-4089-9f71-ba99ec9f5c8d for instance with vm_state active and task_state deleting.
Jan 26 16:17:54 compute-0 systemd[1]: libpod-b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03.scope: Deactivated successfully.
Jan 26 16:17:54 compute-0 systemd[1]: libpod-b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03.scope: Consumed 1.336s CPU time.
Jan 26 16:17:54 compute-0 podman[342080]: 2026-01-26 16:17:54.770858324 +0000 UTC m=+0.944300970 container died b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_snyder, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:17:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c28e3fda4bba990b5897c499be56e075e623c776e8fb6b31922ccbd4180ec196-merged.mount: Deactivated successfully.
Jan 26 16:17:54 compute-0 podman[342080]: 2026-01-26 16:17:54.824560127 +0000 UTC m=+0.998002743 container remove b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_snyder, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.829 239969 DEBUG nova.compute.manager [req-71c23926-76f3-490e-b59a-a12711349d41 req-2c4a9171-0cd2-4409-aa69-8386746c2e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-deleted-e462f031-2502-4089-9f71-ba99ec9f5c8d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.830 239969 INFO nova.compute.manager [req-71c23926-76f3-490e-b59a-a12711349d41 req-2c4a9171-0cd2-4409-aa69-8386746c2e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Neutron deleted interface e462f031-2502-4089-9f71-ba99ec9f5c8d; detaching it from the instance and deleting it from the info cache
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.830 239969 DEBUG nova.network.neutron [req-71c23926-76f3-490e-b59a-a12711349d41 req-2c4a9171-0cd2-4409-aa69-8386746c2e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updating instance_info_cache with network_info: [{"id": "1751925c-4e5b-4436-af71-dae4e4c07755", "address": "fa:16:3e:92:3c:80", "network": {"id": "96c2fafe-7dfa-4123-80bb-6807f1870aaa", "bridge": "br-int", "label": "tempest-network-smoke--261004171", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1751925c-4e", "ovs_interfaceid": "1751925c-4e5b-4436-af71-dae4e4c07755", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:54 compute-0 systemd[1]: libpod-conmon-b531b89a5d83e35c06dd15d963f64be9c671e5c45288b614e3b5245de3e0ff03.scope: Deactivated successfully.
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.832 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updating instance_info_cache with network_info: [{"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.856 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.857 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.858 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.858 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:17:54 compute-0 nova_compute[239965]: 2026-01-26 16:17:54.860 239969 DEBUG nova.compute.manager [req-71c23926-76f3-490e-b59a-a12711349d41 req-2c4a9171-0cd2-4409-aa69-8386746c2e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Detach interface failed, port_id=e462f031-2502-4089-9f71-ba99ec9f5c8d, reason: Instance 89d77454-1c5d-4556-88b6-e5d4023c8c36 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:17:54 compute-0 sudo[342004]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:17:54 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:17:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:17:54 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:17:54 compute-0 sudo[342191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:17:54 compute-0 sudo[342191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:17:54 compute-0 sudo[342191]: pam_unix(sudo:session): session closed for user root
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.121 239969 DEBUG nova.network.neutron [-] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.141 239969 INFO nova.compute.manager [-] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Took 3.09 seconds to deallocate network for instance.
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.165 239969 DEBUG nova.network.neutron [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Successfully updated port: 0b8b7ee7-a378-4a72-992b-23094d7e73c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.189 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-d37c9efe-0572-4b48-9bce-6af473740fc4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.189 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-d37c9efe-0572-4b48-9bce-6af473740fc4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.189 239969 DEBUG nova.network.neutron [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.192 239969 DEBUG oslo_concurrency.lockutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.192 239969 DEBUG oslo_concurrency.lockutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.284 239969 DEBUG oslo_concurrency.processutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.470 239969 DEBUG nova.network.neutron [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:17:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:17:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3643892869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.857 239969 DEBUG oslo_concurrency.processutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.865 239969 DEBUG nova.compute.provider_tree [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.883 239969 DEBUG nova.scheduler.client.report [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:17:55 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:17:55 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:17:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3643892869' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.906 239969 DEBUG oslo_concurrency.lockutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:55 compute-0 nova_compute[239965]: 2026-01-26 16:17:55.942 239969 INFO nova.scheduler.client.report [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 89d77454-1c5d-4556-88b6-e5d4023c8c36
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.037 239969 DEBUG oslo_concurrency.lockutils [None req-501c5e6d-6765-4093-b4b5-2730deaae8cf 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "89d77454-1c5d-4556-88b6-e5d4023c8c36" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.318 239969 DEBUG nova.network.neutron [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Updating instance_info_cache with network_info: [{"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.347 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-d37c9efe-0572-4b48-9bce-6af473740fc4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.347 239969 DEBUG nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Instance network_info: |[{"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.350 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Start _get_guest_xml network_info=[{"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.354 239969 WARNING nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.359 239969 DEBUG nova.virt.libvirt.host [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.360 239969 DEBUG nova.virt.libvirt.host [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.363 239969 DEBUG nova.virt.libvirt.host [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.363 239969 DEBUG nova.virt.libvirt.host [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.364 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.364 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.365 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.365 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.365 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.365 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.365 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.366 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.366 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.366 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.366 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.366 239969 DEBUG nova.virt.hardware [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.369 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 213 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.532 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.533 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.534 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.625 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.907 239969 DEBUG nova.compute.manager [req-0d6bc676-d153-43f5-8f92-5ae9fa549e34 req-5ba9f370-9429-415f-968c-b40e95938b54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Received event network-vif-deleted-1751925c-4e5b-4436-af71-dae4e4c07755 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.908 239969 DEBUG nova.compute.manager [req-0d6bc676-d153-43f5-8f92-5ae9fa549e34 req-5ba9f370-9429-415f-968c-b40e95938b54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Received event network-changed-0b8b7ee7-a378-4a72-992b-23094d7e73c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.908 239969 DEBUG nova.compute.manager [req-0d6bc676-d153-43f5-8f92-5ae9fa549e34 req-5ba9f370-9429-415f-968c-b40e95938b54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Refreshing instance network info cache due to event network-changed-0b8b7ee7-a378-4a72-992b-23094d7e73c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.909 239969 DEBUG oslo_concurrency.lockutils [req-0d6bc676-d153-43f5-8f92-5ae9fa549e34 req-5ba9f370-9429-415f-968c-b40e95938b54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-d37c9efe-0572-4b48-9bce-6af473740fc4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.909 239969 DEBUG oslo_concurrency.lockutils [req-0d6bc676-d153-43f5-8f92-5ae9fa549e34 req-5ba9f370-9429-415f-968c-b40e95938b54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-d37c9efe-0572-4b48-9bce-6af473740fc4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.909 239969 DEBUG nova.network.neutron [req-0d6bc676-d153-43f5-8f92-5ae9fa549e34 req-5ba9f370-9429-415f-968c-b40e95938b54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Refreshing network info cache for port 0b8b7ee7-a378-4a72-992b-23094d7e73c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:17:56 compute-0 ceph-mon[75140]: pgmap v2017: 305 pgs: 305 active+clean; 213 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 70 KiB/s rd, 1.8 MiB/s wr, 85 op/s
Jan 26 16:17:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:17:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295399663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.960 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:56.979 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:17:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:56.980 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.986 239969 DEBUG nova.storage.rbd_utils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image d37c9efe-0572-4b48-9bce-6af473740fc4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:56 compute-0 nova_compute[239965]: 2026-01-26 16:17:56.991 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.024 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:17:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4237312045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.106 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.176 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.177 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:17:57 compute-0 podman[342307]: 2026-01-26 16:17:57.215874917 +0000 UTC m=+0.062794685 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:17:57 compute-0 podman[342322]: 2026-01-26 16:17:57.240206991 +0000 UTC m=+0.088427051 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.363 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.364 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3448MB free_disk=59.92114012222737GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.365 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.365 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.426 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.426 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance d37c9efe-0572-4b48-9bce-6af473740fc4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.427 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.427 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.478 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.516 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:17:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1479481510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.585 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.587 239969 DEBUG nova.virt.libvirt.vif [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:17:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-547911124',display_name='tempest-TestNetworkBasicOps-server-547911124',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-547911124',id=116,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBExlZz5z+PLI6EC/Sk1VWVN5QL7TCenj++Hb1Vjiz9IR51KbBLpjwX8RiPD1qPfqcw3SxQGkdb/MJLDyDZIGBZkvYCzEhiTpFnMJS9tvnzLtAeZbKPZ9e9uuDISkpARFRg==',key_name='tempest-TestNetworkBasicOps-787215910',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-v371wi2u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:17:49Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=d37c9efe-0572-4b48-9bce-6af473740fc4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.587 239969 DEBUG nova.network.os_vif_util [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.588 239969 DEBUG nova.network.os_vif_util [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d6:5f,bridge_name='br-int',has_traffic_filtering=True,id=0b8b7ee7-a378-4a72-992b-23094d7e73c4,network=Network(6984db9d-a98d-48a6-9c09-8b249a7b8ccf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b8b7ee7-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.589 239969 DEBUG nova.objects.instance [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid d37c9efe-0572-4b48-9bce-6af473740fc4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.605 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <uuid>d37c9efe-0572-4b48-9bce-6af473740fc4</uuid>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <name>instance-00000074</name>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-547911124</nova:name>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:17:56</nova:creationTime>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <nova:port uuid="0b8b7ee7-a378-4a72-992b-23094d7e73c4">
Jan 26 16:17:57 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.20" ipVersion="4"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <system>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <entry name="serial">d37c9efe-0572-4b48-9bce-6af473740fc4</entry>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <entry name="uuid">d37c9efe-0572-4b48-9bce-6af473740fc4</entry>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     </system>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <os>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   </os>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <features>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   </features>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d37c9efe-0572-4b48-9bce-6af473740fc4_disk">
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       </source>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/d37c9efe-0572-4b48-9bce-6af473740fc4_disk.config">
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       </source>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:17:57 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:4b:d6:5f"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <target dev="tap0b8b7ee7-a3"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4/console.log" append="off"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <video>
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     </video>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:17:57 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:17:57 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:17:57 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:17:57 compute-0 nova_compute[239965]: </domain>
Jan 26 16:17:57 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.606 239969 DEBUG nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Preparing to wait for external event network-vif-plugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.606 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.606 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.607 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.607 239969 DEBUG nova.virt.libvirt.vif [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:17:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-547911124',display_name='tempest-TestNetworkBasicOps-server-547911124',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-547911124',id=116,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBExlZz5z+PLI6EC/Sk1VWVN5QL7TCenj++Hb1Vjiz9IR51KbBLpjwX8RiPD1qPfqcw3SxQGkdb/MJLDyDZIGBZkvYCzEhiTpFnMJS9tvnzLtAeZbKPZ9e9uuDISkpARFRg==',key_name='tempest-TestNetworkBasicOps-787215910',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-v371wi2u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:17:49Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=d37c9efe-0572-4b48-9bce-6af473740fc4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.607 239969 DEBUG nova.network.os_vif_util [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.608 239969 DEBUG nova.network.os_vif_util [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d6:5f,bridge_name='br-int',has_traffic_filtering=True,id=0b8b7ee7-a378-4a72-992b-23094d7e73c4,network=Network(6984db9d-a98d-48a6-9c09-8b249a7b8ccf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b8b7ee7-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.608 239969 DEBUG os_vif [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d6:5f,bridge_name='br-int',has_traffic_filtering=True,id=0b8b7ee7-a378-4a72-992b-23094d7e73c4,network=Network(6984db9d-a98d-48a6-9c09-8b249a7b8ccf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b8b7ee7-a3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.609 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.609 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.610 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.614 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.614 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b8b7ee7-a3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.614 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b8b7ee7-a3, col_values=(('external_ids', {'iface-id': '0b8b7ee7-a378-4a72-992b-23094d7e73c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4b:d6:5f', 'vm-uuid': 'd37c9efe-0572-4b48-9bce-6af473740fc4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:57 compute-0 NetworkManager[48954]: <info>  [1769444277.6167] manager: (tap0b8b7ee7-a3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.620 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.623 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.623 239969 INFO os_vif [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d6:5f,bridge_name='br-int',has_traffic_filtering=True,id=0b8b7ee7-a378-4a72-992b-23094d7e73c4,network=Network(6984db9d-a98d-48a6-9c09-8b249a7b8ccf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b8b7ee7-a3')
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.668 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.668 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.669 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:4b:d6:5f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.669 239969 INFO nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Using config drive
Jan 26 16:17:57 compute-0 nova_compute[239965]: 2026-01-26 16:17:57.689 239969 DEBUG nova.storage.rbd_utils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image d37c9efe-0572-4b48-9bce-6af473740fc4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2295399663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4237312045' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1479481510' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:17:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:17:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1085715281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.030 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.038 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:17:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.062703) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444278062743, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1473, "num_deletes": 251, "total_data_size": 2286445, "memory_usage": 2315984, "flush_reason": "Manual Compaction"}
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.064 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444278076853, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 2219171, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40466, "largest_seqno": 41938, "table_properties": {"data_size": 2212424, "index_size": 3818, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 13403, "raw_average_key_size": 18, "raw_value_size": 2198848, "raw_average_value_size": 3028, "num_data_blocks": 171, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444136, "oldest_key_time": 1769444136, "file_creation_time": 1769444278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 14293 microseconds, and 6381 cpu microseconds.
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.076956) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 2219171 bytes OK
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.077022) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.079696) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.079743) EVENT_LOG_v1 {"time_micros": 1769444278079733, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.079771) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 2279939, prev total WAL file size 2279939, number of live WAL files 2.
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.080675) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323532' seq:0, type:0; will stop at (end)
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(2167KB)], [89(10017KB)]
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444278080851, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 12477309, "oldest_snapshot_seqno": -1}
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.095 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.096 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6871 keys, 11719808 bytes, temperature: kUnknown
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444278188134, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 11719808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11670255, "index_size": 31265, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 175332, "raw_average_key_size": 25, "raw_value_size": 11543859, "raw_average_value_size": 1680, "num_data_blocks": 1243, "num_entries": 6871, "num_filter_entries": 6871, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444278, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.188428) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 11719808 bytes
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.190175) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 116.2 rd, 109.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.8 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(10.9) write-amplify(5.3) OK, records in: 7385, records dropped: 514 output_compression: NoCompression
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.190191) EVENT_LOG_v1 {"time_micros": 1769444278190183, "job": 52, "event": "compaction_finished", "compaction_time_micros": 107419, "compaction_time_cpu_micros": 30627, "output_level": 6, "num_output_files": 1, "total_output_size": 11719808, "num_input_records": 7385, "num_output_records": 6871, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444278190758, "job": 52, "event": "table_file_deletion", "file_number": 91}
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444278192629, "job": 52, "event": "table_file_deletion", "file_number": 89}
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.080572) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.192752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.192760) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.192761) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.192764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:17:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:17:58.192766) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:17:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 213 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.489 239969 INFO nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Creating config drive at /var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4/disk.config
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.494 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz73kq6ci execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.642 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz73kq6ci" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.702 239969 DEBUG nova.storage.rbd_utils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image d37c9efe-0572-4b48-9bce-6af473740fc4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.706 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4/disk.config d37c9efe-0572-4b48-9bce-6af473740fc4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.793 239969 DEBUG nova.network.neutron [req-0d6bc676-d153-43f5-8f92-5ae9fa549e34 req-5ba9f370-9429-415f-968c-b40e95938b54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Updated VIF entry in instance network info cache for port 0b8b7ee7-a378-4a72-992b-23094d7e73c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.794 239969 DEBUG nova.network.neutron [req-0d6bc676-d153-43f5-8f92-5ae9fa549e34 req-5ba9f370-9429-415f-968c-b40e95938b54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Updating instance_info_cache with network_info: [{"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.811 239969 DEBUG oslo_concurrency.lockutils [req-0d6bc676-d153-43f5-8f92-5ae9fa549e34 req-5ba9f370-9429-415f-968c-b40e95938b54 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-d37c9efe-0572-4b48-9bce-6af473740fc4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.840 239969 DEBUG oslo_concurrency.processutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4/disk.config d37c9efe-0572-4b48-9bce-6af473740fc4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.841 239969 INFO nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Deleting local config drive /var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4/disk.config because it was imported into RBD.
Jan 26 16:17:58 compute-0 kernel: tap0b8b7ee7-a3: entered promiscuous mode
Jan 26 16:17:58 compute-0 NetworkManager[48954]: <info>  [1769444278.8970] manager: (tap0b8b7ee7-a3): new Tun device (/org/freedesktop/NetworkManager/Devices/499)
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:58 compute-0 ovn_controller[146046]: 2026-01-26T16:17:58Z|01209|binding|INFO|Claiming lport 0b8b7ee7-a378-4a72-992b-23094d7e73c4 for this chassis.
Jan 26 16:17:58 compute-0 ovn_controller[146046]: 2026-01-26T16:17:58Z|01210|binding|INFO|0b8b7ee7-a378-4a72-992b-23094d7e73c4: Claiming fa:16:3e:4b:d6:5f 10.100.0.20
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.900 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.906 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:d6:5f 10.100.0.20'], port_security=['fa:16:3e:4b:d6:5f 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'd37c9efe-0572-4b48-9bce-6af473740fc4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6984db9d-a98d-48a6-9c09-8b249a7b8ccf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '73040af5-6564-484d-9630-e62de6d7f8e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2750665e-551c-40fe-a03b-7646b1ceae66, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=0b8b7ee7-a378-4a72-992b-23094d7e73c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.907 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 0b8b7ee7-a378-4a72-992b-23094d7e73c4 in datapath 6984db9d-a98d-48a6-9c09-8b249a7b8ccf bound to our chassis
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.909 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6984db9d-a98d-48a6-9c09-8b249a7b8ccf
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.919 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dba3082c-c299-42f6-8ac6-ecefde34183f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.920 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6984db9d-a1 in ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.922 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6984db9d-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.922 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3feb5714-dc26-4d1b-9b63-9ea1f5f515c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.923 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2be79bf9-4f74-4c21-8111-d984b946c32b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1085715281' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:17:58 compute-0 ceph-mon[75140]: pgmap v2018: 305 pgs: 305 active+clean; 213 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 26 16:17:58 compute-0 systemd-udevd[342463]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:17:58 compute-0 systemd-machined[208061]: New machine qemu-143-instance-00000074.
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.937 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[9510a996-81c5-4f6d-84c9-964b22fe63ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:58 compute-0 ovn_controller[146046]: 2026-01-26T16:17:58Z|01211|binding|INFO|Setting lport 0b8b7ee7-a378-4a72-992b-23094d7e73c4 ovn-installed in OVS
Jan 26 16:17:58 compute-0 ovn_controller[146046]: 2026-01-26T16:17:58Z|01212|binding|INFO|Setting lport 0b8b7ee7-a378-4a72-992b-23094d7e73c4 up in Southbound
Jan 26 16:17:58 compute-0 nova_compute[239965]: 2026-01-26 16:17:58.948 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:58 compute-0 NetworkManager[48954]: <info>  [1769444278.9500] device (tap0b8b7ee7-a3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:17:58 compute-0 NetworkManager[48954]: <info>  [1769444278.9523] device (tap0b8b7ee7-a3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:17:58 compute-0 systemd[1]: Started Virtual Machine qemu-143-instance-00000074.
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.956 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4d084f08-3305-4577-84b5-c937abb5bf91]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.992 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9dd197ee-6e43-4c9e-af0e-a26f6a0de6b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:58.998 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[872b6a92-c912-4aa5-8599-939dcbeeaf64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:58 compute-0 NetworkManager[48954]: <info>  [1769444278.9993] manager: (tap6984db9d-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/500)
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.132 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb59a85-7521-4cf2-9e6e-afcc5d0e690d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.135 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f44b1d38-3315-4e49-99a4-dd09ed05901f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:59 compute-0 NetworkManager[48954]: <info>  [1769444279.1601] device (tap6984db9d-a0): carrier: link connected
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.166 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c28e5c2d-1709-4953-a32d-34fbf4f1bb0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.185 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d35204-abb6-47db-8389-679638dc088d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6984db9d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:98:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576996, 'reachable_time': 31255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342494, 'error': None, 'target': 'ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.203 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[81d6f95d-6e5f-492d-97cf-e5fb322f1e40]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:9877'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 576996, 'tstamp': 576996}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 342509, 'error': None, 'target': 'ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.220 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0418fd-a904-4fae-bb6b-0e3ebef887a1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6984db9d-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:98:77'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576996, 'reachable_time': 31255, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 342514, 'error': None, 'target': 'ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.240 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.241 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.242 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.252 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fe799c8f-b2bd-44c6-926a-4e9efb8e241f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.315 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[33782c8c-0e77-4b36-93c5-b467d266b068]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.316 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6984db9d-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.316 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.316 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6984db9d-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.318 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:59 compute-0 NetworkManager[48954]: <info>  [1769444279.3192] manager: (tap6984db9d-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/501)
Jan 26 16:17:59 compute-0 kernel: tap6984db9d-a0: entered promiscuous mode
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.320 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.323 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6984db9d-a0, col_values=(('external_ids', {'iface-id': '408f5baa-d5af-4215-8074-ad51c41d2a9a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.324 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:59 compute-0 ovn_controller[146046]: 2026-01-26T16:17:59Z|01213|binding|INFO|Releasing lport 408f5baa-d5af-4215-8074-ad51c41d2a9a from this chassis (sb_readonly=0)
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.326 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6984db9d-a98d-48a6-9c09-8b249a7b8ccf.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6984db9d-a98d-48a6-9c09-8b249a7b8ccf.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.327 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3a06d649-5e13-4d9c-b4e6-28c3b36c9c91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.328 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-6984db9d-a98d-48a6-9c09-8b249a7b8ccf
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/6984db9d-a98d-48a6-9c09-8b249a7b8ccf.pid.haproxy
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 6984db9d-a98d-48a6-9c09-8b249a7b8ccf
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:17:59.328 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf', 'env', 'PROCESS_TAG=haproxy-6984db9d-a98d-48a6-9c09-8b249a7b8ccf', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6984db9d-a98d-48a6-9c09-8b249a7b8ccf.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.334 239969 DEBUG nova.compute.manager [req-f82b010f-83db-415f-9f32-2ad468f18cd1 req-7db5bf82-d6dc-4dc3-a300-d172736452dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Received event network-vif-plugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.335 239969 DEBUG oslo_concurrency.lockutils [req-f82b010f-83db-415f-9f32-2ad468f18cd1 req-7db5bf82-d6dc-4dc3-a300-d172736452dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.335 239969 DEBUG oslo_concurrency.lockutils [req-f82b010f-83db-415f-9f32-2ad468f18cd1 req-7db5bf82-d6dc-4dc3-a300-d172736452dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.335 239969 DEBUG oslo_concurrency.lockutils [req-f82b010f-83db-415f-9f32-2ad468f18cd1 req-7db5bf82-d6dc-4dc3-a300-d172736452dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.335 239969 DEBUG nova.compute.manager [req-f82b010f-83db-415f-9f32-2ad468f18cd1 req-7db5bf82-d6dc-4dc3-a300-d172736452dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Processing event network-vif-plugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.338 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.360 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444279.3603313, d37c9efe-0572-4b48-9bce-6af473740fc4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.361 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] VM Started (Lifecycle Event)
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.363 239969 DEBUG nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.366 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.369 239969 INFO nova.virt.libvirt.driver [-] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Instance spawned successfully.
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.369 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.397 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.405 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.405 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.406 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.407 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.408 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.408 239969 DEBUG nova.virt.libvirt.driver [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.413 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.443 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.444 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444279.3606477, d37c9efe-0572-4b48-9bce-6af473740fc4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.444 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] VM Paused (Lifecycle Event)
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.466 239969 INFO nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Took 9.46 seconds to spawn the instance on the hypervisor.
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.467 239969 DEBUG nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.469 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.476 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444279.365377, d37c9efe-0572-4b48-9bce-6af473740fc4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.476 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] VM Resumed (Lifecycle Event)
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.508 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.512 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.552 239969 INFO nova.compute.manager [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Took 10.61 seconds to build instance.
Jan 26 16:17:59 compute-0 nova_compute[239965]: 2026-01-26 16:17:59.572 239969 DEBUG oslo_concurrency.lockutils [None req-b3cc6305-1b84-4465-bb1f-309b02d06438 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:17:59 compute-0 podman[342570]: 2026-01-26 16:17:59.677778102 +0000 UTC m=+0.023330792 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:17:59 compute-0 podman[342570]: 2026-01-26 16:17:59.77718598 +0000 UTC m=+0.122738640 container create c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:17:59 compute-0 systemd[1]: Started libpod-conmon-c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7.scope.
Jan 26 16:17:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:17:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/101e2e4c70fe91c2474925e37d90e6d824903167e59eb8a28366133d56796739/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:17:59 compute-0 podman[342570]: 2026-01-26 16:17:59.871735361 +0000 UTC m=+0.217288041 container init c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:17:59 compute-0 podman[342570]: 2026-01-26 16:17:59.877208683 +0000 UTC m=+0.222761343 container start c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:17:59 compute-0 neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf[342585]: [NOTICE]   (342589) : New worker (342591) forked
Jan 26 16:17:59 compute-0 neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf[342585]: [NOTICE]   (342589) : Loading success.
Jan 26 16:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:18:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 213 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 26 16:18:00 compute-0 ceph-mon[75140]: pgmap v2019: 305 pgs: 305 active+clean; 213 MiB data, 880 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 59 op/s
Jan 26 16:18:00 compute-0 ovn_controller[146046]: 2026-01-26T16:18:00Z|01214|binding|INFO|Releasing lport f7bb788d-e36c-4f9a-b442-9f826e16e4d4 from this chassis (sb_readonly=0)
Jan 26 16:18:00 compute-0 ovn_controller[146046]: 2026-01-26T16:18:00Z|01215|binding|INFO|Releasing lport 408f5baa-d5af-4215-8074-ad51c41d2a9a from this chassis (sb_readonly=0)
Jan 26 16:18:00 compute-0 nova_compute[239965]: 2026-01-26 16:18:00.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:01 compute-0 nova_compute[239965]: 2026-01-26 16:18:01.200 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444266.1991363, 311f59c2-3926-4794-b450-819f63642a6b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:18:01 compute-0 nova_compute[239965]: 2026-01-26 16:18:01.201 239969 INFO nova.compute.manager [-] [instance: 311f59c2-3926-4794-b450-819f63642a6b] VM Stopped (Lifecycle Event)
Jan 26 16:18:01 compute-0 nova_compute[239965]: 2026-01-26 16:18:01.220 239969 DEBUG nova.compute.manager [None req-f2fa7a77-d09e-4509-a7ef-a0f9770853e1 - - - - - -] [instance: 311f59c2-3926-4794-b450-819f63642a6b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:18:01 compute-0 nova_compute[239965]: 2026-01-26 16:18:01.445 239969 DEBUG nova.compute.manager [req-07fb9d39-38e6-43a3-b2a4-6110d4e31c40 req-d66e2586-10e0-46a2-bc5a-57157473648b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Received event network-vif-plugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:01 compute-0 nova_compute[239965]: 2026-01-26 16:18:01.446 239969 DEBUG oslo_concurrency.lockutils [req-07fb9d39-38e6-43a3-b2a4-6110d4e31c40 req-d66e2586-10e0-46a2-bc5a-57157473648b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:01 compute-0 nova_compute[239965]: 2026-01-26 16:18:01.447 239969 DEBUG oslo_concurrency.lockutils [req-07fb9d39-38e6-43a3-b2a4-6110d4e31c40 req-d66e2586-10e0-46a2-bc5a-57157473648b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:01 compute-0 nova_compute[239965]: 2026-01-26 16:18:01.448 239969 DEBUG oslo_concurrency.lockutils [req-07fb9d39-38e6-43a3-b2a4-6110d4e31c40 req-d66e2586-10e0-46a2-bc5a-57157473648b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:01 compute-0 nova_compute[239965]: 2026-01-26 16:18:01.449 239969 DEBUG nova.compute.manager [req-07fb9d39-38e6-43a3-b2a4-6110d4e31c40 req-d66e2586-10e0-46a2-bc5a-57157473648b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] No waiting events found dispatching network-vif-plugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:18:01 compute-0 nova_compute[239965]: 2026-01-26 16:18:01.450 239969 WARNING nova.compute.manager [req-07fb9d39-38e6-43a3-b2a4-6110d4e31c40 req-d66e2586-10e0-46a2-bc5a-57157473648b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Received unexpected event network-vif-plugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 for instance with vm_state active and task_state None.
Jan 26 16:18:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Jan 26 16:18:02 compute-0 nova_compute[239965]: 2026-01-26 16:18:02.480 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:02 compute-0 ceph-mon[75140]: pgmap v2020: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 133 op/s
Jan 26 16:18:02 compute-0 nova_compute[239965]: 2026-01-26 16:18:02.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:03.982 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 83 op/s
Jan 26 16:18:04 compute-0 ceph-mon[75140]: pgmap v2021: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 83 op/s
Jan 26 16:18:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 83 op/s
Jan 26 16:18:06 compute-0 ceph-mon[75140]: pgmap v2022: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 83 op/s
Jan 26 16:18:06 compute-0 nova_compute[239965]: 2026-01-26 16:18:06.574 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444271.5735106, 89d77454-1c5d-4556-88b6-e5d4023c8c36 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:18:06 compute-0 nova_compute[239965]: 2026-01-26 16:18:06.575 239969 INFO nova.compute.manager [-] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] VM Stopped (Lifecycle Event)
Jan 26 16:18:06 compute-0 nova_compute[239965]: 2026-01-26 16:18:06.605 239969 DEBUG nova.compute.manager [None req-391648fa-b223-4aa5-b91b-8a2f4a3253c2 - - - - - -] [instance: 89d77454-1c5d-4556-88b6-e5d4023c8c36] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:18:07 compute-0 nova_compute[239965]: 2026-01-26 16:18:07.482 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:07 compute-0 nova_compute[239965]: 2026-01-26 16:18:07.664 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:08 compute-0 nova_compute[239965]: 2026-01-26 16:18:08.153 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 26 16:18:08 compute-0 ceph-mon[75140]: pgmap v2023: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 26 16:18:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 26 16:18:10 compute-0 ceph-mon[75140]: pgmap v2024: 305 pgs: 305 active+clean; 213 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 74 op/s
Jan 26 16:18:12 compute-0 nova_compute[239965]: 2026-01-26 16:18:12.088 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:12 compute-0 nova_compute[239965]: 2026-01-26 16:18:12.287 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 229 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 102 op/s
Jan 26 16:18:12 compute-0 nova_compute[239965]: 2026-01-26 16:18:12.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:12 compute-0 ceph-mon[75140]: pgmap v2025: 305 pgs: 305 active+clean; 229 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.6 MiB/s wr, 102 op/s
Jan 26 16:18:12 compute-0 nova_compute[239965]: 2026-01-26 16:18:12.666 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:13 compute-0 ovn_controller[146046]: 2026-01-26T16:18:13Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4b:d6:5f 10.100.0.20
Jan 26 16:18:13 compute-0 ovn_controller[146046]: 2026-01-26T16:18:13Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4b:d6:5f 10.100.0.20
Jan 26 16:18:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:14.400 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:3f:ac 10.100.0.2 2001:db8::f816:3eff:fe0f:3fac'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe0f:3fac/64', 'neutron:device_id': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b663232-d708-4949-9a4a-d4a2c365ff36, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ac473cbf-5010-4e34-9ff9-00fae14d9b70) old=Port_Binding(mac=['fa:16:3e:0f:3f:ac 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:18:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:14.401 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ac473cbf-5010-4e34-9ff9-00fae14d9b70 in datapath 869bf55a-0f32-4f15-8ff4-2acbed7fab15 updated
Jan 26 16:18:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:14.402 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 869bf55a-0f32-4f15-8ff4-2acbed7fab15, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:18:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:14.403 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dad7bcb4-a354-4b54-874c-daabb31c259b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 229 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 1.6 MiB/s wr, 28 op/s
Jan 26 16:18:14 compute-0 ceph-mon[75140]: pgmap v2026: 305 pgs: 305 active+clean; 229 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 167 KiB/s rd, 1.6 MiB/s wr, 28 op/s
Jan 26 16:18:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 237 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Jan 26 16:18:16 compute-0 ceph-mon[75140]: pgmap v2027: 305 pgs: 305 active+clean; 237 MiB data, 883 MiB used, 59 GiB / 60 GiB avail; 188 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Jan 26 16:18:17 compute-0 nova_compute[239965]: 2026-01-26 16:18:17.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:17 compute-0 nova_compute[239965]: 2026-01-26 16:18:17.668 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:18:19 compute-0 ceph-mon[75140]: pgmap v2028: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:18:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:18:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:20.635 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0f:3f:ac 10.100.0.2 2001:db8:0:1:f816:3eff:fe0f:3fac 2001:db8::f816:3eff:fe0f:3fac'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe0f:3fac/64 2001:db8::f816:3eff:fe0f:3fac/64', 'neutron:device_id': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b663232-d708-4949-9a4a-d4a2c365ff36, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ac473cbf-5010-4e34-9ff9-00fae14d9b70) old=Port_Binding(mac=['fa:16:3e:0f:3f:ac 10.100.0.2 2001:db8::f816:3eff:fe0f:3fac'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe0f:3fac/64', 'neutron:device_id': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:18:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:20.637 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ac473cbf-5010-4e34-9ff9-00fae14d9b70 in datapath 869bf55a-0f32-4f15-8ff4-2acbed7fab15 updated
Jan 26 16:18:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:20.638 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 869bf55a-0f32-4f15-8ff4-2acbed7fab15, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:18:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:20.639 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fe612530-9ad0-4935-9c09-026685d3660b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:20 compute-0 ceph-mon[75140]: pgmap v2029: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:18:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Jan 26 16:18:22 compute-0 nova_compute[239965]: 2026-01-26 16:18:22.489 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:22 compute-0 ceph-mon[75140]: pgmap v2030: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 359 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Jan 26 16:18:22 compute-0 nova_compute[239965]: 2026-01-26 16:18:22.671 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.042 239969 DEBUG oslo_concurrency.lockutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "d37c9efe-0572-4b48-9bce-6af473740fc4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.042 239969 DEBUG oslo_concurrency.lockutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.042 239969 DEBUG oslo_concurrency.lockutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.043 239969 DEBUG oslo_concurrency.lockutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.043 239969 DEBUG oslo_concurrency.lockutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.044 239969 INFO nova.compute.manager [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Terminating instance
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.045 239969 DEBUG nova.compute.manager [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:18:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:23 compute-0 kernel: tap0b8b7ee7-a3 (unregistering): left promiscuous mode
Jan 26 16:18:23 compute-0 NetworkManager[48954]: <info>  [1769444303.1294] device (tap0b8b7ee7-a3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:18:23 compute-0 ovn_controller[146046]: 2026-01-26T16:18:23Z|01216|binding|INFO|Releasing lport 0b8b7ee7-a378-4a72-992b-23094d7e73c4 from this chassis (sb_readonly=0)
Jan 26 16:18:23 compute-0 ovn_controller[146046]: 2026-01-26T16:18:23Z|01217|binding|INFO|Setting lport 0b8b7ee7-a378-4a72-992b-23094d7e73c4 down in Southbound
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.136 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:23 compute-0 ovn_controller[146046]: 2026-01-26T16:18:23Z|01218|binding|INFO|Removing iface tap0b8b7ee7-a3 ovn-installed in OVS
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.147 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4b:d6:5f 10.100.0.20'], port_security=['fa:16:3e:4b:d6:5f 10.100.0.20'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.20/28', 'neutron:device_id': 'd37c9efe-0572-4b48-9bce-6af473740fc4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6984db9d-a98d-48a6-9c09-8b249a7b8ccf', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '73040af5-6564-484d-9630-e62de6d7f8e7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2750665e-551c-40fe-a03b-7646b1ceae66, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=0b8b7ee7-a378-4a72-992b-23094d7e73c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.148 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 0b8b7ee7-a378-4a72-992b-23094d7e73c4 in datapath 6984db9d-a98d-48a6-9c09-8b249a7b8ccf unbound from our chassis
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.149 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6984db9d-a98d-48a6-9c09-8b249a7b8ccf, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.150 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ef362e4a-d655-4cca-a3dd-c3927a4769d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.150 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf namespace which is not needed anymore
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.155 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:23 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000074.scope: Deactivated successfully.
Jan 26 16:18:23 compute-0 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000074.scope: Consumed 13.751s CPU time.
Jan 26 16:18:23 compute-0 systemd-machined[208061]: Machine qemu-143-instance-00000074 terminated.
Jan 26 16:18:23 compute-0 neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf[342585]: [NOTICE]   (342589) : haproxy version is 2.8.14-c23fe91
Jan 26 16:18:23 compute-0 neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf[342585]: [NOTICE]   (342589) : path to executable is /usr/sbin/haproxy
Jan 26 16:18:23 compute-0 neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf[342585]: [WARNING]  (342589) : Exiting Master process...
Jan 26 16:18:23 compute-0 neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf[342585]: [ALERT]    (342589) : Current worker (342591) exited with code 143 (Terminated)
Jan 26 16:18:23 compute-0 neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf[342585]: [WARNING]  (342589) : All workers exited. Exiting... (0)
Jan 26 16:18:23 compute-0 systemd[1]: libpod-c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7.scope: Deactivated successfully.
Jan 26 16:18:23 compute-0 podman[342626]: 2026-01-26 16:18:23.273940233 +0000 UTC m=+0.044816756 container died c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.281 239969 INFO nova.virt.libvirt.driver [-] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Instance destroyed successfully.
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.281 239969 DEBUG nova.objects.instance [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid d37c9efe-0572-4b48-9bce-6af473740fc4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.302 239969 DEBUG nova.virt.libvirt.vif [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:17:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-547911124',display_name='tempest-TestNetworkBasicOps-server-547911124',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-547911124',id=116,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBExlZz5z+PLI6EC/Sk1VWVN5QL7TCenj++Hb1Vjiz9IR51KbBLpjwX8RiPD1qPfqcw3SxQGkdb/MJLDyDZIGBZkvYCzEhiTpFnMJS9tvnzLtAeZbKPZ9e9uuDISkpARFRg==',key_name='tempest-TestNetworkBasicOps-787215910',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:17:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-v371wi2u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:17:59Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=d37c9efe-0572-4b48-9bce-6af473740fc4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.303 239969 DEBUG nova.network.os_vif_util [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "address": "fa:16:3e:4b:d6:5f", "network": {"id": "6984db9d-a98d-48a6-9c09-8b249a7b8ccf", "bridge": "br-int", "label": "tempest-network-smoke--807851546", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.20", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b8b7ee7-a3", "ovs_interfaceid": "0b8b7ee7-a378-4a72-992b-23094d7e73c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.305 239969 DEBUG nova.network.os_vif_util [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d6:5f,bridge_name='br-int',has_traffic_filtering=True,id=0b8b7ee7-a378-4a72-992b-23094d7e73c4,network=Network(6984db9d-a98d-48a6-9c09-8b249a7b8ccf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b8b7ee7-a3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.305 239969 DEBUG os_vif [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d6:5f,bridge_name='br-int',has_traffic_filtering=True,id=0b8b7ee7-a378-4a72-992b-23094d7e73c4,network=Network(6984db9d-a98d-48a6-9c09-8b249a7b8ccf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b8b7ee7-a3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.307 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.307 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b8b7ee7-a3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.313 239969 INFO os_vif [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4b:d6:5f,bridge_name='br-int',has_traffic_filtering=True,id=0b8b7ee7-a378-4a72-992b-23094d7e73c4,network=Network(6984db9d-a98d-48a6-9c09-8b249a7b8ccf),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b8b7ee7-a3')
Jan 26 16:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7-userdata-shm.mount: Deactivated successfully.
Jan 26 16:18:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-101e2e4c70fe91c2474925e37d90e6d824903167e59eb8a28366133d56796739-merged.mount: Deactivated successfully.
Jan 26 16:18:23 compute-0 podman[342626]: 2026-01-26 16:18:23.606363764 +0000 UTC m=+0.377240287 container cleanup c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:18:23 compute-0 systemd[1]: libpod-conmon-c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7.scope: Deactivated successfully.
Jan 26 16:18:23 compute-0 podman[342684]: 2026-01-26 16:18:23.693624225 +0000 UTC m=+0.061474722 container remove c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.699 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4e841e0b-1d8c-49fb-8637-af20e2b8808f]: (4, ('Mon Jan 26 04:18:23 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf (c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7)\nc95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7\nMon Jan 26 04:18:23 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf (c95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7)\nc95deec9b27f3ced516b5f0c6e336f59c719ef5a9ad7c846e77352672d00a1c7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.701 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8e5d6545-b852-43e5-895e-bef883412a5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.702 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6984db9d-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.703 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:23 compute-0 kernel: tap6984db9d-a0: left promiscuous mode
Jan 26 16:18:23 compute-0 nova_compute[239965]: 2026-01-26 16:18:23.717 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.720 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96abcc75-5c61-4d41-83de-e77b161b13a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.742 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc8e17c-3c54-43af-b429-c3e7934aaf12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.744 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed84b32-901c-4a45-af09-fddcd0d50eb3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.765 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ac62ad-49fc-498b-a99d-1e4588cface1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 576979, 'reachable_time': 41073, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 342698, 'error': None, 'target': 'ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d6984db9d\x2da98d\x2d48a6\x2d9c09\x2d8b249a7b8ccf.mount: Deactivated successfully.
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.767 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6984db9d-a98d-48a6-9c09-8b249a7b8ccf deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:18:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:23.767 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[588ffd9c-7c70-409f-8e34-549ac1383dc4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:24 compute-0 nova_compute[239965]: 2026-01-26 16:18:24.380 239969 INFO nova.virt.libvirt.driver [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Deleting instance files /var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4_del
Jan 26 16:18:24 compute-0 nova_compute[239965]: 2026-01-26 16:18:24.381 239969 INFO nova.virt.libvirt.driver [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Deletion of /var/lib/nova/instances/d37c9efe-0572-4b48-9bce-6af473740fc4_del complete
Jan 26 16:18:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 605 KiB/s wr, 38 op/s
Jan 26 16:18:24 compute-0 nova_compute[239965]: 2026-01-26 16:18:24.438 239969 INFO nova.compute.manager [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Took 1.39 seconds to destroy the instance on the hypervisor.
Jan 26 16:18:24 compute-0 nova_compute[239965]: 2026-01-26 16:18:24.439 239969 DEBUG oslo.service.loopingcall [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:18:24 compute-0 nova_compute[239965]: 2026-01-26 16:18:24.439 239969 DEBUG nova.compute.manager [-] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:18:24 compute-0 nova_compute[239965]: 2026-01-26 16:18:24.439 239969 DEBUG nova.network.neutron [-] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:18:24 compute-0 ceph-mon[75140]: pgmap v2031: 305 pgs: 305 active+clean; 246 MiB data, 897 MiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 605 KiB/s wr, 38 op/s
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.047 239969 DEBUG nova.compute.manager [req-342bdc48-e57a-4fb5-b673-aa1336ccba91 req-4ff813a1-4321-446d-8f44-96e85e4b1232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Received event network-vif-unplugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.048 239969 DEBUG oslo_concurrency.lockutils [req-342bdc48-e57a-4fb5-b673-aa1336ccba91 req-4ff813a1-4321-446d-8f44-96e85e4b1232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.048 239969 DEBUG oslo_concurrency.lockutils [req-342bdc48-e57a-4fb5-b673-aa1336ccba91 req-4ff813a1-4321-446d-8f44-96e85e4b1232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.048 239969 DEBUG oslo_concurrency.lockutils [req-342bdc48-e57a-4fb5-b673-aa1336ccba91 req-4ff813a1-4321-446d-8f44-96e85e4b1232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.048 239969 DEBUG nova.compute.manager [req-342bdc48-e57a-4fb5-b673-aa1336ccba91 req-4ff813a1-4321-446d-8f44-96e85e4b1232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] No waiting events found dispatching network-vif-unplugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.049 239969 DEBUG nova.compute.manager [req-342bdc48-e57a-4fb5-b673-aa1336ccba91 req-4ff813a1-4321-446d-8f44-96e85e4b1232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Received event network-vif-unplugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.195 239969 DEBUG nova.network.neutron [-] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.213 239969 INFO nova.compute.manager [-] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Took 0.77 seconds to deallocate network for instance.
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.250 239969 DEBUG oslo_concurrency.lockutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.251 239969 DEBUG oslo_concurrency.lockutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.284 239969 DEBUG nova.compute.manager [req-9ab716da-8315-4fca-b53e-bf26d7d330a7 req-3abfb326-9c67-4227-b829-898be7c34f1b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Received event network-vif-deleted-0b8b7ee7-a378-4a72-992b-23094d7e73c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.353 239969 DEBUG oslo_concurrency.processutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:18:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4175695226' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.901 239969 DEBUG oslo_concurrency.processutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.907 239969 DEBUG nova.compute.provider_tree [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.931 239969 DEBUG nova.scheduler.client.report [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.952 239969 DEBUG oslo_concurrency.lockutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:25 compute-0 nova_compute[239965]: 2026-01-26 16:18:25.990 239969 INFO nova.scheduler.client.report [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance d37c9efe-0572-4b48-9bce-6af473740fc4
Jan 26 16:18:26 compute-0 nova_compute[239965]: 2026-01-26 16:18:26.052 239969 DEBUG oslo_concurrency.lockutils [None req-57d47307-e738-41e7-94d2-64a834cb801a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4175695226' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:26 compute-0 nova_compute[239965]: 2026-01-26 16:18:26.252 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "12ac525a-976f-4afb-b4db-6035caada7e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:26 compute-0 nova_compute[239965]: 2026-01-26 16:18:26.253 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:26 compute-0 nova_compute[239965]: 2026-01-26 16:18:26.279 239969 DEBUG nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:18:26 compute-0 nova_compute[239965]: 2026-01-26 16:18:26.363 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:26 compute-0 nova_compute[239965]: 2026-01-26 16:18:26.364 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:26 compute-0 nova_compute[239965]: 2026-01-26 16:18:26.371 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:18:26 compute-0 nova_compute[239965]: 2026-01-26 16:18:26.371 239969 INFO nova.compute.claims [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:18:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 227 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 193 KiB/s rd, 607 KiB/s wr, 40 op/s
Jan 26 16:18:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:26.474 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:76:37:f3 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-72dcd7f7-b861-4fb1-a7ec-c70efbcf37b7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72dcd7f7-b861-4fb1-a7ec-c70efbcf37b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '846353c2154b4f35ae0fbada5ce2ba45', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e4134965-e55a-454b-b4c2-5cb1d7abcc53, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=909f69c5-dbf1-41b8-ab46-e2eb1bf050fd) old=Port_Binding(mac=['fa:16:3e:76:37:f3 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-72dcd7f7-b861-4fb1-a7ec-c70efbcf37b7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-72dcd7f7-b861-4fb1-a7ec-c70efbcf37b7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '846353c2154b4f35ae0fbada5ce2ba45', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:18:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:26.475 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 909f69c5-dbf1-41b8-ab46-e2eb1bf050fd in datapath 72dcd7f7-b861-4fb1-a7ec-c70efbcf37b7 updated
Jan 26 16:18:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:26.476 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 72dcd7f7-b861-4fb1-a7ec-c70efbcf37b7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:18:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:26.477 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad178ca-ab88-413d-b2cb-5efe56640f4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:26 compute-0 nova_compute[239965]: 2026-01-26 16:18:26.497 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.154 239969 DEBUG nova.compute.manager [req-736edcb5-c450-4ef6-9ed5-8d15148fadf8 req-2d2a7bc8-61a5-4a18-9a90-d0d2e15c694b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Received event network-vif-plugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.155 239969 DEBUG oslo_concurrency.lockutils [req-736edcb5-c450-4ef6-9ed5-8d15148fadf8 req-2d2a7bc8-61a5-4a18-9a90-d0d2e15c694b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.155 239969 DEBUG oslo_concurrency.lockutils [req-736edcb5-c450-4ef6-9ed5-8d15148fadf8 req-2d2a7bc8-61a5-4a18-9a90-d0d2e15c694b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.155 239969 DEBUG oslo_concurrency.lockutils [req-736edcb5-c450-4ef6-9ed5-8d15148fadf8 req-2d2a7bc8-61a5-4a18-9a90-d0d2e15c694b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "d37c9efe-0572-4b48-9bce-6af473740fc4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.155 239969 DEBUG nova.compute.manager [req-736edcb5-c450-4ef6-9ed5-8d15148fadf8 req-2d2a7bc8-61a5-4a18-9a90-d0d2e15c694b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] No waiting events found dispatching network-vif-plugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.156 239969 WARNING nova.compute.manager [req-736edcb5-c450-4ef6-9ed5-8d15148fadf8 req-2d2a7bc8-61a5-4a18-9a90-d0d2e15c694b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Received unexpected event network-vif-plugged-0b8b7ee7-a378-4a72-992b-23094d7e73c4 for instance with vm_state deleted and task_state None.
Jan 26 16:18:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:18:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2484394825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:27 compute-0 ceph-mon[75140]: pgmap v2032: 305 pgs: 305 active+clean; 227 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 193 KiB/s rd, 607 KiB/s wr, 40 op/s
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.179 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.682s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.185 239969 DEBUG nova.compute.provider_tree [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.200 239969 DEBUG nova.scheduler.client.report [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.221 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.857s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.222 239969 DEBUG nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.307 239969 DEBUG nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.308 239969 DEBUG nova.network.neutron [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.339 239969 INFO nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.359 239969 DEBUG nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:18:27 compute-0 podman[342744]: 2026-01-26 16:18:27.400896835 +0000 UTC m=+0.087448387 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:18:27 compute-0 podman[342745]: 2026-01-26 16:18:27.407240271 +0000 UTC m=+0.092769188 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.455 239969 DEBUG nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.456 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.456 239969 INFO nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Creating image(s)
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.476 239969 DEBUG nova.storage.rbd_utils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 12ac525a-976f-4afb-b4db-6035caada7e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.496 239969 DEBUG nova.storage.rbd_utils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 12ac525a-976f-4afb-b4db-6035caada7e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.515 239969 DEBUG nova.storage.rbd_utils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 12ac525a-976f-4afb-b4db-6035caada7e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.518 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.550 239969 DEBUG nova.policy [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.584 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.584 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.585 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.585 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.602 239969 DEBUG nova.storage.rbd_utils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 12ac525a-976f-4afb-b4db-6035caada7e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:27 compute-0 nova_compute[239965]: 2026-01-26 16:18:27.605 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 12ac525a-976f-4afb-b4db-6035caada7e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:28 compute-0 ovn_controller[146046]: 2026-01-26T16:18:28Z|01219|binding|INFO|Releasing lport f7bb788d-e36c-4f9a-b442-9f826e16e4d4 from this chassis (sb_readonly=0)
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.223 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.310 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2484394825' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 167 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 76 KiB/s wr, 61 op/s
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.455 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 12ac525a-976f-4afb-b4db-6035caada7e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.851s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.510 239969 DEBUG nova.storage.rbd_utils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 12ac525a-976f-4afb-b4db-6035caada7e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.576 239969 DEBUG nova.objects.instance [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 12ac525a-976f-4afb-b4db-6035caada7e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.591 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.591 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Ensure instance console log exists: /var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.592 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.592 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.592 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:18:28
Jan 26 16:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr', 'vms']
Jan 26 16:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:18:28 compute-0 nova_compute[239965]: 2026-01-26 16:18:28.738 239969 DEBUG nova.network.neutron [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Successfully created port: 6ee09ee1-7760-4d7f-af2e-0b8d367c129c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.378 239969 DEBUG oslo_concurrency.lockutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.379 239969 DEBUG oslo_concurrency.lockutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.380 239969 DEBUG oslo_concurrency.lockutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.380 239969 DEBUG oslo_concurrency.lockutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.381 239969 DEBUG oslo_concurrency.lockutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.383 239969 INFO nova.compute.manager [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Terminating instance
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.385 239969 DEBUG nova.compute.manager [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:18:29 compute-0 ceph-mon[75140]: pgmap v2033: 305 pgs: 305 active+clean; 167 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 76 KiB/s wr, 61 op/s
Jan 26 16:18:29 compute-0 kernel: tapd7a7b654-83 (unregistering): left promiscuous mode
Jan 26 16:18:29 compute-0 NetworkManager[48954]: <info>  [1769444309.4388] device (tapd7a7b654-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:18:29 compute-0 ovn_controller[146046]: 2026-01-26T16:18:29Z|01220|binding|INFO|Releasing lport d7a7b654-8380-4796-9567-cf4032213582 from this chassis (sb_readonly=0)
Jan 26 16:18:29 compute-0 ovn_controller[146046]: 2026-01-26T16:18:29Z|01221|binding|INFO|Setting lport d7a7b654-8380-4796-9567-cf4032213582 down in Southbound
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.445 239969 DEBUG nova.compute.manager [req-c6717978-1e34-4e4f-968f-c4cc570ffe9c req-43b4e416-155c-4ff4-9832-3cdad09e49d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received event network-changed-d7a7b654-8380-4796-9567-cf4032213582 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:29 compute-0 ovn_controller[146046]: 2026-01-26T16:18:29Z|01222|binding|INFO|Removing iface tapd7a7b654-83 ovn-installed in OVS
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.446 239969 DEBUG nova.compute.manager [req-c6717978-1e34-4e4f-968f-c4cc570ffe9c req-43b4e416-155c-4ff4-9832-3cdad09e49d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Refreshing instance network info cache due to event network-changed-d7a7b654-8380-4796-9567-cf4032213582. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.446 239969 DEBUG oslo_concurrency.lockutils [req-c6717978-1e34-4e4f-968f-c4cc570ffe9c req-43b4e416-155c-4ff4-9832-3cdad09e49d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.446 239969 DEBUG oslo_concurrency.lockutils [req-c6717978-1e34-4e4f-968f-c4cc570ffe9c req-43b4e416-155c-4ff4-9832-3cdad09e49d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.447 239969 DEBUG nova.network.neutron [req-c6717978-1e34-4e4f-968f-c4cc570ffe9c req-43b4e416-155c-4ff4-9832-3cdad09e49d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Refreshing network info cache for port d7a7b654-8380-4796-9567-cf4032213582 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.448 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.455 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:a1:c3 10.100.0.7'], port_security=['fa:16:3e:b1:a1:c3 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4cb5d445-077f-4868-a619-d07393d42134', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4515f149-8897-4e5a-8e7f-088b776346f8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d7a7b654-8380-4796-9567-cf4032213582) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.456 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d7a7b654-8380-4796-9567-cf4032213582 in datapath e76f9602-5b6a-4dfd-b8a5-93cc21546e0c unbound from our chassis
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.457 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e76f9602-5b6a-4dfd-b8a5-93cc21546e0c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.458 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[14763849-0685-4c33-adef-29010ffd6058]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.458 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c namespace which is not needed anymore
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.463 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:29 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000073.scope: Deactivated successfully.
Jan 26 16:18:29 compute-0 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000073.scope: Consumed 16.113s CPU time.
Jan 26 16:18:29 compute-0 systemd-machined[208061]: Machine qemu-141-instance-00000073 terminated.
Jan 26 16:18:29 compute-0 neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c[341064]: [NOTICE]   (341078) : haproxy version is 2.8.14-c23fe91
Jan 26 16:18:29 compute-0 neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c[341064]: [NOTICE]   (341078) : path to executable is /usr/sbin/haproxy
Jan 26 16:18:29 compute-0 neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c[341064]: [WARNING]  (341078) : Exiting Master process...
Jan 26 16:18:29 compute-0 neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c[341064]: [WARNING]  (341078) : Exiting Master process...
Jan 26 16:18:29 compute-0 neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c[341064]: [ALERT]    (341078) : Current worker (341081) exited with code 143 (Terminated)
Jan 26 16:18:29 compute-0 neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c[341064]: [WARNING]  (341078) : All workers exited. Exiting... (0)
Jan 26 16:18:29 compute-0 systemd[1]: libpod-360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a.scope: Deactivated successfully.
Jan 26 16:18:29 compute-0 conmon[341064]: conmon 360c0f060ce246fdba55 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a.scope/container/memory.events
Jan 26 16:18:29 compute-0 podman[342979]: 2026-01-26 16:18:29.577666024 +0000 UTC m=+0.039973648 container died 360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a-userdata-shm.mount: Deactivated successfully.
Jan 26 16:18:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d99a85948953b380d8f892be725b6b1ed95a4686e35ede87c397775d8d4af5eb-merged.mount: Deactivated successfully.
Jan 26 16:18:29 compute-0 podman[342979]: 2026-01-26 16:18:29.616484463 +0000 UTC m=+0.078792087 container cleanup 360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.616 239969 INFO nova.virt.libvirt.driver [-] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Instance destroyed successfully.
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.617 239969 DEBUG nova.objects.instance [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:18:29 compute-0 systemd[1]: libpod-conmon-360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a.scope: Deactivated successfully.
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.639 239969 DEBUG nova.virt.libvirt.vif [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:17:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1990905637',display_name='tempest-TestNetworkBasicOps-server-1990905637',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1990905637',id=115,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLAifdPei62MC3HGgHOnaTQp7rfviEisVt2PdWf3OjLKLpNk2wvhm4+Db9zsv5q7m0fc3VIYgFAw6bwPWmCYoSQtS/nYYTstdh3EXd4AVP6AmkFUIUkttraekRLDIhOIjw==',key_name='tempest-TestNetworkBasicOps-555094853',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:17:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-mnzeqw2c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:17:17Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.639 239969 DEBUG nova.network.os_vif_util [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.641 239969 DEBUG nova.network.os_vif_util [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b1:a1:c3,bridge_name='br-int',has_traffic_filtering=True,id=d7a7b654-8380-4796-9567-cf4032213582,network=Network(e76f9602-5b6a-4dfd-b8a5-93cc21546e0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7a7b654-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.641 239969 DEBUG os_vif [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:a1:c3,bridge_name='br-int',has_traffic_filtering=True,id=d7a7b654-8380-4796-9567-cf4032213582,network=Network(e76f9602-5b6a-4dfd-b8a5-93cc21546e0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7a7b654-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.644 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.644 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7a7b654-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.647 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.650 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.651 239969 INFO os_vif [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:a1:c3,bridge_name='br-int',has_traffic_filtering=True,id=d7a7b654-8380-4796-9567-cf4032213582,network=Network(e76f9602-5b6a-4dfd-b8a5-93cc21546e0c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd7a7b654-83')
Jan 26 16:18:29 compute-0 podman[343017]: 2026-01-26 16:18:29.675595947 +0000 UTC m=+0.039915237 container remove 360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.682 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9fb18823-3928-4699-9ea2-97555e58c81d]: (4, ('Mon Jan 26 04:18:29 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c (360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a)\n360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a\nMon Jan 26 04:18:29 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c (360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a)\n360c0f060ce246fdba55cc4e7c6ceb01bf8450f241b87f4985f6e216dfe6083a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.684 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[18f615e4-6fa9-4fd5-8ec9-0c1708c1b703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.685 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape76f9602-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:29 compute-0 kernel: tape76f9602-50: left promiscuous mode
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.687 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.700 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.703 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[67e0d3c0-1574-44ca-9836-b3413ec56829]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.724 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe0bf4d-68ef-43fd-a18b-d735cb1bedff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.725 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c59b21ef-b3ff-455f-bd44-7a0b0b655339]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.742 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[381f8446-915f-4f1d-9088-8275ca3248b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 572735, 'reachable_time': 40686, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343048, 'error': None, 'target': 'ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:29 compute-0 systemd[1]: run-netns-ovnmeta\x2de76f9602\x2d5b6a\x2d4dfd\x2db8a5\x2d93cc21546e0c.mount: Deactivated successfully.
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.747 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e76f9602-5b6a-4dfd-b8a5-93cc21546e0c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:18:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:29.747 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[7f624775-284b-456c-ad16-32fb99b41c3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.900 239969 INFO nova.virt.libvirt.driver [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Deleting instance files /var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_del
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.900 239969 INFO nova.virt.libvirt.driver [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Deletion of /var/lib/nova/instances/dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271_del complete
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.955 239969 INFO nova.compute.manager [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Took 0.57 seconds to destroy the instance on the hypervisor.
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.955 239969 DEBUG oslo.service.loopingcall [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.955 239969 DEBUG nova.compute.manager [-] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:18:29 compute-0 nova_compute[239965]: 2026-01-26 16:18:29.956 239969 DEBUG nova.network.neutron [-] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 167 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 24 KiB/s wr, 29 op/s
Jan 26 16:18:30 compute-0 ceph-mon[75140]: pgmap v2034: 305 pgs: 305 active+clean; 167 MiB data, 854 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 24 KiB/s wr, 29 op/s
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:18:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.016 239969 DEBUG nova.network.neutron [-] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.024 239969 DEBUG nova.network.neutron [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Successfully updated port: 6ee09ee1-7760-4d7f-af2e-0b8d367c129c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.039 239969 INFO nova.compute.manager [-] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Took 1.08 seconds to deallocate network for instance.
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.040 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.040 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.041 239969 DEBUG nova.network.neutron [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.101 239969 DEBUG nova.compute.manager [req-6da5fa2d-e827-4bf1-acde-bc42d7e8e08a req-6fdf633d-b6a0-4546-abb2-6bdb5a543fdf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received event network-vif-deleted-d7a7b654-8380-4796-9567-cf4032213582 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.110 239969 DEBUG oslo_concurrency.lockutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.111 239969 DEBUG oslo_concurrency.lockutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.183 239969 DEBUG oslo_concurrency.processutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.243 239969 DEBUG nova.network.neutron [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.251 239969 DEBUG nova.network.neutron [req-c6717978-1e34-4e4f-968f-c4cc570ffe9c req-43b4e416-155c-4ff4-9832-3cdad09e49d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updated VIF entry in instance network info cache for port d7a7b654-8380-4796-9567-cf4032213582. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.252 239969 DEBUG nova.network.neutron [req-c6717978-1e34-4e4f-968f-c4cc570ffe9c req-43b4e416-155c-4ff4-9832-3cdad09e49d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Updating instance_info_cache with network_info: [{"id": "d7a7b654-8380-4796-9567-cf4032213582", "address": "fa:16:3e:b1:a1:c3", "network": {"id": "e76f9602-5b6a-4dfd-b8a5-93cc21546e0c", "bridge": "br-int", "label": "tempest-network-smoke--579472937", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd7a7b654-83", "ovs_interfaceid": "d7a7b654-8380-4796-9567-cf4032213582", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.274 239969 DEBUG oslo_concurrency.lockutils [req-c6717978-1e34-4e4f-968f-c4cc570ffe9c req-43b4e416-155c-4ff4-9832-3cdad09e49d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.547 239969 DEBUG nova.compute.manager [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received event network-vif-unplugged-d7a7b654-8380-4796-9567-cf4032213582 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.547 239969 DEBUG oslo_concurrency.lockutils [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.548 239969 DEBUG oslo_concurrency.lockutils [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.548 239969 DEBUG oslo_concurrency.lockutils [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.548 239969 DEBUG nova.compute.manager [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] No waiting events found dispatching network-vif-unplugged-d7a7b654-8380-4796-9567-cf4032213582 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.548 239969 WARNING nova.compute.manager [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received unexpected event network-vif-unplugged-d7a7b654-8380-4796-9567-cf4032213582 for instance with vm_state deleted and task_state None.
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.548 239969 DEBUG nova.compute.manager [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received event network-vif-plugged-d7a7b654-8380-4796-9567-cf4032213582 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.549 239969 DEBUG oslo_concurrency.lockutils [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.549 239969 DEBUG oslo_concurrency.lockutils [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.549 239969 DEBUG oslo_concurrency.lockutils [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.549 239969 DEBUG nova.compute.manager [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] No waiting events found dispatching network-vif-plugged-d7a7b654-8380-4796-9567-cf4032213582 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.549 239969 WARNING nova.compute.manager [req-33233319-3c34-4c48-ae65-0ddfd38ea2b7 req-07de9885-7354-428f-8cae-3d2ce329471e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Received unexpected event network-vif-plugged-d7a7b654-8380-4796-9567-cf4032213582 for instance with vm_state deleted and task_state None.
Jan 26 16:18:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:18:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346016320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.812 239969 DEBUG oslo_concurrency.processutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.819 239969 DEBUG nova.compute.provider_tree [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.840 239969 DEBUG nova.scheduler.client.report [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.876 239969 DEBUG oslo_concurrency.lockutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.766s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3346016320' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.910 239969 INFO nova.scheduler.client.report [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271
Jan 26 16:18:31 compute-0 nova_compute[239965]: 2026-01-26 16:18:31.976 239969 DEBUG oslo_concurrency.lockutils [None req-b2da3671-edba-49ba-b716-85f6431ce53f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 134 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 26 16:18:32 compute-0 nova_compute[239965]: 2026-01-26 16:18:32.494 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:33 compute-0 ceph-mon[75140]: pgmap v2035: 305 pgs: 305 active+clean; 134 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 26 16:18:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.195 239969 DEBUG nova.compute.manager [req-1d61e398-1878-4ec1-8145-79579c9720af req-6f060261-0777-41e7-a838-80e02bdb0b99 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received event network-changed-6ee09ee1-7760-4d7f-af2e-0b8d367c129c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.195 239969 DEBUG nova.compute.manager [req-1d61e398-1878-4ec1-8145-79579c9720af req-6f060261-0777-41e7-a838-80e02bdb0b99 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Refreshing instance network info cache due to event network-changed-6ee09ee1-7760-4d7f-af2e-0b8d367c129c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.196 239969 DEBUG oslo_concurrency.lockutils [req-1d61e398-1878-4ec1-8145-79579c9720af req-6f060261-0777-41e7-a838-80e02bdb0b99 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.771 239969 DEBUG nova.network.neutron [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updating instance_info_cache with network_info: [{"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.792 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.793 239969 DEBUG nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Instance network_info: |[{"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.793 239969 DEBUG oslo_concurrency.lockutils [req-1d61e398-1878-4ec1-8145-79579c9720af req-6f060261-0777-41e7-a838-80e02bdb0b99 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.794 239969 DEBUG nova.network.neutron [req-1d61e398-1878-4ec1-8145-79579c9720af req-6f060261-0777-41e7-a838-80e02bdb0b99 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Refreshing network info cache for port 6ee09ee1-7760-4d7f-af2e-0b8d367c129c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.798 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Start _get_guest_xml network_info=[{"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.803 239969 WARNING nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.809 239969 DEBUG nova.virt.libvirt.host [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.810 239969 DEBUG nova.virt.libvirt.host [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.817 239969 DEBUG nova.virt.libvirt.host [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.817 239969 DEBUG nova.virt.libvirt.host [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.818 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.818 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.819 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.819 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.819 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.820 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.820 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.820 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.821 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.821 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.821 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.822 239969 DEBUG nova.virt.hardware [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:18:33 compute-0 nova_compute[239965]: 2026-01-26 16:18:33.825 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:18:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376026956' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:18:34 compute-0 nova_compute[239965]: 2026-01-26 16:18:34.422 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 134 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 26 16:18:34 compute-0 nova_compute[239965]: 2026-01-26 16:18:34.447 239969 DEBUG nova.storage.rbd_utils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 12ac525a-976f-4afb-b4db-6035caada7e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:34 compute-0 nova_compute[239965]: 2026-01-26 16:18:34.451 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3376026956' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:18:34 compute-0 nova_compute[239965]: 2026-01-26 16:18:34.649 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:18:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/630455062' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.006 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.008 239969 DEBUG nova.virt.libvirt.vif [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:18:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-547377129',display_name='tempest-TestGettingAddress-server-547377129',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-547377129',id=117,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhQG087QD+2hAbAMQU1nGEagJiakW/DCR9CbPhEN0TT1VEnvAWKVpxW1jmBBFv8JmlAY1EliJqVpxhfQ7DGAWVhpLujt7IrW/DFNwTQWV1RlSnJjK8DC9EppnenO9SXIw==',key_name='tempest-TestGettingAddress-1313867492',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-mnwp0k0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:18:27Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=12ac525a-976f-4afb-b4db-6035caada7e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.008 239969 DEBUG nova.network.os_vif_util [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.009 239969 DEBUG nova.network.os_vif_util [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:06:ac,bridge_name='br-int',has_traffic_filtering=True,id=6ee09ee1-7760-4d7f-af2e-0b8d367c129c,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee09ee1-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.010 239969 DEBUG nova.objects.instance [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 12ac525a-976f-4afb-b4db-6035caada7e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.025 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <uuid>12ac525a-976f-4afb-b4db-6035caada7e0</uuid>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <name>instance-00000075</name>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-547377129</nova:name>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:18:33</nova:creationTime>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <nova:port uuid="6ee09ee1-7760-4d7f-af2e-0b8d367c129c">
Jan 26 16:18:35 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe78:6ac" ipVersion="6"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe78:6ac" ipVersion="6"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <system>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <entry name="serial">12ac525a-976f-4afb-b4db-6035caada7e0</entry>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <entry name="uuid">12ac525a-976f-4afb-b4db-6035caada7e0</entry>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     </system>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <os>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   </os>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <features>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   </features>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/12ac525a-976f-4afb-b4db-6035caada7e0_disk">
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       </source>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/12ac525a-976f-4afb-b4db-6035caada7e0_disk.config">
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       </source>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:18:35 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:78:06:ac"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <target dev="tap6ee09ee1-77"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0/console.log" append="off"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <video>
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     </video>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:18:35 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:18:35 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:18:35 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:18:35 compute-0 nova_compute[239965]: </domain>
Jan 26 16:18:35 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.027 239969 DEBUG nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Preparing to wait for external event network-vif-plugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.027 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.028 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.028 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.029 239969 DEBUG nova.virt.libvirt.vif [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:18:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-547377129',display_name='tempest-TestGettingAddress-server-547377129',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-547377129',id=117,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhQG087QD+2hAbAMQU1nGEagJiakW/DCR9CbPhEN0TT1VEnvAWKVpxW1jmBBFv8JmlAY1EliJqVpxhfQ7DGAWVhpLujt7IrW/DFNwTQWV1RlSnJjK8DC9EppnenO9SXIw==',key_name='tempest-TestGettingAddress-1313867492',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-mnwp0k0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:18:27Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=12ac525a-976f-4afb-b4db-6035caada7e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.029 239969 DEBUG nova.network.os_vif_util [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.030 239969 DEBUG nova.network.os_vif_util [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:78:06:ac,bridge_name='br-int',has_traffic_filtering=True,id=6ee09ee1-7760-4d7f-af2e-0b8d367c129c,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee09ee1-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.030 239969 DEBUG os_vif [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:06:ac,bridge_name='br-int',has_traffic_filtering=True,id=6ee09ee1-7760-4d7f-af2e-0b8d367c129c,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee09ee1-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.031 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.031 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.031 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.034 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.034 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6ee09ee1-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.034 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6ee09ee1-77, col_values=(('external_ids', {'iface-id': '6ee09ee1-7760-4d7f-af2e-0b8d367c129c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:78:06:ac', 'vm-uuid': '12ac525a-976f-4afb-b4db-6035caada7e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:35 compute-0 NetworkManager[48954]: <info>  [1769444315.0372] manager: (tap6ee09ee1-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/502)
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.039 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.041 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.042 239969 INFO os_vif [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:78:06:ac,bridge_name='br-int',has_traffic_filtering=True,id=6ee09ee1-7760-4d7f-af2e-0b8d367c129c,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee09ee1-77')
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.112 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.113 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.113 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:78:06:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.114 239969 INFO nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Using config drive
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.138 239969 DEBUG nova.storage.rbd_utils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 12ac525a-976f-4afb-b4db-6035caada7e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:35 compute-0 ceph-mon[75140]: pgmap v2036: 305 pgs: 305 active+clean; 134 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 82 op/s
Jan 26 16:18:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/630455062' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.653 239969 INFO nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Creating config drive at /var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0/disk.config
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.658 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyoi200zs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.805 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyoi200zs" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.835 239969 DEBUG nova.storage.rbd_utils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 12ac525a-976f-4afb-b4db-6035caada7e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.838 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0/disk.config 12ac525a-976f-4afb-b4db-6035caada7e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.973 239969 DEBUG oslo_concurrency.processutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0/disk.config 12ac525a-976f-4afb-b4db-6035caada7e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:35 compute-0 nova_compute[239965]: 2026-01-26 16:18:35.974 239969 INFO nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Deleting local config drive /var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0/disk.config because it was imported into RBD.
Jan 26 16:18:36 compute-0 kernel: tap6ee09ee1-77: entered promiscuous mode
Jan 26 16:18:36 compute-0 NetworkManager[48954]: <info>  [1769444316.0273] manager: (tap6ee09ee1-77): new Tun device (/org/freedesktop/NetworkManager/Devices/503)
Jan 26 16:18:36 compute-0 ovn_controller[146046]: 2026-01-26T16:18:36Z|01223|binding|INFO|Claiming lport 6ee09ee1-7760-4d7f-af2e-0b8d367c129c for this chassis.
Jan 26 16:18:36 compute-0 ovn_controller[146046]: 2026-01-26T16:18:36Z|01224|binding|INFO|6ee09ee1-7760-4d7f-af2e-0b8d367c129c: Claiming fa:16:3e:78:06:ac 10.100.0.14 2001:db8:0:1:f816:3eff:fe78:6ac 2001:db8::f816:3eff:fe78:6ac
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.027 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.036 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:06:ac 10.100.0.14 2001:db8:0:1:f816:3eff:fe78:6ac 2001:db8::f816:3eff:fe78:6ac'], port_security=['fa:16:3e:78:06:ac 10.100.0.14 2001:db8:0:1:f816:3eff:fe78:6ac 2001:db8::f816:3eff:fe78:6ac'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe78:6ac/64 2001:db8::f816:3eff:fe78:6ac/64', 'neutron:device_id': '12ac525a-976f-4afb-b4db-6035caada7e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe48bd19-537b-482b-a9dd-72cb7a300d8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b663232-d708-4949-9a4a-d4a2c365ff36, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=6ee09ee1-7760-4d7f-af2e-0b8d367c129c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.037 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 6ee09ee1-7760-4d7f-af2e-0b8d367c129c in datapath 869bf55a-0f32-4f15-8ff4-2acbed7fab15 bound to our chassis
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.038 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 869bf55a-0f32-4f15-8ff4-2acbed7fab15
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:36 compute-0 ovn_controller[146046]: 2026-01-26T16:18:36Z|01225|binding|INFO|Setting lport 6ee09ee1-7760-4d7f-af2e-0b8d367c129c ovn-installed in OVS
Jan 26 16:18:36 compute-0 ovn_controller[146046]: 2026-01-26T16:18:36Z|01226|binding|INFO|Setting lport 6ee09ee1-7760-4d7f-af2e-0b8d367c129c up in Southbound
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.044 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.046 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.076 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cedff8cc-03a9-46c7-887a-b54f8a8e0731]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.077 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap869bf55a-01 in ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.079 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap869bf55a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.079 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8f44de79-39e5-42e5-9ef3-24fa4452da73]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.080 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cfb63f-9dc3-4cdc-82cf-7ee9cd912c4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 systemd-udevd[343208]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:18:36 compute-0 systemd-machined[208061]: New machine qemu-144-instance-00000075.
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.095 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d2413be8-9213-46c8-9d99-f66a99e2df81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 NetworkManager[48954]: <info>  [1769444316.1011] device (tap6ee09ee1-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:18:36 compute-0 NetworkManager[48954]: <info>  [1769444316.1019] device (tap6ee09ee1-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:18:36 compute-0 systemd[1]: Started Virtual Machine qemu-144-instance-00000075.
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.120 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6383f746-de3f-484d-aaf5-987e0efa3965]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.153 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[832db77e-f26b-48d9-b36e-5802e2d9eafa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.159 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d86250c5-568f-4db7-844a-c4ee70f4b0d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 NetworkManager[48954]: <info>  [1769444316.1605] manager: (tap869bf55a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/504)
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.192 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d7b14586-88a9-4a77-94b3-cc9ad0d80e6f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.195 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e8aae940-6fa9-41d0-afa4-e392fc52fde8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 NetworkManager[48954]: <info>  [1769444316.2156] device (tap869bf55a-00): carrier: link connected
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.219 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8ef713-0e5b-4f32-8eaf-ebb270f9228a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.235 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a5712804-6da6-44d6-9aaa-a2d637e8b3c3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap869bf55a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580702, 'reachable_time': 36000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 343240, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.249 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e19a13a8-249f-44cf-b1cd-2b0f752b3ef8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:3fac'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580702, 'tstamp': 580702}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 343241, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.263 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0aa30e97-51d0-409e-bf14-7be790cbcd0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap869bf55a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580702, 'reachable_time': 36000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 343242, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.291 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3455f977-200c-4f73-85f5-67121be87288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.349 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b0c226e4-aeff-4a97-ac72-97722421f3e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.350 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap869bf55a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.350 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.351 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap869bf55a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:36 compute-0 kernel: tap869bf55a-00: entered promiscuous mode
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.406 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap869bf55a-00, col_values=(('external_ids', {'iface-id': 'ac473cbf-5010-4e34-9ff9-00fae14d9b70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.403 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:36 compute-0 NetworkManager[48954]: <info>  [1769444316.4074] manager: (tap869bf55a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/505)
Jan 26 16:18:36 compute-0 ovn_controller[146046]: 2026-01-26T16:18:36Z|01227|binding|INFO|Releasing lport ac473cbf-5010-4e34-9ff9-00fae14d9b70 from this chassis (sb_readonly=0)
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.408 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.420 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.421 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/869bf55a-0f32-4f15-8ff4-2acbed7fab15.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/869bf55a-0f32-4f15-8ff4-2acbed7fab15.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.422 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1eeb8649-7063-419e-b46f-678bf6d5be40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.422 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-869bf55a-0f32-4f15-8ff4-2acbed7fab15
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/869bf55a-0f32-4f15-8ff4-2acbed7fab15.pid.haproxy
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 869bf55a-0f32-4f15-8ff4-2acbed7fab15
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:18:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:36.423 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'env', 'PROCESS_TAG=haproxy-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/869bf55a-0f32-4f15-8ff4-2acbed7fab15.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:18:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 134 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Jan 26 16:18:36 compute-0 podman[343310]: 2026-01-26 16:18:36.786474008 +0000 UTC m=+0.041565617 container create b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.809 239969 DEBUG nova.compute.manager [req-44e1bf9c-8c61-4efd-ba0a-98b1f6e29304 req-081421ac-732e-4367-aac3-a7886d06c316 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received event network-vif-plugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.809 239969 DEBUG oslo_concurrency.lockutils [req-44e1bf9c-8c61-4efd-ba0a-98b1f6e29304 req-081421ac-732e-4367-aac3-a7886d06c316 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.809 239969 DEBUG oslo_concurrency.lockutils [req-44e1bf9c-8c61-4efd-ba0a-98b1f6e29304 req-081421ac-732e-4367-aac3-a7886d06c316 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.810 239969 DEBUG oslo_concurrency.lockutils [req-44e1bf9c-8c61-4efd-ba0a-98b1f6e29304 req-081421ac-732e-4367-aac3-a7886d06c316 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.810 239969 DEBUG nova.compute.manager [req-44e1bf9c-8c61-4efd-ba0a-98b1f6e29304 req-081421ac-732e-4367-aac3-a7886d06c316 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Processing event network-vif-plugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:18:36 compute-0 systemd[1]: Started libpod-conmon-b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781.scope.
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.829 239969 DEBUG nova.network.neutron [req-1d61e398-1878-4ec1-8145-79579c9720af req-6f060261-0777-41e7-a838-80e02bdb0b99 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updated VIF entry in instance network info cache for port 6ee09ee1-7760-4d7f-af2e-0b8d367c129c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.830 239969 DEBUG nova.network.neutron [req-1d61e398-1878-4ec1-8145-79579c9720af req-6f060261-0777-41e7-a838-80e02bdb0b99 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updating instance_info_cache with network_info: [{"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.838 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444316.8380315, 12ac525a-976f-4afb-b4db-6035caada7e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.839 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] VM Started (Lifecycle Event)
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.840 239969 DEBUG nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:18:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.846 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:18:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03ed4963320cc366f409f3e8ac29818249940f68328eb164fe3e7c3344e14a48/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.853 239969 INFO nova.virt.libvirt.driver [-] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Instance spawned successfully.
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.855 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:18:36 compute-0 podman[343310]: 2026-01-26 16:18:36.765040783 +0000 UTC m=+0.020132412 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.862 239969 DEBUG oslo_concurrency.lockutils [req-1d61e398-1878-4ec1-8145-79579c9720af req-6f060261-0777-41e7-a838-80e02bdb0b99 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.863 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.866 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.883 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.883 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444316.839287, 12ac525a-976f-4afb-b4db-6035caada7e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.883 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] VM Paused (Lifecycle Event)
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.887 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.887 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.888 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.888 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.888 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.889 239969 DEBUG nova.virt.libvirt.driver [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.900 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.902 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444316.8435051, 12ac525a-976f-4afb-b4db-6035caada7e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.902 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] VM Resumed (Lifecycle Event)
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.923 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.926 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.954 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.969 239969 INFO nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Took 9.51 seconds to spawn the instance on the hypervisor.
Jan 26 16:18:36 compute-0 nova_compute[239965]: 2026-01-26 16:18:36.969 239969 DEBUG nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:18:37 compute-0 nova_compute[239965]: 2026-01-26 16:18:37.024 239969 INFO nova.compute.manager [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Took 10.67 seconds to build instance.
Jan 26 16:18:37 compute-0 nova_compute[239965]: 2026-01-26 16:18:37.044 239969 DEBUG oslo_concurrency.lockutils [None req-a2897130-1ff8-44b3-a69d-062ddfd16db7 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.792s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:37 compute-0 podman[343310]: 2026-01-26 16:18:37.060213595 +0000 UTC m=+0.315305234 container init b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:18:37 compute-0 podman[343310]: 2026-01-26 16:18:37.067116383 +0000 UTC m=+0.322208002 container start b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:18:37 compute-0 neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15[343330]: [NOTICE]   (343336) : New worker (343338) forked
Jan 26 16:18:37 compute-0 neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15[343330]: [NOTICE]   (343336) : Loading success.
Jan 26 16:18:37 compute-0 ovn_controller[146046]: 2026-01-26T16:18:37Z|01228|binding|INFO|Releasing lport ac473cbf-5010-4e34-9ff9-00fae14d9b70 from this chassis (sb_readonly=0)
Jan 26 16:18:37 compute-0 nova_compute[239965]: 2026-01-26 16:18:37.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:37 compute-0 ovn_controller[146046]: 2026-01-26T16:18:37Z|01229|binding|INFO|Releasing lport ac473cbf-5010-4e34-9ff9-00fae14d9b70 from this chassis (sb_readonly=0)
Jan 26 16:18:37 compute-0 nova_compute[239965]: 2026-01-26 16:18:37.393 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:37 compute-0 ceph-mon[75140]: pgmap v2037: 305 pgs: 305 active+clean; 134 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.8 MiB/s wr, 83 op/s
Jan 26 16:18:37 compute-0 nova_compute[239965]: 2026-01-26 16:18:37.493 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:38 compute-0 nova_compute[239965]: 2026-01-26 16:18:38.281 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444303.27989, d37c9efe-0572-4b48-9bce-6af473740fc4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:18:38 compute-0 nova_compute[239965]: 2026-01-26 16:18:38.282 239969 INFO nova.compute.manager [-] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] VM Stopped (Lifecycle Event)
Jan 26 16:18:38 compute-0 nova_compute[239965]: 2026-01-26 16:18:38.307 239969 DEBUG nova.compute.manager [None req-3999081b-50fa-4eb6-bfa4-159fbe810f5b - - - - - -] [instance: d37c9efe-0572-4b48-9bce-6af473740fc4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:18:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 26 16:18:38 compute-0 nova_compute[239965]: 2026-01-26 16:18:38.926 239969 DEBUG nova.compute.manager [req-d80eccb8-19af-401c-bf9b-af63ca7241f9 req-a4323d4f-9eff-4b19-96cf-fd5ea37ccfe9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received event network-vif-plugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:38 compute-0 nova_compute[239965]: 2026-01-26 16:18:38.926 239969 DEBUG oslo_concurrency.lockutils [req-d80eccb8-19af-401c-bf9b-af63ca7241f9 req-a4323d4f-9eff-4b19-96cf-fd5ea37ccfe9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:38 compute-0 nova_compute[239965]: 2026-01-26 16:18:38.926 239969 DEBUG oslo_concurrency.lockutils [req-d80eccb8-19af-401c-bf9b-af63ca7241f9 req-a4323d4f-9eff-4b19-96cf-fd5ea37ccfe9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:38 compute-0 nova_compute[239965]: 2026-01-26 16:18:38.927 239969 DEBUG oslo_concurrency.lockutils [req-d80eccb8-19af-401c-bf9b-af63ca7241f9 req-a4323d4f-9eff-4b19-96cf-fd5ea37ccfe9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:38 compute-0 nova_compute[239965]: 2026-01-26 16:18:38.927 239969 DEBUG nova.compute.manager [req-d80eccb8-19af-401c-bf9b-af63ca7241f9 req-a4323d4f-9eff-4b19-96cf-fd5ea37ccfe9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] No waiting events found dispatching network-vif-plugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:18:38 compute-0 nova_compute[239965]: 2026-01-26 16:18:38.927 239969 WARNING nova.compute.manager [req-d80eccb8-19af-401c-bf9b-af63ca7241f9 req-a4323d4f-9eff-4b19-96cf-fd5ea37ccfe9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received unexpected event network-vif-plugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c for instance with vm_state active and task_state None.
Jan 26 16:18:39 compute-0 ceph-mon[75140]: pgmap v2038: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 1.8 MiB/s wr, 91 op/s
Jan 26 16:18:40 compute-0 nova_compute[239965]: 2026-01-26 16:18:40.037 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 26 16:18:40 compute-0 ceph-mon[75140]: pgmap v2039: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Jan 26 16:18:41 compute-0 ovn_controller[146046]: 2026-01-26T16:18:41Z|01230|binding|INFO|Releasing lport ac473cbf-5010-4e34-9ff9-00fae14d9b70 from this chassis (sb_readonly=0)
Jan 26 16:18:41 compute-0 NetworkManager[48954]: <info>  [1769444321.1176] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Jan 26 16:18:41 compute-0 NetworkManager[48954]: <info>  [1769444321.1186] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/507)
Jan 26 16:18:41 compute-0 nova_compute[239965]: 2026-01-26 16:18:41.116 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:41 compute-0 ovn_controller[146046]: 2026-01-26T16:18:41Z|01231|binding|INFO|Releasing lport ac473cbf-5010-4e34-9ff9-00fae14d9b70 from this chassis (sb_readonly=0)
Jan 26 16:18:41 compute-0 nova_compute[239965]: 2026-01-26 16:18:41.144 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:41 compute-0 nova_compute[239965]: 2026-01-26 16:18:41.150 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:41 compute-0 nova_compute[239965]: 2026-01-26 16:18:41.444 239969 DEBUG nova.compute.manager [req-cf551193-4fc1-49bb-8391-7b9ff67d5341 req-033e9836-8e36-420d-8d14-14962466738b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received event network-changed-6ee09ee1-7760-4d7f-af2e-0b8d367c129c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:41 compute-0 nova_compute[239965]: 2026-01-26 16:18:41.444 239969 DEBUG nova.compute.manager [req-cf551193-4fc1-49bb-8391-7b9ff67d5341 req-033e9836-8e36-420d-8d14-14962466738b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Refreshing instance network info cache due to event network-changed-6ee09ee1-7760-4d7f-af2e-0b8d367c129c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:18:41 compute-0 nova_compute[239965]: 2026-01-26 16:18:41.445 239969 DEBUG oslo_concurrency.lockutils [req-cf551193-4fc1-49bb-8391-7b9ff67d5341 req-033e9836-8e36-420d-8d14-14962466738b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:18:41 compute-0 nova_compute[239965]: 2026-01-26 16:18:41.445 239969 DEBUG oslo_concurrency.lockutils [req-cf551193-4fc1-49bb-8391-7b9ff67d5341 req-033e9836-8e36-420d-8d14-14962466738b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:18:41 compute-0 nova_compute[239965]: 2026-01-26 16:18:41.446 239969 DEBUG nova.network.neutron [req-cf551193-4fc1-49bb-8391-7b9ff67d5341 req-033e9836-8e36-420d-8d14-14962466738b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Refreshing network info cache for port 6ee09ee1-7760-4d7f-af2e-0b8d367c129c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:18:42 compute-0 nova_compute[239965]: 2026-01-26 16:18:42.006 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 16:18:42 compute-0 nova_compute[239965]: 2026-01-26 16:18:42.495 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:42 compute-0 nova_compute[239965]: 2026-01-26 16:18:42.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:42 compute-0 ceph-mon[75140]: pgmap v2040: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 16:18:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:43 compute-0 nova_compute[239965]: 2026-01-26 16:18:43.531 239969 DEBUG nova.network.neutron [req-cf551193-4fc1-49bb-8391-7b9ff67d5341 req-033e9836-8e36-420d-8d14-14962466738b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updated VIF entry in instance network info cache for port 6ee09ee1-7760-4d7f-af2e-0b8d367c129c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:18:43 compute-0 nova_compute[239965]: 2026-01-26 16:18:43.533 239969 DEBUG nova.network.neutron [req-cf551193-4fc1-49bb-8391-7b9ff67d5341 req-033e9836-8e36-420d-8d14-14962466738b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updating instance_info_cache with network_info: [{"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:18:43 compute-0 nova_compute[239965]: 2026-01-26 16:18:43.567 239969 DEBUG oslo_concurrency.lockutils [req-cf551193-4fc1-49bb-8391-7b9ff67d5341 req-033e9836-8e36-420d-8d14-14962466738b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:18:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:18:44 compute-0 ceph-mon[75140]: pgmap v2041: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:18:44 compute-0 nova_compute[239965]: 2026-01-26 16:18:44.613 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444309.6128042, dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:18:44 compute-0 nova_compute[239965]: 2026-01-26 16:18:44.615 239969 INFO nova.compute.manager [-] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] VM Stopped (Lifecycle Event)
Jan 26 16:18:44 compute-0 nova_compute[239965]: 2026-01-26 16:18:44.634 239969 DEBUG nova.compute.manager [None req-af4b560d-b91d-48d0-9865-6974a787539e - - - - - -] [instance: dd0db5bd-7bc2-4c80-a5e7-92dcdb1f1271] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:18:45 compute-0 nova_compute[239965]: 2026-01-26 16:18:45.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:45 compute-0 nova_compute[239965]: 2026-01-26 16:18:45.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:18:46 compute-0 ceph-mon[75140]: pgmap v2042: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:18:47 compute-0 nova_compute[239965]: 2026-01-26 16:18:47.530 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:47 compute-0 nova_compute[239965]: 2026-01-26 16:18:47.905 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:18:48 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 26 16:18:48 compute-0 ceph-mon[75140]: pgmap v2043: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:18:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:18:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4196584662' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:18:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:18:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4196584662' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003619756659485209 of space, bias 1.0, pg target 0.10859269978455627 quantized to 32 (current 32)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010184309263205152 of space, bias 1.0, pg target 0.3055292778961545 quantized to 32 (current 32)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.090767573943911e-07 of space, bias 4.0, pg target 0.0008508921088732694 quantized to 16 (current 16)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:18:49 compute-0 nova_compute[239965]: 2026-01-26 16:18:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4196584662' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:18:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4196584662' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:18:50 compute-0 nova_compute[239965]: 2026-01-26 16:18:50.046 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Jan 26 16:18:50 compute-0 ceph-mon[75140]: pgmap v2044: 305 pgs: 305 active+clean; 134 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Jan 26 16:18:51 compute-0 ovn_controller[146046]: 2026-01-26T16:18:51Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:78:06:ac 10.100.0.14
Jan 26 16:18:51 compute-0 ovn_controller[146046]: 2026-01-26T16:18:51Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:78:06:ac 10.100.0.14
Jan 26 16:18:51 compute-0 nova_compute[239965]: 2026-01-26 16:18:51.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:51 compute-0 nova_compute[239965]: 2026-01-26 16:18:51.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:18:51 compute-0 nova_compute[239965]: 2026-01-26 16:18:51.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:18:52 compute-0 nova_compute[239965]: 2026-01-26 16:18:52.060 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:18:52 compute-0 nova_compute[239965]: 2026-01-26 16:18:52.061 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:18:52 compute-0 nova_compute[239965]: 2026-01-26 16:18:52.061 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:18:52 compute-0 nova_compute[239965]: 2026-01-26 16:18:52.061 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12ac525a-976f-4afb-b4db-6035caada7e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:18:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 26 16:18:52 compute-0 nova_compute[239965]: 2026-01-26 16:18:52.531 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:52 compute-0 ceph-mon[75140]: pgmap v2045: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Jan 26 16:18:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:54 compute-0 nova_compute[239965]: 2026-01-26 16:18:54.190 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updating instance_info_cache with network_info: [{"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:18:54 compute-0 nova_compute[239965]: 2026-01-26 16:18:54.214 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:18:54 compute-0 nova_compute[239965]: 2026-01-26 16:18:54.215 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:18:54 compute-0 nova_compute[239965]: 2026-01-26 16:18:54.215 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:54 compute-0 nova_compute[239965]: 2026-01-26 16:18:54.215 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:54 compute-0 nova_compute[239965]: 2026-01-26 16:18:54.216 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:18:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 16:18:54 compute-0 ceph-mon[75140]: pgmap v2046: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:55 compute-0 sudo[343350]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:18:55 compute-0 sudo[343350]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:18:55 compute-0 sudo[343350]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:55 compute-0 sudo[343375]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:18:55 compute-0 sudo[343375]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.482 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.484 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.504 239969 DEBUG nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.585 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.586 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.595 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.596 239969 INFO nova.compute.claims [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:18:55 compute-0 sudo[343375]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:55 compute-0 nova_compute[239965]: 2026-01-26 16:18:55.717 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:18:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:18:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:18:55 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:18:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:18:55 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:18:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:18:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:18:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:18:55 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:18:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:18:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:18:55 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:18:55 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:18:55 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:18:55 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:18:55 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:18:55 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:18:55 compute-0 sudo[343431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:18:55 compute-0 sudo[343431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:18:55 compute-0 sudo[343431]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:55 compute-0 sudo[343458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:18:55 compute-0 sudo[343458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:18:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:56.050 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:3d:85 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b693f631-0e78-4293-bfc2-2d727801bcfe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b693f631-0e78-4293-bfc2-2d727801bcfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '846353c2154b4f35ae0fbada5ce2ba45', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6f881fd-9f8f-4d81-9287-fd52fd4a8b9e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60da77b8-7540-46a2-96f8-50ed38096ec7) old=Port_Binding(mac=['fa:16:3e:b5:3d:85 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b693f631-0e78-4293-bfc2-2d727801bcfe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b693f631-0e78-4293-bfc2-2d727801bcfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '846353c2154b4f35ae0fbada5ce2ba45', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:18:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:56.052 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60da77b8-7540-46a2-96f8-50ed38096ec7 in datapath b693f631-0e78-4293-bfc2-2d727801bcfe updated
Jan 26 16:18:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:56.053 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b693f631-0e78-4293-bfc2-2d727801bcfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:18:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:56.054 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[57648cf7-9227-4caa-9aab-28e02af65da7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:18:56 compute-0 podman[343511]: 2026-01-26 16:18:56.233122555 +0000 UTC m=+0.102871585 container create a43a641e603e5f45e5cb8054b9a94471024c4a66ee2f213249e97c4c0e6a922e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:18:56 compute-0 podman[343511]: 2026-01-26 16:18:56.153769056 +0000 UTC m=+0.023518096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:18:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:18:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2879861829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:56 compute-0 systemd[1]: Started libpod-conmon-a43a641e603e5f45e5cb8054b9a94471024c4a66ee2f213249e97c4c0e6a922e.scope.
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.291 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.299 239969 DEBUG nova.compute.provider_tree [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:18:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.427 239969 DEBUG nova.scheduler.client.report [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:18:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.449 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.450 239969 DEBUG nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.497 239969 DEBUG nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.498 239969 DEBUG nova.network.neutron [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.520 239969 INFO nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.540 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.541 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.586 239969 DEBUG nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:18:56 compute-0 podman[343511]: 2026-01-26 16:18:56.677350468 +0000 UTC m=+0.547099518 container init a43a641e603e5f45e5cb8054b9a94471024c4a66ee2f213249e97c4c0e6a922e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cray, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:18:56 compute-0 podman[343511]: 2026-01-26 16:18:56.690009757 +0000 UTC m=+0.559758797 container start a43a641e603e5f45e5cb8054b9a94471024c4a66ee2f213249e97c4c0e6a922e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cray, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:18:56 compute-0 cranky_cray[343529]: 167 167
Jan 26 16:18:56 compute-0 systemd[1]: libpod-a43a641e603e5f45e5cb8054b9a94471024c4a66ee2f213249e97c4c0e6a922e.scope: Deactivated successfully.
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.704 239969 DEBUG nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.706 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.706 239969 INFO nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Creating image(s)
Jan 26 16:18:56 compute-0 podman[343511]: 2026-01-26 16:18:56.714565587 +0000 UTC m=+0.584314627 container attach a43a641e603e5f45e5cb8054b9a94471024c4a66ee2f213249e97c4c0e6a922e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:18:56 compute-0 podman[343511]: 2026-01-26 16:18:56.71634426 +0000 UTC m=+0.586093300 container died a43a641e603e5f45e5cb8054b9a94471024c4a66ee2f213249e97c4c0e6a922e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cray, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.745 239969 DEBUG nova.storage.rbd_utils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 03b61c83-5aca-49d6-ad69-1609916476e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-963f725a7031d8bb98d31a4c0ea4969526b12497c1fda5c87f0f1bf63c4ae293-merged.mount: Deactivated successfully.
Jan 26 16:18:56 compute-0 podman[343511]: 2026-01-26 16:18:56.787786286 +0000 UTC m=+0.657535326 container remove a43a641e603e5f45e5cb8054b9a94471024c4a66ee2f213249e97c4c0e6a922e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.797 239969 DEBUG nova.storage.rbd_utils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 03b61c83-5aca-49d6-ad69-1609916476e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:56 compute-0 systemd[1]: libpod-conmon-a43a641e603e5f45e5cb8054b9a94471024c4a66ee2f213249e97c4c0e6a922e.scope: Deactivated successfully.
Jan 26 16:18:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2879861829' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:56 compute-0 ceph-mon[75140]: pgmap v2047: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.833 239969 DEBUG nova.storage.rbd_utils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 03b61c83-5aca-49d6-ad69-1609916476e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.838 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.880 239969 DEBUG nova.policy [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.918 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.919 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.920 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.920 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.943 239969 DEBUG nova.storage.rbd_utils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 03b61c83-5aca-49d6-ad69-1609916476e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:18:56 compute-0 nova_compute[239965]: 2026-01-26 16:18:56.947 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 03b61c83-5aca-49d6-ad69-1609916476e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:56 compute-0 podman[343629]: 2026-01-26 16:18:56.98322128 +0000 UTC m=+0.048983988 container create 0af481301c18618dae8c98f774b022583c69a5b1e2eb35a2b7e7114b0fc61fce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatterjee, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:18:57 compute-0 systemd[1]: Started libpod-conmon-0af481301c18618dae8c98f774b022583c69a5b1e2eb35a2b7e7114b0fc61fce.scope.
Jan 26 16:18:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78524f012f168452645b67c16bfe5fb11df98c449ddd196058967d5034dc2c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78524f012f168452645b67c16bfe5fb11df98c449ddd196058967d5034dc2c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78524f012f168452645b67c16bfe5fb11df98c449ddd196058967d5034dc2c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78524f012f168452645b67c16bfe5fb11df98c449ddd196058967d5034dc2c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b78524f012f168452645b67c16bfe5fb11df98c449ddd196058967d5034dc2c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:57 compute-0 podman[343629]: 2026-01-26 16:18:56.961818367 +0000 UTC m=+0.027581105 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:18:57 compute-0 podman[343629]: 2026-01-26 16:18:57.069940578 +0000 UTC m=+0.135703286 container init 0af481301c18618dae8c98f774b022583c69a5b1e2eb35a2b7e7114b0fc61fce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatterjee, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:18:57 compute-0 podman[343629]: 2026-01-26 16:18:57.077607926 +0000 UTC m=+0.143370634 container start 0af481301c18618dae8c98f774b022583c69a5b1e2eb35a2b7e7114b0fc61fce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:18:57 compute-0 podman[343629]: 2026-01-26 16:18:57.089375743 +0000 UTC m=+0.155138471 container attach 0af481301c18618dae8c98f774b022583c69a5b1e2eb35a2b7e7114b0fc61fce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:18:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:57.172 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.173 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:57.173 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:18:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:18:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3628264017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.247 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.319 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 03b61c83-5aca-49d6-ad69-1609916476e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.378 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.378 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.381 239969 DEBUG nova.storage.rbd_utils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image 03b61c83-5aca-49d6-ad69-1609916476e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.490 239969 DEBUG nova.objects.instance [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid 03b61c83-5aca-49d6-ad69-1609916476e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.523 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.524 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Ensure instance console log exists: /var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.524 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.525 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.525 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.534 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:18:57 compute-0 romantic_chatterjee[343671]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:18:57 compute-0 romantic_chatterjee[343671]: --> All data devices are unavailable
Jan 26 16:18:57 compute-0 systemd[1]: libpod-0af481301c18618dae8c98f774b022583c69a5b1e2eb35a2b7e7114b0fc61fce.scope: Deactivated successfully.
Jan 26 16:18:57 compute-0 podman[343629]: 2026-01-26 16:18:57.602189171 +0000 UTC m=+0.667951899 container died 0af481301c18618dae8c98f774b022583c69a5b1e2eb35a2b7e7114b0fc61fce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatterjee, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-b78524f012f168452645b67c16bfe5fb11df98c449ddd196058967d5034dc2c2-merged.mount: Deactivated successfully.
Jan 26 16:18:57 compute-0 podman[343629]: 2026-01-26 16:18:57.654431478 +0000 UTC m=+0.720194186 container remove 0af481301c18618dae8c98f774b022583c69a5b1e2eb35a2b7e7114b0fc61fce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_chatterjee, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:18:57 compute-0 systemd[1]: libpod-conmon-0af481301c18618dae8c98f774b022583c69a5b1e2eb35a2b7e7114b0fc61fce.scope: Deactivated successfully.
Jan 26 16:18:57 compute-0 podman[343778]: 2026-01-26 16:18:57.695150493 +0000 UTC m=+0.059291949 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.703 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.704 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3404MB free_disk=59.941950710490346GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.704 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:57 compute-0 sudo[343458]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.705 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:57 compute-0 podman[343785]: 2026-01-26 16:18:57.727150935 +0000 UTC m=+0.086196008 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:18:57 compute-0 sudo[343831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:18:57 compute-0 sudo[343831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:18:57 compute-0 sudo[343831]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.783 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 12ac525a-976f-4afb-b4db-6035caada7e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.784 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 03b61c83-5aca-49d6-ad69-1609916476e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.784 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.784 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:18:57 compute-0 sudo[343859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:18:57 compute-0 sudo[343859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:18:57 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3628264017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:57 compute-0 nova_compute[239965]: 2026-01-26 16:18:57.856 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:18:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:18:58 compute-0 nova_compute[239965]: 2026-01-26 16:18:58.086 239969 DEBUG nova.network.neutron [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Successfully created port: 6784938e-ce4b-4961-a42a-2d07d12fb8f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:18:58 compute-0 podman[343916]: 2026-01-26 16:18:58.137024387 +0000 UTC m=+0.062042236 container create 69a756d94219fd2a5212412be7fcbc75669eeeaa948ffb727b0669b34ebb6a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_feynman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:18:58 compute-0 systemd[1]: Started libpod-conmon-69a756d94219fd2a5212412be7fcbc75669eeeaa948ffb727b0669b34ebb6a54.scope.
Jan 26 16:18:58 compute-0 podman[343916]: 2026-01-26 16:18:58.114152569 +0000 UTC m=+0.039170468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:18:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:18:58 compute-0 podman[343916]: 2026-01-26 16:18:58.217484153 +0000 UTC m=+0.142502042 container init 69a756d94219fd2a5212412be7fcbc75669eeeaa948ffb727b0669b34ebb6a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_feynman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 16:18:58 compute-0 podman[343916]: 2026-01-26 16:18:58.228643056 +0000 UTC m=+0.153660905 container start 69a756d94219fd2a5212412be7fcbc75669eeeaa948ffb727b0669b34ebb6a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_feynman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:18:58 compute-0 podman[343916]: 2026-01-26 16:18:58.232218633 +0000 UTC m=+0.157236482 container attach 69a756d94219fd2a5212412be7fcbc75669eeeaa948ffb727b0669b34ebb6a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_feynman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:18:58 compute-0 admiring_feynman[343932]: 167 167
Jan 26 16:18:58 compute-0 systemd[1]: libpod-69a756d94219fd2a5212412be7fcbc75669eeeaa948ffb727b0669b34ebb6a54.scope: Deactivated successfully.
Jan 26 16:18:58 compute-0 podman[343916]: 2026-01-26 16:18:58.234680073 +0000 UTC m=+0.159697922 container died 69a756d94219fd2a5212412be7fcbc75669eeeaa948ffb727b0669b34ebb6a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-fec40e9fd57e363c44fd3f0e527e66ac815fd122a2ccb63a852e9b43e4a96bac-merged.mount: Deactivated successfully.
Jan 26 16:18:58 compute-0 podman[343916]: 2026-01-26 16:18:58.270326255 +0000 UTC m=+0.195344104 container remove 69a756d94219fd2a5212412be7fcbc75669eeeaa948ffb727b0669b34ebb6a54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_feynman, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:18:58 compute-0 systemd[1]: libpod-conmon-69a756d94219fd2a5212412be7fcbc75669eeeaa948ffb727b0669b34ebb6a54.scope: Deactivated successfully.
Jan 26 16:18:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:18:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3586550538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:58 compute-0 nova_compute[239965]: 2026-01-26 16:18:58.430 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:18:58 compute-0 nova_compute[239965]: 2026-01-26 16:18:58.437 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:18:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 194 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 3.1 MiB/s wr, 62 op/s
Jan 26 16:18:58 compute-0 nova_compute[239965]: 2026-01-26 16:18:58.454 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:18:58 compute-0 podman[343956]: 2026-01-26 16:18:58.457381404 +0000 UTC m=+0.039570068 container create 5833591240d3b5b1961a66451ed345c2c4c4b08fdec75e783d416208d4b43dab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:18:58 compute-0 nova_compute[239965]: 2026-01-26 16:18:58.481 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:18:58 compute-0 nova_compute[239965]: 2026-01-26 16:18:58.481 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:58 compute-0 systemd[1]: Started libpod-conmon-5833591240d3b5b1961a66451ed345c2c4c4b08fdec75e783d416208d4b43dab.scope.
Jan 26 16:18:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67db2e694153314a140718a25cba1b4af247c9037d554defac4cc2b7cef22713/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67db2e694153314a140718a25cba1b4af247c9037d554defac4cc2b7cef22713/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67db2e694153314a140718a25cba1b4af247c9037d554defac4cc2b7cef22713/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67db2e694153314a140718a25cba1b4af247c9037d554defac4cc2b7cef22713/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:58 compute-0 podman[343956]: 2026-01-26 16:18:58.440576433 +0000 UTC m=+0.022765117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:18:58 compute-0 podman[343956]: 2026-01-26 16:18:58.539570582 +0000 UTC m=+0.121759276 container init 5833591240d3b5b1961a66451ed345c2c4c4b08fdec75e783d416208d4b43dab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rosalind, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 16:18:58 compute-0 podman[343956]: 2026-01-26 16:18:58.547528107 +0000 UTC m=+0.129716771 container start 5833591240d3b5b1961a66451ed345c2c4c4b08fdec75e783d416208d4b43dab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 16:18:58 compute-0 podman[343956]: 2026-01-26 16:18:58.551832612 +0000 UTC m=+0.134021306 container attach 5833591240d3b5b1961a66451ed345c2c4c4b08fdec75e783d416208d4b43dab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rosalind, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:18:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3586550538' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:18:58 compute-0 ceph-mon[75140]: pgmap v2048: 305 pgs: 305 active+clean; 194 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 3.1 MiB/s wr, 62 op/s
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]: {
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:     "0": [
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:         {
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "devices": [
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "/dev/loop3"
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             ],
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_name": "ceph_lv0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_size": "21470642176",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "name": "ceph_lv0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "tags": {
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.cluster_name": "ceph",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.crush_device_class": "",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.encrypted": "0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.objectstore": "bluestore",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.osd_id": "0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.type": "block",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.vdo": "0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.with_tpm": "0"
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             },
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "type": "block",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "vg_name": "ceph_vg0"
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:         }
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:     ],
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:     "1": [
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:         {
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "devices": [
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "/dev/loop4"
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             ],
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_name": "ceph_lv1",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_size": "21470642176",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "name": "ceph_lv1",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "tags": {
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.cluster_name": "ceph",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.crush_device_class": "",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.encrypted": "0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.objectstore": "bluestore",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.osd_id": "1",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.type": "block",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.vdo": "0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.with_tpm": "0"
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             },
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "type": "block",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "vg_name": "ceph_vg1"
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:         }
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:     ],
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:     "2": [
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:         {
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "devices": [
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "/dev/loop5"
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             ],
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_name": "ceph_lv2",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_size": "21470642176",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "name": "ceph_lv2",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "tags": {
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.cluster_name": "ceph",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.crush_device_class": "",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.encrypted": "0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.objectstore": "bluestore",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.osd_id": "2",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.type": "block",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.vdo": "0",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:                 "ceph.with_tpm": "0"
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             },
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "type": "block",
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:             "vg_name": "ceph_vg2"
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:         }
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]:     ]
Jan 26 16:18:58 compute-0 suspicious_rosalind[343974]: }
Jan 26 16:18:58 compute-0 systemd[1]: libpod-5833591240d3b5b1961a66451ed345c2c4c4b08fdec75e783d416208d4b43dab.scope: Deactivated successfully.
Jan 26 16:18:58 compute-0 podman[343956]: 2026-01-26 16:18:58.894642837 +0000 UTC m=+0.476831571 container died 5833591240d3b5b1961a66451ed345c2c4c4b08fdec75e783d416208d4b43dab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rosalind, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 16:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-67db2e694153314a140718a25cba1b4af247c9037d554defac4cc2b7cef22713-merged.mount: Deactivated successfully.
Jan 26 16:18:58 compute-0 podman[343956]: 2026-01-26 16:18:58.950155583 +0000 UTC m=+0.532344237 container remove 5833591240d3b5b1961a66451ed345c2c4c4b08fdec75e783d416208d4b43dab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 16:18:58 compute-0 systemd[1]: libpod-conmon-5833591240d3b5b1961a66451ed345c2c4c4b08fdec75e783d416208d4b43dab.scope: Deactivated successfully.
Jan 26 16:18:58 compute-0 sudo[343859]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:59 compute-0 sudo[343997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:18:59 compute-0 sudo[343997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:18:59 compute-0 sudo[343997]: pam_unix(sudo:session): session closed for user root
Jan 26 16:18:59 compute-0 sudo[344022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:18:59 compute-0 sudo[344022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:18:59 compute-0 nova_compute[239965]: 2026-01-26 16:18:59.190 239969 DEBUG nova.network.neutron [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Successfully updated port: 6784938e-ce4b-4961-a42a-2d07d12fb8f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:18:59 compute-0 nova_compute[239965]: 2026-01-26 16:18:59.206 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:18:59 compute-0 nova_compute[239965]: 2026-01-26 16:18:59.206 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:18:59 compute-0 nova_compute[239965]: 2026-01-26 16:18:59.206 239969 DEBUG nova.network.neutron [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:18:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:59.241 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:18:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:59.242 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:18:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:18:59.243 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:18:59 compute-0 nova_compute[239965]: 2026-01-26 16:18:59.306 239969 DEBUG nova.compute.manager [req-a8e30723-7275-4b1d-b08d-9efce786dc3e req-b242f263-29bb-4d8b-a9ea-f61fc18b7151 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-changed-6784938e-ce4b-4961-a42a-2d07d12fb8f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:18:59 compute-0 nova_compute[239965]: 2026-01-26 16:18:59.306 239969 DEBUG nova.compute.manager [req-a8e30723-7275-4b1d-b08d-9efce786dc3e req-b242f263-29bb-4d8b-a9ea-f61fc18b7151 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Refreshing instance network info cache due to event network-changed-6784938e-ce4b-4961-a42a-2d07d12fb8f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:18:59 compute-0 nova_compute[239965]: 2026-01-26 16:18:59.307 239969 DEBUG oslo_concurrency.lockutils [req-a8e30723-7275-4b1d-b08d-9efce786dc3e req-b242f263-29bb-4d8b-a9ea-f61fc18b7151 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:18:59 compute-0 podman[344059]: 2026-01-26 16:18:59.381820138 +0000 UTC m=+0.037734802 container create f9e8c4b7947ff885f34cc19e9ab46f94137854db726b52cfa7fa5dfcd7052349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:18:59 compute-0 systemd[1]: Started libpod-conmon-f9e8c4b7947ff885f34cc19e9ab46f94137854db726b52cfa7fa5dfcd7052349.scope.
Jan 26 16:18:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:18:59 compute-0 podman[344059]: 2026-01-26 16:18:59.455672192 +0000 UTC m=+0.111586886 container init f9e8c4b7947ff885f34cc19e9ab46f94137854db726b52cfa7fa5dfcd7052349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_gould, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 16:18:59 compute-0 podman[344059]: 2026-01-26 16:18:59.460965812 +0000 UTC m=+0.116880486 container start f9e8c4b7947ff885f34cc19e9ab46f94137854db726b52cfa7fa5dfcd7052349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:18:59 compute-0 podman[344059]: 2026-01-26 16:18:59.3667437 +0000 UTC m=+0.022658404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:18:59 compute-0 podman[344059]: 2026-01-26 16:18:59.463848862 +0000 UTC m=+0.119763576 container attach f9e8c4b7947ff885f34cc19e9ab46f94137854db726b52cfa7fa5dfcd7052349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_gould, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:18:59 compute-0 gifted_gould[344075]: 167 167
Jan 26 16:18:59 compute-0 systemd[1]: libpod-f9e8c4b7947ff885f34cc19e9ab46f94137854db726b52cfa7fa5dfcd7052349.scope: Deactivated successfully.
Jan 26 16:18:59 compute-0 podman[344059]: 2026-01-26 16:18:59.466150039 +0000 UTC m=+0.122064713 container died f9e8c4b7947ff885f34cc19e9ab46f94137854db726b52cfa7fa5dfcd7052349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_gould, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:18:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fdc1bfda5d24324cb593cd8d8885e47370c3b17c133d981fec40d46626d6299-merged.mount: Deactivated successfully.
Jan 26 16:18:59 compute-0 nova_compute[239965]: 2026-01-26 16:18:59.510 239969 DEBUG nova.network.neutron [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:18:59 compute-0 podman[344059]: 2026-01-26 16:18:59.523175881 +0000 UTC m=+0.179090555 container remove f9e8c4b7947ff885f34cc19e9ab46f94137854db726b52cfa7fa5dfcd7052349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:18:59 compute-0 systemd[1]: libpod-conmon-f9e8c4b7947ff885f34cc19e9ab46f94137854db726b52cfa7fa5dfcd7052349.scope: Deactivated successfully.
Jan 26 16:18:59 compute-0 podman[344097]: 2026-01-26 16:18:59.713541142 +0000 UTC m=+0.038779178 container create 9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:18:59 compute-0 systemd[1]: Started libpod-conmon-9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e.scope.
Jan 26 16:18:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:18:59 compute-0 podman[344097]: 2026-01-26 16:18:59.69705626 +0000 UTC m=+0.022294296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5500ee5e0ec515466eadd00cb10e1c16eb56abcd71f614480b30443a5439f219/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5500ee5e0ec515466eadd00cb10e1c16eb56abcd71f614480b30443a5439f219/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5500ee5e0ec515466eadd00cb10e1c16eb56abcd71f614480b30443a5439f219/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5500ee5e0ec515466eadd00cb10e1c16eb56abcd71f614480b30443a5439f219/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:18:59 compute-0 podman[344097]: 2026-01-26 16:18:59.807139269 +0000 UTC m=+0.132377325 container init 9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:18:59 compute-0 podman[344097]: 2026-01-26 16:18:59.820062984 +0000 UTC m=+0.145301010 container start 9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:18:59 compute-0 podman[344097]: 2026-01-26 16:18:59.824407681 +0000 UTC m=+0.149645707 container attach 9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 16:19:00 compute-0 nova_compute[239965]: 2026-01-26 16:19:00.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:19:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 194 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 3.1 MiB/s wr, 62 op/s
Jan 26 16:19:00 compute-0 lvm[344193]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:19:00 compute-0 lvm[344192]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:19:00 compute-0 lvm[344192]: VG ceph_vg0 finished
Jan 26 16:19:00 compute-0 lvm[344193]: VG ceph_vg1 finished
Jan 26 16:19:00 compute-0 ceph-mon[75140]: pgmap v2049: 305 pgs: 305 active+clean; 194 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 262 KiB/s rd, 3.1 MiB/s wr, 62 op/s
Jan 26 16:19:00 compute-0 lvm[344195]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:19:00 compute-0 lvm[344195]: VG ceph_vg2 finished
Jan 26 16:19:00 compute-0 unruffled_kapitsa[344114]: {}
Jan 26 16:19:00 compute-0 systemd[1]: libpod-9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e.scope: Deactivated successfully.
Jan 26 16:19:00 compute-0 systemd[1]: libpod-9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e.scope: Consumed 1.373s CPU time.
Jan 26 16:19:00 compute-0 podman[344198]: 2026-01-26 16:19:00.7428728 +0000 UTC m=+0.032278350 container died 9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5500ee5e0ec515466eadd00cb10e1c16eb56abcd71f614480b30443a5439f219-merged.mount: Deactivated successfully.
Jan 26 16:19:00 compute-0 podman[344198]: 2026-01-26 16:19:00.790490073 +0000 UTC m=+0.079895603 container remove 9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_kapitsa, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:19:00 compute-0 systemd[1]: libpod-conmon-9cdf7b1fb9813014e036317e200ac0f308bc9e3d83f5b3790de1ec15a1ceaa0e.scope: Deactivated successfully.
Jan 26 16:19:00 compute-0 sudo[344022]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:19:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:19:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:19:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:19:00 compute-0 sudo[344213]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:19:00 compute-0 sudo[344213]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:19:00 compute-0 sudo[344213]: pam_unix(sudo:session): session closed for user root
Jan 26 16:19:00 compute-0 nova_compute[239965]: 2026-01-26 16:19:00.963 239969 DEBUG nova.network.neutron [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updating instance_info_cache with network_info: [{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.002 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.003 239969 DEBUG nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Instance network_info: |[{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.003 239969 DEBUG oslo_concurrency.lockutils [req-a8e30723-7275-4b1d-b08d-9efce786dc3e req-b242f263-29bb-4d8b-a9ea-f61fc18b7151 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.004 239969 DEBUG nova.network.neutron [req-a8e30723-7275-4b1d-b08d-9efce786dc3e req-b242f263-29bb-4d8b-a9ea-f61fc18b7151 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Refreshing network info cache for port 6784938e-ce4b-4961-a42a-2d07d12fb8f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.009 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Start _get_guest_xml network_info=[{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.014 239969 WARNING nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.027 239969 DEBUG nova.virt.libvirt.host [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.028 239969 DEBUG nova.virt.libvirt.host [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.033 239969 DEBUG nova.virt.libvirt.host [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.033 239969 DEBUG nova.virt.libvirt.host [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.034 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.034 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.035 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.035 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.036 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.036 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.036 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.037 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.037 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.037 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.038 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.038 239969 DEBUG nova.virt.hardware [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.043 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.326 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "aa258935-ebf3-42bd-bca7-5b58621978f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.326 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.341 239969 DEBUG nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.427 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.427 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.432 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.433 239969 INFO nova.compute.claims [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.597 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:19:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/349660450' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.664 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.621s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.690 239969 DEBUG nova.storage.rbd_utils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 03b61c83-5aca-49d6-ad69-1609916476e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:01 compute-0 nova_compute[239965]: 2026-01-26 16:19:01.696 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:01 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:19:01 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:19:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/349660450' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:19:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:19:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/338801597' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.161 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.167 239969 DEBUG nova.compute.provider_tree [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.217 239969 DEBUG nova.scheduler.client.report [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:19:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:19:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1902536388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.248 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.249 239969 DEBUG nova.virt.libvirt.vif [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1727772541',display_name='tempest-TestNetworkBasicOps-server-1727772541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1727772541',id=118,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnAPD09GHRy5Ale/TKHIPd1lPu3k3PivMb654OK+PXj1wkjjKHmv1gJsjde79Ri7/bf6NqgeAkf0SfsjKBLuBdatenOFgP44bWnHvJ+vx50tXAV+0yesCLD4pyPDCOTKw==',key_name='tempest-TestNetworkBasicOps-908318729',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-hmsse4gk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:18:56Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=03b61c83-5aca-49d6-ad69-1609916476e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.250 239969 DEBUG nova.network.os_vif_util [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.250 239969 DEBUG nova.network.os_vif_util [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:f4:39,bridge_name='br-int',has_traffic_filtering=True,id=6784938e-ce4b-4961-a42a-2d07d12fb8f1,network=Network(cfc8a5e3-1ced-4191-b7c4-24f862b986e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6784938e-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.252 239969 DEBUG nova.objects.instance [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 03b61c83-5aca-49d6-ad69-1609916476e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.286 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <uuid>03b61c83-5aca-49d6-ad69-1609916476e2</uuid>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <name>instance-00000076</name>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-1727772541</nova:name>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:19:01</nova:creationTime>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <nova:port uuid="6784938e-ce4b-4961-a42a-2d07d12fb8f1">
Jan 26 16:19:02 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <system>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <entry name="serial">03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <entry name="uuid">03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     </system>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <os>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   </os>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <features>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   </features>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk">
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk.config">
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:19:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:16:f4:39"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <target dev="tap6784938e-ce"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log" append="off"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <video>
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     </video>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:19:02 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:19:02 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:19:02 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:19:02 compute-0 nova_compute[239965]: </domain>
Jan 26 16:19:02 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.286 239969 DEBUG nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Preparing to wait for external event network-vif-plugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.287 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.287 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.287 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.288 239969 DEBUG nova.virt.libvirt.vif [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1727772541',display_name='tempest-TestNetworkBasicOps-server-1727772541',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1727772541',id=118,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnAPD09GHRy5Ale/TKHIPd1lPu3k3PivMb654OK+PXj1wkjjKHmv1gJsjde79Ri7/bf6NqgeAkf0SfsjKBLuBdatenOFgP44bWnHvJ+vx50tXAV+0yesCLD4pyPDCOTKw==',key_name='tempest-TestNetworkBasicOps-908318729',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-hmsse4gk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:18:56Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=03b61c83-5aca-49d6-ad69-1609916476e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.288 239969 DEBUG nova.network.os_vif_util [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.288 239969 DEBUG nova.network.os_vif_util [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:f4:39,bridge_name='br-int',has_traffic_filtering=True,id=6784938e-ce4b-4961-a42a-2d07d12fb8f1,network=Network(cfc8a5e3-1ced-4191-b7c4-24f862b986e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6784938e-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.289 239969 DEBUG os_vif [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:f4:39,bridge_name='br-int',has_traffic_filtering=True,id=6784938e-ce4b-4961-a42a-2d07d12fb8f1,network=Network(cfc8a5e3-1ced-4191-b7c4-24f862b986e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6784938e-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.289 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.290 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.290 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.292 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.293 239969 DEBUG nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.296 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.297 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6784938e-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.297 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6784938e-ce, col_values=(('external_ids', {'iface-id': '6784938e-ce4b-4961-a42a-2d07d12fb8f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:f4:39', 'vm-uuid': '03b61c83-5aca-49d6-ad69-1609916476e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:02 compute-0 NetworkManager[48954]: <info>  [1769444342.2998] manager: (tap6784938e-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.300 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.307 239969 INFO os_vif [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:f4:39,bridge_name='br-int',has_traffic_filtering=True,id=6784938e-ce4b-4961-a42a-2d07d12fb8f1,network=Network(cfc8a5e3-1ced-4191-b7c4-24f862b986e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6784938e-ce')
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.381 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.381 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.381 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:16:f4:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.382 239969 INFO nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Using config drive
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.401 239969 DEBUG nova.storage.rbd_utils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 03b61c83-5aca-49d6-ad69-1609916476e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.426 239969 DEBUG nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.426 239969 DEBUG nova.network.neutron [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.445 239969 INFO nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:19:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.465 239969 DEBUG nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.536 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.689 239969 DEBUG nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.690 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.691 239969 INFO nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Creating image(s)
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.711 239969 DEBUG nova.storage.rbd_utils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image aa258935-ebf3-42bd-bca7-5b58621978f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.731 239969 DEBUG nova.storage.rbd_utils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image aa258935-ebf3-42bd-bca7-5b58621978f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.751 239969 DEBUG nova.storage.rbd_utils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image aa258935-ebf3-42bd-bca7-5b58621978f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.754 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.790 239969 DEBUG nova.policy [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.827 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.829 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.830 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.831 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.860 239969 DEBUG nova.storage.rbd_utils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image aa258935-ebf3-42bd-bca7-5b58621978f0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.865 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 aa258935-ebf3-42bd-bca7-5b58621978f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/338801597' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:19:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1902536388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:19:02 compute-0 ceph-mon[75140]: pgmap v2050: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 279 KiB/s rd, 3.9 MiB/s wr, 87 op/s
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.906 239969 INFO nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Creating config drive at /var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/disk.config
Jan 26 16:19:02 compute-0 nova_compute[239965]: 2026-01-26 16:19:02.912 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw5ot0ssh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.018 239969 DEBUG nova.network.neutron [req-a8e30723-7275-4b1d-b08d-9efce786dc3e req-b242f263-29bb-4d8b-a9ea-f61fc18b7151 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updated VIF entry in instance network info cache for port 6784938e-ce4b-4961-a42a-2d07d12fb8f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.019 239969 DEBUG nova.network.neutron [req-a8e30723-7275-4b1d-b08d-9efce786dc3e req-b242f263-29bb-4d8b-a9ea-f61fc18b7151 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updating instance_info_cache with network_info: [{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.067 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw5ot0ssh" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.099 239969 DEBUG nova.storage.rbd_utils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 03b61c83-5aca-49d6-ad69-1609916476e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.104 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/disk.config 03b61c83-5aca-49d6-ad69-1609916476e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.147 239969 DEBUG oslo_concurrency.lockutils [req-a8e30723-7275-4b1d-b08d-9efce786dc3e req-b242f263-29bb-4d8b-a9ea-f61fc18b7151 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.164 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 aa258935-ebf3-42bd-bca7-5b58621978f0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.228 239969 DEBUG nova.storage.rbd_utils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image aa258935-ebf3-42bd-bca7-5b58621978f0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.262 239969 DEBUG oslo_concurrency.processutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/disk.config 03b61c83-5aca-49d6-ad69-1609916476e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.263 239969 INFO nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Deleting local config drive /var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/disk.config because it was imported into RBD.
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.323 239969 DEBUG nova.objects.instance [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid aa258935-ebf3-42bd-bca7-5b58621978f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:03 compute-0 NetworkManager[48954]: <info>  [1769444343.3391] manager: (tap6784938e-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/509)
Jan 26 16:19:03 compute-0 kernel: tap6784938e-ce: entered promiscuous mode
Jan 26 16:19:03 compute-0 systemd-udevd[344190]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:19:03 compute-0 ovn_controller[146046]: 2026-01-26T16:19:03Z|01232|binding|INFO|Claiming lport 6784938e-ce4b-4961-a42a-2d07d12fb8f1 for this chassis.
Jan 26 16:19:03 compute-0 ovn_controller[146046]: 2026-01-26T16:19:03Z|01233|binding|INFO|6784938e-ce4b-4961-a42a-2d07d12fb8f1: Claiming fa:16:3e:16:f4:39 10.100.0.7
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.357 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:03 compute-0 NetworkManager[48954]: <info>  [1769444343.3686] device (tap6784938e-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:19:03 compute-0 NetworkManager[48954]: <info>  [1769444343.3693] device (tap6784938e-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.373 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:03 compute-0 ovn_controller[146046]: 2026-01-26T16:19:03Z|01234|binding|INFO|Setting lport 6784938e-ce4b-4961-a42a-2d07d12fb8f1 ovn-installed in OVS
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:03 compute-0 ovn_controller[146046]: 2026-01-26T16:19:03Z|01235|binding|INFO|Setting lport 6784938e-ce4b-4961-a42a-2d07d12fb8f1 up in Southbound
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.386 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:f4:39 10.100.0.7'], port_security=['fa:16:3e:16:f4:39 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '03b61c83-5aca-49d6-ad69-1609916476e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc8a5e3-1ced-4191-b7c4-24f862b986e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bd05763d-4eca-4d9a-9a40-f33a88922164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2dd7df8-a4e3-43d1-9b0b-ece81ff8039c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=6784938e-ce4b-4961-a42a-2d07d12fb8f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.387 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 6784938e-ce4b-4961-a42a-2d07d12fb8f1 in datapath cfc8a5e3-1ced-4191-b7c4-24f862b986e1 bound to our chassis
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.388 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfc8a5e3-1ced-4191-b7c4-24f862b986e1
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.396 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.397 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Ensure instance console log exists: /var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.397 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.397 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:03 compute-0 systemd-machined[208061]: New machine qemu-145-instance-00000076.
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.398 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.399 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[297f8575-e5a8-43ea-a001-f5b78f4e899d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.400 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcfc8a5e3-11 in ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.401 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcfc8a5e3-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.402 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4350dd28-2d5f-4166-862e-19479afd4f11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.402 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[75b8a7c0-84f9-49ab-bb8b-c8e5f8668481]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.414 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[25f3d1a3-ef42-457a-bfb8-d51398dc2d8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 systemd[1]: Started Virtual Machine qemu-145-instance-00000076.
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.429 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f0861ff5-f23f-4008-8ac7-8952c9d1d98a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.461 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[551e43d8-fd6e-477d-ab54-78369154ff7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 NetworkManager[48954]: <info>  [1769444343.4677] manager: (tapcfc8a5e3-10): new Veth device (/org/freedesktop/NetworkManager/Devices/510)
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.467 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4d7c56d6-9ecb-46a3-9201-e45a0a98340a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.502 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a39fc3dc-0ff2-40bb-8e27-399ef3ec0e26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.505 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4c102895-6043-4759-845a-27fa7067a2b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 NetworkManager[48954]: <info>  [1769444343.5335] device (tapcfc8a5e3-10): carrier: link connected
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.535 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3bddd007-7794-488b-8ac5-4f4d399f5c26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.553 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9eae20fb-0f6a-4976-b01c-baa1e0e40b64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc8a5e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:d1:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583433, 'reachable_time': 24420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344603, 'error': None, 'target': 'ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.568 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8430598f-cb43-49b4-9fd3-d3e44cd5eca6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe10:d17f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 583433, 'tstamp': 583433}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344604, 'error': None, 'target': 'ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.588 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d361a62e-dcc2-4503-b709-5ca261f86573]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfc8a5e3-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:10:d1:7f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 358], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583433, 'reachable_time': 24420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344605, 'error': None, 'target': 'ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.614 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[15454e87-e7d5-4163-8129-17a616202ae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.670 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ed90f3a9-d5e8-4a05-8fbe-8dd16fafc8dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.671 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc8a5e3-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.671 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.672 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfc8a5e3-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:03 compute-0 NetworkManager[48954]: <info>  [1769444343.6743] manager: (tapcfc8a5e3-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Jan 26 16:19:03 compute-0 kernel: tapcfc8a5e3-10: entered promiscuous mode
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.673 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.675 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.678 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfc8a5e3-10, col_values=(('external_ids', {'iface-id': '9829d4f8-542b-4928-b494-a0e6aa624937'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.679 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:03 compute-0 ovn_controller[146046]: 2026-01-26T16:19:03Z|01236|binding|INFO|Releasing lport 9829d4f8-542b-4928-b494-a0e6aa624937 from this chassis (sb_readonly=0)
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.680 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.682 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cfc8a5e3-1ced-4191-b7c4-24f862b986e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cfc8a5e3-1ced-4191-b7c4-24f862b986e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.683 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9da335d9-d052-4f4f-81b2-05ec1e67dd16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.683 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-cfc8a5e3-1ced-4191-b7c4-24f862b986e1
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/cfc8a5e3-1ced-4191-b7c4-24f862b986e1.pid.haproxy
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID cfc8a5e3-1ced-4191-b7c4-24f862b986e1
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:19:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:03.684 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1', 'env', 'PROCESS_TAG=haproxy-cfc8a5e3-1ced-4191-b7c4-24f862b986e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cfc8a5e3-1ced-4191-b7c4-24f862b986e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.693 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.698 239969 DEBUG nova.compute.manager [req-20bc7cb1-b9f5-4bd3-a409-5e31a775e569 req-9a00a2a1-64f4-4606-a1a5-f2b3e1c54f57 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-plugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.699 239969 DEBUG oslo_concurrency.lockutils [req-20bc7cb1-b9f5-4bd3-a409-5e31a775e569 req-9a00a2a1-64f4-4606-a1a5-f2b3e1c54f57 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.699 239969 DEBUG oslo_concurrency.lockutils [req-20bc7cb1-b9f5-4bd3-a409-5e31a775e569 req-9a00a2a1-64f4-4606-a1a5-f2b3e1c54f57 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.699 239969 DEBUG oslo_concurrency.lockutils [req-20bc7cb1-b9f5-4bd3-a409-5e31a775e569 req-9a00a2a1-64f4-4606-a1a5-f2b3e1c54f57 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:03 compute-0 nova_compute[239965]: 2026-01-26 16:19:03.699 239969 DEBUG nova.compute.manager [req-20bc7cb1-b9f5-4bd3-a409-5e31a775e569 req-9a00a2a1-64f4-4606-a1a5-f2b3e1c54f57 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Processing event network-vif-plugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.036 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444344.0348094, 03b61c83-5aca-49d6-ad69-1609916476e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.036 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] VM Started (Lifecycle Event)
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.038 239969 DEBUG nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.041 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.045 239969 INFO nova.virt.libvirt.driver [-] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Instance spawned successfully.
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.045 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:19:04 compute-0 podman[344676]: 2026-01-26 16:19:04.052472054 +0000 UTC m=+0.052256818 container create a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.065 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.072 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.076 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.077 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.077 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.078 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.079 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.079 239969 DEBUG nova.virt.libvirt.driver [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:04 compute-0 systemd[1]: Started libpod-conmon-a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb.scope.
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.111 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.112 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444344.0382855, 03b61c83-5aca-49d6-ad69-1609916476e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.112 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] VM Paused (Lifecycle Event)
Jan 26 16:19:04 compute-0 podman[344676]: 2026-01-26 16:19:04.022057361 +0000 UTC m=+0.021842155 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:19:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:19:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2a8376509d2460c1324bf7c21d6911d4b51eb3fa124c882bc484775fd5e9e38/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.140 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.143 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444344.0409195, 03b61c83-5aca-49d6-ad69-1609916476e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.144 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] VM Resumed (Lifecycle Event)
Jan 26 16:19:04 compute-0 podman[344676]: 2026-01-26 16:19:04.145679991 +0000 UTC m=+0.145464785 container init a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:19:04 compute-0 podman[344676]: 2026-01-26 16:19:04.15094431 +0000 UTC m=+0.150729074 container start a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.171 239969 INFO nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Took 7.47 seconds to spawn the instance on the hypervisor.
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.172 239969 DEBUG nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.173 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:19:04 compute-0 neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1[344692]: [NOTICE]   (344696) : New worker (344698) forked
Jan 26 16:19:04 compute-0 neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1[344692]: [NOTICE]   (344696) : Loading success.
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.182 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.230 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.265 239969 INFO nova.compute.manager [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Took 8.71 seconds to build instance.
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.289 239969 DEBUG oslo_concurrency.lockutils [None req-8e7235dd-2d9e-48d8-8ef6-ad7e41b5cb76 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:04 compute-0 nova_compute[239965]: 2026-01-26 16:19:04.302 239969 DEBUG nova.network.neutron [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Successfully created port: a64a836b-74d0-41fe-acc2-6df7f843675a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:19:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:19:04 compute-0 ceph-mon[75140]: pgmap v2051: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:19:05 compute-0 nova_compute[239965]: 2026-01-26 16:19:05.830 239969 DEBUG nova.compute.manager [req-ba7da265-f852-4116-b6f1-22a5c1adb04a req-aaa62927-eb15-4af5-ab62-35db4a4e28f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-plugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:05 compute-0 nova_compute[239965]: 2026-01-26 16:19:05.831 239969 DEBUG oslo_concurrency.lockutils [req-ba7da265-f852-4116-b6f1-22a5c1adb04a req-aaa62927-eb15-4af5-ab62-35db4a4e28f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:05 compute-0 nova_compute[239965]: 2026-01-26 16:19:05.831 239969 DEBUG oslo_concurrency.lockutils [req-ba7da265-f852-4116-b6f1-22a5c1adb04a req-aaa62927-eb15-4af5-ab62-35db4a4e28f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:05 compute-0 nova_compute[239965]: 2026-01-26 16:19:05.831 239969 DEBUG oslo_concurrency.lockutils [req-ba7da265-f852-4116-b6f1-22a5c1adb04a req-aaa62927-eb15-4af5-ab62-35db4a4e28f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:05 compute-0 nova_compute[239965]: 2026-01-26 16:19:05.831 239969 DEBUG nova.compute.manager [req-ba7da265-f852-4116-b6f1-22a5c1adb04a req-aaa62927-eb15-4af5-ab62-35db4a4e28f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] No waiting events found dispatching network-vif-plugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:05 compute-0 nova_compute[239965]: 2026-01-26 16:19:05.832 239969 WARNING nova.compute.manager [req-ba7da265-f852-4116-b6f1-22a5c1adb04a req-aaa62927-eb15-4af5-ab62-35db4a4e28f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received unexpected event network-vif-plugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 for instance with vm_state active and task_state None.
Jan 26 16:19:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 218 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 26 16:19:06 compute-0 ceph-mon[75140]: pgmap v2052: 305 pgs: 305 active+clean; 218 MiB data, 893 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 2.1 MiB/s wr, 44 op/s
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.551210) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444346551272, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 837, "num_deletes": 251, "total_data_size": 1096323, "memory_usage": 1112272, "flush_reason": "Manual Compaction"}
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444346558194, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 1080209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41939, "largest_seqno": 42775, "table_properties": {"data_size": 1076031, "index_size": 1894, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9443, "raw_average_key_size": 19, "raw_value_size": 1067608, "raw_average_value_size": 2214, "num_data_blocks": 84, "num_entries": 482, "num_filter_entries": 482, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444278, "oldest_key_time": 1769444278, "file_creation_time": 1769444346, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 7019 microseconds, and 3372 cpu microseconds.
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.558240) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 1080209 bytes OK
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.558258) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.559718) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.559731) EVENT_LOG_v1 {"time_micros": 1769444346559727, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.559748) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1092181, prev total WAL file size 1092181, number of live WAL files 2.
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.560247) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(1054KB)], [92(11MB)]
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444346560293, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12800017, "oldest_snapshot_seqno": -1}
Jan 26 16:19:06 compute-0 nova_compute[239965]: 2026-01-26 16:19:06.595 239969 DEBUG nova.network.neutron [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Successfully updated port: a64a836b-74d0-41fe-acc2-6df7f843675a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:19:06 compute-0 nova_compute[239965]: 2026-01-26 16:19:06.616 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:06 compute-0 nova_compute[239965]: 2026-01-26 16:19:06.617 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:06 compute-0 nova_compute[239965]: 2026-01-26 16:19:06.617 239969 DEBUG nova.network.neutron [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6840 keys, 11068790 bytes, temperature: kUnknown
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444346652067, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 11068790, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11020231, "index_size": 30335, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 175372, "raw_average_key_size": 25, "raw_value_size": 10895140, "raw_average_value_size": 1592, "num_data_blocks": 1196, "num_entries": 6840, "num_filter_entries": 6840, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444346, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.652492) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 11068790 bytes
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.654503) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.2 rd, 120.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.2 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(22.1) write-amplify(10.2) OK, records in: 7353, records dropped: 513 output_compression: NoCompression
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.654530) EVENT_LOG_v1 {"time_micros": 1769444346654516, "job": 54, "event": "compaction_finished", "compaction_time_micros": 91939, "compaction_time_cpu_micros": 28953, "output_level": 6, "num_output_files": 1, "total_output_size": 11068790, "num_input_records": 7353, "num_output_records": 6840, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444346655157, "job": 54, "event": "table_file_deletion", "file_number": 94}
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444346658253, "job": 54, "event": "table_file_deletion", "file_number": 92}
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.560166) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.658375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.658382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.658384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.658386) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:06.658388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:06 compute-0 nova_compute[239965]: 2026-01-26 16:19:06.815 239969 DEBUG nova.compute.manager [req-042fa320-7f92-4b9b-82b8-140cbc6dab5e req-a1a1f093-1674-415e-b412-46ea20179c5c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received event network-changed-a64a836b-74d0-41fe-acc2-6df7f843675a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:06 compute-0 nova_compute[239965]: 2026-01-26 16:19:06.815 239969 DEBUG nova.compute.manager [req-042fa320-7f92-4b9b-82b8-140cbc6dab5e req-a1a1f093-1674-415e-b412-46ea20179c5c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Refreshing instance network info cache due to event network-changed-a64a836b-74d0-41fe-acc2-6df7f843675a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:19:06 compute-0 nova_compute[239965]: 2026-01-26 16:19:06.816 239969 DEBUG oslo_concurrency.lockutils [req-042fa320-7f92-4b9b-82b8-140cbc6dab5e req-a1a1f093-1674-415e-b412-46ea20179c5c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:06 compute-0 nova_compute[239965]: 2026-01-26 16:19:06.949 239969 DEBUG nova.network.neutron [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:19:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:07.176 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:07 compute-0 nova_compute[239965]: 2026-01-26 16:19:07.300 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:07.361 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:3d:85 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-b693f631-0e78-4293-bfc2-2d727801bcfe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b693f631-0e78-4293-bfc2-2d727801bcfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '846353c2154b4f35ae0fbada5ce2ba45', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a6f881fd-9f8f-4d81-9287-fd52fd4a8b9e, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=60da77b8-7540-46a2-96f8-50ed38096ec7) old=Port_Binding(mac=['fa:16:3e:b5:3d:85 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b693f631-0e78-4293-bfc2-2d727801bcfe', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b693f631-0e78-4293-bfc2-2d727801bcfe', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '846353c2154b4f35ae0fbada5ce2ba45', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:19:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:07.363 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 60da77b8-7540-46a2-96f8-50ed38096ec7 in datapath b693f631-0e78-4293-bfc2-2d727801bcfe updated
Jan 26 16:19:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:07.364 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b693f631-0e78-4293-bfc2-2d727801bcfe, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:19:07 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:07.365 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cf4caab4-bab7-40b5-8304-923707f0dd54]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:07 compute-0 nova_compute[239965]: 2026-01-26 16:19:07.539 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 260 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Jan 26 16:19:08 compute-0 ceph-mon[75140]: pgmap v2053: 305 pgs: 305 active+clean; 260 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.6 MiB/s wr, 128 op/s
Jan 26 16:19:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 260 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 125 op/s
Jan 26 16:19:10 compute-0 ceph-mon[75140]: pgmap v2054: 305 pgs: 305 active+clean; 260 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 125 op/s
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.608 239969 DEBUG nova.network.neutron [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Updating instance_info_cache with network_info: [{"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.630 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.630 239969 DEBUG nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Instance network_info: |[{"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.631 239969 DEBUG oslo_concurrency.lockutils [req-042fa320-7f92-4b9b-82b8-140cbc6dab5e req-a1a1f093-1674-415e-b412-46ea20179c5c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.631 239969 DEBUG nova.network.neutron [req-042fa320-7f92-4b9b-82b8-140cbc6dab5e req-a1a1f093-1674-415e-b412-46ea20179c5c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Refreshing network info cache for port a64a836b-74d0-41fe-acc2-6df7f843675a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.634 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Start _get_guest_xml network_info=[{"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.639 239969 WARNING nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.645 239969 DEBUG nova.virt.libvirt.host [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.646 239969 DEBUG nova.virt.libvirt.host [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.652 239969 DEBUG nova.virt.libvirt.host [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.653 239969 DEBUG nova.virt.libvirt.host [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.653 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.653 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.654 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.654 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.654 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.654 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.654 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.655 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.655 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.655 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.655 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.656 239969 DEBUG nova.virt.hardware [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:19:10 compute-0 nova_compute[239965]: 2026-01-26 16:19:10.659 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:19:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335537028' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.211 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.233 239969 DEBUG nova.storage.rbd_utils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image aa258935-ebf3-42bd-bca7-5b58621978f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.236 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.324 239969 DEBUG nova.compute.manager [req-dd0eddb7-8018-407a-96dd-9cbfd14848d4 req-5591b8af-7c73-41e4-96a6-1789d4f6a4b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-changed-6784938e-ce4b-4961-a42a-2d07d12fb8f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.325 239969 DEBUG nova.compute.manager [req-dd0eddb7-8018-407a-96dd-9cbfd14848d4 req-5591b8af-7c73-41e4-96a6-1789d4f6a4b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Refreshing instance network info cache due to event network-changed-6784938e-ce4b-4961-a42a-2d07d12fb8f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.325 239969 DEBUG oslo_concurrency.lockutils [req-dd0eddb7-8018-407a-96dd-9cbfd14848d4 req-5591b8af-7c73-41e4-96a6-1789d4f6a4b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.325 239969 DEBUG oslo_concurrency.lockutils [req-dd0eddb7-8018-407a-96dd-9cbfd14848d4 req-5591b8af-7c73-41e4-96a6-1789d4f6a4b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.325 239969 DEBUG nova.network.neutron [req-dd0eddb7-8018-407a-96dd-9cbfd14848d4 req-5591b8af-7c73-41e4-96a6-1789d4f6a4b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Refreshing network info cache for port 6784938e-ce4b-4961-a42a-2d07d12fb8f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:19:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1335537028' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:19:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:19:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/69123366' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.848 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.849 239969 DEBUG nova.virt.libvirt.vif [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:19:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-260093260',display_name='tempest-TestGettingAddress-server-260093260',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-260093260',id=119,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhQG087QD+2hAbAMQU1nGEagJiakW/DCR9CbPhEN0TT1VEnvAWKVpxW1jmBBFv8JmlAY1EliJqVpxhfQ7DGAWVhpLujt7IrW/DFNwTQWV1RlSnJjK8DC9EppnenO9SXIw==',key_name='tempest-TestGettingAddress-1313867492',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-ew0cfadq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:19:02Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=aa258935-ebf3-42bd-bca7-5b58621978f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.849 239969 DEBUG nova.network.os_vif_util [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.850 239969 DEBUG nova.network.os_vif_util [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:b4:c5,bridge_name='br-int',has_traffic_filtering=True,id=a64a836b-74d0-41fe-acc2-6df7f843675a,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa64a836b-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.851 239969 DEBUG nova.objects.instance [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid aa258935-ebf3-42bd-bca7-5b58621978f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.863 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <uuid>aa258935-ebf3-42bd-bca7-5b58621978f0</uuid>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <name>instance-00000077</name>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-260093260</nova:name>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:19:10</nova:creationTime>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <nova:port uuid="a64a836b-74d0-41fe-acc2-6df7f843675a">
Jan 26 16:19:11 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe50:b4c5" ipVersion="6"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe50:b4c5" ipVersion="6"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <system>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <entry name="serial">aa258935-ebf3-42bd-bca7-5b58621978f0</entry>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <entry name="uuid">aa258935-ebf3-42bd-bca7-5b58621978f0</entry>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     </system>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <os>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   </os>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <features>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   </features>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/aa258935-ebf3-42bd-bca7-5b58621978f0_disk">
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/aa258935-ebf3-42bd-bca7-5b58621978f0_disk.config">
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:19:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:50:b4:c5"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <target dev="tapa64a836b-74"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0/console.log" append="off"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <video>
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     </video>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:19:11 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:19:11 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:19:11 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:19:11 compute-0 nova_compute[239965]: </domain>
Jan 26 16:19:11 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.864 239969 DEBUG nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Preparing to wait for external event network-vif-plugged-a64a836b-74d0-41fe-acc2-6df7f843675a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.864 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.864 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.864 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.865 239969 DEBUG nova.virt.libvirt.vif [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:19:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-260093260',display_name='tempest-TestGettingAddress-server-260093260',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-260093260',id=119,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhQG087QD+2hAbAMQU1nGEagJiakW/DCR9CbPhEN0TT1VEnvAWKVpxW1jmBBFv8JmlAY1EliJqVpxhfQ7DGAWVhpLujt7IrW/DFNwTQWV1RlSnJjK8DC9EppnenO9SXIw==',key_name='tempest-TestGettingAddress-1313867492',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-ew0cfadq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:19:02Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=aa258935-ebf3-42bd-bca7-5b58621978f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.865 239969 DEBUG nova.network.os_vif_util [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.867 239969 DEBUG nova.network.os_vif_util [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:50:b4:c5,bridge_name='br-int',has_traffic_filtering=True,id=a64a836b-74d0-41fe-acc2-6df7f843675a,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa64a836b-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.867 239969 DEBUG os_vif [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:b4:c5,bridge_name='br-int',has_traffic_filtering=True,id=a64a836b-74d0-41fe-acc2-6df7f843675a,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa64a836b-74') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.868 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.868 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.869 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.871 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.871 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa64a836b-74, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.872 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa64a836b-74, col_values=(('external_ids', {'iface-id': 'a64a836b-74d0-41fe-acc2-6df7f843675a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:50:b4:c5', 'vm-uuid': 'aa258935-ebf3-42bd-bca7-5b58621978f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.873 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:11 compute-0 NetworkManager[48954]: <info>  [1769444351.8744] manager: (tapa64a836b-74): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/512)
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.877 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.882 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.883 239969 INFO os_vif [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:50:b4:c5,bridge_name='br-int',has_traffic_filtering=True,id=a64a836b-74d0-41fe-acc2-6df7f843675a,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa64a836b-74')
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.938 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.938 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.939 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:50:b4:c5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.939 239969 INFO nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Using config drive
Jan 26 16:19:11 compute-0 nova_compute[239965]: 2026-01-26 16:19:11.959 239969 DEBUG nova.storage.rbd_utils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image aa258935-ebf3-42bd-bca7-5b58621978f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 260 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 125 op/s
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.498 239969 INFO nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Creating config drive at /var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0/disk.config
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.502 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1asy_kla execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/69123366' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:19:12 compute-0 ceph-mon[75140]: pgmap v2055: 305 pgs: 305 active+clean; 260 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.6 MiB/s wr, 125 op/s
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.647 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1asy_kla" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.672 239969 DEBUG nova.storage.rbd_utils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image aa258935-ebf3-42bd-bca7-5b58621978f0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.676 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0/disk.config aa258935-ebf3-42bd-bca7-5b58621978f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.820 239969 DEBUG oslo_concurrency.processutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0/disk.config aa258935-ebf3-42bd-bca7-5b58621978f0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.821 239969 INFO nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Deleting local config drive /var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0/disk.config because it was imported into RBD.
Jan 26 16:19:12 compute-0 kernel: tapa64a836b-74: entered promiscuous mode
Jan 26 16:19:12 compute-0 NetworkManager[48954]: <info>  [1769444352.8874] manager: (tapa64a836b-74): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Jan 26 16:19:12 compute-0 ovn_controller[146046]: 2026-01-26T16:19:12Z|01237|binding|INFO|Claiming lport a64a836b-74d0-41fe-acc2-6df7f843675a for this chassis.
Jan 26 16:19:12 compute-0 ovn_controller[146046]: 2026-01-26T16:19:12Z|01238|binding|INFO|a64a836b-74d0-41fe-acc2-6df7f843675a: Claiming fa:16:3e:50:b4:c5 10.100.0.13 2001:db8:0:1:f816:3eff:fe50:b4c5 2001:db8::f816:3eff:fe50:b4c5
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:12.900 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:b4:c5 10.100.0.13 2001:db8:0:1:f816:3eff:fe50:b4c5 2001:db8::f816:3eff:fe50:b4c5'], port_security=['fa:16:3e:50:b4:c5 10.100.0.13 2001:db8:0:1:f816:3eff:fe50:b4c5 2001:db8::f816:3eff:fe50:b4c5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe50:b4c5/64 2001:db8::f816:3eff:fe50:b4c5/64', 'neutron:device_id': 'aa258935-ebf3-42bd-bca7-5b58621978f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fe48bd19-537b-482b-a9dd-72cb7a300d8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b663232-d708-4949-9a4a-d4a2c365ff36, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a64a836b-74d0-41fe-acc2-6df7f843675a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:19:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:12.901 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a64a836b-74d0-41fe-acc2-6df7f843675a in datapath 869bf55a-0f32-4f15-8ff4-2acbed7fab15 bound to our chassis
Jan 26 16:19:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:12.903 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 869bf55a-0f32-4f15-8ff4-2acbed7fab15
Jan 26 16:19:12 compute-0 ovn_controller[146046]: 2026-01-26T16:19:12Z|01239|binding|INFO|Setting lport a64a836b-74d0-41fe-acc2-6df7f843675a ovn-installed in OVS
Jan 26 16:19:12 compute-0 ovn_controller[146046]: 2026-01-26T16:19:12Z|01240|binding|INFO|Setting lport a64a836b-74d0-41fe-acc2-6df7f843675a up in Southbound
Jan 26 16:19:12 compute-0 nova_compute[239965]: 2026-01-26 16:19:12.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:12 compute-0 systemd-udevd[344843]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:19:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:12.926 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ab74611e-46d6-4d26-b8f9-c52c4e050225]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:12 compute-0 systemd-machined[208061]: New machine qemu-146-instance-00000077.
Jan 26 16:19:12 compute-0 NetworkManager[48954]: <info>  [1769444352.9442] device (tapa64a836b-74): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:19:12 compute-0 systemd[1]: Started Virtual Machine qemu-146-instance-00000077.
Jan 26 16:19:12 compute-0 NetworkManager[48954]: <info>  [1769444352.9454] device (tapa64a836b-74): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:19:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:12.958 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[12f709d7-a7da-4d64-98f5-eac0f39116ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:12.962 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[94357fd6-33cb-425d-a979-f07ed915ab5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:13.002 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fe01944a-9b42-46bc-b969-f35fa069c593]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:13.031 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[613ae653-6114-4ac0-9c80-dfafcc3e262f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap869bf55a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580702, 'reachable_time': 36000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344856, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:13.048 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[36ac10c5-e1de-42fe-831f-1af1849947fb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap869bf55a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580712, 'tstamp': 580712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344858, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap869bf55a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580715, 'tstamp': 580715}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344858, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:13.050 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap869bf55a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.054 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:13.054 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap869bf55a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:13.055 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:13.055 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap869bf55a-00, col_values=(('external_ids', {'iface-id': 'ac473cbf-5010-4e34-9ff9-00fae14d9b70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:13.055 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.107 239969 DEBUG nova.network.neutron [req-dd0eddb7-8018-407a-96dd-9cbfd14848d4 req-5591b8af-7c73-41e4-96a6-1789d4f6a4b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updated VIF entry in instance network info cache for port 6784938e-ce4b-4961-a42a-2d07d12fb8f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.108 239969 DEBUG nova.network.neutron [req-dd0eddb7-8018-407a-96dd-9cbfd14848d4 req-5591b8af-7c73-41e4-96a6-1789d4f6a4b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updating instance_info_cache with network_info: [{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.129 239969 DEBUG oslo_concurrency.lockutils [req-dd0eddb7-8018-407a-96dd-9cbfd14848d4 req-5591b8af-7c73-41e4-96a6-1789d4f6a4b4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.516 239969 DEBUG nova.network.neutron [req-042fa320-7f92-4b9b-82b8-140cbc6dab5e req-a1a1f093-1674-415e-b412-46ea20179c5c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Updated VIF entry in instance network info cache for port a64a836b-74d0-41fe-acc2-6df7f843675a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.517 239969 DEBUG nova.network.neutron [req-042fa320-7f92-4b9b-82b8-140cbc6dab5e req-a1a1f093-1674-415e-b412-46ea20179c5c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Updating instance_info_cache with network_info: [{"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.536 239969 DEBUG nova.compute.manager [req-23cb46e4-524c-4a53-a34c-a290928123bf req-ababd79d-e663-4ac1-aa44-0557f0424566 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received event network-vif-plugged-a64a836b-74d0-41fe-acc2-6df7f843675a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.537 239969 DEBUG oslo_concurrency.lockutils [req-23cb46e4-524c-4a53-a34c-a290928123bf req-ababd79d-e663-4ac1-aa44-0557f0424566 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.537 239969 DEBUG oslo_concurrency.lockutils [req-23cb46e4-524c-4a53-a34c-a290928123bf req-ababd79d-e663-4ac1-aa44-0557f0424566 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.537 239969 DEBUG oslo_concurrency.lockutils [req-23cb46e4-524c-4a53-a34c-a290928123bf req-ababd79d-e663-4ac1-aa44-0557f0424566 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.538 239969 DEBUG nova.compute.manager [req-23cb46e4-524c-4a53-a34c-a290928123bf req-ababd79d-e663-4ac1-aa44-0557f0424566 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Processing event network-vif-plugged-a64a836b-74d0-41fe-acc2-6df7f843675a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.550 239969 DEBUG oslo_concurrency.lockutils [req-042fa320-7f92-4b9b-82b8-140cbc6dab5e req-a1a1f093-1674-415e-b412-46ea20179c5c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.687 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444353.6866205, aa258935-ebf3-42bd-bca7-5b58621978f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.687 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] VM Started (Lifecycle Event)
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.690 239969 DEBUG nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.693 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.696 239969 INFO nova.virt.libvirt.driver [-] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Instance spawned successfully.
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.696 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.715 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.721 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.725 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.726 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.726 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.727 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.727 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.728 239969 DEBUG nova.virt.libvirt.driver [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.750 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.751 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444353.6895907, aa258935-ebf3-42bd-bca7-5b58621978f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.751 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] VM Paused (Lifecycle Event)
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.774 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.778 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444353.6922567, aa258935-ebf3-42bd-bca7-5b58621978f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.778 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] VM Resumed (Lifecycle Event)
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.788 239969 INFO nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Took 11.10 seconds to spawn the instance on the hypervisor.
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.789 239969 DEBUG nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.797 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.800 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.832 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.860 239969 INFO nova.compute.manager [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Took 12.45 seconds to build instance.
Jan 26 16:19:13 compute-0 nova_compute[239965]: 2026-01-26 16:19:13.882 239969 DEBUG oslo_concurrency.lockutils [None req-13a10881-f82c-4a11-975b-02e22380e319 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 260 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 16:19:14 compute-0 ceph-mon[75140]: pgmap v2056: 305 pgs: 305 active+clean; 260 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 16:19:15 compute-0 nova_compute[239965]: 2026-01-26 16:19:15.660 239969 DEBUG nova.compute.manager [req-f53d3445-bba6-4209-af79-cc12b32ff5ca req-13d4c793-b5ce-422f-9988-17d996210724 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received event network-vif-plugged-a64a836b-74d0-41fe-acc2-6df7f843675a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:15 compute-0 nova_compute[239965]: 2026-01-26 16:19:15.660 239969 DEBUG oslo_concurrency.lockutils [req-f53d3445-bba6-4209-af79-cc12b32ff5ca req-13d4c793-b5ce-422f-9988-17d996210724 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:15 compute-0 nova_compute[239965]: 2026-01-26 16:19:15.661 239969 DEBUG oslo_concurrency.lockutils [req-f53d3445-bba6-4209-af79-cc12b32ff5ca req-13d4c793-b5ce-422f-9988-17d996210724 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:15 compute-0 nova_compute[239965]: 2026-01-26 16:19:15.662 239969 DEBUG oslo_concurrency.lockutils [req-f53d3445-bba6-4209-af79-cc12b32ff5ca req-13d4c793-b5ce-422f-9988-17d996210724 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:15 compute-0 nova_compute[239965]: 2026-01-26 16:19:15.662 239969 DEBUG nova.compute.manager [req-f53d3445-bba6-4209-af79-cc12b32ff5ca req-13d4c793-b5ce-422f-9988-17d996210724 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] No waiting events found dispatching network-vif-plugged-a64a836b-74d0-41fe-acc2-6df7f843675a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:15 compute-0 nova_compute[239965]: 2026-01-26 16:19:15.663 239969 WARNING nova.compute.manager [req-f53d3445-bba6-4209-af79-cc12b32ff5ca req-13d4c793-b5ce-422f-9988-17d996210724 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received unexpected event network-vif-plugged-a64a836b-74d0-41fe-acc2-6df7f843675a for instance with vm_state active and task_state None.
Jan 26 16:19:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 266 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 129 op/s
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.512 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.513 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.513 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.513 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.514 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.514 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.537 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 26 16:19:16 compute-0 ceph-mon[75140]: pgmap v2057: 305 pgs: 305 active+clean; 266 MiB data, 911 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 129 op/s
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.552 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.553 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Image id e0817942-948b-4945-aa42-c1cb3a1c65ba yields fingerprint 6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.553 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] image e0817942-948b-4945-aa42-c1cb3a1c65ba at (/var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0): checking
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.553 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] image e0817942-948b-4945-aa42-c1cb3a1c65ba at (/var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.555 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.555 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] 12ac525a-976f-4afb-b4db-6035caada7e0 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.555 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] 03b61c83-5aca-49d6-ad69-1609916476e2 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.555 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] aa258935-ebf3-42bd-bca7-5b58621978f0 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.555 239969 WARNING nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.556 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Active base files: /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.556 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Removable base files: /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.556 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.556 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.556 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.556 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.556 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 26 16:19:16 compute-0 nova_compute[239965]: 2026-01-26 16:19:16.875 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:17 compute-0 ovn_controller[146046]: 2026-01-26T16:19:17Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:f4:39 10.100.0.7
Jan 26 16:19:17 compute-0 ovn_controller[146046]: 2026-01-26T16:19:17Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:f4:39 10.100.0.7
Jan 26 16:19:17 compute-0 nova_compute[239965]: 2026-01-26 16:19:17.543 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:17 compute-0 nova_compute[239965]: 2026-01-26 16:19:17.631 239969 DEBUG nova.compute.manager [req-b4f49f2f-78c3-449c-9779-0e6af10d1d33 req-ee362f18-020a-435b-ae56-1e1bfe602d4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received event network-changed-a64a836b-74d0-41fe-acc2-6df7f843675a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:17 compute-0 nova_compute[239965]: 2026-01-26 16:19:17.631 239969 DEBUG nova.compute.manager [req-b4f49f2f-78c3-449c-9779-0e6af10d1d33 req-ee362f18-020a-435b-ae56-1e1bfe602d4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Refreshing instance network info cache due to event network-changed-a64a836b-74d0-41fe-acc2-6df7f843675a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:19:17 compute-0 nova_compute[239965]: 2026-01-26 16:19:17.631 239969 DEBUG oslo_concurrency.lockutils [req-b4f49f2f-78c3-449c-9779-0e6af10d1d33 req-ee362f18-020a-435b-ae56-1e1bfe602d4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:17 compute-0 nova_compute[239965]: 2026-01-26 16:19:17.632 239969 DEBUG oslo_concurrency.lockutils [req-b4f49f2f-78c3-449c-9779-0e6af10d1d33 req-ee362f18-020a-435b-ae56-1e1bfe602d4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:17 compute-0 nova_compute[239965]: 2026-01-26 16:19:17.632 239969 DEBUG nova.network.neutron [req-b4f49f2f-78c3-449c-9779-0e6af10d1d33 req-ee362f18-020a-435b-ae56-1e1bfe602d4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Refreshing network info cache for port a64a836b-74d0-41fe-acc2-6df7f843675a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:19:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 291 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 207 op/s
Jan 26 16:19:18 compute-0 ceph-mon[75140]: pgmap v2058: 305 pgs: 305 active+clean; 291 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 207 op/s
Jan 26 16:19:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 291 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 26 16:19:20 compute-0 ceph-mon[75140]: pgmap v2059: 305 pgs: 305 active+clean; 291 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 26 16:19:20 compute-0 nova_compute[239965]: 2026-01-26 16:19:20.989 239969 DEBUG nova.network.neutron [req-b4f49f2f-78c3-449c-9779-0e6af10d1d33 req-ee362f18-020a-435b-ae56-1e1bfe602d4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Updated VIF entry in instance network info cache for port a64a836b-74d0-41fe-acc2-6df7f843675a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:19:20 compute-0 nova_compute[239965]: 2026-01-26 16:19:20.990 239969 DEBUG nova.network.neutron [req-b4f49f2f-78c3-449c-9779-0e6af10d1d33 req-ee362f18-020a-435b-ae56-1e1bfe602d4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Updating instance_info_cache with network_info: [{"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:21 compute-0 nova_compute[239965]: 2026-01-26 16:19:21.018 239969 DEBUG oslo_concurrency.lockutils [req-b4f49f2f-78c3-449c-9779-0e6af10d1d33 req-ee362f18-020a-435b-ae56-1e1bfe602d4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:21 compute-0 nova_compute[239965]: 2026-01-26 16:19:21.878 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 293 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Jan 26 16:19:22 compute-0 ceph-mon[75140]: pgmap v2060: 305 pgs: 305 active+clean; 293 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Jan 26 16:19:22 compute-0 nova_compute[239965]: 2026-01-26 16:19:22.545 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:24 compute-0 nova_compute[239965]: 2026-01-26 16:19:24.072 239969 INFO nova.compute.manager [None req-6f99e73e-d6a4-4ea4-9d61-7eb2378e1b58 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Get console output
Jan 26 16:19:24 compute-0 nova_compute[239965]: 2026-01-26 16:19:24.080 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:19:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 293 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Jan 26 16:19:25 compute-0 ceph-mon[75140]: pgmap v2061: 305 pgs: 305 active+clean; 293 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Jan 26 16:19:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 293 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 26 16:19:26 compute-0 ceph-mon[75140]: pgmap v2062: 305 pgs: 305 active+clean; 293 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 26 16:19:26 compute-0 nova_compute[239965]: 2026-01-26 16:19:26.881 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:27 compute-0 nova_compute[239965]: 2026-01-26 16:19:27.547 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:28 compute-0 nova_compute[239965]: 2026-01-26 16:19:28.261 239969 DEBUG oslo_concurrency.lockutils [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "interface-03b61c83-5aca-49d6-ad69-1609916476e2-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:28 compute-0 nova_compute[239965]: 2026-01-26 16:19:28.262 239969 DEBUG oslo_concurrency.lockutils [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "interface-03b61c83-5aca-49d6-ad69-1609916476e2-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:28 compute-0 nova_compute[239965]: 2026-01-26 16:19:28.263 239969 DEBUG nova.objects.instance [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'flavor' on Instance uuid 03b61c83-5aca-49d6-ad69-1609916476e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:28 compute-0 podman[344901]: 2026-01-26 16:19:28.374095318 +0000 UTC m=+0.056456660 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:19:28 compute-0 podman[344902]: 2026-01-26 16:19:28.428731813 +0000 UTC m=+0.111153877 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251202)
Jan 26 16:19:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 302 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 128 op/s
Jan 26 16:19:28 compute-0 ceph-mon[75140]: pgmap v2063: 305 pgs: 305 active+clean; 302 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.0 MiB/s wr, 128 op/s
Jan 26 16:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:19:28
Jan 26 16:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'default.rgw.control', 'backups', '.mgr']
Jan 26 16:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:19:29 compute-0 ovn_controller[146046]: 2026-01-26T16:19:29Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:50:b4:c5 10.100.0.13
Jan 26 16:19:29 compute-0 ovn_controller[146046]: 2026-01-26T16:19:29Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:50:b4:c5 10.100.0.13
Jan 26 16:19:29 compute-0 nova_compute[239965]: 2026-01-26 16:19:29.157 239969 DEBUG nova.objects.instance [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_requests' on Instance uuid 03b61c83-5aca-49d6-ad69-1609916476e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:29 compute-0 nova_compute[239965]: 2026-01-26 16:19:29.169 239969 DEBUG nova.network.neutron [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:19:29 compute-0 nova_compute[239965]: 2026-01-26 16:19:29.617 239969 DEBUG nova.policy [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 302 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 1.2 MiB/s wr, 31 op/s
Jan 26 16:19:30 compute-0 ceph-mon[75140]: pgmap v2064: 305 pgs: 305 active+clean; 302 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 1.2 MiB/s wr, 31 op/s
Jan 26 16:19:30 compute-0 nova_compute[239965]: 2026-01-26 16:19:30.708 239969 DEBUG nova.network.neutron [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Successfully created port: 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:19:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:19:31 compute-0 nova_compute[239965]: 2026-01-26 16:19:31.148 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:31 compute-0 nova_compute[239965]: 2026-01-26 16:19:31.884 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:32 compute-0 nova_compute[239965]: 2026-01-26 16:19:32.222 239969 DEBUG nova.network.neutron [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Successfully updated port: 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:19:32 compute-0 nova_compute[239965]: 2026-01-26 16:19:32.238 239969 DEBUG oslo_concurrency.lockutils [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:32 compute-0 nova_compute[239965]: 2026-01-26 16:19:32.239 239969 DEBUG oslo_concurrency.lockutils [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:32 compute-0 nova_compute[239965]: 2026-01-26 16:19:32.239 239969 DEBUG nova.network.neutron [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:19:32 compute-0 nova_compute[239965]: 2026-01-26 16:19:32.345 239969 DEBUG nova.compute.manager [req-2c214c8d-4f03-40a5-9e69-a28600d6ccbc req-202527de-dc9a-4b00-8a14-8c9a2baa09d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-changed-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:32 compute-0 nova_compute[239965]: 2026-01-26 16:19:32.345 239969 DEBUG nova.compute.manager [req-2c214c8d-4f03-40a5-9e69-a28600d6ccbc req-202527de-dc9a-4b00-8a14-8c9a2baa09d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Refreshing instance network info cache due to event network-changed-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:19:32 compute-0 nova_compute[239965]: 2026-01-26 16:19:32.346 239969 DEBUG oslo_concurrency.lockutils [req-2c214c8d-4f03-40a5-9e69-a28600d6ccbc req-202527de-dc9a-4b00-8a14-8c9a2baa09d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Jan 26 16:19:32 compute-0 nova_compute[239965]: 2026-01-26 16:19:32.550 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:32 compute-0 ceph-mon[75140]: pgmap v2065: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 400 KiB/s rd, 2.2 MiB/s wr, 76 op/s
Jan 26 16:19:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.569 239969 DEBUG nova.network.neutron [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updating instance_info_cache with network_info: [{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.647 239969 DEBUG oslo_concurrency.lockutils [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.648 239969 DEBUG oslo_concurrency.lockutils [req-2c214c8d-4f03-40a5-9e69-a28600d6ccbc req-202527de-dc9a-4b00-8a14-8c9a2baa09d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.648 239969 DEBUG nova.network.neutron [req-2c214c8d-4f03-40a5-9e69-a28600d6ccbc req-202527de-dc9a-4b00-8a14-8c9a2baa09d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Refreshing network info cache for port 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.651 239969 DEBUG nova.virt.libvirt.vif [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1727772541',display_name='tempest-TestNetworkBasicOps-server-1727772541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1727772541',id=118,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnAPD09GHRy5Ale/TKHIPd1lPu3k3PivMb654OK+PXj1wkjjKHmv1gJsjde79Ri7/bf6NqgeAkf0SfsjKBLuBdatenOFgP44bWnHvJ+vx50tXAV+0yesCLD4pyPDCOTKw==',key_name='tempest-TestNetworkBasicOps-908318729',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:19:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-hmsse4gk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:19:04Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=03b61c83-5aca-49d6-ad69-1609916476e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.652 239969 DEBUG nova.network.os_vif_util [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.653 239969 DEBUG nova.network.os_vif_util [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.653 239969 DEBUG os_vif [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.653 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.654 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.654 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.656 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.657 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1faa2f4f-dc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.657 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1faa2f4f-dc, col_values=(('external_ids', {'iface-id': '1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:ed:18', 'vm-uuid': '03b61c83-5aca-49d6-ad69-1609916476e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:34 compute-0 NetworkManager[48954]: <info>  [1769444374.6598] manager: (tap1faa2f4f-dc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/514)
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.680 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.682 239969 INFO os_vif [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc')
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.682 239969 DEBUG nova.virt.libvirt.vif [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1727772541',display_name='tempest-TestNetworkBasicOps-server-1727772541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1727772541',id=118,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnAPD09GHRy5Ale/TKHIPd1lPu3k3PivMb654OK+PXj1wkjjKHmv1gJsjde79Ri7/bf6NqgeAkf0SfsjKBLuBdatenOFgP44bWnHvJ+vx50tXAV+0yesCLD4pyPDCOTKw==',key_name='tempest-TestNetworkBasicOps-908318729',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:19:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-hmsse4gk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:19:04Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=03b61c83-5aca-49d6-ad69-1609916476e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.683 239969 DEBUG nova.network.os_vif_util [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.684 239969 DEBUG nova.network.os_vif_util [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.684 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.686 239969 DEBUG nova.virt.libvirt.guest [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] attach device xml: <interface type="ethernet">
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:be:ed:18"/>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <target dev="tap1faa2f4f-dc"/>
Jan 26 16:19:34 compute-0 nova_compute[239965]: </interface>
Jan 26 16:19:34 compute-0 nova_compute[239965]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 16:19:34 compute-0 kernel: tap1faa2f4f-dc: entered promiscuous mode
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.700 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:34 compute-0 ovn_controller[146046]: 2026-01-26T16:19:34Z|01241|binding|INFO|Claiming lport 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 for this chassis.
Jan 26 16:19:34 compute-0 ovn_controller[146046]: 2026-01-26T16:19:34Z|01242|binding|INFO|1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17: Claiming fa:16:3e:be:ed:18 10.100.0.29
Jan 26 16:19:34 compute-0 NetworkManager[48954]: <info>  [1769444374.7140] manager: (tap1faa2f4f-dc): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Jan 26 16:19:34 compute-0 ovn_controller[146046]: 2026-01-26T16:19:34Z|01243|binding|INFO|Setting lport 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 ovn-installed in OVS
Jan 26 16:19:34 compute-0 ovn_controller[146046]: 2026-01-26T16:19:34Z|01244|binding|INFO|Setting lport 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 up in Southbound
Jan 26 16:19:34 compute-0 systemd-udevd[344947]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:19:34 compute-0 ceph-mon[75140]: pgmap v2066: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.760 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.759 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:ed:18 10.100.0.29'], port_security=['fa:16:3e:be:ed:18 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '03b61c83-5aca-49d6-ad69-1609916476e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba09ae1b-c812-41ab-9113-065962bb26d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '039287cb-946e-4ce0-ad47-a6333988fa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e603b96-dbd7-4b52-a091-3a62246978c1, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.761 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 in datapath ba09ae1b-c812-41ab-9113-065962bb26d8 bound to our chassis
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.762 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.763 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ba09ae1b-c812-41ab-9113-065962bb26d8
Jan 26 16:19:34 compute-0 NetworkManager[48954]: <info>  [1769444374.7756] device (tap1faa2f4f-dc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:19:34 compute-0 NetworkManager[48954]: <info>  [1769444374.7764] device (tap1faa2f4f-dc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.776 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e4135939-114c-4613-a901-6a87eebde084]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.777 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapba09ae1b-c1 in ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.779 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapba09ae1b-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.779 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[33b7b5e0-0b18-4380-b415-d44fecd9292a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.779 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5989c7-9448-4a8b-95e2-e641c6b67790]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.793 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[2062ea28-c423-4859-be25-e5c9755d5bd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.810 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3d66af-88da-4df0-ab56-8ea1cfa26034]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.848 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[982833e3-5e49-45ed-8644-c1055d8c6821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 systemd-udevd[344951]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:19:34 compute-0 NetworkManager[48954]: <info>  [1769444374.8549] manager: (tapba09ae1b-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/516)
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.854 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9164a532-28e8-4b78-8bc0-2eddd75f5de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.879 239969 DEBUG nova.virt.libvirt.driver [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.879 239969 DEBUG nova.virt.libvirt.driver [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.879 239969 DEBUG nova.virt.libvirt.driver [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:16:f4:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.879 239969 DEBUG nova.virt.libvirt.driver [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:be:ed:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.885 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fb62a9f9-d2ba-4ff1-b08b-74065d8a7854]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.888 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[295d53ee-8adb-44bb-8d2b-dcef98bb3c7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.908 239969 DEBUG nova.virt.libvirt.guest [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1727772541</nova:name>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:19:34</nova:creationTime>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:19:34 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     <nova:port uuid="6784938e-ce4b-4961-a42a-2d07d12fb8f1">
Jan 26 16:19:34 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     <nova:port uuid="1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17">
Jan 26 16:19:34 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Jan 26 16:19:34 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:34 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:19:34 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:19:34 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 16:19:34 compute-0 NetworkManager[48954]: <info>  [1769444374.9157] device (tapba09ae1b-c0): carrier: link connected
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.922 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[48c88b6d-a1d4-41c3-a6bd-1ce6252f3172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 nova_compute[239965]: 2026-01-26 16:19:34.931 239969 DEBUG oslo_concurrency.lockutils [None req-25c0dbe4-2e7b-4e31-874e-2abe4bc2d11c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "interface-03b61c83-5aca-49d6-ad69-1609916476e2-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.941 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ce4b8a21-7566-45e9-bbed-4a9aa5027377]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba09ae1b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:26:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586572, 'reachable_time': 34075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 344975, 'error': None, 'target': 'ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.963 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba9d650b-753c-4081-8242-f077a2b08161]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe72:26fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 586572, 'tstamp': 586572}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 344976, 'error': None, 'target': 'ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:34.991 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[303823ef-54d1-4fe7-a344-1c35d249e15e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapba09ae1b-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:72:26:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 361], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586572, 'reachable_time': 34075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 344977, 'error': None, 'target': 'ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.033 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[83632e69-f7e1-4084-9d43-20838217898a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.109 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a86d162f-a22f-4d5c-b473-b30a56756d18]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.110 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba09ae1b-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.111 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.111 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba09ae1b-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:35 compute-0 kernel: tapba09ae1b-c0: entered promiscuous mode
Jan 26 16:19:35 compute-0 NetworkManager[48954]: <info>  [1769444375.1139] manager: (tapba09ae1b-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/517)
Jan 26 16:19:35 compute-0 nova_compute[239965]: 2026-01-26 16:19:35.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.116 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapba09ae1b-c0, col_values=(('external_ids', {'iface-id': '1ec9b87e-91d2-4eab-be3b-c60638ca7145'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:35 compute-0 nova_compute[239965]: 2026-01-26 16:19:35.117 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:35 compute-0 ovn_controller[146046]: 2026-01-26T16:19:35Z|01245|binding|INFO|Releasing lport 1ec9b87e-91d2-4eab-be3b-c60638ca7145 from this chassis (sb_readonly=0)
Jan 26 16:19:35 compute-0 nova_compute[239965]: 2026-01-26 16:19:35.130 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.131 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ba09ae1b-c812-41ab-9113-065962bb26d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ba09ae1b-c812-41ab-9113-065962bb26d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.132 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[51a8c75b-0e25-4cbd-a66e-2f9316412ce8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.133 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-ba09ae1b-c812-41ab-9113-065962bb26d8
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/ba09ae1b-c812-41ab-9113-065962bb26d8.pid.haproxy
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID ba09ae1b-c812-41ab-9113-065962bb26d8
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:19:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:35.134 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8', 'env', 'PROCESS_TAG=haproxy-ba09ae1b-c812-41ab-9113-065962bb26d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ba09ae1b-c812-41ab-9113-065962bb26d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:19:35 compute-0 podman[345009]: 2026-01-26 16:19:35.52310569 +0000 UTC m=+0.051812737 container create 71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:19:35 compute-0 systemd[1]: Started libpod-conmon-71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970.scope.
Jan 26 16:19:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:19:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b265d19e5aee5b57a4da0a0fe29be4567f6bae3a59028580e2f092af26805bdd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:19:35 compute-0 podman[345009]: 2026-01-26 16:19:35.495731812 +0000 UTC m=+0.024438899 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:19:35 compute-0 podman[345009]: 2026-01-26 16:19:35.59924482 +0000 UTC m=+0.127951887 container init 71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:19:35 compute-0 podman[345009]: 2026-01-26 16:19:35.607759299 +0000 UTC m=+0.136466386 container start 71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:19:35 compute-0 neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8[345024]: [NOTICE]   (345028) : New worker (345030) forked
Jan 26 16:19:35 compute-0 neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8[345024]: [NOTICE]   (345028) : Loading success.
Jan 26 16:19:35 compute-0 nova_compute[239965]: 2026-01-26 16:19:35.916 239969 DEBUG nova.compute.manager [req-9ecf61b9-1e43-400a-a50b-23d54c41c408 req-f30140f9-be4a-406e-9025-51ab7e35ef37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-plugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:35 compute-0 nova_compute[239965]: 2026-01-26 16:19:35.916 239969 DEBUG oslo_concurrency.lockutils [req-9ecf61b9-1e43-400a-a50b-23d54c41c408 req-f30140f9-be4a-406e-9025-51ab7e35ef37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:35 compute-0 nova_compute[239965]: 2026-01-26 16:19:35.917 239969 DEBUG oslo_concurrency.lockutils [req-9ecf61b9-1e43-400a-a50b-23d54c41c408 req-f30140f9-be4a-406e-9025-51ab7e35ef37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:35 compute-0 nova_compute[239965]: 2026-01-26 16:19:35.917 239969 DEBUG oslo_concurrency.lockutils [req-9ecf61b9-1e43-400a-a50b-23d54c41c408 req-f30140f9-be4a-406e-9025-51ab7e35ef37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:35 compute-0 nova_compute[239965]: 2026-01-26 16:19:35.917 239969 DEBUG nova.compute.manager [req-9ecf61b9-1e43-400a-a50b-23d54c41c408 req-f30140f9-be4a-406e-9025-51ab7e35ef37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] No waiting events found dispatching network-vif-plugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:35 compute-0 nova_compute[239965]: 2026-01-26 16:19:35.917 239969 WARNING nova.compute.manager [req-9ecf61b9-1e43-400a-a50b-23d54c41c408 req-f30140f9-be4a-406e-9025-51ab7e35ef37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received unexpected event network-vif-plugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 for instance with vm_state active and task_state None.
Jan 26 16:19:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:19:36 compute-0 ceph-mon[75140]: pgmap v2067: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.937 239969 DEBUG oslo_concurrency.lockutils [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "interface-03b61c83-5aca-49d6-ad69-1609916476e2-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.937 239969 DEBUG oslo_concurrency.lockutils [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "interface-03b61c83-5aca-49d6-ad69-1609916476e2-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.940 239969 DEBUG nova.network.neutron [req-2c214c8d-4f03-40a5-9e69-a28600d6ccbc req-202527de-dc9a-4b00-8a14-8c9a2baa09d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updated VIF entry in instance network info cache for port 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.941 239969 DEBUG nova.network.neutron [req-2c214c8d-4f03-40a5-9e69-a28600d6ccbc req-202527de-dc9a-4b00-8a14-8c9a2baa09d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updating instance_info_cache with network_info: [{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.960 239969 DEBUG nova.objects.instance [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'flavor' on Instance uuid 03b61c83-5aca-49d6-ad69-1609916476e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.962 239969 DEBUG oslo_concurrency.lockutils [req-2c214c8d-4f03-40a5-9e69-a28600d6ccbc req-202527de-dc9a-4b00-8a14-8c9a2baa09d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.979 239969 DEBUG nova.virt.libvirt.vif [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1727772541',display_name='tempest-TestNetworkBasicOps-server-1727772541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1727772541',id=118,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnAPD09GHRy5Ale/TKHIPd1lPu3k3PivMb654OK+PXj1wkjjKHmv1gJsjde79Ri7/bf6NqgeAkf0SfsjKBLuBdatenOFgP44bWnHvJ+vx50tXAV+0yesCLD4pyPDCOTKw==',key_name='tempest-TestNetworkBasicOps-908318729',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:19:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-hmsse4gk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:19:04Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=03b61c83-5aca-49d6-ad69-1609916476e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.980 239969 DEBUG nova.network.os_vif_util [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.980 239969 DEBUG nova.network.os_vif_util [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.984 239969 DEBUG nova.virt.libvirt.guest [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.985 239969 DEBUG nova.virt.libvirt.guest [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.988 239969 DEBUG nova.virt.libvirt.driver [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Attempting to detach device tap1faa2f4f-dc from instance 03b61c83-5aca-49d6-ad69-1609916476e2 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.988 239969 DEBUG nova.virt.libvirt.guest [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] detach device xml: <interface type="ethernet">
Jan 26 16:19:36 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:be:ed:18"/>
Jan 26 16:19:36 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 16:19:36 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:19:36 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 16:19:36 compute-0 nova_compute[239965]:   <target dev="tap1faa2f4f-dc"/>
Jan 26 16:19:36 compute-0 nova_compute[239965]: </interface>
Jan 26 16:19:36 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 16:19:36 compute-0 nova_compute[239965]: 2026-01-26 16:19:36.998 239969 DEBUG nova.virt.libvirt.guest [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.003 239969 DEBUG nova.virt.libvirt.guest [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface>not found in domain: <domain type='kvm' id='145'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <name>instance-00000076</name>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <uuid>03b61c83-5aca-49d6-ad69-1609916476e2</uuid>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1727772541</nova:name>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:19:34</nova:creationTime>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:port uuid="6784938e-ce4b-4961-a42a-2d07d12fb8f1">
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:port uuid="1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17">
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:19:37 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <resource>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </resource>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <system>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='serial'>03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='uuid'>03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </system>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <os>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </os>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <features>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </features>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk' index='2'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk.config' index='1'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:16:f4:39'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target dev='tap6784938e-ce'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:be:ed:18'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target dev='tap1faa2f4f-dc'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='net1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log' append='off'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </target>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/1'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log' append='off'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </console>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </graphics>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <video>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </video>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c810,c843</label>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c810,c843</imagelabel>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:19:37 compute-0 nova_compute[239965]: </domain>
Jan 26 16:19:37 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.003 239969 INFO nova.virt.libvirt.driver [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully detached device tap1faa2f4f-dc from instance 03b61c83-5aca-49d6-ad69-1609916476e2 from the persistent domain config.
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.003 239969 DEBUG nova.virt.libvirt.driver [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] (1/8): Attempting to detach device tap1faa2f4f-dc with device alias net1 from instance 03b61c83-5aca-49d6-ad69-1609916476e2 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.004 239969 DEBUG nova.virt.libvirt.guest [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] detach device xml: <interface type="ethernet">
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:be:ed:18"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <target dev="tap1faa2f4f-dc"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]: </interface>
Jan 26 16:19:37 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 16:19:37 compute-0 kernel: tap1faa2f4f-dc (unregistering): left promiscuous mode
Jan 26 16:19:37 compute-0 NetworkManager[48954]: <info>  [1769444377.0555] device (tap1faa2f4f-dc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.067 239969 DEBUG nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Received event <DeviceRemovedEvent: 1769444377.0666504, 03b61c83-5aca-49d6-ad69-1609916476e2 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.069 239969 DEBUG nova.virt.libvirt.driver [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Start waiting for the detach event from libvirt for device tap1faa2f4f-dc with device alias net1 for instance 03b61c83-5aca-49d6-ad69-1609916476e2 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.069 239969 DEBUG nova.virt.libvirt.guest [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.074 239969 DEBUG nova.virt.libvirt.guest [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface>not found in domain: <domain type='kvm' id='145'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <name>instance-00000076</name>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <uuid>03b61c83-5aca-49d6-ad69-1609916476e2</uuid>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1727772541</nova:name>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:19:34</nova:creationTime>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:port uuid="6784938e-ce4b-4961-a42a-2d07d12fb8f1">
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:port uuid="1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17">
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.29" ipVersion="4"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:19:37 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <resource>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </resource>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <system>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='serial'>03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='uuid'>03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </system>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <os>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </os>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <features>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </features>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk' index='2'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk.config' index='1'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:16:f4:39'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target dev='tap6784938e-ce'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log' append='off'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       </target>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/1'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log' append='off'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </console>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </graphics>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <video>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </video>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c810,c843</label>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c810,c843</imagelabel>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:19:37 compute-0 nova_compute[239965]: </domain>
Jan 26 16:19:37 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.074 239969 INFO nova.virt.libvirt.driver [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully detached device tap1faa2f4f-dc from instance 03b61c83-5aca-49d6-ad69-1609916476e2 from the live domain config.
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.075 239969 DEBUG nova.virt.libvirt.vif [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1727772541',display_name='tempest-TestNetworkBasicOps-server-1727772541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1727772541',id=118,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnAPD09GHRy5Ale/TKHIPd1lPu3k3PivMb654OK+PXj1wkjjKHmv1gJsjde79Ri7/bf6NqgeAkf0SfsjKBLuBdatenOFgP44bWnHvJ+vx50tXAV+0yesCLD4pyPDCOTKw==',key_name='tempest-TestNetworkBasicOps-908318729',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:19:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-hmsse4gk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:19:04Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=03b61c83-5aca-49d6-ad69-1609916476e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.075 239969 DEBUG nova.network.os_vif_util [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.076 239969 DEBUG nova.network.os_vif_util [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.076 239969 DEBUG os_vif [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.078 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.079 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1faa2f4f-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:37 compute-0 ovn_controller[146046]: 2026-01-26T16:19:37Z|01246|binding|INFO|Releasing lport 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 from this chassis (sb_readonly=0)
Jan 26 16:19:37 compute-0 ovn_controller[146046]: 2026-01-26T16:19:37Z|01247|binding|INFO|Setting lport 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 down in Southbound
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.124 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:37 compute-0 ovn_controller[146046]: 2026-01-26T16:19:37Z|01248|binding|INFO|Removing iface tap1faa2f4f-dc ovn-installed in OVS
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.126 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.133 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:ed:18 10.100.0.29'], port_security=['fa:16:3e:be:ed:18 10.100.0.29'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.29/28', 'neutron:device_id': '03b61c83-5aca-49d6-ad69-1609916476e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ba09ae1b-c812-41ab-9113-065962bb26d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '039287cb-946e-4ce0-ad47-a6333988fa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4e603b96-dbd7-4b52-a091-3a62246978c1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.134 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 in datapath ba09ae1b-c812-41ab-9113-065962bb26d8 unbound from our chassis
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.136 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ba09ae1b-c812-41ab-9113-065962bb26d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.137 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d8a33b17-ce98-464b-8268-f734ca651afe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.138 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8 namespace which is not needed anymore
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.142 239969 INFO os_vif [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc')
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.143 239969 DEBUG nova.virt.libvirt.guest [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1727772541</nova:name>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:19:37</nova:creationTime>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     <nova:port uuid="6784938e-ce4b-4961-a42a-2d07d12fb8f1">
Jan 26 16:19:37 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:19:37 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:37 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:19:37 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:19:37 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 16:19:37 compute-0 neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8[345024]: [NOTICE]   (345028) : haproxy version is 2.8.14-c23fe91
Jan 26 16:19:37 compute-0 neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8[345024]: [NOTICE]   (345028) : path to executable is /usr/sbin/haproxy
Jan 26 16:19:37 compute-0 neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8[345024]: [WARNING]  (345028) : Exiting Master process...
Jan 26 16:19:37 compute-0 neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8[345024]: [WARNING]  (345028) : Exiting Master process...
Jan 26 16:19:37 compute-0 neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8[345024]: [ALERT]    (345028) : Current worker (345030) exited with code 143 (Terminated)
Jan 26 16:19:37 compute-0 neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8[345024]: [WARNING]  (345028) : All workers exited. Exiting... (0)
Jan 26 16:19:37 compute-0 systemd[1]: libpod-71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970.scope: Deactivated successfully.
Jan 26 16:19:37 compute-0 podman[345060]: 2026-01-26 16:19:37.282411401 +0000 UTC m=+0.044792465 container died 71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970-userdata-shm.mount: Deactivated successfully.
Jan 26 16:19:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b265d19e5aee5b57a4da0a0fe29be4567f6bae3a59028580e2f092af26805bdd-merged.mount: Deactivated successfully.
Jan 26 16:19:37 compute-0 podman[345060]: 2026-01-26 16:19:37.325083213 +0000 UTC m=+0.087464217 container cleanup 71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:19:37 compute-0 systemd[1]: libpod-conmon-71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970.scope: Deactivated successfully.
Jan 26 16:19:37 compute-0 podman[345091]: 2026-01-26 16:19:37.398762863 +0000 UTC m=+0.045381070 container remove 71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.405 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d8983cd7-ae09-47eb-80b9-28480bbebcbe]: (4, ('Mon Jan 26 04:19:37 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8 (71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970)\n71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970\nMon Jan 26 04:19:37 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8 (71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970)\n71134f9c8e19c0be444bd85f2441a1e4ac417b2f1fc1ec8e03a5c6770d8a9970\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.406 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4d0c80d6-ab93-4199-99b7-c5bfda6a8dbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.407 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba09ae1b-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.408 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:37 compute-0 kernel: tapba09ae1b-c0: left promiscuous mode
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.421 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.423 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[625edeb6-24ed-4ca2-8f91-2528e6411ada]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.438 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a00ed089-fdc9-41c8-a13f-a9b15decb1d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.439 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2f60c0a7-d3e0-4eef-a163-a784d8539b72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.452 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d368a78-f748-434c-91d6-58eee1012251]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 586564, 'reachable_time': 44825, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345106, 'error': None, 'target': 'ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.454 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ba09ae1b-c812-41ab-9113-065962bb26d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:19:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:37.454 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[210ce7af-9fec-4a68-8e5f-e705f73df3ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:37 compute-0 systemd[1]: run-netns-ovnmeta\x2dba09ae1b\x2dc812\x2d41ab\x2d9113\x2d065962bb26d8.mount: Deactivated successfully.
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.551 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.924 239969 DEBUG oslo_concurrency.lockutils [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.924 239969 DEBUG oslo_concurrency.lockutils [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:37 compute-0 nova_compute[239965]: 2026-01-26 16:19:37.924 239969 DEBUG nova.network.neutron [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.036 239969 DEBUG nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-plugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.036 239969 DEBUG oslo_concurrency.lockutils [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.037 239969 DEBUG oslo_concurrency.lockutils [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.037 239969 DEBUG oslo_concurrency.lockutils [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.037 239969 DEBUG nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] No waiting events found dispatching network-vif-plugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.037 239969 WARNING nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received unexpected event network-vif-plugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 for instance with vm_state active and task_state None.
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.037 239969 DEBUG nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-unplugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.038 239969 DEBUG oslo_concurrency.lockutils [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.038 239969 DEBUG oslo_concurrency.lockutils [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.038 239969 DEBUG oslo_concurrency.lockutils [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.038 239969 DEBUG nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] No waiting events found dispatching network-vif-unplugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.038 239969 WARNING nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received unexpected event network-vif-unplugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 for instance with vm_state active and task_state None.
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.039 239969 DEBUG nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-plugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.039 239969 DEBUG oslo_concurrency.lockutils [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.039 239969 DEBUG oslo_concurrency.lockutils [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.039 239969 DEBUG oslo_concurrency.lockutils [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.039 239969 DEBUG nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] No waiting events found dispatching network-vif-plugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.039 239969 WARNING nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received unexpected event network-vif-plugged-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 for instance with vm_state active and task_state None.
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.039 239969 DEBUG nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-deleted-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.040 239969 INFO nova.compute.manager [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Neutron deleted interface 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17; detaching it from the instance and deleting it from the info cache
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.040 239969 DEBUG nova.network.neutron [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updating instance_info_cache with network_info: [{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.085 239969 DEBUG nova.objects.instance [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'system_metadata' on Instance uuid 03b61c83-5aca-49d6-ad69-1609916476e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.109 239969 DEBUG nova.objects.instance [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'flavor' on Instance uuid 03b61c83-5aca-49d6-ad69-1609916476e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.131 239969 DEBUG nova.virt.libvirt.vif [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1727772541',display_name='tempest-TestNetworkBasicOps-server-1727772541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1727772541',id=118,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnAPD09GHRy5Ale/TKHIPd1lPu3k3PivMb654OK+PXj1wkjjKHmv1gJsjde79Ri7/bf6NqgeAkf0SfsjKBLuBdatenOFgP44bWnHvJ+vx50tXAV+0yesCLD4pyPDCOTKw==',key_name='tempest-TestNetworkBasicOps-908318729',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:19:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-hmsse4gk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:19:04Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=03b61c83-5aca-49d6-ad69-1609916476e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.131 239969 DEBUG nova.network.os_vif_util [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.132 239969 DEBUG nova.network.os_vif_util [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.136 239969 DEBUG nova.virt.libvirt.guest [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.142 239969 DEBUG nova.virt.libvirt.guest [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface>not found in domain: <domain type='kvm' id='145'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <name>instance-00000076</name>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <uuid>03b61c83-5aca-49d6-ad69-1609916476e2</uuid>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1727772541</nova:name>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:19:37</nova:creationTime>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:port uuid="6784938e-ce4b-4961-a42a-2d07d12fb8f1">
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:19:38 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <resource>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </resource>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <system>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='serial'>03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='uuid'>03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </system>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <os>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </os>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <features>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </features>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk' index='2'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk.config' index='1'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:16:f4:39'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target dev='tap6784938e-ce'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log' append='off'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </target>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/1'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log' append='off'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </console>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </graphics>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <video>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </video>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c810,c843</label>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c810,c843</imagelabel>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:19:38 compute-0 nova_compute[239965]: </domain>
Jan 26 16:19:38 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.143 239969 DEBUG nova.virt.libvirt.guest [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.149 239969 DEBUG nova.virt.libvirt.guest [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:be:ed:18"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1faa2f4f-dc"/></interface>not found in domain: <domain type='kvm' id='145'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <name>instance-00000076</name>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <uuid>03b61c83-5aca-49d6-ad69-1609916476e2</uuid>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1727772541</nova:name>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:19:37</nova:creationTime>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:port uuid="6784938e-ce4b-4961-a42a-2d07d12fb8f1">
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:19:38 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <resource>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </resource>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <system>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='serial'>03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='uuid'>03b61c83-5aca-49d6-ad69-1609916476e2</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </system>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <os>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </os>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <features>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </features>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk' index='2'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/03b61c83-5aca-49d6-ad69-1609916476e2_disk.config' index='1'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </source>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:16:f4:39'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target dev='tap6784938e-ce'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log' append='off'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       </target>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/1'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <source path='/dev/pts/1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2/console.log' append='off'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </console>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </input>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5901' autoport='yes' listen='::0'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </graphics>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <video>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </video>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c810,c843</label>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c810,c843</imagelabel>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:19:38 compute-0 nova_compute[239965]: </domain>
Jan 26 16:19:38 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.149 239969 WARNING nova.virt.libvirt.driver [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Detaching interface fa:16:3e:be:ed:18 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap1faa2f4f-dc' not found.
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.151 239969 DEBUG nova.virt.libvirt.vif [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1727772541',display_name='tempest-TestNetworkBasicOps-server-1727772541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1727772541',id=118,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnAPD09GHRy5Ale/TKHIPd1lPu3k3PivMb654OK+PXj1wkjjKHmv1gJsjde79Ri7/bf6NqgeAkf0SfsjKBLuBdatenOFgP44bWnHvJ+vx50tXAV+0yesCLD4pyPDCOTKw==',key_name='tempest-TestNetworkBasicOps-908318729',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:19:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-hmsse4gk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:19:04Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=03b61c83-5aca-49d6-ad69-1609916476e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.151 239969 DEBUG nova.network.os_vif_util [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "address": "fa:16:3e:be:ed:18", "network": {"id": "ba09ae1b-c812-41ab-9113-065962bb26d8", "bridge": "br-int", "label": "tempest-network-smoke--1979345474", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.29", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1faa2f4f-dc", "ovs_interfaceid": "1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.152 239969 DEBUG nova.network.os_vif_util [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.153 239969 DEBUG os_vif [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.155 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.156 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1faa2f4f-dc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.156 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.159 239969 INFO os_vif [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:ed:18,bridge_name='br-int',has_traffic_filtering=True,id=1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17,network=Network(ba09ae1b-c812-41ab-9113-065962bb26d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1faa2f4f-dc')
Jan 26 16:19:38 compute-0 nova_compute[239965]: 2026-01-26 16:19:38.160 239969 DEBUG nova.virt.libvirt.guest [req-603183cb-ebc1-4486-9072-bd8a7655b542 req-34b68a68-8e19-4d9c-a4c8-a3783b0a703d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1727772541</nova:name>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:19:38</nova:creationTime>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     <nova:port uuid="6784938e-ce4b-4961-a42a-2d07d12fb8f1">
Jan 26 16:19:38 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:19:38 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:19:38 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:19:38 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:19:38 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.214772) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378214797, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 480, "num_deletes": 255, "total_data_size": 451764, "memory_usage": 462208, "flush_reason": "Manual Compaction"}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378225010, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 445785, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42776, "largest_seqno": 43255, "table_properties": {"data_size": 443051, "index_size": 770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6225, "raw_average_key_size": 18, "raw_value_size": 437683, "raw_average_value_size": 1272, "num_data_blocks": 35, "num_entries": 344, "num_filter_entries": 344, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444348, "oldest_key_time": 1769444348, "file_creation_time": 1769444378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 10304 microseconds, and 2592 cpu microseconds.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.225066) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 445785 bytes OK
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.225090) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.230126) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.230161) EVENT_LOG_v1 {"time_micros": 1769444378230154, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.230184) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 448917, prev total WAL file size 461238, number of live WAL files 2.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.230815) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353035' seq:72057594037927935, type:22 .. '6C6F676D0031373536' seq:0, type:0; will stop at (end)
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(435KB)], [95(10MB)]
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378231957, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 11514575, "oldest_snapshot_seqno": -1}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6666 keys, 11396681 bytes, temperature: kUnknown
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378367608, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 11396681, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11348370, "index_size": 30535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16709, "raw_key_size": 172671, "raw_average_key_size": 25, "raw_value_size": 11225290, "raw_average_value_size": 1683, "num_data_blocks": 1202, "num_entries": 6666, "num_filter_entries": 6666, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.367877) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 11396681 bytes
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.370282) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 84.8 rd, 83.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.6 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(51.4) write-amplify(25.6) OK, records in: 7184, records dropped: 518 output_compression: NoCompression
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.370300) EVENT_LOG_v1 {"time_micros": 1769444378370292, "job": 56, "event": "compaction_finished", "compaction_time_micros": 135774, "compaction_time_cpu_micros": 31878, "output_level": 6, "num_output_files": 1, "total_output_size": 11396681, "num_input_records": 7184, "num_output_records": 6666, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378370487, "job": 56, "event": "table_file_deletion", "file_number": 97}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378373185, "job": 56, "event": "table_file_deletion", "file_number": 95}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.230659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.373287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.373293) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.373295) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.373296) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.373298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.373690) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378373721, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 255, "num_deletes": 250, "total_data_size": 14332, "memory_usage": 20096, "flush_reason": "Manual Compaction"}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378375505, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 13847, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43256, "largest_seqno": 43510, "table_properties": {"data_size": 12095, "index_size": 49, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 645, "raw_key_size": 5124, "raw_average_key_size": 20, "raw_value_size": 8697, "raw_average_value_size": 34, "num_data_blocks": 2, "num_entries": 255, "num_filter_entries": 255, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444378, "oldest_key_time": 1769444378, "file_creation_time": 1769444378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 1842 microseconds, and 593 cpu microseconds.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.375535) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 13847 bytes OK
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.375548) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.376959) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.376986) EVENT_LOG_v1 {"time_micros": 1769444378376967, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.376997) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 12321, prev total WAL file size 12321, number of live WAL files 2.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.377277) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353035' seq:72057594037927935, type:22 .. '6D6772737461740031373536' seq:0, type:0; will stop at (end)
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(13KB)], [98(10MB)]
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378377296, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 11410528, "oldest_snapshot_seqno": -1}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6417 keys, 8040504 bytes, temperature: kUnknown
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378437397, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 8040504, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7998786, "index_size": 24584, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16069, "raw_key_size": 167689, "raw_average_key_size": 26, "raw_value_size": 7884921, "raw_average_value_size": 1228, "num_data_blocks": 953, "num_entries": 6417, "num_filter_entries": 6417, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444378, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.437843) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 8040504 bytes
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.439772) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.3 rd, 133.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.9 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(1404.7) write-amplify(580.7) OK, records in: 6921, records dropped: 504 output_compression: NoCompression
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.439798) EVENT_LOG_v1 {"time_micros": 1769444378439784, "job": 58, "event": "compaction_finished", "compaction_time_micros": 60264, "compaction_time_cpu_micros": 22032, "output_level": 6, "num_output_files": 1, "total_output_size": 8040504, "num_input_records": 6921, "num_output_records": 6417, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378439964, "job": 58, "event": "table_file_deletion", "file_number": 100}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444378442348, "job": 58, "event": "table_file_deletion", "file_number": 98}
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.377218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.442489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.442498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.442500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.442503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:19:38.442506) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:19:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Jan 26 16:19:39 compute-0 ceph-mon[75140]: pgmap v2068: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Jan 26 16:19:39 compute-0 ovn_controller[146046]: 2026-01-26T16:19:39Z|01249|binding|INFO|Releasing lport 9829d4f8-542b-4928-b494-a0e6aa624937 from this chassis (sb_readonly=0)
Jan 26 16:19:39 compute-0 ovn_controller[146046]: 2026-01-26T16:19:39Z|01250|binding|INFO|Releasing lport ac473cbf-5010-4e34-9ff9-00fae14d9b70 from this chassis (sb_readonly=0)
Jan 26 16:19:39 compute-0 nova_compute[239965]: 2026-01-26 16:19:39.371 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:39 compute-0 nova_compute[239965]: 2026-01-26 16:19:39.503 239969 INFO nova.network.neutron [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Port 1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 26 16:19:39 compute-0 nova_compute[239965]: 2026-01-26 16:19:39.503 239969 DEBUG nova.network.neutron [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updating instance_info_cache with network_info: [{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:39 compute-0 nova_compute[239965]: 2026-01-26 16:19:39.534 239969 DEBUG oslo_concurrency.lockutils [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:39 compute-0 nova_compute[239965]: 2026-01-26 16:19:39.558 239969 DEBUG oslo_concurrency.lockutils [None req-cead31cc-c30a-438a-8ee2-53bcfa77c7e2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "interface-03b61c83-5aca-49d6-ad69-1609916476e2-1faa2f4f-dc1f-4c00-9fa4-b0c3b069ef17" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 961 KiB/s wr, 46 op/s
Jan 26 16:19:40 compute-0 ceph-mon[75140]: pgmap v2069: 305 pgs: 305 active+clean; 326 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 314 KiB/s rd, 961 KiB/s wr, 46 op/s
Jan 26 16:19:40 compute-0 sshd-session[345107]: Invalid user sol from 45.148.10.240 port 53780
Jan 26 16:19:40 compute-0 sshd-session[345107]: Connection closed by invalid user sol 45.148.10.240 port 53780 [preauth]
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.213 239969 DEBUG nova.compute.manager [req-e87b72c0-c3b2-43da-90dc-48dd0f06f062 req-9521934e-0a96-4008-885a-547343dbfd8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-changed-6784938e-ce4b-4961-a42a-2d07d12fb8f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.215 239969 DEBUG nova.compute.manager [req-e87b72c0-c3b2-43da-90dc-48dd0f06f062 req-9521934e-0a96-4008-885a-547343dbfd8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Refreshing instance network info cache due to event network-changed-6784938e-ce4b-4961-a42a-2d07d12fb8f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.215 239969 DEBUG oslo_concurrency.lockutils [req-e87b72c0-c3b2-43da-90dc-48dd0f06f062 req-9521934e-0a96-4008-885a-547343dbfd8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.216 239969 DEBUG oslo_concurrency.lockutils [req-e87b72c0-c3b2-43da-90dc-48dd0f06f062 req-9521934e-0a96-4008-885a-547343dbfd8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.217 239969 DEBUG nova.network.neutron [req-e87b72c0-c3b2-43da-90dc-48dd0f06f062 req-9521934e-0a96-4008-885a-547343dbfd8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Refreshing network info cache for port 6784938e-ce4b-4961-a42a-2d07d12fb8f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.280 239969 DEBUG oslo_concurrency.lockutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.280 239969 DEBUG oslo_concurrency.lockutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.280 239969 DEBUG oslo_concurrency.lockutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.281 239969 DEBUG oslo_concurrency.lockutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.281 239969 DEBUG oslo_concurrency.lockutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.282 239969 INFO nova.compute.manager [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Terminating instance
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.283 239969 DEBUG nova.compute.manager [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:19:41 compute-0 kernel: tap6784938e-ce (unregistering): left promiscuous mode
Jan 26 16:19:41 compute-0 NetworkManager[48954]: <info>  [1769444381.3745] device (tap6784938e-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.383 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:41 compute-0 ovn_controller[146046]: 2026-01-26T16:19:41Z|01251|binding|INFO|Releasing lport 6784938e-ce4b-4961-a42a-2d07d12fb8f1 from this chassis (sb_readonly=0)
Jan 26 16:19:41 compute-0 ovn_controller[146046]: 2026-01-26T16:19:41Z|01252|binding|INFO|Setting lport 6784938e-ce4b-4961-a42a-2d07d12fb8f1 down in Southbound
Jan 26 16:19:41 compute-0 ovn_controller[146046]: 2026-01-26T16:19:41Z|01253|binding|INFO|Removing iface tap6784938e-ce ovn-installed in OVS
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.385 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.390 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:f4:39 10.100.0.7'], port_security=['fa:16:3e:16:f4:39 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '03b61c83-5aca-49d6-ad69-1609916476e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfc8a5e3-1ced-4191-b7c4-24f862b986e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bd05763d-4eca-4d9a-9a40-f33a88922164', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f2dd7df8-a4e3-43d1-9b0b-ece81ff8039c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=6784938e-ce4b-4961-a42a-2d07d12fb8f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.391 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 6784938e-ce4b-4961-a42a-2d07d12fb8f1 in datapath cfc8a5e3-1ced-4191-b7c4-24f862b986e1 unbound from our chassis
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.392 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cfc8a5e3-1ced-4191-b7c4-24f862b986e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.393 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d00ac406-1fe3-45ce-a454-0773c929ca95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.394 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1 namespace which is not needed anymore
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.408 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:41 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000076.scope: Deactivated successfully.
Jan 26 16:19:41 compute-0 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000076.scope: Consumed 14.687s CPU time.
Jan 26 16:19:41 compute-0 systemd-machined[208061]: Machine qemu-145-instance-00000076 terminated.
Jan 26 16:19:41 compute-0 neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1[344692]: [NOTICE]   (344696) : haproxy version is 2.8.14-c23fe91
Jan 26 16:19:41 compute-0 neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1[344692]: [NOTICE]   (344696) : path to executable is /usr/sbin/haproxy
Jan 26 16:19:41 compute-0 neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1[344692]: [WARNING]  (344696) : Exiting Master process...
Jan 26 16:19:41 compute-0 neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1[344692]: [ALERT]    (344696) : Current worker (344698) exited with code 143 (Terminated)
Jan 26 16:19:41 compute-0 neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1[344692]: [WARNING]  (344696) : All workers exited. Exiting... (0)
Jan 26 16:19:41 compute-0 systemd[1]: libpod-a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb.scope: Deactivated successfully.
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.526 239969 INFO nova.virt.libvirt.driver [-] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Instance destroyed successfully.
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.527 239969 DEBUG nova.objects.instance [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid 03b61c83-5aca-49d6-ad69-1609916476e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:41 compute-0 podman[345129]: 2026-01-26 16:19:41.528940105 +0000 UTC m=+0.045297128 container died a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.542 239969 DEBUG nova.virt.libvirt.vif [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:18:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1727772541',display_name='tempest-TestNetworkBasicOps-server-1727772541',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1727772541',id=118,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPnAPD09GHRy5Ale/TKHIPd1lPu3k3PivMb654OK+PXj1wkjjKHmv1gJsjde79Ri7/bf6NqgeAkf0SfsjKBLuBdatenOFgP44bWnHvJ+vx50tXAV+0yesCLD4pyPDCOTKw==',key_name='tempest-TestNetworkBasicOps-908318729',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:19:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-hmsse4gk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:19:04Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=03b61c83-5aca-49d6-ad69-1609916476e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.543 239969 DEBUG nova.network.os_vif_util [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.545 239969 DEBUG nova.network.os_vif_util [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:f4:39,bridge_name='br-int',has_traffic_filtering=True,id=6784938e-ce4b-4961-a42a-2d07d12fb8f1,network=Network(cfc8a5e3-1ced-4191-b7c4-24f862b986e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6784938e-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.545 239969 DEBUG os_vif [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:f4:39,bridge_name='br-int',has_traffic_filtering=True,id=6784938e-ce4b-4961-a42a-2d07d12fb8f1,network=Network(cfc8a5e3-1ced-4191-b7c4-24f862b986e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6784938e-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.547 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6784938e-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.548 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.550 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.553 239969 INFO os_vif [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:f4:39,bridge_name='br-int',has_traffic_filtering=True,id=6784938e-ce4b-4961-a42a-2d07d12fb8f1,network=Network(cfc8a5e3-1ced-4191-b7c4-24f862b986e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6784938e-ce')
Jan 26 16:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb-userdata-shm.mount: Deactivated successfully.
Jan 26 16:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2a8376509d2460c1324bf7c21d6911d4b51eb3fa124c882bc484775fd5e9e38-merged.mount: Deactivated successfully.
Jan 26 16:19:41 compute-0 podman[345129]: 2026-01-26 16:19:41.572309444 +0000 UTC m=+0.088666477 container cleanup a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:19:41 compute-0 systemd[1]: libpod-conmon-a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb.scope: Deactivated successfully.
Jan 26 16:19:41 compute-0 podman[345184]: 2026-01-26 16:19:41.628467336 +0000 UTC m=+0.037587919 container remove a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.633 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[61c3f0d3-2163-4140-a5a7-0ecd22eb8b14]: (4, ('Mon Jan 26 04:19:41 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1 (a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb)\na85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb\nMon Jan 26 04:19:41 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1 (a85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb)\na85daba0cf4599f1ae05e44d3a27814cdf6191ee94c67bd836c2d0e691aca5bb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.635 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[688311d9-30e4-48c0-a01a-db556d00dada]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.636 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfc8a5e3-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.638 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.644 239969 DEBUG nova.compute.manager [req-4805f10a-2615-4b7e-9c25-aa0e6d818126 req-f61171b3-44e2-4a99-b736-9fb1c36e0fc2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-unplugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.644 239969 DEBUG oslo_concurrency.lockutils [req-4805f10a-2615-4b7e-9c25-aa0e6d818126 req-f61171b3-44e2-4a99-b736-9fb1c36e0fc2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.645 239969 DEBUG oslo_concurrency.lockutils [req-4805f10a-2615-4b7e-9c25-aa0e6d818126 req-f61171b3-44e2-4a99-b736-9fb1c36e0fc2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.645 239969 DEBUG oslo_concurrency.lockutils [req-4805f10a-2615-4b7e-9c25-aa0e6d818126 req-f61171b3-44e2-4a99-b736-9fb1c36e0fc2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.645 239969 DEBUG nova.compute.manager [req-4805f10a-2615-4b7e-9c25-aa0e6d818126 req-f61171b3-44e2-4a99-b736-9fb1c36e0fc2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] No waiting events found dispatching network-vif-unplugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.645 239969 DEBUG nova.compute.manager [req-4805f10a-2615-4b7e-9c25-aa0e6d818126 req-f61171b3-44e2-4a99-b736-9fb1c36e0fc2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-unplugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:19:41 compute-0 kernel: tapcfc8a5e3-10: left promiscuous mode
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.652 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.655 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ac73738d-c4c2-45ef-8f84-a7508bfa0b7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.669 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6e0b8d96-dbba-4dbe-9288-1790721ca65c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.670 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e20f2666-46df-4985-b8c8-346990045e1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.689 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b043cc3c-782a-4195-9211-1118c79af171]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 583426, 'reachable_time': 41240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345201, 'error': None, 'target': 'ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.691 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cfc8a5e3-1ced-4191-b7c4-24f862b986e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:19:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:41.691 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[418dcc13-a990-4cbb-afaa-f488c638c004]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:41 compute-0 systemd[1]: run-netns-ovnmeta\x2dcfc8a5e3\x2d1ced\x2d4191\x2db7c4\x2d24f862b986e1.mount: Deactivated successfully.
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.796 239969 INFO nova.virt.libvirt.driver [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Deleting instance files /var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2_del
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.797 239969 INFO nova.virt.libvirt.driver [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Deletion of /var/lib/nova/instances/03b61c83-5aca-49d6-ad69-1609916476e2_del complete
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.849 239969 INFO nova.compute.manager [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Took 0.57 seconds to destroy the instance on the hypervisor.
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.849 239969 DEBUG oslo.service.loopingcall [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.850 239969 DEBUG nova.compute.manager [-] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:19:41 compute-0 nova_compute[239965]: 2026-01-26 16:19:41.850 239969 DEBUG nova.network.neutron [-] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:19:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 265 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 967 KiB/s wr, 52 op/s
Jan 26 16:19:42 compute-0 ceph-mon[75140]: pgmap v2070: 305 pgs: 305 active+clean; 265 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 321 KiB/s rd, 967 KiB/s wr, 52 op/s
Jan 26 16:19:42 compute-0 nova_compute[239965]: 2026-01-26 16:19:42.553 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.426 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.555 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.759 239969 DEBUG nova.compute.manager [req-5b687880-a108-47bb-8f64-d60a46a6929c req-87377ee9-41df-46b6-ac85-358f6ef9cd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-plugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.759 239969 DEBUG oslo_concurrency.lockutils [req-5b687880-a108-47bb-8f64-d60a46a6929c req-87377ee9-41df-46b6-ac85-358f6ef9cd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.759 239969 DEBUG oslo_concurrency.lockutils [req-5b687880-a108-47bb-8f64-d60a46a6929c req-87377ee9-41df-46b6-ac85-358f6ef9cd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.760 239969 DEBUG oslo_concurrency.lockutils [req-5b687880-a108-47bb-8f64-d60a46a6929c req-87377ee9-41df-46b6-ac85-358f6ef9cd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.760 239969 DEBUG nova.compute.manager [req-5b687880-a108-47bb-8f64-d60a46a6929c req-87377ee9-41df-46b6-ac85-358f6ef9cd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] No waiting events found dispatching network-vif-plugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.760 239969 WARNING nova.compute.manager [req-5b687880-a108-47bb-8f64-d60a46a6929c req-87377ee9-41df-46b6-ac85-358f6ef9cd2a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received unexpected event network-vif-plugged-6784938e-ce4b-4961-a42a-2d07d12fb8f1 for instance with vm_state active and task_state deleting.
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.775 239969 DEBUG nova.network.neutron [-] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.791 239969 INFO nova.compute.manager [-] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Took 1.94 seconds to deallocate network for instance.
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.844 239969 DEBUG oslo_concurrency.lockutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.845 239969 DEBUG oslo_concurrency.lockutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.861 239969 DEBUG nova.compute.manager [req-f6d50507-58da-460c-a3cb-db19c534c1c1 req-1bbf104a-f659-4be7-8d4c-7bf6eac389e3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Received event network-vif-deleted-6784938e-ce4b-4961-a42a-2d07d12fb8f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.870 239969 DEBUG nova.network.neutron [req-e87b72c0-c3b2-43da-90dc-48dd0f06f062 req-9521934e-0a96-4008-885a-547343dbfd8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updated VIF entry in instance network info cache for port 6784938e-ce4b-4961-a42a-2d07d12fb8f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.870 239969 DEBUG nova.network.neutron [req-e87b72c0-c3b2-43da-90dc-48dd0f06f062 req-9521934e-0a96-4008-885a-547343dbfd8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Updating instance_info_cache with network_info: [{"id": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "address": "fa:16:3e:16:f4:39", "network": {"id": "cfc8a5e3-1ced-4191-b7c4-24f862b986e1", "bridge": "br-int", "label": "tempest-network-smoke--1397269541", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6784938e-ce", "ovs_interfaceid": "6784938e-ce4b-4961-a42a-2d07d12fb8f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.888 239969 DEBUG oslo_concurrency.lockutils [req-e87b72c0-c3b2-43da-90dc-48dd0f06f062 req-9521934e-0a96-4008-885a-547343dbfd8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-03b61c83-5aca-49d6-ad69-1609916476e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:43 compute-0 nova_compute[239965]: 2026-01-26 16:19:43.929 239969 DEBUG oslo_concurrency.processutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 265 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 25 KiB/s wr, 7 op/s
Jan 26 16:19:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:19:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1490677648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:19:44 compute-0 nova_compute[239965]: 2026-01-26 16:19:44.528 239969 DEBUG oslo_concurrency.processutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:44 compute-0 nova_compute[239965]: 2026-01-26 16:19:44.536 239969 DEBUG nova.compute.provider_tree [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:19:44 compute-0 ceph-mon[75140]: pgmap v2071: 305 pgs: 305 active+clean; 265 MiB data, 939 MiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 25 KiB/s wr, 7 op/s
Jan 26 16:19:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1490677648' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:19:44 compute-0 nova_compute[239965]: 2026-01-26 16:19:44.550 239969 DEBUG nova.scheduler.client.report [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:19:44 compute-0 nova_compute[239965]: 2026-01-26 16:19:44.572 239969 DEBUG oslo_concurrency.lockutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:44 compute-0 nova_compute[239965]: 2026-01-26 16:19:44.593 239969 INFO nova.scheduler.client.report [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance 03b61c83-5aca-49d6-ad69-1609916476e2
Jan 26 16:19:44 compute-0 nova_compute[239965]: 2026-01-26 16:19:44.655 239969 DEBUG oslo_concurrency.lockutils [None req-ef81ec36-69e9-4c1b-a690-91916ffbc3f2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "03b61c83-5aca-49d6-ad69-1609916476e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:45 compute-0 nova_compute[239965]: 2026-01-26 16:19:45.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 26 KiB/s wr, 30 op/s
Jan 26 16:19:46 compute-0 ceph-mon[75140]: pgmap v2072: 305 pgs: 305 active+clean; 246 MiB data, 937 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 26 KiB/s wr, 30 op/s
Jan 26 16:19:46 compute-0 nova_compute[239965]: 2026-01-26 16:19:46.550 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:47 compute-0 nova_compute[239965]: 2026-01-26 16:19:47.555 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:48 compute-0 nova_compute[239965]: 2026-01-26 16:19:48.001 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 246 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 26 KiB/s wr, 30 op/s
Jan 26 16:19:48 compute-0 ceph-mon[75140]: pgmap v2073: 305 pgs: 305 active+clean; 246 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 26 KiB/s wr, 30 op/s
Jan 26 16:19:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:19:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1070484643' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:19:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:19:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1070484643' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015321952209750866 of space, bias 1.0, pg target 0.459658566292526 quantized to 32 (current 32)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010184389683094437 of space, bias 1.0, pg target 0.30553169049283313 quantized to 32 (current 32)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.090457072440881e-07 of space, bias 4.0, pg target 0.0008508548486929057 quantized to 16 (current 16)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:19:49 compute-0 nova_compute[239965]: 2026-01-26 16:19:49.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1070484643' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:19:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1070484643' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:19:49 compute-0 ovn_controller[146046]: 2026-01-26T16:19:49Z|01254|binding|INFO|Releasing lport ac473cbf-5010-4e34-9ff9-00fae14d9b70 from this chassis (sb_readonly=0)
Jan 26 16:19:49 compute-0 nova_compute[239965]: 2026-01-26 16:19:49.687 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 246 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 26 16:19:50 compute-0 ceph-mon[75140]: pgmap v2074: 305 pgs: 305 active+clean; 246 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 5.2 KiB/s wr, 28 op/s
Jan 26 16:19:51 compute-0 nova_compute[239965]: 2026-01-26 16:19:51.553 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 246 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:19:52 compute-0 ceph-mon[75140]: pgmap v2075: 305 pgs: 305 active+clean; 246 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.557 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.670 239969 DEBUG nova.compute.manager [req-b2b73492-657d-406f-b3a1-3cc0cf4712bf req-1d5a0cbe-f1e1-477a-8ed1-998bce3c311d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received event network-changed-a64a836b-74d0-41fe-acc2-6df7f843675a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.670 239969 DEBUG nova.compute.manager [req-b2b73492-657d-406f-b3a1-3cc0cf4712bf req-1d5a0cbe-f1e1-477a-8ed1-998bce3c311d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Refreshing instance network info cache due to event network-changed-a64a836b-74d0-41fe-acc2-6df7f843675a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.671 239969 DEBUG oslo_concurrency.lockutils [req-b2b73492-657d-406f-b3a1-3cc0cf4712bf req-1d5a0cbe-f1e1-477a-8ed1-998bce3c311d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.671 239969 DEBUG oslo_concurrency.lockutils [req-b2b73492-657d-406f-b3a1-3cc0cf4712bf req-1d5a0cbe-f1e1-477a-8ed1-998bce3c311d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.671 239969 DEBUG nova.network.neutron [req-b2b73492-657d-406f-b3a1-3cc0cf4712bf req-1d5a0cbe-f1e1-477a-8ed1-998bce3c311d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Refreshing network info cache for port a64a836b-74d0-41fe-acc2-6df7f843675a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.698 239969 DEBUG oslo_concurrency.lockutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "aa258935-ebf3-42bd-bca7-5b58621978f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.699 239969 DEBUG oslo_concurrency.lockutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.700 239969 DEBUG oslo_concurrency.lockutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.700 239969 DEBUG oslo_concurrency.lockutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.700 239969 DEBUG oslo_concurrency.lockutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.702 239969 INFO nova.compute.manager [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Terminating instance
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.703 239969 DEBUG nova.compute.manager [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:19:52 compute-0 kernel: tapa64a836b-74 (unregistering): left promiscuous mode
Jan 26 16:19:52 compute-0 NetworkManager[48954]: <info>  [1769444392.8201] device (tapa64a836b-74): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:19:52 compute-0 ovn_controller[146046]: 2026-01-26T16:19:52Z|01255|binding|INFO|Releasing lport a64a836b-74d0-41fe-acc2-6df7f843675a from this chassis (sb_readonly=0)
Jan 26 16:19:52 compute-0 ovn_controller[146046]: 2026-01-26T16:19:52Z|01256|binding|INFO|Setting lport a64a836b-74d0-41fe-acc2-6df7f843675a down in Southbound
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.876 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:52 compute-0 ovn_controller[146046]: 2026-01-26T16:19:52Z|01257|binding|INFO|Removing iface tapa64a836b-74 ovn-installed in OVS
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.887 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:50:b4:c5 10.100.0.13 2001:db8:0:1:f816:3eff:fe50:b4c5 2001:db8::f816:3eff:fe50:b4c5'], port_security=['fa:16:3e:50:b4:c5 10.100.0.13 2001:db8:0:1:f816:3eff:fe50:b4c5 2001:db8::f816:3eff:fe50:b4c5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe50:b4c5/64 2001:db8::f816:3eff:fe50:b4c5/64', 'neutron:device_id': 'aa258935-ebf3-42bd-bca7-5b58621978f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe48bd19-537b-482b-a9dd-72cb7a300d8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b663232-d708-4949-9a4a-d4a2c365ff36, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a64a836b-74d0-41fe-acc2-6df7f843675a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.888 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a64a836b-74d0-41fe-acc2-6df7f843675a in datapath 869bf55a-0f32-4f15-8ff4-2acbed7fab15 unbound from our chassis
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.890 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 869bf55a-0f32-4f15-8ff4-2acbed7fab15
Jan 26 16:19:52 compute-0 nova_compute[239965]: 2026-01-26 16:19:52.891 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.906 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cf788197-6819-42fc-ac38-e5f718c2442f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:52 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000077.scope: Deactivated successfully.
Jan 26 16:19:52 compute-0 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000077.scope: Consumed 14.834s CPU time.
Jan 26 16:19:52 compute-0 systemd-machined[208061]: Machine qemu-146-instance-00000077 terminated.
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.940 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e42806b0-2e60-4b97-923a-47641ba219bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.942 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7f2f0f4d-6747-42fc-9136-b13ac0b15ebe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.964 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[192741d6-7877-4941-893d-f5fb002207d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.979 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[236a877c-545d-4bfb-806a-d1ee382c1880]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap869bf55a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:3f:ac'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 356], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580702, 'reachable_time': 36000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345237, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.996 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[41988037-4887-45fe-9f04-54545a10cd80]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap869bf55a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580712, 'tstamp': 580712}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345238, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap869bf55a-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 580715, 'tstamp': 580715}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 345238, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:52.998 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap869bf55a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.000 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.004 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:53.005 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap869bf55a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:53.005 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:53.006 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap869bf55a-00, col_values=(('external_ids', {'iface-id': 'ac473cbf-5010-4e34-9ff9-00fae14d9b70'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:53.006 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.148 239969 INFO nova.virt.libvirt.driver [-] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Instance destroyed successfully.
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.149 239969 DEBUG nova.objects.instance [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid aa258935-ebf3-42bd-bca7-5b58621978f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.174 239969 DEBUG nova.virt.libvirt.vif [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:19:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-260093260',display_name='tempest-TestGettingAddress-server-260093260',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-260093260',id=119,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhQG087QD+2hAbAMQU1nGEagJiakW/DCR9CbPhEN0TT1VEnvAWKVpxW1jmBBFv8JmlAY1EliJqVpxhfQ7DGAWVhpLujt7IrW/DFNwTQWV1RlSnJjK8DC9EppnenO9SXIw==',key_name='tempest-TestGettingAddress-1313867492',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:19:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-ew0cfadq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:19:13Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=aa258935-ebf3-42bd-bca7-5b58621978f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.175 239969 DEBUG nova.network.os_vif_util [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.176 239969 DEBUG nova.network.os_vif_util [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:50:b4:c5,bridge_name='br-int',has_traffic_filtering=True,id=a64a836b-74d0-41fe-acc2-6df7f843675a,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa64a836b-74') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.176 239969 DEBUG os_vif [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:b4:c5,bridge_name='br-int',has_traffic_filtering=True,id=a64a836b-74d0-41fe-acc2-6df7f843675a,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa64a836b-74') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.179 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.179 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.179 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.180 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 12ac525a-976f-4afb-b4db-6035caada7e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.181 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.181 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa64a836b-74, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.183 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.184 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.187 239969 INFO os_vif [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:50:b4:c5,bridge_name='br-int',has_traffic_filtering=True,id=a64a836b-74d0-41fe-acc2-6df7f843675a,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa64a836b-74')
Jan 26 16:19:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:53 compute-0 ovn_controller[146046]: 2026-01-26T16:19:53Z|01258|binding|INFO|Releasing lport ac473cbf-5010-4e34-9ff9-00fae14d9b70 from this chassis (sb_readonly=0)
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.814 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.842 239969 DEBUG nova.compute.manager [req-2cb7233e-91eb-42cb-9baf-bf361cbe252a req-9dd3b0e2-ff63-439a-baa3-16d99c7eb7fa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received event network-vif-unplugged-a64a836b-74d0-41fe-acc2-6df7f843675a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.842 239969 DEBUG oslo_concurrency.lockutils [req-2cb7233e-91eb-42cb-9baf-bf361cbe252a req-9dd3b0e2-ff63-439a-baa3-16d99c7eb7fa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.843 239969 DEBUG oslo_concurrency.lockutils [req-2cb7233e-91eb-42cb-9baf-bf361cbe252a req-9dd3b0e2-ff63-439a-baa3-16d99c7eb7fa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.843 239969 DEBUG oslo_concurrency.lockutils [req-2cb7233e-91eb-42cb-9baf-bf361cbe252a req-9dd3b0e2-ff63-439a-baa3-16d99c7eb7fa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.843 239969 DEBUG nova.compute.manager [req-2cb7233e-91eb-42cb-9baf-bf361cbe252a req-9dd3b0e2-ff63-439a-baa3-16d99c7eb7fa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] No waiting events found dispatching network-vif-unplugged-a64a836b-74d0-41fe-acc2-6df7f843675a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:53 compute-0 nova_compute[239965]: 2026-01-26 16:19:53.843 239969 DEBUG nova.compute.manager [req-2cb7233e-91eb-42cb-9baf-bf361cbe252a req-9dd3b0e2-ff63-439a-baa3-16d99c7eb7fa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received event network-vif-unplugged-a64a836b-74d0-41fe-acc2-6df7f843675a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:19:54 compute-0 nova_compute[239965]: 2026-01-26 16:19:54.164 239969 INFO nova.virt.libvirt.driver [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Deleting instance files /var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0_del
Jan 26 16:19:54 compute-0 nova_compute[239965]: 2026-01-26 16:19:54.165 239969 INFO nova.virt.libvirt.driver [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Deletion of /var/lib/nova/instances/aa258935-ebf3-42bd-bca7-5b58621978f0_del complete
Jan 26 16:19:54 compute-0 nova_compute[239965]: 2026-01-26 16:19:54.218 239969 INFO nova.compute.manager [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Took 1.51 seconds to destroy the instance on the hypervisor.
Jan 26 16:19:54 compute-0 nova_compute[239965]: 2026-01-26 16:19:54.219 239969 DEBUG oslo.service.loopingcall [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:19:54 compute-0 nova_compute[239965]: 2026-01-26 16:19:54.220 239969 DEBUG nova.compute.manager [-] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:19:54 compute-0 nova_compute[239965]: 2026-01-26 16:19:54.220 239969 DEBUG nova.network.neutron [-] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:19:54 compute-0 nova_compute[239965]: 2026-01-26 16:19:54.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 246 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 767 B/s wr, 23 op/s
Jan 26 16:19:54 compute-0 ceph-mon[75140]: pgmap v2076: 305 pgs: 305 active+clean; 246 MiB data, 925 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 767 B/s wr, 23 op/s
Jan 26 16:19:55 compute-0 nova_compute[239965]: 2026-01-26 16:19:55.385 239969 DEBUG nova.network.neutron [-] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:55 compute-0 nova_compute[239965]: 2026-01-26 16:19:55.402 239969 INFO nova.compute.manager [-] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Took 1.18 seconds to deallocate network for instance.
Jan 26 16:19:55 compute-0 nova_compute[239965]: 2026-01-26 16:19:55.453 239969 DEBUG oslo_concurrency.lockutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:55 compute-0 nova_compute[239965]: 2026-01-26 16:19:55.454 239969 DEBUG oslo_concurrency.lockutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:55 compute-0 nova_compute[239965]: 2026-01-26 16:19:55.690 239969 DEBUG oslo_concurrency.processutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:55 compute-0 nova_compute[239965]: 2026-01-26 16:19:55.960 239969 DEBUG nova.network.neutron [req-b2b73492-657d-406f-b3a1-3cc0cf4712bf req-1d5a0cbe-f1e1-477a-8ed1-998bce3c311d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Updated VIF entry in instance network info cache for port a64a836b-74d0-41fe-acc2-6df7f843675a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:19:55 compute-0 nova_compute[239965]: 2026-01-26 16:19:55.962 239969 DEBUG nova.network.neutron [req-b2b73492-657d-406f-b3a1-3cc0cf4712bf req-1d5a0cbe-f1e1-477a-8ed1-998bce3c311d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Updating instance_info_cache with network_info: [{"id": "a64a836b-74d0-41fe-acc2-6df7f843675a", "address": "fa:16:3e:50:b4:c5", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe50:b4c5", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa64a836b-74", "ovs_interfaceid": "a64a836b-74d0-41fe-acc2-6df7f843675a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:55 compute-0 nova_compute[239965]: 2026-01-26 16:19:55.966 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updating instance_info_cache with network_info: [{"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.055 239969 DEBUG nova.compute.manager [req-4e432199-9775-421c-bafe-b78eae147c9d req-917fd56c-f467-48c9-90bb-4ac23ccb4f28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received event network-vif-plugged-a64a836b-74d0-41fe-acc2-6df7f843675a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.056 239969 DEBUG oslo_concurrency.lockutils [req-4e432199-9775-421c-bafe-b78eae147c9d req-917fd56c-f467-48c9-90bb-4ac23ccb4f28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.056 239969 DEBUG oslo_concurrency.lockutils [req-4e432199-9775-421c-bafe-b78eae147c9d req-917fd56c-f467-48c9-90bb-4ac23ccb4f28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.056 239969 DEBUG oslo_concurrency.lockutils [req-4e432199-9775-421c-bafe-b78eae147c9d req-917fd56c-f467-48c9-90bb-4ac23ccb4f28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.056 239969 DEBUG nova.compute.manager [req-4e432199-9775-421c-bafe-b78eae147c9d req-917fd56c-f467-48c9-90bb-4ac23ccb4f28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] No waiting events found dispatching network-vif-plugged-a64a836b-74d0-41fe-acc2-6df7f843675a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.057 239969 WARNING nova.compute.manager [req-4e432199-9775-421c-bafe-b78eae147c9d req-917fd56c-f467-48c9-90bb-4ac23ccb4f28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received unexpected event network-vif-plugged-a64a836b-74d0-41fe-acc2-6df7f843675a for instance with vm_state deleted and task_state None.
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.057 239969 DEBUG nova.compute.manager [req-4e432199-9775-421c-bafe-b78eae147c9d req-917fd56c-f467-48c9-90bb-4ac23ccb4f28 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Received event network-vif-deleted-a64a836b-74d0-41fe-acc2-6df7f843675a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.075 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.076 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.076 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.077 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.077 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.077 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.077 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.079 239969 DEBUG oslo_concurrency.lockutils [req-b2b73492-657d-406f-b3a1-3cc0cf4712bf req-1d5a0cbe-f1e1-477a-8ed1-998bce3c311d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-aa258935-ebf3-42bd-bca7-5b58621978f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:19:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:19:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/506829106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.303 239969 DEBUG oslo_concurrency.processutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.310 239969 DEBUG nova.compute.provider_tree [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:19:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/506829106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.428 239969 DEBUG nova.scheduler.client.report [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.449 239969 DEBUG oslo_concurrency.lockutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.471 239969 INFO nova.scheduler.client.report [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance aa258935-ebf3-42bd-bca7-5b58621978f0
Jan 26 16:19:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 236 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.4 KiB/s wr, 24 op/s
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.525 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444381.524889, 03b61c83-5aca-49d6-ad69-1609916476e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.526 239969 INFO nova.compute.manager [-] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] VM Stopped (Lifecycle Event)
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.536 239969 DEBUG oslo_concurrency.lockutils [None req-fbc4a8d0-5af8-4a79-bca2-f3449a20dedc 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "aa258935-ebf3-42bd-bca7-5b58621978f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.837s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:56 compute-0 nova_compute[239965]: 2026-01-26 16:19:56.546 239969 DEBUG nova.compute.manager [None req-0b0bb16f-19f4-4da0-8770-dc28e4534b4b - - - - - -] [instance: 03b61c83-5aca-49d6-ad69-1609916476e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:19:57 compute-0 nova_compute[239965]: 2026-01-26 16:19:57.150 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:57 compute-0 ceph-mon[75140]: pgmap v2077: 305 pgs: 305 active+clean; 236 MiB data, 913 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 3.4 KiB/s wr, 24 op/s
Jan 26 16:19:57 compute-0 nova_compute[239965]: 2026-01-26 16:19:57.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:19:57 compute-0 nova_compute[239965]: 2026-01-26 16:19:57.560 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:57 compute-0 nova_compute[239965]: 2026-01-26 16:19:57.596 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:57 compute-0 nova_compute[239965]: 2026-01-26 16:19:57.597 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:57 compute-0 nova_compute[239965]: 2026-01-26 16:19:57.597 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:57 compute-0 nova_compute[239965]: 2026-01-26 16:19:57.598 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:19:57 compute-0 nova_compute[239965]: 2026-01-26 16:19:57.598 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:57.605 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:19:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:57.606 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:19:57 compute-0 nova_compute[239965]: 2026-01-26 16:19:57.635 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:19:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3852866034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.141 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.183 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.279 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.279 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:19:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.9 KiB/s wr, 28 op/s
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.493 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.494 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3484MB free_disk=59.902573940344155GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.494 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.494 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.553 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 12ac525a-976f-4afb-b4db-6035caada7e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.553 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.553 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:19:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3852866034' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:19:58 compute-0 nova_compute[239965]: 2026-01-26 16:19:58.665 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.190 239969 DEBUG nova.compute.manager [req-5a8a2fa9-db9a-4002-892e-724150ae564f req-78a04480-8f5f-4084-b331-bd90ef13f2c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received event network-changed-6ee09ee1-7760-4d7f-af2e-0b8d367c129c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.191 239969 DEBUG nova.compute.manager [req-5a8a2fa9-db9a-4002-892e-724150ae564f req-78a04480-8f5f-4084-b331-bd90ef13f2c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Refreshing instance network info cache due to event network-changed-6ee09ee1-7760-4d7f-af2e-0b8d367c129c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.191 239969 DEBUG oslo_concurrency.lockutils [req-5a8a2fa9-db9a-4002-892e-724150ae564f req-78a04480-8f5f-4084-b331-bd90ef13f2c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.192 239969 DEBUG oslo_concurrency.lockutils [req-5a8a2fa9-db9a-4002-892e-724150ae564f req-78a04480-8f5f-4084-b331-bd90ef13f2c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.192 239969 DEBUG nova.network.neutron [req-5a8a2fa9-db9a-4002-892e-724150ae564f req-78a04480-8f5f-4084-b331-bd90ef13f2c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Refreshing network info cache for port 6ee09ee1-7760-4d7f-af2e-0b8d367c129c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.214 239969 DEBUG oslo_concurrency.lockutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "12ac525a-976f-4afb-b4db-6035caada7e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.214 239969 DEBUG oslo_concurrency.lockutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.214 239969 DEBUG oslo_concurrency.lockutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.215 239969 DEBUG oslo_concurrency.lockutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.215 239969 DEBUG oslo_concurrency.lockutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.216 239969 INFO nova.compute.manager [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Terminating instance
Jan 26 16:19:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:19:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/813509440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.219 239969 DEBUG nova.compute.manager [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.238 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:59.244 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:59.245 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:59.246 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.250 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.268 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:19:59 compute-0 kernel: tap6ee09ee1-77 (unregistering): left promiscuous mode
Jan 26 16:19:59 compute-0 NetworkManager[48954]: <info>  [1769444399.2830] device (tap6ee09ee1-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:19:59 compute-0 ovn_controller[146046]: 2026-01-26T16:19:59Z|01259|binding|INFO|Releasing lport 6ee09ee1-7760-4d7f-af2e-0b8d367c129c from this chassis (sb_readonly=0)
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.289 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:59 compute-0 ovn_controller[146046]: 2026-01-26T16:19:59Z|01260|binding|INFO|Setting lport 6ee09ee1-7760-4d7f-af2e-0b8d367c129c down in Southbound
Jan 26 16:19:59 compute-0 ovn_controller[146046]: 2026-01-26T16:19:59Z|01261|binding|INFO|Removing iface tap6ee09ee1-77 ovn-installed in OVS
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.292 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:59.299 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:78:06:ac 10.100.0.14 2001:db8:0:1:f816:3eff:fe78:6ac 2001:db8::f816:3eff:fe78:6ac'], port_security=['fa:16:3e:78:06:ac 10.100.0.14 2001:db8:0:1:f816:3eff:fe78:6ac 2001:db8::f816:3eff:fe78:6ac'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8:0:1:f816:3eff:fe78:6ac/64 2001:db8::f816:3eff:fe78:6ac/64', 'neutron:device_id': '12ac525a-976f-4afb-b4db-6035caada7e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fe48bd19-537b-482b-a9dd-72cb7a300d8a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7b663232-d708-4949-9a4a-d4a2c365ff36, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=6ee09ee1-7760-4d7f-af2e-0b8d367c129c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.302 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.302 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:59.301 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 6ee09ee1-7760-4d7f-af2e-0b8d367c129c in datapath 869bf55a-0f32-4f15-8ff4-2acbed7fab15 unbound from our chassis
Jan 26 16:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:59.302 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 869bf55a-0f32-4f15-8ff4-2acbed7fab15, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:59.303 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[885d8cb1-ee83-42a3-8fe0-b18e6383355b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:19:59.304 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15 namespace which is not needed anymore
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:59 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000075.scope: Deactivated successfully.
Jan 26 16:19:59 compute-0 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000075.scope: Consumed 16.719s CPU time.
Jan 26 16:19:59 compute-0 systemd-machined[208061]: Machine qemu-144-instance-00000075 terminated.
Jan 26 16:19:59 compute-0 podman[345334]: 2026-01-26 16:19:59.37511282 +0000 UTC m=+0.057960617 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:19:59 compute-0 podman[345336]: 2026-01-26 16:19:59.413777534 +0000 UTC m=+0.098748153 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.443 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.447 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:59 compute-0 neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15[343330]: [NOTICE]   (343336) : haproxy version is 2.8.14-c23fe91
Jan 26 16:19:59 compute-0 neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15[343330]: [NOTICE]   (343336) : path to executable is /usr/sbin/haproxy
Jan 26 16:19:59 compute-0 neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15[343330]: [WARNING]  (343336) : Exiting Master process...
Jan 26 16:19:59 compute-0 neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15[343330]: [WARNING]  (343336) : Exiting Master process...
Jan 26 16:19:59 compute-0 neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15[343330]: [ALERT]    (343336) : Current worker (343338) exited with code 143 (Terminated)
Jan 26 16:19:59 compute-0 neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15[343330]: [WARNING]  (343336) : All workers exited. Exiting... (0)
Jan 26 16:19:59 compute-0 systemd[1]: libpod-b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781.scope: Deactivated successfully.
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.458 239969 INFO nova.virt.libvirt.driver [-] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Instance destroyed successfully.
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.458 239969 DEBUG nova.objects.instance [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 12ac525a-976f-4afb-b4db-6035caada7e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:19:59 compute-0 podman[345396]: 2026-01-26 16:19:59.460907876 +0000 UTC m=+0.053626891 container died b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.475 239969 DEBUG nova.virt.libvirt.vif [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:18:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-547377129',display_name='tempest-TestGettingAddress-server-547377129',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-547377129',id=117,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPhQG087QD+2hAbAMQU1nGEagJiakW/DCR9CbPhEN0TT1VEnvAWKVpxW1jmBBFv8JmlAY1EliJqVpxhfQ7DGAWVhpLujt7IrW/DFNwTQWV1RlSnJjK8DC9EppnenO9SXIw==',key_name='tempest-TestGettingAddress-1313867492',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:18:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-mnwp0k0c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:18:37Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=12ac525a-976f-4afb-b4db-6035caada7e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.476 239969 DEBUG nova.network.os_vif_util [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.477 239969 DEBUG nova.network.os_vif_util [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:78:06:ac,bridge_name='br-int',has_traffic_filtering=True,id=6ee09ee1-7760-4d7f-af2e-0b8d367c129c,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee09ee1-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.478 239969 DEBUG os_vif [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:06:ac,bridge_name='br-int',has_traffic_filtering=True,id=6ee09ee1-7760-4d7f-af2e-0b8d367c129c,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee09ee1-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.481 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.481 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6ee09ee1-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:19:59 compute-0 nova_compute[239965]: 2026-01-26 16:19:59.489 239969 INFO os_vif [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:78:06:ac,bridge_name='br-int',has_traffic_filtering=True,id=6ee09ee1-7760-4d7f-af2e-0b8d367c129c,network=Network(869bf55a-0f32-4f15-8ff4-2acbed7fab15),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6ee09ee1-77')
Jan 26 16:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781-userdata-shm.mount: Deactivated successfully.
Jan 26 16:19:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-03ed4963320cc366f409f3e8ac29818249940f68328eb164fe3e7c3344e14a48-merged.mount: Deactivated successfully.
Jan 26 16:20:00 compute-0 ceph-mon[75140]: pgmap v2078: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.9 KiB/s wr, 28 op/s
Jan 26 16:20:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/813509440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:20:00 compute-0 podman[345396]: 2026-01-26 16:20:00.048171863 +0000 UTC m=+0.640890908 container cleanup b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:20:00 compute-0 systemd[1]: libpod-conmon-b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781.scope: Deactivated successfully.
Jan 26 16:20:00 compute-0 podman[345456]: 2026-01-26 16:20:00.148932824 +0000 UTC m=+0.064702301 container remove b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:20:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:00.158 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b46c7cb8-c902-4dca-aa88-14bbd64d02f6]: (4, ('Mon Jan 26 04:19:59 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15 (b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781)\nb6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781\nMon Jan 26 04:20:00 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15 (b6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781)\nb6d67c0f0a12f092962dbfe9bffdc7f78173a1becd1ffbb8965129010f326781\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:00.160 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d6679733-a99f-4716-b9ff-423784edfae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:00.161 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap869bf55a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:00 compute-0 nova_compute[239965]: 2026-01-26 16:20:00.163 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:00 compute-0 kernel: tap869bf55a-00: left promiscuous mode
Jan 26 16:20:00 compute-0 nova_compute[239965]: 2026-01-26 16:20:00.165 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:00.169 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2a338614-ad6b-4e1b-b5cf-d1628ae3b463]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:00 compute-0 nova_compute[239965]: 2026-01-26 16:20:00.178 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:00.188 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4f093091-3dbe-4c00-9348-5e048174fb67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:00.190 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5fcb12be-a3b3-4cb6-93b9-195e5e37446d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:00.205 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eda6135d-d0f7-4d54-b5d0-7bb6ba086d72]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 580695, 'reachable_time': 24764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 345470, 'error': None, 'target': 'ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:00.208 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-869bf55a-0f32-4f15-8ff4-2acbed7fab15 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:20:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:00.208 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[979146a2-35b5-42cd-9469-f1fccdc89bd2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d869bf55a\x2d0f32\x2d4f15\x2d8ff4\x2d2acbed7fab15.mount: Deactivated successfully.
Jan 26 16:20:00 compute-0 nova_compute[239965]: 2026-01-26 16:20:00.271 239969 INFO nova.virt.libvirt.driver [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Deleting instance files /var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0_del
Jan 26 16:20:00 compute-0 nova_compute[239965]: 2026-01-26 16:20:00.271 239969 INFO nova.virt.libvirt.driver [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Deletion of /var/lib/nova/instances/12ac525a-976f-4afb-b4db-6035caada7e0_del complete
Jan 26 16:20:00 compute-0 nova_compute[239965]: 2026-01-26 16:20:00.347 239969 INFO nova.compute.manager [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Took 1.13 seconds to destroy the instance on the hypervisor.
Jan 26 16:20:00 compute-0 nova_compute[239965]: 2026-01-26 16:20:00.347 239969 DEBUG oslo.service.loopingcall [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:20:00 compute-0 nova_compute[239965]: 2026-01-26 16:20:00.348 239969 DEBUG nova.compute.manager [-] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:20:00 compute-0 nova_compute[239965]: 2026-01-26 16:20:00.348 239969 DEBUG nova.network.neutron [-] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:20:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.9 KiB/s wr, 28 op/s
Jan 26 16:20:01 compute-0 sudo[345471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:20:01 compute-0 sudo[345471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:20:01 compute-0 sudo[345471]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:01 compute-0 ceph-mon[75140]: pgmap v2079: 305 pgs: 305 active+clean; 167 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 3.9 KiB/s wr, 28 op/s
Jan 26 16:20:01 compute-0 sudo[345496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:20:01 compute-0 sudo[345496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.576 239969 DEBUG nova.compute.manager [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received event network-vif-unplugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.578 239969 DEBUG oslo_concurrency.lockutils [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.579 239969 DEBUG oslo_concurrency.lockutils [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.579 239969 DEBUG oslo_concurrency.lockutils [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.580 239969 DEBUG nova.compute.manager [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] No waiting events found dispatching network-vif-unplugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.580 239969 DEBUG nova.compute.manager [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received event network-vif-unplugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.580 239969 DEBUG nova.compute.manager [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received event network-vif-plugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.581 239969 DEBUG oslo_concurrency.lockutils [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.581 239969 DEBUG oslo_concurrency.lockutils [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.581 239969 DEBUG oslo_concurrency.lockutils [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.581 239969 DEBUG nova.compute.manager [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] No waiting events found dispatching network-vif-plugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.581 239969 WARNING nova.compute.manager [req-62443116-ec2d-4272-89db-e5721218832a req-7764df81-d5c3-4b74-8f3d-17ab3fd70643 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received unexpected event network-vif-plugged-6ee09ee1-7760-4d7f-af2e-0b8d367c129c for instance with vm_state active and task_state deleting.
Jan 26 16:20:01 compute-0 sudo[345496]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:20:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:20:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:20:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:20:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:20:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:20:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:20:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:20:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:20:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:20:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:20:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:20:01 compute-0 sudo[345552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:20:01 compute-0 sudo[345552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:20:01 compute-0 sudo[345552]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.920 239969 DEBUG nova.network.neutron [-] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.939 239969 INFO nova.compute.manager [-] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Took 1.59 seconds to deallocate network for instance.
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.982 239969 DEBUG oslo_concurrency.lockutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:01 compute-0 nova_compute[239965]: 2026-01-26 16:20:01.982 239969 DEBUG oslo_concurrency.lockutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:01 compute-0 sudo[345577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:20:01 compute-0 sudo[345577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:20:02 compute-0 nova_compute[239965]: 2026-01-26 16:20:02.026 239969 DEBUG nova.compute.manager [req-14d63051-4552-477d-bf76-904165f39489 req-9c71ab5b-2995-4415-8afa-130fbf27ad53 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Received event network-vif-deleted-6ee09ee1-7760-4d7f-af2e-0b8d367c129c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:02 compute-0 nova_compute[239965]: 2026-01-26 16:20:02.037 239969 DEBUG oslo_concurrency.processutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:20:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:20:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:20:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:20:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:20:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:20:02 compute-0 podman[345634]: 2026-01-26 16:20:02.336758174 +0000 UTC m=+0.103516730 container create 6159275ab2a68b4f11dc29d6196a1e7a611b74f2af5502a59190386f37b8836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:20:02 compute-0 podman[345634]: 2026-01-26 16:20:02.261330731 +0000 UTC m=+0.028089337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:20:02 compute-0 systemd[1]: Started libpod-conmon-6159275ab2a68b4f11dc29d6196a1e7a611b74f2af5502a59190386f37b8836d.scope.
Jan 26 16:20:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:20:02 compute-0 podman[345634]: 2026-01-26 16:20:02.447077299 +0000 UTC m=+0.213835885 container init 6159275ab2a68b4f11dc29d6196a1e7a611b74f2af5502a59190386f37b8836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:20:02 compute-0 podman[345634]: 2026-01-26 16:20:02.459630466 +0000 UTC m=+0.226389022 container start 6159275ab2a68b4f11dc29d6196a1e7a611b74f2af5502a59190386f37b8836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_roentgen, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:20:02 compute-0 boring_roentgen[345651]: 167 167
Jan 26 16:20:02 compute-0 systemd[1]: libpod-6159275ab2a68b4f11dc29d6196a1e7a611b74f2af5502a59190386f37b8836d.scope: Deactivated successfully.
Jan 26 16:20:02 compute-0 podman[345634]: 2026-01-26 16:20:02.472009408 +0000 UTC m=+0.238767964 container attach 6159275ab2a68b4f11dc29d6196a1e7a611b74f2af5502a59190386f37b8836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:20:02 compute-0 podman[345634]: 2026-01-26 16:20:02.472602373 +0000 UTC m=+0.239360929 container died 6159275ab2a68b4f11dc29d6196a1e7a611b74f2af5502a59190386f37b8836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:20:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Jan 26 16:20:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-df406ce01d5f932e2f6f4d0107034a18f3d65b154434e9f34c37493f36f23041-merged.mount: Deactivated successfully.
Jan 26 16:20:02 compute-0 nova_compute[239965]: 2026-01-26 16:20:02.561 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:20:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1969467484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:20:02 compute-0 podman[345634]: 2026-01-26 16:20:02.694540474 +0000 UTC m=+0.461299070 container remove 6159275ab2a68b4f11dc29d6196a1e7a611b74f2af5502a59190386f37b8836d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_roentgen, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:20:02 compute-0 nova_compute[239965]: 2026-01-26 16:20:02.696 239969 DEBUG oslo_concurrency.processutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:02 compute-0 nova_compute[239965]: 2026-01-26 16:20:02.706 239969 DEBUG nova.compute.provider_tree [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:20:02 compute-0 systemd[1]: libpod-conmon-6159275ab2a68b4f11dc29d6196a1e7a611b74f2af5502a59190386f37b8836d.scope: Deactivated successfully.
Jan 26 16:20:02 compute-0 nova_compute[239965]: 2026-01-26 16:20:02.825 239969 DEBUG nova.scheduler.client.report [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:20:02 compute-0 podman[345677]: 2026-01-26 16:20:02.907475556 +0000 UTC m=+0.055157928 container create 401d0da0987c2ebfb58813408463f346554dfdb225b39ac7270b5cd255ddbf80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:20:02 compute-0 systemd[1]: Started libpod-conmon-401d0da0987c2ebfb58813408463f346554dfdb225b39ac7270b5cd255ddbf80.scope.
Jan 26 16:20:02 compute-0 nova_compute[239965]: 2026-01-26 16:20:02.961 239969 DEBUG oslo_concurrency.lockutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.979s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:02 compute-0 podman[345677]: 2026-01-26 16:20:02.879201796 +0000 UTC m=+0.026884208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:20:02 compute-0 nova_compute[239965]: 2026-01-26 16:20:02.970 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6007ecf5dded0a5236c19c8337f7ff9771cbb96b302b2f877a9c7fde93170952/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6007ecf5dded0a5236c19c8337f7ff9771cbb96b302b2f877a9c7fde93170952/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6007ecf5dded0a5236c19c8337f7ff9771cbb96b302b2f877a9c7fde93170952/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6007ecf5dded0a5236c19c8337f7ff9771cbb96b302b2f877a9c7fde93170952/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6007ecf5dded0a5236c19c8337f7ff9771cbb96b302b2f877a9c7fde93170952/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:03 compute-0 nova_compute[239965]: 2026-01-26 16:20:03.017 239969 INFO nova.scheduler.client.report [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 12ac525a-976f-4afb-b4db-6035caada7e0
Jan 26 16:20:03 compute-0 podman[345677]: 2026-01-26 16:20:03.094336052 +0000 UTC m=+0.242018424 container init 401d0da0987c2ebfb58813408463f346554dfdb225b39ac7270b5cd255ddbf80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 16:20:03 compute-0 ceph-mon[75140]: pgmap v2080: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 5.1 KiB/s wr, 56 op/s
Jan 26 16:20:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1969467484' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:20:03 compute-0 podman[345677]: 2026-01-26 16:20:03.106552421 +0000 UTC m=+0.254234793 container start 401d0da0987c2ebfb58813408463f346554dfdb225b39ac7270b5cd255ddbf80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 16:20:03 compute-0 nova_compute[239965]: 2026-01-26 16:20:03.114 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:03 compute-0 podman[345677]: 2026-01-26 16:20:03.124510638 +0000 UTC m=+0.272193000 container attach 401d0da0987c2ebfb58813408463f346554dfdb225b39ac7270b5cd255ddbf80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:20:03 compute-0 nova_compute[239965]: 2026-01-26 16:20:03.166 239969 DEBUG oslo_concurrency.lockutils [None req-0ceae72a-e7d9-4770-a082-35c30a1f7f84 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "12ac525a-976f-4afb-b4db-6035caada7e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:03 compute-0 nova_compute[239965]: 2026-01-26 16:20:03.423 239969 DEBUG nova.network.neutron [req-5a8a2fa9-db9a-4002-892e-724150ae564f req-78a04480-8f5f-4084-b331-bd90ef13f2c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updated VIF entry in instance network info cache for port 6ee09ee1-7760-4d7f-af2e-0b8d367c129c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:20:03 compute-0 nova_compute[239965]: 2026-01-26 16:20:03.423 239969 DEBUG nova.network.neutron [req-5a8a2fa9-db9a-4002-892e-724150ae564f req-78a04480-8f5f-4084-b331-bd90ef13f2c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Updating instance_info_cache with network_info: [{"id": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "address": "fa:16:3e:78:06:ac", "network": {"id": "869bf55a-0f32-4f15-8ff4-2acbed7fab15", "bridge": "br-int", "label": "tempest-network-smoke--960583180", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe78:6ac", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6ee09ee1-77", "ovs_interfaceid": "6ee09ee1-7760-4d7f-af2e-0b8d367c129c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:20:03 compute-0 nova_compute[239965]: 2026-01-26 16:20:03.452 239969 DEBUG oslo_concurrency.lockutils [req-5a8a2fa9-db9a-4002-892e-724150ae564f req-78a04480-8f5f-4084-b331-bd90ef13f2c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-12ac525a-976f-4afb-b4db-6035caada7e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:20:03 compute-0 nova_compute[239965]: 2026-01-26 16:20:03.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:03 compute-0 nova_compute[239965]: 2026-01-26 16:20:03.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:20:03 compute-0 nova_compute[239965]: 2026-01-26 16:20:03.535 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:20:03 compute-0 suspicious_edison[345694]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:20:03 compute-0 suspicious_edison[345694]: --> All data devices are unavailable
Jan 26 16:20:03 compute-0 systemd[1]: libpod-401d0da0987c2ebfb58813408463f346554dfdb225b39ac7270b5cd255ddbf80.scope: Deactivated successfully.
Jan 26 16:20:03 compute-0 podman[345715]: 2026-01-26 16:20:03.747917299 +0000 UTC m=+0.040718036 container died 401d0da0987c2ebfb58813408463f346554dfdb225b39ac7270b5cd255ddbf80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 16:20:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6007ecf5dded0a5236c19c8337f7ff9771cbb96b302b2f877a9c7fde93170952-merged.mount: Deactivated successfully.
Jan 26 16:20:03 compute-0 podman[345715]: 2026-01-26 16:20:03.800144375 +0000 UTC m=+0.092945032 container remove 401d0da0987c2ebfb58813408463f346554dfdb225b39ac7270b5cd255ddbf80 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_edison, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:20:03 compute-0 systemd[1]: libpod-conmon-401d0da0987c2ebfb58813408463f346554dfdb225b39ac7270b5cd255ddbf80.scope: Deactivated successfully.
Jan 26 16:20:03 compute-0 sudo[345577]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:03 compute-0 sudo[345729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:20:03 compute-0 sudo[345729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:20:03 compute-0 sudo[345729]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:04 compute-0 sudo[345754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:20:04 compute-0 sudo[345754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:20:04 compute-0 podman[345789]: 2026-01-26 16:20:04.363412275 +0000 UTC m=+0.051411486 container create 1c270111b2243d1f79b317a109f21f5edc07fe5a32db86b6e7b8e97a1ca8cc39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 16:20:04 compute-0 systemd[1]: Started libpod-conmon-1c270111b2243d1f79b317a109f21f5edc07fe5a32db86b6e7b8e97a1ca8cc39.scope.
Jan 26 16:20:04 compute-0 podman[345789]: 2026-01-26 16:20:04.336279753 +0000 UTC m=+0.024279004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:20:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:20:04 compute-0 podman[345789]: 2026-01-26 16:20:04.465077229 +0000 UTC m=+0.153076500 container init 1c270111b2243d1f79b317a109f21f5edc07fe5a32db86b6e7b8e97a1ca8cc39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_goldberg, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:20:04 compute-0 podman[345789]: 2026-01-26 16:20:04.478133758 +0000 UTC m=+0.166132969 container start 1c270111b2243d1f79b317a109f21f5edc07fe5a32db86b6e7b8e97a1ca8cc39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_goldberg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:20:04 compute-0 podman[345789]: 2026-01-26 16:20:04.482467554 +0000 UTC m=+0.170466765 container attach 1c270111b2243d1f79b317a109f21f5edc07fe5a32db86b6e7b8e97a1ca8cc39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_goldberg, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 26 16:20:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Jan 26 16:20:04 compute-0 sleepy_goldberg[345805]: 167 167
Jan 26 16:20:04 compute-0 nova_compute[239965]: 2026-01-26 16:20:04.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:04 compute-0 systemd[1]: libpod-1c270111b2243d1f79b317a109f21f5edc07fe5a32db86b6e7b8e97a1ca8cc39.scope: Deactivated successfully.
Jan 26 16:20:04 compute-0 podman[345789]: 2026-01-26 16:20:04.487735583 +0000 UTC m=+0.175734764 container died 1c270111b2243d1f79b317a109f21f5edc07fe5a32db86b6e7b8e97a1ca8cc39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_goldberg, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 26 16:20:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b608cc6a67450e481f41f81fda1be1bfa7eafefef195501d6b0d35f6e3f7f799-merged.mount: Deactivated successfully.
Jan 26 16:20:04 compute-0 podman[345789]: 2026-01-26 16:20:04.536937925 +0000 UTC m=+0.224937136 container remove 1c270111b2243d1f79b317a109f21f5edc07fe5a32db86b6e7b8e97a1ca8cc39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_goldberg, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:20:04 compute-0 ceph-mon[75140]: pgmap v2081: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Jan 26 16:20:04 compute-0 systemd[1]: libpod-conmon-1c270111b2243d1f79b317a109f21f5edc07fe5a32db86b6e7b8e97a1ca8cc39.scope: Deactivated successfully.
Jan 26 16:20:04 compute-0 podman[345829]: 2026-01-26 16:20:04.789494915 +0000 UTC m=+0.074526452 container create 7dcd67e5795c5f20daf972b702f1197b0f2e9c1192aed9d1bc94584e8bbdacb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dirac, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:20:04 compute-0 nova_compute[239965]: 2026-01-26 16:20:04.809 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:04 compute-0 systemd[1]: Started libpod-conmon-7dcd67e5795c5f20daf972b702f1197b0f2e9c1192aed9d1bc94584e8bbdacb2.scope.
Jan 26 16:20:04 compute-0 podman[345829]: 2026-01-26 16:20:04.759689286 +0000 UTC m=+0.044720883 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:20:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6139a2df469ba592c9e6b554432f06d1ba3f6524248d77159ed2e01529541e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6139a2df469ba592c9e6b554432f06d1ba3f6524248d77159ed2e01529541e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6139a2df469ba592c9e6b554432f06d1ba3f6524248d77159ed2e01529541e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb6139a2df469ba592c9e6b554432f06d1ba3f6524248d77159ed2e01529541e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:04 compute-0 podman[345829]: 2026-01-26 16:20:04.892661065 +0000 UTC m=+0.177692602 container init 7dcd67e5795c5f20daf972b702f1197b0f2e9c1192aed9d1bc94584e8bbdacb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dirac, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:20:04 compute-0 podman[345829]: 2026-01-26 16:20:04.90106918 +0000 UTC m=+0.186100717 container start 7dcd67e5795c5f20daf972b702f1197b0f2e9c1192aed9d1bc94584e8bbdacb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:20:04 compute-0 podman[345829]: 2026-01-26 16:20:04.906212826 +0000 UTC m=+0.191244393 container attach 7dcd67e5795c5f20daf972b702f1197b0f2e9c1192aed9d1bc94584e8bbdacb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]: {
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:     "0": [
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:         {
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "devices": [
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "/dev/loop3"
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             ],
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_name": "ceph_lv0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_size": "21470642176",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "name": "ceph_lv0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "tags": {
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.cluster_name": "ceph",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.crush_device_class": "",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.encrypted": "0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.objectstore": "bluestore",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.osd_id": "0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.type": "block",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.vdo": "0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.with_tpm": "0"
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             },
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "type": "block",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "vg_name": "ceph_vg0"
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:         }
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:     ],
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:     "1": [
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:         {
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "devices": [
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "/dev/loop4"
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             ],
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_name": "ceph_lv1",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_size": "21470642176",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "name": "ceph_lv1",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "tags": {
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.cluster_name": "ceph",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.crush_device_class": "",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.encrypted": "0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.objectstore": "bluestore",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.osd_id": "1",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.type": "block",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.vdo": "0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.with_tpm": "0"
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             },
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "type": "block",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "vg_name": "ceph_vg1"
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:         }
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:     ],
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:     "2": [
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:         {
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "devices": [
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "/dev/loop5"
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             ],
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_name": "ceph_lv2",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_size": "21470642176",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "name": "ceph_lv2",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "tags": {
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.cluster_name": "ceph",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.crush_device_class": "",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.encrypted": "0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.objectstore": "bluestore",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.osd_id": "2",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.type": "block",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.vdo": "0",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:                 "ceph.with_tpm": "0"
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             },
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "type": "block",
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:             "vg_name": "ceph_vg2"
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:         }
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]:     ]
Jan 26 16:20:05 compute-0 unruffled_dirac[345845]: }
Jan 26 16:20:05 compute-0 systemd[1]: libpod-7dcd67e5795c5f20daf972b702f1197b0f2e9c1192aed9d1bc94584e8bbdacb2.scope: Deactivated successfully.
Jan 26 16:20:05 compute-0 podman[345829]: 2026-01-26 16:20:05.283641327 +0000 UTC m=+0.568672934 container died 7dcd67e5795c5f20daf972b702f1197b0f2e9c1192aed9d1bc94584e8bbdacb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dirac, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb6139a2df469ba592c9e6b554432f06d1ba3f6524248d77159ed2e01529541e-merged.mount: Deactivated successfully.
Jan 26 16:20:05 compute-0 podman[345829]: 2026-01-26 16:20:05.342405512 +0000 UTC m=+0.627437079 container remove 7dcd67e5795c5f20daf972b702f1197b0f2e9c1192aed9d1bc94584e8bbdacb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dirac, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:20:05 compute-0 systemd[1]: libpod-conmon-7dcd67e5795c5f20daf972b702f1197b0f2e9c1192aed9d1bc94584e8bbdacb2.scope: Deactivated successfully.
Jan 26 16:20:05 compute-0 sudo[345754]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Cumulative writes: 9458 writes, 43K keys, 9458 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 9458 writes, 9458 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1472 writes, 7369 keys, 1472 commit groups, 1.0 writes per commit group, ingest: 9.52 MB, 0.02 MB/s
                                           Interval WAL: 1472 writes, 1472 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     98.8      0.53              0.17        29    0.018       0      0       0.0       0.0
                                             L6      1/0    7.67 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.6    136.1    114.6      2.07              0.61        28    0.074    162K    15K       0.0       0.0
                                            Sum      1/0    7.67 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.6    108.4    111.4      2.60              0.78        57    0.046    162K    15K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   9.2    124.8    121.7      0.70              0.24        16    0.044     56K   4067       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    136.1    114.6      2.07              0.61        28    0.074    162K    15K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     99.6      0.52              0.17        28    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.051, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.28 GB write, 0.08 MB/s write, 0.28 GB read, 0.08 MB/s read, 2.6 seconds
                                           Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 30.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000343 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1948,29.60 MB,9.7359%) FilterBlock(58,455.67 KB,0.146379%) IndexBlock(58,796.73 KB,0.255941%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 16:20:05 compute-0 sudo[345866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:20:05 compute-0 sudo[345866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:20:05 compute-0 sudo[345866]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:05 compute-0 sudo[345891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:20:05 compute-0 sudo[345891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:20:05 compute-0 podman[345928]: 2026-01-26 16:20:05.905671434 +0000 UTC m=+0.062725074 container create 637bd9a7f2173cfd45f86b604f10904060b6f1a561c8be16acc5bc208b439bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_gould, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:20:05 compute-0 systemd[1]: Started libpod-conmon-637bd9a7f2173cfd45f86b604f10904060b6f1a561c8be16acc5bc208b439bb1.scope.
Jan 26 16:20:05 compute-0 podman[345928]: 2026-01-26 16:20:05.871113689 +0000 UTC m=+0.028167379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:20:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:20:05 compute-0 podman[345928]: 2026-01-26 16:20:05.99027328 +0000 UTC m=+0.147327020 container init 637bd9a7f2173cfd45f86b604f10904060b6f1a561c8be16acc5bc208b439bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:20:06 compute-0 podman[345928]: 2026-01-26 16:20:06.001255339 +0000 UTC m=+0.158309019 container start 637bd9a7f2173cfd45f86b604f10904060b6f1a561c8be16acc5bc208b439bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:20:06 compute-0 podman[345928]: 2026-01-26 16:20:06.005967373 +0000 UTC m=+0.163021073 container attach 637bd9a7f2173cfd45f86b604f10904060b6f1a561c8be16acc5bc208b439bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:20:06 compute-0 intelligent_gould[345944]: 167 167
Jan 26 16:20:06 compute-0 systemd[1]: libpod-637bd9a7f2173cfd45f86b604f10904060b6f1a561c8be16acc5bc208b439bb1.scope: Deactivated successfully.
Jan 26 16:20:06 compute-0 podman[345928]: 2026-01-26 16:20:06.008343292 +0000 UTC m=+0.165396962 container died 637bd9a7f2173cfd45f86b604f10904060b6f1a561c8be16acc5bc208b439bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_gould, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5e6e3a396428f35cab03bc407884a917d2f867202a5702ad295901cb6091c54-merged.mount: Deactivated successfully.
Jan 26 16:20:06 compute-0 podman[345928]: 2026-01-26 16:20:06.05576663 +0000 UTC m=+0.212820280 container remove 637bd9a7f2173cfd45f86b604f10904060b6f1a561c8be16acc5bc208b439bb1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:20:06 compute-0 systemd[1]: libpod-conmon-637bd9a7f2173cfd45f86b604f10904060b6f1a561c8be16acc5bc208b439bb1.scope: Deactivated successfully.
Jan 26 16:20:06 compute-0 podman[345968]: 2026-01-26 16:20:06.23546488 +0000 UTC m=+0.048499835 container create 029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:20:06 compute-0 systemd[1]: Started libpod-conmon-029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769.scope.
Jan 26 16:20:06 compute-0 podman[345968]: 2026-01-26 16:20:06.213684939 +0000 UTC m=+0.026719944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:20:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5899842473d30482e2adb929f2fcdf43512bd70b504917658bc3f726673e46e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5899842473d30482e2adb929f2fcdf43512bd70b504917658bc3f726673e46e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5899842473d30482e2adb929f2fcdf43512bd70b504917658bc3f726673e46e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5899842473d30482e2adb929f2fcdf43512bd70b504917658bc3f726673e46e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:06 compute-0 podman[345968]: 2026-01-26 16:20:06.333415893 +0000 UTC m=+0.146450958 container init 029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:20:06 compute-0 podman[345968]: 2026-01-26 16:20:06.349745622 +0000 UTC m=+0.162780617 container start 029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:20:06 compute-0 podman[345968]: 2026-01-26 16:20:06.354094838 +0000 UTC m=+0.167129813 container attach 029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 16:20:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Jan 26 16:20:06 compute-0 ceph-mon[75140]: pgmap v2082: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 5.0 KiB/s wr, 55 op/s
Jan 26 16:20:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:06.608 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:07 compute-0 lvm[346066]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:20:07 compute-0 lvm[346066]: VG ceph_vg2 finished
Jan 26 16:20:07 compute-0 lvm[346064]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:20:07 compute-0 lvm[346062]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:20:07 compute-0 lvm[346062]: VG ceph_vg0 finished
Jan 26 16:20:07 compute-0 lvm[346064]: VG ceph_vg1 finished
Jan 26 16:20:07 compute-0 zealous_borg[345985]: {}
Jan 26 16:20:07 compute-0 systemd[1]: libpod-029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769.scope: Deactivated successfully.
Jan 26 16:20:07 compute-0 systemd[1]: libpod-029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769.scope: Consumed 1.475s CPU time.
Jan 26 16:20:07 compute-0 podman[345968]: 2026-01-26 16:20:07.189134848 +0000 UTC m=+1.002169843 container died 029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_borg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-5899842473d30482e2adb929f2fcdf43512bd70b504917658bc3f726673e46e3-merged.mount: Deactivated successfully.
Jan 26 16:20:07 compute-0 podman[345968]: 2026-01-26 16:20:07.227458755 +0000 UTC m=+1.040493710 container remove 029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_borg, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:20:07 compute-0 systemd[1]: libpod-conmon-029772d5e690136279e9a1dd8a12b3ae3af7c9afeb0161a8b6384004d7818769.scope: Deactivated successfully.
Jan 26 16:20:07 compute-0 sudo[345891]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:20:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:20:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:20:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:20:07 compute-0 sudo[346079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:20:07 compute-0 sudo[346079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:20:07 compute-0 sudo[346079]: pam_unix(sudo:session): session closed for user root
Jan 26 16:20:07 compute-0 nova_compute[239965]: 2026-01-26 16:20:07.603 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:08 compute-0 nova_compute[239965]: 2026-01-26 16:20:08.147 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444393.1461, aa258935-ebf3-42bd-bca7-5b58621978f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:20:08 compute-0 nova_compute[239965]: 2026-01-26 16:20:08.148 239969 INFO nova.compute.manager [-] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] VM Stopped (Lifecycle Event)
Jan 26 16:20:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:20:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:20:08 compute-0 nova_compute[239965]: 2026-01-26 16:20:08.323 239969 DEBUG nova.compute.manager [None req-42b1cf4c-8c69-4111-8de2-398f8ad6ed55 - - - - - -] [instance: aa258935-ebf3-42bd-bca7-5b58621978f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Jan 26 16:20:09 compute-0 ceph-mon[75140]: pgmap v2083: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.3 KiB/s wr, 54 op/s
Jan 26 16:20:09 compute-0 nova_compute[239965]: 2026-01-26 16:20:09.487 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:20:10 compute-0 ceph-mon[75140]: pgmap v2084: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:20:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:20:12 compute-0 ceph-mon[75140]: pgmap v2085: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Jan 26 16:20:12 compute-0 nova_compute[239965]: 2026-01-26 16:20:12.605 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:14 compute-0 nova_compute[239965]: 2026-01-26 16:20:14.456 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444399.4548037, 12ac525a-976f-4afb-b4db-6035caada7e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:20:14 compute-0 nova_compute[239965]: 2026-01-26 16:20:14.457 239969 INFO nova.compute.manager [-] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] VM Stopped (Lifecycle Event)
Jan 26 16:20:14 compute-0 nova_compute[239965]: 2026-01-26 16:20:14.480 239969 DEBUG nova.compute.manager [None req-6116c0d7-0776-4c9e-ada0-615e467765c1 - - - - - -] [instance: 12ac525a-976f-4afb-b4db-6035caada7e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:20:14 compute-0 nova_compute[239965]: 2026-01-26 16:20:14.490 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:14 compute-0 nova_compute[239965]: 2026-01-26 16:20:14.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:14 compute-0 nova_compute[239965]: 2026-01-26 16:20:14.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:20:14 compute-0 ceph-mon[75140]: pgmap v2086: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:20:15 compute-0 nova_compute[239965]: 2026-01-26 16:20:15.518 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:20:16 compute-0 ceph-mon[75140]: pgmap v2087: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:20:17 compute-0 nova_compute[239965]: 2026-01-26 16:20:17.607 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:20:18 compute-0 ceph-mon[75140]: pgmap v2088: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:20:18 compute-0 nova_compute[239965]: 2026-01-26 16:20:18.971 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:18 compute-0 nova_compute[239965]: 2026-01-26 16:20:18.971 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.002 239969 DEBUG nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.117 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.118 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.128 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.129 239969 INFO nova.compute.claims [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.261 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.533 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:20:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/508803073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.820 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.827 239969 DEBUG nova.compute.provider_tree [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.843 239969 DEBUG nova.scheduler.client.report [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:20:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/508803073' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.865 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.747s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.866 239969 DEBUG nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.919 239969 DEBUG nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.919 239969 DEBUG nova.network.neutron [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.945 239969 INFO nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:20:19 compute-0 nova_compute[239965]: 2026-01-26 16:20:19.978 239969 DEBUG nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.074 239969 DEBUG nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.076 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.077 239969 INFO nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Creating image(s)
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.113 239969 DEBUG nova.storage.rbd_utils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.150 239969 DEBUG nova.storage.rbd_utils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.187 239969 DEBUG nova.storage.rbd_utils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.192 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.256 239969 DEBUG nova.policy [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.276 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.277 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.278 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.279 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.313 239969 DEBUG nova.storage.rbd_utils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.318 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.683 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.366s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.747 239969 DEBUG nova.storage.rbd_utils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:20:20 compute-0 ceph-mon[75140]: pgmap v2089: 305 pgs: 305 active+clean; 88 MiB data, 829 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.973 239969 DEBUG nova.objects.instance [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid 84cd45c5-937a-4d42-9e80-2aaa7aa558c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.994 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.995 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Ensure instance console log exists: /var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.996 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.997 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:20 compute-0 nova_compute[239965]: 2026-01-26 16:20:20.998 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:22 compute-0 ceph-mon[75140]: pgmap v2090: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:22 compute-0 nova_compute[239965]: 2026-01-26 16:20:22.609 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:22 compute-0 nova_compute[239965]: 2026-01-26 16:20:22.627 239969 DEBUG nova.network.neutron [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Successfully created port: a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:20:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:24 compute-0 nova_compute[239965]: 2026-01-26 16:20:24.535 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:24 compute-0 ceph-mon[75140]: pgmap v2091: 305 pgs: 305 active+clean; 134 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:25 compute-0 nova_compute[239965]: 2026-01-26 16:20:25.913 239969 DEBUG nova.network.neutron [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Successfully updated port: a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:20:25 compute-0 nova_compute[239965]: 2026-01-26 16:20:25.930 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:20:25 compute-0 nova_compute[239965]: 2026-01-26 16:20:25.930 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:20:25 compute-0 nova_compute[239965]: 2026-01-26 16:20:25.931 239969 DEBUG nova.network.neutron [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:20:26 compute-0 nova_compute[239965]: 2026-01-26 16:20:26.082 239969 DEBUG nova.compute.manager [req-d8c2c4b7-a122-45dd-ae20-880435309039 req-91ffce5b-7905-48e8-8a6b-ebddd86ad102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received event network-changed-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:26 compute-0 nova_compute[239965]: 2026-01-26 16:20:26.083 239969 DEBUG nova.compute.manager [req-d8c2c4b7-a122-45dd-ae20-880435309039 req-91ffce5b-7905-48e8-8a6b-ebddd86ad102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Refreshing instance network info cache due to event network-changed-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:20:26 compute-0 nova_compute[239965]: 2026-01-26 16:20:26.084 239969 DEBUG oslo_concurrency.lockutils [req-d8c2c4b7-a122-45dd-ae20-880435309039 req-91ffce5b-7905-48e8-8a6b-ebddd86ad102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:20:26 compute-0 nova_compute[239965]: 2026-01-26 16:20:26.331 239969 DEBUG nova.network.neutron [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:20:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:26 compute-0 ceph-mon[75140]: pgmap v2092: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:27 compute-0 nova_compute[239965]: 2026-01-26 16:20:27.611 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:27 compute-0 nova_compute[239965]: 2026-01-26 16:20:27.952 239969 DEBUG nova.network.neutron [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Updating instance_info_cache with network_info: [{"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:20:27 compute-0 nova_compute[239965]: 2026-01-26 16:20:27.973 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:20:27 compute-0 nova_compute[239965]: 2026-01-26 16:20:27.974 239969 DEBUG nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Instance network_info: |[{"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:20:27 compute-0 nova_compute[239965]: 2026-01-26 16:20:27.975 239969 DEBUG oslo_concurrency.lockutils [req-d8c2c4b7-a122-45dd-ae20-880435309039 req-91ffce5b-7905-48e8-8a6b-ebddd86ad102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:20:27 compute-0 nova_compute[239965]: 2026-01-26 16:20:27.975 239969 DEBUG nova.network.neutron [req-d8c2c4b7-a122-45dd-ae20-880435309039 req-91ffce5b-7905-48e8-8a6b-ebddd86ad102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Refreshing network info cache for port a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:20:27 compute-0 nova_compute[239965]: 2026-01-26 16:20:27.980 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Start _get_guest_xml network_info=[{"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:20:27 compute-0 nova_compute[239965]: 2026-01-26 16:20:27.989 239969 WARNING nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:27.999 239969 DEBUG nova.virt.libvirt.host [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.001 239969 DEBUG nova.virt.libvirt.host [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.004 239969 DEBUG nova.virt.libvirt.host [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.005 239969 DEBUG nova.virt.libvirt.host [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.006 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.007 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.007 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.008 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.009 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.009 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.010 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.010 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.011 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.011 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.012 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.012 239969 DEBUG nova.virt.hardware [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.017 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:20:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1988162577' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.587 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:28 compute-0 ceph-mon[75140]: pgmap v2093: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1988162577' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.622 239969 DEBUG nova.storage.rbd_utils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:28 compute-0 nova_compute[239965]: 2026-01-26 16:20:28.625 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:20:28
Jan 26 16:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'images', 'backups', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'default.rgw.log']
Jan 26 16:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:20:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:20:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3159958281' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.194 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.196 239969 DEBUG nova.virt.libvirt.vif [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:20:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1497622994',display_name='tempest-TestNetworkBasicOps-server-1497622994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1497622994',id=120,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMEbNbkirpn8q6fOUcuOBLNwlHKG1ipK4t+MC4V9pNQ7fEY1buV+g1Lo8jtOaeRRA9dB73yZ5bS6kVJRhgngcieAdijcLEIBj4r5FY+dsZJud6vOs//08LbVZa6UA9cOVw==',key_name='tempest-TestNetworkBasicOps-288144499',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-ad8czmgq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:20:20Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=84cd45c5-937a-4d42-9e80-2aaa7aa558c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.197 239969 DEBUG nova.network.os_vif_util [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.201 239969 DEBUG nova.network.os_vif_util [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:f3:55,bridge_name='br-int',has_traffic_filtering=True,id=a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2b8e0fe-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.203 239969 DEBUG nova.objects.instance [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 84cd45c5-937a-4d42-9e80-2aaa7aa558c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.230 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <uuid>84cd45c5-937a-4d42-9e80-2aaa7aa558c2</uuid>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <name>instance-00000078</name>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-1497622994</nova:name>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:20:27</nova:creationTime>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <nova:port uuid="a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb">
Jan 26 16:20:29 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <system>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <entry name="serial">84cd45c5-937a-4d42-9e80-2aaa7aa558c2</entry>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <entry name="uuid">84cd45c5-937a-4d42-9e80-2aaa7aa558c2</entry>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     </system>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <os>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   </os>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <features>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   </features>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk">
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       </source>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk.config">
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       </source>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:20:29 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:45:f3:55"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <target dev="tapa2b8e0fe-aa"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2/console.log" append="off"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <video>
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     </video>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:20:29 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:20:29 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:20:29 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:20:29 compute-0 nova_compute[239965]: </domain>
Jan 26 16:20:29 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.232 239969 DEBUG nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Preparing to wait for external event network-vif-plugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.233 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.233 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.234 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.235 239969 DEBUG nova.virt.libvirt.vif [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:20:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1497622994',display_name='tempest-TestNetworkBasicOps-server-1497622994',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1497622994',id=120,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMEbNbkirpn8q6fOUcuOBLNwlHKG1ipK4t+MC4V9pNQ7fEY1buV+g1Lo8jtOaeRRA9dB73yZ5bS6kVJRhgngcieAdijcLEIBj4r5FY+dsZJud6vOs//08LbVZa6UA9cOVw==',key_name='tempest-TestNetworkBasicOps-288144499',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-ad8czmgq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:20:20Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=84cd45c5-937a-4d42-9e80-2aaa7aa558c2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.236 239969 DEBUG nova.network.os_vif_util [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.237 239969 DEBUG nova.network.os_vif_util [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:f3:55,bridge_name='br-int',has_traffic_filtering=True,id=a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2b8e0fe-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.237 239969 DEBUG os_vif [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:f3:55,bridge_name='br-int',has_traffic_filtering=True,id=a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2b8e0fe-aa') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.238 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.239 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.240 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.244 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.245 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa2b8e0fe-aa, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.246 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa2b8e0fe-aa, col_values=(('external_ids', {'iface-id': 'a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:f3:55', 'vm-uuid': '84cd45c5-937a-4d42-9e80-2aaa7aa558c2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.248 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:29 compute-0 NetworkManager[48954]: <info>  [1769444429.2499] manager: (tapa2b8e0fe-aa): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/518)
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.253 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.255 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.256 239969 INFO os_vif [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:f3:55,bridge_name='br-int',has_traffic_filtering=True,id=a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2b8e0fe-aa')
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.351 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.352 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.352 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:45:f3:55, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.353 239969 INFO nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Using config drive
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.381 239969 DEBUG nova.storage.rbd_utils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3159958281' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.722 239969 INFO nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Creating config drive at /var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2/disk.config
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.727 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2_x8ikmt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.887 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2_x8ikmt" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.917 239969 DEBUG nova.storage.rbd_utils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:29 compute-0 nova_compute[239965]: 2026-01-26 16:20:29.920 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2/disk.config 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.067 239969 DEBUG oslo_concurrency.processutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2/disk.config 84cd45c5-937a-4d42-9e80-2aaa7aa558c2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.068 239969 INFO nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Deleting local config drive /var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2/disk.config because it was imported into RBD.
Jan 26 16:20:30 compute-0 kernel: tapa2b8e0fe-aa: entered promiscuous mode
Jan 26 16:20:30 compute-0 NetworkManager[48954]: <info>  [1769444430.1296] manager: (tapa2b8e0fe-aa): new Tun device (/org/freedesktop/NetworkManager/Devices/519)
Jan 26 16:20:30 compute-0 ovn_controller[146046]: 2026-01-26T16:20:30Z|01262|binding|INFO|Claiming lport a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb for this chassis.
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.129 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:30 compute-0 ovn_controller[146046]: 2026-01-26T16:20:30Z|01263|binding|INFO|a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb: Claiming fa:16:3e:45:f3:55 10.100.0.5
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.137 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.152 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:f3:55 10.100.0.5'], port_security=['fa:16:3e:45:f3:55 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '84cd45c5-937a-4d42-9e80-2aaa7aa558c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e66c0628-319c-4933-bfe7-b5cee0b15a77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2629d1bb-a533-4276-ab03-448f2c4761f6, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.154 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb in datapath 94d25833-d97d-4ef4-9c91-97f8d2c0ffa9 bound to our chassis
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.156 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 94d25833-d97d-4ef4-9c91-97f8d2c0ffa9
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.173 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[66276c0b-da7d-43ec-8ac2-d0f42c49a398]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.174 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap94d25833-d1 in ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.177 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap94d25833-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.177 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[04fd8cea-b143-4b1e-9fbc-9a3806a7d69c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.178 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[14fc631b-a725-4a81-9b3d-7ff56ec1e4bf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.191 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c9f3e3-e5e9-4902-b39f-2126918e582a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 systemd-machined[208061]: New machine qemu-147-instance-00000078.
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.236 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:30 compute-0 ovn_controller[146046]: 2026-01-26T16:20:30Z|01264|binding|INFO|Setting lport a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb ovn-installed in OVS
Jan 26 16:20:30 compute-0 ovn_controller[146046]: 2026-01-26T16:20:30Z|01265|binding|INFO|Setting lport a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb up in Southbound
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:30 compute-0 systemd[1]: Started Virtual Machine qemu-147-instance-00000078.
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.247 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06901ce7-a5ab-48a6-84fb-6f6ab773b6d2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 systemd-udevd[346448]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:20:30 compute-0 NetworkManager[48954]: <info>  [1769444430.2701] device (tapa2b8e0fe-aa): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:20:30 compute-0 NetworkManager[48954]: <info>  [1769444430.2709] device (tapa2b8e0fe-aa): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.278 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[40248d8a-f37b-42f0-b9ba-9d2016656434]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 podman[346424]: 2026-01-26 16:20:30.279743479 +0000 UTC m=+0.099273786 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.282 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[64e41422-59ea-4c05-b076-48c9b459cde9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 NetworkManager[48954]: <info>  [1769444430.2838] manager: (tap94d25833-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/520)
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.310 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4e1d0e31-ba47-47d3-b0eb-9575bca8a8f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.312 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3a679582-b354-459d-b285-c1b58b28e401]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 podman[346425]: 2026-01-26 16:20:30.315094372 +0000 UTC m=+0.140536164 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, 
org.label-schema.build-date=20251202, tcib_managed=true)
Jan 26 16:20:30 compute-0 NetworkManager[48954]: <info>  [1769444430.3344] device (tap94d25833-d0): carrier: link connected
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.336 239969 DEBUG nova.network.neutron [req-d8c2c4b7-a122-45dd-ae20-880435309039 req-91ffce5b-7905-48e8-8a6b-ebddd86ad102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Updated VIF entry in instance network info cache for port a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.337 239969 DEBUG nova.network.neutron [req-d8c2c4b7-a122-45dd-ae20-880435309039 req-91ffce5b-7905-48e8-8a6b-ebddd86ad102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Updating instance_info_cache with network_info: [{"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.339 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc26b3c-d6b0-4707-913e-8c6f15fe914a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.354 239969 DEBUG oslo_concurrency.lockutils [req-d8c2c4b7-a122-45dd-ae20-880435309039 req-91ffce5b-7905-48e8-8a6b-ebddd86ad102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.356 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc4efe9-1a3b-48bc-94ee-df4bf43f20cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94d25833-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:22:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592114, 'reachable_time': 24138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346503, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.372 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3c0b4a2b-12e5-4664-9f21-a58e6ae0c711]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:228d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592114, 'tstamp': 592114}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346504, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.392 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[33349279-3a4e-4ba9-83d8-6ed8a0c2f189]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94d25833-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:22:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592114, 'reachable_time': 24138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346505, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.421 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[410b8b38-8fad-4a5b-b32a-e4f4341cbe13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.476 239969 DEBUG nova.compute.manager [req-6ea859f2-f1d4-48b6-a158-c7f8934ca767 req-04bdc500-9b4c-4760-9e3b-ccb465b56a79 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received event network-vif-plugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.476 239969 DEBUG oslo_concurrency.lockutils [req-6ea859f2-f1d4-48b6-a158-c7f8934ca767 req-04bdc500-9b4c-4760-9e3b-ccb465b56a79 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.477 239969 DEBUG oslo_concurrency.lockutils [req-6ea859f2-f1d4-48b6-a158-c7f8934ca767 req-04bdc500-9b4c-4760-9e3b-ccb465b56a79 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.477 239969 DEBUG oslo_concurrency.lockutils [req-6ea859f2-f1d4-48b6-a158-c7f8934ca767 req-04bdc500-9b4c-4760-9e3b-ccb465b56a79 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.477 239969 DEBUG nova.compute.manager [req-6ea859f2-f1d4-48b6-a158-c7f8934ca767 req-04bdc500-9b4c-4760-9e3b-ccb465b56a79 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Processing event network-vif-plugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.483 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0e33a536-4208-479f-b0bf-3e8e0e95b764]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.484 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94d25833-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.485 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.485 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94d25833-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.487 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:30 compute-0 kernel: tap94d25833-d0: entered promiscuous mode
Jan 26 16:20:30 compute-0 NetworkManager[48954]: <info>  [1769444430.4877] manager: (tap94d25833-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/521)
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.489 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.490 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap94d25833-d0, col_values=(('external_ids', {'iface-id': '1e5f3a8e-47d3-4b4a-9225-a0fadd35ac3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.491 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:30 compute-0 ovn_controller[146046]: 2026-01-26T16:20:30Z|01266|binding|INFO|Releasing lport 1e5f3a8e-47d3-4b4a-9225-a0fadd35ac3f from this chassis (sb_readonly=0)
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.504 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.505 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/94d25833-d97d-4ef4-9c91-97f8d2c0ffa9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/94d25833-d97d-4ef4-9c91-97f8d2c0ffa9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.505 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[31186bc6-2514-4aed-85f0-f003196f224e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.506 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/94d25833-d97d-4ef4-9c91-97f8d2c0ffa9.pid.haproxy
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 94d25833-d97d-4ef4-9c91-97f8d2c0ffa9
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:20:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:30.506 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'env', 'PROCESS_TAG=haproxy-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/94d25833-d97d-4ef4-9c91-97f8d2c0ffa9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:20:30 compute-0 ceph-mon[75140]: pgmap v2094: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:20:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.906 239969 DEBUG nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.907 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444430.9071703, 84cd45c5-937a-4d42-9e80-2aaa7aa558c2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.908 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] VM Started (Lifecycle Event)
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.912 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:20:30 compute-0 podman[346575]: 2026-01-26 16:20:30.914840705 +0000 UTC m=+0.058011989 container create 3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.919 239969 INFO nova.virt.libvirt.driver [-] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Instance spawned successfully.
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.919 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.927 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.935 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.940 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.941 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.942 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.942 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.942 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.943 239969 DEBUG nova.virt.libvirt.driver [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:30 compute-0 systemd[1]: Started libpod-conmon-3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8.scope.
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.982 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.983 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444430.9079306, 84cd45c5-937a-4d42-9e80-2aaa7aa558c2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:20:30 compute-0 podman[346575]: 2026-01-26 16:20:30.88885562 +0000 UTC m=+0.032026914 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:20:30 compute-0 nova_compute[239965]: 2026-01-26 16:20:30.983 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] VM Paused (Lifecycle Event)
Jan 26 16:20:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.006 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87dd6431e8701fbd21a890fd278ab45f14b242539cb13f44af670703dd5c8498/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.015 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444430.9117572, 84cd45c5-937a-4d42-9e80-2aaa7aa558c2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.015 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] VM Resumed (Lifecycle Event)
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.022 239969 INFO nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Took 10.95 seconds to spawn the instance on the hypervisor.
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.023 239969 DEBUG nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:31 compute-0 podman[346575]: 2026-01-26 16:20:31.024233177 +0000 UTC m=+0.167404491 container init 3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:20:31 compute-0 podman[346575]: 2026-01-26 16:20:31.029996858 +0000 UTC m=+0.173168142 container start 3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.035 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.039 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.059 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:20:31 compute-0 neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9[346591]: [NOTICE]   (346595) : New worker (346597) forked
Jan 26 16:20:31 compute-0 neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9[346591]: [NOTICE]   (346595) : Loading success.
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.092 239969 INFO nova.compute.manager [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Took 12.04 seconds to build instance.
Jan 26 16:20:31 compute-0 nova_compute[239965]: 2026-01-26 16:20:31.109 239969 DEBUG oslo_concurrency.lockutils [None req-e41ba8f9-579f-41aa-8660-ce5138558572 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 26 16:20:32 compute-0 ceph-mon[75140]: pgmap v2095: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 95 op/s
Jan 26 16:20:32 compute-0 nova_compute[239965]: 2026-01-26 16:20:32.612 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:32 compute-0 nova_compute[239965]: 2026-01-26 16:20:32.739 239969 DEBUG nova.compute.manager [req-576e0a5f-f134-4630-b963-301b48da58e7 req-7a95c7c3-13ac-40b9-8601-b0770dd87ff5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received event network-vif-plugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:32 compute-0 nova_compute[239965]: 2026-01-26 16:20:32.740 239969 DEBUG oslo_concurrency.lockutils [req-576e0a5f-f134-4630-b963-301b48da58e7 req-7a95c7c3-13ac-40b9-8601-b0770dd87ff5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:32 compute-0 nova_compute[239965]: 2026-01-26 16:20:32.740 239969 DEBUG oslo_concurrency.lockutils [req-576e0a5f-f134-4630-b963-301b48da58e7 req-7a95c7c3-13ac-40b9-8601-b0770dd87ff5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:32 compute-0 nova_compute[239965]: 2026-01-26 16:20:32.741 239969 DEBUG oslo_concurrency.lockutils [req-576e0a5f-f134-4630-b963-301b48da58e7 req-7a95c7c3-13ac-40b9-8601-b0770dd87ff5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:32 compute-0 nova_compute[239965]: 2026-01-26 16:20:32.741 239969 DEBUG nova.compute.manager [req-576e0a5f-f134-4630-b963-301b48da58e7 req-7a95c7c3-13ac-40b9-8601-b0770dd87ff5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] No waiting events found dispatching network-vif-plugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:20:32 compute-0 nova_compute[239965]: 2026-01-26 16:20:32.741 239969 WARNING nova.compute.manager [req-576e0a5f-f134-4630-b963-301b48da58e7 req-7a95c7c3-13ac-40b9-8601-b0770dd87ff5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received unexpected event network-vif-plugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb for instance with vm_state active and task_state None.
Jan 26 16:20:33 compute-0 nova_compute[239965]: 2026-01-26 16:20:33.148 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "b1abd864-1de3-45b1-8fbc-3885a84b1363" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:33 compute-0 nova_compute[239965]: 2026-01-26 16:20:33.148 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:33 compute-0 nova_compute[239965]: 2026-01-26 16:20:33.175 239969 DEBUG nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:20:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:33 compute-0 nova_compute[239965]: 2026-01-26 16:20:33.260 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:33 compute-0 nova_compute[239965]: 2026-01-26 16:20:33.260 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:33 compute-0 nova_compute[239965]: 2026-01-26 16:20:33.267 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:20:33 compute-0 nova_compute[239965]: 2026-01-26 16:20:33.267 239969 INFO nova.compute.claims [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:20:34 compute-0 nova_compute[239965]: 2026-01-26 16:20:34.249 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 26 16:20:34 compute-0 ceph-mon[75140]: pgmap v2096: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 26 16:20:35 compute-0 nova_compute[239965]: 2026-01-26 16:20:35.394 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:20:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1894266455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:20:35 compute-0 nova_compute[239965]: 2026-01-26 16:20:35.976 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:35 compute-0 nova_compute[239965]: 2026-01-26 16:20:35.985 239969 DEBUG nova.compute.provider_tree [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:20:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1894266455' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.121 239969 DEBUG nova.scheduler.client.report [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.185 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.925s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.186 239969 DEBUG nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.242 239969 DEBUG nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.242 239969 DEBUG nova.network.neutron [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.453 239969 INFO nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:20:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.548 239969 DEBUG nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.610 239969 DEBUG nova.policy [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ace5f36058684b5782589457d9e15921', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.732 239969 DEBUG nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.735 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.735 239969 INFO nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Creating image(s)
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.767 239969 DEBUG nova.storage.rbd_utils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b1abd864-1de3-45b1-8fbc-3885a84b1363_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.794 239969 DEBUG nova.storage.rbd_utils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b1abd864-1de3-45b1-8fbc-3885a84b1363_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.815 239969 DEBUG nova.storage.rbd_utils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b1abd864-1de3-45b1-8fbc-3885a84b1363_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.818 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.911 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.912 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.913 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.913 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.934 239969 DEBUG nova.storage.rbd_utils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b1abd864-1de3-45b1-8fbc-3885a84b1363_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:36 compute-0 nova_compute[239965]: 2026-01-26 16:20:36.937 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b1abd864-1de3-45b1-8fbc-3885a84b1363_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:37 compute-0 ceph-mon[75140]: pgmap v2097: 305 pgs: 305 active+clean; 134 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:20:37 compute-0 nova_compute[239965]: 2026-01-26 16:20:37.215 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b1abd864-1de3-45b1-8fbc-3885a84b1363_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:37 compute-0 nova_compute[239965]: 2026-01-26 16:20:37.277 239969 DEBUG nova.storage.rbd_utils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] resizing rbd image b1abd864-1de3-45b1-8fbc-3885a84b1363_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:20:37 compute-0 nova_compute[239965]: 2026-01-26 16:20:37.374 239969 DEBUG nova.objects.instance [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'migration_context' on Instance uuid b1abd864-1de3-45b1-8fbc-3885a84b1363 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:20:37 compute-0 nova_compute[239965]: 2026-01-26 16:20:37.407 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:20:37 compute-0 nova_compute[239965]: 2026-01-26 16:20:37.407 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Ensure instance console log exists: /var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:20:37 compute-0 nova_compute[239965]: 2026-01-26 16:20:37.408 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:37 compute-0 nova_compute[239965]: 2026-01-26 16:20:37.409 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:37 compute-0 nova_compute[239965]: 2026-01-26 16:20:37.409 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:37 compute-0 nova_compute[239965]: 2026-01-26 16:20:37.614 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:38 compute-0 NetworkManager[48954]: <info>  [1769444438.2633] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Jan 26 16:20:38 compute-0 NetworkManager[48954]: <info>  [1769444438.2641] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Jan 26 16:20:38 compute-0 nova_compute[239965]: 2026-01-26 16:20:38.279 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:38 compute-0 nova_compute[239965]: 2026-01-26 16:20:38.397 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:38 compute-0 ovn_controller[146046]: 2026-01-26T16:20:38Z|01267|binding|INFO|Releasing lport 1e5f3a8e-47d3-4b4a-9225-a0fadd35ac3f from this chassis (sb_readonly=0)
Jan 26 16:20:38 compute-0 nova_compute[239965]: 2026-01-26 16:20:38.407 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 169 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Jan 26 16:20:38 compute-0 ceph-mon[75140]: pgmap v2098: 305 pgs: 305 active+clean; 169 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Jan 26 16:20:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:38.690 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:20:38 compute-0 nova_compute[239965]: 2026-01-26 16:20:38.691 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:38.692 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:20:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:38.693 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:39 compute-0 nova_compute[239965]: 2026-01-26 16:20:39.251 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:39 compute-0 nova_compute[239965]: 2026-01-26 16:20:39.810 239969 DEBUG nova.compute.manager [req-e50c7a5a-ef9c-44ef-9311-fa7cfa259776 req-95a18d98-29b8-468f-899a-7e74da77cee4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received event network-changed-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:39 compute-0 nova_compute[239965]: 2026-01-26 16:20:39.811 239969 DEBUG nova.compute.manager [req-e50c7a5a-ef9c-44ef-9311-fa7cfa259776 req-95a18d98-29b8-468f-899a-7e74da77cee4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Refreshing instance network info cache due to event network-changed-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:20:39 compute-0 nova_compute[239965]: 2026-01-26 16:20:39.812 239969 DEBUG oslo_concurrency.lockutils [req-e50c7a5a-ef9c-44ef-9311-fa7cfa259776 req-95a18d98-29b8-468f-899a-7e74da77cee4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:20:39 compute-0 nova_compute[239965]: 2026-01-26 16:20:39.813 239969 DEBUG oslo_concurrency.lockutils [req-e50c7a5a-ef9c-44ef-9311-fa7cfa259776 req-95a18d98-29b8-468f-899a-7e74da77cee4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:20:39 compute-0 nova_compute[239965]: 2026-01-26 16:20:39.813 239969 DEBUG nova.network.neutron [req-e50c7a5a-ef9c-44ef-9311-fa7cfa259776 req-95a18d98-29b8-468f-899a-7e74da77cee4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Refreshing network info cache for port a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:20:39 compute-0 nova_compute[239965]: 2026-01-26 16:20:39.824 239969 DEBUG nova.network.neutron [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Successfully created port: 28ddf76d-a2d2-49fc-b567-f9364c3848c4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:20:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 169 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Jan 26 16:20:40 compute-0 ceph-mon[75140]: pgmap v2099: 305 pgs: 305 active+clean; 169 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 99 op/s
Jan 26 16:20:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 180 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 16:20:42 compute-0 ceph-mon[75140]: pgmap v2100: 305 pgs: 305 active+clean; 180 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Jan 26 16:20:42 compute-0 nova_compute[239965]: 2026-01-26 16:20:42.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:43 compute-0 nova_compute[239965]: 2026-01-26 16:20:43.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:43 compute-0 nova_compute[239965]: 2026-01-26 16:20:43.887 239969 DEBUG nova.network.neutron [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Successfully updated port: 28ddf76d-a2d2-49fc-b567-f9364c3848c4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:20:43 compute-0 nova_compute[239965]: 2026-01-26 16:20:43.911 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:20:43 compute-0 nova_compute[239965]: 2026-01-26 16:20:43.911 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquired lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:20:43 compute-0 nova_compute[239965]: 2026-01-26 16:20:43.911 239969 DEBUG nova.network.neutron [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:20:43 compute-0 nova_compute[239965]: 2026-01-26 16:20:43.990 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:44 compute-0 nova_compute[239965]: 2026-01-26 16:20:44.006 239969 DEBUG nova.compute.manager [req-fdb1bc6d-5eb1-4dce-a7e3-dcd7e5b382d8 req-9c498681-0bfb-4e7b-90de-188238bbf94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received event network-changed-28ddf76d-a2d2-49fc-b567-f9364c3848c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:44 compute-0 nova_compute[239965]: 2026-01-26 16:20:44.006 239969 DEBUG nova.compute.manager [req-fdb1bc6d-5eb1-4dce-a7e3-dcd7e5b382d8 req-9c498681-0bfb-4e7b-90de-188238bbf94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Refreshing instance network info cache due to event network-changed-28ddf76d-a2d2-49fc-b567-f9364c3848c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:20:44 compute-0 nova_compute[239965]: 2026-01-26 16:20:44.007 239969 DEBUG oslo_concurrency.lockutils [req-fdb1bc6d-5eb1-4dce-a7e3-dcd7e5b382d8 req-9c498681-0bfb-4e7b-90de-188238bbf94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:20:44 compute-0 nova_compute[239965]: 2026-01-26 16:20:44.175 239969 DEBUG nova.network.neutron [req-e50c7a5a-ef9c-44ef-9311-fa7cfa259776 req-95a18d98-29b8-468f-899a-7e74da77cee4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Updated VIF entry in instance network info cache for port a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:20:44 compute-0 nova_compute[239965]: 2026-01-26 16:20:44.176 239969 DEBUG nova.network.neutron [req-e50c7a5a-ef9c-44ef-9311-fa7cfa259776 req-95a18d98-29b8-468f-899a-7e74da77cee4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Updating instance_info_cache with network_info: [{"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:20:44 compute-0 nova_compute[239965]: 2026-01-26 16:20:44.202 239969 DEBUG oslo_concurrency.lockutils [req-e50c7a5a-ef9c-44ef-9311-fa7cfa259776 req-95a18d98-29b8-468f-899a-7e74da77cee4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:20:44 compute-0 nova_compute[239965]: 2026-01-26 16:20:44.230 239969 DEBUG nova.network.neutron [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:20:44 compute-0 nova_compute[239965]: 2026-01-26 16:20:44.343 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 180 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 26 16:20:44 compute-0 ceph-mon[75140]: pgmap v2101: 305 pgs: 305 active+clean; 180 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 152 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 26 16:20:44 compute-0 ovn_controller[146046]: 2026-01-26T16:20:44Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:f3:55 10.100.0.5
Jan 26 16:20:44 compute-0 ovn_controller[146046]: 2026-01-26T16:20:44Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:f3:55 10.100.0.5
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.563 239969 DEBUG nova.network.neutron [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updating instance_info_cache with network_info: [{"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.589 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Releasing lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.590 239969 DEBUG nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Instance network_info: |[{"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.591 239969 DEBUG oslo_concurrency.lockutils [req-fdb1bc6d-5eb1-4dce-a7e3-dcd7e5b382d8 req-9c498681-0bfb-4e7b-90de-188238bbf94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.591 239969 DEBUG nova.network.neutron [req-fdb1bc6d-5eb1-4dce-a7e3-dcd7e5b382d8 req-9c498681-0bfb-4e7b-90de-188238bbf94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Refreshing network info cache for port 28ddf76d-a2d2-49fc-b567-f9364c3848c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.596 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Start _get_guest_xml network_info=[{"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.600 239969 WARNING nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.607 239969 DEBUG nova.virt.libvirt.host [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.608 239969 DEBUG nova.virt.libvirt.host [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.613 239969 DEBUG nova.virt.libvirt.host [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.613 239969 DEBUG nova.virt.libvirt.host [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.614 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.614 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.614 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.614 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.615 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.615 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.615 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.615 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.615 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.616 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.616 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.616 239969 DEBUG nova.virt.hardware [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:20:45 compute-0 nova_compute[239965]: 2026-01-26 16:20:45.619 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:20:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3730657472' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.220 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.246 239969 DEBUG nova.storage.rbd_utils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b1abd864-1de3-45b1-8fbc-3885a84b1363_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.250 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3730657472' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:20:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 191 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 2.4 MiB/s wr, 51 op/s
Jan 26 16:20:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:46.727 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:1a:41 10.100.0.2 2001:db8::f816:3eff:fe26:1a41'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe26:1a41/64', 'neutron:device_id': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c934df91-a8a2-4191-8341-9c66e4ad8b82, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=15bb6ba7-12a8-4618-8905-b715a4089dde) old=Port_Binding(mac=['fa:16:3e:26:1a:41 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:20:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:46.729 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 15bb6ba7-12a8-4618-8905-b715a4089dde in datapath 323586d4-0f5a-43b7-8831-7a2cc02b73e0 updated
Jan 26 16:20:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:46.732 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 323586d4-0f5a-43b7-8831-7a2cc02b73e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:20:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:46.734 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a380293c-ed13-443c-acda-7c6ab641079e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:20:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3130058438' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.823 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.825 239969 DEBUG nova.virt.libvirt.vif [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:20:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-2077716436',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-2077716436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=121,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAH5HmGgJNWvGdVkmlHJRrePaakFb21tks+KLfYAfRRceef2V+IxZFoFo8gHC+GlnVy+k8FfLLLE2ZPZUqadlze8sD5xc0uoNaRqdi9NffjKHIkwpi9oAc3U2aezCu0lUQ==',key_name='tempest-TestSecurityGroupsBasicOps-1132471326',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-0lsogerf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:20:36Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=b1abd864-1de3-45b1-8fbc-3885a84b1363,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.825 239969 DEBUG nova.network.os_vif_util [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.826 239969 DEBUG nova.network.os_vif_util [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:95:e6,bridge_name='br-int',has_traffic_filtering=True,id=28ddf76d-a2d2-49fc-b567-f9364c3848c4,network=Network(8fbff3e6-13d0-457a-8cfa-fa92dd36bc03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ddf76d-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.827 239969 DEBUG nova.objects.instance [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'pci_devices' on Instance uuid b1abd864-1de3-45b1-8fbc-3885a84b1363 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.841 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <uuid>b1abd864-1de3-45b1-8fbc-3885a84b1363</uuid>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <name>instance-00000079</name>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-2077716436</nova:name>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:20:45</nova:creationTime>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <nova:user uuid="ace5f36058684b5782589457d9e15921">tempest-TestSecurityGroupsBasicOps-75910484-project-member</nova:user>
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <nova:project uuid="1f814bd45d164be6827f7ef54ed07f5f">tempest-TestSecurityGroupsBasicOps-75910484</nova:project>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <nova:port uuid="28ddf76d-a2d2-49fc-b567-f9364c3848c4">
Jan 26 16:20:46 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <system>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <entry name="serial">b1abd864-1de3-45b1-8fbc-3885a84b1363</entry>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <entry name="uuid">b1abd864-1de3-45b1-8fbc-3885a84b1363</entry>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     </system>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <os>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   </os>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <features>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   </features>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b1abd864-1de3-45b1-8fbc-3885a84b1363_disk">
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b1abd864-1de3-45b1-8fbc-3885a84b1363_disk.config">
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:20:46 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:ef:95:e6"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <target dev="tap28ddf76d-a2"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363/console.log" append="off"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <video>
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     </video>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:20:46 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:20:46 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:20:46 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:20:46 compute-0 nova_compute[239965]: </domain>
Jan 26 16:20:46 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.843 239969 DEBUG nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Preparing to wait for external event network-vif-plugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.844 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.844 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.844 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.845 239969 DEBUG nova.virt.libvirt.vif [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:20:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-2077716436',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-2077716436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=121,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAH5HmGgJNWvGdVkmlHJRrePaakFb21tks+KLfYAfRRceef2V+IxZFoFo8gHC+GlnVy+k8FfLLLE2ZPZUqadlze8sD5xc0uoNaRqdi9NffjKHIkwpi9oAc3U2aezCu0lUQ==',key_name='tempest-TestSecurityGroupsBasicOps-1132471326',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-0lsogerf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:20:36Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=b1abd864-1de3-45b1-8fbc-3885a84b1363,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.845 239969 DEBUG nova.network.os_vif_util [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.846 239969 DEBUG nova.network.os_vif_util [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ef:95:e6,bridge_name='br-int',has_traffic_filtering=True,id=28ddf76d-a2d2-49fc-b567-f9364c3848c4,network=Network(8fbff3e6-13d0-457a-8cfa-fa92dd36bc03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ddf76d-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.846 239969 DEBUG os_vif [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:95:e6,bridge_name='br-int',has_traffic_filtering=True,id=28ddf76d-a2d2-49fc-b567-f9364c3848c4,network=Network(8fbff3e6-13d0-457a-8cfa-fa92dd36bc03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ddf76d-a2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.847 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.848 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.848 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.855 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.856 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28ddf76d-a2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.856 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28ddf76d-a2, col_values=(('external_ids', {'iface-id': '28ddf76d-a2d2-49fc-b567-f9364c3848c4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ef:95:e6', 'vm-uuid': 'b1abd864-1de3-45b1-8fbc-3885a84b1363'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.953 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:46 compute-0 NetworkManager[48954]: <info>  [1769444446.9544] manager: (tap28ddf76d-a2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/524)
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.956 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.959 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:46 compute-0 nova_compute[239965]: 2026-01-26 16:20:46.960 239969 INFO os_vif [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ef:95:e6,bridge_name='br-int',has_traffic_filtering=True,id=28ddf76d-a2d2-49fc-b567-f9364c3848c4,network=Network(8fbff3e6-13d0-457a-8cfa-fa92dd36bc03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ddf76d-a2')
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.019 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.020 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.020 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No VIF found with MAC fa:16:3e:ef:95:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.021 239969 INFO nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Using config drive
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.043 239969 DEBUG nova.storage.rbd_utils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b1abd864-1de3-45b1-8fbc-3885a84b1363_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:47 compute-0 ceph-mon[75140]: pgmap v2102: 305 pgs: 305 active+clean; 191 MiB data, 872 MiB used, 59 GiB / 60 GiB avail; 178 KiB/s rd, 2.4 MiB/s wr, 51 op/s
Jan 26 16:20:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3130058438' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.619 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.826 239969 DEBUG nova.network.neutron [req-fdb1bc6d-5eb1-4dce-a7e3-dcd7e5b382d8 req-9c498681-0bfb-4e7b-90de-188238bbf94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updated VIF entry in instance network info cache for port 28ddf76d-a2d2-49fc-b567-f9364c3848c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.827 239969 DEBUG nova.network.neutron [req-fdb1bc6d-5eb1-4dce-a7e3-dcd7e5b382d8 req-9c498681-0bfb-4e7b-90de-188238bbf94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updating instance_info_cache with network_info: [{"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.857 239969 INFO nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Creating config drive at /var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363/disk.config
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.865 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpowsrb_qt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:47 compute-0 nova_compute[239965]: 2026-01-26 16:20:47.903 239969 DEBUG oslo_concurrency.lockutils [req-fdb1bc6d-5eb1-4dce-a7e3-dcd7e5b382d8 req-9c498681-0bfb-4e7b-90de-188238bbf94b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.008 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpowsrb_qt" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.034 239969 DEBUG nova.storage.rbd_utils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b1abd864-1de3-45b1-8fbc-3885a84b1363_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.038 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363/disk.config b1abd864-1de3-45b1-8fbc-3885a84b1363_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.178 239969 DEBUG oslo_concurrency.processutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363/disk.config b1abd864-1de3-45b1-8fbc-3885a84b1363_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.180 239969 INFO nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Deleting local config drive /var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363/disk.config because it was imported into RBD.
Jan 26 16:20:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:48 compute-0 kernel: tap28ddf76d-a2: entered promiscuous mode
Jan 26 16:20:48 compute-0 ovn_controller[146046]: 2026-01-26T16:20:48Z|01268|binding|INFO|Claiming lport 28ddf76d-a2d2-49fc-b567-f9364c3848c4 for this chassis.
Jan 26 16:20:48 compute-0 ovn_controller[146046]: 2026-01-26T16:20:48Z|01269|binding|INFO|28ddf76d-a2d2-49fc-b567-f9364c3848c4: Claiming fa:16:3e:ef:95:e6 10.100.0.4
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.259 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:48 compute-0 NetworkManager[48954]: <info>  [1769444448.2619] manager: (tap28ddf76d-a2): new Tun device (/org/freedesktop/NetworkManager/Devices/525)
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.266 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:95:e6 10.100.0.4'], port_security=['fa:16:3e:ef:95:e6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b1abd864-1de3-45b1-8fbc-3885a84b1363', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7717533c-aa5c-4001-8fe1-767842de9742 c15057a5-d11a-493e-8ca8-7cd42363857a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e52caf4-6569-41d9-823b-c2a38f81cae8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=28ddf76d-a2d2-49fc-b567-f9364c3848c4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.268 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 28ddf76d-a2d2-49fc-b567-f9364c3848c4 in datapath 8fbff3e6-13d0-457a-8cfa-fa92dd36bc03 bound to our chassis
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.269 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8fbff3e6-13d0-457a-8cfa-fa92dd36bc03
Jan 26 16:20:48 compute-0 ovn_controller[146046]: 2026-01-26T16:20:48Z|01270|binding|INFO|Setting lport 28ddf76d-a2d2-49fc-b567-f9364c3848c4 ovn-installed in OVS
Jan 26 16:20:48 compute-0 ovn_controller[146046]: 2026-01-26T16:20:48Z|01271|binding|INFO|Setting lport 28ddf76d-a2d2-49fc-b567-f9364c3848c4 up in Southbound
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.277 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.284 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa87ba8-6552-4a8a-8da7-72dc5e5b68c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.285 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8fbff3e6-11 in ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.287 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8fbff3e6-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.287 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[649993e2-ba1f-43df-a740-5bf9c5b4be2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.288 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2a4321a0-7e09-4e30-8b80-c23282ad1dee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 systemd-udevd[346931]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:20:48 compute-0 systemd-machined[208061]: New machine qemu-148-instance-00000079.
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.302 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[e80cdbe8-88f2-4c1d-a4e9-8ec40f882310]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 NetworkManager[48954]: <info>  [1769444448.3064] device (tap28ddf76d-a2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:20:48 compute-0 NetworkManager[48954]: <info>  [1769444448.3072] device (tap28ddf76d-a2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:20:48 compute-0 systemd[1]: Started Virtual Machine qemu-148-instance-00000079.
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.316 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[417c6b77-a158-452b-8c8a-dd633e27d44b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.353 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[82fb8443-5c21-448d-ad66-82f5a814a7f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.358 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[24d9262b-9bec-4efd-bba7-d412282857b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 NetworkManager[48954]: <info>  [1769444448.3601] manager: (tap8fbff3e6-10): new Veth device (/org/freedesktop/NetworkManager/Devices/526)
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.390 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[920f0f80-a388-4ac8-baac-a23e004a4214]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.392 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c2f107-cb8a-4a0b-a4f7-3415c60964dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 NetworkManager[48954]: <info>  [1769444448.4158] device (tap8fbff3e6-10): carrier: link connected
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.422 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a5b9851c-2cd9-4db0-bfca-85d6c33085a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.442 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e1efae8a-a817-4137-a370-d16bb8e5ed7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8fbff3e6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:3a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 368], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593922, 'reachable_time': 34645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 346964, 'error': None, 'target': 'ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.456 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d731cb-a364-48ac-b265-af4527b15297]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe92:3a1c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 593922, 'tstamp': 593922}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 346965, 'error': None, 'target': 'ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.474 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e3be48bc-e97e-4d00-aa09-2fc230ba3683]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8fbff3e6-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:92:3a:1c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 368], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593922, 'reachable_time': 34645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 346966, 'error': None, 'target': 'ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.519 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c3fd8c-9458-4fcf-b9fd-d5f6fbb5fbb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.594 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e625ccb6-8d54-43d0-a97b-3df89b0acd85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.596 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8fbff3e6-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.596 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:20:48 compute-0 ceph-mon[75140]: pgmap v2103: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.597 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8fbff3e6-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:48 compute-0 kernel: tap8fbff3e6-10: entered promiscuous mode
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:48 compute-0 NetworkManager[48954]: <info>  [1769444448.6004] manager: (tap8fbff3e6-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.602 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8fbff3e6-10, col_values=(('external_ids', {'iface-id': '7ccaaade-f6a1-403a-ae4b-e4af874cc1b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:20:48 compute-0 ovn_controller[146046]: 2026-01-26T16:20:48Z|01272|binding|INFO|Releasing lport 7ccaaade-f6a1-403a-ae4b-e4af874cc1b7 from this chassis (sb_readonly=0)
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.619 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.619 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8fbff3e6-13d0-457a-8cfa-fa92dd36bc03.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8fbff3e6-13d0-457a-8cfa-fa92dd36bc03.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.620 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e7431efb-ff63-426d-a6c8-820e97efff13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.621 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/8fbff3e6-13d0-457a-8cfa-fa92dd36bc03.pid.haproxy
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 8fbff3e6-13d0-457a-8cfa-fa92dd36bc03
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:20:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:48.623 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03', 'env', 'PROCESS_TAG=haproxy-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8fbff3e6-13d0-457a-8cfa-fa92dd36bc03.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:20:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:20:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3232008598' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:20:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:20:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3232008598' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.926 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444448.92539, b1abd864-1de3-45b1-8fbc-3885a84b1363 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.927 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] VM Started (Lifecycle Event)
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.955 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.960 239969 DEBUG nova.compute.manager [req-1d0e9db8-2b79-43d6-b3de-a25e3d926186 req-d2164093-9857-469d-b62d-c95fd2dd8e52 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received event network-vif-plugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.961 239969 DEBUG oslo_concurrency.lockutils [req-1d0e9db8-2b79-43d6-b3de-a25e3d926186 req-d2164093-9857-469d-b62d-c95fd2dd8e52 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.961 239969 DEBUG oslo_concurrency.lockutils [req-1d0e9db8-2b79-43d6-b3de-a25e3d926186 req-d2164093-9857-469d-b62d-c95fd2dd8e52 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.961 239969 DEBUG oslo_concurrency.lockutils [req-1d0e9db8-2b79-43d6-b3de-a25e3d926186 req-d2164093-9857-469d-b62d-c95fd2dd8e52 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.962 239969 DEBUG nova.compute.manager [req-1d0e9db8-2b79-43d6-b3de-a25e3d926186 req-d2164093-9857-469d-b62d-c95fd2dd8e52 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Processing event network-vif-plugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.962 239969 DEBUG nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.963 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444448.9257247, b1abd864-1de3-45b1-8fbc-3885a84b1363 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.963 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] VM Paused (Lifecycle Event)
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.969 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.972 239969 INFO nova.virt.libvirt.driver [-] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Instance spawned successfully.
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.973 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.992 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.998 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444448.9681547, b1abd864-1de3-45b1-8fbc-3885a84b1363 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:20:48 compute-0 nova_compute[239965]: 2026-01-26 16:20:48.999 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] VM Resumed (Lifecycle Event)
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.004 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.004 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.005 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.005 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.006 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.006 239969 DEBUG nova.virt.libvirt.driver [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:20:49 compute-0 podman[347040]: 2026-01-26 16:20:49.017868806 +0000 UTC m=+0.056094721 container create 06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.039 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011162785352918174 of space, bias 1.0, pg target 0.33488356058754526 quantized to 32 (current 32)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010184358788194884 of space, bias 1.0, pg target 0.3055307636458465 quantized to 32 (current 32)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.082228782610587e-07 of space, bias 4.0, pg target 0.0008498674539132704 quantized to 16 (current 16)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.046 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:20:49 compute-0 systemd[1]: Started libpod-conmon-06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea.scope.
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.074 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:20:49 compute-0 podman[347040]: 2026-01-26 16:20:48.988720664 +0000 UTC m=+0.026946609 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.083 239969 INFO nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Took 12.35 seconds to spawn the instance on the hypervisor.
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.085 239969 DEBUG nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:20:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:20:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dc1162b80eaca55e0767b1027ff18b2a7e452f2c2fb2170c5d5cc0fa4cd7a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:20:49 compute-0 podman[347040]: 2026-01-26 16:20:49.11663593 +0000 UTC m=+0.154861875 container init 06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 16:20:49 compute-0 podman[347040]: 2026-01-26 16:20:49.12404509 +0000 UTC m=+0.162271005 container start 06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:20:49 compute-0 neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03[347055]: [NOTICE]   (347059) : New worker (347061) forked
Jan 26 16:20:49 compute-0 neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03[347055]: [NOTICE]   (347059) : Loading success.
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.157 239969 INFO nova.compute.manager [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Took 15.93 seconds to build instance.
Jan 26 16:20:49 compute-0 nova_compute[239965]: 2026-01-26 16:20:49.173 239969 DEBUG oslo_concurrency.lockutils [None req-727ac82a-03b4-418e-bfb7-7ae8a82ac5c8 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3232008598' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:20:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3232008598' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:20:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.6 MiB/s wr, 65 op/s
Jan 26 16:20:50 compute-0 nova_compute[239965]: 2026-01-26 16:20:50.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:50 compute-0 ceph-mon[75140]: pgmap v2104: 305 pgs: 305 active+clean; 213 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.6 MiB/s wr, 65 op/s
Jan 26 16:20:51 compute-0 nova_compute[239965]: 2026-01-26 16:20:51.056 239969 DEBUG nova.compute.manager [req-74a91157-169a-44da-a6ef-4c6844276a72 req-9e74174d-380f-4e7f-9a8b-b3ac1ae8dcaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received event network-vif-plugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:51 compute-0 nova_compute[239965]: 2026-01-26 16:20:51.057 239969 DEBUG oslo_concurrency.lockutils [req-74a91157-169a-44da-a6ef-4c6844276a72 req-9e74174d-380f-4e7f-9a8b-b3ac1ae8dcaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:51 compute-0 nova_compute[239965]: 2026-01-26 16:20:51.057 239969 DEBUG oslo_concurrency.lockutils [req-74a91157-169a-44da-a6ef-4c6844276a72 req-9e74174d-380f-4e7f-9a8b-b3ac1ae8dcaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:51 compute-0 nova_compute[239965]: 2026-01-26 16:20:51.058 239969 DEBUG oslo_concurrency.lockutils [req-74a91157-169a-44da-a6ef-4c6844276a72 req-9e74174d-380f-4e7f-9a8b-b3ac1ae8dcaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:51 compute-0 nova_compute[239965]: 2026-01-26 16:20:51.059 239969 DEBUG nova.compute.manager [req-74a91157-169a-44da-a6ef-4c6844276a72 req-9e74174d-380f-4e7f-9a8b-b3ac1ae8dcaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] No waiting events found dispatching network-vif-plugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:20:51 compute-0 nova_compute[239965]: 2026-01-26 16:20:51.059 239969 WARNING nova.compute.manager [req-74a91157-169a-44da-a6ef-4c6844276a72 req-9e74174d-380f-4e7f-9a8b-b3ac1ae8dcaa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received unexpected event network-vif-plugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 for instance with vm_state active and task_state None.
Jan 26 16:20:51 compute-0 nova_compute[239965]: 2026-01-26 16:20:51.134 239969 INFO nova.compute.manager [None req-c71cc8d2-8fdc-4139-8d6b-3cfb6a491fc1 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Get console output
Jan 26 16:20:51 compute-0 nova_compute[239965]: 2026-01-26 16:20:51.142 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:20:51 compute-0 nova_compute[239965]: 2026-01-26 16:20:51.953 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 139 op/s
Jan 26 16:20:52 compute-0 nova_compute[239965]: 2026-01-26 16:20:52.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:52 compute-0 nova_compute[239965]: 2026-01-26 16:20:52.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:20:52 compute-0 nova_compute[239965]: 2026-01-26 16:20:52.540 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:20:52 compute-0 ceph-mon[75140]: pgmap v2105: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.7 MiB/s wr, 139 op/s
Jan 26 16:20:52 compute-0 nova_compute[239965]: 2026-01-26 16:20:52.663 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.214 239969 DEBUG nova.compute.manager [req-13c2f134-8fbc-4390-96d1-35f915a101c4 req-7b88369f-ffa0-4fb1-be15-11e83b15b49c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received event network-changed-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.215 239969 DEBUG nova.compute.manager [req-13c2f134-8fbc-4390-96d1-35f915a101c4 req-7b88369f-ffa0-4fb1-be15-11e83b15b49c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Refreshing instance network info cache due to event network-changed-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.215 239969 DEBUG oslo_concurrency.lockutils [req-13c2f134-8fbc-4390-96d1-35f915a101c4 req-7b88369f-ffa0-4fb1-be15-11e83b15b49c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.216 239969 DEBUG oslo_concurrency.lockutils [req-13c2f134-8fbc-4390-96d1-35f915a101c4 req-7b88369f-ffa0-4fb1-be15-11e83b15b49c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.216 239969 DEBUG nova.network.neutron [req-13c2f134-8fbc-4390-96d1-35f915a101c4 req-7b88369f-ffa0-4fb1-be15-11e83b15b49c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Refreshing network info cache for port a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:20:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.606 239969 DEBUG nova.compute.manager [req-b0c1a4d8-e3db-4397-a419-8ac474d51fd2 req-176fd62e-a9bf-444b-a630-26b6a61999cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received event network-changed-28ddf76d-a2d2-49fc-b567-f9364c3848c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.607 239969 DEBUG nova.compute.manager [req-b0c1a4d8-e3db-4397-a419-8ac474d51fd2 req-176fd62e-a9bf-444b-a630-26b6a61999cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Refreshing instance network info cache due to event network-changed-28ddf76d-a2d2-49fc-b567-f9364c3848c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.607 239969 DEBUG oslo_concurrency.lockutils [req-b0c1a4d8-e3db-4397-a419-8ac474d51fd2 req-176fd62e-a9bf-444b-a630-26b6a61999cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.607 239969 DEBUG oslo_concurrency.lockutils [req-b0c1a4d8-e3db-4397-a419-8ac474d51fd2 req-176fd62e-a9bf-444b-a630-26b6a61999cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:20:54 compute-0 nova_compute[239965]: 2026-01-26 16:20:54.608 239969 DEBUG nova.network.neutron [req-b0c1a4d8-e3db-4397-a419-8ac474d51fd2 req-176fd62e-a9bf-444b-a630-26b6a61999cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Refreshing network info cache for port 28ddf76d-a2d2-49fc-b567-f9364c3848c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:20:54 compute-0 ceph-mon[75140]: pgmap v2106: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 26 16:20:55 compute-0 nova_compute[239965]: 2026-01-26 16:20:55.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 26 16:20:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:56.532 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:1a:41 10.100.0.2 2001:db8:0:1:f816:3eff:fe26:1a41 2001:db8::f816:3eff:fe26:1a41'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe26:1a41/64 2001:db8::f816:3eff:fe26:1a41/64', 'neutron:device_id': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c934df91-a8a2-4191-8341-9c66e4ad8b82, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=15bb6ba7-12a8-4618-8905-b715a4089dde) old=Port_Binding(mac=['fa:16:3e:26:1a:41 10.100.0.2 2001:db8::f816:3eff:fe26:1a41'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe26:1a41/64', 'neutron:device_id': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:20:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:56.535 156105 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 15bb6ba7-12a8-4618-8905-b715a4089dde in datapath 323586d4-0f5a-43b7-8831-7a2cc02b73e0 updated
Jan 26 16:20:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:56.537 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 323586d4-0f5a-43b7-8831-7a2cc02b73e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:20:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:56.538 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5c56d9d4-fcc5-4e15-ab7c-0836abbcc605]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:20:56 compute-0 ceph-mon[75140]: pgmap v2107: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Jan 26 16:20:56 compute-0 nova_compute[239965]: 2026-01-26 16:20:56.956 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:57 compute-0 nova_compute[239965]: 2026-01-26 16:20:57.665 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:20:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.506 239969 DEBUG nova.network.neutron [req-13c2f134-8fbc-4390-96d1-35f915a101c4 req-7b88369f-ffa0-4fb1-be15-11e83b15b49c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Updated VIF entry in instance network info cache for port a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.507 239969 DEBUG nova.network.neutron [req-13c2f134-8fbc-4390-96d1-35f915a101c4 req-7b88369f-ffa0-4fb1-be15-11e83b15b49c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Updating instance_info_cache with network_info: [{"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:20:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 119 op/s
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.514 239969 DEBUG nova.network.neutron [req-b0c1a4d8-e3db-4397-a419-8ac474d51fd2 req-176fd62e-a9bf-444b-a630-26b6a61999cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updated VIF entry in instance network info cache for port 28ddf76d-a2d2-49fc-b567-f9364c3848c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.515 239969 DEBUG nova.network.neutron [req-b0c1a4d8-e3db-4397-a419-8ac474d51fd2 req-176fd62e-a9bf-444b-a630-26b6a61999cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updating instance_info_cache with network_info: [{"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.541 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.542 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.573 239969 DEBUG oslo_concurrency.lockutils [req-13c2f134-8fbc-4390-96d1-35f915a101c4 req-7b88369f-ffa0-4fb1-be15-11e83b15b49c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-84cd45c5-937a-4d42-9e80-2aaa7aa558c2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:20:58 compute-0 nova_compute[239965]: 2026-01-26 16:20:58.577 239969 DEBUG oslo_concurrency.lockutils [req-b0c1a4d8-e3db-4397-a419-8ac474d51fd2 req-176fd62e-a9bf-444b-a630-26b6a61999cd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:20:58 compute-0 ceph-mon[75140]: pgmap v2108: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.5 MiB/s wr, 119 op/s
Jan 26 16:20:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:20:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2584986991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.096 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.176 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.176 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000078 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.180 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.180 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:20:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:59.244 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:59.245 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:20:59.246 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.347 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.349 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3315MB free_disk=59.92102961242199GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.349 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.350 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.449 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 84cd45c5-937a-4d42-9e80-2aaa7aa558c2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.449 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance b1abd864-1de3-45b1-8fbc-3885a84b1363 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.450 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.450 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:20:59 compute-0 nova_compute[239965]: 2026-01-26 16:20:59.531 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:20:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2584986991' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:21:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174785436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:00 compute-0 nova_compute[239965]: 2026-01-26 16:21:00.080 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:00 compute-0 nova_compute[239965]: 2026-01-26 16:21:00.090 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:21:00 compute-0 nova_compute[239965]: 2026-01-26 16:21:00.115 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:21:00 compute-0 nova_compute[239965]: 2026-01-26 16:21:00.151 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:21:00 compute-0 nova_compute[239965]: 2026-01-26 16:21:00.152 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:21:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Jan 26 16:21:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3174785436' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:00 compute-0 ceph-mon[75140]: pgmap v2109: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 25 KiB/s wr, 74 op/s
Jan 26 16:21:01 compute-0 podman[347117]: 2026-01-26 16:21:01.390640007 +0000 UTC m=+0.071210011 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:21:01 compute-0 podman[347118]: 2026-01-26 16:21:01.498650926 +0000 UTC m=+0.180366188 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:21:01 compute-0 ovn_controller[146046]: 2026-01-26T16:21:01Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ef:95:e6 10.100.0.4
Jan 26 16:21:01 compute-0 ovn_controller[146046]: 2026-01-26T16:21:01Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ef:95:e6 10.100.0.4
Jan 26 16:21:01 compute-0 nova_compute[239965]: 2026-01-26 16:21:01.959 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.256 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "59934c7e-3c2e-4e68-9629-663ecfc6692d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.257 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.280 239969 DEBUG nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.343 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.344 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.352 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.353 239969 INFO nova.compute.claims [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.489 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 242 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Jan 26 16:21:02 compute-0 ceph-mon[75140]: pgmap v2110: 305 pgs: 305 active+clean; 242 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 134 op/s
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.786 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.938 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.939 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:02 compute-0 nova_compute[239965]: 2026-01-26 16:21:02.953 239969 DEBUG nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.030 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:21:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/54683724' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.081 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.087 239969 DEBUG nova.compute.provider_tree [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.101 239969 DEBUG nova.scheduler.client.report [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.118 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.118 239969 DEBUG nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.121 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.132 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.133 239969 INFO nova.compute.claims [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:21:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.229 239969 DEBUG nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.229 239969 DEBUG nova.network.neutron [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.251 239969 INFO nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.277 239969 DEBUG nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.365 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.405 239969 DEBUG nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.406 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.407 239969 INFO nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Creating image(s)
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.427 239969 DEBUG nova.storage.rbd_utils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.448 239969 DEBUG nova.storage.rbd_utils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.471 239969 DEBUG nova.storage.rbd_utils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.475 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.513 239969 DEBUG nova.policy [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.558 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.559 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.559 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.560 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.580 239969 DEBUG nova.storage.rbd_utils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.584 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/54683724' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:21:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3524844842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.908 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.914 239969 DEBUG nova.compute.provider_tree [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.937 239969 DEBUG nova.scheduler.client.report [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.951 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.367s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.974 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:03 compute-0 nova_compute[239965]: 2026-01-26 16:21:03.975 239969 DEBUG nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.005 239969 DEBUG nova.storage.rbd_utils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.037 239969 DEBUG nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.038 239969 DEBUG nova.network.neutron [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.082 239969 INFO nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.091 239969 DEBUG nova.objects.instance [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 59934c7e-3c2e-4e68-9629-663ecfc6692d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.099 239969 DEBUG nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.105 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.106 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Ensure instance console log exists: /var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.106 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.107 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.107 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.188 239969 DEBUG nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.189 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.189 239969 INFO nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Creating image(s)
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.211 239969 DEBUG nova.storage.rbd_utils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.240 239969 DEBUG nova.storage.rbd_utils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.263 239969 DEBUG nova.storage.rbd_utils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.267 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.345 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.347 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.347 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.348 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.375 239969 DEBUG nova.storage.rbd_utils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.379 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 242 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 16:21:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3524844842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:04 compute-0 ceph-mon[75140]: pgmap v2111: 305 pgs: 305 active+clean; 242 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.804 239969 DEBUG nova.policy [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.837 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.887 239969 DEBUG nova.storage.rbd_utils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.958 239969 DEBUG nova.objects.instance [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid 5544e5ad-3c83-4a30-9534-f7a0e6f29a04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.974 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.975 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Ensure instance console log exists: /var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.975 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.975 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:04 compute-0 nova_compute[239965]: 2026-01-26 16:21:04.976 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:05 compute-0 nova_compute[239965]: 2026-01-26 16:21:05.555 239969 DEBUG nova.network.neutron [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Successfully created port: 73d5501c-512b-4eaf-b85c-86251a8a4235 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:21:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 256 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 2.3 MiB/s wr, 74 op/s
Jan 26 16:21:06 compute-0 nova_compute[239965]: 2026-01-26 16:21:06.582 239969 DEBUG nova.network.neutron [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Successfully created port: b27889cb-0f7d-409c-830b-20a79cd51358 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:21:06 compute-0 ceph-mon[75140]: pgmap v2112: 305 pgs: 305 active+clean; 256 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 300 KiB/s rd, 2.3 MiB/s wr, 74 op/s
Jan 26 16:21:06 compute-0 nova_compute[239965]: 2026-01-26 16:21:06.963 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:07 compute-0 sudo[347539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:21:07 compute-0 sudo[347539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:21:07 compute-0 sudo[347539]: pam_unix(sudo:session): session closed for user root
Jan 26 16:21:07 compute-0 nova_compute[239965]: 2026-01-26 16:21:07.462 239969 DEBUG nova.network.neutron [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Successfully updated port: 73d5501c-512b-4eaf-b85c-86251a8a4235 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:21:07 compute-0 nova_compute[239965]: 2026-01-26 16:21:07.484 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:07 compute-0 nova_compute[239965]: 2026-01-26 16:21:07.485 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:07 compute-0 nova_compute[239965]: 2026-01-26 16:21:07.485 239969 DEBUG nova.network.neutron [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:21:07 compute-0 sudo[347564]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:21:07 compute-0 sudo[347564]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:21:07 compute-0 nova_compute[239965]: 2026-01-26 16:21:07.598 239969 DEBUG nova.compute.manager [req-b5ee0576-3126-4c05-be79-0a75b93a70f3 req-fdee2334-7804-485a-9f40-db50da71e2cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received event network-changed-73d5501c-512b-4eaf-b85c-86251a8a4235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:07 compute-0 nova_compute[239965]: 2026-01-26 16:21:07.598 239969 DEBUG nova.compute.manager [req-b5ee0576-3126-4c05-be79-0a75b93a70f3 req-fdee2334-7804-485a-9f40-db50da71e2cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Refreshing instance network info cache due to event network-changed-73d5501c-512b-4eaf-b85c-86251a8a4235. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:21:07 compute-0 nova_compute[239965]: 2026-01-26 16:21:07.599 239969 DEBUG oslo_concurrency.lockutils [req-b5ee0576-3126-4c05-be79-0a75b93a70f3 req-fdee2334-7804-485a-9f40-db50da71e2cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:07 compute-0 nova_compute[239965]: 2026-01-26 16:21:07.721 239969 DEBUG nova.network.neutron [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:21:07 compute-0 nova_compute[239965]: 2026-01-26 16:21:07.787 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:08 compute-0 sudo[347564]: pam_unix(sudo:session): session closed for user root
Jan 26 16:21:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:21:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:21:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:21:08 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:21:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:21:08 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:21:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:21:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:21:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:21:08 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:21:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:21:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:21:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:21:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:21:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:21:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:21:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:21:08 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:21:08 compute-0 sudo[347621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:21:08 compute-0 sudo[347621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:21:08 compute-0 sudo[347621]: pam_unix(sudo:session): session closed for user root
Jan 26 16:21:08 compute-0 sudo[347646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:21:08 compute-0 sudo[347646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:21:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 339 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 116 op/s
Jan 26 16:21:08 compute-0 nova_compute[239965]: 2026-01-26 16:21:08.519 239969 DEBUG nova.network.neutron [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Successfully updated port: b27889cb-0f7d-409c-830b-20a79cd51358 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:21:08 compute-0 nova_compute[239965]: 2026-01-26 16:21:08.535 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-5544e5ad-3c83-4a30-9534-f7a0e6f29a04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:08 compute-0 nova_compute[239965]: 2026-01-26 16:21:08.535 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-5544e5ad-3c83-4a30-9534-f7a0e6f29a04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:08 compute-0 nova_compute[239965]: 2026-01-26 16:21:08.535 239969 DEBUG nova.network.neutron [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:21:08 compute-0 podman[347683]: 2026-01-26 16:21:08.568487914 +0000 UTC m=+0.040195383 container create 30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 16:21:08 compute-0 systemd[1]: Started libpod-conmon-30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716.scope.
Jan 26 16:21:08 compute-0 podman[347683]: 2026-01-26 16:21:08.549420658 +0000 UTC m=+0.021128157 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:21:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:21:08 compute-0 podman[347683]: 2026-01-26 16:21:08.670523737 +0000 UTC m=+0.142231226 container init 30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_almeida, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:21:08 compute-0 podman[347683]: 2026-01-26 16:21:08.676584815 +0000 UTC m=+0.148292284 container start 30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_almeida, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:21:08 compute-0 systemd[1]: libpod-30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716.scope: Deactivated successfully.
Jan 26 16:21:08 compute-0 youthful_almeida[347699]: 167 167
Jan 26 16:21:08 compute-0 conmon[347699]: conmon 30d3853696708005aec5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716.scope/container/memory.events
Jan 26 16:21:08 compute-0 podman[347683]: 2026-01-26 16:21:08.69686489 +0000 UTC m=+0.168572379 container attach 30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_almeida, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:21:08 compute-0 podman[347683]: 2026-01-26 16:21:08.697249259 +0000 UTC m=+0.168956728 container died 30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:21:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e65a2a0d3ff0d41f9491aa118b2fcb5f54c06865acfb70a49b9a1dbeba8ca26-merged.mount: Deactivated successfully.
Jan 26 16:21:08 compute-0 podman[347683]: 2026-01-26 16:21:08.731095667 +0000 UTC m=+0.202803156 container remove 30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_almeida, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:21:08 compute-0 systemd[1]: libpod-conmon-30d3853696708005aec5cea6a3567096b1ea6e16baafaca174c29d2c23992716.scope: Deactivated successfully.
Jan 26 16:21:08 compute-0 nova_compute[239965]: 2026-01-26 16:21:08.776 239969 DEBUG nova.network.neutron [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:21:08 compute-0 podman[347723]: 2026-01-26 16:21:08.90274034 +0000 UTC m=+0.044577360 container create 753eef6b0c0608d47eb825b541ce441531e0b1e140be3bd34373f26b4245515e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ptolemy, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:21:08 compute-0 systemd[1]: Started libpod-conmon-753eef6b0c0608d47eb825b541ce441531e0b1e140be3bd34373f26b4245515e.scope.
Jan 26 16:21:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c34c1fb661b901362e30cb610625ff9aaa8f541e43853e031f8315af9e72829/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:08 compute-0 podman[347723]: 2026-01-26 16:21:08.884771461 +0000 UTC m=+0.026608531 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c34c1fb661b901362e30cb610625ff9aaa8f541e43853e031f8315af9e72829/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c34c1fb661b901362e30cb610625ff9aaa8f541e43853e031f8315af9e72829/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c34c1fb661b901362e30cb610625ff9aaa8f541e43853e031f8315af9e72829/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c34c1fb661b901362e30cb610625ff9aaa8f541e43853e031f8315af9e72829/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:08 compute-0 podman[347723]: 2026-01-26 16:21:08.996969722 +0000 UTC m=+0.138806762 container init 753eef6b0c0608d47eb825b541ce441531e0b1e140be3bd34373f26b4245515e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ptolemy, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:21:09 compute-0 podman[347723]: 2026-01-26 16:21:09.00344095 +0000 UTC m=+0.145277970 container start 753eef6b0c0608d47eb825b541ce441531e0b1e140be3bd34373f26b4245515e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ptolemy, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:21:09 compute-0 podman[347723]: 2026-01-26 16:21:09.00796608 +0000 UTC m=+0.149803100 container attach 753eef6b0c0608d47eb825b541ce441531e0b1e140be3bd34373f26b4245515e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ptolemy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:21:09 compute-0 ceph-mon[75140]: pgmap v2113: 305 pgs: 305 active+clean; 339 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 116 op/s
Jan 26 16:21:09 compute-0 unruffled_ptolemy[347739]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:21:09 compute-0 unruffled_ptolemy[347739]: --> All data devices are unavailable
Jan 26 16:21:09 compute-0 systemd[1]: libpod-753eef6b0c0608d47eb825b541ce441531e0b1e140be3bd34373f26b4245515e.scope: Deactivated successfully.
Jan 26 16:21:09 compute-0 podman[347723]: 2026-01-26 16:21:09.510324273 +0000 UTC m=+0.652161303 container died 753eef6b0c0608d47eb825b541ce441531e0b1e140be3bd34373f26b4245515e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:21:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c34c1fb661b901362e30cb610625ff9aaa8f541e43853e031f8315af9e72829-merged.mount: Deactivated successfully.
Jan 26 16:21:09 compute-0 podman[347723]: 2026-01-26 16:21:09.550852164 +0000 UTC m=+0.692689184 container remove 753eef6b0c0608d47eb825b541ce441531e0b1e140be3bd34373f26b4245515e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ptolemy, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:21:09 compute-0 systemd[1]: libpod-conmon-753eef6b0c0608d47eb825b541ce441531e0b1e140be3bd34373f26b4245515e.scope: Deactivated successfully.
Jan 26 16:21:09 compute-0 sudo[347646]: pam_unix(sudo:session): session closed for user root
Jan 26 16:21:09 compute-0 sudo[347771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:21:09 compute-0 sudo[347771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:21:09 compute-0 sudo[347771]: pam_unix(sudo:session): session closed for user root
Jan 26 16:21:09 compute-0 sudo[347796]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:21:09 compute-0 sudo[347796]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.757 239969 DEBUG nova.network.neutron [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Updating instance_info_cache with network_info: [{"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.786 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.787 239969 DEBUG nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Instance network_info: |[{"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.787 239969 DEBUG oslo_concurrency.lockutils [req-b5ee0576-3126-4c05-be79-0a75b93a70f3 req-fdee2334-7804-485a-9f40-db50da71e2cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.787 239969 DEBUG nova.network.neutron [req-b5ee0576-3126-4c05-be79-0a75b93a70f3 req-fdee2334-7804-485a-9f40-db50da71e2cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Refreshing network info cache for port 73d5501c-512b-4eaf-b85c-86251a8a4235 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.790 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Start _get_guest_xml network_info=[{"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.795 239969 WARNING nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.800 239969 DEBUG nova.virt.libvirt.host [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.800 239969 DEBUG nova.virt.libvirt.host [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.806 239969 DEBUG nova.virt.libvirt.host [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.808 239969 DEBUG nova.virt.libvirt.host [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.808 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.808 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.809 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.809 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.809 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.809 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.810 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.810 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.810 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.810 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.810 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.811 239969 DEBUG nova.virt.hardware [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.814 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.979 239969 DEBUG nova.compute.manager [req-c0b2fd5f-74fa-43c9-a547-370ba491983d req-cec32f90-7ba0-4c9e-b783-4b46cd084504 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Received event network-changed-b27889cb-0f7d-409c-830b-20a79cd51358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.979 239969 DEBUG nova.compute.manager [req-c0b2fd5f-74fa-43c9-a547-370ba491983d req-cec32f90-7ba0-4c9e-b783-4b46cd084504 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Refreshing instance network info cache due to event network-changed-b27889cb-0f7d-409c-830b-20a79cd51358. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.979 239969 DEBUG oslo_concurrency.lockutils [req-c0b2fd5f-74fa-43c9-a547-370ba491983d req-cec32f90-7ba0-4c9e-b783-4b46cd084504 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5544e5ad-3c83-4a30-9534-f7a0e6f29a04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.981 239969 DEBUG nova.network.neutron [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Updating instance_info_cache with network_info: [{"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.996 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-5544e5ad-3c83-4a30-9534-f7a0e6f29a04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.997 239969 DEBUG nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Instance network_info: |[{"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.997 239969 DEBUG oslo_concurrency.lockutils [req-c0b2fd5f-74fa-43c9-a547-370ba491983d req-cec32f90-7ba0-4c9e-b783-4b46cd084504 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5544e5ad-3c83-4a30-9534-f7a0e6f29a04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:09 compute-0 nova_compute[239965]: 2026-01-26 16:21:09.997 239969 DEBUG nova.network.neutron [req-c0b2fd5f-74fa-43c9-a547-370ba491983d req-cec32f90-7ba0-4c9e-b783-4b46cd084504 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Refreshing network info cache for port b27889cb-0f7d-409c-830b-20a79cd51358 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.000 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Start _get_guest_xml network_info=[{"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.005 239969 WARNING nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:21:10 compute-0 podman[347843]: 2026-01-26 16:21:10.016449719 +0000 UTC m=+0.057270611 container create 37ef1dca8612c88f81c5e947c156c0bd1f9dfeff27b630da0aea65312a5f7b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pascal, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.016 239969 DEBUG nova.virt.libvirt.host [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.017 239969 DEBUG nova.virt.libvirt.host [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.021 239969 DEBUG nova.virt.libvirt.host [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.021 239969 DEBUG nova.virt.libvirt.host [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.021 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.021 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.022 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.022 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.022 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.022 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.022 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.022 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.023 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.023 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.023 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.023 239969 DEBUG nova.virt.hardware [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.027 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:10 compute-0 systemd[1]: Started libpod-conmon-37ef1dca8612c88f81c5e947c156c0bd1f9dfeff27b630da0aea65312a5f7b0c.scope.
Jan 26 16:21:10 compute-0 podman[347843]: 2026-01-26 16:21:09.984534489 +0000 UTC m=+0.025355411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:21:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:21:10 compute-0 podman[347843]: 2026-01-26 16:21:10.118929172 +0000 UTC m=+0.159750104 container init 37ef1dca8612c88f81c5e947c156c0bd1f9dfeff27b630da0aea65312a5f7b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pascal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:21:10 compute-0 podman[347843]: 2026-01-26 16:21:10.125961124 +0000 UTC m=+0.166782016 container start 37ef1dca8612c88f81c5e947c156c0bd1f9dfeff27b630da0aea65312a5f7b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pascal, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:21:10 compute-0 podman[347843]: 2026-01-26 16:21:10.129441709 +0000 UTC m=+0.170262611 container attach 37ef1dca8612c88f81c5e947c156c0bd1f9dfeff27b630da0aea65312a5f7b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:21:10 compute-0 amazing_pascal[347869]: 167 167
Jan 26 16:21:10 compute-0 systemd[1]: libpod-37ef1dca8612c88f81c5e947c156c0bd1f9dfeff27b630da0aea65312a5f7b0c.scope: Deactivated successfully.
Jan 26 16:21:10 compute-0 podman[347843]: 2026-01-26 16:21:10.133626511 +0000 UTC m=+0.174447423 container died 37ef1dca8612c88f81c5e947c156c0bd1f9dfeff27b630da0aea65312a5f7b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pascal, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:21:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca9d47bbfc991eb441a65781a58357f233fa132d9f3c613133ecbccfa715826b-merged.mount: Deactivated successfully.
Jan 26 16:21:10 compute-0 podman[347843]: 2026-01-26 16:21:10.171895336 +0000 UTC m=+0.212716228 container remove 37ef1dca8612c88f81c5e947c156c0bd1f9dfeff27b630da0aea65312a5f7b0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:21:10 compute-0 systemd[1]: libpod-conmon-37ef1dca8612c88f81c5e947c156c0bd1f9dfeff27b630da0aea65312a5f7b0c.scope: Deactivated successfully.
Jan 26 16:21:10 compute-0 podman[347912]: 2026-01-26 16:21:10.341355826 +0000 UTC m=+0.038881601 container create b80d96248de765ce233248dd372c4188470701acf226847debdcc8e82b919bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moore, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 16:21:10 compute-0 systemd[1]: Started libpod-conmon-b80d96248de765ce233248dd372c4188470701acf226847debdcc8e82b919bb9.scope.
Jan 26 16:21:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:21:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155474147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.398 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf0f28027ac97a5d6dd1d075d44caddd6f99acb48bb8f361f428da2a4167386/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:10 compute-0 podman[347912]: 2026-01-26 16:21:10.322671129 +0000 UTC m=+0.020196934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf0f28027ac97a5d6dd1d075d44caddd6f99acb48bb8f361f428da2a4167386/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.424 239969 DEBUG nova.storage.rbd_utils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf0f28027ac97a5d6dd1d075d44caddd6f99acb48bb8f361f428da2a4167386/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.430 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1155474147' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7bf0f28027ac97a5d6dd1d075d44caddd6f99acb48bb8f361f428da2a4167386/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:10 compute-0 podman[347912]: 2026-01-26 16:21:10.44468505 +0000 UTC m=+0.142210835 container init b80d96248de765ce233248dd372c4188470701acf226847debdcc8e82b919bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moore, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 16:21:10 compute-0 podman[347912]: 2026-01-26 16:21:10.45366478 +0000 UTC m=+0.151190575 container start b80d96248de765ce233248dd372c4188470701acf226847debdcc8e82b919bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:21:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 339 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 116 op/s
Jan 26 16:21:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:21:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006024242' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.623 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.662 239969 DEBUG nova.storage.rbd_utils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.669 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:10 compute-0 stoic_moore[347929]: {
Jan 26 16:21:10 compute-0 stoic_moore[347929]:     "0": [
Jan 26 16:21:10 compute-0 stoic_moore[347929]:         {
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "devices": [
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "/dev/loop3"
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             ],
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_name": "ceph_lv0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_size": "21470642176",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "name": "ceph_lv0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "tags": {
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.cluster_name": "ceph",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.crush_device_class": "",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.encrypted": "0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.objectstore": "bluestore",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.osd_id": "0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.type": "block",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.vdo": "0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.with_tpm": "0"
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             },
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "type": "block",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "vg_name": "ceph_vg0"
Jan 26 16:21:10 compute-0 stoic_moore[347929]:         }
Jan 26 16:21:10 compute-0 stoic_moore[347929]:     ],
Jan 26 16:21:10 compute-0 stoic_moore[347929]:     "1": [
Jan 26 16:21:10 compute-0 stoic_moore[347929]:         {
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "devices": [
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "/dev/loop4"
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             ],
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_name": "ceph_lv1",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_size": "21470642176",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "name": "ceph_lv1",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "tags": {
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.cluster_name": "ceph",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.crush_device_class": "",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.encrypted": "0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.objectstore": "bluestore",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.osd_id": "1",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.type": "block",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.vdo": "0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.with_tpm": "0"
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             },
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "type": "block",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "vg_name": "ceph_vg1"
Jan 26 16:21:10 compute-0 stoic_moore[347929]:         }
Jan 26 16:21:10 compute-0 stoic_moore[347929]:     ],
Jan 26 16:21:10 compute-0 stoic_moore[347929]:     "2": [
Jan 26 16:21:10 compute-0 stoic_moore[347929]:         {
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "devices": [
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "/dev/loop5"
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             ],
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_name": "ceph_lv2",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_size": "21470642176",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "name": "ceph_lv2",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "tags": {
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.cluster_name": "ceph",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.crush_device_class": "",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.encrypted": "0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.objectstore": "bluestore",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.osd_id": "2",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.type": "block",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.vdo": "0",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:                 "ceph.with_tpm": "0"
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             },
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "type": "block",
Jan 26 16:21:10 compute-0 stoic_moore[347929]:             "vg_name": "ceph_vg2"
Jan 26 16:21:10 compute-0 stoic_moore[347929]:         }
Jan 26 16:21:10 compute-0 stoic_moore[347929]:     ]
Jan 26 16:21:10 compute-0 stoic_moore[347929]: }
Jan 26 16:21:10 compute-0 podman[347912]: 2026-01-26 16:21:10.780594306 +0000 UTC m=+0.478120101 container attach b80d96248de765ce233248dd372c4188470701acf226847debdcc8e82b919bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:21:10 compute-0 systemd[1]: libpod-b80d96248de765ce233248dd372c4188470701acf226847debdcc8e82b919bb9.scope: Deactivated successfully.
Jan 26 16:21:10 compute-0 podman[347912]: 2026-01-26 16:21:10.782449032 +0000 UTC m=+0.479974837 container died b80d96248de765ce233248dd372c4188470701acf226847debdcc8e82b919bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moore, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:21:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:21:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3965431433' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.963 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.967 239969 DEBUG nova.virt.libvirt.vif [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-532358425',display_name='tempest-TestGettingAddress-server-532358425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-532358425',id=122,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQQRmhcVg/kdrEK+wEbWjyXF34qlRdwvPy0otQ55i1stbQy0di0x+Z5lZr6s6TYoyDlUGpSbQ5L4J7FSSwP4IrinKkNibV2VUaY2Vkw+XX8B3+FyRjPoqy7nyNiOJOWbg==',key_name='tempest-TestGettingAddress-1914910139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-z7l003ts',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:21:03Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=59934c7e-3c2e-4e68-9629-663ecfc6692d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.968 239969 DEBUG nova.network.os_vif_util [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.970 239969 DEBUG nova.network.os_vif_util [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:c2:94,bridge_name='br-int',has_traffic_filtering=True,id=73d5501c-512b-4eaf-b85c-86251a8a4235,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d5501c-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:10 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.974 239969 DEBUG nova.objects.instance [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 59934c7e-3c2e-4e68-9629-663ecfc6692d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:10.999 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <uuid>59934c7e-3c2e-4e68-9629-663ecfc6692d</uuid>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <name>instance-0000007a</name>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-532358425</nova:name>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:21:09</nova:creationTime>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:port uuid="73d5501c-512b-4eaf-b85c-86251a8a4235">
Jan 26 16:21:11 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe26:c294" ipVersion="6"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe26:c294" ipVersion="6"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <system>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="serial">59934c7e-3c2e-4e68-9629-663ecfc6692d</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="uuid">59934c7e-3c2e-4e68-9629-663ecfc6692d</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </system>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <os>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </os>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <features>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </features>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/59934c7e-3c2e-4e68-9629-663ecfc6692d_disk">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/59934c7e-3c2e-4e68-9629-663ecfc6692d_disk.config">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:26:c2:94"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <target dev="tap73d5501c-51"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d/console.log" append="off"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <video>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </video>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:21:11 compute-0 nova_compute[239965]: </domain>
Jan 26 16:21:11 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.000 239969 DEBUG nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Preparing to wait for external event network-vif-plugged-73d5501c-512b-4eaf-b85c-86251a8a4235 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.001 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.001 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.002 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.003 239969 DEBUG nova.virt.libvirt.vif [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-532358425',display_name='tempest-TestGettingAddress-server-532358425',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-532358425',id=122,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQQRmhcVg/kdrEK+wEbWjyXF34qlRdwvPy0otQ55i1stbQy0di0x+Z5lZr6s6TYoyDlUGpSbQ5L4J7FSSwP4IrinKkNibV2VUaY2Vkw+XX8B3+FyRjPoqy7nyNiOJOWbg==',key_name='tempest-TestGettingAddress-1914910139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-z7l003ts',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:21:03Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=59934c7e-3c2e-4e68-9629-663ecfc6692d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.004 239969 DEBUG nova.network.os_vif_util [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.005 239969 DEBUG nova.network.os_vif_util [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:c2:94,bridge_name='br-int',has_traffic_filtering=True,id=73d5501c-512b-4eaf-b85c-86251a8a4235,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d5501c-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.006 239969 DEBUG os_vif [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:c2:94,bridge_name='br-int',has_traffic_filtering=True,id=73d5501c-512b-4eaf-b85c-86251a8a4235,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d5501c-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.007 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.008 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.009 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.018 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.018 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap73d5501c-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.020 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap73d5501c-51, col_values=(('external_ids', {'iface-id': '73d5501c-512b-4eaf-b85c-86251a8a4235', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:c2:94', 'vm-uuid': '59934c7e-3c2e-4e68-9629-663ecfc6692d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:11 compute-0 NetworkManager[48954]: <info>  [1769444471.0238] manager: (tap73d5501c-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.022 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.028 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.031 239969 INFO os_vif [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:c2:94,bridge_name='br-int',has_traffic_filtering=True,id=73d5501c-512b-4eaf-b85c-86251a8a4235,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d5501c-51')
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.199 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:21:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:21:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235383227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.200 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.200 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:26:c2:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.200 239969 INFO nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Using config drive
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.223 239969 DEBUG nova.storage.rbd_utils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.230 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.231 239969 DEBUG nova.virt.libvirt.vif [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:21:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1083326071',display_name='tempest-TestNetworkBasicOps-server-1083326071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1083326071',id=123,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnc6ABUVZxmUVIFHzMrSF2wHDgUppuyBm/4Qpc+CBG9cly270KJFUGdE7lRkI23XtPm3Un5iEJliNuUeKKpKlQa5tnmaDsMmvi2B67XnlTjwI3+fuIa/NRYcjeBQIJDRw==',key_name='tempest-TestNetworkBasicOps-923206999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-ldwrec2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:21:04Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=5544e5ad-3c83-4a30-9534-f7a0e6f29a04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.231 239969 DEBUG nova.network.os_vif_util [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.232 239969 DEBUG nova.network.os_vif_util [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:20:40,bridge_name='br-int',has_traffic_filtering=True,id=b27889cb-0f7d-409c-830b-20a79cd51358,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27889cb-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.233 239969 DEBUG nova.objects.instance [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 5544e5ad-3c83-4a30-9534-f7a0e6f29a04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.259 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <uuid>5544e5ad-3c83-4a30-9534-f7a0e6f29a04</uuid>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <name>instance-0000007b</name>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-1083326071</nova:name>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:21:10</nova:creationTime>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <nova:port uuid="b27889cb-0f7d-409c-830b-20a79cd51358">
Jan 26 16:21:11 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <system>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="serial">5544e5ad-3c83-4a30-9534-f7a0e6f29a04</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="uuid">5544e5ad-3c83-4a30-9534-f7a0e6f29a04</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </system>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <os>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </os>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <features>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </features>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk.config">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:21:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:29:20:40"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <target dev="tapb27889cb-0f"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04/console.log" append="off"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <video>
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </video>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:21:11 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:21:11 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:21:11 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:21:11 compute-0 nova_compute[239965]: </domain>
Jan 26 16:21:11 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.260 239969 DEBUG nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Preparing to wait for external event network-vif-plugged-b27889cb-0f7d-409c-830b-20a79cd51358 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.260 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.260 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.260 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.261 239969 DEBUG nova.virt.libvirt.vif [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:21:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1083326071',display_name='tempest-TestNetworkBasicOps-server-1083326071',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1083326071',id=123,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnc6ABUVZxmUVIFHzMrSF2wHDgUppuyBm/4Qpc+CBG9cly270KJFUGdE7lRkI23XtPm3Un5iEJliNuUeKKpKlQa5tnmaDsMmvi2B67XnlTjwI3+fuIa/NRYcjeBQIJDRw==',key_name='tempest-TestNetworkBasicOps-923206999',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-ldwrec2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:21:04Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=5544e5ad-3c83-4a30-9534-f7a0e6f29a04,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.261 239969 DEBUG nova.network.os_vif_util [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.262 239969 DEBUG nova.network.os_vif_util [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:29:20:40,bridge_name='br-int',has_traffic_filtering=True,id=b27889cb-0f7d-409c-830b-20a79cd51358,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27889cb-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.262 239969 DEBUG os_vif [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:20:40,bridge_name='br-int',has_traffic_filtering=True,id=b27889cb-0f7d-409c-830b-20a79cd51358,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27889cb-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.263 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.263 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.264 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.267 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.267 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb27889cb-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.267 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb27889cb-0f, col_values=(('external_ids', {'iface-id': 'b27889cb-0f7d-409c-830b-20a79cd51358', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:29:20:40', 'vm-uuid': '5544e5ad-3c83-4a30-9534-f7a0e6f29a04'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:11 compute-0 NetworkManager[48954]: <info>  [1769444471.2696] manager: (tapb27889cb-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/529)
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.269 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.270 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.275 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.276 239969 INFO os_vif [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:29:20:40,bridge_name='br-int',has_traffic_filtering=True,id=b27889cb-0f7d-409c-830b-20a79cd51358,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27889cb-0f')
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.446 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.447 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.447 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:29:20:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.448 239969 INFO nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Using config drive
Jan 26 16:21:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-7bf0f28027ac97a5d6dd1d075d44caddd6f99acb48bb8f361f428da2a4167386-merged.mount: Deactivated successfully.
Jan 26 16:21:11 compute-0 nova_compute[239965]: 2026-01-26 16:21:11.691 239969 DEBUG nova.storage.rbd_utils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:11 compute-0 ceph-mon[75140]: pgmap v2114: 305 pgs: 305 active+clean; 339 MiB data, 957 MiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 5.7 MiB/s wr, 116 op/s
Jan 26 16:21:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3006024242' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3965431433' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/235383227' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:12 compute-0 podman[347912]: 2026-01-26 16:21:12.037926303 +0000 UTC m=+1.735452088 container remove b80d96248de765ce233248dd372c4188470701acf226847debdcc8e82b919bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:21:12 compute-0 systemd[1]: libpod-conmon-b80d96248de765ce233248dd372c4188470701acf226847debdcc8e82b919bb9.scope: Deactivated successfully.
Jan 26 16:21:12 compute-0 sudo[347796]: pam_unix(sudo:session): session closed for user root
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.113 239969 INFO nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Creating config drive at /var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d/disk.config
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.119 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprml2wgrd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:12 compute-0 sudo[348073]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:21:12 compute-0 sudo[348073]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:21:12 compute-0 sudo[348073]: pam_unix(sudo:session): session closed for user root
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.167 239969 DEBUG nova.network.neutron [req-c0b2fd5f-74fa-43c9-a547-370ba491983d req-cec32f90-7ba0-4c9e-b783-4b46cd084504 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Updated VIF entry in instance network info cache for port b27889cb-0f7d-409c-830b-20a79cd51358. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.169 239969 DEBUG nova.network.neutron [req-c0b2fd5f-74fa-43c9-a547-370ba491983d req-cec32f90-7ba0-4c9e-b783-4b46cd084504 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Updating instance_info_cache with network_info: [{"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.186 239969 DEBUG oslo_concurrency.lockutils [req-c0b2fd5f-74fa-43c9-a547-370ba491983d req-cec32f90-7ba0-4c9e-b783-4b46cd084504 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5544e5ad-3c83-4a30-9534-f7a0e6f29a04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:12 compute-0 sudo[348101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:21:12 compute-0 sudo[348101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.271 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprml2wgrd" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.292 239969 DEBUG nova.storage.rbd_utils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.296 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d/disk.config 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.418 239969 INFO nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Creating config drive at /var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04/disk.config
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.424 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp27kgw5et execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.456 239969 DEBUG oslo_concurrency.processutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d/disk.config 59934c7e-3c2e-4e68-9629-663ecfc6692d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.457 239969 INFO nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Deleting local config drive /var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d/disk.config because it was imported into RBD.
Jan 26 16:21:12 compute-0 podman[348176]: 2026-01-26 16:21:12.505192929 +0000 UTC m=+0.038527372 container create c01463ca4b1bc35bcd7236f62948ec0e1115fa3b404ae58766ac0caa5d37ba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:21:12 compute-0 kernel: tap73d5501c-51: entered promiscuous mode
Jan 26 16:21:12 compute-0 NetworkManager[48954]: <info>  [1769444472.5134] manager: (tap73d5501c-51): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Jan 26 16:21:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.516 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 ovn_controller[146046]: 2026-01-26T16:21:12Z|01273|binding|INFO|Claiming lport 73d5501c-512b-4eaf-b85c-86251a8a4235 for this chassis.
Jan 26 16:21:12 compute-0 ovn_controller[146046]: 2026-01-26T16:21:12Z|01274|binding|INFO|73d5501c-512b-4eaf-b85c-86251a8a4235: Claiming fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.528 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294'], port_security=['fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fe26:c294/64 2001:db8::f816:3eff:fe26:c294/64', 'neutron:device_id': '59934c7e-3c2e-4e68-9629-663ecfc6692d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01961e6c-51dc-4713-a95f-047c09f9eb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c934df91-a8a2-4191-8341-9c66e4ad8b82, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=73d5501c-512b-4eaf-b85c-86251a8a4235) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.529 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 73d5501c-512b-4eaf-b85c-86251a8a4235 in datapath 323586d4-0f5a-43b7-8831-7a2cc02b73e0 bound to our chassis
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.530 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 323586d4-0f5a-43b7-8831-7a2cc02b73e0
Jan 26 16:21:12 compute-0 ovn_controller[146046]: 2026-01-26T16:21:12Z|01275|binding|INFO|Setting lport 73d5501c-512b-4eaf-b85c-86251a8a4235 ovn-installed in OVS
Jan 26 16:21:12 compute-0 ovn_controller[146046]: 2026-01-26T16:21:12Z|01276|binding|INFO|Setting lport 73d5501c-512b-4eaf-b85c-86251a8a4235 up in Southbound
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.542 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.542 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[90624599-bdb7-4bd2-ae5d-3a83a85ab7e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.544 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap323586d4-01 in ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:21:12 compute-0 systemd-udevd[348209]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.546 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap323586d4-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.546 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[63579bee-e99b-47af-b46d-d51d6d29928b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.547 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8431d02d-a18a-42ca-86cf-4f223da010b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 systemd-machined[208061]: New machine qemu-149-instance-0000007a.
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.557 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[16bb529b-fd3d-4d61-b2d5-7b9febe9d712]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 NetworkManager[48954]: <info>  [1769444472.5615] device (tap73d5501c-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:21:12 compute-0 systemd[1]: Started Virtual Machine qemu-149-instance-0000007a.
Jan 26 16:21:12 compute-0 NetworkManager[48954]: <info>  [1769444472.5624] device (tap73d5501c-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.565 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp27kgw5et" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.570 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d20dfd41-c6da-4275-a7ff-c25505212474]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 systemd[1]: Started libpod-conmon-c01463ca4b1bc35bcd7236f62948ec0e1115fa3b404ae58766ac0caa5d37ba2d.scope.
Jan 26 16:21:12 compute-0 podman[348176]: 2026-01-26 16:21:12.488847969 +0000 UTC m=+0.022182432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.598 239969 DEBUG nova.storage.rbd_utils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.605 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ec70e3ce-1a44-4a59-b2ee-cdb4694f09eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.605 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04/disk.config 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:12 compute-0 systemd-udevd[348213]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:21:12 compute-0 NetworkManager[48954]: <info>  [1769444472.6115] manager: (tap323586d4-00): new Veth device (/org/freedesktop/NetworkManager/Devices/531)
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.611 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[75504550-86f0-4429-9019-01a0a0ccee35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 podman[348176]: 2026-01-26 16:21:12.62149603 +0000 UTC m=+0.154830483 container init c01463ca4b1bc35bcd7236f62948ec0e1115fa3b404ae58766ac0caa5d37ba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:21:12 compute-0 podman[348176]: 2026-01-26 16:21:12.630634004 +0000 UTC m=+0.163968447 container start c01463ca4b1bc35bcd7236f62948ec0e1115fa3b404ae58766ac0caa5d37ba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:21:12 compute-0 podman[348176]: 2026-01-26 16:21:12.634570319 +0000 UTC m=+0.167904782 container attach c01463ca4b1bc35bcd7236f62948ec0e1115fa3b404ae58766ac0caa5d37ba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_cartwright, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:21:12 compute-0 keen_cartwright[348226]: 167 167
Jan 26 16:21:12 compute-0 systemd[1]: libpod-c01463ca4b1bc35bcd7236f62948ec0e1115fa3b404ae58766ac0caa5d37ba2d.scope: Deactivated successfully.
Jan 26 16:21:12 compute-0 podman[348176]: 2026-01-26 16:21:12.64154479 +0000 UTC m=+0.174879233 container died c01463ca4b1bc35bcd7236f62948ec0e1115fa3b404ae58766ac0caa5d37ba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_cartwright, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.649 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b4707330-a53f-44ad-a5df-a550f2dc07dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.652 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff42f2d-c990-456b-860d-f7b3594ee888]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c820dc620852f03022aa3f14b207f45311991cbf32b2411c49095292a7d535c0-merged.mount: Deactivated successfully.
Jan 26 16:21:12 compute-0 podman[348176]: 2026-01-26 16:21:12.682762377 +0000 UTC m=+0.216096820 container remove c01463ca4b1bc35bcd7236f62948ec0e1115fa3b404ae58766ac0caa5d37ba2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_cartwright, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:21:12 compute-0 NetworkManager[48954]: <info>  [1769444472.6849] device (tap323586d4-00): carrier: link connected
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.690 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a02d9269-79d6-4ea0-9328-d054c1b0a675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 systemd[1]: libpod-conmon-c01463ca4b1bc35bcd7236f62948ec0e1115fa3b404ae58766ac0caa5d37ba2d.scope: Deactivated successfully.
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.709 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ca03ec3e-0ffe-4cf9-9701-d9a8c454c0f9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap323586d4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:1a:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596349, 'reachable_time': 43930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348300, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.725 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a34cdf6f-7b8c-45f3-87a2-f8b8e230c735]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe26:1a41'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596349, 'tstamp': 596349}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348301, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.741 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[36307bf9-ac2f-4d5e-8d2a-d1fa24108627]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap323586d4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:1a:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596349, 'reachable_time': 43930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 348305, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.752 239969 DEBUG oslo_concurrency.processutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04/disk.config 5544e5ad-3c83-4a30-9534-f7a0e6f29a04_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.753 239969 INFO nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Deleting local config drive /var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04/disk.config because it was imported into RBD.
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.771 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[14bcee05-31b0-4aae-95ab-ff1298ab3ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.827 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.828 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f4cea736-7bac-4e0b-9823-ad2ebe95d7d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.829 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap323586d4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.829 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.830 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap323586d4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:12 compute-0 kernel: tap323586d4-00: entered promiscuous mode
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.831 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 NetworkManager[48954]: <info>  [1769444472.8329] manager: (tap323586d4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.840 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.840 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap323586d4-00, col_values=(('external_ids', {'iface-id': '15bb6ba7-12a8-4618-8905-b715a4089dde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:12 compute-0 ovn_controller[146046]: 2026-01-26T16:21:12Z|01277|binding|INFO|Releasing lport 15bb6ba7-12a8-4618-8905-b715a4089dde from this chassis (sb_readonly=0)
Jan 26 16:21:12 compute-0 NetworkManager[48954]: <info>  [1769444472.8567] manager: (tapb27889cb-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/533)
Jan 26 16:21:12 compute-0 systemd-udevd[348257]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.859 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.859 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/323586d4-0f5a-43b7-8831-7a2cc02b73e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/323586d4-0f5a-43b7-8831-7a2cc02b73e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:21:12 compute-0 kernel: tapb27889cb-0f: entered promiscuous mode
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.861 239969 DEBUG nova.network.neutron [req-b5ee0576-3126-4c05-be79-0a75b93a70f3 req-fdee2334-7804-485a-9f40-db50da71e2cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Updated VIF entry in instance network info cache for port 73d5501c-512b-4eaf-b85c-86251a8a4235. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.861 239969 DEBUG nova.network.neutron [req-b5ee0576-3126-4c05-be79-0a75b93a70f3 req-fdee2334-7804-485a-9f40-db50da71e2cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Updating instance_info_cache with network_info: [{"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 ovn_controller[146046]: 2026-01-26T16:21:12Z|01278|binding|INFO|Claiming lport b27889cb-0f7d-409c-830b-20a79cd51358 for this chassis.
Jan 26 16:21:12 compute-0 ovn_controller[146046]: 2026-01-26T16:21:12Z|01279|binding|INFO|b27889cb-0f7d-409c-830b-20a79cd51358: Claiming fa:16:3e:29:20:40 10.100.0.6
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.865 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[72c588fe-4954-4249-9b16-4fedb15794d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.866 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-323586d4-0f5a-43b7-8831-7a2cc02b73e0
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/323586d4-0f5a-43b7-8831-7a2cc02b73e0.pid.haproxy
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 323586d4-0f5a-43b7-8831-7a2cc02b73e0
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.867 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'env', 'PROCESS_TAG=haproxy-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/323586d4-0f5a-43b7-8831-7a2cc02b73e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:21:12 compute-0 NetworkManager[48954]: <info>  [1769444472.8737] device (tapb27889cb-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:21:12 compute-0 NetworkManager[48954]: <info>  [1769444472.8748] device (tapb27889cb-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:21:12 compute-0 podman[348318]: 2026-01-26 16:21:12.877351151 +0000 UTC m=+0.056972103 container create 1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shtern, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.885 239969 DEBUG oslo_concurrency.lockutils [req-b5ee0576-3126-4c05-be79-0a75b93a70f3 req-fdee2334-7804-485a-9f40-db50da71e2cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:12.891 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:20:40 10.100.0.6'], port_security=['fa:16:3e:29:20:40 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5544e5ad-3c83-4a30-9534-f7a0e6f29a04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '13bca217-5749-44d8-824e-52eb25694519', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2629d1bb-a533-4276-ab03-448f2c4761f6, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b27889cb-0f7d-409c-830b-20a79cd51358) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:21:12 compute-0 ovn_controller[146046]: 2026-01-26T16:21:12Z|01280|binding|INFO|Setting lport b27889cb-0f7d-409c-830b-20a79cd51358 ovn-installed in OVS
Jan 26 16:21:12 compute-0 ovn_controller[146046]: 2026-01-26T16:21:12Z|01281|binding|INFO|Setting lport b27889cb-0f7d-409c-830b-20a79cd51358 up in Southbound
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.893 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 nova_compute[239965]: 2026-01-26 16:21:12.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:12 compute-0 systemd-machined[208061]: New machine qemu-150-instance-0000007b.
Jan 26 16:21:12 compute-0 systemd[1]: Started Virtual Machine qemu-150-instance-0000007b.
Jan 26 16:21:12 compute-0 systemd[1]: Started libpod-conmon-1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b.scope.
Jan 26 16:21:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebedbf6f5ec872a9292e3780e950d4b89065b2c515ccba7c5a9393702a4ec69/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebedbf6f5ec872a9292e3780e950d4b89065b2c515ccba7c5a9393702a4ec69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebedbf6f5ec872a9292e3780e950d4b89065b2c515ccba7c5a9393702a4ec69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eebedbf6f5ec872a9292e3780e950d4b89065b2c515ccba7c5a9393702a4ec69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:12 compute-0 podman[348318]: 2026-01-26 16:21:12.951297418 +0000 UTC m=+0.130918410 container init 1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:21:12 compute-0 podman[348318]: 2026-01-26 16:21:12.857379953 +0000 UTC m=+0.037000925 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:21:12 compute-0 podman[348318]: 2026-01-26 16:21:12.965510005 +0000 UTC m=+0.145130967 container start 1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 16:21:12 compute-0 podman[348318]: 2026-01-26 16:21:12.96981542 +0000 UTC m=+0.149436382 container attach 1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 16:21:13 compute-0 ceph-mon[75140]: pgmap v2115: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 5.7 MiB/s wr, 120 op/s
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.079 239969 DEBUG nova.compute.manager [req-1bdb19a2-925e-4af8-97cb-96fd53bc6791 req-d966b4f0-def3-4410-a770-85566bbc4dd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received event network-vif-plugged-73d5501c-512b-4eaf-b85c-86251a8a4235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.079 239969 DEBUG oslo_concurrency.lockutils [req-1bdb19a2-925e-4af8-97cb-96fd53bc6791 req-d966b4f0-def3-4410-a770-85566bbc4dd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.079 239969 DEBUG oslo_concurrency.lockutils [req-1bdb19a2-925e-4af8-97cb-96fd53bc6791 req-d966b4f0-def3-4410-a770-85566bbc4dd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.080 239969 DEBUG oslo_concurrency.lockutils [req-1bdb19a2-925e-4af8-97cb-96fd53bc6791 req-d966b4f0-def3-4410-a770-85566bbc4dd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.080 239969 DEBUG nova.compute.manager [req-1bdb19a2-925e-4af8-97cb-96fd53bc6791 req-d966b4f0-def3-4410-a770-85566bbc4dd8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Processing event network-vif-plugged-73d5501c-512b-4eaf-b85c-86251a8a4235 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.095 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444473.0952241, 59934c7e-3c2e-4e68-9629-663ecfc6692d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.096 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] VM Started (Lifecycle Event)
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.098 239969 DEBUG nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.101 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.104 239969 INFO nova.virt.libvirt.driver [-] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Instance spawned successfully.
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.105 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.115 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.119 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.125 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.125 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.126 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.126 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.126 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.127 239969 DEBUG nova.virt.libvirt.driver [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.166 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.167 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444473.0954394, 59934c7e-3c2e-4e68-9629-663ecfc6692d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.167 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] VM Paused (Lifecycle Event)
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.198 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.202 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444473.1003568, 59934c7e-3c2e-4e68-9629-663ecfc6692d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.202 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] VM Resumed (Lifecycle Event)
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.219 239969 INFO nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Took 9.81 seconds to spawn the instance on the hypervisor.
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.220 239969 DEBUG nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.228 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.231 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:21:13 compute-0 podman[348428]: 2026-01-26 16:21:13.249434711 +0000 UTC m=+0.047295776 container create 870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.265 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:21:13 compute-0 systemd[1]: Started libpod-conmon-870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3.scope.
Jan 26 16:21:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:21:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a21fb8361d16737fd203049a03797734bb7fec0ca249ea20eda253dba3268b34/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.314 239969 INFO nova.compute.manager [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Took 10.99 seconds to build instance.
Jan 26 16:21:13 compute-0 podman[348428]: 2026-01-26 16:21:13.225150488 +0000 UTC m=+0.023011573 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:21:13 compute-0 podman[348428]: 2026-01-26 16:21:13.322681911 +0000 UTC m=+0.120543006 container init 870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:21:13 compute-0 podman[348428]: 2026-01-26 16:21:13.328767169 +0000 UTC m=+0.126628234 container start 870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.334 239969 DEBUG oslo_concurrency.lockutils [None req-95b04444-6d2a-4f67-b044-921d93173d32 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:13 compute-0 neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0[348452]: [NOTICE]   (348475) : New worker (348481) forked
Jan 26 16:21:13 compute-0 neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0[348452]: [NOTICE]   (348475) : Loading success.
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.399 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b27889cb-0f7d-409c-830b-20a79cd51358 in datapath 94d25833-d97d-4ef4-9c91-97f8d2c0ffa9 unbound from our chassis
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.401 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 94d25833-d97d-4ef4-9c91-97f8d2c0ffa9
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.419 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e47303b6-581d-46d6-8b85-fd614df4ac0c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.456 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[73a417af-d6e4-4d08-b655-84676ef2214a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.459 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d619d41a-4b7b-48e5-9f03-d4a41c0529c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.490 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b22c7eb6-799c-432c-ab3d-13108503147a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.510 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ffc812-a22f-4b5a-a675-429937f3da27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94d25833-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:22:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592114, 'reachable_time': 24138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348552, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.532 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444473.5324006, 5544e5ad-3c83-4a30-9534-f7a0e6f29a04 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.533 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] VM Started (Lifecycle Event)
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.538 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[57a35fab-224d-4f26-b8cb-d55189cc6336]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap94d25833-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592125, 'tstamp': 592125}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348559, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap94d25833-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592128, 'tstamp': 592128}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 348559, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.540 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94d25833-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.542 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.543 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94d25833-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.543 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.543 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap94d25833-d0, col_values=(('external_ids', {'iface-id': '1e5f3a8e-47d3-4b4a-9225-a0fadd35ac3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:13 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:13.543 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:13 compute-0 lvm[348576]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:21:13 compute-0 lvm[348576]: VG ceph_vg0 finished
Jan 26 16:21:13 compute-0 lvm[348578]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:21:13 compute-0 lvm[348578]: VG ceph_vg1 finished
Jan 26 16:21:13 compute-0 lvm[348579]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:21:13 compute-0 lvm[348579]: VG ceph_vg2 finished
Jan 26 16:21:13 compute-0 lvm[348581]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:21:13 compute-0 lvm[348581]: VG ceph_vg2 finished
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.792 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.797 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444473.532607, 5544e5ad-3c83-4a30-9534-f7a0e6f29a04 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.797 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] VM Paused (Lifecycle Event)
Jan 26 16:21:13 compute-0 great_shtern[348353]: {}
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.818 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.822 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:21:13 compute-0 lvm[348583]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:21:13 compute-0 lvm[348583]: VG ceph_vg2 finished
Jan 26 16:21:13 compute-0 systemd[1]: libpod-1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b.scope: Deactivated successfully.
Jan 26 16:21:13 compute-0 systemd[1]: libpod-1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b.scope: Consumed 1.311s CPU time.
Jan 26 16:21:13 compute-0 podman[348318]: 2026-01-26 16:21:13.838037941 +0000 UTC m=+1.017658933 container died 1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shtern, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:21:13 compute-0 nova_compute[239965]: 2026-01-26 16:21:13.846 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-eebedbf6f5ec872a9292e3780e950d4b89065b2c515ccba7c5a9393702a4ec69-merged.mount: Deactivated successfully.
Jan 26 16:21:13 compute-0 podman[348318]: 2026-01-26 16:21:13.900999049 +0000 UTC m=+1.080620001 container remove 1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_shtern, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:21:13 compute-0 systemd[1]: libpod-conmon-1d651f63dba530f12a104202c1a36799487f095a00d4b83ec96af2f8641ae80b.scope: Deactivated successfully.
Jan 26 16:21:13 compute-0 sudo[348101]: pam_unix(sudo:session): session closed for user root
Jan 26 16:21:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:21:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:21:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:21:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:21:14 compute-0 sudo[348596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:21:14 compute-0 sudo[348596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:21:14 compute-0 sudo[348596]: pam_unix(sudo:session): session closed for user root
Jan 26 16:21:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Jan 26 16:21:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:21:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:21:14 compute-0 ceph-mon[75140]: pgmap v2116: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 69 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.174 239969 DEBUG nova.compute.manager [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received event network-vif-plugged-73d5501c-512b-4eaf-b85c-86251a8a4235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.174 239969 DEBUG oslo_concurrency.lockutils [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.175 239969 DEBUG oslo_concurrency.lockutils [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.175 239969 DEBUG oslo_concurrency.lockutils [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.176 239969 DEBUG nova.compute.manager [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] No waiting events found dispatching network-vif-plugged-73d5501c-512b-4eaf-b85c-86251a8a4235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.176 239969 WARNING nova.compute.manager [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received unexpected event network-vif-plugged-73d5501c-512b-4eaf-b85c-86251a8a4235 for instance with vm_state active and task_state None.
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.176 239969 DEBUG nova.compute.manager [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Received event network-vif-plugged-b27889cb-0f7d-409c-830b-20a79cd51358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.177 239969 DEBUG oslo_concurrency.lockutils [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.177 239969 DEBUG oslo_concurrency.lockutils [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.177 239969 DEBUG oslo_concurrency.lockutils [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.178 239969 DEBUG nova.compute.manager [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Processing event network-vif-plugged-b27889cb-0f7d-409c-830b-20a79cd51358 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.178 239969 DEBUG nova.compute.manager [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Received event network-vif-plugged-b27889cb-0f7d-409c-830b-20a79cd51358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.179 239969 DEBUG oslo_concurrency.lockutils [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.179 239969 DEBUG oslo_concurrency.lockutils [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.180 239969 DEBUG oslo_concurrency.lockutils [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.180 239969 DEBUG nova.compute.manager [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] No waiting events found dispatching network-vif-plugged-b27889cb-0f7d-409c-830b-20a79cd51358 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.180 239969 WARNING nova.compute.manager [req-6e33e5bf-d59d-494b-9dec-0d36ed254dcc req-1b15ba05-eb39-4238-b6c8-7f7288647f89 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Received unexpected event network-vif-plugged-b27889cb-0f7d-409c-830b-20a79cd51358 for instance with vm_state building and task_state spawning.
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.182 239969 DEBUG nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.204 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.204 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444475.2038493, 5544e5ad-3c83-4a30-9534-f7a0e6f29a04 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.205 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] VM Resumed (Lifecycle Event)
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.210 239969 INFO nova.virt.libvirt.driver [-] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Instance spawned successfully.
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.210 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.222 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.227 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.231 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.231 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.231 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.232 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.232 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.232 239969 DEBUG nova.virt.libvirt.driver [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.256 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.293 239969 INFO nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Took 11.10 seconds to spawn the instance on the hypervisor.
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.293 239969 DEBUG nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.369 239969 INFO nova.compute.manager [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Took 12.36 seconds to build instance.
Jan 26 16:21:15 compute-0 nova_compute[239965]: 2026-01-26 16:21:15.382 239969 DEBUG oslo_concurrency.lockutils [None req-3ac219ce-8097-4852-8764-a3f8931a872a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.444s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:16 compute-0 nova_compute[239965]: 2026-01-26 16:21:16.310 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 3.6 MiB/s wr, 67 op/s
Jan 26 16:21:16 compute-0 ceph-mon[75140]: pgmap v2117: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 3.6 MiB/s wr, 67 op/s
Jan 26 16:21:17 compute-0 nova_compute[239965]: 2026-01-26 16:21:17.833 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:18 compute-0 nova_compute[239965]: 2026-01-26 16:21:18.315 239969 DEBUG nova.compute.manager [req-104084e9-c81a-4b41-a94d-08b577901f58 req-4f976298-4c2a-42b8-80e4-b152d8d7f48d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received event network-changed-73d5501c-512b-4eaf-b85c-86251a8a4235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:18 compute-0 nova_compute[239965]: 2026-01-26 16:21:18.316 239969 DEBUG nova.compute.manager [req-104084e9-c81a-4b41-a94d-08b577901f58 req-4f976298-4c2a-42b8-80e4-b152d8d7f48d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Refreshing instance network info cache due to event network-changed-73d5501c-512b-4eaf-b85c-86251a8a4235. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:21:18 compute-0 nova_compute[239965]: 2026-01-26 16:21:18.316 239969 DEBUG oslo_concurrency.lockutils [req-104084e9-c81a-4b41-a94d-08b577901f58 req-4f976298-4c2a-42b8-80e4-b152d8d7f48d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:18 compute-0 nova_compute[239965]: 2026-01-26 16:21:18.317 239969 DEBUG oslo_concurrency.lockutils [req-104084e9-c81a-4b41-a94d-08b577901f58 req-4f976298-4c2a-42b8-80e4-b152d8d7f48d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:18 compute-0 nova_compute[239965]: 2026-01-26 16:21:18.317 239969 DEBUG nova.network.neutron [req-104084e9-c81a-4b41-a94d-08b577901f58 req-4f976298-4c2a-42b8-80e4-b152d8d7f48d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Refreshing network info cache for port 73d5501c-512b-4eaf-b85c-86251a8a4235 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:21:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.4 MiB/s wr, 188 op/s
Jan 26 16:21:18 compute-0 ceph-mon[75140]: pgmap v2118: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.4 MiB/s wr, 188 op/s
Jan 26 16:21:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 147 op/s
Jan 26 16:21:20 compute-0 nova_compute[239965]: 2026-01-26 16:21:20.566 239969 DEBUG nova.compute.manager [req-d644492d-591c-495c-8273-98ce3b49d72b req-e61933ec-c813-4541-a44a-6f48e4b368d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Received event network-changed-b27889cb-0f7d-409c-830b-20a79cd51358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:20 compute-0 nova_compute[239965]: 2026-01-26 16:21:20.567 239969 DEBUG nova.compute.manager [req-d644492d-591c-495c-8273-98ce3b49d72b req-e61933ec-c813-4541-a44a-6f48e4b368d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Refreshing instance network info cache due to event network-changed-b27889cb-0f7d-409c-830b-20a79cd51358. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:21:20 compute-0 nova_compute[239965]: 2026-01-26 16:21:20.567 239969 DEBUG oslo_concurrency.lockutils [req-d644492d-591c-495c-8273-98ce3b49d72b req-e61933ec-c813-4541-a44a-6f48e4b368d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5544e5ad-3c83-4a30-9534-f7a0e6f29a04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:20 compute-0 nova_compute[239965]: 2026-01-26 16:21:20.568 239969 DEBUG oslo_concurrency.lockutils [req-d644492d-591c-495c-8273-98ce3b49d72b req-e61933ec-c813-4541-a44a-6f48e4b368d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5544e5ad-3c83-4a30-9534-f7a0e6f29a04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:20 compute-0 nova_compute[239965]: 2026-01-26 16:21:20.568 239969 DEBUG nova.network.neutron [req-d644492d-591c-495c-8273-98ce3b49d72b req-e61933ec-c813-4541-a44a-6f48e4b368d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Refreshing network info cache for port b27889cb-0f7d-409c-830b-20a79cd51358 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:21:20 compute-0 nova_compute[239965]: 2026-01-26 16:21:20.632 239969 DEBUG nova.network.neutron [req-104084e9-c81a-4b41-a94d-08b577901f58 req-4f976298-4c2a-42b8-80e4-b152d8d7f48d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Updated VIF entry in instance network info cache for port 73d5501c-512b-4eaf-b85c-86251a8a4235. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:21:20 compute-0 nova_compute[239965]: 2026-01-26 16:21:20.633 239969 DEBUG nova.network.neutron [req-104084e9-c81a-4b41-a94d-08b577901f58 req-4f976298-4c2a-42b8-80e4-b152d8d7f48d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Updating instance_info_cache with network_info: [{"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:20 compute-0 nova_compute[239965]: 2026-01-26 16:21:20.659 239969 DEBUG oslo_concurrency.lockutils [req-104084e9-c81a-4b41-a94d-08b577901f58 req-4f976298-4c2a-42b8-80e4-b152d8d7f48d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:21 compute-0 nova_compute[239965]: 2026-01-26 16:21:21.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:21 compute-0 ceph-mon[75140]: pgmap v2119: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 147 op/s
Jan 26 16:21:21 compute-0 nova_compute[239965]: 2026-01-26 16:21:21.997 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:22 compute-0 nova_compute[239965]: 2026-01-26 16:21:22.229 239969 DEBUG nova.network.neutron [req-d644492d-591c-495c-8273-98ce3b49d72b req-e61933ec-c813-4541-a44a-6f48e4b368d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Updated VIF entry in instance network info cache for port b27889cb-0f7d-409c-830b-20a79cd51358. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:21:22 compute-0 nova_compute[239965]: 2026-01-26 16:21:22.230 239969 DEBUG nova.network.neutron [req-d644492d-591c-495c-8273-98ce3b49d72b req-e61933ec-c813-4541-a44a-6f48e4b368d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Updating instance_info_cache with network_info: [{"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:22 compute-0 nova_compute[239965]: 2026-01-26 16:21:22.249 239969 DEBUG oslo_concurrency.lockutils [req-d644492d-591c-495c-8273-98ce3b49d72b req-e61933ec-c813-4541-a44a-6f48e4b368d1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5544e5ad-3c83-4a30-9534-f7a0e6f29a04" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 148 op/s
Jan 26 16:21:22 compute-0 nova_compute[239965]: 2026-01-26 16:21:22.836 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:23 compute-0 ceph-mon[75140]: pgmap v2120: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 31 KiB/s wr, 148 op/s
Jan 26 16:21:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:21:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.1 total, 600.0 interval
                                           Cumulative writes: 36K writes, 148K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 36K writes, 13K syncs, 2.82 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4450 writes, 18K keys, 4450 commit groups, 1.0 writes per commit group, ingest: 22.46 MB, 0.04 MB/s
                                           Interval WAL: 4450 writes, 1677 syncs, 2.65 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:21:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 144 op/s
Jan 26 16:21:25 compute-0 ceph-mon[75140]: pgmap v2121: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 144 op/s
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.016 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquiring lock "11befbe4-5dfe-4019-9aae-812949d43f40" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.017 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.042 239969 DEBUG nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.119 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.120 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.135 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.135 239969 INFO nova.compute.claims [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.316 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.318 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 146 op/s
Jan 26 16:21:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:21:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1146637748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.958 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.639s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.969 239969 DEBUG nova.compute.provider_tree [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:21:26 compute-0 nova_compute[239965]: 2026-01-26 16:21:26.989 239969 DEBUG nova.scheduler.client.report [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.016 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.017 239969 DEBUG nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.080 239969 DEBUG nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.081 239969 DEBUG nova.network.neutron [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.103 239969 INFO nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.125 239969 DEBUG nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.218 239969 DEBUG nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.221 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.222 239969 INFO nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Creating image(s)
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.256 239969 DEBUG nova.storage.rbd_utils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] rbd image 11befbe4-5dfe-4019-9aae-812949d43f40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.282 239969 DEBUG nova.storage.rbd_utils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] rbd image 11befbe4-5dfe-4019-9aae-812949d43f40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.305 239969 DEBUG nova.storage.rbd_utils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] rbd image 11befbe4-5dfe-4019-9aae-812949d43f40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.309 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:27 compute-0 ovn_controller[146046]: 2026-01-26T16:21:27Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:26:c2:94 10.100.0.3
Jan 26 16:21:27 compute-0 ovn_controller[146046]: 2026-01-26T16:21:27Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:26:c2:94 10.100.0.3
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.379 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.380 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.381 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.381 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.401 239969 DEBUG nova.storage.rbd_utils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] rbd image 11befbe4-5dfe-4019-9aae-812949d43f40_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.404 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 11befbe4-5dfe-4019-9aae-812949d43f40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.575 239969 DEBUG nova.policy [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7494b8fe226247c9accec02ae2beee97', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '7e6e29d3d46a48db9f596eca01b3ecf4', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:21:27 compute-0 nova_compute[239965]: 2026-01-26 16:21:27.841 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:28 compute-0 ceph-mon[75140]: pgmap v2122: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 30 KiB/s wr, 146 op/s
Jan 26 16:21:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1146637748' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:28 compute-0 nova_compute[239965]: 2026-01-26 16:21:28.329 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 11befbe4-5dfe-4019-9aae-812949d43f40_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.926s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:28 compute-0 nova_compute[239965]: 2026-01-26 16:21:28.407 239969 DEBUG nova.storage.rbd_utils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] resizing rbd image 11befbe4-5dfe-4019-9aae-812949d43f40_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:21:28 compute-0 nova_compute[239965]: 2026-01-26 16:21:28.517 239969 DEBUG nova.objects.instance [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lazy-loading 'migration_context' on Instance uuid 11befbe4-5dfe-4019-9aae-812949d43f40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 360 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.0 MiB/s wr, 179 op/s
Jan 26 16:21:28 compute-0 nova_compute[239965]: 2026-01-26 16:21:28.532 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:21:28 compute-0 nova_compute[239965]: 2026-01-26 16:21:28.532 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Ensure instance console log exists: /var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:21:28 compute-0 nova_compute[239965]: 2026-01-26 16:21:28.533 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:28 compute-0 nova_compute[239965]: 2026-01-26 16:21:28.534 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:28 compute-0 nova_compute[239965]: 2026-01-26 16:21:28.534 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:21:28
Jan 26 16:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.rgw.root']
Jan 26 16:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:21:29 compute-0 ceph-mon[75140]: pgmap v2123: 305 pgs: 305 active+clean; 360 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 2.0 MiB/s wr, 179 op/s
Jan 26 16:21:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:21:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.4 total, 600.0 interval
                                           Cumulative writes: 40K writes, 155K keys, 40K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 40K writes, 14K syncs, 2.74 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5044 writes, 20K keys, 5044 commit groups, 1.0 writes per commit group, ingest: 22.74 MB, 0.04 MB/s
                                           Interval WAL: 5044 writes, 1955 syncs, 2.58 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:21:29 compute-0 nova_compute[239965]: 2026-01-26 16:21:29.852 239969 DEBUG nova.network.neutron [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Successfully created port: 8d829ab8-dfca-4c55-9775-0be98c68aab4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 360 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:21:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:21:31 compute-0 nova_compute[239965]: 2026-01-26 16:21:31.318 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:31 compute-0 ceph-mon[75140]: pgmap v2124: 305 pgs: 305 active+clean; 360 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 299 KiB/s rd, 2.0 MiB/s wr, 42 op/s
Jan 26 16:21:31 compute-0 ovn_controller[146046]: 2026-01-26T16:21:31Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:29:20:40 10.100.0.6
Jan 26 16:21:31 compute-0 ovn_controller[146046]: 2026-01-26T16:21:31Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:29:20:40 10.100.0.6
Jan 26 16:21:31 compute-0 nova_compute[239965]: 2026-01-26 16:21:31.686 239969 DEBUG nova.network.neutron [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Successfully updated port: 8d829ab8-dfca-4c55-9775-0be98c68aab4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:21:31 compute-0 nova_compute[239965]: 2026-01-26 16:21:31.714 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquiring lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:31 compute-0 nova_compute[239965]: 2026-01-26 16:21:31.715 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquired lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:31 compute-0 nova_compute[239965]: 2026-01-26 16:21:31.715 239969 DEBUG nova.network.neutron [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:21:31 compute-0 nova_compute[239965]: 2026-01-26 16:21:31.817 239969 DEBUG nova.compute.manager [req-49840e0a-121d-4571-b86f-22cb51179787 req-ade1e293-29d7-4ce8-a77d-e422b3e6e308 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received event network-changed-8d829ab8-dfca-4c55-9775-0be98c68aab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:31 compute-0 nova_compute[239965]: 2026-01-26 16:21:31.818 239969 DEBUG nova.compute.manager [req-49840e0a-121d-4571-b86f-22cb51179787 req-ade1e293-29d7-4ce8-a77d-e422b3e6e308 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Refreshing instance network info cache due to event network-changed-8d829ab8-dfca-4c55-9775-0be98c68aab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:21:31 compute-0 nova_compute[239965]: 2026-01-26 16:21:31.818 239969 DEBUG oslo_concurrency.lockutils [req-49840e0a-121d-4571-b86f-22cb51179787 req-ade1e293-29d7-4ce8-a77d-e422b3e6e308 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:31 compute-0 nova_compute[239965]: 2026-01-26 16:21:31.910 239969 DEBUG nova.network.neutron [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:21:32 compute-0 podman[348809]: 2026-01-26 16:21:32.411672621 +0000 UTC m=+0.087171422 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:21:32 compute-0 podman[348810]: 2026-01-26 16:21:32.453886222 +0000 UTC m=+0.128086431 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:21:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 6.0 MiB/s wr, 155 op/s
Jan 26 16:21:32 compute-0 ceph-mon[75140]: pgmap v2125: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 6.0 MiB/s wr, 155 op/s
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.772 239969 DEBUG nova.network.neutron [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Updating instance_info_cache with network_info: [{"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.798 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Releasing lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.799 239969 DEBUG nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Instance network_info: |[{"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.800 239969 DEBUG oslo_concurrency.lockutils [req-49840e0a-121d-4571-b86f-22cb51179787 req-ade1e293-29d7-4ce8-a77d-e422b3e6e308 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.800 239969 DEBUG nova.network.neutron [req-49840e0a-121d-4571-b86f-22cb51179787 req-ade1e293-29d7-4ce8-a77d-e422b3e6e308 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Refreshing network info cache for port 8d829ab8-dfca-4c55-9775-0be98c68aab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.806 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Start _get_guest_xml network_info=[{"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.811 239969 WARNING nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.817 239969 DEBUG nova.virt.libvirt.host [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.818 239969 DEBUG nova.virt.libvirt.host [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.824 239969 DEBUG nova.virt.libvirt.host [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.824 239969 DEBUG nova.virt.libvirt.host [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.825 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.825 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.826 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.826 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.826 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.826 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.827 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.827 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.827 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.828 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.828 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.828 239969 DEBUG nova.virt.hardware [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.832 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:32 compute-0 nova_compute[239965]: 2026-01-26 16:21:32.878 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:21:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3532341529' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:33 compute-0 nova_compute[239965]: 2026-01-26 16:21:33.422 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:33 compute-0 nova_compute[239965]: 2026-01-26 16:21:33.455 239969 DEBUG nova.storage.rbd_utils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] rbd image 11befbe4-5dfe-4019-9aae-812949d43f40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:33 compute-0 nova_compute[239965]: 2026-01-26 16:21:33.459 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3532341529' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.015 239969 DEBUG nova.network.neutron [req-49840e0a-121d-4571-b86f-22cb51179787 req-ade1e293-29d7-4ce8-a77d-e422b3e6e308 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Updated VIF entry in instance network info cache for port 8d829ab8-dfca-4c55-9775-0be98c68aab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.017 239969 DEBUG nova.network.neutron [req-49840e0a-121d-4571-b86f-22cb51179787 req-ade1e293-29d7-4ce8-a77d-e422b3e6e308 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Updating instance_info_cache with network_info: [{"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:21:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596469504' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.036 239969 DEBUG oslo_concurrency.lockutils [req-49840e0a-121d-4571-b86f-22cb51179787 req-ade1e293-29d7-4ce8-a77d-e422b3e6e308 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.045 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.048 239969 DEBUG nova.virt.libvirt.vif [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:21:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-583381645-access_point-28386830',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-583381645-access_point-28386830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-583381645-acc',id=124,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMRKXEm5Y5llTOC9cbXvKrIQsHtbd/UG4rVHmOjmygEUQvHShGZ/21+93d2lJj9E5vpMWpihy3NWJyDHc6jpN2SJp0Z6+07v3X1U/zidwO26JsBXKa8n76jvJ3WGYRUsdw==',key_name='tempest-TestSecurityGroupsBasicOps-1479257615',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e6e29d3d46a48db9f596eca01b3ecf4',ramdisk_id='',reservation_id='r-610t2003',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-583381645',owner_user_name='tempest-TestSecurityGroupsBasicOps-583381645-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:21:27Z,user_data=None,user_id='7494b8fe226247c9accec02ae2beee97',uuid=11befbe4-5dfe-4019-9aae-812949d43f40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.049 239969 DEBUG nova.network.os_vif_util [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Converting VIF {"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.051 239969 DEBUG nova.network.os_vif_util [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:78:9e,bridge_name='br-int',has_traffic_filtering=True,id=8d829ab8-dfca-4c55-9775-0be98c68aab4,network=Network(e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d829ab8-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.055 239969 DEBUG nova.objects.instance [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lazy-loading 'pci_devices' on Instance uuid 11befbe4-5dfe-4019-9aae-812949d43f40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.074 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <uuid>11befbe4-5dfe-4019-9aae-812949d43f40</uuid>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <name>instance-0000007c</name>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-583381645-access_point-28386830</nova:name>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:21:32</nova:creationTime>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <nova:user uuid="7494b8fe226247c9accec02ae2beee97">tempest-TestSecurityGroupsBasicOps-583381645-project-member</nova:user>
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <nova:project uuid="7e6e29d3d46a48db9f596eca01b3ecf4">tempest-TestSecurityGroupsBasicOps-583381645</nova:project>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <nova:port uuid="8d829ab8-dfca-4c55-9775-0be98c68aab4">
Jan 26 16:21:34 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <system>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <entry name="serial">11befbe4-5dfe-4019-9aae-812949d43f40</entry>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <entry name="uuid">11befbe4-5dfe-4019-9aae-812949d43f40</entry>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     </system>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <os>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   </os>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <features>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   </features>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/11befbe4-5dfe-4019-9aae-812949d43f40_disk">
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       </source>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/11befbe4-5dfe-4019-9aae-812949d43f40_disk.config">
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       </source>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:21:34 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:27:78:9e"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <target dev="tap8d829ab8-df"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40/console.log" append="off"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <video>
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     </video>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:21:34 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:21:34 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:21:34 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:21:34 compute-0 nova_compute[239965]: </domain>
Jan 26 16:21:34 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.076 239969 DEBUG nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Preparing to wait for external event network-vif-plugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.077 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquiring lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.078 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.078 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.080 239969 DEBUG nova.virt.libvirt.vif [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:21:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-583381645-access_point-28386830',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-583381645-access_point-28386830',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-583381645-acc',id=124,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMRKXEm5Y5llTOC9cbXvKrIQsHtbd/UG4rVHmOjmygEUQvHShGZ/21+93d2lJj9E5vpMWpihy3NWJyDHc6jpN2SJp0Z6+07v3X1U/zidwO26JsBXKa8n76jvJ3WGYRUsdw==',key_name='tempest-TestSecurityGroupsBasicOps-1479257615',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='7e6e29d3d46a48db9f596eca01b3ecf4',ramdisk_id='',reservation_id='r-610t2003',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-583381645',owner_user_name='tempest-TestSecurityGroupsBasicOps-583381645-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:21:27Z,user_data=None,user_id='7494b8fe226247c9accec02ae2beee97',uuid=11befbe4-5dfe-4019-9aae-812949d43f40,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.081 239969 DEBUG nova.network.os_vif_util [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Converting VIF {"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.082 239969 DEBUG nova.network.os_vif_util [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:78:9e,bridge_name='br-int',has_traffic_filtering=True,id=8d829ab8-dfca-4c55-9775-0be98c68aab4,network=Network(e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d829ab8-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.083 239969 DEBUG os_vif [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:78:9e,bridge_name='br-int',has_traffic_filtering=True,id=8d829ab8-dfca-4c55-9775-0be98c68aab4,network=Network(e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d829ab8-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.084 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.085 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.086 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.091 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.091 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d829ab8-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.092 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d829ab8-df, col_values=(('external_ids', {'iface-id': '8d829ab8-dfca-4c55-9775-0be98c68aab4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:78:9e', 'vm-uuid': '11befbe4-5dfe-4019-9aae-812949d43f40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.094 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:34 compute-0 NetworkManager[48954]: <info>  [1769444494.0962] manager: (tap8d829ab8-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.098 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.105 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.106 239969 INFO os_vif [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:78:9e,bridge_name='br-int',has_traffic_filtering=True,id=8d829ab8-dfca-4c55-9775-0be98c68aab4,network=Network(e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d829ab8-df')
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.398 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.399 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.399 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] No VIF found with MAC fa:16:3e:27:78:9e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.400 239969 INFO nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Using config drive
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.434 239969 DEBUG nova.storage.rbd_utils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] rbd image 11befbe4-5dfe-4019-9aae-812949d43f40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 656 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Jan 26 16:21:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:21:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3600.2 total, 600.0 interval
                                           Cumulative writes: 30K writes, 117K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.03 MB/s
                                           Cumulative WAL: 30K writes, 10K syncs, 2.78 writes per sync, written: 0.11 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3474 writes, 13K keys, 3474 commit groups, 1.0 writes per commit group, ingest: 14.71 MB, 0.02 MB/s
                                           Interval WAL: 3474 writes, 1410 syncs, 2.46 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:21:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/596469504' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:34 compute-0 ceph-mon[75140]: pgmap v2126: 305 pgs: 305 active+clean; 447 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 656 KiB/s rd, 6.0 MiB/s wr, 154 op/s
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.835 239969 INFO nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Creating config drive at /var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40/disk.config
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.839 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt2cqgm3u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:34 compute-0 nova_compute[239965]: 2026-01-26 16:21:34.978 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpt2cqgm3u" returned: 0 in 0.139s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.014 239969 DEBUG nova.storage.rbd_utils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] rbd image 11befbe4-5dfe-4019-9aae-812949d43f40_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.018 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40/disk.config 11befbe4-5dfe-4019-9aae-812949d43f40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.411 239969 DEBUG oslo_concurrency.processutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40/disk.config 11befbe4-5dfe-4019-9aae-812949d43f40_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.392s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.412 239969 INFO nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Deleting local config drive /var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40/disk.config because it was imported into RBD.
Jan 26 16:21:35 compute-0 kernel: tap8d829ab8-df: entered promiscuous mode
Jan 26 16:21:35 compute-0 ovn_controller[146046]: 2026-01-26T16:21:35Z|01282|binding|INFO|Claiming lport 8d829ab8-dfca-4c55-9775-0be98c68aab4 for this chassis.
Jan 26 16:21:35 compute-0 ovn_controller[146046]: 2026-01-26T16:21:35Z|01283|binding|INFO|8d829ab8-dfca-4c55-9775-0be98c68aab4: Claiming fa:16:3e:27:78:9e 10.100.0.6
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.495 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:35 compute-0 NetworkManager[48954]: <info>  [1769444495.4966] manager: (tap8d829ab8-df): new Tun device (/org/freedesktop/NetworkManager/Devices/535)
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.502 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:78:9e 10.100.0.6'], port_security=['fa:16:3e:27:78:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '11befbe4-5dfe-4019-9aae-812949d43f40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e6e29d3d46a48db9f596eca01b3ecf4', 'neutron:revision_number': '2', 'neutron:security_group_ids': '53cf850d-c1bb-414f-955d-63f5cc333cb9 dbf8351f-d0fc-40d2-8918-be28528f3ee9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a019195-0947-4dd6-b1b9-9a90f0c4f83c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=8d829ab8-dfca-4c55-9775-0be98c68aab4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.504 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 8d829ab8-dfca-4c55-9775-0be98c68aab4 in datapath e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5 bound to our chassis
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.505 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5
Jan 26 16:21:35 compute-0 ovn_controller[146046]: 2026-01-26T16:21:35Z|01284|binding|INFO|Setting lport 8d829ab8-dfca-4c55-9775-0be98c68aab4 up in Southbound
Jan 26 16:21:35 compute-0 ovn_controller[146046]: 2026-01-26T16:21:35Z|01285|binding|INFO|Setting lport 8d829ab8-dfca-4c55-9775-0be98c68aab4 ovn-installed in OVS
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.511 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.512 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.514 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.524 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cea378ff-39c1-4dfc-8298-b04aa9e20068]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.526 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape8995ce4-f1 in ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.529 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape8995ce4-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.529 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bb1ccd9d-aee0-4f56-aa09-a18f4efba422]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.531 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bef17168-fc57-4933-86fa-4c4d940a3bb2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 systemd-machined[208061]: New machine qemu-151-instance-0000007c.
Jan 26 16:21:35 compute-0 systemd[1]: Started Virtual Machine qemu-151-instance-0000007c.
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.550 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[9c8063e8-7a67-409b-885b-af8ff80e2461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 systemd-udevd[348993]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.572 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f10e7e-51dd-4e44-a09b-35acf80ec30e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 NetworkManager[48954]: <info>  [1769444495.5867] device (tap8d829ab8-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:21:35 compute-0 NetworkManager[48954]: <info>  [1769444495.5881] device (tap8d829ab8-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.606 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5cef04-a58d-4986-906b-d8769998b90c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.611 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1b1ded27-7341-4b21-8e03-102596cdbcae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 NetworkManager[48954]: <info>  [1769444495.6120] manager: (tape8995ce4-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/536)
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.645 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[371d6907-9d85-43b8-aca3-47c7c6a84fe3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.648 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[aa1e0bde-3b11-470b-adc7-27e53045d8a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 NetworkManager[48954]: <info>  [1769444495.6730] device (tape8995ce4-f0): carrier: link connected
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.680 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[17cab525-7491-4630-a66b-ee6b03625433]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.699 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[646242a9-00b2-4679-98fd-67b4e2dfc62c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8995ce4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:90:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598647, 'reachable_time': 23134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349023, 'error': None, 'target': 'ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.714 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e31d9c5f-9837-48b2-9e01-8d1814882e1a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:90e5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 598647, 'tstamp': 598647}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349024, 'error': None, 'target': 'ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.739 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fd3e1f2d-371d-4a05-a8d2-c3c9267ba06a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape8995ce4-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:90:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 373], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598647, 'reachable_time': 23134, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 349025, 'error': None, 'target': 'ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.771 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[31c8c38d-3e50-4e40-94e5-aefed20d7755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.840 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d678339f-79ff-4e54-a75d-99c79f82a826]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.841 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8995ce4-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.841 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.841 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape8995ce4-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.843 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:35 compute-0 kernel: tape8995ce4-f0: entered promiscuous mode
Jan 26 16:21:35 compute-0 NetworkManager[48954]: <info>  [1769444495.8438] manager: (tape8995ce4-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/537)
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.845 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape8995ce4-f0, col_values=(('external_ids', {'iface-id': 'bd57d6a4-a740-4833-a7e1-a3ae36073633'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.848 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.850 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[86d5eaaf-4589-4d47-854b-286718674c17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.850 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5.pid.haproxy
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:21:35 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:35.851 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5', 'env', 'PROCESS_TAG=haproxy-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:21:35 compute-0 ovn_controller[146046]: 2026-01-26T16:21:35Z|01286|binding|INFO|Releasing lport bd57d6a4-a740-4833-a7e1-a3ae36073633 from this chassis (sb_readonly=0)
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.865 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.878 239969 DEBUG nova.compute.manager [req-c90a93a1-e908-4399-92f0-b539e937ccb1 req-c5b78544-17f2-4f6e-9b4e-63dd0c6c2c08 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received event network-vif-plugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.878 239969 DEBUG oslo_concurrency.lockutils [req-c90a93a1-e908-4399-92f0-b539e937ccb1 req-c5b78544-17f2-4f6e-9b4e-63dd0c6c2c08 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.878 239969 DEBUG oslo_concurrency.lockutils [req-c90a93a1-e908-4399-92f0-b539e937ccb1 req-c5b78544-17f2-4f6e-9b4e-63dd0c6c2c08 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.879 239969 DEBUG oslo_concurrency.lockutils [req-c90a93a1-e908-4399-92f0-b539e937ccb1 req-c5b78544-17f2-4f6e-9b4e-63dd0c6c2c08 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:35 compute-0 nova_compute[239965]: 2026-01-26 16:21:35.879 239969 DEBUG nova.compute.manager [req-c90a93a1-e908-4399-92f0-b539e937ccb1 req-c5b78544-17f2-4f6e-9b4e-63dd0c6c2c08 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Processing event network-vif-plugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.066 239969 DEBUG nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.067 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444496.0658147, 11befbe4-5dfe-4019-9aae-812949d43f40 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.067 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] VM Started (Lifecycle Event)
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.073 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.077 239969 INFO nova.virt.libvirt.driver [-] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Instance spawned successfully.
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.077 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.099 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.104 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.109 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.109 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.110 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.110 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.110 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.111 239969 DEBUG nova.virt.libvirt.driver [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.129 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.130 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444496.0695856, 11befbe4-5dfe-4019-9aae-812949d43f40 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.130 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] VM Paused (Lifecycle Event)
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.168 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.172 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444496.0718555, 11befbe4-5dfe-4019-9aae-812949d43f40 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.172 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] VM Resumed (Lifecycle Event)
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.200 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.202 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.205 239969 INFO nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Took 8.99 seconds to spawn the instance on the hypervisor.
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.206 239969 DEBUG nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.247 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:21:36 compute-0 podman[349099]: 2026-01-26 16:21:36.250835753 +0000 UTC m=+0.046666232 container create aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.286 239969 INFO nova.compute.manager [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Took 10.19 seconds to build instance.
Jan 26 16:21:36 compute-0 systemd[1]: Started libpod-conmon-aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6.scope.
Jan 26 16:21:36 compute-0 nova_compute[239965]: 2026-01-26 16:21:36.303 239969 DEBUG oslo_concurrency.lockutils [None req-762838e1-9244-47aa-830b-33c0042424cf 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:21:36 compute-0 podman[349099]: 2026-01-26 16:21:36.226896807 +0000 UTC m=+0.022727306 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:21:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2aeff431a17394afbfab947441472cd2fae701a15231838a7025d864e19b795a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:21:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Jan 26 16:21:36 compute-0 podman[349099]: 2026-01-26 16:21:36.584824531 +0000 UTC m=+0.380655110 container init aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 16:21:36 compute-0 podman[349099]: 2026-01-26 16:21:36.596278391 +0000 UTC m=+0.392108870 container start aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:21:36 compute-0 neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5[349114]: [NOTICE]   (349118) : New worker (349120) forked
Jan 26 16:21:36 compute-0 neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5[349114]: [NOTICE]   (349118) : Loading success.
Jan 26 16:21:37 compute-0 ceph-mon[75140]: pgmap v2127: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 671 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Jan 26 16:21:37 compute-0 nova_compute[239965]: 2026-01-26 16:21:37.692 239969 INFO nova.compute.manager [None req-781d36c4-9eaf-43c8-999b-cf091a39da87 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Get console output
Jan 26 16:21:37 compute-0 nova_compute[239965]: 2026-01-26 16:21:37.706 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:21:37 compute-0 nova_compute[239965]: 2026-01-26 16:21:37.864 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "9ad7b43b-be73-4270-af57-e061d3581019" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:37 compute-0 nova_compute[239965]: 2026-01-26 16:21:37.867 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:37 compute-0 nova_compute[239965]: 2026-01-26 16:21:37.910 239969 DEBUG nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:21:37 compute-0 nova_compute[239965]: 2026-01-26 16:21:37.915 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.000 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.001 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.011 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.011 239969 INFO nova.compute.claims [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.032 239969 DEBUG oslo_concurrency.lockutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.032 239969 DEBUG oslo_concurrency.lockutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.033 239969 DEBUG oslo_concurrency.lockutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.033 239969 DEBUG oslo_concurrency.lockutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.034 239969 DEBUG oslo_concurrency.lockutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.035 239969 INFO nova.compute.manager [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Terminating instance
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.036 239969 DEBUG nova.compute.manager [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.079 239969 DEBUG nova.compute.manager [req-e8989806-bae8-4192-ab36-a05e95093f00 req-312c12e5-5efc-446b-8f3e-871e0e7ee02c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received event network-vif-plugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.080 239969 DEBUG oslo_concurrency.lockutils [req-e8989806-bae8-4192-ab36-a05e95093f00 req-312c12e5-5efc-446b-8f3e-871e0e7ee02c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.080 239969 DEBUG oslo_concurrency.lockutils [req-e8989806-bae8-4192-ab36-a05e95093f00 req-312c12e5-5efc-446b-8f3e-871e0e7ee02c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.081 239969 DEBUG oslo_concurrency.lockutils [req-e8989806-bae8-4192-ab36-a05e95093f00 req-312c12e5-5efc-446b-8f3e-871e0e7ee02c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.081 239969 DEBUG nova.compute.manager [req-e8989806-bae8-4192-ab36-a05e95093f00 req-312c12e5-5efc-446b-8f3e-871e0e7ee02c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] No waiting events found dispatching network-vif-plugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.082 239969 WARNING nova.compute.manager [req-e8989806-bae8-4192-ab36-a05e95093f00 req-312c12e5-5efc-446b-8f3e-871e0e7ee02c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received unexpected event network-vif-plugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 for instance with vm_state active and task_state None.
Jan 26 16:21:38 compute-0 kernel: tapb27889cb-0f (unregistering): left promiscuous mode
Jan 26 16:21:38 compute-0 NetworkManager[48954]: <info>  [1769444498.0982] device (tapb27889cb-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:21:38 compute-0 ovn_controller[146046]: 2026-01-26T16:21:38Z|01287|binding|INFO|Releasing lport b27889cb-0f7d-409c-830b-20a79cd51358 from this chassis (sb_readonly=0)
Jan 26 16:21:38 compute-0 ovn_controller[146046]: 2026-01-26T16:21:38Z|01288|binding|INFO|Setting lport b27889cb-0f7d-409c-830b-20a79cd51358 down in Southbound
Jan 26 16:21:38 compute-0 ovn_controller[146046]: 2026-01-26T16:21:38Z|01289|binding|INFO|Removing iface tapb27889cb-0f ovn-installed in OVS
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.133 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.138 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:29:20:40 10.100.0.6'], port_security=['fa:16:3e:29:20:40 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5544e5ad-3c83-4a30-9534-f7a0e6f29a04', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '13bca217-5749-44d8-824e-52eb25694519', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.206'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2629d1bb-a533-4276-ab03-448f2c4761f6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b27889cb-0f7d-409c-830b-20a79cd51358) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.139 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b27889cb-0f7d-409c-830b-20a79cd51358 in datapath 94d25833-d97d-4ef4-9c91-97f8d2c0ffa9 unbound from our chassis
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.141 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 94d25833-d97d-4ef4-9c91-97f8d2c0ffa9
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.149 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.157 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7906dfcf-6b34-42a6-93fd-7f5374f14d2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:38 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Jan 26 16:21:38 compute-0 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d0000007b.scope: Consumed 16.820s CPU time.
Jan 26 16:21:38 compute-0 systemd-machined[208061]: Machine qemu-150-instance-0000007b terminated.
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.194 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cf209df3-34ee-4a8e-9b73-02ef46229647]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.198 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f50368e0-4afb-460a-91eb-79d76810dca3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.232 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3bcd824b-43b8-4b95-bead-020db0d4ddb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.246 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.255 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eebae889-206a-4ad6-a5e8-477536d65f2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94d25833-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:22:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 366], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592114, 'reachable_time': 24138, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349138, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.272 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a016cb85-af13-470c-b79c-e71c4aec3bf4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap94d25833-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592125, 'tstamp': 592125}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349143, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap94d25833-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 592128, 'tstamp': 592128}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349143, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.274 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94d25833-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.279 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94d25833-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.279 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.280 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap94d25833-d0, col_values=(('external_ids', {'iface-id': '1e5f3a8e-47d3-4b4a-9225-a0fadd35ac3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:38.280 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.289 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.298 239969 INFO nova.virt.libvirt.driver [-] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Instance destroyed successfully.
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.299 239969 DEBUG nova.objects.instance [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid 5544e5ad-3c83-4a30-9534-f7a0e6f29a04 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.320 239969 DEBUG nova.virt.libvirt.vif [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:21:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1083326071',display_name='tempest-TestNetworkBasicOps-server-1083326071',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1083326071',id=123,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOnc6ABUVZxmUVIFHzMrSF2wHDgUppuyBm/4Qpc+CBG9cly270KJFUGdE7lRkI23XtPm3Un5iEJliNuUeKKpKlQa5tnmaDsMmvi2B67XnlTjwI3+fuIa/NRYcjeBQIJDRw==',key_name='tempest-TestNetworkBasicOps-923206999',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:21:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-ldwrec2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:21:15Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=5544e5ad-3c83-4a30-9534-f7a0e6f29a04,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.320 239969 DEBUG nova.network.os_vif_util [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "b27889cb-0f7d-409c-830b-20a79cd51358", "address": "fa:16:3e:29:20:40", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb27889cb-0f", "ovs_interfaceid": "b27889cb-0f7d-409c-830b-20a79cd51358", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.321 239969 DEBUG nova.network.os_vif_util [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:29:20:40,bridge_name='br-int',has_traffic_filtering=True,id=b27889cb-0f7d-409c-830b-20a79cd51358,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27889cb-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.322 239969 DEBUG os_vif [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:20:40,bridge_name='br-int',has_traffic_filtering=True,id=b27889cb-0f7d-409c-830b-20a79cd51358,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27889cb-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.324 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.324 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb27889cb-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.326 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.331 239969 INFO os_vif [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:29:20:40,bridge_name='br-int',has_traffic_filtering=True,id=b27889cb-0f7d-409c-830b-20a79cd51358,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb27889cb-0f')
Jan 26 16:21:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.1 MiB/s wr, 199 op/s
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.579 239969 INFO nova.virt.libvirt.driver [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Deleting instance files /var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04_del
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.580 239969 INFO nova.virt.libvirt.driver [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Deletion of /var/lib/nova/instances/5544e5ad-3c83-4a30-9534-f7a0e6f29a04_del complete
Jan 26 16:21:38 compute-0 ceph-mon[75140]: pgmap v2128: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 6.1 MiB/s wr, 199 op/s
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.652 239969 INFO nova.compute.manager [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.654 239969 DEBUG oslo.service.loopingcall [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.655 239969 DEBUG nova.compute.manager [-] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.656 239969 DEBUG nova.network.neutron [-] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:21:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:21:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1468345686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.847 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.852 239969 DEBUG nova.compute.provider_tree [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.866 239969 DEBUG nova.scheduler.client.report [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.884 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.885 239969 DEBUG nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.938 239969 DEBUG nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.938 239969 DEBUG nova.network.neutron [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.969 239969 INFO nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:21:38 compute-0 nova_compute[239965]: 2026-01-26 16:21:38.987 239969 DEBUG nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.131 239969 DEBUG nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.132 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.133 239969 INFO nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Creating image(s)
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.157 239969 DEBUG nova.storage.rbd_utils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 9ad7b43b-be73-4270-af57-e061d3581019_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.188 239969 DEBUG nova.storage.rbd_utils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 9ad7b43b-be73-4270-af57-e061d3581019_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.210 239969 DEBUG nova.storage.rbd_utils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 9ad7b43b-be73-4270-af57-e061d3581019_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.216 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.249 239969 DEBUG nova.policy [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.285 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.286 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.287 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.287 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.311 239969 DEBUG nova.storage.rbd_utils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 9ad7b43b-be73-4270-af57-e061d3581019_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.315 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9ad7b43b-be73-4270-af57-e061d3581019_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:39.509 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:21:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:39.510 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.520 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.533 239969 DEBUG nova.network.neutron [-] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.552 239969 INFO nova.compute.manager [-] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Took 0.90 seconds to deallocate network for instance.
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.596 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9ad7b43b-be73-4270-af57-e061d3581019_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.630 239969 DEBUG oslo_concurrency.lockutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.631 239969 DEBUG oslo_concurrency.lockutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1468345686' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.682 239969 DEBUG nova.storage.rbd_utils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 9ad7b43b-be73-4270-af57-e061d3581019_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:21:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.771 239969 DEBUG nova.objects.instance [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 9ad7b43b-be73-4270-af57-e061d3581019 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.798 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.799 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Ensure instance console log exists: /var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.799 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.799 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.799 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:39 compute-0 nova_compute[239965]: 2026-01-26 16:21:39.846 239969 DEBUG oslo_concurrency.processutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:40 compute-0 nova_compute[239965]: 2026-01-26 16:21:40.294 239969 DEBUG nova.compute.manager [req-abd28334-f86d-4021-b1e2-f998ace20cdb req-5342c11a-7ff0-496f-b21f-25b60d81c567 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Received event network-vif-deleted-b27889cb-0f7d-409c-830b-20a79cd51358 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:21:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1236683954' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:40 compute-0 nova_compute[239965]: 2026-01-26 16:21:40.393 239969 DEBUG oslo_concurrency.processutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:40 compute-0 nova_compute[239965]: 2026-01-26 16:21:40.401 239969 DEBUG nova.compute.provider_tree [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:21:40 compute-0 nova_compute[239965]: 2026-01-26 16:21:40.419 239969 DEBUG nova.scheduler.client.report [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:21:40 compute-0 nova_compute[239965]: 2026-01-26 16:21:40.454 239969 DEBUG oslo_concurrency.lockutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:40 compute-0 nova_compute[239965]: 2026-01-26 16:21:40.505 239969 INFO nova.scheduler.client.report [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance 5544e5ad-3c83-4a30-9534-f7a0e6f29a04
Jan 26 16:21:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.1 MiB/s wr, 159 op/s
Jan 26 16:21:40 compute-0 nova_compute[239965]: 2026-01-26 16:21:40.564 239969 DEBUG oslo_concurrency.lockutils [None req-47344e37-fc2f-4f81-8c49-4aa00339ddf6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "5544e5ad-3c83-4a30-9534-f7a0e6f29a04" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.532s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1236683954' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:40 compute-0 ceph-mon[75140]: pgmap v2129: 305 pgs: 305 active+clean; 451 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 4.1 MiB/s wr, 159 op/s
Jan 26 16:21:41 compute-0 nova_compute[239965]: 2026-01-26 16:21:41.859 239969 DEBUG nova.network.neutron [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Successfully created port: 499173ab-2069-456f-bf77-0e96d8e9c43b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:21:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 418 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.8 MiB/s wr, 245 op/s
Jan 26 16:21:42 compute-0 nova_compute[239965]: 2026-01-26 16:21:42.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.324 239969 DEBUG nova.compute.manager [req-cf19b7eb-8ed9-4b2f-88e7-7f73ae2305f6 req-1fe75ac2-8452-46fe-b258-cc2d8e5fca41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received event network-changed-8d829ab8-dfca-4c55-9775-0be98c68aab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.325 239969 DEBUG nova.compute.manager [req-cf19b7eb-8ed9-4b2f-88e7-7f73ae2305f6 req-1fe75ac2-8452-46fe-b258-cc2d8e5fca41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Refreshing instance network info cache due to event network-changed-8d829ab8-dfca-4c55-9775-0be98c68aab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.325 239969 DEBUG oslo_concurrency.lockutils [req-cf19b7eb-8ed9-4b2f-88e7-7f73ae2305f6 req-1fe75ac2-8452-46fe-b258-cc2d8e5fca41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.326 239969 DEBUG oslo_concurrency.lockutils [req-cf19b7eb-8ed9-4b2f-88e7-7f73ae2305f6 req-1fe75ac2-8452-46fe-b258-cc2d8e5fca41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.326 239969 DEBUG nova.network.neutron [req-cf19b7eb-8ed9-4b2f-88e7-7f73ae2305f6 req-1fe75ac2-8452-46fe-b258-cc2d8e5fca41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Refreshing network info cache for port 8d829ab8-dfca-4c55-9775-0be98c68aab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.327 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:43 compute-0 ceph-mon[75140]: pgmap v2130: 305 pgs: 305 active+clean; 418 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.8 MiB/s wr, 245 op/s
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.907 239969 DEBUG oslo_concurrency.lockutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.908 239969 DEBUG oslo_concurrency.lockutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.908 239969 DEBUG oslo_concurrency.lockutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.909 239969 DEBUG oslo_concurrency.lockutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.909 239969 DEBUG oslo_concurrency.lockutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.910 239969 INFO nova.compute.manager [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Terminating instance
Jan 26 16:21:43 compute-0 nova_compute[239965]: 2026-01-26 16:21:43.912 239969 DEBUG nova.compute.manager [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:21:43 compute-0 kernel: tapa2b8e0fe-aa (unregistering): left promiscuous mode
Jan 26 16:21:43 compute-0 NetworkManager[48954]: <info>  [1769444503.9644] device (tapa2b8e0fe-aa): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:44 compute-0 ovn_controller[146046]: 2026-01-26T16:21:44Z|01290|binding|INFO|Releasing lport a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb from this chassis (sb_readonly=0)
Jan 26 16:21:44 compute-0 ovn_controller[146046]: 2026-01-26T16:21:44Z|01291|binding|INFO|Setting lport a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb down in Southbound
Jan 26 16:21:44 compute-0 ovn_controller[146046]: 2026-01-26T16:21:44Z|01292|binding|INFO|Removing iface tapa2b8e0fe-aa ovn-installed in OVS
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.017 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.022 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:f3:55 10.100.0.5'], port_security=['fa:16:3e:45:f3:55 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '84cd45c5-937a-4d42-9e80-2aaa7aa558c2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e66c0628-319c-4933-bfe7-b5cee0b15a77', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2629d1bb-a533-4276-ab03-448f2c4761f6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.024 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb in datapath 94d25833-d97d-4ef4-9c91-97f8d2c0ffa9 unbound from our chassis
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.027 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94d25833-d97d-4ef4-9c91-97f8d2c0ffa9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.028 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5b7de7bf-2f4f-4675-9f92-77930616d350]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.029 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9 namespace which is not needed anymore
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.043 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:44 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000078.scope: Deactivated successfully.
Jan 26 16:21:44 compute-0 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000078.scope: Consumed 16.475s CPU time.
Jan 26 16:21:44 compute-0 systemd-machined[208061]: Machine qemu-147-instance-00000078 terminated.
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.143 239969 INFO nova.virt.libvirt.driver [-] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Instance destroyed successfully.
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.144 239969 DEBUG nova.objects.instance [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid 84cd45c5-937a-4d42-9e80-2aaa7aa558c2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.158 239969 DEBUG nova.virt.libvirt.vif [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:20:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1497622994',display_name='tempest-TestNetworkBasicOps-server-1497622994',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1497622994',id=120,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMEbNbkirpn8q6fOUcuOBLNwlHKG1ipK4t+MC4V9pNQ7fEY1buV+g1Lo8jtOaeRRA9dB73yZ5bS6kVJRhgngcieAdijcLEIBj4r5FY+dsZJud6vOs//08LbVZa6UA9cOVw==',key_name='tempest-TestNetworkBasicOps-288144499',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:20:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-ad8czmgq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:20:31Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=84cd45c5-937a-4d42-9e80-2aaa7aa558c2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.158 239969 DEBUG nova.network.os_vif_util [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "address": "fa:16:3e:45:f3:55", "network": {"id": "94d25833-d97d-4ef4-9c91-97f8d2c0ffa9", "bridge": "br-int", "label": "tempest-network-smoke--765542682", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa2b8e0fe-aa", "ovs_interfaceid": "a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.159 239969 DEBUG nova.network.os_vif_util [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:f3:55,bridge_name='br-int',has_traffic_filtering=True,id=a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2b8e0fe-aa') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.159 239969 DEBUG os_vif [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:f3:55,bridge_name='br-int',has_traffic_filtering=True,id=a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2b8e0fe-aa') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.161 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.161 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa2b8e0fe-aa, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.163 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.165 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.167 239969 INFO os_vif [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:f3:55,bridge_name='br-int',has_traffic_filtering=True,id=a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb,network=Network(94d25833-d97d-4ef4-9c91-97f8d2c0ffa9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa2b8e0fe-aa')
Jan 26 16:21:44 compute-0 neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9[346591]: [NOTICE]   (346595) : haproxy version is 2.8.14-c23fe91
Jan 26 16:21:44 compute-0 neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9[346591]: [NOTICE]   (346595) : path to executable is /usr/sbin/haproxy
Jan 26 16:21:44 compute-0 neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9[346591]: [WARNING]  (346595) : Exiting Master process...
Jan 26 16:21:44 compute-0 neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9[346591]: [ALERT]    (346595) : Current worker (346597) exited with code 143 (Terminated)
Jan 26 16:21:44 compute-0 neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9[346591]: [WARNING]  (346595) : All workers exited. Exiting... (0)
Jan 26 16:21:44 compute-0 systemd[1]: libpod-3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8.scope: Deactivated successfully.
Jan 26 16:21:44 compute-0 podman[349407]: 2026-01-26 16:21:44.202031674 +0000 UTC m=+0.057733224 container died 3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8-userdata-shm.mount: Deactivated successfully.
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.229 239969 DEBUG nova.network.neutron [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Successfully updated port: 499173ab-2069-456f-bf77-0e96d8e9c43b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:21:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-87dd6431e8701fbd21a890fd278ab45f14b242539cb13f44af670703dd5c8498-merged.mount: Deactivated successfully.
Jan 26 16:21:44 compute-0 podman[349407]: 2026-01-26 16:21:44.242560977 +0000 UTC m=+0.098262527 container cleanup 3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.251 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.251 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.251 239969 DEBUG nova.network.neutron [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:21:44 compute-0 systemd[1]: libpod-conmon-3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8.scope: Deactivated successfully.
Jan 26 16:21:44 compute-0 podman[349461]: 2026-01-26 16:21:44.315224397 +0000 UTC m=+0.051777179 container remove 3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.321 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[27e8c8d4-6495-40a6-b032-16f2ae93a523]: (4, ('Mon Jan 26 04:21:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9 (3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8)\n3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8\nMon Jan 26 04:21:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9 (3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8)\n3226040fbce9a30b87755a37759b7a98cb75069739864e4f4ecdee15c4ca27a8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.322 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[27330c3a-5203-40d2-937c-29918e3b31c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.323 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94d25833-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.324 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:44 compute-0 kernel: tap94d25833-d0: left promiscuous mode
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.330 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d2836a8f-4cbf-42af-a3b3-6297c87b0808]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.334 239969 DEBUG nova.compute.manager [req-ff5b43da-905c-4b04-93e2-516a89624ee0 req-6e2e3142-aba1-402f-9aa7-23a4787a3258 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received event network-vif-unplugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.334 239969 DEBUG oslo_concurrency.lockutils [req-ff5b43da-905c-4b04-93e2-516a89624ee0 req-6e2e3142-aba1-402f-9aa7-23a4787a3258 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.334 239969 DEBUG oslo_concurrency.lockutils [req-ff5b43da-905c-4b04-93e2-516a89624ee0 req-6e2e3142-aba1-402f-9aa7-23a4787a3258 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.334 239969 DEBUG oslo_concurrency.lockutils [req-ff5b43da-905c-4b04-93e2-516a89624ee0 req-6e2e3142-aba1-402f-9aa7-23a4787a3258 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.335 239969 DEBUG nova.compute.manager [req-ff5b43da-905c-4b04-93e2-516a89624ee0 req-6e2e3142-aba1-402f-9aa7-23a4787a3258 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] No waiting events found dispatching network-vif-unplugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.335 239969 DEBUG nova.compute.manager [req-ff5b43da-905c-4b04-93e2-516a89624ee0 req-6e2e3142-aba1-402f-9aa7-23a4787a3258 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received event network-vif-unplugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.340 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3d739a31-7684-4577-ac36-d3914e5cb9bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.341 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5397aad5-c21e-4003-a0dd-ff1bfdd89c23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.354 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.357 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[25372f6f-a50f-412f-8bea-af9911cc58ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 592108, 'reachable_time': 20719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349477, 'error': None, 'target': 'ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:44 compute-0 systemd[1]: run-netns-ovnmeta\x2d94d25833\x2dd97d\x2d4ef4\x2d9c91\x2d97f8d2c0ffa9.mount: Deactivated successfully.
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.359 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-94d25833-d97d-4ef4-9c91-97f8d2c0ffa9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:21:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:44.359 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d835ff70-865f-462a-9a7c-a0babbb45b4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.416 239969 DEBUG nova.network.neutron [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.435 239969 INFO nova.virt.libvirt.driver [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Deleting instance files /var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2_del
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.436 239969 INFO nova.virt.libvirt.driver [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Deletion of /var/lib/nova/instances/84cd45c5-937a-4d42-9e80-2aaa7aa558c2_del complete
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.499 239969 INFO nova.compute.manager [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Took 0.59 seconds to destroy the instance on the hypervisor.
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.499 239969 DEBUG oslo.service.loopingcall [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.500 239969 DEBUG nova.compute.manager [-] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:21:44 compute-0 nova_compute[239965]: 2026-01-26 16:21:44.500 239969 DEBUG nova.network.neutron [-] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:21:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 418 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Jan 26 16:21:44 compute-0 ceph-mon[75140]: pgmap v2131: 305 pgs: 305 active+clean; 418 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.153 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.326 239969 DEBUG nova.network.neutron [req-cf19b7eb-8ed9-4b2f-88e7-7f73ae2305f6 req-1fe75ac2-8452-46fe-b258-cc2d8e5fca41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Updated VIF entry in instance network info cache for port 8d829ab8-dfca-4c55-9775-0be98c68aab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.327 239969 DEBUG nova.network.neutron [req-cf19b7eb-8ed9-4b2f-88e7-7f73ae2305f6 req-1fe75ac2-8452-46fe-b258-cc2d8e5fca41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Updating instance_info_cache with network_info: [{"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.353 239969 DEBUG oslo_concurrency.lockutils [req-cf19b7eb-8ed9-4b2f-88e7-7f73ae2305f6 req-1fe75ac2-8452-46fe-b258-cc2d8e5fca41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.431 239969 DEBUG nova.compute.manager [req-c8aa5fa4-4c57-4722-bf73-0f50702713ad req-b2ec24be-c205-4f38-b197-eae133eb5702 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received event network-changed-499173ab-2069-456f-bf77-0e96d8e9c43b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.432 239969 DEBUG nova.compute.manager [req-c8aa5fa4-4c57-4722-bf73-0f50702713ad req-b2ec24be-c205-4f38-b197-eae133eb5702 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Refreshing instance network info cache due to event network-changed-499173ab-2069-456f-bf77-0e96d8e9c43b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.433 239969 DEBUG oslo_concurrency.lockutils [req-c8aa5fa4-4c57-4722-bf73-0f50702713ad req-b2ec24be-c205-4f38-b197-eae133eb5702 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.595 239969 DEBUG nova.network.neutron [-] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.616 239969 INFO nova.compute.manager [-] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Took 1.12 seconds to deallocate network for instance.
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.670 239969 DEBUG oslo_concurrency.lockutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.671 239969 DEBUG oslo_concurrency.lockutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:45 compute-0 nova_compute[239965]: 2026-01-26 16:21:45.821 239969 DEBUG oslo_concurrency.processutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:21:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/772448109' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.408 239969 DEBUG oslo_concurrency.processutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.416 239969 DEBUG nova.compute.provider_tree [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.437 239969 DEBUG nova.scheduler.client.report [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:21:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/772448109' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.466 239969 DEBUG oslo_concurrency.lockutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.498 239969 INFO nova.scheduler.client.report [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance 84cd45c5-937a-4d42-9e80-2aaa7aa558c2
Jan 26 16:21:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:46.512 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 397 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.537 239969 DEBUG nova.network.neutron [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Updating instance_info_cache with network_info: [{"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": 
false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.567 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.568 239969 DEBUG nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Instance network_info: |[{"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": 
true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.568 239969 DEBUG oslo_concurrency.lockutils [req-c8aa5fa4-4c57-4722-bf73-0f50702713ad req-b2ec24be-c205-4f38-b197-eae133eb5702 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.568 239969 DEBUG nova.network.neutron [req-c8aa5fa4-4c57-4722-bf73-0f50702713ad req-b2ec24be-c205-4f38-b197-eae133eb5702 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Refreshing network info cache for port 499173ab-2069-456f-bf77-0e96d8e9c43b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.573 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Start _get_guest_xml network_info=[{"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.576 239969 DEBUG oslo_concurrency.lockutils [None req-631ef6a1-6410-41a3-92b6-89294329a3ac e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.580 239969 WARNING nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.584 239969 DEBUG nova.virt.libvirt.host [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.585 239969 DEBUG nova.virt.libvirt.host [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.588 239969 DEBUG nova.virt.libvirt.host [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.588 239969 DEBUG nova.virt.libvirt.host [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.589 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.589 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.590 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.590 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.590 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.590 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.591 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.591 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.591 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.592 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.592 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.592 239969 DEBUG nova.virt.hardware [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.596 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.785 239969 DEBUG nova.compute.manager [req-7f064f17-2419-4d01-980a-711688e2550d req-aa738197-2a11-4523-8c69-158e65a22d5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received event network-vif-plugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.786 239969 DEBUG oslo_concurrency.lockutils [req-7f064f17-2419-4d01-980a-711688e2550d req-aa738197-2a11-4523-8c69-158e65a22d5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.786 239969 DEBUG oslo_concurrency.lockutils [req-7f064f17-2419-4d01-980a-711688e2550d req-aa738197-2a11-4523-8c69-158e65a22d5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.787 239969 DEBUG oslo_concurrency.lockutils [req-7f064f17-2419-4d01-980a-711688e2550d req-aa738197-2a11-4523-8c69-158e65a22d5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "84cd45c5-937a-4d42-9e80-2aaa7aa558c2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.787 239969 DEBUG nova.compute.manager [req-7f064f17-2419-4d01-980a-711688e2550d req-aa738197-2a11-4523-8c69-158e65a22d5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] No waiting events found dispatching network-vif-plugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:21:46 compute-0 nova_compute[239965]: 2026-01-26 16:21:46.787 239969 WARNING nova.compute.manager [req-7f064f17-2419-4d01-980a-711688e2550d req-aa738197-2a11-4523-8c69-158e65a22d5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received unexpected event network-vif-plugged-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb for instance with vm_state deleted and task_state None.
Jan 26 16:21:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:21:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79292265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.185 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.219 239969 DEBUG nova.storage.rbd_utils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 9ad7b43b-be73-4270-af57-e061d3581019_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.225 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:47 compute-0 ceph-mon[75140]: pgmap v2132: 305 pgs: 305 active+clean; 397 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Jan 26 16:21:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/79292265' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.546 239969 DEBUG nova.compute.manager [req-8e3b4a3c-9955-47ba-bb43-7ffbd042728c req-4b99c0ad-672c-45c3-b7b3-441fe914a33d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Received event network-vif-deleted-a2b8e0fe-aa5d-4113-a609-7d7d8f722fdb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:21:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542179546' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.845 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.847 239969 DEBUG nova.virt.libvirt.vif [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:21:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1027176414',display_name='tempest-TestGettingAddress-server-1027176414',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1027176414',id=125,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQQRmhcVg/kdrEK+wEbWjyXF34qlRdwvPy0otQ55i1stbQy0di0x+Z5lZr6s6TYoyDlUGpSbQ5L4J7FSSwP4IrinKkNibV2VUaY2Vkw+XX8B3+FyRjPoqy7nyNiOJOWbg==',key_name='tempest-TestGettingAddress-1914910139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-1qp01esd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:21:39Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=9ad7b43b-be73-4270-af57-e061d3581019,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.847 239969 DEBUG nova.network.os_vif_util [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.849 239969 DEBUG nova.network.os_vif_util [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ca:d7,bridge_name='br-int',has_traffic_filtering=True,id=499173ab-2069-456f-bf77-0e96d8e9c43b,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499173ab-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.850 239969 DEBUG nova.objects.instance [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ad7b43b-be73-4270-af57-e061d3581019 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.864 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <uuid>9ad7b43b-be73-4270-af57-e061d3581019</uuid>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <name>instance-0000007d</name>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-1027176414</nova:name>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:21:46</nova:creationTime>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <nova:port uuid="499173ab-2069-456f-bf77-0e96d8e9c43b">
Jan 26 16:21:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fedc:cad7" ipVersion="6"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fedc:cad7" ipVersion="6"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <system>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <entry name="serial">9ad7b43b-be73-4270-af57-e061d3581019</entry>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <entry name="uuid">9ad7b43b-be73-4270-af57-e061d3581019</entry>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     </system>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <os>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   </os>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <features>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   </features>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9ad7b43b-be73-4270-af57-e061d3581019_disk">
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9ad7b43b-be73-4270-af57-e061d3581019_disk.config">
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:21:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:dc:ca:d7"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <target dev="tap499173ab-20"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019/console.log" append="off"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <video>
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     </video>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:21:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:21:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:21:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:21:47 compute-0 nova_compute[239965]: </domain>
Jan 26 16:21:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.866 239969 DEBUG nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Preparing to wait for external event network-vif-plugged-499173ab-2069-456f-bf77-0e96d8e9c43b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.866 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "9ad7b43b-be73-4270-af57-e061d3581019-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.866 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.866 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.867 239969 DEBUG nova.virt.libvirt.vif [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:21:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1027176414',display_name='tempest-TestGettingAddress-server-1027176414',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1027176414',id=125,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQQRmhcVg/kdrEK+wEbWjyXF34qlRdwvPy0otQ55i1stbQy0di0x+Z5lZr6s6TYoyDlUGpSbQ5L4J7FSSwP4IrinKkNibV2VUaY2Vkw+XX8B3+FyRjPoqy7nyNiOJOWbg==',key_name='tempest-TestGettingAddress-1914910139',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-1qp01esd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:21:39Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=9ad7b43b-be73-4270-af57-e061d3581019,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.867 239969 DEBUG nova.network.os_vif_util [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.868 239969 DEBUG nova.network.os_vif_util [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ca:d7,bridge_name='br-int',has_traffic_filtering=True,id=499173ab-2069-456f-bf77-0e96d8e9c43b,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499173ab-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.869 239969 DEBUG os_vif [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ca:d7,bridge_name='br-int',has_traffic_filtering=True,id=499173ab-2069-456f-bf77-0e96d8e9c43b,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499173ab-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.869 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.869 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.870 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.873 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.874 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap499173ab-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.874 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap499173ab-20, col_values=(('external_ids', {'iface-id': '499173ab-2069-456f-bf77-0e96d8e9c43b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:ca:d7', 'vm-uuid': '9ad7b43b-be73-4270-af57-e061d3581019'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.876 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:47 compute-0 NetworkManager[48954]: <info>  [1769444507.8771] manager: (tap499173ab-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.878 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.883 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.884 239969 INFO os_vif [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:ca:d7,bridge_name='br-int',has_traffic_filtering=True,id=499173ab-2069-456f-bf77-0e96d8e9c43b,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499173ab-20')
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.953 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.959 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.959 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.960 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:dc:ca:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.960 239969 INFO nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Using config drive
Jan 26 16:21:47 compute-0 nova_compute[239965]: 2026-01-26 16:21:47.981 239969 DEBUG nova.storage.rbd_utils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 9ad7b43b-be73-4270-af57-e061d3581019_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.366 239969 INFO nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Creating config drive at /var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019/disk.config
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.373 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyp_9uoon execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2542179546' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.516 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyp_9uoon" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.546 239969 DEBUG nova.storage.rbd_utils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 9ad7b43b-be73-4270-af57-e061d3581019_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.548 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019/disk.config 9ad7b43b-be73-4270-af57-e061d3581019_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.680 239969 DEBUG oslo_concurrency.processutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019/disk.config 9ad7b43b-be73-4270-af57-e061d3581019_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.680 239969 INFO nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Deleting local config drive /var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019/disk.config because it was imported into RBD.
Jan 26 16:21:48 compute-0 kernel: tap499173ab-20: entered promiscuous mode
Jan 26 16:21:48 compute-0 NetworkManager[48954]: <info>  [1769444508.7370] manager: (tap499173ab-20): new Tun device (/org/freedesktop/NetworkManager/Devices/539)
Jan 26 16:21:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:21:48 compute-0 ovn_controller[146046]: 2026-01-26T16:21:48Z|01293|binding|INFO|Claiming lport 499173ab-2069-456f-bf77-0e96d8e9c43b for this chassis.
Jan 26 16:21:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4213036623' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:21:48 compute-0 ovn_controller[146046]: 2026-01-26T16:21:48Z|01294|binding|INFO|499173ab-2069-456f-bf77-0e96d8e9c43b: Claiming fa:16:3e:dc:ca:d7 10.100.0.10 2001:db8:0:1:f816:3eff:fedc:cad7 2001:db8::f816:3eff:fedc:cad7
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.741 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:21:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4213036623' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.753 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:ca:d7 10.100.0.10 2001:db8:0:1:f816:3eff:fedc:cad7 2001:db8::f816:3eff:fedc:cad7'], port_security=['fa:16:3e:dc:ca:d7 10.100.0.10 2001:db8:0:1:f816:3eff:fedc:cad7 2001:db8::f816:3eff:fedc:cad7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fedc:cad7/64 2001:db8::f816:3eff:fedc:cad7/64', 'neutron:device_id': '9ad7b43b-be73-4270-af57-e061d3581019', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01961e6c-51dc-4713-a95f-047c09f9eb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c934df91-a8a2-4191-8341-9c66e4ad8b82, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=499173ab-2069-456f-bf77-0e96d8e9c43b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.755 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 499173ab-2069-456f-bf77-0e96d8e9c43b in datapath 323586d4-0f5a-43b7-8831-7a2cc02b73e0 bound to our chassis
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.757 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 323586d4-0f5a-43b7-8831-7a2cc02b73e0
Jan 26 16:21:48 compute-0 ovn_controller[146046]: 2026-01-26T16:21:48Z|01295|binding|INFO|Setting lport 499173ab-2069-456f-bf77-0e96d8e9c43b ovn-installed in OVS
Jan 26 16:21:48 compute-0 ovn_controller[146046]: 2026-01-26T16:21:48Z|01296|binding|INFO|Setting lport 499173ab-2069-456f-bf77-0e96d8e9c43b up in Southbound
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.767 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.778 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[29c95e3b-74c9-436b-9c79-58d8edba4eb7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:48 compute-0 systemd-udevd[349637]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:21:48 compute-0 systemd-machined[208061]: New machine qemu-152-instance-0000007d.
Jan 26 16:21:48 compute-0 NetworkManager[48954]: <info>  [1769444508.7992] device (tap499173ab-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:21:48 compute-0 systemd[1]: Started Virtual Machine qemu-152-instance-0000007d.
Jan 26 16:21:48 compute-0 NetworkManager[48954]: <info>  [1769444508.8008] device (tap499173ab-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.813 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1e6b15-5c94-44a6-9910-691a9b9f5110]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.816 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0b66dbae-dfca-43c9-9525-b22c2558b1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.852 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b0cc5122-ed6b-4168-b5d6-9f7664dddee0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.875 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9f95b182-2139-4755-addb-19c55cde5963]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap323586d4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:1a:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 24, 'tx_packets': 5, 'rx_bytes': 2000, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596349, 'reachable_time': 43930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 22, 'inoctets': 1608, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 22, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1608, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 22, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349649, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.894 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b019e8e-70e9-4db1-8f9a-0b978682e92d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap323586d4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596360, 'tstamp': 596360}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349651, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap323586d4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596363, 'tstamp': 596363}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349651, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.897 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap323586d4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:48 compute-0 nova_compute[239965]: 2026-01-26 16:21:48.900 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.901 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap323586d4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.902 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.902 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap323586d4-00, col_values=(('external_ids', {'iface-id': '15bb6ba7-12a8-4618-8905-b715a4089dde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:21:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:48.903 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.033 239969 DEBUG nova.compute.manager [req-9568cbe4-8bfe-40d2-a5a5-0776808196cf req-64e5dfd6-6214-49b2-8b7b-45cbad58a8fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received event network-vif-plugged-499173ab-2069-456f-bf77-0e96d8e9c43b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.034 239969 DEBUG oslo_concurrency.lockutils [req-9568cbe4-8bfe-40d2-a5a5-0776808196cf req-64e5dfd6-6214-49b2-8b7b-45cbad58a8fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9ad7b43b-be73-4270-af57-e061d3581019-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.034 239969 DEBUG oslo_concurrency.lockutils [req-9568cbe4-8bfe-40d2-a5a5-0776808196cf req-64e5dfd6-6214-49b2-8b7b-45cbad58a8fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.035 239969 DEBUG oslo_concurrency.lockutils [req-9568cbe4-8bfe-40d2-a5a5-0776808196cf req-64e5dfd6-6214-49b2-8b7b-45cbad58a8fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.035 239969 DEBUG nova.compute.manager [req-9568cbe4-8bfe-40d2-a5a5-0776808196cf req-64e5dfd6-6214-49b2-8b7b-45cbad58a8fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Processing event network-vif-plugged-499173ab-2069-456f-bf77-0e96d8e9c43b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002227558555287558 of space, bias 1.0, pg target 0.6682675665862674 quantized to 32 (current 32)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010184463427201406 of space, bias 1.0, pg target 0.30553390281604215 quantized to 32 (current 32)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.067479961216663e-07 of space, bias 4.0, pg target 0.0008480975953459996 quantized to 16 (current 16)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.152 239969 DEBUG nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.154 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444509.1518352, 9ad7b43b-be73-4270-af57-e061d3581019 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.154 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] VM Started (Lifecycle Event)
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.158 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.163 239969 INFO nova.virt.libvirt.driver [-] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Instance spawned successfully.
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.164 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.180 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.186 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.202 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.203 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.204 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.205 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.206 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.206 239969 DEBUG nova.virt.libvirt.driver [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.211 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.212 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444509.1520221, 9ad7b43b-be73-4270-af57-e061d3581019 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.212 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] VM Paused (Lifecycle Event)
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.246 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.249 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444509.1578965, 9ad7b43b-be73-4270-af57-e061d3581019 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.249 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] VM Resumed (Lifecycle Event)
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.272 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.275 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.284 239969 INFO nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Took 10.15 seconds to spawn the instance on the hypervisor.
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.285 239969 DEBUG nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.297 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.362 239969 INFO nova.compute.manager [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Took 11.40 seconds to build instance.
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.378 239969 DEBUG oslo_concurrency.lockutils [None req-6fef5ca8-fd07-4ddd-a7c1-cf7b30357600 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.381 239969 DEBUG nova.network.neutron [req-c8aa5fa4-4c57-4722-bf73-0f50702713ad req-b2ec24be-c205-4f38-b197-eae133eb5702 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Updated VIF entry in instance network info cache for port 499173ab-2069-456f-bf77-0e96d8e9c43b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.382 239969 DEBUG nova.network.neutron [req-c8aa5fa4-4c57-4722-bf73-0f50702713ad req-b2ec24be-c205-4f38-b197-eae133eb5702 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Updating instance_info_cache with network_info: [{"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.395 239969 DEBUG oslo_concurrency.lockutils [req-c8aa5fa4-4c57-4722-bf73-0f50702713ad req-b2ec24be-c205-4f38-b197-eae133eb5702 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:49 compute-0 ceph-mon[75140]: pgmap v2133: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Jan 26 16:21:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4213036623' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:21:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4213036623' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:21:49 compute-0 nova_compute[239965]: 2026-01-26 16:21:49.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:21:50 compute-0 ovn_controller[146046]: 2026-01-26T16:21:50Z|01297|binding|INFO|Releasing lport 15bb6ba7-12a8-4618-8905-b715a4089dde from this chassis (sb_readonly=0)
Jan 26 16:21:50 compute-0 ovn_controller[146046]: 2026-01-26T16:21:50Z|01298|binding|INFO|Releasing lport bd57d6a4-a740-4833-a7e1-a3ae36073633 from this chassis (sb_readonly=0)
Jan 26 16:21:50 compute-0 ovn_controller[146046]: 2026-01-26T16:21:50Z|01299|binding|INFO|Releasing lport 7ccaaade-f6a1-403a-ae4b-e4af874cc1b7 from this chassis (sb_readonly=0)
Jan 26 16:21:50 compute-0 nova_compute[239965]: 2026-01-26 16:21:50.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:50 compute-0 ovn_controller[146046]: 2026-01-26T16:21:50Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:78:9e 10.100.0.6
Jan 26 16:21:50 compute-0 ovn_controller[146046]: 2026-01-26T16:21:50Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:78:9e 10.100.0.6
Jan 26 16:21:50 compute-0 nova_compute[239965]: 2026-01-26 16:21:50.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:21:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1013 KiB/s rd, 1.8 MiB/s wr, 113 op/s
Jan 26 16:21:51 compute-0 nova_compute[239965]: 2026-01-26 16:21:51.135 239969 DEBUG nova.compute.manager [req-d949e988-fc76-496a-ac2a-71dda4c38301 req-4fb3a5e8-b43c-4752-9bb4-173eda061ff6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received event network-vif-plugged-499173ab-2069-456f-bf77-0e96d8e9c43b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:51 compute-0 nova_compute[239965]: 2026-01-26 16:21:51.135 239969 DEBUG oslo_concurrency.lockutils [req-d949e988-fc76-496a-ac2a-71dda4c38301 req-4fb3a5e8-b43c-4752-9bb4-173eda061ff6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9ad7b43b-be73-4270-af57-e061d3581019-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:51 compute-0 nova_compute[239965]: 2026-01-26 16:21:51.135 239969 DEBUG oslo_concurrency.lockutils [req-d949e988-fc76-496a-ac2a-71dda4c38301 req-4fb3a5e8-b43c-4752-9bb4-173eda061ff6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:51 compute-0 nova_compute[239965]: 2026-01-26 16:21:51.136 239969 DEBUG oslo_concurrency.lockutils [req-d949e988-fc76-496a-ac2a-71dda4c38301 req-4fb3a5e8-b43c-4752-9bb4-173eda061ff6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:51 compute-0 nova_compute[239965]: 2026-01-26 16:21:51.136 239969 DEBUG nova.compute.manager [req-d949e988-fc76-496a-ac2a-71dda4c38301 req-4fb3a5e8-b43c-4752-9bb4-173eda061ff6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] No waiting events found dispatching network-vif-plugged-499173ab-2069-456f-bf77-0e96d8e9c43b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:21:51 compute-0 nova_compute[239965]: 2026-01-26 16:21:51.136 239969 WARNING nova.compute.manager [req-d949e988-fc76-496a-ac2a-71dda4c38301 req-4fb3a5e8-b43c-4752-9bb4-173eda061ff6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received unexpected event network-vif-plugged-499173ab-2069-456f-bf77-0e96d8e9c43b for instance with vm_state active and task_state None.
Jan 26 16:21:51 compute-0 ceph-mon[75140]: pgmap v2134: 305 pgs: 305 active+clean; 339 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 1013 KiB/s rd, 1.8 MiB/s wr, 113 op/s
Jan 26 16:21:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 251 op/s
Jan 26 16:21:52 compute-0 ceph-mon[75140]: pgmap v2135: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.9 MiB/s wr, 251 op/s
Jan 26 16:21:52 compute-0 nova_compute[239965]: 2026-01-26 16:21:52.876 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:52 compute-0 nova_compute[239965]: 2026-01-26 16:21:52.959 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.293 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444498.271849, 5544e5ad-3c83-4a30-9534-f7a0e6f29a04 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.294 239969 INFO nova.compute.manager [-] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] VM Stopped (Lifecycle Event)
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.301 239969 DEBUG nova.compute.manager [req-55564150-f7a4-441f-a06a-23c8f1b872ab req-5818ac4f-1618-4356-9e9d-530d97d9b608 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received event network-changed-499173ab-2069-456f-bf77-0e96d8e9c43b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.301 239969 DEBUG nova.compute.manager [req-55564150-f7a4-441f-a06a-23c8f1b872ab req-5818ac4f-1618-4356-9e9d-530d97d9b608 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Refreshing instance network info cache due to event network-changed-499173ab-2069-456f-bf77-0e96d8e9c43b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.301 239969 DEBUG oslo_concurrency.lockutils [req-55564150-f7a4-441f-a06a-23c8f1b872ab req-5818ac4f-1618-4356-9e9d-530d97d9b608 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.302 239969 DEBUG oslo_concurrency.lockutils [req-55564150-f7a4-441f-a06a-23c8f1b872ab req-5818ac4f-1618-4356-9e9d-530d97d9b608 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.302 239969 DEBUG nova.network.neutron [req-55564150-f7a4-441f-a06a-23c8f1b872ab req-5818ac4f-1618-4356-9e9d-530d97d9b608 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Refreshing network info cache for port 499173ab-2069-456f-bf77-0e96d8e9c43b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.319 239969 DEBUG nova.compute.manager [None req-d7fbf454-8d0f-4b5e-a6fd-5669a3c1dac6 - - - - - -] [instance: 5544e5ad-3c83-4a30-9534-f7a0e6f29a04] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.753 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.754 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.754 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:21:53 compute-0 nova_compute[239965]: 2026-01-26 16:21:53.754 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1abd864-1de3-45b1-8fbc-3885a84b1363 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:21:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 165 op/s
Jan 26 16:21:55 compute-0 ceph-mon[75140]: pgmap v2136: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 165 op/s
Jan 26 16:21:56 compute-0 nova_compute[239965]: 2026-01-26 16:21:56.338 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:56 compute-0 nova_compute[239965]: 2026-01-26 16:21:56.474 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updating instance_info_cache with network_info: [{"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:56 compute-0 nova_compute[239965]: 2026-01-26 16:21:56.490 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:56 compute-0 nova_compute[239965]: 2026-01-26 16:21:56.491 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:21:56 compute-0 nova_compute[239965]: 2026-01-26 16:21:56.491 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:21:56 compute-0 nova_compute[239965]: 2026-01-26 16:21:56.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:21:56 compute-0 nova_compute[239965]: 2026-01-26 16:21:56.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:21:56 compute-0 nova_compute[239965]: 2026-01-26 16:21:56.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:21:56 compute-0 nova_compute[239965]: 2026-01-26 16:21:56.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:21:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 165 op/s
Jan 26 16:21:56 compute-0 ceph-mon[75140]: pgmap v2137: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 165 op/s
Jan 26 16:21:57 compute-0 nova_compute[239965]: 2026-01-26 16:21:57.860 239969 DEBUG nova.network.neutron [req-55564150-f7a4-441f-a06a-23c8f1b872ab req-5818ac4f-1618-4356-9e9d-530d97d9b608 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Updated VIF entry in instance network info cache for port 499173ab-2069-456f-bf77-0e96d8e9c43b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:21:57 compute-0 nova_compute[239965]: 2026-01-26 16:21:57.861 239969 DEBUG nova.network.neutron [req-55564150-f7a4-441f-a06a-23c8f1b872ab req-5818ac4f-1618-4356-9e9d-530d97d9b608 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Updating instance_info_cache with network_info: [{"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:21:57 compute-0 nova_compute[239965]: 2026-01-26 16:21:57.879 239969 DEBUG oslo_concurrency.lockutils [req-55564150-f7a4-441f-a06a-23c8f1b872ab req-5818ac4f-1618-4356-9e9d-530d97d9b608 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:21:57 compute-0 nova_compute[239965]: 2026-01-26 16:21:57.880 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:57 compute-0 nova_compute[239965]: 2026-01-26 16:21:57.958 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:21:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:21:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Jan 26 16:21:59 compute-0 nova_compute[239965]: 2026-01-26 16:21:59.141 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444504.1401675, 84cd45c5-937a-4d42-9e80-2aaa7aa558c2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:21:59 compute-0 nova_compute[239965]: 2026-01-26 16:21:59.142 239969 INFO nova.compute.manager [-] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] VM Stopped (Lifecycle Event)
Jan 26 16:21:59 compute-0 nova_compute[239965]: 2026-01-26 16:21:59.169 239969 DEBUG nova.compute.manager [None req-2e426717-f3b7-4eb5-9f94-053de8e34b42 - - - - - -] [instance: 84cd45c5-937a-4d42-9e80-2aaa7aa558c2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:21:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:59.246 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:59.247 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:21:59.248 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:59 compute-0 nova_compute[239965]: 2026-01-26 16:21:59.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:21:59 compute-0 nova_compute[239965]: 2026-01-26 16:21:59.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:21:59 compute-0 nova_compute[239965]: 2026-01-26 16:21:59.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:21:59 compute-0 nova_compute[239965]: 2026-01-26 16:21:59.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:21:59 compute-0 nova_compute[239965]: 2026-01-26 16:21:59.542 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:21:59 compute-0 nova_compute[239965]: 2026-01-26 16:21:59.542 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:21:59 compute-0 ceph-mon[75140]: pgmap v2138: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 158 op/s
Jan 26 16:22:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:22:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/246013977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.134 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.226 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.228 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.234 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.235 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.240 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.240 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.244 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.245 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:22:00 compute-0 sshd-session[349715]: Invalid user sol from 45.148.10.240 port 38658
Jan 26 16:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.431 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.432 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2849MB free_disk=59.82988188415766GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.432 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.433 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:00 compute-0 sshd-session[349715]: Connection closed by invalid user sol 45.148.10.240 port 38658 [preauth]
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.513 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance b1abd864-1de3-45b1-8fbc-3885a84b1363 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.514 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 59934c7e-3c2e-4e68-9629-663ecfc6692d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.514 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 11befbe4-5dfe-4019-9aae-812949d43f40 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.514 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 9ad7b43b-be73-4270-af57-e061d3581019 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.514 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.514 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:22:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 26 16:22:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/246013977' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:00 compute-0 ceph-mon[75140]: pgmap v2139: 305 pgs: 305 active+clean; 372 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 138 op/s
Jan 26 16:22:00 compute-0 nova_compute[239965]: 2026-01-26 16:22:00.603 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:22:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688732622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:01 compute-0 nova_compute[239965]: 2026-01-26 16:22:01.183 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:01 compute-0 nova_compute[239965]: 2026-01-26 16:22:01.190 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:22:01 compute-0 nova_compute[239965]: 2026-01-26 16:22:01.216 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:22:01 compute-0 nova_compute[239965]: 2026-01-26 16:22:01.244 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:22:01 compute-0 nova_compute[239965]: 2026-01-26 16:22:01.245 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.812s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2688732622' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:02 compute-0 nova_compute[239965]: 2026-01-26 16:22:02.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 402 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 198 op/s
Jan 26 16:22:02 compute-0 ceph-mon[75140]: pgmap v2140: 305 pgs: 305 active+clean; 402 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 198 op/s
Jan 26 16:22:02 compute-0 nova_compute[239965]: 2026-01-26 16:22:02.881 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:02 compute-0 ovn_controller[146046]: 2026-01-26T16:22:02Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:ca:d7 10.100.0.10
Jan 26 16:22:02 compute-0 ovn_controller[146046]: 2026-01-26T16:22:02Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:ca:d7 10.100.0.10
Jan 26 16:22:03 compute-0 nova_compute[239965]: 2026-01-26 16:22:03.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:03 compute-0 podman[349742]: 2026-01-26 16:22:03.409152674 +0000 UTC m=+0.077524159 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 26 16:22:03 compute-0 podman[349743]: 2026-01-26 16:22:03.459413085 +0000 UTC m=+0.123674339 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:22:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 402 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 16:22:04 compute-0 ceph-mon[75140]: pgmap v2141: 305 pgs: 305 active+clean; 402 MiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.459 239969 DEBUG nova.compute.manager [req-10c6d6ed-2773-43df-9212-72d7e252bc17 req-3209621a-e1dc-4ff3-ae63-9f10b87e19cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received event network-changed-8d829ab8-dfca-4c55-9775-0be98c68aab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.460 239969 DEBUG nova.compute.manager [req-10c6d6ed-2773-43df-9212-72d7e252bc17 req-3209621a-e1dc-4ff3-ae63-9f10b87e19cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Refreshing instance network info cache due to event network-changed-8d829ab8-dfca-4c55-9775-0be98c68aab4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.460 239969 DEBUG oslo_concurrency.lockutils [req-10c6d6ed-2773-43df-9212-72d7e252bc17 req-3209621a-e1dc-4ff3-ae63-9f10b87e19cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.460 239969 DEBUG oslo_concurrency.lockutils [req-10c6d6ed-2773-43df-9212-72d7e252bc17 req-3209621a-e1dc-4ff3-ae63-9f10b87e19cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.460 239969 DEBUG nova.network.neutron [req-10c6d6ed-2773-43df-9212-72d7e252bc17 req-3209621a-e1dc-4ff3-ae63-9f10b87e19cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Refreshing network info cache for port 8d829ab8-dfca-4c55-9775-0be98c68aab4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.581 239969 DEBUG oslo_concurrency.lockutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquiring lock "11befbe4-5dfe-4019-9aae-812949d43f40" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.582 239969 DEBUG oslo_concurrency.lockutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.582 239969 DEBUG oslo_concurrency.lockutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquiring lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.583 239969 DEBUG oslo_concurrency.lockutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.583 239969 DEBUG oslo_concurrency.lockutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.585 239969 INFO nova.compute.manager [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Terminating instance
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.586 239969 DEBUG nova.compute.manager [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:22:05 compute-0 kernel: tap8d829ab8-df (unregistering): left promiscuous mode
Jan 26 16:22:05 compute-0 NetworkManager[48954]: <info>  [1769444525.6289] device (tap8d829ab8-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:22:05 compute-0 ovn_controller[146046]: 2026-01-26T16:22:05Z|01300|binding|INFO|Releasing lport 8d829ab8-dfca-4c55-9775-0be98c68aab4 from this chassis (sb_readonly=0)
Jan 26 16:22:05 compute-0 ovn_controller[146046]: 2026-01-26T16:22:05Z|01301|binding|INFO|Setting lport 8d829ab8-dfca-4c55-9775-0be98c68aab4 down in Southbound
Jan 26 16:22:05 compute-0 ovn_controller[146046]: 2026-01-26T16:22:05Z|01302|binding|INFO|Removing iface tap8d829ab8-df ovn-installed in OVS
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.640 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.643 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.651 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:78:9e 10.100.0.6'], port_security=['fa:16:3e:27:78:9e 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '11befbe4-5dfe-4019-9aae-812949d43f40', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '7e6e29d3d46a48db9f596eca01b3ecf4', 'neutron:revision_number': '4', 'neutron:security_group_ids': '53cf850d-c1bb-414f-955d-63f5cc333cb9 dbf8351f-d0fc-40d2-8918-be28528f3ee9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a019195-0947-4dd6-b1b9-9a90f0c4f83c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=8d829ab8-dfca-4c55-9775-0be98c68aab4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.652 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 8d829ab8-dfca-4c55-9775-0be98c68aab4 in datapath e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5 unbound from our chassis
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.654 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.657 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.656 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4d07c92f-b4e6-43fc-a69f-77b41e1aa79d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.658 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5 namespace which is not needed anymore
Jan 26 16:22:05 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Jan 26 16:22:05 compute-0 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d0000007c.scope: Consumed 14.691s CPU time.
Jan 26 16:22:05 compute-0 systemd-machined[208061]: Machine qemu-151-instance-0000007c terminated.
Jan 26 16:22:05 compute-0 neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5[349114]: [NOTICE]   (349118) : haproxy version is 2.8.14-c23fe91
Jan 26 16:22:05 compute-0 neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5[349114]: [NOTICE]   (349118) : path to executable is /usr/sbin/haproxy
Jan 26 16:22:05 compute-0 neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5[349114]: [WARNING]  (349118) : Exiting Master process...
Jan 26 16:22:05 compute-0 neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5[349114]: [WARNING]  (349118) : Exiting Master process...
Jan 26 16:22:05 compute-0 neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5[349114]: [ALERT]    (349118) : Current worker (349120) exited with code 143 (Terminated)
Jan 26 16:22:05 compute-0 neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5[349114]: [WARNING]  (349118) : All workers exited. Exiting... (0)
Jan 26 16:22:05 compute-0 systemd[1]: libpod-aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6.scope: Deactivated successfully.
Jan 26 16:22:05 compute-0 podman[349808]: 2026-01-26 16:22:05.791688114 +0000 UTC m=+0.051757939 container died aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.812 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6-userdata-shm.mount: Deactivated successfully.
Jan 26 16:22:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2aeff431a17394afbfab947441472cd2fae701a15231838a7025d864e19b795a-merged.mount: Deactivated successfully.
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.828 239969 INFO nova.virt.libvirt.driver [-] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Instance destroyed successfully.
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.829 239969 DEBUG nova.objects.instance [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lazy-loading 'resources' on Instance uuid 11befbe4-5dfe-4019-9aae-812949d43f40 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:05 compute-0 podman[349808]: 2026-01-26 16:22:05.839278159 +0000 UTC m=+0.099347974 container cleanup aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:22:05 compute-0 systemd[1]: libpod-conmon-aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6.scope: Deactivated successfully.
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.849 239969 DEBUG nova.virt.libvirt.vif [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:21:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-583381645-access_point-28386830',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-583381645-access_point-28386830',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-583381645-acc',id=124,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMRKXEm5Y5llTOC9cbXvKrIQsHtbd/UG4rVHmOjmygEUQvHShGZ/21+93d2lJj9E5vpMWpihy3NWJyDHc6jpN2SJp0Z6+07v3X1U/zidwO26JsBXKa8n76jvJ3WGYRUsdw==',key_name='tempest-TestSecurityGroupsBasicOps-1479257615',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:21:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='7e6e29d3d46a48db9f596eca01b3ecf4',ramdisk_id='',reservation_id='r-610t2003',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-583381645',owner_user_name='tempest-TestSecurityGroupsBasicOps-583381645-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:21:36Z,user_data=None,user_id='7494b8fe226247c9accec02ae2beee97',uuid=11befbe4-5dfe-4019-9aae-812949d43f40,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.849 239969 DEBUG nova.network.os_vif_util [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Converting VIF {"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.184", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.850 239969 DEBUG nova.network.os_vif_util [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:78:9e,bridge_name='br-int',has_traffic_filtering=True,id=8d829ab8-dfca-4c55-9775-0be98c68aab4,network=Network(e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d829ab8-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.850 239969 DEBUG os_vif [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:78:9e,bridge_name='br-int',has_traffic_filtering=True,id=8d829ab8-dfca-4c55-9775-0be98c68aab4,network=Network(e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d829ab8-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.852 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.853 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d829ab8-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.854 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.856 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.861 239969 INFO os_vif [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:78:9e,bridge_name='br-int',has_traffic_filtering=True,id=8d829ab8-dfca-4c55-9775-0be98c68aab4,network=Network(e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d829ab8-df')
Jan 26 16:22:05 compute-0 podman[349846]: 2026-01-26 16:22:05.918607432 +0000 UTC m=+0.048786146 container remove aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.927 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6458b5e5-0b99-4578-bab6-6d1e51c78268]: (4, ('Mon Jan 26 04:22:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5 (aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6)\naac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6\nMon Jan 26 04:22:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5 (aac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6)\naac225602298543735eab6c357d8f7aabfb7a53ff4898c773d655ef84c129aa6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.930 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4280b3-edc9-41ed-817a-4072a11a6717]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.932 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape8995ce4-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:05 compute-0 kernel: tape8995ce4-f0: left promiscuous mode
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 nova_compute[239965]: 2026-01-26 16:22:05.953 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.957 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[73243188-8863-4841-ab4e-6f2b52fbe933]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.973 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8a7b824b-c8e3-45a3-84e3-2ba4e3020e8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.974 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3d92a170-f1cd-4b22-87a2-cf989edb0e7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.995 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d8b580c6-8315-43a2-9255-226c4a5d0455]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 598640, 'reachable_time': 18507, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349879, 'error': None, 'target': 'ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.999 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:22:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:05.999 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[046a70e3-e023-4ef4-8b03-4483959d2cf0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:06 compute-0 systemd[1]: run-netns-ovnmeta\x2de8995ce4\x2df9c0\x2d4d8b\x2da300\x2d5ed1ccb72ba5.mount: Deactivated successfully.
Jan 26 16:22:06 compute-0 nova_compute[239965]: 2026-01-26 16:22:06.157 239969 INFO nova.virt.libvirt.driver [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Deleting instance files /var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40_del
Jan 26 16:22:06 compute-0 nova_compute[239965]: 2026-01-26 16:22:06.158 239969 INFO nova.virt.libvirt.driver [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Deletion of /var/lib/nova/instances/11befbe4-5dfe-4019-9aae-812949d43f40_del complete
Jan 26 16:22:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 377 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 26 16:22:06 compute-0 ceph-mon[75140]: pgmap v2142: 305 pgs: 305 active+clean; 377 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 68 op/s
Jan 26 16:22:07 compute-0 nova_compute[239965]: 2026-01-26 16:22:07.272 239969 INFO nova.compute.manager [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Took 1.69 seconds to destroy the instance on the hypervisor.
Jan 26 16:22:07 compute-0 nova_compute[239965]: 2026-01-26 16:22:07.274 239969 DEBUG oslo.service.loopingcall [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:22:07 compute-0 nova_compute[239965]: 2026-01-26 16:22:07.274 239969 DEBUG nova.compute.manager [-] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:22:07 compute-0 nova_compute[239965]: 2026-01-26 16:22:07.274 239969 DEBUG nova.network.neutron [-] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.015 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.087 239969 DEBUG nova.compute.manager [req-6fd95c72-af58-4414-8210-018ce4e87832 req-d07906b7-e667-45c6-a9c2-24b93978ce71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received event network-vif-unplugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.088 239969 DEBUG oslo_concurrency.lockutils [req-6fd95c72-af58-4414-8210-018ce4e87832 req-d07906b7-e667-45c6-a9c2-24b93978ce71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.088 239969 DEBUG oslo_concurrency.lockutils [req-6fd95c72-af58-4414-8210-018ce4e87832 req-d07906b7-e667-45c6-a9c2-24b93978ce71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.088 239969 DEBUG oslo_concurrency.lockutils [req-6fd95c72-af58-4414-8210-018ce4e87832 req-d07906b7-e667-45c6-a9c2-24b93978ce71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.088 239969 DEBUG nova.compute.manager [req-6fd95c72-af58-4414-8210-018ce4e87832 req-d07906b7-e667-45c6-a9c2-24b93978ce71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] No waiting events found dispatching network-vif-unplugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.089 239969 DEBUG nova.compute.manager [req-6fd95c72-af58-4414-8210-018ce4e87832 req-d07906b7-e667-45c6-a9c2-24b93978ce71 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received event network-vif-unplugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:22:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.425 239969 DEBUG nova.network.neutron [-] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.443 239969 DEBUG nova.network.neutron [req-10c6d6ed-2773-43df-9212-72d7e252bc17 req-3209621a-e1dc-4ff3-ae63-9f10b87e19cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Updated VIF entry in instance network info cache for port 8d829ab8-dfca-4c55-9775-0be98c68aab4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.443 239969 DEBUG nova.network.neutron [req-10c6d6ed-2773-43df-9212-72d7e252bc17 req-3209621a-e1dc-4ff3-ae63-9f10b87e19cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Updating instance_info_cache with network_info: [{"id": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "address": "fa:16:3e:27:78:9e", "network": {"id": "e8995ce4-f9c0-4d8b-a300-5ed1ccb72ba5", "bridge": "br-int", "label": "tempest-network-smoke--493055974", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "7e6e29d3d46a48db9f596eca01b3ecf4", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d829ab8-df", "ovs_interfaceid": "8d829ab8-dfca-4c55-9775-0be98c68aab4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.451 239969 INFO nova.compute.manager [-] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Took 1.18 seconds to deallocate network for instance.
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.469 239969 DEBUG oslo_concurrency.lockutils [req-10c6d6ed-2773-43df-9212-72d7e252bc17 req-3209621a-e1dc-4ff3-ae63-9f10b87e19cc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-11befbe4-5dfe-4019-9aae-812949d43f40" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.509 239969 DEBUG oslo_concurrency.lockutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.510 239969 DEBUG oslo_concurrency.lockutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 326 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 26 16:22:08 compute-0 nova_compute[239965]: 2026-01-26 16:22:08.634 239969 DEBUG oslo_concurrency.processutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:08 compute-0 ceph-mon[75140]: pgmap v2143: 305 pgs: 305 active+clean; 326 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Jan 26 16:22:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:22:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3222924885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:09 compute-0 nova_compute[239965]: 2026-01-26 16:22:09.267 239969 DEBUG oslo_concurrency.processutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:09 compute-0 nova_compute[239965]: 2026-01-26 16:22:09.274 239969 DEBUG nova.compute.provider_tree [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:22:09 compute-0 nova_compute[239965]: 2026-01-26 16:22:09.293 239969 DEBUG nova.scheduler.client.report [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:22:09 compute-0 nova_compute[239965]: 2026-01-26 16:22:09.320 239969 DEBUG oslo_concurrency.lockutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:09 compute-0 nova_compute[239965]: 2026-01-26 16:22:09.352 239969 INFO nova.scheduler.client.report [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Deleted allocations for instance 11befbe4-5dfe-4019-9aae-812949d43f40
Jan 26 16:22:09 compute-0 nova_compute[239965]: 2026-01-26 16:22:09.416 239969 DEBUG oslo_concurrency.lockutils [None req-a7d50896-3f6b-4f0d-9993-a81b51d6961c 7494b8fe226247c9accec02ae2beee97 7e6e29d3d46a48db9f596eca01b3ecf4 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.834s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3222924885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.160 239969 DEBUG nova.compute.manager [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received event network-vif-plugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.161 239969 DEBUG oslo_concurrency.lockutils [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.162 239969 DEBUG oslo_concurrency.lockutils [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.162 239969 DEBUG oslo_concurrency.lockutils [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "11befbe4-5dfe-4019-9aae-812949d43f40-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.162 239969 DEBUG nova.compute.manager [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] No waiting events found dispatching network-vif-plugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.163 239969 WARNING nova.compute.manager [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received unexpected event network-vif-plugged-8d829ab8-dfca-4c55-9775-0be98c68aab4 for instance with vm_state deleted and task_state None.
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.163 239969 DEBUG nova.compute.manager [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Received event network-vif-deleted-8d829ab8-dfca-4c55-9775-0be98c68aab4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.164 239969 INFO nova.compute.manager [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Neutron deleted interface 8d829ab8-dfca-4c55-9775-0be98c68aab4; detaching it from the instance and deleting it from the info cache
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.164 239969 DEBUG nova.network.neutron [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.168 239969 DEBUG nova.compute.manager [req-a59b22f7-ef5a-4fd4-acc6-985f88f97cf4 req-a0ebc889-6270-4b0e-ba2d-5816d6ebcf26 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Detach interface failed, port_id=8d829ab8-dfca-4c55-9775-0be98c68aab4, reason: Instance 11befbe4-5dfe-4019-9aae-812949d43f40 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:22:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 326 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 26 16:22:10 compute-0 ceph-mon[75140]: pgmap v2144: 305 pgs: 305 active+clean; 326 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Jan 26 16:22:10 compute-0 nova_compute[239965]: 2026-01-26 16:22:10.855 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.810 239969 DEBUG oslo_concurrency.lockutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "9ad7b43b-be73-4270-af57-e061d3581019" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.811 239969 DEBUG oslo_concurrency.lockutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.811 239969 DEBUG oslo_concurrency.lockutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "9ad7b43b-be73-4270-af57-e061d3581019-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.811 239969 DEBUG oslo_concurrency.lockutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.811 239969 DEBUG oslo_concurrency.lockutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.812 239969 INFO nova.compute.manager [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Terminating instance
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.814 239969 DEBUG nova.compute.manager [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:22:11 compute-0 kernel: tap499173ab-20 (unregistering): left promiscuous mode
Jan 26 16:22:11 compute-0 NetworkManager[48954]: <info>  [1769444531.8693] device (tap499173ab-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.878 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:11 compute-0 ovn_controller[146046]: 2026-01-26T16:22:11Z|01303|binding|INFO|Releasing lport 499173ab-2069-456f-bf77-0e96d8e9c43b from this chassis (sb_readonly=0)
Jan 26 16:22:11 compute-0 ovn_controller[146046]: 2026-01-26T16:22:11Z|01304|binding|INFO|Setting lport 499173ab-2069-456f-bf77-0e96d8e9c43b down in Southbound
Jan 26 16:22:11 compute-0 ovn_controller[146046]: 2026-01-26T16:22:11Z|01305|binding|INFO|Removing iface tap499173ab-20 ovn-installed in OVS
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.885 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:11.889 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:ca:d7 10.100.0.10 2001:db8:0:1:f816:3eff:fedc:cad7 2001:db8::f816:3eff:fedc:cad7'], port_security=['fa:16:3e:dc:ca:d7 10.100.0.10 2001:db8:0:1:f816:3eff:fedc:cad7 2001:db8::f816:3eff:fedc:cad7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28 2001:db8:0:1:f816:3eff:fedc:cad7/64 2001:db8::f816:3eff:fedc:cad7/64', 'neutron:device_id': '9ad7b43b-be73-4270-af57-e061d3581019', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01961e6c-51dc-4713-a95f-047c09f9eb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c934df91-a8a2-4191-8341-9c66e4ad8b82, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=499173ab-2069-456f-bf77-0e96d8e9c43b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:11.891 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 499173ab-2069-456f-bf77-0e96d8e9c43b in datapath 323586d4-0f5a-43b7-8831-7a2cc02b73e0 unbound from our chassis
Jan 26 16:22:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:11.892 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 323586d4-0f5a-43b7-8831-7a2cc02b73e0
Jan 26 16:22:11 compute-0 nova_compute[239965]: 2026-01-26 16:22:11.900 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:11.918 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[355d8670-004a-4447-b10c-8fb4776485d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:11 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Jan 26 16:22:11 compute-0 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d0000007d.scope: Consumed 13.741s CPU time.
Jan 26 16:22:11 compute-0 systemd-machined[208061]: Machine qemu-152-instance-0000007d terminated.
Jan 26 16:22:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:11.951 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e3f197-4aa9-47bd-b5ec-89c0781c4957]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:11.954 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1a9f05be-e08c-4d74-95af-e967b0d273a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:11.991 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3e016a-b1f2-460e-95ce-c5087e175c21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:12.016 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7007fe4d-7f3e-4edb-954a-dfd979993024]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap323586d4-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:26:1a:41'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 42, 'tx_packets': 7, 'rx_bytes': 3468, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 370], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596349, 'reachable_time': 43930, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2768, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2768, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349915, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:12.035 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3070b392-a08f-4426-8b0d-c7cc64f9d585]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap323586d4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596360, 'tstamp': 596360}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349917, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap323586d4-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 596363, 'tstamp': 596363}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349917, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:12.036 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap323586d4-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.038 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.043 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:12.043 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap323586d4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:12.044 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:22:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:12.044 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap323586d4-00, col_values=(('external_ids', {'iface-id': '15bb6ba7-12a8-4618-8905-b715a4089dde'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:12.045 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.048 239969 INFO nova.virt.libvirt.driver [-] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Instance destroyed successfully.
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.048 239969 DEBUG nova.objects.instance [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 9ad7b43b-be73-4270-af57-e061d3581019 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.069 239969 DEBUG nova.virt.libvirt.vif [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:21:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1027176414',display_name='tempest-TestGettingAddress-server-1027176414',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1027176414',id=125,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQQRmhcVg/kdrEK+wEbWjyXF34qlRdwvPy0otQ55i1stbQy0di0x+Z5lZr6s6TYoyDlUGpSbQ5L4J7FSSwP4IrinKkNibV2VUaY2Vkw+XX8B3+FyRjPoqy7nyNiOJOWbg==',key_name='tempest-TestGettingAddress-1914910139',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:21:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-1qp01esd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:21:49Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=9ad7b43b-be73-4270-af57-e061d3581019,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.069 239969 DEBUG nova.network.os_vif_util [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.074 239969 DEBUG nova.network.os_vif_util [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:ca:d7,bridge_name='br-int',has_traffic_filtering=True,id=499173ab-2069-456f-bf77-0e96d8e9c43b,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499173ab-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.075 239969 DEBUG os_vif [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:ca:d7,bridge_name='br-int',has_traffic_filtering=True,id=499173ab-2069-456f-bf77-0e96d8e9c43b,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499173ab-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.076 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.076 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap499173ab-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.078 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.081 239969 INFO os_vif [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:ca:d7,bridge_name='br-int',has_traffic_filtering=True,id=499173ab-2069-456f-bf77-0e96d8e9c43b,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap499173ab-20')
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.202 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.203 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.219 239969 DEBUG nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.262 239969 DEBUG nova.compute.manager [req-5d68f005-c326-448c-91c3-d9ce1b4ea63c req-8873f3c9-de77-4726-8671-9f8233fc120c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received event network-changed-499173ab-2069-456f-bf77-0e96d8e9c43b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.263 239969 DEBUG nova.compute.manager [req-5d68f005-c326-448c-91c3-d9ce1b4ea63c req-8873f3c9-de77-4726-8671-9f8233fc120c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Refreshing instance network info cache due to event network-changed-499173ab-2069-456f-bf77-0e96d8e9c43b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.263 239969 DEBUG oslo_concurrency.lockutils [req-5d68f005-c326-448c-91c3-d9ce1b4ea63c req-8873f3c9-de77-4726-8671-9f8233fc120c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.264 239969 DEBUG oslo_concurrency.lockutils [req-5d68f005-c326-448c-91c3-d9ce1b4ea63c req-8873f3c9-de77-4726-8671-9f8233fc120c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.264 239969 DEBUG nova.network.neutron [req-5d68f005-c326-448c-91c3-d9ce1b4ea63c req-8873f3c9-de77-4726-8671-9f8233fc120c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Refreshing network info cache for port 499173ab-2069-456f-bf77-0e96d8e9c43b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.307 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.308 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.318 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.318 239969 INFO nova.compute.claims [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.407 239969 INFO nova.virt.libvirt.driver [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Deleting instance files /var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019_del
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.408 239969 INFO nova.virt.libvirt.driver [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Deletion of /var/lib/nova/instances/9ad7b43b-be73-4270-af57-e061d3581019_del complete
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.459 239969 INFO nova.compute.manager [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Took 0.64 seconds to destroy the instance on the hypervisor.
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.459 239969 DEBUG oslo.service.loopingcall [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.461 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.501 239969 DEBUG nova.compute.manager [-] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:22:12 compute-0 nova_compute[239965]: 2026-01-26 16:22:12.502 239969 DEBUG nova.network.neutron [-] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:22:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 300 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Jan 26 16:22:12 compute-0 ceph-mon[75140]: pgmap v2145: 305 pgs: 305 active+clean; 300 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 2.1 MiB/s wr, 97 op/s
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.018 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:22:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2233350418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.040 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.046 239969 DEBUG nova.compute.provider_tree [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.069 239969 DEBUG nova.scheduler.client.report [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.088 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.089 239969 DEBUG nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.155 239969 DEBUG nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.155 239969 DEBUG nova.network.neutron [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.184 239969 INFO nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.216 239969 DEBUG nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:22:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.321 239969 DEBUG nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.323 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.325 239969 INFO nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Creating image(s)
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.349 239969 DEBUG nova.storage.rbd_utils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.373 239969 DEBUG nova.storage.rbd_utils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.394 239969 DEBUG nova.storage.rbd_utils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.398 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.435 239969 DEBUG nova.policy [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.476 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.477 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.477 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.478 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.500 239969 DEBUG nova.storage.rbd_utils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.505 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.579 239969 DEBUG nova.network.neutron [-] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.597 239969 INFO nova.compute.manager [-] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Took 1.10 seconds to deallocate network for instance.
Jan 26 16:22:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2233350418' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.662 239969 DEBUG oslo_concurrency.lockutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.663 239969 DEBUG oslo_concurrency.lockutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.802 239969 DEBUG oslo_concurrency.processutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.840 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.915 239969 DEBUG nova.storage.rbd_utils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:22:13 compute-0 nova_compute[239965]: 2026-01-26 16:22:13.992 239969 DEBUG nova.objects.instance [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.009 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.010 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Ensure instance console log exists: /var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.011 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.011 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.012 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:14 compute-0 sudo[350156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:22:14 compute-0 sudo[350156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:14 compute-0 sudo[350156]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:14 compute-0 ovn_controller[146046]: 2026-01-26T16:22:14Z|01306|binding|INFO|Releasing lport 15bb6ba7-12a8-4618-8905-b715a4089dde from this chassis (sb_readonly=0)
Jan 26 16:22:14 compute-0 ovn_controller[146046]: 2026-01-26T16:22:14Z|01307|binding|INFO|Releasing lport 7ccaaade-f6a1-403a-ae4b-e4af874cc1b7 from this chassis (sb_readonly=0)
Jan 26 16:22:14 compute-0 sudo[350181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:22:14 compute-0 sudo[350181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.232 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:22:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2775527519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.345 239969 DEBUG oslo_concurrency.processutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.351 239969 DEBUG nova.compute.provider_tree [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.376 239969 DEBUG nova.compute.manager [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received event network-vif-unplugged-499173ab-2069-456f-bf77-0e96d8e9c43b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.377 239969 DEBUG oslo_concurrency.lockutils [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9ad7b43b-be73-4270-af57-e061d3581019-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.377 239969 DEBUG oslo_concurrency.lockutils [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.378 239969 DEBUG oslo_concurrency.lockutils [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.378 239969 DEBUG nova.compute.manager [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] No waiting events found dispatching network-vif-unplugged-499173ab-2069-456f-bf77-0e96d8e9c43b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.378 239969 WARNING nova.compute.manager [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received unexpected event network-vif-unplugged-499173ab-2069-456f-bf77-0e96d8e9c43b for instance with vm_state deleted and task_state None.
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.379 239969 DEBUG nova.compute.manager [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received event network-vif-plugged-499173ab-2069-456f-bf77-0e96d8e9c43b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.379 239969 DEBUG oslo_concurrency.lockutils [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9ad7b43b-be73-4270-af57-e061d3581019-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.379 239969 DEBUG oslo_concurrency.lockutils [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.380 239969 DEBUG oslo_concurrency.lockutils [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.380 239969 DEBUG nova.compute.manager [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] No waiting events found dispatching network-vif-plugged-499173ab-2069-456f-bf77-0e96d8e9c43b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.381 239969 WARNING nova.compute.manager [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received unexpected event network-vif-plugged-499173ab-2069-456f-bf77-0e96d8e9c43b for instance with vm_state deleted and task_state None.
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.381 239969 DEBUG nova.compute.manager [req-cb3b33b8-9b3c-4d6d-bea7-955a6f4518ee req-a0f0dc89-cc47-4f22-a755-01d4b1eaf4a4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Received event network-vif-deleted-499173ab-2069-456f-bf77-0e96d8e9c43b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.383 239969 DEBUG nova.scheduler.client.report [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.407 239969 DEBUG oslo_concurrency.lockutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.744s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.433 239969 INFO nova.scheduler.client.report [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 9ad7b43b-be73-4270-af57-e061d3581019
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.499 239969 DEBUG oslo_concurrency.lockutils [None req-0bb5406e-d39e-4296-ab9e-3e7e2baaa5fb 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "9ad7b43b-be73-4270-af57-e061d3581019" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 300 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 61 KiB/s wr, 36 op/s
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.581 239969 DEBUG nova.network.neutron [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Successfully created port: 7b88408c-5847-4417-be2b-c52e6a358bc7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:22:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2775527519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:14 compute-0 ceph-mon[75140]: pgmap v2146: 305 pgs: 305 active+clean; 300 MiB data, 969 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 61 KiB/s wr, 36 op/s
Jan 26 16:22:14 compute-0 sudo[350181]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.810 239969 DEBUG nova.network.neutron [req-5d68f005-c326-448c-91c3-d9ce1b4ea63c req-8873f3c9-de77-4726-8671-9f8233fc120c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Updated VIF entry in instance network info cache for port 499173ab-2069-456f-bf77-0e96d8e9c43b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.811 239969 DEBUG nova.network.neutron [req-5d68f005-c326-448c-91c3-d9ce1b4ea63c req-8873f3c9-de77-4726-8671-9f8233fc120c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Updating instance_info_cache with network_info: [{"id": "499173ab-2069-456f-bf77-0e96d8e9c43b", "address": "fa:16:3e:dc:ca:d7", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fedc:cad7", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap499173ab-20", "ovs_interfaceid": "499173ab-2069-456f-bf77-0e96d8e9c43b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:14 compute-0 nova_compute[239965]: 2026-01-26 16:22:14.834 239969 DEBUG oslo_concurrency.lockutils [req-5d68f005-c326-448c-91c3-d9ce1b4ea63c req-8873f3c9-de77-4726-8671-9f8233fc120c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-9ad7b43b-be73-4270-af57-e061d3581019" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:14 compute-0 sudo[350239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:22:14 compute-0 sudo[350239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:14 compute-0 sudo[350239]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:14 compute-0 sudo[350264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 26 16:22:14 compute-0 sudo[350264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:15 compute-0 sudo[350264]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:22:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:22:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:15 compute-0 sudo[350306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:22:15 compute-0 sudo[350306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:15 compute-0 sudo[350306]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:15 compute-0 sudo[350331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- inventory --format=json-pretty --filter-for-batch
Jan 26 16:22:15 compute-0 sudo[350331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:15 compute-0 nova_compute[239965]: 2026-01-26 16:22:15.680 239969 DEBUG nova.network.neutron [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Successfully updated port: 7b88408c-5847-4417-be2b-c52e6a358bc7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:22:15 compute-0 nova_compute[239965]: 2026-01-26 16:22:15.701 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:15 compute-0 nova_compute[239965]: 2026-01-26 16:22:15.702 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:15 compute-0 nova_compute[239965]: 2026-01-26 16:22:15.702 239969 DEBUG nova.network.neutron [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:22:15 compute-0 podman[350369]: 2026-01-26 16:22:15.709653325 +0000 UTC m=+0.056412943 container create 496ad4f4821a3693d9475b57cd03d2615dfb3b0745387497b035f937bc58540e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:22:15 compute-0 systemd[1]: Started libpod-conmon-496ad4f4821a3693d9475b57cd03d2615dfb3b0745387497b035f937bc58540e.scope.
Jan 26 16:22:15 compute-0 podman[350369]: 2026-01-26 16:22:15.684192231 +0000 UTC m=+0.030951899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:22:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:15 compute-0 nova_compute[239965]: 2026-01-26 16:22:15.789 239969 DEBUG nova.compute.manager [req-5094b234-fe6c-4d90-881f-fcf23c878067 req-7e011ef8-9236-4748-9158-23eb5d3fe9b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-changed-7b88408c-5847-4417-be2b-c52e6a358bc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:15 compute-0 nova_compute[239965]: 2026-01-26 16:22:15.790 239969 DEBUG nova.compute.manager [req-5094b234-fe6c-4d90-881f-fcf23c878067 req-7e011ef8-9236-4748-9158-23eb5d3fe9b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing instance network info cache due to event network-changed-7b88408c-5847-4417-be2b-c52e6a358bc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:22:15 compute-0 nova_compute[239965]: 2026-01-26 16:22:15.791 239969 DEBUG oslo_concurrency.lockutils [req-5094b234-fe6c-4d90-881f-fcf23c878067 req-7e011ef8-9236-4748-9158-23eb5d3fe9b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:15 compute-0 podman[350369]: 2026-01-26 16:22:15.805023051 +0000 UTC m=+0.151782689 container init 496ad4f4821a3693d9475b57cd03d2615dfb3b0745387497b035f937bc58540e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 16:22:15 compute-0 podman[350369]: 2026-01-26 16:22:15.813967389 +0000 UTC m=+0.160727037 container start 496ad4f4821a3693d9475b57cd03d2615dfb3b0745387497b035f937bc58540e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:22:15 compute-0 podman[350369]: 2026-01-26 16:22:15.81972116 +0000 UTC m=+0.166480788 container attach 496ad4f4821a3693d9475b57cd03d2615dfb3b0745387497b035f937bc58540e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:22:15 compute-0 pensive_villani[350386]: 167 167
Jan 26 16:22:15 compute-0 systemd[1]: libpod-496ad4f4821a3693d9475b57cd03d2615dfb3b0745387497b035f937bc58540e.scope: Deactivated successfully.
Jan 26 16:22:15 compute-0 podman[350369]: 2026-01-26 16:22:15.824940008 +0000 UTC m=+0.171699626 container died 496ad4f4821a3693d9475b57cd03d2615dfb3b0745387497b035f937bc58540e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:22:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-661b502e6f32da9f0452d14f68578b18644299b47ce56da7354b6e68e768787a-merged.mount: Deactivated successfully.
Jan 26 16:22:15 compute-0 podman[350369]: 2026-01-26 16:22:15.865902841 +0000 UTC m=+0.212662459 container remove 496ad4f4821a3693d9475b57cd03d2615dfb3b0745387497b035f937bc58540e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:22:15 compute-0 systemd[1]: libpod-conmon-496ad4f4821a3693d9475b57cd03d2615dfb3b0745387497b035f937bc58540e.scope: Deactivated successfully.
Jan 26 16:22:15 compute-0 nova_compute[239965]: 2026-01-26 16:22:15.923 239969 DEBUG nova.network.neutron [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:22:16 compute-0 podman[350411]: 2026-01-26 16:22:16.062221669 +0000 UTC m=+0.054929156 container create 1ec355fb03de61398194145b1c70e4a2b6801c7cf5d4fa21ade7069dbaa03349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 16:22:16 compute-0 systemd[1]: Started libpod-conmon-1ec355fb03de61398194145b1c70e4a2b6801c7cf5d4fa21ade7069dbaa03349.scope.
Jan 26 16:22:16 compute-0 podman[350411]: 2026-01-26 16:22:16.034674715 +0000 UTC m=+0.027382202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:22:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27459b11262f28bfbaa9f1a9a62fa4401715bd6aa26141c6a7c5e30822e49096/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27459b11262f28bfbaa9f1a9a62fa4401715bd6aa26141c6a7c5e30822e49096/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27459b11262f28bfbaa9f1a9a62fa4401715bd6aa26141c6a7c5e30822e49096/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27459b11262f28bfbaa9f1a9a62fa4401715bd6aa26141c6a7c5e30822e49096/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:16 compute-0 podman[350411]: 2026-01-26 16:22:16.159748398 +0000 UTC m=+0.152455875 container init 1ec355fb03de61398194145b1c70e4a2b6801c7cf5d4fa21ade7069dbaa03349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_napier, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 16:22:16 compute-0 podman[350411]: 2026-01-26 16:22:16.17577866 +0000 UTC m=+0.168486157 container start 1ec355fb03de61398194145b1c70e4a2b6801c7cf5d4fa21ade7069dbaa03349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_napier, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:22:16 compute-0 podman[350411]: 2026-01-26 16:22:16.181992432 +0000 UTC m=+0.174699999 container attach 1ec355fb03de61398194145b1c70e4a2b6801c7cf5d4fa21ade7069dbaa03349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_napier, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:22:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.482 239969 DEBUG nova.compute.manager [req-99e29cb2-8013-483d-92d6-1da6443098a5 req-85ba30ab-4801-4eae-add9-b8dd615e70f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received event network-changed-28ddf76d-a2d2-49fc-b567-f9364c3848c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.483 239969 DEBUG nova.compute.manager [req-99e29cb2-8013-483d-92d6-1da6443098a5 req-85ba30ab-4801-4eae-add9-b8dd615e70f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Refreshing instance network info cache due to event network-changed-28ddf76d-a2d2-49fc-b567-f9364c3848c4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.483 239969 DEBUG oslo_concurrency.lockutils [req-99e29cb2-8013-483d-92d6-1da6443098a5 req-85ba30ab-4801-4eae-add9-b8dd615e70f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.483 239969 DEBUG oslo_concurrency.lockutils [req-99e29cb2-8013-483d-92d6-1da6443098a5 req-85ba30ab-4801-4eae-add9-b8dd615e70f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.484 239969 DEBUG nova.network.neutron [req-99e29cb2-8013-483d-92d6-1da6443098a5 req-85ba30ab-4801-4eae-add9-b8dd615e70f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Refreshing network info cache for port 28ddf76d-a2d2-49fc-b567-f9364c3848c4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.508 239969 DEBUG oslo_concurrency.lockutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "b1abd864-1de3-45b1-8fbc-3885a84b1363" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.508 239969 DEBUG oslo_concurrency.lockutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.509 239969 DEBUG oslo_concurrency.lockutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.509 239969 DEBUG oslo_concurrency.lockutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.510 239969 DEBUG oslo_concurrency.lockutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.511 239969 INFO nova.compute.manager [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Terminating instance
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.513 239969 DEBUG nova.compute.manager [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:22:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 297 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 748 KiB/s wr, 58 op/s
Jan 26 16:22:16 compute-0 kernel: tap28ddf76d-a2 (unregistering): left promiscuous mode
Jan 26 16:22:16 compute-0 NetworkManager[48954]: <info>  [1769444536.5710] device (tap28ddf76d-a2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.613 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 ovn_controller[146046]: 2026-01-26T16:22:16Z|01308|binding|INFO|Releasing lport 28ddf76d-a2d2-49fc-b567-f9364c3848c4 from this chassis (sb_readonly=0)
Jan 26 16:22:16 compute-0 ovn_controller[146046]: 2026-01-26T16:22:16Z|01309|binding|INFO|Setting lport 28ddf76d-a2d2-49fc-b567-f9364c3848c4 down in Southbound
Jan 26 16:22:16 compute-0 ovn_controller[146046]: 2026-01-26T16:22:16Z|01310|binding|INFO|Removing iface tap28ddf76d-a2 ovn-installed in OVS
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.624 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ef:95:e6 10.100.0.4'], port_security=['fa:16:3e:ef:95:e6 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b1abd864-1de3-45b1-8fbc-3885a84b1363', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7717533c-aa5c-4001-8fe1-767842de9742 c15057a5-d11a-493e-8ca8-7cd42363857a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8e52caf4-6569-41d9-823b-c2a38f81cae8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=28ddf76d-a2d2-49fc-b567-f9364c3848c4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.626 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 28ddf76d-a2d2-49fc-b567-f9364c3848c4 in datapath 8fbff3e6-13d0-457a-8cfa-fa92dd36bc03 unbound from our chassis
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.629 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8fbff3e6-13d0-457a-8cfa-fa92dd36bc03, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.632 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0ad8748b-5b0f-4013-906f-dcd370400094]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.633 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03 namespace which is not needed anymore
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.638 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000079.scope: Deactivated successfully.
Jan 26 16:22:16 compute-0 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000079.scope: Consumed 16.465s CPU time.
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.667 239969 DEBUG oslo_concurrency.lockutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "59934c7e-3c2e-4e68-9629-663ecfc6692d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.668 239969 DEBUG oslo_concurrency.lockutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.668 239969 DEBUG oslo_concurrency.lockutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.668 239969 DEBUG oslo_concurrency.lockutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.669 239969 DEBUG oslo_concurrency.lockutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:16 compute-0 systemd-machined[208061]: Machine qemu-148-instance-00000079 terminated.
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.669 239969 INFO nova.compute.manager [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Terminating instance
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.670 239969 DEBUG nova.compute.manager [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:22:16 compute-0 kernel: tap73d5501c-51 (unregistering): left promiscuous mode
Jan 26 16:22:16 compute-0 NetworkManager[48954]: <info>  [1769444536.7150] device (tap73d5501c-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.717 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 ovn_controller[146046]: 2026-01-26T16:22:16Z|01311|binding|INFO|Releasing lport 73d5501c-512b-4eaf-b85c-86251a8a4235 from this chassis (sb_readonly=0)
Jan 26 16:22:16 compute-0 ovn_controller[146046]: 2026-01-26T16:22:16Z|01312|binding|INFO|Setting lport 73d5501c-512b-4eaf-b85c-86251a8a4235 down in Southbound
Jan 26 16:22:16 compute-0 ovn_controller[146046]: 2026-01-26T16:22:16Z|01313|binding|INFO|Removing iface tap73d5501c-51 ovn-installed in OVS
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.732 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.745 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.747 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294'], port_security=['fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fe26:c294/64 2001:db8::f816:3eff:fe26:c294/64', 'neutron:device_id': '59934c7e-3c2e-4e68-9629-663ecfc6692d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01961e6c-51dc-4713-a95f-047c09f9eb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c934df91-a8a2-4191-8341-9c66e4ad8b82, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=73d5501c-512b-4eaf-b85c-86251a8a4235) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.758 239969 INFO nova.virt.libvirt.driver [-] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Instance destroyed successfully.
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.759 239969 DEBUG nova.objects.instance [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'resources' on Instance uuid b1abd864-1de3-45b1-8fbc-3885a84b1363 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.780 239969 DEBUG nova.virt.libvirt.vif [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:20:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-2077716436',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-2077716436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=121,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAH5HmGgJNWvGdVkmlHJRrePaakFb21tks+KLfYAfRRceef2V+IxZFoFo8gHC+GlnVy+k8FfLLLE2ZPZUqadlze8sD5xc0uoNaRqdi9NffjKHIkwpi9oAc3U2aezCu0lUQ==',key_name='tempest-TestSecurityGroupsBasicOps-1132471326',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:20:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-0lsogerf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:20:49Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=b1abd864-1de3-45b1-8fbc-3885a84b1363,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.780 239969 DEBUG nova.network.os_vif_util [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.781 239969 DEBUG nova.network.os_vif_util [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ef:95:e6,bridge_name='br-int',has_traffic_filtering=True,id=28ddf76d-a2d2-49fc-b567-f9364c3848c4,network=Network(8fbff3e6-13d0-457a-8cfa-fa92dd36bc03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ddf76d-a2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.781 239969 DEBUG os_vif [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:95:e6,bridge_name='br-int',has_traffic_filtering=True,id=28ddf76d-a2d2-49fc-b567-f9364c3848c4,network=Network(8fbff3e6-13d0-457a-8cfa-fa92dd36bc03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ddf76d-a2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.783 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.783 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28ddf76d-a2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.784 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.786 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:22:16 compute-0 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d0000007a.scope: Consumed 17.725s CPU time.
Jan 26 16:22:16 compute-0 systemd-machined[208061]: Machine qemu-149-instance-0000007a terminated.
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.792 239969 INFO os_vif [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ef:95:e6,bridge_name='br-int',has_traffic_filtering=True,id=28ddf76d-a2d2-49fc-b567-f9364c3848c4,network=Network(8fbff3e6-13d0-457a-8cfa-fa92dd36bc03),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ddf76d-a2')
Jan 26 16:22:16 compute-0 neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03[347055]: [NOTICE]   (347059) : haproxy version is 2.8.14-c23fe91
Jan 26 16:22:16 compute-0 neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03[347055]: [NOTICE]   (347059) : path to executable is /usr/sbin/haproxy
Jan 26 16:22:16 compute-0 neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03[347055]: [WARNING]  (347059) : Exiting Master process...
Jan 26 16:22:16 compute-0 neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03[347055]: [WARNING]  (347059) : Exiting Master process...
Jan 26 16:22:16 compute-0 neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03[347055]: [ALERT]    (347059) : Current worker (347061) exited with code 143 (Terminated)
Jan 26 16:22:16 compute-0 neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03[347055]: [WARNING]  (347059) : All workers exited. Exiting... (0)
Jan 26 16:22:16 compute-0 systemd[1]: libpod-06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea.scope: Deactivated successfully.
Jan 26 16:22:16 compute-0 podman[350731]: 2026-01-26 16:22:16.814193494 +0000 UTC m=+0.052037685 container died 06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea-userdata-shm.mount: Deactivated successfully.
Jan 26 16:22:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8dc1162b80eaca55e0767b1027ff18b2a7e452f2c2fb2170c5d5cc0fa4cd7a9-merged.mount: Deactivated successfully.
Jan 26 16:22:16 compute-0 podman[350731]: 2026-01-26 16:22:16.846810074 +0000 UTC m=+0.084654255 container cleanup 06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:22:16 compute-0 systemd[1]: libpod-conmon-06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea.scope: Deactivated successfully.
Jan 26 16:22:16 compute-0 systemd-udevd[350443]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:22:16 compute-0 kernel: tap73d5501c-51: entered promiscuous mode
Jan 26 16:22:16 compute-0 kernel: tap73d5501c-51 (unregistering): left promiscuous mode
Jan 26 16:22:16 compute-0 NetworkManager[48954]: <info>  [1769444536.8927] manager: (tap73d5501c-51): new Tun device (/org/freedesktop/NetworkManager/Devices/540)
Jan 26 16:22:16 compute-0 ovn_controller[146046]: 2026-01-26T16:22:16Z|01314|binding|INFO|Claiming lport 73d5501c-512b-4eaf-b85c-86251a8a4235 for this chassis.
Jan 26 16:22:16 compute-0 ovn_controller[146046]: 2026-01-26T16:22:16Z|01315|binding|INFO|73d5501c-512b-4eaf-b85c-86251a8a4235: Claiming fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.899 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.905 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294'], port_security=['fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fe26:c294/64 2001:db8::f816:3eff:fe26:c294/64', 'neutron:device_id': '59934c7e-3c2e-4e68-9629-663ecfc6692d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01961e6c-51dc-4713-a95f-047c09f9eb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c934df91-a8a2-4191-8341-9c66e4ad8b82, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=73d5501c-512b-4eaf-b85c-86251a8a4235) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:16 compute-0 ovn_controller[146046]: 2026-01-26T16:22:16Z|01316|binding|INFO|Releasing lport 73d5501c-512b-4eaf-b85c-86251a8a4235 from this chassis (sb_readonly=0)
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.917 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.921 239969 INFO nova.virt.libvirt.driver [-] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Instance destroyed successfully.
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.922 239969 DEBUG nova.objects.instance [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 59934c7e-3c2e-4e68-9629-663ecfc6692d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.924 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294'], port_security=['fa:16:3e:26:c2:94 10.100.0.3 2001:db8:0:1:f816:3eff:fe26:c294 2001:db8::f816:3eff:fe26:c294'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28 2001:db8:0:1:f816:3eff:fe26:c294/64 2001:db8::f816:3eff:fe26:c294/64', 'neutron:device_id': '59934c7e-3c2e-4e68-9629-663ecfc6692d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01961e6c-51dc-4713-a95f-047c09f9eb18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c934df91-a8a2-4191-8341-9c66e4ad8b82, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=73d5501c-512b-4eaf-b85c-86251a8a4235) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:16 compute-0 elegant_napier[350427]: [
Jan 26 16:22:16 compute-0 elegant_napier[350427]:     {
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         "available": false,
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         "being_replaced": false,
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         "ceph_device_lvm": false,
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         "lsm_data": {},
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         "lvs": [],
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         "path": "/dev/sr0",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         "rejected_reasons": [
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "Insufficient space (<5GB)",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "Has a FileSystem"
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         ],
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         "sys_api": {
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "actuators": null,
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "device_nodes": [
Jan 26 16:22:16 compute-0 elegant_napier[350427]:                 "sr0"
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             ],
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "devname": "sr0",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "human_readable_size": "482.00 KB",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "id_bus": "ata",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "model": "QEMU DVD-ROM",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "nr_requests": "2",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "parent": "/dev/sr0",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "partitions": {},
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "path": "/dev/sr0",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "removable": "1",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "rev": "2.5+",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "ro": "0",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "rotational": "1",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "sas_address": "",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "sas_device_handle": "",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "scheduler_mode": "mq-deadline",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "sectors": 0,
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "sectorsize": "2048",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "size": 493568.0,
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "support_discard": "2048",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "type": "disk",
Jan 26 16:22:16 compute-0 elegant_napier[350427]:             "vendor": "QEMU"
Jan 26 16:22:16 compute-0 elegant_napier[350427]:         }
Jan 26 16:22:16 compute-0 elegant_napier[350427]:     }
Jan 26 16:22:16 compute-0 elegant_napier[350427]: ]
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.934 239969 DEBUG nova.virt.libvirt.vif [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:21:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-532358425',display_name='tempest-TestGettingAddress-server-532358425',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-532358425',id=122,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPQQRmhcVg/kdrEK+wEbWjyXF34qlRdwvPy0otQ55i1stbQy0di0x+Z5lZr6s6TYoyDlUGpSbQ5L4J7FSSwP4IrinKkNibV2VUaY2Vkw+XX8B3+FyRjPoqy7nyNiOJOWbg==',key_name='tempest-TestGettingAddress-1914910139',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:21:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-z7l003ts',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:21:13Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=59934c7e-3c2e-4e68-9629-663ecfc6692d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.935 239969 DEBUG nova.network.os_vif_util [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.200", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.935 239969 DEBUG nova.network.os_vif_util [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:26:c2:94,bridge_name='br-int',has_traffic_filtering=True,id=73d5501c-512b-4eaf-b85c-86251a8a4235,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d5501c-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.936 239969 DEBUG os_vif [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:c2:94,bridge_name='br-int',has_traffic_filtering=True,id=73d5501c-512b-4eaf-b85c-86251a8a4235,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d5501c-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:22:16 compute-0 podman[351197]: 2026-01-26 16:22:16.936263975 +0000 UTC m=+0.068089659 container remove 06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.937 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap73d5501c-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.939 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.940 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.941 239969 INFO os_vif [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:26:c2:94,bridge_name='br-int',has_traffic_filtering=True,id=73d5501c-512b-4eaf-b85c-86251a8a4235,network=Network(323586d4-0f5a-43b7-8831-7a2cc02b73e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap73d5501c-51')
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.944 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[65daf121-4f4d-43ae-8018-cca54c4c42f2]: (4, ('Mon Jan 26 04:22:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03 (06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea)\n06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea\nMon Jan 26 04:22:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03 (06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea)\n06f2c201d8cd975cf99701655ff7b3aad1188b7e7238828402c0fa898a4a9cea\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.946 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d22eab1-67b4-48cb-b725-156e0a072c22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.947 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8fbff3e6-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:16 compute-0 kernel: tap8fbff3e6-10: left promiscuous mode
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.952 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab77747-1567-4d3f-9c12-21ae6fe747ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.959 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 nova_compute[239965]: 2026-01-26 16:22:16.965 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.967 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d48d18-bfbd-415a-9914-0b0fc942cbe0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.968 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96da8081-1755-4d1c-8fcb-610d6ace0ec0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:16 compute-0 systemd[1]: libpod-1ec355fb03de61398194145b1c70e4a2b6801c7cf5d4fa21ade7069dbaa03349.scope: Deactivated successfully.
Jan 26 16:22:16 compute-0 podman[350411]: 2026-01-26 16:22:16.973839244 +0000 UTC m=+0.966546711 container died 1ec355fb03de61398194145b1c70e4a2b6801c7cf5d4fa21ade7069dbaa03349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_napier, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.985 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3230f9fd-3d18-4b17-8b9b-8ab0e1f319d5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 593915, 'reachable_time': 23009, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351459, 'error': None, 'target': 'ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:16 compute-0 systemd[1]: run-netns-ovnmeta\x2d8fbff3e6\x2d13d0\x2d457a\x2d8cfa\x2dfa92dd36bc03.mount: Deactivated successfully.
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.990 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8fbff3e6-13d0-457a-8cfa-fa92dd36bc03 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.990 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[1123e5ab-85e8-4e76-94d4-36c4e8f1ea69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.992 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 73d5501c-512b-4eaf-b85c-86251a8a4235 in datapath 323586d4-0f5a-43b7-8831-7a2cc02b73e0 unbound from our chassis
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.994 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 323586d4-0f5a-43b7-8831-7a2cc02b73e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.995 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c467f766-5ba8-4180-8fa6-8025aa69d422]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:16.996 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0 namespace which is not needed anymore
Jan 26 16:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-27459b11262f28bfbaa9f1a9a62fa4401715bd6aa26141c6a7c5e30822e49096-merged.mount: Deactivated successfully.
Jan 26 16:22:17 compute-0 podman[350411]: 2026-01-26 16:22:17.019551684 +0000 UTC m=+1.012259141 container remove 1ec355fb03de61398194145b1c70e4a2b6801c7cf5d4fa21ade7069dbaa03349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_napier, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:22:17 compute-0 systemd[1]: libpod-conmon-1ec355fb03de61398194145b1c70e4a2b6801c7cf5d4fa21ade7069dbaa03349.scope: Deactivated successfully.
Jan 26 16:22:17 compute-0 sudo[350331]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:22:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:22:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:22:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:22:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:22:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:22:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:22:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:22:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:22:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:22:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:22:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:22:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.114 239969 INFO nova.virt.libvirt.driver [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Deleting instance files /var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363_del
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.115 239969 INFO nova.virt.libvirt.driver [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Deletion of /var/lib/nova/instances/b1abd864-1de3-45b1-8fbc-3885a84b1363_del complete
Jan 26 16:22:17 compute-0 sudo[351490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:22:17 compute-0 sudo[351490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:17 compute-0 neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0[348452]: [NOTICE]   (348475) : haproxy version is 2.8.14-c23fe91
Jan 26 16:22:17 compute-0 neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0[348452]: [NOTICE]   (348475) : path to executable is /usr/sbin/haproxy
Jan 26 16:22:17 compute-0 sudo[351490]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:17 compute-0 neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0[348452]: [WARNING]  (348475) : Exiting Master process...
Jan 26 16:22:17 compute-0 neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0[348452]: [ALERT]    (348475) : Current worker (348481) exited with code 143 (Terminated)
Jan 26 16:22:17 compute-0 neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0[348452]: [WARNING]  (348475) : All workers exited. Exiting... (0)
Jan 26 16:22:17 compute-0 systemd[1]: libpod-870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3.scope: Deactivated successfully.
Jan 26 16:22:17 compute-0 podman[351491]: 2026-01-26 16:22:17.18762179 +0000 UTC m=+0.056838312 container died 870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.224 239969 INFO nova.compute.manager [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Took 0.71 seconds to destroy the instance on the hypervisor.
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.225 239969 DEBUG oslo.service.loopingcall [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.225 239969 DEBUG nova.compute.manager [-] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.226 239969 DEBUG nova.network.neutron [-] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3-userdata-shm.mount: Deactivated successfully.
Jan 26 16:22:17 compute-0 sudo[351528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.239 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:17 compute-0 sudo[351528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:17 compute-0 podman[351491]: 2026-01-26 16:22:17.242178486 +0000 UTC m=+0.111395008 container cleanup 870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:22:17 compute-0 systemd[1]: libpod-conmon-870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3.scope: Deactivated successfully.
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.274 239969 INFO nova.virt.libvirt.driver [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Deleting instance files /var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d_del
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.275 239969 INFO nova.virt.libvirt.driver [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Deletion of /var/lib/nova/instances/59934c7e-3c2e-4e68-9629-663ecfc6692d_del complete
Jan 26 16:22:17 compute-0 ceph-mon[75140]: pgmap v2147: 305 pgs: 305 active+clean; 297 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 748 KiB/s wr, 58 op/s
Jan 26 16:22:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:22:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:22:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:22:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:22:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:22:17 compute-0 podman[351570]: 2026-01-26 16:22:17.304291077 +0000 UTC m=+0.037698934 container remove 870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.312 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bd951fb1-ecf8-4a32-89b8-c5d08ce781df]: (4, ('Mon Jan 26 04:22:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0 (870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3)\n870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3\nMon Jan 26 04:22:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0 (870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3)\n870ac69ad3b676e8d8ca4d3627f730b6d567261a521a7e3f184fc966b255dae3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.313 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4af23be9-adaa-436e-8465-b51ff3957184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.314 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap323586d4-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:17 compute-0 kernel: tap323586d4-00: left promiscuous mode
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.317 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.321 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[24c25fa2-a38f-4817-89a1-df7fedf0bd56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.334 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.343 239969 INFO nova.compute.manager [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Took 0.67 seconds to destroy the instance on the hypervisor.
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.343 239969 DEBUG oslo.service.loopingcall [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.343 239969 DEBUG nova.compute.manager [-] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.343 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0bcb98e5-2ed5-40be-9156-d417e4bc96ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.344 239969 DEBUG nova.network.neutron [-] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.344 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b4eaa1c4-ce3a-4a75-bb0e-e4afbfe8be88]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.362 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[500386f2-6a4b-445b-a942-ca8fb87078dd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 596340, 'reachable_time': 23503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351586, 'error': None, 'target': 'ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.364 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-323586d4-0f5a-43b7-8831-7a2cc02b73e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.364 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[5152608a-3f48-445f-8160-3f585a235689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.364 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 73d5501c-512b-4eaf-b85c-86251a8a4235 in datapath 323586d4-0f5a-43b7-8831-7a2cc02b73e0 unbound from our chassis
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.365 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 323586d4-0f5a-43b7-8831-7a2cc02b73e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.366 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1f7f2ab0-e6c3-4b6f-a135-05b3fc35d4f7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.366 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 73d5501c-512b-4eaf-b85c-86251a8a4235 in datapath 323586d4-0f5a-43b7-8831-7a2cc02b73e0 unbound from our chassis
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.367 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 323586d4-0f5a-43b7-8831-7a2cc02b73e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:22:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:17.368 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[25ddc162-15d1-416d-a797-d213849f6fd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:17 compute-0 podman[351599]: 2026-01-26 16:22:17.520057202 +0000 UTC m=+0.039331475 container create 2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:22:17 compute-0 systemd[1]: Started libpod-conmon-2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94.scope.
Jan 26 16:22:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:17 compute-0 podman[351599]: 2026-01-26 16:22:17.501952228 +0000 UTC m=+0.021226521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:22:17 compute-0 podman[351599]: 2026-01-26 16:22:17.598717348 +0000 UTC m=+0.117991651 container init 2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:22:17 compute-0 podman[351599]: 2026-01-26 16:22:17.604773226 +0000 UTC m=+0.124047509 container start 2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 16:22:17 compute-0 podman[351599]: 2026-01-26 16:22:17.608216581 +0000 UTC m=+0.127490874 container attach 2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:22:17 compute-0 cool_shaw[351616]: 167 167
Jan 26 16:22:17 compute-0 systemd[1]: libpod-2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94.scope: Deactivated successfully.
Jan 26 16:22:17 compute-0 conmon[351616]: conmon 2910870dc6e00cc36f6a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94.scope/container/memory.events
Jan 26 16:22:17 compute-0 podman[351599]: 2026-01-26 16:22:17.613689024 +0000 UTC m=+0.132963297 container died 2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:22:17 compute-0 podman[351599]: 2026-01-26 16:22:17.651021989 +0000 UTC m=+0.170296272 container remove 2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_shaw, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:22:17 compute-0 systemd[1]: libpod-conmon-2910870dc6e00cc36f6a04eb447c636523a9d0518d1ba027fa597fdfcc41ea94.scope: Deactivated successfully.
Jan 26 16:22:17 compute-0 podman[351641]: 2026-01-26 16:22:17.821241098 +0000 UTC m=+0.050123989 container create becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.839 239969 DEBUG nova.network.neutron [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a21fb8361d16737fd203049a03797734bb7fec0ca249ea20eda253dba3268b34-merged.mount: Deactivated successfully.
Jan 26 16:22:17 compute-0 systemd[1]: run-netns-ovnmeta\x2d323586d4\x2d0f5a\x2d43b7\x2d8831\x2d7a2cc02b73e0.mount: Deactivated successfully.
Jan 26 16:22:17 compute-0 systemd[1]: Started libpod-conmon-becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691.scope.
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.866 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.867 239969 DEBUG nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Instance network_info: |[{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.867 239969 DEBUG oslo_concurrency.lockutils [req-5094b234-fe6c-4d90-881f-fcf23c878067 req-7e011ef8-9236-4748-9158-23eb5d3fe9b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.868 239969 DEBUG nova.network.neutron [req-5094b234-fe6c-4d90-881f-fcf23c878067 req-7e011ef8-9236-4748-9158-23eb5d3fe9b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing network info cache for port 7b88408c-5847-4417-be2b-c52e6a358bc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.873 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Start _get_guest_xml network_info=[{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.880 239969 WARNING nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.892 239969 DEBUG nova.virt.libvirt.host [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.893 239969 DEBUG nova.virt.libvirt.host [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:22:17 compute-0 podman[351641]: 2026-01-26 16:22:17.79887666 +0000 UTC m=+0.027759591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:22:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.897 239969 DEBUG nova.virt.libvirt.host [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.897 239969 DEBUG nova.virt.libvirt.host [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.897 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.898 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.898 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.898 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.898 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4309d323eec722c4edf17f8705c1690b28204c50be37c37a4f6f12dc0b2a2ad3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.899 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.900 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.900 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.900 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4309d323eec722c4edf17f8705c1690b28204c50be37c37a4f6f12dc0b2a2ad3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.901 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.901 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.901 239969 DEBUG nova.virt.hardware [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4309d323eec722c4edf17f8705c1690b28204c50be37c37a4f6f12dc0b2a2ad3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4309d323eec722c4edf17f8705c1690b28204c50be37c37a4f6f12dc0b2a2ad3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4309d323eec722c4edf17f8705c1690b28204c50be37c37a4f6f12dc0b2a2ad3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.904 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:17 compute-0 podman[351641]: 2026-01-26 16:22:17.913873046 +0000 UTC m=+0.142755957 container init becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:22:17 compute-0 podman[351641]: 2026-01-26 16:22:17.923567963 +0000 UTC m=+0.152450894 container start becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:22:17 compute-0 podman[351641]: 2026-01-26 16:22:17.927184791 +0000 UTC m=+0.156067702 container attach becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.998 239969 DEBUG nova.compute.manager [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received event network-changed-73d5501c-512b-4eaf-b85c-86251a8a4235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:17 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.999 239969 DEBUG nova.compute.manager [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Refreshing instance network info cache due to event network-changed-73d5501c-512b-4eaf-b85c-86251a8a4235. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:17.999 239969 DEBUG oslo_concurrency.lockutils [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.000 239969 DEBUG oslo_concurrency.lockutils [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.000 239969 DEBUG nova.network.neutron [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Refreshing network info cache for port 73d5501c-512b-4eaf-b85c-86251a8a4235 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.024 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:18 compute-0 competent_pike[351658]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:22:18 compute-0 competent_pike[351658]: --> All data devices are unavailable
Jan 26 16:22:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:22:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3304844705' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:22:18 compute-0 systemd[1]: libpod-becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691.scope: Deactivated successfully.
Jan 26 16:22:18 compute-0 conmon[351658]: conmon becb8c11d3a34b2d3df9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691.scope/container/memory.events
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.484 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.511 239969 DEBUG nova.storage.rbd_utils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3304844705' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:22:18 compute-0 podman[351700]: 2026-01-26 16:22:18.514992367 +0000 UTC m=+0.029355900 container died becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.515 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-4309d323eec722c4edf17f8705c1690b28204c50be37c37a4f6f12dc0b2a2ad3-merged.mount: Deactivated successfully.
Jan 26 16:22:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 168 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 1.8 MiB/s wr, 129 op/s
Jan 26 16:22:18 compute-0 podman[351700]: 2026-01-26 16:22:18.554806793 +0000 UTC m=+0.069170336 container remove becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:22:18 compute-0 systemd[1]: libpod-conmon-becb8c11d3a34b2d3df938912978e2ee780cebed39dc0ecbd5fdf6fbad38c691.scope: Deactivated successfully.
Jan 26 16:22:18 compute-0 sudo[351528]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:18 compute-0 sudo[351733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:22:18 compute-0 sudo[351733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:18 compute-0 sudo[351733]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:18 compute-0 sudo[351777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:22:18 compute-0 sudo[351777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.741 239969 DEBUG nova.network.neutron [-] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.761 239969 INFO nova.compute.manager [-] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Took 1.42 seconds to deallocate network for instance.
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.816 239969 DEBUG oslo_concurrency.lockutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.816 239969 DEBUG oslo_concurrency.lockutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.821 239969 DEBUG nova.compute.manager [req-8fb8ec89-a2ba-4b9e-95c5-ccec1b9e2423 req-5820ab12-a06c-454f-9ffc-35c9cbe88402 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received event network-vif-deleted-73d5501c-512b-4eaf-b85c-86251a8a4235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:18 compute-0 nova_compute[239965]: 2026-01-26 16:22:18.898 239969 DEBUG oslo_concurrency.processutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:22:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687782672' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.035 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.039 239969 DEBUG nova.virt.libvirt.vif [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:22:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1426087027',display_name='tempest-TestNetworkBasicOps-server-1426087027',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1426087027',id=126,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVQvnTTId2F2dZ4CmrqUuSRPgxVpg5Gcg7/WdXsulHhwVIkk74RWE9IZv+pRN7sfUWNe4e/QS34rEQOYJGDOrZn6Q/HpS10QJgjM9+nMxrjR7pUidrziE/wnxgJrIPauA==',key_name='tempest-TestNetworkBasicOps-1612194243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-058vjbml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:22:13Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=34442cd1-7fb4-45b4-bd46-9d2cb53d00e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.040 239969 DEBUG nova.network.os_vif_util [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.042 239969 DEBUG nova.network.os_vif_util [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:83:22,bridge_name='br-int',has_traffic_filtering=True,id=7b88408c-5847-4417-be2b-c52e6a358bc7,network=Network(167a489e-5d74-43ba-9aa7-606465815c37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b88408c-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.045 239969 DEBUG nova.objects.instance [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:19 compute-0 podman[351815]: 2026-01-26 16:22:19.057158644 +0000 UTC m=+0.078531074 container create eab32d6484490f4992e50263ae997023804f7509bd9cfc8664b66afc22e2affd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.066 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <uuid>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</uuid>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <name>instance-0000007e</name>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-1426087027</nova:name>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:22:17</nova:creationTime>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <nova:port uuid="7b88408c-5847-4417-be2b-c52e6a358bc7">
Jan 26 16:22:19 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <system>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <entry name="serial">34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <entry name="uuid">34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     </system>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <os>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   </os>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <features>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   </features>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk">
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       </source>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config">
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       </source>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:22:19 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:bd:83:22"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <target dev="tap7b88408c-58"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log" append="off"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <video>
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     </video>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:22:19 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:22:19 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:22:19 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:22:19 compute-0 nova_compute[239965]: </domain>
Jan 26 16:22:19 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.068 239969 DEBUG nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Preparing to wait for external event network-vif-plugged-7b88408c-5847-4417-be2b-c52e6a358bc7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.068 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.069 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.069 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.070 239969 DEBUG nova.virt.libvirt.vif [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:22:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1426087027',display_name='tempest-TestNetworkBasicOps-server-1426087027',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1426087027',id=126,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVQvnTTId2F2dZ4CmrqUuSRPgxVpg5Gcg7/WdXsulHhwVIkk74RWE9IZv+pRN7sfUWNe4e/QS34rEQOYJGDOrZn6Q/HpS10QJgjM9+nMxrjR7pUidrziE/wnxgJrIPauA==',key_name='tempest-TestNetworkBasicOps-1612194243',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-058vjbml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:22:13Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=34442cd1-7fb4-45b4-bd46-9d2cb53d00e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.070 239969 DEBUG nova.network.os_vif_util [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.071 239969 DEBUG nova.network.os_vif_util [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bd:83:22,bridge_name='br-int',has_traffic_filtering=True,id=7b88408c-5847-4417-be2b-c52e6a358bc7,network=Network(167a489e-5d74-43ba-9aa7-606465815c37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b88408c-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.072 239969 DEBUG os_vif [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:83:22,bridge_name='br-int',has_traffic_filtering=True,id=7b88408c-5847-4417-be2b-c52e6a358bc7,network=Network(167a489e-5d74-43ba-9aa7-606465815c37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b88408c-58') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.073 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.074 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.078 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.078 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7b88408c-58, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.078 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7b88408c-58, col_values=(('external_ids', {'iface-id': '7b88408c-5847-4417-be2b-c52e6a358bc7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bd:83:22', 'vm-uuid': '34442cd1-7fb4-45b4-bd46-9d2cb53d00e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:19 compute-0 NetworkManager[48954]: <info>  [1769444539.0814] manager: (tap7b88408c-58): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/541)
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.087 239969 INFO os_vif [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bd:83:22,bridge_name='br-int',has_traffic_filtering=True,id=7b88408c-5847-4417-be2b-c52e6a358bc7,network=Network(167a489e-5d74-43ba-9aa7-606465815c37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b88408c-58')
Jan 26 16:22:19 compute-0 systemd[1]: Started libpod-conmon-eab32d6484490f4992e50263ae997023804f7509bd9cfc8664b66afc22e2affd.scope.
Jan 26 16:22:19 compute-0 podman[351815]: 2026-01-26 16:22:19.012510831 +0000 UTC m=+0.033883291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:22:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:19 compute-0 podman[351815]: 2026-01-26 16:22:19.153481983 +0000 UTC m=+0.174854423 container init eab32d6484490f4992e50263ae997023804f7509bd9cfc8664b66afc22e2affd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:22:19 compute-0 podman[351815]: 2026-01-26 16:22:19.161573932 +0000 UTC m=+0.182946342 container start eab32d6484490f4992e50263ae997023804f7509bd9cfc8664b66afc22e2affd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_murdock, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:22:19 compute-0 podman[351815]: 2026-01-26 16:22:19.165698983 +0000 UTC m=+0.187071383 container attach eab32d6484490f4992e50263ae997023804f7509bd9cfc8664b66afc22e2affd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.165 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.166 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.166 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:bd:83:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.167 239969 INFO nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Using config drive
Jan 26 16:22:19 compute-0 eloquent_murdock[351855]: 167 167
Jan 26 16:22:19 compute-0 podman[351815]: 2026-01-26 16:22:19.169924356 +0000 UTC m=+0.191296756 container died eab32d6484490f4992e50263ae997023804f7509bd9cfc8664b66afc22e2affd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:22:19 compute-0 systemd[1]: libpod-eab32d6484490f4992e50263ae997023804f7509bd9cfc8664b66afc22e2affd.scope: Deactivated successfully.
Jan 26 16:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-41e1d501e2c5aae6fd0c0594d71044f4be92b2b6640c8200425d13a71a52b94d-merged.mount: Deactivated successfully.
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.201 239969 DEBUG nova.storage.rbd_utils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:19 compute-0 podman[351815]: 2026-01-26 16:22:19.215396949 +0000 UTC m=+0.236769349 container remove eab32d6484490f4992e50263ae997023804f7509bd9cfc8664b66afc22e2affd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:22:19 compute-0 systemd[1]: libpod-conmon-eab32d6484490f4992e50263ae997023804f7509bd9cfc8664b66afc22e2affd.scope: Deactivated successfully.
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.349 239969 DEBUG nova.network.neutron [req-99e29cb2-8013-483d-92d6-1da6443098a5 req-85ba30ab-4801-4eae-add9-b8dd615e70f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updated VIF entry in instance network info cache for port 28ddf76d-a2d2-49fc-b567-f9364c3848c4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.350 239969 DEBUG nova.network.neutron [req-99e29cb2-8013-483d-92d6-1da6443098a5 req-85ba30ab-4801-4eae-add9-b8dd615e70f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updating instance_info_cache with network_info: [{"id": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "address": "fa:16:3e:ef:95:e6", "network": {"id": "8fbff3e6-13d0-457a-8cfa-fa92dd36bc03", "bridge": "br-int", "label": "tempest-network-smoke--2073612296", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ddf76d-a2", "ovs_interfaceid": "28ddf76d-a2d2-49fc-b567-f9364c3848c4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.367 239969 DEBUG oslo_concurrency.lockutils [req-99e29cb2-8013-483d-92d6-1da6443098a5 req-85ba30ab-4801-4eae-add9-b8dd615e70f5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b1abd864-1de3-45b1-8fbc-3885a84b1363" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:19 compute-0 podman[351897]: 2026-01-26 16:22:19.406434348 +0000 UTC m=+0.059200700 container create c6dbe6852be4c5cae00e7e8b89c50cea988871110e0b729a247bbd2f8881a5d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_carson, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:22:19 compute-0 systemd[1]: Started libpod-conmon-c6dbe6852be4c5cae00e7e8b89c50cea988871110e0b729a247bbd2f8881a5d9.scope.
Jan 26 16:22:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d3c2a2176f86ed94178a55f473cde9ed9a3732b4bebadb244b2ed9940cf86b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d3c2a2176f86ed94178a55f473cde9ed9a3732b4bebadb244b2ed9940cf86b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d3c2a2176f86ed94178a55f473cde9ed9a3732b4bebadb244b2ed9940cf86b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4d3c2a2176f86ed94178a55f473cde9ed9a3732b4bebadb244b2ed9940cf86b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:19 compute-0 podman[351897]: 2026-01-26 16:22:19.38118477 +0000 UTC m=+0.033951212 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.477 239969 INFO nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Creating config drive at /var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/disk.config
Jan 26 16:22:19 compute-0 podman[351897]: 2026-01-26 16:22:19.48082063 +0000 UTC m=+0.133586992 container init c6dbe6852be4c5cae00e7e8b89c50cea988871110e0b729a247bbd2f8881a5d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_carson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.482 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw742qt2f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:19 compute-0 podman[351897]: 2026-01-26 16:22:19.488398005 +0000 UTC m=+0.141164347 container start c6dbe6852be4c5cae00e7e8b89c50cea988871110e0b729a247bbd2f8881a5d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:22:19 compute-0 podman[351897]: 2026-01-26 16:22:19.492355712 +0000 UTC m=+0.145122074 container attach c6dbe6852be4c5cae00e7e8b89c50cea988871110e0b729a247bbd2f8881a5d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:22:19 compute-0 ceph-mon[75140]: pgmap v2148: 305 pgs: 305 active+clean; 168 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 102 KiB/s rd, 1.8 MiB/s wr, 129 op/s
Jan 26 16:22:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/687782672' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:22:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:22:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404323361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.543 239969 DEBUG oslo_concurrency.processutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.645s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.558 239969 DEBUG nova.compute.provider_tree [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.580 239969 DEBUG nova.scheduler.client.report [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.623 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw742qt2f" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.651 239969 DEBUG nova.storage.rbd_utils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.655 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/disk.config 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:19 compute-0 admiring_carson[351914]: {
Jan 26 16:22:19 compute-0 admiring_carson[351914]:     "0": [
Jan 26 16:22:19 compute-0 admiring_carson[351914]:         {
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "devices": [
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "/dev/loop3"
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             ],
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_name": "ceph_lv0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_size": "21470642176",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "name": "ceph_lv0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "tags": {
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.cluster_name": "ceph",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.crush_device_class": "",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.encrypted": "0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.objectstore": "bluestore",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.osd_id": "0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.type": "block",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.vdo": "0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.with_tpm": "0"
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             },
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "type": "block",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "vg_name": "ceph_vg0"
Jan 26 16:22:19 compute-0 admiring_carson[351914]:         }
Jan 26 16:22:19 compute-0 admiring_carson[351914]:     ],
Jan 26 16:22:19 compute-0 admiring_carson[351914]:     "1": [
Jan 26 16:22:19 compute-0 admiring_carson[351914]:         {
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "devices": [
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "/dev/loop4"
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             ],
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_name": "ceph_lv1",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_size": "21470642176",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "name": "ceph_lv1",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "tags": {
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.cluster_name": "ceph",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.crush_device_class": "",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.encrypted": "0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.objectstore": "bluestore",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.osd_id": "1",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.type": "block",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.vdo": "0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.with_tpm": "0"
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             },
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "type": "block",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "vg_name": "ceph_vg1"
Jan 26 16:22:19 compute-0 admiring_carson[351914]:         }
Jan 26 16:22:19 compute-0 admiring_carson[351914]:     ],
Jan 26 16:22:19 compute-0 admiring_carson[351914]:     "2": [
Jan 26 16:22:19 compute-0 admiring_carson[351914]:         {
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "devices": [
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "/dev/loop5"
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             ],
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_name": "ceph_lv2",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_size": "21470642176",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "name": "ceph_lv2",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "tags": {
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.cluster_name": "ceph",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.crush_device_class": "",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.encrypted": "0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.objectstore": "bluestore",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.osd_id": "2",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.type": "block",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.vdo": "0",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:                 "ceph.with_tpm": "0"
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             },
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "type": "block",
Jan 26 16:22:19 compute-0 admiring_carson[351914]:             "vg_name": "ceph_vg2"
Jan 26 16:22:19 compute-0 admiring_carson[351914]:         }
Jan 26 16:22:19 compute-0 admiring_carson[351914]:     ]
Jan 26 16:22:19 compute-0 admiring_carson[351914]: }
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.809 239969 DEBUG oslo_concurrency.processutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/disk.config 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.811 239969 INFO nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Deleting local config drive /var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/disk.config because it was imported into RBD.
Jan 26 16:22:19 compute-0 systemd[1]: libpod-c6dbe6852be4c5cae00e7e8b89c50cea988871110e0b729a247bbd2f8881a5d9.scope: Deactivated successfully.
Jan 26 16:22:19 compute-0 kernel: tap7b88408c-58: entered promiscuous mode
Jan 26 16:22:19 compute-0 NetworkManager[48954]: <info>  [1769444539.8582] manager: (tap7b88408c-58): new Tun device (/org/freedesktop/NetworkManager/Devices/542)
Jan 26 16:22:19 compute-0 ovn_controller[146046]: 2026-01-26T16:22:19Z|01317|binding|INFO|Claiming lport 7b88408c-5847-4417-be2b-c52e6a358bc7 for this chassis.
Jan 26 16:22:19 compute-0 ovn_controller[146046]: 2026-01-26T16:22:19Z|01318|binding|INFO|7b88408c-5847-4417-be2b-c52e6a358bc7: Claiming fa:16:3e:bd:83:22 10.100.0.7
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.861 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:19 compute-0 podman[351967]: 2026-01-26 16:22:19.867223073 +0000 UTC m=+0.024824829 container died c6dbe6852be4c5cae00e7e8b89c50cea988871110e0b729a247bbd2f8881a5d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_carson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:22:19 compute-0 ovn_controller[146046]: 2026-01-26T16:22:19Z|01319|binding|INFO|Setting lport 7b88408c-5847-4417-be2b-c52e6a358bc7 ovn-installed in OVS
Jan 26 16:22:19 compute-0 nova_compute[239965]: 2026-01-26 16:22:19.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4d3c2a2176f86ed94178a55f473cde9ed9a3732b4bebadb244b2ed9940cf86b-merged.mount: Deactivated successfully.
Jan 26 16:22:19 compute-0 systemd-udevd[351992]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:22:19 compute-0 systemd-machined[208061]: New machine qemu-153-instance-0000007e.
Jan 26 16:22:19 compute-0 NetworkManager[48954]: <info>  [1769444539.9136] device (tap7b88408c-58): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:22:19 compute-0 NetworkManager[48954]: <info>  [1769444539.9147] device (tap7b88408c-58): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:22:19 compute-0 systemd[1]: Started Virtual Machine qemu-153-instance-0000007e.
Jan 26 16:22:19 compute-0 podman[351967]: 2026-01-26 16:22:19.922194179 +0000 UTC m=+0.079795935 container remove c6dbe6852be4c5cae00e7e8b89c50cea988871110e0b729a247bbd2f8881a5d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:22:19 compute-0 systemd[1]: libpod-conmon-c6dbe6852be4c5cae00e7e8b89c50cea988871110e0b729a247bbd2f8881a5d9.scope: Deactivated successfully.
Jan 26 16:22:19 compute-0 sudo[351777]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:20 compute-0 sudo[352001]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:22:20 compute-0 sudo[352001]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:20 compute-0 sudo[352001]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:20 compute-0 sudo[352026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:22:20 compute-0 sudo[352026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:20 compute-0 ovn_controller[146046]: 2026-01-26T16:22:20Z|01320|binding|INFO|Setting lport 7b88408c-5847-4417-be2b-c52e6a358bc7 up in Southbound
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.295 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:83:22 10.100.0.7'], port_security=['fa:16:3e:bd:83:22 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '34442cd1-7fb4-45b4-bd46-9d2cb53d00e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-167a489e-5d74-43ba-9aa7-606465815c37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd77bf471-b7d2-4299-a894-5b23c9253b59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e97a4241-36cb-4e28-a469-e73b5fed0754, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7b88408c-5847-4417-be2b-c52e6a358bc7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.298 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7b88408c-5847-4417-be2b-c52e6a358bc7 in datapath 167a489e-5d74-43ba-9aa7-606465815c37 bound to our chassis
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.300 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 167a489e-5d74-43ba-9aa7-606465815c37
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.313 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1eb5850a-f192-4578-91f1-76ea1d16f2a1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.315 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap167a489e-51 in ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.319 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap167a489e-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.319 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f7c599b1-a7d6-4813-aba7-4367cab00fa5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.321 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[19ad9794-8d6f-4a69-873d-e88115373d74]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.332 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[310b7819-d246-461c-abd9-95174ece07de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.346 239969 DEBUG oslo_concurrency.lockutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.352 239969 DEBUG nova.network.neutron [-] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.367 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3de9685e-3dd1-40a3-a7b8-acb770cd5d5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.373 239969 INFO nova.compute.manager [-] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Took 3.15 seconds to deallocate network for instance.
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.378 239969 INFO nova.scheduler.client.report [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 59934c7e-3c2e-4e68-9629-663ecfc6692d
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.398 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5c6a6660-fb32-4857-b210-1b8dc0891597]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 NetworkManager[48954]: <info>  [1769444540.4052] manager: (tap167a489e-50): new Veth device (/org/freedesktop/NetworkManager/Devices/543)
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.408 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[20c2b48f-1bd4-454d-b658-4a2da5b786dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.439 239969 DEBUG oslo_concurrency.lockutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.440 239969 DEBUG oslo_concurrency.lockutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.443 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7b702637-5a1e-4865-91e9-86b8ac97c027]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.445 239969 DEBUG nova.compute.manager [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received event network-vif-plugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.445 239969 DEBUG oslo_concurrency.lockutils [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.446 239969 DEBUG oslo_concurrency.lockutils [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.446 239969 DEBUG oslo_concurrency.lockutils [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.447 239969 DEBUG nova.compute.manager [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] No waiting events found dispatching network-vif-plugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.447 239969 WARNING nova.compute.manager [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received unexpected event network-vif-plugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 for instance with vm_state deleted and task_state None.
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.447 239969 DEBUG nova.compute.manager [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received event network-vif-unplugged-73d5501c-512b-4eaf-b85c-86251a8a4235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.448 239969 DEBUG oslo_concurrency.lockutils [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.448 239969 DEBUG oslo_concurrency.lockutils [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.448 239969 DEBUG oslo_concurrency.lockutils [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.448 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9d060496-59dc-445c-a758-63a7710cb60f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.449 239969 DEBUG nova.compute.manager [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] No waiting events found dispatching network-vif-unplugged-73d5501c-512b-4eaf-b85c-86251a8a4235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.449 239969 WARNING nova.compute.manager [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received unexpected event network-vif-unplugged-73d5501c-512b-4eaf-b85c-86251a8a4235 for instance with vm_state deleted and task_state None.
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.449 239969 DEBUG nova.compute.manager [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received event network-vif-plugged-73d5501c-512b-4eaf-b85c-86251a8a4235 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.450 239969 DEBUG oslo_concurrency.lockutils [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.450 239969 DEBUG oslo_concurrency.lockutils [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.450 239969 DEBUG oslo_concurrency.lockutils [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.450 239969 DEBUG nova.compute.manager [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] No waiting events found dispatching network-vif-plugged-73d5501c-512b-4eaf-b85c-86251a8a4235 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.451 239969 WARNING nova.compute.manager [req-f0aad162-a4b1-4c3c-9eab-930959f441c9 req-c2626b76-b053-43bd-8b71-8dcbbd857dba a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Received unexpected event network-vif-plugged-73d5501c-512b-4eaf-b85c-86251a8a4235 for instance with vm_state deleted and task_state None.
Jan 26 16:22:20 compute-0 podman[352067]: 2026-01-26 16:22:20.46038338 +0000 UTC m=+0.053358389 container create 7048cbe0b04f0dc4709aeb1b480731ac69aecb9c488c54d53ee40d5cbd589a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.474 239969 DEBUG oslo_concurrency.lockutils [None req-b8bf13ef-d922-42e9-8888-3f1edbaf61c8 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "59934c7e-3c2e-4e68-9629-663ecfc6692d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:20 compute-0 NetworkManager[48954]: <info>  [1769444540.4827] device (tap167a489e-50): carrier: link connected
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.492 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9da5f1-59e3-4191-b70f-8eac6bab0dfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 systemd[1]: Started libpod-conmon-7048cbe0b04f0dc4709aeb1b480731ac69aecb9c488c54d53ee40d5cbd589a6a.scope.
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.511 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9356e1eb-6d95-4242-8c3e-902aaf4b6596]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap167a489e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:e7:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603128, 'reachable_time': 41838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352102, 'error': None, 'target': 'ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1404323361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.527 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1a418219-1bfd-47fa-a852-cebbf75027a9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee6:e7d3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 603128, 'tstamp': 603128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352106, 'error': None, 'target': 'ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:20 compute-0 podman[352067]: 2026-01-26 16:22:20.445668619 +0000 UTC m=+0.038643648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.543 239969 DEBUG oslo_concurrency.processutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 168 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.546 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b324b93e-5761-4f53-9a0c-f633a42ff5b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap167a489e-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e6:e7:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 382], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603128, 'reachable_time': 41838, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352107, 'error': None, 'target': 'ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 podman[352067]: 2026-01-26 16:22:20.555722264 +0000 UTC m=+0.148697293 container init 7048cbe0b04f0dc4709aeb1b480731ac69aecb9c488c54d53ee40d5cbd589a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:22:20 compute-0 podman[352067]: 2026-01-26 16:22:20.563400212 +0000 UTC m=+0.156375221 container start 7048cbe0b04f0dc4709aeb1b480731ac69aecb9c488c54d53ee40d5cbd589a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:22:20 compute-0 podman[352067]: 2026-01-26 16:22:20.567292588 +0000 UTC m=+0.160267617 container attach 7048cbe0b04f0dc4709aeb1b480731ac69aecb9c488c54d53ee40d5cbd589a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:22:20 compute-0 determined_gagarin[352103]: 167 167
Jan 26 16:22:20 compute-0 systemd[1]: libpod-7048cbe0b04f0dc4709aeb1b480731ac69aecb9c488c54d53ee40d5cbd589a6a.scope: Deactivated successfully.
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.579 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[68aadb8b-8116-48ae-9b59-993998933953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.584 239969 DEBUG nova.network.neutron [req-5094b234-fe6c-4d90-881f-fcf23c878067 req-7e011ef8-9236-4748-9158-23eb5d3fe9b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updated VIF entry in instance network info cache for port 7b88408c-5847-4417-be2b-c52e6a358bc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.585 239969 DEBUG nova.network.neutron [req-5094b234-fe6c-4d90-881f-fcf23c878067 req-7e011ef8-9236-4748-9158-23eb5d3fe9b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.601 239969 DEBUG oslo_concurrency.lockutils [req-5094b234-fe6c-4d90-881f-fcf23c878067 req-7e011ef8-9236-4748-9158-23eb5d3fe9b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:20 compute-0 podman[352113]: 2026-01-26 16:22:20.608870536 +0000 UTC m=+0.028697744 container died 7048cbe0b04f0dc4709aeb1b480731ac69aecb9c488c54d53ee40d5cbd589a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 16:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-636c4a6e5cea7ede146633c72354c27ac0d4389546d15edc9b1e5dd713b1bbd2-merged.mount: Deactivated successfully.
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.646 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4d5581-48ac-4961-a95e-c4fd13e169a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.648 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap167a489e-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.649 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.649 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap167a489e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.651 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:20 compute-0 kernel: tap167a489e-50: entered promiscuous mode
Jan 26 16:22:20 compute-0 NetworkManager[48954]: <info>  [1769444540.6516] manager: (tap167a489e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/544)
Jan 26 16:22:20 compute-0 podman[352113]: 2026-01-26 16:22:20.654247367 +0000 UTC m=+0.074074565 container remove 7048cbe0b04f0dc4709aeb1b480731ac69aecb9c488c54d53ee40d5cbd589a6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.658 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap167a489e-50, col_values=(('external_ids', {'iface-id': '9cc9437d-4445-4fbb-951e-2360a1081c9c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:20 compute-0 systemd[1]: libpod-conmon-7048cbe0b04f0dc4709aeb1b480731ac69aecb9c488c54d53ee40d5cbd589a6a.scope: Deactivated successfully.
Jan 26 16:22:20 compute-0 ovn_controller[146046]: 2026-01-26T16:22:20Z|01321|binding|INFO|Releasing lport 9cc9437d-4445-4fbb-951e-2360a1081c9c from this chassis (sb_readonly=0)
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.662 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/167a489e-5d74-43ba-9aa7-606465815c37.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/167a489e-5d74-43ba-9aa7-606465815c37.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.664 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.665 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b2221182-19af-484d-a441-4a5d6881862d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.666 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-167a489e-5d74-43ba-9aa7-606465815c37
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/167a489e-5d74-43ba-9aa7-606465815c37.pid.haproxy
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 167a489e-5d74-43ba-9aa7-606465815c37
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:22:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:20.666 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37', 'env', 'PROCESS_TAG=haproxy-167a489e-5d74-43ba-9aa7-606465815c37', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/167a489e-5d74-43ba-9aa7-606465815c37.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.675 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.828 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444525.8263972, 11befbe4-5dfe-4019-9aae-812949d43f40 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.828 239969 INFO nova.compute.manager [-] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] VM Stopped (Lifecycle Event)
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.849 239969 DEBUG nova.compute.manager [None req-4ed76c8a-269b-46d8-9a29-095b4aceae0b - - - - - -] [instance: 11befbe4-5dfe-4019-9aae-812949d43f40] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:20 compute-0 podman[352201]: 2026-01-26 16:22:20.850062193 +0000 UTC m=+0.048533700 container create 2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.861 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444540.8615098, 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.862 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] VM Started (Lifecycle Event)
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.884 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.889 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444540.862659, 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.889 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] VM Paused (Lifecycle Event)
Jan 26 16:22:20 compute-0 systemd[1]: Started libpod-conmon-2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002.scope.
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.910 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.916 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:22:20 compute-0 podman[352201]: 2026-01-26 16:22:20.828266148 +0000 UTC m=+0.026737735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:22:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77acac0d300dcb1507e65d423ab622a3fc57cd80cac1dbf1209ec209bded4c62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77acac0d300dcb1507e65d423ab622a3fc57cd80cac1dbf1209ec209bded4c62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77acac0d300dcb1507e65d423ab622a3fc57cd80cac1dbf1209ec209bded4c62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77acac0d300dcb1507e65d423ab622a3fc57cd80cac1dbf1209ec209bded4c62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:20 compute-0 nova_compute[239965]: 2026-01-26 16:22:20.942 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:22:20 compute-0 podman[352201]: 2026-01-26 16:22:20.948251227 +0000 UTC m=+0.146722754 container init 2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:22:20 compute-0 podman[352201]: 2026-01-26 16:22:20.955910954 +0000 UTC m=+0.154382461 container start 2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:22:20 compute-0 podman[352201]: 2026-01-26 16:22:20.962633959 +0000 UTC m=+0.161105466 container attach 2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:22:21 compute-0 podman[352243]: 2026-01-26 16:22:21.035012512 +0000 UTC m=+0.050869287 container create ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:22:21 compute-0 systemd[1]: Started libpod-conmon-ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374.scope.
Jan 26 16:22:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3dc8f555cec561a43c97650024625225cb134649199573a0d486592d845b5aa9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:21 compute-0 podman[352243]: 2026-01-26 16:22:21.010158353 +0000 UTC m=+0.026015138 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:22:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:22:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2719015383' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.135 239969 DEBUG oslo_concurrency.processutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:21 compute-0 podman[352243]: 2026-01-26 16:22:21.137679966 +0000 UTC m=+0.153536831 container init ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.143 239969 DEBUG nova.compute.provider_tree [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:22:21 compute-0 podman[352243]: 2026-01-26 16:22:21.144571845 +0000 UTC m=+0.160428650 container start ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 26 16:22:21 compute-0 neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37[352258]: [NOTICE]   (352265) : New worker (352276) forked
Jan 26 16:22:21 compute-0 neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37[352258]: [NOTICE]   (352265) : Loading success.
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.282 239969 DEBUG nova.scheduler.client.report [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.336 239969 DEBUG oslo_concurrency.lockutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.357 239969 INFO nova.scheduler.client.report [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Deleted allocations for instance b1abd864-1de3-45b1-8fbc-3885a84b1363
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.413 239969 DEBUG oslo_concurrency.lockutils [None req-30410e48-ebd5-47cb-a79c-286382aefe5f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:21 compute-0 ceph-mon[75140]: pgmap v2149: 305 pgs: 305 active+clean; 168 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 81 KiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 26 16:22:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2719015383' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:21 compute-0 lvm[352345]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:22:21 compute-0 lvm[352345]: VG ceph_vg0 finished
Jan 26 16:22:21 compute-0 lvm[352347]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:22:21 compute-0 lvm[352347]: VG ceph_vg1 finished
Jan 26 16:22:21 compute-0 lvm[352348]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:22:21 compute-0 lvm[352348]: VG ceph_vg0 finished
Jan 26 16:22:21 compute-0 lvm[352349]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:22:21 compute-0 lvm[352349]: VG ceph_vg2 finished
Jan 26 16:22:21 compute-0 lvm[352350]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:22:21 compute-0 lvm[352350]: VG ceph_vg1 finished
Jan 26 16:22:21 compute-0 laughing_snyder[352219]: {}
Jan 26 16:22:21 compute-0 systemd[1]: libpod-2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002.scope: Deactivated successfully.
Jan 26 16:22:21 compute-0 podman[352201]: 2026-01-26 16:22:21.817746631 +0000 UTC m=+1.016218138 container died 2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:22:21 compute-0 systemd[1]: libpod-2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002.scope: Consumed 1.417s CPU time.
Jan 26 16:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-77acac0d300dcb1507e65d423ab622a3fc57cd80cac1dbf1209ec209bded4c62-merged.mount: Deactivated successfully.
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.847 239969 DEBUG nova.network.neutron [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Updated VIF entry in instance network info cache for port 73d5501c-512b-4eaf-b85c-86251a8a4235. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.848 239969 DEBUG nova.network.neutron [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Updating instance_info_cache with network_info: [{"id": "73d5501c-512b-4eaf-b85c-86251a8a4235", "address": "fa:16:3e:26:c2:94", "network": {"id": "323586d4-0f5a-43b7-8831-7a2cc02b73e0", "bridge": "br-int", "label": "tempest-network-smoke--602776590", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe26:c294", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap73d5501c-51", "ovs_interfaceid": "73d5501c-512b-4eaf-b85c-86251a8a4235", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:21 compute-0 podman[352201]: 2026-01-26 16:22:21.864222499 +0000 UTC m=+1.062694006 container remove 2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:22:21 compute-0 systemd[1]: libpod-conmon-2de7f2f9306ff2010c819cefe2a163d4691a332ee9859a8c36c794538e941002.scope: Deactivated successfully.
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.891 239969 DEBUG oslo_concurrency.lockutils [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-59934c7e-3c2e-4e68-9629-663ecfc6692d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.892 239969 DEBUG nova.compute.manager [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received event network-vif-unplugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.892 239969 DEBUG oslo_concurrency.lockutils [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.892 239969 DEBUG oslo_concurrency.lockutils [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.892 239969 DEBUG oslo_concurrency.lockutils [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1abd864-1de3-45b1-8fbc-3885a84b1363-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.893 239969 DEBUG nova.compute.manager [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] No waiting events found dispatching network-vif-unplugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:21 compute-0 nova_compute[239965]: 2026-01-26 16:22:21.893 239969 DEBUG nova.compute.manager [req-3327fb77-bbc6-434f-a889-502ceb09f3f4 req-bea79811-ec93-49ac-a251-ec9000444696 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received event network-vif-unplugged-28ddf76d-a2d2-49fc-b567-f9364c3848c4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:22:21 compute-0 sudo[352026]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:22:21 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:22:21 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:22 compute-0 sudo[352365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:22:22 compute-0 sudo[352365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:22:22 compute-0 sudo[352365]: pam_unix(sudo:session): session closed for user root
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.535 239969 DEBUG nova.compute.manager [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Received event network-vif-deleted-28ddf76d-a2d2-49fc-b567-f9364c3848c4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.536 239969 DEBUG nova.compute.manager [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-plugged-7b88408c-5847-4417-be2b-c52e6a358bc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.537 239969 DEBUG oslo_concurrency.lockutils [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.537 239969 DEBUG oslo_concurrency.lockutils [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.538 239969 DEBUG oslo_concurrency.lockutils [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.538 239969 DEBUG nova.compute.manager [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Processing event network-vif-plugged-7b88408c-5847-4417-be2b-c52e6a358bc7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.538 239969 DEBUG nova.compute.manager [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-plugged-7b88408c-5847-4417-be2b-c52e6a358bc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.539 239969 DEBUG oslo_concurrency.lockutils [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.539 239969 DEBUG oslo_concurrency.lockutils [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.539 239969 DEBUG oslo_concurrency.lockutils [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.540 239969 DEBUG nova.compute.manager [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] No waiting events found dispatching network-vif-plugged-7b88408c-5847-4417-be2b-c52e6a358bc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.540 239969 WARNING nova.compute.manager [req-37cba055-e914-4e23-b14f-11bf8285c0a1 req-abb445b9-7e21-45ee-89d1-209f5ccf6f9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received unexpected event network-vif-plugged-7b88408c-5847-4417-be2b-c52e6a358bc7 for instance with vm_state building and task_state spawning.
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.541 239969 DEBUG nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:22:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.550 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444542.5498452, 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.550 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] VM Resumed (Lifecycle Event)
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.553 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.560 239969 INFO nova.virt.libvirt.driver [-] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Instance spawned successfully.
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.561 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.577 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.586 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.590 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.590 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.590 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.591 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.592 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.592 239969 DEBUG nova.virt.libvirt.driver [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.620 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.661 239969 INFO nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Took 9.34 seconds to spawn the instance on the hypervisor.
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.662 239969 DEBUG nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.730 239969 INFO nova.compute.manager [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Took 10.46 seconds to build instance.
Jan 26 16:22:22 compute-0 nova_compute[239965]: 2026-01-26 16:22:22.748 239969 DEBUG oslo_concurrency.lockutils [None req-d966e19a-01de-4e9e-b9d5-77dd90374b43 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.546s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:22:22 compute-0 ceph-mon[75140]: pgmap v2150: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 1.8 MiB/s wr, 122 op/s
Jan 26 16:22:23 compute-0 nova_compute[239965]: 2026-01-26 16:22:23.021 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:24 compute-0 nova_compute[239965]: 2026-01-26 16:22:24.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 26 16:22:24 compute-0 ceph-mon[75140]: pgmap v2151: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 1.8 MiB/s wr, 117 op/s
Jan 26 16:22:25 compute-0 ovn_controller[146046]: 2026-01-26T16:22:25Z|01322|binding|INFO|Releasing lport 9cc9437d-4445-4fbb-951e-2360a1081c9c from this chassis (sb_readonly=0)
Jan 26 16:22:25 compute-0 nova_compute[239965]: 2026-01-26 16:22:25.169 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:25 compute-0 ovn_controller[146046]: 2026-01-26T16:22:25Z|01323|binding|INFO|Releasing lport 9cc9437d-4445-4fbb-951e-2360a1081c9c from this chassis (sb_readonly=0)
Jan 26 16:22:25 compute-0 nova_compute[239965]: 2026-01-26 16:22:25.348 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 807 KiB/s rd, 1.8 MiB/s wr, 140 op/s
Jan 26 16:22:26 compute-0 ceph-mon[75140]: pgmap v2152: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 807 KiB/s rd, 1.8 MiB/s wr, 140 op/s
Jan 26 16:22:27 compute-0 nova_compute[239965]: 2026-01-26 16:22:27.047 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444532.0458748, 9ad7b43b-be73-4270-af57-e061d3581019 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:27 compute-0 nova_compute[239965]: 2026-01-26 16:22:27.047 239969 INFO nova.compute.manager [-] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] VM Stopped (Lifecycle Event)
Jan 26 16:22:27 compute-0 nova_compute[239965]: 2026-01-26 16:22:27.073 239969 DEBUG nova.compute.manager [None req-d44a8082-88f0-4403-871c-5d536f4ed26c - - - - - -] [instance: 9ad7b43b-be73-4270-af57-e061d3581019] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:27 compute-0 nova_compute[239965]: 2026-01-26 16:22:27.149 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:27 compute-0 NetworkManager[48954]: <info>  [1769444547.1504] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Jan 26 16:22:27 compute-0 NetworkManager[48954]: <info>  [1769444547.1517] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Jan 26 16:22:27 compute-0 nova_compute[239965]: 2026-01-26 16:22:27.236 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:27 compute-0 ovn_controller[146046]: 2026-01-26T16:22:27Z|01324|binding|INFO|Releasing lport 9cc9437d-4445-4fbb-951e-2360a1081c9c from this chassis (sb_readonly=0)
Jan 26 16:22:27 compute-0 nova_compute[239965]: 2026-01-26 16:22:27.244 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:28 compute-0 nova_compute[239965]: 2026-01-26 16:22:28.024 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:28 compute-0 nova_compute[239965]: 2026-01-26 16:22:28.259 239969 DEBUG nova.compute.manager [req-657f02ab-6635-4aea-adf0-f635b6dfc117 req-14f321da-d881-45f0-8412-e24bc7f602b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-changed-7b88408c-5847-4417-be2b-c52e6a358bc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:28 compute-0 nova_compute[239965]: 2026-01-26 16:22:28.260 239969 DEBUG nova.compute.manager [req-657f02ab-6635-4aea-adf0-f635b6dfc117 req-14f321da-d881-45f0-8412-e24bc7f602b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing instance network info cache due to event network-changed-7b88408c-5847-4417-be2b-c52e6a358bc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:22:28 compute-0 nova_compute[239965]: 2026-01-26 16:22:28.260 239969 DEBUG oslo_concurrency.lockutils [req-657f02ab-6635-4aea-adf0-f635b6dfc117 req-14f321da-d881-45f0-8412-e24bc7f602b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:28 compute-0 nova_compute[239965]: 2026-01-26 16:22:28.261 239969 DEBUG oslo_concurrency.lockutils [req-657f02ab-6635-4aea-adf0-f635b6dfc117 req-14f321da-d881-45f0-8412-e24bc7f602b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:28 compute-0 nova_compute[239965]: 2026-01-26 16:22:28.261 239969 DEBUG nova.network.neutron [req-657f02ab-6635-4aea-adf0-f635b6dfc117 req-14f321da-d881-45f0-8412-e24bc7f602b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing network info cache for port 7b88408c-5847-4417-be2b-c52e6a358bc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:22:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 159 op/s
Jan 26 16:22:28 compute-0 ceph-mon[75140]: pgmap v2153: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 159 op/s
Jan 26 16:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:22:28
Jan 26 16:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'images']
Jan 26 16:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:22:29 compute-0 nova_compute[239965]: 2026-01-26 16:22:29.083 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:29 compute-0 nova_compute[239965]: 2026-01-26 16:22:29.501 239969 DEBUG nova.network.neutron [req-657f02ab-6635-4aea-adf0-f635b6dfc117 req-14f321da-d881-45f0-8412-e24bc7f602b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updated VIF entry in instance network info cache for port 7b88408c-5847-4417-be2b-c52e6a358bc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:22:29 compute-0 nova_compute[239965]: 2026-01-26 16:22:29.503 239969 DEBUG nova.network.neutron [req-657f02ab-6635-4aea-adf0-f635b6dfc117 req-14f321da-d881-45f0-8412-e24bc7f602b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:29 compute-0 nova_compute[239965]: 2026-01-26 16:22:29.523 239969 DEBUG oslo_concurrency.lockutils [req-657f02ab-6635-4aea-adf0-f635b6dfc117 req-14f321da-d881-45f0-8412-e24bc7f602b9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 80 op/s
Jan 26 16:22:30 compute-0 ceph-mon[75140]: pgmap v2154: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 80 op/s
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:22:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:22:31 compute-0 ovn_controller[146046]: 2026-01-26T16:22:31Z|01325|binding|INFO|Releasing lport 9cc9437d-4445-4fbb-951e-2360a1081c9c from this chassis (sb_readonly=0)
Jan 26 16:22:31 compute-0 nova_compute[239965]: 2026-01-26 16:22:31.214 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:31 compute-0 nova_compute[239965]: 2026-01-26 16:22:31.470 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:31 compute-0 sshd-session[352392]: Invalid user admin from 209.38.206.249 port 54882
Jan 26 16:22:31 compute-0 sshd-session[352392]: Connection closed by invalid user admin 209.38.206.249 port 54882 [preauth]
Jan 26 16:22:31 compute-0 nova_compute[239965]: 2026-01-26 16:22:31.753 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444536.7521918, b1abd864-1de3-45b1-8fbc-3885a84b1363 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:31 compute-0 nova_compute[239965]: 2026-01-26 16:22:31.754 239969 INFO nova.compute.manager [-] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] VM Stopped (Lifecycle Event)
Jan 26 16:22:31 compute-0 nova_compute[239965]: 2026-01-26 16:22:31.782 239969 DEBUG nova.compute.manager [None req-1c1b39ac-bfef-4380-afec-66a2fdc76b2e - - - - - -] [instance: b1abd864-1de3-45b1-8fbc-3885a84b1363] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:31 compute-0 nova_compute[239965]: 2026-01-26 16:22:31.913 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444536.9113944, 59934c7e-3c2e-4e68-9629-663ecfc6692d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:31 compute-0 nova_compute[239965]: 2026-01-26 16:22:31.914 239969 INFO nova.compute.manager [-] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] VM Stopped (Lifecycle Event)
Jan 26 16:22:31 compute-0 nova_compute[239965]: 2026-01-26 16:22:31.934 239969 DEBUG nova.compute.manager [None req-9668c2ff-e97f-40c9-b479-98e7445a6592 - - - - - -] [instance: 59934c7e-3c2e-4e68-9629-663ecfc6692d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 80 op/s
Jan 26 16:22:32 compute-0 ceph-mon[75140]: pgmap v2155: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 80 op/s
Jan 26 16:22:33 compute-0 nova_compute[239965]: 2026-01-26 16:22:33.063 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:33 compute-0 sshd-session[352394]: Invalid user kafka from 209.38.206.249 port 54892
Jan 26 16:22:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:33 compute-0 sshd-session[352394]: Connection closed by invalid user kafka 209.38.206.249 port 54892 [preauth]
Jan 26 16:22:33 compute-0 sshd-session[352396]: Invalid user admin from 209.38.206.249 port 54906
Jan 26 16:22:33 compute-0 podman[352398]: 2026-01-26 16:22:33.878729122 +0000 UTC m=+0.065640129 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:22:33 compute-0 sshd-session[352396]: Connection closed by invalid user admin 209.38.206.249 port 54906 [preauth]
Jan 26 16:22:33 compute-0 podman[352399]: 2026-01-26 16:22:33.919914881 +0000 UTC m=+0.107243848 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:22:34 compute-0 nova_compute[239965]: 2026-01-26 16:22:34.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Jan 26 16:22:34 compute-0 ceph-mon[75140]: pgmap v2156: 305 pgs: 305 active+clean; 134 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Jan 26 16:22:34 compute-0 ovn_controller[146046]: 2026-01-26T16:22:34Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bd:83:22 10.100.0.7
Jan 26 16:22:34 compute-0 ovn_controller[146046]: 2026-01-26T16:22:34Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bd:83:22 10.100.0.7
Jan 26 16:22:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 156 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 80 op/s
Jan 26 16:22:36 compute-0 nova_compute[239965]: 2026-01-26 16:22:36.644 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:36 compute-0 ceph-mon[75140]: pgmap v2157: 305 pgs: 305 active+clean; 156 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.3 MiB/s wr, 80 op/s
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.664914) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444556664936, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1825, "num_deletes": 251, "total_data_size": 2930541, "memory_usage": 2968896, "flush_reason": "Manual Compaction"}
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444556681150, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2841108, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43511, "largest_seqno": 45335, "table_properties": {"data_size": 2832941, "index_size": 4917, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17297, "raw_average_key_size": 20, "raw_value_size": 2816448, "raw_average_value_size": 3274, "num_data_blocks": 219, "num_entries": 860, "num_filter_entries": 860, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444379, "oldest_key_time": 1769444379, "file_creation_time": 1769444556, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 16298 microseconds, and 6172 cpu microseconds.
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.681206) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2841108 bytes OK
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.681226) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.683038) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.683053) EVENT_LOG_v1 {"time_micros": 1769444556683049, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.683073) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2922747, prev total WAL file size 2922747, number of live WAL files 2.
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.683930) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2774KB)], [101(7852KB)]
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444556684029, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10881612, "oldest_snapshot_seqno": -1}
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6763 keys, 9066729 bytes, temperature: kUnknown
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444556748523, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 9066729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9021912, "index_size": 26785, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 175559, "raw_average_key_size": 25, "raw_value_size": 8901294, "raw_average_value_size": 1316, "num_data_blocks": 1043, "num_entries": 6763, "num_filter_entries": 6763, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444556, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.748893) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 9066729 bytes
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.750136) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 168.4 rd, 140.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 7.7 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(7.0) write-amplify(3.2) OK, records in: 7277, records dropped: 514 output_compression: NoCompression
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.750162) EVENT_LOG_v1 {"time_micros": 1769444556750149, "job": 60, "event": "compaction_finished", "compaction_time_micros": 64613, "compaction_time_cpu_micros": 26982, "output_level": 6, "num_output_files": 1, "total_output_size": 9066729, "num_input_records": 7277, "num_output_records": 6763, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444556751165, "job": 60, "event": "table_file_deletion", "file_number": 103}
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444556753708, "job": 60, "event": "table_file_deletion", "file_number": 101}
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.683844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.753791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.753796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.753800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.753802) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:22:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:22:36.753806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:22:38 compute-0 nova_compute[239965]: 2026-01-26 16:22:38.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 26 16:22:38 compute-0 ceph-mon[75140]: pgmap v2158: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 102 op/s
Jan 26 16:22:39 compute-0 nova_compute[239965]: 2026-01-26 16:22:39.128 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:39 compute-0 sshd-session[352442]: Invalid user debian from 209.38.206.249 port 54912
Jan 26 16:22:39 compute-0 sshd-session[352442]: Connection closed by invalid user debian 209.38.206.249 port 54912 [preauth]
Jan 26 16:22:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:39.839 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:39.840 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:22:39 compute-0 nova_compute[239965]: 2026-01-26 16:22:39.888 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:40 compute-0 sshd-session[352444]: Connection closed by authenticating user root 209.38.206.249 port 49008 [preauth]
Jan 26 16:22:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 16:22:40 compute-0 ceph-mon[75140]: pgmap v2159: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 16:22:40 compute-0 nova_compute[239965]: 2026-01-26 16:22:40.912 239969 INFO nova.compute.manager [None req-69cac123-386a-41e6-bda7-716e87f84c63 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Get console output
Jan 26 16:22:40 compute-0 nova_compute[239965]: 2026-01-26 16:22:40.919 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:22:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:41.842 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.088 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.089 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.112 239969 DEBUG nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.180 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.181 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.190 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.191 239969 INFO nova.compute.claims [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.306 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 16:22:42 compute-0 ceph-mon[75140]: pgmap v2160: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 16:22:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:22:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/315321017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.925 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.619s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.932 239969 DEBUG nova.compute.provider_tree [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.948 239969 DEBUG nova.scheduler.client.report [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.981 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:42 compute-0 nova_compute[239965]: 2026-01-26 16:22:42.982 239969 DEBUG nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.042 239969 DEBUG nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.043 239969 DEBUG nova.network.neutron [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.064 239969 INFO nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.067 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.084 239969 DEBUG nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.184 239969 DEBUG nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.186 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.187 239969 INFO nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Creating image(s)
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.208 239969 DEBUG nova.storage.rbd_utils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.231 239969 DEBUG nova.storage.rbd_utils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.260 239969 DEBUG nova.storage.rbd_utils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.264 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.335 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.337 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.337 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.338 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.405 239969 DEBUG nova.storage.rbd_utils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.409 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.445 239969 DEBUG nova.policy [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ace5f36058684b5782589457d9e15921', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:22:43 compute-0 sshd-session[352519]: Invalid user odoo18 from 209.38.206.249 port 49022
Jan 26 16:22:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/315321017' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.685 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:43 compute-0 sshd-session[352519]: Connection closed by invalid user odoo18 209.38.206.249 port 49022 [preauth]
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.765 239969 DEBUG nova.storage.rbd_utils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] resizing rbd image 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.869 239969 DEBUG nova.objects.instance [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'migration_context' on Instance uuid 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.888 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.888 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Ensure instance console log exists: /var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.889 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.889 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:43 compute-0 nova_compute[239965]: 2026-01-26 16:22:43.889 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:44 compute-0 nova_compute[239965]: 2026-01-26 16:22:44.131 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:44 compute-0 sshd-session[352618]: Invalid user postgres from 209.38.206.249 port 49030
Jan 26 16:22:44 compute-0 sshd-session[352618]: Connection closed by invalid user postgres 209.38.206.249 port 49030 [preauth]
Jan 26 16:22:44 compute-0 nova_compute[239965]: 2026-01-26 16:22:44.324 239969 DEBUG nova.network.neutron [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Successfully created port: 47519d16-bf4d-4b36-aa40-e533f3ff9d77 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:22:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 16:22:44 compute-0 ceph-mon[75140]: pgmap v2161: 305 pgs: 305 active+clean; 167 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Jan 26 16:22:44 compute-0 sshd-session[352638]: Invalid user orangepi from 209.38.206.249 port 49042
Jan 26 16:22:44 compute-0 sshd-session[352638]: Connection closed by invalid user orangepi 209.38.206.249 port 49042 [preauth]
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.111 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:45 compute-0 sshd-session[352640]: Invalid user minecraft from 209.38.206.249 port 49054
Jan 26 16:22:45 compute-0 sshd-session[352640]: Connection closed by invalid user minecraft 209.38.206.249 port 49054 [preauth]
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.569 239969 DEBUG nova.network.neutron [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Successfully updated port: 47519d16-bf4d-4b36-aa40-e533f3ff9d77 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.583 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.583 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquired lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.583 239969 DEBUG nova.network.neutron [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.646 239969 DEBUG nova.compute.manager [req-edc5de45-6f1c-48b5-a00a-4fffb6ab8195 req-e398d479-92be-4fd2-b4c6-f0ae52a16359 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Received event network-changed-47519d16-bf4d-4b36-aa40-e533f3ff9d77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.647 239969 DEBUG nova.compute.manager [req-edc5de45-6f1c-48b5-a00a-4fffb6ab8195 req-e398d479-92be-4fd2-b4c6-f0ae52a16359 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Refreshing instance network info cache due to event network-changed-47519d16-bf4d-4b36-aa40-e533f3ff9d77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.647 239969 DEBUG oslo_concurrency.lockutils [req-edc5de45-6f1c-48b5-a00a-4fffb6ab8195 req-e398d479-92be-4fd2-b4c6-f0ae52a16359 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:45 compute-0 nova_compute[239965]: 2026-01-26 16:22:45.705 239969 DEBUG nova.network.neutron [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:22:45 compute-0 sshd-session[352642]: Invalid user vagrant from 209.38.206.249 port 49066
Jan 26 16:22:45 compute-0 sshd-session[352642]: Connection closed by invalid user vagrant 209.38.206.249 port 49066 [preauth]
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.308 239969 DEBUG oslo_concurrency.lockutils [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "interface-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.308 239969 DEBUG oslo_concurrency.lockutils [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "interface-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.309 239969 DEBUG nova.objects.instance [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'flavor' on Instance uuid 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:46 compute-0 sshd-session[352644]: Invalid user devops from 209.38.206.249 port 49068
Jan 26 16:22:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 186 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.7 MiB/s wr, 74 op/s
Jan 26 16:22:46 compute-0 sshd-session[352644]: Connection closed by invalid user devops 209.38.206.249 port 49068 [preauth]
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.659 239969 DEBUG nova.network.neutron [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updating instance_info_cache with network_info: [{"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:46 compute-0 ceph-mon[75140]: pgmap v2162: 305 pgs: 305 active+clean; 186 MiB data, 876 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.7 MiB/s wr, 74 op/s
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.678 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Releasing lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.678 239969 DEBUG nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Instance network_info: |[{"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.679 239969 DEBUG oslo_concurrency.lockutils [req-edc5de45-6f1c-48b5-a00a-4fffb6ab8195 req-e398d479-92be-4fd2-b4c6-f0ae52a16359 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.679 239969 DEBUG nova.network.neutron [req-edc5de45-6f1c-48b5-a00a-4fffb6ab8195 req-e398d479-92be-4fd2-b4c6-f0ae52a16359 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Refreshing network info cache for port 47519d16-bf4d-4b36-aa40-e533f3ff9d77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.681 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Start _get_guest_xml network_info=[{"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.686 239969 WARNING nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.699 239969 DEBUG nova.virt.libvirt.host [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.700 239969 DEBUG nova.virt.libvirt.host [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.703 239969 DEBUG nova.virt.libvirt.host [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.704 239969 DEBUG nova.virt.libvirt.host [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.704 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.705 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.705 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.705 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.706 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.706 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.706 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.706 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.706 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.707 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.707 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.707 239969 DEBUG nova.virt.hardware [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.710 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.762 239969 DEBUG nova.objects.instance [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_requests' on Instance uuid 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.786 239969 DEBUG nova.network.neutron [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:22:46 compute-0 nova_compute[239965]: 2026-01-26 16:22:46.949 239969 DEBUG nova.policy [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:22:47 compute-0 sshd-session[352646]: Invalid user mysql from 209.38.206.249 port 49080
Jan 26 16:22:47 compute-0 sshd-session[352646]: Connection closed by invalid user mysql 209.38.206.249 port 49080 [preauth]
Jan 26 16:22:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:22:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691320064' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.327 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.353 239969 DEBUG nova.storage.rbd_utils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.357 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:47 compute-0 sshd-session[352668]: Invalid user nexus from 209.38.206.249 port 49082
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.622 239969 DEBUG nova.network.neutron [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Successfully created port: 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:22:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2691320064' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:22:47 compute-0 sshd-session[352668]: Connection closed by invalid user nexus 209.38.206.249 port 49082 [preauth]
Jan 26 16:22:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:22:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3993007818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.952 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.953 239969 DEBUG nova.virt.libvirt.vif [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-147250426',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-147250426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=127,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaGoHE+Y8ji3+JCDO/bDUoDInaraCUH4QlGo7jTOAJTsSMKwIPcdLhQoPl5Hxa2VpL5egGSZOv5v3mz9KbIjWDatj77IcRZ2MRFQE+vsAUPEwBUyrw2qpYszZfZz2+J9w==',key_name='tempest-TestSecurityGroupsBasicOps-326396829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-apj6m4p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:22:43Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=7ddfbe11-7a13-4e3c-8fe6-0cca0be74583,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.954 239969 DEBUG nova.network.os_vif_util [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.954 239969 DEBUG nova.network.os_vif_util [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:7e:f9,bridge_name='br-int',has_traffic_filtering=True,id=47519d16-bf4d-4b36-aa40-e533f3ff9d77,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47519d16-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.956 239969 DEBUG nova.objects.instance [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.973 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <uuid>7ddfbe11-7a13-4e3c-8fe6-0cca0be74583</uuid>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <name>instance-0000007f</name>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-147250426</nova:name>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:22:46</nova:creationTime>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <nova:user uuid="ace5f36058684b5782589457d9e15921">tempest-TestSecurityGroupsBasicOps-75910484-project-member</nova:user>
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <nova:project uuid="1f814bd45d164be6827f7ef54ed07f5f">tempest-TestSecurityGroupsBasicOps-75910484</nova:project>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <nova:port uuid="47519d16-bf4d-4b36-aa40-e533f3ff9d77">
Jan 26 16:22:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <system>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <entry name="serial">7ddfbe11-7a13-4e3c-8fe6-0cca0be74583</entry>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <entry name="uuid">7ddfbe11-7a13-4e3c-8fe6-0cca0be74583</entry>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     </system>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <os>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   </os>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <features>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   </features>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk">
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk.config">
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:22:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:34:7e:f9"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <target dev="tap47519d16-bf"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583/console.log" append="off"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <video>
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     </video>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:22:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:22:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:22:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:22:47 compute-0 nova_compute[239965]: </domain>
Jan 26 16:22:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.975 239969 DEBUG nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Preparing to wait for external event network-vif-plugged-47519d16-bf4d-4b36-aa40-e533f3ff9d77 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.976 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.977 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.977 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.978 239969 DEBUG nova.virt.libvirt.vif [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-147250426',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-147250426',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=127,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaGoHE+Y8ji3+JCDO/bDUoDInaraCUH4QlGo7jTOAJTsSMKwIPcdLhQoPl5Hxa2VpL5egGSZOv5v3mz9KbIjWDatj77IcRZ2MRFQE+vsAUPEwBUyrw2qpYszZfZz2+J9w==',key_name='tempest-TestSecurityGroupsBasicOps-326396829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-apj6m4p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:22:43Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=7ddfbe11-7a13-4e3c-8fe6-0cca0be74583,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.978 239969 DEBUG nova.network.os_vif_util [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.979 239969 DEBUG nova.network.os_vif_util [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:7e:f9,bridge_name='br-int',has_traffic_filtering=True,id=47519d16-bf4d-4b36-aa40-e533f3ff9d77,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47519d16-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.979 239969 DEBUG os_vif [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:7e:f9,bridge_name='br-int',has_traffic_filtering=True,id=47519d16-bf4d-4b36-aa40-e533f3ff9d77,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47519d16-bf') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.980 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.981 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.981 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.985 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.985 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap47519d16-bf, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.986 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap47519d16-bf, col_values=(('external_ids', {'iface-id': '47519d16-bf4d-4b36-aa40-e533f3ff9d77', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:7e:f9', 'vm-uuid': '7ddfbe11-7a13-4e3c-8fe6-0cca0be74583'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.987 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:47 compute-0 NetworkManager[48954]: <info>  [1769444567.9882] manager: (tap47519d16-bf): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/547)
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.990 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:22:47 compute-0 nova_compute[239965]: 2026-01-26 16:22:47.998 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.001 239969 INFO os_vif [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:7e:f9,bridge_name='br-int',has_traffic_filtering=True,id=47519d16-bf4d-4b36-aa40-e533f3ff9d77,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47519d16-bf')
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.064 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.065 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.065 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No VIF found with MAC fa:16:3e:34:7e:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.066 239969 INFO nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Using config drive
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.090 239969 DEBUG nova.storage.rbd_utils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.097 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.209 239969 DEBUG nova.network.neutron [req-edc5de45-6f1c-48b5-a00a-4fffb6ab8195 req-e398d479-92be-4fd2-b4c6-f0ae52a16359 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updated VIF entry in instance network info cache for port 47519d16-bf4d-4b36-aa40-e533f3ff9d77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.210 239969 DEBUG nova.network.neutron [req-edc5de45-6f1c-48b5-a00a-4fffb6ab8195 req-e398d479-92be-4fd2-b4c6-f0ae52a16359 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updating instance_info_cache with network_info: [{"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.256 239969 DEBUG oslo_concurrency.lockutils [req-edc5de45-6f1c-48b5-a00a-4fffb6ab8195 req-e398d479-92be-4fd2-b4c6-f0ae52a16359 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:48 compute-0 sshd-session[352710]: Connection closed by authenticating user root 209.38.206.249 port 49090 [preauth]
Jan 26 16:22:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 2.6 MiB/s wr, 72 op/s
Jan 26 16:22:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3993007818' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:22:48 compute-0 ceph-mon[75140]: pgmap v2163: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 2.6 MiB/s wr, 72 op/s
Jan 26 16:22:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:22:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1169507029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:22:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:22:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1169507029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:22:48 compute-0 sshd-session[352735]: Connection closed by authenticating user root 209.38.206.249 port 42892 [preauth]
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.931 239969 DEBUG nova.network.neutron [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Successfully updated port: 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.953 239969 DEBUG oslo_concurrency.lockutils [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.953 239969 DEBUG oslo_concurrency.lockutils [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:48 compute-0 nova_compute[239965]: 2026-01-26 16:22:48.954 239969 DEBUG nova.network.neutron [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011196048602684018 of space, bias 1.0, pg target 0.33588145808052056 quantized to 32 (current 32)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010184297308897285 of space, bias 1.0, pg target 0.30552891926691855 quantized to 32 (current 32)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.055059901095464e-07 of space, bias 4.0, pg target 0.0008466071881314557 quantized to 16 (current 16)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:22:49 compute-0 sshd-session[352737]: Invalid user jenkins from 209.38.206.249 port 42894
Jan 26 16:22:49 compute-0 sshd-session[352737]: Connection closed by invalid user jenkins 209.38.206.249 port 42894 [preauth]
Jan 26 16:22:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1169507029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:22:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1169507029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:22:49 compute-0 nova_compute[239965]: 2026-01-26 16:22:49.798 239969 INFO nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Creating config drive at /var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583/disk.config
Jan 26 16:22:49 compute-0 nova_compute[239965]: 2026-01-26 16:22:49.808 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp48kv_e0g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:49 compute-0 nova_compute[239965]: 2026-01-26 16:22:49.974 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp48kv_e0g" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:49 compute-0 nova_compute[239965]: 2026-01-26 16:22:49.998 239969 DEBUG nova.storage.rbd_utils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.001 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583/disk.config 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.158 239969 DEBUG oslo_concurrency.processutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583/disk.config 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.159 239969 INFO nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Deleting local config drive /var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583/disk.config because it was imported into RBD.
Jan 26 16:22:50 compute-0 kernel: tap47519d16-bf: entered promiscuous mode
Jan 26 16:22:50 compute-0 NetworkManager[48954]: <info>  [1769444570.2178] manager: (tap47519d16-bf): new Tun device (/org/freedesktop/NetworkManager/Devices/548)
Jan 26 16:22:50 compute-0 ovn_controller[146046]: 2026-01-26T16:22:50Z|01326|binding|INFO|Claiming lport 47519d16-bf4d-4b36-aa40-e533f3ff9d77 for this chassis.
Jan 26 16:22:50 compute-0 ovn_controller[146046]: 2026-01-26T16:22:50Z|01327|binding|INFO|47519d16-bf4d-4b36-aa40-e533f3ff9d77: Claiming fa:16:3e:34:7e:f9 10.100.0.6
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.220 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:50 compute-0 ovn_controller[146046]: 2026-01-26T16:22:50Z|01328|binding|INFO|Setting lport 47519d16-bf4d-4b36-aa40-e533f3ff9d77 ovn-installed in OVS
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.236 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.240 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:50 compute-0 systemd-machined[208061]: New machine qemu-154-instance-0000007f.
Jan 26 16:22:50 compute-0 systemd-udevd[352792]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:22:50 compute-0 systemd[1]: Started Virtual Machine qemu-154-instance-0000007f.
Jan 26 16:22:50 compute-0 NetworkManager[48954]: <info>  [1769444570.2875] device (tap47519d16-bf): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:22:50 compute-0 NetworkManager[48954]: <info>  [1769444570.2882] device (tap47519d16-bf): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:22:50 compute-0 ovn_controller[146046]: 2026-01-26T16:22:50Z|01329|binding|INFO|Setting lport 47519d16-bf4d-4b36-aa40-e533f3ff9d77 up in Southbound
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.315 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:7e:f9 10.100.0.6'], port_security=['fa:16:3e:34:7e:f9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7ddfbe11-7a13-4e3c-8fe6-0cca0be74583', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bc91585-31b9-47fa-8cd0-03064511558e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0a3a9fe-99c5-4bfe-a92f-e1de2e2a1727 fa917f24-1d33-45b8-bab6-704cf1bde01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7379a9d2-4570-48f3-bfe7-5857500f5f77, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=47519d16-bf4d-4b36-aa40-e533f3ff9d77) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.317 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 47519d16-bf4d-4b36-aa40-e533f3ff9d77 in datapath 9bc91585-31b9-47fa-8cd0-03064511558e bound to our chassis
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.319 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bc91585-31b9-47fa-8cd0-03064511558e
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.334 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e8ea74ee-8cf7-4538-b9db-56ed93843672]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.335 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9bc91585-31 in ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.337 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9bc91585-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.337 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0599acd6-07dc-45ea-bf16-549ba849a6aa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.338 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9153557a-7b80-4c6b-b69d-946c9020f11c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.351 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ce0b00-3711-438f-94f1-f120411a15cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.369 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f22476c5-edb4-4931-b45f-e0859320ad5a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.402 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e93fe7ed-a563-4d70-aa82-cd6fd2a3b059]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 NetworkManager[48954]: <info>  [1769444570.4104] manager: (tap9bc91585-30): new Veth device (/org/freedesktop/NetworkManager/Devices/549)
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.410 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5bfda5e9-f74e-49d2-8e59-7e7ceedff785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.450 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1c501453-c3b2-4044-9054-390531adadef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.456 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9945e7e3-8587-4784-83e9-25bc52af5955]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 NetworkManager[48954]: <info>  [1769444570.4896] device (tap9bc91585-30): carrier: link connected
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.502 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[135422b5-5591-494f-bf71-3d0a64b0f512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.525 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e286c45c-7d0e-4042-b773-fab594d7075b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bc91585-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:67:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606129, 'reachable_time': 39464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352829, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.543 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5fb805-fad8-4eea-9b9b-40adf1fbe499]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:6722'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606129, 'tstamp': 606129}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352830, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.562 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8432a9f4-541b-4f8c-aef8-0bc0883dcf54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bc91585-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:67:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606129, 'reachable_time': 39464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352831, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.594 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[831faf78-e01a-4bfe-8c76-8a861c501ad5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.650 239969 DEBUG nova.compute.manager [req-1da03ebb-7749-4b9c-96b6-7ab652c7bd4e req-c828b0e1-0556-42bc-aa7c-71d65737e71f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-changed-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.650 239969 DEBUG nova.compute.manager [req-1da03ebb-7749-4b9c-96b6-7ab652c7bd4e req-c828b0e1-0556-42bc-aa7c-71d65737e71f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing instance network info cache due to event network-changed-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.651 239969 DEBUG oslo_concurrency.lockutils [req-1da03ebb-7749-4b9c-96b6-7ab652c7bd4e req-c828b0e1-0556-42bc-aa7c-71d65737e71f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.667 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f5fd78-27ad-4d02-9db3-484573bcbd62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.669 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bc91585-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.670 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.671 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bc91585-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.672 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:50 compute-0 NetworkManager[48954]: <info>  [1769444570.6735] manager: (tap9bc91585-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/550)
Jan 26 16:22:50 compute-0 kernel: tap9bc91585-30: entered promiscuous mode
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.676 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.685 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bc91585-30, col_values=(('external_ids', {'iface-id': 'ad3f6d2f-d76e-421c-82c8-5fe4525b7056'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:50 compute-0 ovn_controller[146046]: 2026-01-26T16:22:50Z|01330|binding|INFO|Releasing lport ad3f6d2f-d76e-421c-82c8-5fe4525b7056 from this chassis (sb_readonly=0)
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.688 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9bc91585-31b9-47fa-8cd0-03064511558e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9bc91585-31b9-47fa-8cd0-03064511558e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.690 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8dd6ed6c-a2ce-42fc-8720-6e710b8a5392]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.690 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.691 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-9bc91585-31b9-47fa-8cd0-03064511558e
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/9bc91585-31b9-47fa-8cd0-03064511558e.pid.haproxy
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 9bc91585-31b9-47fa-8cd0-03064511558e
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:22:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:50.692 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'env', 'PROCESS_TAG=haproxy-9bc91585-31b9-47fa-8cd0-03064511558e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9bc91585-31b9-47fa-8cd0-03064511558e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:22:50 compute-0 ceph-mon[75140]: pgmap v2164: 305 pgs: 305 active+clean; 213 MiB data, 890 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.705 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:50 compute-0 nova_compute[239965]: 2026-01-26 16:22:50.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:50 compute-0 sshd-session[352827]: Invalid user ts3 from 209.38.206.249 port 42898
Jan 26 16:22:50 compute-0 sshd-session[352827]: Connection closed by invalid user ts3 209.38.206.249 port 42898 [preauth]
Jan 26 16:22:51 compute-0 podman[352901]: 2026-01-26 16:22:51.083904952 +0000 UTC m=+0.057552010 container create b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:22:51 compute-0 nova_compute[239965]: 2026-01-26 16:22:51.085 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444571.0846453, 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:51 compute-0 nova_compute[239965]: 2026-01-26 16:22:51.086 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] VM Started (Lifecycle Event)
Jan 26 16:22:51 compute-0 nova_compute[239965]: 2026-01-26 16:22:51.104 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:51 compute-0 nova_compute[239965]: 2026-01-26 16:22:51.108 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444571.0849497, 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:51 compute-0 nova_compute[239965]: 2026-01-26 16:22:51.109 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] VM Paused (Lifecycle Event)
Jan 26 16:22:51 compute-0 systemd[1]: Started libpod-conmon-b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b.scope.
Jan 26 16:22:51 compute-0 nova_compute[239965]: 2026-01-26 16:22:51.124 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:51 compute-0 nova_compute[239965]: 2026-01-26 16:22:51.128 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:22:51 compute-0 nova_compute[239965]: 2026-01-26 16:22:51.146 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:22:51 compute-0 podman[352901]: 2026-01-26 16:22:51.053233501 +0000 UTC m=+0.026880609 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:22:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebef440e628061add6e11d5722db7178f58630568610ca9ccaa0d3a738d2e270/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:51 compute-0 podman[352901]: 2026-01-26 16:22:51.170143765 +0000 UTC m=+0.143790843 container init b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:22:51 compute-0 podman[352901]: 2026-01-26 16:22:51.177784041 +0000 UTC m=+0.151431099 container start b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:22:51 compute-0 neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e[352916]: [NOTICE]   (352920) : New worker (352922) forked
Jan 26 16:22:51 compute-0 neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e[352916]: [NOTICE]   (352920) : Loading success.
Jan 26 16:22:51 compute-0 nova_compute[239965]: 2026-01-26 16:22:51.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.270 239969 DEBUG nova.network.neutron [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.291 239969 DEBUG oslo_concurrency.lockutils [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.292 239969 DEBUG oslo_concurrency.lockutils [req-1da03ebb-7749-4b9c-96b6-7ab652c7bd4e req-c828b0e1-0556-42bc-aa7c-71d65737e71f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.292 239969 DEBUG nova.network.neutron [req-1da03ebb-7749-4b9c-96b6-7ab652c7bd4e req-c828b0e1-0556-42bc-aa7c-71d65737e71f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing network info cache for port 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.296 239969 DEBUG nova.virt.libvirt.vif [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:22:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1426087027',display_name='tempest-TestNetworkBasicOps-server-1426087027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1426087027',id=126,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVQvnTTId2F2dZ4CmrqUuSRPgxVpg5Gcg7/WdXsulHhwVIkk74RWE9IZv+pRN7sfUWNe4e/QS34rEQOYJGDOrZn6Q/HpS10QJgjM9+nMxrjR7pUidrziE/wnxgJrIPauA==',key_name='tempest-TestNetworkBasicOps-1612194243',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-058vjbml',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:22:22Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=34442cd1-7fb4-45b4-bd46-9d2cb53d00e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.296 239969 DEBUG nova.network.os_vif_util [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.296 239969 DEBUG nova.network.os_vif_util [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.297 239969 DEBUG os_vif [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.297 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.298 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.298 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.301 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.301 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62f75eb8-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.301 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62f75eb8-8d, col_values=(('external_ids', {'iface-id': '62f75eb8-8d58-40a9-bac0-5f5579d1d9c3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:22:5d', 'vm-uuid': '34442cd1-7fb4-45b4-bd46-9d2cb53d00e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 NetworkManager[48954]: <info>  [1769444572.3044] manager: (tap62f75eb8-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.305 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.354 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.355 239969 INFO os_vif [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d')
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.356 239969 DEBUG nova.virt.libvirt.vif [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:22:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1426087027',display_name='tempest-TestNetworkBasicOps-server-1426087027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1426087027',id=126,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVQvnTTId2F2dZ4CmrqUuSRPgxVpg5Gcg7/WdXsulHhwVIkk74RWE9IZv+pRN7sfUWNe4e/QS34rEQOYJGDOrZn6Q/HpS10QJgjM9+nMxrjR7pUidrziE/wnxgJrIPauA==',key_name='tempest-TestNetworkBasicOps-1612194243',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-058vjbml',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:22:22Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=34442cd1-7fb4-45b4-bd46-9d2cb53d00e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.357 239969 DEBUG nova.network.os_vif_util [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.357 239969 DEBUG nova.network.os_vif_util [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.361 239969 DEBUG nova.virt.libvirt.guest [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] attach device xml: <interface type="ethernet">
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:b8:22:5d"/>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <target dev="tap62f75eb8-8d"/>
Jan 26 16:22:52 compute-0 nova_compute[239965]: </interface>
Jan 26 16:22:52 compute-0 nova_compute[239965]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Jan 26 16:22:52 compute-0 kernel: tap62f75eb8-8d: entered promiscuous mode
Jan 26 16:22:52 compute-0 NetworkManager[48954]: <info>  [1769444572.3829] manager: (tap62f75eb8-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Jan 26 16:22:52 compute-0 ovn_controller[146046]: 2026-01-26T16:22:52Z|01331|binding|INFO|Claiming lport 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 for this chassis.
Jan 26 16:22:52 compute-0 ovn_controller[146046]: 2026-01-26T16:22:52Z|01332|binding|INFO|62f75eb8-8d58-40a9-bac0-5f5579d1d9c3: Claiming fa:16:3e:b8:22:5d 10.100.0.21
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.398 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:22:5d 10.100.0.21'], port_security=['fa:16:3e:b8:22:5d 10.100.0.21'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '34442cd1-7fb4-45b4-bd46-9d2cb53d00e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '039287cb-946e-4ce0-ad47-a6333988fa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=299c72d0-590b-465a-ba1c-80ce946f37ca, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:52 compute-0 NetworkManager[48954]: <info>  [1769444572.4003] device (tap62f75eb8-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.400 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 in datapath 86f67c73-31b4-4f89-93c5-3a2f6b391b21 bound to our chassis
Jan 26 16:22:52 compute-0 NetworkManager[48954]: <info>  [1769444572.4012] device (tap62f75eb8-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.403 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 86f67c73-31b4-4f89-93c5-3a2f6b391b21
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.416 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb0448d1-df85-4ef2-a14b-e48d2438fa4e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.417 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap86f67c73-31 in ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.420 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap86f67c73-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.421 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c8bc79c7-de42-4bf7-9ea3-a7c6d1cf7064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.421 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6924823a-a45b-4fe1-9cda-3a99431c8427]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.426 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 ovn_controller[146046]: 2026-01-26T16:22:52Z|01333|binding|INFO|Setting lport 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 ovn-installed in OVS
Jan 26 16:22:52 compute-0 ovn_controller[146046]: 2026-01-26T16:22:52Z|01334|binding|INFO|Setting lport 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 up in Southbound
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.445 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.450 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[51b0103b-0cb0-4d4f-8bd6-a9b326bd9235]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.464 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8443b278-6d22-4e81-9ee7-e58f0ab45336]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.484 239969 DEBUG nova.virt.libvirt.driver [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.485 239969 DEBUG nova.virt.libvirt.driver [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.485 239969 DEBUG nova.virt.libvirt.driver [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:bd:83:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.485 239969 DEBUG nova.virt.libvirt.driver [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:b8:22:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.499 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1b94991f-eb90-4ce5-807c-2b7bb57b98a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 NetworkManager[48954]: <info>  [1769444572.5071] manager: (tap86f67c73-30): new Veth device (/org/freedesktop/NetworkManager/Devices/553)
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.508 239969 DEBUG nova.virt.libvirt.guest [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1426087027</nova:name>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:22:52</nova:creationTime>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:22:52 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.508 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[77ac6104-d93b-4730-adba-f29a938e7fbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     <nova:port uuid="7b88408c-5847-4417-be2b-c52e6a358bc7">
Jan 26 16:22:52 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     <nova:port uuid="62f75eb8-8d58-40a9-bac0-5f5579d1d9c3">
Jan 26 16:22:52 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Jan 26 16:22:52 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:22:52 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:22:52 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:22:52 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.530 239969 DEBUG oslo_concurrency.lockutils [None req-6353eafa-f66f-4b78-9219-95986cd48ee8 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "interface-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.540 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e423ae8f-032c-4bac-9daf-85d644e4e365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.543 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[506b394c-d4ac-4255-bd44-00cfead5d3ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 26 16:22:52 compute-0 NetworkManager[48954]: <info>  [1769444572.5742] device (tap86f67c73-30): carrier: link connected
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.579 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cd834961-f9b0-439b-ba58-cae970fec56d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.597 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8f4b35-b9e8-4f02-8a71-f24ba0b79f3b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86f67c73-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:83:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606338, 'reachable_time': 33945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352950, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.611 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[38a352fb-05a3-4152-8d1f-0b6a93242aae]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feea:8315'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606338, 'tstamp': 606338}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352951, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.627 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dc257169-5005-4a15-9bb6-9dd86b298858]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86f67c73-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:83:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606338, 'reachable_time': 33945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352952, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ceph-mon[75140]: pgmap v2165: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.671 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b014c922-35c8-4a37-b7fb-1864ff020019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.758 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b39c4392-a466-478f-bc2e-a4107b9286c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.760 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86f67c73-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.760 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.761 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86f67c73-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.763 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 kernel: tap86f67c73-30: entered promiscuous mode
Jan 26 16:22:52 compute-0 NetworkManager[48954]: <info>  [1769444572.7641] manager: (tap86f67c73-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.766 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.768 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap86f67c73-30, col_values=(('external_ids', {'iface-id': '8154df63-0e6b-4a3f-8b4f-601fcbe2af2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.769 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 ovn_controller[146046]: 2026-01-26T16:22:52Z|01335|binding|INFO|Releasing lport 8154df63-0e6b-4a3f-8b4f-601fcbe2af2d from this chassis (sb_readonly=0)
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.770 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.771 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/86f67c73-31b4-4f89-93c5-3a2f6b391b21.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/86f67c73-31b4-4f89-93c5-3a2f6b391b21.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.772 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[147b26e5-9ef8-41f4-b274-85ac7939e9b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.773 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-86f67c73-31b4-4f89-93c5-3a2f6b391b21
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/86f67c73-31b4-4f89-93c5-3a2f6b391b21.pid.haproxy
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 86f67c73-31b4-4f89-93c5-3a2f6b391b21
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:22:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:52.775 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'env', 'PROCESS_TAG=haproxy-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/86f67c73-31b4-4f89-93c5-3a2f6b391b21.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:22:52 compute-0 nova_compute[239965]: 2026-01-26 16:22:52.784 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:53.126 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:eb:1b:cc 10.100.0.2 2001:db8::f816:3eff:feeb:1bcc'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feeb:1bcc/64', 'neutron:device_id': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59cf9173-0130-4701-8cd2-ef13c0f4cee0, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=29a5989d-1915-41bb-983b-b3c73bba2262) old=Port_Binding(mac=['fa:16:3e:eb:1b:cc 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:22:53 compute-0 podman[352985]: 2026-01-26 16:22:53.13854652 +0000 UTC m=+0.052782984 container create 37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:22:53 compute-0 systemd[1]: Started libpod-conmon-37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036.scope.
Jan 26 16:22:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:22:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0225cb35f97205eeb782dc6b1507e46ea8ac1f60c3466debc5263ffa7eacd235/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:22:53 compute-0 podman[352985]: 2026-01-26 16:22:53.114288226 +0000 UTC m=+0.028524720 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:22:53 compute-0 podman[352985]: 2026-01-26 16:22:53.217875132 +0000 UTC m=+0.132111596 container init 37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:22:53 compute-0 podman[352985]: 2026-01-26 16:22:53.224028854 +0000 UTC m=+0.138265298 container start 37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 16:22:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:53 compute-0 neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21[353000]: [NOTICE]   (353004) : New worker (353006) forked
Jan 26 16:22:53 compute-0 neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21[353000]: [NOTICE]   (353004) : Loading success.
Jan 26 16:22:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:53.276 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 29a5989d-1915-41bb-983b-b3c73bba2262 in datapath 71a6c2b4-c95b-48d6-bc0d-8a809d49273b unbound from our chassis
Jan 26 16:22:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:53.278 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71a6c2b4-c95b-48d6-bc0d-8a809d49273b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:22:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:53.278 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d4c7966-1cdc-4c0b-ac02-c3acd876a96c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.343 239969 DEBUG nova.compute.manager [req-b747ae8c-bf44-4418-ad64-b393282de9be req-56fb5589-2137-447c-9afb-d117be819388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-plugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.344 239969 DEBUG oslo_concurrency.lockutils [req-b747ae8c-bf44-4418-ad64-b393282de9be req-56fb5589-2137-447c-9afb-d117be819388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.345 239969 DEBUG oslo_concurrency.lockutils [req-b747ae8c-bf44-4418-ad64-b393282de9be req-56fb5589-2137-447c-9afb-d117be819388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.346 239969 DEBUG oslo_concurrency.lockutils [req-b747ae8c-bf44-4418-ad64-b393282de9be req-56fb5589-2137-447c-9afb-d117be819388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.346 239969 DEBUG nova.compute.manager [req-b747ae8c-bf44-4418-ad64-b393282de9be req-56fb5589-2137-447c-9afb-d117be819388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] No waiting events found dispatching network-vif-plugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.346 239969 WARNING nova.compute.manager [req-b747ae8c-bf44-4418-ad64-b393282de9be req-56fb5589-2137-447c-9afb-d117be819388 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received unexpected event network-vif-plugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 for instance with vm_state active and task_state None.
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.613 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.617 239969 DEBUG nova.network.neutron [req-1da03ebb-7749-4b9c-96b6-7ab652c7bd4e req-c828b0e1-0556-42bc-aa7c-71d65737e71f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updated VIF entry in instance network info cache for port 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.617 239969 DEBUG nova.network.neutron [req-1da03ebb-7749-4b9c-96b6-7ab652c7bd4e req-c828b0e1-0556-42bc-aa7c-71d65737e71f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:22:53 compute-0 nova_compute[239965]: 2026-01-26 16:22:53.637 239969 DEBUG oslo_concurrency.lockutils [req-1da03ebb-7749-4b9c-96b6-7ab652c7bd4e req-c828b0e1-0556-42bc-aa7c-71d65737e71f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.149 239969 DEBUG nova.compute.manager [req-c2fc1a57-4de6-4220-8e35-03d0c80e89de req-5f2da9ff-50a8-436f-92e8-f55cc621c476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Received event network-vif-plugged-47519d16-bf4d-4b36-aa40-e533f3ff9d77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.150 239969 DEBUG oslo_concurrency.lockutils [req-c2fc1a57-4de6-4220-8e35-03d0c80e89de req-5f2da9ff-50a8-436f-92e8-f55cc621c476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.150 239969 DEBUG oslo_concurrency.lockutils [req-c2fc1a57-4de6-4220-8e35-03d0c80e89de req-5f2da9ff-50a8-436f-92e8-f55cc621c476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.151 239969 DEBUG oslo_concurrency.lockutils [req-c2fc1a57-4de6-4220-8e35-03d0c80e89de req-5f2da9ff-50a8-436f-92e8-f55cc621c476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.151 239969 DEBUG nova.compute.manager [req-c2fc1a57-4de6-4220-8e35-03d0c80e89de req-5f2da9ff-50a8-436f-92e8-f55cc621c476 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Processing event network-vif-plugged-47519d16-bf4d-4b36-aa40-e533f3ff9d77 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.152 239969 DEBUG nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.156 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444574.1566677, 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.157 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] VM Resumed (Lifecycle Event)
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.159 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.164 239969 INFO nova.virt.libvirt.driver [-] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Instance spawned successfully.
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.164 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.178 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.184 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.188 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.189 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.189 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.189 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.190 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.190 239969 DEBUG nova.virt.libvirt.driver [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.214 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.249 239969 INFO nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Took 11.06 seconds to spawn the instance on the hypervisor.
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.250 239969 DEBUG nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.307 239969 INFO nova.compute.manager [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Took 12.15 seconds to build instance.
Jan 26 16:22:54 compute-0 nova_compute[239965]: 2026-01-26 16:22:54.331 239969 DEBUG oslo_concurrency.lockutils [None req-5ec8ef44-db1e-4da7-becf-4dede9756495 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 26 16:22:54 compute-0 ceph-mon[75140]: pgmap v2166: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Jan 26 16:22:55 compute-0 ovn_controller[146046]: 2026-01-26T16:22:55Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:22:5d 10.100.0.21
Jan 26 16:22:55 compute-0 ovn_controller[146046]: 2026-01-26T16:22:55Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:22:5d 10.100.0.21
Jan 26 16:22:55 compute-0 nova_compute[239965]: 2026-01-26 16:22:55.450 239969 DEBUG nova.compute.manager [req-286886f9-5aba-48b1-b0cd-b98724ace5f9 req-dc211ece-da28-4d68-8f1e-4e036df7131b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-plugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:55 compute-0 nova_compute[239965]: 2026-01-26 16:22:55.450 239969 DEBUG oslo_concurrency.lockutils [req-286886f9-5aba-48b1-b0cd-b98724ace5f9 req-dc211ece-da28-4d68-8f1e-4e036df7131b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:55 compute-0 nova_compute[239965]: 2026-01-26 16:22:55.451 239969 DEBUG oslo_concurrency.lockutils [req-286886f9-5aba-48b1-b0cd-b98724ace5f9 req-dc211ece-da28-4d68-8f1e-4e036df7131b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:55 compute-0 nova_compute[239965]: 2026-01-26 16:22:55.451 239969 DEBUG oslo_concurrency.lockutils [req-286886f9-5aba-48b1-b0cd-b98724ace5f9 req-dc211ece-da28-4d68-8f1e-4e036df7131b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:55 compute-0 nova_compute[239965]: 2026-01-26 16:22:55.451 239969 DEBUG nova.compute.manager [req-286886f9-5aba-48b1-b0cd-b98724ace5f9 req-dc211ece-da28-4d68-8f1e-4e036df7131b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] No waiting events found dispatching network-vif-plugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:55 compute-0 nova_compute[239965]: 2026-01-26 16:22:55.451 239969 WARNING nova.compute.manager [req-286886f9-5aba-48b1-b0cd-b98724ace5f9 req-dc211ece-da28-4d68-8f1e-4e036df7131b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received unexpected event network-vif-plugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 for instance with vm_state active and task_state None.
Jan 26 16:22:55 compute-0 nova_compute[239965]: 2026-01-26 16:22:55.605 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:56 compute-0 nova_compute[239965]: 2026-01-26 16:22:56.239 239969 DEBUG nova.compute.manager [req-7e3fee35-c00a-4728-a03a-00ace074ca75 req-69b517ee-ccb5-4930-b3b2-9f826560ae21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Received event network-vif-plugged-47519d16-bf4d-4b36-aa40-e533f3ff9d77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:56 compute-0 nova_compute[239965]: 2026-01-26 16:22:56.240 239969 DEBUG oslo_concurrency.lockutils [req-7e3fee35-c00a-4728-a03a-00ace074ca75 req-69b517ee-ccb5-4930-b3b2-9f826560ae21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:56 compute-0 nova_compute[239965]: 2026-01-26 16:22:56.240 239969 DEBUG oslo_concurrency.lockutils [req-7e3fee35-c00a-4728-a03a-00ace074ca75 req-69b517ee-ccb5-4930-b3b2-9f826560ae21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:56 compute-0 nova_compute[239965]: 2026-01-26 16:22:56.241 239969 DEBUG oslo_concurrency.lockutils [req-7e3fee35-c00a-4728-a03a-00ace074ca75 req-69b517ee-ccb5-4930-b3b2-9f826560ae21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:56 compute-0 nova_compute[239965]: 2026-01-26 16:22:56.241 239969 DEBUG nova.compute.manager [req-7e3fee35-c00a-4728-a03a-00ace074ca75 req-69b517ee-ccb5-4930-b3b2-9f826560ae21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] No waiting events found dispatching network-vif-plugged-47519d16-bf4d-4b36-aa40-e533f3ff9d77 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:22:56 compute-0 nova_compute[239965]: 2026-01-26 16:22:56.242 239969 WARNING nova.compute.manager [req-7e3fee35-c00a-4728-a03a-00ace074ca75 req-69b517ee-ccb5-4930-b3b2-9f826560ae21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Received unexpected event network-vif-plugged-47519d16-bf4d-4b36-aa40-e533f3ff9d77 for instance with vm_state active and task_state None.
Jan 26 16:22:56 compute-0 nova_compute[239965]: 2026-01-26 16:22:56.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:56 compute-0 nova_compute[239965]: 2026-01-26 16:22:56.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:22:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 788 KiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 26 16:22:56 compute-0 ceph-mon[75140]: pgmap v2167: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 788 KiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 26 16:22:57 compute-0 nova_compute[239965]: 2026-01-26 16:22:57.305 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:57 compute-0 nova_compute[239965]: 2026-01-26 16:22:57.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:57 compute-0 nova_compute[239965]: 2026-01-26 16:22:57.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:58 compute-0 nova_compute[239965]: 2026-01-26 16:22:58.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:22:58 compute-0 nova_compute[239965]: 2026-01-26 16:22:58.122 239969 DEBUG nova.compute.manager [req-a2f2a50b-c167-497e-b0e8-59847bb1a395 req-bb77af70-5590-4b1f-9af2-f31f80143b03 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Received event network-changed-47519d16-bf4d-4b36-aa40-e533f3ff9d77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:22:58 compute-0 nova_compute[239965]: 2026-01-26 16:22:58.123 239969 DEBUG nova.compute.manager [req-a2f2a50b-c167-497e-b0e8-59847bb1a395 req-bb77af70-5590-4b1f-9af2-f31f80143b03 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Refreshing instance network info cache due to event network-changed-47519d16-bf4d-4b36-aa40-e533f3ff9d77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:22:58 compute-0 nova_compute[239965]: 2026-01-26 16:22:58.123 239969 DEBUG oslo_concurrency.lockutils [req-a2f2a50b-c167-497e-b0e8-59847bb1a395 req-bb77af70-5590-4b1f-9af2-f31f80143b03 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:22:58 compute-0 nova_compute[239965]: 2026-01-26 16:22:58.123 239969 DEBUG oslo_concurrency.lockutils [req-a2f2a50b-c167-497e-b0e8-59847bb1a395 req-bb77af70-5590-4b1f-9af2-f31f80143b03 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:22:58 compute-0 nova_compute[239965]: 2026-01-26 16:22:58.123 239969 DEBUG nova.network.neutron [req-a2f2a50b-c167-497e-b0e8-59847bb1a395 req-bb77af70-5590-4b1f-9af2-f31f80143b03 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Refreshing network info cache for port 47519d16-bf4d-4b36-aa40-e533f3ff9d77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:22:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:22:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 178 op/s
Jan 26 16:22:58 compute-0 sshd-session[353016]: Invalid user linuxadmin from 209.38.206.249 port 42914
Jan 26 16:22:58 compute-0 ceph-mon[75140]: pgmap v2168: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.2 MiB/s wr, 178 op/s
Jan 26 16:22:58 compute-0 sshd-session[353016]: Connection closed by invalid user linuxadmin 209.38.206.249 port 42914 [preauth]
Jan 26 16:22:59 compute-0 sshd-session[353018]: Invalid user admin from 209.38.206.249 port 60024
Jan 26 16:22:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:59.246 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:59.247 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:22:59.248 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:59 compute-0 sshd-session[353018]: Connection closed by invalid user admin 209.38.206.249 port 60024 [preauth]
Jan 26 16:22:59 compute-0 nova_compute[239965]: 2026-01-26 16:22:59.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:22:59 compute-0 nova_compute[239965]: 2026-01-26 16:22:59.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:22:59 compute-0 nova_compute[239965]: 2026-01-26 16:22:59.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:22:59 compute-0 nova_compute[239965]: 2026-01-26 16:22:59.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:22:59 compute-0 nova_compute[239965]: 2026-01-26 16:22:59.544 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:22:59 compute-0 nova_compute[239965]: 2026-01-26 16:22:59.545 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:23:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2199428541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.090 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "5e597276-b625-4985-8433-360736cd3c79" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.091 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.097 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.113 239969 DEBUG nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:23:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2199428541' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.147 239969 DEBUG nova.network.neutron [req-a2f2a50b-c167-497e-b0e8-59847bb1a395 req-bb77af70-5590-4b1f-9af2-f31f80143b03 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updated VIF entry in instance network info cache for port 47519d16-bf4d-4b36-aa40-e533f3ff9d77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.148 239969 DEBUG nova.network.neutron [req-a2f2a50b-c167-497e-b0e8-59847bb1a395 req-bb77af70-5590-4b1f-9af2-f31f80143b03 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updating instance_info_cache with network_info: [{"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.201 239969 DEBUG oslo_concurrency.lockutils [req-a2f2a50b-c167-497e-b0e8-59847bb1a395 req-bb77af70-5590-4b1f-9af2-f31f80143b03 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.213 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.214 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.217 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.217 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.220 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.220 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.226 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.226 239969 INFO nova.compute.claims [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.308 239969 DEBUG nova.scheduler.client.report [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.333 239969 DEBUG nova.scheduler.client.report [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.333 239969 DEBUG nova.compute.provider_tree [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.349 239969 DEBUG nova.scheduler.client.report [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.368 239969 DEBUG nova.scheduler.client.report [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.419 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.420 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3262MB free_disk=59.92096108850092GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.420 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.424 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 163 op/s
Jan 26 16:23:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:23:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758590053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.983 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:00 compute-0 nova_compute[239965]: 2026-01-26 16:23:00.990 239969 DEBUG nova.compute.provider_tree [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.011 239969 DEBUG nova.scheduler.client.report [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.042 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.044 239969 DEBUG nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.049 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.111 239969 DEBUG nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.112 239969 DEBUG nova.network.neutron [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.129 239969 INFO nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.132 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.133 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.133 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 5e597276-b625-4985-8433-360736cd3c79 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.134 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.134 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:23:01 compute-0 ceph-mon[75140]: pgmap v2169: 305 pgs: 305 active+clean; 213 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 13 KiB/s wr, 163 op/s
Jan 26 16:23:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2758590053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.149 239969 DEBUG nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.206 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.311 239969 DEBUG nova.policy [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.318 239969 DEBUG nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.320 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.321 239969 INFO nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Creating image(s)
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.355 239969 DEBUG nova.storage.rbd_utils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 5e597276-b625-4985-8433-360736cd3c79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.385 239969 DEBUG nova.storage.rbd_utils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 5e597276-b625-4985-8433-360736cd3c79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.418 239969 DEBUG nova.storage.rbd_utils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 5e597276-b625-4985-8433-360736cd3c79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.421 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.489 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.490 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.491 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.491 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.513 239969 DEBUG nova.storage.rbd_utils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 5e597276-b625-4985-8433-360736cd3c79_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.517 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5e597276-b625-4985-8433-360736cd3c79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:23:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3722203318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.798 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.807 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.811 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 5e597276-b625-4985-8433-360736cd3c79_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.294s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.854 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.905 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.906 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.858s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:01 compute-0 nova_compute[239965]: 2026-01-26 16:23:01.913 239969 DEBUG nova.storage.rbd_utils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 5e597276-b625-4985-8433-360736cd3c79_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.011 239969 DEBUG nova.objects.instance [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 5e597276-b625-4985-8433-360736cd3c79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.031 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.031 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Ensure instance console log exists: /var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.032 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.033 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.033 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.068 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "9698a435-3de7-4c8c-9b12-35e22d8d405b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.070 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.095 239969 DEBUG nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:23:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3722203318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.190 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.191 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.200 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.201 239969 INFO nova.compute.claims [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.278 239969 DEBUG nova.network.neutron [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Successfully created port: 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.308 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.357 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 246 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 188 op/s
Jan 26 16:23:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:23:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289399927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.936 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.941 239969 DEBUG nova.compute.provider_tree [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.955 239969 DEBUG nova.scheduler.client.report [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.969 239969 DEBUG nova.network.neutron [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Successfully updated port: 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.986 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.987 239969 DEBUG nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.992 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.992 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:02 compute-0 nova_compute[239965]: 2026-01-26 16:23:02.993 239969 DEBUG nova.network.neutron [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.037 239969 DEBUG nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.038 239969 DEBUG nova.network.neutron [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.053 239969 DEBUG nova.compute.manager [req-b0629cf4-dd61-4457-bbbf-0b5ae8be6bc6 req-559f6b03-0a92-4c75-9fcc-fa226004c0b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received event network-changed-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.053 239969 DEBUG nova.compute.manager [req-b0629cf4-dd61-4457-bbbf-0b5ae8be6bc6 req-559f6b03-0a92-4c75-9fcc-fa226004c0b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Refreshing instance network info cache due to event network-changed-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.054 239969 DEBUG oslo_concurrency.lockutils [req-b0629cf4-dd61-4457-bbbf-0b5ae8be6bc6 req-559f6b03-0a92-4c75-9fcc-fa226004c0b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.059 239969 INFO nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.074 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.082 239969 DEBUG nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:23:03 compute-0 ceph-mon[75140]: pgmap v2170: 305 pgs: 305 active+clean; 246 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 188 op/s
Jan 26 16:23:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/289399927' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.198 239969 DEBUG nova.network.neutron [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.212 239969 DEBUG nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.213 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.214 239969 INFO nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Creating image(s)
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.242 239969 DEBUG nova.storage.rbd_utils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.271 239969 DEBUG nova.storage.rbd_utils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.294 239969 DEBUG nova.storage.rbd_utils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.297 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.336 239969 DEBUG nova.policy [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.380 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.381 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.381 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.381 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.405 239969 DEBUG nova.storage.rbd_utils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.409 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.722 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.782 239969 DEBUG nova.storage.rbd_utils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.859 239969 DEBUG nova.objects.instance [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid 9698a435-3de7-4c8c-9b12-35e22d8d405b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.874 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.874 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Ensure instance console log exists: /var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.875 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.875 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:03 compute-0 nova_compute[239965]: 2026-01-26 16:23:03.875 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:04 compute-0 podman[353441]: 2026-01-26 16:23:04.381761015 +0000 UTC m=+0.059449367 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:23:04 compute-0 podman[353442]: 2026-01-26 16:23:04.414112757 +0000 UTC m=+0.091879811 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:23:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 246 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 137 op/s
Jan 26 16:23:04 compute-0 ceph-mon[75140]: pgmap v2171: 305 pgs: 305 active+clean; 246 MiB data, 908 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 137 op/s
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.232 239969 DEBUG nova.network.neutron [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Updating instance_info_cache with network_info: [{"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.381 239969 DEBUG nova.network.neutron [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Successfully created port: 4eaabdc7-c0a3-4002-8812-a572315f3666 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.398 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.399 239969 DEBUG nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Instance network_info: |[{"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.399 239969 DEBUG oslo_concurrency.lockutils [req-b0629cf4-dd61-4457-bbbf-0b5ae8be6bc6 req-559f6b03-0a92-4c75-9fcc-fa226004c0b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.399 239969 DEBUG nova.network.neutron [req-b0629cf4-dd61-4457-bbbf-0b5ae8be6bc6 req-559f6b03-0a92-4c75-9fcc-fa226004c0b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Refreshing network info cache for port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.402 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Start _get_guest_xml network_info=[{"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.407 239969 WARNING nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.411 239969 DEBUG nova.virt.libvirt.host [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.412 239969 DEBUG nova.virt.libvirt.host [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.419 239969 DEBUG nova.virt.libvirt.host [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.420 239969 DEBUG nova.virt.libvirt.host [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.420 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.420 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.421 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.421 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.421 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.421 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.421 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.422 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.422 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.422 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.422 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.423 239969 DEBUG nova.virt.hardware [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:23:05 compute-0 nova_compute[239965]: 2026-01-26 16:23:05.426 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:23:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/509869241' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.026 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.052 239969 DEBUG nova.storage.rbd_utils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 5e597276-b625-4985-8433-360736cd3c79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.057 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/509869241' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.445 239969 DEBUG nova.network.neutron [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Successfully updated port: 4eaabdc7-c0a3-4002-8812-a572315f3666 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.469 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-9698a435-3de7-4c8c-9b12-35e22d8d405b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.469 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-9698a435-3de7-4c8c-9b12-35e22d8d405b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.470 239969 DEBUG nova.network.neutron [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.546 239969 DEBUG nova.compute.manager [req-565d1ca1-2c87-47c2-924b-81cc0476381b req-3fe687ca-7ddf-4768-af1e-c425fa4c5db0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Received event network-changed-4eaabdc7-c0a3-4002-8812-a572315f3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.546 239969 DEBUG nova.compute.manager [req-565d1ca1-2c87-47c2-924b-81cc0476381b req-3fe687ca-7ddf-4768-af1e-c425fa4c5db0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Refreshing instance network info cache due to event network-changed-4eaabdc7-c0a3-4002-8812-a572315f3666. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.546 239969 DEBUG oslo_concurrency.lockutils [req-565d1ca1-2c87-47c2-924b-81cc0476381b req-3fe687ca-7ddf-4768-af1e-c425fa4c5db0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-9698a435-3de7-4c8c-9b12-35e22d8d405b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:23:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425554746' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 284 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 165 op/s
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.568 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.570 239969 DEBUG nova.virt.libvirt.vif [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:22:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1725827159',display_name='tempest-TestGettingAddress-server-1725827159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1725827159',id=128,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCC7xQvyZ3R4fi4Uk/Uk6Pa6MLruwnCjAWUP+8KMyXPkmagp3JbDxcXWQo/28wA3I75JT31hYs4VkRRPhBdGj71Xnfc2t9PlBlyrIJIqEAvEv7a7zDeqBbmQyfNsQiXQAw==',key_name='tempest-TestGettingAddress-1047513795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-07uj0sou',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:23:01Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=5e597276-b625-4985-8433-360736cd3c79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.570 239969 DEBUG nova.network.os_vif_util [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.571 239969 DEBUG nova.network.os_vif_util [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:03:01,bridge_name='br-int',has_traffic_filtering=True,id=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743ec95a-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.572 239969 DEBUG nova.objects.instance [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 5e597276-b625-4985-8433-360736cd3c79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.585 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <uuid>5e597276-b625-4985-8433-360736cd3c79</uuid>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <name>instance-00000080</name>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-1725827159</nova:name>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:23:05</nova:creationTime>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <nova:port uuid="743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2">
Jan 26 16:23:06 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe16:301" ipVersion="6"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <system>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <entry name="serial">5e597276-b625-4985-8433-360736cd3c79</entry>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <entry name="uuid">5e597276-b625-4985-8433-360736cd3c79</entry>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     </system>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <os>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   </os>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <features>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   </features>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5e597276-b625-4985-8433-360736cd3c79_disk">
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5e597276-b625-4985-8433-360736cd3c79_disk.config">
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:23:06 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:16:03:01"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <target dev="tap743ec95a-b5"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79/console.log" append="off"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <video>
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     </video>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:23:06 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:23:06 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:23:06 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:23:06 compute-0 nova_compute[239965]: </domain>
Jan 26 16:23:06 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.586 239969 DEBUG nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Preparing to wait for external event network-vif-plugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.586 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "5e597276-b625-4985-8433-360736cd3c79-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.586 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.586 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.587 239969 DEBUG nova.virt.libvirt.vif [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:22:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1725827159',display_name='tempest-TestGettingAddress-server-1725827159',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1725827159',id=128,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCC7xQvyZ3R4fi4Uk/Uk6Pa6MLruwnCjAWUP+8KMyXPkmagp3JbDxcXWQo/28wA3I75JT31hYs4VkRRPhBdGj71Xnfc2t9PlBlyrIJIqEAvEv7a7zDeqBbmQyfNsQiXQAw==',key_name='tempest-TestGettingAddress-1047513795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-07uj0sou',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:23:01Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=5e597276-b625-4985-8433-360736cd3c79,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.587 239969 DEBUG nova.network.os_vif_util [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.588 239969 DEBUG nova.network.os_vif_util [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:16:03:01,bridge_name='br-int',has_traffic_filtering=True,id=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743ec95a-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.588 239969 DEBUG os_vif [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:03:01,bridge_name='br-int',has_traffic_filtering=True,id=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743ec95a-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.588 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.589 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.589 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.592 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.592 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap743ec95a-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.592 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap743ec95a-b5, col_values=(('external_ids', {'iface-id': '743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:16:03:01', 'vm-uuid': '5e597276-b625-4985-8433-360736cd3c79'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.593 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:06 compute-0 NetworkManager[48954]: <info>  [1769444586.5946] manager: (tap743ec95a-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.596 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.600 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.601 239969 INFO os_vif [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:16:03:01,bridge_name='br-int',has_traffic_filtering=True,id=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743ec95a-b5')
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.639 239969 DEBUG nova.network.neutron [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.657 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.657 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.657 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:16:03:01, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.658 239969 INFO nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Using config drive
Jan 26 16:23:06 compute-0 nova_compute[239965]: 2026-01-26 16:23:06.686 239969 DEBUG nova.storage.rbd_utils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 5e597276-b625-4985-8433-360736cd3c79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2425554746' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:07 compute-0 ceph-mon[75140]: pgmap v2172: 305 pgs: 305 active+clean; 284 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 165 op/s
Jan 26 16:23:07 compute-0 ovn_controller[146046]: 2026-01-26T16:23:07Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:34:7e:f9 10.100.0.6
Jan 26 16:23:07 compute-0 ovn_controller[146046]: 2026-01-26T16:23:07Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:34:7e:f9 10.100.0.6
Jan 26 16:23:07 compute-0 nova_compute[239965]: 2026-01-26 16:23:07.748 239969 INFO nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Creating config drive at /var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79/disk.config
Jan 26 16:23:07 compute-0 nova_compute[239965]: 2026-01-26 16:23:07.754 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp606ozme7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:07 compute-0 nova_compute[239965]: 2026-01-26 16:23:07.837 239969 DEBUG nova.network.neutron [req-b0629cf4-dd61-4457-bbbf-0b5ae8be6bc6 req-559f6b03-0a92-4c75-9fcc-fa226004c0b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Updated VIF entry in instance network info cache for port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:23:07 compute-0 nova_compute[239965]: 2026-01-26 16:23:07.838 239969 DEBUG nova.network.neutron [req-b0629cf4-dd61-4457-bbbf-0b5ae8be6bc6 req-559f6b03-0a92-4c75-9fcc-fa226004c0b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Updating instance_info_cache with network_info: [{"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:07 compute-0 nova_compute[239965]: 2026-01-26 16:23:07.856 239969 DEBUG oslo_concurrency.lockutils [req-b0629cf4-dd61-4457-bbbf-0b5ae8be6bc6 req-559f6b03-0a92-4c75-9fcc-fa226004c0b1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:07 compute-0 nova_compute[239965]: 2026-01-26 16:23:07.902 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp606ozme7" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:07 compute-0 nova_compute[239965]: 2026-01-26 16:23:07.943 239969 DEBUG nova.storage.rbd_utils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 5e597276-b625-4985-8433-360736cd3c79_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:07 compute-0 nova_compute[239965]: 2026-01-26 16:23:07.950 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79/disk.config 5e597276-b625-4985-8433-360736cd3c79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.077 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.154 239969 DEBUG oslo_concurrency.processutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79/disk.config 5e597276-b625-4985-8433-360736cd3c79_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.204s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.155 239969 INFO nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Deleting local config drive /var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79/disk.config because it was imported into RBD.
Jan 26 16:23:08 compute-0 kernel: tap743ec95a-b5: entered promiscuous mode
Jan 26 16:23:08 compute-0 NetworkManager[48954]: <info>  [1769444588.2344] manager: (tap743ec95a-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/556)
Jan 26 16:23:08 compute-0 ovn_controller[146046]: 2026-01-26T16:23:08Z|01336|binding|INFO|Claiming lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 for this chassis.
Jan 26 16:23:08 compute-0 ovn_controller[146046]: 2026-01-26T16:23:08Z|01337|binding|INFO|743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2: Claiming fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.237 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.247 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301'], port_security=['fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe16:301/64', 'neutron:device_id': '5e597276-b625-4985-8433-360736cd3c79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bbc6da3-4889-49ad-be6a-5c45e912baea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59cf9173-0130-4701-8cd2-ef13c0f4cee0, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.250 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 in datapath 71a6c2b4-c95b-48d6-bc0d-8a809d49273b bound to our chassis
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.254 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71a6c2b4-c95b-48d6-bc0d-8a809d49273b
Jan 26 16:23:08 compute-0 ovn_controller[146046]: 2026-01-26T16:23:08Z|01338|binding|INFO|Setting lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 ovn-installed in OVS
Jan 26 16:23:08 compute-0 ovn_controller[146046]: 2026-01-26T16:23:08Z|01339|binding|INFO|Setting lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 up in Southbound
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.269 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.271 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.273 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4ef98d-754c-44c7-9c81-a08a2388a9d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.275 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap71a6c2b4-c1 in ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.278 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap71a6c2b4-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.278 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[19c6eb5c-5ac0-45b5-ad68-62ff11ae59a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.279 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bd440f56-b6de-42a9-8f1a-6730f7a26e47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 systemd-machined[208061]: New machine qemu-155-instance-00000080.
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.301 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[408dd4b1-134f-4baa-9911-5062ff096a22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 systemd[1]: Started Virtual Machine qemu-155-instance-00000080.
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.335 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a1afbacd-95b1-4562-bccd-fc7966493090]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 systemd-udevd[353624]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:23:08 compute-0 NetworkManager[48954]: <info>  [1769444588.3565] device (tap743ec95a-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:23:08 compute-0 NetworkManager[48954]: <info>  [1769444588.3578] device (tap743ec95a-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.381 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c2f62976-3e6d-4e83-93fa-76852b40373b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.389 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cea05d38-1d48-4378-b59d-e080ff03fd57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 NetworkManager[48954]: <info>  [1769444588.3905] manager: (tap71a6c2b4-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/557)
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.425 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e227b8fe-ca3f-4ae2-ab21-239520499c4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.428 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f30dc77e-f248-4dbe-b4a3-d1e052d130ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 NetworkManager[48954]: <info>  [1769444588.4586] device (tap71a6c2b4-c0): carrier: link connected
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.467 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7148974d-d4ab-4bc8-84e8-7aea79ed2fd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.491 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d689c666-f7ab-4c48-b3c5-bd2508509a3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71a6c2b4-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:1b:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607926, 'reachable_time': 34510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353654, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.511 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0ae22e-e842-4556-8125-3d9fc0bd303e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:1bcc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607926, 'tstamp': 607926}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353655, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.536 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96397c84-3e46-4894-8f25-822e1b0b52cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71a6c2b4-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:1b:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607926, 'reachable_time': 34510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 353656, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 337 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 178 op/s
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.574 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c0de444-4b0b-498d-a606-f34df104f606]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.648 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d31febbd-24dc-4922-8c65-bc2dd26a905c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.650 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71a6c2b4-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.651 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.652 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71a6c2b4-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.654 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:08 compute-0 NetworkManager[48954]: <info>  [1769444588.6561] manager: (tap71a6c2b4-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/558)
Jan 26 16:23:08 compute-0 kernel: tap71a6c2b4-c0: entered promiscuous mode
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.658 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.661 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71a6c2b4-c0, col_values=(('external_ids', {'iface-id': '29a5989d-1915-41bb-983b-b3c73bba2262'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.663 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:08 compute-0 ovn_controller[146046]: 2026-01-26T16:23:08Z|01340|binding|INFO|Releasing lport 29a5989d-1915-41bb-983b-b3c73bba2262 from this chassis (sb_readonly=0)
Jan 26 16:23:08 compute-0 ceph-mon[75140]: pgmap v2173: 305 pgs: 305 active+clean; 337 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.6 MiB/s wr, 178 op/s
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.691 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.696 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/71a6c2b4-c95b-48d6-bc0d-8a809d49273b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/71a6c2b4-c95b-48d6-bc0d-8a809d49273b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.696 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.697 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f2b893-4acf-4e43-956b-432c8e5c1fd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.698 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-71a6c2b4-c95b-48d6-bc0d-8a809d49273b
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/71a6c2b4-c95b-48d6-bc0d-8a809d49273b.pid.haproxy
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 71a6c2b4-c95b-48d6-bc0d-8a809d49273b
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:23:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:08.698 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'env', 'PROCESS_TAG=haproxy-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/71a6c2b4-c95b-48d6-bc0d-8a809d49273b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.859 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444588.8590477, 5e597276-b625-4985-8433-360736cd3c79 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.860 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] VM Started (Lifecycle Event)
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.881 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.885 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444588.86024, 5e597276-b625-4985-8433-360736cd3c79 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.886 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] VM Paused (Lifecycle Event)
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.903 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.911 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:23:08 compute-0 nova_compute[239965]: 2026-01-26 16:23:08.944 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:23:09 compute-0 podman[353730]: 2026-01-26 16:23:09.14434803 +0000 UTC m=+0.059189801 container create a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:23:09 compute-0 systemd[1]: Started libpod-conmon-a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72.scope.
Jan 26 16:23:09 compute-0 podman[353730]: 2026-01-26 16:23:09.111409404 +0000 UTC m=+0.026251245 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:23:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e664746d42975844e4cc59b7ab1ef766ba2e110fb63b3fc3964de4c0e2f0a20f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:09 compute-0 podman[353730]: 2026-01-26 16:23:09.239049239 +0000 UTC m=+0.153891070 container init a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:23:09 compute-0 podman[353730]: 2026-01-26 16:23:09.249489035 +0000 UTC m=+0.164330826 container start a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 16:23:09 compute-0 neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b[353746]: [NOTICE]   (353750) : New worker (353752) forked
Jan 26 16:23:09 compute-0 neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b[353746]: [NOTICE]   (353750) : Loading success.
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.810 239969 DEBUG nova.compute.manager [req-51781af0-67cf-4c3c-8742-0ae54d627aa4 req-e515af1d-6686-4629-b811-8d8063d59a84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received event network-vif-plugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.810 239969 DEBUG oslo_concurrency.lockutils [req-51781af0-67cf-4c3c-8742-0ae54d627aa4 req-e515af1d-6686-4629-b811-8d8063d59a84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5e597276-b625-4985-8433-360736cd3c79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.810 239969 DEBUG oslo_concurrency.lockutils [req-51781af0-67cf-4c3c-8742-0ae54d627aa4 req-e515af1d-6686-4629-b811-8d8063d59a84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.811 239969 DEBUG oslo_concurrency.lockutils [req-51781af0-67cf-4c3c-8742-0ae54d627aa4 req-e515af1d-6686-4629-b811-8d8063d59a84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.811 239969 DEBUG nova.compute.manager [req-51781af0-67cf-4c3c-8742-0ae54d627aa4 req-e515af1d-6686-4629-b811-8d8063d59a84 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Processing event network-vif-plugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.812 239969 DEBUG nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.816 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444589.81569, 5e597276-b625-4985-8433-360736cd3c79 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.817 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] VM Resumed (Lifecycle Event)
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.820 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.824 239969 INFO nova.virt.libvirt.driver [-] [instance: 5e597276-b625-4985-8433-360736cd3c79] Instance spawned successfully.
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.825 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.853 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.863 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.869 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.870 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.871 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.872 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.872 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.874 239969 DEBUG nova.virt.libvirt.driver [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.880 239969 DEBUG nova.network.neutron [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Updating instance_info_cache with network_info: [{"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.885 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.933 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-9698a435-3de7-4c8c-9b12-35e22d8d405b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.933 239969 DEBUG nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Instance network_info: |[{"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.935 239969 DEBUG oslo_concurrency.lockutils [req-565d1ca1-2c87-47c2-924b-81cc0476381b req-3fe687ca-7ddf-4768-af1e-c425fa4c5db0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-9698a435-3de7-4c8c-9b12-35e22d8d405b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.935 239969 DEBUG nova.network.neutron [req-565d1ca1-2c87-47c2-924b-81cc0476381b req-3fe687ca-7ddf-4768-af1e-c425fa4c5db0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Refreshing network info cache for port 4eaabdc7-c0a3-4002-8812-a572315f3666 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.940 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Start _get_guest_xml network_info=[{"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.947 239969 WARNING nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.960 239969 INFO nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Took 8.64 seconds to spawn the instance on the hypervisor.
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.960 239969 DEBUG nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.962 239969 DEBUG nova.virt.libvirt.host [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.963 239969 DEBUG nova.virt.libvirt.host [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.978 239969 DEBUG nova.virt.libvirt.host [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.979 239969 DEBUG nova.virt.libvirt.host [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.980 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.980 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.981 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.981 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.982 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.982 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.983 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.983 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.984 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.984 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.984 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.985 239969 DEBUG nova.virt.hardware [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:23:09 compute-0 nova_compute[239965]: 2026-01-26 16:23:09.989 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:10 compute-0 nova_compute[239965]: 2026-01-26 16:23:10.174 239969 INFO nova.compute.manager [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Took 9.98 seconds to build instance.
Jan 26 16:23:10 compute-0 nova_compute[239965]: 2026-01-26 16:23:10.196 239969 DEBUG oslo_concurrency.lockutils [None req-bbbec3c3-ac4f-4121-b322-db076a1cb128 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 337 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 5.6 MiB/s wr, 107 op/s
Jan 26 16:23:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:23:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/529002954' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:10 compute-0 ceph-mon[75140]: pgmap v2174: 305 pgs: 305 active+clean; 337 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 5.6 MiB/s wr, 107 op/s
Jan 26 16:23:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/529002954' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:10 compute-0 nova_compute[239965]: 2026-01-26 16:23:10.669 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.680s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:10 compute-0 nova_compute[239965]: 2026-01-26 16:23:10.689 239969 DEBUG nova.storage.rbd_utils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:10 compute-0 nova_compute[239965]: 2026-01-26 16:23:10.693 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:23:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838303233' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.256 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.259 239969 DEBUG nova.virt.libvirt.vif [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:23:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1800247682',display_name='tempest-TestNetworkBasicOps-server-1800247682',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1800247682',id=129,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEACH7YklzsdBuWbdkWITsEmcRXlPYbihr7xRwBSUihcxENNKlih1KFxNYArkssWuX7BJ2CfYdz+thoC/jrYHOR6D9Pi/VQcav/0JGduLKBpNRQX/wG2Yvccg4ePvtKPg==',key_name='tempest-TestNetworkBasicOps-1458569563',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-h90xbs63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:23:03Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=9698a435-3de7-4c8c-9b12-35e22d8d405b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.260 239969 DEBUG nova.network.os_vif_util [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.262 239969 DEBUG nova.network.os_vif_util [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:b5:7c,bridge_name='br-int',has_traffic_filtering=True,id=4eaabdc7-c0a3-4002-8812-a572315f3666,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eaabdc7-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.264 239969 DEBUG nova.objects.instance [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 9698a435-3de7-4c8c-9b12-35e22d8d405b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.287 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <uuid>9698a435-3de7-4c8c-9b12-35e22d8d405b</uuid>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <name>instance-00000081</name>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-1800247682</nova:name>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:23:09</nova:creationTime>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <nova:port uuid="4eaabdc7-c0a3-4002-8812-a572315f3666">
Jan 26 16:23:11 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <system>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <entry name="serial">9698a435-3de7-4c8c-9b12-35e22d8d405b</entry>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <entry name="uuid">9698a435-3de7-4c8c-9b12-35e22d8d405b</entry>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     </system>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <os>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   </os>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <features>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   </features>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9698a435-3de7-4c8c-9b12-35e22d8d405b_disk">
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/9698a435-3de7-4c8c-9b12-35e22d8d405b_disk.config">
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:23:11 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:80:b5:7c"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <target dev="tap4eaabdc7-c0"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b/console.log" append="off"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <video>
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     </video>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:23:11 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:23:11 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:23:11 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:23:11 compute-0 nova_compute[239965]: </domain>
Jan 26 16:23:11 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.289 239969 DEBUG nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Preparing to wait for external event network-vif-plugged-4eaabdc7-c0a3-4002-8812-a572315f3666 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.289 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.290 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.290 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.291 239969 DEBUG nova.virt.libvirt.vif [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:23:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1800247682',display_name='tempest-TestNetworkBasicOps-server-1800247682',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1800247682',id=129,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEACH7YklzsdBuWbdkWITsEmcRXlPYbihr7xRwBSUihcxENNKlih1KFxNYArkssWuX7BJ2CfYdz+thoC/jrYHOR6D9Pi/VQcav/0JGduLKBpNRQX/wG2Yvccg4ePvtKPg==',key_name='tempest-TestNetworkBasicOps-1458569563',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-h90xbs63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:23:03Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=9698a435-3de7-4c8c-9b12-35e22d8d405b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.291 239969 DEBUG nova.network.os_vif_util [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.292 239969 DEBUG nova.network.os_vif_util [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:b5:7c,bridge_name='br-int',has_traffic_filtering=True,id=4eaabdc7-c0a3-4002-8812-a572315f3666,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eaabdc7-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.293 239969 DEBUG os_vif [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:b5:7c,bridge_name='br-int',has_traffic_filtering=True,id=4eaabdc7-c0a3-4002-8812-a572315f3666,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eaabdc7-c0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.293 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.294 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.295 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.298 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4eaabdc7-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.299 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4eaabdc7-c0, col_values=(('external_ids', {'iface-id': '4eaabdc7-c0a3-4002-8812-a572315f3666', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:80:b5:7c', 'vm-uuid': '9698a435-3de7-4c8c-9b12-35e22d8d405b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.300 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:11 compute-0 NetworkManager[48954]: <info>  [1769444591.3016] manager: (tap4eaabdc7-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/559)
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.308 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.310 239969 INFO os_vif [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:b5:7c,bridge_name='br-int',has_traffic_filtering=True,id=4eaabdc7-c0a3-4002-8812-a572315f3666,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eaabdc7-c0')
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.365 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.365 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.366 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:80:b5:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.366 239969 INFO nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Using config drive
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.391 239969 DEBUG nova.storage.rbd_utils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.398 239969 DEBUG nova.network.neutron [req-565d1ca1-2c87-47c2-924b-81cc0476381b req-3fe687ca-7ddf-4768-af1e-c425fa4c5db0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Updated VIF entry in instance network info cache for port 4eaabdc7-c0a3-4002-8812-a572315f3666. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.399 239969 DEBUG nova.network.neutron [req-565d1ca1-2c87-47c2-924b-81cc0476381b req-3fe687ca-7ddf-4768-af1e-c425fa4c5db0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Updating instance_info_cache with network_info: [{"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.431 239969 DEBUG oslo_concurrency.lockutils [req-565d1ca1-2c87-47c2-924b-81cc0476381b req-3fe687ca-7ddf-4768-af1e-c425fa4c5db0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-9698a435-3de7-4c8c-9b12-35e22d8d405b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2838303233' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.733 239969 INFO nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Creating config drive at /var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b/disk.config
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.737 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1yu7f8lc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.899 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1yu7f8lc" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.940 239969 DEBUG nova.storage.rbd_utils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:11 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.946 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b/disk.config 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:11.999 239969 DEBUG nova.compute.manager [req-9115a7c3-e03a-4f24-aaa6-fa57eacfe850 req-9af0b72d-5f67-427f-8b0e-3dc6cbc3c398 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received event network-vif-plugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.001 239969 DEBUG oslo_concurrency.lockutils [req-9115a7c3-e03a-4f24-aaa6-fa57eacfe850 req-9af0b72d-5f67-427f-8b0e-3dc6cbc3c398 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5e597276-b625-4985-8433-360736cd3c79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.001 239969 DEBUG oslo_concurrency.lockutils [req-9115a7c3-e03a-4f24-aaa6-fa57eacfe850 req-9af0b72d-5f67-427f-8b0e-3dc6cbc3c398 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.002 239969 DEBUG oslo_concurrency.lockutils [req-9115a7c3-e03a-4f24-aaa6-fa57eacfe850 req-9af0b72d-5f67-427f-8b0e-3dc6cbc3c398 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.002 239969 DEBUG nova.compute.manager [req-9115a7c3-e03a-4f24-aaa6-fa57eacfe850 req-9af0b72d-5f67-427f-8b0e-3dc6cbc3c398 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] No waiting events found dispatching network-vif-plugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.003 239969 WARNING nova.compute.manager [req-9115a7c3-e03a-4f24-aaa6-fa57eacfe850 req-9af0b72d-5f67-427f-8b0e-3dc6cbc3c398 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received unexpected event network-vif-plugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 for instance with vm_state active and task_state None.
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.118 239969 DEBUG oslo_concurrency.processutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b/disk.config 9698a435-3de7-4c8c-9b12-35e22d8d405b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.119 239969 INFO nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Deleting local config drive /var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b/disk.config because it was imported into RBD.
Jan 26 16:23:12 compute-0 kernel: tap4eaabdc7-c0: entered promiscuous mode
Jan 26 16:23:12 compute-0 NetworkManager[48954]: <info>  [1769444592.2002] manager: (tap4eaabdc7-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/560)
Jan 26 16:23:12 compute-0 ovn_controller[146046]: 2026-01-26T16:23:12Z|01341|binding|INFO|Claiming lport 4eaabdc7-c0a3-4002-8812-a572315f3666 for this chassis.
Jan 26 16:23:12 compute-0 ovn_controller[146046]: 2026-01-26T16:23:12Z|01342|binding|INFO|4eaabdc7-c0a3-4002-8812-a572315f3666: Claiming fa:16:3e:80:b5:7c 10.100.0.19
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.201 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.208 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:b5:7c 10.100.0.19'], port_security=['fa:16:3e:80:b5:7c 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '9698a435-3de7-4c8c-9b12-35e22d8d405b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2949377-7667-47c8-b75f-54f940d9a0e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=299c72d0-590b-465a-ba1c-80ce946f37ca, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4eaabdc7-c0a3-4002-8812-a572315f3666) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.211 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4eaabdc7-c0a3-4002-8812-a572315f3666 in datapath 86f67c73-31b4-4f89-93c5-3a2f6b391b21 bound to our chassis
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.213 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 86f67c73-31b4-4f89-93c5-3a2f6b391b21
Jan 26 16:23:12 compute-0 ovn_controller[146046]: 2026-01-26T16:23:12Z|01343|binding|INFO|Setting lport 4eaabdc7-c0a3-4002-8812-a572315f3666 ovn-installed in OVS
Jan 26 16:23:12 compute-0 ovn_controller[146046]: 2026-01-26T16:23:12Z|01344|binding|INFO|Setting lport 4eaabdc7-c0a3-4002-8812-a572315f3666 up in Southbound
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.242 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.243 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a65c727c-6861-48f9-9bbf-33d6fe8c85ac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.245 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:12 compute-0 systemd-machined[208061]: New machine qemu-156-instance-00000081.
Jan 26 16:23:12 compute-0 systemd[1]: Started Virtual Machine qemu-156-instance-00000081.
Jan 26 16:23:12 compute-0 systemd-udevd[353898]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:23:12 compute-0 NetworkManager[48954]: <info>  [1769444592.2839] device (tap4eaabdc7-c0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:23:12 compute-0 NetworkManager[48954]: <info>  [1769444592.2854] device (tap4eaabdc7-c0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.295 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[6feaadff-9203-4d7d-a8be-52004ef0f3b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.299 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c0a5d339-e052-4c73-bc1a-3bce3134c995]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.335 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2bc052e8-d416-4fc8-8a27-c1921b926b43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.358 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c94650-d8c0-4c9d-b665-ef02ca0b5b90]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86f67c73-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:83:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606338, 'reachable_time': 33945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353910, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.377 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b5753c4a-3b44-46c1-925f-aafe95ff50db]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap86f67c73-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606351, 'tstamp': 606351}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353911, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap86f67c73-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606355, 'tstamp': 606355}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353911, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.379 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86f67c73-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:12 compute-0 nova_compute[239965]: 2026-01-26 16:23:12.382 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.383 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86f67c73-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.383 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.384 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap86f67c73-30, col_values=(('external_ids', {'iface-id': '8154df63-0e6b-4a3f-8b4f-601fcbe2af2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:12.384 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 192 op/s
Jan 26 16:23:12 compute-0 ceph-mon[75140]: pgmap v2175: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 192 op/s
Jan 26 16:23:13 compute-0 nova_compute[239965]: 2026-01-26 16:23:13.019 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444593.0186596, 9698a435-3de7-4c8c-9b12-35e22d8d405b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:13 compute-0 nova_compute[239965]: 2026-01-26 16:23:13.020 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] VM Started (Lifecycle Event)
Jan 26 16:23:13 compute-0 nova_compute[239965]: 2026-01-26 16:23:13.039 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:13 compute-0 nova_compute[239965]: 2026-01-26 16:23:13.044 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444593.0205204, 9698a435-3de7-4c8c-9b12-35e22d8d405b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:13 compute-0 nova_compute[239965]: 2026-01-26 16:23:13.044 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] VM Paused (Lifecycle Event)
Jan 26 16:23:13 compute-0 nova_compute[239965]: 2026-01-26 16:23:13.060 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:13 compute-0 nova_compute[239965]: 2026-01-26 16:23:13.065 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:23:13 compute-0 nova_compute[239965]: 2026-01-26 16:23:13.089 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:23:13 compute-0 nova_compute[239965]: 2026-01-26 16:23:13.120 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.017 239969 DEBUG nova.compute.manager [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Received event network-vif-plugged-4eaabdc7-c0a3-4002-8812-a572315f3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.017 239969 DEBUG oslo_concurrency.lockutils [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.018 239969 DEBUG oslo_concurrency.lockutils [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.018 239969 DEBUG oslo_concurrency.lockutils [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.019 239969 DEBUG nova.compute.manager [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Processing event network-vif-plugged-4eaabdc7-c0a3-4002-8812-a572315f3666 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.019 239969 DEBUG nova.compute.manager [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Received event network-vif-plugged-4eaabdc7-c0a3-4002-8812-a572315f3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.019 239969 DEBUG oslo_concurrency.lockutils [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.020 239969 DEBUG oslo_concurrency.lockutils [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.020 239969 DEBUG oslo_concurrency.lockutils [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.021 239969 DEBUG nova.compute.manager [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] No waiting events found dispatching network-vif-plugged-4eaabdc7-c0a3-4002-8812-a572315f3666 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.021 239969 WARNING nova.compute.manager [req-6e51d44e-dc5d-45c7-bf27-a52daf393b53 req-60fc13ac-46e4-42f4-8dd4-e989e94029d4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Received unexpected event network-vif-plugged-4eaabdc7-c0a3-4002-8812-a572315f3666 for instance with vm_state building and task_state spawning.
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.022 239969 DEBUG nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.026 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444594.0263393, 9698a435-3de7-4c8c-9b12-35e22d8d405b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.027 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] VM Resumed (Lifecycle Event)
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.034 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.038 239969 INFO nova.virt.libvirt.driver [-] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Instance spawned successfully.
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.039 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.192 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.198 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.199 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.200 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.201 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.202 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.203 239969 DEBUG nova.virt.libvirt.driver [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.212 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.243 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.264 239969 INFO nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Took 11.05 seconds to spawn the instance on the hypervisor.
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.264 239969 DEBUG nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.321 239969 INFO nova.compute.manager [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Took 12.16 seconds to build instance.
Jan 26 16:23:14 compute-0 nova_compute[239965]: 2026-01-26 16:23:14.336 239969 DEBUG oslo_concurrency.lockutils [None req-95747814-c535-4530-9ad0-8d6b2018a473 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 167 op/s
Jan 26 16:23:14 compute-0 ceph-mon[75140]: pgmap v2176: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.3 MiB/s wr, 167 op/s
Jan 26 16:23:15 compute-0 nova_compute[239965]: 2026-01-26 16:23:15.256 239969 DEBUG nova.compute.manager [req-edc648b5-4850-41db-9499-399a18b482ee req-dd9dfdc4-4dff-466e-914e-590e97b882a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received event network-changed-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:15 compute-0 nova_compute[239965]: 2026-01-26 16:23:15.256 239969 DEBUG nova.compute.manager [req-edc648b5-4850-41db-9499-399a18b482ee req-dd9dfdc4-4dff-466e-914e-590e97b882a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Refreshing instance network info cache due to event network-changed-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:23:15 compute-0 nova_compute[239965]: 2026-01-26 16:23:15.256 239969 DEBUG oslo_concurrency.lockutils [req-edc648b5-4850-41db-9499-399a18b482ee req-dd9dfdc4-4dff-466e-914e-590e97b882a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:15 compute-0 nova_compute[239965]: 2026-01-26 16:23:15.256 239969 DEBUG oslo_concurrency.lockutils [req-edc648b5-4850-41db-9499-399a18b482ee req-dd9dfdc4-4dff-466e-914e-590e97b882a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:15 compute-0 nova_compute[239965]: 2026-01-26 16:23:15.256 239969 DEBUG nova.network.neutron [req-edc648b5-4850-41db-9499-399a18b482ee req-dd9dfdc4-4dff-466e-914e-590e97b882a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Refreshing network info cache for port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:23:16 compute-0 nova_compute[239965]: 2026-01-26 16:23:16.301 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Jan 26 16:23:16 compute-0 ceph-mon[75140]: pgmap v2177: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Jan 26 16:23:17 compute-0 nova_compute[239965]: 2026-01-26 16:23:17.902 239969 DEBUG nova.network.neutron [req-edc648b5-4850-41db-9499-399a18b482ee req-dd9dfdc4-4dff-466e-914e-590e97b882a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Updated VIF entry in instance network info cache for port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:23:17 compute-0 nova_compute[239965]: 2026-01-26 16:23:17.903 239969 DEBUG nova.network.neutron [req-edc648b5-4850-41db-9499-399a18b482ee req-dd9dfdc4-4dff-466e-914e-590e97b882a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Updating instance_info_cache with network_info: [{"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:17 compute-0 nova_compute[239965]: 2026-01-26 16:23:17.931 239969 DEBUG oslo_concurrency.lockutils [req-edc648b5-4850-41db-9499-399a18b482ee req-dd9dfdc4-4dff-466e-914e-590e97b882a2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:18 compute-0 nova_compute[239965]: 2026-01-26 16:23:18.122 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.7 MiB/s wr, 212 op/s
Jan 26 16:23:18 compute-0 ceph-mon[75140]: pgmap v2178: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.7 MiB/s wr, 212 op/s
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.068 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "3a113ae2-e017-46ee-b5d9-111ff761618b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.069 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.091 239969 DEBUG nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.173 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.174 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.180 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.180 239969 INFO nova.compute.claims [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.341 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:19 compute-0 sshd-session[353955]: Invalid user deploy from 209.38.206.249 port 51616
Jan 26 16:23:19 compute-0 sshd-session[353955]: Connection closed by invalid user deploy 209.38.206.249 port 51616 [preauth]
Jan 26 16:23:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:23:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3326095182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.870 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.880 239969 DEBUG nova.compute.provider_tree [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.904 239969 DEBUG nova.scheduler.client.report [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:23:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3326095182' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.932 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:19 compute-0 nova_compute[239965]: 2026-01-26 16:23:19.933 239969 DEBUG nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.005 239969 DEBUG nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.006 239969 DEBUG nova.network.neutron [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.032 239969 INFO nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.061 239969 DEBUG nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.168 239969 DEBUG nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.172 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.173 239969 INFO nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Creating image(s)
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.229 239969 DEBUG nova.storage.rbd_utils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 3a113ae2-e017-46ee-b5d9-111ff761618b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.267 239969 DEBUG nova.storage.rbd_utils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 3a113ae2-e017-46ee-b5d9-111ff761618b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.299 239969 DEBUG nova.storage.rbd_utils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 3a113ae2-e017-46ee-b5d9-111ff761618b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.304 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.389 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.390 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.391 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.391 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.411 239969 DEBUG nova.storage.rbd_utils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 3a113ae2-e017-46ee-b5d9-111ff761618b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.414 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3a113ae2-e017-46ee-b5d9-111ff761618b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 78 KiB/s wr, 159 op/s
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.676 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3a113ae2-e017-46ee-b5d9-111ff761618b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.733 239969 DEBUG nova.storage.rbd_utils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] resizing rbd image 3a113ae2-e017-46ee-b5d9-111ff761618b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.773 239969 DEBUG nova.policy [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ace5f36058684b5782589457d9e15921', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.817 239969 DEBUG nova.objects.instance [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'migration_context' on Instance uuid 3a113ae2-e017-46ee-b5d9-111ff761618b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.831 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.832 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Ensure instance console log exists: /var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.832 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.833 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:20 compute-0 nova_compute[239965]: 2026-01-26 16:23:20.833 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:20 compute-0 ceph-mon[75140]: pgmap v2179: 305 pgs: 305 active+clean; 339 MiB data, 993 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 78 KiB/s wr, 159 op/s
Jan 26 16:23:21 compute-0 nova_compute[239965]: 2026-01-26 16:23:21.304 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:21 compute-0 sshd-session[354145]: Invalid user user1 from 209.38.206.249 port 51624
Jan 26 16:23:21 compute-0 sshd-session[354145]: Connection closed by invalid user user1 209.38.206.249 port 51624 [preauth]
Jan 26 16:23:21 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Jan 26 16:23:21 compute-0 sshd-session[354147]: Invalid user ubnt from 209.38.206.249 port 51626
Jan 26 16:23:21 compute-0 sshd-session[354147]: Connection closed by invalid user ubnt 209.38.206.249 port 51626 [preauth]
Jan 26 16:23:22 compute-0 sudo[354149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:23:22 compute-0 sudo[354149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:23:22 compute-0 sudo[354149]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:22 compute-0 sudo[354174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:23:22 compute-0 sudo[354174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:23:22 compute-0 nova_compute[239965]: 2026-01-26 16:23:22.370 239969 DEBUG nova.network.neutron [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Successfully created port: 51e9946a-d549-44e6-b844-37a0de1b1244 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:23:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 410 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.6 MiB/s wr, 223 op/s
Jan 26 16:23:22 compute-0 ceph-mon[75140]: pgmap v2180: 305 pgs: 305 active+clean; 410 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.6 MiB/s wr, 223 op/s
Jan 26 16:23:22 compute-0 sudo[354174]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:23:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:23:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:23:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:23:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:23:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:23:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:23:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:23:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:23:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:23:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:23:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:23:22 compute-0 sudo[354229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:23:22 compute-0 sudo[354229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:23:22 compute-0 sudo[354229]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:23 compute-0 sudo[354254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:23:23 compute-0 sudo[354254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:23:23 compute-0 nova_compute[239965]: 2026-01-26 16:23:23.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:23 compute-0 podman[354290]: 2026-01-26 16:23:23.371092661 +0000 UTC m=+0.025973377 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:23:23 compute-0 ovn_controller[146046]: 2026-01-26T16:23:23Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:16:03:01 10.100.0.4
Jan 26 16:23:23 compute-0 ovn_controller[146046]: 2026-01-26T16:23:23Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:16:03:01 10.100.0.4
Jan 26 16:23:23 compute-0 nova_compute[239965]: 2026-01-26 16:23:23.514 239969 DEBUG nova.network.neutron [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Successfully updated port: 51e9946a-d549-44e6-b844-37a0de1b1244 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:23:23 compute-0 podman[354290]: 2026-01-26 16:23:23.526506447 +0000 UTC m=+0.181387143 container create 1415826a8ef279568d5beb2ba53b7fb90ae16bebe190b266a43c0e188a000213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mestorf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:23:23 compute-0 systemd[1]: Started libpod-conmon-1415826a8ef279568d5beb2ba53b7fb90ae16bebe190b266a43c0e188a000213.scope.
Jan 26 16:23:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:23:23 compute-0 podman[354290]: 2026-01-26 16:23:23.61603262 +0000 UTC m=+0.270913316 container init 1415826a8ef279568d5beb2ba53b7fb90ae16bebe190b266a43c0e188a000213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:23:23 compute-0 podman[354290]: 2026-01-26 16:23:23.623353889 +0000 UTC m=+0.278234585 container start 1415826a8ef279568d5beb2ba53b7fb90ae16bebe190b266a43c0e188a000213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mestorf, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:23:23 compute-0 nova_compute[239965]: 2026-01-26 16:23:23.626 239969 DEBUG nova.compute.manager [req-a5a19acf-f084-4f19-bdbb-d55b138ca440 req-44d123c1-6b14-430a-8cde-f1643cadffac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Received event network-changed-51e9946a-d549-44e6-b844-37a0de1b1244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:23 compute-0 podman[354290]: 2026-01-26 16:23:23.627162222 +0000 UTC m=+0.282042948 container attach 1415826a8ef279568d5beb2ba53b7fb90ae16bebe190b266a43c0e188a000213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:23:23 compute-0 nova_compute[239965]: 2026-01-26 16:23:23.627 239969 DEBUG nova.compute.manager [req-a5a19acf-f084-4f19-bdbb-d55b138ca440 req-44d123c1-6b14-430a-8cde-f1643cadffac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Refreshing instance network info cache due to event network-changed-51e9946a-d549-44e6-b844-37a0de1b1244. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:23:23 compute-0 nova_compute[239965]: 2026-01-26 16:23:23.627 239969 DEBUG oslo_concurrency.lockutils [req-a5a19acf-f084-4f19-bdbb-d55b138ca440 req-44d123c1-6b14-430a-8cde-f1643cadffac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3a113ae2-e017-46ee-b5d9-111ff761618b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:23 compute-0 nova_compute[239965]: 2026-01-26 16:23:23.628 239969 DEBUG oslo_concurrency.lockutils [req-a5a19acf-f084-4f19-bdbb-d55b138ca440 req-44d123c1-6b14-430a-8cde-f1643cadffac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3a113ae2-e017-46ee-b5d9-111ff761618b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:23 compute-0 nervous_mestorf[354307]: 167 167
Jan 26 16:23:23 compute-0 podman[354290]: 2026-01-26 16:23:23.629259244 +0000 UTC m=+0.284139990 container died 1415826a8ef279568d5beb2ba53b7fb90ae16bebe190b266a43c0e188a000213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mestorf, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:23:23 compute-0 systemd[1]: libpod-1415826a8ef279568d5beb2ba53b7fb90ae16bebe190b266a43c0e188a000213.scope: Deactivated successfully.
Jan 26 16:23:23 compute-0 nova_compute[239965]: 2026-01-26 16:23:23.628 239969 DEBUG nova.network.neutron [req-a5a19acf-f084-4f19-bdbb-d55b138ca440 req-44d123c1-6b14-430a-8cde-f1643cadffac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Refreshing network info cache for port 51e9946a-d549-44e6-b844-37a0de1b1244 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:23:23 compute-0 nova_compute[239965]: 2026-01-26 16:23:23.638 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "refresh_cache-3a113ae2-e017-46ee-b5d9-111ff761618b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e9192208e0d265fe6c8c568811f7f0dc4f95c48947b5dee428a8811bec67eab-merged.mount: Deactivated successfully.
Jan 26 16:23:23 compute-0 podman[354290]: 2026-01-26 16:23:23.667576902 +0000 UTC m=+0.322457598 container remove 1415826a8ef279568d5beb2ba53b7fb90ae16bebe190b266a43c0e188a000213 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:23:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:23:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:23:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:23:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:23:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:23:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:23:23 compute-0 systemd[1]: libpod-conmon-1415826a8ef279568d5beb2ba53b7fb90ae16bebe190b266a43c0e188a000213.scope: Deactivated successfully.
Jan 26 16:23:23 compute-0 nova_compute[239965]: 2026-01-26 16:23:23.843 239969 DEBUG nova.network.neutron [req-a5a19acf-f084-4f19-bdbb-d55b138ca440 req-44d123c1-6b14-430a-8cde-f1643cadffac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:23:23 compute-0 podman[354329]: 2026-01-26 16:23:23.898696912 +0000 UTC m=+0.048742305 container create 0793f490ad2816d91f00389996d7ab7aee0fc6e329b25bdff9cdefb6ee984bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:23:23 compute-0 systemd[1]: Started libpod-conmon-0793f490ad2816d91f00389996d7ab7aee0fc6e329b25bdff9cdefb6ee984bbe.scope.
Jan 26 16:23:23 compute-0 podman[354329]: 2026-01-26 16:23:23.87777213 +0000 UTC m=+0.027817543 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:23:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ced82936036a77ea7fba1f1bd5721ac7c350c99f7569e8924963b86d0815f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ced82936036a77ea7fba1f1bd5721ac7c350c99f7569e8924963b86d0815f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ced82936036a77ea7fba1f1bd5721ac7c350c99f7569e8924963b86d0815f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ced82936036a77ea7fba1f1bd5721ac7c350c99f7569e8924963b86d0815f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/44ced82936036a77ea7fba1f1bd5721ac7c350c99f7569e8924963b86d0815f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:24 compute-0 podman[354329]: 2026-01-26 16:23:24.008182493 +0000 UTC m=+0.158227936 container init 0793f490ad2816d91f00389996d7ab7aee0fc6e329b25bdff9cdefb6ee984bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:23:24 compute-0 podman[354329]: 2026-01-26 16:23:24.015550474 +0000 UTC m=+0.165595907 container start 0793f490ad2816d91f00389996d7ab7aee0fc6e329b25bdff9cdefb6ee984bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:23:24 compute-0 podman[354329]: 2026-01-26 16:23:24.019575112 +0000 UTC m=+0.169620535 container attach 0793f490ad2816d91f00389996d7ab7aee0fc6e329b25bdff9cdefb6ee984bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 16:23:24 compute-0 sshd-session[354351]: Invalid user ftpuser from 209.38.206.249 port 51638
Jan 26 16:23:24 compute-0 sshd-session[354351]: Connection closed by invalid user ftpuser 209.38.206.249 port 51638 [preauth]
Jan 26 16:23:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 410 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 138 op/s
Jan 26 16:23:24 compute-0 zen_faraday[354346]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:23:24 compute-0 zen_faraday[354346]: --> All data devices are unavailable
Jan 26 16:23:24 compute-0 systemd[1]: libpod-0793f490ad2816d91f00389996d7ab7aee0fc6e329b25bdff9cdefb6ee984bbe.scope: Deactivated successfully.
Jan 26 16:23:24 compute-0 podman[354329]: 2026-01-26 16:23:24.630295039 +0000 UTC m=+0.780340432 container died 0793f490ad2816d91f00389996d7ab7aee0fc6e329b25bdff9cdefb6ee984bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:23:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-44ced82936036a77ea7fba1f1bd5721ac7c350c99f7569e8924963b86d0815f1-merged.mount: Deactivated successfully.
Jan 26 16:23:24 compute-0 ceph-mon[75140]: pgmap v2181: 305 pgs: 305 active+clean; 410 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 138 op/s
Jan 26 16:23:24 compute-0 podman[354329]: 2026-01-26 16:23:24.695304571 +0000 UTC m=+0.845349994 container remove 0793f490ad2816d91f00389996d7ab7aee0fc6e329b25bdff9cdefb6ee984bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:23:24 compute-0 nova_compute[239965]: 2026-01-26 16:23:24.710 239969 DEBUG nova.network.neutron [req-a5a19acf-f084-4f19-bdbb-d55b138ca440 req-44d123c1-6b14-430a-8cde-f1643cadffac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:24 compute-0 systemd[1]: libpod-conmon-0793f490ad2816d91f00389996d7ab7aee0fc6e329b25bdff9cdefb6ee984bbe.scope: Deactivated successfully.
Jan 26 16:23:24 compute-0 nova_compute[239965]: 2026-01-26 16:23:24.730 239969 DEBUG oslo_concurrency.lockutils [req-a5a19acf-f084-4f19-bdbb-d55b138ca440 req-44d123c1-6b14-430a-8cde-f1643cadffac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3a113ae2-e017-46ee-b5d9-111ff761618b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:24 compute-0 nova_compute[239965]: 2026-01-26 16:23:24.731 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquired lock "refresh_cache-3a113ae2-e017-46ee-b5d9-111ff761618b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:24 compute-0 nova_compute[239965]: 2026-01-26 16:23:24.732 239969 DEBUG nova.network.neutron [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:23:24 compute-0 sudo[354254]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:24 compute-0 sudo[354381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:23:24 compute-0 sudo[354381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:23:24 compute-0 sudo[354381]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:24 compute-0 sudo[354406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:23:24 compute-0 sudo[354406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:23:24 compute-0 nova_compute[239965]: 2026-01-26 16:23:24.944 239969 DEBUG nova.network.neutron [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:23:25 compute-0 podman[354442]: 2026-01-26 16:23:25.23876387 +0000 UTC m=+0.042649165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:23:25 compute-0 podman[354442]: 2026-01-26 16:23:25.891716381 +0000 UTC m=+0.695601636 container create 058b85849430185b532bb7db847ece4ce75a5ba335f94652236cca5032c15359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.307 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 412 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Jan 26 16:23:26 compute-0 systemd[1]: Started libpod-conmon-058b85849430185b532bb7db847ece4ce75a5ba335f94652236cca5032c15359.scope.
Jan 26 16:23:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.663 239969 DEBUG nova.network.neutron [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Updating instance_info_cache with network_info: [{"id": "51e9946a-d549-44e6-b844-37a0de1b1244", "address": "fa:16:3e:e6:32:e4", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e9946a-d5", "ovs_interfaceid": "51e9946a-d549-44e6-b844-37a0de1b1244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.688 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Releasing lock "refresh_cache-3a113ae2-e017-46ee-b5d9-111ff761618b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.688 239969 DEBUG nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Instance network_info: |[{"id": "51e9946a-d549-44e6-b844-37a0de1b1244", "address": "fa:16:3e:e6:32:e4", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e9946a-d5", "ovs_interfaceid": "51e9946a-d549-44e6-b844-37a0de1b1244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.693 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Start _get_guest_xml network_info=[{"id": "51e9946a-d549-44e6-b844-37a0de1b1244", "address": "fa:16:3e:e6:32:e4", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e9946a-d5", "ovs_interfaceid": "51e9946a-d549-44e6-b844-37a0de1b1244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.699 239969 WARNING nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.706 239969 DEBUG nova.virt.libvirt.host [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.707 239969 DEBUG nova.virt.libvirt.host [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.710 239969 DEBUG nova.virt.libvirt.host [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.711 239969 DEBUG nova.virt.libvirt.host [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.712 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.713 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.713 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.714 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.715 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.715 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.716 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.716 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.717 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.717 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.718 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.718 239969 DEBUG nova.virt.hardware [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:23:26 compute-0 nova_compute[239965]: 2026-01-26 16:23:26.723 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:26 compute-0 podman[354442]: 2026-01-26 16:23:26.961352134 +0000 UTC m=+1.765237379 container init 058b85849430185b532bb7db847ece4ce75a5ba335f94652236cca5032c15359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:23:26 compute-0 ceph-mon[75140]: pgmap v2182: 305 pgs: 305 active+clean; 412 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 152 op/s
Jan 26 16:23:26 compute-0 podman[354442]: 2026-01-26 16:23:26.97419925 +0000 UTC m=+1.778084465 container start 058b85849430185b532bb7db847ece4ce75a5ba335f94652236cca5032c15359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:23:26 compute-0 serene_babbage[354458]: 167 167
Jan 26 16:23:26 compute-0 systemd[1]: libpod-058b85849430185b532bb7db847ece4ce75a5ba335f94652236cca5032c15359.scope: Deactivated successfully.
Jan 26 16:23:26 compute-0 podman[354442]: 2026-01-26 16:23:26.987403383 +0000 UTC m=+1.791288608 container attach 058b85849430185b532bb7db847ece4ce75a5ba335f94652236cca5032c15359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:23:26 compute-0 podman[354442]: 2026-01-26 16:23:26.988115551 +0000 UTC m=+1.792000776 container died 058b85849430185b532bb7db847ece4ce75a5ba335f94652236cca5032c15359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:23:27 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Jan 26 16:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-098846a8736fd2d387bf470a34ba9b4c264b7096fa2c3a15f73e58d662fdb834-merged.mount: Deactivated successfully.
Jan 26 16:23:27 compute-0 podman[354442]: 2026-01-26 16:23:27.086034929 +0000 UTC m=+1.889920144 container remove 058b85849430185b532bb7db847ece4ce75a5ba335f94652236cca5032c15359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:23:27 compute-0 systemd[1]: libpod-conmon-058b85849430185b532bb7db847ece4ce75a5ba335f94652236cca5032c15359.scope: Deactivated successfully.
Jan 26 16:23:27 compute-0 podman[354505]: 2026-01-26 16:23:27.299582838 +0000 UTC m=+0.046933270 container create bf0e95b08a068e48708f1882c7767b59eef638c79a15f56f0ed942a3af9a52e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:23:27 compute-0 systemd[1]: Started libpod-conmon-bf0e95b08a068e48708f1882c7767b59eef638c79a15f56f0ed942a3af9a52e8.scope.
Jan 26 16:23:27 compute-0 podman[354505]: 2026-01-26 16:23:27.281009084 +0000 UTC m=+0.028359536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:23:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:23:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d63f171f8d402ae57274086945f6cb1acd6fe96fa26c5377f67f7adbaff233/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d63f171f8d402ae57274086945f6cb1acd6fe96fa26c5377f67f7adbaff233/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d63f171f8d402ae57274086945f6cb1acd6fe96fa26c5377f67f7adbaff233/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:23:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994628066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3d63f171f8d402ae57274086945f6cb1acd6fe96fa26c5377f67f7adbaff233/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:27 compute-0 podman[354505]: 2026-01-26 16:23:27.413800576 +0000 UTC m=+0.161151028 container init bf0e95b08a068e48708f1882c7767b59eef638c79a15f56f0ed942a3af9a52e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bell, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:23:27 compute-0 podman[354505]: 2026-01-26 16:23:27.421660228 +0000 UTC m=+0.169010660 container start bf0e95b08a068e48708f1882c7767b59eef638c79a15f56f0ed942a3af9a52e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bell, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:23:27 compute-0 podman[354505]: 2026-01-26 16:23:27.426205169 +0000 UTC m=+0.173555631 container attach bf0e95b08a068e48708f1882c7767b59eef638c79a15f56f0ed942a3af9a52e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 16:23:27 compute-0 nova_compute[239965]: 2026-01-26 16:23:27.427 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.704s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:27 compute-0 nova_compute[239965]: 2026-01-26 16:23:27.448 239969 DEBUG nova.storage.rbd_utils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 3a113ae2-e017-46ee-b5d9-111ff761618b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:27 compute-0 nova_compute[239965]: 2026-01-26 16:23:27.452 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:27 compute-0 affectionate_bell[354521]: {
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:     "0": [
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:         {
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "devices": [
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "/dev/loop3"
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             ],
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_name": "ceph_lv0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_size": "21470642176",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "name": "ceph_lv0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "tags": {
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.cluster_name": "ceph",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.crush_device_class": "",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.encrypted": "0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.objectstore": "bluestore",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.osd_id": "0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.type": "block",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.vdo": "0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.with_tpm": "0"
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             },
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "type": "block",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "vg_name": "ceph_vg0"
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:         }
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:     ],
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:     "1": [
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:         {
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "devices": [
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "/dev/loop4"
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             ],
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_name": "ceph_lv1",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_size": "21470642176",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "name": "ceph_lv1",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "tags": {
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.cluster_name": "ceph",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.crush_device_class": "",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.encrypted": "0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.objectstore": "bluestore",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.osd_id": "1",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.type": "block",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.vdo": "0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.with_tpm": "0"
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             },
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "type": "block",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "vg_name": "ceph_vg1"
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:         }
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:     ],
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:     "2": [
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:         {
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "devices": [
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "/dev/loop5"
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             ],
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_name": "ceph_lv2",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_size": "21470642176",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "name": "ceph_lv2",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "tags": {
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.cluster_name": "ceph",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.crush_device_class": "",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.encrypted": "0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.objectstore": "bluestore",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.osd_id": "2",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.type": "block",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.vdo": "0",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:                 "ceph.with_tpm": "0"
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             },
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "type": "block",
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:             "vg_name": "ceph_vg2"
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:         }
Jan 26 16:23:27 compute-0 affectionate_bell[354521]:     ]
Jan 26 16:23:27 compute-0 affectionate_bell[354521]: }
Jan 26 16:23:27 compute-0 systemd[1]: libpod-bf0e95b08a068e48708f1882c7767b59eef638c79a15f56f0ed942a3af9a52e8.scope: Deactivated successfully.
Jan 26 16:23:27 compute-0 podman[354505]: 2026-01-26 16:23:27.744233408 +0000 UTC m=+0.491583840 container died bf0e95b08a068e48708f1882c7767b59eef638c79a15f56f0ed942a3af9a52e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:23:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3d63f171f8d402ae57274086945f6cb1acd6fe96fa26c5377f67f7adbaff233-merged.mount: Deactivated successfully.
Jan 26 16:23:27 compute-0 podman[354505]: 2026-01-26 16:23:27.794653893 +0000 UTC m=+0.542004325 container remove bf0e95b08a068e48708f1882c7767b59eef638c79a15f56f0ed942a3af9a52e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:23:27 compute-0 systemd[1]: libpod-conmon-bf0e95b08a068e48708f1882c7767b59eef638c79a15f56f0ed942a3af9a52e8.scope: Deactivated successfully.
Jan 26 16:23:27 compute-0 sudo[354406]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:27 compute-0 sudo[354585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:23:27 compute-0 sudo[354585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:23:27 compute-0 sudo[354585]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1994628066' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:27 compute-0 sudo[354610]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:23:27 compute-0 sudo[354610]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:23:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:23:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335733899' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.046 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.049 239969 DEBUG nova.virt.libvirt.vif [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:23:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-0-474580322',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-0-474580322',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-gen',id=130,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaGoHE+Y8ji3+JCDO/bDUoDInaraCUH4QlGo7jTOAJTsSMKwIPcdLhQoPl5Hxa2VpL5egGSZOv5v3mz9KbIjWDatj77IcRZ2MRFQE+vsAUPEwBUyrw2qpYszZfZz2+J9w==',key_name='tempest-TestSecurityGroupsBasicOps-326396829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-3546p30a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:23:20Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=3a113ae2-e017-46ee-b5d9-111ff761618b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51e9946a-d549-44e6-b844-37a0de1b1244", "address": "fa:16:3e:e6:32:e4", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e9946a-d5", "ovs_interfaceid": "51e9946a-d549-44e6-b844-37a0de1b1244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.050 239969 DEBUG nova.network.os_vif_util [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "51e9946a-d549-44e6-b844-37a0de1b1244", "address": "fa:16:3e:e6:32:e4", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e9946a-d5", "ovs_interfaceid": "51e9946a-d549-44e6-b844-37a0de1b1244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.051 239969 DEBUG nova.network.os_vif_util [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:32:e4,bridge_name='br-int',has_traffic_filtering=True,id=51e9946a-d549-44e6-b844-37a0de1b1244,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e9946a-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.052 239969 DEBUG nova.objects.instance [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a113ae2-e017-46ee-b5d9-111ff761618b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.073 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <uuid>3a113ae2-e017-46ee-b5d9-111ff761618b</uuid>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <name>instance-00000082</name>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-0-474580322</nova:name>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:23:26</nova:creationTime>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <nova:user uuid="ace5f36058684b5782589457d9e15921">tempest-TestSecurityGroupsBasicOps-75910484-project-member</nova:user>
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <nova:project uuid="1f814bd45d164be6827f7ef54ed07f5f">tempest-TestSecurityGroupsBasicOps-75910484</nova:project>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <nova:port uuid="51e9946a-d549-44e6-b844-37a0de1b1244">
Jan 26 16:23:28 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <system>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <entry name="serial">3a113ae2-e017-46ee-b5d9-111ff761618b</entry>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <entry name="uuid">3a113ae2-e017-46ee-b5d9-111ff761618b</entry>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     </system>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <os>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   </os>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <features>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   </features>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3a113ae2-e017-46ee-b5d9-111ff761618b_disk">
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3a113ae2-e017-46ee-b5d9-111ff761618b_disk.config">
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:23:28 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:e6:32:e4"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <target dev="tap51e9946a-d5"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b/console.log" append="off"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <video>
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     </video>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:23:28 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:23:28 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:23:28 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:23:28 compute-0 nova_compute[239965]: </domain>
Jan 26 16:23:28 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.078 239969 DEBUG nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Preparing to wait for external event network-vif-plugged-51e9946a-d549-44e6-b844-37a0de1b1244 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.078 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.079 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.079 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.080 239969 DEBUG nova.virt.libvirt.vif [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:23:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-0-474580322',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-0-474580322',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-gen',id=130,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaGoHE+Y8ji3+JCDO/bDUoDInaraCUH4QlGo7jTOAJTsSMKwIPcdLhQoPl5Hxa2VpL5egGSZOv5v3mz9KbIjWDatj77IcRZ2MRFQE+vsAUPEwBUyrw2qpYszZfZz2+J9w==',key_name='tempest-TestSecurityGroupsBasicOps-326396829',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-3546p30a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:23:20Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=3a113ae2-e017-46ee-b5d9-111ff761618b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "51e9946a-d549-44e6-b844-37a0de1b1244", "address": "fa:16:3e:e6:32:e4", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e9946a-d5", "ovs_interfaceid": "51e9946a-d549-44e6-b844-37a0de1b1244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.080 239969 DEBUG nova.network.os_vif_util [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "51e9946a-d549-44e6-b844-37a0de1b1244", "address": "fa:16:3e:e6:32:e4", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e9946a-d5", "ovs_interfaceid": "51e9946a-d549-44e6-b844-37a0de1b1244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.081 239969 DEBUG nova.network.os_vif_util [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:32:e4,bridge_name='br-int',has_traffic_filtering=True,id=51e9946a-d549-44e6-b844-37a0de1b1244,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e9946a-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.081 239969 DEBUG os_vif [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:32:e4,bridge_name='br-int',has_traffic_filtering=True,id=51e9946a-d549-44e6-b844-37a0de1b1244,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e9946a-d5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.082 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.083 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:28 compute-0 ovn_controller[146046]: 2026-01-26T16:23:28Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:80:b5:7c 10.100.0.19
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.087 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51e9946a-d5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:28 compute-0 ovn_controller[146046]: 2026-01-26T16:23:28Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:80:b5:7c 10.100.0.19
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.087 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51e9946a-d5, col_values=(('external_ids', {'iface-id': '51e9946a-d549-44e6-b844-37a0de1b1244', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:32:e4', 'vm-uuid': '3a113ae2-e017-46ee-b5d9-111ff761618b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:28 compute-0 NetworkManager[48954]: <info>  [1769444608.1274] manager: (tap51e9946a-d5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/561)
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.126 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.134 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.135 239969 INFO os_vif [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:32:e4,bridge_name='br-int',has_traffic_filtering=True,id=51e9946a-d549-44e6-b844-37a0de1b1244,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e9946a-d5')
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.215 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.217 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.217 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No VIF found with MAC fa:16:3e:e6:32:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.218 239969 INFO nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Using config drive
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.244 239969 DEBUG nova.storage.rbd_utils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 3a113ae2-e017-46ee-b5d9-111ff761618b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:28 compute-0 podman[354651]: 2026-01-26 16:23:28.296112983 +0000 UTC m=+0.059917448 container create bbf46a2f4dee24ad0f6eafd1aefbf1ec3b8149ee964662f33abd544e043c97b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dijkstra, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:23:28 compute-0 systemd[1]: Started libpod-conmon-bbf46a2f4dee24ad0f6eafd1aefbf1ec3b8149ee964662f33abd544e043c97b7.scope.
Jan 26 16:23:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:23:28 compute-0 podman[354651]: 2026-01-26 16:23:28.364393906 +0000 UTC m=+0.128198391 container init bbf46a2f4dee24ad0f6eafd1aefbf1ec3b8149ee964662f33abd544e043c97b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dijkstra, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:23:28 compute-0 podman[354651]: 2026-01-26 16:23:28.37110823 +0000 UTC m=+0.134912695 container start bbf46a2f4dee24ad0f6eafd1aefbf1ec3b8149ee964662f33abd544e043c97b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:23:28 compute-0 podman[354651]: 2026-01-26 16:23:28.278937143 +0000 UTC m=+0.042741628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:23:28 compute-0 inspiring_dijkstra[354685]: 167 167
Jan 26 16:23:28 compute-0 podman[354651]: 2026-01-26 16:23:28.374096802 +0000 UTC m=+0.137901287 container attach bbf46a2f4dee24ad0f6eafd1aefbf1ec3b8149ee964662f33abd544e043c97b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dijkstra, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:23:28 compute-0 systemd[1]: libpod-bbf46a2f4dee24ad0f6eafd1aefbf1ec3b8149ee964662f33abd544e043c97b7.scope: Deactivated successfully.
Jan 26 16:23:28 compute-0 podman[354651]: 2026-01-26 16:23:28.377675661 +0000 UTC m=+0.141480136 container died bbf46a2f4dee24ad0f6eafd1aefbf1ec3b8149ee964662f33abd544e043c97b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:23:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5cf60956b5ed9384691469501b4bd53429dd1da0b3cb04a82ab4f1f09cc74bc3-merged.mount: Deactivated successfully.
Jan 26 16:23:28 compute-0 podman[354651]: 2026-01-26 16:23:28.411725304 +0000 UTC m=+0.175529769 container remove bbf46a2f4dee24ad0f6eafd1aefbf1ec3b8149ee964662f33abd544e043c97b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:23:28 compute-0 systemd[1]: libpod-conmon-bbf46a2f4dee24ad0f6eafd1aefbf1ec3b8149ee964662f33abd544e043c97b7.scope: Deactivated successfully.
Jan 26 16:23:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 435 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.4 MiB/s wr, 154 op/s
Jan 26 16:23:28 compute-0 podman[354709]: 2026-01-26 16:23:28.655666969 +0000 UTC m=+0.061901168 container create 208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hopper, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:23:28
Jan 26 16:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.control', 'vms', 'backups', '.mgr']
Jan 26 16:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:23:28 compute-0 systemd[1]: Started libpod-conmon-208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635.scope.
Jan 26 16:23:28 compute-0 podman[354709]: 2026-01-26 16:23:28.634817628 +0000 UTC m=+0.041051867 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:23:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:23:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8127a9fec9e8905e7b1f22fa29def59c7df0dc94a86c4b241f0751d0dcd78d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8127a9fec9e8905e7b1f22fa29def59c7df0dc94a86c4b241f0751d0dcd78d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8127a9fec9e8905e7b1f22fa29def59c7df0dc94a86c4b241f0751d0dcd78d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f8127a9fec9e8905e7b1f22fa29def59c7df0dc94a86c4b241f0751d0dcd78d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:23:28 compute-0 podman[354709]: 2026-01-26 16:23:28.780748672 +0000 UTC m=+0.186982971 container init 208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hopper, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 16:23:28 compute-0 podman[354709]: 2026-01-26 16:23:28.792427008 +0000 UTC m=+0.198661207 container start 208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hopper, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:23:28 compute-0 podman[354709]: 2026-01-26 16:23:28.796454046 +0000 UTC m=+0.202688285 container attach 208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.864 239969 INFO nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Creating config drive at /var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b/disk.config
Jan 26 16:23:28 compute-0 nova_compute[239965]: 2026-01-26 16:23:28.878 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6d4dheeg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/335733899' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:28 compute-0 ceph-mon[75140]: pgmap v2183: 305 pgs: 305 active+clean; 435 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.4 MiB/s wr, 154 op/s
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.048 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6d4dheeg" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.079 239969 DEBUG nova.storage.rbd_utils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 3a113ae2-e017-46ee-b5d9-111ff761618b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.083 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b/disk.config 3a113ae2-e017-46ee-b5d9-111ff761618b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.269 239969 DEBUG oslo_concurrency.processutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b/disk.config 3a113ae2-e017-46ee-b5d9-111ff761618b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.271 239969 INFO nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Deleting local config drive /var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b/disk.config because it was imported into RBD.
Jan 26 16:23:29 compute-0 NetworkManager[48954]: <info>  [1769444609.3256] manager: (tap51e9946a-d5): new Tun device (/org/freedesktop/NetworkManager/Devices/562)
Jan 26 16:23:29 compute-0 kernel: tap51e9946a-d5: entered promiscuous mode
Jan 26 16:23:29 compute-0 ovn_controller[146046]: 2026-01-26T16:23:29Z|01345|binding|INFO|Claiming lport 51e9946a-d549-44e6-b844-37a0de1b1244 for this chassis.
Jan 26 16:23:29 compute-0 ovn_controller[146046]: 2026-01-26T16:23:29Z|01346|binding|INFO|51e9946a-d549-44e6-b844-37a0de1b1244: Claiming fa:16:3e:e6:32:e4 10.100.0.8
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.366 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:29 compute-0 systemd-udevd[354841]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.376 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:32:e4 10.100.0.8'], port_security=['fa:16:3e:e6:32:e4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3a113ae2-e017-46ee-b5d9-111ff761618b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bc91585-31b9-47fa-8cd0-03064511558e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c0a3a9fe-99c5-4bfe-a92f-e1de2e2a1727', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7379a9d2-4570-48f3-bfe7-5857500f5f77, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=51e9946a-d549-44e6-b844-37a0de1b1244) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.378 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 51e9946a-d549-44e6-b844-37a0de1b1244 in datapath 9bc91585-31b9-47fa-8cd0-03064511558e bound to our chassis
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.380 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bc91585-31b9-47fa-8cd0-03064511558e
Jan 26 16:23:29 compute-0 NetworkManager[48954]: <info>  [1769444609.3828] device (tap51e9946a-d5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:23:29 compute-0 NetworkManager[48954]: <info>  [1769444609.3849] device (tap51e9946a-d5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:23:29 compute-0 ovn_controller[146046]: 2026-01-26T16:23:29Z|01347|binding|INFO|Setting lport 51e9946a-d549-44e6-b844-37a0de1b1244 ovn-installed in OVS
Jan 26 16:23:29 compute-0 ovn_controller[146046]: 2026-01-26T16:23:29Z|01348|binding|INFO|Setting lport 51e9946a-d549-44e6-b844-37a0de1b1244 up in Southbound
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.392 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.394 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.403 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e14da3c-a200-41e5-8f56-b87a75f88dbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:29 compute-0 systemd-machined[208061]: New machine qemu-157-instance-00000082.
Jan 26 16:23:29 compute-0 systemd[1]: Started Virtual Machine qemu-157-instance-00000082.
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.434 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7d76cd29-761d-44fc-8712-5549716cbf52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.438 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e7badbd4-df76-4176-a66f-d855a369c7f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.465 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[58fc81fe-e2f4-4f08-b2c9-c72a722b9ad8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.482 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1ca33de5-627a-4a53-b976-3d0520ff7bb2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bc91585-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:67:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606129, 'reachable_time': 39464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354866, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.497 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4758225b-ec0f-4db8-988a-2cd47b2d3cec]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bc91585-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606143, 'tstamp': 606143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354868, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bc91585-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606146, 'tstamp': 606146}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354868, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.499 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bc91585-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.502 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.502 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bc91585-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.502 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.503 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bc91585-30, col_values=(('external_ids', {'iface-id': 'ad3f6d2f-d76e-421c-82c8-5fe4525b7056'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:29 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:29.503 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:29 compute-0 lvm[354870]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:23:29 compute-0 lvm[354870]: VG ceph_vg0 finished
Jan 26 16:23:29 compute-0 lvm[354872]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:23:29 compute-0 lvm[354872]: VG ceph_vg1 finished
Jan 26 16:23:29 compute-0 lvm[354873]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:23:29 compute-0 lvm[354873]: VG ceph_vg0 finished
Jan 26 16:23:29 compute-0 lvm[354875]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:23:29 compute-0 lvm[354875]: VG ceph_vg2 finished
Jan 26 16:23:29 compute-0 wonderful_hopper[354726]: {}
Jan 26 16:23:29 compute-0 systemd[1]: libpod-208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635.scope: Deactivated successfully.
Jan 26 16:23:29 compute-0 podman[354709]: 2026-01-26 16:23:29.678640331 +0000 UTC m=+1.084874580 container died 208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:23:29 compute-0 systemd[1]: libpod-208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635.scope: Consumed 1.362s CPU time.
Jan 26 16:23:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f8127a9fec9e8905e7b1f22fa29def59c7df0dc94a86c4b241f0751d0dcd78d-merged.mount: Deactivated successfully.
Jan 26 16:23:29 compute-0 podman[354709]: 2026-01-26 16:23:29.726234086 +0000 UTC m=+1.132468285 container remove 208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_hopper, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:23:29 compute-0 systemd[1]: libpod-conmon-208dc878219bc710f78c0a9d2004588337baa877dad6ce4ee7b88b5435022635.scope: Deactivated successfully.
Jan 26 16:23:29 compute-0 sudo[354610]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:23:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:23:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:23:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:23:29 compute-0 sudo[354889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:23:29 compute-0 sudo[354889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:23:29 compute-0 sudo[354889]: pam_unix(sudo:session): session closed for user root
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.891 239969 DEBUG nova.compute.manager [req-0239f94b-8a8f-44c4-b84f-6d48d8e1a872 req-c0876553-6f4f-425f-b813-eba77ff48b48 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Received event network-vif-plugged-51e9946a-d549-44e6-b844-37a0de1b1244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.896 239969 DEBUG oslo_concurrency.lockutils [req-0239f94b-8a8f-44c4-b84f-6d48d8e1a872 req-c0876553-6f4f-425f-b813-eba77ff48b48 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.897 239969 DEBUG oslo_concurrency.lockutils [req-0239f94b-8a8f-44c4-b84f-6d48d8e1a872 req-c0876553-6f4f-425f-b813-eba77ff48b48 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.897 239969 DEBUG oslo_concurrency.lockutils [req-0239f94b-8a8f-44c4-b84f-6d48d8e1a872 req-c0876553-6f4f-425f-b813-eba77ff48b48 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:29 compute-0 nova_compute[239965]: 2026-01-26 16:23:29.898 239969 DEBUG nova.compute.manager [req-0239f94b-8a8f-44c4-b84f-6d48d8e1a872 req-c0876553-6f4f-425f-b813-eba77ff48b48 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Processing event network-vif-plugged-51e9946a-d549-44e6-b844-37a0de1b1244 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 435 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 5.4 MiB/s wr, 112 op/s
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:23:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:23:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:23:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:23:30 compute-0 ceph-mon[75140]: pgmap v2184: 305 pgs: 305 active+clean; 435 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 392 KiB/s rd, 5.4 MiB/s wr, 112 op/s
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.887 239969 DEBUG nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.889 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444610.8870187, 3a113ae2-e017-46ee-b5d9-111ff761618b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.889 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] VM Started (Lifecycle Event)
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.894 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.899 239969 INFO nova.virt.libvirt.driver [-] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Instance spawned successfully.
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.899 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.913 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.919 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.924 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.924 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.925 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.925 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.926 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.926 239969 DEBUG nova.virt.libvirt.driver [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.957 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.957 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444610.8884616, 3a113ae2-e017-46ee-b5d9-111ff761618b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.957 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] VM Paused (Lifecycle Event)
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.980 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.984 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444610.8938677, 3a113ae2-e017-46ee-b5d9-111ff761618b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.985 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] VM Resumed (Lifecycle Event)
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.989 239969 INFO nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Took 10.82 seconds to spawn the instance on the hypervisor.
Jan 26 16:23:30 compute-0 nova_compute[239965]: 2026-01-26 16:23:30.989 239969 DEBUG nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:31 compute-0 nova_compute[239965]: 2026-01-26 16:23:31.015 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:31 compute-0 nova_compute[239965]: 2026-01-26 16:23:31.018 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:23:31 compute-0 nova_compute[239965]: 2026-01-26 16:23:31.087 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:23:31 compute-0 nova_compute[239965]: 2026-01-26 16:23:31.110 239969 INFO nova.compute.manager [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Took 11.97 seconds to build instance.
Jan 26 16:23:31 compute-0 nova_compute[239965]: 2026-01-26 16:23:31.125 239969 DEBUG oslo_concurrency.lockutils [None req-b7582b7f-a3d6-4a5d-b92f-dd8808b20848 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 220 op/s
Jan 26 16:23:32 compute-0 nova_compute[239965]: 2026-01-26 16:23:32.869 239969 DEBUG nova.compute.manager [req-5b10d69a-e2a9-41d7-9867-0a21f4db4749 req-8c5ce7bd-96c2-4175-a420-c05f6f946c8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Received event network-vif-plugged-51e9946a-d549-44e6-b844-37a0de1b1244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:32 compute-0 nova_compute[239965]: 2026-01-26 16:23:32.870 239969 DEBUG oslo_concurrency.lockutils [req-5b10d69a-e2a9-41d7-9867-0a21f4db4749 req-8c5ce7bd-96c2-4175-a420-c05f6f946c8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:32 compute-0 nova_compute[239965]: 2026-01-26 16:23:32.871 239969 DEBUG oslo_concurrency.lockutils [req-5b10d69a-e2a9-41d7-9867-0a21f4db4749 req-8c5ce7bd-96c2-4175-a420-c05f6f946c8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:32 compute-0 nova_compute[239965]: 2026-01-26 16:23:32.871 239969 DEBUG oslo_concurrency.lockutils [req-5b10d69a-e2a9-41d7-9867-0a21f4db4749 req-8c5ce7bd-96c2-4175-a420-c05f6f946c8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:32 compute-0 nova_compute[239965]: 2026-01-26 16:23:32.872 239969 DEBUG nova.compute.manager [req-5b10d69a-e2a9-41d7-9867-0a21f4db4749 req-8c5ce7bd-96c2-4175-a420-c05f6f946c8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] No waiting events found dispatching network-vif-plugged-51e9946a-d549-44e6-b844-37a0de1b1244 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:32 compute-0 nova_compute[239965]: 2026-01-26 16:23:32.872 239969 WARNING nova.compute.manager [req-5b10d69a-e2a9-41d7-9867-0a21f4db4749 req-8c5ce7bd-96c2-4175-a420-c05f6f946c8d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Received unexpected event network-vif-plugged-51e9946a-d549-44e6-b844-37a0de1b1244 for instance with vm_state active and task_state None.
Jan 26 16:23:33 compute-0 nova_compute[239965]: 2026-01-26 16:23:33.129 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:33 compute-0 nova_compute[239965]: 2026-01-26 16:23:33.141 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:33 compute-0 ceph-mon[75140]: pgmap v2185: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 220 op/s
Jan 26 16:23:34 compute-0 nova_compute[239965]: 2026-01-26 16:23:34.025 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:34 compute-0 nova_compute[239965]: 2026-01-26 16:23:34.026 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:34 compute-0 nova_compute[239965]: 2026-01-26 16:23:34.040 239969 DEBUG nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:23:34 compute-0 nova_compute[239965]: 2026-01-26 16:23:34.120 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:34 compute-0 nova_compute[239965]: 2026-01-26 16:23:34.120 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:34 compute-0 nova_compute[239965]: 2026-01-26 16:23:34.128 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:23:34 compute-0 nova_compute[239965]: 2026-01-26 16:23:34.129 239969 INFO nova.compute.claims [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:23:34 compute-0 nova_compute[239965]: 2026-01-26 16:23:34.318 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 155 op/s
Jan 26 16:23:34 compute-0 ceph-mon[75140]: pgmap v2186: 305 pgs: 305 active+clean; 451 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 MiB/s wr, 155 op/s
Jan 26 16:23:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:23:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1934016782' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.197 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.879s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.206 239969 DEBUG nova.compute.provider_tree [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.223 239969 DEBUG nova.scheduler.client.report [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.245 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.246 239969 DEBUG nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.295 239969 DEBUG nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.295 239969 DEBUG nova.network.neutron [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.315 239969 INFO nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.338 239969 DEBUG nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:23:35 compute-0 podman[354979]: 2026-01-26 16:23:35.415263699 +0000 UTC m=+0.085707049 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.427 239969 DEBUG nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.428 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.429 239969 INFO nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Creating image(s)
Jan 26 16:23:35 compute-0 podman[354980]: 2026-01-26 16:23:35.456893399 +0000 UTC m=+0.127067693 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.466 239969 DEBUG nova.storage.rbd_utils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.505 239969 DEBUG nova.storage.rbd_utils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.540 239969 DEBUG nova.storage.rbd_utils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.545 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.590 239969 DEBUG nova.policy [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '0338b308661e4205839e9e957f674d8c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'df501970c9864b77b86bb4aa58ef846b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.635 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.636 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.637 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.638 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.663 239969 DEBUG nova.storage.rbd_utils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.666 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1934016782' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:35 compute-0 nova_compute[239965]: 2026-01-26 16:23:35.943 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:36 compute-0 nova_compute[239965]: 2026-01-26 16:23:36.024 239969 DEBUG nova.storage.rbd_utils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] resizing rbd image 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:23:36 compute-0 nova_compute[239965]: 2026-01-26 16:23:36.123 239969 DEBUG nova.objects.instance [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'migration_context' on Instance uuid 1999f11e-e9a2-4442-8baf-d8e51e0c856d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:36 compute-0 nova_compute[239965]: 2026-01-26 16:23:36.138 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:23:36 compute-0 nova_compute[239965]: 2026-01-26 16:23:36.139 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Ensure instance console log exists: /var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:23:36 compute-0 nova_compute[239965]: 2026-01-26 16:23:36.139 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:36 compute-0 nova_compute[239965]: 2026-01-26 16:23:36.140 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:36 compute-0 nova_compute[239965]: 2026-01-26 16:23:36.140 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:36 compute-0 nova_compute[239965]: 2026-01-26 16:23:36.336 239969 DEBUG nova.network.neutron [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Successfully created port: d8b4b479-9869-4e47-bcea-fe0580fafbe3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:23:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 468 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 166 op/s
Jan 26 16:23:36 compute-0 ceph-mon[75140]: pgmap v2187: 305 pgs: 305 active+clean; 468 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 166 op/s
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.322 239969 DEBUG nova.compute.manager [req-9132f107-a323-49d5-a6aa-071c82085d2d req-fea5c351-2d06-4d6d-b783-a7ebc6347054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-changed-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.323 239969 DEBUG nova.compute.manager [req-9132f107-a323-49d5-a6aa-071c82085d2d req-fea5c351-2d06-4d6d-b783-a7ebc6347054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing instance network info cache due to event network-changed-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.324 239969 DEBUG oslo_concurrency.lockutils [req-9132f107-a323-49d5-a6aa-071c82085d2d req-fea5c351-2d06-4d6d-b783-a7ebc6347054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.324 239969 DEBUG oslo_concurrency.lockutils [req-9132f107-a323-49d5-a6aa-071c82085d2d req-fea5c351-2d06-4d6d-b783-a7ebc6347054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.325 239969 DEBUG nova.network.neutron [req-9132f107-a323-49d5-a6aa-071c82085d2d req-fea5c351-2d06-4d6d-b783-a7ebc6347054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing network info cache for port 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.327 239969 DEBUG nova.network.neutron [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Successfully updated port: d8b4b479-9869-4e47-bcea-fe0580fafbe3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.353 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.354 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquired lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.354 239969 DEBUG nova.network.neutron [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.534 239969 DEBUG nova.compute.manager [req-3b8cfa60-2674-4438-9312-6d5828e1b81b req-bb1dd190-494e-4df9-a3af-df2458598d00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received event network-changed-d8b4b479-9869-4e47-bcea-fe0580fafbe3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.535 239969 DEBUG nova.compute.manager [req-3b8cfa60-2674-4438-9312-6d5828e1b81b req-bb1dd190-494e-4df9-a3af-df2458598d00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Refreshing instance network info cache due to event network-changed-d8b4b479-9869-4e47-bcea-fe0580fafbe3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.535 239969 DEBUG oslo_concurrency.lockutils [req-3b8cfa60-2674-4438-9312-6d5828e1b81b req-bb1dd190-494e-4df9-a3af-df2458598d00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:37 compute-0 nova_compute[239965]: 2026-01-26 16:23:37.594 239969 DEBUG nova.network.neutron [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:23:38 compute-0 nova_compute[239965]: 2026-01-26 16:23:38.135 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:38 compute-0 nova_compute[239965]: 2026-01-26 16:23:38.143 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 179 op/s
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.201 239969 DEBUG nova.network.neutron [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Updating instance_info_cache with network_info: [{"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.227 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Releasing lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.228 239969 DEBUG nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Instance network_info: |[{"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.228 239969 DEBUG oslo_concurrency.lockutils [req-3b8cfa60-2674-4438-9312-6d5828e1b81b req-bb1dd190-494e-4df9-a3af-df2458598d00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.228 239969 DEBUG nova.network.neutron [req-3b8cfa60-2674-4438-9312-6d5828e1b81b req-bb1dd190-494e-4df9-a3af-df2458598d00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Refreshing network info cache for port d8b4b479-9869-4e47-bcea-fe0580fafbe3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.230 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Start _get_guest_xml network_info=[{"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.235 239969 WARNING nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.242 239969 DEBUG nova.virt.libvirt.host [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.243 239969 DEBUG nova.virt.libvirt.host [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.249 239969 DEBUG nova.virt.libvirt.host [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.249 239969 DEBUG nova.virt.libvirt.host [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.250 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.250 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.250 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.250 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.251 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.251 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.251 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.251 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.252 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.252 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.252 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.252 239969 DEBUG nova.virt.hardware [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.255 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.347 239969 DEBUG nova.network.neutron [req-9132f107-a323-49d5-a6aa-071c82085d2d req-fea5c351-2d06-4d6d-b783-a7ebc6347054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updated VIF entry in instance network info cache for port 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.349 239969 DEBUG nova.network.neutron [req-9132f107-a323-49d5-a6aa-071c82085d2d req-fea5c351-2d06-4d6d-b783-a7ebc6347054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.373 239969 DEBUG oslo_concurrency.lockutils [req-9132f107-a323-49d5-a6aa-071c82085d2d req-fea5c351-2d06-4d6d-b783-a7ebc6347054 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:39 compute-0 ceph-mon[75140]: pgmap v2188: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.0 MiB/s wr, 179 op/s
Jan 26 16:23:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:23:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1143641153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.808 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.845 239969 DEBUG nova.storage.rbd_utils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:39 compute-0 nova_compute[239965]: 2026-01-26 16:23:39.851 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:23:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673300914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.478 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.480 239969 DEBUG nova.virt.libvirt.vif [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:23:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2002333273',display_name='tempest-TestGettingAddress-server-2002333273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2002333273',id=131,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCC7xQvyZ3R4fi4Uk/Uk6Pa6MLruwnCjAWUP+8KMyXPkmagp3JbDxcXWQo/28wA3I75JT31hYs4VkRRPhBdGj71Xnfc2t9PlBlyrIJIqEAvEv7a7zDeqBbmQyfNsQiXQAw==',key_name='tempest-TestGettingAddress-1047513795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-ije6ecsv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:23:35Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=1999f11e-e9a2-4442-8baf-d8e51e0c856d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.481 239969 DEBUG nova.network.os_vif_util [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.482 239969 DEBUG nova.network.os_vif_util [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:a8:15,bridge_name='br-int',has_traffic_filtering=True,id=d8b4b479-9869-4e47-bcea-fe0580fafbe3,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b4b479-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.483 239969 DEBUG nova.objects.instance [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'pci_devices' on Instance uuid 1999f11e-e9a2-4442-8baf-d8e51e0c856d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.502 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <uuid>1999f11e-e9a2-4442-8baf-d8e51e0c856d</uuid>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <name>instance-00000083</name>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <nova:name>tempest-TestGettingAddress-server-2002333273</nova:name>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:23:39</nova:creationTime>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <nova:user uuid="0338b308661e4205839e9e957f674d8c">tempest-TestGettingAddress-206943419-project-member</nova:user>
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <nova:project uuid="df501970c9864b77b86bb4aa58ef846b">tempest-TestGettingAddress-206943419</nova:project>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <nova:port uuid="d8b4b479-9869-4e47-bcea-fe0580fafbe3">
Jan 26 16:23:40 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="2001:db8::f816:3eff:fe12:a815" ipVersion="6"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <system>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <entry name="serial">1999f11e-e9a2-4442-8baf-d8e51e0c856d</entry>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <entry name="uuid">1999f11e-e9a2-4442-8baf-d8e51e0c856d</entry>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     </system>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <os>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   </os>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <features>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   </features>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk">
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk.config">
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:23:40 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:12:a8:15"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <target dev="tapd8b4b479-98"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d/console.log" append="off"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <video>
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     </video>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:23:40 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:23:40 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:23:40 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:23:40 compute-0 nova_compute[239965]: </domain>
Jan 26 16:23:40 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.515 239969 DEBUG nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Preparing to wait for external event network-vif-plugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.515 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.516 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.516 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.517 239969 DEBUG nova.virt.libvirt.vif [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:23:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2002333273',display_name='tempest-TestGettingAddress-server-2002333273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2002333273',id=131,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCC7xQvyZ3R4fi4Uk/Uk6Pa6MLruwnCjAWUP+8KMyXPkmagp3JbDxcXWQo/28wA3I75JT31hYs4VkRRPhBdGj71Xnfc2t9PlBlyrIJIqEAvEv7a7zDeqBbmQyfNsQiXQAw==',key_name='tempest-TestGettingAddress-1047513795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-ije6ecsv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:23:35Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=1999f11e-e9a2-4442-8baf-d8e51e0c856d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": 
true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.517 239969 DEBUG nova.network.os_vif_util [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.518 239969 DEBUG nova.network.os_vif_util [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:a8:15,bridge_name='br-int',has_traffic_filtering=True,id=d8b4b479-9869-4e47-bcea-fe0580fafbe3,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b4b479-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.518 239969 DEBUG os_vif [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:a8:15,bridge_name='br-int',has_traffic_filtering=True,id=d8b4b479-9869-4e47-bcea-fe0580fafbe3,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b4b479-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.519 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.520 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.520 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.524 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.525 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd8b4b479-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.525 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd8b4b479-98, col_values=(('external_ids', {'iface-id': 'd8b4b479-9869-4e47-bcea-fe0580fafbe3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:a8:15', 'vm-uuid': '1999f11e-e9a2-4442-8baf-d8e51e0c856d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:40 compute-0 NetworkManager[48954]: <info>  [1769444620.5283] manager: (tapd8b4b479-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.527 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.531 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.537 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.538 239969 INFO os_vif [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:a8:15,bridge_name='br-int',has_traffic_filtering=True,id=d8b4b479-9869-4e47-bcea-fe0580fafbe3,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b4b479-98')
Jan 26 16:23:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.597 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.598 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.598 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] No VIF found with MAC fa:16:3e:12:a8:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.599 239969 INFO nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Using config drive
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.636 239969 DEBUG nova.storage.rbd_utils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1143641153' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1673300914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:23:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:40.927 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:40 compute-0 nova_compute[239965]: 2026-01-26 16:23:40.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:40.928 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.016 239969 INFO nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Creating config drive at /var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d/disk.config
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.023 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzyycz5fm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.178 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzyycz5fm" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.207 239969 DEBUG nova.storage.rbd_utils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] rbd image 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.212 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d/disk.config 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.400 239969 DEBUG oslo_concurrency.processutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d/disk.config 1999f11e-e9a2-4442-8baf-d8e51e0c856d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.401 239969 INFO nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Deleting local config drive /var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d/disk.config because it was imported into RBD.
Jan 26 16:23:41 compute-0 kernel: tapd8b4b479-98: entered promiscuous mode
Jan 26 16:23:41 compute-0 NetworkManager[48954]: <info>  [1769444621.4759] manager: (tapd8b4b479-98): new Tun device (/org/freedesktop/NetworkManager/Devices/564)
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.474 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:41 compute-0 ovn_controller[146046]: 2026-01-26T16:23:41Z|01349|binding|INFO|Claiming lport d8b4b479-9869-4e47-bcea-fe0580fafbe3 for this chassis.
Jan 26 16:23:41 compute-0 ovn_controller[146046]: 2026-01-26T16:23:41Z|01350|binding|INFO|d8b4b479-9869-4e47-bcea-fe0580fafbe3: Claiming fa:16:3e:12:a8:15 10.100.0.14 2001:db8::f816:3eff:fe12:a815
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.482 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:a8:15 10.100.0.14 2001:db8::f816:3eff:fe12:a815'], port_security=['fa:16:3e:12:a8:15 10.100.0.14 2001:db8::f816:3eff:fe12:a815'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe12:a815/64', 'neutron:device_id': '1999f11e-e9a2-4442-8baf-d8e51e0c856d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3bbc6da3-4889-49ad-be6a-5c45e912baea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59cf9173-0130-4701-8cd2-ef13c0f4cee0, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d8b4b479-9869-4e47-bcea-fe0580fafbe3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.483 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d8b4b479-9869-4e47-bcea-fe0580fafbe3 in datapath 71a6c2b4-c95b-48d6-bc0d-8a809d49273b bound to our chassis
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.485 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71a6c2b4-c95b-48d6-bc0d-8a809d49273b
Jan 26 16:23:41 compute-0 ovn_controller[146046]: 2026-01-26T16:23:41Z|01351|binding|INFO|Setting lport d8b4b479-9869-4e47-bcea-fe0580fafbe3 ovn-installed in OVS
Jan 26 16:23:41 compute-0 ovn_controller[146046]: 2026-01-26T16:23:41Z|01352|binding|INFO|Setting lport d8b4b479-9869-4e47-bcea-fe0580fafbe3 up in Southbound
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.500 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.504 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.514 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[734e6bc1-2447-49dc-aa18-e1e0163b271e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:41 compute-0 systemd-machined[208061]: New machine qemu-158-instance-00000083.
Jan 26 16:23:41 compute-0 systemd[1]: Started Virtual Machine qemu-158-instance-00000083.
Jan 26 16:23:41 compute-0 systemd-udevd[355326]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.549 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3e349bb6-6abd-41aa-8a09-883d9ea088aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.553 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f06f589d-5ce7-4a4b-8738-6e3960739d4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:41 compute-0 NetworkManager[48954]: <info>  [1769444621.5604] device (tapd8b4b479-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:23:41 compute-0 NetworkManager[48954]: <info>  [1769444621.5611] device (tapd8b4b479-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.583 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[129c3a90-567d-4c75-8c5a-882b835d3fcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.602 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[30f6b6dd-d080-495e-8860-d2720145b1a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71a6c2b4-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:1b:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 6, 'rx_bytes': 1586, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607926, 'reachable_time': 34510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 17, 'inoctets': 1264, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 17, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1264, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 17, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355334, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.620 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[91b56d2c-a218-421f-b5f5-8fc71626c858]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71a6c2b4-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607941, 'tstamp': 607941}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355337, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71a6c2b4-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607945, 'tstamp': 607945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355337, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.622 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71a6c2b4-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.623 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.626 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71a6c2b4-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.626 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.626 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.627 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71a6c2b4-c0, col_values=(('external_ids', {'iface-id': '29a5989d-1915-41bb-983b-b3c73bba2262'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.627 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:41 compute-0 ceph-mon[75140]: pgmap v2189: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 144 op/s
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.768 239969 DEBUG nova.network.neutron [req-3b8cfa60-2674-4438-9312-6d5828e1b81b req-bb1dd190-494e-4df9-a3af-df2458598d00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Updated VIF entry in instance network info cache for port d8b4b479-9869-4e47-bcea-fe0580fafbe3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.769 239969 DEBUG nova.network.neutron [req-3b8cfa60-2674-4438-9312-6d5828e1b81b req-bb1dd190-494e-4df9-a3af-df2458598d00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Updating instance_info_cache with network_info: [{"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.785 239969 DEBUG oslo_concurrency.lockutils [req-3b8cfa60-2674-4438-9312-6d5828e1b81b req-bb1dd190-494e-4df9-a3af-df2458598d00 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.819 239969 DEBUG oslo_concurrency.lockutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "9698a435-3de7-4c8c-9b12-35e22d8d405b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.820 239969 DEBUG oslo_concurrency.lockutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.820 239969 DEBUG oslo_concurrency.lockutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.820 239969 DEBUG oslo_concurrency.lockutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.820 239969 DEBUG oslo_concurrency.lockutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.823 239969 INFO nova.compute.manager [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Terminating instance
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.824 239969 DEBUG nova.compute.manager [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:23:41 compute-0 kernel: tap4eaabdc7-c0 (unregistering): left promiscuous mode
Jan 26 16:23:41 compute-0 NetworkManager[48954]: <info>  [1769444621.8766] device (tap4eaabdc7-c0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:23:41 compute-0 ovn_controller[146046]: 2026-01-26T16:23:41Z|01353|binding|INFO|Releasing lport 4eaabdc7-c0a3-4002-8812-a572315f3666 from this chassis (sb_readonly=0)
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.888 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:41 compute-0 ovn_controller[146046]: 2026-01-26T16:23:41Z|01354|binding|INFO|Setting lport 4eaabdc7-c0a3-4002-8812-a572315f3666 down in Southbound
Jan 26 16:23:41 compute-0 ovn_controller[146046]: 2026-01-26T16:23:41Z|01355|binding|INFO|Removing iface tap4eaabdc7-c0 ovn-installed in OVS
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.890 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.899 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:80:b5:7c 10.100.0.19'], port_security=['fa:16:3e:80:b5:7c 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '9698a435-3de7-4c8c-9b12-35e22d8d405b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c2949377-7667-47c8-b75f-54f940d9a0e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=299c72d0-590b-465a-ba1c-80ce946f37ca, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=4eaabdc7-c0a3-4002-8812-a572315f3666) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.901 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 4eaabdc7-c0a3-4002-8812-a572315f3666 in datapath 86f67c73-31b4-4f89-93c5-3a2f6b391b21 unbound from our chassis
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.903 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 86f67c73-31b4-4f89-93c5-3a2f6b391b21
Jan 26 16:23:41 compute-0 nova_compute[239965]: 2026-01-26 16:23:41.908 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.922 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a2f72f-7709-4e30-ab39-7c2ae4923b3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:41 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000081.scope: Deactivated successfully.
Jan 26 16:23:41 compute-0 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d00000081.scope: Consumed 14.733s CPU time.
Jan 26 16:23:41 compute-0 systemd-machined[208061]: Machine qemu-156-instance-00000081 terminated.
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.964 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7fc1d8c4-389c-4a0d-81cb-cef231ba42ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:41.968 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c79123b7-6505-4ff2-9e11-9436013c0e90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:42.003 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e80cf9f9-ff85-4586-a507-061cf2c786e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:42.030 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7bf5a0e2-35f9-48d7-a140-680bd3d10504]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap86f67c73-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ea:83:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 8, 'rx_bytes': 790, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 8, 'rx_bytes': 790, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 386], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606338, 'reachable_time': 33945, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 7, 'inoctets': 524, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 7, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 524, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 7, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355390, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:42 compute-0 NetworkManager[48954]: <info>  [1769444622.0450] manager: (tap4eaabdc7-c0): new Tun device (/org/freedesktop/NetworkManager/Devices/565)
Jan 26 16:23:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:42.048 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc2042a-a7be-4fe4-b1ab-5c219a91ab04]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap86f67c73-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606351, 'tstamp': 606351}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355393, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tap86f67c73-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606355, 'tstamp': 606355}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355393, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:42.049 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86f67c73-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:42.059 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap86f67c73-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:42.059 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:42.060 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap86f67c73-30, col_values=(('external_ids', {'iface-id': '8154df63-0e6b-4a3f-8b4f-601fcbe2af2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:42.061 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.062 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444622.0620368, 1999f11e-e9a2-4442-8baf-d8e51e0c856d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.062 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] VM Started (Lifecycle Event)
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.066 239969 INFO nova.virt.libvirt.driver [-] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Instance destroyed successfully.
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.066 239969 DEBUG nova.objects.instance [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid 9698a435-3de7-4c8c-9b12-35e22d8d405b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.287 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.288 239969 DEBUG nova.virt.libvirt.vif [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:23:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1800247682',display_name='tempest-TestNetworkBasicOps-server-1800247682',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1800247682',id=129,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOEACH7YklzsdBuWbdkWITsEmcRXlPYbihr7xRwBSUihcxENNKlih1KFxNYArkssWuX7BJ2CfYdz+thoC/jrYHOR6D9Pi/VQcav/0JGduLKBpNRQX/wG2Yvccg4ePvtKPg==',key_name='tempest-TestNetworkBasicOps-1458569563',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:23:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-h90xbs63',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:23:14Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=9698a435-3de7-4c8c-9b12-35e22d8d405b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.288 239969 DEBUG nova.network.os_vif_util [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "4eaabdc7-c0a3-4002-8812-a572315f3666", "address": "fa:16:3e:80:b5:7c", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eaabdc7-c0", "ovs_interfaceid": "4eaabdc7-c0a3-4002-8812-a572315f3666", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.290 239969 DEBUG nova.network.os_vif_util [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:80:b5:7c,bridge_name='br-int',has_traffic_filtering=True,id=4eaabdc7-c0a3-4002-8812-a572315f3666,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eaabdc7-c0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.290 239969 DEBUG os_vif [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:b5:7c,bridge_name='br-int',has_traffic_filtering=True,id=4eaabdc7-c0a3-4002-8812-a572315f3666,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eaabdc7-c0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.294 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.294 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4eaabdc7-c0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.296 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.300 239969 INFO os_vif [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:80:b5:7c,bridge_name='br-int',has_traffic_filtering=True,id=4eaabdc7-c0a3-4002-8812-a572315f3666,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eaabdc7-c0')
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.320 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444622.062115, 1999f11e-e9a2-4442-8baf-d8e51e0c856d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.320 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] VM Paused (Lifecycle Event)
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.341 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.345 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.362 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.571 239969 INFO nova.virt.libvirt.driver [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Deleting instance files /var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b_del
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.572 239969 INFO nova.virt.libvirt.driver [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Deletion of /var/lib/nova/instances/9698a435-3de7-4c8c-9b12-35e22d8d405b_del complete
Jan 26 16:23:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 466 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.633 239969 INFO nova.compute.manager [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Took 0.81 seconds to destroy the instance on the hypervisor.
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.633 239969 DEBUG oslo.service.loopingcall [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.633 239969 DEBUG nova.compute.manager [-] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.633 239969 DEBUG nova.network.neutron [-] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:23:42 compute-0 ceph-mon[75140]: pgmap v2190: 305 pgs: 305 active+clean; 466 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.5 MiB/s wr, 152 op/s
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.902 239969 DEBUG nova.compute.manager [req-dcee2c27-7c34-4c7a-87c3-d1cfabdaef8b req-7c584bdc-e8c5-49b7-bf6a-7b1f282f319a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received event network-vif-plugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.903 239969 DEBUG oslo_concurrency.lockutils [req-dcee2c27-7c34-4c7a-87c3-d1cfabdaef8b req-7c584bdc-e8c5-49b7-bf6a-7b1f282f319a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.903 239969 DEBUG oslo_concurrency.lockutils [req-dcee2c27-7c34-4c7a-87c3-d1cfabdaef8b req-7c584bdc-e8c5-49b7-bf6a-7b1f282f319a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.903 239969 DEBUG oslo_concurrency.lockutils [req-dcee2c27-7c34-4c7a-87c3-d1cfabdaef8b req-7c584bdc-e8c5-49b7-bf6a-7b1f282f319a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.904 239969 DEBUG nova.compute.manager [req-dcee2c27-7c34-4c7a-87c3-d1cfabdaef8b req-7c584bdc-e8c5-49b7-bf6a-7b1f282f319a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Processing event network-vif-plugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.904 239969 DEBUG nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.910 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444622.9094954, 1999f11e-e9a2-4442-8baf-d8e51e0c856d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.910 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] VM Resumed (Lifecycle Event)
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.912 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.915 239969 INFO nova.virt.libvirt.driver [-] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Instance spawned successfully.
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.915 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.932 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.937 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.937 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.938 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.938 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.938 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.939 239969 DEBUG nova.virt.libvirt.driver [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.942 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.974 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.984 239969 DEBUG nova.compute.manager [req-831283bf-6a52-4fdb-9f7e-6b8001fd843b req-b68a383c-6817-4bb0-9975-b4ef62983891 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Received event network-vif-unplugged-4eaabdc7-c0a3-4002-8812-a572315f3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.984 239969 DEBUG oslo_concurrency.lockutils [req-831283bf-6a52-4fdb-9f7e-6b8001fd843b req-b68a383c-6817-4bb0-9975-b4ef62983891 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.984 239969 DEBUG oslo_concurrency.lockutils [req-831283bf-6a52-4fdb-9f7e-6b8001fd843b req-b68a383c-6817-4bb0-9975-b4ef62983891 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.985 239969 DEBUG oslo_concurrency.lockutils [req-831283bf-6a52-4fdb-9f7e-6b8001fd843b req-b68a383c-6817-4bb0-9975-b4ef62983891 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.985 239969 DEBUG nova.compute.manager [req-831283bf-6a52-4fdb-9f7e-6b8001fd843b req-b68a383c-6817-4bb0-9975-b4ef62983891 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] No waiting events found dispatching network-vif-unplugged-4eaabdc7-c0a3-4002-8812-a572315f3666 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:42 compute-0 nova_compute[239965]: 2026-01-26 16:23:42.985 239969 DEBUG nova.compute.manager [req-831283bf-6a52-4fdb-9f7e-6b8001fd843b req-b68a383c-6817-4bb0-9975-b4ef62983891 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Received event network-vif-unplugged-4eaabdc7-c0a3-4002-8812-a572315f3666 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.010 239969 INFO nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Took 7.58 seconds to spawn the instance on the hypervisor.
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.010 239969 DEBUG nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.084 239969 INFO nova.compute.manager [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Took 9.00 seconds to build instance.
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.103 239969 DEBUG oslo_concurrency.lockutils [None req-a9f4c4dd-c2dd-4b01-b2ca-a08982b92e88 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.186 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.361 239969 DEBUG nova.network.neutron [-] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.385 239969 INFO nova.compute.manager [-] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Took 0.75 seconds to deallocate network for instance.
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.430 239969 DEBUG oslo_concurrency.lockutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.431 239969 DEBUG oslo_concurrency.lockutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:43 compute-0 nova_compute[239965]: 2026-01-26 16:23:43.570 239969 DEBUG oslo_concurrency.processutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:23:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/185495882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:44 compute-0 nova_compute[239965]: 2026-01-26 16:23:44.115 239969 DEBUG oslo_concurrency.processutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:44 compute-0 nova_compute[239965]: 2026-01-26 16:23:44.121 239969 DEBUG nova.compute.provider_tree [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:23:44 compute-0 nova_compute[239965]: 2026-01-26 16:23:44.138 239969 DEBUG nova.scheduler.client.report [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:23:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/185495882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:44 compute-0 nova_compute[239965]: 2026-01-26 16:23:44.162 239969 DEBUG oslo_concurrency.lockutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:44 compute-0 nova_compute[239965]: 2026-01-26 16:23:44.185 239969 INFO nova.scheduler.client.report [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance 9698a435-3de7-4c8c-9b12-35e22d8d405b
Jan 26 16:23:44 compute-0 nova_compute[239965]: 2026-01-26 16:23:44.258 239969 DEBUG oslo_concurrency.lockutils [None req-de0d4d2e-cc16-4049-83a8-ec80e9f8712e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.438s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 466 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 257 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 26 16:23:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:44.930 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.015 239969 DEBUG nova.compute.manager [req-fcf2ff42-95c6-4cf8-b911-408f4afae938 req-5ee0d4a7-1aee-41c7-ae49-d3b8a0212282 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received event network-vif-plugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.015 239969 DEBUG oslo_concurrency.lockutils [req-fcf2ff42-95c6-4cf8-b911-408f4afae938 req-5ee0d4a7-1aee-41c7-ae49-d3b8a0212282 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.016 239969 DEBUG oslo_concurrency.lockutils [req-fcf2ff42-95c6-4cf8-b911-408f4afae938 req-5ee0d4a7-1aee-41c7-ae49-d3b8a0212282 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.016 239969 DEBUG oslo_concurrency.lockutils [req-fcf2ff42-95c6-4cf8-b911-408f4afae938 req-5ee0d4a7-1aee-41c7-ae49-d3b8a0212282 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.016 239969 DEBUG nova.compute.manager [req-fcf2ff42-95c6-4cf8-b911-408f4afae938 req-5ee0d4a7-1aee-41c7-ae49-d3b8a0212282 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] No waiting events found dispatching network-vif-plugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.016 239969 WARNING nova.compute.manager [req-fcf2ff42-95c6-4cf8-b911-408f4afae938 req-5ee0d4a7-1aee-41c7-ae49-d3b8a0212282 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received unexpected event network-vif-plugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 for instance with vm_state active and task_state None.
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.016 239969 DEBUG nova.compute.manager [req-fcf2ff42-95c6-4cf8-b911-408f4afae938 req-5ee0d4a7-1aee-41c7-ae49-d3b8a0212282 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Received event network-vif-deleted-4eaabdc7-c0a3-4002-8812-a572315f3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.074 239969 DEBUG nova.compute.manager [req-5ecc9d52-2d37-4c16-92c5-78ab614c1bd2 req-c0cf770f-b728-458d-856e-4fa1f447b683 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Received event network-vif-plugged-4eaabdc7-c0a3-4002-8812-a572315f3666 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.074 239969 DEBUG oslo_concurrency.lockutils [req-5ecc9d52-2d37-4c16-92c5-78ab614c1bd2 req-c0cf770f-b728-458d-856e-4fa1f447b683 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.075 239969 DEBUG oslo_concurrency.lockutils [req-5ecc9d52-2d37-4c16-92c5-78ab614c1bd2 req-c0cf770f-b728-458d-856e-4fa1f447b683 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.075 239969 DEBUG oslo_concurrency.lockutils [req-5ecc9d52-2d37-4c16-92c5-78ab614c1bd2 req-c0cf770f-b728-458d-856e-4fa1f447b683 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "9698a435-3de7-4c8c-9b12-35e22d8d405b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.076 239969 DEBUG nova.compute.manager [req-5ecc9d52-2d37-4c16-92c5-78ab614c1bd2 req-c0cf770f-b728-458d-856e-4fa1f447b683 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] No waiting events found dispatching network-vif-plugged-4eaabdc7-c0a3-4002-8812-a572315f3666 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:45 compute-0 nova_compute[239965]: 2026-01-26 16:23:45.076 239969 WARNING nova.compute.manager [req-5ecc9d52-2d37-4c16-92c5-78ab614c1bd2 req-c0cf770f-b728-458d-856e-4fa1f447b683 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Received unexpected event network-vif-plugged-4eaabdc7-c0a3-4002-8812-a572315f3666 for instance with vm_state deleted and task_state None.
Jan 26 16:23:45 compute-0 ovn_controller[146046]: 2026-01-26T16:23:45Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:32:e4 10.100.0.8
Jan 26 16:23:45 compute-0 ovn_controller[146046]: 2026-01-26T16:23:45Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:32:e4 10.100.0.8
Jan 26 16:23:45 compute-0 ceph-mon[75140]: pgmap v2191: 305 pgs: 305 active+clean; 466 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 257 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.136 239969 DEBUG oslo_concurrency.lockutils [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "interface-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.137 239969 DEBUG oslo_concurrency.lockutils [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "interface-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.163 239969 DEBUG nova.objects.instance [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'flavor' on Instance uuid 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.184 239969 DEBUG nova.virt.libvirt.vif [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:22:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1426087027',display_name='tempest-TestNetworkBasicOps-server-1426087027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1426087027',id=126,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVQvnTTId2F2dZ4CmrqUuSRPgxVpg5Gcg7/WdXsulHhwVIkk74RWE9IZv+pRN7sfUWNe4e/QS34rEQOYJGDOrZn6Q/HpS10QJgjM9+nMxrjR7pUidrziE/wnxgJrIPauA==',key_name='tempest-TestNetworkBasicOps-1612194243',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-058vjbml',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:22:22Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=34442cd1-7fb4-45b4-bd46-9d2cb53d00e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.185 239969 DEBUG nova.network.os_vif_util [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.186 239969 DEBUG nova.network.os_vif_util [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.189 239969 DEBUG nova.virt.libvirt.guest [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.194 239969 DEBUG nova.virt.libvirt.guest [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.198 239969 DEBUG nova.virt.libvirt.driver [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Attempting to detach device tap62f75eb8-8d from instance 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.199 239969 DEBUG nova.virt.libvirt.guest [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] detach device xml: <interface type="ethernet">
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:b8:22:5d"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <target dev="tap62f75eb8-8d"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]: </interface>
Jan 26 16:23:46 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.205 239969 DEBUG nova.virt.libvirt.guest [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.208 239969 DEBUG nova.virt.libvirt.guest [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <name>instance-0000007e</name>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <uuid>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</uuid>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1426087027</nova:name>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:22:52</nova:creationTime>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:port uuid="7b88408c-5847-4417-be2b-c52e6a358bc7">
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:port uuid="62f75eb8-8d58-40a9-bac0-5f5579d1d9c3">
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:23:46 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <resource>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </resource>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <system>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='serial'>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='uuid'>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </system>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <os>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </os>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <features>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </features>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk' index='2'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config' index='1'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:bd:83:22'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target dev='tap7b88408c-58'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:b8:22:5d'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target dev='tap62f75eb8-8d'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='net1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log' append='off'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </target>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log' append='off'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </console>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </graphics>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <video>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </video>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c322,c489</label>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c322,c489</imagelabel>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:23:46 compute-0 nova_compute[239965]: </domain>
Jan 26 16:23:46 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.208 239969 INFO nova.virt.libvirt.driver [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully detached device tap62f75eb8-8d from instance 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 from the persistent domain config.
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.209 239969 DEBUG nova.virt.libvirt.driver [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] (1/8): Attempting to detach device tap62f75eb8-8d with device alias net1 from instance 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.209 239969 DEBUG nova.virt.libvirt.guest [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] detach device xml: <interface type="ethernet">
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <mac address="fa:16:3e:b8:22:5d"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <model type="virtio"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <mtu size="1442"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <target dev="tap62f75eb8-8d"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]: </interface>
Jan 26 16:23:46 compute-0 nova_compute[239965]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Jan 26 16:23:46 compute-0 kernel: tap62f75eb8-8d (unregistering): left promiscuous mode
Jan 26 16:23:46 compute-0 NetworkManager[48954]: <info>  [1769444626.2609] device (tap62f75eb8-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:23:46 compute-0 ovn_controller[146046]: 2026-01-26T16:23:46Z|01356|binding|INFO|Releasing lport 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 from this chassis (sb_readonly=0)
Jan 26 16:23:46 compute-0 ovn_controller[146046]: 2026-01-26T16:23:46Z|01357|binding|INFO|Setting lport 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 down in Southbound
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.266 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:46 compute-0 ovn_controller[146046]: 2026-01-26T16:23:46Z|01358|binding|INFO|Removing iface tap62f75eb8-8d ovn-installed in OVS
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.273 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:22:5d 10.100.0.21', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.21/28', 'neutron:device_id': '34442cd1-7fb4-45b4-bd46-9d2cb53d00e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=299c72d0-590b-465a-ba1c-80ce946f37ca, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.274 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 in datapath 86f67c73-31b4-4f89-93c5-3a2f6b391b21 unbound from our chassis
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.277 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 86f67c73-31b4-4f89-93c5-3a2f6b391b21, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.271 239969 DEBUG nova.virt.libvirt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Received event <DeviceRemovedEvent: 1769444626.2712212, 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.278 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.279 239969 DEBUG nova.virt.libvirt.driver [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Start waiting for the detach event from libvirt for device tap62f75eb8-8d with device alias net1 for instance 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.278 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5ab672-1d38-4135-8464-077d9d6b6a52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.280 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21 namespace which is not needed anymore
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.281 239969 DEBUG nova.virt.libvirt.guest [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.284 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.287 239969 DEBUG nova.virt.libvirt.guest [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <name>instance-0000007e</name>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <uuid>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</uuid>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1426087027</nova:name>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:22:52</nova:creationTime>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:port uuid="7b88408c-5847-4417-be2b-c52e6a358bc7">
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:port uuid="62f75eb8-8d58-40a9-bac0-5f5579d1d9c3">
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.21" ipVersion="4"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:23:46 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <resource>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </resource>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <system>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='serial'>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='uuid'>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </system>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <os>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </os>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <features>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </features>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk' index='2'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config' index='1'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:bd:83:22'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target dev='tap7b88408c-58'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log' append='off'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       </target>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log' append='off'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </console>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </graphics>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <video>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </video>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c322,c489</label>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c322,c489</imagelabel>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:23:46 compute-0 nova_compute[239965]: </domain>
Jan 26 16:23:46 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.287 239969 INFO nova.virt.libvirt.driver [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully detached device tap62f75eb8-8d from instance 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 from the live domain config.
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.288 239969 DEBUG nova.virt.libvirt.vif [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:22:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1426087027',display_name='tempest-TestNetworkBasicOps-server-1426087027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1426087027',id=126,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVQvnTTId2F2dZ4CmrqUuSRPgxVpg5Gcg7/WdXsulHhwVIkk74RWE9IZv+pRN7sfUWNe4e/QS34rEQOYJGDOrZn6Q/HpS10QJgjM9+nMxrjR7pUidrziE/wnxgJrIPauA==',key_name='tempest-TestNetworkBasicOps-1612194243',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-058vjbml',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:22:22Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=34442cd1-7fb4-45b4-bd46-9d2cb53d00e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.289 239969 DEBUG nova.network.os_vif_util [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.290 239969 DEBUG nova.network.os_vif_util [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.290 239969 DEBUG os_vif [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.294 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.295 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62f75eb8-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.301 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.304 239969 INFO os_vif [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d')
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.304 239969 DEBUG nova.virt.libvirt.guest [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1426087027</nova:name>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:23:46</nova:creationTime>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     <nova:port uuid="7b88408c-5847-4417-be2b-c52e6a358bc7">
Jan 26 16:23:46 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:23:46 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:23:46 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:23:46 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:23:46 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 16:23:46 compute-0 neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21[353000]: [NOTICE]   (353004) : haproxy version is 2.8.14-c23fe91
Jan 26 16:23:46 compute-0 neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21[353000]: [NOTICE]   (353004) : path to executable is /usr/sbin/haproxy
Jan 26 16:23:46 compute-0 neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21[353000]: [WARNING]  (353004) : Exiting Master process...
Jan 26 16:23:46 compute-0 neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21[353000]: [WARNING]  (353004) : Exiting Master process...
Jan 26 16:23:46 compute-0 neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21[353000]: [ALERT]    (353004) : Current worker (353006) exited with code 143 (Terminated)
Jan 26 16:23:46 compute-0 neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21[353000]: [WARNING]  (353004) : All workers exited. Exiting... (0)
Jan 26 16:23:46 compute-0 systemd[1]: libpod-37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036.scope: Deactivated successfully.
Jan 26 16:23:46 compute-0 podman[355469]: 2026-01-26 16:23:46.447373425 +0000 UTC m=+0.055981712 container died 37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036-userdata-shm.mount: Deactivated successfully.
Jan 26 16:23:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0225cb35f97205eeb782dc6b1507e46ea8ac1f60c3466debc5263ffa7eacd235-merged.mount: Deactivated successfully.
Jan 26 16:23:46 compute-0 podman[355469]: 2026-01-26 16:23:46.495469032 +0000 UTC m=+0.104077319 container cleanup 37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:23:46 compute-0 systemd[1]: libpod-conmon-37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036.scope: Deactivated successfully.
Jan 26 16:23:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 446 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 2.3 MiB/s wr, 87 op/s
Jan 26 16:23:46 compute-0 podman[355501]: 2026-01-26 16:23:46.595988544 +0000 UTC m=+0.060092643 container remove 37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.602 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[02a5d281-9b39-4950-b918-f5f56a11f0d2]: (4, ('Mon Jan 26 04:23:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21 (37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036)\n37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036\nMon Jan 26 04:23:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21 (37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036)\n37da4ab7fe9d5d7581968ce6794245a8cecc530478736e2b288b37c4e4561036\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.606 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3177a5-9921-4038-a859-90005131980d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.607 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap86f67c73-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.609 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:46 compute-0 kernel: tap86f67c73-30: left promiscuous mode
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.611 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.620 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[68577401-6c72-4c08-9171-3310e451fb6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.632 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.636 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2aefbf6d-7a43-49f8-89e0-447c1c0d0690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.639 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eedd28ec-b06b-4e09-b9d8-741cc27a33b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.659 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[de27fd30-1a21-416c-a134-c671565437e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606329, 'reachable_time': 18248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355516, 'error': None, 'target': 'ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d86f67c73\x2d31b4\x2d4f89\x2d93c5\x2d3a2f6b391b21.mount: Deactivated successfully.
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.665 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-86f67c73-31b4-4f89-93c5-3a2f6b391b21 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:23:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:46.665 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[6520c5ef-bda2-4c88-b0a2-0178c5b12119]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.974 239969 DEBUG oslo_concurrency.lockutils [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.974 239969 DEBUG oslo_concurrency.lockutils [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:46 compute-0 nova_compute[239965]: 2026-01-26 16:23:46.974 239969 DEBUG nova.network.neutron [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:23:47 compute-0 sshd-session[355517]: Invalid user ubuntu from 209.38.206.249 port 51976
Jan 26 16:23:47 compute-0 sshd-session[355517]: Connection closed by invalid user ubuntu 209.38.206.249 port 51976 [preauth]
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.215 239969 DEBUG nova.compute.manager [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-unplugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.216 239969 DEBUG oslo_concurrency.lockutils [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.216 239969 DEBUG oslo_concurrency.lockutils [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.216 239969 DEBUG oslo_concurrency.lockutils [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.216 239969 DEBUG nova.compute.manager [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] No waiting events found dispatching network-vif-unplugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.216 239969 WARNING nova.compute.manager [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received unexpected event network-vif-unplugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 for instance with vm_state active and task_state None.
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.217 239969 DEBUG nova.compute.manager [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-plugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.217 239969 DEBUG oslo_concurrency.lockutils [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.217 239969 DEBUG oslo_concurrency.lockutils [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.217 239969 DEBUG oslo_concurrency.lockutils [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.217 239969 DEBUG nova.compute.manager [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] No waiting events found dispatching network-vif-plugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.218 239969 WARNING nova.compute.manager [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received unexpected event network-vif-plugged-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 for instance with vm_state active and task_state None.
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.218 239969 DEBUG nova.compute.manager [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-deleted-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.218 239969 INFO nova.compute.manager [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Neutron deleted interface 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3; detaching it from the instance and deleting it from the info cache
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.218 239969 DEBUG nova.network.neutron [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.250 239969 DEBUG nova.objects.instance [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'system_metadata' on Instance uuid 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.283 239969 DEBUG nova.objects.instance [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lazy-loading 'flavor' on Instance uuid 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.294 239969 DEBUG nova.compute.manager [req-fe371436-173a-4537-af86-1568ca25873b req-9e5a5b08-8d07-40c9-8af8-f94affe76525 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received event network-changed-d8b4b479-9869-4e47-bcea-fe0580fafbe3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.294 239969 DEBUG nova.compute.manager [req-fe371436-173a-4537-af86-1568ca25873b req-9e5a5b08-8d07-40c9-8af8-f94affe76525 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Refreshing instance network info cache due to event network-changed-d8b4b479-9869-4e47-bcea-fe0580fafbe3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.294 239969 DEBUG oslo_concurrency.lockutils [req-fe371436-173a-4537-af86-1568ca25873b req-9e5a5b08-8d07-40c9-8af8-f94affe76525 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.295 239969 DEBUG oslo_concurrency.lockutils [req-fe371436-173a-4537-af86-1568ca25873b req-9e5a5b08-8d07-40c9-8af8-f94affe76525 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.295 239969 DEBUG nova.network.neutron [req-fe371436-173a-4537-af86-1568ca25873b req-9e5a5b08-8d07-40c9-8af8-f94affe76525 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Refreshing network info cache for port d8b4b479-9869-4e47-bcea-fe0580fafbe3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.315 239969 DEBUG nova.virt.libvirt.vif [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:22:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1426087027',display_name='tempest-TestNetworkBasicOps-server-1426087027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1426087027',id=126,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVQvnTTId2F2dZ4CmrqUuSRPgxVpg5Gcg7/WdXsulHhwVIkk74RWE9IZv+pRN7sfUWNe4e/QS34rEQOYJGDOrZn6Q/HpS10QJgjM9+nMxrjR7pUidrziE/wnxgJrIPauA==',key_name='tempest-TestNetworkBasicOps-1612194243',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-058vjbml',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:22:22Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=34442cd1-7fb4-45b4-bd46-9d2cb53d00e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.315 239969 DEBUG nova.network.os_vif_util [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.316 239969 DEBUG nova.network.os_vif_util [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.326 239969 DEBUG nova.virt.libvirt.guest [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.332 239969 DEBUG nova.virt.libvirt.guest [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <name>instance-0000007e</name>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <uuid>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</uuid>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1426087027</nova:name>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:23:46</nova:creationTime>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:port uuid="7b88408c-5847-4417-be2b-c52e6a358bc7">
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:23:47 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <resource>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </resource>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <system>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='serial'>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='uuid'>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </system>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <os>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </os>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <features>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </features>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk' index='2'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config' index='1'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:bd:83:22'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target dev='tap7b88408c-58'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log' append='off'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </target>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log' append='off'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </console>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </graphics>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <video>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </video>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c322,c489</label>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c322,c489</imagelabel>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:23:47 compute-0 nova_compute[239965]: </domain>
Jan 26 16:23:47 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.333 239969 DEBUG nova.virt.libvirt.guest [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.336 239969 DEBUG nova.virt.libvirt.guest [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:b8:22:5d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap62f75eb8-8d"/></interface>not found in domain: <domain type='kvm' id='153'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <name>instance-0000007e</name>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <uuid>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</uuid>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1426087027</nova:name>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:23:46</nova:creationTime>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:port uuid="7b88408c-5847-4417-be2b-c52e6a358bc7">
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:23:47 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <memory unit='KiB'>131072</memory>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <currentMemory unit='KiB'>131072</currentMemory>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <vcpu placement='static'>1</vcpu>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <resource>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <partition>/machine</partition>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </resource>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <sysinfo type='smbios'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <system>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='manufacturer'>RDO</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='product'>OpenStack Compute</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='serial'>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='uuid'>34442cd1-7fb4-45b4-bd46-9d2cb53d00e1</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <entry name='family'>Virtual Machine</entry>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </system>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <os>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <type arch='x86_64' machine='pc-q35-rhel9.8.0'>hvm</type>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <boot dev='hd'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <smbios mode='sysinfo'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </os>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <features>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <vmcoreinfo state='on'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </features>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <cpu mode='custom' match='exact' check='full'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <model fallback='forbid'>EPYC-Rome</model>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <vendor>AMD</vendor>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='x2apic'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc-deadline'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='hypervisor'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='tsc_adjust'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='spec-ctrl'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='stibp'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='ssbd'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='cmp_legacy'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='overflow-recov'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='succor'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='ibrs'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='amd-ssbd'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='virt-ssbd'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='lbrv'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='tsc-scale'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='vmcb-clean'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='flushbyasid'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='pause-filter'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='pfthreshold'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='svme-addr-chk'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='lfence-always-serializing'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='xsaves'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='svm'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='require' name='topoext'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='npt'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <feature policy='disable' name='nrip-save'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <clock offset='utc'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <timer name='pit' tickpolicy='delay'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <timer name='rtc' tickpolicy='catchup'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <timer name='hpet' present='no'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <on_poweroff>destroy</on_poweroff>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <on_reboot>restart</on_reboot>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <on_crash>destroy</on_crash>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <disk type='network' device='disk'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk' index='2'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target dev='vda' bus='virtio'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='virtio-disk0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <disk type='network' device='cdrom'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <driver name='qemu' type='raw' cache='none'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <auth username='openstack'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <secret type='ceph' uuid='8b9831ad-4b0d-59b4-8860-96eb895a171f'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <source protocol='rbd' name='vms/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_disk.config' index='1'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <host name='192.168.122.100' port='6789'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target dev='sda' bus='sata'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <readonly/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='sata0-0-0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='0' model='pcie-root'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pcie.0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='1' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='1' port='0x10'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='2' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='2' port='0x11'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='3' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='3' port='0x12'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.3'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='4' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='4' port='0x13'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.4'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='5' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='5' port='0x14'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.5'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='6' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='6' port='0x15'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.6'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='7' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='7' port='0x16'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.7'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='8' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='8' port='0x17'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.8'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='9' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='9' port='0x18'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.9'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='10' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='10' port='0x19'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.10'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='11' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='11' port='0x1a'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.11'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='12' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='12' port='0x1b'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.12'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='13' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='13' port='0x1c'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.13'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='14' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='14' port='0x1d'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.14'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='15' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='15' port='0x1e'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.15'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='16' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='16' port='0x1f'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.16'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='17' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='17' port='0x20'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.17'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='18' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='18' port='0x21'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.18'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='19' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='19' port='0x22'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.19'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='20' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='20' port='0x23'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.20'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='21' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='21' port='0x24'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.21'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='22' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='22' port='0x25'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.22'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='23' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='23' port='0x26'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.23'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='24' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='24' port='0x27'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.24'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='25' model='pcie-root-port'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-root-port'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target chassis='25' port='0x28'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.25'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model name='pcie-pci-bridge'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='pci.26'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='usb' index='0' model='piix3-uhci'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='usb'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <controller type='sata' index='0'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='ide'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </controller>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <interface type='ethernet'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <mac address='fa:16:3e:bd:83:22'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target dev='tap7b88408c-58'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model type='virtio'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <driver name='vhost' rx_queue_size='512'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <mtu size='1442'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='net0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <serial type='pty'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log' append='off'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target type='isa-serial' port='0'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:         <model name='isa-serial'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       </target>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <console type='pty' tty='/dev/pts/0'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <source path='/dev/pts/0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <log file='/var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1/console.log' append='off'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <target type='serial' port='0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='serial0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </console>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <input type='tablet' bus='usb'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='input0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='usb' bus='0' port='1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <input type='mouse' bus='ps2'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='input1'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <input type='keyboard' bus='ps2'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='input2'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </input>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <listen type='address' address='::0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </graphics>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <audio id='1' type='none'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <video>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <model type='virtio' heads='1' primary='yes'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='video0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </video>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <watchdog model='itco' action='reset'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='watchdog0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </watchdog>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <memballoon model='virtio'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <stats period='10'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='balloon0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <rng model='virtio'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <backend model='random'>/dev/urandom</backend>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <alias name='rng0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <label>system_u:system_r:svirt_t:s0:c322,c489</label>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c322,c489</imagelabel>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <label>+107:+107</label>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <imagelabel>+107:+107</imagelabel>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </seclabel>
Jan 26 16:23:47 compute-0 nova_compute[239965]: </domain>
Jan 26 16:23:47 compute-0 nova_compute[239965]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.336 239969 WARNING nova.virt.libvirt.driver [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Detaching interface fa:16:3e:b8:22:5d failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap62f75eb8-8d' not found.
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.337 239969 DEBUG nova.virt.libvirt.vif [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:22:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1426087027',display_name='tempest-TestNetworkBasicOps-server-1426087027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1426087027',id=126,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVQvnTTId2F2dZ4CmrqUuSRPgxVpg5Gcg7/WdXsulHhwVIkk74RWE9IZv+pRN7sfUWNe4e/QS34rEQOYJGDOrZn6Q/HpS10QJgjM9+nMxrjR7pUidrziE/wnxgJrIPauA==',key_name='tempest-TestNetworkBasicOps-1612194243',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-058vjbml',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:22:22Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=34442cd1-7fb4-45b4-bd46-9d2cb53d00e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.337 239969 DEBUG nova.network.os_vif_util [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converting VIF {"id": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "address": "fa:16:3e:b8:22:5d", "network": {"id": "86f67c73-31b4-4f89-93c5-3a2f6b391b21", "bridge": "br-int", "label": "tempest-network-smoke--1375352961", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.21", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62f75eb8-8d", "ovs_interfaceid": "62f75eb8-8d58-40a9-bac0-5f5579d1d9c3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.338 239969 DEBUG nova.network.os_vif_util [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.338 239969 DEBUG os_vif [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.339 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.340 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap62f75eb8-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.340 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.342 239969 INFO os_vif [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:22:5d,bridge_name='br-int',has_traffic_filtering=True,id=62f75eb8-8d58-40a9-bac0-5f5579d1d9c3,network=Network(86f67c73-31b4-4f89-93c5-3a2f6b391b21),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62f75eb8-8d')
Jan 26 16:23:47 compute-0 nova_compute[239965]: 2026-01-26 16:23:47.342 239969 DEBUG nova.virt.libvirt.guest [req-f7a887be-105e-4af8-b2c9-8f8d7cafb15b req-2a4abce9-1040-4cec-9966-0d1109aa1b72 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:name>tempest-TestNetworkBasicOps-server-1426087027</nova:name>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:creationTime>2026-01-26 16:23:47</nova:creationTime>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:flavor name="m1.nano">
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:memory>128</nova:memory>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:disk>1</nova:disk>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:swap>0</nova:swap>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:vcpus>1</nova:vcpus>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </nova:flavor>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:owner>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </nova:owner>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   <nova:ports>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     <nova:port uuid="7b88408c-5847-4417-be2b-c52e6a358bc7">
Jan 26 16:23:47 compute-0 nova_compute[239965]:       <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:23:47 compute-0 nova_compute[239965]:     </nova:port>
Jan 26 16:23:47 compute-0 nova_compute[239965]:   </nova:ports>
Jan 26 16:23:47 compute-0 nova_compute[239965]: </nova:instance>
Jan 26 16:23:47 compute-0 nova_compute[239965]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Jan 26 16:23:47 compute-0 sshd-session[355519]: Connection closed by authenticating user root 209.38.206.249 port 51982 [preauth]
Jan 26 16:23:47 compute-0 ceph-mon[75140]: pgmap v2192: 305 pgs: 305 active+clean; 446 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 777 KiB/s rd, 2.3 MiB/s wr, 87 op/s
Jan 26 16:23:48 compute-0 nova_compute[239965]: 2026-01-26 16:23:48.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 450 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 192 op/s
Jan 26 16:23:48 compute-0 ceph-mon[75140]: pgmap v2193: 305 pgs: 305 active+clean; 450 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.6 MiB/s wr, 192 op/s
Jan 26 16:23:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:23:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4283486631' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:23:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:23:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4283486631' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:23:48 compute-0 nova_compute[239965]: 2026-01-26 16:23:48.980 239969 INFO nova.network.neutron [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Port 62f75eb8-8d58-40a9-bac0-5f5579d1d9c3 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 26 16:23:48 compute-0 nova_compute[239965]: 2026-01-26 16:23:48.980 239969 DEBUG nova.network.neutron [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.000 239969 DEBUG oslo_concurrency.lockutils [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.031 239969 DEBUG oslo_concurrency.lockutils [None req-676d64c0-5b2d-4cea-8e72-3284f36b0e42 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "interface-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-62f75eb8-8d58-40a9-bac0-5f5579d1d9c3" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:49 compute-0 ovn_controller[146046]: 2026-01-26T16:23:49Z|01359|binding|INFO|Releasing lport 29a5989d-1915-41bb-983b-b3c73bba2262 from this chassis (sb_readonly=0)
Jan 26 16:23:49 compute-0 ovn_controller[146046]: 2026-01-26T16:23:49Z|01360|binding|INFO|Releasing lport ad3f6d2f-d76e-421c-82c8-5fe4525b7056 from this chassis (sb_readonly=0)
Jan 26 16:23:49 compute-0 ovn_controller[146046]: 2026-01-26T16:23:49Z|01361|binding|INFO|Releasing lport 9cc9437d-4445-4fbb-951e-2360a1081c9c from this chassis (sb_readonly=0)
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.096 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0034003744462570845 of space, bias 1.0, pg target 1.0201123338771254 quantized to 32 (current 32)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001018411225000148 of space, bias 1.0, pg target 0.30450495627504426 quantized to 32 (current 32)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.043726596234871e-07 of space, bias 4.0, pg target 0.0008424297009096905 quantized to 16 (current 16)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 26 16:23:49 compute-0 sshd-session[355521]: Connection closed by authenticating user root 209.38.206.249 port 51992 [preauth]
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.322 239969 DEBUG nova.network.neutron [req-fe371436-173a-4537-af86-1568ca25873b req-9e5a5b08-8d07-40c9-8af8-f94affe76525 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Updated VIF entry in instance network info cache for port d8b4b479-9869-4e47-bcea-fe0580fafbe3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.323 239969 DEBUG nova.network.neutron [req-fe371436-173a-4537-af86-1568ca25873b req-9e5a5b08-8d07-40c9-8af8-f94affe76525 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Updating instance_info_cache with network_info: [{"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.352 239969 DEBUG oslo_concurrency.lockutils [req-fe371436-173a-4537-af86-1568ca25873b req-9e5a5b08-8d07-40c9-8af8-f94affe76525 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4283486631' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:23:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4283486631' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:23:49 compute-0 sshd-session[355523]: Invalid user springboot from 209.38.206.249 port 35046
Jan 26 16:23:49 compute-0 sshd-session[355523]: Connection closed by invalid user springboot 209.38.206.249 port 35046 [preauth]
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.907 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.974 239969 DEBUG nova.compute.manager [req-fd314520-f10f-48fc-8a53-bb92cc4bcb48 req-fbd80796-5764-49cd-9fe1-be8d16bf8f41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-changed-7b88408c-5847-4417-be2b-c52e6a358bc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.975 239969 DEBUG nova.compute.manager [req-fd314520-f10f-48fc-8a53-bb92cc4bcb48 req-fbd80796-5764-49cd-9fe1-be8d16bf8f41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing instance network info cache due to event network-changed-7b88408c-5847-4417-be2b-c52e6a358bc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.975 239969 DEBUG oslo_concurrency.lockutils [req-fd314520-f10f-48fc-8a53-bb92cc4bcb48 req-fbd80796-5764-49cd-9fe1-be8d16bf8f41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.976 239969 DEBUG oslo_concurrency.lockutils [req-fd314520-f10f-48fc-8a53-bb92cc4bcb48 req-fbd80796-5764-49cd-9fe1-be8d16bf8f41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:49 compute-0 nova_compute[239965]: 2026-01-26 16:23:49.976 239969 DEBUG nova.network.neutron [req-fd314520-f10f-48fc-8a53-bb92cc4bcb48 req-fbd80796-5764-49cd-9fe1-be8d16bf8f41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Refreshing network info cache for port 7b88408c-5847-4417-be2b-c52e6a358bc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.025 239969 DEBUG oslo_concurrency.lockutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.026 239969 DEBUG oslo_concurrency.lockutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.026 239969 DEBUG oslo_concurrency.lockutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.026 239969 DEBUG oslo_concurrency.lockutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.026 239969 DEBUG oslo_concurrency.lockutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.027 239969 INFO nova.compute.manager [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Terminating instance
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.028 239969 DEBUG nova.compute.manager [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:23:50 compute-0 kernel: tap7b88408c-58 (unregistering): left promiscuous mode
Jan 26 16:23:50 compute-0 NetworkManager[48954]: <info>  [1769444630.0946] device (tap7b88408c-58): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:23:50 compute-0 ovn_controller[146046]: 2026-01-26T16:23:50Z|01362|binding|INFO|Releasing lport 7b88408c-5847-4417-be2b-c52e6a358bc7 from this chassis (sb_readonly=0)
Jan 26 16:23:50 compute-0 ovn_controller[146046]: 2026-01-26T16:23:50Z|01363|binding|INFO|Setting lport 7b88408c-5847-4417-be2b-c52e6a358bc7 down in Southbound
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.107 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 ovn_controller[146046]: 2026-01-26T16:23:50Z|01364|binding|INFO|Removing iface tap7b88408c-58 ovn-installed in OVS
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.110 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.121 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bd:83:22 10.100.0.7'], port_security=['fa:16:3e:bd:83:22 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '34442cd1-7fb4-45b4-bd46-9d2cb53d00e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-167a489e-5d74-43ba-9aa7-606465815c37', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd77bf471-b7d2-4299-a894-5b23c9253b59', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e97a4241-36cb-4e28-a469-e73b5fed0754, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7b88408c-5847-4417-be2b-c52e6a358bc7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.125 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7b88408c-5847-4417-be2b-c52e6a358bc7 in datapath 167a489e-5d74-43ba-9aa7-606465815c37 unbound from our chassis
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.129 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 167a489e-5d74-43ba-9aa7-606465815c37, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.130 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[818a8a90-3e19-4996-9625-cedfde26874c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.131 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37 namespace which is not needed anymore
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Jan 26 16:23:50 compute-0 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007e.scope: Consumed 17.900s CPU time.
Jan 26 16:23:50 compute-0 systemd-machined[208061]: Machine qemu-153-instance-0000007e terminated.
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.261 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.269 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.274 239969 INFO nova.virt.libvirt.driver [-] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Instance destroyed successfully.
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.275 239969 DEBUG nova.objects.instance [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:50 compute-0 neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37[352258]: [NOTICE]   (352265) : haproxy version is 2.8.14-c23fe91
Jan 26 16:23:50 compute-0 neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37[352258]: [NOTICE]   (352265) : path to executable is /usr/sbin/haproxy
Jan 26 16:23:50 compute-0 neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37[352258]: [WARNING]  (352265) : Exiting Master process...
Jan 26 16:23:50 compute-0 neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37[352258]: [WARNING]  (352265) : Exiting Master process...
Jan 26 16:23:50 compute-0 neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37[352258]: [ALERT]    (352265) : Current worker (352276) exited with code 143 (Terminated)
Jan 26 16:23:50 compute-0 neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37[352258]: [WARNING]  (352265) : All workers exited. Exiting... (0)
Jan 26 16:23:50 compute-0 systemd[1]: libpod-ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374.scope: Deactivated successfully.
Jan 26 16:23:50 compute-0 podman[355550]: 2026-01-26 16:23:50.292330757 +0000 UTC m=+0.054550867 container died ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.303 239969 DEBUG nova.virt.libvirt.vif [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:22:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1426087027',display_name='tempest-TestNetworkBasicOps-server-1426087027',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1426087027',id=126,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGVQvnTTId2F2dZ4CmrqUuSRPgxVpg5Gcg7/WdXsulHhwVIkk74RWE9IZv+pRN7sfUWNe4e/QS34rEQOYJGDOrZn6Q/HpS10QJgjM9+nMxrjR7pUidrziE/wnxgJrIPauA==',key_name='tempest-TestNetworkBasicOps-1612194243',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:22:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-058vjbml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:22:22Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=34442cd1-7fb4-45b4-bd46-9d2cb53d00e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.304 239969 DEBUG nova.network.os_vif_util [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.305 239969 DEBUG nova.network.os_vif_util [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bd:83:22,bridge_name='br-int',has_traffic_filtering=True,id=7b88408c-5847-4417-be2b-c52e6a358bc7,network=Network(167a489e-5d74-43ba-9aa7-606465815c37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b88408c-58') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.305 239969 DEBUG os_vif [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:83:22,bridge_name='br-int',has_traffic_filtering=True,id=7b88408c-5847-4417-be2b-c52e6a358bc7,network=Network(167a489e-5d74-43ba-9aa7-606465815c37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b88408c-58') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.307 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.308 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7b88408c-58, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.313 239969 INFO os_vif [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bd:83:22,bridge_name='br-int',has_traffic_filtering=True,id=7b88408c-5847-4417-be2b-c52e6a358bc7,network=Network(167a489e-5d74-43ba-9aa7-606465815c37),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7b88408c-58')
Jan 26 16:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374-userdata-shm.mount: Deactivated successfully.
Jan 26 16:23:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-3dc8f555cec561a43c97650024625225cb134649199573a0d486592d845b5aa9-merged.mount: Deactivated successfully.
Jan 26 16:23:50 compute-0 podman[355550]: 2026-01-26 16:23:50.344041084 +0000 UTC m=+0.106261194 container cleanup ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:23:50 compute-0 sshd-session[355525]: Invalid user fa from 209.38.206.249 port 35054
Jan 26 16:23:50 compute-0 systemd[1]: libpod-conmon-ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374.scope: Deactivated successfully.
Jan 26 16:23:50 compute-0 podman[355604]: 2026-01-26 16:23:50.424100904 +0000 UTC m=+0.049820191 container remove ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.431 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7858d083-7d0d-42a7-ab90-d57e29d5fb4f]: (4, ('Mon Jan 26 04:23:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37 (ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374)\nee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374\nMon Jan 26 04:23:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37 (ee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374)\nee305591ee2c4dfc9b1fa24c0c712e56c0194b4a74032c9ece7c72037060e374\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.433 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2788347f-a843-4a02-a0c0-fa68a7fcd17d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.434 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap167a489e-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.437 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 kernel: tap167a489e-50: left promiscuous mode
Jan 26 16:23:50 compute-0 sshd-session[355525]: Connection closed by invalid user fa 209.38.206.249 port 35054 [preauth]
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.460 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.462 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[24f38c13-4667-449c-9f8a-daa57dfd6a78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.474 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8b79402c-97a8-45bd-94b2-9013713b4b49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.475 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c661d9a0-f9e5-4c98-9bc4-cb07561723bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.498 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9d762bf5-cba9-438a-92e5-b28a4b041fb3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 603119, 'reachable_time': 41083, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355620, 'error': None, 'target': 'ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d167a489e\x2d5d74\x2d43ba\x2d9aa7\x2d606465815c37.mount: Deactivated successfully.
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.504 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-167a489e-5d74-43ba-9aa7-606465815c37 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:23:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:50.504 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[7743dd75-cbc5-4a6d-9697-0fe5ccb53fa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 450 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.582 239969 INFO nova.virt.libvirt.driver [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Deleting instance files /var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_del
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.583 239969 INFO nova.virt.libvirt.driver [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Deletion of /var/lib/nova/instances/34442cd1-7fb4-45b4-bd46-9d2cb53d00e1_del complete
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.644 239969 INFO nova.compute.manager [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.645 239969 DEBUG oslo.service.loopingcall [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.645 239969 DEBUG nova.compute.manager [-] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:23:50 compute-0 nova_compute[239965]: 2026-01-26 16:23:50.645 239969 DEBUG nova.network.neutron [-] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:23:50 compute-0 ceph-mon[75140]: pgmap v2194: 305 pgs: 305 active+clean; 450 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 166 op/s
Jan 26 16:23:51 compute-0 nova_compute[239965]: 2026-01-26 16:23:51.271 239969 DEBUG nova.network.neutron [-] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:51 compute-0 nova_compute[239965]: 2026-01-26 16:23:51.315 239969 INFO nova.compute.manager [-] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Took 0.67 seconds to deallocate network for instance.
Jan 26 16:23:51 compute-0 nova_compute[239965]: 2026-01-26 16:23:51.347 239969 DEBUG nova.compute.manager [req-21f6c8bb-8b48-412c-809e-991498379ff7 req-d62f6e2f-bc9e-45d5-aaeb-55f3e29ff5f9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-deleted-7b88408c-5847-4417-be2b-c52e6a358bc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:51 compute-0 nova_compute[239965]: 2026-01-26 16:23:51.363 239969 DEBUG oslo_concurrency.lockutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:51 compute-0 nova_compute[239965]: 2026-01-26 16:23:51.363 239969 DEBUG oslo_concurrency.lockutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:51 compute-0 nova_compute[239965]: 2026-01-26 16:23:51.489 239969 DEBUG oslo_concurrency.processutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:51 compute-0 nova_compute[239965]: 2026-01-26 16:23:51.536 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:23:51 compute-0 nova_compute[239965]: 2026-01-26 16:23:51.624 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:23:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3371335906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.098 239969 DEBUG oslo_concurrency.processutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.106 239969 DEBUG nova.compute.provider_tree [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.124 239969 DEBUG nova.compute.manager [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-unplugged-7b88408c-5847-4417-be2b-c52e6a358bc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.125 239969 DEBUG oslo_concurrency.lockutils [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.125 239969 DEBUG oslo_concurrency.lockutils [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.126 239969 DEBUG oslo_concurrency.lockutils [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.126 239969 DEBUG nova.compute.manager [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] No waiting events found dispatching network-vif-unplugged-7b88408c-5847-4417-be2b-c52e6a358bc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.127 239969 WARNING nova.compute.manager [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received unexpected event network-vif-unplugged-7b88408c-5847-4417-be2b-c52e6a358bc7 for instance with vm_state deleted and task_state None.
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.127 239969 DEBUG nova.compute.manager [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received event network-vif-plugged-7b88408c-5847-4417-be2b-c52e6a358bc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.127 239969 DEBUG oslo_concurrency.lockutils [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.128 239969 DEBUG oslo_concurrency.lockutils [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.128 239969 DEBUG oslo_concurrency.lockutils [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.128 239969 DEBUG nova.compute.manager [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] No waiting events found dispatching network-vif-plugged-7b88408c-5847-4417-be2b-c52e6a358bc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.129 239969 WARNING nova.compute.manager [req-5143c32c-1d86-435b-8889-1d2e53890ed3 req-48865b85-9382-4006-b03c-320f017dfe3c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Received unexpected event network-vif-plugged-7b88408c-5847-4417-be2b-c52e6a358bc7 for instance with vm_state deleted and task_state None.
Jan 26 16:23:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3371335906' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.132 239969 DEBUG nova.scheduler.client.report [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.156 239969 DEBUG oslo_concurrency.lockutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.185 239969 DEBUG oslo_concurrency.lockutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "3a113ae2-e017-46ee-b5d9-111ff761618b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.186 239969 DEBUG oslo_concurrency.lockutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.187 239969 DEBUG oslo_concurrency.lockutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.187 239969 DEBUG oslo_concurrency.lockutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.188 239969 DEBUG oslo_concurrency.lockutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.190 239969 INFO nova.compute.manager [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Terminating instance
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.193 239969 DEBUG nova.compute.manager [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.196 239969 INFO nova.scheduler.client.report [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1
Jan 26 16:23:52 compute-0 kernel: tap51e9946a-d5 (unregistering): left promiscuous mode
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.248 239969 DEBUG nova.network.neutron [req-fd314520-f10f-48fc-8a53-bb92cc4bcb48 req-fbd80796-5764-49cd-9fe1-be8d16bf8f41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updated VIF entry in instance network info cache for port 7b88408c-5847-4417-be2b-c52e6a358bc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:23:52 compute-0 NetworkManager[48954]: <info>  [1769444632.2496] device (tap51e9946a-d5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.249 239969 DEBUG nova.network.neutron [req-fd314520-f10f-48fc-8a53-bb92cc4bcb48 req-fbd80796-5764-49cd-9fe1-be8d16bf8f41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Updating instance_info_cache with network_info: [{"id": "7b88408c-5847-4417-be2b-c52e6a358bc7", "address": "fa:16:3e:bd:83:22", "network": {"id": "167a489e-5d74-43ba-9aa7-606465815c37", "bridge": "br-int", "label": "tempest-network-smoke--1998497236", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7b88408c-58", "ovs_interfaceid": "7b88408c-5847-4417-be2b-c52e6a358bc7", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:52 compute-0 ovn_controller[146046]: 2026-01-26T16:23:52Z|01365|binding|INFO|Releasing lport 51e9946a-d549-44e6-b844-37a0de1b1244 from this chassis (sb_readonly=0)
Jan 26 16:23:52 compute-0 ovn_controller[146046]: 2026-01-26T16:23:52Z|01366|binding|INFO|Setting lport 51e9946a-d549-44e6-b844-37a0de1b1244 down in Southbound
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.298 239969 DEBUG oslo_concurrency.lockutils [None req-3294e0b0-7c4e-43d3-a41c-3060bba87f14 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:52 compute-0 ovn_controller[146046]: 2026-01-26T16:23:52Z|01367|binding|INFO|Removing iface tap51e9946a-d5 ovn-installed in OVS
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.299 239969 DEBUG oslo_concurrency.lockutils [req-fd314520-f10f-48fc-8a53-bb92cc4bcb48 req-fbd80796-5764-49cd-9fe1-be8d16bf8f41 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-34442cd1-7fb4-45b4-bd46-9d2cb53d00e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.300 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.304 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:32:e4 10.100.0.8'], port_security=['fa:16:3e:e6:32:e4 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '3a113ae2-e017-46ee-b5d9-111ff761618b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bc91585-31b9-47fa-8cd0-03064511558e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0a3a9fe-99c5-4bfe-a92f-e1de2e2a1727', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7379a9d2-4570-48f3-bfe7-5857500f5f77, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=51e9946a-d549-44e6-b844-37a0de1b1244) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.306 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 51e9946a-d549-44e6-b844-37a0de1b1244 in datapath 9bc91585-31b9-47fa-8cd0-03064511558e unbound from our chassis
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.307 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9bc91585-31b9-47fa-8cd0-03064511558e
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.321 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.325 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[49a3ce3d-2586-4c3a-b6d4-0985a5e70009]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.355 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[160a89a6-49cd-4d0e-b9e4-76b35eb0b210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.358 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[85321c7a-93a4-4606-ae7b-6e05269952e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:52 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000082.scope: Deactivated successfully.
Jan 26 16:23:52 compute-0 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d00000082.scope: Consumed 17.096s CPU time.
Jan 26 16:23:52 compute-0 systemd-machined[208061]: Machine qemu-157-instance-00000082 terminated.
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.392 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f2165088-4301-46e0-aa18-4859696fb3b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.408 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1cec35-504e-412a-8d4d-636339290803]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9bc91585-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:67:22'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 384], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606129, 'reachable_time': 39464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355652, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.425 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.434 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.436 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e4aad2d8-731c-4fae-ae1e-40fb91a91ffe]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9bc91585-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606143, 'tstamp': 606143}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355655, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9bc91585-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 606146, 'tstamp': 606146}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355655, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.437 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bc91585-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.438 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.443 239969 INFO nova.virt.libvirt.driver [-] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Instance destroyed successfully.
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.444 239969 DEBUG nova.objects.instance [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'resources' on Instance uuid 3a113ae2-e017-46ee-b5d9-111ff761618b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.448 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.448 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9bc91585-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.449 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.449 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9bc91585-30, col_values=(('external_ids', {'iface-id': 'ad3f6d2f-d76e-421c-82c8-5fe4525b7056'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:52.449 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.465 239969 DEBUG nova.virt.libvirt.vif [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:23:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-0-474580322',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-0-474580322',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-gen',id=130,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaGoHE+Y8ji3+JCDO/bDUoDInaraCUH4QlGo7jTOAJTsSMKwIPcdLhQoPl5Hxa2VpL5egGSZOv5v3mz9KbIjWDatj77IcRZ2MRFQE+vsAUPEwBUyrw2qpYszZfZz2+J9w==',key_name='tempest-TestSecurityGroupsBasicOps-326396829',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:23:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-3546p30a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:23:31Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=3a113ae2-e017-46ee-b5d9-111ff761618b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51e9946a-d549-44e6-b844-37a0de1b1244", "address": "fa:16:3e:e6:32:e4", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e9946a-d5", "ovs_interfaceid": "51e9946a-d549-44e6-b844-37a0de1b1244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.465 239969 DEBUG nova.network.os_vif_util [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "51e9946a-d549-44e6-b844-37a0de1b1244", "address": "fa:16:3e:e6:32:e4", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e9946a-d5", "ovs_interfaceid": "51e9946a-d549-44e6-b844-37a0de1b1244", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.466 239969 DEBUG nova.network.os_vif_util [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:32:e4,bridge_name='br-int',has_traffic_filtering=True,id=51e9946a-d549-44e6-b844-37a0de1b1244,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e9946a-d5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.466 239969 DEBUG os_vif [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:32:e4,bridge_name='br-int',has_traffic_filtering=True,id=51e9946a-d549-44e6-b844-37a0de1b1244,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e9946a-d5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.468 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.469 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51e9946a-d5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.470 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.471 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.475 239969 INFO os_vif [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:32:e4,bridge_name='br-int',has_traffic_filtering=True,id=51e9946a-d549-44e6-b844-37a0de1b1244,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e9946a-d5')
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.513 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:23:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 195 op/s
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.800 239969 INFO nova.virt.libvirt.driver [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Deleting instance files /var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b_del
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.801 239969 INFO nova.virt.libvirt.driver [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Deletion of /var/lib/nova/instances/3a113ae2-e017-46ee-b5d9-111ff761618b_del complete
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.871 239969 INFO nova.compute.manager [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.871 239969 DEBUG oslo.service.loopingcall [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.872 239969 DEBUG nova.compute.manager [-] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:23:52 compute-0 nova_compute[239965]: 2026-01-26 16:23:52.872 239969 DEBUG nova.network.neutron [-] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:23:53 compute-0 ceph-mon[75140]: pgmap v2195: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 195 op/s
Jan 26 16:23:53 compute-0 nova_compute[239965]: 2026-01-26 16:23:53.192 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:53 compute-0 nova_compute[239965]: 2026-01-26 16:23:53.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:23:53 compute-0 nova_compute[239965]: 2026-01-26 16:23:53.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:23:53 compute-0 nova_compute[239965]: 2026-01-26 16:23:53.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:23:53 compute-0 nova_compute[239965]: 2026-01-26 16:23:53.542 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.066 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.066 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.066 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.066 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.467 239969 DEBUG nova.compute.manager [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Received event network-vif-unplugged-51e9946a-d549-44e6-b844-37a0de1b1244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.467 239969 DEBUG oslo_concurrency.lockutils [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.468 239969 DEBUG oslo_concurrency.lockutils [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.468 239969 DEBUG oslo_concurrency.lockutils [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.468 239969 DEBUG nova.compute.manager [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] No waiting events found dispatching network-vif-unplugged-51e9946a-d549-44e6-b844-37a0de1b1244 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.468 239969 DEBUG nova.compute.manager [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Received event network-vif-unplugged-51e9946a-d549-44e6-b844-37a0de1b1244 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.469 239969 DEBUG nova.compute.manager [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Received event network-vif-plugged-51e9946a-d549-44e6-b844-37a0de1b1244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.469 239969 DEBUG oslo_concurrency.lockutils [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.469 239969 DEBUG oslo_concurrency.lockutils [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.470 239969 DEBUG oslo_concurrency.lockutils [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.470 239969 DEBUG nova.compute.manager [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] No waiting events found dispatching network-vif-plugged-51e9946a-d549-44e6-b844-37a0de1b1244 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.470 239969 WARNING nova.compute.manager [req-40e1ed00-5e70-4775-82d7-14f48076667d req-f4b1b73c-efcd-45c5-9a1a-228ac8966bcf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Received unexpected event network-vif-plugged-51e9946a-d549-44e6-b844-37a0de1b1244 for instance with vm_state active and task_state deleting.
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.488 239969 DEBUG nova.network.neutron [-] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.522 239969 INFO nova.compute.manager [-] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Took 1.65 seconds to deallocate network for instance.
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.571 239969 DEBUG oslo_concurrency.lockutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.572 239969 DEBUG oslo_concurrency.lockutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Jan 26 16:23:54 compute-0 nova_compute[239965]: 2026-01-26 16:23:54.697 239969 DEBUG oslo_concurrency.processutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:23:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:23:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1595253138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:55 compute-0 nova_compute[239965]: 2026-01-26 16:23:55.365 239969 DEBUG oslo_concurrency.processutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.669s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:23:55 compute-0 nova_compute[239965]: 2026-01-26 16:23:55.372 239969 DEBUG nova.compute.provider_tree [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:23:55 compute-0 nova_compute[239965]: 2026-01-26 16:23:55.395 239969 DEBUG nova.scheduler.client.report [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:23:55 compute-0 nova_compute[239965]: 2026-01-26 16:23:55.423 239969 DEBUG oslo_concurrency.lockutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:55 compute-0 nova_compute[239965]: 2026-01-26 16:23:55.457 239969 INFO nova.scheduler.client.report [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Deleted allocations for instance 3a113ae2-e017-46ee-b5d9-111ff761618b
Jan 26 16:23:55 compute-0 nova_compute[239965]: 2026-01-26 16:23:55.538 239969 DEBUG oslo_concurrency.lockutils [None req-5cb8e29c-6bad-4ea5-9e59-4a5ccfe88b99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "3a113ae2-e017-46ee-b5d9-111ff761618b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:55 compute-0 ceph-mon[75140]: pgmap v2196: 305 pgs: 305 active+clean; 372 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 187 op/s
Jan 26 16:23:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1595253138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:23:56 compute-0 ovn_controller[146046]: 2026-01-26T16:23:56Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:a8:15 10.100.0.14
Jan 26 16:23:56 compute-0 ovn_controller[146046]: 2026-01-26T16:23:56Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:a8:15 10.100.0.14
Jan 26 16:23:56 compute-0 nova_compute[239965]: 2026-01-26 16:23:56.300 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updating instance_info_cache with network_info: [{"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:56 compute-0 nova_compute[239965]: 2026-01-26 16:23:56.317 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:23:56 compute-0 nova_compute[239965]: 2026-01-26 16:23:56.317 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:23:56 compute-0 nova_compute[239965]: 2026-01-26 16:23:56.566 239969 DEBUG nova.compute.manager [req-cd24779d-f9a2-4c9c-ab41-7b345242d483 req-804fbde2-5ba0-48b6-ae35-df43d7821016 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Received event network-vif-deleted-51e9946a-d549-44e6-b844-37a0de1b1244 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 363 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 209 op/s
Jan 26 16:23:56 compute-0 ceph-mon[75140]: pgmap v2197: 305 pgs: 305 active+clean; 363 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.9 MiB/s wr, 209 op/s
Jan 26 16:23:57 compute-0 nova_compute[239965]: 2026-01-26 16:23:57.059 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444622.0573301, 9698a435-3de7-4c8c-9b12-35e22d8d405b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:23:57 compute-0 nova_compute[239965]: 2026-01-26 16:23:57.060 239969 INFO nova.compute.manager [-] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] VM Stopped (Lifecycle Event)
Jan 26 16:23:57 compute-0 nova_compute[239965]: 2026-01-26 16:23:57.076 239969 DEBUG nova.compute.manager [None req-986ef781-bb3d-43cf-803f-e51a25b9f9b8 - - - - - -] [instance: 9698a435-3de7-4c8c-9b12-35e22d8d405b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:23:57 compute-0 ovn_controller[146046]: 2026-01-26T16:23:57Z|01368|binding|INFO|Releasing lport 29a5989d-1915-41bb-983b-b3c73bba2262 from this chassis (sb_readonly=0)
Jan 26 16:23:57 compute-0 ovn_controller[146046]: 2026-01-26T16:23:57Z|01369|binding|INFO|Releasing lport ad3f6d2f-d76e-421c-82c8-5fe4525b7056 from this chassis (sb_readonly=0)
Jan 26 16:23:57 compute-0 nova_compute[239965]: 2026-01-26 16:23:57.346 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:57 compute-0 nova_compute[239965]: 2026-01-26 16:23:57.471 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:57 compute-0 nova_compute[239965]: 2026-01-26 16:23:57.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:23:57 compute-0 nova_compute[239965]: 2026-01-26 16:23:57.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:23:57 compute-0 nova_compute[239965]: 2026-01-26 16:23:57.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:23:57 compute-0 nova_compute[239965]: 2026-01-26 16:23:57.965 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.164 239969 DEBUG oslo_concurrency.lockutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.164 239969 DEBUG oslo_concurrency.lockutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.165 239969 DEBUG oslo_concurrency.lockutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.165 239969 DEBUG oslo_concurrency.lockutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.165 239969 DEBUG oslo_concurrency.lockutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.166 239969 INFO nova.compute.manager [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Terminating instance
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.167 239969 DEBUG nova.compute.manager [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.195 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 kernel: tap47519d16-bf (unregistering): left promiscuous mode
Jan 26 16:23:58 compute-0 NetworkManager[48954]: <info>  [1769444638.2216] device (tap47519d16-bf): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:23:58 compute-0 ovn_controller[146046]: 2026-01-26T16:23:58Z|01370|binding|INFO|Releasing lport 47519d16-bf4d-4b36-aa40-e533f3ff9d77 from this chassis (sb_readonly=0)
Jan 26 16:23:58 compute-0 ovn_controller[146046]: 2026-01-26T16:23:58Z|01371|binding|INFO|Setting lport 47519d16-bf4d-4b36-aa40-e533f3ff9d77 down in Southbound
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 ovn_controller[146046]: 2026-01-26T16:23:58Z|01372|binding|INFO|Removing iface tap47519d16-bf ovn-installed in OVS
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.234 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.243 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:7e:f9 10.100.0.6'], port_security=['fa:16:3e:34:7e:f9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '7ddfbe11-7a13-4e3c-8fe6-0cca0be74583', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9bc91585-31b9-47fa-8cd0-03064511558e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c0a3a9fe-99c5-4bfe-a92f-e1de2e2a1727 fa917f24-1d33-45b8-bab6-704cf1bde01b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=7379a9d2-4570-48f3-bfe7-5857500f5f77, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=47519d16-bf4d-4b36-aa40-e533f3ff9d77) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.244 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 47519d16-bf4d-4b36-aa40-e533f3ff9d77 in datapath 9bc91585-31b9-47fa-8cd0-03064511558e unbound from our chassis
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.246 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9bc91585-31b9-47fa-8cd0-03064511558e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.247 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[02e6535e-918a-41dd-9768-403757f1c7f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.248 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e namespace which is not needed anymore
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.249 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Jan 26 16:23:58 compute-0 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007f.scope: Consumed 15.944s CPU time.
Jan 26 16:23:58 compute-0 systemd-machined[208061]: Machine qemu-154-instance-0000007f terminated.
Jan 26 16:23:58 compute-0 neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e[352916]: [NOTICE]   (352920) : haproxy version is 2.8.14-c23fe91
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.448 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e[352916]: [NOTICE]   (352920) : path to executable is /usr/sbin/haproxy
Jan 26 16:23:58 compute-0 neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e[352916]: [WARNING]  (352920) : Exiting Master process...
Jan 26 16:23:58 compute-0 neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e[352916]: [ALERT]    (352920) : Current worker (352922) exited with code 143 (Terminated)
Jan 26 16:23:58 compute-0 neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e[352916]: [WARNING]  (352920) : All workers exited. Exiting... (0)
Jan 26 16:23:58 compute-0 systemd[1]: libpod-b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b.scope: Deactivated successfully.
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.456 239969 INFO nova.virt.libvirt.driver [-] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Instance destroyed successfully.
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.457 239969 DEBUG nova.objects.instance [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'resources' on Instance uuid 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:23:58 compute-0 podman[355728]: 2026-01-26 16:23:58.460689299 +0000 UTC m=+0.103201499 container died b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.469 239969 DEBUG nova.virt.libvirt.vif [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:22:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-147250426',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-147250426',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=127,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNaGoHE+Y8ji3+JCDO/bDUoDInaraCUH4QlGo7jTOAJTsSMKwIPcdLhQoPl5Hxa2VpL5egGSZOv5v3mz9KbIjWDatj77IcRZ2MRFQE+vsAUPEwBUyrw2qpYszZfZz2+J9w==',key_name='tempest-TestSecurityGroupsBasicOps-326396829',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:22:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-apj6m4p0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:22:54Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=7ddfbe11-7a13-4e3c-8fe6-0cca0be74583,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.470 239969 DEBUG nova.network.os_vif_util [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.471 239969 DEBUG nova.network.os_vif_util [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:34:7e:f9,bridge_name='br-int',has_traffic_filtering=True,id=47519d16-bf4d-4b36-aa40-e533f3ff9d77,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47519d16-bf') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.471 239969 DEBUG os_vif [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:7e:f9,bridge_name='br-int',has_traffic_filtering=True,id=47519d16-bf4d-4b36-aa40-e533f3ff9d77,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47519d16-bf') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.473 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.474 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap47519d16-bf, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.477 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.480 239969 INFO os_vif [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:34:7e:f9,bridge_name='br-int',has_traffic_filtering=True,id=47519d16-bf4d-4b36-aa40-e533f3ff9d77,network=Network(9bc91585-31b9-47fa-8cd0-03064511558e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap47519d16-bf')
Jan 26 16:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b-userdata-shm.mount: Deactivated successfully.
Jan 26 16:23:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebef440e628061add6e11d5722db7178f58630568610ca9ccaa0d3a738d2e270-merged.mount: Deactivated successfully.
Jan 26 16:23:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:23:58 compute-0 podman[355728]: 2026-01-26 16:23:58.516095495 +0000 UTC m=+0.158607685 container cleanup b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:23:58 compute-0 systemd[1]: libpod-conmon-b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b.scope: Deactivated successfully.
Jan 26 16:23:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 323 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 230 op/s
Jan 26 16:23:58 compute-0 podman[355789]: 2026-01-26 16:23:58.58735461 +0000 UTC m=+0.048944769 container remove b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.594 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1fd9b1-66c8-4d69-b277-cd175315f11a]: (4, ('Mon Jan 26 04:23:58 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e (b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b)\nb20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b\nMon Jan 26 04:23:58 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e (b20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b)\nb20c75124f196adde7b2eb7a0f2675f34197cd8ad70cdec03a4adf6fd28abd0b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.595 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1edeb90a-0428-46c4-a903-7d81eb43d705]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.596 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9bc91585-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.598 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 kernel: tap9bc91585-30: left promiscuous mode
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.613 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.616 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[529514e3-528d-43e9-9d3b-45a37c096256]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.630 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69d6dbc0-ed99-437a-bfa9-cafad42e5e94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.631 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a4b5022f-0360-40ef-bcb9-de790121af55]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.650 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c0af0628-30c4-454b-a516-d4782cba78a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 606120, 'reachable_time': 35375, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355804, 'error': None, 'target': 'ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.652 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9bc91585-31b9-47fa-8cd0-03064511558e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:23:58 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:58.652 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0c2e2e-3400-4adc-a5f5-78ab57b48028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:23:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d9bc91585\x2d31b9\x2d47fa\x2d8cd0\x2d03064511558e.mount: Deactivated successfully.
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.728 239969 INFO nova.virt.libvirt.driver [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Deleting instance files /var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_del
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.729 239969 INFO nova.virt.libvirt.driver [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Deletion of /var/lib/nova/instances/7ddfbe11-7a13-4e3c-8fe6-0cca0be74583_del complete
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.777 239969 INFO nova.compute.manager [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Took 0.61 seconds to destroy the instance on the hypervisor.
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.777 239969 DEBUG oslo.service.loopingcall [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.778 239969 DEBUG nova.compute.manager [-] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.778 239969 DEBUG nova.network.neutron [-] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.946 239969 DEBUG nova.compute.manager [req-d3c9fb59-8f62-433b-97eb-7f75b8dc0a7d req-298f9081-3fd4-4401-ab3e-fb4404118909 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Received event network-changed-47519d16-bf4d-4b36-aa40-e533f3ff9d77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.947 239969 DEBUG nova.compute.manager [req-d3c9fb59-8f62-433b-97eb-7f75b8dc0a7d req-298f9081-3fd4-4401-ab3e-fb4404118909 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Refreshing instance network info cache due to event network-changed-47519d16-bf4d-4b36-aa40-e533f3ff9d77. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.947 239969 DEBUG oslo_concurrency.lockutils [req-d3c9fb59-8f62-433b-97eb-7f75b8dc0a7d req-298f9081-3fd4-4401-ab3e-fb4404118909 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.947 239969 DEBUG oslo_concurrency.lockutils [req-d3c9fb59-8f62-433b-97eb-7f75b8dc0a7d req-298f9081-3fd4-4401-ab3e-fb4404118909 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:23:58 compute-0 nova_compute[239965]: 2026-01-26 16:23:58.948 239969 DEBUG nova.network.neutron [req-d3c9fb59-8f62-433b-97eb-7f75b8dc0a7d req-298f9081-3fd4-4401-ab3e-fb4404118909 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Refreshing network info cache for port 47519d16-bf4d-4b36-aa40-e533f3ff9d77 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:23:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:59.247 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:59.248 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:23:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:23:59.248 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:23:59 compute-0 nova_compute[239965]: 2026-01-26 16:23:59.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:23:59 compute-0 ceph-mon[75140]: pgmap v2198: 305 pgs: 305 active+clean; 323 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.8 MiB/s wr, 230 op/s
Jan 26 16:23:59 compute-0 nova_compute[239965]: 2026-01-26 16:23:59.665 239969 DEBUG nova.network.neutron [-] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:23:59 compute-0 nova_compute[239965]: 2026-01-26 16:23:59.707 239969 INFO nova.compute.manager [-] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Took 0.93 seconds to deallocate network for instance.
Jan 26 16:23:59 compute-0 nova_compute[239965]: 2026-01-26 16:23:59.766 239969 DEBUG oslo_concurrency.lockutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:23:59 compute-0 nova_compute[239965]: 2026-01-26 16:23:59.767 239969 DEBUG oslo_concurrency.lockutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.085 239969 DEBUG oslo_concurrency.processutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.530 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 323 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 26 16:24:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1655892053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1655892053' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.673 239969 DEBUG oslo_concurrency.processutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.679 239969 DEBUG nova.compute.provider_tree [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.712 239969 DEBUG nova.scheduler.client.report [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.749 239969 DEBUG oslo_concurrency.lockutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.982s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.752 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.752 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.753 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.753 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.858 239969 INFO nova.scheduler.client.report [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Deleted allocations for instance 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583
Jan 26 16:24:00 compute-0 nova_compute[239965]: 2026-01-26 16:24:00.924 239969 DEBUG oslo_concurrency.lockutils [None req-3583b909-2883-4054-b0ed-b9e523f13b7f ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.759s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.013 239969 DEBUG nova.network.neutron [req-d3c9fb59-8f62-433b-97eb-7f75b8dc0a7d req-298f9081-3fd4-4401-ab3e-fb4404118909 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updated VIF entry in instance network info cache for port 47519d16-bf4d-4b36-aa40-e533f3ff9d77. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.014 239969 DEBUG nova.network.neutron [req-d3c9fb59-8f62-433b-97eb-7f75b8dc0a7d req-298f9081-3fd4-4401-ab3e-fb4404118909 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Updating instance_info_cache with network_info: [{"id": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "address": "fa:16:3e:34:7e:f9", "network": {"id": "9bc91585-31b9-47fa-8cd0-03064511558e", "bridge": "br-int", "label": "tempest-network-smoke--9814788", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap47519d16-bf", "ovs_interfaceid": "47519d16-bf4d-4b36-aa40-e533f3ff9d77", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.040 239969 DEBUG oslo_concurrency.lockutils [req-d3c9fb59-8f62-433b-97eb-7f75b8dc0a7d req-298f9081-3fd4-4401-ab3e-fb4404118909 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-7ddfbe11-7a13-4e3c-8fe6-0cca0be74583" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.079 239969 DEBUG nova.compute.manager [req-1f769b11-39c6-4454-bab8-dc7b2537f7d5 req-c99e925f-2533-4c01-85ff-76f0e3a58607 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Received event network-vif-deleted-47519d16-bf4d-4b36-aa40-e533f3ff9d77 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1933030306' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.322 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.417 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.417 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.420 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.421 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000080 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.576 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.576 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3223MB free_disk=59.85138609725982GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.577 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.577 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.614 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.614 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.647 239969 DEBUG nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:24:01 compute-0 ceph-mon[75140]: pgmap v2199: 305 pgs: 305 active+clean; 323 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 114 op/s
Jan 26 16:24:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1933030306' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.676 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 5e597276-b625-4985-8433-360736cd3c79 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.676 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 1999f11e-e9a2-4442-8baf-d8e51e0c856d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.706 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 86b65c1c-c168-4ecd-b98d-9080d00d16dc has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.706 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.707 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.742 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:01 compute-0 nova_compute[239965]: 2026-01-26 16:24:01.782 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:02 compute-0 sshd-session[355851]: Invalid user ubuntu from 209.38.206.249 port 35056
Jan 26 16:24:02 compute-0 sshd-session[355851]: Connection closed by invalid user ubuntu 209.38.206.249 port 35056 [preauth]
Jan 26 16:24:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1206744107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:02 compute-0 nova_compute[239965]: 2026-01-26 16:24:02.357 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:02 compute-0 nova_compute[239965]: 2026-01-26 16:24:02.363 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:02 compute-0 nova_compute[239965]: 2026-01-26 16:24:02.381 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:02 compute-0 nova_compute[239965]: 2026-01-26 16:24:02.399 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:24:02 compute-0 nova_compute[239965]: 2026-01-26 16:24:02.399 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:02 compute-0 nova_compute[239965]: 2026-01-26 16:24:02.399 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:02 compute-0 nova_compute[239965]: 2026-01-26 16:24:02.410 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:24:02 compute-0 nova_compute[239965]: 2026-01-26 16:24:02.410 239969 INFO nova.compute.claims [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:24:02 compute-0 nova_compute[239965]: 2026-01-26 16:24:02.561 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 246 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.2 MiB/s wr, 149 op/s
Jan 26 16:24:02 compute-0 sshd-session[355873]: Invalid user es from 209.38.206.249 port 41676
Jan 26 16:24:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1206744107' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:02 compute-0 ceph-mon[75140]: pgmap v2200: 305 pgs: 305 active+clean; 246 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.2 MiB/s wr, 149 op/s
Jan 26 16:24:02 compute-0 sshd-session[355873]: Connection closed by invalid user es 209.38.206.249 port 41676 [preauth]
Jan 26 16:24:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/596241657' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.143 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.150 239969 DEBUG nova.compute.provider_tree [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.168 239969 DEBUG nova.scheduler.client.report [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.198 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.199 239969 DEBUG nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.201 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.251 239969 DEBUG nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.252 239969 DEBUG nova.network.neutron [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.271 239969 INFO nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.287 239969 DEBUG nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.393 239969 DEBUG nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.395 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.395 239969 INFO nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Creating image(s)
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.422 239969 DEBUG nova.storage.rbd_utils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.451 239969 DEBUG nova.storage.rbd_utils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.486 239969 DEBUG nova.storage.rbd_utils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.491 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.534 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.582 239969 DEBUG nova.policy [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59ae1c17a260470c91f50965ddd53a9e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.586 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.587 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.587 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.588 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.609 239969 DEBUG nova.storage.rbd_utils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.613 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/596241657' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.900 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.287s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:03 compute-0 nova_compute[239965]: 2026-01-26 16:24:03.961 239969 DEBUG nova.storage.rbd_utils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] resizing rbd image 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.032 239969 DEBUG nova.objects.instance [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'migration_context' on Instance uuid 86b65c1c-c168-4ecd-b98d-9080d00d16dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.052 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.052 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Ensure instance console log exists: /var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.053 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.053 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.053 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.481 239969 DEBUG nova.compute.manager [req-17013fab-ec8b-4d09-8ae1-adbcca43dad2 req-79dddfcd-5af5-40e5-90b2-9b11deebe455 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received event network-changed-d8b4b479-9869-4e47-bcea-fe0580fafbe3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.482 239969 DEBUG nova.compute.manager [req-17013fab-ec8b-4d09-8ae1-adbcca43dad2 req-79dddfcd-5af5-40e5-90b2-9b11deebe455 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Refreshing instance network info cache due to event network-changed-d8b4b479-9869-4e47-bcea-fe0580fafbe3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.482 239969 DEBUG oslo_concurrency.lockutils [req-17013fab-ec8b-4d09-8ae1-adbcca43dad2 req-79dddfcd-5af5-40e5-90b2-9b11deebe455 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.482 239969 DEBUG oslo_concurrency.lockutils [req-17013fab-ec8b-4d09-8ae1-adbcca43dad2 req-79dddfcd-5af5-40e5-90b2-9b11deebe455 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.482 239969 DEBUG nova.network.neutron [req-17013fab-ec8b-4d09-8ae1-adbcca43dad2 req-79dddfcd-5af5-40e5-90b2-9b11deebe455 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Refreshing network info cache for port d8b4b479-9869-4e47-bcea-fe0580fafbe3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.567 239969 DEBUG oslo_concurrency.lockutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.568 239969 DEBUG oslo_concurrency.lockutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.569 239969 DEBUG oslo_concurrency.lockutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.569 239969 DEBUG oslo_concurrency.lockutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.570 239969 DEBUG oslo_concurrency.lockutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.572 239969 INFO nova.compute.manager [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Terminating instance
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.574 239969 DEBUG nova.compute.manager [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:24:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 246 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 26 16:24:04 compute-0 kernel: tapd8b4b479-98 (unregistering): left promiscuous mode
Jan 26 16:24:04 compute-0 NetworkManager[48954]: <info>  [1769444644.6270] device (tapd8b4b479-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:24:04 compute-0 ovn_controller[146046]: 2026-01-26T16:24:04Z|01373|binding|INFO|Releasing lport d8b4b479-9869-4e47-bcea-fe0580fafbe3 from this chassis (sb_readonly=0)
Jan 26 16:24:04 compute-0 ovn_controller[146046]: 2026-01-26T16:24:04Z|01374|binding|INFO|Setting lport d8b4b479-9869-4e47-bcea-fe0580fafbe3 down in Southbound
Jan 26 16:24:04 compute-0 ovn_controller[146046]: 2026-01-26T16:24:04Z|01375|binding|INFO|Removing iface tapd8b4b479-98 ovn-installed in OVS
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.679 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:04 compute-0 ceph-mon[75140]: pgmap v2201: 305 pgs: 305 active+clean; 246 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 363 KiB/s rd, 2.1 MiB/s wr, 120 op/s
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.686 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:a8:15 10.100.0.14 2001:db8::f816:3eff:fe12:a815'], port_security=['fa:16:3e:12:a8:15 10.100.0.14 2001:db8::f816:3eff:fe12:a815'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe12:a815/64', 'neutron:device_id': '1999f11e-e9a2-4442-8baf-d8e51e0c856d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bbc6da3-4889-49ad-be6a-5c45e912baea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59cf9173-0130-4701-8cd2-ef13c0f4cee0, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=d8b4b479-9869-4e47-bcea-fe0580fafbe3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.687 156105 INFO neutron.agent.ovn.metadata.agent [-] Port d8b4b479-9869-4e47-bcea-fe0580fafbe3 in datapath 71a6c2b4-c95b-48d6-bc0d-8a809d49273b unbound from our chassis
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.689 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71a6c2b4-c95b-48d6-bc0d-8a809d49273b
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.695 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.714 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1e69d40d-3311-423e-8ac9-fbdbe4caa498]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:04 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000083.scope: Deactivated successfully.
Jan 26 16:24:04 compute-0 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d00000083.scope: Consumed 14.113s CPU time.
Jan 26 16:24:04 compute-0 systemd-machined[208061]: Machine qemu-158-instance-00000083 terminated.
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.750 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[30c25f84-5072-4509-869c-c983e5598149]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.754 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bff17a9e-50c0-48f5-92bd-c01d62be43d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.788 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a16d651a-51b5-49ea-bbe5-b82ddfadeee7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.797 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.801 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.808 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[203c7c69-886e-4e93-a096-3ed9d6381a51]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71a6c2b4-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:1b:cc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 32, 'tx_packets': 8, 'rx_bytes': 2640, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 388], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607926, 'reachable_time': 34510, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 28, 'inoctets': 2080, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 28, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2080, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 28, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356079, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.814 239969 INFO nova.virt.libvirt.driver [-] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Instance destroyed successfully.
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.814 239969 DEBUG nova.objects.instance [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 1999f11e-e9a2-4442-8baf-d8e51e0c856d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.829 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69562b51-1a9b-48ab-84c4-17f947f9cdee]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71a6c2b4-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607941, 'tstamp': 607941}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356087, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71a6c2b4-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 607945, 'tstamp': 607945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356087, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.831 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71a6c2b4-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.832 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.835 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71a6c2b4-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.836 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.836 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71a6c2b4-c0, col_values=(('external_ids', {'iface-id': '29a5989d-1915-41bb-983b-b3c73bba2262'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:04.836 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.841 239969 DEBUG nova.virt.libvirt.vif [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:23:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-2002333273',display_name='tempest-TestGettingAddress-server-2002333273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-2002333273',id=131,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCC7xQvyZ3R4fi4Uk/Uk6Pa6MLruwnCjAWUP+8KMyXPkmagp3JbDxcXWQo/28wA3I75JT31hYs4VkRRPhBdGj71Xnfc2t9PlBlyrIJIqEAvEv7a7zDeqBbmQyfNsQiXQAw==',key_name='tempest-TestGettingAddress-1047513795',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:23:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-ije6ecsv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:23:43Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=1999f11e-e9a2-4442-8baf-d8e51e0c856d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.842 239969 DEBUG nova.network.os_vif_util [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.842 239969 DEBUG nova.network.os_vif_util [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:12:a8:15,bridge_name='br-int',has_traffic_filtering=True,id=d8b4b479-9869-4e47-bcea-fe0580fafbe3,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b4b479-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.843 239969 DEBUG os_vif [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:a8:15,bridge_name='br-int',has_traffic_filtering=True,id=d8b4b479-9869-4e47-bcea-fe0580fafbe3,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b4b479-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.845 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.845 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd8b4b479-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.846 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.847 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:04 compute-0 nova_compute[239965]: 2026-01-26 16:24:04.850 239969 INFO os_vif [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:12:a8:15,bridge_name='br-int',has_traffic_filtering=True,id=d8b4b479-9869-4e47-bcea-fe0580fafbe3,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd8b4b479-98')
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.107 239969 INFO nova.virt.libvirt.driver [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Deleting instance files /var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d_del
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.108 239969 INFO nova.virt.libvirt.driver [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Deletion of /var/lib/nova/instances/1999f11e-e9a2-4442-8baf-d8e51e0c856d_del complete
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.177 239969 INFO nova.compute.manager [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Took 0.60 seconds to destroy the instance on the hypervisor.
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.177 239969 DEBUG oslo.service.loopingcall [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.178 239969 DEBUG nova.compute.manager [-] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.178 239969 DEBUG nova.network.neutron [-] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.273 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444630.2723057, 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.273 239969 INFO nova.compute.manager [-] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] VM Stopped (Lifecycle Event)
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.306 239969 DEBUG nova.compute.manager [None req-d3ecd33a-07e9-45bf-8336-15817282312a - - - - - -] [instance: 34442cd1-7fb4-45b4-bd46-9d2cb53d00e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:05 compute-0 ovn_controller[146046]: 2026-01-26T16:24:05Z|01376|binding|INFO|Releasing lport 29a5989d-1915-41bb-983b-b3c73bba2262 from this chassis (sb_readonly=0)
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.620 239969 DEBUG nova.network.neutron [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Successfully created port: b54c88a7-5376-4cb4-b4c2-09833f6c7c31 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:24:05 compute-0 nova_compute[239965]: 2026-01-26 16:24:05.766 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.233 239969 DEBUG nova.network.neutron [-] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.250 239969 INFO nova.compute.manager [-] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Took 1.07 seconds to deallocate network for instance.
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.295 239969 DEBUG oslo_concurrency.lockutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.295 239969 DEBUG oslo_concurrency.lockutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:06 compute-0 sshd-session[356109]: Connection closed by authenticating user root 209.38.206.249 port 41686 [preauth]
Jan 26 16:24:06 compute-0 podman[356111]: 2026-01-26 16:24:06.379262784 +0000 UTC m=+0.055511250 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.389 239969 DEBUG oslo_concurrency.processutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:06 compute-0 podman[356112]: 2026-01-26 16:24:06.404499292 +0000 UTC m=+0.085116945 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:24:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 224 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.2 MiB/s wr, 130 op/s
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.697 239969 DEBUG nova.network.neutron [req-17013fab-ec8b-4d09-8ae1-adbcca43dad2 req-79dddfcd-5af5-40e5-90b2-9b11deebe455 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Updated VIF entry in instance network info cache for port d8b4b479-9869-4e47-bcea-fe0580fafbe3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.698 239969 DEBUG nova.network.neutron [req-17013fab-ec8b-4d09-8ae1-adbcca43dad2 req-79dddfcd-5af5-40e5-90b2-9b11deebe455 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Updating instance_info_cache with network_info: [{"id": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "address": "fa:16:3e:12:a8:15", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe12:a815", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd8b4b479-98", "ovs_interfaceid": "d8b4b479-9869-4e47-bcea-fe0580fafbe3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.704 239969 DEBUG nova.compute.manager [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received event network-vif-unplugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.705 239969 DEBUG oslo_concurrency.lockutils [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.705 239969 DEBUG oslo_concurrency.lockutils [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.705 239969 DEBUG oslo_concurrency.lockutils [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.706 239969 DEBUG nova.compute.manager [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] No waiting events found dispatching network-vif-unplugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.706 239969 WARNING nova.compute.manager [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received unexpected event network-vif-unplugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 for instance with vm_state deleted and task_state None.
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.706 239969 DEBUG nova.compute.manager [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received event network-vif-plugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.707 239969 DEBUG oslo_concurrency.lockutils [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.707 239969 DEBUG oslo_concurrency.lockutils [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.707 239969 DEBUG oslo_concurrency.lockutils [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.708 239969 DEBUG nova.compute.manager [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] No waiting events found dispatching network-vif-plugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.708 239969 WARNING nova.compute.manager [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received unexpected event network-vif-plugged-d8b4b479-9869-4e47-bcea-fe0580fafbe3 for instance with vm_state deleted and task_state None.
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.708 239969 DEBUG nova.compute.manager [req-d1b29e57-82fe-4df7-b8f1-9cfce50af11d req-0caea574-f9ca-4c4a-a710-43bf60b8be4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Received event network-vif-deleted-d8b4b479-9869-4e47-bcea-fe0580fafbe3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.720 239969 DEBUG oslo_concurrency.lockutils [req-17013fab-ec8b-4d09-8ae1-adbcca43dad2 req-79dddfcd-5af5-40e5-90b2-9b11deebe455 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-1999f11e-e9a2-4442-8baf-d8e51e0c856d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/447600396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.964 239969 DEBUG oslo_concurrency.processutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.971 239969 DEBUG nova.compute.provider_tree [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:06 compute-0 nova_compute[239965]: 2026-01-26 16:24:06.988 239969 DEBUG nova.scheduler.client.report [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.007 239969 DEBUG oslo_concurrency.lockutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.035 239969 INFO nova.scheduler.client.report [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 1999f11e-e9a2-4442-8baf-d8e51e0c856d
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.125 239969 DEBUG oslo_concurrency.lockutils [None req-86a68645-b588-4483-82a7-0016f953681d 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "1999f11e-e9a2-4442-8baf-d8e51e0c856d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.286 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.443 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444632.4421673, 3a113ae2-e017-46ee-b5d9-111ff761618b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.444 239969 INFO nova.compute.manager [-] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] VM Stopped (Lifecycle Event)
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.460 239969 DEBUG nova.compute.manager [None req-4bef4533-77be-4b03-9acb-9597fe888602 - - - - - -] [instance: 3a113ae2-e017-46ee-b5d9-111ff761618b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.512 239969 DEBUG nova.network.neutron [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Successfully updated port: b54c88a7-5376-4cb4-b4c2-09833f6c7c31 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.523 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.523 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquired lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.523 239969 DEBUG nova.network.neutron [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.642 239969 DEBUG nova.compute.manager [req-13fb1e6b-90c2-478d-8c6b-59998c28cd51 req-7f8d2fe9-cac9-403e-909d-d463bfb070c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received event network-changed-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.643 239969 DEBUG nova.compute.manager [req-13fb1e6b-90c2-478d-8c6b-59998c28cd51 req-7f8d2fe9-cac9-403e-909d-d463bfb070c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Refreshing instance network info cache due to event network-changed-b54c88a7-5376-4cb4-b4c2-09833f6c7c31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.643 239969 DEBUG oslo_concurrency.lockutils [req-13fb1e6b-90c2-478d-8c6b-59998c28cd51 req-7f8d2fe9-cac9-403e-909d-d463bfb070c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:07 compute-0 ceph-mon[75140]: pgmap v2202: 305 pgs: 305 active+clean; 224 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.2 MiB/s wr, 130 op/s
Jan 26 16:24:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/447600396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:07 compute-0 nova_compute[239965]: 2026-01-26 16:24:07.768 239969 DEBUG nova.network.neutron [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:24:07 compute-0 sshd-session[356179]: Invalid user dev from 209.38.206.249 port 41692
Jan 26 16:24:07 compute-0 sshd-session[356179]: Connection closed by invalid user dev 209.38.206.249 port 41692 [preauth]
Jan 26 16:24:08 compute-0 nova_compute[239965]: 2026-01-26 16:24:08.248 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 213 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.2 MiB/s wr, 154 op/s
Jan 26 16:24:08 compute-0 ceph-mon[75140]: pgmap v2203: 305 pgs: 305 active+clean; 213 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 353 KiB/s rd, 3.2 MiB/s wr, 154 op/s
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.533 239969 DEBUG nova.network.neutron [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Updating instance_info_cache with network_info: [{"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.567 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Releasing lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.568 239969 DEBUG nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Instance network_info: |[{"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.569 239969 DEBUG oslo_concurrency.lockutils [req-13fb1e6b-90c2-478d-8c6b-59998c28cd51 req-7f8d2fe9-cac9-403e-909d-d463bfb070c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.569 239969 DEBUG nova.network.neutron [req-13fb1e6b-90c2-478d-8c6b-59998c28cd51 req-7f8d2fe9-cac9-403e-909d-d463bfb070c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Refreshing network info cache for port b54c88a7-5376-4cb4-b4c2-09833f6c7c31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.573 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Start _get_guest_xml network_info=[{"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.578 239969 WARNING nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.584 239969 DEBUG nova.virt.libvirt.host [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.584 239969 DEBUG nova.virt.libvirt.host [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.591 239969 DEBUG nova.virt.libvirt.host [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.592 239969 DEBUG nova.virt.libvirt.host [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.592 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.592 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.593 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.593 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.594 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.594 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.594 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.595 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.595 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.595 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.596 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.596 239969 DEBUG nova.virt.hardware [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.600 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.743 239969 DEBUG nova.compute.manager [req-1ceb20b2-f208-4560-888e-c9ec8c6153d0 req-d047509d-2676-42a2-84ef-ee0cd5a7a102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received event network-changed-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.744 239969 DEBUG nova.compute.manager [req-1ceb20b2-f208-4560-888e-c9ec8c6153d0 req-d047509d-2676-42a2-84ef-ee0cd5a7a102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Refreshing instance network info cache due to event network-changed-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.744 239969 DEBUG oslo_concurrency.lockutils [req-1ceb20b2-f208-4560-888e-c9ec8c6153d0 req-d047509d-2676-42a2-84ef-ee0cd5a7a102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.744 239969 DEBUG oslo_concurrency.lockutils [req-1ceb20b2-f208-4560-888e-c9ec8c6153d0 req-d047509d-2676-42a2-84ef-ee0cd5a7a102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.745 239969 DEBUG nova.network.neutron [req-1ceb20b2-f208-4560-888e-c9ec8c6153d0 req-d047509d-2676-42a2-84ef-ee0cd5a7a102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Refreshing network info cache for port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.816 239969 DEBUG oslo_concurrency.lockutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "5e597276-b625-4985-8433-360736cd3c79" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.817 239969 DEBUG oslo_concurrency.lockutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.817 239969 DEBUG oslo_concurrency.lockutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "5e597276-b625-4985-8433-360736cd3c79-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.818 239969 DEBUG oslo_concurrency.lockutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.818 239969 DEBUG oslo_concurrency.lockutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.820 239969 INFO nova.compute.manager [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Terminating instance
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.821 239969 DEBUG nova.compute.manager [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.848 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:09 compute-0 kernel: tap743ec95a-b5 (unregistering): left promiscuous mode
Jan 26 16:24:09 compute-0 NetworkManager[48954]: <info>  [1769444649.8668] device (tap743ec95a-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:24:09 compute-0 ovn_controller[146046]: 2026-01-26T16:24:09Z|01377|binding|INFO|Releasing lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 from this chassis (sb_readonly=0)
Jan 26 16:24:09 compute-0 ovn_controller[146046]: 2026-01-26T16:24:09Z|01378|binding|INFO|Setting lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 down in Southbound
Jan 26 16:24:09 compute-0 ovn_controller[146046]: 2026-01-26T16:24:09Z|01379|binding|INFO|Removing iface tap743ec95a-b5 ovn-installed in OVS
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.884 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:09.893 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301'], port_security=['fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe16:301/64', 'neutron:device_id': '5e597276-b625-4985-8433-360736cd3c79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bbc6da3-4889-49ad-be6a-5c45e912baea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59cf9173-0130-4701-8cd2-ef13c0f4cee0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:09.895 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 in datapath 71a6c2b4-c95b-48d6-bc0d-8a809d49273b unbound from our chassis
Jan 26 16:24:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:09.897 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71a6c2b4-c95b-48d6-bc0d-8a809d49273b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:24:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:09.898 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1e694e-8cb3-41fd-8af4-b914591b76db]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:09.899 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b namespace which is not needed anymore
Jan 26 16:24:09 compute-0 nova_compute[239965]: 2026-01-26 16:24:09.903 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:09 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d00000080.scope: Deactivated successfully.
Jan 26 16:24:09 compute-0 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d00000080.scope: Consumed 15.129s CPU time.
Jan 26 16:24:09 compute-0 systemd-machined[208061]: Machine qemu-155-instance-00000080 terminated.
Jan 26 16:24:10 compute-0 kernel: tap743ec95a-b5: entered promiscuous mode
Jan 26 16:24:10 compute-0 systemd-udevd[356205]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:24:10 compute-0 kernel: tap743ec95a-b5 (unregistering): left promiscuous mode
Jan 26 16:24:10 compute-0 NetworkManager[48954]: <info>  [1769444650.0574] manager: (tap743ec95a-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/566)
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01380|binding|INFO|Claiming lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 for this chassis.
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01381|binding|INFO|743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2: Claiming fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.057 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:10.071 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301'], port_security=['fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe16:301/64', 'neutron:device_id': '5e597276-b625-4985-8433-360736cd3c79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bbc6da3-4889-49ad-be6a-5c45e912baea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59cf9173-0130-4701-8cd2-ef13c0f4cee0, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.082 239969 INFO nova.virt.libvirt.driver [-] [instance: 5e597276-b625-4985-8433-360736cd3c79] Instance destroyed successfully.
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.083 239969 DEBUG nova.objects.instance [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lazy-loading 'resources' on Instance uuid 5e597276-b625-4985-8433-360736cd3c79 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.098 239969 DEBUG nova.virt.libvirt.vif [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:22:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1725827159',display_name='tempest-TestGettingAddress-server-1725827159',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1725827159',id=128,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCC7xQvyZ3R4fi4Uk/Uk6Pa6MLruwnCjAWUP+8KMyXPkmagp3JbDxcXWQo/28wA3I75JT31hYs4VkRRPhBdGj71Xnfc2t9PlBlyrIJIqEAvEv7a7zDeqBbmQyfNsQiXQAw==',key_name='tempest-TestGettingAddress-1047513795',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:23:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='df501970c9864b77b86bb4aa58ef846b',ramdisk_id='',reservation_id='r-07uj0sou',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-206943419',owner_user_name='tempest-TestGettingAddress-206943419-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:23:10Z,user_data=None,user_id='0338b308661e4205839e9e957f674d8c',uuid=5e597276-b625-4985-8433-360736cd3c79,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01382|binding|INFO|Setting lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 ovn-installed in OVS
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.099 239969 DEBUG nova.network.os_vif_util [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converting VIF {"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01383|binding|INFO|Setting lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 up in Southbound
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01384|binding|INFO|Releasing lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 from this chassis (sb_readonly=1)
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.100 239969 DEBUG nova.network.os_vif_util [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:16:03:01,bridge_name='br-int',has_traffic_filtering=True,id=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743ec95a-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01385|if_status|INFO|Dropped 1 log messages in last 963 seconds (most recently, 963 seconds ago) due to excessive rate
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01386|if_status|INFO|Not setting lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 down as sb is readonly
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01387|binding|INFO|Removing iface tap743ec95a-b5 ovn-installed in OVS
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.102 239969 DEBUG os_vif [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:03:01,bridge_name='br-int',has_traffic_filtering=True,id=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743ec95a-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.105 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.107 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap743ec95a-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.108 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01388|binding|INFO|Releasing lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 from this chassis (sb_readonly=0)
Jan 26 16:24:10 compute-0 ovn_controller[146046]: 2026-01-26T16:24:10Z|01389|binding|INFO|Setting lport 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 down in Southbound
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.115 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:24:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:10.118 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301'], port_security=['fa:16:3e:16:03:01 10.100.0.4 2001:db8::f816:3eff:fe16:301'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28 2001:db8::f816:3eff:fe16:301/64', 'neutron:device_id': '5e597276-b625-4985-8433-360736cd3c79', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'df501970c9864b77b86bb4aa58ef846b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3bbc6da3-4889-49ad-be6a-5c45e912baea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=59cf9173-0130-4701-8cd2-ef13c0f4cee0, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.128 239969 INFO os_vif [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:16:03:01,bridge_name='br-int',has_traffic_filtering=True,id=743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2,network=Network(71a6c2b4-c95b-48d6-bc0d-8a809d49273b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap743ec95a-b5')
Jan 26 16:24:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:24:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3968082405' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.247 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.271 239969 DEBUG nova.storage.rbd_utils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.276 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:10 compute-0 neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b[353746]: [NOTICE]   (353750) : haproxy version is 2.8.14-c23fe91
Jan 26 16:24:10 compute-0 neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b[353746]: [NOTICE]   (353750) : path to executable is /usr/sbin/haproxy
Jan 26 16:24:10 compute-0 neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b[353746]: [WARNING]  (353750) : Exiting Master process...
Jan 26 16:24:10 compute-0 neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b[353746]: [WARNING]  (353750) : Exiting Master process...
Jan 26 16:24:10 compute-0 neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b[353746]: [ALERT]    (353750) : Current worker (353752) exited with code 143 (Terminated)
Jan 26 16:24:10 compute-0 neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b[353746]: [WARNING]  (353750) : All workers exited. Exiting... (0)
Jan 26 16:24:10 compute-0 systemd[1]: libpod-a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72.scope: Deactivated successfully.
Jan 26 16:24:10 compute-0 podman[356227]: 2026-01-26 16:24:10.462536263 +0000 UTC m=+0.449366486 container died a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 16:24:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3968082405' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 213 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 1.9 MiB/s wr, 91 op/s
Jan 26 16:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72-userdata-shm.mount: Deactivated successfully.
Jan 26 16:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e664746d42975844e4cc59b7ab1ef766ba2e110fb63b3fc3964de4c0e2f0a20f-merged.mount: Deactivated successfully.
Jan 26 16:24:10 compute-0 podman[356227]: 2026-01-26 16:24:10.78133692 +0000 UTC m=+0.768167153 container cleanup a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:24:10 compute-0 systemd[1]: libpod-conmon-a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72.scope: Deactivated successfully.
Jan 26 16:24:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:24:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2425450828' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.927 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.930 239969 DEBUG nova.virt.libvirt.vif [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1395075827',display_name='tempest-TestNetworkAdvancedServerOps-server-1395075827',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1395075827',id=132,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkCUJmHnWc7pQ9mlVUUqu5qElEroC5GOpBlFh+K77aMSWkLQJWupY9+lKy0giVVb0rPg4LTstifGfAnDPHqlWrsAWCj3bXxWAl8o1IOG6laF0sGgrPgkRGpSiCSZx1vFA==',key_name='tempest-TestNetworkAdvancedServerOps-2089233658',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-rb9fxt2z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:24:03Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=86b65c1c-c168-4ecd-b98d-9080d00d16dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.930 239969 DEBUG nova.network.os_vif_util [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.931 239969 DEBUG nova.network.os_vif_util [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:05:f8,bridge_name='br-int',has_traffic_filtering=True,id=b54c88a7-5376-4cb4-b4c2-09833f6c7c31,network=Network(f4edfbf5-c8ce-414d-90e1-ed10e0f29897),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54c88a7-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.934 239969 DEBUG nova.objects.instance [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 86b65c1c-c168-4ecd-b98d-9080d00d16dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.951 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <uuid>86b65c1c-c168-4ecd-b98d-9080d00d16dc</uuid>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <name>instance-00000084</name>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1395075827</nova:name>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:24:09</nova:creationTime>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <nova:user uuid="59ae1c17a260470c91f50965ddd53a9e">tempest-TestNetworkAdvancedServerOps-842475489-project-member</nova:user>
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <nova:project uuid="2a7615c0db4e4f38aec30c7c723c3c3a">tempest-TestNetworkAdvancedServerOps-842475489</nova:project>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <nova:port uuid="b54c88a7-5376-4cb4-b4c2-09833f6c7c31">
Jan 26 16:24:10 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <system>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <entry name="serial">86b65c1c-c168-4ecd-b98d-9080d00d16dc</entry>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <entry name="uuid">86b65c1c-c168-4ecd-b98d-9080d00d16dc</entry>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     </system>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <os>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   </os>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <features>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   </features>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk">
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       </source>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk.config">
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       </source>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:24:10 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:8a:05:f8"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <target dev="tapb54c88a7-53"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc/console.log" append="off"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <video>
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     </video>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:24:10 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:24:10 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:24:10 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:24:10 compute-0 nova_compute[239965]: </domain>
Jan 26 16:24:10 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.952 239969 DEBUG nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Preparing to wait for external event network-vif-plugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.953 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.954 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.954 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.955 239969 DEBUG nova.virt.libvirt.vif [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1395075827',display_name='tempest-TestNetworkAdvancedServerOps-server-1395075827',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1395075827',id=132,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkCUJmHnWc7pQ9mlVUUqu5qElEroC5GOpBlFh+K77aMSWkLQJWupY9+lKy0giVVb0rPg4LTstifGfAnDPHqlWrsAWCj3bXxWAl8o1IOG6laF0sGgrPgkRGpSiCSZx1vFA==',key_name='tempest-TestNetworkAdvancedServerOps-2089233658',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-rb9fxt2z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:24:03Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=86b65c1c-c168-4ecd-b98d-9080d00d16dc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.955 239969 DEBUG nova.network.os_vif_util [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.956 239969 DEBUG nova.network.os_vif_util [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:05:f8,bridge_name='br-int',has_traffic_filtering=True,id=b54c88a7-5376-4cb4-b4c2-09833f6c7c31,network=Network(f4edfbf5-c8ce-414d-90e1-ed10e0f29897),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54c88a7-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.957 239969 DEBUG os_vif [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:05:f8,bridge_name='br-int',has_traffic_filtering=True,id=b54c88a7-5376-4cb4-b4c2-09833f6c7c31,network=Network(f4edfbf5-c8ce-414d-90e1-ed10e0f29897),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54c88a7-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.958 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.959 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.959 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.962 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.963 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb54c88a7-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.963 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb54c88a7-53, col_values=(('external_ids', {'iface-id': 'b54c88a7-5376-4cb4-b4c2-09833f6c7c31', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:05:f8', 'vm-uuid': '86b65c1c-c168-4ecd-b98d-9080d00d16dc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:10 compute-0 NetworkManager[48954]: <info>  [1769444650.9671] manager: (tapb54c88a7-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/567)
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.967 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.972 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:24:10 compute-0 nova_compute[239965]: 2026-01-26 16:24:10.974 239969 INFO os_vif [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:05:f8,bridge_name='br-int',has_traffic_filtering=True,id=b54c88a7-5376-4cb4-b4c2-09833f6c7c31,network=Network(f4edfbf5-c8ce-414d-90e1-ed10e0f29897),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54c88a7-53')
Jan 26 16:24:11 compute-0 podman[356317]: 2026-01-26 16:24:11.022859016 +0000 UTC m=+0.218902233 container remove a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.038 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0d96936d-ea1f-4b9c-8882-bab7e92f8f6b]: (4, ('Mon Jan 26 04:24:09 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b (a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72)\na1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72\nMon Jan 26 04:24:10 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b (a1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72)\na1512d19821504f3239d109d1ce29bdd12fb25a68015f6ef3dc65e5dc93fce72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.040 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a66f95a3-549f-4540-a63d-1dba661a3111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.041 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71a6c2b4-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:11 compute-0 kernel: tap71a6c2b4-c0: left promiscuous mode
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.067 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.071 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[246dd72a-60d2-4a43-9363-1e697ff8cf9a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.072 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.072 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.073 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No VIF found with MAC fa:16:3e:8a:05:f8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.073 239969 INFO nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Using config drive
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.082 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc900d1-92e7-4aac-b68d-11f8fefc18c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.083 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4464eec4-4bc7-486e-8725-de5084ad2b14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.096 239969 DEBUG nova.storage.rbd_utils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.102 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[38c46d87-bfb2-44bb-8ca9-fcb067be8938]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 607917, 'reachable_time': 39047, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356357, 'error': None, 'target': 'ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.105 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-71a6c2b4-c95b-48d6-bc0d-8a809d49273b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.106 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d0bc085c-74ac-48d7-aa0f-eccc79428bb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.106 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 in datapath 71a6c2b4-c95b-48d6-bc0d-8a809d49273b unbound from our chassis
Jan 26 16:24:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d71a6c2b4\x2dc95b\x2d48d6\x2dbc0d\x2d8a809d49273b.mount: Deactivated successfully.
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.107 239969 DEBUG nova.compute.manager [req-0c890cc1-4b8c-4c01-976f-d81740199b2a req-a9d0bf9d-cb51-4e4f-8c78-3ac6efdf204e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received event network-vif-unplugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.107 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71a6c2b4-c95b-48d6-bc0d-8a809d49273b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.108 239969 DEBUG oslo_concurrency.lockutils [req-0c890cc1-4b8c-4c01-976f-d81740199b2a req-a9d0bf9d-cb51-4e4f-8c78-3ac6efdf204e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5e597276-b625-4985-8433-360736cd3c79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.108 239969 DEBUG oslo_concurrency.lockutils [req-0c890cc1-4b8c-4c01-976f-d81740199b2a req-a9d0bf9d-cb51-4e4f-8c78-3ac6efdf204e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.108 239969 DEBUG oslo_concurrency.lockutils [req-0c890cc1-4b8c-4c01-976f-d81740199b2a req-a9d0bf9d-cb51-4e4f-8c78-3ac6efdf204e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.108 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6a03ae20-ab72-4ff3-9203-cc941e5f39e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.108 239969 DEBUG nova.compute.manager [req-0c890cc1-4b8c-4c01-976f-d81740199b2a req-a9d0bf9d-cb51-4e4f-8c78-3ac6efdf204e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] No waiting events found dispatching network-vif-unplugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.109 239969 DEBUG nova.compute.manager [req-0c890cc1-4b8c-4c01-976f-d81740199b2a req-a9d0bf9d-cb51-4e4f-8c78-3ac6efdf204e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received event network-vif-unplugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.109 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 in datapath 71a6c2b4-c95b-48d6-bc0d-8a809d49273b unbound from our chassis
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.111 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71a6c2b4-c95b-48d6-bc0d-8a809d49273b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:24:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:11.111 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5e1b1d-c826-4dcb-9502-bcddbf203b49]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.392 239969 INFO nova.virt.libvirt.driver [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Deleting instance files /var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79_del
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.393 239969 INFO nova.virt.libvirt.driver [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Deletion of /var/lib/nova/instances/5e597276-b625-4985-8433-360736cd3c79_del complete
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.456 239969 INFO nova.compute.manager [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Took 1.64 seconds to destroy the instance on the hypervisor.
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.457 239969 DEBUG oslo.service.loopingcall [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.458 239969 DEBUG nova.compute.manager [-] [instance: 5e597276-b625-4985-8433-360736cd3c79] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.458 239969 DEBUG nova.network.neutron [-] [instance: 5e597276-b625-4985-8433-360736cd3c79] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:24:11 compute-0 ceph-mon[75140]: pgmap v2204: 305 pgs: 305 active+clean; 213 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 98 KiB/s rd, 1.9 MiB/s wr, 91 op/s
Jan 26 16:24:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2425450828' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:11 compute-0 sshd-session[356362]: Invalid user developer from 209.38.206.249 port 41708
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.588 239969 INFO nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Creating config drive at /var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc/disk.config
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.594 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_z0jka52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:11 compute-0 sshd-session[356362]: Connection closed by invalid user developer 209.38.206.249 port 41708 [preauth]
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.740 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_z0jka52" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.766 239969 DEBUG nova.storage.rbd_utils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.770 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc/disk.config 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.812 239969 DEBUG nova.network.neutron [req-13fb1e6b-90c2-478d-8c6b-59998c28cd51 req-7f8d2fe9-cac9-403e-909d-d463bfb070c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Updated VIF entry in instance network info cache for port b54c88a7-5376-4cb4-b4c2-09833f6c7c31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.814 239969 DEBUG nova.network.neutron [req-13fb1e6b-90c2-478d-8c6b-59998c28cd51 req-7f8d2fe9-cac9-403e-909d-d463bfb070c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Updating instance_info_cache with network_info: [{"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.847 239969 DEBUG nova.network.neutron [req-1ceb20b2-f208-4560-888e-c9ec8c6153d0 req-d047509d-2676-42a2-84ef-ee0cd5a7a102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Updated VIF entry in instance network info cache for port 743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.848 239969 DEBUG nova.network.neutron [req-1ceb20b2-f208-4560-888e-c9ec8c6153d0 req-d047509d-2676-42a2-84ef-ee0cd5a7a102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Updating instance_info_cache with network_info: [{"id": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "address": "fa:16:3e:16:03:01", "network": {"id": "71a6c2b4-c95b-48d6-bc0d-8a809d49273b", "bridge": "br-int", "label": "tempest-network-smoke--1955436763", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe16:301", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "df501970c9864b77b86bb4aa58ef846b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap743ec95a-b5", "ovs_interfaceid": "743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.901 239969 DEBUG oslo_concurrency.lockutils [req-13fb1e6b-90c2-478d-8c6b-59998c28cd51 req-7f8d2fe9-cac9-403e-909d-d463bfb070c8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:11 compute-0 nova_compute[239965]: 2026-01-26 16:24:11.920 239969 DEBUG oslo_concurrency.lockutils [req-1ceb20b2-f208-4560-888e-c9ec8c6153d0 req-d047509d-2676-42a2-84ef-ee0cd5a7a102 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5e597276-b625-4985-8433-360736cd3c79" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:12 compute-0 sshd-session[356368]: Connection closed by authenticating user root 209.38.206.249 port 35892 [preauth]
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.181 239969 DEBUG oslo_concurrency.processutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc/disk.config 86b65c1c-c168-4ecd-b98d-9080d00d16dc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.182 239969 INFO nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Deleting local config drive /var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc/disk.config because it was imported into RBD.
Jan 26 16:24:12 compute-0 kernel: tapb54c88a7-53: entered promiscuous mode
Jan 26 16:24:12 compute-0 NetworkManager[48954]: <info>  [1769444652.2495] manager: (tapb54c88a7-53): new Tun device (/org/freedesktop/NetworkManager/Devices/568)
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.251 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:12 compute-0 ovn_controller[146046]: 2026-01-26T16:24:12Z|01390|binding|INFO|Claiming lport b54c88a7-5376-4cb4-b4c2-09833f6c7c31 for this chassis.
Jan 26 16:24:12 compute-0 ovn_controller[146046]: 2026-01-26T16:24:12Z|01391|binding|INFO|b54c88a7-5376-4cb4-b4c2-09833f6c7c31: Claiming fa:16:3e:8a:05:f8 10.100.0.9
Jan 26 16:24:12 compute-0 NetworkManager[48954]: <info>  [1769444652.2617] device (tapb54c88a7-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:24:12 compute-0 NetworkManager[48954]: <info>  [1769444652.2624] device (tapb54c88a7-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.261 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:05:f8 10.100.0.9'], port_security=['fa:16:3e:8a:05:f8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '86b65c1c-c168-4ecd-b98d-9080d00d16dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4edfbf5-c8ce-414d-90e1-ed10e0f29897', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '501e277a-8518-441f-89b3-5cede83d9c09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5090e7f9-5f51-445b-a868-17a512d26318, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b54c88a7-5376-4cb4-b4c2-09833f6c7c31) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.262 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b54c88a7-5376-4cb4-b4c2-09833f6c7c31 in datapath f4edfbf5-c8ce-414d-90e1-ed10e0f29897 bound to our chassis
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.264 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f4edfbf5-c8ce-414d-90e1-ed10e0f29897
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.276 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96294142-6a54-4ea0-8ce9-4b07af33cdc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.277 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf4edfbf5-c1 in ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.279 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf4edfbf5-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.279 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1552cf86-3aab-4e35-9f4a-05146e4bb0c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.280 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fe93724b-29a3-47ac-9b26-b75afa792b0f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_controller[146046]: 2026-01-26T16:24:12Z|01392|binding|INFO|Setting lport b54c88a7-5376-4cb4-b4c2-09833f6c7c31 up in Southbound
Jan 26 16:24:12 compute-0 ovn_controller[146046]: 2026-01-26T16:24:12Z|01393|binding|INFO|Setting lport b54c88a7-5376-4cb4-b4c2-09833f6c7c31 ovn-installed in OVS
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.288 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:12 compute-0 systemd-machined[208061]: New machine qemu-159-instance-00000084.
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.293 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.293 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[1122d4f0-00b5-48dd-ac25-8f1c7938c3be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 systemd[1]: Started Virtual Machine qemu-159-instance-00000084.
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.319 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8005d3fc-e118-4f52-ad3b-b6eeffc22f7d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.354 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a0d2962f-1b69-45b5-9571-24507de5ca37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.364 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ee867473-60aa-421e-91f1-74e7fdb0dfd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 NetworkManager[48954]: <info>  [1769444652.3657] manager: (tapf4edfbf5-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/569)
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.411 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[959d8644-c42b-4e7a-88b8-dc2d716a63cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.416 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a3d7b963-34c6-475e-a8e2-f47b4be6e42b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 NetworkManager[48954]: <info>  [1769444652.4469] device (tapf4edfbf5-c0): carrier: link connected
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.452 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3404bff2-fc25-4c42-9ea0-baedf551a87a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.474 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe8aff1-b411-4aa7-9533-493c88679325]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4edfbf5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:7d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614325, 'reachable_time': 18815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356453, 'error': None, 'target': 'ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.493 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e0dd94d-6b31-4c68-a34a-16af22d83285]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe12:7d2f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 614325, 'tstamp': 614325}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356454, 'error': None, 'target': 'ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.510 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8f51eceb-7ab8-484e-b7cf-b401b9094c8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf4edfbf5-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:12:7d:2f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 399], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614325, 'reachable_time': 18815, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356455, 'error': None, 'target': 'ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.545 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[befc5c2f-221b-4fd5-9806-5b2f5790aad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 134 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 1.9 MiB/s wr, 119 op/s
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.617 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9b69da94-b5c5-437d-a147-81e5d620d2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.619 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4edfbf5-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.619 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.619 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf4edfbf5-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:12 compute-0 kernel: tapf4edfbf5-c0: entered promiscuous mode
Jan 26 16:24:12 compute-0 NetworkManager[48954]: <info>  [1769444652.6220] manager: (tapf4edfbf5-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/570)
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.624 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf4edfbf5-c0, col_values=(('external_ids', {'iface-id': 'b81958cf-d91b-4990-a7d9-15ab0f872f52'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:12 compute-0 ovn_controller[146046]: 2026-01-26T16:24:12Z|01394|binding|INFO|Releasing lport b81958cf-d91b-4990-a7d9-15ab0f872f52 from this chassis (sb_readonly=0)
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.654 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.655 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f4edfbf5-c8ce-414d-90e1-ed10e0f29897.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f4edfbf5-c8ce-414d-90e1-ed10e0f29897.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.656 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c865d7bb-c1c1-4bfd-a09d-cd740ecd5831]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.657 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-f4edfbf5-c8ce-414d-90e1-ed10e0f29897
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/f4edfbf5-c8ce-414d-90e1-ed10e0f29897.pid.haproxy
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID f4edfbf5-c8ce-414d-90e1-ed10e0f29897
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:24:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:12.658 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897', 'env', 'PROCESS_TAG=haproxy-f4edfbf5-c8ce-414d-90e1-ed10e0f29897', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f4edfbf5-c8ce-414d-90e1-ed10e0f29897.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.714 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444652.7140877, 86b65c1c-c168-4ecd-b98d-9080d00d16dc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.715 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] VM Started (Lifecycle Event)
Jan 26 16:24:12 compute-0 sshd-session[356418]: Connection closed by authenticating user root 209.38.206.249 port 35898 [preauth]
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.734 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.739 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444652.715175, 86b65c1c-c168-4ecd-b98d-9080d00d16dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.740 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] VM Paused (Lifecycle Event)
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.757 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.761 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:12 compute-0 nova_compute[239965]: 2026-01-26 16:24:12.780 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:24:13 compute-0 podman[356529]: 2026-01-26 16:24:13.051166147 +0000 UTC m=+0.042616314 container create 091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:24:13 compute-0 systemd[1]: Started libpod-conmon-091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0.scope.
Jan 26 16:24:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:24:13 compute-0 podman[356529]: 2026-01-26 16:24:13.030215915 +0000 UTC m=+0.021666092 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a90d1b6e3aae1aa33a0384ee9f6de08cf5c52a65a713958d96f51a95f2c576/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:13 compute-0 podman[356529]: 2026-01-26 16:24:13.142826003 +0000 UTC m=+0.134276240 container init 091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 16:24:13 compute-0 podman[356529]: 2026-01-26 16:24:13.148308317 +0000 UTC m=+0.139758514 container start 091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:24:13 compute-0 neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897[356544]: [NOTICE]   (356548) : New worker (356550) forked
Jan 26 16:24:13 compute-0 neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897[356544]: [NOTICE]   (356548) : Loading success.
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.250 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.452 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444638.4521856, 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.453 239969 INFO nova.compute.manager [-] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] VM Stopped (Lifecycle Event)
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.461 239969 DEBUG nova.compute.manager [req-9de59618-024a-4c9b-90ad-e63f4f4bbbd4 req-9eec6c3c-1a15-4e1d-a01a-698c5d7b73a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received event network-vif-plugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.462 239969 DEBUG oslo_concurrency.lockutils [req-9de59618-024a-4c9b-90ad-e63f4f4bbbd4 req-9eec6c3c-1a15-4e1d-a01a-698c5d7b73a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5e597276-b625-4985-8433-360736cd3c79-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.462 239969 DEBUG oslo_concurrency.lockutils [req-9de59618-024a-4c9b-90ad-e63f4f4bbbd4 req-9eec6c3c-1a15-4e1d-a01a-698c5d7b73a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.462 239969 DEBUG oslo_concurrency.lockutils [req-9de59618-024a-4c9b-90ad-e63f4f4bbbd4 req-9eec6c3c-1a15-4e1d-a01a-698c5d7b73a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.462 239969 DEBUG nova.compute.manager [req-9de59618-024a-4c9b-90ad-e63f4f4bbbd4 req-9eec6c3c-1a15-4e1d-a01a-698c5d7b73a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] No waiting events found dispatching network-vif-plugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.463 239969 WARNING nova.compute.manager [req-9de59618-024a-4c9b-90ad-e63f4f4bbbd4 req-9eec6c3c-1a15-4e1d-a01a-698c5d7b73a0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received unexpected event network-vif-plugged-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 for instance with vm_state active and task_state deleting.
Jan 26 16:24:13 compute-0 nova_compute[239965]: 2026-01-26 16:24:13.474 239969 DEBUG nova.compute.manager [None req-8c03a390-758c-40ca-9fcb-91012666bf8c - - - - - -] [instance: 7ddfbe11-7a13-4e3c-8fe6-0cca0be74583] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:13 compute-0 ceph-mon[75140]: pgmap v2205: 305 pgs: 305 active+clean; 134 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 117 KiB/s rd, 1.9 MiB/s wr, 119 op/s
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.100 239969 DEBUG nova.network.neutron [-] [instance: 5e597276-b625-4985-8433-360736cd3c79] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.122 239969 INFO nova.compute.manager [-] [instance: 5e597276-b625-4985-8433-360736cd3c79] Took 2.66 seconds to deallocate network for instance.
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.173 239969 DEBUG oslo_concurrency.lockutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.174 239969 DEBUG oslo_concurrency.lockutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.178 239969 DEBUG nova.compute.manager [req-72353709-5772-4ddb-b43f-de838ff69813 req-1379bf33-a766-4767-b705-ecaa17eb5da9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5e597276-b625-4985-8433-360736cd3c79] Received event network-vif-deleted-743ec95a-b5ab-4fda-bdba-b98dc2c3c3a2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.265 239969 DEBUG oslo_concurrency.processutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 134 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 26 16:24:14 compute-0 ceph-mon[75140]: pgmap v2206: 305 pgs: 305 active+clean; 134 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Jan 26 16:24:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1079822205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.832 239969 DEBUG oslo_concurrency.processutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.841 239969 DEBUG nova.compute.provider_tree [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.859 239969 DEBUG nova.scheduler.client.report [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.885 239969 DEBUG oslo_concurrency.lockutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.924 239969 INFO nova.scheduler.client.report [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Deleted allocations for instance 5e597276-b625-4985-8433-360736cd3c79
Jan 26 16:24:14 compute-0 nova_compute[239965]: 2026-01-26 16:24:14.996 239969 DEBUG oslo_concurrency.lockutils [None req-e590e470-1464-4a40-84c1-c74e5cc60e30 0338b308661e4205839e9e957f674d8c df501970c9864b77b86bb4aa58ef846b - - default default] Lock "5e597276-b625-4985-8433-360736cd3c79" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.565 239969 DEBUG nova.compute.manager [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received event network-vif-plugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.566 239969 DEBUG oslo_concurrency.lockutils [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.566 239969 DEBUG oslo_concurrency.lockutils [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.566 239969 DEBUG oslo_concurrency.lockutils [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.566 239969 DEBUG nova.compute.manager [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Processing event network-vif-plugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.567 239969 DEBUG nova.compute.manager [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received event network-vif-plugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.567 239969 DEBUG oslo_concurrency.lockutils [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.567 239969 DEBUG oslo_concurrency.lockutils [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.567 239969 DEBUG oslo_concurrency.lockutils [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.567 239969 DEBUG nova.compute.manager [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] No waiting events found dispatching network-vif-plugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.568 239969 WARNING nova.compute.manager [req-0900cf1a-31c9-4dfc-bb9e-9ec1ba8117e0 req-93e8f7c6-9d61-40b3-b134-e266ec9e6987 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received unexpected event network-vif-plugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 for instance with vm_state building and task_state spawning.
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.568 239969 DEBUG nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.573 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444655.5734951, 86b65c1c-c168-4ecd-b98d-9080d00d16dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.574 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] VM Resumed (Lifecycle Event)
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.576 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.578 239969 INFO nova.virt.libvirt.driver [-] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Instance spawned successfully.
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.579 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.596 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.602 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.606 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.607 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.607 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.607 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.608 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.608 239969 DEBUG nova.virt.libvirt.driver [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.639 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.673 239969 INFO nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Took 12.28 seconds to spawn the instance on the hypervisor.
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.673 239969 DEBUG nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.737 239969 INFO nova.compute.manager [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Took 14.03 seconds to build instance.
Jan 26 16:24:15 compute-0 nova_compute[239965]: 2026-01-26 16:24:15.752 239969 DEBUG oslo_concurrency.lockutils [None req-05230452-19a3-4b07-a76b-38e73ab04bf6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1079822205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:16 compute-0 nova_compute[239965]: 2026-01-26 16:24:16.012 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 134 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 26 16:24:16 compute-0 ceph-mon[75140]: pgmap v2207: 305 pgs: 305 active+clean; 134 MiB data, 936 MiB used, 59 GiB / 60 GiB avail; 106 KiB/s rd, 1.8 MiB/s wr, 92 op/s
Jan 26 16:24:17 compute-0 nova_compute[239965]: 2026-01-26 16:24:17.567 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:18 compute-0 nova_compute[239965]: 2026-01-26 16:24:18.204 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "057983d1-894b-493d-8d76-df3cd85a69fb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:18 compute-0 nova_compute[239965]: 2026-01-26 16:24:18.204 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:18 compute-0 nova_compute[239965]: 2026-01-26 16:24:18.232 239969 DEBUG nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:24:18 compute-0 nova_compute[239965]: 2026-01-26 16:24:18.295 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:18 compute-0 nova_compute[239965]: 2026-01-26 16:24:18.303 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:18 compute-0 nova_compute[239965]: 2026-01-26 16:24:18.303 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:18 compute-0 nova_compute[239965]: 2026-01-26 16:24:18.317 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:24:18 compute-0 nova_compute[239965]: 2026-01-26 16:24:18.317 239969 INFO nova.compute.claims [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:24:18 compute-0 nova_compute[239965]: 2026-01-26 16:24:18.428 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 134 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 26 16:24:19 compute-0 ceph-mon[75140]: pgmap v2208: 305 pgs: 305 active+clean; 134 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Jan 26 16:24:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2476395666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.084 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.090 239969 DEBUG nova.compute.provider_tree [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.104 239969 DEBUG nova.scheduler.client.report [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.109 239969 DEBUG nova.compute.manager [req-d6ee5a2c-f122-4c6d-8eab-a83075a8d2ae req-80b92dbe-3927-45d7-8a56-b8cff81049d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received event network-changed-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.110 239969 DEBUG nova.compute.manager [req-d6ee5a2c-f122-4c6d-8eab-a83075a8d2ae req-80b92dbe-3927-45d7-8a56-b8cff81049d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Refreshing instance network info cache due to event network-changed-b54c88a7-5376-4cb4-b4c2-09833f6c7c31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.110 239969 DEBUG oslo_concurrency.lockutils [req-d6ee5a2c-f122-4c6d-8eab-a83075a8d2ae req-80b92dbe-3927-45d7-8a56-b8cff81049d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.110 239969 DEBUG oslo_concurrency.lockutils [req-d6ee5a2c-f122-4c6d-8eab-a83075a8d2ae req-80b92dbe-3927-45d7-8a56-b8cff81049d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.110 239969 DEBUG nova.network.neutron [req-d6ee5a2c-f122-4c6d-8eab-a83075a8d2ae req-80b92dbe-3927-45d7-8a56-b8cff81049d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Refreshing network info cache for port b54c88a7-5376-4cb4-b4c2-09833f6c7c31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.137 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.138 239969 DEBUG nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.200 239969 DEBUG nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.201 239969 DEBUG nova.network.neutron [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.221 239969 INFO nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.242 239969 DEBUG nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.354 239969 DEBUG nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.356 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.357 239969 INFO nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Creating image(s)
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.390 239969 DEBUG nova.storage.rbd_utils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 057983d1-894b-493d-8d76-df3cd85a69fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.428 239969 DEBUG nova.storage.rbd_utils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 057983d1-894b-493d-8d76-df3cd85a69fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.464 239969 DEBUG nova.storage.rbd_utils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 057983d1-894b-493d-8d76-df3cd85a69fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.470 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.571 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.572 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.573 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.573 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.595 239969 DEBUG nova.storage.rbd_utils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 057983d1-894b-493d-8d76-df3cd85a69fb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.599 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 057983d1-894b-493d-8d76-df3cd85a69fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.812 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444644.8114042, 1999f11e-e9a2-4442-8baf-d8e51e0c856d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.813 239969 INFO nova.compute.manager [-] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] VM Stopped (Lifecycle Event)
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.844 239969 DEBUG nova.compute.manager [None req-df3f787d-f442-4a22-9827-a6b3e677ae36 - - - - - -] [instance: 1999f11e-e9a2-4442-8baf-d8e51e0c856d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:19 compute-0 nova_compute[239965]: 2026-01-26 16:24:19.980 239969 DEBUG nova.policy [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:24:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2476395666' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:20 compute-0 nova_compute[239965]: 2026-01-26 16:24:20.214 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 057983d1-894b-493d-8d76-df3cd85a69fb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:20 compute-0 nova_compute[239965]: 2026-01-26 16:24:20.275 239969 DEBUG nova.storage.rbd_utils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image 057983d1-894b-493d-8d76-df3cd85a69fb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:24:20 compute-0 nova_compute[239965]: 2026-01-26 16:24:20.392 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:20 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #50. Immutable memtables: 0.
Jan 26 16:24:20 compute-0 nova_compute[239965]: 2026-01-26 16:24:20.567 239969 DEBUG nova.objects.instance [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid 057983d1-894b-493d-8d76-df3cd85a69fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:20 compute-0 nova_compute[239965]: 2026-01-26 16:24:20.580 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:24:20 compute-0 nova_compute[239965]: 2026-01-26 16:24:20.581 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Ensure instance console log exists: /var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:24:20 compute-0 nova_compute[239965]: 2026-01-26 16:24:20.582 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:20 compute-0 nova_compute[239965]: 2026-01-26 16:24:20.582 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:20 compute-0 nova_compute[239965]: 2026-01-26 16:24:20.582 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 134 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 99 op/s
Jan 26 16:24:20 compute-0 sshd-session[356733]: Invalid user solana from 45.148.10.240 port 53052
Jan 26 16:24:20 compute-0 sshd-session[356733]: Connection closed by invalid user solana 45.148.10.240 port 53052 [preauth]
Jan 26 16:24:21 compute-0 nova_compute[239965]: 2026-01-26 16:24:21.015 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:21 compute-0 ceph-mon[75140]: pgmap v2209: 305 pgs: 305 active+clean; 134 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 99 op/s
Jan 26 16:24:21 compute-0 nova_compute[239965]: 2026-01-26 16:24:21.388 239969 DEBUG nova.network.neutron [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Successfully updated port: 07819469-3bd7-4885-9144-3eceda4eadcd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:24:21 compute-0 nova_compute[239965]: 2026-01-26 16:24:21.933 239969 DEBUG nova.compute.manager [req-c349caf2-f58f-4fea-b4cb-b3e6f63899b5 req-0dabe3c0-10bf-445f-8277-c77f4a5367f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Received event network-changed-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:21 compute-0 nova_compute[239965]: 2026-01-26 16:24:21.934 239969 DEBUG nova.compute.manager [req-c349caf2-f58f-4fea-b4cb-b3e6f63899b5 req-0dabe3c0-10bf-445f-8277-c77f4a5367f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Refreshing instance network info cache due to event network-changed-07819469-3bd7-4885-9144-3eceda4eadcd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:21 compute-0 nova_compute[239965]: 2026-01-26 16:24:21.935 239969 DEBUG oslo_concurrency.lockutils [req-c349caf2-f58f-4fea-b4cb-b3e6f63899b5 req-0dabe3c0-10bf-445f-8277-c77f4a5367f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-057983d1-894b-493d-8d76-df3cd85a69fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:21 compute-0 nova_compute[239965]: 2026-01-26 16:24:21.935 239969 DEBUG oslo_concurrency.lockutils [req-c349caf2-f58f-4fea-b4cb-b3e6f63899b5 req-0dabe3c0-10bf-445f-8277-c77f4a5367f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-057983d1-894b-493d-8d76-df3cd85a69fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:21 compute-0 nova_compute[239965]: 2026-01-26 16:24:21.936 239969 DEBUG nova.network.neutron [req-c349caf2-f58f-4fea-b4cb-b3e6f63899b5 req-0dabe3c0-10bf-445f-8277-c77f4a5367f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Refreshing network info cache for port 07819469-3bd7-4885-9144-3eceda4eadcd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:21 compute-0 nova_compute[239965]: 2026-01-26 16:24:21.965 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-057983d1-894b-493d-8d76-df3cd85a69fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:22 compute-0 ovn_controller[146046]: 2026-01-26T16:24:22Z|01395|binding|INFO|Releasing lport b81958cf-d91b-4990-a7d9-15ab0f872f52 from this chassis (sb_readonly=0)
Jan 26 16:24:22 compute-0 nova_compute[239965]: 2026-01-26 16:24:22.142 239969 DEBUG nova.network.neutron [req-c349caf2-f58f-4fea-b4cb-b3e6f63899b5 req-0dabe3c0-10bf-445f-8277-c77f4a5367f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:24:22 compute-0 nova_compute[239965]: 2026-01-26 16:24:22.208 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:22 compute-0 nova_compute[239965]: 2026-01-26 16:24:22.236 239969 DEBUG nova.network.neutron [req-d6ee5a2c-f122-4c6d-8eab-a83075a8d2ae req-80b92dbe-3927-45d7-8a56-b8cff81049d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Updated VIF entry in instance network info cache for port b54c88a7-5376-4cb4-b4c2-09833f6c7c31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:24:22 compute-0 nova_compute[239965]: 2026-01-26 16:24:22.237 239969 DEBUG nova.network.neutron [req-d6ee5a2c-f122-4c6d-8eab-a83075a8d2ae req-80b92dbe-3927-45d7-8a56-b8cff81049d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Updating instance_info_cache with network_info: [{"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:22 compute-0 nova_compute[239965]: 2026-01-26 16:24:22.269 239969 DEBUG oslo_concurrency.lockutils [req-d6ee5a2c-f122-4c6d-8eab-a83075a8d2ae req-80b92dbe-3927-45d7-8a56-b8cff81049d3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 16:24:23 compute-0 nova_compute[239965]: 2026-01-26 16:24:23.388 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:23 compute-0 nova_compute[239965]: 2026-01-26 16:24:23.437 239969 DEBUG nova.network.neutron [req-c349caf2-f58f-4fea-b4cb-b3e6f63899b5 req-0dabe3c0-10bf-445f-8277-c77f4a5367f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:23 compute-0 nova_compute[239965]: 2026-01-26 16:24:23.450 239969 DEBUG oslo_concurrency.lockutils [req-c349caf2-f58f-4fea-b4cb-b3e6f63899b5 req-0dabe3c0-10bf-445f-8277-c77f4a5367f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-057983d1-894b-493d-8d76-df3cd85a69fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:23 compute-0 nova_compute[239965]: 2026-01-26 16:24:23.451 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-057983d1-894b-493d-8d76-df3cd85a69fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:23 compute-0 nova_compute[239965]: 2026-01-26 16:24:23.451 239969 DEBUG nova.network.neutron [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:24:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:23 compute-0 nova_compute[239965]: 2026-01-26 16:24:23.586 239969 DEBUG nova.network.neutron [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:24:24 compute-0 ceph-mon[75140]: pgmap v2210: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.272 239969 DEBUG nova.network.neutron [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Updating instance_info_cache with network_info: [{"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.292 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-057983d1-894b-493d-8d76-df3cd85a69fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.292 239969 DEBUG nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Instance network_info: |[{"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.295 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Start _get_guest_xml network_info=[{"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.302 239969 WARNING nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.307 239969 DEBUG nova.virt.libvirt.host [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.308 239969 DEBUG nova.virt.libvirt.host [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.311 239969 DEBUG nova.virt.libvirt.host [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.312 239969 DEBUG nova.virt.libvirt.host [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.313 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.313 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.314 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.314 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.315 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.315 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.315 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.316 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.316 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.316 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.317 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.317 239969 DEBUG nova.virt.hardware [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.320 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.376 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "1612372a-bdd8-4900-82cf-902335d93c41" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.377 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.394 239969 DEBUG nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:24:24 compute-0 sshd-session[356771]: Invalid user mailadmin from 209.38.206.249 port 35914
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.466 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.466 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.472 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.473 239969 INFO nova.compute.claims [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:24:24 compute-0 sshd-session[356771]: Connection closed by invalid user mailadmin 209.38.206.249 port 35914 [preauth]
Jan 26 16:24:24 compute-0 nova_compute[239965]: 2026-01-26 16:24:24.594 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.075 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444650.0741153, 5e597276-b625-4985-8433-360736cd3c79 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.076 239969 INFO nova.compute.manager [-] [instance: 5e597276-b625-4985-8433-360736cd3c79] VM Stopped (Lifecycle Event)
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.100 239969 DEBUG nova.compute.manager [None req-2d5d0c12-9970-4d9a-8160-432a1cc645cd - - - - - -] [instance: 5e597276-b625-4985-8433-360736cd3c79] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1822929239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:24:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4254268111' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.458 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.864s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.462 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.498 239969 DEBUG nova.storage.rbd_utils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 057983d1-894b-493d-8d76-df3cd85a69fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.505 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.568 239969 DEBUG nova.compute.provider_tree [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.604 239969 DEBUG nova.scheduler.client.report [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.628 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.629 239969 DEBUG nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.681 239969 DEBUG nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.684 239969 DEBUG nova.network.neutron [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.703 239969 INFO nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.723 239969 DEBUG nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:24:25 compute-0 ceph-mon[75140]: pgmap v2211: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.823 239969 DEBUG nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.825 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:24:25 compute-0 nova_compute[239965]: 2026-01-26 16:24:25.826 239969 INFO nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Creating image(s)
Jan 26 16:24:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 16:24:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:24:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1698628012' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.615 239969 DEBUG nova.storage.rbd_utils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 1612372a-bdd8-4900-82cf-902335d93c41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.639 239969 DEBUG nova.storage.rbd_utils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 1612372a-bdd8-4900-82cf-902335d93c41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.665 239969 DEBUG nova.storage.rbd_utils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 1612372a-bdd8-4900-82cf-902335d93c41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.669 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.712 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.718 239969 DEBUG nova.policy [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ace5f36058684b5782589457d9e15921', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.723 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.725 239969 DEBUG nova.virt.libvirt.vif [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-412228353',display_name='tempest-TestNetworkBasicOps-server-412228353',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-412228353',id=133,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGd+FHLzzMbl7/VCFqOC6bdXBfxCu5YL/NLlWCfo2ucnfSHdg/lsHpZUXzUrr+HbzEaiY8BFeG/fkGOjEG18DvIhF9GYfLTzKzJ9G7eZ46bGQ7c7I3LeDWMgcDJ+jHGUQw==',key_name='tempest-TestNetworkBasicOps-320551724',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-99n8f3oc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:24:19Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=057983d1-894b-493d-8d76-df3cd85a69fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.725 239969 DEBUG nova.network.os_vif_util [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.727 239969 DEBUG nova.network.os_vif_util [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.728 239969 DEBUG nova.objects.instance [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 057983d1-894b-493d-8d76-df3cd85a69fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.749 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <uuid>057983d1-894b-493d-8d76-df3cd85a69fb</uuid>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <name>instance-00000085</name>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-412228353</nova:name>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:24:24</nova:creationTime>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <nova:port uuid="07819469-3bd7-4885-9144-3eceda4eadcd">
Jan 26 16:24:26 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <system>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <entry name="serial">057983d1-894b-493d-8d76-df3cd85a69fb</entry>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <entry name="uuid">057983d1-894b-493d-8d76-df3cd85a69fb</entry>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     </system>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <os>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   </os>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <features>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   </features>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/057983d1-894b-493d-8d76-df3cd85a69fb_disk">
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/057983d1-894b-493d-8d76-df3cd85a69fb_disk.config">
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:24:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:b4:79:ac"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <target dev="tap07819469-3b"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb/console.log" append="off"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <video>
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     </video>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:24:26 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:24:26 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:24:26 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:24:26 compute-0 nova_compute[239965]: </domain>
Jan 26 16:24:26 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.752 239969 DEBUG nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Preparing to wait for external event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.753 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.753 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.754 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.756 239969 DEBUG nova.virt.libvirt.vif [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-412228353',display_name='tempest-TestNetworkBasicOps-server-412228353',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-412228353',id=133,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGd+FHLzzMbl7/VCFqOC6bdXBfxCu5YL/NLlWCfo2ucnfSHdg/lsHpZUXzUrr+HbzEaiY8BFeG/fkGOjEG18DvIhF9GYfLTzKzJ9G7eZ46bGQ7c7I3LeDWMgcDJ+jHGUQw==',key_name='tempest-TestNetworkBasicOps-320551724',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-99n8f3oc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:24:19Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=057983d1-894b-493d-8d76-df3cd85a69fb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.757 239969 DEBUG nova.network.os_vif_util [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.758 239969 DEBUG nova.network.os_vif_util [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.758 239969 DEBUG os_vif [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.759 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.759 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.760 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.761 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.762 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.762 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.763 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.788 239969 DEBUG nova.storage.rbd_utils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 1612372a-bdd8-4900-82cf-902335d93c41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.793 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 1612372a-bdd8-4900-82cf-902335d93c41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.852 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.854 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07819469-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.861 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07819469-3b, col_values=(('external_ids', {'iface-id': '07819469-3bd7-4885-9144-3eceda4eadcd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:79:ac', 'vm-uuid': '057983d1-894b-493d-8d76-df3cd85a69fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.905 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:26 compute-0 NetworkManager[48954]: <info>  [1769444666.9063] manager: (tap07819469-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/571)
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.911 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.911 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:26 compute-0 nova_compute[239965]: 2026-01-26 16:24:26.912 239969 INFO os_vif [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b')
Jan 26 16:24:27 compute-0 nova_compute[239965]: 2026-01-26 16:24:27.419 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:24:27 compute-0 nova_compute[239965]: 2026-01-26 16:24:27.420 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:24:27 compute-0 nova_compute[239965]: 2026-01-26 16:24:27.420 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:b4:79:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:24:27 compute-0 nova_compute[239965]: 2026-01-26 16:24:27.420 239969 INFO nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Using config drive
Jan 26 16:24:27 compute-0 nova_compute[239965]: 2026-01-26 16:24:27.447 239969 DEBUG nova.storage.rbd_utils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 057983d1-894b-493d-8d76-df3cd85a69fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:27 compute-0 nova_compute[239965]: 2026-01-26 16:24:27.549 239969 DEBUG nova.network.neutron [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Successfully created port: 0a21656f-54b8-4cb5-9654-be9b13a49ce2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:24:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1822929239' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4254268111' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:27 compute-0 ceph-mon[75140]: pgmap v2212: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 16:24:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1698628012' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:27 compute-0 sshd-session[356951]: Received disconnect from 45.148.10.157 port 34312:11:  [preauth]
Jan 26 16:24:27 compute-0 sshd-session[356951]: Disconnected from authenticating user root 45.148.10.157 port 34312 [preauth]
Jan 26 16:24:27 compute-0 nova_compute[239965]: 2026-01-26 16:24:27.856 239969 INFO nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Creating config drive at /var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb/disk.config
Jan 26 16:24:27 compute-0 nova_compute[239965]: 2026-01-26 16:24:27.865 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprc41rtgr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:28 compute-0 nova_compute[239965]: 2026-01-26 16:24:28.040 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprc41rtgr" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 26 16:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:24:28
Jan 26 16:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'images']
Jan 26 16:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:24:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.243 239969 DEBUG nova.storage.rbd_utils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 057983d1-894b-493d-8d76-df3cd85a69fb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.248 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb/disk.config 057983d1-894b-493d-8d76-df3cd85a69fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.303 239969 DEBUG nova.network.neutron [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Successfully updated port: 0a21656f-54b8-4cb5-9654-be9b13a49ce2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.313 239969 DEBUG nova.compute.manager [req-016b613e-ed57-46ba-b998-950815908533 req-a2ca360f-7f4d-4c0b-8720-081841e213df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received event network-changed-0a21656f-54b8-4cb5-9654-be9b13a49ce2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.314 239969 DEBUG nova.compute.manager [req-016b613e-ed57-46ba-b998-950815908533 req-a2ca360f-7f4d-4c0b-8720-081841e213df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Refreshing instance network info cache due to event network-changed-0a21656f-54b8-4cb5-9654-be9b13a49ce2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.314 239969 DEBUG oslo_concurrency.lockutils [req-016b613e-ed57-46ba-b998-950815908533 req-a2ca360f-7f4d-4c0b-8720-081841e213df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.315 239969 DEBUG oslo_concurrency.lockutils [req-016b613e-ed57-46ba-b998-950815908533 req-a2ca360f-7f4d-4c0b-8720-081841e213df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.315 239969 DEBUG nova.network.neutron [req-016b613e-ed57-46ba-b998-950815908533 req-a2ca360f-7f4d-4c0b-8720-081841e213df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Refreshing network info cache for port 0a21656f-54b8-4cb5-9654-be9b13a49ce2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.340 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:29 compute-0 nova_compute[239965]: 2026-01-26 16:24:29.797 239969 DEBUG nova.network.neutron [req-016b613e-ed57-46ba-b998-950815908533 req-a2ca360f-7f4d-4c0b-8720-081841e213df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:24:29 compute-0 sudo[357008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:24:29 compute-0 sudo[357008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:30 compute-0 sudo[357008]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:30 compute-0 sudo[357033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 16:24:30 compute-0 sudo[357033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:30 compute-0 ceph-mon[75140]: pgmap v2213: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Jan 26 16:24:30 compute-0 nova_compute[239965]: 2026-01-26 16:24:30.147 239969 DEBUG nova.network.neutron [req-016b613e-ed57-46ba-b998-950815908533 req-a2ca360f-7f4d-4c0b-8720-081841e213df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:30 compute-0 nova_compute[239965]: 2026-01-26 16:24:30.171 239969 DEBUG oslo_concurrency.lockutils [req-016b613e-ed57-46ba-b998-950815908533 req-a2ca360f-7f4d-4c0b-8720-081841e213df a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:30 compute-0 nova_compute[239965]: 2026-01-26 16:24:30.173 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquired lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:30 compute-0 nova_compute[239965]: 2026-01-26 16:24:30.173 239969 DEBUG nova.network.neutron [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:24:30 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 7.
Jan 26 16:24:30 compute-0 nova_compute[239965]: 2026-01-26 16:24:30.332 239969 DEBUG nova.network.neutron [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:24:30 compute-0 sudo[357033]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:24:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:24:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:24:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:30 compute-0 nova_compute[239965]: 2026-01-26 16:24:30.899 239969 DEBUG oslo_concurrency.processutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb/disk.config 057983d1-894b-493d-8d76-df3cd85a69fb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.651s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:30 compute-0 nova_compute[239965]: 2026-01-26 16:24:30.900 239969 INFO nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Deleting local config drive /var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb/disk.config because it was imported into RBD.
Jan 26 16:24:30 compute-0 nova_compute[239965]: 2026-01-26 16:24:30.934 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 1612372a-bdd8-4900-82cf-902335d93c41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:30 compute-0 sudo[357085]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:24:30 compute-0 sudo[357085]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:30 compute-0 sudo[357085]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:30 compute-0 kernel: tap07819469-3b: entered promiscuous mode
Jan 26 16:24:30 compute-0 NetworkManager[48954]: <info>  [1769444670.9649] manager: (tap07819469-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/572)
Jan 26 16:24:30 compute-0 ovn_controller[146046]: 2026-01-26T16:24:30Z|01396|binding|INFO|Claiming lport 07819469-3bd7-4885-9144-3eceda4eadcd for this chassis.
Jan 26 16:24:30 compute-0 ovn_controller[146046]: 2026-01-26T16:24:30Z|01397|binding|INFO|07819469-3bd7-4885-9144-3eceda4eadcd: Claiming fa:16:3e:b4:79:ac 10.100.0.8
Jan 26 16:24:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:30.974 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:79:ac 10.100.0.8'], port_security=['fa:16:3e:b4:79:ac 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '057983d1-894b-493d-8d76-df3cd85a69fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '039287cb-946e-4ce0-ad47-a6333988fa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c96c5555-0193-4c98-ba58-f33ef27ed1fc, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=07819469-3bd7-4885-9144-3eceda4eadcd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:30.976 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 07819469-3bd7-4885-9144-3eceda4eadcd in datapath f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 bound to our chassis
Jan 26 16:24:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:30.979 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f62bb3f6-b715-4244-9fb7-5f2b6ae70b93
Jan 26 16:24:30 compute-0 ovn_controller[146046]: 2026-01-26T16:24:30Z|01398|binding|INFO|Setting lport 07819469-3bd7-4885-9144-3eceda4eadcd ovn-installed in OVS
Jan 26 16:24:30 compute-0 ovn_controller[146046]: 2026-01-26T16:24:30Z|01399|binding|INFO|Setting lport 07819469-3bd7-4885-9144-3eceda4eadcd up in Southbound
Jan 26 16:24:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:30.994 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[349f04ce-8d3d-45d3-a52b-3d307c5eb836]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:30.995 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf62bb3f6-b1 in ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:24:30 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:30.999 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf62bb3f6-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:30.999 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[23491dd4-bbb6-4c9e-8720-c4582f2200cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.000 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ff8cbd51-bf7a-411f-8f8b-a6a1a62ad261]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.001 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.012 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[73dbdc4b-99f6-4a00-aa4a-35f029606f67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 systemd-machined[208061]: New machine qemu-160-instance-00000085.
Jan 26 16:24:31 compute-0 systemd[1]: Started Virtual Machine qemu-160-instance-00000085.
Jan 26 16:24:31 compute-0 systemd-udevd[357168]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:24:31 compute-0 sudo[357136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:24:31 compute-0 sudo[357136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.039 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2524b2f1-400d-4126-bf7b-8445c53dc677]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 NetworkManager[48954]: <info>  [1769444671.0537] device (tap07819469-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:24:31 compute-0 NetworkManager[48954]: <info>  [1769444671.0551] device (tap07819469-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.070 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1912d0e1-5f48-4958-925f-8b18168e1a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.078 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[26a3fdd5-ec1c-41de-815e-4e9417eb578c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 NetworkManager[48954]: <info>  [1769444671.0801] manager: (tapf62bb3f6-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/573)
Jan 26 16:24:31 compute-0 systemd-udevd[357190]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.104 239969 DEBUG nova.storage.rbd_utils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] resizing rbd image 1612372a-bdd8-4900-82cf-902335d93c41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.115 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2be527-9845-49b9-b506-791e61f78928]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.118 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d87a6aa8-a17f-4634-b9ea-130ee5d574ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 NetworkManager[48954]: <info>  [1769444671.1438] device (tapf62bb3f6-b0): carrier: link connected
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.148 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7f8bf2-494d-4eee-a7cf-e65170c847b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.169 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[48a30aca-c358-451f-999a-54dee8d2ac75]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf62bb3f6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:71:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616195, 'reachable_time': 17264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357237, 'error': None, 'target': 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.187 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a0da2864-fdcd-4c33-a4ba-9e0bec65b572]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:7115'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616195, 'tstamp': 616195}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357238, 'error': None, 'target': 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.202 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cc8365af-bdc3-49be-b047-b81dd049d66c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf62bb3f6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:71:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 401], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616195, 'reachable_time': 17264, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357239, 'error': None, 'target': 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.238 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd2462b-c3f7-481b-b575-5dbc5d00675a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.325 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2aeb40c7-2739-4eab-beb9-2d788928432b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.326 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf62bb3f6-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.327 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.327 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf62bb3f6-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:31 compute-0 kernel: tapf62bb3f6-b0: entered promiscuous mode
Jan 26 16:24:31 compute-0 NetworkManager[48954]: <info>  [1769444671.3298] manager: (tapf62bb3f6-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.332 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf62bb3f6-b0, col_values=(('external_ids', {'iface-id': '6ff4e0de-8425-4a37-9c4d-ac8157b0fa40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:31 compute-0 ovn_controller[146046]: 2026-01-26T16:24:31Z|01400|binding|INFO|Releasing lport 6ff4e0de-8425-4a37-9c4d-ac8157b0fa40 from this chassis (sb_readonly=0)
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.335 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f62bb3f6-b715-4244-9fb7-5f2b6ae70b93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f62bb3f6-b715-4244-9fb7-5f2b6ae70b93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.335 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b183d2d4-b40a-499c-9388-894cef93446a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.336 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/f62bb3f6-b715-4244-9fb7-5f2b6ae70b93.pid.haproxy
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID f62bb3f6-b715-4244-9fb7-5f2b6ae70b93
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:24:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:31.338 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'env', 'PROCESS_TAG=haproxy-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f62bb3f6-b715-4244-9fb7-5f2b6ae70b93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.584 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.591 239969 DEBUG nova.compute.manager [req-00ce8985-5e47-49f6-ba8e-81db6233ed8b req-0de8e924-4339-4f77-b6c6-025b37cc9538 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Received event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.591 239969 DEBUG oslo_concurrency.lockutils [req-00ce8985-5e47-49f6-ba8e-81db6233ed8b req-0de8e924-4339-4f77-b6c6-025b37cc9538 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.591 239969 DEBUG oslo_concurrency.lockutils [req-00ce8985-5e47-49f6-ba8e-81db6233ed8b req-0de8e924-4339-4f77-b6c6-025b37cc9538 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.591 239969 DEBUG oslo_concurrency.lockutils [req-00ce8985-5e47-49f6-ba8e-81db6233ed8b req-0de8e924-4339-4f77-b6c6-025b37cc9538 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.592 239969 DEBUG nova.compute.manager [req-00ce8985-5e47-49f6-ba8e-81db6233ed8b req-0de8e924-4339-4f77-b6c6-025b37cc9538 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Processing event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:24:31 compute-0 sudo[357136]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 16:24:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:24:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:24:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:24:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:24:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:24:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.819 239969 DEBUG nova.network.neutron [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Updating instance_info_cache with network_info: [{"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.837 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Releasing lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.837 239969 DEBUG nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Instance network_info: |[{"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:24:31 compute-0 podman[357319]: 2026-01-26 16:24:31.771438385 +0000 UTC m=+0.037303765 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:24:31 compute-0 nova_compute[239965]: 2026-01-26 16:24:31.943 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:32 compute-0 ceph-mon[75140]: pgmap v2214: 305 pgs: 305 active+clean; 180 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Jan 26 16:24:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 248 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 5.6 MiB/s wr, 105 op/s
Jan 26 16:24:32 compute-0 ovn_controller[146046]: 2026-01-26T16:24:32Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:05:f8 10.100.0.9
Jan 26 16:24:32 compute-0 ovn_controller[146046]: 2026-01-26T16:24:32Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:05:f8 10.100.0.9
Jan 26 16:24:33 compute-0 nova_compute[239965]: 2026-01-26 16:24:33.393 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:33 compute-0 nova_compute[239965]: 2026-01-26 16:24:33.573 239969 DEBUG nova.compute.manager [req-b103cdf9-c5d6-415d-950e-085fa7bc4ced req-ce7e60e3-f783-4d37-becd-24291b6fee21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Received event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:33 compute-0 nova_compute[239965]: 2026-01-26 16:24:33.573 239969 DEBUG oslo_concurrency.lockutils [req-b103cdf9-c5d6-415d-950e-085fa7bc4ced req-ce7e60e3-f783-4d37-becd-24291b6fee21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:33 compute-0 nova_compute[239965]: 2026-01-26 16:24:33.574 239969 DEBUG oslo_concurrency.lockutils [req-b103cdf9-c5d6-415d-950e-085fa7bc4ced req-ce7e60e3-f783-4d37-becd-24291b6fee21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:33 compute-0 nova_compute[239965]: 2026-01-26 16:24:33.574 239969 DEBUG oslo_concurrency.lockutils [req-b103cdf9-c5d6-415d-950e-085fa7bc4ced req-ce7e60e3-f783-4d37-becd-24291b6fee21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:33 compute-0 nova_compute[239965]: 2026-01-26 16:24:33.574 239969 DEBUG nova.compute.manager [req-b103cdf9-c5d6-415d-950e-085fa7bc4ced req-ce7e60e3-f783-4d37-becd-24291b6fee21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] No waiting events found dispatching network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:33 compute-0 nova_compute[239965]: 2026-01-26 16:24:33.575 239969 WARNING nova.compute.manager [req-b103cdf9-c5d6-415d-950e-085fa7bc4ced req-ce7e60e3-f783-4d37-becd-24291b6fee21 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Received unexpected event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd for instance with vm_state building and task_state spawning.
Jan 26 16:24:34 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:24:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:24:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:24:34 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:24:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:24:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:24:34 compute-0 podman[357319]: 2026-01-26 16:24:34.137227432 +0000 UTC m=+2.403092832 container create 38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:24:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:24:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:24:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:24:34 compute-0 ceph-mon[75140]: pgmap v2215: 305 pgs: 305 active+clean; 248 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 349 KiB/s rd, 5.6 MiB/s wr, 105 op/s
Jan 26 16:24:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:24:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:24:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:24:34 compute-0 systemd[1]: Started libpod-conmon-38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2.scope.
Jan 26 16:24:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.219 239969 DEBUG nova.objects.instance [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'migration_context' on Instance uuid 1612372a-bdd8-4900-82cf-902335d93c41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4969147f7294ae5a2dea511fdffb00777544d3bd88b0ddc9fd273f5c517aad2b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:34 compute-0 sudo[357349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:24:34 compute-0 sudo[357349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.237 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.238 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Ensure instance console log exists: /var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.238 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.239 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:34 compute-0 sudo[357349]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.239 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.241 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Start _get_guest_xml network_info=[{"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:24:34 compute-0 podman[357319]: 2026-01-26 16:24:34.246343064 +0000 UTC m=+2.512208444 container init 38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 16:24:34 compute-0 podman[357319]: 2026-01-26 16:24:34.253172321 +0000 UTC m=+2.519037681 container start 38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.255 239969 WARNING nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.262 239969 DEBUG nova.virt.libvirt.host [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.264 239969 DEBUG nova.virt.libvirt.host [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.271 239969 DEBUG nova.virt.libvirt.host [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.271 239969 DEBUG nova.virt.libvirt.host [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.271 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.272 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.272 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.272 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.273 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.273 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:24:34 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[357391]: [NOTICE]   (357411) : New worker (357430) forked
Jan 26 16:24:34 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[357391]: [NOTICE]   (357411) : Loading success.
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.273 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.273 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.274 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.274 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.274 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.275 239969 DEBUG nova.virt.hardware [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.278 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:34 compute-0 sudo[357406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:24:34 compute-0 sudo[357406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.318 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444674.2957742, 057983d1-894b-493d-8d76-df3cd85a69fb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.318 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] VM Started (Lifecycle Event)
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.321 239969 DEBUG nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.335 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.339 239969 INFO nova.virt.libvirt.driver [-] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Instance spawned successfully.
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.341 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.343 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.346 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.369 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.376 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.377 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.378 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.378 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.379 239969 DEBUG nova.virt.libvirt.driver [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.383 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.384 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444674.2959957, 057983d1-894b-493d-8d76-df3cd85a69fb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.384 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] VM Paused (Lifecycle Event)
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.426 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.431 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444674.3350463, 057983d1-894b-493d-8d76-df3cd85a69fb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.432 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] VM Resumed (Lifecycle Event)
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.453 239969 INFO nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Took 15.10 seconds to spawn the instance on the hypervisor.
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.454 239969 DEBUG nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.455 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.465 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.500 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.539 239969 INFO nova.compute.manager [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Took 16.26 seconds to build instance.
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.557 239969 DEBUG oslo_concurrency.lockutils [None req-e6338aa9-724a-4c0e-8e4c-d6e3f6da0b7c e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:34 compute-0 podman[357477]: 2026-01-26 16:24:34.599088293 +0000 UTC m=+0.045547646 container create 0aab2a008b75ff4e2c080d507414db51f2b24c09d4f61a87d1a862fe9078830e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:24:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 248 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 3.8 MiB/s wr, 76 op/s
Jan 26 16:24:34 compute-0 systemd[1]: Started libpod-conmon-0aab2a008b75ff4e2c080d507414db51f2b24c09d4f61a87d1a862fe9078830e.scope.
Jan 26 16:24:34 compute-0 podman[357477]: 2026-01-26 16:24:34.580503458 +0000 UTC m=+0.026962841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:24:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:24:34 compute-0 podman[357477]: 2026-01-26 16:24:34.704664129 +0000 UTC m=+0.151123492 container init 0aab2a008b75ff4e2c080d507414db51f2b24c09d4f61a87d1a862fe9078830e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:24:34 compute-0 podman[357477]: 2026-01-26 16:24:34.714050668 +0000 UTC m=+0.160510021 container start 0aab2a008b75ff4e2c080d507414db51f2b24c09d4f61a87d1a862fe9078830e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wilson, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:24:34 compute-0 podman[357477]: 2026-01-26 16:24:34.717409891 +0000 UTC m=+0.163869244 container attach 0aab2a008b75ff4e2c080d507414db51f2b24c09d4f61a87d1a862fe9078830e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:24:34 compute-0 brave_wilson[357493]: 167 167
Jan 26 16:24:34 compute-0 systemd[1]: libpod-0aab2a008b75ff4e2c080d507414db51f2b24c09d4f61a87d1a862fe9078830e.scope: Deactivated successfully.
Jan 26 16:24:34 compute-0 podman[357477]: 2026-01-26 16:24:34.724231698 +0000 UTC m=+0.170691061 container died 0aab2a008b75ff4e2c080d507414db51f2b24c09d4f61a87d1a862fe9078830e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wilson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:24:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-975b5729fdb06c26e23f7355977cf5606a755f94e2fd8a992a84a61cf2006c6d-merged.mount: Deactivated successfully.
Jan 26 16:24:34 compute-0 podman[357477]: 2026-01-26 16:24:34.767562499 +0000 UTC m=+0.214021852 container remove 0aab2a008b75ff4e2c080d507414db51f2b24c09d4f61a87d1a862fe9078830e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:24:34 compute-0 systemd[1]: libpod-conmon-0aab2a008b75ff4e2c080d507414db51f2b24c09d4f61a87d1a862fe9078830e.scope: Deactivated successfully.
Jan 26 16:24:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:24:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252990380' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.903 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.930 239969 DEBUG nova.storage.rbd_utils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 1612372a-bdd8-4900-82cf-902335d93c41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:34 compute-0 nova_compute[239965]: 2026-01-26 16:24:34.938 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:34 compute-0 podman[357518]: 2026-01-26 16:24:34.964911592 +0000 UTC m=+0.056548006 container create 5d5c8cf0e25fec9f1680a350515a626e57318d3d8e74e31ff10b64dc4f93b3b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cerf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:24:35 compute-0 systemd[1]: Started libpod-conmon-5d5c8cf0e25fec9f1680a350515a626e57318d3d8e74e31ff10b64dc4f93b3b1.scope.
Jan 26 16:24:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:24:35 compute-0 podman[357518]: 2026-01-26 16:24:34.946087591 +0000 UTC m=+0.037724035 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6bbe91805a6bbf8c8848ae835f6dd349b671b49baf552eee6ce2c0b2344dc6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6bbe91805a6bbf8c8848ae835f6dd349b671b49baf552eee6ce2c0b2344dc6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6bbe91805a6bbf8c8848ae835f6dd349b671b49baf552eee6ce2c0b2344dc6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6bbe91805a6bbf8c8848ae835f6dd349b671b49baf552eee6ce2c0b2344dc6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f6bbe91805a6bbf8c8848ae835f6dd349b671b49baf552eee6ce2c0b2344dc6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:35 compute-0 podman[357518]: 2026-01-26 16:24:35.049259718 +0000 UTC m=+0.140896132 container init 5d5c8cf0e25fec9f1680a350515a626e57318d3d8e74e31ff10b64dc4f93b3b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cerf, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:24:35 compute-0 podman[357518]: 2026-01-26 16:24:35.057311875 +0000 UTC m=+0.148948299 container start 5d5c8cf0e25fec9f1680a350515a626e57318d3d8e74e31ff10b64dc4f93b3b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cerf, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:24:35 compute-0 podman[357518]: 2026-01-26 16:24:35.061703963 +0000 UTC m=+0.153340407 container attach 5d5c8cf0e25fec9f1680a350515a626e57318d3d8e74e31ff10b64dc4f93b3b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:24:35 compute-0 ceph-mon[75140]: pgmap v2216: 305 pgs: 305 active+clean; 248 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 281 KiB/s rd, 3.8 MiB/s wr, 76 op/s
Jan 26 16:24:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3252990380' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:35 compute-0 epic_cerf[357554]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:24:35 compute-0 epic_cerf[357554]: --> All data devices are unavailable
Jan 26 16:24:35 compute-0 sshd-session[357332]: Connection reset by authenticating user root 176.120.22.13 port 50798 [preauth]
Jan 26 16:24:35 compute-0 systemd[1]: libpod-5d5c8cf0e25fec9f1680a350515a626e57318d3d8e74e31ff10b64dc4f93b3b1.scope: Deactivated successfully.
Jan 26 16:24:35 compute-0 podman[357518]: 2026-01-26 16:24:35.574225764 +0000 UTC m=+0.665862178 container died 5d5c8cf0e25fec9f1680a350515a626e57318d3d8e74e31ff10b64dc4f93b3b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:24:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f6bbe91805a6bbf8c8848ae835f6dd349b671b49baf552eee6ce2c0b2344dc6-merged.mount: Deactivated successfully.
Jan 26 16:24:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:24:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2479030578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:35 compute-0 podman[357518]: 2026-01-26 16:24:35.622484326 +0000 UTC m=+0.714120740 container remove 5d5c8cf0e25fec9f1680a350515a626e57318d3d8e74e31ff10b64dc4f93b3b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_cerf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:24:35 compute-0 systemd[1]: libpod-conmon-5d5c8cf0e25fec9f1680a350515a626e57318d3d8e74e31ff10b64dc4f93b3b1.scope: Deactivated successfully.
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.633 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.636 239969 DEBUG nova.virt.libvirt.vif [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1048999134',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1048999134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=134,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGgZdPdXDQOl4vr4vDWBU2Dqr4teM6XHl7PTUYrRRLOB1y6m0VjRT0n1AjEl+718fAZXjP3ZgkMy9qm3KYl4oDGV88kyA1QCYDtQiYXgUmOkcs01daVhoT8MYQR0iSMtlg==',key_name='tempest-TestSecurityGroupsBasicOps-553015190',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-t5e325u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:24:25Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=1612372a-bdd8-4900-82cf-902335d93c41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.637 239969 DEBUG nova.network.os_vif_util [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.638 239969 DEBUG nova.network.os_vif_util [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:69:dc,bridge_name='br-int',has_traffic_filtering=True,id=0a21656f-54b8-4cb5-9654-be9b13a49ce2,network=Network(3ed016ed-52a2-4162-aa29-25b6a60d9513),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a21656f-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.639 239969 DEBUG nova.objects.instance [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 1612372a-bdd8-4900-82cf-902335d93c41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.657 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <uuid>1612372a-bdd8-4900-82cf-902335d93c41</uuid>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <name>instance-00000086</name>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1048999134</nova:name>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:24:34</nova:creationTime>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <nova:user uuid="ace5f36058684b5782589457d9e15921">tempest-TestSecurityGroupsBasicOps-75910484-project-member</nova:user>
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <nova:project uuid="1f814bd45d164be6827f7ef54ed07f5f">tempest-TestSecurityGroupsBasicOps-75910484</nova:project>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <nova:port uuid="0a21656f-54b8-4cb5-9654-be9b13a49ce2">
Jan 26 16:24:35 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <system>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <entry name="serial">1612372a-bdd8-4900-82cf-902335d93c41</entry>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <entry name="uuid">1612372a-bdd8-4900-82cf-902335d93c41</entry>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     </system>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <os>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   </os>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <features>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   </features>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/1612372a-bdd8-4900-82cf-902335d93c41_disk">
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       </source>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/1612372a-bdd8-4900-82cf-902335d93c41_disk.config">
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       </source>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:24:35 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:82:69:dc"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <target dev="tap0a21656f-54"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41/console.log" append="off"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <video>
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     </video>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:24:35 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:24:35 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:24:35 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:24:35 compute-0 nova_compute[239965]: </domain>
Jan 26 16:24:35 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.658 239969 DEBUG nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Preparing to wait for external event network-vif-plugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.658 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "1612372a-bdd8-4900-82cf-902335d93c41-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.658 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.659 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.659 239969 DEBUG nova.virt.libvirt.vif [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1048999134',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1048999134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=134,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGgZdPdXDQOl4vr4vDWBU2Dqr4teM6XHl7PTUYrRRLOB1y6m0VjRT0n1AjEl+718fAZXjP3ZgkMy9qm3KYl4oDGV88kyA1QCYDtQiYXgUmOkcs01daVhoT8MYQR0iSMtlg==',key_name='tempest-TestSecurityGroupsBasicOps-553015190',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-t5e325u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:24:25Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=1612372a-bdd8-4900-82cf-902335d93c41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.660 239969 DEBUG nova.network.os_vif_util [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.660 239969 DEBUG nova.network.os_vif_util [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:69:dc,bridge_name='br-int',has_traffic_filtering=True,id=0a21656f-54b8-4cb5-9654-be9b13a49ce2,network=Network(3ed016ed-52a2-4162-aa29-25b6a60d9513),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a21656f-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.661 239969 DEBUG os_vif [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:69:dc,bridge_name='br-int',has_traffic_filtering=True,id=0a21656f-54b8-4cb5-9654-be9b13a49ce2,network=Network(3ed016ed-52a2-4162-aa29-25b6a60d9513),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a21656f-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.662 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.662 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.663 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.667 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.667 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a21656f-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.668 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a21656f-54, col_values=(('external_ids', {'iface-id': '0a21656f-54b8-4cb5-9654-be9b13a49ce2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:69:dc', 'vm-uuid': '1612372a-bdd8-4900-82cf-902335d93c41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.669 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:35 compute-0 NetworkManager[48954]: <info>  [1769444675.6706] manager: (tap0a21656f-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/575)
Jan 26 16:24:35 compute-0 sudo[357406]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.672 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.677 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.678 239969 INFO os_vif [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:69:dc,bridge_name='br-int',has_traffic_filtering=True,id=0a21656f-54b8-4cb5-9654-be9b13a49ce2,network=Network(3ed016ed-52a2-4162-aa29-25b6a60d9513),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a21656f-54')
Jan 26 16:24:35 compute-0 sudo[357608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:24:35 compute-0 sudo[357608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:35 compute-0 sudo[357608]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.744 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.744 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.745 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No VIF found with MAC fa:16:3e:82:69:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.745 239969 INFO nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Using config drive
Jan 26 16:24:35 compute-0 nova_compute[239965]: 2026-01-26 16:24:35.769 239969 DEBUG nova.storage.rbd_utils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 1612372a-bdd8-4900-82cf-902335d93c41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:35 compute-0 sudo[357635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:24:35 compute-0 sudo[357635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:36 compute-0 podman[357689]: 2026-01-26 16:24:36.131899071 +0000 UTC m=+0.091594243 container create cc22628f6c6cb274379f493b49db1e7ce0d77bc0c8b8d1c23a22cd42b858e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mestorf, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.145 239969 INFO nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Creating config drive at /var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41/disk.config
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.149 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmlab92d3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:36 compute-0 podman[357689]: 2026-01-26 16:24:36.065489195 +0000 UTC m=+0.025184387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:24:36 compute-0 systemd[1]: Started libpod-conmon-cc22628f6c6cb274379f493b49db1e7ce0d77bc0c8b8d1c23a22cd42b858e229.scope.
Jan 26 16:24:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2479030578' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:24:36 compute-0 podman[357689]: 2026-01-26 16:24:36.229629705 +0000 UTC m=+0.189324927 container init cc22628f6c6cb274379f493b49db1e7ce0d77bc0c8b8d1c23a22cd42b858e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mestorf, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:24:36 compute-0 podman[357689]: 2026-01-26 16:24:36.239610329 +0000 UTC m=+0.199305501 container start cc22628f6c6cb274379f493b49db1e7ce0d77bc0c8b8d1c23a22cd42b858e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mestorf, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:24:36 compute-0 pensive_mestorf[357707]: 167 167
Jan 26 16:24:36 compute-0 systemd[1]: libpod-cc22628f6c6cb274379f493b49db1e7ce0d77bc0c8b8d1c23a22cd42b858e229.scope: Deactivated successfully.
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.299 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmlab92d3" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.403 239969 DEBUG nova.storage.rbd_utils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 1612372a-bdd8-4900-82cf-902335d93c41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.412 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41/disk.config 1612372a-bdd8-4900-82cf-902335d93c41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:36 compute-0 podman[357689]: 2026-01-26 16:24:36.495804913 +0000 UTC m=+0.455500115 container attach cc22628f6c6cb274379f493b49db1e7ce0d77bc0c8b8d1c23a22cd42b858e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mestorf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:24:36 compute-0 podman[357689]: 2026-01-26 16:24:36.496233434 +0000 UTC m=+0.455928626 container died cc22628f6c6cb274379f493b49db1e7ce0d77bc0c8b8d1c23a22cd42b858e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mestorf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:24:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 248 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 877 KiB/s rd, 3.9 MiB/s wr, 104 op/s
Jan 26 16:24:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8bd86ec9abd16c15ec0741e67b4352e782447de6520c2d74167600d21b03564-merged.mount: Deactivated successfully.
Jan 26 16:24:36 compute-0 podman[357689]: 2026-01-26 16:24:36.614032708 +0000 UTC m=+0.573727900 container remove cc22628f6c6cb274379f493b49db1e7ce0d77bc0c8b8d1c23a22cd42b858e229 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.653 239969 DEBUG oslo_concurrency.processutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41/disk.config 1612372a-bdd8-4900-82cf-902335d93c41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.241s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.654 239969 INFO nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Deleting local config drive /var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41/disk.config because it was imported into RBD.
Jan 26 16:24:36 compute-0 podman[357761]: 2026-01-26 16:24:36.688649366 +0000 UTC m=+0.071431050 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 26 16:24:36 compute-0 systemd[1]: libpod-conmon-cc22628f6c6cb274379f493b49db1e7ce0d77bc0c8b8d1c23a22cd42b858e229.scope: Deactivated successfully.
Jan 26 16:24:36 compute-0 kernel: tap0a21656f-54: entered promiscuous mode
Jan 26 16:24:36 compute-0 NetworkManager[48954]: <info>  [1769444676.7296] manager: (tap0a21656f-54): new Tun device (/org/freedesktop/NetworkManager/Devices/576)
Jan 26 16:24:36 compute-0 systemd-udevd[357405]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:24:36 compute-0 ovn_controller[146046]: 2026-01-26T16:24:36Z|01401|binding|INFO|Claiming lport 0a21656f-54b8-4cb5-9654-be9b13a49ce2 for this chassis.
Jan 26 16:24:36 compute-0 ovn_controller[146046]: 2026-01-26T16:24:36Z|01402|binding|INFO|0a21656f-54b8-4cb5-9654-be9b13a49ce2: Claiming fa:16:3e:82:69:dc 10.100.0.5
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.732 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.735 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.735 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.741 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:69:dc 10.100.0.5'], port_security=['fa:16:3e:82:69:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1612372a-bdd8-4900-82cf-902335d93c41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ed016ed-52a2-4162-aa29-25b6a60d9513', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a5f53cff-127a-4a6c-8cce-783f85bf3ab7 b3b6fb74-2a30-43ad-acfe-f4d1ad979b67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7a974e2-b4a3-4d25-930e-0b964fb30d31, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=0a21656f-54b8-4cb5-9654-be9b13a49ce2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.743 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 0a21656f-54b8-4cb5-9654-be9b13a49ce2 in datapath 3ed016ed-52a2-4162-aa29-25b6a60d9513 bound to our chassis
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.746 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 3ed016ed-52a2-4162-aa29-25b6a60d9513
Jan 26 16:24:36 compute-0 NetworkManager[48954]: <info>  [1769444676.7504] device (tap0a21656f-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:24:36 compute-0 NetworkManager[48954]: <info>  [1769444676.7516] device (tap0a21656f-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.753 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:36 compute-0 ovn_controller[146046]: 2026-01-26T16:24:36Z|01403|binding|INFO|Setting lport 0a21656f-54b8-4cb5-9654-be9b13a49ce2 ovn-installed in OVS
Jan 26 16:24:36 compute-0 ovn_controller[146046]: 2026-01-26T16:24:36Z|01404|binding|INFO|Setting lport 0a21656f-54b8-4cb5-9654-be9b13a49ce2 up in Southbound
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.756 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.760 239969 DEBUG nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:24:36 compute-0 podman[357764]: 2026-01-26 16:24:36.766941734 +0000 UTC m=+0.145007973 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.771 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1c48ef02-72c0-4262-86fa-a796270e5a62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.774 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap3ed016ed-51 in ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.777 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap3ed016ed-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.777 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c59ef665-83c2-4036-946b-185854787a51]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.778 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fa518c0c-6626-4b76-a725-9def482d1564]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 systemd-machined[208061]: New machine qemu-161-instance-00000086.
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.790 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a06739a0-a15e-4c15-9233-759011de35d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 systemd[1]: Started Virtual Machine qemu-161-instance-00000086.
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.802 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[078d1b3f-0220-4b5b-9623-f7711c13288e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.839 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.840 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:36 compute-0 podman[357823]: 2026-01-26 16:24:36.842654098 +0000 UTC m=+0.059634052 container create 23bd18d9323a62deb3485686860a63389e585cb746a5fcf2d7f8d3b41d10ed44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.857 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c022e492-4c92-425b-98db-83d79b331d2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.862 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.862 239969 INFO nova.compute.claims [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:24:36 compute-0 NetworkManager[48954]: <info>  [1769444676.8682] manager: (tap3ed016ed-50): new Veth device (/org/freedesktop/NetworkManager/Devices/577)
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.867 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[adafb9ff-46b5-4ac3-9068-5c90bfbceca4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.906 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd198e1-d12f-401b-a635-0e2ea8cbb368]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.909 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b71633de-48bb-4906-a4eb-e0f4bc809545]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 podman[357823]: 2026-01-26 16:24:36.820530756 +0000 UTC m=+0.037510740 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:24:36 compute-0 systemd[1]: Started libpod-conmon-23bd18d9323a62deb3485686860a63389e585cb746a5fcf2d7f8d3b41d10ed44.scope.
Jan 26 16:24:36 compute-0 NetworkManager[48954]: <info>  [1769444676.9360] device (tap3ed016ed-50): carrier: link connected
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.942 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e0d1b28a-8fd5-4826-8e27-8f7436189a6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:24:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17cf97169a1a24d38234ad97bca4ef22452e6bb9abbb145cf1febc4b5aad2f64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17cf97169a1a24d38234ad97bca4ef22452e6bb9abbb145cf1febc4b5aad2f64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17cf97169a1a24d38234ad97bca4ef22452e6bb9abbb145cf1febc4b5aad2f64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17cf97169a1a24d38234ad97bca4ef22452e6bb9abbb145cf1febc4b5aad2f64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:36 compute-0 podman[357823]: 2026-01-26 16:24:36.968633003 +0000 UTC m=+0.185612987 container init 23bd18d9323a62deb3485686860a63389e585cb746a5fcf2d7f8d3b41d10ed44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.967 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[15875ea4-7e0e-49a6-9aa0-4bf99566d4b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ed016ed-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:c6:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616774, 'reachable_time': 16254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 357871, 'error': None, 'target': 'ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:36 compute-0 podman[357823]: 2026-01-26 16:24:36.977430758 +0000 UTC m=+0.194410712 container start 23bd18d9323a62deb3485686860a63389e585cb746a5fcf2d7f8d3b41d10ed44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:24:36 compute-0 podman[357823]: 2026-01-26 16:24:36.981686573 +0000 UTC m=+0.198666527 container attach 23bd18d9323a62deb3485686860a63389e585cb746a5fcf2d7f8d3b41d10ed44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.993 239969 DEBUG nova.compute.manager [req-81b2de95-2a51-465a-adea-971198473d4e req-9d142077-233e-409f-8168-3893acbbd856 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received event network-vif-plugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.994 239969 DEBUG oslo_concurrency.lockutils [req-81b2de95-2a51-465a-adea-971198473d4e req-9d142077-233e-409f-8168-3893acbbd856 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1612372a-bdd8-4900-82cf-902335d93c41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.994 239969 DEBUG oslo_concurrency.lockutils [req-81b2de95-2a51-465a-adea-971198473d4e req-9d142077-233e-409f-8168-3893acbbd856 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.994 239969 DEBUG oslo_concurrency.lockutils [req-81b2de95-2a51-465a-adea-971198473d4e req-9d142077-233e-409f-8168-3893acbbd856 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:36 compute-0 nova_compute[239965]: 2026-01-26 16:24:36.994 239969 DEBUG nova.compute.manager [req-81b2de95-2a51-465a-adea-971198473d4e req-9d142077-233e-409f-8168-3893acbbd856 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Processing event network-vif-plugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:24:36 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:36.995 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fc91f7b0-7af4-48a1-bc90-b77de21c6c0b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:c6df'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 616774, 'tstamp': 616774}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 357873, 'error': None, 'target': 'ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.020 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.019 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b013ef8b-162b-4e7b-beb8-979cbe449fd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap3ed016ed-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:c6:df'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616774, 'reachable_time': 16254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 357875, 'error': None, 'target': 'ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.073 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7fac2ffd-8b86-4103-80dd-b8f6700cc17e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.144 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f07adb0a-0b37-4164-9075-f67ada9cae76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.147 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ed016ed-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.147 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.148 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ed016ed-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:37 compute-0 NetworkManager[48954]: <info>  [1769444677.1505] manager: (tap3ed016ed-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/578)
Jan 26 16:24:37 compute-0 kernel: tap3ed016ed-50: entered promiscuous mode
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.150 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.154 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap3ed016ed-50, col_values=(('external_ids', {'iface-id': '68565700-2045-4a57-8862-746d623af06c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:37 compute-0 ovn_controller[146046]: 2026-01-26T16:24:37Z|01405|binding|INFO|Releasing lport 68565700-2045-4a57-8862-746d623af06c from this chassis (sb_readonly=0)
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.155 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.156 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/3ed016ed-52a2-4162-aa29-25b6a60d9513.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/3ed016ed-52a2-4162-aa29-25b6a60d9513.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.157 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4390c627-8f00-4c25-b69c-055f0bc3b452]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.158 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-3ed016ed-52a2-4162-aa29-25b6a60d9513
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/3ed016ed-52a2-4162-aa29-25b6a60d9513.pid.haproxy
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 3ed016ed-52a2-4162-aa29-25b6a60d9513
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:24:37 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:37.159 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513', 'env', 'PROCESS_TAG=haproxy-3ed016ed-52a2-4162-aa29-25b6a60d9513', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/3ed016ed-52a2-4162-aa29-25b6a60d9513.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.171 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:37 compute-0 ceph-mon[75140]: pgmap v2217: 305 pgs: 305 active+clean; 248 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 877 KiB/s rd, 3.9 MiB/s wr, 104 op/s
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]: {
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:     "0": [
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:         {
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "devices": [
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "/dev/loop3"
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             ],
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_name": "ceph_lv0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_size": "21470642176",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "name": "ceph_lv0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "tags": {
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.cluster_name": "ceph",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.crush_device_class": "",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.encrypted": "0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.objectstore": "bluestore",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.osd_id": "0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.type": "block",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.vdo": "0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.with_tpm": "0"
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             },
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "type": "block",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "vg_name": "ceph_vg0"
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:         }
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:     ],
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:     "1": [
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:         {
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "devices": [
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "/dev/loop4"
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             ],
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_name": "ceph_lv1",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_size": "21470642176",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "name": "ceph_lv1",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "tags": {
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.cluster_name": "ceph",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.crush_device_class": "",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.encrypted": "0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.objectstore": "bluestore",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.osd_id": "1",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.type": "block",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.vdo": "0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.with_tpm": "0"
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             },
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "type": "block",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "vg_name": "ceph_vg1"
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:         }
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:     ],
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:     "2": [
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:         {
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "devices": [
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "/dev/loop5"
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             ],
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_name": "ceph_lv2",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_size": "21470642176",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "name": "ceph_lv2",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "tags": {
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.cluster_name": "ceph",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.crush_device_class": "",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.encrypted": "0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.objectstore": "bluestore",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.osd_id": "2",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.type": "block",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.vdo": "0",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:                 "ceph.with_tpm": "0"
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             },
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "type": "block",
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:             "vg_name": "ceph_vg2"
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:         }
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]:     ]
Jan 26 16:24:37 compute-0 eloquent_bartik[357867]: }
Jan 26 16:24:37 compute-0 systemd[1]: libpod-23bd18d9323a62deb3485686860a63389e585cb746a5fcf2d7f8d3b41d10ed44.scope: Deactivated successfully.
Jan 26 16:24:37 compute-0 podman[357823]: 2026-01-26 16:24:37.287762569 +0000 UTC m=+0.504742523 container died 23bd18d9323a62deb3485686860a63389e585cb746a5fcf2d7f8d3b41d10ed44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:24:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-17cf97169a1a24d38234ad97bca4ef22452e6bb9abbb145cf1febc4b5aad2f64-merged.mount: Deactivated successfully.
Jan 26 16:24:37 compute-0 podman[357823]: 2026-01-26 16:24:37.331692454 +0000 UTC m=+0.548672408 container remove 23bd18d9323a62deb3485686860a63389e585cb746a5fcf2d7f8d3b41d10ed44 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.339 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444677.3383276, 1612372a-bdd8-4900-82cf-902335d93c41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.340 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] VM Started (Lifecycle Event)
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.345 239969 DEBUG nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.351 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.355 239969 INFO nova.virt.libvirt.driver [-] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Instance spawned successfully.
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.355 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.366 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:37 compute-0 systemd[1]: libpod-conmon-23bd18d9323a62deb3485686860a63389e585cb746a5fcf2d7f8d3b41d10ed44.scope: Deactivated successfully.
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.380 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:37 compute-0 sudo[357635]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.386 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.387 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.387 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.388 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.388 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.388 239969 DEBUG nova.virt.libvirt.driver [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.397 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.397 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444677.3385525, 1612372a-bdd8-4900-82cf-902335d93c41 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.398 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] VM Paused (Lifecycle Event)
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.416 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.420 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444677.347625, 1612372a-bdd8-4900-82cf-902335d93c41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.421 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] VM Resumed (Lifecycle Event)
Jan 26 16:24:37 compute-0 sudo[357963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:24:37 compute-0 sudo[357963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.444 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.446 239969 INFO nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Took 11.62 seconds to spawn the instance on the hypervisor.
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.446 239969 DEBUG nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:37 compute-0 sudo[357963]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.455 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.487 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:24:37 compute-0 sudo[357993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:24:37 compute-0 sudo[357993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.532 239969 INFO nova.compute.manager [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Took 13.10 seconds to build instance.
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.550 239969 DEBUG oslo_concurrency.lockutils [None req-6c580180-ea2b-43cb-8086-9516a9170fdb ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454483885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:37 compute-0 sshd-session[357633]: Invalid user user from 176.120.22.13 port 50810
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.589 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:37 compute-0 podman[358033]: 2026-01-26 16:24:37.595510455 +0000 UTC m=+0.060055972 container create 22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.596 239969 DEBUG nova.compute.provider_tree [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.624 239969 DEBUG nova.scheduler.client.report [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:37 compute-0 systemd[1]: Started libpod-conmon-22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70.scope.
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.645 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.645 239969 DEBUG nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:24:37 compute-0 podman[358033]: 2026-01-26 16:24:37.556548861 +0000 UTC m=+0.021094388 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:24:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:24:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7834125455c3c58caf2314ea59ab5740e988abc38e5108cb43bbfbec25fc85e9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:37 compute-0 podman[358033]: 2026-01-26 16:24:37.681233655 +0000 UTC m=+0.145779182 container init 22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:24:37 compute-0 podman[358033]: 2026-01-26 16:24:37.687256742 +0000 UTC m=+0.151802249 container start 22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.693 239969 DEBUG nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.693 239969 DEBUG nova.network.neutron [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:24:37 compute-0 neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513[358052]: [NOTICE]   (358056) : New worker (358059) forked
Jan 26 16:24:37 compute-0 neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513[358052]: [NOTICE]   (358056) : Loading success.
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.722 239969 INFO nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.738 239969 DEBUG nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.826 239969 DEBUG nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.827 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.828 239969 INFO nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Creating image(s)
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.847 239969 DEBUG nova.storage.rbd_utils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] rbd image 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.869 239969 DEBUG nova.storage.rbd_utils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] rbd image 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:37 compute-0 podman[358077]: 2026-01-26 16:24:37.789705911 +0000 UTC m=+0.020943824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.892 239969 DEBUG nova.storage.rbd_utils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] rbd image 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.895 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:37 compute-0 sshd-session[357633]: Connection reset by invalid user user 176.120.22.13 port 50810 [preauth]
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.935 239969 DEBUG nova.policy [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7746910a4ce44ee4956e77cfac123ca8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '576e8719bc524933b1f15843689e97b1', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.977 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.978 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.979 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:37 compute-0 nova_compute[239965]: 2026-01-26 16:24:37.979 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:38 compute-0 nova_compute[239965]: 2026-01-26 16:24:38.003 239969 DEBUG nova.storage.rbd_utils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] rbd image 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:38 compute-0 nova_compute[239965]: 2026-01-26 16:24:38.007 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:38 compute-0 nova_compute[239965]: 2026-01-26 16:24:38.395 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Jan 26 16:24:39 compute-0 podman[358077]: 2026-01-26 16:24:39.004248415 +0000 UTC m=+1.235486338 container create baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wright, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:24:39 compute-0 nova_compute[239965]: 2026-01-26 16:24:39.093 239969 DEBUG nova.compute.manager [req-37fb051e-7e2d-4464-a7da-abc1bdb0a9cc req-d65b163b-d11d-42a9-8616-3b4b39bc0f11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received event network-vif-plugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:39 compute-0 nova_compute[239965]: 2026-01-26 16:24:39.094 239969 DEBUG oslo_concurrency.lockutils [req-37fb051e-7e2d-4464-a7da-abc1bdb0a9cc req-d65b163b-d11d-42a9-8616-3b4b39bc0f11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1612372a-bdd8-4900-82cf-902335d93c41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:39 compute-0 nova_compute[239965]: 2026-01-26 16:24:39.094 239969 DEBUG oslo_concurrency.lockutils [req-37fb051e-7e2d-4464-a7da-abc1bdb0a9cc req-d65b163b-d11d-42a9-8616-3b4b39bc0f11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:39 compute-0 nova_compute[239965]: 2026-01-26 16:24:39.095 239969 DEBUG oslo_concurrency.lockutils [req-37fb051e-7e2d-4464-a7da-abc1bdb0a9cc req-d65b163b-d11d-42a9-8616-3b4b39bc0f11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:39 compute-0 nova_compute[239965]: 2026-01-26 16:24:39.095 239969 DEBUG nova.compute.manager [req-37fb051e-7e2d-4464-a7da-abc1bdb0a9cc req-d65b163b-d11d-42a9-8616-3b4b39bc0f11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] No waiting events found dispatching network-vif-plugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:39 compute-0 nova_compute[239965]: 2026-01-26 16:24:39.095 239969 WARNING nova.compute.manager [req-37fb051e-7e2d-4464-a7da-abc1bdb0a9cc req-d65b163b-d11d-42a9-8616-3b4b39bc0f11 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received unexpected event network-vif-plugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 for instance with vm_state active and task_state None.
Jan 26 16:24:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/454483885' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:39 compute-0 sshd-session[358182]: Connection reset by authenticating user root 176.120.22.13 port 50818 [preauth]
Jan 26 16:24:39 compute-0 nova_compute[239965]: 2026-01-26 16:24:39.761 239969 INFO nova.compute.manager [None req-202aef5b-58ec-4775-949e-8a6edfa3dc4d 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Get console output
Jan 26 16:24:39 compute-0 nova_compute[239965]: 2026-01-26 16:24:39.769 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.102 239969 INFO nova.compute.manager [None req-d1ee3602-8bff-44f1-b49d-cbcd33e4d7e3 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Pausing
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.104 239969 DEBUG nova.objects.instance [None req-d1ee3602-8bff-44f1-b49d-cbcd33e4d7e3 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'flavor' on Instance uuid 86b65c1c-c168-4ecd-b98d-9080d00d16dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.553 239969 DEBUG nova.network.neutron [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Successfully created port: 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:24:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.664 239969 DEBUG nova.compute.manager [req-855bc9bd-f0a7-4977-94d3-711b1868f0e3 req-87aa36b7-9289-4e72-854e-5ba914770147 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Received event network-changed-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.665 239969 DEBUG nova.compute.manager [req-855bc9bd-f0a7-4977-94d3-711b1868f0e3 req-87aa36b7-9289-4e72-854e-5ba914770147 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Refreshing instance network info cache due to event network-changed-07819469-3bd7-4885-9144-3eceda4eadcd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.665 239969 DEBUG oslo_concurrency.lockutils [req-855bc9bd-f0a7-4977-94d3-711b1868f0e3 req-87aa36b7-9289-4e72-854e-5ba914770147 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-057983d1-894b-493d-8d76-df3cd85a69fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.666 239969 DEBUG oslo_concurrency.lockutils [req-855bc9bd-f0a7-4977-94d3-711b1868f0e3 req-87aa36b7-9289-4e72-854e-5ba914770147 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-057983d1-894b-493d-8d76-df3cd85a69fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.666 239969 DEBUG nova.network.neutron [req-855bc9bd-f0a7-4977-94d3-711b1868f0e3 req-87aa36b7-9289-4e72-854e-5ba914770147 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Refreshing network info cache for port 07819469-3bd7-4885-9144-3eceda4eadcd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.672 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.829 239969 DEBUG oslo_concurrency.lockutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "057983d1-894b-493d-8d76-df3cd85a69fb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.830 239969 DEBUG oslo_concurrency.lockutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.830 239969 DEBUG oslo_concurrency.lockutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.831 239969 DEBUG oslo_concurrency.lockutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.831 239969 DEBUG oslo_concurrency.lockutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.833 239969 INFO nova.compute.manager [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Terminating instance
Jan 26 16:24:40 compute-0 nova_compute[239965]: 2026-01-26 16:24:40.835 239969 DEBUG nova.compute.manager [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:24:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:41.121 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:41.124 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:24:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:41.126 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:41 compute-0 nova_compute[239965]: 2026-01-26 16:24:41.129 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:41 compute-0 nova_compute[239965]: 2026-01-26 16:24:41.948 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444681.9478097, 86b65c1c-c168-4ecd-b98d-9080d00d16dc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:41 compute-0 nova_compute[239965]: 2026-01-26 16:24:41.948 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] VM Paused (Lifecycle Event)
Jan 26 16:24:41 compute-0 nova_compute[239965]: 2026-01-26 16:24:41.950 239969 DEBUG nova.compute.manager [None req-d1ee3602-8bff-44f1-b49d-cbcd33e4d7e3 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:41 compute-0 kernel: tap07819469-3b (unregistering): left promiscuous mode
Jan 26 16:24:41 compute-0 nova_compute[239965]: 2026-01-26 16:24:41.983 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:41 compute-0 nova_compute[239965]: 2026-01-26 16:24:41.986 239969 DEBUG nova.network.neutron [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Successfully updated port: 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:24:41 compute-0 nova_compute[239965]: 2026-01-26 16:24:41.991 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:41 compute-0 systemd[1]: Started libpod-conmon-baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e.scope.
Jan 26 16:24:42 compute-0 NetworkManager[48954]: <info>  [1769444681.9984] device (tap07819469-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.004 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:42 compute-0 ovn_controller[146046]: 2026-01-26T16:24:42Z|01406|binding|INFO|Releasing lport 07819469-3bd7-4885-9144-3eceda4eadcd from this chassis (sb_readonly=0)
Jan 26 16:24:42 compute-0 ovn_controller[146046]: 2026-01-26T16:24:42Z|01407|binding|INFO|Setting lport 07819469-3bd7-4885-9144-3eceda4eadcd down in Southbound
Jan 26 16:24:42 compute-0 ovn_controller[146046]: 2026-01-26T16:24:42Z|01408|binding|INFO|Removing iface tap07819469-3b ovn-installed in OVS
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.008 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:42.016 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:79:ac 10.100.0.8'], port_security=['fa:16:3e:b4:79:ac 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '057983d1-894b-493d-8d76-df3cd85a69fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '039287cb-946e-4ce0-ad47-a6333988fa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c96c5555-0193-4c98-ba58-f33ef27ed1fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=07819469-3bd7-4885-9144-3eceda4eadcd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:42.019 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 07819469-3bd7-4885-9144-3eceda4eadcd in datapath f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 unbound from our chassis
Jan 26 16:24:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:42.022 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:24:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:42.023 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4667c6f1-9633-4496-94fc-7d274bf76dd4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:42.024 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 namespace which is not needed anymore
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.038 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] During sync_power_state the instance has a pending task (pausing). Skip.
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.041 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.041 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquired lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.042 239969 DEBUG nova.network.neutron [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:24:42 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000085.scope: Deactivated successfully.
Jan 26 16:24:42 compute-0 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d00000085.scope: Consumed 7.290s CPU time.
Jan 26 16:24:42 compute-0 systemd-machined[208061]: Machine qemu-160-instance-00000085 terminated.
Jan 26 16:24:42 compute-0 podman[358077]: 2026-01-26 16:24:42.067277718 +0000 UTC m=+4.298515661 container init baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:24:42 compute-0 ceph-mon[75140]: pgmap v2218: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 171 op/s
Jan 26 16:24:42 compute-0 ceph-mon[75140]: pgmap v2219: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Jan 26 16:24:42 compute-0 podman[358077]: 2026-01-26 16:24:42.079084507 +0000 UTC m=+4.310322390 container start baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:24:42 compute-0 podman[358077]: 2026-01-26 16:24:42.086522389 +0000 UTC m=+4.317760302 container attach baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wright, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:24:42 compute-0 optimistic_wright[358194]: 167 167
Jan 26 16:24:42 compute-0 systemd[1]: libpod-baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e.scope: Deactivated successfully.
Jan 26 16:24:42 compute-0 conmon[358194]: conmon baade080727cc6b7f613 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e.scope/container/memory.events
Jan 26 16:24:42 compute-0 podman[358077]: 2026-01-26 16:24:42.093015089 +0000 UTC m=+4.324252982 container died baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wright, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Jan 26 16:24:42 compute-0 sshd-session[358184]: Invalid user tomcat from 176.120.22.13 port 50828
Jan 26 16:24:42 compute-0 kernel: tap07819469-3b: entered promiscuous mode
Jan 26 16:24:42 compute-0 NetworkManager[48954]: <info>  [1769444682.2575] manager: (tap07819469-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/579)
Jan 26 16:24:42 compute-0 ovn_controller[146046]: 2026-01-26T16:24:42Z|01409|binding|INFO|Claiming lport 07819469-3bd7-4885-9144-3eceda4eadcd for this chassis.
Jan 26 16:24:42 compute-0 ovn_controller[146046]: 2026-01-26T16:24:42Z|01410|binding|INFO|07819469-3bd7-4885-9144-3eceda4eadcd: Claiming fa:16:3e:b4:79:ac 10.100.0.8
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.275 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:42 compute-0 kernel: tap07819469-3b (unregistering): left promiscuous mode
Jan 26 16:24:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:42.433 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:79:ac 10.100.0.8'], port_security=['fa:16:3e:b4:79:ac 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '057983d1-894b-493d-8d76-df3cd85a69fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '039287cb-946e-4ce0-ad47-a6333988fa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c96c5555-0193-4c98-ba58-f33ef27ed1fc, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=07819469-3bd7-4885-9144-3eceda4eadcd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.457 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:42 compute-0 ovn_controller[146046]: 2026-01-26T16:24:42Z|01411|binding|INFO|Releasing lport 07819469-3bd7-4885-9144-3eceda4eadcd from this chassis (sb_readonly=0)
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.461 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.470 239969 INFO nova.virt.libvirt.driver [-] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Instance destroyed successfully.
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.470 239969 DEBUG nova.objects.instance [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid 057983d1-894b-493d-8d76-df3cd85a69fb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:42.479 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:79:ac 10.100.0.8'], port_security=['fa:16:3e:b4:79:ac 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '057983d1-894b-493d-8d76-df3cd85a69fb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '039287cb-946e-4ce0-ad47-a6333988fa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c96c5555-0193-4c98-ba58-f33ef27ed1fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=07819469-3bd7-4885-9144-3eceda4eadcd) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.485 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.487 239969 DEBUG nova.virt.libvirt.vif [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:24:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-412228353',display_name='tempest-TestNetworkBasicOps-server-412228353',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-412228353',id=133,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGd+FHLzzMbl7/VCFqOC6bdXBfxCu5YL/NLlWCfo2ucnfSHdg/lsHpZUXzUrr+HbzEaiY8BFeG/fkGOjEG18DvIhF9GYfLTzKzJ9G7eZ46bGQ7c7I3LeDWMgcDJ+jHGUQw==',key_name='tempest-TestNetworkBasicOps-320551724',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:24:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-99n8f3oc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:24:34Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=057983d1-894b-493d-8d76-df3cd85a69fb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.487 239969 DEBUG nova.network.os_vif_util [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.487 239969 DEBUG nova.network.os_vif_util [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.488 239969 DEBUG os_vif [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.489 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.490 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07819469-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.491 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.494 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.495 239969 INFO os_vif [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b')
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.532 239969 DEBUG nova.network.neutron [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:24:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 273 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.2 MiB/s wr, 243 op/s
Jan 26 16:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f22970dba5d8230353b1205876f5bbe1a0cb3728f5771446c17775f41f003bd-merged.mount: Deactivated successfully.
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.789 239969 DEBUG nova.compute.manager [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-changed-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.789 239969 DEBUG nova.compute.manager [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Refreshing instance network info cache due to event network-changed-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:42 compute-0 nova_compute[239965]: 2026-01-26 16:24:42.790 239969 DEBUG oslo_concurrency.lockutils [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:42 compute-0 sshd-session[358184]: Connection reset by invalid user tomcat 176.120.22.13 port 50828 [preauth]
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.074 239969 DEBUG nova.network.neutron [req-855bc9bd-f0a7-4977-94d3-711b1868f0e3 req-87aa36b7-9289-4e72-854e-5ba914770147 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Updated VIF entry in instance network info cache for port 07819469-3bd7-4885-9144-3eceda4eadcd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.076 239969 DEBUG nova.network.neutron [req-855bc9bd-f0a7-4977-94d3-711b1868f0e3 req-87aa36b7-9289-4e72-854e-5ba914770147 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Updating instance_info_cache with network_info: [{"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.099 239969 DEBUG oslo_concurrency.lockutils [req-855bc9bd-f0a7-4977-94d3-711b1868f0e3 req-87aa36b7-9289-4e72-854e-5ba914770147 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-057983d1-894b-493d-8d76-df3cd85a69fb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.399 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.429 239969 DEBUG nova.network.neutron [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Updating instance_info_cache with network_info: [{"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.449 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Releasing lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.449 239969 DEBUG nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Instance network_info: |[{"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.450 239969 DEBUG oslo_concurrency.lockutils [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.450 239969 DEBUG nova.network.neutron [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Refreshing network info cache for port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.481 239969 DEBUG nova.compute.manager [req-cbffa355-b645-420f-8ebd-5d037e396593 req-714feae6-bfca-4495-935b-dfcba39dfd0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received event network-changed-0a21656f-54b8-4cb5-9654-be9b13a49ce2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.482 239969 DEBUG nova.compute.manager [req-cbffa355-b645-420f-8ebd-5d037e396593 req-714feae6-bfca-4495-935b-dfcba39dfd0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Refreshing instance network info cache due to event network-changed-0a21656f-54b8-4cb5-9654-be9b13a49ce2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.482 239969 DEBUG oslo_concurrency.lockutils [req-cbffa355-b645-420f-8ebd-5d037e396593 req-714feae6-bfca-4495-935b-dfcba39dfd0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.483 239969 DEBUG oslo_concurrency.lockutils [req-cbffa355-b645-420f-8ebd-5d037e396593 req-714feae6-bfca-4495-935b-dfcba39dfd0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:43 compute-0 nova_compute[239965]: 2026-01-26 16:24:43.483 239969 DEBUG nova.network.neutron [req-cbffa355-b645-420f-8ebd-5d037e396593 req-714feae6-bfca-4495-935b-dfcba39dfd0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Refreshing network info cache for port 0a21656f-54b8-4cb5-9654-be9b13a49ce2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:43 compute-0 ceph-mon[75140]: pgmap v2220: 305 pgs: 305 active+clean; 273 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.2 MiB/s wr, 243 op/s
Jan 26 16:24:43 compute-0 podman[358077]: 2026-01-26 16:24:43.981557248 +0000 UTC m=+6.212795181 container remove baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_wright, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:24:44 compute-0 nova_compute[239965]: 2026-01-26 16:24:44.026 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 6.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:44 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[357391]: [NOTICE]   (357411) : haproxy version is 2.8.14-c23fe91
Jan 26 16:24:44 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[357391]: [NOTICE]   (357411) : path to executable is /usr/sbin/haproxy
Jan 26 16:24:44 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[357391]: [WARNING]  (357411) : Exiting Master process...
Jan 26 16:24:44 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[357391]: [ALERT]    (357411) : Current worker (357430) exited with code 143 (Terminated)
Jan 26 16:24:44 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[357391]: [WARNING]  (357411) : All workers exited. Exiting... (0)
Jan 26 16:24:44 compute-0 systemd[1]: libpod-38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2.scope: Deactivated successfully.
Jan 26 16:24:44 compute-0 podman[358231]: 2026-01-26 16:24:44.07189233 +0000 UTC m=+1.907767822 container died 38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:24:44 compute-0 nova_compute[239965]: 2026-01-26 16:24:44.108 239969 DEBUG nova.storage.rbd_utils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] resizing rbd image 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:24:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 273 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 414 KiB/s wr, 172 op/s
Jan 26 16:24:44 compute-0 sshd-session[358272]: Connection reset by authenticating user root 176.120.22.13 port 30524 [preauth]
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.023 239969 DEBUG nova.compute.manager [req-bdf42145-05b6-42de-9b83-601c1344c8e7 req-50e5f614-50c4-4419-9a01-1528ff5ae7f2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Received event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.026 239969 DEBUG oslo_concurrency.lockutils [req-bdf42145-05b6-42de-9b83-601c1344c8e7 req-50e5f614-50c4-4419-9a01-1528ff5ae7f2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.027 239969 DEBUG oslo_concurrency.lockutils [req-bdf42145-05b6-42de-9b83-601c1344c8e7 req-50e5f614-50c4-4419-9a01-1528ff5ae7f2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.028 239969 DEBUG oslo_concurrency.lockutils [req-bdf42145-05b6-42de-9b83-601c1344c8e7 req-50e5f614-50c4-4419-9a01-1528ff5ae7f2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.028 239969 DEBUG nova.compute.manager [req-bdf42145-05b6-42de-9b83-601c1344c8e7 req-50e5f614-50c4-4419-9a01-1528ff5ae7f2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] No waiting events found dispatching network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.029 239969 WARNING nova.compute.manager [req-bdf42145-05b6-42de-9b83-601c1344c8e7 req-50e5f614-50c4-4419-9a01-1528ff5ae7f2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Received unexpected event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd for instance with vm_state active and task_state deleting.
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.307 239969 DEBUG nova.network.neutron [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Updated VIF entry in instance network info cache for port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.308 239969 DEBUG nova.network.neutron [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Updating instance_info_cache with network_info: [{"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.334 239969 DEBUG oslo_concurrency.lockutils [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.335 239969 DEBUG nova.compute.manager [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Received event network-vif-unplugged-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.336 239969 DEBUG oslo_concurrency.lockutils [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.336 239969 DEBUG oslo_concurrency.lockutils [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.337 239969 DEBUG oslo_concurrency.lockutils [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.338 239969 DEBUG nova.compute.manager [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] No waiting events found dispatching network-vif-unplugged-07819469-3bd7-4885-9144-3eceda4eadcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.338 239969 DEBUG nova.compute.manager [req-7f3b1183-a0cd-4977-8485-dfc37c78165d req-8cb8c999-5260-42ec-8e31-c6d20d51023c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Received event network-vif-unplugged-07819469-3bd7-4885-9144-3eceda4eadcd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.806 239969 DEBUG nova.network.neutron [req-cbffa355-b645-420f-8ebd-5d037e396593 req-714feae6-bfca-4495-935b-dfcba39dfd0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Updated VIF entry in instance network info cache for port 0a21656f-54b8-4cb5-9654-be9b13a49ce2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.807 239969 DEBUG nova.network.neutron [req-cbffa355-b645-420f-8ebd-5d037e396593 req-714feae6-bfca-4495-935b-dfcba39dfd0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Updating instance_info_cache with network_info: [{"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.825 239969 DEBUG oslo_concurrency.lockutils [req-cbffa355-b645-420f-8ebd-5d037e396593 req-714feae6-bfca-4495-935b-dfcba39dfd0f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.972 239969 INFO nova.compute.manager [None req-2059055c-6d10-4a54-8591-c64e06e6eb55 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Get console output
Jan 26 16:24:45 compute-0 nova_compute[239965]: 2026-01-26 16:24:45.979 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:24:46 compute-0 sshd-session[358363]: Connection closed by authenticating user root 209.38.206.249 port 35044 [preauth]
Jan 26 16:24:46 compute-0 nova_compute[239965]: 2026-01-26 16:24:46.462 239969 INFO nova.compute.manager [None req-385b5fb1-f493-4bc5-bd9c-6c2307e97b19 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Unpausing
Jan 26 16:24:46 compute-0 nova_compute[239965]: 2026-01-26 16:24:46.464 239969 DEBUG nova.objects.instance [None req-385b5fb1-f493-4bc5-bd9c-6c2307e97b19 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'flavor' on Instance uuid 86b65c1c-c168-4ecd-b98d-9080d00d16dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:46 compute-0 nova_compute[239965]: 2026-01-26 16:24:46.490 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444686.4904575, 86b65c1c-c168-4ecd-b98d-9080d00d16dc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:46 compute-0 nova_compute[239965]: 2026-01-26 16:24:46.491 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] VM Resumed (Lifecycle Event)
Jan 26 16:24:46 compute-0 virtqemud[240263]: argument unsupported: QEMU guest agent is not configured
Jan 26 16:24:46 compute-0 nova_compute[239965]: 2026-01-26 16:24:46.495 239969 DEBUG nova.virt.libvirt.guest [None req-385b5fb1-f493-4bc5-bd9c-6c2307e97b19 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 26 16:24:46 compute-0 nova_compute[239965]: 2026-01-26 16:24:46.496 239969 DEBUG nova.compute.manager [None req-385b5fb1-f493-4bc5-bd9c-6c2307e97b19 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:46 compute-0 nova_compute[239965]: 2026-01-26 16:24:46.519 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:46 compute-0 nova_compute[239965]: 2026-01-26 16:24:46.523 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:46 compute-0 nova_compute[239965]: 2026-01-26 16:24:46.557 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] During sync_power_state the instance has a pending task (unpausing). Skip.
Jan 26 16:24:46 compute-0 sshd-session[358365]: Invalid user www from 209.38.206.249 port 35050
Jan 26 16:24:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 273 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 183 op/s
Jan 26 16:24:46 compute-0 sshd-session[358365]: Connection closed by invalid user www 209.38.206.249 port 35050 [preauth]
Jan 26 16:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2-userdata-shm.mount: Deactivated successfully.
Jan 26 16:24:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4969147f7294ae5a2dea511fdffb00777544d3bd88b0ddc9fd273f5c517aad2b-merged.mount: Deactivated successfully.
Jan 26 16:24:46 compute-0 ceph-mon[75140]: pgmap v2221: 305 pgs: 305 active+clean; 273 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 414 KiB/s wr, 172 op/s
Jan 26 16:24:46 compute-0 podman[358231]: 2026-01-26 16:24:46.908179791 +0000 UTC m=+4.744055283 container cleanup 38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:24:46 compute-0 systemd[1]: libpod-conmon-baade080727cc6b7f613f9290e31ccb51c175e2ebc214f5a5ca93134e92a3b4e.scope: Deactivated successfully.
Jan 26 16:24:46 compute-0 systemd[1]: libpod-conmon-38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2.scope: Deactivated successfully.
Jan 26 16:24:47 compute-0 podman[358346]: 2026-01-26 16:24:47.017403946 +0000 UTC m=+2.864612805 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:24:47 compute-0 podman[358370]: 2026-01-26 16:24:47.128048706 +0000 UTC m=+0.195714675 container remove 38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.136 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a8a4de93-2ed4-4c67-bf38-c96e55eb1a77]: (4, ('Mon Jan 26 04:24:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 (38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2)\n38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2\nMon Jan 26 04:24:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 (38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2)\n38495af7b2cf129dcbde86bc66a06a21984e7c7528d103d49a43dfc74ba697f2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.138 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbdf862-ef9d-43e0-b790-26f0f39f417c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.140 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf62bb3f6-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:47 compute-0 kernel: tapf62bb3f6-b0: left promiscuous mode
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.159 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dd1c2de2-b89d-414d-aa89-747ccc179871]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:47 compute-0 sshd-session[358367]: Invalid user dspace from 209.38.206.249 port 35064
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.170 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.180 239969 DEBUG nova.objects.instance [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'migration_context' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.180 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2d036f97-8022-4d37-a7d0-98161c304377]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.181 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f65056a6-6a09-4216-b4a0-fab1ff280c4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.197 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb03109-33e0-40fb-9b36-03a256a898b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616187, 'reachable_time': 35777, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358404, 'error': None, 'target': 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:47 compute-0 systemd[1]: run-netns-ovnmeta\x2df62bb3f6\x2db715\x2d4244\x2d9fb7\x2d5f2b6ae70b93.mount: Deactivated successfully.
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.201 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.201 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ea5b82a3-7f66-43dc-95a2-22c55d07b0eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.204 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 07819469-3bd7-4885-9144-3eceda4eadcd in datapath f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 unbound from our chassis
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.205 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.205 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ba0c8f-823d-46f6-a394-eedce86f6d85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.206 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 07819469-3bd7-4885-9144-3eceda4eadcd in datapath f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 unbound from our chassis
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.207 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:24:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:47.207 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[685dabdf-d2b0-4bcf-9327-b0fe284a3d58]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.211 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.213 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Ensure instance console log exists: /var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.214 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.215 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.216 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.218 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Start _get_guest_xml network_info=[{"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.222 239969 WARNING nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.227 239969 DEBUG nova.virt.libvirt.host [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.229 239969 DEBUG nova.virt.libvirt.host [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.232 239969 DEBUG nova.virt.libvirt.host [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.233 239969 DEBUG nova.virt.libvirt.host [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.233 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.234 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.234 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.235 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.236 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.236 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.237 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.237 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.237 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.238 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.238 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.238 239969 DEBUG nova.virt.hardware [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.241 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:47 compute-0 sshd-session[358367]: Connection closed by invalid user dspace 209.38.206.249 port 35064 [preauth]
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:47 compute-0 podman[358346]: 2026-01-26 16:24:47.682372371 +0000 UTC m=+3.529581170 container create 5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:24:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:24:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2494743760' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.805 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.835 239969 DEBUG nova.storage.rbd_utils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] rbd image 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:47 compute-0 nova_compute[239965]: 2026-01-26 16:24:47.843 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:48 compute-0 ceph-mon[75140]: pgmap v2222: 305 pgs: 305 active+clean; 273 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.3 MiB/s wr, 183 op/s
Jan 26 16:24:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2494743760' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:48 compute-0 systemd[1]: Started libpod-conmon-5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d.scope.
Jan 26 16:24:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3df7b0213760c96bec8e5de959ca61d713e08a1ab08f5ffcf8982411e9daff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3df7b0213760c96bec8e5de959ca61d713e08a1ab08f5ffcf8982411e9daff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3df7b0213760c96bec8e5de959ca61d713e08a1ab08f5ffcf8982411e9daff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b3df7b0213760c96bec8e5de959ca61d713e08a1ab08f5ffcf8982411e9daff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.402 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:48 compute-0 podman[358346]: 2026-01-26 16:24:48.563755306 +0000 UTC m=+4.410964185 container init 5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:24:48 compute-0 podman[358346]: 2026-01-26 16:24:48.57211304 +0000 UTC m=+4.419321869 container start 5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.578 239969 INFO nova.compute.manager [None req-9330fb40-f136-4180-b371-d8397ef50b88 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Get console output
Jan 26 16:24:48 compute-0 podman[358346]: 2026-01-26 16:24:48.579911432 +0000 UTC m=+4.427120601 container attach 5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.587 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:24:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 172 op/s
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.702 239969 INFO nova.virt.libvirt.driver [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Deleting instance files /var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb_del
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.703 239969 INFO nova.virt.libvirt.driver [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Deletion of /var/lib/nova/instances/057983d1-894b-493d-8d76-df3cd85a69fb_del complete
Jan 26 16:24:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:24:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3574510727' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:24:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:24:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3574510727' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.743 239969 INFO nova.compute.manager [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Took 7.91 seconds to destroy the instance on the hypervisor.
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.744 239969 DEBUG oslo.service.loopingcall [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.745 239969 DEBUG nova.compute.manager [-] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:24:48 compute-0 nova_compute[239965]: 2026-01-26 16:24:48.745 239969 DEBUG nova.network.neutron [-] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:24:48 compute-0 sshd-session[358472]: Invalid user vps from 209.38.206.249 port 35070
Jan 26 16:24:48 compute-0 sshd-session[358472]: Connection closed by invalid user vps 209.38.206.249 port 35070 [preauth]
Jan 26 16:24:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:24:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1663773161' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.053 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.211s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.055 239969 DEBUG nova.virt.libvirt.vif [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1338239947',display_name='tempest-TestServerAdvancedOps-server-1338239947',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1338239947',id=135,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='576e8719bc524933b1f15843689e97b1',ramdisk_id='',reservation_id='r-7g6d9pwb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1211125831',owner_user_name='tempest-TestServerAdvancedOps-1211125831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:24:37Z,user_data=None,user_id='7746910a4ce44ee4956e77cfac123ca8',uuid=0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.055 239969 DEBUG nova.network.os_vif_util [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converting VIF {"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.056 239969 DEBUG nova.network.os_vif_util [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.057 239969 DEBUG nova.objects.instance [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.078 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <uuid>0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428</uuid>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <name>instance-00000087</name>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <nova:name>tempest-TestServerAdvancedOps-server-1338239947</nova:name>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:24:47</nova:creationTime>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <nova:user uuid="7746910a4ce44ee4956e77cfac123ca8">tempest-TestServerAdvancedOps-1211125831-project-member</nova:user>
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <nova:project uuid="576e8719bc524933b1f15843689e97b1">tempest-TestServerAdvancedOps-1211125831</nova:project>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <nova:port uuid="21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e">
Jan 26 16:24:49 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <system>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <entry name="serial">0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428</entry>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <entry name="uuid">0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428</entry>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     </system>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <os>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   </os>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <features>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   </features>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk">
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk.config">
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       </source>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:24:49 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:a4:09:94"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <target dev="tap21b1dcb8-6a"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428/console.log" append="off"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <video>
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     </video>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:24:49 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:24:49 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:24:49 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:24:49 compute-0 nova_compute[239965]: </domain>
Jan 26 16:24:49 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.079 239969 DEBUG nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Preparing to wait for external event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.079 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.079 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.080 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.080 239969 DEBUG nova.virt.libvirt.vif [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1338239947',display_name='tempest-TestServerAdvancedOps-server-1338239947',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1338239947',id=135,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='576e8719bc524933b1f15843689e97b1',ramdisk_id='',reservation_id='r-7g6d9pwb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1211125831',owner_user_name='tempest-TestServerAdvancedOps-1211125831-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:24:37Z,user_data=None,user_id='7746910a4ce44ee4956e77cfac123ca8',uuid=0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.080 239969 DEBUG nova.network.os_vif_util [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converting VIF {"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.081 239969 DEBUG nova.network.os_vif_util [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.081 239969 DEBUG os_vif [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.082 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.083 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.085 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.085 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21b1dcb8-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.086 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21b1dcb8-6a, col_values=(('external_ids', {'iface-id': '21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:09:94', 'vm-uuid': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.087 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 NetworkManager[48954]: <info>  [1769444689.0881] manager: (tap21b1dcb8-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.090 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.094 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.095 239969 INFO os_vif [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a')
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014699990996052575 of space, bias 1.0, pg target 0.44099972988157726 quantized to 32 (current 32)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010183591073228643 of space, bias 1.0, pg target 0.3055077321968593 quantized to 32 (current 32)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.032082789871246e-07 of space, bias 4.0, pg target 0.0008438499347845496 quantized to 16 (current 16)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:24:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:49 compute-0 lvm[358555]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:24:49 compute-0 lvm[358555]: VG ceph_vg0 finished
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.310 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.310 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.311 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] No VIF found with MAC fa:16:3e:a4:09:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.311 239969 INFO nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Using config drive
Jan 26 16:24:49 compute-0 lvm[358557]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:24:49 compute-0 lvm[358557]: VG ceph_vg1 finished
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.353 239969 DEBUG nova.storage.rbd_utils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] rbd image 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:49 compute-0 lvm[358574]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:24:49 compute-0 lvm[358574]: VG ceph_vg2 finished
Jan 26 16:24:49 compute-0 sshd-session[358488]: Connection closed by authenticating user root 209.38.206.249 port 45594 [preauth]
Jan 26 16:24:49 compute-0 blissful_perlman[358469]: {}
Jan 26 16:24:49 compute-0 systemd[1]: libpod-5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d.scope: Deactivated successfully.
Jan 26 16:24:49 compute-0 systemd[1]: libpod-5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d.scope: Consumed 1.268s CPU time.
Jan 26 16:24:49 compute-0 podman[358346]: 2026-01-26 16:24:49.476211062 +0000 UTC m=+5.323419891 container died 5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:24:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b3df7b0213760c96bec8e5de959ca61d713e08a1ab08f5ffcf8982411e9daff-merged.mount: Deactivated successfully.
Jan 26 16:24:49 compute-0 podman[358346]: 2026-01-26 16:24:49.539280137 +0000 UTC m=+5.386488976 container remove 5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:24:49 compute-0 systemd[1]: libpod-conmon-5758fd89d0c057c797d8981d9a18d6ef7a91c95cf2d97a6e59f311e6de9f996d.scope: Deactivated successfully.
Jan 26 16:24:49 compute-0 sudo[357993]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:49 compute-0 ceph-mon[75140]: pgmap v2223: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 1.9 MiB/s wr, 172 op/s
Jan 26 16:24:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3574510727' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:24:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3574510727' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:24:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1663773161' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:24:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:24:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.605 239969 DEBUG oslo_concurrency.lockutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.605 239969 DEBUG oslo_concurrency.lockutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.606 239969 DEBUG oslo_concurrency.lockutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.606 239969 DEBUG oslo_concurrency.lockutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.606 239969 DEBUG oslo_concurrency.lockutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.607 239969 INFO nova.compute.manager [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Terminating instance
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.608 239969 DEBUG nova.compute.manager [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:24:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:49 compute-0 kernel: tapb54c88a7-53 (unregistering): left promiscuous mode
Jan 26 16:24:49 compute-0 NetworkManager[48954]: <info>  [1769444689.6500] device (tapb54c88a7-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 ovn_controller[146046]: 2026-01-26T16:24:49Z|01412|binding|INFO|Releasing lport b54c88a7-5376-4cb4-b4c2-09833f6c7c31 from this chassis (sb_readonly=0)
Jan 26 16:24:49 compute-0 ovn_controller[146046]: 2026-01-26T16:24:49Z|01413|binding|INFO|Setting lport b54c88a7-5376-4cb4-b4c2-09833f6c7c31 down in Southbound
Jan 26 16:24:49 compute-0 ovn_controller[146046]: 2026-01-26T16:24:49Z|01414|binding|INFO|Removing iface tapb54c88a7-53 ovn-installed in OVS
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.663 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.669 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:05:f8 10.100.0.9'], port_security=['fa:16:3e:8a:05:f8 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '86b65c1c-c168-4ecd-b98d-9080d00d16dc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f4edfbf5-c8ce-414d-90e1-ed10e0f29897', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '501e277a-8518-441f-89b3-5cede83d9c09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5090e7f9-5f51-445b-a868-17a512d26318, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=b54c88a7-5376-4cb4-b4c2-09833f6c7c31) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:49 compute-0 sudo[358594]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.671 156105 INFO neutron.agent.ovn.metadata.agent [-] Port b54c88a7-5376-4cb4-b4c2-09833f6c7c31 in datapath f4edfbf5-c8ce-414d-90e1-ed10e0f29897 unbound from our chassis
Jan 26 16:24:49 compute-0 sudo[358594]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.673 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f4edfbf5-c8ce-414d-90e1-ed10e0f29897, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.674 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c1281ddd-5685-4890-910a-73798e6bbdd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.674 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897 namespace which is not needed anymore
Jan 26 16:24:49 compute-0 sudo[358594]: pam_unix(sudo:session): session closed for user root
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.678 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.681 239969 DEBUG nova.compute.manager [req-da569686-596d-42dd-b9ab-af73ac50b96a req-da1e4ee8-16b6-4122-ae74-0ea2860d8793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received event network-changed-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.681 239969 DEBUG nova.compute.manager [req-da569686-596d-42dd-b9ab-af73ac50b96a req-da1e4ee8-16b6-4122-ae74-0ea2860d8793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Refreshing instance network info cache due to event network-changed-b54c88a7-5376-4cb4-b4c2-09833f6c7c31. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.681 239969 DEBUG oslo_concurrency.lockutils [req-da569686-596d-42dd-b9ab-af73ac50b96a req-da1e4ee8-16b6-4122-ae74-0ea2860d8793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.681 239969 DEBUG oslo_concurrency.lockutils [req-da569686-596d-42dd-b9ab-af73ac50b96a req-da1e4ee8-16b6-4122-ae74-0ea2860d8793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.681 239969 DEBUG nova.network.neutron [req-da569686-596d-42dd-b9ab-af73ac50b96a req-da1e4ee8-16b6-4122-ae74-0ea2860d8793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Refreshing network info cache for port b54c88a7-5376-4cb4-b4c2-09833f6c7c31 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:24:49 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000084.scope: Deactivated successfully.
Jan 26 16:24:49 compute-0 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d00000084.scope: Consumed 13.435s CPU time.
Jan 26 16:24:49 compute-0 systemd-machined[208061]: Machine qemu-159-instance-00000084 terminated.
Jan 26 16:24:49 compute-0 ovn_controller[146046]: 2026-01-26T16:24:49Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:69:dc 10.100.0.5
Jan 26 16:24:49 compute-0 ovn_controller[146046]: 2026-01-26T16:24:49Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:69:dc 10.100.0.5
Jan 26 16:24:49 compute-0 neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897[356544]: [NOTICE]   (356548) : haproxy version is 2.8.14-c23fe91
Jan 26 16:24:49 compute-0 neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897[356544]: [NOTICE]   (356548) : path to executable is /usr/sbin/haproxy
Jan 26 16:24:49 compute-0 neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897[356544]: [WARNING]  (356548) : Exiting Master process...
Jan 26 16:24:49 compute-0 neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897[356544]: [ALERT]    (356548) : Current worker (356550) exited with code 143 (Terminated)
Jan 26 16:24:49 compute-0 neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897[356544]: [WARNING]  (356548) : All workers exited. Exiting... (0)
Jan 26 16:24:49 compute-0 systemd[1]: libpod-091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0.scope: Deactivated successfully.
Jan 26 16:24:49 compute-0 conmon[356544]: conmon 091ec748c40ded0d9776 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0.scope/container/memory.events
Jan 26 16:24:49 compute-0 podman[358641]: 2026-01-26 16:24:49.814502227 +0000 UTC m=+0.044625665 container died 091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.844 239969 INFO nova.virt.libvirt.driver [-] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Instance destroyed successfully.
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.844 239969 DEBUG nova.objects.instance [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'resources' on Instance uuid 86b65c1c-c168-4ecd-b98d-9080d00d16dc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0-userdata-shm.mount: Deactivated successfully.
Jan 26 16:24:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0a90d1b6e3aae1aa33a0384ee9f6de08cf5c52a65a713958d96f51a95f2c576-merged.mount: Deactivated successfully.
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.862 239969 DEBUG nova.virt.libvirt.vif [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:24:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1395075827',display_name='tempest-TestNetworkAdvancedServerOps-server-1395075827',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1395075827',id=132,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkCUJmHnWc7pQ9mlVUUqu5qElEroC5GOpBlFh+K77aMSWkLQJWupY9+lKy0giVVb0rPg4LTstifGfAnDPHqlWrsAWCj3bXxWAl8o1IOG6laF0sGgrPgkRGpSiCSZx1vFA==',key_name='tempest-TestNetworkAdvancedServerOps-2089233658',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:24:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-rb9fxt2z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:24:46Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=86b65c1c-c168-4ecd-b98d-9080d00d16dc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.863 239969 DEBUG nova.network.os_vif_util [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.189", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.864 239969 DEBUG nova.network.os_vif_util [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:8a:05:f8,bridge_name='br-int',has_traffic_filtering=True,id=b54c88a7-5376-4cb4-b4c2-09833f6c7c31,network=Network(f4edfbf5-c8ce-414d-90e1-ed10e0f29897),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54c88a7-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.864 239969 DEBUG os_vif [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:05:f8,bridge_name='br-int',has_traffic_filtering=True,id=b54c88a7-5376-4cb4-b4c2-09833f6c7c31,network=Network(f4edfbf5-c8ce-414d-90e1-ed10e0f29897),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54c88a7-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.867 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.867 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb54c88a7-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.872 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.873 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 podman[358641]: 2026-01-26 16:24:49.874167188 +0000 UTC m=+0.104290646 container cleanup 091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.875 239969 INFO os_vif [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:8a:05:f8,bridge_name='br-int',has_traffic_filtering=True,id=b54c88a7-5376-4cb4-b4c2-09833f6c7c31,network=Network(f4edfbf5-c8ce-414d-90e1-ed10e0f29897),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb54c88a7-53')
Jan 26 16:24:49 compute-0 systemd[1]: libpod-conmon-091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0.scope: Deactivated successfully.
Jan 26 16:24:49 compute-0 podman[358684]: 2026-01-26 16:24:49.935372168 +0000 UTC m=+0.040095264 container remove 091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.941 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[67b4f63d-b38f-4f2f-be78-655cccc1d6e6]: (4, ('Mon Jan 26 04:24:49 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897 (091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0)\n091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0\nMon Jan 26 04:24:49 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897 (091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0)\n091ec748c40ded0d9776e5515c05a9ce8ef42aaec131de8114115b93d5cb0df0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.943 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[95b53521-7972-4d90-a1fb-b1e4bbb1623d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.944 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4edfbf5-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.948 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:49 compute-0 kernel: tapf4edfbf5-c0: left promiscuous mode
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.962 239969 INFO nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Creating config drive at /var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428/disk.config
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.968 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fed2ab18-aecf-49c1-876c-523216ce1b94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:49 compute-0 nova_compute[239965]: 2026-01-26 16:24:49.969 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjmf4qsol execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:49 compute-0 sshd-session[358592]: Connection closed by authenticating user root 209.38.206.249 port 45602 [preauth]
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.987 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e9aee498-fcc5-4114-b3a0-0d64a6375aa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:49.989 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d6145fa0-42db-48f0-aeb2-2c8c5981ca44]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:50.008 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[809e6a2c-c67c-4569-a31b-b478a6de9537]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 614315, 'reachable_time': 38382, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 358721, 'error': None, 'target': 'ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.009 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:50 compute-0 systemd[1]: run-netns-ovnmeta\x2df4edfbf5\x2dc8ce\x2d414d\x2d90e1\x2ded10e0f29897.mount: Deactivated successfully.
Jan 26 16:24:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:50.012 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f4edfbf5-c8ce-414d-90e1-ed10e0f29897 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:24:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:50.012 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[cc823fcb-68fb-446d-a7e6-76dd68fb733e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.118 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjmf4qsol" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.150 239969 DEBUG nova.storage.rbd_utils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] rbd image 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.156 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428/disk.config 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.265 239969 DEBUG nova.network.neutron [-] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.289 239969 INFO nova.compute.manager [-] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Took 1.54 seconds to deallocate network for instance.
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.337 239969 INFO nova.virt.libvirt.driver [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Deleting instance files /var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc_del
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.338 239969 INFO nova.virt.libvirt.driver [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Deletion of /var/lib/nova/instances/86b65c1c-c168-4ecd-b98d-9080d00d16dc_del complete
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.341 239969 DEBUG oslo_concurrency.processutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428/disk.config 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.185s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.341 239969 INFO nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Deleting local config drive /var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428/disk.config because it was imported into RBD.
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.360 239969 DEBUG oslo_concurrency.lockutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.360 239969 DEBUG oslo_concurrency.lockutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:50 compute-0 kernel: tap21b1dcb8-6a: entered promiscuous mode
Jan 26 16:24:50 compute-0 NetworkManager[48954]: <info>  [1769444690.4084] manager: (tap21b1dcb8-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/581)
Jan 26 16:24:50 compute-0 sshd-session[358724]: Invalid user test from 209.38.206.249 port 45612
Jan 26 16:24:50 compute-0 systemd-udevd[358774]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:24:50 compute-0 ovn_controller[146046]: 2026-01-26T16:24:50Z|01415|binding|INFO|Claiming lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for this chassis.
Jan 26 16:24:50 compute-0 ovn_controller[146046]: 2026-01-26T16:24:50Z|01416|binding|INFO|21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e: Claiming fa:16:3e:a4:09:94 10.100.0.5
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.459 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.463 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:50.466 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:09:94 10.100.0.5'], port_security=['fa:16:3e:a4:09:94 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dda6da0-8bab-4507-ad24-be4d098fa548', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '576e8719bc524933b1f15843689e97b1', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c087843f-681f-4fe3-bdde-29cea08e250a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e164d7e9-1b1c-48fd-b9d0-dfbf1c68746b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:50.467 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e in datapath 5dda6da0-8bab-4507-ad24-be4d098fa548 bound to our chassis
Jan 26 16:24:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:50.468 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5dda6da0-8bab-4507-ad24-be4d098fa548 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:24:50 compute-0 NetworkManager[48954]: <info>  [1769444690.4698] device (tap21b1dcb8-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:24:50 compute-0 NetworkManager[48954]: <info>  [1769444690.4704] device (tap21b1dcb8-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:24:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:50.469 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[19cf45cd-1d9d-4028-adb2-f04265761bba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.476 239969 INFO nova.compute.manager [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Took 0.87 seconds to destroy the instance on the hypervisor.
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.477 239969 DEBUG oslo.service.loopingcall [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.479 239969 DEBUG nova.compute.manager [-] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.480 239969 DEBUG nova.network.neutron [-] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:24:50 compute-0 systemd-machined[208061]: New machine qemu-162-instance-00000087.
Jan 26 16:24:50 compute-0 ovn_controller[146046]: 2026-01-26T16:24:50Z|01417|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e ovn-installed in OVS
Jan 26 16:24:50 compute-0 ovn_controller[146046]: 2026-01-26T16:24:50Z|01418|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e up in Southbound
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.504 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:50 compute-0 systemd[1]: Started Virtual Machine qemu-162-instance-00000087.
Jan 26 16:24:50 compute-0 sshd-session[358724]: Connection closed by invalid user test 209.38.206.249 port 45612 [preauth]
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.537 239969 DEBUG oslo_concurrency.processutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 26 16:24:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.945 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444690.9447532, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.946 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Started (Lifecycle Event)
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.966 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.972 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444690.945201, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.972 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Paused (Lifecycle Event)
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.988 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:50 compute-0 nova_compute[239965]: 2026-01-26 16:24:50.991 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:50 compute-0 sshd-session[358787]: Invalid user max from 209.38.206.249 port 45626
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.006 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:24:51 compute-0 sshd-session[358787]: Connection closed by invalid user max 209.38.206.249 port 45626 [preauth]
Jan 26 16:24:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/520211363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.203 239969 DEBUG oslo_concurrency.processutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.667s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.209 239969 DEBUG nova.compute.provider_tree [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.228 239969 DEBUG nova.scheduler.client.report [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.255 239969 DEBUG oslo_concurrency.lockutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.285 239969 INFO nova.scheduler.client.report [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance 057983d1-894b-493d-8d76-df3cd85a69fb
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.382 239969 DEBUG oslo_concurrency.lockutils [None req-469ed618-0607-4f50-9cbc-cdfae4437627 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "057983d1-894b-493d-8d76-df3cd85a69fb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 10.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:51 compute-0 ceph-mon[75140]: pgmap v2224: 305 pgs: 305 active+clean; 260 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 105 op/s
Jan 26 16:24:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/520211363' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.793 239969 DEBUG nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received event network-vif-unplugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.794 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.794 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.794 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.794 239969 DEBUG nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] No waiting events found dispatching network-vif-unplugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.794 239969 DEBUG nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received event network-vif-unplugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.794 239969 DEBUG nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received event network-vif-plugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.795 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.795 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.795 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.795 239969 DEBUG nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] No waiting events found dispatching network-vif-plugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.795 239969 WARNING nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received unexpected event network-vif-plugged-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 for instance with vm_state active and task_state deleting.
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.795 239969 DEBUG nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.796 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.796 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.796 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.796 239969 DEBUG nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Processing event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.796 239969 DEBUG nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.796 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.796 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.797 239969 DEBUG oslo_concurrency.lockutils [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.797 239969 DEBUG nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.797 239969 WARNING nova.compute.manager [req-75a04f9f-7831-435c-bd48-43bfe1500fe3 req-048a14fd-3cb7-4eb3-87a0-d55dd051006a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state building and task_state spawning.
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.797 239969 DEBUG nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.799 239969 DEBUG nova.network.neutron [-] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.801 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444691.80076, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.801 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Resumed (Lifecycle Event)
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.803 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.806 239969 INFO nova.virt.libvirt.driver [-] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Instance spawned successfully.
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.806 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.830 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.831 239969 INFO nova.compute.manager [-] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Took 1.35 seconds to deallocate network for instance.
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.839 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.839 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.840 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.840 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.841 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.841 239969 DEBUG nova.virt.libvirt.driver [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.845 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.890 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.902 239969 DEBUG oslo_concurrency.lockutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.902 239969 DEBUG oslo_concurrency.lockutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.912 239969 INFO nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Took 14.09 seconds to spawn the instance on the hypervisor.
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.913 239969 DEBUG nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.974 239969 INFO nova.compute.manager [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Took 15.16 seconds to build instance.
Jan 26 16:24:51 compute-0 nova_compute[239965]: 2026-01-26 16:24:51.978 239969 DEBUG oslo_concurrency.processutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.029 239969 DEBUG oslo_concurrency.lockutils [None req-4df6672b-f1a4-4307-9a4d-44184a407cd6 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.293s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:24:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/267511114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.611 239969 DEBUG oslo_concurrency.processutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:24:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 213 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 225 op/s
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.616 239969 DEBUG nova.compute.provider_tree [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.632 239969 DEBUG nova.scheduler.client.report [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.651 239969 DEBUG oslo_concurrency.lockutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.697 239969 INFO nova.scheduler.client.report [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Deleted allocations for instance 86b65c1c-c168-4ecd-b98d-9080d00d16dc
Jan 26 16:24:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/267511114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:24:52 compute-0 ceph-mon[75140]: pgmap v2225: 305 pgs: 305 active+clean; 213 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 225 op/s
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.746 239969 DEBUG nova.network.neutron [req-da569686-596d-42dd-b9ab-af73ac50b96a req-da1e4ee8-16b6-4122-ae74-0ea2860d8793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Updated VIF entry in instance network info cache for port b54c88a7-5376-4cb4-b4c2-09833f6c7c31. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.746 239969 DEBUG nova.network.neutron [req-da569686-596d-42dd-b9ab-af73ac50b96a req-da1e4ee8-16b6-4122-ae74-0ea2860d8793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Updating instance_info_cache with network_info: [{"id": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "address": "fa:16:3e:8a:05:f8", "network": {"id": "f4edfbf5-c8ce-414d-90e1-ed10e0f29897", "bridge": "br-int", "label": "tempest-network-smoke--1452475994", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb54c88a7-53", "ovs_interfaceid": "b54c88a7-5376-4cb4-b4c2-09833f6c7c31", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.784 239969 DEBUG oslo_concurrency.lockutils [None req-715234c7-ab64-4841-93c4-df097e68031b 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "86b65c1c-c168-4ecd-b98d-9080d00d16dc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:52 compute-0 nova_compute[239965]: 2026-01-26 16:24:52.786 239969 DEBUG oslo_concurrency.lockutils [req-da569686-596d-42dd-b9ab-af73ac50b96a req-da1e4ee8-16b6-4122-ae74-0ea2860d8793 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-86b65c1c-c168-4ecd-b98d-9080d00d16dc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:24:53 compute-0 nova_compute[239965]: 2026-01-26 16:24:53.404 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:53 compute-0 nova_compute[239965]: 2026-01-26 16:24:53.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:53 compute-0 nova_compute[239965]: 2026-01-26 16:24:53.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:24:53 compute-0 nova_compute[239965]: 2026-01-26 16:24:53.549 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:24:53 compute-0 nova_compute[239965]: 2026-01-26 16:24:53.948 239969 DEBUG nova.compute.manager [req-f857debc-a9bc-4af0-b8ad-e61a45fa99b9 req-43b6dc18-17ad-40eb-8069-6c4dd9cae84c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Received event network-vif-deleted-b54c88a7-5376-4cb4-b4c2-09833f6c7c31 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:53 compute-0 nova_compute[239965]: 2026-01-26 16:24:53.949 239969 INFO nova.compute.manager [req-f857debc-a9bc-4af0-b8ad-e61a45fa99b9 req-43b6dc18-17ad-40eb-8069-6c4dd9cae84c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Neutron deleted interface b54c88a7-5376-4cb4-b4c2-09833f6c7c31; detaching it from the instance and deleting it from the info cache
Jan 26 16:24:53 compute-0 nova_compute[239965]: 2026-01-26 16:24:53.949 239969 DEBUG nova.network.neutron [req-f857debc-a9bc-4af0-b8ad-e61a45fa99b9 req-43b6dc18-17ad-40eb-8069-6c4dd9cae84c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 26 16:24:53 compute-0 nova_compute[239965]: 2026-01-26 16:24:53.953 239969 DEBUG nova.compute.manager [req-f857debc-a9bc-4af0-b8ad-e61a45fa99b9 req-43b6dc18-17ad-40eb-8069-6c4dd9cae84c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Detach interface failed, port_id=b54c88a7-5376-4cb4-b4c2-09833f6c7c31, reason: Instance 86b65c1c-c168-4ecd-b98d-9080d00d16dc could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:24:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:54 compute-0 nova_compute[239965]: 2026-01-26 16:24:54.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:54 compute-0 nova_compute[239965]: 2026-01-26 16:24:54.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 213 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 3.6 MiB/s wr, 148 op/s
Jan 26 16:24:54 compute-0 ceph-mon[75140]: pgmap v2226: 305 pgs: 305 active+clean; 213 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 506 KiB/s rd, 3.6 MiB/s wr, 148 op/s
Jan 26 16:24:54 compute-0 nova_compute[239965]: 2026-01-26 16:24:54.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:55 compute-0 nova_compute[239965]: 2026-01-26 16:24:54.999 239969 DEBUG nova.objects.instance [None req-f926e14d-4997-47fa-9145-41b58ffadd22 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:55 compute-0 nova_compute[239965]: 2026-01-26 16:24:55.022 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444695.0213048, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:55 compute-0 nova_compute[239965]: 2026-01-26 16:24:55.022 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Paused (Lifecycle Event)
Jan 26 16:24:55 compute-0 nova_compute[239965]: 2026-01-26 16:24:55.045 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:55 compute-0 nova_compute[239965]: 2026-01-26 16:24:55.049 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:24:55 compute-0 nova_compute[239965]: 2026-01-26 16:24:55.064 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 26 16:24:56 compute-0 kernel: tap21b1dcb8-6a (unregistering): left promiscuous mode
Jan 26 16:24:56 compute-0 NetworkManager[48954]: <info>  [1769444696.4280] device (tap21b1dcb8-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.440 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:56 compute-0 ovn_controller[146046]: 2026-01-26T16:24:56Z|01419|binding|INFO|Releasing lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e from this chassis (sb_readonly=0)
Jan 26 16:24:56 compute-0 ovn_controller[146046]: 2026-01-26T16:24:56Z|01420|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e down in Southbound
Jan 26 16:24:56 compute-0 ovn_controller[146046]: 2026-01-26T16:24:56Z|01421|binding|INFO|Removing iface tap21b1dcb8-6a ovn-installed in OVS
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.443 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:56.453 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:09:94 10.100.0.5'], port_security=['fa:16:3e:a4:09:94 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dda6da0-8bab-4507-ad24-be4d098fa548', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '576e8719bc524933b1f15843689e97b1', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c087843f-681f-4fe3-bdde-29cea08e250a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e164d7e9-1b1c-48fd-b9d0-dfbf1c68746b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:24:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:56.455 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e in datapath 5dda6da0-8bab-4507-ad24-be4d098fa548 unbound from our chassis
Jan 26 16:24:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:56.457 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5dda6da0-8bab-4507-ad24-be4d098fa548 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:24:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:56.458 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6081e8b4-3f2f-40ae-8423-8f14b210ba3b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.476 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:56 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 26 16:24:56 compute-0 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000087.scope: Consumed 3.729s CPU time.
Jan 26 16:24:56 compute-0 systemd-machined[208061]: Machine qemu-162-instance-00000087 terminated.
Jan 26 16:24:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.640 239969 DEBUG nova.compute.manager [None req-f926e14d-4997-47fa-9145-41b58ffadd22 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.688 239969 DEBUG nova.compute.manager [req-9463e476-9319-4892-8a59-cf8fa91ecfc7 req-4cb328a3-c43f-4e75-ae1a-efb2f1038f6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-unplugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.690 239969 DEBUG oslo_concurrency.lockutils [req-9463e476-9319-4892-8a59-cf8fa91ecfc7 req-4cb328a3-c43f-4e75-ae1a-efb2f1038f6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.690 239969 DEBUG oslo_concurrency.lockutils [req-9463e476-9319-4892-8a59-cf8fa91ecfc7 req-4cb328a3-c43f-4e75-ae1a-efb2f1038f6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.690 239969 DEBUG oslo_concurrency.lockutils [req-9463e476-9319-4892-8a59-cf8fa91ecfc7 req-4cb328a3-c43f-4e75-ae1a-efb2f1038f6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.691 239969 DEBUG nova.compute.manager [req-9463e476-9319-4892-8a59-cf8fa91ecfc7 req-4cb328a3-c43f-4e75-ae1a-efb2f1038f6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-unplugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:56 compute-0 nova_compute[239965]: 2026-01-26 16:24:56.691 239969 WARNING nova.compute.manager [req-9463e476-9319-4892-8a59-cf8fa91ecfc7 req-4cb328a3-c43f-4e75-ae1a-efb2f1038f6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-unplugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state active and task_state suspending.
Jan 26 16:24:56 compute-0 ceph-mon[75140]: pgmap v2227: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 180 op/s
Jan 26 16:24:57 compute-0 nova_compute[239965]: 2026-01-26 16:24:57.469 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444682.4679623, 057983d1-894b-493d-8d76-df3cd85a69fb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:24:57 compute-0 nova_compute[239965]: 2026-01-26 16:24:57.469 239969 INFO nova.compute.manager [-] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] VM Stopped (Lifecycle Event)
Jan 26 16:24:57 compute-0 nova_compute[239965]: 2026-01-26 16:24:57.489 239969 DEBUG nova.compute.manager [None req-2e112d08-9f70-4b92-965b-a923c92a549e - - - - - -] [instance: 057983d1-894b-493d-8d76-df3cd85a69fb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:24:57 compute-0 nova_compute[239965]: 2026-01-26 16:24:57.520 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:57 compute-0 nova_compute[239965]: 2026-01-26 16:24:57.816 239969 INFO nova.compute.manager [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Resuming
Jan 26 16:24:57 compute-0 nova_compute[239965]: 2026-01-26 16:24:57.818 239969 DEBUG nova.objects.instance [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'flavor' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:24:57 compute-0 nova_compute[239965]: 2026-01-26 16:24:57.859 239969 DEBUG oslo_concurrency.lockutils [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:24:57 compute-0 nova_compute[239965]: 2026-01-26 16:24:57.860 239969 DEBUG oslo_concurrency.lockutils [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquired lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:24:57 compute-0 nova_compute[239965]: 2026-01-26 16:24:57.860 239969 DEBUG nova.network.neutron [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:24:58 compute-0 ovn_controller[146046]: 2026-01-26T16:24:58Z|01422|binding|INFO|Releasing lport 68565700-2045-4a57-8862-746d623af06c from this chassis (sb_readonly=0)
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.451 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.454 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:24:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 197 op/s
Jan 26 16:24:58 compute-0 ceph-mon[75140]: pgmap v2228: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 197 op/s
Jan 26 16:24:58 compute-0 sshd-session[358895]: Invalid user ansadmin from 209.38.206.249 port 45638
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.778 239969 DEBUG nova.compute.manager [req-ed773569-a104-4926-8499-7ab03df9ef34 req-a9788afe-c975-4a47-8234-7f6bc7d10940 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.779 239969 DEBUG oslo_concurrency.lockutils [req-ed773569-a104-4926-8499-7ab03df9ef34 req-a9788afe-c975-4a47-8234-7f6bc7d10940 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.779 239969 DEBUG oslo_concurrency.lockutils [req-ed773569-a104-4926-8499-7ab03df9ef34 req-a9788afe-c975-4a47-8234-7f6bc7d10940 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.780 239969 DEBUG oslo_concurrency.lockutils [req-ed773569-a104-4926-8499-7ab03df9ef34 req-a9788afe-c975-4a47-8234-7f6bc7d10940 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.780 239969 DEBUG nova.compute.manager [req-ed773569-a104-4926-8499-7ab03df9ef34 req-a9788afe-c975-4a47-8234-7f6bc7d10940 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:24:58 compute-0 nova_compute[239965]: 2026-01-26 16:24:58.780 239969 WARNING nova.compute.manager [req-ed773569-a104-4926-8499-7ab03df9ef34 req-a9788afe-c975-4a47-8234-7f6bc7d10940 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state suspended and task_state resuming.
Jan 26 16:24:58 compute-0 sshd-session[358895]: Connection closed by invalid user ansadmin 209.38.206.249 port 45638 [preauth]
Jan 26 16:24:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:24:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:59.249 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:24:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:59.250 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:24:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:24:59.251 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:24:59 compute-0 nova_compute[239965]: 2026-01-26 16:24:59.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:24:59 compute-0 nova_compute[239965]: 2026-01-26 16:24:59.876 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.535 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.535 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.846 239969 DEBUG nova.network.neutron [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Updating instance_info_cache with network_info: [{"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.862 239969 DEBUG oslo_concurrency.lockutils [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Releasing lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.874 239969 DEBUG nova.virt.libvirt.vif [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1338239947',display_name='tempest-TestServerAdvancedOps-server-1338239947',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1338239947',id=135,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:24:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='576e8719bc524933b1f15843689e97b1',ramdisk_id='',reservation_id='r-7g6d9pwb',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1211125831',owner_user_name='tempest-TestServerAdvancedOps-1211125831-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:24:56Z,user_data=None,user_id='7746910a4ce44ee4956e77cfac123ca8',uuid=0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.875 239969 DEBUG nova.network.os_vif_util [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converting VIF {"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.877 239969 DEBUG nova.network.os_vif_util [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.878 239969 DEBUG os_vif [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.880 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.880 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.885 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.885 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21b1dcb8-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.886 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21b1dcb8-6a, col_values=(('external_ids', {'iface-id': '21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:09:94', 'vm-uuid': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.887 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:00 compute-0 nova_compute[239965]: 2026-01-26 16:25:00.887 239969 INFO os_vif [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a')
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.007 239969 DEBUG nova.objects.instance [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:01 compute-0 ceph-mon[75140]: pgmap v2229: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 180 op/s
Jan 26 16:25:01 compute-0 kernel: tap21b1dcb8-6a: entered promiscuous mode
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:01 compute-0 ovn_controller[146046]: 2026-01-26T16:25:01Z|01423|binding|INFO|Claiming lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for this chassis.
Jan 26 16:25:01 compute-0 ovn_controller[146046]: 2026-01-26T16:25:01Z|01424|binding|INFO|21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e: Claiming fa:16:3e:a4:09:94 10.100.0.5
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.110 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:01 compute-0 NetworkManager[48954]: <info>  [1769444701.1110] manager: (tap21b1dcb8-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/582)
Jan 26 16:25:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:01.115 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:09:94 10.100.0.5'], port_security=['fa:16:3e:a4:09:94 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dda6da0-8bab-4507-ad24-be4d098fa548', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '576e8719bc524933b1f15843689e97b1', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c087843f-681f-4fe3-bdde-29cea08e250a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e164d7e9-1b1c-48fd-b9d0-dfbf1c68746b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:01.118 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e in datapath 5dda6da0-8bab-4507-ad24-be4d098fa548 bound to our chassis
Jan 26 16:25:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:01.118 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5dda6da0-8bab-4507-ad24-be4d098fa548 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:25:01 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:01.120 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8e996f2f-6fd8-4a20-9f38-e2b1d0e9e5c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:01 compute-0 systemd-udevd[358931]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:25:01 compute-0 systemd-machined[208061]: New machine qemu-163-instance-00000087.
Jan 26 16:25:01 compute-0 ovn_controller[146046]: 2026-01-26T16:25:01Z|01425|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e ovn-installed in OVS
Jan 26 16:25:01 compute-0 ovn_controller[146046]: 2026-01-26T16:25:01Z|01426|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e up in Southbound
Jan 26 16:25:01 compute-0 NetworkManager[48954]: <info>  [1769444701.1540] device (tap21b1dcb8-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:25:01 compute-0 systemd[1]: Started Virtual Machine qemu-163-instance-00000087.
Jan 26 16:25:01 compute-0 NetworkManager[48954]: <info>  [1769444701.1549] device (tap21b1dcb8-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.152 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.266 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "c7429f68-4b47-47a9-86bb-8ffec79926a6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.268 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.290 239969 DEBUG nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.371 239969 DEBUG nova.compute.manager [req-45c28a7f-59ad-4b5b-b619-fb3ea928caaa req-6ba6b1c5-66ac-4159-9e83-1f31617c3fdd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.371 239969 DEBUG oslo_concurrency.lockutils [req-45c28a7f-59ad-4b5b-b619-fb3ea928caaa req-6ba6b1c5-66ac-4159-9e83-1f31617c3fdd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.371 239969 DEBUG oslo_concurrency.lockutils [req-45c28a7f-59ad-4b5b-b619-fb3ea928caaa req-6ba6b1c5-66ac-4159-9e83-1f31617c3fdd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.372 239969 DEBUG oslo_concurrency.lockutils [req-45c28a7f-59ad-4b5b-b619-fb3ea928caaa req-6ba6b1c5-66ac-4159-9e83-1f31617c3fdd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.372 239969 DEBUG nova.compute.manager [req-45c28a7f-59ad-4b5b-b619-fb3ea928caaa req-6ba6b1c5-66ac-4159-9e83-1f31617c3fdd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.372 239969 WARNING nova.compute.manager [req-45c28a7f-59ad-4b5b-b619-fb3ea928caaa req-6ba6b1c5-66ac-4159-9e83-1f31617c3fdd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state suspended and task_state resuming.
Jan 26 16:25:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:25:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430737590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.429 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.894s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.677 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.678 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444701.6774323, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.678 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Started (Lifecycle Event)
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.685 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.685 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.700 239969 DEBUG nova.compute.manager [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.700 239969 DEBUG nova.objects.instance [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.703 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.706 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.707 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.708 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.718 239969 INFO nova.virt.libvirt.driver [-] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Instance running successfully.
Jan 26 16:25:01 compute-0 virtqemud[240263]: argument unsupported: QEMU guest agent is not configured
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.720 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.721 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.722 239969 DEBUG nova.virt.libvirt.guest [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.723 239969 DEBUG nova.compute.manager [None req-d6487cb2-97bc-4eaa-bd63-932d9b78b378 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.728 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.728 239969 INFO nova.compute.claims [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.734 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.734 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444701.6812644, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.735 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Resumed (Lifecycle Event)
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.779 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.782 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.907 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.908 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3372MB free_disk=59.92112521082163GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:25:01 compute-0 nova_compute[239965]: 2026-01-26 16:25:01.908 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.080 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3430737590' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:02 compute-0 sshd-session[358985]: Connection closed by authenticating user root 209.38.206.249 port 53608 [preauth]
Jan 26 16:25:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 183 op/s
Jan 26 16:25:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:25:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3663197673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.709 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.717 239969 DEBUG nova.compute.provider_tree [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.741 239969 DEBUG nova.scheduler.client.report [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.766 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.767 239969 DEBUG nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.770 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.858 239969 DEBUG nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.859 239969 DEBUG nova.network.neutron [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.878 239969 INFO nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.901 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 1612372a-bdd8-4900-82cf-902335d93c41 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.901 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.902 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance c7429f68-4b47-47a9-86bb-8ffec79926a6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.902 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.902 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.906 239969 DEBUG nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:25:02 compute-0 sshd-session[359007]: Invalid user test from 209.38.206.249 port 53612
Jan 26 16:25:02 compute-0 nova_compute[239965]: 2026-01-26 16:25:02.994 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.051 239969 DEBUG nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.053 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:25:03 compute-0 sshd-session[359007]: Connection closed by invalid user test 209.38.206.249 port 53612 [preauth]
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.053 239969 INFO nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Creating image(s)
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.086 239969 DEBUG nova.storage.rbd_utils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image c7429f68-4b47-47a9-86bb-8ffec79926a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.113 239969 DEBUG nova.storage.rbd_utils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image c7429f68-4b47-47a9-86bb-8ffec79926a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.143 239969 DEBUG nova.storage.rbd_utils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image c7429f68-4b47-47a9-86bb-8ffec79926a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.159 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.245 239969 DEBUG oslo_concurrency.lockutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "1612372a-bdd8-4900-82cf-902335d93c41" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.246 239969 DEBUG oslo_concurrency.lockutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.246 239969 DEBUG oslo_concurrency.lockutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "1612372a-bdd8-4900-82cf-902335d93c41-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.246 239969 DEBUG oslo_concurrency.lockutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.247 239969 DEBUG oslo_concurrency.lockutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.248 239969 INFO nova.compute.manager [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Terminating instance
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.250 239969 DEBUG nova.compute.manager [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.251 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.253 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.254 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.254 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.279 239969 DEBUG nova.storage.rbd_utils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image c7429f68-4b47-47a9-86bb-8ffec79926a6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.283 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 c7429f68-4b47-47a9-86bb-8ffec79926a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.330 239969 DEBUG nova.objects.instance [None req-1baeaebb-42f9-44c9-a2c9-080dd05e07da 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.372 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444703.3716333, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.372 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Paused (Lifecycle Event)
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.397 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.400 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.417 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.454 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.459 239969 DEBUG nova.policy [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:25:03 compute-0 sshd-session[359067]: Invalid user devopsadmin from 209.38.206.249 port 53614
Jan 26 16:25:03 compute-0 sshd-session[359067]: Connection closed by invalid user devopsadmin 209.38.206.249 port 53614 [preauth]
Jan 26 16:25:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:25:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3889620400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.698 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.703s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.704 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.733 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.768 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.769 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.907 239969 DEBUG nova.compute.manager [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.908 239969 DEBUG oslo_concurrency.lockutils [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.908 239969 DEBUG oslo_concurrency.lockutils [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.909 239969 DEBUG oslo_concurrency.lockutils [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.909 239969 DEBUG nova.compute.manager [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.910 239969 WARNING nova.compute.manager [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state active and task_state suspending.
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.910 239969 DEBUG nova.compute.manager [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received event network-changed-0a21656f-54b8-4cb5-9654-be9b13a49ce2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.911 239969 DEBUG nova.compute.manager [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Refreshing instance network info cache due to event network-changed-0a21656f-54b8-4cb5-9654-be9b13a49ce2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.911 239969 DEBUG oslo_concurrency.lockutils [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.912 239969 DEBUG oslo_concurrency.lockutils [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:03 compute-0 nova_compute[239965]: 2026-01-26 16:25:03.912 239969 DEBUG nova.network.neutron [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Refreshing network info cache for port 0a21656f-54b8-4cb5-9654-be9b13a49ce2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:25:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:04 compute-0 ceph-mon[75140]: pgmap v2230: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 183 op/s
Jan 26 16:25:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3663197673' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 63 op/s
Jan 26 16:25:04 compute-0 kernel: tap21b1dcb8-6a (unregistering): left promiscuous mode
Jan 26 16:25:04 compute-0 NetworkManager[48954]: <info>  [1769444704.6356] device (tap21b1dcb8-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:25:04 compute-0 kernel: tap0a21656f-54 (unregistering): left promiscuous mode
Jan 26 16:25:04 compute-0 NetworkManager[48954]: <info>  [1769444704.6485] device (tap0a21656f-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:25:04 compute-0 ovn_controller[146046]: 2026-01-26T16:25:04Z|01427|binding|INFO|Releasing lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e from this chassis (sb_readonly=0)
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.650 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:04 compute-0 ovn_controller[146046]: 2026-01-26T16:25:04Z|01428|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e down in Southbound
Jan 26 16:25:04 compute-0 ovn_controller[146046]: 2026-01-26T16:25:04Z|01429|binding|INFO|Removing iface tap21b1dcb8-6a ovn-installed in OVS
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.652 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:04.671 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:09:94 10.100.0.5'], port_security=['fa:16:3e:a4:09:94 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dda6da0-8bab-4507-ad24-be4d098fa548', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '576e8719bc524933b1f15843689e97b1', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c087843f-681f-4fe3-bdde-29cea08e250a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e164d7e9-1b1c-48fd-b9d0-dfbf1c68746b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:04.672 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e in datapath 5dda6da0-8bab-4507-ad24-be4d098fa548 unbound from our chassis
Jan 26 16:25:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:04.672 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5dda6da0-8bab-4507-ad24-be4d098fa548 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:25:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:04.673 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d0528a5d-ee17-4a3e-9b5a-dc25836fc114]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:04 compute-0 ovn_controller[146046]: 2026-01-26T16:25:04Z|01430|binding|INFO|Releasing lport 0a21656f-54b8-4cb5-9654-be9b13a49ce2 from this chassis (sb_readonly=0)
Jan 26 16:25:04 compute-0 ovn_controller[146046]: 2026-01-26T16:25:04Z|01431|binding|INFO|Setting lport 0a21656f-54b8-4cb5-9654-be9b13a49ce2 down in Southbound
Jan 26 16:25:04 compute-0 ovn_controller[146046]: 2026-01-26T16:25:04Z|01432|binding|INFO|Removing iface tap0a21656f-54 ovn-installed in OVS
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.689 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:04.694 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:69:dc 10.100.0.5'], port_security=['fa:16:3e:82:69:dc 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '1612372a-bdd8-4900-82cf-902335d93c41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-3ed016ed-52a2-4162-aa29-25b6a60d9513', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a5f53cff-127a-4a6c-8cce-783f85bf3ab7 b3b6fb74-2a30-43ad-acfe-f4d1ad979b67', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f7a974e2-b4a3-4d25-930e-0b964fb30d31, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=0a21656f-54b8-4cb5-9654-be9b13a49ce2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:04.695 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 0a21656f-54b8-4cb5-9654-be9b13a49ce2 in datapath 3ed016ed-52a2-4162-aa29-25b6a60d9513 unbound from our chassis
Jan 26 16:25:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:04.696 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 3ed016ed-52a2-4162-aa29-25b6a60d9513, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:25:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:04.698 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1866f3a6-4e2e-4d74-8ff5-2a644156c100]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:04.698 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513 namespace which is not needed anymore
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.713 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:04 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 26 16:25:04 compute-0 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000087.scope: Consumed 2.192s CPU time.
Jan 26 16:25:04 compute-0 systemd-machined[208061]: Machine qemu-163-instance-00000087 terminated.
Jan 26 16:25:04 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000086.scope: Deactivated successfully.
Jan 26 16:25:04 compute-0 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000086.scope: Consumed 13.732s CPU time.
Jan 26 16:25:04 compute-0 systemd-machined[208061]: Machine qemu-161-instance-00000086 terminated.
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.768 239969 INFO nova.virt.libvirt.driver [-] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Instance destroyed successfully.
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.769 239969 DEBUG nova.objects.instance [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'resources' on Instance uuid 1612372a-bdd8-4900-82cf-902335d93c41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.790 239969 DEBUG nova.virt.libvirt.vif [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:24:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1048999134',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1048999134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=134,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGgZdPdXDQOl4vr4vDWBU2Dqr4teM6XHl7PTUYrRRLOB1y6m0VjRT0n1AjEl+718fAZXjP3ZgkMy9qm3KYl4oDGV88kyA1QCYDtQiYXgUmOkcs01daVhoT8MYQR0iSMtlg==',key_name='tempest-TestSecurityGroupsBasicOps-553015190',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:24:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-t5e325u7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:24:37Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=1612372a-bdd8-4900-82cf-902335d93c41,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.791 239969 DEBUG nova.network.os_vif_util [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.792 239969 DEBUG nova.network.os_vif_util [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:69:dc,bridge_name='br-int',has_traffic_filtering=True,id=0a21656f-54b8-4cb5-9654-be9b13a49ce2,network=Network(3ed016ed-52a2-4162-aa29-25b6a60d9513),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a21656f-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.792 239969 DEBUG os_vif [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:69:dc,bridge_name='br-int',has_traffic_filtering=True,id=0a21656f-54b8-4cb5-9654-be9b13a49ce2,network=Network(3ed016ed-52a2-4162-aa29-25b6a60d9513),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a21656f-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.795 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.796 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a21656f-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.798 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.800 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.809 239969 INFO os_vif [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:69:dc,bridge_name='br-int',has_traffic_filtering=True,id=0a21656f-54b8-4cb5-9654-be9b13a49ce2,network=Network(3ed016ed-52a2-4162-aa29-25b6a60d9513),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a21656f-54')
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.852 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444689.8409262, 86b65c1c-c168-4ecd-b98d-9080d00d16dc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.852 239969 INFO nova.compute.manager [-] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] VM Stopped (Lifecycle Event)
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.859 239969 DEBUG nova.network.neutron [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Successfully updated port: 07819469-3bd7-4885-9144-3eceda4eadcd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.877 239969 DEBUG nova.compute.manager [None req-a90e1821-a1d8-49dd-8a47-2b702293c139 - - - - - -] [instance: 86b65c1c-c168-4ecd-b98d-9080d00d16dc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.878 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-c7429f68-4b47-47a9-86bb-8ffec79926a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.878 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-c7429f68-4b47-47a9-86bb-8ffec79926a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.879 239969 DEBUG nova.network.neutron [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:25:04 compute-0 nova_compute[239965]: 2026-01-26 16:25:04.930 239969 DEBUG nova.compute.manager [None req-1baeaebb-42f9-44c9-a2c9-080dd05e07da 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:05 compute-0 neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513[358052]: [NOTICE]   (358056) : haproxy version is 2.8.14-c23fe91
Jan 26 16:25:05 compute-0 neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513[358052]: [NOTICE]   (358056) : path to executable is /usr/sbin/haproxy
Jan 26 16:25:05 compute-0 neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513[358052]: [WARNING]  (358056) : Exiting Master process...
Jan 26 16:25:05 compute-0 neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513[358052]: [ALERT]    (358056) : Current worker (358059) exited with code 143 (Terminated)
Jan 26 16:25:05 compute-0 neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513[358052]: [WARNING]  (358056) : All workers exited. Exiting... (0)
Jan 26 16:25:05 compute-0 systemd[1]: libpod-22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70.scope: Deactivated successfully.
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.063 239969 DEBUG nova.compute.manager [req-3d611739-390c-4579-ab46-276df20c291c req-f8a79758-eaae-4015-afd0-a25890b14f69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Received event network-changed-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.064 239969 DEBUG nova.compute.manager [req-3d611739-390c-4579-ab46-276df20c291c req-f8a79758-eaae-4015-afd0-a25890b14f69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Refreshing instance network info cache due to event network-changed-07819469-3bd7-4885-9144-3eceda4eadcd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:25:05 compute-0 podman[359174]: 2026-01-26 16:25:05.064472797 +0000 UTC m=+0.255089528 container died 22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.064 239969 DEBUG oslo_concurrency.lockutils [req-3d611739-390c-4579-ab46-276df20c291c req-f8a79758-eaae-4015-afd0-a25890b14f69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-c7429f68-4b47-47a9-86bb-8ffec79926a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.126 239969 DEBUG nova.network.neutron [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70-userdata-shm.mount: Deactivated successfully.
Jan 26 16:25:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-7834125455c3c58caf2314ea59ab5740e988abc38e5108cb43bbfbec25fc85e9-merged.mount: Deactivated successfully.
Jan 26 16:25:05 compute-0 podman[359174]: 2026-01-26 16:25:05.157914306 +0000 UTC m=+0.348531057 container cleanup 22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:25:05 compute-0 systemd[1]: libpod-conmon-22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70.scope: Deactivated successfully.
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.175 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 c7429f68-4b47-47a9-86bb-8ffec79926a6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.892s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.241 239969 DEBUG nova.storage.rbd_utils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image c7429f68-4b47-47a9-86bb-8ffec79926a6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:25:05 compute-0 podman[359233]: 2026-01-26 16:25:05.254858819 +0000 UTC m=+0.065142745 container remove 22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:25:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:05.260 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a670443a-fc3b-49f1-afb3-6b155b889495]: (4, ('Mon Jan 26 04:25:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513 (22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70)\n22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70\nMon Jan 26 04:25:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513 (22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70)\n22d472c74473e1236880942d438427cbbb0bcebe0739dd4cd612e15031068c70\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:05.262 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e8f179-797c-4728-a2a6-c2d6965fd3e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:05.263 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ed016ed-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:05 compute-0 kernel: tap3ed016ed-50: left promiscuous mode
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.277 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.283 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:05.286 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cd673c2b-5a61-47f9-8f2f-a140fa117496]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:05.300 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f983c3d0-c4b5-4991-b48b-e68ff8ac81c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:05.302 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5719d04d-ab7f-4bf6-8d87-6611a85804dd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:05.316 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[48ede0e7-2b3a-406d-8dba-63fe2e0f8050]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 616765, 'reachable_time': 27692, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359305, 'error': None, 'target': 'ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d3ed016ed\x2d52a2\x2d4162\x2daa29\x2d25b6a60d9513.mount: Deactivated successfully.
Jan 26 16:25:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:05.319 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-3ed016ed-52a2-4162-aa29-25b6a60d9513 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:25:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:05.319 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[0e027fd0-45b9-4e64-98bc-fccbad2f518c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.353 239969 DEBUG nova.objects.instance [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid c7429f68-4b47-47a9-86bb-8ffec79926a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.365 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.365 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Ensure instance console log exists: /var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.366 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.366 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.366 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.459 239969 INFO nova.virt.libvirt.driver [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Deleting instance files /var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41_del
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.461 239969 INFO nova.virt.libvirt.driver [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Deletion of /var/lib/nova/instances/1612372a-bdd8-4900-82cf-902335d93c41_del complete
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.515 239969 DEBUG nova.network.neutron [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Updated VIF entry in instance network info cache for port 0a21656f-54b8-4cb5-9654-be9b13a49ce2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.516 239969 DEBUG nova.network.neutron [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Updating instance_info_cache with network_info: [{"id": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "address": "fa:16:3e:82:69:dc", "network": {"id": "3ed016ed-52a2-4162-aa29-25b6a60d9513", "bridge": "br-int", "label": "tempest-network-smoke--1387424993", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a21656f-54", "ovs_interfaceid": "0a21656f-54b8-4cb5-9654-be9b13a49ce2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.519 239969 INFO nova.compute.manager [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Took 2.27 seconds to destroy the instance on the hypervisor.
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.520 239969 DEBUG oslo.service.loopingcall [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.520 239969 DEBUG nova.compute.manager [-] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.520 239969 DEBUG nova.network.neutron [-] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:25:05 compute-0 nova_compute[239965]: 2026-01-26 16:25:05.547 239969 DEBUG oslo_concurrency.lockutils [req-30cbca39-0c87-4b5a-9a66-2a8e636411cf req-0d4aeea4-2079-4f51-9ff5-8868e00e72ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-1612372a-bdd8-4900-82cf-902335d93c41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3889620400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:05 compute-0 ceph-mon[75140]: pgmap v2231: 305 pgs: 305 active+clean; 213 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 12 KiB/s wr, 63 op/s
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.026 239969 INFO nova.compute.manager [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Resuming
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.027 239969 DEBUG nova.objects.instance [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'flavor' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.065 239969 DEBUG oslo_concurrency.lockutils [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.066 239969 DEBUG oslo_concurrency.lockutils [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquired lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.066 239969 DEBUG nova.network.neutron [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.071 239969 DEBUG nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-unplugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.072 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.072 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.073 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.073 239969 DEBUG nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-unplugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.074 239969 WARNING nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-unplugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state suspended and task_state resuming.
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.074 239969 DEBUG nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.074 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.075 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.075 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.076 239969 DEBUG nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.077 239969 WARNING nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state suspended and task_state resuming.
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.077 239969 DEBUG nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received event network-vif-unplugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.078 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1612372a-bdd8-4900-82cf-902335d93c41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.078 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.079 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.079 239969 DEBUG nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] No waiting events found dispatching network-vif-unplugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.080 239969 DEBUG nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received event network-vif-unplugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.080 239969 DEBUG nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received event network-vif-plugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.081 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "1612372a-bdd8-4900-82cf-902335d93c41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.081 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.082 239969 DEBUG oslo_concurrency.lockutils [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.082 239969 DEBUG nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] No waiting events found dispatching network-vif-plugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:06 compute-0 nova_compute[239965]: 2026-01-26 16:25:06.083 239969 WARNING nova.compute.manager [req-61a01930-d6d3-488a-855f-3143a27b3f1f req-8b166092-fd8c-471a-91f7-b7dd5613e39f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received unexpected event network-vif-plugged-0a21656f-54b8-4cb5-9654-be9b13a49ce2 for instance with vm_state active and task_state deleting.
Jan 26 16:25:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 184 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 362 KiB/s wr, 87 op/s
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.045 239969 DEBUG nova.network.neutron [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Updating instance_info_cache with network_info: [{"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.071 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-c7429f68-4b47-47a9-86bb-8ffec79926a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.072 239969 DEBUG nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Instance network_info: |[{"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.072 239969 DEBUG oslo_concurrency.lockutils [req-3d611739-390c-4579-ab46-276df20c291c req-f8a79758-eaae-4015-afd0-a25890b14f69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-c7429f68-4b47-47a9-86bb-8ffec79926a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.072 239969 DEBUG nova.network.neutron [req-3d611739-390c-4579-ab46-276df20c291c req-f8a79758-eaae-4015-afd0-a25890b14f69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Refreshing network info cache for port 07819469-3bd7-4885-9144-3eceda4eadcd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.076 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Start _get_guest_xml network_info=[{"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.080 239969 WARNING nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.086 239969 DEBUG nova.virt.libvirt.host [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.086 239969 DEBUG nova.virt.libvirt.host [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.089 239969 DEBUG nova.virt.libvirt.host [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.090 239969 DEBUG nova.virt.libvirt.host [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.090 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.090 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.091 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.091 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.091 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.092 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.092 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.092 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.093 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.093 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.093 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.093 239969 DEBUG nova.virt.hardware [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.097 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:07 compute-0 podman[359335]: 2026-01-26 16:25:07.373383442 +0000 UTC m=+0.059289613 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 16:25:07 compute-0 podman[359336]: 2026-01-26 16:25:07.403870849 +0000 UTC m=+0.088537199 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 26 16:25:07 compute-0 ceph-mon[75140]: pgmap v2232: 305 pgs: 305 active+clean; 184 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 362 KiB/s wr, 87 op/s
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.653 239969 DEBUG nova.network.neutron [-] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.689 239969 INFO nova.compute.manager [-] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Took 2.17 seconds to deallocate network for instance.
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.784 239969 DEBUG oslo_concurrency.lockutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:07 compute-0 nova_compute[239965]: 2026-01-26 16:25:07.785 239969 DEBUG oslo_concurrency.lockutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.013 239969 DEBUG oslo_concurrency.processutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.066 239969 DEBUG nova.network.neutron [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Updating instance_info_cache with network_info: [{"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.087 239969 DEBUG oslo_concurrency.lockutils [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Releasing lock "refresh_cache-0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.104 239969 DEBUG nova.virt.libvirt.vif [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1338239947',display_name='tempest-TestServerAdvancedOps-server-1338239947',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1338239947',id=135,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:24:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='576e8719bc524933b1f15843689e97b1',ramdisk_id='',reservation_id='r-7g6d9pwb',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1211125831',owner_user_name='tempest-TestServerAdvancedOps-1211125831-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:25:04Z,user_data=None,user_id='7746910a4ce44ee4956e77cfac123ca8',uuid=0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.105 239969 DEBUG nova.network.os_vif_util [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converting VIF {"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.107 239969 DEBUG nova.network.os_vif_util [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.107 239969 DEBUG os_vif [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.108 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.109 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.110 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.114 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.114 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap21b1dcb8-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.115 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap21b1dcb8-6a, col_values=(('external_ids', {'iface-id': '21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:09:94', 'vm-uuid': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.115 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.116 239969 INFO os_vif [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a')
Jan 26 16:25:08 compute-0 sshd-session[359390]: Invalid user git from 209.38.206.249 port 53616
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.223 239969 DEBUG nova.compute.manager [req-e7057180-5bd4-44ba-bf67-56d1bb053819 req-d5a238d3-a37f-435a-807a-cd74dcfda5b5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Received event network-vif-deleted-0a21656f-54b8-4cb5-9654-be9b13a49ce2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.237 239969 DEBUG nova.objects.instance [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:25:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/261346693' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.274 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:08 compute-0 sshd-session[359390]: Connection closed by invalid user git 209.38.206.249 port 53616 [preauth]
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.310 239969 DEBUG nova.storage.rbd_utils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image c7429f68-4b47-47a9-86bb-8ffec79926a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:08 compute-0 kernel: tap21b1dcb8-6a: entered promiscuous mode
Jan 26 16:25:08 compute-0 NetworkManager[48954]: <info>  [1769444708.3222] manager: (tap21b1dcb8-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/583)
Jan 26 16:25:08 compute-0 ovn_controller[146046]: 2026-01-26T16:25:08Z|01433|binding|INFO|Claiming lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for this chassis.
Jan 26 16:25:08 compute-0 ovn_controller[146046]: 2026-01-26T16:25:08Z|01434|binding|INFO|21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e: Claiming fa:16:3e:a4:09:94 10.100.0.5
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.329 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:08.335 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:09:94 10.100.0.5'], port_security=['fa:16:3e:a4:09:94 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dda6da0-8bab-4507-ad24-be4d098fa548', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '576e8719bc524933b1f15843689e97b1', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'c087843f-681f-4fe3-bdde-29cea08e250a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e164d7e9-1b1c-48fd-b9d0-dfbf1c68746b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:08.336 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e in datapath 5dda6da0-8bab-4507-ad24-be4d098fa548 bound to our chassis
Jan 26 16:25:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:08.337 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5dda6da0-8bab-4507-ad24-be4d098fa548 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:25:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:08.339 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d00bf18a-bcd4-4a50-850f-fad3a7dbaa99]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:08 compute-0 ovn_controller[146046]: 2026-01-26T16:25:08Z|01435|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e ovn-installed in OVS
Jan 26 16:25:08 compute-0 ovn_controller[146046]: 2026-01-26T16:25:08Z|01436|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e up in Southbound
Jan 26 16:25:08 compute-0 systemd-machined[208061]: New machine qemu-164-instance-00000087.
Jan 26 16:25:08 compute-0 systemd-udevd[359446]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.376 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:08 compute-0 NetworkManager[48954]: <info>  [1769444708.3815] device (tap21b1dcb8-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:25:08 compute-0 NetworkManager[48954]: <info>  [1769444708.3822] device (tap21b1dcb8-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:25:08 compute-0 systemd[1]: Started Virtual Machine qemu-164-instance-00000087.
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.456 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/261346693' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 180 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 871 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 26 16:25:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:25:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2980606487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.749 239969 DEBUG oslo_concurrency.processutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:08 compute-0 sshd-session[359448]: Invalid user teste from 209.38.206.249 port 35240
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.755 239969 DEBUG nova.compute.provider_tree [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.771 239969 DEBUG nova.scheduler.client.report [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.800 239969 DEBUG oslo_concurrency.lockutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.834 239969 INFO nova.scheduler.client.report [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Deleted allocations for instance 1612372a-bdd8-4900-82cf-902335d93c41
Jan 26 16:25:08 compute-0 sshd-session[359448]: Connection closed by invalid user teste 209.38.206.249 port 35240 [preauth]
Jan 26 16:25:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:25:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3351835326' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.905 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.906 239969 DEBUG nova.virt.libvirt.vif [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1990900755',display_name='tempest-TestNetworkBasicOps-server-1990900755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1990900755',id=136,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxSUUzaFJtbDZTHYwmKXkMJsD/pEwWXqwpfvtUyMpbcPr/mwfX8X/SnoGVR/eP/UFIy9ixgO1xkGpKMWDvYFVaMqGPWY4eSR9XgOGlglnx04WpIqNxQ/U0bgmYXHL8lZA==',key_name='tempest-TestNetworkBasicOps-983093039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-h0tbqhp1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:25:02Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=c7429f68-4b47-47a9-86bb-8ffec79926a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.906 239969 DEBUG nova.network.os_vif_util [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.907 239969 DEBUG nova.network.os_vif_util [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.908 239969 DEBUG nova.objects.instance [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid c7429f68-4b47-47a9-86bb-8ffec79926a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.912 239969 DEBUG oslo_concurrency.lockutils [None req-f6f6bc52-3c57-4033-a4e0-1af19c28520e ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "1612372a-bdd8-4900-82cf-902335d93c41" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.921 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <uuid>c7429f68-4b47-47a9-86bb-8ffec79926a6</uuid>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <name>instance-00000088</name>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-1990900755</nova:name>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:25:07</nova:creationTime>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <nova:port uuid="07819469-3bd7-4885-9144-3eceda4eadcd">
Jan 26 16:25:08 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <system>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <entry name="serial">c7429f68-4b47-47a9-86bb-8ffec79926a6</entry>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <entry name="uuid">c7429f68-4b47-47a9-86bb-8ffec79926a6</entry>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     </system>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <os>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   </os>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <features>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   </features>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/c7429f68-4b47-47a9-86bb-8ffec79926a6_disk">
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       </source>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/c7429f68-4b47-47a9-86bb-8ffec79926a6_disk.config">
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       </source>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:25:08 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:b4:79:ac"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <target dev="tap07819469-3b"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6/console.log" append="off"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <video>
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     </video>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:25:08 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:25:08 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:25:08 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:25:08 compute-0 nova_compute[239965]: </domain>
Jan 26 16:25:08 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.922 239969 DEBUG nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Preparing to wait for external event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.922 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.922 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.923 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.923 239969 DEBUG nova.virt.libvirt.vif [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1990900755',display_name='tempest-TestNetworkBasicOps-server-1990900755',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1990900755',id=136,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxSUUzaFJtbDZTHYwmKXkMJsD/pEwWXqwpfvtUyMpbcPr/mwfX8X/SnoGVR/eP/UFIy9ixgO1xkGpKMWDvYFVaMqGPWY4eSR9XgOGlglnx04WpIqNxQ/U0bgmYXHL8lZA==',key_name='tempest-TestNetworkBasicOps-983093039',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-h0tbqhp1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:25:02Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=c7429f68-4b47-47a9-86bb-8ffec79926a6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.924 239969 DEBUG nova.network.os_vif_util [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.924 239969 DEBUG nova.network.os_vif_util [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.924 239969 DEBUG os_vif [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.925 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.925 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.926 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.928 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.929 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07819469-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.929 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07819469-3b, col_values=(('external_ids', {'iface-id': '07819469-3bd7-4885-9144-3eceda4eadcd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:79:ac', 'vm-uuid': 'c7429f68-4b47-47a9-86bb-8ffec79926a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.930 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:08 compute-0 NetworkManager[48954]: <info>  [1769444708.9318] manager: (tap07819469-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/584)
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.936 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.937 239969 INFO os_vif [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b')
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.987 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.988 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.988 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:b4:79:ac, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:25:08 compute-0 nova_compute[239965]: 2026-01-26 16:25:08.988 239969 INFO nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Using config drive
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.010 239969 DEBUG nova.storage.rbd_utils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image c7429f68-4b47-47a9-86bb-8ffec79926a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.017 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.017 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444709.013443, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.017 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Started (Lifecycle Event)
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.033 239969 DEBUG nova.compute.manager [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.034 239969 DEBUG nova.objects.instance [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.043 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.047 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.051 239969 INFO nova.virt.libvirt.driver [-] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Instance running successfully.
Jan 26 16:25:09 compute-0 virtqemud[240263]: argument unsupported: QEMU guest agent is not configured
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.055 239969 DEBUG nova.virt.libvirt.guest [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.055 239969 DEBUG nova.compute.manager [None req-88a7c511-dbb7-4316-ba3f-da7b50da8051 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.079 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.080 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444709.0160067, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.080 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Resumed (Lifecycle Event)
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.103 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.107 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:09 compute-0 ceph-mon[75140]: pgmap v2233: 305 pgs: 305 active+clean; 180 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 871 KiB/s rd, 1.8 MiB/s wr, 88 op/s
Jan 26 16:25:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2980606487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3351835326' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.809 239969 INFO nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Creating config drive at /var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6/disk.config
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.818 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_qq72kk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:09 compute-0 nova_compute[239965]: 2026-01-26 16:25:09.982 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9_qq72kk" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.024 239969 DEBUG nova.storage.rbd_utils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image c7429f68-4b47-47a9-86bb-8ffec79926a6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.029 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6/disk.config c7429f68-4b47-47a9-86bb-8ffec79926a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.350 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.370 239969 DEBUG nova.compute.manager [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.370 239969 DEBUG oslo_concurrency.lockutils [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.371 239969 DEBUG oslo_concurrency.lockutils [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.371 239969 DEBUG oslo_concurrency.lockutils [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.372 239969 DEBUG nova.compute.manager [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.372 239969 WARNING nova.compute.manager [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state active and task_state None.
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.373 239969 DEBUG nova.compute.manager [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.373 239969 DEBUG oslo_concurrency.lockutils [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.373 239969 DEBUG oslo_concurrency.lockutils [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.374 239969 DEBUG oslo_concurrency.lockutils [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.374 239969 DEBUG nova.compute.manager [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.374 239969 WARNING nova.compute.manager [req-9967fbb3-c2b6-4c3a-b514-a662ff52c300 req-88af2995-d1bc-4641-a4ed-6f645e656d7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state active and task_state None.
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.424 239969 DEBUG oslo_concurrency.processutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6/disk.config c7429f68-4b47-47a9-86bb-8ffec79926a6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.424 239969 INFO nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Deleting local config drive /var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6/disk.config because it was imported into RBD.
Jan 26 16:25:10 compute-0 kernel: tap07819469-3b: entered promiscuous mode
Jan 26 16:25:10 compute-0 NetworkManager[48954]: <info>  [1769444710.4696] manager: (tap07819469-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/585)
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.469 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:10 compute-0 ovn_controller[146046]: 2026-01-26T16:25:10Z|01437|binding|INFO|Claiming lport 07819469-3bd7-4885-9144-3eceda4eadcd for this chassis.
Jan 26 16:25:10 compute-0 ovn_controller[146046]: 2026-01-26T16:25:10Z|01438|binding|INFO|07819469-3bd7-4885-9144-3eceda4eadcd: Claiming fa:16:3e:b4:79:ac 10.100.0.8
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.483 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:79:ac 10.100.0.8'], port_security=['fa:16:3e:b4:79:ac 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c7429f68-4b47-47a9-86bb-8ffec79926a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '7', 'neutron:security_group_ids': '039287cb-946e-4ce0-ad47-a6333988fa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c96c5555-0193-4c98-ba58-f33ef27ed1fc, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=07819469-3bd7-4885-9144-3eceda4eadcd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.485 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 07819469-3bd7-4885-9144-3eceda4eadcd in datapath f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 bound to our chassis
Jan 26 16:25:10 compute-0 ovn_controller[146046]: 2026-01-26T16:25:10Z|01439|binding|INFO|Setting lport 07819469-3bd7-4885-9144-3eceda4eadcd ovn-installed in OVS
Jan 26 16:25:10 compute-0 ovn_controller[146046]: 2026-01-26T16:25:10Z|01440|binding|INFO|Setting lport 07819469-3bd7-4885-9144-3eceda4eadcd up in Southbound
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.492 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f62bb3f6-b715-4244-9fb7-5f2b6ae70b93
Jan 26 16:25:10 compute-0 NetworkManager[48954]: <info>  [1769444710.4962] device (tap07819469-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:25:10 compute-0 NetworkManager[48954]: <info>  [1769444710.5009] device (tap07819469-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.509 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dd8883bf-effc-4ea9-a014-241623f19bf9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.510 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf62bb3f6-b1 in ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.514 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf62bb3f6-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.514 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[38b7d186-9ccd-4555-9b50-4a277aff109b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.515 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1ed79bf9-0c63-4035-adf3-06dacd7005c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 systemd-machined[208061]: New machine qemu-165-instance-00000088.
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.525 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[4aab169d-6767-4b58-bdb8-e7ee7fd5be8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.541 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[21b5df01-051b-4db6-b3f2-96b7f1b4db96]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 systemd[1]: Started Virtual Machine qemu-165-instance-00000088.
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.573 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f631fb78-c1ba-442c-b415-e789075dadb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.578 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e39e5a51-e055-4fee-9b6d-070a41fb2c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 NetworkManager[48954]: <info>  [1769444710.5800] manager: (tapf62bb3f6-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/586)
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.613 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cd8fa20f-f59a-4f7e-8ca9-964afeead2ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.616 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[44f185df-019e-4fc0-af9a-f3e19186ccd8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 180 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Jan 26 16:25:10 compute-0 NetworkManager[48954]: <info>  [1769444710.6367] device (tapf62bb3f6-b0): carrier: link connected
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.642 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e114e507-2fac-40c3-bbe9-7b1ba1fb472b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.662 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[82648bd6-5c40-46ec-bfff-a50dcaaae044]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf62bb3f6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:71:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620144, 'reachable_time': 32058, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359626, 'error': None, 'target': 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.676 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a89eb8a1-e619-4f3d-9e6c-413707fc17d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe41:7115'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 620144, 'tstamp': 620144}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 359627, 'error': None, 'target': 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.699 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7786589d-a253-40ab-aa80-e767bce9b75e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf62bb3f6-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:41:71:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620144, 'reachable_time': 32058, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 359628, 'error': None, 'target': 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ceph-mon[75140]: pgmap v2234: 305 pgs: 305 active+clean; 180 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.736 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d25235c2-bf6c-4806-94ae-a384cd4df246]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.798 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4abb2cc4-2409-4342-8cfd-3b0de600764b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.800 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf62bb3f6-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.800 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.801 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf62bb3f6-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.802 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:10 compute-0 NetworkManager[48954]: <info>  [1769444710.8033] manager: (tapf62bb3f6-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/587)
Jan 26 16:25:10 compute-0 kernel: tapf62bb3f6-b0: entered promiscuous mode
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.805 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf62bb3f6-b0, col_values=(('external_ids', {'iface-id': '6ff4e0de-8425-4a37-9c4d-ac8157b0fa40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:10 compute-0 ovn_controller[146046]: 2026-01-26T16:25:10Z|01441|binding|INFO|Releasing lport 6ff4e0de-8425-4a37-9c4d-ac8157b0fa40 from this chassis (sb_readonly=0)
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.827 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.827 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f62bb3f6-b715-4244-9fb7-5f2b6ae70b93.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f62bb3f6-b715-4244-9fb7-5f2b6ae70b93.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.828 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[08f37d72-6809-41b9-9a68-dd832dce1410]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.829 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/f62bb3f6-b715-4244-9fb7-5f2b6ae70b93.pid.haproxy
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID f62bb3f6-b715-4244-9fb7-5f2b6ae70b93
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:25:10 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:10.830 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'env', 'PROCESS_TAG=haproxy-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f62bb3f6-b715-4244-9fb7-5f2b6ae70b93.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.894 239969 DEBUG nova.network.neutron [req-3d611739-390c-4579-ab46-276df20c291c req-f8a79758-eaae-4015-afd0-a25890b14f69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Updated VIF entry in instance network info cache for port 07819469-3bd7-4885-9144-3eceda4eadcd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.895 239969 DEBUG nova.network.neutron [req-3d611739-390c-4579-ab46-276df20c291c req-f8a79758-eaae-4015-afd0-a25890b14f69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Updating instance_info_cache with network_info: [{"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.912 239969 DEBUG oslo_concurrency.lockutils [req-3d611739-390c-4579-ab46-276df20c291c req-f8a79758-eaae-4015-afd0-a25890b14f69 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-c7429f68-4b47-47a9-86bb-8ffec79926a6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.964 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444710.9639027, c7429f68-4b47-47a9-86bb-8ffec79926a6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.965 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] VM Started (Lifecycle Event)
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.987 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.990 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444710.965066, c7429f68-4b47-47a9-86bb-8ffec79926a6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:10 compute-0 nova_compute[239965]: 2026-01-26 16:25:10.990 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] VM Paused (Lifecycle Event)
Jan 26 16:25:11 compute-0 nova_compute[239965]: 2026-01-26 16:25:11.031 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:11 compute-0 nova_compute[239965]: 2026-01-26 16:25:11.034 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:11 compute-0 nova_compute[239965]: 2026-01-26 16:25:11.066 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:25:11 compute-0 podman[359702]: 2026-01-26 16:25:11.258075317 +0000 UTC m=+0.072483616 container create 86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:25:11 compute-0 systemd[1]: Started libpod-conmon-86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63.scope.
Jan 26 16:25:11 compute-0 podman[359702]: 2026-01-26 16:25:11.217378821 +0000 UTC m=+0.031787210 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:25:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d4fc255534ec84707122258baf2eed9a3ac413b3fe3eacbb1a46343fd60727e2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:11 compute-0 podman[359702]: 2026-01-26 16:25:11.35496235 +0000 UTC m=+0.169370689 container init 86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:25:11 compute-0 podman[359702]: 2026-01-26 16:25:11.36476562 +0000 UTC m=+0.179173929 container start 86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:25:11 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[359718]: [NOTICE]   (359722) : New worker (359724) forked
Jan 26 16:25:11 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[359718]: [NOTICE]   (359722) : Loading success.
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.110 239969 DEBUG oslo_concurrency.lockutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.111 239969 DEBUG oslo_concurrency.lockutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.111 239969 DEBUG oslo_concurrency.lockutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.111 239969 DEBUG oslo_concurrency.lockutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.112 239969 DEBUG oslo_concurrency.lockutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.113 239969 INFO nova.compute.manager [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Terminating instance
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.114 239969 DEBUG nova.compute.manager [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:25:12 compute-0 kernel: tap21b1dcb8-6a (unregistering): left promiscuous mode
Jan 26 16:25:12 compute-0 NetworkManager[48954]: <info>  [1769444712.1531] device (tap21b1dcb8-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01442|binding|INFO|Releasing lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e from this chassis (sb_readonly=0)
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.163 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01443|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e down in Southbound
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01444|binding|INFO|Removing iface tap21b1dcb8-6a ovn-installed in OVS
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.166 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.173 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:09:94 10.100.0.5'], port_security=['fa:16:3e:a4:09:94 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dda6da0-8bab-4507-ad24-be4d098fa548', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '576e8719bc524933b1f15843689e97b1', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c087843f-681f-4fe3-bdde-29cea08e250a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e164d7e9-1b1c-48fd-b9d0-dfbf1c68746b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.174 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e in datapath 5dda6da0-8bab-4507-ad24-be4d098fa548 unbound from our chassis
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.175 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5dda6da0-8bab-4507-ad24-be4d098fa548 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.177 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[644bc6c8-a9a2-4ec2-ad70-1d8cf9813e42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.198 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000087.scope: Deactivated successfully.
Jan 26 16:25:12 compute-0 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000087.scope: Consumed 3.645s CPU time.
Jan 26 16:25:12 compute-0 systemd-machined[208061]: Machine qemu-164-instance-00000087 terminated.
Jan 26 16:25:12 compute-0 kernel: tap21b1dcb8-6a: entered promiscuous mode
Jan 26 16:25:12 compute-0 kernel: tap21b1dcb8-6a (unregistering): left promiscuous mode
Jan 26 16:25:12 compute-0 NetworkManager[48954]: <info>  [1769444712.3348] manager: (tap21b1dcb8-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/588)
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.336 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01445|binding|INFO|Claiming lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for this chassis.
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01446|binding|INFO|21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e: Claiming fa:16:3e:a4:09:94 10.100.0.5
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.351 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:09:94 10.100.0.5'], port_security=['fa:16:3e:a4:09:94 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dda6da0-8bab-4507-ad24-be4d098fa548', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '576e8719bc524933b1f15843689e97b1', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c087843f-681f-4fe3-bdde-29cea08e250a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e164d7e9-1b1c-48fd-b9d0-dfbf1c68746b, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.352 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.354 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e in datapath 5dda6da0-8bab-4507-ad24-be4d098fa548 bound to our chassis
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.356 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01447|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e ovn-installed in OVS
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01448|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e up in Southbound
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01449|binding|INFO|Releasing lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e from this chassis (sb_readonly=1)
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01450|if_status|INFO|Dropped 2 log messages in last 63 seconds (most recently, 63 seconds ago) due to excessive rate
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.357 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5dda6da0-8bab-4507-ad24-be4d098fa548 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01451|if_status|INFO|Not setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e down as sb is readonly
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01452|binding|INFO|Removing iface tap21b1dcb8-6a ovn-installed in OVS
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.358 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01453|binding|INFO|Releasing lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e from this chassis (sb_readonly=0)
Jan 26 16:25:12 compute-0 ovn_controller[146046]: 2026-01-26T16:25:12Z|01454|binding|INFO|Setting lport 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e down in Southbound
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.357 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d25f7b05-a62e-4b74-a19a-2b48b21171d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.359 239969 INFO nova.virt.libvirt.driver [-] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Instance destroyed successfully.
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.361 239969 DEBUG nova.objects.instance [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lazy-loading 'resources' on Instance uuid 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.366 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:09:94 10.100.0.5'], port_security=['fa:16:3e:a4:09:94 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dda6da0-8bab-4507-ad24-be4d098fa548', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '576e8719bc524933b1f15843689e97b1', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c087843f-681f-4fe3-bdde-29cea08e250a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e164d7e9-1b1c-48fd-b9d0-dfbf1c68746b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.367 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e in datapath 5dda6da0-8bab-4507-ad24-be4d098fa548 unbound from our chassis
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.369 156105 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 5dda6da0-8bab-4507-ad24-be4d098fa548 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Jan 26 16:25:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:12.370 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[88ae29fa-989c-4240-83e7-ea84df40440d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.372 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.375 239969 DEBUG nova.virt.libvirt.vif [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:24:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1338239947',display_name='tempest-TestServerAdvancedOps-server-1338239947',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1338239947',id=135,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:24:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='576e8719bc524933b1f15843689e97b1',ramdisk_id='',reservation_id='r-7g6d9pwb',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1211125831',owner_user_name='tempest-TestServerAdvancedOps-1211125831-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:25:09Z,user_data=None,user_id='7746910a4ce44ee4956e77cfac123ca8',uuid=0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.376 239969 DEBUG nova.network.os_vif_util [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converting VIF {"id": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "address": "fa:16:3e:a4:09:94", "network": {"id": "5dda6da0-8bab-4507-ad24-be4d098fa548", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-772891590-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "576e8719bc524933b1f15843689e97b1", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap21b1dcb8-6a", "ovs_interfaceid": "21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.377 239969 DEBUG nova.network.os_vif_util [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.377 239969 DEBUG os_vif [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.379 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.379 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap21b1dcb8-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.384 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.386 239969 INFO os_vif [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:09:94,bridge_name='br-int',has_traffic_filtering=True,id=21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e,network=Network(5dda6da0-8bab-4507-ad24-be4d098fa548),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap21b1dcb8-6a')
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.526 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.599 239969 INFO nova.virt.libvirt.driver [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Deleting instance files /var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_del
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.599 239969 INFO nova.virt.libvirt.driver [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Deletion of /var/lib/nova/instances/0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428_del complete
Jan 26 16:25:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 174 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.646 239969 INFO nova.compute.manager [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Took 0.53 seconds to destroy the instance on the hypervisor.
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.647 239969 DEBUG oslo.service.loopingcall [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.647 239969 DEBUG nova.compute.manager [-] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:25:12 compute-0 nova_compute[239965]: 2026-01-26 16:25:12.647 239969 DEBUG nova.network.neutron [-] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:25:12 compute-0 ceph-mon[75140]: pgmap v2235: 305 pgs: 305 active+clean; 174 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 1.8 MiB/s wr, 76 op/s
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.119 239969 DEBUG nova.compute.manager [req-3a88c29c-9f66-4fce-9131-203b8f1c9a90 req-cb84aa5f-3a3a-4294-a834-a735a27ab3b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-unplugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.120 239969 DEBUG oslo_concurrency.lockutils [req-3a88c29c-9f66-4fce-9131-203b8f1c9a90 req-cb84aa5f-3a3a-4294-a834-a735a27ab3b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.121 239969 DEBUG oslo_concurrency.lockutils [req-3a88c29c-9f66-4fce-9131-203b8f1c9a90 req-cb84aa5f-3a3a-4294-a834-a735a27ab3b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.121 239969 DEBUG oslo_concurrency.lockutils [req-3a88c29c-9f66-4fce-9131-203b8f1c9a90 req-cb84aa5f-3a3a-4294-a834-a735a27ab3b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.122 239969 DEBUG nova.compute.manager [req-3a88c29c-9f66-4fce-9131-203b8f1c9a90 req-cb84aa5f-3a3a-4294-a834-a735a27ab3b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-unplugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.122 239969 DEBUG nova.compute.manager [req-3a88c29c-9f66-4fce-9131-203b8f1c9a90 req-cb84aa5f-3a3a-4294-a834-a735a27ab3b7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-unplugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.457 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.840 239969 DEBUG nova.compute.manager [req-4ed83417-2114-4964-acd6-41038dc298e8 req-a71d8ac0-5d80-4d6e-a700-dc27fd6f69d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Received event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.841 239969 DEBUG oslo_concurrency.lockutils [req-4ed83417-2114-4964-acd6-41038dc298e8 req-a71d8ac0-5d80-4d6e-a700-dc27fd6f69d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.841 239969 DEBUG oslo_concurrency.lockutils [req-4ed83417-2114-4964-acd6-41038dc298e8 req-a71d8ac0-5d80-4d6e-a700-dc27fd6f69d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.841 239969 DEBUG oslo_concurrency.lockutils [req-4ed83417-2114-4964-acd6-41038dc298e8 req-a71d8ac0-5d80-4d6e-a700-dc27fd6f69d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.841 239969 DEBUG nova.compute.manager [req-4ed83417-2114-4964-acd6-41038dc298e8 req-a71d8ac0-5d80-4d6e-a700-dc27fd6f69d2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Processing event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.842 239969 DEBUG nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.846 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444713.8466399, c7429f68-4b47-47a9-86bb-8ffec79926a6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.847 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] VM Resumed (Lifecycle Event)
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.850 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.862 239969 INFO nova.virt.libvirt.driver [-] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Instance spawned successfully.
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.862 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.873 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.878 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.908 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.918 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.919 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.919 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.920 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.921 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:13 compute-0 nova_compute[239965]: 2026-01-26 16:25:13.921 239969 DEBUG nova.virt.libvirt.driver [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.007 239969 INFO nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Took 10.96 seconds to spawn the instance on the hypervisor.
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.007 239969 DEBUG nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.121 239969 DEBUG nova.network.neutron [-] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.274 239969 INFO nova.compute.manager [-] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Took 1.63 seconds to deallocate network for instance.
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.294 239969 INFO nova.compute.manager [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Took 12.95 seconds to build instance.
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.329 239969 DEBUG oslo_concurrency.lockutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.329 239969 DEBUG oslo_concurrency.lockutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.331 239969 DEBUG oslo_concurrency.lockutils [None req-5ec67689-084e-49f5-9c59-fc31d737a3e0 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.419 239969 DEBUG oslo_concurrency.processutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:14 compute-0 nova_compute[239965]: 2026-01-26 16:25:14.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:25:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 174 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 26 16:25:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:25:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4101174989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.080 239969 DEBUG oslo_concurrency.processutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.661s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.090 239969 DEBUG nova.compute.provider_tree [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.106 239969 DEBUG nova.scheduler.client.report [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.128 239969 DEBUG oslo_concurrency.lockutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.799s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.162 239969 INFO nova.scheduler.client.report [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Deleted allocations for instance 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.242 239969 DEBUG oslo_concurrency.lockutils [None req-ff164954-566e-4990-95e6-ec1cab760f81 7746910a4ce44ee4956e77cfac123ca8 576e8719bc524933b1f15843689e97b1 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.283 239969 DEBUG nova.compute.manager [req-09e88e05-866b-43a2-8089-43beb9813017 req-1ade278c-e82f-419c-bc89-216a9c9d5d6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.284 239969 DEBUG oslo_concurrency.lockutils [req-09e88e05-866b-43a2-8089-43beb9813017 req-1ade278c-e82f-419c-bc89-216a9c9d5d6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.284 239969 DEBUG oslo_concurrency.lockutils [req-09e88e05-866b-43a2-8089-43beb9813017 req-1ade278c-e82f-419c-bc89-216a9c9d5d6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.285 239969 DEBUG oslo_concurrency.lockutils [req-09e88e05-866b-43a2-8089-43beb9813017 req-1ade278c-e82f-419c-bc89-216a9c9d5d6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.285 239969 DEBUG nova.compute.manager [req-09e88e05-866b-43a2-8089-43beb9813017 req-1ade278c-e82f-419c-bc89-216a9c9d5d6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] No waiting events found dispatching network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.286 239969 WARNING nova.compute.manager [req-09e88e05-866b-43a2-8089-43beb9813017 req-1ade278c-e82f-419c-bc89-216a9c9d5d6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received unexpected event network-vif-plugged-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e for instance with vm_state deleted and task_state None.
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.286 239969 DEBUG nova.compute.manager [req-09e88e05-866b-43a2-8089-43beb9813017 req-1ade278c-e82f-419c-bc89-216a9c9d5d6c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Received event network-vif-deleted-21b1dcb8-6aa2-4cb5-a2ef-f13c4052267e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:15 compute-0 ceph-mon[75140]: pgmap v2236: 305 pgs: 305 active+clean; 174 MiB data, 989 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.932 239969 DEBUG nova.compute.manager [req-1c8645c2-01f9-4703-afe6-641961d9f289 req-88b3c242-e195-498e-adec-98c15338627b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Received event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.933 239969 DEBUG oslo_concurrency.lockutils [req-1c8645c2-01f9-4703-afe6-641961d9f289 req-88b3c242-e195-498e-adec-98c15338627b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.933 239969 DEBUG oslo_concurrency.lockutils [req-1c8645c2-01f9-4703-afe6-641961d9f289 req-88b3c242-e195-498e-adec-98c15338627b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.933 239969 DEBUG oslo_concurrency.lockutils [req-1c8645c2-01f9-4703-afe6-641961d9f289 req-88b3c242-e195-498e-adec-98c15338627b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.933 239969 DEBUG nova.compute.manager [req-1c8645c2-01f9-4703-afe6-641961d9f289 req-88b3c242-e195-498e-adec-98c15338627b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] No waiting events found dispatching network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:15 compute-0 nova_compute[239965]: 2026-01-26 16:25:15.933 239969 WARNING nova.compute.manager [req-1c8645c2-01f9-4703-afe6-641961d9f289 req-88b3c242-e195-498e-adec-98c15338627b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Received unexpected event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd for instance with vm_state active and task_state None.
Jan 26 16:25:16 compute-0 ovn_controller[146046]: 2026-01-26T16:25:16Z|01455|binding|INFO|Releasing lport 6ff4e0de-8425-4a37-9c4d-ac8157b0fa40 from this chassis (sb_readonly=0)
Jan 26 16:25:16 compute-0 nova_compute[239965]: 2026-01-26 16:25:16.188 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4101174989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 155 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:17 compute-0 ceph-mon[75140]: pgmap v2237: 305 pgs: 305 active+clean; 155 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 424 KiB/s rd, 1.8 MiB/s wr, 93 op/s
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.684 239969 DEBUG oslo_concurrency.lockutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "c7429f68-4b47-47a9-86bb-8ffec79926a6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.684 239969 DEBUG oslo_concurrency.lockutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.684 239969 DEBUG oslo_concurrency.lockutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.685 239969 DEBUG oslo_concurrency.lockutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.685 239969 DEBUG oslo_concurrency.lockutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.686 239969 INFO nova.compute.manager [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Terminating instance
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.687 239969 DEBUG nova.compute.manager [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:25:17 compute-0 kernel: tap07819469-3b (unregistering): left promiscuous mode
Jan 26 16:25:17 compute-0 NetworkManager[48954]: <info>  [1769444717.7330] device (tap07819469-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:17 compute-0 ovn_controller[146046]: 2026-01-26T16:25:17Z|01456|binding|INFO|Releasing lport 07819469-3bd7-4885-9144-3eceda4eadcd from this chassis (sb_readonly=0)
Jan 26 16:25:17 compute-0 ovn_controller[146046]: 2026-01-26T16:25:17Z|01457|binding|INFO|Setting lport 07819469-3bd7-4885-9144-3eceda4eadcd down in Southbound
Jan 26 16:25:17 compute-0 ovn_controller[146046]: 2026-01-26T16:25:17Z|01458|binding|INFO|Removing iface tap07819469-3b ovn-installed in OVS
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:17.747 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:79:ac 10.100.0.8'], port_security=['fa:16:3e:b4:79:ac 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'c7429f68-4b47-47a9-86bb-8ffec79926a6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-648668074', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '9', 'neutron:security_group_ids': '039287cb-946e-4ce0-ad47-a6333988fa03', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.225', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c96c5555-0193-4c98-ba58-f33ef27ed1fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=07819469-3bd7-4885-9144-3eceda4eadcd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:17.749 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 07819469-3bd7-4885-9144-3eceda4eadcd in datapath f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 unbound from our chassis
Jan 26 16:25:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:17.750 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:25:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:17.751 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[602d4844-5cb4-4400-aa14-02b0ab634e78]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:17.752 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 namespace which is not needed anymore
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.765 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:17 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000088.scope: Deactivated successfully.
Jan 26 16:25:17 compute-0 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000088.scope: Consumed 4.413s CPU time.
Jan 26 16:25:17 compute-0 systemd-machined[208061]: Machine qemu-165-instance-00000088 terminated.
Jan 26 16:25:17 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[359718]: [NOTICE]   (359722) : haproxy version is 2.8.14-c23fe91
Jan 26 16:25:17 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[359718]: [NOTICE]   (359722) : path to executable is /usr/sbin/haproxy
Jan 26 16:25:17 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[359718]: [WARNING]  (359722) : Exiting Master process...
Jan 26 16:25:17 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[359718]: [WARNING]  (359722) : Exiting Master process...
Jan 26 16:25:17 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[359718]: [ALERT]    (359722) : Current worker (359724) exited with code 143 (Terminated)
Jan 26 16:25:17 compute-0 neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93[359718]: [WARNING]  (359722) : All workers exited. Exiting... (0)
Jan 26 16:25:17 compute-0 systemd[1]: libpod-86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63.scope: Deactivated successfully.
Jan 26 16:25:17 compute-0 podman[359811]: 2026-01-26 16:25:17.893843706 +0000 UTC m=+0.046931591 container died 86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.924 239969 INFO nova.virt.libvirt.driver [-] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Instance destroyed successfully.
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.925 239969 DEBUG nova.objects.instance [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid c7429f68-4b47-47a9-86bb-8ffec79926a6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.939 239969 DEBUG nova.virt.libvirt.vif [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:24:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1990900755',display_name='tempest-TestNetworkBasicOps-server-1990900755',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1990900755',id=136,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMxSUUzaFJtbDZTHYwmKXkMJsD/pEwWXqwpfvtUyMpbcPr/mwfX8X/SnoGVR/eP/UFIy9ixgO1xkGpKMWDvYFVaMqGPWY4eSR9XgOGlglnx04WpIqNxQ/U0bgmYXHL8lZA==',key_name='tempest-TestNetworkBasicOps-983093039',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:25:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-h0tbqhp1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:25:14Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=c7429f68-4b47-47a9-86bb-8ffec79926a6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.940 239969 DEBUG nova.network.os_vif_util [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "07819469-3bd7-4885-9144-3eceda4eadcd", "address": "fa:16:3e:b4:79:ac", "network": {"id": "f62bb3f6-b715-4244-9fb7-5f2b6ae70b93", "bridge": "br-int", "label": "tempest-network-smoke--1685887527", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.225", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07819469-3b", "ovs_interfaceid": "07819469-3bd7-4885-9144-3eceda4eadcd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.940 239969 DEBUG nova.network.os_vif_util [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.941 239969 DEBUG os_vif [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.943 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07819469-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.944 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.947 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:25:17 compute-0 nova_compute[239965]: 2026-01-26 16:25:17.950 239969 INFO os_vif [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:79:ac,bridge_name='br-int',has_traffic_filtering=True,id=07819469-3bd7-4885-9144-3eceda4eadcd,network=Network(f62bb3f6-b715-4244-9fb7-5f2b6ae70b93),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap07819469-3b')
Jan 26 16:25:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63-userdata-shm.mount: Deactivated successfully.
Jan 26 16:25:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d4fc255534ec84707122258baf2eed9a3ac413b3fe3eacbb1a46343fd60727e2-merged.mount: Deactivated successfully.
Jan 26 16:25:17 compute-0 podman[359811]: 2026-01-26 16:25:17.970513244 +0000 UTC m=+0.123601109 container cleanup 86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:25:17 compute-0 systemd[1]: libpod-conmon-86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63.scope: Deactivated successfully.
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.044 239969 DEBUG nova.compute.manager [req-8113059a-bd15-45bd-bb66-aee5c72cd1e0 req-0d8a52d4-11e2-47e1-984c-b50a2a21631d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Received event network-vif-unplugged-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.044 239969 DEBUG oslo_concurrency.lockutils [req-8113059a-bd15-45bd-bb66-aee5c72cd1e0 req-0d8a52d4-11e2-47e1-984c-b50a2a21631d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.045 239969 DEBUG oslo_concurrency.lockutils [req-8113059a-bd15-45bd-bb66-aee5c72cd1e0 req-0d8a52d4-11e2-47e1-984c-b50a2a21631d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.045 239969 DEBUG oslo_concurrency.lockutils [req-8113059a-bd15-45bd-bb66-aee5c72cd1e0 req-0d8a52d4-11e2-47e1-984c-b50a2a21631d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.046 239969 DEBUG nova.compute.manager [req-8113059a-bd15-45bd-bb66-aee5c72cd1e0 req-0d8a52d4-11e2-47e1-984c-b50a2a21631d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] No waiting events found dispatching network-vif-unplugged-07819469-3bd7-4885-9144-3eceda4eadcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.046 239969 DEBUG nova.compute.manager [req-8113059a-bd15-45bd-bb66-aee5c72cd1e0 req-0d8a52d4-11e2-47e1-984c-b50a2a21631d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Received event network-vif-unplugged-07819469-3bd7-4885-9144-3eceda4eadcd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:25:18 compute-0 podman[359867]: 2026-01-26 16:25:18.04793832 +0000 UTC m=+0.057679384 container remove 86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:25:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:18.055 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[68029eb1-ee91-42ae-9728-f85447ec23e4]: (4, ('Mon Jan 26 04:25:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 (86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63)\n86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63\nMon Jan 26 04:25:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 (86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63)\n86723b138e839aa450ddbfcb9a81e0d12eb3b49a598297808d9649fc220f8f63\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:18.057 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fc27c1bd-f5ea-476d-af86-8bdc31499d2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:18.058 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf62bb3f6-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:18 compute-0 kernel: tapf62bb3f6-b0: left promiscuous mode
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.074 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:18.077 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[93ac84fa-2208-46b7-85fc-94d4cf6c8f66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:18.091 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[55f686b1-0034-4940-92a4-35a78fa35b72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:18.092 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[105c45ac-faf3-4a2f-b6ac-595b62c58d03]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:18.110 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9f51cc-4854-4312-bf1c-c496e7e51445]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 620137, 'reachable_time': 32773, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 359885, 'error': None, 'target': 'ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:18 compute-0 systemd[1]: run-netns-ovnmeta\x2df62bb3f6\x2db715\x2d4244\x2d9fb7\x2d5f2b6ae70b93.mount: Deactivated successfully.
Jan 26 16:25:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:18.115 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f62bb3f6-b715-4244-9fb7-5f2b6ae70b93 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:25:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:18.115 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ca97ade3-3a7f-4532-8040-56d860156e67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.231 239969 INFO nova.virt.libvirt.driver [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Deleting instance files /var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6_del
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.232 239969 INFO nova.virt.libvirt.driver [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Deletion of /var/lib/nova/instances/c7429f68-4b47-47a9-86bb-8ffec79926a6_del complete
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.304 239969 INFO nova.compute.manager [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Took 0.62 seconds to destroy the instance on the hypervisor.
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.305 239969 DEBUG oslo.service.loopingcall [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.305 239969 DEBUG nova.compute.manager [-] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.306 239969 DEBUG nova.network.neutron [-] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:25:18 compute-0 nova_compute[239965]: 2026-01-26 16:25:18.459 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 134 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 137 op/s
Jan 26 16:25:18 compute-0 ceph-mon[75140]: pgmap v2238: 305 pgs: 305 active+clean; 134 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 137 op/s
Jan 26 16:25:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:19 compute-0 nova_compute[239965]: 2026-01-26 16:25:19.766 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444704.764842, 1612372a-bdd8-4900-82cf-902335d93c41 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:19 compute-0 nova_compute[239965]: 2026-01-26 16:25:19.767 239969 INFO nova.compute.manager [-] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] VM Stopped (Lifecycle Event)
Jan 26 16:25:19 compute-0 nova_compute[239965]: 2026-01-26 16:25:19.794 239969 DEBUG nova.compute.manager [None req-15da6a56-9ed5-46fd-aec9-2eb4746ea281 - - - - - -] [instance: 1612372a-bdd8-4900-82cf-902335d93c41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.045 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.260 239969 DEBUG nova.compute.manager [req-89543cc9-9a73-4699-9043-c7db5875c6ab req-ac389309-ecd4-485b-b77a-13755e3f0e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Received event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.261 239969 DEBUG oslo_concurrency.lockutils [req-89543cc9-9a73-4699-9043-c7db5875c6ab req-ac389309-ecd4-485b-b77a-13755e3f0e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.261 239969 DEBUG oslo_concurrency.lockutils [req-89543cc9-9a73-4699-9043-c7db5875c6ab req-ac389309-ecd4-485b-b77a-13755e3f0e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.261 239969 DEBUG oslo_concurrency.lockutils [req-89543cc9-9a73-4699-9043-c7db5875c6ab req-ac389309-ecd4-485b-b77a-13755e3f0e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.261 239969 DEBUG nova.compute.manager [req-89543cc9-9a73-4699-9043-c7db5875c6ab req-ac389309-ecd4-485b-b77a-13755e3f0e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] No waiting events found dispatching network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.261 239969 WARNING nova.compute.manager [req-89543cc9-9a73-4699-9043-c7db5875c6ab req-ac389309-ecd4-485b-b77a-13755e3f0e37 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Received unexpected event network-vif-plugged-07819469-3bd7-4885-9144-3eceda4eadcd for instance with vm_state active and task_state deleting.
Jan 26 16:25:20 compute-0 sshd-session[359888]: Invalid user vyos from 209.38.206.249 port 35246
Jan 26 16:25:20 compute-0 sshd-session[359888]: Connection closed by invalid user vyos 209.38.206.249 port 35246 [preauth]
Jan 26 16:25:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 134 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 104 op/s
Jan 26 16:25:20 compute-0 ceph-mon[75140]: pgmap v2239: 305 pgs: 305 active+clean; 134 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 104 op/s
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.774 239969 DEBUG nova.network.neutron [-] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.802 239969 INFO nova.compute.manager [-] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Took 2.50 seconds to deallocate network for instance.
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.844 239969 DEBUG oslo_concurrency.lockutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.844 239969 DEBUG oslo_concurrency.lockutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:20 compute-0 nova_compute[239965]: 2026-01-26 16:25:20.898 239969 DEBUG oslo_concurrency.processutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:21 compute-0 sshd-session[359890]: Invalid user a from 209.38.206.249 port 41274
Jan 26 16:25:21 compute-0 sshd-session[359890]: Connection closed by invalid user a 209.38.206.249 port 41274 [preauth]
Jan 26 16:25:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:25:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3539981909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:21 compute-0 nova_compute[239965]: 2026-01-26 16:25:21.459 239969 DEBUG oslo_concurrency.processutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:21 compute-0 nova_compute[239965]: 2026-01-26 16:25:21.467 239969 DEBUG nova.compute.provider_tree [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:25:21 compute-0 nova_compute[239965]: 2026-01-26 16:25:21.490 239969 DEBUG nova.scheduler.client.report [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:25:21 compute-0 nova_compute[239965]: 2026-01-26 16:25:21.526 239969 DEBUG oslo_concurrency.lockutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:21 compute-0 nova_compute[239965]: 2026-01-26 16:25:21.553 239969 INFO nova.scheduler.client.report [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance c7429f68-4b47-47a9-86bb-8ffec79926a6
Jan 26 16:25:21 compute-0 nova_compute[239965]: 2026-01-26 16:25:21.626 239969 DEBUG oslo_concurrency.lockutils [None req-a26877fe-3b3e-4f63-9de4-f91c2798dbce e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "c7429f68-4b47-47a9-86bb-8ffec79926a6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3539981909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:22 compute-0 nova_compute[239965]: 2026-01-26 16:25:22.476 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 88 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 131 op/s
Jan 26 16:25:22 compute-0 ceph-mon[75140]: pgmap v2240: 305 pgs: 305 active+clean; 88 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 15 KiB/s wr, 131 op/s
Jan 26 16:25:22 compute-0 nova_compute[239965]: 2026-01-26 16:25:22.945 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:23 compute-0 nova_compute[239965]: 2026-01-26 16:25:23.463 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 88 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 KiB/s wr, 115 op/s
Jan 26 16:25:24 compute-0 nova_compute[239965]: 2026-01-26 16:25:24.690 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:24 compute-0 nova_compute[239965]: 2026-01-26 16:25:24.690 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:24 compute-0 nova_compute[239965]: 2026-01-26 16:25:24.724 239969 DEBUG nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:25:24 compute-0 ceph-mon[75140]: pgmap v2241: 305 pgs: 305 active+clean; 88 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 KiB/s wr, 115 op/s
Jan 26 16:25:24 compute-0 nova_compute[239965]: 2026-01-26 16:25:24.798 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:24 compute-0 nova_compute[239965]: 2026-01-26 16:25:24.799 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:24 compute-0 nova_compute[239965]: 2026-01-26 16:25:24.812 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:25:24 compute-0 nova_compute[239965]: 2026-01-26 16:25:24.813 239969 INFO nova.compute.claims [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:25:24 compute-0 nova_compute[239965]: 2026-01-26 16:25:24.939 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:25:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3906117192' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.518 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.526 239969 DEBUG nova.compute.provider_tree [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.567 239969 DEBUG nova.scheduler.client.report [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.588 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.588 239969 DEBUG nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.631 239969 DEBUG nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.631 239969 DEBUG nova.network.neutron [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.658 239969 INFO nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.673 239969 DEBUG nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.792 239969 DEBUG nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.793 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:25:25 compute-0 nova_compute[239965]: 2026-01-26 16:25:25.793 239969 INFO nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Creating image(s)
Jan 26 16:25:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3906117192' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.104 239969 DEBUG nova.storage.rbd_utils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image b1edd3fd-2563-465f-94c8-577ab4debb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.142 239969 DEBUG nova.storage.rbd_utils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image b1edd3fd-2563-465f-94c8-577ab4debb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.178 239969 DEBUG nova.storage.rbd_utils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image b1edd3fd-2563-465f-94c8-577ab4debb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.184 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.232 239969 DEBUG nova.policy [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59ae1c17a260470c91f50965ddd53a9e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.288 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.289 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.290 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.290 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.317 239969 DEBUG nova.storage.rbd_utils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image b1edd3fd-2563-465f-94c8-577ab4debb72_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.321 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b1edd3fd-2563-465f-94c8-577ab4debb72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 88 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 KiB/s wr, 115 op/s
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.644 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b1edd3fd-2563-465f-94c8-577ab4debb72_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.707 239969 DEBUG nova.storage.rbd_utils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] resizing rbd image b1edd3fd-2563-465f-94c8-577ab4debb72_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.803 239969 DEBUG nova.objects.instance [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'migration_context' on Instance uuid b1edd3fd-2563-465f-94c8-577ab4debb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.824 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.824 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Ensure instance console log exists: /var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.825 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.826 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:26 compute-0 nova_compute[239965]: 2026-01-26 16:25:26.826 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:27 compute-0 ceph-mon[75140]: pgmap v2242: 305 pgs: 305 active+clean; 88 MiB data, 945 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 KiB/s wr, 115 op/s
Jan 26 16:25:27 compute-0 nova_compute[239965]: 2026-01-26 16:25:27.356 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444712.3545718, 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:27 compute-0 nova_compute[239965]: 2026-01-26 16:25:27.356 239969 INFO nova.compute.manager [-] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] VM Stopped (Lifecycle Event)
Jan 26 16:25:27 compute-0 nova_compute[239965]: 2026-01-26 16:25:27.377 239969 DEBUG nova.compute.manager [None req-5a864890-2bc4-4305-bcd4-85ec1f450dbe - - - - - -] [instance: 0fdaaa2b-bc68-4ffd-bfef-8f47d08c5428] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:27 compute-0 nova_compute[239965]: 2026-01-26 16:25:27.778 239969 DEBUG nova.network.neutron [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Successfully created port: 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:25:27 compute-0 nova_compute[239965]: 2026-01-26 16:25:27.840 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:27 compute-0 nova_compute[239965]: 2026-01-26 16:25:27.948 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:28 compute-0 nova_compute[239965]: 2026-01-26 16:25:28.465 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 118 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Jan 26 16:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:25:28
Jan 26 16:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'backups', 'vms', 'images', 'volumes', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 26 16:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:25:28 compute-0 ceph-mon[75140]: pgmap v2243: 305 pgs: 305 active+clean; 118 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.2 MiB/s wr, 109 op/s
Jan 26 16:25:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:29 compute-0 nova_compute[239965]: 2026-01-26 16:25:29.865 239969 DEBUG nova.network.neutron [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Successfully updated port: 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:25:29 compute-0 nova_compute[239965]: 2026-01-26 16:25:29.880 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:29 compute-0 nova_compute[239965]: 2026-01-26 16:25:29.880 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquired lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:29 compute-0 nova_compute[239965]: 2026-01-26 16:25:29.880 239969 DEBUG nova.network.neutron [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:25:30 compute-0 nova_compute[239965]: 2026-01-26 16:25:30.010 239969 DEBUG nova.compute.manager [req-72f645a8-68e0-4da9-906c-1c7aed25e773 req-1e01593d-bb14-4ad9-96dc-2855dd27647d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-changed-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:30 compute-0 nova_compute[239965]: 2026-01-26 16:25:30.010 239969 DEBUG nova.compute.manager [req-72f645a8-68e0-4da9-906c-1c7aed25e773 req-1e01593d-bb14-4ad9-96dc-2855dd27647d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Refreshing instance network info cache due to event network-changed-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:25:30 compute-0 nova_compute[239965]: 2026-01-26 16:25:30.011 239969 DEBUG oslo_concurrency.lockutils [req-72f645a8-68e0-4da9-906c-1c7aed25e773 req-1e01593d-bb14-4ad9-96dc-2855dd27647d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:30 compute-0 nova_compute[239965]: 2026-01-26 16:25:30.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:30 compute-0 nova_compute[239965]: 2026-01-26 16:25:30.161 239969 DEBUG nova.network.neutron [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:25:30 compute-0 nova_compute[239965]: 2026-01-26 16:25:30.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 118 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.2 MiB/s wr, 41 op/s
Jan 26 16:25:30 compute-0 ceph-mon[75140]: pgmap v2244: 305 pgs: 305 active+clean; 118 MiB data, 954 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.2 MiB/s wr, 41 op/s
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:25:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.896 239969 DEBUG nova.network.neutron [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updating instance_info_cache with network_info: [{"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.914 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Releasing lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.915 239969 DEBUG nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Instance network_info: |[{"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.916 239969 DEBUG oslo_concurrency.lockutils [req-72f645a8-68e0-4da9-906c-1c7aed25e773 req-1e01593d-bb14-4ad9-96dc-2855dd27647d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.916 239969 DEBUG nova.network.neutron [req-72f645a8-68e0-4da9-906c-1c7aed25e773 req-1e01593d-bb14-4ad9-96dc-2855dd27647d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Refreshing network info cache for port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.921 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Start _get_guest_xml network_info=[{"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.929 239969 WARNING nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.934 239969 DEBUG nova.virt.libvirt.host [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.935 239969 DEBUG nova.virt.libvirt.host [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.946 239969 DEBUG nova.virt.libvirt.host [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.947 239969 DEBUG nova.virt.libvirt.host [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.947 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.948 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.949 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.949 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.949 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.950 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.950 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.951 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.951 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.952 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.952 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.952 239969 DEBUG nova.virt.hardware [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:25:31 compute-0 nova_compute[239965]: 2026-01-26 16:25:31.957 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:25:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2067482833' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.568 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.594 239969 DEBUG nova.storage.rbd_utils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image b1edd3fd-2563-465f-94c8-577ab4debb72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.598 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:32 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2067482833' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 134 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.638 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "deadd136-f039-481d-bfc4-bbc46bc71563" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.638 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.664 239969 DEBUG nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.746 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.746 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.752 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.752 239969 INFO nova.compute.claims [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.852 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.922 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444717.9213452, c7429f68-4b47-47a9-86bb-8ffec79926a6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.923 239969 INFO nova.compute.manager [-] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] VM Stopped (Lifecycle Event)
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.941 239969 DEBUG nova.compute.manager [None req-1de6ec1d-e434-411e-9f13-5c61d804b9cd - - - - - -] [instance: c7429f68-4b47-47a9-86bb-8ffec79926a6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:32 compute-0 nova_compute[239965]: 2026-01-26 16:25:32.951 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:25:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422609483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.141 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.144 239969 DEBUG nova.virt.libvirt.vif [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:25:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-218031812',display_name='tempest-TestNetworkAdvancedServerOps-server-218031812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-218031812',id=137,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIrMoI9V6cKHzfTFlUxBbATN2hHFUio1ZQ6nDgO3nipL1aW8ijkKCse9C6fJ8w0ajCwwIYfL3LOI/6X08nYHu0UQi8GzU+YFfJ0MDsrLZ69a2gtrBoG4QpsmS9eHdII6aA==',key_name='tempest-TestNetworkAdvancedServerOps-1986592575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-m0s2s10t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:25:25Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=b1edd3fd-2563-465f-94c8-577ab4debb72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.145 239969 DEBUG nova.network.os_vif_util [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.146 239969 DEBUG nova.network.os_vif_util [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:29:6f,bridge_name='br-int',has_traffic_filtering=True,id=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8,network=Network(468ef4a7-70cc-4bbe-a116-35fd32215cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b0936e-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.148 239969 DEBUG nova.objects.instance [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid b1edd3fd-2563-465f-94c8-577ab4debb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.175 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <uuid>b1edd3fd-2563-465f-94c8-577ab4debb72</uuid>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <name>instance-00000089</name>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-218031812</nova:name>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:25:31</nova:creationTime>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <nova:user uuid="59ae1c17a260470c91f50965ddd53a9e">tempest-TestNetworkAdvancedServerOps-842475489-project-member</nova:user>
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <nova:project uuid="2a7615c0db4e4f38aec30c7c723c3c3a">tempest-TestNetworkAdvancedServerOps-842475489</nova:project>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <nova:port uuid="60b0936e-92d4-4117-ad1c-6c6ba1f53cd8">
Jan 26 16:25:33 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <system>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <entry name="serial">b1edd3fd-2563-465f-94c8-577ab4debb72</entry>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <entry name="uuid">b1edd3fd-2563-465f-94c8-577ab4debb72</entry>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     </system>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <os>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   </os>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <features>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   </features>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b1edd3fd-2563-465f-94c8-577ab4debb72_disk">
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       </source>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b1edd3fd-2563-465f-94c8-577ab4debb72_disk.config">
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       </source>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:25:33 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:00:29:6f"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <target dev="tap60b0936e-92"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72/console.log" append="off"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <video>
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     </video>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:25:33 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:25:33 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:25:33 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:25:33 compute-0 nova_compute[239965]: </domain>
Jan 26 16:25:33 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.177 239969 DEBUG nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Preparing to wait for external event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.178 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.179 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.179 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.181 239969 DEBUG nova.virt.libvirt.vif [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:25:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-218031812',display_name='tempest-TestNetworkAdvancedServerOps-server-218031812',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-218031812',id=137,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIrMoI9V6cKHzfTFlUxBbATN2hHFUio1ZQ6nDgO3nipL1aW8ijkKCse9C6fJ8w0ajCwwIYfL3LOI/6X08nYHu0UQi8GzU+YFfJ0MDsrLZ69a2gtrBoG4QpsmS9eHdII6aA==',key_name='tempest-TestNetworkAdvancedServerOps-1986592575',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-m0s2s10t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:25:25Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=b1edd3fd-2563-465f-94c8-577ab4debb72,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.182 239969 DEBUG nova.network.os_vif_util [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.183 239969 DEBUG nova.network.os_vif_util [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:29:6f,bridge_name='br-int',has_traffic_filtering=True,id=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8,network=Network(468ef4a7-70cc-4bbe-a116-35fd32215cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b0936e-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.184 239969 DEBUG os_vif [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:29:6f,bridge_name='br-int',has_traffic_filtering=True,id=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8,network=Network(468ef4a7-70cc-4bbe-a116-35fd32215cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b0936e-92') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.185 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.186 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.187 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.192 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.192 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60b0936e-92, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.193 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60b0936e-92, col_values=(('external_ids', {'iface-id': '60b0936e-92d4-4117-ad1c-6c6ba1f53cd8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:29:6f', 'vm-uuid': 'b1edd3fd-2563-465f-94c8-577ab4debb72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.195 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:33 compute-0 NetworkManager[48954]: <info>  [1769444733.1963] manager: (tap60b0936e-92): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/589)
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.198 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.204 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.206 239969 INFO os_vif [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:29:6f,bridge_name='br-int',has_traffic_filtering=True,id=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8,network=Network(468ef4a7-70cc-4bbe-a116-35fd32215cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b0936e-92')
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.277 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.278 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.278 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No VIF found with MAC fa:16:3e:00:29:6f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.279 239969 INFO nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Using config drive
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.299 239969 DEBUG nova.storage.rbd_utils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image b1edd3fd-2563-465f-94c8-577ab4debb72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:25:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828526412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.472 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.601 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.611 239969 DEBUG nova.compute.provider_tree [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:25:33 compute-0 ceph-mon[75140]: pgmap v2245: 305 pgs: 305 active+clean; 134 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 26 16:25:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3422609483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1828526412' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.633 239969 DEBUG nova.scheduler.client.report [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.665 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.666 239969 DEBUG nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.718 239969 DEBUG nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.719 239969 DEBUG nova.network.neutron [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.745 239969 INFO nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.758 239969 INFO nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Creating config drive at /var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72/disk.config
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.767 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprny_ceq0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.816 239969 DEBUG nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.853 239969 DEBUG nova.network.neutron [req-72f645a8-68e0-4da9-906c-1c7aed25e773 req-1e01593d-bb14-4ad9-96dc-2855dd27647d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updated VIF entry in instance network info cache for port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.853 239969 DEBUG nova.network.neutron [req-72f645a8-68e0-4da9-906c-1c7aed25e773 req-1e01593d-bb14-4ad9-96dc-2855dd27647d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updating instance_info_cache with network_info: [{"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.877 239969 DEBUG oslo_concurrency.lockutils [req-72f645a8-68e0-4da9-906c-1c7aed25e773 req-1e01593d-bb14-4ad9-96dc-2855dd27647d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.921 239969 DEBUG nova.policy [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ace5f36058684b5782589457d9e15921', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.924 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprny_ceq0" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.925 239969 DEBUG nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.928 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.928 239969 INFO nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Creating image(s)
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.954 239969 DEBUG nova.storage.rbd_utils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image deadd136-f039-481d-bfc4-bbc46bc71563_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:33 compute-0 nova_compute[239965]: 2026-01-26 16:25:33.977 239969 DEBUG nova.storage.rbd_utils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image deadd136-f039-481d-bfc4-bbc46bc71563_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.002 239969 DEBUG nova.storage.rbd_utils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image deadd136-f039-481d-bfc4-bbc46bc71563_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.006 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.070 239969 DEBUG nova.storage.rbd_utils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image b1edd3fd-2563-465f-94c8-577ab4debb72_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.074 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72/disk.config b1edd3fd-2563-465f-94c8-577ab4debb72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.121 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.123 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.124 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.124 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.148 239969 DEBUG nova.storage.rbd_utils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image deadd136-f039-481d-bfc4-bbc46bc71563_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.153 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 deadd136-f039-481d-bfc4-bbc46bc71563_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.224 239969 DEBUG oslo_concurrency.processutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72/disk.config b1edd3fd-2563-465f-94c8-577ab4debb72_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.225 239969 INFO nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Deleting local config drive /var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72/disk.config because it was imported into RBD.
Jan 26 16:25:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:34 compute-0 kernel: tap60b0936e-92: entered promiscuous mode
Jan 26 16:25:34 compute-0 NetworkManager[48954]: <info>  [1769444734.2946] manager: (tap60b0936e-92): new Tun device (/org/freedesktop/NetworkManager/Devices/590)
Jan 26 16:25:34 compute-0 ovn_controller[146046]: 2026-01-26T16:25:34Z|01459|binding|INFO|Claiming lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 for this chassis.
Jan 26 16:25:34 compute-0 ovn_controller[146046]: 2026-01-26T16:25:34Z|01460|binding|INFO|60b0936e-92d4-4117-ad1c-6c6ba1f53cd8: Claiming fa:16:3e:00:29:6f 10.100.0.3
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.317 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:29:6f 10.100.0.3'], port_security=['fa:16:3e:00:29:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b1edd3fd-2563-465f-94c8-577ab4debb72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0604e11d-f932-4e4e-a21d-f453b4c1f6d5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=016b0b61-69af-4aa0-b892-809b130821a2, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.318 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 in datapath 468ef4a7-70cc-4bbe-a116-35fd32215cb8 bound to our chassis
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.320 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 468ef4a7-70cc-4bbe-a116-35fd32215cb8
Jan 26 16:25:34 compute-0 systemd-udevd[360356]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.338 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d00cadfc-b58a-45b7-ba23-78bf0523dd1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.339 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap468ef4a7-71 in ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.341 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap468ef4a7-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.341 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd2e8de-741d-4ac4-b29e-12595f1dc541]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.342 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[98d6d026-11a0-4889-82b1-41f83c44cd6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 systemd-machined[208061]: New machine qemu-166-instance-00000089.
Jan 26 16:25:34 compute-0 NetworkManager[48954]: <info>  [1769444734.3507] device (tap60b0936e-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:25:34 compute-0 NetworkManager[48954]: <info>  [1769444734.3518] device (tap60b0936e-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.352 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[092614c0-1854-4b15-a666-42c24a5f43ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 systemd[1]: Started Virtual Machine qemu-166-instance-00000089.
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.377 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.378 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[504d8219-7521-4e6c-b760-f99ab89363ad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 ovn_controller[146046]: 2026-01-26T16:25:34Z|01461|binding|INFO|Setting lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 ovn-installed in OVS
Jan 26 16:25:34 compute-0 ovn_controller[146046]: 2026-01-26T16:25:34Z|01462|binding|INFO|Setting lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 up in Southbound
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.383 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.409 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[17a82ea9-b3d5-48cc-8e9e-4e219457bd28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.414 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[97b4a3bb-47cc-4fd3-9801-b44d030c6398]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 NetworkManager[48954]: <info>  [1769444734.4153] manager: (tap468ef4a7-70): new Veth device (/org/freedesktop/NetworkManager/Devices/591)
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.449 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[72185416-a183-4cd3-b20a-31eced42a1e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.453 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f51e0379-1789-4d52-843e-0b1a73248b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.469 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 deadd136-f039-481d-bfc4-bbc46bc71563_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.315s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:34 compute-0 NetworkManager[48954]: <info>  [1769444734.4797] device (tap468ef4a7-70): carrier: link connected
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.488 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0947c2-aa14-413a-8daf-e2db37be5089]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.507 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1de86a14-ff7a-44b5-b1c2-d6522358ebd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap468ef4a7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:de:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622528, 'reachable_time': 16313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360406, 'error': None, 'target': 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.519 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bb75f2-817e-4381-b16c-e797f020f8d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:debb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622528, 'tstamp': 622528}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360414, 'error': None, 'target': 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.536 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[32eaa686-117e-4456-99b0-21b7b89eb6b8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap468ef4a7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:de:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 417], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622528, 'reachable_time': 16313, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360424, 'error': None, 'target': 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.537 239969 DEBUG nova.storage.rbd_utils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] resizing rbd image deadd136-f039-481d-bfc4-bbc46bc71563_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.567 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c69e13e6-7e17-48b4-859d-22c21d8556da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.614 239969 DEBUG nova.objects.instance [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'migration_context' on Instance uuid deadd136-f039-481d-bfc4-bbc46bc71563 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.629 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.629 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Ensure instance console log exists: /var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.629 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.629 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e98e7566-11a3-4ad4-b26b-239e4b5b6a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.630 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.630 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.630 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap468ef4a7-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.630 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.631 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap468ef4a7-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:34 compute-0 NetworkManager[48954]: <info>  [1769444734.6335] manager: (tap468ef4a7-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/592)
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.632 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:34 compute-0 kernel: tap468ef4a7-70: entered promiscuous mode
Jan 26 16:25:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 134 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.635 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap468ef4a7-70, col_values=(('external_ids', {'iface-id': '54fc2172-ac7e-43ed-a29f-8766884bed13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.638 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:34 compute-0 ovn_controller[146046]: 2026-01-26T16:25:34Z|01463|binding|INFO|Releasing lport 54fc2172-ac7e-43ed-a29f-8766884bed13 from this chassis (sb_readonly=0)
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.639 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/468ef4a7-70cc-4bbe-a116-35fd32215cb8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/468ef4a7-70cc-4bbe-a116-35fd32215cb8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.640 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c07742dc-a3c1-404b-9f3f-8ed41c1a66e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.641 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-468ef4a7-70cc-4bbe-a116-35fd32215cb8
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/468ef4a7-70cc-4bbe-a116-35fd32215cb8.pid.haproxy
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 468ef4a7-70cc-4bbe-a116-35fd32215cb8
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:25:34 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:34.642 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'env', 'PROCESS_TAG=haproxy-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/468ef4a7-70cc-4bbe-a116-35fd32215cb8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.709 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:34 compute-0 ceph-mon[75140]: pgmap v2246: 305 pgs: 305 active+clean; 134 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.772 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444734.771414, b1edd3fd-2563-465f-94c8-577ab4debb72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.778 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] VM Started (Lifecycle Event)
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.808 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.812 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444734.7716305, b1edd3fd-2563-465f-94c8-577ab4debb72 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.813 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] VM Paused (Lifecycle Event)
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.833 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.836 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.854 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:25:34 compute-0 nova_compute[239965]: 2026-01-26 16:25:34.860 239969 DEBUG nova.network.neutron [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Successfully created port: 7ad6a8b1-2176-4542-ab37-6ece8654876d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:25:35 compute-0 podman[360536]: 2026-01-26 16:25:35.027501496 +0000 UTC m=+0.048037127 container create a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:25:35 compute-0 systemd[1]: Started libpod-conmon-a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d.scope.
Jan 26 16:25:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/486ecb2a938e603bc0798b48d9c707ad284dee0bffa0b6264637187d0ceb1c4f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:35 compute-0 podman[360536]: 2026-01-26 16:25:35.093745198 +0000 UTC m=+0.114280839 container init a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:25:35 compute-0 podman[360536]: 2026-01-26 16:25:35.000207169 +0000 UTC m=+0.020742810 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:25:35 compute-0 podman[360536]: 2026-01-26 16:25:35.099177612 +0000 UTC m=+0.119713233 container start a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 16:25:35 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[360552]: [NOTICE]   (360556) : New worker (360558) forked
Jan 26 16:25:35 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[360552]: [NOTICE]   (360556) : Loading success.
Jan 26 16:25:35 compute-0 nova_compute[239965]: 2026-01-26 16:25:35.779 239969 DEBUG nova.network.neutron [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Successfully updated port: 7ad6a8b1-2176-4542-ab37-6ece8654876d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:25:35 compute-0 nova_compute[239965]: 2026-01-26 16:25:35.792 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:35 compute-0 nova_compute[239965]: 2026-01-26 16:25:35.792 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquired lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:35 compute-0 nova_compute[239965]: 2026-01-26 16:25:35.793 239969 DEBUG nova.network.neutron [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:25:35 compute-0 nova_compute[239965]: 2026-01-26 16:25:35.876 239969 DEBUG nova.compute.manager [req-5d295ab5-e863-404b-ab61-08ff143e6fb9 req-b46be7bd-5ecb-4bec-8f1f-804284167fdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Received event network-changed-7ad6a8b1-2176-4542-ab37-6ece8654876d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:35 compute-0 nova_compute[239965]: 2026-01-26 16:25:35.877 239969 DEBUG nova.compute.manager [req-5d295ab5-e863-404b-ab61-08ff143e6fb9 req-b46be7bd-5ecb-4bec-8f1f-804284167fdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Refreshing instance network info cache due to event network-changed-7ad6a8b1-2176-4542-ab37-6ece8654876d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:25:35 compute-0 nova_compute[239965]: 2026-01-26 16:25:35.877 239969 DEBUG oslo_concurrency.lockutils [req-5d295ab5-e863-404b-ab61-08ff143e6fb9 req-b46be7bd-5ecb-4bec-8f1f-804284167fdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:35 compute-0 nova_compute[239965]: 2026-01-26 16:25:35.938 239969 DEBUG nova.network.neutron [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.626 239969 DEBUG nova.network.neutron [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Updating instance_info_cache with network_info: [{"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 139 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 MiB/s wr, 51 op/s
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.651 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Releasing lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.651 239969 DEBUG nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Instance network_info: |[{"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.652 239969 DEBUG oslo_concurrency.lockutils [req-5d295ab5-e863-404b-ab61-08ff143e6fb9 req-b46be7bd-5ecb-4bec-8f1f-804284167fdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.652 239969 DEBUG nova.network.neutron [req-5d295ab5-e863-404b-ab61-08ff143e6fb9 req-b46be7bd-5ecb-4bec-8f1f-804284167fdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Refreshing network info cache for port 7ad6a8b1-2176-4542-ab37-6ece8654876d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.656 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Start _get_guest_xml network_info=[{"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.660 239969 WARNING nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.666 239969 DEBUG nova.virt.libvirt.host [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.667 239969 DEBUG nova.virt.libvirt.host [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.671 239969 DEBUG nova.virt.libvirt.host [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.671 239969 DEBUG nova.virt.libvirt.host [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.672 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.672 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.673 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.673 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.673 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.673 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.674 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.674 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.674 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.675 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.675 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.675 239969 DEBUG nova.virt.hardware [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:25:36 compute-0 nova_compute[239965]: 2026-01-26 16:25:36.678 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:36 compute-0 ceph-mon[75140]: pgmap v2247: 305 pgs: 305 active+clean; 139 MiB data, 971 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 MiB/s wr, 51 op/s
Jan 26 16:25:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:25:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1736955462' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.237 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.265 239969 DEBUG nova.storage.rbd_utils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image deadd136-f039-481d-bfc4-bbc46bc71563_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.269 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1736955462' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.906 239969 DEBUG nova.network.neutron [req-5d295ab5-e863-404b-ab61-08ff143e6fb9 req-b46be7bd-5ecb-4bec-8f1f-804284167fdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Updated VIF entry in instance network info cache for port 7ad6a8b1-2176-4542-ab37-6ece8654876d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.907 239969 DEBUG nova.network.neutron [req-5d295ab5-e863-404b-ab61-08ff143e6fb9 req-b46be7bd-5ecb-4bec-8f1f-804284167fdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Updating instance_info_cache with network_info: [{"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:25:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2800150614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.933 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.935 239969 DEBUG nova.virt.libvirt.vif [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:25:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1437534189',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1437534189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=138,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByI6tCrfJLyIj6sIsV+Ekerorscq6qkiM/Lle83NCyTUlha/oPyViDL+PBWA7bXq9+9oJOHfnt/sOtyAsLXUYu52zdtEkcLJAtD8nhzn87iwgtBznYvhaPSD4tmFm9Vgw==',key_name='tempest-TestSecurityGroupsBasicOps-173561617',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-4w5dnqt8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:25:33Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=deadd136-f039-481d-bfc4-bbc46bc71563,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.936 239969 DEBUG nova.network.os_vif_util [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.937 239969 DEBUG nova.network.os_vif_util [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:61:f7,bridge_name='br-int',has_traffic_filtering=True,id=7ad6a8b1-2176-4542-ab37-6ece8654876d,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ad6a8b1-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.938 239969 DEBUG nova.objects.instance [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'pci_devices' on Instance uuid deadd136-f039-481d-bfc4-bbc46bc71563 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.945 239969 DEBUG oslo_concurrency.lockutils [req-5d295ab5-e863-404b-ab61-08ff143e6fb9 req-b46be7bd-5ecb-4bec-8f1f-804284167fdc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.957 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <uuid>deadd136-f039-481d-bfc4-bbc46bc71563</uuid>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <name>instance-0000008a</name>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1437534189</nova:name>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:25:36</nova:creationTime>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <nova:user uuid="ace5f36058684b5782589457d9e15921">tempest-TestSecurityGroupsBasicOps-75910484-project-member</nova:user>
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <nova:project uuid="1f814bd45d164be6827f7ef54ed07f5f">tempest-TestSecurityGroupsBasicOps-75910484</nova:project>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <nova:port uuid="7ad6a8b1-2176-4542-ab37-6ece8654876d">
Jan 26 16:25:37 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <system>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <entry name="serial">deadd136-f039-481d-bfc4-bbc46bc71563</entry>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <entry name="uuid">deadd136-f039-481d-bfc4-bbc46bc71563</entry>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     </system>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <os>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   </os>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <features>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   </features>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/deadd136-f039-481d-bfc4-bbc46bc71563_disk">
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       </source>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/deadd136-f039-481d-bfc4-bbc46bc71563_disk.config">
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       </source>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:25:37 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:f5:61:f7"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <target dev="tap7ad6a8b1-21"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563/console.log" append="off"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <video>
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     </video>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:25:37 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:25:37 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:25:37 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:25:37 compute-0 nova_compute[239965]: </domain>
Jan 26 16:25:37 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.959 239969 DEBUG nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Preparing to wait for external event network-vif-plugged-7ad6a8b1-2176-4542-ab37-6ece8654876d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.959 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.960 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.960 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.960 239969 DEBUG nova.virt.libvirt.vif [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:25:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1437534189',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1437534189',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=138,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByI6tCrfJLyIj6sIsV+Ekerorscq6qkiM/Lle83NCyTUlha/oPyViDL+PBWA7bXq9+9oJOHfnt/sOtyAsLXUYu52zdtEkcLJAtD8nhzn87iwgtBznYvhaPSD4tmFm9Vgw==',key_name='tempest-TestSecurityGroupsBasicOps-173561617',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-4w5dnqt8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:25:33Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=deadd136-f039-481d-bfc4-bbc46bc71563,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.961 239969 DEBUG nova.network.os_vif_util [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.961 239969 DEBUG nova.network.os_vif_util [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:61:f7,bridge_name='br-int',has_traffic_filtering=True,id=7ad6a8b1-2176-4542-ab37-6ece8654876d,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ad6a8b1-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.961 239969 DEBUG os_vif [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:61:f7,bridge_name='br-int',has_traffic_filtering=True,id=7ad6a8b1-2176-4542-ab37-6ece8654876d,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ad6a8b1-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.962 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.962 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.963 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.966 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.966 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ad6a8b1-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.967 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7ad6a8b1-21, col_values=(('external_ids', {'iface-id': '7ad6a8b1-2176-4542-ab37-6ece8654876d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:61:f7', 'vm-uuid': 'deadd136-f039-481d-bfc4-bbc46bc71563'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.968 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:37 compute-0 NetworkManager[48954]: <info>  [1769444737.9691] manager: (tap7ad6a8b1-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/593)
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.970 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.975 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:37 compute-0 nova_compute[239965]: 2026-01-26 16:25:37.976 239969 INFO os_vif [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:61:f7,bridge_name='br-int',has_traffic_filtering=True,id=7ad6a8b1-2176-4542-ab37-6ece8654876d,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ad6a8b1-21')
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.035 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.036 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.036 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No VIF found with MAC fa:16:3e:f5:61:f7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.036 239969 INFO nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Using config drive
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.057 239969 DEBUG nova.storage.rbd_utils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image deadd136-f039-481d-bfc4-bbc46bc71563_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:38 compute-0 podman[360632]: 2026-01-26 16:25:38.072751205 +0000 UTC m=+0.062813290 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:25:38 compute-0 podman[360633]: 2026-01-26 16:25:38.10239064 +0000 UTC m=+0.088186760 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.509 239969 INFO nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Creating config drive at /var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563/disk.config
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.515 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83f2kva9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.604 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 180 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.678 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp83f2kva9" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.716 239969 DEBUG nova.storage.rbd_utils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image deadd136-f039-481d-bfc4-bbc46bc71563_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.721 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563/disk.config deadd136-f039-481d-bfc4-bbc46bc71563_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2800150614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:38 compute-0 ceph-mon[75140]: pgmap v2248: 305 pgs: 305 active+clean; 180 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 3.6 MiB/s wr, 64 op/s
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.883 239969 DEBUG oslo_concurrency.processutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563/disk.config deadd136-f039-481d-bfc4-bbc46bc71563_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.885 239969 INFO nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Deleting local config drive /var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563/disk.config because it was imported into RBD.
Jan 26 16:25:38 compute-0 kernel: tap7ad6a8b1-21: entered promiscuous mode
Jan 26 16:25:38 compute-0 NetworkManager[48954]: <info>  [1769444738.9483] manager: (tap7ad6a8b1-21): new Tun device (/org/freedesktop/NetworkManager/Devices/594)
Jan 26 16:25:38 compute-0 ovn_controller[146046]: 2026-01-26T16:25:38Z|01464|binding|INFO|Claiming lport 7ad6a8b1-2176-4542-ab37-6ece8654876d for this chassis.
Jan 26 16:25:38 compute-0 ovn_controller[146046]: 2026-01-26T16:25:38Z|01465|binding|INFO|7ad6a8b1-2176-4542-ab37-6ece8654876d: Claiming fa:16:3e:f5:61:f7 10.100.0.11
Jan 26 16:25:38 compute-0 nova_compute[239965]: 2026-01-26 16:25:38.952 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:38.965 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:61:f7 10.100.0.11'], port_security=['fa:16:3e:f5:61:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'deadd136-f039-481d-bfc4-bbc46bc71563', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '12f86669-b30d-4046-b58e-a323b0390514 b3cd03ab-361a-4eb5-b4b2-10e2c053ad7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c35e7e4-1920-4b21-8562-6698247816ad, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7ad6a8b1-2176-4542-ab37-6ece8654876d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:38.967 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7ad6a8b1-2176-4542-ab37-6ece8654876d in datapath c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073 bound to our chassis
Jan 26 16:25:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:38.968 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073
Jan 26 16:25:38 compute-0 systemd-machined[208061]: New machine qemu-167-instance-0000008a.
Jan 26 16:25:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:38.985 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a4055c7b-a709-4be4-8a9e-08b4b5fc8cb0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:38.986 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc3b6f1ea-41 in ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:25:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:38.988 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc3b6f1ea-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:25:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:38.988 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[778ac9df-ca05-401f-b49d-75d5ce906867]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:38.989 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[40ad612d-17c5-44f3-8cd2-a57b36de71a3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.006 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[01f4b2cd-83e5-4c4a-91ae-09a16f27889c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.038 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eb5385ba-415b-4260-87fc-b57c58fcbbcc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 systemd[1]: Started Virtual Machine qemu-167-instance-0000008a.
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.056 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:39 compute-0 ovn_controller[146046]: 2026-01-26T16:25:39Z|01466|binding|INFO|Setting lport 7ad6a8b1-2176-4542-ab37-6ece8654876d ovn-installed in OVS
Jan 26 16:25:39 compute-0 ovn_controller[146046]: 2026-01-26T16:25:39Z|01467|binding|INFO|Setting lport 7ad6a8b1-2176-4542-ab37-6ece8654876d up in Southbound
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.060 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:39 compute-0 systemd-udevd[360746]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.069 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f8958b54-4ef3-4083-a18c-1c366884cde4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 NetworkManager[48954]: <info>  [1769444739.0757] manager: (tapc3b6f1ea-40): new Veth device (/org/freedesktop/NetworkManager/Devices/595)
Jan 26 16:25:39 compute-0 systemd-udevd[360748]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.076 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[36355d41-297d-4d26-bc6b-026f9e217012]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 NetworkManager[48954]: <info>  [1769444739.0801] device (tap7ad6a8b1-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:25:39 compute-0 NetworkManager[48954]: <info>  [1769444739.0812] device (tap7ad6a8b1-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.103 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2c01a19b-e273-4e1a-9cb4-16db79330046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.106 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[82a74466-ed7e-4ed4-9b87-ae961ecf70cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 NetworkManager[48954]: <info>  [1769444739.1262] device (tapc3b6f1ea-40): carrier: link connected
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.131 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[0958c65d-ade9-4c1a-949f-87d5a099f77d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.148 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c54e2702-ce46-43e7-a397-c2ed5d53c283]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3b6f1ea-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:1b:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 419], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622993, 'reachable_time': 28486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 360774, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.165 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd97234-d641-4cfc-b180-a065a920aef6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe48:1b81'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 622993, 'tstamp': 622993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 360775, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.183 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ffad85d5-509b-46e6-a6a5-fbb601db4201]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3b6f1ea-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:1b:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 419], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622993, 'reachable_time': 28486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 360776, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.221 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6606d11e-57e9-4bc3-a0db-79e827dfb774]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.288 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96cbd919-6712-49c2-96c1-9cc0893d0cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.290 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3b6f1ea-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.290 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.290 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3b6f1ea-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:39 compute-0 NetworkManager[48954]: <info>  [1769444739.2929] manager: (tapc3b6f1ea-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/596)
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.292 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:39 compute-0 kernel: tapc3b6f1ea-40: entered promiscuous mode
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.296 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc3b6f1ea-40, col_values=(('external_ids', {'iface-id': '1de7f01b-b4a3-4a1c-9255-2968d2bb9781'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:39 compute-0 ovn_controller[146046]: 2026-01-26T16:25:39Z|01468|binding|INFO|Releasing lport 1de7f01b-b4a3-4a1c-9255-2968d2bb9781 from this chassis (sb_readonly=0)
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.301 239969 DEBUG nova.compute.manager [req-248aa987-0d96-4fc0-b2c9-6821a21a215c req-2aed5b2b-cd37-4608-8c77-f159ae87569c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Received event network-vif-plugged-7ad6a8b1-2176-4542-ab37-6ece8654876d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.302 239969 DEBUG oslo_concurrency.lockutils [req-248aa987-0d96-4fc0-b2c9-6821a21a215c req-2aed5b2b-cd37-4608-8c77-f159ae87569c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.302 239969 DEBUG oslo_concurrency.lockutils [req-248aa987-0d96-4fc0-b2c9-6821a21a215c req-2aed5b2b-cd37-4608-8c77-f159ae87569c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.302 239969 DEBUG oslo_concurrency.lockutils [req-248aa987-0d96-4fc0-b2c9-6821a21a215c req-2aed5b2b-cd37-4608-8c77-f159ae87569c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.303 239969 DEBUG nova.compute.manager [req-248aa987-0d96-4fc0-b2c9-6821a21a215c req-2aed5b2b-cd37-4608-8c77-f159ae87569c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Processing event network-vif-plugged-7ad6a8b1-2176-4542-ab37-6ece8654876d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.313 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.313 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.314 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06fe8515-3ff4-40ff-b285-a48949eb023c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.315 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073.pid.haproxy
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:25:39 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:39.315 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'env', 'PROCESS_TAG=haproxy-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.403 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444739.403458, deadd136-f039-481d-bfc4-bbc46bc71563 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.404 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] VM Started (Lifecycle Event)
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.406 239969 DEBUG nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.411 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.415 239969 INFO nova.virt.libvirt.driver [-] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Instance spawned successfully.
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.416 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.439 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.453 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.454 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.455 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.455 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.456 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.460 239969 DEBUG nova.virt.libvirt.driver [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.465 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.502 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.503 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444739.4035838, deadd136-f039-481d-bfc4-bbc46bc71563 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.503 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] VM Paused (Lifecycle Event)
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.531 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.536 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444739.4102468, deadd136-f039-481d-bfc4-bbc46bc71563 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.537 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] VM Resumed (Lifecycle Event)
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.542 239969 INFO nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Took 5.62 seconds to spawn the instance on the hypervisor.
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.543 239969 DEBUG nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.556 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.560 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.594 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.622 239969 INFO nova.compute.manager [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Took 6.90 seconds to build instance.
Jan 26 16:25:39 compute-0 nova_compute[239965]: 2026-01-26 16:25:39.640 239969 DEBUG oslo_concurrency.lockutils [None req-cb69f325-81dd-468a-b23f-f4239ea14257 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:39 compute-0 podman[360850]: 2026-01-26 16:25:39.737699219 +0000 UTC m=+0.065938286 container create ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:25:39 compute-0 systemd[1]: Started libpod-conmon-ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f.scope.
Jan 26 16:25:39 compute-0 podman[360850]: 2026-01-26 16:25:39.698653712 +0000 UTC m=+0.026892859 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:25:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa1a158347e8d9fddaacd7ba72c9cb7d2d401475658409d597b6ff553bddb023/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:39 compute-0 podman[360850]: 2026-01-26 16:25:39.839481462 +0000 UTC m=+0.167720619 container init ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 16:25:39 compute-0 podman[360850]: 2026-01-26 16:25:39.849363104 +0000 UTC m=+0.177602201 container start ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:25:39 compute-0 neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073[360863]: [NOTICE]   (360867) : New worker (360869) forked
Jan 26 16:25:39 compute-0 neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073[360863]: [NOTICE]   (360867) : Loading success.
Jan 26 16:25:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 180 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 49 op/s
Jan 26 16:25:40 compute-0 ceph-mon[75140]: pgmap v2249: 305 pgs: 305 active+clean; 180 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 49 op/s
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.567 239969 DEBUG nova.compute.manager [req-0ff49b11-28a3-40d7-ad0b-15345a7f42f1 req-b64f1c46-f8c7-4add-924f-6589fab4c6ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Received event network-vif-plugged-7ad6a8b1-2176-4542-ab37-6ece8654876d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.568 239969 DEBUG oslo_concurrency.lockutils [req-0ff49b11-28a3-40d7-ad0b-15345a7f42f1 req-b64f1c46-f8c7-4add-924f-6589fab4c6ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.568 239969 DEBUG oslo_concurrency.lockutils [req-0ff49b11-28a3-40d7-ad0b-15345a7f42f1 req-b64f1c46-f8c7-4add-924f-6589fab4c6ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.568 239969 DEBUG oslo_concurrency.lockutils [req-0ff49b11-28a3-40d7-ad0b-15345a7f42f1 req-b64f1c46-f8c7-4add-924f-6589fab4c6ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.568 239969 DEBUG nova.compute.manager [req-0ff49b11-28a3-40d7-ad0b-15345a7f42f1 req-b64f1c46-f8c7-4add-924f-6589fab4c6ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] No waiting events found dispatching network-vif-plugged-7ad6a8b1-2176-4542-ab37-6ece8654876d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.569 239969 WARNING nova.compute.manager [req-0ff49b11-28a3-40d7-ad0b-15345a7f42f1 req-b64f1c46-f8c7-4add-924f-6589fab4c6ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Received unexpected event network-vif-plugged-7ad6a8b1-2176-4542-ab37-6ece8654876d for instance with vm_state active and task_state None.
Jan 26 16:25:41 compute-0 sshd-session[360878]: Invalid user ansible from 209.38.206.249 port 55740
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.652 239969 DEBUG nova.compute.manager [req-27e75028-f4a5-436e-a221-76abeb459b60 req-90bf1dd3-318c-44d0-b81a-0a4b0d389f9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.652 239969 DEBUG oslo_concurrency.lockutils [req-27e75028-f4a5-436e-a221-76abeb459b60 req-90bf1dd3-318c-44d0-b81a-0a4b0d389f9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.652 239969 DEBUG oslo_concurrency.lockutils [req-27e75028-f4a5-436e-a221-76abeb459b60 req-90bf1dd3-318c-44d0-b81a-0a4b0d389f9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.653 239969 DEBUG oslo_concurrency.lockutils [req-27e75028-f4a5-436e-a221-76abeb459b60 req-90bf1dd3-318c-44d0-b81a-0a4b0d389f9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.653 239969 DEBUG nova.compute.manager [req-27e75028-f4a5-436e-a221-76abeb459b60 req-90bf1dd3-318c-44d0-b81a-0a4b0d389f9f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Processing event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.653 239969 DEBUG nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.670 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444741.6592164, b1edd3fd-2563-465f-94c8-577ab4debb72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.671 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] VM Resumed (Lifecycle Event)
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.674 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.688 239969 INFO nova.virt.libvirt.driver [-] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Instance spawned successfully.
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.689 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.693 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.697 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.709 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.710 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.711 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.711 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.712 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.713 239969 DEBUG nova.virt.libvirt.driver [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.717 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.761 239969 INFO nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Took 15.97 seconds to spawn the instance on the hypervisor.
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.762 239969 DEBUG nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:41.775 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:41.776 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.777 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.819 239969 INFO nova.compute.manager [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Took 17.04 seconds to build instance.
Jan 26 16:25:41 compute-0 nova_compute[239965]: 2026-01-26 16:25:41.833 239969 DEBUG oslo_concurrency.lockutils [None req-f0c1c153-aa1c-4306-9f9c-bb5f8100c8f4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:42 compute-0 sshd-session[360878]: Connection closed by invalid user ansible 209.38.206.249 port 55740 [preauth]
Jan 26 16:25:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 131 op/s
Jan 26 16:25:42 compute-0 sshd-session[360880]: Invalid user oracle from 209.38.206.249 port 55752
Jan 26 16:25:42 compute-0 sshd-session[360880]: Connection closed by invalid user oracle 209.38.206.249 port 55752 [preauth]
Jan 26 16:25:42 compute-0 nova_compute[239965]: 2026-01-26 16:25:42.980 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:43 compute-0 sshd-session[360882]: Invalid user bitnami from 209.38.206.249 port 55756
Jan 26 16:25:43 compute-0 nova_compute[239965]: 2026-01-26 16:25:43.785 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:43 compute-0 sshd-session[360882]: Connection closed by invalid user bitnami 209.38.206.249 port 55756 [preauth]
Jan 26 16:25:43 compute-0 ceph-mon[75140]: pgmap v2250: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.4 MiB/s wr, 131 op/s
Jan 26 16:25:43 compute-0 sshd-session[360884]: Invalid user hduser from 209.38.206.249 port 55764
Jan 26 16:25:44 compute-0 sshd-session[360884]: Connection closed by invalid user hduser 209.38.206.249 port 55764 [preauth]
Jan 26 16:25:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:44 compute-0 NetworkManager[48954]: <info>  [1769444744.3534] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/597)
Jan 26 16:25:44 compute-0 nova_compute[239965]: 2026-01-26 16:25:44.352 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:44 compute-0 NetworkManager[48954]: <info>  [1769444744.3547] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/598)
Jan 26 16:25:44 compute-0 nova_compute[239965]: 2026-01-26 16:25:44.391 239969 DEBUG nova.compute.manager [req-04d7ed98-cd4a-41c5-ae0f-3aa018d3a49f req-1e128aff-daec-493d-a0c4-d8e5a6524f29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:44 compute-0 nova_compute[239965]: 2026-01-26 16:25:44.391 239969 DEBUG oslo_concurrency.lockutils [req-04d7ed98-cd4a-41c5-ae0f-3aa018d3a49f req-1e128aff-daec-493d-a0c4-d8e5a6524f29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:44 compute-0 nova_compute[239965]: 2026-01-26 16:25:44.392 239969 DEBUG oslo_concurrency.lockutils [req-04d7ed98-cd4a-41c5-ae0f-3aa018d3a49f req-1e128aff-daec-493d-a0c4-d8e5a6524f29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:44 compute-0 nova_compute[239965]: 2026-01-26 16:25:44.392 239969 DEBUG oslo_concurrency.lockutils [req-04d7ed98-cd4a-41c5-ae0f-3aa018d3a49f req-1e128aff-daec-493d-a0c4-d8e5a6524f29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:44 compute-0 nova_compute[239965]: 2026-01-26 16:25:44.392 239969 DEBUG nova.compute.manager [req-04d7ed98-cd4a-41c5-ae0f-3aa018d3a49f req-1e128aff-daec-493d-a0c4-d8e5a6524f29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] No waiting events found dispatching network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:44 compute-0 nova_compute[239965]: 2026-01-26 16:25:44.392 239969 WARNING nova.compute.manager [req-04d7ed98-cd4a-41c5-ae0f-3aa018d3a49f req-1e128aff-daec-493d-a0c4-d8e5a6524f29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received unexpected event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 for instance with vm_state active and task_state None.
Jan 26 16:25:44 compute-0 sshd-session[360886]: Invalid user user from 209.38.206.249 port 55774
Jan 26 16:25:44 compute-0 ovn_controller[146046]: 2026-01-26T16:25:44Z|01469|binding|INFO|Releasing lport 1de7f01b-b4a3-4a1c-9255-2968d2bb9781 from this chassis (sb_readonly=0)
Jan 26 16:25:44 compute-0 ovn_controller[146046]: 2026-01-26T16:25:44Z|01470|binding|INFO|Releasing lport 54fc2172-ac7e-43ed-a29f-8766884bed13 from this chassis (sb_readonly=0)
Jan 26 16:25:44 compute-0 ovn_controller[146046]: 2026-01-26T16:25:44Z|01471|binding|INFO|Releasing lport 1de7f01b-b4a3-4a1c-9255-2968d2bb9781 from this chassis (sb_readonly=0)
Jan 26 16:25:44 compute-0 ovn_controller[146046]: 2026-01-26T16:25:44Z|01472|binding|INFO|Releasing lport 54fc2172-ac7e-43ed-a29f-8766884bed13 from this chassis (sb_readonly=0)
Jan 26 16:25:44 compute-0 nova_compute[239965]: 2026-01-26 16:25:44.554 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:44 compute-0 sshd-session[360886]: Connection closed by invalid user user 209.38.206.249 port 55774 [preauth]
Jan 26 16:25:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 26 16:25:44 compute-0 ceph-mon[75140]: pgmap v2251: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Jan 26 16:25:45 compute-0 nova_compute[239965]: 2026-01-26 16:25:45.274 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "fed642a5-01b1-488e-aae9-21971c326ddb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:45 compute-0 nova_compute[239965]: 2026-01-26 16:25:45.274 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:45 compute-0 nova_compute[239965]: 2026-01-26 16:25:45.299 239969 DEBUG nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:25:45 compute-0 nova_compute[239965]: 2026-01-26 16:25:45.412 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:45 compute-0 nova_compute[239965]: 2026-01-26 16:25:45.413 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:45 compute-0 nova_compute[239965]: 2026-01-26 16:25:45.424 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:25:45 compute-0 nova_compute[239965]: 2026-01-26 16:25:45.425 239969 INFO nova.compute.claims [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:25:45 compute-0 nova_compute[239965]: 2026-01-26 16:25:45.561 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:25:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/881125728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.100 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.107 239969 DEBUG nova.compute.provider_tree [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.207 239969 DEBUG nova.scheduler.client.report [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.552 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.554 239969 DEBUG nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.587 239969 DEBUG nova.compute.manager [req-5f1a8409-cbcc-42e5-ae57-b3e2fabaed4f req-3bb915ab-2d75-4937-8fca-f16c8531d801 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Received event network-changed-7ad6a8b1-2176-4542-ab37-6ece8654876d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.588 239969 DEBUG nova.compute.manager [req-5f1a8409-cbcc-42e5-ae57-b3e2fabaed4f req-3bb915ab-2d75-4937-8fca-f16c8531d801 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Refreshing instance network info cache due to event network-changed-7ad6a8b1-2176-4542-ab37-6ece8654876d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.589 239969 DEBUG oslo_concurrency.lockutils [req-5f1a8409-cbcc-42e5-ae57-b3e2fabaed4f req-3bb915ab-2d75-4937-8fca-f16c8531d801 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.589 239969 DEBUG oslo_concurrency.lockutils [req-5f1a8409-cbcc-42e5-ae57-b3e2fabaed4f req-3bb915ab-2d75-4937-8fca-f16c8531d801 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.590 239969 DEBUG nova.network.neutron [req-5f1a8409-cbcc-42e5-ae57-b3e2fabaed4f req-3bb915ab-2d75-4937-8fca-f16c8531d801 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Refreshing network info cache for port 7ad6a8b1-2176-4542-ab37-6ece8654876d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:25:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/881125728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.621 239969 DEBUG nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.622 239969 DEBUG nova.network.neutron [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:25:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.650 239969 INFO nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.670 239969 DEBUG nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.782 239969 DEBUG nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.784 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.785 239969 INFO nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Creating image(s)
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.826 239969 DEBUG nova.storage.rbd_utils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image fed642a5-01b1-488e-aae9-21971c326ddb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.862 239969 DEBUG nova.storage.rbd_utils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image fed642a5-01b1-488e-aae9-21971c326ddb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.884 239969 DEBUG nova.storage.rbd_utils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image fed642a5-01b1-488e-aae9-21971c326ddb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.887 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.923 239969 DEBUG nova.policy [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.952 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.953 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.954 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.954 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.982 239969 DEBUG nova.storage.rbd_utils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image fed642a5-01b1-488e-aae9-21971c326ddb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:46 compute-0 nova_compute[239965]: 2026-01-26 16:25:46.986 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 fed642a5-01b1-488e-aae9-21971c326ddb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:47.778 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:47 compute-0 ceph-mon[75140]: pgmap v2252: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Jan 26 16:25:47 compute-0 nova_compute[239965]: 2026-01-26 16:25:47.986 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:48 compute-0 nova_compute[239965]: 2026-01-26 16:25:48.267 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 fed642a5-01b1-488e-aae9-21971c326ddb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:48 compute-0 nova_compute[239965]: 2026-01-26 16:25:48.337 239969 DEBUG nova.storage.rbd_utils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image fed642a5-01b1-488e-aae9-21971c326ddb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:25:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 159 op/s
Jan 26 16:25:48 compute-0 nova_compute[239965]: 2026-01-26 16:25:48.657 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:25:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3933597845' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:25:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:25:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3933597845' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:25:48 compute-0 nova_compute[239965]: 2026-01-26 16:25:48.800 239969 DEBUG nova.objects.instance [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid fed642a5-01b1-488e-aae9-21971c326ddb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:48 compute-0 nova_compute[239965]: 2026-01-26 16:25:48.823 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:25:48 compute-0 nova_compute[239965]: 2026-01-26 16:25:48.824 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Ensure instance console log exists: /var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:25:48 compute-0 nova_compute[239965]: 2026-01-26 16:25:48.824 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:48 compute-0 nova_compute[239965]: 2026-01-26 16:25:48.825 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:48 compute-0 nova_compute[239965]: 2026-01-26 16:25:48.825 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:49 compute-0 nova_compute[239965]: 2026-01-26 16:25:49.005 239969 DEBUG nova.compute.manager [req-e9f2b7ad-034b-4de9-bd00-039a5efc6716 req-7527b90a-127b-4ccb-9f54-64d3c3ac3740 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-changed-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:49 compute-0 nova_compute[239965]: 2026-01-26 16:25:49.006 239969 DEBUG nova.compute.manager [req-e9f2b7ad-034b-4de9-bd00-039a5efc6716 req-7527b90a-127b-4ccb-9f54-64d3c3ac3740 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Refreshing instance network info cache due to event network-changed-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:25:49 compute-0 nova_compute[239965]: 2026-01-26 16:25:49.006 239969 DEBUG oslo_concurrency.lockutils [req-e9f2b7ad-034b-4de9-bd00-039a5efc6716 req-7527b90a-127b-4ccb-9f54-64d3c3ac3740 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:49 compute-0 nova_compute[239965]: 2026-01-26 16:25:49.006 239969 DEBUG oslo_concurrency.lockutils [req-e9f2b7ad-034b-4de9-bd00-039a5efc6716 req-7527b90a-127b-4ccb-9f54-64d3c3ac3740 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:49 compute-0 nova_compute[239965]: 2026-01-26 16:25:49.007 239969 DEBUG nova.network.neutron [req-e9f2b7ad-034b-4de9-bd00-039a5efc6716 req-7527b90a-127b-4ccb-9f54-64d3c3ac3740 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Refreshing network info cache for port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007312110278137092 of space, bias 1.0, pg target 0.21936330834411275 quantized to 32 (current 32)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010182266163315213 of space, bias 1.0, pg target 0.30546798489945637 quantized to 32 (current 32)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.017333968477323e-07 of space, bias 4.0, pg target 0.0008420800762172787 quantized to 16 (current 16)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:25:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:49 compute-0 ceph-mon[75140]: pgmap v2253: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.4 MiB/s wr, 159 op/s
Jan 26 16:25:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3933597845' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:25:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3933597845' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:25:49 compute-0 nova_compute[239965]: 2026-01-26 16:25:49.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:49 compute-0 sudo[361077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:25:49 compute-0 sudo[361077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:49 compute-0 sudo[361077]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:49 compute-0 nova_compute[239965]: 2026-01-26 16:25:49.824 239969 DEBUG nova.network.neutron [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Successfully created port: 00461e74-34f4-4d7c-b930-09797f49ca0f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:25:49 compute-0 sudo[361102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 16:25:49 compute-0 sudo[361102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:50 compute-0 sshd-session[361120]: Invalid user admin from 209.38.206.249 port 55780
Jan 26 16:25:50 compute-0 sshd-session[361120]: Connection closed by invalid user admin 209.38.206.249 port 55780 [preauth]
Jan 26 16:25:50 compute-0 podman[361172]: 2026-01-26 16:25:50.344448437 +0000 UTC m=+0.069666658 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:25:50 compute-0 podman[361172]: 2026-01-26 16:25:50.445517762 +0000 UTC m=+0.170736003 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:25:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 147 op/s
Jan 26 16:25:50 compute-0 ceph-mon[75140]: pgmap v2254: 305 pgs: 305 active+clean; 181 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 147 op/s
Jan 26 16:25:50 compute-0 sshd-session[361192]: Invalid user kali from 209.38.206.249 port 48724
Jan 26 16:25:50 compute-0 sshd-session[361192]: Connection closed by invalid user kali 209.38.206.249 port 48724 [preauth]
Jan 26 16:25:50 compute-0 nova_compute[239965]: 2026-01-26 16:25:50.934 239969 DEBUG nova.network.neutron [req-5f1a8409-cbcc-42e5-ae57-b3e2fabaed4f req-3bb915ab-2d75-4937-8fca-f16c8531d801 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Updated VIF entry in instance network info cache for port 7ad6a8b1-2176-4542-ab37-6ece8654876d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:25:50 compute-0 nova_compute[239965]: 2026-01-26 16:25:50.936 239969 DEBUG nova.network.neutron [req-5f1a8409-cbcc-42e5-ae57-b3e2fabaed4f req-3bb915ab-2d75-4937-8fca-f16c8531d801 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Updating instance_info_cache with network_info: [{"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:50 compute-0 nova_compute[239965]: 2026-01-26 16:25:50.959 239969 DEBUG oslo_concurrency.lockutils [req-5f1a8409-cbcc-42e5-ae57-b3e2fabaed4f req-3bb915ab-2d75-4937-8fca-f16c8531d801 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:51 compute-0 sudo[361102]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:25:51 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:25:51 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:51 compute-0 sudo[361356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:25:51 compute-0 sudo[361356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:51 compute-0 sudo[361356]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:51 compute-0 sudo[361381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:25:51 compute-0 sudo[361381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:52 compute-0 sudo[361381]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:25:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:25:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:25:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:25:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:25:52 compute-0 ovn_controller[146046]: 2026-01-26T16:25:52Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f5:61:f7 10.100.0.11
Jan 26 16:25:52 compute-0 ovn_controller[146046]: 2026-01-26T16:25:52Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f5:61:f7 10.100.0.11
Jan 26 16:25:52 compute-0 nova_compute[239965]: 2026-01-26 16:25:52.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:25:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:25:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:25:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:25:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:25:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:25:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 244 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 191 op/s
Jan 26 16:25:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:25:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:25:52 compute-0 sudo[361437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:25:52 compute-0 sudo[361437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:52 compute-0 sudo[361437]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:52 compute-0 sudo[361462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:25:52 compute-0 sudo[361462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:52 compute-0 nova_compute[239965]: 2026-01-26 16:25:52.990 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:53 compute-0 podman[361501]: 2026-01-26 16:25:53.057321634 +0000 UTC m=+0.054581338 container create a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 16:25:53 compute-0 systemd[1]: Started libpod-conmon-a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a.scope.
Jan 26 16:25:53 compute-0 podman[361501]: 2026-01-26 16:25:53.032032215 +0000 UTC m=+0.029291919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:25:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:53 compute-0 podman[361501]: 2026-01-26 16:25:53.17067132 +0000 UTC m=+0.167930994 container init a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:25:53 compute-0 podman[361501]: 2026-01-26 16:25:53.178373829 +0000 UTC m=+0.175633493 container start a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:25:53 compute-0 podman[361501]: 2026-01-26 16:25:53.182676314 +0000 UTC m=+0.179935998 container attach a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:25:53 compute-0 elastic_sinoussi[361517]: 167 167
Jan 26 16:25:53 compute-0 systemd[1]: libpod-a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a.scope: Deactivated successfully.
Jan 26 16:25:53 compute-0 conmon[361517]: conmon a84daa20fbae8bd045dd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a.scope/container/memory.events
Jan 26 16:25:53 compute-0 podman[361501]: 2026-01-26 16:25:53.185918693 +0000 UTC m=+0.183178377 container died a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7275d99a6f6b145c2d65991a0bff95af95850953e5851dbf0f758d7e03404867-merged.mount: Deactivated successfully.
Jan 26 16:25:53 compute-0 podman[361501]: 2026-01-26 16:25:53.22740399 +0000 UTC m=+0.224663654 container remove a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sinoussi, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:25:53 compute-0 systemd[1]: libpod-conmon-a84daa20fbae8bd045dd864ec1d561d6589318e01e3c217a15e2299686a13f8a.scope: Deactivated successfully.
Jan 26 16:25:53 compute-0 podman[361539]: 2026-01-26 16:25:53.437061253 +0000 UTC m=+0.059024676 container create 7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:25:53 compute-0 podman[361539]: 2026-01-26 16:25:53.40666684 +0000 UTC m=+0.028630293 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:25:53 compute-0 systemd[1]: Started libpod-conmon-7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049.scope.
Jan 26 16:25:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f14a0a47dc0005f284923a251518e0a71e6e337170d107cdf25916bda81b051/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f14a0a47dc0005f284923a251518e0a71e6e337170d107cdf25916bda81b051/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f14a0a47dc0005f284923a251518e0a71e6e337170d107cdf25916bda81b051/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f14a0a47dc0005f284923a251518e0a71e6e337170d107cdf25916bda81b051/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f14a0a47dc0005f284923a251518e0a71e6e337170d107cdf25916bda81b051/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:53 compute-0 podman[361539]: 2026-01-26 16:25:53.644958805 +0000 UTC m=+0.266922268 container init 7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:25:53 compute-0 podman[361539]: 2026-01-26 16:25:53.653559176 +0000 UTC m=+0.275522619 container start 7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:25:53 compute-0 nova_compute[239965]: 2026-01-26 16:25:53.659 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:25:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:25:53 compute-0 ceph-mon[75140]: pgmap v2255: 305 pgs: 305 active+clean; 244 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 3.5 MiB/s wr, 191 op/s
Jan 26 16:25:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:25:53 compute-0 podman[361539]: 2026-01-26 16:25:53.706268908 +0000 UTC m=+0.328232361 container attach 7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:25:53 compute-0 nova_compute[239965]: 2026-01-26 16:25:53.894 239969 DEBUG nova.network.neutron [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Successfully updated port: 00461e74-34f4-4d7c-b930-09797f49ca0f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:25:53 compute-0 nova_compute[239965]: 2026-01-26 16:25:53.920 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:53 compute-0 nova_compute[239965]: 2026-01-26 16:25:53.920 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:53 compute-0 nova_compute[239965]: 2026-01-26 16:25:53.921 239969 DEBUG nova.network.neutron [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.018 239969 DEBUG nova.network.neutron [req-e9f2b7ad-034b-4de9-bd00-039a5efc6716 req-7527b90a-127b-4ccb-9f54-64d3c3ac3740 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updated VIF entry in instance network info cache for port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.019 239969 DEBUG nova.network.neutron [req-e9f2b7ad-034b-4de9-bd00-039a5efc6716 req-7527b90a-127b-4ccb-9f54-64d3c3ac3740 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updating instance_info_cache with network_info: [{"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.038 239969 DEBUG oslo_concurrency.lockutils [req-e9f2b7ad-034b-4de9-bd00-039a5efc6716 req-7527b90a-127b-4ccb-9f54-64d3c3ac3740 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.048 239969 DEBUG nova.compute.manager [req-f355fc33-fc00-49e6-b07c-4124935a420b req-6014e58f-bf5a-43ca-b031-4de4291d4d9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received event network-changed-00461e74-34f4-4d7c-b930-09797f49ca0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.048 239969 DEBUG nova.compute.manager [req-f355fc33-fc00-49e6-b07c-4124935a420b req-6014e58f-bf5a-43ca-b031-4de4291d4d9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Refreshing instance network info cache due to event network-changed-00461e74-34f4-4d7c-b930-09797f49ca0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.048 239969 DEBUG oslo_concurrency.lockutils [req-f355fc33-fc00-49e6-b07c-4124935a420b req-6014e58f-bf5a-43ca-b031-4de4291d4d9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.096 239969 DEBUG nova.network.neutron [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:25:54 compute-0 dazzling_cerf[361557]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:25:54 compute-0 dazzling_cerf[361557]: --> All data devices are unavailable
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.151 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:54 compute-0 systemd[1]: libpod-7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049.scope: Deactivated successfully.
Jan 26 16:25:54 compute-0 conmon[361557]: conmon 7d1fd83c079326d34faa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049.scope/container/memory.events
Jan 26 16:25:54 compute-0 podman[361539]: 2026-01-26 16:25:54.180686996 +0000 UTC m=+0.802650409 container died 7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f14a0a47dc0005f284923a251518e0a71e6e337170d107cdf25916bda81b051-merged.mount: Deactivated successfully.
Jan 26 16:25:54 compute-0 podman[361539]: 2026-01-26 16:25:54.231377417 +0000 UTC m=+0.853340830 container remove 7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_cerf, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 16:25:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:54 compute-0 systemd[1]: libpod-conmon-7d1fd83c079326d34faa5f3959f87c7c162db8a5afdddf67ef5d4f599b44e049.scope: Deactivated successfully.
Jan 26 16:25:54 compute-0 sudo[361462]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:54 compute-0 sudo[361588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:25:54 compute-0 sudo[361588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:54 compute-0 sudo[361588]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:54 compute-0 ovn_controller[146046]: 2026-01-26T16:25:54Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:29:6f 10.100.0.3
Jan 26 16:25:54 compute-0 ovn_controller[146046]: 2026-01-26T16:25:54Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:29:6f 10.100.0.3
Jan 26 16:25:54 compute-0 sudo[361613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:25:54 compute-0 sudo[361613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.543 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 26 16:25:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 244 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 108 op/s
Jan 26 16:25:54 compute-0 podman[361650]: 2026-01-26 16:25:54.696041237 +0000 UTC m=+0.046350747 container create 6d99692e82cf03d118969a4eff62755cde637b42ccdfdf1873ada2789e16d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_lalande, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 16:25:54 compute-0 ceph-mon[75140]: pgmap v2256: 305 pgs: 305 active+clean; 244 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.5 MiB/s wr, 108 op/s
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.726 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.727 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.727 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:25:54 compute-0 nova_compute[239965]: 2026-01-26 16:25:54.727 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b1edd3fd-2563-465f-94c8-577ab4debb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:54 compute-0 systemd[1]: Started libpod-conmon-6d99692e82cf03d118969a4eff62755cde637b42ccdfdf1873ada2789e16d846.scope.
Jan 26 16:25:54 compute-0 podman[361650]: 2026-01-26 16:25:54.67701605 +0000 UTC m=+0.027325660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:25:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:54 compute-0 podman[361650]: 2026-01-26 16:25:54.792806506 +0000 UTC m=+0.143116106 container init 6d99692e82cf03d118969a4eff62755cde637b42ccdfdf1873ada2789e16d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_lalande, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:25:54 compute-0 podman[361650]: 2026-01-26 16:25:54.800357661 +0000 UTC m=+0.150667211 container start 6d99692e82cf03d118969a4eff62755cde637b42ccdfdf1873ada2789e16d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:25:54 compute-0 heuristic_lalande[361668]: 167 167
Jan 26 16:25:54 compute-0 podman[361650]: 2026-01-26 16:25:54.804493442 +0000 UTC m=+0.154802972 container attach 6d99692e82cf03d118969a4eff62755cde637b42ccdfdf1873ada2789e16d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:25:54 compute-0 systemd[1]: libpod-6d99692e82cf03d118969a4eff62755cde637b42ccdfdf1873ada2789e16d846.scope: Deactivated successfully.
Jan 26 16:25:54 compute-0 podman[361650]: 2026-01-26 16:25:54.807721991 +0000 UTC m=+0.158031511 container died 6d99692e82cf03d118969a4eff62755cde637b42ccdfdf1873ada2789e16d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_lalande, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7381b40c695bd7cd50bd2300c71eee89ddf03c898d917c2417c62dccf318725-merged.mount: Deactivated successfully.
Jan 26 16:25:54 compute-0 podman[361650]: 2026-01-26 16:25:54.852378655 +0000 UTC m=+0.202688165 container remove 6d99692e82cf03d118969a4eff62755cde637b42ccdfdf1873ada2789e16d846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:25:54 compute-0 systemd[1]: libpod-conmon-6d99692e82cf03d118969a4eff62755cde637b42ccdfdf1873ada2789e16d846.scope: Deactivated successfully.
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.018 239969 DEBUG nova.network.neutron [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Updating instance_info_cache with network_info: [{"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:55 compute-0 podman[361691]: 2026-01-26 16:25:55.023402473 +0000 UTC m=+0.041085637 container create a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_engelbart, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.036 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.037 239969 DEBUG nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Instance network_info: |[{"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.037 239969 DEBUG oslo_concurrency.lockutils [req-f355fc33-fc00-49e6-b07c-4124935a420b req-6014e58f-bf5a-43ca-b031-4de4291d4d9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.037 239969 DEBUG nova.network.neutron [req-f355fc33-fc00-49e6-b07c-4124935a420b req-6014e58f-bf5a-43ca-b031-4de4291d4d9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Refreshing network info cache for port 00461e74-34f4-4d7c-b930-09797f49ca0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.040 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Start _get_guest_xml network_info=[{"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.046 239969 WARNING nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.052 239969 DEBUG nova.virt.libvirt.host [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.053 239969 DEBUG nova.virt.libvirt.host [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.060 239969 DEBUG nova.virt.libvirt.host [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.061 239969 DEBUG nova.virt.libvirt.host [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.061 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.061 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.062 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.062 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.062 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.062 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.062 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.063 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.063 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.063 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.063 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.063 239969 DEBUG nova.virt.hardware [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:25:55 compute-0 systemd[1]: Started libpod-conmon-a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f.scope.
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.066 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:55 compute-0 podman[361691]: 2026-01-26 16:25:55.005419863 +0000 UTC m=+0.023103057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:25:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96a06f741bcd316de0f25eac2eeda1202354be71fe8de801c18723e75003efd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96a06f741bcd316de0f25eac2eeda1202354be71fe8de801c18723e75003efd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96a06f741bcd316de0f25eac2eeda1202354be71fe8de801c18723e75003efd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c96a06f741bcd316de0f25eac2eeda1202354be71fe8de801c18723e75003efd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:55 compute-0 podman[361691]: 2026-01-26 16:25:55.122313745 +0000 UTC m=+0.139996939 container init a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_engelbart, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 16:25:55 compute-0 podman[361691]: 2026-01-26 16:25:55.12823214 +0000 UTC m=+0.145915334 container start a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_engelbart, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:25:55 compute-0 podman[361691]: 2026-01-26 16:25:55.132020133 +0000 UTC m=+0.149703327 container attach a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_engelbart, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]: {
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:     "0": [
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:         {
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "devices": [
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "/dev/loop3"
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             ],
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_name": "ceph_lv0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_size": "21470642176",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "name": "ceph_lv0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "tags": {
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.cluster_name": "ceph",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.crush_device_class": "",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.encrypted": "0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.objectstore": "bluestore",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.osd_id": "0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.type": "block",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.vdo": "0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.with_tpm": "0"
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             },
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "type": "block",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "vg_name": "ceph_vg0"
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:         }
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:     ],
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:     "1": [
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:         {
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "devices": [
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "/dev/loop4"
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             ],
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_name": "ceph_lv1",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_size": "21470642176",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "name": "ceph_lv1",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "tags": {
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.cluster_name": "ceph",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.crush_device_class": "",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.encrypted": "0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.objectstore": "bluestore",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.osd_id": "1",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.type": "block",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.vdo": "0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.with_tpm": "0"
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             },
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "type": "block",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "vg_name": "ceph_vg1"
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:         }
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:     ],
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:     "2": [
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:         {
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "devices": [
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "/dev/loop5"
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             ],
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_name": "ceph_lv2",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_size": "21470642176",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "name": "ceph_lv2",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "tags": {
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.cluster_name": "ceph",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.crush_device_class": "",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.encrypted": "0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.objectstore": "bluestore",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.osd_id": "2",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.type": "block",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.vdo": "0",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:                 "ceph.with_tpm": "0"
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             },
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "type": "block",
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:             "vg_name": "ceph_vg2"
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:         }
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]:     ]
Jan 26 16:25:55 compute-0 pensive_engelbart[361707]: }
Jan 26 16:25:55 compute-0 systemd[1]: libpod-a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f.scope: Deactivated successfully.
Jan 26 16:25:55 compute-0 podman[361691]: 2026-01-26 16:25:55.486240828 +0000 UTC m=+0.503924002 container died a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:25:55 compute-0 conmon[361707]: conmon a0595369bb013296c47b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f.scope/container/memory.events
Jan 26 16:25:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c96a06f741bcd316de0f25eac2eeda1202354be71fe8de801c18723e75003efd-merged.mount: Deactivated successfully.
Jan 26 16:25:55 compute-0 podman[361691]: 2026-01-26 16:25:55.530617405 +0000 UTC m=+0.548300569 container remove a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:25:55 compute-0 systemd[1]: libpod-conmon-a0595369bb013296c47b8ddba16fd2f2222f400889da73147b8853c20bd3671f.scope: Deactivated successfully.
Jan 26 16:25:55 compute-0 sudo[361613]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:55 compute-0 sudo[361748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:25:55 compute-0 sudo[361748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:55 compute-0 sudo[361748]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:55 compute-0 sudo[361773]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:25:55 compute-0 sudo[361773]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:25:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1782911206' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.756 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.690s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.776 239969 DEBUG nova.storage.rbd_utils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image fed642a5-01b1-488e-aae9-21971c326ddb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:55 compute-0 nova_compute[239965]: 2026-01-26 16:25:55.780 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1782911206' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:55 compute-0 podman[361849]: 2026-01-26 16:25:55.993755136 +0000 UTC m=+0.042306886 container create 320f4c0bd5d151b6f782df82b623ff53b6c3fc102f15043b5f27586475c51832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:25:56 compute-0 systemd[1]: Started libpod-conmon-320f4c0bd5d151b6f782df82b623ff53b6c3fc102f15043b5f27586475c51832.scope.
Jan 26 16:25:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:56 compute-0 podman[361849]: 2026-01-26 16:25:55.972425594 +0000 UTC m=+0.020977374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:25:56 compute-0 podman[361849]: 2026-01-26 16:25:56.071701856 +0000 UTC m=+0.120253626 container init 320f4c0bd5d151b6f782df82b623ff53b6c3fc102f15043b5f27586475c51832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:25:56 compute-0 podman[361849]: 2026-01-26 16:25:56.078750948 +0000 UTC m=+0.127302718 container start 320f4c0bd5d151b6f782df82b623ff53b6c3fc102f15043b5f27586475c51832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_northcutt, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:25:56 compute-0 podman[361849]: 2026-01-26 16:25:56.082260655 +0000 UTC m=+0.130812455 container attach 320f4c0bd5d151b6f782df82b623ff53b6c3fc102f15043b5f27586475c51832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_northcutt, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:25:56 compute-0 romantic_northcutt[361865]: 167 167
Jan 26 16:25:56 compute-0 systemd[1]: libpod-320f4c0bd5d151b6f782df82b623ff53b6c3fc102f15043b5f27586475c51832.scope: Deactivated successfully.
Jan 26 16:25:56 compute-0 podman[361849]: 2026-01-26 16:25:56.083937286 +0000 UTC m=+0.132489046 container died 320f4c0bd5d151b6f782df82b623ff53b6c3fc102f15043b5f27586475c51832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:25:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-69bf17bfe3ffd9f89e5e637b220fba83bc88ca34211a12abe7f4ca276443555c-merged.mount: Deactivated successfully.
Jan 26 16:25:56 compute-0 podman[361849]: 2026-01-26 16:25:56.136058092 +0000 UTC m=+0.184609882 container remove 320f4c0bd5d151b6f782df82b623ff53b6c3fc102f15043b5f27586475c51832 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_northcutt, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:25:56 compute-0 systemd[1]: libpod-conmon-320f4c0bd5d151b6f782df82b623ff53b6c3fc102f15043b5f27586475c51832.scope: Deactivated successfully.
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.197 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updating instance_info_cache with network_info: [{"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.217 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.218 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.261 239969 DEBUG nova.network.neutron [req-f355fc33-fc00-49e6-b07c-4124935a420b req-6014e58f-bf5a-43ca-b031-4de4291d4d9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Updated VIF entry in instance network info cache for port 00461e74-34f4-4d7c-b930-09797f49ca0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.262 239969 DEBUG nova.network.neutron [req-f355fc33-fc00-49e6-b07c-4124935a420b req-6014e58f-bf5a-43ca-b031-4de4291d4d9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Updating instance_info_cache with network_info: [{"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.281 239969 DEBUG oslo_concurrency.lockutils [req-f355fc33-fc00-49e6-b07c-4124935a420b req-6014e58f-bf5a-43ca-b031-4de4291d4d9c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:25:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:25:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786816273' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.336 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.337 239969 DEBUG nova.virt.libvirt.vif [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-178216006',display_name='tempest-TestNetworkBasicOps-server-178216006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-178216006',id=139,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMhpUrDh36ssJ9Vym1gLGlLpgBVIIPj31hq9E58pCGxcFwcj3xnqplMiVn3+yrkZXgSAbZbCu9aez902uqN22e9Qra/AwSEnlDS3EhE8XdA110ow16DQGjzd2sXUocLgQ==',key_name='tempest-TestNetworkBasicOps-706367898',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-7dzx52a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:25:46Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=fed642a5-01b1-488e-aae9-21971c326ddb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.338 239969 DEBUG nova.network.os_vif_util [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.339 239969 DEBUG nova.network.os_vif_util [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c2:02,bridge_name='br-int',has_traffic_filtering=True,id=00461e74-34f4-4d7c-b930-09797f49ca0f,network=Network(843fc191-89a3-4e63-b63d-61d012b45b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00461e74-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.340 239969 DEBUG nova.objects.instance [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid fed642a5-01b1-488e-aae9-21971c326ddb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.359 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <uuid>fed642a5-01b1-488e-aae9-21971c326ddb</uuid>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <name>instance-0000008b</name>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-178216006</nova:name>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:25:55</nova:creationTime>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <nova:port uuid="00461e74-34f4-4d7c-b930-09797f49ca0f">
Jan 26 16:25:56 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <system>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <entry name="serial">fed642a5-01b1-488e-aae9-21971c326ddb</entry>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <entry name="uuid">fed642a5-01b1-488e-aae9-21971c326ddb</entry>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     </system>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <os>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   </os>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <features>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   </features>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/fed642a5-01b1-488e-aae9-21971c326ddb_disk">
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       </source>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/fed642a5-01b1-488e-aae9-21971c326ddb_disk.config">
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       </source>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:25:56 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:bf:c2:02"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <target dev="tap00461e74-34"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb/console.log" append="off"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <video>
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     </video>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:25:56 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:25:56 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:25:56 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:25:56 compute-0 nova_compute[239965]: </domain>
Jan 26 16:25:56 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.360 239969 DEBUG nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Preparing to wait for external event network-vif-plugged-00461e74-34f4-4d7c-b930-09797f49ca0f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.360 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.361 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.361 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.362 239969 DEBUG nova.virt.libvirt.vif [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-178216006',display_name='tempest-TestNetworkBasicOps-server-178216006',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-178216006',id=139,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMhpUrDh36ssJ9Vym1gLGlLpgBVIIPj31hq9E58pCGxcFwcj3xnqplMiVn3+yrkZXgSAbZbCu9aez902uqN22e9Qra/AwSEnlDS3EhE8XdA110ow16DQGjzd2sXUocLgQ==',key_name='tempest-TestNetworkBasicOps-706367898',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-7dzx52a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:25:46Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=fed642a5-01b1-488e-aae9-21971c326ddb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.362 239969 DEBUG nova.network.os_vif_util [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.363 239969 DEBUG nova.network.os_vif_util [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c2:02,bridge_name='br-int',has_traffic_filtering=True,id=00461e74-34f4-4d7c-b930-09797f49ca0f,network=Network(843fc191-89a3-4e63-b63d-61d012b45b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00461e74-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.363 239969 DEBUG os_vif [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c2:02,bridge_name='br-int',has_traffic_filtering=True,id=00461e74-34f4-4d7c-b930-09797f49ca0f,network=Network(843fc191-89a3-4e63-b63d-61d012b45b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00461e74-34') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.364 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.364 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.365 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.368 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.368 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap00461e74-34, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.369 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap00461e74-34, col_values=(('external_ids', {'iface-id': '00461e74-34f4-4d7c-b930-09797f49ca0f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bf:c2:02', 'vm-uuid': 'fed642a5-01b1-488e-aae9-21971c326ddb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:56 compute-0 NetworkManager[48954]: <info>  [1769444756.3718] manager: (tap00461e74-34): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.374 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.377 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.378 239969 INFO os_vif [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bf:c2:02,bridge_name='br-int',has_traffic_filtering=True,id=00461e74-34f4-4d7c-b930-09797f49ca0f,network=Network(843fc191-89a3-4e63-b63d-61d012b45b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00461e74-34')
Jan 26 16:25:56 compute-0 podman[361890]: 2026-01-26 16:25:56.349801127 +0000 UTC m=+0.022950434 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:56 compute-0 podman[361890]: 2026-01-26 16:25:56.514210632 +0000 UTC m=+0.187359959 container create 50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_boyd, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:25:56 compute-0 systemd[1]: Started libpod-conmon-50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb.scope.
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.591 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.592 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.593 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:bf:c2:02, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.594 239969 INFO nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Using config drive
Jan 26 16:25:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c3ac0c036a82bfec558552a46200847d6946d5e1466c98c8334fa3da31be85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c3ac0c036a82bfec558552a46200847d6946d5e1466c98c8334fa3da31be85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c3ac0c036a82bfec558552a46200847d6946d5e1466c98c8334fa3da31be85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29c3ac0c036a82bfec558552a46200847d6946d5e1466c98c8334fa3da31be85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.631 239969 DEBUG nova.storage.rbd_utils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image fed642a5-01b1-488e-aae9-21971c326ddb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:56 compute-0 podman[361890]: 2026-01-26 16:25:56.636538799 +0000 UTC m=+0.309688136 container init 50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_boyd, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 16:25:56 compute-0 podman[361890]: 2026-01-26 16:25:56.64680048 +0000 UTC m=+0.319949807 container start 50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:25:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 262 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 150 op/s
Jan 26 16:25:56 compute-0 podman[361890]: 2026-01-26 16:25:56.651376492 +0000 UTC m=+0.324525789 container attach 50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 16:25:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1786816273' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:25:56 compute-0 ceph-mon[75140]: pgmap v2257: 305 pgs: 305 active+clean; 262 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 150 op/s
Jan 26 16:25:56 compute-0 nova_compute[239965]: 2026-01-26 16:25:56.994 239969 INFO nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Creating config drive at /var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb/disk.config
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.000 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqgv4jvnh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.145 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqgv4jvnh" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.173 239969 DEBUG nova.storage.rbd_utils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image fed642a5-01b1-488e-aae9-21971c326ddb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.183 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb/disk.config fed642a5-01b1-488e-aae9-21971c326ddb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.313 239969 DEBUG oslo_concurrency.processutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb/disk.config fed642a5-01b1-488e-aae9-21971c326ddb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.314 239969 INFO nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Deleting local config drive /var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb/disk.config because it was imported into RBD.
Jan 26 16:25:57 compute-0 kernel: tap00461e74-34: entered promiscuous mode
Jan 26 16:25:57 compute-0 NetworkManager[48954]: <info>  [1769444757.3609] manager: (tap00461e74-34): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Jan 26 16:25:57 compute-0 ovn_controller[146046]: 2026-01-26T16:25:57Z|01473|binding|INFO|Claiming lport 00461e74-34f4-4d7c-b930-09797f49ca0f for this chassis.
Jan 26 16:25:57 compute-0 ovn_controller[146046]: 2026-01-26T16:25:57Z|01474|binding|INFO|00461e74-34f4-4d7c-b930-09797f49ca0f: Claiming fa:16:3e:bf:c2:02 10.100.0.7
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.363 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:57 compute-0 systemd-udevd[362056]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.371 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:c2:02 10.100.0.7'], port_security=['fa:16:3e:bf:c2:02 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fed642a5-01b1-488e-aae9-21971c326ddb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-843fc191-89a3-4e63-b63d-61d012b45b53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ea11fac4-1f7d-4dda-9bbd-869a7a1325d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9843283c-0b92-425e-9399-e711bb75e634, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=00461e74-34f4-4d7c-b930-09797f49ca0f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.372 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 00461e74-34f4-4d7c-b930-09797f49ca0f in datapath 843fc191-89a3-4e63-b63d-61d012b45b53 bound to our chassis
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.374 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 843fc191-89a3-4e63-b63d-61d012b45b53
Jan 26 16:25:57 compute-0 lvm[362057]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:25:57 compute-0 lvm[362057]: VG ceph_vg1 finished
Jan 26 16:25:57 compute-0 lvm[362055]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:25:57 compute-0 lvm[362055]: VG ceph_vg0 finished
Jan 26 16:25:57 compute-0 ovn_controller[146046]: 2026-01-26T16:25:57Z|01475|binding|INFO|Setting lport 00461e74-34f4-4d7c-b930-09797f49ca0f ovn-installed in OVS
Jan 26 16:25:57 compute-0 ovn_controller[146046]: 2026-01-26T16:25:57Z|01476|binding|INFO|Setting lport 00461e74-34f4-4d7c-b930-09797f49ca0f up in Southbound
Jan 26 16:25:57 compute-0 NetworkManager[48954]: <info>  [1769444757.3858] device (tap00461e74-34): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:25:57 compute-0 NetworkManager[48954]: <info>  [1769444757.3870] device (tap00461e74-34): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.382 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.388 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b78028a7-a1cc-4ddd-a669-58de60c2685d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.388 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap843fc191-81 in ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.390 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap843fc191-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.390 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bbdf2f5a-7875-4a90-ba89-298a6068d2c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.391 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[382ef800-8689-43b1-bb2b-99df0e2ae2fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 lvm[362061]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:25:57 compute-0 lvm[362061]: VG ceph_vg2 finished
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.401 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[a00f6111-e43d-46dc-96fb-9004170e4df8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 systemd-machined[208061]: New machine qemu-168-instance-0000008b.
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.416 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e50869af-4a45-4029-aed4-ec2ad18edd66]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 systemd[1]: Started Virtual Machine qemu-168-instance-0000008b.
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.449 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[601ebc71-f9ee-4159-bff1-431118e9a62b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.455 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7ce03a35-c490-4f9c-9b71-1561c11b2352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 NetworkManager[48954]: <info>  [1769444757.4580] manager: (tap843fc191-80): new Veth device (/org/freedesktop/NetworkManager/Devices/601)
Jan 26 16:25:57 compute-0 adoring_boyd[361911]: {}
Jan 26 16:25:57 compute-0 systemd[1]: libpod-50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb.scope: Deactivated successfully.
Jan 26 16:25:57 compute-0 systemd[1]: libpod-50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb.scope: Consumed 1.416s CPU time.
Jan 26 16:25:57 compute-0 conmon[361911]: conmon 50b6062362f3da844e3d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb.scope/container/memory.events
Jan 26 16:25:57 compute-0 podman[361890]: 2026-01-26 16:25:57.489461407 +0000 UTC m=+1.162610694 container died 50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_boyd, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.493 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fa027e68-4696-45df-a329-8ab7fac95990]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.496 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2183119c-f1dd-47fd-a0c4-a0adf3886439]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-29c3ac0c036a82bfec558552a46200847d6946d5e1466c98c8334fa3da31be85-merged.mount: Deactivated successfully.
Jan 26 16:25:57 compute-0 NetworkManager[48954]: <info>  [1769444757.5281] device (tap843fc191-80): carrier: link connected
Jan 26 16:25:57 compute-0 podman[361890]: 2026-01-26 16:25:57.533756691 +0000 UTC m=+1.206905978 container remove 50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_boyd, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.534 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8f3f64e6-eabe-405d-93d4-8e617b4ee9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.554 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7b44d0f6-e8fb-4f29-9307-e0fa86f71a6c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap843fc191-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:a6:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624833, 'reachable_time': 42572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362104, 'error': None, 'target': 'ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 systemd[1]: libpod-conmon-50b6062362f3da844e3d2f32c6d9fa6de2e1ef47b3be86d22230701a793c32fb.scope: Deactivated successfully.
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.571 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9b91d379-9722-4b59-8ef6-9e4695c5f205]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feba:a6fe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 624833, 'tstamp': 624833}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362106, 'error': None, 'target': 'ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.584 239969 DEBUG nova.compute.manager [req-8ae363ba-ce32-49c3-bf04-cce79a49309f req-167a04f2-f705-4183-a8f0-9904b32574e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received event network-vif-plugged-00461e74-34f4-4d7c-b930-09797f49ca0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.584 239969 DEBUG oslo_concurrency.lockutils [req-8ae363ba-ce32-49c3-bf04-cce79a49309f req-167a04f2-f705-4183-a8f0-9904b32574e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.584 239969 DEBUG oslo_concurrency.lockutils [req-8ae363ba-ce32-49c3-bf04-cce79a49309f req-167a04f2-f705-4183-a8f0-9904b32574e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.585 239969 DEBUG oslo_concurrency.lockutils [req-8ae363ba-ce32-49c3-bf04-cce79a49309f req-167a04f2-f705-4183-a8f0-9904b32574e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.585 239969 DEBUG nova.compute.manager [req-8ae363ba-ce32-49c3-bf04-cce79a49309f req-167a04f2-f705-4183-a8f0-9904b32574e2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Processing event network-vif-plugged-00461e74-34f4-4d7c-b930-09797f49ca0f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:25:57 compute-0 sudo[361773]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.591 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eca043bf-77cb-43a1-a7dd-3cd6e5a7b295]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap843fc191-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ba:a6:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 421], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624833, 'reachable_time': 42572, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362107, 'error': None, 'target': 'ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:25:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:25:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.625 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d68c79b5-3231-4084-9739-0f48123a4802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 sudo[362110]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:25:57 compute-0 sudo[362110]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:25:57 compute-0 sudo[362110]: pam_unix(sudo:session): session closed for user root
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.693 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8df6de7b-59ca-4da0-9406-a0e6da5e294a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.694 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap843fc191-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.695 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.695 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap843fc191-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:57 compute-0 NetworkManager[48954]: <info>  [1769444757.6979] manager: (tap843fc191-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/602)
Jan 26 16:25:57 compute-0 kernel: tap843fc191-80: entered promiscuous mode
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.701 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.702 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap843fc191-80, col_values=(('external_ids', {'iface-id': '326ab86e-2d22-4eea-8007-6fd26759e1ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:25:57 compute-0 ovn_controller[146046]: 2026-01-26T16:25:57Z|01477|binding|INFO|Releasing lport 326ab86e-2d22-4eea-8007-6fd26759e1ac from this chassis (sb_readonly=0)
Jan 26 16:25:57 compute-0 nova_compute[239965]: 2026-01-26 16:25:57.725 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.726 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/843fc191-89a3-4e63-b63d-61d012b45b53.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/843fc191-89a3-4e63-b63d-61d012b45b53.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.727 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9f70ec8f-499e-4413-ac17-b0565bdba8cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.728 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-843fc191-89a3-4e63-b63d-61d012b45b53
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/843fc191-89a3-4e63-b63d-61d012b45b53.pid.haproxy
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 843fc191-89a3-4e63-b63d-61d012b45b53
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:25:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:57.729 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53', 'env', 'PROCESS_TAG=haproxy-843fc191-89a3-4e63-b63d-61d012b45b53', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/843fc191-89a3-4e63-b63d-61d012b45b53.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.010 239969 DEBUG nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.012 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444758.0101118, fed642a5-01b1-488e-aae9-21971c326ddb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.012 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] VM Started (Lifecycle Event)
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.016 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.020 239969 INFO nova.virt.libvirt.driver [-] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Instance spawned successfully.
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.021 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.043 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.051 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.052 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.052 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.053 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.053 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.054 239969 DEBUG nova.virt.libvirt.driver [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.058 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.087 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.088 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444758.011838, fed642a5-01b1-488e-aae9-21971c326ddb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.088 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] VM Paused (Lifecycle Event)
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.117 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.121 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444758.0150168, fed642a5-01b1-488e-aae9-21971c326ddb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.121 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] VM Resumed (Lifecycle Event)
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.133 239969 INFO nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Took 11.35 seconds to spawn the instance on the hypervisor.
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.133 239969 DEBUG nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:58 compute-0 podman[362202]: 2026-01-26 16:25:58.134142154 +0000 UTC m=+0.071566513 container create 60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.146 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.150 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:25:58 compute-0 systemd[1]: Started libpod-conmon-60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1.scope.
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.189 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:25:58 compute-0 podman[362202]: 2026-01-26 16:25:58.102393267 +0000 UTC m=+0.039817626 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:25:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:25:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f95b3693cf8f87a0352830eeb0cc8426f285c99bc9d5b0d2c048845789966a4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.222 239969 INFO nova.compute.manager [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Took 12.84 seconds to build instance.
Jan 26 16:25:58 compute-0 podman[362202]: 2026-01-26 16:25:58.223187185 +0000 UTC m=+0.160611554 container init 60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:25:58 compute-0 podman[362202]: 2026-01-26 16:25:58.228250399 +0000 UTC m=+0.165674758 container start 60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.237 239969 DEBUG oslo_concurrency.lockutils [None req-148c5ebe-f89d-4ce7-8ad4-ff86ccea94e6 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:58 compute-0 neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53[362217]: [NOTICE]   (362221) : New worker (362223) forked
Jan 26 16:25:58 compute-0 neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53[362217]: [NOTICE]   (362221) : Loading success.
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:25:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 6.0 MiB/s wr, 193 op/s
Jan 26 16:25:58 compute-0 nova_compute[239965]: 2026-01-26 16:25:58.663 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:25:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:25:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:59.249 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:59.250 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:25:59.251 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:25:59 compute-0 ceph-mon[75140]: pgmap v2258: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 6.0 MiB/s wr, 193 op/s
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.650 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "39a41999-3c56-4679-8c57-9286dc4edbb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.650 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.656 239969 DEBUG nova.compute.manager [req-1aa855ed-1c05-492c-a25c-7e6deae8468b req-1ef9ebf9-48c4-4b96-b9ba-1bf85f8e901b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received event network-vif-plugged-00461e74-34f4-4d7c-b930-09797f49ca0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.657 239969 DEBUG oslo_concurrency.lockutils [req-1aa855ed-1c05-492c-a25c-7e6deae8468b req-1ef9ebf9-48c4-4b96-b9ba-1bf85f8e901b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.657 239969 DEBUG oslo_concurrency.lockutils [req-1aa855ed-1c05-492c-a25c-7e6deae8468b req-1ef9ebf9-48c4-4b96-b9ba-1bf85f8e901b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.657 239969 DEBUG oslo_concurrency.lockutils [req-1aa855ed-1c05-492c-a25c-7e6deae8468b req-1ef9ebf9-48c4-4b96-b9ba-1bf85f8e901b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.657 239969 DEBUG nova.compute.manager [req-1aa855ed-1c05-492c-a25c-7e6deae8468b req-1ef9ebf9-48c4-4b96-b9ba-1bf85f8e901b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] No waiting events found dispatching network-vif-plugged-00461e74-34f4-4d7c-b930-09797f49ca0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.658 239969 WARNING nova.compute.manager [req-1aa855ed-1c05-492c-a25c-7e6deae8468b req-1ef9ebf9-48c4-4b96-b9ba-1bf85f8e901b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received unexpected event network-vif-plugged-00461e74-34f4-4d7c-b930-09797f49ca0f for instance with vm_state active and task_state None.
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.673 239969 DEBUG nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.753 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.753 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.759 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.759 239969 INFO nova.compute.claims [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:25:59 compute-0 nova_compute[239965]: 2026-01-26 16:25:59.961 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.245 239969 INFO nova.compute.manager [None req-c50d358b-91e9-4113-b8fa-ddacac6c88bb 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Get console output
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.253 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:26:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3488928943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.546 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.552 239969 DEBUG nova.compute.provider_tree [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.571 239969 DEBUG nova.scheduler.client.report [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.606 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.607 239969 DEBUG nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:26:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3488928943' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 6.0 MiB/s wr, 149 op/s
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.650 239969 DEBUG oslo_concurrency.lockutils [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.651 239969 DEBUG oslo_concurrency.lockutils [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.652 239969 INFO nova.compute.manager [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Rebooting instance
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.665 239969 DEBUG nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.666 239969 DEBUG nova.network.neutron [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.672 239969 DEBUG oslo_concurrency.lockutils [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.672 239969 DEBUG oslo_concurrency.lockutils [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquired lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.672 239969 DEBUG nova.network.neutron [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.688 239969 INFO nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.706 239969 DEBUG nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.794 239969 DEBUG nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.795 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.796 239969 INFO nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Creating image(s)
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.822 239969 DEBUG nova.storage.rbd_utils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 39a41999-3c56-4679-8c57-9286dc4edbb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.848 239969 DEBUG nova.storage.rbd_utils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 39a41999-3c56-4679-8c57-9286dc4edbb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.876 239969 DEBUG nova.storage.rbd_utils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 39a41999-3c56-4679-8c57-9286dc4edbb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.881 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.920 239969 DEBUG nova.policy [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '915d03d871484dc39e2d074d97f24809', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '02ec8c20b77c4ce9b1406d858bbcf14d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.955 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.956 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.956 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.957 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.978 239969 DEBUG nova.storage.rbd_utils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 39a41999-3c56-4679-8c57-9286dc4edbb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:00 compute-0 nova_compute[239965]: 2026-01-26 16:26:00.981 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 39a41999-3c56-4679-8c57-9286dc4edbb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.299 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 39a41999-3c56-4679-8c57-9286dc4edbb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.386 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.392 239969 DEBUG nova.storage.rbd_utils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] resizing rbd image 39a41999-3c56-4679-8c57-9286dc4edbb6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.481 239969 DEBUG nova.objects.instance [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lazy-loading 'migration_context' on Instance uuid 39a41999-3c56-4679-8c57-9286dc4edbb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.499 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.499 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Ensure instance console log exists: /var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.500 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.500 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.501 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:01 compute-0 ceph-mon[75140]: pgmap v2259: 305 pgs: 305 active+clean; 292 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 665 KiB/s rd, 6.0 MiB/s wr, 149 op/s
Jan 26 16:26:01 compute-0 nova_compute[239965]: 2026-01-26 16:26:01.969 239969 DEBUG nova.network.neutron [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Successfully created port: 30d9e550-982e-4e2c-9ee0-755cad494e41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.128 239969 DEBUG nova.network.neutron [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updating instance_info_cache with network_info: [{"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.148 239969 DEBUG oslo_concurrency.lockutils [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Releasing lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.150 239969 DEBUG nova.compute.manager [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.539 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.539 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:02 compute-0 sshd-session[362420]: Invalid user linaro from 209.38.206.249 port 48732
Jan 26 16:26:02 compute-0 sshd-session[362420]: Connection closed by invalid user linaro 209.38.206.249 port 48732 [preauth]
Jan 26 16:26:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 333 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.5 MiB/s wr, 245 op/s
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.712987) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444762713019, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2072, "num_deletes": 251, "total_data_size": 3283720, "memory_usage": 3329376, "flush_reason": "Manual Compaction"}
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444762735742, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3188032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45336, "largest_seqno": 47407, "table_properties": {"data_size": 3178984, "index_size": 5542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19503, "raw_average_key_size": 20, "raw_value_size": 3160517, "raw_average_value_size": 3281, "num_data_blocks": 245, "num_entries": 963, "num_filter_entries": 963, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444558, "oldest_key_time": 1769444558, "file_creation_time": 1769444762, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 22829 microseconds, and 8006 cpu microseconds.
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.735812) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3188032 bytes OK
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.735834) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.739395) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.739411) EVENT_LOG_v1 {"time_micros": 1769444762739405, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.739429) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3274989, prev total WAL file size 3274989, number of live WAL files 2.
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.740476) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3113KB)], [104(8854KB)]
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444762740539, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 12254761, "oldest_snapshot_seqno": -1}
Jan 26 16:26:02 compute-0 ceph-mon[75140]: pgmap v2260: 305 pgs: 305 active+clean; 333 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 7.5 MiB/s wr, 245 op/s
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 7212 keys, 10542252 bytes, temperature: kUnknown
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444762837776, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10542252, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10493315, "index_size": 29817, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 185635, "raw_average_key_size": 25, "raw_value_size": 10363786, "raw_average_value_size": 1437, "num_data_blocks": 1169, "num_entries": 7212, "num_filter_entries": 7212, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444762, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.838120) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10542252 bytes
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.839569) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.9 rd, 108.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.6 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(7.2) write-amplify(3.3) OK, records in: 7726, records dropped: 514 output_compression: NoCompression
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.839596) EVENT_LOG_v1 {"time_micros": 1769444762839583, "job": 62, "event": "compaction_finished", "compaction_time_micros": 97318, "compaction_time_cpu_micros": 42605, "output_level": 6, "num_output_files": 1, "total_output_size": 10542252, "num_input_records": 7726, "num_output_records": 7212, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444762840672, "job": 62, "event": "table_file_deletion", "file_number": 106}
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444762843583, "job": 62, "event": "table_file_deletion", "file_number": 104}
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.740380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.843627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.843633) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.843636) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.843639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:02.843642) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.956 239969 DEBUG nova.compute.manager [req-3bd57d23-a807-43b6-a328-2ecf72439daf req-fc03c240-8525-4df8-a1df-e61902815b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received event network-changed-00461e74-34f4-4d7c-b930-09797f49ca0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.957 239969 DEBUG nova.compute.manager [req-3bd57d23-a807-43b6-a328-2ecf72439daf req-fc03c240-8525-4df8-a1df-e61902815b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Refreshing instance network info cache due to event network-changed-00461e74-34f4-4d7c-b930-09797f49ca0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.958 239969 DEBUG oslo_concurrency.lockutils [req-3bd57d23-a807-43b6-a328-2ecf72439daf req-fc03c240-8525-4df8-a1df-e61902815b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.958 239969 DEBUG oslo_concurrency.lockutils [req-3bd57d23-a807-43b6-a328-2ecf72439daf req-fc03c240-8525-4df8-a1df-e61902815b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:02 compute-0 nova_compute[239965]: 2026-01-26 16:26:02.959 239969 DEBUG nova.network.neutron [req-3bd57d23-a807-43b6-a328-2ecf72439daf req-fc03c240-8525-4df8-a1df-e61902815b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Refreshing network info cache for port 00461e74-34f4-4d7c-b930-09797f49ca0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3340063921' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.054 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.136 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.137 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.140 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.141 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.143 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.144 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.351 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.352 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3039MB free_disk=59.85451069753617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.352 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.353 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.470 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance b1edd3fd-2563-465f-94c8-577ab4debb72 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.471 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance deadd136-f039-481d-bfc4-bbc46bc71563 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.471 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance fed642a5-01b1-488e-aae9-21971c326ddb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.471 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 39a41999-3c56-4679-8c57-9286dc4edbb6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.471 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.471 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.594 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.666 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3340063921' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.813 239969 DEBUG nova.network.neutron [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Successfully updated port: 30d9e550-982e-4e2c-9ee0-755cad494e41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.829 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.829 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquired lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.830 239969 DEBUG nova.network.neutron [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:26:03 compute-0 nova_compute[239965]: 2026-01-26 16:26:03.992 239969 DEBUG nova.network.neutron [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:26:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3459223312' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.114 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.120 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:04 compute-0 sshd-session[362465]: Invalid user steam from 209.38.206.249 port 41450
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.142 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.173 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.174 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.821s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:04 compute-0 sshd-session[362465]: Connection closed by invalid user steam 209.38.206.249 port 41450 [preauth]
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.246373) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444764246419, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 276, "num_deletes": 257, "total_data_size": 15968, "memory_usage": 21752, "flush_reason": "Manual Compaction"}
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444764251410, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 15939, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47408, "largest_seqno": 47683, "table_properties": {"data_size": 14107, "index_size": 65, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4668, "raw_average_key_size": 17, "raw_value_size": 10508, "raw_average_value_size": 39, "num_data_blocks": 3, "num_entries": 269, "num_filter_entries": 269, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444762, "oldest_key_time": 1769444762, "file_creation_time": 1769444764, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 5084 microseconds, and 1026 cpu microseconds.
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.251456) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 15939 bytes OK
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.251476) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.252908) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.252930) EVENT_LOG_v1 {"time_micros": 1769444764252923, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.252951) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 13851, prev total WAL file size 13851, number of live WAL files 2.
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.253369) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373535' seq:72057594037927935, type:22 .. '6C6F676D0032303038' seq:0, type:0; will stop at (end)
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(15KB)], [107(10MB)]
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444764253410, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10558191, "oldest_snapshot_seqno": -1}
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6962 keys, 10439779 bytes, temperature: kUnknown
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444764337022, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10439779, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10391990, "index_size": 29309, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17413, "raw_key_size": 181385, "raw_average_key_size": 26, "raw_value_size": 10266281, "raw_average_value_size": 1474, "num_data_blocks": 1143, "num_entries": 6962, "num_filter_entries": 6962, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444764, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.337323) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10439779 bytes
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.338817) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.1 rd, 124.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 10.1 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(1317.4) write-amplify(655.0) OK, records in: 7481, records dropped: 519 output_compression: NoCompression
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.338853) EVENT_LOG_v1 {"time_micros": 1769444764338832, "job": 64, "event": "compaction_finished", "compaction_time_micros": 83704, "compaction_time_cpu_micros": 25691, "output_level": 6, "num_output_files": 1, "total_output_size": 10439779, "num_input_records": 7481, "num_output_records": 6962, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444764339016, "job": 64, "event": "table_file_deletion", "file_number": 109}
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444764341421, "job": 64, "event": "table_file_deletion", "file_number": 107}
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.253310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.341463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.341469) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.341472) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.341475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:26:04.341478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:26:04 compute-0 kernel: tap60b0936e-92 (unregistering): left promiscuous mode
Jan 26 16:26:04 compute-0 NetworkManager[48954]: <info>  [1769444764.5334] device (tap60b0936e-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:26:04 compute-0 ovn_controller[146046]: 2026-01-26T16:26:04Z|01478|binding|INFO|Releasing lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 from this chassis (sb_readonly=0)
Jan 26 16:26:04 compute-0 ovn_controller[146046]: 2026-01-26T16:26:04Z|01479|binding|INFO|Setting lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 down in Southbound
Jan 26 16:26:04 compute-0 ovn_controller[146046]: 2026-01-26T16:26:04Z|01480|binding|INFO|Removing iface tap60b0936e-92 ovn-installed in OVS
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.538 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.546 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:29:6f 10.100.0.3'], port_security=['fa:16:3e:00:29:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b1edd3fd-2563-465f-94c8-577ab4debb72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0604e11d-f932-4e4e-a21d-f453b4c1f6d5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=016b0b61-69af-4aa0-b892-809b130821a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.548 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 in datapath 468ef4a7-70cc-4bbe-a116-35fd32215cb8 unbound from our chassis
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.552 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 468ef4a7-70cc-4bbe-a116-35fd32215cb8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.554 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[91f7a9b1-b2cc-4a3c-84c6-2b89535af289]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.555 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 namespace which is not needed anymore
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:04 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000089.scope: Deactivated successfully.
Jan 26 16:26:04 compute-0 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000089.scope: Consumed 14.120s CPU time.
Jan 26 16:26:04 compute-0 systemd-machined[208061]: Machine qemu-166-instance-00000089 terminated.
Jan 26 16:26:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 333 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 202 op/s
Jan 26 16:26:04 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[360552]: [NOTICE]   (360556) : haproxy version is 2.8.14-c23fe91
Jan 26 16:26:04 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[360552]: [NOTICE]   (360556) : path to executable is /usr/sbin/haproxy
Jan 26 16:26:04 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[360552]: [WARNING]  (360556) : Exiting Master process...
Jan 26 16:26:04 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[360552]: [ALERT]    (360556) : Current worker (360558) exited with code 143 (Terminated)
Jan 26 16:26:04 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[360552]: [WARNING]  (360556) : All workers exited. Exiting... (0)
Jan 26 16:26:04 compute-0 systemd[1]: libpod-a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d.scope: Deactivated successfully.
Jan 26 16:26:04 compute-0 podman[362495]: 2026-01-26 16:26:04.734066806 +0000 UTC m=+0.062969183 container died a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:26:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3459223312' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:04 compute-0 ceph-mon[75140]: pgmap v2261: 305 pgs: 305 active+clean; 333 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.0 MiB/s wr, 202 op/s
Jan 26 16:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d-userdata-shm.mount: Deactivated successfully.
Jan 26 16:26:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-486ecb2a938e603bc0798b48d9c707ad284dee0bffa0b6264637187d0ceb1c4f-merged.mount: Deactivated successfully.
Jan 26 16:26:04 compute-0 sshd-session[362469]: Connection closed by authenticating user root 209.38.206.249 port 41454 [preauth]
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:04 compute-0 podman[362495]: 2026-01-26 16:26:04.81796574 +0000 UTC m=+0.146868047 container cleanup a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:26:04 compute-0 systemd[1]: libpod-conmon-a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d.scope: Deactivated successfully.
Jan 26 16:26:04 compute-0 podman[362535]: 2026-01-26 16:26:04.905844473 +0000 UTC m=+0.053590644 container remove a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.913 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[35f9e20c-c07e-425c-9b7a-24c07afd5481]: (4, ('Mon Jan 26 04:26:04 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 (a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d)\na836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d\nMon Jan 26 04:26:04 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 (a836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d)\na836775d6116e219714688aa6963577d14216d01741b4968108def3c3438ea5d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.915 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d021a3b-5adc-4be7-b438-f77325961898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.916 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap468ef4a7-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:04 compute-0 kernel: tap468ef4a7-70: left promiscuous mode
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:04 compute-0 nova_compute[239965]: 2026-01-26 16:26:04.937 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.941 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11ed1090-263c-4b81-a00b-ed9cce4d55ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.956 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aaaf8e2b-d932-44fb-9e2a-188f40746fe6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.958 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[64de8dcc-fa00-4821-a8d2-2ab696ae0a08]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.976 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[21c27625-9c71-436a-b9f2-0af67a71fcaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622521, 'reachable_time': 32925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362554, 'error': None, 'target': 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:04 compute-0 systemd[1]: run-netns-ovnmeta\x2d468ef4a7\x2d70cc\x2d4bbe\x2da116\x2d35fd32215cb8.mount: Deactivated successfully.
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.981 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:26:04 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:04.981 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[98531a89-79d1-403c-8628-af9997bc0b55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.324 239969 INFO nova.virt.libvirt.driver [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Instance shutdown successfully.
Jan 26 16:26:05 compute-0 kernel: tap60b0936e-92: entered promiscuous mode
Jan 26 16:26:05 compute-0 systemd-udevd[362477]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:26:05 compute-0 ovn_controller[146046]: 2026-01-26T16:26:05Z|01481|binding|INFO|Claiming lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 for this chassis.
Jan 26 16:26:05 compute-0 ovn_controller[146046]: 2026-01-26T16:26:05Z|01482|binding|INFO|60b0936e-92d4-4117-ad1c-6c6ba1f53cd8: Claiming fa:16:3e:00:29:6f 10.100.0.3
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.419 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:05 compute-0 NetworkManager[48954]: <info>  [1769444765.4218] manager: (tap60b0936e-92): new Tun device (/org/freedesktop/NetworkManager/Devices/603)
Jan 26 16:26:05 compute-0 NetworkManager[48954]: <info>  [1769444765.4347] device (tap60b0936e-92): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:26:05 compute-0 NetworkManager[48954]: <info>  [1769444765.4357] device (tap60b0936e-92): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.428 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:29:6f 10.100.0.3'], port_security=['fa:16:3e:00:29:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b1edd3fd-2563-465f-94c8-577ab4debb72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0604e11d-f932-4e4e-a21d-f453b4c1f6d5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=016b0b61-69af-4aa0-b892-809b130821a2, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.430 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 in datapath 468ef4a7-70cc-4bbe-a116-35fd32215cb8 bound to our chassis
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.436 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 468ef4a7-70cc-4bbe-a116-35fd32215cb8
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.452 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c006e799-6173-4778-b796-e804854be48e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.456 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap468ef4a7-71 in ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:26:05 compute-0 ovn_controller[146046]: 2026-01-26T16:26:05Z|01483|binding|INFO|Setting lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 ovn-installed in OVS
Jan 26 16:26:05 compute-0 ovn_controller[146046]: 2026-01-26T16:26:05Z|01484|binding|INFO|Setting lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 up in Southbound
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.458 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap468ef4a7-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.459 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1edf92a6-7265-4d97-9341-a40ac8f99a31]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.460 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.460 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b8b308c6-f9d8-4198-a078-2cfd3b8eff47]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.468 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.477 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[47aa57d6-ef98-4a63-90ae-d22fc1a61146]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 systemd-machined[208061]: New machine qemu-169-instance-00000089.
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.493 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[77842b3a-706c-4da3-a6e4-a1a3633ef223]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 systemd[1]: Started Virtual Machine qemu-169-instance-00000089.
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.530 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[075f39d4-393f-4595-aac0-504d3d924f66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 NetworkManager[48954]: <info>  [1769444765.5384] manager: (tap468ef4a7-70): new Veth device (/org/freedesktop/NetworkManager/Devices/604)
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.539 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3af319d1-b9ef-40c5-b049-f440ad8b10d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.592 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[715c3f28-a70a-401e-bacf-17cdb78c2cd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.595 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e63090b6-502b-4282-903f-96844b8f4d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 NetworkManager[48954]: <info>  [1769444765.6211] device (tap468ef4a7-70): carrier: link connected
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.628 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fa884309-df34-4260-9afb-7148081bb8f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.647 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cd02adff-c5df-4dc6-a465-abc2d74c5154]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap468ef4a7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:de:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625642, 'reachable_time': 29795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362599, 'error': None, 'target': 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.664 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0ce70228-a316-495e-a721-657ec02923b9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe24:debb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 625642, 'tstamp': 625642}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362600, 'error': None, 'target': 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.686 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96302bf4-2859-4d24-95cb-30c296cc7b52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap468ef4a7-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:24:de:bb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 424], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625642, 'reachable_time': 29795, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 362601, 'error': None, 'target': 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.730 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[726b5504-b42e-41b3-a7de-7c04590c46f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.795 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c745105-f0a1-4c7c-bfb5-04939038a54c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.796 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap468ef4a7-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.797 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.797 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap468ef4a7-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:05 compute-0 NetworkManager[48954]: <info>  [1769444765.8001] manager: (tap468ef4a7-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/605)
Jan 26 16:26:05 compute-0 kernel: tap468ef4a7-70: entered promiscuous mode
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.802 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.805 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap468ef4a7-70, col_values=(('external_ids', {'iface-id': '54fc2172-ac7e-43ed-a29f-8766884bed13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.806 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:05 compute-0 ovn_controller[146046]: 2026-01-26T16:26:05Z|01485|binding|INFO|Releasing lport 54fc2172-ac7e-43ed-a29f-8766884bed13 from this chassis (sb_readonly=0)
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.838 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/468ef4a7-70cc-4bbe-a116-35fd32215cb8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/468ef4a7-70cc-4bbe-a116-35fd32215cb8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.843 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[50962bb2-2f2e-41cd-8653-f7a6949fb670]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.844 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-468ef4a7-70cc-4bbe-a116-35fd32215cb8
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/468ef4a7-70cc-4bbe-a116-35fd32215cb8.pid.haproxy
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 468ef4a7-70cc-4bbe-a116-35fd32215cb8
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:26:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:05.846 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'env', 'PROCESS_TAG=haproxy-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/468ef4a7-70cc-4bbe-a116-35fd32215cb8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.872 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for b1edd3fd-2563-465f-94c8-577ab4debb72 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.872 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444765.862245, b1edd3fd-2563-465f-94c8-577ab4debb72 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.872 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] VM Resumed (Lifecycle Event)
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.877 239969 INFO nova.virt.libvirt.driver [-] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Instance running successfully.
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.877 239969 INFO nova.virt.libvirt.driver [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Instance soft rebooted successfully.
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.878 239969 DEBUG nova.compute.manager [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.904 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.907 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.927 239969 DEBUG nova.compute.manager [req-c3337dba-fdf7-4394-ad8e-525f58e67680 req-b9a9a473-69f6-4b67-a92c-f1a47a3eb4d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received event network-changed-30d9e550-982e-4e2c-9ee0-755cad494e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.927 239969 DEBUG nova.compute.manager [req-c3337dba-fdf7-4394-ad8e-525f58e67680 req-b9a9a473-69f6-4b67-a92c-f1a47a3eb4d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Refreshing instance network info cache due to event network-changed-30d9e550-982e-4e2c-9ee0-755cad494e41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.928 239969 DEBUG oslo_concurrency.lockutils [req-c3337dba-fdf7-4394-ad8e-525f58e67680 req-b9a9a473-69f6-4b67-a92c-f1a47a3eb4d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.941 239969 DEBUG oslo_concurrency.lockutils [None req-137b91f3-435a-4a37-9fdc-a959e79f94f6 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.290s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.943 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] During sync_power_state the instance has a pending task (reboot_started). Skip.
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.943 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444765.8629158, b1edd3fd-2563-465f-94c8-577ab4debb72 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.943 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] VM Started (Lifecycle Event)
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.965 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:05 compute-0 nova_compute[239965]: 2026-01-26 16:26:05.970 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.054 239969 DEBUG nova.compute.manager [req-b86b1540-9664-417d-937a-6dee7bc5e899 req-7cb5bb04-766b-4978-baef-6f7425015b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-unplugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.054 239969 DEBUG oslo_concurrency.lockutils [req-b86b1540-9664-417d-937a-6dee7bc5e899 req-7cb5bb04-766b-4978-baef-6f7425015b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.054 239969 DEBUG oslo_concurrency.lockutils [req-b86b1540-9664-417d-937a-6dee7bc5e899 req-7cb5bb04-766b-4978-baef-6f7425015b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.055 239969 DEBUG oslo_concurrency.lockutils [req-b86b1540-9664-417d-937a-6dee7bc5e899 req-7cb5bb04-766b-4978-baef-6f7425015b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.055 239969 DEBUG nova.compute.manager [req-b86b1540-9664-417d-937a-6dee7bc5e899 req-7cb5bb04-766b-4978-baef-6f7425015b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] No waiting events found dispatching network-vif-unplugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.055 239969 WARNING nova.compute.manager [req-b86b1540-9664-417d-937a-6dee7bc5e899 req-7cb5bb04-766b-4978-baef-6f7425015b6b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received unexpected event network-vif-unplugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 for instance with vm_state active and task_state None.
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.198 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "e9322863-2491-464c-92af-3e1f3181df98" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.199 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.218 239969 DEBUG nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:26:06 compute-0 podman[362675]: 2026-01-26 16:26:06.257837712 +0000 UTC m=+0.085802962 container create 31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.269 239969 DEBUG nova.network.neutron [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updating instance_info_cache with network_info: [{"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.289 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Releasing lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.289 239969 DEBUG nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Instance network_info: |[{"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.290 239969 DEBUG oslo_concurrency.lockutils [req-c3337dba-fdf7-4394-ad8e-525f58e67680 req-b9a9a473-69f6-4b67-a92c-f1a47a3eb4d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.290 239969 DEBUG nova.network.neutron [req-c3337dba-fdf7-4394-ad8e-525f58e67680 req-b9a9a473-69f6-4b67-a92c-f1a47a3eb4d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Refreshing network info cache for port 30d9e550-982e-4e2c-9ee0-755cad494e41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.293 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Start _get_guest_xml network_info=[{"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.299 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.299 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.303 239969 WARNING nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.306 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:26:06 compute-0 podman[362675]: 2026-01-26 16:26:06.213263201 +0000 UTC m=+0.041228501 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.307 239969 INFO nova.compute.claims [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.320 239969 DEBUG nova.virt.libvirt.host [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.320 239969 DEBUG nova.virt.libvirt.host [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.332 239969 DEBUG nova.virt.libvirt.host [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.332 239969 DEBUG nova.virt.libvirt.host [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.333 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.333 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.334 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.334 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.334 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.335 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.335 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.335 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.335 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.336 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.336 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.336 239969 DEBUG nova.virt.hardware [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:26:06 compute-0 systemd[1]: Started libpod-conmon-31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8.scope.
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.341 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:26:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ad63894d58fe0ffc073f7a032c19755edcdb07d0069e852e7c5e50387b8b613/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.390 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:06 compute-0 podman[362675]: 2026-01-26 16:26:06.405890868 +0000 UTC m=+0.233856098 container init 31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:26:06 compute-0 podman[362675]: 2026-01-26 16:26:06.417141914 +0000 UTC m=+0.245107134 container start 31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 16:26:06 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[362688]: [NOTICE]   (362694) : New worker (362696) forked
Jan 26 16:26:06 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[362688]: [NOTICE]   (362694) : Loading success.
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.588 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 204 op/s
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.700 239969 DEBUG nova.network.neutron [req-3bd57d23-a807-43b6-a328-2ecf72439daf req-fc03c240-8525-4df8-a1df-e61902815b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Updated VIF entry in instance network info cache for port 00461e74-34f4-4d7c-b930-09797f49ca0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.701 239969 DEBUG nova.network.neutron [req-3bd57d23-a807-43b6-a328-2ecf72439daf req-fc03c240-8525-4df8-a1df-e61902815b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Updating instance_info_cache with network_info: [{"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:06 compute-0 nova_compute[239965]: 2026-01-26 16:26:06.720 239969 DEBUG oslo_concurrency.lockutils [req-3bd57d23-a807-43b6-a328-2ecf72439daf req-fc03c240-8525-4df8-a1df-e61902815b43 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:06 compute-0 ceph-mon[75140]: pgmap v2262: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.4 MiB/s wr, 204 op/s
Jan 26 16:26:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:26:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/953033538' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.019 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.678s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.039 239969 DEBUG nova.storage.rbd_utils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 39a41999-3c56-4679-8c57-9286dc4edbb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.042 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2853420158' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.199 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.207 239969 DEBUG nova.compute.provider_tree [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.230 239969 DEBUG nova.scheduler.client.report [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.259 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.261 239969 DEBUG nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.324 239969 DEBUG nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.325 239969 DEBUG nova.network.neutron [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.346 239969 INFO nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.381 239969 DEBUG nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.482 239969 DEBUG nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.484 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.485 239969 INFO nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Creating image(s)
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.513 239969 DEBUG nova.storage.rbd_utils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image e9322863-2491-464c-92af-3e1f3181df98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.547 239969 DEBUG nova.storage.rbd_utils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image e9322863-2491-464c-92af-3e1f3181df98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.577 239969 DEBUG nova.storage.rbd_utils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image e9322863-2491-464c-92af-3e1f3181df98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.581 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:26:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554919324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.622 239969 DEBUG nova.policy [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ace5f36058684b5782589457d9e15921', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.625 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.626 239969 DEBUG nova.virt.libvirt.vif [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:25:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-855823578',display_name='tempest-TestSnapshotPattern-server-855823578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-855823578',id=140,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbFIdmmD0YOLT/gnQMrV89ZkLND1rAgrbfF3qbw6dEO95K+VelVpcFet15bYd/xM6RQfQxT61ExvAJRv5XtJWc2bRBlJ7p2k0AnC8pnTtVi312SPdjQj+CEHqI1btz5HA==',key_name='tempest-TestSnapshotPattern-544616432',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02ec8c20b77c4ce9b1406d858bbcf14d',ramdisk_id='',reservation_id='r-92eb2iek',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-616132540',owner_user_name='tempest-TestSnapshotPattern-616132540-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:00Z,user_data=None,user_id='915d03d871484dc39e2d074d97f24809',uuid=39a41999-3c56-4679-8c57-9286dc4edbb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.627 239969 DEBUG nova.network.os_vif_util [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converting VIF {"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.628 239969 DEBUG nova.network.os_vif_util [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=30d9e550-982e-4e2c-9ee0-755cad494e41,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30d9e550-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.629 239969 DEBUG nova.objects.instance [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lazy-loading 'pci_devices' on Instance uuid 39a41999-3c56-4679-8c57-9286dc4edbb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.643 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <uuid>39a41999-3c56-4679-8c57-9286dc4edbb6</uuid>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <name>instance-0000008c</name>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <nova:name>tempest-TestSnapshotPattern-server-855823578</nova:name>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:26:06</nova:creationTime>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <nova:user uuid="915d03d871484dc39e2d074d97f24809">tempest-TestSnapshotPattern-616132540-project-member</nova:user>
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <nova:project uuid="02ec8c20b77c4ce9b1406d858bbcf14d">tempest-TestSnapshotPattern-616132540</nova:project>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <nova:port uuid="30d9e550-982e-4e2c-9ee0-755cad494e41">
Jan 26 16:26:07 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <system>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <entry name="serial">39a41999-3c56-4679-8c57-9286dc4edbb6</entry>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <entry name="uuid">39a41999-3c56-4679-8c57-9286dc4edbb6</entry>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     </system>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <os>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   </os>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <features>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   </features>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/39a41999-3c56-4679-8c57-9286dc4edbb6_disk">
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       </source>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/39a41999-3c56-4679-8c57-9286dc4edbb6_disk.config">
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       </source>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:26:07 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:5d:8a:76"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <target dev="tap30d9e550-98"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6/console.log" append="off"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <video>
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     </video>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:26:07 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:26:07 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:26:07 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:26:07 compute-0 nova_compute[239965]: </domain>
Jan 26 16:26:07 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.644 239969 DEBUG nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Preparing to wait for external event network-vif-plugged-30d9e550-982e-4e2c-9ee0-755cad494e41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.645 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.645 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.645 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.646 239969 DEBUG nova.virt.libvirt.vif [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:25:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-855823578',display_name='tempest-TestSnapshotPattern-server-855823578',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-855823578',id=140,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbFIdmmD0YOLT/gnQMrV89ZkLND1rAgrbfF3qbw6dEO95K+VelVpcFet15bYd/xM6RQfQxT61ExvAJRv5XtJWc2bRBlJ7p2k0AnC8pnTtVi312SPdjQj+CEHqI1btz5HA==',key_name='tempest-TestSnapshotPattern-544616432',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02ec8c20b77c4ce9b1406d858bbcf14d',ramdisk_id='',reservation_id='r-92eb2iek',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-616132540',owner_user_name='tempest-TestSnapshotPattern-616132540-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:00Z,user_data=None,user_id='915d03d871484dc39e2d074d97f24809',uuid=39a41999-3c56-4679-8c57-9286dc4edbb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.646 239969 DEBUG nova.network.os_vif_util [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converting VIF {"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.646 239969 DEBUG nova.network.os_vif_util [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=30d9e550-982e-4e2c-9ee0-755cad494e41,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30d9e550-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.647 239969 DEBUG os_vif [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=30d9e550-982e-4e2c-9ee0-755cad494e41,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30d9e550-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.647 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.648 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.648 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.655 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.655 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap30d9e550-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.655 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap30d9e550-98, col_values=(('external_ids', {'iface-id': '30d9e550-982e-4e2c-9ee0-755cad494e41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5d:8a:76', 'vm-uuid': '39a41999-3c56-4679-8c57-9286dc4edbb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.658 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:07 compute-0 NetworkManager[48954]: <info>  [1769444767.6590] manager: (tap30d9e550-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.667 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.667 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.668 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.668 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.687 239969 DEBUG nova.storage.rbd_utils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image e9322863-2491-464c-92af-3e1f3181df98_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.690 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 e9322863-2491-464c-92af-3e1f3181df98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.725 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.727 239969 INFO os_vif [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=30d9e550-982e-4e2c-9ee0-755cad494e41,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30d9e550-98')
Jan 26 16:26:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/953033538' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2853420158' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3554919324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.803 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.803 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.804 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] No VIF found with MAC fa:16:3e:5d:8a:76, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.804 239969 INFO nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Using config drive
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.830 239969 DEBUG nova.storage.rbd_utils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 39a41999-3c56-4679-8c57-9286dc4edbb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.924 239969 DEBUG nova.network.neutron [req-c3337dba-fdf7-4394-ad8e-525f58e67680 req-b9a9a473-69f6-4b67-a92c-f1a47a3eb4d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updated VIF entry in instance network info cache for port 30d9e550-982e-4e2c-9ee0-755cad494e41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.925 239969 DEBUG nova.network.neutron [req-c3337dba-fdf7-4394-ad8e-525f58e67680 req-b9a9a473-69f6-4b67-a92c-f1a47a3eb4d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updating instance_info_cache with network_info: [{"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:07 compute-0 nova_compute[239965]: 2026-01-26 16:26:07.939 239969 DEBUG oslo_concurrency.lockutils [req-c3337dba-fdf7-4394-ad8e-525f58e67680 req-b9a9a473-69f6-4b67-a92c-f1a47a3eb4d0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.012 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 e9322863-2491-464c-92af-3e1f3181df98_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.322s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.101 239969 DEBUG nova.storage.rbd_utils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] resizing rbd image e9322863-2491-464c-92af-3e1f3181df98_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.156 239969 DEBUG nova.compute.manager [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.156 239969 DEBUG oslo_concurrency.lockutils [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.157 239969 DEBUG oslo_concurrency.lockutils [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.157 239969 DEBUG oslo_concurrency.lockutils [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.157 239969 DEBUG nova.compute.manager [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] No waiting events found dispatching network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.158 239969 WARNING nova.compute.manager [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received unexpected event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 for instance with vm_state active and task_state None.
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.158 239969 DEBUG nova.compute.manager [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.158 239969 DEBUG oslo_concurrency.lockutils [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.159 239969 DEBUG oslo_concurrency.lockutils [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.159 239969 DEBUG oslo_concurrency.lockutils [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.159 239969 DEBUG nova.compute.manager [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] No waiting events found dispatching network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.159 239969 WARNING nova.compute.manager [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received unexpected event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 for instance with vm_state active and task_state None.
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.160 239969 DEBUG nova.compute.manager [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.160 239969 DEBUG oslo_concurrency.lockutils [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.161 239969 DEBUG oslo_concurrency.lockutils [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.161 239969 DEBUG oslo_concurrency.lockutils [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.161 239969 DEBUG nova.compute.manager [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] No waiting events found dispatching network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.161 239969 WARNING nova.compute.manager [req-ef41c938-8362-46c7-90f4-f862f3e6ff27 req-08f9bc95-a6ba-4344-9e70-72c6aced0439 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received unexpected event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 for instance with vm_state active and task_state None.
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.213 239969 DEBUG nova.objects.instance [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'migration_context' on Instance uuid e9322863-2491-464c-92af-3e1f3181df98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.227 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.227 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Ensure instance console log exists: /var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.228 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.228 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.228 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.355 239969 INFO nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Creating config drive at /var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6/disk.config
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.361 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xdkexh4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:08 compute-0 podman[362975]: 2026-01-26 16:26:08.39739433 +0000 UTC m=+0.082522912 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:26:08 compute-0 podman[362976]: 2026-01-26 16:26:08.403363267 +0000 UTC m=+0.087366682 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.442 239969 DEBUG nova.network.neutron [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Successfully created port: a5c8aecb-cefc-4d76-b032-9b122283da51 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.512 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8xdkexh4" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.535 239969 DEBUG nova.storage.rbd_utils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 39a41999-3c56-4679-8c57-9286dc4edbb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.538 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6/disk.config 39a41999-3c56-4679-8c57-9286dc4edbb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 212 op/s
Jan 26 16:26:08 compute-0 nova_compute[239965]: 2026-01-26 16:26:08.667 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:09 compute-0 ceph-mon[75140]: pgmap v2263: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 3.6 MiB/s wr, 212 op/s
Jan 26 16:26:09 compute-0 nova_compute[239965]: 2026-01-26 16:26:09.097 239969 DEBUG oslo_concurrency.processutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6/disk.config 39a41999-3c56-4679-8c57-9286dc4edbb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:09 compute-0 nova_compute[239965]: 2026-01-26 16:26:09.098 239969 INFO nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Deleting local config drive /var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6/disk.config because it was imported into RBD.
Jan 26 16:26:09 compute-0 NetworkManager[48954]: <info>  [1769444769.1584] manager: (tap30d9e550-98): new Tun device (/org/freedesktop/NetworkManager/Devices/607)
Jan 26 16:26:09 compute-0 kernel: tap30d9e550-98: entered promiscuous mode
Jan 26 16:26:09 compute-0 nova_compute[239965]: 2026-01-26 16:26:09.170 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:09 compute-0 nova_compute[239965]: 2026-01-26 16:26:09.177 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:09 compute-0 ovn_controller[146046]: 2026-01-26T16:26:09Z|01486|binding|INFO|Claiming lport 30d9e550-982e-4e2c-9ee0-755cad494e41 for this chassis.
Jan 26 16:26:09 compute-0 ovn_controller[146046]: 2026-01-26T16:26:09Z|01487|binding|INFO|30d9e550-982e-4e2c-9ee0-755cad494e41: Claiming fa:16:3e:5d:8a:76 10.100.0.13
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.187 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:8a:76 10.100.0.13'], port_security=['fa:16:3e:5d:8a:76 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '39a41999-3c56-4679-8c57-9286dc4edbb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01b6ba29-8317-46fc-85a5-35c029f2796c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02ec8c20b77c4ce9b1406d858bbcf14d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f55238b-e3e6-4801-8e90-114accc75c46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d1802e6-be6e-40d0-8709-a6af00654d79, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=30d9e550-982e-4e2c-9ee0-755cad494e41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.188 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 30d9e550-982e-4e2c-9ee0-755cad494e41 in datapath 01b6ba29-8317-46fc-85a5-35c029f2796c bound to our chassis
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.189 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01b6ba29-8317-46fc-85a5-35c029f2796c
Jan 26 16:26:09 compute-0 ovn_controller[146046]: 2026-01-26T16:26:09Z|01488|binding|INFO|Setting lport 30d9e550-982e-4e2c-9ee0-755cad494e41 ovn-installed in OVS
Jan 26 16:26:09 compute-0 ovn_controller[146046]: 2026-01-26T16:26:09Z|01489|binding|INFO|Setting lport 30d9e550-982e-4e2c-9ee0-755cad494e41 up in Southbound
Jan 26 16:26:09 compute-0 systemd-udevd[363079]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:26:09 compute-0 nova_compute[239965]: 2026-01-26 16:26:09.205 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.205 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[453c74ab-a1af-4951-8f4b-a559e1a4963a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.205 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01b6ba29-81 in ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.210 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01b6ba29-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.210 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc937fa-ca87-4944-b37b-78907a6bae26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.211 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[85c965a2-057f-404e-b6fd-1545053eeab0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 NetworkManager[48954]: <info>  [1769444769.2178] device (tap30d9e550-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:26:09 compute-0 NetworkManager[48954]: <info>  [1769444769.2195] device (tap30d9e550-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:26:09 compute-0 systemd-machined[208061]: New machine qemu-170-instance-0000008c.
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.225 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[59869780-52b9-46cd-b9ed-839e229f5abf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 systemd[1]: Started Virtual Machine qemu-170-instance-0000008c.
Jan 26 16:26:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.250 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[984e7bf3-f6a5-4912-853c-d6c1f27e7e71]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.284 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a8b17f77-0e6d-431a-b6cd-b46ee1b91a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.290 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6c291a98-6e3a-451e-b867-26816d76a038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 systemd-udevd[363082]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:26:09 compute-0 NetworkManager[48954]: <info>  [1769444769.2917] manager: (tap01b6ba29-80): new Veth device (/org/freedesktop/NetworkManager/Devices/608)
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.329 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[db168448-b5c0-44d4-9af1-49174b81fa09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.332 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c634b3b8-8a10-4b9f-b5c2-df8afaba0c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 NetworkManager[48954]: <info>  [1769444769.3545] device (tap01b6ba29-80): carrier: link connected
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.361 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[c01335cc-d814-41ed-b5ec-8521de3112dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.378 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[640655aa-2960-400a-beb5-85ee2ee2d341]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01b6ba29-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:79:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626016, 'reachable_time': 23299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363111, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.397 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8210de3f-e76a-4b47-b29d-fe0141da4823]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe25:7947'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626016, 'tstamp': 626016}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363112, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.426 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[23256c6f-f705-46a5-8465-69ea733512ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01b6ba29-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:79:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626016, 'reachable_time': 23299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 363113, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.466 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed0eb9a-dc07-45b6-9087-68ce8c677314]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 sshd-session[363058]: Connection closed by authenticating user root 209.38.206.249 port 41460 [preauth]
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.533 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1a906797-0e30-4599-8110-a3ec5f466405]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.535 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01b6ba29-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.535 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.536 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01b6ba29-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:09 compute-0 NetworkManager[48954]: <info>  [1769444769.5386] manager: (tap01b6ba29-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Jan 26 16:26:09 compute-0 kernel: tap01b6ba29-80: entered promiscuous mode
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.540 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01b6ba29-80, col_values=(('external_ids', {'iface-id': '36c96456-84ef-4832-9322-d686bc015797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:09 compute-0 ovn_controller[146046]: 2026-01-26T16:26:09Z|01490|binding|INFO|Releasing lport 36c96456-84ef-4832-9322-d686bc015797 from this chassis (sb_readonly=0)
Jan 26 16:26:09 compute-0 nova_compute[239965]: 2026-01-26 16:26:09.542 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:09 compute-0 nova_compute[239965]: 2026-01-26 16:26:09.558 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.559 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01b6ba29-8317-46fc-85a5-35c029f2796c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01b6ba29-8317-46fc-85a5-35c029f2796c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:26:09 compute-0 nova_compute[239965]: 2026-01-26 16:26:09.559 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.560 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[688af00e-f2af-4f73-8e1f-58718e42eab9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.561 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-01b6ba29-8317-46fc-85a5-35c029f2796c
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/01b6ba29-8317-46fc-85a5-35c029f2796c.pid.haproxy
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 01b6ba29-8317-46fc-85a5-35c029f2796c
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:26:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:09.563 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'env', 'PROCESS_TAG=haproxy-01b6ba29-8317-46fc-85a5-35c029f2796c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01b6ba29-8317-46fc-85a5-35c029f2796c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:26:10 compute-0 podman[363145]: 2026-01-26 16:26:10.001080574 +0000 UTC m=+0.058825671 container create ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:26:10 compute-0 systemd[1]: Started libpod-conmon-ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1.scope.
Jan 26 16:26:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:26:10 compute-0 podman[363145]: 2026-01-26 16:26:09.972168066 +0000 UTC m=+0.029913193 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:26:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de02d51b3bb6302ad391739301de40fc615c91a44d40359213d946a05c9050c7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:26:10 compute-0 podman[363145]: 2026-01-26 16:26:10.090662718 +0000 UTC m=+0.148407835 container init ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:26:10 compute-0 podman[363145]: 2026-01-26 16:26:10.098801938 +0000 UTC m=+0.156547035 container start ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 16:26:10 compute-0 neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c[363159]: [NOTICE]   (363163) : New worker (363165) forked
Jan 26 16:26:10 compute-0 neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c[363159]: [NOTICE]   (363163) : Loading success.
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.133 239969 DEBUG nova.network.neutron [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Successfully updated port: a5c8aecb-cefc-4d76-b032-9b122283da51 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.149 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.149 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquired lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.149 239969 DEBUG nova.network.neutron [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.202 239969 DEBUG nova.compute.manager [req-fd973e5f-cb50-4963-bf52-7f4da5c1730c req-b5d6196c-421d-4110-a130-0eaa5bf60705 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received event network-vif-plugged-30d9e550-982e-4e2c-9ee0-755cad494e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.202 239969 DEBUG oslo_concurrency.lockutils [req-fd973e5f-cb50-4963-bf52-7f4da5c1730c req-b5d6196c-421d-4110-a130-0eaa5bf60705 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.203 239969 DEBUG oslo_concurrency.lockutils [req-fd973e5f-cb50-4963-bf52-7f4da5c1730c req-b5d6196c-421d-4110-a130-0eaa5bf60705 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.203 239969 DEBUG oslo_concurrency.lockutils [req-fd973e5f-cb50-4963-bf52-7f4da5c1730c req-b5d6196c-421d-4110-a130-0eaa5bf60705 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.203 239969 DEBUG nova.compute.manager [req-fd973e5f-cb50-4963-bf52-7f4da5c1730c req-b5d6196c-421d-4110-a130-0eaa5bf60705 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Processing event network-vif-plugged-30d9e550-982e-4e2c-9ee0-755cad494e41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.249 239969 DEBUG nova.compute.manager [req-aadc988e-04e5-493f-9f47-1b65a3e79425 req-9f1fdc9c-3223-4e27-af58-e804b97438ec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received event network-changed-a5c8aecb-cefc-4d76-b032-9b122283da51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.249 239969 DEBUG nova.compute.manager [req-aadc988e-04e5-493f-9f47-1b65a3e79425 req-9f1fdc9c-3223-4e27-af58-e804b97438ec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Refreshing instance network info cache due to event network-changed-a5c8aecb-cefc-4d76-b032-9b122283da51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.249 239969 DEBUG oslo_concurrency.lockutils [req-aadc988e-04e5-493f-9f47-1b65a3e79425 req-9f1fdc9c-3223-4e27-af58-e804b97438ec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:10 compute-0 ovn_controller[146046]: 2026-01-26T16:26:10Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bf:c2:02 10.100.0.7
Jan 26 16:26:10 compute-0 ovn_controller[146046]: 2026-01-26T16:26:10Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:c2:02 10.100.0.7
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.617 239969 DEBUG nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.618 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444770.616324, 39a41999-3c56-4679-8c57-9286dc4edbb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.619 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] VM Started (Lifecycle Event)
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.625 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.629 239969 INFO nova.virt.libvirt.driver [-] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Instance spawned successfully.
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.630 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.645 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.665 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.665 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.666 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.668 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.669 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.671 239969 DEBUG nova.virt.libvirt.driver [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.677 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.728 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.729 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444770.6167092, 39a41999-3c56-4679-8c57-9286dc4edbb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.730 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] VM Paused (Lifecycle Event)
Jan 26 16:26:10 compute-0 ceph-mon[75140]: pgmap v2264: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 149 op/s
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.752 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.761 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444770.6217375, 39a41999-3c56-4679-8c57-9286dc4edbb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.761 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] VM Resumed (Lifecycle Event)
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.767 239969 INFO nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Took 9.97 seconds to spawn the instance on the hypervisor.
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.767 239969 DEBUG nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.800 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.804 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.834 239969 DEBUG nova.network.neutron [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.838 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.848 239969 INFO nova.compute.manager [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Took 11.12 seconds to build instance.
Jan 26 16:26:10 compute-0 nova_compute[239965]: 2026-01-26 16:26:10.892 239969 DEBUG oslo_concurrency.lockutils [None req-e57ab5be-50ad-4775-a29b-35b79f7adacb 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.021 239969 DEBUG nova.network.neutron [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Updating instance_info_cache with network_info: [{"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.044 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Releasing lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.045 239969 DEBUG nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Instance network_info: |[{"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.045 239969 DEBUG oslo_concurrency.lockutils [req-aadc988e-04e5-493f-9f47-1b65a3e79425 req-9f1fdc9c-3223-4e27-af58-e804b97438ec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.046 239969 DEBUG nova.network.neutron [req-aadc988e-04e5-493f-9f47-1b65a3e79425 req-9f1fdc9c-3223-4e27-af58-e804b97438ec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Refreshing network info cache for port a5c8aecb-cefc-4d76-b032-9b122283da51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.048 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Start _get_guest_xml network_info=[{"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.052 239969 WARNING nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.058 239969 DEBUG nova.virt.libvirt.host [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.059 239969 DEBUG nova.virt.libvirt.host [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.064 239969 DEBUG nova.virt.libvirt.host [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.065 239969 DEBUG nova.virt.libvirt.host [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.066 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.066 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.066 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.067 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.067 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.067 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.067 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.068 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.068 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.068 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.069 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.069 239969 DEBUG nova.virt.hardware [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.071 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.262 239969 DEBUG nova.compute.manager [req-ebd64d44-b107-4343-8b26-d781ac503d19 req-c5560919-f080-4432-8065-a6af31c22fbb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received event network-vif-plugged-30d9e550-982e-4e2c-9ee0-755cad494e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.263 239969 DEBUG oslo_concurrency.lockutils [req-ebd64d44-b107-4343-8b26-d781ac503d19 req-c5560919-f080-4432-8065-a6af31c22fbb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.263 239969 DEBUG oslo_concurrency.lockutils [req-ebd64d44-b107-4343-8b26-d781ac503d19 req-c5560919-f080-4432-8065-a6af31c22fbb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.264 239969 DEBUG oslo_concurrency.lockutils [req-ebd64d44-b107-4343-8b26-d781ac503d19 req-c5560919-f080-4432-8065-a6af31c22fbb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.264 239969 DEBUG nova.compute.manager [req-ebd64d44-b107-4343-8b26-d781ac503d19 req-c5560919-f080-4432-8065-a6af31c22fbb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] No waiting events found dispatching network-vif-plugged-30d9e550-982e-4e2c-9ee0-755cad494e41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.264 239969 WARNING nova.compute.manager [req-ebd64d44-b107-4343-8b26-d781ac503d19 req-c5560919-f080-4432-8065-a6af31c22fbb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received unexpected event network-vif-plugged-30d9e550-982e-4e2c-9ee0-755cad494e41 for instance with vm_state active and task_state None.
Jan 26 16:26:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:26:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/387254913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.7 MiB/s wr, 301 op/s
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.658 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.661 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/387254913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.696 239969 DEBUG nova.storage.rbd_utils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image e9322863-2491-464c-92af-3e1f3181df98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:12 compute-0 nova_compute[239965]: 2026-01-26 16:26:12.708 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:26:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4287998824' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.286 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.289 239969 DEBUG nova.virt.libvirt.vif [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:26:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1689387241',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1689387241',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-gen',id=141,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByI6tCrfJLyIj6sIsV+Ekerorscq6qkiM/Lle83NCyTUlha/oPyViDL+PBWA7bXq9+9oJOHfnt/sOtyAsLXUYu52zdtEkcLJAtD8nhzn87iwgtBznYvhaPSD4tmFm9Vgw==',key_name='tempest-TestSecurityGroupsBasicOps-173561617',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-0xx03ctf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:07Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=e9322863-2491-464c-92af-3e1f3181df98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.291 239969 DEBUG nova.network.os_vif_util [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.293 239969 DEBUG nova.network.os_vif_util [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:07:bd,bridge_name='br-int',has_traffic_filtering=True,id=a5c8aecb-cefc-4d76-b032-9b122283da51,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c8aecb-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.295 239969 DEBUG nova.objects.instance [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'pci_devices' on Instance uuid e9322863-2491-464c-92af-3e1f3181df98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.438 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <uuid>e9322863-2491-464c-92af-3e1f3181df98</uuid>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <name>instance-0000008d</name>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1689387241</nova:name>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:26:12</nova:creationTime>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <nova:user uuid="ace5f36058684b5782589457d9e15921">tempest-TestSecurityGroupsBasicOps-75910484-project-member</nova:user>
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <nova:project uuid="1f814bd45d164be6827f7ef54ed07f5f">tempest-TestSecurityGroupsBasicOps-75910484</nova:project>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <nova:port uuid="a5c8aecb-cefc-4d76-b032-9b122283da51">
Jan 26 16:26:13 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <system>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <entry name="serial">e9322863-2491-464c-92af-3e1f3181df98</entry>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <entry name="uuid">e9322863-2491-464c-92af-3e1f3181df98</entry>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     </system>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <os>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   </os>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <features>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   </features>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/e9322863-2491-464c-92af-3e1f3181df98_disk">
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       </source>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/e9322863-2491-464c-92af-3e1f3181df98_disk.config">
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       </source>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:26:13 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:d2:07:bd"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <target dev="tapa5c8aecb-ce"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98/console.log" append="off"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <video>
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     </video>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:26:13 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:26:13 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:26:13 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:26:13 compute-0 nova_compute[239965]: </domain>
Jan 26 16:26:13 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.440 239969 DEBUG nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Preparing to wait for external event network-vif-plugged-a5c8aecb-cefc-4d76-b032-9b122283da51 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.440 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "e9322863-2491-464c-92af-3e1f3181df98-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.440 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.440 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.441 239969 DEBUG nova.virt.libvirt.vif [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:26:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1689387241',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1689387241',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-gen',id=141,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByI6tCrfJLyIj6sIsV+Ekerorscq6qkiM/Lle83NCyTUlha/oPyViDL+PBWA7bXq9+9oJOHfnt/sOtyAsLXUYu52zdtEkcLJAtD8nhzn87iwgtBznYvhaPSD4tmFm9Vgw==',key_name='tempest-TestSecurityGroupsBasicOps-173561617',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-0xx03ctf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:07Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=e9322863-2491-464c-92af-3e1f3181df98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.441 239969 DEBUG nova.network.os_vif_util [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.442 239969 DEBUG nova.network.os_vif_util [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d2:07:bd,bridge_name='br-int',has_traffic_filtering=True,id=a5c8aecb-cefc-4d76-b032-9b122283da51,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c8aecb-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.442 239969 DEBUG os_vif [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:07:bd,bridge_name='br-int',has_traffic_filtering=True,id=a5c8aecb-cefc-4d76-b032-9b122283da51,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c8aecb-ce') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.445 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.446 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.446 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.449 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.449 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5c8aecb-ce, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.449 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa5c8aecb-ce, col_values=(('external_ids', {'iface-id': 'a5c8aecb-cefc-4d76-b032-9b122283da51', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d2:07:bd', 'vm-uuid': 'e9322863-2491-464c-92af-3e1f3181df98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.451 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:13 compute-0 NetworkManager[48954]: <info>  [1769444773.4519] manager: (tapa5c8aecb-ce): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/610)
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.453 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.458 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.458 239969 INFO os_vif [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d2:07:bd,bridge_name='br-int',has_traffic_filtering=True,id=a5c8aecb-cefc-4d76-b032-9b122283da51,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c8aecb-ce')
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.517 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.518 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.518 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No VIF found with MAC fa:16:3e:d2:07:bd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.518 239969 INFO nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Using config drive
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.553 239969 DEBUG nova.storage.rbd_utils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image e9322863-2491-464c-92af-3e1f3181df98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.665 239969 DEBUG nova.network.neutron [req-aadc988e-04e5-493f-9f47-1b65a3e79425 req-9f1fdc9c-3223-4e27-af58-e804b97438ec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Updated VIF entry in instance network info cache for port a5c8aecb-cefc-4d76-b032-9b122283da51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.666 239969 DEBUG nova.network.neutron [req-aadc988e-04e5-493f-9f47-1b65a3e79425 req-9f1fdc9c-3223-4e27-af58-e804b97438ec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Updating instance_info_cache with network_info: [{"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.670 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.685 239969 DEBUG oslo_concurrency.lockutils [req-aadc988e-04e5-493f-9f47-1b65a3e79425 req-9f1fdc9c-3223-4e27-af58-e804b97438ec a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:13 compute-0 ceph-mon[75140]: pgmap v2265: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.7 MiB/s wr, 301 op/s
Jan 26 16:26:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4287998824' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.877 239969 INFO nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Creating config drive at /var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98/disk.config
Jan 26 16:26:13 compute-0 nova_compute[239965]: 2026-01-26 16:26:13.886 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr0pjq8ps execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.038 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr0pjq8ps" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.087 239969 DEBUG nova.storage.rbd_utils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image e9322863-2491-464c-92af-3e1f3181df98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.092 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98/disk.config e9322863-2491-464c-92af-3e1f3181df98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.270 239969 DEBUG oslo_concurrency.processutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98/disk.config e9322863-2491-464c-92af-3e1f3181df98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.271 239969 INFO nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Deleting local config drive /var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98/disk.config because it was imported into RBD.
Jan 26 16:26:14 compute-0 kernel: tapa5c8aecb-ce: entered promiscuous mode
Jan 26 16:26:14 compute-0 NetworkManager[48954]: <info>  [1769444774.3341] manager: (tapa5c8aecb-ce): new Tun device (/org/freedesktop/NetworkManager/Devices/611)
Jan 26 16:26:14 compute-0 ovn_controller[146046]: 2026-01-26T16:26:14Z|01491|binding|INFO|Claiming lport a5c8aecb-cefc-4d76-b032-9b122283da51 for this chassis.
Jan 26 16:26:14 compute-0 ovn_controller[146046]: 2026-01-26T16:26:14Z|01492|binding|INFO|a5c8aecb-cefc-4d76-b032-9b122283da51: Claiming fa:16:3e:d2:07:bd 10.100.0.9
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.338 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.346 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:07:bd 10.100.0.9'], port_security=['fa:16:3e:d2:07:bd 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e9322863-2491-464c-92af-3e1f3181df98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b3cd03ab-361a-4eb5-b4b2-10e2c053ad7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c35e7e4-1920-4b21-8562-6698247816ad, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a5c8aecb-cefc-4d76-b032-9b122283da51) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.347 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a5c8aecb-cefc-4d76-b032-9b122283da51 in datapath c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073 bound to our chassis
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.349 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.365 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c45e0f85-1113-4b1c-811f-68300d050caa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:14 compute-0 ovn_controller[146046]: 2026-01-26T16:26:14Z|01493|binding|INFO|Setting lport a5c8aecb-cefc-4d76-b032-9b122283da51 ovn-installed in OVS
Jan 26 16:26:14 compute-0 ovn_controller[146046]: 2026-01-26T16:26:14Z|01494|binding|INFO|Setting lport a5c8aecb-cefc-4d76-b032-9b122283da51 up in Southbound
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.373 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:14 compute-0 systemd-machined[208061]: New machine qemu-171-instance-0000008d.
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.395 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[4fb2c6f8-15c6-4428-8d6b-15ed60bf70a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.397 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[49971fc3-7dbe-45c8-8c7c-bdfb4fa8a4e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:14 compute-0 systemd[1]: Started Virtual Machine qemu-171-instance-0000008d.
Jan 26 16:26:14 compute-0 systemd-udevd[363356]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:26:14 compute-0 NetworkManager[48954]: <info>  [1769444774.4311] device (tapa5c8aecb-ce): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:26:14 compute-0 NetworkManager[48954]: <info>  [1769444774.4319] device (tapa5c8aecb-ce): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.442 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9e1f2870-2708-452c-b987-64556141e0eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.461 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eb25a70c-db37-43d6-8fdc-2d69cb1a92f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3b6f1ea-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:1b:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 419], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622993, 'reachable_time': 28486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363362, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.479 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2a7a1993-267f-46e7-a8fc-17c810705dda]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc3b6f1ea-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 623005, 'tstamp': 623005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363365, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc3b6f1ea-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 623008, 'tstamp': 623008}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363365, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.481 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3b6f1ea-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.482 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.485 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3b6f1ea-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.486 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.486 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc3b6f1ea-40, col_values=(('external_ids', {'iface-id': '1de7f01b-b4a3-4a1c-9255-2968d2bb9781'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:14.486 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.578 239969 DEBUG nova.compute.manager [req-5cc38e17-2908-4054-8c45-801e03e9c3e7 req-2020c99b-8532-4415-aa54-05da4551843d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received event network-changed-30d9e550-982e-4e2c-9ee0-755cad494e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.579 239969 DEBUG nova.compute.manager [req-5cc38e17-2908-4054-8c45-801e03e9c3e7 req-2020c99b-8532-4415-aa54-05da4551843d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Refreshing instance network info cache due to event network-changed-30d9e550-982e-4e2c-9ee0-755cad494e41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.579 239969 DEBUG oslo_concurrency.lockutils [req-5cc38e17-2908-4054-8c45-801e03e9c3e7 req-2020c99b-8532-4415-aa54-05da4551843d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.579 239969 DEBUG oslo_concurrency.lockutils [req-5cc38e17-2908-4054-8c45-801e03e9c3e7 req-2020c99b-8532-4415-aa54-05da4551843d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.580 239969 DEBUG nova.network.neutron [req-5cc38e17-2908-4054-8c45-801e03e9c3e7 req-2020c99b-8532-4415-aa54-05da4551843d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Refreshing network info cache for port 30d9e550-982e-4e2c-9ee0-755cad494e41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.600 239969 DEBUG nova.compute.manager [req-7af159d9-ffb0-4ab1-8b28-67def6c0ad49 req-b077a3ca-5cd8-4d64-883e-b9a73feeb6cb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received event network-vif-plugged-a5c8aecb-cefc-4d76-b032-9b122283da51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.600 239969 DEBUG oslo_concurrency.lockutils [req-7af159d9-ffb0-4ab1-8b28-67def6c0ad49 req-b077a3ca-5cd8-4d64-883e-b9a73feeb6cb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e9322863-2491-464c-92af-3e1f3181df98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.601 239969 DEBUG oslo_concurrency.lockutils [req-7af159d9-ffb0-4ab1-8b28-67def6c0ad49 req-b077a3ca-5cd8-4d64-883e-b9a73feeb6cb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.601 239969 DEBUG oslo_concurrency.lockutils [req-7af159d9-ffb0-4ab1-8b28-67def6c0ad49 req-b077a3ca-5cd8-4d64-883e-b9a73feeb6cb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.601 239969 DEBUG nova.compute.manager [req-7af159d9-ffb0-4ab1-8b28-67def6c0ad49 req-b077a3ca-5cd8-4d64-883e-b9a73feeb6cb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Processing event network-vif-plugged-a5c8aecb-cefc-4d76-b032-9b122283da51 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:26:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 204 op/s
Jan 26 16:26:14 compute-0 ceph-mon[75140]: pgmap v2266: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 4.3 MiB/s wr, 204 op/s
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.995 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444774.9948163, e9322863-2491-464c-92af-3e1f3181df98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.995 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] VM Started (Lifecycle Event)
Jan 26 16:26:14 compute-0 nova_compute[239965]: 2026-01-26 16:26:14.998 239969 DEBUG nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.000 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.003 239969 INFO nova.virt.libvirt.driver [-] [instance: e9322863-2491-464c-92af-3e1f3181df98] Instance spawned successfully.
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.003 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.019 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.024 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.028 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.029 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.029 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.029 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.030 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.030 239969 DEBUG nova.virt.libvirt.driver [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.064 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.064 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444774.995072, e9322863-2491-464c-92af-3e1f3181df98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.065 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] VM Paused (Lifecycle Event)
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.096 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.103 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444775.0000594, e9322863-2491-464c-92af-3e1f3181df98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.103 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] VM Resumed (Lifecycle Event)
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.107 239969 INFO nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Took 7.62 seconds to spawn the instance on the hypervisor.
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.107 239969 DEBUG nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.124 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.127 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.160 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.187 239969 INFO nova.compute.manager [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Took 8.91 seconds to build instance.
Jan 26 16:26:15 compute-0 nova_compute[239965]: 2026-01-26 16:26:15.203 239969 DEBUG oslo_concurrency.lockutils [None req-53309d7d-240f-4da0-8a96-b4340c5a59ca ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:16 compute-0 nova_compute[239965]: 2026-01-26 16:26:16.055 239969 DEBUG nova.network.neutron [req-5cc38e17-2908-4054-8c45-801e03e9c3e7 req-2020c99b-8532-4415-aa54-05da4551843d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updated VIF entry in instance network info cache for port 30d9e550-982e-4e2c-9ee0-755cad494e41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:16 compute-0 nova_compute[239965]: 2026-01-26 16:26:16.056 239969 DEBUG nova.network.neutron [req-5cc38e17-2908-4054-8c45-801e03e9c3e7 req-2020c99b-8532-4415-aa54-05da4551843d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updating instance_info_cache with network_info: [{"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:16 compute-0 nova_compute[239965]: 2026-01-26 16:26:16.072 239969 DEBUG oslo_concurrency.lockutils [req-5cc38e17-2908-4054-8c45-801e03e9c3e7 req-2020c99b-8532-4415-aa54-05da4551843d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 261 op/s
Jan 26 16:26:16 compute-0 ceph-mon[75140]: pgmap v2267: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.3 MiB/s wr, 261 op/s
Jan 26 16:26:16 compute-0 nova_compute[239965]: 2026-01-26 16:26:16.909 239969 DEBUG nova.compute.manager [req-4ff28b28-7cf0-4f3d-baef-4fb25fd6f9b0 req-c26f9d06-e86a-4a45-bcb5-465eb9358f5e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received event network-vif-plugged-a5c8aecb-cefc-4d76-b032-9b122283da51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:16 compute-0 nova_compute[239965]: 2026-01-26 16:26:16.909 239969 DEBUG oslo_concurrency.lockutils [req-4ff28b28-7cf0-4f3d-baef-4fb25fd6f9b0 req-c26f9d06-e86a-4a45-bcb5-465eb9358f5e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e9322863-2491-464c-92af-3e1f3181df98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:16 compute-0 nova_compute[239965]: 2026-01-26 16:26:16.909 239969 DEBUG oslo_concurrency.lockutils [req-4ff28b28-7cf0-4f3d-baef-4fb25fd6f9b0 req-c26f9d06-e86a-4a45-bcb5-465eb9358f5e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:16 compute-0 nova_compute[239965]: 2026-01-26 16:26:16.909 239969 DEBUG oslo_concurrency.lockutils [req-4ff28b28-7cf0-4f3d-baef-4fb25fd6f9b0 req-c26f9d06-e86a-4a45-bcb5-465eb9358f5e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:16 compute-0 nova_compute[239965]: 2026-01-26 16:26:16.910 239969 DEBUG nova.compute.manager [req-4ff28b28-7cf0-4f3d-baef-4fb25fd6f9b0 req-c26f9d06-e86a-4a45-bcb5-465eb9358f5e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] No waiting events found dispatching network-vif-plugged-a5c8aecb-cefc-4d76-b032-9b122283da51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:16 compute-0 nova_compute[239965]: 2026-01-26 16:26:16.910 239969 WARNING nova.compute.manager [req-4ff28b28-7cf0-4f3d-baef-4fb25fd6f9b0 req-c26f9d06-e86a-4a45-bcb5-465eb9358f5e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received unexpected event network-vif-plugged-a5c8aecb-cefc-4d76-b032-9b122283da51 for instance with vm_state active and task_state None.
Jan 26 16:26:17 compute-0 nova_compute[239965]: 2026-01-26 16:26:17.160 239969 INFO nova.compute.manager [None req-3b2b4e56-67e5-4b33-820d-6eca661c589e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Get console output
Jan 26 16:26:17 compute-0 nova_compute[239965]: 2026-01-26 16:26:17.167 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:26:17 compute-0 ovn_controller[146046]: 2026-01-26T16:26:17Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:29:6f 10.100.0.3
Jan 26 16:26:18 compute-0 ovn_controller[146046]: 2026-01-26T16:26:18Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:c2:02 10.100.0.7
Jan 26 16:26:18 compute-0 nova_compute[239965]: 2026-01-26 16:26:18.453 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 4.0 MiB/s wr, 333 op/s
Jan 26 16:26:18 compute-0 nova_compute[239965]: 2026-01-26 16:26:18.675 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:18 compute-0 ceph-mon[75140]: pgmap v2268: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.3 MiB/s rd, 4.0 MiB/s wr, 333 op/s
Jan 26 16:26:19 compute-0 nova_compute[239965]: 2026-01-26 16:26:19.006 239969 DEBUG nova.compute.manager [req-50ad5fc2-b812-4c93-b940-51ac2d973e5c req-9e574e56-fc43-4ef8-af0e-6f4e0cc8f689 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received event network-changed-a5c8aecb-cefc-4d76-b032-9b122283da51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:19 compute-0 nova_compute[239965]: 2026-01-26 16:26:19.006 239969 DEBUG nova.compute.manager [req-50ad5fc2-b812-4c93-b940-51ac2d973e5c req-9e574e56-fc43-4ef8-af0e-6f4e0cc8f689 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Refreshing instance network info cache due to event network-changed-a5c8aecb-cefc-4d76-b032-9b122283da51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:19 compute-0 nova_compute[239965]: 2026-01-26 16:26:19.007 239969 DEBUG oslo_concurrency.lockutils [req-50ad5fc2-b812-4c93-b940-51ac2d973e5c req-9e574e56-fc43-4ef8-af0e-6f4e0cc8f689 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:19 compute-0 nova_compute[239965]: 2026-01-26 16:26:19.007 239969 DEBUG oslo_concurrency.lockutils [req-50ad5fc2-b812-4c93-b940-51ac2d973e5c req-9e574e56-fc43-4ef8-af0e-6f4e0cc8f689 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:19 compute-0 nova_compute[239965]: 2026-01-26 16:26:19.007 239969 DEBUG nova.network.neutron [req-50ad5fc2-b812-4c93-b940-51ac2d973e5c req-9e574e56-fc43-4ef8-af0e-6f4e0cc8f689 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Refreshing network info cache for port a5c8aecb-cefc-4d76-b032-9b122283da51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:20 compute-0 nova_compute[239965]: 2026-01-26 16:26:20.167 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:26:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 283 op/s
Jan 26 16:26:20 compute-0 ceph-mon[75140]: pgmap v2269: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.9 MiB/s wr, 283 op/s
Jan 26 16:26:21 compute-0 nova_compute[239965]: 2026-01-26 16:26:21.273 239969 DEBUG nova.compute.manager [req-54849786-690a-4c25-8dd0-f3ec01dc2898 req-dbdff533-f35d-4f51-8d8f-fe09c3fe8271 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received event network-changed-a5c8aecb-cefc-4d76-b032-9b122283da51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:21 compute-0 nova_compute[239965]: 2026-01-26 16:26:21.273 239969 DEBUG nova.compute.manager [req-54849786-690a-4c25-8dd0-f3ec01dc2898 req-dbdff533-f35d-4f51-8d8f-fe09c3fe8271 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Refreshing instance network info cache due to event network-changed-a5c8aecb-cefc-4d76-b032-9b122283da51. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:21 compute-0 nova_compute[239965]: 2026-01-26 16:26:21.275 239969 DEBUG oslo_concurrency.lockutils [req-54849786-690a-4c25-8dd0-f3ec01dc2898 req-dbdff533-f35d-4f51-8d8f-fe09c3fe8271 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:21 compute-0 sshd-session[363409]: Connection closed by authenticating user root 209.38.206.249 port 56746 [preauth]
Jan 26 16:26:21 compute-0 ovn_controller[146046]: 2026-01-26T16:26:21Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bf:c2:02 10.100.0.7
Jan 26 16:26:21 compute-0 sshd-session[363411]: Invalid user deployer from 209.38.206.249 port 57166
Jan 26 16:26:21 compute-0 sshd-session[363411]: Connection closed by invalid user deployer 209.38.206.249 port 57166 [preauth]
Jan 26 16:26:21 compute-0 nova_compute[239965]: 2026-01-26 16:26:21.947 239969 DEBUG nova.network.neutron [req-50ad5fc2-b812-4c93-b940-51ac2d973e5c req-9e574e56-fc43-4ef8-af0e-6f4e0cc8f689 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Updated VIF entry in instance network info cache for port a5c8aecb-cefc-4d76-b032-9b122283da51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:21 compute-0 nova_compute[239965]: 2026-01-26 16:26:21.948 239969 DEBUG nova.network.neutron [req-50ad5fc2-b812-4c93-b940-51ac2d973e5c req-9e574e56-fc43-4ef8-af0e-6f4e0cc8f689 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Updating instance_info_cache with network_info: [{"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:21 compute-0 nova_compute[239965]: 2026-01-26 16:26:21.965 239969 DEBUG oslo_concurrency.lockutils [req-50ad5fc2-b812-4c93-b940-51ac2d973e5c req-9e574e56-fc43-4ef8-af0e-6f4e0cc8f689 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:21 compute-0 nova_compute[239965]: 2026-01-26 16:26:21.966 239969 DEBUG oslo_concurrency.lockutils [req-54849786-690a-4c25-8dd0-f3ec01dc2898 req-dbdff533-f35d-4f51-8d8f-fe09c3fe8271 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:21 compute-0 nova_compute[239965]: 2026-01-26 16:26:21.966 239969 DEBUG nova.network.neutron [req-54849786-690a-4c25-8dd0-f3ec01dc2898 req-dbdff533-f35d-4f51-8d8f-fe09c3fe8271 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Refreshing network info cache for port a5c8aecb-cefc-4d76-b032-9b122283da51 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:22 compute-0 sshd-session[363413]: Invalid user vpn from 209.38.206.249 port 57174
Jan 26 16:26:22 compute-0 sshd-session[363413]: Connection closed by invalid user vpn 209.38.206.249 port 57174 [preauth]
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.480 239969 DEBUG nova.compute.manager [req-6011f82d-f249-434e-9fa2-a47edabcd7c5 req-febfe9c1-1af1-4b10-8f46-4deb95f15877 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received event network-changed-00461e74-34f4-4d7c-b930-09797f49ca0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.481 239969 DEBUG nova.compute.manager [req-6011f82d-f249-434e-9fa2-a47edabcd7c5 req-febfe9c1-1af1-4b10-8f46-4deb95f15877 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Refreshing instance network info cache due to event network-changed-00461e74-34f4-4d7c-b930-09797f49ca0f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.482 239969 DEBUG oslo_concurrency.lockutils [req-6011f82d-f249-434e-9fa2-a47edabcd7c5 req-febfe9c1-1af1-4b10-8f46-4deb95f15877 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.483 239969 DEBUG oslo_concurrency.lockutils [req-6011f82d-f249-434e-9fa2-a47edabcd7c5 req-febfe9c1-1af1-4b10-8f46-4deb95f15877 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.483 239969 DEBUG nova.network.neutron [req-6011f82d-f249-434e-9fa2-a47edabcd7c5 req-febfe9c1-1af1-4b10-8f46-4deb95f15877 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Refreshing network info cache for port 00461e74-34f4-4d7c-b930-09797f49ca0f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.577 239969 DEBUG oslo_concurrency.lockutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "fed642a5-01b1-488e-aae9-21971c326ddb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.578 239969 DEBUG oslo_concurrency.lockutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.579 239969 DEBUG oslo_concurrency.lockutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.579 239969 DEBUG oslo_concurrency.lockutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.580 239969 DEBUG oslo_concurrency.lockutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.582 239969 INFO nova.compute.manager [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Terminating instance
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.585 239969 DEBUG nova.compute.manager [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:26:22 compute-0 kernel: tap00461e74-34 (unregistering): left promiscuous mode
Jan 26 16:26:22 compute-0 NetworkManager[48954]: <info>  [1769444782.6585] device (tap00461e74-34): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:26:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.9 MiB/s wr, 303 op/s
Jan 26 16:26:22 compute-0 ovn_controller[146046]: 2026-01-26T16:26:22Z|01495|binding|INFO|Releasing lport 00461e74-34f4-4d7c-b930-09797f49ca0f from this chassis (sb_readonly=0)
Jan 26 16:26:22 compute-0 ovn_controller[146046]: 2026-01-26T16:26:22Z|01496|binding|INFO|Setting lport 00461e74-34f4-4d7c-b930-09797f49ca0f down in Southbound
Jan 26 16:26:22 compute-0 ovn_controller[146046]: 2026-01-26T16:26:22Z|01497|binding|INFO|Removing iface tap00461e74-34 ovn-installed in OVS
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.682 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:22.693 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bf:c2:02 10.100.0.7'], port_security=['fa:16:3e:bf:c2:02 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fed642a5-01b1-488e-aae9-21971c326ddb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-843fc191-89a3-4e63-b63d-61d012b45b53', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ea11fac4-1f7d-4dda-9bbd-869a7a1325d2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9843283c-0b92-425e-9399-e711bb75e634, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=00461e74-34f4-4d7c-b930-09797f49ca0f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:22.697 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 00461e74-34f4-4d7c-b930-09797f49ca0f in datapath 843fc191-89a3-4e63-b63d-61d012b45b53 unbound from our chassis
Jan 26 16:26:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:22.700 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 843fc191-89a3-4e63-b63d-61d012b45b53, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:26:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:22.702 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ff306324-7cbb-40b0-bfbc-d2017e8d31ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:22.703 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53 namespace which is not needed anymore
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.704 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:22 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Jan 26 16:26:22 compute-0 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d0000008b.scope: Consumed 14.318s CPU time.
Jan 26 16:26:22 compute-0 systemd-machined[208061]: Machine qemu-168-instance-0000008b terminated.
Jan 26 16:26:22 compute-0 ceph-mon[75140]: pgmap v2270: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 3.9 MiB/s wr, 303 op/s
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.834 239969 INFO nova.virt.libvirt.driver [-] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Instance destroyed successfully.
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.835 239969 DEBUG nova.objects.instance [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid fed642a5-01b1-488e-aae9-21971c326ddb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.850 239969 DEBUG nova.virt.libvirt.vif [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:25:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-178216006',display_name='tempest-TestNetworkBasicOps-server-178216006',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-178216006',id=139,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCMhpUrDh36ssJ9Vym1gLGlLpgBVIIPj31hq9E58pCGxcFwcj3xnqplMiVn3+yrkZXgSAbZbCu9aez902uqN22e9Qra/AwSEnlDS3EhE8XdA110ow16DQGjzd2sXUocLgQ==',key_name='tempest-TestNetworkBasicOps-706367898',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:25:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-7dzx52a9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:25:58Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=fed642a5-01b1-488e-aae9-21971c326ddb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.850 239969 DEBUG nova.network.os_vif_util [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.851 239969 DEBUG nova.network.os_vif_util [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bf:c2:02,bridge_name='br-int',has_traffic_filtering=True,id=00461e74-34f4-4d7c-b930-09797f49ca0f,network=Network(843fc191-89a3-4e63-b63d-61d012b45b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00461e74-34') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.851 239969 DEBUG os_vif [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:c2:02,bridge_name='br-int',has_traffic_filtering=True,id=00461e74-34f4-4d7c-b930-09797f49ca0f,network=Network(843fc191-89a3-4e63-b63d-61d012b45b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00461e74-34') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.852 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.853 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap00461e74-34, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.854 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.855 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.858 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:22 compute-0 neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53[362217]: [NOTICE]   (362221) : haproxy version is 2.8.14-c23fe91
Jan 26 16:26:22 compute-0 neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53[362217]: [NOTICE]   (362221) : path to executable is /usr/sbin/haproxy
Jan 26 16:26:22 compute-0 neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53[362217]: [WARNING]  (362221) : Exiting Master process...
Jan 26 16:26:22 compute-0 neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53[362217]: [WARNING]  (362221) : Exiting Master process...
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.860 239969 INFO os_vif [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bf:c2:02,bridge_name='br-int',has_traffic_filtering=True,id=00461e74-34f4-4d7c-b930-09797f49ca0f,network=Network(843fc191-89a3-4e63-b63d-61d012b45b53),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap00461e74-34')
Jan 26 16:26:22 compute-0 neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53[362217]: [ALERT]    (362221) : Current worker (362223) exited with code 143 (Terminated)
Jan 26 16:26:22 compute-0 neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53[362217]: [WARNING]  (362221) : All workers exited. Exiting... (0)
Jan 26 16:26:22 compute-0 systemd[1]: libpod-60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1.scope: Deactivated successfully.
Jan 26 16:26:22 compute-0 podman[363442]: 2026-01-26 16:26:22.870374032 +0000 UTC m=+0.049476972 container died 60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1-userdata-shm.mount: Deactivated successfully.
Jan 26 16:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f95b3693cf8f87a0352830eeb0cc8426f285c99bc9d5b0d2c048845789966a4-merged.mount: Deactivated successfully.
Jan 26 16:26:22 compute-0 podman[363442]: 2026-01-26 16:26:22.911244493 +0000 UTC m=+0.090347433 container cleanup 60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:26:22 compute-0 systemd[1]: libpod-conmon-60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1.scope: Deactivated successfully.
Jan 26 16:26:22 compute-0 sshd-session[363415]: Invalid user devuser from 209.38.206.249 port 57176
Jan 26 16:26:22 compute-0 podman[363497]: 2026-01-26 16:26:22.983858321 +0000 UTC m=+0.041996299 container remove 60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:26:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:22.993 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[44eb59ae-c14c-4983-886b-4422bcb83995]: (4, ('Mon Jan 26 04:26:22 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53 (60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1)\n60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1\nMon Jan 26 04:26:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53 (60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1)\n60b59fbc2dd64307dbbc6c746d538bc2cc7a90c23d70dce43b1087940cee55f1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:22.994 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c961a33a-c9bb-47c0-89cd-8943f2e4f097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:22.996 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap843fc191-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:22 compute-0 nova_compute[239965]: 2026-01-26 16:26:22.998 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:23 compute-0 kernel: tap843fc191-80: left promiscuous mode
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.003 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:23.007 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f30d5d6a-c639-4cb0-b943-aebf3af1d239]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.023 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:23.023 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[846c761e-f6b7-41b7-8199-bcbd7ac695b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:23.025 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a2c80c18-ea57-4d75-b456-8126fac2a034]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:23.043 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8947c1f9-a958-4b42-8a0a-b0d6e1a5245c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 624824, 'reachable_time': 17338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363512, 'error': None, 'target': 'ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:23 compute-0 sshd-session[363415]: Connection closed by invalid user devuser 209.38.206.249 port 57176 [preauth]
Jan 26 16:26:23 compute-0 systemd[1]: run-netns-ovnmeta\x2d843fc191\x2d89a3\x2d4e63\x2db63d\x2d61d012b45b53.mount: Deactivated successfully.
Jan 26 16:26:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:23.047 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-843fc191-89a3-4e63-b63d-61d012b45b53 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:26:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:23.047 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[35a1999f-a03e-4bf2-b249-797be0de5c53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.165 239969 INFO nova.virt.libvirt.driver [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Deleting instance files /var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb_del
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.166 239969 INFO nova.virt.libvirt.driver [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Deletion of /var/lib/nova/instances/fed642a5-01b1-488e-aae9-21971c326ddb_del complete
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.222 239969 INFO nova.compute.manager [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Took 0.64 seconds to destroy the instance on the hypervisor.
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.223 239969 DEBUG oslo.service.loopingcall [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.223 239969 DEBUG nova.compute.manager [-] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.224 239969 DEBUG nova.network.neutron [-] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.364 239969 DEBUG nova.compute.manager [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received event network-vif-unplugged-00461e74-34f4-4d7c-b930-09797f49ca0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.365 239969 DEBUG oslo_concurrency.lockutils [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.365 239969 DEBUG oslo_concurrency.lockutils [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.366 239969 DEBUG oslo_concurrency.lockutils [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.366 239969 DEBUG nova.compute.manager [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] No waiting events found dispatching network-vif-unplugged-00461e74-34f4-4d7c-b930-09797f49ca0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.367 239969 DEBUG nova.compute.manager [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received event network-vif-unplugged-00461e74-34f4-4d7c-b930-09797f49ca0f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.367 239969 DEBUG nova.compute.manager [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received event network-vif-plugged-00461e74-34f4-4d7c-b930-09797f49ca0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.368 239969 DEBUG oslo_concurrency.lockutils [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.368 239969 DEBUG oslo_concurrency.lockutils [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.368 239969 DEBUG oslo_concurrency.lockutils [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.368 239969 DEBUG nova.compute.manager [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] No waiting events found dispatching network-vif-plugged-00461e74-34f4-4d7c-b930-09797f49ca0f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.368 239969 WARNING nova.compute.manager [req-0b46c834-dece-42a8-af8b-ee4bca36a803 req-b8005b6a-36be-40ae-9692-347d7a06ebd1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received unexpected event network-vif-plugged-00461e74-34f4-4d7c-b930-09797f49ca0f for instance with vm_state active and task_state deleting.
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.370 239969 DEBUG nova.network.neutron [req-54849786-690a-4c25-8dd0-f3ec01dc2898 req-dbdff533-f35d-4f51-8d8f-fe09c3fe8271 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Updated VIF entry in instance network info cache for port a5c8aecb-cefc-4d76-b032-9b122283da51. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.370 239969 DEBUG nova.network.neutron [req-54849786-690a-4c25-8dd0-f3ec01dc2898 req-dbdff533-f35d-4f51-8d8f-fe09c3fe8271 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Updating instance_info_cache with network_info: [{"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.392 239969 DEBUG oslo_concurrency.lockutils [req-54849786-690a-4c25-8dd0-f3ec01dc2898 req-dbdff533-f35d-4f51-8d8f-fe09c3fe8271 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-e9322863-2491-464c-92af-3e1f3181df98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:23 compute-0 sshd-session[363514]: Invalid user moxa from 209.38.206.249 port 57186
Jan 26 16:26:23 compute-0 sshd-session[363514]: Connection closed by invalid user moxa 209.38.206.249 port 57186 [preauth]
Jan 26 16:26:23 compute-0 nova_compute[239965]: 2026-01-26 16:26:23.729 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.077 239969 DEBUG nova.network.neutron [req-6011f82d-f249-434e-9fa2-a47edabcd7c5 req-febfe9c1-1af1-4b10-8f46-4deb95f15877 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Updated VIF entry in instance network info cache for port 00461e74-34f4-4d7c-b930-09797f49ca0f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.078 239969 DEBUG nova.network.neutron [req-6011f82d-f249-434e-9fa2-a47edabcd7c5 req-febfe9c1-1af1-4b10-8f46-4deb95f15877 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Updating instance_info_cache with network_info: [{"id": "00461e74-34f4-4d7c-b930-09797f49ca0f", "address": "fa:16:3e:bf:c2:02", "network": {"id": "843fc191-89a3-4e63-b63d-61d012b45b53", "bridge": "br-int", "label": "tempest-network-smoke--1571296953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap00461e74-34", "ovs_interfaceid": "00461e74-34f4-4d7c-b930-09797f49ca0f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.096 239969 DEBUG oslo_concurrency.lockutils [req-6011f82d-f249-434e-9fa2-a47edabcd7c5 req-febfe9c1-1af1-4b10-8f46-4deb95f15877 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-fed642a5-01b1-488e-aae9-21971c326ddb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:24 compute-0 sshd-session[363516]: Invalid user odoo from 209.38.206.249 port 57196
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.177 239969 DEBUG nova.network.neutron [-] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:24 compute-0 sshd-session[363516]: Connection closed by invalid user odoo 209.38.206.249 port 57196 [preauth]
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.197 239969 INFO nova.compute.manager [-] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Took 0.97 seconds to deallocate network for instance.
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.238 239969 DEBUG oslo_concurrency.lockutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.238 239969 DEBUG oslo_concurrency.lockutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.337 239969 DEBUG oslo_concurrency.processutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.481 239969 INFO nova.compute.manager [None req-ee40eb90-8d23-42e4-b1c1-f0e1af213b35 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Get console output
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.488 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.561 239969 DEBUG nova.compute.manager [req-4cbca2d2-df76-477a-b885-e389b072fa14 req-7276ccb6-a853-4170-bbb9-c42f9bbf69aa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Received event network-vif-deleted-00461e74-34f4-4d7c-b930-09797f49ca0f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:24 compute-0 sshd-session[363518]: Invalid user postgres from 209.38.206.249 port 57208
Jan 26 16:26:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 81 KiB/s wr, 151 op/s
Jan 26 16:26:24 compute-0 sshd-session[363518]: Connection closed by invalid user postgres 209.38.206.249 port 57208 [preauth]
Jan 26 16:26:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958368427' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.911 239969 DEBUG oslo_concurrency.processutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.917 239969 DEBUG nova.compute.provider_tree [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.931 239969 DEBUG nova.scheduler.client.report [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.952 239969 DEBUG oslo_concurrency.lockutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:24 compute-0 nova_compute[239965]: 2026-01-26 16:26:24.970 239969 INFO nova.scheduler.client.report [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance fed642a5-01b1-488e-aae9-21971c326ddb
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.036 239969 DEBUG oslo_concurrency.lockutils [None req-5d1f776c-b0ec-40e8-ba5b-d0f6412d7615 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "fed642a5-01b1-488e-aae9-21971c326ddb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.458s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:25 compute-0 sshd-session[363540]: Invalid user guest from 209.38.206.249 port 57220
Jan 26 16:26:25 compute-0 sshd-session[363540]: Connection closed by invalid user guest 209.38.206.249 port 57220 [preauth]
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.389 239969 DEBUG oslo_concurrency.lockutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.390 239969 DEBUG oslo_concurrency.lockutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.390 239969 DEBUG oslo_concurrency.lockutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.390 239969 DEBUG oslo_concurrency.lockutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.391 239969 DEBUG oslo_concurrency.lockutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.393 239969 INFO nova.compute.manager [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Terminating instance
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.394 239969 DEBUG nova.compute.manager [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:26:25 compute-0 kernel: tap60b0936e-92 (unregistering): left promiscuous mode
Jan 26 16:26:25 compute-0 NetworkManager[48954]: <info>  [1769444785.4487] device (tap60b0936e-92): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.460 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:25 compute-0 ovn_controller[146046]: 2026-01-26T16:26:25Z|01498|binding|INFO|Releasing lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 from this chassis (sb_readonly=0)
Jan 26 16:26:25 compute-0 ovn_controller[146046]: 2026-01-26T16:26:25Z|01499|binding|INFO|Setting lport 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 down in Southbound
Jan 26 16:26:25 compute-0 ovn_controller[146046]: 2026-01-26T16:26:25Z|01500|binding|INFO|Removing iface tap60b0936e-92 ovn-installed in OVS
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.465 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:29:6f 10.100.0.3'], port_security=['fa:16:3e:00:29:6f 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'b1edd3fd-2563-465f-94c8-577ab4debb72', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0604e11d-f932-4e4e-a21d-f453b4c1f6d5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=016b0b61-69af-4aa0-b892-809b130821a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.466 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 in datapath 468ef4a7-70cc-4bbe-a116-35fd32215cb8 unbound from our chassis
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.467 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 468ef4a7-70cc-4bbe-a116-35fd32215cb8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.468 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea8abb5-db57-40ae-9a0d-48d5fa02f42d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.468 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 namespace which is not needed anymore
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.482 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:25 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Deactivated successfully.
Jan 26 16:26:25 compute-0 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000089.scope: Consumed 12.340s CPU time.
Jan 26 16:26:25 compute-0 systemd-machined[208061]: Machine qemu-169-instance-00000089 terminated.
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.618 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.624 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.632 239969 INFO nova.virt.libvirt.driver [-] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Instance destroyed successfully.
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.632 239969 DEBUG nova.objects.instance [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'resources' on Instance uuid b1edd3fd-2563-465f-94c8-577ab4debb72 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.645 239969 DEBUG nova.virt.libvirt.vif [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:25:23Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-218031812',display_name='tempest-TestNetworkAdvancedServerOps-server-218031812',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-218031812',id=137,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIrMoI9V6cKHzfTFlUxBbATN2hHFUio1ZQ6nDgO3nipL1aW8ijkKCse9C6fJ8w0ajCwwIYfL3LOI/6X08nYHu0UQi8GzU+YFfJ0MDsrLZ69a2gtrBoG4QpsmS9eHdII6aA==',key_name='tempest-TestNetworkAdvancedServerOps-1986592575',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:25:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-m0s2s10t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:26:05Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=b1edd3fd-2563-465f-94c8-577ab4debb72,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.646 239969 DEBUG nova.network.os_vif_util [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.647 239969 DEBUG nova.network.os_vif_util [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:29:6f,bridge_name='br-int',has_traffic_filtering=True,id=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8,network=Network(468ef4a7-70cc-4bbe-a116-35fd32215cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b0936e-92') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.648 239969 DEBUG os_vif [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:29:6f,bridge_name='br-int',has_traffic_filtering=True,id=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8,network=Network(468ef4a7-70cc-4bbe-a116-35fd32215cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b0936e-92') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.652 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.652 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60b0936e-92, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:25 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[362688]: [NOTICE]   (362694) : haproxy version is 2.8.14-c23fe91
Jan 26 16:26:25 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[362688]: [NOTICE]   (362694) : path to executable is /usr/sbin/haproxy
Jan 26 16:26:25 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[362688]: [WARNING]  (362694) : Exiting Master process...
Jan 26 16:26:25 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[362688]: [WARNING]  (362694) : Exiting Master process...
Jan 26 16:26:25 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[362688]: [ALERT]    (362694) : Current worker (362696) exited with code 143 (Terminated)
Jan 26 16:26:25 compute-0 neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8[362688]: [WARNING]  (362694) : All workers exited. Exiting... (0)
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.657 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:25 compute-0 systemd[1]: libpod-31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8.scope: Deactivated successfully.
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:25 compute-0 podman[363567]: 2026-01-26 16:26:25.664703495 +0000 UTC m=+0.115517061 container died 31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.665 239969 INFO os_vif [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:29:6f,bridge_name='br-int',has_traffic_filtering=True,id=60b0936e-92d4-4117-ad1c-6c6ba1f53cd8,network=Network(468ef4a7-70cc-4bbe-a116-35fd32215cb8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60b0936e-92')
Jan 26 16:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8-userdata-shm.mount: Deactivated successfully.
Jan 26 16:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ad63894d58fe0ffc073f7a032c19755edcdb07d0069e852e7c5e50387b8b613-merged.mount: Deactivated successfully.
Jan 26 16:26:25 compute-0 podman[363567]: 2026-01-26 16:26:25.705823871 +0000 UTC m=+0.156637437 container cleanup 31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:26:25 compute-0 systemd[1]: libpod-conmon-31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8.scope: Deactivated successfully.
Jan 26 16:26:25 compute-0 ceph-mon[75140]: pgmap v2271: 305 pgs: 305 active+clean; 418 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 81 KiB/s wr, 151 op/s
Jan 26 16:26:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1958368427' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:25 compute-0 sshd-session[363544]: Invalid user cassandra from 209.38.206.249 port 57232
Jan 26 16:26:25 compute-0 podman[363625]: 2026-01-26 16:26:25.825710998 +0000 UTC m=+0.093652155 container remove 31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.834 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[66cae6b5-9c58-443f-8856-e584b4d63cf2]: (4, ('Mon Jan 26 04:26:25 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 (31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8)\n31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8\nMon Jan 26 04:26:25 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 (31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8)\n31e5d3d685b4a87d51ace2aa0633d8f2b5aae985d6cebbcca9aa61c05e6e76b8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.837 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ed5b1d-d7de-4907-b6b0-6ad85b4fb7cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.838 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap468ef4a7-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.841 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:25 compute-0 kernel: tap468ef4a7-70: left promiscuous mode
Jan 26 16:26:25 compute-0 ovn_controller[146046]: 2026-01-26T16:26:25Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5d:8a:76 10.100.0.13
Jan 26 16:26:25 compute-0 sshd-session[363544]: Connection closed by invalid user cassandra 209.38.206.249 port 57232 [preauth]
Jan 26 16:26:25 compute-0 nova_compute[239965]: 2026-01-26 16:26:25.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:25 compute-0 ovn_controller[146046]: 2026-01-26T16:26:25Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5d:8a:76 10.100.0.13
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.932 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[623a1004-d296-4491-9ef2-27599783ee2a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.951 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[605802fb-4efe-4324-bc77-dadc96d52c89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.953 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[63be112e-68a0-47c2-802d-7a051f03b664]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.975 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f36ba7aa-ba6e-422e-9713-43b98b57e6ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 625633, 'reachable_time': 36585, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363640, 'error': None, 'target': 'ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.977 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-468ef4a7-70cc-4bbe-a116-35fd32215cb8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:26:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:25.977 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[26ae728b-b0bc-4175-826b-912f30e66039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:25 compute-0 systemd[1]: run-netns-ovnmeta\x2d468ef4a7\x2d70cc\x2d4bbe\x2da116\x2d35fd32215cb8.mount: Deactivated successfully.
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.083 239969 INFO nova.virt.libvirt.driver [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Deleting instance files /var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72_del
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.083 239969 INFO nova.virt.libvirt.driver [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Deletion of /var/lib/nova/instances/b1edd3fd-2563-465f-94c8-577ab4debb72_del complete
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.145 239969 INFO nova.compute.manager [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Took 0.75 seconds to destroy the instance on the hypervisor.
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.145 239969 DEBUG oslo.service.loopingcall [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.146 239969 DEBUG nova.compute.manager [-] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.146 239969 DEBUG nova.network.neutron [-] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.649 239969 DEBUG nova.compute.manager [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-changed-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.650 239969 DEBUG nova.compute.manager [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Refreshing instance network info cache due to event network-changed-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.650 239969 DEBUG oslo_concurrency.lockutils [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.651 239969 DEBUG oslo_concurrency.lockutils [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:26 compute-0 nova_compute[239965]: 2026-01-26 16:26:26.651 239969 DEBUG nova.network.neutron [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Refreshing network info cache for port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 392 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 886 KiB/s wr, 195 op/s
Jan 26 16:26:27 compute-0 nova_compute[239965]: 2026-01-26 16:26:27.125 239969 DEBUG nova.network.neutron [-] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:27 compute-0 nova_compute[239965]: 2026-01-26 16:26:27.152 239969 INFO nova.compute.manager [-] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Took 1.01 seconds to deallocate network for instance.
Jan 26 16:26:27 compute-0 nova_compute[239965]: 2026-01-26 16:26:27.215 239969 DEBUG oslo_concurrency.lockutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:27 compute-0 nova_compute[239965]: 2026-01-26 16:26:27.216 239969 DEBUG oslo_concurrency.lockutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:27 compute-0 nova_compute[239965]: 2026-01-26 16:26:27.358 239969 DEBUG oslo_concurrency.processutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:27 compute-0 sshd-session[363642]: Invalid user web from 209.38.206.249 port 57234
Jan 26 16:26:27 compute-0 sshd-session[363642]: Connection closed by invalid user web 209.38.206.249 port 57234 [preauth]
Jan 26 16:26:27 compute-0 ceph-mon[75140]: pgmap v2272: 305 pgs: 305 active+clean; 392 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 886 KiB/s wr, 195 op/s
Jan 26 16:26:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1833226998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:27 compute-0 nova_compute[239965]: 2026-01-26 16:26:27.996 239969 DEBUG oslo_concurrency.processutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.004 239969 DEBUG nova.compute.provider_tree [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.033 239969 DEBUG nova.scheduler.client.report [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:28 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:26:28 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.067 239969 DEBUG oslo_concurrency.lockutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.126 239969 INFO nova.scheduler.client.report [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Deleted allocations for instance b1edd3fd-2563-465f-94c8-577ab4debb72
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.197 239969 DEBUG oslo_concurrency.lockutils [None req-a090df8e-ba4f-4a76-95a8-f384dbbb8fd4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.453 239969 DEBUG nova.network.neutron [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updated VIF entry in instance network info cache for port 60b0936e-92d4-4117-ad1c-6c6ba1f53cd8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.454 239969 DEBUG nova.network.neutron [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Updating instance_info_cache with network_info: [{"id": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "address": "fa:16:3e:00:29:6f", "network": {"id": "468ef4a7-70cc-4bbe-a116-35fd32215cb8", "bridge": "br-int", "label": "tempest-network-smoke--1786195953", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60b0936e-92", "ovs_interfaceid": "60b0936e-92d4-4117-ad1c-6c6ba1f53cd8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.477 239969 DEBUG oslo_concurrency.lockutils [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b1edd3fd-2563-465f-94c8-577ab4debb72" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.477 239969 DEBUG nova.compute.manager [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-unplugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.478 239969 DEBUG oslo_concurrency.lockutils [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.478 239969 DEBUG oslo_concurrency.lockutils [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.478 239969 DEBUG oslo_concurrency.lockutils [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.478 239969 DEBUG nova.compute.manager [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] No waiting events found dispatching network-vif-unplugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.478 239969 DEBUG nova.compute.manager [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-unplugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.479 239969 DEBUG nova.compute.manager [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.479 239969 DEBUG oslo_concurrency.lockutils [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.479 239969 DEBUG oslo_concurrency.lockutils [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.479 239969 DEBUG oslo_concurrency.lockutils [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b1edd3fd-2563-465f-94c8-577ab4debb72-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.479 239969 DEBUG nova.compute.manager [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] No waiting events found dispatching network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.479 239969 WARNING nova.compute.manager [req-1d8d95f5-e5b8-492d-b696-0c84f393f7b4 req-eafa9afb-3395-46a7-a90b-b4dd94e2d34e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received unexpected event network-vif-plugged-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 for instance with vm_state active and task_state deleting.
Jan 26 16:26:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 214 op/s
Jan 26 16:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:26:28
Jan 26 16:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', '.rgw.root', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', '.mgr', 'images', 'vms', 'cephfs.cephfs.meta']
Jan 26 16:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1833226998' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:28 compute-0 ceph-mon[75140]: pgmap v2273: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 214 op/s
Jan 26 16:26:28 compute-0 nova_compute[239965]: 2026-01-26 16:26:28.741 239969 DEBUG nova.compute.manager [req-11388bde-73b0-4a42-865a-b114b15d94e4 req-a5a93102-6a8a-4fe3-aac4-1848804c36d6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Received event network-vif-deleted-60b0936e-92d4-4117-ad1c-6c6ba1f53cd8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:29 compute-0 ovn_controller[146046]: 2026-01-26T16:26:29Z|01501|binding|INFO|Releasing lport 1de7f01b-b4a3-4a1c-9255-2968d2bb9781 from this chassis (sb_readonly=0)
Jan 26 16:26:29 compute-0 ovn_controller[146046]: 2026-01-26T16:26:29Z|01502|binding|INFO|Releasing lport 36c96456-84ef-4832-9322-d686bc015797 from this chassis (sb_readonly=0)
Jan 26 16:26:29 compute-0 nova_compute[239965]: 2026-01-26 16:26:29.989 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:26:30 compute-0 nova_compute[239965]: 2026-01-26 16:26:30.657 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:26:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:26:31 compute-0 ceph-mon[75140]: pgmap v2274: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.2 MiB/s wr, 140 op/s
Jan 26 16:26:31 compute-0 ovn_controller[146046]: 2026-01-26T16:26:31Z|01503|binding|INFO|Releasing lport 1de7f01b-b4a3-4a1c-9255-2968d2bb9781 from this chassis (sb_readonly=0)
Jan 26 16:26:31 compute-0 ovn_controller[146046]: 2026-01-26T16:26:31Z|01504|binding|INFO|Releasing lport 36c96456-84ef-4832-9322-d686bc015797 from this chassis (sb_readonly=0)
Jan 26 16:26:31 compute-0 nova_compute[239965]: 2026-01-26 16:26:31.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 314 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 928 KiB/s rd, 3.8 MiB/s wr, 176 op/s
Jan 26 16:26:32 compute-0 ceph-mon[75140]: pgmap v2275: 305 pgs: 305 active+clean; 314 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 928 KiB/s rd, 3.8 MiB/s wr, 176 op/s
Jan 26 16:26:32 compute-0 ovn_controller[146046]: 2026-01-26T16:26:32Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d2:07:bd 10.100.0.9
Jan 26 16:26:32 compute-0 ovn_controller[146046]: 2026-01-26T16:26:32Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d2:07:bd 10.100.0.9
Jan 26 16:26:33 compute-0 nova_compute[239965]: 2026-01-26 16:26:33.194 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:33 compute-0 nova_compute[239965]: 2026-01-26 16:26:33.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 314 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 3.8 MiB/s wr, 156 op/s
Jan 26 16:26:34 compute-0 ceph-mon[75140]: pgmap v2276: 305 pgs: 305 active+clean; 314 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 532 KiB/s rd, 3.8 MiB/s wr, 156 op/s
Jan 26 16:26:35 compute-0 nova_compute[239965]: 2026-01-26 16:26:35.661 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 321 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 4.2 MiB/s wr, 172 op/s
Jan 26 16:26:36 compute-0 ceph-mon[75140]: pgmap v2277: 305 pgs: 305 active+clean; 321 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 4.2 MiB/s wr, 172 op/s
Jan 26 16:26:37 compute-0 sshd-session[363668]: Invalid user solana from 45.148.10.240 port 46114
Jan 26 16:26:37 compute-0 sshd-session[363668]: Connection closed by invalid user solana 45.148.10.240 port 46114 [preauth]
Jan 26 16:26:37 compute-0 nova_compute[239965]: 2026-01-26 16:26:37.833 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444782.8311622, fed642a5-01b1-488e-aae9-21971c326ddb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:37 compute-0 nova_compute[239965]: 2026-01-26 16:26:37.833 239969 INFO nova.compute.manager [-] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] VM Stopped (Lifecycle Event)
Jan 26 16:26:37 compute-0 nova_compute[239965]: 2026-01-26 16:26:37.854 239969 DEBUG nova.compute.manager [None req-e212b783-56ab-4980-a12e-1ea26ec43cc1 - - - - - -] [instance: fed642a5-01b1-488e-aae9-21971c326ddb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:37 compute-0 nova_compute[239965]: 2026-01-26 16:26:37.862 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:38 compute-0 nova_compute[239965]: 2026-01-26 16:26:38.531 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 3.5 MiB/s wr, 141 op/s
Jan 26 16:26:38 compute-0 nova_compute[239965]: 2026-01-26 16:26:38.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:38 compute-0 ceph-mon[75140]: pgmap v2278: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 3.5 MiB/s wr, 141 op/s
Jan 26 16:26:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:39 compute-0 podman[363671]: 2026-01-26 16:26:39.410151968 +0000 UTC m=+0.084613133 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 26 16:26:39 compute-0 podman[363670]: 2026-01-26 16:26:39.420180273 +0000 UTC m=+0.096316940 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.219 239969 DEBUG oslo_concurrency.lockutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "e9322863-2491-464c-92af-3e1f3181df98" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.220 239969 DEBUG oslo_concurrency.lockutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.220 239969 DEBUG oslo_concurrency.lockutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "e9322863-2491-464c-92af-3e1f3181df98-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.220 239969 DEBUG oslo_concurrency.lockutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.220 239969 DEBUG oslo_concurrency.lockutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.221 239969 INFO nova.compute.manager [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Terminating instance
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.222 239969 DEBUG nova.compute.manager [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:26:40 compute-0 kernel: tapa5c8aecb-ce (unregistering): left promiscuous mode
Jan 26 16:26:40 compute-0 NetworkManager[48954]: <info>  [1769444800.3293] device (tapa5c8aecb-ce): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:26:40 compute-0 ovn_controller[146046]: 2026-01-26T16:26:40Z|01505|binding|INFO|Releasing lport a5c8aecb-cefc-4d76-b032-9b122283da51 from this chassis (sb_readonly=0)
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.340 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:40 compute-0 ovn_controller[146046]: 2026-01-26T16:26:40Z|01506|binding|INFO|Setting lport a5c8aecb-cefc-4d76-b032-9b122283da51 down in Southbound
Jan 26 16:26:40 compute-0 ovn_controller[146046]: 2026-01-26T16:26:40Z|01507|binding|INFO|Removing iface tapa5c8aecb-ce ovn-installed in OVS
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.350 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d2:07:bd 10.100.0.9', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'e9322863-2491-464c-92af-3e1f3181df98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c35e7e4-1920-4b21-8562-6698247816ad, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=a5c8aecb-cefc-4d76-b032-9b122283da51) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.351 156105 INFO neutron.agent.ovn.metadata.agent [-] Port a5c8aecb-cefc-4d76-b032-9b122283da51 in datapath c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073 unbound from our chassis
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.353 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.378 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a4fd01b7-3b1d-4f82-ad4c-966832270617]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:40 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.421 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cb8bd919-98c4-435b-981c-e243897a4d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:40 compute-0 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d0000008d.scope: Consumed 17.486s CPU time.
Jan 26 16:26:40 compute-0 systemd-machined[208061]: Machine qemu-171-instance-0000008d terminated.
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.426 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[1965a677-a4af-4a6c-b668-6a985945539a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.447 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.454 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.463 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cc85c9c4-0d47-4ac6-9f2b-b117417d18e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.465 239969 INFO nova.virt.libvirt.driver [-] [instance: e9322863-2491-464c-92af-3e1f3181df98] Instance destroyed successfully.
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.465 239969 DEBUG nova.objects.instance [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'resources' on Instance uuid e9322863-2491-464c-92af-3e1f3181df98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.482 239969 DEBUG nova.virt.libvirt.vif [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:26:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1689387241',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1689387241',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-gen',id=141,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByI6tCrfJLyIj6sIsV+Ekerorscq6qkiM/Lle83NCyTUlha/oPyViDL+PBWA7bXq9+9oJOHfnt/sOtyAsLXUYu52zdtEkcLJAtD8nhzn87iwgtBznYvhaPSD4tmFm9Vgw==',key_name='tempest-TestSecurityGroupsBasicOps-173561617',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:26:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-0xx03ctf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:26:15Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=e9322863-2491-464c-92af-3e1f3181df98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.483 239969 DEBUG nova.network.os_vif_util [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "a5c8aecb-cefc-4d76-b032-9b122283da51", "address": "fa:16:3e:d2:07:bd", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa5c8aecb-ce", "ovs_interfaceid": "a5c8aecb-cefc-4d76-b032-9b122283da51", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.485 239969 DEBUG nova.network.os_vif_util [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d2:07:bd,bridge_name='br-int',has_traffic_filtering=True,id=a5c8aecb-cefc-4d76-b032-9b122283da51,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c8aecb-ce') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.485 239969 DEBUG os_vif [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:07:bd,bridge_name='br-int',has_traffic_filtering=True,id=a5c8aecb-cefc-4d76-b032-9b122283da51,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c8aecb-ce') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.488 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.488 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5c8aecb-ce, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.488 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7a31e3-b130-4c91-8fb8-b390f95d4e80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc3b6f1ea-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:48:1b:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 7, 'rx_bytes': 1370, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 419], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622993, 'reachable_time': 28486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 13, 'inoctets': 936, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 13, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 936, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 13, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363734, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.490 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.495 239969 INFO os_vif [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d2:07:bd,bridge_name='br-int',has_traffic_filtering=True,id=a5c8aecb-cefc-4d76-b032-9b122283da51,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa5c8aecb-ce')
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.506 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[76f60476-d05b-47f3-a13c-7d13d2ee841d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc3b6f1ea-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 623005, 'tstamp': 623005}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363735, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc3b6f1ea-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 623008, 'tstamp': 623008}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 363735, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.508 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3b6f1ea-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.511 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc3b6f1ea-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.512 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.513 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc3b6f1ea-40, col_values=(('external_ids', {'iface-id': '1de7f01b-b4a3-4a1c-9255-2968d2bb9781'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:40.514 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.515 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.629 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444785.6290126, b1edd3fd-2563-465f-94c8-577ab4debb72 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.630 239969 INFO nova.compute.manager [-] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] VM Stopped (Lifecycle Event)
Jan 26 16:26:40 compute-0 nova_compute[239965]: 2026-01-26 16:26:40.654 239969 DEBUG nova.compute.manager [None req-65401b31-d926-4fb5-ac63-1d22f1765cc3 - - - - - -] [instance: b1edd3fd-2563-465f-94c8-577ab4debb72] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:26:41 compute-0 ceph-mon[75140]: pgmap v2279: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.306 239969 INFO nova.virt.libvirt.driver [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Deleting instance files /var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98_del
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.307 239969 INFO nova.virt.libvirt.driver [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Deletion of /var/lib/nova/instances/e9322863-2491-464c-92af-3e1f3181df98_del complete
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.371 239969 INFO nova.compute.manager [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Took 1.15 seconds to destroy the instance on the hypervisor.
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.372 239969 DEBUG oslo.service.loopingcall [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.373 239969 DEBUG nova.compute.manager [-] [instance: e9322863-2491-464c-92af-3e1f3181df98] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.373 239969 DEBUG nova.network.neutron [-] [instance: e9322863-2491-464c-92af-3e1f3181df98] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.635 239969 DEBUG nova.compute.manager [req-02aa433f-ab05-4a37-bfbb-b40e0f2ccafe req-62ed4c73-d4c9-4c55-bbc1-1a1e1330e237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received event network-vif-unplugged-a5c8aecb-cefc-4d76-b032-9b122283da51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.636 239969 DEBUG oslo_concurrency.lockutils [req-02aa433f-ab05-4a37-bfbb-b40e0f2ccafe req-62ed4c73-d4c9-4c55-bbc1-1a1e1330e237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e9322863-2491-464c-92af-3e1f3181df98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.636 239969 DEBUG oslo_concurrency.lockutils [req-02aa433f-ab05-4a37-bfbb-b40e0f2ccafe req-62ed4c73-d4c9-4c55-bbc1-1a1e1330e237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.637 239969 DEBUG oslo_concurrency.lockutils [req-02aa433f-ab05-4a37-bfbb-b40e0f2ccafe req-62ed4c73-d4c9-4c55-bbc1-1a1e1330e237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.637 239969 DEBUG nova.compute.manager [req-02aa433f-ab05-4a37-bfbb-b40e0f2ccafe req-62ed4c73-d4c9-4c55-bbc1-1a1e1330e237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] No waiting events found dispatching network-vif-unplugged-a5c8aecb-cefc-4d76-b032-9b122283da51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.638 239969 DEBUG nova.compute.manager [req-02aa433f-ab05-4a37-bfbb-b40e0f2ccafe req-62ed4c73-d4c9-4c55-bbc1-1a1e1330e237 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received event network-vif-unplugged-a5c8aecb-cefc-4d76-b032-9b122283da51 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:26:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:41.850 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:41 compute-0 nova_compute[239965]: 2026-01-26 16:26:41.851 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:41.852 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:26:42 compute-0 nova_compute[239965]: 2026-01-26 16:26:42.077 239969 DEBUG nova.network.neutron [-] [instance: e9322863-2491-464c-92af-3e1f3181df98] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:42 compute-0 nova_compute[239965]: 2026-01-26 16:26:42.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:42 compute-0 nova_compute[239965]: 2026-01-26 16:26:42.092 239969 INFO nova.compute.manager [-] [instance: e9322863-2491-464c-92af-3e1f3181df98] Took 0.72 seconds to deallocate network for instance.
Jan 26 16:26:42 compute-0 nova_compute[239965]: 2026-01-26 16:26:42.148 239969 DEBUG oslo_concurrency.lockutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:42 compute-0 nova_compute[239965]: 2026-01-26 16:26:42.148 239969 DEBUG oslo_concurrency.lockutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:42 compute-0 nova_compute[239965]: 2026-01-26 16:26:42.174 239969 DEBUG nova.compute.manager [req-ef960559-49e5-4e14-a1e7-0d5ea3310d8b req-925ca7cc-2776-47d7-b477-b549136b6376 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received event network-vif-deleted-a5c8aecb-cefc-4d76-b032-9b122283da51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:42 compute-0 nova_compute[239965]: 2026-01-26 16:26:42.505 239969 DEBUG oslo_concurrency.processutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Jan 26 16:26:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1928222756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.135 239969 DEBUG oslo_concurrency.processutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.630s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.142 239969 DEBUG nova.compute.provider_tree [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.166 239969 DEBUG nova.scheduler.client.report [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.194 239969 DEBUG oslo_concurrency.lockutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.223 239969 INFO nova.scheduler.client.report [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Deleted allocations for instance e9322863-2491-464c-92af-3e1f3181df98
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.294 239969 DEBUG oslo_concurrency.lockutils [None req-f11a794e-5fdd-4f79-b4aa-2b4214733e99 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:43 compute-0 ceph-mon[75140]: pgmap v2280: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 2.2 MiB/s wr, 87 op/s
Jan 26 16:26:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1928222756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.740 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.747 239969 DEBUG nova.compute.manager [req-76de6420-ee48-49d0-990f-6006176fdf6b req-73347a94-c21f-4d3e-8fee-105c212b9be7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received event network-vif-plugged-a5c8aecb-cefc-4d76-b032-9b122283da51 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.748 239969 DEBUG oslo_concurrency.lockutils [req-76de6420-ee48-49d0-990f-6006176fdf6b req-73347a94-c21f-4d3e-8fee-105c212b9be7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "e9322863-2491-464c-92af-3e1f3181df98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.748 239969 DEBUG oslo_concurrency.lockutils [req-76de6420-ee48-49d0-990f-6006176fdf6b req-73347a94-c21f-4d3e-8fee-105c212b9be7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.749 239969 DEBUG oslo_concurrency.lockutils [req-76de6420-ee48-49d0-990f-6006176fdf6b req-73347a94-c21f-4d3e-8fee-105c212b9be7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "e9322863-2491-464c-92af-3e1f3181df98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.750 239969 DEBUG nova.compute.manager [req-76de6420-ee48-49d0-990f-6006176fdf6b req-73347a94-c21f-4d3e-8fee-105c212b9be7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] No waiting events found dispatching network-vif-plugged-a5c8aecb-cefc-4d76-b032-9b122283da51 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.750 239969 WARNING nova.compute.manager [req-76de6420-ee48-49d0-990f-6006176fdf6b req-73347a94-c21f-4d3e-8fee-105c212b9be7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: e9322863-2491-464c-92af-3e1f3181df98] Received unexpected event network-vif-plugged-a5c8aecb-cefc-4d76-b032-9b122283da51 for instance with vm_state deleted and task_state None.
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.955 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.956 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:43 compute-0 nova_compute[239965]: 2026-01-26 16:26:43.978 239969 DEBUG nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.046 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.047 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.056 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.056 239969 INFO nova.compute.claims [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.196 239969 DEBUG oslo_concurrency.lockutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "deadd136-f039-481d-bfc4-bbc46bc71563" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.197 239969 DEBUG oslo_concurrency.lockutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.197 239969 DEBUG oslo_concurrency.lockutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.198 239969 DEBUG oslo_concurrency.lockutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.198 239969 DEBUG oslo_concurrency.lockutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.200 239969 INFO nova.compute.manager [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Terminating instance
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.202 239969 DEBUG nova.compute.manager [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.220 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:44 compute-0 kernel: tap7ad6a8b1-21 (unregistering): left promiscuous mode
Jan 26 16:26:44 compute-0 NetworkManager[48954]: <info>  [1769444804.2635] device (tap7ad6a8b1-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:26:44 compute-0 ovn_controller[146046]: 2026-01-26T16:26:44Z|01508|binding|INFO|Releasing lport 7ad6a8b1-2176-4542-ab37-6ece8654876d from this chassis (sb_readonly=0)
Jan 26 16:26:44 compute-0 ovn_controller[146046]: 2026-01-26T16:26:44Z|01509|binding|INFO|Setting lport 7ad6a8b1-2176-4542-ab37-6ece8654876d down in Southbound
Jan 26 16:26:44 compute-0 ovn_controller[146046]: 2026-01-26T16:26:44Z|01510|binding|INFO|Removing iface tap7ad6a8b1-21 ovn-installed in OVS
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.274 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.279 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:61:f7 10.100.0.11'], port_security=['fa:16:3e:f5:61:f7 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'deadd136-f039-481d-bfc4-bbc46bc71563', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '12f86669-b30d-4046-b58e-a323b0390514 b3cd03ab-361a-4eb5-b4b2-10e2c053ad7e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8c35e7e4-1920-4b21-8562-6698247816ad, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=7ad6a8b1-2176-4542-ab37-6ece8654876d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.281 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 7ad6a8b1-2176-4542-ab37-6ece8654876d in datapath c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073 unbound from our chassis
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.283 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.282 239969 DEBUG nova.compute.manager [req-234dbb04-abe5-4386-b65a-a3851ace19c4 req-d4945235-afbe-44f0-9a3b-72f7de872e6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Received event network-changed-7ad6a8b1-2176-4542-ab37-6ece8654876d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.283 239969 DEBUG nova.compute.manager [req-234dbb04-abe5-4386-b65a-a3851ace19c4 req-d4945235-afbe-44f0-9a3b-72f7de872e6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Refreshing instance network info cache due to event network-changed-7ad6a8b1-2176-4542-ab37-6ece8654876d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.284 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3092b6-6543-41e8-a2f2-28cf662dbc5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.284 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073 namespace which is not needed anymore
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.284 239969 DEBUG oslo_concurrency.lockutils [req-234dbb04-abe5-4386-b65a-a3851ace19c4 req-d4945235-afbe-44f0-9a3b-72f7de872e6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.285 239969 DEBUG oslo_concurrency.lockutils [req-234dbb04-abe5-4386-b65a-a3851ace19c4 req-d4945235-afbe-44f0-9a3b-72f7de872e6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.285 239969 DEBUG nova.network.neutron [req-234dbb04-abe5-4386-b65a-a3851ace19c4 req-d4945235-afbe-44f0-9a3b-72f7de872e6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Refreshing network info cache for port 7ad6a8b1-2176-4542-ab37-6ece8654876d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.293 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:44 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Jan 26 16:26:44 compute-0 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d0000008a.scope: Consumed 15.880s CPU time.
Jan 26 16:26:44 compute-0 systemd-machined[208061]: Machine qemu-167-instance-0000008a terminated.
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.443 239969 INFO nova.virt.libvirt.driver [-] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Instance destroyed successfully.
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.444 239969 DEBUG nova.objects.instance [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'resources' on Instance uuid deadd136-f039-481d-bfc4-bbc46bc71563 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:44 compute-0 neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073[360863]: [NOTICE]   (360867) : haproxy version is 2.8.14-c23fe91
Jan 26 16:26:44 compute-0 neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073[360863]: [NOTICE]   (360867) : path to executable is /usr/sbin/haproxy
Jan 26 16:26:44 compute-0 neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073[360863]: [WARNING]  (360867) : Exiting Master process...
Jan 26 16:26:44 compute-0 neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073[360863]: [WARNING]  (360867) : Exiting Master process...
Jan 26 16:26:44 compute-0 neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073[360863]: [ALERT]    (360867) : Current worker (360869) exited with code 143 (Terminated)
Jan 26 16:26:44 compute-0 neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073[360863]: [WARNING]  (360867) : All workers exited. Exiting... (0)
Jan 26 16:26:44 compute-0 systemd[1]: libpod-ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f.scope: Deactivated successfully.
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.458 239969 DEBUG nova.virt.libvirt.vif [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:25:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1437534189',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-1437534189',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=138,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBByI6tCrfJLyIj6sIsV+Ekerorscq6qkiM/Lle83NCyTUlha/oPyViDL+PBWA7bXq9+9oJOHfnt/sOtyAsLXUYu52zdtEkcLJAtD8nhzn87iwgtBznYvhaPSD4tmFm9Vgw==',key_name='tempest-TestSecurityGroupsBasicOps-173561617',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:25:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-4w5dnqt8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:25:39Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=deadd136-f039-481d-bfc4-bbc46bc71563,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:26:44 compute-0 podman[363807]: 2026-01-26 16:26:44.459465994 +0000 UTC m=+0.051523432 container died ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.460 239969 DEBUG nova.network.os_vif_util [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.462 239969 DEBUG nova.network.os_vif_util [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:61:f7,bridge_name='br-int',has_traffic_filtering=True,id=7ad6a8b1-2176-4542-ab37-6ece8654876d,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ad6a8b1-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.462 239969 DEBUG os_vif [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:61:f7,bridge_name='br-int',has_traffic_filtering=True,id=7ad6a8b1-2176-4542-ab37-6ece8654876d,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ad6a8b1-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.464 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.464 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ad6a8b1-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.466 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.468 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.473 239969 INFO os_vif [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:61:f7,bridge_name='br-int',has_traffic_filtering=True,id=7ad6a8b1-2176-4542-ab37-6ece8654876d,network=Network(c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ad6a8b1-21')
Jan 26 16:26:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f-userdata-shm.mount: Deactivated successfully.
Jan 26 16:26:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa1a158347e8d9fddaacd7ba72c9cb7d2d401475658409d597b6ff553bddb023-merged.mount: Deactivated successfully.
Jan 26 16:26:44 compute-0 podman[363807]: 2026-01-26 16:26:44.507711077 +0000 UTC m=+0.099768525 container cleanup ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:26:44 compute-0 systemd[1]: libpod-conmon-ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f.scope: Deactivated successfully.
Jan 26 16:26:44 compute-0 podman[363879]: 2026-01-26 16:26:44.589379696 +0000 UTC m=+0.054847894 container remove ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.598 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[55c3febc-d79d-4699-977d-8b93d5e8ee0a]: (4, ('Mon Jan 26 04:26:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073 (ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f)\nba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f\nMon Jan 26 04:26:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073 (ba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f)\nba458dd7c249e4085e1667f60a6ab238df1c9b32814752f3326dd42f0a364e5f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.600 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a48fd469-0036-4d83-95eb-31d9c77f95e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.601 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc3b6f1ea-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.603 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:44 compute-0 kernel: tapc3b6f1ea-40: left promiscuous mode
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.619 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.624 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f0c8a4-d4d9-473f-aeca-fae6bb099925]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.634 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.635 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.637 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c1510c48-921e-4e11-8384-f68d23dcb892]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.639 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7f2023c8-8934-40ff-9b74-152275090420]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.654 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e61cdfa9-e485-4637-b0fb-c894b7d30c73]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 622987, 'reachable_time': 38940, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363895, 'error': None, 'target': 'ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.655 239969 DEBUG nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:26:44 compute-0 systemd[1]: run-netns-ovnmeta\x2dc3b6f1ea\x2d4b30\x2d47ee\x2db3f9\x2d22cbfc4d9073.mount: Deactivated successfully.
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.656 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:26:44 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:44.657 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[406c1ff1-da94-4651-a4cb-0fcf9d8d99d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 486 KiB/s wr, 52 op/s
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.714 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.758 239969 INFO nova.virt.libvirt.driver [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Deleting instance files /var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563_del
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.759 239969 INFO nova.virt.libvirt.driver [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Deletion of /var/lib/nova/instances/deadd136-f039-481d-bfc4-bbc46bc71563_del complete
Jan 26 16:26:44 compute-0 ceph-mon[75140]: pgmap v2281: 305 pgs: 305 active+clean; 246 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 174 KiB/s rd, 486 KiB/s wr, 52 op/s
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.815 239969 INFO nova.compute.manager [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Took 0.61 seconds to destroy the instance on the hypervisor.
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.815 239969 DEBUG oslo.service.loopingcall [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.816 239969 DEBUG nova.compute.manager [-] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.816 239969 DEBUG nova.network.neutron [-] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:26:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/169090420' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.871 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.650s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.875 239969 DEBUG nova.compute.provider_tree [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.895 239969 DEBUG nova.scheduler.client.report [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.915 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.916 239969 DEBUG nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.918 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.923 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.923 239969 INFO nova.compute.claims [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.968 239969 DEBUG nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.968 239969 DEBUG nova.network.neutron [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:26:44 compute-0 nova_compute[239965]: 2026-01-26 16:26:44.989 239969 INFO nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.007 239969 DEBUG nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.093 239969 DEBUG nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.095 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.095 239969 INFO nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Creating image(s)
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.125 239969 DEBUG nova.storage.rbd_utils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.161 239969 DEBUG nova.storage.rbd_utils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.196 239969 DEBUG nova.storage.rbd_utils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.201 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.241 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.285 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.287 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.288 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.289 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.324 239969 DEBUG nova.storage.rbd_utils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.330 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.622 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.293s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.692 239969 DEBUG nova.storage.rbd_utils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.770 239969 DEBUG nova.objects.instance [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid 759e14a7-49ae-4f6f-bb96-80fc691d50e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/169090420' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.795 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.795 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Ensure instance console log exists: /var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.796 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.796 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.797 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4216444232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.883 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.642s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.893 239969 DEBUG nova.compute.provider_tree [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.915 239969 DEBUG nova.scheduler.client.report [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.938 239969 DEBUG nova.policy [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.943 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:45 compute-0 nova_compute[239965]: 2026-01-26 16:26:45.944 239969 DEBUG nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.018 239969 DEBUG nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.019 239969 DEBUG nova.network.neutron [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.042 239969 INFO nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.061 239969 DEBUG nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.145 239969 DEBUG nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.148 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.149 239969 INFO nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Creating image(s)
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.185 239969 DEBUG nova.storage.rbd_utils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.225 239969 DEBUG nova.storage.rbd_utils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.265 239969 DEBUG nova.storage.rbd_utils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.271 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.321 239969 DEBUG nova.policy [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59ae1c17a260470c91f50965ddd53a9e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.349 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.351 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.352 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.353 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.387 239969 DEBUG nova.storage.rbd_utils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.391 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.593 239969 DEBUG nova.network.neutron [req-234dbb04-abe5-4386-b65a-a3851ace19c4 req-d4945235-afbe-44f0-9a3b-72f7de872e6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Updated VIF entry in instance network info cache for port 7ad6a8b1-2176-4542-ab37-6ece8654876d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.594 239969 DEBUG nova.network.neutron [req-234dbb04-abe5-4386-b65a-a3851ace19c4 req-d4945235-afbe-44f0-9a3b-72f7de872e6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Updating instance_info_cache with network_info: [{"id": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "address": "fa:16:3e:f5:61:f7", "network": {"id": "c3b6f1ea-4b30-47ee-b3f9-22cbfc4d9073", "bridge": "br-int", "label": "tempest-network-smoke--510129127", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ad6a8b1-21", "ovs_interfaceid": "7ad6a8b1-2176-4542-ab37-6ece8654876d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.631 239969 DEBUG oslo_concurrency.lockutils [req-234dbb04-abe5-4386-b65a-a3851ace19c4 req-d4945235-afbe-44f0-9a3b-72f7de872e6a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-deadd136-f039-481d-bfc4-bbc46bc71563" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 252 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 1.3 MiB/s wr, 72 op/s
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.691 239969 DEBUG nova.network.neutron [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Successfully created port: 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:26:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4216444232' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:46 compute-0 ceph-mon[75140]: pgmap v2282: 305 pgs: 305 active+clean; 252 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 1.3 MiB/s wr, 72 op/s
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.832 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.922 239969 DEBUG nova.storage.rbd_utils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] resizing rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.969 239969 DEBUG nova.compute.manager [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:46 compute-0 nova_compute[239965]: 2026-01-26 16:26:46.983 239969 DEBUG nova.network.neutron [-] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.033 239969 INFO nova.compute.manager [-] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Took 2.22 seconds to deallocate network for instance.
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.035 239969 INFO nova.compute.manager [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] instance snapshotting
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.052 239969 DEBUG nova.objects.instance [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'migration_context' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.069 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.070 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Ensure instance console log exists: /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.070 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.071 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.071 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.088 239969 DEBUG oslo_concurrency.lockutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.088 239969 DEBUG oslo_concurrency.lockutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.240 239969 DEBUG oslo_concurrency.processutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.339 239969 INFO nova.virt.libvirt.driver [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Beginning live snapshot process
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.356 239969 DEBUG nova.network.neutron [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Successfully created port: 81751ed2-633c-4723-82f3-f652ce04297b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.569 239969 DEBUG nova.compute.manager [req-7ac77e04-d054-478f-9c69-76ceb603a81b req-e8755473-c44f-4157-a7c7-aaeefbe71ad4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Received event network-vif-deleted-7ad6a8b1-2176-4542-ab37-6ece8654876d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.657 239969 DEBUG nova.virt.libvirt.imagebackend [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.663 239969 DEBUG nova.network.neutron [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Successfully updated port: 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.692 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.692 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.692 239969 DEBUG nova.network.neutron [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.744 239969 DEBUG nova.compute.manager [req-b0b2e37b-829f-4f30-adbb-0bcd57754124 req-11dbed64-7361-4e1b-bf63-ac8725ca1a5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.745 239969 DEBUG nova.compute.manager [req-b0b2e37b-829f-4f30-adbb-0bcd57754124 req-11dbed64-7361-4e1b-bf63-ac8725ca1a5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing instance network info cache due to event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.746 239969 DEBUG oslo_concurrency.lockutils [req-b0b2e37b-829f-4f30-adbb-0bcd57754124 req-11dbed64-7361-4e1b-bf63-ac8725ca1a5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/362461659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.839 239969 DEBUG nova.network.neutron [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.847 239969 DEBUG oslo_concurrency.processutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.851 239969 DEBUG nova.compute.provider_tree [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:47.854 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.866 239969 DEBUG nova.scheduler.client.report [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.882 239969 DEBUG nova.storage.rbd_utils [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] creating snapshot(21cc52398bf3475bbe20317c43a42483) on rbd image(39a41999-3c56-4679-8c57-9286dc4edbb6_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:26:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/362461659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.918 239969 DEBUG oslo_concurrency.lockutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.829s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:47 compute-0 nova_compute[239965]: 2026-01-26 16:26:47.943 239969 INFO nova.scheduler.client.report [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Deleted allocations for instance deadd136-f039-481d-bfc4-bbc46bc71563
Jan 26 16:26:48 compute-0 nova_compute[239965]: 2026-01-26 16:26:48.003 239969 DEBUG oslo_concurrency.lockutils [None req-a5025315-891c-4f92-a389-ef14b134b750 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "deadd136-f039-481d-bfc4-bbc46bc71563" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:48 compute-0 nova_compute[239965]: 2026-01-26 16:26:48.028 239969 DEBUG nova.network.neutron [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Successfully updated port: 81751ed2-633c-4723-82f3-f652ce04297b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:26:48 compute-0 nova_compute[239965]: 2026-01-26 16:26:48.042 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:48 compute-0 nova_compute[239965]: 2026-01-26 16:26:48.043 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquired lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:48 compute-0 nova_compute[239965]: 2026-01-26 16:26:48.043 239969 DEBUG nova.network.neutron [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:26:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 252 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 3.5 MiB/s wr, 114 op/s
Jan 26 16:26:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:26:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/571177292' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:26:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:26:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/571177292' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:26:48 compute-0 nova_compute[239965]: 2026-01-26 16:26:48.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:48 compute-0 nova_compute[239965]: 2026-01-26 16:26:48.863 239969 DEBUG nova.network.neutron [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:26:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Jan 26 16:26:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Jan 26 16:26:48 compute-0 ceph-mon[75140]: pgmap v2283: 305 pgs: 305 active+clean; 252 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 113 KiB/s rd, 3.5 MiB/s wr, 114 op/s
Jan 26 16:26:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/571177292' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:26:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/571177292' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:26:48 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Jan 26 16:26:48 compute-0 nova_compute[239965]: 2026-01-26 16:26:48.937 239969 DEBUG nova.storage.rbd_utils [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] cloning vms/39a41999-3c56-4679-8c57-9286dc4edbb6_disk@21cc52398bf3475bbe20317c43a42483 to images/4f1e15b7-da72-469a-9162-748b08cf386f clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.049 239969 DEBUG nova.storage.rbd_utils [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] flattening images/4f1e15b7-da72-469a-9162-748b08cf386f flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014462216862820552 of space, bias 1.0, pg target 0.43386650588461656 quantized to 32 (current 32)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010182292245441468 of space, bias 1.0, pg target 0.305468767363244 quantized to 32 (current 32)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.011279189168239e-07 of space, bias 4.0, pg target 0.0008413535027001886 quantized to 16 (current 16)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:26:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.458 239969 DEBUG nova.storage.rbd_utils [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] removing snapshot(21cc52398bf3475bbe20317c43a42483) on rbd image(39a41999-3c56-4679-8c57-9286dc4edbb6_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.467 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.643 239969 DEBUG nova.compute.manager [req-f5638c92-0147-4a0d-983c-3366086bdf3b req-14705794-d063-4cbc-93ed-b4347a762957 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-changed-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.643 239969 DEBUG nova.compute.manager [req-f5638c92-0147-4a0d-983c-3366086bdf3b req-14705794-d063-4cbc-93ed-b4347a762957 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Refreshing instance network info cache due to event network-changed-81751ed2-633c-4723-82f3-f652ce04297b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.643 239969 DEBUG oslo_concurrency.lockutils [req-f5638c92-0147-4a0d-983c-3366086bdf3b req-14705794-d063-4cbc-93ed-b4347a762957 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.875 239969 DEBUG nova.network.neutron [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updating instance_info_cache with network_info: [{"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Jan 26 16:26:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.902 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.903 239969 DEBUG nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Instance network_info: |[{"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:26:49 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.903 239969 DEBUG oslo_concurrency.lockutils [req-b0b2e37b-829f-4f30-adbb-0bcd57754124 req-11dbed64-7361-4e1b-bf63-ac8725ca1a5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.903 239969 DEBUG nova.network.neutron [req-b0b2e37b-829f-4f30-adbb-0bcd57754124 req-11dbed64-7361-4e1b-bf63-ac8725ca1a5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:49 compute-0 ceph-mon[75140]: osdmap e304: 3 total, 3 up, 3 in
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.907 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Start _get_guest_xml network_info=[{"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.914 239969 WARNING nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.923 239969 DEBUG nova.virt.libvirt.host [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.923 239969 DEBUG nova.virt.libvirt.host [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.926 239969 DEBUG nova.storage.rbd_utils [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] creating snapshot(snap) on rbd image(4f1e15b7-da72-469a-9162-748b08cf386f) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.965 239969 DEBUG nova.virt.libvirt.host [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.966 239969 DEBUG nova.virt.libvirt.host [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.966 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.967 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.967 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.968 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.968 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.968 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.968 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.969 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.969 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.970 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.970 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.970 239969 DEBUG nova.virt.hardware [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:26:49 compute-0 nova_compute[239965]: 2026-01-26 16:26:49.973 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:50 compute-0 sshd-session[364397]: Invalid user cs2 from 209.38.206.249 port 33730
Jan 26 16:26:50 compute-0 sshd-session[364397]: Connection closed by invalid user cs2 209.38.206.249 port 33730 [preauth]
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.462 239969 DEBUG nova.network.neutron [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Updating instance_info_cache with network_info: [{"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.493 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Releasing lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.493 239969 DEBUG nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance network_info: |[{"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.493 239969 DEBUG oslo_concurrency.lockutils [req-f5638c92-0147-4a0d-983c-3366086bdf3b req-14705794-d063-4cbc-93ed-b4347a762957 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.494 239969 DEBUG nova.network.neutron [req-f5638c92-0147-4a0d-983c-3366086bdf3b req-14705794-d063-4cbc-93ed-b4347a762957 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Refreshing network info cache for port 81751ed2-633c-4723-82f3-f652ce04297b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.496 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Start _get_guest_xml network_info=[{"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:26:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:26:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437379286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.503 239969 WARNING nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.515 239969 DEBUG nova.virt.libvirt.host [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.515 239969 DEBUG nova.virt.libvirt.host [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.519 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.549 239969 DEBUG nova.storage.rbd_utils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.552 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.593 239969 DEBUG nova.virt.libvirt.host [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.593 239969 DEBUG nova.virt.libvirt.host [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.594 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.594 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.595 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.595 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.595 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.596 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.596 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.596 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.596 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.597 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.597 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.597 239969 DEBUG nova.virt.hardware [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:26:50 compute-0 nova_compute[239965]: 2026-01-26 16:26:50.601 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 252 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 5.2 MiB/s wr, 116 op/s
Jan 26 16:26:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Jan 26 16:26:50 compute-0 ceph-mon[75140]: osdmap e305: 3 total, 3 up, 3 in
Jan 26 16:26:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3437379286' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:50 compute-0 ceph-mon[75140]: pgmap v2286: 305 pgs: 305 active+clean; 252 MiB data, 1011 MiB used, 59 GiB / 60 GiB avail; 79 KiB/s rd, 5.2 MiB/s wr, 116 op/s
Jan 26 16:26:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Jan 26 16:26:50 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Jan 26 16:26:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:26:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/869503406' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:26:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3510990730' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.135 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.165 239969 DEBUG nova.storage.rbd_utils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.171 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.222 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.670s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.226 239969 DEBUG nova.virt.libvirt.vif [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-879095890',display_name='tempest-TestNetworkBasicOps-server-879095890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-879095890',id=142,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENptaN+bG/9ejW/M5NuXQVCK6v3MRZr6mcCMzARlGtLJuYDEKZC5KZPEVS7zYqAcRckbC6mYpyXHfX98kGDdQowzIWS03GRgAXz2FaPPXZs9t1xQYzyEzlKdOej6nOsBw==',key_name='tempest-TestNetworkBasicOps-2084098348',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-j1dz4x1u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:45Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=759e14a7-49ae-4f6f-bb96-80fc691d50e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.226 239969 DEBUG nova.network.os_vif_util [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.227 239969 DEBUG nova.network.os_vif_util [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:66:7c,bridge_name='br-int',has_traffic_filtering=True,id=3eb1285a-11fb-4984-b35f-24fe2d9ed6f1,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb1285a-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.229 239969 DEBUG nova.objects.instance [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 759e14a7-49ae-4f6f-bb96-80fc691d50e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.250 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <uuid>759e14a7-49ae-4f6f-bb96-80fc691d50e3</uuid>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <name>instance-0000008e</name>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-879095890</nova:name>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:26:49</nova:creationTime>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:port uuid="3eb1285a-11fb-4984-b35f-24fe2d9ed6f1">
Jan 26 16:26:51 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <system>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="serial">759e14a7-49ae-4f6f-bb96-80fc691d50e3</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="uuid">759e14a7-49ae-4f6f-bb96-80fc691d50e3</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </system>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <os>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </os>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <features>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </features>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </source>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk.config">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </source>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:5c:66:7c"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <target dev="tap3eb1285a-11"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3/console.log" append="off"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <video>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </video>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:26:51 compute-0 nova_compute[239965]: </domain>
Jan 26 16:26:51 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.251 239969 DEBUG nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Preparing to wait for external event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.251 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.251 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.252 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.252 239969 DEBUG nova.virt.libvirt.vif [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-879095890',display_name='tempest-TestNetworkBasicOps-server-879095890',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-879095890',id=142,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENptaN+bG/9ejW/M5NuXQVCK6v3MRZr6mcCMzARlGtLJuYDEKZC5KZPEVS7zYqAcRckbC6mYpyXHfX98kGDdQowzIWS03GRgAXz2FaPPXZs9t1xQYzyEzlKdOej6nOsBw==',key_name='tempest-TestNetworkBasicOps-2084098348',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-j1dz4x1u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:45Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=759e14a7-49ae-4f6f-bb96-80fc691d50e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.253 239969 DEBUG nova.network.os_vif_util [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.253 239969 DEBUG nova.network.os_vif_util [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:66:7c,bridge_name='br-int',has_traffic_filtering=True,id=3eb1285a-11fb-4984-b35f-24fe2d9ed6f1,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb1285a-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.254 239969 DEBUG os_vif [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:66:7c,bridge_name='br-int',has_traffic_filtering=True,id=3eb1285a-11fb-4984-b35f-24fe2d9ed6f1,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb1285a-11') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.254 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.255 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.255 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.261 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.261 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3eb1285a-11, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.262 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3eb1285a-11, col_values=(('external_ids', {'iface-id': '3eb1285a-11fb-4984-b35f-24fe2d9ed6f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:66:7c', 'vm-uuid': '759e14a7-49ae-4f6f-bb96-80fc691d50e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.263 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:51 compute-0 NetworkManager[48954]: <info>  [1769444811.2645] manager: (tap3eb1285a-11): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.270 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.272 239969 INFO os_vif [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:66:7c,bridge_name='br-int',has_traffic_filtering=True,id=3eb1285a-11fb-4984-b35f-24fe2d9ed6f1,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb1285a-11')
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.344 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.345 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.345 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:5c:66:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.346 239969 INFO nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Using config drive
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.368 239969 DEBUG nova.storage.rbd_utils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:51 compute-0 sshd-session[364519]: Invalid user ecs-user from 209.38.206.249 port 35936
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.639 239969 DEBUG nova.network.neutron [req-b0b2e37b-829f-4f30-adbb-0bcd57754124 req-11dbed64-7361-4e1b-bf63-ac8725ca1a5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updated VIF entry in instance network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.639 239969 DEBUG nova.network.neutron [req-b0b2e37b-829f-4f30-adbb-0bcd57754124 req-11dbed64-7361-4e1b-bf63-ac8725ca1a5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updating instance_info_cache with network_info: [{"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.683 239969 DEBUG oslo_concurrency.lockutils [req-b0b2e37b-829f-4f30-adbb-0bcd57754124 req-11dbed64-7361-4e1b-bf63-ac8725ca1a5b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:51 compute-0 sshd-session[364519]: Connection closed by invalid user ecs-user 209.38.206.249 port 35936 [preauth]
Jan 26 16:26:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:26:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2385131536' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.784 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.786 239969 DEBUG nova.virt.libvirt.vif [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1217079529',display_name='tempest-TestNetworkAdvancedServerOps-server-1217079529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1217079529',id=143,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAHK7vjAqqBoYE5h97a5pskiriPLSJ5A7+bC6dk5uNnjCrImOF19Tu7dz5ff1CkVvUcFUiy33TMxCmyGQ5LYz502POS5lMWxg05L+Asko5H7mYq3E+7RWkgVQirbCTK0/w==',key_name='tempest-TestNetworkAdvancedServerOps-331164643',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-v3jbl2f2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:46Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=af71ccf7-284e-436b-9bb8-5bc6c727d3cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.787 239969 DEBUG nova.network.os_vif_util [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.788 239969 DEBUG nova.network.os_vif_util [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.789 239969 DEBUG nova.objects.instance [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.795 239969 INFO nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Creating config drive at /var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3/disk.config
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.805 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8m89pyrm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.871 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <uuid>af71ccf7-284e-436b-9bb8-5bc6c727d3cc</uuid>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <name>instance-0000008f</name>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1217079529</nova:name>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:26:50</nova:creationTime>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:user uuid="59ae1c17a260470c91f50965ddd53a9e">tempest-TestNetworkAdvancedServerOps-842475489-project-member</nova:user>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:project uuid="2a7615c0db4e4f38aec30c7c723c3c3a">tempest-TestNetworkAdvancedServerOps-842475489</nova:project>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <nova:port uuid="81751ed2-633c-4723-82f3-f652ce04297b">
Jan 26 16:26:51 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <system>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="serial">af71ccf7-284e-436b-9bb8-5bc6c727d3cc</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="uuid">af71ccf7-284e-436b-9bb8-5bc6c727d3cc</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </system>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <os>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </os>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <features>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </features>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </source>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </source>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:26:51 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:bb:6f:f6"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <target dev="tap81751ed2-63"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/console.log" append="off"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <video>
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </video>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:26:51 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:26:51 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:26:51 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:26:51 compute-0 nova_compute[239965]: </domain>
Jan 26 16:26:51 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.872 239969 DEBUG nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Preparing to wait for external event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.872 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.872 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.873 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.873 239969 DEBUG nova.virt.libvirt.vif [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1217079529',display_name='tempest-TestNetworkAdvancedServerOps-server-1217079529',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1217079529',id=143,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAHK7vjAqqBoYE5h97a5pskiriPLSJ5A7+bC6dk5uNnjCrImOF19Tu7dz5ff1CkVvUcFUiy33TMxCmyGQ5LYz502POS5lMWxg05L+Asko5H7mYq3E+7RWkgVQirbCTK0/w==',key_name='tempest-TestNetworkAdvancedServerOps-331164643',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-v3jbl2f2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:46Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=af71ccf7-284e-436b-9bb8-5bc6c727d3cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.873 239969 DEBUG nova.network.os_vif_util [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.874 239969 DEBUG nova.network.os_vif_util [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.874 239969 DEBUG os_vif [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.875 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.875 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.875 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.878 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.878 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81751ed2-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.878 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81751ed2-63, col_values=(('external_ids', {'iface-id': '81751ed2-633c-4723-82f3-f652ce04297b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:6f:f6', 'vm-uuid': 'af71ccf7-284e-436b-9bb8-5bc6c727d3cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.880 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:51 compute-0 NetworkManager[48954]: <info>  [1769444811.8808] manager: (tap81751ed2-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/613)
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.881 239969 DEBUG nova.network.neutron [req-f5638c92-0147-4a0d-983c-3366086bdf3b req-14705794-d063-4cbc-93ed-b4347a762957 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Updated VIF entry in instance network info cache for port 81751ed2-633c-4723-82f3-f652ce04297b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.882 239969 DEBUG nova.network.neutron [req-f5638c92-0147-4a0d-983c-3366086bdf3b req-14705794-d063-4cbc-93ed-b4347a762957 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Updating instance_info_cache with network_info: [{"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.883 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.886 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.886 239969 INFO os_vif [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63')
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.895 239969 DEBUG oslo_concurrency.lockutils [req-f5638c92-0147-4a0d-983c-3366086bdf3b req-14705794-d063-4cbc-93ed-b4347a762957 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:26:51 compute-0 ceph-mon[75140]: osdmap e306: 3 total, 3 up, 3 in
Jan 26 16:26:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/869503406' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3510990730' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2385131536' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.966 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8m89pyrm" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:51 compute-0 nova_compute[239965]: 2026-01-26 16:26:51.997 239969 DEBUG nova.storage.rbd_utils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.000 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3/disk.config 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.049 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.050 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.050 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No VIF found with MAC fa:16:3e:bb:6f:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.051 239969 INFO nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Using config drive
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.076 239969 DEBUG nova.storage.rbd_utils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.157 239969 DEBUG oslo_concurrency.processutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3/disk.config 759e14a7-49ae-4f6f-bb96-80fc691d50e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.158 239969 INFO nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Deleting local config drive /var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3/disk.config because it was imported into RBD.
Jan 26 16:26:52 compute-0 sshd-session[364564]: Invalid user esuser from 209.38.206.249 port 35950
Jan 26 16:26:52 compute-0 kernel: tap3eb1285a-11: entered promiscuous mode
Jan 26 16:26:52 compute-0 NetworkManager[48954]: <info>  [1769444812.2376] manager: (tap3eb1285a-11): new Tun device (/org/freedesktop/NetworkManager/Devices/614)
Jan 26 16:26:52 compute-0 ovn_controller[146046]: 2026-01-26T16:26:52Z|01511|binding|INFO|Claiming lport 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 for this chassis.
Jan 26 16:26:52 compute-0 ovn_controller[146046]: 2026-01-26T16:26:52Z|01512|binding|INFO|3eb1285a-11fb-4984-b35f-24fe2d9ed6f1: Claiming fa:16:3e:5c:66:7c 10.100.0.14
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.245 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.251 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:66:7c 10.100.0.14'], port_security=['fa:16:3e:5c:66:7c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '759e14a7-49ae-4f6f-bb96-80fc691d50e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'efca55d4-fc3b-40bd-af16-664c70c27cc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=975e0fb6-2099-45e2-83bb-35528f0f0ffd, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3eb1285a-11fb-4984-b35f-24fe2d9ed6f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.253 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 in datapath 6a673593-4cc1-4b91-8d9d-33cfb52b459e bound to our chassis
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.255 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a673593-4cc1-4b91-8d9d-33cfb52b459e
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.272 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[82485c7d-f9ba-47f8-9403-aa2d8b626fbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.273 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6a673593-41 in ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.276 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6a673593-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.276 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0801782e-efe1-4d0a-b7b8-b7548b65a7fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.277 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1b5d2fd8-03c1-4e4d-bcf0-7985a773a727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_controller[146046]: 2026-01-26T16:26:52Z|01513|binding|INFO|Setting lport 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 ovn-installed in OVS
Jan 26 16:26:52 compute-0 ovn_controller[146046]: 2026-01-26T16:26:52Z|01514|binding|INFO|Setting lport 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 up in Southbound
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.281 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:52 compute-0 systemd-machined[208061]: New machine qemu-172-instance-0000008e.
Jan 26 16:26:52 compute-0 systemd-udevd[364642]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.292 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[af8170d8-cf6c-402b-9360-82317e13069e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 NetworkManager[48954]: <info>  [1769444812.3051] device (tap3eb1285a-11): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:26:52 compute-0 NetworkManager[48954]: <info>  [1769444812.3057] device (tap3eb1285a-11): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:26:52 compute-0 systemd[1]: Started Virtual Machine qemu-172-instance-0000008e.
Jan 26 16:26:52 compute-0 sshd-session[364564]: Connection closed by invalid user esuser 209.38.206.249 port 35950 [preauth]
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.312 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[94aa020e-adb8-48aa-9a10-6838dab82d10]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.349 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2de15853-49ba-4998-bcd4-d6c904c89f9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.354 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[553ac29b-bee9-4431-83b1-8d0ff625d532]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 NetworkManager[48954]: <info>  [1769444812.3561] manager: (tap6a673593-40): new Veth device (/org/freedesktop/NetworkManager/Devices/615)
Jan 26 16:26:52 compute-0 systemd-udevd[364645]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.395 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f81d6344-3acb-4320-91fc-150e95a7bf85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.399 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3619ede9-79f3-4127-9495-f82dd12cdb94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.417 239969 INFO nova.virt.libvirt.driver [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Snapshot image upload complete
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.418 239969 INFO nova.compute.manager [None req-9f53ccec-a3db-4156-95bc-557040f7a98e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Took 5.38 seconds to snapshot the instance on the hypervisor.
Jan 26 16:26:52 compute-0 NetworkManager[48954]: <info>  [1769444812.4268] device (tap6a673593-40): carrier: link connected
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.431 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[18c982c3-15bf-4de6-b0ce-94ee428de1cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.450 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bbc3582d-abf0-4adb-94e9-e41a4d6e2a3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a673593-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:cd:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 433], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630323, 'reachable_time': 22042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364676, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.467 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aa89e3f3-d417-45c7-8c98-19705145849e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:cd57'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630323, 'tstamp': 630323}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364677, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.486 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb6cd36-1c2e-4230-8610-433f327d4cad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a673593-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:cd:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 433], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630323, 'reachable_time': 22042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364678, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.519 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3f5f2c-60d5-4824-8f52-6df04493c285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.614 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[11eea6e7-9a10-4036-befe-991ec5b075cf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.615 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a673593-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.616 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.616 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a673593-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.618 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:52 compute-0 NetworkManager[48954]: <info>  [1769444812.6194] manager: (tap6a673593-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/616)
Jan 26 16:26:52 compute-0 kernel: tap6a673593-40: entered promiscuous mode
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.626 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.628 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a673593-40, col_values=(('external_ids', {'iface-id': '891fbaea-41fa-4f46-a25a-e1bcac6983ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:52 compute-0 ovn_controller[146046]: 2026-01-26T16:26:52Z|01515|binding|INFO|Releasing lport 891fbaea-41fa-4f46-a25a-e1bcac6983ae from this chassis (sb_readonly=0)
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.630 239969 INFO nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Creating config drive at /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.636 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi3jrt3lz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.654 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6a673593-4cc1-4b91-8d9d-33cfb52b459e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6a673593-4cc1-4b91-8d9d-33cfb52b459e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.655 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bdea0eb9-45eb-404c-99c9-96b9091cd372]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.656 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-6a673593-4cc1-4b91-8d9d-33cfb52b459e
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/6a673593-4cc1-4b91-8d9d-33cfb52b459e.pid.haproxy
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 6a673593-4cc1-4b91-8d9d-33cfb52b459e
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:26:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:52.656 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'env', 'PROCESS_TAG=haproxy-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6a673593-4cc1-4b91-8d9d-33cfb52b459e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:26:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 13 MiB/s wr, 318 op/s
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.679 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.787 239969 DEBUG nova.compute.manager [req-a6a3b7a0-aa83-467c-84e0-e831dff25248 req-ddd35824-0d83-44c2-a938-0daa73d82153 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.787 239969 DEBUG oslo_concurrency.lockutils [req-a6a3b7a0-aa83-467c-84e0-e831dff25248 req-ddd35824-0d83-44c2-a938-0daa73d82153 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.788 239969 DEBUG oslo_concurrency.lockutils [req-a6a3b7a0-aa83-467c-84e0-e831dff25248 req-ddd35824-0d83-44c2-a938-0daa73d82153 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.788 239969 DEBUG oslo_concurrency.lockutils [req-a6a3b7a0-aa83-467c-84e0-e831dff25248 req-ddd35824-0d83-44c2-a938-0daa73d82153 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.788 239969 DEBUG nova.compute.manager [req-a6a3b7a0-aa83-467c-84e0-e831dff25248 req-ddd35824-0d83-44c2-a938-0daa73d82153 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Processing event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.789 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi3jrt3lz" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.814 239969 DEBUG nova.storage.rbd_utils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.818 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.850 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444812.8333364, 759e14a7-49ae-4f6f-bb96-80fc691d50e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.850 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] VM Started (Lifecycle Event)
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.853 239969 DEBUG nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.858 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.861 239969 INFO nova.virt.libvirt.driver [-] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Instance spawned successfully.
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.862 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.870 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.874 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.889 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.889 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444812.8336322, 759e14a7-49ae-4f6f-bb96-80fc691d50e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.889 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] VM Paused (Lifecycle Event)
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.897 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.897 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.898 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.898 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.898 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.899 239969 DEBUG nova.virt.libvirt.driver [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.905 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.909 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444812.857135, 759e14a7-49ae-4f6f-bb96-80fc691d50e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.909 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] VM Resumed (Lifecycle Event)
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.925 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.928 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.952 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.957 239969 INFO nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Took 7.86 seconds to spawn the instance on the hypervisor.
Jan 26 16:26:52 compute-0 nova_compute[239965]: 2026-01-26 16:26:52.957 239969 DEBUG nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:52 compute-0 ceph-mon[75140]: pgmap v2288: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 13 MiB/s wr, 318 op/s
Jan 26 16:26:53 compute-0 nova_compute[239965]: 2026-01-26 16:26:53.012 239969 INFO nova.compute.manager [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Took 8.98 seconds to build instance.
Jan 26 16:26:53 compute-0 nova_compute[239965]: 2026-01-26 16:26:53.013 239969 DEBUG oslo_concurrency.processutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.195s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:53 compute-0 nova_compute[239965]: 2026-01-26 16:26:53.014 239969 INFO nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Deleting local config drive /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config because it was imported into RBD.
Jan 26 16:26:53 compute-0 nova_compute[239965]: 2026-01-26 16:26:53.039 239969 DEBUG oslo_concurrency.lockutils [None req-276f9dd4-5fce-4a97-9673-0616b06db321 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:53 compute-0 NetworkManager[48954]: <info>  [1769444813.0697] manager: (tap81751ed2-63): new Tun device (/org/freedesktop/NetworkManager/Devices/617)
Jan 26 16:26:53 compute-0 systemd-udevd[364670]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:26:53 compute-0 kernel: tap81751ed2-63: entered promiscuous mode
Jan 26 16:26:53 compute-0 NetworkManager[48954]: <info>  [1769444813.1504] device (tap81751ed2-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:26:53 compute-0 nova_compute[239965]: 2026-01-26 16:26:53.150 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:53 compute-0 NetworkManager[48954]: <info>  [1769444813.1511] device (tap81751ed2-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:26:53 compute-0 ovn_controller[146046]: 2026-01-26T16:26:53Z|01516|binding|INFO|Releasing lport 36c96456-84ef-4832-9322-d686bc015797 from this chassis (sb_readonly=0)
Jan 26 16:26:53 compute-0 ovn_controller[146046]: 2026-01-26T16:26:53Z|01517|binding|INFO|Releasing lport 891fbaea-41fa-4f46-a25a-e1bcac6983ae from this chassis (sb_readonly=0)
Jan 26 16:26:53 compute-0 nova_compute[239965]: 2026-01-26 16:26:53.152 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:53 compute-0 podman[364799]: 2026-01-26 16:26:53.065215818 +0000 UTC m=+0.025460574 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:26:53 compute-0 ovn_controller[146046]: 2026-01-26T16:26:53Z|01518|binding|INFO|Claiming lport 81751ed2-633c-4723-82f3-f652ce04297b for this chassis.
Jan 26 16:26:53 compute-0 ovn_controller[146046]: 2026-01-26T16:26:53Z|01519|binding|INFO|81751ed2-633c-4723-82f3-f652ce04297b: Claiming fa:16:3e:bb:6f:f6 10.100.0.11
Jan 26 16:26:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:53.179 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:6f:f6 10.100.0.11'], port_security=['fa:16:3e:bb:6f:f6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'af71ccf7-284e-436b-9bb8-5bc6c727d3cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94bff060-de25-4619-bfc7-2522df0a74d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2d8fd4f4-d04c-4ea0-ae86-5f52a7fe7407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e02e3d-e80d-4860-b220-ee88b1eda52c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=81751ed2-633c-4723-82f3-f652ce04297b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:26:53 compute-0 systemd-machined[208061]: New machine qemu-173-instance-0000008f.
Jan 26 16:26:53 compute-0 systemd[1]: Started Virtual Machine qemu-173-instance-0000008f.
Jan 26 16:26:53 compute-0 nova_compute[239965]: 2026-01-26 16:26:53.214 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:53 compute-0 ovn_controller[146046]: 2026-01-26T16:26:53Z|01520|binding|INFO|Setting lport 81751ed2-633c-4723-82f3-f652ce04297b ovn-installed in OVS
Jan 26 16:26:53 compute-0 ovn_controller[146046]: 2026-01-26T16:26:53Z|01521|binding|INFO|Setting lport 81751ed2-633c-4723-82f3-f652ce04297b up in Southbound
Jan 26 16:26:53 compute-0 nova_compute[239965]: 2026-01-26 16:26:53.229 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:53 compute-0 podman[364799]: 2026-01-26 16:26:53.251354077 +0000 UTC m=+0.211598803 container create fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:26:53 compute-0 systemd[1]: Started libpod-conmon-fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234.scope.
Jan 26 16:26:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:26:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/958cbba1913b9e2297b52de2ae83f610b651777137ea9026925977748937e61d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:26:53 compute-0 nova_compute[239965]: 2026-01-26 16:26:53.744 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:53 compute-0 sshd-session[364853]: Invalid user ftptest from 209.38.206.249 port 35952
Jan 26 16:26:53 compute-0 podman[364799]: 2026-01-26 16:26:53.834222961 +0000 UTC m=+0.794467777 container init fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:26:53 compute-0 podman[364799]: 2026-01-26 16:26:53.845479277 +0000 UTC m=+0.805724053 container start fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202)
Jan 26 16:26:53 compute-0 neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e[364832]: [NOTICE]   (364874) : New worker (364876) forked
Jan 26 16:26:53 compute-0 neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e[364832]: [NOTICE]   (364874) : Loading success.
Jan 26 16:26:53 compute-0 sshd-session[364853]: Connection closed by invalid user ftptest 209.38.206.249 port 35952 [preauth]
Jan 26 16:26:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:53.998 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 81751ed2-633c-4723-82f3-f652ce04297b in datapath 94bff060-de25-4619-bfc7-2522df0a74d7 unbound from our chassis
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.003 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 94bff060-de25-4619-bfc7-2522df0a74d7
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.016 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce6878a-d82f-4882-9746-289fcf9d3492]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.017 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap94bff060-d1 in ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.019 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap94bff060-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.019 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c51e6b8c-aa76-4be7-8f81-dd18df229789]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.020 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[659451ba-ff01-4424-9e58-7214daca1a21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.038 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fe7b67-42e8-4c13-ae13-1e9e30b37abe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.055 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2f7e1c-3118-4461-a134-417928b7217d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.071 239969 DEBUG nova.compute.manager [req-fbe08664-1e0a-45ab-b8a0-761a38fb15c7 req-8818c8bb-f4f1-4376-a129-1877716e5a7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.072 239969 DEBUG oslo_concurrency.lockutils [req-fbe08664-1e0a-45ab-b8a0-761a38fb15c7 req-8818c8bb-f4f1-4376-a129-1877716e5a7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.072 239969 DEBUG oslo_concurrency.lockutils [req-fbe08664-1e0a-45ab-b8a0-761a38fb15c7 req-8818c8bb-f4f1-4376-a129-1877716e5a7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.072 239969 DEBUG oslo_concurrency.lockutils [req-fbe08664-1e0a-45ab-b8a0-761a38fb15c7 req-8818c8bb-f4f1-4376-a129-1877716e5a7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.072 239969 DEBUG nova.compute.manager [req-fbe08664-1e0a-45ab-b8a0-761a38fb15c7 req-8818c8bb-f4f1-4376-a129-1877716e5a7e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Processing event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.075 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444814.0750182, af71ccf7-284e-436b-9bb8-5bc6c727d3cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.076 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] VM Started (Lifecycle Event)
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.078 239969 DEBUG nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.081 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.085 239969 INFO nova.virt.libvirt.driver [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance spawned successfully.
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.085 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.094 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fc37bfbc-441a-48ce-8adc-d20d666257c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 NetworkManager[48954]: <info>  [1769444814.1005] manager: (tap94bff060-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/618)
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.100 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.102 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a54c5e86-d83c-4e77-9e79-2ae0e0f53586]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.103 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.117 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.118 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.118 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.122 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.122 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.124 239969 DEBUG nova.virt.libvirt.driver [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.128 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.129 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444814.0760086, af71ccf7-284e-436b-9bb8-5bc6c727d3cc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.129 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] VM Paused (Lifecycle Event)
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.143 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb18e48-0765-4c2d-8082-f3a8baf900d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.147 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[75b6abe2-140e-4546-bbde-9776dd48f69f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.161 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.164 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444814.0805736, af71ccf7-284e-436b-9bb8-5bc6c727d3cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.164 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] VM Resumed (Lifecycle Event)
Jan 26 16:26:54 compute-0 NetworkManager[48954]: <info>  [1769444814.1795] device (tap94bff060-d0): carrier: link connected
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.190 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[56f7d12a-e0ce-4434-8edd-1cc4687c0ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.209 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fe822001-17a7-49c4-9057-7e3a11766e74]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94bff060-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:a0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630498, 'reachable_time': 22341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364903, 'error': None, 'target': 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.225 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6d21f161-d684-4fed-b488-b5770d1b9001]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:a08b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630498, 'tstamp': 630498}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364904, 'error': None, 'target': 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.240 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[beead51b-6d72-4864-aa77-5e80c340a457]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94bff060-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:a0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 435], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630498, 'reachable_time': 22341, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364905, 'error': None, 'target': 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Jan 26 16:26:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Jan 26 16:26:54 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.272 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[35bf6ec3-54a7-4571-9529-d4d59aff87b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.323 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.326 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.336 239969 INFO nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Took 8.19 seconds to spawn the instance on the hypervisor.
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.336 239969 DEBUG nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.346 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.352 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[34d404a5-90e1-4281-8fa7-a32786cc6959]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.353 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94bff060-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.354 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.354 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94bff060-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:54 compute-0 kernel: tap94bff060-d0: entered promiscuous mode
Jan 26 16:26:54 compute-0 NetworkManager[48954]: <info>  [1769444814.3566] manager: (tap94bff060-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/619)
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.360 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap94bff060-d0, col_values=(('external_ids', {'iface-id': '2f737b4d-b226-456e-a14d-a02711b5bbea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:26:54 compute-0 ovn_controller[146046]: 2026-01-26T16:26:54Z|01522|binding|INFO|Releasing lport 2f737b4d-b226-456e-a14d-a02711b5bbea from this chassis (sb_readonly=0)
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.356 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.385 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/94bff060-de25-4619-bfc7-2522df0a74d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/94bff060-de25-4619-bfc7-2522df0a74d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.386 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3adf9f-f2bd-42e6-9d78-71e3602fc077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.387 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-94bff060-de25-4619-bfc7-2522df0a74d7
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/94bff060-de25-4619-bfc7-2522df0a74d7.pid.haproxy
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 94bff060-de25-4619-bfc7-2522df0a74d7
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:26:54 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:54.389 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'env', 'PROCESS_TAG=haproxy-94bff060-de25-4619-bfc7-2522df0a74d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/94bff060-de25-4619-bfc7-2522df0a74d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.389 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.393 239969 INFO nova.compute.manager [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Took 9.70 seconds to build instance.
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.409 239969 DEBUG oslo_concurrency.lockutils [None req-2201281b-696d-482b-b1ad-2dc608d67fb8 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:54 compute-0 sshd-session[364890]: Connection closed by authenticating user root 209.38.206.249 port 35962 [preauth]
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.534 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:26:54 compute-0 nova_compute[239965]: 2026-01-26 16:26:54.534 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:26:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.3 MiB/s wr, 212 op/s
Jan 26 16:26:54 compute-0 podman[364940]: 2026-01-26 16:26:54.721543152 +0000 UTC m=+0.024662826 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:26:54 compute-0 sshd-session[364916]: Invalid user pi from 209.38.206.249 port 35972
Jan 26 16:26:54 compute-0 podman[364940]: 2026-01-26 16:26:54.937793687 +0000 UTC m=+0.240913311 container create 9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Jan 26 16:26:54 compute-0 systemd[1]: Started libpod-conmon-9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715.scope.
Jan 26 16:26:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:26:55 compute-0 sshd-session[364916]: Connection closed by invalid user pi 209.38.206.249 port 35972 [preauth]
Jan 26 16:26:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e7bed32c928e09c0ee3bd50b5f6e2a11669d7a467b56081e564f9850338966/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:26:55 compute-0 podman[364940]: 2026-01-26 16:26:55.033528082 +0000 UTC m=+0.336647726 container init 9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 16:26:55 compute-0 podman[364940]: 2026-01-26 16:26:55.042696876 +0000 UTC m=+0.345816510 container start 9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:26:55 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[364953]: [NOTICE]   (364957) : New worker (364959) forked
Jan 26 16:26:55 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[364953]: [NOTICE]   (364957) : Loading success.
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.149 239969 DEBUG nova.compute.manager [req-ed1f9105-be63-41af-abed-8c672cd57b97 req-02816cb4-c956-45da-8919-75f43cc2e9f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.150 239969 DEBUG oslo_concurrency.lockutils [req-ed1f9105-be63-41af-abed-8c672cd57b97 req-02816cb4-c956-45da-8919-75f43cc2e9f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.150 239969 DEBUG oslo_concurrency.lockutils [req-ed1f9105-be63-41af-abed-8c672cd57b97 req-02816cb4-c956-45da-8919-75f43cc2e9f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.150 239969 DEBUG oslo_concurrency.lockutils [req-ed1f9105-be63-41af-abed-8c672cd57b97 req-02816cb4-c956-45da-8919-75f43cc2e9f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.150 239969 DEBUG nova.compute.manager [req-ed1f9105-be63-41af-abed-8c672cd57b97 req-02816cb4-c956-45da-8919-75f43cc2e9f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] No waiting events found dispatching network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.150 239969 WARNING nova.compute.manager [req-ed1f9105-be63-41af-abed-8c672cd57b97 req-02816cb4-c956-45da-8919-75f43cc2e9f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received unexpected event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 for instance with vm_state active and task_state None.
Jan 26 16:26:55 compute-0 ceph-mon[75140]: osdmap e307: 3 total, 3 up, 3 in
Jan 26 16:26:55 compute-0 ceph-mon[75140]: pgmap v2290: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.2 MiB/s rd, 8.3 MiB/s wr, 212 op/s
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.462 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444800.459814, e9322863-2491-464c-92af-3e1f3181df98 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.463 239969 INFO nova.compute.manager [-] [instance: e9322863-2491-464c-92af-3e1f3181df98] VM Stopped (Lifecycle Event)
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.481 239969 DEBUG nova.compute.manager [None req-2204d4ff-0db5-4834-84a5-439de09a9e98 - - - - - -] [instance: e9322863-2491-464c-92af-3e1f3181df98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:55 compute-0 sshd-session[364968]: Connection closed by authenticating user root 209.38.206.249 port 35984 [preauth]
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.668 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.668 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.684 239969 DEBUG nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.780 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.780 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.788 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.788 239969 INFO nova.compute.claims [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:26:55 compute-0 nova_compute[239965]: 2026-01-26 16:26:55.950 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.152 239969 DEBUG nova.compute.manager [req-b7a66758-4820-4133-ac93-16f214928cbe req-9280ab0f-32fa-41d3-b932-f13fe5f6470b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.152 239969 DEBUG oslo_concurrency.lockutils [req-b7a66758-4820-4133-ac93-16f214928cbe req-9280ab0f-32fa-41d3-b932-f13fe5f6470b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.153 239969 DEBUG oslo_concurrency.lockutils [req-b7a66758-4820-4133-ac93-16f214928cbe req-9280ab0f-32fa-41d3-b932-f13fe5f6470b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.153 239969 DEBUG oslo_concurrency.lockutils [req-b7a66758-4820-4133-ac93-16f214928cbe req-9280ab0f-32fa-41d3-b932-f13fe5f6470b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.153 239969 DEBUG nova.compute.manager [req-b7a66758-4820-4133-ac93-16f214928cbe req-9280ab0f-32fa-41d3-b932-f13fe5f6470b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] No waiting events found dispatching network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.153 239969 WARNING nova.compute.manager [req-b7a66758-4820-4133-ac93-16f214928cbe req-9280ab0f-32fa-41d3-b932-f13fe5f6470b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received unexpected event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b for instance with vm_state active and task_state None.
Jan 26 16:26:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:26:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/102169987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.524 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.529 239969 DEBUG nova.compute.provider_tree [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:26:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/102169987' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.573 239969 DEBUG nova.scheduler.client.report [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.602 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.602 239969 DEBUG nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.653 239969 DEBUG nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.653 239969 DEBUG nova.network.neutron [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.671 239969 INFO nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:26:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 7.1 MiB/s wr, 246 op/s
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.691 239969 DEBUG nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.774 239969 DEBUG nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.778 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.778 239969 INFO nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Creating image(s)
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.803 239969 DEBUG nova.storage.rbd_utils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.830 239969 DEBUG nova.storage.rbd_utils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.852 239969 DEBUG nova.storage.rbd_utils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.856 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "2efa99ac92258edf8e4c48daf599d0720831b056" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.857 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "2efa99ac92258edf8e4c48daf599d0720831b056" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:56 compute-0 nova_compute[239965]: 2026-01-26 16:26:56.880 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:57 compute-0 sshd-session[364992]: Invalid user deploy from 209.38.206.249 port 35992
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.103 239969 DEBUG nova.policy [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '915d03d871484dc39e2d074d97f24809', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '02ec8c20b77c4ce9b1406d858bbcf14d', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.110 239969 DEBUG nova.virt.libvirt.imagebackend [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Image locations are: [{'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/4f1e15b7-da72-469a-9162-748b08cf386f/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/4f1e15b7-da72-469a-9162-748b08cf386f/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.176 239969 DEBUG nova.virt.libvirt.imagebackend [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Selected location: {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/4f1e15b7-da72-469a-9162-748b08cf386f/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.177 239969 DEBUG nova.storage.rbd_utils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] cloning images/4f1e15b7-da72-469a-9162-748b08cf386f@snap to None/5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:26:57 compute-0 sshd-session[364992]: Connection closed by invalid user deploy 209.38.206.249 port 35992 [preauth]
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.307 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "2efa99ac92258edf8e4c48daf599d0720831b056" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.439 239969 DEBUG nova.objects.instance [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lazy-loading 'migration_context' on Instance uuid 5d69a9b9-63ea-4df6-bf59-67250f4f907d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.454 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.454 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Ensure instance console log exists: /var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.455 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.455 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:57 compute-0 nova_compute[239965]: 2026-01-26 16:26:57.455 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:57 compute-0 ceph-mon[75140]: pgmap v2291: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 8.7 MiB/s rd, 7.1 MiB/s wr, 246 op/s
Jan 26 16:26:57 compute-0 sshd-session[365115]: Invalid user testuser from 209.38.206.249 port 36008
Jan 26 16:26:57 compute-0 sshd-session[365115]: Connection closed by invalid user testuser 209.38.206.249 port 36008 [preauth]
Jan 26 16:26:57 compute-0 sudo[365173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:26:57 compute-0 sudo[365173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:26:57 compute-0 sudo[365173]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:57 compute-0 sudo[365198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:26:57 compute-0 sudo[365198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.187 239969 DEBUG nova.compute.manager [req-d9d33f7d-97bb-43fb-82da-d10006ebb76f req-729c6fd2-0a65-46dc-8fa5-30edf3299a8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.190 239969 DEBUG nova.compute.manager [req-d9d33f7d-97bb-43fb-82da-d10006ebb76f req-729c6fd2-0a65-46dc-8fa5-30edf3299a8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing instance network info cache due to event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.191 239969 DEBUG oslo_concurrency.lockutils [req-d9d33f7d-97bb-43fb-82da-d10006ebb76f req-729c6fd2-0a65-46dc-8fa5-30edf3299a8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.191 239969 DEBUG oslo_concurrency.lockutils [req-d9d33f7d-97bb-43fb-82da-d10006ebb76f req-729c6fd2-0a65-46dc-8fa5-30edf3299a8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.192 239969 DEBUG nova.network.neutron [req-d9d33f7d-97bb-43fb-82da-d10006ebb76f req-729c6fd2-0a65-46dc-8fa5-30edf3299a8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.351 239969 DEBUG nova.compute.manager [req-c18b7f15-2eea-4369-ad5e-542f6cba3fa6 req-cefce93c-c280-463c-a53f-2072954e7193 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-changed-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.352 239969 DEBUG nova.compute.manager [req-c18b7f15-2eea-4369-ad5e-542f6cba3fa6 req-cefce93c-c280-463c-a53f-2072954e7193 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Refreshing instance network info cache due to event network-changed-81751ed2-633c-4723-82f3-f652ce04297b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.352 239969 DEBUG oslo_concurrency.lockutils [req-c18b7f15-2eea-4369-ad5e-542f6cba3fa6 req-cefce93c-c280-463c-a53f-2072954e7193 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.352 239969 DEBUG oslo_concurrency.lockutils [req-c18b7f15-2eea-4369-ad5e-542f6cba3fa6 req-cefce93c-c280-463c-a53f-2072954e7193 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.352 239969 DEBUG nova.network.neutron [req-c18b7f15-2eea-4369-ad5e-542f6cba3fa6 req-cefce93c-c280-463c-a53f-2072954e7193 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Refreshing network info cache for port 81751ed2-633c-4723-82f3-f652ce04297b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:26:58 compute-0 sudo[365198]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:26:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:26:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:26:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:26:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.588 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:26:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:26:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:26:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:26:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:26:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:26:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:26:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:26:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:26:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:26:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.0 MiB/s wr, 408 op/s
Jan 26 16:26:58 compute-0 sudo[365254]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:26:58 compute-0 sudo[365254]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:26:58 compute-0 sudo[365254]: pam_unix(sudo:session): session closed for user root
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.747 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:26:58 compute-0 nova_compute[239965]: 2026-01-26 16:26:58.825 239969 DEBUG nova.network.neutron [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Successfully created port: fb5c2eef-7a9f-43e2-bf77-34753400c407 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:26:58 compute-0 sudo[365279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:26:58 compute-0 sudo[365279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:26:59 compute-0 podman[365316]: 2026-01-26 16:26:59.114505205 +0000 UTC m=+0.039426296 container create 232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 16:26:59 compute-0 systemd[1]: Started libpod-conmon-232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407.scope.
Jan 26 16:26:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:26:59 compute-0 podman[365316]: 2026-01-26 16:26:59.097718474 +0000 UTC m=+0.022639585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:26:59 compute-0 podman[365316]: 2026-01-26 16:26:59.195346695 +0000 UTC m=+0.120267876 container init 232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:26:59 compute-0 podman[365316]: 2026-01-26 16:26:59.20492358 +0000 UTC m=+0.129844691 container start 232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:26:59 compute-0 podman[365316]: 2026-01-26 16:26:59.20861571 +0000 UTC m=+0.133536891 container attach 232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:26:59 compute-0 recursing_blackwell[365332]: 167 167
Jan 26 16:26:59 compute-0 systemd[1]: libpod-232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407.scope: Deactivated successfully.
Jan 26 16:26:59 compute-0 conmon[365332]: conmon 232e9c59901d47869bc8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407.scope/container/memory.events
Jan 26 16:26:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:59.250 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:26:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:59.251 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:26:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:26:59.252 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:26:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:26:59 compute-0 podman[365337]: 2026-01-26 16:26:59.265272908 +0000 UTC m=+0.039240242 container died 232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-2890d17046f9b4a1ecec9c5e61b6b9deea264a6293a3a94bc9fef705b1d3bc11-merged.mount: Deactivated successfully.
Jan 26 16:26:59 compute-0 podman[365337]: 2026-01-26 16:26:59.309436619 +0000 UTC m=+0.083403873 container remove 232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_blackwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:26:59 compute-0 systemd[1]: libpod-conmon-232e9c59901d47869bc8a7046f5950c9f51e2977351360d24ea3fc7024535407.scope: Deactivated successfully.
Jan 26 16:26:59 compute-0 nova_compute[239965]: 2026-01-26 16:26:59.442 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444804.4413965, deadd136-f039-481d-bfc4-bbc46bc71563 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:26:59 compute-0 nova_compute[239965]: 2026-01-26 16:26:59.443 239969 INFO nova.compute.manager [-] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] VM Stopped (Lifecycle Event)
Jan 26 16:26:59 compute-0 nova_compute[239965]: 2026-01-26 16:26:59.464 239969 DEBUG nova.compute.manager [None req-584fda0f-b7d8-4671-9805-bfba1feabe11 - - - - - -] [instance: deadd136-f039-481d-bfc4-bbc46bc71563] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:26:59 compute-0 nova_compute[239965]: 2026-01-26 16:26:59.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:26:59 compute-0 podman[365359]: 2026-01-26 16:26:59.528530175 +0000 UTC m=+0.061732273 container create 8690cb067f069632b7cd536c0f8c061989a33f09eafc26a67f3316105d862970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:26:59 compute-0 systemd[1]: Started libpod-conmon-8690cb067f069632b7cd536c0f8c061989a33f09eafc26a67f3316105d862970.scope.
Jan 26 16:26:59 compute-0 podman[365359]: 2026-01-26 16:26:59.497479565 +0000 UTC m=+0.030681673 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:26:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41285d482f9469d0482e9334d0c2c51edc0d64b08eae41942590bc266d04932e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41285d482f9469d0482e9334d0c2c51edc0d64b08eae41942590bc266d04932e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41285d482f9469d0482e9334d0c2c51edc0d64b08eae41942590bc266d04932e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41285d482f9469d0482e9334d0c2c51edc0d64b08eae41942590bc266d04932e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41285d482f9469d0482e9334d0c2c51edc0d64b08eae41942590bc266d04932e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:26:59 compute-0 podman[365359]: 2026-01-26 16:26:59.648160554 +0000 UTC m=+0.181362672 container init 8690cb067f069632b7cd536c0f8c061989a33f09eafc26a67f3316105d862970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:26:59 compute-0 podman[365359]: 2026-01-26 16:26:59.653621598 +0000 UTC m=+0.186823666 container start 8690cb067f069632b7cd536c0f8c061989a33f09eafc26a67f3316105d862970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 16:26:59 compute-0 nova_compute[239965]: 2026-01-26 16:26:59.753 239969 DEBUG nova.network.neutron [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Successfully updated port: fb5c2eef-7a9f-43e2-bf77-34753400c407 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:26:59 compute-0 nova_compute[239965]: 2026-01-26 16:26:59.775 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:26:59 compute-0 nova_compute[239965]: 2026-01-26 16:26:59.776 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquired lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:26:59 compute-0 nova_compute[239965]: 2026-01-26 16:26:59.776 239969 DEBUG nova.network.neutron [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:26:59 compute-0 nova_compute[239965]: 2026-01-26 16:26:59.916 239969 DEBUG nova.network.neutron [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:27:00 compute-0 podman[365359]: 2026-01-26 16:27:00.040071753 +0000 UTC m=+0.573273831 container attach 8690cb067f069632b7cd536c0f8c061989a33f09eafc26a67f3316105d862970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:27:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:27:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:27:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:27:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:27:00 compute-0 ceph-mon[75140]: pgmap v2292: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 6.0 MiB/s wr, 408 op/s
Jan 26 16:27:00 compute-0 thirsty_blackwell[365376]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:27:00 compute-0 thirsty_blackwell[365376]: --> All data devices are unavailable
Jan 26 16:27:00 compute-0 systemd[1]: libpod-8690cb067f069632b7cd536c0f8c061989a33f09eafc26a67f3316105d862970.scope: Deactivated successfully.
Jan 26 16:27:00 compute-0 podman[365359]: 2026-01-26 16:27:00.187822111 +0000 UTC m=+0.721024219 container died 8690cb067f069632b7cd536c0f8c061989a33f09eafc26a67f3316105d862970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-41285d482f9469d0482e9334d0c2c51edc0d64b08eae41942590bc266d04932e-merged.mount: Deactivated successfully.
Jan 26 16:27:00 compute-0 podman[365359]: 2026-01-26 16:27:00.242739256 +0000 UTC m=+0.775941334 container remove 8690cb067f069632b7cd536c0f8c061989a33f09eafc26a67f3316105d862970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_blackwell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.272 239969 DEBUG nova.compute.manager [req-74989845-34af-409f-ba17-de368ed644a7 req-8a72c6ec-2419-44f5-9201-fb4094f71bf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received event network-changed-fb5c2eef-7a9f-43e2-bf77-34753400c407 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.273 239969 DEBUG nova.compute.manager [req-74989845-34af-409f-ba17-de368ed644a7 req-8a72c6ec-2419-44f5-9201-fb4094f71bf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Refreshing instance network info cache due to event network-changed-fb5c2eef-7a9f-43e2-bf77-34753400c407. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.273 239969 DEBUG oslo_concurrency.lockutils [req-74989845-34af-409f-ba17-de368ed644a7 req-8a72c6ec-2419-44f5-9201-fb4094f71bf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:00 compute-0 systemd[1]: libpod-conmon-8690cb067f069632b7cd536c0f8c061989a33f09eafc26a67f3316105d862970.scope: Deactivated successfully.
Jan 26 16:27:00 compute-0 sudo[365279]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:00 compute-0 sudo[365409]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:27:00 compute-0 sudo[365409]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:27:00 compute-0 sudo[365409]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:27:00 compute-0 sudo[365434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.453 239969 DEBUG nova.network.neutron [req-d9d33f7d-97bb-43fb-82da-d10006ebb76f req-729c6fd2-0a65-46dc-8fa5-30edf3299a8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updated VIF entry in instance network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.454 239969 DEBUG nova.network.neutron [req-d9d33f7d-97bb-43fb-82da-d10006ebb76f req-729c6fd2-0a65-46dc-8fa5-30edf3299a8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updating instance_info_cache with network_info: [{"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:00 compute-0 sudo[365434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.474 239969 DEBUG oslo_concurrency.lockutils [req-d9d33f7d-97bb-43fb-82da-d10006ebb76f req-729c6fd2-0a65-46dc-8fa5-30edf3299a8a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:27:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 5.0 MiB/s wr, 334 op/s
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.702 239969 DEBUG nova.network.neutron [req-c18b7f15-2eea-4369-ad5e-542f6cba3fa6 req-cefce93c-c280-463c-a53f-2072954e7193 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Updated VIF entry in instance network info cache for port 81751ed2-633c-4723-82f3-f652ce04297b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.703 239969 DEBUG nova.network.neutron [req-c18b7f15-2eea-4369-ad5e-542f6cba3fa6 req-cefce93c-c280-463c-a53f-2072954e7193 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Updating instance_info_cache with network_info: [{"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.724 239969 DEBUG oslo_concurrency.lockutils [req-c18b7f15-2eea-4369-ad5e-542f6cba3fa6 req-cefce93c-c280-463c-a53f-2072954e7193 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:00 compute-0 podman[365470]: 2026-01-26 16:27:00.740166198 +0000 UTC m=+0.042136333 container create 5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_galileo, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 16:27:00 compute-0 systemd[1]: Started libpod-conmon-5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a.scope.
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.787 239969 DEBUG nova.network.neutron [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Updating instance_info_cache with network_info: [{"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.814 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Releasing lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.814 239969 DEBUG nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Instance network_info: |[{"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:27:00 compute-0 podman[365470]: 2026-01-26 16:27:00.719192984 +0000 UTC m=+0.021163039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.815 239969 DEBUG oslo_concurrency.lockutils [req-74989845-34af-409f-ba17-de368ed644a7 req-8a72c6ec-2419-44f5-9201-fb4094f71bf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.815 239969 DEBUG nova.network.neutron [req-74989845-34af-409f-ba17-de368ed644a7 req-8a72c6ec-2419-44f5-9201-fb4094f71bf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Refreshing network info cache for port fb5c2eef-7a9f-43e2-bf77-34753400c407 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:27:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.819 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Start _get_guest_xml network_info=[{"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T16:26:46Z,direct_url=<?>,disk_format='raw',id=4f1e15b7-da72-469a-9162-748b08cf386f,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1274478816',owner='02ec8c20b77c4ce9b1406d858bbcf14d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T16:26:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': '4f1e15b7-da72-469a-9162-748b08cf386f'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.827 239969 WARNING nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:27:00 compute-0 podman[365470]: 2026-01-26 16:27:00.836220601 +0000 UTC m=+0.138190666 container init 5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.840 239969 DEBUG nova.virt.libvirt.host [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.841 239969 DEBUG nova.virt.libvirt.host [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.845 239969 DEBUG nova.virt.libvirt.host [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:27:00 compute-0 podman[365470]: 2026-01-26 16:27:00.846728178 +0000 UTC m=+0.148698223 container start 5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_galileo, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.846 239969 DEBUG nova.virt.libvirt.host [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.848 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.848 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T16:26:46Z,direct_url=<?>,disk_format='raw',id=4f1e15b7-da72-469a-9162-748b08cf386f,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1274478816',owner='02ec8c20b77c4ce9b1406d858bbcf14d',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T16:26:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.849 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.850 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:27:00 compute-0 podman[365470]: 2026-01-26 16:27:00.850566012 +0000 UTC m=+0.152536057 container attach 5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.850 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.851 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:27:00 compute-0 youthful_galileo[365486]: 167 167
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.851 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:27:00 compute-0 systemd[1]: libpod-5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a.scope: Deactivated successfully.
Jan 26 16:27:00 compute-0 conmon[365486]: conmon 5e71aedba6bfd04335f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a.scope/container/memory.events
Jan 26 16:27:00 compute-0 podman[365470]: 2026-01-26 16:27:00.853042943 +0000 UTC m=+0.155012978 container died 5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.852 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.853 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.853 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.854 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.854 239969 DEBUG nova.virt.hardware [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:27:00 compute-0 nova_compute[239965]: 2026-01-26 16:27:00.857 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a846c17fd63e1e629cc7c185fef885b9e691df6058fe769688579a9abad588b-merged.mount: Deactivated successfully.
Jan 26 16:27:00 compute-0 podman[365470]: 2026-01-26 16:27:00.896005635 +0000 UTC m=+0.197975660 container remove 5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_galileo, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:27:00 compute-0 systemd[1]: libpod-conmon-5e71aedba6bfd04335f04ed3ed74a23996af8b5fb626d46fef3717ebecc9d86a.scope: Deactivated successfully.
Jan 26 16:27:01 compute-0 ceph-mon[75140]: pgmap v2293: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 9.6 MiB/s rd, 5.0 MiB/s wr, 334 op/s
Jan 26 16:27:01 compute-0 podman[365527]: 2026-01-26 16:27:01.082662806 +0000 UTC m=+0.040147334 container create e630e5eb310a08440ed24dd4ea53991283a470aeceb9ce5e28f98aa957351578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:27:01 compute-0 podman[365527]: 2026-01-26 16:27:01.066417028 +0000 UTC m=+0.023901566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:27:01 compute-0 systemd[1]: Started libpod-conmon-e630e5eb310a08440ed24dd4ea53991283a470aeceb9ce5e28f98aa957351578.scope.
Jan 26 16:27:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289136ef7dd3725da9fc6c247b65c7a239bb2ef256be66a394ca83946aa857c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289136ef7dd3725da9fc6c247b65c7a239bb2ef256be66a394ca83946aa857c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289136ef7dd3725da9fc6c247b65c7a239bb2ef256be66a394ca83946aa857c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/289136ef7dd3725da9fc6c247b65c7a239bb2ef256be66a394ca83946aa857c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:27:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698601562' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:01 compute-0 nova_compute[239965]: 2026-01-26 16:27:01.425 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:01 compute-0 nova_compute[239965]: 2026-01-26 16:27:01.447 239969 DEBUG nova.storage.rbd_utils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:01 compute-0 nova_compute[239965]: 2026-01-26 16:27:01.451 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:01 compute-0 podman[365527]: 2026-01-26 16:27:01.520007056 +0000 UTC m=+0.477491614 container init e630e5eb310a08440ed24dd4ea53991283a470aeceb9ce5e28f98aa957351578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 16:27:01 compute-0 podman[365527]: 2026-01-26 16:27:01.531698693 +0000 UTC m=+0.489183221 container start e630e5eb310a08440ed24dd4ea53991283a470aeceb9ce5e28f98aa957351578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:27:01 compute-0 podman[365527]: 2026-01-26 16:27:01.622421594 +0000 UTC m=+0.579906122 container attach e630e5eb310a08440ed24dd4ea53991283a470aeceb9ce5e28f98aa957351578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:27:01 compute-0 serene_carson[365542]: {
Jan 26 16:27:01 compute-0 serene_carson[365542]:     "0": [
Jan 26 16:27:01 compute-0 serene_carson[365542]:         {
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "devices": [
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "/dev/loop3"
Jan 26 16:27:01 compute-0 serene_carson[365542]:             ],
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_name": "ceph_lv0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_size": "21470642176",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "name": "ceph_lv0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "tags": {
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.cluster_name": "ceph",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.crush_device_class": "",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.encrypted": "0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.objectstore": "bluestore",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.osd_id": "0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.type": "block",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.vdo": "0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.with_tpm": "0"
Jan 26 16:27:01 compute-0 serene_carson[365542]:             },
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "type": "block",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "vg_name": "ceph_vg0"
Jan 26 16:27:01 compute-0 serene_carson[365542]:         }
Jan 26 16:27:01 compute-0 serene_carson[365542]:     ],
Jan 26 16:27:01 compute-0 serene_carson[365542]:     "1": [
Jan 26 16:27:01 compute-0 serene_carson[365542]:         {
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "devices": [
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "/dev/loop4"
Jan 26 16:27:01 compute-0 serene_carson[365542]:             ],
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_name": "ceph_lv1",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_size": "21470642176",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "name": "ceph_lv1",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "tags": {
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.cluster_name": "ceph",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.crush_device_class": "",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.encrypted": "0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.objectstore": "bluestore",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.osd_id": "1",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.type": "block",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.vdo": "0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.with_tpm": "0"
Jan 26 16:27:01 compute-0 serene_carson[365542]:             },
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "type": "block",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "vg_name": "ceph_vg1"
Jan 26 16:27:01 compute-0 serene_carson[365542]:         }
Jan 26 16:27:01 compute-0 serene_carson[365542]:     ],
Jan 26 16:27:01 compute-0 serene_carson[365542]:     "2": [
Jan 26 16:27:01 compute-0 serene_carson[365542]:         {
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "devices": [
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "/dev/loop5"
Jan 26 16:27:01 compute-0 serene_carson[365542]:             ],
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_name": "ceph_lv2",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_size": "21470642176",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "name": "ceph_lv2",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "tags": {
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.cluster_name": "ceph",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.crush_device_class": "",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.encrypted": "0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.objectstore": "bluestore",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.osd_id": "2",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.type": "block",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.vdo": "0",
Jan 26 16:27:01 compute-0 serene_carson[365542]:                 "ceph.with_tpm": "0"
Jan 26 16:27:01 compute-0 serene_carson[365542]:             },
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "type": "block",
Jan 26 16:27:01 compute-0 serene_carson[365542]:             "vg_name": "ceph_vg2"
Jan 26 16:27:01 compute-0 serene_carson[365542]:         }
Jan 26 16:27:01 compute-0 serene_carson[365542]:     ]
Jan 26 16:27:01 compute-0 serene_carson[365542]: }
Jan 26 16:27:01 compute-0 nova_compute[239965]: 2026-01-26 16:27:01.882 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:01 compute-0 systemd[1]: libpod-e630e5eb310a08440ed24dd4ea53991283a470aeceb9ce5e28f98aa957351578.scope: Deactivated successfully.
Jan 26 16:27:01 compute-0 podman[365527]: 2026-01-26 16:27:01.895653006 +0000 UTC m=+0.853137534 container died e630e5eb310a08440ed24dd4ea53991283a470aeceb9ce5e28f98aa957351578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:27:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-289136ef7dd3725da9fc6c247b65c7a239bb2ef256be66a394ca83946aa857c1-merged.mount: Deactivated successfully.
Jan 26 16:27:01 compute-0 podman[365527]: 2026-01-26 16:27:01.936485066 +0000 UTC m=+0.893969584 container remove e630e5eb310a08440ed24dd4ea53991283a470aeceb9ce5e28f98aa957351578 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_carson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 16:27:01 compute-0 systemd[1]: libpod-conmon-e630e5eb310a08440ed24dd4ea53991283a470aeceb9ce5e28f98aa957351578.scope: Deactivated successfully.
Jan 26 16:27:01 compute-0 sudo[365434]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:27:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389480181' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.029 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.032 239969 DEBUG nova.virt.libvirt.vif [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:26:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1369427436',display_name='tempest-TestSnapshotPattern-server-1369427436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1369427436',id=144,image_ref='4f1e15b7-da72-469a-9162-748b08cf386f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbFIdmmD0YOLT/gnQMrV89ZkLND1rAgrbfF3qbw6dEO95K+VelVpcFet15bYd/xM6RQfQxT61ExvAJRv5XtJWc2bRBlJ7p2k0AnC8pnTtVi312SPdjQj+CEHqI1btz5HA==',key_name='tempest-TestSnapshotPattern-544616432',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02ec8c20b77c4ce9b1406d858bbcf14d',ramdisk_id='',reservation_id='r-jxkuoj5p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='39a41999-3c56-4679-8c57-9286dc4edbb6',image_min_disk='1',image_min_ram='0',image_owner_id='02ec8c20b77c4ce9b1406d858bbcf14d',image_owner_project_name='tempest-TestSnapshotPattern-616132540',image_owner_user_name='tempest-TestSnapshotPattern-616132540-project-member',image_user_id='915d03d871484dc39e2d074d97f24809',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-616132540',owner_user_name='tempest-TestSnapshotPattern-616132540-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:56Z,user_data=None,user_id='915d03d871484dc39e2d074d97f24809',uuid=5d69a9b9-63ea-4df6-bf59-67250f4f907d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.033 239969 DEBUG nova.network.os_vif_util [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converting VIF {"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.034 239969 DEBUG nova.network.os_vif_util [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=fb5c2eef-7a9f-43e2-bf77-34753400c407,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb5c2eef-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.036 239969 DEBUG nova.objects.instance [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lazy-loading 'pci_devices' on Instance uuid 5d69a9b9-63ea-4df6-bf59-67250f4f907d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:02 compute-0 sudo[365604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:27:02 compute-0 sudo[365604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:27:02 compute-0 sudo[365604]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.056 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <uuid>5d69a9b9-63ea-4df6-bf59-67250f4f907d</uuid>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <name>instance-00000090</name>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <nova:name>tempest-TestSnapshotPattern-server-1369427436</nova:name>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:27:00</nova:creationTime>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <nova:user uuid="915d03d871484dc39e2d074d97f24809">tempest-TestSnapshotPattern-616132540-project-member</nova:user>
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <nova:project uuid="02ec8c20b77c4ce9b1406d858bbcf14d">tempest-TestSnapshotPattern-616132540</nova:project>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="4f1e15b7-da72-469a-9162-748b08cf386f"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <nova:port uuid="fb5c2eef-7a9f-43e2-bf77-34753400c407">
Jan 26 16:27:02 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <system>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <entry name="serial">5d69a9b9-63ea-4df6-bf59-67250f4f907d</entry>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <entry name="uuid">5d69a9b9-63ea-4df6-bf59-67250f4f907d</entry>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     </system>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <os>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   </os>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <features>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   </features>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk">
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       </source>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk.config">
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       </source>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:27:02 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:ff:c3:d7"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <target dev="tapfb5c2eef-7a"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d/console.log" append="off"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <video>
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     </video>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:27:02 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:27:02 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:27:02 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:27:02 compute-0 nova_compute[239965]: </domain>
Jan 26 16:27:02 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.059 239969 DEBUG nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Preparing to wait for external event network-vif-plugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.059 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.060 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.060 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.061 239969 DEBUG nova.virt.libvirt.vif [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:26:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1369427436',display_name='tempest-TestSnapshotPattern-server-1369427436',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1369427436',id=144,image_ref='4f1e15b7-da72-469a-9162-748b08cf386f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbFIdmmD0YOLT/gnQMrV89ZkLND1rAgrbfF3qbw6dEO95K+VelVpcFet15bYd/xM6RQfQxT61ExvAJRv5XtJWc2bRBlJ7p2k0AnC8pnTtVi312SPdjQj+CEHqI1btz5HA==',key_name='tempest-TestSnapshotPattern-544616432',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='02ec8c20b77c4ce9b1406d858bbcf14d',ramdisk_id='',reservation_id='r-jxkuoj5p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='39a41999-3c56-4679-8c57-9286dc4edbb6',image_min_disk='1',image_min_ram='0',image_owner_id='02ec8c20b77c4ce9b1406d858bbcf14d',image_owner_project_name='tempest-TestSnapshotPattern-616132540',image_owner_user_name='tempest-TestSnapshotPattern-616132540-project-member',image_user_id='915d03d871484dc39e2d074d97f24809',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-616132540',owner_user_name='tempest-TestSnapshotPattern-616132540-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:26:56Z,user_data=None,user_id='915d03d871484dc39e2d074d97f24809',uuid=5d69a9b9-63ea-4df6-bf59-67250f4f907d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.062 239969 DEBUG nova.network.os_vif_util [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converting VIF {"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.063 239969 DEBUG nova.network.os_vif_util [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ff:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=fb5c2eef-7a9f-43e2-bf77-34753400c407,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb5c2eef-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.064 239969 DEBUG os_vif [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=fb5c2eef-7a9f-43e2-bf77-34753400c407,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb5c2eef-7a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.066 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.067 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.074 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfb5c2eef-7a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.074 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfb5c2eef-7a, col_values=(('external_ids', {'iface-id': 'fb5c2eef-7a9f-43e2-bf77-34753400c407', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ff:c3:d7', 'vm-uuid': '5d69a9b9-63ea-4df6-bf59-67250f4f907d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/698601562' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2389480181' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.077 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:02 compute-0 NetworkManager[48954]: <info>  [1769444822.0773] manager: (tapfb5c2eef-7a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.088 239969 INFO os_vif [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ff:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=fb5c2eef-7a9f-43e2-bf77-34753400c407,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb5c2eef-7a')
Jan 26 16:27:02 compute-0 sudo[365631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:27:02 compute-0 sudo[365631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.159 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.169 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.169 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] No VIF found with MAC fa:16:3e:ff:c3:d7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.170 239969 INFO nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Using config drive
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.193 239969 DEBUG nova.storage.rbd_utils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:02 compute-0 podman[365689]: 2026-01-26 16:27:02.402122299 +0000 UTC m=+0.039693463 container create 54d7bebd5cdddaa2df7e4452791ffd05bb9f60b23d7526f3ce6346539c807711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_almeida, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 16:27:02 compute-0 systemd[1]: Started libpod-conmon-54d7bebd5cdddaa2df7e4452791ffd05bb9f60b23d7526f3ce6346539c807711.scope.
Jan 26 16:27:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:27:02 compute-0 podman[365689]: 2026-01-26 16:27:02.386001905 +0000 UTC m=+0.023573089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:27:02 compute-0 podman[365689]: 2026-01-26 16:27:02.496390958 +0000 UTC m=+0.133962172 container init 54d7bebd5cdddaa2df7e4452791ffd05bb9f60b23d7526f3ce6346539c807711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:27:02 compute-0 podman[365689]: 2026-01-26 16:27:02.502145579 +0000 UTC m=+0.139716743 container start 54d7bebd5cdddaa2df7e4452791ffd05bb9f60b23d7526f3ce6346539c807711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_almeida, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:27:02 compute-0 podman[365689]: 2026-01-26 16:27:02.505147822 +0000 UTC m=+0.142719006 container attach 54d7bebd5cdddaa2df7e4452791ffd05bb9f60b23d7526f3ce6346539c807711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_almeida, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:27:02 compute-0 trusting_almeida[365706]: 167 167
Jan 26 16:27:02 compute-0 systemd[1]: libpod-54d7bebd5cdddaa2df7e4452791ffd05bb9f60b23d7526f3ce6346539c807711.scope: Deactivated successfully.
Jan 26 16:27:02 compute-0 podman[365689]: 2026-01-26 16:27:02.509779666 +0000 UTC m=+0.147350840 container died 54d7bebd5cdddaa2df7e4452791ffd05bb9f60b23d7526f3ce6346539c807711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:27:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f80cd4bc9a1f0ce8a5f2965df73f441ba3618fb6fe858ba071d2ce92a425e85-merged.mount: Deactivated successfully.
Jan 26 16:27:02 compute-0 podman[365689]: 2026-01-26 16:27:02.561505582 +0000 UTC m=+0.199076746 container remove 54d7bebd5cdddaa2df7e4452791ffd05bb9f60b23d7526f3ce6346539c807711 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_almeida, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:27:02 compute-0 systemd[1]: libpod-conmon-54d7bebd5cdddaa2df7e4452791ffd05bb9f60b23d7526f3ce6346539c807711.scope: Deactivated successfully.
Jan 26 16:27:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 32 KiB/s wr, 207 op/s
Jan 26 16:27:02 compute-0 podman[365730]: 2026-01-26 16:27:02.762384032 +0000 UTC m=+0.041825485 container create bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:27:02 compute-0 systemd[1]: Started libpod-conmon-bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca.scope.
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.823 239969 INFO nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Creating config drive at /var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d/disk.config
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.831 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvy93he33 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:02 compute-0 podman[365730]: 2026-01-26 16:27:02.742717281 +0000 UTC m=+0.022158784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:27:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0cfcc0a18695d2f415c3e44c8ad207d2f392d52f0d991c36ad0554eb6d844d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0cfcc0a18695d2f415c3e44c8ad207d2f392d52f0d991c36ad0554eb6d844d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0cfcc0a18695d2f415c3e44c8ad207d2f392d52f0d991c36ad0554eb6d844d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0cfcc0a18695d2f415c3e44c8ad207d2f392d52f0d991c36ad0554eb6d844d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.884 239969 DEBUG nova.network.neutron [req-74989845-34af-409f-ba17-de368ed644a7 req-8a72c6ec-2419-44f5-9201-fb4094f71bf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Updated VIF entry in instance network info cache for port fb5c2eef-7a9f-43e2-bf77-34753400c407. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.886 239969 DEBUG nova.network.neutron [req-74989845-34af-409f-ba17-de368ed644a7 req-8a72c6ec-2419-44f5-9201-fb4094f71bf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Updating instance_info_cache with network_info: [{"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:02 compute-0 podman[365730]: 2026-01-26 16:27:02.899699935 +0000 UTC m=+0.179141408 container init bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.903 239969 DEBUG oslo_concurrency.lockutils [req-74989845-34af-409f-ba17-de368ed644a7 req-8a72c6ec-2419-44f5-9201-fb4094f71bf9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:02 compute-0 podman[365730]: 2026-01-26 16:27:02.910696684 +0000 UTC m=+0.190138127 container start bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jones, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:27:02 compute-0 podman[365730]: 2026-01-26 16:27:02.914236671 +0000 UTC m=+0.193678124 container attach bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jones, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 16:27:02 compute-0 nova_compute[239965]: 2026-01-26 16:27:02.990 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvy93he33" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.016 239969 DEBUG nova.storage.rbd_utils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] rbd image 5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.023 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d/disk.config 5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:03 compute-0 ceph-mon[75140]: pgmap v2294: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 32 KiB/s wr, 207 op/s
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.408 239969 DEBUG oslo_concurrency.processutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d/disk.config 5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.409 239969 INFO nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Deleting local config drive /var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d/disk.config because it was imported into RBD.
Jan 26 16:27:03 compute-0 kernel: tapfb5c2eef-7a: entered promiscuous mode
Jan 26 16:27:03 compute-0 NetworkManager[48954]: <info>  [1769444823.4748] manager: (tapfb5c2eef-7a): new Tun device (/org/freedesktop/NetworkManager/Devices/621)
Jan 26 16:27:03 compute-0 ovn_controller[146046]: 2026-01-26T16:27:03Z|01523|binding|INFO|Claiming lport fb5c2eef-7a9f-43e2-bf77-34753400c407 for this chassis.
Jan 26 16:27:03 compute-0 ovn_controller[146046]: 2026-01-26T16:27:03Z|01524|binding|INFO|fb5c2eef-7a9f-43e2-bf77-34753400c407: Claiming fa:16:3e:ff:c3:d7 10.100.0.12
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.486 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:c3:d7 10.100.0.12'], port_security=['fa:16:3e:ff:c3:d7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5d69a9b9-63ea-4df6-bf59-67250f4f907d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01b6ba29-8317-46fc-85a5-35c029f2796c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02ec8c20b77c4ce9b1406d858bbcf14d', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4f55238b-e3e6-4801-8e90-114accc75c46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d1802e6-be6e-40d0-8709-a6af00654d79, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=fb5c2eef-7a9f-43e2-bf77-34753400c407) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.488 156105 INFO neutron.agent.ovn.metadata.agent [-] Port fb5c2eef-7a9f-43e2-bf77-34753400c407 in datapath 01b6ba29-8317-46fc-85a5-35c029f2796c bound to our chassis
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.489 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01b6ba29-8317-46fc-85a5-35c029f2796c
Jan 26 16:27:03 compute-0 ovn_controller[146046]: 2026-01-26T16:27:03Z|01525|binding|INFO|Setting lport fb5c2eef-7a9f-43e2-bf77-34753400c407 ovn-installed in OVS
Jan 26 16:27:03 compute-0 ovn_controller[146046]: 2026-01-26T16:27:03Z|01526|binding|INFO|Setting lport fb5c2eef-7a9f-43e2-bf77-34753400c407 up in Southbound
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.501 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.503 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.506 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.517 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[97c0e2bf-00ea-45b7-9ec1-83316115e451]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:03 compute-0 systemd-udevd[365868]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:27:03 compute-0 NetworkManager[48954]: <info>  [1769444823.5322] device (tapfb5c2eef-7a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:27:03 compute-0 NetworkManager[48954]: <info>  [1769444823.5330] device (tapfb5c2eef-7a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:27:03 compute-0 systemd-machined[208061]: New machine qemu-174-instance-00000090.
Jan 26 16:27:03 compute-0 systemd[1]: Started Virtual Machine qemu-174-instance-00000090.
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.555 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d673b939-2be8-45d9-a00d-e1ea4a1604a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.564 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[7139ceb4-c223-475c-bc3e-bcf03abe7634]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.599 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[74c79a76-d79d-491b-af54-a113a5d58bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.616 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fb95d8-fb1a-436e-8539-5f0215a7369a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01b6ba29-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:79:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626016, 'reachable_time': 23299, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365888, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:03 compute-0 lvm[365889]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:27:03 compute-0 lvm[365889]: VG ceph_vg0 finished
Jan 26 16:27:03 compute-0 lvm[365893]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:27:03 compute-0 lvm[365893]: VG ceph_vg1 finished
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.636 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[805c1036-c777-4fd4-a012-c18c033dcd42]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01b6ba29-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626030, 'tstamp': 626030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365891, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01b6ba29-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626033, 'tstamp': 626033}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365891, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.639 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01b6ba29-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.640 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.642 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.642 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01b6ba29-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.642 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.643 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01b6ba29-80, col_values=(('external_ids', {'iface-id': '36c96456-84ef-4832-9322-d686bc015797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:03 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:03.643 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:03 compute-0 lvm[365895]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:27:03 compute-0 lvm[365895]: VG ceph_vg2 finished
Jan 26 16:27:03 compute-0 hungry_jones[365747]: {}
Jan 26 16:27:03 compute-0 systemd[1]: libpod-bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca.scope: Deactivated successfully.
Jan 26 16:27:03 compute-0 systemd[1]: libpod-bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca.scope: Consumed 1.297s CPU time.
Jan 26 16:27:03 compute-0 nova_compute[239965]: 2026-01-26 16:27:03.749 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:03 compute-0 podman[365916]: 2026-01-26 16:27:03.790728146 +0000 UTC m=+0.026083470 container died bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:27:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0cfcc0a18695d2f415c3e44c8ad207d2f392d52f0d991c36ad0554eb6d844d6-merged.mount: Deactivated successfully.
Jan 26 16:27:04 compute-0 podman[365916]: 2026-01-26 16:27:04.088066377 +0000 UTC m=+0.323421691 container remove bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jones, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:27:04 compute-0 systemd[1]: libpod-conmon-bff935afecac83abbe422d95fe67cc6573d80d6857f44ea951cc6e6d07ef8cca.scope: Deactivated successfully.
Jan 26 16:27:04 compute-0 sudo[365631]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:04 compute-0 nova_compute[239965]: 2026-01-26 16:27:04.149 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444824.1490445, 5d69a9b9-63ea-4df6-bf59-67250f4f907d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:04 compute-0 nova_compute[239965]: 2026-01-26 16:27:04.149 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] VM Started (Lifecycle Event)
Jan 26 16:27:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:27:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:27:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:27:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:27:04 compute-0 nova_compute[239965]: 2026-01-26 16:27:04.169 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:04 compute-0 nova_compute[239965]: 2026-01-26 16:27:04.175 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444824.1513104, 5d69a9b9-63ea-4df6-bf59-67250f4f907d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:04 compute-0 nova_compute[239965]: 2026-01-26 16:27:04.175 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] VM Paused (Lifecycle Event)
Jan 26 16:27:04 compute-0 nova_compute[239965]: 2026-01-26 16:27:04.202 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:04 compute-0 nova_compute[239965]: 2026-01-26 16:27:04.204 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:27:04 compute-0 sudo[365955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:27:04 compute-0 nova_compute[239965]: 2026-01-26 16:27:04.232 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:27:04 compute-0 sudo[365955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:27:04 compute-0 sudo[365955]: pam_unix(sudo:session): session closed for user root
Jan 26 16:27:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:04 compute-0 nova_compute[239965]: 2026-01-26 16:27:04.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 31 KiB/s wr, 199 op/s
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.076 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.079 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.079 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.079 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.080 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:27:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:27:05 compute-0 ceph-mon[75140]: pgmap v2295: 305 pgs: 305 active+clean; 339 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 31 KiB/s wr, 199 op/s
Jan 26 16:27:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:27:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/290709726' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.630 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.727 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.728 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.736 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.737 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.743 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.743 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.749 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.749 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.751 239969 DEBUG nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.762 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.763 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.852 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.853 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.859 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:27:05 compute-0 nova_compute[239965]: 2026-01-26 16:27:05.860 239969 INFO nova.compute.claims [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:27:06 compute-0 nova_compute[239965]: 2026-01-26 16:27:06.050 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:06 compute-0 nova_compute[239965]: 2026-01-26 16:27:06.150 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:27:06 compute-0 nova_compute[239965]: 2026-01-26 16:27:06.152 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2912MB free_disk=59.899827498942614GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:27:06 compute-0 nova_compute[239965]: 2026-01-26 16:27:06.153 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:06 compute-0 ovn_controller[146046]: 2026-01-26T16:27:06Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5c:66:7c 10.100.0.14
Jan 26 16:27:06 compute-0 ovn_controller[146046]: 2026-01-26T16:27:06Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5c:66:7c 10.100.0.14
Jan 26 16:27:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/290709726' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 348 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 623 KiB/s wr, 203 op/s
Jan 26 16:27:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:27:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1186889845' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.049 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.999s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.057 239969 DEBUG nova.compute.provider_tree [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.073 239969 DEBUG nova.scheduler.client.report [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.097 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.244s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.098 239969 DEBUG nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.102 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.950s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.169 239969 DEBUG nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.169 239969 DEBUG nova.network.neutron [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.185 239969 INFO nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.192 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 39a41999-3c56-4679-8c57-9286dc4edbb6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.193 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 759e14a7-49ae-4f6f-bb96-80fc691d50e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.193 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance af71ccf7-284e-436b-9bb8-5bc6c727d3cc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.193 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 5d69a9b9-63ea-4df6-bf59-67250f4f907d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.193 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 02c17ea9-0fbb-4beb-bc93-b0339e8a1048 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.194 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.194 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.201 239969 DEBUG nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.295 239969 DEBUG nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.296 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.296 239969 INFO nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Creating image(s)
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.319 239969 DEBUG nova.storage.rbd_utils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.347 239969 DEBUG nova.storage.rbd_utils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.374 239969 DEBUG nova.storage.rbd_utils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.379 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.424 239969 DEBUG nova.policy [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.428 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.475 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.477 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.478 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.478 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.518 239969 DEBUG nova.storage.rbd_utils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.529 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:07 compute-0 ceph-mon[75140]: pgmap v2296: 305 pgs: 305 active+clean; 348 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 623 KiB/s wr, 203 op/s
Jan 26 16:27:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1186889845' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.795 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.266s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.849 239969 DEBUG nova.storage.rbd_utils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.934 239969 DEBUG nova.objects.instance [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid 02c17ea9-0fbb-4beb-bc93-b0339e8a1048 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.949 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.950 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Ensure instance console log exists: /var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.950 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.951 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:07 compute-0 nova_compute[239965]: 2026-01-26 16:27:07.951 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:27:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279421747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.048 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.054 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.073 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.103 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.103 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.243 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.243 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.261 239969 DEBUG nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.323 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.324 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.334 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.334 239969 INFO nova.compute.claims [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.498 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/279421747' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.9 MiB/s wr, 219 op/s
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.751 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:08 compute-0 nova_compute[239965]: 2026-01-26 16:27:08.767 239969 DEBUG nova.network.neutron [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Successfully created port: 307ae46b-ff90-469d-8239-59ac4ce405cd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:27:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:27:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/145884702' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.080 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.087 239969 DEBUG nova.compute.provider_tree [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.122 239969 DEBUG nova.scheduler.client.report [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.171 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.847s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.172 239969 DEBUG nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.225 239969 DEBUG nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.226 239969 DEBUG nova.network.neutron [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.245 239969 INFO nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:27:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.272 239969 DEBUG nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.362 239969 DEBUG nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.366 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.366 239969 INFO nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Creating image(s)
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.395 239969 DEBUG nova.storage.rbd_utils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:09 compute-0 sshd-session[366233]: Invalid user hadoop from 209.38.206.249 port 36012
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.432 239969 DEBUG nova.storage.rbd_utils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.462 239969 DEBUG nova.storage.rbd_utils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.466 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:09 compute-0 sshd-session[366233]: Connection closed by invalid user hadoop 209.38.206.249 port 36012 [preauth]
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.508 239969 DEBUG nova.policy [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ace5f36058684b5782589457d9e15921', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:27:09 compute-0 ceph-mon[75140]: pgmap v2297: 305 pgs: 305 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.9 MiB/s wr, 219 op/s
Jan 26 16:27:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/145884702' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.553 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.553 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.554 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.554 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.574 239969 DEBUG nova.storage.rbd_utils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.577 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:09 compute-0 ovn_controller[146046]: 2026-01-26T16:27:09Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:6f:f6 10.100.0.11
Jan 26 16:27:09 compute-0 ovn_controller[146046]: 2026-01-26T16:27:09Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:6f:f6 10.100.0.11
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.847 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.270s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.926 239969 DEBUG nova.storage.rbd_utils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] resizing rbd image b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:27:09 compute-0 nova_compute[239965]: 2026-01-26 16:27:09.977 239969 DEBUG nova.network.neutron [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Successfully updated port: 307ae46b-ff90-469d-8239-59ac4ce405cd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.011 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.011 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.012 239969 DEBUG nova.network.neutron [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.019 239969 DEBUG nova.objects.instance [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'migration_context' on Instance uuid b4e64366-a873-483b-a2b9-3d3afc79f7d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.036 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.036 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Ensure instance console log exists: /var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.037 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.037 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.038 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.113 239969 DEBUG nova.compute.manager [req-490c85db-8f41-40c0-8823-dce92e388382 req-70523360-0f2d-46e9-8eb6-2351f9754fdf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received event network-changed-307ae46b-ff90-469d-8239-59ac4ce405cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.113 239969 DEBUG nova.compute.manager [req-490c85db-8f41-40c0-8823-dce92e388382 req-70523360-0f2d-46e9-8eb6-2351f9754fdf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Refreshing instance network info cache due to event network-changed-307ae46b-ff90-469d-8239-59ac4ce405cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.114 239969 DEBUG oslo_concurrency.lockutils [req-490c85db-8f41-40c0-8823-dce92e388382 req-70523360-0f2d-46e9-8eb6-2351f9754fdf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.180 239969 DEBUG nova.network.neutron [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:27:10 compute-0 podman[366403]: 2026-01-26 16:27:10.403252516 +0000 UTC m=+0.076350002 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible)
Jan 26 16:27:10 compute-0 podman[366404]: 2026-01-26 16:27:10.43567408 +0000 UTC m=+0.109467462 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.630 239969 DEBUG nova.compute.manager [req-1e2eb06f-bebe-47bb-bf5f-5de793e2a7f8 req-2be4f368-fa88-4f64-be43-add3128276dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received event network-vif-plugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.631 239969 DEBUG oslo_concurrency.lockutils [req-1e2eb06f-bebe-47bb-bf5f-5de793e2a7f8 req-2be4f368-fa88-4f64-be43-add3128276dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.632 239969 DEBUG oslo_concurrency.lockutils [req-1e2eb06f-bebe-47bb-bf5f-5de793e2a7f8 req-2be4f368-fa88-4f64-be43-add3128276dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.633 239969 DEBUG oslo_concurrency.lockutils [req-1e2eb06f-bebe-47bb-bf5f-5de793e2a7f8 req-2be4f368-fa88-4f64-be43-add3128276dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.633 239969 DEBUG nova.compute.manager [req-1e2eb06f-bebe-47bb-bf5f-5de793e2a7f8 req-2be4f368-fa88-4f64-be43-add3128276dc a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Processing event network-vif-plugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.634 239969 DEBUG nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.639 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444830.6391673, 5d69a9b9-63ea-4df6-bf59-67250f4f907d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.640 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] VM Resumed (Lifecycle Event)
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.645 239969 DEBUG nova.virt.libvirt.driver [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.651 239969 INFO nova.virt.libvirt.driver [-] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Instance spawned successfully.
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.651 239969 INFO nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Took 13.88 seconds to spawn the instance on the hypervisor.
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.652 239969 DEBUG nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 2.9 MiB/s wr, 86 op/s
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.699 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.703 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.709 239969 DEBUG nova.network.neutron [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Successfully created port: dccf97d4-5209-4347-9f07-83ba148d3126 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.757 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.759 239969 INFO nova.compute.manager [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Took 15.00 seconds to build instance.
Jan 26 16:27:10 compute-0 nova_compute[239965]: 2026-01-26 16:27:10.778 239969 DEBUG oslo_concurrency.lockutils [None req-c4b34d6c-d89d-4f69-884d-8cddc5c45e1e 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.110s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:10 compute-0 ceph-mon[75140]: pgmap v2298: 305 pgs: 305 active+clean; 375 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 2.9 MiB/s wr, 86 op/s
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.369 239969 DEBUG nova.network.neutron [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Updating instance_info_cache with network_info: [{"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.390 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.391 239969 DEBUG nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Instance network_info: |[{"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.395 239969 DEBUG oslo_concurrency.lockutils [req-490c85db-8f41-40c0-8823-dce92e388382 req-70523360-0f2d-46e9-8eb6-2351f9754fdf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.395 239969 DEBUG nova.network.neutron [req-490c85db-8f41-40c0-8823-dce92e388382 req-70523360-0f2d-46e9-8eb6-2351f9754fdf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Refreshing network info cache for port 307ae46b-ff90-469d-8239-59ac4ce405cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.402 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Start _get_guest_xml network_info=[{"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.410 239969 WARNING nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.428 239969 DEBUG nova.virt.libvirt.host [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.429 239969 DEBUG nova.virt.libvirt.host [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.437 239969 DEBUG nova.virt.libvirt.host [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.438 239969 DEBUG nova.virt.libvirt.host [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.439 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.439 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.440 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.441 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.441 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.442 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.442 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.442 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.443 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.444 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.444 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.444 239969 DEBUG nova.virt.hardware [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.451 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.836 239969 DEBUG nova.network.neutron [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Successfully updated port: dccf97d4-5209-4347-9f07-83ba148d3126 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.855 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.855 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquired lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:11 compute-0 nova_compute[239965]: 2026-01-26 16:27:11.855 239969 DEBUG nova.network.neutron [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.032 239969 DEBUG nova.network.neutron [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:27:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3819555662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:12 compute-0 sshd-session[366468]: Connection closed by authenticating user root 209.38.206.249 port 49916 [preauth]
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.145 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.694s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.170 239969 DEBUG nova.storage.rbd_utils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3819555662' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.174 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.227 239969 DEBUG nova.compute.manager [req-36cf0074-34de-4958-9356-75b7287c4268 req-73c3b19b-8307-431b-89ab-7c50089c2d38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received event network-changed-dccf97d4-5209-4347-9f07-83ba148d3126 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.228 239969 DEBUG nova.compute.manager [req-36cf0074-34de-4958-9356-75b7287c4268 req-73c3b19b-8307-431b-89ab-7c50089c2d38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Refreshing instance network info cache due to event network-changed-dccf97d4-5209-4347-9f07-83ba148d3126. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.229 239969 DEBUG oslo_concurrency.lockutils [req-36cf0074-34de-4958-9356-75b7287c4268 req-73c3b19b-8307-431b-89ab-7c50089c2d38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:12 compute-0 sshd-session[366491]: Invalid user ec2-user from 209.38.206.249 port 49926
Jan 26 16:27:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 497 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.8 MiB/s wr, 237 op/s
Jan 26 16:27:12 compute-0 sshd-session[366491]: Connection closed by invalid user ec2-user 209.38.206.249 port 49926 [preauth]
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.733 239969 DEBUG nova.compute.manager [req-b27ce552-a3bf-44ce-b0c5-5881fddd15a0 req-15532ceb-162d-4c29-9b06-cefff5a16fde a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received event network-vif-plugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.733 239969 DEBUG oslo_concurrency.lockutils [req-b27ce552-a3bf-44ce-b0c5-5881fddd15a0 req-15532ceb-162d-4c29-9b06-cefff5a16fde a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.734 239969 DEBUG oslo_concurrency.lockutils [req-b27ce552-a3bf-44ce-b0c5-5881fddd15a0 req-15532ceb-162d-4c29-9b06-cefff5a16fde a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.734 239969 DEBUG oslo_concurrency.lockutils [req-b27ce552-a3bf-44ce-b0c5-5881fddd15a0 req-15532ceb-162d-4c29-9b06-cefff5a16fde a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.735 239969 DEBUG nova.compute.manager [req-b27ce552-a3bf-44ce-b0c5-5881fddd15a0 req-15532ceb-162d-4c29-9b06-cefff5a16fde a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] No waiting events found dispatching network-vif-plugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.735 239969 WARNING nova.compute.manager [req-b27ce552-a3bf-44ce-b0c5-5881fddd15a0 req-15532ceb-162d-4c29-9b06-cefff5a16fde a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received unexpected event network-vif-plugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 for instance with vm_state active and task_state None.
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.796 239969 DEBUG nova.network.neutron [req-490c85db-8f41-40c0-8823-dce92e388382 req-70523360-0f2d-46e9-8eb6-2351f9754fdf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Updated VIF entry in instance network info cache for port 307ae46b-ff90-469d-8239-59ac4ce405cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.796 239969 DEBUG nova.network.neutron [req-490c85db-8f41-40c0-8823-dce92e388382 req-70523360-0f2d-46e9-8eb6-2351f9754fdf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Updating instance_info_cache with network_info: [{"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.809 239969 DEBUG oslo_concurrency.lockutils [req-490c85db-8f41-40c0-8823-dce92e388382 req-70523360-0f2d-46e9-8eb6-2351f9754fdf a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:27:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145224261' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.866 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.692s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.867 239969 DEBUG nova.virt.libvirt.vif [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:27:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-78115778',display_name='tempest-TestNetworkBasicOps-server-78115778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-78115778',id=145,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBwBHpp1Vs9MWQPeeeqakNnwtGS48PwvSYJeNUykaXxeW214KrGoXbpYLnywNCCG/bKkzO9tfFdp4KnfWvJ367nCGMOMOcf1Bi3W6elO3tG7Lx1njo9pJYHabyFHcW7Hlg==',key_name='tempest-TestNetworkBasicOps-1084723917',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-0xfa470c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:27:07Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=02c17ea9-0fbb-4beb-bc93-b0339e8a1048,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.868 239969 DEBUG nova.network.os_vif_util [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.868 239969 DEBUG nova.network.os_vif_util [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=307ae46b-ff90-469d-8239-59ac4ce405cd,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap307ae46b-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.869 239969 DEBUG nova.objects.instance [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 02c17ea9-0fbb-4beb-bc93-b0339e8a1048 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.884 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <uuid>02c17ea9-0fbb-4beb-bc93-b0339e8a1048</uuid>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <name>instance-00000091</name>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-78115778</nova:name>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:27:11</nova:creationTime>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <nova:port uuid="307ae46b-ff90-469d-8239-59ac4ce405cd">
Jan 26 16:27:12 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <system>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <entry name="serial">02c17ea9-0fbb-4beb-bc93-b0339e8a1048</entry>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <entry name="uuid">02c17ea9-0fbb-4beb-bc93-b0339e8a1048</entry>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     </system>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <os>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   </os>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <features>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   </features>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk">
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       </source>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk.config">
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       </source>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:27:12 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:36:2b:9c"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <target dev="tap307ae46b-ff"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048/console.log" append="off"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <video>
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     </video>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:27:12 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:27:12 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:27:12 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:27:12 compute-0 nova_compute[239965]: </domain>
Jan 26 16:27:12 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.885 239969 DEBUG nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Preparing to wait for external event network-vif-plugged-307ae46b-ff90-469d-8239-59ac4ce405cd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.885 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.885 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.886 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.886 239969 DEBUG nova.virt.libvirt.vif [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:27:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-78115778',display_name='tempest-TestNetworkBasicOps-server-78115778',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-78115778',id=145,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBwBHpp1Vs9MWQPeeeqakNnwtGS48PwvSYJeNUykaXxeW214KrGoXbpYLnywNCCG/bKkzO9tfFdp4KnfWvJ367nCGMOMOcf1Bi3W6elO3tG7Lx1njo9pJYHabyFHcW7Hlg==',key_name='tempest-TestNetworkBasicOps-1084723917',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-0xfa470c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:27:07Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=02c17ea9-0fbb-4beb-bc93-b0339e8a1048,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.887 239969 DEBUG nova.network.os_vif_util [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.887 239969 DEBUG nova.network.os_vif_util [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:36:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=307ae46b-ff90-469d-8239-59ac4ce405cd,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap307ae46b-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.887 239969 DEBUG os_vif [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=307ae46b-ff90-469d-8239-59ac4ce405cd,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap307ae46b-ff') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.888 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.888 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.889 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.892 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap307ae46b-ff, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.892 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap307ae46b-ff, col_values=(('external_ids', {'iface-id': '307ae46b-ff90-469d-8239-59ac4ce405cd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:36:2b:9c', 'vm-uuid': '02c17ea9-0fbb-4beb-bc93-b0339e8a1048'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.894 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:12 compute-0 NetworkManager[48954]: <info>  [1769444832.8957] manager: (tap307ae46b-ff): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/622)
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.902 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.903 239969 INFO os_vif [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:36:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=307ae46b-ff90-469d-8239-59ac4ce405cd,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap307ae46b-ff')
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.948 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.948 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.949 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:36:2b:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.949 239969 INFO nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Using config drive
Jan 26 16:27:12 compute-0 nova_compute[239965]: 2026-01-26 16:27:12.973 239969 DEBUG nova.storage.rbd_utils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:13 compute-0 ceph-mon[75140]: pgmap v2299: 305 pgs: 305 active+clean; 497 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.8 MiB/s wr, 237 op/s
Jan 26 16:27:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4145224261' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.181 239969 DEBUG nova.network.neutron [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Updating instance_info_cache with network_info: [{"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.201 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Releasing lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.202 239969 DEBUG nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Instance network_info: |[{"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.202 239969 DEBUG oslo_concurrency.lockutils [req-36cf0074-34de-4958-9356-75b7287c4268 req-73c3b19b-8307-431b-89ab-7c50089c2d38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.202 239969 DEBUG nova.network.neutron [req-36cf0074-34de-4958-9356-75b7287c4268 req-73c3b19b-8307-431b-89ab-7c50089c2d38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Refreshing network info cache for port dccf97d4-5209-4347-9f07-83ba148d3126 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.205 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Start _get_guest_xml network_info=[{"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.211 239969 WARNING nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.215 239969 DEBUG nova.virt.libvirt.host [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.216 239969 DEBUG nova.virt.libvirt.host [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.230 239969 DEBUG nova.virt.libvirt.host [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.232 239969 DEBUG nova.virt.libvirt.host [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.233 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.233 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.235 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.235 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.236 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.236 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.236 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.237 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.237 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.238 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.239 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.239 239969 DEBUG nova.virt.hardware [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:27:13 compute-0 sshd-session[366512]: Connection closed by authenticating user root 209.38.206.249 port 49936 [preauth]
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.252 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.392 239969 INFO nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Creating config drive at /var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048/disk.config
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.397 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0wk3lji execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.552 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpa0wk3lji" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.579 239969 DEBUG nova.storage.rbd_utils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.583 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048/disk.config 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.754 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:27:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923558071' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.836 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.864 239969 DEBUG nova.storage.rbd_utils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.867 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.992 239969 DEBUG oslo_concurrency.processutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048/disk.config 02c17ea9-0fbb-4beb-bc93-b0339e8a1048_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.409s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:13 compute-0 nova_compute[239965]: 2026-01-26 16:27:13.993 239969 INFO nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Deleting local config drive /var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048/disk.config because it was imported into RBD.
Jan 26 16:27:14 compute-0 kernel: tap307ae46b-ff: entered promiscuous mode
Jan 26 16:27:14 compute-0 NetworkManager[48954]: <info>  [1769444834.0444] manager: (tap307ae46b-ff): new Tun device (/org/freedesktop/NetworkManager/Devices/623)
Jan 26 16:27:14 compute-0 ovn_controller[146046]: 2026-01-26T16:27:14Z|01527|binding|INFO|Claiming lport 307ae46b-ff90-469d-8239-59ac4ce405cd for this chassis.
Jan 26 16:27:14 compute-0 ovn_controller[146046]: 2026-01-26T16:27:14Z|01528|binding|INFO|307ae46b-ff90-469d-8239-59ac4ce405cd: Claiming fa:16:3e:36:2b:9c 10.100.0.10
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.058 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.062 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:2b:9c 10.100.0.10'], port_security=['fa:16:3e:36:2b:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '02c17ea9-0fbb-4beb-bc93-b0339e8a1048', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '04ba1d55-304e-4a2d-9d0e-1ecb801b0b55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=975e0fb6-2099-45e2-83bb-35528f0f0ffd, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=307ae46b-ff90-469d-8239-59ac4ce405cd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.064 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 307ae46b-ff90-469d-8239-59ac4ce405cd in datapath 6a673593-4cc1-4b91-8d9d-33cfb52b459e bound to our chassis
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.066 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a673593-4cc1-4b91-8d9d-33cfb52b459e
Jan 26 16:27:14 compute-0 ovn_controller[146046]: 2026-01-26T16:27:14Z|01529|binding|INFO|Setting lport 307ae46b-ff90-469d-8239-59ac4ce405cd ovn-installed in OVS
Jan 26 16:27:14 compute-0 ovn_controller[146046]: 2026-01-26T16:27:14Z|01530|binding|INFO|Setting lport 307ae46b-ff90-469d-8239-59ac4ce405cd up in Southbound
Jan 26 16:27:14 compute-0 systemd-machined[208061]: New machine qemu-175-instance-00000091.
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.087 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[265f74dc-f8a0-449d-9965-ed7f8b2ba35b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:14 compute-0 systemd[1]: Started Virtual Machine qemu-175-instance-00000091.
Jan 26 16:27:14 compute-0 systemd-udevd[366652]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.132 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2e49d85d-8835-43ab-9d92-638b5ad04e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.136 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2ecea2dc-30b1-407e-bec1-769bb70c685c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:14 compute-0 NetworkManager[48954]: <info>  [1769444834.1436] device (tap307ae46b-ff): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:27:14 compute-0 NetworkManager[48954]: <info>  [1769444834.1446] device (tap307ae46b-ff): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.172 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f3e0e0f4-2525-4e04-a1a3-1ea91312534e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1923558071' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.192 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c9c5513d-2449-4904-9051-abdbc7c513ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a673593-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:cd:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 433], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630323, 'reachable_time': 22042, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366662, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.211 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dca009ad-a483-4802-a884-85d7662d49bf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a673593-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630336, 'tstamp': 630336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366664, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a673593-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630341, 'tstamp': 630341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366664, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.213 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a673593-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.214 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.215 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.216 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a673593-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.216 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.216 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a673593-40, col_values=(('external_ids', {'iface-id': '891fbaea-41fa-4f46-a25a-e1bcac6983ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:14.217 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:27:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1195444512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.434 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.436 239969 DEBUG nova.virt.libvirt.vif [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:27:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-468921711',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-468921711',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=146,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNmPYHlpHuZpOeZCl7BHQ216cd8u50ZxFCXW833oHGB4gBbAp9eFvd6P3qjkcGf53JTK0QFiWuKcgJqP7NdWwDFlJ24rb8FRQzo1dBCyWpi8odGazdOwXvkLXo0bhtlZwA==',key_name='tempest-TestSecurityGroupsBasicOps-1082581796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-bcmkbnyh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:27:09Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=b4e64366-a873-483b-a2b9-3d3afc79f7d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.436 239969 DEBUG nova.network.os_vif_util [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.437 239969 DEBUG nova.network.os_vif_util [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:66:73,bridge_name='br-int',has_traffic_filtering=True,id=dccf97d4-5209-4347-9f07-83ba148d3126,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdccf97d4-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.438 239969 DEBUG nova.objects.instance [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'pci_devices' on Instance uuid b4e64366-a873-483b-a2b9-3d3afc79f7d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.460 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <uuid>b4e64366-a873-483b-a2b9-3d3afc79f7d4</uuid>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <name>instance-00000092</name>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-468921711</nova:name>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:27:13</nova:creationTime>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <nova:user uuid="ace5f36058684b5782589457d9e15921">tempest-TestSecurityGroupsBasicOps-75910484-project-member</nova:user>
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <nova:project uuid="1f814bd45d164be6827f7ef54ed07f5f">tempest-TestSecurityGroupsBasicOps-75910484</nova:project>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <nova:port uuid="dccf97d4-5209-4347-9f07-83ba148d3126">
Jan 26 16:27:14 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <system>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <entry name="serial">b4e64366-a873-483b-a2b9-3d3afc79f7d4</entry>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <entry name="uuid">b4e64366-a873-483b-a2b9-3d3afc79f7d4</entry>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     </system>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <os>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   </os>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <features>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   </features>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk">
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       </source>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk.config">
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       </source>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:27:14 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:d5:66:73"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <target dev="tapdccf97d4-52"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4/console.log" append="off"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <video>
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     </video>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:27:14 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:27:14 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:27:14 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:27:14 compute-0 nova_compute[239965]: </domain>
Jan 26 16:27:14 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.460 239969 DEBUG nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Preparing to wait for external event network-vif-plugged-dccf97d4-5209-4347-9f07-83ba148d3126 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.460 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.461 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.461 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.461 239969 DEBUG nova.virt.libvirt.vif [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:27:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-468921711',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-468921711',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=146,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNmPYHlpHuZpOeZCl7BHQ216cd8u50ZxFCXW833oHGB4gBbAp9eFvd6P3qjkcGf53JTK0QFiWuKcgJqP7NdWwDFlJ24rb8FRQzo1dBCyWpi8odGazdOwXvkLXo0bhtlZwA==',key_name='tempest-TestSecurityGroupsBasicOps-1082581796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-bcmkbnyh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:27:09Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=b4e64366-a873-483b-a2b9-3d3afc79f7d4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.462 239969 DEBUG nova.network.os_vif_util [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.462 239969 DEBUG nova.network.os_vif_util [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d5:66:73,bridge_name='br-int',has_traffic_filtering=True,id=dccf97d4-5209-4347-9f07-83ba148d3126,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdccf97d4-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.462 239969 DEBUG os_vif [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:66:73,bridge_name='br-int',has_traffic_filtering=True,id=dccf97d4-5209-4347-9f07-83ba148d3126,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdccf97d4-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.463 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.463 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.463 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.465 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.465 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdccf97d4-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.466 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdccf97d4-52, col_values=(('external_ids', {'iface-id': 'dccf97d4-5209-4347-9f07-83ba148d3126', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d5:66:73', 'vm-uuid': 'b4e64366-a873-483b-a2b9-3d3afc79f7d4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.467 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:14 compute-0 NetworkManager[48954]: <info>  [1769444834.4682] manager: (tapdccf97d4-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/624)
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.469 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.479 239969 INFO os_vif [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d5:66:73,bridge_name='br-int',has_traffic_filtering=True,id=dccf97d4-5209-4347-9f07-83ba148d3126,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdccf97d4-52')
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.535 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.536 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.536 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No VIF found with MAC fa:16:3e:d5:66:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.536 239969 INFO nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Using config drive
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.558 239969 DEBUG nova.storage.rbd_utils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 497 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.8 MiB/s wr, 234 op/s
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.699 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444834.699351, 02c17ea9-0fbb-4beb-bc93-b0339e8a1048 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.700 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] VM Started (Lifecycle Event)
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.722 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.726 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444834.6994703, 02c17ea9-0fbb-4beb-bc93-b0339e8a1048 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.727 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] VM Paused (Lifecycle Event)
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.749 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.752 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:27:14 compute-0 nova_compute[239965]: 2026-01-26 16:27:14.776 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.037 239969 INFO nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Creating config drive at /var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4/disk.config
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.047 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebx_qqh1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1195444512' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:15 compute-0 ceph-mon[75140]: pgmap v2300: 305 pgs: 305 active+clean; 497 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 7.8 MiB/s wr, 234 op/s
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.215 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpebx_qqh1" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.260 239969 DEBUG nova.storage.rbd_utils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.267 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4/disk.config b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.328 239969 DEBUG nova.compute.manager [req-0cc4527f-97c8-4c29-83c1-7db186b34e21 req-6f05ac3c-ad62-469e-ad58-b69079c34794 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received event network-changed-fb5c2eef-7a9f-43e2-bf77-34753400c407 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.329 239969 DEBUG nova.compute.manager [req-0cc4527f-97c8-4c29-83c1-7db186b34e21 req-6f05ac3c-ad62-469e-ad58-b69079c34794 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Refreshing instance network info cache due to event network-changed-fb5c2eef-7a9f-43e2-bf77-34753400c407. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.330 239969 DEBUG oslo_concurrency.lockutils [req-0cc4527f-97c8-4c29-83c1-7db186b34e21 req-6f05ac3c-ad62-469e-ad58-b69079c34794 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.330 239969 DEBUG oslo_concurrency.lockutils [req-0cc4527f-97c8-4c29-83c1-7db186b34e21 req-6f05ac3c-ad62-469e-ad58-b69079c34794 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.330 239969 DEBUG nova.network.neutron [req-0cc4527f-97c8-4c29-83c1-7db186b34e21 req-6f05ac3c-ad62-469e-ad58-b69079c34794 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Refreshing network info cache for port fb5c2eef-7a9f-43e2-bf77-34753400c407 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.434 239969 DEBUG oslo_concurrency.processutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4/disk.config b4e64366-a873-483b-a2b9-3d3afc79f7d4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.435 239969 INFO nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Deleting local config drive /var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4/disk.config because it was imported into RBD.
Jan 26 16:27:15 compute-0 kernel: tapdccf97d4-52: entered promiscuous mode
Jan 26 16:27:15 compute-0 NetworkManager[48954]: <info>  [1769444835.4968] manager: (tapdccf97d4-52): new Tun device (/org/freedesktop/NetworkManager/Devices/625)
Jan 26 16:27:15 compute-0 ovn_controller[146046]: 2026-01-26T16:27:15Z|01531|binding|INFO|Claiming lport dccf97d4-5209-4347-9f07-83ba148d3126 for this chassis.
Jan 26 16:27:15 compute-0 ovn_controller[146046]: 2026-01-26T16:27:15Z|01532|binding|INFO|dccf97d4-5209-4347-9f07-83ba148d3126: Claiming fa:16:3e:d5:66:73 10.100.0.6
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.500 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.507 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:66:73 10.100.0.6'], port_security=['fa:16:3e:d5:66:73 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4e64366-a873-483b-a2b9-3d3afc79f7d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ca05add-5079-41f1-84cc-371e0b63d227 94740064-d28f-4a3e-bee4-e8145220237e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7416312-6ee5-4c60-a0ef-0cc203c3bf4f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=dccf97d4-5209-4347-9f07-83ba148d3126) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.508 156105 INFO neutron.agent.ovn.metadata.agent [-] Port dccf97d4-5209-4347-9f07-83ba148d3126 in datapath fed7ea2b-3239-4a92-a662-8de7d6a5b54f bound to our chassis
Jan 26 16:27:15 compute-0 NetworkManager[48954]: <info>  [1769444835.5099] device (tapdccf97d4-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:27:15 compute-0 NetworkManager[48954]: <info>  [1769444835.5109] device (tapdccf97d4-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.511 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fed7ea2b-3239-4a92-a662-8de7d6a5b54f
Jan 26 16:27:15 compute-0 ovn_controller[146046]: 2026-01-26T16:27:15Z|01533|binding|INFO|Setting lport dccf97d4-5209-4347-9f07-83ba148d3126 ovn-installed in OVS
Jan 26 16:27:15 compute-0 ovn_controller[146046]: 2026-01-26T16:27:15Z|01534|binding|INFO|Setting lport dccf97d4-5209-4347-9f07-83ba148d3126 up in Southbound
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.524 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.527 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[906a02e5-a608-485c-9eeb-b1fe7ade2e08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.528 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfed7ea2b-31 in ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.530 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfed7ea2b-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.530 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[efa7234c-1679-497e-9091-695c5d23f848]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.532 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[da340245-abe6-4719-b515-238128c360b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 systemd-machined[208061]: New machine qemu-176-instance-00000092.
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.544 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[fe303658-c691-48a8-9ba4-7d788adab415]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 systemd[1]: Started Virtual Machine qemu-176-instance-00000092.
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.559 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[70882b49-4412-451b-8704-450afba8cc72]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.594 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5a3c890b-26f2-4eb4-88c5-6203706ce496]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.603 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d19939-9417-4584-bde5-877d54e67f29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 NetworkManager[48954]: <info>  [1769444835.6046] manager: (tapfed7ea2b-30): new Veth device (/org/freedesktop/NetworkManager/Devices/626)
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.637 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a5019497-327f-491e-9eed-0cde89a43aef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.640 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[41fe9c06-c785-4c47-987c-99a5ba31a64d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 NetworkManager[48954]: <info>  [1769444835.6618] device (tapfed7ea2b-30): carrier: link connected
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.667 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f72823c5-4cf8-4e1f-9d2d-ab7929124c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.683 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[83ab84ef-af55-4164-9479-bfe770cb57c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfed7ea2b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:39:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 439], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632646, 'reachable_time': 22167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 366813, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.701 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ce2807bc-204c-44bc-938d-0e6b74464d50]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9e:3992'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632646, 'tstamp': 632646}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 366814, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.719 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6ce3b2-a3e6-4a28-9a95-9b0cf4cb3d9a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfed7ea2b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:39:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 439], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632646, 'reachable_time': 22167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 366815, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.755 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ad27aede-1f91-45bb-9f93-5fef4d0b0ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.776 239969 INFO nova.compute.manager [None req-200987e1-48ab-4dd1-9bf9-6c09812439e4 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Get console output
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.782 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.821 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[cb718f87-82bb-4472-8ede-c7b60cfe5b7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.822 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfed7ea2b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.823 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.823 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfed7ea2b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.825 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:15 compute-0 kernel: tapfed7ea2b-30: entered promiscuous mode
Jan 26 16:27:15 compute-0 NetworkManager[48954]: <info>  [1769444835.8259] manager: (tapfed7ea2b-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.827 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.829 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfed7ea2b-30, col_values=(('external_ids', {'iface-id': '1ca00a90-6e42-40f0-a57b-b46691113c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.830 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:15 compute-0 ovn_controller[146046]: 2026-01-26T16:27:15Z|01535|binding|INFO|Releasing lport 1ca00a90-6e42-40f0-a57b-b46691113c4a from this chassis (sb_readonly=0)
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.845 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.846 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fed7ea2b-3239-4a92-a662-8de7d6a5b54f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fed7ea2b-3239-4a92-a662-8de7d6a5b54f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.851 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ad05756b-036b-4626-9095-c7cc3eeeb45d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.851 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-fed7ea2b-3239-4a92-a662-8de7d6a5b54f
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/fed7ea2b-3239-4a92-a662-8de7d6a5b54f.pid.haproxy
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID fed7ea2b-3239-4a92-a662-8de7d6a5b54f
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:27:15 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:15.852 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'env', 'PROCESS_TAG=haproxy-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fed7ea2b-3239-4a92-a662-8de7d6a5b54f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.968 239969 DEBUG nova.network.neutron [req-36cf0074-34de-4958-9356-75b7287c4268 req-73c3b19b-8307-431b-89ab-7c50089c2d38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Updated VIF entry in instance network info cache for port dccf97d4-5209-4347-9f07-83ba148d3126. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.968 239969 DEBUG nova.network.neutron [req-36cf0074-34de-4958-9356-75b7287c4268 req-73c3b19b-8307-431b-89ab-7c50089c2d38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Updating instance_info_cache with network_info: [{"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.985 239969 DEBUG oslo_concurrency.lockutils [req-36cf0074-34de-4958-9356-75b7287c4268 req-73c3b19b-8307-431b-89ab-7c50089c2d38 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.995 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444835.9951775, b4e64366-a873-483b-a2b9-3d3afc79f7d4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:15 compute-0 nova_compute[239965]: 2026-01-26 16:27:15.996 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] VM Started (Lifecycle Event)
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.016 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.020 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444835.9952903, b4e64366-a873-483b-a2b9-3d3afc79f7d4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.021 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] VM Paused (Lifecycle Event)
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.041 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.044 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.068 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.110 239969 DEBUG nova.compute.manager [req-c9ba09a7-6ca3-4b60-8dc8-c0b8b9d3f355 req-b3731d0a-9304-4dd9-9fa4-b210237b7ab5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received event network-vif-plugged-dccf97d4-5209-4347-9f07-83ba148d3126 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.111 239969 DEBUG oslo_concurrency.lockutils [req-c9ba09a7-6ca3-4b60-8dc8-c0b8b9d3f355 req-b3731d0a-9304-4dd9-9fa4-b210237b7ab5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.111 239969 DEBUG oslo_concurrency.lockutils [req-c9ba09a7-6ca3-4b60-8dc8-c0b8b9d3f355 req-b3731d0a-9304-4dd9-9fa4-b210237b7ab5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.111 239969 DEBUG oslo_concurrency.lockutils [req-c9ba09a7-6ca3-4b60-8dc8-c0b8b9d3f355 req-b3731d0a-9304-4dd9-9fa4-b210237b7ab5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.111 239969 DEBUG nova.compute.manager [req-c9ba09a7-6ca3-4b60-8dc8-c0b8b9d3f355 req-b3731d0a-9304-4dd9-9fa4-b210237b7ab5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Processing event network-vif-plugged-dccf97d4-5209-4347-9f07-83ba148d3126 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.112 239969 DEBUG nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.124 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.125 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444836.124372, b4e64366-a873-483b-a2b9-3d3afc79f7d4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.125 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] VM Resumed (Lifecycle Event)
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.139 239969 INFO nova.virt.libvirt.driver [-] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Instance spawned successfully.
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.139 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.163 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.175 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.179 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.180 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.180 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.181 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.181 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.182 239969 DEBUG nova.virt.libvirt.driver [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.220 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.253 239969 INFO nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Took 6.89 seconds to spawn the instance on the hypervisor.
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.254 239969 DEBUG nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.330 239969 INFO nova.compute.manager [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Took 8.02 seconds to build instance.
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.351 239969 DEBUG oslo_concurrency.lockutils [None req-558138ee-6f8a-4ac7-b126-ca55000eccad ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:16 compute-0 podman[366889]: 2026-01-26 16:27:16.351842635 +0000 UTC m=+0.105492434 container create 171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:27:16 compute-0 systemd[1]: Started libpod-conmon-171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960.scope.
Jan 26 16:27:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:27:16 compute-0 podman[366889]: 2026-01-26 16:27:16.320822355 +0000 UTC m=+0.074472174 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:27:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ebe0b011ab1b6817d14a724553ada7ad9613b81f1d6cb7819cefe4302460a96/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:16 compute-0 podman[366889]: 2026-01-26 16:27:16.443743636 +0000 UTC m=+0.197393465 container init 171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:27:16 compute-0 podman[366889]: 2026-01-26 16:27:16.449539348 +0000 UTC m=+0.203189147 container start 171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:27:16 compute-0 neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f[366903]: [NOTICE]   (366907) : New worker (366909) forked
Jan 26 16:27:16 compute-0 neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f[366903]: [NOTICE]   (366907) : Loading success.
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.609 239969 DEBUG nova.network.neutron [req-0cc4527f-97c8-4c29-83c1-7db186b34e21 req-6f05ac3c-ad62-469e-ad58-b69079c34794 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Updated VIF entry in instance network info cache for port fb5c2eef-7a9f-43e2-bf77-34753400c407. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.610 239969 DEBUG nova.network.neutron [req-0cc4527f-97c8-4c29-83c1-7db186b34e21 req-6f05ac3c-ad62-469e-ad58-b69079c34794 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Updating instance_info_cache with network_info: [{"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.639 239969 DEBUG oslo_concurrency.lockutils [req-0cc4527f-97c8-4c29-83c1-7db186b34e21 req-6f05ac3c-ad62-469e-ad58-b69079c34794 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.678 239969 INFO nova.compute.manager [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Rebuilding instance
Jan 26 16:27:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 7.9 MiB/s wr, 265 op/s
Jan 26 16:27:16 compute-0 ceph-mon[75140]: pgmap v2301: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 7.9 MiB/s wr, 265 op/s
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.921 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'trusted_certs' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.936 239969 DEBUG nova.compute.manager [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:16 compute-0 nova_compute[239965]: 2026-01-26 16:27:16.984 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_requests' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.000 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.012 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'resources' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.023 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'migration_context' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.033 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.037 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.347 239969 DEBUG nova.compute.manager [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received event network-vif-plugged-307ae46b-ff90-469d-8239-59ac4ce405cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.348 239969 DEBUG oslo_concurrency.lockutils [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.348 239969 DEBUG oslo_concurrency.lockutils [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.348 239969 DEBUG oslo_concurrency.lockutils [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.348 239969 DEBUG nova.compute.manager [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Processing event network-vif-plugged-307ae46b-ff90-469d-8239-59ac4ce405cd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.349 239969 DEBUG nova.compute.manager [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received event network-vif-plugged-307ae46b-ff90-469d-8239-59ac4ce405cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.349 239969 DEBUG oslo_concurrency.lockutils [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.349 239969 DEBUG oslo_concurrency.lockutils [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.349 239969 DEBUG oslo_concurrency.lockutils [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.349 239969 DEBUG nova.compute.manager [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] No waiting events found dispatching network-vif-plugged-307ae46b-ff90-469d-8239-59ac4ce405cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.349 239969 WARNING nova.compute.manager [req-926d47ed-5722-464c-8798-4db9a404a70b req-6834d580-9ec1-4471-a454-cc36ec8017ef a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received unexpected event network-vif-plugged-307ae46b-ff90-469d-8239-59ac4ce405cd for instance with vm_state building and task_state spawning.
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.350 239969 DEBUG nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.354 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444837.3544042, 02c17ea9-0fbb-4beb-bc93-b0339e8a1048 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.355 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] VM Resumed (Lifecycle Event)
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.358 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.368 239969 INFO nova.virt.libvirt.driver [-] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Instance spawned successfully.
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.369 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.380 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.383 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.392 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.393 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.393 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.393 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.394 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.394 239969 DEBUG nova.virt.libvirt.driver [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.401 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.442 239969 INFO nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Took 10.15 seconds to spawn the instance on the hypervisor.
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.443 239969 DEBUG nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.509 239969 INFO nova.compute.manager [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Took 11.69 seconds to build instance.
Jan 26 16:27:17 compute-0 nova_compute[239965]: 2026-01-26 16:27:17.532 239969 DEBUG oslo_concurrency.lockutils [None req-721026ea-9cdd-4f7a-aeef-50d782423bf2 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:17 compute-0 sshd-session[366918]: Invalid user ftpuser from 209.38.206.249 port 49942
Jan 26 16:27:17 compute-0 sshd-session[366918]: Connection closed by invalid user ftpuser 209.38.206.249 port 49942 [preauth]
Jan 26 16:27:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.3 MiB/s wr, 296 op/s
Jan 26 16:27:18 compute-0 nova_compute[239965]: 2026-01-26 16:27:18.756 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:18 compute-0 ceph-mon[75140]: pgmap v2302: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 7.3 MiB/s wr, 296 op/s
Jan 26 16:27:18 compute-0 nova_compute[239965]: 2026-01-26 16:27:18.952 239969 DEBUG nova.compute.manager [req-a45642bb-4f5d-461a-a75b-060f3c7a3bc5 req-857a855a-84d0-4ec5-8d5f-233e78efc245 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received event network-vif-plugged-dccf97d4-5209-4347-9f07-83ba148d3126 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:18 compute-0 nova_compute[239965]: 2026-01-26 16:27:18.953 239969 DEBUG oslo_concurrency.lockutils [req-a45642bb-4f5d-461a-a75b-060f3c7a3bc5 req-857a855a-84d0-4ec5-8d5f-233e78efc245 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:18 compute-0 nova_compute[239965]: 2026-01-26 16:27:18.953 239969 DEBUG oslo_concurrency.lockutils [req-a45642bb-4f5d-461a-a75b-060f3c7a3bc5 req-857a855a-84d0-4ec5-8d5f-233e78efc245 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:18 compute-0 nova_compute[239965]: 2026-01-26 16:27:18.953 239969 DEBUG oslo_concurrency.lockutils [req-a45642bb-4f5d-461a-a75b-060f3c7a3bc5 req-857a855a-84d0-4ec5-8d5f-233e78efc245 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:18 compute-0 nova_compute[239965]: 2026-01-26 16:27:18.953 239969 DEBUG nova.compute.manager [req-a45642bb-4f5d-461a-a75b-060f3c7a3bc5 req-857a855a-84d0-4ec5-8d5f-233e78efc245 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] No waiting events found dispatching network-vif-plugged-dccf97d4-5209-4347-9f07-83ba148d3126 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:18 compute-0 nova_compute[239965]: 2026-01-26 16:27:18.953 239969 WARNING nova.compute.manager [req-a45642bb-4f5d-461a-a75b-060f3c7a3bc5 req-857a855a-84d0-4ec5-8d5f-233e78efc245 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received unexpected event network-vif-plugged-dccf97d4-5209-4347-9f07-83ba148d3126 for instance with vm_state active and task_state None.
Jan 26 16:27:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:19 compute-0 kernel: tap81751ed2-63 (unregistering): left promiscuous mode
Jan 26 16:27:19 compute-0 NetworkManager[48954]: <info>  [1769444839.3144] device (tap81751ed2-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:27:19 compute-0 nova_compute[239965]: 2026-01-26 16:27:19.331 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:19 compute-0 ovn_controller[146046]: 2026-01-26T16:27:19Z|01536|binding|INFO|Releasing lport 81751ed2-633c-4723-82f3-f652ce04297b from this chassis (sb_readonly=0)
Jan 26 16:27:19 compute-0 ovn_controller[146046]: 2026-01-26T16:27:19Z|01537|binding|INFO|Setting lport 81751ed2-633c-4723-82f3-f652ce04297b down in Southbound
Jan 26 16:27:19 compute-0 ovn_controller[146046]: 2026-01-26T16:27:19Z|01538|binding|INFO|Removing iface tap81751ed2-63 ovn-installed in OVS
Jan 26 16:27:19 compute-0 nova_compute[239965]: 2026-01-26 16:27:19.336 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.342 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:6f:f6 10.100.0.11'], port_security=['fa:16:3e:bb:6f:f6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'af71ccf7-284e-436b-9bb8-5bc6c727d3cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94bff060-de25-4619-bfc7-2522df0a74d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2d8fd4f4-d04c-4ea0-ae86-5f52a7fe7407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e02e3d-e80d-4860-b220-ee88b1eda52c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=81751ed2-633c-4723-82f3-f652ce04297b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.345 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 81751ed2-633c-4723-82f3-f652ce04297b in datapath 94bff060-de25-4619-bfc7-2522df0a74d7 unbound from our chassis
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.347 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94bff060-de25-4619-bfc7-2522df0a74d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.350 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b51c2272-e159-4216-9392-fa24f2c0e5bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.352 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 namespace which is not needed anymore
Jan 26 16:27:19 compute-0 nova_compute[239965]: 2026-01-26 16:27:19.363 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:19 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Jan 26 16:27:19 compute-0 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008f.scope: Consumed 14.063s CPU time.
Jan 26 16:27:19 compute-0 systemd-machined[208061]: Machine qemu-173-instance-0000008f terminated.
Jan 26 16:27:19 compute-0 nova_compute[239965]: 2026-01-26 16:27:19.468 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:19 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[364953]: [NOTICE]   (364957) : haproxy version is 2.8.14-c23fe91
Jan 26 16:27:19 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[364953]: [NOTICE]   (364957) : path to executable is /usr/sbin/haproxy
Jan 26 16:27:19 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[364953]: [WARNING]  (364957) : Exiting Master process...
Jan 26 16:27:19 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[364953]: [ALERT]    (364957) : Current worker (364959) exited with code 143 (Terminated)
Jan 26 16:27:19 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[364953]: [WARNING]  (364957) : All workers exited. Exiting... (0)
Jan 26 16:27:19 compute-0 systemd[1]: libpod-9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715.scope: Deactivated successfully.
Jan 26 16:27:19 compute-0 podman[366943]: 2026-01-26 16:27:19.540380342 +0000 UTC m=+0.062327657 container died 9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 16:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715-userdata-shm.mount: Deactivated successfully.
Jan 26 16:27:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e7bed32c928e09c0ee3bd50b5f6e2a11669d7a467b56081e564f9850338966-merged.mount: Deactivated successfully.
Jan 26 16:27:19 compute-0 podman[366943]: 2026-01-26 16:27:19.602588435 +0000 UTC m=+0.124535730 container cleanup 9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:27:19 compute-0 systemd[1]: libpod-conmon-9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715.scope: Deactivated successfully.
Jan 26 16:27:19 compute-0 podman[366984]: 2026-01-26 16:27:19.681231641 +0000 UTC m=+0.043760992 container remove 9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.687 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a26daf4c-57fd-4f05-8109-30f37341d3ee]: (4, ('Mon Jan 26 04:27:19 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 (9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715)\n9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715\nMon Jan 26 04:27:19 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 (9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715)\n9bf1752ce328c12a0c8c6812ef458df3d89a3c7b5d4d19586c9b64e01c2fe715\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.690 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2631a4db-1dd7-40cf-8c9a-1e8a764b0128]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.692 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94bff060-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:19 compute-0 nova_compute[239965]: 2026-01-26 16:27:19.751 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:19 compute-0 kernel: tap94bff060-d0: left promiscuous mode
Jan 26 16:27:19 compute-0 nova_compute[239965]: 2026-01-26 16:27:19.774 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.781 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[90e9ddc7-5dc2-4954-8e0b-a525e766cbbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.797 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[999c6fbf-9635-48a3-913c-3c3e7eac5528]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.799 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f1b8bdf-1571-41f4-bcde-6289261645cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.820 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e48a7747-0e58-4c90-ad38-6b365464ace1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630489, 'reachable_time': 16248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367000, 'error': None, 'target': 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:19 compute-0 systemd[1]: run-netns-ovnmeta\x2d94bff060\x2dde25\x2d4619\x2dbfc7\x2d2522df0a74d7.mount: Deactivated successfully.
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.822 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:27:19 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:19.822 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[6da78bc7-0916-4377-9889-ca933f25f1f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.058 239969 INFO nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance shutdown successfully after 3 seconds.
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.063 239969 INFO nova.virt.libvirt.driver [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance destroyed successfully.
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.067 239969 INFO nova.virt.libvirt.driver [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance destroyed successfully.
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.067 239969 DEBUG nova.virt.libvirt.vif [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1217079529',display_name='tempest-TestNetworkAdvancedServerOps-server-1217079529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1217079529',id=143,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAHK7vjAqqBoYE5h97a5pskiriPLSJ5A7+bC6dk5uNnjCrImOF19Tu7dz5ff1CkVvUcFUiy33TMxCmyGQ5LYz502POS5lMWxg05L+Asko5H7mYq3E+7RWkgVQirbCTK0/w==',key_name='tempest-TestNetworkAdvancedServerOps-331164643',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:26:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-v3jbl2f2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:27:16Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=af71ccf7-284e-436b-9bb8-5bc6c727d3cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, 
"meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.068 239969 DEBUG nova.network.os_vif_util [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.068 239969 DEBUG nova.network.os_vif_util [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.068 239969 DEBUG os_vif [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.070 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.070 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81751ed2-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.071 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:20 compute-0 nova_compute[239965]: 2026-01-26 16:27:20.074 239969 INFO os_vif [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63')
Jan 26 16:27:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.0 MiB/s wr, 243 op/s
Jan 26 16:27:20 compute-0 ceph-mon[75140]: pgmap v2303: 305 pgs: 305 active+clean; 498 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.0 MiB/s wr, 243 op/s
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.149 239969 INFO nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Deleting instance files /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc_del
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.150 239969 INFO nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Deletion of /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc_del complete
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.225 239969 DEBUG nova.compute.manager [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-vif-unplugged-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.225 239969 DEBUG oslo_concurrency.lockutils [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.226 239969 DEBUG oslo_concurrency.lockutils [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.226 239969 DEBUG oslo_concurrency.lockutils [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.226 239969 DEBUG nova.compute.manager [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] No waiting events found dispatching network-vif-unplugged-81751ed2-633c-4723-82f3-f652ce04297b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.226 239969 WARNING nova.compute.manager [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received unexpected event network-vif-unplugged-81751ed2-633c-4723-82f3-f652ce04297b for instance with vm_state active and task_state rebuilding.
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.227 239969 DEBUG nova.compute.manager [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.227 239969 DEBUG oslo_concurrency.lockutils [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.227 239969 DEBUG oslo_concurrency.lockutils [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.227 239969 DEBUG oslo_concurrency.lockutils [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.227 239969 DEBUG nova.compute.manager [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] No waiting events found dispatching network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.228 239969 WARNING nova.compute.manager [req-a156265c-f799-4614-a0b3-59b222012b29 req-12c4cab3-8881-4d1b-b386-5c94f126ccc1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received unexpected event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b for instance with vm_state active and task_state rebuilding.
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.278 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.279 239969 INFO nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Creating image(s)
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.308 239969 DEBUG nova.storage.rbd_utils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.333 239969 DEBUG nova.storage.rbd_utils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.371 239969 DEBUG nova.storage.rbd_utils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.377 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.500 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a --force-share --output=json" returned: 0 in 0.123s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.503 239969 DEBUG oslo_concurrency.lockutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "43c516c9cae415c0ab334521ada79f427fb2809a" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.505 239969 DEBUG oslo_concurrency.lockutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.506 239969 DEBUG oslo_concurrency.lockutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "43c516c9cae415c0ab334521ada79f427fb2809a" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.546 239969 DEBUG nova.storage.rbd_utils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:21 compute-0 nova_compute[239965]: 2026-01-26 16:27:21.551 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.046 239969 DEBUG nova.compute.manager [req-56a74362-94b4-4b4e-87c5-09fe5ce18ae4 req-2bc4d62d-dcda-409a-ac81-088efd89645f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received event network-changed-dccf97d4-5209-4347-9f07-83ba148d3126 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.047 239969 DEBUG nova.compute.manager [req-56a74362-94b4-4b4e-87c5-09fe5ce18ae4 req-2bc4d62d-dcda-409a-ac81-088efd89645f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Refreshing instance network info cache due to event network-changed-dccf97d4-5209-4347-9f07-83ba148d3126. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.048 239969 DEBUG oslo_concurrency.lockutils [req-56a74362-94b4-4b4e-87c5-09fe5ce18ae4 req-2bc4d62d-dcda-409a-ac81-088efd89645f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.048 239969 DEBUG oslo_concurrency.lockutils [req-56a74362-94b4-4b4e-87c5-09fe5ce18ae4 req-2bc4d62d-dcda-409a-ac81-088efd89645f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.048 239969 DEBUG nova.network.neutron [req-56a74362-94b4-4b4e-87c5-09fe5ce18ae4 req-2bc4d62d-dcda-409a-ac81-088efd89645f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Refreshing network info cache for port dccf97d4-5209-4347-9f07-83ba148d3126 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.057 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.143 239969 DEBUG nova.storage.rbd_utils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] resizing rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:27:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 457 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.7 MiB/s wr, 369 op/s
Jan 26 16:27:22 compute-0 ceph-mon[75140]: pgmap v2304: 305 pgs: 305 active+clean; 457 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 6.7 MiB/s wr, 369 op/s
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.889 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.889 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Ensure instance console log exists: /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.890 239969 DEBUG oslo_concurrency.lockutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.890 239969 DEBUG oslo_concurrency.lockutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.891 239969 DEBUG oslo_concurrency.lockutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.893 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Start _get_guest_xml network_info=[{"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.897 239969 WARNING nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.903 239969 DEBUG nova.virt.libvirt.host [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.904 239969 DEBUG nova.virt.libvirt.host [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.909 239969 DEBUG nova.virt.libvirt.host [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.910 239969 DEBUG nova.virt.libvirt.host [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.911 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.911 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:43Z,direct_url=<?>,disk_format='qcow2',id=24ff5bfa-2bf0-4d76-ba05-fc857cd2108f,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:44Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.912 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.918 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.921 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.921 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.922 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.922 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.922 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.923 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.924 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.924 239969 DEBUG nova.virt.hardware [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.924 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'vcpu_model' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:22 compute-0 nova_compute[239965]: 2026-01-26 16:27:22.954 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.335 239969 DEBUG nova.compute.manager [req-79d7baba-a0fb-498e-8515-bbc5ff5ddc4f req-52be9a2f-f65c-4fdd-8d27-b4b4f03fb20b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received event network-changed-307ae46b-ff90-469d-8239-59ac4ce405cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.336 239969 DEBUG nova.compute.manager [req-79d7baba-a0fb-498e-8515-bbc5ff5ddc4f req-52be9a2f-f65c-4fdd-8d27-b4b4f03fb20b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Refreshing instance network info cache due to event network-changed-307ae46b-ff90-469d-8239-59ac4ce405cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.336 239969 DEBUG oslo_concurrency.lockutils [req-79d7baba-a0fb-498e-8515-bbc5ff5ddc4f req-52be9a2f-f65c-4fdd-8d27-b4b4f03fb20b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.337 239969 DEBUG oslo_concurrency.lockutils [req-79d7baba-a0fb-498e-8515-bbc5ff5ddc4f req-52be9a2f-f65c-4fdd-8d27-b4b4f03fb20b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.337 239969 DEBUG nova.network.neutron [req-79d7baba-a0fb-498e-8515-bbc5ff5ddc4f req-52be9a2f-f65c-4fdd-8d27-b4b4f03fb20b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Refreshing network info cache for port 307ae46b-ff90-469d-8239-59ac4ce405cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.388 239969 DEBUG nova.network.neutron [req-56a74362-94b4-4b4e-87c5-09fe5ce18ae4 req-2bc4d62d-dcda-409a-ac81-088efd89645f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Updated VIF entry in instance network info cache for port dccf97d4-5209-4347-9f07-83ba148d3126. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.389 239969 DEBUG nova.network.neutron [req-56a74362-94b4-4b4e-87c5-09fe5ce18ae4 req-2bc4d62d-dcda-409a-ac81-088efd89645f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Updating instance_info_cache with network_info: [{"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.408 239969 DEBUG oslo_concurrency.lockutils [req-56a74362-94b4-4b4e-87c5-09fe5ce18ae4 req-2bc4d62d-dcda-409a-ac81-088efd89645f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:27:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/462846498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.576 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.622s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.607 239969 DEBUG nova.storage.rbd_utils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.618 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:23 compute-0 nova_compute[239965]: 2026-01-26 16:27:23.758 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/462846498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:24 compute-0 ovn_controller[146046]: 2026-01-26T16:27:24Z|00180|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.12
Jan 26 16:27:24 compute-0 ovn_controller[146046]: 2026-01-26T16:27:24Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ff:c3:d7 10.100.0.12
Jan 26 16:27:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:27:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2121559933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.225 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.227 239969 DEBUG nova.virt.libvirt.vif [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1217079529',display_name='tempest-TestNetworkAdvancedServerOps-server-1217079529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1217079529',id=143,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAHK7vjAqqBoYE5h97a5pskiriPLSJ5A7+bC6dk5uNnjCrImOF19Tu7dz5ff1CkVvUcFUiy33TMxCmyGQ5LYz502POS5lMWxg05L+Asko5H7mYq3E+7RWkgVQirbCTK0/w==',key_name='tempest-TestNetworkAdvancedServerOps-331164643',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:26:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-v3jbl2f2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:27:21Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=af71ccf7-284e-436b-9bb8-5bc6c727d3cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.227 239969 DEBUG nova.network.os_vif_util [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.228 239969 DEBUG nova.network.os_vif_util [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.231 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <uuid>af71ccf7-284e-436b-9bb8-5bc6c727d3cc</uuid>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <name>instance-0000008f</name>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1217079529</nova:name>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:27:22</nova:creationTime>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <nova:user uuid="59ae1c17a260470c91f50965ddd53a9e">tempest-TestNetworkAdvancedServerOps-842475489-project-member</nova:user>
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <nova:project uuid="2a7615c0db4e4f38aec30c7c723c3c3a">tempest-TestNetworkAdvancedServerOps-842475489</nova:project>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="24ff5bfa-2bf0-4d76-ba05-fc857cd2108f"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <nova:port uuid="81751ed2-633c-4723-82f3-f652ce04297b">
Jan 26 16:27:24 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <system>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <entry name="serial">af71ccf7-284e-436b-9bb8-5bc6c727d3cc</entry>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <entry name="uuid">af71ccf7-284e-436b-9bb8-5bc6c727d3cc</entry>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     </system>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <os>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   </os>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <features>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   </features>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk">
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       </source>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config">
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       </source>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:27:24 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:bb:6f:f6"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <target dev="tap81751ed2-63"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/console.log" append="off"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <video>
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     </video>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:27:24 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:27:24 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:27:24 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:27:24 compute-0 nova_compute[239965]: </domain>
Jan 26 16:27:24 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.233 239969 DEBUG nova.virt.libvirt.vif [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1217079529',display_name='tempest-TestNetworkAdvancedServerOps-server-1217079529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1217079529',id=143,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAHK7vjAqqBoYE5h97a5pskiriPLSJ5A7+bC6dk5uNnjCrImOF19Tu7dz5ff1CkVvUcFUiy33TMxCmyGQ5LYz502POS5lMWxg05L+Asko5H7mYq3E+7RWkgVQirbCTK0/w==',key_name='tempest-TestNetworkAdvancedServerOps-331164643',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:26:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-v3jbl2f2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:27:21Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=af71ccf7-284e-436b-9bb8-5bc6c727d3cc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.234 239969 DEBUG nova.network.os_vif_util [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.234 239969 DEBUG nova.network.os_vif_util [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.235 239969 DEBUG os_vif [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.236 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.236 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.237 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.239 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81751ed2-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.239 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81751ed2-63, col_values=(('external_ids', {'iface-id': '81751ed2-633c-4723-82f3-f652ce04297b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:bb:6f:f6', 'vm-uuid': 'af71ccf7-284e-436b-9bb8-5bc6c727d3cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.284 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:24 compute-0 NetworkManager[48954]: <info>  [1769444844.2853] manager: (tap81751ed2-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.287 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.289 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.291 239969 INFO os_vif [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63')
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.340 239969 DEBUG nova.network.neutron [req-79d7baba-a0fb-498e-8515-bbc5ff5ddc4f req-52be9a2f-f65c-4fdd-8d27-b4b4f03fb20b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Updated VIF entry in instance network info cache for port 307ae46b-ff90-469d-8239-59ac4ce405cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.341 239969 DEBUG nova.network.neutron [req-79d7baba-a0fb-498e-8515-bbc5ff5ddc4f req-52be9a2f-f65c-4fdd-8d27-b4b4f03fb20b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Updating instance_info_cache with network_info: [{"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.384 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.386 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.386 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No VIF found with MAC fa:16:3e:bb:6f:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.387 239969 INFO nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Using config drive
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.412 239969 DEBUG nova.storage.rbd_utils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.419 239969 DEBUG oslo_concurrency.lockutils [req-79d7baba-a0fb-498e-8515-bbc5ff5ddc4f req-52be9a2f-f65c-4fdd-8d27-b4b4f03fb20b a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.452 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'ec2_ids' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.480 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'keypairs' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 457 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.8 MiB/s wr, 218 op/s
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.986 239969 INFO nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Creating config drive at /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config
Jan 26 16:27:24 compute-0 nova_compute[239965]: 2026-01-26 16:27:24.990 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjujjdew8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.154 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjujjdew8" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.250 239969 DEBUG nova.storage.rbd_utils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.256 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2121559933' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:27:25 compute-0 ceph-mon[75140]: pgmap v2305: 305 pgs: 305 active+clean; 457 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.8 MiB/s wr, 218 op/s
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.584 239969 DEBUG oslo_concurrency.processutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config af71ccf7-284e-436b-9bb8-5bc6c727d3cc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.586 239969 INFO nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Deleting local config drive /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc/disk.config because it was imported into RBD.
Jan 26 16:27:25 compute-0 kernel: tap81751ed2-63: entered promiscuous mode
Jan 26 16:27:25 compute-0 NetworkManager[48954]: <info>  [1769444845.6578] manager: (tap81751ed2-63): new Tun device (/org/freedesktop/NetworkManager/Devices/629)
Jan 26 16:27:25 compute-0 ovn_controller[146046]: 2026-01-26T16:27:25Z|01539|binding|INFO|Claiming lport 81751ed2-633c-4723-82f3-f652ce04297b for this chassis.
Jan 26 16:27:25 compute-0 ovn_controller[146046]: 2026-01-26T16:27:25Z|01540|binding|INFO|81751ed2-633c-4723-82f3-f652ce04297b: Claiming fa:16:3e:bb:6f:f6 10.100.0.11
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.668 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.677 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:6f:f6 10.100.0.11'], port_security=['fa:16:3e:bb:6f:f6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'af71ccf7-284e-436b-9bb8-5bc6c727d3cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94bff060-de25-4619-bfc7-2522df0a74d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '2d8fd4f4-d04c-4ea0-ae86-5f52a7fe7407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.219'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e02e3d-e80d-4860-b220-ee88b1eda52c, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=81751ed2-633c-4723-82f3-f652ce04297b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.681 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 81751ed2-633c-4723-82f3-f652ce04297b in datapath 94bff060-de25-4619-bfc7-2522df0a74d7 bound to our chassis
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.683 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:25 compute-0 ovn_controller[146046]: 2026-01-26T16:27:25Z|01541|binding|INFO|Setting lport 81751ed2-633c-4723-82f3-f652ce04297b up in Southbound
Jan 26 16:27:25 compute-0 ovn_controller[146046]: 2026-01-26T16:27:25Z|01542|binding|INFO|Setting lport 81751ed2-633c-4723-82f3-f652ce04297b ovn-installed in OVS
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.689 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 94bff060-de25-4619-bfc7-2522df0a74d7
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.705 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[70be46ba-1d57-45e9-a85c-ed538069365c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.706 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap94bff060-d1 in ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.709 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap94bff060-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.709 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[841867ce-d6c4-41cd-a592-75632e022c6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.710 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[75de3af5-afd8-4343-83fd-d19d83a9944f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 systemd-udevd[367324]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:27:25 compute-0 systemd-machined[208061]: New machine qemu-177-instance-0000008f.
Jan 26 16:27:25 compute-0 NetworkManager[48954]: <info>  [1769444845.7246] device (tap81751ed2-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:27:25 compute-0 NetworkManager[48954]: <info>  [1769444845.7252] device (tap81751ed2-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:27:25 compute-0 systemd[1]: Started Virtual Machine qemu-177-instance-0000008f.
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.734 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[10e4e88f-1865-4502-9395-fd28679407c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.766 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[888ca2eb-6b26-478a-aa11-b1a5d2d35bfc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.797 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[10232b56-14cf-4995-8b4c-7d277f65f006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 systemd-udevd[367327]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:27:25 compute-0 NetworkManager[48954]: <info>  [1769444845.8183] manager: (tap94bff060-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/630)
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.820 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2321e85e-07fe-405e-9dea-ee1dea2bdd11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.856 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[acc3ad45-4e65-4cb0-8fa2-8af9dbe287ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.859 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b65279e5-48a4-4131-9f47-450a3174f4ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 NetworkManager[48954]: <info>  [1769444845.8827] device (tap94bff060-d0): carrier: link connected
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.890 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[810fdb89-1178-471c-9558-76d22ba23905]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.910 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f167addc-6331-46bf-87f5-4b821cc514b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94bff060-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:a0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 442], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633668, 'reachable_time': 29635, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367356, 'error': None, 'target': 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.931 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a449f29c-1890-4736-8d53-e757fe63caac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:a08b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 633668, 'tstamp': 633668}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367357, 'error': None, 'target': 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.950 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c23de3-ae60-4e42-b878-80d4637efa9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap94bff060-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:a0:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 442], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633668, 'reachable_time': 29635, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 367358, 'error': None, 'target': 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:25.986 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[710ca103-cbf1-4f7f-b7c0-c5084b943987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.994 239969 DEBUG nova.compute.manager [req-b2238729-f91f-4163-a8f7-a233ccbcf5e8 req-9ea69a36-d4e6-489e-8e22-95b50259d87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.995 239969 DEBUG oslo_concurrency.lockutils [req-b2238729-f91f-4163-a8f7-a233ccbcf5e8 req-9ea69a36-d4e6-489e-8e22-95b50259d87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.995 239969 DEBUG oslo_concurrency.lockutils [req-b2238729-f91f-4163-a8f7-a233ccbcf5e8 req-9ea69a36-d4e6-489e-8e22-95b50259d87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.995 239969 DEBUG oslo_concurrency.lockutils [req-b2238729-f91f-4163-a8f7-a233ccbcf5e8 req-9ea69a36-d4e6-489e-8e22-95b50259d87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.995 239969 DEBUG nova.compute.manager [req-b2238729-f91f-4163-a8f7-a233ccbcf5e8 req-9ea69a36-d4e6-489e-8e22-95b50259d87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] No waiting events found dispatching network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:25 compute-0 nova_compute[239965]: 2026-01-26 16:27:25.996 239969 WARNING nova.compute.manager [req-b2238729-f91f-4163-a8f7-a233ccbcf5e8 req-9ea69a36-d4e6-489e-8e22-95b50259d87e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received unexpected event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b for instance with vm_state active and task_state rebuild_spawning.
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:26.065 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4e705f5d-f888-44b9-91da-3978142ff2df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:26.066 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94bff060-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:26.066 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:26.067 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap94bff060-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:26 compute-0 nova_compute[239965]: 2026-01-26 16:27:26.068 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:26 compute-0 NetworkManager[48954]: <info>  [1769444846.0694] manager: (tap94bff060-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/631)
Jan 26 16:27:26 compute-0 kernel: tap94bff060-d0: entered promiscuous mode
Jan 26 16:27:26 compute-0 nova_compute[239965]: 2026-01-26 16:27:26.085 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:26.087 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap94bff060-d0, col_values=(('external_ids', {'iface-id': '2f737b4d-b226-456e-a14d-a02711b5bbea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:26 compute-0 ovn_controller[146046]: 2026-01-26T16:27:26Z|01543|binding|INFO|Releasing lport 2f737b4d-b226-456e-a14d-a02711b5bbea from this chassis (sb_readonly=0)
Jan 26 16:27:26 compute-0 nova_compute[239965]: 2026-01-26 16:27:26.088 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:26 compute-0 nova_compute[239965]: 2026-01-26 16:27:26.119 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:26.122 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/94bff060-de25-4619-bfc7-2522df0a74d7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/94bff060-de25-4619-bfc7-2522df0a74d7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:26.123 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3624ced6-e4cc-43a7-bd00-82e5e9b5ac33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:26.124 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-94bff060-de25-4619-bfc7-2522df0a74d7
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/94bff060-de25-4619-bfc7-2522df0a74d7.pid.haproxy
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 94bff060-de25-4619-bfc7-2522df0a74d7
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:27:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:26.125 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'env', 'PROCESS_TAG=haproxy-94bff060-de25-4619-bfc7-2522df0a74d7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/94bff060-de25-4619-bfc7-2522df0a74d7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:27:26 compute-0 podman[367389]: 2026-01-26 16:27:26.657390867 +0000 UTC m=+0.105633709 container create 2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:27:26 compute-0 podman[367389]: 2026-01-26 16:27:26.579671543 +0000 UTC m=+0.027914415 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:27:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 470 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.1 MiB/s wr, 251 op/s
Jan 26 16:27:26 compute-0 systemd[1]: Started libpod-conmon-2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e.scope.
Jan 26 16:27:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:27:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41d01d4e64f01602f7d135941f890edf00a9c3e406b264475421e5790daf5131/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:27:26 compute-0 podman[367389]: 2026-01-26 16:27:26.788428775 +0000 UTC m=+0.236671637 container init 2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:27:26 compute-0 podman[367389]: 2026-01-26 16:27:26.794059383 +0000 UTC m=+0.242302225 container start 2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:27:26 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[367403]: [NOTICE]   (367407) : New worker (367409) forked
Jan 26 16:27:26 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[367403]: [NOTICE]   (367407) : Loading success.
Jan 26 16:27:26 compute-0 ceph-mon[75140]: pgmap v2306: 305 pgs: 305 active+clean; 470 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.1 MiB/s wr, 251 op/s
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.546 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for af71ccf7-284e-436b-9bb8-5bc6c727d3cc due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.547 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444847.5462198, af71ccf7-284e-436b-9bb8-5bc6c727d3cc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.548 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] VM Resumed (Lifecycle Event)
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.552 239969 DEBUG nova.compute.manager [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.552 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.556 239969 INFO nova.virt.libvirt.driver [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance spawned successfully.
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.557 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.585 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.590 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.598 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.598 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.599 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.600 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.600 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.601 239969 DEBUG nova.virt.libvirt.driver [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.638 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.639 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444847.5485358, af71ccf7-284e-436b-9bb8-5bc6c727d3cc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.639 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] VM Started (Lifecycle Event)
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.679 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.684 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.698 239969 DEBUG nova.compute.manager [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.718 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.767 239969 DEBUG oslo_concurrency.lockutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.767 239969 DEBUG oslo_concurrency.lockutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.768 239969 DEBUG nova.objects.instance [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Jan 26 16:27:27 compute-0 nova_compute[239965]: 2026-01-26 16:27:27.839 239969 DEBUG oslo_concurrency.lockutils [None req-bb0a4c32-309a-4294-859a-6db7f8ea6e70 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.072s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:28 compute-0 nova_compute[239965]: 2026-01-26 16:27:28.112 239969 DEBUG nova.compute.manager [req-963709fa-0ce1-4b5d-8f5b-161d184dca09 req-c7dbacc3-bc8e-4da6-91fb-5d07d7f473bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:28 compute-0 nova_compute[239965]: 2026-01-26 16:27:28.112 239969 DEBUG oslo_concurrency.lockutils [req-963709fa-0ce1-4b5d-8f5b-161d184dca09 req-c7dbacc3-bc8e-4da6-91fb-5d07d7f473bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:28 compute-0 nova_compute[239965]: 2026-01-26 16:27:28.113 239969 DEBUG oslo_concurrency.lockutils [req-963709fa-0ce1-4b5d-8f5b-161d184dca09 req-c7dbacc3-bc8e-4da6-91fb-5d07d7f473bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:28 compute-0 nova_compute[239965]: 2026-01-26 16:27:28.114 239969 DEBUG oslo_concurrency.lockutils [req-963709fa-0ce1-4b5d-8f5b-161d184dca09 req-c7dbacc3-bc8e-4da6-91fb-5d07d7f473bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:28 compute-0 nova_compute[239965]: 2026-01-26 16:27:28.114 239969 DEBUG nova.compute.manager [req-963709fa-0ce1-4b5d-8f5b-161d184dca09 req-c7dbacc3-bc8e-4da6-91fb-5d07d7f473bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] No waiting events found dispatching network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:28 compute-0 nova_compute[239965]: 2026-01-26 16:27:28.114 239969 WARNING nova.compute.manager [req-963709fa-0ce1-4b5d-8f5b-161d184dca09 req-c7dbacc3-bc8e-4da6-91fb-5d07d7f473bd a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received unexpected event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b for instance with vm_state active and task_state None.
Jan 26 16:27:28 compute-0 ovn_controller[146046]: 2026-01-26T16:27:28Z|00182|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.13 does not match offer 10.100.0.12
Jan 26 16:27:28 compute-0 ovn_controller[146046]: 2026-01-26T16:27:28Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ff:c3:d7 10.100.0.12
Jan 26 16:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:27:28
Jan 26 16:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'vms', 'default.rgw.log', '.mgr', 'volumes', 'default.rgw.meta']
Jan 26 16:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:27:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.3 MiB/s wr, 261 op/s
Jan 26 16:27:28 compute-0 nova_compute[239965]: 2026-01-26 16:27:28.760 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:28 compute-0 ceph-mon[75140]: pgmap v2307: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.3 MiB/s wr, 261 op/s
Jan 26 16:27:29 compute-0 ovn_controller[146046]: 2026-01-26T16:27:29Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ff:c3:d7 10.100.0.12
Jan 26 16:27:29 compute-0 ovn_controller[146046]: 2026-01-26T16:27:29Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ff:c3:d7 10.100.0.12
Jan 26 16:27:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:29 compute-0 nova_compute[239965]: 2026-01-26 16:27:29.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 199 op/s
Jan 26 16:27:30 compute-0 ceph-mon[75140]: pgmap v2308: 305 pgs: 305 active+clean; 479 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 199 op/s
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:27:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:27:31 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Jan 26 16:27:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 525 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.3 MiB/s wr, 343 op/s
Jan 26 16:27:33 compute-0 ovn_controller[146046]: 2026-01-26T16:27:33Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:36:2b:9c 10.100.0.10
Jan 26 16:27:33 compute-0 ovn_controller[146046]: 2026-01-26T16:27:33Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:36:2b:9c 10.100.0.10
Jan 26 16:27:33 compute-0 ceph-mon[75140]: pgmap v2309: 305 pgs: 305 active+clean; 525 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.3 MiB/s wr, 343 op/s
Jan 26 16:27:33 compute-0 ovn_controller[146046]: 2026-01-26T16:27:33Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d5:66:73 10.100.0.6
Jan 26 16:27:33 compute-0 ovn_controller[146046]: 2026-01-26T16:27:33Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d5:66:73 10.100.0.6
Jan 26 16:27:33 compute-0 nova_compute[239965]: 2026-01-26 16:27:33.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:34 compute-0 nova_compute[239965]: 2026-01-26 16:27:34.286 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 525 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.5 MiB/s wr, 217 op/s
Jan 26 16:27:34 compute-0 ceph-mon[75140]: pgmap v2310: 305 pgs: 305 active+clean; 525 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 4.5 MiB/s wr, 217 op/s
Jan 26 16:27:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 540 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.8 MiB/s wr, 258 op/s
Jan 26 16:27:36 compute-0 ceph-mon[75140]: pgmap v2311: 305 pgs: 305 active+clean; 540 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.8 MiB/s wr, 258 op/s
Jan 26 16:27:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 548 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.5 MiB/s wr, 232 op/s
Jan 26 16:27:38 compute-0 nova_compute[239965]: 2026-01-26 16:27:38.765 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:38 compute-0 ceph-mon[75140]: pgmap v2312: 305 pgs: 305 active+clean; 548 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 4.5 MiB/s wr, 232 op/s
Jan 26 16:27:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:39 compute-0 nova_compute[239965]: 2026-01-26 16:27:39.288 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 548 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 192 op/s
Jan 26 16:27:40 compute-0 ovn_controller[146046]: 2026-01-26T16:27:40Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:bb:6f:f6 10.100.0.11
Jan 26 16:27:40 compute-0 ovn_controller[146046]: 2026-01-26T16:27:40Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:bb:6f:f6 10.100.0.11
Jan 26 16:27:40 compute-0 ceph-mon[75140]: pgmap v2313: 305 pgs: 305 active+clean; 548 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 192 op/s
Jan 26 16:27:41 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Jan 26 16:27:41 compute-0 podman[367460]: 2026-01-26 16:27:41.406234013 +0000 UTC m=+0.079887458 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:27:41 compute-0 podman[367461]: 2026-01-26 16:27:41.45837537 +0000 UTC m=+0.123741992 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:27:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.5 MiB/s wr, 257 op/s
Jan 26 16:27:42 compute-0 ceph-mon[75140]: pgmap v2314: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 6.5 MiB/s wr, 257 op/s
Jan 26 16:27:43 compute-0 nova_compute[239965]: 2026-01-26 16:27:43.768 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:44 compute-0 nova_compute[239965]: 2026-01-26 16:27:44.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 597 KiB/s rd, 2.5 MiB/s wr, 112 op/s
Jan 26 16:27:44 compute-0 ceph-mon[75140]: pgmap v2315: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 597 KiB/s rd, 2.5 MiB/s wr, 112 op/s
Jan 26 16:27:45 compute-0 nova_compute[239965]: 2026-01-26 16:27:45.962 239969 INFO nova.compute.manager [None req-c5be8099-e6d2-49ca-b181-b8bafc43ea38 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Get console output
Jan 26 16:27:45 compute-0 nova_compute[239965]: 2026-01-26 16:27:45.969 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:27:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 597 KiB/s rd, 2.5 MiB/s wr, 112 op/s
Jan 26 16:27:46 compute-0 ceph-mon[75140]: pgmap v2316: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 597 KiB/s rd, 2.5 MiB/s wr, 112 op/s
Jan 26 16:27:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:46.872 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:27:46 compute-0 nova_compute[239965]: 2026-01-26 16:27:46.873 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:46.873 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.071 239969 DEBUG nova.compute.manager [req-8b528158-cc91-4d2c-b01c-f97cdb1594fd req-14cd1fc6-7a08-4130-affa-f62ed6ee4549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-changed-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.072 239969 DEBUG nova.compute.manager [req-8b528158-cc91-4d2c-b01c-f97cdb1594fd req-14cd1fc6-7a08-4130-affa-f62ed6ee4549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Refreshing instance network info cache due to event network-changed-81751ed2-633c-4723-82f3-f652ce04297b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.072 239969 DEBUG oslo_concurrency.lockutils [req-8b528158-cc91-4d2c-b01c-f97cdb1594fd req-14cd1fc6-7a08-4130-affa-f62ed6ee4549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.072 239969 DEBUG oslo_concurrency.lockutils [req-8b528158-cc91-4d2c-b01c-f97cdb1594fd req-14cd1fc6-7a08-4130-affa-f62ed6ee4549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.072 239969 DEBUG nova.network.neutron [req-8b528158-cc91-4d2c-b01c-f97cdb1594fd req-14cd1fc6-7a08-4130-affa-f62ed6ee4549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Refreshing network info cache for port 81751ed2-633c-4723-82f3-f652ce04297b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.107 239969 DEBUG oslo_concurrency.lockutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.107 239969 DEBUG oslo_concurrency.lockutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.107 239969 DEBUG oslo_concurrency.lockutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.108 239969 DEBUG oslo_concurrency.lockutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.108 239969 DEBUG oslo_concurrency.lockutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.109 239969 INFO nova.compute.manager [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Terminating instance
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.110 239969 DEBUG nova.compute.manager [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:27:47 compute-0 kernel: tap81751ed2-63 (unregistering): left promiscuous mode
Jan 26 16:27:47 compute-0 NetworkManager[48954]: <info>  [1769444867.1727] device (tap81751ed2-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:27:47 compute-0 ovn_controller[146046]: 2026-01-26T16:27:47Z|01544|binding|INFO|Releasing lport 81751ed2-633c-4723-82f3-f652ce04297b from this chassis (sb_readonly=0)
Jan 26 16:27:47 compute-0 ovn_controller[146046]: 2026-01-26T16:27:47Z|01545|binding|INFO|Setting lport 81751ed2-633c-4723-82f3-f652ce04297b down in Southbound
Jan 26 16:27:47 compute-0 ovn_controller[146046]: 2026-01-26T16:27:47Z|01546|binding|INFO|Removing iface tap81751ed2-63 ovn-installed in OVS
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.188 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.196 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:bb:6f:f6 10.100.0.11'], port_security=['fa:16:3e:bb:6f:f6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'af71ccf7-284e-436b-9bb8-5bc6c727d3cc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-94bff060-de25-4619-bfc7-2522df0a74d7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '2d8fd4f4-d04c-4ea0-ae86-5f52a7fe7407', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a7e02e3d-e80d-4860-b220-ee88b1eda52c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=81751ed2-633c-4723-82f3-f652ce04297b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.199 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 81751ed2-633c-4723-82f3-f652ce04297b in datapath 94bff060-de25-4619-bfc7-2522df0a74d7 unbound from our chassis
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.203 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 94bff060-de25-4619-bfc7-2522df0a74d7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.206 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f894b6-c0f5-4e21-b0dc-c6666663f5a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.207 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 namespace which is not needed anymore
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.217 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:47 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Jan 26 16:27:47 compute-0 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Consumed 15.274s CPU time.
Jan 26 16:27:47 compute-0 systemd-machined[208061]: Machine qemu-177-instance-0000008f terminated.
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.351 239969 INFO nova.virt.libvirt.driver [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Instance destroyed successfully.
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.351 239969 DEBUG nova.objects.instance [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'resources' on Instance uuid af71ccf7-284e-436b-9bb8-5bc6c727d3cc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.369 239969 DEBUG nova.virt.libvirt.vif [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:26:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1217079529',display_name='tempest-TestNetworkAdvancedServerOps-server-1217079529',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1217079529',id=143,image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAHK7vjAqqBoYE5h97a5pskiriPLSJ5A7+bC6dk5uNnjCrImOF19Tu7dz5ff1CkVvUcFUiy33TMxCmyGQ5LYz502POS5lMWxg05L+Asko5H7mYq3E+7RWkgVQirbCTK0/w==',key_name='tempest-TestNetworkAdvancedServerOps-331164643',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:27:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-v3jbl2f2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',clean_attempts='1',image_base_image_ref='24ff5bfa-2bf0-4d76-ba05-fc857cd2108f',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:27:27Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=af71ccf7-284e-436b-9bb8-5bc6c727d3cc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.369 239969 DEBUG nova.network.os_vif_util [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.219", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.370 239969 DEBUG nova.network.os_vif_util [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.370 239969 DEBUG os_vif [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.372 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.372 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81751ed2-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.374 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.377 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.380 239969 INFO os_vif [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:bb:6f:f6,bridge_name='br-int',has_traffic_filtering=True,id=81751ed2-633c-4723-82f3-f652ce04297b,network=Network(94bff060-de25-4619-bfc7-2522df0a74d7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81751ed2-63')
Jan 26 16:27:47 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[367403]: [NOTICE]   (367407) : haproxy version is 2.8.14-c23fe91
Jan 26 16:27:47 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[367403]: [NOTICE]   (367407) : path to executable is /usr/sbin/haproxy
Jan 26 16:27:47 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[367403]: [WARNING]  (367407) : Exiting Master process...
Jan 26 16:27:47 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[367403]: [WARNING]  (367407) : Exiting Master process...
Jan 26 16:27:47 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[367403]: [ALERT]    (367407) : Current worker (367409) exited with code 143 (Terminated)
Jan 26 16:27:47 compute-0 neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7[367403]: [WARNING]  (367407) : All workers exited. Exiting... (0)
Jan 26 16:27:47 compute-0 systemd[1]: libpod-2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e.scope: Deactivated successfully.
Jan 26 16:27:47 compute-0 podman[367524]: 2026-01-26 16:27:47.554165515 +0000 UTC m=+0.238881560 container died 2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 26 16:27:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e-userdata-shm.mount: Deactivated successfully.
Jan 26 16:27:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-41d01d4e64f01602f7d135941f890edf00a9c3e406b264475421e5790daf5131-merged.mount: Deactivated successfully.
Jan 26 16:27:47 compute-0 podman[367524]: 2026-01-26 16:27:47.614868953 +0000 UTC m=+0.299585008 container cleanup 2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:27:47 compute-0 systemd[1]: libpod-conmon-2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e.scope: Deactivated successfully.
Jan 26 16:27:47 compute-0 podman[367582]: 2026-01-26 16:27:47.684521729 +0000 UTC m=+0.043936408 container remove 2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.689 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[faac016c-b5fe-4ce3-b61f-9124e2751f45]: (4, ('Mon Jan 26 04:27:47 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 (2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e)\n2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e\nMon Jan 26 04:27:47 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 (2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e)\n2fec36fb577e96ec8ffe352106e353ada4a467bfb79ee87e8209b9411258577e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.692 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d8e0214d-4ddc-4e27-a01d-1863002f3750]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.693 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap94bff060-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.695 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:47 compute-0 kernel: tap94bff060-d0: left promiscuous mode
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.711 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.715 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[be138e52-8468-406b-b881-ce5d51e70c28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.737 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee3dcfc-f58c-40f9-a369-10da8f4d5c39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.738 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a76762d3-a36b-4d5b-b130-447328973d8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.758 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1155b6f0-6cc6-4ec9-86d7-a71fdb1605f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 633659, 'reachable_time': 30559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367599, 'error': None, 'target': 'ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.760 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-94bff060-de25-4619-bfc7-2522df0a74d7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:27:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:47.761 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[aa91b65b-4e2d-43d1-b79b-4e05913c7530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:47 compute-0 systemd[1]: run-netns-ovnmeta\x2d94bff060\x2dde25\x2d4619\x2dbfc7\x2d2522df0a74d7.mount: Deactivated successfully.
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.786 239969 DEBUG nova.compute.manager [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.805 239969 INFO nova.virt.libvirt.driver [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Deleting instance files /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc_del
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.806 239969 INFO nova.virt.libvirt.driver [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Deletion of /var/lib/nova/instances/af71ccf7-284e-436b-9bb8-5bc6c727d3cc_del complete
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.839 239969 INFO nova.compute.manager [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] instance snapshotting
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.872 239969 INFO nova.compute.manager [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Took 0.76 seconds to destroy the instance on the hypervisor.
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.873 239969 DEBUG oslo.service.loopingcall [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.873 239969 DEBUG nova.compute.manager [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:27:47 compute-0 nova_compute[239965]: 2026-01-26 16:27:47.874 239969 DEBUG nova.network.neutron [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:27:48 compute-0 nova_compute[239965]: 2026-01-26 16:27:48.058 239969 INFO nova.virt.libvirt.driver [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Beginning live snapshot process
Jan 26 16:27:48 compute-0 nova_compute[239965]: 2026-01-26 16:27:48.234 239969 DEBUG nova.storage.rbd_utils [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] creating snapshot(74cba20fbafd4c1cacd0f7371db959be) on rbd image(5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:27:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Jan 26 16:27:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Jan 26 16:27:48 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Jan 26 16:27:48 compute-0 nova_compute[239965]: 2026-01-26 16:27:48.375 239969 DEBUG nova.storage.rbd_utils [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] cloning vms/5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk@74cba20fbafd4c1cacd0f7371db959be to images/384eb0c9-439f-498c-80ed-09050943d260 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:27:48 compute-0 nova_compute[239965]: 2026-01-26 16:27:48.472 239969 DEBUG nova.storage.rbd_utils [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] flattening images/384eb0c9-439f-498c-80ed-09050943d260 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:27:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Jan 26 16:27:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:27:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3007885653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:27:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:27:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3007885653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:27:48 compute-0 nova_compute[239965]: 2026-01-26 16:27:48.773 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:48 compute-0 nova_compute[239965]: 2026-01-26 16:27:48.901 239969 DEBUG nova.network.neutron [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:48 compute-0 nova_compute[239965]: 2026-01-26 16:27:48.922 239969 INFO nova.compute.manager [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Took 1.05 seconds to deallocate network for instance.
Jan 26 16:27:48 compute-0 nova_compute[239965]: 2026-01-26 16:27:48.968 239969 DEBUG oslo_concurrency.lockutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:48 compute-0 nova_compute[239965]: 2026-01-26 16:27:48.969 239969 DEBUG oslo_concurrency.lockutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.052 239969 DEBUG nova.network.neutron [req-8b528158-cc91-4d2c-b01c-f97cdb1594fd req-14cd1fc6-7a08-4130-affa-f62ed6ee4549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Updated VIF entry in instance network info cache for port 81751ed2-633c-4723-82f3-f652ce04297b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.053 239969 DEBUG nova.network.neutron [req-8b528158-cc91-4d2c-b01c-f97cdb1594fd req-14cd1fc6-7a08-4130-affa-f62ed6ee4549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Updating instance_info_cache with network_info: [{"id": "81751ed2-633c-4723-82f3-f652ce04297b", "address": "fa:16:3e:bb:6f:f6", "network": {"id": "94bff060-de25-4619-bfc7-2522df0a74d7", "bridge": "br-int", "label": "tempest-network-smoke--509183779", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81751ed2-63", "ovs_interfaceid": "81751ed2-633c-4723-82f3-f652ce04297b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.070 239969 DEBUG nova.storage.rbd_utils [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] removing snapshot(74cba20fbafd4c1cacd0f7371db959be) on rbd image(5d69a9b9-63ea-4df6-bf59-67250f4f907d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.076 239969 DEBUG oslo_concurrency.lockutils [req-8b528158-cc91-4d2c-b01c-f97cdb1594fd req-14cd1fc6-7a08-4130-affa-f62ed6ee4549 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-af71ccf7-284e-436b-9bb8-5bc6c727d3cc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003918513272387244 of space, bias 1.0, pg target 1.1755539817161733 quantized to 32 (current 32)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0017762233978557643 of space, bias 1.0, pg target 0.5310907959588735 quantized to 32 (current 32)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.143087077204461e-07 of space, bias 4.0, pg target 0.0008543132144336536 quantized to 16 (current 16)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.182 239969 DEBUG nova.compute.manager [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-vif-unplugged-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.183 239969 DEBUG oslo_concurrency.lockutils [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.183 239969 DEBUG oslo_concurrency.lockutils [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.184 239969 DEBUG oslo_concurrency.lockutils [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.185 239969 DEBUG nova.compute.manager [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] No waiting events found dispatching network-vif-unplugged-81751ed2-633c-4723-82f3-f652ce04297b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.185 239969 WARNING nova.compute.manager [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received unexpected event network-vif-unplugged-81751ed2-633c-4723-82f3-f652ce04297b for instance with vm_state deleted and task_state None.
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.186 239969 DEBUG nova.compute.manager [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.186 239969 DEBUG oslo_concurrency.lockutils [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.187 239969 DEBUG oslo_concurrency.lockutils [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.188 239969 DEBUG oslo_concurrency.lockutils [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.189 239969 DEBUG nova.compute.manager [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] No waiting events found dispatching network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.189 239969 WARNING nova.compute.manager [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received unexpected event network-vif-plugged-81751ed2-633c-4723-82f3-f652ce04297b for instance with vm_state deleted and task_state None.
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.190 239969 DEBUG nova.compute.manager [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Received event network-vif-deleted-81751ed2-633c-4723-82f3-f652ce04297b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.190 239969 INFO nova.compute.manager [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Neutron deleted interface 81751ed2-633c-4723-82f3-f652ce04297b; detaching it from the instance and deleting it from the info cache
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.191 239969 DEBUG nova.network.neutron [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.194 239969 DEBUG oslo_concurrency.processutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.257 239969 DEBUG nova.compute.manager [req-d80e0b38-0692-4151-8b77-b2bdb73ede63 req-74389bd2-ca0b-470d-9993-7642539d2672 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Detach interface failed, port_id=81751ed2-633c-4723-82f3-f652ce04297b, reason: Instance af71ccf7-284e-436b-9bb8-5bc6c727d3cc could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:27:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Jan 26 16:27:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Jan 26 16:27:49 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Jan 26 16:27:49 compute-0 ceph-mon[75140]: osdmap e308: 3 total, 3 up, 3 in
Jan 26 16:27:49 compute-0 ceph-mon[75140]: pgmap v2318: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 510 KiB/s rd, 2.6 MiB/s wr, 83 op/s
Jan 26 16:27:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3007885653' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:27:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3007885653' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.382 239969 DEBUG nova.storage.rbd_utils [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] creating snapshot(snap) on rbd image(384eb0c9-439f-498c-80ed-09050943d260) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:27:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:27:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3036530114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.810 239969 DEBUG oslo_concurrency.processutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.816 239969 DEBUG nova.compute.provider_tree [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.832 239969 DEBUG nova.scheduler.client.report [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.852 239969 DEBUG oslo_concurrency.lockutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.890 239969 INFO nova.scheduler.client.report [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Deleted allocations for instance af71ccf7-284e-436b-9bb8-5bc6c727d3cc
Jan 26 16:27:49 compute-0 nova_compute[239965]: 2026-01-26 16:27:49.955 239969 DEBUG oslo_concurrency.lockutils [None req-79f0889d-447a-48de-b757-298abbf024cd 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "af71ccf7-284e-436b-9bb8-5bc6c727d3cc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.848s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Jan 26 16:27:50 compute-0 ceph-mon[75140]: osdmap e309: 3 total, 3 up, 3 in
Jan 26 16:27:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3036530114' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Jan 26 16:27:50 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Jan 26 16:27:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 33 KiB/s wr, 8 op/s
Jan 26 16:27:51 compute-0 ceph-mon[75140]: osdmap e310: 3 total, 3 up, 3 in
Jan 26 16:27:51 compute-0 ceph-mon[75140]: pgmap v2321: 305 pgs: 305 active+clean; 581 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 196 KiB/s rd, 33 KiB/s wr, 8 op/s
Jan 26 16:27:51 compute-0 nova_compute[239965]: 2026-01-26 16:27:51.898 239969 INFO nova.virt.libvirt.driver [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Snapshot image upload complete
Jan 26 16:27:51 compute-0 nova_compute[239965]: 2026-01-26 16:27:51.898 239969 INFO nova.compute.manager [None req-17af654e-3fd6-437f-a891-64839fa6f65a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Took 4.06 seconds to snapshot the instance on the hypervisor.
Jan 26 16:27:52 compute-0 nova_compute[239965]: 2026-01-26 16:27:52.374 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 603 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 259 op/s
Jan 26 16:27:52 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:52.875 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:53 compute-0 ovn_controller[146046]: 2026-01-26T16:27:53Z|01547|binding|INFO|Releasing lport 36c96456-84ef-4832-9322-d686bc015797 from this chassis (sb_readonly=0)
Jan 26 16:27:53 compute-0 ovn_controller[146046]: 2026-01-26T16:27:53Z|01548|binding|INFO|Releasing lport 1ca00a90-6e42-40f0-a57b-b46691113c4a from this chassis (sb_readonly=0)
Jan 26 16:27:53 compute-0 ovn_controller[146046]: 2026-01-26T16:27:53Z|01549|binding|INFO|Releasing lport 891fbaea-41fa-4f46-a25a-e1bcac6983ae from this chassis (sb_readonly=0)
Jan 26 16:27:53 compute-0 ceph-mon[75140]: pgmap v2322: 305 pgs: 305 active+clean; 603 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.3 MiB/s rd, 15 MiB/s wr, 259 op/s
Jan 26 16:27:53 compute-0 nova_compute[239965]: 2026-01-26 16:27:53.823 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:54 compute-0 nova_compute[239965]: 2026-01-26 16:27:54.103 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Jan 26 16:27:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Jan 26 16:27:54 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Jan 26 16:27:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 603 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 250 op/s
Jan 26 16:27:55 compute-0 ceph-mon[75140]: osdmap e311: 3 total, 3 up, 3 in
Jan 26 16:27:55 compute-0 ceph-mon[75140]: pgmap v2324: 305 pgs: 305 active+clean; 603 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 8.1 MiB/s rd, 15 MiB/s wr, 250 op/s
Jan 26 16:27:55 compute-0 nova_compute[239965]: 2026-01-26 16:27:55.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:55 compute-0 nova_compute[239965]: 2026-01-26 16:27:55.523 239969 INFO nova.compute.manager [None req-4226d842-7793-4076-abe7-5672ef40dda4 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Get console output
Jan 26 16:27:55 compute-0 nova_compute[239965]: 2026-01-26 16:27:55.531 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.534 239969 DEBUG nova.compute.manager [req-0deba225-f4c1-4176-a9c6-72b22567dd93 req-992e0cbf-7a53-443e-9280-5830ba440232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.535 239969 DEBUG nova.compute.manager [req-0deba225-f4c1-4176-a9c6-72b22567dd93 req-992e0cbf-7a53-443e-9280-5830ba440232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing instance network info cache due to event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.535 239969 DEBUG oslo_concurrency.lockutils [req-0deba225-f4c1-4176-a9c6-72b22567dd93 req-992e0cbf-7a53-443e-9280-5830ba440232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.535 239969 DEBUG oslo_concurrency.lockutils [req-0deba225-f4c1-4176-a9c6-72b22567dd93 req-992e0cbf-7a53-443e-9280-5830ba440232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.535 239969 DEBUG nova.network.neutron [req-0deba225-f4c1-4176-a9c6-72b22567dd93 req-992e0cbf-7a53-443e-9280-5830ba440232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:27:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 559 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 12 MiB/s wr, 207 op/s
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.752 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.753 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.753 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.753 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 39a41999-3c56-4679-8c57-9286dc4edbb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.836 239969 DEBUG oslo_concurrency.lockutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.836 239969 DEBUG oslo_concurrency.lockutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.837 239969 DEBUG oslo_concurrency.lockutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.837 239969 DEBUG oslo_concurrency.lockutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.837 239969 DEBUG oslo_concurrency.lockutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.838 239969 INFO nova.compute.manager [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Terminating instance
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.839 239969 DEBUG nova.compute.manager [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:27:56 compute-0 ceph-mon[75140]: pgmap v2325: 305 pgs: 305 active+clean; 559 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 12 MiB/s wr, 207 op/s
Jan 26 16:27:56 compute-0 kernel: tapfb5c2eef-7a (unregistering): left promiscuous mode
Jan 26 16:27:56 compute-0 NetworkManager[48954]: <info>  [1769444876.9396] device (tapfb5c2eef-7a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:27:56 compute-0 ovn_controller[146046]: 2026-01-26T16:27:56Z|01550|binding|INFO|Releasing lport fb5c2eef-7a9f-43e2-bf77-34753400c407 from this chassis (sb_readonly=0)
Jan 26 16:27:56 compute-0 ovn_controller[146046]: 2026-01-26T16:27:56Z|01551|binding|INFO|Setting lport fb5c2eef-7a9f-43e2-bf77-34753400c407 down in Southbound
Jan 26 16:27:56 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.996 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:56 compute-0 ovn_controller[146046]: 2026-01-26T16:27:56Z|01552|binding|INFO|Removing iface tapfb5c2eef-7a ovn-installed in OVS
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:56.999 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.006 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ff:c3:d7 10.100.0.12'], port_security=['fa:16:3e:ff:c3:d7 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '5d69a9b9-63ea-4df6-bf59-67250f4f907d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01b6ba29-8317-46fc-85a5-35c029f2796c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02ec8c20b77c4ce9b1406d858bbcf14d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4f55238b-e3e6-4801-8e90-114accc75c46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d1802e6-be6e-40d0-8709-a6af00654d79, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=fb5c2eef-7a9f-43e2-bf77-34753400c407) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.008 156105 INFO neutron.agent.ovn.metadata.agent [-] Port fb5c2eef-7a9f-43e2-bf77-34753400c407 in datapath 01b6ba29-8317-46fc-85a5-35c029f2796c unbound from our chassis
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.009 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01b6ba29-8317-46fc-85a5-35c029f2796c
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.016 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.028 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0c1537e0-c89c-4b9f-bb6e-d2011ccef4a2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:57 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000090.scope: Deactivated successfully.
Jan 26 16:27:57 compute-0 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d00000090.scope: Consumed 16.129s CPU time.
Jan 26 16:27:57 compute-0 systemd-machined[208061]: Machine qemu-174-instance-00000090 terminated.
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.059 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[71b4a4c3-3b7c-4b24-bcd7-326c86eb6a40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.062 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[8a34770b-cfc6-4f32-90a8-f93649675285]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.102 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5729d150-99a7-46c4-8255-dbb3ad9ffcfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.130 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3ebf41e2-60fc-4395-8618-781b34b92296]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01b6ba29-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:79:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 426], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626016, 'reachable_time': 18923, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 367774, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.150 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[151b8585-cd46-42df-b22e-b02d9f796eab]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap01b6ba29-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626030, 'tstamp': 626030}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367775, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap01b6ba29-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 626033, 'tstamp': 626033}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 367775, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.152 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01b6ba29-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.154 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.162 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.163 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01b6ba29-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.164 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.164 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01b6ba29-80, col_values=(('external_ids', {'iface-id': '36c96456-84ef-4832-9322-d686bc015797'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:57 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:57.165 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.274 239969 INFO nova.virt.libvirt.driver [-] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Instance destroyed successfully.
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.275 239969 DEBUG nova.objects.instance [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lazy-loading 'resources' on Instance uuid 5d69a9b9-63ea-4df6-bf59-67250f4f907d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.290 239969 DEBUG nova.virt.libvirt.vif [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:26:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-1369427436',display_name='tempest-TestSnapshotPattern-server-1369427436',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-1369427436',id=144,image_ref='4f1e15b7-da72-469a-9162-748b08cf386f',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbFIdmmD0YOLT/gnQMrV89ZkLND1rAgrbfF3qbw6dEO95K+VelVpcFet15bYd/xM6RQfQxT61ExvAJRv5XtJWc2bRBlJ7p2k0AnC8pnTtVi312SPdjQj+CEHqI1btz5HA==',key_name='tempest-TestSnapshotPattern-544616432',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:27:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='02ec8c20b77c4ce9b1406d858bbcf14d',ramdisk_id='',reservation_id='r-jxkuoj5p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='39a41999-3c56-4679-8c57-9286dc4edbb6',image_min_disk='1',image_min_ram='0',image_owner_id='02ec8c20b77c4ce9b1406d858bbcf14d',image_owner_project_name='tempest-TestSnapshotPattern-616132540',image_owner_user_name='tempest-TestSnapshotPattern-616132540-project-member',image_user_id='915d03d871484dc39e2d074d97f24809',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-616132540',owner_user_name='tempest-TestSnapshotPattern-616132540-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:27:51Z,user_data=None,user_id='915d03d871484dc39e2d074d97f24809',uuid=5d69a9b9-63ea-4df6-bf59-67250f4f907d,vcpu_model=<?>
,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.290 239969 DEBUG nova.network.os_vif_util [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converting VIF {"id": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "address": "fa:16:3e:ff:c3:d7", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfb5c2eef-7a", "ovs_interfaceid": "fb5c2eef-7a9f-43e2-bf77-34753400c407", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.291 239969 DEBUG nova.network.os_vif_util [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ff:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=fb5c2eef-7a9f-43e2-bf77-34753400c407,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb5c2eef-7a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.291 239969 DEBUG os_vif [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=fb5c2eef-7a9f-43e2-bf77-34753400c407,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb5c2eef-7a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.292 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.292 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfb5c2eef-7a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.297 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.299 239969 INFO os_vif [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ff:c3:d7,bridge_name='br-int',has_traffic_filtering=True,id=fb5c2eef-7a9f-43e2-bf77-34753400c407,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfb5c2eef-7a')
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.408 239969 DEBUG nova.compute.manager [req-c3a04771-719d-4ad8-99d2-74f3700bfea1 req-84405447-8f2b-4325-8900-d025ef874edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received event network-vif-unplugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.408 239969 DEBUG oslo_concurrency.lockutils [req-c3a04771-719d-4ad8-99d2-74f3700bfea1 req-84405447-8f2b-4325-8900-d025ef874edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.408 239969 DEBUG oslo_concurrency.lockutils [req-c3a04771-719d-4ad8-99d2-74f3700bfea1 req-84405447-8f2b-4325-8900-d025ef874edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.409 239969 DEBUG oslo_concurrency.lockutils [req-c3a04771-719d-4ad8-99d2-74f3700bfea1 req-84405447-8f2b-4325-8900-d025ef874edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.409 239969 DEBUG nova.compute.manager [req-c3a04771-719d-4ad8-99d2-74f3700bfea1 req-84405447-8f2b-4325-8900-d025ef874edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] No waiting events found dispatching network-vif-unplugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.409 239969 DEBUG nova.compute.manager [req-c3a04771-719d-4ad8-99d2-74f3700bfea1 req-84405447-8f2b-4325-8900-d025ef874edb a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received event network-vif-unplugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.589 239969 INFO nova.compute.manager [None req-1965f5d5-d63c-450a-ab79-9b7dbedb6fca e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Get console output
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.598 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "0f79507c-8a44-47f3-a461-6377eed67521" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.599 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.604 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.620 239969 DEBUG nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.668 239969 INFO nova.virt.libvirt.driver [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Deleting instance files /var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d_del
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.669 239969 INFO nova.virt.libvirt.driver [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Deletion of /var/lib/nova/instances/5d69a9b9-63ea-4df6-bf59-67250f4f907d_del complete
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.719 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.719 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.726 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.726 239969 INFO nova.compute.claims [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.735 239969 INFO nova.compute.manager [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Took 0.90 seconds to destroy the instance on the hypervisor.
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.735 239969 DEBUG oslo.service.loopingcall [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.735 239969 DEBUG nova.compute.manager [-] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.736 239969 DEBUG nova.network.neutron [-] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.837 239969 DEBUG nova.network.neutron [req-0deba225-f4c1-4176-a9c6-72b22567dd93 req-992e0cbf-7a53-443e-9280-5830ba440232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updated VIF entry in instance network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.837 239969 DEBUG nova.network.neutron [req-0deba225-f4c1-4176-a9c6-72b22567dd93 req-992e0cbf-7a53-443e-9280-5830ba440232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updating instance_info_cache with network_info: [{"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.850 239969 DEBUG oslo_concurrency.lockutils [req-0deba225-f4c1-4176-a9c6-72b22567dd93 req-992e0cbf-7a53-443e-9280-5830ba440232 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:57 compute-0 nova_compute[239965]: 2026-01-26 16:27:57.912 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.256 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updating instance_info_cache with network_info: [{"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.280 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.280 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.454 239969 DEBUG nova.network.neutron [-] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.473 239969 INFO nova.compute.manager [-] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Took 0.74 seconds to deallocate network for instance.
Jan 26 16:27:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:27:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3489135912' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.514 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.518 239969 DEBUG oslo_concurrency.lockutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.521 239969 DEBUG nova.compute.provider_tree [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.535 239969 DEBUG nova.scheduler.client.report [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.559 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.560 239969 DEBUG nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.563 239969 DEBUG oslo_concurrency.lockutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.612 239969 DEBUG nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.613 239969 DEBUG nova.network.neutron [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:27:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3489135912' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.642 239969 INFO nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.653 239969 DEBUG nova.compute.manager [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-vif-unplugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.654 239969 DEBUG oslo_concurrency.lockutils [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.654 239969 DEBUG oslo_concurrency.lockutils [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.654 239969 DEBUG oslo_concurrency.lockutils [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.655 239969 DEBUG nova.compute.manager [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] No waiting events found dispatching network-vif-unplugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.655 239969 WARNING nova.compute.manager [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received unexpected event network-vif-unplugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 for instance with vm_state active and task_state None.
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.656 239969 DEBUG nova.compute.manager [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received event network-changed-fb5c2eef-7a9f-43e2-bf77-34753400c407 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.656 239969 DEBUG nova.compute.manager [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Refreshing instance network info cache due to event network-changed-fb5c2eef-7a9f-43e2-bf77-34753400c407. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.657 239969 DEBUG oslo_concurrency.lockutils [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.657 239969 DEBUG oslo_concurrency.lockutils [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.658 239969 DEBUG nova.network.neutron [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Refreshing network info cache for port fb5c2eef-7a9f-43e2-bf77-34753400c407 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.666 239969 DEBUG nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:27:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 231 op/s
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.725 239969 DEBUG oslo_concurrency.processutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.824 239969 DEBUG nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.827 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.828 239969 INFO nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Creating image(s)
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.849 239969 DEBUG nova.storage.rbd_utils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 0f79507c-8a44-47f3-a461-6377eed67521_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.870 239969 DEBUG nova.storage.rbd_utils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 0f79507c-8a44-47f3-a461-6377eed67521_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.890 239969 DEBUG nova.storage.rbd_utils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 0f79507c-8a44-47f3-a461-6377eed67521_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.895 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.935 239969 DEBUG nova.network.neutron [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.940 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:27:58 compute-0 nova_compute[239965]: 2026-01-26 16:27:58.976 239969 DEBUG nova.policy [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ace5f36058684b5782589457d9e15921', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.024 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.025 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.026 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.026 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.046 239969 DEBUG nova.storage.rbd_utils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 0f79507c-8a44-47f3-a461-6377eed67521_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.049 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0f79507c-8a44-47f3-a461-6377eed67521_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:27:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:59.251 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:59.251 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:27:59.253 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:27:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:27:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1786929857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.311 239969 DEBUG oslo_concurrency.processutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.320 239969 DEBUG nova.compute.provider_tree [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.344 239969 DEBUG nova.scheduler.client.report [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.370 239969 DEBUG oslo_concurrency.lockutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.808s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.376 239969 DEBUG nova.network.neutron [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.391 239969 DEBUG oslo_concurrency.lockutils [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-5d69a9b9-63ea-4df6-bf59-67250f4f907d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.392 239969 DEBUG nova.compute.manager [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.392 239969 DEBUG oslo_concurrency.lockutils [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.393 239969 DEBUG oslo_concurrency.lockutils [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.393 239969 DEBUG oslo_concurrency.lockutils [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.393 239969 DEBUG nova.compute.manager [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] No waiting events found dispatching network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.393 239969 WARNING nova.compute.manager [req-ec579545-d1ef-4a11-a002-5774c06cdac7 req-94fae31d-15ca-4a0e-8610-16fb505d3b36 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received unexpected event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 for instance with vm_state active and task_state None.
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.410 239969 INFO nova.scheduler.client.report [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Deleted allocations for instance 5d69a9b9-63ea-4df6-bf59-67250f4f907d
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.438 239969 INFO nova.compute.manager [None req-f734e855-d315-458e-ac8a-66739cc7c44f e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Get console output
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.446 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.476 239969 DEBUG oslo_concurrency.lockutils [None req-a31cf625-57b5-4185-9193-9c8b36af4f9a 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.537 239969 DEBUG nova.compute.manager [req-752cd133-0864-4e1a-a02d-c0a5d0b8ca7a req-151edc7a-3c6f-45f1-9540-c9b25ef81a04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received event network-vif-plugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.537 239969 DEBUG oslo_concurrency.lockutils [req-752cd133-0864-4e1a-a02d-c0a5d0b8ca7a req-151edc7a-3c6f-45f1-9540-c9b25ef81a04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.538 239969 DEBUG oslo_concurrency.lockutils [req-752cd133-0864-4e1a-a02d-c0a5d0b8ca7a req-151edc7a-3c6f-45f1-9540-c9b25ef81a04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.538 239969 DEBUG oslo_concurrency.lockutils [req-752cd133-0864-4e1a-a02d-c0a5d0b8ca7a req-151edc7a-3c6f-45f1-9540-c9b25ef81a04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "5d69a9b9-63ea-4df6-bf59-67250f4f907d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.538 239969 DEBUG nova.compute.manager [req-752cd133-0864-4e1a-a02d-c0a5d0b8ca7a req-151edc7a-3c6f-45f1-9540-c9b25ef81a04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] No waiting events found dispatching network-vif-plugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.538 239969 WARNING nova.compute.manager [req-752cd133-0864-4e1a-a02d-c0a5d0b8ca7a req-151edc7a-3c6f-45f1-9540-c9b25ef81a04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received unexpected event network-vif-plugged-fb5c2eef-7a9f-43e2-bf77-34753400c407 for instance with vm_state deleted and task_state None.
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.538 239969 DEBUG nova.compute.manager [req-752cd133-0864-4e1a-a02d-c0a5d0b8ca7a req-151edc7a-3c6f-45f1-9540-c9b25ef81a04 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Received event network-vif-deleted-fb5c2eef-7a9f-43e2-bf77-34753400c407 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:27:59 compute-0 ceph-mon[75140]: pgmap v2326: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 11 MiB/s wr, 231 op/s
Jan 26 16:27:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1786929857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.812 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 0f79507c-8a44-47f3-a461-6377eed67521_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.763s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:27:59 compute-0 nova_compute[239965]: 2026-01-26 16:27:59.881 239969 DEBUG nova.storage.rbd_utils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] resizing rbd image 0f79507c-8a44-47f3-a461-6377eed67521_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.076 239969 DEBUG nova.objects.instance [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'migration_context' on Instance uuid 0f79507c-8a44-47f3-a461-6377eed67521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.090 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.091 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Ensure instance console log exists: /var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.092 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.093 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.094 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.233 239969 DEBUG nova.network.neutron [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Successfully created port: be140ce9-b433-44cc-867c-3beea67a67d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.409 239969 DEBUG oslo_concurrency.lockutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.409 239969 DEBUG oslo_concurrency.lockutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.410 239969 DEBUG oslo_concurrency.lockutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.410 239969 DEBUG oslo_concurrency.lockutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.410 239969 DEBUG oslo_concurrency.lockutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.412 239969 INFO nova.compute.manager [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Terminating instance
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.414 239969 DEBUG nova.compute.manager [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:00 compute-0 kernel: tap307ae46b-ff (unregistering): left promiscuous mode
Jan 26 16:28:00 compute-0 NetworkManager[48954]: <info>  [1769444880.6786] device (tap307ae46b-ff): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:28:00 compute-0 ovn_controller[146046]: 2026-01-26T16:28:00Z|01553|binding|INFO|Releasing lport 307ae46b-ff90-469d-8239-59ac4ce405cd from this chassis (sb_readonly=0)
Jan 26 16:28:00 compute-0 ovn_controller[146046]: 2026-01-26T16:28:00Z|01554|binding|INFO|Setting lport 307ae46b-ff90-469d-8239-59ac4ce405cd down in Southbound
Jan 26 16:28:00 compute-0 ovn_controller[146046]: 2026-01-26T16:28:00Z|01555|binding|INFO|Removing iface tap307ae46b-ff ovn-installed in OVS
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.687 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.696 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:36:2b:9c 10.100.0.10'], port_security=['fa:16:3e:36:2b:9c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '02c17ea9-0fbb-4beb-bc93-b0339e8a1048', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '04ba1d55-304e-4a2d-9d0e-1ecb801b0b55', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=975e0fb6-2099-45e2-83bb-35528f0f0ffd, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=307ae46b-ff90-469d-8239-59ac4ce405cd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.698 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 307ae46b-ff90-469d-8239-59ac4ce405cd in datapath 6a673593-4cc1-4b91-8d9d-33cfb52b459e unbound from our chassis
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.700 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6a673593-4cc1-4b91-8d9d-33cfb52b459e
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.704 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 9.3 MiB/s wr, 192 op/s
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.716 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aaa433e4-1956-4655-be6f-c423c4c5a0ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.745 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[74f132e0-0243-40e6-9794-0d94e62a5c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:00 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000091.scope: Deactivated successfully.
Jan 26 16:28:00 compute-0 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d00000091.scope: Consumed 16.280s CPU time.
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.750 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f36d8c26-88f5-4121-b7ad-e401914d490e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:00 compute-0 systemd-machined[208061]: Machine qemu-175-instance-00000091 terminated.
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.763 239969 DEBUG nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.763 239969 DEBUG nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing instance network info cache due to event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.764 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.764 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.764 239969 DEBUG nova.network.neutron [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.785 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[40a1bf57-8a2c-4da3-9374-72ba0ebe831f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.802 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[28e2a37a-4a64-4bf4-b7f9-f9bd532d801a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6a673593-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:cd:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 433], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630323, 'reachable_time': 33254, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368028, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.816 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9700c54f-e36a-4dc0-8134-2e85f87e3d28]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6a673593-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630336, 'tstamp': 630336}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368029, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6a673593-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 630341, 'tstamp': 630341}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368029, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.817 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a673593-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.819 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.822 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.823 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a673593-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.824 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.824 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6a673593-40, col_values=(('external_ids', {'iface-id': '891fbaea-41fa-4f46-a25a-e1bcac6983ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:00.825 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.847 239969 INFO nova.virt.libvirt.driver [-] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Instance destroyed successfully.
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.847 239969 DEBUG nova.objects.instance [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid 02c17ea9-0fbb-4beb-bc93-b0339e8a1048 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.861 239969 DEBUG nova.virt.libvirt.vif [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:27:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-78115778',display_name='tempest-TestNetworkBasicOps-server-78115778',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-78115778',id=145,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBwBHpp1Vs9MWQPeeeqakNnwtGS48PwvSYJeNUykaXxeW214KrGoXbpYLnywNCCG/bKkzO9tfFdp4KnfWvJ367nCGMOMOcf1Bi3W6elO3tG7Lx1njo9pJYHabyFHcW7Hlg==',key_name='tempest-TestNetworkBasicOps-1084723917',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:27:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-0xfa470c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:27:17Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=02c17ea9-0fbb-4beb-bc93-b0339e8a1048,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.861 239969 DEBUG nova.network.os_vif_util [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.862 239969 DEBUG nova.network.os_vif_util [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:36:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=307ae46b-ff90-469d-8239-59ac4ce405cd,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap307ae46b-ff') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.862 239969 DEBUG os_vif [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=307ae46b-ff90-469d-8239-59ac4ce405cd,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap307ae46b-ff') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.864 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap307ae46b-ff, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.866 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:28:00 compute-0 nova_compute[239965]: 2026-01-26 16:28:00.869 239969 INFO os_vif [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:36:2b:9c,bridge_name='br-int',has_traffic_filtering=True,id=307ae46b-ff90-469d-8239-59ac4ce405cd,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap307ae46b-ff')
Jan 26 16:28:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.213 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:01 compute-0 ceph-mon[75140]: pgmap v2327: 305 pgs: 305 active+clean; 498 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 9.3 MiB/s wr, 192 op/s
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.463 239969 DEBUG nova.network.neutron [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Successfully updated port: be140ce9-b433-44cc-867c-3beea67a67d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.488 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "refresh_cache-0f79507c-8a44-47f3-a461-6377eed67521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.488 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquired lock "refresh_cache-0f79507c-8a44-47f3-a461-6377eed67521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.488 239969 DEBUG nova.network.neutron [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.506 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:28:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Jan 26 16:28:01 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.639 239969 DEBUG nova.compute.manager [req-2314da0f-af0a-44a5-a677-cd12866b6a1e req-6b33146a-3f89-4f2f-8a4d-30117e40e387 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Received event network-changed-be140ce9-b433-44cc-867c-3beea67a67d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.639 239969 DEBUG nova.compute.manager [req-2314da0f-af0a-44a5-a677-cd12866b6a1e req-6b33146a-3f89-4f2f-8a4d-30117e40e387 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Refreshing instance network info cache due to event network-changed-be140ce9-b433-44cc-867c-3beea67a67d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.640 239969 DEBUG oslo_concurrency.lockutils [req-2314da0f-af0a-44a5-a677-cd12866b6a1e req-6b33146a-3f89-4f2f-8a4d-30117e40e387 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0f79507c-8a44-47f3-a461-6377eed67521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:01 compute-0 nova_compute[239965]: 2026-01-26 16:28:01.762 239969 DEBUG nova.network.neutron [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.349 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444867.348756, af71ccf7-284e-436b-9bb8-5bc6c727d3cc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.350 239969 INFO nova.compute.manager [-] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] VM Stopped (Lifecycle Event)
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.373 239969 DEBUG nova.compute.manager [None req-01c40cbf-d92b-4e29-a384-9e9f66f2b031 - - - - - -] [instance: af71ccf7-284e-436b-9bb8-5bc6c727d3cc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:02 compute-0 ceph-mon[75140]: osdmap e312: 3 total, 3 up, 3 in
Jan 26 16:28:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 495 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.8 MiB/s wr, 149 op/s
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.850 239969 DEBUG nova.compute.manager [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received event network-vif-unplugged-307ae46b-ff90-469d-8239-59ac4ce405cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.850 239969 DEBUG oslo_concurrency.lockutils [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.850 239969 DEBUG oslo_concurrency.lockutils [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.850 239969 DEBUG oslo_concurrency.lockutils [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.851 239969 DEBUG nova.compute.manager [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] No waiting events found dispatching network-vif-unplugged-307ae46b-ff90-469d-8239-59ac4ce405cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.851 239969 DEBUG nova.compute.manager [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received event network-vif-unplugged-307ae46b-ff90-469d-8239-59ac4ce405cd for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.851 239969 DEBUG nova.compute.manager [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received event network-vif-plugged-307ae46b-ff90-469d-8239-59ac4ce405cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.851 239969 DEBUG oslo_concurrency.lockutils [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.851 239969 DEBUG oslo_concurrency.lockutils [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.851 239969 DEBUG oslo_concurrency.lockutils [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.852 239969 DEBUG nova.compute.manager [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] No waiting events found dispatching network-vif-plugged-307ae46b-ff90-469d-8239-59ac4ce405cd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:02 compute-0 nova_compute[239965]: 2026-01-26 16:28:02.852 239969 WARNING nova.compute.manager [req-2c8613bb-5c32-4eab-bd60-2f9520c9506b req-cb33dec8-319c-48f9-a8c6-97d236d04007 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received unexpected event network-vif-plugged-307ae46b-ff90-469d-8239-59ac4ce405cd for instance with vm_state active and task_state deleting.
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.261 239969 DEBUG nova.network.neutron [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updated VIF entry in instance network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.261 239969 DEBUG nova.network.neutron [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updating instance_info_cache with network_info: [{"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.276 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.277 239969 DEBUG nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.277 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.277 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.278 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.278 239969 DEBUG nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] No waiting events found dispatching network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.278 239969 WARNING nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received unexpected event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 for instance with vm_state active and task_state None.
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.278 239969 DEBUG nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.279 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.279 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.279 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.279 239969 DEBUG nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] No waiting events found dispatching network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.280 239969 WARNING nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received unexpected event network-vif-plugged-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 for instance with vm_state active and task_state None.
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.280 239969 DEBUG nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received event network-changed-307ae46b-ff90-469d-8239-59ac4ce405cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.280 239969 DEBUG nova.compute.manager [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Refreshing instance network info cache due to event network-changed-307ae46b-ff90-469d-8239-59ac4ce405cd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.281 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.281 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.281 239969 DEBUG nova.network.neutron [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Refreshing network info cache for port 307ae46b-ff90-469d-8239-59ac4ce405cd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:03 compute-0 ceph-mon[75140]: pgmap v2329: 305 pgs: 305 active+clean; 495 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.8 MiB/s wr, 149 op/s
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.721 239969 DEBUG nova.network.neutron [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Updating instance_info_cache with network_info: [{"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.738 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Releasing lock "refresh_cache-0f79507c-8a44-47f3-a461-6377eed67521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.738 239969 DEBUG nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Instance network_info: |[{"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.739 239969 DEBUG oslo_concurrency.lockutils [req-2314da0f-af0a-44a5-a677-cd12866b6a1e req-6b33146a-3f89-4f2f-8a4d-30117e40e387 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0f79507c-8a44-47f3-a461-6377eed67521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.739 239969 DEBUG nova.network.neutron [req-2314da0f-af0a-44a5-a677-cd12866b6a1e req-6b33146a-3f89-4f2f-8a4d-30117e40e387 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Refreshing network info cache for port be140ce9-b433-44cc-867c-3beea67a67d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.742 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Start _get_guest_xml network_info=[{"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.746 239969 WARNING nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.752 239969 DEBUG nova.virt.libvirt.host [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.753 239969 DEBUG nova.virt.libvirt.host [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.763 239969 DEBUG nova.virt.libvirt.host [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.764 239969 DEBUG nova.virt.libvirt.host [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.765 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.766 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.766 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.767 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.767 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.768 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.768 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.768 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.769 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.769 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.770 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.771 239969 DEBUG nova.virt.hardware [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.777 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:03 compute-0 nova_compute[239965]: 2026-01-26 16:28:03.876 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:04 compute-0 nova_compute[239965]: 2026-01-26 16:28:04.050 239969 INFO nova.virt.libvirt.driver [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Deleting instance files /var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048_del
Jan 26 16:28:04 compute-0 nova_compute[239965]: 2026-01-26 16:28:04.052 239969 INFO nova.virt.libvirt.driver [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Deletion of /var/lib/nova/instances/02c17ea9-0fbb-4beb-bc93-b0339e8a1048_del complete
Jan 26 16:28:04 compute-0 nova_compute[239965]: 2026-01-26 16:28:04.135 239969 INFO nova.compute.manager [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Took 3.72 seconds to destroy the instance on the hypervisor.
Jan 26 16:28:04 compute-0 nova_compute[239965]: 2026-01-26 16:28:04.136 239969 DEBUG oslo.service.loopingcall [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:28:04 compute-0 nova_compute[239965]: 2026-01-26 16:28:04.136 239969 DEBUG nova.compute.manager [-] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:28:04 compute-0 nova_compute[239965]: 2026-01-26 16:28:04.137 239969 DEBUG nova.network.neutron [-] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:28:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Jan 26 16:28:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Jan 26 16:28:04 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.287061) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444884287106, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 1366, "num_deletes": 253, "total_data_size": 2004708, "memory_usage": 2028520, "flush_reason": "Manual Compaction"}
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444884297218, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 1246711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47684, "largest_seqno": 49049, "table_properties": {"data_size": 1241627, "index_size": 2352, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13561, "raw_average_key_size": 21, "raw_value_size": 1230482, "raw_average_value_size": 1919, "num_data_blocks": 105, "num_entries": 641, "num_filter_entries": 641, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444764, "oldest_key_time": 1769444764, "file_creation_time": 1769444884, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 10195 microseconds, and 3601 cpu microseconds.
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.297258) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 1246711 bytes OK
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.297274) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.298721) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.298731) EVENT_LOG_v1 {"time_micros": 1769444884298728, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.298746) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 1998541, prev total WAL file size 1998541, number of live WAL files 2.
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.299325) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373535' seq:72057594037927935, type:22 .. '6D6772737461740032303037' seq:0, type:0; will stop at (end)
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(1217KB)], [110(10195KB)]
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444884299361, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11686490, "oldest_snapshot_seqno": -1}
Jan 26 16:28:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:28:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/996535720' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:04 compute-0 sudo[368081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:28:04 compute-0 sudo[368081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:28:04 compute-0 sudo[368081]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:04 compute-0 nova_compute[239965]: 2026-01-26 16:28:04.470 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.693s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 7136 keys, 9017012 bytes, temperature: kUnknown
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444884482936, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 9017012, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8971099, "index_size": 26985, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17861, "raw_key_size": 185344, "raw_average_key_size": 25, "raw_value_size": 8845499, "raw_average_value_size": 1239, "num_data_blocks": 1049, "num_entries": 7136, "num_filter_entries": 7136, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444884, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.483368) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 9017012 bytes
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.488775) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 63.6 rd, 49.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 10.0 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(16.6) write-amplify(7.2) OK, records in: 7603, records dropped: 467 output_compression: NoCompression
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.488810) EVENT_LOG_v1 {"time_micros": 1769444884488797, "job": 66, "event": "compaction_finished", "compaction_time_micros": 183737, "compaction_time_cpu_micros": 22367, "output_level": 6, "num_output_files": 1, "total_output_size": 9017012, "num_input_records": 7603, "num_output_records": 7136, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444884489637, "job": 66, "event": "table_file_deletion", "file_number": 112}
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444884492030, "job": 66, "event": "table_file_deletion", "file_number": 110}
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.299251) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.492125) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.492142) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.492146) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.492150) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:28:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:28:04.492156) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:28:04 compute-0 nova_compute[239965]: 2026-01-26 16:28:04.532 239969 DEBUG nova.storage.rbd_utils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 0f79507c-8a44-47f3-a461-6377eed67521_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:04 compute-0 sudo[368115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:28:04 compute-0 sudo[368115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:28:04 compute-0 nova_compute[239965]: 2026-01-26 16:28:04.537 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 495 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 2.9 MiB/s wr, 155 op/s
Jan 26 16:28:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:28:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304879583' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.091 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.095 239969 DEBUG nova.virt.libvirt.vif [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1795104862',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1795104862',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-gen',id=147,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNmPYHlpHuZpOeZCl7BHQ216cd8u50ZxFCXW833oHGB4gBbAp9eFvd6P3qjkcGf53JTK0QFiWuKcgJqP7NdWwDFlJ24rb8FRQzo1dBCyWpi8odGazdOwXvkLXo0bhtlZwA==',key_name='tempest-TestSecurityGroupsBasicOps-1082581796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-bf1n0ojq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:27:58Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=0f79507c-8a44-47f3-a461-6377eed67521,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.095 239969 DEBUG nova.network.os_vif_util [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.096 239969 DEBUG nova.network.os_vif_util [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a8:5d,bridge_name='br-int',has_traffic_filtering=True,id=be140ce9-b433-44cc-867c-3beea67a67d2,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe140ce9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.100 239969 DEBUG nova.objects.instance [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 0f79507c-8a44-47f3-a461-6377eed67521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.123 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <uuid>0f79507c-8a44-47f3-a461-6377eed67521</uuid>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <name>instance-00000093</name>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1795104862</nova:name>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:28:03</nova:creationTime>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <nova:user uuid="ace5f36058684b5782589457d9e15921">tempest-TestSecurityGroupsBasicOps-75910484-project-member</nova:user>
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <nova:project uuid="1f814bd45d164be6827f7ef54ed07f5f">tempest-TestSecurityGroupsBasicOps-75910484</nova:project>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <nova:port uuid="be140ce9-b433-44cc-867c-3beea67a67d2">
Jan 26 16:28:05 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <system>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <entry name="serial">0f79507c-8a44-47f3-a461-6377eed67521</entry>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <entry name="uuid">0f79507c-8a44-47f3-a461-6377eed67521</entry>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     </system>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <os>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   </os>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <features>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   </features>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0f79507c-8a44-47f3-a461-6377eed67521_disk">
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       </source>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/0f79507c-8a44-47f3-a461-6377eed67521_disk.config">
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       </source>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:28:05 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:a5:a8:5d"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <target dev="tapbe140ce9-b4"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521/console.log" append="off"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <video>
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     </video>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:28:05 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:28:05 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:28:05 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:28:05 compute-0 nova_compute[239965]: </domain>
Jan 26 16:28:05 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.125 239969 DEBUG nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Preparing to wait for external event network-vif-plugged-be140ce9-b433-44cc-867c-3beea67a67d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.126 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "0f79507c-8a44-47f3-a461-6377eed67521-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.126 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.126 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.127 239969 DEBUG nova.virt.libvirt.vif [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1795104862',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1795104862',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-gen',id=147,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNmPYHlpHuZpOeZCl7BHQ216cd8u50ZxFCXW833oHGB4gBbAp9eFvd6P3qjkcGf53JTK0QFiWuKcgJqP7NdWwDFlJ24rb8FRQzo1dBCyWpi8odGazdOwXvkLXo0bhtlZwA==',key_name='tempest-TestSecurityGroupsBasicOps-1082581796',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-bf1n0ojq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:27:58Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=0f79507c-8a44-47f3-a461-6377eed67521,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.127 239969 DEBUG nova.network.os_vif_util [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.128 239969 DEBUG nova.network.os_vif_util [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a8:5d,bridge_name='br-int',has_traffic_filtering=True,id=be140ce9-b433-44cc-867c-3beea67a67d2,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe140ce9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.128 239969 DEBUG os_vif [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a8:5d,bridge_name='br-int',has_traffic_filtering=True,id=be140ce9-b433-44cc-867c-3beea67a67d2,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe140ce9-b4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.132 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.133 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.133 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.136 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.137 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe140ce9-b4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.137 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe140ce9-b4, col_values=(('external_ids', {'iface-id': 'be140ce9-b433-44cc-867c-3beea67a67d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:a8:5d', 'vm-uuid': '0f79507c-8a44-47f3-a461-6377eed67521'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 NetworkManager[48954]: <info>  [1769444885.1409] manager: (tapbe140ce9-b4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.141 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.148 239969 INFO os_vif [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:a8:5d,bridge_name='br-int',has_traffic_filtering=True,id=be140ce9-b433-44cc-867c-3beea67a67d2,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe140ce9-b4')
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.156 239969 DEBUG nova.compute.manager [req-f16b3c92-fe19-42bf-bc25-9df5529db269 req-ee5618ca-d909-422a-bc04-6f724896bb20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received event network-changed-30d9e550-982e-4e2c-9ee0-755cad494e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.156 239969 DEBUG nova.compute.manager [req-f16b3c92-fe19-42bf-bc25-9df5529db269 req-ee5618ca-d909-422a-bc04-6f724896bb20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Refreshing instance network info cache due to event network-changed-30d9e550-982e-4e2c-9ee0-755cad494e41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.157 239969 DEBUG oslo_concurrency.lockutils [req-f16b3c92-fe19-42bf-bc25-9df5529db269 req-ee5618ca-d909-422a-bc04-6f724896bb20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.157 239969 DEBUG oslo_concurrency.lockutils [req-f16b3c92-fe19-42bf-bc25-9df5529db269 req-ee5618ca-d909-422a-bc04-6f724896bb20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.158 239969 DEBUG nova.network.neutron [req-f16b3c92-fe19-42bf-bc25-9df5529db269 req-ee5618ca-d909-422a-bc04-6f724896bb20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Refreshing network info cache for port 30d9e550-982e-4e2c-9ee0-755cad494e41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.240 239969 DEBUG oslo_concurrency.lockutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "39a41999-3c56-4679-8c57-9286dc4edbb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.241 239969 DEBUG oslo_concurrency.lockutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.241 239969 DEBUG oslo_concurrency.lockutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.241 239969 DEBUG oslo_concurrency.lockutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.241 239969 DEBUG oslo_concurrency.lockutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.243 239969 INFO nova.compute.manager [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Terminating instance
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.244 239969 DEBUG nova.compute.manager [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.248 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.249 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.249 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] No VIF found with MAC fa:16:3e:a5:a8:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.250 239969 INFO nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Using config drive
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.272 239969 DEBUG nova.storage.rbd_utils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 0f79507c-8a44-47f3-a461-6377eed67521_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:05 compute-0 ceph-mon[75140]: osdmap e313: 3 total, 3 up, 3 in
Jan 26 16:28:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/996535720' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:05 compute-0 ceph-mon[75140]: pgmap v2331: 305 pgs: 305 active+clean; 495 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 391 KiB/s rd, 2.9 MiB/s wr, 155 op/s
Jan 26 16:28:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2304879583' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:05 compute-0 kernel: tap30d9e550-98 (unregistering): left promiscuous mode
Jan 26 16:28:05 compute-0 NetworkManager[48954]: <info>  [1769444885.3196] device (tap30d9e550-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:28:05 compute-0 ovn_controller[146046]: 2026-01-26T16:28:05Z|01556|binding|INFO|Releasing lport 30d9e550-982e-4e2c-9ee0-755cad494e41 from this chassis (sb_readonly=0)
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.330 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 ovn_controller[146046]: 2026-01-26T16:28:05Z|01557|binding|INFO|Setting lport 30d9e550-982e-4e2c-9ee0-755cad494e41 down in Southbound
Jan 26 16:28:05 compute-0 ovn_controller[146046]: 2026-01-26T16:28:05Z|01558|binding|INFO|Removing iface tap30d9e550-98 ovn-installed in OVS
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.332 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.337 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:8a:76 10.100.0.13'], port_security=['fa:16:3e:5d:8a:76 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '39a41999-3c56-4679-8c57-9286dc4edbb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01b6ba29-8317-46fc-85a5-35c029f2796c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '02ec8c20b77c4ce9b1406d858bbcf14d', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4f55238b-e3e6-4801-8e90-114accc75c46', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3d1802e6-be6e-40d0-8709-a6af00654d79, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=30d9e550-982e-4e2c-9ee0-755cad494e41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.338 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 30d9e550-982e-4e2c-9ee0-755cad494e41 in datapath 01b6ba29-8317-46fc-85a5-35c029f2796c unbound from our chassis
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.339 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01b6ba29-8317-46fc-85a5-35c029f2796c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.341 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e7c373b-7d8e-4a66-9b09-36c999fbc8ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.343 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c namespace which is not needed anymore
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.346 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Jan 26 16:28:05 compute-0 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d0000008c.scope: Consumed 19.818s CPU time.
Jan 26 16:28:05 compute-0 systemd-machined[208061]: Machine qemu-170-instance-0000008c terminated.
Jan 26 16:28:05 compute-0 sudo[368115]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.411 239969 DEBUG nova.network.neutron [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Updated VIF entry in instance network info cache for port 307ae46b-ff90-469d-8239-59ac4ce405cd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.412 239969 DEBUG nova.network.neutron [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Updating instance_info_cache with network_info: [{"id": "307ae46b-ff90-469d-8239-59ac4ce405cd", "address": "fa:16:3e:36:2b:9c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap307ae46b-ff", "ovs_interfaceid": "307ae46b-ff90-469d-8239-59ac4ce405cd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.427 239969 DEBUG oslo_concurrency.lockutils [req-b073b7b3-c017-4402-a17b-09488f5fcb89 req-3b24679d-2bc8-42f2-928b-6e7abe2afa55 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-02c17ea9-0fbb-4beb-bc93-b0339e8a1048" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:28:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:28:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:28:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:28:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:28:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:28:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:28:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:28:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:28:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:28:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:28:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:28:05 compute-0 neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c[363159]: [NOTICE]   (363163) : haproxy version is 2.8.14-c23fe91
Jan 26 16:28:05 compute-0 neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c[363159]: [NOTICE]   (363163) : path to executable is /usr/sbin/haproxy
Jan 26 16:28:05 compute-0 neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c[363159]: [WARNING]  (363163) : Exiting Master process...
Jan 26 16:28:05 compute-0 neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c[363159]: [ALERT]    (363163) : Current worker (363165) exited with code 143 (Terminated)
Jan 26 16:28:05 compute-0 neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c[363159]: [WARNING]  (363163) : All workers exited. Exiting... (0)
Jan 26 16:28:05 compute-0 systemd[1]: libpod-ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1.scope: Deactivated successfully.
Jan 26 16:28:05 compute-0 podman[368248]: 2026-01-26 16:28:05.482875029 +0000 UTC m=+0.044036890 container died ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.510 239969 INFO nova.virt.libvirt.driver [-] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Instance destroyed successfully.
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.510 239969 DEBUG nova.objects.instance [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lazy-loading 'resources' on Instance uuid 39a41999-3c56-4679-8c57-9286dc4edbb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1-userdata-shm.mount: Deactivated successfully.
Jan 26 16:28:05 compute-0 sudo[368261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:28:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-de02d51b3bb6302ad391739301de40fc615c91a44d40359213d946a05c9050c7-merged.mount: Deactivated successfully.
Jan 26 16:28:05 compute-0 sudo[368261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:28:05 compute-0 sudo[368261]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:05 compute-0 podman[368248]: 2026-01-26 16:28:05.529766037 +0000 UTC m=+0.090927898 container cleanup ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.528 239969 DEBUG nova.virt.libvirt.vif [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:25:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-855823578',display_name='tempest-TestSnapshotPattern-server-855823578',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-855823578',id=140,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDbFIdmmD0YOLT/gnQMrV89ZkLND1rAgrbfF3qbw6dEO95K+VelVpcFet15bYd/xM6RQfQxT61ExvAJRv5XtJWc2bRBlJ7p2k0AnC8pnTtVi312SPdjQj+CEHqI1btz5HA==',key_name='tempest-TestSnapshotPattern-544616432',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:26:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='02ec8c20b77c4ce9b1406d858bbcf14d',ramdisk_id='',reservation_id='r-92eb2iek',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-616132540',owner_user_name='tempest-TestSnapshotPattern-616132540-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:26:52Z,user_data=None,user_id='915d03d871484dc39e2d074d97f24809',uuid=39a41999-3c56-4679-8c57-9286dc4edbb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.528 239969 DEBUG nova.network.os_vif_util [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converting VIF {"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.531 239969 DEBUG nova.network.os_vif_util [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=30d9e550-982e-4e2c-9ee0-755cad494e41,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30d9e550-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.532 239969 DEBUG os_vif [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=30d9e550-982e-4e2c-9ee0-755cad494e41,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30d9e550-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.533 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.534 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap30d9e550-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.537 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.538 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:05 compute-0 systemd[1]: libpod-conmon-ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1.scope: Deactivated successfully.
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.570 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.576 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:28:05 compute-0 sudo[368311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.579 239969 INFO os_vif [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5d:8a:76,bridge_name='br-int',has_traffic_filtering=True,id=30d9e550-982e-4e2c-9ee0-755cad494e41,network=Network(01b6ba29-8317-46fc-85a5-35c029f2796c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap30d9e550-98')
Jan 26 16:28:05 compute-0 sudo[368311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:28:05 compute-0 podman[368320]: 2026-01-26 16:28:05.599228599 +0000 UTC m=+0.044538222 container remove ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.607 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[50d555e7-8467-4b0d-9002-607e9711fdd0]: (4, ('Mon Jan 26 04:28:05 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c (ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1)\nba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1\nMon Jan 26 04:28:05 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c (ba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1)\nba080b9cc856817eb1c5599250c7320f892f450f208f6a8560b0d8b446fcf6d1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.609 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[91fd2b6f-8a89-48f3-a607-5dcfff41f38a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.610 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01b6ba29-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:05 compute-0 kernel: tap01b6ba29-80: left promiscuous mode
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.613 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.630 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.634 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[52bbedc2-6b11-4957-887c-47d94de1cbe6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.649 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[843a8a4b-488c-4f4f-bb83-49ca4392993a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.651 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6cf6e990-2f25-4239-9e2a-2f3f68704c6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.668 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[59f4ded8-3db8-42a9-848f-b1ae015cecd7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 626008, 'reachable_time': 28259, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368375, 'error': None, 'target': 'ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.671 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01b6ba29-8317-46fc-85a5-35c029f2796c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:28:05 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:05.671 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[8fe9f047-5de6-4d95-9d0d-29f81fe66aa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:05 compute-0 systemd[1]: run-netns-ovnmeta\x2d01b6ba29\x2d8317\x2d46fc\x2d85a5\x2d35c029f2796c.mount: Deactivated successfully.
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.684 239969 DEBUG nova.network.neutron [-] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.704 239969 INFO nova.compute.manager [-] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Took 1.57 seconds to deallocate network for instance.
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.744 239969 DEBUG oslo_concurrency.lockutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.744 239969 DEBUG oslo_concurrency.lockutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.778 239969 DEBUG nova.scheduler.client.report [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.783 239969 DEBUG nova.compute.manager [req-66c1b664-3eef-43fe-a042-2f8c420b8a88 req-53de1682-f1b7-4096-a54f-02354d7b5952 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Received event network-vif-deleted-307ae46b-ff90-469d-8239-59ac4ce405cd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.795 239969 DEBUG nova.scheduler.client.report [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.796 239969 DEBUG nova.compute.provider_tree [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.810 239969 DEBUG nova.scheduler.client.report [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.830 239969 DEBUG nova.scheduler.client.report [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.868 239969 INFO nova.virt.libvirt.driver [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Deleting instance files /var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6_del
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.870 239969 INFO nova.virt.libvirt.driver [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Deletion of /var/lib/nova/instances/39a41999-3c56-4679-8c57-9286dc4edbb6_del complete
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.937 239969 INFO nova.compute.manager [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Took 0.69 seconds to destroy the instance on the hypervisor.
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.938 239969 DEBUG oslo.service.loopingcall [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.938 239969 DEBUG nova.compute.manager [-] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.938 239969 DEBUG nova.network.neutron [-] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:28:05 compute-0 podman[368408]: 2026-01-26 16:28:05.950815589 +0000 UTC m=+0.071629426 container create 36b4bde479cfe610f1619423a7a9a8c4e1b637748c693980b444b6bac4e03a46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_feistel, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.970 239969 INFO nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Creating config drive at /var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521/disk.config
Jan 26 16:28:05 compute-0 nova_compute[239965]: 2026-01-26 16:28:05.976 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1nh_s63y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:05 compute-0 systemd[1]: Started libpod-conmon-36b4bde479cfe610f1619423a7a9a8c4e1b637748c693980b444b6bac4e03a46.scope.
Jan 26 16:28:06 compute-0 podman[368408]: 2026-01-26 16:28:05.917634716 +0000 UTC m=+0.038448593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.014 239969 DEBUG nova.network.neutron [req-2314da0f-af0a-44a5-a677-cd12866b6a1e req-6b33146a-3f89-4f2f-8a4d-30117e40e387 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Updated VIF entry in instance network info cache for port be140ce9-b433-44cc-867c-3beea67a67d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.016 239969 DEBUG nova.network.neutron [req-2314da0f-af0a-44a5-a677-cd12866b6a1e req-6b33146a-3f89-4f2f-8a4d-30117e40e387 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Updating instance_info_cache with network_info: [{"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.019 239969 DEBUG oslo_concurrency.processutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:06 compute-0 podman[368408]: 2026-01-26 16:28:06.057529862 +0000 UTC m=+0.178343769 container init 36b4bde479cfe610f1619423a7a9a8c4e1b637748c693980b444b6bac4e03a46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_feistel, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:28:06 compute-0 podman[368408]: 2026-01-26 16:28:06.070899719 +0000 UTC m=+0.191713556 container start 36b4bde479cfe610f1619423a7a9a8c4e1b637748c693980b444b6bac4e03a46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_feistel, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:28:06 compute-0 podman[368408]: 2026-01-26 16:28:06.074630161 +0000 UTC m=+0.195444078 container attach 36b4bde479cfe610f1619423a7a9a8c4e1b637748c693980b444b6bac4e03a46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_feistel, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.077 239969 DEBUG oslo_concurrency.lockutils [req-2314da0f-af0a-44a5-a677-cd12866b6a1e req-6b33146a-3f89-4f2f-8a4d-30117e40e387 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0f79507c-8a44-47f3-a461-6377eed67521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:06 compute-0 elastic_feistel[368425]: 167 167
Jan 26 16:28:06 compute-0 systemd[1]: libpod-36b4bde479cfe610f1619423a7a9a8c4e1b637748c693980b444b6bac4e03a46.scope: Deactivated successfully.
Jan 26 16:28:06 compute-0 podman[368408]: 2026-01-26 16:28:06.09011081 +0000 UTC m=+0.210924647 container died 36b4bde479cfe610f1619423a7a9a8c4e1b637748c693980b444b6bac4e03a46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_feistel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 16:28:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:28:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4207942285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6f2269076be1fd34c45a3961b337539e58faea21ddac38f7d4ea82beb39f8aa-merged.mount: Deactivated successfully.
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.121 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1nh_s63y" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:06 compute-0 podman[368408]: 2026-01-26 16:28:06.135479472 +0000 UTC m=+0.256293299 container remove 36b4bde479cfe610f1619423a7a9a8c4e1b637748c693980b444b6bac4e03a46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:28:06 compute-0 systemd[1]: libpod-conmon-36b4bde479cfe610f1619423a7a9a8c4e1b637748c693980b444b6bac4e03a46.scope: Deactivated successfully.
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.148 239969 DEBUG nova.storage.rbd_utils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] rbd image 0f79507c-8a44-47f3-a461-6377eed67521_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.152 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521/disk.config 0f79507c-8a44-47f3-a461-6377eed67521_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.186 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.648s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.280 239969 DEBUG oslo_concurrency.processutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521/disk.config 0f79507c-8a44-47f3-a461-6377eed67521_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.281 239969 INFO nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Deleting local config drive /var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521/disk.config because it was imported into RBD.
Jan 26 16:28:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:28:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:28:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:28:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:28:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:28:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:28:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4207942285' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:06 compute-0 kernel: tapbe140ce9-b4: entered promiscuous mode
Jan 26 16:28:06 compute-0 NetworkManager[48954]: <info>  [1769444886.3344] manager: (tapbe140ce9-b4): new Tun device (/org/freedesktop/NetworkManager/Devices/633)
Jan 26 16:28:06 compute-0 systemd-udevd[368228]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.335 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:06 compute-0 ovn_controller[146046]: 2026-01-26T16:28:06Z|01559|binding|INFO|Claiming lport be140ce9-b433-44cc-867c-3beea67a67d2 for this chassis.
Jan 26 16:28:06 compute-0 ovn_controller[146046]: 2026-01-26T16:28:06Z|01560|binding|INFO|be140ce9-b433-44cc-867c-3beea67a67d2: Claiming fa:16:3e:a5:a8:5d 10.100.0.13
Jan 26 16:28:06 compute-0 NetworkManager[48954]: <info>  [1769444886.3469] device (tapbe140ce9-b4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:28:06 compute-0 NetworkManager[48954]: <info>  [1769444886.3474] device (tapbe140ce9-b4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:28:06 compute-0 ovn_controller[146046]: 2026-01-26T16:28:06Z|01561|binding|INFO|Setting lport be140ce9-b433-44cc-867c-3beea67a67d2 ovn-installed in OVS
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.362 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.364 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:06 compute-0 podman[368510]: 2026-01-26 16:28:06.366785396 +0000 UTC m=+0.060527024 container create 9fd9244f80694f5c037be80db114b0e977c1fdc124f393f3fbc21d3a29fc8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_turing, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:28:06 compute-0 systemd[1]: Started libpod-conmon-9fd9244f80694f5c037be80db114b0e977c1fdc124f393f3fbc21d3a29fc8abd.scope.
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.423 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a8:5d 10.100.0.13'], port_security=['fa:16:3e:a5:a8:5d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0f79507c-8a44-47f3-a461-6377eed67521', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8ca05add-5079-41f1-84cc-371e0b63d227', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7416312-6ee5-4c60-a0ef-0cc203c3bf4f, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=be140ce9-b433-44cc-867c-3beea67a67d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:06 compute-0 podman[368510]: 2026-01-26 16:28:06.349992725 +0000 UTC m=+0.043734353 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.425 156105 INFO neutron.agent.ovn.metadata.agent [-] Port be140ce9-b433-44cc-867c-3beea67a67d2 in datapath fed7ea2b-3239-4a92-a662-8de7d6a5b54f bound to our chassis
Jan 26 16:28:06 compute-0 ovn_controller[146046]: 2026-01-26T16:28:06Z|01562|binding|INFO|Setting lport be140ce9-b433-44cc-867c-3beea67a67d2 up in Southbound
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.428 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fed7ea2b-3239-4a92-a662-8de7d6a5b54f
Jan 26 16:28:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca46ccd915bbe6df07a60ff8a5d5a7825f6428ee3ceea22dca4f55b8d2f9dfb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca46ccd915bbe6df07a60ff8a5d5a7825f6428ee3ceea22dca4f55b8d2f9dfb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:06 compute-0 systemd[1]: Started Virtual Machine qemu-178-instance-00000093.
Jan 26 16:28:06 compute-0 systemd-machined[208061]: New machine qemu-178-instance-00000093.
Jan 26 16:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca46ccd915bbe6df07a60ff8a5d5a7825f6428ee3ceea22dca4f55b8d2f9dfb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca46ccd915bbe6df07a60ff8a5d5a7825f6428ee3ceea22dca4f55b8d2f9dfb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.449 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[edf86c03-ce13-426c-be21-84883b8fb1b0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca46ccd915bbe6df07a60ff8a5d5a7825f6428ee3ceea22dca4f55b8d2f9dfb9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:06 compute-0 podman[368510]: 2026-01-26 16:28:06.461741982 +0000 UTC m=+0.155483630 container init 9fd9244f80694f5c037be80db114b0e977c1fdc124f393f3fbc21d3a29fc8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_turing, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:28:06 compute-0 podman[368510]: 2026-01-26 16:28:06.475035837 +0000 UTC m=+0.168777455 container start 9fd9244f80694f5c037be80db114b0e977c1fdc124f393f3fbc21d3a29fc8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_turing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:28:06 compute-0 podman[368510]: 2026-01-26 16:28:06.484702434 +0000 UTC m=+0.178444092 container attach 9fd9244f80694f5c037be80db114b0e977c1fdc124f393f3fbc21d3a29fc8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.485 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d01b274f-bdcb-44a1-a1b2-02dbc954dfa5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.489 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[adecf791-0f2d-4ce8-be0d-0cb5d1644516]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.499 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.499 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.505 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.505 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.509 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.509 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.531 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b793fb76-484c-4ff1-88f3-c1a7523e7485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.554 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f87fdaf-4756-4303-994c-5a6823cd12ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfed7ea2b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:39:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 439], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632646, 'reachable_time': 26816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 368555, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.592 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[eb651bff-cc35-41b5-8fc1-1b23840568c6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfed7ea2b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632659, 'tstamp': 632659}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368557, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfed7ea2b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632662, 'tstamp': 632662}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 368557, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.594 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfed7ea2b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.596 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.598 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.598 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfed7ea2b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.598 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.599 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfed7ea2b-30, col_values=(('external_ids', {'iface-id': '1ca00a90-6e42-40f0-a57b-b46691113c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:06.599 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:28:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660629916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.682 239969 DEBUG oslo_concurrency.processutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.690 239969 DEBUG nova.compute.provider_tree [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.705 239969 DEBUG nova.scheduler.client.report [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:28:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 2.7 MiB/s wr, 137 op/s
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.725 239969 DEBUG oslo_concurrency.lockutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.758 239969 INFO nova.scheduler.client.report [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance 02c17ea9-0fbb-4beb-bc93-b0339e8a1048
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.825 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.826 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2974MB free_disk=59.81901144981384GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.827 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.827 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.830 239969 DEBUG oslo_concurrency.lockutils [None req-22525792-cfd8-4500-ad53-3c74701e5701 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "02c17ea9-0fbb-4beb-bc93-b0339e8a1048" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.903 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 39a41999-3c56-4679-8c57-9286dc4edbb6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.904 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 759e14a7-49ae-4f6f-bb96-80fc691d50e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.904 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance b4e64366-a873-483b-a2b9-3d3afc79f7d4 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.905 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 0f79507c-8a44-47f3-a461-6377eed67521 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.905 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.906 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.974 239969 DEBUG nova.network.neutron [req-f16b3c92-fe19-42bf-bc25-9df5529db269 req-ee5618ca-d909-422a-bc04-6f724896bb20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updated VIF entry in instance network info cache for port 30d9e550-982e-4e2c-9ee0-755cad494e41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.974 239969 DEBUG nova.network.neutron [req-f16b3c92-fe19-42bf-bc25-9df5529db269 req-ee5618ca-d909-422a-bc04-6f724896bb20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updating instance_info_cache with network_info: [{"id": "30d9e550-982e-4e2c-9ee0-755cad494e41", "address": "fa:16:3e:5d:8a:76", "network": {"id": "01b6ba29-8317-46fc-85a5-35c029f2796c", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-1995687001-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "02ec8c20b77c4ce9b1406d858bbcf14d", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap30d9e550-98", "ovs_interfaceid": "30d9e550-982e-4e2c-9ee0-755cad494e41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:06 compute-0 nova_compute[239965]: 2026-01-26 16:28:06.998 239969 DEBUG oslo_concurrency.lockutils [req-f16b3c92-fe19-42bf-bc25-9df5529db269 req-ee5618ca-d909-422a-bc04-6f724896bb20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-39a41999-3c56-4679-8c57-9286dc4edbb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.022 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.089 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444887.0881343, 0f79507c-8a44-47f3-a461-6377eed67521 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:07 compute-0 compassionate_turing[368537]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:28:07 compute-0 compassionate_turing[368537]: --> All data devices are unavailable
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.090 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] VM Started (Lifecycle Event)
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.115 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.128 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444887.089959, 0f79507c-8a44-47f3-a461-6377eed67521 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.128 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] VM Paused (Lifecycle Event)
Jan 26 16:28:07 compute-0 systemd[1]: libpod-9fd9244f80694f5c037be80db114b0e977c1fdc124f393f3fbc21d3a29fc8abd.scope: Deactivated successfully.
Jan 26 16:28:07 compute-0 podman[368510]: 2026-01-26 16:28:07.140622066 +0000 UTC m=+0.834363674 container died 9fd9244f80694f5c037be80db114b0e977c1fdc124f393f3fbc21d3a29fc8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_turing, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.152 239969 DEBUG nova.network.neutron [-] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.160 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.165 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:28:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca46ccd915bbe6df07a60ff8a5d5a7825f6428ee3ceea22dca4f55b8d2f9dfb9-merged.mount: Deactivated successfully.
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.186 239969 INFO nova.compute.manager [-] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Took 1.25 seconds to deallocate network for instance.
Jan 26 16:28:07 compute-0 podman[368510]: 2026-01-26 16:28:07.193311567 +0000 UTC m=+0.887053175 container remove 9fd9244f80694f5c037be80db114b0e977c1fdc124f393f3fbc21d3a29fc8abd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.196 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:28:07 compute-0 systemd[1]: libpod-conmon-9fd9244f80694f5c037be80db114b0e977c1fdc124f393f3fbc21d3a29fc8abd.scope: Deactivated successfully.
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.246 239969 DEBUG oslo_concurrency.lockutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:07 compute-0 sudo[368311]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.275 239969 DEBUG nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received event network-vif-unplugged-30d9e550-982e-4e2c-9ee0-755cad494e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.275 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.276 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.276 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.276 239969 DEBUG nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] No waiting events found dispatching network-vif-unplugged-30d9e550-982e-4e2c-9ee0-755cad494e41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.276 239969 WARNING nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received unexpected event network-vif-unplugged-30d9e550-982e-4e2c-9ee0-755cad494e41 for instance with vm_state deleted and task_state None.
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.276 239969 DEBUG nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received event network-vif-plugged-30d9e550-982e-4e2c-9ee0-755cad494e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.276 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.277 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.277 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.277 239969 DEBUG nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] No waiting events found dispatching network-vif-plugged-30d9e550-982e-4e2c-9ee0-755cad494e41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.277 239969 WARNING nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received unexpected event network-vif-plugged-30d9e550-982e-4e2c-9ee0-755cad494e41 for instance with vm_state deleted and task_state None.
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.277 239969 DEBUG nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Received event network-vif-plugged-be140ce9-b433-44cc-867c-3beea67a67d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.277 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0f79507c-8a44-47f3-a461-6377eed67521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.278 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.278 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.278 239969 DEBUG nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Processing event network-vif-plugged-be140ce9-b433-44cc-867c-3beea67a67d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.278 239969 DEBUG nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Received event network-vif-plugged-be140ce9-b433-44cc-867c-3beea67a67d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.278 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "0f79507c-8a44-47f3-a461-6377eed67521-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.278 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.279 239969 DEBUG oslo_concurrency.lockutils [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.279 239969 DEBUG nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] No waiting events found dispatching network-vif-plugged-be140ce9-b433-44cc-867c-3beea67a67d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.279 239969 WARNING nova.compute.manager [req-26fa308c-9e3e-499f-b54f-ca2c66a4caa6 req-72e7929b-b893-4401-accd-7220df55dc35 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Received unexpected event network-vif-plugged-be140ce9-b433-44cc-867c-3beea67a67d2 for instance with vm_state building and task_state spawning.
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.279 239969 DEBUG nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.283 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.284 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444887.283642, 0f79507c-8a44-47f3-a461-6377eed67521 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.284 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] VM Resumed (Lifecycle Event)
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.287 239969 INFO nova.virt.libvirt.driver [-] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Instance spawned successfully.
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.288 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:28:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1660629916' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:07 compute-0 ceph-mon[75140]: pgmap v2332: 305 pgs: 305 active+clean; 409 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 2.7 MiB/s wr, 137 op/s
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.312 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.318 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.322 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.322 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.323 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.323 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.324 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.324 239969 DEBUG nova.virt.libvirt.driver [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:07 compute-0 sudo[368648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:28:07 compute-0 sudo[368648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:28:07 compute-0 sudo[368648]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.336 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:28:07 compute-0 sudo[368673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:28:07 compute-0 sudo[368673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.414 239969 INFO nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Took 8.59 seconds to spawn the instance on the hypervisor.
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.414 239969 DEBUG nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.474 239969 INFO nova.compute.manager [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Took 9.79 seconds to build instance.
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.496 239969 DEBUG oslo_concurrency.lockutils [None req-f4eebeef-e359-4522-b215-9544fd273bb3 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:28:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/103508854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.594 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.601 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.619 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.669 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.669 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.670 239969 DEBUG oslo_concurrency.lockutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:07 compute-0 podman[368711]: 2026-01-26 16:28:07.70426964 +0000 UTC m=+0.043947466 container create 1b5f6eef49d09f78e1ac4e057cba708978a1a6bbbdb13d46a87f35f909e86c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:28:07 compute-0 systemd[1]: Started libpod-conmon-1b5f6eef49d09f78e1ac4e057cba708978a1a6bbbdb13d46a87f35f909e86c64.scope.
Jan 26 16:28:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:07 compute-0 podman[368711]: 2026-01-26 16:28:07.687793617 +0000 UTC m=+0.027471493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:28:07 compute-0 nova_compute[239965]: 2026-01-26 16:28:07.790 239969 DEBUG oslo_concurrency.processutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:07 compute-0 podman[368711]: 2026-01-26 16:28:07.939107412 +0000 UTC m=+0.278785278 container init 1b5f6eef49d09f78e1ac4e057cba708978a1a6bbbdb13d46a87f35f909e86c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:28:07 compute-0 podman[368711]: 2026-01-26 16:28:07.954019786 +0000 UTC m=+0.293697652 container start 1b5f6eef49d09f78e1ac4e057cba708978a1a6bbbdb13d46a87f35f909e86c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_archimedes, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:28:07 compute-0 focused_archimedes[368727]: 167 167
Jan 26 16:28:07 compute-0 systemd[1]: libpod-1b5f6eef49d09f78e1ac4e057cba708978a1a6bbbdb13d46a87f35f909e86c64.scope: Deactivated successfully.
Jan 26 16:28:07 compute-0 podman[368711]: 2026-01-26 16:28:07.966642296 +0000 UTC m=+0.306320142 container attach 1b5f6eef49d09f78e1ac4e057cba708978a1a6bbbdb13d46a87f35f909e86c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_archimedes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 16:28:07 compute-0 podman[368711]: 2026-01-26 16:28:07.967154369 +0000 UTC m=+0.306832205 container died 1b5f6eef49d09f78e1ac4e057cba708978a1a6bbbdb13d46a87f35f909e86c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:28:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-012b7553614447357513ba0668d920666cd4051215f9dddd731de4140977c63b-merged.mount: Deactivated successfully.
Jan 26 16:28:08 compute-0 podman[368711]: 2026-01-26 16:28:08.020466674 +0000 UTC m=+0.360144540 container remove 1b5f6eef49d09f78e1ac4e057cba708978a1a6bbbdb13d46a87f35f909e86c64 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:28:08 compute-0 nova_compute[239965]: 2026-01-26 16:28:08.020 239969 DEBUG nova.compute.manager [req-5292fa92-2f00-4e65-8a3e-19a6bd95bb07 req-7be60d48-4e85-4d85-804c-3574042ac838 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Received event network-vif-deleted-30d9e550-982e-4e2c-9ee0-755cad494e41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:08 compute-0 systemd[1]: libpod-conmon-1b5f6eef49d09f78e1ac4e057cba708978a1a6bbbdb13d46a87f35f909e86c64.scope: Deactivated successfully.
Jan 26 16:28:08 compute-0 nova_compute[239965]: 2026-01-26 16:28:08.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:08 compute-0 podman[368771]: 2026-01-26 16:28:08.217355835 +0000 UTC m=+0.025208578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:28:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:28:08 compute-0 podman[368771]: 2026-01-26 16:28:08.460275495 +0000 UTC m=+0.268128228 container create 50ebec7ac28519c8ca92f00888cf785578d41d59ba8e56d7a4095a7b4bc22c2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:28:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/103508854' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1427607975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:08 compute-0 nova_compute[239965]: 2026-01-26 16:28:08.497 239969 DEBUG oslo_concurrency.processutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.706s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:08 compute-0 nova_compute[239965]: 2026-01-26 16:28:08.506 239969 DEBUG nova.compute.provider_tree [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:28:08 compute-0 systemd[1]: Started libpod-conmon-50ebec7ac28519c8ca92f00888cf785578d41d59ba8e56d7a4095a7b4bc22c2a.scope.
Jan 26 16:28:08 compute-0 nova_compute[239965]: 2026-01-26 16:28:08.523 239969 DEBUG nova.scheduler.client.report [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:28:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e57a6680d304e36f470d5611f08e9113df76e6b528b291ab9436924e01c585d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:08 compute-0 nova_compute[239965]: 2026-01-26 16:28:08.548 239969 DEBUG oslo_concurrency.lockutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e57a6680d304e36f470d5611f08e9113df76e6b528b291ab9436924e01c585d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e57a6680d304e36f470d5611f08e9113df76e6b528b291ab9436924e01c585d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e57a6680d304e36f470d5611f08e9113df76e6b528b291ab9436924e01c585d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:08 compute-0 podman[368771]: 2026-01-26 16:28:08.566886396 +0000 UTC m=+0.374739129 container init 50ebec7ac28519c8ca92f00888cf785578d41d59ba8e56d7a4095a7b4bc22c2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_williamson, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 16:28:08 compute-0 nova_compute[239965]: 2026-01-26 16:28:08.575 239969 INFO nova.scheduler.client.report [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Deleted allocations for instance 39a41999-3c56-4679-8c57-9286dc4edbb6
Jan 26 16:28:08 compute-0 podman[368771]: 2026-01-26 16:28:08.578645493 +0000 UTC m=+0.386498246 container start 50ebec7ac28519c8ca92f00888cf785578d41d59ba8e56d7a4095a7b4bc22c2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 16:28:08 compute-0 podman[368771]: 2026-01-26 16:28:08.582880417 +0000 UTC m=+0.390733140 container attach 50ebec7ac28519c8ca92f00888cf785578d41d59ba8e56d7a4095a7b4bc22c2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_williamson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 16:28:08 compute-0 nova_compute[239965]: 2026-01-26 16:28:08.677 239969 DEBUG oslo_concurrency.lockutils [None req-125ef22b-025b-46f9-9c49-82a314d18c06 915d03d871484dc39e2d074d97f24809 02ec8c20b77c4ce9b1406d858bbcf14d - - default default] Lock "39a41999-3c56-4679-8c57-9286dc4edbb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 2.7 MiB/s wr, 201 op/s
Jan 26 16:28:08 compute-0 nova_compute[239965]: 2026-01-26 16:28:08.874 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:08 compute-0 nifty_williamson[368790]: {
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:     "0": [
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:         {
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "devices": [
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "/dev/loop3"
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             ],
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_name": "ceph_lv0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_size": "21470642176",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "name": "ceph_lv0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "tags": {
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.cluster_name": "ceph",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.crush_device_class": "",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.encrypted": "0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.objectstore": "bluestore",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.osd_id": "0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.type": "block",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.vdo": "0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.with_tpm": "0"
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             },
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "type": "block",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "vg_name": "ceph_vg0"
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:         }
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:     ],
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:     "1": [
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:         {
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "devices": [
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "/dev/loop4"
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             ],
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_name": "ceph_lv1",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_size": "21470642176",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "name": "ceph_lv1",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "tags": {
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.cluster_name": "ceph",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.crush_device_class": "",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.encrypted": "0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.objectstore": "bluestore",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.osd_id": "1",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.type": "block",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.vdo": "0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.with_tpm": "0"
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             },
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "type": "block",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "vg_name": "ceph_vg1"
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:         }
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:     ],
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:     "2": [
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:         {
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "devices": [
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "/dev/loop5"
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             ],
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_name": "ceph_lv2",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_size": "21470642176",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "name": "ceph_lv2",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "tags": {
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.cluster_name": "ceph",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.crush_device_class": "",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.encrypted": "0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.objectstore": "bluestore",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.osd_id": "2",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.type": "block",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.vdo": "0",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:                 "ceph.with_tpm": "0"
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             },
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "type": "block",
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:             "vg_name": "ceph_vg2"
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:         }
Jan 26 16:28:08 compute-0 nifty_williamson[368790]:     ]
Jan 26 16:28:08 compute-0 nifty_williamson[368790]: }
Jan 26 16:28:08 compute-0 systemd[1]: libpod-50ebec7ac28519c8ca92f00888cf785578d41d59ba8e56d7a4095a7b4bc22c2a.scope: Deactivated successfully.
Jan 26 16:28:08 compute-0 podman[368771]: 2026-01-26 16:28:08.956670451 +0000 UTC m=+0.764523164 container died 50ebec7ac28519c8ca92f00888cf785578d41d59ba8e56d7a4095a7b4bc22c2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_williamson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:28:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e57a6680d304e36f470d5611f08e9113df76e6b528b291ab9436924e01c585d-merged.mount: Deactivated successfully.
Jan 26 16:28:09 compute-0 podman[368771]: 2026-01-26 16:28:09.082170305 +0000 UTC m=+0.890023018 container remove 50ebec7ac28519c8ca92f00888cf785578d41d59ba8e56d7a4095a7b4bc22c2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 16:28:09 compute-0 systemd[1]: libpod-conmon-50ebec7ac28519c8ca92f00888cf785578d41d59ba8e56d7a4095a7b4bc22c2a.scope: Deactivated successfully.
Jan 26 16:28:09 compute-0 sudo[368673]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:09 compute-0 sudo[368813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:28:09 compute-0 sudo[368813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:28:09 compute-0 sudo[368813]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Jan 26 16:28:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Jan 26 16:28:09 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Jan 26 16:28:09 compute-0 sudo[368838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:28:09 compute-0 sudo[368838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:28:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1427607975' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:09 compute-0 ceph-mon[75140]: pgmap v2333: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 134 KiB/s rd, 2.7 MiB/s wr, 201 op/s
Jan 26 16:28:09 compute-0 ceph-mon[75140]: osdmap e314: 3 total, 3 up, 3 in
Jan 26 16:28:09 compute-0 podman[368875]: 2026-01-26 16:28:09.64870558 +0000 UTC m=+0.052034556 container create d9a3ae79b25970ee1e6684bd973dde1aa64f2ac818520cdd240f09f7e1151eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_haibt, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:28:09 compute-0 systemd[1]: Started libpod-conmon-d9a3ae79b25970ee1e6684bd973dde1aa64f2ac818520cdd240f09f7e1151eb2.scope.
Jan 26 16:28:09 compute-0 podman[368875]: 2026-01-26 16:28:09.625964602 +0000 UTC m=+0.029293588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:28:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:09 compute-0 podman[368875]: 2026-01-26 16:28:09.762789513 +0000 UTC m=+0.166118479 container init d9a3ae79b25970ee1e6684bd973dde1aa64f2ac818520cdd240f09f7e1151eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:28:09 compute-0 podman[368875]: 2026-01-26 16:28:09.769643121 +0000 UTC m=+0.172972097 container start d9a3ae79b25970ee1e6684bd973dde1aa64f2ac818520cdd240f09f7e1151eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_haibt, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:28:09 compute-0 podman[368875]: 2026-01-26 16:28:09.774289534 +0000 UTC m=+0.177618480 container attach d9a3ae79b25970ee1e6684bd973dde1aa64f2ac818520cdd240f09f7e1151eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:28:09 compute-0 gallant_haibt[368892]: 167 167
Jan 26 16:28:09 compute-0 systemd[1]: libpod-d9a3ae79b25970ee1e6684bd973dde1aa64f2ac818520cdd240f09f7e1151eb2.scope: Deactivated successfully.
Jan 26 16:28:09 compute-0 podman[368875]: 2026-01-26 16:28:09.780722512 +0000 UTC m=+0.184051518 container died d9a3ae79b25970ee1e6684bd973dde1aa64f2ac818520cdd240f09f7e1151eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_haibt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:28:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5aaa765a42a1bb23f7db469bf4b53d3218f1b959c038ca7e17961ba88df5e1c9-merged.mount: Deactivated successfully.
Jan 26 16:28:09 compute-0 podman[368875]: 2026-01-26 16:28:09.83124591 +0000 UTC m=+0.234574856 container remove d9a3ae79b25970ee1e6684bd973dde1aa64f2ac818520cdd240f09f7e1151eb2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_haibt, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:28:09 compute-0 systemd[1]: libpod-conmon-d9a3ae79b25970ee1e6684bd973dde1aa64f2ac818520cdd240f09f7e1151eb2.scope: Deactivated successfully.
Jan 26 16:28:10 compute-0 podman[368916]: 2026-01-26 16:28:10.071470193 +0000 UTC m=+0.046012468 container create e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 16:28:10 compute-0 systemd[1]: Started libpod-conmon-e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15.scope.
Jan 26 16:28:10 compute-0 podman[368916]: 2026-01-26 16:28:10.050802747 +0000 UTC m=+0.025345072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:28:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cedc9b566bed4bbfdc4bf6cc9eccd144e35d11254f6e036e4453ef6d86c5d277/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cedc9b566bed4bbfdc4bf6cc9eccd144e35d11254f6e036e4453ef6d86c5d277/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cedc9b566bed4bbfdc4bf6cc9eccd144e35d11254f6e036e4453ef6d86c5d277/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cedc9b566bed4bbfdc4bf6cc9eccd144e35d11254f6e036e4453ef6d86c5d277/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:10 compute-0 podman[368916]: 2026-01-26 16:28:10.196264059 +0000 UTC m=+0.170806334 container init e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:28:10 compute-0 podman[368916]: 2026-01-26 16:28:10.205815013 +0000 UTC m=+0.180357308 container start e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:28:10 compute-0 podman[368916]: 2026-01-26 16:28:10.209430951 +0000 UTC m=+0.183973286 container attach e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:28:10 compute-0 nova_compute[239965]: 2026-01-26 16:28:10.537 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 29 KiB/s wr, 96 op/s
Jan 26 16:28:10 compute-0 ceph-mon[75140]: pgmap v2335: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 29 KiB/s wr, 96 op/s
Jan 26 16:28:10 compute-0 lvm[369012]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:28:10 compute-0 lvm[369012]: VG ceph_vg1 finished
Jan 26 16:28:10 compute-0 lvm[369011]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:28:10 compute-0 lvm[369011]: VG ceph_vg0 finished
Jan 26 16:28:10 compute-0 lvm[369014]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:28:10 compute-0 lvm[369014]: VG ceph_vg2 finished
Jan 26 16:28:11 compute-0 adoring_bhaskara[368933]: {}
Jan 26 16:28:11 compute-0 systemd[1]: libpod-e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15.scope: Deactivated successfully.
Jan 26 16:28:11 compute-0 podman[368916]: 2026-01-26 16:28:11.106946111 +0000 UTC m=+1.081488396 container died e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:28:11 compute-0 systemd[1]: libpod-e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15.scope: Consumed 1.385s CPU time.
Jan 26 16:28:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-cedc9b566bed4bbfdc4bf6cc9eccd144e35d11254f6e036e4453ef6d86c5d277-merged.mount: Deactivated successfully.
Jan 26 16:28:11 compute-0 podman[368916]: 2026-01-26 16:28:11.161022916 +0000 UTC m=+1.135565201 container remove e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_bhaskara, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:28:11 compute-0 systemd[1]: libpod-conmon-e0421b444a8303a3b146540aeb00264b7d5e7ec0d91071b39b537d0266ec7e15.scope: Deactivated successfully.
Jan 26 16:28:11 compute-0 sudo[368838]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:28:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:28:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:28:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.288 239969 DEBUG oslo_concurrency.lockutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.289 239969 DEBUG oslo_concurrency.lockutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.289 239969 DEBUG oslo_concurrency.lockutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.289 239969 DEBUG oslo_concurrency.lockutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.289 239969 DEBUG oslo_concurrency.lockutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.290 239969 INFO nova.compute.manager [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Terminating instance
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.291 239969 DEBUG nova.compute.manager [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:28:11 compute-0 sudo[369028]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:28:11 compute-0 sudo[369028]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:28:11 compute-0 sudo[369028]: pam_unix(sudo:session): session closed for user root
Jan 26 16:28:11 compute-0 kernel: tap3eb1285a-11 (unregistering): left promiscuous mode
Jan 26 16:28:11 compute-0 NetworkManager[48954]: <info>  [1769444891.3457] device (tap3eb1285a-11): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.359 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:11 compute-0 ovn_controller[146046]: 2026-01-26T16:28:11Z|01563|binding|INFO|Releasing lport 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 from this chassis (sb_readonly=0)
Jan 26 16:28:11 compute-0 ovn_controller[146046]: 2026-01-26T16:28:11Z|01564|binding|INFO|Setting lport 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 down in Southbound
Jan 26 16:28:11 compute-0 ovn_controller[146046]: 2026-01-26T16:28:11Z|01565|binding|INFO|Removing iface tap3eb1285a-11 ovn-installed in OVS
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.362 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.395 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.398 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:66:7c 10.100.0.14'], port_security=['fa:16:3e:5c:66:7c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '759e14a7-49ae-4f6f-bb96-80fc691d50e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'efca55d4-fc3b-40bd-af16-664c70c27cc2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=975e0fb6-2099-45e2-83bb-35528f0f0ffd, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3eb1285a-11fb-4984-b35f-24fe2d9ed6f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.399 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 in datapath 6a673593-4cc1-4b91-8d9d-33cfb52b459e unbound from our chassis
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.400 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6a673593-4cc1-4b91-8d9d-33cfb52b459e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.402 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1fee77ed-1b81-41e4-8d9a-e04fc424c87d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.402 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e namespace which is not needed anymore
Jan 26 16:28:11 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Jan 26 16:28:11 compute-0 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008e.scope: Consumed 16.131s CPU time.
Jan 26 16:28:11 compute-0 systemd-machined[208061]: Machine qemu-172-instance-0000008e terminated.
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.472 239969 DEBUG nova.compute.manager [req-dcceb71f-a7b4-48c5-8495-66659aeca491 req-0d204b14-962c-4c6b-b2a0-a3ed404ae334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.472 239969 DEBUG nova.compute.manager [req-dcceb71f-a7b4-48c5-8495-66659aeca491 req-0d204b14-962c-4c6b-b2a0-a3ed404ae334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing instance network info cache due to event network-changed-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.472 239969 DEBUG oslo_concurrency.lockutils [req-dcceb71f-a7b4-48c5-8495-66659aeca491 req-0d204b14-962c-4c6b-b2a0-a3ed404ae334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.473 239969 DEBUG oslo_concurrency.lockutils [req-dcceb71f-a7b4-48c5-8495-66659aeca491 req-0d204b14-962c-4c6b-b2a0-a3ed404ae334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.473 239969 DEBUG nova.network.neutron [req-dcceb71f-a7b4-48c5-8495-66659aeca491 req-0d204b14-962c-4c6b-b2a0-a3ed404ae334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Refreshing network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:11 compute-0 podman[369058]: 2026-01-26 16:28:11.513253281 +0000 UTC m=+0.065047733 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.536 239969 INFO nova.virt.libvirt.driver [-] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Instance destroyed successfully.
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.537 239969 DEBUG nova.objects.instance [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid 759e14a7-49ae-4f6f-bb96-80fc691d50e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.556 239969 DEBUG nova.virt.libvirt.vif [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:26:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-879095890',display_name='tempest-TestNetworkBasicOps-server-879095890',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-879095890',id=142,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBENptaN+bG/9ejW/M5NuXQVCK6v3MRZr6mcCMzARlGtLJuYDEKZC5KZPEVS7zYqAcRckbC6mYpyXHfX98kGDdQowzIWS03GRgAXz2FaPPXZs9t1xQYzyEzlKdOej6nOsBw==',key_name='tempest-TestNetworkBasicOps-2084098348',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:26:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-j1dz4x1u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:26:52Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=759e14a7-49ae-4f6f-bb96-80fc691d50e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.557 239969 DEBUG nova.network.os_vif_util [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.229", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.557 239969 DEBUG nova.network.os_vif_util [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5c:66:7c,bridge_name='br-int',has_traffic_filtering=True,id=3eb1285a-11fb-4984-b35f-24fe2d9ed6f1,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb1285a-11') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.557 239969 DEBUG os_vif [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:66:7c,bridge_name='br-int',has_traffic_filtering=True,id=3eb1285a-11fb-4984-b35f-24fe2d9ed6f1,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb1285a-11') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.560 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.560 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eb1285a-11, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.562 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.563 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.566 239969 INFO os_vif [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5c:66:7c,bridge_name='br-int',has_traffic_filtering=True,id=3eb1285a-11fb-4984-b35f-24fe2d9ed6f1,network=Network(6a673593-4cc1-4b91-8d9d-33cfb52b459e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb1285a-11')
Jan 26 16:28:11 compute-0 neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e[364832]: [NOTICE]   (364874) : haproxy version is 2.8.14-c23fe91
Jan 26 16:28:11 compute-0 neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e[364832]: [NOTICE]   (364874) : path to executable is /usr/sbin/haproxy
Jan 26 16:28:11 compute-0 neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e[364832]: [WARNING]  (364874) : Exiting Master process...
Jan 26 16:28:11 compute-0 neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e[364832]: [ALERT]    (364874) : Current worker (364876) exited with code 143 (Terminated)
Jan 26 16:28:11 compute-0 neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e[364832]: [WARNING]  (364874) : All workers exited. Exiting... (0)
Jan 26 16:28:11 compute-0 systemd[1]: libpod-fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234.scope: Deactivated successfully.
Jan 26 16:28:11 compute-0 podman[369095]: 2026-01-26 16:28:11.589152171 +0000 UTC m=+0.058790741 container died fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 16:28:11 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234-userdata-shm.mount: Deactivated successfully.
Jan 26 16:28:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-958cbba1913b9e2297b52de2ae83f610b651777137ea9026925977748937e61d-merged.mount: Deactivated successfully.
Jan 26 16:28:11 compute-0 podman[369099]: 2026-01-26 16:28:11.635043224 +0000 UTC m=+0.101792443 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 16:28:11 compute-0 podman[369095]: 2026-01-26 16:28:11.638200222 +0000 UTC m=+0.107838782 container cleanup fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:28:11 compute-0 systemd[1]: libpod-conmon-fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234.scope: Deactivated successfully.
Jan 26 16:28:11 compute-0 podman[369178]: 2026-01-26 16:28:11.706183216 +0000 UTC m=+0.043926256 container remove fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.714 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[22b8b5af-8e60-4cc8-a844-946cb7afa1d9]: (4, ('Mon Jan 26 04:28:11 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e (fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234)\nfd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234\nMon Jan 26 04:28:11 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e (fd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234)\nfd79ea1142f5df62b39bc3bab34a3a6e9541bc0fe4493c1a52cfb91a527a8234\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.715 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab46fa5-7902-4d6a-8d79-97172ab8fe54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.716 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a673593-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.717 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:11 compute-0 kernel: tap6a673593-40: left promiscuous mode
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.740 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c709ba26-34fd-429a-a973-0cd865afadcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.756 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8c0190e4-40f3-4bbb-bd85-926448e9351e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.757 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[df54f1bb-ce54-4f22-8bde-0948b2548a2d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.773 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[08119beb-87ff-4fc3-9c23-f72b66c738e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 630314, 'reachable_time': 41858, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369194, 'error': None, 'target': 'ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.776 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6a673593-4cc1-4b91-8d9d-33cfb52b459e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:28:11 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:11.776 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[c3e70ac3-7e14-4535-b546-ac67afdab2ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:11 compute-0 systemd[1]: run-netns-ovnmeta\x2d6a673593\x2d4cc1\x2d4b91\x2d8d9d\x2d33cfb52b459e.mount: Deactivated successfully.
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.844 239969 INFO nova.virt.libvirt.driver [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Deleting instance files /var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3_del
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.847 239969 INFO nova.virt.libvirt.driver [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Deletion of /var/lib/nova/instances/759e14a7-49ae-4f6f-bb96-80fc691d50e3_del complete
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.898 239969 INFO nova.compute.manager [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Took 0.61 seconds to destroy the instance on the hypervisor.
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.899 239969 DEBUG oslo.service.loopingcall [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.899 239969 DEBUG nova.compute.manager [-] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:28:11 compute-0 nova_compute[239965]: 2026-01-26 16:28:11.899 239969 DEBUG nova.network.neutron [-] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:28:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:28:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:28:12 compute-0 nova_compute[239965]: 2026-01-26 16:28:12.273 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444877.2718978, 5d69a9b9-63ea-4df6-bf59-67250f4f907d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:12 compute-0 nova_compute[239965]: 2026-01-26 16:28:12.274 239969 INFO nova.compute.manager [-] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] VM Stopped (Lifecycle Event)
Jan 26 16:28:12 compute-0 nova_compute[239965]: 2026-01-26 16:28:12.293 239969 DEBUG nova.compute.manager [None req-83480f44-e9d3-4802-bf2f-cb1cb97e68ec - - - - - -] [instance: 5d69a9b9-63ea-4df6-bf59-67250f4f907d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 41 KiB/s wr, 217 op/s
Jan 26 16:28:13 compute-0 ceph-mon[75140]: pgmap v2336: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 41 KiB/s wr, 217 op/s
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.411 239969 DEBUG nova.network.neutron [-] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.430 239969 INFO nova.compute.manager [-] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Took 1.53 seconds to deallocate network for instance.
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.500 239969 DEBUG nova.compute.manager [req-9442544e-bb2b-4d9d-acda-cb994483596d req-2041d732-e132-4970-bed5-572216d5b383 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Received event network-vif-deleted-3eb1285a-11fb-4984-b35f-24fe2d9ed6f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.502 239969 DEBUG oslo_concurrency.lockutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.503 239969 DEBUG oslo_concurrency.lockutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.589 239969 DEBUG oslo_concurrency.processutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.637 239969 DEBUG nova.compute.manager [req-9ca06695-d961-415c-8eac-dc69cc4a0b2d req-dfef177b-8a14-4ad9-9e9d-cb76149d3a29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Received event network-changed-be140ce9-b433-44cc-867c-3beea67a67d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.637 239969 DEBUG nova.compute.manager [req-9ca06695-d961-415c-8eac-dc69cc4a0b2d req-dfef177b-8a14-4ad9-9e9d-cb76149d3a29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Refreshing instance network info cache due to event network-changed-be140ce9-b433-44cc-867c-3beea67a67d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.637 239969 DEBUG oslo_concurrency.lockutils [req-9ca06695-d961-415c-8eac-dc69cc4a0b2d req-dfef177b-8a14-4ad9-9e9d-cb76149d3a29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-0f79507c-8a44-47f3-a461-6377eed67521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.638 239969 DEBUG oslo_concurrency.lockutils [req-9ca06695-d961-415c-8eac-dc69cc4a0b2d req-dfef177b-8a14-4ad9-9e9d-cb76149d3a29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-0f79507c-8a44-47f3-a461-6377eed67521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.638 239969 DEBUG nova.network.neutron [req-9ca06695-d961-415c-8eac-dc69cc4a0b2d req-dfef177b-8a14-4ad9-9e9d-cb76149d3a29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Refreshing network info cache for port be140ce9-b433-44cc-867c-3beea67a67d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.733 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.733 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.758 239969 DEBUG nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.821 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.827 239969 DEBUG nova.network.neutron [req-dcceb71f-a7b4-48c5-8495-66659aeca491 req-0d204b14-962c-4c6b-b2a0-a3ed404ae334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updated VIF entry in instance network info cache for port 3eb1285a-11fb-4984-b35f-24fe2d9ed6f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.828 239969 DEBUG nova.network.neutron [req-dcceb71f-a7b4-48c5-8495-66659aeca491 req-0d204b14-962c-4c6b-b2a0-a3ed404ae334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Updating instance_info_cache with network_info: [{"id": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "address": "fa:16:3e:5c:66:7c", "network": {"id": "6a673593-4cc1-4b91-8d9d-33cfb52b459e", "bridge": "br-int", "label": "tempest-network-smoke--1609506576", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb1285a-11", "ovs_interfaceid": "3eb1285a-11fb-4984-b35f-24fe2d9ed6f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.848 239969 DEBUG oslo_concurrency.lockutils [req-dcceb71f-a7b4-48c5-8495-66659aeca491 req-0d204b14-962c-4c6b-b2a0-a3ed404ae334 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-759e14a7-49ae-4f6f-bb96-80fc691d50e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:13 compute-0 nova_compute[239965]: 2026-01-26 16:28:13.877 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:28:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1958730636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.180 239969 DEBUG oslo_concurrency.processutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.193 239969 DEBUG nova.compute.provider_tree [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.209 239969 DEBUG nova.scheduler.client.report [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.230 239969 DEBUG oslo_concurrency.lockutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.232 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.238 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.238 239969 INFO nova.compute.claims [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.274 239969 INFO nova.scheduler.client.report [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance 759e14a7-49ae-4f6f-bb96-80fc691d50e3
Jan 26 16:28:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1958730636' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.424 239969 DEBUG oslo_concurrency.lockutils [None req-0a0f7edf-8f8a-409f-93c8-0d5f7cad492e e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "759e14a7-49ae-4f6f-bb96-80fc691d50e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:14 compute-0 nova_compute[239965]: 2026-01-26 16:28:14.490 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 183 op/s
Jan 26 16:28:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:28:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1662781377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.117 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.627s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.123 239969 DEBUG nova.compute.provider_tree [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.144 239969 DEBUG nova.scheduler.client.report [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.173 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.941s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.174 239969 DEBUG nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.218 239969 DEBUG nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.219 239969 DEBUG nova.network.neutron [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.239 239969 INFO nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.256 239969 DEBUG nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:28:15 compute-0 ceph-mon[75140]: pgmap v2337: 305 pgs: 305 active+clean; 250 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 183 op/s
Jan 26 16:28:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1662781377' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.355 239969 DEBUG nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.359 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.360 239969 INFO nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Creating image(s)
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.394 239969 DEBUG nova.storage.rbd_utils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.432 239969 DEBUG nova.storage.rbd_utils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.499 239969 DEBUG nova.storage.rbd_utils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.505 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.613 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.615 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.615 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.616 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.637 239969 DEBUG nova.storage.rbd_utils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.640 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.844 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444880.8427167, 02c17ea9-0fbb-4beb-bc93-b0339e8a1048 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.844 239969 INFO nova.compute.manager [-] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] VM Stopped (Lifecycle Event)
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.864 239969 DEBUG nova.compute.manager [None req-42447105-f97d-4456-b4e4-9959ceac8170 - - - - - -] [instance: 02c17ea9-0fbb-4beb-bc93-b0339e8a1048] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.916 239969 DEBUG nova.policy [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59ae1c17a260470c91f50965ddd53a9e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.919 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.279s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.972 239969 DEBUG nova.network.neutron [req-9ca06695-d961-415c-8eac-dc69cc4a0b2d req-dfef177b-8a14-4ad9-9e9d-cb76149d3a29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Updated VIF entry in instance network info cache for port be140ce9-b433-44cc-867c-3beea67a67d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.972 239969 DEBUG nova.network.neutron [req-9ca06695-d961-415c-8eac-dc69cc4a0b2d req-dfef177b-8a14-4ad9-9e9d-cb76149d3a29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Updating instance_info_cache with network_info: [{"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:15 compute-0 nova_compute[239965]: 2026-01-26 16:28:15.979 239969 DEBUG nova.storage.rbd_utils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] resizing rbd image 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:28:16 compute-0 nova_compute[239965]: 2026-01-26 16:28:16.003 239969 DEBUG oslo_concurrency.lockutils [req-9ca06695-d961-415c-8eac-dc69cc4a0b2d req-dfef177b-8a14-4ad9-9e9d-cb76149d3a29 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-0f79507c-8a44-47f3-a461-6377eed67521" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:16 compute-0 nova_compute[239965]: 2026-01-26 16:28:16.046 239969 DEBUG nova.objects.instance [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'migration_context' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:16 compute-0 nova_compute[239965]: 2026-01-26 16:28:16.063 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:28:16 compute-0 nova_compute[239965]: 2026-01-26 16:28:16.063 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Ensure instance console log exists: /var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:28:16 compute-0 nova_compute[239965]: 2026-01-26 16:28:16.064 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:16 compute-0 nova_compute[239965]: 2026-01-26 16:28:16.064 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:16 compute-0 nova_compute[239965]: 2026-01-26 16:28:16.064 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:16 compute-0 nova_compute[239965]: 2026-01-26 16:28:16.562 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:16 compute-0 ovn_controller[146046]: 2026-01-26T16:28:16Z|01566|binding|INFO|Releasing lport 1ca00a90-6e42-40f0-a57b-b46691113c4a from this chassis (sb_readonly=0)
Jan 26 16:28:16 compute-0 nova_compute[239965]: 2026-01-26 16:28:16.682 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 226 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 355 KiB/s wr, 179 op/s
Jan 26 16:28:16 compute-0 ceph-mon[75140]: pgmap v2338: 305 pgs: 305 active+clean; 226 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 355 KiB/s wr, 179 op/s
Jan 26 16:28:17 compute-0 nova_compute[239965]: 2026-01-26 16:28:17.062 239969 DEBUG nova.network.neutron [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Successfully created port: 23d47336-e4f3-4764-b5f8-68038fe02faa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.090 239969 DEBUG nova.network.neutron [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Successfully updated port: 23d47336-e4f3-4764-b5f8-68038fe02faa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.104 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.104 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquired lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.104 239969 DEBUG nova.network.neutron [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.170 239969 DEBUG nova.compute.manager [req-fc49239f-9970-46b7-8fef-8bc882dba2a4 req-2f37717f-a984-4fe9-8e4d-380831d55db7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-changed-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.170 239969 DEBUG nova.compute.manager [req-fc49239f-9970-46b7-8fef-8bc882dba2a4 req-2f37717f-a984-4fe9-8e4d-380831d55db7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Refreshing instance network info cache due to event network-changed-23d47336-e4f3-4764-b5f8-68038fe02faa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.170 239969 DEBUG oslo_concurrency.lockutils [req-fc49239f-9970-46b7-8fef-8bc882dba2a4 req-2f37717f-a984-4fe9-8e4d-380831d55db7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.247 239969 DEBUG nova.network.neutron [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:28:18 compute-0 ovn_controller[146046]: 2026-01-26T16:28:18Z|01567|binding|INFO|Releasing lport 1ca00a90-6e42-40f0-a57b-b46691113c4a from this chassis (sb_readonly=0)
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.371 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Jan 26 16:28:18 compute-0 ceph-mon[75140]: pgmap v2339: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 144 op/s
Jan 26 16:28:18 compute-0 nova_compute[239965]: 2026-01-26 16:28:18.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.406 239969 DEBUG nova.network.neutron [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Updating instance_info_cache with network_info: [{"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.434 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Releasing lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.434 239969 DEBUG nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance network_info: |[{"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.435 239969 DEBUG oslo_concurrency.lockutils [req-fc49239f-9970-46b7-8fef-8bc882dba2a4 req-2f37717f-a984-4fe9-8e4d-380831d55db7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.435 239969 DEBUG nova.network.neutron [req-fc49239f-9970-46b7-8fef-8bc882dba2a4 req-2f37717f-a984-4fe9-8e4d-380831d55db7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Refreshing network info cache for port 23d47336-e4f3-4764-b5f8-68038fe02faa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.442 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Start _get_guest_xml network_info=[{"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.451 239969 WARNING nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.459 239969 DEBUG nova.virt.libvirt.host [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.461 239969 DEBUG nova.virt.libvirt.host [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.473 239969 DEBUG nova.virt.libvirt.host [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.474 239969 DEBUG nova.virt.libvirt.host [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.475 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.476 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.477 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.477 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.478 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.478 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.478 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.479 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.479 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.479 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.480 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.480 239969 DEBUG nova.virt.hardware [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:28:19 compute-0 nova_compute[239965]: 2026-01-26 16:28:19.486 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:20 compute-0 ovn_controller[146046]: 2026-01-26T16:28:20Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:a8:5d 10.100.0.13
Jan 26 16:28:20 compute-0 ovn_controller[146046]: 2026-01-26T16:28:20Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:a8:5d 10.100.0.13
Jan 26 16:28:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:28:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439424965' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.074 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.101 239969 DEBUG nova.storage.rbd_utils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.106 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3439424965' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.508 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444885.5074685, 39a41999-3c56-4679-8c57-9286dc4edbb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.509 239969 INFO nova.compute.manager [-] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] VM Stopped (Lifecycle Event)
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.529 239969 DEBUG nova.compute.manager [None req-12376e5a-9567-4650-9644-b275d33835f8 - - - - - -] [instance: 39a41999-3c56-4679-8c57-9286dc4edbb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 126 op/s
Jan 26 16:28:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:28:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1765882031' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.887 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.781s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.889 239969 DEBUG nova.virt.libvirt.vif [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1457378879',display_name='tempest-TestNetworkAdvancedServerOps-server-1457378879',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1457378879',id=148,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBXRmJ55PYG/LM9xDFj4YfpDSpukedz8p49LlUh0uK6t84cxMa5WxNAqPRBOJJ8+YvFGubL9tQLCSQysIZbD/889B7QK05P6yiYx8xjOJeyjUpWMf+n6V4Xm1xDWJcHxCA==',key_name='tempest-TestNetworkAdvancedServerOps-1322118565',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-yfjodt40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:28:15Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=3a84510b-c77e-4b69-95cf-39eb5ba96406,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.890 239969 DEBUG nova.network.os_vif_util [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.891 239969 DEBUG nova.network.os_vif_util [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.893 239969 DEBUG nova.objects.instance [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.913 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <uuid>3a84510b-c77e-4b69-95cf-39eb5ba96406</uuid>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <name>instance-00000094</name>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1457378879</nova:name>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:28:19</nova:creationTime>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <nova:user uuid="59ae1c17a260470c91f50965ddd53a9e">tempest-TestNetworkAdvancedServerOps-842475489-project-member</nova:user>
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <nova:project uuid="2a7615c0db4e4f38aec30c7c723c3c3a">tempest-TestNetworkAdvancedServerOps-842475489</nova:project>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <nova:port uuid="23d47336-e4f3-4764-b5f8-68038fe02faa">
Jan 26 16:28:20 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <system>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <entry name="serial">3a84510b-c77e-4b69-95cf-39eb5ba96406</entry>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <entry name="uuid">3a84510b-c77e-4b69-95cf-39eb5ba96406</entry>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     </system>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <os>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   </os>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <features>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   </features>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3a84510b-c77e-4b69-95cf-39eb5ba96406_disk">
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       </source>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3a84510b-c77e-4b69-95cf-39eb5ba96406_disk.config">
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       </source>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:28:20 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:5f:4f:e5"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <target dev="tap23d47336-e4"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406/console.log" append="off"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <video>
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     </video>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:28:20 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:28:20 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:28:20 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:28:20 compute-0 nova_compute[239965]: </domain>
Jan 26 16:28:20 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.915 239969 DEBUG nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Preparing to wait for external event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.916 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.916 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.917 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.918 239969 DEBUG nova.virt.libvirt.vif [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1457378879',display_name='tempest-TestNetworkAdvancedServerOps-server-1457378879',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1457378879',id=148,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBXRmJ55PYG/LM9xDFj4YfpDSpukedz8p49LlUh0uK6t84cxMa5WxNAqPRBOJJ8+YvFGubL9tQLCSQysIZbD/889B7QK05P6yiYx8xjOJeyjUpWMf+n6V4Xm1xDWJcHxCA==',key_name='tempest-TestNetworkAdvancedServerOps-1322118565',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-yfjodt40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:28:15Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=3a84510b-c77e-4b69-95cf-39eb5ba96406,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.919 239969 DEBUG nova.network.os_vif_util [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.920 239969 DEBUG nova.network.os_vif_util [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.920 239969 DEBUG os_vif [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.922 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.923 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.928 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23d47336-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.929 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23d47336-e4, col_values=(('external_ids', {'iface-id': '23d47336-e4f3-4764-b5f8-68038fe02faa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:4f:e5', 'vm-uuid': '3a84510b-c77e-4b69-95cf-39eb5ba96406'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.959 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:20 compute-0 NetworkManager[48954]: <info>  [1769444900.9606] manager: (tap23d47336-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/634)
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.963 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.968 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:20 compute-0 nova_compute[239965]: 2026-01-26 16:28:20.969 239969 INFO os_vif [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4')
Jan 26 16:28:21 compute-0 nova_compute[239965]: 2026-01-26 16:28:21.042 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:28:21 compute-0 nova_compute[239965]: 2026-01-26 16:28:21.043 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:28:21 compute-0 nova_compute[239965]: 2026-01-26 16:28:21.043 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No VIF found with MAC fa:16:3e:5f:4f:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:28:21 compute-0 nova_compute[239965]: 2026-01-26 16:28:21.044 239969 INFO nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Using config drive
Jan 26 16:28:21 compute-0 nova_compute[239965]: 2026-01-26 16:28:21.082 239969 DEBUG nova.storage.rbd_utils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:21 compute-0 ceph-mon[75140]: pgmap v2340: 305 pgs: 305 active+clean; 260 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 126 op/s
Jan 26 16:28:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1765882031' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:21 compute-0 nova_compute[239965]: 2026-01-26 16:28:21.664 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:22 compute-0 ovn_controller[146046]: 2026-01-26T16:28:22Z|01568|binding|INFO|Releasing lport 1ca00a90-6e42-40f0-a57b-b46691113c4a from this chassis (sb_readonly=0)
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.263 239969 DEBUG nova.network.neutron [req-fc49239f-9970-46b7-8fef-8bc882dba2a4 req-2f37717f-a984-4fe9-8e4d-380831d55db7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Updated VIF entry in instance network info cache for port 23d47336-e4f3-4764-b5f8-68038fe02faa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.264 239969 DEBUG nova.network.neutron [req-fc49239f-9970-46b7-8fef-8bc882dba2a4 req-2f37717f-a984-4fe9-8e4d-380831d55db7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Updating instance_info_cache with network_info: [{"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.281 239969 DEBUG oslo_concurrency.lockutils [req-fc49239f-9970-46b7-8fef-8bc882dba2a4 req-2f37717f-a984-4fe9-8e4d-380831d55db7 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.383 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.403 239969 INFO nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Creating config drive at /var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406/disk.config
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.409 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqu4qgyt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.557 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoqu4qgyt" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.593 239969 DEBUG nova.storage.rbd_utils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.599 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406/disk.config 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 184 op/s
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.762 239969 DEBUG oslo_concurrency.processutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406/disk.config 3a84510b-c77e-4b69-95cf-39eb5ba96406_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.763 239969 INFO nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Deleting local config drive /var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406/disk.config because it was imported into RBD.
Jan 26 16:28:22 compute-0 ceph-mon[75140]: pgmap v2341: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 184 op/s
Jan 26 16:28:22 compute-0 kernel: tap23d47336-e4: entered promiscuous mode
Jan 26 16:28:22 compute-0 NetworkManager[48954]: <info>  [1769444902.8287] manager: (tap23d47336-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/635)
Jan 26 16:28:22 compute-0 ovn_controller[146046]: 2026-01-26T16:28:22Z|01569|binding|INFO|Claiming lport 23d47336-e4f3-4764-b5f8-68038fe02faa for this chassis.
Jan 26 16:28:22 compute-0 ovn_controller[146046]: 2026-01-26T16:28:22Z|01570|binding|INFO|23d47336-e4f3-4764-b5f8-68038fe02faa: Claiming fa:16:3e:5f:4f:e5 10.100.0.13
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.830 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.842 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:4f:e5 10.100.0.13'], port_security=['fa:16:3e:5f:4f:e5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3a84510b-c77e-4b69-95cf-39eb5ba96406', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '180e9b46-d482-4e98-8076-c3573609308c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94eb109d-0d0f-4f4d-9984-1552de27451e, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=23d47336-e4f3-4764-b5f8-68038fe02faa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.844 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 23d47336-e4f3-4764-b5f8-68038fe02faa in datapath c5dcb653-5134-44c2-bb5f-1ef932033d43 bound to our chassis
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.846 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5dcb653-5134-44c2-bb5f-1ef932033d43
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.847 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:22 compute-0 ovn_controller[146046]: 2026-01-26T16:28:22Z|01571|binding|INFO|Setting lport 23d47336-e4f3-4764-b5f8-68038fe02faa ovn-installed in OVS
Jan 26 16:28:22 compute-0 ovn_controller[146046]: 2026-01-26T16:28:22Z|01572|binding|INFO|Setting lport 23d47336-e4f3-4764-b5f8-68038fe02faa up in Southbound
Jan 26 16:28:22 compute-0 nova_compute[239965]: 2026-01-26 16:28:22.852 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.862 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[391de14b-11f2-47fa-9372-5c3aa6a3d14b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:22 compute-0 systemd-machined[208061]: New machine qemu-179-instance-00000094.
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.865 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc5dcb653-51 in ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.867 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc5dcb653-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.867 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d93b53a2-81df-49c9-b469-1ebe3bec7d4d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.868 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8d42e33a-824e-4186-8354-0b8abccea4c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:22 compute-0 systemd-udevd[369542]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:28:22 compute-0 systemd[1]: Started Virtual Machine qemu-179-instance-00000094.
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.883 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ca739c71-8178-42a6-bd2e-1ebfdc36cb27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:22 compute-0 NetworkManager[48954]: <info>  [1769444902.8855] device (tap23d47336-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:28:22 compute-0 NetworkManager[48954]: <info>  [1769444902.8883] device (tap23d47336-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.915 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3fe22bee-5f84-4b1d-91b4-495de2819c73]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.943 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[feb42ae8-7aab-4adc-9d35-738714ee9a2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:22 compute-0 NetworkManager[48954]: <info>  [1769444902.9501] manager: (tapc5dcb653-50): new Veth device (/org/freedesktop/NetworkManager/Devices/636)
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.949 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd3a081-76f9-44e9-9007-97a84086d72e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.980 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a9000f0e-cc2f-499d-8ce0-370572feb721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:22.985 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdbcc9d-22be-4cf0-b1e3-d13772e76ce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:23 compute-0 NetworkManager[48954]: <info>  [1769444903.0075] device (tapc5dcb653-50): carrier: link connected
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.017 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[38c1c0d3-9aa6-4dea-8bd8-5bd75666b257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.035 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8468506f-613e-4424-8127-6be8e63d6354]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5dcb653-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:a5:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639381, 'reachable_time': 35139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369574, 'error': None, 'target': 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.059 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[afabb650-81df-46a3-80a4-74af6a8d9eeb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:a520'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639381, 'tstamp': 639381}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369575, 'error': None, 'target': 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.082 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f860f7d9-f992-4ac1-bcab-4dbc61a55af3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5dcb653-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:a5:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 450], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639381, 'reachable_time': 35139, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 369592, 'error': None, 'target': 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.124 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[69082cc9-bd9e-4186-bc27-585cf66a4d52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.213 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[49c7ba73-9fff-4303-a9ce-d56e48a729f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.214 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5dcb653-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.214 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.215 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5dcb653-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:23 compute-0 kernel: tapc5dcb653-50: entered promiscuous mode
Jan 26 16:28:23 compute-0 NetworkManager[48954]: <info>  [1769444903.2182] manager: (tapc5dcb653-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.220 239969 DEBUG nova.compute.manager [req-4b2935ed-9805-4334-8714-cbc5a3f6e23c req-539b21ab-fac9-4cdf-96f4-0fa7fb345910 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.221 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5dcb653-50, col_values=(('external_ids', {'iface-id': '2e120840-5ef3-452f-9957-cc378e107313'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.221 239969 DEBUG oslo_concurrency.lockutils [req-4b2935ed-9805-4334-8714-cbc5a3f6e23c req-539b21ab-fac9-4cdf-96f4-0fa7fb345910 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.222 239969 DEBUG oslo_concurrency.lockutils [req-4b2935ed-9805-4334-8714-cbc5a3f6e23c req-539b21ab-fac9-4cdf-96f4-0fa7fb345910 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.222 239969 DEBUG oslo_concurrency.lockutils [req-4b2935ed-9805-4334-8714-cbc5a3f6e23c req-539b21ab-fac9-4cdf-96f4-0fa7fb345910 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:23 compute-0 ovn_controller[146046]: 2026-01-26T16:28:23Z|01573|binding|INFO|Releasing lport 2e120840-5ef3-452f-9957-cc378e107313 from this chassis (sb_readonly=0)
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.222 239969 DEBUG nova.compute.manager [req-4b2935ed-9805-4334-8714-cbc5a3f6e23c req-539b21ab-fac9-4cdf-96f4-0fa7fb345910 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Processing event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.223 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.224 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c5dcb653-5134-44c2-bb5f-1ef932033d43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c5dcb653-5134-44c2-bb5f-1ef932033d43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.225 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a9cd2b83-4a92-4f9f-91db-0407acfb9cb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.226 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-c5dcb653-5134-44c2-bb5f-1ef932033d43
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/c5dcb653-5134-44c2-bb5f-1ef932033d43.pid.haproxy
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID c5dcb653-5134-44c2-bb5f-1ef932033d43
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:28:23 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:23.227 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'env', 'PROCESS_TAG=haproxy-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c5dcb653-5134-44c2-bb5f-1ef932033d43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.459 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.482 239969 DEBUG nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.482 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444903.481425, 3a84510b-c77e-4b69-95cf-39eb5ba96406 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.482 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] VM Started (Lifecycle Event)
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.487 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.491 239969 INFO nova.virt.libvirt.driver [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance spawned successfully.
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.491 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.505 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.512 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.517 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.517 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.518 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.518 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.519 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.519 239969 DEBUG nova.virt.libvirt.driver [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.547 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.548 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444903.484501, 3a84510b-c77e-4b69-95cf-39eb5ba96406 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.548 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] VM Paused (Lifecycle Event)
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.581 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.586 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444903.4857554, 3a84510b-c77e-4b69-95cf-39eb5ba96406 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.586 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] VM Resumed (Lifecycle Event)
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.590 239969 INFO nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Took 8.23 seconds to spawn the instance on the hypervisor.
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.590 239969 DEBUG nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.604 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.607 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.637 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.661 239969 INFO nova.compute.manager [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Took 9.86 seconds to build instance.
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.677 239969 DEBUG oslo_concurrency.lockutils [None req-c95ea39d-4a91-437c-a1cb-50c98e31bd67 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:23 compute-0 podman[369650]: 2026-01-26 16:28:23.699580505 +0000 UTC m=+0.052499086 container create 798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:28:23 compute-0 systemd[1]: Started libpod-conmon-798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47.scope.
Jan 26 16:28:23 compute-0 podman[369650]: 2026-01-26 16:28:23.674592503 +0000 UTC m=+0.027511114 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:28:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c5c4a0148be57dda2f9617f85d9500a16d56e304ade231944c9748f034036f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:23 compute-0 podman[369650]: 2026-01-26 16:28:23.79085189 +0000 UTC m=+0.143770511 container init 798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:28:23 compute-0 podman[369650]: 2026-01-26 16:28:23.797659067 +0000 UTC m=+0.150577658 container start 798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:28:23 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[369666]: [NOTICE]   (369670) : New worker (369672) forked
Jan 26 16:28:23 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[369666]: [NOTICE]   (369670) : Loading success.
Jan 26 16:28:23 compute-0 nova_compute[239965]: 2026-01-26 16:28:23.882 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Jan 26 16:28:25 compute-0 nova_compute[239965]: 2026-01-26 16:28:25.369 239969 DEBUG nova.compute.manager [req-2adf2f6d-891b-4161-9978-224dc633cf15 req-34545d93-df7b-4729-8cb2-9d206385974e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:25 compute-0 nova_compute[239965]: 2026-01-26 16:28:25.369 239969 DEBUG oslo_concurrency.lockutils [req-2adf2f6d-891b-4161-9978-224dc633cf15 req-34545d93-df7b-4729-8cb2-9d206385974e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:25 compute-0 nova_compute[239965]: 2026-01-26 16:28:25.369 239969 DEBUG oslo_concurrency.lockutils [req-2adf2f6d-891b-4161-9978-224dc633cf15 req-34545d93-df7b-4729-8cb2-9d206385974e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:25 compute-0 nova_compute[239965]: 2026-01-26 16:28:25.369 239969 DEBUG oslo_concurrency.lockutils [req-2adf2f6d-891b-4161-9978-224dc633cf15 req-34545d93-df7b-4729-8cb2-9d206385974e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:25 compute-0 nova_compute[239965]: 2026-01-26 16:28:25.370 239969 DEBUG nova.compute.manager [req-2adf2f6d-891b-4161-9978-224dc633cf15 req-34545d93-df7b-4729-8cb2-9d206385974e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] No waiting events found dispatching network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:25 compute-0 nova_compute[239965]: 2026-01-26 16:28:25.370 239969 WARNING nova.compute.manager [req-2adf2f6d-891b-4161-9978-224dc633cf15 req-34545d93-df7b-4729-8cb2-9d206385974e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received unexpected event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa for instance with vm_state active and task_state None.
Jan 26 16:28:25 compute-0 ceph-mon[75140]: pgmap v2342: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 3.9 MiB/s wr, 96 op/s
Jan 26 16:28:25 compute-0 nova_compute[239965]: 2026-01-26 16:28:25.961 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.525 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444891.5231636, 759e14a7-49ae-4f6f-bb96-80fc691d50e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.525 239969 INFO nova.compute.manager [-] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] VM Stopped (Lifecycle Event)
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.547 239969 DEBUG nova.compute.manager [None req-239a741e-36d0-468d-a989-dca186675d8c - - - - - -] [instance: 759e14a7-49ae-4f6f-bb96-80fc691d50e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.549 239969 DEBUG oslo_concurrency.lockutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "0f79507c-8a44-47f3-a461-6377eed67521" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.550 239969 DEBUG oslo_concurrency.lockutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.551 239969 DEBUG oslo_concurrency.lockutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "0f79507c-8a44-47f3-a461-6377eed67521-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.551 239969 DEBUG oslo_concurrency.lockutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.552 239969 DEBUG oslo_concurrency.lockutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.554 239969 INFO nova.compute.manager [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Terminating instance
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.555 239969 DEBUG nova.compute.manager [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:28:26 compute-0 kernel: tapbe140ce9-b4 (unregistering): left promiscuous mode
Jan 26 16:28:26 compute-0 NetworkManager[48954]: <info>  [1769444906.6155] device (tapbe140ce9-b4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:28:26 compute-0 ovn_controller[146046]: 2026-01-26T16:28:26Z|01574|binding|INFO|Releasing lport be140ce9-b433-44cc-867c-3beea67a67d2 from this chassis (sb_readonly=0)
Jan 26 16:28:26 compute-0 ovn_controller[146046]: 2026-01-26T16:28:26Z|01575|binding|INFO|Setting lport be140ce9-b433-44cc-867c-3beea67a67d2 down in Southbound
Jan 26 16:28:26 compute-0 ovn_controller[146046]: 2026-01-26T16:28:26Z|01576|binding|INFO|Removing iface tapbe140ce9-b4 ovn-installed in OVS
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.633 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.637 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:a8:5d 10.100.0.13'], port_security=['fa:16:3e:a5:a8:5d 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0f79507c-8a44-47f3-a461-6377eed67521', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ef94faa6-075d-4f37-900d-b0e16481fc09', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7416312-6ee5-4c60-a0ef-0cc203c3bf4f, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=be140ce9-b433-44cc-867c-3beea67a67d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.640 156105 INFO neutron.agent.ovn.metadata.agent [-] Port be140ce9-b433-44cc-867c-3beea67a67d2 in datapath fed7ea2b-3239-4a92-a662-8de7d6a5b54f unbound from our chassis
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.642 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fed7ea2b-3239-4a92-a662-8de7d6a5b54f
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.700 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.700 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[365b55a0-3be3-4be0-a2c0-c20b4f9cad07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:26 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000093.scope: Deactivated successfully.
Jan 26 16:28:26 compute-0 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000093.scope: Consumed 13.649s CPU time.
Jan 26 16:28:26 compute-0 systemd-machined[208061]: Machine qemu-178-instance-00000093 terminated.
Jan 26 16:28:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 3.9 MiB/s wr, 115 op/s
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.737 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b34107d9-9bf7-4d05-a56a-3182a4f3b95f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.742 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[53a23c04-6732-4772-a537-1d77f9bbc309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.773 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a036174e-a9a1-41b8-a541-636f37578462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.785 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.795 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.800 239969 INFO nova.virt.libvirt.driver [-] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Instance destroyed successfully.
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.800 239969 DEBUG nova.objects.instance [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'resources' on Instance uuid 0f79507c-8a44-47f3-a461-6377eed67521 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.801 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ba14eec4-03f4-40f0-899d-3a7e67a4172c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfed7ea2b-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9e:39:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 439], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632646, 'reachable_time': 26816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369695, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:26 compute-0 ceph-mon[75140]: pgmap v2343: 305 pgs: 305 active+clean; 293 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 685 KiB/s rd, 3.9 MiB/s wr, 115 op/s
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.816 239969 DEBUG nova.virt.libvirt.vif [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:27:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1795104862',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-gen-1-1795104862',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-gen',id=147,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNmPYHlpHuZpOeZCl7BHQ216cd8u50ZxFCXW833oHGB4gBbAp9eFvd6P3qjkcGf53JTK0QFiWuKcgJqP7NdWwDFlJ24rb8FRQzo1dBCyWpi8odGazdOwXvkLXo0bhtlZwA==',key_name='tempest-TestSecurityGroupsBasicOps-1082581796',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:28:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-bf1n0ojq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:28:07Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=0f79507c-8a44-47f3-a461-6377eed67521,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.820 239969 DEBUG nova.network.os_vif_util [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "be140ce9-b433-44cc-867c-3beea67a67d2", "address": "fa:16:3e:a5:a8:5d", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe140ce9-b4", "ovs_interfaceid": "be140ce9-b433-44cc-867c-3beea67a67d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.822 239969 DEBUG nova.network.os_vif_util [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a8:5d,bridge_name='br-int',has_traffic_filtering=True,id=be140ce9-b433-44cc-867c-3beea67a67d2,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe140ce9-b4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.823 239969 DEBUG os_vif [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a8:5d,bridge_name='br-int',has_traffic_filtering=True,id=be140ce9-b433-44cc-867c-3beea67a67d2,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe140ce9-b4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.826 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9a4f1b0f-e727-42ad-94a1-9ed8900adb31]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfed7ea2b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632659, 'tstamp': 632659}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369703, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfed7ea2b-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 632662, 'tstamp': 632662}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 369703, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.828 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfed7ea2b-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.828 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.829 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe140ce9-b4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.832 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.835 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfed7ea2b-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.835 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.836 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfed7ea2b-30, col_values=(('external_ids', {'iface-id': '1ca00a90-6e42-40f0-a57b-b46691113c4a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:26 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:26.836 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.836 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:26 compute-0 nova_compute[239965]: 2026-01-26 16:28:26.838 239969 INFO os_vif [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a5:a8:5d,bridge_name='br-int',has_traffic_filtering=True,id=be140ce9-b433-44cc-867c-3beea67a67d2,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbe140ce9-b4')
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.125 239969 INFO nova.virt.libvirt.driver [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Deleting instance files /var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521_del
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.126 239969 INFO nova.virt.libvirt.driver [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Deletion of /var/lib/nova/instances/0f79507c-8a44-47f3-a461-6377eed67521_del complete
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.199 239969 INFO nova.compute.manager [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Took 0.64 seconds to destroy the instance on the hypervisor.
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.200 239969 DEBUG oslo.service.loopingcall [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.200 239969 DEBUG nova.compute.manager [-] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.201 239969 DEBUG nova.network.neutron [-] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.487 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.894 239969 DEBUG nova.network.neutron [-] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.907 239969 INFO nova.compute.manager [-] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Took 0.71 seconds to deallocate network for instance.
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.952 239969 DEBUG nova.compute.manager [req-40d1228c-0d65-461e-9a6f-be7d5d6404de req-247878a7-60dc-4fbd-821d-e8160c0e0f1a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Received event network-vif-deleted-be140ce9-b433-44cc-867c-3beea67a67d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.957 239969 DEBUG oslo_concurrency.lockutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:27 compute-0 nova_compute[239965]: 2026-01-26 16:28:27.958 239969 DEBUG oslo_concurrency.lockutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.024 239969 DEBUG oslo_concurrency.processutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:28:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315800289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.589 239969 DEBUG oslo_concurrency.processutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.598 239969 DEBUG nova.compute.provider_tree [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.621 239969 DEBUG nova.scheduler.client.report [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:28:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1315800289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.645 239969 DEBUG oslo_concurrency.lockutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.675 239969 DEBUG nova.compute.manager [req-81bfefaa-1d6f-4c4b-a2a3-3d3ba8879da0 req-2044e768-fc48-422d-a672-f7e941e89178 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-changed-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.676 239969 DEBUG nova.compute.manager [req-81bfefaa-1d6f-4c4b-a2a3-3d3ba8879da0 req-2044e768-fc48-422d-a672-f7e941e89178 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Refreshing instance network info cache due to event network-changed-23d47336-e4f3-4764-b5f8-68038fe02faa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.677 239969 DEBUG oslo_concurrency.lockutils [req-81bfefaa-1d6f-4c4b-a2a3-3d3ba8879da0 req-2044e768-fc48-422d-a672-f7e941e89178 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.677 239969 DEBUG oslo_concurrency.lockutils [req-81bfefaa-1d6f-4c4b-a2a3-3d3ba8879da0 req-2044e768-fc48-422d-a672-f7e941e89178 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.678 239969 DEBUG nova.network.neutron [req-81bfefaa-1d6f-4c4b-a2a3-3d3ba8879da0 req-2044e768-fc48-422d-a672-f7e941e89178 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Refreshing network info cache for port 23d47336-e4f3-4764-b5f8-68038fe02faa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.682 239969 INFO nova.scheduler.client.report [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Deleted allocations for instance 0f79507c-8a44-47f3-a461-6377eed67521
Jan 26 16:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:28:28
Jan 26 16:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups', 'default.rgw.control', 'images', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes']
Jan 26 16:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:28:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 235 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 179 op/s
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.750 239969 DEBUG oslo_concurrency.lockutils [None req-c17c3ccc-05f4-40fd-9f30-2e46965bdd70 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "0f79507c-8a44-47f3-a461-6377eed67521" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:28 compute-0 nova_compute[239965]: 2026-01-26 16:28:28.944 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:29 compute-0 ceph-mon[75140]: pgmap v2344: 305 pgs: 305 active+clean; 235 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.7 MiB/s wr, 179 op/s
Jan 26 16:28:30 compute-0 nova_compute[239965]: 2026-01-26 16:28:30.275 239969 DEBUG nova.network.neutron [req-81bfefaa-1d6f-4c4b-a2a3-3d3ba8879da0 req-2044e768-fc48-422d-a672-f7e941e89178 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Updated VIF entry in instance network info cache for port 23d47336-e4f3-4764-b5f8-68038fe02faa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:30 compute-0 nova_compute[239965]: 2026-01-26 16:28:30.277 239969 DEBUG nova.network.neutron [req-81bfefaa-1d6f-4c4b-a2a3-3d3ba8879da0 req-2044e768-fc48-422d-a672-f7e941e89178 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Updating instance_info_cache with network_info: [{"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:30 compute-0 nova_compute[239965]: 2026-01-26 16:28:30.300 239969 DEBUG oslo_concurrency.lockutils [req-81bfefaa-1d6f-4c4b-a2a3-3d3ba8879da0 req-2044e768-fc48-422d-a672-f7e941e89178 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 235 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 26 16:28:30 compute-0 ceph-mon[75140]: pgmap v2345: 305 pgs: 305 active+clean; 235 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 164 op/s
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:28:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.304 239969 DEBUG nova.compute.manager [req-8022e7f3-ed93-4823-8f6b-020be32e52ee req-e2b57435-a8ae-46ca-bfe7-b3be8f557db9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received event network-changed-dccf97d4-5209-4347-9f07-83ba148d3126 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.305 239969 DEBUG nova.compute.manager [req-8022e7f3-ed93-4823-8f6b-020be32e52ee req-e2b57435-a8ae-46ca-bfe7-b3be8f557db9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Refreshing instance network info cache due to event network-changed-dccf97d4-5209-4347-9f07-83ba148d3126. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.305 239969 DEBUG oslo_concurrency.lockutils [req-8022e7f3-ed93-4823-8f6b-020be32e52ee req-e2b57435-a8ae-46ca-bfe7-b3be8f557db9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.306 239969 DEBUG oslo_concurrency.lockutils [req-8022e7f3-ed93-4823-8f6b-020be32e52ee req-e2b57435-a8ae-46ca-bfe7-b3be8f557db9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.306 239969 DEBUG nova.network.neutron [req-8022e7f3-ed93-4823-8f6b-020be32e52ee req-e2b57435-a8ae-46ca-bfe7-b3be8f557db9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Refreshing network info cache for port dccf97d4-5209-4347-9f07-83ba148d3126 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.430 239969 DEBUG oslo_concurrency.lockutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.431 239969 DEBUG oslo_concurrency.lockutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.432 239969 DEBUG oslo_concurrency.lockutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.432 239969 DEBUG oslo_concurrency.lockutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.433 239969 DEBUG oslo_concurrency.lockutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.434 239969 INFO nova.compute.manager [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Terminating instance
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.436 239969 DEBUG nova.compute.manager [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:28:31 compute-0 kernel: tapdccf97d4-52 (unregistering): left promiscuous mode
Jan 26 16:28:31 compute-0 NetworkManager[48954]: <info>  [1769444911.4990] device (tapdccf97d4-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.506 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:31 compute-0 ovn_controller[146046]: 2026-01-26T16:28:31Z|01577|binding|INFO|Releasing lport dccf97d4-5209-4347-9f07-83ba148d3126 from this chassis (sb_readonly=0)
Jan 26 16:28:31 compute-0 ovn_controller[146046]: 2026-01-26T16:28:31Z|01578|binding|INFO|Setting lport dccf97d4-5209-4347-9f07-83ba148d3126 down in Southbound
Jan 26 16:28:31 compute-0 ovn_controller[146046]: 2026-01-26T16:28:31Z|01579|binding|INFO|Removing iface tapdccf97d4-52 ovn-installed in OVS
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.510 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.528 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d5:66:73 10.100.0.6'], port_security=['fa:16:3e:d5:66:73 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b4e64366-a873-483b-a2b9-3d3afc79f7d4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f814bd45d164be6827f7ef54ed07f5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8ca05add-5079-41f1-84cc-371e0b63d227 94740064-d28f-4a3e-bee4-e8145220237e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b7416312-6ee5-4c60-a0ef-0cc203c3bf4f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=dccf97d4-5209-4347-9f07-83ba148d3126) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.529 156105 INFO neutron.agent.ovn.metadata.agent [-] Port dccf97d4-5209-4347-9f07-83ba148d3126 in datapath fed7ea2b-3239-4a92-a662-8de7d6a5b54f unbound from our chassis
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.532 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fed7ea2b-3239-4a92-a662-8de7d6a5b54f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.533 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[49c63ca4-0796-4a8c-ae2f-079de9c274a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.534 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f namespace which is not needed anymore
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:31 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000092.scope: Deactivated successfully.
Jan 26 16:28:31 compute-0 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d00000092.scope: Consumed 20.159s CPU time.
Jan 26 16:28:31 compute-0 systemd-machined[208061]: Machine qemu-176-instance-00000092 terminated.
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.659 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.673 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.678 239969 INFO nova.virt.libvirt.driver [-] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Instance destroyed successfully.
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.679 239969 DEBUG nova.objects.instance [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lazy-loading 'resources' on Instance uuid b4e64366-a873-483b-a2b9-3d3afc79f7d4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:31 compute-0 neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f[366903]: [NOTICE]   (366907) : haproxy version is 2.8.14-c23fe91
Jan 26 16:28:31 compute-0 neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f[366903]: [NOTICE]   (366907) : path to executable is /usr/sbin/haproxy
Jan 26 16:28:31 compute-0 neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f[366903]: [WARNING]  (366907) : Exiting Master process...
Jan 26 16:28:31 compute-0 neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f[366903]: [WARNING]  (366907) : Exiting Master process...
Jan 26 16:28:31 compute-0 neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f[366903]: [ALERT]    (366907) : Current worker (366909) exited with code 143 (Terminated)
Jan 26 16:28:31 compute-0 neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f[366903]: [WARNING]  (366907) : All workers exited. Exiting... (0)
Jan 26 16:28:31 compute-0 systemd[1]: libpod-171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960.scope: Deactivated successfully.
Jan 26 16:28:31 compute-0 podman[369770]: 2026-01-26 16:28:31.746446981 +0000 UTC m=+0.065223958 container died 171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960-userdata-shm.mount: Deactivated successfully.
Jan 26 16:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ebe0b011ab1b6817d14a724553ada7ad9613b81f1d6cb7819cefe4302460a96-merged.mount: Deactivated successfully.
Jan 26 16:28:31 compute-0 podman[369770]: 2026-01-26 16:28:31.786793859 +0000 UTC m=+0.105570786 container cleanup 171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:28:31 compute-0 systemd[1]: libpod-conmon-171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960.scope: Deactivated successfully.
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.830 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:31 compute-0 podman[369810]: 2026-01-26 16:28:31.851056383 +0000 UTC m=+0.041370874 container remove 171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.857 239969 DEBUG nova.virt.libvirt.vif [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:27:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-468921711',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-75910484-access_point-468921711',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-75910484-acce',id=146,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNmPYHlpHuZpOeZCl7BHQ216cd8u50ZxFCXW833oHGB4gBbAp9eFvd6P3qjkcGf53JTK0QFiWuKcgJqP7NdWwDFlJ24rb8FRQzo1dBCyWpi8odGazdOwXvkLXo0bhtlZwA==',key_name='tempest-TestSecurityGroupsBasicOps-1082581796',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:27:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f814bd45d164be6827f7ef54ed07f5f',ramdisk_id='',reservation_id='r-bcmkbnyh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-75910484',owner_user_name='tempest-TestSecurityGroupsBasicOps-75910484-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:27:16Z,user_data=None,user_id='ace5f36058684b5782589457d9e15921',uuid=b4e64366-a873-483b-a2b9-3d3afc79f7d4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.857 239969 DEBUG nova.network.os_vif_util [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converting VIF {"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.858 239969 DEBUG nova.network.os_vif_util [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d5:66:73,bridge_name='br-int',has_traffic_filtering=True,id=dccf97d4-5209-4347-9f07-83ba148d3126,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdccf97d4-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.858 239969 DEBUG os_vif [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:66:73,bridge_name='br-int',has_traffic_filtering=True,id=dccf97d4-5209-4347-9f07-83ba148d3126,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdccf97d4-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.860 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.860 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdccf97d4-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.859 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4f37df2b-0e38-4e05-8101-20084470acea]: (4, ('Mon Jan 26 04:28:31 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f (171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960)\n171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960\nMon Jan 26 04:28:31 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f (171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960)\n171581566a982885b3510e6185bcd09ec5190090f2dccb0fd5f7e38398921960\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.862 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1e12b1-25a3-4b47-a374-2103fa443288]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.861 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.863 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfed7ea2b-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:28:31 compute-0 kernel: tapfed7ea2b-30: left promiscuous mode
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.882 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca0c85f-f3c1-4f31-947d-5ff129da2343]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:31 compute-0 nova_compute[239965]: 2026-01-26 16:28:31.885 239969 INFO os_vif [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d5:66:73,bridge_name='br-int',has_traffic_filtering=True,id=dccf97d4-5209-4347-9f07-83ba148d3126,network=Network(fed7ea2b-3239-4a92-a662-8de7d6a5b54f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdccf97d4-52')
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.898 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fb977d8b-3da9-4640-8b8c-cc290c89c43a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.900 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b58ca49d-025a-4294-952a-c6c3027662bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.918 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f6a6a5-5f58-48b9-bfa2-4619393acc2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 632639, 'reachable_time': 30391, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 369838, 'error': None, 'target': 'ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dfed7ea2b\x2d3239\x2d4a92\x2da662\x2d8de7d6a5b54f.mount: Deactivated successfully.
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.920 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fed7ea2b-3239-4a92-a662-8de7d6a5b54f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:28:31 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:31.920 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ced52e88-bb18-4e59-89fc-11a9693e2491]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:32 compute-0 nova_compute[239965]: 2026-01-26 16:28:32.148 239969 INFO nova.virt.libvirt.driver [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Deleting instance files /var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4_del
Jan 26 16:28:32 compute-0 nova_compute[239965]: 2026-01-26 16:28:32.149 239969 INFO nova.virt.libvirt.driver [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Deletion of /var/lib/nova/instances/b4e64366-a873-483b-a2b9-3d3afc79f7d4_del complete
Jan 26 16:28:32 compute-0 nova_compute[239965]: 2026-01-26 16:28:32.574 239969 INFO nova.compute.manager [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Took 1.14 seconds to destroy the instance on the hypervisor.
Jan 26 16:28:32 compute-0 nova_compute[239965]: 2026-01-26 16:28:32.575 239969 DEBUG oslo.service.loopingcall [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:28:32 compute-0 nova_compute[239965]: 2026-01-26 16:28:32.576 239969 DEBUG nova.compute.manager [-] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:28:32 compute-0 nova_compute[239965]: 2026-01-26 16:28:32.577 239969 DEBUG nova.network.neutron [-] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:28:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 148 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Jan 26 16:28:32 compute-0 ceph-mon[75140]: pgmap v2346: 305 pgs: 305 active+clean; 148 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 181 op/s
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.201 239969 DEBUG nova.network.neutron [req-8022e7f3-ed93-4823-8f6b-020be32e52ee req-e2b57435-a8ae-46ca-bfe7-b3be8f557db9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Updated VIF entry in instance network info cache for port dccf97d4-5209-4347-9f07-83ba148d3126. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.202 239969 DEBUG nova.network.neutron [req-8022e7f3-ed93-4823-8f6b-020be32e52ee req-e2b57435-a8ae-46ca-bfe7-b3be8f557db9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Updating instance_info_cache with network_info: [{"id": "dccf97d4-5209-4347-9f07-83ba148d3126", "address": "fa:16:3e:d5:66:73", "network": {"id": "fed7ea2b-3239-4a92-a662-8de7d6a5b54f", "bridge": "br-int", "label": "tempest-network-smoke--883953454", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f814bd45d164be6827f7ef54ed07f5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdccf97d4-52", "ovs_interfaceid": "dccf97d4-5209-4347-9f07-83ba148d3126", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.229 239969 DEBUG oslo_concurrency.lockutils [req-8022e7f3-ed93-4823-8f6b-020be32e52ee req-e2b57435-a8ae-46ca-bfe7-b3be8f557db9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-b4e64366-a873-483b-a2b9-3d3afc79f7d4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.650 239969 DEBUG nova.network.neutron [-] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.756 239969 DEBUG nova.compute.manager [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received event network-vif-unplugged-dccf97d4-5209-4347-9f07-83ba148d3126 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.757 239969 DEBUG oslo_concurrency.lockutils [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.757 239969 DEBUG oslo_concurrency.lockutils [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.757 239969 DEBUG oslo_concurrency.lockutils [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.758 239969 DEBUG nova.compute.manager [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] No waiting events found dispatching network-vif-unplugged-dccf97d4-5209-4347-9f07-83ba148d3126 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.758 239969 DEBUG nova.compute.manager [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received event network-vif-unplugged-dccf97d4-5209-4347-9f07-83ba148d3126 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.758 239969 DEBUG nova.compute.manager [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received event network-vif-plugged-dccf97d4-5209-4347-9f07-83ba148d3126 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.758 239969 DEBUG oslo_concurrency.lockutils [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.759 239969 DEBUG oslo_concurrency.lockutils [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.759 239969 DEBUG oslo_concurrency.lockutils [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.759 239969 DEBUG nova.compute.manager [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] No waiting events found dispatching network-vif-plugged-dccf97d4-5209-4347-9f07-83ba148d3126 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.760 239969 WARNING nova.compute.manager [req-7154f143-b82a-4cde-9491-c8d6bd059297 req-557ef6c4-75b7-428d-b2f0-70eaf4baa0ac a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received unexpected event network-vif-plugged-dccf97d4-5209-4347-9f07-83ba148d3126 for instance with vm_state active and task_state deleting.
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.761 239969 INFO nova.compute.manager [-] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Took 1.18 seconds to deallocate network for instance.
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.818 239969 DEBUG oslo_concurrency.lockutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.819 239969 DEBUG oslo_concurrency.lockutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.861 239969 DEBUG nova.compute.manager [req-ebc4418b-1ac0-4bd5-84ec-f384d1c4a2d7 req-fc94fdea-2e64-4b2d-96ad-c2c019551fa0 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Received event network-vif-deleted-dccf97d4-5209-4347-9f07-83ba148d3126 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.884 239969 DEBUG oslo_concurrency.processutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:33 compute-0 nova_compute[239965]: 2026-01-26 16:28:33.947 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:28:34 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/987051912' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.427 239969 DEBUG oslo_concurrency.processutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.435 239969 DEBUG nova.compute.provider_tree [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.458 239969 DEBUG nova.scheduler.client.report [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.477 239969 DEBUG oslo_concurrency.lockutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.502 239969 INFO nova.scheduler.client.report [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Deleted allocations for instance b4e64366-a873-483b-a2b9-3d3afc79f7d4
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.518 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.519 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.542 239969 DEBUG nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:28:34 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/987051912' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.577 239969 DEBUG oslo_concurrency.lockutils [None req-93dd75be-2713-439a-8f45-de03ae555659 ace5f36058684b5782589457d9e15921 1f814bd45d164be6827f7ef54ed07f5f - - default default] Lock "b4e64366-a873-483b-a2b9-3d3afc79f7d4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.633 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.634 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.644 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.646 239969 INFO nova.compute.claims [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:28:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 148 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 117 op/s
Jan 26 16:28:34 compute-0 nova_compute[239965]: 2026-01-26 16:28:34.772 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:28:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1985692676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.388 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.395 239969 DEBUG nova.compute.provider_tree [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:28:35 compute-0 ceph-mon[75140]: pgmap v2347: 305 pgs: 305 active+clean; 148 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 117 op/s
Jan 26 16:28:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1985692676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.585 239969 DEBUG nova.scheduler.client.report [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.609 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.610 239969 DEBUG nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.666 239969 DEBUG nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.667 239969 DEBUG nova.network.neutron [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.688 239969 INFO nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.703 239969 DEBUG nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.813 239969 DEBUG nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.816 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.817 239969 INFO nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Creating image(s)
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.849 239969 DEBUG nova.storage.rbd_utils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.876 239969 DEBUG nova.storage.rbd_utils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.899 239969 DEBUG nova.storage.rbd_utils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.904 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:35 compute-0 nova_compute[239965]: 2026-01-26 16:28:35.996 239969 DEBUG nova.policy [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'e865b82cb8db42568f95d2257ed9b158', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f543c15474f7451fa07b01e79dde95dd', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.001 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.002 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.003 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.003 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.026 239969 DEBUG nova.storage.rbd_utils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.030 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.351 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.321s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.436 239969 DEBUG nova.storage.rbd_utils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] resizing rbd image 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.529 239969 DEBUG nova.objects.instance [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'migration_context' on Instance uuid 77d5bfbb-20c1-4450-acf6-d6cc979c6657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.544 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.545 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Ensure instance console log exists: /var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.545 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.545 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.546 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.686 239969 DEBUG nova.network.neutron [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Successfully created port: 215fb84d-ba96-47bc-8e60-a40380dba651 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:28:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 143 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 512 KiB/s wr, 147 op/s
Jan 26 16:28:36 compute-0 ceph-mon[75140]: pgmap v2348: 305 pgs: 305 active+clean; 143 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 512 KiB/s wr, 147 op/s
Jan 26 16:28:36 compute-0 nova_compute[239965]: 2026-01-26 16:28:36.862 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:37 compute-0 ovn_controller[146046]: 2026-01-26T16:28:37Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:4f:e5 10.100.0.13
Jan 26 16:28:37 compute-0 ovn_controller[146046]: 2026-01-26T16:28:37Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:4f:e5 10.100.0.13
Jan 26 16:28:37 compute-0 nova_compute[239965]: 2026-01-26 16:28:37.754 239969 DEBUG nova.network.neutron [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Successfully updated port: 215fb84d-ba96-47bc-8e60-a40380dba651 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:28:37 compute-0 nova_compute[239965]: 2026-01-26 16:28:37.771 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:37 compute-0 nova_compute[239965]: 2026-01-26 16:28:37.771 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquired lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:37 compute-0 nova_compute[239965]: 2026-01-26 16:28:37.772 239969 DEBUG nova.network.neutron [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:28:37 compute-0 nova_compute[239965]: 2026-01-26 16:28:37.875 239969 DEBUG nova.compute.manager [req-2a5bba15-2d92-4d87-8385-a57bd6d59b94 req-61f0846a-8fb5-4a72-b60c-70a645477350 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Received event network-changed-215fb84d-ba96-47bc-8e60-a40380dba651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:37 compute-0 nova_compute[239965]: 2026-01-26 16:28:37.876 239969 DEBUG nova.compute.manager [req-2a5bba15-2d92-4d87-8385-a57bd6d59b94 req-61f0846a-8fb5-4a72-b60c-70a645477350 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Refreshing instance network info cache due to event network-changed-215fb84d-ba96-47bc-8e60-a40380dba651. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:37 compute-0 nova_compute[239965]: 2026-01-26 16:28:37.876 239969 DEBUG oslo_concurrency.lockutils [req-2a5bba15-2d92-4d87-8385-a57bd6d59b94 req-61f0846a-8fb5-4a72-b60c-70a645477350 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:37 compute-0 nova_compute[239965]: 2026-01-26 16:28:37.983 239969 DEBUG nova.network.neutron [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:28:38 compute-0 ovn_controller[146046]: 2026-01-26T16:28:38Z|01580|binding|INFO|Releasing lport 2e120840-5ef3-452f-9957-cc378e107313 from this chassis (sb_readonly=0)
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.485 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 184 op/s
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.774 239969 DEBUG nova.network.neutron [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Updating instance_info_cache with network_info: [{"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.799 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Releasing lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.800 239969 DEBUG nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Instance network_info: |[{"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.800 239969 DEBUG oslo_concurrency.lockutils [req-2a5bba15-2d92-4d87-8385-a57bd6d59b94 req-61f0846a-8fb5-4a72-b60c-70a645477350 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.800 239969 DEBUG nova.network.neutron [req-2a5bba15-2d92-4d87-8385-a57bd6d59b94 req-61f0846a-8fb5-4a72-b60c-70a645477350 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Refreshing network info cache for port 215fb84d-ba96-47bc-8e60-a40380dba651 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.803 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Start _get_guest_xml network_info=[{"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:28:38 compute-0 ceph-mon[75140]: pgmap v2349: 305 pgs: 305 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.8 MiB/s wr, 184 op/s
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.810 239969 WARNING nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.824 239969 DEBUG nova.virt.libvirt.host [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.825 239969 DEBUG nova.virt.libvirt.host [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.829 239969 DEBUG nova.virt.libvirt.host [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.830 239969 DEBUG nova.virt.libvirt.host [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.831 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.831 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.832 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.833 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.833 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.834 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.834 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.835 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.835 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.836 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.836 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.836 239969 DEBUG nova.virt.hardware [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.842 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:38 compute-0 nova_compute[239965]: 2026-01-26 16:28:38.951 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:28:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014899325' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:39 compute-0 nova_compute[239965]: 2026-01-26 16:28:39.427 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:39 compute-0 nova_compute[239965]: 2026-01-26 16:28:39.448 239969 DEBUG nova.storage.rbd_utils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:39 compute-0 nova_compute[239965]: 2026-01-26 16:28:39.452 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1014899325' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:28:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562994913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:39 compute-0 nova_compute[239965]: 2026-01-26 16:28:39.984 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:39 compute-0 nova_compute[239965]: 2026-01-26 16:28:39.987 239969 DEBUG nova.virt.libvirt.vif [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1604340945',display_name='tempest-TestNetworkBasicOps-server-1604340945',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1604340945',id=149,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZriR7rB4MIcUUyUbpz2OuPTEddRjlh9uHycmLwLTvluzRkzF+TTyeP28ImqJsX4tFM+OGmts0tnvky1Xk5slfLPZkvJyIoOApg38qSZxUkUxtHw31k8EnphdwVr4ezWg==',key_name='tempest-TestNetworkBasicOps-15267660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-s8x2vxoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:28:35Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=77d5bfbb-20c1-4450-acf6-d6cc979c6657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:28:39 compute-0 nova_compute[239965]: 2026-01-26 16:28:39.988 239969 DEBUG nova.network.os_vif_util [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:39 compute-0 nova_compute[239965]: 2026-01-26 16:28:39.989 239969 DEBUG nova.network.os_vif_util [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:3c:a2,bridge_name='br-int',has_traffic_filtering=True,id=215fb84d-ba96-47bc-8e60-a40380dba651,network=Network(7ab762de-31ea-484b-890a-4627bd72d470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap215fb84d-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:39 compute-0 nova_compute[239965]: 2026-01-26 16:28:39.991 239969 DEBUG nova.objects.instance [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'pci_devices' on Instance uuid 77d5bfbb-20c1-4450-acf6-d6cc979c6657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.015 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <uuid>77d5bfbb-20c1-4450-acf6-d6cc979c6657</uuid>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <name>instance-00000095</name>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkBasicOps-server-1604340945</nova:name>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:28:38</nova:creationTime>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <nova:user uuid="e865b82cb8db42568f95d2257ed9b158">tempest-TestNetworkBasicOps-1371722765-project-member</nova:user>
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <nova:project uuid="f543c15474f7451fa07b01e79dde95dd">tempest-TestNetworkBasicOps-1371722765</nova:project>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <nova:port uuid="215fb84d-ba96-47bc-8e60-a40380dba651">
Jan 26 16:28:40 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <system>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <entry name="serial">77d5bfbb-20c1-4450-acf6-d6cc979c6657</entry>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <entry name="uuid">77d5bfbb-20c1-4450-acf6-d6cc979c6657</entry>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     </system>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <os>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   </os>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <features>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   </features>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk">
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       </source>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk.config">
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       </source>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:28:40 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:33:3c:a2"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <target dev="tap215fb84d-ba"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657/console.log" append="off"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <video>
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     </video>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:28:40 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:28:40 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:28:40 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:28:40 compute-0 nova_compute[239965]: </domain>
Jan 26 16:28:40 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.017 239969 DEBUG nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Preparing to wait for external event network-vif-plugged-215fb84d-ba96-47bc-8e60-a40380dba651 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.017 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.018 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.018 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.018 239969 DEBUG nova.virt.libvirt.vif [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1604340945',display_name='tempest-TestNetworkBasicOps-server-1604340945',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1604340945',id=149,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZriR7rB4MIcUUyUbpz2OuPTEddRjlh9uHycmLwLTvluzRkzF+TTyeP28ImqJsX4tFM+OGmts0tnvky1Xk5slfLPZkvJyIoOApg38qSZxUkUxtHw31k8EnphdwVr4ezWg==',key_name='tempest-TestNetworkBasicOps-15267660',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-s8x2vxoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:28:35Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=77d5bfbb-20c1-4450-acf6-d6cc979c6657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.019 239969 DEBUG nova.network.os_vif_util [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.019 239969 DEBUG nova.network.os_vif_util [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:33:3c:a2,bridge_name='br-int',has_traffic_filtering=True,id=215fb84d-ba96-47bc-8e60-a40380dba651,network=Network(7ab762de-31ea-484b-890a-4627bd72d470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap215fb84d-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.020 239969 DEBUG os_vif [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:3c:a2,bridge_name='br-int',has_traffic_filtering=True,id=215fb84d-ba96-47bc-8e60-a40380dba651,network=Network(7ab762de-31ea-484b-890a-4627bd72d470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap215fb84d-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.020 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.021 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.021 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.023 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.024 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap215fb84d-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.024 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap215fb84d-ba, col_values=(('external_ids', {'iface-id': '215fb84d-ba96-47bc-8e60-a40380dba651', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:33:3c:a2', 'vm-uuid': '77d5bfbb-20c1-4450-acf6-d6cc979c6657'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.025 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:40 compute-0 NetworkManager[48954]: <info>  [1769444920.0265] manager: (tap215fb84d-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.034 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.034 239969 INFO os_vif [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:33:3c:a2,bridge_name='br-int',has_traffic_filtering=True,id=215fb84d-ba96-47bc-8e60-a40380dba651,network=Network(7ab762de-31ea-484b-890a-4627bd72d470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap215fb84d-ba')
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.107 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.108 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.108 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] No VIF found with MAC fa:16:3e:33:3c:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.109 239969 INFO nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Using config drive
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.145 239969 DEBUG nova.storage.rbd_utils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.414 239969 DEBUG nova.network.neutron [req-2a5bba15-2d92-4d87-8385-a57bd6d59b94 req-61f0846a-8fb5-4a72-b60c-70a645477350 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Updated VIF entry in instance network info cache for port 215fb84d-ba96-47bc-8e60-a40380dba651. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.415 239969 DEBUG nova.network.neutron [req-2a5bba15-2d92-4d87-8385-a57bd6d59b94 req-61f0846a-8fb5-4a72-b60c-70a645477350 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Updating instance_info_cache with network_info: [{"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.432 239969 DEBUG oslo_concurrency.lockutils [req-2a5bba15-2d92-4d87-8385-a57bd6d59b94 req-61f0846a-8fb5-4a72-b60c-70a645477350 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.543 239969 INFO nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Creating config drive at /var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657/disk.config
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.552 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpshhqhg_k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.722 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpshhqhg_k" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 3.8 MiB/s wr, 102 op/s
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.763 239969 DEBUG nova.storage.rbd_utils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] rbd image 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.767 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657/disk.config 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2562994913' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:40 compute-0 ceph-mon[75140]: pgmap v2350: 305 pgs: 305 active+clean; 202 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 3.8 MiB/s wr, 102 op/s
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.936 239969 DEBUG oslo_concurrency.processutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657/disk.config 77d5bfbb-20c1-4450-acf6-d6cc979c6657_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:40 compute-0 nova_compute[239965]: 2026-01-26 16:28:40.937 239969 INFO nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Deleting local config drive /var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657/disk.config because it was imported into RBD.
Jan 26 16:28:41 compute-0 kernel: tap215fb84d-ba: entered promiscuous mode
Jan 26 16:28:41 compute-0 NetworkManager[48954]: <info>  [1769444921.0113] manager: (tap215fb84d-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/639)
Jan 26 16:28:41 compute-0 ovn_controller[146046]: 2026-01-26T16:28:41Z|01581|binding|INFO|Claiming lport 215fb84d-ba96-47bc-8e60-a40380dba651 for this chassis.
Jan 26 16:28:41 compute-0 ovn_controller[146046]: 2026-01-26T16:28:41Z|01582|binding|INFO|215fb84d-ba96-47bc-8e60-a40380dba651: Claiming fa:16:3e:33:3c:a2 10.100.0.13
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.022 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:3c:a2 10.100.0.13'], port_security=['fa:16:3e:33:3c:a2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '77d5bfbb-20c1-4450-acf6-d6cc979c6657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ab762de-31ea-484b-890a-4627bd72d470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9e8005a1-76bf-47c0-9489-eb0c86e71c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abd4cd3d-376b-421a-9847-17dca5bd6537, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=215fb84d-ba96-47bc-8e60-a40380dba651) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.023 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 215fb84d-ba96-47bc-8e60-a40380dba651 in datapath 7ab762de-31ea-484b-890a-4627bd72d470 bound to our chassis
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.024 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ab762de-31ea-484b-890a-4627bd72d470
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.041 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[52bb993a-826c-4f1e-83e1-762bf2aa56df]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.041 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ab762de-31 in ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.043 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ab762de-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.043 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c2382877-5d95-4aa6-acf6-9efb14740b00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.044 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4e9ea76f-93c2-4796-8ee7-caccedaba2f0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_controller[146046]: 2026-01-26T16:28:41Z|01583|binding|INFO|Setting lport 215fb84d-ba96-47bc-8e60-a40380dba651 ovn-installed in OVS
Jan 26 16:28:41 compute-0 ovn_controller[146046]: 2026-01-26T16:28:41Z|01584|binding|INFO|Setting lport 215fb84d-ba96-47bc-8e60-a40380dba651 up in Southbound
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.050 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:41 compute-0 systemd-machined[208061]: New machine qemu-180-instance-00000095.
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.055 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.061 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9c407f-584c-408c-85c4-7eede7f969f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 systemd[1]: Started Virtual Machine qemu-180-instance-00000095.
Jan 26 16:28:41 compute-0 systemd-udevd[370191]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:28:41 compute-0 NetworkManager[48954]: <info>  [1769444921.0863] device (tap215fb84d-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:28:41 compute-0 NetworkManager[48954]: <info>  [1769444921.0877] device (tap215fb84d-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.089 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a0c1cc44-25dc-4447-971e-dffa0ad97aee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.131 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fbbdfb60-b07e-4d99-9389-97d66da60963]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 NetworkManager[48954]: <info>  [1769444921.1386] manager: (tap7ab762de-30): new Veth device (/org/freedesktop/NetworkManager/Devices/640)
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.138 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5c394d-d836-4394-82df-02c762b00e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.191 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2449a14f-e0d8-487a-a12d-43157db440c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.195 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b5152497-ae11-408e-81f9-7b073f1da9e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 NetworkManager[48954]: <info>  [1769444921.2250] device (tap7ab762de-30): carrier: link connected
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.233 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[36e1017c-bd69-494b-8b6a-5f1fd18955b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.263 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e0930e58-bc09-4a0f-a16c-e2a6ad00efb6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ab762de-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:fd:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641203, 'reachable_time': 39953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370221, 'error': None, 'target': 'ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.285 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8df4dc7b-9b72-4559-a948-2597bb92a69c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed7:fd49'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 641203, 'tstamp': 641203}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370222, 'error': None, 'target': 'ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.322 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a3c3f38d-26ae-422c-83bb-2a50ddf0c760]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ab762de-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d7:fd:49'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 454], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641203, 'reachable_time': 39953, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370223, 'error': None, 'target': 'ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.357 239969 DEBUG nova.compute.manager [req-a2fa7366-4d68-4874-a562-cfe8d36feacd req-34ee7d6f-ade4-4b15-9422-21812a34d3a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Received event network-vif-plugged-215fb84d-ba96-47bc-8e60-a40380dba651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.358 239969 DEBUG oslo_concurrency.lockutils [req-a2fa7366-4d68-4874-a562-cfe8d36feacd req-34ee7d6f-ade4-4b15-9422-21812a34d3a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.358 239969 DEBUG oslo_concurrency.lockutils [req-a2fa7366-4d68-4874-a562-cfe8d36feacd req-34ee7d6f-ade4-4b15-9422-21812a34d3a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.359 239969 DEBUG oslo_concurrency.lockutils [req-a2fa7366-4d68-4874-a562-cfe8d36feacd req-34ee7d6f-ade4-4b15-9422-21812a34d3a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.359 239969 DEBUG nova.compute.manager [req-a2fa7366-4d68-4874-a562-cfe8d36feacd req-34ee7d6f-ade4-4b15-9422-21812a34d3a8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Processing event network-vif-plugged-215fb84d-ba96-47bc-8e60-a40380dba651 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.365 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4a529f99-cd86-45aa-8730-5c776cec21a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.439 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4f33ea9b-861c-4e4b-a908-5289f4c7d7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.441 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ab762de-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.441 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.442 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ab762de-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:41 compute-0 NetworkManager[48954]: <info>  [1769444921.4836] manager: (tap7ab762de-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:41 compute-0 kernel: tap7ab762de-30: entered promiscuous mode
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.488 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.489 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ab762de-30, col_values=(('external_ids', {'iface-id': '78647157-c9a6-4f71-b72a-5340c958f292'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:41 compute-0 ovn_controller[146046]: 2026-01-26T16:28:41Z|01585|binding|INFO|Releasing lport 78647157-c9a6-4f71-b72a-5340c958f292 from this chassis (sb_readonly=0)
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.491 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.516 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.517 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ab762de-31ea-484b-890a-4627bd72d470.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ab762de-31ea-484b-890a-4627bd72d470.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.519 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[adcc0cfc-51e3-4f10-83e4-e9c298d4df75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.520 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-7ab762de-31ea-484b-890a-4627bd72d470
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/7ab762de-31ea-484b-890a-4627bd72d470.pid.haproxy
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 7ab762de-31ea-484b-890a-4627bd72d470
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:28:41 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:41.521 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470', 'env', 'PROCESS_TAG=haproxy-7ab762de-31ea-484b-890a-4627bd72d470', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ab762de-31ea-484b-890a-4627bd72d470.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.798 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444906.7971547, 0f79507c-8a44-47f3-a461-6377eed67521 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.799 239969 INFO nova.compute.manager [-] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] VM Stopped (Lifecycle Event)
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.817 239969 DEBUG nova.compute.manager [None req-42a959dc-1f1e-447d-8675-43b744f9c998 - - - - - -] [instance: 0f79507c-8a44-47f3-a461-6377eed67521] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.912 239969 DEBUG nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.913 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444921.912497, 77d5bfbb-20c1-4450-acf6-d6cc979c6657 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.914 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] VM Started (Lifecycle Event)
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.922 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.925 239969 INFO nova.virt.libvirt.driver [-] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Instance spawned successfully.
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.926 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.931 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.935 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.948 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.948 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.949 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.949 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.949 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.950 239969 DEBUG nova.virt.libvirt.driver [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:28:41 compute-0 podman[370297]: 2026-01-26 16:28:41.960117062 +0000 UTC m=+0.058923654 container create 4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.960 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.960 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444921.9160202, 77d5bfbb-20c1-4450-acf6-d6cc979c6657 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.960 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] VM Paused (Lifecycle Event)
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.982 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.989 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444921.9179559, 77d5bfbb-20c1-4450-acf6-d6cc979c6657 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:41 compute-0 nova_compute[239965]: 2026-01-26 16:28:41.990 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] VM Resumed (Lifecycle Event)
Jan 26 16:28:42 compute-0 nova_compute[239965]: 2026-01-26 16:28:42.008 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:42 compute-0 systemd[1]: Started libpod-conmon-4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e.scope.
Jan 26 16:28:42 compute-0 nova_compute[239965]: 2026-01-26 16:28:42.012 239969 INFO nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Took 6.20 seconds to spawn the instance on the hypervisor.
Jan 26 16:28:42 compute-0 nova_compute[239965]: 2026-01-26 16:28:42.012 239969 DEBUG nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:42 compute-0 nova_compute[239965]: 2026-01-26 16:28:42.013 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:28:42 compute-0 podman[370297]: 2026-01-26 16:28:41.93022723 +0000 UTC m=+0.029033832 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:28:42 compute-0 nova_compute[239965]: 2026-01-26 16:28:42.029 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:28:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b3072648d2c56b6084b0f2edcd5b74e84031c1657c9ef5c780c22df1c94958a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:42 compute-0 podman[370297]: 2026-01-26 16:28:42.052709039 +0000 UTC m=+0.151515651 container init 4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 16:28:42 compute-0 podman[370297]: 2026-01-26 16:28:42.06047448 +0000 UTC m=+0.159281062 container start 4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 26 16:28:42 compute-0 podman[370306]: 2026-01-26 16:28:42.065053472 +0000 UTC m=+0.076816683 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:28:42 compute-0 neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470[370330]: [NOTICE]   (370351) : New worker (370356) forked
Jan 26 16:28:42 compute-0 neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470[370330]: [NOTICE]   (370351) : Loading success.
Jan 26 16:28:42 compute-0 nova_compute[239965]: 2026-01-26 16:28:42.104 239969 INFO nova.compute.manager [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Took 7.49 seconds to build instance.
Jan 26 16:28:42 compute-0 podman[370308]: 2026-01-26 16:28:42.105101073 +0000 UTC m=+0.117381746 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:28:42 compute-0 nova_compute[239965]: 2026-01-26 16:28:42.119 239969 DEBUG oslo_concurrency.lockutils [None req-67e72a46-6d7c-4fcc-a486-fa60435d3b35 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Jan 26 16:28:43 compute-0 ceph-mon[75140]: pgmap v2351: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 378 KiB/s rd, 3.9 MiB/s wr, 127 op/s
Jan 26 16:28:43 compute-0 nova_compute[239965]: 2026-01-26 16:28:43.490 239969 DEBUG nova.compute.manager [req-84afeff2-b561-4964-b530-8ae94161e489 req-06e2d49a-8bf5-47c0-81e4-c9c38ac258f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Received event network-vif-plugged-215fb84d-ba96-47bc-8e60-a40380dba651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:43 compute-0 nova_compute[239965]: 2026-01-26 16:28:43.492 239969 DEBUG oslo_concurrency.lockutils [req-84afeff2-b561-4964-b530-8ae94161e489 req-06e2d49a-8bf5-47c0-81e4-c9c38ac258f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:43 compute-0 nova_compute[239965]: 2026-01-26 16:28:43.492 239969 DEBUG oslo_concurrency.lockutils [req-84afeff2-b561-4964-b530-8ae94161e489 req-06e2d49a-8bf5-47c0-81e4-c9c38ac258f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:43 compute-0 nova_compute[239965]: 2026-01-26 16:28:43.493 239969 DEBUG oslo_concurrency.lockutils [req-84afeff2-b561-4964-b530-8ae94161e489 req-06e2d49a-8bf5-47c0-81e4-c9c38ac258f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:43 compute-0 nova_compute[239965]: 2026-01-26 16:28:43.493 239969 DEBUG nova.compute.manager [req-84afeff2-b561-4964-b530-8ae94161e489 req-06e2d49a-8bf5-47c0-81e4-c9c38ac258f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] No waiting events found dispatching network-vif-plugged-215fb84d-ba96-47bc-8e60-a40380dba651 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:43 compute-0 nova_compute[239965]: 2026-01-26 16:28:43.494 239969 WARNING nova.compute.manager [req-84afeff2-b561-4964-b530-8ae94161e489 req-06e2d49a-8bf5-47c0-81e4-c9c38ac258f1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Received unexpected event network-vif-plugged-215fb84d-ba96-47bc-8e60-a40380dba651 for instance with vm_state active and task_state None.
Jan 26 16:28:43 compute-0 nova_compute[239965]: 2026-01-26 16:28:43.908 239969 INFO nova.compute.manager [None req-1dd17489-8464-4c7a-8745-3b620f0dd307 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Get console output
Jan 26 16:28:43 compute-0 nova_compute[239965]: 2026-01-26 16:28:43.916 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:28:43 compute-0 nova_compute[239965]: 2026-01-26 16:28:43.954 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:44 compute-0 nova_compute[239965]: 2026-01-26 16:28:44.167 239969 DEBUG oslo_concurrency.lockutils [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:44 compute-0 nova_compute[239965]: 2026-01-26 16:28:44.168 239969 DEBUG oslo_concurrency.lockutils [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:44 compute-0 nova_compute[239965]: 2026-01-26 16:28:44.168 239969 DEBUG nova.compute.manager [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:44 compute-0 nova_compute[239965]: 2026-01-26 16:28:44.173 239969 DEBUG nova.compute.manager [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338
Jan 26 16:28:44 compute-0 nova_compute[239965]: 2026-01-26 16:28:44.175 239969 DEBUG nova.objects.instance [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'flavor' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:44 compute-0 nova_compute[239965]: 2026-01-26 16:28:44.211 239969 DEBUG nova.virt.libvirt.driver [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:28:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Jan 26 16:28:44 compute-0 ceph-mon[75140]: pgmap v2352: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 367 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Jan 26 16:28:45 compute-0 nova_compute[239965]: 2026-01-26 16:28:45.053 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:46 compute-0 kernel: tap23d47336-e4 (unregistering): left promiscuous mode
Jan 26 16:28:46 compute-0 NetworkManager[48954]: <info>  [1769444926.4500] device (tap23d47336-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:28:46 compute-0 ovn_controller[146046]: 2026-01-26T16:28:46Z|01586|binding|INFO|Releasing lport 23d47336-e4f3-4764-b5f8-68038fe02faa from this chassis (sb_readonly=0)
Jan 26 16:28:46 compute-0 ovn_controller[146046]: 2026-01-26T16:28:46Z|01587|binding|INFO|Setting lport 23d47336-e4f3-4764-b5f8-68038fe02faa down in Southbound
Jan 26 16:28:46 compute-0 ovn_controller[146046]: 2026-01-26T16:28:46Z|01588|binding|INFO|Removing iface tap23d47336-e4 ovn-installed in OVS
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.479 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.488 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:4f:e5 10.100.0.13'], port_security=['fa:16:3e:5f:4f:e5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3a84510b-c77e-4b69-95cf-39eb5ba96406', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '180e9b46-d482-4e98-8076-c3573609308c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94eb109d-0d0f-4f4d-9984-1552de27451e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=23d47336-e4f3-4764-b5f8-68038fe02faa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.491 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 23d47336-e4f3-4764-b5f8-68038fe02faa in datapath c5dcb653-5134-44c2-bb5f-1ef932033d43 unbound from our chassis
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.494 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c5dcb653-5134-44c2-bb5f-1ef932033d43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.496 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[aabb0624-ef5c-4579-9816-6ec04f6157de]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.498 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 namespace which is not needed anymore
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.501 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:46 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 26 16:28:46 compute-0 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000094.scope: Consumed 14.872s CPU time.
Jan 26 16:28:46 compute-0 systemd-machined[208061]: Machine qemu-179-instance-00000094 terminated.
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.580 239969 DEBUG nova.compute.manager [req-62a60d05-5ec5-414f-84f6-ef16ce919e02 req-ebcfbcaa-6f64-4f75-b1d8-c758dd155b20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Received event network-changed-215fb84d-ba96-47bc-8e60-a40380dba651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.581 239969 DEBUG nova.compute.manager [req-62a60d05-5ec5-414f-84f6-ef16ce919e02 req-ebcfbcaa-6f64-4f75-b1d8-c758dd155b20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Refreshing instance network info cache due to event network-changed-215fb84d-ba96-47bc-8e60-a40380dba651. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.581 239969 DEBUG oslo_concurrency.lockutils [req-62a60d05-5ec5-414f-84f6-ef16ce919e02 req-ebcfbcaa-6f64-4f75-b1d8-c758dd155b20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.582 239969 DEBUG oslo_concurrency.lockutils [req-62a60d05-5ec5-414f-84f6-ef16ce919e02 req-ebcfbcaa-6f64-4f75-b1d8-c758dd155b20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.582 239969 DEBUG nova.network.neutron [req-62a60d05-5ec5-414f-84f6-ef16ce919e02 req-ebcfbcaa-6f64-4f75-b1d8-c758dd155b20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Refreshing network info cache for port 215fb84d-ba96-47bc-8e60-a40380dba651 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:28:46 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[369666]: [NOTICE]   (369670) : haproxy version is 2.8.14-c23fe91
Jan 26 16:28:46 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[369666]: [NOTICE]   (369670) : path to executable is /usr/sbin/haproxy
Jan 26 16:28:46 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[369666]: [WARNING]  (369670) : Exiting Master process...
Jan 26 16:28:46 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[369666]: [WARNING]  (369670) : Exiting Master process...
Jan 26 16:28:46 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[369666]: [ALERT]    (369670) : Current worker (369672) exited with code 143 (Terminated)
Jan 26 16:28:46 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[369666]: [WARNING]  (369670) : All workers exited. Exiting... (0)
Jan 26 16:28:46 compute-0 systemd[1]: libpod-798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47.scope: Deactivated successfully.
Jan 26 16:28:46 compute-0 podman[370389]: 2026-01-26 16:28:46.643579849 +0000 UTC m=+0.054816243 container died 798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.677 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444911.6745672, b4e64366-a873-483b-a2b9-3d3afc79f7d4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.678 239969 INFO nova.compute.manager [-] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] VM Stopped (Lifecycle Event)
Jan 26 16:28:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47-userdata-shm.mount: Deactivated successfully.
Jan 26 16:28:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c5c4a0148be57dda2f9617f85d9500a16d56e304ade231944c9748f034036f3-merged.mount: Deactivated successfully.
Jan 26 16:28:46 compute-0 podman[370389]: 2026-01-26 16:28:46.70241077 +0000 UTC m=+0.113647144 container cleanup 798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.699 239969 DEBUG nova.compute.manager [None req-2e51b4c8-1af9-4c57-ba5c-bfda77d7139f - - - - - -] [instance: b4e64366-a873-483b-a2b9-3d3afc79f7d4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:46 compute-0 systemd[1]: libpod-conmon-798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47.scope: Deactivated successfully.
Jan 26 16:28:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.9 MiB/s wr, 122 op/s
Jan 26 16:28:46 compute-0 podman[370428]: 2026-01-26 16:28:46.789159944 +0000 UTC m=+0.054343362 container remove 798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.799 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[47dbd8ec-a738-475c-b221-1d445efddfa4]: (4, ('Mon Jan 26 04:28:46 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 (798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47)\n798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47\nMon Jan 26 04:28:46 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 (798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47)\n798e0c850255998491cf349cef0d0a74081e2ef17e06338e86257ccce69bca47\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.802 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b45e0b52-a6d8-41a2-8455-4bb322246d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.804 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5dcb653-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:46 compute-0 kernel: tapc5dcb653-50: left promiscuous mode
Jan 26 16:28:46 compute-0 ceph-mon[75140]: pgmap v2353: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 573 KiB/s rd, 3.9 MiB/s wr, 122 op/s
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.843 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:46 compute-0 nova_compute[239965]: 2026-01-26 16:28:46.844 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.849 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[19a619b3-c4de-4926-a196-3c6a29dd967f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.864 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2b7e2bb2-ca88-43e2-bf5f-5fe130d15a40]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.865 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fadca980-ee76-4748-b954-93e83dbd828d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.884 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[70749565-5ed8-42bd-80f6-9bc18c2121b1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639374, 'reachable_time': 40454, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370448, 'error': None, 'target': 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:46 compute-0 systemd[1]: run-netns-ovnmeta\x2dc5dcb653\x2d5134\x2d44c2\x2dbb5f\x2d1ef932033d43.mount: Deactivated successfully.
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.893 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:28:46 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:46.893 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[770b8bf0-1a16-4541-b6aa-512bb1d213fe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:47 compute-0 nova_compute[239965]: 2026-01-26 16:28:47.229 239969 INFO nova.virt.libvirt.driver [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance shutdown successfully after 3 seconds.
Jan 26 16:28:47 compute-0 nova_compute[239965]: 2026-01-26 16:28:47.262 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:47.264 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:47 compute-0 nova_compute[239965]: 2026-01-26 16:28:47.266 239969 INFO nova.virt.libvirt.driver [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance destroyed successfully.
Jan 26 16:28:47 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:47.266 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:28:47 compute-0 nova_compute[239965]: 2026-01-26 16:28:47.266 239969 DEBUG nova.objects.instance [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'numa_topology' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:47 compute-0 nova_compute[239965]: 2026-01-26 16:28:47.287 239969 DEBUG nova.compute.manager [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:47 compute-0 nova_compute[239965]: 2026-01-26 16:28:47.344 239969 DEBUG oslo_concurrency.lockutils [None req-11009a12-a60f-47b6-9b0f-0d8a6811af13 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.177s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.701 239969 DEBUG nova.compute.manager [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-unplugged-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.702 239969 DEBUG oslo_concurrency.lockutils [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.702 239969 DEBUG oslo_concurrency.lockutils [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.703 239969 DEBUG oslo_concurrency.lockutils [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.703 239969 DEBUG nova.compute.manager [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] No waiting events found dispatching network-vif-unplugged-23d47336-e4f3-4764-b5f8-68038fe02faa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.704 239969 WARNING nova.compute.manager [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received unexpected event network-vif-unplugged-23d47336-e4f3-4764-b5f8-68038fe02faa for instance with vm_state stopped and task_state None.
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.705 239969 DEBUG nova.compute.manager [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.707 239969 DEBUG oslo_concurrency.lockutils [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.709 239969 DEBUG oslo_concurrency.lockutils [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.710 239969 DEBUG oslo_concurrency.lockutils [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.710 239969 DEBUG nova.compute.manager [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] No waiting events found dispatching network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.711 239969 WARNING nova.compute.manager [req-dfb9528d-b111-4bd3-b2a5-0a9cad1f17ee req-a189c04b-91db-4b20-ba29-67c57016d0d9 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received unexpected event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa for instance with vm_state stopped and task_state None.
Jan 26 16:28:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 150 op/s
Jan 26 16:28:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:28:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2587033330' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:28:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:28:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2587033330' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:28:48 compute-0 ceph-mon[75140]: pgmap v2354: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.5 MiB/s wr, 150 op/s
Jan 26 16:28:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2587033330' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:28:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2587033330' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:28:48 compute-0 nova_compute[239965]: 2026-01-26 16:28:48.957 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011257078604609066 of space, bias 1.0, pg target 0.337712358138272 quantized to 32 (current 32)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010183187421274704 of space, bias 1.0, pg target 0.30549562263824115 quantized to 32 (current 32)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.866585488756273e-07 of space, bias 4.0, pg target 0.0008239902586507528 quantized to 16 (current 16)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:28:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:50 compute-0 nova_compute[239965]: 2026-01-26 16:28:50.056 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 140 KiB/s wr, 94 op/s
Jan 26 16:28:50 compute-0 ceph-mon[75140]: pgmap v2355: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 140 KiB/s wr, 94 op/s
Jan 26 16:28:51 compute-0 nova_compute[239965]: 2026-01-26 16:28:51.050 239969 DEBUG nova.network.neutron [req-62a60d05-5ec5-414f-84f6-ef16ce919e02 req-ebcfbcaa-6f64-4f75-b1d8-c758dd155b20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Updated VIF entry in instance network info cache for port 215fb84d-ba96-47bc-8e60-a40380dba651. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:28:51 compute-0 nova_compute[239965]: 2026-01-26 16:28:51.051 239969 DEBUG nova.network.neutron [req-62a60d05-5ec5-414f-84f6-ef16ce919e02 req-ebcfbcaa-6f64-4f75-b1d8-c758dd155b20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Updating instance_info_cache with network_info: [{"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:51 compute-0 nova_compute[239965]: 2026-01-26 16:28:51.069 239969 DEBUG oslo_concurrency.lockutils [req-62a60d05-5ec5-414f-84f6-ef16ce919e02 req-ebcfbcaa-6f64-4f75-b1d8-c758dd155b20 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:51 compute-0 nova_compute[239965]: 2026-01-26 16:28:51.179 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:52 compute-0 nova_compute[239965]: 2026-01-26 16:28:52.205 239969 INFO nova.compute.manager [None req-e1fb8931-da4d-4de4-8677-33cb89755d73 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Get console output
Jan 26 16:28:52 compute-0 nova_compute[239965]: 2026-01-26 16:28:52.370 239969 DEBUG nova.objects.instance [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'flavor' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:52 compute-0 nova_compute[239965]: 2026-01-26 16:28:52.394 239969 DEBUG oslo_concurrency.lockutils [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:28:52 compute-0 nova_compute[239965]: 2026-01-26 16:28:52.395 239969 DEBUG oslo_concurrency.lockutils [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquired lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:28:52 compute-0 nova_compute[239965]: 2026-01-26 16:28:52.395 239969 DEBUG nova.network.neutron [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:28:52 compute-0 nova_compute[239965]: 2026-01-26 16:28:52.396 239969 DEBUG nova.objects.instance [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'info_cache' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:52 compute-0 nova_compute[239965]: 2026-01-26 16:28:52.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 140 KiB/s wr, 94 op/s
Jan 26 16:28:52 compute-0 ceph-mon[75140]: pgmap v2356: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 140 KiB/s wr, 94 op/s
Jan 26 16:28:53 compute-0 sshd-session[370450]: Invalid user solana from 45.148.10.240 port 35064
Jan 26 16:28:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:53.270 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:53 compute-0 sshd-session[370450]: Connection closed by invalid user solana 45.148.10.240 port 35064 [preauth]
Jan 26 16:28:53 compute-0 nova_compute[239965]: 2026-01-26 16:28:53.959 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 69 op/s
Jan 26 16:28:54 compute-0 ceph-mon[75140]: pgmap v2357: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 30 KiB/s wr, 69 op/s
Jan 26 16:28:55 compute-0 ovn_controller[146046]: 2026-01-26T16:28:55Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:33:3c:a2 10.100.0.13
Jan 26 16:28:55 compute-0 ovn_controller[146046]: 2026-01-26T16:28:55Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:33:3c:a2 10.100.0.13
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.058 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.099 239969 DEBUG nova.network.neutron [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Updating instance_info_cache with network_info: [{"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.116 239969 DEBUG oslo_concurrency.lockutils [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Releasing lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.149 239969 INFO nova.virt.libvirt.driver [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance destroyed successfully.
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.150 239969 DEBUG nova.objects.instance [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'numa_topology' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.165 239969 DEBUG nova.objects.instance [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'resources' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.181 239969 DEBUG nova.virt.libvirt.vif [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1457378879',display_name='tempest-TestNetworkAdvancedServerOps-server-1457378879',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1457378879',id=148,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBXRmJ55PYG/LM9xDFj4YfpDSpukedz8p49LlUh0uK6t84cxMa5WxNAqPRBOJJ8+YvFGubL9tQLCSQysIZbD/889B7QK05P6yiYx8xjOJeyjUpWMf+n6V4Xm1xDWJcHxCA==',key_name='tempest-TestNetworkAdvancedServerOps-1322118565',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:28:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-yfjodt40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:28:47Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=3a84510b-c77e-4b69-95cf-39eb5ba96406,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.182 239969 DEBUG nova.network.os_vif_util [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.183 239969 DEBUG nova.network.os_vif_util [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.184 239969 DEBUG os_vif [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.187 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.187 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23d47336-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.190 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.193 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.197 239969 INFO os_vif [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4')
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.206 239969 DEBUG nova.virt.libvirt.driver [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Start _get_guest_xml network_info=[{"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.213 239969 WARNING nova.virt.libvirt.driver [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.222 239969 DEBUG nova.virt.libvirt.host [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.223 239969 DEBUG nova.virt.libvirt.host [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.226 239969 DEBUG nova.virt.libvirt.host [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.227 239969 DEBUG nova.virt.libvirt.host [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.228 239969 DEBUG nova.virt.libvirt.driver [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.228 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.229 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.230 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.230 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.231 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.231 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.232 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.232 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.232 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.233 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.233 239969 DEBUG nova.virt.hardware [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.234 239969 DEBUG nova.objects.instance [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'vcpu_model' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.250 239969 DEBUG oslo_concurrency.processutils [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:28:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/693776026' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.830 239969 DEBUG oslo_concurrency.processutils [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:55 compute-0 nova_compute[239965]: 2026-01-26 16:28:55.863 239969 DEBUG oslo_concurrency.processutils [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:28:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/693776026' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:28:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497134400' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.409 239969 DEBUG oslo_concurrency.processutils [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.411 239969 DEBUG nova.virt.libvirt.vif [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1457378879',display_name='tempest-TestNetworkAdvancedServerOps-server-1457378879',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1457378879',id=148,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBXRmJ55PYG/LM9xDFj4YfpDSpukedz8p49LlUh0uK6t84cxMa5WxNAqPRBOJJ8+YvFGubL9tQLCSQysIZbD/889B7QK05P6yiYx8xjOJeyjUpWMf+n6V4Xm1xDWJcHxCA==',key_name='tempest-TestNetworkAdvancedServerOps-1322118565',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:28:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-yfjodt40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:28:47Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=3a84510b-c77e-4b69-95cf-39eb5ba96406,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.412 239969 DEBUG nova.network.os_vif_util [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.413 239969 DEBUG nova.network.os_vif_util [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.415 239969 DEBUG nova.objects.instance [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.430 239969 DEBUG nova.virt.libvirt.driver [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <uuid>3a84510b-c77e-4b69-95cf-39eb5ba96406</uuid>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <name>instance-00000094</name>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1457378879</nova:name>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:28:55</nova:creationTime>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <nova:user uuid="59ae1c17a260470c91f50965ddd53a9e">tempest-TestNetworkAdvancedServerOps-842475489-project-member</nova:user>
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <nova:project uuid="2a7615c0db4e4f38aec30c7c723c3c3a">tempest-TestNetworkAdvancedServerOps-842475489</nova:project>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <nova:port uuid="23d47336-e4f3-4764-b5f8-68038fe02faa">
Jan 26 16:28:56 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <system>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <entry name="serial">3a84510b-c77e-4b69-95cf-39eb5ba96406</entry>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <entry name="uuid">3a84510b-c77e-4b69-95cf-39eb5ba96406</entry>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     </system>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <os>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   </os>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <features>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   </features>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3a84510b-c77e-4b69-95cf-39eb5ba96406_disk">
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       </source>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/3a84510b-c77e-4b69-95cf-39eb5ba96406_disk.config">
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       </source>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:28:56 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:5f:4f:e5"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <target dev="tap23d47336-e4"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406/console.log" append="off"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <video>
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     </video>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:28:56 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:28:56 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:28:56 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:28:56 compute-0 nova_compute[239965]: </domain>
Jan 26 16:28:56 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.432 239969 DEBUG nova.virt.libvirt.driver [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.432 239969 DEBUG nova.virt.libvirt.driver [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.433 239969 DEBUG nova.virt.libvirt.vif [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1457378879',display_name='tempest-TestNetworkAdvancedServerOps-server-1457378879',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1457378879',id=148,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBXRmJ55PYG/LM9xDFj4YfpDSpukedz8p49LlUh0uK6t84cxMa5WxNAqPRBOJJ8+YvFGubL9tQLCSQysIZbD/889B7QK05P6yiYx8xjOJeyjUpWMf+n6V4Xm1xDWJcHxCA==',key_name='tempest-TestNetworkAdvancedServerOps-1322118565',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:28:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-yfjodt40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:28:47Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=3a84510b-c77e-4b69-95cf-39eb5ba96406,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.433 239969 DEBUG nova.network.os_vif_util [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.434 239969 DEBUG nova.network.os_vif_util [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.434 239969 DEBUG os_vif [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.435 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.436 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.436 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.440 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.440 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23d47336-e4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.441 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23d47336-e4, col_values=(('external_ids', {'iface-id': '23d47336-e4f3-4764-b5f8-68038fe02faa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:4f:e5', 'vm-uuid': '3a84510b-c77e-4b69-95cf-39eb5ba96406'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.442 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 NetworkManager[48954]: <info>  [1769444936.4437] manager: (tap23d47336-e4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/642)
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.446 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.449 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.450 239969 INFO os_vif [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4')
Jan 26 16:28:56 compute-0 kernel: tap23d47336-e4: entered promiscuous mode
Jan 26 16:28:56 compute-0 NetworkManager[48954]: <info>  [1769444936.5414] manager: (tap23d47336-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/643)
Jan 26 16:28:56 compute-0 ovn_controller[146046]: 2026-01-26T16:28:56Z|01589|binding|INFO|Claiming lport 23d47336-e4f3-4764-b5f8-68038fe02faa for this chassis.
Jan 26 16:28:56 compute-0 ovn_controller[146046]: 2026-01-26T16:28:56Z|01590|binding|INFO|23d47336-e4f3-4764-b5f8-68038fe02faa: Claiming fa:16:3e:5f:4f:e5 10.100.0.13
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.543 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.551 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:4f:e5 10.100.0.13'], port_security=['fa:16:3e:5f:4f:e5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3a84510b-c77e-4b69-95cf-39eb5ba96406', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '5', 'neutron:security_group_ids': '180e9b46-d482-4e98-8076-c3573609308c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94eb109d-0d0f-4f4d-9984-1552de27451e, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=23d47336-e4f3-4764-b5f8-68038fe02faa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.552 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 23d47336-e4f3-4764-b5f8-68038fe02faa in datapath c5dcb653-5134-44c2-bb5f-1ef932033d43 bound to our chassis
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.554 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5dcb653-5134-44c2-bb5f-1ef932033d43
Jan 26 16:28:56 compute-0 ovn_controller[146046]: 2026-01-26T16:28:56Z|01591|binding|INFO|Setting lport 23d47336-e4f3-4764-b5f8-68038fe02faa ovn-installed in OVS
Jan 26 16:28:56 compute-0 ovn_controller[146046]: 2026-01-26T16:28:56Z|01592|binding|INFO|Setting lport 23d47336-e4f3-4764-b5f8-68038fe02faa up in Southbound
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.565 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.568 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.571 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e7cba76a-fa57-4cff-87f6-994aa7976acb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.572 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc5dcb653-51 in ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.574 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc5dcb653-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.574 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6841b848-9a75-4171-b55c-005b3ec8eca5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.575 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a2efd415-35bb-40a7-a788-90ff9b9ad005]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 systemd-machined[208061]: New machine qemu-181-instance-00000094.
Jan 26 16:28:56 compute-0 systemd-udevd[370533]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.589 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f77acf-08ee-4e34-8ab1-e1f51b314dba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 NetworkManager[48954]: <info>  [1769444936.5968] device (tap23d47336-e4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:28:56 compute-0 systemd[1]: Started Virtual Machine qemu-181-instance-00000094.
Jan 26 16:28:56 compute-0 NetworkManager[48954]: <info>  [1769444936.5976] device (tap23d47336-e4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.615 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[34a14f99-f75d-4c36-894a-7d41d6aeebf2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.654 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[fa53fd48-e92c-4770-a461-740442fd561a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.660 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a51409e7-fd89-4af6-992a-bad7de9102c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 NetworkManager[48954]: <info>  [1769444936.6612] manager: (tapc5dcb653-50): new Veth device (/org/freedesktop/NetworkManager/Devices/644)
Jan 26 16:28:56 compute-0 systemd-udevd[370537]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.704 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[3b72ff94-e768-4b29-8cf6-97c2bae9bf1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.708 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ca6c0e-5880-4a04-8f51-894af3391074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 NetworkManager[48954]: <info>  [1769444936.7353] device (tapc5dcb653-50): carrier: link connected
Jan 26 16:28:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 224 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 886 KiB/s wr, 86 op/s
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.740 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[71d5eecc-9dd5-4b38-a321-fed0bdfd4d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.756 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ce628d-2458-4b03-a208-0a10d54c017f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5dcb653-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:a5:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 457], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642754, 'reachable_time': 20757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370565, 'error': None, 'target': 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.774 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2cc1bb70-10ec-41ee-8707-4ad4edb728f1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:a520'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 642754, 'tstamp': 642754}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370566, 'error': None, 'target': 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.792 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8660c4c0-2868-47bf-90b1-c8d6f1b2b066]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5dcb653-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:a5:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 457], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642754, 'reachable_time': 20757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370567, 'error': None, 'target': 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.844 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[643b9d75-c904-4b0d-8a11-5c2d43a8c4c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1497134400' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:28:56 compute-0 ceph-mon[75140]: pgmap v2358: 305 pgs: 305 active+clean; 224 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 886 KiB/s wr, 86 op/s
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.912 239969 DEBUG nova.compute.manager [req-a7d99cd1-52e6-41a2-b6ac-319a0ce38d06 req-a9f5ed66-a45f-4e1b-bff0-32d76938955d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.912 239969 DEBUG oslo_concurrency.lockutils [req-a7d99cd1-52e6-41a2-b6ac-319a0ce38d06 req-a9f5ed66-a45f-4e1b-bff0-32d76938955d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.913 239969 DEBUG oslo_concurrency.lockutils [req-a7d99cd1-52e6-41a2-b6ac-319a0ce38d06 req-a9f5ed66-a45f-4e1b-bff0-32d76938955d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.913 239969 DEBUG oslo_concurrency.lockutils [req-a7d99cd1-52e6-41a2-b6ac-319a0ce38d06 req-a9f5ed66-a45f-4e1b-bff0-32d76938955d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.913 239969 DEBUG nova.compute.manager [req-a7d99cd1-52e6-41a2-b6ac-319a0ce38d06 req-a9f5ed66-a45f-4e1b-bff0-32d76938955d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] No waiting events found dispatching network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.913 239969 WARNING nova.compute.manager [req-a7d99cd1-52e6-41a2-b6ac-319a0ce38d06 req-a9f5ed66-a45f-4e1b-bff0-32d76938955d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received unexpected event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa for instance with vm_state stopped and task_state powering-on.
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.928 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a77f10d9-38db-4b8d-975b-003948db7a0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.929 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5dcb653-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.930 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.930 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5dcb653-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.931 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 NetworkManager[48954]: <info>  [1769444936.9327] manager: (tapc5dcb653-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Jan 26 16:28:56 compute-0 kernel: tapc5dcb653-50: entered promiscuous mode
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.936 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5dcb653-50, col_values=(('external_ids', {'iface-id': '2e120840-5ef3-452f-9957-cc378e107313'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.938 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 ovn_controller[146046]: 2026-01-26T16:28:56Z|01593|binding|INFO|Releasing lport 2e120840-5ef3-452f-9957-cc378e107313 from this chassis (sb_readonly=0)
Jan 26 16:28:56 compute-0 nova_compute[239965]: 2026-01-26 16:28:56.959 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.961 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c5dcb653-5134-44c2-bb5f-1ef932033d43.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c5dcb653-5134-44c2-bb5f-1ef932033d43.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.963 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c428f678-fdde-445a-b45e-9b64d6b39c27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.964 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-c5dcb653-5134-44c2-bb5f-1ef932033d43
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/c5dcb653-5134-44c2-bb5f-1ef932033d43.pid.haproxy
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID c5dcb653-5134-44c2-bb5f-1ef932033d43
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:28:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:56.965 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'env', 'PROCESS_TAG=haproxy-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c5dcb653-5134-44c2-bb5f-1ef932033d43.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.117 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 3a84510b-c77e-4b69-95cf-39eb5ba96406 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.118 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444937.1172352, 3a84510b-c77e-4b69-95cf-39eb5ba96406 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.118 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] VM Resumed (Lifecycle Event)
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.120 239969 DEBUG nova.compute.manager [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.123 239969 INFO nova.virt.libvirt.driver [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance rebooted successfully.
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.124 239969 DEBUG nova.compute.manager [None req-0e617dd6-c169-461d-9db8-a827bb0a6419 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.163 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.178 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.200 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444937.1199293, 3a84510b-c77e-4b69-95cf-39eb5ba96406 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.201 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] VM Started (Lifecycle Event)
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.292 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.317 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:28:57 compute-0 nova_compute[239965]: 2026-01-26 16:28:57.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:57 compute-0 podman[370641]: 2026-01-26 16:28:57.395613774 +0000 UTC m=+0.085692529 container create fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:28:57 compute-0 podman[370641]: 2026-01-26 16:28:57.356588608 +0000 UTC m=+0.046667413 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:28:57 compute-0 systemd[1]: Started libpod-conmon-fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73.scope.
Jan 26 16:28:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:28:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2fbda15bb1398afd35fc73b3322d59cd40b32c803c16cd204acc44aa64d7a2b7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:28:57 compute-0 podman[370641]: 2026-01-26 16:28:57.534599077 +0000 UTC m=+0.224677812 container init fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:28:57 compute-0 podman[370641]: 2026-01-26 16:28:57.545128586 +0000 UTC m=+0.235207301 container start fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:28:57 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[370655]: [NOTICE]   (370659) : New worker (370661) forked
Jan 26 16:28:57 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[370655]: [NOTICE]   (370659) : Loading success.
Jan 26 16:28:58 compute-0 nova_compute[239965]: 2026-01-26 16:28:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:28:58 compute-0 nova_compute[239965]: 2026-01-26 16:28:58.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:28:58 compute-0 nova_compute[239965]: 2026-01-26 16:28:58.577 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:28:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 26 16:28:58 compute-0 ceph-mon[75140]: pgmap v2359: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Jan 26 16:28:58 compute-0 nova_compute[239965]: 2026-01-26 16:28:58.962 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:28:59 compute-0 nova_compute[239965]: 2026-01-26 16:28:59.019 239969 DEBUG nova.compute.manager [req-442a3b60-97f7-43b4-8664-10788215610d req-90fa1006-30fd-408b-839a-2d0f94ab0b7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:28:59 compute-0 nova_compute[239965]: 2026-01-26 16:28:59.020 239969 DEBUG oslo_concurrency.lockutils [req-442a3b60-97f7-43b4-8664-10788215610d req-90fa1006-30fd-408b-839a-2d0f94ab0b7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:59 compute-0 nova_compute[239965]: 2026-01-26 16:28:59.020 239969 DEBUG oslo_concurrency.lockutils [req-442a3b60-97f7-43b4-8664-10788215610d req-90fa1006-30fd-408b-839a-2d0f94ab0b7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:59 compute-0 nova_compute[239965]: 2026-01-26 16:28:59.021 239969 DEBUG oslo_concurrency.lockutils [req-442a3b60-97f7-43b4-8664-10788215610d req-90fa1006-30fd-408b-839a-2d0f94ab0b7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:59 compute-0 nova_compute[239965]: 2026-01-26 16:28:59.022 239969 DEBUG nova.compute.manager [req-442a3b60-97f7-43b4-8664-10788215610d req-90fa1006-30fd-408b-839a-2d0f94ab0b7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] No waiting events found dispatching network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:28:59 compute-0 nova_compute[239965]: 2026-01-26 16:28:59.022 239969 WARNING nova.compute.manager [req-442a3b60-97f7-43b4-8664-10788215610d req-90fa1006-30fd-408b-839a-2d0f94ab0b7a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received unexpected event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa for instance with vm_state active and task_state None.
Jan 26 16:28:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:59.252 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:28:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:59.252 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:28:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:28:59.253 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:28:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:28:59 compute-0 nova_compute[239965]: 2026-01-26 16:28:59.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:29:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 26 16:29:00 compute-0 ceph-mon[75140]: pgmap v2360: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Jan 26 16:29:01 compute-0 nova_compute[239965]: 2026-01-26 16:29:01.294 239969 INFO nova.compute.manager [None req-33e2f4b6-261d-47ce-9a19-86d691cc99d3 e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Get console output
Jan 26 16:29:01 compute-0 nova_compute[239965]: 2026-01-26 16:29:01.304 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:29:01 compute-0 nova_compute[239965]: 2026-01-26 16:29:01.443 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:01 compute-0 nova_compute[239965]: 2026-01-26 16:29:01.528 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:02 compute-0 nova_compute[239965]: 2026-01-26 16:29:02.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:02 compute-0 nova_compute[239965]: 2026-01-26 16:29:02.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:02 compute-0 nova_compute[239965]: 2026-01-26 16:29:02.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:02 compute-0 nova_compute[239965]: 2026-01-26 16:29:02.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:29:02 compute-0 ovn_controller[146046]: 2026-01-26T16:29:02Z|01594|binding|INFO|Releasing lport 78647157-c9a6-4f71-b72a-5340c958f292 from this chassis (sb_readonly=0)
Jan 26 16:29:02 compute-0 ovn_controller[146046]: 2026-01-26T16:29:02Z|01595|binding|INFO|Releasing lport 2e120840-5ef3-452f-9957-cc378e107313 from this chassis (sb_readonly=0)
Jan 26 16:29:02 compute-0 nova_compute[239965]: 2026-01-26 16:29:02.718 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 26 16:29:02 compute-0 ceph-mon[75140]: pgmap v2361: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 26 16:29:03 compute-0 nova_compute[239965]: 2026-01-26 16:29:03.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:03 compute-0 nova_compute[239965]: 2026-01-26 16:29:03.964 239969 INFO nova.compute.manager [None req-398c0a59-c486-4df0-90f2-8eeca46027cb e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Get console output
Jan 26 16:29:03 compute-0 nova_compute[239965]: 2026-01-26 16:29:03.965 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:03 compute-0 nova_compute[239965]: 2026-01-26 16:29:03.972 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:29:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 26 16:29:05 compute-0 ceph-mon[75140]: pgmap v2362: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 26 16:29:06 compute-0 nova_compute[239965]: 2026-01-26 16:29:06.448 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 26 16:29:06 compute-0 nova_compute[239965]: 2026-01-26 16:29:06.792 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:06 compute-0 ceph-mon[75140]: pgmap v2363: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Jan 26 16:29:07 compute-0 nova_compute[239965]: 2026-01-26 16:29:07.135 239969 INFO nova.compute.manager [None req-9bf28840-a1c9-43ff-b69c-3d8c5d3d25ba e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Get console output
Jan 26 16:29:07 compute-0 nova_compute[239965]: 2026-01-26 16:29:07.139 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:29:07 compute-0 nova_compute[239965]: 2026-01-26 16:29:07.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:07 compute-0 nova_compute[239965]: 2026-01-26 16:29:07.558 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:07 compute-0 nova_compute[239965]: 2026-01-26 16:29:07.558 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:07 compute-0 nova_compute[239965]: 2026-01-26 16:29:07.559 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:07 compute-0 nova_compute[239965]: 2026-01-26 16:29:07.560 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:29:07 compute-0 nova_compute[239965]: 2026-01-26 16:29:07.560 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:07 compute-0 nova_compute[239965]: 2026-01-26 16:29:07.891 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:29:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1659740122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.165 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.249 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.249 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.252 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.253 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.470 239969 DEBUG oslo_concurrency.lockutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.471 239969 DEBUG oslo_concurrency.lockutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.472 239969 DEBUG oslo_concurrency.lockutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.472 239969 DEBUG oslo_concurrency.lockutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.473 239969 DEBUG oslo_concurrency.lockutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.475 239969 INFO nova.compute.manager [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Terminating instance
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.477 239969 DEBUG nova.compute.manager [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.500 239969 DEBUG nova.compute.manager [req-401448d1-0d99-463a-8833-59cec9d62e93 req-c1062b4e-b202-4072-9cfd-f6c700dd35ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Received event network-changed-215fb84d-ba96-47bc-8e60-a40380dba651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.501 239969 DEBUG nova.compute.manager [req-401448d1-0d99-463a-8833-59cec9d62e93 req-c1062b4e-b202-4072-9cfd-f6c700dd35ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Refreshing instance network info cache due to event network-changed-215fb84d-ba96-47bc-8e60-a40380dba651. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.501 239969 DEBUG oslo_concurrency.lockutils [req-401448d1-0d99-463a-8833-59cec9d62e93 req-c1062b4e-b202-4072-9cfd-f6c700dd35ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.502 239969 DEBUG oslo_concurrency.lockutils [req-401448d1-0d99-463a-8833-59cec9d62e93 req-c1062b4e-b202-4072-9cfd-f6c700dd35ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.502 239969 DEBUG nova.network.neutron [req-401448d1-0d99-463a-8833-59cec9d62e93 req-c1062b4e-b202-4072-9cfd-f6c700dd35ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Refreshing network info cache for port 215fb84d-ba96-47bc-8e60-a40380dba651 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.527 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.529 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3091MB free_disk=59.89613572880626GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.529 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.530 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1659740122' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.626 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 3a84510b-c77e-4b69-95cf-39eb5ba96406 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.627 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 77d5bfbb-20c1-4450-acf6-d6cc979c6657 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.628 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.628 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:29:08 compute-0 kernel: tap215fb84d-ba (unregistering): left promiscuous mode
Jan 26 16:29:08 compute-0 NetworkManager[48954]: <info>  [1769444948.6800] device (tap215fb84d-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.690 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:08 compute-0 ovn_controller[146046]: 2026-01-26T16:29:08Z|01596|binding|INFO|Releasing lport 215fb84d-ba96-47bc-8e60-a40380dba651 from this chassis (sb_readonly=0)
Jan 26 16:29:08 compute-0 ovn_controller[146046]: 2026-01-26T16:29:08Z|01597|binding|INFO|Setting lport 215fb84d-ba96-47bc-8e60-a40380dba651 down in Southbound
Jan 26 16:29:08 compute-0 ovn_controller[146046]: 2026-01-26T16:29:08Z|01598|binding|INFO|Removing iface tap215fb84d-ba ovn-installed in OVS
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.694 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:08.700 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:33:3c:a2 10.100.0.13'], port_security=['fa:16:3e:33:3c:a2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '77d5bfbb-20c1-4450-acf6-d6cc979c6657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ab762de-31ea-484b-890a-4627bd72d470', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f543c15474f7451fa07b01e79dde95dd', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9e8005a1-76bf-47c0-9489-eb0c86e71c3f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=abd4cd3d-376b-421a-9847-17dca5bd6537, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=215fb84d-ba96-47bc-8e60-a40380dba651) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:29:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:08.702 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 215fb84d-ba96-47bc-8e60-a40380dba651 in datapath 7ab762de-31ea-484b-890a-4627bd72d470 unbound from our chassis
Jan 26 16:29:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:08.704 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ab762de-31ea-484b-890a-4627bd72d470, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:29:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:08.706 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a9059294-10b1-466c-97b2-83df4589371b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:08 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:08.708 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470 namespace which is not needed anymore
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.708 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.711 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 118 op/s
Jan 26 16:29:08 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000095.scope: Deactivated successfully.
Jan 26 16:29:08 compute-0 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000095.scope: Consumed 14.733s CPU time.
Jan 26 16:29:08 compute-0 systemd-machined[208061]: Machine qemu-180-instance-00000095 terminated.
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.924 239969 INFO nova.virt.libvirt.driver [-] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Instance destroyed successfully.
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.926 239969 DEBUG nova.objects.instance [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lazy-loading 'resources' on Instance uuid 77d5bfbb-20c1-4450-acf6-d6cc979c6657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.940 239969 DEBUG nova.virt.libvirt.vif [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:28:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1604340945',display_name='tempest-TestNetworkBasicOps-server-1604340945',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1604340945',id=149,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLZriR7rB4MIcUUyUbpz2OuPTEddRjlh9uHycmLwLTvluzRkzF+TTyeP28ImqJsX4tFM+OGmts0tnvky1Xk5slfLPZkvJyIoOApg38qSZxUkUxtHw31k8EnphdwVr4ezWg==',key_name='tempest-TestNetworkBasicOps-15267660',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:28:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f543c15474f7451fa07b01e79dde95dd',ramdisk_id='',reservation_id='r-s8x2vxoa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1371722765',owner_user_name='tempest-TestNetworkBasicOps-1371722765-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:28:42Z,user_data=None,user_id='e865b82cb8db42568f95d2257ed9b158',uuid=77d5bfbb-20c1-4450-acf6-d6cc979c6657,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.941 239969 DEBUG nova.network.os_vif_util [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converting VIF {"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.942 239969 DEBUG nova.network.os_vif_util [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:33:3c:a2,bridge_name='br-int',has_traffic_filtering=True,id=215fb84d-ba96-47bc-8e60-a40380dba651,network=Network(7ab762de-31ea-484b-890a-4627bd72d470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap215fb84d-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.942 239969 DEBUG os_vif [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:3c:a2,bridge_name='br-int',has_traffic_filtering=True,id=215fb84d-ba96-47bc-8e60-a40380dba651,network=Network(7ab762de-31ea-484b-890a-4627bd72d470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap215fb84d-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.945 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.945 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap215fb84d-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.947 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.949 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.952 239969 INFO os_vif [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:33:3c:a2,bridge_name='br-int',has_traffic_filtering=True,id=215fb84d-ba96-47bc-8e60-a40380dba651,network=Network(7ab762de-31ea-484b-890a-4627bd72d470),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap215fb84d-ba')
Jan 26 16:29:08 compute-0 neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470[370330]: [NOTICE]   (370351) : haproxy version is 2.8.14-c23fe91
Jan 26 16:29:08 compute-0 neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470[370330]: [NOTICE]   (370351) : path to executable is /usr/sbin/haproxy
Jan 26 16:29:08 compute-0 neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470[370330]: [WARNING]  (370351) : Exiting Master process...
Jan 26 16:29:08 compute-0 neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470[370330]: [ALERT]    (370351) : Current worker (370356) exited with code 143 (Terminated)
Jan 26 16:29:08 compute-0 neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470[370330]: [WARNING]  (370351) : All workers exited. Exiting... (0)
Jan 26 16:29:08 compute-0 systemd[1]: libpod-4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e.scope: Deactivated successfully.
Jan 26 16:29:08 compute-0 nova_compute[239965]: 2026-01-26 16:29:08.978 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:08 compute-0 podman[370717]: 2026-01-26 16:29:08.979423651 +0000 UTC m=+0.171666715 container died 4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e-userdata-shm.mount: Deactivated successfully.
Jan 26 16:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b3072648d2c56b6084b0f2edcd5b74e84031c1657c9ef5c780c22df1c94958a-merged.mount: Deactivated successfully.
Jan 26 16:29:09 compute-0 podman[370717]: 2026-01-26 16:29:09.023095961 +0000 UTC m=+0.215339005 container cleanup 4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:09 compute-0 systemd[1]: libpod-conmon-4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e.scope: Deactivated successfully.
Jan 26 16:29:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:29:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1161605988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.341 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.348 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.366 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.388 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.389 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:09 compute-0 podman[370793]: 2026-01-26 16:29:09.50002118 +0000 UTC m=+0.444878345 container remove 4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 16:29:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:09.509 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e3265f00-97f2-47de-a00d-c4331664a435]: (4, ('Mon Jan 26 04:29:08 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470 (4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e)\n4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e\nMon Jan 26 04:29:09 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470 (4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e)\n4cdd46fa1236582df57ac284d64980dd208529779a0291c03115b44afae0be0e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:09.511 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[20b56ad4-c47e-4584-97fa-9253f14a7e0f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:09.513 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ab762de-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.516 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:09 compute-0 kernel: tap7ab762de-30: left promiscuous mode
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.534 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:09.539 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3dfa5b8c-8c2a-44a9-bc6e-bb6b80bacd45]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:09.552 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8587510c-afa4-4f0f-8c99-97fb8b90c7be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:09.555 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[dfcf1fe8-4d4f-4f70-bd6b-84e0d936dd4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:09.573 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2a5e33cf-3133-4dcd-8a41-00f0059f28ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 641192, 'reachable_time': 38781, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370812, 'error': None, 'target': 'ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:09 compute-0 systemd[1]: run-netns-ovnmeta\x2d7ab762de\x2d31ea\x2d484b\x2d890a\x2d4627bd72d470.mount: Deactivated successfully.
Jan 26 16:29:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:09.578 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ab762de-31ea-484b-890a-4627bd72d470 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:29:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:09.578 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[de7b7532-8fdc-4d4a-ace3-366f40e4cff6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:09 compute-0 ceph-mon[75140]: pgmap v2364: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.3 MiB/s wr, 118 op/s
Jan 26 16:29:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1161605988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.644 239969 INFO nova.virt.libvirt.driver [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Deleting instance files /var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657_del
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.645 239969 INFO nova.virt.libvirt.driver [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Deletion of /var/lib/nova/instances/77d5bfbb-20c1-4450-acf6-d6cc979c6657_del complete
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.703 239969 INFO nova.compute.manager [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Took 1.23 seconds to destroy the instance on the hypervisor.
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.704 239969 DEBUG oslo.service.loopingcall [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.704 239969 DEBUG nova.compute.manager [-] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:29:09 compute-0 nova_compute[239965]: 2026-01-26 16:29:09.704 239969 DEBUG nova.network.neutron [-] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:29:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 26 16:29:10 compute-0 ceph-mon[75140]: pgmap v2365: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 68 op/s
Jan 26 16:29:10 compute-0 ovn_controller[146046]: 2026-01-26T16:29:10Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:4f:e5 10.100.0.13
Jan 26 16:29:11 compute-0 sudo[370813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:29:11 compute-0 sudo[370813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:29:11 compute-0 sudo[370813]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.449 239969 DEBUG nova.network.neutron [-] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.468 239969 DEBUG nova.network.neutron [req-401448d1-0d99-463a-8833-59cec9d62e93 req-c1062b4e-b202-4072-9cfd-f6c700dd35ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Updated VIF entry in instance network info cache for port 215fb84d-ba96-47bc-8e60-a40380dba651. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.469 239969 DEBUG nova.network.neutron [req-401448d1-0d99-463a-8833-59cec9d62e93 req-c1062b4e-b202-4072-9cfd-f6c700dd35ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Updating instance_info_cache with network_info: [{"id": "215fb84d-ba96-47bc-8e60-a40380dba651", "address": "fa:16:3e:33:3c:a2", "network": {"id": "7ab762de-31ea-484b-890a-4627bd72d470", "bridge": "br-int", "label": "tempest-network-smoke--99006309", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f543c15474f7451fa07b01e79dde95dd", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap215fb84d-ba", "ovs_interfaceid": "215fb84d-ba96-47bc-8e60-a40380dba651", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.476 239969 INFO nova.compute.manager [-] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Took 1.77 seconds to deallocate network for instance.
Jan 26 16:29:11 compute-0 sudo[370838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:29:11 compute-0 sudo[370838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.488 239969 DEBUG oslo_concurrency.lockutils [req-401448d1-0d99-463a-8833-59cec9d62e93 req-c1062b4e-b202-4072-9cfd-f6c700dd35ed a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-77d5bfbb-20c1-4450-acf6-d6cc979c6657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.534 239969 DEBUG nova.compute.manager [req-fe74109d-4c99-434f-962f-96c83b3d3222 req-801e562a-2829-4fb5-9a83-77951b35d596 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Received event network-vif-deleted-215fb84d-ba96-47bc-8e60-a40380dba651 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.534 239969 INFO nova.compute.manager [req-fe74109d-4c99-434f-962f-96c83b3d3222 req-801e562a-2829-4fb5-9a83-77951b35d596 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Neutron deleted interface 215fb84d-ba96-47bc-8e60-a40380dba651; detaching it from the instance and deleting it from the info cache
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.534 239969 DEBUG nova.network.neutron [req-fe74109d-4c99-434f-962f-96c83b3d3222 req-801e562a-2829-4fb5-9a83-77951b35d596 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.538 239969 DEBUG oslo_concurrency.lockutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.538 239969 DEBUG oslo_concurrency.lockutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.565 239969 DEBUG nova.compute.manager [req-fe74109d-4c99-434f-962f-96c83b3d3222 req-801e562a-2829-4fb5-9a83-77951b35d596 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Detach interface failed, port_id=215fb84d-ba96-47bc-8e60-a40380dba651, reason: Instance 77d5bfbb-20c1-4450-acf6-d6cc979c6657 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:29:11 compute-0 nova_compute[239965]: 2026-01-26 16:29:11.623 239969 DEBUG oslo_concurrency.processutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:12 compute-0 sudo[370838]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:29:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:29:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:29:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:29:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:29:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:29:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:29:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:29:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:29:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:29:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:29:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:29:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:29:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:29:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:29:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:29:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:29:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:29:12 compute-0 sudo[370915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:29:12 compute-0 sudo[370915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:29:12 compute-0 sudo[370915]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:29:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3579511006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:12 compute-0 sudo[370942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:29:12 compute-0 nova_compute[239965]: 2026-01-26 16:29:12.255 239969 DEBUG oslo_concurrency.processutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:12 compute-0 sudo[370942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:29:12 compute-0 nova_compute[239965]: 2026-01-26 16:29:12.260 239969 DEBUG nova.compute.provider_tree [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:29:12 compute-0 podman[370939]: 2026-01-26 16:29:12.291095314 +0000 UTC m=+0.090286763 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:29:12 compute-0 podman[370940]: 2026-01-26 16:29:12.321324144 +0000 UTC m=+0.122786208 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 16:29:12 compute-0 nova_compute[239965]: 2026-01-26 16:29:12.347 239969 DEBUG nova.scheduler.client.report [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:29:12 compute-0 podman[371018]: 2026-01-26 16:29:12.57431716 +0000 UTC m=+0.048458748 container create 1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:29:12 compute-0 systemd[1]: Started libpod-conmon-1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4.scope.
Jan 26 16:29:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:12 compute-0 podman[371018]: 2026-01-26 16:29:12.551525272 +0000 UTC m=+0.025666870 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:29:12 compute-0 podman[371018]: 2026-01-26 16:29:12.663295419 +0000 UTC m=+0.137437047 container init 1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:29:12 compute-0 podman[371018]: 2026-01-26 16:29:12.671296274 +0000 UTC m=+0.145437822 container start 1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatelet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 16:29:12 compute-0 podman[371018]: 2026-01-26 16:29:12.675512138 +0000 UTC m=+0.149653696 container attach 1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatelet, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:29:12 compute-0 zealous_chatelet[371035]: 167 167
Jan 26 16:29:12 compute-0 systemd[1]: libpod-1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4.scope: Deactivated successfully.
Jan 26 16:29:12 compute-0 conmon[371035]: conmon 1f75691e3baa39f7fbcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4.scope/container/memory.events
Jan 26 16:29:12 compute-0 podman[371018]: 2026-01-26 16:29:12.68172366 +0000 UTC m=+0.155865218 container died 1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatelet, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e26e2dcd564078be16c33d4b1fae122b14163556a05cca8c2bdb3751d09b665-merged.mount: Deactivated successfully.
Jan 26 16:29:12 compute-0 podman[371018]: 2026-01-26 16:29:12.735781884 +0000 UTC m=+0.209923462 container remove 1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_chatelet, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 29 KiB/s wr, 143 op/s
Jan 26 16:29:12 compute-0 systemd[1]: libpod-conmon-1f75691e3baa39f7fbcb460838e33526296648bf88ed4fd6f5b9b9d0cac6a9a4.scope: Deactivated successfully.
Jan 26 16:29:12 compute-0 nova_compute[239965]: 2026-01-26 16:29:12.870 239969 DEBUG oslo_concurrency.lockutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:12 compute-0 nova_compute[239965]: 2026-01-26 16:29:12.897 239969 INFO nova.scheduler.client.report [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Deleted allocations for instance 77d5bfbb-20c1-4450-acf6-d6cc979c6657
Jan 26 16:29:12 compute-0 nova_compute[239965]: 2026-01-26 16:29:12.968 239969 DEBUG oslo_concurrency.lockutils [None req-8b447c92-9df8-463d-8796-5832ef26052a e865b82cb8db42568f95d2257ed9b158 f543c15474f7451fa07b01e79dde95dd - - default default] Lock "77d5bfbb-20c1-4450-acf6-d6cc979c6657" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:12 compute-0 podman[371058]: 2026-01-26 16:29:12.983654044 +0000 UTC m=+0.067523795 container create f34add599fe0c6d7cd5582632a8eeec41acc8b60a8170e5db67e6a8542160c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:29:13 compute-0 systemd[1]: Started libpod-conmon-f34add599fe0c6d7cd5582632a8eeec41acc8b60a8170e5db67e6a8542160c37.scope.
Jan 26 16:29:13 compute-0 podman[371058]: 2026-01-26 16:29:12.959651316 +0000 UTC m=+0.043521067 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:29:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc97f1e889956e1528c54ab683cf693f94184a9c7e4a55308b5fecc44de66ae2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc97f1e889956e1528c54ab683cf693f94184a9c7e4a55308b5fecc44de66ae2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc97f1e889956e1528c54ab683cf693f94184a9c7e4a55308b5fecc44de66ae2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc97f1e889956e1528c54ab683cf693f94184a9c7e4a55308b5fecc44de66ae2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc97f1e889956e1528c54ab683cf693f94184a9c7e4a55308b5fecc44de66ae2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:13 compute-0 podman[371058]: 2026-01-26 16:29:13.104416182 +0000 UTC m=+0.188285983 container init f34add599fe0c6d7cd5582632a8eeec41acc8b60a8170e5db67e6a8542160c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hofstadter, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:29:13 compute-0 podman[371058]: 2026-01-26 16:29:13.113168626 +0000 UTC m=+0.197038377 container start f34add599fe0c6d7cd5582632a8eeec41acc8b60a8170e5db67e6a8542160c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 16:29:13 compute-0 podman[371058]: 2026-01-26 16:29:13.117184965 +0000 UTC m=+0.201054786 container attach f34add599fe0c6d7cd5582632a8eeec41acc8b60a8170e5db67e6a8542160c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:29:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3579511006' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:13 compute-0 ceph-mon[75140]: pgmap v2366: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 29 KiB/s wr, 143 op/s
Jan 26 16:29:13 compute-0 jolly_hofstadter[371074]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:29:13 compute-0 jolly_hofstadter[371074]: --> All data devices are unavailable
Jan 26 16:29:13 compute-0 systemd[1]: libpod-f34add599fe0c6d7cd5582632a8eeec41acc8b60a8170e5db67e6a8542160c37.scope: Deactivated successfully.
Jan 26 16:29:13 compute-0 podman[371058]: 2026-01-26 16:29:13.720610813 +0000 UTC m=+0.804480534 container died f34add599fe0c6d7cd5582632a8eeec41acc8b60a8170e5db67e6a8542160c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc97f1e889956e1528c54ab683cf693f94184a9c7e4a55308b5fecc44de66ae2-merged.mount: Deactivated successfully.
Jan 26 16:29:13 compute-0 podman[371058]: 2026-01-26 16:29:13.762119448 +0000 UTC m=+0.845989169 container remove f34add599fe0c6d7cd5582632a8eeec41acc8b60a8170e5db67e6a8542160c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:29:13 compute-0 systemd[1]: libpod-conmon-f34add599fe0c6d7cd5582632a8eeec41acc8b60a8170e5db67e6a8542160c37.scope: Deactivated successfully.
Jan 26 16:29:13 compute-0 sudo[370942]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:13 compute-0 sudo[371106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:29:13 compute-0 sudo[371106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:29:13 compute-0 sudo[371106]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:13 compute-0 sudo[371131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:29:13 compute-0 sudo[371131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:29:13 compute-0 nova_compute[239965]: 2026-01-26 16:29:13.949 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:13 compute-0 nova_compute[239965]: 2026-01-26 16:29:13.969 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:14 compute-0 podman[371167]: 2026-01-26 16:29:14.311892293 +0000 UTC m=+0.069868452 container create b4f2c25983b8c6c15e4a69793d9c78e5d451eeb02a6413005b44f682daff1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:29:14 compute-0 systemd[1]: Started libpod-conmon-b4f2c25983b8c6c15e4a69793d9c78e5d451eeb02a6413005b44f682daff1f52.scope.
Jan 26 16:29:14 compute-0 podman[371167]: 2026-01-26 16:29:14.281928129 +0000 UTC m=+0.039904328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:29:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:14 compute-0 podman[371167]: 2026-01-26 16:29:14.409158955 +0000 UTC m=+0.167135154 container init b4f2c25983b8c6c15e4a69793d9c78e5d451eeb02a6413005b44f682daff1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:29:14 compute-0 podman[371167]: 2026-01-26 16:29:14.420719448 +0000 UTC m=+0.178695577 container start b4f2c25983b8c6c15e4a69793d9c78e5d451eeb02a6413005b44f682daff1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:29:14 compute-0 podman[371167]: 2026-01-26 16:29:14.42451724 +0000 UTC m=+0.182493439 container attach b4f2c25983b8c6c15e4a69793d9c78e5d451eeb02a6413005b44f682daff1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:14 compute-0 nice_taussig[371183]: 167 167
Jan 26 16:29:14 compute-0 systemd[1]: libpod-b4f2c25983b8c6c15e4a69793d9c78e5d451eeb02a6413005b44f682daff1f52.scope: Deactivated successfully.
Jan 26 16:29:14 compute-0 podman[371167]: 2026-01-26 16:29:14.427892704 +0000 UTC m=+0.185868863 container died b4f2c25983b8c6c15e4a69793d9c78e5d451eeb02a6413005b44f682daff1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:29:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-65ca9ba6218dd3a516041001223e8ea376267d32fcc31b08e04cf6e9a4a0e460-merged.mount: Deactivated successfully.
Jan 26 16:29:14 compute-0 podman[371167]: 2026-01-26 16:29:14.484484349 +0000 UTC m=+0.242460508 container remove b4f2c25983b8c6c15e4a69793d9c78e5d451eeb02a6413005b44f682daff1f52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:29:14 compute-0 systemd[1]: libpod-conmon-b4f2c25983b8c6c15e4a69793d9c78e5d451eeb02a6413005b44f682daff1f52.scope: Deactivated successfully.
Jan 26 16:29:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 18 KiB/s wr, 74 op/s
Jan 26 16:29:14 compute-0 podman[371207]: 2026-01-26 16:29:14.757042394 +0000 UTC m=+0.072861955 container create 1acbddc4dbe1329a6883cd66c813375dc39ba5f4936bfae878752a62a7ddda3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:29:14 compute-0 systemd[1]: Started libpod-conmon-1acbddc4dbe1329a6883cd66c813375dc39ba5f4936bfae878752a62a7ddda3c.scope.
Jan 26 16:29:14 compute-0 ceph-mon[75140]: pgmap v2367: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 548 KiB/s rd, 18 KiB/s wr, 74 op/s
Jan 26 16:29:14 compute-0 podman[371207]: 2026-01-26 16:29:14.726547497 +0000 UTC m=+0.042367118 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:29:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76964825ee26f4f850c24e753906e6b4c579f5ab7a64b9c6124a0f991e605328/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76964825ee26f4f850c24e753906e6b4c579f5ab7a64b9c6124a0f991e605328/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76964825ee26f4f850c24e753906e6b4c579f5ab7a64b9c6124a0f991e605328/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76964825ee26f4f850c24e753906e6b4c579f5ab7a64b9c6124a0f991e605328/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:14 compute-0 podman[371207]: 2026-01-26 16:29:14.854743247 +0000 UTC m=+0.170562878 container init 1acbddc4dbe1329a6883cd66c813375dc39ba5f4936bfae878752a62a7ddda3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_carson, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:29:14 compute-0 podman[371207]: 2026-01-26 16:29:14.866019373 +0000 UTC m=+0.181838904 container start 1acbddc4dbe1329a6883cd66c813375dc39ba5f4936bfae878752a62a7ddda3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_carson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:29:14 compute-0 podman[371207]: 2026-01-26 16:29:14.870039402 +0000 UTC m=+0.185858933 container attach 1acbddc4dbe1329a6883cd66c813375dc39ba5f4936bfae878752a62a7ddda3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_carson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.020 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.021 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.039 239969 DEBUG nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.107 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.107 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.114 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.114 239969 INFO nova.compute.claims [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:29:15 compute-0 objective_carson[371224]: {
Jan 26 16:29:15 compute-0 objective_carson[371224]:     "0": [
Jan 26 16:29:15 compute-0 objective_carson[371224]:         {
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "devices": [
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "/dev/loop3"
Jan 26 16:29:15 compute-0 objective_carson[371224]:             ],
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_name": "ceph_lv0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_size": "21470642176",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "name": "ceph_lv0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "tags": {
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.cluster_name": "ceph",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.crush_device_class": "",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.encrypted": "0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.objectstore": "bluestore",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.osd_id": "0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.type": "block",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.vdo": "0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.with_tpm": "0"
Jan 26 16:29:15 compute-0 objective_carson[371224]:             },
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "type": "block",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "vg_name": "ceph_vg0"
Jan 26 16:29:15 compute-0 objective_carson[371224]:         }
Jan 26 16:29:15 compute-0 objective_carson[371224]:     ],
Jan 26 16:29:15 compute-0 objective_carson[371224]:     "1": [
Jan 26 16:29:15 compute-0 objective_carson[371224]:         {
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "devices": [
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "/dev/loop4"
Jan 26 16:29:15 compute-0 objective_carson[371224]:             ],
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_name": "ceph_lv1",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_size": "21470642176",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "name": "ceph_lv1",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "tags": {
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.cluster_name": "ceph",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.crush_device_class": "",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.encrypted": "0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.objectstore": "bluestore",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.osd_id": "1",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.type": "block",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.vdo": "0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.with_tpm": "0"
Jan 26 16:29:15 compute-0 objective_carson[371224]:             },
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "type": "block",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "vg_name": "ceph_vg1"
Jan 26 16:29:15 compute-0 objective_carson[371224]:         }
Jan 26 16:29:15 compute-0 objective_carson[371224]:     ],
Jan 26 16:29:15 compute-0 objective_carson[371224]:     "2": [
Jan 26 16:29:15 compute-0 objective_carson[371224]:         {
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "devices": [
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "/dev/loop5"
Jan 26 16:29:15 compute-0 objective_carson[371224]:             ],
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_name": "ceph_lv2",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_size": "21470642176",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "name": "ceph_lv2",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "tags": {
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.cluster_name": "ceph",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.crush_device_class": "",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.encrypted": "0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.objectstore": "bluestore",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.osd_id": "2",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.type": "block",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.vdo": "0",
Jan 26 16:29:15 compute-0 objective_carson[371224]:                 "ceph.with_tpm": "0"
Jan 26 16:29:15 compute-0 objective_carson[371224]:             },
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "type": "block",
Jan 26 16:29:15 compute-0 objective_carson[371224]:             "vg_name": "ceph_vg2"
Jan 26 16:29:15 compute-0 objective_carson[371224]:         }
Jan 26 16:29:15 compute-0 objective_carson[371224]:     ]
Jan 26 16:29:15 compute-0 objective_carson[371224]: }
Jan 26 16:29:15 compute-0 systemd[1]: libpod-1acbddc4dbe1329a6883cd66c813375dc39ba5f4936bfae878752a62a7ddda3c.scope: Deactivated successfully.
Jan 26 16:29:15 compute-0 podman[371207]: 2026-01-26 16:29:15.191935255 +0000 UTC m=+0.507754806 container died 1acbddc4dbe1329a6883cd66c813375dc39ba5f4936bfae878752a62a7ddda3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:29:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-76964825ee26f4f850c24e753906e6b4c579f5ab7a64b9c6124a0f991e605328-merged.mount: Deactivated successfully.
Jan 26 16:29:15 compute-0 podman[371207]: 2026-01-26 16:29:15.232388755 +0000 UTC m=+0.548208286 container remove 1acbddc4dbe1329a6883cd66c813375dc39ba5f4936bfae878752a62a7ddda3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_carson, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.233 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:15 compute-0 systemd[1]: libpod-conmon-1acbddc4dbe1329a6883cd66c813375dc39ba5f4936bfae878752a62a7ddda3c.scope: Deactivated successfully.
Jan 26 16:29:15 compute-0 sudo[371131]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:15 compute-0 sudo[371246]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:29:15 compute-0 sudo[371246]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:29:15 compute-0 sudo[371246]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:15 compute-0 sudo[371272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:29:15 compute-0 sudo[371272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:29:15 compute-0 podman[371328]: 2026-01-26 16:29:15.732799 +0000 UTC m=+0.044963992 container create 4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:15 compute-0 systemd[1]: Started libpod-conmon-4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267.scope.
Jan 26 16:29:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:29:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/825397289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:15 compute-0 podman[371328]: 2026-01-26 16:29:15.715929497 +0000 UTC m=+0.028094569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.805 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:15 compute-0 nova_compute[239965]: 2026-01-26 16:29:15.817 239969 DEBUG nova.compute.provider_tree [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:29:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/825397289' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:15 compute-0 podman[371328]: 2026-01-26 16:29:15.840255122 +0000 UTC m=+0.152420204 container init 4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:29:15 compute-0 podman[371328]: 2026-01-26 16:29:15.845804678 +0000 UTC m=+0.157969710 container start 4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 16:29:15 compute-0 podman[371328]: 2026-01-26 16:29:15.849258802 +0000 UTC m=+0.161423894 container attach 4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:29:15 compute-0 systemd[1]: libpod-4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267.scope: Deactivated successfully.
Jan 26 16:29:15 compute-0 bold_allen[371345]: 167 167
Jan 26 16:29:15 compute-0 conmon[371345]: conmon 4220855d37423625e074 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267.scope/container/memory.events
Jan 26 16:29:15 compute-0 podman[371328]: 2026-01-26 16:29:15.853880265 +0000 UTC m=+0.166045297 container died 4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-96b11a7453224c6dc334b8100ca93fdbaac34213d767c8a6a9d63dced5f3ac8f-merged.mount: Deactivated successfully.
Jan 26 16:29:15 compute-0 podman[371328]: 2026-01-26 16:29:15.905192422 +0000 UTC m=+0.217357414 container remove 4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_allen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 16:29:15 compute-0 systemd[1]: libpod-conmon-4220855d37423625e074c14728fe083f349f01efeea9255b11b1364fa7c7e267.scope: Deactivated successfully.
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.040 239969 DEBUG nova.scheduler.client.report [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.055 239969 INFO nova.compute.manager [None req-1b0cf140-a06a-420b-9453-85ccd6b85cba 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Get console output
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.065 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.070 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.071 239969 DEBUG nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.127 239969 DEBUG nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.128 239969 DEBUG nova.network.neutron [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.147 239969 INFO nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.164 239969 DEBUG nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:29:16 compute-0 podman[371372]: 2026-01-26 16:29:16.176095566 +0000 UTC m=+0.062928571 container create 01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shockley, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:16 compute-0 systemd[1]: Started libpod-conmon-01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357.scope.
Jan 26 16:29:16 compute-0 podman[371372]: 2026-01-26 16:29:16.153955485 +0000 UTC m=+0.040788490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:29:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3abec7053f977e3363571c6875ea80f58f03603bbdfa4f288164ef10fad4a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3abec7053f977e3363571c6875ea80f58f03603bbdfa4f288164ef10fad4a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3abec7053f977e3363571c6875ea80f58f03603bbdfa4f288164ef10fad4a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a3abec7053f977e3363571c6875ea80f58f03603bbdfa4f288164ef10fad4a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:16 compute-0 podman[371372]: 2026-01-26 16:29:16.2754707 +0000 UTC m=+0.162303685 container init 01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shockley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.279 239969 DEBUG nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.280 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.281 239969 INFO nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Creating image(s)
Jan 26 16:29:16 compute-0 podman[371372]: 2026-01-26 16:29:16.287934355 +0000 UTC m=+0.174767360 container start 01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:29:16 compute-0 podman[371372]: 2026-01-26 16:29:16.295651705 +0000 UTC m=+0.182484730 container attach 01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shockley, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.307 239969 DEBUG nova.storage.rbd_utils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.332 239969 DEBUG nova.storage.rbd_utils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.355 239969 DEBUG nova.storage.rbd_utils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.359 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.438 239969 DEBUG nova.policy [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '96f070e8d2fa469da6ec2db452db6f86', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67c9291388b24d24a9689e4acf626bf2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.454 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.455 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.456 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.456 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.488 239969 DEBUG nova.storage.rbd_utils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.495 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 23444729-2c63-4c8b-a28c-04334c5b5949_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 20 KiB/s wr, 75 op/s
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.793 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 23444729-2c63-4c8b-a28c-04334c5b5949_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:16 compute-0 ceph-mon[75140]: pgmap v2368: 305 pgs: 305 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 20 KiB/s wr, 75 op/s
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.888 239969 DEBUG nova.storage.rbd_utils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] resizing rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:29:16 compute-0 nova_compute[239965]: 2026-01-26 16:29:16.983 239969 DEBUG nova.objects.instance [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.017 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.018 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Ensure instance console log exists: /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.019 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.019 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.020 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:17 compute-0 lvm[371633]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:29:17 compute-0 lvm[371633]: VG ceph_vg1 finished
Jan 26 16:29:17 compute-0 lvm[371632]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:29:17 compute-0 lvm[371632]: VG ceph_vg0 finished
Jan 26 16:29:17 compute-0 lvm[371635]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:29:17 compute-0 lvm[371635]: VG ceph_vg2 finished
Jan 26 16:29:17 compute-0 lvm[371637]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:29:17 compute-0 lvm[371637]: VG ceph_vg2 finished
Jan 26 16:29:17 compute-0 friendly_shockley[371388]: {}
Jan 26 16:29:17 compute-0 systemd[1]: libpod-01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357.scope: Deactivated successfully.
Jan 26 16:29:17 compute-0 podman[371372]: 2026-01-26 16:29:17.234467046 +0000 UTC m=+1.121300061 container died 01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shockley, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 16:29:17 compute-0 systemd[1]: libpod-01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357.scope: Consumed 1.482s CPU time.
Jan 26 16:29:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a3abec7053f977e3363571c6875ea80f58f03603bbdfa4f288164ef10fad4a6-merged.mount: Deactivated successfully.
Jan 26 16:29:17 compute-0 podman[371372]: 2026-01-26 16:29:17.292371064 +0000 UTC m=+1.179204049 container remove 01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 16:29:17 compute-0 systemd[1]: libpod-conmon-01a0bab2d712dd724bc6a70b6c9b4336b04afec50969b42d182e5d676ccfe357.scope: Deactivated successfully.
Jan 26 16:29:17 compute-0 sudo[371272]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:29:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:29:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:29:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:29:17 compute-0 sudo[371653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:29:17 compute-0 sudo[371653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.456 239969 DEBUG oslo_concurrency.lockutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.458 239969 DEBUG oslo_concurrency.lockutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:17 compute-0 sudo[371653]: pam_unix(sudo:session): session closed for user root
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.458 239969 DEBUG oslo_concurrency.lockutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.458 239969 DEBUG oslo_concurrency.lockutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.459 239969 DEBUG oslo_concurrency.lockutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.461 239969 INFO nova.compute.manager [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Terminating instance
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.463 239969 DEBUG nova.compute.manager [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:29:17 compute-0 kernel: tap23d47336-e4 (unregistering): left promiscuous mode
Jan 26 16:29:17 compute-0 NetworkManager[48954]: <info>  [1769444957.5202] device (tap23d47336-e4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:29:17 compute-0 ovn_controller[146046]: 2026-01-26T16:29:17Z|01599|binding|INFO|Releasing lport 23d47336-e4f3-4764-b5f8-68038fe02faa from this chassis (sb_readonly=0)
Jan 26 16:29:17 compute-0 ovn_controller[146046]: 2026-01-26T16:29:17Z|01600|binding|INFO|Setting lport 23d47336-e4f3-4764-b5f8-68038fe02faa down in Southbound
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.527 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:17 compute-0 ovn_controller[146046]: 2026-01-26T16:29:17Z|01601|binding|INFO|Removing iface tap23d47336-e4 ovn-installed in OVS
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.534 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:4f:e5 10.100.0.13'], port_security=['fa:16:3e:5f:4f:e5 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '3a84510b-c77e-4b69-95cf-39eb5ba96406', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '6', 'neutron:security_group_ids': '180e9b46-d482-4e98-8076-c3573609308c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=94eb109d-0d0f-4f4d-9984-1552de27451e, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=23d47336-e4f3-4764-b5f8-68038fe02faa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.535 239969 DEBUG nova.compute.manager [req-245af3c5-8000-4cc3-b998-cd94e139ff6e req-c34204a0-c229-4e4b-8fcd-04b6a22fd09d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-changed-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.536 239969 DEBUG nova.compute.manager [req-245af3c5-8000-4cc3-b998-cd94e139ff6e req-c34204a0-c229-4e4b-8fcd-04b6a22fd09d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Refreshing instance network info cache due to event network-changed-23d47336-e4f3-4764-b5f8-68038fe02faa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.536 239969 DEBUG oslo_concurrency.lockutils [req-245af3c5-8000-4cc3-b998-cd94e139ff6e req-c34204a0-c229-4e4b-8fcd-04b6a22fd09d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.536 239969 DEBUG oslo_concurrency.lockutils [req-245af3c5-8000-4cc3-b998-cd94e139ff6e req-c34204a0-c229-4e4b-8fcd-04b6a22fd09d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.536 239969 DEBUG nova.network.neutron [req-245af3c5-8000-4cc3-b998-cd94e139ff6e req-c34204a0-c229-4e4b-8fcd-04b6a22fd09d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Refreshing network info cache for port 23d47336-e4f3-4764-b5f8-68038fe02faa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.536 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 23d47336-e4f3-4764-b5f8-68038fe02faa in datapath c5dcb653-5134-44c2-bb5f-1ef932033d43 unbound from our chassis
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.537 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c5dcb653-5134-44c2-bb5f-1ef932033d43, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.539 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d396c92b-40c0-44a8-a4a6-95a94c013d82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.539 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 namespace which is not needed anymore
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.556 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:17 compute-0 ovn_controller[146046]: 2026-01-26T16:29:17Z|01602|binding|INFO|Releasing lport 2e120840-5ef3-452f-9957-cc378e107313 from this chassis (sb_readonly=0)
Jan 26 16:29:17 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Deactivated successfully.
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.576 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:17 compute-0 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000094.scope: Consumed 13.876s CPU time.
Jan 26 16:29:17 compute-0 systemd-machined[208061]: Machine qemu-181-instance-00000094 terminated.
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.616 239969 DEBUG nova.network.neutron [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Successfully created port: 71007552-7ba9-4fe5-8e36-ee9ce30da37f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:29:17 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[370655]: [NOTICE]   (370659) : haproxy version is 2.8.14-c23fe91
Jan 26 16:29:17 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[370655]: [NOTICE]   (370659) : path to executable is /usr/sbin/haproxy
Jan 26 16:29:17 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[370655]: [WARNING]  (370659) : Exiting Master process...
Jan 26 16:29:17 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[370655]: [WARNING]  (370659) : Exiting Master process...
Jan 26 16:29:17 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[370655]: [ALERT]    (370659) : Current worker (370661) exited with code 143 (Terminated)
Jan 26 16:29:17 compute-0 neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43[370655]: [WARNING]  (370659) : All workers exited. Exiting... (0)
Jan 26 16:29:17 compute-0 systemd[1]: libpod-fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73.scope: Deactivated successfully.
Jan 26 16:29:17 compute-0 conmon[370655]: conmon fe5ec0ffc1a48aa43b39 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73.scope/container/memory.events
Jan 26 16:29:17 compute-0 podman[371701]: 2026-01-26 16:29:17.67848549 +0000 UTC m=+0.048854487 container died fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:29:17 compute-0 NetworkManager[48954]: <info>  [1769444957.6958] manager: (tap23d47336-e4): new Tun device (/org/freedesktop/NetworkManager/Devices/646)
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.712 239969 INFO nova.virt.libvirt.driver [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance destroyed successfully.
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.713 239969 DEBUG nova.objects.instance [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'resources' on Instance uuid 3a84510b-c77e-4b69-95cf-39eb5ba96406 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:29:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fbda15bb1398afd35fc73b3322d59cd40b32c803c16cd204acc44aa64d7a2b7-merged.mount: Deactivated successfully.
Jan 26 16:29:17 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73-userdata-shm.mount: Deactivated successfully.
Jan 26 16:29:17 compute-0 podman[371701]: 2026-01-26 16:29:17.729363946 +0000 UTC m=+0.099732933 container cleanup fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.732 239969 DEBUG nova.virt.libvirt.vif [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:28:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1457378879',display_name='tempest-TestNetworkAdvancedServerOps-server-1457378879',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1457378879',id=148,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBXRmJ55PYG/LM9xDFj4YfpDSpukedz8p49LlUh0uK6t84cxMa5WxNAqPRBOJJ8+YvFGubL9tQLCSQysIZbD/889B7QK05P6yiYx8xjOJeyjUpWMf+n6V4Xm1xDWJcHxCA==',key_name='tempest-TestNetworkAdvancedServerOps-1322118565',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:28:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-yfjodt40',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:28:57Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=3a84510b-c77e-4b69-95cf-39eb5ba96406,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.733 239969 DEBUG nova.network.os_vif_util [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.735 239969 DEBUG nova.network.os_vif_util [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.736 239969 DEBUG os_vif [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.739 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23d47336-e4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.740 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.747 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:29:17 compute-0 systemd[1]: libpod-conmon-fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73.scope: Deactivated successfully.
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.815 239969 INFO os_vif [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:4f:e5,bridge_name='br-int',has_traffic_filtering=True,id=23d47336-e4f3-4764-b5f8-68038fe02faa,network=Network(c5dcb653-5134-44c2-bb5f-1ef932033d43),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23d47336-e4')
Jan 26 16:29:17 compute-0 podman[371738]: 2026-01-26 16:29:17.816240224 +0000 UTC m=+0.050902148 container remove fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:29:17 compute-0 ovn_controller[146046]: 2026-01-26T16:29:17Z|01603|binding|INFO|Releasing lport 2e120840-5ef3-452f-9957-cc378e107313 from this chassis (sb_readonly=0)
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.826 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6f6e62-63a1-4e1b-ad43-15244fb1686d]: (4, ('Mon Jan 26 04:29:17 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 (fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73)\nfe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73\nMon Jan 26 04:29:17 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 (fe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73)\nfe5ec0ffc1a48aa43b39e1c5b7c0c059bdb25a66ab4e3697437c6f73ad1edd73\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.829 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[45361f88-1627-4835-a6a1-a99c67a0b7c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.830 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5dcb653-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:17 compute-0 kernel: tapc5dcb653-50: left promiscuous mode
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.847 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.850 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f00b7a4-9a07-4c3c-bfc2-369c6bc26641]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.862 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ed310a-f9b8-4ed0-a516-85a18a14ae74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.863 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1812204e-a61e-4eaa-8ecf-3aeeb2a18d76]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.880 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c609c1e7-3ba8-41ad-b0a8-f683e2a080e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 642745, 'reachable_time': 41287, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371770, 'error': None, 'target': 'ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.882 239969 DEBUG nova.compute.manager [req-9bb8e3fe-fabe-47d5-bd65-c4a28b5c3801 req-b561726d-5eb0-4766-bd7d-64f34fcce99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-unplugged-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:17 compute-0 systemd[1]: run-netns-ovnmeta\x2dc5dcb653\x2d5134\x2d44c2\x2dbb5f\x2d1ef932033d43.mount: Deactivated successfully.
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.883 239969 DEBUG oslo_concurrency.lockutils [req-9bb8e3fe-fabe-47d5-bd65-c4a28b5c3801 req-b561726d-5eb0-4766-bd7d-64f34fcce99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.883 239969 DEBUG oslo_concurrency.lockutils [req-9bb8e3fe-fabe-47d5-bd65-c4a28b5c3801 req-b561726d-5eb0-4766-bd7d-64f34fcce99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.883 239969 DEBUG oslo_concurrency.lockutils [req-9bb8e3fe-fabe-47d5-bd65-c4a28b5c3801 req-b561726d-5eb0-4766-bd7d-64f34fcce99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.884 239969 DEBUG nova.compute.manager [req-9bb8e3fe-fabe-47d5-bd65-c4a28b5c3801 req-b561726d-5eb0-4766-bd7d-64f34fcce99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] No waiting events found dispatching network-vif-unplugged-23d47336-e4f3-4764-b5f8-68038fe02faa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:29:17 compute-0 nova_compute[239965]: 2026-01-26 16:29:17.884 239969 DEBUG nova.compute.manager [req-9bb8e3fe-fabe-47d5-bd65-c4a28b5c3801 req-b561726d-5eb0-4766-bd7d-64f34fcce99d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-unplugged-23d47336-e4f3-4764-b5f8-68038fe02faa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.884 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c5dcb653-5134-44c2-bb5f-1ef932033d43 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:29:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:17.884 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[3dee99a9-4350-4f1b-be52-9d712a9e93b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:18 compute-0 nova_compute[239965]: 2026-01-26 16:29:18.046 239969 INFO nova.virt.libvirt.driver [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Deleting instance files /var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406_del
Jan 26 16:29:18 compute-0 nova_compute[239965]: 2026-01-26 16:29:18.047 239969 INFO nova.virt.libvirt.driver [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Deletion of /var/lib/nova/instances/3a84510b-c77e-4b69-95cf-39eb5ba96406_del complete
Jan 26 16:29:18 compute-0 nova_compute[239965]: 2026-01-26 16:29:18.141 239969 INFO nova.compute.manager [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Took 0.68 seconds to destroy the instance on the hypervisor.
Jan 26 16:29:18 compute-0 nova_compute[239965]: 2026-01-26 16:29:18.142 239969 DEBUG oslo.service.loopingcall [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:29:18 compute-0 nova_compute[239965]: 2026-01-26 16:29:18.143 239969 DEBUG nova.compute.manager [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:29:18 compute-0 nova_compute[239965]: 2026-01-26 16:29:18.143 239969 DEBUG nova.network.neutron [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:29:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:29:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:29:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 1.2 MiB/s wr, 90 op/s
Jan 26 16:29:18 compute-0 nova_compute[239965]: 2026-01-26 16:29:18.970 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:19 compute-0 ceph-mon[75140]: pgmap v2369: 305 pgs: 305 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 1.2 MiB/s wr, 90 op/s
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.262 239969 DEBUG nova.compute.manager [req-67dbaaef-a972-4cf0-8749-4b79266974b3 req-f4562699-80d1-47a3-8bcb-85c11e11d405 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.263 239969 DEBUG oslo_concurrency.lockutils [req-67dbaaef-a972-4cf0-8749-4b79266974b3 req-f4562699-80d1-47a3-8bcb-85c11e11d405 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.264 239969 DEBUG oslo_concurrency.lockutils [req-67dbaaef-a972-4cf0-8749-4b79266974b3 req-f4562699-80d1-47a3-8bcb-85c11e11d405 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.264 239969 DEBUG oslo_concurrency.lockutils [req-67dbaaef-a972-4cf0-8749-4b79266974b3 req-f4562699-80d1-47a3-8bcb-85c11e11d405 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.265 239969 DEBUG nova.compute.manager [req-67dbaaef-a972-4cf0-8749-4b79266974b3 req-f4562699-80d1-47a3-8bcb-85c11e11d405 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] No waiting events found dispatching network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.265 239969 WARNING nova.compute.manager [req-67dbaaef-a972-4cf0-8749-4b79266974b3 req-f4562699-80d1-47a3-8bcb-85c11e11d405 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received unexpected event network-vif-plugged-23d47336-e4f3-4764-b5f8-68038fe02faa for instance with vm_state active and task_state deleting.
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.439 239969 DEBUG nova.network.neutron [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Successfully updated port: 71007552-7ba9-4fe5-8e36-ee9ce30da37f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.455 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.455 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquired lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.456 239969 DEBUG nova.network.neutron [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.531 239969 DEBUG nova.compute.manager [req-4f55a7fe-2df6-44e4-b3fc-0ffcf15b1a02 req-d62e275a-8a29-43e7-8c51-f2c37932737a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.532 239969 DEBUG nova.compute.manager [req-4f55a7fe-2df6-44e4-b3fc-0ffcf15b1a02 req-d62e275a-8a29-43e7-8c51-f2c37932737a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing instance network info cache due to event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.532 239969 DEBUG oslo_concurrency.lockutils [req-4f55a7fe-2df6-44e4-b3fc-0ffcf15b1a02 req-d62e275a-8a29-43e7-8c51-f2c37932737a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.730 239969 DEBUG nova.network.neutron [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:29:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 1.2 MiB/s wr, 90 op/s
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.803 239969 DEBUG nova.network.neutron [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:20 compute-0 ceph-mon[75140]: pgmap v2370: 305 pgs: 305 active+clean; 197 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 560 KiB/s rd, 1.2 MiB/s wr, 90 op/s
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.817 239969 INFO nova.compute.manager [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Took 2.67 seconds to deallocate network for instance.
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.860 239969 DEBUG oslo_concurrency.lockutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.860 239969 DEBUG oslo_concurrency.lockutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.936 239969 DEBUG oslo_concurrency.processutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.974 239969 DEBUG nova.network.neutron [req-245af3c5-8000-4cc3-b998-cd94e139ff6e req-c34204a0-c229-4e4b-8fcd-04b6a22fd09d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Updated VIF entry in instance network info cache for port 23d47336-e4f3-4764-b5f8-68038fe02faa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.975 239969 DEBUG nova.network.neutron [req-245af3c5-8000-4cc3-b998-cd94e139ff6e req-c34204a0-c229-4e4b-8fcd-04b6a22fd09d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Updating instance_info_cache with network_info: [{"id": "23d47336-e4f3-4764-b5f8-68038fe02faa", "address": "fa:16:3e:5f:4f:e5", "network": {"id": "c5dcb653-5134-44c2-bb5f-1ef932033d43", "bridge": "br-int", "label": "tempest-network-smoke--676384076", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23d47336-e4", "ovs_interfaceid": "23d47336-e4f3-4764-b5f8-68038fe02faa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:20 compute-0 nova_compute[239965]: 2026-01-26 16:29:20.995 239969 DEBUG oslo_concurrency.lockutils [req-245af3c5-8000-4cc3-b998-cd94e139ff6e req-c34204a0-c229-4e4b-8fcd-04b6a22fd09d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-3a84510b-c77e-4b69-95cf-39eb5ba96406" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:29:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:29:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4258716076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:21 compute-0 nova_compute[239965]: 2026-01-26 16:29:21.491 239969 DEBUG oslo_concurrency.processutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:21 compute-0 nova_compute[239965]: 2026-01-26 16:29:21.497 239969 DEBUG nova.compute.provider_tree [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:29:21 compute-0 nova_compute[239965]: 2026-01-26 16:29:21.511 239969 DEBUG nova.scheduler.client.report [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:29:21 compute-0 nova_compute[239965]: 2026-01-26 16:29:21.528 239969 DEBUG oslo_concurrency.lockutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:21 compute-0 nova_compute[239965]: 2026-01-26 16:29:21.562 239969 INFO nova.scheduler.client.report [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Deleted allocations for instance 3a84510b-c77e-4b69-95cf-39eb5ba96406
Jan 26 16:29:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4258716076' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:21 compute-0 nova_compute[239965]: 2026-01-26 16:29:21.905 239969 DEBUG oslo_concurrency.lockutils [None req-54e9d78e-aee4-42b5-80b0-414f1b10227f 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "3a84510b-c77e-4b69-95cf-39eb5ba96406" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.448s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.632 239969 DEBUG nova.compute.manager [req-00b7d706-168b-4c93-b78b-769df27888f7 req-5ddf63c9-8254-4aa4-9b08-7d41e17047a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Received event network-vif-deleted-23d47336-e4f3-4764-b5f8-68038fe02faa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.632 239969 INFO nova.compute.manager [req-00b7d706-168b-4c93-b78b-769df27888f7 req-5ddf63c9-8254-4aa4-9b08-7d41e17047a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Neutron deleted interface 23d47336-e4f3-4764-b5f8-68038fe02faa; detaching it from the instance and deleting it from the info cache
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.632 239969 DEBUG nova.network.neutron [req-00b7d706-168b-4c93-b78b-769df27888f7 req-5ddf63c9-8254-4aa4-9b08-7d41e17047a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.635 239969 DEBUG nova.compute.manager [req-00b7d706-168b-4c93-b78b-769df27888f7 req-5ddf63c9-8254-4aa4-9b08-7d41e17047a6 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Detach interface failed, port_id=23d47336-e4f3-4764-b5f8-68038fe02faa, reason: Instance 3a84510b-c77e-4b69-95cf-39eb5ba96406 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 1.8 MiB/s wr, 130 op/s
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.829538) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444962829573, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 995, "num_deletes": 252, "total_data_size": 1327813, "memory_usage": 1359984, "flush_reason": "Manual Compaction"}
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Jan 26 16:29:22 compute-0 ceph-mon[75140]: pgmap v2371: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 1.8 MiB/s wr, 130 op/s
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444962837638, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1296732, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49050, "largest_seqno": 50044, "table_properties": {"data_size": 1291945, "index_size": 2311, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11130, "raw_average_key_size": 20, "raw_value_size": 1282009, "raw_average_value_size": 2305, "num_data_blocks": 103, "num_entries": 556, "num_filter_entries": 556, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444885, "oldest_key_time": 1769444885, "file_creation_time": 1769444962, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 8157 microseconds, and 3660 cpu microseconds.
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.837690) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1296732 bytes OK
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.837713) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.839277) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.839289) EVENT_LOG_v1 {"time_micros": 1769444962839285, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.839306) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1323032, prev total WAL file size 1323032, number of live WAL files 2.
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.839835) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1266KB)], [113(8805KB)]
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444962839864, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10313744, "oldest_snapshot_seqno": -1}
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.890 239969 DEBUG nova.network.neutron [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 7171 keys, 8592468 bytes, temperature: kUnknown
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444962904432, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8592468, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8546688, "index_size": 26724, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17989, "raw_key_size": 186798, "raw_average_key_size": 26, "raw_value_size": 8420822, "raw_average_value_size": 1174, "num_data_blocks": 1032, "num_entries": 7171, "num_filter_entries": 7171, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769444962, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.904740) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8592468 bytes
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.906205) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.4 rd, 132.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 8.6 +0.0 blob) out(8.2 +0.0 blob), read-write-amplify(14.6) write-amplify(6.6) OK, records in: 7692, records dropped: 521 output_compression: NoCompression
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.906225) EVENT_LOG_v1 {"time_micros": 1769444962906216, "job": 68, "event": "compaction_finished", "compaction_time_micros": 64693, "compaction_time_cpu_micros": 27214, "output_level": 6, "num_output_files": 1, "total_output_size": 8592468, "num_input_records": 7692, "num_output_records": 7171, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444962906721, "job": 68, "event": "table_file_deletion", "file_number": 115}
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769444962908940, "job": 68, "event": "table_file_deletion", "file_number": 113}
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.839741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.909033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.909039) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.909041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.909043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:29:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:29:22.909044) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.917 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Releasing lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.917 239969 DEBUG nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance network_info: |[{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.918 239969 DEBUG oslo_concurrency.lockutils [req-4f55a7fe-2df6-44e4-b3fc-0ffcf15b1a02 req-d62e275a-8a29-43e7-8c51-f2c37932737a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.918 239969 DEBUG nova.network.neutron [req-4f55a7fe-2df6-44e4-b3fc-0ffcf15b1a02 req-d62e275a-8a29-43e7-8c51-f2c37932737a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.921 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Start _get_guest_xml network_info=[{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.925 239969 WARNING nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.930 239969 DEBUG nova.virt.libvirt.host [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.930 239969 DEBUG nova.virt.libvirt.host [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.933 239969 DEBUG nova.virt.libvirt.host [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.934 239969 DEBUG nova.virt.libvirt.host [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.934 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.935 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.935 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.936 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.936 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.936 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.936 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.937 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.937 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.938 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.938 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.938 239969 DEBUG nova.virt.hardware [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:29:22 compute-0 nova_compute[239965]: 2026-01-26 16:29:22.943 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:29:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1243604191' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:29:23 compute-0 nova_compute[239965]: 2026-01-26 16:29:23.505 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:23 compute-0 nova_compute[239965]: 2026-01-26 16:29:23.539 239969 DEBUG nova.storage.rbd_utils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:23 compute-0 nova_compute[239965]: 2026-01-26 16:29:23.544 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1243604191' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:29:23 compute-0 nova_compute[239965]: 2026-01-26 16:29:23.912 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444948.911232, 77d5bfbb-20c1-4450-acf6-d6cc979c6657 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:29:23 compute-0 nova_compute[239965]: 2026-01-26 16:29:23.913 239969 INFO nova.compute.manager [-] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] VM Stopped (Lifecycle Event)
Jan 26 16:29:23 compute-0 nova_compute[239965]: 2026-01-26 16:29:23.946 239969 DEBUG nova.compute.manager [None req-f6b8db76-9c9d-4f1c-b44b-4ebc77abde80 - - - - - -] [instance: 77d5bfbb-20c1-4450-acf6-d6cc979c6657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:23 compute-0 nova_compute[239965]: 2026-01-26 16:29:23.972 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:29:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164642622' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.131 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.132 239969 DEBUG nova.virt.libvirt.vif [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:29:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-139030865',display_name='tempest-TestShelveInstance-server-139030865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-139030865',id=150,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR71t6y+HJ4qAoeBl0pCSSZsSHWE/LQDqzbI32DfEJP6X/ptIcGuyNSJStvWenZDTRQYi8gTT8yZ+rcYMA60xr8rqSoQ28CSiFxxFlkuGKRbU71KSafynyknZ+52zot4Q==',key_name='tempest-TestShelveInstance-779455889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67c9291388b24d24a9689e4acf626bf2',ramdisk_id='',reservation_id='r-9na32hh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1058510424',owner_user_name='tempest-TestShelveInstance-1058510424-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:29:16Z,user_data=None,user_id='96f070e8d2fa469da6ec2db452db6f86',uuid=23444729-2c63-4c8b-a28c-04334c5b5949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.133 239969 DEBUG nova.network.os_vif_util [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converting VIF {"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.133 239969 DEBUG nova.network.os_vif_util [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.134 239969 DEBUG nova.objects.instance [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.153 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <uuid>23444729-2c63-4c8b-a28c-04334c5b5949</uuid>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <name>instance-00000096</name>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <nova:name>tempest-TestShelveInstance-server-139030865</nova:name>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:29:22</nova:creationTime>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <nova:user uuid="96f070e8d2fa469da6ec2db452db6f86">tempest-TestShelveInstance-1058510424-project-member</nova:user>
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <nova:project uuid="67c9291388b24d24a9689e4acf626bf2">tempest-TestShelveInstance-1058510424</nova:project>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <nova:port uuid="71007552-7ba9-4fe5-8e36-ee9ce30da37f">
Jan 26 16:29:24 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <system>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <entry name="serial">23444729-2c63-4c8b-a28c-04334c5b5949</entry>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <entry name="uuid">23444729-2c63-4c8b-a28c-04334c5b5949</entry>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     </system>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <os>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   </os>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <features>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   </features>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/23444729-2c63-4c8b-a28c-04334c5b5949_disk">
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       </source>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/23444729-2c63-4c8b-a28c-04334c5b5949_disk.config">
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       </source>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:29:24 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:45:01:8a"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <target dev="tap71007552-7b"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/console.log" append="off"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <video>
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     </video>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:29:24 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:29:24 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:29:24 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:29:24 compute-0 nova_compute[239965]: </domain>
Jan 26 16:29:24 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.154 239969 DEBUG nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Preparing to wait for external event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.155 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.155 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.156 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.157 239969 DEBUG nova.virt.libvirt.vif [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:29:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-139030865',display_name='tempest-TestShelveInstance-server-139030865',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-139030865',id=150,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR71t6y+HJ4qAoeBl0pCSSZsSHWE/LQDqzbI32DfEJP6X/ptIcGuyNSJStvWenZDTRQYi8gTT8yZ+rcYMA60xr8rqSoQ28CSiFxxFlkuGKRbU71KSafynyknZ+52zot4Q==',key_name='tempest-TestShelveInstance-779455889',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67c9291388b24d24a9689e4acf626bf2',ramdisk_id='',reservation_id='r-9na32hh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-1058510424',owner_user_name='tempest-TestShelveInstance-1058510424-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:29:16Z,user_data=None,user_id='96f070e8d2fa469da6ec2db452db6f86',uuid=23444729-2c63-4c8b-a28c-04334c5b5949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.158 239969 DEBUG nova.network.os_vif_util [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converting VIF {"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.159 239969 DEBUG nova.network.os_vif_util [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.159 239969 DEBUG os_vif [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.160 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.161 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.162 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.166 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.166 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71007552-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.167 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap71007552-7b, col_values=(('external_ids', {'iface-id': '71007552-7ba9-4fe5-8e36-ee9ce30da37f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:01:8a', 'vm-uuid': '23444729-2c63-4c8b-a28c-04334c5b5949'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:24 compute-0 NetworkManager[48954]: <info>  [1769444964.1702] manager: (tap71007552-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.169 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.174 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.177 239969 INFO os_vif [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b')
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.258 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.258 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.259 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] No VIF found with MAC fa:16:3e:45:01:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.259 239969 INFO nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Using config drive
Jan 26 16:29:24 compute-0 nova_compute[239965]: 2026-01-26 16:29:24.280 239969 DEBUG nova.storage.rbd_utils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 26 16:29:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3164642622' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:29:24 compute-0 ceph-mon[75140]: pgmap v2372: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.150 239969 INFO nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Creating config drive at /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.156 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6vahcvoe execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.325 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6vahcvoe" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.349 239969 DEBUG nova.storage.rbd_utils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.354 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.502 239969 DEBUG oslo_concurrency.processutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.504 239969 INFO nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Deleting local config drive /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config because it was imported into RBD.
Jan 26 16:29:25 compute-0 kernel: tap71007552-7b: entered promiscuous mode
Jan 26 16:29:25 compute-0 NetworkManager[48954]: <info>  [1769444965.5608] manager: (tap71007552-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.560 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:25 compute-0 ovn_controller[146046]: 2026-01-26T16:29:25Z|01604|binding|INFO|Claiming lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f for this chassis.
Jan 26 16:29:25 compute-0 ovn_controller[146046]: 2026-01-26T16:29:25Z|01605|binding|INFO|71007552-7ba9-4fe5-8e36-ee9ce30da37f: Claiming fa:16:3e:45:01:8a 10.100.0.6
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.565 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:25 compute-0 systemd-udevd[371929]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.591 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:01:8a 10.100.0.6'], port_security=['fa:16:3e:45:01:8a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '23444729-2c63-4c8b-a28c-04334c5b5949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67c9291388b24d24a9689e4acf626bf2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fec1bead-ca1f-4cab-8561-fae558eddc5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75ede299-1df1-4eb6-926a-ac3d97013a63, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=71007552-7ba9-4fe5-8e36-ee9ce30da37f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.592 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 71007552-7ba9-4fe5-8e36-ee9ce30da37f in datapath 68925d6a-01d0-4cb9-94a6-da3e5f3fac44 bound to our chassis
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.594 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 68925d6a-01d0-4cb9-94a6-da3e5f3fac44
Jan 26 16:29:25 compute-0 systemd-machined[208061]: New machine qemu-182-instance-00000096.
Jan 26 16:29:25 compute-0 NetworkManager[48954]: <info>  [1769444965.6025] device (tap71007552-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:29:25 compute-0 NetworkManager[48954]: <info>  [1769444965.6034] device (tap71007552-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:29:25 compute-0 systemd[1]: Started Virtual Machine qemu-182-instance-00000096.
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.615 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[41262c15-2e4c-4cc6-83f5-023198a55494]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.616 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap68925d6a-01 in ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.618 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap68925d6a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.618 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0267a6-6db0-4d80-ad38-ad960d4c7f3f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.620 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[34b4352f-56c5-4a6e-aebb-4aa866b98faa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.641 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[f5901a1e-8240-4d58-8fab-e174b0b1af21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_controller[146046]: 2026-01-26T16:29:25Z|01606|binding|INFO|Setting lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f ovn-installed in OVS
Jan 26 16:29:25 compute-0 ovn_controller[146046]: 2026-01-26T16:29:25Z|01607|binding|INFO|Setting lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f up in Southbound
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.657 239969 DEBUG nova.network.neutron [req-4f55a7fe-2df6-44e4-b3fc-0ffcf15b1a02 req-d62e275a-8a29-43e7-8c51-f2c37932737a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updated VIF entry in instance network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.657 239969 DEBUG nova.network.neutron [req-4f55a7fe-2df6-44e4-b3fc-0ffcf15b1a02 req-d62e275a-8a29-43e7-8c51-f2c37932737a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.678 239969 DEBUG oslo_concurrency.lockutils [req-4f55a7fe-2df6-44e4-b3fc-0ffcf15b1a02 req-d62e275a-8a29-43e7-8c51-f2c37932737a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.686 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.689 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[faf71687-67de-4861-ae88-ae65fd9ee7fc]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.711 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[884a7e93-bd4e-4652-ae6f-e0d9030377d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 systemd-udevd[371932]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:29:25 compute-0 NetworkManager[48954]: <info>  [1769444965.7166] manager: (tap68925d6a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/649)
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.715 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d75426cf-b64a-4bdc-ada3-9f13a2982d70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.755 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[b64d59e7-01be-421e-af54-61b8879b5b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.757 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[297fc461-358a-43d2-b857-990b16efff23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 NetworkManager[48954]: <info>  [1769444965.7781] device (tap68925d6a-00): carrier: link connected
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.782 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[e1bd13fc-912c-4f86-9b13-986252bf558e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.802 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4a5197-45da-4f45-898d-5de35ae47acf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68925d6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:70:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645658, 'reachable_time': 43925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371963, 'error': None, 'target': 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.822 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[a2593ea6-63d7-47f6-a6a5-c23042c0c454]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:70ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 645658, 'tstamp': 645658}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371964, 'error': None, 'target': 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.848 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1a4d4b6d-e1c7-435d-b51a-478e6baace22]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68925d6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:70:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 461], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645658, 'reachable_time': 43925, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371965, 'error': None, 'target': 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.878 239969 DEBUG nova.compute.manager [req-648dcda7-b519-4f06-8fe3-ac28109973a8 req-2ae9dad5-b53e-47aa-8d47-0faf681901c2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.878 239969 DEBUG oslo_concurrency.lockutils [req-648dcda7-b519-4f06-8fe3-ac28109973a8 req-2ae9dad5-b53e-47aa-8d47-0faf681901c2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.879 239969 DEBUG oslo_concurrency.lockutils [req-648dcda7-b519-4f06-8fe3-ac28109973a8 req-2ae9dad5-b53e-47aa-8d47-0faf681901c2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.879 239969 DEBUG oslo_concurrency.lockutils [req-648dcda7-b519-4f06-8fe3-ac28109973a8 req-2ae9dad5-b53e-47aa-8d47-0faf681901c2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.879 239969 DEBUG nova.compute.manager [req-648dcda7-b519-4f06-8fe3-ac28109973a8 req-2ae9dad5-b53e-47aa-8d47-0faf681901c2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Processing event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.884 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[70bd7f9c-014d-4841-8c78-04db4659af3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.941 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4d8e09bc-4ecc-4b9c-b50c-2a298b6f1f7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.942 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68925d6a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.942 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.943 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68925d6a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:25 compute-0 NetworkManager[48954]: <info>  [1769444965.9451] manager: (tap68925d6a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.944 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:25 compute-0 kernel: tap68925d6a-00: entered promiscuous mode
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.952 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap68925d6a-00, col_values=(('external_ids', {'iface-id': '3962717a-bcba-4dc8-93b5-aaeff002c38f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:25 compute-0 ovn_controller[146046]: 2026-01-26T16:29:25Z|01608|binding|INFO|Releasing lport 3962717a-bcba-4dc8-93b5-aaeff002c38f from this chassis (sb_readonly=0)
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.953 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:25 compute-0 nova_compute[239965]: 2026-01-26 16:29:25.966 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.967 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/68925d6a-01d0-4cb9-94a6-da3e5f3fac44.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/68925d6a-01d0-4cb9-94a6-da3e5f3fac44.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.967 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[80879c68-fd01-4c58-bca9-66ed1c30a020]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.968 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-68925d6a-01d0-4cb9-94a6-da3e5f3fac44
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/68925d6a-01d0-4cb9-94a6-da3e5f3fac44.pid.haproxy
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 68925d6a-01d0-4cb9-94a6-da3e5f3fac44
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:29:25 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:25.968 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'env', 'PROCESS_TAG=haproxy-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/68925d6a-01d0-4cb9-94a6-da3e5f3fac44.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:29:26 compute-0 podman[371998]: 2026-01-26 16:29:26.353210142 +0000 UTC m=+0.063358122 container create 5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:29:26 compute-0 systemd[1]: Started libpod-conmon-5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37.scope.
Jan 26 16:29:26 compute-0 podman[371998]: 2026-01-26 16:29:26.321078585 +0000 UTC m=+0.031226565 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:29:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea9a076cbb73632203ca4524e00b697d395f45742ed6b63cbcb532d953304f3b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:26 compute-0 podman[371998]: 2026-01-26 16:29:26.456223915 +0000 UTC m=+0.166371925 container init 5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:29:26 compute-0 podman[371998]: 2026-01-26 16:29:26.461481553 +0000 UTC m=+0.171629513 container start 5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Jan 26 16:29:26 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[372013]: [NOTICE]   (372017) : New worker (372019) forked
Jan 26 16:29:26 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[372013]: [NOTICE]   (372017) : Loading success.
Jan 26 16:29:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 26 16:29:26 compute-0 ceph-mon[75140]: pgmap v2373: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.877 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444966.8772779, 23444729-2c63-4c8b-a28c-04334c5b5949 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.878 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] VM Started (Lifecycle Event)
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.880 239969 DEBUG nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.885 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.888 239969 INFO nova.virt.libvirt.driver [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance spawned successfully.
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.888 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.903 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.912 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.918 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.919 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.920 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.920 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.921 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.922 239969 DEBUG nova.virt.libvirt.driver [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.969 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.970 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444966.877524, 23444729-2c63-4c8b-a28c-04334c5b5949 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:29:26 compute-0 nova_compute[239965]: 2026-01-26 16:29:26.970 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] VM Paused (Lifecycle Event)
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.001 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.010 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444966.8837693, 23444729-2c63-4c8b-a28c-04334c5b5949 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.010 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] VM Resumed (Lifecycle Event)
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.018 239969 INFO nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Took 10.74 seconds to spawn the instance on the hypervisor.
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.018 239969 DEBUG nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.032 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.037 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.057 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.094 239969 INFO nova.compute.manager [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Took 12.00 seconds to build instance.
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.113 239969 DEBUG oslo_concurrency.lockutils [None req-1b28c9e4-63f5-44b9-b9b8-abaa260e202b 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.989 239969 DEBUG nova.compute.manager [req-b8076dec-6274-4772-8cec-37954d2430e3 req-8978afb6-c67f-4c8e-8309-6a42a408f297 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.989 239969 DEBUG oslo_concurrency.lockutils [req-b8076dec-6274-4772-8cec-37954d2430e3 req-8978afb6-c67f-4c8e-8309-6a42a408f297 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.989 239969 DEBUG oslo_concurrency.lockutils [req-b8076dec-6274-4772-8cec-37954d2430e3 req-8978afb6-c67f-4c8e-8309-6a42a408f297 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.989 239969 DEBUG oslo_concurrency.lockutils [req-b8076dec-6274-4772-8cec-37954d2430e3 req-8978afb6-c67f-4c8e-8309-6a42a408f297 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.990 239969 DEBUG nova.compute.manager [req-b8076dec-6274-4772-8cec-37954d2430e3 req-8978afb6-c67f-4c8e-8309-6a42a408f297 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] No waiting events found dispatching network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:29:27 compute-0 nova_compute[239965]: 2026-01-26 16:29:27.990 239969 WARNING nova.compute.manager [req-b8076dec-6274-4772-8cec-37954d2430e3 req-8978afb6-c67f-4c8e-8309-6a42a408f297 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received unexpected event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f for instance with vm_state active and task_state None.
Jan 26 16:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:29:28
Jan 26 16:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', 'default.rgw.control', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta']
Jan 26 16:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:29:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 154 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 26 16:29:28 compute-0 ceph-mon[75140]: pgmap v2374: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 154 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Jan 26 16:29:29 compute-0 nova_compute[239965]: 2026-01-26 16:29:29.005 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:29 compute-0 nova_compute[239965]: 2026-01-26 16:29:29.169 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 143 KiB/s rd, 605 KiB/s wr, 53 op/s
Jan 26 16:29:30 compute-0 ceph-mon[75140]: pgmap v2375: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 143 KiB/s rd, 605 KiB/s wr, 53 op/s
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:29:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:29:32 compute-0 nova_compute[239965]: 2026-01-26 16:29:32.120 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:32 compute-0 NetworkManager[48954]: <info>  [1769444972.1274] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Jan 26 16:29:32 compute-0 NetworkManager[48954]: <info>  [1769444972.1433] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Jan 26 16:29:32 compute-0 nova_compute[239965]: 2026-01-26 16:29:32.246 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:32 compute-0 ovn_controller[146046]: 2026-01-26T16:29:32Z|01609|binding|INFO|Releasing lport 3962717a-bcba-4dc8-93b5-aaeff002c38f from this chassis (sb_readonly=0)
Jan 26 16:29:32 compute-0 nova_compute[239965]: 2026-01-26 16:29:32.259 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:32 compute-0 nova_compute[239965]: 2026-01-26 16:29:32.711 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444957.709527, 3a84510b-c77e-4b69-95cf-39eb5ba96406 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:29:32 compute-0 nova_compute[239965]: 2026-01-26 16:29:32.712 239969 INFO nova.compute.manager [-] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] VM Stopped (Lifecycle Event)
Jan 26 16:29:32 compute-0 nova_compute[239965]: 2026-01-26 16:29:32.745 239969 DEBUG nova.compute.manager [None req-fdd0657e-49f5-43de-b26c-3fce7339d2a8 - - - - - -] [instance: 3a84510b-c77e-4b69-95cf-39eb5ba96406] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 605 KiB/s wr, 113 op/s
Jan 26 16:29:32 compute-0 ceph-mon[75140]: pgmap v2376: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 605 KiB/s wr, 113 op/s
Jan 26 16:29:33 compute-0 nova_compute[239965]: 2026-01-26 16:29:33.321 239969 DEBUG nova.compute.manager [req-039e7d7e-9888-47d0-9153-6a15afc4f783 req-31030d37-ecd4-4bf2-88f1-8c6edaa6a750 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:33 compute-0 nova_compute[239965]: 2026-01-26 16:29:33.322 239969 DEBUG nova.compute.manager [req-039e7d7e-9888-47d0-9153-6a15afc4f783 req-31030d37-ecd4-4bf2-88f1-8c6edaa6a750 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing instance network info cache due to event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:29:33 compute-0 nova_compute[239965]: 2026-01-26 16:29:33.323 239969 DEBUG oslo_concurrency.lockutils [req-039e7d7e-9888-47d0-9153-6a15afc4f783 req-31030d37-ecd4-4bf2-88f1-8c6edaa6a750 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:29:33 compute-0 nova_compute[239965]: 2026-01-26 16:29:33.324 239969 DEBUG oslo_concurrency.lockutils [req-039e7d7e-9888-47d0-9153-6a15afc4f783 req-31030d37-ecd4-4bf2-88f1-8c6edaa6a750 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:29:33 compute-0 nova_compute[239965]: 2026-01-26 16:29:33.325 239969 DEBUG nova.network.neutron [req-039e7d7e-9888-47d0-9153-6a15afc4f783 req-31030d37-ecd4-4bf2-88f1-8c6edaa6a750 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:29:34 compute-0 nova_compute[239965]: 2026-01-26 16:29:34.009 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:34 compute-0 nova_compute[239965]: 2026-01-26 16:29:34.171 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:29:34 compute-0 ceph-mon[75140]: pgmap v2377: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:29:35 compute-0 nova_compute[239965]: 2026-01-26 16:29:35.398 239969 DEBUG nova.network.neutron [req-039e7d7e-9888-47d0-9153-6a15afc4f783 req-31030d37-ecd4-4bf2-88f1-8c6edaa6a750 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updated VIF entry in instance network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:29:35 compute-0 nova_compute[239965]: 2026-01-26 16:29:35.399 239969 DEBUG nova.network.neutron [req-039e7d7e-9888-47d0-9153-6a15afc4f783 req-31030d37-ecd4-4bf2-88f1-8c6edaa6a750 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:35 compute-0 nova_compute[239965]: 2026-01-26 16:29:35.424 239969 DEBUG oslo_concurrency.lockutils [req-039e7d7e-9888-47d0-9153-6a15afc4f783 req-31030d37-ecd4-4bf2-88f1-8c6edaa6a750 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:29:35 compute-0 nova_compute[239965]: 2026-01-26 16:29:35.508 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:29:36 compute-0 ceph-mon[75140]: pgmap v2378: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:29:38 compute-0 nova_compute[239965]: 2026-01-26 16:29:38.673 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:29:38 compute-0 ceph-mon[75140]: pgmap v2379: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Jan 26 16:29:39 compute-0 nova_compute[239965]: 2026-01-26 16:29:39.011 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:39 compute-0 nova_compute[239965]: 2026-01-26 16:29:39.173 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:39 compute-0 ovn_controller[146046]: 2026-01-26T16:29:39Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:01:8a 10.100.0.6
Jan 26 16:29:39 compute-0 ovn_controller[146046]: 2026-01-26T16:29:39Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:01:8a 10.100.0.6
Jan 26 16:29:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 60 op/s
Jan 26 16:29:40 compute-0 ceph-mon[75140]: pgmap v2380: 305 pgs: 305 active+clean; 134 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 60 op/s
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.088 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.088 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.104 239969 DEBUG nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.191 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.191 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.202 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.202 239969 INFO nova.compute.claims [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.323 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Jan 26 16:29:42 compute-0 ceph-mon[75140]: pgmap v2381: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Jan 26 16:29:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:29:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2620718557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.870 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.877 239969 DEBUG nova.compute.provider_tree [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.894 239969 DEBUG nova.scheduler.client.report [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.918 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.920 239969 DEBUG nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.975 239969 DEBUG nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.975 239969 DEBUG nova.network.neutron [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:29:42 compute-0 nova_compute[239965]: 2026-01-26 16:29:42.991 239969 INFO nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.018 239969 DEBUG nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.101 239969 DEBUG nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.103 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.104 239969 INFO nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Creating image(s)
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.142 239969 DEBUG nova.storage.rbd_utils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.183 239969 DEBUG nova.storage.rbd_utils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.220 239969 DEBUG nova.storage.rbd_utils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.225 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.284 239969 DEBUG nova.policy [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '59ae1c17a260470c91f50965ddd53a9e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.340 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.341 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.341 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.342 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.369 239969 DEBUG nova.storage.rbd_utils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.372 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:43 compute-0 podman[372149]: 2026-01-26 16:29:43.3920697 +0000 UTC m=+0.069758698 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 16:29:43 compute-0 podman[372150]: 2026-01-26 16:29:43.415858963 +0000 UTC m=+0.087645257 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.667 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.749 239969 DEBUG nova.storage.rbd_utils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] resizing rbd image 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:29:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2620718557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.867 239969 DEBUG nova.objects.instance [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'migration_context' on Instance uuid 867d0872-e510-4b21-a4a4-ba5869bf7b41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.883 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.883 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Ensure instance console log exists: /var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.884 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.885 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.885 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:43 compute-0 nova_compute[239965]: 2026-01-26 16:29:43.968 239969 DEBUG nova.network.neutron [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Successfully created port: 3bc091c5-3a0b-4785-bcb7-e3f106899c4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:29:44 compute-0 nova_compute[239965]: 2026-01-26 16:29:44.014 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:44 compute-0 nova_compute[239965]: 2026-01-26 16:29:44.175 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 16:29:44 compute-0 ceph-mon[75140]: pgmap v2382: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Jan 26 16:29:45 compute-0 nova_compute[239965]: 2026-01-26 16:29:45.071 239969 DEBUG nova.network.neutron [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Successfully updated port: 3bc091c5-3a0b-4785-bcb7-e3f106899c4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:29:45 compute-0 nova_compute[239965]: 2026-01-26 16:29:45.091 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:29:45 compute-0 nova_compute[239965]: 2026-01-26 16:29:45.092 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquired lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:29:45 compute-0 nova_compute[239965]: 2026-01-26 16:29:45.092 239969 DEBUG nova.network.neutron [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:29:45 compute-0 nova_compute[239965]: 2026-01-26 16:29:45.163 239969 DEBUG nova.compute.manager [req-b21b000f-51d2-4535-a66c-8818ce65abfd req-85173be9-cf27-4252-b166-7e2b596ae56e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-changed-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:45 compute-0 nova_compute[239965]: 2026-01-26 16:29:45.164 239969 DEBUG nova.compute.manager [req-b21b000f-51d2-4535-a66c-8818ce65abfd req-85173be9-cf27-4252-b166-7e2b596ae56e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Refreshing instance network info cache due to event network-changed-3bc091c5-3a0b-4785-bcb7-e3f106899c4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:29:45 compute-0 nova_compute[239965]: 2026-01-26 16:29:45.164 239969 DEBUG oslo_concurrency.lockutils [req-b21b000f-51d2-4535-a66c-8818ce65abfd req-85173be9-cf27-4252-b166-7e2b596ae56e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:29:45 compute-0 nova_compute[239965]: 2026-01-26 16:29:45.247 239969 DEBUG nova.network.neutron [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.020 239969 DEBUG nova.network.neutron [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Updating instance_info_cache with network_info: [{"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.043 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Releasing lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.044 239969 DEBUG nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Instance network_info: |[{"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.045 239969 DEBUG oslo_concurrency.lockutils [req-b21b000f-51d2-4535-a66c-8818ce65abfd req-85173be9-cf27-4252-b166-7e2b596ae56e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.045 239969 DEBUG nova.network.neutron [req-b21b000f-51d2-4535-a66c-8818ce65abfd req-85173be9-cf27-4252-b166-7e2b596ae56e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Refreshing network info cache for port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.050 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Start _get_guest_xml network_info=[{"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.055 239969 WARNING nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.067 239969 DEBUG nova.virt.libvirt.host [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.068 239969 DEBUG nova.virt.libvirt.host [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.073 239969 DEBUG nova.virt.libvirt.host [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.073 239969 DEBUG nova.virt.libvirt.host [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.074 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.074 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.075 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.076 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.076 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.077 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.077 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.077 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.078 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.078 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.079 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.079 239969 DEBUG nova.virt.hardware [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.084 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:29:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473073960' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.687 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.717 239969 DEBUG nova.storage.rbd_utils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/473073960' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:29:46 compute-0 nova_compute[239965]: 2026-01-26 16:29:46.723 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 172 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Jan 26 16:29:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:29:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1900364829' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.288 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.291 239969 DEBUG nova.virt.libvirt.vif [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:29:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1668474041',display_name='tempest-TestNetworkAdvancedServerOps-server-1668474041',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1668474041',id=151,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPv22EdwGJNljFtI8+E9tV1J38GNPBUw3o1bTvVgqzjO9PC/VkU1AA5T7fBsCkq14J38duWTMLEM8jMA4T5BdKpjilyJ0NiQThFHa9xh05gGMp17MtvNd9WsWN6fEZCeJA==',key_name='tempest-TestNetworkAdvancedServerOps-1208787537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-hq09mo5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:29:43Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=867d0872-e510-4b21-a4a4-ba5869bf7b41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.292 239969 DEBUG nova.network.os_vif_util [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.294 239969 DEBUG nova.network.os_vif_util [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.296 239969 DEBUG nova.objects.instance [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 867d0872-e510-4b21-a4a4-ba5869bf7b41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.318 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <uuid>867d0872-e510-4b21-a4a4-ba5869bf7b41</uuid>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <name>instance-00000097</name>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <nova:name>tempest-TestNetworkAdvancedServerOps-server-1668474041</nova:name>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:29:46</nova:creationTime>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <nova:user uuid="59ae1c17a260470c91f50965ddd53a9e">tempest-TestNetworkAdvancedServerOps-842475489-project-member</nova:user>
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <nova:project uuid="2a7615c0db4e4f38aec30c7c723c3c3a">tempest-TestNetworkAdvancedServerOps-842475489</nova:project>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <nova:port uuid="3bc091c5-3a0b-4785-bcb7-e3f106899c4e">
Jan 26 16:29:47 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <system>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <entry name="serial">867d0872-e510-4b21-a4a4-ba5869bf7b41</entry>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <entry name="uuid">867d0872-e510-4b21-a4a4-ba5869bf7b41</entry>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     </system>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <os>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   </os>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <features>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   </features>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/867d0872-e510-4b21-a4a4-ba5869bf7b41_disk">
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/867d0872-e510-4b21-a4a4-ba5869bf7b41_disk.config">
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       </source>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:29:47 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:1e:04:cd"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <target dev="tap3bc091c5-3a"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41/console.log" append="off"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <video>
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     </video>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:29:47 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:29:47 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:29:47 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:29:47 compute-0 nova_compute[239965]: </domain>
Jan 26 16:29:47 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.320 239969 DEBUG nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Preparing to wait for external event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.321 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.322 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.323 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.324 239969 DEBUG nova.virt.libvirt.vif [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-26T16:29:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1668474041',display_name='tempest-TestNetworkAdvancedServerOps-server-1668474041',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1668474041',id=151,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPv22EdwGJNljFtI8+E9tV1J38GNPBUw3o1bTvVgqzjO9PC/VkU1AA5T7fBsCkq14J38duWTMLEM8jMA4T5BdKpjilyJ0NiQThFHa9xh05gGMp17MtvNd9WsWN6fEZCeJA==',key_name='tempest-TestNetworkAdvancedServerOps-1208787537',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-hq09mo5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:29:43Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=867d0872-e510-4b21-a4a4-ba5869bf7b41,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.325 239969 DEBUG nova.network.os_vif_util [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.326 239969 DEBUG nova.network.os_vif_util [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.327 239969 DEBUG os_vif [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.329 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.331 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.335 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.336 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bc091c5-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.337 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3bc091c5-3a, col_values=(('external_ids', {'iface-id': '3bc091c5-3a0b-4785-bcb7-e3f106899c4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:04:cd', 'vm-uuid': '867d0872-e510-4b21-a4a4-ba5869bf7b41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:47 compute-0 NetworkManager[48954]: <info>  [1769444987.3403] manager: (tap3bc091c5-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.344 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.350 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.351 239969 INFO os_vif [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a')
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.429 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.430 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.430 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] No VIF found with MAC fa:16:3e:1e:04:cd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.431 239969 INFO nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Using config drive
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.466 239969 DEBUG nova.storage.rbd_utils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.479 239969 DEBUG nova.network.neutron [req-b21b000f-51d2-4535-a66c-8818ce65abfd req-85173be9-cf27-4252-b166-7e2b596ae56e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Updated VIF entry in instance network info cache for port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.480 239969 DEBUG nova.network.neutron [req-b21b000f-51d2-4535-a66c-8818ce65abfd req-85173be9-cf27-4252-b166-7e2b596ae56e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Updating instance_info_cache with network_info: [{"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.503 239969 DEBUG oslo_concurrency.lockutils [req-b21b000f-51d2-4535-a66c-8818ce65abfd req-85173be9-cf27-4252-b166-7e2b596ae56e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:29:47 compute-0 ceph-mon[75140]: pgmap v2383: 305 pgs: 305 active+clean; 172 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Jan 26 16:29:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1900364829' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.801 239969 INFO nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Creating config drive at /var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41/disk.config
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.808 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprhphnpv6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:47 compute-0 nova_compute[239965]: 2026-01-26 16:29:47.970 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprhphnpv6" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.002 239969 DEBUG nova.storage.rbd_utils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] rbd image 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.006 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41/disk.config 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.164 239969 DEBUG oslo_concurrency.processutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41/disk.config 867d0872-e510-4b21-a4a4-ba5869bf7b41_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.165 239969 INFO nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Deleting local config drive /var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41/disk.config because it was imported into RBD.
Jan 26 16:29:48 compute-0 kernel: tap3bc091c5-3a: entered promiscuous mode
Jan 26 16:29:48 compute-0 NetworkManager[48954]: <info>  [1769444988.2374] manager: (tap3bc091c5-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/654)
Jan 26 16:29:48 compute-0 ovn_controller[146046]: 2026-01-26T16:29:48Z|01610|binding|INFO|Claiming lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e for this chassis.
Jan 26 16:29:48 compute-0 ovn_controller[146046]: 2026-01-26T16:29:48Z|01611|binding|INFO|3bc091c5-3a0b-4785-bcb7-e3f106899c4e: Claiming fa:16:3e:1e:04:cd 10.100.0.8
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.247 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:04:cd 10.100.0.8'], port_security=['fa:16:3e:1e:04:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '867d0872-e510-4b21-a4a4-ba5869bf7b41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ef4be394-553c-42cb-8e42-8f8b7eb94878', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45299a42-06fa-4567-8e89-46e6b687daf8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3bc091c5-3a0b-4785-bcb7-e3f106899c4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.249 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e in datapath 18216d50-69be-46b9-999b-ec8b1b0e72e8 bound to our chassis
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.251 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18216d50-69be-46b9-999b-ec8b1b0e72e8
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.269 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[741db7d0-b5a4-48fd-9a6a-7a9b1e651103]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.270 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18216d50-61 in ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:29:48 compute-0 ovn_controller[146046]: 2026-01-26T16:29:48Z|01612|binding|INFO|Setting lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e ovn-installed in OVS
Jan 26 16:29:48 compute-0 ovn_controller[146046]: 2026-01-26T16:29:48Z|01613|binding|INFO|Setting lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e up in Southbound
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.273 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18216d50-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.273 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3bebb4-bbe7-4148-9997-cb86eb08e70e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.274 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.275 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ea2543-7f2c-4f0a-b14b-5d2ae8945487]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.276 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:48 compute-0 systemd-machined[208061]: New machine qemu-183-instance-00000097.
Jan 26 16:29:48 compute-0 systemd-udevd[372444]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.295 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[2c543c4d-809a-4d51-9367-4ba18801f44f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 systemd[1]: Started Virtual Machine qemu-183-instance-00000097.
Jan 26 16:29:48 compute-0 NetworkManager[48954]: <info>  [1769444988.3074] device (tap3bc091c5-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:29:48 compute-0 NetworkManager[48954]: <info>  [1769444988.3086] device (tap3bc091c5-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.317 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2aab3ed8-8ed4-4297-9a25-7bcae9fad98b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.359 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[398e94e8-f89b-486c-a227-f5a3b0682c63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 systemd-udevd[372447]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:29:48 compute-0 NetworkManager[48954]: <info>  [1769444988.3693] manager: (tap18216d50-60): new Veth device (/org/freedesktop/NetworkManager/Devices/655)
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.368 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fe251a98-a1e7-4bcc-83cd-70b8a6c48519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.406 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2d7e5db3-12e1-4acd-bac6-38c0df9ddc6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.409 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[db9033d7-e900-4437-acb0-0e6b8cf19629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 NetworkManager[48954]: <info>  [1769444988.4377] device (tap18216d50-60): carrier: link connected
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.445 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2f0fd8a5-884f-401e-beb9-3de9ad485741]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.470 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[8eab8b5c-28d6-43cb-a133-26ed753335eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18216d50-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:17:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647924, 'reachable_time': 28968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372475, 'error': None, 'target': 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.491 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e8134ecd-f3e3-4e85-9c3c-746801172136]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:1715'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 647924, 'tstamp': 647924}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 372476, 'error': None, 'target': 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.512 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[283fb2a4-334b-4796-a8a8-131497ced2bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18216d50-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:17:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647924, 'reachable_time': 28968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 372477, 'error': None, 'target': 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.546 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[37210685-a49c-4566-81c5-a7616afed097]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.601 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[556ea481-899e-4691-94ab-d2b73c3ecbba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.603 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18216d50-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.603 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.604 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18216d50-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.605 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:48 compute-0 kernel: tap18216d50-60: entered promiscuous mode
Jan 26 16:29:48 compute-0 NetworkManager[48954]: <info>  [1769444988.6079] manager: (tap18216d50-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.608 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.613 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18216d50-60, col_values=(('external_ids', {'iface-id': 'c5adae48-670f-463e-b10e-3bf726c12f7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:48 compute-0 ovn_controller[146046]: 2026-01-26T16:29:48Z|01614|binding|INFO|Releasing lport c5adae48-670f-463e-b10e-3bf726c12f7a from this chassis (sb_readonly=0)
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.615 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.618 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18216d50-69be-46b9-999b-ec8b1b0e72e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18216d50-69be-46b9-999b-ec8b1b0e72e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.619 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5cb101aa-87da-47db-8dbd-7bf550b55f56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.620 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-18216d50-69be-46b9-999b-ec8b1b0e72e8
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/18216d50-69be-46b9-999b-ec8b1b0e72e8.pid.haproxy
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 18216d50-69be-46b9-999b-ec8b1b0e72e8
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:29:48 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:48.621 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'env', 'PROCESS_TAG=haproxy-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18216d50-69be-46b9-999b-ec8b1b0e72e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.630 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.781 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444988.7808123, 867d0872-e510-4b21-a4a4-ba5869bf7b41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.781 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] VM Started (Lifecycle Event)
Jan 26 16:29:48 compute-0 ceph-mon[75140]: pgmap v2384: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.929 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.934 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444988.780952, 867d0872-e510-4b21-a4a4-ba5869bf7b41 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.934 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] VM Paused (Lifecycle Event)
Jan 26 16:29:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:29:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3206871184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:29:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:29:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3206871184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.957 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.961 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:29:48 compute-0 nova_compute[239965]: 2026-01-26 16:29:48.982 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:29:49 compute-0 podman[372551]: 2026-01-26 16:29:49.033802308 +0000 UTC m=+0.054897896 container create cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 16:29:49 compute-0 systemd[1]: Started libpod-conmon-cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84.scope.
Jan 26 16:29:49 compute-0 podman[372551]: 2026-01-26 16:29:49.003775462 +0000 UTC m=+0.024871090 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:29:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:29:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c08905469d869d1417df0e276f1142e2ebeae8f4446a8d232f97b88e38b254b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:29:49 compute-0 podman[372551]: 2026-01-26 16:29:49.132483524 +0000 UTC m=+0.153579142 container init cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:29:49 compute-0 podman[372551]: 2026-01-26 16:29:49.140250884 +0000 UTC m=+0.161346462 container start cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.156 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:49 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[372566]: [NOTICE]   (372570) : New worker (372572) forked
Jan 26 16:29:49 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[372566]: [NOTICE]   (372570) : Loading success.
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011223037548578135 of space, bias 1.0, pg target 0.33669112645734406 quantized to 32 (current 32)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010183131531004159 of space, bias 1.0, pg target 0.30549394593012474 quantized to 32 (current 32)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.865343482744153e-07 of space, bias 4.0, pg target 0.0008238412179292983 quantized to 16 (current 16)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:29:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:29:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:49.336 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:29:49 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:49.338 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.341 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.364 239969 DEBUG oslo_concurrency.lockutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.365 239969 DEBUG oslo_concurrency.lockutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.365 239969 INFO nova.compute.manager [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Shelving
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.392 239969 DEBUG nova.virt.libvirt.driver [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.413 239969 DEBUG nova.compute.manager [req-6fb95088-795c-4965-8ea5-6f5fd3fb5d75 req-ae9a0122-a672-4783-8829-fc9fe6909947 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.413 239969 DEBUG oslo_concurrency.lockutils [req-6fb95088-795c-4965-8ea5-6f5fd3fb5d75 req-ae9a0122-a672-4783-8829-fc9fe6909947 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.413 239969 DEBUG oslo_concurrency.lockutils [req-6fb95088-795c-4965-8ea5-6f5fd3fb5d75 req-ae9a0122-a672-4783-8829-fc9fe6909947 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.413 239969 DEBUG oslo_concurrency.lockutils [req-6fb95088-795c-4965-8ea5-6f5fd3fb5d75 req-ae9a0122-a672-4783-8829-fc9fe6909947 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.414 239969 DEBUG nova.compute.manager [req-6fb95088-795c-4965-8ea5-6f5fd3fb5d75 req-ae9a0122-a672-4783-8829-fc9fe6909947 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Processing event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.414 239969 DEBUG nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.419 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769444989.4190536, 867d0872-e510-4b21-a4a4-ba5869bf7b41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.420 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] VM Resumed (Lifecycle Event)
Jan 26 16:29:49 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.424 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.428 239969 INFO nova.virt.libvirt.driver [-] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Instance spawned successfully.
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.428 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.443 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.450 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.453 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.454 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.454 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.455 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.455 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.456 239969 DEBUG nova.virt.libvirt.driver [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.482 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.516 239969 INFO nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Took 6.41 seconds to spawn the instance on the hypervisor.
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.517 239969 DEBUG nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.585 239969 INFO nova.compute.manager [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Took 7.43 seconds to build instance.
Jan 26 16:29:49 compute-0 nova_compute[239965]: 2026-01-26 16:29:49.599 239969 DEBUG oslo_concurrency.lockutils [None req-347b4920-d51f-47ae-81d1-57a95e375f74 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3206871184' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:29:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3206871184' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:29:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Jan 26 16:29:50 compute-0 ceph-mon[75140]: pgmap v2385: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.474 239969 DEBUG nova.compute.manager [req-eb7b3961-2ddb-4fea-bb6e-ba7dee5b429a req-964f6a7e-55a2-412a-9ecf-45294242ae1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.474 239969 DEBUG oslo_concurrency.lockutils [req-eb7b3961-2ddb-4fea-bb6e-ba7dee5b429a req-964f6a7e-55a2-412a-9ecf-45294242ae1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.475 239969 DEBUG oslo_concurrency.lockutils [req-eb7b3961-2ddb-4fea-bb6e-ba7dee5b429a req-964f6a7e-55a2-412a-9ecf-45294242ae1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.475 239969 DEBUG oslo_concurrency.lockutils [req-eb7b3961-2ddb-4fea-bb6e-ba7dee5b429a req-964f6a7e-55a2-412a-9ecf-45294242ae1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.475 239969 DEBUG nova.compute.manager [req-eb7b3961-2ddb-4fea-bb6e-ba7dee5b429a req-964f6a7e-55a2-412a-9ecf-45294242ae1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] No waiting events found dispatching network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.475 239969 WARNING nova.compute.manager [req-eb7b3961-2ddb-4fea-bb6e-ba7dee5b429a req-964f6a7e-55a2-412a-9ecf-45294242ae1e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received unexpected event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e for instance with vm_state active and task_state None.
Jan 26 16:29:51 compute-0 kernel: tap71007552-7b (unregistering): left promiscuous mode
Jan 26 16:29:51 compute-0 NetworkManager[48954]: <info>  [1769444991.6492] device (tap71007552-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:29:51 compute-0 ovn_controller[146046]: 2026-01-26T16:29:51Z|01615|binding|INFO|Releasing lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f from this chassis (sb_readonly=0)
Jan 26 16:29:51 compute-0 ovn_controller[146046]: 2026-01-26T16:29:51Z|01616|binding|INFO|Setting lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f down in Southbound
Jan 26 16:29:51 compute-0 ovn_controller[146046]: 2026-01-26T16:29:51Z|01617|binding|INFO|Removing iface tap71007552-7b ovn-installed in OVS
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.661 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.663 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.669 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:01:8a 10.100.0.6'], port_security=['fa:16:3e:45:01:8a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '23444729-2c63-4c8b-a28c-04334c5b5949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67c9291388b24d24a9689e4acf626bf2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fec1bead-ca1f-4cab-8561-fae558eddc5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75ede299-1df1-4eb6-926a-ac3d97013a63, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=71007552-7ba9-4fe5-8e36-ee9ce30da37f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.671 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 71007552-7ba9-4fe5-8e36-ee9ce30da37f in datapath 68925d6a-01d0-4cb9-94a6-da3e5f3fac44 unbound from our chassis
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.672 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 68925d6a-01d0-4cb9-94a6-da3e5f3fac44, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.673 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[61ede752-bb84-4871-8223-a88dece8646f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.674 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 namespace which is not needed anymore
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.687 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:51 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 26 16:29:51 compute-0 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000096.scope: Consumed 14.785s CPU time.
Jan 26 16:29:51 compute-0 systemd-machined[208061]: Machine qemu-182-instance-00000096 terminated.
Jan 26 16:29:51 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[372013]: [NOTICE]   (372017) : haproxy version is 2.8.14-c23fe91
Jan 26 16:29:51 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[372013]: [NOTICE]   (372017) : path to executable is /usr/sbin/haproxy
Jan 26 16:29:51 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[372013]: [WARNING]  (372017) : Exiting Master process...
Jan 26 16:29:51 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[372013]: [ALERT]    (372017) : Current worker (372019) exited with code 143 (Terminated)
Jan 26 16:29:51 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[372013]: [WARNING]  (372017) : All workers exited. Exiting... (0)
Jan 26 16:29:51 compute-0 systemd[1]: libpod-5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37.scope: Deactivated successfully.
Jan 26 16:29:51 compute-0 conmon[372013]: conmon 5a0b08210c9427d65a72 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37.scope/container/memory.events
Jan 26 16:29:51 compute-0 podman[372605]: 2026-01-26 16:29:51.796660009 +0000 UTC m=+0.040353389 container died 5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:29:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37-userdata-shm.mount: Deactivated successfully.
Jan 26 16:29:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea9a076cbb73632203ca4524e00b697d395f45742ed6b63cbcb532d953304f3b-merged.mount: Deactivated successfully.
Jan 26 16:29:51 compute-0 podman[372605]: 2026-01-26 16:29:51.856047464 +0000 UTC m=+0.099740844 container cleanup 5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 26 16:29:51 compute-0 systemd[1]: libpod-conmon-5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37.scope: Deactivated successfully.
Jan 26 16:29:51 compute-0 podman[372633]: 2026-01-26 16:29:51.917762956 +0000 UTC m=+0.043942358 container remove 5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.923 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f553df6b-9876-4b56-8001-e4028e50ac87]: (4, ('Mon Jan 26 04:29:51 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 (5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37)\n5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37\nMon Jan 26 04:29:51 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 (5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37)\n5a0b08210c9427d65a7276d5dd5ddc39bf7afd7a96770fa483d9f3e6a7244c37\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.924 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[41118af1-f48f-4dd1-8adf-ed23b218c6eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.925 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68925d6a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:51 compute-0 kernel: tap68925d6a-00: left promiscuous mode
Jan 26 16:29:51 compute-0 nova_compute[239965]: 2026-01-26 16:29:51.946 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.949 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9fc101-22db-46b7-98f7-4aa8a45a2fe7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.962 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1b0e08b3-0ee5-438d-9319-797c647a96e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.963 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9b2711-e6f3-4973-b0a0-615eb9527f7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.977 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[624f87fe-0336-4380-a4b3-77a19a67a3e1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 645651, 'reachable_time': 40296, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 372661, 'error': None, 'target': 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:51 compute-0 systemd[1]: run-netns-ovnmeta\x2d68925d6a\x2d01d0\x2d4cb9\x2d94a6\x2dda3e5f3fac44.mount: Deactivated successfully.
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.983 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:29:51 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:51.983 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[db82ad41-692e-4c1f-b75f-48fef560a76c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:29:52 compute-0 nova_compute[239965]: 2026-01-26 16:29:52.339 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:52 compute-0 nova_compute[239965]: 2026-01-26 16:29:52.410 239969 INFO nova.virt.libvirt.driver [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance shutdown successfully after 3 seconds.
Jan 26 16:29:52 compute-0 nova_compute[239965]: 2026-01-26 16:29:52.416 239969 INFO nova.virt.libvirt.driver [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance destroyed successfully.
Jan 26 16:29:52 compute-0 nova_compute[239965]: 2026-01-26 16:29:52.416 239969 DEBUG nova.objects.instance [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:29:52 compute-0 nova_compute[239965]: 2026-01-26 16:29:52.681 239969 INFO nova.virt.libvirt.driver [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Beginning cold snapshot process
Jan 26 16:29:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 167 op/s
Jan 26 16:29:52 compute-0 ceph-mon[75140]: pgmap v2386: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.0 MiB/s wr, 167 op/s
Jan 26 16:29:52 compute-0 nova_compute[239965]: 2026-01-26 16:29:52.853 239969 DEBUG nova.virt.libvirt.imagebackend [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] No parent info for e0817942-948b-4945-aa42-c1cb3a1c65ba; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.062 239969 DEBUG nova.storage.rbd_utils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] creating snapshot(3ca80fe921184618976274c319f40c69) on rbd image(23444729-2c63-4c8b-a28c-04334c5b5949_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:29:53 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:53.340 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.555 239969 DEBUG nova.compute.manager [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-vif-unplugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.556 239969 DEBUG oslo_concurrency.lockutils [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.557 239969 DEBUG oslo_concurrency.lockutils [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.557 239969 DEBUG oslo_concurrency.lockutils [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.558 239969 DEBUG nova.compute.manager [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] No waiting events found dispatching network-vif-unplugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.558 239969 WARNING nova.compute.manager [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received unexpected event network-vif-unplugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f for instance with vm_state active and task_state shelving_image_uploading.
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.559 239969 DEBUG nova.compute.manager [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.559 239969 DEBUG oslo_concurrency.lockutils [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.559 239969 DEBUG oslo_concurrency.lockutils [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.560 239969 DEBUG oslo_concurrency.lockutils [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.560 239969 DEBUG nova.compute.manager [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] No waiting events found dispatching network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.561 239969 WARNING nova.compute.manager [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received unexpected event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f for instance with vm_state active and task_state shelving_image_uploading.
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.561 239969 DEBUG nova.compute.manager [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-changed-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.562 239969 DEBUG nova.compute.manager [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Refreshing instance network info cache due to event network-changed-3bc091c5-3a0b-4785-bcb7-e3f106899c4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.562 239969 DEBUG oslo_concurrency.lockutils [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.562 239969 DEBUG oslo_concurrency.lockutils [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:29:53 compute-0 nova_compute[239965]: 2026-01-26 16:29:53.563 239969 DEBUG nova.network.neutron [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Refreshing network info cache for port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:29:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Jan 26 16:29:54 compute-0 nova_compute[239965]: 2026-01-26 16:29:54.019 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:54 compute-0 nova_compute[239965]: 2026-01-26 16:29:54.391 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Jan 26 16:29:54 compute-0 nova_compute[239965]: 2026-01-26 16:29:54.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:54 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Jan 26 16:29:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 126 op/s
Jan 26 16:29:55 compute-0 nova_compute[239965]: 2026-01-26 16:29:55.354 239969 DEBUG nova.storage.rbd_utils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] cloning vms/23444729-2c63-4c8b-a28c-04334c5b5949_disk@3ca80fe921184618976274c319f40c69 to images/22289f10-7e92-46f4-a6e5-0047b19a5119 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:29:55 compute-0 nova_compute[239965]: 2026-01-26 16:29:55.407 239969 DEBUG nova.network.neutron [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Updated VIF entry in instance network info cache for port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:29:55 compute-0 nova_compute[239965]: 2026-01-26 16:29:55.408 239969 DEBUG nova.network.neutron [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Updating instance_info_cache with network_info: [{"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:29:55 compute-0 nova_compute[239965]: 2026-01-26 16:29:55.478 239969 DEBUG oslo_concurrency.lockutils [req-fa240a93-2bb7-493c-b904-df6f4102df3a req-4e780a4a-c0af-48b9-ace0-76c8ff4c3d59 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:29:55 compute-0 ceph-mon[75140]: osdmap e315: 3 total, 3 up, 3 in
Jan 26 16:29:55 compute-0 ceph-mon[75140]: pgmap v2388: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 126 op/s
Jan 26 16:29:55 compute-0 nova_compute[239965]: 2026-01-26 16:29:55.695 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:29:56 compute-0 nova_compute[239965]: 2026-01-26 16:29:56.173 239969 DEBUG nova.storage.rbd_utils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] flattening images/22289f10-7e92-46f4-a6e5-0047b19a5119 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:29:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 121 op/s
Jan 26 16:29:57 compute-0 nova_compute[239965]: 2026-01-26 16:29:57.341 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:57 compute-0 ceph-mon[75140]: pgmap v2389: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.9 MiB/s wr, 121 op/s
Jan 26 16:29:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 48 KiB/s wr, 103 op/s
Jan 26 16:29:59 compute-0 ceph-mon[75140]: pgmap v2390: 305 pgs: 305 active+clean; 213 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 48 KiB/s wr, 103 op/s
Jan 26 16:29:59 compute-0 nova_compute[239965]: 2026-01-26 16:29:59.021 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:29:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:59.252 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:29:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:59.253 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:29:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:29:59.254 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:29:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:00 compute-0 nova_compute[239965]: 2026-01-26 16:30:00.293 239969 DEBUG nova.storage.rbd_utils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] removing snapshot(3ca80fe921184618976274c319f40c69) on rbd image(23444729-2c63-4c8b-a28c-04334c5b5949_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Jan 26 16:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:30:00 compute-0 nova_compute[239965]: 2026-01-26 16:30:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:00 compute-0 nova_compute[239965]: 2026-01-26 16:30:00.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:30:00 compute-0 nova_compute[239965]: 2026-01-26 16:30:00.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:30:00 compute-0 nova_compute[239965]: 2026-01-26 16:30:00.611 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:30:00 compute-0 nova_compute[239965]: 2026-01-26 16:30:00.613 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquired lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:30:00 compute-0 nova_compute[239965]: 2026-01-26 16:30:00.613 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 26 16:30:00 compute-0 nova_compute[239965]: 2026-01-26 16:30:00.613 239969 DEBUG nova.objects.instance [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 232 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.3 MiB/s wr, 124 op/s
Jan 26 16:30:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Jan 26 16:30:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Jan 26 16:30:00 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Jan 26 16:30:01 compute-0 ceph-mon[75140]: pgmap v2391: 305 pgs: 305 active+clean; 232 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 1.3 MiB/s wr, 124 op/s
Jan 26 16:30:01 compute-0 nova_compute[239965]: 2026-01-26 16:30:01.229 239969 DEBUG nova.storage.rbd_utils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] creating snapshot(snap) on rbd image(22289f10-7e92-46f4-a6e5-0047b19a5119) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Jan 26 16:30:01 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Jan 26 16:30:02 compute-0 nova_compute[239965]: 2026-01-26 16:30:02.343 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:02 compute-0 nova_compute[239965]: 2026-01-26 16:30:02.694 239969 DEBUG nova.network.neutron [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:02 compute-0 nova_compute[239965]: 2026-01-26 16:30:02.744 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Releasing lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:30:02 compute-0 nova_compute[239965]: 2026-01-26 16:30:02.745 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 26 16:30:02 compute-0 nova_compute[239965]: 2026-01-26 16:30:02.746 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 269 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.4 MiB/s wr, 97 op/s
Jan 26 16:30:02 compute-0 ceph-mon[75140]: osdmap e316: 3 total, 3 up, 3 in
Jan 26 16:30:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Jan 26 16:30:03 compute-0 ceph-mon[75140]: pgmap v2393: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 269 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 4.4 MiB/s wr, 97 op/s
Jan 26 16:30:04 compute-0 nova_compute[239965]: 2026-01-26 16:30:04.023 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Jan 26 16:30:04 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Jan 26 16:30:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:04 compute-0 nova_compute[239965]: 2026-01-26 16:30:04.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:04 compute-0 nova_compute[239965]: 2026-01-26 16:30:04.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:04 compute-0 nova_compute[239965]: 2026-01-26 16:30:04.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:04 compute-0 nova_compute[239965]: 2026-01-26 16:30:04.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:04 compute-0 nova_compute[239965]: 2026-01-26 16:30:04.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:30:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 299 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.2 MiB/s wr, 120 op/s
Jan 26 16:30:05 compute-0 ceph-mon[75140]: osdmap e317: 3 total, 3 up, 3 in
Jan 26 16:30:05 compute-0 ceph-mon[75140]: pgmap v2395: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 299 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 7.2 MiB/s wr, 120 op/s
Jan 26 16:30:05 compute-0 ovn_controller[146046]: 2026-01-26T16:30:05Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:04:cd 10.100.0.8
Jan 26 16:30:05 compute-0 ovn_controller[146046]: 2026-01-26T16:30:05Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:04:cd 10.100.0.8
Jan 26 16:30:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:30:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Cumulative writes: 10K writes, 50K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1486 writes, 6656 keys, 1486 commit groups, 1.0 writes per commit group, ingest: 9.33 MB, 0.02 MB/s
                                           Interval WAL: 1486 writes, 1486 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    102.2      0.59              0.19        34    0.017       0      0       0.0       0.0
                                             L6      1/0    8.19 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.7    130.6    110.3      2.57              0.76        33    0.078    199K    18K       0.0       0.0
                                            Sum      1/0    8.19 MB   0.0      0.3     0.1      0.3       0.3      0.1       0.0   5.7    106.1    108.7      3.15              0.95        67    0.047    199K    18K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.5     95.4     96.4      0.56              0.17        10    0.056     37K   2535       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    130.6    110.3      2.57              0.76        33    0.078    199K    18K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    102.9      0.58              0.19        33    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.059, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.34 GB write, 0.08 MB/s write, 0.33 GB read, 0.08 MB/s read, 3.2 seconds
                                           Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 38.06 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000331 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2395,36.56 MB,12.025%) FilterBlock(68,564.73 KB,0.181414%) IndexBlock(68,973.59 KB,0.312755%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 16:30:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 317 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 8.6 MiB/s wr, 163 op/s
Jan 26 16:30:06 compute-0 ceph-mon[75140]: pgmap v2396: 305 pgs: 305 active+clean; 317 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 8.6 MiB/s wr, 163 op/s
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.878 239969 INFO nova.virt.libvirt.driver [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Snapshot image upload complete
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.879 239969 DEBUG nova.compute.manager [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.891 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769444991.8899863, 23444729-2c63-4c8b-a28c-04334c5b5949 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.891 239969 INFO nova.compute.manager [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] VM Stopped (Lifecycle Event)
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.909 239969 DEBUG nova.compute.manager [None req-274277da-7253-4915-aebc-f1d3df01f267 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.916 239969 DEBUG nova.compute.manager [None req-274277da-7253-4915-aebc-f1d3df01f267 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Synchronizing instance power state after lifecycle event "Stopped"; current vm_state: active, current task_state: shelving_image_uploading, current DB power_state: 1, VM power_state: 4 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.937 239969 INFO nova.compute.manager [None req-274277da-7253-4915-aebc-f1d3df01f267 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] During sync_power_state the instance has a pending task (shelving_image_uploading). Skip.
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.942 239969 INFO nova.compute.manager [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Shelve offloading
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.953 239969 INFO nova.virt.libvirt.driver [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance destroyed successfully.
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.954 239969 DEBUG nova.compute.manager [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.957 239969 DEBUG oslo_concurrency.lockutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.957 239969 DEBUG oslo_concurrency.lockutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquired lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:30:06 compute-0 nova_compute[239965]: 2026-01-26 16:30:06.958 239969 DEBUG nova.network.neutron [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:30:07 compute-0 nova_compute[239965]: 2026-01-26 16:30:07.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:08 compute-0 nova_compute[239965]: 2026-01-26 16:30:08.007 239969 DEBUG nova.network.neutron [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:08 compute-0 nova_compute[239965]: 2026-01-26 16:30:08.026 239969 DEBUG oslo_concurrency.lockutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Releasing lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:30:08 compute-0 nova_compute[239965]: 2026-01-26 16:30:08.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:08 compute-0 nova_compute[239965]: 2026-01-26 16:30:08.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:08 compute-0 nova_compute[239965]: 2026-01-26 16:30:08.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:08 compute-0 nova_compute[239965]: 2026-01-26 16:30:08.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:08 compute-0 nova_compute[239965]: 2026-01-26 16:30:08.538 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:30:08 compute-0 nova_compute[239965]: 2026-01-26 16:30:08.538 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 7.3 MiB/s wr, 150 op/s
Jan 26 16:30:08 compute-0 ceph-mon[75140]: pgmap v2397: 305 pgs: 305 active+clean; 320 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 7.3 MiB/s wr, 150 op/s
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.026 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:30:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1826178579' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.139 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.230 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.231 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000096 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.237 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.238 239969 DEBUG nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Jan 26 16:30:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e317 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Jan 26 16:30:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Jan 26 16:30:09 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.441 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.442 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3198MB free_disk=59.897082667797804GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.443 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.443 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.638 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 23444729-2c63-4c8b-a28c-04334c5b5949 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.639 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance 867d0872-e510-4b21-a4a4-ba5869bf7b41 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.640 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.640 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:30:09 compute-0 nova_compute[239965]: 2026-01-26 16:30:09.790 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1826178579' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:09 compute-0 ceph-mon[75140]: osdmap e318: 3 total, 3 up, 3 in
Jan 26 16:30:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:30:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405961068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:10 compute-0 nova_compute[239965]: 2026-01-26 16:30:10.395 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.605s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:10 compute-0 nova_compute[239965]: 2026-01-26 16:30:10.401 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:30:10 compute-0 nova_compute[239965]: 2026-01-26 16:30:10.416 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:30:10 compute-0 nova_compute[239965]: 2026-01-26 16:30:10.436 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:30:10 compute-0 nova_compute[239965]: 2026-01-26 16:30:10.437 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 321 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 436 KiB/s rd, 4.6 MiB/s wr, 110 op/s
Jan 26 16:30:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/405961068' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:10 compute-0 ceph-mon[75140]: pgmap v2399: 305 pgs: 305 active+clean; 321 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 436 KiB/s rd, 4.6 MiB/s wr, 110 op/s
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.029 239969 INFO nova.virt.libvirt.driver [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance destroyed successfully.
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.030 239969 DEBUG nova.objects.instance [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'resources' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.047 239969 DEBUG nova.virt.libvirt.vif [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:29:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-139030865',display_name='tempest-TestShelveInstance-server-139030865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-139030865',id=150,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR71t6y+HJ4qAoeBl0pCSSZsSHWE/LQDqzbI32DfEJP6X/ptIcGuyNSJStvWenZDTRQYi8gTT8yZ+rcYMA60xr8rqSoQ28CSiFxxFlkuGKRbU71KSafynyknZ+52zot4Q==',key_name='tempest-TestShelveInstance-779455889',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:29:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='67c9291388b24d24a9689e4acf626bf2',ramdisk_id='',reservation_id='r-9na32hh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1058510424',owner_user_name='tempest-TestShelveInstance-1058510424-project-member',shelved_at='2026-01-26T16:30:06.879387',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='22289f10-7e92-46f4-a6e5-0047b19a5119'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:29:52Z,user_data=None,user_id='96f070e8d2fa469da6ec2db452db6f86',uuid=23444729-2c63-4c8b-a28c-04334c5b5949,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.048 239969 DEBUG nova.network.os_vif_util [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converting VIF {"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.050 239969 DEBUG nova.network.os_vif_util [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.051 239969 DEBUG os_vif [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.055 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.056 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71007552-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.061 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.065 239969 INFO os_vif [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b')
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.105 239969 DEBUG nova.compute.manager [req-1c56cc32-fc02-4a1b-915c-bf1831e8c044 req-3610aee2-fb38-448d-aaf7-ff9c3a94f993 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.106 239969 DEBUG nova.compute.manager [req-1c56cc32-fc02-4a1b-915c-bf1831e8c044 req-3610aee2-fb38-448d-aaf7-ff9c3a94f993 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing instance network info cache due to event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.107 239969 DEBUG oslo_concurrency.lockutils [req-1c56cc32-fc02-4a1b-915c-bf1831e8c044 req-3610aee2-fb38-448d-aaf7-ff9c3a94f993 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.107 239969 DEBUG oslo_concurrency.lockutils [req-1c56cc32-fc02-4a1b-915c-bf1831e8c044 req-3610aee2-fb38-448d-aaf7-ff9c3a94f993 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.108 239969 DEBUG nova.network.neutron [req-1c56cc32-fc02-4a1b-915c-bf1831e8c044 req-3610aee2-fb38-448d-aaf7-ff9c3a94f993 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.174 239969 INFO nova.compute.manager [None req-1e48617a-5f90-404e-9a5d-3581898690f3 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Get console output
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.182 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.473 239969 DEBUG nova.objects.instance [None req-a74ab4e4-7409-4c05-a93b-39c2181353bc 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 867d0872-e510-4b21-a4a4-ba5869bf7b41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.502 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445011.5022159, 867d0872-e510-4b21-a4a4-ba5869bf7b41 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.503 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] VM Paused (Lifecycle Event)
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.521 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.525 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:30:11 compute-0 nova_compute[239965]: 2026-01-26 16:30:11.543 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] During sync_power_state the instance has a pending task (suspending). Skip.
Jan 26 16:30:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 4.2 MiB/s wr, 115 op/s
Jan 26 16:30:12 compute-0 kernel: tap3bc091c5-3a (unregistering): left promiscuous mode
Jan 26 16:30:12 compute-0 NetworkManager[48954]: <info>  [1769445012.8033] device (tap3bc091c5-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:30:12 compute-0 ovn_controller[146046]: 2026-01-26T16:30:12Z|01618|binding|INFO|Releasing lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e from this chassis (sb_readonly=0)
Jan 26 16:30:12 compute-0 ovn_controller[146046]: 2026-01-26T16:30:12Z|01619|binding|INFO|Setting lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e down in Southbound
Jan 26 16:30:12 compute-0 ovn_controller[146046]: 2026-01-26T16:30:12Z|01620|binding|INFO|Removing iface tap3bc091c5-3a ovn-installed in OVS
Jan 26 16:30:12 compute-0 nova_compute[239965]: 2026-01-26 16:30:12.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:12 compute-0 nova_compute[239965]: 2026-01-26 16:30:12.815 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:12 compute-0 nova_compute[239965]: 2026-01-26 16:30:12.838 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:12 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Deactivated successfully.
Jan 26 16:30:12 compute-0 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000097.scope: Consumed 13.908s CPU time.
Jan 26 16:30:12 compute-0 systemd-machined[208061]: Machine qemu-183-instance-00000097 terminated.
Jan 26 16:30:12 compute-0 nova_compute[239965]: 2026-01-26 16:30:12.884 239969 DEBUG nova.compute.manager [None req-a74ab4e4-7409-4c05-a93b-39c2181353bc 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:12.900 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:04:cd 10.100.0.8'], port_security=['fa:16:3e:1e:04:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '867d0872-e510-4b21-a4a4-ba5869bf7b41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ef4be394-553c-42cb-8e42-8f8b7eb94878', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45299a42-06fa-4567-8e89-46e6b687daf8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3bc091c5-3a0b-4785-bcb7-e3f106899c4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:30:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:12.901 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e in datapath 18216d50-69be-46b9-999b-ec8b1b0e72e8 unbound from our chassis
Jan 26 16:30:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:12.902 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18216d50-69be-46b9-999b-ec8b1b0e72e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:30:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:12.903 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[73d80476-be7f-4040-b608-35fca9b570cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:12 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:12.903 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 namespace which is not needed anymore
Jan 26 16:30:13 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[372566]: [NOTICE]   (372570) : haproxy version is 2.8.14-c23fe91
Jan 26 16:30:13 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[372566]: [NOTICE]   (372570) : path to executable is /usr/sbin/haproxy
Jan 26 16:30:13 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[372566]: [WARNING]  (372570) : Exiting Master process...
Jan 26 16:30:13 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[372566]: [WARNING]  (372570) : Exiting Master process...
Jan 26 16:30:13 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[372566]: [ALERT]    (372570) : Current worker (372572) exited with code 143 (Terminated)
Jan 26 16:30:13 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[372566]: [WARNING]  (372570) : All workers exited. Exiting... (0)
Jan 26 16:30:13 compute-0 systemd[1]: libpod-cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84.scope: Deactivated successfully.
Jan 26 16:30:13 compute-0 podman[372913]: 2026-01-26 16:30:13.065593063 +0000 UTC m=+0.079233342 container died cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 16:30:13 compute-0 ceph-mon[75140]: pgmap v2400: 305 pgs: 305 active+clean; 325 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 423 KiB/s rd, 4.2 MiB/s wr, 115 op/s
Jan 26 16:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84-userdata-shm.mount: Deactivated successfully.
Jan 26 16:30:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-1c08905469d869d1417df0e276f1142e2ebeae8f4446a8d232f97b88e38b254b-merged.mount: Deactivated successfully.
Jan 26 16:30:13 compute-0 nova_compute[239965]: 2026-01-26 16:30:13.507 239969 DEBUG nova.network.neutron [req-1c56cc32-fc02-4a1b-915c-bf1831e8c044 req-3610aee2-fb38-448d-aaf7-ff9c3a94f993 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updated VIF entry in instance network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:30:13 compute-0 nova_compute[239965]: 2026-01-26 16:30:13.507 239969 DEBUG nova.network.neutron [req-1c56cc32-fc02-4a1b-915c-bf1831e8c044 req-3610aee2-fb38-448d-aaf7-ff9c3a94f993 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": null, "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap71007552-7b", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:13 compute-0 nova_compute[239965]: 2026-01-26 16:30:13.533 239969 DEBUG oslo_concurrency.lockutils [req-1c56cc32-fc02-4a1b-915c-bf1831e8c044 req-3610aee2-fb38-448d-aaf7-ff9c3a94f993 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:30:13 compute-0 nova_compute[239965]: 2026-01-26 16:30:13.542 239969 DEBUG nova.compute.manager [req-2bfe3a44-a545-442c-bbd6-b339753c37c2 req-0e4d1ac0-92e7-4599-84c3-78cdfaf176de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-unplugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:13 compute-0 nova_compute[239965]: 2026-01-26 16:30:13.543 239969 DEBUG oslo_concurrency.lockutils [req-2bfe3a44-a545-442c-bbd6-b339753c37c2 req-0e4d1ac0-92e7-4599-84c3-78cdfaf176de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:13 compute-0 nova_compute[239965]: 2026-01-26 16:30:13.543 239969 DEBUG oslo_concurrency.lockutils [req-2bfe3a44-a545-442c-bbd6-b339753c37c2 req-0e4d1ac0-92e7-4599-84c3-78cdfaf176de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:13 compute-0 nova_compute[239965]: 2026-01-26 16:30:13.544 239969 DEBUG oslo_concurrency.lockutils [req-2bfe3a44-a545-442c-bbd6-b339753c37c2 req-0e4d1ac0-92e7-4599-84c3-78cdfaf176de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:13 compute-0 nova_compute[239965]: 2026-01-26 16:30:13.544 239969 DEBUG nova.compute.manager [req-2bfe3a44-a545-442c-bbd6-b339753c37c2 req-0e4d1ac0-92e7-4599-84c3-78cdfaf176de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] No waiting events found dispatching network-vif-unplugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:30:13 compute-0 nova_compute[239965]: 2026-01-26 16:30:13.544 239969 WARNING nova.compute.manager [req-2bfe3a44-a545-442c-bbd6-b339753c37c2 req-0e4d1ac0-92e7-4599-84c3-78cdfaf176de a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received unexpected event network-vif-unplugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e for instance with vm_state suspended and task_state None.
Jan 26 16:30:13 compute-0 podman[372942]: 2026-01-26 16:30:13.546413298 +0000 UTC m=+0.072671501 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 16:30:13 compute-0 podman[372943]: 2026-01-26 16:30:13.57505658 +0000 UTC m=+0.104878870 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Jan 26 16:30:13 compute-0 podman[372913]: 2026-01-26 16:30:13.748067277 +0000 UTC m=+0.761707546 container cleanup cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:30:13 compute-0 systemd[1]: libpod-conmon-cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84.scope: Deactivated successfully.
Jan 26 16:30:14 compute-0 nova_compute[239965]: 2026-01-26 16:30:14.029 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:14 compute-0 podman[372990]: 2026-01-26 16:30:14.409893394 +0000 UTC m=+0.621050220 container remove cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:30:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:14.419 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ee72adbc-50cc-41c9-9368-385af0d4cb29]: (4, ('Mon Jan 26 04:30:12 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 (cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84)\ncec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84\nMon Jan 26 04:30:13 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 (cec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84)\ncec44ca5b06b50c55fd75cc929766a533f4274c4541c5b2f018580a78171ad84\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:14.421 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6a28f801-0299-4721-942c-fa2ed21b3720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:14.422 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18216d50-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:14 compute-0 nova_compute[239965]: 2026-01-26 16:30:14.462 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:14 compute-0 kernel: tap18216d50-60: left promiscuous mode
Jan 26 16:30:14 compute-0 nova_compute[239965]: 2026-01-26 16:30:14.483 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:14.485 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3cf206-f831-456c-82bd-226d2a8264e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:14.502 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[68c2ba65-c5ff-4632-a6b5-3876fdc5b058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:14.504 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2e211098-1360-4d4d-b0c8-a05b82cf7196]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:14.521 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e0f9a13a-3213-438e-b6b3-af679006fddc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 647915, 'reachable_time': 16529, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373007, 'error': None, 'target': 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:14 compute-0 systemd[1]: run-netns-ovnmeta\x2d18216d50\x2d69be\x2d46b9\x2d999b\x2dec8b1b0e72e8.mount: Deactivated successfully.
Jan 26 16:30:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:14.524 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:30:14 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:14.524 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7dc7db-535f-476e-83e2-7fcfc860b370]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 314 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 1.4 MiB/s wr, 91 op/s
Jan 26 16:30:14 compute-0 ceph-mon[75140]: pgmap v2401: 305 pgs: 305 active+clean; 314 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 1.4 MiB/s wr, 91 op/s
Jan 26 16:30:14 compute-0 nova_compute[239965]: 2026-01-26 16:30:14.867 239969 INFO nova.virt.libvirt.driver [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Deleting instance files /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949_del
Jan 26 16:30:14 compute-0 nova_compute[239965]: 2026-01-26 16:30:14.868 239969 INFO nova.virt.libvirt.driver [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Deletion of /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949_del complete
Jan 26 16:30:14 compute-0 nova_compute[239965]: 2026-01-26 16:30:14.959 239969 INFO nova.scheduler.client.report [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Deleted allocations for instance 23444729-2c63-4c8b-a28c-04334c5b5949
Jan 26 16:30:14 compute-0 nova_compute[239965]: 2026-01-26 16:30:14.998 239969 INFO nova.compute.manager [None req-3fbcfb1c-8bd9-4cb5-afe9-fc1d43dbbeb2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Get console output
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.001 239969 DEBUG oslo_concurrency.lockutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.001 239969 DEBUG oslo_concurrency.lockutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.042 239969 DEBUG oslo_concurrency.processutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.157 239969 INFO nova.compute.manager [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Resuming
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.159 239969 DEBUG nova.objects.instance [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'flavor' on Instance uuid 867d0872-e510-4b21-a4a4-ba5869bf7b41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.195 239969 DEBUG oslo_concurrency.lockutils [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.195 239969 DEBUG oslo_concurrency.lockutils [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquired lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.196 239969 DEBUG nova.network.neutron [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.623 239969 DEBUG nova.compute.manager [req-5266d3b9-f72d-4059-a02a-8d58df4741f7 req-4c1a0703-ec35-4195-9a44-b95000f95ca1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.623 239969 DEBUG oslo_concurrency.lockutils [req-5266d3b9-f72d-4059-a02a-8d58df4741f7 req-4c1a0703-ec35-4195-9a44-b95000f95ca1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.623 239969 DEBUG oslo_concurrency.lockutils [req-5266d3b9-f72d-4059-a02a-8d58df4741f7 req-4c1a0703-ec35-4195-9a44-b95000f95ca1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.624 239969 DEBUG oslo_concurrency.lockutils [req-5266d3b9-f72d-4059-a02a-8d58df4741f7 req-4c1a0703-ec35-4195-9a44-b95000f95ca1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.624 239969 DEBUG nova.compute.manager [req-5266d3b9-f72d-4059-a02a-8d58df4741f7 req-4c1a0703-ec35-4195-9a44-b95000f95ca1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] No waiting events found dispatching network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.624 239969 WARNING nova.compute.manager [req-5266d3b9-f72d-4059-a02a-8d58df4741f7 req-4c1a0703-ec35-4195-9a44-b95000f95ca1 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received unexpected event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e for instance with vm_state suspended and task_state resuming.
Jan 26 16:30:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:30:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1625930116' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.666 239969 DEBUG oslo_concurrency.processutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.672 239969 DEBUG nova.compute.provider_tree [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.689 239969 DEBUG nova.scheduler.client.report [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.718 239969 DEBUG oslo_concurrency.lockutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:15 compute-0 nova_compute[239965]: 2026-01-26 16:30:15.764 239969 DEBUG oslo_concurrency.lockutils [None req-025692f6-6f1b-4cc7-9629-efc5e618ea9a 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 26.399s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1625930116' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.061 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.594 239969 DEBUG nova.network.neutron [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Updating instance_info_cache with network_info: [{"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.617 239969 DEBUG oslo_concurrency.lockutils [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Releasing lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.622 239969 DEBUG nova.virt.libvirt.vif [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:29:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1668474041',display_name='tempest-TestNetworkAdvancedServerOps-server-1668474041',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1668474041',id=151,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPv22EdwGJNljFtI8+E9tV1J38GNPBUw3o1bTvVgqzjO9PC/VkU1AA5T7fBsCkq14J38duWTMLEM8jMA4T5BdKpjilyJ0NiQThFHa9xh05gGMp17MtvNd9WsWN6fEZCeJA==',key_name='tempest-TestNetworkAdvancedServerOps-1208787537',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-hq09mo5w',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:30:12Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=867d0872-e510-4b21-a4a4-ba5869bf7b41,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.623 239969 DEBUG nova.network.os_vif_util [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.624 239969 DEBUG nova.network.os_vif_util [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.624 239969 DEBUG os_vif [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.624 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.625 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.625 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.628 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.628 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3bc091c5-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.628 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3bc091c5-3a, col_values=(('external_ids', {'iface-id': '3bc091c5-3a0b-4785-bcb7-e3f106899c4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:04:cd', 'vm-uuid': '867d0872-e510-4b21-a4a4-ba5869bf7b41'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.628 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.629 239969 INFO os_vif [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a')
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.646 239969 DEBUG nova.objects.instance [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'numa_topology' on Instance uuid 867d0872-e510-4b21-a4a4-ba5869bf7b41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:16 compute-0 kernel: tap3bc091c5-3a: entered promiscuous mode
Jan 26 16:30:16 compute-0 NetworkManager[48954]: <info>  [1769445016.7373] manager: (tap3bc091c5-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Jan 26 16:30:16 compute-0 ovn_controller[146046]: 2026-01-26T16:30:16Z|01621|binding|INFO|Claiming lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e for this chassis.
Jan 26 16:30:16 compute-0 ovn_controller[146046]: 2026-01-26T16:30:16Z|01622|binding|INFO|3bc091c5-3a0b-4785-bcb7-e3f106899c4e: Claiming fa:16:3e:1e:04:cd 10.100.0.8
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.737 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.750 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:04:cd 10.100.0.8'], port_security=['fa:16:3e:1e:04:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '867d0872-e510-4b21-a4a4-ba5869bf7b41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'ef4be394-553c-42cb-8e42-8f8b7eb94878', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.203'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45299a42-06fa-4567-8e89-46e6b687daf8, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3bc091c5-3a0b-4785-bcb7-e3f106899c4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.752 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e in datapath 18216d50-69be-46b9-999b-ec8b1b0e72e8 bound to our chassis
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.755 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 18216d50-69be-46b9-999b-ec8b1b0e72e8
Jan 26 16:30:16 compute-0 ovn_controller[146046]: 2026-01-26T16:30:16Z|01623|binding|INFO|Setting lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e ovn-installed in OVS
Jan 26 16:30:16 compute-0 ovn_controller[146046]: 2026-01-26T16:30:16Z|01624|binding|INFO|Setting lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e up in Southbound
Jan 26 16:30:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 283 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 347 KiB/s wr, 59 op/s
Jan 26 16:30:16 compute-0 nova_compute[239965]: 2026-01-26 16:30:16.773 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.782 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[375ed797-78ae-41a6-b01e-4ee9c58bb868]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.784 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap18216d50-61 in ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:30:16 compute-0 systemd-machined[208061]: New machine qemu-184-instance-00000097.
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.786 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap18216d50-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.786 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[462b8d54-c2ac-41a3-9cb9-fbc4adc911b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.788 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2f64a639-d52d-4f04-b791-3c6a155e499d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 systemd[1]: Started Virtual Machine qemu-184-instance-00000097.
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.806 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[722dd9ba-559e-4878-a071-8ab37005111e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 systemd-udevd[373046]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:30:16 compute-0 NetworkManager[48954]: <info>  [1769445016.8246] device (tap3bc091c5-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:30:16 compute-0 NetworkManager[48954]: <info>  [1769445016.8266] device (tap3bc091c5-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.827 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[d4ee6c23-2024-4ab9-8b33-fa862b2c956a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.870 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[37dd4383-59ad-4242-af72-42b20f68a757]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.877 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f080aab4-5ee4-40ec-87d0-d8a24aeb2a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 NetworkManager[48954]: <info>  [1769445016.8792] manager: (tap18216d50-60): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Jan 26 16:30:16 compute-0 systemd-udevd[373049]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:30:16 compute-0 ceph-mon[75140]: pgmap v2402: 305 pgs: 305 active+clean; 283 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 294 KiB/s rd, 347 KiB/s wr, 59 op/s
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.919 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5dfea83c-3958-49be-a19d-8d2fb829af8c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.921 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[a402d066-a074-4ea6-968e-f5fbc9b04c31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 NetworkManager[48954]: <info>  [1769445016.9435] device (tap18216d50-60): carrier: link connected
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.952 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[202c7297-2805-4237-b960-88cd3841ebe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:16.976 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b1b76eeb-f809-4198-98fb-2006d492dcc6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18216d50-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:17:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 467], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650774, 'reachable_time': 28673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373077, 'error': None, 'target': 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.001 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[804a5393-c4bf-452d-a930-50f5b5fd7fa9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:1715'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650774, 'tstamp': 650774}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 373078, 'error': None, 'target': 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.018 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2607ac17-bef7-4b60-8c02-606c1cda21c4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap18216d50-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:17:15'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 467], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650774, 'reachable_time': 28673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 373079, 'error': None, 'target': 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.078 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[fcc26ce6-198f-46c6-80c5-db460d35a0d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.166 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[f6ede7ac-9b68-4c24-b4dc-7d495ad0e030]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.168 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18216d50-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.169 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.170 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18216d50-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.172 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:17 compute-0 kernel: tap18216d50-60: entered promiscuous mode
Jan 26 16:30:17 compute-0 NetworkManager[48954]: <info>  [1769445017.1740] manager: (tap18216d50-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.178 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap18216d50-60, col_values=(('external_ids', {'iface-id': 'c5adae48-670f-463e-b10e-3bf726c12f7a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.180 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:17 compute-0 ovn_controller[146046]: 2026-01-26T16:30:17Z|01625|binding|INFO|Releasing lport c5adae48-670f-463e-b10e-3bf726c12f7a from this chassis (sb_readonly=0)
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.182 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.182 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/18216d50-69be-46b9-999b-ec8b1b0e72e8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/18216d50-69be-46b9-999b-ec8b1b0e72e8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.183 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[882efb95-32c5-44d1-810f-9b58ce170987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.184 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-18216d50-69be-46b9-999b-ec8b1b0e72e8
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/18216d50-69be-46b9-999b-ec8b1b0e72e8.pid.haproxy
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 18216d50-69be-46b9-999b-ec8b1b0e72e8
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:30:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:17.185 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'env', 'PROCESS_TAG=haproxy-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/18216d50-69be-46b9-999b-ec8b1b0e72e8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.202 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.562 239969 DEBUG nova.virt.libvirt.host [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Removed pending event for 867d0872-e510-4b21-a4a4-ba5869bf7b41 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.563 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445017.5608199, 867d0872-e510-4b21-a4a4-ba5869bf7b41 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.563 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] VM Started (Lifecycle Event)
Jan 26 16:30:17 compute-0 sudo[373146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:30:17 compute-0 sudo[373146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:30:17 compute-0 sudo[373146]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:17 compute-0 podman[373156]: 2026-01-26 16:30:17.575275744 +0000 UTC m=+0.071465251 container create afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.587 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.594 239969 DEBUG nova.compute.manager [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.594 239969 DEBUG nova.objects.instance [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'pci_devices' on Instance uuid 867d0872-e510-4b21-a4a4-ba5869bf7b41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.597 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.617 239969 INFO nova.virt.libvirt.driver [-] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Instance running successfully.
Jan 26 16:30:17 compute-0 virtqemud[240263]: argument unsupported: QEMU guest agent is not configured
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.620 239969 DEBUG nova.virt.libvirt.guest [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.620 239969 DEBUG nova.compute.manager [None req-5bfffcf1-8b09-4608-9bc7-be2df9f3f7c2 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.622 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.622 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445017.5621855, 867d0872-e510-4b21-a4a4-ba5869bf7b41 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.622 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] VM Resumed (Lifecycle Event)
Jan 26 16:30:17 compute-0 podman[373156]: 2026-01-26 16:30:17.532813774 +0000 UTC m=+0.029003311 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:30:17 compute-0 sudo[373186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:30:17 compute-0 sudo[373186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.662 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.666 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.692 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] During sync_power_state the instance has a pending task (resuming). Skip.
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.748 239969 DEBUG nova.compute.manager [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.749 239969 DEBUG oslo_concurrency.lockutils [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.749 239969 DEBUG oslo_concurrency.lockutils [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.750 239969 DEBUG oslo_concurrency.lockutils [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.750 239969 DEBUG nova.compute.manager [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] No waiting events found dispatching network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.751 239969 WARNING nova.compute.manager [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received unexpected event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e for instance with vm_state active and task_state None.
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.751 239969 DEBUG nova.compute.manager [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.752 239969 DEBUG oslo_concurrency.lockutils [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.753 239969 DEBUG oslo_concurrency.lockutils [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.753 239969 DEBUG oslo_concurrency.lockutils [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.754 239969 DEBUG nova.compute.manager [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] No waiting events found dispatching network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:30:17 compute-0 nova_compute[239965]: 2026-01-26 16:30:17.754 239969 WARNING nova.compute.manager [req-65a05934-df69-4262-a2fb-82ed073a660e req-23d97064-f410-46b4-aa7d-7ab6696b9036 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received unexpected event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e for instance with vm_state active and task_state None.
Jan 26 16:30:17 compute-0 systemd[1]: Started libpod-conmon-afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855.scope.
Jan 26 16:30:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16d3d9a7dfaebe63a996746582ae40e852b99f700e8c59ff6e1b6ab6c5373847/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:17 compute-0 podman[373156]: 2026-01-26 16:30:17.827783813 +0000 UTC m=+0.323973370 container init afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:30:17 compute-0 podman[373156]: 2026-01-26 16:30:17.833218657 +0000 UTC m=+0.329408184 container start afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:30:17 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[373216]: [NOTICE]   (373220) : New worker (373229) forked
Jan 26 16:30:17 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[373216]: [NOTICE]   (373220) : Loading success.
Jan 26 16:30:18 compute-0 sudo[373186]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:30:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:30:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:30:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:30:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:30:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:30:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:30:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:30:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:30:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:30:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:30:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:30:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:30:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:30:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:30:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:30:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:30:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:30:18 compute-0 sudo[373262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:30:18 compute-0 sudo[373262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:30:18 compute-0 sudo[373262]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:18 compute-0 sudo[373287]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:30:18 compute-0 sudo[373287]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.713 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.715 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.715 239969 INFO nova.compute.manager [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Unshelving
Jan 26 16:30:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 127 KiB/s wr, 60 op/s
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.789 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.790 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.796 239969 DEBUG nova.objects.instance [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'pci_requests' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.809 239969 DEBUG nova.objects.instance [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'numa_topology' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.821 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.821 239969 INFO nova.compute.claims [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:30:18 compute-0 nova_compute[239965]: 2026-01-26 16:30:18.932 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:18 compute-0 podman[373324]: 2026-01-26 16:30:18.900263265 +0000 UTC m=+0.035779912 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:30:19 compute-0 podman[373324]: 2026-01-26 16:30:19.012465125 +0000 UTC m=+0.147981722 container create 3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kalam, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:30:19 compute-0 nova_compute[239965]: 2026-01-26 16:30:19.032 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:19 compute-0 systemd[1]: Started libpod-conmon-3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1.scope.
Jan 26 16:30:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:30:19 compute-0 podman[373324]: 2026-01-26 16:30:19.108359376 +0000 UTC m=+0.243875993 container init 3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 16:30:19 compute-0 podman[373324]: 2026-01-26 16:30:19.118602167 +0000 UTC m=+0.254118774 container start 3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kalam, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:30:19 compute-0 podman[373324]: 2026-01-26 16:30:19.123056927 +0000 UTC m=+0.258573564 container attach 3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kalam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:30:19 compute-0 condescending_kalam[373341]: 167 167
Jan 26 16:30:19 compute-0 systemd[1]: libpod-3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1.scope: Deactivated successfully.
Jan 26 16:30:19 compute-0 conmon[373341]: conmon 3c0cb7c37c1211b47b7a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1.scope/container/memory.events
Jan 26 16:30:19 compute-0 podman[373324]: 2026-01-26 16:30:19.127141937 +0000 UTC m=+0.262658534 container died 3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kalam, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a84a1bfb625c203461639a4589f394ecf209f5a4b761bf8733a4975a3dac6e40-merged.mount: Deactivated successfully.
Jan 26 16:30:19 compute-0 podman[373324]: 2026-01-26 16:30:19.35763443 +0000 UTC m=+0.493151027 container remove 3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:30:19 compute-0 systemd[1]: libpod-conmon-3c0cb7c37c1211b47b7aad83f1fe6a0f3ed186c2f9a82af56d83b059c39759b1.scope: Deactivated successfully.
Jan 26 16:30:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:30:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703455323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:19 compute-0 nova_compute[239965]: 2026-01-26 16:30:19.552 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.620s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:19 compute-0 nova_compute[239965]: 2026-01-26 16:30:19.560 239969 DEBUG nova.compute.provider_tree [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:30:19 compute-0 nova_compute[239965]: 2026-01-26 16:30:19.594 239969 DEBUG nova.scheduler.client.report [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:30:19 compute-0 nova_compute[239965]: 2026-01-26 16:30:19.618 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.828s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:19 compute-0 nova_compute[239965]: 2026-01-26 16:30:19.647 239969 INFO nova.compute.manager [None req-bddd6b69-8d88-4fca-a65a-fa5222f9517c 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Get console output
Jan 26 16:30:19 compute-0 podman[373384]: 2026-01-26 16:30:19.560437509 +0000 UTC m=+0.043084560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:30:19 compute-0 nova_compute[239965]: 2026-01-26 16:30:19.655 305155 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Jan 26 16:30:19 compute-0 podman[373384]: 2026-01-26 16:30:19.705462918 +0000 UTC m=+0.188109979 container create d8db3267d51ac3f554d2c172958e2c616caeab5fd04a740ca0fcd05570adce2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_johnson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 16:30:19 compute-0 systemd[1]: Started libpod-conmon-d8db3267d51ac3f554d2c172958e2c616caeab5fd04a740ca0fcd05570adce2a.scope.
Jan 26 16:30:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/148725266f942ffa2ccc7d0e3a7ecc97c4d2ee39716e42379ff298bec8420351/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/148725266f942ffa2ccc7d0e3a7ecc97c4d2ee39716e42379ff298bec8420351/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/148725266f942ffa2ccc7d0e3a7ecc97c4d2ee39716e42379ff298bec8420351/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/148725266f942ffa2ccc7d0e3a7ecc97c4d2ee39716e42379ff298bec8420351/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/148725266f942ffa2ccc7d0e3a7ecc97c4d2ee39716e42379ff298bec8420351/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:19 compute-0 nova_compute[239965]: 2026-01-26 16:30:19.875 239969 INFO nova.network.neutron [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating port 71007552-7ba9-4fe5-8e36-ee9ce30da37f with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Jan 26 16:30:19 compute-0 podman[373384]: 2026-01-26 16:30:19.880724971 +0000 UTC m=+0.363372002 container init d8db3267d51ac3f554d2c172958e2c616caeab5fd04a740ca0fcd05570adce2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 16:30:19 compute-0 podman[373384]: 2026-01-26 16:30:19.889013955 +0000 UTC m=+0.371660996 container start d8db3267d51ac3f554d2c172958e2c616caeab5fd04a740ca0fcd05570adce2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_johnson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:30:19 compute-0 podman[373384]: 2026-01-26 16:30:19.928840176 +0000 UTC m=+0.411487207 container attach d8db3267d51ac3f554d2c172958e2c616caeab5fd04a740ca0fcd05570adce2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:30:20 compute-0 eager_johnson[373402]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:30:20 compute-0 eager_johnson[373402]: --> All data devices are unavailable
Jan 26 16:30:20 compute-0 systemd[1]: libpod-d8db3267d51ac3f554d2c172958e2c616caeab5fd04a740ca0fcd05570adce2a.scope: Deactivated successfully.
Jan 26 16:30:20 compute-0 podman[373384]: 2026-01-26 16:30:20.472896293 +0000 UTC m=+0.955543314 container died d8db3267d51ac3f554d2c172958e2c616caeab5fd04a740ca0fcd05570adce2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:30:20 compute-0 ceph-mon[75140]: pgmap v2403: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 127 KiB/s wr, 60 op/s
Jan 26 16:30:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1703455323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-148725266f942ffa2ccc7d0e3a7ecc97c4d2ee39716e42379ff298bec8420351-merged.mount: Deactivated successfully.
Jan 26 16:30:20 compute-0 podman[373384]: 2026-01-26 16:30:20.541275966 +0000 UTC m=+1.023922987 container remove d8db3267d51ac3f554d2c172958e2c616caeab5fd04a740ca0fcd05570adce2a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:30:20 compute-0 systemd[1]: libpod-conmon-d8db3267d51ac3f554d2c172958e2c616caeab5fd04a740ca0fcd05570adce2a.scope: Deactivated successfully.
Jan 26 16:30:20 compute-0 sudo[373287]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.629 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.631 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquired lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.631 239969 DEBUG nova.network.neutron [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:30:20 compute-0 sudo[373436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:30:20 compute-0 sudo[373436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:30:20 compute-0 sudo[373436]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.707 239969 DEBUG oslo_concurrency.lockutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.707 239969 DEBUG oslo_concurrency.lockutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.707 239969 DEBUG oslo_concurrency.lockutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.707 239969 DEBUG oslo_concurrency.lockutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.708 239969 DEBUG oslo_concurrency.lockutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.709 239969 INFO nova.compute.manager [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Terminating instance
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.709 239969 DEBUG nova.compute.manager [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:30:20 compute-0 sudo[373461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:30:20 compute-0 sudo[373461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:30:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 113 KiB/s wr, 57 op/s
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.784 239969 DEBUG nova.compute.manager [req-1e3dcd2d-b8bd-41b6-bf35-882cc6ed5965 req-20895ce6-08ac-4f26-bfa4-0ddf4d1228c3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-changed-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.785 239969 DEBUG nova.compute.manager [req-1e3dcd2d-b8bd-41b6-bf35-882cc6ed5965 req-20895ce6-08ac-4f26-bfa4-0ddf4d1228c3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Refreshing instance network info cache due to event network-changed-3bc091c5-3a0b-4785-bcb7-e3f106899c4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.786 239969 DEBUG oslo_concurrency.lockutils [req-1e3dcd2d-b8bd-41b6-bf35-882cc6ed5965 req-20895ce6-08ac-4f26-bfa4-0ddf4d1228c3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.786 239969 DEBUG oslo_concurrency.lockutils [req-1e3dcd2d-b8bd-41b6-bf35-882cc6ed5965 req-20895ce6-08ac-4f26-bfa4-0ddf4d1228c3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.786 239969 DEBUG nova.network.neutron [req-1e3dcd2d-b8bd-41b6-bf35-882cc6ed5965 req-20895ce6-08ac-4f26-bfa4-0ddf4d1228c3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Refreshing network info cache for port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:30:20 compute-0 kernel: tap3bc091c5-3a (unregistering): left promiscuous mode
Jan 26 16:30:20 compute-0 NetworkManager[48954]: <info>  [1769445020.9263] device (tap3bc091c5-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:30:20 compute-0 ovn_controller[146046]: 2026-01-26T16:30:20Z|01626|binding|INFO|Releasing lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e from this chassis (sb_readonly=0)
Jan 26 16:30:20 compute-0 ovn_controller[146046]: 2026-01-26T16:30:20Z|01627|binding|INFO|Setting lport 3bc091c5-3a0b-4785-bcb7-e3f106899c4e down in Southbound
Jan 26 16:30:20 compute-0 ovn_controller[146046]: 2026-01-26T16:30:20Z|01628|binding|INFO|Removing iface tap3bc091c5-3a ovn-installed in OVS
Jan 26 16:30:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:20.944 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:04:cd 10.100.0.8'], port_security=['fa:16:3e:1e:04:cd 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '867d0872-e510-4b21-a4a4-ba5869bf7b41', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2a7615c0db4e4f38aec30c7c723c3c3a', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'ef4be394-553c-42cb-8e42-8f8b7eb94878', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=45299a42-06fa-4567-8e89-46e6b687daf8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=3bc091c5-3a0b-4785-bcb7-e3f106899c4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:30:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:20.945 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e in datapath 18216d50-69be-46b9-999b-ec8b1b0e72e8 unbound from our chassis
Jan 26 16:30:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:20.947 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 18216d50-69be-46b9-999b-ec8b1b0e72e8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:30:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:20.948 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2bfd15b4-f169-4375-8873-4ee47e73ac00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:20 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:20.949 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 namespace which is not needed anymore
Jan 26 16:30:20 compute-0 nova_compute[239965]: 2026-01-26 16:30:20.952 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:20 compute-0 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000097.scope: Deactivated successfully.
Jan 26 16:30:20 compute-0 systemd-machined[208061]: Machine qemu-184-instance-00000097 terminated.
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:21 compute-0 podman[373521]: 2026-01-26 16:30:21.151089662 +0000 UTC m=+0.111130946 container create d47beb17dca1e926081c0f24606cfd62abb0d77dcd31c9a8ca1017aaf942443a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.158 239969 INFO nova.virt.libvirt.driver [-] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Instance destroyed successfully.
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.159 239969 DEBUG nova.objects.instance [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lazy-loading 'resources' on Instance uuid 867d0872-e510-4b21-a4a4-ba5869bf7b41 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:21 compute-0 podman[373521]: 2026-01-26 16:30:21.070295494 +0000 UTC m=+0.030336768 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.183 239969 DEBUG nova.virt.libvirt.vif [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:29:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1668474041',display_name='tempest-TestNetworkAdvancedServerOps-server-1668474041',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1668474041',id=151,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPv22EdwGJNljFtI8+E9tV1J38GNPBUw3o1bTvVgqzjO9PC/VkU1AA5T7fBsCkq14J38duWTMLEM8jMA4T5BdKpjilyJ0NiQThFHa9xh05gGMp17MtvNd9WsWN6fEZCeJA==',key_name='tempest-TestNetworkAdvancedServerOps-1208787537',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:29:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2a7615c0db4e4f38aec30c7c723c3c3a',ramdisk_id='',reservation_id='r-hq09mo5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-842475489',owner_user_name='tempest-TestNetworkAdvancedServerOps-842475489-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:30:17Z,user_data=None,user_id='59ae1c17a260470c91f50965ddd53a9e',uuid=867d0872-e510-4b21-a4a4-ba5869bf7b41,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.183 239969 DEBUG nova.network.os_vif_util [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converting VIF {"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.203", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.185 239969 DEBUG nova.network.os_vif_util [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.185 239969 DEBUG os_vif [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.187 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.188 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3bc091c5-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.192 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.196 239969 INFO os_vif [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:04:cd,bridge_name='br-int',has_traffic_filtering=True,id=3bc091c5-3a0b-4785-bcb7-e3f106899c4e,network=Network(18216d50-69be-46b9-999b-ec8b1b0e72e8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3bc091c5-3a')
Jan 26 16:30:21 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[373216]: [NOTICE]   (373220) : haproxy version is 2.8.14-c23fe91
Jan 26 16:30:21 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[373216]: [NOTICE]   (373220) : path to executable is /usr/sbin/haproxy
Jan 26 16:30:21 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[373216]: [WARNING]  (373220) : Exiting Master process...
Jan 26 16:30:21 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[373216]: [ALERT]    (373220) : Current worker (373229) exited with code 143 (Terminated)
Jan 26 16:30:21 compute-0 neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8[373216]: [WARNING]  (373220) : All workers exited. Exiting... (0)
Jan 26 16:30:21 compute-0 systemd[1]: Started libpod-conmon-d47beb17dca1e926081c0f24606cfd62abb0d77dcd31c9a8ca1017aaf942443a.scope.
Jan 26 16:30:21 compute-0 systemd[1]: libpod-afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855.scope: Deactivated successfully.
Jan 26 16:30:21 compute-0 podman[373529]: 2026-01-26 16:30:21.244342536 +0000 UTC m=+0.173900370 container died afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:30:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.372 239969 DEBUG nova.compute.manager [req-1cd277ac-c2b9-4865-a568-aff64d682c2b req-255cce69-f06e-4950-bc41-bd298d158a9d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-unplugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.373 239969 DEBUG oslo_concurrency.lockutils [req-1cd277ac-c2b9-4865-a568-aff64d682c2b req-255cce69-f06e-4950-bc41-bd298d158a9d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.373 239969 DEBUG oslo_concurrency.lockutils [req-1cd277ac-c2b9-4865-a568-aff64d682c2b req-255cce69-f06e-4950-bc41-bd298d158a9d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.374 239969 DEBUG oslo_concurrency.lockutils [req-1cd277ac-c2b9-4865-a568-aff64d682c2b req-255cce69-f06e-4950-bc41-bd298d158a9d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.374 239969 DEBUG nova.compute.manager [req-1cd277ac-c2b9-4865-a568-aff64d682c2b req-255cce69-f06e-4950-bc41-bd298d158a9d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] No waiting events found dispatching network-vif-unplugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.374 239969 DEBUG nova.compute.manager [req-1cd277ac-c2b9-4865-a568-aff64d682c2b req-255cce69-f06e-4950-bc41-bd298d158a9d a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-unplugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:30:21 compute-0 nova_compute[239965]: 2026-01-26 16:30:21.430 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:21 compute-0 podman[373521]: 2026-01-26 16:30:21.831232437 +0000 UTC m=+0.791273711 container init d47beb17dca1e926081c0f24606cfd62abb0d77dcd31c9a8ca1017aaf942443a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:30:21 compute-0 podman[373521]: 2026-01-26 16:30:21.844170596 +0000 UTC m=+0.804211840 container start d47beb17dca1e926081c0f24606cfd62abb0d77dcd31c9a8ca1017aaf942443a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:30:21 compute-0 sweet_shockley[373578]: 167 167
Jan 26 16:30:21 compute-0 systemd[1]: libpod-d47beb17dca1e926081c0f24606cfd62abb0d77dcd31c9a8ca1017aaf942443a.scope: Deactivated successfully.
Jan 26 16:30:21 compute-0 podman[373521]: 2026-01-26 16:30:21.920398672 +0000 UTC m=+0.880439916 container attach d47beb17dca1e926081c0f24606cfd62abb0d77dcd31c9a8ca1017aaf942443a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:30:21 compute-0 podman[373521]: 2026-01-26 16:30:21.922316269 +0000 UTC m=+0.882357563 container died d47beb17dca1e926081c0f24606cfd62abb0d77dcd31c9a8ca1017aaf942443a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855-userdata-shm.mount: Deactivated successfully.
Jan 26 16:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-16d3d9a7dfaebe63a996746582ae40e852b99f700e8c59ff6e1b6ab6c5373847-merged.mount: Deactivated successfully.
Jan 26 16:30:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1a938e7614fb19214d06b12f1dd7c1fd3375232cb8297f0a61a4cc8638b500b-merged.mount: Deactivated successfully.
Jan 26 16:30:22 compute-0 podman[373529]: 2026-01-26 16:30:22.080188254 +0000 UTC m=+1.009746068 container cleanup afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.087 239969 DEBUG nova.network.neutron [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:22 compute-0 systemd[1]: libpod-conmon-afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855.scope: Deactivated successfully.
Jan 26 16:30:22 compute-0 podman[373521]: 2026-01-26 16:30:22.099159141 +0000 UTC m=+1.059200405 container remove d47beb17dca1e926081c0f24606cfd62abb0d77dcd31c9a8ca1017aaf942443a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_shockley, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.117 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Releasing lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.121 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.121 239969 INFO nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Creating image(s)
Jan 26 16:30:22 compute-0 podman[373612]: 2026-01-26 16:30:22.150062643 +0000 UTC m=+0.045062849 container remove afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.155 239969 DEBUG nova.storage.rbd_utils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:30:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:22.157 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[4d3d53f5-6347-4784-bde7-cd2123f726d3]: (4, ('Mon Jan 26 04:30:21 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 (afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855)\nafb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855\nMon Jan 26 04:30:22 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 (afb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855)\nafb24d07cd484b1fcb1598b8f5dcb7cbbd5f9e35a4531ecd7a904a3fc95c6855\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:22.159 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e9fd783-35dd-4e04-a162-e19bd80dec21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:22.160 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18216d50-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:22 compute-0 systemd[1]: libpod-conmon-d47beb17dca1e926081c0f24606cfd62abb0d77dcd31c9a8ca1017aaf942443a.scope: Deactivated successfully.
Jan 26 16:30:22 compute-0 kernel: tap18216d50-60: left promiscuous mode
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.166 239969 DEBUG nova.objects.instance [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.167 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:22.189 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[de36401d-aa12-4734-ae86-c8855eff5247]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:22.206 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4245a5-0c5f-406b-b8ef-21cf83856a46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:22.209 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[96b3eaef-ab1b-4de9-b215-55280c2d201e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.213 239969 DEBUG nova.storage.rbd_utils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:30:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:22.231 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5797bfad-9d86-4ca9-b7c3-8089cb99eb4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650766, 'reachable_time': 24054, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 373667, 'error': None, 'target': 'ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:22.233 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-18216d50-69be-46b9-999b-ec8b1b0e72e8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:30:22 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:22.233 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab9509c-902d-4c51-81ef-17a53c9c783a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:22 compute-0 systemd[1]: run-netns-ovnmeta\x2d18216d50\x2d69be\x2d46b9\x2d999b\x2dec8b1b0e72e8.mount: Deactivated successfully.
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.246 239969 DEBUG nova.storage.rbd_utils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.250 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "0b56cbe5b92e10af4035a6f708f6c69c8b56a689" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.251 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "0b56cbe5b92e10af4035a6f708f6c69c8b56a689" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.257 239969 DEBUG nova.network.neutron [req-1e3dcd2d-b8bd-41b6-bf35-882cc6ed5965 req-20895ce6-08ac-4f26-bfa4-0ddf4d1228c3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Updated VIF entry in instance network info cache for port 3bc091c5-3a0b-4785-bcb7-e3f106899c4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.258 239969 DEBUG nova.network.neutron [req-1e3dcd2d-b8bd-41b6-bf35-882cc6ed5965 req-20895ce6-08ac-4f26-bfa4-0ddf4d1228c3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Updating instance_info_cache with network_info: [{"id": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "address": "fa:16:3e:1e:04:cd", "network": {"id": "18216d50-69be-46b9-999b-ec8b1b0e72e8", "bridge": "br-int", "label": "tempest-network-smoke--1764697523", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2a7615c0db4e4f38aec30c7c723c3c3a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3bc091c5-3a", "ovs_interfaceid": "3bc091c5-3a0b-4785-bcb7-e3f106899c4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.259 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.278 239969 DEBUG oslo_concurrency.lockutils [req-1e3dcd2d-b8bd-41b6-bf35-882cc6ed5965 req-20895ce6-08ac-4f26-bfa4-0ddf4d1228c3 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-867d0872-e510-4b21-a4a4-ba5869bf7b41" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:30:22 compute-0 podman[373691]: 2026-01-26 16:30:22.296913627 +0000 UTC m=+0.031141197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.485 239969 DEBUG nova.virt.libvirt.imagebackend [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Image locations are: [{'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/22289f10-7e92-46f4-a6e5-0047b19a5119/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/22289f10-7e92-46f4-a6e5-0047b19a5119/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Jan 26 16:30:22 compute-0 podman[373691]: 2026-01-26 16:30:22.537359864 +0000 UTC m=+0.271587404 container create ff34b9e13c9c4fd1b5308178a61134176a8b384702a5917dbcc3e22113211411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elion, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.565 239969 DEBUG nova.virt.libvirt.imagebackend [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Selected location: {'url': 'rbd://8b9831ad-4b0d-59b4-8860-96eb895a171f/images/22289f10-7e92-46f4-a6e5-0047b19a5119/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.567 239969 DEBUG nova.storage.rbd_utils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] cloning images/22289f10-7e92-46f4-a6e5-0047b19a5119@snap to None/23444729-2c63-4c8b-a28c-04334c5b5949_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Jan 26 16:30:22 compute-0 systemd[1]: Started libpod-conmon-ff34b9e13c9c4fd1b5308178a61134176a8b384702a5917dbcc3e22113211411.scope.
Jan 26 16:30:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613a2e4a4e5bbcd9081d1fc643332aa5a53ce3b061a6ad81668a30868094b279/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613a2e4a4e5bbcd9081d1fc643332aa5a53ce3b061a6ad81668a30868094b279/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613a2e4a4e5bbcd9081d1fc643332aa5a53ce3b061a6ad81668a30868094b279/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/613a2e4a4e5bbcd9081d1fc643332aa5a53ce3b061a6ad81668a30868094b279/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:22 compute-0 podman[373691]: 2026-01-26 16:30:22.656493485 +0000 UTC m=+0.390721055 container init ff34b9e13c9c4fd1b5308178a61134176a8b384702a5917dbcc3e22113211411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elion, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:30:22 compute-0 podman[373691]: 2026-01-26 16:30:22.672241193 +0000 UTC m=+0.406468763 container start ff34b9e13c9c4fd1b5308178a61134176a8b384702a5917dbcc3e22113211411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:30:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 208 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 18 KiB/s wr, 57 op/s
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.865 239969 DEBUG nova.compute.manager [req-933f1312-83ce-4578-96c5-1a24b000de3a req-f2a7ef1d-d4f6-4235-b30a-7498121ead4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.866 239969 DEBUG nova.compute.manager [req-933f1312-83ce-4578-96c5-1a24b000de3a req-f2a7ef1d-d4f6-4235-b30a-7498121ead4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing instance network info cache due to event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.867 239969 DEBUG oslo_concurrency.lockutils [req-933f1312-83ce-4578-96c5-1a24b000de3a req-f2a7ef1d-d4f6-4235-b30a-7498121ead4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.867 239969 DEBUG oslo_concurrency.lockutils [req-933f1312-83ce-4578-96c5-1a24b000de3a req-f2a7ef1d-d4f6-4235-b30a-7498121ead4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:30:22 compute-0 nova_compute[239965]: 2026-01-26 16:30:22.868 239969 DEBUG nova.network.neutron [req-933f1312-83ce-4578-96c5-1a24b000de3a req-f2a7ef1d-d4f6-4235-b30a-7498121ead4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:30:22 compute-0 podman[373691]: 2026-01-26 16:30:22.877186045 +0000 UTC m=+0.611413625 container attach ff34b9e13c9c4fd1b5308178a61134176a8b384702a5917dbcc3e22113211411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elion, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:30:22 compute-0 ceph-mon[75140]: pgmap v2404: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 113 KiB/s wr, 57 op/s
Jan 26 16:30:22 compute-0 magical_elion[373755]: {
Jan 26 16:30:22 compute-0 magical_elion[373755]:     "0": [
Jan 26 16:30:22 compute-0 magical_elion[373755]:         {
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "devices": [
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "/dev/loop3"
Jan 26 16:30:22 compute-0 magical_elion[373755]:             ],
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_name": "ceph_lv0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_size": "21470642176",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "name": "ceph_lv0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "tags": {
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.cluster_name": "ceph",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.crush_device_class": "",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.encrypted": "0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.objectstore": "bluestore",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.osd_id": "0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.type": "block",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.vdo": "0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.with_tpm": "0"
Jan 26 16:30:22 compute-0 magical_elion[373755]:             },
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "type": "block",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "vg_name": "ceph_vg0"
Jan 26 16:30:22 compute-0 magical_elion[373755]:         }
Jan 26 16:30:22 compute-0 magical_elion[373755]:     ],
Jan 26 16:30:22 compute-0 magical_elion[373755]:     "1": [
Jan 26 16:30:22 compute-0 magical_elion[373755]:         {
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "devices": [
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "/dev/loop4"
Jan 26 16:30:22 compute-0 magical_elion[373755]:             ],
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_name": "ceph_lv1",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_size": "21470642176",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "name": "ceph_lv1",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "tags": {
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.cluster_name": "ceph",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.crush_device_class": "",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.encrypted": "0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.objectstore": "bluestore",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.osd_id": "1",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.type": "block",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.vdo": "0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.with_tpm": "0"
Jan 26 16:30:22 compute-0 magical_elion[373755]:             },
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "type": "block",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "vg_name": "ceph_vg1"
Jan 26 16:30:22 compute-0 magical_elion[373755]:         }
Jan 26 16:30:22 compute-0 magical_elion[373755]:     ],
Jan 26 16:30:22 compute-0 magical_elion[373755]:     "2": [
Jan 26 16:30:22 compute-0 magical_elion[373755]:         {
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "devices": [
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "/dev/loop5"
Jan 26 16:30:22 compute-0 magical_elion[373755]:             ],
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_name": "ceph_lv2",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_size": "21470642176",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "name": "ceph_lv2",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "tags": {
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.cluster_name": "ceph",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.crush_device_class": "",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.encrypted": "0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.objectstore": "bluestore",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.osd_id": "2",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.type": "block",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.vdo": "0",
Jan 26 16:30:22 compute-0 magical_elion[373755]:                 "ceph.with_tpm": "0"
Jan 26 16:30:22 compute-0 magical_elion[373755]:             },
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "type": "block",
Jan 26 16:30:22 compute-0 magical_elion[373755]:             "vg_name": "ceph_vg2"
Jan 26 16:30:22 compute-0 magical_elion[373755]:         }
Jan 26 16:30:22 compute-0 magical_elion[373755]:     ]
Jan 26 16:30:22 compute-0 magical_elion[373755]: }
Jan 26 16:30:22 compute-0 systemd[1]: libpod-ff34b9e13c9c4fd1b5308178a61134176a8b384702a5917dbcc3e22113211411.scope: Deactivated successfully.
Jan 26 16:30:22 compute-0 podman[373691]: 2026-01-26 16:30:22.966854752 +0000 UTC m=+0.701082312 container died ff34b9e13c9c4fd1b5308178a61134176a8b384702a5917dbcc3e22113211411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elion, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:30:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-613a2e4a4e5bbcd9081d1fc643332aa5a53ce3b061a6ad81668a30868094b279-merged.mount: Deactivated successfully.
Jan 26 16:30:23 compute-0 podman[373691]: 2026-01-26 16:30:23.410323656 +0000 UTC m=+1.144551206 container remove ff34b9e13c9c4fd1b5308178a61134176a8b384702a5917dbcc3e22113211411 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_elion, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.415 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "0b56cbe5b92e10af4035a6f708f6c69c8b56a689" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:23 compute-0 sudo[373461]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:23 compute-0 systemd[1]: libpod-conmon-ff34b9e13c9c4fd1b5308178a61134176a8b384702a5917dbcc3e22113211411.scope: Deactivated successfully.
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:30:23 compute-0 sudo[373831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.572 239969 DEBUG nova.objects.instance [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'migration_context' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:23 compute-0 sudo[373831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:30:23 compute-0 sudo[373831]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:23 compute-0 sudo[373876]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:30:23 compute-0 sudo[373876]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.779 239969 DEBUG nova.storage.rbd_utils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] flattening vms/23444729-2c63-4c8b-a28c-04334c5b5949_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.904 239969 DEBUG nova.compute.manager [req-dca784be-f452-4213-9819-14a132df8764 req-89d052f8-6832-476d-9dfb-fb67f916d84a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.905 239969 DEBUG oslo_concurrency.lockutils [req-dca784be-f452-4213-9819-14a132df8764 req-89d052f8-6832-476d-9dfb-fb67f916d84a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.906 239969 DEBUG oslo_concurrency.lockutils [req-dca784be-f452-4213-9819-14a132df8764 req-89d052f8-6832-476d-9dfb-fb67f916d84a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.906 239969 DEBUG oslo_concurrency.lockutils [req-dca784be-f452-4213-9819-14a132df8764 req-89d052f8-6832-476d-9dfb-fb67f916d84a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.907 239969 DEBUG nova.compute.manager [req-dca784be-f452-4213-9819-14a132df8764 req-89d052f8-6832-476d-9dfb-fb67f916d84a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] No waiting events found dispatching network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.908 239969 WARNING nova.compute.manager [req-dca784be-f452-4213-9819-14a132df8764 req-89d052f8-6832-476d-9dfb-fb67f916d84a a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received unexpected event network-vif-plugged-3bc091c5-3a0b-4785-bcb7-e3f106899c4e for instance with vm_state active and task_state deleting.
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.930 239969 INFO nova.virt.libvirt.driver [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Deleting instance files /var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41_del
Jan 26 16:30:23 compute-0 nova_compute[239965]: 2026-01-26 16:30:23.931 239969 INFO nova.virt.libvirt.driver [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Deletion of /var/lib/nova/instances/867d0872-e510-4b21-a4a4-ba5869bf7b41_del complete
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.001 239969 INFO nova.compute.manager [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Took 3.29 seconds to destroy the instance on the hypervisor.
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.002 239969 DEBUG oslo.service.loopingcall [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.003 239969 DEBUG nova.compute.manager [-] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.003 239969 DEBUG nova.network.neutron [-] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.046 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.065 239969 WARNING nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] While synchronizing instance power states, found 2 instances in the database and 0 instances on the hypervisor.
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.066 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Triggering sync for uuid 23444729-2c63-4c8b-a28c-04334c5b5949 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.066 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Triggering sync for uuid 867d0872-e510-4b21-a4a4-ba5869bf7b41 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.067 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:24 compute-0 nova_compute[239965]: 2026-01-26 16:30:24.068 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "867d0872-e510-4b21-a4a4-ba5869bf7b41" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:24 compute-0 podman[373947]: 2026-01-26 16:30:23.985249093 +0000 UTC m=+0.022943336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:30:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 194 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 14 KiB/s wr, 52 op/s
Jan 26 16:30:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:25 compute-0 ceph-mon[75140]: pgmap v2405: 305 pgs: 305 active+clean; 208 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 18 KiB/s wr, 57 op/s
Jan 26 16:30:25 compute-0 podman[373947]: 2026-01-26 16:30:25.00595828 +0000 UTC m=+1.043652543 container create 8c197a3b56e75c032b3ea8be7130a5898fcac32ba62105469c268717deca5d89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:30:25 compute-0 systemd[1]: Started libpod-conmon-8c197a3b56e75c032b3ea8be7130a5898fcac32ba62105469c268717deca5d89.scope.
Jan 26 16:30:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:30:25 compute-0 podman[373947]: 2026-01-26 16:30:25.358124645 +0000 UTC m=+1.395818958 container init 8c197a3b56e75c032b3ea8be7130a5898fcac32ba62105469c268717deca5d89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:30:25 compute-0 podman[373947]: 2026-01-26 16:30:25.370967571 +0000 UTC m=+1.408661834 container start 8c197a3b56e75c032b3ea8be7130a5898fcac32ba62105469c268717deca5d89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:30:25 compute-0 hardcore_bartik[373962]: 167 167
Jan 26 16:30:25 compute-0 systemd[1]: libpod-8c197a3b56e75c032b3ea8be7130a5898fcac32ba62105469c268717deca5d89.scope: Deactivated successfully.
Jan 26 16:30:25 compute-0 podman[373947]: 2026-01-26 16:30:25.40624983 +0000 UTC m=+1.443944093 container attach 8c197a3b56e75c032b3ea8be7130a5898fcac32ba62105469c268717deca5d89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:30:25 compute-0 podman[373947]: 2026-01-26 16:30:25.407338926 +0000 UTC m=+1.445033189 container died 8c197a3b56e75c032b3ea8be7130a5898fcac32ba62105469c268717deca5d89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:30:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-56b915ead803f4a9d3c2b2c82e25d4426dbc5926978bf5830d9804b57b14fe5e-merged.mount: Deactivated successfully.
Jan 26 16:30:25 compute-0 podman[373947]: 2026-01-26 16:30:25.584586119 +0000 UTC m=+1.622280352 container remove 8c197a3b56e75c032b3ea8be7130a5898fcac32ba62105469c268717deca5d89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:30:25 compute-0 systemd[1]: libpod-conmon-8c197a3b56e75c032b3ea8be7130a5898fcac32ba62105469c268717deca5d89.scope: Deactivated successfully.
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.624 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Image rbd:vms/23444729-2c63-4c8b-a28c-04334c5b5949_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.626 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.626 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Ensure instance console log exists: /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.627 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.627 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.627 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.629 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Start _get_guest_xml network_info=[{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T16:29:49Z,direct_url=<?>,disk_format='raw',id=22289f10-7e92-46f4-a6e5-0047b19a5119,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-139030865-shelved',owner='67c9291388b24d24a9689e4acf626bf2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T16:30:06Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.634 239969 WARNING nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.640 239969 DEBUG nova.virt.libvirt.host [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.641 239969 DEBUG nova.virt.libvirt.host [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.644 239969 DEBUG nova.virt.libvirt.host [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.644 239969 DEBUG nova.virt.libvirt.host [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.644 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.645 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2026-01-26T16:29:49Z,direct_url=<?>,disk_format='raw',id=22289f10-7e92-46f4-a6e5-0047b19a5119,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-139030865-shelved',owner='67c9291388b24d24a9689e4acf626bf2',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2026-01-26T16:30:06Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.645 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.645 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.645 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.645 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.645 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.646 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.646 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.646 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.646 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.646 239969 DEBUG nova.virt.hardware [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.647 239969 DEBUG nova.objects.instance [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:25 compute-0 nova_compute[239965]: 2026-01-26 16:30:25.663 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:25 compute-0 podman[373986]: 2026-01-26 16:30:25.769067269 +0000 UTC m=+0.038127680 container create 6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swanson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:30:25 compute-0 systemd[1]: Started libpod-conmon-6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4.scope.
Jan 26 16:30:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b26b7e1061aad169f8439c663584c191ee0c0462e79308823f5e0dae5efd509/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b26b7e1061aad169f8439c663584c191ee0c0462e79308823f5e0dae5efd509/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b26b7e1061aad169f8439c663584c191ee0c0462e79308823f5e0dae5efd509/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b26b7e1061aad169f8439c663584c191ee0c0462e79308823f5e0dae5efd509/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:25 compute-0 podman[373986]: 2026-01-26 16:30:25.75367242 +0000 UTC m=+0.022732831 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:30:25 compute-0 podman[373986]: 2026-01-26 16:30:25.861120803 +0000 UTC m=+0.130181264 container init 6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:30:25 compute-0 podman[373986]: 2026-01-26 16:30:25.868222609 +0000 UTC m=+0.137283020 container start 6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swanson, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:30:25 compute-0 podman[373986]: 2026-01-26 16:30:25.872210966 +0000 UTC m=+0.141271477 container attach 6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 16:30:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:30:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3878627674' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.241 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.258 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.298 239969 DEBUG nova.storage.rbd_utils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:30:26 compute-0 ceph-mon[75140]: pgmap v2406: 305 pgs: 305 active+clean; 194 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 14 KiB/s wr, 52 op/s
Jan 26 16:30:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3878627674' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.303 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.527 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:30:26 compute-0 lvm[374141]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:30:26 compute-0 lvm[374140]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:30:26 compute-0 lvm[374140]: VG ceph_vg0 finished
Jan 26 16:30:26 compute-0 lvm[374141]: VG ceph_vg1 finished
Jan 26 16:30:26 compute-0 lvm[374143]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:30:26 compute-0 lvm[374143]: VG ceph_vg2 finished
Jan 26 16:30:26 compute-0 nervous_swanson[374020]: {}
Jan 26 16:30:26 compute-0 podman[373986]: 2026-01-26 16:30:26.668094521 +0000 UTC m=+0.937154942 container died 6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:30:26 compute-0 systemd[1]: libpod-6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4.scope: Deactivated successfully.
Jan 26 16:30:26 compute-0 systemd[1]: libpod-6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4.scope: Consumed 1.361s CPU time.
Jan 26 16:30:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b26b7e1061aad169f8439c663584c191ee0c0462e79308823f5e0dae5efd509-merged.mount: Deactivated successfully.
Jan 26 16:30:26 compute-0 podman[373986]: 2026-01-26 16:30:26.715783855 +0000 UTC m=+0.984844266 container remove 6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:30:26 compute-0 systemd[1]: libpod-conmon-6ab7d6f97481430c72a830514902eaf826d743e76322a4af34d9f501527f83d4.scope: Deactivated successfully.
Jan 26 16:30:26 compute-0 sudo[373876]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:30:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1019 KiB/s wr, 88 op/s
Jan 26 16:30:26 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:30:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:30:26 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:30:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:30:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3791951691' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.851 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.854 239969 DEBUG nova.virt.libvirt.vif [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:29:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-139030865',display_name='tempest-TestShelveInstance-server-139030865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-139030865',id=150,image_ref='22289f10-7e92-46f4-a6e5-0047b19a5119',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-779455889',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:29:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='67c9291388b24d24a9689e4acf626bf2',ramdisk_id='',reservation_id='r-9na32hh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1058510424',owner_user_name='tempest-TestShelveInstance-1058510424-project-member',shelved_at='2026-01-26T16:30:06.879387',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='22289f10-7e92-46f4-a6e5-0047b19a5119'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:30:18Z,user_data=None,user_id='96f070e8d2fa469da6ec2db452db6f86',uuid=23444729-2c63-4c8b-a28c-04334c5b5949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.855 239969 DEBUG nova.network.os_vif_util [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converting VIF {"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.856 239969 DEBUG nova.network.os_vif_util [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:30:26 compute-0 sudo[374158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.858 239969 DEBUG nova.objects.instance [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:26 compute-0 sudo[374158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:30:26 compute-0 sudo[374158]: pam_unix(sudo:session): session closed for user root
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.881 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <uuid>23444729-2c63-4c8b-a28c-04334c5b5949</uuid>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <name>instance-00000096</name>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <nova:name>tempest-TestShelveInstance-server-139030865</nova:name>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:30:25</nova:creationTime>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <nova:user uuid="96f070e8d2fa469da6ec2db452db6f86">tempest-TestShelveInstance-1058510424-project-member</nova:user>
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <nova:project uuid="67c9291388b24d24a9689e4acf626bf2">tempest-TestShelveInstance-1058510424</nova:project>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="22289f10-7e92-46f4-a6e5-0047b19a5119"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <nova:port uuid="71007552-7ba9-4fe5-8e36-ee9ce30da37f">
Jan 26 16:30:26 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <system>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <entry name="serial">23444729-2c63-4c8b-a28c-04334c5b5949</entry>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <entry name="uuid">23444729-2c63-4c8b-a28c-04334c5b5949</entry>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     </system>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <os>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   </os>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <features>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   </features>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/23444729-2c63-4c8b-a28c-04334c5b5949_disk">
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/23444729-2c63-4c8b-a28c-04334c5b5949_disk.config">
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:30:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:45:01:8a"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <target dev="tap71007552-7b"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/console.log" append="off"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <video>
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     </video>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <input type="keyboard" bus="usb"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:30:26 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:30:26 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:30:26 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:30:26 compute-0 nova_compute[239965]: </domain>
Jan 26 16:30:26 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.883 239969 DEBUG nova.compute.manager [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Preparing to wait for external event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.883 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.883 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.884 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.884 239969 DEBUG nova.virt.libvirt.vif [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:29:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-139030865',display_name='tempest-TestShelveInstance-server-139030865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-139030865',id=150,image_ref='22289f10-7e92-46f4-a6e5-0047b19a5119',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-779455889',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:29:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='67c9291388b24d24a9689e4acf626bf2',ramdisk_id='',reservation_id='r-9na32hh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio'
,image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1058510424',owner_user_name='tempest-TestShelveInstance-1058510424-project-member',shelved_at='2026-01-26T16:30:06.879387',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='22289f10-7e92-46f4-a6e5-0047b19a5119'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:30:18Z,user_data=None,user_id='96f070e8d2fa469da6ec2db452db6f86',uuid=23444729-2c63-4c8b-a28c-04334c5b5949,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.885 239969 DEBUG nova.network.os_vif_util [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converting VIF {"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.885 239969 DEBUG nova.network.os_vif_util [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.885 239969 DEBUG os_vif [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.886 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.886 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.887 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.890 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.890 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71007552-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.890 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap71007552-7b, col_values=(('external_ids', {'iface-id': '71007552-7ba9-4fe5-8e36-ee9ce30da37f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:01:8a', 'vm-uuid': '23444729-2c63-4c8b-a28c-04334c5b5949'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:26 compute-0 NetworkManager[48954]: <info>  [1769445026.8938] manager: (tap71007552-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.894 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.903 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.904 239969 INFO os_vif [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b')
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.961 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.961 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.961 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] No VIF found with MAC fa:16:3e:45:01:8a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.961 239969 INFO nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Using config drive
Jan 26 16:30:26 compute-0 nova_compute[239965]: 2026-01-26 16:30:26.981 239969 DEBUG nova.storage.rbd_utils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.000 239969 DEBUG nova.objects.instance [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.028 239969 DEBUG nova.network.neutron [req-933f1312-83ce-4578-96c5-1a24b000de3a req-f2a7ef1d-d4f6-4235-b30a-7498121ead4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updated VIF entry in instance network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.028 239969 DEBUG nova.network.neutron [req-933f1312-83ce-4578-96c5-1a24b000de3a req-f2a7ef1d-d4f6-4235-b30a-7498121ead4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.047 239969 DEBUG nova.network.neutron [-] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.052 239969 DEBUG oslo_concurrency.lockutils [req-933f1312-83ce-4578-96c5-1a24b000de3a req-f2a7ef1d-d4f6-4235-b30a-7498121ead4c a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.059 239969 DEBUG nova.objects.instance [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'keypairs' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.066 239969 INFO nova.compute.manager [-] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Took 3.06 seconds to deallocate network for instance.
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.118 239969 DEBUG nova.compute.manager [req-55364c81-1094-4d67-963f-74cde1d4e893 req-d15dca6c-eee2-4513-af2c-4a5b742c5b9e a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Received event network-vif-deleted-3bc091c5-3a0b-4785-bcb7-e3f106899c4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.130 239969 DEBUG oslo_concurrency.lockutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.131 239969 DEBUG oslo_concurrency.lockutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.203 239969 DEBUG oslo_concurrency.processutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.435 239969 INFO nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Creating config drive at /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.441 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqav_1xee execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.608 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqav_1xee" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.648 239969 DEBUG nova.storage.rbd_utils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] rbd image 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.652 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:27 compute-0 ceph-mon[75140]: pgmap v2407: 305 pgs: 305 active+clean; 192 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1019 KiB/s wr, 88 op/s
Jan 26 16:30:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:30:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:30:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3791951691' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:30:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:30:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2660596367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.813 239969 DEBUG oslo_concurrency.processutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.815 239969 DEBUG oslo_concurrency.processutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config 23444729-2c63-4c8b-a28c-04334c5b5949_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.816 239969 INFO nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Deleting local config drive /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949/disk.config because it was imported into RBD.
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.820 239969 DEBUG nova.compute.provider_tree [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.834 239969 DEBUG nova.scheduler.client.report [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.851 239969 DEBUG oslo_concurrency.lockutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:27 compute-0 kernel: tap71007552-7b: entered promiscuous mode
Jan 26 16:30:27 compute-0 NetworkManager[48954]: <info>  [1769445027.8613] manager: (tap71007552-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/661)
Jan 26 16:30:27 compute-0 ovn_controller[146046]: 2026-01-26T16:30:27Z|01629|binding|INFO|Claiming lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f for this chassis.
Jan 26 16:30:27 compute-0 ovn_controller[146046]: 2026-01-26T16:30:27Z|01630|binding|INFO|71007552-7ba9-4fe5-8e36-ee9ce30da37f: Claiming fa:16:3e:45:01:8a 10.100.0.6
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.861 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:27 compute-0 systemd-udevd[374139]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.871 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:01:8a 10.100.0.6'], port_security=['fa:16:3e:45:01:8a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '23444729-2c63-4c8b-a28c-04334c5b5949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67c9291388b24d24a9689e4acf626bf2', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'fec1bead-ca1f-4cab-8561-fae558eddc5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.201'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75ede299-1df1-4eb6-926a-ac3d97013a63, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=71007552-7ba9-4fe5-8e36-ee9ce30da37f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.872 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 71007552-7ba9-4fe5-8e36-ee9ce30da37f in datapath 68925d6a-01d0-4cb9-94a6-da3e5f3fac44 bound to our chassis
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.873 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 68925d6a-01d0-4cb9-94a6-da3e5f3fac44
Jan 26 16:30:27 compute-0 NetworkManager[48954]: <info>  [1769445027.8765] device (tap71007552-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.876 239969 INFO nova.scheduler.client.report [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Deleted allocations for instance 867d0872-e510-4b21-a4a4-ba5869bf7b41
Jan 26 16:30:27 compute-0 NetworkManager[48954]: <info>  [1769445027.8776] device (tap71007552-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:30:27 compute-0 ovn_controller[146046]: 2026-01-26T16:30:27Z|01631|binding|INFO|Setting lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f ovn-installed in OVS
Jan 26 16:30:27 compute-0 ovn_controller[146046]: 2026-01-26T16:30:27Z|01632|binding|INFO|Setting lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f up in Southbound
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.880 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.887 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd263fe-d837-4fd8-8652-7d1940a29f05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.888 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap68925d6a-01 in ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.890 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap68925d6a-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.890 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[af36a1e4-33a2-4fa3-825b-117016151ef8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.891 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[88c2c550-4254-4eca-a6e2-3f6e0bb262af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:27 compute-0 systemd-machined[208061]: New machine qemu-185-instance-00000096.
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.902 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[5f253624-a88f-4f3d-b57d-918a50b414e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:27 compute-0 systemd[1]: Started Virtual Machine qemu-185-instance-00000096.
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.915 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ec5e5358-5cd6-40eb-9505-f444786918b2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.942 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[9f81da0c-37a9-4cc2-a88e-cfa64d5ce125]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.947 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[75f75b4f-1376-4720-b203-46c02f86bea8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:27 compute-0 NetworkManager[48954]: <info>  [1769445027.9497] manager: (tap68925d6a-00): new Veth device (/org/freedesktop/NetworkManager/Devices/662)
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.948 239969 DEBUG oslo_concurrency.lockutils [None req-06e39b09-d855-4e00-be5d-ab1ea23ae110 59ae1c17a260470c91f50965ddd53a9e 2a7615c0db4e4f38aec30c7c723c3c3a - - default default] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.949 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 3.882s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.949 239969 INFO nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] During sync_power_state the instance has a pending task (deleting). Skip.
Jan 26 16:30:27 compute-0 nova_compute[239965]: 2026-01-26 16:30:27.950 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "867d0872-e510-4b21-a4a4-ba5869bf7b41" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.978 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[f8529f2b-75ee-4523-a9ee-df994f2f8dd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:27 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:27.981 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5d8149fe-b7dc-48c2-baf9-aa23ca53a124]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:28 compute-0 NetworkManager[48954]: <info>  [1769445028.0119] device (tap68925d6a-00): carrier: link connected
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.017 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[744063cb-472d-45e1-b16d-975d96c0a177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.043 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1555e781-ec92-4db3-8c22-7dc58c6c8ddc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68925d6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:70:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651881, 'reachable_time': 43433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374310, 'error': None, 'target': 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.058 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[27271aef-ebe0-449d-b987-88dd153b2685]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:70ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651881, 'tstamp': 651881}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374311, 'error': None, 'target': 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.064 239969 DEBUG nova.compute.manager [req-a3cb840f-9d29-49fe-bfae-b67ec4a1641c req-13ba27ba-00fa-4b1c-9afa-23e54a590c74 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.065 239969 DEBUG oslo_concurrency.lockutils [req-a3cb840f-9d29-49fe-bfae-b67ec4a1641c req-13ba27ba-00fa-4b1c-9afa-23e54a590c74 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.065 239969 DEBUG oslo_concurrency.lockutils [req-a3cb840f-9d29-49fe-bfae-b67ec4a1641c req-13ba27ba-00fa-4b1c-9afa-23e54a590c74 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.065 239969 DEBUG oslo_concurrency.lockutils [req-a3cb840f-9d29-49fe-bfae-b67ec4a1641c req-13ba27ba-00fa-4b1c-9afa-23e54a590c74 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.065 239969 DEBUG nova.compute.manager [req-a3cb840f-9d29-49fe-bfae-b67ec4a1641c req-13ba27ba-00fa-4b1c-9afa-23e54a590c74 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Processing event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.078 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[47991b4f-281c-408c-a0be-81bedfebc396]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap68925d6a-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:70:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 470], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651881, 'reachable_time': 43433, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 374312, 'error': None, 'target': 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.113 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bc3e22c7-5f00-4e7f-91a3-66df6ba58a99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.190 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[7d37a726-a184-4aec-8a30-20afcacb063c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.192 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68925d6a-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.193 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.194 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68925d6a-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.196 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:28 compute-0 NetworkManager[48954]: <info>  [1769445028.1976] manager: (tap68925d6a-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/663)
Jan 26 16:30:28 compute-0 kernel: tap68925d6a-00: entered promiscuous mode
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.203 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap68925d6a-00, col_values=(('external_ids', {'iface-id': '3962717a-bcba-4dc8-93b5-aaeff002c38f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.205 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:28 compute-0 ovn_controller[146046]: 2026-01-26T16:30:28Z|01633|binding|INFO|Releasing lport 3962717a-bcba-4dc8-93b5-aaeff002c38f from this chassis (sb_readonly=0)
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.241 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.243 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/68925d6a-01d0-4cb9-94a6-da3e5f3fac44.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/68925d6a-01d0-4cb9-94a6-da3e5f3fac44.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.244 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[e4834383-8ca6-4cc8-9cc6-6d35fad7884c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.245 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-68925d6a-01d0-4cb9-94a6-da3e5f3fac44
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/68925d6a-01d0-4cb9-94a6-da3e5f3fac44.pid.haproxy
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID 68925d6a-01d0-4cb9-94a6-da3e5f3fac44
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:30:28 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:28.247 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'env', 'PROCESS_TAG=haproxy-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/68925d6a-01d0-4cb9-94a6-da3e5f3fac44.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:30:28 compute-0 podman[374344]: 2026-01-26 16:30:28.679717272 +0000 UTC m=+0.075271863 container create 2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:30:28
Jan 26 16:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'images', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'backups', '.rgw.root']
Jan 26 16:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:30:28 compute-0 systemd[1]: Started libpod-conmon-2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6.scope.
Jan 26 16:30:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:30:28 compute-0 podman[374344]: 2026-01-26 16:30:28.638793405 +0000 UTC m=+0.034347916 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:30:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/671bd056345c84e4f894abbdbd49e22722163e8a6ee13b777997b55b292d34a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:30:28 compute-0 podman[374344]: 2026-01-26 16:30:28.752735489 +0000 UTC m=+0.148289980 container init 2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:30:28 compute-0 podman[374344]: 2026-01-26 16:30:28.761602027 +0000 UTC m=+0.157156498 container start 2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:30:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 210 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.7 MiB/s wr, 110 op/s
Jan 26 16:30:28 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[374396]: [NOTICE]   (374405) : New worker (374407) forked
Jan 26 16:30:28 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[374396]: [NOTICE]   (374405) : Loading success.
Jan 26 16:30:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2660596367' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.797 239969 DEBUG nova.compute.manager [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.797 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445028.7968564, 23444729-2c63-4c8b-a28c-04334c5b5949 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.798 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] VM Started (Lifecycle Event)
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.801 239969 DEBUG nova.virt.libvirt.driver [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.806 239969 INFO nova.virt.libvirt.driver [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance spawned successfully.
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.827 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.833 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.859 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.860 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445028.7969952, 23444729-2c63-4c8b-a28c-04334c5b5949 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.860 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] VM Paused (Lifecycle Event)
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.879 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.882 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445028.800501, 23444729-2c63-4c8b-a28c-04334c5b5949 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.883 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] VM Resumed (Lifecycle Event)
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.910 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.915 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:30:28 compute-0 nova_compute[239965]: 2026-01-26 16:30:28.940 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:30:29 compute-0 nova_compute[239965]: 2026-01-26 16:30:29.037 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Jan 26 16:30:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Jan 26 16:30:29 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Jan 26 16:30:29 compute-0 ceph-mon[75140]: pgmap v2408: 305 pgs: 305 active+clean; 210 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 1.7 MiB/s wr, 110 op/s
Jan 26 16:30:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.074 239969 DEBUG nova.compute.manager [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.144 239969 DEBUG nova.compute.manager [req-366c1149-8877-42ef-bf2d-f2c02204f20b req-31c2eece-fa3d-468c-989a-83810c4285e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.145 239969 DEBUG oslo_concurrency.lockutils [req-366c1149-8877-42ef-bf2d-f2c02204f20b req-31c2eece-fa3d-468c-989a-83810c4285e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.146 239969 DEBUG oslo_concurrency.lockutils [req-366c1149-8877-42ef-bf2d-f2c02204f20b req-31c2eece-fa3d-468c-989a-83810c4285e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.146 239969 DEBUG oslo_concurrency.lockutils [req-366c1149-8877-42ef-bf2d-f2c02204f20b req-31c2eece-fa3d-468c-989a-83810c4285e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.147 239969 DEBUG nova.compute.manager [req-366c1149-8877-42ef-bf2d-f2c02204f20b req-31c2eece-fa3d-468c-989a-83810c4285e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] No waiting events found dispatching network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.147 239969 WARNING nova.compute.manager [req-366c1149-8877-42ef-bf2d-f2c02204f20b req-31c2eece-fa3d-468c-989a-83810c4285e5 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received unexpected event network-vif-plugged-71007552-7ba9-4fe5-8e36-ee9ce30da37f for instance with vm_state active and task_state None.
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.149 239969 DEBUG oslo_concurrency.lockutils [None req-bd919854-152e-4ccb-b9bd-d3a2b7087c5c 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 11.435s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.151 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 6.084s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.152 239969 INFO nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:30:30 compute-0 nova_compute[239965]: 2026-01-26 16:30:30.152 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.7 MiB/s wr, 135 op/s
Jan 26 16:30:30 compute-0 ceph-mon[75140]: osdmap e319: 3 total, 3 up, 3 in
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:30:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:30:31 compute-0 ceph-mon[75140]: pgmap v2410: 305 pgs: 305 active+clean; 246 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.7 MiB/s wr, 135 op/s
Jan 26 16:30:31 compute-0 nova_compute[239965]: 2026-01-26 16:30:31.893 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 212 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.7 MiB/s wr, 187 op/s
Jan 26 16:30:33 compute-0 ceph-mon[75140]: pgmap v2411: 305 pgs: 305 active+clean; 212 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 4.7 MiB/s wr, 187 op/s
Jan 26 16:30:34 compute-0 ovn_controller[146046]: 2026-01-26T16:30:34Z|01634|binding|INFO|Releasing lport 3962717a-bcba-4dc8-93b5-aaeff002c38f from this chassis (sb_readonly=0)
Jan 26 16:30:34 compute-0 nova_compute[239965]: 2026-01-26 16:30:34.108 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:34 compute-0 nova_compute[239965]: 2026-01-26 16:30:34.110 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Jan 26 16:30:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e319 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Jan 26 16:30:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Jan 26 16:30:35 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Jan 26 16:30:36 compute-0 ceph-mon[75140]: pgmap v2412: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.7 MiB/s wr, 216 op/s
Jan 26 16:30:36 compute-0 ceph-mon[75140]: osdmap e320: 3 total, 3 up, 3 in
Jan 26 16:30:36 compute-0 nova_compute[239965]: 2026-01-26 16:30:36.155 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769445021.1536624, 867d0872-e510-4b21-a4a4-ba5869bf7b41 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:30:36 compute-0 nova_compute[239965]: 2026-01-26 16:30:36.155 239969 INFO nova.compute.manager [-] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] VM Stopped (Lifecycle Event)
Jan 26 16:30:36 compute-0 nova_compute[239965]: 2026-01-26 16:30:36.175 239969 DEBUG nova.compute.manager [None req-b160b417-40be-45ec-b52f-70e68c0283a1 - - - - - -] [instance: 867d0872-e510-4b21-a4a4-ba5869bf7b41] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:30:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.4 MiB/s wr, 156 op/s
Jan 26 16:30:36 compute-0 nova_compute[239965]: 2026-01-26 16:30:36.895 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:38 compute-0 ceph-mon[75140]: pgmap v2414: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 3.4 MiB/s wr, 156 op/s
Jan 26 16:30:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 18 KiB/s wr, 119 op/s
Jan 26 16:30:39 compute-0 nova_compute[239965]: 2026-01-26 16:30:39.110 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:40 compute-0 ceph-mon[75140]: pgmap v2415: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 18 KiB/s wr, 119 op/s
Jan 26 16:30:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 106 op/s
Jan 26 16:30:41 compute-0 ovn_controller[146046]: 2026-01-26T16:30:41Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:01:8a 10.100.0.6
Jan 26 16:30:41 compute-0 nova_compute[239965]: 2026-01-26 16:30:41.897 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:42 compute-0 ceph-mon[75140]: pgmap v2416: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 106 op/s
Jan 26 16:30:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 75 op/s
Jan 26 16:30:44 compute-0 ceph-mon[75140]: pgmap v2417: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 15 KiB/s wr, 75 op/s
Jan 26 16:30:44 compute-0 nova_compute[239965]: 2026-01-26 16:30:44.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:44 compute-0 podman[374416]: 2026-01-26 16:30:44.41354017 +0000 UTC m=+0.092098127 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 16:30:44 compute-0 podman[374417]: 2026-01-26 16:30:44.473090695 +0000 UTC m=+0.146646729 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 26 16:30:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 16 KiB/s wr, 52 op/s
Jan 26 16:30:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:46 compute-0 ceph-mon[75140]: pgmap v2418: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 646 KiB/s rd, 16 KiB/s wr, 52 op/s
Jan 26 16:30:46 compute-0 nova_compute[239965]: 2026-01-26 16:30:46.213 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 24 KiB/s wr, 45 op/s
Jan 26 16:30:46 compute-0 nova_compute[239965]: 2026-01-26 16:30:46.899 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:48 compute-0 ceph-mon[75140]: pgmap v2419: 305 pgs: 305 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 549 KiB/s rd, 24 KiB/s wr, 45 op/s
Jan 26 16:30:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:30:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1894704887' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:30:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:30:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1894704887' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:30:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 23 KiB/s wr, 44 op/s
Jan 26 16:30:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1894704887' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:30:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1894704887' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:30:49 compute-0 nova_compute[239965]: 2026-01-26 16:30:49.115 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007791117997097148 of space, bias 1.0, pg target 0.23373353991291443 quantized to 32 (current 32)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010183619328865418 of space, bias 1.0, pg target 0.3055085798659626 quantized to 32 (current 32)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.697983172610999e-07 of space, bias 4.0, pg target 0.0008037579807133199 quantized to 16 (current 16)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:30:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.054 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.055 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.084 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:50 compute-0 ceph-mon[75140]: pgmap v2420: 305 pgs: 305 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 539 KiB/s rd, 23 KiB/s wr, 44 op/s
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.300 239969 DEBUG nova.compute.manager [req-c1f20a17-ccb1-47bb-a1a4-d223bee8d727 req-d0b15b1f-f562-4518-800e-5f73fe44beb2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.300 239969 DEBUG nova.compute.manager [req-c1f20a17-ccb1-47bb-a1a4-d223bee8d727 req-d0b15b1f-f562-4518-800e-5f73fe44beb2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing instance network info cache due to event network-changed-71007552-7ba9-4fe5-8e36-ee9ce30da37f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.301 239969 DEBUG oslo_concurrency.lockutils [req-c1f20a17-ccb1-47bb-a1a4-d223bee8d727 req-d0b15b1f-f562-4518-800e-5f73fe44beb2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.301 239969 DEBUG oslo_concurrency.lockutils [req-c1f20a17-ccb1-47bb-a1a4-d223bee8d727 req-d0b15b1f-f562-4518-800e-5f73fe44beb2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.301 239969 DEBUG nova.network.neutron [req-c1f20a17-ccb1-47bb-a1a4-d223bee8d727 req-d0b15b1f-f562-4518-800e-5f73fe44beb2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Refreshing network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.356 239969 DEBUG oslo_concurrency.lockutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.357 239969 DEBUG oslo_concurrency.lockutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.357 239969 DEBUG oslo_concurrency.lockutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.357 239969 DEBUG oslo_concurrency.lockutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.357 239969 DEBUG oslo_concurrency.lockutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.359 239969 INFO nova.compute.manager [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Terminating instance
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.360 239969 DEBUG nova.compute.manager [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:30:50 compute-0 kernel: tap71007552-7b (unregistering): left promiscuous mode
Jan 26 16:30:50 compute-0 NetworkManager[48954]: <info>  [1769445050.5032] device (tap71007552-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:30:50 compute-0 ovn_controller[146046]: 2026-01-26T16:30:50Z|01635|binding|INFO|Releasing lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f from this chassis (sb_readonly=0)
Jan 26 16:30:50 compute-0 ovn_controller[146046]: 2026-01-26T16:30:50Z|01636|binding|INFO|Setting lport 71007552-7ba9-4fe5-8e36-ee9ce30da37f down in Southbound
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.512 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:50 compute-0 ovn_controller[146046]: 2026-01-26T16:30:50Z|01637|binding|INFO|Removing iface tap71007552-7b ovn-installed in OVS
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.515 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.520 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:01:8a 10.100.0.6'], port_security=['fa:16:3e:45:01:8a 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '23444729-2c63-4c8b-a28c-04334c5b5949', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67c9291388b24d24a9689e4acf626bf2', 'neutron:revision_number': '9', 'neutron:security_group_ids': 'fec1bead-ca1f-4cab-8561-fae558eddc5d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=75ede299-1df1-4eb6-926a-ac3d97013a63, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=71007552-7ba9-4fe5-8e36-ee9ce30da37f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.521 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 71007552-7ba9-4fe5-8e36-ee9ce30da37f in datapath 68925d6a-01d0-4cb9-94a6-da3e5f3fac44 unbound from our chassis
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.523 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 68925d6a-01d0-4cb9-94a6-da3e5f3fac44, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.524 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c4f43d1c-09b9-48de-bdb8-ebb13f0b1ac3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.525 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 namespace which is not needed anymore
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.545 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:50 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000096.scope: Deactivated successfully.
Jan 26 16:30:50 compute-0 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000096.scope: Consumed 14.112s CPU time.
Jan 26 16:30:50 compute-0 systemd-machined[208061]: Machine qemu-185-instance-00000096 terminated.
Jan 26 16:30:50 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[374396]: [NOTICE]   (374405) : haproxy version is 2.8.14-c23fe91
Jan 26 16:30:50 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[374396]: [NOTICE]   (374405) : path to executable is /usr/sbin/haproxy
Jan 26 16:30:50 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[374396]: [WARNING]  (374405) : Exiting Master process...
Jan 26 16:30:50 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[374396]: [ALERT]    (374405) : Current worker (374407) exited with code 143 (Terminated)
Jan 26 16:30:50 compute-0 neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44[374396]: [WARNING]  (374405) : All workers exited. Exiting... (0)
Jan 26 16:30:50 compute-0 systemd[1]: libpod-2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6.scope: Deactivated successfully.
Jan 26 16:30:50 compute-0 podman[374484]: 2026-01-26 16:30:50.675940492 +0000 UTC m=+0.044228019 container died 2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6-userdata-shm.mount: Deactivated successfully.
Jan 26 16:30:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-671bd056345c84e4f894abbdbd49e22722163e8a6ee13b777997b55b292d34a9-merged.mount: Deactivated successfully.
Jan 26 16:30:50 compute-0 podman[374484]: 2026-01-26 16:30:50.708186606 +0000 UTC m=+0.076474133 container cleanup 2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:30:50 compute-0 systemd[1]: libpod-conmon-2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6.scope: Deactivated successfully.
Jan 26 16:30:50 compute-0 podman[374515]: 2026-01-26 16:30:50.764567923 +0000 UTC m=+0.037565776 container remove 2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.769 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[c577a442-e1a1-4d7a-9fcc-aaed7eae9752]: (4, ('Mon Jan 26 04:30:50 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 (2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6)\n2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6\nMon Jan 26 04:30:50 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 (2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6)\n2e4b1be030389fcb83f55eeffdb1f59f5a8b501e0985690a997388c4282c0fa6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.771 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0e92be2f-5a2f-4b0e-8cd1-582f4bb235d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.772 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68925d6a-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.773 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:50 compute-0 kernel: tap68925d6a-00: left promiscuous mode
Jan 26 16:30:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 715 KiB/s rd, 23 KiB/s wr, 48 op/s
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.794 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[971ed78d-a4d4-4a5e-b07d-e5c72e37c88a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.796 239969 INFO nova.virt.libvirt.driver [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Instance destroyed successfully.
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.797 239969 DEBUG nova.objects.instance [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lazy-loading 'resources' on Instance uuid 23444729-2c63-4c8b-a28c-04334c5b5949 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.806 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9a9223-d873-4bab-88e4-8306739311e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.807 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b455738e-8033-4223-9d95-d02885888e72]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.812 239969 DEBUG nova.virt.libvirt.vif [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2026-01-26T16:29:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-139030865',display_name='tempest-TestShelveInstance-server-139030865',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-139030865',id=150,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGR71t6y+HJ4qAoeBl0pCSSZsSHWE/LQDqzbI32DfEJP6X/ptIcGuyNSJStvWenZDTRQYi8gTT8yZ+rcYMA60xr8rqSoQ28CSiFxxFlkuGKRbU71KSafynyknZ+52zot4Q==',key_name='tempest-TestShelveInstance-779455889',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:30:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67c9291388b24d24a9689e4acf626bf2',ramdisk_id='',reservation_id='r-9na32hh0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-1058510424',owner_user_name='tempest-TestShelveInstance-1058510424-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:30:30Z,user_data=None,user_id='96f070e8d2fa469da6ec2db452db6f86',uuid=23444729-2c63-4c8b-a28c-04334c5b5949,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.813 239969 DEBUG nova.network.os_vif_util [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converting VIF {"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.201", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.813 239969 DEBUG nova.network.os_vif_util [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.814 239969 DEBUG os_vif [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.815 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.815 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71007552-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.816 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:50 compute-0 nova_compute[239965]: 2026-01-26 16:30:50.821 239969 INFO os_vif [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:01:8a,bridge_name='br-int',has_traffic_filtering=True,id=71007552-7ba9-4fe5-8e36-ee9ce30da37f,network=Network(68925d6a-01d0-4cb9-94a6-da3e5f3fac44),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71007552-7b')
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.822 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6e019db8-4cd6-48e7-838a-6c8b5c3aacee]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651874, 'reachable_time': 43078, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374543, 'error': None, 'target': 'ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:50 compute-0 systemd[1]: run-netns-ovnmeta\x2d68925d6a\x2d01d0\x2d4cb9\x2d94a6\x2dda3e5f3fac44.mount: Deactivated successfully.
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.826 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-68925d6a-01d0-4cb9-94a6-da3e5f3fac44 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:30:50 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:50.827 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[f420b90a-0cb1-4c51-847e-c3e96948b4f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:30:51 compute-0 nova_compute[239965]: 2026-01-26 16:30:51.101 239969 INFO nova.virt.libvirt.driver [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Deleting instance files /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949_del
Jan 26 16:30:51 compute-0 nova_compute[239965]: 2026-01-26 16:30:51.102 239969 INFO nova.virt.libvirt.driver [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Deletion of /var/lib/nova/instances/23444729-2c63-4c8b-a28c-04334c5b5949_del complete
Jan 26 16:30:51 compute-0 nova_compute[239965]: 2026-01-26 16:30:51.154 239969 INFO nova.compute.manager [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Took 0.79 seconds to destroy the instance on the hypervisor.
Jan 26 16:30:51 compute-0 nova_compute[239965]: 2026-01-26 16:30:51.155 239969 DEBUG oslo.service.loopingcall [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:30:51 compute-0 nova_compute[239965]: 2026-01-26 16:30:51.155 239969 DEBUG nova.compute.manager [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:30:51 compute-0 nova_compute[239965]: 2026-01-26 16:30:51.155 239969 DEBUG nova.network.neutron [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:30:51 compute-0 nova_compute[239965]: 2026-01-26 16:30:51.277 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.064 239969 DEBUG nova.network.neutron [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:52 compute-0 ceph-mon[75140]: pgmap v2421: 305 pgs: 305 active+clean; 169 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 715 KiB/s rd, 23 KiB/s wr, 48 op/s
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.311 239969 INFO nova.compute.manager [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Took 1.16 seconds to deallocate network for instance.
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.342 239969 DEBUG nova.compute.manager [req-9b05ee0b-5516-497e-a952-06802a593bb3 req-4a3e8247-7b6d-4de6-9b6d-9be136792c56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Received event network-vif-deleted-71007552-7ba9-4fe5-8e36-ee9ce30da37f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.342 239969 INFO nova.compute.manager [req-9b05ee0b-5516-497e-a952-06802a593bb3 req-4a3e8247-7b6d-4de6-9b6d-9be136792c56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Neutron deleted interface 71007552-7ba9-4fe5-8e36-ee9ce30da37f; detaching it from the instance and deleting it from the info cache
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.343 239969 DEBUG nova.network.neutron [req-9b05ee0b-5516-497e-a952-06802a593bb3 req-4a3e8247-7b6d-4de6-9b6d-9be136792c56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.385 239969 DEBUG nova.compute.manager [req-9b05ee0b-5516-497e-a952-06802a593bb3 req-4a3e8247-7b6d-4de6-9b6d-9be136792c56 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Detach interface failed, port_id=71007552-7ba9-4fe5-8e36-ee9ce30da37f, reason: Instance 23444729-2c63-4c8b-a28c-04334c5b5949 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.408 239969 DEBUG oslo_concurrency.lockutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.408 239969 DEBUG oslo_concurrency.lockutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.472 239969 DEBUG oslo_concurrency.processutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.528 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 124 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 762 KiB/s rd, 24 KiB/s wr, 67 op/s
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.835 239969 DEBUG nova.network.neutron [req-c1f20a17-ccb1-47bb-a1a4-d223bee8d727 req-d0b15b1f-f562-4518-800e-5f73fe44beb2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updated VIF entry in instance network info cache for port 71007552-7ba9-4fe5-8e36-ee9ce30da37f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.836 239969 DEBUG nova.network.neutron [req-c1f20a17-ccb1-47bb-a1a4-d223bee8d727 req-d0b15b1f-f562-4518-800e-5f73fe44beb2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Updating instance_info_cache with network_info: [{"id": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "address": "fa:16:3e:45:01:8a", "network": {"id": "68925d6a-01d0-4cb9-94a6-da3e5f3fac44", "bridge": "br-int", "label": "tempest-TestShelveInstance-272089643-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67c9291388b24d24a9689e4acf626bf2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71007552-7b", "ovs_interfaceid": "71007552-7ba9-4fe5-8e36-ee9ce30da37f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:30:52 compute-0 nova_compute[239965]: 2026-01-26 16:30:52.856 239969 DEBUG oslo_concurrency.lockutils [req-c1f20a17-ccb1-47bb-a1a4-d223bee8d727 req-d0b15b1f-f562-4518-800e-5f73fe44beb2 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-23444729-2c63-4c8b-a28c-04334c5b5949" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:30:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:30:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4105564419' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:53 compute-0 nova_compute[239965]: 2026-01-26 16:30:53.049 239969 DEBUG oslo_concurrency.processutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:30:53 compute-0 nova_compute[239965]: 2026-01-26 16:30:53.055 239969 DEBUG nova.compute.provider_tree [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:30:53 compute-0 nova_compute[239965]: 2026-01-26 16:30:53.073 239969 DEBUG nova.scheduler.client.report [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:30:53 compute-0 nova_compute[239965]: 2026-01-26 16:30:53.102 239969 DEBUG oslo_concurrency.lockutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:53 compute-0 nova_compute[239965]: 2026-01-26 16:30:53.131 239969 INFO nova.scheduler.client.report [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Deleted allocations for instance 23444729-2c63-4c8b-a28c-04334c5b5949
Jan 26 16:30:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4105564419' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:30:53 compute-0 nova_compute[239965]: 2026-01-26 16:30:53.264 239969 DEBUG oslo_concurrency.lockutils [None req-8f6c2ad6-b009-460b-8099-314d6a2f4d01 96f070e8d2fa469da6ec2db452db6f86 67c9291388b24d24a9689e4acf626bf2 - - default default] Lock "23444729-2c63-4c8b-a28c-04334c5b5949" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:30:54 compute-0 nova_compute[239965]: 2026-01-26 16:30:54.119 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:54 compute-0 ceph-mon[75140]: pgmap v2422: 305 pgs: 305 active+clean; 124 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 762 KiB/s rd, 24 KiB/s wr, 67 op/s
Jan 26 16:30:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 88 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 438 KiB/s rd, 12 KiB/s wr, 44 op/s
Jan 26 16:30:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:30:55 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:55.057 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:30:55 compute-0 nova_compute[239965]: 2026-01-26 16:30:55.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:56 compute-0 ceph-mon[75140]: pgmap v2423: 305 pgs: 305 active+clean; 88 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 438 KiB/s rd, 12 KiB/s wr, 44 op/s
Jan 26 16:30:56 compute-0 nova_compute[239965]: 2026-01-26 16:30:56.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:30:56 compute-0 nova_compute[239965]: 2026-01-26 16:30:56.653 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 11 KiB/s wr, 34 op/s
Jan 26 16:30:57 compute-0 nova_compute[239965]: 2026-01-26 16:30:57.561 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:57 compute-0 nova_compute[239965]: 2026-01-26 16:30:57.844 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:58 compute-0 ceph-mon[75140]: pgmap v2424: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 11 KiB/s wr, 34 op/s
Jan 26 16:30:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Jan 26 16:30:59 compute-0 nova_compute[239965]: 2026-01-26 16:30:59.120 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:30:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:59.253 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:30:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:59.254 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:30:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:30:59.254 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:00 compute-0 ceph-mon[75140]: pgmap v2425: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Jan 26 16:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:31:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Jan 26 16:31:00 compute-0 nova_compute[239965]: 2026-01-26 16:31:00.819 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:02 compute-0 ceph-mon[75140]: pgmap v2426: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Jan 26 16:31:02 compute-0 nova_compute[239965]: 2026-01-26 16:31:02.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:02 compute-0 nova_compute[239965]: 2026-01-26 16:31:02.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:31:02 compute-0 nova_compute[239965]: 2026-01-26 16:31:02.536 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:31:02 compute-0 nova_compute[239965]: 2026-01-26 16:31:02.536 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 26 16:31:04 compute-0 nova_compute[239965]: 2026-01-26 16:31:04.123 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:04 compute-0 ceph-mon[75140]: pgmap v2427: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Jan 26 16:31:04 compute-0 nova_compute[239965]: 2026-01-26 16:31:04.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:04 compute-0 nova_compute[239965]: 2026-01-26 16:31:04.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 511 B/s wr, 10 op/s
Jan 26 16:31:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:05 compute-0 nova_compute[239965]: 2026-01-26 16:31:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:05 compute-0 nova_compute[239965]: 2026-01-26 16:31:05.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:31:05 compute-0 nova_compute[239965]: 2026-01-26 16:31:05.795 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769445050.794117, 23444729-2c63-4c8b-a28c-04334c5b5949 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:31:05 compute-0 nova_compute[239965]: 2026-01-26 16:31:05.796 239969 INFO nova.compute.manager [-] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] VM Stopped (Lifecycle Event)
Jan 26 16:31:05 compute-0 nova_compute[239965]: 2026-01-26 16:31:05.820 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:05 compute-0 nova_compute[239965]: 2026-01-26 16:31:05.824 239969 DEBUG nova.compute.manager [None req-4230a3a1-ad37-4ca0-976b-7fda2d6594ce - - - - - -] [instance: 23444729-2c63-4c8b-a28c-04334c5b5949] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:31:06 compute-0 ceph-mon[75140]: pgmap v2428: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 511 B/s wr, 10 op/s
Jan 26 16:31:06 compute-0 nova_compute[239965]: 2026-01-26 16:31:06.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:31:08 compute-0 ceph-mon[75140]: pgmap v2429: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:31:08 compute-0 nova_compute[239965]: 2026-01-26 16:31:08.505 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquiring lock "ac212cb3-2160-4534-a35c-21bcfab037fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:08 compute-0 nova_compute[239965]: 2026-01-26 16:31:08.505 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:08 compute-0 nova_compute[239965]: 2026-01-26 16:31:08.522 239969 DEBUG nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:31:08 compute-0 nova_compute[239965]: 2026-01-26 16:31:08.602 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:08 compute-0 nova_compute[239965]: 2026-01-26 16:31:08.603 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:08 compute-0 nova_compute[239965]: 2026-01-26 16:31:08.614 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:31:08 compute-0 nova_compute[239965]: 2026-01-26 16:31:08.614 239969 INFO nova.compute.claims [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:31:08 compute-0 nova_compute[239965]: 2026-01-26 16:31:08.721 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:31:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4130028615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:31:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4130028615' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.375 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.654s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.383 239969 DEBUG nova.compute.provider_tree [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.407 239969 DEBUG nova.scheduler.client.report [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.433 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.830s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.434 239969 DEBUG nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.491 239969 DEBUG nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.492 239969 DEBUG nova.network.neutron [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.515 239969 INFO nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.535 239969 DEBUG nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.625 239969 DEBUG nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.627 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.628 239969 INFO nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Creating image(s)
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.658 239969 DEBUG nova.storage.rbd_utils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] rbd image ac212cb3-2160-4534-a35c-21bcfab037fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.680 239969 DEBUG nova.storage.rbd_utils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] rbd image ac212cb3-2160-4534-a35c-21bcfab037fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.704 239969 DEBUG nova.storage.rbd_utils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] rbd image ac212cb3-2160-4534-a35c-21bcfab037fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.707 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.780 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.781 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.782 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.782 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.807 239969 DEBUG nova.storage.rbd_utils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] rbd image ac212cb3-2160-4534-a35c-21bcfab037fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:31:09 compute-0 nova_compute[239965]: 2026-01-26 16:31:09.811 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 ac212cb3-2160-4534-a35c-21bcfab037fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.102 239969 DEBUG nova.policy [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5eddbe6e360b4be7ae448340e594cd57', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '28416a9085754ed09fd1131e4bb69e84', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.125 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 ac212cb3-2160-4534-a35c-21bcfab037fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.314s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.195 239969 DEBUG nova.storage.rbd_utils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] resizing rbd image ac212cb3-2160-4534-a35c-21bcfab037fd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:31:10 compute-0 sshd-session[374659]: Invalid user sol from 45.148.10.240 port 44454
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.273 239969 DEBUG nova.objects.instance [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lazy-loading 'migration_context' on Instance uuid ac212cb3-2160-4534-a35c-21bcfab037fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.289 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.290 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Ensure instance console log exists: /var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.290 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.291 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.291 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:10 compute-0 ceph-mon[75140]: pgmap v2430: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:31:10 compute-0 sshd-session[374659]: Connection closed by invalid user sol 45.148.10.240 port 44454 [preauth]
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.531 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.531 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.532 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.532 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.532 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:31:10 compute-0 nova_compute[239965]: 2026-01-26 16:31:10.822 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:31:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576008601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:31:11 compute-0 nova_compute[239965]: 2026-01-26 16:31:11.113 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:11 compute-0 nova_compute[239965]: 2026-01-26 16:31:11.336 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:31:11 compute-0 nova_compute[239965]: 2026-01-26 16:31:11.337 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3458MB free_disk=59.98721870966256GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:31:11 compute-0 nova_compute[239965]: 2026-01-26 16:31:11.338 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:11 compute-0 nova_compute[239965]: 2026-01-26 16:31:11.338 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/576008601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:31:11 compute-0 nova_compute[239965]: 2026-01-26 16:31:11.393 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Instance ac212cb3-2160-4534-a35c-21bcfab037fd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 26 16:31:11 compute-0 nova_compute[239965]: 2026-01-26 16:31:11.394 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:31:11 compute-0 nova_compute[239965]: 2026-01-26 16:31:11.394 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:31:11 compute-0 nova_compute[239965]: 2026-01-26 16:31:11.439 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:31:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4255384503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:31:12 compute-0 nova_compute[239965]: 2026-01-26 16:31:12.039 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:12 compute-0 nova_compute[239965]: 2026-01-26 16:31:12.045 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:31:12 compute-0 nova_compute[239965]: 2026-01-26 16:31:12.062 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:31:12 compute-0 nova_compute[239965]: 2026-01-26 16:31:12.093 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:31:12 compute-0 nova_compute[239965]: 2026-01-26 16:31:12.093 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:12 compute-0 nova_compute[239965]: 2026-01-26 16:31:12.141 239969 DEBUG nova.network.neutron [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Successfully created port: 9c3cd432-78a3-486a-99bf-b887e65dd8fb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 26 16:31:12 compute-0 ceph-mon[75140]: pgmap v2431: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:31:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4255384503' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:31:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Jan 26 16:31:13 compute-0 nova_compute[239965]: 2026-01-26 16:31:13.556 239969 DEBUG nova.network.neutron [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Successfully updated port: 9c3cd432-78a3-486a-99bf-b887e65dd8fb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 26 16:31:13 compute-0 nova_compute[239965]: 2026-01-26 16:31:13.580 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquiring lock "refresh_cache-ac212cb3-2160-4534-a35c-21bcfab037fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:31:13 compute-0 nova_compute[239965]: 2026-01-26 16:31:13.580 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquired lock "refresh_cache-ac212cb3-2160-4534-a35c-21bcfab037fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:31:13 compute-0 nova_compute[239965]: 2026-01-26 16:31:13.581 239969 DEBUG nova.network.neutron [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:31:13 compute-0 nova_compute[239965]: 2026-01-26 16:31:13.696 239969 DEBUG nova.compute.manager [req-3c3f52a9-bcce-4d6d-b615-f609f7870437 req-154ecd1d-78bd-486c-afea-d4a60d079d82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received event network-changed-9c3cd432-78a3-486a-99bf-b887e65dd8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:31:13 compute-0 nova_compute[239965]: 2026-01-26 16:31:13.696 239969 DEBUG nova.compute.manager [req-3c3f52a9-bcce-4d6d-b615-f609f7870437 req-154ecd1d-78bd-486c-afea-d4a60d079d82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Refreshing instance network info cache due to event network-changed-9c3cd432-78a3-486a-99bf-b887e65dd8fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:31:13 compute-0 nova_compute[239965]: 2026-01-26 16:31:13.697 239969 DEBUG oslo_concurrency.lockutils [req-3c3f52a9-bcce-4d6d-b615-f609f7870437 req-154ecd1d-78bd-486c-afea-d4a60d079d82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-ac212cb3-2160-4534-a35c-21bcfab037fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:31:13 compute-0 nova_compute[239965]: 2026-01-26 16:31:13.744 239969 DEBUG nova.network.neutron [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:31:14 compute-0 nova_compute[239965]: 2026-01-26 16:31:14.127 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:14 compute-0 ceph-mon[75140]: pgmap v2432: 305 pgs: 305 active+clean; 108 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.2 MiB/s wr, 24 op/s
Jan 26 16:31:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:31:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.075 239969 DEBUG nova.network.neutron [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Updating instance_info_cache with network_info: [{"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.092 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Releasing lock "refresh_cache-ac212cb3-2160-4534-a35c-21bcfab037fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.092 239969 DEBUG nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Instance network_info: |[{"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.092 239969 DEBUG oslo_concurrency.lockutils [req-3c3f52a9-bcce-4d6d-b615-f609f7870437 req-154ecd1d-78bd-486c-afea-d4a60d079d82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-ac212cb3-2160-4534-a35c-21bcfab037fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.092 239969 DEBUG nova.network.neutron [req-3c3f52a9-bcce-4d6d-b615-f609f7870437 req-154ecd1d-78bd-486c-afea-d4a60d079d82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Refreshing network info cache for port 9c3cd432-78a3-486a-99bf-b887e65dd8fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.095 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Start _get_guest_xml network_info=[{"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.101 239969 WARNING nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.106 239969 DEBUG nova.virt.libvirt.host [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.107 239969 DEBUG nova.virt.libvirt.host [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.114 239969 DEBUG nova.virt.libvirt.host [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.114 239969 DEBUG nova.virt.libvirt.host [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.115 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.115 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.116 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.116 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.116 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.116 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.117 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.117 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.117 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.118 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.118 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.118 239969 DEBUG nova.virt.hardware [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.123 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:15 compute-0 podman[374842]: 2026-01-26 16:31:15.366389356 +0000 UTC m=+0.060443748 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:31:15 compute-0 podman[374843]: 2026-01-26 16:31:15.427828468 +0000 UTC m=+0.108247935 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
tcib_managed=true, container_name=ovn_controller)
Jan 26 16:31:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:31:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3016750382' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.732 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.755 239969 DEBUG nova.storage.rbd_utils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] rbd image ac212cb3-2160-4534-a35c-21bcfab037fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.760 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:15 compute-0 nova_compute[239965]: 2026-01-26 16:31:15.824 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:31:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285859819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.348 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.350 239969 DEBUG nova.virt.libvirt.vif [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:31:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-856164022',display_name='tempest-TestServerBasicOps-server-856164022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-856164022',id=152,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3pbnj0sLvS5A2vfDeCLtyAVJWzkpG6iTn2ojAHV+wUBLFCJ7Zd7+5fWmtQD5To3l4t2bWX92m1dUg7pBiDUnNnLz7iHt7XlMOe0SEnUQs1SZRu1JyGgGgM56YjqqPgRw==',key_name='tempest-TestServerBasicOps-1871563773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28416a9085754ed09fd1131e4bb69e84',ramdisk_id='',reservation_id='r-7cxapjyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1398812118',owner_user_name='tempest-TestServerBasicOps-1398812118-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:31:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5eddbe6e360b4be7ae448340e594cd57',uuid=ac212cb3-2160-4534-a35c-21bcfab037fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.350 239969 DEBUG nova.network.os_vif_util [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Converting VIF {"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.351 239969 DEBUG nova.network.os_vif_util [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=9c3cd432-78a3-486a-99bf-b887e65dd8fb,network=Network(a06070a8-971b-42ea-82b9-a4dadcba86a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c3cd432-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.353 239969 DEBUG nova.objects.instance [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lazy-loading 'pci_devices' on Instance uuid ac212cb3-2160-4534-a35c-21bcfab037fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.377 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <uuid>ac212cb3-2160-4534-a35c-21bcfab037fd</uuid>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <name>instance-00000098</name>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <nova:name>tempest-TestServerBasicOps-server-856164022</nova:name>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:31:15</nova:creationTime>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <nova:user uuid="5eddbe6e360b4be7ae448340e594cd57">tempest-TestServerBasicOps-1398812118-project-member</nova:user>
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <nova:project uuid="28416a9085754ed09fd1131e4bb69e84">tempest-TestServerBasicOps-1398812118</nova:project>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <nova:ports>
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <nova:port uuid="9c3cd432-78a3-486a-99bf-b887e65dd8fb">
Jan 26 16:31:16 compute-0 nova_compute[239965]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:         </nova:port>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       </nova:ports>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <system>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <entry name="serial">ac212cb3-2160-4534-a35c-21bcfab037fd</entry>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <entry name="uuid">ac212cb3-2160-4534-a35c-21bcfab037fd</entry>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     </system>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <os>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   </os>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <features>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   </features>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/ac212cb3-2160-4534-a35c-21bcfab037fd_disk">
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       </source>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/ac212cb3-2160-4534-a35c-21bcfab037fd_disk.config">
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       </source>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:31:16 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <interface type="ethernet">
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <mac address="fa:16:3e:ce:c0:91"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <driver name="vhost" rx_queue_size="512"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <mtu size="1442"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <target dev="tap9c3cd432-78"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     </interface>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd/console.log" append="off"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <video>
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     </video>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:31:16 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:31:16 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:31:16 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:31:16 compute-0 nova_compute[239965]: </domain>
Jan 26 16:31:16 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.378 239969 DEBUG nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Preparing to wait for external event network-vif-plugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.379 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquiring lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.379 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.379 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.380 239969 DEBUG nova.virt.libvirt.vif [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:31:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-856164022',display_name='tempest-TestServerBasicOps-server-856164022',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-856164022',id=152,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3pbnj0sLvS5A2vfDeCLtyAVJWzkpG6iTn2ojAHV+wUBLFCJ7Zd7+5fWmtQD5To3l4t2bWX92m1dUg7pBiDUnNnLz7iHt7XlMOe0SEnUQs1SZRu1JyGgGgM56YjqqPgRw==',key_name='tempest-TestServerBasicOps-1871563773',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='28416a9085754ed09fd1131e4bb69e84',ramdisk_id='',reservation_id='r-7cxapjyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1398812118',owner_user_name='tempest-TestServerBasicOps-1398812118-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-26T16:31:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5eddbe6e360b4be7ae448340e594cd57',uuid=ac212cb3-2160-4534-a35c-21bcfab037fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.381 239969 DEBUG nova.network.os_vif_util [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Converting VIF {"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.381 239969 DEBUG nova.network.os_vif_util [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=9c3cd432-78a3-486a-99bf-b887e65dd8fb,network=Network(a06070a8-971b-42ea-82b9-a4dadcba86a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c3cd432-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.382 239969 DEBUG os_vif [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=9c3cd432-78a3-486a-99bf-b887e65dd8fb,network=Network(a06070a8-971b-42ea-82b9-a4dadcba86a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c3cd432-78') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.382 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.383 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.383 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.386 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.386 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9c3cd432-78, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.387 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9c3cd432-78, col_values=(('external_ids', {'iface-id': '9c3cd432-78a3-486a-99bf-b887e65dd8fb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:c0:91', 'vm-uuid': 'ac212cb3-2160-4534-a35c-21bcfab037fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.388 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:16 compute-0 NetworkManager[48954]: <info>  [1769445076.3896] manager: (tap9c3cd432-78): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.391 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:31:16 compute-0 ceph-mon[75140]: pgmap v2433: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:31:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3016750382' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:31:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3285859819' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.556 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.558 239969 INFO os_vif [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=9c3cd432-78a3-486a-99bf-b887e65dd8fb,network=Network(a06070a8-971b-42ea-82b9-a4dadcba86a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c3cd432-78')
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.652 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.653 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.654 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] No VIF found with MAC fa:16:3e:ce:c0:91, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.654 239969 INFO nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Using config drive
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.691 239969 DEBUG nova.storage.rbd_utils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] rbd image ac212cb3-2160-4534-a35c-21bcfab037fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:31:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.987 239969 INFO nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Creating config drive at /var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd/disk.config
Jan 26 16:31:16 compute-0 nova_compute[239965]: 2026-01-26 16:31:16.998 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww2gbm_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.054 239969 DEBUG nova.network.neutron [req-3c3f52a9-bcce-4d6d-b615-f609f7870437 req-154ecd1d-78bd-486c-afea-d4a60d079d82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Updated VIF entry in instance network info cache for port 9c3cd432-78a3-486a-99bf-b887e65dd8fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.056 239969 DEBUG nova.network.neutron [req-3c3f52a9-bcce-4d6d-b615-f609f7870437 req-154ecd1d-78bd-486c-afea-d4a60d079d82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Updating instance_info_cache with network_info: [{"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.081 239969 DEBUG oslo_concurrency.lockutils [req-3c3f52a9-bcce-4d6d-b615-f609f7870437 req-154ecd1d-78bd-486c-afea-d4a60d079d82 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-ac212cb3-2160-4534-a35c-21bcfab037fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.180 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww2gbm_o" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.212 239969 DEBUG nova.storage.rbd_utils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] rbd image ac212cb3-2160-4534-a35c-21bcfab037fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.218 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd/disk.config ac212cb3-2160-4534-a35c-21bcfab037fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.660 239969 DEBUG oslo_concurrency.processutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd/disk.config ac212cb3-2160-4534-a35c-21bcfab037fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.663 239969 INFO nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Deleting local config drive /var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd/disk.config because it was imported into RBD.
Jan 26 16:31:17 compute-0 kernel: tap9c3cd432-78: entered promiscuous mode
Jan 26 16:31:17 compute-0 NetworkManager[48954]: <info>  [1769445077.7211] manager: (tap9c3cd432-78): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.723 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:17 compute-0 ovn_controller[146046]: 2026-01-26T16:31:17Z|01638|binding|INFO|Claiming lport 9c3cd432-78a3-486a-99bf-b887e65dd8fb for this chassis.
Jan 26 16:31:17 compute-0 ovn_controller[146046]: 2026-01-26T16:31:17Z|01639|binding|INFO|9c3cd432-78a3-486a-99bf-b887e65dd8fb: Claiming fa:16:3e:ce:c0:91 10.100.0.5
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.731 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.741 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:c0:91 10.100.0.5'], port_security=['fa:16:3e:ce:c0:91 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ac212cb3-2160-4534-a35c-21bcfab037fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a06070a8-971b-42ea-82b9-a4dadcba86a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28416a9085754ed09fd1131e4bb69e84', 'neutron:revision_number': '2', 'neutron:security_group_ids': '944f3e70-1325-41b7-abc4-0693016fe099 ec6f1f93-0e58-4206-a4ff-6edef61884bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5f69434-7863-4622-9ed6-38e001f8727d, chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9c3cd432-78a3-486a-99bf-b887e65dd8fb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.743 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9c3cd432-78a3-486a-99bf-b887e65dd8fb in datapath a06070a8-971b-42ea-82b9-a4dadcba86a5 bound to our chassis
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.744 156105 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a06070a8-971b-42ea-82b9-a4dadcba86a5
Jan 26 16:31:17 compute-0 systemd-udevd[375001]: Network interface NamePolicy= disabled on kernel command line.
Jan 26 16:31:17 compute-0 systemd-machined[208061]: New machine qemu-186-instance-00000098.
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.758 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[db1710a9-9fb5-47f1-b289-7ad6da0e4b53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.759 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa06070a8-91 in ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.760 247577 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa06070a8-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.760 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[06e2d96b-20b8-4c42-a640-9f6c9d2c0bf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.761 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[1dc2dd50-c75f-4c82-b25d-6f53a9682fb7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 NetworkManager[48954]: <info>  [1769445077.7676] device (tap9c3cd432-78): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 26 16:31:17 compute-0 NetworkManager[48954]: <info>  [1769445077.7684] device (tap9c3cd432-78): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.773 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[f599b50c-6709-429f-ba3f-4d1d9ce7332f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.796 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[bf51aca3-ee59-41e1-b722-bfe8c5bf7386]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:17 compute-0 ovn_controller[146046]: 2026-01-26T16:31:17Z|01640|binding|INFO|Setting lport 9c3cd432-78a3-486a-99bf-b887e65dd8fb ovn-installed in OVS
Jan 26 16:31:17 compute-0 ovn_controller[146046]: 2026-01-26T16:31:17Z|01641|binding|INFO|Setting lport 9c3cd432-78a3-486a-99bf-b887e65dd8fb up in Southbound
Jan 26 16:31:17 compute-0 nova_compute[239965]: 2026-01-26 16:31:17.805 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.831 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[2f3f4b79-aeb7-4123-afe2-e868df22839d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 NetworkManager[48954]: <info>  [1769445077.8378] manager: (tapa06070a8-90): new Veth device (/org/freedesktop/NetworkManager/Devices/666)
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.836 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[6c37f6df-0368-4bd6-9bb3-044be8846f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.871 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ef8d75-8a93-4f7c-b142-f5e12b2da359]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.874 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[5247e068-d1e7-4449-b279-9431375d3ca7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 NetworkManager[48954]: <info>  [1769445077.8983] device (tapa06070a8-90): carrier: link connected
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.904 247633 DEBUG oslo.privsep.daemon [-] privsep: reply[abe990a2-c480-497b-9776-41133d4f3c5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.925 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[3ec2be05-6577-4134-8511-18fceb2b6553]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa06070a8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:45:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656870, 'reachable_time': 44504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375033, 'error': None, 'target': 'ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.944 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[2694e0ad-1135-4c9e-a06f-837cf3bb35cf]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe03:4592'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 656870, 'tstamp': 656870}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 375041, 'error': None, 'target': 'ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:17 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:17.967 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[b9245e68-f3df-4d65-9cde-2a8ea1f03682]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa06070a8-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:03:45:92'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656870, 'reachable_time': 44504, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 375050, 'error': None, 'target': 'ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.004 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5d1f091b-9053-4a58-a150-43a4f3506c4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.065 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[266b2cc4-e820-4d68-9e6b-c5f3061e493b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.067 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa06070a8-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.067 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.068 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa06070a8-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.070 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:18 compute-0 NetworkManager[48954]: <info>  [1769445078.0713] manager: (tapa06070a8-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/667)
Jan 26 16:31:18 compute-0 kernel: tapa06070a8-90: entered promiscuous mode
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.074 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa06070a8-90, col_values=(('external_ids', {'iface-id': 'd92b2e28-9b04-40cb-9473-8c92d8dd6623'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.074 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:18 compute-0 ovn_controller[146046]: 2026-01-26T16:31:18Z|01642|binding|INFO|Releasing lport d92b2e28-9b04-40cb-9473-8c92d8dd6623 from this chassis (sb_readonly=0)
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.097 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.098 156105 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a06070a8-971b-42ea-82b9-a4dadcba86a5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a06070a8-971b-42ea-82b9-a4dadcba86a5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.099 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[0280814a-74a8-45f1-b856-f645764ac455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.100 156105 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: global
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     log         /dev/log local0 debug
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     log-tag     haproxy-metadata-proxy-a06070a8-971b-42ea-82b9-a4dadcba86a5
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     user        root
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     group       root
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     maxconn     1024
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     pidfile     /var/lib/neutron/external/pids/a06070a8-971b-42ea-82b9-a4dadcba86a5.pid.haproxy
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     daemon
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: defaults
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     log global
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     mode http
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     option httplog
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     option dontlognull
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     option http-server-close
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     option forwardfor
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     retries                 3
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     timeout http-request    30s
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     timeout connect         30s
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     timeout client          32s
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     timeout server          32s
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     timeout http-keep-alive 30s
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: listen listener
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     bind 169.254.169.254:80
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     server metadata /var/lib/neutron/metadata_proxy
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:     http-request add-header X-OVN-Network-ID a06070a8-971b-42ea-82b9-a4dadcba86a5
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 26 16:31:18 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:18.101 156105 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5', 'env', 'PROCESS_TAG=haproxy-a06070a8-971b-42ea-82b9-a4dadcba86a5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a06070a8-971b-42ea-82b9-a4dadcba86a5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.217 239969 DEBUG nova.compute.manager [req-a39b4ccf-0730-44da-9ab8-d153072edaa3 req-fc495dae-9159-4c2d-b4f4-907a912006fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received event network-vif-plugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.218 239969 DEBUG oslo_concurrency.lockutils [req-a39b4ccf-0730-44da-9ab8-d153072edaa3 req-fc495dae-9159-4c2d-b4f4-907a912006fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.218 239969 DEBUG oslo_concurrency.lockutils [req-a39b4ccf-0730-44da-9ab8-d153072edaa3 req-fc495dae-9159-4c2d-b4f4-907a912006fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.218 239969 DEBUG oslo_concurrency.lockutils [req-a39b4ccf-0730-44da-9ab8-d153072edaa3 req-fc495dae-9159-4c2d-b4f4-907a912006fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.218 239969 DEBUG nova.compute.manager [req-a39b4ccf-0730-44da-9ab8-d153072edaa3 req-fc495dae-9159-4c2d-b4f4-907a912006fe a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Processing event network-vif-plugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.231 239969 DEBUG nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.232 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445078.232018, ac212cb3-2160-4534-a35c-21bcfab037fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.232 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] VM Started (Lifecycle Event)
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.235 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.240 239969 INFO nova.virt.libvirt.driver [-] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Instance spawned successfully.
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.240 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.254 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.257 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.266 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.267 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.267 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.268 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.268 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.269 239969 DEBUG nova.virt.libvirt.driver [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.278 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.278 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445078.2327678, ac212cb3-2160-4534-a35c-21bcfab037fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.279 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] VM Paused (Lifecycle Event)
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.311 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.314 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445078.2351785, ac212cb3-2160-4534-a35c-21bcfab037fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.314 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] VM Resumed (Lifecycle Event)
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.335 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.339 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.343 239969 INFO nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Took 8.72 seconds to spawn the instance on the hypervisor.
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.344 239969 DEBUG nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.356 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.398 239969 INFO nova.compute.manager [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Took 9.83 seconds to build instance.
Jan 26 16:31:18 compute-0 nova_compute[239965]: 2026-01-26 16:31:18.412 239969 DEBUG oslo_concurrency.lockutils [None req-9b726cc6-f02e-4d8a-9e35-1eff8402acf6 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:18 compute-0 podman[375109]: 2026-01-26 16:31:18.502246701 +0000 UTC m=+0.028735388 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 26 16:31:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 16:31:19 compute-0 ceph-mon[75140]: pgmap v2434: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Jan 26 16:31:19 compute-0 nova_compute[239965]: 2026-01-26 16:31:19.130 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:19 compute-0 podman[375109]: 2026-01-26 16:31:19.148299319 +0000 UTC m=+0.674787986 container create 7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Jan 26 16:31:19 compute-0 systemd[1]: Started libpod-conmon-7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258.scope.
Jan 26 16:31:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b7c4f92cf52633451301bc0cf838c36d7f33c7c28087fcc2a190b027559b23/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:19 compute-0 podman[375109]: 2026-01-26 16:31:19.772385376 +0000 UTC m=+1.298874063 container init 7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:31:19 compute-0 podman[375109]: 2026-01-26 16:31:19.779027739 +0000 UTC m=+1.305516456 container start 7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:31:19 compute-0 neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5[375124]: [NOTICE]   (375128) : New worker (375130) forked
Jan 26 16:31:19 compute-0 neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5[375124]: [NOTICE]   (375128) : Loading success.
Jan 26 16:31:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:20 compute-0 ceph-mon[75140]: pgmap v2435: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Jan 26 16:31:20 compute-0 nova_compute[239965]: 2026-01-26 16:31:20.781 239969 DEBUG nova.compute.manager [req-eff01e58-08f6-45cd-a463-575c8e143157 req-56d65022-2f7b-4e52-9875-9a3d53240426 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received event network-vif-plugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:31:20 compute-0 nova_compute[239965]: 2026-01-26 16:31:20.782 239969 DEBUG oslo_concurrency.lockutils [req-eff01e58-08f6-45cd-a463-575c8e143157 req-56d65022-2f7b-4e52-9875-9a3d53240426 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:20 compute-0 nova_compute[239965]: 2026-01-26 16:31:20.782 239969 DEBUG oslo_concurrency.lockutils [req-eff01e58-08f6-45cd-a463-575c8e143157 req-56d65022-2f7b-4e52-9875-9a3d53240426 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:20 compute-0 nova_compute[239965]: 2026-01-26 16:31:20.783 239969 DEBUG oslo_concurrency.lockutils [req-eff01e58-08f6-45cd-a463-575c8e143157 req-56d65022-2f7b-4e52-9875-9a3d53240426 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:20 compute-0 nova_compute[239965]: 2026-01-26 16:31:20.783 239969 DEBUG nova.compute.manager [req-eff01e58-08f6-45cd-a463-575c8e143157 req-56d65022-2f7b-4e52-9875-9a3d53240426 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] No waiting events found dispatching network-vif-plugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:31:20 compute-0 nova_compute[239965]: 2026-01-26 16:31:20.783 239969 WARNING nova.compute.manager [req-eff01e58-08f6-45cd-a463-575c8e143157 req-56d65022-2f7b-4e52-9875-9a3d53240426 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received unexpected event network-vif-plugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb for instance with vm_state active and task_state None.
Jan 26 16:31:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 727 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 26 16:31:21 compute-0 nova_compute[239965]: 2026-01-26 16:31:21.391 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:21 compute-0 nova_compute[239965]: 2026-01-26 16:31:21.783 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:21 compute-0 NetworkManager[48954]: <info>  [1769445081.7861] manager: (patch-provnet-c188e9cb-f042-4443-b9e7-96f370b24037-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Jan 26 16:31:21 compute-0 NetworkManager[48954]: <info>  [1769445081.7872] manager: (patch-br-int-to-provnet-c188e9cb-f042-4443-b9e7-96f370b24037): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Jan 26 16:31:21 compute-0 nova_compute[239965]: 2026-01-26 16:31:21.858 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:21 compute-0 ovn_controller[146046]: 2026-01-26T16:31:21Z|01643|binding|INFO|Releasing lport d92b2e28-9b04-40cb-9473-8c92d8dd6623 from this chassis (sb_readonly=0)
Jan 26 16:31:21 compute-0 nova_compute[239965]: 2026-01-26 16:31:21.868 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:22 compute-0 nova_compute[239965]: 2026-01-26 16:31:22.095 239969 DEBUG nova.compute.manager [req-d5adf87e-58cc-4bf6-bd2d-38b6f26d31ad req-792dd214-87aa-4064-97dc-2b59cfa7cad4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received event network-changed-9c3cd432-78a3-486a-99bf-b887e65dd8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:31:22 compute-0 nova_compute[239965]: 2026-01-26 16:31:22.096 239969 DEBUG nova.compute.manager [req-d5adf87e-58cc-4bf6-bd2d-38b6f26d31ad req-792dd214-87aa-4064-97dc-2b59cfa7cad4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Refreshing instance network info cache due to event network-changed-9c3cd432-78a3-486a-99bf-b887e65dd8fb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 26 16:31:22 compute-0 nova_compute[239965]: 2026-01-26 16:31:22.096 239969 DEBUG oslo_concurrency.lockutils [req-d5adf87e-58cc-4bf6-bd2d-38b6f26d31ad req-792dd214-87aa-4064-97dc-2b59cfa7cad4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "refresh_cache-ac212cb3-2160-4534-a35c-21bcfab037fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:31:22 compute-0 nova_compute[239965]: 2026-01-26 16:31:22.096 239969 DEBUG oslo_concurrency.lockutils [req-d5adf87e-58cc-4bf6-bd2d-38b6f26d31ad req-792dd214-87aa-4064-97dc-2b59cfa7cad4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquired lock "refresh_cache-ac212cb3-2160-4534-a35c-21bcfab037fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:31:22 compute-0 nova_compute[239965]: 2026-01-26 16:31:22.096 239969 DEBUG nova.network.neutron [req-d5adf87e-58cc-4bf6-bd2d-38b6f26d31ad req-792dd214-87aa-4064-97dc-2b59cfa7cad4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Refreshing network info cache for port 9c3cd432-78a3-486a-99bf-b887e65dd8fb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 26 16:31:22 compute-0 ceph-mon[75140]: pgmap v2436: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 727 KiB/s rd, 1.8 MiB/s wr, 53 op/s
Jan 26 16:31:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 16:31:23 compute-0 nova_compute[239965]: 2026-01-26 16:31:23.409 239969 DEBUG nova.network.neutron [req-d5adf87e-58cc-4bf6-bd2d-38b6f26d31ad req-792dd214-87aa-4064-97dc-2b59cfa7cad4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Updated VIF entry in instance network info cache for port 9c3cd432-78a3-486a-99bf-b887e65dd8fb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 26 16:31:23 compute-0 nova_compute[239965]: 2026-01-26 16:31:23.409 239969 DEBUG nova.network.neutron [req-d5adf87e-58cc-4bf6-bd2d-38b6f26d31ad req-792dd214-87aa-4064-97dc-2b59cfa7cad4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Updating instance_info_cache with network_info: [{"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:31:23 compute-0 nova_compute[239965]: 2026-01-26 16:31:23.437 239969 DEBUG oslo_concurrency.lockutils [req-d5adf87e-58cc-4bf6-bd2d-38b6f26d31ad req-792dd214-87aa-4064-97dc-2b59cfa7cad4 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Releasing lock "refresh_cache-ac212cb3-2160-4534-a35c-21bcfab037fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:31:24 compute-0 nova_compute[239965]: 2026-01-26 16:31:24.132 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:31:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.1 total, 600.0 interval
                                           Cumulative writes: 44K writes, 181K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 44K writes, 16K syncs, 2.77 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7892 writes, 33K keys, 7892 commit groups, 1.0 writes per commit group, ingest: 36.40 MB, 0.06 MB/s
                                           Interval WAL: 7892 writes, 3094 syncs, 2.55 writes per sync, written: 0.04 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:31:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 638 KiB/s wr, 76 op/s
Jan 26 16:31:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:25 compute-0 ceph-mon[75140]: pgmap v2437: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Jan 26 16:31:26 compute-0 ceph-mon[75140]: pgmap v2438: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 638 KiB/s wr, 76 op/s
Jan 26 16:31:26 compute-0 nova_compute[239965]: 2026-01-26 16:31:26.396 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 26 16:31:26 compute-0 sudo[375140]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:31:26 compute-0 sudo[375140]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:31:26 compute-0 sudo[375140]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:27 compute-0 sudo[375165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:31:27 compute-0 sudo[375165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:31:27 compute-0 sudo[375165]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:31:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:31:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:31:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:31:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:31:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:31:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:31:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:31:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:31:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:31:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:31:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:31:27 compute-0 sudo[375222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:31:27 compute-0 sudo[375222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:31:27 compute-0 sudo[375222]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:27 compute-0 sudo[375247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:31:27 compute-0 sudo[375247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:31:28 compute-0 podman[375282]: 2026-01-26 16:31:28.104407035 +0000 UTC m=+0.053975908 container create 4c27730ebe620badd58f2f241bc5c6326eaa692b42cf24280d126522de1e3142 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:31:28 compute-0 systemd[1]: Started libpod-conmon-4c27730ebe620badd58f2f241bc5c6326eaa692b42cf24280d126522de1e3142.scope.
Jan 26 16:31:28 compute-0 podman[375282]: 2026-01-26 16:31:28.083787219 +0000 UTC m=+0.033356112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:31:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:28 compute-0 podman[375282]: 2026-01-26 16:31:28.199200288 +0000 UTC m=+0.148769181 container init 4c27730ebe620badd58f2f241bc5c6326eaa692b42cf24280d126522de1e3142 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:31:28 compute-0 podman[375282]: 2026-01-26 16:31:28.208641551 +0000 UTC m=+0.158210414 container start 4c27730ebe620badd58f2f241bc5c6326eaa692b42cf24280d126522de1e3142 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:31:28 compute-0 podman[375282]: 2026-01-26 16:31:28.212389503 +0000 UTC m=+0.161958406 container attach 4c27730ebe620badd58f2f241bc5c6326eaa692b42cf24280d126522de1e3142 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:31:28 compute-0 wonderful_mclaren[375299]: 167 167
Jan 26 16:31:28 compute-0 systemd[1]: libpod-4c27730ebe620badd58f2f241bc5c6326eaa692b42cf24280d126522de1e3142.scope: Deactivated successfully.
Jan 26 16:31:28 compute-0 podman[375282]: 2026-01-26 16:31:28.21634891 +0000 UTC m=+0.165917783 container died 4c27730ebe620badd58f2f241bc5c6326eaa692b42cf24280d126522de1e3142 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:31:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8fdc22d30bf37ad464b2aa55810017ed6711f8b12ace313d7dcb95d2619a7a5-merged.mount: Deactivated successfully.
Jan 26 16:31:28 compute-0 podman[375282]: 2026-01-26 16:31:28.263203523 +0000 UTC m=+0.212772396 container remove 4c27730ebe620badd58f2f241bc5c6326eaa692b42cf24280d126522de1e3142 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_mclaren, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:31:28 compute-0 ceph-mon[75140]: pgmap v2439: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 26 16:31:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:31:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:31:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:31:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:31:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:31:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:31:28 compute-0 systemd[1]: libpod-conmon-4c27730ebe620badd58f2f241bc5c6326eaa692b42cf24280d126522de1e3142.scope: Deactivated successfully.
Jan 26 16:31:28 compute-0 podman[375323]: 2026-01-26 16:31:28.490947668 +0000 UTC m=+0.082991164 container create 7c74ff1c409639dd69bd8645b7f9d29d6d14d9e18b1a7467787ff44346514cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 16:31:28 compute-0 podman[375323]: 2026-01-26 16:31:28.439671776 +0000 UTC m=+0.031715292 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:31:28 compute-0 systemd[1]: Started libpod-conmon-7c74ff1c409639dd69bd8645b7f9d29d6d14d9e18b1a7467787ff44346514cbb.scope.
Jan 26 16:31:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7394a22d93fb6e4b91bf530e7b532109ff1b8b1266cfdd507af5f96ae2516/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7394a22d93fb6e4b91bf530e7b532109ff1b8b1266cfdd507af5f96ae2516/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7394a22d93fb6e4b91bf530e7b532109ff1b8b1266cfdd507af5f96ae2516/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7394a22d93fb6e4b91bf530e7b532109ff1b8b1266cfdd507af5f96ae2516/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0a7394a22d93fb6e4b91bf530e7b532109ff1b8b1266cfdd507af5f96ae2516/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:28 compute-0 podman[375323]: 2026-01-26 16:31:28.62310275 +0000 UTC m=+0.215146296 container init 7c74ff1c409639dd69bd8645b7f9d29d6d14d9e18b1a7467787ff44346514cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:31:28 compute-0 podman[375323]: 2026-01-26 16:31:28.636753586 +0000 UTC m=+0.228797092 container start 7c74ff1c409639dd69bd8645b7f9d29d6d14d9e18b1a7467787ff44346514cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:31:28 compute-0 podman[375323]: 2026-01-26 16:31:28.641166024 +0000 UTC m=+0.233209540 container attach 7c74ff1c409639dd69bd8645b7f9d29d6d14d9e18b1a7467787ff44346514cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:31:28
Jan 26 16:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'default.rgw.log', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.control']
Jan 26 16:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:31:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 26 16:31:29 compute-0 intelligent_snyder[375339]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:31:29 compute-0 intelligent_snyder[375339]: --> All data devices are unavailable
Jan 26 16:31:29 compute-0 systemd[1]: libpod-7c74ff1c409639dd69bd8645b7f9d29d6d14d9e18b1a7467787ff44346514cbb.scope: Deactivated successfully.
Jan 26 16:31:29 compute-0 podman[375323]: 2026-01-26 16:31:29.12374774 +0000 UTC m=+0.715791236 container died 7c74ff1c409639dd69bd8645b7f9d29d6d14d9e18b1a7467787ff44346514cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:31:29 compute-0 nova_compute[239965]: 2026-01-26 16:31:29.133 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0a7394a22d93fb6e4b91bf530e7b532109ff1b8b1266cfdd507af5f96ae2516-merged.mount: Deactivated successfully.
Jan 26 16:31:29 compute-0 podman[375323]: 2026-01-26 16:31:29.166731217 +0000 UTC m=+0.758774713 container remove 7c74ff1c409639dd69bd8645b7f9d29d6d14d9e18b1a7467787ff44346514cbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:31:29 compute-0 systemd[1]: libpod-conmon-7c74ff1c409639dd69bd8645b7f9d29d6d14d9e18b1a7467787ff44346514cbb.scope: Deactivated successfully.
Jan 26 16:31:29 compute-0 sudo[375247]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:29 compute-0 sudo[375372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:31:29 compute-0 sudo[375372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:31:29 compute-0 sudo[375372]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:29 compute-0 sudo[375397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:31:29 compute-0 sudo[375397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:31:29 compute-0 podman[375432]: 2026-01-26 16:31:29.660387735 +0000 UTC m=+0.069567624 container create c874bdd94c7d9a6c6f1dc3c6749b245ee977bedce3ca3cd834fcbf2f1a2f5058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 16:31:29 compute-0 systemd[1]: Started libpod-conmon-c874bdd94c7d9a6c6f1dc3c6749b245ee977bedce3ca3cd834fcbf2f1a2f5058.scope.
Jan 26 16:31:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:31:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.4 total, 600.0 interval
                                           Cumulative writes: 48K writes, 188K keys, 48K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 48K writes, 17K syncs, 2.70 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7883 writes, 33K keys, 7883 commit groups, 1.0 writes per commit group, ingest: 35.32 MB, 0.06 MB/s
                                           Interval WAL: 7883 writes, 3124 syncs, 2.52 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:31:29 compute-0 podman[375432]: 2026-01-26 16:31:29.631553205 +0000 UTC m=+0.040733184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:31:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:29 compute-0 podman[375432]: 2026-01-26 16:31:29.750191454 +0000 UTC m=+0.159371373 container init c874bdd94c7d9a6c6f1dc3c6749b245ee977bedce3ca3cd834fcbf2f1a2f5058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:31:29 compute-0 podman[375432]: 2026-01-26 16:31:29.758773175 +0000 UTC m=+0.167953084 container start c874bdd94c7d9a6c6f1dc3c6749b245ee977bedce3ca3cd834fcbf2f1a2f5058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:31:29 compute-0 vigilant_rosalind[375448]: 167 167
Jan 26 16:31:29 compute-0 systemd[1]: libpod-c874bdd94c7d9a6c6f1dc3c6749b245ee977bedce3ca3cd834fcbf2f1a2f5058.scope: Deactivated successfully.
Jan 26 16:31:29 compute-0 podman[375432]: 2026-01-26 16:31:29.771187661 +0000 UTC m=+0.180367560 container attach c874bdd94c7d9a6c6f1dc3c6749b245ee977bedce3ca3cd834fcbf2f1a2f5058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:31:29 compute-0 podman[375432]: 2026-01-26 16:31:29.771592711 +0000 UTC m=+0.180772600 container died c874bdd94c7d9a6c6f1dc3c6749b245ee977bedce3ca3cd834fcbf2f1a2f5058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 16:31:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1fd9617a25e7ad5e5aec4771d53b208f5d52b23a24046a8c637fd2af972e4ac0-merged.mount: Deactivated successfully.
Jan 26 16:31:29 compute-0 podman[375432]: 2026-01-26 16:31:29.841372728 +0000 UTC m=+0.250552657 container remove c874bdd94c7d9a6c6f1dc3c6749b245ee977bedce3ca3cd834fcbf2f1a2f5058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_rosalind, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 16:31:29 compute-0 systemd[1]: libpod-conmon-c874bdd94c7d9a6c6f1dc3c6749b245ee977bedce3ca3cd834fcbf2f1a2f5058.scope: Deactivated successfully.
Jan 26 16:31:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:30 compute-0 podman[375472]: 2026-01-26 16:31:30.072796682 +0000 UTC m=+0.065714288 container create 582c1249dea3c889e60fd8115062c548230c0dcc950be0bf48afb7aac5d1fe52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lumiere, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:31:30 compute-0 systemd[1]: Started libpod-conmon-582c1249dea3c889e60fd8115062c548230c0dcc950be0bf48afb7aac5d1fe52.scope.
Jan 26 16:31:30 compute-0 podman[375472]: 2026-01-26 16:31:30.047605913 +0000 UTC m=+0.040523539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:31:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8fe690ec5dab57cb5dd6b868e50db5804a515410bda347756ecd77bd2889fbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8fe690ec5dab57cb5dd6b868e50db5804a515410bda347756ecd77bd2889fbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8fe690ec5dab57cb5dd6b868e50db5804a515410bda347756ecd77bd2889fbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8fe690ec5dab57cb5dd6b868e50db5804a515410bda347756ecd77bd2889fbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:30 compute-0 podman[375472]: 2026-01-26 16:31:30.190630232 +0000 UTC m=+0.183547878 container init 582c1249dea3c889e60fd8115062c548230c0dcc950be0bf48afb7aac5d1fe52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:31:30 compute-0 podman[375472]: 2026-01-26 16:31:30.201003078 +0000 UTC m=+0.193920684 container start 582c1249dea3c889e60fd8115062c548230c0dcc950be0bf48afb7aac5d1fe52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lumiere, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 16:31:30 compute-0 podman[375472]: 2026-01-26 16:31:30.205076808 +0000 UTC m=+0.197994434 container attach 582c1249dea3c889e60fd8115062c548230c0dcc950be0bf48afb7aac5d1fe52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lumiere, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:31:30 compute-0 ceph-mon[75140]: pgmap v2440: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:31:30 compute-0 keen_lumiere[375488]: {
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:     "0": [
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:         {
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "devices": [
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "/dev/loop3"
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             ],
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_name": "ceph_lv0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_size": "21470642176",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "name": "ceph_lv0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "tags": {
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.cluster_name": "ceph",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.crush_device_class": "",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.encrypted": "0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.objectstore": "bluestore",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.osd_id": "0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.type": "block",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.vdo": "0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.with_tpm": "0"
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             },
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "type": "block",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "vg_name": "ceph_vg0"
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:         }
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:     ],
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:     "1": [
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:         {
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "devices": [
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "/dev/loop4"
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             ],
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_name": "ceph_lv1",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_size": "21470642176",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "name": "ceph_lv1",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "tags": {
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.cluster_name": "ceph",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.crush_device_class": "",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.encrypted": "0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.objectstore": "bluestore",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.osd_id": "1",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.type": "block",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.vdo": "0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.with_tpm": "0"
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             },
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "type": "block",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "vg_name": "ceph_vg1"
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:         }
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:     ],
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:     "2": [
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:         {
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "devices": [
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "/dev/loop5"
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             ],
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_name": "ceph_lv2",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_size": "21470642176",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "name": "ceph_lv2",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "tags": {
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.cluster_name": "ceph",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.crush_device_class": "",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.encrypted": "0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.objectstore": "bluestore",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.osd_id": "2",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.type": "block",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.vdo": "0",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:                 "ceph.with_tpm": "0"
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             },
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "type": "block",
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:             "vg_name": "ceph_vg2"
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:         }
Jan 26 16:31:30 compute-0 keen_lumiere[375488]:     ]
Jan 26 16:31:30 compute-0 keen_lumiere[375488]: }
Jan 26 16:31:30 compute-0 systemd[1]: libpod-582c1249dea3c889e60fd8115062c548230c0dcc950be0bf48afb7aac5d1fe52.scope: Deactivated successfully.
Jan 26 16:31:30 compute-0 podman[375472]: 2026-01-26 16:31:30.534024283 +0000 UTC m=+0.526941909 container died 582c1249dea3c889e60fd8115062c548230c0dcc950be0bf48afb7aac5d1fe52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lumiere, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:31:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8fe690ec5dab57cb5dd6b868e50db5804a515410bda347756ecd77bd2889fbb-merged.mount: Deactivated successfully.
Jan 26 16:31:30 compute-0 podman[375472]: 2026-01-26 16:31:30.584114245 +0000 UTC m=+0.577031831 container remove 582c1249dea3c889e60fd8115062c548230c0dcc950be0bf48afb7aac5d1fe52 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_lumiere, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:31:30 compute-0 systemd[1]: libpod-conmon-582c1249dea3c889e60fd8115062c548230c0dcc950be0bf48afb7aac5d1fe52.scope: Deactivated successfully.
Jan 26 16:31:30 compute-0 sudo[375397]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:30 compute-0 sudo[375510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:31:30 compute-0 sudo[375510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:31:30 compute-0 sudo[375510]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:30 compute-0 sudo[375535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:31:30 compute-0 sudo[375535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:31:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:31:31 compute-0 podman[375571]: 2026-01-26 16:31:31.053310071 +0000 UTC m=+0.027807255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:31:31 compute-0 podman[375571]: 2026-01-26 16:31:31.272245018 +0000 UTC m=+0.246742142 container create ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 16:31:31 compute-0 systemd[1]: Started libpod-conmon-ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322.scope.
Jan 26 16:31:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:31 compute-0 nova_compute[239965]: 2026-01-26 16:31:31.400 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:31 compute-0 podman[375571]: 2026-01-26 16:31:31.407511577 +0000 UTC m=+0.382008721 container init ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:31:31 compute-0 podman[375571]: 2026-01-26 16:31:31.414815527 +0000 UTC m=+0.389312691 container start ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:31:31 compute-0 systemd[1]: libpod-ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322.scope: Deactivated successfully.
Jan 26 16:31:31 compute-0 quirky_pare[375587]: 167 167
Jan 26 16:31:31 compute-0 podman[375571]: 2026-01-26 16:31:31.419147423 +0000 UTC m=+0.393644557 container attach ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:31:31 compute-0 conmon[375587]: conmon ebd114e8ee5fb547cbaa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322.scope/container/memory.events
Jan 26 16:31:31 compute-0 podman[375571]: 2026-01-26 16:31:31.419651265 +0000 UTC m=+0.394148379 container died ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 16:31:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc4c77e23b1c18663cb59c67460c2f408201302b04e24c343058e6c910734ca4-merged.mount: Deactivated successfully.
Jan 26 16:31:31 compute-0 podman[375571]: 2026-01-26 16:31:31.451672843 +0000 UTC m=+0.426169967 container remove ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_pare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:31:31 compute-0 systemd[1]: libpod-conmon-ebd114e8ee5fb547cbaa7f25fadfa936303abdf49e3a11f34f8e608f8fd0d322.scope: Deactivated successfully.
Jan 26 16:31:31 compute-0 podman[375611]: 2026-01-26 16:31:31.627266285 +0000 UTC m=+0.043538644 container create 0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:31:31 compute-0 systemd[1]: Started libpod-conmon-0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e.scope.
Jan 26 16:31:31 compute-0 ovn_controller[146046]: 2026-01-26T16:31:31Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ce:c0:91 10.100.0.5
Jan 26 16:31:31 compute-0 ovn_controller[146046]: 2026-01-26T16:31:31Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ce:c0:91 10.100.0.5
Jan 26 16:31:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995e2699fc2218736abbf5dc8e0bcc6f4a868b87f75fb71783d88d5af62b2f19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995e2699fc2218736abbf5dc8e0bcc6f4a868b87f75fb71783d88d5af62b2f19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995e2699fc2218736abbf5dc8e0bcc6f4a868b87f75fb71783d88d5af62b2f19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/995e2699fc2218736abbf5dc8e0bcc6f4a868b87f75fb71783d88d5af62b2f19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:31:31 compute-0 podman[375611]: 2026-01-26 16:31:31.608591155 +0000 UTC m=+0.024863544 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:31:31 compute-0 podman[375611]: 2026-01-26 16:31:31.715769312 +0000 UTC m=+0.132041691 container init 0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_khorana, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:31:31 compute-0 podman[375611]: 2026-01-26 16:31:31.721819931 +0000 UTC m=+0.138092290 container start 0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_khorana, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:31:31 compute-0 podman[375611]: 2026-01-26 16:31:31.724758503 +0000 UTC m=+0.141030892 container attach 0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_khorana, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:31:32 compute-0 ceph-mon[75140]: pgmap v2441: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 72 op/s
Jan 26 16:31:32 compute-0 lvm[375706]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:31:32 compute-0 lvm[375706]: VG ceph_vg0 finished
Jan 26 16:31:32 compute-0 lvm[375707]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:31:32 compute-0 lvm[375707]: VG ceph_vg1 finished
Jan 26 16:31:32 compute-0 lvm[375709]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:31:32 compute-0 lvm[375709]: VG ceph_vg2 finished
Jan 26 16:31:32 compute-0 relaxed_khorana[375628]: {}
Jan 26 16:31:32 compute-0 systemd[1]: libpod-0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e.scope: Deactivated successfully.
Jan 26 16:31:32 compute-0 systemd[1]: libpod-0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e.scope: Consumed 1.278s CPU time.
Jan 26 16:31:32 compute-0 podman[375611]: 2026-01-26 16:31:32.54874597 +0000 UTC m=+0.965018339 container died 0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_khorana, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 16:31:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-995e2699fc2218736abbf5dc8e0bcc6f4a868b87f75fb71783d88d5af62b2f19-merged.mount: Deactivated successfully.
Jan 26 16:31:32 compute-0 podman[375611]: 2026-01-26 16:31:32.598583026 +0000 UTC m=+1.014855385 container remove 0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:31:32 compute-0 systemd[1]: libpod-conmon-0d171b2cbe249d1eeae8c1d31e2bb631e475a9318f0b24266415d5f5bdd4601e.scope: Deactivated successfully.
Jan 26 16:31:32 compute-0 sudo[375535]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:31:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:31:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:31:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:31:32 compute-0 sudo[375725]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:31:32 compute-0 sudo[375725]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:31:32 compute-0 sudo[375725]: pam_unix(sudo:session): session closed for user root
Jan 26 16:31:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 150 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 725 KiB/s wr, 79 op/s
Jan 26 16:31:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:31:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:31:34 compute-0 nova_compute[239965]: 2026-01-26 16:31:34.135 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:31:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4200.2 total, 600.0 interval
                                           Cumulative writes: 37K writes, 146K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.04 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.74 writes per sync, written: 0.15 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 7054 writes, 28K keys, 7054 commit groups, 1.0 writes per commit group, ingest: 35.15 MB, 0.06 MB/s
                                           Interval WAL: 7054 writes, 2745 syncs, 2.57 writes per sync, written: 0.03 GB, 0.06 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:31:34 compute-0 ceph-mon[75140]: pgmap v2442: 305 pgs: 305 active+clean; 150 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 725 KiB/s wr, 79 op/s
Jan 26 16:31:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 165 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 16:31:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:36 compute-0 nova_compute[239965]: 2026-01-26 16:31:36.405 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:36 compute-0 ceph-mon[75140]: pgmap v2443: 305 pgs: 305 active+clean; 165 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Jan 26 16:31:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 16:31:38 compute-0 ceph-mon[75140]: pgmap v2444: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 16:31:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:38.781 156607 DEBUG eventlet.wsgi.server [-] (156607) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 26 16:31:38 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:38.783 156607 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Jan 26 16:31:38 compute-0 ovn_metadata_agent[156096]: Accept: */*
Jan 26 16:31:38 compute-0 ovn_metadata_agent[156096]: Connection: close
Jan 26 16:31:38 compute-0 ovn_metadata_agent[156096]: Content-Type: text/plain
Jan 26 16:31:38 compute-0 ovn_metadata_agent[156096]: Host: 169.254.169.254
Jan 26 16:31:38 compute-0 ovn_metadata_agent[156096]: User-Agent: curl/7.84.0
Jan 26 16:31:38 compute-0 ovn_metadata_agent[156096]: X-Forwarded-For: 10.100.0.5
Jan 26 16:31:38 compute-0 ovn_metadata_agent[156096]: X-Ovn-Network-Id: a06070a8-971b-42ea-82b9-a4dadcba86a5 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 26 16:31:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 16:31:39 compute-0 nova_compute[239965]: 2026-01-26 16:31:39.138 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:40.152 156607 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:40.153 156607 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.3694706
Jan 26 16:31:40 compute-0 haproxy-metadata-proxy-a06070a8-971b-42ea-82b9-a4dadcba86a5[375130]: 10.100.0.5:45974 [26/Jan/2026:16:31:38.778] listener listener/metadata 0/0/0/1374/1374 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:40.273 156607 DEBUG eventlet.wsgi.server [-] (156607) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:40.275 156607 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: Accept: */*
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: Connection: close
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: Content-Length: 100
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: Content-Type: application/x-www-form-urlencoded
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: Host: 169.254.169.254
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: User-Agent: curl/7.84.0
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: X-Forwarded-For: 10.100.0.5
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: X-Ovn-Network-Id: a06070a8-971b-42ea-82b9-a4dadcba86a5
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: 
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 26 16:31:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:40.606 156607 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 26 16:31:40 compute-0 haproxy-metadata-proxy-a06070a8-971b-42ea-82b9-a4dadcba86a5[375130]: 10.100.0.5:45988 [26/Jan/2026:16:31:40.272] listener listener/metadata 0/0/0/334/334 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Jan 26 16:31:40 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:40.607 156607 INFO eventlet.wsgi.server [-] 10.100.0.5,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3319943
Jan 26 16:31:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 16:31:41 compute-0 nova_compute[239965]: 2026-01-26 16:31:41.409 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:41 compute-0 ceph-mon[75140]: pgmap v2445: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.731 239969 DEBUG oslo_concurrency.lockutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquiring lock "ac212cb3-2160-4534-a35c-21bcfab037fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.731 239969 DEBUG oslo_concurrency.lockutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.731 239969 DEBUG oslo_concurrency.lockutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquiring lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.732 239969 DEBUG oslo_concurrency.lockutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.732 239969 DEBUG oslo_concurrency.lockutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.734 239969 INFO nova.compute.manager [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Terminating instance
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.735 239969 DEBUG nova.compute.manager [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:31:42 compute-0 kernel: tap9c3cd432-78 (unregistering): left promiscuous mode
Jan 26 16:31:42 compute-0 NetworkManager[48954]: <info>  [1769445102.7905] device (tap9c3cd432-78): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 26 16:31:42 compute-0 ovn_controller[146046]: 2026-01-26T16:31:42Z|01644|binding|INFO|Releasing lport 9c3cd432-78a3-486a-99bf-b887e65dd8fb from this chassis (sb_readonly=0)
Jan 26 16:31:42 compute-0 ovn_controller[146046]: 2026-01-26T16:31:42Z|01645|binding|INFO|Setting lport 9c3cd432-78a3-486a-99bf-b887e65dd8fb down in Southbound
Jan 26 16:31:42 compute-0 ovn_controller[146046]: 2026-01-26T16:31:42Z|01646|binding|INFO|Removing iface tap9c3cd432-78 ovn-installed in OVS
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.801 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:42.809 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:c0:91 10.100.0.5'], port_security=['fa:16:3e:ce:c0:91 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'ac212cb3-2160-4534-a35c-21bcfab037fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a06070a8-971b-42ea-82b9-a4dadcba86a5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '28416a9085754ed09fd1131e4bb69e84', 'neutron:revision_number': '4', 'neutron:security_group_ids': '944f3e70-1325-41b7-abc4-0693016fe099 ec6f1f93-0e58-4206-a4ff-6edef61884bb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.199'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5f69434-7863-4622-9ed6-38e001f8727d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>], logical_port=9c3cd432-78a3-486a-99bf-b887e65dd8fb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f88c64a4b80>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:31:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:42.812 156105 INFO neutron.agent.ovn.metadata.agent [-] Port 9c3cd432-78a3-486a-99bf-b887e65dd8fb in datapath a06070a8-971b-42ea-82b9-a4dadcba86a5 unbound from our chassis
Jan 26 16:31:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 16:31:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:42.815 156105 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a06070a8-971b-42ea-82b9-a4dadcba86a5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 26 16:31:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:42.816 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc532ac-c7a9-46f8-97d2-7b5d8caea2e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:42 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:42.817 156105 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5 namespace which is not needed anymore
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.822 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:42 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Jan 26 16:31:42 compute-0 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 14.287s CPU time.
Jan 26 16:31:42 compute-0 systemd-machined[208061]: Machine qemu-186-instance-00000098 terminated.
Jan 26 16:31:42 compute-0 ceph-mon[75140]: pgmap v2446: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 329 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 16:31:42 compute-0 neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5[375124]: [NOTICE]   (375128) : haproxy version is 2.8.14-c23fe91
Jan 26 16:31:42 compute-0 neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5[375124]: [NOTICE]   (375128) : path to executable is /usr/sbin/haproxy
Jan 26 16:31:42 compute-0 neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5[375124]: [WARNING]  (375128) : Exiting Master process...
Jan 26 16:31:42 compute-0 neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5[375124]: [ALERT]    (375128) : Current worker (375130) exited with code 143 (Terminated)
Jan 26 16:31:42 compute-0 neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5[375124]: [WARNING]  (375128) : All workers exited. Exiting... (0)
Jan 26 16:31:42 compute-0 systemd[1]: libpod-7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258.scope: Deactivated successfully.
Jan 26 16:31:42 compute-0 conmon[375124]: conmon 7b0c3824da41c5fa697c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258.scope/container/memory.events
Jan 26 16:31:42 compute-0 podman[375773]: 2026-01-26 16:31:42.979791818 +0000 UTC m=+0.054645016 container died 7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.984 239969 INFO nova.virt.libvirt.driver [-] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Instance destroyed successfully.
Jan 26 16:31:42 compute-0 nova_compute[239965]: 2026-01-26 16:31:42.985 239969 DEBUG nova.objects.instance [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lazy-loading 'resources' on Instance uuid ac212cb3-2160-4534-a35c-21bcfab037fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.004 239969 DEBUG nova.virt.libvirt.vif [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-26T16:31:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-856164022',display_name='tempest-TestServerBasicOps-server-856164022',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-856164022',id=152,image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBO3pbnj0sLvS5A2vfDeCLtyAVJWzkpG6iTn2ojAHV+wUBLFCJ7Zd7+5fWmtQD5To3l4t2bWX92m1dUg7pBiDUnNnLz7iHt7XlMOe0SEnUQs1SZRu1JyGgGgM56YjqqPgRw==',key_name='tempest-TestServerBasicOps-1871563773',keypairs=<?>,launch_index=0,launched_at=2026-01-26T16:31:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='28416a9085754ed09fd1131e4bb69e84',ramdisk_id='',reservation_id='r-7cxapjyc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='e0817942-948b-4945-aa42-c1cb3a1c65ba',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1398812118',owner_user_name='tempest-TestServerBasicOps-1398812118-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-26T16:31:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5eddbe6e360b4be7ae448340e594cd57',uuid=ac212cb3-2160-4534-a35c-21bcfab037fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": 
"fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.005 239969 DEBUG nova.network.os_vif_util [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Converting VIF {"id": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "address": "fa:16:3e:ce:c0:91", "network": {"id": "a06070a8-971b-42ea-82b9-a4dadcba86a5", "bridge": "br-int", "label": "tempest-TestServerBasicOps-105777808-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.199", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "28416a9085754ed09fd1131e4bb69e84", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9c3cd432-78", "ovs_interfaceid": "9c3cd432-78a3-486a-99bf-b887e65dd8fb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.006 239969 DEBUG nova.network.os_vif_util [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ce:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=9c3cd432-78a3-486a-99bf-b887e65dd8fb,network=Network(a06070a8-971b-42ea-82b9-a4dadcba86a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c3cd432-78') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.006 239969 DEBUG os_vif [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ce:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=9c3cd432-78a3-486a-99bf-b887e65dd8fb,network=Network(a06070a8-971b-42ea-82b9-a4dadcba86a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c3cd432-78') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.009 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.010 239969 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9c3cd432-78, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.014 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258-userdata-shm.mount: Deactivated successfully.
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.017 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 26 16:31:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b7c4f92cf52633451301bc0cf838c36d7f33c7c28087fcc2a190b027559b23-merged.mount: Deactivated successfully.
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.019 239969 INFO os_vif [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ce:c0:91,bridge_name='br-int',has_traffic_filtering=True,id=9c3cd432-78a3-486a-99bf-b887e65dd8fb,network=Network(a06070a8-971b-42ea-82b9-a4dadcba86a5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9c3cd432-78')
Jan 26 16:31:43 compute-0 podman[375773]: 2026-01-26 16:31:43.026135098 +0000 UTC m=+0.100988296 container cleanup 7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 16:31:43 compute-0 systemd[1]: libpod-conmon-7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258.scope: Deactivated successfully.
Jan 26 16:31:43 compute-0 podman[375828]: 2026-01-26 16:31:43.094742907 +0000 UTC m=+0.043237905 container remove 7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:31:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:43.100 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[27e084fa-c80f-4424-9044-6c3168c92c78]: (4, ('Mon Jan 26 04:31:42 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5 (7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258)\n7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258\nMon Jan 26 04:31:43 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5 (7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258)\n7b0c3824da41c5fa697cd95f202530800ad2a520eac70de3ff9990e12bdaf258\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:43.102 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5b475f-676e-402c-9d06-306fd494de55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:43.104 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa06070a8-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.107 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:43 compute-0 kernel: tapa06070a8-90: left promiscuous mode
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.119 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:43.125 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9e3576aa-44d3-43eb-b726-08fb96101eff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:43.140 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[5129d403-8145-461a-b1a4-47d7c8c8c6e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:43.142 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[86ffc6d7-0a29-4f93-ae9a-b0fb9096d6c0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:43.163 247577 DEBUG oslo.privsep.daemon [-] privsep: reply[9347c1d8-362e-4450-99b0-4ba33366a58c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 656863, 'reachable_time': 42486, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375844, 'error': None, 'target': 'ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:43.166 156692 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a06070a8-971b-42ea-82b9-a4dadcba86a5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 26 16:31:43 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:43.166 156692 DEBUG oslo.privsep.daemon [-] privsep: reply[eb76218f-1037-47c2-9c45-8f61bd78ed16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 26 16:31:43 compute-0 systemd[1]: run-netns-ovnmeta\x2da06070a8\x2d971b\x2d42ea\x2d82b9\x2da4dadcba86a5.mount: Deactivated successfully.
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.235 239969 DEBUG nova.compute.manager [req-28bbb5c5-c26b-4e1a-91c2-3e25370fb1ac req-9e298d06-2ab1-4af9-99bf-792fe6b83afa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received event network-vif-unplugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.236 239969 DEBUG oslo_concurrency.lockutils [req-28bbb5c5-c26b-4e1a-91c2-3e25370fb1ac req-9e298d06-2ab1-4af9-99bf-792fe6b83afa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.238 239969 DEBUG oslo_concurrency.lockutils [req-28bbb5c5-c26b-4e1a-91c2-3e25370fb1ac req-9e298d06-2ab1-4af9-99bf-792fe6b83afa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.240 239969 DEBUG oslo_concurrency.lockutils [req-28bbb5c5-c26b-4e1a-91c2-3e25370fb1ac req-9e298d06-2ab1-4af9-99bf-792fe6b83afa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.241 239969 DEBUG nova.compute.manager [req-28bbb5c5-c26b-4e1a-91c2-3e25370fb1ac req-9e298d06-2ab1-4af9-99bf-792fe6b83afa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] No waiting events found dispatching network-vif-unplugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.241 239969 DEBUG nova.compute.manager [req-28bbb5c5-c26b-4e1a-91c2-3e25370fb1ac req-9e298d06-2ab1-4af9-99bf-792fe6b83afa a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received event network-vif-unplugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.270 239969 INFO nova.virt.libvirt.driver [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Deleting instance files /var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd_del
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.271 239969 INFO nova.virt.libvirt.driver [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Deletion of /var/lib/nova/instances/ac212cb3-2160-4534-a35c-21bcfab037fd_del complete
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.321 239969 INFO nova.compute.manager [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Took 0.59 seconds to destroy the instance on the hypervisor.
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.322 239969 DEBUG oslo.service.loopingcall [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.323 239969 DEBUG nova.compute.manager [-] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:31:43 compute-0 nova_compute[239965]: 2026-01-26 16:31:43.323 239969 DEBUG nova.network.neutron [-] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:31:44 compute-0 nova_compute[239965]: 2026-01-26 16:31:44.125 239969 DEBUG nova.network.neutron [-] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:31:44 compute-0 nova_compute[239965]: 2026-01-26 16:31:44.140 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:44 compute-0 nova_compute[239965]: 2026-01-26 16:31:44.145 239969 INFO nova.compute.manager [-] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Took 0.82 seconds to deallocate network for instance.
Jan 26 16:31:44 compute-0 nova_compute[239965]: 2026-01-26 16:31:44.198 239969 DEBUG oslo_concurrency.lockutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:44 compute-0 nova_compute[239965]: 2026-01-26 16:31:44.199 239969 DEBUG oslo_concurrency.lockutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:44 compute-0 nova_compute[239965]: 2026-01-26 16:31:44.234 239969 DEBUG nova.compute.manager [req-ecb43a39-b85f-4501-be83-7f87db5f486b req-e05a6620-b5e2-4f6f-940a-b9c7fccce98f a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received event network-vif-deleted-9c3cd432-78a3-486a-99bf-b887e65dd8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:31:44 compute-0 nova_compute[239965]: 2026-01-26 16:31:44.263 239969 DEBUG oslo_concurrency.processutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:31:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 127 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 187 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Jan 26 16:31:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:31:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168413054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:31:44 compute-0 ceph-mon[75140]: pgmap v2447: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 330 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Jan 26 16:31:44 compute-0 nova_compute[239965]: 2026-01-26 16:31:44.994 239969 DEBUG oslo_concurrency.processutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.731s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.005 239969 DEBUG nova.compute.provider_tree [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:31:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.404 239969 DEBUG nova.compute.manager [req-ec70c66f-98aa-4cd6-96e3-30c25cbc5c2b req-4a32a3a2-28b8-4c4e-9098-89839ea79dc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received event network-vif-plugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.405 239969 DEBUG oslo_concurrency.lockutils [req-ec70c66f-98aa-4cd6-96e3-30c25cbc5c2b req-4a32a3a2-28b8-4c4e-9098-89839ea79dc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Acquiring lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.405 239969 DEBUG oslo_concurrency.lockutils [req-ec70c66f-98aa-4cd6-96e3-30c25cbc5c2b req-4a32a3a2-28b8-4c4e-9098-89839ea79dc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.406 239969 DEBUG oslo_concurrency.lockutils [req-ec70c66f-98aa-4cd6-96e3-30c25cbc5c2b req-4a32a3a2-28b8-4c4e-9098-89839ea79dc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.406 239969 DEBUG nova.compute.manager [req-ec70c66f-98aa-4cd6-96e3-30c25cbc5c2b req-4a32a3a2-28b8-4c4e-9098-89839ea79dc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] No waiting events found dispatching network-vif-plugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.407 239969 WARNING nova.compute.manager [req-ec70c66f-98aa-4cd6-96e3-30c25cbc5c2b req-4a32a3a2-28b8-4c4e-9098-89839ea79dc8 a30ed45781b24a7dac6fdefc5f8cbb5e 55ae89344b9f49768a0d4a75dde56b60 - - default default] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Received unexpected event network-vif-plugged-9c3cd432-78a3-486a-99bf-b887e65dd8fb for instance with vm_state deleted and task_state None.
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.410 239969 DEBUG nova.scheduler.client.report [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.435 239969 DEBUG oslo_concurrency.lockutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.459 239969 INFO nova.scheduler.client.report [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Deleted allocations for instance ac212cb3-2160-4534-a35c-21bcfab037fd
Jan 26 16:31:45 compute-0 nova_compute[239965]: 2026-01-26 16:31:45.516 239969 DEBUG oslo_concurrency.lockutils [None req-8208a3ed-ab17-4f41-a09c-622862b00db3 5eddbe6e360b4be7ae448340e594cd57 28416a9085754ed09fd1131e4bb69e84 - - default default] Lock "ac212cb3-2160-4534-a35c-21bcfab037fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:31:45 compute-0 ceph-mon[75140]: pgmap v2448: 305 pgs: 305 active+clean; 127 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 187 KiB/s rd, 1.4 MiB/s wr, 38 op/s
Jan 26 16:31:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/168413054' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:31:46 compute-0 podman[375868]: 2026-01-26 16:31:46.396873322 +0000 UTC m=+0.071955091 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 16:31:46 compute-0 podman[375869]: 2026-01-26 16:31:46.428509101 +0000 UTC m=+0.112819718 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:31:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 101 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 46 KiB/s wr, 19 op/s
Jan 26 16:31:47 compute-0 ceph-mon[75140]: pgmap v2449: 305 pgs: 305 active+clean; 101 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 46 KiB/s wr, 19 op/s
Jan 26 16:31:48 compute-0 nova_compute[239965]: 2026-01-26 16:31:48.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:31:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3766081224' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:31:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:31:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3766081224' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:31:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 18 KiB/s wr, 30 op/s
Jan 26 16:31:49 compute-0 nova_compute[239965]: 2026-01-26 16:31:49.142 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3766081224' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:31:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3766081224' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.775605950091914e-05 of space, bias 1.0, pg target 0.0053268178502757415 quantized to 32 (current 32)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010183619328865418 of space, bias 1.0, pg target 0.3055085798659626 quantized to 32 (current 32)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.697983172610999e-07 of space, bias 4.0, pg target 0.0008037579807133199 quantized to 16 (current 16)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:31:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:31:50 compute-0 ceph-mon[75140]: pgmap v2450: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 18 KiB/s wr, 30 op/s
Jan 26 16:31:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 18 KiB/s wr, 30 op/s
Jan 26 16:31:51 compute-0 nova_compute[239965]: 2026-01-26 16:31:51.931 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:52 compute-0 nova_compute[239965]: 2026-01-26 16:31:52.076 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:52 compute-0 ceph-mon[75140]: pgmap v2451: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 18 KiB/s wr, 30 op/s
Jan 26 16:31:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.8 KiB/s wr, 30 op/s
Jan 26 16:31:53 compute-0 nova_compute[239965]: 2026-01-26 16:31:53.017 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:54 compute-0 nova_compute[239965]: 2026-01-26 16:31:54.144 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:54 compute-0 ceph-mon[75140]: pgmap v2452: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.8 KiB/s wr, 30 op/s
Jan 26 16:31:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 4.8 KiB/s wr, 29 op/s
Jan 26 16:31:55 compute-0 nova_compute[239965]: 2026-01-26 16:31:55.094 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:31:56 compute-0 ceph-mon[75140]: pgmap v2453: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 4.8 KiB/s wr, 29 op/s
Jan 26 16:31:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Jan 26 16:31:57 compute-0 nova_compute[239965]: 2026-01-26 16:31:57.981 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769445102.9788594, ac212cb3-2160-4534-a35c-21bcfab037fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:31:57 compute-0 nova_compute[239965]: 2026-01-26 16:31:57.981 239969 INFO nova.compute.manager [-] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] VM Stopped (Lifecycle Event)
Jan 26 16:31:58 compute-0 nova_compute[239965]: 2026-01-26 16:31:58.001 239969 DEBUG nova.compute.manager [None req-1c80258e-385d-4ec4-9869-bd1b500aacc0 - - - - - -] [instance: ac212cb3-2160-4534-a35c-21bcfab037fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:31:58 compute-0 nova_compute[239965]: 2026-01-26 16:31:58.020 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:58 compute-0 ceph-mon[75140]: pgmap v2454: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 682 B/s wr, 24 op/s
Jan 26 16:31:58 compute-0 nova_compute[239965]: 2026-01-26 16:31:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:31:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2455: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 15 op/s
Jan 26 16:31:59 compute-0 nova_compute[239965]: 2026-01-26 16:31:59.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:31:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:59.254 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:31:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:59.255 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:31:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:31:59.255 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:32:00 compute-0 ceph-mon[75140]: pgmap v2455: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 682 B/s wr, 15 op/s
Jan 26 16:32:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:02 compute-0 ceph-mon[75140]: pgmap v2456: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2457: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:03 compute-0 nova_compute[239965]: 2026-01-26 16:32:03.022 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:04 compute-0 nova_compute[239965]: 2026-01-26 16:32:04.199 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:04 compute-0 ceph-mon[75140]: pgmap v2457: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:04 compute-0 nova_compute[239965]: 2026-01-26 16:32:04.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:04 compute-0 nova_compute[239965]: 2026-01-26 16:32:04.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:32:04 compute-0 nova_compute[239965]: 2026-01-26 16:32:04.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:32:04 compute-0 nova_compute[239965]: 2026-01-26 16:32:04.530 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:32:04 compute-0 nova_compute[239965]: 2026-01-26 16:32:04.531 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2458: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:05 compute-0 nova_compute[239965]: 2026-01-26 16:32:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:05 compute-0 nova_compute[239965]: 2026-01-26 16:32:05.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:32:06 compute-0 ceph-mon[75140]: pgmap v2458: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:06 compute-0 nova_compute[239965]: 2026-01-26 16:32:06.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:06 compute-0 nova_compute[239965]: 2026-01-26 16:32:06.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:06 compute-0 nova_compute[239965]: 2026-01-26 16:32:06.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:32:06.675 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:32:06 compute-0 nova_compute[239965]: 2026-01-26 16:32:06.676 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:06 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:32:06.677 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:32:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:08 compute-0 nova_compute[239965]: 2026-01-26 16:32:08.026 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:08 compute-0 ceph-mon[75140]: pgmap v2459: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:09 compute-0 nova_compute[239965]: 2026-01-26 16:32:09.201 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:09 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:32:09.679 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:32:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:10 compute-0 ceph-mon[75140]: pgmap v2460: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:10 compute-0 nova_compute[239965]: 2026-01-26 16:32:10.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:10 compute-0 nova_compute[239965]: 2026-01-26 16:32:10.536 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:10 compute-0 nova_compute[239965]: 2026-01-26 16:32:10.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:10 compute-0 nova_compute[239965]: 2026-01-26 16:32:10.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:10 compute-0 nova_compute[239965]: 2026-01-26 16:32:10.537 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:32:10 compute-0 nova_compute[239965]: 2026-01-26 16:32:10.538 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:32:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3896408152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:32:11 compute-0 nova_compute[239965]: 2026-01-26 16:32:11.082 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:11 compute-0 nova_compute[239965]: 2026-01-26 16:32:11.353 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:32:11 compute-0 nova_compute[239965]: 2026-01-26 16:32:11.356 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3491MB free_disk=59.98721609450877GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:32:11 compute-0 nova_compute[239965]: 2026-01-26 16:32:11.356 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:11 compute-0 nova_compute[239965]: 2026-01-26 16:32:11.356 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3896408152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:32:11 compute-0 nova_compute[239965]: 2026-01-26 16:32:11.449 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:32:11 compute-0 nova_compute[239965]: 2026-01-26 16:32:11.450 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:32:11 compute-0 nova_compute[239965]: 2026-01-26 16:32:11.464 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:32:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1642427035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:32:12 compute-0 nova_compute[239965]: 2026-01-26 16:32:12.033 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:12 compute-0 nova_compute[239965]: 2026-01-26 16:32:12.041 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:32:12 compute-0 nova_compute[239965]: 2026-01-26 16:32:12.064 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:32:12 compute-0 nova_compute[239965]: 2026-01-26 16:32:12.092 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:32:12 compute-0 nova_compute[239965]: 2026-01-26 16:32:12.092 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:12 compute-0 ceph-mon[75140]: pgmap v2461: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1642427035' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:32:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:13 compute-0 nova_compute[239965]: 2026-01-26 16:32:13.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:14 compute-0 nova_compute[239965]: 2026-01-26 16:32:14.203 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:14 compute-0 ceph-mon[75140]: pgmap v2462: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:16 compute-0 ceph-mon[75140]: pgmap v2463: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2464: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:17 compute-0 podman[375959]: 2026-01-26 16:32:17.381310065 +0000 UTC m=+0.067398839 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 26 16:32:17 compute-0 podman[375960]: 2026-01-26 16:32:17.466105681 +0000 UTC m=+0.138204611 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 16:32:18 compute-0 nova_compute[239965]: 2026-01-26 16:32:18.032 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:18 compute-0 ceph-mon[75140]: pgmap v2464: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:19 compute-0 nova_compute[239965]: 2026-01-26 16:32:19.205 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:20 compute-0 ceph-mon[75140]: pgmap v2465: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:21 compute-0 nova_compute[239965]: 2026-01-26 16:32:21.836 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Acquiring lock "dd4373db-38a9-48e5-bcc2-059e3757fc63" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:21 compute-0 nova_compute[239965]: 2026-01-26 16:32:21.837 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "dd4373db-38a9-48e5-bcc2-059e3757fc63" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:21 compute-0 nova_compute[239965]: 2026-01-26 16:32:21.852 239969 DEBUG nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 26 16:32:21 compute-0 nova_compute[239965]: 2026-01-26 16:32:21.938 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:21 compute-0 nova_compute[239965]: 2026-01-26 16:32:21.938 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:21 compute-0 nova_compute[239965]: 2026-01-26 16:32:21.949 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 26 16:32:21 compute-0 nova_compute[239965]: 2026-01-26 16:32:21.950 239969 INFO nova.compute.claims [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Claim successful on node compute-0.ctlplane.example.com
Jan 26 16:32:21 compute-0 ceph-mon[75140]: pgmap v2466: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.128 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:32:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3686410779' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.712 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.718 239969 DEBUG nova.compute.provider_tree [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.813 239969 DEBUG nova.scheduler.client.report [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.832 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.833 239969 DEBUG nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.878 239969 DEBUG nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.879 239969 DEBUG nova.network.neutron [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.901 239969 INFO nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 26 16:32:22 compute-0 nova_compute[239965]: 2026-01-26 16:32:22.934 239969 DEBUG nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 26 16:32:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3686410779' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.037 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.048 239969 DEBUG nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.050 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.050 239969 INFO nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Creating image(s)
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.094 239969 DEBUG nova.storage.rbd_utils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] rbd image dd4373db-38a9-48e5-bcc2-059e3757fc63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.123 239969 DEBUG nova.storage.rbd_utils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] rbd image dd4373db-38a9-48e5-bcc2-059e3757fc63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.149 239969 DEBUG nova.storage.rbd_utils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] rbd image dd4373db-38a9-48e5-bcc2-059e3757fc63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.153 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.200 239969 DEBUG nova.network.neutron [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.201 239969 DEBUG nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.250 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.251 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Acquiring lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.252 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.253 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.276 239969 DEBUG nova.storage.rbd_utils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] rbd image dd4373db-38a9-48e5-bcc2-059e3757fc63_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:32:23 compute-0 nova_compute[239965]: 2026-01-26 16:32:23.279 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dd4373db-38a9-48e5-bcc2-059e3757fc63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:24 compute-0 ceph-mon[75140]: pgmap v2467: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.215 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.508 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 dd4373db-38a9-48e5-bcc2-059e3757fc63_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.229s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.582 239969 DEBUG nova.storage.rbd_utils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] resizing rbd image dd4373db-38a9-48e5-bcc2-059e3757fc63_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.763 239969 DEBUG nova.objects.instance [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lazy-loading 'migration_context' on Instance uuid dd4373db-38a9-48e5-bcc2-059e3757fc63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.783 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.783 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Ensure instance console log exists: /var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.784 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.784 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.784 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.786 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_type': 'disk', 'encryption_format': None, 'device_name': '/dev/vda', 'guest_format': None, 'encrypted': False, 'encryption_secret_uuid': None, 'encryption_options': None, 'boot_index': 0, 'size': 0, 'disk_bus': 'virtio', 'image_id': 'e0817942-948b-4945-aa42-c1cb3a1c65ba'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.792 239969 WARNING nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.797 239969 DEBUG nova.virt.libvirt.host [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.797 239969 DEBUG nova.virt.libvirt.host [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.800 239969 DEBUG nova.virt.libvirt.host [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.800 239969 DEBUG nova.virt.libvirt.host [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.801 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.801 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-26T15:46:41Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='216d6d2c-7f80-43d8-8200-bdfe3ae7f9b6',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-26T15:46:42Z,direct_url=<?>,disk_format='qcow2',id=e0817942-948b-4945-aa42-c1cb3a1c65ba,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='93bcedec903248cc873916ae9152ee59',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-26T15:46:43Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.801 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.802 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.802 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.802 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.802 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.803 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.803 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.803 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.803 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.803 239969 DEBUG nova.virt.hardware [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 26 16:32:24 compute-0 nova_compute[239965]: 2026-01-26 16:32:24.806 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 101 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 473 KiB/s wr, 0 op/s
Jan 26 16:32:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:32:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1452425663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:32:25 compute-0 nova_compute[239965]: 2026-01-26 16:32:25.416 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:25 compute-0 nova_compute[239965]: 2026-01-26 16:32:25.447 239969 DEBUG nova.storage.rbd_utils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] rbd image dd4373db-38a9-48e5-bcc2-059e3757fc63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:32:25 compute-0 nova_compute[239965]: 2026-01-26 16:32:25.452 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:25 compute-0 sshd-session[376192]: Received disconnect from 91.224.92.108 port 49132:11:  [preauth]
Jan 26 16:32:25 compute-0 sshd-session[376192]: Disconnected from authenticating user root 91.224.92.108 port 49132 [preauth]
Jan 26 16:32:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 26 16:32:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306864028' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.000 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.003 239969 DEBUG nova.objects.instance [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd4373db-38a9-48e5-bcc2-059e3757fc63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.043 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] End _get_guest_xml xml=<domain type="kvm">
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <uuid>dd4373db-38a9-48e5-bcc2-059e3757fc63</uuid>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <name>instance-00000099</name>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <memory>131072</memory>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <vcpu>1</vcpu>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <metadata>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <nova:name>tempest-AggregatesAdminTestJSON-server-21551065</nova:name>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <nova:creationTime>2026-01-26 16:32:24</nova:creationTime>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <nova:flavor name="m1.nano">
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <nova:memory>128</nova:memory>
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <nova:disk>1</nova:disk>
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <nova:swap>0</nova:swap>
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <nova:ephemeral>0</nova:ephemeral>
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <nova:vcpus>1</nova:vcpus>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       </nova:flavor>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <nova:owner>
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <nova:user uuid="c1e77c373a094e75a1b5e946521cb2a0">tempest-AggregatesAdminTestJSON-735403752-project-member</nova:user>
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <nova:project uuid="756687506b72467ab7ee7ea96e650a72">tempest-AggregatesAdminTestJSON-735403752</nova:project>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       </nova:owner>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <nova:root type="image" uuid="e0817942-948b-4945-aa42-c1cb3a1c65ba"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <nova:ports/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     </nova:instance>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   </metadata>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <sysinfo type="smbios">
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <system>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <entry name="manufacturer">RDO</entry>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <entry name="product">OpenStack Compute</entry>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <entry name="serial">dd4373db-38a9-48e5-bcc2-059e3757fc63</entry>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <entry name="uuid">dd4373db-38a9-48e5-bcc2-059e3757fc63</entry>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <entry name="family">Virtual Machine</entry>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     </system>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   </sysinfo>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <os>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <boot dev="hd"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <smbios mode="sysinfo"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   </os>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <features>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <acpi/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <apic/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <vmcoreinfo/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   </features>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <clock offset="utc">
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <timer name="pit" tickpolicy="delay"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <timer name="hpet" present="no"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   </clock>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <cpu mode="host-model" match="exact">
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <topology sockets="1" cores="1" threads="1"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   </cpu>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   <devices>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <disk type="network" device="disk">
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dd4373db-38a9-48e5-bcc2-059e3757fc63_disk">
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <target dev="vda" bus="virtio"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <disk type="network" device="cdrom">
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <driver type="raw" cache="none"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <source protocol="rbd" name="vms/dd4373db-38a9-48e5-bcc2-059e3757fc63_disk.config">
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <host name="192.168.122.100" port="6789"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       </source>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <auth username="openstack">
Jan 26 16:32:26 compute-0 nova_compute[239965]:         <secret type="ceph" uuid="8b9831ad-4b0d-59b4-8860-96eb895a171f"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       </auth>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <target dev="sda" bus="sata"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     </disk>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <serial type="pty">
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <log file="/var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63/console.log" append="off"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     </serial>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <video>
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <model type="virtio"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     </video>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <input type="tablet" bus="usb"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <rng model="virtio">
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <backend model="random">/dev/urandom</backend>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     </rng>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="pci" model="pcie-root-port"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <controller type="usb" index="0"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     <memballoon model="virtio">
Jan 26 16:32:26 compute-0 nova_compute[239965]:       <stats period="10"/>
Jan 26 16:32:26 compute-0 nova_compute[239965]:     </memballoon>
Jan 26 16:32:26 compute-0 nova_compute[239965]:   </devices>
Jan 26 16:32:26 compute-0 nova_compute[239965]: </domain>
Jan 26 16:32:26 compute-0 nova_compute[239965]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.085 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.121 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.122 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.123 239969 INFO nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Using config drive
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.145 239969 DEBUG nova.storage.rbd_utils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] rbd image dd4373db-38a9-48e5-bcc2-059e3757fc63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:32:26 compute-0 ceph-mon[75140]: pgmap v2468: 305 pgs: 305 active+clean; 101 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 473 KiB/s wr, 0 op/s
Jan 26 16:32:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1452425663' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:32:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/306864028' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.694 239969 INFO nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Creating config drive at /var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63/disk.config
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.699 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9d8cgtj3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.850 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9d8cgtj3" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.880 239969 DEBUG nova.storage.rbd_utils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] rbd image dd4373db-38a9-48e5-bcc2-059e3757fc63_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Jan 26 16:32:26 compute-0 nova_compute[239965]: 2026-01-26 16:32:26.884 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63/disk.config dd4373db-38a9-48e5-bcc2-059e3757fc63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 114 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 643 KiB/s wr, 13 op/s
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.035 239969 DEBUG oslo_concurrency.processutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63/disk.config dd4373db-38a9-48e5-bcc2-059e3757fc63_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.036 239969 INFO nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Deleting local config drive /var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63/disk.config because it was imported into RBD.
Jan 26 16:32:27 compute-0 systemd-machined[208061]: New machine qemu-187-instance-00000099.
Jan 26 16:32:27 compute-0 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.509 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445147.5088935, dd4373db-38a9-48e5-bcc2-059e3757fc63 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.510 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] VM Resumed (Lifecycle Event)
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.515 239969 DEBUG nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.516 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.523 239969 INFO nova.virt.libvirt.driver [-] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Instance spawned successfully.
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.524 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.545 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.551 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.555 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.555 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.556 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.556 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.557 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.557 239969 DEBUG nova.virt.libvirt.driver [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.604 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.605 239969 DEBUG nova.virt.driver [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] Emitting event <LifecycleEvent: 1769445147.51417, dd4373db-38a9-48e5-bcc2-059e3757fc63 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.605 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] VM Started (Lifecycle Event)
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.639 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.645 239969 DEBUG nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.651 239969 INFO nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Took 4.60 seconds to spawn the instance on the hypervisor.
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.651 239969 DEBUG nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.662 239969 INFO nova.compute.manager [None req-c07e4341-71b7-4afe-aec9-378269ebe439 - - - - - -] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.713 239969 INFO nova.compute.manager [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Took 5.82 seconds to build instance.
Jan 26 16:32:27 compute-0 nova_compute[239965]: 2026-01-26 16:32:27.734 239969 DEBUG oslo_concurrency.lockutils [None req-16529ea9-2f03-4499-8c63-b6eeacfdc6e7 c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "dd4373db-38a9-48e5-bcc2-059e3757fc63" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.040 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:28 compute-0 ceph-mon[75140]: pgmap v2469: 305 pgs: 305 active+clean; 114 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 643 KiB/s wr, 13 op/s
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.525 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Acquiring lock "dd4373db-38a9-48e5-bcc2-059e3757fc63" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.526 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "dd4373db-38a9-48e5-bcc2-059e3757fc63" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.527 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Acquiring lock "dd4373db-38a9-48e5-bcc2-059e3757fc63-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.527 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "dd4373db-38a9-48e5-bcc2-059e3757fc63-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.527 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "dd4373db-38a9-48e5-bcc2-059e3757fc63-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.529 239969 INFO nova.compute.manager [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Terminating instance
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.530 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Acquiring lock "refresh_cache-dd4373db-38a9-48e5-bcc2-059e3757fc63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.530 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Acquired lock "refresh_cache-dd4373db-38a9-48e5-bcc2-059e3757fc63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.530 239969 DEBUG nova.network.neutron [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 26 16:32:28 compute-0 nova_compute[239965]: 2026-01-26 16:32:28.702 239969 DEBUG nova.network.neutron [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:32:28
Jan 26 16:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.meta']
Jan 26 16:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:32:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2470: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.022 239969 DEBUG nova.network.neutron [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.040 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Releasing lock "refresh_cache-dd4373db-38a9-48e5-bcc2-059e3757fc63" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.041 239969 DEBUG nova.compute.manager [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 26 16:32:29 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Jan 26 16:32:29 compute-0 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 2.052s CPU time.
Jan 26 16:32:29 compute-0 systemd-machined[208061]: Machine qemu-187-instance-00000099 terminated.
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.224 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.273 239969 INFO nova.virt.libvirt.driver [-] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Instance destroyed successfully.
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.274 239969 DEBUG nova.objects.instance [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lazy-loading 'resources' on Instance uuid dd4373db-38a9-48e5-bcc2-059e3757fc63 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.576 239969 INFO nova.virt.libvirt.driver [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Deleting instance files /var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63_del
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.577 239969 INFO nova.virt.libvirt.driver [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Deletion of /var/lib/nova/instances/dd4373db-38a9-48e5-bcc2-059e3757fc63_del complete
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.625 239969 INFO nova.compute.manager [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Took 0.58 seconds to destroy the instance on the hypervisor.
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.626 239969 DEBUG oslo.service.loopingcall [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.626 239969 DEBUG nova.compute.manager [-] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.626 239969 DEBUG nova.network.neutron [-] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.767 239969 DEBUG nova.network.neutron [-] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.782 239969 DEBUG nova.network.neutron [-] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.796 239969 INFO nova.compute.manager [-] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Took 0.17 seconds to deallocate network for instance.
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.839 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.839 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:29 compute-0 nova_compute[239965]: 2026-01-26 16:32:29.894 239969 DEBUG oslo_concurrency.processutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:32:30 compute-0 ceph-mon[75140]: pgmap v2470: 305 pgs: 305 active+clean; 134 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Jan 26 16:32:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:32:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:32:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790273014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:32:30 compute-0 nova_compute[239965]: 2026-01-26 16:32:30.469 239969 DEBUG oslo_concurrency.processutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:32:30 compute-0 nova_compute[239965]: 2026-01-26 16:32:30.476 239969 DEBUG nova.compute.provider_tree [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:32:30 compute-0 nova_compute[239965]: 2026-01-26 16:32:30.491 239969 DEBUG nova.scheduler.client.report [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:32:30 compute-0 nova_compute[239965]: 2026-01-26 16:32:30.510 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:30 compute-0 nova_compute[239965]: 2026-01-26 16:32:30.546 239969 INFO nova.scheduler.client.report [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Deleted allocations for instance dd4373db-38a9-48e5-bcc2-059e3757fc63
Jan 26 16:32:30 compute-0 nova_compute[239965]: 2026-01-26 16:32:30.620 239969 DEBUG oslo_concurrency.lockutils [None req-dbcf92cc-7af5-4992-ac03-74cc9663629a c1e77c373a094e75a1b5e946521cb2a0 756687506b72467ab7ee7ea96e650a72 - - default default] Lock "dd4373db-38a9-48e5-bcc2-059e3757fc63" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:32:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 514 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 26 16:32:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2790273014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:32:32 compute-0 ceph-mon[75140]: pgmap v2471: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 514 KiB/s rd, 1.8 MiB/s wr, 51 op/s
Jan 26 16:32:32 compute-0 sudo[376414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:32:32 compute-0 sudo[376414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:32:32 compute-0 sudo[376414]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:32 compute-0 sudo[376439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:32:32 compute-0 sudo[376439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:32:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 26 16:32:33 compute-0 nova_compute[239965]: 2026-01-26 16:32:33.043 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:33 compute-0 sudo[376439]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:32:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:32:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:32:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:32:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:32:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:32:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:32:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:32:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:32:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:32:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:32:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:32:33 compute-0 sudo[376495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:32:33 compute-0 sudo[376495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:32:33 compute-0 sudo[376495]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:33 compute-0 sudo[376520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:32:33 compute-0 sudo[376520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:32:34 compute-0 podman[376558]: 2026-01-26 16:32:34.03704833 +0000 UTC m=+0.043824469 container create 4d76daab5637b1618dfafe64baa7c9614125ee092b22bd1173a771b681230436 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sanderson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:32:34 compute-0 systemd[1]: Started libpod-conmon-4d76daab5637b1618dfafe64baa7c9614125ee092b22bd1173a771b681230436.scope.
Jan 26 16:32:34 compute-0 podman[376558]: 2026-01-26 16:32:34.020470973 +0000 UTC m=+0.027247122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:32:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:32:34 compute-0 podman[376558]: 2026-01-26 16:32:34.142863664 +0000 UTC m=+0.149639813 container init 4d76daab5637b1618dfafe64baa7c9614125ee092b22bd1173a771b681230436 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 16:32:34 compute-0 podman[376558]: 2026-01-26 16:32:34.151448786 +0000 UTC m=+0.158224925 container start 4d76daab5637b1618dfafe64baa7c9614125ee092b22bd1173a771b681230436 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:32:34 compute-0 podman[376558]: 2026-01-26 16:32:34.154289895 +0000 UTC m=+0.161066054 container attach 4d76daab5637b1618dfafe64baa7c9614125ee092b22bd1173a771b681230436 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sanderson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:32:34 compute-0 hungry_sanderson[376575]: 167 167
Jan 26 16:32:34 compute-0 systemd[1]: libpod-4d76daab5637b1618dfafe64baa7c9614125ee092b22bd1173a771b681230436.scope: Deactivated successfully.
Jan 26 16:32:34 compute-0 podman[376558]: 2026-01-26 16:32:34.157099524 +0000 UTC m=+0.163875673 container died 4d76daab5637b1618dfafe64baa7c9614125ee092b22bd1173a771b681230436 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sanderson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:32:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-be902d7b0ebdb6167359f33f9b66b5cc80964c05c6d4da18f29b4212609ed56c-merged.mount: Deactivated successfully.
Jan 26 16:32:34 compute-0 podman[376558]: 2026-01-26 16:32:34.201036316 +0000 UTC m=+0.207812465 container remove 4d76daab5637b1618dfafe64baa7c9614125ee092b22bd1173a771b681230436 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:32:34 compute-0 systemd[1]: libpod-conmon-4d76daab5637b1618dfafe64baa7c9614125ee092b22bd1173a771b681230436.scope: Deactivated successfully.
Jan 26 16:32:34 compute-0 nova_compute[239965]: 2026-01-26 16:32:34.225 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:34 compute-0 podman[376601]: 2026-01-26 16:32:34.396510586 +0000 UTC m=+0.056408919 container create 2b731ea76d692efe03a46b8a70363ffd07e83d3eb1cef611b920666435beb9df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:32:34 compute-0 ceph-mon[75140]: pgmap v2472: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Jan 26 16:32:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:32:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:32:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:32:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:32:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:32:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:32:34 compute-0 systemd[1]: Started libpod-conmon-2b731ea76d692efe03a46b8a70363ffd07e83d3eb1cef611b920666435beb9df.scope.
Jan 26 16:32:34 compute-0 podman[376601]: 2026-01-26 16:32:34.374294239 +0000 UTC m=+0.034192592 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:32:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326c1ddb9ad04c5e1ae736e7be6366254fe3363f6921370249fa15ef94266922/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326c1ddb9ad04c5e1ae736e7be6366254fe3363f6921370249fa15ef94266922/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326c1ddb9ad04c5e1ae736e7be6366254fe3363f6921370249fa15ef94266922/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326c1ddb9ad04c5e1ae736e7be6366254fe3363f6921370249fa15ef94266922/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/326c1ddb9ad04c5e1ae736e7be6366254fe3363f6921370249fa15ef94266922/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:34 compute-0 podman[376601]: 2026-01-26 16:32:34.497885811 +0000 UTC m=+0.157784144 container init 2b731ea76d692efe03a46b8a70363ffd07e83d3eb1cef611b920666435beb9df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 16:32:34 compute-0 podman[376601]: 2026-01-26 16:32:34.505434816 +0000 UTC m=+0.165333139 container start 2b731ea76d692efe03a46b8a70363ffd07e83d3eb1cef611b920666435beb9df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:32:34 compute-0 podman[376601]: 2026-01-26 16:32:34.510523392 +0000 UTC m=+0.170421715 container attach 2b731ea76d692efe03a46b8a70363ffd07e83d3eb1cef611b920666435beb9df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:32:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 26 16:32:34 compute-0 charming_agnesi[376618]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:32:34 compute-0 charming_agnesi[376618]: --> All data devices are unavailable
Jan 26 16:32:35 compute-0 systemd[1]: libpod-2b731ea76d692efe03a46b8a70363ffd07e83d3eb1cef611b920666435beb9df.scope: Deactivated successfully.
Jan 26 16:32:35 compute-0 podman[376601]: 2026-01-26 16:32:35.025527694 +0000 UTC m=+0.685426077 container died 2b731ea76d692efe03a46b8a70363ffd07e83d3eb1cef611b920666435beb9df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_agnesi, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-326c1ddb9ad04c5e1ae736e7be6366254fe3363f6921370249fa15ef94266922-merged.mount: Deactivated successfully.
Jan 26 16:32:35 compute-0 podman[376601]: 2026-01-26 16:32:35.079851521 +0000 UTC m=+0.739749854 container remove 2b731ea76d692efe03a46b8a70363ffd07e83d3eb1cef611b920666435beb9df (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 16:32:35 compute-0 systemd[1]: libpod-conmon-2b731ea76d692efe03a46b8a70363ffd07e83d3eb1cef611b920666435beb9df.scope: Deactivated successfully.
Jan 26 16:32:35 compute-0 sudo[376520]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:35 compute-0 sudo[376648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:32:35 compute-0 sudo[376648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:32:35 compute-0 sudo[376648]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:35 compute-0 sudo[376673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:32:35 compute-0 sudo[376673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:32:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:35 compute-0 podman[376710]: 2026-01-26 16:32:35.590259461 +0000 UTC m=+0.058056710 container create 24d488917fffe0ba0e1286bf691780e73b45506741235db35f8c613d8886f301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 16:32:35 compute-0 systemd[1]: Started libpod-conmon-24d488917fffe0ba0e1286bf691780e73b45506741235db35f8c613d8886f301.scope.
Jan 26 16:32:35 compute-0 podman[376710]: 2026-01-26 16:32:35.570938116 +0000 UTC m=+0.038735355 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:32:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:32:35 compute-0 podman[376710]: 2026-01-26 16:32:35.688275993 +0000 UTC m=+0.156073302 container init 24d488917fffe0ba0e1286bf691780e73b45506741235db35f8c613d8886f301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:32:35 compute-0 podman[376710]: 2026-01-26 16:32:35.696511496 +0000 UTC m=+0.164308755 container start 24d488917fffe0ba0e1286bf691780e73b45506741235db35f8c613d8886f301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 16:32:35 compute-0 podman[376710]: 2026-01-26 16:32:35.704037741 +0000 UTC m=+0.171835060 container attach 24d488917fffe0ba0e1286bf691780e73b45506741235db35f8c613d8886f301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:32:35 compute-0 awesome_kalam[376726]: 167 167
Jan 26 16:32:35 compute-0 systemd[1]: libpod-24d488917fffe0ba0e1286bf691780e73b45506741235db35f8c613d8886f301.scope: Deactivated successfully.
Jan 26 16:32:35 compute-0 podman[376710]: 2026-01-26 16:32:35.70807081 +0000 UTC m=+0.175868069 container died 24d488917fffe0ba0e1286bf691780e73b45506741235db35f8c613d8886f301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:32:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-0535e9043be5c907245efc54b57312433c667f81de688ce1736441bac47dbeea-merged.mount: Deactivated successfully.
Jan 26 16:32:35 compute-0 podman[376710]: 2026-01-26 16:32:35.753900578 +0000 UTC m=+0.221697797 container remove 24d488917fffe0ba0e1286bf691780e73b45506741235db35f8c613d8886f301 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:32:35 compute-0 systemd[1]: libpod-conmon-24d488917fffe0ba0e1286bf691780e73b45506741235db35f8c613d8886f301.scope: Deactivated successfully.
Jan 26 16:32:35 compute-0 podman[376750]: 2026-01-26 16:32:35.919602526 +0000 UTC m=+0.056774879 container create 5a36dd764783aebda80a6e98f0e15a98fe02d627bd00cc1314205853fe9cc1cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:32:35 compute-0 systemd[1]: Started libpod-conmon-5a36dd764783aebda80a6e98f0e15a98fe02d627bd00cc1314205853fe9cc1cd.scope.
Jan 26 16:32:35 compute-0 podman[376750]: 2026-01-26 16:32:35.890380426 +0000 UTC m=+0.027552829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:32:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1183a763d970ddede1db3e714ad71ea95fe72e0453419aed2412cf5e6893c710/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1183a763d970ddede1db3e714ad71ea95fe72e0453419aed2412cf5e6893c710/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1183a763d970ddede1db3e714ad71ea95fe72e0453419aed2412cf5e6893c710/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1183a763d970ddede1db3e714ad71ea95fe72e0453419aed2412cf5e6893c710/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:36 compute-0 podman[376750]: 2026-01-26 16:32:36.035211811 +0000 UTC m=+0.172384174 container init 5a36dd764783aebda80a6e98f0e15a98fe02d627bd00cc1314205853fe9cc1cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:32:36 compute-0 podman[376750]: 2026-01-26 16:32:36.047160844 +0000 UTC m=+0.184333197 container start 5a36dd764783aebda80a6e98f0e15a98fe02d627bd00cc1314205853fe9cc1cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:32:36 compute-0 podman[376750]: 2026-01-26 16:32:36.051529802 +0000 UTC m=+0.188702165 container attach 5a36dd764783aebda80a6e98f0e15a98fe02d627bd00cc1314205853fe9cc1cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:32:36 compute-0 laughing_johnson[376766]: {
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:     "0": [
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:         {
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "devices": [
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "/dev/loop3"
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             ],
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_name": "ceph_lv0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_size": "21470642176",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "name": "ceph_lv0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "tags": {
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.cluster_name": "ceph",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.crush_device_class": "",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.encrypted": "0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.objectstore": "bluestore",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.osd_id": "0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.type": "block",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.vdo": "0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.with_tpm": "0"
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             },
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "type": "block",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "vg_name": "ceph_vg0"
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:         }
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:     ],
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:     "1": [
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:         {
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "devices": [
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "/dev/loop4"
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             ],
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_name": "ceph_lv1",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_size": "21470642176",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "name": "ceph_lv1",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "tags": {
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.cluster_name": "ceph",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.crush_device_class": "",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.encrypted": "0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.objectstore": "bluestore",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.osd_id": "1",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.type": "block",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.vdo": "0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.with_tpm": "0"
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             },
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "type": "block",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "vg_name": "ceph_vg1"
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:         }
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:     ],
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:     "2": [
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:         {
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "devices": [
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "/dev/loop5"
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             ],
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_name": "ceph_lv2",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_size": "21470642176",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "name": "ceph_lv2",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "tags": {
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.cluster_name": "ceph",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.crush_device_class": "",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.encrypted": "0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.objectstore": "bluestore",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.osd_id": "2",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.type": "block",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.vdo": "0",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:                 "ceph.with_tpm": "0"
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             },
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "type": "block",
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:             "vg_name": "ceph_vg2"
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:         }
Jan 26 16:32:36 compute-0 laughing_johnson[376766]:     ]
Jan 26 16:32:36 compute-0 laughing_johnson[376766]: }
Jan 26 16:32:36 compute-0 systemd[1]: libpod-5a36dd764783aebda80a6e98f0e15a98fe02d627bd00cc1314205853fe9cc1cd.scope: Deactivated successfully.
Jan 26 16:32:36 compute-0 podman[376750]: 2026-01-26 16:32:36.395822644 +0000 UTC m=+0.532994997 container died 5a36dd764783aebda80a6e98f0e15a98fe02d627bd00cc1314205853fe9cc1cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:32:36 compute-0 ceph-mon[75140]: pgmap v2473: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Jan 26 16:32:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1183a763d970ddede1db3e714ad71ea95fe72e0453419aed2412cf5e6893c710-merged.mount: Deactivated successfully.
Jan 26 16:32:36 compute-0 podman[376750]: 2026-01-26 16:32:36.720856312 +0000 UTC m=+0.858028665 container remove 5a36dd764783aebda80a6e98f0e15a98fe02d627bd00cc1314205853fe9cc1cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_johnson, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:32:36 compute-0 systemd[1]: libpod-conmon-5a36dd764783aebda80a6e98f0e15a98fe02d627bd00cc1314205853fe9cc1cd.scope: Deactivated successfully.
Jan 26 16:32:36 compute-0 sudo[376673]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:36 compute-0 sudo[376787]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:32:36 compute-0 sudo[376787]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:32:36 compute-0 sudo[376787]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:36 compute-0 sudo[376812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:32:36 compute-0 sudo[376812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:32:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2474: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 126 op/s
Jan 26 16:32:37 compute-0 ovn_controller[146046]: 2026-01-26T16:32:37Z|01647|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 26 16:32:37 compute-0 podman[376849]: 2026-01-26 16:32:37.23444992 +0000 UTC m=+0.062655143 container create fa996678eb36b365a36af6bf86065b1c7d803acbb262edc0c75fdbd872364438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_antonelli, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:32:37 compute-0 systemd[1]: Started libpod-conmon-fa996678eb36b365a36af6bf86065b1c7d803acbb262edc0c75fdbd872364438.scope.
Jan 26 16:32:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:32:37 compute-0 podman[376849]: 2026-01-26 16:32:37.210638714 +0000 UTC m=+0.038843967 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:32:37 compute-0 podman[376849]: 2026-01-26 16:32:37.325275436 +0000 UTC m=+0.153480729 container init fa996678eb36b365a36af6bf86065b1c7d803acbb262edc0c75fdbd872364438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_antonelli, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:32:37 compute-0 podman[376849]: 2026-01-26 16:32:37.3368325 +0000 UTC m=+0.165037743 container start fa996678eb36b365a36af6bf86065b1c7d803acbb262edc0c75fdbd872364438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_antonelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:32:37 compute-0 goofy_antonelli[376865]: 167 167
Jan 26 16:32:37 compute-0 systemd[1]: libpod-fa996678eb36b365a36af6bf86065b1c7d803acbb262edc0c75fdbd872364438.scope: Deactivated successfully.
Jan 26 16:32:37 compute-0 podman[376849]: 2026-01-26 16:32:37.343189507 +0000 UTC m=+0.171394800 container attach fa996678eb36b365a36af6bf86065b1c7d803acbb262edc0c75fdbd872364438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 16:32:37 compute-0 podman[376849]: 2026-01-26 16:32:37.343518764 +0000 UTC m=+0.171724007 container died fa996678eb36b365a36af6bf86065b1c7d803acbb262edc0c75fdbd872364438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_antonelli, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:32:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-36b6f4bb128697e372b5d92622da2aca3566ca193ef2b134143d982daff6b413-merged.mount: Deactivated successfully.
Jan 26 16:32:37 compute-0 podman[376849]: 2026-01-26 16:32:37.392668334 +0000 UTC m=+0.220873557 container remove fa996678eb36b365a36af6bf86065b1c7d803acbb262edc0c75fdbd872364438 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_antonelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:32:37 compute-0 systemd[1]: libpod-conmon-fa996678eb36b365a36af6bf86065b1c7d803acbb262edc0c75fdbd872364438.scope: Deactivated successfully.
Jan 26 16:32:37 compute-0 podman[376889]: 2026-01-26 16:32:37.622078619 +0000 UTC m=+0.052429451 container create ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_brown, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:32:37 compute-0 systemd[1]: Started libpod-conmon-ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de.scope.
Jan 26 16:32:37 compute-0 podman[376889]: 2026-01-26 16:32:37.601080952 +0000 UTC m=+0.031431864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:32:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf9f5dbfe2845219d823620a237a3a76b765960c2f5984d63f423ebc33661ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf9f5dbfe2845219d823620a237a3a76b765960c2f5984d63f423ebc33661ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf9f5dbfe2845219d823620a237a3a76b765960c2f5984d63f423ebc33661ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdf9f5dbfe2845219d823620a237a3a76b765960c2f5984d63f423ebc33661ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:32:37 compute-0 podman[376889]: 2026-01-26 16:32:37.745485396 +0000 UTC m=+0.175836248 container init ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:32:37 compute-0 podman[376889]: 2026-01-26 16:32:37.758502656 +0000 UTC m=+0.188853498 container start ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_brown, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:32:37 compute-0 podman[376889]: 2026-01-26 16:32:37.761622173 +0000 UTC m=+0.191973095 container attach ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:32:38 compute-0 nova_compute[239965]: 2026-01-26 16:32:38.046 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:38 compute-0 ceph-mon[75140]: pgmap v2474: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.3 MiB/s wr, 126 op/s
Jan 26 16:32:38 compute-0 lvm[376986]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:32:38 compute-0 lvm[376986]: VG ceph_vg2 finished
Jan 26 16:32:38 compute-0 lvm[376983]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:32:38 compute-0 lvm[376983]: VG ceph_vg0 finished
Jan 26 16:32:38 compute-0 lvm[376985]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:32:38 compute-0 lvm[376985]: VG ceph_vg1 finished
Jan 26 16:32:38 compute-0 compassionate_brown[376905]: {}
Jan 26 16:32:38 compute-0 systemd[1]: libpod-ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de.scope: Deactivated successfully.
Jan 26 16:32:38 compute-0 systemd[1]: libpod-ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de.scope: Consumed 1.420s CPU time.
Jan 26 16:32:38 compute-0 podman[376889]: 2026-01-26 16:32:38.634717317 +0000 UTC m=+1.065068189 container died ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:32:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fdf9f5dbfe2845219d823620a237a3a76b765960c2f5984d63f423ebc33661ed-merged.mount: Deactivated successfully.
Jan 26 16:32:38 compute-0 podman[376889]: 2026-01-26 16:32:38.690310156 +0000 UTC m=+1.120661028 container remove ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_brown, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:32:38 compute-0 systemd[1]: libpod-conmon-ec629722eafccf591f09ca70edd46d81c8fe3184f3cc5b34528fe9d04e3a89de.scope: Deactivated successfully.
Jan 26 16:32:38 compute-0 sudo[376812]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:32:38 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:32:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:32:38 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:32:38 compute-0 sudo[376999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:32:38 compute-0 sudo[376999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:32:38 compute-0 sudo[376999]: pam_unix(sudo:session): session closed for user root
Jan 26 16:32:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 113 op/s
Jan 26 16:32:39 compute-0 nova_compute[239965]: 2026-01-26 16:32:39.227 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:32:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:32:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:40 compute-0 ceph-mon[75140]: pgmap v2475: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 113 op/s
Jan 26 16:32:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 26 16:32:42 compute-0 ceph-mon[75140]: pgmap v2476: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 94 op/s
Jan 26 16:32:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 937 B/s wr, 75 op/s
Jan 26 16:32:43 compute-0 nova_compute[239965]: 2026-01-26 16:32:43.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:44 compute-0 nova_compute[239965]: 2026-01-26 16:32:44.229 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:44 compute-0 nova_compute[239965]: 2026-01-26 16:32:44.270 239969 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1769445149.269902, dd4373db-38a9-48e5-bcc2-059e3757fc63 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 26 16:32:44 compute-0 nova_compute[239965]: 2026-01-26 16:32:44.271 239969 INFO nova.compute.manager [-] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] VM Stopped (Lifecycle Event)
Jan 26 16:32:44 compute-0 nova_compute[239965]: 2026-01-26 16:32:44.295 239969 DEBUG nova.compute.manager [None req-e1045e27-ddbf-4415-b2b6-1f496e13ce19 - - - - - -] [instance: dd4373db-38a9-48e5-bcc2-059e3757fc63] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 26 16:32:44 compute-0 ceph-mon[75140]: pgmap v2477: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 937 B/s wr, 75 op/s
Jan 26 16:32:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 596 B/s wr, 3 op/s
Jan 26 16:32:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.414733) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445165414780, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 1973, "num_deletes": 258, "total_data_size": 3203551, "memory_usage": 3252672, "flush_reason": "Manual Compaction"}
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445165452438, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3132957, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 50045, "largest_seqno": 52017, "table_properties": {"data_size": 3123979, "index_size": 5600, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 18406, "raw_average_key_size": 20, "raw_value_size": 3105897, "raw_average_value_size": 3383, "num_data_blocks": 248, "num_entries": 918, "num_filter_entries": 918, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769444963, "oldest_key_time": 1769444963, "file_creation_time": 1769445165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 37761 microseconds, and 8581 cpu microseconds.
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.452494) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3132957 bytes OK
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.452516) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.454461) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.454478) EVENT_LOG_v1 {"time_micros": 1769445165454472, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.454496) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3195194, prev total WAL file size 3195194, number of live WAL files 2.
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.455617) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303037' seq:72057594037927935, type:22 .. '6C6F676D0032323539' seq:0, type:0; will stop at (end)
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3059KB)], [116(8391KB)]
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445165455680, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11725425, "oldest_snapshot_seqno": -1}
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7557 keys, 11612320 bytes, temperature: kUnknown
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445165581479, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 11612320, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11560371, "index_size": 31933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 195764, "raw_average_key_size": 25, "raw_value_size": 11424178, "raw_average_value_size": 1511, "num_data_blocks": 1255, "num_entries": 7557, "num_filter_entries": 7557, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769445165, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.581766) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 11612320 bytes
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.583568) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.1 rd, 92.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.0, 8.2 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(7.4) write-amplify(3.7) OK, records in: 8089, records dropped: 532 output_compression: NoCompression
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.583583) EVENT_LOG_v1 {"time_micros": 1769445165583576, "job": 70, "event": "compaction_finished", "compaction_time_micros": 125906, "compaction_time_cpu_micros": 57202, "output_level": 6, "num_output_files": 1, "total_output_size": 11612320, "num_input_records": 8089, "num_output_records": 7557, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445165584374, "job": 70, "event": "table_file_deletion", "file_number": 118}
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445165585905, "job": 70, "event": "table_file_deletion", "file_number": 116}
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.455473) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.586055) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.586063) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.586065) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.586067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:45 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:45.586070) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:46 compute-0 ceph-mon[75140]: pgmap v2478: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 KiB/s rd, 596 B/s wr, 3 op/s
Jan 26 16:32:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:48 compute-0 nova_compute[239965]: 2026-01-26 16:32:48.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:48 compute-0 ceph-mon[75140]: pgmap v2479: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:48 compute-0 podman[377024]: 2026-01-26 16:32:48.406677168 +0000 UTC m=+0.090780794 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:32:48 compute-0 podman[377025]: 2026-01-26 16:32:48.437884966 +0000 UTC m=+0.122435194 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:32:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:32:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537779885' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:32:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:32:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3537779885' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:32:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:49 compute-0 nova_compute[239965]: 2026-01-26 16:32:49.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7734790147961586e-05 of space, bias 1.0, pg target 0.005320437044388476 quantized to 32 (current 32)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010183187110773201 of space, bias 1.0, pg target 0.305495613323196 quantized to 32 (current 32)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.697362169604939e-07 of space, bias 4.0, pg target 0.0008036834603525927 quantized to 16 (current 16)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:32:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:32:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3537779885' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:32:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3537779885' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:32:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:50 compute-0 ceph-mon[75140]: pgmap v2480: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:52 compute-0 ceph-mon[75140]: pgmap v2481: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:32:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 16:32:53 compute-0 nova_compute[239965]: 2026-01-26 16:32:53.055 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:53 compute-0 nova_compute[239965]: 2026-01-26 16:32:53.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:53 compute-0 ceph-mon[75140]: pgmap v2482: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 16:32:54 compute-0 nova_compute[239965]: 2026-01-26 16:32:54.259 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 26 16:32:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:32:56 compute-0 ceph-mon[75140]: pgmap v2483: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 26 16:32:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 16:32:58 compute-0 nova_compute[239965]: 2026-01-26 16:32:58.058 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:32:58 compute-0 ceph-mon[75140]: pgmap v2484: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.366398) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445178366498, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 362, "num_deletes": 251, "total_data_size": 217099, "memory_usage": 225152, "flush_reason": "Manual Compaction"}
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445178372094, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 214284, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52018, "largest_seqno": 52379, "table_properties": {"data_size": 212061, "index_size": 387, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5522, "raw_average_key_size": 18, "raw_value_size": 207702, "raw_average_value_size": 696, "num_data_blocks": 17, "num_entries": 298, "num_filter_entries": 298, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769445165, "oldest_key_time": 1769445165, "file_creation_time": 1769445178, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 5732 microseconds, and 1618 cpu microseconds.
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.372145) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 214284 bytes OK
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.372167) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.373793) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.373806) EVENT_LOG_v1 {"time_micros": 1769445178373801, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.373823) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 214698, prev total WAL file size 214698, number of live WAL files 2.
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.374300) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(209KB)], [119(11MB)]
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445178374358, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 11826604, "oldest_snapshot_seqno": -1}
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7346 keys, 10176805 bytes, temperature: kUnknown
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445178463681, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10176805, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10127485, "index_size": 29880, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18373, "raw_key_size": 192067, "raw_average_key_size": 26, "raw_value_size": 9996208, "raw_average_value_size": 1360, "num_data_blocks": 1161, "num_entries": 7346, "num_filter_entries": 7346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769445178, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.464190) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10176805 bytes
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.465724) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 132.2 rd, 113.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 11.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(102.7) write-amplify(47.5) OK, records in: 7855, records dropped: 509 output_compression: NoCompression
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.465753) EVENT_LOG_v1 {"time_micros": 1769445178465733, "job": 72, "event": "compaction_finished", "compaction_time_micros": 89457, "compaction_time_cpu_micros": 46934, "output_level": 6, "num_output_files": 1, "total_output_size": 10176805, "num_input_records": 7855, "num_output_records": 7346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445178465938, "job": 72, "event": "table_file_deletion", "file_number": 121}
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445178468478, "job": 72, "event": "table_file_deletion", "file_number": 119}
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.374168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.468550) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.468554) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.468556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.468558) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:58 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:32:58.468560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:32:58 compute-0 nova_compute[239965]: 2026-01-26 16:32:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:32:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Jan 26 16:32:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:32:59.255 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:32:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:32:59.256 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:32:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:32:59.256 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:32:59 compute-0 nova_compute[239965]: 2026-01-26 16:32:59.261 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:00 compute-0 ceph-mon[75140]: pgmap v2485: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Jan 26 16:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:33:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 16:33:02 compute-0 ceph-mon[75140]: pgmap v2486: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 16:33:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 16:33:03 compute-0 nova_compute[239965]: 2026-01-26 16:33:03.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:04 compute-0 ceph-mon[75140]: pgmap v2487: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 16:33:04 compute-0 nova_compute[239965]: 2026-01-26 16:33:04.263 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 16:33:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:05 compute-0 nova_compute[239965]: 2026-01-26 16:33:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:05 compute-0 nova_compute[239965]: 2026-01-26 16:33:05.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:33:05 compute-0 nova_compute[239965]: 2026-01-26 16:33:05.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:33:05 compute-0 nova_compute[239965]: 2026-01-26 16:33:05.529 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:33:05 compute-0 nova_compute[239965]: 2026-01-26 16:33:05.529 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:06 compute-0 ceph-mon[75140]: pgmap v2488: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 16:33:06 compute-0 nova_compute[239965]: 2026-01-26 16:33:06.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 16:33:07 compute-0 nova_compute[239965]: 2026-01-26 16:33:07.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:07 compute-0 nova_compute[239965]: 2026-01-26 16:33:07.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:07 compute-0 nova_compute[239965]: 2026-01-26 16:33:07.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:33:08 compute-0 nova_compute[239965]: 2026-01-26 16:33:08.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:08 compute-0 ceph-mon[75140]: pgmap v2489: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 16:33:08 compute-0 nova_compute[239965]: 2026-01-26 16:33:08.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 26 16:33:09 compute-0 nova_compute[239965]: 2026-01-26 16:33:09.264 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:10 compute-0 ceph-mon[75140]: pgmap v2490: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 26 16:33:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 26 16:33:12 compute-0 ceph-mon[75140]: pgmap v2491: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 26 16:33:12 compute-0 nova_compute[239965]: 2026-01-26 16:33:12.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:12 compute-0 nova_compute[239965]: 2026-01-26 16:33:12.532 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:33:12 compute-0 nova_compute[239965]: 2026-01-26 16:33:12.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:33:12 compute-0 nova_compute[239965]: 2026-01-26 16:33:12.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:33:12 compute-0 nova_compute[239965]: 2026-01-26 16:33:12.533 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:33:12 compute-0 nova_compute[239965]: 2026-01-26 16:33:12.533 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:33:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:33:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4039744005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.066 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.097 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.267 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.269 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3469MB free_disk=59.987217370420694GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.269 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.269 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.342 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.343 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.359 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.380 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.380 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:33:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4039744005' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.393 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.411 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.423 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:33:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:33:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3996606695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.980 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:33:13 compute-0 nova_compute[239965]: 2026-01-26 16:33:13.990 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:33:14 compute-0 nova_compute[239965]: 2026-01-26 16:33:14.006 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:33:14 compute-0 nova_compute[239965]: 2026-01-26 16:33:14.024 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:33:14 compute-0 nova_compute[239965]: 2026-01-26 16:33:14.024 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.755s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:33:14 compute-0 nova_compute[239965]: 2026-01-26 16:33:14.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:14 compute-0 ceph-mon[75140]: pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3996606695' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:33:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:33:16.180 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:33:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:33:16.182 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:33:16 compute-0 nova_compute[239965]: 2026-01-26 16:33:16.181 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:16 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:33:16.183 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:33:16 compute-0 ceph-mon[75140]: pgmap v2493: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:18 compute-0 nova_compute[239965]: 2026-01-26 16:33:18.069 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:18 compute-0 ceph-mon[75140]: pgmap v2494: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:19 compute-0 nova_compute[239965]: 2026-01-26 16:33:19.308 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:19 compute-0 podman[377112]: 2026-01-26 16:33:19.399132362 +0000 UTC m=+0.084967752 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:33:19 compute-0 podman[377113]: 2026-01-26 16:33:19.432472542 +0000 UTC m=+0.100684778 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
tcib_managed=true, container_name=ovn_controller)
Jan 26 16:33:20 compute-0 ceph-mon[75140]: pgmap v2495: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:22 compute-0 ceph-mon[75140]: pgmap v2496: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:23 compute-0 nova_compute[239965]: 2026-01-26 16:33:23.072 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:24 compute-0 ceph-mon[75140]: pgmap v2497: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:24 compute-0 nova_compute[239965]: 2026-01-26 16:33:24.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:26 compute-0 ceph-mon[75140]: pgmap v2498: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Jan 26 16:33:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Jan 26 16:33:27 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Jan 26 16:33:27 compute-0 sshd-session[377157]: Invalid user sol from 45.148.10.240 port 58178
Jan 26 16:33:28 compute-0 sshd-session[377157]: Connection closed by invalid user sol 45.148.10.240 port 58178 [preauth]
Jan 26 16:33:28 compute-0 nova_compute[239965]: 2026-01-26 16:33:28.075 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Jan 26 16:33:28 compute-0 ceph-mon[75140]: pgmap v2499: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:28 compute-0 ceph-mon[75140]: osdmap e321: 3 total, 3 up, 3 in
Jan 26 16:33:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Jan 26 16:33:28 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Jan 26 16:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:33:28
Jan 26 16:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'backups', 'images', 'default.rgw.meta', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 26 16:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:33:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 81 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 895 B/s rd, 383 B/s wr, 1 op/s
Jan 26 16:33:29 compute-0 ceph-mon[75140]: osdmap e322: 3 total, 3 up, 3 in
Jan 26 16:33:29 compute-0 nova_compute[239965]: 2026-01-26 16:33:29.312 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:30 compute-0 ceph-mon[75140]: pgmap v2502: 305 pgs: 305 active+clean; 81 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 895 B/s rd, 383 B/s wr, 1 op/s
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:33:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:33:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:33:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 71 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 26 16:33:32 compute-0 ceph-mon[75140]: pgmap v2503: 305 pgs: 305 active+clean; 71 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 26 16:33:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Jan 26 16:33:33 compute-0 nova_compute[239965]: 2026-01-26 16:33:33.078 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:34 compute-0 ceph-mon[75140]: pgmap v2504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Jan 26 16:33:34 compute-0 nova_compute[239965]: 2026-01-26 16:33:34.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Jan 26 16:33:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e322 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Jan 26 16:33:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Jan 26 16:33:35 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Jan 26 16:33:36 compute-0 ceph-mon[75140]: pgmap v2505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 3.5 KiB/s wr, 64 op/s
Jan 26 16:33:36 compute-0 ceph-mon[75140]: osdmap e323: 3 total, 3 up, 3 in
Jan 26 16:33:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 56 op/s
Jan 26 16:33:38 compute-0 nova_compute[239965]: 2026-01-26 16:33:38.081 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:38 compute-0 ceph-mon[75140]: pgmap v2507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.9 KiB/s wr, 56 op/s
Jan 26 16:33:38 compute-0 sudo[377159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:33:38 compute-0 sudo[377159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:33:38 compute-0 sudo[377159]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:38 compute-0 sudo[377184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:33:38 compute-0 sudo[377184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:33:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.5 KiB/s wr, 49 op/s
Jan 26 16:33:39 compute-0 nova_compute[239965]: 2026-01-26 16:33:39.316 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:39 compute-0 sudo[377184]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:33:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:33:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:33:39 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:33:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:33:39 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:33:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:33:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:33:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:33:39 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:33:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:33:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:33:39 compute-0 sudo[377240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:33:39 compute-0 sudo[377240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:33:39 compute-0 sudo[377240]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:39 compute-0 sudo[377265]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:33:39 compute-0 sudo[377265]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:33:40 compute-0 podman[377302]: 2026-01-26 16:33:39.969260991 +0000 UTC m=+0.019716316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:33:40 compute-0 podman[377302]: 2026-01-26 16:33:40.408838338 +0000 UTC m=+0.459293633 container create 80ade52b4a71b6c40a5c0b34a932efbbcde7c7e1816ba6192c3d3aca6f0f763a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cerf, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:33:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:40 compute-0 ceph-mon[75140]: pgmap v2508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.5 KiB/s wr, 49 op/s
Jan 26 16:33:40 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:33:40 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:33:40 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:33:40 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:33:40 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:33:40 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:33:40 compute-0 systemd[1]: Started libpod-conmon-80ade52b4a71b6c40a5c0b34a932efbbcde7c7e1816ba6192c3d3aca6f0f763a.scope.
Jan 26 16:33:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:33:40 compute-0 podman[377302]: 2026-01-26 16:33:40.689699399 +0000 UTC m=+0.740154764 container init 80ade52b4a71b6c40a5c0b34a932efbbcde7c7e1816ba6192c3d3aca6f0f763a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cerf, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 16:33:40 compute-0 podman[377302]: 2026-01-26 16:33:40.70477407 +0000 UTC m=+0.755229385 container start 80ade52b4a71b6c40a5c0b34a932efbbcde7c7e1816ba6192c3d3aca6f0f763a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:33:40 compute-0 practical_cerf[377318]: 167 167
Jan 26 16:33:40 compute-0 systemd[1]: libpod-80ade52b4a71b6c40a5c0b34a932efbbcde7c7e1816ba6192c3d3aca6f0f763a.scope: Deactivated successfully.
Jan 26 16:33:40 compute-0 podman[377302]: 2026-01-26 16:33:40.723403589 +0000 UTC m=+0.773858964 container attach 80ade52b4a71b6c40a5c0b34a932efbbcde7c7e1816ba6192c3d3aca6f0f763a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cerf, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:33:40 compute-0 podman[377302]: 2026-01-26 16:33:40.724381382 +0000 UTC m=+0.774836667 container died 80ade52b4a71b6c40a5c0b34a932efbbcde7c7e1816ba6192c3d3aca6f0f763a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:33:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5423a6e89fbf68a65eb939dde85405d28bb39193dd99e6a9a9a59060677dd33-merged.mount: Deactivated successfully.
Jan 26 16:33:40 compute-0 podman[377302]: 2026-01-26 16:33:40.79090759 +0000 UTC m=+0.841362885 container remove 80ade52b4a71b6c40a5c0b34a932efbbcde7c7e1816ba6192c3d3aca6f0f763a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_cerf, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 16:33:40 compute-0 systemd[1]: libpod-conmon-80ade52b4a71b6c40a5c0b34a932efbbcde7c7e1816ba6192c3d3aca6f0f763a.scope: Deactivated successfully.
Jan 26 16:33:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Jan 26 16:33:41 compute-0 podman[377344]: 2026-01-26 16:33:41.039637931 +0000 UTC m=+0.091699658 container create d42e66005eda8be3ae6bc57bb5134058f014010fe0c20cf72cac7f3e89331c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hugle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:33:41 compute-0 podman[377344]: 2026-01-26 16:33:40.993613838 +0000 UTC m=+0.045675575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:33:41 compute-0 systemd[1]: Started libpod-conmon-d42e66005eda8be3ae6bc57bb5134058f014010fe0c20cf72cac7f3e89331c47.scope.
Jan 26 16:33:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a07440085abf5f692cb305c69cce0a941567e5e835e64da892812f89e087b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a07440085abf5f692cb305c69cce0a941567e5e835e64da892812f89e087b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a07440085abf5f692cb305c69cce0a941567e5e835e64da892812f89e087b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a07440085abf5f692cb305c69cce0a941567e5e835e64da892812f89e087b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf7a07440085abf5f692cb305c69cce0a941567e5e835e64da892812f89e087b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:41 compute-0 podman[377344]: 2026-01-26 16:33:41.180928097 +0000 UTC m=+0.232989874 container init d42e66005eda8be3ae6bc57bb5134058f014010fe0c20cf72cac7f3e89331c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hugle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:33:41 compute-0 podman[377344]: 2026-01-26 16:33:41.189118888 +0000 UTC m=+0.241180615 container start d42e66005eda8be3ae6bc57bb5134058f014010fe0c20cf72cac7f3e89331c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hugle, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:33:41 compute-0 podman[377344]: 2026-01-26 16:33:41.194164283 +0000 UTC m=+0.246225980 container attach d42e66005eda8be3ae6bc57bb5134058f014010fe0c20cf72cac7f3e89331c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:33:41 compute-0 amazing_hugle[377360]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:33:41 compute-0 amazing_hugle[377360]: --> All data devices are unavailable
Jan 26 16:33:41 compute-0 systemd[1]: libpod-d42e66005eda8be3ae6bc57bb5134058f014010fe0c20cf72cac7f3e89331c47.scope: Deactivated successfully.
Jan 26 16:33:41 compute-0 podman[377344]: 2026-01-26 16:33:41.703546217 +0000 UTC m=+0.755607914 container died d42e66005eda8be3ae6bc57bb5134058f014010fe0c20cf72cac7f3e89331c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 16:33:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf7a07440085abf5f692cb305c69cce0a941567e5e835e64da892812f89e087b-merged.mount: Deactivated successfully.
Jan 26 16:33:41 compute-0 podman[377344]: 2026-01-26 16:33:41.839311308 +0000 UTC m=+0.891372995 container remove d42e66005eda8be3ae6bc57bb5134058f014010fe0c20cf72cac7f3e89331c47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 16:33:41 compute-0 systemd[1]: libpod-conmon-d42e66005eda8be3ae6bc57bb5134058f014010fe0c20cf72cac7f3e89331c47.scope: Deactivated successfully.
Jan 26 16:33:41 compute-0 sudo[377265]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:41 compute-0 sudo[377393]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:33:41 compute-0 sudo[377393]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:33:41 compute-0 sudo[377393]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:42 compute-0 sudo[377418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:33:42 compute-0 sudo[377418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:33:42 compute-0 podman[377454]: 2026-01-26 16:33:42.407585972 +0000 UTC m=+0.119124402 container create 0f2805514725b163557b974d6e5e46baeea1a05b34a2ad3030166ab3f2c1ad4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mahavira, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 16:33:42 compute-0 podman[377454]: 2026-01-26 16:33:42.320480278 +0000 UTC m=+0.032018748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:33:42 compute-0 systemd[1]: Started libpod-conmon-0f2805514725b163557b974d6e5e46baeea1a05b34a2ad3030166ab3f2c1ad4e.scope.
Jan 26 16:33:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:33:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:43 compute-0 nova_compute[239965]: 2026-01-26 16:33:43.083 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:43 compute-0 ceph-mon[75140]: pgmap v2509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 26 op/s
Jan 26 16:33:43 compute-0 podman[377454]: 2026-01-26 16:33:43.915053386 +0000 UTC m=+1.626591796 container init 0f2805514725b163557b974d6e5e46baeea1a05b34a2ad3030166ab3f2c1ad4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mahavira, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:33:43 compute-0 podman[377454]: 2026-01-26 16:33:43.926461747 +0000 UTC m=+1.638000137 container start 0f2805514725b163557b974d6e5e46baeea1a05b34a2ad3030166ab3f2c1ad4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:33:43 compute-0 dazzling_mahavira[377470]: 167 167
Jan 26 16:33:43 compute-0 systemd[1]: libpod-0f2805514725b163557b974d6e5e46baeea1a05b34a2ad3030166ab3f2c1ad4e.scope: Deactivated successfully.
Jan 26 16:33:43 compute-0 podman[377454]: 2026-01-26 16:33:43.940083182 +0000 UTC m=+1.651621572 container attach 0f2805514725b163557b974d6e5e46baeea1a05b34a2ad3030166ab3f2c1ad4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:33:43 compute-0 podman[377454]: 2026-01-26 16:33:43.941410835 +0000 UTC m=+1.652949225 container died 0f2805514725b163557b974d6e5e46baeea1a05b34a2ad3030166ab3f2c1ad4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mahavira, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:33:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-936daf799c28d12fc761041ccf9364cee60ad72f4d5893fa5ac453595b8f6443-merged.mount: Deactivated successfully.
Jan 26 16:33:43 compute-0 podman[377454]: 2026-01-26 16:33:43.986694649 +0000 UTC m=+1.698233039 container remove 0f2805514725b163557b974d6e5e46baeea1a05b34a2ad3030166ab3f2c1ad4e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_mahavira, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:33:43 compute-0 systemd[1]: libpod-conmon-0f2805514725b163557b974d6e5e46baeea1a05b34a2ad3030166ab3f2c1ad4e.scope: Deactivated successfully.
Jan 26 16:33:44 compute-0 podman[377496]: 2026-01-26 16:33:44.144517233 +0000 UTC m=+0.051778255 container create 5d14fc4114267d7dc2d1bce7cf5488bd5cdc729edd1523142428e12c2b30e40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_margulis, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:33:44 compute-0 systemd[1]: Started libpod-conmon-5d14fc4114267d7dc2d1bce7cf5488bd5cdc729edd1523142428e12c2b30e40c.scope.
Jan 26 16:33:44 compute-0 podman[377496]: 2026-01-26 16:33:44.117509339 +0000 UTC m=+0.024770451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:33:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6ea4b7ffb6e780a79f44992503450ed2dec5b7d56c21ab32a015ef205235b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6ea4b7ffb6e780a79f44992503450ed2dec5b7d56c21ab32a015ef205235b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6ea4b7ffb6e780a79f44992503450ed2dec5b7d56c21ab32a015ef205235b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6ea4b7ffb6e780a79f44992503450ed2dec5b7d56c21ab32a015ef205235b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:44 compute-0 podman[377496]: 2026-01-26 16:33:44.237510112 +0000 UTC m=+0.144771144 container init 5d14fc4114267d7dc2d1bce7cf5488bd5cdc729edd1523142428e12c2b30e40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_margulis, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:33:44 compute-0 podman[377496]: 2026-01-26 16:33:44.248669197 +0000 UTC m=+0.155930219 container start 5d14fc4114267d7dc2d1bce7cf5488bd5cdc729edd1523142428e12c2b30e40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_margulis, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:33:44 compute-0 podman[377496]: 2026-01-26 16:33:44.252206373 +0000 UTC m=+0.159467435 container attach 5d14fc4114267d7dc2d1bce7cf5488bd5cdc729edd1523142428e12c2b30e40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_margulis, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:33:44 compute-0 nova_compute[239965]: 2026-01-26 16:33:44.318 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:44 compute-0 sharp_margulis[377512]: {
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:     "0": [
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:         {
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "devices": [
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "/dev/loop3"
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             ],
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_name": "ceph_lv0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_size": "21470642176",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "name": "ceph_lv0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "tags": {
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.cluster_name": "ceph",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.crush_device_class": "",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.encrypted": "0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.objectstore": "bluestore",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.osd_id": "0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.type": "block",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.vdo": "0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.with_tpm": "0"
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             },
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "type": "block",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "vg_name": "ceph_vg0"
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:         }
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:     ],
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:     "1": [
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:         {
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "devices": [
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "/dev/loop4"
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             ],
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_name": "ceph_lv1",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_size": "21470642176",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "name": "ceph_lv1",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "tags": {
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.cluster_name": "ceph",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.crush_device_class": "",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.encrypted": "0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.objectstore": "bluestore",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.osd_id": "1",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.type": "block",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.vdo": "0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.with_tpm": "0"
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             },
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "type": "block",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "vg_name": "ceph_vg1"
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:         }
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:     ],
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:     "2": [
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:         {
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "devices": [
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "/dev/loop5"
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             ],
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_name": "ceph_lv2",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_size": "21470642176",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "name": "ceph_lv2",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "tags": {
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.cluster_name": "ceph",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.crush_device_class": "",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.encrypted": "0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.objectstore": "bluestore",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.osd_id": "2",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.type": "block",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.vdo": "0",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:                 "ceph.with_tpm": "0"
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             },
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "type": "block",
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:             "vg_name": "ceph_vg2"
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:         }
Jan 26 16:33:44 compute-0 sharp_margulis[377512]:     ]
Jan 26 16:33:44 compute-0 sharp_margulis[377512]: }
Jan 26 16:33:44 compute-0 systemd[1]: libpod-5d14fc4114267d7dc2d1bce7cf5488bd5cdc729edd1523142428e12c2b30e40c.scope: Deactivated successfully.
Jan 26 16:33:44 compute-0 podman[377496]: 2026-01-26 16:33:44.612217712 +0000 UTC m=+0.519478754 container died 5d14fc4114267d7dc2d1bce7cf5488bd5cdc729edd1523142428e12c2b30e40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_margulis, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:33:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc6ea4b7ffb6e780a79f44992503450ed2dec5b7d56c21ab32a015ef205235b4-merged.mount: Deactivated successfully.
Jan 26 16:33:44 compute-0 podman[377496]: 2026-01-26 16:33:44.672584067 +0000 UTC m=+0.579845089 container remove 5d14fc4114267d7dc2d1bce7cf5488bd5cdc729edd1523142428e12c2b30e40c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_margulis, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:33:44 compute-0 systemd[1]: libpod-conmon-5d14fc4114267d7dc2d1bce7cf5488bd5cdc729edd1523142428e12c2b30e40c.scope: Deactivated successfully.
Jan 26 16:33:44 compute-0 sudo[377418]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:44 compute-0 sudo[377533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:33:44 compute-0 sudo[377533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:33:44 compute-0 sudo[377533]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:44 compute-0 sudo[377558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:33:44 compute-0 sudo[377558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:33:44 compute-0 ceph-mon[75140]: pgmap v2510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:45 compute-0 podman[377596]: 2026-01-26 16:33:45.113483027 +0000 UTC m=+0.037124274 container create 6aed064d8ac91aa21b9dd4563e43ad0d928f465ba3e89a6bd4fd29b3b57e2048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 16:33:45 compute-0 systemd[1]: Started libpod-conmon-6aed064d8ac91aa21b9dd4563e43ad0d928f465ba3e89a6bd4fd29b3b57e2048.scope.
Jan 26 16:33:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:33:45 compute-0 podman[377596]: 2026-01-26 16:33:45.098219901 +0000 UTC m=+0.021861168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:33:45 compute-0 podman[377596]: 2026-01-26 16:33:45.199501954 +0000 UTC m=+0.123143221 container init 6aed064d8ac91aa21b9dd4563e43ad0d928f465ba3e89a6bd4fd29b3b57e2048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:33:45 compute-0 podman[377596]: 2026-01-26 16:33:45.206702091 +0000 UTC m=+0.130343338 container start 6aed064d8ac91aa21b9dd4563e43ad0d928f465ba3e89a6bd4fd29b3b57e2048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldstine, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 26 16:33:45 compute-0 strange_goldstine[377612]: 167 167
Jan 26 16:33:45 compute-0 systemd[1]: libpod-6aed064d8ac91aa21b9dd4563e43ad0d928f465ba3e89a6bd4fd29b3b57e2048.scope: Deactivated successfully.
Jan 26 16:33:45 compute-0 podman[377596]: 2026-01-26 16:33:45.212798231 +0000 UTC m=+0.136439508 container attach 6aed064d8ac91aa21b9dd4563e43ad0d928f465ba3e89a6bd4fd29b3b57e2048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:33:45 compute-0 podman[377596]: 2026-01-26 16:33:45.213792546 +0000 UTC m=+0.137433803 container died 6aed064d8ac91aa21b9dd4563e43ad0d928f465ba3e89a6bd4fd29b3b57e2048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldstine, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 16:33:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-71538352f45847e8885b93125352f13f313771a44728c1bd188113b34f6800d9-merged.mount: Deactivated successfully.
Jan 26 16:33:45 compute-0 podman[377596]: 2026-01-26 16:33:45.326690184 +0000 UTC m=+0.250331421 container remove 6aed064d8ac91aa21b9dd4563e43ad0d928f465ba3e89a6bd4fd29b3b57e2048 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:33:45 compute-0 systemd[1]: libpod-conmon-6aed064d8ac91aa21b9dd4563e43ad0d928f465ba3e89a6bd4fd29b3b57e2048.scope: Deactivated successfully.
Jan 26 16:33:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:45 compute-0 podman[377638]: 2026-01-26 16:33:45.5398732 +0000 UTC m=+0.064552950 container create 5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 16:33:45 compute-0 systemd[1]: Started libpod-conmon-5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807.scope.
Jan 26 16:33:45 compute-0 podman[377638]: 2026-01-26 16:33:45.503957476 +0000 UTC m=+0.028637276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:33:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04fb7237564f6b70ec2b9ddf27bc2f342a9b31c4e835ba6c70e6efde904d44c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04fb7237564f6b70ec2b9ddf27bc2f342a9b31c4e835ba6c70e6efde904d44c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04fb7237564f6b70ec2b9ddf27bc2f342a9b31c4e835ba6c70e6efde904d44c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d04fb7237564f6b70ec2b9ddf27bc2f342a9b31c4e835ba6c70e6efde904d44c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:33:45 compute-0 podman[377638]: 2026-01-26 16:33:45.646948504 +0000 UTC m=+0.171628264 container init 5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:33:45 compute-0 podman[377638]: 2026-01-26 16:33:45.663632415 +0000 UTC m=+0.188312155 container start 5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:33:45 compute-0 podman[377638]: 2026-01-26 16:33:45.66789732 +0000 UTC m=+0.192577110 container attach 5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:33:45 compute-0 ceph-mon[75140]: pgmap v2511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:46 compute-0 lvm[377733]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:33:46 compute-0 lvm[377733]: VG ceph_vg0 finished
Jan 26 16:33:46 compute-0 lvm[377734]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:33:46 compute-0 lvm[377734]: VG ceph_vg1 finished
Jan 26 16:33:46 compute-0 lvm[377736]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:33:46 compute-0 lvm[377736]: VG ceph_vg2 finished
Jan 26 16:33:46 compute-0 hopeful_moser[377655]: {}
Jan 26 16:33:46 compute-0 systemd[1]: libpod-5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807.scope: Deactivated successfully.
Jan 26 16:33:46 compute-0 systemd[1]: libpod-5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807.scope: Consumed 1.428s CPU time.
Jan 26 16:33:46 compute-0 podman[377638]: 2026-01-26 16:33:46.504900276 +0000 UTC m=+1.029579996 container died 5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 16:33:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-d04fb7237564f6b70ec2b9ddf27bc2f342a9b31c4e835ba6c70e6efde904d44c-merged.mount: Deactivated successfully.
Jan 26 16:33:46 compute-0 podman[377638]: 2026-01-26 16:33:46.552331524 +0000 UTC m=+1.077011284 container remove 5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_moser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:33:46 compute-0 systemd[1]: libpod-conmon-5df321400d2f830e13a496e4c222520084305307160a22996e1af68268b3c807.scope: Deactivated successfully.
Jan 26 16:33:46 compute-0 sudo[377558]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:33:46 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:33:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:33:46 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:33:46 compute-0 sudo[377751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:33:46 compute-0 sudo[377751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:33:46 compute-0 sudo[377751]: pam_unix(sudo:session): session closed for user root
Jan 26 16:33:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:33:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:33:48 compute-0 nova_compute[239965]: 2026-01-26 16:33:48.087 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:48 compute-0 ceph-mon[75140]: pgmap v2512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:33:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3018870048' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:33:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:33:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3018870048' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7829757032663304e-05 of space, bias 1.0, pg target 0.005348927109798991 quantized to 32 (current 32)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006720730823224477 of space, bias 1.0, pg target 0.20162192469673432 quantized to 32 (current 32)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.703727450417054e-07 of space, bias 4.0, pg target 0.0008044472940500464 quantized to 16 (current 16)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:33:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:33:49 compute-0 nova_compute[239965]: 2026-01-26 16:33:49.319 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3018870048' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:33:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3018870048' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:33:50 compute-0 podman[377776]: 2026-01-26 16:33:50.380612229 +0000 UTC m=+0.058600093 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 16:33:50 compute-0 podman[377777]: 2026-01-26 16:33:50.420197413 +0000 UTC m=+0.099008607 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:33:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:50 compute-0 ceph-mon[75140]: pgmap v2513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:52 compute-0 ceph-mon[75140]: pgmap v2514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:53 compute-0 nova_compute[239965]: 2026-01-26 16:33:53.091 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:54 compute-0 nova_compute[239965]: 2026-01-26 16:33:54.359 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:54 compute-0 ceph-mon[75140]: pgmap v2515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:55 compute-0 nova_compute[239965]: 2026-01-26 16:33:55.025 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:33:56 compute-0 ceph-mon[75140]: pgmap v2516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:58 compute-0 nova_compute[239965]: 2026-01-26 16:33:58.093 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:58 compute-0 nova_compute[239965]: 2026-01-26 16:33:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:33:58 compute-0 ceph-mon[75140]: pgmap v2517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:33:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:33:59.256 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:33:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:33:59.256 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:33:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:33:59.256 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:33:59 compute-0 nova_compute[239965]: 2026-01-26 16:33:59.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:33:59 compute-0 ceph-mon[75140]: pgmap v2518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:34:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:02 compute-0 ceph-mon[75140]: pgmap v2519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:03 compute-0 nova_compute[239965]: 2026-01-26 16:34:03.096 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:04 compute-0 ceph-mon[75140]: pgmap v2520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:04 compute-0 nova_compute[239965]: 2026-01-26 16:34:04.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:06 compute-0 ceph-mon[75140]: pgmap v2521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:06 compute-0 nova_compute[239965]: 2026-01-26 16:34:06.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:06 compute-0 nova_compute[239965]: 2026-01-26 16:34:06.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:07 compute-0 nova_compute[239965]: 2026-01-26 16:34:07.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:07 compute-0 nova_compute[239965]: 2026-01-26 16:34:07.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:07 compute-0 nova_compute[239965]: 2026-01-26 16:34:07.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:34:07 compute-0 nova_compute[239965]: 2026-01-26 16:34:07.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:34:07 compute-0 nova_compute[239965]: 2026-01-26 16:34:07.524 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:34:07 compute-0 nova_compute[239965]: 2026-01-26 16:34:07.524 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:07 compute-0 nova_compute[239965]: 2026-01-26 16:34:07.524 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:34:08 compute-0 nova_compute[239965]: 2026-01-26 16:34:08.100 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:08 compute-0 ceph-mon[75140]: pgmap v2522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:08 compute-0 nova_compute[239965]: 2026-01-26 16:34:08.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:09 compute-0 nova_compute[239965]: 2026-01-26 16:34:09.389 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:10 compute-0 ceph-mon[75140]: pgmap v2523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:12 compute-0 ceph-mon[75140]: pgmap v2524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:13 compute-0 nova_compute[239965]: 2026-01-26 16:34:13.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:14 compute-0 nova_compute[239965]: 2026-01-26 16:34:14.393 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:14 compute-0 nova_compute[239965]: 2026-01-26 16:34:14.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:14 compute-0 ceph-mon[75140]: pgmap v2525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:14 compute-0 nova_compute[239965]: 2026-01-26 16:34:14.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:34:14 compute-0 nova_compute[239965]: 2026-01-26 16:34:14.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:34:14 compute-0 nova_compute[239965]: 2026-01-26 16:34:14.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:34:14 compute-0 nova_compute[239965]: 2026-01-26 16:34:14.542 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:34:14 compute-0 nova_compute[239965]: 2026-01-26 16:34:14.542 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:34:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:34:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2951020535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.093 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.264 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.266 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3502MB free_disk=59.987211673520505GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, 
"label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.267 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.267 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.342 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.344 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.364 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:34:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:15 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2951020535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:34:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:34:15 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1888012842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.979 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:34:15 compute-0 nova_compute[239965]: 2026-01-26 16:34:15.986 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:34:16 compute-0 nova_compute[239965]: 2026-01-26 16:34:16.002 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:34:16 compute-0 nova_compute[239965]: 2026-01-26 16:34:16.004 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:34:16 compute-0 nova_compute[239965]: 2026-01-26 16:34:16.004 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:34:16 compute-0 ceph-mon[75140]: pgmap v2526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1888012842' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:34:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:18 compute-0 nova_compute[239965]: 2026-01-26 16:34:18.106 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:18 compute-0 ceph-mon[75140]: pgmap v2527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:19 compute-0 nova_compute[239965]: 2026-01-26 16:34:19.395 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:20 compute-0 ceph-mon[75140]: pgmap v2528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:21 compute-0 podman[377863]: 2026-01-26 16:34:21.374566069 +0000 UTC m=+0.062231572 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 16:34:21 compute-0 podman[377864]: 2026-01-26 16:34:21.406899406 +0000 UTC m=+0.091861212 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 16:34:22 compute-0 ceph-mon[75140]: pgmap v2529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:23 compute-0 nova_compute[239965]: 2026-01-26 16:34:23.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:24 compute-0 nova_compute[239965]: 2026-01-26 16:34:24.424 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:24 compute-0 ceph-mon[75140]: pgmap v2530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:25 compute-0 nova_compute[239965]: 2026-01-26 16:34:25.997 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:26 compute-0 ceph-mon[75140]: pgmap v2531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:27 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 26 16:34:27 compute-0 systemd[1]: virtsecretd.service: Consumed 1.221s CPU time.
Jan 26 16:34:28 compute-0 nova_compute[239965]: 2026-01-26 16:34:28.192 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:34:28
Jan 26 16:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'vms', 'backups']
Jan 26 16:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:34:28 compute-0 ceph-mon[75140]: pgmap v2532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:29 compute-0 nova_compute[239965]: 2026-01-26 16:34:29.426 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:30 compute-0 ceph-mon[75140]: pgmap v2533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:34:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:34:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:34:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:32 compute-0 ceph-mon[75140]: pgmap v2534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:33 compute-0 nova_compute[239965]: 2026-01-26 16:34:33.194 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:34 compute-0 ceph-mon[75140]: pgmap v2535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:34 compute-0 nova_compute[239965]: 2026-01-26 16:34:34.427 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:36 compute-0 ceph-mon[75140]: pgmap v2536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:38 compute-0 nova_compute[239965]: 2026-01-26 16:34:38.196 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:38 compute-0 ceph-mon[75140]: pgmap v2537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:39 compute-0 nova_compute[239965]: 2026-01-26 16:34:39.532 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:40 compute-0 ceph-mon[75140]: pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:42 compute-0 ceph-mon[75140]: pgmap v2539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:43 compute-0 nova_compute[239965]: 2026-01-26 16:34:43.198 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:44 compute-0 nova_compute[239965]: 2026-01-26 16:34:44.533 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:44 compute-0 ceph-mon[75140]: pgmap v2540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:45 compute-0 ceph-mon[75140]: pgmap v2541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:46 compute-0 sudo[377906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:34:46 compute-0 sudo[377906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:46 compute-0 sudo[377906]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:46 compute-0 sudo[377931]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 16:34:46 compute-0 sudo[377931]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:47 compute-0 sudo[377931]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:34:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:34:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:34:47 compute-0 sudo[377975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:34:47 compute-0 sudo[377975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:47 compute-0 sudo[377975]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:47 compute-0 sudo[378000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:34:47 compute-0 sudo[378000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:47 compute-0 sudo[378000]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 16:34:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:34:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:34:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:34:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:34:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:34:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:34:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:34:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:34:47 compute-0 sudo[378055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:34:48 compute-0 sudo[378055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:48 compute-0 sudo[378055]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:48 compute-0 sudo[378080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:34:48 compute-0 sudo[378080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:48 compute-0 nova_compute[239965]: 2026-01-26 16:34:48.201 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:48 compute-0 ceph-mon[75140]: pgmap v2542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:34:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:34:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:34:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:34:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:34:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:34:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:34:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:34:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:34:48 compute-0 podman[378117]: 2026-01-26 16:34:48.330234709 +0000 UTC m=+0.024860093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:34:48 compute-0 podman[378117]: 2026-01-26 16:34:48.593150718 +0000 UTC m=+0.287776082 container create 431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:34:48 compute-0 systemd[1]: Started libpod-conmon-431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f.scope.
Jan 26 16:34:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:34:48 compute-0 podman[378117]: 2026-01-26 16:34:48.692784109 +0000 UTC m=+0.387409563 container init 431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:34:48 compute-0 podman[378117]: 2026-01-26 16:34:48.70742029 +0000 UTC m=+0.402045654 container start 431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wescoff, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:34:48 compute-0 podman[378117]: 2026-01-26 16:34:48.710785842 +0000 UTC m=+0.405411206 container attach 431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:34:48 compute-0 silly_wescoff[378134]: 167 167
Jan 26 16:34:48 compute-0 systemd[1]: libpod-431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f.scope: Deactivated successfully.
Jan 26 16:34:48 compute-0 conmon[378134]: conmon 431d170097dde46d6c35 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f.scope/container/memory.events
Jan 26 16:34:48 compute-0 podman[378117]: 2026-01-26 16:34:48.717549159 +0000 UTC m=+0.412174523 container died 431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wescoff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:34:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:34:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3876084829' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:34:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:34:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3876084829' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:34:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd113cf30dde9bf0f1d0085fca7a8e5add8fba9d0d079ea8c0d9b454d670eec5-merged.mount: Deactivated successfully.
Jan 26 16:34:48 compute-0 podman[378117]: 2026-01-26 16:34:48.811648464 +0000 UTC m=+0.506273838 container remove 431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:34:48 compute-0 systemd[1]: libpod-conmon-431d170097dde46d6c3580f3aa4ec53d6e16f6a464388ad0a058d2c8193ada2f.scope: Deactivated successfully.
Jan 26 16:34:49 compute-0 podman[378157]: 2026-01-26 16:34:49.008028568 +0000 UTC m=+0.065455432 container create b12885aba16453bd0409fbb61feed4e6ee74ff0a0b018d4d3dc19dd92432d21a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:34:49 compute-0 systemd[1]: Started libpod-conmon-b12885aba16453bd0409fbb61feed4e6ee74ff0a0b018d4d3dc19dd92432d21a.scope.
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:49 compute-0 podman[378157]: 2026-01-26 16:34:48.97523711 +0000 UTC m=+0.032664054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:34:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c75039208830e5232105fe35262b212607ff8eb64ecb55b7cdd732f82759b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c75039208830e5232105fe35262b212607ff8eb64ecb55b7cdd732f82759b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c75039208830e5232105fe35262b212607ff8eb64ecb55b7cdd732f82759b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c75039208830e5232105fe35262b212607ff8eb64ecb55b7cdd732f82759b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92c75039208830e5232105fe35262b212607ff8eb64ecb55b7cdd732f82759b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:49 compute-0 podman[378157]: 2026-01-26 16:34:49.106344387 +0000 UTC m=+0.163771271 container init b12885aba16453bd0409fbb61feed4e6ee74ff0a0b018d4d3dc19dd92432d21a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:34:49 compute-0 podman[378157]: 2026-01-26 16:34:49.114523858 +0000 UTC m=+0.171950712 container start b12885aba16453bd0409fbb61feed4e6ee74ff0a0b018d4d3dc19dd92432d21a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:34:49 compute-0 podman[378157]: 2026-01-26 16:34:49.125929708 +0000 UTC m=+0.183356582 container attach b12885aba16453bd0409fbb61feed4e6ee74ff0a0b018d4d3dc19dd92432d21a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7829757032663304e-05 of space, bias 1.0, pg target 0.005348927109798991 quantized to 32 (current 32)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006720730823224477 of space, bias 1.0, pg target 0.20162192469673432 quantized to 32 (current 32)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.703727450417054e-07 of space, bias 4.0, pg target 0.0008044472940500464 quantized to 16 (current 16)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:34:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:34:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3876084829' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:34:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3876084829' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:34:49 compute-0 nova_compute[239965]: 2026-01-26 16:34:49.537 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:49 compute-0 nice_gates[378173]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:34:49 compute-0 nice_gates[378173]: --> All data devices are unavailable
Jan 26 16:34:49 compute-0 systemd[1]: libpod-b12885aba16453bd0409fbb61feed4e6ee74ff0a0b018d4d3dc19dd92432d21a.scope: Deactivated successfully.
Jan 26 16:34:49 compute-0 podman[378157]: 2026-01-26 16:34:49.594387005 +0000 UTC m=+0.651813899 container died b12885aba16453bd0409fbb61feed4e6ee74ff0a0b018d4d3dc19dd92432d21a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:34:50 compute-0 ceph-mon[75140]: pgmap v2543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-92c75039208830e5232105fe35262b212607ff8eb64ecb55b7cdd732f82759b0-merged.mount: Deactivated successfully.
Jan 26 16:34:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:51 compute-0 podman[378157]: 2026-01-26 16:34:51.465331175 +0000 UTC m=+2.522758049 container remove b12885aba16453bd0409fbb61feed4e6ee74ff0a0b018d4d3dc19dd92432d21a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:34:51 compute-0 systemd[1]: libpod-conmon-b12885aba16453bd0409fbb61feed4e6ee74ff0a0b018d4d3dc19dd92432d21a.scope: Deactivated successfully.
Jan 26 16:34:51 compute-0 sudo[378080]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:51 compute-0 podman[378205]: 2026-01-26 16:34:51.584956678 +0000 UTC m=+0.070636509 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:34:51 compute-0 sudo[378222]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:34:51 compute-0 sudo[378222]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:51 compute-0 sudo[378222]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:51 compute-0 podman[378207]: 2026-01-26 16:34:51.610159248 +0000 UTC m=+0.094883456 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.build-date=20251202, container_name=ovn_controller)
Jan 26 16:34:51 compute-0 sudo[378273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:34:51 compute-0 sudo[378273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:51 compute-0 ceph-mon[75140]: pgmap v2544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:51 compute-0 podman[378310]: 2026-01-26 16:34:51.982328226 +0000 UTC m=+0.064013116 container create a6f949127488e0b28a3b7e94d3e2f471563a5291ef49982d9b7ce5ab5cd7ce42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:34:52 compute-0 podman[378310]: 2026-01-26 16:34:51.947149091 +0000 UTC m=+0.028833971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:34:52 compute-0 systemd[1]: Started libpod-conmon-a6f949127488e0b28a3b7e94d3e2f471563a5291ef49982d9b7ce5ab5cd7ce42.scope.
Jan 26 16:34:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:34:52 compute-0 podman[378310]: 2026-01-26 16:34:52.621510134 +0000 UTC m=+0.703195084 container init a6f949127488e0b28a3b7e94d3e2f471563a5291ef49982d9b7ce5ab5cd7ce42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:34:52 compute-0 podman[378310]: 2026-01-26 16:34:52.62943851 +0000 UTC m=+0.711123370 container start a6f949127488e0b28a3b7e94d3e2f471563a5291ef49982d9b7ce5ab5cd7ce42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:34:52 compute-0 zen_faraday[378327]: 167 167
Jan 26 16:34:52 compute-0 systemd[1]: libpod-a6f949127488e0b28a3b7e94d3e2f471563a5291ef49982d9b7ce5ab5cd7ce42.scope: Deactivated successfully.
Jan 26 16:34:53 compute-0 podman[378310]: 2026-01-26 16:34:53.030541991 +0000 UTC m=+1.112226941 container attach a6f949127488e0b28a3b7e94d3e2f471563a5291ef49982d9b7ce5ab5cd7ce42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:34:53 compute-0 podman[378310]: 2026-01-26 16:34:53.0313335 +0000 UTC m=+1.113018440 container died a6f949127488e0b28a3b7e94d3e2f471563a5291ef49982d9b7ce5ab5cd7ce42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 16:34:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:53 compute-0 nova_compute[239965]: 2026-01-26 16:34:53.204 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:53 compute-0 ceph-mon[75140]: pgmap v2545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3f975abde7a06f08719de3083fb97af8153bdc2d444b7ba5bf54168ad9dcde4-merged.mount: Deactivated successfully.
Jan 26 16:34:54 compute-0 podman[378310]: 2026-01-26 16:34:54.056301762 +0000 UTC m=+2.137986612 container remove a6f949127488e0b28a3b7e94d3e2f471563a5291ef49982d9b7ce5ab5cd7ce42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:34:54 compute-0 systemd[1]: libpod-conmon-a6f949127488e0b28a3b7e94d3e2f471563a5291ef49982d9b7ce5ab5cd7ce42.scope: Deactivated successfully.
Jan 26 16:34:54 compute-0 podman[378351]: 2026-01-26 16:34:54.31061056 +0000 UTC m=+0.104355649 container create 63f30c324b8eb83438030c410ca00917eddb9937763b960d3c1999168cd46e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:34:54 compute-0 podman[378351]: 2026-01-26 16:34:54.247110957 +0000 UTC m=+0.040856056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:34:54 compute-0 systemd[1]: Started libpod-conmon-63f30c324b8eb83438030c410ca00917eddb9937763b960d3c1999168cd46e9b.scope.
Jan 26 16:34:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15975432183fbfd2f6f5199d085a3c84c00cd6c033723085ffd0e23e2d16f603/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15975432183fbfd2f6f5199d085a3c84c00cd6c033723085ffd0e23e2d16f603/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15975432183fbfd2f6f5199d085a3c84c00cd6c033723085ffd0e23e2d16f603/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15975432183fbfd2f6f5199d085a3c84c00cd6c033723085ffd0e23e2d16f603/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:54 compute-0 nova_compute[239965]: 2026-01-26 16:34:54.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:54 compute-0 nova_compute[239965]: 2026-01-26 16:34:54.538 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:54 compute-0 podman[378351]: 2026-01-26 16:34:54.69017342 +0000 UTC m=+0.483918599 container init 63f30c324b8eb83438030c410ca00917eddb9937763b960d3c1999168cd46e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:34:54 compute-0 podman[378351]: 2026-01-26 16:34:54.701939329 +0000 UTC m=+0.495684418 container start 63f30c324b8eb83438030c410ca00917eddb9937763b960d3c1999168cd46e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 16:34:54 compute-0 podman[378351]: 2026-01-26 16:34:54.706660526 +0000 UTC m=+0.500405655 container attach 63f30c324b8eb83438030c410ca00917eddb9937763b960d3c1999168cd46e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]: {
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:     "0": [
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:         {
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "devices": [
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "/dev/loop3"
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             ],
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_name": "ceph_lv0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_size": "21470642176",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "name": "ceph_lv0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "tags": {
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.cluster_name": "ceph",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.crush_device_class": "",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.encrypted": "0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.objectstore": "bluestore",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.osd_id": "0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.type": "block",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.vdo": "0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.with_tpm": "0"
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             },
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "type": "block",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "vg_name": "ceph_vg0"
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:         }
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:     ],
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:     "1": [
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:         {
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "devices": [
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "/dev/loop4"
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             ],
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_name": "ceph_lv1",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_size": "21470642176",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "name": "ceph_lv1",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "tags": {
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.cluster_name": "ceph",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.crush_device_class": "",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.encrypted": "0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.objectstore": "bluestore",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.osd_id": "1",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.type": "block",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.vdo": "0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.with_tpm": "0"
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             },
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "type": "block",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "vg_name": "ceph_vg1"
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:         }
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:     ],
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:     "2": [
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:         {
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "devices": [
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "/dev/loop5"
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             ],
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_name": "ceph_lv2",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_size": "21470642176",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "name": "ceph_lv2",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "tags": {
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.cluster_name": "ceph",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.crush_device_class": "",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.encrypted": "0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.objectstore": "bluestore",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.osd_id": "2",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.type": "block",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.vdo": "0",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:                 "ceph.with_tpm": "0"
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             },
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "type": "block",
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:             "vg_name": "ceph_vg2"
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:         }
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]:     ]
Jan 26 16:34:55 compute-0 eloquent_leavitt[378367]: }
Jan 26 16:34:55 compute-0 systemd[1]: libpod-63f30c324b8eb83438030c410ca00917eddb9937763b960d3c1999168cd46e9b.scope: Deactivated successfully.
Jan 26 16:34:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:55 compute-0 podman[378351]: 2026-01-26 16:34:55.062598044 +0000 UTC m=+0.856343123 container died 63f30c324b8eb83438030c410ca00917eddb9937763b960d3c1999168cd46e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:34:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-15975432183fbfd2f6f5199d085a3c84c00cd6c033723085ffd0e23e2d16f603-merged.mount: Deactivated successfully.
Jan 26 16:34:55 compute-0 podman[378351]: 2026-01-26 16:34:55.499593188 +0000 UTC m=+1.293338257 container remove 63f30c324b8eb83438030c410ca00917eddb9937763b960d3c1999168cd46e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 16:34:55 compute-0 systemd[1]: libpod-conmon-63f30c324b8eb83438030c410ca00917eddb9937763b960d3c1999168cd46e9b.scope: Deactivated successfully.
Jan 26 16:34:55 compute-0 sudo[378273]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:55 compute-0 sudo[378389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:34:55 compute-0 sudo[378389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:55 compute-0 sudo[378389]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:55 compute-0 sudo[378414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:34:55 compute-0 sudo[378414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:34:56 compute-0 podman[378452]: 2026-01-26 16:34:56.01292654 +0000 UTC m=+0.023337016 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:34:56 compute-0 podman[378452]: 2026-01-26 16:34:56.177868238 +0000 UTC m=+0.188278684 container create 91b22088c2b1f3ae33eea9af413b2b43c3fe35f3d3550b05362c75bf2fe2d505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_gagarin, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:34:56 compute-0 systemd[1]: Started libpod-conmon-91b22088c2b1f3ae33eea9af413b2b43c3fe35f3d3550b05362c75bf2fe2d505.scope.
Jan 26 16:34:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:34:56 compute-0 ceph-mon[75140]: pgmap v2546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:56 compute-0 podman[378452]: 2026-01-26 16:34:56.460432561 +0000 UTC m=+0.470843097 container init 91b22088c2b1f3ae33eea9af413b2b43c3fe35f3d3550b05362c75bf2fe2d505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_gagarin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:34:56 compute-0 podman[378452]: 2026-01-26 16:34:56.471499744 +0000 UTC m=+0.481910170 container start 91b22088c2b1f3ae33eea9af413b2b43c3fe35f3d3550b05362c75bf2fe2d505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_gagarin, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:34:56 compute-0 infallible_gagarin[378468]: 167 167
Jan 26 16:34:56 compute-0 systemd[1]: libpod-91b22088c2b1f3ae33eea9af413b2b43c3fe35f3d3550b05362c75bf2fe2d505.scope: Deactivated successfully.
Jan 26 16:34:56 compute-0 podman[378452]: 2026-01-26 16:34:56.506890814 +0000 UTC m=+0.517301330 container attach 91b22088c2b1f3ae33eea9af413b2b43c3fe35f3d3550b05362c75bf2fe2d505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:34:56 compute-0 podman[378452]: 2026-01-26 16:34:56.50751622 +0000 UTC m=+0.517926676 container died 91b22088c2b1f3ae33eea9af413b2b43c3fe35f3d3550b05362c75bf2fe2d505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_gagarin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:34:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb97bcf3bddc0acaf7be98fd5059cb71ee0583153f17c0a4cd5509b89ab2e352-merged.mount: Deactivated successfully.
Jan 26 16:34:56 compute-0 podman[378452]: 2026-01-26 16:34:56.836325561 +0000 UTC m=+0.846735997 container remove 91b22088c2b1f3ae33eea9af413b2b43c3fe35f3d3550b05362c75bf2fe2d505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:34:56 compute-0 systemd[1]: libpod-conmon-91b22088c2b1f3ae33eea9af413b2b43c3fe35f3d3550b05362c75bf2fe2d505.scope: Deactivated successfully.
Jan 26 16:34:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:57 compute-0 podman[378492]: 2026-01-26 16:34:57.042699879 +0000 UTC m=+0.034131090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:34:57 compute-0 podman[378492]: 2026-01-26 16:34:57.295398818 +0000 UTC m=+0.286830049 container create 9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_matsumoto, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:34:57 compute-0 systemd[1]: Started libpod-conmon-9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa.scope.
Jan 26 16:34:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739e747100185f6250d2a38d560e4ff62e5a9cf10461397031ddca4c54ec5b42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739e747100185f6250d2a38d560e4ff62e5a9cf10461397031ddca4c54ec5b42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739e747100185f6250d2a38d560e4ff62e5a9cf10461397031ddca4c54ec5b42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/739e747100185f6250d2a38d560e4ff62e5a9cf10461397031ddca4c54ec5b42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:34:57 compute-0 podman[378492]: 2026-01-26 16:34:57.715905315 +0000 UTC m=+0.707336506 container init 9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_matsumoto, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 16:34:57 compute-0 podman[378492]: 2026-01-26 16:34:57.725693206 +0000 UTC m=+0.717124397 container start 9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:34:57 compute-0 podman[378492]: 2026-01-26 16:34:57.730295569 +0000 UTC m=+0.721726780 container attach 9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_matsumoto, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:34:58 compute-0 nova_compute[239965]: 2026-01-26 16:34:58.206 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:58 compute-0 lvm[378588]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:34:58 compute-0 lvm[378588]: VG ceph_vg1 finished
Jan 26 16:34:58 compute-0 lvm[378587]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:34:58 compute-0 lvm[378587]: VG ceph_vg0 finished
Jan 26 16:34:58 compute-0 lvm[378590]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:34:58 compute-0 lvm[378590]: VG ceph_vg2 finished
Jan 26 16:34:58 compute-0 nova_compute[239965]: 2026-01-26 16:34:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:58 compute-0 ceph-mon[75140]: pgmap v2547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:58 compute-0 clever_matsumoto[378509]: {}
Jan 26 16:34:58 compute-0 systemd[1]: libpod-9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa.scope: Deactivated successfully.
Jan 26 16:34:58 compute-0 systemd[1]: libpod-9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa.scope: Consumed 1.442s CPU time.
Jan 26 16:34:58 compute-0 podman[378492]: 2026-01-26 16:34:58.639527593 +0000 UTC m=+1.630958784 container died 9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_matsumoto, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:34:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-739e747100185f6250d2a38d560e4ff62e5a9cf10461397031ddca4c54ec5b42-merged.mount: Deactivated successfully.
Jan 26 16:34:58 compute-0 podman[378492]: 2026-01-26 16:34:58.725380786 +0000 UTC m=+1.716811977 container remove 9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 16:34:58 compute-0 systemd[1]: libpod-conmon-9ffe2471100719c992bd80a407c3c9ce85f116d6ba80d44d403ac457efa53faa.scope: Deactivated successfully.
Jan 26 16:34:58 compute-0 sudo[378414]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:34:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:34:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:34:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:34:58 compute-0 sudo[378604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:34:58 compute-0 sudo[378604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:34:58 compute-0 sudo[378604]: pam_unix(sudo:session): session closed for user root
Jan 26 16:34:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:34:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:34:59.256 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:34:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:34:59.258 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:34:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:34:59.258 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:34:59 compute-0 nova_compute[239965]: 2026-01-26 16:34:59.531 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:34:59 compute-0 nova_compute[239965]: 2026-01-26 16:34:59.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:34:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:34:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:35:00 compute-0 ceph-mon[75140]: pgmap v2548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:01 compute-0 ceph-mon[75140]: pgmap v2549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:03 compute-0 nova_compute[239965]: 2026-01-26 16:35:03.210 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:04 compute-0 ceph-mon[75140]: pgmap v2550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:04 compute-0 nova_compute[239965]: 2026-01-26 16:35:04.542 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:06 compute-0 ceph-mon[75140]: pgmap v2551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:06 compute-0 nova_compute[239965]: 2026-01-26 16:35:06.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:06 compute-0 nova_compute[239965]: 2026-01-26 16:35:06.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:07 compute-0 nova_compute[239965]: 2026-01-26 16:35:07.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:07 compute-0 nova_compute[239965]: 2026-01-26 16:35:07.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:07 compute-0 nova_compute[239965]: 2026-01-26 16:35:07.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:35:08 compute-0 nova_compute[239965]: 2026-01-26 16:35:08.212 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:08 compute-0 ceph-mon[75140]: pgmap v2552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:08 compute-0 nova_compute[239965]: 2026-01-26 16:35:08.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:09 compute-0 nova_compute[239965]: 2026-01-26 16:35:09.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:09 compute-0 nova_compute[239965]: 2026-01-26 16:35:09.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:35:09 compute-0 nova_compute[239965]: 2026-01-26 16:35:09.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:35:09 compute-0 nova_compute[239965]: 2026-01-26 16:35:09.545 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:10 compute-0 ceph-mon[75140]: pgmap v2553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:11 compute-0 nova_compute[239965]: 2026-01-26 16:35:11.008 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:35:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:12 compute-0 ceph-mon[75140]: pgmap v2554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:13 compute-0 nova_compute[239965]: 2026-01-26 16:35:13.214 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:14 compute-0 ceph-mon[75140]: pgmap v2555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:14 compute-0 nova_compute[239965]: 2026-01-26 16:35:14.548 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:16 compute-0 ceph-mon[75140]: pgmap v2556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:16 compute-0 nova_compute[239965]: 2026-01-26 16:35:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:16 compute-0 nova_compute[239965]: 2026-01-26 16:35:16.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:35:16 compute-0 nova_compute[239965]: 2026-01-26 16:35:16.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:35:16 compute-0 nova_compute[239965]: 2026-01-26 16:35:16.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:35:16 compute-0 nova_compute[239965]: 2026-01-26 16:35:16.542 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:35:16 compute-0 nova_compute[239965]: 2026-01-26 16:35:16.542 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:35:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:35:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3975607718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:35:17 compute-0 nova_compute[239965]: 2026-01-26 16:35:17.148 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.606s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:35:17 compute-0 nova_compute[239965]: 2026-01-26 16:35:17.316 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:35:17 compute-0 nova_compute[239965]: 2026-01-26 16:35:17.318 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3466MB free_disk=59.987211673520505GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:35:17 compute-0 nova_compute[239965]: 2026-01-26 16:35:17.318 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:35:17 compute-0 nova_compute[239965]: 2026-01-26 16:35:17.318 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:35:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3975607718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:35:17 compute-0 nova_compute[239965]: 2026-01-26 16:35:17.578 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:35:17 compute-0 nova_compute[239965]: 2026-01-26 16:35:17.579 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:35:17 compute-0 nova_compute[239965]: 2026-01-26 16:35:17.693 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:35:18 compute-0 nova_compute[239965]: 2026-01-26 16:35:18.217 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:35:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2628039188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:35:18 compute-0 nova_compute[239965]: 2026-01-26 16:35:18.275 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:35:18 compute-0 nova_compute[239965]: 2026-01-26 16:35:18.282 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:35:18 compute-0 nova_compute[239965]: 2026-01-26 16:35:18.314 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:35:18 compute-0 nova_compute[239965]: 2026-01-26 16:35:18.315 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:35:18 compute-0 nova_compute[239965]: 2026-01-26 16:35:18.316 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.998s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:35:18 compute-0 ceph-mon[75140]: pgmap v2557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2628039188' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:35:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:19 compute-0 nova_compute[239965]: 2026-01-26 16:35:19.551 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:20 compute-0 ceph-mon[75140]: pgmap v2558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:22 compute-0 podman[378673]: 2026-01-26 16:35:22.385916701 +0000 UTC m=+0.075071088 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 26 16:35:22 compute-0 ceph-mon[75140]: pgmap v2559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:22 compute-0 podman[378674]: 2026-01-26 16:35:22.463247994 +0000 UTC m=+0.143084692 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 16:35:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:23 compute-0 nova_compute[239965]: 2026-01-26 16:35:23.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:23 compute-0 nova_compute[239965]: 2026-01-26 16:35:23.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:23 compute-0 nova_compute[239965]: 2026-01-26 16:35:23.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:35:24 compute-0 ceph-mon[75140]: pgmap v2560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:35:24 compute-0 nova_compute[239965]: 2026-01-26 16:35:24.554 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1 op/s
Jan 26 16:35:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:26 compute-0 ceph-mon[75140]: pgmap v2561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1 op/s
Jan 26 16:35:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1 op/s
Jan 26 16:35:28 compute-0 nova_compute[239965]: 2026-01-26 16:35:28.222 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:28 compute-0 ceph-mon[75140]: pgmap v2562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1 op/s
Jan 26 16:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:35:28
Jan 26 16:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'backups', 'vms', 'volumes', 'default.rgw.log', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data']
Jan 26 16:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:35:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 26 16:35:29 compute-0 nova_compute[239965]: 2026-01-26 16:35:29.555 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:35:30 compute-0 ceph-mon[75140]: pgmap v2563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 26 16:35:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:35:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:35:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 26 16:35:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Jan 26 16:35:32 compute-0 ceph-mon[75140]: pgmap v2564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Jan 26 16:35:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Jan 26 16:35:32 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Jan 26 16:35:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 614 B/s wr, 15 op/s
Jan 26 16:35:33 compute-0 nova_compute[239965]: 2026-01-26 16:35:33.224 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:33 compute-0 ceph-mon[75140]: osdmap e324: 3 total, 3 up, 3 in
Jan 26 16:35:34 compute-0 nova_compute[239965]: 2026-01-26 16:35:34.556 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:34 compute-0 ceph-mon[75140]: pgmap v2566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 614 B/s wr, 15 op/s
Jan 26 16:35:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 19 op/s
Jan 26 16:35:35 compute-0 nova_compute[239965]: 2026-01-26 16:35:35.188 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:35 compute-0 nova_compute[239965]: 2026-01-26 16:35:35.188 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:35:35 compute-0 nova_compute[239965]: 2026-01-26 16:35:35.202 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:35:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Jan 26 16:35:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Jan 26 16:35:35 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Jan 26 16:35:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:36 compute-0 ceph-mon[75140]: pgmap v2567: 305 pgs: 305 active+clean; 21 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 KiB/s wr, 19 op/s
Jan 26 16:35:36 compute-0 ceph-mon[75140]: osdmap e325: 3 total, 3 up, 3 in
Jan 26 16:35:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Jan 26 16:35:38 compute-0 nova_compute[239965]: 2026-01-26 16:35:38.226 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:38 compute-0 ceph-mon[75140]: pgmap v2569: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.2 KiB/s wr, 18 op/s
Jan 26 16:35:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Jan 26 16:35:39 compute-0 nova_compute[239965]: 2026-01-26 16:35:39.558 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:40 compute-0 ceph-mon[75140]: pgmap v2570: 305 pgs: 305 active+clean; 13 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 2.1 KiB/s wr, 34 op/s
Jan 26 16:35:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Jan 26 16:35:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Jan 26 16:35:40 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Jan 26 16:35:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 462 KiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.9 KiB/s wr, 53 op/s
Jan 26 16:35:41 compute-0 ceph-mon[75140]: osdmap e326: 3 total, 3 up, 3 in
Jan 26 16:35:41 compute-0 ceph-mon[75140]: pgmap v2572: 305 pgs: 305 active+clean; 462 KiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.9 KiB/s wr, 53 op/s
Jan 26 16:35:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 46 op/s
Jan 26 16:35:43 compute-0 nova_compute[239965]: 2026-01-26 16:35:43.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:44 compute-0 ceph-mon[75140]: pgmap v2573: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 46 op/s
Jan 26 16:35:44 compute-0 nova_compute[239965]: 2026-01-26 16:35:44.559 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:44 compute-0 sshd-session[378719]: Invalid user sol from 45.148.10.240 port 46164
Jan 26 16:35:44 compute-0 sshd-session[378719]: Connection closed by invalid user sol 45.148.10.240 port 46164 [preauth]
Jan 26 16:35:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.9 KiB/s wr, 39 op/s
Jan 26 16:35:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e326 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Jan 26 16:35:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Jan 26 16:35:45 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Jan 26 16:35:46 compute-0 ceph-mon[75140]: pgmap v2574: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.9 KiB/s wr, 39 op/s
Jan 26 16:35:46 compute-0 ceph-mon[75140]: osdmap e327: 3 total, 3 up, 3 in
Jan 26 16:35:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Jan 26 16:35:48 compute-0 nova_compute[239965]: 2026-01-26 16:35:48.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Jan 26 16:35:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Jan 26 16:35:48 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Jan 26 16:35:48 compute-0 ceph-mon[75140]: pgmap v2576: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.4 KiB/s wr, 28 op/s
Jan 26 16:35:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:35:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1484914764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:35:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:35:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1484914764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 127 B/s rd, 127 B/s wr, 0 op/s
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7935622520121372e-05 of space, bias 1.0, pg target 0.005380686756036411 quantized to 32 (current 32)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.359241557880453e-06 of space, bias 1.0, pg target 0.001907772467364136 quantized to 32 (current 32)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.695809662089789e-07 of space, bias 4.0, pg target 0.0008034971594507747 quantized to 16 (current 16)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:35:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:35:49 compute-0 ceph-mon[75140]: osdmap e328: 3 total, 3 up, 3 in
Jan 26 16:35:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1484914764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:35:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1484914764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:35:49 compute-0 nova_compute[239965]: 2026-01-26 16:35:49.561 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:50 compute-0 ceph-mon[75140]: pgmap v2578: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 127 B/s rd, 127 B/s wr, 0 op/s
Jan 26 16:35:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 26 16:35:52 compute-0 ceph-mon[75140]: pgmap v2579: 305 pgs: 305 active+clean; 462 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 5 op/s
Jan 26 16:35:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Jan 26 16:35:53 compute-0 nova_compute[239965]: 2026-01-26 16:35:53.232 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:53 compute-0 podman[378721]: 2026-01-26 16:35:53.378917518 +0000 UTC m=+0.058772357 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:35:53 compute-0 podman[378722]: 2026-01-26 16:35:53.413692444 +0000 UTC m=+0.094070505 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller)
Jan 26 16:35:54 compute-0 ceph-mon[75140]: pgmap v2580: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Jan 26 16:35:54 compute-0 nova_compute[239965]: 2026-01-26 16:35:54.564 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 KiB/s wr, 16 op/s
Jan 26 16:35:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:35:56 compute-0 nova_compute[239965]: 2026-01-26 16:35:56.524 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:35:56 compute-0 ceph-mon[75140]: pgmap v2581: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 1.7 KiB/s wr, 16 op/s
Jan 26 16:35:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 26 16:35:58 compute-0 nova_compute[239965]: 2026-01-26 16:35:58.234 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:58 compute-0 ceph-mon[75140]: pgmap v2582: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Jan 26 16:35:59 compute-0 sudo[378765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:35:59 compute-0 sudo[378765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:35:59 compute-0 sudo[378765]: pam_unix(sudo:session): session closed for user root
Jan 26 16:35:59 compute-0 sudo[378790]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 16:35:59 compute-0 sudo[378790]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:35:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.4 KiB/s wr, 13 op/s
Jan 26 16:35:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:35:59.257 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:35:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:35:59.258 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:35:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:35:59.258 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:35:59 compute-0 podman[378860]: 2026-01-26 16:35:59.56051433 +0000 UTC m=+0.065646217 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:35:59 compute-0 nova_compute[239965]: 2026-01-26 16:35:59.565 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:35:59 compute-0 podman[378860]: 2026-01-26 16:35:59.695362678 +0000 UTC m=+0.200494545 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:36:00 compute-0 sudo[378790]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:36:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:36:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:00 compute-0 nova_compute[239965]: 2026-01-26 16:36:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:00 compute-0 sudo[379046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:36:00 compute-0 sudo[379046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:36:00 compute-0 sudo[379046]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:00 compute-0 sudo[379071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:36:00 compute-0 sudo[379071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:36:00 compute-0 ceph-mon[75140]: pgmap v2583: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 1.4 KiB/s wr, 13 op/s
Jan 26 16:36:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Jan 26 16:36:01 compute-0 sudo[379071]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:36:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:36:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:36:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:36:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:36:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:36:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:36:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:36:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:36:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:36:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:36:01 compute-0 sudo[379128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:36:01 compute-0 sudo[379128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:36:01 compute-0 sudo[379128]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:01 compute-0 sudo[379153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:36:01 compute-0 sudo[379153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:36:01 compute-0 podman[379190]: 2026-01-26 16:36:01.686902166 +0000 UTC m=+0.040784045 container create 86a17b6d3d2139d2cd66080c6263910fe96844d1befe1648648830e66894ed21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_northcutt, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:36:01 compute-0 systemd[1]: Started libpod-conmon-86a17b6d3d2139d2cd66080c6263910fe96844d1befe1648648830e66894ed21.scope.
Jan 26 16:36:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:36:01 compute-0 podman[379190]: 2026-01-26 16:36:01.763165882 +0000 UTC m=+0.117047811 container init 86a17b6d3d2139d2cd66080c6263910fe96844d1befe1648648830e66894ed21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_northcutt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:36:01 compute-0 podman[379190]: 2026-01-26 16:36:01.668052311 +0000 UTC m=+0.021934210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:36:01 compute-0 podman[379190]: 2026-01-26 16:36:01.775795503 +0000 UTC m=+0.129677382 container start 86a17b6d3d2139d2cd66080c6263910fe96844d1befe1648648830e66894ed21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:36:01 compute-0 podman[379190]: 2026-01-26 16:36:01.779366691 +0000 UTC m=+0.133248620 container attach 86a17b6d3d2139d2cd66080c6263910fe96844d1befe1648648830e66894ed21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_northcutt, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:36:01 compute-0 relaxed_northcutt[379206]: 167 167
Jan 26 16:36:01 compute-0 systemd[1]: libpod-86a17b6d3d2139d2cd66080c6263910fe96844d1befe1648648830e66894ed21.scope: Deactivated successfully.
Jan 26 16:36:01 compute-0 podman[379190]: 2026-01-26 16:36:01.782742504 +0000 UTC m=+0.136624383 container died 86a17b6d3d2139d2cd66080c6263910fe96844d1befe1648648830e66894ed21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_northcutt, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 16:36:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-10eac6976e6376fd2fec6a5a7034b50016a18bd773dde297a8f6e2569d85075d-merged.mount: Deactivated successfully.
Jan 26 16:36:01 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:36:01 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:36:01 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:01 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:36:01 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:36:01 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:36:01 compute-0 podman[379190]: 2026-01-26 16:36:01.834359424 +0000 UTC m=+0.188241303 container remove 86a17b6d3d2139d2cd66080c6263910fe96844d1befe1648648830e66894ed21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_northcutt, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:36:01 compute-0 systemd[1]: libpod-conmon-86a17b6d3d2139d2cd66080c6263910fe96844d1befe1648648830e66894ed21.scope: Deactivated successfully.
Jan 26 16:36:01 compute-0 podman[379229]: 2026-01-26 16:36:01.987106612 +0000 UTC m=+0.039730178 container create 5509b605e1a82fa1e94eb912b51ffcb199a91cd1bd062fe7d13d488813bc7a19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:36:02 compute-0 systemd[1]: Started libpod-conmon-5509b605e1a82fa1e94eb912b51ffcb199a91cd1bd062fe7d13d488813bc7a19.scope.
Jan 26 16:36:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0224a7866e754edd09c82100ffd8cabf569ce68719a346cc3f9ef33856650d2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0224a7866e754edd09c82100ffd8cabf569ce68719a346cc3f9ef33856650d2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0224a7866e754edd09c82100ffd8cabf569ce68719a346cc3f9ef33856650d2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0224a7866e754edd09c82100ffd8cabf569ce68719a346cc3f9ef33856650d2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0224a7866e754edd09c82100ffd8cabf569ce68719a346cc3f9ef33856650d2f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:02 compute-0 podman[379229]: 2026-01-26 16:36:01.970040012 +0000 UTC m=+0.022663578 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:36:02 compute-0 podman[379229]: 2026-01-26 16:36:02.071102899 +0000 UTC m=+0.123726485 container init 5509b605e1a82fa1e94eb912b51ffcb199a91cd1bd062fe7d13d488813bc7a19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:36:02 compute-0 podman[379229]: 2026-01-26 16:36:02.083988266 +0000 UTC m=+0.136611812 container start 5509b605e1a82fa1e94eb912b51ffcb199a91cd1bd062fe7d13d488813bc7a19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:36:02 compute-0 podman[379229]: 2026-01-26 16:36:02.087099913 +0000 UTC m=+0.139723489 container attach 5509b605e1a82fa1e94eb912b51ffcb199a91cd1bd062fe7d13d488813bc7a19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True)
Jan 26 16:36:02 compute-0 naughty_mahavira[379245]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:36:02 compute-0 naughty_mahavira[379245]: --> All data devices are unavailable
Jan 26 16:36:02 compute-0 systemd[1]: libpod-5509b605e1a82fa1e94eb912b51ffcb199a91cd1bd062fe7d13d488813bc7a19.scope: Deactivated successfully.
Jan 26 16:36:02 compute-0 podman[379229]: 2026-01-26 16:36:02.611527438 +0000 UTC m=+0.664151044 container died 5509b605e1a82fa1e94eb912b51ffcb199a91cd1bd062fe7d13d488813bc7a19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:36:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0224a7866e754edd09c82100ffd8cabf569ce68719a346cc3f9ef33856650d2f-merged.mount: Deactivated successfully.
Jan 26 16:36:02 compute-0 podman[379229]: 2026-01-26 16:36:02.654988247 +0000 UTC m=+0.707611803 container remove 5509b605e1a82fa1e94eb912b51ffcb199a91cd1bd062fe7d13d488813bc7a19 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_mahavira, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:36:02 compute-0 systemd[1]: libpod-conmon-5509b605e1a82fa1e94eb912b51ffcb199a91cd1bd062fe7d13d488813bc7a19.scope: Deactivated successfully.
Jan 26 16:36:02 compute-0 sudo[379153]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:02 compute-0 sudo[379278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:36:02 compute-0 sudo[379278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:36:02 compute-0 sudo[379278]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:02 compute-0 ceph-mon[75140]: pgmap v2584: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 8.6 KiB/s rd, 1.2 KiB/s wr, 11 op/s
Jan 26 16:36:02 compute-0 sudo[379303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:36:02 compute-0 sudo[379303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:36:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 682 B/s wr, 8 op/s
Jan 26 16:36:03 compute-0 podman[379341]: 2026-01-26 16:36:03.143612061 +0000 UTC m=+0.049167531 container create 823cd8ac3773ab3f8dc0d817aa7a33458e73e85e91faf1c9dd42866c0140959e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 16:36:03 compute-0 systemd[1]: Started libpod-conmon-823cd8ac3773ab3f8dc0d817aa7a33458e73e85e91faf1c9dd42866c0140959e.scope.
Jan 26 16:36:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:36:03 compute-0 podman[379341]: 2026-01-26 16:36:03.118573895 +0000 UTC m=+0.024129435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:36:03 compute-0 podman[379341]: 2026-01-26 16:36:03.22402624 +0000 UTC m=+0.129581810 container init 823cd8ac3773ab3f8dc0d817aa7a33458e73e85e91faf1c9dd42866c0140959e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 26 16:36:03 compute-0 podman[379341]: 2026-01-26 16:36:03.232408166 +0000 UTC m=+0.137963656 container start 823cd8ac3773ab3f8dc0d817aa7a33458e73e85e91faf1c9dd42866c0140959e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:36:03 compute-0 podman[379341]: 2026-01-26 16:36:03.237208444 +0000 UTC m=+0.142763904 container attach 823cd8ac3773ab3f8dc0d817aa7a33458e73e85e91faf1c9dd42866c0140959e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 16:36:03 compute-0 nova_compute[239965]: 2026-01-26 16:36:03.235 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:03 compute-0 affectionate_lederberg[379357]: 167 167
Jan 26 16:36:03 compute-0 systemd[1]: libpod-823cd8ac3773ab3f8dc0d817aa7a33458e73e85e91faf1c9dd42866c0140959e.scope: Deactivated successfully.
Jan 26 16:36:03 compute-0 podman[379341]: 2026-01-26 16:36:03.240415823 +0000 UTC m=+0.145971283 container died 823cd8ac3773ab3f8dc0d817aa7a33458e73e85e91faf1c9dd42866c0140959e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0393cfc726d52fa9557b7b9d755d1317e00b769d6bb3b69fe9e114533817a14a-merged.mount: Deactivated successfully.
Jan 26 16:36:03 compute-0 podman[379341]: 2026-01-26 16:36:03.279077225 +0000 UTC m=+0.184632685 container remove 823cd8ac3773ab3f8dc0d817aa7a33458e73e85e91faf1c9dd42866c0140959e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:36:03 compute-0 systemd[1]: libpod-conmon-823cd8ac3773ab3f8dc0d817aa7a33458e73e85e91faf1c9dd42866c0140959e.scope: Deactivated successfully.
Jan 26 16:36:03 compute-0 podman[379382]: 2026-01-26 16:36:03.471059989 +0000 UTC m=+0.056795828 container create 6189b2a5564afa0d2ba667ad1255d24238536aee24142679fe9e16e457c47dbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:36:03 compute-0 systemd[1]: Started libpod-conmon-6189b2a5564afa0d2ba667ad1255d24238536aee24142679fe9e16e457c47dbd.scope.
Jan 26 16:36:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85df6889715b3c5020ee895a3b8f032c826c1ba942de2d259002922b786ff9fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85df6889715b3c5020ee895a3b8f032c826c1ba942de2d259002922b786ff9fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85df6889715b3c5020ee895a3b8f032c826c1ba942de2d259002922b786ff9fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85df6889715b3c5020ee895a3b8f032c826c1ba942de2d259002922b786ff9fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:03 compute-0 podman[379382]: 2026-01-26 16:36:03.452769899 +0000 UTC m=+0.038505738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:36:03 compute-0 podman[379382]: 2026-01-26 16:36:03.552297128 +0000 UTC m=+0.138033027 container init 6189b2a5564afa0d2ba667ad1255d24238536aee24142679fe9e16e457c47dbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tesla, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:36:03 compute-0 podman[379382]: 2026-01-26 16:36:03.560858588 +0000 UTC m=+0.146594457 container start 6189b2a5564afa0d2ba667ad1255d24238536aee24142679fe9e16e457c47dbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tesla, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:36:03 compute-0 podman[379382]: 2026-01-26 16:36:03.565068252 +0000 UTC m=+0.150804131 container attach 6189b2a5564afa0d2ba667ad1255d24238536aee24142679fe9e16e457c47dbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tesla, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:36:03 compute-0 kind_tesla[379398]: {
Jan 26 16:36:03 compute-0 kind_tesla[379398]:     "0": [
Jan 26 16:36:03 compute-0 kind_tesla[379398]:         {
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "devices": [
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "/dev/loop3"
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             ],
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_name": "ceph_lv0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_size": "21470642176",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "name": "ceph_lv0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "tags": {
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.cluster_name": "ceph",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.crush_device_class": "",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.encrypted": "0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.objectstore": "bluestore",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.osd_id": "0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.type": "block",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.vdo": "0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.with_tpm": "0"
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             },
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "type": "block",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "vg_name": "ceph_vg0"
Jan 26 16:36:03 compute-0 kind_tesla[379398]:         }
Jan 26 16:36:03 compute-0 kind_tesla[379398]:     ],
Jan 26 16:36:03 compute-0 kind_tesla[379398]:     "1": [
Jan 26 16:36:03 compute-0 kind_tesla[379398]:         {
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "devices": [
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "/dev/loop4"
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             ],
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_name": "ceph_lv1",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_size": "21470642176",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "name": "ceph_lv1",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "tags": {
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.cluster_name": "ceph",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.crush_device_class": "",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.encrypted": "0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.objectstore": "bluestore",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.osd_id": "1",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.type": "block",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.vdo": "0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.with_tpm": "0"
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             },
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "type": "block",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "vg_name": "ceph_vg1"
Jan 26 16:36:03 compute-0 kind_tesla[379398]:         }
Jan 26 16:36:03 compute-0 kind_tesla[379398]:     ],
Jan 26 16:36:03 compute-0 kind_tesla[379398]:     "2": [
Jan 26 16:36:03 compute-0 kind_tesla[379398]:         {
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "devices": [
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "/dev/loop5"
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             ],
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_name": "ceph_lv2",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_size": "21470642176",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "name": "ceph_lv2",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "tags": {
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.cluster_name": "ceph",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.crush_device_class": "",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.encrypted": "0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.objectstore": "bluestore",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.osd_id": "2",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.type": "block",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.vdo": "0",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:                 "ceph.with_tpm": "0"
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             },
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "type": "block",
Jan 26 16:36:03 compute-0 kind_tesla[379398]:             "vg_name": "ceph_vg2"
Jan 26 16:36:03 compute-0 kind_tesla[379398]:         }
Jan 26 16:36:03 compute-0 kind_tesla[379398]:     ]
Jan 26 16:36:03 compute-0 kind_tesla[379398]: }
Jan 26 16:36:03 compute-0 systemd[1]: libpod-6189b2a5564afa0d2ba667ad1255d24238536aee24142679fe9e16e457c47dbd.scope: Deactivated successfully.
Jan 26 16:36:03 compute-0 podman[379382]: 2026-01-26 16:36:03.890571012 +0000 UTC m=+0.476306851 container died 6189b2a5564afa0d2ba667ad1255d24238536aee24142679fe9e16e457c47dbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tesla, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:36:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-85df6889715b3c5020ee895a3b8f032c826c1ba942de2d259002922b786ff9fe-merged.mount: Deactivated successfully.
Jan 26 16:36:03 compute-0 podman[379382]: 2026-01-26 16:36:03.935546119 +0000 UTC m=+0.521281998 container remove 6189b2a5564afa0d2ba667ad1255d24238536aee24142679fe9e16e457c47dbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:36:03 compute-0 systemd[1]: libpod-conmon-6189b2a5564afa0d2ba667ad1255d24238536aee24142679fe9e16e457c47dbd.scope: Deactivated successfully.
Jan 26 16:36:03 compute-0 sudo[379303]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:04 compute-0 sudo[379418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:36:04 compute-0 sudo[379418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:36:04 compute-0 sudo[379418]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:04 compute-0 sudo[379443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:36:04 compute-0 sudo[379443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:36:04 compute-0 podman[379480]: 2026-01-26 16:36:04.415038547 +0000 UTC m=+0.037525733 container create 219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_kilby, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:36:04 compute-0 systemd[1]: Started libpod-conmon-219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377.scope.
Jan 26 16:36:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:36:04 compute-0 podman[379480]: 2026-01-26 16:36:04.486070556 +0000 UTC m=+0.108557762 container init 219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:36:04 compute-0 podman[379480]: 2026-01-26 16:36:04.39886509 +0000 UTC m=+0.021352296 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:36:04 compute-0 podman[379480]: 2026-01-26 16:36:04.49516566 +0000 UTC m=+0.117652846 container start 219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_kilby, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 16:36:04 compute-0 systemd[1]: libpod-219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377.scope: Deactivated successfully.
Jan 26 16:36:04 compute-0 festive_kilby[379496]: 167 167
Jan 26 16:36:04 compute-0 podman[379480]: 2026-01-26 16:36:04.499552908 +0000 UTC m=+0.122040114 container attach 219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_kilby, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:36:04 compute-0 conmon[379496]: conmon 219b1553afbb465a621a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377.scope/container/memory.events
Jan 26 16:36:04 compute-0 podman[379480]: 2026-01-26 16:36:04.500998533 +0000 UTC m=+0.123485719 container died 219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_kilby, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:36:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-98c51d95fd909554cc9e1c6f80faae99568c6844cc8fc6d8fc4bf58abfd12be5-merged.mount: Deactivated successfully.
Jan 26 16:36:04 compute-0 podman[379480]: 2026-01-26 16:36:04.545754835 +0000 UTC m=+0.168242021 container remove 219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_kilby, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 16:36:04 compute-0 systemd[1]: libpod-conmon-219b1553afbb465a621acc41ff0c4bb7b2a32d00dd252b5d0ce9e7144d8a2377.scope: Deactivated successfully.
Jan 26 16:36:04 compute-0 nova_compute[239965]: 2026-01-26 16:36:04.566 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:04 compute-0 podman[379520]: 2026-01-26 16:36:04.804352157 +0000 UTC m=+0.118168348 container create a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:36:04 compute-0 podman[379520]: 2026-01-26 16:36:04.714083226 +0000 UTC m=+0.027899407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:36:04 compute-0 ceph-mon[75140]: pgmap v2585: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 6.9 KiB/s rd, 682 B/s wr, 8 op/s
Jan 26 16:36:04 compute-0 systemd[1]: Started libpod-conmon-a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4.scope.
Jan 26 16:36:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc87dca37fba76c8ae1b0dbc465b3578269c0e0a40e9ee8b5af39ce88b0b0de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc87dca37fba76c8ae1b0dbc465b3578269c0e0a40e9ee8b5af39ce88b0b0de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc87dca37fba76c8ae1b0dbc465b3578269c0e0a40e9ee8b5af39ce88b0b0de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcc87dca37fba76c8ae1b0dbc465b3578269c0e0a40e9ee8b5af39ce88b0b0de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:36:04 compute-0 podman[379520]: 2026-01-26 16:36:04.884781127 +0000 UTC m=+0.198597308 container init a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:36:04 compute-0 podman[379520]: 2026-01-26 16:36:04.894561078 +0000 UTC m=+0.208377239 container start a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_johnson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:36:04 compute-0 podman[379520]: 2026-01-26 16:36:04.898593187 +0000 UTC m=+0.212409368 container attach a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_johnson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:36:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:05 compute-0 lvm[379615]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:36:05 compute-0 lvm[379615]: VG ceph_vg0 finished
Jan 26 16:36:05 compute-0 lvm[379616]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:36:05 compute-0 lvm[379616]: VG ceph_vg1 finished
Jan 26 16:36:05 compute-0 lvm[379618]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:36:05 compute-0 lvm[379618]: VG ceph_vg2 finished
Jan 26 16:36:05 compute-0 inspiring_johnson[379537]: {}
Jan 26 16:36:05 compute-0 systemd[1]: libpod-a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4.scope: Deactivated successfully.
Jan 26 16:36:05 compute-0 podman[379520]: 2026-01-26 16:36:05.697088385 +0000 UTC m=+1.010904556 container died a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 16:36:05 compute-0 systemd[1]: libpod-a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4.scope: Consumed 1.292s CPU time.
Jan 26 16:36:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcc87dca37fba76c8ae1b0dbc465b3578269c0e0a40e9ee8b5af39ce88b0b0de-merged.mount: Deactivated successfully.
Jan 26 16:36:05 compute-0 podman[379520]: 2026-01-26 16:36:05.742528493 +0000 UTC m=+1.056344654 container remove a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:36:05 compute-0 systemd[1]: libpod-conmon-a52c73b12011b0136591ed946c687cfe4dc4508e264df128f25b8928ea1ac9f4.scope: Deactivated successfully.
Jan 26 16:36:05 compute-0 sudo[379443]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:36:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:36:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:05 compute-0 sudo[379635]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:36:05 compute-0 sudo[379635]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:36:05 compute-0 sudo[379635]: pam_unix(sudo:session): session closed for user root
Jan 26 16:36:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:06 compute-0 ceph-mon[75140]: pgmap v2586: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:36:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:08 compute-0 nova_compute[239965]: 2026-01-26 16:36:08.238 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:08 compute-0 nova_compute[239965]: 2026-01-26 16:36:08.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:08 compute-0 nova_compute[239965]: 2026-01-26 16:36:08.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:08 compute-0 nova_compute[239965]: 2026-01-26 16:36:08.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:08 compute-0 ceph-mon[75140]: pgmap v2587: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:09 compute-0 nova_compute[239965]: 2026-01-26 16:36:09.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:09 compute-0 nova_compute[239965]: 2026-01-26 16:36:09.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:09 compute-0 nova_compute[239965]: 2026-01-26 16:36:09.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:36:09 compute-0 nova_compute[239965]: 2026-01-26 16:36:09.567 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:10 compute-0 nova_compute[239965]: 2026-01-26 16:36:10.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:10 compute-0 nova_compute[239965]: 2026-01-26 16:36:10.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:36:10 compute-0 nova_compute[239965]: 2026-01-26 16:36:10.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:36:10 compute-0 nova_compute[239965]: 2026-01-26 16:36:10.530 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:36:10 compute-0 ceph-mon[75140]: pgmap v2588: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:12 compute-0 ceph-mon[75140]: pgmap v2589: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:13 compute-0 nova_compute[239965]: 2026-01-26 16:36:13.240 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:14 compute-0 nova_compute[239965]: 2026-01-26 16:36:14.570 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:14 compute-0 ceph-mon[75140]: pgmap v2590: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:15 compute-0 ceph-mon[75140]: pgmap v2591: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:17 compute-0 nova_compute[239965]: 2026-01-26 16:36:17.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:17 compute-0 nova_compute[239965]: 2026-01-26 16:36:17.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:36:17 compute-0 nova_compute[239965]: 2026-01-26 16:36:17.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:36:17 compute-0 nova_compute[239965]: 2026-01-26 16:36:17.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:36:17 compute-0 nova_compute[239965]: 2026-01-26 16:36:17.542 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:36:17 compute-0 nova_compute[239965]: 2026-01-26 16:36:17.542 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:36:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:36:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359685192' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.146 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:36:18 compute-0 ceph-mon[75140]: pgmap v2592: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/359685192' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.242 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.313 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.314 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3521MB free_disk=59.98720581829548GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.314 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.315 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.392 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.392 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.406 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:36:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:36:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3658047080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.944 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.950 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.974 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.976 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:36:18 compute-0 nova_compute[239965]: 2026-01-26 16:36:18.976 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:36:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3658047080' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:36:19 compute-0 nova_compute[239965]: 2026-01-26 16:36:19.573 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:20 compute-0 ceph-mon[75140]: pgmap v2593: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:22 compute-0 ceph-mon[75140]: pgmap v2594: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:23 compute-0 nova_compute[239965]: 2026-01-26 16:36:23.243 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:24 compute-0 ceph-mon[75140]: pgmap v2595: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:24 compute-0 podman[379704]: 2026-01-26 16:36:24.382563889 +0000 UTC m=+0.059702800 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 16:36:24 compute-0 podman[379705]: 2026-01-26 16:36:24.430103849 +0000 UTC m=+0.098823062 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:36:24 compute-0 nova_compute[239965]: 2026-01-26 16:36:24.574 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:25 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Jan 26 16:36:25 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:25.985793) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:36:25 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Jan 26 16:36:25 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445385985904, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 2014, "num_deletes": 255, "total_data_size": 3414834, "memory_usage": 3473160, "flush_reason": "Manual Compaction"}
Jan 26 16:36:25 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445386001707, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 2000575, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52380, "largest_seqno": 54393, "table_properties": {"data_size": 1993826, "index_size": 3567, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17264, "raw_average_key_size": 20, "raw_value_size": 1978927, "raw_average_value_size": 2404, "num_data_blocks": 163, "num_entries": 823, "num_filter_entries": 823, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769445179, "oldest_key_time": 1769445179, "file_creation_time": 1769445385, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 16025 microseconds, and 6931 cpu microseconds.
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.001828) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 2000575 bytes OK
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.001865) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.003844) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.003867) EVENT_LOG_v1 {"time_micros": 1769445386003860, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.003898) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 3406361, prev total WAL file size 3406361, number of live WAL files 2.
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.005340) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303036' seq:72057594037927935, type:22 .. '6D6772737461740032323538' seq:0, type:0; will stop at (end)
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1953KB)], [122(9938KB)]
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445386005595, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 12177380, "oldest_snapshot_seqno": -1}
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7742 keys, 10100622 bytes, temperature: kUnknown
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445386135889, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 10100622, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10050320, "index_size": 29838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19397, "raw_key_size": 200421, "raw_average_key_size": 25, "raw_value_size": 9913613, "raw_average_value_size": 1280, "num_data_blocks": 1168, "num_entries": 7742, "num_filter_entries": 7742, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769445386, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.136239) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 10100622 bytes
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.138402) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 93.4 rd, 77.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.9, 9.7 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(11.1) write-amplify(5.0) OK, records in: 8169, records dropped: 427 output_compression: NoCompression
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.138442) EVENT_LOG_v1 {"time_micros": 1769445386138428, "job": 74, "event": "compaction_finished", "compaction_time_micros": 130414, "compaction_time_cpu_micros": 23525, "output_level": 6, "num_output_files": 1, "total_output_size": 10100622, "num_input_records": 8169, "num_output_records": 7742, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445386139017, "job": 74, "event": "table_file_deletion", "file_number": 124}
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445386140828, "job": 74, "event": "table_file_deletion", "file_number": 122}
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.005174) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.140857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.140861) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.140863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.140864) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:26.140866) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:26 compute-0 ceph-mon[75140]: pgmap v2596: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:26 compute-0 nova_compute[239965]: 2026-01-26 16:36:26.969 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:28 compute-0 nova_compute[239965]: 2026-01-26 16:36:28.245 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:28 compute-0 ceph-mon[75140]: pgmap v2597: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:36:28
Jan 26 16:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.meta', 'default.rgw.log', 'vms', 'backups']
Jan 26 16:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:36:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:29 compute-0 nova_compute[239965]: 2026-01-26 16:36:29.576 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:36:30 compute-0 ceph-mon[75140]: pgmap v2598: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:36:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:36:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:33 compute-0 ceph-mon[75140]: pgmap v2599: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:33 compute-0 nova_compute[239965]: 2026-01-26 16:36:33.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:34 compute-0 ceph-mon[75140]: pgmap v2600: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.220778) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445394220841, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 319, "num_deletes": 251, "total_data_size": 129192, "memory_usage": 135392, "flush_reason": "Manual Compaction"}
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445394223918, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 127678, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54394, "largest_seqno": 54712, "table_properties": {"data_size": 125599, "index_size": 243, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5191, "raw_average_key_size": 18, "raw_value_size": 121617, "raw_average_value_size": 431, "num_data_blocks": 11, "num_entries": 282, "num_filter_entries": 282, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769445386, "oldest_key_time": 1769445386, "file_creation_time": 1769445394, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 3166 microseconds, and 1067 cpu microseconds.
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.223954) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 127678 bytes OK
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.223989) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.225859) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.225874) EVENT_LOG_v1 {"time_micros": 1769445394225869, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.225895) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 126948, prev total WAL file size 126948, number of live WAL files 2.
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.226261) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(124KB)], [125(9863KB)]
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445394226298, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10228300, "oldest_snapshot_seqno": -1}
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7515 keys, 8542740 bytes, temperature: kUnknown
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445394282355, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8542740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8495359, "index_size": 27492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18821, "raw_key_size": 196393, "raw_average_key_size": 26, "raw_value_size": 8363969, "raw_average_value_size": 1112, "num_data_blocks": 1061, "num_entries": 7515, "num_filter_entries": 7515, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769445394, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.282601) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8542740 bytes
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.284727) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.2 rd, 152.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.6 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(147.0) write-amplify(66.9) OK, records in: 8024, records dropped: 509 output_compression: NoCompression
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.284771) EVENT_LOG_v1 {"time_micros": 1769445394284755, "job": 76, "event": "compaction_finished", "compaction_time_micros": 56134, "compaction_time_cpu_micros": 21872, "output_level": 6, "num_output_files": 1, "total_output_size": 8542740, "num_input_records": 8024, "num_output_records": 7515, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445394285037, "job": 76, "event": "table_file_deletion", "file_number": 127}
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445394287097, "job": 76, "event": "table_file_deletion", "file_number": 125}
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.226209) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.287171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.287177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.287179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.287180) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:34 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:36:34.287183) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:36:34 compute-0 nova_compute[239965]: 2026-01-26 16:36:34.578 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:36 compute-0 ceph-mon[75140]: pgmap v2601: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:38 compute-0 ceph-mon[75140]: pgmap v2602: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:38 compute-0 nova_compute[239965]: 2026-01-26 16:36:38.249 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:39 compute-0 nova_compute[239965]: 2026-01-26 16:36:39.581 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:40 compute-0 ceph-mon[75140]: pgmap v2603: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:42 compute-0 ceph-mon[75140]: pgmap v2604: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:43 compute-0 nova_compute[239965]: 2026-01-26 16:36:43.250 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:44 compute-0 ceph-mon[75140]: pgmap v2605: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:44 compute-0 nova_compute[239965]: 2026-01-26 16:36:44.583 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:46 compute-0 ceph-mon[75140]: pgmap v2606: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:48 compute-0 nova_compute[239965]: 2026-01-26 16:36:48.253 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:48 compute-0 ceph-mon[75140]: pgmap v2607: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:36:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/369381744' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:36:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:36:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/369381744' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.7927363180140773e-05 of space, bias 1.0, pg target 0.005378208954042232 quantized to 32 (current 32)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 6.540403659823289e-06 of space, bias 1.0, pg target 0.0019621210979469867 quantized to 32 (current 32)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.695809662089789e-07 of space, bias 4.0, pg target 0.0008034971594507747 quantized to 16 (current 16)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:36:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:36:49 compute-0 nova_compute[239965]: 2026-01-26 16:36:49.594 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/369381744' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:36:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/369381744' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:36:50 compute-0 ceph-mon[75140]: pgmap v2608: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:52 compute-0 ceph-mon[75140]: pgmap v2609: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:53 compute-0 nova_compute[239965]: 2026-01-26 16:36:53.256 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:54 compute-0 nova_compute[239965]: 2026-01-26 16:36:54.596 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:54 compute-0 ceph-mon[75140]: pgmap v2610: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:55 compute-0 podman[379750]: 2026-01-26 16:36:55.398208516 +0000 UTC m=+0.076769381 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 26 16:36:55 compute-0 podman[379751]: 2026-01-26 16:36:55.433805602 +0000 UTC m=+0.112787597 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:36:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:36:56 compute-0 ceph-mon[75140]: pgmap v2611: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:58 compute-0 ceph-mon[75140]: pgmap v2612: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:58 compute-0 nova_compute[239965]: 2026-01-26 16:36:58.258 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:36:58 compute-0 nova_compute[239965]: 2026-01-26 16:36:58.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:36:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:36:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:36:59.258 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:36:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:36:59.258 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:36:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:36:59.258 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:36:59 compute-0 nova_compute[239965]: 2026-01-26 16:36:59.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:00 compute-0 ceph-mon[75140]: pgmap v2613: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:37:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:02 compute-0 ceph-mon[75140]: pgmap v2614: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:02 compute-0 nova_compute[239965]: 2026-01-26 16:37:02.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:03 compute-0 nova_compute[239965]: 2026-01-26 16:37:03.261 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:04 compute-0 ceph-mon[75140]: pgmap v2615: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:04 compute-0 nova_compute[239965]: 2026-01-26 16:37:04.601 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:05 compute-0 sudo[379795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:37:05 compute-0 sudo[379795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:37:05 compute-0 sudo[379795]: pam_unix(sudo:session): session closed for user root
Jan 26 16:37:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:06 compute-0 sudo[379820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:37:06 compute-0 sudo[379820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:37:06 compute-0 ceph-mon[75140]: pgmap v2616: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:06 compute-0 sudo[379820]: pam_unix(sudo:session): session closed for user root
Jan 26 16:37:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:37:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:37:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:37:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:37:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:37:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:37:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:37:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:37:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:37:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:37:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:37:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:37:06 compute-0 sudo[379878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:37:06 compute-0 sudo[379878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:37:06 compute-0 sudo[379878]: pam_unix(sudo:session): session closed for user root
Jan 26 16:37:06 compute-0 sudo[379903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:37:06 compute-0 sudo[379903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:37:06 compute-0 podman[379939]: 2026-01-26 16:37:06.977228585 +0000 UTC m=+0.048713819 container create 25bf52a2171cf2233c020adcb62f812c6816592ad28637afcb898ffa99e240e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_snyder, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 16:37:07 compute-0 systemd[1]: Started libpod-conmon-25bf52a2171cf2233c020adcb62f812c6816592ad28637afcb898ffa99e240e2.scope.
Jan 26 16:37:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:37:07 compute-0 podman[379939]: 2026-01-26 16:37:06.953497441 +0000 UTC m=+0.024982695 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:37:07 compute-0 podman[379939]: 2026-01-26 16:37:07.067761704 +0000 UTC m=+0.139246978 container init 25bf52a2171cf2233c020adcb62f812c6816592ad28637afcb898ffa99e240e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_snyder, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:37:07 compute-0 podman[379939]: 2026-01-26 16:37:07.07939861 +0000 UTC m=+0.150883844 container start 25bf52a2171cf2233c020adcb62f812c6816592ad28637afcb898ffa99e240e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:37:07 compute-0 podman[379939]: 2026-01-26 16:37:07.083451769 +0000 UTC m=+0.154937013 container attach 25bf52a2171cf2233c020adcb62f812c6816592ad28637afcb898ffa99e240e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:37:07 compute-0 angry_snyder[379955]: 167 167
Jan 26 16:37:07 compute-0 systemd[1]: libpod-25bf52a2171cf2233c020adcb62f812c6816592ad28637afcb898ffa99e240e2.scope: Deactivated successfully.
Jan 26 16:37:07 compute-0 podman[379939]: 2026-01-26 16:37:07.090877942 +0000 UTC m=+0.162363176 container died 25bf52a2171cf2233c020adcb62f812c6816592ad28637afcb898ffa99e240e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:37:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-75839e40a6408f0fcb024f79bc30ed7e88d882a151f3bf564b495727a4c7683d-merged.mount: Deactivated successfully.
Jan 26 16:37:07 compute-0 podman[379939]: 2026-01-26 16:37:07.135329036 +0000 UTC m=+0.206814280 container remove 25bf52a2171cf2233c020adcb62f812c6816592ad28637afcb898ffa99e240e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:37:07 compute-0 systemd[1]: libpod-conmon-25bf52a2171cf2233c020adcb62f812c6816592ad28637afcb898ffa99e240e2.scope: Deactivated successfully.
Jan 26 16:37:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:37:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:37:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:37:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:37:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:37:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:37:07 compute-0 podman[379979]: 2026-01-26 16:37:07.305163485 +0000 UTC m=+0.042664801 container create 8e7656cd97e880e560898714e783d996ff408ae17504b8742bfc9605f5655670 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:37:07 compute-0 systemd[1]: Started libpod-conmon-8e7656cd97e880e560898714e783d996ff408ae17504b8742bfc9605f5655670.scope.
Jan 26 16:37:07 compute-0 podman[379979]: 2026-01-26 16:37:07.287645294 +0000 UTC m=+0.025146610 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:37:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e164037c314a839cb66d7ebccc96099689a1da7e40d898a811f2186e8377c40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e164037c314a839cb66d7ebccc96099689a1da7e40d898a811f2186e8377c40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e164037c314a839cb66d7ebccc96099689a1da7e40d898a811f2186e8377c40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e164037c314a839cb66d7ebccc96099689a1da7e40d898a811f2186e8377c40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e164037c314a839cb66d7ebccc96099689a1da7e40d898a811f2186e8377c40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:07 compute-0 podman[379979]: 2026-01-26 16:37:07.401335572 +0000 UTC m=+0.138836928 container init 8e7656cd97e880e560898714e783d996ff408ae17504b8742bfc9605f5655670 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_dewdney, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:37:07 compute-0 podman[379979]: 2026-01-26 16:37:07.4166881 +0000 UTC m=+0.154189396 container start 8e7656cd97e880e560898714e783d996ff408ae17504b8742bfc9605f5655670 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:37:07 compute-0 podman[379979]: 2026-01-26 16:37:07.422768449 +0000 UTC m=+0.160269765 container attach 8e7656cd97e880e560898714e783d996ff408ae17504b8742bfc9605f5655670 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_dewdney, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:37:07 compute-0 hardcore_dewdney[379996]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:37:07 compute-0 hardcore_dewdney[379996]: --> All data devices are unavailable
Jan 26 16:37:07 compute-0 systemd[1]: libpod-8e7656cd97e880e560898714e783d996ff408ae17504b8742bfc9605f5655670.scope: Deactivated successfully.
Jan 26 16:37:07 compute-0 podman[379979]: 2026-01-26 16:37:07.949303485 +0000 UTC m=+0.686804781 container died 8e7656cd97e880e560898714e783d996ff408ae17504b8742bfc9605f5655670 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_dewdney, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e164037c314a839cb66d7ebccc96099689a1da7e40d898a811f2186e8377c40-merged.mount: Deactivated successfully.
Jan 26 16:37:08 compute-0 podman[379979]: 2026-01-26 16:37:08.202390473 +0000 UTC m=+0.939891789 container remove 8e7656cd97e880e560898714e783d996ff408ae17504b8742bfc9605f5655670 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:37:08 compute-0 systemd[1]: libpod-conmon-8e7656cd97e880e560898714e783d996ff408ae17504b8742bfc9605f5655670.scope: Deactivated successfully.
Jan 26 16:37:08 compute-0 sudo[379903]: pam_unix(sudo:session): session closed for user root
Jan 26 16:37:08 compute-0 nova_compute[239965]: 2026-01-26 16:37:08.264 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:08 compute-0 sudo[380029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:37:08 compute-0 sudo[380029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:37:08 compute-0 sudo[380029]: pam_unix(sudo:session): session closed for user root
Jan 26 16:37:08 compute-0 sudo[380054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:37:08 compute-0 sudo[380054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:37:08 compute-0 nova_compute[239965]: 2026-01-26 16:37:08.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:08 compute-0 ceph-mon[75140]: pgmap v2617: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:08 compute-0 podman[380091]: 2026-01-26 16:37:08.711268216 +0000 UTC m=+0.038890818 container create e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_carver, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 16:37:08 compute-0 systemd[1]: Started libpod-conmon-e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc.scope.
Jan 26 16:37:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:37:08 compute-0 podman[380091]: 2026-01-26 16:37:08.695687382 +0000 UTC m=+0.023310014 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:37:08 compute-0 podman[380091]: 2026-01-26 16:37:08.804198943 +0000 UTC m=+0.131821585 container init e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 16:37:08 compute-0 podman[380091]: 2026-01-26 16:37:08.812807524 +0000 UTC m=+0.140430136 container start e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:37:08 compute-0 laughing_carver[380106]: 167 167
Jan 26 16:37:08 compute-0 podman[380091]: 2026-01-26 16:37:08.818465793 +0000 UTC m=+0.146088485 container attach e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:37:08 compute-0 systemd[1]: libpod-e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc.scope: Deactivated successfully.
Jan 26 16:37:08 compute-0 conmon[380106]: conmon e3e64a7e42597fcbd3ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc.scope/container/memory.events
Jan 26 16:37:08 compute-0 podman[380111]: 2026-01-26 16:37:08.86870926 +0000 UTC m=+0.028366039 container died e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 16:37:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f050f94b653a25cb9b959519ff0933f89d3c7825932b593e6f983622f6ef791c-merged.mount: Deactivated successfully.
Jan 26 16:37:08 compute-0 podman[380111]: 2026-01-26 16:37:08.906735055 +0000 UTC m=+0.066391814 container remove e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:37:08 compute-0 systemd[1]: libpod-conmon-e3e64a7e42597fcbd3fff706b958c6c6a91b8a522b6bf1147c68e45915a680fc.scope: Deactivated successfully.
Jan 26 16:37:09 compute-0 podman[380133]: 2026-01-26 16:37:09.141694047 +0000 UTC m=+0.059436323 container create 48b84ccee144b077f24721459481034d22283a4f20485d05af752e2013331510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:37:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:09 compute-0 systemd[1]: Started libpod-conmon-48b84ccee144b077f24721459481034d22283a4f20485d05af752e2013331510.scope.
Jan 26 16:37:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:37:09 compute-0 podman[380133]: 2026-01-26 16:37:09.122626288 +0000 UTC m=+0.040368584 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939b8bc2a91bcfb008bbfb4d8d300bc05afe2dc58ffe71338cccb57309f546dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939b8bc2a91bcfb008bbfb4d8d300bc05afe2dc58ffe71338cccb57309f546dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939b8bc2a91bcfb008bbfb4d8d300bc05afe2dc58ffe71338cccb57309f546dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/939b8bc2a91bcfb008bbfb4d8d300bc05afe2dc58ffe71338cccb57309f546dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:09 compute-0 nova_compute[239965]: 2026-01-26 16:37:09.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:09 compute-0 nova_compute[239965]: 2026-01-26 16:37:09.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:09 compute-0 nova_compute[239965]: 2026-01-26 16:37:09.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:09 compute-0 nova_compute[239965]: 2026-01-26 16:37:09.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:37:09 compute-0 nova_compute[239965]: 2026-01-26 16:37:09.603 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:09 compute-0 podman[380133]: 2026-01-26 16:37:09.656666859 +0000 UTC m=+0.574409165 container init 48b84ccee144b077f24721459481034d22283a4f20485d05af752e2013331510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:37:09 compute-0 podman[380133]: 2026-01-26 16:37:09.663462167 +0000 UTC m=+0.581204483 container start 48b84ccee144b077f24721459481034d22283a4f20485d05af752e2013331510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:37:09 compute-0 podman[380133]: 2026-01-26 16:37:09.699467002 +0000 UTC m=+0.617209368 container attach 48b84ccee144b077f24721459481034d22283a4f20485d05af752e2013331510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]: {
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:     "0": [
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:         {
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "devices": [
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "/dev/loop3"
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             ],
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_name": "ceph_lv0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_size": "21470642176",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "name": "ceph_lv0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "tags": {
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.cluster_name": "ceph",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.crush_device_class": "",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.encrypted": "0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.objectstore": "bluestore",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.osd_id": "0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.type": "block",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.vdo": "0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.with_tpm": "0"
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             },
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "type": "block",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "vg_name": "ceph_vg0"
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:         }
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:     ],
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:     "1": [
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:         {
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "devices": [
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "/dev/loop4"
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             ],
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_name": "ceph_lv1",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_size": "21470642176",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "name": "ceph_lv1",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "tags": {
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.cluster_name": "ceph",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.crush_device_class": "",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.encrypted": "0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.objectstore": "bluestore",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.osd_id": "1",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.type": "block",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.vdo": "0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.with_tpm": "0"
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             },
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "type": "block",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "vg_name": "ceph_vg1"
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:         }
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:     ],
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:     "2": [
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:         {
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "devices": [
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "/dev/loop5"
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             ],
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_name": "ceph_lv2",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_size": "21470642176",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "name": "ceph_lv2",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "tags": {
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.cluster_name": "ceph",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.crush_device_class": "",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.encrypted": "0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.objectstore": "bluestore",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.osd_id": "2",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.type": "block",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.vdo": "0",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:                 "ceph.with_tpm": "0"
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             },
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "type": "block",
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:             "vg_name": "ceph_vg2"
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:         }
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]:     ]
Jan 26 16:37:09 compute-0 elastic_kapitsa[380150]: }
Jan 26 16:37:10 compute-0 systemd[1]: libpod-48b84ccee144b077f24721459481034d22283a4f20485d05af752e2013331510.scope: Deactivated successfully.
Jan 26 16:37:10 compute-0 podman[380133]: 2026-01-26 16:37:10.014734591 +0000 UTC m=+0.932476867 container died 48b84ccee144b077f24721459481034d22283a4f20485d05af752e2013331510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 16:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-939b8bc2a91bcfb008bbfb4d8d300bc05afe2dc58ffe71338cccb57309f546dc-merged.mount: Deactivated successfully.
Jan 26 16:37:10 compute-0 podman[380133]: 2026-01-26 16:37:10.05738984 +0000 UTC m=+0.975132136 container remove 48b84ccee144b077f24721459481034d22283a4f20485d05af752e2013331510 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kapitsa, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:37:10 compute-0 systemd[1]: libpod-conmon-48b84ccee144b077f24721459481034d22283a4f20485d05af752e2013331510.scope: Deactivated successfully.
Jan 26 16:37:10 compute-0 sudo[380054]: pam_unix(sudo:session): session closed for user root
Jan 26 16:37:10 compute-0 sudo[380171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:37:10 compute-0 sudo[380171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:37:10 compute-0 sudo[380171]: pam_unix(sudo:session): session closed for user root
Jan 26 16:37:10 compute-0 sudo[380196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:37:10 compute-0 sudo[380196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:37:10 compute-0 nova_compute[239965]: 2026-01-26 16:37:10.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:10 compute-0 nova_compute[239965]: 2026-01-26 16:37:10.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:37:10 compute-0 nova_compute[239965]: 2026-01-26 16:37:10.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:37:10 compute-0 podman[380234]: 2026-01-26 16:37:10.565669487 +0000 UTC m=+0.070609638 container create 1cf2cce5830b051e5ff06592bdc419d2f68451256de4e5dc1bd439cc037925d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:37:10 compute-0 systemd[1]: Started libpod-conmon-1cf2cce5830b051e5ff06592bdc419d2f68451256de4e5dc1bd439cc037925d6.scope.
Jan 26 16:37:10 compute-0 podman[380234]: 2026-01-26 16:37:10.531724072 +0000 UTC m=+0.036664313 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:37:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:37:10 compute-0 nova_compute[239965]: 2026-01-26 16:37:10.675 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:37:10 compute-0 nova_compute[239965]: 2026-01-26 16:37:10.676 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:10 compute-0 podman[380234]: 2026-01-26 16:37:10.688009877 +0000 UTC m=+0.192950048 container init 1cf2cce5830b051e5ff06592bdc419d2f68451256de4e5dc1bd439cc037925d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:37:10 compute-0 podman[380234]: 2026-01-26 16:37:10.699716766 +0000 UTC m=+0.204656917 container start 1cf2cce5830b051e5ff06592bdc419d2f68451256de4e5dc1bd439cc037925d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 16:37:10 compute-0 strange_agnesi[380250]: 167 167
Jan 26 16:37:10 compute-0 systemd[1]: libpod-1cf2cce5830b051e5ff06592bdc419d2f68451256de4e5dc1bd439cc037925d6.scope: Deactivated successfully.
Jan 26 16:37:10 compute-0 podman[380234]: 2026-01-26 16:37:10.715367201 +0000 UTC m=+0.220307392 container attach 1cf2cce5830b051e5ff06592bdc419d2f68451256de4e5dc1bd439cc037925d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:37:10 compute-0 podman[380234]: 2026-01-26 16:37:10.715947355 +0000 UTC m=+0.220887546 container died 1cf2cce5830b051e5ff06592bdc419d2f68451256de4e5dc1bd439cc037925d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:37:10 compute-0 ceph-mon[75140]: pgmap v2618: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c586634ec75d80c46381a3820ba0653d911c4e6fbf31bdc1e08b7edc3a96d318-merged.mount: Deactivated successfully.
Jan 26 16:37:10 compute-0 podman[380234]: 2026-01-26 16:37:10.764210053 +0000 UTC m=+0.269150244 container remove 1cf2cce5830b051e5ff06592bdc419d2f68451256de4e5dc1bd439cc037925d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_agnesi, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:37:10 compute-0 systemd[1]: libpod-conmon-1cf2cce5830b051e5ff06592bdc419d2f68451256de4e5dc1bd439cc037925d6.scope: Deactivated successfully.
Jan 26 16:37:10 compute-0 podman[380274]: 2026-01-26 16:37:10.982182066 +0000 UTC m=+0.052325098 container create b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:37:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:11 compute-0 systemd[1]: Started libpod-conmon-b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7.scope.
Jan 26 16:37:11 compute-0 podman[380274]: 2026-01-26 16:37:10.964239615 +0000 UTC m=+0.034382667 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:37:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b171ef59a298c3fa9547c44052973ffd54dcf67f92e3899171e326fb4ad08b43/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b171ef59a298c3fa9547c44052973ffd54dcf67f92e3899171e326fb4ad08b43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b171ef59a298c3fa9547c44052973ffd54dcf67f92e3899171e326fb4ad08b43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b171ef59a298c3fa9547c44052973ffd54dcf67f92e3899171e326fb4ad08b43/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:37:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:11 compute-0 podman[380274]: 2026-01-26 16:37:11.232112007 +0000 UTC m=+0.302255049 container init b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:37:11 compute-0 podman[380274]: 2026-01-26 16:37:11.238478914 +0000 UTC m=+0.308621976 container start b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 16:37:11 compute-0 podman[380274]: 2026-01-26 16:37:11.261872909 +0000 UTC m=+0.332015951 container attach b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 26 16:37:12 compute-0 lvm[380369]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:37:12 compute-0 lvm[380368]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:37:12 compute-0 lvm[380368]: VG ceph_vg0 finished
Jan 26 16:37:12 compute-0 lvm[380369]: VG ceph_vg1 finished
Jan 26 16:37:12 compute-0 lvm[380371]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:37:12 compute-0 lvm[380371]: VG ceph_vg2 finished
Jan 26 16:37:12 compute-0 hungry_goodall[380290]: {}
Jan 26 16:37:12 compute-0 systemd[1]: libpod-b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7.scope: Deactivated successfully.
Jan 26 16:37:12 compute-0 podman[380274]: 2026-01-26 16:37:12.209487457 +0000 UTC m=+1.279630519 container died b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:37:12 compute-0 systemd[1]: libpod-b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7.scope: Consumed 1.572s CPU time.
Jan 26 16:37:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-b171ef59a298c3fa9547c44052973ffd54dcf67f92e3899171e326fb4ad08b43-merged.mount: Deactivated successfully.
Jan 26 16:37:12 compute-0 podman[380274]: 2026-01-26 16:37:12.382647079 +0000 UTC m=+1.452790111 container remove b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:37:12 compute-0 systemd[1]: libpod-conmon-b963895b5bf95e10b0060b7393e8716c94fe1710b99d2723b00675c25f4fb7f7.scope: Deactivated successfully.
Jan 26 16:37:12 compute-0 sudo[380196]: pam_unix(sudo:session): session closed for user root
Jan 26 16:37:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:37:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:37:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:37:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:37:12 compute-0 sudo[380386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:37:12 compute-0 sudo[380386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:37:12 compute-0 sudo[380386]: pam_unix(sudo:session): session closed for user root
Jan 26 16:37:12 compute-0 ceph-mon[75140]: pgmap v2619: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:37:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:37:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:13 compute-0 nova_compute[239965]: 2026-01-26 16:37:13.268 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:14 compute-0 nova_compute[239965]: 2026-01-26 16:37:14.607 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:14 compute-0 ceph-mon[75140]: pgmap v2620: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:16 compute-0 ceph-mon[75140]: pgmap v2621: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:17 compute-0 nova_compute[239965]: 2026-01-26 16:37:17.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:37:17 compute-0 nova_compute[239965]: 2026-01-26 16:37:17.547 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:17 compute-0 nova_compute[239965]: 2026-01-26 16:37:17.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:17 compute-0 nova_compute[239965]: 2026-01-26 16:37:17.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:17 compute-0 nova_compute[239965]: 2026-01-26 16:37:17.549 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:37:17 compute-0 nova_compute[239965]: 2026-01-26 16:37:17.549 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:37:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3857183573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:37:18 compute-0 nova_compute[239965]: 2026-01-26 16:37:18.164 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:18 compute-0 nova_compute[239965]: 2026-01-26 16:37:18.270 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:18 compute-0 nova_compute[239965]: 2026-01-26 16:37:18.342 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:37:18 compute-0 nova_compute[239965]: 2026-01-26 16:37:18.343 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3526MB free_disk=59.98720581829548GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:37:18 compute-0 nova_compute[239965]: 2026-01-26 16:37:18.343 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:18 compute-0 nova_compute[239965]: 2026-01-26 16:37:18.344 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:18 compute-0 ceph-mon[75140]: pgmap v2622: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3857183573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:37:19 compute-0 nova_compute[239965]: 2026-01-26 16:37:19.011 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:37:19 compute-0 nova_compute[239965]: 2026-01-26 16:37:19.012 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:37:19 compute-0 nova_compute[239965]: 2026-01-26 16:37:19.142 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:37:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:19 compute-0 nova_compute[239965]: 2026-01-26 16:37:19.608 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:37:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3603516796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:37:19 compute-0 nova_compute[239965]: 2026-01-26 16:37:19.721 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:37:19 compute-0 nova_compute[239965]: 2026-01-26 16:37:19.728 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:37:19 compute-0 nova_compute[239965]: 2026-01-26 16:37:19.761 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:37:19 compute-0 nova_compute[239965]: 2026-01-26 16:37:19.763 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:37:19 compute-0 nova_compute[239965]: 2026-01-26 16:37:19.764 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3603516796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:37:20 compute-0 ceph-mon[75140]: pgmap v2623: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:21 compute-0 ceph-mon[75140]: pgmap v2624: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:23 compute-0 nova_compute[239965]: 2026-01-26 16:37:23.272 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:24 compute-0 ceph-mon[75140]: pgmap v2625: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:24 compute-0 nova_compute[239965]: 2026-01-26 16:37:24.610 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:26 compute-0 podman[380455]: 2026-01-26 16:37:26.392781882 +0000 UTC m=+0.069577403 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:37:26 compute-0 podman[380456]: 2026-01-26 16:37:26.436916898 +0000 UTC m=+0.114219741 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 26 16:37:26 compute-0 ceph-mon[75140]: pgmap v2626: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:28 compute-0 nova_compute[239965]: 2026-01-26 16:37:28.273 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:28 compute-0 ceph-mon[75140]: pgmap v2627: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:37:28
Jan 26 16:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'default.rgw.log', 'volumes', '.mgr']
Jan 26 16:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:37:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:29 compute-0 nova_compute[239965]: 2026-01-26 16:37:29.657 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:37:30 compute-0 ceph-mon[75140]: pgmap v2628: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:37:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:37:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:32 compute-0 ceph-mon[75140]: pgmap v2629: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Jan 26 16:37:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Jan 26 16:37:33 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Jan 26 16:37:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:33 compute-0 nova_compute[239965]: 2026-01-26 16:37:33.276 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:34 compute-0 ceph-mon[75140]: osdmap e329: 3 total, 3 up, 3 in
Jan 26 16:37:34 compute-0 ceph-mon[75140]: pgmap v2631: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:34 compute-0 nova_compute[239965]: 2026-01-26 16:37:34.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Jan 26 16:37:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Jan 26 16:37:35 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Jan 26 16:37:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:36 compute-0 ceph-mon[75140]: pgmap v2632: 305 pgs: 305 active+clean; 463 KiB data, 1007 MiB used, 59 GiB / 60 GiB avail
Jan 26 16:37:36 compute-0 ceph-mon[75140]: osdmap e330: 3 total, 3 up, 3 in
Jan 26 16:37:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 37 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 4.6 MiB/s wr, 38 op/s
Jan 26 16:37:38 compute-0 nova_compute[239965]: 2026-01-26 16:37:38.277 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:38 compute-0 ceph-mon[75140]: pgmap v2634: 305 pgs: 305 active+clean; 37 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 4.6 MiB/s wr, 38 op/s
Jan 26 16:37:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 26 16:37:39 compute-0 nova_compute[239965]: 2026-01-26 16:37:39.661 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:40 compute-0 ceph-mon[75140]: pgmap v2635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 26 16:37:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 26 16:37:42 compute-0 ceph-mon[75140]: pgmap v2636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Jan 26 16:37:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Jan 26 16:37:43 compute-0 nova_compute[239965]: 2026-01-26 16:37:43.280 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:44 compute-0 ceph-mon[75140]: pgmap v2637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Jan 26 16:37:44 compute-0 nova_compute[239965]: 2026-01-26 16:37:44.663 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Jan 26 16:37:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:46 compute-0 ceph-mon[75140]: pgmap v2638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Jan 26 16:37:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.5 MiB/s wr, 32 op/s
Jan 26 16:37:48 compute-0 nova_compute[239965]: 2026-01-26 16:37:48.282 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:48 compute-0 ceph-mon[75140]: pgmap v2639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 3.5 MiB/s wr, 32 op/s
Jan 26 16:37:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:37:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/880080591' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:37:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:37:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/880080591' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 KiB/s rd, 380 KiB/s wr, 5 op/s
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8005143806649783e-05 of space, bias 1.0, pg target 0.005401543141994935 quantized to 32 (current 32)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006723675464228462 of space, bias 1.0, pg target 0.20171026392685387 quantized to 32 (current 32)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.860530709447188e-07 of space, bias 4.0, pg target 0.0008232636851336626 quantized to 16 (current 16)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:37:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:37:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/880080591' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:37:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/880080591' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:37:49 compute-0 nova_compute[239965]: 2026-01-26 16:37:49.665 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:50 compute-0 ceph-mon[75140]: pgmap v2640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.4 KiB/s rd, 380 KiB/s wr, 5 op/s
Jan 26 16:37:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 16:37:52 compute-0 ceph-mon[75140]: pgmap v2641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Jan 26 16:37:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Jan 26 16:37:53 compute-0 nova_compute[239965]: 2026-01-26 16:37:53.284 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:54 compute-0 ceph-mon[75140]: pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Jan 26 16:37:54 compute-0 nova_compute[239965]: 2026-01-26 16:37:54.667 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Jan 26 16:37:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:37:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:37:56.410 156105 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '42:bb:23', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '92:bd:c2:93:a8:2a'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 26 16:37:56 compute-0 nova_compute[239965]: 2026-01-26 16:37:56.411 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:56 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:37:56.412 156105 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 26 16:37:56 compute-0 ceph-mon[75140]: pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 10 op/s
Jan 26 16:37:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 16:37:57 compute-0 podman[380498]: 2026-01-26 16:37:57.372798175 +0000 UTC m=+0.054801739 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:37:57 compute-0 podman[380499]: 2026-01-26 16:37:57.402860805 +0000 UTC m=+0.084416079 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:37:58 compute-0 nova_compute[239965]: 2026-01-26 16:37:58.287 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:37:58 compute-0 ceph-mon[75140]: pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 16:37:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 16:37:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:37:59.259 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:37:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:37:59.260 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:37:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:37:59.260 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:37:59 compute-0 nova_compute[239965]: 2026-01-26 16:37:59.668 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:00 compute-0 ceph-mon[75140]: pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 16:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:38:00 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:38:00.414 156105 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=8b34ddd0-1459-4ea9-b050-cbcb6a3cdd76, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 26 16:38:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e330 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 16:38:02 compute-0 ceph-mon[75140]: pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 16:38:02 compute-0 nova_compute[239965]: 2026-01-26 16:38:02.764 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:02 compute-0 nova_compute[239965]: 2026-01-26 16:38:02.765 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 16:38:03 compute-0 nova_compute[239965]: 2026-01-26 16:38:03.289 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Jan 26 16:38:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Jan 26 16:38:03 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Jan 26 16:38:04 compute-0 ceph-mon[75140]: pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 16:38:04 compute-0 ceph-mon[75140]: osdmap e331: 3 total, 3 up, 3 in
Jan 26 16:38:04 compute-0 nova_compute[239965]: 2026-01-26 16:38:04.672 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:05 compute-0 sshd-session[380542]: Invalid user sol from 45.148.10.240 port 38344
Jan 26 16:38:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 16:38:05 compute-0 sshd-session[380542]: Connection closed by invalid user sol 45.148.10.240 port 38344 [preauth]
Jan 26 16:38:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:06 compute-0 ceph-mon[75140]: pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 16:38:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 26 16:38:08 compute-0 nova_compute[239965]: 2026-01-26 16:38:08.291 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:08 compute-0 nova_compute[239965]: 2026-01-26 16:38:08.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:09 compute-0 ceph-mon[75140]: pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 921 B/s wr, 19 op/s
Jan 26 16:38:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 26 16:38:09 compute-0 nova_compute[239965]: 2026-01-26 16:38:09.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:09 compute-0 nova_compute[239965]: 2026-01-26 16:38:09.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:38:09 compute-0 nova_compute[239965]: 2026-01-26 16:38:09.674 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:10 compute-0 ceph-mon[75140]: pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 26 16:38:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e331 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Jan 26 16:38:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Jan 26 16:38:11 compute-0 ceph-mon[75140]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Jan 26 16:38:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 26 16:38:11 compute-0 nova_compute[239965]: 2026-01-26 16:38:11.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:11 compute-0 nova_compute[239965]: 2026-01-26 16:38:11.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:11 compute-0 nova_compute[239965]: 2026-01-26 16:38:11.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:12 compute-0 ceph-mon[75140]: osdmap e332: 3 total, 3 up, 3 in
Jan 26 16:38:12 compute-0 ceph-mon[75140]: pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.7 KiB/s wr, 31 op/s
Jan 26 16:38:12 compute-0 nova_compute[239965]: 2026-01-26 16:38:12.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:12 compute-0 nova_compute[239965]: 2026-01-26 16:38:12.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:38:12 compute-0 nova_compute[239965]: 2026-01-26 16:38:12.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:38:12 compute-0 nova_compute[239965]: 2026-01-26 16:38:12.535 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:38:12 compute-0 sudo[380545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:38:12 compute-0 sudo[380545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:38:12 compute-0 sudo[380545]: pam_unix(sudo:session): session closed for user root
Jan 26 16:38:12 compute-0 sudo[380570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:38:12 compute-0 sudo[380570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:38:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 26 16:38:13 compute-0 sudo[380570]: pam_unix(sudo:session): session closed for user root
Jan 26 16:38:13 compute-0 nova_compute[239965]: 2026-01-26 16:38:13.294 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:38:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:38:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:38:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:38:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:38:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:38:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:38:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:38:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:38:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:38:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:38:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:38:13 compute-0 sudo[380625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:38:13 compute-0 sudo[380625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:38:13 compute-0 sudo[380625]: pam_unix(sudo:session): session closed for user root
Jan 26 16:38:13 compute-0 sudo[380650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:38:13 compute-0 sudo[380650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:38:13 compute-0 podman[380687]: 2026-01-26 16:38:13.697793962 +0000 UTC m=+0.036299185 container create 05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:38:13 compute-0 systemd[1]: Started libpod-conmon-05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e.scope.
Jan 26 16:38:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:38:13 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:38:13 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 16:38:13 compute-0 podman[380687]: 2026-01-26 16:38:13.764922734 +0000 UTC m=+0.103427977 container init 05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:38:13 compute-0 podman[380687]: 2026-01-26 16:38:13.776080218 +0000 UTC m=+0.114585451 container start 05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:38:13 compute-0 podman[380687]: 2026-01-26 16:38:13.681250044 +0000 UTC m=+0.019755287 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:38:13 compute-0 podman[380687]: 2026-01-26 16:38:13.779762168 +0000 UTC m=+0.118267401 container attach 05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 16:38:13 compute-0 systemd[1]: libpod-05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e.scope: Deactivated successfully.
Jan 26 16:38:13 compute-0 blissful_jemison[380704]: 167 167
Jan 26 16:38:13 compute-0 podman[380687]: 2026-01-26 16:38:13.785792667 +0000 UTC m=+0.124297890 container died 05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:38:13 compute-0 conmon[380704]: conmon 05a8a33d9a8476e0fea7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e.scope/container/memory.events
Jan 26 16:38:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-23c9b5dde71b5e3922d246c8962716f1ff85ead1871a55c65f59d663798aa5a2-merged.mount: Deactivated successfully.
Jan 26 16:38:13 compute-0 podman[380687]: 2026-01-26 16:38:13.82736113 +0000 UTC m=+0.165866353 container remove 05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:38:13 compute-0 systemd[1]: libpod-conmon-05a8a33d9a8476e0fea7b9acd57d0b6c2ea18d900daa5466130fd3035b961d6e.scope: Deactivated successfully.
Jan 26 16:38:14 compute-0 podman[380731]: 2026-01-26 16:38:14.085135003 +0000 UTC m=+0.108898830 container create d190c459fb99f7483c557701cdb565e3242b6559e05b8cbe9696d93ea922cf0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_germain, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:38:14 compute-0 podman[380731]: 2026-01-26 16:38:14.010539267 +0000 UTC m=+0.034303114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:38:14 compute-0 systemd[1]: Started libpod-conmon-d190c459fb99f7483c557701cdb565e3242b6559e05b8cbe9696d93ea922cf0f.scope.
Jan 26 16:38:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/343f28d6d46864c2c11388b4ef9a1f085cdab0172417cf72bf729fa94eb391cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/343f28d6d46864c2c11388b4ef9a1f085cdab0172417cf72bf729fa94eb391cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/343f28d6d46864c2c11388b4ef9a1f085cdab0172417cf72bf729fa94eb391cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/343f28d6d46864c2c11388b4ef9a1f085cdab0172417cf72bf729fa94eb391cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/343f28d6d46864c2c11388b4ef9a1f085cdab0172417cf72bf729fa94eb391cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:14 compute-0 podman[380731]: 2026-01-26 16:38:14.184471547 +0000 UTC m=+0.208235404 container init d190c459fb99f7483c557701cdb565e3242b6559e05b8cbe9696d93ea922cf0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_germain, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:38:14 compute-0 podman[380731]: 2026-01-26 16:38:14.194239007 +0000 UTC m=+0.218002834 container start d190c459fb99f7483c557701cdb565e3242b6559e05b8cbe9696d93ea922cf0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_germain, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:38:14 compute-0 podman[380731]: 2026-01-26 16:38:14.197616741 +0000 UTC m=+0.221380638 container attach d190c459fb99f7483c557701cdb565e3242b6559e05b8cbe9696d93ea922cf0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_germain, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:38:14 compute-0 ceph-mon[75140]: pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Jan 26 16:38:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:38:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:38:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:38:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:38:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:38:14 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:38:14 compute-0 quizzical_germain[380748]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:38:14 compute-0 quizzical_germain[380748]: --> All data devices are unavailable
Jan 26 16:38:14 compute-0 systemd[1]: libpod-d190c459fb99f7483c557701cdb565e3242b6559e05b8cbe9696d93ea922cf0f.scope: Deactivated successfully.
Jan 26 16:38:14 compute-0 podman[380731]: 2026-01-26 16:38:14.666553581 +0000 UTC m=+0.690317398 container died d190c459fb99f7483c557701cdb565e3242b6559e05b8cbe9696d93ea922cf0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_germain, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 16:38:14 compute-0 nova_compute[239965]: 2026-01-26 16:38:14.674 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-343f28d6d46864c2c11388b4ef9a1f085cdab0172417cf72bf729fa94eb391cb-merged.mount: Deactivated successfully.
Jan 26 16:38:14 compute-0 podman[380731]: 2026-01-26 16:38:14.71000648 +0000 UTC m=+0.733770297 container remove d190c459fb99f7483c557701cdb565e3242b6559e05b8cbe9696d93ea922cf0f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:38:14 compute-0 systemd[1]: libpod-conmon-d190c459fb99f7483c557701cdb565e3242b6559e05b8cbe9696d93ea922cf0f.scope: Deactivated successfully.
Jan 26 16:38:14 compute-0 sudo[380650]: pam_unix(sudo:session): session closed for user root
Jan 26 16:38:14 compute-0 sudo[380780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:38:14 compute-0 sudo[380780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:38:14 compute-0 sudo[380780]: pam_unix(sudo:session): session closed for user root
Jan 26 16:38:14 compute-0 sudo[380805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:38:14 compute-0 sudo[380805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:38:15 compute-0 podman[380840]: 2026-01-26 16:38:15.157227754 +0000 UTC m=+0.039904072 container create 715eaf2b91fd2a62df2864bcd92613c3978a3efb7f0d7dfb35ac6484fb6d1cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:38:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 26 16:38:15 compute-0 systemd[1]: Started libpod-conmon-715eaf2b91fd2a62df2864bcd92613c3978a3efb7f0d7dfb35ac6484fb6d1cfa.scope.
Jan 26 16:38:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:38:15 compute-0 podman[380840]: 2026-01-26 16:38:15.1391771 +0000 UTC m=+0.021853388 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:38:15 compute-0 podman[380840]: 2026-01-26 16:38:15.244178424 +0000 UTC m=+0.126854712 container init 715eaf2b91fd2a62df2864bcd92613c3978a3efb7f0d7dfb35ac6484fb6d1cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:38:15 compute-0 podman[380840]: 2026-01-26 16:38:15.250711005 +0000 UTC m=+0.133387293 container start 715eaf2b91fd2a62df2864bcd92613c3978a3efb7f0d7dfb35ac6484fb6d1cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:38:15 compute-0 podman[380840]: 2026-01-26 16:38:15.25418754 +0000 UTC m=+0.136863808 container attach 715eaf2b91fd2a62df2864bcd92613c3978a3efb7f0d7dfb35ac6484fb6d1cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:38:15 compute-0 stoic_newton[380857]: 167 167
Jan 26 16:38:15 compute-0 systemd[1]: libpod-715eaf2b91fd2a62df2864bcd92613c3978a3efb7f0d7dfb35ac6484fb6d1cfa.scope: Deactivated successfully.
Jan 26 16:38:15 compute-0 podman[380840]: 2026-01-26 16:38:15.259317757 +0000 UTC m=+0.141994045 container died 715eaf2b91fd2a62df2864bcd92613c3978a3efb7f0d7dfb35ac6484fb6d1cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-523f2cea160d4bcfb465e341c79f49b1f53b9cbc5cf0c2554510e028d2c75110-merged.mount: Deactivated successfully.
Jan 26 16:38:15 compute-0 podman[380840]: 2026-01-26 16:38:15.301850453 +0000 UTC m=+0.184526711 container remove 715eaf2b91fd2a62df2864bcd92613c3978a3efb7f0d7dfb35ac6484fb6d1cfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_newton, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 16:38:15 compute-0 systemd[1]: libpod-conmon-715eaf2b91fd2a62df2864bcd92613c3978a3efb7f0d7dfb35ac6484fb6d1cfa.scope: Deactivated successfully.
Jan 26 16:38:15 compute-0 podman[380881]: 2026-01-26 16:38:15.490640489 +0000 UTC m=+0.038538569 container create ed4148adc40211daf1c8af9e349f4c3bd9913149a199f6909d8e835bba0ce162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_joliot, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:38:15 compute-0 systemd[1]: Started libpod-conmon-ed4148adc40211daf1c8af9e349f4c3bd9913149a199f6909d8e835bba0ce162.scope.
Jan 26 16:38:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4150c5fb2ea8183a1d179ac026aa6cf814c580df5e0f3db9c37288d86c9a42d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4150c5fb2ea8183a1d179ac026aa6cf814c580df5e0f3db9c37288d86c9a42d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4150c5fb2ea8183a1d179ac026aa6cf814c580df5e0f3db9c37288d86c9a42d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4150c5fb2ea8183a1d179ac026aa6cf814c580df5e0f3db9c37288d86c9a42d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:15 compute-0 podman[380881]: 2026-01-26 16:38:15.567011119 +0000 UTC m=+0.114909199 container init ed4148adc40211daf1c8af9e349f4c3bd9913149a199f6909d8e835bba0ce162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_joliot, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:38:15 compute-0 podman[380881]: 2026-01-26 16:38:15.474163694 +0000 UTC m=+0.022061804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:38:15 compute-0 podman[380881]: 2026-01-26 16:38:15.575237521 +0000 UTC m=+0.123135601 container start ed4148adc40211daf1c8af9e349f4c3bd9913149a199f6909d8e835bba0ce162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_joliot, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 16:38:15 compute-0 podman[380881]: 2026-01-26 16:38:15.579546987 +0000 UTC m=+0.127445137 container attach ed4148adc40211daf1c8af9e349f4c3bd9913149a199f6909d8e835bba0ce162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:38:15 compute-0 youthful_joliot[380899]: {
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:     "0": [
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:         {
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "devices": [
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "/dev/loop3"
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             ],
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_name": "ceph_lv0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_size": "21470642176",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "name": "ceph_lv0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "tags": {
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.cluster_name": "ceph",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.crush_device_class": "",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.encrypted": "0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.objectstore": "bluestore",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.osd_id": "0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.type": "block",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.vdo": "0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.with_tpm": "0"
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             },
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "type": "block",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "vg_name": "ceph_vg0"
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:         }
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:     ],
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:     "1": [
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:         {
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "devices": [
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "/dev/loop4"
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             ],
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_name": "ceph_lv1",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_size": "21470642176",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "name": "ceph_lv1",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "tags": {
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.cluster_name": "ceph",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.crush_device_class": "",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.encrypted": "0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.objectstore": "bluestore",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.osd_id": "1",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.type": "block",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.vdo": "0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.with_tpm": "0"
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             },
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "type": "block",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "vg_name": "ceph_vg1"
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:         }
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:     ],
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:     "2": [
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:         {
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "devices": [
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "/dev/loop5"
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             ],
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_name": "ceph_lv2",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_size": "21470642176",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "name": "ceph_lv2",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "tags": {
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.cluster_name": "ceph",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.crush_device_class": "",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.encrypted": "0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.objectstore": "bluestore",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.osd_id": "2",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.type": "block",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.vdo": "0",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:                 "ceph.with_tpm": "0"
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             },
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "type": "block",
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:             "vg_name": "ceph_vg2"
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:         }
Jan 26 16:38:15 compute-0 youthful_joliot[380899]:     ]
Jan 26 16:38:15 compute-0 youthful_joliot[380899]: }
Jan 26 16:38:15 compute-0 systemd[1]: libpod-ed4148adc40211daf1c8af9e349f4c3bd9913149a199f6909d8e835bba0ce162.scope: Deactivated successfully.
Jan 26 16:38:15 compute-0 podman[380881]: 2026-01-26 16:38:15.868549288 +0000 UTC m=+0.416447378 container died ed4148adc40211daf1c8af9e349f4c3bd9913149a199f6909d8e835bba0ce162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:38:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-4150c5fb2ea8183a1d179ac026aa6cf814c580df5e0f3db9c37288d86c9a42d1-merged.mount: Deactivated successfully.
Jan 26 16:38:15 compute-0 podman[380881]: 2026-01-26 16:38:15.918851936 +0000 UTC m=+0.466750016 container remove ed4148adc40211daf1c8af9e349f4c3bd9913149a199f6909d8e835bba0ce162 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_joliot, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:38:15 compute-0 systemd[1]: libpod-conmon-ed4148adc40211daf1c8af9e349f4c3bd9913149a199f6909d8e835bba0ce162.scope: Deactivated successfully.
Jan 26 16:38:15 compute-0 sudo[380805]: pam_unix(sudo:session): session closed for user root
Jan 26 16:38:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:16 compute-0 sudo[380922]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:38:16 compute-0 sudo[380922]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:38:16 compute-0 sudo[380922]: pam_unix(sudo:session): session closed for user root
Jan 26 16:38:16 compute-0 sudo[380947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:38:16 compute-0 sudo[380947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:38:16 compute-0 ceph-mon[75140]: pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Jan 26 16:38:16 compute-0 podman[380984]: 2026-01-26 16:38:16.416936463 +0000 UTC m=+0.049548340 container create 0036efd0755f056a11dceea4253af99fc1f3ce1ca48ac51a6aacec2ff4991dec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cray, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:38:16 compute-0 systemd[1]: Started libpod-conmon-0036efd0755f056a11dceea4253af99fc1f3ce1ca48ac51a6aacec2ff4991dec.scope.
Jan 26 16:38:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:38:16 compute-0 podman[380984]: 2026-01-26 16:38:16.489828836 +0000 UTC m=+0.122440723 container init 0036efd0755f056a11dceea4253af99fc1f3ce1ca48ac51a6aacec2ff4991dec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:38:16 compute-0 podman[380984]: 2026-01-26 16:38:16.396649393 +0000 UTC m=+0.029261320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:38:16 compute-0 podman[380984]: 2026-01-26 16:38:16.496222564 +0000 UTC m=+0.128834441 container start 0036efd0755f056a11dceea4253af99fc1f3ce1ca48ac51a6aacec2ff4991dec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cray, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 16:38:16 compute-0 vibrant_cray[381000]: 167 167
Jan 26 16:38:16 compute-0 systemd[1]: libpod-0036efd0755f056a11dceea4253af99fc1f3ce1ca48ac51a6aacec2ff4991dec.scope: Deactivated successfully.
Jan 26 16:38:16 compute-0 podman[380984]: 2026-01-26 16:38:16.501401621 +0000 UTC m=+0.134013518 container attach 0036efd0755f056a11dceea4253af99fc1f3ce1ca48ac51a6aacec2ff4991dec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:38:16 compute-0 podman[380984]: 2026-01-26 16:38:16.50179155 +0000 UTC m=+0.134403437 container died 0036efd0755f056a11dceea4253af99fc1f3ce1ca48ac51a6aacec2ff4991dec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:38:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-1196c9b8ac07057d88b1191585471b705402fafca8b87d208f4b9bf2b8aad1de-merged.mount: Deactivated successfully.
Jan 26 16:38:16 compute-0 podman[380984]: 2026-01-26 16:38:16.536348431 +0000 UTC m=+0.168960308 container remove 0036efd0755f056a11dceea4253af99fc1f3ce1ca48ac51a6aacec2ff4991dec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cray, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:38:16 compute-0 systemd[1]: libpod-conmon-0036efd0755f056a11dceea4253af99fc1f3ce1ca48ac51a6aacec2ff4991dec.scope: Deactivated successfully.
Jan 26 16:38:16 compute-0 podman[381024]: 2026-01-26 16:38:16.731945354 +0000 UTC m=+0.057838003 container create 78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 16:38:16 compute-0 systemd[1]: Started libpod-conmon-78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54.scope.
Jan 26 16:38:16 compute-0 podman[381024]: 2026-01-26 16:38:16.713585763 +0000 UTC m=+0.039478422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:38:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a7d5956b1bbe1f82471898f0204f30d9bfa5fa7c2441c0ee99359553982f70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a7d5956b1bbe1f82471898f0204f30d9bfa5fa7c2441c0ee99359553982f70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a7d5956b1bbe1f82471898f0204f30d9bfa5fa7c2441c0ee99359553982f70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6a7d5956b1bbe1f82471898f0204f30d9bfa5fa7c2441c0ee99359553982f70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:38:16 compute-0 podman[381024]: 2026-01-26 16:38:16.83703536 +0000 UTC m=+0.162928099 container init 78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:38:16 compute-0 podman[381024]: 2026-01-26 16:38:16.8504293 +0000 UTC m=+0.176321979 container start 78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:38:16 compute-0 podman[381024]: 2026-01-26 16:38:16.854626873 +0000 UTC m=+0.180519562 container attach 78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:38:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 511 B/s wr, 5 op/s
Jan 26 16:38:17 compute-0 nova_compute[239965]: 2026-01-26 16:38:17.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:17 compute-0 nova_compute[239965]: 2026-01-26 16:38:17.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:17 compute-0 nova_compute[239965]: 2026-01-26 16:38:17.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:17 compute-0 nova_compute[239965]: 2026-01-26 16:38:17.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:17 compute-0 nova_compute[239965]: 2026-01-26 16:38:17.535 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:38:17 compute-0 nova_compute[239965]: 2026-01-26 16:38:17.535 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:17 compute-0 lvm[381122]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:38:17 compute-0 lvm[381120]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:38:17 compute-0 lvm[381120]: VG ceph_vg0 finished
Jan 26 16:38:17 compute-0 lvm[381122]: VG ceph_vg1 finished
Jan 26 16:38:17 compute-0 lvm[381124]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:38:17 compute-0 lvm[381124]: VG ceph_vg2 finished
Jan 26 16:38:17 compute-0 hardcore_boyd[381041]: {}
Jan 26 16:38:17 compute-0 systemd[1]: libpod-78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54.scope: Deactivated successfully.
Jan 26 16:38:17 compute-0 podman[381024]: 2026-01-26 16:38:17.720366677 +0000 UTC m=+1.046259326 container died 78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:38:17 compute-0 systemd[1]: libpod-78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54.scope: Consumed 1.373s CPU time.
Jan 26 16:38:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6a7d5956b1bbe1f82471898f0204f30d9bfa5fa7c2441c0ee99359553982f70-merged.mount: Deactivated successfully.
Jan 26 16:38:17 compute-0 podman[381024]: 2026-01-26 16:38:17.764640206 +0000 UTC m=+1.090532845 container remove 78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:38:17 compute-0 systemd[1]: libpod-conmon-78243b87521b6b56ed94f91f519bd861eedf892231be012e9edce4a33acc3f54.scope: Deactivated successfully.
Jan 26 16:38:17 compute-0 sudo[380947]: pam_unix(sudo:session): session closed for user root
Jan 26 16:38:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:38:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:38:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:38:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:38:17 compute-0 sudo[381157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:38:17 compute-0 sudo[381157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:38:17 compute-0 sudo[381157]: pam_unix(sudo:session): session closed for user root
Jan 26 16:38:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:38:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1686924470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.088 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.247 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.248 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3494MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.248 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.249 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.299 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.322 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.323 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.772 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.793 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.794 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.808 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:38:18 compute-0 ceph-mon[75140]: pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 511 B/s wr, 5 op/s
Jan 26 16:38:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:38:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:38:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1686924470' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.838 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:38:18 compute-0 nova_compute[239965]: 2026-01-26 16:38:18.858 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:38:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:38:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/484919179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:38:19 compute-0 nova_compute[239965]: 2026-01-26 16:38:19.413 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:38:19 compute-0 nova_compute[239965]: 2026-01-26 16:38:19.418 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:38:19 compute-0 nova_compute[239965]: 2026-01-26 16:38:19.443 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:38:19 compute-0 nova_compute[239965]: 2026-01-26 16:38:19.445 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:38:19 compute-0 nova_compute[239965]: 2026-01-26 16:38:19.445 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:19 compute-0 nova_compute[239965]: 2026-01-26 16:38:19.676 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/484919179' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:38:20 compute-0 ceph-mon[75140]: pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:21 compute-0 ceph-mon[75140]: pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:23 compute-0 nova_compute[239965]: 2026-01-26 16:38:23.301 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:24 compute-0 ceph-mon[75140]: pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:24 compute-0 nova_compute[239965]: 2026-01-26 16:38:24.677 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:26 compute-0 ceph-mon[75140]: pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:27 compute-0 nova_compute[239965]: 2026-01-26 16:38:27.438 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:38:28 compute-0 ceph-mon[75140]: pgmap v2661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:28 compute-0 nova_compute[239965]: 2026-01-26 16:38:28.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:28 compute-0 podman[381206]: 2026-01-26 16:38:28.405854908 +0000 UTC m=+0.077441658 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 26 16:38:28 compute-0 podman[381207]: 2026-01-26 16:38:28.447376429 +0000 UTC m=+0.118703062 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller)
Jan 26 16:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:38:28
Jan 26 16:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'vms']
Jan 26 16:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:38:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:29 compute-0 nova_compute[239965]: 2026-01-26 16:38:29.680 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:30 compute-0 ceph-mon[75140]: pgmap v2662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:38:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:38:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:32 compute-0 ceph-mon[75140]: pgmap v2663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:33 compute-0 nova_compute[239965]: 2026-01-26 16:38:33.305 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:34 compute-0 ceph-mon[75140]: pgmap v2664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:34 compute-0 nova_compute[239965]: 2026-01-26 16:38:34.682 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:36 compute-0 ceph-mon[75140]: pgmap v2665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:38 compute-0 nova_compute[239965]: 2026-01-26 16:38:38.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:38 compute-0 ceph-mon[75140]: pgmap v2666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:39 compute-0 nova_compute[239965]: 2026-01-26 16:38:39.684 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:40 compute-0 ceph-mon[75140]: pgmap v2667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:42 compute-0 ceph-mon[75140]: pgmap v2668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:43 compute-0 nova_compute[239965]: 2026-01-26 16:38:43.308 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:44 compute-0 ceph-mon[75140]: pgmap v2669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:44 compute-0 nova_compute[239965]: 2026-01-26 16:38:44.687 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:46 compute-0 ceph-mon[75140]: pgmap v2670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:48 compute-0 nova_compute[239965]: 2026-01-26 16:38:48.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:48 compute-0 ceph-mon[75140]: pgmap v2671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:38:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/314374931' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:38:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:38:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/314374931' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:38:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:38:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/314374931' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:38:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/314374931' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:38:49 compute-0 nova_compute[239965]: 2026-01-26 16:38:49.689 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:50 compute-0 ceph-mon[75140]: pgmap v2672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:52 compute-0 ceph-mon[75140]: pgmap v2673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:53 compute-0 nova_compute[239965]: 2026-01-26 16:38:53.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:54 compute-0 nova_compute[239965]: 2026-01-26 16:38:54.692 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:54 compute-0 ceph-mon[75140]: pgmap v2674: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:38:56 compute-0 ceph-mon[75140]: pgmap v2675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:58 compute-0 nova_compute[239965]: 2026-01-26 16:38:58.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:38:58 compute-0 ceph-mon[75140]: pgmap v2676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:38:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:38:59.260 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:38:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:38:59.261 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:38:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:38:59.261 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:38:59 compute-0 podman[381252]: 2026-01-26 16:38:59.388643875 +0000 UTC m=+0.064060058 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 16:38:59 compute-0 podman[381253]: 2026-01-26 16:38:59.404788142 +0000 UTC m=+0.083514046 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 26 16:38:59 compute-0 nova_compute[239965]: 2026-01-26 16:38:59.695 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:39:00 compute-0 ceph-mon[75140]: pgmap v2677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:01 compute-0 nova_compute[239965]: 2026-01-26 16:39:01.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:02 compute-0 ceph-mon[75140]: pgmap v2678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:03 compute-0 nova_compute[239965]: 2026-01-26 16:39:03.316 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:03 compute-0 nova_compute[239965]: 2026-01-26 16:39:03.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:04 compute-0 nova_compute[239965]: 2026-01-26 16:39:04.697 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:04 compute-0 ceph-mon[75140]: pgmap v2679: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:06 compute-0 ceph-mon[75140]: pgmap v2680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:08 compute-0 nova_compute[239965]: 2026-01-26 16:39:08.317 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:08 compute-0 ceph-mon[75140]: pgmap v2681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:09 compute-0 nova_compute[239965]: 2026-01-26 16:39:09.699 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:10 compute-0 ceph-mon[75140]: pgmap v2682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:10 compute-0 nova_compute[239965]: 2026-01-26 16:39:10.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:11 compute-0 nova_compute[239965]: 2026-01-26 16:39:11.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:11 compute-0 nova_compute[239965]: 2026-01-26 16:39:11.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:11 compute-0 nova_compute[239965]: 2026-01-26 16:39:11.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:39:12 compute-0 ceph-mon[75140]: pgmap v2683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:12 compute-0 nova_compute[239965]: 2026-01-26 16:39:12.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:12 compute-0 nova_compute[239965]: 2026-01-26 16:39:12.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:39:12 compute-0 nova_compute[239965]: 2026-01-26 16:39:12.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:39:12 compute-0 nova_compute[239965]: 2026-01-26 16:39:12.527 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:39:12 compute-0 nova_compute[239965]: 2026-01-26 16:39:12.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:13 compute-0 nova_compute[239965]: 2026-01-26 16:39:13.319 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:13 compute-0 nova_compute[239965]: 2026-01-26 16:39:13.519 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:14 compute-0 ceph-mon[75140]: pgmap v2684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:14 compute-0 nova_compute[239965]: 2026-01-26 16:39:14.701 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:16 compute-0 nova_compute[239965]: 2026-01-26 16:39:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:16 compute-0 ceph-mon[75140]: pgmap v2685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:17 compute-0 sudo[381296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:39:17 compute-0 sudo[381296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:39:17 compute-0 sudo[381296]: pam_unix(sudo:session): session closed for user root
Jan 26 16:39:18 compute-0 sudo[381321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:39:18 compute-0 sudo[381321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:39:18 compute-0 nova_compute[239965]: 2026-01-26 16:39:18.321 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:18 compute-0 sudo[381321]: pam_unix(sudo:session): session closed for user root
Jan 26 16:39:18 compute-0 ceph-mon[75140]: pgmap v2686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:39:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:39:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:39:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:39:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:39:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:39:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:39:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:39:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:39:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:39:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:39:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:39:18 compute-0 sudo[381378]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:39:18 compute-0 sudo[381378]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:39:18 compute-0 sudo[381378]: pam_unix(sudo:session): session closed for user root
Jan 26 16:39:18 compute-0 sudo[381403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:39:18 compute-0 sudo[381403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:39:19 compute-0 podman[381440]: 2026-01-26 16:39:19.065290735 +0000 UTC m=+0.058608123 container create b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:39:19 compute-0 systemd[1]: Started libpod-conmon-b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901.scope.
Jan 26 16:39:19 compute-0 podman[381440]: 2026-01-26 16:39:19.026994233 +0000 UTC m=+0.020311611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:39:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:39:19 compute-0 podman[381440]: 2026-01-26 16:39:19.172439722 +0000 UTC m=+0.165757160 container init b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:39:19 compute-0 podman[381440]: 2026-01-26 16:39:19.182300695 +0000 UTC m=+0.175618083 container start b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:39:19 compute-0 podman[381440]: 2026-01-26 16:39:19.188253511 +0000 UTC m=+0.181570879 container attach b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:39:19 compute-0 relaxed_ellis[381457]: 167 167
Jan 26 16:39:19 compute-0 systemd[1]: libpod-b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901.scope: Deactivated successfully.
Jan 26 16:39:19 compute-0 conmon[381457]: conmon b55c4381ae92bf08f156 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901.scope/container/memory.events
Jan 26 16:39:19 compute-0 podman[381440]: 2026-01-26 16:39:19.193735156 +0000 UTC m=+0.187052544 container died b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:39:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-03594b1b9db8f73221e8d1ca18ccfb769f2cfcb745d4effaba6014d9b2b29c85-merged.mount: Deactivated successfully.
Jan 26 16:39:19 compute-0 podman[381440]: 2026-01-26 16:39:19.270020593 +0000 UTC m=+0.263337941 container remove b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:39:19 compute-0 systemd[1]: libpod-conmon-b55c4381ae92bf08f156b0fdb3b70ce760a5e6c463d756a4583bf47726de4901.scope: Deactivated successfully.
Jan 26 16:39:19 compute-0 podman[381480]: 2026-01-26 16:39:19.443131833 +0000 UTC m=+0.042490906 container create 7616ec81ae1b6c775bbe82264c9d7f544cd775e37282456b0b1ea83d0db94ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:39:19 compute-0 systemd[1]: Started libpod-conmon-7616ec81ae1b6c775bbe82264c9d7f544cd775e37282456b0b1ea83d0db94ca7.scope.
Jan 26 16:39:19 compute-0 nova_compute[239965]: 2026-01-26 16:39:19.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3460b2ffdf20a83671b47efd3038e8ee1a261a8a6986a3d4ada1ad7920288e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3460b2ffdf20a83671b47efd3038e8ee1a261a8a6986a3d4ada1ad7920288e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3460b2ffdf20a83671b47efd3038e8ee1a261a8a6986a3d4ada1ad7920288e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3460b2ffdf20a83671b47efd3038e8ee1a261a8a6986a3d4ada1ad7920288e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:19 compute-0 podman[381480]: 2026-01-26 16:39:19.425740366 +0000 UTC m=+0.025099469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:39:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3460b2ffdf20a83671b47efd3038e8ee1a261a8a6986a3d4ada1ad7920288e1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:19 compute-0 podman[381480]: 2026-01-26 16:39:19.531049807 +0000 UTC m=+0.130408900 container init 7616ec81ae1b6c775bbe82264c9d7f544cd775e37282456b0b1ea83d0db94ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:39:19 compute-0 nova_compute[239965]: 2026-01-26 16:39:19.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:39:19 compute-0 nova_compute[239965]: 2026-01-26 16:39:19.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:39:19 compute-0 nova_compute[239965]: 2026-01-26 16:39:19.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:39:19 compute-0 nova_compute[239965]: 2026-01-26 16:39:19.535 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:39:19 compute-0 nova_compute[239965]: 2026-01-26 16:39:19.536 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:19 compute-0 podman[381480]: 2026-01-26 16:39:19.538789627 +0000 UTC m=+0.138148700 container start 7616ec81ae1b6c775bbe82264c9d7f544cd775e37282456b0b1ea83d0db94ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:39:19 compute-0 podman[381480]: 2026-01-26 16:39:19.542666902 +0000 UTC m=+0.142026015 container attach 7616ec81ae1b6c775bbe82264c9d7f544cd775e37282456b0b1ea83d0db94ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:39:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:39:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:39:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:39:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:39:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:39:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:39:19 compute-0 nova_compute[239965]: 2026-01-26 16:39:19.703 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:20 compute-0 quirky_carson[381496]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:39:20 compute-0 quirky_carson[381496]: --> All data devices are unavailable
Jan 26 16:39:20 compute-0 systemd[1]: libpod-7616ec81ae1b6c775bbe82264c9d7f544cd775e37282456b0b1ea83d0db94ca7.scope: Deactivated successfully.
Jan 26 16:39:20 compute-0 podman[381480]: 2026-01-26 16:39:20.05909081 +0000 UTC m=+0.658449903 container died 7616ec81ae1b6c775bbe82264c9d7f544cd775e37282456b0b1ea83d0db94ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3460b2ffdf20a83671b47efd3038e8ee1a261a8a6986a3d4ada1ad7920288e1-merged.mount: Deactivated successfully.
Jan 26 16:39:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:39:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4187917400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:39:20 compute-0 podman[381480]: 2026-01-26 16:39:20.135588453 +0000 UTC m=+0.734947536 container remove 7616ec81ae1b6c775bbe82264c9d7f544cd775e37282456b0b1ea83d0db94ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_carson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.144 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:20 compute-0 systemd[1]: libpod-conmon-7616ec81ae1b6c775bbe82264c9d7f544cd775e37282456b0b1ea83d0db94ca7.scope: Deactivated successfully.
Jan 26 16:39:20 compute-0 sudo[381403]: pam_unix(sudo:session): session closed for user root
Jan 26 16:39:20 compute-0 sudo[381550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:39:20 compute-0 sudo[381550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:39:20 compute-0 sudo[381550]: pam_unix(sudo:session): session closed for user root
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.314 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.315 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3548MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.315 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.315 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:39:20 compute-0 sudo[381575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:39:20 compute-0 sudo[381575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.372 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.373 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.388 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:39:20 compute-0 podman[381631]: 2026-01-26 16:39:20.612849466 +0000 UTC m=+0.040510186 container create 78febe2acc564b27a700208dae124c75bc5b82c92ce7fcbfe1833b216cbe9929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:39:20 compute-0 ceph-mon[75140]: pgmap v2687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:20 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4187917400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:39:20 compute-0 systemd[1]: Started libpod-conmon-78febe2acc564b27a700208dae124c75bc5b82c92ce7fcbfe1833b216cbe9929.scope.
Jan 26 16:39:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:39:20 compute-0 podman[381631]: 2026-01-26 16:39:20.594429103 +0000 UTC m=+0.022089863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:39:20 compute-0 podman[381631]: 2026-01-26 16:39:20.723056699 +0000 UTC m=+0.150717449 container init 78febe2acc564b27a700208dae124c75bc5b82c92ce7fcbfe1833b216cbe9929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:39:20 compute-0 podman[381631]: 2026-01-26 16:39:20.729425506 +0000 UTC m=+0.157086226 container start 78febe2acc564b27a700208dae124c75bc5b82c92ce7fcbfe1833b216cbe9929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:39:20 compute-0 podman[381631]: 2026-01-26 16:39:20.733007063 +0000 UTC m=+0.160667773 container attach 78febe2acc564b27a700208dae124c75bc5b82c92ce7fcbfe1833b216cbe9929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:39:20 compute-0 laughing_dhawan[381647]: 167 167
Jan 26 16:39:20 compute-0 podman[381631]: 2026-01-26 16:39:20.734532441 +0000 UTC m=+0.162193151 container died 78febe2acc564b27a700208dae124c75bc5b82c92ce7fcbfe1833b216cbe9929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:39:20 compute-0 systemd[1]: libpod-78febe2acc564b27a700208dae124c75bc5b82c92ce7fcbfe1833b216cbe9929.scope: Deactivated successfully.
Jan 26 16:39:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-59f93887ae100f346e8f3e90b40296fa10953b5028474e7ac512c168cfe9aefe-merged.mount: Deactivated successfully.
Jan 26 16:39:20 compute-0 podman[381631]: 2026-01-26 16:39:20.772107356 +0000 UTC m=+0.199768076 container remove 78febe2acc564b27a700208dae124c75bc5b82c92ce7fcbfe1833b216cbe9929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_dhawan, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:39:20 compute-0 systemd[1]: libpod-conmon-78febe2acc564b27a700208dae124c75bc5b82c92ce7fcbfe1833b216cbe9929.scope: Deactivated successfully.
Jan 26 16:39:20 compute-0 podman[381672]: 2026-01-26 16:39:20.941676918 +0000 UTC m=+0.046205798 container create 2652929fa2f5ed3f3d9675b883192e5ccb08c96a81dc647bb2f5d1ffa26a537b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_maxwell, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:39:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:39:20 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479221728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:39:20 compute-0 systemd[1]: Started libpod-conmon-2652929fa2f5ed3f3d9675b883192e5ccb08c96a81dc647bb2f5d1ffa26a537b.scope.
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.978 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.985 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:39:20 compute-0 nova_compute[239965]: 2026-01-26 16:39:20.998 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:39:21 compute-0 nova_compute[239965]: 2026-01-26 16:39:21.000 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:39:21 compute-0 nova_compute[239965]: 2026-01-26 16:39:21.000 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:39:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd99ea1c5093fe8291ffbcfcad579a2da4465b7825ff44c298dc8e626a03225b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd99ea1c5093fe8291ffbcfcad579a2da4465b7825ff44c298dc8e626a03225b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd99ea1c5093fe8291ffbcfcad579a2da4465b7825ff44c298dc8e626a03225b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd99ea1c5093fe8291ffbcfcad579a2da4465b7825ff44c298dc8e626a03225b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:21 compute-0 podman[381672]: 2026-01-26 16:39:21.020441287 +0000 UTC m=+0.124970157 container init 2652929fa2f5ed3f3d9675b883192e5ccb08c96a81dc647bb2f5d1ffa26a537b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:39:21 compute-0 podman[381672]: 2026-01-26 16:39:20.922750263 +0000 UTC m=+0.027279133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:39:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:21 compute-0 podman[381672]: 2026-01-26 16:39:21.031741085 +0000 UTC m=+0.136269935 container start 2652929fa2f5ed3f3d9675b883192e5ccb08c96a81dc647bb2f5d1ffa26a537b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:39:21 compute-0 podman[381672]: 2026-01-26 16:39:21.035915168 +0000 UTC m=+0.140444068 container attach 2652929fa2f5ed3f3d9675b883192e5ccb08c96a81dc647bb2f5d1ffa26a537b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_maxwell, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 16:39:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]: {
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:     "0": [
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:         {
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "devices": [
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "/dev/loop3"
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             ],
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_name": "ceph_lv0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_size": "21470642176",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "name": "ceph_lv0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "tags": {
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.cluster_name": "ceph",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.crush_device_class": "",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.encrypted": "0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.objectstore": "bluestore",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.osd_id": "0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.type": "block",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.vdo": "0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.with_tpm": "0"
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             },
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "type": "block",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "vg_name": "ceph_vg0"
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:         }
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:     ],
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:     "1": [
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:         {
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "devices": [
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "/dev/loop4"
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             ],
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_name": "ceph_lv1",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_size": "21470642176",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "name": "ceph_lv1",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "tags": {
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.cluster_name": "ceph",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.crush_device_class": "",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.encrypted": "0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.objectstore": "bluestore",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.osd_id": "1",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.type": "block",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.vdo": "0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.with_tpm": "0"
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             },
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "type": "block",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "vg_name": "ceph_vg1"
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:         }
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:     ],
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:     "2": [
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:         {
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "devices": [
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "/dev/loop5"
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             ],
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_name": "ceph_lv2",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_size": "21470642176",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "name": "ceph_lv2",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "tags": {
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.cluster_name": "ceph",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.crush_device_class": "",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.encrypted": "0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.objectstore": "bluestore",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.osd_id": "2",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.type": "block",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.vdo": "0",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:                 "ceph.with_tpm": "0"
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             },
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "type": "block",
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:             "vg_name": "ceph_vg2"
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:         }
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]:     ]
Jan 26 16:39:21 compute-0 dazzling_maxwell[381690]: }
Jan 26 16:39:21 compute-0 systemd[1]: libpod-2652929fa2f5ed3f3d9675b883192e5ccb08c96a81dc647bb2f5d1ffa26a537b.scope: Deactivated successfully.
Jan 26 16:39:21 compute-0 podman[381672]: 2026-01-26 16:39:21.336969366 +0000 UTC m=+0.441544207 container died 2652929fa2f5ed3f3d9675b883192e5ccb08c96a81dc647bb2f5d1ffa26a537b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_maxwell, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd99ea1c5093fe8291ffbcfcad579a2da4465b7825ff44c298dc8e626a03225b-merged.mount: Deactivated successfully.
Jan 26 16:39:21 compute-0 podman[381672]: 2026-01-26 16:39:21.375022112 +0000 UTC m=+0.479550962 container remove 2652929fa2f5ed3f3d9675b883192e5ccb08c96a81dc647bb2f5d1ffa26a537b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_maxwell, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:39:21 compute-0 systemd[1]: libpod-conmon-2652929fa2f5ed3f3d9675b883192e5ccb08c96a81dc647bb2f5d1ffa26a537b.scope: Deactivated successfully.
Jan 26 16:39:21 compute-0 sudo[381575]: pam_unix(sudo:session): session closed for user root
Jan 26 16:39:21 compute-0 sudo[381710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:39:21 compute-0 sudo[381710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:39:21 compute-0 sudo[381710]: pam_unix(sudo:session): session closed for user root
Jan 26 16:39:21 compute-0 sudo[381735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:39:21 compute-0 sudo[381735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:39:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3479221728' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:39:21 compute-0 podman[381771]: 2026-01-26 16:39:21.810229242 +0000 UTC m=+0.046378722 container create bd9f2773d77bed066c3fc7d7c946323df92591658b27d590d7eca3564ab5dbc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mcnulty, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:39:21 compute-0 systemd[1]: Started libpod-conmon-bd9f2773d77bed066c3fc7d7c946323df92591658b27d590d7eca3564ab5dbc1.scope.
Jan 26 16:39:21 compute-0 podman[381771]: 2026-01-26 16:39:21.790506827 +0000 UTC m=+0.026656337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:39:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:39:21 compute-0 podman[381771]: 2026-01-26 16:39:21.90850292 +0000 UTC m=+0.144652490 container init bd9f2773d77bed066c3fc7d7c946323df92591658b27d590d7eca3564ab5dbc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:39:21 compute-0 podman[381771]: 2026-01-26 16:39:21.914060667 +0000 UTC m=+0.150210137 container start bd9f2773d77bed066c3fc7d7c946323df92591658b27d590d7eca3564ab5dbc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:39:21 compute-0 brave_mcnulty[381787]: 167 167
Jan 26 16:39:21 compute-0 systemd[1]: libpod-bd9f2773d77bed066c3fc7d7c946323df92591658b27d590d7eca3564ab5dbc1.scope: Deactivated successfully.
Jan 26 16:39:21 compute-0 podman[381771]: 2026-01-26 16:39:21.925917288 +0000 UTC m=+0.162066758 container attach bd9f2773d77bed066c3fc7d7c946323df92591658b27d590d7eca3564ab5dbc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 16:39:21 compute-0 podman[381771]: 2026-01-26 16:39:21.926290058 +0000 UTC m=+0.162439528 container died bd9f2773d77bed066c3fc7d7c946323df92591658b27d590d7eca3564ab5dbc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:39:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cf6d5ccfb26cf571ee886f69ea2a9af94789397dce3f3ad6fe15ee95669d046-merged.mount: Deactivated successfully.
Jan 26 16:39:21 compute-0 podman[381771]: 2026-01-26 16:39:21.964691612 +0000 UTC m=+0.200841102 container remove bd9f2773d77bed066c3fc7d7c946323df92591658b27d590d7eca3564ab5dbc1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_mcnulty, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:39:21 compute-0 systemd[1]: libpod-conmon-bd9f2773d77bed066c3fc7d7c946323df92591658b27d590d7eca3564ab5dbc1.scope: Deactivated successfully.
Jan 26 16:39:22 compute-0 podman[381810]: 2026-01-26 16:39:22.11658578 +0000 UTC m=+0.035127256 container create 8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:39:22 compute-0 systemd[1]: Started libpod-conmon-8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20.scope.
Jan 26 16:39:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17bfdefb218020e15f5fc14d8dfba3c3eee2e928ca2651ba25d9f0e41eeebfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17bfdefb218020e15f5fc14d8dfba3c3eee2e928ca2651ba25d9f0e41eeebfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17bfdefb218020e15f5fc14d8dfba3c3eee2e928ca2651ba25d9f0e41eeebfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b17bfdefb218020e15f5fc14d8dfba3c3eee2e928ca2651ba25d9f0e41eeebfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:39:22 compute-0 podman[381810]: 2026-01-26 16:39:22.102778161 +0000 UTC m=+0.021319647 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:39:22 compute-0 podman[381810]: 2026-01-26 16:39:22.200936816 +0000 UTC m=+0.119478312 container init 8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:39:22 compute-0 podman[381810]: 2026-01-26 16:39:22.20963522 +0000 UTC m=+0.128176696 container start 8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:39:22 compute-0 podman[381810]: 2026-01-26 16:39:22.213948976 +0000 UTC m=+0.132490452 container attach 8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:39:22 compute-0 ceph-mon[75140]: pgmap v2688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:22 compute-0 lvm[381904]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:39:22 compute-0 lvm[381905]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:39:22 compute-0 lvm[381905]: VG ceph_vg1 finished
Jan 26 16:39:22 compute-0 lvm[381904]: VG ceph_vg0 finished
Jan 26 16:39:22 compute-0 lvm[381907]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:39:22 compute-0 lvm[381907]: VG ceph_vg2 finished
Jan 26 16:39:22 compute-0 lucid_booth[381826]: {}
Jan 26 16:39:23 compute-0 systemd[1]: libpod-8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20.scope: Deactivated successfully.
Jan 26 16:39:23 compute-0 systemd[1]: libpod-8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20.scope: Consumed 1.358s CPU time.
Jan 26 16:39:23 compute-0 podman[381910]: 2026-01-26 16:39:23.067222342 +0000 UTC m=+0.027334993 container died 8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:39:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b17bfdefb218020e15f5fc14d8dfba3c3eee2e928ca2651ba25d9f0e41eeebfe-merged.mount: Deactivated successfully.
Jan 26 16:39:23 compute-0 podman[381910]: 2026-01-26 16:39:23.108207822 +0000 UTC m=+0.068320433 container remove 8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_booth, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 16:39:23 compute-0 systemd[1]: libpod-conmon-8a96ab87242dd6a7a39d535f3f1298e144827b390184aeb8fe89f683ae75ec20.scope: Deactivated successfully.
Jan 26 16:39:23 compute-0 sudo[381735]: pam_unix(sudo:session): session closed for user root
Jan 26 16:39:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:39:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:39:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:39:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:39:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:23 compute-0 sudo[381925]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:39:23 compute-0 sudo[381925]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:39:23 compute-0 sudo[381925]: pam_unix(sudo:session): session closed for user root
Jan 26 16:39:23 compute-0 nova_compute[239965]: 2026-01-26 16:39:23.324 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:39:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:39:24 compute-0 ceph-mon[75140]: pgmap v2689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:24 compute-0 nova_compute[239965]: 2026-01-26 16:39:24.706 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:26 compute-0 ceph-mon[75140]: pgmap v2690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:28 compute-0 nova_compute[239965]: 2026-01-26 16:39:28.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:28 compute-0 ceph-mon[75140]: pgmap v2691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:39:28
Jan 26 16:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'vms', 'volumes', 'images', '.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.meta']
Jan 26 16:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:39:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:29 compute-0 nova_compute[239965]: 2026-01-26 16:39:29.706 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:30 compute-0 ceph-mon[75140]: pgmap v2692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:30 compute-0 podman[381950]: 2026-01-26 16:39:30.36987041 +0000 UTC m=+0.056985433 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:39:30 compute-0 podman[381951]: 2026-01-26 16:39:30.397884289 +0000 UTC m=+0.084673604 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:39:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:39:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:31 compute-0 nova_compute[239965]: 2026-01-26 16:39:31.804 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:39:32 compute-0 ceph-mon[75140]: pgmap v2693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:33 compute-0 nova_compute[239965]: 2026-01-26 16:39:33.327 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:34 compute-0 ceph-mon[75140]: pgmap v2694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:34 compute-0 nova_compute[239965]: 2026-01-26 16:39:34.708 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:36 compute-0 ceph-mon[75140]: pgmap v2695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:38 compute-0 nova_compute[239965]: 2026-01-26 16:39:38.330 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:38 compute-0 ceph-mon[75140]: pgmap v2696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:39 compute-0 nova_compute[239965]: 2026-01-26 16:39:39.711 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:40 compute-0 ceph-mon[75140]: pgmap v2697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:42 compute-0 ceph-mon[75140]: pgmap v2698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:43 compute-0 nova_compute[239965]: 2026-01-26 16:39:43.373 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:44 compute-0 ceph-mon[75140]: pgmap v2699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:44 compute-0 nova_compute[239965]: 2026-01-26 16:39:44.713 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:46 compute-0 ceph-mon[75140]: pgmap v2700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:48 compute-0 nova_compute[239965]: 2026-01-26 16:39:48.376 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:48 compute-0 ceph-mon[75140]: pgmap v2701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:39:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/809001559' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:39:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:39:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/809001559' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:39:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:39:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/809001559' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:39:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/809001559' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:39:49 compute-0 nova_compute[239965]: 2026-01-26 16:39:49.715 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:50 compute-0 ceph-mon[75140]: pgmap v2702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:52 compute-0 ceph-mon[75140]: pgmap v2703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:53 compute-0 nova_compute[239965]: 2026-01-26 16:39:53.379 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:54 compute-0 ceph-mon[75140]: pgmap v2704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:54 compute-0 nova_compute[239965]: 2026-01-26 16:39:54.716 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:39:56 compute-0 ceph-mon[75140]: pgmap v2705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:58 compute-0 ceph-mon[75140]: pgmap v2706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:58 compute-0 nova_compute[239965]: 2026-01-26 16:39:58.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:39:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:39:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:39:59.262 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:39:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:39:59.263 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:39:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:39:59.263 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:39:59 compute-0 nova_compute[239965]: 2026-01-26 16:39:59.719 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:00 compute-0 ceph-mon[75140]: pgmap v2707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:40:00 compute-0 nova_compute[239965]: 2026-01-26 16:40:00.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:01 compute-0 podman[381994]: 2026-01-26 16:40:01.382394281 +0000 UTC m=+0.060777427 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:40:01 compute-0 podman[381995]: 2026-01-26 16:40:01.415498745 +0000 UTC m=+0.096095415 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 16:40:02 compute-0 ceph-mon[75140]: pgmap v2708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:02 compute-0 nova_compute[239965]: 2026-01-26 16:40:02.548 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:03 compute-0 nova_compute[239965]: 2026-01-26 16:40:03.382 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:04 compute-0 ceph-mon[75140]: pgmap v2709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:04 compute-0 nova_compute[239965]: 2026-01-26 16:40:04.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:04 compute-0 nova_compute[239965]: 2026-01-26 16:40:04.720 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:40:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Cumulative writes: 12K writes, 56K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1385 writes, 5982 keys, 1385 commit groups, 1.0 writes per commit group, ingest: 9.11 MB, 0.02 MB/s
                                           Interval WAL: 1385 writes, 1385 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    100.4      0.65              0.21        38    0.017       0      0       0.0       0.0
                                             L6      1/0    8.15 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.9    127.6    108.3      2.97              0.91        37    0.080    231K    20K       0.0       0.0
                                            Sum      1/0    8.15 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.9    104.6    106.9      3.62              1.12        75    0.048    231K    20K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   8.4     94.3     94.2      0.46              0.17         8    0.058     32K   1977       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    127.6    108.3      2.97              0.91        37    0.080    231K    20K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    101.0      0.65              0.21        37    0.018       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 4800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.064, interval 0.005
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.38 GB write, 0.08 MB/s write, 0.37 GB read, 0.08 MB/s read, 3.6 seconds
                                           Interval compaction: 0.04 GB write, 0.07 MB/s write, 0.04 GB read, 0.07 MB/s read, 0.5 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 42.84 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000496 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2684,41.12 MB,13.5257%) FilterBlock(76,651.48 KB,0.209281%) IndexBlock(76,1.09 MB,0.357497%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 16:40:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:06 compute-0 ceph-mon[75140]: pgmap v2710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:08 compute-0 nova_compute[239965]: 2026-01-26 16:40:08.385 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:08 compute-0 ceph-mon[75140]: pgmap v2711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:09 compute-0 nova_compute[239965]: 2026-01-26 16:40:09.722 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:10 compute-0 nova_compute[239965]: 2026-01-26 16:40:10.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:10 compute-0 ceph-mon[75140]: pgmap v2712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:11 compute-0 nova_compute[239965]: 2026-01-26 16:40:11.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:11 compute-0 nova_compute[239965]: 2026-01-26 16:40:11.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:40:12 compute-0 nova_compute[239965]: 2026-01-26 16:40:12.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:12 compute-0 nova_compute[239965]: 2026-01-26 16:40:12.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:40:12 compute-0 nova_compute[239965]: 2026-01-26 16:40:12.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:40:12 compute-0 nova_compute[239965]: 2026-01-26 16:40:12.530 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:40:12 compute-0 nova_compute[239965]: 2026-01-26 16:40:12.531 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:12 compute-0 ceph-mon[75140]: pgmap v2713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:13 compute-0 nova_compute[239965]: 2026-01-26 16:40:13.430 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:13 compute-0 nova_compute[239965]: 2026-01-26 16:40:13.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:13 compute-0 nova_compute[239965]: 2026-01-26 16:40:13.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:13 compute-0 ceph-mon[75140]: pgmap v2714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:14 compute-0 nova_compute[239965]: 2026-01-26 16:40:14.725 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:16 compute-0 ceph-mon[75140]: pgmap v2715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:18 compute-0 nova_compute[239965]: 2026-01-26 16:40:18.433 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:18 compute-0 ceph-mon[75140]: pgmap v2716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:19 compute-0 nova_compute[239965]: 2026-01-26 16:40:19.727 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:19 compute-0 ceph-mon[75140]: pgmap v2717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:21.288921) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445621289046, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2048, "num_deletes": 256, "total_data_size": 3518236, "memory_usage": 3571456, "flush_reason": "Manual Compaction"}
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Jan 26 16:40:21 compute-0 nova_compute[239965]: 2026-01-26 16:40:21.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:21 compute-0 nova_compute[239965]: 2026-01-26 16:40:21.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:40:21 compute-0 nova_compute[239965]: 2026-01-26 16:40:21.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:40:21 compute-0 nova_compute[239965]: 2026-01-26 16:40:21.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:40:21 compute-0 nova_compute[239965]: 2026-01-26 16:40:21.539 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:40:21 compute-0 nova_compute[239965]: 2026-01-26 16:40:21.539 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445621685234, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3444553, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54713, "largest_seqno": 56760, "table_properties": {"data_size": 3435122, "index_size": 5988, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18559, "raw_average_key_size": 19, "raw_value_size": 3416333, "raw_average_value_size": 3665, "num_data_blocks": 265, "num_entries": 932, "num_filter_entries": 932, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769445395, "oldest_key_time": 1769445395, "file_creation_time": 1769445621, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 396368 microseconds, and 13587 cpu microseconds.
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:21.685294) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3444553 bytes OK
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:21.685319) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:21.878386) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:21.878435) EVENT_LOG_v1 {"time_micros": 1769445621878425, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:21.878461) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3509668, prev total WAL file size 3545345, number of live WAL files 2.
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:21.879679) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323538' seq:72057594037927935, type:22 .. '6C6F676D0032353039' seq:0, type:0; will stop at (end)
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3363KB)], [128(8342KB)]
Jan 26 16:40:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445621879704, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11987293, "oldest_snapshot_seqno": -1}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:40:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294860106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:40:22 compute-0 nova_compute[239965]: 2026-01-26 16:40:22.118 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:22 compute-0 nova_compute[239965]: 2026-01-26 16:40:22.299 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:40:22 compute-0 nova_compute[239965]: 2026-01-26 16:40:22.301 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3574MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:40:22 compute-0 nova_compute[239965]: 2026-01-26 16:40:22.301 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:40:22 compute-0 nova_compute[239965]: 2026-01-26 16:40:22.301 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:40:22 compute-0 nova_compute[239965]: 2026-01-26 16:40:22.374 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:40:22 compute-0 nova_compute[239965]: 2026-01-26 16:40:22.375 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7919 keys, 11867949 bytes, temperature: kUnknown
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445622392263, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 11867949, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11813983, "index_size": 33053, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 205640, "raw_average_key_size": 25, "raw_value_size": 11671579, "raw_average_value_size": 1473, "num_data_blocks": 1299, "num_entries": 7919, "num_filter_entries": 7919, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769445621, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:40:22 compute-0 nova_compute[239965]: 2026-01-26 16:40:22.400 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.392525) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 11867949 bytes
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.405200) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 23.4 rd, 23.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 8.1 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(6.9) write-amplify(3.4) OK, records in: 8447, records dropped: 528 output_compression: NoCompression
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.405239) EVENT_LOG_v1 {"time_micros": 1769445622405223, "job": 78, "event": "compaction_finished", "compaction_time_micros": 512640, "compaction_time_cpu_micros": 27835, "output_level": 6, "num_output_files": 1, "total_output_size": 11867949, "num_input_records": 8447, "num_output_records": 7919, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445622406182, "job": 78, "event": "table_file_deletion", "file_number": 130}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445622407558, "job": 78, "event": "table_file_deletion", "file_number": 128}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:21.879529) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.407611) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.407618) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.407621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.407624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.407627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.408032) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445622408056, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 263, "num_deletes": 251, "total_data_size": 40808, "memory_usage": 47368, "flush_reason": "Manual Compaction"}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445622428532, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 40820, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56761, "largest_seqno": 57023, "table_properties": {"data_size": 38956, "index_size": 92, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4730, "raw_average_key_size": 18, "raw_value_size": 35449, "raw_average_value_size": 136, "num_data_blocks": 4, "num_entries": 260, "num_filter_entries": 260, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769445621, "oldest_key_time": 1769445621, "file_creation_time": 1769445622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 20546 microseconds, and 689 cpu microseconds.
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.428574) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 40820 bytes OK
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.428591) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.431086) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.431116) EVENT_LOG_v1 {"time_micros": 1769445622431106, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.431142) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 38764, prev total WAL file size 38764, number of live WAL files 2.
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.431653) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(39KB)], [131(11MB)]
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445622431689, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 11908769, "oldest_snapshot_seqno": -1}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7671 keys, 10188685 bytes, temperature: kUnknown
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445622734392, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10188685, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10137964, "index_size": 30448, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19205, "raw_key_size": 201178, "raw_average_key_size": 26, "raw_value_size": 10001415, "raw_average_value_size": 1303, "num_data_blocks": 1180, "num_entries": 7671, "num_filter_entries": 7671, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769445622, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.734683) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10188685 bytes
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.741813) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.3 rd, 33.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.0, 11.3 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(541.3) write-amplify(249.6) OK, records in: 8179, records dropped: 508 output_compression: NoCompression
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.741840) EVENT_LOG_v1 {"time_micros": 1769445622741827, "job": 80, "event": "compaction_finished", "compaction_time_micros": 302792, "compaction_time_cpu_micros": 35435, "output_level": 6, "num_output_files": 1, "total_output_size": 10188685, "num_input_records": 8179, "num_output_records": 7671, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445622742032, "job": 80, "event": "table_file_deletion", "file_number": 133}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445622744749, "job": 80, "event": "table_file_deletion", "file_number": 131}
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.431571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.744843) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.744849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.744850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.744852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:40:22.744854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:40:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:40:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4248349859' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:40:22 compute-0 ceph-mon[75140]: pgmap v2718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2294860106' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:40:22 compute-0 nova_compute[239965]: 2026-01-26 16:40:22.994 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:40:23 compute-0 nova_compute[239965]: 2026-01-26 16:40:23.001 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:40:23 compute-0 nova_compute[239965]: 2026-01-26 16:40:23.083 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:40:23 compute-0 nova_compute[239965]: 2026-01-26 16:40:23.086 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:40:23 compute-0 nova_compute[239965]: 2026-01-26 16:40:23.087 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:40:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:23 compute-0 sudo[382082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:40:23 compute-0 sudo[382082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:40:23 compute-0 sudo[382082]: pam_unix(sudo:session): session closed for user root
Jan 26 16:40:23 compute-0 sudo[382107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:40:23 compute-0 sudo[382107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:40:23 compute-0 nova_compute[239965]: 2026-01-26 16:40:23.436 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:24 compute-0 sudo[382107]: pam_unix(sudo:session): session closed for user root
Jan 26 16:40:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:40:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:40:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:40:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:40:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4248349859' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:40:24 compute-0 ceph-mon[75140]: pgmap v2719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:40:24 compute-0 sshd-session[382147]: Invalid user sol from 45.148.10.240 port 36700
Jan 26 16:40:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:40:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:40:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:40:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:40:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:40:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:40:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:40:24 compute-0 sshd-session[382147]: Connection closed by invalid user sol 45.148.10.240 port 36700 [preauth]
Jan 26 16:40:24 compute-0 sudo[382166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:40:24 compute-0 sudo[382166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:40:24 compute-0 sudo[382166]: pam_unix(sudo:session): session closed for user root
Jan 26 16:40:24 compute-0 sudo[382191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:40:24 compute-0 sudo[382191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:40:24 compute-0 nova_compute[239965]: 2026-01-26 16:40:24.762 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:25 compute-0 podman[382228]: 2026-01-26 16:40:24.963317819 +0000 UTC m=+0.028021271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:40:25 compute-0 podman[382228]: 2026-01-26 16:40:25.199338477 +0000 UTC m=+0.264041909 container create 50c755e9b788ef2c90c6bc1307efa7e33f140c078e7ab7fff78cbda0f39162d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_meitner, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:40:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:25 compute-0 nova_compute[239965]: 2026-01-26 16:40:25.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:25 compute-0 nova_compute[239965]: 2026-01-26 16:40:25.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:40:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:40:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:40:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:40:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:40:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:40:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:40:25 compute-0 systemd[1]: Started libpod-conmon-50c755e9b788ef2c90c6bc1307efa7e33f140c078e7ab7fff78cbda0f39162d3.scope.
Jan 26 16:40:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:40:26 compute-0 podman[382228]: 2026-01-26 16:40:26.102580134 +0000 UTC m=+1.167283676 container init 50c755e9b788ef2c90c6bc1307efa7e33f140c078e7ab7fff78cbda0f39162d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:40:26 compute-0 podman[382228]: 2026-01-26 16:40:26.113019202 +0000 UTC m=+1.177722644 container start 50c755e9b788ef2c90c6bc1307efa7e33f140c078e7ab7fff78cbda0f39162d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:40:26 compute-0 jolly_meitner[382245]: 167 167
Jan 26 16:40:26 compute-0 systemd[1]: libpod-50c755e9b788ef2c90c6bc1307efa7e33f140c078e7ab7fff78cbda0f39162d3.scope: Deactivated successfully.
Jan 26 16:40:26 compute-0 podman[382228]: 2026-01-26 16:40:26.142542608 +0000 UTC m=+1.207246070 container attach 50c755e9b788ef2c90c6bc1307efa7e33f140c078e7ab7fff78cbda0f39162d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_meitner, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:40:26 compute-0 podman[382228]: 2026-01-26 16:40:26.143184763 +0000 UTC m=+1.207888195 container died 50c755e9b788ef2c90c6bc1307efa7e33f140c078e7ab7fff78cbda0f39162d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_meitner, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:40:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3670812ea6b13355e831cd863549e4efd283e91fc21ea197e88a88d234f90fae-merged.mount: Deactivated successfully.
Jan 26 16:40:26 compute-0 podman[382228]: 2026-01-26 16:40:26.212202281 +0000 UTC m=+1.276905713 container remove 50c755e9b788ef2c90c6bc1307efa7e33f140c078e7ab7fff78cbda0f39162d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_meitner, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:40:26 compute-0 systemd[1]: libpod-conmon-50c755e9b788ef2c90c6bc1307efa7e33f140c078e7ab7fff78cbda0f39162d3.scope: Deactivated successfully.
Jan 26 16:40:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:26 compute-0 podman[382269]: 2026-01-26 16:40:26.351782496 +0000 UTC m=+0.024588855 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:40:26 compute-0 podman[382269]: 2026-01-26 16:40:26.523717808 +0000 UTC m=+0.196524147 container create 3747d1563c23d19477775ccecbebb3490aa9a055263b33dacba07efd2bc6434c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_visvesvaraya, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:40:26 compute-0 systemd[1]: Started libpod-conmon-3747d1563c23d19477775ccecbebb3490aa9a055263b33dacba07efd2bc6434c.scope.
Jan 26 16:40:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b7fbc1480c9c4fbe657db3142c7bdfb3c81c0e5e0f9d61716b42eb26ea642c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b7fbc1480c9c4fbe657db3142c7bdfb3c81c0e5e0f9d61716b42eb26ea642c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b7fbc1480c9c4fbe657db3142c7bdfb3c81c0e5e0f9d61716b42eb26ea642c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b7fbc1480c9c4fbe657db3142c7bdfb3c81c0e5e0f9d61716b42eb26ea642c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/95b7fbc1480c9c4fbe657db3142c7bdfb3c81c0e5e0f9d61716b42eb26ea642c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:26 compute-0 ceph-mon[75140]: pgmap v2720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:26 compute-0 podman[382269]: 2026-01-26 16:40:26.61279029 +0000 UTC m=+0.285596649 container init 3747d1563c23d19477775ccecbebb3490aa9a055263b33dacba07efd2bc6434c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 16:40:26 compute-0 podman[382269]: 2026-01-26 16:40:26.621531045 +0000 UTC m=+0.294337384 container start 3747d1563c23d19477775ccecbebb3490aa9a055263b33dacba07efd2bc6434c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:40:26 compute-0 podman[382269]: 2026-01-26 16:40:26.646328925 +0000 UTC m=+0.319135264 container attach 3747d1563c23d19477775ccecbebb3490aa9a055263b33dacba07efd2bc6434c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_visvesvaraya, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:40:27 compute-0 thirsty_visvesvaraya[382286]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:40:27 compute-0 thirsty_visvesvaraya[382286]: --> All data devices are unavailable
Jan 26 16:40:27 compute-0 systemd[1]: libpod-3747d1563c23d19477775ccecbebb3490aa9a055263b33dacba07efd2bc6434c.scope: Deactivated successfully.
Jan 26 16:40:27 compute-0 podman[382269]: 2026-01-26 16:40:27.076698455 +0000 UTC m=+0.749504834 container died 3747d1563c23d19477775ccecbebb3490aa9a055263b33dacba07efd2bc6434c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 16:40:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-95b7fbc1480c9c4fbe657db3142c7bdfb3c81c0e5e0f9d61716b42eb26ea642c-merged.mount: Deactivated successfully.
Jan 26 16:40:27 compute-0 nova_compute[239965]: 2026-01-26 16:40:27.516 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:27 compute-0 podman[382269]: 2026-01-26 16:40:27.53408143 +0000 UTC m=+1.206887799 container remove 3747d1563c23d19477775ccecbebb3490aa9a055263b33dacba07efd2bc6434c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_visvesvaraya, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:40:27 compute-0 sudo[382191]: pam_unix(sudo:session): session closed for user root
Jan 26 16:40:27 compute-0 systemd[1]: libpod-conmon-3747d1563c23d19477775ccecbebb3490aa9a055263b33dacba07efd2bc6434c.scope: Deactivated successfully.
Jan 26 16:40:27 compute-0 sudo[382319]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:40:27 compute-0 sudo[382319]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:40:27 compute-0 sudo[382319]: pam_unix(sudo:session): session closed for user root
Jan 26 16:40:27 compute-0 sudo[382344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:40:27 compute-0 sudo[382344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:40:27 compute-0 podman[382379]: 2026-01-26 16:40:27.973459892 +0000 UTC m=+0.042242271 container create 1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_fermat, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:40:28 compute-0 systemd[1]: Started libpod-conmon-1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f.scope.
Jan 26 16:40:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:40:28 compute-0 podman[382379]: 2026-01-26 16:40:27.95105304 +0000 UTC m=+0.019835459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:40:28 compute-0 podman[382379]: 2026-01-26 16:40:28.089111908 +0000 UTC m=+0.157894297 container init 1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:40:28 compute-0 podman[382379]: 2026-01-26 16:40:28.095980827 +0000 UTC m=+0.164763196 container start 1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_fermat, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:40:28 compute-0 eloquent_fermat[382395]: 167 167
Jan 26 16:40:28 compute-0 systemd[1]: libpod-1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f.scope: Deactivated successfully.
Jan 26 16:40:28 compute-0 conmon[382395]: conmon 1f3d9659e6219ff0bdbf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f.scope/container/memory.events
Jan 26 16:40:28 compute-0 podman[382379]: 2026-01-26 16:40:28.123806702 +0000 UTC m=+0.192589101 container attach 1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:40:28 compute-0 podman[382379]: 2026-01-26 16:40:28.124509248 +0000 UTC m=+0.193291627 container died 1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_fermat, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:40:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb5dc49d608c12a5d4862829504fb519b1d09348dfa3b9c03a2405476cb705a3-merged.mount: Deactivated successfully.
Jan 26 16:40:28 compute-0 podman[382379]: 2026-01-26 16:40:28.240019251 +0000 UTC m=+0.308801620 container remove 1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:40:28 compute-0 systemd[1]: libpod-conmon-1f3d9659e6219ff0bdbf451b68e101deaa02cb8a4f85602192b76e7d8eaf067f.scope: Deactivated successfully.
Jan 26 16:40:28 compute-0 nova_compute[239965]: 2026-01-26 16:40:28.439 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:28 compute-0 podman[382419]: 2026-01-26 16:40:28.45331986 +0000 UTC m=+0.093394680 container create 68b1d932213144bccdd988be2b79e20bde5c38c46a347676bb12337f72e19078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:40:28 compute-0 podman[382419]: 2026-01-26 16:40:28.381433271 +0000 UTC m=+0.021508121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:40:28 compute-0 systemd[1]: Started libpod-conmon-68b1d932213144bccdd988be2b79e20bde5c38c46a347676bb12337f72e19078.scope.
Jan 26 16:40:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757a90d79cc233435f6cefb62825a592eb5d63e3438be9d575b0f4f219b6b5fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757a90d79cc233435f6cefb62825a592eb5d63e3438be9d575b0f4f219b6b5fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757a90d79cc233435f6cefb62825a592eb5d63e3438be9d575b0f4f219b6b5fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/757a90d79cc233435f6cefb62825a592eb5d63e3438be9d575b0f4f219b6b5fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:28 compute-0 podman[382419]: 2026-01-26 16:40:28.540051414 +0000 UTC m=+0.180126254 container init 68b1d932213144bccdd988be2b79e20bde5c38c46a347676bb12337f72e19078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:40:28 compute-0 podman[382419]: 2026-01-26 16:40:28.547085147 +0000 UTC m=+0.187159967 container start 68b1d932213144bccdd988be2b79e20bde5c38c46a347676bb12337f72e19078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_benz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 16:40:28 compute-0 podman[382419]: 2026-01-26 16:40:28.550975163 +0000 UTC m=+0.191049983 container attach 68b1d932213144bccdd988be2b79e20bde5c38c46a347676bb12337f72e19078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_benz, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 16:40:28 compute-0 ceph-mon[75140]: pgmap v2721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:40:28
Jan 26 16:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.meta', 'volumes', 'vms', 'default.rgw.control', 'default.rgw.log', '.mgr']
Jan 26 16:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:40:28 compute-0 determined_benz[382436]: {
Jan 26 16:40:28 compute-0 determined_benz[382436]:     "0": [
Jan 26 16:40:28 compute-0 determined_benz[382436]:         {
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "devices": [
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "/dev/loop3"
Jan 26 16:40:28 compute-0 determined_benz[382436]:             ],
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_name": "ceph_lv0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_size": "21470642176",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "name": "ceph_lv0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "tags": {
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.cluster_name": "ceph",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.crush_device_class": "",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.encrypted": "0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.objectstore": "bluestore",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.osd_id": "0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.type": "block",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.vdo": "0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.with_tpm": "0"
Jan 26 16:40:28 compute-0 determined_benz[382436]:             },
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "type": "block",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "vg_name": "ceph_vg0"
Jan 26 16:40:28 compute-0 determined_benz[382436]:         }
Jan 26 16:40:28 compute-0 determined_benz[382436]:     ],
Jan 26 16:40:28 compute-0 determined_benz[382436]:     "1": [
Jan 26 16:40:28 compute-0 determined_benz[382436]:         {
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "devices": [
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "/dev/loop4"
Jan 26 16:40:28 compute-0 determined_benz[382436]:             ],
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_name": "ceph_lv1",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_size": "21470642176",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "name": "ceph_lv1",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "tags": {
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.cluster_name": "ceph",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.crush_device_class": "",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.encrypted": "0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.objectstore": "bluestore",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.osd_id": "1",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.type": "block",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.vdo": "0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.with_tpm": "0"
Jan 26 16:40:28 compute-0 determined_benz[382436]:             },
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "type": "block",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "vg_name": "ceph_vg1"
Jan 26 16:40:28 compute-0 determined_benz[382436]:         }
Jan 26 16:40:28 compute-0 determined_benz[382436]:     ],
Jan 26 16:40:28 compute-0 determined_benz[382436]:     "2": [
Jan 26 16:40:28 compute-0 determined_benz[382436]:         {
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "devices": [
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "/dev/loop5"
Jan 26 16:40:28 compute-0 determined_benz[382436]:             ],
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_name": "ceph_lv2",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_size": "21470642176",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "name": "ceph_lv2",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "tags": {
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.cluster_name": "ceph",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.crush_device_class": "",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.encrypted": "0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.objectstore": "bluestore",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.osd_id": "2",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.type": "block",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.vdo": "0",
Jan 26 16:40:28 compute-0 determined_benz[382436]:                 "ceph.with_tpm": "0"
Jan 26 16:40:28 compute-0 determined_benz[382436]:             },
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "type": "block",
Jan 26 16:40:28 compute-0 determined_benz[382436]:             "vg_name": "ceph_vg2"
Jan 26 16:40:28 compute-0 determined_benz[382436]:         }
Jan 26 16:40:28 compute-0 determined_benz[382436]:     ]
Jan 26 16:40:28 compute-0 determined_benz[382436]: }
Jan 26 16:40:28 compute-0 systemd[1]: libpod-68b1d932213144bccdd988be2b79e20bde5c38c46a347676bb12337f72e19078.scope: Deactivated successfully.
Jan 26 16:40:28 compute-0 podman[382419]: 2026-01-26 16:40:28.85927801 +0000 UTC m=+0.499352840 container died 68b1d932213144bccdd988be2b79e20bde5c38c46a347676bb12337f72e19078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_benz, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:40:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-757a90d79cc233435f6cefb62825a592eb5d63e3438be9d575b0f4f219b6b5fc-merged.mount: Deactivated successfully.
Jan 26 16:40:29 compute-0 podman[382419]: 2026-01-26 16:40:29.305715255 +0000 UTC m=+0.945790075 container remove 68b1d932213144bccdd988be2b79e20bde5c38c46a347676bb12337f72e19078 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_benz, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:40:29 compute-0 systemd[1]: libpod-conmon-68b1d932213144bccdd988be2b79e20bde5c38c46a347676bb12337f72e19078.scope: Deactivated successfully.
Jan 26 16:40:29 compute-0 sudo[382344]: pam_unix(sudo:session): session closed for user root
Jan 26 16:40:29 compute-0 sudo[382457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:40:29 compute-0 sudo[382457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:40:29 compute-0 sudo[382457]: pam_unix(sudo:session): session closed for user root
Jan 26 16:40:29 compute-0 sudo[382482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:40:29 compute-0 sudo[382482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:40:29 compute-0 podman[382519]: 2026-01-26 16:40:29.758934758 +0000 UTC m=+0.048948886 container create de2d5f866c308826a413dd94012573c638b61b81606c957855eb8e63aa83e8ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mcclintock, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:40:29 compute-0 nova_compute[239965]: 2026-01-26 16:40:29.803 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:29 compute-0 systemd[1]: Started libpod-conmon-de2d5f866c308826a413dd94012573c638b61b81606c957855eb8e63aa83e8ca.scope.
Jan 26 16:40:29 compute-0 podman[382519]: 2026-01-26 16:40:29.73503779 +0000 UTC m=+0.025051978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:40:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:40:29 compute-0 podman[382519]: 2026-01-26 16:40:29.899292421 +0000 UTC m=+0.189306579 container init de2d5f866c308826a413dd94012573c638b61b81606c957855eb8e63aa83e8ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:40:29 compute-0 podman[382519]: 2026-01-26 16:40:29.90655905 +0000 UTC m=+0.196573178 container start de2d5f866c308826a413dd94012573c638b61b81606c957855eb8e63aa83e8ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:40:29 compute-0 nostalgic_mcclintock[382535]: 167 167
Jan 26 16:40:29 compute-0 systemd[1]: libpod-de2d5f866c308826a413dd94012573c638b61b81606c957855eb8e63aa83e8ca.scope: Deactivated successfully.
Jan 26 16:40:29 compute-0 podman[382519]: 2026-01-26 16:40:29.950748538 +0000 UTC m=+0.240762686 container attach de2d5f866c308826a413dd94012573c638b61b81606c957855eb8e63aa83e8ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mcclintock, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:40:29 compute-0 podman[382519]: 2026-01-26 16:40:29.951356392 +0000 UTC m=+0.241370540 container died de2d5f866c308826a413dd94012573c638b61b81606c957855eb8e63aa83e8ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:40:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-10b7cc983341d2951e67470418f02a92a1c0226d53ea4250cfffdc06c31013fc-merged.mount: Deactivated successfully.
Jan 26 16:40:30 compute-0 podman[382519]: 2026-01-26 16:40:30.222186026 +0000 UTC m=+0.512200154 container remove de2d5f866c308826a413dd94012573c638b61b81606c957855eb8e63aa83e8ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:40:30 compute-0 systemd[1]: libpod-conmon-de2d5f866c308826a413dd94012573c638b61b81606c957855eb8e63aa83e8ca.scope: Deactivated successfully.
Jan 26 16:40:30 compute-0 podman[382561]: 2026-01-26 16:40:30.403710513 +0000 UTC m=+0.056187163 container create 0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_albattani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:40:30 compute-0 systemd[1]: Started libpod-conmon-0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c.scope.
Jan 26 16:40:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a802a2827ed99dbe288cb5d858871d939f277d9b6d522eae0fed6bed0bc1b910/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a802a2827ed99dbe288cb5d858871d939f277d9b6d522eae0fed6bed0bc1b910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:30 compute-0 podman[382561]: 2026-01-26 16:40:30.377511198 +0000 UTC m=+0.029987928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a802a2827ed99dbe288cb5d858871d939f277d9b6d522eae0fed6bed0bc1b910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a802a2827ed99dbe288cb5d858871d939f277d9b6d522eae0fed6bed0bc1b910/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:40:30 compute-0 podman[382561]: 2026-01-26 16:40:30.641205298 +0000 UTC m=+0.293681968 container init 0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_albattani, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:40:30 compute-0 podman[382561]: 2026-01-26 16:40:30.648222511 +0000 UTC m=+0.300699191 container start 0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_albattani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:40:30 compute-0 podman[382561]: 2026-01-26 16:40:30.656288228 +0000 UTC m=+0.308764968 container attach 0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 16:40:30 compute-0 ceph-mon[75140]: pgmap v2722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:40:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:40:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:31 compute-0 lvm[382655]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:40:31 compute-0 lvm[382655]: VG ceph_vg0 finished
Jan 26 16:40:31 compute-0 lvm[382656]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:40:31 compute-0 lvm[382656]: VG ceph_vg1 finished
Jan 26 16:40:31 compute-0 lvm[382658]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:40:31 compute-0 lvm[382658]: VG ceph_vg2 finished
Jan 26 16:40:31 compute-0 confident_albattani[382577]: {}
Jan 26 16:40:31 compute-0 systemd[1]: libpod-0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c.scope: Deactivated successfully.
Jan 26 16:40:31 compute-0 systemd[1]: libpod-0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c.scope: Consumed 1.333s CPU time.
Jan 26 16:40:31 compute-0 podman[382561]: 2026-01-26 16:40:31.483709109 +0000 UTC m=+1.136185759 container died 0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_albattani, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 16:40:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a802a2827ed99dbe288cb5d858871d939f277d9b6d522eae0fed6bed0bc1b910-merged.mount: Deactivated successfully.
Jan 26 16:40:31 compute-0 podman[382661]: 2026-01-26 16:40:31.537402501 +0000 UTC m=+0.086750446 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:40:31 compute-0 podman[382561]: 2026-01-26 16:40:31.54347924 +0000 UTC m=+1.195955890 container remove 0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_albattani, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:40:31 compute-0 systemd[1]: libpod-conmon-0b9707179a3545f99bc8d158125c273bde5eb2a981789fccbdaac166738e1c3c.scope: Deactivated successfully.
Jan 26 16:40:31 compute-0 podman[382662]: 2026-01-26 16:40:31.571395547 +0000 UTC m=+0.121344347 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 16:40:31 compute-0 sudo[382482]: pam_unix(sudo:session): session closed for user root
Jan 26 16:40:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:40:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:40:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:40:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:40:31 compute-0 sudo[382717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:40:31 compute-0 sudo[382717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:40:31 compute-0 sudo[382717]: pam_unix(sudo:session): session closed for user root
Jan 26 16:40:32 compute-0 ceph-mon[75140]: pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:40:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:40:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:33 compute-0 nova_compute[239965]: 2026-01-26 16:40:33.483 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:34 compute-0 ceph-mon[75140]: pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:34 compute-0 nova_compute[239965]: 2026-01-26 16:40:34.805 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:36 compute-0 ceph-mon[75140]: pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:38 compute-0 nova_compute[239965]: 2026-01-26 16:40:38.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:38 compute-0 ceph-mon[75140]: pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:39 compute-0 nova_compute[239965]: 2026-01-26 16:40:39.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:39 compute-0 nova_compute[239965]: 2026-01-26 16:40:39.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:40:39 compute-0 nova_compute[239965]: 2026-01-26 16:40:39.539 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:40:39 compute-0 nova_compute[239965]: 2026-01-26 16:40:39.806 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:40 compute-0 ceph-mon[75140]: pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:43 compute-0 ceph-mon[75140]: pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:43 compute-0 nova_compute[239965]: 2026-01-26 16:40:43.488 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:44 compute-0 ceph-mon[75140]: pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:44 compute-0 nova_compute[239965]: 2026-01-26 16:40:44.844 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:46 compute-0 ceph-mon[75140]: pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:48 compute-0 nova_compute[239965]: 2026-01-26 16:40:48.490 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:48 compute-0 ceph-mon[75140]: pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:40:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2674674568' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:40:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:40:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2674674568' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:40:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:40:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2674674568' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:40:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2674674568' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:40:49 compute-0 nova_compute[239965]: 2026-01-26 16:40:49.812 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:40:49 compute-0 nova_compute[239965]: 2026-01-26 16:40:49.846 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:50 compute-0 ceph-mon[75140]: pgmap v2732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:53 compute-0 ceph-mon[75140]: pgmap v2733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:53 compute-0 nova_compute[239965]: 2026-01-26 16:40:53.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:54 compute-0 ceph-mon[75140]: pgmap v2734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:54 compute-0 nova_compute[239965]: 2026-01-26 16:40:54.848 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:40:56 compute-0 ceph-mon[75140]: pgmap v2735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:58 compute-0 nova_compute[239965]: 2026-01-26 16:40:58.494 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:40:58 compute-0 ceph-mon[75140]: pgmap v2736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:40:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:40:59.263 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:40:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:40:59.263 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:40:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:40:59.264 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:40:59 compute-0 nova_compute[239965]: 2026-01-26 16:40:59.873 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:00 compute-0 ceph-mon[75140]: pgmap v2737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:41:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:02 compute-0 podman[382742]: 2026-01-26 16:41:02.36407943 +0000 UTC m=+0.055230091 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:41:02 compute-0 ceph-mon[75140]: pgmap v2738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:02 compute-0 podman[382743]: 2026-01-26 16:41:02.399644984 +0000 UTC m=+0.088981521 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 26 16:41:02 compute-0 nova_compute[239965]: 2026-01-26 16:41:02.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:03 compute-0 nova_compute[239965]: 2026-01-26 16:41:03.495 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:04 compute-0 nova_compute[239965]: 2026-01-26 16:41:04.874 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:04 compute-0 ceph-mon[75140]: pgmap v2739: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:06 compute-0 ceph-mon[75140]: pgmap v2740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:06 compute-0 nova_compute[239965]: 2026-01-26 16:41:06.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:08 compute-0 nova_compute[239965]: 2026-01-26 16:41:08.497 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:09 compute-0 ceph-mon[75140]: pgmap v2741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:09 compute-0 nova_compute[239965]: 2026-01-26 16:41:09.877 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:10 compute-0 ceph-mon[75140]: pgmap v2742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:11 compute-0 nova_compute[239965]: 2026-01-26 16:41:11.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:12 compute-0 ceph-mon[75140]: pgmap v2743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:13 compute-0 nova_compute[239965]: 2026-01-26 16:41:13.500 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:13 compute-0 nova_compute[239965]: 2026-01-26 16:41:13.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:13 compute-0 nova_compute[239965]: 2026-01-26 16:41:13.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:13 compute-0 nova_compute[239965]: 2026-01-26 16:41:13.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:13 compute-0 nova_compute[239965]: 2026-01-26 16:41:13.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:13 compute-0 nova_compute[239965]: 2026-01-26 16:41:13.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:41:14 compute-0 nova_compute[239965]: 2026-01-26 16:41:14.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:14 compute-0 nova_compute[239965]: 2026-01-26 16:41:14.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:41:14 compute-0 nova_compute[239965]: 2026-01-26 16:41:14.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:41:14 compute-0 nova_compute[239965]: 2026-01-26 16:41:14.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:41:14 compute-0 nova_compute[239965]: 2026-01-26 16:41:14.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:15 compute-0 ceph-mon[75140]: pgmap v2744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:16 compute-0 ceph-mon[75140]: pgmap v2745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:18 compute-0 ceph-mon[75140]: pgmap v2746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:18 compute-0 nova_compute[239965]: 2026-01-26 16:41:18.502 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:19 compute-0 sshd-session[382788]: Received disconnect from 45.227.254.170 port 61678:11:  [preauth]
Jan 26 16:41:19 compute-0 sshd-session[382788]: Disconnected from authenticating user root 45.227.254.170 port 61678 [preauth]
Jan 26 16:41:19 compute-0 nova_compute[239965]: 2026-01-26 16:41:19.881 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:20 compute-0 ceph-mon[75140]: pgmap v2747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:21 compute-0 nova_compute[239965]: 2026-01-26 16:41:21.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:41:21 compute-0 nova_compute[239965]: 2026-01-26 16:41:21.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:41:21 compute-0 nova_compute[239965]: 2026-01-26 16:41:21.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:41:21 compute-0 nova_compute[239965]: 2026-01-26 16:41:21.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:41:21 compute-0 nova_compute[239965]: 2026-01-26 16:41:21.542 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:41:21 compute-0 nova_compute[239965]: 2026-01-26 16:41:21.542 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:41:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/91981689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.133 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.307 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.308 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3583MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.309 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.309 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.370 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.371 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.391 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:41:22 compute-0 ceph-mon[75140]: pgmap v2748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/91981689' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:41:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:41:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479057370' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.977 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.983 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:41:22 compute-0 nova_compute[239965]: 2026-01-26 16:41:22.999 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:41:23 compute-0 nova_compute[239965]: 2026-01-26 16:41:23.000 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:41:23 compute-0 nova_compute[239965]: 2026-01-26 16:41:23.000 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:41:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:23 compute-0 nova_compute[239965]: 2026-01-26 16:41:23.505 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:41:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 183K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.76 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 820 writes, 2067 keys, 820 commit groups, 1.0 writes per commit group, ingest: 1.17 MB, 0.00 MB/s
                                           Interval WAL: 820 writes, 377 syncs, 2.18 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:41:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/479057370' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:41:24 compute-0 nova_compute[239965]: 2026-01-26 16:41:24.883 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:25 compute-0 ceph-mon[75140]: pgmap v2749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:26 compute-0 ceph-mon[75140]: pgmap v2750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:28 compute-0 ceph-mon[75140]: pgmap v2751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:28 compute-0 nova_compute[239965]: 2026-01-26 16:41:28.540 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:41:28
Jan 26 16:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms', 'volumes', 'backups', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.log']
Jan 26 16:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:41:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:41:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.4 total, 600.0 interval
                                           Cumulative writes: 49K writes, 191K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.69 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1018 writes, 3286 keys, 1018 commit groups, 1.0 writes per commit group, ingest: 2.60 MB, 0.00 MB/s
                                           Interval WAL: 1018 writes, 434 syncs, 2.35 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:41:29 compute-0 nova_compute[239965]: 2026-01-26 16:41:29.885 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:41:30 compute-0 ceph-mon[75140]: pgmap v2752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:41:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:41:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:31 compute-0 sudo[382834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:41:31 compute-0 sudo[382834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:41:31 compute-0 sudo[382834]: pam_unix(sudo:session): session closed for user root
Jan 26 16:41:31 compute-0 sudo[382859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:41:31 compute-0 sudo[382859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:41:32 compute-0 sudo[382859]: pam_unix(sudo:session): session closed for user root
Jan 26 16:41:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:41:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:41:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:41:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:41:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:41:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:41:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:41:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:41:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:41:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:41:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:41:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:41:32 compute-0 sudo[382914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:41:32 compute-0 sudo[382914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:41:32 compute-0 ceph-mon[75140]: pgmap v2753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:41:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:41:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:41:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:41:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:41:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:41:32 compute-0 sudo[382914]: pam_unix(sudo:session): session closed for user root
Jan 26 16:41:32 compute-0 sudo[382951]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:41:32 compute-0 sudo[382951]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:41:32 compute-0 podman[382938]: 2026-01-26 16:41:32.596988546 +0000 UTC m=+0.092828405 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:41:32 compute-0 podman[382939]: 2026-01-26 16:41:32.606699475 +0000 UTC m=+0.097568301 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:41:32 compute-0 podman[383020]: 2026-01-26 16:41:32.896393864 +0000 UTC m=+0.043177624 container create a0645d4785824ff79c77bf7d19b9ec00df440353d9abf08b2bc7d578216dbc3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:41:32 compute-0 systemd[1]: Started libpod-conmon-a0645d4785824ff79c77bf7d19b9ec00df440353d9abf08b2bc7d578216dbc3e.scope.
Jan 26 16:41:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:41:32 compute-0 podman[383020]: 2026-01-26 16:41:32.875782306 +0000 UTC m=+0.022566086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:41:32 compute-0 podman[383020]: 2026-01-26 16:41:32.972888686 +0000 UTC m=+0.119672476 container init a0645d4785824ff79c77bf7d19b9ec00df440353d9abf08b2bc7d578216dbc3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Jan 26 16:41:32 compute-0 podman[383020]: 2026-01-26 16:41:32.98200897 +0000 UTC m=+0.128792730 container start a0645d4785824ff79c77bf7d19b9ec00df440353d9abf08b2bc7d578216dbc3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 16:41:32 compute-0 elated_stonebraker[383036]: 167 167
Jan 26 16:41:32 compute-0 systemd[1]: libpod-a0645d4785824ff79c77bf7d19b9ec00df440353d9abf08b2bc7d578216dbc3e.scope: Deactivated successfully.
Jan 26 16:41:32 compute-0 podman[383020]: 2026-01-26 16:41:32.99214206 +0000 UTC m=+0.138925850 container attach a0645d4785824ff79c77bf7d19b9ec00df440353d9abf08b2bc7d578216dbc3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_stonebraker, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:41:32 compute-0 podman[383020]: 2026-01-26 16:41:32.992874928 +0000 UTC m=+0.139658698 container died a0645d4785824ff79c77bf7d19b9ec00df440353d9abf08b2bc7d578216dbc3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-da91a31e0b218b9941316037ad7183420b0558dbfdfacbde2065034a71745cf1-merged.mount: Deactivated successfully.
Jan 26 16:41:33 compute-0 podman[383020]: 2026-01-26 16:41:33.066958241 +0000 UTC m=+0.213742001 container remove a0645d4785824ff79c77bf7d19b9ec00df440353d9abf08b2bc7d578216dbc3e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_stonebraker, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 16:41:33 compute-0 systemd[1]: libpod-conmon-a0645d4785824ff79c77bf7d19b9ec00df440353d9abf08b2bc7d578216dbc3e.scope: Deactivated successfully.
Jan 26 16:41:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:33 compute-0 podman[383060]: 2026-01-26 16:41:33.277720418 +0000 UTC m=+0.063149176 container create 23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:41:33 compute-0 systemd[1]: Started libpod-conmon-23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5.scope.
Jan 26 16:41:33 compute-0 podman[383060]: 2026-01-26 16:41:33.247115444 +0000 UTC m=+0.032544272 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:41:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4481dff734659beed56865cfa7ecfb4476b789fd31c250d5008e3026d2d6a39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4481dff734659beed56865cfa7ecfb4476b789fd31c250d5008e3026d2d6a39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4481dff734659beed56865cfa7ecfb4476b789fd31c250d5008e3026d2d6a39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4481dff734659beed56865cfa7ecfb4476b789fd31c250d5008e3026d2d6a39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4481dff734659beed56865cfa7ecfb4476b789fd31c250d5008e3026d2d6a39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:33 compute-0 podman[383060]: 2026-01-26 16:41:33.364945794 +0000 UTC m=+0.150374552 container init 23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_golick, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:41:33 compute-0 podman[383060]: 2026-01-26 16:41:33.371815523 +0000 UTC m=+0.157244261 container start 23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_golick, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:41:33 compute-0 podman[383060]: 2026-01-26 16:41:33.375534944 +0000 UTC m=+0.160963702 container attach 23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:41:33 compute-0 nova_compute[239965]: 2026-01-26 16:41:33.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:33 compute-0 eloquent_golick[383077]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:41:33 compute-0 eloquent_golick[383077]: --> All data devices are unavailable
Jan 26 16:41:33 compute-0 systemd[1]: libpod-23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5.scope: Deactivated successfully.
Jan 26 16:41:33 compute-0 conmon[383077]: conmon 23ba24e14a57b3fbae0b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5.scope/container/memory.events
Jan 26 16:41:33 compute-0 podman[383060]: 2026-01-26 16:41:33.84472627 +0000 UTC m=+0.630155008 container died 23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_golick, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:41:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4481dff734659beed56865cfa7ecfb4476b789fd31c250d5008e3026d2d6a39-merged.mount: Deactivated successfully.
Jan 26 16:41:33 compute-0 podman[383060]: 2026-01-26 16:41:33.913444521 +0000 UTC m=+0.698873259 container remove 23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_golick, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:41:33 compute-0 systemd[1]: libpod-conmon-23ba24e14a57b3fbae0b6043d5774080de1a235a3729509098346d5e9ce8c1a5.scope: Deactivated successfully.
Jan 26 16:41:33 compute-0 sudo[382951]: pam_unix(sudo:session): session closed for user root
Jan 26 16:41:34 compute-0 sudo[383107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:41:34 compute-0 sudo[383107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:41:34 compute-0 sudo[383107]: pam_unix(sudo:session): session closed for user root
Jan 26 16:41:34 compute-0 sudo[383132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:41:34 compute-0 sudo[383132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:41:34 compute-0 podman[383168]: 2026-01-26 16:41:34.44576167 +0000 UTC m=+0.077593901 container create 464b65dca921fb0a5d3ade616dc79ff52d43b101fff3bde9ff70ffe72fd4ef53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 16:41:34 compute-0 systemd[1]: Started libpod-conmon-464b65dca921fb0a5d3ade616dc79ff52d43b101fff3bde9ff70ffe72fd4ef53.scope.
Jan 26 16:41:34 compute-0 podman[383168]: 2026-01-26 16:41:34.392353686 +0000 UTC m=+0.024186017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:41:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:41:34 compute-0 podman[383168]: 2026-01-26 16:41:34.522476038 +0000 UTC m=+0.154308319 container init 464b65dca921fb0a5d3ade616dc79ff52d43b101fff3bde9ff70ffe72fd4ef53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:41:34 compute-0 podman[383168]: 2026-01-26 16:41:34.528910266 +0000 UTC m=+0.160742497 container start 464b65dca921fb0a5d3ade616dc79ff52d43b101fff3bde9ff70ffe72fd4ef53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:41:34 compute-0 condescending_albattani[383184]: 167 167
Jan 26 16:41:34 compute-0 systemd[1]: libpod-464b65dca921fb0a5d3ade616dc79ff52d43b101fff3bde9ff70ffe72fd4ef53.scope: Deactivated successfully.
Jan 26 16:41:34 compute-0 podman[383168]: 2026-01-26 16:41:34.53799087 +0000 UTC m=+0.169823121 container attach 464b65dca921fb0a5d3ade616dc79ff52d43b101fff3bde9ff70ffe72fd4ef53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:41:34 compute-0 podman[383168]: 2026-01-26 16:41:34.53884696 +0000 UTC m=+0.170679191 container died 464b65dca921fb0a5d3ade616dc79ff52d43b101fff3bde9ff70ffe72fd4ef53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:41:34 compute-0 ceph-mon[75140]: pgmap v2754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-47710ced6f31ffcf7086007ef7d2dd86fbb021308e2ccecb8fa50e53f4a38167-merged.mount: Deactivated successfully.
Jan 26 16:41:34 compute-0 podman[383168]: 2026-01-26 16:41:34.588906692 +0000 UTC m=+0.220738923 container remove 464b65dca921fb0a5d3ade616dc79ff52d43b101fff3bde9ff70ffe72fd4ef53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:41:34 compute-0 systemd[1]: libpod-conmon-464b65dca921fb0a5d3ade616dc79ff52d43b101fff3bde9ff70ffe72fd4ef53.scope: Deactivated successfully.
Jan 26 16:41:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:41:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 4800.2 total, 600.0 interval
                                           Cumulative writes: 37K writes, 148K keys, 37K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 37K writes, 13K syncs, 2.73 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 772 writes, 2086 keys, 772 commit groups, 1.0 writes per commit group, ingest: 0.92 MB, 0.00 MB/s
                                           Interval WAL: 772 writes, 351 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:41:34 compute-0 podman[383209]: 2026-01-26 16:41:34.789797136 +0000 UTC m=+0.061074064 container create b6ed292aa19d2f2144c70d4d46a7438344d916e7f9c73ecd9ebcb3b21d7a293e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_nobel, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:41:34 compute-0 systemd[1]: Started libpod-conmon-b6ed292aa19d2f2144c70d4d46a7438344d916e7f9c73ecd9ebcb3b21d7a293e.scope.
Jan 26 16:41:34 compute-0 podman[383209]: 2026-01-26 16:41:34.751231217 +0000 UTC m=+0.022508175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:41:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f6e26de424f2000c4a5a5e9707ad70286cc20cec09fcbb507bc40a2d52cf8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f6e26de424f2000c4a5a5e9707ad70286cc20cec09fcbb507bc40a2d52cf8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f6e26de424f2000c4a5a5e9707ad70286cc20cec09fcbb507bc40a2d52cf8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/279f6e26de424f2000c4a5a5e9707ad70286cc20cec09fcbb507bc40a2d52cf8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:34 compute-0 nova_compute[239965]: 2026-01-26 16:41:34.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:34 compute-0 podman[383209]: 2026-01-26 16:41:34.94114435 +0000 UTC m=+0.212421288 container init b6ed292aa19d2f2144c70d4d46a7438344d916e7f9c73ecd9ebcb3b21d7a293e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_nobel, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 16:41:34 compute-0 podman[383209]: 2026-01-26 16:41:34.951993467 +0000 UTC m=+0.223270395 container start b6ed292aa19d2f2144c70d4d46a7438344d916e7f9c73ecd9ebcb3b21d7a293e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:41:34 compute-0 podman[383209]: 2026-01-26 16:41:34.956853957 +0000 UTC m=+0.228130915 container attach b6ed292aa19d2f2144c70d4d46a7438344d916e7f9c73ecd9ebcb3b21d7a293e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]: {
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:     "0": [
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:         {
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "devices": [
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "/dev/loop3"
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             ],
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_name": "ceph_lv0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_size": "21470642176",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "name": "ceph_lv0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "tags": {
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.cluster_name": "ceph",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.crush_device_class": "",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.encrypted": "0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.objectstore": "bluestore",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.osd_id": "0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.type": "block",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.vdo": "0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.with_tpm": "0"
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             },
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "type": "block",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "vg_name": "ceph_vg0"
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:         }
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:     ],
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:     "1": [
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:         {
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "devices": [
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "/dev/loop4"
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             ],
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_name": "ceph_lv1",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_size": "21470642176",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "name": "ceph_lv1",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "tags": {
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.cluster_name": "ceph",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.crush_device_class": "",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.encrypted": "0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.objectstore": "bluestore",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.osd_id": "1",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.type": "block",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.vdo": "0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.with_tpm": "0"
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             },
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "type": "block",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "vg_name": "ceph_vg1"
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:         }
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:     ],
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:     "2": [
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:         {
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "devices": [
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "/dev/loop5"
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             ],
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_name": "ceph_lv2",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_size": "21470642176",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "name": "ceph_lv2",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "tags": {
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.cluster_name": "ceph",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.crush_device_class": "",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.encrypted": "0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.objectstore": "bluestore",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.osd_id": "2",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.type": "block",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.vdo": "0",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:                 "ceph.with_tpm": "0"
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             },
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "type": "block",
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:             "vg_name": "ceph_vg2"
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:         }
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]:     ]
Jan 26 16:41:35 compute-0 inspiring_nobel[383225]: }
Jan 26 16:41:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:35 compute-0 systemd[1]: libpod-b6ed292aa19d2f2144c70d4d46a7438344d916e7f9c73ecd9ebcb3b21d7a293e.scope: Deactivated successfully.
Jan 26 16:41:35 compute-0 podman[383209]: 2026-01-26 16:41:35.292456266 +0000 UTC m=+0.563733194 container died b6ed292aa19d2f2144c70d4d46a7438344d916e7f9c73ecd9ebcb3b21d7a293e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_nobel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-279f6e26de424f2000c4a5a5e9707ad70286cc20cec09fcbb507bc40a2d52cf8-merged.mount: Deactivated successfully.
Jan 26 16:41:35 compute-0 podman[383209]: 2026-01-26 16:41:35.337293939 +0000 UTC m=+0.608570857 container remove b6ed292aa19d2f2144c70d4d46a7438344d916e7f9c73ecd9ebcb3b21d7a293e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_nobel, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 16:41:35 compute-0 systemd[1]: libpod-conmon-b6ed292aa19d2f2144c70d4d46a7438344d916e7f9c73ecd9ebcb3b21d7a293e.scope: Deactivated successfully.
Jan 26 16:41:35 compute-0 sudo[383132]: pam_unix(sudo:session): session closed for user root
Jan 26 16:41:35 compute-0 sudo[383245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:41:35 compute-0 sudo[383245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:41:35 compute-0 sudo[383245]: pam_unix(sudo:session): session closed for user root
Jan 26 16:41:35 compute-0 sudo[383270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:41:35 compute-0 sudo[383270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:41:35 compute-0 podman[383307]: 2026-01-26 16:41:35.840621783 +0000 UTC m=+0.100093523 container create d766a69395ff91bc467f681193dab799305018c2cd93ca6f37225b4a41f56baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 16:41:35 compute-0 podman[383307]: 2026-01-26 16:41:35.768927799 +0000 UTC m=+0.028399619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:41:35 compute-0 systemd[1]: Started libpod-conmon-d766a69395ff91bc467f681193dab799305018c2cd93ca6f37225b4a41f56baa.scope.
Jan 26 16:41:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:41:35 compute-0 podman[383307]: 2026-01-26 16:41:35.930379292 +0000 UTC m=+0.189851042 container init d766a69395ff91bc467f681193dab799305018c2cd93ca6f37225b4a41f56baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hodgkin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 16:41:35 compute-0 podman[383307]: 2026-01-26 16:41:35.937281062 +0000 UTC m=+0.196752802 container start d766a69395ff91bc467f681193dab799305018c2cd93ca6f37225b4a41f56baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hodgkin, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 16:41:35 compute-0 nostalgic_hodgkin[383323]: 167 167
Jan 26 16:41:35 compute-0 systemd[1]: libpod-d766a69395ff91bc467f681193dab799305018c2cd93ca6f37225b4a41f56baa.scope: Deactivated successfully.
Jan 26 16:41:35 compute-0 podman[383307]: 2026-01-26 16:41:35.946686494 +0000 UTC m=+0.206158254 container attach d766a69395ff91bc467f681193dab799305018c2cd93ca6f37225b4a41f56baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hodgkin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:41:35 compute-0 podman[383307]: 2026-01-26 16:41:35.947928105 +0000 UTC m=+0.207399855 container died d766a69395ff91bc467f681193dab799305018c2cd93ca6f37225b4a41f56baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hodgkin, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:41:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c58e8bd0560cd93518061734a5da94e888dd09e2ea71db8dbfe529a8f311c0db-merged.mount: Deactivated successfully.
Jan 26 16:41:35 compute-0 podman[383307]: 2026-01-26 16:41:35.991726172 +0000 UTC m=+0.251197912 container remove d766a69395ff91bc467f681193dab799305018c2cd93ca6f37225b4a41f56baa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_hodgkin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:41:36 compute-0 systemd[1]: libpod-conmon-d766a69395ff91bc467f681193dab799305018c2cd93ca6f37225b4a41f56baa.scope: Deactivated successfully.
Jan 26 16:41:36 compute-0 podman[383347]: 2026-01-26 16:41:36.194621075 +0000 UTC m=+0.086258794 container create f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:41:36 compute-0 podman[383347]: 2026-01-26 16:41:36.130496367 +0000 UTC m=+0.022134096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:41:36 compute-0 systemd[1]: Started libpod-conmon-f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68.scope.
Jan 26 16:41:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57af6f5be51d86c526e0400b8fbabd97eca5a7a3d816ba35b7a05e4f3f185910/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57af6f5be51d86c526e0400b8fbabd97eca5a7a3d816ba35b7a05e4f3f185910/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57af6f5be51d86c526e0400b8fbabd97eca5a7a3d816ba35b7a05e4f3f185910/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57af6f5be51d86c526e0400b8fbabd97eca5a7a3d816ba35b7a05e4f3f185910/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:41:36 compute-0 podman[383347]: 2026-01-26 16:41:36.290586406 +0000 UTC m=+0.182224115 container init f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:41:36 compute-0 podman[383347]: 2026-01-26 16:41:36.30741304 +0000 UTC m=+0.199050769 container start f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:41:36 compute-0 podman[383347]: 2026-01-26 16:41:36.311156653 +0000 UTC m=+0.202794362 container attach f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:41:36 compute-0 ceph-mon[75140]: pgmap v2755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:36 compute-0 lvm[383442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:41:36 compute-0 lvm[383441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:41:36 compute-0 lvm[383441]: VG ceph_vg0 finished
Jan 26 16:41:36 compute-0 lvm[383442]: VG ceph_vg1 finished
Jan 26 16:41:36 compute-0 lvm[383444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:41:36 compute-0 lvm[383444]: VG ceph_vg2 finished
Jan 26 16:41:37 compute-0 gracious_noether[383363]: {}
Jan 26 16:41:37 compute-0 systemd[1]: libpod-f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68.scope: Deactivated successfully.
Jan 26 16:41:37 compute-0 systemd[1]: libpod-f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68.scope: Consumed 1.321s CPU time.
Jan 26 16:41:37 compute-0 podman[383347]: 2026-01-26 16:41:37.093353592 +0000 UTC m=+0.984991301 container died f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 16:41:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-57af6f5be51d86c526e0400b8fbabd97eca5a7a3d816ba35b7a05e4f3f185910-merged.mount: Deactivated successfully.
Jan 26 16:41:37 compute-0 podman[383347]: 2026-01-26 16:41:37.134238897 +0000 UTC m=+1.025876606 container remove f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_noether, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:41:37 compute-0 systemd[1]: libpod-conmon-f0c7bafe90cfb0fe2f375dd19e78271bd9790f1e4232732812b1ee3396d5ff68.scope: Deactivated successfully.
Jan 26 16:41:37 compute-0 sudo[383270]: pam_unix(sudo:session): session closed for user root
Jan 26 16:41:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:41:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:41:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:41:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:41:37 compute-0 sudo[383457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:41:37 compute-0 sudo[383457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:41:37 compute-0 sudo[383457]: pam_unix(sudo:session): session closed for user root
Jan 26 16:41:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:38 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:41:38 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:41:38 compute-0 ceph-mon[75140]: pgmap v2756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:38 compute-0 nova_compute[239965]: 2026-01-26 16:41:38.544 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 16:41:39 compute-0 nova_compute[239965]: 2026-01-26 16:41:39.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:40 compute-0 ceph-mon[75140]: pgmap v2757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:42 compute-0 ceph-mon[75140]: pgmap v2758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:43 compute-0 nova_compute[239965]: 2026-01-26 16:41:43.547 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:44 compute-0 ceph-mon[75140]: pgmap v2759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:44 compute-0 nova_compute[239965]: 2026-01-26 16:41:44.920 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:46 compute-0 ceph-mon[75140]: pgmap v2760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:48 compute-0 ceph-mon[75140]: pgmap v2761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:48 compute-0 nova_compute[239965]: 2026-01-26 16:41:48.550 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1063687640' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1063687640' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:41:49 compute-0 ceph-osd[87780]: bluestore.MempoolThread fragmentation_score=0.005588 took=0.000070s
Jan 26 16:41:49 compute-0 ceph-osd[85687]: bluestore.MempoolThread fragmentation_score=0.005312 took=0.000095s
Jan 26 16:41:49 compute-0 ceph-osd[86729]: bluestore.MempoolThread fragmentation_score=0.005051 took=0.000056s
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1063687640' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:41:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1063687640' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:41:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:41:49 compute-0 nova_compute[239965]: 2026-01-26 16:41:49.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:50 compute-0 ceph-mon[75140]: pgmap v2762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:52 compute-0 ceph-mon[75140]: pgmap v2763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:53 compute-0 nova_compute[239965]: 2026-01-26 16:41:53.553 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:54 compute-0 ceph-mon[75140]: pgmap v2764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:54 compute-0 nova_compute[239965]: 2026-01-26 16:41:54.925 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:41:56 compute-0 ceph-mon[75140]: pgmap v2765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:58 compute-0 nova_compute[239965]: 2026-01-26 16:41:58.556 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:41:58 compute-0 ceph-mon[75140]: pgmap v2766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:41:59.264 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:41:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:41:59.265 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:41:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:41:59.265 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:41:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:41:59 compute-0 nova_compute[239965]: 2026-01-26 16:41:59.927 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:42:00 compute-0 ceph-mon[75140]: pgmap v2767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:02 compute-0 ceph-mon[75140]: pgmap v2768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:03 compute-0 podman[383482]: 2026-01-26 16:42:03.379361007 +0000 UTC m=+0.062869578 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:42:03 compute-0 podman[383483]: 2026-01-26 16:42:03.412214545 +0000 UTC m=+0.095899790 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:42:03 compute-0 nova_compute[239965]: 2026-01-26 16:42:03.557 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:04 compute-0 ceph-mon[75140]: pgmap v2769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:04 compute-0 nova_compute[239965]: 2026-01-26 16:42:04.929 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:06 compute-0 nova_compute[239965]: 2026-01-26 16:42:06.000 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:06 compute-0 ceph-mon[75140]: pgmap v2770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:07 compute-0 nova_compute[239965]: 2026-01-26 16:42:07.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:08 compute-0 nova_compute[239965]: 2026-01-26 16:42:08.559 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:08 compute-0 ceph-mon[75140]: pgmap v2771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:09 compute-0 nova_compute[239965]: 2026-01-26 16:42:09.930 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:10 compute-0 ceph-mon[75140]: pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:11 compute-0 nova_compute[239965]: 2026-01-26 16:42:11.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:12 compute-0 ceph-mon[75140]: pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:13 compute-0 nova_compute[239965]: 2026-01-26 16:42:13.561 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:14 compute-0 nova_compute[239965]: 2026-01-26 16:42:14.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:14 compute-0 nova_compute[239965]: 2026-01-26 16:42:14.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:14 compute-0 nova_compute[239965]: 2026-01-26 16:42:14.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:14 compute-0 nova_compute[239965]: 2026-01-26 16:42:14.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:14 compute-0 nova_compute[239965]: 2026-01-26 16:42:14.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:42:14 compute-0 ceph-mon[75140]: pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:14 compute-0 nova_compute[239965]: 2026-01-26 16:42:14.932 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:15 compute-0 nova_compute[239965]: 2026-01-26 16:42:15.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:15 compute-0 nova_compute[239965]: 2026-01-26 16:42:15.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:42:15 compute-0 nova_compute[239965]: 2026-01-26 16:42:15.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:42:15 compute-0 nova_compute[239965]: 2026-01-26 16:42:15.530 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:42:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:16 compute-0 ceph-mon[75140]: pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:18 compute-0 nova_compute[239965]: 2026-01-26 16:42:18.611 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:18 compute-0 ceph-mon[75140]: pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:19 compute-0 nova_compute[239965]: 2026-01-26 16:42:19.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:20 compute-0 ceph-mon[75140]: pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:22 compute-0 nova_compute[239965]: 2026-01-26 16:42:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:22 compute-0 nova_compute[239965]: 2026-01-26 16:42:22.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:42:22 compute-0 nova_compute[239965]: 2026-01-26 16:42:22.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:42:22 compute-0 nova_compute[239965]: 2026-01-26 16:42:22.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:42:22 compute-0 nova_compute[239965]: 2026-01-26 16:42:22.545 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:42:22 compute-0 nova_compute[239965]: 2026-01-26 16:42:22.545 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:22 compute-0 ceph-mon[75140]: pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:42:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4270698876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:42:23 compute-0 nova_compute[239965]: 2026-01-26 16:42:23.179 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:23 compute-0 nova_compute[239965]: 2026-01-26 16:42:23.353 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:42:23 compute-0 nova_compute[239965]: 2026-01-26 16:42:23.355 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3615MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:42:23 compute-0 nova_compute[239965]: 2026-01-26 16:42:23.356 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:42:23 compute-0 nova_compute[239965]: 2026-01-26 16:42:23.356 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:42:23 compute-0 nova_compute[239965]: 2026-01-26 16:42:23.615 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:23 compute-0 nova_compute[239965]: 2026-01-26 16:42:23.849 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:42:23 compute-0 nova_compute[239965]: 2026-01-26 16:42:23.850 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:42:23 compute-0 nova_compute[239965]: 2026-01-26 16:42:23.870 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:42:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4270698876' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:42:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:42:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749266894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:42:24 compute-0 nova_compute[239965]: 2026-01-26 16:42:24.448 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:42:24 compute-0 nova_compute[239965]: 2026-01-26 16:42:24.455 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:42:24 compute-0 nova_compute[239965]: 2026-01-26 16:42:24.474 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:42:24 compute-0 nova_compute[239965]: 2026-01-26 16:42:24.476 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:42:24 compute-0 nova_compute[239965]: 2026-01-26 16:42:24.476 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:42:24 compute-0 nova_compute[239965]: 2026-01-26 16:42:24.935 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:25 compute-0 ceph-mon[75140]: pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1749266894' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:42:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:26 compute-0 ceph-mon[75140]: pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:28 compute-0 nova_compute[239965]: 2026-01-26 16:42:28.617 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:42:28
Jan 26 16:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', 'backups']
Jan 26 16:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:42:28 compute-0 ceph-mon[75140]: pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:29 compute-0 nova_compute[239965]: 2026-01-26 16:42:29.938 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:42:30 compute-0 ceph-mon[75140]: pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:42:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:42:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:31 compute-0 nova_compute[239965]: 2026-01-26 16:42:31.470 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:42:33 compute-0 ceph-mon[75140]: pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:33 compute-0 nova_compute[239965]: 2026-01-26 16:42:33.619 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:34 compute-0 ceph-mon[75140]: pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:34 compute-0 podman[383569]: 2026-01-26 16:42:34.356085906 +0000 UTC m=+0.046939017 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 16:42:34 compute-0 podman[383570]: 2026-01-26 16:42:34.466825611 +0000 UTC m=+0.149868199 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 26 16:42:34 compute-0 nova_compute[239965]: 2026-01-26 16:42:34.940 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:36 compute-0 ceph-mon[75140]: pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:37 compute-0 sudo[383615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:42:37 compute-0 sudo[383615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:42:37 compute-0 sudo[383615]: pam_unix(sudo:session): session closed for user root
Jan 26 16:42:37 compute-0 sudo[383640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:42:37 compute-0 sudo[383640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:42:37 compute-0 sudo[383640]: pam_unix(sudo:session): session closed for user root
Jan 26 16:42:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:42:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:42:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:42:37 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:42:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:42:38 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:42:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:42:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:42:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:42:38 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:42:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:42:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:42:38 compute-0 sshd-session[383684]: Invalid user sol from 45.148.10.240 port 44320
Jan 26 16:42:38 compute-0 sudo[383698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:42:38 compute-0 sudo[383698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:42:38 compute-0 sudo[383698]: pam_unix(sudo:session): session closed for user root
Jan 26 16:42:38 compute-0 sshd-session[383684]: Connection closed by invalid user sol 45.148.10.240 port 44320 [preauth]
Jan 26 16:42:38 compute-0 sudo[383723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:42:38 compute-0 sudo[383723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:42:38 compute-0 nova_compute[239965]: 2026-01-26 16:42:38.622 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:38 compute-0 podman[383760]: 2026-01-26 16:42:38.536499123 +0000 UTC m=+0.024953915 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:42:38 compute-0 podman[383760]: 2026-01-26 16:42:38.863276365 +0000 UTC m=+0.351731147 container create a50042d9b25be8d1149afeaff3ffb4a44b541b3c9d44f44b47752d09c76ae0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:42:38 compute-0 ceph-mon[75140]: pgmap v2786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:38 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:42:38 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:42:38 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:42:38 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:42:38 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:42:38 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:42:38 compute-0 systemd[1]: Started libpod-conmon-a50042d9b25be8d1149afeaff3ffb4a44b541b3c9d44f44b47752d09c76ae0d5.scope.
Jan 26 16:42:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:42:39 compute-0 podman[383760]: 2026-01-26 16:42:39.037510462 +0000 UTC m=+0.525965254 container init a50042d9b25be8d1149afeaff3ffb4a44b541b3c9d44f44b47752d09c76ae0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:42:39 compute-0 podman[383760]: 2026-01-26 16:42:39.048549914 +0000 UTC m=+0.537004706 container start a50042d9b25be8d1149afeaff3ffb4a44b541b3c9d44f44b47752d09c76ae0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dubinsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:42:39 compute-0 epic_dubinsky[383776]: 167 167
Jan 26 16:42:39 compute-0 systemd[1]: libpod-a50042d9b25be8d1149afeaff3ffb4a44b541b3c9d44f44b47752d09c76ae0d5.scope: Deactivated successfully.
Jan 26 16:42:39 compute-0 podman[383760]: 2026-01-26 16:42:39.118353372 +0000 UTC m=+0.606808164 container attach a50042d9b25be8d1149afeaff3ffb4a44b541b3c9d44f44b47752d09c76ae0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 16:42:39 compute-0 podman[383760]: 2026-01-26 16:42:39.120595356 +0000 UTC m=+0.609050148 container died a50042d9b25be8d1149afeaff3ffb4a44b541b3c9d44f44b47752d09c76ae0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dubinsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 16:42:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4e19b38d9d33ca0485e1abd2145a0f99dfa325d682e4b9ddb9a1f48142c5f98-merged.mount: Deactivated successfully.
Jan 26 16:42:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:39 compute-0 podman[383760]: 2026-01-26 16:42:39.354245456 +0000 UTC m=+0.842700268 container remove a50042d9b25be8d1149afeaff3ffb4a44b541b3c9d44f44b47752d09c76ae0d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_dubinsky, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:42:39 compute-0 systemd[1]: libpod-conmon-a50042d9b25be8d1149afeaff3ffb4a44b541b3c9d44f44b47752d09c76ae0d5.scope: Deactivated successfully.
Jan 26 16:42:39 compute-0 podman[383802]: 2026-01-26 16:42:39.521616915 +0000 UTC m=+0.025036407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:42:39 compute-0 podman[383802]: 2026-01-26 16:42:39.695096264 +0000 UTC m=+0.198515726 container create 8b64cf935e3d9a5e3b5d882c021e4e93b20b39e63c3b8c339d493cc3f25bb5f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_moore, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:42:39 compute-0 systemd[1]: Started libpod-conmon-8b64cf935e3d9a5e3b5d882c021e4e93b20b39e63c3b8c339d493cc3f25bb5f2.scope.
Jan 26 16:42:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219dce1a2217f7b998da5153e27816955c69c9d8f0f9047eb4ecb9b487738074/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219dce1a2217f7b998da5153e27816955c69c9d8f0f9047eb4ecb9b487738074/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219dce1a2217f7b998da5153e27816955c69c9d8f0f9047eb4ecb9b487738074/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219dce1a2217f7b998da5153e27816955c69c9d8f0f9047eb4ecb9b487738074/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/219dce1a2217f7b998da5153e27816955c69c9d8f0f9047eb4ecb9b487738074/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:39 compute-0 podman[383802]: 2026-01-26 16:42:39.881173523 +0000 UTC m=+0.384593035 container init 8b64cf935e3d9a5e3b5d882c021e4e93b20b39e63c3b8c339d493cc3f25bb5f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_moore, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:42:39 compute-0 podman[383802]: 2026-01-26 16:42:39.890097282 +0000 UTC m=+0.393516784 container start 8b64cf935e3d9a5e3b5d882c021e4e93b20b39e63c3b8c339d493cc3f25bb5f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_moore, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:42:39 compute-0 nova_compute[239965]: 2026-01-26 16:42:39.940 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:39 compute-0 podman[383802]: 2026-01-26 16:42:39.957961992 +0000 UTC m=+0.461381494 container attach 8b64cf935e3d9a5e3b5d882c021e4e93b20b39e63c3b8c339d493cc3f25bb5f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:42:40 compute-0 objective_moore[383819]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:42:40 compute-0 objective_moore[383819]: --> All data devices are unavailable
Jan 26 16:42:40 compute-0 systemd[1]: libpod-8b64cf935e3d9a5e3b5d882c021e4e93b20b39e63c3b8c339d493cc3f25bb5f2.scope: Deactivated successfully.
Jan 26 16:42:40 compute-0 podman[383802]: 2026-01-26 16:42:40.362668081 +0000 UTC m=+0.866087543 container died 8b64cf935e3d9a5e3b5d882c021e4e93b20b39e63c3b8c339d493cc3f25bb5f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:42:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-219dce1a2217f7b998da5153e27816955c69c9d8f0f9047eb4ecb9b487738074-merged.mount: Deactivated successfully.
Jan 26 16:42:40 compute-0 podman[383802]: 2026-01-26 16:42:40.615381059 +0000 UTC m=+1.118800521 container remove 8b64cf935e3d9a5e3b5d882c021e4e93b20b39e63c3b8c339d493cc3f25bb5f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_moore, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 16:42:40 compute-0 systemd[1]: libpod-conmon-8b64cf935e3d9a5e3b5d882c021e4e93b20b39e63c3b8c339d493cc3f25bb5f2.scope: Deactivated successfully.
Jan 26 16:42:40 compute-0 sudo[383723]: pam_unix(sudo:session): session closed for user root
Jan 26 16:42:40 compute-0 sudo[383853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:42:40 compute-0 sudo[383853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:42:40 compute-0 sudo[383853]: pam_unix(sudo:session): session closed for user root
Jan 26 16:42:40 compute-0 sudo[383878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:42:40 compute-0 sudo[383878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:42:41 compute-0 ceph-mon[75140]: pgmap v2787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:41 compute-0 podman[383916]: 2026-01-26 16:42:41.076852445 +0000 UTC m=+0.033431693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:42:41 compute-0 podman[383916]: 2026-01-26 16:42:41.141834904 +0000 UTC m=+0.098414152 container create 74e7afeff9e86f62f9fc3b38a9cf33fea691097fbb4df2feedd7014b8778111f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:42:41 compute-0 systemd[1]: Started libpod-conmon-74e7afeff9e86f62f9fc3b38a9cf33fea691097fbb4df2feedd7014b8778111f.scope.
Jan 26 16:42:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:42:41 compute-0 podman[383916]: 2026-01-26 16:42:41.229134482 +0000 UTC m=+0.185713730 container init 74e7afeff9e86f62f9fc3b38a9cf33fea691097fbb4df2feedd7014b8778111f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:42:41 compute-0 podman[383916]: 2026-01-26 16:42:41.237650111 +0000 UTC m=+0.194229339 container start 74e7afeff9e86f62f9fc3b38a9cf33fea691097fbb4df2feedd7014b8778111f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:42:41 compute-0 podman[383916]: 2026-01-26 16:42:41.241906537 +0000 UTC m=+0.198485765 container attach 74e7afeff9e86f62f9fc3b38a9cf33fea691097fbb4df2feedd7014b8778111f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:42:41 compute-0 magical_wilson[383933]: 167 167
Jan 26 16:42:41 compute-0 systemd[1]: libpod-74e7afeff9e86f62f9fc3b38a9cf33fea691097fbb4df2feedd7014b8778111f.scope: Deactivated successfully.
Jan 26 16:42:41 compute-0 podman[383916]: 2026-01-26 16:42:41.243366723 +0000 UTC m=+0.199945961 container died 74e7afeff9e86f62f9fc3b38a9cf33fea691097fbb4df2feedd7014b8778111f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 16:42:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc01853e5f80ac9b0732ccc81955791382f8afee337e0152d3bfae179418fabb-merged.mount: Deactivated successfully.
Jan 26 16:42:41 compute-0 podman[383916]: 2026-01-26 16:42:41.414966685 +0000 UTC m=+0.371545913 container remove 74e7afeff9e86f62f9fc3b38a9cf33fea691097fbb4df2feedd7014b8778111f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:42:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:41 compute-0 systemd[1]: libpod-conmon-74e7afeff9e86f62f9fc3b38a9cf33fea691097fbb4df2feedd7014b8778111f.scope: Deactivated successfully.
Jan 26 16:42:41 compute-0 podman[383957]: 2026-01-26 16:42:41.57690376 +0000 UTC m=+0.024423982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:42:41 compute-0 podman[383957]: 2026-01-26 16:42:41.66308182 +0000 UTC m=+0.110601952 container create 51d900d6d644f31408e4e0a5db9244f41ece94ec7c4b0c2c171e305902e4c48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 16:42:41 compute-0 systemd[1]: Started libpod-conmon-51d900d6d644f31408e4e0a5db9244f41ece94ec7c4b0c2c171e305902e4c48e.scope.
Jan 26 16:42:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc59b4347849da82609e7d3a396612a02ef71256c86c66e21d8907c9b91d7c16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc59b4347849da82609e7d3a396612a02ef71256c86c66e21d8907c9b91d7c16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc59b4347849da82609e7d3a396612a02ef71256c86c66e21d8907c9b91d7c16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc59b4347849da82609e7d3a396612a02ef71256c86c66e21d8907c9b91d7c16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:42 compute-0 podman[383957]: 2026-01-26 16:42:42.028796781 +0000 UTC m=+0.476316983 container init 51d900d6d644f31408e4e0a5db9244f41ece94ec7c4b0c2c171e305902e4c48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:42:42 compute-0 podman[383957]: 2026-01-26 16:42:42.041156144 +0000 UTC m=+0.488676266 container start 51d900d6d644f31408e4e0a5db9244f41ece94ec7c4b0c2c171e305902e4c48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:42:42 compute-0 podman[383957]: 2026-01-26 16:42:42.07958403 +0000 UTC m=+0.527104232 container attach 51d900d6d644f31408e4e0a5db9244f41ece94ec7c4b0c2c171e305902e4c48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:42:42 compute-0 ceph-mon[75140]: pgmap v2788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]: {
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:     "0": [
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:         {
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "devices": [
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "/dev/loop3"
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             ],
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_name": "ceph_lv0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_size": "21470642176",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "name": "ceph_lv0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "tags": {
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.cluster_name": "ceph",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.crush_device_class": "",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.encrypted": "0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.objectstore": "bluestore",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.osd_id": "0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.type": "block",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.vdo": "0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.with_tpm": "0"
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             },
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "type": "block",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "vg_name": "ceph_vg0"
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:         }
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:     ],
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:     "1": [
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:         {
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "devices": [
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "/dev/loop4"
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             ],
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_name": "ceph_lv1",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_size": "21470642176",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "name": "ceph_lv1",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "tags": {
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.cluster_name": "ceph",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.crush_device_class": "",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.encrypted": "0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.objectstore": "bluestore",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.osd_id": "1",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.type": "block",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.vdo": "0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.with_tpm": "0"
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             },
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "type": "block",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "vg_name": "ceph_vg1"
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:         }
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:     ],
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:     "2": [
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:         {
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "devices": [
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "/dev/loop5"
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             ],
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_name": "ceph_lv2",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_size": "21470642176",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "name": "ceph_lv2",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "tags": {
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.cluster_name": "ceph",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.crush_device_class": "",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.encrypted": "0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.objectstore": "bluestore",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.osd_id": "2",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.type": "block",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.vdo": "0",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:                 "ceph.with_tpm": "0"
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             },
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "type": "block",
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:             "vg_name": "ceph_vg2"
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:         }
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]:     ]
Jan 26 16:42:42 compute-0 inspiring_elbakyan[383973]: }
Jan 26 16:42:42 compute-0 systemd[1]: libpod-51d900d6d644f31408e4e0a5db9244f41ece94ec7c4b0c2c171e305902e4c48e.scope: Deactivated successfully.
Jan 26 16:42:42 compute-0 podman[383957]: 2026-01-26 16:42:42.379112661 +0000 UTC m=+0.826632793 container died 51d900d6d644f31408e4e0a5db9244f41ece94ec7c4b0c2c171e305902e4c48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:42:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc59b4347849da82609e7d3a396612a02ef71256c86c66e21d8907c9b91d7c16-merged.mount: Deactivated successfully.
Jan 26 16:42:42 compute-0 podman[383957]: 2026-01-26 16:42:42.708439205 +0000 UTC m=+1.155959317 container remove 51d900d6d644f31408e4e0a5db9244f41ece94ec7c4b0c2c171e305902e4c48e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_elbakyan, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:42:42 compute-0 systemd[1]: libpod-conmon-51d900d6d644f31408e4e0a5db9244f41ece94ec7c4b0c2c171e305902e4c48e.scope: Deactivated successfully.
Jan 26 16:42:42 compute-0 sudo[383878]: pam_unix(sudo:session): session closed for user root
Jan 26 16:42:42 compute-0 sudo[383995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:42:42 compute-0 sudo[383995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:42:42 compute-0 sudo[383995]: pam_unix(sudo:session): session closed for user root
Jan 26 16:42:42 compute-0 sudo[384020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:42:42 compute-0 sudo[384020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:42:43 compute-0 podman[384057]: 2026-01-26 16:42:43.206131251 +0000 UTC m=+0.101365475 container create 1cd4c4ec4f74d2884c2c5950329efd09adc86a59f82d1e074d198c396ffe0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:42:43 compute-0 podman[384057]: 2026-01-26 16:42:43.127727032 +0000 UTC m=+0.022961286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:42:43 compute-0 systemd[1]: Started libpod-conmon-1cd4c4ec4f74d2884c2c5950329efd09adc86a59f82d1e074d198c396ffe0f25.scope.
Jan 26 16:42:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:42:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:43 compute-0 podman[384057]: 2026-01-26 16:42:43.321871199 +0000 UTC m=+0.217105453 container init 1cd4c4ec4f74d2884c2c5950329efd09adc86a59f82d1e074d198c396ffe0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_cohen, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:42:43 compute-0 podman[384057]: 2026-01-26 16:42:43.329983919 +0000 UTC m=+0.225218133 container start 1cd4c4ec4f74d2884c2c5950329efd09adc86a59f82d1e074d198c396ffe0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_cohen, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:42:43 compute-0 podman[384057]: 2026-01-26 16:42:43.334134091 +0000 UTC m=+0.229368325 container attach 1cd4c4ec4f74d2884c2c5950329efd09adc86a59f82d1e074d198c396ffe0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:42:43 compute-0 thirsty_cohen[384073]: 167 167
Jan 26 16:42:43 compute-0 podman[384057]: 2026-01-26 16:42:43.336216372 +0000 UTC m=+0.231450586 container died 1cd4c4ec4f74d2884c2c5950329efd09adc86a59f82d1e074d198c396ffe0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_cohen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 16:42:43 compute-0 systemd[1]: libpod-1cd4c4ec4f74d2884c2c5950329efd09adc86a59f82d1e074d198c396ffe0f25.scope: Deactivated successfully.
Jan 26 16:42:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-f571805924c0fca84392eb0f9b7c5c773bbfb80eb1c1102f39af7ed5e45da957-merged.mount: Deactivated successfully.
Jan 26 16:42:43 compute-0 podman[384057]: 2026-01-26 16:42:43.490062878 +0000 UTC m=+0.385297082 container remove 1cd4c4ec4f74d2884c2c5950329efd09adc86a59f82d1e074d198c396ffe0f25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_cohen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 16:42:43 compute-0 systemd[1]: libpod-conmon-1cd4c4ec4f74d2884c2c5950329efd09adc86a59f82d1e074d198c396ffe0f25.scope: Deactivated successfully.
Jan 26 16:42:43 compute-0 nova_compute[239965]: 2026-01-26 16:42:43.728 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:43 compute-0 podman[384097]: 2026-01-26 16:42:43.734825111 +0000 UTC m=+0.109477175 container create 18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 16:42:43 compute-0 podman[384097]: 2026-01-26 16:42:43.6518687 +0000 UTC m=+0.026520794 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:42:43 compute-0 systemd[1]: Started libpod-conmon-18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1.scope.
Jan 26 16:42:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd74d01ef4582cf1ff219652f67bb8499ee35b497b5bbb938d85acf00438a669/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd74d01ef4582cf1ff219652f67bb8499ee35b497b5bbb938d85acf00438a669/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd74d01ef4582cf1ff219652f67bb8499ee35b497b5bbb938d85acf00438a669/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd74d01ef4582cf1ff219652f67bb8499ee35b497b5bbb938d85acf00438a669/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:42:43 compute-0 podman[384097]: 2026-01-26 16:42:43.931154142 +0000 UTC m=+0.305806246 container init 18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:42:43 compute-0 podman[384097]: 2026-01-26 16:42:43.942617064 +0000 UTC m=+0.317269138 container start 18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:42:43 compute-0 podman[384097]: 2026-01-26 16:42:43.946491869 +0000 UTC m=+0.321143973 container attach 18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:42:44 compute-0 lvm[384193]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:42:44 compute-0 lvm[384190]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:42:44 compute-0 lvm[384190]: VG ceph_vg0 finished
Jan 26 16:42:44 compute-0 lvm[384193]: VG ceph_vg1 finished
Jan 26 16:42:44 compute-0 lvm[384195]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:42:44 compute-0 lvm[384195]: VG ceph_vg2 finished
Jan 26 16:42:44 compute-0 lvm[384196]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:42:44 compute-0 lvm[384196]: VG ceph_vg1 finished
Jan 26 16:42:44 compute-0 ceph-mon[75140]: pgmap v2789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:44 compute-0 condescending_solomon[384113]: {}
Jan 26 16:42:44 compute-0 systemd[1]: libpod-18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1.scope: Deactivated successfully.
Jan 26 16:42:44 compute-0 systemd[1]: libpod-18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1.scope: Consumed 1.360s CPU time.
Jan 26 16:42:44 compute-0 podman[384097]: 2026-01-26 16:42:44.795592794 +0000 UTC m=+1.170244878 container died 18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:42:44 compute-0 nova_compute[239965]: 2026-01-26 16:42:44.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd74d01ef4582cf1ff219652f67bb8499ee35b497b5bbb938d85acf00438a669-merged.mount: Deactivated successfully.
Jan 26 16:42:45 compute-0 podman[384097]: 2026-01-26 16:42:45.384755691 +0000 UTC m=+1.759407785 container remove 18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:42:45 compute-0 systemd[1]: libpod-conmon-18b52e36f1e8e72aff5c05162cc400dcb718290e192d74bdf9a4eeb0ddc6d2e1.scope: Deactivated successfully.
Jan 26 16:42:45 compute-0 sudo[384020]: pam_unix(sudo:session): session closed for user root
Jan 26 16:42:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:42:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:42:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:42:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:42:45 compute-0 sudo[384210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:42:45 compute-0 sudo[384210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:42:45 compute-0 sudo[384210]: pam_unix(sudo:session): session closed for user root
Jan 26 16:42:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:46 compute-0 ceph-mon[75140]: pgmap v2790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:46 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:42:46 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:42:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:48 compute-0 ceph-mon[75140]: pgmap v2791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:48 compute-0 nova_compute[239965]: 2026-01-26 16:42:48.730 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:42:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859347780' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:42:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:42:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1859347780' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:42:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:42:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1859347780' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:42:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1859347780' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:42:49 compute-0 nova_compute[239965]: 2026-01-26 16:42:49.944 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:50 compute-0 ceph-mon[75140]: pgmap v2792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:42:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 26 16:42:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:52 compute-0 ceph-mon[75140]: pgmap v2793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 767 B/s rd, 0 B/s wr, 1 op/s
Jan 26 16:42:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 16:42:53 compute-0 nova_compute[239965]: 2026-01-26 16:42:53.734 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:54 compute-0 ceph-mon[75140]: pgmap v2794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 16:42:54 compute-0 nova_compute[239965]: 2026-01-26 16:42:54.945 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 16:42:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:42:56 compute-0 ceph-mon[75140]: pgmap v2795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 16:42:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Jan 26 16:42:58 compute-0 nova_compute[239965]: 2026-01-26 16:42:58.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:42:58 compute-0 ceph-mon[75140]: pgmap v2796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Jan 26 16:42:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:42:59.266 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:42:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:42:59.266 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:42:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:42:59.266 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:42:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 16:42:59 compute-0 nova_compute[239965]: 2026-01-26 16:42:59.945 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:43:00 compute-0 ceph-mon[75140]: pgmap v2797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 16:43:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 16:43:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:03 compute-0 ceph-mon[75140]: pgmap v2798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 16:43:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Jan 26 16:43:03 compute-0 nova_compute[239965]: 2026-01-26 16:43:03.737 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:04 compute-0 ceph-mon[75140]: pgmap v2799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 0 B/s wr, 58 op/s
Jan 26 16:43:04 compute-0 nova_compute[239965]: 2026-01-26 16:43:04.996 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 26 16:43:05 compute-0 podman[384236]: 2026-01-26 16:43:05.382532456 +0000 UTC m=+0.062121779 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 26 16:43:05 compute-0 podman[384237]: 2026-01-26 16:43:05.412908864 +0000 UTC m=+0.091284897 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 26 16:43:05 compute-0 nova_compute[239965]: 2026-01-26 16:43:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:06 compute-0 ceph-mon[75140]: pgmap v2800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 26 16:43:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 26 16:43:08 compute-0 ceph-mon[75140]: pgmap v2801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 53 op/s
Jan 26 16:43:08 compute-0 nova_compute[239965]: 2026-01-26 16:43:08.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:08 compute-0 nova_compute[239965]: 2026-01-26 16:43:08.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 26 16:43:10 compute-0 nova_compute[239965]: 2026-01-26 16:43:10.000 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:10 compute-0 ceph-mon[75140]: pgmap v2802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 26 16:43:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:11 compute-0 nova_compute[239965]: 2026-01-26 16:43:11.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:12 compute-0 ceph-mon[75140]: pgmap v2803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:13 compute-0 nova_compute[239965]: 2026-01-26 16:43:13.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:14 compute-0 ceph-mon[75140]: pgmap v2804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:15 compute-0 nova_compute[239965]: 2026-01-26 16:43:15.002 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:15 compute-0 nova_compute[239965]: 2026-01-26 16:43:15.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:16 compute-0 nova_compute[239965]: 2026-01-26 16:43:16.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:16 compute-0 nova_compute[239965]: 2026-01-26 16:43:16.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:16 compute-0 nova_compute[239965]: 2026-01-26 16:43:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:16 compute-0 nova_compute[239965]: 2026-01-26 16:43:16.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:43:17 compute-0 ceph-mon[75140]: pgmap v2805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:17 compute-0 nova_compute[239965]: 2026-01-26 16:43:17.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:17 compute-0 nova_compute[239965]: 2026-01-26 16:43:17.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:43:17 compute-0 nova_compute[239965]: 2026-01-26 16:43:17.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:43:17 compute-0 nova_compute[239965]: 2026-01-26 16:43:17.528 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:43:18 compute-0 ceph-mon[75140]: pgmap v2806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:18 compute-0 nova_compute[239965]: 2026-01-26 16:43:18.744 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:20 compute-0 nova_compute[239965]: 2026-01-26 16:43:20.005 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:20 compute-0 ceph-mon[75140]: pgmap v2807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:22 compute-0 nova_compute[239965]: 2026-01-26 16:43:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:43:22 compute-0 nova_compute[239965]: 2026-01-26 16:43:22.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:22 compute-0 nova_compute[239965]: 2026-01-26 16:43:22.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:22 compute-0 nova_compute[239965]: 2026-01-26 16:43:22.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:22 compute-0 nova_compute[239965]: 2026-01-26 16:43:22.543 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:43:22 compute-0 nova_compute[239965]: 2026-01-26 16:43:22.543 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:22 compute-0 ceph-mon[75140]: pgmap v2808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:43:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179973757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.151 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.297 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.298 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3593MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.299 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.299 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4179973757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.746 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.774 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.774 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.860 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.875 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.876 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.891 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.914 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:43:23 compute-0 nova_compute[239965]: 2026-01-26 16:43:23.931 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:43:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:43:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723068138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:43:24 compute-0 nova_compute[239965]: 2026-01-26 16:43:24.521 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:43:24 compute-0 nova_compute[239965]: 2026-01-26 16:43:24.528 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:43:24 compute-0 nova_compute[239965]: 2026-01-26 16:43:24.549 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:43:24 compute-0 nova_compute[239965]: 2026-01-26 16:43:24.552 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:43:24 compute-0 nova_compute[239965]: 2026-01-26 16:43:24.552 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:24 compute-0 ceph-mon[75140]: pgmap v2809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2723068138' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:43:25 compute-0 nova_compute[239965]: 2026-01-26 16:43:25.006 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:27 compute-0 ceph-mon[75140]: pgmap v2810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:28 compute-0 ceph-mon[75140]: pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:43:28
Jan 26 16:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'images', '.rgw.root', 'default.rgw.log', 'vms', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'backups', 'cephfs.cephfs.data']
Jan 26 16:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:43:28 compute-0 nova_compute[239965]: 2026-01-26 16:43:28.748 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:30 compute-0 nova_compute[239965]: 2026-01-26 16:43:30.007 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:43:30 compute-0 ceph-mon[75140]: pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:43:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:43:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:32 compute-0 ceph-mon[75140]: pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:33 compute-0 nova_compute[239965]: 2026-01-26 16:43:33.751 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:34 compute-0 ceph-mon[75140]: pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:35 compute-0 nova_compute[239965]: 2026-01-26 16:43:35.010 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:36 compute-0 ceph-mon[75140]: pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:36 compute-0 podman[384325]: 2026-01-26 16:43:36.364902293 +0000 UTC m=+0.050395421 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:43:36 compute-0 podman[384326]: 2026-01-26 16:43:36.396207793 +0000 UTC m=+0.079256971 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 16:43:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:38 compute-0 ceph-mon[75140]: pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:38 compute-0 nova_compute[239965]: 2026-01-26 16:43:38.752 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:40 compute-0 nova_compute[239965]: 2026-01-26 16:43:40.011 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:40 compute-0 ceph-mon[75140]: pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:42 compute-0 ceph-mon[75140]: pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:43 compute-0 nova_compute[239965]: 2026-01-26 16:43:43.755 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:44 compute-0 ceph-mon[75140]: pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:45 compute-0 nova_compute[239965]: 2026-01-26 16:43:45.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:45 compute-0 sudo[384369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:43:45 compute-0 sudo[384369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:43:45 compute-0 sudo[384369]: pam_unix(sudo:session): session closed for user root
Jan 26 16:43:45 compute-0 sudo[384394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:43:45 compute-0 sudo[384394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:43:46 compute-0 ceph-mon[75140]: pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:46 compute-0 sudo[384394]: pam_unix(sudo:session): session closed for user root
Jan 26 16:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:43:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:43:46 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:43:46 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:43:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:43:46 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:43:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:46 compute-0 sudo[384450]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:43:46 compute-0 sudo[384450]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:43:46 compute-0 sudo[384450]: pam_unix(sudo:session): session closed for user root
Jan 26 16:43:46 compute-0 sudo[384475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:43:46 compute-0 sudo[384475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:43:46 compute-0 podman[384512]: 2026-01-26 16:43:46.892407477 +0000 UTC m=+0.095381258 container create ef5f7b81867d40a283312cf072461045e7031b75c90798e6374d176e2bcac00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:43:46 compute-0 podman[384512]: 2026-01-26 16:43:46.820676802 +0000 UTC m=+0.023650613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:43:46 compute-0 systemd[1]: Started libpod-conmon-ef5f7b81867d40a283312cf072461045e7031b75c90798e6374d176e2bcac00a.scope.
Jan 26 16:43:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:43:46 compute-0 podman[384512]: 2026-01-26 16:43:46.980614858 +0000 UTC m=+0.183588639 container init ef5f7b81867d40a283312cf072461045e7031b75c90798e6374d176e2bcac00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:43:46 compute-0 podman[384512]: 2026-01-26 16:43:46.989107487 +0000 UTC m=+0.192081268 container start ef5f7b81867d40a283312cf072461045e7031b75c90798e6374d176e2bcac00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haibt, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:43:46 compute-0 podman[384512]: 2026-01-26 16:43:46.992918921 +0000 UTC m=+0.195892722 container attach ef5f7b81867d40a283312cf072461045e7031b75c90798e6374d176e2bcac00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:43:46 compute-0 loving_haibt[384528]: 167 167
Jan 26 16:43:46 compute-0 systemd[1]: libpod-ef5f7b81867d40a283312cf072461045e7031b75c90798e6374d176e2bcac00a.scope: Deactivated successfully.
Jan 26 16:43:46 compute-0 podman[384512]: 2026-01-26 16:43:46.99658325 +0000 UTC m=+0.199557041 container died ef5f7b81867d40a283312cf072461045e7031b75c90798e6374d176e2bcac00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 16:43:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-af40d62b36c3ce123637ab4b6a1c222aeeea85ed1feb26fd0cb87a1f059e4ad3-merged.mount: Deactivated successfully.
Jan 26 16:43:47 compute-0 podman[384512]: 2026-01-26 16:43:47.206480155 +0000 UTC m=+0.409453946 container remove ef5f7b81867d40a283312cf072461045e7031b75c90798e6374d176e2bcac00a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_haibt, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:43:47 compute-0 systemd[1]: libpod-conmon-ef5f7b81867d40a283312cf072461045e7031b75c90798e6374d176e2bcac00a.scope: Deactivated successfully.
Jan 26 16:43:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:43:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:43:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:43:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:43:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:43:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:43:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:47 compute-0 podman[384552]: 2026-01-26 16:43:47.36839413 +0000 UTC m=+0.040486787 container create 873c73282ed32ca13d0fd6af526825f94670a741a6c8ea467517efa2f0ce6fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:43:47 compute-0 systemd[1]: Started libpod-conmon-873c73282ed32ca13d0fd6af526825f94670a741a6c8ea467517efa2f0ce6fa2.scope.
Jan 26 16:43:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726fe1c9da3b1c4acb94d27f574b9e95abd91c777a69a884139cafc5b6816701/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726fe1c9da3b1c4acb94d27f574b9e95abd91c777a69a884139cafc5b6816701/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726fe1c9da3b1c4acb94d27f574b9e95abd91c777a69a884139cafc5b6816701/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726fe1c9da3b1c4acb94d27f574b9e95abd91c777a69a884139cafc5b6816701/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726fe1c9da3b1c4acb94d27f574b9e95abd91c777a69a884139cafc5b6816701/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:47 compute-0 podman[384552]: 2026-01-26 16:43:47.351828002 +0000 UTC m=+0.023920689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:43:47 compute-0 podman[384552]: 2026-01-26 16:43:47.471167789 +0000 UTC m=+0.143260466 container init 873c73282ed32ca13d0fd6af526825f94670a741a6c8ea467517efa2f0ce6fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:43:47 compute-0 podman[384552]: 2026-01-26 16:43:47.476850379 +0000 UTC m=+0.148943036 container start 873c73282ed32ca13d0fd6af526825f94670a741a6c8ea467517efa2f0ce6fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:43:47 compute-0 podman[384552]: 2026-01-26 16:43:47.480097589 +0000 UTC m=+0.152190276 container attach 873c73282ed32ca13d0fd6af526825f94670a741a6c8ea467517efa2f0ce6fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 16:43:47 compute-0 focused_morse[384569]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:43:47 compute-0 focused_morse[384569]: --> All data devices are unavailable
Jan 26 16:43:47 compute-0 systemd[1]: libpod-873c73282ed32ca13d0fd6af526825f94670a741a6c8ea467517efa2f0ce6fa2.scope: Deactivated successfully.
Jan 26 16:43:47 compute-0 podman[384552]: 2026-01-26 16:43:47.948053704 +0000 UTC m=+0.620146381 container died 873c73282ed32ca13d0fd6af526825f94670a741a6c8ea467517efa2f0ce6fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:43:48 compute-0 ceph-mon[75140]: pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-726fe1c9da3b1c4acb94d27f574b9e95abd91c777a69a884139cafc5b6816701-merged.mount: Deactivated successfully.
Jan 26 16:43:48 compute-0 podman[384552]: 2026-01-26 16:43:48.423694989 +0000 UTC m=+1.095787646 container remove 873c73282ed32ca13d0fd6af526825f94670a741a6c8ea467517efa2f0ce6fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_morse, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:43:48 compute-0 systemd[1]: libpod-conmon-873c73282ed32ca13d0fd6af526825f94670a741a6c8ea467517efa2f0ce6fa2.scope: Deactivated successfully.
Jan 26 16:43:48 compute-0 sudo[384475]: pam_unix(sudo:session): session closed for user root
Jan 26 16:43:48 compute-0 sudo[384601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:43:48 compute-0 sudo[384601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:43:48 compute-0 sudo[384601]: pam_unix(sudo:session): session closed for user root
Jan 26 16:43:48 compute-0 sudo[384626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:43:48 compute-0 sudo[384626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:43:48 compute-0 nova_compute[239965]: 2026-01-26 16:43:48.757 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:43:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540057800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:43:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:43:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1540057800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:43:48 compute-0 podman[384664]: 2026-01-26 16:43:48.911253186 +0000 UTC m=+0.055699622 container create 952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mestorf, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:43:48 compute-0 systemd[1]: Started libpod-conmon-952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26.scope.
Jan 26 16:43:48 compute-0 podman[384664]: 2026-01-26 16:43:48.884423906 +0000 UTC m=+0.028870442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:43:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:43:49 compute-0 podman[384664]: 2026-01-26 16:43:49.003801033 +0000 UTC m=+0.148247489 container init 952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 16:43:49 compute-0 podman[384664]: 2026-01-26 16:43:49.017283506 +0000 UTC m=+0.161729942 container start 952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 16:43:49 compute-0 podman[384664]: 2026-01-26 16:43:49.021110629 +0000 UTC m=+0.165557085 container attach 952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mestorf, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:43:49 compute-0 affectionate_mestorf[384680]: 167 167
Jan 26 16:43:49 compute-0 systemd[1]: libpod-952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26.scope: Deactivated successfully.
Jan 26 16:43:49 compute-0 conmon[384680]: conmon 952acfc29a915a74cf66 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26.scope/container/memory.events
Jan 26 16:43:49 compute-0 podman[384664]: 2026-01-26 16:43:49.02438782 +0000 UTC m=+0.168834246 container died 952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mestorf, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1897c4b18aa9d2ff424da645631f3bb5df99e8bb2d8ab7b2a3131cb35377c50a-merged.mount: Deactivated successfully.
Jan 26 16:43:49 compute-0 podman[384664]: 2026-01-26 16:43:49.082997373 +0000 UTC m=+0.227443809 container remove 952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_mestorf, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:43:49 compute-0 systemd[1]: libpod-conmon-952acfc29a915a74cf66f2f039af4a25521381361e7a14f3bd88d01df3be7a26.scope: Deactivated successfully.
Jan 26 16:43:49 compute-0 podman[384704]: 2026-01-26 16:43:49.256486971 +0000 UTC m=+0.042946517 container create 03670e184c84e373c1da9a3d7bcbed57e43195b297b14f372fc6c6e0e150da58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:43:49 compute-0 systemd[1]: Started libpod-conmon-03670e184c84e373c1da9a3d7bcbed57e43195b297b14f372fc6c6e0e150da58.scope.
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:49 compute-0 podman[384704]: 2026-01-26 16:43:49.238732575 +0000 UTC m=+0.025192151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:43:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150d01335878a32ea1b338bcef2eec03ff192e9f568c8874a6cdfeee5e19fb2b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150d01335878a32ea1b338bcef2eec03ff192e9f568c8874a6cdfeee5e19fb2b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150d01335878a32ea1b338bcef2eec03ff192e9f568c8874a6cdfeee5e19fb2b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/150d01335878a32ea1b338bcef2eec03ff192e9f568c8874a6cdfeee5e19fb2b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:49 compute-0 podman[384704]: 2026-01-26 16:43:49.349580152 +0000 UTC m=+0.136039718 container init 03670e184c84e373c1da9a3d7bcbed57e43195b297b14f372fc6c6e0e150da58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:43:49 compute-0 podman[384704]: 2026-01-26 16:43:49.361128296 +0000 UTC m=+0.147587842 container start 03670e184c84e373c1da9a3d7bcbed57e43195b297b14f372fc6c6e0e150da58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:43:49 compute-0 podman[384704]: 2026-01-26 16:43:49.366066138 +0000 UTC m=+0.152525684 container attach 03670e184c84e373c1da9a3d7bcbed57e43195b297b14f372fc6c6e0e150da58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:43:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1540057800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:43:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1540057800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:43:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]: {
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:     "0": [
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:         {
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "devices": [
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "/dev/loop3"
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             ],
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_name": "ceph_lv0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_size": "21470642176",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "name": "ceph_lv0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "tags": {
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.cluster_name": "ceph",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.crush_device_class": "",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.encrypted": "0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.objectstore": "bluestore",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.osd_id": "0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.type": "block",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.vdo": "0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.with_tpm": "0"
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             },
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "type": "block",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "vg_name": "ceph_vg0"
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:         }
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:     ],
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:     "1": [
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:         {
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "devices": [
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "/dev/loop4"
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             ],
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_name": "ceph_lv1",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_size": "21470642176",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "name": "ceph_lv1",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "tags": {
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.cluster_name": "ceph",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.crush_device_class": "",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.encrypted": "0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.objectstore": "bluestore",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.osd_id": "1",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.type": "block",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.vdo": "0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.with_tpm": "0"
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             },
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "type": "block",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "vg_name": "ceph_vg1"
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:         }
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:     ],
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:     "2": [
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:         {
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "devices": [
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "/dev/loop5"
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             ],
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_name": "ceph_lv2",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_size": "21470642176",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "name": "ceph_lv2",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "tags": {
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.cluster_name": "ceph",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.crush_device_class": "",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.encrypted": "0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.objectstore": "bluestore",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.osd_id": "2",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.type": "block",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.vdo": "0",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:                 "ceph.with_tpm": "0"
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             },
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "type": "block",
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:             "vg_name": "ceph_vg2"
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:         }
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]:     ]
Jan 26 16:43:49 compute-0 ecstatic_ganguly[384722]: }
Jan 26 16:43:49 compute-0 systemd[1]: libpod-03670e184c84e373c1da9a3d7bcbed57e43195b297b14f372fc6c6e0e150da58.scope: Deactivated successfully.
Jan 26 16:43:49 compute-0 podman[384704]: 2026-01-26 16:43:49.714319887 +0000 UTC m=+0.500779443 container died 03670e184c84e373c1da9a3d7bcbed57e43195b297b14f372fc6c6e0e150da58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:43:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-150d01335878a32ea1b338bcef2eec03ff192e9f568c8874a6cdfeee5e19fb2b-merged.mount: Deactivated successfully.
Jan 26 16:43:49 compute-0 podman[384704]: 2026-01-26 16:43:49.76890061 +0000 UTC m=+0.555360176 container remove 03670e184c84e373c1da9a3d7bcbed57e43195b297b14f372fc6c6e0e150da58 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 16:43:49 compute-0 systemd[1]: libpod-conmon-03670e184c84e373c1da9a3d7bcbed57e43195b297b14f372fc6c6e0e150da58.scope: Deactivated successfully.
Jan 26 16:43:49 compute-0 sudo[384626]: pam_unix(sudo:session): session closed for user root
Jan 26 16:43:49 compute-0 sudo[384745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:43:49 compute-0 sudo[384745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:43:49 compute-0 sudo[384745]: pam_unix(sudo:session): session closed for user root
Jan 26 16:43:49 compute-0 sudo[384770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:43:49 compute-0 sudo[384770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:43:50 compute-0 nova_compute[239965]: 2026-01-26 16:43:50.015 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:50 compute-0 podman[384807]: 2026-01-26 16:43:50.223698912 +0000 UTC m=+0.044353532 container create c26f6cdcd0da1e8bd7d082da264dca6fbf7ea4a6582020e9c29c63183c8a6247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 26 16:43:50 compute-0 systemd[1]: Started libpod-conmon-c26f6cdcd0da1e8bd7d082da264dca6fbf7ea4a6582020e9c29c63183c8a6247.scope.
Jan 26 16:43:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:43:50 compute-0 podman[384807]: 2026-01-26 16:43:50.200823089 +0000 UTC m=+0.021477709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:43:50 compute-0 podman[384807]: 2026-01-26 16:43:50.302229184 +0000 UTC m=+0.122883784 container init c26f6cdcd0da1e8bd7d082da264dca6fbf7ea4a6582020e9c29c63183c8a6247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:43:50 compute-0 podman[384807]: 2026-01-26 16:43:50.314351603 +0000 UTC m=+0.135006183 container start c26f6cdcd0da1e8bd7d082da264dca6fbf7ea4a6582020e9c29c63183c8a6247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 16:43:50 compute-0 podman[384807]: 2026-01-26 16:43:50.318905385 +0000 UTC m=+0.139559995 container attach c26f6cdcd0da1e8bd7d082da264dca6fbf7ea4a6582020e9c29c63183c8a6247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:43:50 compute-0 wizardly_williams[384823]: 167 167
Jan 26 16:43:50 compute-0 systemd[1]: libpod-c26f6cdcd0da1e8bd7d082da264dca6fbf7ea4a6582020e9c29c63183c8a6247.scope: Deactivated successfully.
Jan 26 16:43:50 compute-0 podman[384807]: 2026-01-26 16:43:50.321109309 +0000 UTC m=+0.141763889 container died c26f6cdcd0da1e8bd7d082da264dca6fbf7ea4a6582020e9c29c63183c8a6247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:43:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3fbe63eb00c2bb5e73583feca33b55f18bd05b2fc8c0368267f5c98007ba330-merged.mount: Deactivated successfully.
Jan 26 16:43:50 compute-0 podman[384807]: 2026-01-26 16:43:50.35933577 +0000 UTC m=+0.179990350 container remove c26f6cdcd0da1e8bd7d082da264dca6fbf7ea4a6582020e9c29c63183c8a6247 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_williams, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:43:50 compute-0 systemd[1]: libpod-conmon-c26f6cdcd0da1e8bd7d082da264dca6fbf7ea4a6582020e9c29c63183c8a6247.scope: Deactivated successfully.
Jan 26 16:43:50 compute-0 ceph-mon[75140]: pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:50 compute-0 podman[384845]: 2026-01-26 16:43:50.525294994 +0000 UTC m=+0.039838592 container create a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:43:50 compute-0 systemd[1]: Started libpod-conmon-a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8.scope.
Jan 26 16:43:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eafc340789e7df139e2109d6b710835aac3c5eb14c8e73cd9eaad46f5d87136e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eafc340789e7df139e2109d6b710835aac3c5eb14c8e73cd9eaad46f5d87136e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eafc340789e7df139e2109d6b710835aac3c5eb14c8e73cd9eaad46f5d87136e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eafc340789e7df139e2109d6b710835aac3c5eb14c8e73cd9eaad46f5d87136e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:43:50 compute-0 podman[384845]: 2026-01-26 16:43:50.604703657 +0000 UTC m=+0.119247275 container init a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:43:50 compute-0 podman[384845]: 2026-01-26 16:43:50.509783412 +0000 UTC m=+0.024327030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:43:50 compute-0 podman[384845]: 2026-01-26 16:43:50.610796778 +0000 UTC m=+0.125340366 container start a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:43:50 compute-0 podman[384845]: 2026-01-26 16:43:50.613910504 +0000 UTC m=+0.128454112 container attach a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:43:51 compute-0 lvm[384940]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:43:51 compute-0 lvm[384939]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:43:51 compute-0 lvm[384939]: VG ceph_vg0 finished
Jan 26 16:43:51 compute-0 lvm[384940]: VG ceph_vg1 finished
Jan 26 16:43:51 compute-0 lvm[384942]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:43:51 compute-0 lvm[384942]: VG ceph_vg2 finished
Jan 26 16:43:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:51 compute-0 naughty_germain[384861]: {}
Jan 26 16:43:51 compute-0 systemd[1]: libpod-a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8.scope: Deactivated successfully.
Jan 26 16:43:51 compute-0 systemd[1]: libpod-a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8.scope: Consumed 1.323s CPU time.
Jan 26 16:43:51 compute-0 podman[384845]: 2026-01-26 16:43:51.409697196 +0000 UTC m=+0.924240804 container died a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:43:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-eafc340789e7df139e2109d6b710835aac3c5eb14c8e73cd9eaad46f5d87136e-merged.mount: Deactivated successfully.
Jan 26 16:43:51 compute-0 podman[384845]: 2026-01-26 16:43:51.454454627 +0000 UTC m=+0.968998225 container remove a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 16:43:51 compute-0 systemd[1]: libpod-conmon-a802301f03a6a87231fa9fd0960566c525155c6456a9fe5e909bddb4849807e8.scope: Deactivated successfully.
Jan 26 16:43:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:51 compute-0 sudo[384770]: pam_unix(sudo:session): session closed for user root
Jan 26 16:43:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:43:51 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:43:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:43:51 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:43:51 compute-0 sudo[384958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:43:51 compute-0 sudo[384958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:43:51 compute-0 sudo[384958]: pam_unix(sudo:session): session closed for user root
Jan 26 16:43:52 compute-0 ceph-mon[75140]: pgmap v2823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:43:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:43:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:53 compute-0 nova_compute[239965]: 2026-01-26 16:43:53.760 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:54 compute-0 ceph-mon[75140]: pgmap v2824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:55 compute-0 nova_compute[239965]: 2026-01-26 16:43:55.018 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:43:56 compute-0 ceph-mon[75140]: pgmap v2825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:58 compute-0 ceph-mon[75140]: pgmap v2826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:43:58 compute-0 nova_compute[239965]: 2026-01-26 16:43:58.762 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:43:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:43:59.267 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:43:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:43:59.267 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:43:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:43:59.267 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:43:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:00 compute-0 nova_compute[239965]: 2026-01-26 16:44:00.054 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:44:00 compute-0 ceph-mon[75140]: pgmap v2827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:02 compute-0 ceph-mon[75140]: pgmap v2828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:03 compute-0 nova_compute[239965]: 2026-01-26 16:44:03.765 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:04 compute-0 ceph-mon[75140]: pgmap v2829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:05 compute-0 nova_compute[239965]: 2026-01-26 16:44:05.058 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:06 compute-0 ceph-mon[75140]: pgmap v2830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:07 compute-0 podman[384984]: 2026-01-26 16:44:07.414000272 +0000 UTC m=+0.097085471 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:44:07 compute-0 podman[384985]: 2026-01-26 16:44:07.451558886 +0000 UTC m=+0.124371422 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:44:08 compute-0 nova_compute[239965]: 2026-01-26 16:44:08.553 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:08 compute-0 nova_compute[239965]: 2026-01-26 16:44:08.768 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:08 compute-0 ceph-mon[75140]: pgmap v2831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:09 compute-0 nova_compute[239965]: 2026-01-26 16:44:09.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:10 compute-0 nova_compute[239965]: 2026-01-26 16:44:10.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:10 compute-0 ceph-mon[75140]: pgmap v2832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.851197) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445850851272, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 2051, "num_deletes": 251, "total_data_size": 3534808, "memory_usage": 3586512, "flush_reason": "Manual Compaction"}
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445850878715, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 3449357, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 57024, "largest_seqno": 59074, "table_properties": {"data_size": 3439958, "index_size": 5956, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18680, "raw_average_key_size": 20, "raw_value_size": 3421366, "raw_average_value_size": 3678, "num_data_blocks": 264, "num_entries": 930, "num_filter_entries": 930, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769445622, "oldest_key_time": 1769445622, "file_creation_time": 1769445850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 27582 microseconds, and 9239 cpu microseconds.
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.878786) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 3449357 bytes OK
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.878811) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.880653) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.880670) EVENT_LOG_v1 {"time_micros": 1769445850880665, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.880690) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 3526224, prev total WAL file size 3526224, number of live WAL files 2.
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.882167) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(3368KB)], [134(9949KB)]
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445850882236, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 13638042, "oldest_snapshot_seqno": -1}
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 8087 keys, 11844552 bytes, temperature: kUnknown
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445850960608, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 11844552, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11789555, "index_size": 33700, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 210431, "raw_average_key_size": 26, "raw_value_size": 11644166, "raw_average_value_size": 1439, "num_data_blocks": 1316, "num_entries": 8087, "num_filter_entries": 8087, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769445850, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.960853) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 11844552 bytes
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.962560) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.9 rd, 151.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.7 +0.0 blob) out(11.3 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 8601, records dropped: 514 output_compression: NoCompression
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.962574) EVENT_LOG_v1 {"time_micros": 1769445850962567, "job": 82, "event": "compaction_finished", "compaction_time_micros": 78434, "compaction_time_cpu_micros": 27428, "output_level": 6, "num_output_files": 1, "total_output_size": 11844552, "num_input_records": 8601, "num_output_records": 8087, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445850963276, "job": 82, "event": "table_file_deletion", "file_number": 136}
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445850965120, "job": 82, "event": "table_file_deletion", "file_number": 134}
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.882005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.965253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.965262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.965266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.965269) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:10.965272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:11 compute-0 nova_compute[239965]: 2026-01-26 16:44:11.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:12 compute-0 ceph-mon[75140]: pgmap v2833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:13 compute-0 nova_compute[239965]: 2026-01-26 16:44:13.771 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:14 compute-0 ceph-mon[75140]: pgmap v2834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:15 compute-0 nova_compute[239965]: 2026-01-26 16:44:15.062 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:16 compute-0 nova_compute[239965]: 2026-01-26 16:44:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:16 compute-0 ceph-mon[75140]: pgmap v2835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:17 compute-0 nova_compute[239965]: 2026-01-26 16:44:17.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:17 compute-0 nova_compute[239965]: 2026-01-26 16:44:17.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:18 compute-0 nova_compute[239965]: 2026-01-26 16:44:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:18 compute-0 nova_compute[239965]: 2026-01-26 16:44:18.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:44:18 compute-0 nova_compute[239965]: 2026-01-26 16:44:18.772 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:18 compute-0 ceph-mon[75140]: pgmap v2836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:19 compute-0 nova_compute[239965]: 2026-01-26 16:44:19.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:19 compute-0 nova_compute[239965]: 2026-01-26 16:44:19.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:44:19 compute-0 nova_compute[239965]: 2026-01-26 16:44:19.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:44:19 compute-0 nova_compute[239965]: 2026-01-26 16:44:19.536 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:44:20 compute-0 nova_compute[239965]: 2026-01-26 16:44:20.064 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:20 compute-0 ceph-mon[75140]: pgmap v2837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:22 compute-0 ceph-mon[75140]: pgmap v2838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:23 compute-0 nova_compute[239965]: 2026-01-26 16:44:23.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:23 compute-0 nova_compute[239965]: 2026-01-26 16:44:23.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:23 compute-0 nova_compute[239965]: 2026-01-26 16:44:23.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:23 compute-0 nova_compute[239965]: 2026-01-26 16:44:23.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:23 compute-0 nova_compute[239965]: 2026-01-26 16:44:23.543 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:44:23 compute-0 nova_compute[239965]: 2026-01-26 16:44:23.543 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:23 compute-0 nova_compute[239965]: 2026-01-26 16:44:23.775 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:44:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979006855' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:44:24 compute-0 nova_compute[239965]: 2026-01-26 16:44:24.205 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:24 compute-0 nova_compute[239965]: 2026-01-26 16:44:24.451 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:44:24 compute-0 nova_compute[239965]: 2026-01-26 16:44:24.453 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3615MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:44:24 compute-0 nova_compute[239965]: 2026-01-26 16:44:24.453 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:24 compute-0 nova_compute[239965]: 2026-01-26 16:44:24.453 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:24 compute-0 nova_compute[239965]: 2026-01-26 16:44:24.524 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:44:24 compute-0 nova_compute[239965]: 2026-01-26 16:44:24.524 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:44:24 compute-0 nova_compute[239965]: 2026-01-26 16:44:24.545 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:44:24 compute-0 ceph-mon[75140]: pgmap v2839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2979006855' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:44:25 compute-0 nova_compute[239965]: 2026-01-26 16:44:25.066 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:44:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3870454147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:44:25 compute-0 nova_compute[239965]: 2026-01-26 16:44:25.114 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:44:25 compute-0 nova_compute[239965]: 2026-01-26 16:44:25.120 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:44:25 compute-0 nova_compute[239965]: 2026-01-26 16:44:25.139 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:44:25 compute-0 nova_compute[239965]: 2026-01-26 16:44:25.141 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:44:25 compute-0 nova_compute[239965]: 2026-01-26 16:44:25.142 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3870454147' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:44:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:26 compute-0 ceph-mon[75140]: pgmap v2840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:44:28
Jan 26 16:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'backups', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.log', 'images']
Jan 26 16:44:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:44:28 compute-0 nova_compute[239965]: 2026-01-26 16:44:28.778 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:28 compute-0 ceph-mon[75140]: pgmap v2841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:30 compute-0 nova_compute[239965]: 2026-01-26 16:44:30.226 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:44:30 compute-0 ceph-mon[75140]: pgmap v2842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:44:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:44:31 compute-0 nova_compute[239965]: 2026-01-26 16:44:31.135 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:44:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:32 compute-0 ceph-mon[75140]: pgmap v2843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:33 compute-0 nova_compute[239965]: 2026-01-26 16:44:33.779 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:34 compute-0 ceph-mon[75140]: pgmap v2844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:35 compute-0 nova_compute[239965]: 2026-01-26 16:44:35.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:37 compute-0 ceph-mon[75140]: pgmap v2845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:38 compute-0 ceph-mon[75140]: pgmap v2846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:38 compute-0 podman[385074]: 2026-01-26 16:44:38.361678206 +0000 UTC m=+0.051175660 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 16:44:38 compute-0 podman[385075]: 2026-01-26 16:44:38.398817031 +0000 UTC m=+0.084934872 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Jan 26 16:44:38 compute-0 nova_compute[239965]: 2026-01-26 16:44:38.781 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:40 compute-0 nova_compute[239965]: 2026-01-26 16:44:40.230 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:40 compute-0 ceph-mon[75140]: pgmap v2847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:42 compute-0 ceph-mon[75140]: pgmap v2848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:43 compute-0 nova_compute[239965]: 2026-01-26 16:44:43.784 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:45 compute-0 ceph-mon[75140]: pgmap v2849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:45 compute-0 sshd-session[385116]: Connection closed by 1.197.102.62 port 58654
Jan 26 16:44:45 compute-0 nova_compute[239965]: 2026-01-26 16:44:45.232 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:46 compute-0 ceph-mon[75140]: pgmap v2850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.491204) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445886491241, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 515, "num_deletes": 250, "total_data_size": 532725, "memory_usage": 542272, "flush_reason": "Manual Compaction"}
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445886496065, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 363424, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59075, "largest_seqno": 59589, "table_properties": {"data_size": 360829, "index_size": 630, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6888, "raw_average_key_size": 20, "raw_value_size": 355600, "raw_average_value_size": 1045, "num_data_blocks": 29, "num_entries": 340, "num_filter_entries": 340, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769445851, "oldest_key_time": 1769445851, "file_creation_time": 1769445886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 4902 microseconds, and 2519 cpu microseconds.
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.496107) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 363424 bytes OK
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.496124) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.497824) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.497842) EVENT_LOG_v1 {"time_micros": 1769445886497836, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.497859) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 529782, prev total WAL file size 529782, number of live WAL files 2.
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.498449) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323537' seq:72057594037927935, type:22 .. '6D6772737461740032353038' seq:0, type:0; will stop at (end)
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(354KB)], [137(11MB)]
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445886498500, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 12207976, "oldest_snapshot_seqno": -1}
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7932 keys, 8996024 bytes, temperature: kUnknown
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445886601258, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8996024, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8946441, "index_size": 28670, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19845, "raw_key_size": 207374, "raw_average_key_size": 26, "raw_value_size": 8808172, "raw_average_value_size": 1110, "num_data_blocks": 1106, "num_entries": 7932, "num_filter_entries": 7932, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769445886, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.601535) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8996024 bytes
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.605373) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 118.7 rd, 87.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 11.3 +0.0 blob) out(8.6 +0.0 blob), read-write-amplify(58.3) write-amplify(24.8) OK, records in: 8427, records dropped: 495 output_compression: NoCompression
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.605392) EVENT_LOG_v1 {"time_micros": 1769445886605384, "job": 84, "event": "compaction_finished", "compaction_time_micros": 102839, "compaction_time_cpu_micros": 49739, "output_level": 6, "num_output_files": 1, "total_output_size": 8996024, "num_input_records": 8427, "num_output_records": 7932, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445886605605, "job": 84, "event": "table_file_deletion", "file_number": 139}
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769445886608788, "job": 84, "event": "table_file_deletion", "file_number": 137}
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.498298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.608917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.608926) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.608931) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.608935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:44:46.608939) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:44:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:48 compute-0 ceph-mon[75140]: pgmap v2851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:44:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/585933298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:44:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:44:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/585933298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:44:48 compute-0 nova_compute[239965]: 2026-01-26 16:44:48.786 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:44:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:44:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/585933298' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:44:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/585933298' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:44:50 compute-0 nova_compute[239965]: 2026-01-26 16:44:50.235 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:50 compute-0 ceph-mon[75140]: pgmap v2852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:51 compute-0 sudo[385119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:44:51 compute-0 sudo[385119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:51 compute-0 sudo[385119]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:51 compute-0 sudo[385144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 16:44:51 compute-0 sudo[385144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:52 compute-0 sudo[385144]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:52 compute-0 sudo[385189]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:44:52 compute-0 sudo[385189]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:52 compute-0 sudo[385189]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:52 compute-0 sudo[385214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:44:52 compute-0 sudo[385214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: pgmap v2853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:52 compute-0 sudo[385214]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:44:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:44:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:44:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:44:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:44:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:44:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:44:52 compute-0 sudo[385272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:44:52 compute-0 sudo[385272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:52 compute-0 sudo[385272]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:52 compute-0 sudo[385297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:44:52 compute-0 sudo[385297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:53 compute-0 podman[385333]: 2026-01-26 16:44:53.265913753 +0000 UTC m=+0.054553233 container create ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 16:44:53 compute-0 systemd[1]: Started libpod-conmon-ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba.scope.
Jan 26 16:44:53 compute-0 podman[385333]: 2026-01-26 16:44:53.247470169 +0000 UTC m=+0.036109669 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:44:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:44:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:53 compute-0 podman[385333]: 2026-01-26 16:44:53.361817083 +0000 UTC m=+0.150456583 container init ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:44:53 compute-0 podman[385333]: 2026-01-26 16:44:53.368671501 +0000 UTC m=+0.157310981 container start ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 16:44:53 compute-0 podman[385333]: 2026-01-26 16:44:53.372512796 +0000 UTC m=+0.161152336 container attach ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 16:44:53 compute-0 epic_khayyam[385350]: 167 167
Jan 26 16:44:53 compute-0 systemd[1]: libpod-ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba.scope: Deactivated successfully.
Jan 26 16:44:53 compute-0 conmon[385350]: conmon ee6c6721a12f0421ad75 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba.scope/container/memory.events
Jan 26 16:44:53 compute-0 podman[385333]: 2026-01-26 16:44:53.377112799 +0000 UTC m=+0.165752309 container died ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:44:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-f963d55a90226987fc3053140175f59073ef1b3ba833d79814950d07456f1b3f-merged.mount: Deactivated successfully.
Jan 26 16:44:53 compute-0 podman[385333]: 2026-01-26 16:44:53.415926204 +0000 UTC m=+0.204565684 container remove ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_khayyam, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:44:53 compute-0 systemd[1]: libpod-conmon-ee6c6721a12f0421ad75e48bb44da3a5c790acff936f29fefd1a56f1f755d5ba.scope: Deactivated successfully.
Jan 26 16:44:53 compute-0 podman[385374]: 2026-01-26 16:44:53.585392204 +0000 UTC m=+0.040714972 container create 38805e610c4c1d16f7291db422262e51befdeff0feedbde5fc2541e63d9e9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:44:53 compute-0 systemd[1]: Started libpod-conmon-38805e610c4c1d16f7291db422262e51befdeff0feedbde5fc2541e63d9e9fc4.scope.
Jan 26 16:44:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2f895e2ba9c9059fc301e33091a95dd941c005d65e7a46eefd4e4e1c069d58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2f895e2ba9c9059fc301e33091a95dd941c005d65e7a46eefd4e4e1c069d58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2f895e2ba9c9059fc301e33091a95dd941c005d65e7a46eefd4e4e1c069d58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2f895e2ba9c9059fc301e33091a95dd941c005d65e7a46eefd4e4e1c069d58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8b2f895e2ba9c9059fc301e33091a95dd941c005d65e7a46eefd4e4e1c069d58/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:53 compute-0 podman[385374]: 2026-01-26 16:44:53.566841298 +0000 UTC m=+0.022164086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:44:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:44:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:44:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:44:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:44:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:44:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:44:53 compute-0 podman[385374]: 2026-01-26 16:44:53.769915115 +0000 UTC m=+0.225237903 container init 38805e610c4c1d16f7291db422262e51befdeff0feedbde5fc2541e63d9e9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:44:53 compute-0 podman[385374]: 2026-01-26 16:44:53.785426047 +0000 UTC m=+0.240748815 container start 38805e610c4c1d16f7291db422262e51befdeff0feedbde5fc2541e63d9e9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:44:53 compute-0 podman[385374]: 2026-01-26 16:44:53.790303266 +0000 UTC m=+0.245626064 container attach 38805e610c4c1d16f7291db422262e51befdeff0feedbde5fc2541e63d9e9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:44:53 compute-0 nova_compute[239965]: 2026-01-26 16:44:53.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:54 compute-0 jolly_shockley[385391]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:44:54 compute-0 jolly_shockley[385391]: --> All data devices are unavailable
Jan 26 16:44:54 compute-0 systemd[1]: libpod-38805e610c4c1d16f7291db422262e51befdeff0feedbde5fc2541e63d9e9fc4.scope: Deactivated successfully.
Jan 26 16:44:54 compute-0 podman[385411]: 2026-01-26 16:44:54.341730986 +0000 UTC m=+0.025469928 container died 38805e610c4c1d16f7291db422262e51befdeff0feedbde5fc2541e63d9e9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b2f895e2ba9c9059fc301e33091a95dd941c005d65e7a46eefd4e4e1c069d58-merged.mount: Deactivated successfully.
Jan 26 16:44:54 compute-0 podman[385411]: 2026-01-26 16:44:54.378407108 +0000 UTC m=+0.062145990 container remove 38805e610c4c1d16f7291db422262e51befdeff0feedbde5fc2541e63d9e9fc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:44:54 compute-0 systemd[1]: libpod-conmon-38805e610c4c1d16f7291db422262e51befdeff0feedbde5fc2541e63d9e9fc4.scope: Deactivated successfully.
Jan 26 16:44:54 compute-0 sudo[385297]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:54 compute-0 sudo[385426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:44:54 compute-0 sudo[385426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:54 compute-0 sudo[385426]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:54 compute-0 sudo[385451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:44:54 compute-0 sudo[385451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:54 compute-0 podman[385487]: 2026-01-26 16:44:54.840244213 +0000 UTC m=+0.050894273 container create 00243c3a5b3f31e428798d3ef0442881fd435d2ae80faafb0dbc18ad95216285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:44:54 compute-0 ceph-mon[75140]: pgmap v2854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:54 compute-0 systemd[1]: Started libpod-conmon-00243c3a5b3f31e428798d3ef0442881fd435d2ae80faafb0dbc18ad95216285.scope.
Jan 26 16:44:54 compute-0 podman[385487]: 2026-01-26 16:44:54.811777193 +0000 UTC m=+0.022427263 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:44:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:44:54 compute-0 podman[385487]: 2026-01-26 16:44:54.945620397 +0000 UTC m=+0.156270507 container init 00243c3a5b3f31e428798d3ef0442881fd435d2ae80faafb0dbc18ad95216285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:44:54 compute-0 podman[385487]: 2026-01-26 16:44:54.953672334 +0000 UTC m=+0.164322404 container start 00243c3a5b3f31e428798d3ef0442881fd435d2ae80faafb0dbc18ad95216285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dhawan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:44:54 compute-0 podman[385487]: 2026-01-26 16:44:54.957787615 +0000 UTC m=+0.168437695 container attach 00243c3a5b3f31e428798d3ef0442881fd435d2ae80faafb0dbc18ad95216285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dhawan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:44:54 compute-0 sleepy_dhawan[385503]: 167 167
Jan 26 16:44:54 compute-0 systemd[1]: libpod-00243c3a5b3f31e428798d3ef0442881fd435d2ae80faafb0dbc18ad95216285.scope: Deactivated successfully.
Jan 26 16:44:54 compute-0 podman[385487]: 2026-01-26 16:44:54.962469311 +0000 UTC m=+0.173119391 container died 00243c3a5b3f31e428798d3ef0442881fd435d2ae80faafb0dbc18ad95216285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dhawan, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-56680077e4e785b7c768394a8d7ceaba81c28b47a8368451891d3db0ac732d4f-merged.mount: Deactivated successfully.
Jan 26 16:44:55 compute-0 podman[385487]: 2026-01-26 16:44:55.00064505 +0000 UTC m=+0.211295110 container remove 00243c3a5b3f31e428798d3ef0442881fd435d2ae80faafb0dbc18ad95216285 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:44:55 compute-0 systemd[1]: libpod-conmon-00243c3a5b3f31e428798d3ef0442881fd435d2ae80faafb0dbc18ad95216285.scope: Deactivated successfully.
Jan 26 16:44:55 compute-0 podman[385528]: 2026-01-26 16:44:55.186190286 +0000 UTC m=+0.045783697 container create c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_gates, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:44:55 compute-0 systemd[1]: Started libpod-conmon-c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f.scope.
Jan 26 16:44:55 compute-0 nova_compute[239965]: 2026-01-26 16:44:55.235 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75486c56a0c5d9f7e326f0623094842bf6349acee0968dbd7ce4a4813f357031/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75486c56a0c5d9f7e326f0623094842bf6349acee0968dbd7ce4a4813f357031/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75486c56a0c5d9f7e326f0623094842bf6349acee0968dbd7ce4a4813f357031/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75486c56a0c5d9f7e326f0623094842bf6349acee0968dbd7ce4a4813f357031/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:55 compute-0 podman[385528]: 2026-01-26 16:44:55.163407495 +0000 UTC m=+0.023000936 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:44:55 compute-0 podman[385528]: 2026-01-26 16:44:55.268482141 +0000 UTC m=+0.128075572 container init c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 16:44:55 compute-0 podman[385528]: 2026-01-26 16:44:55.277892353 +0000 UTC m=+0.137485774 container start c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_gates, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:44:55 compute-0 podman[385528]: 2026-01-26 16:44:55.281890221 +0000 UTC m=+0.141483662 container attach c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_gates, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 16:44:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:55 compute-0 thirsty_gates[385544]: {
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:     "0": [
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:         {
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "devices": [
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "/dev/loop3"
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             ],
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_name": "ceph_lv0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_size": "21470642176",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "name": "ceph_lv0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "tags": {
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.cluster_name": "ceph",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.crush_device_class": "",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.encrypted": "0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.objectstore": "bluestore",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.osd_id": "0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.type": "block",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.vdo": "0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.with_tpm": "0"
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             },
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "type": "block",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "vg_name": "ceph_vg0"
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:         }
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:     ],
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:     "1": [
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:         {
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "devices": [
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "/dev/loop4"
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             ],
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_name": "ceph_lv1",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_size": "21470642176",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "name": "ceph_lv1",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "tags": {
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.cluster_name": "ceph",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.crush_device_class": "",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.encrypted": "0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.objectstore": "bluestore",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.osd_id": "1",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.type": "block",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.vdo": "0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.with_tpm": "0"
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             },
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "type": "block",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "vg_name": "ceph_vg1"
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:         }
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:     ],
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:     "2": [
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:         {
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "devices": [
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "/dev/loop5"
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             ],
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_name": "ceph_lv2",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_size": "21470642176",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "name": "ceph_lv2",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "tags": {
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.cluster_name": "ceph",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.crush_device_class": "",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.encrypted": "0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.objectstore": "bluestore",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.osd_id": "2",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.type": "block",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.vdo": "0",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:                 "ceph.with_tpm": "0"
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             },
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "type": "block",
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:             "vg_name": "ceph_vg2"
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:         }
Jan 26 16:44:55 compute-0 thirsty_gates[385544]:     ]
Jan 26 16:44:55 compute-0 thirsty_gates[385544]: }
Jan 26 16:44:55 compute-0 systemd[1]: libpod-c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f.scope: Deactivated successfully.
Jan 26 16:44:55 compute-0 conmon[385544]: conmon c866c77f0b402c84778d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f.scope/container/memory.events
Jan 26 16:44:55 compute-0 podman[385528]: 2026-01-26 16:44:55.599761533 +0000 UTC m=+0.459354974 container died c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 16:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-75486c56a0c5d9f7e326f0623094842bf6349acee0968dbd7ce4a4813f357031-merged.mount: Deactivated successfully.
Jan 26 16:44:55 compute-0 podman[385528]: 2026-01-26 16:44:55.75487206 +0000 UTC m=+0.614465491 container remove c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_gates, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:44:55 compute-0 systemd[1]: libpod-conmon-c866c77f0b402c84778db276a53bdd85bb3be869d7455ca564918cee63095f3f.scope: Deactivated successfully.
Jan 26 16:44:55 compute-0 sudo[385451]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:55 compute-0 sudo[385567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:44:55 compute-0 sudo[385567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:55 compute-0 sudo[385567]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:55 compute-0 sudo[385592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:44:55 compute-0 sudo[385592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:56 compute-0 sshd-session[385551]: Invalid user sol from 45.148.10.240 port 54486
Jan 26 16:44:56 compute-0 podman[385629]: 2026-01-26 16:44:56.217375441 +0000 UTC m=+0.042027385 container create 741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_perlman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:44:56 compute-0 systemd[1]: Started libpod-conmon-741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7.scope.
Jan 26 16:44:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:44:56 compute-0 podman[385629]: 2026-01-26 16:44:56.197446781 +0000 UTC m=+0.022098765 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:44:56 compute-0 podman[385629]: 2026-01-26 16:44:56.315359102 +0000 UTC m=+0.140011086 container init 741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:44:56 compute-0 podman[385629]: 2026-01-26 16:44:56.323896802 +0000 UTC m=+0.148548736 container start 741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_perlman, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:44:56 compute-0 podman[385629]: 2026-01-26 16:44:56.327284325 +0000 UTC m=+0.151936259 container attach 741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_perlman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:44:56 compute-0 sweet_perlman[385645]: 167 167
Jan 26 16:44:56 compute-0 systemd[1]: libpod-741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7.scope: Deactivated successfully.
Jan 26 16:44:56 compute-0 conmon[385645]: conmon 741698740d24f7871c30 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7.scope/container/memory.events
Jan 26 16:44:56 compute-0 podman[385629]: 2026-01-26 16:44:56.331031218 +0000 UTC m=+0.155683162 container died 741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:44:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb1fabddc012fa40b5fe1a167682711ab17277a7d9ad6abdee3b6a6631b1b3e6-merged.mount: Deactivated successfully.
Jan 26 16:44:56 compute-0 podman[385629]: 2026-01-26 16:44:56.366898421 +0000 UTC m=+0.191550375 container remove 741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:44:56 compute-0 systemd[1]: libpod-conmon-741698740d24f7871c302ec5bc847c33445b7d83c3df450db47162a626b030d7.scope: Deactivated successfully.
Jan 26 16:44:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:44:56 compute-0 podman[385667]: 2026-01-26 16:44:56.529476631 +0000 UTC m=+0.041222616 container create 8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:44:56 compute-0 systemd[1]: Started libpod-conmon-8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af.scope.
Jan 26 16:44:56 compute-0 podman[385667]: 2026-01-26 16:44:56.512095853 +0000 UTC m=+0.023841838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:44:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc0963063641613cd166988057a58e6e3f91a14d033587dd9b26bf442761ad0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc0963063641613cd166988057a58e6e3f91a14d033587dd9b26bf442761ad0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc0963063641613cd166988057a58e6e3f91a14d033587dd9b26bf442761ad0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fc0963063641613cd166988057a58e6e3f91a14d033587dd9b26bf442761ad0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:44:56 compute-0 podman[385667]: 2026-01-26 16:44:56.625715279 +0000 UTC m=+0.137461264 container init 8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:44:56 compute-0 podman[385667]: 2026-01-26 16:44:56.63428363 +0000 UTC m=+0.146029595 container start 8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:44:56 compute-0 podman[385667]: 2026-01-26 16:44:56.638167475 +0000 UTC m=+0.149913440 container attach 8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rubin, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:44:56 compute-0 sshd-session[385551]: Connection closed by invalid user sol 45.148.10.240 port 54486 [preauth]
Jan 26 16:44:56 compute-0 ceph-mon[75140]: pgmap v2855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:57 compute-0 lvm[385763]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:44:57 compute-0 lvm[385762]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:44:57 compute-0 lvm[385763]: VG ceph_vg1 finished
Jan 26 16:44:57 compute-0 lvm[385762]: VG ceph_vg0 finished
Jan 26 16:44:57 compute-0 lvm[385765]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:44:57 compute-0 lvm[385765]: VG ceph_vg2 finished
Jan 26 16:44:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:57 compute-0 modest_rubin[385683]: {}
Jan 26 16:44:57 compute-0 systemd[1]: libpod-8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af.scope: Deactivated successfully.
Jan 26 16:44:57 compute-0 systemd[1]: libpod-8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af.scope: Consumed 1.263s CPU time.
Jan 26 16:44:57 compute-0 podman[385667]: 2026-01-26 16:44:57.393679057 +0000 UTC m=+0.905425022 container died 8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rubin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fc0963063641613cd166988057a58e6e3f91a14d033587dd9b26bf442761ad0-merged.mount: Deactivated successfully.
Jan 26 16:44:57 compute-0 podman[385667]: 2026-01-26 16:44:57.435833734 +0000 UTC m=+0.947579739 container remove 8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:44:57 compute-0 systemd[1]: libpod-conmon-8f8442d3aab0503e56d224b3033d7e7a51beb47c99e7d80bfeee5720d97418af.scope: Deactivated successfully.
Jan 26 16:44:57 compute-0 sudo[385592]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:44:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:44:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:57 compute-0 sudo[385779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:44:57 compute-0 sudo[385779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:44:57 compute-0 sudo[385779]: pam_unix(sudo:session): session closed for user root
Jan 26 16:44:58 compute-0 ceph-mon[75140]: pgmap v2856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:44:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:44:58 compute-0 nova_compute[239965]: 2026-01-26 16:44:58.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:44:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:44:59.267 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:44:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:44:59.268 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:44:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:44:59.268 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:44:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:00 compute-0 nova_compute[239965]: 2026-01-26 16:45:00.237 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:45:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:45:00 compute-0 ceph-mon[75140]: pgmap v2857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:02 compute-0 ceph-mon[75140]: pgmap v2858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:03 compute-0 nova_compute[239965]: 2026-01-26 16:45:03.796 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:04 compute-0 ceph-mon[75140]: pgmap v2859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:05 compute-0 nova_compute[239965]: 2026-01-26 16:45:05.239 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:06 compute-0 ceph-mon[75140]: pgmap v2860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:07 compute-0 nova_compute[239965]: 2026-01-26 16:45:07.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:08 compute-0 ceph-mon[75140]: pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:08 compute-0 nova_compute[239965]: 2026-01-26 16:45:08.798 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:09 compute-0 podman[385804]: 2026-01-26 16:45:09.374294247 +0000 UTC m=+0.056528283 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:45:09 compute-0 podman[385805]: 2026-01-26 16:45:09.432548329 +0000 UTC m=+0.109218078 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:45:10 compute-0 nova_compute[239965]: 2026-01-26 16:45:10.240 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:10 compute-0 ceph-mon[75140]: pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:11 compute-0 nova_compute[239965]: 2026-01-26 16:45:11.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:12 compute-0 nova_compute[239965]: 2026-01-26 16:45:12.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:12 compute-0 ceph-mon[75140]: pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:13 compute-0 nova_compute[239965]: 2026-01-26 16:45:13.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:13 compute-0 nova_compute[239965]: 2026-01-26 16:45:13.800 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:14 compute-0 ceph-mon[75140]: pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:15 compute-0 nova_compute[239965]: 2026-01-26 16:45:15.242 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:16 compute-0 ceph-mon[75140]: pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:17 compute-0 nova_compute[239965]: 2026-01-26 16:45:17.522 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:18 compute-0 nova_compute[239965]: 2026-01-26 16:45:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:18 compute-0 nova_compute[239965]: 2026-01-26 16:45:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:18 compute-0 nova_compute[239965]: 2026-01-26 16:45:18.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:45:18 compute-0 ceph-mon[75140]: pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:18 compute-0 nova_compute[239965]: 2026-01-26 16:45:18.802 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:19 compute-0 nova_compute[239965]: 2026-01-26 16:45:19.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:19 compute-0 nova_compute[239965]: 2026-01-26 16:45:19.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:19 compute-0 nova_compute[239965]: 2026-01-26 16:45:19.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:45:19 compute-0 nova_compute[239965]: 2026-01-26 16:45:19.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:45:19 compute-0 nova_compute[239965]: 2026-01-26 16:45:19.528 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:45:20 compute-0 nova_compute[239965]: 2026-01-26 16:45:20.244 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:20 compute-0 ceph-mon[75140]: pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:22 compute-0 ceph-mon[75140]: pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:23 compute-0 nova_compute[239965]: 2026-01-26 16:45:23.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:24 compute-0 ceph-mon[75140]: pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:24 compute-0 nova_compute[239965]: 2026-01-26 16:45:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:24 compute-0 nova_compute[239965]: 2026-01-26 16:45:24.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:45:24 compute-0 nova_compute[239965]: 2026-01-26 16:45:24.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:45:24 compute-0 nova_compute[239965]: 2026-01-26 16:45:24.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:45:24 compute-0 nova_compute[239965]: 2026-01-26 16:45:24.541 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:45:24 compute-0 nova_compute[239965]: 2026-01-26 16:45:24.541 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:45:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1619129709' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.125 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.246 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.289 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.290 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3580MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.290 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.290 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:45:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1619129709' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.369 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.369 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:45:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.391 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:45:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:45:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4142052094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.967 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.973 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.991 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.992 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:45:25 compute-0 nova_compute[239965]: 2026-01-26 16:45:25.993 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.702s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:45:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:45:28
Jan 26 16:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'images', '.rgw.root', 'cephfs.cephfs.data', 'vms', '.mgr', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'backups']
Jan 26 16:45:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:45:28 compute-0 nova_compute[239965]: 2026-01-26 16:45:28.805 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:30 compute-0 nova_compute[239965]: 2026-01-26 16:45:30.248 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:45:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:45:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:33 compute-0 nova_compute[239965]: 2026-01-26 16:45:33.808 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:35 compute-0 nova_compute[239965]: 2026-01-26 16:45:35.249 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:36 compute-0 nova_compute[239965]: 2026-01-26 16:45:36.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:36 compute-0 nova_compute[239965]: 2026-01-26 16:45:36.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:45:36 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:45:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).mds e6 check_health: resetting beacon timeouts due to mon delay (slow election?) of 2e+01 seconds
Jan 26 16:45:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:38 compute-0 nova_compute[239965]: 2026-01-26 16:45:38.810 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:40 compute-0 nova_compute[239965]: 2026-01-26 16:45:40.250 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:40 compute-0 podman[385895]: 2026-01-26 16:45:40.410088741 +0000 UTC m=+0.089443983 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 16:45:40 compute-0 podman[385896]: 2026-01-26 16:45:40.423176132 +0000 UTC m=+0.100869323 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 26 16:45:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:42 compute-0 ceph-mon[75140]: pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4142052094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:45:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:43 compute-0 ceph-mon[75140]: pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:43 compute-0 ceph-mon[75140]: pgmap v2872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:43 compute-0 ceph-mon[75140]: pgmap v2873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:43 compute-0 ceph-mon[75140]: pgmap v2874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:43 compute-0 ceph-mon[75140]: pgmap v2875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:43 compute-0 ceph-mon[75140]: pgmap v2876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:43 compute-0 ceph-mon[75140]: pgmap v2877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:43 compute-0 ceph-mon[75140]: pgmap v2878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:43 compute-0 nova_compute[239965]: 2026-01-26 16:45:43.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:44 compute-0 ceph-mon[75140]: pgmap v2879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:45 compute-0 nova_compute[239965]: 2026-01-26 16:45:45.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:46 compute-0 ceph-mon[75140]: pgmap v2880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:48 compute-0 ceph-mon[75140]: pgmap v2881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:45:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3875077263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:45:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:45:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3875077263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:45:48 compute-0 nova_compute[239965]: 2026-01-26 16:45:48.815 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:45:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:45:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3875077263' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:45:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3875077263' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:45:50 compute-0 nova_compute[239965]: 2026-01-26 16:45:50.266 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:50 compute-0 ceph-mon[75140]: pgmap v2882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:51 compute-0 nova_compute[239965]: 2026-01-26 16:45:51.532 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:45:51 compute-0 nova_compute[239965]: 2026-01-26 16:45:51.532 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:45:51 compute-0 nova_compute[239965]: 2026-01-26 16:45:51.551 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:45:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:52 compute-0 ceph-mon[75140]: pgmap v2883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:53 compute-0 nova_compute[239965]: 2026-01-26 16:45:53.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:54 compute-0 ceph-mon[75140]: pgmap v2884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:55 compute-0 nova_compute[239965]: 2026-01-26 16:45:55.268 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:56 compute-0 ceph-mon[75140]: pgmap v2885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:45:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:57 compute-0 sudo[385938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:45:57 compute-0 sudo[385938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:45:57 compute-0 sudo[385938]: pam_unix(sudo:session): session closed for user root
Jan 26 16:45:57 compute-0 sudo[385963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:45:57 compute-0 sudo[385963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:45:58 compute-0 sudo[385963]: pam_unix(sudo:session): session closed for user root
Jan 26 16:45:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:45:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:45:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:45:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:45:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:45:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:45:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:45:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:45:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:45:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:45:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:45:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:45:58 compute-0 sudo[386018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:45:58 compute-0 sudo[386018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:45:58 compute-0 sudo[386018]: pam_unix(sudo:session): session closed for user root
Jan 26 16:45:58 compute-0 sudo[386043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:45:58 compute-0 sudo[386043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:45:58 compute-0 podman[386080]: 2026-01-26 16:45:58.690898986 +0000 UTC m=+0.043978612 container create 63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hellman, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 16:45:58 compute-0 systemd[1]: Started libpod-conmon-63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b.scope.
Jan 26 16:45:58 compute-0 ceph-mon[75140]: pgmap v2886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:45:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:45:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:45:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:45:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:45:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:45:58 compute-0 podman[386080]: 2026-01-26 16:45:58.67277798 +0000 UTC m=+0.025857636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:45:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:45:58 compute-0 podman[386080]: 2026-01-26 16:45:58.790658772 +0000 UTC m=+0.143738428 container init 63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:45:58 compute-0 podman[386080]: 2026-01-26 16:45:58.799590721 +0000 UTC m=+0.152670357 container start 63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hellman, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 16:45:58 compute-0 podman[386080]: 2026-01-26 16:45:58.803145408 +0000 UTC m=+0.156225104 container attach 63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hellman, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:45:58 compute-0 inspiring_hellman[386096]: 167 167
Jan 26 16:45:58 compute-0 systemd[1]: libpod-63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b.scope: Deactivated successfully.
Jan 26 16:45:58 compute-0 conmon[386096]: conmon 63b24de6a8c484261c65 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b.scope/container/memory.events
Jan 26 16:45:58 compute-0 podman[386080]: 2026-01-26 16:45:58.807249919 +0000 UTC m=+0.160329565 container died 63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hellman, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 16:45:58 compute-0 nova_compute[239965]: 2026-01-26 16:45:58.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:45:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-65c7077bfef04ab9e0c865d533779daac1465de883dad2cc117b86c176d30477-merged.mount: Deactivated successfully.
Jan 26 16:45:58 compute-0 podman[386080]: 2026-01-26 16:45:58.850572455 +0000 UTC m=+0.203652091 container remove 63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hellman, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:45:58 compute-0 systemd[1]: libpod-conmon-63b24de6a8c484261c65aeb54cc5ee5d0ab7d17f2f3bf26bd6a8dd33ec991c8b.scope: Deactivated successfully.
Jan 26 16:45:58 compute-0 podman[386122]: 2026-01-26 16:45:58.994525348 +0000 UTC m=+0.038670042 container create 4160b55b02208458ff3b162798067bd93c63e6133ce6532862ee217d7a2f8d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wright, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 26 16:45:59 compute-0 systemd[1]: Started libpod-conmon-4160b55b02208458ff3b162798067bd93c63e6133ce6532862ee217d7a2f8d7a.scope.
Jan 26 16:45:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/679a6162bb678ed535b56ce0b52d57ae5c3e8ef32a51461a3ad33db8fcfab100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/679a6162bb678ed535b56ce0b52d57ae5c3e8ef32a51461a3ad33db8fcfab100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/679a6162bb678ed535b56ce0b52d57ae5c3e8ef32a51461a3ad33db8fcfab100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/679a6162bb678ed535b56ce0b52d57ae5c3e8ef32a51461a3ad33db8fcfab100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:45:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/679a6162bb678ed535b56ce0b52d57ae5c3e8ef32a51461a3ad33db8fcfab100/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:45:59 compute-0 podman[386122]: 2026-01-26 16:45:59.070389795 +0000 UTC m=+0.114534489 container init 4160b55b02208458ff3b162798067bd93c63e6133ce6532862ee217d7a2f8d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wright, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:45:59 compute-0 podman[386122]: 2026-01-26 16:45:58.977056439 +0000 UTC m=+0.021201153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:45:59 compute-0 podman[386122]: 2026-01-26 16:45:59.076904415 +0000 UTC m=+0.121049109 container start 4160b55b02208458ff3b162798067bd93c63e6133ce6532862ee217d7a2f8d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wright, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 16:45:59 compute-0 podman[386122]: 2026-01-26 16:45:59.08073778 +0000 UTC m=+0.124882494 container attach 4160b55b02208458ff3b162798067bd93c63e6133ce6532862ee217d7a2f8d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wright, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 16:45:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:45:59.269 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:45:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:45:59.270 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:45:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:45:59.270 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:45:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:45:59 compute-0 unruffled_wright[386138]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:45:59 compute-0 unruffled_wright[386138]: --> All data devices are unavailable
Jan 26 16:45:59 compute-0 systemd[1]: libpod-4160b55b02208458ff3b162798067bd93c63e6133ce6532862ee217d7a2f8d7a.scope: Deactivated successfully.
Jan 26 16:45:59 compute-0 podman[386122]: 2026-01-26 16:45:59.535421368 +0000 UTC m=+0.579566062 container died 4160b55b02208458ff3b162798067bd93c63e6133ce6532862ee217d7a2f8d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wright, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:46:00 compute-0 nova_compute[239965]: 2026-01-26 16:46:00.271 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:46:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:46:01 compute-0 ceph-mon[75140]: pgmap v2887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-679a6162bb678ed535b56ce0b52d57ae5c3e8ef32a51461a3ad33db8fcfab100-merged.mount: Deactivated successfully.
Jan 26 16:46:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:01 compute-0 podman[386122]: 2026-01-26 16:46:01.623226704 +0000 UTC m=+2.667371438 container remove 4160b55b02208458ff3b162798067bd93c63e6133ce6532862ee217d7a2f8d7a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:46:01 compute-0 systemd[1]: libpod-conmon-4160b55b02208458ff3b162798067bd93c63e6133ce6532862ee217d7a2f8d7a.scope: Deactivated successfully.
Jan 26 16:46:01 compute-0 sudo[386043]: pam_unix(sudo:session): session closed for user root
Jan 26 16:46:01 compute-0 sudo[386173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:46:01 compute-0 sudo[386173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:46:01 compute-0 sudo[386173]: pam_unix(sudo:session): session closed for user root
Jan 26 16:46:01 compute-0 sudo[386198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:46:01 compute-0 sudo[386198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:46:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:46:02 compute-0 podman[386236]: 2026-01-26 16:46:02.089857546 +0000 UTC m=+0.023699974 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:46:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:03 compute-0 nova_compute[239965]: 2026-01-26 16:46:03.821 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:05 compute-0 nova_compute[239965]: 2026-01-26 16:46:05.272 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:08 compute-0 nova_compute[239965]: 2026-01-26 16:46:08.530 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:08 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:08 compute-0 nova_compute[239965]: 2026-01-26 16:46:08.862 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:10 compute-0 nova_compute[239965]: 2026-01-26 16:46:10.274 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:12 compute-0 nova_compute[239965]: 2026-01-26 16:46:12.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:12 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:13 compute-0 nova_compute[239965]: 2026-01-26 16:46:13.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:13 compute-0 nova_compute[239965]: 2026-01-26 16:46:13.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:15 compute-0 nova_compute[239965]: 2026-01-26 16:46:15.277 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:16 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:17 compute-0 nova_compute[239965]: 2026-01-26 16:46:17.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:18 compute-0 nova_compute[239965]: 2026-01-26 16:46:18.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:18 compute-0 nova_compute[239965]: 2026-01-26 16:46:18.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:46:18 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx MDS connection to Monitors appears to be laggy; 17.9314s since last acked beacon
Jan 26 16:46:18 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:46:18 compute-0 nova_compute[239965]: 2026-01-26 16:46:18.868 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:19 compute-0 nova_compute[239965]: 2026-01-26 16:46:19.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:19 compute-0 nova_compute[239965]: 2026-01-26 16:46:19.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:46:19 compute-0 nova_compute[239965]: 2026-01-26 16:46:19.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:46:19 compute-0 nova_compute[239965]: 2026-01-26 16:46:19.546 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:46:20 compute-0 nova_compute[239965]: 2026-01-26 16:46:20.279 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:20 compute-0 nova_compute[239965]: 2026-01-26 16:46:20.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:20 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:21 compute-0 nova_compute[239965]: 2026-01-26 16:46:21.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:23 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:46:23 compute-0 nova_compute[239965]: 2026-01-26 16:46:23.869 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:24 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:25 compute-0 nova_compute[239965]: 2026-01-26 16:46:25.281 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:25 compute-0 nova_compute[239965]: 2026-01-26 16:46:25.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:46:25 compute-0 nova_compute[239965]: 2026-01-26 16:46:25.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:46:25 compute-0 nova_compute[239965]: 2026-01-26 16:46:25.549 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:46:25 compute-0 nova_compute[239965]: 2026-01-26 16:46:25.549 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:46:25 compute-0 nova_compute[239965]: 2026-01-26 16:46:25.549 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:46:25 compute-0 nova_compute[239965]: 2026-01-26 16:46:25.549 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:46:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:46:28
Jan 26 16:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'images', 'volumes', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'vms']
Jan 26 16:46:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:46:28 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:46:28 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:28 compute-0 nova_compute[239965]: 2026-01-26 16:46:28.872 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:30 compute-0 nova_compute[239965]: 2026-01-26 16:46:30.282 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:46:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:46:30 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:46:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:46:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:46:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:46:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:46:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:46:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:46:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:46:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:32 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:33 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:46:33 compute-0 nova_compute[239965]: 2026-01-26 16:46:33.874 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:35 compute-0 nova_compute[239965]: 2026-01-26 16:46:35.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:36 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:38 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:46:38 compute-0 nova_compute[239965]: 2026-01-26 16:46:38.876 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:40 compute-0 nova_compute[239965]: 2026-01-26 16:46:40.287 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:40 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:43 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:46:44 compute-0 nova_compute[239965]: 2026-01-26 16:46:43.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:44 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:45 compute-0 nova_compute[239965]: 2026-01-26 16:46:45.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:48 compute-0 sshd[182589]: Timeout before authentication for connection from 1.197.102.62 to 38.102.83.13, pid = 385117
Jan 26 16:46:48 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:46:48 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:48 compute-0 nova_compute[239965]: 2026-01-26 16:46:48.883 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:46:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:46:50 compute-0 nova_compute[239965]: 2026-01-26 16:46:50.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:52 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:53 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:46:53 compute-0 nova_compute[239965]: 2026-01-26 16:46:53.886 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:55 compute-0 nova_compute[239965]: 2026-01-26 16:46:55.293 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:56 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:46:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:46:58 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:46:58 compute-0 nova_compute[239965]: 2026-01-26 16:46:58.889 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:46:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:46:59.270 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:46:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:46:59.271 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:46:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:46:59.271 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:46:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:00 compute-0 nova_compute[239965]: 2026-01-26 16:47:00.297 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:47:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:47:00 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:47:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:03 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:47:03 compute-0 nova_compute[239965]: 2026-01-26 16:47:03.979 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:04 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:47:05 compute-0 nova_compute[239965]: 2026-01-26 16:47:05.299 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:08 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:47:08 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:47:08 compute-0 nova_compute[239965]: 2026-01-26 16:47:08.982 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:10 compute-0 nova_compute[239965]: 2026-01-26 16:47:10.300 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:12 compute-0 sshd-session[386280]: Invalid user sol from 45.148.10.240 port 50492
Jan 26 16:47:12 compute-0 sshd-session[386280]: Connection closed by invalid user sol 45.148.10.240 port 50492 [preauth]
Jan 26 16:47:12 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:47:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:13 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:47:14 compute-0 nova_compute[239965]: 2026-01-26 16:47:14.012 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:15 compute-0 nova_compute[239965]: 2026-01-26 16:47:15.301 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:16 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:47:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:18 compute-0 ceph-mds[96626]: mds.0.4 skipping upkeep work because connection to Monitors appears laggy
Jan 26 16:47:19 compute-0 nova_compute[239965]: 2026-01-26 16:47:19.014 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:20 compute-0 nova_compute[239965]: 2026-01-26 16:47:20.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:20 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx missed beacon ack from the monitors
Jan 26 16:47:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:21 compute-0 podman[386236]: 2026-01-26 16:47:21.789508107 +0000 UTC m=+79.723350485 container create a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_banzai, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:47:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).mds e6 check_health: resetting beacon timeouts due to mon delay (slow election?) of 8e+01 seconds
Jan 26 16:47:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:47:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 2627 at 2026-01-26T16:46:01.394129+0000)
Jan 26 16:47:21 compute-0 ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0[75136]: 2026-01-26T16:47:21.805+0000 7f258f63d640 -1 mon.compute-0@0(leader) e1 get_health_metrics reporting 1 slow ops, oldest is log(1 entries from seq 2627 at 2026-01-26T16:46:01.394129+0000)
Jan 26 16:47:21 compute-0 ceph-mds[96626]: mds.beacon.cephfs.compute-0.slnfyx  MDS is no longer laggy
Jan 26 16:47:21 compute-0 ceph-mon[75140]: pgmap v2888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:21 compute-0 systemd[1]: Started libpod-conmon-a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de.scope.
Jan 26 16:47:21 compute-0 podman[386250]: 2026-01-26 16:47:21.95299443 +0000 UTC m=+70.631438508 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:47:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:47:22 compute-0 podman[386236]: 2026-01-26 16:47:22.097202679 +0000 UTC m=+80.031045087 container init a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_banzai, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:47:22 compute-0 podman[386236]: 2026-01-26 16:47:22.107096312 +0000 UTC m=+80.040938690 container start a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_banzai, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:47:22 compute-0 systemd[1]: libpod-a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de.scope: Deactivated successfully.
Jan 26 16:47:22 compute-0 dreamy_banzai[386301]: 167 167
Jan 26 16:47:22 compute-0 conmon[386301]: conmon a1d8cf2919bc8a718253 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de.scope/container/memory.events
Jan 26 16:47:22 compute-0 podman[386236]: 2026-01-26 16:47:22.215939541 +0000 UTC m=+80.149781919 container attach a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_banzai, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:47:22 compute-0 podman[386236]: 2026-01-26 16:47:22.217163541 +0000 UTC m=+80.151005959 container died a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 16:47:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:47:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1309035442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:47:22 compute-0 nova_compute[239965]: 2026-01-26 16:47:22.287 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 56.737s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-72a2956ec4b4de2b1408e57e43f342ee8604e8566f2a476191ea0e43dd81df27-merged.mount: Deactivated successfully.
Jan 26 16:47:22 compute-0 podman[386236]: 2026-01-26 16:47:22.512653522 +0000 UTC m=+80.446495890 container remove a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:47:22 compute-0 nova_compute[239965]: 2026-01-26 16:47:22.554 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:47:22 compute-0 nova_compute[239965]: 2026-01-26 16:47:22.557 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3553MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:47:22 compute-0 nova_compute[239965]: 2026-01-26 16:47:22.558 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:22 compute-0 nova_compute[239965]: 2026-01-26 16:47:22.559 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:22 compute-0 systemd[1]: libpod-conmon-a1d8cf2919bc8a718253e3a018c1b3406bfaac267a13d804b06cd27788c977de.scope: Deactivated successfully.
Jan 26 16:47:22 compute-0 nova_compute[239965]: 2026-01-26 16:47:22.631 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:47:22 compute-0 nova_compute[239965]: 2026-01-26 16:47:22.632 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:47:22 compute-0 podman[386251]: 2026-01-26 16:47:22.637099745 +0000 UTC m=+71.315790519 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251202)
Jan 26 16:47:22 compute-0 nova_compute[239965]: 2026-01-26 16:47:22.659 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:22 compute-0 podman[386345]: 2026-01-26 16:47:22.739532285 +0000 UTC m=+0.070595888 container create 15a5ded32af16e4e8f533a04773f6507b577af6a7124f1728aa6d83301adeab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lichterman, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:47:22 compute-0 systemd[1]: Started libpod-conmon-15a5ded32af16e4e8f533a04773f6507b577af6a7124f1728aa6d83301adeab6.scope.
Jan 26 16:47:22 compute-0 podman[386345]: 2026-01-26 16:47:22.698988737 +0000 UTC m=+0.030052350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:47:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1501a10b272be6281199b4449f5752a5db234340d4d8a09f6755d89c63cebd7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1501a10b272be6281199b4449f5752a5db234340d4d8a09f6755d89c63cebd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1501a10b272be6281199b4449f5752a5db234340d4d8a09f6755d89c63cebd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:47:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1501a10b272be6281199b4449f5752a5db234340d4d8a09f6755d89c63cebd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:47:22 compute-0 podman[386345]: 2026-01-26 16:47:22.862221885 +0000 UTC m=+0.193285498 container init 15a5ded32af16e4e8f533a04773f6507b577af6a7124f1728aa6d83301adeab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:47:22 compute-0 podman[386345]: 2026-01-26 16:47:22.873739398 +0000 UTC m=+0.204802981 container start 15a5ded32af16e4e8f533a04773f6507b577af6a7124f1728aa6d83301adeab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lichterman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:47:22 compute-0 podman[386345]: 2026-01-26 16:47:22.917596587 +0000 UTC m=+0.248660200 container attach 15a5ded32af16e4e8f533a04773f6507b577af6a7124f1728aa6d83301adeab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lichterman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: pgmap v2928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1309035442' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]: {
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:     "0": [
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:         {
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "devices": [
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "/dev/loop3"
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             ],
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_name": "ceph_lv0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_size": "21470642176",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "name": "ceph_lv0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "tags": {
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.cluster_name": "ceph",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.crush_device_class": "",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.encrypted": "0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.objectstore": "bluestore",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.osd_id": "0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.type": "block",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.vdo": "0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.with_tpm": "0"
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             },
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "type": "block",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "vg_name": "ceph_vg0"
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:         }
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:     ],
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:     "1": [
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:         {
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "devices": [
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "/dev/loop4"
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             ],
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_name": "ceph_lv1",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_size": "21470642176",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "name": "ceph_lv1",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "tags": {
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.cluster_name": "ceph",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.crush_device_class": "",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.encrypted": "0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.objectstore": "bluestore",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.osd_id": "1",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.type": "block",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.vdo": "0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.with_tpm": "0"
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             },
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "type": "block",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "vg_name": "ceph_vg1"
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:         }
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:     ],
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:     "2": [
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:         {
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "devices": [
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "/dev/loop5"
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             ],
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_name": "ceph_lv2",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_size": "21470642176",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "name": "ceph_lv2",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "tags": {
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.cluster_name": "ceph",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.crush_device_class": "",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.encrypted": "0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.objectstore": "bluestore",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.osd_id": "2",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.type": "block",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.vdo": "0",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:                 "ceph.with_tpm": "0"
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             },
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "type": "block",
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:             "vg_name": "ceph_vg2"
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:         }
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]:     ]
Jan 26 16:47:23 compute-0 ecstatic_lichterman[386362]: }
Jan 26 16:47:23 compute-0 systemd[1]: libpod-15a5ded32af16e4e8f533a04773f6507b577af6a7124f1728aa6d83301adeab6.scope: Deactivated successfully.
Jan 26 16:47:23 compute-0 podman[386345]: 2026-01-26 16:47:23.235485689 +0000 UTC m=+0.566549272 container died 15a5ded32af16e4e8f533a04773f6507b577af6a7124f1728aa6d83301adeab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 16:47:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1501a10b272be6281199b4449f5752a5db234340d4d8a09f6755d89c63cebd7-merged.mount: Deactivated successfully.
Jan 26 16:47:23 compute-0 podman[386345]: 2026-01-26 16:47:23.355216476 +0000 UTC m=+0.686280059 container remove 15a5ded32af16e4e8f533a04773f6507b577af6a7124f1728aa6d83301adeab6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_lichterman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:47:23 compute-0 systemd[1]: libpod-conmon-15a5ded32af16e4e8f533a04773f6507b577af6a7124f1728aa6d83301adeab6.scope: Deactivated successfully.
Jan 26 16:47:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:47:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3392924711' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:47:23 compute-0 sudo[386198]: pam_unix(sudo:session): session closed for user root
Jan 26 16:47:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:23 compute-0 nova_compute[239965]: 2026-01-26 16:47:23.433 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.775s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:23 compute-0 nova_compute[239965]: 2026-01-26 16:47:23.448 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:47:23 compute-0 nova_compute[239965]: 2026-01-26 16:47:23.470 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:47:23 compute-0 nova_compute[239965]: 2026-01-26 16:47:23.472 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:47:23 compute-0 nova_compute[239965]: 2026-01-26 16:47:23.473 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:23 compute-0 sudo[386402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:47:23 compute-0 sudo[386402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:47:23 compute-0 sudo[386402]: pam_unix(sudo:session): session closed for user root
Jan 26 16:47:23 compute-0 sudo[386427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:47:23 compute-0 sudo[386427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:47:23 compute-0 podman[386463]: 2026-01-26 16:47:23.953348334 +0000 UTC m=+0.052713658 container create 67afdc89321058c01a52fe3b2068cadbf8244fe8ae207dbf19befe19b67bfebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:47:24 compute-0 systemd[1]: Started libpod-conmon-67afdc89321058c01a52fe3b2068cadbf8244fe8ae207dbf19befe19b67bfebc.scope.
Jan 26 16:47:24 compute-0 nova_compute[239965]: 2026-01-26 16:47:24.016 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:24 compute-0 podman[386463]: 2026-01-26 16:47:23.928621036 +0000 UTC m=+0.027986390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:47:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3392924711' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:47:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:47:24 compute-0 podman[386463]: 2026-01-26 16:47:24.061377613 +0000 UTC m=+0.160742967 container init 67afdc89321058c01a52fe3b2068cadbf8244fe8ae207dbf19befe19b67bfebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_leavitt, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:47:24 compute-0 podman[386463]: 2026-01-26 16:47:24.071232105 +0000 UTC m=+0.170597429 container start 67afdc89321058c01a52fe3b2068cadbf8244fe8ae207dbf19befe19b67bfebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:47:24 compute-0 heuristic_leavitt[386480]: 167 167
Jan 26 16:47:24 compute-0 podman[386463]: 2026-01-26 16:47:24.076859304 +0000 UTC m=+0.176224648 container attach 67afdc89321058c01a52fe3b2068cadbf8244fe8ae207dbf19befe19b67bfebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:47:24 compute-0 systemd[1]: libpod-67afdc89321058c01a52fe3b2068cadbf8244fe8ae207dbf19befe19b67bfebc.scope: Deactivated successfully.
Jan 26 16:47:24 compute-0 podman[386463]: 2026-01-26 16:47:24.080142634 +0000 UTC m=+0.179507978 container died 67afdc89321058c01a52fe3b2068cadbf8244fe8ae207dbf19befe19b67bfebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:47:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6b4f81324323df072ec6336dc90c10fd74b5fac41d08d8630147ede1e914c0e-merged.mount: Deactivated successfully.
Jan 26 16:47:24 compute-0 podman[386463]: 2026-01-26 16:47:24.158577784 +0000 UTC m=+0.257943108 container remove 67afdc89321058c01a52fe3b2068cadbf8244fe8ae207dbf19befe19b67bfebc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_leavitt, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:47:24 compute-0 systemd[1]: libpod-conmon-67afdc89321058c01a52fe3b2068cadbf8244fe8ae207dbf19befe19b67bfebc.scope: Deactivated successfully.
Jan 26 16:47:24 compute-0 podman[386502]: 2026-01-26 16:47:24.352180479 +0000 UTC m=+0.045945442 container create cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:47:24 compute-0 systemd[1]: Started libpod-conmon-cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba.scope.
Jan 26 16:47:24 compute-0 podman[386502]: 2026-01-26 16:47:24.332258939 +0000 UTC m=+0.026023802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:47:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f88daad0825720acd850d2106dd41c0b8bcd6ebd44050b8a4248ba96d6bf57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f88daad0825720acd850d2106dd41c0b8bcd6ebd44050b8a4248ba96d6bf57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f88daad0825720acd850d2106dd41c0b8bcd6ebd44050b8a4248ba96d6bf57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:47:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f88daad0825720acd850d2106dd41c0b8bcd6ebd44050b8a4248ba96d6bf57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:47:24 compute-0 podman[386502]: 2026-01-26 16:47:24.464080023 +0000 UTC m=+0.157844866 container init cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:47:24 compute-0 podman[386502]: 2026-01-26 16:47:24.471465274 +0000 UTC m=+0.165230117 container start cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 26 16:47:24 compute-0 podman[386502]: 2026-01-26 16:47:24.47498456 +0000 UTC m=+0.168749433 container attach cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:47:25 compute-0 ceph-mon[75140]: pgmap v2929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:25 compute-0 lvm[386596]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:47:25 compute-0 lvm[386597]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:47:25 compute-0 lvm[386596]: VG ceph_vg0 finished
Jan 26 16:47:25 compute-0 lvm[386597]: VG ceph_vg1 finished
Jan 26 16:47:25 compute-0 lvm[386599]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:47:25 compute-0 lvm[386599]: VG ceph_vg2 finished
Jan 26 16:47:25 compute-0 nova_compute[239965]: 2026-01-26 16:47:25.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:25 compute-0 priceless_booth[386518]: {}
Jan 26 16:47:25 compute-0 systemd[1]: libpod-cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba.scope: Deactivated successfully.
Jan 26 16:47:25 compute-0 systemd[1]: libpod-cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba.scope: Consumed 1.511s CPU time.
Jan 26 16:47:25 compute-0 podman[386502]: 2026-01-26 16:47:25.381697143 +0000 UTC m=+1.075461986 container died cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_booth, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:47:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4f88daad0825720acd850d2106dd41c0b8bcd6ebd44050b8a4248ba96d6bf57-merged.mount: Deactivated successfully.
Jan 26 16:47:25 compute-0 podman[386502]: 2026-01-26 16:47:25.795877335 +0000 UTC m=+1.489642188 container remove cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:47:25 compute-0 sudo[386427]: pam_unix(sudo:session): session closed for user root
Jan 26 16:47:25 compute-0 systemd[1]: libpod-conmon-cc11cc3b816c59f85f9feecf80a5112a0fb7228ec564e81d5f86bff7d40f24ba.scope: Deactivated successfully.
Jan 26 16:47:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:47:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:47:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:47:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:47:25 compute-0 sudo[386614]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:47:25 compute-0 sudo[386614]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:47:25 compute-0 sudo[386614]: pam_unix(sudo:session): session closed for user root
Jan 26 16:47:26 compute-0 ceph-mon[75140]: log_channel(cluster) log [WRN] : Health check failed: 1 slow ops, oldest one blocked for 79 sec, mon.compute-0 has slow ops (SLOW_OPS)
Jan 26 16:47:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:47:26 compute-0 ceph-mon[75140]: pgmap v2930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:26 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:47:26 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:47:26 compute-0 ceph-mon[75140]: Health check failed: 1 slow ops, oldest one blocked for 79 sec, mon.compute-0 has slow ops (SLOW_OPS)
Jan 26 16:47:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.466 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.467 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.499 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.499 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.499 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.517 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.517 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.517 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.517 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.518 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.518 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.518 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.518 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.519 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.543 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:47:27 compute-0 nova_compute[239965]: 2026-01-26 16:47:27.544 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:47:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4159997256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:47:28 compute-0 nova_compute[239965]: 2026-01-26 16:47:28.155 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:28 compute-0 nova_compute[239965]: 2026-01-26 16:47:28.366 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:47:28 compute-0 nova_compute[239965]: 2026-01-26 16:47:28.367 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3563MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:47:28 compute-0 nova_compute[239965]: 2026-01-26 16:47:28.367 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:28 compute-0 nova_compute[239965]: 2026-01-26 16:47:28.368 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:28 compute-0 nova_compute[239965]: 2026-01-26 16:47:28.431 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:47:28 compute-0 nova_compute[239965]: 2026-01-26 16:47:28.433 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:47:28 compute-0 nova_compute[239965]: 2026-01-26 16:47:28.527 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:47:28
Jan 26 16:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'vms', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', 'volumes', '.mgr']
Jan 26 16:47:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:47:28 compute-0 ceph-mon[75140]: pgmap v2931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4159997256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:47:29 compute-0 nova_compute[239965]: 2026-01-26 16:47:29.019 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:47:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/189603558' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:47:29 compute-0 nova_compute[239965]: 2026-01-26 16:47:29.142 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:47:29 compute-0 nova_compute[239965]: 2026-01-26 16:47:29.148 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:47:29 compute-0 nova_compute[239965]: 2026-01-26 16:47:29.163 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:47:29 compute-0 nova_compute[239965]: 2026-01-26 16:47:29.165 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:47:29 compute-0 nova_compute[239965]: 2026-01-26 16:47:29.165 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.797s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/189603558' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:47:30 compute-0 nova_compute[239965]: 2026-01-26 16:47:30.308 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:47:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:47:30 compute-0 ceph-mon[75140]: pgmap v2932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:47:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:47:31 compute-0 ceph-mon[75140]: log_channel(cluster) log [INF] : Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 79 sec, mon.compute-0 has slow ops)
Jan 26 16:47:31 compute-0 ceph-mon[75140]: Health check cleared: SLOW_OPS (was: 1 slow ops, oldest one blocked for 79 sec, mon.compute-0 has slow ops)
Jan 26 16:47:32 compute-0 ceph-mon[75140]: pgmap v2933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:34 compute-0 nova_compute[239965]: 2026-01-26 16:47:34.023 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:35 compute-0 ceph-mon[75140]: pgmap v2934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:35 compute-0 nova_compute[239965]: 2026-01-26 16:47:35.341 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:47:37 compute-0 ceph-mon[75140]: pgmap v2935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:39 compute-0 nova_compute[239965]: 2026-01-26 16:47:39.025 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:39 compute-0 ceph-mon[75140]: pgmap v2936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:40 compute-0 ceph-mon[75140]: pgmap v2937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:40 compute-0 nova_compute[239965]: 2026-01-26 16:47:40.343 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:47:42 compute-0 ceph-mon[75140]: pgmap v2938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:44 compute-0 nova_compute[239965]: 2026-01-26 16:47:44.026 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:44 compute-0 ceph-mon[75140]: pgmap v2939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:45 compute-0 nova_compute[239965]: 2026-01-26 16:47:45.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:46 compute-0 ceph-mon[75140]: pgmap v2940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:47:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:48 compute-0 ceph-mon[75140]: pgmap v2941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:47:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2839756767' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:47:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:47:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2839756767' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:47:49 compute-0 nova_compute[239965]: 2026-01-26 16:47:49.028 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:47:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2839756767' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:47:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2839756767' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:47:50 compute-0 nova_compute[239965]: 2026-01-26 16:47:50.346 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:50 compute-0 ceph-mon[75140]: pgmap v2942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:47:52 compute-0 podman[386683]: 2026-01-26 16:47:52.420811398 +0000 UTC m=+0.102941574 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 16:47:52 compute-0 ceph-mon[75140]: pgmap v2943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:53 compute-0 podman[386703]: 2026-01-26 16:47:53.453085455 +0000 UTC m=+0.147124426 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 26 16:47:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:54 compute-0 nova_compute[239965]: 2026-01-26 16:47:54.029 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:54 compute-0 ceph-mon[75140]: pgmap v2944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:55 compute-0 nova_compute[239965]: 2026-01-26 16:47:55.349 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:56 compute-0 sshd-session[386729]: Received disconnect from 91.224.92.190 port 13540:11:  [preauth]
Jan 26 16:47:56 compute-0 sshd-session[386729]: Disconnected from authenticating user root 91.224.92.190 port 13540 [preauth]
Jan 26 16:47:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:47:56 compute-0 ceph-mon[75140]: pgmap v2945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:58 compute-0 ceph-mon[75140]: pgmap v2946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:47:59 compute-0 nova_compute[239965]: 2026-01-26 16:47:59.031 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:47:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:47:59.271 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:47:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:47:59.271 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:47:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:47:59.271 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:47:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:00 compute-0 nova_compute[239965]: 2026-01-26 16:48:00.350 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:48:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:48:00 compute-0 ceph-mon[75140]: pgmap v2947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:02 compute-0 ceph-mon[75140]: pgmap v2948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:04 compute-0 nova_compute[239965]: 2026-01-26 16:48:04.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:04 compute-0 ceph-mon[75140]: pgmap v2949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:05 compute-0 nova_compute[239965]: 2026-01-26 16:48:05.353 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:06 compute-0 ceph-mon[75140]: pgmap v2950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:08 compute-0 ceph-mon[75140]: pgmap v2951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:09 compute-0 nova_compute[239965]: 2026-01-26 16:48:09.035 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:10 compute-0 nova_compute[239965]: 2026-01-26 16:48:10.354 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:10 compute-0 ceph-mon[75140]: pgmap v2952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:13 compute-0 ceph-mon[75140]: pgmap v2953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:14 compute-0 nova_compute[239965]: 2026-01-26 16:48:14.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:15 compute-0 ceph-mon[75140]: pgmap v2954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:15 compute-0 nova_compute[239965]: 2026-01-26 16:48:15.355 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:17 compute-0 ceph-mon[75140]: pgmap v2955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:18 compute-0 ceph-mon[75140]: pgmap v2956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:19 compute-0 nova_compute[239965]: 2026-01-26 16:48:19.037 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:20 compute-0 nova_compute[239965]: 2026-01-26 16:48:20.382 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:20 compute-0 ceph-mon[75140]: pgmap v2957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:22 compute-0 ceph-mon[75140]: pgmap v2958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:23 compute-0 podman[386731]: 2026-01-26 16:48:23.380017771 +0000 UTC m=+0.065058035 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:48:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:24 compute-0 nova_compute[239965]: 2026-01-26 16:48:24.039 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:24 compute-0 podman[386751]: 2026-01-26 16:48:24.394229857 +0000 UTC m=+0.088694106 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 26 16:48:24 compute-0 ceph-mon[75140]: pgmap v2959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:25 compute-0 nova_compute[239965]: 2026-01-26 16:48:25.386 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:26 compute-0 sudo[386777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:48:26 compute-0 sudo[386777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:26 compute-0 sudo[386777]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:26 compute-0 sudo[386802]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 16:48:26 compute-0 sudo[386802]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:26 compute-0 podman[386870]: 2026-01-26 16:48:26.586966832 +0000 UTC m=+0.066931541 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:48:26 compute-0 podman[386870]: 2026-01-26 16:48:26.687518116 +0000 UTC m=+0.167482805 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:48:26 compute-0 ceph-mon[75140]: pgmap v2960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:27 compute-0 sudo[386802]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:48:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:48:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:27 compute-0 sudo[387055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:48:27 compute-0 sudo[387055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:27 compute-0 sudo[387055]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:27 compute-0 sudo[387080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:48:27 compute-0 sudo[387080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:28 compute-0 sudo[387080]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:48:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:48:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:48:28 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:48:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:48:28 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:48:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:48:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:48:28 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:48:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:48:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:48:28 compute-0 sudo[387136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:48:28 compute-0 sudo[387136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:28 compute-0 sudo[387136]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:28 compute-0 sudo[387161]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:48:28 compute-0 sudo[387161]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:28 compute-0 ceph-mon[75140]: pgmap v2961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:48:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:48:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:48:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:48:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:48:28
Jan 26 16:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'images', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'vms', 'volumes']
Jan 26 16:48:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:48:28 compute-0 podman[387198]: 2026-01-26 16:48:28.766252218 +0000 UTC m=+0.041530908 container create 1c2ab3865a6be057a021ae8f80505909fc04c31ded5b062bb4859e53ca02b2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wescoff, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:48:28 compute-0 systemd[1]: Started libpod-conmon-1c2ab3865a6be057a021ae8f80505909fc04c31ded5b062bb4859e53ca02b2ef.scope.
Jan 26 16:48:28 compute-0 podman[387198]: 2026-01-26 16:48:28.747456638 +0000 UTC m=+0.022735348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:48:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:48:28 compute-0 podman[387198]: 2026-01-26 16:48:28.860097658 +0000 UTC m=+0.135376368 container init 1c2ab3865a6be057a021ae8f80505909fc04c31ded5b062bb4859e53ca02b2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:48:28 compute-0 podman[387198]: 2026-01-26 16:48:28.868733429 +0000 UTC m=+0.144012119 container start 1c2ab3865a6be057a021ae8f80505909fc04c31ded5b062bb4859e53ca02b2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 16:48:28 compute-0 podman[387198]: 2026-01-26 16:48:28.872095182 +0000 UTC m=+0.147373972 container attach 1c2ab3865a6be057a021ae8f80505909fc04c31ded5b062bb4859e53ca02b2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:48:28 compute-0 vigorous_wescoff[387215]: 167 167
Jan 26 16:48:28 compute-0 systemd[1]: libpod-1c2ab3865a6be057a021ae8f80505909fc04c31ded5b062bb4859e53ca02b2ef.scope: Deactivated successfully.
Jan 26 16:48:28 compute-0 podman[387198]: 2026-01-26 16:48:28.874998813 +0000 UTC m=+0.150277533 container died 1c2ab3865a6be057a021ae8f80505909fc04c31ded5b062bb4859e53ca02b2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wescoff, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:48:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-54244e128806e9ac1a47b94fb08fd20029ee7d6e9a6c50e5ca4543d7b378f7e0-merged.mount: Deactivated successfully.
Jan 26 16:48:28 compute-0 podman[387198]: 2026-01-26 16:48:28.918370976 +0000 UTC m=+0.193649686 container remove 1c2ab3865a6be057a021ae8f80505909fc04c31ded5b062bb4859e53ca02b2ef (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:48:28 compute-0 systemd[1]: libpod-conmon-1c2ab3865a6be057a021ae8f80505909fc04c31ded5b062bb4859e53ca02b2ef.scope: Deactivated successfully.
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.040 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:29 compute-0 podman[387240]: 2026-01-26 16:48:29.070558226 +0000 UTC m=+0.035986583 container create 37869f1b9606f9e472b8366037631e3a7f70a8bf34fbc47269abaeb1a0fbd812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_austin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:48:29 compute-0 systemd[1]: Started libpod-conmon-37869f1b9606f9e472b8366037631e3a7f70a8bf34fbc47269abaeb1a0fbd812.scope.
Jan 26 16:48:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4825de5dcd5b54a4c71e56c5ca987214733bef45b3badd320266959a3bea24/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4825de5dcd5b54a4c71e56c5ca987214733bef45b3badd320266959a3bea24/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4825de5dcd5b54a4c71e56c5ca987214733bef45b3badd320266959a3bea24/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4825de5dcd5b54a4c71e56c5ca987214733bef45b3badd320266959a3bea24/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4825de5dcd5b54a4c71e56c5ca987214733bef45b3badd320266959a3bea24/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:29 compute-0 podman[387240]: 2026-01-26 16:48:29.056147222 +0000 UTC m=+0.021575609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:48:29 compute-0 podman[387240]: 2026-01-26 16:48:29.161768921 +0000 UTC m=+0.127197308 container init 37869f1b9606f9e472b8366037631e3a7f70a8bf34fbc47269abaeb1a0fbd812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_austin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.166 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.168 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.169 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.169 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:48:29 compute-0 podman[387240]: 2026-01-26 16:48:29.170110076 +0000 UTC m=+0.135538483 container start 37869f1b9606f9e472b8366037631e3a7f70a8bf34fbc47269abaeb1a0fbd812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_austin, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:48:29 compute-0 podman[387240]: 2026-01-26 16:48:29.174211546 +0000 UTC m=+0.139639953 container attach 37869f1b9606f9e472b8366037631e3a7f70a8bf34fbc47269abaeb1a0fbd812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_austin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.185 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.185 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.186 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.187 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.187 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.188 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.188 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.188 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.189 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.251 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.252 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.252 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.253 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.253 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:29 compute-0 cranky_austin[387256]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:48:29 compute-0 cranky_austin[387256]: --> All data devices are unavailable
Jan 26 16:48:29 compute-0 systemd[1]: libpod-37869f1b9606f9e472b8366037631e3a7f70a8bf34fbc47269abaeb1a0fbd812.scope: Deactivated successfully.
Jan 26 16:48:29 compute-0 podman[387240]: 2026-01-26 16:48:29.640176995 +0000 UTC m=+0.605605402 container died 37869f1b9606f9e472b8366037631e3a7f70a8bf34fbc47269abaeb1a0fbd812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_austin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:48:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f4825de5dcd5b54a4c71e56c5ca987214733bef45b3badd320266959a3bea24-merged.mount: Deactivated successfully.
Jan 26 16:48:29 compute-0 podman[387240]: 2026-01-26 16:48:29.693726618 +0000 UTC m=+0.659154975 container remove 37869f1b9606f9e472b8366037631e3a7f70a8bf34fbc47269abaeb1a0fbd812 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_austin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:48:29 compute-0 systemd[1]: libpod-conmon-37869f1b9606f9e472b8366037631e3a7f70a8bf34fbc47269abaeb1a0fbd812.scope: Deactivated successfully.
Jan 26 16:48:29 compute-0 sudo[387161]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:29 compute-0 sudo[387309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:48:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:48:29 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2702730573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:48:29 compute-0 sudo[387309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:29 compute-0 sudo[387309]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.829 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:29 compute-0 sudo[387336]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:48:29 compute-0 sudo[387336]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.966 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.967 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3572MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.968 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:48:29 compute-0 nova_compute[239965]: 2026-01-26 16:48:29.968 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:48:30 compute-0 nova_compute[239965]: 2026-01-26 16:48:30.027 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:48:30 compute-0 nova_compute[239965]: 2026-01-26 16:48:30.028 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:48:30 compute-0 podman[387372]: 2026-01-26 16:48:30.151858455 +0000 UTC m=+0.043339883 container create 44b47b6243a6f63c31483111489fbdd2b52163e0fac80e719fb25721b056b2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:48:30 compute-0 systemd[1]: Started libpod-conmon-44b47b6243a6f63c31483111489fbdd2b52163e0fac80e719fb25721b056b2b5.scope.
Jan 26 16:48:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:48:30 compute-0 nova_compute[239965]: 2026-01-26 16:48:30.228 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:48:30 compute-0 podman[387372]: 2026-01-26 16:48:30.135008772 +0000 UTC m=+0.026490230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:48:30 compute-0 podman[387372]: 2026-01-26 16:48:30.236302434 +0000 UTC m=+0.127783882 container init 44b47b6243a6f63c31483111489fbdd2b52163e0fac80e719fb25721b056b2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:48:30 compute-0 podman[387372]: 2026-01-26 16:48:30.243247694 +0000 UTC m=+0.134729122 container start 44b47b6243a6f63c31483111489fbdd2b52163e0fac80e719fb25721b056b2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:48:30 compute-0 agitated_ritchie[387388]: 167 167
Jan 26 16:48:30 compute-0 systemd[1]: libpod-44b47b6243a6f63c31483111489fbdd2b52163e0fac80e719fb25721b056b2b5.scope: Deactivated successfully.
Jan 26 16:48:30 compute-0 podman[387372]: 2026-01-26 16:48:30.249797625 +0000 UTC m=+0.141279073 container attach 44b47b6243a6f63c31483111489fbdd2b52163e0fac80e719fb25721b056b2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 16:48:30 compute-0 podman[387372]: 2026-01-26 16:48:30.250467401 +0000 UTC m=+0.141948829 container died 44b47b6243a6f63c31483111489fbdd2b52163e0fac80e719fb25721b056b2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e4f6be36ac78b10c0907f3fe6f53971f8428f71520c5e5c7792b5a4311edd6c-merged.mount: Deactivated successfully.
Jan 26 16:48:30 compute-0 podman[387372]: 2026-01-26 16:48:30.294566982 +0000 UTC m=+0.186048410 container remove 44b47b6243a6f63c31483111489fbdd2b52163e0fac80e719fb25721b056b2b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_ritchie, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:48:30 compute-0 systemd[1]: libpod-conmon-44b47b6243a6f63c31483111489fbdd2b52163e0fac80e719fb25721b056b2b5.scope: Deactivated successfully.
Jan 26 16:48:30 compute-0 nova_compute[239965]: 2026-01-26 16:48:30.340 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:48:30 compute-0 nova_compute[239965]: 2026-01-26 16:48:30.342 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:48:30 compute-0 nova_compute[239965]: 2026-01-26 16:48:30.356 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:48:30 compute-0 nova_compute[239965]: 2026-01-26 16:48:30.378 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:48:30 compute-0 nova_compute[239965]: 2026-01-26 16:48:30.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:30 compute-0 nova_compute[239965]: 2026-01-26 16:48:30.391 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:48:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:48:30 compute-0 podman[387412]: 2026-01-26 16:48:30.473407124 +0000 UTC m=+0.056872424 container create cc2583892becabed2aed7fb3861d220506e7c00495f7382a6479e614ba28e36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:48:30 compute-0 systemd[1]: Started libpod-conmon-cc2583892becabed2aed7fb3861d220506e7c00495f7382a6479e614ba28e36a.scope.
Jan 26 16:48:30 compute-0 podman[387412]: 2026-01-26 16:48:30.447072919 +0000 UTC m=+0.030538249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:48:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0b311f31f291a14184c7f9a5ea4435690de23efb8aa17c63ed607e4b4efad0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0b311f31f291a14184c7f9a5ea4435690de23efb8aa17c63ed607e4b4efad0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0b311f31f291a14184c7f9a5ea4435690de23efb8aa17c63ed607e4b4efad0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a0b311f31f291a14184c7f9a5ea4435690de23efb8aa17c63ed607e4b4efad0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:30 compute-0 podman[387412]: 2026-01-26 16:48:30.56338186 +0000 UTC m=+0.146847200 container init cc2583892becabed2aed7fb3861d220506e7c00495f7382a6479e614ba28e36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 26 16:48:30 compute-0 podman[387412]: 2026-01-26 16:48:30.571324634 +0000 UTC m=+0.154789934 container start cc2583892becabed2aed7fb3861d220506e7c00495f7382a6479e614ba28e36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:48:30 compute-0 podman[387412]: 2026-01-26 16:48:30.575449765 +0000 UTC m=+0.158915065 container attach cc2583892becabed2aed7fb3861d220506e7c00495f7382a6479e614ba28e36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:48:30 compute-0 ceph-mon[75140]: pgmap v2962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2702730573' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:48:30 compute-0 friendly_mayer[387431]: {
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:     "0": [
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:         {
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "devices": [
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "/dev/loop3"
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             ],
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_name": "ceph_lv0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_size": "21470642176",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "name": "ceph_lv0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "tags": {
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.cluster_name": "ceph",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.crush_device_class": "",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.encrypted": "0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.objectstore": "bluestore",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.osd_id": "0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.type": "block",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.vdo": "0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.with_tpm": "0"
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             },
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "type": "block",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "vg_name": "ceph_vg0"
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:         }
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:     ],
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:     "1": [
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:         {
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "devices": [
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "/dev/loop4"
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             ],
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_name": "ceph_lv1",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_size": "21470642176",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "name": "ceph_lv1",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "tags": {
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.cluster_name": "ceph",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.crush_device_class": "",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.encrypted": "0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.objectstore": "bluestore",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.osd_id": "1",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.type": "block",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.vdo": "0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.with_tpm": "0"
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             },
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "type": "block",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "vg_name": "ceph_vg1"
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:         }
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:     ],
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:     "2": [
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:         {
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "devices": [
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "/dev/loop5"
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             ],
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_name": "ceph_lv2",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_size": "21470642176",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "name": "ceph_lv2",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "tags": {
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.cluster_name": "ceph",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.crush_device_class": "",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.encrypted": "0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.objectstore": "bluestore",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.osd_id": "2",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.type": "block",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.vdo": "0",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:                 "ceph.with_tpm": "0"
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             },
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "type": "block",
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:             "vg_name": "ceph_vg2"
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:         }
Jan 26 16:48:30 compute-0 friendly_mayer[387431]:     ]
Jan 26 16:48:30 compute-0 friendly_mayer[387431]: }
Jan 26 16:48:30 compute-0 systemd[1]: libpod-cc2583892becabed2aed7fb3861d220506e7c00495f7382a6479e614ba28e36a.scope: Deactivated successfully.
Jan 26 16:48:30 compute-0 podman[387412]: 2026-01-26 16:48:30.90657603 +0000 UTC m=+0.490041350 container died cc2583892becabed2aed7fb3861d220506e7c00495f7382a6479e614ba28e36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:48:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a0b311f31f291a14184c7f9a5ea4435690de23efb8aa17c63ed607e4b4efad0-merged.mount: Deactivated successfully.
Jan 26 16:48:30 compute-0 podman[387412]: 2026-01-26 16:48:30.949493851 +0000 UTC m=+0.532959151 container remove cc2583892becabed2aed7fb3861d220506e7c00495f7382a6479e614ba28e36a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:48:30 compute-0 systemd[1]: libpod-conmon-cc2583892becabed2aed7fb3861d220506e7c00495f7382a6479e614ba28e36a.scope: Deactivated successfully.
Jan 26 16:48:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:48:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1500370141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:48:30 compute-0 sudo[387336]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:31 compute-0 nova_compute[239965]: 2026-01-26 16:48:31.006 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:48:31 compute-0 nova_compute[239965]: 2026-01-26 16:48:31.016 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:48:31 compute-0 nova_compute[239965]: 2026-01-26 16:48:31.030 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:48:31 compute-0 nova_compute[239965]: 2026-01-26 16:48:31.032 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:48:31 compute-0 nova_compute[239965]: 2026-01-26 16:48:31.032 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.064s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:48:31 compute-0 sudo[387471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:48:31 compute-0 sudo[387471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:31 compute-0 sudo[387471]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:31 compute-0 sudo[387496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:48:31 compute-0 sudo[387496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:31 compute-0 podman[387532]: 2026-01-26 16:48:31.380677128 +0000 UTC m=+0.043553298 container create 3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hodgkin, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 16:48:31 compute-0 systemd[1]: Started libpod-conmon-3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be.scope.
Jan 26 16:48:31 compute-0 podman[387532]: 2026-01-26 16:48:31.361867177 +0000 UTC m=+0.024743357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:48:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:48:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:31 compute-0 podman[387532]: 2026-01-26 16:48:31.482461383 +0000 UTC m=+0.145337573 container init 3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hodgkin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:48:31 compute-0 podman[387532]: 2026-01-26 16:48:31.489940596 +0000 UTC m=+0.152816766 container start 3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hodgkin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:48:31 compute-0 podman[387532]: 2026-01-26 16:48:31.49379242 +0000 UTC m=+0.156668590 container attach 3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:48:31 compute-0 friendly_hodgkin[387548]: 167 167
Jan 26 16:48:31 compute-0 systemd[1]: libpod-3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be.scope: Deactivated successfully.
Jan 26 16:48:31 compute-0 conmon[387548]: conmon 3171f86cb9a9c2d89b69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be.scope/container/memory.events
Jan 26 16:48:31 compute-0 podman[387553]: 2026-01-26 16:48:31.54396069 +0000 UTC m=+0.029831032 container died 3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 16:48:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-15e3b31c3c3d5e7ae57089531902cfe7838864b8642c09831fb75e60fd3167c9-merged.mount: Deactivated successfully.
Jan 26 16:48:31 compute-0 podman[387553]: 2026-01-26 16:48:31.589308611 +0000 UTC m=+0.075178943 container remove 3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_hodgkin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 16:48:31 compute-0 systemd[1]: libpod-conmon-3171f86cb9a9c2d89b69adcb6fc0e3e7970e78475ea91fb1569b62e0d21197be.scope: Deactivated successfully.
Jan 26 16:48:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1500370141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:48:31 compute-0 podman[387575]: 2026-01-26 16:48:31.772669974 +0000 UTC m=+0.049054153 container create a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:48:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:31 compute-0 systemd[1]: Started libpod-conmon-a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782.scope.
Jan 26 16:48:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74aecf8d41aa008c27f627fd29fdfb427f32778bf1c31c00406be55feb497ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:31 compute-0 podman[387575]: 2026-01-26 16:48:31.749388934 +0000 UTC m=+0.025773163 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74aecf8d41aa008c27f627fd29fdfb427f32778bf1c31c00406be55feb497ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74aecf8d41aa008c27f627fd29fdfb427f32778bf1c31c00406be55feb497ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b74aecf8d41aa008c27f627fd29fdfb427f32778bf1c31c00406be55feb497ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:48:31 compute-0 podman[387575]: 2026-01-26 16:48:31.880113318 +0000 UTC m=+0.156497527 container init a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:48:31 compute-0 podman[387575]: 2026-01-26 16:48:31.889035557 +0000 UTC m=+0.165419736 container start a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:48:31 compute-0 podman[387575]: 2026-01-26 16:48:31.893330562 +0000 UTC m=+0.169714791 container attach a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:48:32 compute-0 lvm[387667]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:48:32 compute-0 lvm[387670]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:48:32 compute-0 lvm[387667]: VG ceph_vg0 finished
Jan 26 16:48:32 compute-0 lvm[387670]: VG ceph_vg1 finished
Jan 26 16:48:32 compute-0 lvm[387672]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:48:32 compute-0 lvm[387672]: VG ceph_vg2 finished
Jan 26 16:48:32 compute-0 lvm[387673]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:48:32 compute-0 lvm[387673]: VG ceph_vg1 finished
Jan 26 16:48:32 compute-0 lvm[387676]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:48:32 compute-0 lvm[387676]: VG ceph_vg1 finished
Jan 26 16:48:32 compute-0 exciting_curie[387591]: {}
Jan 26 16:48:32 compute-0 ceph-mon[75140]: pgmap v2963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:32 compute-0 systemd[1]: libpod-a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782.scope: Deactivated successfully.
Jan 26 16:48:32 compute-0 podman[387575]: 2026-01-26 16:48:32.699481607 +0000 UTC m=+0.975865816 container died a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:48:32 compute-0 systemd[1]: libpod-a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782.scope: Consumed 1.302s CPU time.
Jan 26 16:48:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b74aecf8d41aa008c27f627fd29fdfb427f32778bf1c31c00406be55feb497ee-merged.mount: Deactivated successfully.
Jan 26 16:48:32 compute-0 podman[387575]: 2026-01-26 16:48:32.764684655 +0000 UTC m=+1.041068834 container remove a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_curie, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:48:32 compute-0 systemd[1]: libpod-conmon-a05c2411f4d26c219604cddc8c6bed2535182229505e1b670b3336759d58a782.scope: Deactivated successfully.
Jan 26 16:48:32 compute-0 sudo[387496]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:48:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:48:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:32 compute-0 sudo[387688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:48:32 compute-0 sudo[387688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:48:32 compute-0 sudo[387688]: pam_unix(sudo:session): session closed for user root
Jan 26 16:48:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:48:34 compute-0 nova_compute[239965]: 2026-01-26 16:48:34.066 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:34 compute-0 ceph-mon[75140]: pgmap v2964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:35 compute-0 nova_compute[239965]: 2026-01-26 16:48:35.390 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:36 compute-0 nova_compute[239965]: 2026-01-26 16:48:36.366 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:48:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:37 compute-0 ceph-mon[75140]: pgmap v2965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:39 compute-0 ceph-mon[75140]: pgmap v2966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:39 compute-0 nova_compute[239965]: 2026-01-26 16:48:39.069 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:40 compute-0 nova_compute[239965]: 2026-01-26 16:48:40.393 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:41 compute-0 ceph-mon[75140]: pgmap v2967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:42 compute-0 ceph-mon[75140]: pgmap v2968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:44 compute-0 nova_compute[239965]: 2026-01-26 16:48:44.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:44 compute-0 ceph-mon[75140]: pgmap v2969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:45 compute-0 nova_compute[239965]: 2026-01-26 16:48:45.394 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:46 compute-0 ceph-mon[75140]: pgmap v2970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:47 compute-0 sshd-session[387714]: Invalid user squid from 176.120.22.13 port 55720
Jan 26 16:48:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:47 compute-0 sshd-session[387714]: Connection reset by invalid user squid 176.120.22.13 port 55720 [preauth]
Jan 26 16:48:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:48:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3091382957' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:48:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:48:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3091382957' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:48:48 compute-0 ceph-mon[75140]: pgmap v2971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3091382957' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:48:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3091382957' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:48:49 compute-0 nova_compute[239965]: 2026-01-26 16:48:49.087 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:48:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:49 compute-0 sshd-session[387716]: Connection reset by authenticating user root 176.120.22.13 port 55742 [preauth]
Jan 26 16:48:50 compute-0 nova_compute[239965]: 2026-01-26 16:48:50.395 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:50 compute-0 ceph-mon[75140]: pgmap v2972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:52 compute-0 ceph-mon[75140]: pgmap v2973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:52 compute-0 sshd-session[387718]: Connection reset by authenticating user root 176.120.22.13 port 55744 [preauth]
Jan 26 16:48:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:54 compute-0 nova_compute[239965]: 2026-01-26 16:48:54.088 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:54 compute-0 podman[387722]: 2026-01-26 16:48:54.379927554 +0000 UTC m=+0.059533220 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 16:48:54 compute-0 ceph-mon[75140]: pgmap v2974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:55 compute-0 nova_compute[239965]: 2026-01-26 16:48:55.432 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:55 compute-0 podman[387740]: 2026-01-26 16:48:55.457408779 +0000 UTC m=+0.147178798 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 16:48:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:55 compute-0 sshd-session[387720]: Connection reset by authenticating user root 176.120.22.13 port 53816 [preauth]
Jan 26 16:48:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:48:57 compute-0 ceph-mon[75140]: pgmap v2975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:57 compute-0 sshd-session[387766]: Invalid user admin from 176.120.22.13 port 53834
Jan 26 16:48:58 compute-0 sshd-session[387766]: Connection reset by invalid user admin 176.120.22.13 port 53834 [preauth]
Jan 26 16:48:58 compute-0 ceph-mon[75140]: pgmap v2976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:48:59 compute-0 nova_compute[239965]: 2026-01-26 16:48:59.089 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:48:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:48:59.271 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:48:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:48:59.272 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:48:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:48:59.272 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:48:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:49:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:49:00 compute-0 nova_compute[239965]: 2026-01-26 16:49:00.434 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:00 compute-0 ceph-mon[75140]: pgmap v2977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:02 compute-0 ceph-mon[75140]: pgmap v2978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:04 compute-0 nova_compute[239965]: 2026-01-26 16:49:04.091 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:04 compute-0 ceph-mon[75140]: pgmap v2979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:05 compute-0 nova_compute[239965]: 2026-01-26 16:49:05.436 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:06 compute-0 ceph-mon[75140]: pgmap v2980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:08 compute-0 ceph-mon[75140]: pgmap v2981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:09 compute-0 nova_compute[239965]: 2026-01-26 16:49:09.092 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:10 compute-0 nova_compute[239965]: 2026-01-26 16:49:10.438 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:10 compute-0 ceph-mon[75140]: pgmap v2982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:12 compute-0 ceph-mon[75140]: pgmap v2983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:13 compute-0 nova_compute[239965]: 2026-01-26 16:49:13.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:14 compute-0 nova_compute[239965]: 2026-01-26 16:49:14.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:14 compute-0 nova_compute[239965]: 2026-01-26 16:49:14.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:14 compute-0 nova_compute[239965]: 2026-01-26 16:49:14.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:14 compute-0 ceph-mon[75140]: pgmap v2984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:15 compute-0 nova_compute[239965]: 2026-01-26 16:49:15.440 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:16 compute-0 ceph-mon[75140]: pgmap v2985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:18 compute-0 ceph-mon[75140]: pgmap v2986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:19 compute-0 nova_compute[239965]: 2026-01-26 16:49:19.127 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:20 compute-0 nova_compute[239965]: 2026-01-26 16:49:20.442 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:20 compute-0 nova_compute[239965]: 2026-01-26 16:49:20.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:20 compute-0 ceph-mon[75140]: pgmap v2987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:21 compute-0 nova_compute[239965]: 2026-01-26 16:49:21.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:21 compute-0 nova_compute[239965]: 2026-01-26 16:49:21.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:22 compute-0 nova_compute[239965]: 2026-01-26 16:49:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:22 compute-0 nova_compute[239965]: 2026-01-26 16:49:22.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:49:23 compute-0 ceph-mon[75140]: pgmap v2988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:24 compute-0 nova_compute[239965]: 2026-01-26 16:49:24.128 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:24 compute-0 nova_compute[239965]: 2026-01-26 16:49:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:24 compute-0 nova_compute[239965]: 2026-01-26 16:49:24.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:49:24 compute-0 nova_compute[239965]: 2026-01-26 16:49:24.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:49:24 compute-0 nova_compute[239965]: 2026-01-26 16:49:24.531 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:49:24 compute-0 podman[387768]: 2026-01-26 16:49:24.861453847 +0000 UTC m=+0.050628621 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:49:25 compute-0 ceph-mon[75140]: pgmap v2989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:25 compute-0 nova_compute[239965]: 2026-01-26 16:49:25.442 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:26 compute-0 podman[387787]: 2026-01-26 16:49:26.383596349 +0000 UTC m=+0.073903742 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, 
org.label-schema.license=GPLv2, config_id=ovn_controller)
Jan 26 16:49:27 compute-0 ceph-mon[75140]: pgmap v2990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:27 compute-0 nova_compute[239965]: 2026-01-26 16:49:27.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:49:27 compute-0 nova_compute[239965]: 2026-01-26 16:49:27.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:49:27 compute-0 nova_compute[239965]: 2026-01-26 16:49:27.535 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:49:27 compute-0 nova_compute[239965]: 2026-01-26 16:49:27.536 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:49:27 compute-0 nova_compute[239965]: 2026-01-26 16:49:27.536 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:49:27 compute-0 nova_compute[239965]: 2026-01-26 16:49:27.536 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:27 compute-0 sshd-session[387813]: Invalid user funded from 45.148.10.240 port 44888
Jan 26 16:49:28 compute-0 sshd-session[387813]: Connection closed by invalid user funded 45.148.10.240 port 44888 [preauth]
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.076137) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446168076211, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 1827, "num_deletes": 251, "total_data_size": 3069215, "memory_usage": 3116592, "flush_reason": "Manual Compaction"}
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Jan 26 16:49:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:49:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2422858965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.098 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446168101064, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 2979711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59590, "largest_seqno": 61416, "table_properties": {"data_size": 2971246, "index_size": 5150, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 17970, "raw_average_key_size": 20, "raw_value_size": 2954098, "raw_average_value_size": 3368, "num_data_blocks": 229, "num_entries": 877, "num_filter_entries": 877, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769445887, "oldest_key_time": 1769445887, "file_creation_time": 1769446168, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 24967 microseconds, and 6781 cpu microseconds.
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.101113) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 2979711 bytes OK
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.101133) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.120097) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.120142) EVENT_LOG_v1 {"time_micros": 1769446168120133, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.120167) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3061287, prev total WAL file size 3061287, number of live WAL files 2.
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.121028) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(2909KB)], [140(8785KB)]
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446168121055, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11975735, "oldest_snapshot_seqno": -1}
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8295 keys, 10174883 bytes, temperature: kUnknown
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446168197173, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 10174883, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10121864, "index_size": 31146, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20805, "raw_key_size": 215917, "raw_average_key_size": 26, "raw_value_size": 9976181, "raw_average_value_size": 1202, "num_data_blocks": 1206, "num_entries": 8295, "num_filter_entries": 8295, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769446168, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.197409) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 10174883 bytes
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.200282) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 157.2 rd, 133.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 8.6 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(7.4) write-amplify(3.4) OK, records in: 8809, records dropped: 514 output_compression: NoCompression
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.200306) EVENT_LOG_v1 {"time_micros": 1769446168200295, "job": 86, "event": "compaction_finished", "compaction_time_micros": 76188, "compaction_time_cpu_micros": 23527, "output_level": 6, "num_output_files": 1, "total_output_size": 10174883, "num_input_records": 8809, "num_output_records": 8295, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446168201634, "job": 86, "event": "table_file_deletion", "file_number": 142}
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446168203730, "job": 86, "event": "table_file_deletion", "file_number": 140}
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.120952) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.203863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.203867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.203869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.203871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:28.203873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.262 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.263 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3581MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.263 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.263 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.324 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.325 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.351 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:49:28
Jan 26 16:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'default.rgw.log', 'images']
Jan 26 16:49:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:49:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:49:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045694522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.903 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.909 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.923 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.924 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:49:28 compute-0 nova_compute[239965]: 2026-01-26 16:49:28.925 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.661s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:49:29 compute-0 nova_compute[239965]: 2026-01-26 16:49:29.175 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:29 compute-0 ceph-mon[75140]: pgmap v2991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2422858965' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:49:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1045694522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:49:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:49:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:49:30 compute-0 nova_compute[239965]: 2026-01-26 16:49:30.444 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:30 compute-0 ceph-mon[75140]: pgmap v2992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:49:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:32 compute-0 ceph-mon[75140]: pgmap v2993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:32 compute-0 sudo[387859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:49:32 compute-0 sudo[387859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:49:32 compute-0 sudo[387859]: pam_unix(sudo:session): session closed for user root
Jan 26 16:49:33 compute-0 sudo[387884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:49:33 compute-0 sudo[387884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:49:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:33 compute-0 sudo[387884]: pam_unix(sudo:session): session closed for user root
Jan 26 16:49:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:49:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:49:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:49:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:49:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:49:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:49:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:49:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:49:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:49:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:49:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:49:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:49:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:49:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:49:34 compute-0 sudo[387940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:49:34 compute-0 sudo[387940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:49:34 compute-0 sudo[387940]: pam_unix(sudo:session): session closed for user root
Jan 26 16:49:34 compute-0 sudo[387965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:49:34 compute-0 sudo[387965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:49:34 compute-0 nova_compute[239965]: 2026-01-26 16:49:34.178 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:34 compute-0 podman[388002]: 2026-01-26 16:49:34.395684086 +0000 UTC m=+0.064450580 container create be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brattain, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:49:34 compute-0 systemd[1]: Started libpod-conmon-be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c.scope.
Jan 26 16:49:34 compute-0 podman[388002]: 2026-01-26 16:49:34.356485416 +0000 UTC m=+0.025251940 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:49:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:49:34 compute-0 podman[388002]: 2026-01-26 16:49:34.521207222 +0000 UTC m=+0.189973746 container init be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brattain, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 16:49:34 compute-0 podman[388002]: 2026-01-26 16:49:34.529644498 +0000 UTC m=+0.198410992 container start be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brattain, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:49:34 compute-0 elastic_brattain[388018]: 167 167
Jan 26 16:49:34 compute-0 systemd[1]: libpod-be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c.scope: Deactivated successfully.
Jan 26 16:49:34 compute-0 conmon[388018]: conmon be8440e46ba78097be9a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c.scope/container/memory.events
Jan 26 16:49:35 compute-0 podman[388002]: 2026-01-26 16:49:35.20462037 +0000 UTC m=+0.873386954 container attach be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brattain, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:49:35 compute-0 podman[388002]: 2026-01-26 16:49:35.205575363 +0000 UTC m=+0.874341897 container died be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brattain, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:49:35 compute-0 ceph-mon[75140]: pgmap v2994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:49:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:49:35 compute-0 nova_compute[239965]: 2026-01-26 16:49:35.446 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac0aa29dbcc77cf3c40c72c5f403c4dd581411a05b45e0fc646dbe40c5b79819-merged.mount: Deactivated successfully.
Jan 26 16:49:36 compute-0 podman[388002]: 2026-01-26 16:49:36.0562238 +0000 UTC m=+1.724990294 container remove be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:49:36 compute-0 systemd[1]: libpod-conmon-be8440e46ba78097be9aa804770f01e8c5950d6cc11f48ef73ae0ba1b62b766c.scope: Deactivated successfully.
Jan 26 16:49:36 compute-0 podman[388043]: 2026-01-26 16:49:36.227895246 +0000 UTC m=+0.056355172 container create 5ec0d237a556c7244d3b6fe41658f00d515f035b750464adbc91d77b3acbe54c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:49:36 compute-0 podman[388043]: 2026-01-26 16:49:36.195681847 +0000 UTC m=+0.024141833 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:49:36 compute-0 ceph-mon[75140]: pgmap v2995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:36 compute-0 systemd[1]: Started libpod-conmon-5ec0d237a556c7244d3b6fe41658f00d515f035b750464adbc91d77b3acbe54c.scope.
Jan 26 16:49:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b66bd2c84ab0114890702760aebcae25608dee0257a6edcdad2108c0026b45d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b66bd2c84ab0114890702760aebcae25608dee0257a6edcdad2108c0026b45d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b66bd2c84ab0114890702760aebcae25608dee0257a6edcdad2108c0026b45d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b66bd2c84ab0114890702760aebcae25608dee0257a6edcdad2108c0026b45d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b66bd2c84ab0114890702760aebcae25608dee0257a6edcdad2108c0026b45d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:36 compute-0 podman[388043]: 2026-01-26 16:49:36.405474308 +0000 UTC m=+0.233934284 container init 5ec0d237a556c7244d3b6fe41658f00d515f035b750464adbc91d77b3acbe54c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:49:36 compute-0 podman[388043]: 2026-01-26 16:49:36.413397563 +0000 UTC m=+0.241857519 container start 5ec0d237a556c7244d3b6fe41658f00d515f035b750464adbc91d77b3acbe54c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030)
Jan 26 16:49:36 compute-0 podman[388043]: 2026-01-26 16:49:36.519699818 +0000 UTC m=+0.348159744 container attach 5ec0d237a556c7244d3b6fe41658f00d515f035b750464adbc91d77b3acbe54c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:49:36 compute-0 lucid_mendeleev[388060]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:49:36 compute-0 lucid_mendeleev[388060]: --> All data devices are unavailable
Jan 26 16:49:36 compute-0 systemd[1]: libpod-5ec0d237a556c7244d3b6fe41658f00d515f035b750464adbc91d77b3acbe54c.scope: Deactivated successfully.
Jan 26 16:49:36 compute-0 podman[388080]: 2026-01-26 16:49:36.935952498 +0000 UTC m=+0.025388983 container died 5ec0d237a556c7244d3b6fe41658f00d515f035b750464adbc91d77b3acbe54c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:49:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b66bd2c84ab0114890702760aebcae25608dee0257a6edcdad2108c0026b45d-merged.mount: Deactivated successfully.
Jan 26 16:49:37 compute-0 podman[388080]: 2026-01-26 16:49:37.212147537 +0000 UTC m=+0.301584002 container remove 5ec0d237a556c7244d3b6fe41658f00d515f035b750464adbc91d77b3acbe54c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mendeleev, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:49:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:37 compute-0 systemd[1]: libpod-conmon-5ec0d237a556c7244d3b6fe41658f00d515f035b750464adbc91d77b3acbe54c.scope: Deactivated successfully.
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.221491) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446177221526, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 343, "num_deletes": 258, "total_data_size": 138813, "memory_usage": 146632, "flush_reason": "Manual Compaction"}
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446177225313, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 137265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61417, "largest_seqno": 61759, "table_properties": {"data_size": 135158, "index_size": 270, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5424, "raw_average_key_size": 17, "raw_value_size": 130813, "raw_average_value_size": 433, "num_data_blocks": 12, "num_entries": 302, "num_filter_entries": 302, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769446169, "oldest_key_time": 1769446169, "file_creation_time": 1769446177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 3882 microseconds, and 1322 cpu microseconds.
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.225368) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 137265 bytes OK
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.225389) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.226899) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.226923) EVENT_LOG_v1 {"time_micros": 1769446177226917, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.226943) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 136418, prev total WAL file size 136418, number of live WAL files 2.
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.227393) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353038' seq:72057594037927935, type:22 .. '6C6F676D0032373632' seq:0, type:0; will stop at (end)
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(134KB)], [143(9936KB)]
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446177227425, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10312148, "oldest_snapshot_seqno": -1}
Jan 26 16:49:37 compute-0 sudo[387965]: pam_unix(sudo:session): session closed for user root
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 8072 keys, 10205881 bytes, temperature: kUnknown
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446177295062, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10205881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10153701, "index_size": 30883, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 212261, "raw_average_key_size": 26, "raw_value_size": 10011144, "raw_average_value_size": 1240, "num_data_blocks": 1192, "num_entries": 8072, "num_filter_entries": 8072, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769446177, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.295356) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10205881 bytes
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.298653) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.2 rd, 150.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 9.7 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(149.5) write-amplify(74.4) OK, records in: 8597, records dropped: 525 output_compression: NoCompression
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.298698) EVENT_LOG_v1 {"time_micros": 1769446177298680, "job": 88, "event": "compaction_finished", "compaction_time_micros": 67742, "compaction_time_cpu_micros": 22660, "output_level": 6, "num_output_files": 1, "total_output_size": 10205881, "num_input_records": 8597, "num_output_records": 8072, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446177298941, "job": 88, "event": "table_file_deletion", "file_number": 145}
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446177301306, "job": 88, "event": "table_file_deletion", "file_number": 143}
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.227330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.301392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.301398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.301400) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.301401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:37 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:49:37.301403) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:49:37 compute-0 sudo[388095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:49:37 compute-0 sudo[388095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:49:37 compute-0 sudo[388095]: pam_unix(sudo:session): session closed for user root
Jan 26 16:49:37 compute-0 sudo[388120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:49:37 compute-0 sudo[388120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:49:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:37 compute-0 podman[388157]: 2026-01-26 16:49:37.670952241 +0000 UTC m=+0.028511201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:49:38 compute-0 podman[388157]: 2026-01-26 16:49:38.005025227 +0000 UTC m=+0.362584157 container create 945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_saha, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:49:38 compute-0 systemd[1]: Started libpod-conmon-945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0.scope.
Jan 26 16:49:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:49:38 compute-0 podman[388157]: 2026-01-26 16:49:38.196167702 +0000 UTC m=+0.553726652 container init 945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_saha, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 16:49:38 compute-0 podman[388157]: 2026-01-26 16:49:38.205584323 +0000 UTC m=+0.563143253 container start 945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:49:38 compute-0 dreamy_saha[388173]: 167 167
Jan 26 16:49:38 compute-0 systemd[1]: libpod-945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0.scope: Deactivated successfully.
Jan 26 16:49:38 compute-0 conmon[388173]: conmon 945ce255c66fe8abf7cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0.scope/container/memory.events
Jan 26 16:49:38 compute-0 podman[388157]: 2026-01-26 16:49:38.261990025 +0000 UTC m=+0.619548985 container attach 945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_saha, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:49:38 compute-0 podman[388157]: 2026-01-26 16:49:38.262919898 +0000 UTC m=+0.620478828 container died 945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 16:49:38 compute-0 ceph-mon[75140]: pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-23001b1570eb4fc6cae2e183ac41442088610ae3e7a13b3ae9b80bdd083971b3-merged.mount: Deactivated successfully.
Jan 26 16:49:38 compute-0 podman[388157]: 2026-01-26 16:49:38.51396379 +0000 UTC m=+0.871522720 container remove 945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 16:49:38 compute-0 systemd[1]: libpod-conmon-945ce255c66fe8abf7cdc8375026a669d8056a1d3003563766670fb789c1f2a0.scope: Deactivated successfully.
Jan 26 16:49:38 compute-0 podman[388198]: 2026-01-26 16:49:38.672235839 +0000 UTC m=+0.041599061 container create 28b323e093214fa664443871322d69539646c86efcff66b69205bfdb08522aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:49:38 compute-0 systemd[1]: Started libpod-conmon-28b323e093214fa664443871322d69539646c86efcff66b69205bfdb08522aba.scope.
Jan 26 16:49:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e588428688f9495748958b5ac1cd43d5c7ee69585e5dbe19b69a8ee1bb9ce18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e588428688f9495748958b5ac1cd43d5c7ee69585e5dbe19b69a8ee1bb9ce18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e588428688f9495748958b5ac1cd43d5c7ee69585e5dbe19b69a8ee1bb9ce18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e588428688f9495748958b5ac1cd43d5c7ee69585e5dbe19b69a8ee1bb9ce18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:38 compute-0 podman[388198]: 2026-01-26 16:49:38.743183978 +0000 UTC m=+0.112547280 container init 28b323e093214fa664443871322d69539646c86efcff66b69205bfdb08522aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:49:38 compute-0 podman[388198]: 2026-01-26 16:49:38.654666818 +0000 UTC m=+0.024030060 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:49:38 compute-0 podman[388198]: 2026-01-26 16:49:38.750140878 +0000 UTC m=+0.119504100 container start 28b323e093214fa664443871322d69539646c86efcff66b69205bfdb08522aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:49:38 compute-0 podman[388198]: 2026-01-26 16:49:38.75350192 +0000 UTC m=+0.122865152 container attach 28b323e093214fa664443871322d69539646c86efcff66b69205bfdb08522aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_robinson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:49:39 compute-0 gifted_robinson[388214]: {
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:     "0": [
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:         {
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "devices": [
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "/dev/loop3"
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             ],
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_name": "ceph_lv0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_size": "21470642176",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "name": "ceph_lv0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "tags": {
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.cluster_name": "ceph",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.crush_device_class": "",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.encrypted": "0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.objectstore": "bluestore",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.osd_id": "0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.type": "block",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.vdo": "0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.with_tpm": "0"
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             },
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "type": "block",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "vg_name": "ceph_vg0"
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:         }
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:     ],
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:     "1": [
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:         {
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "devices": [
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "/dev/loop4"
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             ],
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_name": "ceph_lv1",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_size": "21470642176",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "name": "ceph_lv1",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "tags": {
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.cluster_name": "ceph",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.crush_device_class": "",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.encrypted": "0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.objectstore": "bluestore",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.osd_id": "1",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.type": "block",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.vdo": "0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.with_tpm": "0"
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             },
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "type": "block",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "vg_name": "ceph_vg1"
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:         }
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:     ],
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:     "2": [
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:         {
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "devices": [
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "/dev/loop5"
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             ],
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_name": "ceph_lv2",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_size": "21470642176",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "name": "ceph_lv2",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "tags": {
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.cluster_name": "ceph",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.crush_device_class": "",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.encrypted": "0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.objectstore": "bluestore",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.osd_id": "2",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.type": "block",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.vdo": "0",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:                 "ceph.with_tpm": "0"
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             },
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "type": "block",
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:             "vg_name": "ceph_vg2"
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:         }
Jan 26 16:49:39 compute-0 gifted_robinson[388214]:     ]
Jan 26 16:49:39 compute-0 gifted_robinson[388214]: }
Jan 26 16:49:39 compute-0 systemd[1]: libpod-28b323e093214fa664443871322d69539646c86efcff66b69205bfdb08522aba.scope: Deactivated successfully.
Jan 26 16:49:39 compute-0 podman[388223]: 2026-01-26 16:49:39.107756192 +0000 UTC m=+0.024552483 container died 28b323e093214fa664443871322d69539646c86efcff66b69205bfdb08522aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:49:39 compute-0 nova_compute[239965]: 2026-01-26 16:49:39.180 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e588428688f9495748958b5ac1cd43d5c7ee69585e5dbe19b69a8ee1bb9ce18-merged.mount: Deactivated successfully.
Jan 26 16:49:39 compute-0 podman[388223]: 2026-01-26 16:49:39.404152446 +0000 UTC m=+0.320948727 container remove 28b323e093214fa664443871322d69539646c86efcff66b69205bfdb08522aba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_robinson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:49:39 compute-0 systemd[1]: libpod-conmon-28b323e093214fa664443871322d69539646c86efcff66b69205bfdb08522aba.scope: Deactivated successfully.
Jan 26 16:49:39 compute-0 sudo[388120]: pam_unix(sudo:session): session closed for user root
Jan 26 16:49:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:39 compute-0 sudo[388238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:49:39 compute-0 sudo[388238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:49:39 compute-0 sudo[388238]: pam_unix(sudo:session): session closed for user root
Jan 26 16:49:39 compute-0 sudo[388263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:49:39 compute-0 sudo[388263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:49:39 compute-0 podman[388301]: 2026-01-26 16:49:39.814612824 +0000 UTC m=+0.022705497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:49:40 compute-0 podman[388301]: 2026-01-26 16:49:40.004667473 +0000 UTC m=+0.212760126 container create 46fb1d2614e2cff97b57cb26216ba820f1fd14c7d8687c0ba0d139ebb7baf970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_engelbart, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:49:40 compute-0 systemd[1]: Started libpod-conmon-46fb1d2614e2cff97b57cb26216ba820f1fd14c7d8687c0ba0d139ebb7baf970.scope.
Jan 26 16:49:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:49:40 compute-0 podman[388301]: 2026-01-26 16:49:40.092929646 +0000 UTC m=+0.301022329 container init 46fb1d2614e2cff97b57cb26216ba820f1fd14c7d8687c0ba0d139ebb7baf970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:49:40 compute-0 podman[388301]: 2026-01-26 16:49:40.102055849 +0000 UTC m=+0.310148502 container start 46fb1d2614e2cff97b57cb26216ba820f1fd14c7d8687c0ba0d139ebb7baf970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 16:49:40 compute-0 loving_engelbart[388318]: 167 167
Jan 26 16:49:40 compute-0 systemd[1]: libpod-46fb1d2614e2cff97b57cb26216ba820f1fd14c7d8687c0ba0d139ebb7baf970.scope: Deactivated successfully.
Jan 26 16:49:40 compute-0 podman[388301]: 2026-01-26 16:49:40.330774375 +0000 UTC m=+0.538867038 container attach 46fb1d2614e2cff97b57cb26216ba820f1fd14c7d8687c0ba0d139ebb7baf970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 16:49:40 compute-0 podman[388301]: 2026-01-26 16:49:40.33143428 +0000 UTC m=+0.539526973 container died 46fb1d2614e2cff97b57cb26216ba820f1fd14c7d8687c0ba0d139ebb7baf970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 16:49:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-91928ef6d6fe6af8dd49245c5b9877ec55eda5baa86fdaa6a76692871ad96290-merged.mount: Deactivated successfully.
Jan 26 16:49:40 compute-0 nova_compute[239965]: 2026-01-26 16:49:40.449 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:40 compute-0 podman[388301]: 2026-01-26 16:49:40.460639476 +0000 UTC m=+0.668732139 container remove 46fb1d2614e2cff97b57cb26216ba820f1fd14c7d8687c0ba0d139ebb7baf970 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:49:40 compute-0 systemd[1]: libpod-conmon-46fb1d2614e2cff97b57cb26216ba820f1fd14c7d8687c0ba0d139ebb7baf970.scope: Deactivated successfully.
Jan 26 16:49:40 compute-0 podman[388341]: 2026-01-26 16:49:40.631471943 +0000 UTC m=+0.043801174 container create 615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:49:40 compute-0 systemd[1]: Started libpod-conmon-615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b.scope.
Jan 26 16:49:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05821b296804c112064f7c3697c25d90c13059cb080fb5fc06d2a95429b445e8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05821b296804c112064f7c3697c25d90c13059cb080fb5fc06d2a95429b445e8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05821b296804c112064f7c3697c25d90c13059cb080fb5fc06d2a95429b445e8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05821b296804c112064f7c3697c25d90c13059cb080fb5fc06d2a95429b445e8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:49:40 compute-0 podman[388341]: 2026-01-26 16:49:40.610287434 +0000 UTC m=+0.022616685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:49:40 compute-0 ceph-mon[75140]: pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:40 compute-0 podman[388341]: 2026-01-26 16:49:40.716163829 +0000 UTC m=+0.128493090 container init 615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 16:49:40 compute-0 podman[388341]: 2026-01-26 16:49:40.724231257 +0000 UTC m=+0.136560488 container start 615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 16:49:40 compute-0 podman[388341]: 2026-01-26 16:49:40.728220104 +0000 UTC m=+0.140549355 container attach 615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:49:41 compute-0 lvm[388436]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:49:41 compute-0 lvm[388437]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:49:41 compute-0 lvm[388437]: VG ceph_vg1 finished
Jan 26 16:49:41 compute-0 lvm[388436]: VG ceph_vg0 finished
Jan 26 16:49:41 compute-0 lvm[388439]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:49:41 compute-0 lvm[388439]: VG ceph_vg2 finished
Jan 26 16:49:41 compute-0 suspicious_allen[388357]: {}
Jan 26 16:49:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:41 compute-0 systemd[1]: libpod-615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b.scope: Deactivated successfully.
Jan 26 16:49:41 compute-0 systemd[1]: libpod-615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b.scope: Consumed 1.312s CPU time.
Jan 26 16:49:41 compute-0 podman[388341]: 2026-01-26 16:49:41.528069226 +0000 UTC m=+0.940398477 container died 615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:49:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-05821b296804c112064f7c3697c25d90c13059cb080fb5fc06d2a95429b445e8-merged.mount: Deactivated successfully.
Jan 26 16:49:43 compute-0 ceph-mon[75140]: pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:43 compute-0 podman[388341]: 2026-01-26 16:49:43.519812346 +0000 UTC m=+2.932141587 container remove 615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_allen, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:49:43 compute-0 sudo[388263]: pam_unix(sudo:session): session closed for user root
Jan 26 16:49:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:49:43 compute-0 systemd[1]: libpod-conmon-615e6b77c1e69ba822a0671b50b9354517a0e5e4c6663e083c219c415bf2bb9b.scope: Deactivated successfully.
Jan 26 16:49:43 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:49:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:49:43 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:49:43 compute-0 sudo[388455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:49:43 compute-0 sudo[388455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:49:43 compute-0 sudo[388455]: pam_unix(sudo:session): session closed for user root
Jan 26 16:49:44 compute-0 nova_compute[239965]: 2026-01-26 16:49:44.184 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:44 compute-0 ceph-mon[75140]: pgmap v2999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:49:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:49:45 compute-0 nova_compute[239965]: 2026-01-26 16:49:45.453 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:46 compute-0 ceph-mon[75140]: pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:48 compute-0 ceph-mon[75140]: pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:49:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3903535546' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:49:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:49:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3903535546' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:49:49 compute-0 nova_compute[239965]: 2026-01-26 16:49:49.186 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:49:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3903535546' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:49:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3903535546' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:49:50 compute-0 nova_compute[239965]: 2026-01-26 16:49:50.455 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:50 compute-0 ceph-mon[75140]: pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:52 compute-0 ceph-mon[75140]: pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:54 compute-0 nova_compute[239965]: 2026-01-26 16:49:54.187 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:54 compute-0 ceph-mon[75140]: pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:55 compute-0 podman[388480]: 2026-01-26 16:49:55.387194653 +0000 UTC m=+0.070511789 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 16:49:55 compute-0 nova_compute[239965]: 2026-01-26 16:49:55.457 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:56 compute-0 ceph-mon[75140]: pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:57 compute-0 podman[388499]: 2026-01-26 16:49:57.40208878 +0000 UTC m=+0.090874338 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:49:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:49:58 compute-0 ceph-mon[75140]: pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:49:59 compute-0 nova_compute[239965]: 2026-01-26 16:49:59.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:49:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:49:59.273 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:49:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:49:59.273 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:49:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:49:59.273 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:49:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:50:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:50:00 compute-0 nova_compute[239965]: 2026-01-26 16:50:00.458 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:00 compute-0 ceph-mon[75140]: pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:02 compute-0 ceph-mon[75140]: pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:04 compute-0 nova_compute[239965]: 2026-01-26 16:50:04.191 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:04 compute-0 ceph-mon[75140]: pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:50:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1126 writes, 5593 keys, 1126 commit groups, 1.0 writes per commit group, ingest: 7.62 MB, 0.01 MB/s
                                           Interval WAL: 1126 writes, 1126 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     66.7      1.13              0.24        44    0.026       0      0       0.0       0.0
                                             L6      1/0    9.73 MB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   5.1    108.9     92.9      4.11              1.10        43    0.096    282K    23K       0.0       0.0
                                            Sum      1/0    9.73 MB   0.0      0.4     0.1      0.4       0.4      0.1       0.0   6.1     85.4     87.3      5.24              1.34        87    0.060    282K    23K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.1     42.4     43.4      1.62              0.22        12    0.135     51K   3084       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.4     0.1      0.4       0.4      0.0       0.0   0.0    108.9     92.9      4.11              1.10        43    0.096    282K    23K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     66.9      1.13              0.24        43    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 5400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.074, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.45 GB write, 0.08 MB/s write, 0.44 GB read, 0.08 MB/s read, 5.2 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.11 MB/s read, 1.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 50.20 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000827 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3140,48.12 MB,15.8294%) FilterBlock(88,785.73 KB,0.252407%) IndexBlock(88,1.31 MB,0.430719%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 16:50:05 compute-0 nova_compute[239965]: 2026-01-26 16:50:05.460 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:07 compute-0 ceph-mon[75140]: pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:09 compute-0 ceph-mon[75140]: pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:09 compute-0 nova_compute[239965]: 2026-01-26 16:50:09.193 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:10 compute-0 nova_compute[239965]: 2026-01-26 16:50:10.462 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:11 compute-0 ceph-mon[75140]: pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:13 compute-0 ceph-mon[75140]: pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:14 compute-0 nova_compute[239965]: 2026-01-26 16:50:14.195 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:14 compute-0 nova_compute[239965]: 2026-01-26 16:50:14.926 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:15 compute-0 ceph-mon[75140]: pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:15 compute-0 nova_compute[239965]: 2026-01-26 16:50:15.462 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:15 compute-0 nova_compute[239965]: 2026-01-26 16:50:15.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:16 compute-0 nova_compute[239965]: 2026-01-26 16:50:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:17 compute-0 ceph-mon[75140]: pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:19 compute-0 ceph-mon[75140]: pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:19 compute-0 nova_compute[239965]: 2026-01-26 16:50:19.225 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:20 compute-0 nova_compute[239965]: 2026-01-26 16:50:20.465 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:21 compute-0 ceph-mon[75140]: pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:21 compute-0 nova_compute[239965]: 2026-01-26 16:50:21.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:22 compute-0 nova_compute[239965]: 2026-01-26 16:50:22.522 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:22 compute-0 nova_compute[239965]: 2026-01-26 16:50:22.523 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:23 compute-0 ceph-mon[75140]: pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:23 compute-0 nova_compute[239965]: 2026-01-26 16:50:23.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:24 compute-0 ceph-mon[75140]: pgmap v3019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:24 compute-0 nova_compute[239965]: 2026-01-26 16:50:24.227 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:24 compute-0 nova_compute[239965]: 2026-01-26 16:50:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:24 compute-0 nova_compute[239965]: 2026-01-26 16:50:24.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:50:25 compute-0 nova_compute[239965]: 2026-01-26 16:50:25.465 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:26 compute-0 podman[388525]: 2026-01-26 16:50:26.362897398 +0000 UTC m=+0.052344793 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:50:26 compute-0 nova_compute[239965]: 2026-01-26 16:50:26.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:26 compute-0 nova_compute[239965]: 2026-01-26 16:50:26.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:50:26 compute-0 nova_compute[239965]: 2026-01-26 16:50:26.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:50:26 compute-0 nova_compute[239965]: 2026-01-26 16:50:26.527 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:50:26 compute-0 ceph-mon[75140]: pgmap v3020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:27 compute-0 nova_compute[239965]: 2026-01-26 16:50:27.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:27 compute-0 nova_compute[239965]: 2026-01-26 16:50:27.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:50:27 compute-0 nova_compute[239965]: 2026-01-26 16:50:27.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:50:27 compute-0 nova_compute[239965]: 2026-01-26 16:50:27.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:50:27 compute-0 nova_compute[239965]: 2026-01-26 16:50:27.545 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:50:27 compute-0 nova_compute[239965]: 2026-01-26 16:50:27.545 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:50:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4087519339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.089 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.256 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.258 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.258 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.258 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.325 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.326 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.350 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:50:28 compute-0 podman[388566]: 2026-01-26 16:50:28.383074185 +0000 UTC m=+0.075170793 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:50:28 compute-0 ceph-mon[75140]: pgmap v3021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4087519339' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:50:28
Jan 26 16:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Jan 26 16:50:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:50:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:50:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/998346952' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.919 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:50:28 compute-0 nova_compute[239965]: 2026-01-26 16:50:28.925 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:50:29 compute-0 nova_compute[239965]: 2026-01-26 16:50:29.040 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:50:29 compute-0 nova_compute[239965]: 2026-01-26 16:50:29.041 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:50:29 compute-0 nova_compute[239965]: 2026-01-26 16:50:29.041 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:50:29 compute-0 nova_compute[239965]: 2026-01-26 16:50:29.229 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/998346952' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:50:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:50:30 compute-0 nova_compute[239965]: 2026-01-26 16:50:30.467 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:30 compute-0 ceph-mon[75140]: pgmap v3022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:50:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:32 compute-0 ceph-mon[75140]: pgmap v3023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:34 compute-0 nova_compute[239965]: 2026-01-26 16:50:34.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:34 compute-0 ceph-mon[75140]: pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:35 compute-0 nova_compute[239965]: 2026-01-26 16:50:35.470 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:36 compute-0 nova_compute[239965]: 2026-01-26 16:50:36.035 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:36 compute-0 ceph-mon[75140]: pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:38 compute-0 ceph-mon[75140]: pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:39 compute-0 nova_compute[239965]: 2026-01-26 16:50:39.232 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:40 compute-0 nova_compute[239965]: 2026-01-26 16:50:40.472 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:40 compute-0 ceph-mon[75140]: pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:41 compute-0 nova_compute[239965]: 2026-01-26 16:50:41.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:50:41 compute-0 nova_compute[239965]: 2026-01-26 16:50:41.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:50:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:42 compute-0 ceph-mon[75140]: pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:43 compute-0 sudo[388615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:50:43 compute-0 sudo[388615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:50:43 compute-0 sudo[388615]: pam_unix(sudo:session): session closed for user root
Jan 26 16:50:43 compute-0 sudo[388640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:50:43 compute-0 sudo[388640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:50:44 compute-0 nova_compute[239965]: 2026-01-26 16:50:44.234 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:44 compute-0 sudo[388640]: pam_unix(sudo:session): session closed for user root
Jan 26 16:50:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:50:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:50:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:50:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:50:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:50:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:50:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:50:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:50:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:50:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:50:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:50:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:50:44 compute-0 sudo[388695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:50:44 compute-0 sudo[388695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:50:44 compute-0 sudo[388695]: pam_unix(sudo:session): session closed for user root
Jan 26 16:50:44 compute-0 sudo[388720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:50:44 compute-0 sudo[388720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:50:44 compute-0 ceph-mon[75140]: pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:50:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:50:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:50:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:50:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:50:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:50:44 compute-0 podman[388758]: 2026-01-26 16:50:44.994977204 +0000 UTC m=+0.068621942 container create 10906eab2d550d1192ed2957e8708cba7b74e96a168e7d44ce7def82343aa0ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_maxwell, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:50:45 compute-0 systemd[1]: Started libpod-conmon-10906eab2d550d1192ed2957e8708cba7b74e96a168e7d44ce7def82343aa0ed.scope.
Jan 26 16:50:45 compute-0 podman[388758]: 2026-01-26 16:50:44.969686784 +0000 UTC m=+0.043331602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:50:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:50:45 compute-0 podman[388758]: 2026-01-26 16:50:45.085720427 +0000 UTC m=+0.159365195 container init 10906eab2d550d1192ed2957e8708cba7b74e96a168e7d44ce7def82343aa0ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_maxwell, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:50:45 compute-0 podman[388758]: 2026-01-26 16:50:45.09723811 +0000 UTC m=+0.170882848 container start 10906eab2d550d1192ed2957e8708cba7b74e96a168e7d44ce7def82343aa0ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:50:45 compute-0 podman[388758]: 2026-01-26 16:50:45.101039813 +0000 UTC m=+0.174684571 container attach 10906eab2d550d1192ed2957e8708cba7b74e96a168e7d44ce7def82343aa0ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_maxwell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:50:45 compute-0 zen_maxwell[388774]: 167 167
Jan 26 16:50:45 compute-0 systemd[1]: libpod-10906eab2d550d1192ed2957e8708cba7b74e96a168e7d44ce7def82343aa0ed.scope: Deactivated successfully.
Jan 26 16:50:45 compute-0 podman[388758]: 2026-01-26 16:50:45.106419255 +0000 UTC m=+0.180064003 container died 10906eab2d550d1192ed2957e8708cba7b74e96a168e7d44ce7def82343aa0ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-86cbd40249df220af438694a6cd1f29e7b9dc206c3cb22f7f7871d4a27810d0f-merged.mount: Deactivated successfully.
Jan 26 16:50:45 compute-0 podman[388758]: 2026-01-26 16:50:45.14827575 +0000 UTC m=+0.221920488 container remove 10906eab2d550d1192ed2957e8708cba7b74e96a168e7d44ce7def82343aa0ed (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_maxwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 16:50:45 compute-0 systemd[1]: libpod-conmon-10906eab2d550d1192ed2957e8708cba7b74e96a168e7d44ce7def82343aa0ed.scope: Deactivated successfully.
Jan 26 16:50:45 compute-0 podman[388801]: 2026-01-26 16:50:45.316187855 +0000 UTC m=+0.042666406 container create 2e2e04edba178bf35de1ccc8077a8514cf72a32ecd57e24325d3e87fc70b2442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hugle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:50:45 compute-0 systemd[1]: Started libpod-conmon-2e2e04edba178bf35de1ccc8077a8514cf72a32ecd57e24325d3e87fc70b2442.scope.
Jan 26 16:50:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db5e5c703914baeded256e14a08f5cd623f435a5b63e2f4a421989a48c23e72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db5e5c703914baeded256e14a08f5cd623f435a5b63e2f4a421989a48c23e72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db5e5c703914baeded256e14a08f5cd623f435a5b63e2f4a421989a48c23e72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db5e5c703914baeded256e14a08f5cd623f435a5b63e2f4a421989a48c23e72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8db5e5c703914baeded256e14a08f5cd623f435a5b63e2f4a421989a48c23e72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:45 compute-0 podman[388801]: 2026-01-26 16:50:45.393921861 +0000 UTC m=+0.120400412 container init 2e2e04edba178bf35de1ccc8077a8514cf72a32ecd57e24325d3e87fc70b2442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 16:50:45 compute-0 podman[388801]: 2026-01-26 16:50:45.299393794 +0000 UTC m=+0.025872365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:50:45 compute-0 podman[388801]: 2026-01-26 16:50:45.400404059 +0000 UTC m=+0.126882630 container start 2e2e04edba178bf35de1ccc8077a8514cf72a32ecd57e24325d3e87fc70b2442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hugle, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:50:45 compute-0 podman[388801]: 2026-01-26 16:50:45.404164922 +0000 UTC m=+0.130643553 container attach 2e2e04edba178bf35de1ccc8077a8514cf72a32ecd57e24325d3e87fc70b2442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hugle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 16:50:45 compute-0 nova_compute[239965]: 2026-01-26 16:50:45.474 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:45 compute-0 elegant_hugle[388817]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:50:45 compute-0 elegant_hugle[388817]: --> All data devices are unavailable
Jan 26 16:50:45 compute-0 systemd[1]: libpod-2e2e04edba178bf35de1ccc8077a8514cf72a32ecd57e24325d3e87fc70b2442.scope: Deactivated successfully.
Jan 26 16:50:45 compute-0 podman[388801]: 2026-01-26 16:50:45.914370244 +0000 UTC m=+0.640848795 container died 2e2e04edba178bf35de1ccc8077a8514cf72a32ecd57e24325d3e87fc70b2442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:50:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-8db5e5c703914baeded256e14a08f5cd623f435a5b63e2f4a421989a48c23e72-merged.mount: Deactivated successfully.
Jan 26 16:50:45 compute-0 podman[388801]: 2026-01-26 16:50:45.968196204 +0000 UTC m=+0.694674755 container remove 2e2e04edba178bf35de1ccc8077a8514cf72a32ecd57e24325d3e87fc70b2442 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elegant_hugle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:50:45 compute-0 systemd[1]: libpod-conmon-2e2e04edba178bf35de1ccc8077a8514cf72a32ecd57e24325d3e87fc70b2442.scope: Deactivated successfully.
Jan 26 16:50:46 compute-0 sudo[388720]: pam_unix(sudo:session): session closed for user root
Jan 26 16:50:46 compute-0 sudo[388849]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:50:46 compute-0 sudo[388849]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:50:46 compute-0 sudo[388849]: pam_unix(sudo:session): session closed for user root
Jan 26 16:50:46 compute-0 sudo[388874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:50:46 compute-0 sudo[388874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:50:46 compute-0 podman[388911]: 2026-01-26 16:50:46.456446509 +0000 UTC m=+0.041287713 container create d3e0fe6418307fae066bd0ba8f7116daca9dc2c230f78bbaa77340803ff84714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:50:46 compute-0 systemd[1]: Started libpod-conmon-d3e0fe6418307fae066bd0ba8f7116daca9dc2c230f78bbaa77340803ff84714.scope.
Jan 26 16:50:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:50:46 compute-0 podman[388911]: 2026-01-26 16:50:46.437652089 +0000 UTC m=+0.022493323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:50:46 compute-0 podman[388911]: 2026-01-26 16:50:46.538443019 +0000 UTC m=+0.123284243 container init d3e0fe6418307fae066bd0ba8f7116daca9dc2c230f78bbaa77340803ff84714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sinoussi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:50:46 compute-0 podman[388911]: 2026-01-26 16:50:46.545506522 +0000 UTC m=+0.130347766 container start d3e0fe6418307fae066bd0ba8f7116daca9dc2c230f78bbaa77340803ff84714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:50:46 compute-0 amazing_sinoussi[388926]: 167 167
Jan 26 16:50:46 compute-0 systemd[1]: libpod-d3e0fe6418307fae066bd0ba8f7116daca9dc2c230f78bbaa77340803ff84714.scope: Deactivated successfully.
Jan 26 16:50:46 compute-0 podman[388911]: 2026-01-26 16:50:46.550144075 +0000 UTC m=+0.134985279 container attach d3e0fe6418307fae066bd0ba8f7116daca9dc2c230f78bbaa77340803ff84714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sinoussi, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 16:50:46 compute-0 podman[388911]: 2026-01-26 16:50:46.5523764 +0000 UTC m=+0.137217604 container died d3e0fe6418307fae066bd0ba8f7116daca9dc2c230f78bbaa77340803ff84714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sinoussi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:50:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-c330fd463d45fc1095c6db833562ccdca854433435a1dfb2a0681c54f2d08e7f-merged.mount: Deactivated successfully.
Jan 26 16:50:46 compute-0 podman[388911]: 2026-01-26 16:50:46.585496872 +0000 UTC m=+0.170338076 container remove d3e0fe6418307fae066bd0ba8f7116daca9dc2c230f78bbaa77340803ff84714 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_sinoussi, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:50:46 compute-0 systemd[1]: libpod-conmon-d3e0fe6418307fae066bd0ba8f7116daca9dc2c230f78bbaa77340803ff84714.scope: Deactivated successfully.
Jan 26 16:50:46 compute-0 podman[388949]: 2026-01-26 16:50:46.747679286 +0000 UTC m=+0.042780119 container create d1cda423fd0ddd2a0a406e437ba0e0b4d35e66a9811166d8ba7efd3760fe48a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_neumann, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:50:46 compute-0 ceph-mon[75140]: pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:46 compute-0 systemd[1]: Started libpod-conmon-d1cda423fd0ddd2a0a406e437ba0e0b4d35e66a9811166d8ba7efd3760fe48a8.scope.
Jan 26 16:50:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a74195754522930f5554da5ab9c8cd941413270bb63ff71a99113741722c64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a74195754522930f5554da5ab9c8cd941413270bb63ff71a99113741722c64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a74195754522930f5554da5ab9c8cd941413270bb63ff71a99113741722c64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12a74195754522930f5554da5ab9c8cd941413270bb63ff71a99113741722c64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:46 compute-0 podman[388949]: 2026-01-26 16:50:46.73027111 +0000 UTC m=+0.025371953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:50:47 compute-0 podman[388949]: 2026-01-26 16:50:47.048649611 +0000 UTC m=+0.343750464 container init d1cda423fd0ddd2a0a406e437ba0e0b4d35e66a9811166d8ba7efd3760fe48a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_neumann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:50:47 compute-0 podman[388949]: 2026-01-26 16:50:47.06125017 +0000 UTC m=+0.356351003 container start d1cda423fd0ddd2a0a406e437ba0e0b4d35e66a9811166d8ba7efd3760fe48a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_neumann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:50:47 compute-0 podman[388949]: 2026-01-26 16:50:47.073690005 +0000 UTC m=+0.368790858 container attach d1cda423fd0ddd2a0a406e437ba0e0b4d35e66a9811166d8ba7efd3760fe48a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_neumann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:50:47 compute-0 admiring_neumann[388966]: {
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:     "0": [
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:         {
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "devices": [
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "/dev/loop3"
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             ],
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_name": "ceph_lv0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_size": "21470642176",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "name": "ceph_lv0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "tags": {
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.cluster_name": "ceph",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.crush_device_class": "",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.encrypted": "0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.objectstore": "bluestore",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.osd_id": "0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.type": "block",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.vdo": "0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.with_tpm": "0"
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             },
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "type": "block",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "vg_name": "ceph_vg0"
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:         }
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:     ],
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:     "1": [
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:         {
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "devices": [
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "/dev/loop4"
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             ],
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_name": "ceph_lv1",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_size": "21470642176",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "name": "ceph_lv1",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "tags": {
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.cluster_name": "ceph",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.crush_device_class": "",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.encrypted": "0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.objectstore": "bluestore",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.osd_id": "1",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.type": "block",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.vdo": "0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.with_tpm": "0"
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             },
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "type": "block",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "vg_name": "ceph_vg1"
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:         }
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:     ],
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:     "2": [
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:         {
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "devices": [
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "/dev/loop5"
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             ],
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_name": "ceph_lv2",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_size": "21470642176",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "name": "ceph_lv2",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "tags": {
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.cluster_name": "ceph",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.crush_device_class": "",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.encrypted": "0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.objectstore": "bluestore",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.osd_id": "2",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.type": "block",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.vdo": "0",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:                 "ceph.with_tpm": "0"
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             },
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "type": "block",
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:             "vg_name": "ceph_vg2"
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:         }
Jan 26 16:50:47 compute-0 admiring_neumann[388966]:     ]
Jan 26 16:50:47 compute-0 admiring_neumann[388966]: }
Jan 26 16:50:47 compute-0 systemd[1]: libpod-d1cda423fd0ddd2a0a406e437ba0e0b4d35e66a9811166d8ba7efd3760fe48a8.scope: Deactivated successfully.
Jan 26 16:50:47 compute-0 podman[388949]: 2026-01-26 16:50:47.419421798 +0000 UTC m=+0.714522631 container died d1cda423fd0ddd2a0a406e437ba0e0b4d35e66a9811166d8ba7efd3760fe48a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_neumann, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:50:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-12a74195754522930f5554da5ab9c8cd941413270bb63ff71a99113741722c64-merged.mount: Deactivated successfully.
Jan 26 16:50:47 compute-0 podman[388949]: 2026-01-26 16:50:47.900198181 +0000 UTC m=+1.195299014 container remove d1cda423fd0ddd2a0a406e437ba0e0b4d35e66a9811166d8ba7efd3760fe48a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:50:47 compute-0 sudo[388874]: pam_unix(sudo:session): session closed for user root
Jan 26 16:50:47 compute-0 systemd[1]: libpod-conmon-d1cda423fd0ddd2a0a406e437ba0e0b4d35e66a9811166d8ba7efd3760fe48a8.scope: Deactivated successfully.
Jan 26 16:50:47 compute-0 sudo[388987]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:50:47 compute-0 sudo[388987]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:50:48 compute-0 sudo[388987]: pam_unix(sudo:session): session closed for user root
Jan 26 16:50:48 compute-0 sudo[389012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:50:48 compute-0 sudo[389012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:50:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:48 compute-0 podman[389049]: 2026-01-26 16:50:48.298667675 +0000 UTC m=+0.022330688 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:50:48 compute-0 podman[389049]: 2026-01-26 16:50:48.491418199 +0000 UTC m=+0.215081192 container create 4d890f392025097dd7d564de9a233c339fbe9d34140385b1bca7e9b817e1450d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 16:50:48 compute-0 systemd[1]: Started libpod-conmon-4d890f392025097dd7d564de9a233c339fbe9d34140385b1bca7e9b817e1450d.scope.
Jan 26 16:50:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:50:48 compute-0 podman[389049]: 2026-01-26 16:50:48.730332283 +0000 UTC m=+0.453995296 container init 4d890f392025097dd7d564de9a233c339fbe9d34140385b1bca7e9b817e1450d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kapitsa, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:50:48 compute-0 podman[389049]: 2026-01-26 16:50:48.737227942 +0000 UTC m=+0.460890925 container start 4d890f392025097dd7d564de9a233c339fbe9d34140385b1bca7e9b817e1450d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 16:50:48 compute-0 podman[389049]: 2026-01-26 16:50:48.742119543 +0000 UTC m=+0.465782546 container attach 4d890f392025097dd7d564de9a233c339fbe9d34140385b1bca7e9b817e1450d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:50:48 compute-0 dazzling_kapitsa[389066]: 167 167
Jan 26 16:50:48 compute-0 systemd[1]: libpod-4d890f392025097dd7d564de9a233c339fbe9d34140385b1bca7e9b817e1450d.scope: Deactivated successfully.
Jan 26 16:50:48 compute-0 podman[389049]: 2026-01-26 16:50:48.743195549 +0000 UTC m=+0.466858552 container died 4d890f392025097dd7d564de9a233c339fbe9d34140385b1bca7e9b817e1450d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:50:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-c856942598ffd5a5cad5011ec12fc9c8bbccb16d25a39614d94fe6d964e9ba6b-merged.mount: Deactivated successfully.
Jan 26 16:50:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:50:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2292674332' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:50:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:50:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2292674332' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:50:48 compute-0 podman[389049]: 2026-01-26 16:50:48.787390422 +0000 UTC m=+0.511053405 container remove 4d890f392025097dd7d564de9a233c339fbe9d34140385b1bca7e9b817e1450d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:50:48 compute-0 systemd[1]: libpod-conmon-4d890f392025097dd7d564de9a233c339fbe9d34140385b1bca7e9b817e1450d.scope: Deactivated successfully.
Jan 26 16:50:48 compute-0 ceph-mon[75140]: pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2292674332' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:50:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2292674332' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:50:49 compute-0 podman[389090]: 2026-01-26 16:50:49.016905777 +0000 UTC m=+0.088318606 container create b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 26 16:50:49 compute-0 podman[389090]: 2026-01-26 16:50:48.959463259 +0000 UTC m=+0.030876108 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:50:49 compute-0 systemd[1]: Started libpod-conmon-b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942.scope.
Jan 26 16:50:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edda9abd2e4ade1ff58a92c0d06024a9eb35a8d02d7f6fc2e052b4d2153a2de9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edda9abd2e4ade1ff58a92c0d06024a9eb35a8d02d7f6fc2e052b4d2153a2de9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edda9abd2e4ade1ff58a92c0d06024a9eb35a8d02d7f6fc2e052b4d2153a2de9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edda9abd2e4ade1ff58a92c0d06024a9eb35a8d02d7f6fc2e052b4d2153a2de9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:50:49 compute-0 podman[389090]: 2026-01-26 16:50:49.096330933 +0000 UTC m=+0.167743772 container init b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_fermi, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:50:49 compute-0 podman[389090]: 2026-01-26 16:50:49.102514614 +0000 UTC m=+0.173927443 container start b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_fermi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:50:49 compute-0 podman[389090]: 2026-01-26 16:50:49.105493567 +0000 UTC m=+0.176906416 container attach b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:50:49 compute-0 nova_compute[239965]: 2026-01-26 16:50:49.237 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:50:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:49 compute-0 lvm[389185]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:50:49 compute-0 lvm[389184]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:50:49 compute-0 lvm[389184]: VG ceph_vg0 finished
Jan 26 16:50:49 compute-0 lvm[389185]: VG ceph_vg1 finished
Jan 26 16:50:49 compute-0 lvm[389187]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:50:49 compute-0 lvm[389187]: VG ceph_vg2 finished
Jan 26 16:50:49 compute-0 lvm[389189]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:50:49 compute-0 lvm[389189]: VG ceph_vg2 finished
Jan 26 16:50:49 compute-0 lvm[389190]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:50:49 compute-0 lvm[389190]: VG ceph_vg2 finished
Jan 26 16:50:49 compute-0 brave_fermi[389106]: {}
Jan 26 16:50:49 compute-0 systemd[1]: libpod-b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942.scope: Deactivated successfully.
Jan 26 16:50:49 compute-0 systemd[1]: libpod-b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942.scope: Consumed 1.300s CPU time.
Jan 26 16:50:49 compute-0 podman[389090]: 2026-01-26 16:50:49.899787753 +0000 UTC m=+0.971200612 container died b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_fermi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:50:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-edda9abd2e4ade1ff58a92c0d06024a9eb35a8d02d7f6fc2e052b4d2153a2de9-merged.mount: Deactivated successfully.
Jan 26 16:50:49 compute-0 podman[389090]: 2026-01-26 16:50:49.943632747 +0000 UTC m=+1.015045576 container remove b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_fermi, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:50:49 compute-0 systemd[1]: libpod-conmon-b93523b27e6a86067d1f01b763d2a21bd3de8ce2897633f87dd96042dd8c2942.scope: Deactivated successfully.
Jan 26 16:50:49 compute-0 sudo[389012]: pam_unix(sudo:session): session closed for user root
Jan 26 16:50:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:50:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:50:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:50:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:50:50 compute-0 sudo[389205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:50:50 compute-0 sudo[389205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:50:50 compute-0 sudo[389205]: pam_unix(sudo:session): session closed for user root
Jan 26 16:50:50 compute-0 nova_compute[239965]: 2026-01-26 16:50:50.476 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:50 compute-0 ceph-mon[75140]: pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:50:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:50:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:53 compute-0 ceph-mon[75140]: pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:54 compute-0 nova_compute[239965]: 2026-01-26 16:50:54.240 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:55 compute-0 ceph-mon[75140]: pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:55 compute-0 nova_compute[239965]: 2026-01-26 16:50:55.479 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:57 compute-0 ceph-mon[75140]: pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:57 compute-0 podman[389230]: 2026-01-26 16:50:57.379939803 +0000 UTC m=+0.062577435 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 26 16:50:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:50:59 compute-0 ceph-mon[75140]: pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:50:59 compute-0 nova_compute[239965]: 2026-01-26 16:50:59.245 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:50:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:50:59.273 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:50:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:50:59.274 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:50:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:50:59.274 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:50:59 compute-0 podman[389250]: 2026-01-26 16:50:59.392049003 +0000 UTC m=+0.076793323 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 26 16:50:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:51:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:51:00 compute-0 nova_compute[239965]: 2026-01-26 16:51:00.479 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:01 compute-0 ceph-mon[75140]: pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:01 compute-0 nova_compute[239965]: 2026-01-26 16:51:01.525 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:01 compute-0 nova_compute[239965]: 2026-01-26 16:51:01.526 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:51:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:01 compute-0 nova_compute[239965]: 2026-01-26 16:51:01.541 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:51:03 compute-0 ceph-mon[75140]: pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:04 compute-0 nova_compute[239965]: 2026-01-26 16:51:04.247 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:05 compute-0 ceph-mon[75140]: pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:05 compute-0 nova_compute[239965]: 2026-01-26 16:51:05.479 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:07 compute-0 ceph-mon[75140]: pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:08 compute-0 ceph-mon[75140]: pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:09 compute-0 nova_compute[239965]: 2026-01-26 16:51:09.251 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:10 compute-0 nova_compute[239965]: 2026-01-26 16:51:10.482 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:10 compute-0 ceph-mon[75140]: pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:11 compute-0 nova_compute[239965]: 2026-01-26 16:51:11.799 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:12 compute-0 ceph-mon[75140]: pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:14 compute-0 nova_compute[239965]: 2026-01-26 16:51:14.281 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:14 compute-0 ceph-mon[75140]: pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:15 compute-0 nova_compute[239965]: 2026-01-26 16:51:15.485 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:15 compute-0 nova_compute[239965]: 2026-01-26 16:51:15.529 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:16 compute-0 nova_compute[239965]: 2026-01-26 16:51:16.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:16 compute-0 nova_compute[239965]: 2026-01-26 16:51:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:16 compute-0 ceph-mon[75140]: pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:19 compute-0 ceph-mon[75140]: pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:19 compute-0 nova_compute[239965]: 2026-01-26 16:51:19.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:20 compute-0 nova_compute[239965]: 2026-01-26 16:51:20.485 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:21 compute-0 ceph-mon[75140]: pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:21 compute-0 sshd-session[389276]: Connection reset by 198.235.24.218 port 58728 [preauth]
Jan 26 16:51:23 compute-0 ceph-mon[75140]: pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:23 compute-0 nova_compute[239965]: 2026-01-26 16:51:23.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:24 compute-0 nova_compute[239965]: 2026-01-26 16:51:24.327 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:51:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.1 total, 600.0 interval
                                           Cumulative writes: 45K writes, 183K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s
                                           Cumulative WAL: 45K writes, 16K syncs, 2.75 writes per sync, written: 0.18 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 331 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:51:24 compute-0 nova_compute[239965]: 2026-01-26 16:51:24.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:24 compute-0 nova_compute[239965]: 2026-01-26 16:51:24.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:25 compute-0 ceph-mon[75140]: pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:25 compute-0 nova_compute[239965]: 2026-01-26 16:51:25.523 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:26 compute-0 nova_compute[239965]: 2026-01-26 16:51:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:26 compute-0 nova_compute[239965]: 2026-01-26 16:51:26.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:51:27 compute-0 ceph-mon[75140]: pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:28 compute-0 podman[389278]: 2026-01-26 16:51:28.354391071 +0000 UTC m=+0.046620154 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:51:28 compute-0 nova_compute[239965]: 2026-01-26 16:51:28.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:28 compute-0 nova_compute[239965]: 2026-01-26 16:51:28.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:51:28 compute-0 nova_compute[239965]: 2026-01-26 16:51:28.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:51:28 compute-0 nova_compute[239965]: 2026-01-26 16:51:28.610 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:51:28
Jan 26 16:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'volumes']
Jan 26 16:51:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:51:29 compute-0 ceph-mon[75140]: pgmap v3051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:29 compute-0 nova_compute[239965]: 2026-01-26 16:51:29.329 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:29 compute-0 nova_compute[239965]: 2026-01-26 16:51:29.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:51:29 compute-0 nova_compute[239965]: 2026-01-26 16:51:29.546 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:51:29 compute-0 nova_compute[239965]: 2026-01-26 16:51:29.546 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:51:29 compute-0 nova_compute[239965]: 2026-01-26 16:51:29.546 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:51:29 compute-0 nova_compute[239965]: 2026-01-26 16:51:29.547 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:51:29 compute-0 nova_compute[239965]: 2026-01-26 16:51:29.547 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:51:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.4 total, 600.0 interval
                                           Cumulative writes: 49K writes, 191K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.69 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 281 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:51:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:51:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780395556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:51:30 compute-0 nova_compute[239965]: 2026-01-26 16:51:30.103 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:30 compute-0 nova_compute[239965]: 2026-01-26 16:51:30.255 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:51:30 compute-0 nova_compute[239965]: 2026-01-26 16:51:30.257 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3585MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:51:30 compute-0 nova_compute[239965]: 2026-01-26 16:51:30.257 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:51:30 compute-0 nova_compute[239965]: 2026-01-26 16:51:30.258 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:51:30 compute-0 podman[389320]: 2026-01-26 16:51:30.390739824 +0000 UTC m=+0.082425671 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 16:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:51:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:51:30 compute-0 nova_compute[239965]: 2026-01-26 16:51:30.454 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:51:30 compute-0 nova_compute[239965]: 2026-01-26 16:51:30.454 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:51:30 compute-0 nova_compute[239965]: 2026-01-26 16:51:30.469 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:51:30 compute-0 nova_compute[239965]: 2026-01-26 16:51:30.525 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:51:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:51:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4127361079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:51:31 compute-0 ceph-mon[75140]: pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2780395556' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:51:31 compute-0 nova_compute[239965]: 2026-01-26 16:51:31.098 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:51:31 compute-0 nova_compute[239965]: 2026-01-26 16:51:31.105 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:51:31 compute-0 nova_compute[239965]: 2026-01-26 16:51:31.237 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:51:31 compute-0 nova_compute[239965]: 2026-01-26 16:51:31.238 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:51:31 compute-0 nova_compute[239965]: 2026-01-26 16:51:31.239 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:51:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:32 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4127361079' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:51:33 compute-0 ceph-mon[75140]: pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:34 compute-0 nova_compute[239965]: 2026-01-26 16:51:34.333 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 16:51:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 5400.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 148K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 13K syncs, 2.72 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 281 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 16:51:35 compute-0 ceph-mon[75140]: pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:35 compute-0 nova_compute[239965]: 2026-01-26 16:51:35.526 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:36 compute-0 ceph-mon[75140]: pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:38 compute-0 ceph-mon[75140]: pgmap v3056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:39 compute-0 nova_compute[239965]: 2026-01-26 16:51:39.336 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 16:51:40 compute-0 nova_compute[239965]: 2026-01-26 16:51:40.566 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:40 compute-0 ceph-mon[75140]: pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:42 compute-0 ceph-mon[75140]: pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:44 compute-0 nova_compute[239965]: 2026-01-26 16:51:44.341 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:44 compute-0 ceph-mon[75140]: pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:45 compute-0 nova_compute[239965]: 2026-01-26 16:51:45.567 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:46 compute-0 ceph-mon[75140]: pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:47 compute-0 sshd-session[389370]: Invalid user sol from 45.148.10.240 port 42176
Jan 26 16:51:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:47 compute-0 sshd-session[389370]: Connection closed by invalid user sol 45.148.10.240 port 42176 [preauth]
Jan 26 16:51:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:48 compute-0 ceph-mon[75140]: pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:51:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2847271499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:51:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:51:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2847271499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:51:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2847271499' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:51:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2847271499' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:51:49 compute-0 nova_compute[239965]: 2026-01-26 16:51:49.344 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:51:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:50 compute-0 sudo[389372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:51:50 compute-0 sudo[389372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:51:50 compute-0 sudo[389372]: pam_unix(sudo:session): session closed for user root
Jan 26 16:51:50 compute-0 ceph-mon[75140]: pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:50 compute-0 sudo[389397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:51:50 compute-0 sudo[389397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:51:50 compute-0 nova_compute[239965]: 2026-01-26 16:51:50.569 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:50 compute-0 sudo[389397]: pam_unix(sudo:session): session closed for user root
Jan 26 16:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:51:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:51:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:51:50 compute-0 sudo[389453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:51:50 compute-0 sudo[389453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:51:50 compute-0 sudo[389453]: pam_unix(sudo:session): session closed for user root
Jan 26 16:51:50 compute-0 sudo[389478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:51:50 compute-0 sudo[389478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:51:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:51:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:51:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:51:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:51:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:51:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:51:51 compute-0 podman[389516]: 2026-01-26 16:51:51.26564589 +0000 UTC m=+0.049124675 container create 86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_engelbart, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:51:51 compute-0 systemd[1]: Started libpod-conmon-86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82.scope.
Jan 26 16:51:51 compute-0 podman[389516]: 2026-01-26 16:51:51.244516752 +0000 UTC m=+0.027995597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:51:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:51:51 compute-0 podman[389516]: 2026-01-26 16:51:51.370142431 +0000 UTC m=+0.153621236 container init 86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_engelbart, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:51:51 compute-0 podman[389516]: 2026-01-26 16:51:51.380860173 +0000 UTC m=+0.164338948 container start 86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:51:51 compute-0 clever_engelbart[389533]: 167 167
Jan 26 16:51:51 compute-0 systemd[1]: libpod-86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82.scope: Deactivated successfully.
Jan 26 16:51:51 compute-0 conmon[389533]: conmon 86d2bd92c8231c25d57b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82.scope/container/memory.events
Jan 26 16:51:51 compute-0 podman[389516]: 2026-01-26 16:51:51.430683335 +0000 UTC m=+0.214162140 container attach 86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:51:51 compute-0 podman[389516]: 2026-01-26 16:51:51.432254993 +0000 UTC m=+0.215733778 container died 86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:51:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca3c01f46343da29fe84545cd9a866fce1088516c7fbb985a85878dde34c2286-merged.mount: Deactivated successfully.
Jan 26 16:51:51 compute-0 podman[389516]: 2026-01-26 16:51:51.486509453 +0000 UTC m=+0.269988238 container remove 86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_engelbart, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:51:51 compute-0 systemd[1]: libpod-conmon-86d2bd92c8231c25d57b375861af0b3b7a70488a137479999071637ade872b82.scope: Deactivated successfully.
Jan 26 16:51:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:51 compute-0 podman[389559]: 2026-01-26 16:51:51.647430577 +0000 UTC m=+0.044000630 container create 3cafc5036d8dd1294538af4f143d346175d7e988943577af66333f8344e59c35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:51:51 compute-0 systemd[1]: Started libpod-conmon-3cafc5036d8dd1294538af4f143d346175d7e988943577af66333f8344e59c35.scope.
Jan 26 16:51:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c54b894ac86895f09c07390094eb23a34c3fdc20c74619cfee42ceaab9ba63/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c54b894ac86895f09c07390094eb23a34c3fdc20c74619cfee42ceaab9ba63/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c54b894ac86895f09c07390094eb23a34c3fdc20c74619cfee42ceaab9ba63/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c54b894ac86895f09c07390094eb23a34c3fdc20c74619cfee42ceaab9ba63/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61c54b894ac86895f09c07390094eb23a34c3fdc20c74619cfee42ceaab9ba63/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:51 compute-0 podman[389559]: 2026-01-26 16:51:51.624382851 +0000 UTC m=+0.020952924 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:51:51 compute-0 podman[389559]: 2026-01-26 16:51:51.730961503 +0000 UTC m=+0.127531576 container init 3cafc5036d8dd1294538af4f143d346175d7e988943577af66333f8344e59c35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:51:51 compute-0 podman[389559]: 2026-01-26 16:51:51.739759329 +0000 UTC m=+0.136329382 container start 3cafc5036d8dd1294538af4f143d346175d7e988943577af66333f8344e59c35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:51:51 compute-0 podman[389559]: 2026-01-26 16:51:51.74347324 +0000 UTC m=+0.140043323 container attach 3cafc5036d8dd1294538af4f143d346175d7e988943577af66333f8344e59c35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:51:52 compute-0 sharp_ptolemy[389576]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:51:52 compute-0 sharp_ptolemy[389576]: --> All data devices are unavailable
Jan 26 16:51:52 compute-0 systemd[1]: libpod-3cafc5036d8dd1294538af4f143d346175d7e988943577af66333f8344e59c35.scope: Deactivated successfully.
Jan 26 16:51:52 compute-0 podman[389559]: 2026-01-26 16:51:52.298911822 +0000 UTC m=+0.695481875 container died 3cafc5036d8dd1294538af4f143d346175d7e988943577af66333f8344e59c35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:51:52 compute-0 ceph-mon[75140]: pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-61c54b894ac86895f09c07390094eb23a34c3fdc20c74619cfee42ceaab9ba63-merged.mount: Deactivated successfully.
Jan 26 16:51:52 compute-0 podman[389559]: 2026-01-26 16:51:52.812393276 +0000 UTC m=+1.208963329 container remove 3cafc5036d8dd1294538af4f143d346175d7e988943577af66333f8344e59c35 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:51:52 compute-0 systemd[1]: libpod-conmon-3cafc5036d8dd1294538af4f143d346175d7e988943577af66333f8344e59c35.scope: Deactivated successfully.
Jan 26 16:51:52 compute-0 sudo[389478]: pam_unix(sudo:session): session closed for user root
Jan 26 16:51:52 compute-0 sudo[389608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:51:52 compute-0 sudo[389608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:51:52 compute-0 sudo[389608]: pam_unix(sudo:session): session closed for user root
Jan 26 16:51:53 compute-0 sudo[389633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:51:53 compute-0 sudo[389633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:51:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:53 compute-0 podman[389671]: 2026-01-26 16:51:53.3149027 +0000 UTC m=+0.039643873 container create 2a405077023eb7d25e68b54843430a68210ecd064c70a4bb3d98b9662c2504b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shamir, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:51:53 compute-0 systemd[1]: Started libpod-conmon-2a405077023eb7d25e68b54843430a68210ecd064c70a4bb3d98b9662c2504b8.scope.
Jan 26 16:51:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:51:53 compute-0 podman[389671]: 2026-01-26 16:51:53.297148135 +0000 UTC m=+0.021889318 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:51:53 compute-0 podman[389671]: 2026-01-26 16:51:53.392714517 +0000 UTC m=+0.117455720 container init 2a405077023eb7d25e68b54843430a68210ecd064c70a4bb3d98b9662c2504b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shamir, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:51:53 compute-0 podman[389671]: 2026-01-26 16:51:53.400651792 +0000 UTC m=+0.125392965 container start 2a405077023eb7d25e68b54843430a68210ecd064c70a4bb3d98b9662c2504b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 16:51:53 compute-0 clever_shamir[389685]: 167 167
Jan 26 16:51:53 compute-0 systemd[1]: libpod-2a405077023eb7d25e68b54843430a68210ecd064c70a4bb3d98b9662c2504b8.scope: Deactivated successfully.
Jan 26 16:51:53 compute-0 podman[389671]: 2026-01-26 16:51:53.404684201 +0000 UTC m=+0.129425384 container attach 2a405077023eb7d25e68b54843430a68210ecd064c70a4bb3d98b9662c2504b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shamir, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 16:51:53 compute-0 podman[389671]: 2026-01-26 16:51:53.405521361 +0000 UTC m=+0.130262534 container died 2a405077023eb7d25e68b54843430a68210ecd064c70a4bb3d98b9662c2504b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shamir, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:51:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d93a33d16206dccf81fc4723b4094b5a5f470394b9e404d49eaa572f8580fd8-merged.mount: Deactivated successfully.
Jan 26 16:51:53 compute-0 podman[389671]: 2026-01-26 16:51:53.445171833 +0000 UTC m=+0.169913006 container remove 2a405077023eb7d25e68b54843430a68210ecd064c70a4bb3d98b9662c2504b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:51:53 compute-0 systemd[1]: libpod-conmon-2a405077023eb7d25e68b54843430a68210ecd064c70a4bb3d98b9662c2504b8.scope: Deactivated successfully.
Jan 26 16:51:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:53 compute-0 podman[389713]: 2026-01-26 16:51:53.59929892 +0000 UTC m=+0.028456509 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:51:54 compute-0 nova_compute[239965]: 2026-01-26 16:51:54.385 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:54 compute-0 podman[389713]: 2026-01-26 16:51:54.50316646 +0000 UTC m=+0.932324029 container create dbc33b21594b71f0d660ae3511a9116c0b0c253a95fde74039a749376db56288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kirch, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:51:54 compute-0 ceph-mon[75140]: pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:54 compute-0 systemd[1]: Started libpod-conmon-dbc33b21594b71f0d660ae3511a9116c0b0c253a95fde74039a749376db56288.scope.
Jan 26 16:51:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969c4aacdbaf4bc0f5e84b4b3b3b97235f1731fd112226b87ce5b2702a3e04d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969c4aacdbaf4bc0f5e84b4b3b3b97235f1731fd112226b87ce5b2702a3e04d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969c4aacdbaf4bc0f5e84b4b3b3b97235f1731fd112226b87ce5b2702a3e04d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969c4aacdbaf4bc0f5e84b4b3b3b97235f1731fd112226b87ce5b2702a3e04d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:54 compute-0 podman[389713]: 2026-01-26 16:51:54.629512967 +0000 UTC m=+1.058670536 container init dbc33b21594b71f0d660ae3511a9116c0b0c253a95fde74039a749376db56288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kirch, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:51:54 compute-0 podman[389713]: 2026-01-26 16:51:54.635798841 +0000 UTC m=+1.064956410 container start dbc33b21594b71f0d660ae3511a9116c0b0c253a95fde74039a749376db56288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kirch, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:51:54 compute-0 podman[389713]: 2026-01-26 16:51:54.64026553 +0000 UTC m=+1.069423119 container attach dbc33b21594b71f0d660ae3511a9116c0b0c253a95fde74039a749376db56288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:51:54 compute-0 happy_kirch[389730]: {
Jan 26 16:51:54 compute-0 happy_kirch[389730]:     "0": [
Jan 26 16:51:54 compute-0 happy_kirch[389730]:         {
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "devices": [
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "/dev/loop3"
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             ],
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_name": "ceph_lv0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_size": "21470642176",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "name": "ceph_lv0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "tags": {
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.cluster_name": "ceph",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.crush_device_class": "",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.encrypted": "0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.objectstore": "bluestore",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.osd_id": "0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.type": "block",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.vdo": "0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.with_tpm": "0"
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             },
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "type": "block",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "vg_name": "ceph_vg0"
Jan 26 16:51:54 compute-0 happy_kirch[389730]:         }
Jan 26 16:51:54 compute-0 happy_kirch[389730]:     ],
Jan 26 16:51:54 compute-0 happy_kirch[389730]:     "1": [
Jan 26 16:51:54 compute-0 happy_kirch[389730]:         {
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "devices": [
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "/dev/loop4"
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             ],
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_name": "ceph_lv1",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_size": "21470642176",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "name": "ceph_lv1",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "tags": {
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.cluster_name": "ceph",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.crush_device_class": "",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.encrypted": "0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.objectstore": "bluestore",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.osd_id": "1",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.type": "block",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.vdo": "0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.with_tpm": "0"
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             },
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "type": "block",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "vg_name": "ceph_vg1"
Jan 26 16:51:54 compute-0 happy_kirch[389730]:         }
Jan 26 16:51:54 compute-0 happy_kirch[389730]:     ],
Jan 26 16:51:54 compute-0 happy_kirch[389730]:     "2": [
Jan 26 16:51:54 compute-0 happy_kirch[389730]:         {
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "devices": [
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "/dev/loop5"
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             ],
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_name": "ceph_lv2",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_size": "21470642176",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "name": "ceph_lv2",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "tags": {
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.cluster_name": "ceph",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.crush_device_class": "",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.encrypted": "0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.objectstore": "bluestore",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.osd_id": "2",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.type": "block",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.vdo": "0",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:                 "ceph.with_tpm": "0"
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             },
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "type": "block",
Jan 26 16:51:54 compute-0 happy_kirch[389730]:             "vg_name": "ceph_vg2"
Jan 26 16:51:54 compute-0 happy_kirch[389730]:         }
Jan 26 16:51:54 compute-0 happy_kirch[389730]:     ]
Jan 26 16:51:54 compute-0 happy_kirch[389730]: }
Jan 26 16:51:54 compute-0 systemd[1]: libpod-dbc33b21594b71f0d660ae3511a9116c0b0c253a95fde74039a749376db56288.scope: Deactivated successfully.
Jan 26 16:51:54 compute-0 podman[389713]: 2026-01-26 16:51:54.925737026 +0000 UTC m=+1.354894605 container died dbc33b21594b71f0d660ae3511a9116c0b0c253a95fde74039a749376db56288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kirch, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:51:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-969c4aacdbaf4bc0f5e84b4b3b3b97235f1731fd112226b87ce5b2702a3e04d7-merged.mount: Deactivated successfully.
Jan 26 16:51:55 compute-0 podman[389713]: 2026-01-26 16:51:55.000325134 +0000 UTC m=+1.429482743 container remove dbc33b21594b71f0d660ae3511a9116c0b0c253a95fde74039a749376db56288 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_kirch, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:51:55 compute-0 systemd[1]: libpod-conmon-dbc33b21594b71f0d660ae3511a9116c0b0c253a95fde74039a749376db56288.scope: Deactivated successfully.
Jan 26 16:51:55 compute-0 sudo[389633]: pam_unix(sudo:session): session closed for user root
Jan 26 16:51:55 compute-0 sudo[389753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:51:55 compute-0 sudo[389753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:51:55 compute-0 sudo[389753]: pam_unix(sudo:session): session closed for user root
Jan 26 16:51:55 compute-0 sudo[389778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:51:55 compute-0 sudo[389778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:51:55 compute-0 podman[389816]: 2026-01-26 16:51:55.452827834 +0000 UTC m=+0.041685293 container create 17968d2dbd8e7b0b72c1936588fab012ba00e134e540295401c7b1b171cf3477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:51:55 compute-0 systemd[1]: Started libpod-conmon-17968d2dbd8e7b0b72c1936588fab012ba00e134e540295401c7b1b171cf3477.scope.
Jan 26 16:51:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:51:55 compute-0 podman[389816]: 2026-01-26 16:51:55.43269816 +0000 UTC m=+0.021555659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:51:55 compute-0 podman[389816]: 2026-01-26 16:51:55.540684416 +0000 UTC m=+0.129541915 container init 17968d2dbd8e7b0b72c1936588fab012ba00e134e540295401c7b1b171cf3477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:51:55 compute-0 podman[389816]: 2026-01-26 16:51:55.550733542 +0000 UTC m=+0.139591011 container start 17968d2dbd8e7b0b72c1936588fab012ba00e134e540295401c7b1b171cf3477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:51:55 compute-0 kind_wing[389833]: 167 167
Jan 26 16:51:55 compute-0 systemd[1]: libpod-17968d2dbd8e7b0b72c1936588fab012ba00e134e540295401c7b1b171cf3477.scope: Deactivated successfully.
Jan 26 16:51:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:55 compute-0 nova_compute[239965]: 2026-01-26 16:51:55.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:55 compute-0 podman[389816]: 2026-01-26 16:51:55.715638474 +0000 UTC m=+0.304495943 container attach 17968d2dbd8e7b0b72c1936588fab012ba00e134e540295401c7b1b171cf3477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:51:55 compute-0 podman[389816]: 2026-01-26 16:51:55.716166007 +0000 UTC m=+0.305023486 container died 17968d2dbd8e7b0b72c1936588fab012ba00e134e540295401c7b1b171cf3477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:51:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-97e385f25998264f4c4e45774e53e560f21264236d816dd4571b0db93104ae5c-merged.mount: Deactivated successfully.
Jan 26 16:51:55 compute-0 podman[389816]: 2026-01-26 16:51:55.896216019 +0000 UTC m=+0.485073488 container remove 17968d2dbd8e7b0b72c1936588fab012ba00e134e540295401c7b1b171cf3477 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:51:55 compute-0 systemd[1]: libpod-conmon-17968d2dbd8e7b0b72c1936588fab012ba00e134e540295401c7b1b171cf3477.scope: Deactivated successfully.
Jan 26 16:51:56 compute-0 podman[389856]: 2026-01-26 16:51:56.052312775 +0000 UTC m=+0.026694835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:51:56 compute-0 podman[389856]: 2026-01-26 16:51:56.262925816 +0000 UTC m=+0.237307846 container create 057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_williamson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 16:51:56 compute-0 systemd[1]: Started libpod-conmon-057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304.scope.
Jan 26 16:51:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b70f8a4582f154e232200abc8ea72d3f91ad9b09658d55b598b31accbea1fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b70f8a4582f154e232200abc8ea72d3f91ad9b09658d55b598b31accbea1fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b70f8a4582f154e232200abc8ea72d3f91ad9b09658d55b598b31accbea1fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39b70f8a4582f154e232200abc8ea72d3f91ad9b09658d55b598b31accbea1fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:51:56 compute-0 podman[389856]: 2026-01-26 16:51:56.553861736 +0000 UTC m=+0.528243766 container init 057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_williamson, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:51:56 compute-0 podman[389856]: 2026-01-26 16:51:56.561293387 +0000 UTC m=+0.535675417 container start 057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_williamson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:51:56 compute-0 podman[389856]: 2026-01-26 16:51:56.565657644 +0000 UTC m=+0.540039664 container attach 057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_williamson, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:51:56 compute-0 ceph-mon[75140]: pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:57 compute-0 lvm[389952]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:51:57 compute-0 lvm[389951]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:51:57 compute-0 lvm[389952]: VG ceph_vg1 finished
Jan 26 16:51:57 compute-0 lvm[389951]: VG ceph_vg0 finished
Jan 26 16:51:57 compute-0 lvm[389954]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:51:57 compute-0 lvm[389954]: VG ceph_vg2 finished
Jan 26 16:51:57 compute-0 loving_williamson[389872]: {}
Jan 26 16:51:57 compute-0 systemd[1]: libpod-057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304.scope: Deactivated successfully.
Jan 26 16:51:57 compute-0 podman[389856]: 2026-01-26 16:51:57.448434838 +0000 UTC m=+1.422816878 container died 057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:51:57 compute-0 systemd[1]: libpod-057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304.scope: Consumed 1.420s CPU time.
Jan 26 16:51:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-39b70f8a4582f154e232200abc8ea72d3f91ad9b09658d55b598b31accbea1fc-merged.mount: Deactivated successfully.
Jan 26 16:51:57 compute-0 podman[389856]: 2026-01-26 16:51:57.81654123 +0000 UTC m=+1.790923260 container remove 057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 16:51:57 compute-0 sudo[389778]: pam_unix(sudo:session): session closed for user root
Jan 26 16:51:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:51:57 compute-0 systemd[1]: libpod-conmon-057f4d6de423b0a4676652e2c742fc53e117f9351ed986fc780485f12a5f1304.scope: Deactivated successfully.
Jan 26 16:51:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:51:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:51:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:51:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:51:58 compute-0 sudo[389969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:51:58 compute-0 sudo[389969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:51:58 compute-0 sudo[389969]: pam_unix(sudo:session): session closed for user root
Jan 26 16:51:58 compute-0 podman[389993]: 2026-01-26 16:51:58.630982998 +0000 UTC m=+0.069021382 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 16:51:59 compute-0 ceph-mon[75140]: pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:51:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:51:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:51:59.274 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:51:59.274 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:51:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:51:59.274 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:51:59 compute-0 nova_compute[239965]: 2026-01-26 16:51:59.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:51:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:52:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:52:00 compute-0 nova_compute[239965]: 2026-01-26 16:52:00.574 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:00 compute-0 ceph-mon[75140]: pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:01 compute-0 podman[390011]: 2026-01-26 16:52:01.392997505 +0000 UTC m=+0.081612961 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 16:52:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:02 compute-0 ceph-mon[75140]: pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:04 compute-0 nova_compute[239965]: 2026-01-26 16:52:04.389 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:04 compute-0 ceph-mon[75140]: pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:05 compute-0 nova_compute[239965]: 2026-01-26 16:52:05.575 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:06 compute-0 ceph-mon[75140]: pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:08 compute-0 ceph-mon[75140]: pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:09 compute-0 nova_compute[239965]: 2026-01-26 16:52:09.392 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:10 compute-0 nova_compute[239965]: 2026-01-26 16:52:10.578 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:10 compute-0 ceph-mon[75140]: pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:12 compute-0 ceph-mon[75140]: pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:14 compute-0 nova_compute[239965]: 2026-01-26 16:52:14.396 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:15 compute-0 ceph-mon[75140]: pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:15 compute-0 nova_compute[239965]: 2026-01-26 16:52:15.580 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:16 compute-0 ceph-mon[75140]: pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:18 compute-0 ceph-mon[75140]: pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:19 compute-0 nova_compute[239965]: 2026-01-26 16:52:19.239 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:19 compute-0 nova_compute[239965]: 2026-01-26 16:52:19.240 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:19 compute-0 nova_compute[239965]: 2026-01-26 16:52:19.240 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:19 compute-0 nova_compute[239965]: 2026-01-26 16:52:19.397 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:20 compute-0 nova_compute[239965]: 2026-01-26 16:52:20.581 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:21 compute-0 ceph-mon[75140]: pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:22 compute-0 ceph-mon[75140]: pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:24 compute-0 nova_compute[239965]: 2026-01-26 16:52:24.400 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:25 compute-0 ceph-mon[75140]: pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:25 compute-0 nova_compute[239965]: 2026-01-26 16:52:25.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:25 compute-0 nova_compute[239965]: 2026-01-26 16:52:25.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:25 compute-0 nova_compute[239965]: 2026-01-26 16:52:25.583 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:26 compute-0 nova_compute[239965]: 2026-01-26 16:52:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:26 compute-0 ceph-mon[75140]: pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:28 compute-0 ceph-mon[75140]: pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:28 compute-0 nova_compute[239965]: 2026-01-26 16:52:28.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:28 compute-0 nova_compute[239965]: 2026-01-26 16:52:28.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:52:28
Jan 26 16:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'default.rgw.log', 'vms']
Jan 26 16:52:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:52:29 compute-0 podman[390036]: 2026-01-26 16:52:29.389061105 +0000 UTC m=+0.062355908 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent)
Jan 26 16:52:29 compute-0 nova_compute[239965]: 2026-01-26 16:52:29.403 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:52:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:52:30 compute-0 nova_compute[239965]: 2026-01-26 16:52:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:30 compute-0 nova_compute[239965]: 2026-01-26 16:52:30.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:52:30 compute-0 nova_compute[239965]: 2026-01-26 16:52:30.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:52:30 compute-0 nova_compute[239965]: 2026-01-26 16:52:30.526 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:52:30 compute-0 nova_compute[239965]: 2026-01-26 16:52:30.585 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:52:31 compute-0 ceph-mon[75140]: pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:31 compute-0 nova_compute[239965]: 2026-01-26 16:52:31.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:31 compute-0 nova_compute[239965]: 2026-01-26 16:52:31.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:52:31 compute-0 nova_compute[239965]: 2026-01-26 16:52:31.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:52:31 compute-0 nova_compute[239965]: 2026-01-26 16:52:31.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:52:31 compute-0 nova_compute[239965]: 2026-01-26 16:52:31.539 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:52:31 compute-0 nova_compute[239965]: 2026-01-26 16:52:31.540 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:52:32 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2821821888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:52:32 compute-0 nova_compute[239965]: 2026-01-26 16:52:32.075 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:32 compute-0 nova_compute[239965]: 2026-01-26 16:52:32.241 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:52:32 compute-0 nova_compute[239965]: 2026-01-26 16:52:32.242 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3584MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:52:32 compute-0 nova_compute[239965]: 2026-01-26 16:52:32.242 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:52:32 compute-0 nova_compute[239965]: 2026-01-26 16:52:32.243 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:52:32 compute-0 ceph-mon[75140]: pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:32 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2821821888' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:52:32 compute-0 podman[390077]: 2026-01-26 16:52:32.399803998 +0000 UTC m=+0.087562657 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:52:32 compute-0 nova_compute[239965]: 2026-01-26 16:52:32.641 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:52:32 compute-0 nova_compute[239965]: 2026-01-26 16:52:32.641 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:52:32 compute-0 nova_compute[239965]: 2026-01-26 16:52:32.661 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:52:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:52:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071039389' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:52:33 compute-0 nova_compute[239965]: 2026-01-26 16:52:33.211 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:52:33 compute-0 nova_compute[239965]: 2026-01-26 16:52:33.218 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:52:33 compute-0 nova_compute[239965]: 2026-01-26 16:52:33.240 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:52:33 compute-0 nova_compute[239965]: 2026-01-26 16:52:33.242 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:52:33 compute-0 nova_compute[239965]: 2026-01-26 16:52:33.242 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.999s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:52:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1071039389' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:52:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:34 compute-0 ceph-mon[75140]: pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:34 compute-0 nova_compute[239965]: 2026-01-26 16:52:34.407 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:35 compute-0 nova_compute[239965]: 2026-01-26 16:52:35.587 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:36 compute-0 ceph-mon[75140]: pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:37 compute-0 nova_compute[239965]: 2026-01-26 16:52:37.235 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:52:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:38 compute-0 ceph-mon[75140]: pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:39 compute-0 nova_compute[239965]: 2026-01-26 16:52:39.412 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:40 compute-0 nova_compute[239965]: 2026-01-26 16:52:40.589 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:40 compute-0 ceph-mon[75140]: pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:42 compute-0 ceph-mon[75140]: pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:44 compute-0 nova_compute[239965]: 2026-01-26 16:52:44.415 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:44 compute-0 ceph-mon[75140]: pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:45 compute-0 nova_compute[239965]: 2026-01-26 16:52:45.592 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:46 compute-0 ceph-mon[75140]: pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:48 compute-0 ceph-mon[75140]: pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:52:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2579119703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:52:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:52:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2579119703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:52:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2579119703' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:52:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2579119703' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:52:49 compute-0 nova_compute[239965]: 2026-01-26 16:52:49.420 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:52:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:50 compute-0 ceph-mon[75140]: pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:50 compute-0 nova_compute[239965]: 2026-01-26 16:52:50.592 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:52 compute-0 ceph-mon[75140]: pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:52:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 14 op/s
Jan 26 16:52:54 compute-0 nova_compute[239965]: 2026-01-26 16:52:54.422 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:54 compute-0 ceph-mon[75140]: pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 14 op/s
Jan 26 16:52:55 compute-0 nova_compute[239965]: 2026-01-26 16:52:55.594 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 16:52:56 compute-0 ceph-mon[75140]: pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 16:52:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 26 16:52:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:52:58 compute-0 sudo[390125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:52:58 compute-0 sudo[390125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:52:58 compute-0 sudo[390125]: pam_unix(sudo:session): session closed for user root
Jan 26 16:52:58 compute-0 ceph-mon[75140]: pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 26 16:52:58 compute-0 sudo[390150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:52:58 compute-0 sudo[390150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:52:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:52:59.274 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:52:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:52:59.275 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:52:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:52:59.275 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:52:59 compute-0 sudo[390150]: pam_unix(sudo:session): session closed for user root
Jan 26 16:52:59 compute-0 nova_compute[239965]: 2026-01-26 16:52:59.425 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:52:59 compute-0 sudo[390207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:52:59 compute-0 sudo[390207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:52:59 compute-0 sudo[390207]: pam_unix(sudo:session): session closed for user root
Jan 26 16:52:59 compute-0 sudo[390233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 26 16:52:59 compute-0 sudo[390233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:52:59 compute-0 podman[390231]: 2026-01-26 16:52:59.54921174 +0000 UTC m=+0.084822849 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:52:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 26 16:52:59 compute-0 sudo[390233]: pam_unix(sudo:session): session closed for user root
Jan 26 16:52:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:52:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:52:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:52:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:52:59 compute-0 sudo[390295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:52:59 compute-0 sudo[390295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:52:59 compute-0 sudo[390295]: pam_unix(sudo:session): session closed for user root
Jan 26 16:52:59 compute-0 sudo[390320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- inventory --format=json-pretty --filter-for-batch
Jan 26 16:52:59 compute-0 sudo[390320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:53:00 compute-0 podman[390355]: 2026-01-26 16:53:00.214446462 +0000 UTC m=+0.048859348 container create c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 16:53:00 compute-0 systemd[1]: Started libpod-conmon-c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60.scope.
Jan 26 16:53:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:53:00 compute-0 podman[390355]: 2026-01-26 16:53:00.194584156 +0000 UTC m=+0.028997072 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:53:00 compute-0 podman[390355]: 2026-01-26 16:53:00.31023105 +0000 UTC m=+0.144643966 container init c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:53:00 compute-0 podman[390355]: 2026-01-26 16:53:00.319438435 +0000 UTC m=+0.153851321 container start c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:53:00 compute-0 podman[390355]: 2026-01-26 16:53:00.323016653 +0000 UTC m=+0.157429579 container attach c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:53:00 compute-0 pedantic_dhawan[390371]: 167 167
Jan 26 16:53:00 compute-0 systemd[1]: libpod-c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60.scope: Deactivated successfully.
Jan 26 16:53:00 compute-0 conmon[390371]: conmon c972bd5bef6498863726 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60.scope/container/memory.events
Jan 26 16:53:00 compute-0 podman[390355]: 2026-01-26 16:53:00.327489313 +0000 UTC m=+0.161902229 container died c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:53:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-16317a736b54b575a6b3b8fed178428a1fe6f5d8121a0a4edd1c30bb41d1bc42-merged.mount: Deactivated successfully.
Jan 26 16:53:00 compute-0 podman[390355]: 2026-01-26 16:53:00.378356019 +0000 UTC m=+0.212768905 container remove c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_dhawan, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 16:53:00 compute-0 systemd[1]: libpod-conmon-c972bd5bef6498863726f0e972f02480da2eb9a7bcbfca3dc831b767298c3e60.scope: Deactivated successfully.
Jan 26 16:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:53:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:53:00 compute-0 podman[390396]: 2026-01-26 16:53:00.589625997 +0000 UTC m=+0.064109433 container create 1755742d52e9e348864b2d5693b9e7a45503a6b205312c6cd676245237d13da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_babbage, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:53:00 compute-0 nova_compute[239965]: 2026-01-26 16:53:00.596 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:00 compute-0 systemd[1]: Started libpod-conmon-1755742d52e9e348864b2d5693b9e7a45503a6b205312c6cd676245237d13da2.scope.
Jan 26 16:53:00 compute-0 podman[390396]: 2026-01-26 16:53:00.557948691 +0000 UTC m=+0.032432227 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:53:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98569363bcd7f5b1142256de864c72ba75353d7288286065752fbcf04e5aa92/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98569363bcd7f5b1142256de864c72ba75353d7288286065752fbcf04e5aa92/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98569363bcd7f5b1142256de864c72ba75353d7288286065752fbcf04e5aa92/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98569363bcd7f5b1142256de864c72ba75353d7288286065752fbcf04e5aa92/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:00 compute-0 podman[390396]: 2026-01-26 16:53:00.701838016 +0000 UTC m=+0.176321532 container init 1755742d52e9e348864b2d5693b9e7a45503a6b205312c6cd676245237d13da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_babbage, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:53:00 compute-0 podman[390396]: 2026-01-26 16:53:00.715362088 +0000 UTC m=+0.189845554 container start 1755742d52e9e348864b2d5693b9e7a45503a6b205312c6cd676245237d13da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:53:00 compute-0 podman[390396]: 2026-01-26 16:53:00.719771246 +0000 UTC m=+0.194254772 container attach 1755742d52e9e348864b2d5693b9e7a45503a6b205312c6cd676245237d13da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 16:53:00 compute-0 ceph-mon[75140]: pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 26 16:53:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:01 compute-0 recursing_babbage[390412]: [
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:     {
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         "available": false,
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         "being_replaced": false,
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         "ceph_device_lvm": false,
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         "lsm_data": {},
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         "lvs": [],
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         "path": "/dev/sr0",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         "rejected_reasons": [
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "Insufficient space (<5GB)",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "Has a FileSystem"
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         ],
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         "sys_api": {
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "actuators": null,
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "device_nodes": [
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:                 "sr0"
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             ],
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "devname": "sr0",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "human_readable_size": "482.00 KB",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "id_bus": "ata",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "model": "QEMU DVD-ROM",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "nr_requests": "2",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "parent": "/dev/sr0",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "partitions": {},
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "path": "/dev/sr0",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "removable": "1",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "rev": "2.5+",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "ro": "0",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "rotational": "1",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "sas_address": "",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "sas_device_handle": "",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "scheduler_mode": "mq-deadline",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "sectors": 0,
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "sectorsize": "2048",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "size": 493568.0,
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "support_discard": "2048",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "type": "disk",
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:             "vendor": "QEMU"
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:         }
Jan 26 16:53:01 compute-0 recursing_babbage[390412]:     }
Jan 26 16:53:01 compute-0 recursing_babbage[390412]: ]
Jan 26 16:53:01 compute-0 systemd[1]: libpod-1755742d52e9e348864b2d5693b9e7a45503a6b205312c6cd676245237d13da2.scope: Deactivated successfully.
Jan 26 16:53:01 compute-0 podman[391265]: 2026-01-26 16:53:01.422828215 +0000 UTC m=+0.022215505 container died 1755742d52e9e348864b2d5693b9e7a45503a6b205312c6cd676245237d13da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:53:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98569363bcd7f5b1142256de864c72ba75353d7288286065752fbcf04e5aa92-merged.mount: Deactivated successfully.
Jan 26 16:53:01 compute-0 podman[391265]: 2026-01-26 16:53:01.461243627 +0000 UTC m=+0.060630907 container remove 1755742d52e9e348864b2d5693b9e7a45503a6b205312c6cd676245237d13da2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_babbage, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:53:01 compute-0 systemd[1]: libpod-conmon-1755742d52e9e348864b2d5693b9e7a45503a6b205312c6cd676245237d13da2.scope: Deactivated successfully.
Jan 26 16:53:01 compute-0 sudo[390320]: pam_unix(sudo:session): session closed for user root
Jan 26 16:53:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:53:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:53:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:53:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:53:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:53:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:53:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:53:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:53:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:53:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:53:01 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:53:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:53:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:53:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 26 16:53:01 compute-0 sudo[391280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:53:01 compute-0 sudo[391280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:53:01 compute-0 sudo[391280]: pam_unix(sudo:session): session closed for user root
Jan 26 16:53:01 compute-0 sudo[391305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:53:01 compute-0 sudo[391305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:53:01 compute-0 podman[391340]: 2026-01-26 16:53:01.959141328 +0000 UTC m=+0.045581577 container create 6ecd488c0b3bbbf16222cf2dc8aac28bad4cf69368be0d99b993fbe546b78aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:53:01 compute-0 systemd[1]: Started libpod-conmon-6ecd488c0b3bbbf16222cf2dc8aac28bad4cf69368be0d99b993fbe546b78aaa.scope.
Jan 26 16:53:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:53:02 compute-0 podman[391340]: 2026-01-26 16:53:01.93678463 +0000 UTC m=+0.023224899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:53:02 compute-0 podman[391340]: 2026-01-26 16:53:02.035542551 +0000 UTC m=+0.121982800 container init 6ecd488c0b3bbbf16222cf2dc8aac28bad4cf69368be0d99b993fbe546b78aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 16:53:02 compute-0 podman[391340]: 2026-01-26 16:53:02.043043025 +0000 UTC m=+0.129483274 container start 6ecd488c0b3bbbf16222cf2dc8aac28bad4cf69368be0d99b993fbe546b78aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 16:53:02 compute-0 podman[391340]: 2026-01-26 16:53:02.046528351 +0000 UTC m=+0.132968600 container attach 6ecd488c0b3bbbf16222cf2dc8aac28bad4cf69368be0d99b993fbe546b78aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sammet, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:53:02 compute-0 loving_sammet[391356]: 167 167
Jan 26 16:53:02 compute-0 systemd[1]: libpod-6ecd488c0b3bbbf16222cf2dc8aac28bad4cf69368be0d99b993fbe546b78aaa.scope: Deactivated successfully.
Jan 26 16:53:02 compute-0 podman[391340]: 2026-01-26 16:53:02.049271578 +0000 UTC m=+0.135711837 container died 6ecd488c0b3bbbf16222cf2dc8aac28bad4cf69368be0d99b993fbe546b78aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 16:53:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f5465edb930734450f59ddb8a874e43ba2d6d07f55fd307fb0fa0e59bd88831-merged.mount: Deactivated successfully.
Jan 26 16:53:02 compute-0 podman[391340]: 2026-01-26 16:53:02.082222465 +0000 UTC m=+0.168662714 container remove 6ecd488c0b3bbbf16222cf2dc8aac28bad4cf69368be0d99b993fbe546b78aaa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_sammet, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:53:02 compute-0 systemd[1]: libpod-conmon-6ecd488c0b3bbbf16222cf2dc8aac28bad4cf69368be0d99b993fbe546b78aaa.scope: Deactivated successfully.
Jan 26 16:53:02 compute-0 podman[391380]: 2026-01-26 16:53:02.29668642 +0000 UTC m=+0.069426442 container create fc747bdd5259fd780fe795c790e7782db8492a2e22076d4e1832e4e431d63cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mendeleev, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 16:53:02 compute-0 systemd[1]: Started libpod-conmon-fc747bdd5259fd780fe795c790e7782db8492a2e22076d4e1832e4e431d63cad.scope.
Jan 26 16:53:02 compute-0 podman[391380]: 2026-01-26 16:53:02.257142221 +0000 UTC m=+0.029882243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:53:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c478e44b97433064a23a35277f99dd0e90c48d4ac3c952a1580a1f455dd097/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c478e44b97433064a23a35277f99dd0e90c48d4ac3c952a1580a1f455dd097/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c478e44b97433064a23a35277f99dd0e90c48d4ac3c952a1580a1f455dd097/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c478e44b97433064a23a35277f99dd0e90c48d4ac3c952a1580a1f455dd097/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f1c478e44b97433064a23a35277f99dd0e90c48d4ac3c952a1580a1f455dd097/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:02 compute-0 podman[391380]: 2026-01-26 16:53:02.39663215 +0000 UTC m=+0.169372262 container init fc747bdd5259fd780fe795c790e7782db8492a2e22076d4e1832e4e431d63cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mendeleev, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:53:02 compute-0 podman[391380]: 2026-01-26 16:53:02.409151917 +0000 UTC m=+0.181891959 container start fc747bdd5259fd780fe795c790e7782db8492a2e22076d4e1832e4e431d63cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Jan 26 16:53:02 compute-0 podman[391380]: 2026-01-26 16:53:02.416120818 +0000 UTC m=+0.188860860 container attach fc747bdd5259fd780fe795c790e7782db8492a2e22076d4e1832e4e431d63cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mendeleev, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:53:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:53:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:53:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:53:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:53:02 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:53:02 compute-0 ceph-mon[75140]: pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 26 16:53:02 compute-0 gifted_mendeleev[391397]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:53:02 compute-0 gifted_mendeleev[391397]: --> All data devices are unavailable
Jan 26 16:53:03 compute-0 systemd[1]: libpod-fc747bdd5259fd780fe795c790e7782db8492a2e22076d4e1832e4e431d63cad.scope: Deactivated successfully.
Jan 26 16:53:03 compute-0 podman[391380]: 2026-01-26 16:53:03.029846868 +0000 UTC m=+0.802586870 container died fc747bdd5259fd780fe795c790e7782db8492a2e22076d4e1832e4e431d63cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 16:53:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1c478e44b97433064a23a35277f99dd0e90c48d4ac3c952a1580a1f455dd097-merged.mount: Deactivated successfully.
Jan 26 16:53:03 compute-0 podman[391380]: 2026-01-26 16:53:03.079365941 +0000 UTC m=+0.852105943 container remove fc747bdd5259fd780fe795c790e7782db8492a2e22076d4e1832e4e431d63cad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:53:03 compute-0 systemd[1]: libpod-conmon-fc747bdd5259fd780fe795c790e7782db8492a2e22076d4e1832e4e431d63cad.scope: Deactivated successfully.
Jan 26 16:53:03 compute-0 sudo[391305]: pam_unix(sudo:session): session closed for user root
Jan 26 16:53:03 compute-0 podman[391419]: 2026-01-26 16:53:03.167150553 +0000 UTC m=+0.105913427 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:53:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:03 compute-0 sudo[391454]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:53:03 compute-0 sudo[391454]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:53:03 compute-0 sudo[391454]: pam_unix(sudo:session): session closed for user root
Jan 26 16:53:03 compute-0 sudo[391480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:53:03 compute-0 sudo[391480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:53:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 26 16:53:03 compute-0 podman[391517]: 2026-01-26 16:53:03.604242374 +0000 UTC m=+0.053002720 container create c41aa09862dd8312730b6b83f3f4c3378330cec835175e0287267705bca93365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:53:03 compute-0 systemd[1]: Started libpod-conmon-c41aa09862dd8312730b6b83f3f4c3378330cec835175e0287267705bca93365.scope.
Jan 26 16:53:03 compute-0 podman[391517]: 2026-01-26 16:53:03.581873166 +0000 UTC m=+0.030633562 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:53:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:53:03 compute-0 podman[391517]: 2026-01-26 16:53:03.706226973 +0000 UTC m=+0.154987339 container init c41aa09862dd8312730b6b83f3f4c3378330cec835175e0287267705bca93365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 16:53:03 compute-0 podman[391517]: 2026-01-26 16:53:03.713787978 +0000 UTC m=+0.162548324 container start c41aa09862dd8312730b6b83f3f4c3378330cec835175e0287267705bca93365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:53:03 compute-0 podman[391517]: 2026-01-26 16:53:03.717777106 +0000 UTC m=+0.166537472 container attach c41aa09862dd8312730b6b83f3f4c3378330cec835175e0287267705bca93365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lichterman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:53:03 compute-0 peaceful_lichterman[391533]: 167 167
Jan 26 16:53:03 compute-0 systemd[1]: libpod-c41aa09862dd8312730b6b83f3f4c3378330cec835175e0287267705bca93365.scope: Deactivated successfully.
Jan 26 16:53:03 compute-0 podman[391517]: 2026-01-26 16:53:03.721143289 +0000 UTC m=+0.169903655 container died c41aa09862dd8312730b6b83f3f4c3378330cec835175e0287267705bca93365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lichterman, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 16:53:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fb3357f72502392919ec251154bd25101a9442d515b7b1c3a61506f8e8ccf7d-merged.mount: Deactivated successfully.
Jan 26 16:53:03 compute-0 podman[391517]: 2026-01-26 16:53:03.768331555 +0000 UTC m=+0.217091931 container remove c41aa09862dd8312730b6b83f3f4c3378330cec835175e0287267705bca93365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:53:03 compute-0 systemd[1]: libpod-conmon-c41aa09862dd8312730b6b83f3f4c3378330cec835175e0287267705bca93365.scope: Deactivated successfully.
Jan 26 16:53:03 compute-0 podman[391556]: 2026-01-26 16:53:03.962316039 +0000 UTC m=+0.058492484 container create dabe0900f5eb983f60ce3c692156a1c558e4c56dad069539d7d5c00b5eff5846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kepler, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:53:04 compute-0 systemd[1]: Started libpod-conmon-dabe0900f5eb983f60ce3c692156a1c558e4c56dad069539d7d5c00b5eff5846.scope.
Jan 26 16:53:04 compute-0 podman[391556]: 2026-01-26 16:53:03.938733421 +0000 UTC m=+0.034909846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:53:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fa969751367c044b7008c6daceab9ea090063238f73fc701c7ba7ec6b98f1f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fa969751367c044b7008c6daceab9ea090063238f73fc701c7ba7ec6b98f1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fa969751367c044b7008c6daceab9ea090063238f73fc701c7ba7ec6b98f1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4fa969751367c044b7008c6daceab9ea090063238f73fc701c7ba7ec6b98f1f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:04 compute-0 podman[391556]: 2026-01-26 16:53:04.070722305 +0000 UTC m=+0.166898790 container init dabe0900f5eb983f60ce3c692156a1c558e4c56dad069539d7d5c00b5eff5846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:53:04 compute-0 podman[391556]: 2026-01-26 16:53:04.0794361 +0000 UTC m=+0.175612505 container start dabe0900f5eb983f60ce3c692156a1c558e4c56dad069539d7d5c00b5eff5846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kepler, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:53:04 compute-0 podman[391556]: 2026-01-26 16:53:04.084147075 +0000 UTC m=+0.180323510 container attach dabe0900f5eb983f60ce3c692156a1c558e4c56dad069539d7d5c00b5eff5846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kepler, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]: {
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:     "0": [
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:         {
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "devices": [
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "/dev/loop3"
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             ],
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_name": "ceph_lv0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_size": "21470642176",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "name": "ceph_lv0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "tags": {
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.cluster_name": "ceph",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.crush_device_class": "",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.encrypted": "0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.objectstore": "bluestore",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.osd_id": "0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.type": "block",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.vdo": "0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.with_tpm": "0"
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             },
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "type": "block",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "vg_name": "ceph_vg0"
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:         }
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:     ],
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:     "1": [
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:         {
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "devices": [
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "/dev/loop4"
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             ],
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_name": "ceph_lv1",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_size": "21470642176",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "name": "ceph_lv1",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "tags": {
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.cluster_name": "ceph",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.crush_device_class": "",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.encrypted": "0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.objectstore": "bluestore",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.osd_id": "1",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.type": "block",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.vdo": "0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.with_tpm": "0"
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             },
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "type": "block",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "vg_name": "ceph_vg1"
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:         }
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:     ],
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:     "2": [
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:         {
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "devices": [
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "/dev/loop5"
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             ],
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_name": "ceph_lv2",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_size": "21470642176",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "name": "ceph_lv2",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "tags": {
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.cluster_name": "ceph",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.crush_device_class": "",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.encrypted": "0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.objectstore": "bluestore",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.osd_id": "2",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.type": "block",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.vdo": "0",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:                 "ceph.with_tpm": "0"
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             },
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "type": "block",
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:             "vg_name": "ceph_vg2"
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:         }
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]:     ]
Jan 26 16:53:04 compute-0 vibrant_kepler[391573]: }
Jan 26 16:53:04 compute-0 systemd[1]: libpod-dabe0900f5eb983f60ce3c692156a1c558e4c56dad069539d7d5c00b5eff5846.scope: Deactivated successfully.
Jan 26 16:53:04 compute-0 nova_compute[239965]: 2026-01-26 16:53:04.426 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:04 compute-0 podman[391582]: 2026-01-26 16:53:04.469840096 +0000 UTC m=+0.027634837 container died dabe0900f5eb983f60ce3c692156a1c558e4c56dad069539d7d5c00b5eff5846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:53:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4fa969751367c044b7008c6daceab9ea090063238f73fc701c7ba7ec6b98f1f-merged.mount: Deactivated successfully.
Jan 26 16:53:04 compute-0 podman[391582]: 2026-01-26 16:53:04.52426396 +0000 UTC m=+0.082058711 container remove dabe0900f5eb983f60ce3c692156a1c558e4c56dad069539d7d5c00b5eff5846 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_kepler, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 16:53:04 compute-0 systemd[1]: libpod-conmon-dabe0900f5eb983f60ce3c692156a1c558e4c56dad069539d7d5c00b5eff5846.scope: Deactivated successfully.
Jan 26 16:53:04 compute-0 sudo[391480]: pam_unix(sudo:session): session closed for user root
Jan 26 16:53:04 compute-0 ceph-mon[75140]: pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 75 op/s
Jan 26 16:53:04 compute-0 sudo[391597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:53:04 compute-0 sudo[391597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:53:04 compute-0 sudo[391597]: pam_unix(sudo:session): session closed for user root
Jan 26 16:53:04 compute-0 sudo[391622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:53:04 compute-0 sudo[391622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:53:05 compute-0 podman[391658]: 2026-01-26 16:53:05.083394853 +0000 UTC m=+0.044653655 container create 3ece10eabb4848d74f7f4bf0e281e3b214207a36ff72be775e0d0bdd3c93a0dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:53:05 compute-0 systemd[1]: Started libpod-conmon-3ece10eabb4848d74f7f4bf0e281e3b214207a36ff72be775e0d0bdd3c93a0dc.scope.
Jan 26 16:53:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:53:05 compute-0 podman[391658]: 2026-01-26 16:53:05.067165685 +0000 UTC m=+0.028424487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:53:05 compute-0 podman[391658]: 2026-01-26 16:53:05.187216067 +0000 UTC m=+0.148474929 container init 3ece10eabb4848d74f7f4bf0e281e3b214207a36ff72be775e0d0bdd3c93a0dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_margulis, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:53:05 compute-0 podman[391658]: 2026-01-26 16:53:05.199330183 +0000 UTC m=+0.160589005 container start 3ece10eabb4848d74f7f4bf0e281e3b214207a36ff72be775e0d0bdd3c93a0dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 16:53:05 compute-0 podman[391658]: 2026-01-26 16:53:05.203832415 +0000 UTC m=+0.165091267 container attach 3ece10eabb4848d74f7f4bf0e281e3b214207a36ff72be775e0d0bdd3c93a0dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:53:05 compute-0 xenodochial_margulis[391674]: 167 167
Jan 26 16:53:05 compute-0 systemd[1]: libpod-3ece10eabb4848d74f7f4bf0e281e3b214207a36ff72be775e0d0bdd3c93a0dc.scope: Deactivated successfully.
Jan 26 16:53:05 compute-0 podman[391658]: 2026-01-26 16:53:05.206770126 +0000 UTC m=+0.168028938 container died 3ece10eabb4848d74f7f4bf0e281e3b214207a36ff72be775e0d0bdd3c93a0dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:53:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-eef090b3021365e8ed72602a2c9ed82815e8786dd5203f032af9989687c920a3-merged.mount: Deactivated successfully.
Jan 26 16:53:05 compute-0 podman[391658]: 2026-01-26 16:53:05.265087465 +0000 UTC m=+0.226346277 container remove 3ece10eabb4848d74f7f4bf0e281e3b214207a36ff72be775e0d0bdd3c93a0dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_margulis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:53:05 compute-0 systemd[1]: libpod-conmon-3ece10eabb4848d74f7f4bf0e281e3b214207a36ff72be775e0d0bdd3c93a0dc.scope: Deactivated successfully.
Jan 26 16:53:05 compute-0 podman[391697]: 2026-01-26 16:53:05.463920828 +0000 UTC m=+0.043783554 container create 77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 16:53:05 compute-0 systemd[1]: Started libpod-conmon-77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b.scope.
Jan 26 16:53:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c65ddeb4522daea9e1dd69b8f3499dc813d2e69e1ffd82196234edad6d4ba0b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c65ddeb4522daea9e1dd69b8f3499dc813d2e69e1ffd82196234edad6d4ba0b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c65ddeb4522daea9e1dd69b8f3499dc813d2e69e1ffd82196234edad6d4ba0b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c65ddeb4522daea9e1dd69b8f3499dc813d2e69e1ffd82196234edad6d4ba0b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:53:05 compute-0 podman[391697]: 2026-01-26 16:53:05.443921498 +0000 UTC m=+0.023784234 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:53:05 compute-0 podman[391697]: 2026-01-26 16:53:05.548303366 +0000 UTC m=+0.128166152 container init 77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:53:05 compute-0 podman[391697]: 2026-01-26 16:53:05.55457864 +0000 UTC m=+0.134441376 container start 77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 16:53:05 compute-0 podman[391697]: 2026-01-26 16:53:05.558919636 +0000 UTC m=+0.138782422 container attach 77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 16:53:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 16:53:05 compute-0 nova_compute[239965]: 2026-01-26 16:53:05.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:06 compute-0 lvm[391792]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:53:06 compute-0 lvm[391793]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:53:06 compute-0 lvm[391792]: VG ceph_vg0 finished
Jan 26 16:53:06 compute-0 lvm[391793]: VG ceph_vg1 finished
Jan 26 16:53:06 compute-0 lvm[391795]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:53:06 compute-0 lvm[391795]: VG ceph_vg2 finished
Jan 26 16:53:06 compute-0 adoring_vaughan[391714]: {}
Jan 26 16:53:06 compute-0 systemd[1]: libpod-77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b.scope: Deactivated successfully.
Jan 26 16:53:06 compute-0 systemd[1]: libpod-77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b.scope: Consumed 1.424s CPU time.
Jan 26 16:53:06 compute-0 podman[391697]: 2026-01-26 16:53:06.398613054 +0000 UTC m=+0.978475790 container died 77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:53:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c65ddeb4522daea9e1dd69b8f3499dc813d2e69e1ffd82196234edad6d4ba0b-merged.mount: Deactivated successfully.
Jan 26 16:53:06 compute-0 podman[391697]: 2026-01-26 16:53:06.452648049 +0000 UTC m=+1.032510825 container remove 77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:53:06 compute-0 systemd[1]: libpod-conmon-77969522287a1d65205f47fd86b48794e66a9855028ae1b7ab0440336ba9874b.scope: Deactivated successfully.
Jan 26 16:53:06 compute-0 sudo[391622]: pam_unix(sudo:session): session closed for user root
Jan 26 16:53:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:53:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:53:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:06 compute-0 sudo[391809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:53:06 compute-0 sudo[391809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:53:06 compute-0 sudo[391809]: pam_unix(sudo:session): session closed for user root
Jan 26 16:53:06 compute-0 ceph-mon[75140]: pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 16:53:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:53:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Jan 26 16:53:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:08 compute-0 ceph-mon[75140]: pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 0 B/s wr, 35 op/s
Jan 26 16:53:09 compute-0 nova_compute[239965]: 2026-01-26 16:53:09.430 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:10 compute-0 nova_compute[239965]: 2026-01-26 16:53:10.602 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:10 compute-0 ceph-mon[75140]: pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:12 compute-0 ceph-mon[75140]: pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.710932) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446393710995, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1972, "num_deletes": 251, "total_data_size": 3477600, "memory_usage": 3534536, "flush_reason": "Manual Compaction"}
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446393742117, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 3372156, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61760, "largest_seqno": 63731, "table_properties": {"data_size": 3363021, "index_size": 5756, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17960, "raw_average_key_size": 20, "raw_value_size": 3345058, "raw_average_value_size": 3729, "num_data_blocks": 257, "num_entries": 897, "num_filter_entries": 897, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769446178, "oldest_key_time": 1769446178, "file_creation_time": 1769446393, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 31267 microseconds, and 7814 cpu microseconds.
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.742189) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 3372156 bytes OK
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.742223) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.745360) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.745378) EVENT_LOG_v1 {"time_micros": 1769446393745372, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.745404) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 3469305, prev total WAL file size 3469305, number of live WAL files 2.
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.746394) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(3293KB)], [146(9966KB)]
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446393746431, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13578037, "oldest_snapshot_seqno": -1}
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8455 keys, 11790950 bytes, temperature: kUnknown
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446393839143, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11790950, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11734652, "index_size": 34041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21189, "raw_key_size": 220794, "raw_average_key_size": 26, "raw_value_size": 11583737, "raw_average_value_size": 1370, "num_data_blocks": 1323, "num_entries": 8455, "num_filter_entries": 8455, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769446393, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.839545) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11790950 bytes
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.841427) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.3 rd, 127.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 9.7 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(7.5) write-amplify(3.5) OK, records in: 8969, records dropped: 514 output_compression: NoCompression
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.841454) EVENT_LOG_v1 {"time_micros": 1769446393841441, "job": 90, "event": "compaction_finished", "compaction_time_micros": 92828, "compaction_time_cpu_micros": 29360, "output_level": 6, "num_output_files": 1, "total_output_size": 11790950, "num_input_records": 8969, "num_output_records": 8455, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446393842843, "job": 90, "event": "table_file_deletion", "file_number": 148}
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446393846411, "job": 90, "event": "table_file_deletion", "file_number": 146}
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.746262) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.846516) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.846523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.846525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.846526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:53:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:53:13.846528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:53:14 compute-0 nova_compute[239965]: 2026-01-26 16:53:14.433 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:14 compute-0 ceph-mon[75140]: pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:15 compute-0 nova_compute[239965]: 2026-01-26 16:53:15.604 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:16 compute-0 ceph-mon[75140]: pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:18 compute-0 nova_compute[239965]: 2026-01-26 16:53:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:18 compute-0 nova_compute[239965]: 2026-01-26 16:53:18.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:18 compute-0 ceph-mon[75140]: pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:19 compute-0 nova_compute[239965]: 2026-01-26 16:53:19.438 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:19 compute-0 nova_compute[239965]: 2026-01-26 16:53:19.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:20 compute-0 nova_compute[239965]: 2026-01-26 16:53:20.607 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:20 compute-0 ceph-mon[75140]: pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:22 compute-0 ceph-mon[75140]: pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:24 compute-0 nova_compute[239965]: 2026-01-26 16:53:24.487 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:24 compute-0 ceph-mon[75140]: pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:25 compute-0 nova_compute[239965]: 2026-01-26 16:53:25.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:25 compute-0 nova_compute[239965]: 2026-01-26 16:53:25.662 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:26 compute-0 ceph-mon[75140]: pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:27 compute-0 nova_compute[239965]: 2026-01-26 16:53:27.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:27 compute-0 nova_compute[239965]: 2026-01-26 16:53:27.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:53:28
Jan 26 16:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.control', '.mgr']
Jan 26 16:53:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:53:28 compute-0 ceph-mon[75140]: pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:29 compute-0 nova_compute[239965]: 2026-01-26 16:53:29.489 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:29 compute-0 nova_compute[239965]: 2026-01-26 16:53:29.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:29 compute-0 nova_compute[239965]: 2026-01-26 16:53:29.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:53:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:30 compute-0 podman[391834]: 2026-01-26 16:53:30.373261835 +0000 UTC m=+0.059728305 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 16:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:53:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:53:30 compute-0 nova_compute[239965]: 2026-01-26 16:53:30.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:30 compute-0 nova_compute[239965]: 2026-01-26 16:53:30.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:53:30 compute-0 nova_compute[239965]: 2026-01-26 16:53:30.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:53:30 compute-0 nova_compute[239965]: 2026-01-26 16:53:30.666 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:30 compute-0 ceph-mon[75140]: pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:53:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:33 compute-0 ceph-mon[75140]: pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:33 compute-0 podman[391855]: 2026-01-26 16:53:33.391194216 +0000 UTC m=+0.077655705 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:53:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:34 compute-0 nova_compute[239965]: 2026-01-26 16:53:34.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:35 compute-0 ceph-mon[75140]: pgmap v3114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:35 compute-0 nova_compute[239965]: 2026-01-26 16:53:35.668 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:37 compute-0 ceph-mon[75140]: pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:39 compute-0 ceph-mon[75140]: pgmap v3116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:39 compute-0 nova_compute[239965]: 2026-01-26 16:53:39.496 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:40 compute-0 nova_compute[239965]: 2026-01-26 16:53:40.671 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:41 compute-0 ceph-mon[75140]: pgmap v3117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:42 compute-0 ceph-mon[75140]: pgmap v3118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:43 compute-0 nova_compute[239965]: 2026-01-26 16:53:43.623 239969 WARNING oslo.service.loopingcall [-] Function 'nova.servicegroup.drivers.db.DbDriver._report_state' run outlasted interval by 34.01 sec
Jan 26 16:53:43 compute-0 nova_compute[239965]: 2026-01-26 16:53:43.665 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:53:43 compute-0 nova_compute[239965]: 2026-01-26 16:53:43.665 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:53:43 compute-0 nova_compute[239965]: 2026-01-26 16:53:43.702 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:53:43 compute-0 nova_compute[239965]: 2026-01-26 16:53:43.702 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:53:43 compute-0 nova_compute[239965]: 2026-01-26 16:53:43.703 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:53:43 compute-0 nova_compute[239965]: 2026-01-26 16:53:43.703 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:53:43 compute-0 nova_compute[239965]: 2026-01-26 16:53:43.703 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:53:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2857914288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.258 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.434 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.435 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3563MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.435 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.436 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.499 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.624 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.624 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.757 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.775 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.775 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.796 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:53:44 compute-0 ceph-mon[75140]: pgmap v3119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2857914288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.822 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:53:44 compute-0 nova_compute[239965]: 2026-01-26 16:53:44.853 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:53:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:53:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195907407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:53:45 compute-0 nova_compute[239965]: 2026-01-26 16:53:45.381 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:53:45 compute-0 nova_compute[239965]: 2026-01-26 16:53:45.385 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:53:45 compute-0 nova_compute[239965]: 2026-01-26 16:53:45.426 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:53:45 compute-0 nova_compute[239965]: 2026-01-26 16:53:45.427 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:53:45 compute-0 nova_compute[239965]: 2026-01-26 16:53:45.428 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.992s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:53:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:45 compute-0 nova_compute[239965]: 2026-01-26 16:53:45.673 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2195907407' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:53:46 compute-0 ceph-mon[75140]: pgmap v3120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:53:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/790540618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:53:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:53:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/790540618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:53:48 compute-0 ceph-mon[75140]: pgmap v3121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/790540618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:53:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/790540618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:53:49 compute-0 nova_compute[239965]: 2026-01-26 16:53:49.502 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:53:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:50 compute-0 nova_compute[239965]: 2026-01-26 16:53:50.675 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:50 compute-0 ceph-mon[75140]: pgmap v3122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:52 compute-0 ceph-mon[75140]: pgmap v3123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:54 compute-0 nova_compute[239965]: 2026-01-26 16:53:54.505 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:54 compute-0 ceph-mon[75140]: pgmap v3124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:55 compute-0 nova_compute[239965]: 2026-01-26 16:53:55.676 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:56 compute-0 ceph-mon[75140]: pgmap v3125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:53:58 compute-0 ceph-mon[75140]: pgmap v3126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:53:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:53:59.276 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:53:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:53:59.276 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:53:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:53:59.276 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:53:59 compute-0 nova_compute[239965]: 2026-01-26 16:53:59.509 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:53:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:54:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:54:00 compute-0 nova_compute[239965]: 2026-01-26 16:54:00.676 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:00 compute-0 ceph-mon[75140]: pgmap v3127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:01 compute-0 podman[391924]: 2026-01-26 16:54:01.398053279 +0000 UTC m=+0.081859527 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:54:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:03 compute-0 ceph-mon[75140]: pgmap v3128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:04 compute-0 ceph-mon[75140]: pgmap v3129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:04 compute-0 podman[391943]: 2026-01-26 16:54:04.404924896 +0000 UTC m=+0.095516342 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:54:04 compute-0 nova_compute[239965]: 2026-01-26 16:54:04.511 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:05 compute-0 nova_compute[239965]: 2026-01-26 16:54:05.678 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:06 compute-0 sshd-session[391968]: Invalid user sol from 45.148.10.240 port 51106
Jan 26 16:54:06 compute-0 sshd-session[391968]: Connection closed by invalid user sol 45.148.10.240 port 51106 [preauth]
Jan 26 16:54:06 compute-0 sudo[391970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:54:06 compute-0 sudo[391970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:54:06 compute-0 sudo[391970]: pam_unix(sudo:session): session closed for user root
Jan 26 16:54:06 compute-0 ceph-mon[75140]: pgmap v3130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:06 compute-0 sudo[391995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:54:06 compute-0 sudo[391995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:54:07 compute-0 sudo[391995]: pam_unix(sudo:session): session closed for user root
Jan 26 16:54:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:54:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:54:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:54:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:54:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:54:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:54:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:54:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:54:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:54:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:54:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:54:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:54:07 compute-0 sudo[392051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:54:07 compute-0 sudo[392051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:54:07 compute-0 sudo[392051]: pam_unix(sudo:session): session closed for user root
Jan 26 16:54:07 compute-0 sudo[392076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:54:07 compute-0 sudo[392076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:54:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:54:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:54:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:54:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:54:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:54:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:54:07 compute-0 podman[392113]: 2026-01-26 16:54:07.742362264 +0000 UTC m=+0.051736299 container create 6e89a44a3fccb47e7bd14a01a9c846844498548e850270b9567033ac02ae3a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_newton, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:54:07 compute-0 systemd[1]: Started libpod-conmon-6e89a44a3fccb47e7bd14a01a9c846844498548e850270b9567033ac02ae3a9d.scope.
Jan 26 16:54:07 compute-0 podman[392113]: 2026-01-26 16:54:07.720924698 +0000 UTC m=+0.030298753 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:54:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:54:07 compute-0 podman[392113]: 2026-01-26 16:54:07.840996301 +0000 UTC m=+0.150370366 container init 6e89a44a3fccb47e7bd14a01a9c846844498548e850270b9567033ac02ae3a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:54:07 compute-0 podman[392113]: 2026-01-26 16:54:07.851339034 +0000 UTC m=+0.160713069 container start 6e89a44a3fccb47e7bd14a01a9c846844498548e850270b9567033ac02ae3a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_newton, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 16:54:07 compute-0 podman[392113]: 2026-01-26 16:54:07.855909777 +0000 UTC m=+0.165283832 container attach 6e89a44a3fccb47e7bd14a01a9c846844498548e850270b9567033ac02ae3a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_newton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:54:07 compute-0 agitated_newton[392129]: 167 167
Jan 26 16:54:07 compute-0 systemd[1]: libpod-6e89a44a3fccb47e7bd14a01a9c846844498548e850270b9567033ac02ae3a9d.scope: Deactivated successfully.
Jan 26 16:54:07 compute-0 podman[392113]: 2026-01-26 16:54:07.85974186 +0000 UTC m=+0.169115895 container died 6e89a44a3fccb47e7bd14a01a9c846844498548e850270b9567033ac02ae3a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_newton, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 16:54:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-6f53929667bf0c5154cff2469cbedead6ba8acadadfad0745d2868ed21ca39ab-merged.mount: Deactivated successfully.
Jan 26 16:54:07 compute-0 podman[392113]: 2026-01-26 16:54:07.912224317 +0000 UTC m=+0.221598352 container remove 6e89a44a3fccb47e7bd14a01a9c846844498548e850270b9567033ac02ae3a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_newton, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:54:07 compute-0 systemd[1]: libpod-conmon-6e89a44a3fccb47e7bd14a01a9c846844498548e850270b9567033ac02ae3a9d.scope: Deactivated successfully.
Jan 26 16:54:08 compute-0 podman[392152]: 2026-01-26 16:54:08.071234393 +0000 UTC m=+0.048697714 container create 978340e31df249ece119b23cb885977fd6ba95d4c277151d968666a278df09db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 16:54:08 compute-0 systemd[1]: Started libpod-conmon-978340e31df249ece119b23cb885977fd6ba95d4c277151d968666a278df09db.scope.
Jan 26 16:54:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e84f5f2641a3404630b39b7b0afec103e05c3c837217644c84fe013d9485312/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:08 compute-0 podman[392152]: 2026-01-26 16:54:08.04948695 +0000 UTC m=+0.026950281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e84f5f2641a3404630b39b7b0afec103e05c3c837217644c84fe013d9485312/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e84f5f2641a3404630b39b7b0afec103e05c3c837217644c84fe013d9485312/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e84f5f2641a3404630b39b7b0afec103e05c3c837217644c84fe013d9485312/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e84f5f2641a3404630b39b7b0afec103e05c3c837217644c84fe013d9485312/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:08 compute-0 podman[392152]: 2026-01-26 16:54:08.162410408 +0000 UTC m=+0.139873749 container init 978340e31df249ece119b23cb885977fd6ba95d4c277151d968666a278df09db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_moore, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:54:08 compute-0 podman[392152]: 2026-01-26 16:54:08.169079471 +0000 UTC m=+0.146542782 container start 978340e31df249ece119b23cb885977fd6ba95d4c277151d968666a278df09db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:54:08 compute-0 podman[392152]: 2026-01-26 16:54:08.17514619 +0000 UTC m=+0.152609581 container attach 978340e31df249ece119b23cb885977fd6ba95d4c277151d968666a278df09db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_moore, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:54:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:08 compute-0 clever_moore[392169]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:54:08 compute-0 clever_moore[392169]: --> All data devices are unavailable
Jan 26 16:54:08 compute-0 systemd[1]: libpod-978340e31df249ece119b23cb885977fd6ba95d4c277151d968666a278df09db.scope: Deactivated successfully.
Jan 26 16:54:08 compute-0 podman[392152]: 2026-01-26 16:54:08.680444303 +0000 UTC m=+0.657907634 container died 978340e31df249ece119b23cb885977fd6ba95d4c277151d968666a278df09db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 16:54:08 compute-0 ceph-mon[75140]: pgmap v3131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e84f5f2641a3404630b39b7b0afec103e05c3c837217644c84fe013d9485312-merged.mount: Deactivated successfully.
Jan 26 16:54:08 compute-0 podman[392152]: 2026-01-26 16:54:08.729842363 +0000 UTC m=+0.707305684 container remove 978340e31df249ece119b23cb885977fd6ba95d4c277151d968666a278df09db (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:54:08 compute-0 systemd[1]: libpod-conmon-978340e31df249ece119b23cb885977fd6ba95d4c277151d968666a278df09db.scope: Deactivated successfully.
Jan 26 16:54:08 compute-0 sudo[392076]: pam_unix(sudo:session): session closed for user root
Jan 26 16:54:08 compute-0 sudo[392200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:54:08 compute-0 sudo[392200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:54:08 compute-0 sudo[392200]: pam_unix(sudo:session): session closed for user root
Jan 26 16:54:08 compute-0 sudo[392225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:54:08 compute-0 sudo[392225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:54:09 compute-0 podman[392261]: 2026-01-26 16:54:09.177820622 +0000 UTC m=+0.037117201 container create ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldstine, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:54:09 compute-0 systemd[1]: Started libpod-conmon-ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947.scope.
Jan 26 16:54:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:54:09 compute-0 podman[392261]: 2026-01-26 16:54:09.244232439 +0000 UTC m=+0.103529018 container init ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:54:09 compute-0 podman[392261]: 2026-01-26 16:54:09.251045666 +0000 UTC m=+0.110342245 container start ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldstine, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:54:09 compute-0 podman[392261]: 2026-01-26 16:54:09.254354577 +0000 UTC m=+0.113651186 container attach ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:54:09 compute-0 elastic_goldstine[392277]: 167 167
Jan 26 16:54:09 compute-0 systemd[1]: libpod-ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947.scope: Deactivated successfully.
Jan 26 16:54:09 compute-0 podman[392261]: 2026-01-26 16:54:09.161097422 +0000 UTC m=+0.020394021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:54:09 compute-0 conmon[392277]: conmon ee6aeb378257fdf9e5e7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947.scope/container/memory.events
Jan 26 16:54:09 compute-0 podman[392261]: 2026-01-26 16:54:09.257517825 +0000 UTC m=+0.116814404 container died ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldstine, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f299781a1c01acc8a72ec1d6b6acaea8abaafd1861697ca42dd925641a261a3-merged.mount: Deactivated successfully.
Jan 26 16:54:09 compute-0 podman[392261]: 2026-01-26 16:54:09.302145899 +0000 UTC m=+0.161442478 container remove ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:54:09 compute-0 systemd[1]: libpod-conmon-ee6aeb378257fdf9e5e76e3a71ff6996c575662b6ccafcdbcc0d42a1d6d73947.scope: Deactivated successfully.
Jan 26 16:54:09 compute-0 podman[392301]: 2026-01-26 16:54:09.483497773 +0000 UTC m=+0.048220163 container create 1708178e734c8278556c2b0943bd6808f200d080add50490d3af48b40f45ddcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:54:09 compute-0 nova_compute[239965]: 2026-01-26 16:54:09.514 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:09 compute-0 systemd[1]: Started libpod-conmon-1708178e734c8278556c2b0943bd6808f200d080add50490d3af48b40f45ddcd.scope.
Jan 26 16:54:09 compute-0 podman[392301]: 2026-01-26 16:54:09.459814792 +0000 UTC m=+0.024537192 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:54:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6560b6843160c8dd555a35b972918cfe603c98966facc9b933f9feacb97b32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6560b6843160c8dd555a35b972918cfe603c98966facc9b933f9feacb97b32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6560b6843160c8dd555a35b972918cfe603c98966facc9b933f9feacb97b32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc6560b6843160c8dd555a35b972918cfe603c98966facc9b933f9feacb97b32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:09 compute-0 podman[392301]: 2026-01-26 16:54:09.582908009 +0000 UTC m=+0.147630439 container init 1708178e734c8278556c2b0943bd6808f200d080add50490d3af48b40f45ddcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 16:54:09 compute-0 podman[392301]: 2026-01-26 16:54:09.591286004 +0000 UTC m=+0.156008384 container start 1708178e734c8278556c2b0943bd6808f200d080add50490d3af48b40f45ddcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 16:54:09 compute-0 podman[392301]: 2026-01-26 16:54:09.595235321 +0000 UTC m=+0.159957711 container attach 1708178e734c8278556c2b0943bd6808f200d080add50490d3af48b40f45ddcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:54:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]: {
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:     "0": [
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:         {
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "devices": [
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "/dev/loop3"
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             ],
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_name": "ceph_lv0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_size": "21470642176",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "name": "ceph_lv0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "tags": {
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.cluster_name": "ceph",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.crush_device_class": "",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.encrypted": "0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.objectstore": "bluestore",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.osd_id": "0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.type": "block",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.vdo": "0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.with_tpm": "0"
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             },
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "type": "block",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "vg_name": "ceph_vg0"
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:         }
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:     ],
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:     "1": [
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:         {
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "devices": [
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "/dev/loop4"
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             ],
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_name": "ceph_lv1",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_size": "21470642176",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "name": "ceph_lv1",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "tags": {
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.cluster_name": "ceph",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.crush_device_class": "",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.encrypted": "0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.objectstore": "bluestore",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.osd_id": "1",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.type": "block",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.vdo": "0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.with_tpm": "0"
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             },
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "type": "block",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "vg_name": "ceph_vg1"
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:         }
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:     ],
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:     "2": [
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:         {
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "devices": [
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "/dev/loop5"
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             ],
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_name": "ceph_lv2",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_size": "21470642176",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "name": "ceph_lv2",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "tags": {
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.cluster_name": "ceph",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.crush_device_class": "",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.encrypted": "0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.objectstore": "bluestore",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.osd_id": "2",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.type": "block",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.vdo": "0",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:                 "ceph.with_tpm": "0"
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             },
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "type": "block",
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:             "vg_name": "ceph_vg2"
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:         }
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]:     ]
Jan 26 16:54:09 compute-0 hungry_sinoussi[392317]: }
Jan 26 16:54:09 compute-0 systemd[1]: libpod-1708178e734c8278556c2b0943bd6808f200d080add50490d3af48b40f45ddcd.scope: Deactivated successfully.
Jan 26 16:54:09 compute-0 podman[392301]: 2026-01-26 16:54:09.911247136 +0000 UTC m=+0.475969556 container died 1708178e734c8278556c2b0943bd6808f200d080add50490d3af48b40f45ddcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 16:54:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc6560b6843160c8dd555a35b972918cfe603c98966facc9b933f9feacb97b32-merged.mount: Deactivated successfully.
Jan 26 16:54:09 compute-0 podman[392301]: 2026-01-26 16:54:09.956624407 +0000 UTC m=+0.521346787 container remove 1708178e734c8278556c2b0943bd6808f200d080add50490d3af48b40f45ddcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:54:09 compute-0 systemd[1]: libpod-conmon-1708178e734c8278556c2b0943bd6808f200d080add50490d3af48b40f45ddcd.scope: Deactivated successfully.
Jan 26 16:54:10 compute-0 sudo[392225]: pam_unix(sudo:session): session closed for user root
Jan 26 16:54:10 compute-0 sudo[392338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:54:10 compute-0 sudo[392338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:54:10 compute-0 sudo[392338]: pam_unix(sudo:session): session closed for user root
Jan 26 16:54:10 compute-0 sudo[392363]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:54:10 compute-0 sudo[392363]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:54:10 compute-0 podman[392399]: 2026-01-26 16:54:10.405676592 +0000 UTC m=+0.037822788 container create 58e25a0be1bebf3135b3081b1b1016a76cf1dc65fb3c3f9672339002a421e396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:54:10 compute-0 systemd[1]: Started libpod-conmon-58e25a0be1bebf3135b3081b1b1016a76cf1dc65fb3c3f9672339002a421e396.scope.
Jan 26 16:54:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:54:10 compute-0 podman[392399]: 2026-01-26 16:54:10.389241929 +0000 UTC m=+0.021388155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:54:10 compute-0 podman[392399]: 2026-01-26 16:54:10.495817741 +0000 UTC m=+0.127963957 container init 58e25a0be1bebf3135b3081b1b1016a76cf1dc65fb3c3f9672339002a421e396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:54:10 compute-0 podman[392399]: 2026-01-26 16:54:10.50271073 +0000 UTC m=+0.134856926 container start 58e25a0be1bebf3135b3081b1b1016a76cf1dc65fb3c3f9672339002a421e396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_kowalevski, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:54:10 compute-0 podman[392399]: 2026-01-26 16:54:10.506065041 +0000 UTC m=+0.138211257 container attach 58e25a0be1bebf3135b3081b1b1016a76cf1dc65fb3c3f9672339002a421e396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:54:10 compute-0 zealous_kowalevski[392415]: 167 167
Jan 26 16:54:10 compute-0 systemd[1]: libpod-58e25a0be1bebf3135b3081b1b1016a76cf1dc65fb3c3f9672339002a421e396.scope: Deactivated successfully.
Jan 26 16:54:10 compute-0 podman[392399]: 2026-01-26 16:54:10.509742132 +0000 UTC m=+0.141888348 container died 58e25a0be1bebf3135b3081b1b1016a76cf1dc65fb3c3f9672339002a421e396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_kowalevski, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:54:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7858f6297d5a56f68704e5b3e64620b6e908aaa8d8d63c2ff674742d17b920f7-merged.mount: Deactivated successfully.
Jan 26 16:54:10 compute-0 podman[392399]: 2026-01-26 16:54:10.565374206 +0000 UTC m=+0.197520402 container remove 58e25a0be1bebf3135b3081b1b1016a76cf1dc65fb3c3f9672339002a421e396 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_kowalevski, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:54:10 compute-0 systemd[1]: libpod-conmon-58e25a0be1bebf3135b3081b1b1016a76cf1dc65fb3c3f9672339002a421e396.scope: Deactivated successfully.
Jan 26 16:54:10 compute-0 nova_compute[239965]: 2026-01-26 16:54:10.681 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:10 compute-0 ceph-mon[75140]: pgmap v3132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:10 compute-0 podman[392441]: 2026-01-26 16:54:10.725842207 +0000 UTC m=+0.045556546 container create 64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:54:10 compute-0 systemd[1]: Started libpod-conmon-64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a.scope.
Jan 26 16:54:10 compute-0 podman[392441]: 2026-01-26 16:54:10.705297644 +0000 UTC m=+0.025012003 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:54:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bd7dc26817d91d4ac813407ec7c351f60c5f2f3593c6fda81670298417501c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bd7dc26817d91d4ac813407ec7c351f60c5f2f3593c6fda81670298417501c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bd7dc26817d91d4ac813407ec7c351f60c5f2f3593c6fda81670298417501c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3bd7dc26817d91d4ac813407ec7c351f60c5f2f3593c6fda81670298417501c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:54:10 compute-0 podman[392441]: 2026-01-26 16:54:10.828570926 +0000 UTC m=+0.148285295 container init 64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 16:54:10 compute-0 podman[392441]: 2026-01-26 16:54:10.837140246 +0000 UTC m=+0.156854595 container start 64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 16:54:10 compute-0 podman[392441]: 2026-01-26 16:54:10.841066972 +0000 UTC m=+0.160781331 container attach 64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:54:11 compute-0 lvm[392536]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:54:11 compute-0 lvm[392537]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:54:11 compute-0 lvm[392537]: VG ceph_vg1 finished
Jan 26 16:54:11 compute-0 lvm[392536]: VG ceph_vg0 finished
Jan 26 16:54:11 compute-0 lvm[392539]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:54:11 compute-0 lvm[392539]: VG ceph_vg2 finished
Jan 26 16:54:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:11 compute-0 agitated_khayyam[392458]: {}
Jan 26 16:54:11 compute-0 systemd[1]: libpod-64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a.scope: Deactivated successfully.
Jan 26 16:54:11 compute-0 systemd[1]: libpod-64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a.scope: Consumed 1.390s CPU time.
Jan 26 16:54:11 compute-0 podman[392441]: 2026-01-26 16:54:11.718088404 +0000 UTC m=+1.037802743 container died 64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:54:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-3bd7dc26817d91d4ac813407ec7c351f60c5f2f3593c6fda81670298417501c4-merged.mount: Deactivated successfully.
Jan 26 16:54:11 compute-0 podman[392441]: 2026-01-26 16:54:11.767867765 +0000 UTC m=+1.087582104 container remove 64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:54:11 compute-0 systemd[1]: libpod-conmon-64e1e320ba923487ff74ab1541e198f37a6d13d31a817e6fbd6200af0795314a.scope: Deactivated successfully.
Jan 26 16:54:11 compute-0 sudo[392363]: pam_unix(sudo:session): session closed for user root
Jan 26 16:54:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:54:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:54:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:54:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:54:11 compute-0 sudo[392552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:54:11 compute-0 sudo[392552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:54:11 compute-0 sudo[392552]: pam_unix(sudo:session): session closed for user root
Jan 26 16:54:12 compute-0 ceph-mon[75140]: pgmap v3133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:54:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:54:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:14 compute-0 nova_compute[239965]: 2026-01-26 16:54:14.519 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:14 compute-0 ceph-mon[75140]: pgmap v3134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:15 compute-0 nova_compute[239965]: 2026-01-26 16:54:15.684 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:16 compute-0 ceph-mon[75140]: pgmap v3135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:18 compute-0 ceph-mon[75140]: pgmap v3136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:19 compute-0 nova_compute[239965]: 2026-01-26 16:54:19.599 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:20 compute-0 nova_compute[239965]: 2026-01-26 16:54:20.686 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:21 compute-0 ceph-mon[75140]: pgmap v3137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:21 compute-0 nova_compute[239965]: 2026-01-26 16:54:21.273 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:21 compute-0 nova_compute[239965]: 2026-01-26 16:54:21.273 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:21 compute-0 nova_compute[239965]: 2026-01-26 16:54:21.273 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:23 compute-0 ceph-mon[75140]: pgmap v3138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:24 compute-0 nova_compute[239965]: 2026-01-26 16:54:24.602 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:25 compute-0 ceph-mon[75140]: pgmap v3139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:25 compute-0 nova_compute[239965]: 2026-01-26 16:54:25.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:25 compute-0 nova_compute[239965]: 2026-01-26 16:54:25.689 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:26 compute-0 ceph-mon[75140]: pgmap v3140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:27 compute-0 nova_compute[239965]: 2026-01-26 16:54:27.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:54:28
Jan 26 16:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'default.rgw.log', '.rgw.root', 'backups', 'vms', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control']
Jan 26 16:54:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:54:28 compute-0 ceph-mon[75140]: pgmap v3141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:29 compute-0 nova_compute[239965]: 2026-01-26 16:54:29.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:29 compute-0 nova_compute[239965]: 2026-01-26 16:54:29.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:29 compute-0 nova_compute[239965]: 2026-01-26 16:54:29.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:54:29 compute-0 nova_compute[239965]: 2026-01-26 16:54:29.606 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:54:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:54:30 compute-0 ceph-mon[75140]: pgmap v3142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:30 compute-0 nova_compute[239965]: 2026-01-26 16:54:30.689 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:54:31 compute-0 nova_compute[239965]: 2026-01-26 16:54:31.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:31 compute-0 nova_compute[239965]: 2026-01-26 16:54:31.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:54:31 compute-0 nova_compute[239965]: 2026-01-26 16:54:31.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:54:31 compute-0 nova_compute[239965]: 2026-01-26 16:54:31.530 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:54:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:32 compute-0 podman[392577]: 2026-01-26 16:54:32.374909767 +0000 UTC m=+0.058075843 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 26 16:54:32 compute-0 ceph-mon[75140]: pgmap v3143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:34 compute-0 nova_compute[239965]: 2026-01-26 16:54:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:34 compute-0 nova_compute[239965]: 2026-01-26 16:54:34.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:54:34 compute-0 nova_compute[239965]: 2026-01-26 16:54:34.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:54:34 compute-0 nova_compute[239965]: 2026-01-26 16:54:34.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:54:34 compute-0 nova_compute[239965]: 2026-01-26 16:54:34.541 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:54:34 compute-0 nova_compute[239965]: 2026-01-26 16:54:34.541 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:34 compute-0 nova_compute[239965]: 2026-01-26 16:54:34.664 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:34 compute-0 ceph-mon[75140]: pgmap v3144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:54:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/168629414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:54:35 compute-0 nova_compute[239965]: 2026-01-26 16:54:35.131 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:35 compute-0 nova_compute[239965]: 2026-01-26 16:54:35.348 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:54:35 compute-0 nova_compute[239965]: 2026-01-26 16:54:35.350 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3589MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:54:35 compute-0 nova_compute[239965]: 2026-01-26 16:54:35.350 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:54:35 compute-0 nova_compute[239965]: 2026-01-26 16:54:35.351 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:54:35 compute-0 podman[392619]: 2026-01-26 16:54:35.390186501 +0000 UTC m=+0.074930067 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 16:54:35 compute-0 nova_compute[239965]: 2026-01-26 16:54:35.426 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:54:35 compute-0 nova_compute[239965]: 2026-01-26 16:54:35.427 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:54:35 compute-0 nova_compute[239965]: 2026-01-26 16:54:35.444 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:54:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:35 compute-0 nova_compute[239965]: 2026-01-26 16:54:35.691 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:35 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/168629414' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:54:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:54:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4239664046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:54:36 compute-0 nova_compute[239965]: 2026-01-26 16:54:36.023 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:54:36 compute-0 nova_compute[239965]: 2026-01-26 16:54:36.027 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:54:36 compute-0 nova_compute[239965]: 2026-01-26 16:54:36.354 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:54:36 compute-0 nova_compute[239965]: 2026-01-26 16:54:36.356 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:54:36 compute-0 nova_compute[239965]: 2026-01-26 16:54:36.357 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:54:36 compute-0 ceph-mon[75140]: pgmap v3145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4239664046' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:54:37 compute-0 nova_compute[239965]: 2026-01-26 16:54:37.349 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:54:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.241858) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446478241879, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 898, "num_deletes": 250, "total_data_size": 1286977, "memory_usage": 1303688, "flush_reason": "Manual Compaction"}
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446478249306, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 782237, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63732, "largest_seqno": 64629, "table_properties": {"data_size": 778606, "index_size": 1346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 9576, "raw_average_key_size": 20, "raw_value_size": 770888, "raw_average_value_size": 1661, "num_data_blocks": 61, "num_entries": 464, "num_filter_entries": 464, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769446394, "oldest_key_time": 1769446394, "file_creation_time": 1769446478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 7493 microseconds, and 2900 cpu microseconds.
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.249348) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 782237 bytes OK
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.249366) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.251186) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.251201) EVENT_LOG_v1 {"time_micros": 1769446478251196, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.251219) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 1282625, prev total WAL file size 1282625, number of live WAL files 2.
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.251723) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353037' seq:72057594037927935, type:22 .. '6D6772737461740032373538' seq:0, type:0; will stop at (end)
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(763KB)], [149(11MB)]
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446478251770, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 12573187, "oldest_snapshot_seqno": -1}
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8443 keys, 9675032 bytes, temperature: kUnknown
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446478324308, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 9675032, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9622589, "index_size": 30250, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 220697, "raw_average_key_size": 26, "raw_value_size": 9475601, "raw_average_value_size": 1122, "num_data_blocks": 1168, "num_entries": 8443, "num_filter_entries": 8443, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769446478, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.324572) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 9675032 bytes
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.326282) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.1 rd, 133.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 11.2 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(28.4) write-amplify(12.4) OK, records in: 8919, records dropped: 476 output_compression: NoCompression
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.326297) EVENT_LOG_v1 {"time_micros": 1769446478326290, "job": 92, "event": "compaction_finished", "compaction_time_micros": 72639, "compaction_time_cpu_micros": 25712, "output_level": 6, "num_output_files": 1, "total_output_size": 9675032, "num_input_records": 8919, "num_output_records": 8443, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446478327018, "job": 92, "event": "table_file_deletion", "file_number": 151}
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446478329292, "job": 92, "event": "table_file_deletion", "file_number": 149}
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.251646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.329335) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.329341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.329343) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.329346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:54:38 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:54:38.329349) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:54:39 compute-0 ceph-mon[75140]: pgmap v3146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:39 compute-0 nova_compute[239965]: 2026-01-26 16:54:39.709 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:40 compute-0 ceph-mon[75140]: pgmap v3147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:40 compute-0 nova_compute[239965]: 2026-01-26 16:54:40.694 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:42 compute-0 ceph-mon[75140]: pgmap v3148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:44 compute-0 nova_compute[239965]: 2026-01-26 16:54:44.713 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:44 compute-0 ceph-mon[75140]: pgmap v3149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:45 compute-0 nova_compute[239965]: 2026-01-26 16:54:45.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:46 compute-0 ceph-mon[75140]: pgmap v3150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:54:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/370454345' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:54:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:54:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/370454345' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:54:49 compute-0 ceph-mon[75140]: pgmap v3151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/370454345' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:54:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/370454345' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:54:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:49 compute-0 nova_compute[239965]: 2026-01-26 16:54:49.717 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:50 compute-0 nova_compute[239965]: 2026-01-26 16:54:50.767 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:51 compute-0 ceph-mon[75140]: pgmap v3152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:53 compute-0 ceph-mon[75140]: pgmap v3153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:54 compute-0 nova_compute[239965]: 2026-01-26 16:54:54.721 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:54 compute-0 ceph-mon[75140]: pgmap v3154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:55 compute-0 nova_compute[239965]: 2026-01-26 16:54:55.769 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:54:57 compute-0 ceph-mon[75140]: pgmap v3155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:54:59 compute-0 ceph-mon[75140]: pgmap v3156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:54:59.276 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:54:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:54:59.277 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:54:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:54:59.277 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:54:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:54:59 compute-0 nova_compute[239965]: 2026-01-26 16:54:59.726 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:55:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:55:00 compute-0 nova_compute[239965]: 2026-01-26 16:55:00.770 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:01 compute-0 ceph-mon[75140]: pgmap v3157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:03 compute-0 ceph-mon[75140]: pgmap v3158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:03 compute-0 podman[392669]: 2026-01-26 16:55:03.363968726 +0000 UTC m=+0.052421445 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:55:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:04 compute-0 nova_compute[239965]: 2026-01-26 16:55:04.729 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:05 compute-0 ceph-mon[75140]: pgmap v3159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:05 compute-0 nova_compute[239965]: 2026-01-26 16:55:05.773 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:06 compute-0 podman[392688]: 2026-01-26 16:55:06.402488749 +0000 UTC m=+0.086460689 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_controller)
Jan 26 16:55:07 compute-0 ceph-mon[75140]: pgmap v3160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:09 compute-0 ceph-mon[75140]: pgmap v3161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:09 compute-0 nova_compute[239965]: 2026-01-26 16:55:09.733 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:10 compute-0 nova_compute[239965]: 2026-01-26 16:55:10.776 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:11 compute-0 ceph-mon[75140]: pgmap v3162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:11 compute-0 sudo[392715]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:55:11 compute-0 sudo[392715]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:11 compute-0 sudo[392715]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:12 compute-0 sudo[392740]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 16:55:12 compute-0 sudo[392740]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:12 compute-0 ceph-mon[75140]: pgmap v3163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:12 compute-0 sudo[392740]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:55:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:55:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:12 compute-0 sudo[392785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:55:12 compute-0 sudo[392785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:12 compute-0 sudo[392785]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:12 compute-0 sudo[392810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:55:12 compute-0 sudo[392810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:12 compute-0 sudo[392810]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 16:55:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:55:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:55:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:55:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:55:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:55:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:55:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:55:13 compute-0 sudo[392867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:55:13 compute-0 sudo[392867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:13 compute-0 sudo[392867]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:13 compute-0 sudo[392892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:55:13 compute-0 sudo[392892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:55:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:55:13 compute-0 podman[392929]: 2026-01-26 16:55:13.457081781 +0000 UTC m=+0.028739676 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:55:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:13 compute-0 podman[392929]: 2026-01-26 16:55:13.759185934 +0000 UTC m=+0.330843729 container create 14f30b0d46f3dd69d2aa579b1619134b8c5c244aea9086f2a927cca2c16e68ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:55:13 compute-0 systemd[1]: Started libpod-conmon-14f30b0d46f3dd69d2aa579b1619134b8c5c244aea9086f2a927cca2c16e68ae.scope.
Jan 26 16:55:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:55:14 compute-0 podman[392929]: 2026-01-26 16:55:14.024807593 +0000 UTC m=+0.596465418 container init 14f30b0d46f3dd69d2aa579b1619134b8c5c244aea9086f2a927cca2c16e68ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:55:14 compute-0 podman[392929]: 2026-01-26 16:55:14.032527563 +0000 UTC m=+0.604185358 container start 14f30b0d46f3dd69d2aa579b1619134b8c5c244aea9086f2a927cca2c16e68ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:55:14 compute-0 podman[392929]: 2026-01-26 16:55:14.035774322 +0000 UTC m=+0.607432117 container attach 14f30b0d46f3dd69d2aa579b1619134b8c5c244aea9086f2a927cca2c16e68ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:55:14 compute-0 beautiful_kowalevski[392946]: 167 167
Jan 26 16:55:14 compute-0 podman[392929]: 2026-01-26 16:55:14.038391156 +0000 UTC m=+0.610048971 container died 14f30b0d46f3dd69d2aa579b1619134b8c5c244aea9086f2a927cca2c16e68ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:55:14 compute-0 systemd[1]: libpod-14f30b0d46f3dd69d2aa579b1619134b8c5c244aea9086f2a927cca2c16e68ae.scope: Deactivated successfully.
Jan 26 16:55:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-e19dacfb4465f173b8894d58b6fa5dc702d490f92932eceaa39f17ff7a16ecf7-merged.mount: Deactivated successfully.
Jan 26 16:55:14 compute-0 podman[392929]: 2026-01-26 16:55:14.097166127 +0000 UTC m=+0.668823922 container remove 14f30b0d46f3dd69d2aa579b1619134b8c5c244aea9086f2a927cca2c16e68ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_kowalevski, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:55:14 compute-0 systemd[1]: libpod-conmon-14f30b0d46f3dd69d2aa579b1619134b8c5c244aea9086f2a927cca2c16e68ae.scope: Deactivated successfully.
Jan 26 16:55:14 compute-0 podman[392970]: 2026-01-26 16:55:14.269887559 +0000 UTC m=+0.045726721 container create dce928ec398f87962eafb37ea7dd17135d6f621028f5b726862c56b79613f5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jemison, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:55:14 compute-0 systemd[1]: Started libpod-conmon-dce928ec398f87962eafb37ea7dd17135d6f621028f5b726862c56b79613f5c2.scope.
Jan 26 16:55:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:55:14 compute-0 podman[392970]: 2026-01-26 16:55:14.247297606 +0000 UTC m=+0.023136788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b36bc10a725465fa359c75972a59b541f8f9e6ec812f3cb7750cf1dfbd03bb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b36bc10a725465fa359c75972a59b541f8f9e6ec812f3cb7750cf1dfbd03bb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b36bc10a725465fa359c75972a59b541f8f9e6ec812f3cb7750cf1dfbd03bb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b36bc10a725465fa359c75972a59b541f8f9e6ec812f3cb7750cf1dfbd03bb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b36bc10a725465fa359c75972a59b541f8f9e6ec812f3cb7750cf1dfbd03bb2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:14 compute-0 podman[392970]: 2026-01-26 16:55:14.454942645 +0000 UTC m=+0.230781807 container init dce928ec398f87962eafb37ea7dd17135d6f621028f5b726862c56b79613f5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jemison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 16:55:14 compute-0 ceph-mon[75140]: pgmap v3164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:14 compute-0 podman[392970]: 2026-01-26 16:55:14.46290564 +0000 UTC m=+0.238744802 container start dce928ec398f87962eafb37ea7dd17135d6f621028f5b726862c56b79613f5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jemison, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:55:14 compute-0 podman[392970]: 2026-01-26 16:55:14.468390374 +0000 UTC m=+0.244229546 container attach dce928ec398f87962eafb37ea7dd17135d6f621028f5b726862c56b79613f5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True)
Jan 26 16:55:14 compute-0 nova_compute[239965]: 2026-01-26 16:55:14.736 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:14 compute-0 thirsty_jemison[392987]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:55:14 compute-0 thirsty_jemison[392987]: --> All data devices are unavailable
Jan 26 16:55:14 compute-0 systemd[1]: libpod-dce928ec398f87962eafb37ea7dd17135d6f621028f5b726862c56b79613f5c2.scope: Deactivated successfully.
Jan 26 16:55:14 compute-0 podman[393007]: 2026-01-26 16:55:14.97828685 +0000 UTC m=+0.025842104 container died dce928ec398f87962eafb37ea7dd17135d6f621028f5b726862c56b79613f5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b36bc10a725465fa359c75972a59b541f8f9e6ec812f3cb7750cf1dfbd03bb2-merged.mount: Deactivated successfully.
Jan 26 16:55:15 compute-0 podman[393007]: 2026-01-26 16:55:15.021685663 +0000 UTC m=+0.069240907 container remove dce928ec398f87962eafb37ea7dd17135d6f621028f5b726862c56b79613f5c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:55:15 compute-0 systemd[1]: libpod-conmon-dce928ec398f87962eafb37ea7dd17135d6f621028f5b726862c56b79613f5c2.scope: Deactivated successfully.
Jan 26 16:55:15 compute-0 sudo[392892]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:15 compute-0 sudo[393022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:55:15 compute-0 sudo[393022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:15 compute-0 sudo[393022]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:15 compute-0 sudo[393047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:55:15 compute-0 sudo[393047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:15 compute-0 podman[393083]: 2026-01-26 16:55:15.55367268 +0000 UTC m=+0.039981850 container create 696e01385ab93429336bed818ef2d14a9a1281627e476c3806c7df7650092d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 16:55:15 compute-0 systemd[1]: Started libpod-conmon-696e01385ab93429336bed818ef2d14a9a1281627e476c3806c7df7650092d63.scope.
Jan 26 16:55:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:55:15 compute-0 podman[393083]: 2026-01-26 16:55:15.630956895 +0000 UTC m=+0.117266065 container init 696e01385ab93429336bed818ef2d14a9a1281627e476c3806c7df7650092d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_jennings, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:55:15 compute-0 podman[393083]: 2026-01-26 16:55:15.535675039 +0000 UTC m=+0.021984229 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:55:15 compute-0 podman[393083]: 2026-01-26 16:55:15.638423618 +0000 UTC m=+0.124732788 container start 696e01385ab93429336bed818ef2d14a9a1281627e476c3806c7df7650092d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 16:55:15 compute-0 podman[393083]: 2026-01-26 16:55:15.641614326 +0000 UTC m=+0.127923496 container attach 696e01385ab93429336bed818ef2d14a9a1281627e476c3806c7df7650092d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_jennings, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:55:15 compute-0 strange_jennings[393100]: 167 167
Jan 26 16:55:15 compute-0 systemd[1]: libpod-696e01385ab93429336bed818ef2d14a9a1281627e476c3806c7df7650092d63.scope: Deactivated successfully.
Jan 26 16:55:15 compute-0 podman[393083]: 2026-01-26 16:55:15.644280501 +0000 UTC m=+0.130589671 container died 696e01385ab93429336bed818ef2d14a9a1281627e476c3806c7df7650092d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_jennings, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:55:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6d7716e0b775879702936eeb6457271eea9b7cecce6b5d7015a332be2d316e1-merged.mount: Deactivated successfully.
Jan 26 16:55:15 compute-0 podman[393083]: 2026-01-26 16:55:15.686942817 +0000 UTC m=+0.173252007 container remove 696e01385ab93429336bed818ef2d14a9a1281627e476c3806c7df7650092d63 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_jennings, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:55:15 compute-0 systemd[1]: libpod-conmon-696e01385ab93429336bed818ef2d14a9a1281627e476c3806c7df7650092d63.scope: Deactivated successfully.
Jan 26 16:55:15 compute-0 nova_compute[239965]: 2026-01-26 16:55:15.777 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:15 compute-0 podman[393125]: 2026-01-26 16:55:15.877890856 +0000 UTC m=+0.042280597 container create 1396a573cdd1d3b55e6d306878d07aaadcbff6beff67600499d8db1733c180b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:55:15 compute-0 systemd[1]: Started libpod-conmon-1396a573cdd1d3b55e6d306878d07aaadcbff6beff67600499d8db1733c180b7.scope.
Jan 26 16:55:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:55:15 compute-0 podman[393125]: 2026-01-26 16:55:15.859755882 +0000 UTC m=+0.024145633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5908844a11dd4bb8609cb8ac7188be03edb9025bf1939825b4a1fd7e1d3db82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5908844a11dd4bb8609cb8ac7188be03edb9025bf1939825b4a1fd7e1d3db82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5908844a11dd4bb8609cb8ac7188be03edb9025bf1939825b4a1fd7e1d3db82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5908844a11dd4bb8609cb8ac7188be03edb9025bf1939825b4a1fd7e1d3db82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:15 compute-0 podman[393125]: 2026-01-26 16:55:15.974590166 +0000 UTC m=+0.138979907 container init 1396a573cdd1d3b55e6d306878d07aaadcbff6beff67600499d8db1733c180b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 16:55:15 compute-0 podman[393125]: 2026-01-26 16:55:15.982405268 +0000 UTC m=+0.146795009 container start 1396a573cdd1d3b55e6d306878d07aaadcbff6beff67600499d8db1733c180b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:55:15 compute-0 podman[393125]: 2026-01-26 16:55:15.986093257 +0000 UTC m=+0.150482998 container attach 1396a573cdd1d3b55e6d306878d07aaadcbff6beff67600499d8db1733c180b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]: {
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:     "0": [
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:         {
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "devices": [
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "/dev/loop3"
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             ],
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_name": "ceph_lv0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_size": "21470642176",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "name": "ceph_lv0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "tags": {
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.cluster_name": "ceph",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.crush_device_class": "",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.encrypted": "0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.objectstore": "bluestore",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.osd_id": "0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.type": "block",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.vdo": "0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.with_tpm": "0"
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             },
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "type": "block",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "vg_name": "ceph_vg0"
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:         }
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:     ],
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:     "1": [
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:         {
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "devices": [
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "/dev/loop4"
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             ],
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_name": "ceph_lv1",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_size": "21470642176",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "name": "ceph_lv1",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "tags": {
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.cluster_name": "ceph",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.crush_device_class": "",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.encrypted": "0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.objectstore": "bluestore",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.osd_id": "1",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.type": "block",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.vdo": "0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.with_tpm": "0"
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             },
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "type": "block",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "vg_name": "ceph_vg1"
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:         }
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:     ],
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:     "2": [
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:         {
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "devices": [
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "/dev/loop5"
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             ],
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_name": "ceph_lv2",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_size": "21470642176",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "name": "ceph_lv2",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "tags": {
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.cluster_name": "ceph",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.crush_device_class": "",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.encrypted": "0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.objectstore": "bluestore",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.osd_id": "2",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.type": "block",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.vdo": "0",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:                 "ceph.with_tpm": "0"
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             },
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "type": "block",
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:             "vg_name": "ceph_vg2"
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:         }
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]:     ]
Jan 26 16:55:16 compute-0 sleepy_northcutt[393141]: }
Jan 26 16:55:16 compute-0 systemd[1]: libpod-1396a573cdd1d3b55e6d306878d07aaadcbff6beff67600499d8db1733c180b7.scope: Deactivated successfully.
Jan 26 16:55:16 compute-0 podman[393125]: 2026-01-26 16:55:16.314210659 +0000 UTC m=+0.478600410 container died 1396a573cdd1d3b55e6d306878d07aaadcbff6beff67600499d8db1733c180b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_northcutt, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:55:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5908844a11dd4bb8609cb8ac7188be03edb9025bf1939825b4a1fd7e1d3db82-merged.mount: Deactivated successfully.
Jan 26 16:55:16 compute-0 podman[393125]: 2026-01-26 16:55:16.363306972 +0000 UTC m=+0.527696713 container remove 1396a573cdd1d3b55e6d306878d07aaadcbff6beff67600499d8db1733c180b7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 16:55:16 compute-0 systemd[1]: libpod-conmon-1396a573cdd1d3b55e6d306878d07aaadcbff6beff67600499d8db1733c180b7.scope: Deactivated successfully.
Jan 26 16:55:16 compute-0 sudo[393047]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:16 compute-0 sudo[393162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:55:16 compute-0 sudo[393162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:16 compute-0 sudo[393162]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:16 compute-0 sudo[393187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:55:16 compute-0 sudo[393187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:16 compute-0 ceph-mon[75140]: pgmap v3165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:16 compute-0 podman[393224]: 2026-01-26 16:55:16.842814993 +0000 UTC m=+0.046982082 container create 62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:55:16 compute-0 systemd[1]: Started libpod-conmon-62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f.scope.
Jan 26 16:55:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:55:16 compute-0 podman[393224]: 2026-01-26 16:55:16.823139131 +0000 UTC m=+0.027306230 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:55:16 compute-0 podman[393224]: 2026-01-26 16:55:16.926911574 +0000 UTC m=+0.131078673 container init 62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:55:16 compute-0 podman[393224]: 2026-01-26 16:55:16.933002733 +0000 UTC m=+0.137169812 container start 62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:55:16 compute-0 jolly_shockley[393240]: 167 167
Jan 26 16:55:16 compute-0 podman[393224]: 2026-01-26 16:55:16.937192615 +0000 UTC m=+0.141359714 container attach 62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:55:16 compute-0 systemd[1]: libpod-62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f.scope: Deactivated successfully.
Jan 26 16:55:16 compute-0 conmon[393240]: conmon 62fb25b8bf5d47606f34 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f.scope/container/memory.events
Jan 26 16:55:16 compute-0 podman[393224]: 2026-01-26 16:55:16.939325228 +0000 UTC m=+0.143492337 container died 62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 16:55:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e0bd4ce3340818d539bae8dab399af490adf93c528d1f6b6c040d055aaed4dd-merged.mount: Deactivated successfully.
Jan 26 16:55:16 compute-0 podman[393224]: 2026-01-26 16:55:16.982341922 +0000 UTC m=+0.186509001 container remove 62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:55:16 compute-0 systemd[1]: libpod-conmon-62fb25b8bf5d47606f34b211c0d6d4fed90ae7b58d4aee43c8c7887e352ba06f.scope: Deactivated successfully.
Jan 26 16:55:17 compute-0 podman[393264]: 2026-01-26 16:55:17.151371944 +0000 UTC m=+0.048295834 container create 436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 16:55:17 compute-0 systemd[1]: Started libpod-conmon-436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd.scope.
Jan 26 16:55:17 compute-0 podman[393264]: 2026-01-26 16:55:17.130858792 +0000 UTC m=+0.027782672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:55:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb015cfb1996b8f09320617e55e49582d9f5f1da015327c423a6f11debaab02/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb015cfb1996b8f09320617e55e49582d9f5f1da015327c423a6f11debaab02/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb015cfb1996b8f09320617e55e49582d9f5f1da015327c423a6f11debaab02/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdb015cfb1996b8f09320617e55e49582d9f5f1da015327c423a6f11debaab02/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:55:17 compute-0 podman[393264]: 2026-01-26 16:55:17.251187221 +0000 UTC m=+0.148111101 container init 436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 26 16:55:17 compute-0 podman[393264]: 2026-01-26 16:55:17.258729295 +0000 UTC m=+0.155653155 container start 436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:55:17 compute-0 podman[393264]: 2026-01-26 16:55:17.262298573 +0000 UTC m=+0.159222453 container attach 436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 16:55:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:18 compute-0 lvm[393359]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:55:18 compute-0 lvm[393359]: VG ceph_vg1 finished
Jan 26 16:55:18 compute-0 lvm[393358]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:55:18 compute-0 lvm[393358]: VG ceph_vg0 finished
Jan 26 16:55:18 compute-0 lvm[393361]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:55:18 compute-0 lvm[393361]: VG ceph_vg2 finished
Jan 26 16:55:18 compute-0 infallible_mirzakhani[393280]: {}
Jan 26 16:55:18 compute-0 systemd[1]: libpod-436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd.scope: Deactivated successfully.
Jan 26 16:55:18 compute-0 systemd[1]: libpod-436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd.scope: Consumed 1.542s CPU time.
Jan 26 16:55:18 compute-0 podman[393264]: 2026-01-26 16:55:18.205855026 +0000 UTC m=+1.102778886 container died 436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 16:55:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-bdb015cfb1996b8f09320617e55e49582d9f5f1da015327c423a6f11debaab02-merged.mount: Deactivated successfully.
Jan 26 16:55:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:18 compute-0 podman[393264]: 2026-01-26 16:55:18.254763395 +0000 UTC m=+1.151687265 container remove 436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:55:18 compute-0 systemd[1]: libpod-conmon-436d00152b161576a8751a241b8874c47f52c9aa11d3038bc573ba103a6e00bd.scope: Deactivated successfully.
Jan 26 16:55:18 compute-0 sudo[393187]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:55:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:55:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:18 compute-0 sudo[393374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:55:18 compute-0 sudo[393374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:55:18 compute-0 sudo[393374]: pam_unix(sudo:session): session closed for user root
Jan 26 16:55:18 compute-0 ceph-mon[75140]: pgmap v3166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:55:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:19 compute-0 nova_compute[239965]: 2026-01-26 16:55:19.739 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:20 compute-0 nova_compute[239965]: 2026-01-26 16:55:20.780 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:20 compute-0 ceph-mon[75140]: pgmap v3167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:21 compute-0 nova_compute[239965]: 2026-01-26 16:55:21.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:21 compute-0 nova_compute[239965]: 2026-01-26 16:55:21.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:22 compute-0 nova_compute[239965]: 2026-01-26 16:55:22.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:23 compute-0 ceph-mon[75140]: pgmap v3168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:24 compute-0 ceph-mon[75140]: pgmap v3169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:24 compute-0 nova_compute[239965]: 2026-01-26 16:55:24.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:25 compute-0 nova_compute[239965]: 2026-01-26 16:55:25.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:25 compute-0 nova_compute[239965]: 2026-01-26 16:55:25.783 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:26 compute-0 ceph-mon[75140]: pgmap v3170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:55:28
Jan 26 16:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'images', '.rgw.root', 'volumes', 'vms', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 26 16:55:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:55:28 compute-0 ceph-mon[75140]: pgmap v3171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:29 compute-0 nova_compute[239965]: 2026-01-26 16:55:29.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:29 compute-0 nova_compute[239965]: 2026-01-26 16:55:29.801 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:55:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:55:30 compute-0 nova_compute[239965]: 2026-01-26 16:55:30.786 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:55:31 compute-0 ceph-mon[75140]: pgmap v3172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:31 compute-0 nova_compute[239965]: 2026-01-26 16:55:31.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:31 compute-0 nova_compute[239965]: 2026-01-26 16:55:31.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:31 compute-0 nova_compute[239965]: 2026-01-26 16:55:31.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:55:31 compute-0 nova_compute[239965]: 2026-01-26 16:55:31.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:32 compute-0 ceph-mon[75140]: pgmap v3173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:33 compute-0 nova_compute[239965]: 2026-01-26 16:55:33.526 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:33 compute-0 nova_compute[239965]: 2026-01-26 16:55:33.527 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:55:33 compute-0 nova_compute[239965]: 2026-01-26 16:55:33.527 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:55:33 compute-0 nova_compute[239965]: 2026-01-26 16:55:33.549 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:55:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:34 compute-0 podman[393399]: 2026-01-26 16:55:34.391536672 +0000 UTC m=+0.080062725 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 16:55:34 compute-0 ceph-mon[75140]: pgmap v3174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:34 compute-0 nova_compute[239965]: 2026-01-26 16:55:34.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:35 compute-0 nova_compute[239965]: 2026-01-26 16:55:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:35 compute-0 nova_compute[239965]: 2026-01-26 16:55:35.547 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:55:35 compute-0 nova_compute[239965]: 2026-01-26 16:55:35.547 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:55:35 compute-0 nova_compute[239965]: 2026-01-26 16:55:35.547 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:55:35 compute-0 nova_compute[239965]: 2026-01-26 16:55:35.548 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:55:35 compute-0 nova_compute[239965]: 2026-01-26 16:55:35.548 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:35 compute-0 nova_compute[239965]: 2026-01-26 16:55:35.787 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:55:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2947868828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.125 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.276 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.277 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3584MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.278 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.278 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.346 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.347 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.372 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:55:36 compute-0 ceph-mon[75140]: pgmap v3175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:36 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2947868828' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:55:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:55:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2092199714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.935 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.941 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.960 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.962 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:55:36 compute-0 nova_compute[239965]: 2026-01-26 16:55:36.962 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:55:37 compute-0 podman[393462]: 2026-01-26 16:55:37.417656299 +0000 UTC m=+0.096992928 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:55:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2092199714' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:55:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:38 compute-0 ceph-mon[75140]: pgmap v3176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:39 compute-0 nova_compute[239965]: 2026-01-26 16:55:39.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:40 compute-0 nova_compute[239965]: 2026-01-26 16:55:40.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:40 compute-0 ceph-mon[75140]: pgmap v3177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:42 compute-0 ceph-mon[75140]: pgmap v3178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:44 compute-0 nova_compute[239965]: 2026-01-26 16:55:44.810 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:44 compute-0 ceph-mon[75140]: pgmap v3179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:45 compute-0 nova_compute[239965]: 2026-01-26 16:55:45.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:46 compute-0 ceph-mon[75140]: pgmap v3180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:47 compute-0 sshd-session[393488]: Received disconnect from 45.148.10.147 port 38874:11:  [preauth]
Jan 26 16:55:47 compute-0 sshd-session[393488]: Disconnected from authenticating user root 45.148.10.147 port 38874 [preauth]
Jan 26 16:55:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:55:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3763568846' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:55:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:55:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3763568846' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:55:48 compute-0 ceph-mon[75140]: pgmap v3181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3763568846' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:55:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3763568846' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:55:49 compute-0 nova_compute[239965]: 2026-01-26 16:55:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:55:49 compute-0 nova_compute[239965]: 2026-01-26 16:55:49.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:55:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:49 compute-0 nova_compute[239965]: 2026-01-26 16:55:49.814 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:50 compute-0 nova_compute[239965]: 2026-01-26 16:55:50.794 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:51 compute-0 ceph-mon[75140]: pgmap v3182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:53 compute-0 ceph-mon[75140]: pgmap v3183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:54 compute-0 nova_compute[239965]: 2026-01-26 16:55:54.818 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:55 compute-0 ceph-mon[75140]: pgmap v3184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:55 compute-0 nova_compute[239965]: 2026-01-26 16:55:55.795 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:55:57 compute-0 ceph-mon[75140]: pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:55:59 compute-0 ceph-mon[75140]: pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:55:59.278 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:55:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:55:59.278 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:55:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:55:59.278 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:55:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:55:59 compute-0 nova_compute[239965]: 2026-01-26 16:55:59.821 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:00 compute-0 ceph-mon[75140]: pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:56:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:56:00 compute-0 nova_compute[239965]: 2026-01-26 16:56:00.796 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:03 compute-0 ceph-mon[75140]: pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:04 compute-0 nova_compute[239965]: 2026-01-26 16:56:04.824 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:05 compute-0 podman[393490]: 2026-01-26 16:56:05.362114658 +0000 UTC m=+0.047663299 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:56:05 compute-0 ceph-mon[75140]: pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:05 compute-0 nova_compute[239965]: 2026-01-26 16:56:05.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:06 compute-0 ceph-mon[75140]: pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:08 compute-0 podman[393509]: 2026-01-26 16:56:08.395426634 +0000 UTC m=+0.083769584 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:56:08 compute-0 ceph-mon[75140]: pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:09 compute-0 nova_compute[239965]: 2026-01-26 16:56:09.826 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:10 compute-0 nova_compute[239965]: 2026-01-26 16:56:10.841 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:11 compute-0 ceph-mon[75140]: pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:12 compute-0 nova_compute[239965]: 2026-01-26 16:56:12.532 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:12 compute-0 nova_compute[239965]: 2026-01-26 16:56:12.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 16:56:12 compute-0 nova_compute[239965]: 2026-01-26 16:56:12.587 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 16:56:13 compute-0 ceph-mon[75140]: pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:14 compute-0 nova_compute[239965]: 2026-01-26 16:56:14.831 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:15 compute-0 ceph-mon[75140]: pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:15 compute-0 nova_compute[239965]: 2026-01-26 16:56:15.842 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:17 compute-0 ceph-mon[75140]: pgmap v3195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:18 compute-0 sudo[393535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:56:18 compute-0 sudo[393535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:56:18 compute-0 sudo[393535]: pam_unix(sudo:session): session closed for user root
Jan 26 16:56:18 compute-0 sudo[393560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:56:18 compute-0 sudo[393560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:56:19 compute-0 sudo[393560]: pam_unix(sudo:session): session closed for user root
Jan 26 16:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:56:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:56:19 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:56:19 compute-0 sudo[393616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:56:19 compute-0 sudo[393616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:56:19 compute-0 sudo[393616]: pam_unix(sudo:session): session closed for user root
Jan 26 16:56:19 compute-0 ceph-mon[75140]: pgmap v3196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:56:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:56:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:56:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:56:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:56:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:56:19 compute-0 sudo[393641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:56:19 compute-0 sudo[393641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:56:19 compute-0 podman[393677]: 2026-01-26 16:56:19.478909519 +0000 UTC m=+0.047432204 container create 406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_grothendieck, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:56:19 compute-0 systemd[1]: Started libpod-conmon-406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473.scope.
Jan 26 16:56:19 compute-0 podman[393677]: 2026-01-26 16:56:19.46018872 +0000 UTC m=+0.028711435 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:56:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:56:19 compute-0 podman[393677]: 2026-01-26 16:56:19.581318709 +0000 UTC m=+0.149841424 container init 406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_grothendieck, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:56:19 compute-0 podman[393677]: 2026-01-26 16:56:19.591660472 +0000 UTC m=+0.160183157 container start 406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_grothendieck, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:56:19 compute-0 podman[393677]: 2026-01-26 16:56:19.595789583 +0000 UTC m=+0.164312268 container attach 406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_grothendieck, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 16:56:19 compute-0 vibrant_grothendieck[393693]: 167 167
Jan 26 16:56:19 compute-0 systemd[1]: libpod-406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473.scope: Deactivated successfully.
Jan 26 16:56:19 compute-0 conmon[393693]: conmon 406f106578e417529ef1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473.scope/container/memory.events
Jan 26 16:56:19 compute-0 podman[393677]: 2026-01-26 16:56:19.600890628 +0000 UTC m=+0.169413313 container died 406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:56:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-adc491a2ff1b807d1fd4d71a1bacf714e2df7909f09d6797d3c18776ee464ab4-merged.mount: Deactivated successfully.
Jan 26 16:56:19 compute-0 podman[393677]: 2026-01-26 16:56:19.686151928 +0000 UTC m=+0.254674613 container remove 406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:56:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:19 compute-0 systemd[1]: libpod-conmon-406f106578e417529ef14ec1849d26e6855f03ba5a3199584d7abd192904b473.scope: Deactivated successfully.
Jan 26 16:56:19 compute-0 nova_compute[239965]: 2026-01-26 16:56:19.834 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:19 compute-0 podman[393715]: 2026-01-26 16:56:19.88661097 +0000 UTC m=+0.056912956 container create 092c491c9ab0df21937b8d14785edffa9ea8a00434b8e20301fa38c3ccf9d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_liskov, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:56:19 compute-0 systemd[1]: Started libpod-conmon-092c491c9ab0df21937b8d14785edffa9ea8a00434b8e20301fa38c3ccf9d69c.scope.
Jan 26 16:56:19 compute-0 podman[393715]: 2026-01-26 16:56:19.858302357 +0000 UTC m=+0.028604363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:56:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6849f8013698ddd145e25cc87c88a0eadd65b9393523b498f2e627f7fc35c9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6849f8013698ddd145e25cc87c88a0eadd65b9393523b498f2e627f7fc35c9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6849f8013698ddd145e25cc87c88a0eadd65b9393523b498f2e627f7fc35c9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6849f8013698ddd145e25cc87c88a0eadd65b9393523b498f2e627f7fc35c9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6849f8013698ddd145e25cc87c88a0eadd65b9393523b498f2e627f7fc35c9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:19 compute-0 podman[393715]: 2026-01-26 16:56:19.984015397 +0000 UTC m=+0.154317413 container init 092c491c9ab0df21937b8d14785edffa9ea8a00434b8e20301fa38c3ccf9d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_liskov, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 16:56:19 compute-0 podman[393715]: 2026-01-26 16:56:19.995779016 +0000 UTC m=+0.166081002 container start 092c491c9ab0df21937b8d14785edffa9ea8a00434b8e20301fa38c3ccf9d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_liskov, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:56:20 compute-0 podman[393715]: 2026-01-26 16:56:20.000562662 +0000 UTC m=+0.170864668 container attach 092c491c9ab0df21937b8d14785edffa9ea8a00434b8e20301fa38c3ccf9d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_liskov, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:56:20 compute-0 goofy_liskov[393733]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:56:20 compute-0 goofy_liskov[393733]: --> All data devices are unavailable
Jan 26 16:56:20 compute-0 systemd[1]: libpod-092c491c9ab0df21937b8d14785edffa9ea8a00434b8e20301fa38c3ccf9d69c.scope: Deactivated successfully.
Jan 26 16:56:20 compute-0 podman[393715]: 2026-01-26 16:56:20.489952486 +0000 UTC m=+0.660254472 container died 092c491c9ab0df21937b8d14785edffa9ea8a00434b8e20301fa38c3ccf9d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_liskov, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:56:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6849f8013698ddd145e25cc87c88a0eadd65b9393523b498f2e627f7fc35c9f-merged.mount: Deactivated successfully.
Jan 26 16:56:20 compute-0 podman[393715]: 2026-01-26 16:56:20.53908915 +0000 UTC m=+0.709391166 container remove 092c491c9ab0df21937b8d14785edffa9ea8a00434b8e20301fa38c3ccf9d69c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:56:20 compute-0 systemd[1]: libpod-conmon-092c491c9ab0df21937b8d14785edffa9ea8a00434b8e20301fa38c3ccf9d69c.scope: Deactivated successfully.
Jan 26 16:56:20 compute-0 sudo[393641]: pam_unix(sudo:session): session closed for user root
Jan 26 16:56:20 compute-0 sudo[393764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:56:20 compute-0 sudo[393764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:56:20 compute-0 sudo[393764]: pam_unix(sudo:session): session closed for user root
Jan 26 16:56:20 compute-0 sudo[393789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:56:20 compute-0 sudo[393789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:56:20 compute-0 nova_compute[239965]: 2026-01-26 16:56:20.846 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:21 compute-0 podman[393826]: 2026-01-26 16:56:21.036674333 +0000 UTC m=+0.046136531 container create f413ad16208bfb8e6d2bf582de0c7dbc0cb46a801c624fe706a411e0f0747b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_taussig, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 16:56:21 compute-0 systemd[1]: Started libpod-conmon-f413ad16208bfb8e6d2bf582de0c7dbc0cb46a801c624fe706a411e0f0747b56.scope.
Jan 26 16:56:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:56:21 compute-0 podman[393826]: 2026-01-26 16:56:21.017130864 +0000 UTC m=+0.026593092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:56:21 compute-0 podman[393826]: 2026-01-26 16:56:21.124488126 +0000 UTC m=+0.133950354 container init f413ad16208bfb8e6d2bf582de0c7dbc0cb46a801c624fe706a411e0f0747b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_taussig, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 16:56:21 compute-0 podman[393826]: 2026-01-26 16:56:21.135007653 +0000 UTC m=+0.144469851 container start f413ad16208bfb8e6d2bf582de0c7dbc0cb46a801c624fe706a411e0f0747b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_taussig, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:56:21 compute-0 podman[393826]: 2026-01-26 16:56:21.139287919 +0000 UTC m=+0.148750117 container attach f413ad16208bfb8e6d2bf582de0c7dbc0cb46a801c624fe706a411e0f0747b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_taussig, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 16:56:21 compute-0 stupefied_taussig[393843]: 167 167
Jan 26 16:56:21 compute-0 systemd[1]: libpod-f413ad16208bfb8e6d2bf582de0c7dbc0cb46a801c624fe706a411e0f0747b56.scope: Deactivated successfully.
Jan 26 16:56:21 compute-0 podman[393826]: 2026-01-26 16:56:21.143135742 +0000 UTC m=+0.152597960 container died f413ad16208bfb8e6d2bf582de0c7dbc0cb46a801c624fe706a411e0f0747b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:56:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd6a06f0f6ed459883f2648735d614e15864482b554936a010c95c30ab6a0325-merged.mount: Deactivated successfully.
Jan 26 16:56:21 compute-0 ceph-mon[75140]: pgmap v3197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:21 compute-0 podman[393826]: 2026-01-26 16:56:21.195344002 +0000 UTC m=+0.204806190 container remove f413ad16208bfb8e6d2bf582de0c7dbc0cb46a801c624fe706a411e0f0747b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_taussig, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:56:21 compute-0 systemd[1]: libpod-conmon-f413ad16208bfb8e6d2bf582de0c7dbc0cb46a801c624fe706a411e0f0747b56.scope: Deactivated successfully.
Jan 26 16:56:21 compute-0 podman[393866]: 2026-01-26 16:56:21.376641235 +0000 UTC m=+0.045804694 container create ea4e9e9e584caab10b0df12f4b71240b091c5ab63178474702ef6ecee16a7349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:56:21 compute-0 systemd[1]: Started libpod-conmon-ea4e9e9e584caab10b0df12f4b71240b091c5ab63178474702ef6ecee16a7349.scope.
Jan 26 16:56:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc6d8782ef687b66c286e5989f4d184780c3e350d1a9a50dc0fdb7ad526927d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc6d8782ef687b66c286e5989f4d184780c3e350d1a9a50dc0fdb7ad526927d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc6d8782ef687b66c286e5989f4d184780c3e350d1a9a50dc0fdb7ad526927d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdc6d8782ef687b66c286e5989f4d184780c3e350d1a9a50dc0fdb7ad526927d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:21 compute-0 podman[393866]: 2026-01-26 16:56:21.357360832 +0000 UTC m=+0.026524311 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:56:21 compute-0 podman[393866]: 2026-01-26 16:56:21.454608576 +0000 UTC m=+0.123772055 container init ea4e9e9e584caab10b0df12f4b71240b091c5ab63178474702ef6ecee16a7349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:56:21 compute-0 podman[393866]: 2026-01-26 16:56:21.462412686 +0000 UTC m=+0.131576145 container start ea4e9e9e584caab10b0df12f4b71240b091c5ab63178474702ef6ecee16a7349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:56:21 compute-0 podman[393866]: 2026-01-26 16:56:21.466364863 +0000 UTC m=+0.135528322 container attach ea4e9e9e584caab10b0df12f4b71240b091c5ab63178474702ef6ecee16a7349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:56:21 compute-0 nova_compute[239965]: 2026-01-26 16:56:21.566 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:21 compute-0 inspiring_carver[393882]: {
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:     "0": [
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:         {
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "devices": [
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "/dev/loop3"
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             ],
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_name": "ceph_lv0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_size": "21470642176",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "name": "ceph_lv0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "tags": {
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.cluster_name": "ceph",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.crush_device_class": "",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.encrypted": "0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.objectstore": "bluestore",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.osd_id": "0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.type": "block",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.vdo": "0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.with_tpm": "0"
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             },
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "type": "block",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "vg_name": "ceph_vg0"
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:         }
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:     ],
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:     "1": [
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:         {
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "devices": [
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "/dev/loop4"
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             ],
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_name": "ceph_lv1",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_size": "21470642176",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "name": "ceph_lv1",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "tags": {
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.cluster_name": "ceph",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.crush_device_class": "",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.encrypted": "0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.objectstore": "bluestore",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.osd_id": "1",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.type": "block",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.vdo": "0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.with_tpm": "0"
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             },
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "type": "block",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "vg_name": "ceph_vg1"
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:         }
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:     ],
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:     "2": [
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:         {
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "devices": [
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "/dev/loop5"
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             ],
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_name": "ceph_lv2",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_size": "21470642176",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "name": "ceph_lv2",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "tags": {
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.cluster_name": "ceph",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.crush_device_class": "",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.encrypted": "0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.objectstore": "bluestore",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.osd_id": "2",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.type": "block",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.vdo": "0",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:                 "ceph.with_tpm": "0"
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             },
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "type": "block",
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:             "vg_name": "ceph_vg2"
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:         }
Jan 26 16:56:21 compute-0 inspiring_carver[393882]:     ]
Jan 26 16:56:21 compute-0 inspiring_carver[393882]: }
Jan 26 16:56:21 compute-0 systemd[1]: libpod-ea4e9e9e584caab10b0df12f4b71240b091c5ab63178474702ef6ecee16a7349.scope: Deactivated successfully.
Jan 26 16:56:21 compute-0 podman[393866]: 2026-01-26 16:56:21.780960013 +0000 UTC m=+0.450123482 container died ea4e9e9e584caab10b0df12f4b71240b091c5ab63178474702ef6ecee16a7349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:56:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cdc6d8782ef687b66c286e5989f4d184780c3e350d1a9a50dc0fdb7ad526927d-merged.mount: Deactivated successfully.
Jan 26 16:56:21 compute-0 podman[393866]: 2026-01-26 16:56:21.831624205 +0000 UTC m=+0.500787664 container remove ea4e9e9e584caab10b0df12f4b71240b091c5ab63178474702ef6ecee16a7349 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_carver, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 16:56:21 compute-0 systemd[1]: libpod-conmon-ea4e9e9e584caab10b0df12f4b71240b091c5ab63178474702ef6ecee16a7349.scope: Deactivated successfully.
Jan 26 16:56:21 compute-0 sudo[393789]: pam_unix(sudo:session): session closed for user root
Jan 26 16:56:21 compute-0 sudo[393901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:56:21 compute-0 sudo[393901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:56:21 compute-0 sudo[393901]: pam_unix(sudo:session): session closed for user root
Jan 26 16:56:22 compute-0 sudo[393926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:56:22 compute-0 sudo[393926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:56:22 compute-0 podman[393963]: 2026-01-26 16:56:22.320328291 +0000 UTC m=+0.050670963 container create 89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:56:22 compute-0 systemd[1]: Started libpod-conmon-89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26.scope.
Jan 26 16:56:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:56:22 compute-0 podman[393963]: 2026-01-26 16:56:22.297394289 +0000 UTC m=+0.027736981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:56:22 compute-0 podman[393963]: 2026-01-26 16:56:22.410192903 +0000 UTC m=+0.140535585 container init 89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_curie, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:56:22 compute-0 podman[393963]: 2026-01-26 16:56:22.420166447 +0000 UTC m=+0.150509109 container start 89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_curie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 16:56:22 compute-0 podman[393963]: 2026-01-26 16:56:22.423903499 +0000 UTC m=+0.154246191 container attach 89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_curie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 16:56:22 compute-0 zealous_curie[393979]: 167 167
Jan 26 16:56:22 compute-0 systemd[1]: libpod-89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26.scope: Deactivated successfully.
Jan 26 16:56:22 compute-0 conmon[393979]: conmon 89fba8591f27c447e9e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26.scope/container/memory.events
Jan 26 16:56:22 compute-0 podman[393963]: 2026-01-26 16:56:22.43986807 +0000 UTC m=+0.170210742 container died 89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:56:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-df32fcf819c961e637d25d0d35ba294da747709127f7bc9969a711ab51b50c47-merged.mount: Deactivated successfully.
Jan 26 16:56:22 compute-0 podman[393963]: 2026-01-26 16:56:22.484992286 +0000 UTC m=+0.215334958 container remove 89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_curie, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 16:56:22 compute-0 systemd[1]: libpod-conmon-89fba8591f27c447e9e4ee2c7b97a01d6c9995d2a3abb2f2d82432ecd942fb26.scope: Deactivated successfully.
Jan 26 16:56:22 compute-0 podman[394002]: 2026-01-26 16:56:22.648810991 +0000 UTC m=+0.042668697 container create af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rosalind, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:56:22 compute-0 systemd[1]: Started libpod-conmon-af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374.scope.
Jan 26 16:56:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:56:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0d874c6b76de59baf35763a004a9f0939614d6151a8983c04dfb02aa7cfff2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0d874c6b76de59baf35763a004a9f0939614d6151a8983c04dfb02aa7cfff2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0d874c6b76de59baf35763a004a9f0939614d6151a8983c04dfb02aa7cfff2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae0d874c6b76de59baf35763a004a9f0939614d6151a8983c04dfb02aa7cfff2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:56:22 compute-0 podman[394002]: 2026-01-26 16:56:22.629731103 +0000 UTC m=+0.023588839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:56:22 compute-0 podman[394002]: 2026-01-26 16:56:22.962386396 +0000 UTC m=+0.356244122 container init af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:56:22 compute-0 podman[394002]: 2026-01-26 16:56:22.96870341 +0000 UTC m=+0.362561116 container start af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 16:56:23 compute-0 podman[394002]: 2026-01-26 16:56:23.085743899 +0000 UTC m=+0.479601625 container attach af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 16:56:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:23 compute-0 nova_compute[239965]: 2026-01-26 16:56:23.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:23 compute-0 nova_compute[239965]: 2026-01-26 16:56:23.515 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:23 compute-0 lvm[394096]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:56:23 compute-0 lvm[394097]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:56:23 compute-0 lvm[394097]: VG ceph_vg1 finished
Jan 26 16:56:23 compute-0 lvm[394096]: VG ceph_vg0 finished
Jan 26 16:56:23 compute-0 ceph-mon[75140]: pgmap v3198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:23 compute-0 lvm[394099]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:56:23 compute-0 lvm[394099]: VG ceph_vg2 finished
Jan 26 16:56:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:23 compute-0 infallible_rosalind[394018]: {}
Jan 26 16:56:23 compute-0 systemd[1]: libpod-af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374.scope: Deactivated successfully.
Jan 26 16:56:23 compute-0 systemd[1]: libpod-af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374.scope: Consumed 1.309s CPU time.
Jan 26 16:56:23 compute-0 podman[394002]: 2026-01-26 16:56:23.790794876 +0000 UTC m=+1.184652602 container died af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rosalind, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 16:56:24 compute-0 nova_compute[239965]: 2026-01-26 16:56:24.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:25 compute-0 sshd-session[394114]: Invalid user sol from 45.148.10.240 port 41882
Jan 26 16:56:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae0d874c6b76de59baf35763a004a9f0939614d6151a8983c04dfb02aa7cfff2-merged.mount: Deactivated successfully.
Jan 26 16:56:25 compute-0 ceph-mon[75140]: pgmap v3199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:25 compute-0 podman[394002]: 2026-01-26 16:56:25.498995858 +0000 UTC m=+2.892853584 container remove af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:56:25 compute-0 systemd[1]: libpod-conmon-af3b8b5314e28e977e4ab18a74e18552e6de7d4695751689c157326e0bde3374.scope: Deactivated successfully.
Jan 26 16:56:25 compute-0 sudo[393926]: pam_unix(sudo:session): session closed for user root
Jan 26 16:56:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:56:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:56:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:56:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:56:25 compute-0 sshd-session[394114]: Connection closed by invalid user sol 45.148.10.240 port 41882 [preauth]
Jan 26 16:56:25 compute-0 sudo[394118]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:56:25 compute-0 sudo[394118]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:56:25 compute-0 sudo[394118]: pam_unix(sudo:session): session closed for user root
Jan 26 16:56:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:25 compute-0 nova_compute[239965]: 2026-01-26 16:56:25.847 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:26 compute-0 nova_compute[239965]: 2026-01-26 16:56:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:26 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:56:26 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:56:26 compute-0 ceph-mon[75140]: pgmap v3200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:28 compute-0 ceph-mon[75140]: pgmap v3201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:56:28
Jan 26 16:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', '.rgw.root', 'default.rgw.control', 'default.rgw.meta', 'backups', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'volumes']
Jan 26 16:56:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:56:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:29 compute-0 nova_compute[239965]: 2026-01-26 16:56:29.840 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:56:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:56:30 compute-0 ceph-mon[75140]: pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:30 compute-0 nova_compute[239965]: 2026-01-26 16:56:30.850 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:56:31 compute-0 nova_compute[239965]: 2026-01-26 16:56:31.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:32 compute-0 nova_compute[239965]: 2026-01-26 16:56:32.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:32 compute-0 nova_compute[239965]: 2026-01-26 16:56:32.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:56:32 compute-0 ceph-mon[75140]: pgmap v3203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:33 compute-0 nova_compute[239965]: 2026-01-26 16:56:33.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:34 compute-0 ceph-mon[75140]: pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:34 compute-0 nova_compute[239965]: 2026-01-26 16:56:34.845 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:35 compute-0 nova_compute[239965]: 2026-01-26 16:56:35.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:35 compute-0 nova_compute[239965]: 2026-01-26 16:56:35.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:56:35 compute-0 nova_compute[239965]: 2026-01-26 16:56:35.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:56:35 compute-0 nova_compute[239965]: 2026-01-26 16:56:35.525 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:56:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:35 compute-0 nova_compute[239965]: 2026-01-26 16:56:35.853 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:36 compute-0 podman[394144]: 2026-01-26 16:56:36.371319351 +0000 UTC m=+0.056146307 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 26 16:56:36 compute-0 nova_compute[239965]: 2026-01-26 16:56:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:36 compute-0 nova_compute[239965]: 2026-01-26 16:56:36.535 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:56:36 compute-0 nova_compute[239965]: 2026-01-26 16:56:36.563 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:56:36 compute-0 nova_compute[239965]: 2026-01-26 16:56:36.563 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:56:36 compute-0 nova_compute[239965]: 2026-01-26 16:56:36.563 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:56:36 compute-0 nova_compute[239965]: 2026-01-26 16:56:36.563 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:56:36 compute-0 nova_compute[239965]: 2026-01-26 16:56:36.563 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:36 compute-0 ceph-mon[75140]: pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:56:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274576410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.174 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.327 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.328 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.328 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.329 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.393 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.394 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.406 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:56:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:37 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3274576410' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:56:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:56:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4551440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.973 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.980 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.996 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.998 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:56:37 compute-0 nova_compute[239965]: 2026-01-26 16:56:37.998 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:56:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:38 compute-0 ceph-mon[75140]: pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4551440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:56:39 compute-0 podman[394208]: 2026-01-26 16:56:39.389930666 +0000 UTC m=+0.079462527 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 26 16:56:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:39 compute-0 nova_compute[239965]: 2026-01-26 16:56:39.931 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:40 compute-0 ceph-mon[75140]: pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:40 compute-0 nova_compute[239965]: 2026-01-26 16:56:40.853 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:42 compute-0 ceph-mon[75140]: pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:44 compute-0 ceph-mon[75140]: pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:44 compute-0 nova_compute[239965]: 2026-01-26 16:56:44.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:45 compute-0 nova_compute[239965]: 2026-01-26 16:56:45.857 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:46 compute-0 ceph-mon[75140]: pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:56:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3664973851' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:56:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:56:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3664973851' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:56:48 compute-0 ceph-mon[75140]: pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3664973851' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:56:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3664973851' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:56:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:49 compute-0 nova_compute[239965]: 2026-01-26 16:56:49.938 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:50 compute-0 ceph-mon[75140]: pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:50 compute-0 nova_compute[239965]: 2026-01-26 16:56:50.968 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:52 compute-0 ceph-mon[75140]: pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:54 compute-0 ceph-mon[75140]: pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:55 compute-0 nova_compute[239965]: 2026-01-26 16:56:55.063 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:55 compute-0 nova_compute[239965]: 2026-01-26 16:56:55.972 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:56:56 compute-0 ceph-mon[75140]: pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:56:58 compute-0 ceph-mon[75140]: pgmap v3216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:56:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:56:59.279 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:56:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:56:59.280 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:56:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:56:59.280 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:56:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:00 compute-0 nova_compute[239965]: 2026-01-26 16:57:00.066 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:57:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:57:00 compute-0 nova_compute[239965]: 2026-01-26 16:57:00.975 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:00 compute-0 ceph-mon[75140]: pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:03 compute-0 ceph-mon[75140]: pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:05 compute-0 ceph-mon[75140]: pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:05 compute-0 nova_compute[239965]: 2026-01-26 16:57:05.069 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:05 compute-0 nova_compute[239965]: 2026-01-26 16:57:05.978 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.020879) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446626020914, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1397, "num_deletes": 251, "total_data_size": 2307652, "memory_usage": 2332872, "flush_reason": "Manual Compaction"}
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446626036711, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2252646, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64630, "largest_seqno": 66026, "table_properties": {"data_size": 2246006, "index_size": 3838, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13641, "raw_average_key_size": 19, "raw_value_size": 2232799, "raw_average_value_size": 3254, "num_data_blocks": 172, "num_entries": 686, "num_filter_entries": 686, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769446479, "oldest_key_time": 1769446479, "file_creation_time": 1769446626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 15877 microseconds, and 5944 cpu microseconds.
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.036753) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2252646 bytes OK
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.036772) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.040922) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.040950) EVENT_LOG_v1 {"time_micros": 1769446626040943, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.040970) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2301462, prev total WAL file size 2301462, number of live WAL files 2.
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.041909) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2199KB)], [152(9448KB)]
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446626041934, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11927678, "oldest_snapshot_seqno": -1}
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8615 keys, 10105370 bytes, temperature: kUnknown
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446626120048, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 10105370, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10051303, "index_size": 31426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21573, "raw_key_size": 224911, "raw_average_key_size": 26, "raw_value_size": 9900800, "raw_average_value_size": 1149, "num_data_blocks": 1212, "num_entries": 8615, "num_filter_entries": 8615, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769446626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.120302) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 10105370 bytes
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.122366) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.5 rd, 129.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 9.2 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(9.8) write-amplify(4.5) OK, records in: 9129, records dropped: 514 output_compression: NoCompression
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.122384) EVENT_LOG_v1 {"time_micros": 1769446626122376, "job": 94, "event": "compaction_finished", "compaction_time_micros": 78215, "compaction_time_cpu_micros": 26707, "output_level": 6, "num_output_files": 1, "total_output_size": 10105370, "num_input_records": 9129, "num_output_records": 8615, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446626123058, "job": 94, "event": "table_file_deletion", "file_number": 154}
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446626125317, "job": 94, "event": "table_file_deletion", "file_number": 152}
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.041852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.125623) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.125634) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.125637) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.125639) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:06 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:06.125641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:07 compute-0 ceph-mon[75140]: pgmap v3220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:07 compute-0 podman[394233]: 2026-01-26 16:57:07.356737712 +0000 UTC m=+0.048622223 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 16:57:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:09 compute-0 ceph-mon[75140]: pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:10 compute-0 nova_compute[239965]: 2026-01-26 16:57:10.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:10 compute-0 podman[394253]: 2026-01-26 16:57:10.392045066 +0000 UTC m=+0.082469002 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller)
Jan 26 16:57:10 compute-0 nova_compute[239965]: 2026-01-26 16:57:10.982 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:11 compute-0 ceph-mon[75140]: pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:13 compute-0 ceph-mon[75140]: pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:15 compute-0 ceph-mon[75140]: pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:15 compute-0 nova_compute[239965]: 2026-01-26 16:57:15.077 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:15 compute-0 nova_compute[239965]: 2026-01-26 16:57:15.985 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:17 compute-0 ceph-mon[75140]: pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:19 compute-0 ceph-mon[75140]: pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:20 compute-0 nova_compute[239965]: 2026-01-26 16:57:20.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:20 compute-0 nova_compute[239965]: 2026-01-26 16:57:20.988 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:21 compute-0 ceph-mon[75140]: pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:22 compute-0 nova_compute[239965]: 2026-01-26 16:57:22.974 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:23 compute-0 ceph-mon[75140]: pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:24 compute-0 nova_compute[239965]: 2026-01-26 16:57:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:25 compute-0 nova_compute[239965]: 2026-01-26 16:57:25.085 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:25 compute-0 ceph-mon[75140]: pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:25 compute-0 nova_compute[239965]: 2026-01-26 16:57:25.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:25 compute-0 sudo[394278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:57:25 compute-0 sudo[394278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:57:25 compute-0 sudo[394278]: pam_unix(sudo:session): session closed for user root
Jan 26 16:57:25 compute-0 sudo[394303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:57:25 compute-0 sudo[394303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:57:25 compute-0 nova_compute[239965]: 2026-01-26 16:57:25.988 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:26 compute-0 sudo[394303]: pam_unix(sudo:session): session closed for user root
Jan 26 16:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:57:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:57:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:57:26 compute-0 sudo[394360]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:57:26 compute-0 sudo[394360]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:57:26 compute-0 sudo[394360]: pam_unix(sudo:session): session closed for user root
Jan 26 16:57:26 compute-0 sudo[394385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:57:26 compute-0 sudo[394385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:57:26 compute-0 podman[394422]: 2026-01-26 16:57:26.787968027 +0000 UTC m=+0.040343900 container create 645022eb5d069b67de433ab7d523583192df7286b71f8ad2e0bea093c90ee89d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 16:57:26 compute-0 systemd[1]: Started libpod-conmon-645022eb5d069b67de433ab7d523583192df7286b71f8ad2e0bea093c90ee89d.scope.
Jan 26 16:57:26 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:57:26 compute-0 podman[394422]: 2026-01-26 16:57:26.76971289 +0000 UTC m=+0.022088783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:57:26 compute-0 podman[394422]: 2026-01-26 16:57:26.868855488 +0000 UTC m=+0.121231381 container init 645022eb5d069b67de433ab7d523583192df7286b71f8ad2e0bea093c90ee89d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:57:26 compute-0 podman[394422]: 2026-01-26 16:57:26.875656785 +0000 UTC m=+0.128032648 container start 645022eb5d069b67de433ab7d523583192df7286b71f8ad2e0bea093c90ee89d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_diffie, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 16:57:26 compute-0 podman[394422]: 2026-01-26 16:57:26.879132861 +0000 UTC m=+0.131508744 container attach 645022eb5d069b67de433ab7d523583192df7286b71f8ad2e0bea093c90ee89d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_diffie, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:57:26 compute-0 stoic_diffie[394438]: 167 167
Jan 26 16:57:26 compute-0 systemd[1]: libpod-645022eb5d069b67de433ab7d523583192df7286b71f8ad2e0bea093c90ee89d.scope: Deactivated successfully.
Jan 26 16:57:26 compute-0 podman[394422]: 2026-01-26 16:57:26.88277965 +0000 UTC m=+0.135155513 container died 645022eb5d069b67de433ab7d523583192df7286b71f8ad2e0bea093c90ee89d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_diffie, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:57:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-6fa2d3e447edce1df4e8da49d86a6a24535779598c92873434f998a33865b7a7-merged.mount: Deactivated successfully.
Jan 26 16:57:26 compute-0 podman[394422]: 2026-01-26 16:57:26.924319528 +0000 UTC m=+0.176695391 container remove 645022eb5d069b67de433ab7d523583192df7286b71f8ad2e0bea093c90ee89d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 26 16:57:26 compute-0 systemd[1]: libpod-conmon-645022eb5d069b67de433ab7d523583192df7286b71f8ad2e0bea093c90ee89d.scope: Deactivated successfully.
Jan 26 16:57:27 compute-0 podman[394460]: 2026-01-26 16:57:27.095939294 +0000 UTC m=+0.045034305 container create d515a1e9f1fbfaf4bfccde0362959585d89635f2f7b11134610bfdf0250e4ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:57:27 compute-0 ceph-mon[75140]: pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:57:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:57:27 compute-0 systemd[1]: Started libpod-conmon-d515a1e9f1fbfaf4bfccde0362959585d89635f2f7b11134610bfdf0250e4ba5.scope.
Jan 26 16:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e9a1351d526820aeac51a884a576023ddd1385e1e3bcf74546b2813f68a3c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:27 compute-0 podman[394460]: 2026-01-26 16:57:27.076571489 +0000 UTC m=+0.025666520 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e9a1351d526820aeac51a884a576023ddd1385e1e3bcf74546b2813f68a3c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e9a1351d526820aeac51a884a576023ddd1385e1e3bcf74546b2813f68a3c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e9a1351d526820aeac51a884a576023ddd1385e1e3bcf74546b2813f68a3c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e9a1351d526820aeac51a884a576023ddd1385e1e3bcf74546b2813f68a3c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:27 compute-0 podman[394460]: 2026-01-26 16:57:27.188313478 +0000 UTC m=+0.137408519 container init d515a1e9f1fbfaf4bfccde0362959585d89635f2f7b11134610bfdf0250e4ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_pascal, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:57:27 compute-0 podman[394460]: 2026-01-26 16:57:27.196043497 +0000 UTC m=+0.145138518 container start d515a1e9f1fbfaf4bfccde0362959585d89635f2f7b11134610bfdf0250e4ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_pascal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 16:57:27 compute-0 podman[394460]: 2026-01-26 16:57:27.204934624 +0000 UTC m=+0.154029665 container attach d515a1e9f1fbfaf4bfccde0362959585d89635f2f7b11134610bfdf0250e4ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_pascal, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 16:57:27 compute-0 compassionate_pascal[394477]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:57:27 compute-0 compassionate_pascal[394477]: --> All data devices are unavailable
Jan 26 16:57:27 compute-0 systemd[1]: libpod-d515a1e9f1fbfaf4bfccde0362959585d89635f2f7b11134610bfdf0250e4ba5.scope: Deactivated successfully.
Jan 26 16:57:27 compute-0 podman[394460]: 2026-01-26 16:57:27.703898043 +0000 UTC m=+0.652993074 container died d515a1e9f1fbfaf4bfccde0362959585d89635f2f7b11134610bfdf0250e4ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 16:57:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-49e9a1351d526820aeac51a884a576023ddd1385e1e3bcf74546b2813f68a3c9-merged.mount: Deactivated successfully.
Jan 26 16:57:27 compute-0 podman[394460]: 2026-01-26 16:57:27.754877602 +0000 UTC m=+0.703972623 container remove d515a1e9f1fbfaf4bfccde0362959585d89635f2f7b11134610bfdf0250e4ba5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_pascal, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:57:27 compute-0 systemd[1]: libpod-conmon-d515a1e9f1fbfaf4bfccde0362959585d89635f2f7b11134610bfdf0250e4ba5.scope: Deactivated successfully.
Jan 26 16:57:27 compute-0 sudo[394385]: pam_unix(sudo:session): session closed for user root
Jan 26 16:57:27 compute-0 sudo[394511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:57:27 compute-0 sudo[394511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:57:27 compute-0 sudo[394511]: pam_unix(sudo:session): session closed for user root
Jan 26 16:57:27 compute-0 sudo[394536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:57:27 compute-0 sudo[394536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:57:28 compute-0 podman[394573]: 2026-01-26 16:57:28.210939028 +0000 UTC m=+0.039583330 container create 679ddd574d4f57e03fbcd06f94de5f2bb81ba22eada2ad7a7a7abaea542e01b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:57:28 compute-0 systemd[1]: Started libpod-conmon-679ddd574d4f57e03fbcd06f94de5f2bb81ba22eada2ad7a7a7abaea542e01b8.scope.
Jan 26 16:57:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:57:28 compute-0 podman[394573]: 2026-01-26 16:57:28.286824718 +0000 UTC m=+0.115469040 container init 679ddd574d4f57e03fbcd06f94de5f2bb81ba22eada2ad7a7a7abaea542e01b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:57:28 compute-0 podman[394573]: 2026-01-26 16:57:28.193768318 +0000 UTC m=+0.022412640 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:57:28 compute-0 podman[394573]: 2026-01-26 16:57:28.293369129 +0000 UTC m=+0.122013431 container start 679ddd574d4f57e03fbcd06f94de5f2bb81ba22eada2ad7a7a7abaea542e01b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcclintock, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 16:57:28 compute-0 podman[394573]: 2026-01-26 16:57:28.29707954 +0000 UTC m=+0.125723842 container attach 679ddd574d4f57e03fbcd06f94de5f2bb81ba22eada2ad7a7a7abaea542e01b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 26 16:57:28 compute-0 vibrant_mcclintock[394590]: 167 167
Jan 26 16:57:28 compute-0 systemd[1]: libpod-679ddd574d4f57e03fbcd06f94de5f2bb81ba22eada2ad7a7a7abaea542e01b8.scope: Deactivated successfully.
Jan 26 16:57:28 compute-0 podman[394573]: 2026-01-26 16:57:28.298820762 +0000 UTC m=+0.127465064 container died 679ddd574d4f57e03fbcd06f94de5f2bb81ba22eada2ad7a7a7abaea542e01b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5d4077569eb9a7df14db6913cb6239b36e629a2626cd1fd8b41c97fdd0162be-merged.mount: Deactivated successfully.
Jan 26 16:57:28 compute-0 podman[394573]: 2026-01-26 16:57:28.339564001 +0000 UTC m=+0.168208303 container remove 679ddd574d4f57e03fbcd06f94de5f2bb81ba22eada2ad7a7a7abaea542e01b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_mcclintock, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 16:57:28 compute-0 systemd[1]: libpod-conmon-679ddd574d4f57e03fbcd06f94de5f2bb81ba22eada2ad7a7a7abaea542e01b8.scope: Deactivated successfully.
Jan 26 16:57:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:28 compute-0 podman[394615]: 2026-01-26 16:57:28.499595792 +0000 UTC m=+0.041090257 container create 8d7e59398e9f80685c1ab7e0338f75aed63922f102ef6dfb30b27b36be3025bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_knuth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:57:28 compute-0 nova_compute[239965]: 2026-01-26 16:57:28.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:28 compute-0 systemd[1]: Started libpod-conmon-8d7e59398e9f80685c1ab7e0338f75aed63922f102ef6dfb30b27b36be3025bb.scope.
Jan 26 16:57:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ccd7935959303f2740b91a0c74113314206f3946cd03375e3e7173c0b4fe80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ccd7935959303f2740b91a0c74113314206f3946cd03375e3e7173c0b4fe80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ccd7935959303f2740b91a0c74113314206f3946cd03375e3e7173c0b4fe80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/55ccd7935959303f2740b91a0c74113314206f3946cd03375e3e7173c0b4fe80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:28 compute-0 podman[394615]: 2026-01-26 16:57:28.481804636 +0000 UTC m=+0.023299121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:57:28 compute-0 podman[394615]: 2026-01-26 16:57:28.580879724 +0000 UTC m=+0.122374209 container init 8d7e59398e9f80685c1ab7e0338f75aed63922f102ef6dfb30b27b36be3025bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 16:57:28 compute-0 podman[394615]: 2026-01-26 16:57:28.588632325 +0000 UTC m=+0.130126790 container start 8d7e59398e9f80685c1ab7e0338f75aed63922f102ef6dfb30b27b36be3025bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:57:28 compute-0 podman[394615]: 2026-01-26 16:57:28.593546994 +0000 UTC m=+0.135041459 container attach 8d7e59398e9f80685c1ab7e0338f75aed63922f102ef6dfb30b27b36be3025bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_knuth, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 16:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:57:28
Jan 26 16:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.log', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'vms', '.mgr', '.rgw.root']
Jan 26 16:57:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:57:28 compute-0 sharp_knuth[394632]: {
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:     "0": [
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:         {
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "devices": [
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "/dev/loop3"
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             ],
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_name": "ceph_lv0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_size": "21470642176",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "name": "ceph_lv0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "tags": {
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.cluster_name": "ceph",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.crush_device_class": "",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.encrypted": "0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.objectstore": "bluestore",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.osd_id": "0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.type": "block",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.vdo": "0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.with_tpm": "0"
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             },
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "type": "block",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "vg_name": "ceph_vg0"
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:         }
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:     ],
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:     "1": [
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:         {
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "devices": [
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "/dev/loop4"
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             ],
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_name": "ceph_lv1",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_size": "21470642176",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "name": "ceph_lv1",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "tags": {
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.cluster_name": "ceph",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.crush_device_class": "",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.encrypted": "0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.objectstore": "bluestore",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.osd_id": "1",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.type": "block",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.vdo": "0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.with_tpm": "0"
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             },
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "type": "block",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "vg_name": "ceph_vg1"
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:         }
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:     ],
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:     "2": [
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:         {
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "devices": [
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "/dev/loop5"
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             ],
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_name": "ceph_lv2",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_size": "21470642176",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "name": "ceph_lv2",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "tags": {
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.cluster_name": "ceph",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.crush_device_class": "",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.encrypted": "0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.objectstore": "bluestore",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.osd_id": "2",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.type": "block",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.vdo": "0",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:                 "ceph.with_tpm": "0"
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             },
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "type": "block",
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:             "vg_name": "ceph_vg2"
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:         }
Jan 26 16:57:28 compute-0 sharp_knuth[394632]:     ]
Jan 26 16:57:28 compute-0 sharp_knuth[394632]: }
Jan 26 16:57:28 compute-0 systemd[1]: libpod-8d7e59398e9f80685c1ab7e0338f75aed63922f102ef6dfb30b27b36be3025bb.scope: Deactivated successfully.
Jan 26 16:57:28 compute-0 podman[394615]: 2026-01-26 16:57:28.911094117 +0000 UTC m=+0.452588582 container died 8d7e59398e9f80685c1ab7e0338f75aed63922f102ef6dfb30b27b36be3025bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_knuth, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:57:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-55ccd7935959303f2740b91a0c74113314206f3946cd03375e3e7173c0b4fe80-merged.mount: Deactivated successfully.
Jan 26 16:57:28 compute-0 podman[394615]: 2026-01-26 16:57:28.960304352 +0000 UTC m=+0.501798817 container remove 8d7e59398e9f80685c1ab7e0338f75aed63922f102ef6dfb30b27b36be3025bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 16:57:28 compute-0 systemd[1]: libpod-conmon-8d7e59398e9f80685c1ab7e0338f75aed63922f102ef6dfb30b27b36be3025bb.scope: Deactivated successfully.
Jan 26 16:57:29 compute-0 sudo[394536]: pam_unix(sudo:session): session closed for user root
Jan 26 16:57:29 compute-0 sudo[394652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:57:29 compute-0 sudo[394652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:57:29 compute-0 sudo[394652]: pam_unix(sudo:session): session closed for user root
Jan 26 16:57:29 compute-0 sudo[394677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:57:29 compute-0 sudo[394677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:57:29 compute-0 ceph-mon[75140]: pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:29 compute-0 podman[394714]: 2026-01-26 16:57:29.452168837 +0000 UTC m=+0.047669630 container create 636c61c5cab080b9009ea09b843ada155f597b55acbacd0c2b6e9eb749c815da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:57:29 compute-0 systemd[1]: Started libpod-conmon-636c61c5cab080b9009ea09b843ada155f597b55acbacd0c2b6e9eb749c815da.scope.
Jan 26 16:57:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:57:29 compute-0 podman[394714]: 2026-01-26 16:57:29.432989117 +0000 UTC m=+0.028489920 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:57:29 compute-0 podman[394714]: 2026-01-26 16:57:29.537390114 +0000 UTC m=+0.132890927 container init 636c61c5cab080b9009ea09b843ada155f597b55acbacd0c2b6e9eb749c815da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:57:29 compute-0 podman[394714]: 2026-01-26 16:57:29.546528289 +0000 UTC m=+0.142029082 container start 636c61c5cab080b9009ea09b843ada155f597b55acbacd0c2b6e9eb749c815da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:57:29 compute-0 reverent_mccarthy[394730]: 167 167
Jan 26 16:57:29 compute-0 podman[394714]: 2026-01-26 16:57:29.550810814 +0000 UTC m=+0.146311607 container attach 636c61c5cab080b9009ea09b843ada155f597b55acbacd0c2b6e9eb749c815da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 26 16:57:29 compute-0 systemd[1]: libpod-636c61c5cab080b9009ea09b843ada155f597b55acbacd0c2b6e9eb749c815da.scope: Deactivated successfully.
Jan 26 16:57:29 compute-0 podman[394714]: 2026-01-26 16:57:29.5518866 +0000 UTC m=+0.147387413 container died 636c61c5cab080b9009ea09b843ada155f597b55acbacd0c2b6e9eb749c815da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mccarthy, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:57:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-840ce6340a6038d2919625a01aa854f6b300ad0c9b125ed25ba6f2233d6c1f17-merged.mount: Deactivated successfully.
Jan 26 16:57:29 compute-0 podman[394714]: 2026-01-26 16:57:29.591201774 +0000 UTC m=+0.186702567 container remove 636c61c5cab080b9009ea09b843ada155f597b55acbacd0c2b6e9eb749c815da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_mccarthy, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 16:57:29 compute-0 systemd[1]: libpod-conmon-636c61c5cab080b9009ea09b843ada155f597b55acbacd0c2b6e9eb749c815da.scope: Deactivated successfully.
Jan 26 16:57:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:29 compute-0 podman[394754]: 2026-01-26 16:57:29.770653921 +0000 UTC m=+0.048235282 container create 76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_spence, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:57:29 compute-0 systemd[1]: Started libpod-conmon-76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39.scope.
Jan 26 16:57:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e0ba9464a2000cae95854a43a48e6550b51bbeb925248412ba5ee37b26743f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:29 compute-0 podman[394754]: 2026-01-26 16:57:29.751492251 +0000 UTC m=+0.029073632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e0ba9464a2000cae95854a43a48e6550b51bbeb925248412ba5ee37b26743f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e0ba9464a2000cae95854a43a48e6550b51bbeb925248412ba5ee37b26743f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7e0ba9464a2000cae95854a43a48e6550b51bbeb925248412ba5ee37b26743f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:57:29 compute-0 podman[394754]: 2026-01-26 16:57:29.859245542 +0000 UTC m=+0.136826923 container init 76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 16:57:29 compute-0 podman[394754]: 2026-01-26 16:57:29.865845244 +0000 UTC m=+0.143426605 container start 76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_spence, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:57:29 compute-0 podman[394754]: 2026-01-26 16:57:29.869889963 +0000 UTC m=+0.147471344 container attach 76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_spence, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 16:57:30 compute-0 nova_compute[239965]: 2026-01-26 16:57:30.089 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:57:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:57:30 compute-0 lvm[394849]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:57:30 compute-0 lvm[394850]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:57:30 compute-0 lvm[394849]: VG ceph_vg0 finished
Jan 26 16:57:30 compute-0 lvm[394850]: VG ceph_vg1 finished
Jan 26 16:57:30 compute-0 lvm[394852]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:57:30 compute-0 lvm[394852]: VG ceph_vg2 finished
Jan 26 16:57:30 compute-0 flamboyant_spence[394771]: {}
Jan 26 16:57:30 compute-0 systemd[1]: libpod-76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39.scope: Deactivated successfully.
Jan 26 16:57:30 compute-0 systemd[1]: libpod-76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39.scope: Consumed 1.369s CPU time.
Jan 26 16:57:30 compute-0 podman[394754]: 2026-01-26 16:57:30.70386457 +0000 UTC m=+0.981445931 container died 76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_spence, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:57:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7e0ba9464a2000cae95854a43a48e6550b51bbeb925248412ba5ee37b26743f-merged.mount: Deactivated successfully.
Jan 26 16:57:30 compute-0 podman[394754]: 2026-01-26 16:57:30.750895313 +0000 UTC m=+1.028476674 container remove 76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:57:30 compute-0 systemd[1]: libpod-conmon-76d2c8e32d21a4dd1d9877fc64cb365cec1c4d4409be314cb6d3a25819a2eb39.scope: Deactivated successfully.
Jan 26 16:57:30 compute-0 sudo[394677]: pam_unix(sudo:session): session closed for user root
Jan 26 16:57:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:57:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:57:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:57:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:57:30 compute-0 sudo[394868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:57:30 compute-0 sudo[394868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:57:30 compute-0 sudo[394868]: pam_unix(sudo:session): session closed for user root
Jan 26 16:57:30 compute-0 nova_compute[239965]: 2026-01-26 16:57:30.990 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:57:31 compute-0 ceph-mon[75140]: pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:57:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:57:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:33 compute-0 ceph-mon[75140]: pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:33 compute-0 nova_compute[239965]: 2026-01-26 16:57:33.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.508014) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446653508048, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 479, "num_deletes": 256, "total_data_size": 441723, "memory_usage": 450600, "flush_reason": "Manual Compaction"}
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Jan 26 16:57:33 compute-0 nova_compute[239965]: 2026-01-26 16:57:33.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446653513340, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 436269, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66027, "largest_seqno": 66505, "table_properties": {"data_size": 433508, "index_size": 796, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6463, "raw_average_key_size": 18, "raw_value_size": 427965, "raw_average_value_size": 1222, "num_data_blocks": 35, "num_entries": 350, "num_filter_entries": 350, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769446627, "oldest_key_time": 1769446627, "file_creation_time": 1769446653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 5386 microseconds, and 1996 cpu microseconds.
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.513394) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 436269 bytes OK
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.513418) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.515440) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.515454) EVENT_LOG_v1 {"time_micros": 1769446653515449, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.515473) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 438858, prev total WAL file size 438858, number of live WAL files 2.
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.515927) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373631' seq:72057594037927935, type:22 .. '6C6F676D0033303133' seq:0, type:0; will stop at (end)
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(426KB)], [155(9868KB)]
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446653516028, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10541639, "oldest_snapshot_seqno": -1}
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8442 keys, 10437431 bytes, temperature: kUnknown
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446653599570, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10437431, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10383540, "index_size": 31698, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 222254, "raw_average_key_size": 26, "raw_value_size": 10235040, "raw_average_value_size": 1212, "num_data_blocks": 1222, "num_entries": 8442, "num_filter_entries": 8442, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769446653, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.599818) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10437431 bytes
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.602508) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.1 rd, 124.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.6 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(48.1) write-amplify(23.9) OK, records in: 8965, records dropped: 523 output_compression: NoCompression
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.602539) EVENT_LOG_v1 {"time_micros": 1769446653602524, "job": 96, "event": "compaction_finished", "compaction_time_micros": 83615, "compaction_time_cpu_micros": 28478, "output_level": 6, "num_output_files": 1, "total_output_size": 10437431, "num_input_records": 8965, "num_output_records": 8442, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446653602782, "job": 96, "event": "table_file_deletion", "file_number": 157}
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446653605114, "job": 96, "event": "table_file_deletion", "file_number": 155}
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.515823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.605211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.605216) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.605217) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.605218) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:33 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-16:57:33.605220) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 16:57:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:34 compute-0 nova_compute[239965]: 2026-01-26 16:57:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:34 compute-0 nova_compute[239965]: 2026-01-26 16:57:34.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:57:34 compute-0 ceph-mon[75140]: pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:35 compute-0 nova_compute[239965]: 2026-01-26 16:57:35.092 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:35 compute-0 nova_compute[239965]: 2026-01-26 16:57:35.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:35 compute-0 nova_compute[239965]: 2026-01-26 16:57:35.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:57:35 compute-0 nova_compute[239965]: 2026-01-26 16:57:35.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:57:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:35 compute-0 nova_compute[239965]: 2026-01-26 16:57:35.855 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:57:35 compute-0 nova_compute[239965]: 2026-01-26 16:57:35.992 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:36 compute-0 nova_compute[239965]: 2026-01-26 16:57:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:57:36 compute-0 nova_compute[239965]: 2026-01-26 16:57:36.753 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:57:36 compute-0 nova_compute[239965]: 2026-01-26 16:57:36.753 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:57:36 compute-0 nova_compute[239965]: 2026-01-26 16:57:36.754 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:57:36 compute-0 nova_compute[239965]: 2026-01-26 16:57:36.754 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:57:36 compute-0 nova_compute[239965]: 2026-01-26 16:57:36.754 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:57:37 compute-0 ceph-mon[75140]: pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:37 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:57:37 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3396791504' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:57:37 compute-0 nova_compute[239965]: 2026-01-26 16:57:37.325 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:57:37 compute-0 nova_compute[239965]: 2026-01-26 16:57:37.513 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:57:37 compute-0 nova_compute[239965]: 2026-01-26 16:57:37.514 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:57:37 compute-0 nova_compute[239965]: 2026-01-26 16:57:37.515 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:57:37 compute-0 nova_compute[239965]: 2026-01-26 16:57:37.515 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:57:37 compute-0 nova_compute[239965]: 2026-01-26 16:57:37.648 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:57:37 compute-0 nova_compute[239965]: 2026-01-26 16:57:37.649 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:57:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:37 compute-0 nova_compute[239965]: 2026-01-26 16:57:37.805 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:57:38 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3396791504' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:57:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:57:38 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2919106722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:57:38 compute-0 podman[394935]: 2026-01-26 16:57:38.39611386 +0000 UTC m=+0.072226560 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Jan 26 16:57:38 compute-0 nova_compute[239965]: 2026-01-26 16:57:38.413 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:57:38 compute-0 nova_compute[239965]: 2026-01-26 16:57:38.420 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:57:38 compute-0 nova_compute[239965]: 2026-01-26 16:57:38.436 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:57:38 compute-0 nova_compute[239965]: 2026-01-26 16:57:38.439 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:57:38 compute-0 nova_compute[239965]: 2026-01-26 16:57:38.439 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.924s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:57:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:39 compute-0 ceph-mon[75140]: pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2919106722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:57:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:40 compute-0 nova_compute[239965]: 2026-01-26 16:57:40.094 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:40 compute-0 nova_compute[239965]: 2026-01-26 16:57:40.993 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:41 compute-0 ceph-mon[75140]: pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:41 compute-0 podman[394956]: 2026-01-26 16:57:41.403995953 +0000 UTC m=+0.092784845 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 26 16:57:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:43 compute-0 ceph-mon[75140]: pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:44 compute-0 ceph-mon[75140]: pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:45 compute-0 nova_compute[239965]: 2026-01-26 16:57:45.097 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:45 compute-0 nova_compute[239965]: 2026-01-26 16:57:45.995 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:46 compute-0 ceph-mon[75140]: pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:57:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2047221258' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:57:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:57:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2047221258' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:57:48 compute-0 ceph-mon[75140]: pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2047221258' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:57:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2047221258' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:57:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:50 compute-0 nova_compute[239965]: 2026-01-26 16:57:50.100 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:50 compute-0 nova_compute[239965]: 2026-01-26 16:57:50.997 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:51 compute-0 ceph-mon[75140]: pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:53 compute-0 ceph-mon[75140]: pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:54 compute-0 ceph-mon[75140]: pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:55 compute-0 nova_compute[239965]: 2026-01-26 16:57:55.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:55 compute-0 nova_compute[239965]: 2026-01-26 16:57:55.999 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:57:57 compute-0 ceph-mon[75140]: pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:57:59 compute-0 ceph-mon[75140]: pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:57:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:57:59.280 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:57:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:57:59.281 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:57:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:57:59.282 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:57:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:00 compute-0 nova_compute[239965]: 2026-01-26 16:58:00.108 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:58:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:58:01 compute-0 nova_compute[239965]: 2026-01-26 16:58:01.014 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:01 compute-0 ceph-mon[75140]: pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:03 compute-0 ceph-mon[75140]: pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:04 compute-0 ceph-mon[75140]: pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:05 compute-0 nova_compute[239965]: 2026-01-26 16:58:05.111 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:06 compute-0 nova_compute[239965]: 2026-01-26 16:58:06.016 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:06 compute-0 ceph-mon[75140]: pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:08 compute-0 ceph-mon[75140]: pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:09 compute-0 podman[394982]: 2026-01-26 16:58:09.365817953 +0000 UTC m=+0.052175399 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 16:58:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:10 compute-0 nova_compute[239965]: 2026-01-26 16:58:10.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:10 compute-0 ceph-mon[75140]: pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:11 compute-0 nova_compute[239965]: 2026-01-26 16:58:11.017 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:12 compute-0 podman[395003]: 2026-01-26 16:58:12.393129953 +0000 UTC m=+0.082803001 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 16:58:13 compute-0 ceph-mon[75140]: pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:15 compute-0 nova_compute[239965]: 2026-01-26 16:58:15.148 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:15 compute-0 ceph-mon[75140]: pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:16 compute-0 nova_compute[239965]: 2026-01-26 16:58:16.022 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:16 compute-0 ceph-mon[75140]: pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:18 compute-0 ceph-mon[75140]: pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:20 compute-0 nova_compute[239965]: 2026-01-26 16:58:20.152 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:21 compute-0 nova_compute[239965]: 2026-01-26 16:58:21.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:21 compute-0 ceph-mon[75140]: pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:23 compute-0 ceph-mon[75140]: pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:24 compute-0 nova_compute[239965]: 2026-01-26 16:58:24.440 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:24 compute-0 nova_compute[239965]: 2026-01-26 16:58:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:25 compute-0 nova_compute[239965]: 2026-01-26 16:58:25.153 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:25 compute-0 ceph-mon[75140]: pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:26 compute-0 nova_compute[239965]: 2026-01-26 16:58:26.074 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:26 compute-0 ceph-mon[75140]: pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:27 compute-0 nova_compute[239965]: 2026-01-26 16:58:27.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:28 compute-0 nova_compute[239965]: 2026-01-26 16:58:28.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:58:28
Jan 26 16:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'volumes', 'images', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms']
Jan 26 16:58:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:58:29 compute-0 ceph-mon[75140]: pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:30 compute-0 nova_compute[239965]: 2026-01-26 16:58:30.157 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:58:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:58:30 compute-0 sudo[395030]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:58:30 compute-0 sudo[395030]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:30 compute-0 sudo[395030]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:31 compute-0 sudo[395055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 16:58:31 compute-0 sudo[395055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:31 compute-0 nova_compute[239965]: 2026-01-26 16:58:31.076 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:58:31 compute-0 ceph-mon[75140]: pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:31 compute-0 podman[395125]: 2026-01-26 16:58:31.481295796 +0000 UTC m=+0.071840102 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 16:58:31 compute-0 podman[395125]: 2026-01-26 16:58:31.596385176 +0000 UTC m=+0.186929482 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:58:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:32 compute-0 sudo[395055]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:58:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:58:32 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:32 compute-0 sudo[395314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:58:32 compute-0 sudo[395314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:32 compute-0 sudo[395314]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:32 compute-0 sudo[395339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:58:32 compute-0 sudo[395339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:33 compute-0 ceph-mon[75140]: pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:33 compute-0 sudo[395339]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:58:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:58:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:58:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:58:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:58:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:58:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:58:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:58:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:58:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:58:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:58:33 compute-0 sudo[395396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:58:33 compute-0 sudo[395396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:33 compute-0 sudo[395396]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:33 compute-0 sudo[395421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:58:33 compute-0 sudo[395421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:33 compute-0 nova_compute[239965]: 2026-01-26 16:58:33.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:33 compute-0 nova_compute[239965]: 2026-01-26 16:58:33.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:33 compute-0 podman[395457]: 2026-01-26 16:58:33.781191947 +0000 UTC m=+0.039841247 container create 779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pare, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:58:33 compute-0 systemd[1]: Started libpod-conmon-779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9.scope.
Jan 26 16:58:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:58:33 compute-0 podman[395457]: 2026-01-26 16:58:33.763331529 +0000 UTC m=+0.021980859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:58:33 compute-0 podman[395457]: 2026-01-26 16:58:33.871604663 +0000 UTC m=+0.130253983 container init 779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:58:33 compute-0 podman[395457]: 2026-01-26 16:58:33.877646471 +0000 UTC m=+0.136295771 container start 779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:58:33 compute-0 podman[395457]: 2026-01-26 16:58:33.881109707 +0000 UTC m=+0.139759007 container attach 779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pare, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:58:33 compute-0 focused_pare[395474]: 167 167
Jan 26 16:58:33 compute-0 systemd[1]: libpod-779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9.scope: Deactivated successfully.
Jan 26 16:58:33 compute-0 conmon[395474]: conmon 779464fac3acb9196a90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9.scope/container/memory.events
Jan 26 16:58:33 compute-0 podman[395457]: 2026-01-26 16:58:33.884362346 +0000 UTC m=+0.143011646 container died 779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 16:58:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f410e39b76a70aefaf92426211eee96bb5ad0a0df17662959615cc0f4e17071b-merged.mount: Deactivated successfully.
Jan 26 16:58:33 compute-0 podman[395457]: 2026-01-26 16:58:33.927412321 +0000 UTC m=+0.186061621 container remove 779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_pare, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 16:58:33 compute-0 systemd[1]: libpod-conmon-779464fac3acb9196a90b0ae16759c6edd85015b2c2cab63f158089e04f209d9.scope: Deactivated successfully.
Jan 26 16:58:34 compute-0 podman[395499]: 2026-01-26 16:58:34.071703097 +0000 UTC m=+0.039531700 container create abd850e71bc15c22fa93e5f678ce6c9050d9bc8862cc1f62571dc135799f6056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 16:58:34 compute-0 systemd[1]: Started libpod-conmon-abd850e71bc15c22fa93e5f678ce6c9050d9bc8862cc1f62571dc135799f6056.scope.
Jan 26 16:58:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b41ecbf0d20ea3c8f91e556021c3a4bfa009b9ac2a0a61bc244a1cff6b2e4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b41ecbf0d20ea3c8f91e556021c3a4bfa009b9ac2a0a61bc244a1cff6b2e4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b41ecbf0d20ea3c8f91e556021c3a4bfa009b9ac2a0a61bc244a1cff6b2e4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b41ecbf0d20ea3c8f91e556021c3a4bfa009b9ac2a0a61bc244a1cff6b2e4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b41ecbf0d20ea3c8f91e556021c3a4bfa009b9ac2a0a61bc244a1cff6b2e4f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:34 compute-0 podman[395499]: 2026-01-26 16:58:34.137600922 +0000 UTC m=+0.105429475 container init abd850e71bc15c22fa93e5f678ce6c9050d9bc8862cc1f62571dc135799f6056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 16:58:34 compute-0 podman[395499]: 2026-01-26 16:58:34.146735995 +0000 UTC m=+0.114564528 container start abd850e71bc15c22fa93e5f678ce6c9050d9bc8862cc1f62571dc135799f6056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:58:34 compute-0 podman[395499]: 2026-01-26 16:58:34.052744302 +0000 UTC m=+0.020572855 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:58:34 compute-0 podman[395499]: 2026-01-26 16:58:34.150304413 +0000 UTC m=+0.118132946 container attach abd850e71bc15c22fa93e5f678ce6c9050d9bc8862cc1f62571dc135799f6056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:58:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:58:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:58:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:58:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:58:34 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:58:34 compute-0 upbeat_thompson[395515]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:58:34 compute-0 upbeat_thompson[395515]: --> All data devices are unavailable
Jan 26 16:58:34 compute-0 systemd[1]: libpod-abd850e71bc15c22fa93e5f678ce6c9050d9bc8862cc1f62571dc135799f6056.scope: Deactivated successfully.
Jan 26 16:58:34 compute-0 podman[395499]: 2026-01-26 16:58:34.629039785 +0000 UTC m=+0.596868318 container died abd850e71bc15c22fa93e5f678ce6c9050d9bc8862cc1f62571dc135799f6056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_thompson, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:58:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b41ecbf0d20ea3c8f91e556021c3a4bfa009b9ac2a0a61bc244a1cff6b2e4f-merged.mount: Deactivated successfully.
Jan 26 16:58:34 compute-0 podman[395499]: 2026-01-26 16:58:34.785093629 +0000 UTC m=+0.752922162 container remove abd850e71bc15c22fa93e5f678ce6c9050d9bc8862cc1f62571dc135799f6056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_thompson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 16:58:34 compute-0 systemd[1]: libpod-conmon-abd850e71bc15c22fa93e5f678ce6c9050d9bc8862cc1f62571dc135799f6056.scope: Deactivated successfully.
Jan 26 16:58:34 compute-0 sudo[395421]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:34 compute-0 sudo[395546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:58:34 compute-0 sudo[395546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:34 compute-0 sudo[395546]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:34 compute-0 sudo[395571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:58:34 compute-0 sudo[395571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:35 compute-0 nova_compute[239965]: 2026-01-26 16:58:35.160 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:35 compute-0 ceph-mon[75140]: pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:35 compute-0 podman[395608]: 2026-01-26 16:58:35.260462239 +0000 UTC m=+0.044050541 container create 89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 16:58:35 compute-0 systemd[1]: Started libpod-conmon-89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9.scope.
Jan 26 16:58:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:58:35 compute-0 podman[395608]: 2026-01-26 16:58:35.237889836 +0000 UTC m=+0.021478168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:58:35 compute-0 podman[395608]: 2026-01-26 16:58:35.340531941 +0000 UTC m=+0.124120273 container init 89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_perlman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:58:35 compute-0 podman[395608]: 2026-01-26 16:58:35.347272497 +0000 UTC m=+0.130860809 container start 89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_perlman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:58:35 compute-0 podman[395608]: 2026-01-26 16:58:35.351838978 +0000 UTC m=+0.135427310 container attach 89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 16:58:35 compute-0 confident_perlman[395624]: 167 167
Jan 26 16:58:35 compute-0 systemd[1]: libpod-89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9.scope: Deactivated successfully.
Jan 26 16:58:35 compute-0 conmon[395624]: conmon 89934598215d3d91f88c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9.scope/container/memory.events
Jan 26 16:58:35 compute-0 podman[395608]: 2026-01-26 16:58:35.354603006 +0000 UTC m=+0.138191318 container died 89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_perlman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f6db5fd8fe263d73d0ef04d40bda52936cfca4dfff1b32b10581068f2eb67a0-merged.mount: Deactivated successfully.
Jan 26 16:58:35 compute-0 podman[395608]: 2026-01-26 16:58:35.394211547 +0000 UTC m=+0.177799859 container remove 89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_perlman, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:58:35 compute-0 systemd[1]: libpod-conmon-89934598215d3d91f88c3ea98b10f5fbf1a807418ae2a7bfda4d56d71d8d8cb9.scope: Deactivated successfully.
Jan 26 16:58:35 compute-0 nova_compute[239965]: 2026-01-26 16:58:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:35 compute-0 nova_compute[239965]: 2026-01-26 16:58:35.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:58:35 compute-0 podman[395648]: 2026-01-26 16:58:35.551321226 +0000 UTC m=+0.037416727 container create 9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:58:35 compute-0 systemd[1]: Started libpod-conmon-9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25.scope.
Jan 26 16:58:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146f0fec630b2ebc76e0be40d7f560b7c5b3da27609226767dc690202632d1d3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146f0fec630b2ebc76e0be40d7f560b7c5b3da27609226767dc690202632d1d3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146f0fec630b2ebc76e0be40d7f560b7c5b3da27609226767dc690202632d1d3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146f0fec630b2ebc76e0be40d7f560b7c5b3da27609226767dc690202632d1d3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:35 compute-0 podman[395648]: 2026-01-26 16:58:35.630032366 +0000 UTC m=+0.116127887 container init 9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:58:35 compute-0 podman[395648]: 2026-01-26 16:58:35.53510306 +0000 UTC m=+0.021198581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:58:35 compute-0 podman[395648]: 2026-01-26 16:58:35.637011687 +0000 UTC m=+0.123107188 container start 9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:58:35 compute-0 podman[395648]: 2026-01-26 16:58:35.640797849 +0000 UTC m=+0.126893350 container attach 9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:58:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:35 compute-0 happy_bell[395665]: {
Jan 26 16:58:35 compute-0 happy_bell[395665]:     "0": [
Jan 26 16:58:35 compute-0 happy_bell[395665]:         {
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "devices": [
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "/dev/loop3"
Jan 26 16:58:35 compute-0 happy_bell[395665]:             ],
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_name": "ceph_lv0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_size": "21470642176",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "name": "ceph_lv0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "tags": {
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.cluster_name": "ceph",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.crush_device_class": "",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.encrypted": "0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.objectstore": "bluestore",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.osd_id": "0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.type": "block",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.vdo": "0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.with_tpm": "0"
Jan 26 16:58:35 compute-0 happy_bell[395665]:             },
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "type": "block",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "vg_name": "ceph_vg0"
Jan 26 16:58:35 compute-0 happy_bell[395665]:         }
Jan 26 16:58:35 compute-0 happy_bell[395665]:     ],
Jan 26 16:58:35 compute-0 happy_bell[395665]:     "1": [
Jan 26 16:58:35 compute-0 happy_bell[395665]:         {
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "devices": [
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "/dev/loop4"
Jan 26 16:58:35 compute-0 happy_bell[395665]:             ],
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_name": "ceph_lv1",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_size": "21470642176",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "name": "ceph_lv1",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "tags": {
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.cluster_name": "ceph",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.crush_device_class": "",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.encrypted": "0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.objectstore": "bluestore",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.osd_id": "1",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.type": "block",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.vdo": "0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.with_tpm": "0"
Jan 26 16:58:35 compute-0 happy_bell[395665]:             },
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "type": "block",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "vg_name": "ceph_vg1"
Jan 26 16:58:35 compute-0 happy_bell[395665]:         }
Jan 26 16:58:35 compute-0 happy_bell[395665]:     ],
Jan 26 16:58:35 compute-0 happy_bell[395665]:     "2": [
Jan 26 16:58:35 compute-0 happy_bell[395665]:         {
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "devices": [
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "/dev/loop5"
Jan 26 16:58:35 compute-0 happy_bell[395665]:             ],
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_name": "ceph_lv2",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_size": "21470642176",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "name": "ceph_lv2",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "tags": {
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.cluster_name": "ceph",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.crush_device_class": "",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.encrypted": "0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.objectstore": "bluestore",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.osd_id": "2",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.type": "block",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.vdo": "0",
Jan 26 16:58:35 compute-0 happy_bell[395665]:                 "ceph.with_tpm": "0"
Jan 26 16:58:35 compute-0 happy_bell[395665]:             },
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "type": "block",
Jan 26 16:58:35 compute-0 happy_bell[395665]:             "vg_name": "ceph_vg2"
Jan 26 16:58:35 compute-0 happy_bell[395665]:         }
Jan 26 16:58:35 compute-0 happy_bell[395665]:     ]
Jan 26 16:58:35 compute-0 happy_bell[395665]: }
Jan 26 16:58:35 compute-0 systemd[1]: libpod-9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25.scope: Deactivated successfully.
Jan 26 16:58:35 compute-0 conmon[395665]: conmon 9544819ba0d9e4d56e3e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25.scope/container/memory.events
Jan 26 16:58:35 compute-0 podman[395648]: 2026-01-26 16:58:35.966214935 +0000 UTC m=+0.452310426 container died 9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:58:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-146f0fec630b2ebc76e0be40d7f560b7c5b3da27609226767dc690202632d1d3-merged.mount: Deactivated successfully.
Jan 26 16:58:36 compute-0 podman[395648]: 2026-01-26 16:58:36.006381748 +0000 UTC m=+0.492477249 container remove 9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bell, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 16:58:36 compute-0 systemd[1]: libpod-conmon-9544819ba0d9e4d56e3e0e2435940af1cc2985d416c64185d92f270d6f772d25.scope: Deactivated successfully.
Jan 26 16:58:36 compute-0 sudo[395571]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:36 compute-0 nova_compute[239965]: 2026-01-26 16:58:36.078 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:36 compute-0 sudo[395684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:58:36 compute-0 sudo[395684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:36 compute-0 sudo[395684]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:36 compute-0 sudo[395709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:58:36 compute-0 sudo[395709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:36 compute-0 nova_compute[239965]: 2026-01-26 16:58:36.505 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:36 compute-0 podman[395746]: 2026-01-26 16:58:36.453965277 +0000 UTC m=+0.023029085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:58:36 compute-0 podman[395746]: 2026-01-26 16:58:36.783290698 +0000 UTC m=+0.352354506 container create 5ad30417186e627b600d5e79fcd1720d9ea71ba8619c11065e0c286cbc98b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 16:58:36 compute-0 systemd[1]: Started libpod-conmon-5ad30417186e627b600d5e79fcd1720d9ea71ba8619c11065e0c286cbc98b921.scope.
Jan 26 16:58:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:58:37 compute-0 podman[395746]: 2026-01-26 16:58:37.46687193 +0000 UTC m=+1.035935768 container init 5ad30417186e627b600d5e79fcd1720d9ea71ba8619c11065e0c286cbc98b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 16:58:37 compute-0 podman[395746]: 2026-01-26 16:58:37.474305592 +0000 UTC m=+1.043369410 container start 5ad30417186e627b600d5e79fcd1720d9ea71ba8619c11065e0c286cbc98b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 16:58:37 compute-0 optimistic_mayer[395763]: 167 167
Jan 26 16:58:37 compute-0 systemd[1]: libpod-5ad30417186e627b600d5e79fcd1720d9ea71ba8619c11065e0c286cbc98b921.scope: Deactivated successfully.
Jan 26 16:58:37 compute-0 nova_compute[239965]: 2026-01-26 16:58:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:37 compute-0 nova_compute[239965]: 2026-01-26 16:58:37.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:58:37 compute-0 nova_compute[239965]: 2026-01-26 16:58:37.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:58:37 compute-0 nova_compute[239965]: 2026-01-26 16:58:37.535 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:58:37 compute-0 ceph-mon[75140]: pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:38 compute-0 podman[395746]: 2026-01-26 16:58:38.053002424 +0000 UTC m=+1.622066252 container attach 5ad30417186e627b600d5e79fcd1720d9ea71ba8619c11065e0c286cbc98b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:58:38 compute-0 podman[395746]: 2026-01-26 16:58:38.054749736 +0000 UTC m=+1.623813544 container died 5ad30417186e627b600d5e79fcd1720d9ea71ba8619c11065e0c286cbc98b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 16:58:38 compute-0 nova_compute[239965]: 2026-01-26 16:58:38.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:58:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:38 compute-0 nova_compute[239965]: 2026-01-26 16:58:38.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:58:38 compute-0 nova_compute[239965]: 2026-01-26 16:58:38.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:58:38 compute-0 nova_compute[239965]: 2026-01-26 16:58:38.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:58:38 compute-0 nova_compute[239965]: 2026-01-26 16:58:38.539 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:58:38 compute-0 nova_compute[239965]: 2026-01-26 16:58:38.540 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:39 compute-0 ceph-mon[75140]: pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ae4670a3d14dfb91d9deb255a75b878bd8d01119c5379b416239e10e9699f95-merged.mount: Deactivated successfully.
Jan 26 16:58:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:58:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3648355330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:58:39 compute-0 podman[395746]: 2026-01-26 16:58:39.402610348 +0000 UTC m=+2.971674156 container remove 5ad30417186e627b600d5e79fcd1720d9ea71ba8619c11065e0c286cbc98b921 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_mayer, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 16:58:39 compute-0 nova_compute[239965]: 2026-01-26 16:58:39.424 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.884s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:39 compute-0 systemd[1]: libpod-conmon-5ad30417186e627b600d5e79fcd1720d9ea71ba8619c11065e0c286cbc98b921.scope: Deactivated successfully.
Jan 26 16:58:39 compute-0 podman[395799]: 2026-01-26 16:58:39.483097141 +0000 UTC m=+0.072391486 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true)
Jan 26 16:58:39 compute-0 podman[395824]: 2026-01-26 16:58:39.611385774 +0000 UTC m=+0.065002514 container create ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermat, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:58:39 compute-0 nova_compute[239965]: 2026-01-26 16:58:39.647 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:58:39 compute-0 nova_compute[239965]: 2026-01-26 16:58:39.648 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3520MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:58:39 compute-0 nova_compute[239965]: 2026-01-26 16:58:39.648 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:58:39 compute-0 nova_compute[239965]: 2026-01-26 16:58:39.648 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:58:39 compute-0 systemd[1]: Started libpod-conmon-ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f.scope.
Jan 26 16:58:39 compute-0 podman[395824]: 2026-01-26 16:58:39.580142639 +0000 UTC m=+0.033759399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:58:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57c6491dc0d17400c918c11883b05e4c3c6409486fcff16a48521b62a3cae3e3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57c6491dc0d17400c918c11883b05e4c3c6409486fcff16a48521b62a3cae3e3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57c6491dc0d17400c918c11883b05e4c3c6409486fcff16a48521b62a3cae3e3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57c6491dc0d17400c918c11883b05e4c3c6409486fcff16a48521b62a3cae3e3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:58:39 compute-0 podman[395824]: 2026-01-26 16:58:39.731512288 +0000 UTC m=+0.185129038 container init ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 16:58:39 compute-0 podman[395824]: 2026-01-26 16:58:39.743261356 +0000 UTC m=+0.196878086 container start ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermat, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:58:39 compute-0 podman[395824]: 2026-01-26 16:58:39.748585746 +0000 UTC m=+0.202202476 container attach ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermat, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:58:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:40 compute-0 nova_compute[239965]: 2026-01-26 16:58:40.161 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:40 compute-0 nova_compute[239965]: 2026-01-26 16:58:40.234 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:58:40 compute-0 nova_compute[239965]: 2026-01-26 16:58:40.235 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:58:40 compute-0 nova_compute[239965]: 2026-01-26 16:58:40.255 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:58:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3648355330' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:58:40 compute-0 ceph-mon[75140]: pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:40 compute-0 sshd-session[395855]: Invalid user sol from 45.148.10.240 port 33452
Jan 26 16:58:40 compute-0 lvm[395932]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:58:40 compute-0 lvm[395932]: VG ceph_vg1 finished
Jan 26 16:58:40 compute-0 lvm[395931]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:58:40 compute-0 lvm[395931]: VG ceph_vg0 finished
Jan 26 16:58:40 compute-0 lvm[395943]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:58:40 compute-0 lvm[395943]: VG ceph_vg2 finished
Jan 26 16:58:40 compute-0 sshd-session[395855]: Connection closed by invalid user sol 45.148.10.240 port 33452 [preauth]
Jan 26 16:58:40 compute-0 goofy_fermat[395840]: {}
Jan 26 16:58:40 compute-0 systemd[1]: libpod-ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f.scope: Deactivated successfully.
Jan 26 16:58:40 compute-0 systemd[1]: libpod-ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f.scope: Consumed 1.487s CPU time.
Jan 26 16:58:40 compute-0 podman[395824]: 2026-01-26 16:58:40.706601244 +0000 UTC m=+1.160217974 container died ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermat, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:58:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-57c6491dc0d17400c918c11883b05e4c3c6409486fcff16a48521b62a3cae3e3-merged.mount: Deactivated successfully.
Jan 26 16:58:40 compute-0 podman[395824]: 2026-01-26 16:58:40.757458891 +0000 UTC m=+1.211075621 container remove ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 16:58:40 compute-0 systemd[1]: libpod-conmon-ca8a4b4ff2f157a6c143048adf9c2246b4a280ce4cf600711a7ce657300de73f.scope: Deactivated successfully.
Jan 26 16:58:40 compute-0 sudo[395709]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:58:40 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:58:40 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:40 compute-0 sudo[395958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:58:40 compute-0 sudo[395958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:58:40 compute-0 sudo[395958]: pam_unix(sudo:session): session closed for user root
Jan 26 16:58:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:58:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3928806858' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:58:41 compute-0 nova_compute[239965]: 2026-01-26 16:58:41.021 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.766s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:58:41 compute-0 nova_compute[239965]: 2026-01-26 16:58:41.028 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:58:41 compute-0 nova_compute[239965]: 2026-01-26 16:58:41.068 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:58:41 compute-0 nova_compute[239965]: 2026-01-26 16:58:41.070 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:58:41 compute-0 nova_compute[239965]: 2026-01-26 16:58:41.071 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.422s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:58:41 compute-0 nova_compute[239965]: 2026-01-26 16:58:41.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:43 compute-0 podman[395985]: 2026-01-26 16:58:43.412153737 +0000 UTC m=+0.098271699 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 16:58:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:58:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3928806858' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:58:44 compute-0 ceph-mon[75140]: pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:44 compute-0 ceph-mon[75140]: pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:45 compute-0 nova_compute[239965]: 2026-01-26 16:58:45.165 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:46 compute-0 nova_compute[239965]: 2026-01-26 16:58:46.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:47 compute-0 ceph-mon[75140]: pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:58:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2368791153' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:58:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:58:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2368791153' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:58:49 compute-0 ceph-mon[75140]: pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2368791153' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:58:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2368791153' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:58:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:50 compute-0 nova_compute[239965]: 2026-01-26 16:58:50.199 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:50 compute-0 ceph-mon[75140]: pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:51 compute-0 nova_compute[239965]: 2026-01-26 16:58:51.084 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:52 compute-0 ceph-mon[75140]: pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:54 compute-0 ceph-mon[75140]: pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:55 compute-0 nova_compute[239965]: 2026-01-26 16:58:55.203 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:56 compute-0 nova_compute[239965]: 2026-01-26 16:58:56.087 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:58:56 compute-0 ceph-mon[75140]: pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:58 compute-0 ceph-mon[75140]: pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:58:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:58:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:58:59.281 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:58:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:58:59.281 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:58:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:58:59.281 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:58:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:00 compute-0 nova_compute[239965]: 2026-01-26 16:59:00.205 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:59:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:59:00 compute-0 ceph-mon[75140]: pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:01 compute-0 nova_compute[239965]: 2026-01-26 16:59:01.089 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:02 compute-0 ceph-mon[75140]: pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:04 compute-0 ceph-mon[75140]: pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:05 compute-0 nova_compute[239965]: 2026-01-26 16:59:05.207 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:06 compute-0 nova_compute[239965]: 2026-01-26 16:59:06.091 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:07 compute-0 ceph-mon[75140]: pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:08 compute-0 ceph-mon[75140]: pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:10 compute-0 nova_compute[239965]: 2026-01-26 16:59:10.292 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:10 compute-0 podman[396011]: 2026-01-26 16:59:10.392270588 +0000 UTC m=+0.072725133 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 16:59:10 compute-0 ceph-mon[75140]: pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:11 compute-0 nova_compute[239965]: 2026-01-26 16:59:11.093 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:12 compute-0 ceph-mon[75140]: pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:14 compute-0 podman[396031]: 2026-01-26 16:59:14.396806905 +0000 UTC m=+0.082863441 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 16:59:14 compute-0 ceph-mon[75140]: pgmap v3284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:15 compute-0 nova_compute[239965]: 2026-01-26 16:59:15.294 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:16 compute-0 nova_compute[239965]: 2026-01-26 16:59:16.095 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:17 compute-0 ceph-mon[75140]: pgmap v3285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:19 compute-0 ceph-mon[75140]: pgmap v3286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:20 compute-0 nova_compute[239965]: 2026-01-26 16:59:20.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:21 compute-0 ceph-mon[75140]: pgmap v3287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:21 compute-0 nova_compute[239965]: 2026-01-26 16:59:21.098 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:23 compute-0 ceph-mon[75140]: pgmap v3288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:25 compute-0 ceph-mon[75140]: pgmap v3289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:25 compute-0 nova_compute[239965]: 2026-01-26 16:59:25.300 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:26 compute-0 nova_compute[239965]: 2026-01-26 16:59:26.098 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:27 compute-0 ceph-mon[75140]: pgmap v3290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:27 compute-0 nova_compute[239965]: 2026-01-26 16:59:27.071 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:27 compute-0 nova_compute[239965]: 2026-01-26 16:59:27.072 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_16:59:28
Jan 26 16:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 16:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 16:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', '.mgr', 'vms', 'images', 'cephfs.cephfs.meta', 'backups']
Jan 26 16:59:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 16:59:29 compute-0 ceph-mon[75140]: pgmap v3291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:29 compute-0 nova_compute[239965]: 2026-01-26 16:59:29.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:30 compute-0 nova_compute[239965]: 2026-01-26 16:59:30.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 16:59:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 16:59:30 compute-0 nova_compute[239965]: 2026-01-26 16:59:30.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:31 compute-0 ceph-mon[75140]: pgmap v3292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 16:59:31 compute-0 nova_compute[239965]: 2026-01-26 16:59:31.100 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:33 compute-0 ceph-mon[75140]: pgmap v3293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:34 compute-0 nova_compute[239965]: 2026-01-26 16:59:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:35 compute-0 ceph-mon[75140]: pgmap v3294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:35 compute-0 nova_compute[239965]: 2026-01-26 16:59:35.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:35 compute-0 nova_compute[239965]: 2026-01-26 16:59:35.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:35 compute-0 nova_compute[239965]: 2026-01-26 16:59:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:35 compute-0 nova_compute[239965]: 2026-01-26 16:59:35.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 16:59:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:36 compute-0 nova_compute[239965]: 2026-01-26 16:59:36.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:37 compute-0 ceph-mon[75140]: pgmap v3295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:38 compute-0 ceph-mon[75140]: pgmap v3296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.534 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.534 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.572 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.573 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.573 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.573 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 16:59:38 compute-0 nova_compute[239965]: 2026-01-26 16:59:38.573 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:59:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136201513' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.138 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.305 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.306 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3590MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.307 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.307 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:59:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3136201513' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.379 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.379 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.605 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.674 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.675 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.694 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.723 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 16:59:39 compute-0 nova_compute[239965]: 2026-01-26 16:59:39.739 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 16:59:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 16:59:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511094899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:59:40 compute-0 nova_compute[239965]: 2026-01-26 16:59:40.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:40 compute-0 nova_compute[239965]: 2026-01-26 16:59:40.312 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 16:59:40 compute-0 nova_compute[239965]: 2026-01-26 16:59:40.319 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 16:59:40 compute-0 nova_compute[239965]: 2026-01-26 16:59:40.340 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 16:59:40 compute-0 nova_compute[239965]: 2026-01-26 16:59:40.341 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 16:59:40 compute-0 nova_compute[239965]: 2026-01-26 16:59:40.342 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:59:40 compute-0 ceph-mon[75140]: pgmap v3297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/511094899' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 16:59:40 compute-0 sudo[396101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:59:40 compute-0 sudo[396101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:59:40 compute-0 sudo[396101]: pam_unix(sudo:session): session closed for user root
Jan 26 16:59:41 compute-0 sudo[396132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 16:59:41 compute-0 sudo[396132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:59:41 compute-0 nova_compute[239965]: 2026-01-26 16:59:41.287 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:41 compute-0 podman[396125]: 2026-01-26 16:59:41.337891151 +0000 UTC m=+0.329819193 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 16:59:41 compute-0 sudo[396132]: pam_unix(sudo:session): session closed for user root
Jan 26 16:59:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:59:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 16:59:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 16:59:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:59:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 16:59:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 16:59:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 16:59:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:59:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 16:59:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:41 compute-0 sudo[396201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:59:41 compute-0 sudo[396201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:59:41 compute-0 sudo[396201]: pam_unix(sudo:session): session closed for user root
Jan 26 16:59:41 compute-0 sudo[396226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 16:59:41 compute-0 sudo[396226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:59:42 compute-0 podman[396262]: 2026-01-26 16:59:42.144412646 +0000 UTC m=+0.041454916 container create 67ca7701592b2a54f5a1e2debc685f63334d3a229aa716d930964e7965987f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:59:42 compute-0 systemd[1]: Started libpod-conmon-67ca7701592b2a54f5a1e2debc685f63334d3a229aa716d930964e7965987f6a.scope.
Jan 26 16:59:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:59:42 compute-0 podman[396262]: 2026-01-26 16:59:42.12864079 +0000 UTC m=+0.025683090 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:59:42 compute-0 podman[396262]: 2026-01-26 16:59:42.233379317 +0000 UTC m=+0.130421637 container init 67ca7701592b2a54f5a1e2debc685f63334d3a229aa716d930964e7965987f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 16:59:42 compute-0 podman[396262]: 2026-01-26 16:59:42.240484171 +0000 UTC m=+0.137526451 container start 67ca7701592b2a54f5a1e2debc685f63334d3a229aa716d930964e7965987f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hopper, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True)
Jan 26 16:59:42 compute-0 podman[396262]: 2026-01-26 16:59:42.244105599 +0000 UTC m=+0.141147889 container attach 67ca7701592b2a54f5a1e2debc685f63334d3a229aa716d930964e7965987f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hopper, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:59:42 compute-0 busy_hopper[396278]: 167 167
Jan 26 16:59:42 compute-0 systemd[1]: libpod-67ca7701592b2a54f5a1e2debc685f63334d3a229aa716d930964e7965987f6a.scope: Deactivated successfully.
Jan 26 16:59:42 compute-0 podman[396262]: 2026-01-26 16:59:42.24656531 +0000 UTC m=+0.143607590 container died 67ca7701592b2a54f5a1e2debc685f63334d3a229aa716d930964e7965987f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:59:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5052636d371cc3eda6712a074793efb07d28c18733bdab3952076bbde13ab4d-merged.mount: Deactivated successfully.
Jan 26 16:59:42 compute-0 podman[396262]: 2026-01-26 16:59:42.285952955 +0000 UTC m=+0.182995245 container remove 67ca7701592b2a54f5a1e2debc685f63334d3a229aa716d930964e7965987f6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 16:59:42 compute-0 systemd[1]: libpod-conmon-67ca7701592b2a54f5a1e2debc685f63334d3a229aa716d930964e7965987f6a.scope: Deactivated successfully.
Jan 26 16:59:42 compute-0 podman[396300]: 2026-01-26 16:59:42.440405641 +0000 UTC m=+0.039920900 container create 6076d0ea29d651d033a5f474c7582f1d71cfa9cb43bcec9bda204148b23eb694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_vaughan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 16:59:42 compute-0 systemd[1]: Started libpod-conmon-6076d0ea29d651d033a5f474c7582f1d71cfa9cb43bcec9bda204148b23eb694.scope.
Jan 26 16:59:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fec3012ab764fc1d6e7212dad3106315433e45f84e7193af2031eeaaeba92b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fec3012ab764fc1d6e7212dad3106315433e45f84e7193af2031eeaaeba92b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fec3012ab764fc1d6e7212dad3106315433e45f84e7193af2031eeaaeba92b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fec3012ab764fc1d6e7212dad3106315433e45f84e7193af2031eeaaeba92b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3fec3012ab764fc1d6e7212dad3106315433e45f84e7193af2031eeaaeba92b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:42 compute-0 podman[396300]: 2026-01-26 16:59:42.501261222 +0000 UTC m=+0.100776481 container init 6076d0ea29d651d033a5f474c7582f1d71cfa9cb43bcec9bda204148b23eb694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 16:59:42 compute-0 podman[396300]: 2026-01-26 16:59:42.516102506 +0000 UTC m=+0.115617775 container start 6076d0ea29d651d033a5f474c7582f1d71cfa9cb43bcec9bda204148b23eb694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 16:59:42 compute-0 podman[396300]: 2026-01-26 16:59:42.519908318 +0000 UTC m=+0.119423587 container attach 6076d0ea29d651d033a5f474c7582f1d71cfa9cb43bcec9bda204148b23eb694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:59:42 compute-0 podman[396300]: 2026-01-26 16:59:42.42528783 +0000 UTC m=+0.024803119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:59:42 compute-0 ceph-mon[75140]: pgmap v3298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:42 compute-0 epic_vaughan[396316]: --> passed data devices: 0 physical, 3 LVM
Jan 26 16:59:42 compute-0 epic_vaughan[396316]: --> All data devices are unavailable
Jan 26 16:59:42 compute-0 systemd[1]: libpod-6076d0ea29d651d033a5f474c7582f1d71cfa9cb43bcec9bda204148b23eb694.scope: Deactivated successfully.
Jan 26 16:59:42 compute-0 podman[396300]: 2026-01-26 16:59:42.994727374 +0000 UTC m=+0.594242723 container died 6076d0ea29d651d033a5f474c7582f1d71cfa9cb43bcec9bda204148b23eb694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_vaughan, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 16:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3fec3012ab764fc1d6e7212dad3106315433e45f84e7193af2031eeaaeba92b-merged.mount: Deactivated successfully.
Jan 26 16:59:43 compute-0 podman[396300]: 2026-01-26 16:59:43.045186761 +0000 UTC m=+0.644702030 container remove 6076d0ea29d651d033a5f474c7582f1d71cfa9cb43bcec9bda204148b23eb694 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_vaughan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:59:43 compute-0 systemd[1]: libpod-conmon-6076d0ea29d651d033a5f474c7582f1d71cfa9cb43bcec9bda204148b23eb694.scope: Deactivated successfully.
Jan 26 16:59:43 compute-0 sudo[396226]: pam_unix(sudo:session): session closed for user root
Jan 26 16:59:43 compute-0 sudo[396349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:59:43 compute-0 sudo[396349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:59:43 compute-0 sudo[396349]: pam_unix(sudo:session): session closed for user root
Jan 26 16:59:43 compute-0 sudo[396374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 16:59:43 compute-0 sudo[396374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:59:43 compute-0 podman[396412]: 2026-01-26 16:59:43.487906591 +0000 UTC m=+0.052428566 container create 6b5e24b93bb6f32eaea0e2287fc244c53645f15f693aed602b83ea7920ff6889 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 16:59:43 compute-0 systemd[1]: Started libpod-conmon-6b5e24b93bb6f32eaea0e2287fc244c53645f15f693aed602b83ea7920ff6889.scope.
Jan 26 16:59:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:59:43 compute-0 podman[396412]: 2026-01-26 16:59:43.469825198 +0000 UTC m=+0.034347193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:59:43 compute-0 podman[396412]: 2026-01-26 16:59:43.573025986 +0000 UTC m=+0.137547971 container init 6b5e24b93bb6f32eaea0e2287fc244c53645f15f693aed602b83ea7920ff6889 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_torvalds, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:59:43 compute-0 podman[396412]: 2026-01-26 16:59:43.579107576 +0000 UTC m=+0.143629551 container start 6b5e24b93bb6f32eaea0e2287fc244c53645f15f693aed602b83ea7920ff6889 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_torvalds, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 16:59:43 compute-0 podman[396412]: 2026-01-26 16:59:43.582852757 +0000 UTC m=+0.147374752 container attach 6b5e24b93bb6f32eaea0e2287fc244c53645f15f693aed602b83ea7920ff6889 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:59:43 compute-0 happy_torvalds[396428]: 167 167
Jan 26 16:59:43 compute-0 systemd[1]: libpod-6b5e24b93bb6f32eaea0e2287fc244c53645f15f693aed602b83ea7920ff6889.scope: Deactivated successfully.
Jan 26 16:59:43 compute-0 podman[396412]: 2026-01-26 16:59:43.58377727 +0000 UTC m=+0.148299235 container died 6b5e24b93bb6f32eaea0e2287fc244c53645f15f693aed602b83ea7920ff6889 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_torvalds, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 16:59:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-a7c0cd294b6af46513b9d7a336897731e348f9a5a36f14c0b4a632b700751907-merged.mount: Deactivated successfully.
Jan 26 16:59:43 compute-0 podman[396412]: 2026-01-26 16:59:43.622795456 +0000 UTC m=+0.187317431 container remove 6b5e24b93bb6f32eaea0e2287fc244c53645f15f693aed602b83ea7920ff6889 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:59:43 compute-0 systemd[1]: libpod-conmon-6b5e24b93bb6f32eaea0e2287fc244c53645f15f693aed602b83ea7920ff6889.scope: Deactivated successfully.
Jan 26 16:59:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:43 compute-0 podman[396452]: 2026-01-26 16:59:43.820721056 +0000 UTC m=+0.064546982 container create 0f2f4748693a4123ea541cf65ce2e56e97125c9a88fd59ed187de1b00c4fd31a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kirch, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:59:43 compute-0 systemd[1]: Started libpod-conmon-0f2f4748693a4123ea541cf65ce2e56e97125c9a88fd59ed187de1b00c4fd31a.scope.
Jan 26 16:59:43 compute-0 podman[396452]: 2026-01-26 16:59:43.787827391 +0000 UTC m=+0.031653387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:59:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3b367732102c844a5d26fe2421635b394b061dd716d9e5ee2e3f5d1ba82088/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3b367732102c844a5d26fe2421635b394b061dd716d9e5ee2e3f5d1ba82088/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3b367732102c844a5d26fe2421635b394b061dd716d9e5ee2e3f5d1ba82088/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf3b367732102c844a5d26fe2421635b394b061dd716d9e5ee2e3f5d1ba82088/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:43 compute-0 podman[396452]: 2026-01-26 16:59:43.90938757 +0000 UTC m=+0.153213466 container init 0f2f4748693a4123ea541cf65ce2e56e97125c9a88fd59ed187de1b00c4fd31a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 16:59:43 compute-0 podman[396452]: 2026-01-26 16:59:43.916532735 +0000 UTC m=+0.160358631 container start 0f2f4748693a4123ea541cf65ce2e56e97125c9a88fd59ed187de1b00c4fd31a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kirch, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 16:59:43 compute-0 podman[396452]: 2026-01-26 16:59:43.919414086 +0000 UTC m=+0.163239992 container attach 0f2f4748693a4123ea541cf65ce2e56e97125c9a88fd59ed187de1b00c4fd31a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 16:59:44 compute-0 nervous_kirch[396469]: {
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:     "0": [
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:         {
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "devices": [
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "/dev/loop3"
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             ],
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_name": "ceph_lv0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_size": "21470642176",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "name": "ceph_lv0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "tags": {
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.cluster_name": "ceph",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.crush_device_class": "",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.encrypted": "0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.objectstore": "bluestore",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.osd_id": "0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.type": "block",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.vdo": "0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.with_tpm": "0"
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             },
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "type": "block",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "vg_name": "ceph_vg0"
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:         }
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:     ],
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:     "1": [
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:         {
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "devices": [
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "/dev/loop4"
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             ],
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_name": "ceph_lv1",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_size": "21470642176",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "name": "ceph_lv1",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "tags": {
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.cluster_name": "ceph",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.crush_device_class": "",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.encrypted": "0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.objectstore": "bluestore",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.osd_id": "1",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.type": "block",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.vdo": "0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.with_tpm": "0"
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             },
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "type": "block",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "vg_name": "ceph_vg1"
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:         }
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:     ],
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:     "2": [
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:         {
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "devices": [
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "/dev/loop5"
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             ],
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_name": "ceph_lv2",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_size": "21470642176",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "name": "ceph_lv2",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "tags": {
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.cluster_name": "ceph",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.crush_device_class": "",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.encrypted": "0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.objectstore": "bluestore",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.osd_id": "2",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.type": "block",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.vdo": "0",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:                 "ceph.with_tpm": "0"
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             },
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "type": "block",
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:             "vg_name": "ceph_vg2"
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:         }
Jan 26 16:59:44 compute-0 nervous_kirch[396469]:     ]
Jan 26 16:59:44 compute-0 nervous_kirch[396469]: }
Jan 26 16:59:44 compute-0 systemd[1]: libpod-0f2f4748693a4123ea541cf65ce2e56e97125c9a88fd59ed187de1b00c4fd31a.scope: Deactivated successfully.
Jan 26 16:59:44 compute-0 podman[396452]: 2026-01-26 16:59:44.249578307 +0000 UTC m=+0.493404213 container died 0f2f4748693a4123ea541cf65ce2e56e97125c9a88fd59ed187de1b00c4fd31a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kirch, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 16:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf3b367732102c844a5d26fe2421635b394b061dd716d9e5ee2e3f5d1ba82088-merged.mount: Deactivated successfully.
Jan 26 16:59:44 compute-0 podman[396452]: 2026-01-26 16:59:44.293571445 +0000 UTC m=+0.537397351 container remove 0f2f4748693a4123ea541cf65ce2e56e97125c9a88fd59ed187de1b00c4fd31a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_kirch, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:59:44 compute-0 systemd[1]: libpod-conmon-0f2f4748693a4123ea541cf65ce2e56e97125c9a88fd59ed187de1b00c4fd31a.scope: Deactivated successfully.
Jan 26 16:59:44 compute-0 sudo[396374]: pam_unix(sudo:session): session closed for user root
Jan 26 16:59:44 compute-0 sudo[396490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 16:59:44 compute-0 sudo[396490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:59:44 compute-0 sudo[396490]: pam_unix(sudo:session): session closed for user root
Jan 26 16:59:44 compute-0 auditd[703]: Audit daemon rotating log files
Jan 26 16:59:44 compute-0 sudo[396521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 16:59:44 compute-0 sudo[396521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:59:44 compute-0 podman[396514]: 2026-01-26 16:59:44.545101329 +0000 UTC m=+0.094741393 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 26 16:59:44 compute-0 podman[396577]: 2026-01-26 16:59:44.808793161 +0000 UTC m=+0.073363539 container create 37483c919ff6f11dad1de47f6ca9930c0c181ad7bd50cf3df98ad5f4c91da15b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 16:59:44 compute-0 systemd[1]: Started libpod-conmon-37483c919ff6f11dad1de47f6ca9930c0c181ad7bd50cf3df98ad5f4c91da15b.scope.
Jan 26 16:59:44 compute-0 ceph-mon[75140]: pgmap v3299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:44 compute-0 podman[396577]: 2026-01-26 16:59:44.757317759 +0000 UTC m=+0.021888167 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:59:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:59:44 compute-0 podman[396577]: 2026-01-26 16:59:44.876800708 +0000 UTC m=+0.141371106 container init 37483c919ff6f11dad1de47f6ca9930c0c181ad7bd50cf3df98ad5f4c91da15b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:59:44 compute-0 podman[396577]: 2026-01-26 16:59:44.885415899 +0000 UTC m=+0.149986277 container start 37483c919ff6f11dad1de47f6ca9930c0c181ad7bd50cf3df98ad5f4c91da15b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:59:44 compute-0 podman[396577]: 2026-01-26 16:59:44.889136589 +0000 UTC m=+0.153706987 container attach 37483c919ff6f11dad1de47f6ca9930c0c181ad7bd50cf3df98ad5f4c91da15b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:59:44 compute-0 festive_cannon[396593]: 167 167
Jan 26 16:59:44 compute-0 systemd[1]: libpod-37483c919ff6f11dad1de47f6ca9930c0c181ad7bd50cf3df98ad5f4c91da15b.scope: Deactivated successfully.
Jan 26 16:59:44 compute-0 podman[396577]: 2026-01-26 16:59:44.89074524 +0000 UTC m=+0.155315618 container died 37483c919ff6f11dad1de47f6ca9930c0c181ad7bd50cf3df98ad5f4c91da15b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:59:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ebbe2f313e9faa10f04ce40bdc1d7d2c51d0897685452264efeb4d80890f27e9-merged.mount: Deactivated successfully.
Jan 26 16:59:44 compute-0 podman[396577]: 2026-01-26 16:59:44.932461981 +0000 UTC m=+0.197032379 container remove 37483c919ff6f11dad1de47f6ca9930c0c181ad7bd50cf3df98ad5f4c91da15b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 16:59:44 compute-0 systemd[1]: libpod-conmon-37483c919ff6f11dad1de47f6ca9930c0c181ad7bd50cf3df98ad5f4c91da15b.scope: Deactivated successfully.
Jan 26 16:59:45 compute-0 podman[396616]: 2026-01-26 16:59:45.099846224 +0000 UTC m=+0.042485012 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 16:59:45 compute-0 nova_compute[239965]: 2026-01-26 16:59:45.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:45 compute-0 podman[396616]: 2026-01-26 16:59:45.36328498 +0000 UTC m=+0.305923728 container create f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 16:59:45 compute-0 systemd[1]: Started libpod-conmon-f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4.scope.
Jan 26 16:59:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 16:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa93b96151a5d0301f54e832b762d4d1d59958bab9d5b76c70e32e388df35ce7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa93b96151a5d0301f54e832b762d4d1d59958bab9d5b76c70e32e388df35ce7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa93b96151a5d0301f54e832b762d4d1d59958bab9d5b76c70e32e388df35ce7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa93b96151a5d0301f54e832b762d4d1d59958bab9d5b76c70e32e388df35ce7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 16:59:45 compute-0 podman[396616]: 2026-01-26 16:59:45.530941229 +0000 UTC m=+0.473579947 container init f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 16:59:45 compute-0 podman[396616]: 2026-01-26 16:59:45.538923214 +0000 UTC m=+0.481561912 container start f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_albattani, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 16:59:45 compute-0 podman[396616]: 2026-01-26 16:59:45.547617037 +0000 UTC m=+0.490255765 container attach f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_albattani, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 16:59:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:46 compute-0 lvm[396713]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 16:59:46 compute-0 lvm[396713]: VG ceph_vg1 finished
Jan 26 16:59:46 compute-0 lvm[396712]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 16:59:46 compute-0 lvm[396712]: VG ceph_vg0 finished
Jan 26 16:59:46 compute-0 lvm[396715]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 16:59:46 compute-0 lvm[396715]: VG ceph_vg2 finished
Jan 26 16:59:46 compute-0 nova_compute[239965]: 2026-01-26 16:59:46.287 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:46 compute-0 distracted_albattani[396634]: {}
Jan 26 16:59:46 compute-0 systemd[1]: libpod-f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4.scope: Deactivated successfully.
Jan 26 16:59:46 compute-0 systemd[1]: libpod-f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4.scope: Consumed 1.343s CPU time.
Jan 26 16:59:46 compute-0 podman[396616]: 2026-01-26 16:59:46.383702477 +0000 UTC m=+1.326341275 container died f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_albattani, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 16:59:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fa93b96151a5d0301f54e832b762d4d1d59958bab9d5b76c70e32e388df35ce7-merged.mount: Deactivated successfully.
Jan 26 16:59:46 compute-0 podman[396616]: 2026-01-26 16:59:46.437389382 +0000 UTC m=+1.380028110 container remove f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 16:59:46 compute-0 systemd[1]: libpod-conmon-f2d82f73e41c1bdd9a1404c4c73fd4b4f391abf659223ae56352e52b2d7cb3e4.scope: Deactivated successfully.
Jan 26 16:59:46 compute-0 sudo[396521]: pam_unix(sudo:session): session closed for user root
Jan 26 16:59:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 16:59:46 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:59:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 16:59:46 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:59:46 compute-0 sudo[396731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 16:59:46 compute-0 sudo[396731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 16:59:46 compute-0 sudo[396731]: pam_unix(sudo:session): session closed for user root
Jan 26 16:59:46 compute-0 ceph-mon[75140]: pgmap v3300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:46 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:59:46 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 16:59:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 16:59:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686503229' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:59:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 16:59:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/686503229' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:59:48 compute-0 ceph-mon[75140]: pgmap v3301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/686503229' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 16:59:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/686503229' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 16:59:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:50 compute-0 nova_compute[239965]: 2026-01-26 16:59:50.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:51 compute-0 ceph-mon[75140]: pgmap v3302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:51 compute-0 nova_compute[239965]: 2026-01-26 16:59:51.332 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:52 compute-0 ceph-mon[75140]: pgmap v3303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:55 compute-0 ceph-mon[75140]: pgmap v3304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:55 compute-0 nova_compute[239965]: 2026-01-26 16:59:55.318 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:56 compute-0 nova_compute[239965]: 2026-01-26 16:59:56.334 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 16:59:57 compute-0 ceph-mon[75140]: pgmap v3305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 16:59:59 compute-0 ceph-mon[75140]: pgmap v3306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 16:59:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:59:59.282 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 16:59:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:59:59.282 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 16:59:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 16:59:59.282 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 16:59:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:00 compute-0 nova_compute[239965]: 2026-01-26 17:00:00.320 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:00:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:00:01 compute-0 ceph-mon[75140]: pgmap v3307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.335 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.511 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.511 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.512 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.512 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.512 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.513 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.556 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.563 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.563 239969 WARNING nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.563 239969 WARNING nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.564 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Removable base files: /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.564 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.564 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.564 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.564 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.564 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 26 17:00:01 compute-0 nova_compute[239965]: 2026-01-26 17:00:01.565 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 26 17:00:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:03 compute-0 ceph-mon[75140]: pgmap v3308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:05 compute-0 ceph-mon[75140]: pgmap v3309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:05 compute-0 nova_compute[239965]: 2026-01-26 17:00:05.323 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:00:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Cumulative writes: 14K writes, 67K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s
                                           Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1310 writes, 5702 keys, 1310 commit groups, 1.0 writes per commit group, ingest: 8.95 MB, 0.01 MB/s
                                           Interval WAL: 1310 writes, 1310 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     68.8      1.19              0.26        48    0.025       0      0       0.0       0.0
                                             L6      1/0    9.95 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   5.1    111.3     95.1      4.43              1.21        47    0.094    318K    25K       0.0       0.0
                                            Sum      1/0    9.95 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   6.1     87.8     89.5      5.63              1.47        95    0.059    318K    25K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.1    119.7    120.3      0.39              0.13         8    0.048     35K   2027       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    111.3     95.1      4.43              1.21        47    0.094    318K    25K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     69.1      1.19              0.26        47    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.080, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.49 GB write, 0.08 MB/s write, 0.48 GB read, 0.08 MB/s read, 5.6 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 55.36 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000361 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3450,53.04 MB,17.4469%) FilterBlock(96,888.48 KB,0.285415%) IndexBlock(96,1.45 MB,0.478544%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 17:00:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:06 compute-0 nova_compute[239965]: 2026-01-26 17:00:06.505 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:07 compute-0 ceph-mon[75140]: pgmap v3310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:09 compute-0 ceph-mon[75140]: pgmap v3311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:10 compute-0 nova_compute[239965]: 2026-01-26 17:00:10.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:11 compute-0 ceph-mon[75140]: pgmap v3312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:11 compute-0 nova_compute[239965]: 2026-01-26 17:00:11.686 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:12 compute-0 podman[396757]: 2026-01-26 17:00:12.380667157 +0000 UTC m=+0.057522700 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:00:13 compute-0 ceph-mon[75140]: pgmap v3313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:15 compute-0 ceph-mon[75140]: pgmap v3314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:15 compute-0 nova_compute[239965]: 2026-01-26 17:00:15.366 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:15 compute-0 podman[396777]: 2026-01-26 17:00:15.436200859 +0000 UTC m=+0.120405692 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 17:00:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:16 compute-0 nova_compute[239965]: 2026-01-26 17:00:16.811 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:17 compute-0 ceph-mon[75140]: pgmap v3315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:19 compute-0 ceph-mon[75140]: pgmap v3316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:20 compute-0 nova_compute[239965]: 2026-01-26 17:00:20.581 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:21 compute-0 ceph-mon[75140]: pgmap v3317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:21 compute-0 nova_compute[239965]: 2026-01-26 17:00:21.814 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:23 compute-0 ceph-mon[75140]: pgmap v3318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:24 compute-0 nova_compute[239965]: 2026-01-26 17:00:24.565 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:25 compute-0 ceph-mon[75140]: pgmap v3319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:25 compute-0 nova_compute[239965]: 2026-01-26 17:00:25.583 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:26 compute-0 ceph-mon[75140]: pgmap v3320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:26 compute-0 nova_compute[239965]: 2026-01-26 17:00:26.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:26 compute-0 nova_compute[239965]: 2026-01-26 17:00:26.815 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:00:28
Jan 26 17:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'volumes', 'images', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'backups', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 26 17:00:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:00:28 compute-0 ceph-mon[75140]: pgmap v3321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:29 compute-0 nova_compute[239965]: 2026-01-26 17:00:29.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:00:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:00:30 compute-0 nova_compute[239965]: 2026-01-26 17:00:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:30 compute-0 nova_compute[239965]: 2026-01-26 17:00:30.586 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:30 compute-0 ceph-mon[75140]: pgmap v3322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:00:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:31 compute-0 nova_compute[239965]: 2026-01-26 17:00:31.859 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:32 compute-0 ceph-mon[75140]: pgmap v3323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:34 compute-0 nova_compute[239965]: 2026-01-26 17:00:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:34 compute-0 ceph-mon[75140]: pgmap v3324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:35 compute-0 nova_compute[239965]: 2026-01-26 17:00:35.633 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:36 compute-0 nova_compute[239965]: 2026-01-26 17:00:36.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:36 compute-0 nova_compute[239965]: 2026-01-26 17:00:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:36 compute-0 nova_compute[239965]: 2026-01-26 17:00:36.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:00:36 compute-0 nova_compute[239965]: 2026-01-26 17:00:36.905 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:36 compute-0 ceph-mon[75140]: pgmap v3325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.533 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.565 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.565 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.566 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.566 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:00:38 compute-0 nova_compute[239965]: 2026-01-26 17:00:38.567 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:38 compute-0 ceph-mon[75140]: pgmap v3326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:00:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4154490677' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.128 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.275 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.276 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3568MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.276 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.276 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.354 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.354 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.373 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:00:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:00:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2551496268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.915 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.924 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.940 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.942 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:00:39 compute-0 nova_compute[239965]: 2026-01-26 17:00:39.942 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.666s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:00:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4154490677' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:00:39 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2551496268' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:00:40 compute-0 nova_compute[239965]: 2026-01-26 17:00:40.635 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:40 compute-0 ceph-mon[75140]: pgmap v3327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:41 compute-0 nova_compute[239965]: 2026-01-26 17:00:41.909 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:42 compute-0 nova_compute[239965]: 2026-01-26 17:00:42.934 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:42 compute-0 ceph-mon[75140]: pgmap v3328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:43 compute-0 podman[396847]: 2026-01-26 17:00:43.389924853 +0000 UTC m=+0.067531266 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:00:43 compute-0 nova_compute[239965]: 2026-01-26 17:00:43.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:44 compute-0 ceph-mon[75140]: pgmap v3329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:45 compute-0 nova_compute[239965]: 2026-01-26 17:00:45.638 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:46 compute-0 podman[396866]: 2026-01-26 17:00:46.409586475 +0000 UTC m=+0.094426255 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:00:46 compute-0 sudo[396892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:00:46 compute-0 sudo[396892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:00:46 compute-0 sudo[396892]: pam_unix(sudo:session): session closed for user root
Jan 26 17:00:46 compute-0 sudo[396917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:00:46 compute-0 sudo[396917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:00:46 compute-0 nova_compute[239965]: 2026-01-26 17:00:46.909 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:47 compute-0 ceph-mon[75140]: pgmap v3330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:47 compute-0 sudo[396917]: pam_unix(sudo:session): session closed for user root
Jan 26 17:00:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:00:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:00:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:00:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:00:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:00:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:00:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:00:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:00:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:00:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:00:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:00:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:00:47 compute-0 sudo[396974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:00:47 compute-0 sudo[396974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:00:47 compute-0 sudo[396974]: pam_unix(sudo:session): session closed for user root
Jan 26 17:00:47 compute-0 sudo[396999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:00:47 compute-0 sudo[396999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:00:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:47 compute-0 podman[397035]: 2026-01-26 17:00:47.91921783 +0000 UTC m=+0.035931411 container create d499b145885c9cefb3c359c6c7af11fb2dc615898cc135bbdf1fc25c0b51f7ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:00:47 compute-0 systemd[1]: Started libpod-conmon-d499b145885c9cefb3c359c6c7af11fb2dc615898cc135bbdf1fc25c0b51f7ca.scope.
Jan 26 17:00:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:00:47 compute-0 podman[397035]: 2026-01-26 17:00:47.901935097 +0000 UTC m=+0.018648708 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:00:48 compute-0 podman[397035]: 2026-01-26 17:00:48.10775705 +0000 UTC m=+0.224470661 container init d499b145885c9cefb3c359c6c7af11fb2dc615898cc135bbdf1fc25c0b51f7ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:00:48 compute-0 podman[397035]: 2026-01-26 17:00:48.116422423 +0000 UTC m=+0.233136004 container start d499b145885c9cefb3c359c6c7af11fb2dc615898cc135bbdf1fc25c0b51f7ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:00:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:00:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:00:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:00:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:00:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:00:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:00:48 compute-0 thirsty_lehmann[397052]: 167 167
Jan 26 17:00:48 compute-0 systemd[1]: libpod-d499b145885c9cefb3c359c6c7af11fb2dc615898cc135bbdf1fc25c0b51f7ca.scope: Deactivated successfully.
Jan 26 17:00:48 compute-0 podman[397035]: 2026-01-26 17:00:48.122662396 +0000 UTC m=+0.239376007 container attach d499b145885c9cefb3c359c6c7af11fb2dc615898cc135bbdf1fc25c0b51f7ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:00:48 compute-0 podman[397035]: 2026-01-26 17:00:48.126002447 +0000 UTC m=+0.242716028 container died d499b145885c9cefb3c359c6c7af11fb2dc615898cc135bbdf1fc25c0b51f7ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:00:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-18227990a493b07916a11ca3efa73f4858d9befafe8a22be749768c49c1cd230-merged.mount: Deactivated successfully.
Jan 26 17:00:48 compute-0 podman[397035]: 2026-01-26 17:00:48.198019993 +0000 UTC m=+0.314733574 container remove d499b145885c9cefb3c359c6c7af11fb2dc615898cc135bbdf1fc25c0b51f7ca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_lehmann, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:00:48 compute-0 systemd[1]: libpod-conmon-d499b145885c9cefb3c359c6c7af11fb2dc615898cc135bbdf1fc25c0b51f7ca.scope: Deactivated successfully.
Jan 26 17:00:48 compute-0 podman[397076]: 2026-01-26 17:00:48.356945697 +0000 UTC m=+0.040248107 container create 8ebfbd5c71d2596c9522e6117ce85c04cb9d4ca49ad690e756c3b0e7d0497fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:00:48 compute-0 systemd[1]: Started libpod-conmon-8ebfbd5c71d2596c9522e6117ce85c04cb9d4ca49ad690e756c3b0e7d0497fbd.scope.
Jan 26 17:00:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d66da8bf852b53263f8129c219fc902097aaaa58643b68d4b3a5576e4887c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d66da8bf852b53263f8129c219fc902097aaaa58643b68d4b3a5576e4887c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d66da8bf852b53263f8129c219fc902097aaaa58643b68d4b3a5576e4887c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d66da8bf852b53263f8129c219fc902097aaaa58643b68d4b3a5576e4887c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66d66da8bf852b53263f8129c219fc902097aaaa58643b68d4b3a5576e4887c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:48 compute-0 podman[397076]: 2026-01-26 17:00:48.340103414 +0000 UTC m=+0.023405844 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:00:48 compute-0 podman[397076]: 2026-01-26 17:00:48.559564813 +0000 UTC m=+0.242867243 container init 8ebfbd5c71d2596c9522e6117ce85c04cb9d4ca49ad690e756c3b0e7d0497fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:00:48 compute-0 podman[397076]: 2026-01-26 17:00:48.566661027 +0000 UTC m=+0.249963427 container start 8ebfbd5c71d2596c9522e6117ce85c04cb9d4ca49ad690e756c3b0e7d0497fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 17:00:48 compute-0 podman[397076]: 2026-01-26 17:00:48.573071833 +0000 UTC m=+0.256374243 container attach 8ebfbd5c71d2596c9522e6117ce85c04cb9d4ca49ad690e756c3b0e7d0497fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:00:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:00:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2204258028' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:00:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:00:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2204258028' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:00:49 compute-0 elated_swartz[397092]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:00:49 compute-0 elated_swartz[397092]: --> All data devices are unavailable
Jan 26 17:00:49 compute-0 systemd[1]: libpod-8ebfbd5c71d2596c9522e6117ce85c04cb9d4ca49ad690e756c3b0e7d0497fbd.scope: Deactivated successfully.
Jan 26 17:00:49 compute-0 podman[397076]: 2026-01-26 17:00:49.042682262 +0000 UTC m=+0.725984692 container died 8ebfbd5c71d2596c9522e6117ce85c04cb9d4ca49ad690e756c3b0e7d0497fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_swartz, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 17:00:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-66d66da8bf852b53263f8129c219fc902097aaaa58643b68d4b3a5576e4887c3-merged.mount: Deactivated successfully.
Jan 26 17:00:49 compute-0 podman[397076]: 2026-01-26 17:00:49.10012325 +0000 UTC m=+0.783425660 container remove 8ebfbd5c71d2596c9522e6117ce85c04cb9d4ca49ad690e756c3b0e7d0497fbd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_swartz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 17:00:49 compute-0 systemd[1]: libpod-conmon-8ebfbd5c71d2596c9522e6117ce85c04cb9d4ca49ad690e756c3b0e7d0497fbd.scope: Deactivated successfully.
Jan 26 17:00:49 compute-0 ceph-mon[75140]: pgmap v3331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2204258028' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:00:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2204258028' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:00:49 compute-0 sudo[396999]: pam_unix(sudo:session): session closed for user root
Jan 26 17:00:49 compute-0 sudo[397125]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:00:49 compute-0 sudo[397125]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:00:49 compute-0 sudo[397125]: pam_unix(sudo:session): session closed for user root
Jan 26 17:00:49 compute-0 sudo[397150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:00:49 compute-0 sudo[397150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:00:49 compute-0 podman[397186]: 2026-01-26 17:00:49.573388308 +0000 UTC m=+0.046508301 container create 59b8d08228a9c52189e7f2d06af3a37be0b1b14db419c0a3ec1a83d18d856517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_panini, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:00:49 compute-0 systemd[1]: Started libpod-conmon-59b8d08228a9c52189e7f2d06af3a37be0b1b14db419c0a3ec1a83d18d856517.scope.
Jan 26 17:00:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:00:49 compute-0 podman[397186]: 2026-01-26 17:00:49.554520575 +0000 UTC m=+0.027640848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:00:49 compute-0 podman[397186]: 2026-01-26 17:00:49.66324048 +0000 UTC m=+0.136360493 container init 59b8d08228a9c52189e7f2d06af3a37be0b1b14db419c0a3ec1a83d18d856517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_panini, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 17:00:49 compute-0 podman[397186]: 2026-01-26 17:00:49.671282267 +0000 UTC m=+0.144402260 container start 59b8d08228a9c52189e7f2d06af3a37be0b1b14db419c0a3ec1a83d18d856517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_panini, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:00:49 compute-0 podman[397186]: 2026-01-26 17:00:49.67631224 +0000 UTC m=+0.149432233 container attach 59b8d08228a9c52189e7f2d06af3a37be0b1b14db419c0a3ec1a83d18d856517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_panini, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 17:00:49 compute-0 awesome_panini[397202]: 167 167
Jan 26 17:00:49 compute-0 systemd[1]: libpod-59b8d08228a9c52189e7f2d06af3a37be0b1b14db419c0a3ec1a83d18d856517.scope: Deactivated successfully.
Jan 26 17:00:49 compute-0 podman[397186]: 2026-01-26 17:00:49.678658188 +0000 UTC m=+0.151778191 container died 59b8d08228a9c52189e7f2d06af3a37be0b1b14db419c0a3ec1a83d18d856517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_panini, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 17:00:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea8b992dd9d2938c845073f0e905cf2938fb23446d9cd0a11b34914e651cf502-merged.mount: Deactivated successfully.
Jan 26 17:00:49 compute-0 podman[397186]: 2026-01-26 17:00:49.868271714 +0000 UTC m=+0.341391697 container remove 59b8d08228a9c52189e7f2d06af3a37be0b1b14db419c0a3ec1a83d18d856517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:00:49 compute-0 systemd[1]: libpod-conmon-59b8d08228a9c52189e7f2d06af3a37be0b1b14db419c0a3ec1a83d18d856517.scope: Deactivated successfully.
Jan 26 17:00:50 compute-0 podman[397226]: 2026-01-26 17:00:50.021809307 +0000 UTC m=+0.029333490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:00:50 compute-0 podman[397226]: 2026-01-26 17:00:50.17742277 +0000 UTC m=+0.184946933 container create 1fae62ef9974e21e9c219c51ec02da93a0d81e5145017f5c9ae78794e1dcd43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lovelace, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 17:00:50 compute-0 systemd[1]: Started libpod-conmon-1fae62ef9974e21e9c219c51ec02da93a0d81e5145017f5c9ae78794e1dcd43a.scope.
Jan 26 17:00:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:00:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526fb173862f287034065f1997965c06f5715c5701fe45341b1ec431ded75b8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526fb173862f287034065f1997965c06f5715c5701fe45341b1ec431ded75b8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526fb173862f287034065f1997965c06f5715c5701fe45341b1ec431ded75b8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/526fb173862f287034065f1997965c06f5715c5701fe45341b1ec431ded75b8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:50 compute-0 podman[397226]: 2026-01-26 17:00:50.440140609 +0000 UTC m=+0.447664772 container init 1fae62ef9974e21e9c219c51ec02da93a0d81e5145017f5c9ae78794e1dcd43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 17:00:50 compute-0 podman[397226]: 2026-01-26 17:00:50.449162519 +0000 UTC m=+0.456686682 container start 1fae62ef9974e21e9c219c51ec02da93a0d81e5145017f5c9ae78794e1dcd43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lovelace, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:00:50 compute-0 podman[397226]: 2026-01-26 17:00:50.452487572 +0000 UTC m=+0.460011735 container attach 1fae62ef9974e21e9c219c51ec02da93a0d81e5145017f5c9ae78794e1dcd43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lovelace, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:00:50 compute-0 nova_compute[239965]: 2026-01-26 17:00:50.894 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:50 compute-0 funny_lovelace[397243]: {
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:     "0": [
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:         {
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "devices": [
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "/dev/loop3"
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             ],
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_name": "ceph_lv0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_size": "21470642176",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "name": "ceph_lv0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "tags": {
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.cluster_name": "ceph",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.crush_device_class": "",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.encrypted": "0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.objectstore": "bluestore",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.osd_id": "0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.type": "block",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.vdo": "0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.with_tpm": "0"
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             },
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "type": "block",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "vg_name": "ceph_vg0"
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:         }
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:     ],
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:     "1": [
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:         {
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "devices": [
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "/dev/loop4"
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             ],
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_name": "ceph_lv1",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_size": "21470642176",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "name": "ceph_lv1",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "tags": {
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.cluster_name": "ceph",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.crush_device_class": "",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.encrypted": "0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.objectstore": "bluestore",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.osd_id": "1",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.type": "block",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.vdo": "0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.with_tpm": "0"
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             },
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "type": "block",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "vg_name": "ceph_vg1"
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:         }
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:     ],
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:     "2": [
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:         {
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "devices": [
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "/dev/loop5"
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             ],
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_name": "ceph_lv2",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_size": "21470642176",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "name": "ceph_lv2",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "tags": {
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.cluster_name": "ceph",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.crush_device_class": "",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.encrypted": "0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.objectstore": "bluestore",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.osd_id": "2",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.type": "block",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.vdo": "0",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:                 "ceph.with_tpm": "0"
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             },
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "type": "block",
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:             "vg_name": "ceph_vg2"
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:         }
Jan 26 17:00:50 compute-0 funny_lovelace[397243]:     ]
Jan 26 17:00:50 compute-0 funny_lovelace[397243]: }
Jan 26 17:00:51 compute-0 systemd[1]: libpod-1fae62ef9974e21e9c219c51ec02da93a0d81e5145017f5c9ae78794e1dcd43a.scope: Deactivated successfully.
Jan 26 17:00:51 compute-0 podman[397252]: 2026-01-26 17:00:51.061827444 +0000 UTC m=+0.023686581 container died 1fae62ef9974e21e9c219c51ec02da93a0d81e5145017f5c9ae78794e1dcd43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:00:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-526fb173862f287034065f1997965c06f5715c5701fe45341b1ec431ded75b8c-merged.mount: Deactivated successfully.
Jan 26 17:00:51 compute-0 podman[397252]: 2026-01-26 17:00:51.115791606 +0000 UTC m=+0.077650743 container remove 1fae62ef9974e21e9c219c51ec02da93a0d81e5145017f5c9ae78794e1dcd43a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_lovelace, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 17:00:51 compute-0 systemd[1]: libpod-conmon-1fae62ef9974e21e9c219c51ec02da93a0d81e5145017f5c9ae78794e1dcd43a.scope: Deactivated successfully.
Jan 26 17:00:51 compute-0 sudo[397150]: pam_unix(sudo:session): session closed for user root
Jan 26 17:00:51 compute-0 ceph-mon[75140]: pgmap v3332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:51 compute-0 sudo[397267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:00:51 compute-0 sudo[397267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:00:51 compute-0 sudo[397267]: pam_unix(sudo:session): session closed for user root
Jan 26 17:00:51 compute-0 sudo[397292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:00:51 compute-0 sudo[397292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:00:51 compute-0 podman[397329]: 2026-01-26 17:00:51.63622352 +0000 UTC m=+0.077553161 container create 1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_saha, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:00:51 compute-0 systemd[1]: Started libpod-conmon-1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6.scope.
Jan 26 17:00:51 compute-0 podman[397329]: 2026-01-26 17:00:51.605270602 +0000 UTC m=+0.046600223 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:00:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:00:51 compute-0 podman[397329]: 2026-01-26 17:00:51.714684553 +0000 UTC m=+0.156014114 container init 1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_saha, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 17:00:51 compute-0 podman[397329]: 2026-01-26 17:00:51.722765141 +0000 UTC m=+0.164094672 container start 1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_saha, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:00:51 compute-0 podman[397329]: 2026-01-26 17:00:51.728256216 +0000 UTC m=+0.169585757 container attach 1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:00:51 compute-0 priceless_saha[397346]: 167 167
Jan 26 17:00:51 compute-0 systemd[1]: libpod-1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6.scope: Deactivated successfully.
Jan 26 17:00:51 compute-0 conmon[397346]: conmon 1f8900f7dfd5dbf5ffca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6.scope/container/memory.events
Jan 26 17:00:51 compute-0 podman[397329]: 2026-01-26 17:00:51.731843003 +0000 UTC m=+0.173172584 container died 1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_saha, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 17:00:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-b804b3676b0a5e0b6773a15b41fc0f9534ba45da488ec4ab2973ca46b64fdadb-merged.mount: Deactivated successfully.
Jan 26 17:00:51 compute-0 podman[397329]: 2026-01-26 17:00:51.771072495 +0000 UTC m=+0.212402016 container remove 1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 26 17:00:51 compute-0 systemd[1]: libpod-conmon-1f8900f7dfd5dbf5ffca4c77a408387bd9cdf63b01d5d5105fbb7f3be7f7c3c6.scope: Deactivated successfully.
Jan 26 17:00:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:51 compute-0 nova_compute[239965]: 2026-01-26 17:00:51.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:51 compute-0 podman[397370]: 2026-01-26 17:00:51.947867828 +0000 UTC m=+0.057266394 container create 54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_heisenberg, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:00:52 compute-0 podman[397370]: 2026-01-26 17:00:51.916302424 +0000 UTC m=+0.025701030 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:00:52 compute-0 systemd[1]: Started libpod-conmon-54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2.scope.
Jan 26 17:00:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57ae67e92f53a8f5e62f3ec18c9844b39cb907bc8b811d38f7046f5c02f4696/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57ae67e92f53a8f5e62f3ec18c9844b39cb907bc8b811d38f7046f5c02f4696/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57ae67e92f53a8f5e62f3ec18c9844b39cb907bc8b811d38f7046f5c02f4696/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f57ae67e92f53a8f5e62f3ec18c9844b39cb907bc8b811d38f7046f5c02f4696/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:00:52 compute-0 podman[397370]: 2026-01-26 17:00:52.061177095 +0000 UTC m=+0.170575671 container init 54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_heisenberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:00:52 compute-0 podman[397370]: 2026-01-26 17:00:52.069087148 +0000 UTC m=+0.178485724 container start 54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_heisenberg, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:00:52 compute-0 podman[397370]: 2026-01-26 17:00:52.072658796 +0000 UTC m=+0.182057372 container attach 54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:00:52 compute-0 lvm[397466]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:00:52 compute-0 lvm[397465]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:00:52 compute-0 lvm[397466]: VG ceph_vg1 finished
Jan 26 17:00:52 compute-0 lvm[397465]: VG ceph_vg0 finished
Jan 26 17:00:52 compute-0 lvm[397468]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:00:52 compute-0 lvm[397468]: VG ceph_vg2 finished
Jan 26 17:00:52 compute-0 laughing_heisenberg[397387]: {}
Jan 26 17:00:52 compute-0 systemd[1]: libpod-54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2.scope: Deactivated successfully.
Jan 26 17:00:52 compute-0 podman[397370]: 2026-01-26 17:00:52.893954853 +0000 UTC m=+1.003353429 container died 54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:00:52 compute-0 systemd[1]: libpod-54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2.scope: Consumed 1.314s CPU time.
Jan 26 17:00:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-f57ae67e92f53a8f5e62f3ec18c9844b39cb907bc8b811d38f7046f5c02f4696-merged.mount: Deactivated successfully.
Jan 26 17:00:52 compute-0 podman[397370]: 2026-01-26 17:00:52.949772231 +0000 UTC m=+1.059170797 container remove 54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:00:52 compute-0 systemd[1]: libpod-conmon-54bb8927a010d077f5dc520fea5d2c08851ea2ce7781ebe1ecf4ede60aae77a2.scope: Deactivated successfully.
Jan 26 17:00:52 compute-0 sudo[397292]: pam_unix(sudo:session): session closed for user root
Jan 26 17:00:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:00:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:00:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:00:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:00:53 compute-0 sudo[397483]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:00:53 compute-0 sudo[397483]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:00:53 compute-0 sudo[397483]: pam_unix(sudo:session): session closed for user root
Jan 26 17:00:53 compute-0 ceph-mon[75140]: pgmap v3333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:00:53 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:00:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:55 compute-0 ceph-mon[75140]: pgmap v3334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.212310) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446855212351, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1829, "num_deletes": 251, "total_data_size": 3145510, "memory_usage": 3194456, "flush_reason": "Manual Compaction"}
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446855237406, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 3066963, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66506, "largest_seqno": 68334, "table_properties": {"data_size": 3058458, "index_size": 5254, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 16969, "raw_average_key_size": 20, "raw_value_size": 3041658, "raw_average_value_size": 3591, "num_data_blocks": 234, "num_entries": 847, "num_filter_entries": 847, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769446654, "oldest_key_time": 1769446654, "file_creation_time": 1769446855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 25252 microseconds, and 11974 cpu microseconds.
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.237559) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 3066963 bytes OK
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.237618) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.239513) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.239529) EVENT_LOG_v1 {"time_micros": 1769446855239523, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.239548) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 3137736, prev total WAL file size 3137736, number of live WAL files 2.
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.240936) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2995KB)], [158(10192KB)]
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446855240989, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 13504394, "oldest_snapshot_seqno": -1}
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8775 keys, 11674937 bytes, temperature: kUnknown
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446855321897, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11674937, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11617595, "index_size": 34317, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21957, "raw_key_size": 229796, "raw_average_key_size": 26, "raw_value_size": 11462007, "raw_average_value_size": 1306, "num_data_blocks": 1329, "num_entries": 8775, "num_filter_entries": 8775, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769446855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.322609) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11674937 bytes
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.325313) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.0 rd, 143.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.9, 10.0 +0.0 blob) out(11.1 +0.0 blob), read-write-amplify(8.2) write-amplify(3.8) OK, records in: 9289, records dropped: 514 output_compression: NoCompression
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.325346) EVENT_LOG_v1 {"time_micros": 1769446855325332, "job": 98, "event": "compaction_finished", "compaction_time_micros": 81357, "compaction_time_cpu_micros": 27553, "output_level": 6, "num_output_files": 1, "total_output_size": 11674937, "num_input_records": 9289, "num_output_records": 8775, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446855326702, "job": 98, "event": "table_file_deletion", "file_number": 160}
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446855330881, "job": 98, "event": "table_file_deletion", "file_number": 158}
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.240800) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.331073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.331081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.331085) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.331088) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:00:55 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:00:55.331091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:00:55 compute-0 nova_compute[239965]: 2026-01-26 17:00:55.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:00:55 compute-0 nova_compute[239965]: 2026-01-26 17:00:55.529 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:00:55 compute-0 sshd-session[397508]: Invalid user sol from 45.148.10.240 port 52824
Jan 26 17:00:55 compute-0 sshd-session[397508]: Connection closed by invalid user sol 45.148.10.240 port 52824 [preauth]
Jan 26 17:00:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:55 compute-0 nova_compute[239965]: 2026-01-26 17:00:55.898 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:56 compute-0 nova_compute[239965]: 2026-01-26 17:00:56.913 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:00:57 compute-0 ceph-mon[75140]: pgmap v3335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:00:59 compute-0 ceph-mon[75140]: pgmap v3336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:00:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:00:59.283 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:00:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:00:59.284 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:00:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:00:59.284 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:00:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:01:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:01:00 compute-0 nova_compute[239965]: 2026-01-26 17:01:00.902 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:01 compute-0 ceph-mon[75140]: pgmap v3337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:01 compute-0 nova_compute[239965]: 2026-01-26 17:01:01.915 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:01 compute-0 CROND[397511]: (root) CMD (run-parts /etc/cron.hourly)
Jan 26 17:01:02 compute-0 run-parts[397514]: (/etc/cron.hourly) starting 0anacron
Jan 26 17:01:02 compute-0 run-parts[397520]: (/etc/cron.hourly) finished 0anacron
Jan 26 17:01:02 compute-0 CROND[397510]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 26 17:01:03 compute-0 ceph-mon[75140]: pgmap v3338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:05 compute-0 ceph-mon[75140]: pgmap v3339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:05 compute-0 nova_compute[239965]: 2026-01-26 17:01:05.904 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:06 compute-0 nova_compute[239965]: 2026-01-26 17:01:06.916 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:07 compute-0 ceph-mon[75140]: pgmap v3340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:09 compute-0 ceph-mon[75140]: pgmap v3341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:10 compute-0 nova_compute[239965]: 2026-01-26 17:01:10.953 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:11 compute-0 ceph-mon[75140]: pgmap v3342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:11 compute-0 nova_compute[239965]: 2026-01-26 17:01:11.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:12 compute-0 ceph-mon[75140]: pgmap v3343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:14 compute-0 podman[397521]: 2026-01-26 17:01:14.365866278 +0000 UTC m=+0.054432975 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:01:14 compute-0 ceph-mon[75140]: pgmap v3344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:15 compute-0 nova_compute[239965]: 2026-01-26 17:01:15.955 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:16 compute-0 ceph-mon[75140]: pgmap v3345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:16 compute-0 nova_compute[239965]: 2026-01-26 17:01:16.920 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:17 compute-0 podman[397542]: 2026-01-26 17:01:17.42229392 +0000 UTC m=+0.112670763 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, 
io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:01:17 compute-0 nova_compute[239965]: 2026-01-26 17:01:17.806 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:18 compute-0 ceph-mon[75140]: pgmap v3346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:20 compute-0 nova_compute[239965]: 2026-01-26 17:01:20.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:20 compute-0 nova_compute[239965]: 2026-01-26 17:01:20.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:01:20 compute-0 nova_compute[239965]: 2026-01-26 17:01:20.532 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:01:20 compute-0 ceph-mon[75140]: pgmap v3347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:20 compute-0 nova_compute[239965]: 2026-01-26 17:01:20.957 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:21 compute-0 nova_compute[239965]: 2026-01-26 17:01:21.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:22 compute-0 ceph-mon[75140]: pgmap v3348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:01:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.74 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.5      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.6      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed3a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.1 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.0 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x555973ed38d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 7.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 17:01:24 compute-0 ceph-mon[75140]: pgmap v3349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:25 compute-0 nova_compute[239965]: 2026-01-26 17:01:25.532 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:25 compute-0 nova_compute[239965]: 2026-01-26 17:01:25.960 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:26 compute-0 nova_compute[239965]: 2026-01-26 17:01:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:26 compute-0 nova_compute[239965]: 2026-01-26 17:01:26.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:26 compute-0 ceph-mon[75140]: pgmap v3350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:01:28
Jan 26 17:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', 'default.rgw.control', 'vms', 'default.rgw.meta', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Jan 26 17:01:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:01:28 compute-0 ceph-mon[75140]: pgmap v3351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:01:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.4 total, 600.0 interval
                                           Cumulative writes: 49K writes, 192K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.69 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.253       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.25              0.00         1    0.253       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.25              0.00         1    0.253       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.024       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.4 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5563c8cdf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 17:01:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:01:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:01:30 compute-0 nova_compute[239965]: 2026-01-26 17:01:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:30 compute-0 nova_compute[239965]: 2026-01-26 17:01:30.962 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:31 compute-0 ceph-mon[75140]: pgmap v3352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:01:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:31 compute-0 nova_compute[239965]: 2026-01-26 17:01:31.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:32 compute-0 nova_compute[239965]: 2026-01-26 17:01:32.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:33 compute-0 ceph-mon[75140]: pgmap v3353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:34 compute-0 nova_compute[239965]: 2026-01-26 17:01:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:01:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6000.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 149K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.03 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.72 writes per sync, written: 0.15 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.011       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.04              0.00         1    0.039       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562773363a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562773363a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.028       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x562773363a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6000.2 total, 4800.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5627733638d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 26 17:01:35 compute-0 ceph-mon[75140]: pgmap v3354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:35 compute-0 nova_compute[239965]: 2026-01-26 17:01:35.964 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:36 compute-0 nova_compute[239965]: 2026-01-26 17:01:36.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:37 compute-0 ceph-mon[75140]: pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:37 compute-0 nova_compute[239965]: 2026-01-26 17:01:37.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:37 compute-0 nova_compute[239965]: 2026-01-26 17:01:37.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:01:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:38 compute-0 nova_compute[239965]: 2026-01-26 17:01:38.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:38 compute-0 nova_compute[239965]: 2026-01-26 17:01:38.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:38 compute-0 nova_compute[239965]: 2026-01-26 17:01:38.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:01:38 compute-0 nova_compute[239965]: 2026-01-26 17:01:38.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:01:38 compute-0 nova_compute[239965]: 2026-01-26 17:01:38.529 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:01:38 compute-0 ceph-mon[75140]: pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:39 compute-0 nova_compute[239965]: 2026-01-26 17:01:39.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:01:39 compute-0 nova_compute[239965]: 2026-01-26 17:01:39.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:01:39 compute-0 nova_compute[239965]: 2026-01-26 17:01:39.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:01:39 compute-0 nova_compute[239965]: 2026-01-26 17:01:39.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:01:39 compute-0 nova_compute[239965]: 2026-01-26 17:01:39.540 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:01:39 compute-0 nova_compute[239965]: 2026-01-26 17:01:39.540 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 17:01:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:01:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4160279858' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:01:40 compute-0 nova_compute[239965]: 2026-01-26 17:01:40.131 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:40 compute-0 nova_compute[239965]: 2026-01-26 17:01:40.309 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:01:40 compute-0 nova_compute[239965]: 2026-01-26 17:01:40.311 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:01:40 compute-0 nova_compute[239965]: 2026-01-26 17:01:40.311 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:01:40 compute-0 nova_compute[239965]: 2026-01-26 17:01:40.311 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:01:40 compute-0 nova_compute[239965]: 2026-01-26 17:01:40.568 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:01:40 compute-0 nova_compute[239965]: 2026-01-26 17:01:40.569 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:01:40 compute-0 nova_compute[239965]: 2026-01-26 17:01:40.586 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:01:40 compute-0 ceph-mon[75140]: pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:40 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4160279858' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:01:40 compute-0 nova_compute[239965]: 2026-01-26 17:01:40.966 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:01:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3530804222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:01:41 compute-0 nova_compute[239965]: 2026-01-26 17:01:41.249 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.663s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:01:41 compute-0 nova_compute[239965]: 2026-01-26 17:01:41.254 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:01:41 compute-0 nova_compute[239965]: 2026-01-26 17:01:41.275 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:01:41 compute-0 nova_compute[239965]: 2026-01-26 17:01:41.277 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:01:41 compute-0 nova_compute[239965]: 2026-01-26 17:01:41.277 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.966s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:01:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3530804222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:01:41 compute-0 nova_compute[239965]: 2026-01-26 17:01:41.929 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:42 compute-0 ceph-mon[75140]: pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:44 compute-0 ceph-mon[75140]: pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:45 compute-0 podman[397614]: 2026-01-26 17:01:45.36531481 +0000 UTC m=+0.053772429 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:01:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:45 compute-0 nova_compute[239965]: 2026-01-26 17:01:45.968 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:46 compute-0 nova_compute[239965]: 2026-01-26 17:01:46.931 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:47 compute-0 ceph-mon[75140]: pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:48 compute-0 podman[397634]: 2026-01-26 17:01:48.4016487 +0000 UTC m=+0.082100993 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:01:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:01:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1908953156' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:01:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:01:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1908953156' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:01:49 compute-0 ceph-mon[75140]: pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1908953156' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:01:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1908953156' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:01:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:01:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:50 compute-0 nova_compute[239965]: 2026-01-26 17:01:50.970 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:51 compute-0 ceph-mon[75140]: pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:51 compute-0 nova_compute[239965]: 2026-01-26 17:01:51.971 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:53 compute-0 ceph-mon[75140]: pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:53 compute-0 sudo[397660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:01:53 compute-0 sudo[397660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:01:53 compute-0 sudo[397660]: pam_unix(sudo:session): session closed for user root
Jan 26 17:01:53 compute-0 sudo[397685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:01:53 compute-0 sudo[397685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:01:53 compute-0 sudo[397685]: pam_unix(sudo:session): session closed for user root
Jan 26 17:01:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:01:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:01:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:01:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:01:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:01:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:01:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:01:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:01:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:01:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:01:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:01:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:01:53 compute-0 sudo[397742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:01:53 compute-0 sudo[397742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:01:53 compute-0 sudo[397742]: pam_unix(sudo:session): session closed for user root
Jan 26 17:01:53 compute-0 sudo[397767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:01:53 compute-0 sudo[397767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:01:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:01:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:01:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:01:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:01:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:01:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:01:54 compute-0 podman[397805]: 2026-01-26 17:01:54.267185633 +0000 UTC m=+0.024828280 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:01:54 compute-0 podman[397805]: 2026-01-26 17:01:54.405634926 +0000 UTC m=+0.163277553 container create a158f42408602387ba2b0394f0b31b4a70bbda07045cff3133c941227c7dd7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 17:01:54 compute-0 systemd[1]: Started libpod-conmon-a158f42408602387ba2b0394f0b31b4a70bbda07045cff3133c941227c7dd7dd.scope.
Jan 26 17:01:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:01:54 compute-0 podman[397805]: 2026-01-26 17:01:54.726869457 +0000 UTC m=+0.484512114 container init a158f42408602387ba2b0394f0b31b4a70bbda07045cff3133c941227c7dd7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 17:01:54 compute-0 podman[397805]: 2026-01-26 17:01:54.735316764 +0000 UTC m=+0.492959391 container start a158f42408602387ba2b0394f0b31b4a70bbda07045cff3133c941227c7dd7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_borg, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:01:54 compute-0 podman[397805]: 2026-01-26 17:01:54.739555098 +0000 UTC m=+0.497197745 container attach a158f42408602387ba2b0394f0b31b4a70bbda07045cff3133c941227c7dd7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_borg, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:01:54 compute-0 vigorous_borg[397822]: 167 167
Jan 26 17:01:54 compute-0 systemd[1]: libpod-a158f42408602387ba2b0394f0b31b4a70bbda07045cff3133c941227c7dd7dd.scope: Deactivated successfully.
Jan 26 17:01:54 compute-0 podman[397805]: 2026-01-26 17:01:54.742426618 +0000 UTC m=+0.500069245 container died a158f42408602387ba2b0394f0b31b4a70bbda07045cff3133c941227c7dd7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:01:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f7b4ea39214fa780d30ffe0193e8bb54312bb8e714ee52b2c79e73237906dd-merged.mount: Deactivated successfully.
Jan 26 17:01:54 compute-0 podman[397805]: 2026-01-26 17:01:54.79351058 +0000 UTC m=+0.551153207 container remove a158f42408602387ba2b0394f0b31b4a70bbda07045cff3133c941227c7dd7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_borg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:01:54 compute-0 systemd[1]: libpod-conmon-a158f42408602387ba2b0394f0b31b4a70bbda07045cff3133c941227c7dd7dd.scope: Deactivated successfully.
Jan 26 17:01:54 compute-0 podman[397846]: 2026-01-26 17:01:54.95508083 +0000 UTC m=+0.040960525 container create 6867c445fdcd481a044a71ca84de299a3d61450d03f1e7d2cb3103d990293d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:01:54 compute-0 systemd[1]: Started libpod-conmon-6867c445fdcd481a044a71ca84de299a3d61450d03f1e7d2cb3103d990293d71.scope.
Jan 26 17:01:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98be67f490fd74d1a4df2f711366efb20cb8d91dff9002ea5d5965203d00a4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98be67f490fd74d1a4df2f711366efb20cb8d91dff9002ea5d5965203d00a4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98be67f490fd74d1a4df2f711366efb20cb8d91dff9002ea5d5965203d00a4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98be67f490fd74d1a4df2f711366efb20cb8d91dff9002ea5d5965203d00a4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98be67f490fd74d1a4df2f711366efb20cb8d91dff9002ea5d5965203d00a4c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:55 compute-0 podman[397846]: 2026-01-26 17:01:54.936827362 +0000 UTC m=+0.022707077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:01:55 compute-0 podman[397846]: 2026-01-26 17:01:55.036021613 +0000 UTC m=+0.121901328 container init 6867c445fdcd481a044a71ca84de299a3d61450d03f1e7d2cb3103d990293d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:01:55 compute-0 podman[397846]: 2026-01-26 17:01:55.045716811 +0000 UTC m=+0.131596506 container start 6867c445fdcd481a044a71ca84de299a3d61450d03f1e7d2cb3103d990293d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:01:55 compute-0 podman[397846]: 2026-01-26 17:01:55.050038537 +0000 UTC m=+0.135918262 container attach 6867c445fdcd481a044a71ca84de299a3d61450d03f1e7d2cb3103d990293d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:01:55 compute-0 ceph-mon[75140]: pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:55 compute-0 confident_bhaskara[397862]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:01:55 compute-0 confident_bhaskara[397862]: --> All data devices are unavailable
Jan 26 17:01:55 compute-0 systemd[1]: libpod-6867c445fdcd481a044a71ca84de299a3d61450d03f1e7d2cb3103d990293d71.scope: Deactivated successfully.
Jan 26 17:01:55 compute-0 podman[397846]: 2026-01-26 17:01:55.527385925 +0000 UTC m=+0.613265650 container died 6867c445fdcd481a044a71ca84de299a3d61450d03f1e7d2cb3103d990293d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 17:01:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c98be67f490fd74d1a4df2f711366efb20cb8d91dff9002ea5d5965203d00a4c-merged.mount: Deactivated successfully.
Jan 26 17:01:55 compute-0 podman[397846]: 2026-01-26 17:01:55.573910085 +0000 UTC m=+0.659789780 container remove 6867c445fdcd481a044a71ca84de299a3d61450d03f1e7d2cb3103d990293d71 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:01:55 compute-0 systemd[1]: libpod-conmon-6867c445fdcd481a044a71ca84de299a3d61450d03f1e7d2cb3103d990293d71.scope: Deactivated successfully.
Jan 26 17:01:55 compute-0 sudo[397767]: pam_unix(sudo:session): session closed for user root
Jan 26 17:01:55 compute-0 sudo[397893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:01:55 compute-0 sudo[397893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:01:55 compute-0 sudo[397893]: pam_unix(sudo:session): session closed for user root
Jan 26 17:01:55 compute-0 sudo[397918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:01:55 compute-0 sudo[397918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:01:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:55 compute-0 nova_compute[239965]: 2026-01-26 17:01:55.972 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:56 compute-0 podman[397955]: 2026-01-26 17:01:56.027630054 +0000 UTC m=+0.050742195 container create 84b19918593cfa86ce35c07579bd2f1e57ae6abe20d379dec937ca4190f56618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 17:01:56 compute-0 systemd[1]: Started libpod-conmon-84b19918593cfa86ce35c07579bd2f1e57ae6abe20d379dec937ca4190f56618.scope.
Jan 26 17:01:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:01:56 compute-0 podman[397955]: 2026-01-26 17:01:56.008541987 +0000 UTC m=+0.031654158 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:01:56 compute-0 podman[397955]: 2026-01-26 17:01:56.116309848 +0000 UTC m=+0.139422019 container init 84b19918593cfa86ce35c07579bd2f1e57ae6abe20d379dec937ca4190f56618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:01:56 compute-0 podman[397955]: 2026-01-26 17:01:56.124126009 +0000 UTC m=+0.147238140 container start 84b19918593cfa86ce35c07579bd2f1e57ae6abe20d379dec937ca4190f56618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 17:01:56 compute-0 podman[397955]: 2026-01-26 17:01:56.127602164 +0000 UTC m=+0.150714335 container attach 84b19918593cfa86ce35c07579bd2f1e57ae6abe20d379dec937ca4190f56618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:01:56 compute-0 jovial_faraday[397972]: 167 167
Jan 26 17:01:56 compute-0 systemd[1]: libpod-84b19918593cfa86ce35c07579bd2f1e57ae6abe20d379dec937ca4190f56618.scope: Deactivated successfully.
Jan 26 17:01:56 compute-0 podman[397955]: 2026-01-26 17:01:56.131790797 +0000 UTC m=+0.154902938 container died 84b19918593cfa86ce35c07579bd2f1e57ae6abe20d379dec937ca4190f56618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 17:01:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-c8c6dc667ae3efb9da47c6c10c4abb4fdac5b7bb0443e6ab80cb986601247c2b-merged.mount: Deactivated successfully.
Jan 26 17:01:56 compute-0 podman[397955]: 2026-01-26 17:01:56.172897194 +0000 UTC m=+0.196009345 container remove 84b19918593cfa86ce35c07579bd2f1e57ae6abe20d379dec937ca4190f56618 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_faraday, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 26 17:01:56 compute-0 systemd[1]: libpod-conmon-84b19918593cfa86ce35c07579bd2f1e57ae6abe20d379dec937ca4190f56618.scope: Deactivated successfully.
Jan 26 17:01:56 compute-0 podman[397995]: 2026-01-26 17:01:56.332418313 +0000 UTC m=+0.045195208 container create 56cb60f1f7b0e5c7e882aad3d7010334be284b381cbca6f33d543b8dcdea303d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_clarke, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 17:01:56 compute-0 systemd[1]: Started libpod-conmon-56cb60f1f7b0e5c7e882aad3d7010334be284b381cbca6f33d543b8dcdea303d.scope.
Jan 26 17:01:56 compute-0 ceph-mon[75140]: pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3cee43d484ee4a861a1163a890fc65f7a81ade0da3e2cbf652661cfb40d2d1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3cee43d484ee4a861a1163a890fc65f7a81ade0da3e2cbf652661cfb40d2d1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3cee43d484ee4a861a1163a890fc65f7a81ade0da3e2cbf652661cfb40d2d1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3cee43d484ee4a861a1163a890fc65f7a81ade0da3e2cbf652661cfb40d2d1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:56 compute-0 podman[397995]: 2026-01-26 17:01:56.31309614 +0000 UTC m=+0.025873085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:01:56 compute-0 podman[397995]: 2026-01-26 17:01:56.41714571 +0000 UTC m=+0.129922695 container init 56cb60f1f7b0e5c7e882aad3d7010334be284b381cbca6f33d543b8dcdea303d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_clarke, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:01:56 compute-0 podman[397995]: 2026-01-26 17:01:56.422904781 +0000 UTC m=+0.135681706 container start 56cb60f1f7b0e5c7e882aad3d7010334be284b381cbca6f33d543b8dcdea303d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:01:56 compute-0 podman[397995]: 2026-01-26 17:01:56.426670293 +0000 UTC m=+0.139447228 container attach 56cb60f1f7b0e5c7e882aad3d7010334be284b381cbca6f33d543b8dcdea303d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_clarke, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]: {
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:     "0": [
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:         {
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "devices": [
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "/dev/loop3"
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             ],
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_name": "ceph_lv0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_size": "21470642176",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "name": "ceph_lv0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "tags": {
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.cluster_name": "ceph",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.crush_device_class": "",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.encrypted": "0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.objectstore": "bluestore",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.osd_id": "0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.type": "block",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.vdo": "0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.with_tpm": "0"
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             },
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "type": "block",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "vg_name": "ceph_vg0"
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:         }
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:     ],
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:     "1": [
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:         {
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "devices": [
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "/dev/loop4"
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             ],
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_name": "ceph_lv1",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_size": "21470642176",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "name": "ceph_lv1",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "tags": {
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.cluster_name": "ceph",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.crush_device_class": "",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.encrypted": "0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.objectstore": "bluestore",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.osd_id": "1",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.type": "block",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.vdo": "0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.with_tpm": "0"
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             },
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "type": "block",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "vg_name": "ceph_vg1"
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:         }
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:     ],
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:     "2": [
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:         {
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "devices": [
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "/dev/loop5"
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             ],
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_name": "ceph_lv2",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_size": "21470642176",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "name": "ceph_lv2",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "tags": {
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.cluster_name": "ceph",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.crush_device_class": "",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.encrypted": "0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.objectstore": "bluestore",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.osd_id": "2",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.type": "block",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.vdo": "0",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:                 "ceph.with_tpm": "0"
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             },
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "type": "block",
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:             "vg_name": "ceph_vg2"
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:         }
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]:     ]
Jan 26 17:01:56 compute-0 unruffled_clarke[398012]: }
Jan 26 17:01:56 compute-0 systemd[1]: libpod-56cb60f1f7b0e5c7e882aad3d7010334be284b381cbca6f33d543b8dcdea303d.scope: Deactivated successfully.
Jan 26 17:01:56 compute-0 podman[397995]: 2026-01-26 17:01:56.715018829 +0000 UTC m=+0.427795724 container died 56cb60f1f7b0e5c7e882aad3d7010334be284b381cbca6f33d543b8dcdea303d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_clarke, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 17:01:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3cee43d484ee4a861a1163a890fc65f7a81ade0da3e2cbf652661cfb40d2d1a-merged.mount: Deactivated successfully.
Jan 26 17:01:56 compute-0 podman[397995]: 2026-01-26 17:01:56.759547121 +0000 UTC m=+0.472324006 container remove 56cb60f1f7b0e5c7e882aad3d7010334be284b381cbca6f33d543b8dcdea303d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_clarke, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 17:01:56 compute-0 systemd[1]: libpod-conmon-56cb60f1f7b0e5c7e882aad3d7010334be284b381cbca6f33d543b8dcdea303d.scope: Deactivated successfully.
Jan 26 17:01:56 compute-0 sudo[397918]: pam_unix(sudo:session): session closed for user root
Jan 26 17:01:56 compute-0 sudo[398032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:01:56 compute-0 sudo[398032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:01:56 compute-0 sudo[398032]: pam_unix(sudo:session): session closed for user root
Jan 26 17:01:56 compute-0 nova_compute[239965]: 2026-01-26 17:01:56.973 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:01:56 compute-0 sudo[398057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:01:56 compute-0 sudo[398057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:01:57 compute-0 podman[398094]: 2026-01-26 17:01:57.278416286 +0000 UTC m=+0.041975829 container create 299e244e9e69bc408780ec4c312b84aa45c64d145da3567e2ba295574cc34023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 17:01:57 compute-0 systemd[1]: Started libpod-conmon-299e244e9e69bc408780ec4c312b84aa45c64d145da3567e2ba295574cc34023.scope.
Jan 26 17:01:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:01:57 compute-0 podman[398094]: 2026-01-26 17:01:57.261608695 +0000 UTC m=+0.025168258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:01:57 compute-0 podman[398094]: 2026-01-26 17:01:57.36059813 +0000 UTC m=+0.124157703 container init 299e244e9e69bc408780ec4c312b84aa45c64d145da3567e2ba295574cc34023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_williams, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:01:57 compute-0 podman[398094]: 2026-01-26 17:01:57.366572747 +0000 UTC m=+0.130132290 container start 299e244e9e69bc408780ec4c312b84aa45c64d145da3567e2ba295574cc34023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:01:57 compute-0 podman[398094]: 2026-01-26 17:01:57.370213246 +0000 UTC m=+0.133772819 container attach 299e244e9e69bc408780ec4c312b84aa45c64d145da3567e2ba295574cc34023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:01:57 compute-0 cool_williams[398110]: 167 167
Jan 26 17:01:57 compute-0 systemd[1]: libpod-299e244e9e69bc408780ec4c312b84aa45c64d145da3567e2ba295574cc34023.scope: Deactivated successfully.
Jan 26 17:01:57 compute-0 podman[398094]: 2026-01-26 17:01:57.373741423 +0000 UTC m=+0.137301006 container died 299e244e9e69bc408780ec4c312b84aa45c64d145da3567e2ba295574cc34023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0454c0b4f41b1c8fab22cd00c0f58ab28cff395830a467e17dd8bdf098987cab-merged.mount: Deactivated successfully.
Jan 26 17:01:57 compute-0 podman[398094]: 2026-01-26 17:01:57.417359901 +0000 UTC m=+0.180919454 container remove 299e244e9e69bc408780ec4c312b84aa45c64d145da3567e2ba295574cc34023 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_williams, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 17:01:57 compute-0 systemd[1]: libpod-conmon-299e244e9e69bc408780ec4c312b84aa45c64d145da3567e2ba295574cc34023.scope: Deactivated successfully.
Jan 26 17:01:57 compute-0 podman[398134]: 2026-01-26 17:01:57.581593636 +0000 UTC m=+0.044135172 container create 4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:01:57 compute-0 systemd[1]: Started libpod-conmon-4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c.scope.
Jan 26 17:01:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d33aee28366517405930c1d125a3e235a918184ea7cffd873efeb239063f5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d33aee28366517405930c1d125a3e235a918184ea7cffd873efeb239063f5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d33aee28366517405930c1d125a3e235a918184ea7cffd873efeb239063f5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71d33aee28366517405930c1d125a3e235a918184ea7cffd873efeb239063f5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:01:57 compute-0 podman[398134]: 2026-01-26 17:01:57.655790964 +0000 UTC m=+0.118332530 container init 4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:01:57 compute-0 podman[398134]: 2026-01-26 17:01:57.561704758 +0000 UTC m=+0.024246334 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:01:57 compute-0 podman[398134]: 2026-01-26 17:01:57.663719149 +0000 UTC m=+0.126260695 container start 4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:01:57 compute-0 podman[398134]: 2026-01-26 17:01:57.667381649 +0000 UTC m=+0.129923215 container attach 4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:01:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:58 compute-0 lvm[398230]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:01:58 compute-0 lvm[398230]: VG ceph_vg0 finished
Jan 26 17:01:58 compute-0 lvm[398232]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:01:58 compute-0 lvm[398232]: VG ceph_vg2 finished
Jan 26 17:01:58 compute-0 lvm[398233]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:01:58 compute-0 lvm[398233]: VG ceph_vg1 finished
Jan 26 17:01:58 compute-0 gifted_nash[398151]: {}
Jan 26 17:01:58 compute-0 systemd[1]: libpod-4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c.scope: Deactivated successfully.
Jan 26 17:01:58 compute-0 systemd[1]: libpod-4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c.scope: Consumed 1.351s CPU time.
Jan 26 17:01:58 compute-0 podman[398134]: 2026-01-26 17:01:58.469037284 +0000 UTC m=+0.931578900 container died 4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 17:01:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-71d33aee28366517405930c1d125a3e235a918184ea7cffd873efeb239063f5b-merged.mount: Deactivated successfully.
Jan 26 17:01:58 compute-0 podman[398134]: 2026-01-26 17:01:58.520786812 +0000 UTC m=+0.983328368 container remove 4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:01:58 compute-0 systemd[1]: libpod-conmon-4df16cddc933738eda3618c0f84e4b5117e0c5052dc89fbb151107df78a4d31c.scope: Deactivated successfully.
Jan 26 17:01:58 compute-0 sudo[398057]: pam_unix(sudo:session): session closed for user root
Jan 26 17:01:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:01:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:01:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:01:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:01:58 compute-0 sudo[398247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:01:58 compute-0 sudo[398247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:01:58 compute-0 sudo[398247]: pam_unix(sudo:session): session closed for user root
Jan 26 17:01:58 compute-0 ceph-mon[75140]: pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:01:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:01:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:01:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:01:59.285 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:01:59.286 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:01:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:01:59.286 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:01:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:02:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:02:00 compute-0 ceph-mon[75140]: pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:00 compute-0 nova_compute[239965]: 2026-01-26 17:02:00.976 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:01 compute-0 nova_compute[239965]: 2026-01-26 17:02:01.975 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:03 compute-0 ceph-mon[75140]: pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:05 compute-0 ceph-mon[75140]: pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:05 compute-0 nova_compute[239965]: 2026-01-26 17:02:05.978 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:06 compute-0 nova_compute[239965]: 2026-01-26 17:02:06.977 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:07 compute-0 ceph-mon[75140]: pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:09 compute-0 ceph-mon[75140]: pgmap v3371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:10 compute-0 nova_compute[239965]: 2026-01-26 17:02:10.980 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:11 compute-0 ceph-mon[75140]: pgmap v3372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:11 compute-0 nova_compute[239965]: 2026-01-26 17:02:11.981 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:13 compute-0 ceph-mon[75140]: pgmap v3373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:15 compute-0 ceph-mon[75140]: pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:15 compute-0 nova_compute[239965]: 2026-01-26 17:02:15.983 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:16 compute-0 podman[398272]: 2026-01-26 17:02:16.370688559 +0000 UTC m=+0.056183147 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 26 17:02:16 compute-0 nova_compute[239965]: 2026-01-26 17:02:16.982 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:17 compute-0 ceph-mon[75140]: pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:19 compute-0 ceph-mon[75140]: pgmap v3376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:19 compute-0 podman[398292]: 2026-01-26 17:02:19.398023068 +0000 UTC m=+0.086605943 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 17:02:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:20 compute-0 nova_compute[239965]: 2026-01-26 17:02:20.986 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:21 compute-0 ceph-mon[75140]: pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:21 compute-0 nova_compute[239965]: 2026-01-26 17:02:21.984 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:23 compute-0 ceph-mon[75140]: pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:24 compute-0 ceph-mon[75140]: pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:25 compute-0 nova_compute[239965]: 2026-01-26 17:02:25.990 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:26 compute-0 ceph-mon[75140]: pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:26 compute-0 nova_compute[239965]: 2026-01-26 17:02:26.985 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:27 compute-0 nova_compute[239965]: 2026-01-26 17:02:27.279 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:27 compute-0 nova_compute[239965]: 2026-01-26 17:02:27.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:02:28
Jan 26 17:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', 'vms', 'default.rgw.log', 'backups']
Jan 26 17:02:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:02:28 compute-0 ceph-mon[75140]: pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:02:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:02:30 compute-0 nova_compute[239965]: 2026-01-26 17:02:30.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:30 compute-0 ceph-mon[75140]: pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:30 compute-0 nova_compute[239965]: 2026-01-26 17:02:30.991 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:02:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:31 compute-0 nova_compute[239965]: 2026-01-26 17:02:31.987 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:32 compute-0 nova_compute[239965]: 2026-01-26 17:02:32.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:32 compute-0 ceph-mon[75140]: pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:34 compute-0 nova_compute[239965]: 2026-01-26 17:02:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:34 compute-0 ceph-mon[75140]: pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:35 compute-0 nova_compute[239965]: 2026-01-26 17:02:35.994 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:36 compute-0 ceph-mon[75140]: pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:36 compute-0 nova_compute[239965]: 2026-01-26 17:02:36.988 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:38 compute-0 nova_compute[239965]: 2026-01-26 17:02:38.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:38 compute-0 ceph-mon[75140]: pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:39 compute-0 nova_compute[239965]: 2026-01-26 17:02:39.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:39 compute-0 nova_compute[239965]: 2026-01-26 17:02:39.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:02:39 compute-0 nova_compute[239965]: 2026-01-26 17:02:39.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:02:39 compute-0 nova_compute[239965]: 2026-01-26 17:02:39.528 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:02:39 compute-0 nova_compute[239965]: 2026-01-26 17:02:39.528 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:39 compute-0 nova_compute[239965]: 2026-01-26 17:02:39.528 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:02:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:40 compute-0 nova_compute[239965]: 2026-01-26 17:02:40.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:40 compute-0 nova_compute[239965]: 2026-01-26 17:02:40.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:02:40 compute-0 nova_compute[239965]: 2026-01-26 17:02:40.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:02:40 compute-0 nova_compute[239965]: 2026-01-26 17:02:40.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:02:40 compute-0 nova_compute[239965]: 2026-01-26 17:02:40.543 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:02:40 compute-0 nova_compute[239965]: 2026-01-26 17:02:40.543 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:41 compute-0 ceph-mon[75140]: pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:02:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059695251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.144 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.600s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.286 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.287 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3571MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.287 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.288 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.365 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.366 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.383 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:02:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:02:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3822588158' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.926 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.933 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.950 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.952 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.953 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:02:41 compute-0 nova_compute[239965]: 2026-01-26 17:02:41.990 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2059695251' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:02:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3822588158' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:02:43 compute-0 ceph-mon[75140]: pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:44 compute-0 ceph-mon[75140]: pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:45 compute-0 nova_compute[239965]: 2026-01-26 17:02:45.946 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:02:46 compute-0 nova_compute[239965]: 2026-01-26 17:02:46.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:46 compute-0 nova_compute[239965]: 2026-01-26 17:02:46.992 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:47 compute-0 ceph-mon[75140]: pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:47 compute-0 podman[398362]: 2026-01-26 17:02:47.361737393 +0000 UTC m=+0.052794405 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:02:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:02:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/999359238' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:02:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:02:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/999359238' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:02:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:49 compute-0 ceph-mon[75140]: pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/999359238' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:02:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/999359238' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:02:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:50 compute-0 podman[398381]: 2026-01-26 17:02:50.443472656 +0000 UTC m=+0.131805761 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 26 17:02:51 compute-0 nova_compute[239965]: 2026-01-26 17:02:51.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:51 compute-0 ceph-mon[75140]: pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:51 compute-0 nova_compute[239965]: 2026-01-26 17:02:51.996 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:53 compute-0 ceph-mon[75140]: pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:02:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 17:02:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:55 compute-0 ceph-mon[75140]: pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 17:02:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 26 17:02:56 compute-0 nova_compute[239965]: 2026-01-26 17:02:56.173 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:56 compute-0 nova_compute[239965]: 2026-01-26 17:02:56.996 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:02:57 compute-0 ceph-mon[75140]: pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Jan 26 17:02:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:02:58 compute-0 sudo[398408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:02:58 compute-0 sudo[398408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:02:58 compute-0 sudo[398408]: pam_unix(sudo:session): session closed for user root
Jan 26 17:02:58 compute-0 sudo[398433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:02:58 compute-0 sudo[398433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:02:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.093851) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446979093880, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1175, "num_deletes": 250, "total_data_size": 1860844, "memory_usage": 1884664, "flush_reason": "Manual Compaction"}
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446979101340, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 1080546, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68335, "largest_seqno": 69509, "table_properties": {"data_size": 1076228, "index_size": 1841, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11228, "raw_average_key_size": 20, "raw_value_size": 1066917, "raw_average_value_size": 1961, "num_data_blocks": 84, "num_entries": 544, "num_filter_entries": 544, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769446856, "oldest_key_time": 1769446856, "file_creation_time": 1769446979, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 7539 microseconds, and 3872 cpu microseconds.
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.101387) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 1080546 bytes OK
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.101408) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.103310) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.103327) EVENT_LOG_v1 {"time_micros": 1769446979103322, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.103348) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1855489, prev total WAL file size 1855489, number of live WAL files 2.
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.104188) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373537' seq:72057594037927935, type:22 .. '6D6772737461740033303038' seq:0, type:0; will stop at (end)
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(1055KB)], [161(11MB)]
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446979104259, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12755483, "oldest_snapshot_seqno": -1}
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8863 keys, 10129016 bytes, temperature: kUnknown
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446979164768, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 10129016, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10074212, "index_size": 31523, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22213, "raw_key_size": 231751, "raw_average_key_size": 26, "raw_value_size": 9920075, "raw_average_value_size": 1119, "num_data_blocks": 1217, "num_entries": 8863, "num_filter_entries": 8863, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769446979, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.165165) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 10129016 bytes
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.167042) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 210.5 rd, 167.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 11.1 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(21.2) write-amplify(9.4) OK, records in: 9319, records dropped: 456 output_compression: NoCompression
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.167074) EVENT_LOG_v1 {"time_micros": 1769446979167059, "job": 100, "event": "compaction_finished", "compaction_time_micros": 60607, "compaction_time_cpu_micros": 29057, "output_level": 6, "num_output_files": 1, "total_output_size": 10129016, "num_input_records": 9319, "num_output_records": 8863, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446979167848, "job": 100, "event": "table_file_deletion", "file_number": 163}
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769446979171707, "job": 100, "event": "table_file_deletion", "file_number": 161}
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.104041) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.171840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.171846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.171848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.171850) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:02:59 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:02:59.171852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:02:59 compute-0 ceph-mon[75140]: pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:02:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:02:59.286 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:02:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:02:59.286 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:02:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:02:59.286 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:02:59 compute-0 sudo[398433]: pam_unix(sudo:session): session closed for user root
Jan 26 17:02:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:02:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:02:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:02:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:02:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:02:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:02:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:02:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:02:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:02:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:02:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:02:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:02:59 compute-0 sudo[398488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:02:59 compute-0 sudo[398488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:02:59 compute-0 sudo[398488]: pam_unix(sudo:session): session closed for user root
Jan 26 17:02:59 compute-0 sudo[398513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:02:59 compute-0 sudo[398513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:02:59 compute-0 podman[398550]: 2026-01-26 17:02:59.811553384 +0000 UTC m=+0.043005566 container create 93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:02:59 compute-0 systemd[1]: Started libpod-conmon-93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f.scope.
Jan 26 17:02:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:02:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:02:59 compute-0 podman[398550]: 2026-01-26 17:02:59.793150493 +0000 UTC m=+0.024602725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:02:59 compute-0 podman[398550]: 2026-01-26 17:02:59.892178169 +0000 UTC m=+0.123630371 container init 93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_keldysh, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 17:02:59 compute-0 podman[398550]: 2026-01-26 17:02:59.899396916 +0000 UTC m=+0.130849108 container start 93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_keldysh, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 17:02:59 compute-0 podman[398550]: 2026-01-26 17:02:59.903562268 +0000 UTC m=+0.135014480 container attach 93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 17:02:59 compute-0 modest_keldysh[398567]: 167 167
Jan 26 17:02:59 compute-0 systemd[1]: libpod-93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f.scope: Deactivated successfully.
Jan 26 17:02:59 compute-0 conmon[398567]: conmon 93f5c9c4e548aea9fe94 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f.scope/container/memory.events
Jan 26 17:02:59 compute-0 podman[398572]: 2026-01-26 17:02:59.946518451 +0000 UTC m=+0.026252495 container died 93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:02:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-49f78cea31c8dd10fab098c2ae24a9f90ba0c215f6af0810e3215835de306bba-merged.mount: Deactivated successfully.
Jan 26 17:02:59 compute-0 podman[398572]: 2026-01-26 17:02:59.985113627 +0000 UTC m=+0.064847651 container remove 93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:02:59 compute-0 systemd[1]: libpod-conmon-93f5c9c4e548aea9fe9477343396e1043db251d4fd106b0277b15014b536756f.scope: Deactivated successfully.
Jan 26 17:03:00 compute-0 podman[398594]: 2026-01-26 17:03:00.151255998 +0000 UTC m=+0.046376197 container create aef3bdf420e58cefc09f7af7e2604e862d6f07ba3f087d313f6abc8fe51eda98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:03:00 compute-0 systemd[1]: Started libpod-conmon-aef3bdf420e58cefc09f7af7e2604e862d6f07ba3f087d313f6abc8fe51eda98.scope.
Jan 26 17:03:00 compute-0 podman[398594]: 2026-01-26 17:03:00.133232256 +0000 UTC m=+0.028352475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:03:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d238056843104077f78e2c0603dccd2a6a2e9e1376e2ac7fad890e83a39dc0cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d238056843104077f78e2c0603dccd2a6a2e9e1376e2ac7fad890e83a39dc0cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d238056843104077f78e2c0603dccd2a6a2e9e1376e2ac7fad890e83a39dc0cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d238056843104077f78e2c0603dccd2a6a2e9e1376e2ac7fad890e83a39dc0cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d238056843104077f78e2c0603dccd2a6a2e9e1376e2ac7fad890e83a39dc0cc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:00 compute-0 podman[398594]: 2026-01-26 17:03:00.287297063 +0000 UTC m=+0.182417292 container init aef3bdf420e58cefc09f7af7e2604e862d6f07ba3f087d313f6abc8fe51eda98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_diffie, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:03:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:03:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:03:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:03:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:03:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:03:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:03:00 compute-0 podman[398594]: 2026-01-26 17:03:00.299509881 +0000 UTC m=+0.194630080 container start aef3bdf420e58cefc09f7af7e2604e862d6f07ba3f087d313f6abc8fe51eda98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:03:00 compute-0 podman[398594]: 2026-01-26 17:03:00.311824723 +0000 UTC m=+0.206944932 container attach aef3bdf420e58cefc09f7af7e2604e862d6f07ba3f087d313f6abc8fe51eda98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_diffie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 17:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:03:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:03:00 compute-0 zealous_diffie[398610]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:03:00 compute-0 zealous_diffie[398610]: --> All data devices are unavailable
Jan 26 17:03:00 compute-0 systemd[1]: libpod-aef3bdf420e58cefc09f7af7e2604e862d6f07ba3f087d313f6abc8fe51eda98.scope: Deactivated successfully.
Jan 26 17:03:00 compute-0 podman[398630]: 2026-01-26 17:03:00.839236218 +0000 UTC m=+0.024294397 container died aef3bdf420e58cefc09f7af7e2604e862d6f07ba3f087d313f6abc8fe51eda98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_diffie, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-d238056843104077f78e2c0603dccd2a6a2e9e1376e2ac7fad890e83a39dc0cc-merged.mount: Deactivated successfully.
Jan 26 17:03:00 compute-0 podman[398630]: 2026-01-26 17:03:00.885068281 +0000 UTC m=+0.070126430 container remove aef3bdf420e58cefc09f7af7e2604e862d6f07ba3f087d313f6abc8fe51eda98 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:03:00 compute-0 systemd[1]: libpod-conmon-aef3bdf420e58cefc09f7af7e2604e862d6f07ba3f087d313f6abc8fe51eda98.scope: Deactivated successfully.
Jan 26 17:03:00 compute-0 sudo[398513]: pam_unix(sudo:session): session closed for user root
Jan 26 17:03:01 compute-0 sudo[398645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:03:01 compute-0 sudo[398645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:03:01 compute-0 sudo[398645]: pam_unix(sudo:session): session closed for user root
Jan 26 17:03:01 compute-0 sudo[398670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:03:01 compute-0 sudo[398670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:03:01 compute-0 nova_compute[239965]: 2026-01-26 17:03:01.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:01 compute-0 ceph-mon[75140]: pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:03:01 compute-0 podman[398707]: 2026-01-26 17:03:01.392593429 +0000 UTC m=+0.035790028 container create 4de3e45a7370b6ee41cd90b78635a008a27d438a02ef7441a82c62fc2eff56e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:03:01 compute-0 systemd[1]: Started libpod-conmon-4de3e45a7370b6ee41cd90b78635a008a27d438a02ef7441a82c62fc2eff56e7.scope.
Jan 26 17:03:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:03:01 compute-0 podman[398707]: 2026-01-26 17:03:01.468776916 +0000 UTC m=+0.111973545 container init 4de3e45a7370b6ee41cd90b78635a008a27d438a02ef7441a82c62fc2eff56e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ptolemy, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:03:01 compute-0 podman[398707]: 2026-01-26 17:03:01.377392116 +0000 UTC m=+0.020588735 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:03:01 compute-0 podman[398707]: 2026-01-26 17:03:01.474883286 +0000 UTC m=+0.118079885 container start 4de3e45a7370b6ee41cd90b78635a008a27d438a02ef7441a82c62fc2eff56e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 17:03:01 compute-0 bold_ptolemy[398724]: 167 167
Jan 26 17:03:01 compute-0 systemd[1]: libpod-4de3e45a7370b6ee41cd90b78635a008a27d438a02ef7441a82c62fc2eff56e7.scope: Deactivated successfully.
Jan 26 17:03:01 compute-0 podman[398707]: 2026-01-26 17:03:01.480541175 +0000 UTC m=+0.123737794 container attach 4de3e45a7370b6ee41cd90b78635a008a27d438a02ef7441a82c62fc2eff56e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ptolemy, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:03:01 compute-0 podman[398707]: 2026-01-26 17:03:01.480893993 +0000 UTC m=+0.124090592 container died 4de3e45a7370b6ee41cd90b78635a008a27d438a02ef7441a82c62fc2eff56e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ptolemy, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f403a066d954465f145731a584a03f6cc8ff1846a277657ec2f7cd0285db31b-merged.mount: Deactivated successfully.
Jan 26 17:03:01 compute-0 podman[398707]: 2026-01-26 17:03:01.51505882 +0000 UTC m=+0.158255419 container remove 4de3e45a7370b6ee41cd90b78635a008a27d438a02ef7441a82c62fc2eff56e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_ptolemy, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:03:01 compute-0 systemd[1]: libpod-conmon-4de3e45a7370b6ee41cd90b78635a008a27d438a02ef7441a82c62fc2eff56e7.scope: Deactivated successfully.
Jan 26 17:03:01 compute-0 podman[398748]: 2026-01-26 17:03:01.692395186 +0000 UTC m=+0.045645660 container create 4f9a3dfb78d931242c050821a244bb507db6026799ee86502209de412c29b8cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:03:01 compute-0 systemd[1]: Started libpod-conmon-4f9a3dfb78d931242c050821a244bb507db6026799ee86502209de412c29b8cf.scope.
Jan 26 17:03:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:03:01 compute-0 podman[398748]: 2026-01-26 17:03:01.673289077 +0000 UTC m=+0.026539541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5c026c05f3750a9919d7fdb218917d082b8a380ad38a7670e4842c77e8ba4f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5c026c05f3750a9919d7fdb218917d082b8a380ad38a7670e4842c77e8ba4f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5c026c05f3750a9919d7fdb218917d082b8a380ad38a7670e4842c77e8ba4f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d5c026c05f3750a9919d7fdb218917d082b8a380ad38a7670e4842c77e8ba4f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:01 compute-0 podman[398748]: 2026-01-26 17:03:01.795761069 +0000 UTC m=+0.149011543 container init 4f9a3dfb78d931242c050821a244bb507db6026799ee86502209de412c29b8cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:03:01 compute-0 podman[398748]: 2026-01-26 17:03:01.814310084 +0000 UTC m=+0.167560538 container start 4f9a3dfb78d931242c050821a244bb507db6026799ee86502209de412c29b8cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:03:01 compute-0 podman[398748]: 2026-01-26 17:03:01.818163688 +0000 UTC m=+0.171414132 container attach 4f9a3dfb78d931242c050821a244bb507db6026799ee86502209de412c29b8cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:03:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:03:02 compute-0 nova_compute[239965]: 2026-01-26 17:03:01.999 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:02 compute-0 great_lalande[398765]: {
Jan 26 17:03:02 compute-0 great_lalande[398765]:     "0": [
Jan 26 17:03:02 compute-0 great_lalande[398765]:         {
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "devices": [
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "/dev/loop3"
Jan 26 17:03:02 compute-0 great_lalande[398765]:             ],
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_name": "ceph_lv0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_size": "21470642176",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "name": "ceph_lv0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "tags": {
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.cluster_name": "ceph",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.crush_device_class": "",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.encrypted": "0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.objectstore": "bluestore",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.osd_id": "0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.type": "block",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.vdo": "0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.with_tpm": "0"
Jan 26 17:03:02 compute-0 great_lalande[398765]:             },
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "type": "block",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "vg_name": "ceph_vg0"
Jan 26 17:03:02 compute-0 great_lalande[398765]:         }
Jan 26 17:03:02 compute-0 great_lalande[398765]:     ],
Jan 26 17:03:02 compute-0 great_lalande[398765]:     "1": [
Jan 26 17:03:02 compute-0 great_lalande[398765]:         {
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "devices": [
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "/dev/loop4"
Jan 26 17:03:02 compute-0 great_lalande[398765]:             ],
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_name": "ceph_lv1",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_size": "21470642176",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "name": "ceph_lv1",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "tags": {
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.cluster_name": "ceph",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.crush_device_class": "",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.encrypted": "0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.objectstore": "bluestore",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.osd_id": "1",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.type": "block",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.vdo": "0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.with_tpm": "0"
Jan 26 17:03:02 compute-0 great_lalande[398765]:             },
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "type": "block",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "vg_name": "ceph_vg1"
Jan 26 17:03:02 compute-0 great_lalande[398765]:         }
Jan 26 17:03:02 compute-0 great_lalande[398765]:     ],
Jan 26 17:03:02 compute-0 great_lalande[398765]:     "2": [
Jan 26 17:03:02 compute-0 great_lalande[398765]:         {
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "devices": [
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "/dev/loop5"
Jan 26 17:03:02 compute-0 great_lalande[398765]:             ],
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_name": "ceph_lv2",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_size": "21470642176",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "name": "ceph_lv2",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "tags": {
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.cluster_name": "ceph",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.crush_device_class": "",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.encrypted": "0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.objectstore": "bluestore",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.osd_id": "2",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.type": "block",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.vdo": "0",
Jan 26 17:03:02 compute-0 great_lalande[398765]:                 "ceph.with_tpm": "0"
Jan 26 17:03:02 compute-0 great_lalande[398765]:             },
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "type": "block",
Jan 26 17:03:02 compute-0 great_lalande[398765]:             "vg_name": "ceph_vg2"
Jan 26 17:03:02 compute-0 great_lalande[398765]:         }
Jan 26 17:03:02 compute-0 great_lalande[398765]:     ]
Jan 26 17:03:02 compute-0 great_lalande[398765]: }
Jan 26 17:03:02 compute-0 systemd[1]: libpod-4f9a3dfb78d931242c050821a244bb507db6026799ee86502209de412c29b8cf.scope: Deactivated successfully.
Jan 26 17:03:02 compute-0 podman[398748]: 2026-01-26 17:03:02.14832975 +0000 UTC m=+0.501580194 container died 4f9a3dfb78d931242c050821a244bb507db6026799ee86502209de412c29b8cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d5c026c05f3750a9919d7fdb218917d082b8a380ad38a7670e4842c77e8ba4f-merged.mount: Deactivated successfully.
Jan 26 17:03:02 compute-0 podman[398748]: 2026-01-26 17:03:02.196840928 +0000 UTC m=+0.550091392 container remove 4f9a3dfb78d931242c050821a244bb507db6026799ee86502209de412c29b8cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:03:02 compute-0 systemd[1]: libpod-conmon-4f9a3dfb78d931242c050821a244bb507db6026799ee86502209de412c29b8cf.scope: Deactivated successfully.
Jan 26 17:03:02 compute-0 sudo[398670]: pam_unix(sudo:session): session closed for user root
Jan 26 17:03:02 compute-0 sudo[398786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:03:02 compute-0 sudo[398786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:03:02 compute-0 sudo[398786]: pam_unix(sudo:session): session closed for user root
Jan 26 17:03:02 compute-0 sudo[398811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:03:02 compute-0 sudo[398811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:03:02 compute-0 podman[398846]: 2026-01-26 17:03:02.651955191 +0000 UTC m=+0.042536343 container create 169bfcda7d92abf234b1550102249c98181010ac526fbde47155fb77bcd03c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_johnson, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:03:02 compute-0 systemd[1]: Started libpod-conmon-169bfcda7d92abf234b1550102249c98181010ac526fbde47155fb77bcd03c1a.scope.
Jan 26 17:03:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:03:02 compute-0 podman[398846]: 2026-01-26 17:03:02.714554985 +0000 UTC m=+0.105136157 container init 169bfcda7d92abf234b1550102249c98181010ac526fbde47155fb77bcd03c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 26 17:03:02 compute-0 podman[398846]: 2026-01-26 17:03:02.719830565 +0000 UTC m=+0.110411717 container start 169bfcda7d92abf234b1550102249c98181010ac526fbde47155fb77bcd03c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_johnson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:03:02 compute-0 strange_johnson[398861]: 167 167
Jan 26 17:03:02 compute-0 podman[398846]: 2026-01-26 17:03:02.72333511 +0000 UTC m=+0.113916262 container attach 169bfcda7d92abf234b1550102249c98181010ac526fbde47155fb77bcd03c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_johnson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:03:02 compute-0 systemd[1]: libpod-169bfcda7d92abf234b1550102249c98181010ac526fbde47155fb77bcd03c1a.scope: Deactivated successfully.
Jan 26 17:03:02 compute-0 podman[398846]: 2026-01-26 17:03:02.724289354 +0000 UTC m=+0.114870536 container died 169bfcda7d92abf234b1550102249c98181010ac526fbde47155fb77bcd03c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_johnson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:03:02 compute-0 podman[398846]: 2026-01-26 17:03:02.636928463 +0000 UTC m=+0.027509635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e25b4ccf6dda927ef8c00c3dfddca801530ee4fdbb739dafa50ee6ef429a74b6-merged.mount: Deactivated successfully.
Jan 26 17:03:02 compute-0 podman[398846]: 2026-01-26 17:03:02.7918936 +0000 UTC m=+0.182474752 container remove 169bfcda7d92abf234b1550102249c98181010ac526fbde47155fb77bcd03c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_johnson, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:03:02 compute-0 systemd[1]: libpod-conmon-169bfcda7d92abf234b1550102249c98181010ac526fbde47155fb77bcd03c1a.scope: Deactivated successfully.
Jan 26 17:03:02 compute-0 podman[398885]: 2026-01-26 17:03:02.983791623 +0000 UTC m=+0.050018606 container create 2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:03:03 compute-0 systemd[1]: Started libpod-conmon-2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259.scope.
Jan 26 17:03:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:03:03 compute-0 podman[398885]: 2026-01-26 17:03:02.963444485 +0000 UTC m=+0.029671498 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7fb01bd5409b5951f09fa0362249188dc1e2e0186c19ea49b9ff3974b14c08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7fb01bd5409b5951f09fa0362249188dc1e2e0186c19ea49b9ff3974b14c08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7fb01bd5409b5951f09fa0362249188dc1e2e0186c19ea49b9ff3974b14c08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd7fb01bd5409b5951f09fa0362249188dc1e2e0186c19ea49b9ff3974b14c08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:03:03 compute-0 podman[398885]: 2026-01-26 17:03:03.075651985 +0000 UTC m=+0.141878998 container init 2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 17:03:03 compute-0 podman[398885]: 2026-01-26 17:03:03.083951628 +0000 UTC m=+0.150178611 container start 2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:03:03 compute-0 podman[398885]: 2026-01-26 17:03:03.087348391 +0000 UTC m=+0.153575384 container attach 2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 17:03:03 compute-0 ceph-mon[75140]: pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:03:03 compute-0 lvm[398978]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:03:03 compute-0 lvm[398978]: VG ceph_vg0 finished
Jan 26 17:03:03 compute-0 lvm[398980]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:03:03 compute-0 lvm[398980]: VG ceph_vg1 finished
Jan 26 17:03:03 compute-0 lvm[398982]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:03:03 compute-0 lvm[398982]: VG ceph_vg2 finished
Jan 26 17:03:03 compute-0 romantic_matsumoto[398901]: {}
Jan 26 17:03:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:03:03 compute-0 systemd[1]: libpod-2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259.scope: Deactivated successfully.
Jan 26 17:03:03 compute-0 podman[398885]: 2026-01-26 17:03:03.885754288 +0000 UTC m=+0.951981311 container died 2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:03:03 compute-0 systemd[1]: libpod-2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259.scope: Consumed 1.273s CPU time.
Jan 26 17:03:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd7fb01bd5409b5951f09fa0362249188dc1e2e0186c19ea49b9ff3974b14c08-merged.mount: Deactivated successfully.
Jan 26 17:03:03 compute-0 podman[398885]: 2026-01-26 17:03:03.93728133 +0000 UTC m=+1.003508323 container remove 2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 17:03:03 compute-0 systemd[1]: libpod-conmon-2059419d5f3c16be7f9c34181514fd3e9ce12bd7740c6751a7ab5da810e3e259.scope: Deactivated successfully.
Jan 26 17:03:03 compute-0 sudo[398811]: pam_unix(sudo:session): session closed for user root
Jan 26 17:03:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:03:03 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:03:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:03:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:03:04 compute-0 sudo[398997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:03:04 compute-0 sudo[398997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:03:04 compute-0 sudo[398997]: pam_unix(sudo:session): session closed for user root
Jan 26 17:03:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:05 compute-0 ceph-mon[75140]: pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:03:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:03:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:03:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:03:06 compute-0 nova_compute[239965]: 2026-01-26 17:03:06.178 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:07 compute-0 nova_compute[239965]: 2026-01-26 17:03:07.001 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:07 compute-0 ceph-mon[75140]: pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:03:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 17:03:09 compute-0 ceph-mon[75140]: pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Jan 26 17:03:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 17:03:10 compute-0 sshd-session[399022]: Invalid user sol from 45.148.10.240 port 57702
Jan 26 17:03:11 compute-0 ceph-mon[75140]: pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 17:03:11 compute-0 sshd-session[399022]: Connection closed by invalid user sol 45.148.10.240 port 57702 [preauth]
Jan 26 17:03:11 compute-0 nova_compute[239965]: 2026-01-26 17:03:11.180 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 17:03:12 compute-0 nova_compute[239965]: 2026-01-26 17:03:12.002 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:13 compute-0 ceph-mon[75140]: pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 17:03:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:15 compute-0 ceph-mon[75140]: pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:16 compute-0 nova_compute[239965]: 2026-01-26 17:03:16.182 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:17 compute-0 nova_compute[239965]: 2026-01-26 17:03:17.004 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:17 compute-0 ceph-mon[75140]: pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:18 compute-0 podman[399024]: 2026-01-26 17:03:18.375327822 +0000 UTC m=+0.055746597 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 26 17:03:19 compute-0 ceph-mon[75140]: pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:21 compute-0 nova_compute[239965]: 2026-01-26 17:03:21.185 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:21 compute-0 ceph-mon[75140]: pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:21 compute-0 podman[399043]: 2026-01-26 17:03:21.411819215 +0000 UTC m=+0.101287543 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, 
org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 17:03:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:22 compute-0 nova_compute[239965]: 2026-01-26 17:03:22.006 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:23 compute-0 ceph-mon[75140]: pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:25 compute-0 ceph-mon[75140]: pgmap v3409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:26 compute-0 nova_compute[239965]: 2026-01-26 17:03:26.188 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:26 compute-0 nova_compute[239965]: 2026-01-26 17:03:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:27 compute-0 nova_compute[239965]: 2026-01-26 17:03:27.043 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:27 compute-0 ceph-mon[75140]: pgmap v3410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:27 compute-0 nova_compute[239965]: 2026-01-26 17:03:27.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:28 compute-0 ceph-mon[75140]: pgmap v3411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:03:28
Jan 26 17:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', '.mgr', '.rgw.root', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 26 17:03:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:03:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:03:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:03:30 compute-0 ceph-mon[75140]: pgmap v3412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:03:31 compute-0 nova_compute[239965]: 2026-01-26 17:03:31.189 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:31 compute-0 nova_compute[239965]: 2026-01-26 17:03:31.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:32 compute-0 nova_compute[239965]: 2026-01-26 17:03:32.046 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:32 compute-0 ceph-mon[75140]: pgmap v3413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:34 compute-0 nova_compute[239965]: 2026-01-26 17:03:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:34 compute-0 nova_compute[239965]: 2026-01-26 17:03:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:34 compute-0 ceph-mon[75140]: pgmap v3414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:36 compute-0 nova_compute[239965]: 2026-01-26 17:03:36.192 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:36 compute-0 ceph-mon[75140]: pgmap v3415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:37 compute-0 nova_compute[239965]: 2026-01-26 17:03:37.048 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:38 compute-0 nova_compute[239965]: 2026-01-26 17:03:38.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:38 compute-0 ceph-mon[75140]: pgmap v3416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:40 compute-0 nova_compute[239965]: 2026-01-26 17:03:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:40 compute-0 nova_compute[239965]: 2026-01-26 17:03:40.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:03:40 compute-0 nova_compute[239965]: 2026-01-26 17:03:40.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:03:40 compute-0 nova_compute[239965]: 2026-01-26 17:03:40.599 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:03:40 compute-0 nova_compute[239965]: 2026-01-26 17:03:40.600 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:40 compute-0 nova_compute[239965]: 2026-01-26 17:03:40.600 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:03:41 compute-0 ceph-mon[75140]: pgmap v3417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:41 compute-0 nova_compute[239965]: 2026-01-26 17:03:41.195 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:41 compute-0 nova_compute[239965]: 2026-01-26 17:03:41.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:03:41 compute-0 nova_compute[239965]: 2026-01-26 17:03:41.873 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:03:41 compute-0 nova_compute[239965]: 2026-01-26 17:03:41.874 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:03:41 compute-0 nova_compute[239965]: 2026-01-26 17:03:41.874 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:03:41 compute-0 nova_compute[239965]: 2026-01-26 17:03:41.875 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:03:41 compute-0 nova_compute[239965]: 2026-01-26 17:03:41.875 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:42 compute-0 nova_compute[239965]: 2026-01-26 17:03:42.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:03:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3851914487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:03:42 compute-0 nova_compute[239965]: 2026-01-26 17:03:42.427 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:42 compute-0 nova_compute[239965]: 2026-01-26 17:03:42.598 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:03:42 compute-0 nova_compute[239965]: 2026-01-26 17:03:42.600 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3582MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:03:42 compute-0 nova_compute[239965]: 2026-01-26 17:03:42.600 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:03:42 compute-0 nova_compute[239965]: 2026-01-26 17:03:42.600 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:03:42 compute-0 nova_compute[239965]: 2026-01-26 17:03:42.827 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:03:42 compute-0 nova_compute[239965]: 2026-01-26 17:03:42.828 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:03:42 compute-0 nova_compute[239965]: 2026-01-26 17:03:42.982 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:03:43 compute-0 ceph-mon[75140]: pgmap v3418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3851914487' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:03:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:03:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2480795353' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:03:43 compute-0 nova_compute[239965]: 2026-01-26 17:03:43.537 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:03:43 compute-0 nova_compute[239965]: 2026-01-26 17:03:43.550 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:03:43 compute-0 nova_compute[239965]: 2026-01-26 17:03:43.660 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:03:43 compute-0 nova_compute[239965]: 2026-01-26 17:03:43.664 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:03:43 compute-0 nova_compute[239965]: 2026-01-26 17:03:43.665 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:03:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2480795353' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:03:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:45 compute-0 ceph-mon[75140]: pgmap v3419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:45 compute-0 sshd-session[399113]: Received disconnect from 91.224.92.108 port 20642:11:  [preauth]
Jan 26 17:03:45 compute-0 sshd-session[399113]: Disconnected from authenticating user root 91.224.92.108 port 20642 [preauth]
Jan 26 17:03:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:46 compute-0 nova_compute[239965]: 2026-01-26 17:03:46.198 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:47 compute-0 nova_compute[239965]: 2026-01-26 17:03:47.050 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:47 compute-0 ceph-mon[75140]: pgmap v3420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:03:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1489877407' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:03:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:03:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1489877407' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:03:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:49 compute-0 ceph-mon[75140]: pgmap v3421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1489877407' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:03:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1489877407' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:03:49 compute-0 podman[399115]: 2026-01-26 17:03:49.383931869 +0000 UTC m=+0.060457042 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:03:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:51 compute-0 ceph-mon[75140]: pgmap v3422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:51 compute-0 nova_compute[239965]: 2026-01-26 17:03:51.200 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:52 compute-0 nova_compute[239965]: 2026-01-26 17:03:52.065 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:52 compute-0 podman[399134]: 2026-01-26 17:03:52.439773426 +0000 UTC m=+0.118088635 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:03:53 compute-0 ceph-mon[75140]: pgmap v3423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:55 compute-0 ceph-mon[75140]: pgmap v3424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:56 compute-0 nova_compute[239965]: 2026-01-26 17:03:56.203 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:57 compute-0 nova_compute[239965]: 2026-01-26 17:03:57.068 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:03:57 compute-0 ceph-mon[75140]: pgmap v3425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:03:59 compute-0 ceph-mon[75140]: pgmap v3426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:03:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:03:59.287 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:03:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:03:59.287 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:03:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:03:59.287 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:03:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:04:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:04:01 compute-0 ceph-mon[75140]: pgmap v3427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:01 compute-0 nova_compute[239965]: 2026-01-26 17:04:01.206 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:02 compute-0 nova_compute[239965]: 2026-01-26 17:04:02.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:03 compute-0 ceph-mon[75140]: pgmap v3428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:04 compute-0 sudo[399162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:04:04 compute-0 sudo[399162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:04:04 compute-0 sudo[399162]: pam_unix(sudo:session): session closed for user root
Jan 26 17:04:04 compute-0 sudo[399187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:04:04 compute-0 sudo[399187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:04:04 compute-0 sudo[399187]: pam_unix(sudo:session): session closed for user root
Jan 26 17:04:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:04:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:04:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:04:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:04:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:04:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:04:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:04:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:04:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:04:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:04:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:04:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:04:04 compute-0 sudo[399243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:04:04 compute-0 sudo[399243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:04:04 compute-0 sudo[399243]: pam_unix(sudo:session): session closed for user root
Jan 26 17:04:04 compute-0 sudo[399268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:04:04 compute-0 sudo[399268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:04:05 compute-0 podman[399303]: 2026-01-26 17:04:05.197847903 +0000 UTC m=+0.039993382 container create 72e746683c554ef623c2bdf28637c74c49c7eb42939b7504cf8c2dc9691226a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 17:04:05 compute-0 ceph-mon[75140]: pgmap v3429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:04:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:04:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:04:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:04:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:04:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:04:05 compute-0 systemd[1]: Started libpod-conmon-72e746683c554ef623c2bdf28637c74c49c7eb42939b7504cf8c2dc9691226a5.scope.
Jan 26 17:04:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:04:05 compute-0 podman[399303]: 2026-01-26 17:04:05.179721988 +0000 UTC m=+0.021867467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:04:05 compute-0 podman[399303]: 2026-01-26 17:04:05.287088959 +0000 UTC m=+0.129234458 container init 72e746683c554ef623c2bdf28637c74c49c7eb42939b7504cf8c2dc9691226a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:04:05 compute-0 podman[399303]: 2026-01-26 17:04:05.300919878 +0000 UTC m=+0.143065337 container start 72e746683c554ef623c2bdf28637c74c49c7eb42939b7504cf8c2dc9691226a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Jan 26 17:04:05 compute-0 podman[399303]: 2026-01-26 17:04:05.305221783 +0000 UTC m=+0.147367242 container attach 72e746683c554ef623c2bdf28637c74c49c7eb42939b7504cf8c2dc9691226a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 17:04:05 compute-0 vigilant_cartwright[399319]: 167 167
Jan 26 17:04:05 compute-0 systemd[1]: libpod-72e746683c554ef623c2bdf28637c74c49c7eb42939b7504cf8c2dc9691226a5.scope: Deactivated successfully.
Jan 26 17:04:05 compute-0 podman[399303]: 2026-01-26 17:04:05.309363645 +0000 UTC m=+0.151509104 container died 72e746683c554ef623c2bdf28637c74c49c7eb42939b7504cf8c2dc9691226a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-de3523d29473167fdc429047ef105a6fb35ab3ad022c4c15c378e000478f1567-merged.mount: Deactivated successfully.
Jan 26 17:04:05 compute-0 podman[399303]: 2026-01-26 17:04:05.356244114 +0000 UTC m=+0.198389583 container remove 72e746683c554ef623c2bdf28637c74c49c7eb42939b7504cf8c2dc9691226a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 17:04:05 compute-0 systemd[1]: libpod-conmon-72e746683c554ef623c2bdf28637c74c49c7eb42939b7504cf8c2dc9691226a5.scope: Deactivated successfully.
Jan 26 17:04:05 compute-0 podman[399343]: 2026-01-26 17:04:05.517910916 +0000 UTC m=+0.040511434 container create 9bc696bd8a274052cd624bbfa3486603ec71588c8d5a96c82e0740e37c57badf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_williams, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:04:05 compute-0 systemd[1]: Started libpod-conmon-9bc696bd8a274052cd624bbfa3486603ec71588c8d5a96c82e0740e37c57badf.scope.
Jan 26 17:04:05 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e401d51df17e81b562acc3812674f061149e4c318e7da3046a89cbe7de80a4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e401d51df17e81b562acc3812674f061149e4c318e7da3046a89cbe7de80a4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e401d51df17e81b562acc3812674f061149e4c318e7da3046a89cbe7de80a4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e401d51df17e81b562acc3812674f061149e4c318e7da3046a89cbe7de80a4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e401d51df17e81b562acc3812674f061149e4c318e7da3046a89cbe7de80a4a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:05 compute-0 podman[399343]: 2026-01-26 17:04:05.500067318 +0000 UTC m=+0.022667856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:04:05 compute-0 podman[399343]: 2026-01-26 17:04:05.600390287 +0000 UTC m=+0.122990805 container init 9bc696bd8a274052cd624bbfa3486603ec71588c8d5a96c82e0740e37c57badf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_williams, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 17:04:05 compute-0 podman[399343]: 2026-01-26 17:04:05.606407194 +0000 UTC m=+0.129007712 container start 9bc696bd8a274052cd624bbfa3486603ec71588c8d5a96c82e0740e37c57badf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_williams, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 17:04:05 compute-0 podman[399343]: 2026-01-26 17:04:05.610505395 +0000 UTC m=+0.133105913 container attach 9bc696bd8a274052cd624bbfa3486603ec71588c8d5a96c82e0740e37c57badf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_williams, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:04:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:06 compute-0 dazzling_williams[399359]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:04:06 compute-0 dazzling_williams[399359]: --> All data devices are unavailable
Jan 26 17:04:06 compute-0 systemd[1]: libpod-9bc696bd8a274052cd624bbfa3486603ec71588c8d5a96c82e0740e37c57badf.scope: Deactivated successfully.
Jan 26 17:04:06 compute-0 podman[399379]: 2026-01-26 17:04:06.135448259 +0000 UTC m=+0.034669320 container died 9bc696bd8a274052cd624bbfa3486603ec71588c8d5a96c82e0740e37c57badf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e401d51df17e81b562acc3812674f061149e4c318e7da3046a89cbe7de80a4a-merged.mount: Deactivated successfully.
Jan 26 17:04:06 compute-0 podman[399379]: 2026-01-26 17:04:06.176931876 +0000 UTC m=+0.076152917 container remove 9bc696bd8a274052cd624bbfa3486603ec71588c8d5a96c82e0740e37c57badf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_williams, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:04:06 compute-0 systemd[1]: libpod-conmon-9bc696bd8a274052cd624bbfa3486603ec71588c8d5a96c82e0740e37c57badf.scope: Deactivated successfully.
Jan 26 17:04:06 compute-0 nova_compute[239965]: 2026-01-26 17:04:06.207 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:06 compute-0 sudo[399268]: pam_unix(sudo:session): session closed for user root
Jan 26 17:04:06 compute-0 sudo[399394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:04:06 compute-0 sudo[399394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:04:06 compute-0 sudo[399394]: pam_unix(sudo:session): session closed for user root
Jan 26 17:04:06 compute-0 sudo[399419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:04:06 compute-0 sudo[399419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:04:06 compute-0 podman[399456]: 2026-01-26 17:04:06.701351287 +0000 UTC m=+0.053126583 container create 8586d387ba8d65a481de171e42f58417a6c6d98a9f661943bd015d43eec88667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:04:06 compute-0 systemd[1]: Started libpod-conmon-8586d387ba8d65a481de171e42f58417a6c6d98a9f661943bd015d43eec88667.scope.
Jan 26 17:04:06 compute-0 podman[399456]: 2026-01-26 17:04:06.678984139 +0000 UTC m=+0.030759485 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:04:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:04:06 compute-0 podman[399456]: 2026-01-26 17:04:06.804792382 +0000 UTC m=+0.156567708 container init 8586d387ba8d65a481de171e42f58417a6c6d98a9f661943bd015d43eec88667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banach, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Jan 26 17:04:06 compute-0 podman[399456]: 2026-01-26 17:04:06.81404642 +0000 UTC m=+0.165821726 container start 8586d387ba8d65a481de171e42f58417a6c6d98a9f661943bd015d43eec88667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banach, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:04:06 compute-0 stupefied_banach[399473]: 167 167
Jan 26 17:04:06 compute-0 systemd[1]: libpod-8586d387ba8d65a481de171e42f58417a6c6d98a9f661943bd015d43eec88667.scope: Deactivated successfully.
Jan 26 17:04:06 compute-0 podman[399456]: 2026-01-26 17:04:06.820709543 +0000 UTC m=+0.172484869 container attach 8586d387ba8d65a481de171e42f58417a6c6d98a9f661943bd015d43eec88667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banach, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:04:06 compute-0 podman[399456]: 2026-01-26 17:04:06.821547663 +0000 UTC m=+0.173322969 container died 8586d387ba8d65a481de171e42f58417a6c6d98a9f661943bd015d43eec88667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 17:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-368c1230fb86e2af5cae58d857029940fce4e673d37bfa930f59b994360a8270-merged.mount: Deactivated successfully.
Jan 26 17:04:06 compute-0 podman[399456]: 2026-01-26 17:04:06.862441025 +0000 UTC m=+0.214216321 container remove 8586d387ba8d65a481de171e42f58417a6c6d98a9f661943bd015d43eec88667 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_banach, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:04:06 compute-0 systemd[1]: libpod-conmon-8586d387ba8d65a481de171e42f58417a6c6d98a9f661943bd015d43eec88667.scope: Deactivated successfully.
Jan 26 17:04:07 compute-0 podman[399496]: 2026-01-26 17:04:07.038473099 +0000 UTC m=+0.055112371 container create ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 17:04:07 compute-0 systemd[1]: Started libpod-conmon-ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655.scope.
Jan 26 17:04:07 compute-0 nova_compute[239965]: 2026-01-26 17:04:07.104 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:04:07 compute-0 podman[399496]: 2026-01-26 17:04:07.019014552 +0000 UTC m=+0.035653854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3d9be41b774b6b9a3a06b27ee3fefa015cae57a26c41b1ae51c8d17afa2408/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3d9be41b774b6b9a3a06b27ee3fefa015cae57a26c41b1ae51c8d17afa2408/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3d9be41b774b6b9a3a06b27ee3fefa015cae57a26c41b1ae51c8d17afa2408/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3d9be41b774b6b9a3a06b27ee3fefa015cae57a26c41b1ae51c8d17afa2408/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:07 compute-0 podman[399496]: 2026-01-26 17:04:07.127698496 +0000 UTC m=+0.144337768 container init ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:04:07 compute-0 podman[399496]: 2026-01-26 17:04:07.137758372 +0000 UTC m=+0.154397644 container start ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:04:07 compute-0 podman[399496]: 2026-01-26 17:04:07.141046512 +0000 UTC m=+0.157685784 container attach ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:04:07 compute-0 ceph-mon[75140]: pgmap v3430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:07 compute-0 nice_babbage[399513]: {
Jan 26 17:04:07 compute-0 nice_babbage[399513]:     "0": [
Jan 26 17:04:07 compute-0 nice_babbage[399513]:         {
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "devices": [
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "/dev/loop3"
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             ],
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_name": "ceph_lv0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_size": "21470642176",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "name": "ceph_lv0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "tags": {
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.cluster_name": "ceph",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.crush_device_class": "",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.encrypted": "0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.objectstore": "bluestore",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.osd_id": "0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.type": "block",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.vdo": "0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.with_tpm": "0"
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             },
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "type": "block",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "vg_name": "ceph_vg0"
Jan 26 17:04:07 compute-0 nice_babbage[399513]:         }
Jan 26 17:04:07 compute-0 nice_babbage[399513]:     ],
Jan 26 17:04:07 compute-0 nice_babbage[399513]:     "1": [
Jan 26 17:04:07 compute-0 nice_babbage[399513]:         {
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "devices": [
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "/dev/loop4"
Jan 26 17:04:07 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             ],
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_name": "ceph_lv1",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_size": "21470642176",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "name": "ceph_lv1",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "tags": {
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.cluster_name": "ceph",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.crush_device_class": "",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.encrypted": "0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.objectstore": "bluestore",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.osd_id": "1",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.type": "block",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.vdo": "0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.with_tpm": "0"
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             },
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "type": "block",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "vg_name": "ceph_vg1"
Jan 26 17:04:07 compute-0 nice_babbage[399513]:         }
Jan 26 17:04:07 compute-0 nice_babbage[399513]:     ],
Jan 26 17:04:07 compute-0 nice_babbage[399513]:     "2": [
Jan 26 17:04:07 compute-0 nice_babbage[399513]:         {
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "devices": [
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "/dev/loop5"
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             ],
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_name": "ceph_lv2",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_size": "21470642176",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "name": "ceph_lv2",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "tags": {
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.cluster_name": "ceph",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.crush_device_class": "",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.encrypted": "0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.objectstore": "bluestore",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.osd_id": "2",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.type": "block",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.vdo": "0",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:                 "ceph.with_tpm": "0"
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             },
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "type": "block",
Jan 26 17:04:07 compute-0 nice_babbage[399513]:             "vg_name": "ceph_vg2"
Jan 26 17:04:07 compute-0 nice_babbage[399513]:         }
Jan 26 17:04:07 compute-0 nice_babbage[399513]:     ]
Jan 26 17:04:07 compute-0 nice_babbage[399513]: }
Jan 26 17:04:07 compute-0 systemd[1]: libpod-ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655.scope: Deactivated successfully.
Jan 26 17:04:07 compute-0 conmon[399513]: conmon ea1cdaf45782118d6d91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655.scope/container/memory.events
Jan 26 17:04:07 compute-0 podman[399496]: 2026-01-26 17:04:07.451934151 +0000 UTC m=+0.468573423 container died ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 17:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb3d9be41b774b6b9a3a06b27ee3fefa015cae57a26c41b1ae51c8d17afa2408-merged.mount: Deactivated successfully.
Jan 26 17:04:07 compute-0 podman[399496]: 2026-01-26 17:04:07.494837933 +0000 UTC m=+0.511477205 container remove ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_babbage, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:04:07 compute-0 systemd[1]: libpod-conmon-ea1cdaf45782118d6d91ec39aa2fb000ba0f8a9aea4f0baf588458cf787d5655.scope: Deactivated successfully.
Jan 26 17:04:07 compute-0 sudo[399419]: pam_unix(sudo:session): session closed for user root
Jan 26 17:04:07 compute-0 sudo[399533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:04:07 compute-0 sudo[399533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:04:07 compute-0 sudo[399533]: pam_unix(sudo:session): session closed for user root
Jan 26 17:04:07 compute-0 sudo[399558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:04:07 compute-0 sudo[399558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:04:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:07 compute-0 podman[399595]: 2026-01-26 17:04:07.931811351 +0000 UTC m=+0.042348268 container create 2c83c6521564a24aa768ac8e74b8aa0650d971eb47bc41b0e046b8385c1208ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 17:04:07 compute-0 systemd[1]: Started libpod-conmon-2c83c6521564a24aa768ac8e74b8aa0650d971eb47bc41b0e046b8385c1208ea.scope.
Jan 26 17:04:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:04:08 compute-0 podman[399595]: 2026-01-26 17:04:08.004476572 +0000 UTC m=+0.115013499 container init 2c83c6521564a24aa768ac8e74b8aa0650d971eb47bc41b0e046b8385c1208ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 17:04:08 compute-0 podman[399595]: 2026-01-26 17:04:08.010447928 +0000 UTC m=+0.120984845 container start 2c83c6521564a24aa768ac8e74b8aa0650d971eb47bc41b0e046b8385c1208ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:04:08 compute-0 podman[399595]: 2026-01-26 17:04:07.916129547 +0000 UTC m=+0.026666494 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:04:08 compute-0 ecstatic_moore[399612]: 167 167
Jan 26 17:04:08 compute-0 systemd[1]: libpod-2c83c6521564a24aa768ac8e74b8aa0650d971eb47bc41b0e046b8385c1208ea.scope: Deactivated successfully.
Jan 26 17:04:08 compute-0 podman[399595]: 2026-01-26 17:04:08.0162441 +0000 UTC m=+0.126781017 container attach 2c83c6521564a24aa768ac8e74b8aa0650d971eb47bc41b0e046b8385c1208ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 26 17:04:08 compute-0 podman[399595]: 2026-01-26 17:04:08.016948378 +0000 UTC m=+0.127485295 container died 2c83c6521564a24aa768ac8e74b8aa0650d971eb47bc41b0e046b8385c1208ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 17:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-70ab46aa82ae435929adff14d544e8b0e0497ea6fab5bdbf26b22d6e3f224a93-merged.mount: Deactivated successfully.
Jan 26 17:04:08 compute-0 podman[399595]: 2026-01-26 17:04:08.055461851 +0000 UTC m=+0.165998768 container remove 2c83c6521564a24aa768ac8e74b8aa0650d971eb47bc41b0e046b8385c1208ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:04:08 compute-0 systemd[1]: libpod-conmon-2c83c6521564a24aa768ac8e74b8aa0650d971eb47bc41b0e046b8385c1208ea.scope: Deactivated successfully.
Jan 26 17:04:08 compute-0 podman[399635]: 2026-01-26 17:04:08.223003237 +0000 UTC m=+0.046823588 container create c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 17:04:08 compute-0 systemd[1]: Started libpod-conmon-c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6.scope.
Jan 26 17:04:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1208affc3d2306dffc82a02d33bfd73d4ce2664bd91a10c858a60c0e9f1b5bce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:08 compute-0 podman[399635]: 2026-01-26 17:04:08.200429674 +0000 UTC m=+0.024250045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1208affc3d2306dffc82a02d33bfd73d4ce2664bd91a10c858a60c0e9f1b5bce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1208affc3d2306dffc82a02d33bfd73d4ce2664bd91a10c858a60c0e9f1b5bce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1208affc3d2306dffc82a02d33bfd73d4ce2664bd91a10c858a60c0e9f1b5bce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:04:08 compute-0 podman[399635]: 2026-01-26 17:04:08.310544513 +0000 UTC m=+0.134364874 container init c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Jan 26 17:04:08 compute-0 podman[399635]: 2026-01-26 17:04:08.31652897 +0000 UTC m=+0.140349331 container start c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:04:08 compute-0 podman[399635]: 2026-01-26 17:04:08.322721741 +0000 UTC m=+0.146542092 container attach c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:04:09 compute-0 lvm[399729]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:04:09 compute-0 lvm[399729]: VG ceph_vg0 finished
Jan 26 17:04:09 compute-0 lvm[399730]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:04:09 compute-0 lvm[399730]: VG ceph_vg1 finished
Jan 26 17:04:09 compute-0 lvm[399732]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:04:09 compute-0 lvm[399732]: VG ceph_vg2 finished
Jan 26 17:04:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:09 compute-0 elated_elbakyan[399651]: {}
Jan 26 17:04:09 compute-0 systemd[1]: libpod-c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6.scope: Deactivated successfully.
Jan 26 17:04:09 compute-0 systemd[1]: libpod-c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6.scope: Consumed 1.310s CPU time.
Jan 26 17:04:09 compute-0 podman[399635]: 2026-01-26 17:04:09.17634303 +0000 UTC m=+1.000163411 container died c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 17:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1208affc3d2306dffc82a02d33bfd73d4ce2664bd91a10c858a60c0e9f1b5bce-merged.mount: Deactivated successfully.
Jan 26 17:04:09 compute-0 podman[399635]: 2026-01-26 17:04:09.23022235 +0000 UTC m=+1.054042701 container remove c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_elbakyan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Jan 26 17:04:09 compute-0 ceph-mon[75140]: pgmap v3431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:09 compute-0 systemd[1]: libpod-conmon-c9464c6ee28ecfd0c5fbd9b0518e6e7037c778997dfb33495c6676141bdee0e6.scope: Deactivated successfully.
Jan 26 17:04:09 compute-0 sudo[399558]: pam_unix(sudo:session): session closed for user root
Jan 26 17:04:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:04:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:04:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:04:09 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:04:09 compute-0 sudo[399747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:04:09 compute-0 sudo[399747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:04:09 compute-0 sudo[399747]: pam_unix(sudo:session): session closed for user root
Jan 26 17:04:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:10 compute-0 sshd-session[399772]: Connection closed by 154.117.199.5 port 11858 [preauth]
Jan 26 17:04:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:04:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:04:11 compute-0 nova_compute[239965]: 2026-01-26 17:04:11.212 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:11 compute-0 ceph-mon[75140]: pgmap v3432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:12 compute-0 nova_compute[239965]: 2026-01-26 17:04:12.106 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:13 compute-0 ceph-mon[75140]: pgmap v3433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:14 compute-0 ceph-mon[75140]: pgmap v3434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:16 compute-0 nova_compute[239965]: 2026-01-26 17:04:16.215 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:16 compute-0 ceph-mon[75140]: pgmap v3435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:17 compute-0 nova_compute[239965]: 2026-01-26 17:04:17.137 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:18 compute-0 ceph-mon[75140]: pgmap v3436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:20 compute-0 podman[399774]: 2026-01-26 17:04:20.404280115 +0000 UTC m=+0.067247359 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Jan 26 17:04:20 compute-0 ceph-mon[75140]: pgmap v3437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:21 compute-0 nova_compute[239965]: 2026-01-26 17:04:21.217 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:22 compute-0 nova_compute[239965]: 2026-01-26 17:04:22.139 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:22 compute-0 ceph-mon[75140]: pgmap v3438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:23 compute-0 podman[399793]: 2026-01-26 17:04:23.407095663 +0000 UTC m=+0.093790010 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 26 17:04:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:24 compute-0 ceph-mon[75140]: pgmap v3439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:26 compute-0 nova_compute[239965]: 2026-01-26 17:04:26.218 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:27 compute-0 ceph-mon[75140]: pgmap v3440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:27 compute-0 nova_compute[239965]: 2026-01-26 17:04:27.140 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:04:28
Jan 26 17:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'images', 'default.rgw.control', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'backups']
Jan 26 17:04:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:04:29 compute-0 ceph-mon[75140]: pgmap v3441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:29 compute-0 nova_compute[239965]: 2026-01-26 17:04:29.667 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:29 compute-0 nova_compute[239965]: 2026-01-26 17:04:29.668 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:04:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:04:31 compute-0 ceph-mon[75140]: pgmap v3442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:04:31 compute-0 nova_compute[239965]: 2026-01-26 17:04:31.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:32 compute-0 nova_compute[239965]: 2026-01-26 17:04:32.141 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:33 compute-0 ceph-mon[75140]: pgmap v3443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:33 compute-0 nova_compute[239965]: 2026-01-26 17:04:33.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:35 compute-0 ceph-mon[75140]: pgmap v3444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:36 compute-0 nova_compute[239965]: 2026-01-26 17:04:36.220 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:36 compute-0 nova_compute[239965]: 2026-01-26 17:04:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:36 compute-0 nova_compute[239965]: 2026-01-26 17:04:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:37 compute-0 ceph-mon[75140]: pgmap v3445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:37 compute-0 nova_compute[239965]: 2026-01-26 17:04:37.145 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:39 compute-0 ceph-mon[75140]: pgmap v3446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:39 compute-0 nova_compute[239965]: 2026-01-26 17:04:39.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:40 compute-0 nova_compute[239965]: 2026-01-26 17:04:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:40 compute-0 nova_compute[239965]: 2026-01-26 17:04:40.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:04:41 compute-0 ceph-mon[75140]: pgmap v3447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:41 compute-0 nova_compute[239965]: 2026-01-26 17:04:41.224 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:42 compute-0 nova_compute[239965]: 2026-01-26 17:04:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:42 compute-0 nova_compute[239965]: 2026-01-26 17:04:42.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:04:42 compute-0 nova_compute[239965]: 2026-01-26 17:04:42.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:04:42 compute-0 nova_compute[239965]: 2026-01-26 17:04:42.823 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:04:42 compute-0 nova_compute[239965]: 2026-01-26 17:04:42.825 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:42 compute-0 ceph-mon[75140]: pgmap v3448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:43 compute-0 nova_compute[239965]: 2026-01-26 17:04:43.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:43 compute-0 nova_compute[239965]: 2026-01-26 17:04:43.572 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:04:43 compute-0 nova_compute[239965]: 2026-01-26 17:04:43.572 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:04:43 compute-0 nova_compute[239965]: 2026-01-26 17:04:43.572 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:04:43 compute-0 nova_compute[239965]: 2026-01-26 17:04:43.573 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:04:43 compute-0 nova_compute[239965]: 2026-01-26 17:04:43.573 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:04:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2376633125' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.153 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.323 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.324 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3555MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.324 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.325 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.472 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.472 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.550 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.566 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.567 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.582 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.602 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:04:44 compute-0 nova_compute[239965]: 2026-01-26 17:04:44.617 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:04:44 compute-0 ceph-mon[75140]: pgmap v3449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2376633125' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:04:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:04:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3754326467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:04:45 compute-0 nova_compute[239965]: 2026-01-26 17:04:45.202 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:04:45 compute-0 nova_compute[239965]: 2026-01-26 17:04:45.210 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:04:45 compute-0 nova_compute[239965]: 2026-01-26 17:04:45.226 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:04:45 compute-0 nova_compute[239965]: 2026-01-26 17:04:45.228 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:04:45 compute-0 nova_compute[239965]: 2026-01-26 17:04:45.228 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:04:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3754326467' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:04:46 compute-0 nova_compute[239965]: 2026-01-26 17:04:46.226 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:47 compute-0 ceph-mon[75140]: pgmap v3450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:47 compute-0 nova_compute[239965]: 2026-01-26 17:04:47.221 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:04:47 compute-0 nova_compute[239965]: 2026-01-26 17:04:47.825 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.022064) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447088022100, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1116, "num_deletes": 251, "total_data_size": 1711077, "memory_usage": 1742112, "flush_reason": "Manual Compaction"}
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447088033443, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1675521, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69510, "largest_seqno": 70625, "table_properties": {"data_size": 1670093, "index_size": 2886, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11485, "raw_average_key_size": 19, "raw_value_size": 1659232, "raw_average_value_size": 2850, "num_data_blocks": 129, "num_entries": 582, "num_filter_entries": 582, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769446979, "oldest_key_time": 1769446979, "file_creation_time": 1769447088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 11433 microseconds, and 4872 cpu microseconds.
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.033492) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1675521 bytes OK
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.033513) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.035881) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.035897) EVENT_LOG_v1 {"time_micros": 1769447088035892, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.035915) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1705918, prev total WAL file size 1705918, number of live WAL files 2.
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.036592) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1636KB)], [164(9891KB)]
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447088036624, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11804537, "oldest_snapshot_seqno": -1}
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8931 keys, 9985974 bytes, temperature: kUnknown
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447088101892, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9985974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9930877, "index_size": 31688, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22341, "raw_key_size": 233809, "raw_average_key_size": 26, "raw_value_size": 9775620, "raw_average_value_size": 1094, "num_data_blocks": 1218, "num_entries": 8931, "num_filter_entries": 8931, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769447088, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.102198) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9985974 bytes
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.104767) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 180.5 rd, 152.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.7 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(13.0) write-amplify(6.0) OK, records in: 9445, records dropped: 514 output_compression: NoCompression
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.104794) EVENT_LOG_v1 {"time_micros": 1769447088104781, "job": 102, "event": "compaction_finished", "compaction_time_micros": 65389, "compaction_time_cpu_micros": 23478, "output_level": 6, "num_output_files": 1, "total_output_size": 9985974, "num_input_records": 9445, "num_output_records": 8931, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447088105463, "job": 102, "event": "table_file_deletion", "file_number": 166}
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447088108725, "job": 102, "event": "table_file_deletion", "file_number": 164}
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.036503) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.108838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.108845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.108847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.108849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:04:48 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:04:48.108851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:04:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:04:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806997852' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:04:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:04:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/806997852' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:04:49 compute-0 ceph-mon[75140]: pgmap v3451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/806997852' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:04:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/806997852' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:04:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:04:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:51 compute-0 ceph-mon[75140]: pgmap v3452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:51 compute-0 nova_compute[239965]: 2026-01-26 17:04:51.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:51 compute-0 podman[399863]: 2026-01-26 17:04:51.402429405 +0000 UTC m=+0.083069628 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 17:04:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:52 compute-0 nova_compute[239965]: 2026-01-26 17:04:52.826 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:53 compute-0 ceph-mon[75140]: pgmap v3453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:54 compute-0 podman[399882]: 2026-01-26 17:04:54.491051041 +0000 UTC m=+0.174555470 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 17:04:55 compute-0 ceph-mon[75140]: pgmap v3454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:56 compute-0 nova_compute[239965]: 2026-01-26 17:04:56.231 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:57 compute-0 ceph-mon[75140]: pgmap v3455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:57 compute-0 nova_compute[239965]: 2026-01-26 17:04:57.828 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:04:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:59 compute-0 ceph-mon[75140]: pgmap v3456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:04:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:04:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:04:59.287 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:04:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:04:59.288 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:04:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:04:59.288 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:04:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:05:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:05:01 compute-0 ceph-mon[75140]: pgmap v3457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:01 compute-0 nova_compute[239965]: 2026-01-26 17:05:01.267 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:02 compute-0 nova_compute[239965]: 2026-01-26 17:05:02.830 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:03 compute-0 ceph-mon[75140]: pgmap v3458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:05 compute-0 ceph-mon[75140]: pgmap v3459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:06 compute-0 nova_compute[239965]: 2026-01-26 17:05:06.269 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:07 compute-0 ceph-mon[75140]: pgmap v3460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:07 compute-0 nova_compute[239965]: 2026-01-26 17:05:07.833 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:09 compute-0 ceph-mon[75140]: pgmap v3461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:09 compute-0 sudo[399910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:05:09 compute-0 sudo[399910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:05:09 compute-0 sudo[399910]: pam_unix(sudo:session): session closed for user root
Jan 26 17:05:09 compute-0 sudo[399935]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:05:09 compute-0 sudo[399935]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:05:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:10 compute-0 sudo[399935]: pam_unix(sudo:session): session closed for user root
Jan 26 17:05:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:05:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:05:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:05:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:05:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:05:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:05:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:05:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:05:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:05:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:05:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:05:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:05:10 compute-0 sudo[399990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:05:10 compute-0 sudo[399990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:05:10 compute-0 sudo[399990]: pam_unix(sudo:session): session closed for user root
Jan 26 17:05:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:05:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:05:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:05:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:05:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:05:10 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:05:10 compute-0 sudo[400015]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:05:10 compute-0 sudo[400015]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:05:10 compute-0 podman[400051]: 2026-01-26 17:05:10.51907998 +0000 UTC m=+0.056724312 container create bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_beaver, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:05:10 compute-0 systemd[1]: Started libpod-conmon-bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356.scope.
Jan 26 17:05:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:05:10 compute-0 podman[400051]: 2026-01-26 17:05:10.499205433 +0000 UTC m=+0.036849785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:05:10 compute-0 podman[400051]: 2026-01-26 17:05:10.607236401 +0000 UTC m=+0.144880743 container init bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_beaver, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 17:05:10 compute-0 podman[400051]: 2026-01-26 17:05:10.616892888 +0000 UTC m=+0.154537220 container start bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_beaver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:05:10 compute-0 podman[400051]: 2026-01-26 17:05:10.622021303 +0000 UTC m=+0.159665665 container attach bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_beaver, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:05:10 compute-0 laughing_beaver[400067]: 167 167
Jan 26 17:05:10 compute-0 systemd[1]: libpod-bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356.scope: Deactivated successfully.
Jan 26 17:05:10 compute-0 conmon[400067]: conmon bbd96f6e54791683576b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356.scope/container/memory.events
Jan 26 17:05:10 compute-0 podman[400051]: 2026-01-26 17:05:10.626070303 +0000 UTC m=+0.163714635 container died bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_beaver, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 17:05:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5111fa2bc2dff29709b1a0c4acfd8ac24f458cdbdd50fb696cd7627720561ab-merged.mount: Deactivated successfully.
Jan 26 17:05:10 compute-0 podman[400051]: 2026-01-26 17:05:10.669338303 +0000 UTC m=+0.206982645 container remove bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:05:10 compute-0 systemd[1]: libpod-conmon-bbd96f6e54791683576b86ee1d66c4531e422951f622234c1e18246bcc346356.scope: Deactivated successfully.
Jan 26 17:05:10 compute-0 podman[400091]: 2026-01-26 17:05:10.833068098 +0000 UTC m=+0.048875290 container create e8918144fc2610867ed30f0c4c51201ac585f37f8430c6054662884d0202300d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 26 17:05:10 compute-0 systemd[1]: Started libpod-conmon-e8918144fc2610867ed30f0c4c51201ac585f37f8430c6054662884d0202300d.scope.
Jan 26 17:05:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:05:10 compute-0 podman[400091]: 2026-01-26 17:05:10.811987801 +0000 UTC m=+0.027795023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27b10f5b265c6c55cc265e234644ce16508e90953418c49876fc20c5ecd59a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27b10f5b265c6c55cc265e234644ce16508e90953418c49876fc20c5ecd59a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27b10f5b265c6c55cc265e234644ce16508e90953418c49876fc20c5ecd59a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27b10f5b265c6c55cc265e234644ce16508e90953418c49876fc20c5ecd59a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a27b10f5b265c6c55cc265e234644ce16508e90953418c49876fc20c5ecd59a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:10 compute-0 podman[400091]: 2026-01-26 17:05:10.93590973 +0000 UTC m=+0.151717012 container init e8918144fc2610867ed30f0c4c51201ac585f37f8430c6054662884d0202300d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:05:10 compute-0 podman[400091]: 2026-01-26 17:05:10.944297655 +0000 UTC m=+0.160104847 container start e8918144fc2610867ed30f0c4c51201ac585f37f8430c6054662884d0202300d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:05:10 compute-0 podman[400091]: 2026-01-26 17:05:10.948370234 +0000 UTC m=+0.164177426 container attach e8918144fc2610867ed30f0c4c51201ac585f37f8430c6054662884d0202300d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:05:11 compute-0 ceph-mon[75140]: pgmap v3462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:11 compute-0 nova_compute[239965]: 2026-01-26 17:05:11.271 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:11 compute-0 compassionate_villani[400107]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:05:11 compute-0 compassionate_villani[400107]: --> All data devices are unavailable
Jan 26 17:05:11 compute-0 systemd[1]: libpod-e8918144fc2610867ed30f0c4c51201ac585f37f8430c6054662884d0202300d.scope: Deactivated successfully.
Jan 26 17:05:11 compute-0 podman[400091]: 2026-01-26 17:05:11.465943084 +0000 UTC m=+0.681750346 container died e8918144fc2610867ed30f0c4c51201ac585f37f8430c6054662884d0202300d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:05:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a27b10f5b265c6c55cc265e234644ce16508e90953418c49876fc20c5ecd59a4-merged.mount: Deactivated successfully.
Jan 26 17:05:11 compute-0 podman[400091]: 2026-01-26 17:05:11.514728441 +0000 UTC m=+0.730535623 container remove e8918144fc2610867ed30f0c4c51201ac585f37f8430c6054662884d0202300d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_villani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:05:11 compute-0 systemd[1]: libpod-conmon-e8918144fc2610867ed30f0c4c51201ac585f37f8430c6054662884d0202300d.scope: Deactivated successfully.
Jan 26 17:05:11 compute-0 sudo[400015]: pam_unix(sudo:session): session closed for user root
Jan 26 17:05:11 compute-0 sudo[400139]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:05:11 compute-0 sudo[400139]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:05:11 compute-0 sudo[400139]: pam_unix(sudo:session): session closed for user root
Jan 26 17:05:11 compute-0 sudo[400164]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:05:11 compute-0 sudo[400164]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:05:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:12 compute-0 podman[400201]: 2026-01-26 17:05:12.03226361 +0000 UTC m=+0.043378785 container create 0467e5f2b556bb9f192f395dd752be5973fb2c2b12e51243d8d197bc3313f2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lehmann, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:05:12 compute-0 systemd[1]: Started libpod-conmon-0467e5f2b556bb9f192f395dd752be5973fb2c2b12e51243d8d197bc3313f2ba.scope.
Jan 26 17:05:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:05:12 compute-0 podman[400201]: 2026-01-26 17:05:12.012362981 +0000 UTC m=+0.023478166 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:05:12 compute-0 podman[400201]: 2026-01-26 17:05:12.119798965 +0000 UTC m=+0.130914160 container init 0467e5f2b556bb9f192f395dd752be5973fb2c2b12e51243d8d197bc3313f2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lehmann, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:05:12 compute-0 podman[400201]: 2026-01-26 17:05:12.128194201 +0000 UTC m=+0.139309366 container start 0467e5f2b556bb9f192f395dd752be5973fb2c2b12e51243d8d197bc3313f2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 17:05:12 compute-0 podman[400201]: 2026-01-26 17:05:12.131850021 +0000 UTC m=+0.142965226 container attach 0467e5f2b556bb9f192f395dd752be5973fb2c2b12e51243d8d197bc3313f2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lehmann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:05:12 compute-0 happy_lehmann[400217]: 167 167
Jan 26 17:05:12 compute-0 systemd[1]: libpod-0467e5f2b556bb9f192f395dd752be5973fb2c2b12e51243d8d197bc3313f2ba.scope: Deactivated successfully.
Jan 26 17:05:12 compute-0 podman[400201]: 2026-01-26 17:05:12.134558028 +0000 UTC m=+0.145673233 container died 0467e5f2b556bb9f192f395dd752be5973fb2c2b12e51243d8d197bc3313f2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lehmann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 17:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-87ba06297990badfacb3ecb7ec1771409f4d4dd717fa63014158a4b8bc7e68e5-merged.mount: Deactivated successfully.
Jan 26 17:05:12 compute-0 podman[400201]: 2026-01-26 17:05:12.176779133 +0000 UTC m=+0.187894328 container remove 0467e5f2b556bb9f192f395dd752be5973fb2c2b12e51243d8d197bc3313f2ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 17:05:12 compute-0 systemd[1]: libpod-conmon-0467e5f2b556bb9f192f395dd752be5973fb2c2b12e51243d8d197bc3313f2ba.scope: Deactivated successfully.
Jan 26 17:05:12 compute-0 podman[400241]: 2026-01-26 17:05:12.386552626 +0000 UTC m=+0.049719060 container create 524119e44c7602f7ed2f587c931097b45f6c349efcfcb174eaf08b3d53f8534d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 17:05:12 compute-0 systemd[1]: Started libpod-conmon-524119e44c7602f7ed2f587c931097b45f6c349efcfcb174eaf08b3d53f8534d.scope.
Jan 26 17:05:12 compute-0 podman[400241]: 2026-01-26 17:05:12.362033484 +0000 UTC m=+0.025199978 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:05:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3a32db26f0cf4829e0e9dead7a170babceb6887027323bde59d5dd73ab424f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3a32db26f0cf4829e0e9dead7a170babceb6887027323bde59d5dd73ab424f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3a32db26f0cf4829e0e9dead7a170babceb6887027323bde59d5dd73ab424f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e3a32db26f0cf4829e0e9dead7a170babceb6887027323bde59d5dd73ab424f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:12 compute-0 podman[400241]: 2026-01-26 17:05:12.482248812 +0000 UTC m=+0.145415296 container init 524119e44c7602f7ed2f587c931097b45f6c349efcfcb174eaf08b3d53f8534d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:05:12 compute-0 podman[400241]: 2026-01-26 17:05:12.501562836 +0000 UTC m=+0.164729240 container start 524119e44c7602f7ed2f587c931097b45f6c349efcfcb174eaf08b3d53f8534d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_einstein, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:05:12 compute-0 podman[400241]: 2026-01-26 17:05:12.505083141 +0000 UTC m=+0.168249635 container attach 524119e44c7602f7ed2f587c931097b45f6c349efcfcb174eaf08b3d53f8534d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_einstein, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:05:12 compute-0 funny_einstein[400258]: {
Jan 26 17:05:12 compute-0 funny_einstein[400258]:     "0": [
Jan 26 17:05:12 compute-0 funny_einstein[400258]:         {
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "devices": [
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "/dev/loop3"
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             ],
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_name": "ceph_lv0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_size": "21470642176",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "name": "ceph_lv0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "tags": {
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.cluster_name": "ceph",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.crush_device_class": "",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.encrypted": "0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.objectstore": "bluestore",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.osd_id": "0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.type": "block",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.vdo": "0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.with_tpm": "0"
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             },
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "type": "block",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "vg_name": "ceph_vg0"
Jan 26 17:05:12 compute-0 funny_einstein[400258]:         }
Jan 26 17:05:12 compute-0 funny_einstein[400258]:     ],
Jan 26 17:05:12 compute-0 funny_einstein[400258]:     "1": [
Jan 26 17:05:12 compute-0 funny_einstein[400258]:         {
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "devices": [
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "/dev/loop4"
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             ],
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_name": "ceph_lv1",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_size": "21470642176",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "name": "ceph_lv1",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "tags": {
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.cluster_name": "ceph",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.crush_device_class": "",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.encrypted": "0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.objectstore": "bluestore",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.osd_id": "1",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.type": "block",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.vdo": "0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.with_tpm": "0"
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             },
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "type": "block",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "vg_name": "ceph_vg1"
Jan 26 17:05:12 compute-0 funny_einstein[400258]:         }
Jan 26 17:05:12 compute-0 funny_einstein[400258]:     ],
Jan 26 17:05:12 compute-0 funny_einstein[400258]:     "2": [
Jan 26 17:05:12 compute-0 funny_einstein[400258]:         {
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "devices": [
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "/dev/loop5"
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             ],
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_name": "ceph_lv2",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_size": "21470642176",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "name": "ceph_lv2",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "tags": {
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.cluster_name": "ceph",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.crush_device_class": "",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.encrypted": "0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.objectstore": "bluestore",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.osd_id": "2",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.type": "block",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.vdo": "0",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:                 "ceph.with_tpm": "0"
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             },
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "type": "block",
Jan 26 17:05:12 compute-0 funny_einstein[400258]:             "vg_name": "ceph_vg2"
Jan 26 17:05:12 compute-0 funny_einstein[400258]:         }
Jan 26 17:05:12 compute-0 funny_einstein[400258]:     ]
Jan 26 17:05:12 compute-0 funny_einstein[400258]: }
Jan 26 17:05:12 compute-0 systemd[1]: libpod-524119e44c7602f7ed2f587c931097b45f6c349efcfcb174eaf08b3d53f8534d.scope: Deactivated successfully.
Jan 26 17:05:12 compute-0 podman[400241]: 2026-01-26 17:05:12.80440344 +0000 UTC m=+0.467569844 container died 524119e44c7602f7ed2f587c931097b45f6c349efcfcb174eaf08b3d53f8534d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_einstein, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Jan 26 17:05:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-9e3a32db26f0cf4829e0e9dead7a170babceb6887027323bde59d5dd73ab424f-merged.mount: Deactivated successfully.
Jan 26 17:05:12 compute-0 nova_compute[239965]: 2026-01-26 17:05:12.835 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:12 compute-0 podman[400241]: 2026-01-26 17:05:12.849050785 +0000 UTC m=+0.512217179 container remove 524119e44c7602f7ed2f587c931097b45f6c349efcfcb174eaf08b3d53f8534d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:05:12 compute-0 systemd[1]: libpod-conmon-524119e44c7602f7ed2f587c931097b45f6c349efcfcb174eaf08b3d53f8534d.scope: Deactivated successfully.
Jan 26 17:05:12 compute-0 sudo[400164]: pam_unix(sudo:session): session closed for user root
Jan 26 17:05:12 compute-0 sudo[400281]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:05:12 compute-0 sudo[400281]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:05:12 compute-0 sudo[400281]: pam_unix(sudo:session): session closed for user root
Jan 26 17:05:13 compute-0 sudo[400306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:05:13 compute-0 sudo[400306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:05:13 compute-0 ceph-mon[75140]: pgmap v3463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:13 compute-0 podman[400343]: 2026-01-26 17:05:13.347793013 +0000 UTC m=+0.045202719 container create 770bdbd168ec444d0495ac0df7bf1c420c54bb04553a2bedd31fe2184e097afc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_tu, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 17:05:13 compute-0 systemd[1]: Started libpod-conmon-770bdbd168ec444d0495ac0df7bf1c420c54bb04553a2bedd31fe2184e097afc.scope.
Jan 26 17:05:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:05:13 compute-0 podman[400343]: 2026-01-26 17:05:13.327815423 +0000 UTC m=+0.025225149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:05:13 compute-0 podman[400343]: 2026-01-26 17:05:13.430083741 +0000 UTC m=+0.127493467 container init 770bdbd168ec444d0495ac0df7bf1c420c54bb04553a2bedd31fe2184e097afc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_tu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:05:13 compute-0 podman[400343]: 2026-01-26 17:05:13.444257208 +0000 UTC m=+0.141666914 container start 770bdbd168ec444d0495ac0df7bf1c420c54bb04553a2bedd31fe2184e097afc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_tu, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:05:13 compute-0 podman[400343]: 2026-01-26 17:05:13.448754038 +0000 UTC m=+0.146163804 container attach 770bdbd168ec444d0495ac0df7bf1c420c54bb04553a2bedd31fe2184e097afc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:05:13 compute-0 cranky_tu[400359]: 167 167
Jan 26 17:05:13 compute-0 systemd[1]: libpod-770bdbd168ec444d0495ac0df7bf1c420c54bb04553a2bedd31fe2184e097afc.scope: Deactivated successfully.
Jan 26 17:05:13 compute-0 podman[400343]: 2026-01-26 17:05:13.450732537 +0000 UTC m=+0.148142263 container died 770bdbd168ec444d0495ac0df7bf1c420c54bb04553a2bedd31fe2184e097afc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:05:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b3f797fe9999500165a5ce483dba781ec8eba519873ab27a7eddc675dea9693-merged.mount: Deactivated successfully.
Jan 26 17:05:13 compute-0 podman[400343]: 2026-01-26 17:05:13.499067972 +0000 UTC m=+0.196477678 container remove 770bdbd168ec444d0495ac0df7bf1c420c54bb04553a2bedd31fe2184e097afc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_tu, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 17:05:13 compute-0 systemd[1]: libpod-conmon-770bdbd168ec444d0495ac0df7bf1c420c54bb04553a2bedd31fe2184e097afc.scope: Deactivated successfully.
Jan 26 17:05:13 compute-0 podman[400384]: 2026-01-26 17:05:13.682004847 +0000 UTC m=+0.050240853 container create 8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:05:13 compute-0 systemd[1]: Started libpod-conmon-8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649.scope.
Jan 26 17:05:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0caf23a77b6dcf17aabe1178fbe58deb12d6ef9094e56b4d3c3e15ad8299b5e5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0caf23a77b6dcf17aabe1178fbe58deb12d6ef9094e56b4d3c3e15ad8299b5e5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0caf23a77b6dcf17aabe1178fbe58deb12d6ef9094e56b4d3c3e15ad8299b5e5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0caf23a77b6dcf17aabe1178fbe58deb12d6ef9094e56b4d3c3e15ad8299b5e5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:05:13 compute-0 podman[400384]: 2026-01-26 17:05:13.662186861 +0000 UTC m=+0.030422897 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:05:13 compute-0 podman[400384]: 2026-01-26 17:05:13.766660142 +0000 UTC m=+0.134896158 container init 8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:05:13 compute-0 podman[400384]: 2026-01-26 17:05:13.780066101 +0000 UTC m=+0.148302107 container start 8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:05:13 compute-0 podman[400384]: 2026-01-26 17:05:13.783815504 +0000 UTC m=+0.152051690 container attach 8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:05:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:14 compute-0 lvm[400478]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:05:14 compute-0 lvm[400479]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:05:14 compute-0 lvm[400478]: VG ceph_vg0 finished
Jan 26 17:05:14 compute-0 lvm[400479]: VG ceph_vg1 finished
Jan 26 17:05:14 compute-0 lvm[400481]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:05:14 compute-0 lvm[400481]: VG ceph_vg2 finished
Jan 26 17:05:14 compute-0 hardcore_brahmagupta[400400]: {}
Jan 26 17:05:14 compute-0 systemd[1]: libpod-8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649.scope: Deactivated successfully.
Jan 26 17:05:14 compute-0 systemd[1]: libpod-8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649.scope: Consumed 1.447s CPU time.
Jan 26 17:05:14 compute-0 podman[400384]: 2026-01-26 17:05:14.652272426 +0000 UTC m=+1.020508432 container died 8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:05:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-0caf23a77b6dcf17aabe1178fbe58deb12d6ef9094e56b4d3c3e15ad8299b5e5-merged.mount: Deactivated successfully.
Jan 26 17:05:14 compute-0 podman[400384]: 2026-01-26 17:05:14.702733573 +0000 UTC m=+1.070969579 container remove 8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_brahmagupta, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:05:14 compute-0 systemd[1]: libpod-conmon-8ce588a3b5c99c859d9f3d5edf8d73bffb087dad08dc8d5b61c5b7901d034649.scope: Deactivated successfully.
Jan 26 17:05:14 compute-0 sudo[400306]: pam_unix(sudo:session): session closed for user root
Jan 26 17:05:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:05:14 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:05:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:05:14 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:05:14 compute-0 sudo[400497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:05:14 compute-0 sudo[400497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:05:14 compute-0 sudo[400497]: pam_unix(sudo:session): session closed for user root
Jan 26 17:05:15 compute-0 ceph-mon[75140]: pgmap v3464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:05:15 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:05:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:16 compute-0 nova_compute[239965]: 2026-01-26 17:05:16.274 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:17 compute-0 ceph-mon[75140]: pgmap v3465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:17 compute-0 nova_compute[239965]: 2026-01-26 17:05:17.839 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:19 compute-0 ceph-mon[75140]: pgmap v3466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:21 compute-0 nova_compute[239965]: 2026-01-26 17:05:21.276 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:21 compute-0 ceph-mon[75140]: pgmap v3467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:22 compute-0 podman[400522]: 2026-01-26 17:05:22.384849141 +0000 UTC m=+0.066431581 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:05:22 compute-0 nova_compute[239965]: 2026-01-26 17:05:22.839 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:23 compute-0 ceph-mon[75140]: pgmap v3468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:25 compute-0 ceph-mon[75140]: pgmap v3469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:25 compute-0 podman[400543]: 2026-01-26 17:05:25.412519291 +0000 UTC m=+0.097249856 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:05:25 compute-0 sshd-session[400542]: Invalid user sol from 45.148.10.240 port 37852
Jan 26 17:05:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:26 compute-0 sshd-session[400542]: Connection closed by invalid user sol 45.148.10.240 port 37852 [preauth]
Jan 26 17:05:26 compute-0 nova_compute[239965]: 2026-01-26 17:05:26.278 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:27 compute-0 ceph-mon[75140]: pgmap v3470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:27 compute-0 nova_compute[239965]: 2026-01-26 17:05:27.840 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:28 compute-0 nova_compute[239965]: 2026-01-26 17:05:28.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:05:28
Jan 26 17:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'images', 'default.rgw.log', '.rgw.root', 'vms', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta']
Jan 26 17:05:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:05:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.133873) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447129133908, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 577, "num_deletes": 255, "total_data_size": 643722, "memory_usage": 655672, "flush_reason": "Manual Compaction"}
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447129139537, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 635376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70626, "largest_seqno": 71202, "table_properties": {"data_size": 632147, "index_size": 1136, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7180, "raw_average_key_size": 18, "raw_value_size": 625774, "raw_average_value_size": 1625, "num_data_blocks": 50, "num_entries": 385, "num_filter_entries": 385, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769447089, "oldest_key_time": 1769447089, "file_creation_time": 1769447129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 5708 microseconds, and 2857 cpu microseconds.
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.139579) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 635376 bytes OK
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.139601) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.141105) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.141120) EVENT_LOG_v1 {"time_micros": 1769447129141115, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.141137) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 640508, prev total WAL file size 640508, number of live WAL files 2.
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.141656) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303132' seq:72057594037927935, type:22 .. '6C6F676D0033323634' seq:0, type:0; will stop at (end)
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(620KB)], [167(9751KB)]
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447129141704, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10621350, "oldest_snapshot_seqno": -1}
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8795 keys, 10511207 bytes, temperature: kUnknown
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447129230640, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10511207, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10455770, "index_size": 32348, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22021, "raw_key_size": 231913, "raw_average_key_size": 26, "raw_value_size": 10301634, "raw_average_value_size": 1171, "num_data_blocks": 1245, "num_entries": 8795, "num_filter_entries": 8795, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769447129, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.230966) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10511207 bytes
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.232362) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 119.3 rd, 118.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.5 +0.0 blob) out(10.0 +0.0 blob), read-write-amplify(33.3) write-amplify(16.5) OK, records in: 9316, records dropped: 521 output_compression: NoCompression
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.232391) EVENT_LOG_v1 {"time_micros": 1769447129232377, "job": 104, "event": "compaction_finished", "compaction_time_micros": 89030, "compaction_time_cpu_micros": 45140, "output_level": 6, "num_output_files": 1, "total_output_size": 10511207, "num_input_records": 9316, "num_output_records": 8795, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447129232849, "job": 104, "event": "table_file_deletion", "file_number": 169}
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447129236364, "job": 104, "event": "table_file_deletion", "file_number": 167}
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.141556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.236428) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.236436) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.236439) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.236443) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:05:29 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:05:29.236447) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:05:29 compute-0 ceph-mon[75140]: pgmap v3471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:29 compute-0 nova_compute[239965]: 2026-01-26 17:05:29.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:05:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:05:31 compute-0 nova_compute[239965]: 2026-01-26 17:05:31.280 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:31 compute-0 ceph-mon[75140]: pgmap v3472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:32 compute-0 nova_compute[239965]: 2026-01-26 17:05:32.843 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:33 compute-0 ceph-mon[75140]: pgmap v3473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:33 compute-0 nova_compute[239965]: 2026-01-26 17:05:33.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:35 compute-0 ceph-mon[75140]: pgmap v3474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:36 compute-0 nova_compute[239965]: 2026-01-26 17:05:36.283 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:36 compute-0 nova_compute[239965]: 2026-01-26 17:05:36.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:36 compute-0 nova_compute[239965]: 2026-01-26 17:05:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:36 compute-0 ceph-mon[75140]: pgmap v3475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:37 compute-0 nova_compute[239965]: 2026-01-26 17:05:37.844 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:39 compute-0 ceph-mon[75140]: pgmap v3476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:40 compute-0 nova_compute[239965]: 2026-01-26 17:05:40.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:41 compute-0 ceph-mon[75140]: pgmap v3477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:41 compute-0 nova_compute[239965]: 2026-01-26 17:05:41.286 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:42 compute-0 nova_compute[239965]: 2026-01-26 17:05:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:42 compute-0 nova_compute[239965]: 2026-01-26 17:05:42.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:05:42 compute-0 nova_compute[239965]: 2026-01-26 17:05:42.847 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:43 compute-0 ceph-mon[75140]: pgmap v3478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:43 compute-0 nova_compute[239965]: 2026-01-26 17:05:43.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:43 compute-0 nova_compute[239965]: 2026-01-26 17:05:43.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:05:43 compute-0 nova_compute[239965]: 2026-01-26 17:05:43.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:05:43 compute-0 nova_compute[239965]: 2026-01-26 17:05:43.527 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:05:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:44 compute-0 nova_compute[239965]: 2026-01-26 17:05:44.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:44 compute-0 nova_compute[239965]: 2026-01-26 17:05:44.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:05:44 compute-0 nova_compute[239965]: 2026-01-26 17:05:44.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:05:44 compute-0 nova_compute[239965]: 2026-01-26 17:05:44.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:05:44 compute-0 nova_compute[239965]: 2026-01-26 17:05:44.543 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:05:44 compute-0 nova_compute[239965]: 2026-01-26 17:05:44.544 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:45 compute-0 ceph-mon[75140]: pgmap v3479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:05:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288226084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:05:45 compute-0 nova_compute[239965]: 2026-01-26 17:05:45.158 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.614s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:45 compute-0 nova_compute[239965]: 2026-01-26 17:05:45.343 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:05:45 compute-0 nova_compute[239965]: 2026-01-26 17:05:45.344 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3575MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:05:45 compute-0 nova_compute[239965]: 2026-01-26 17:05:45.344 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:05:45 compute-0 nova_compute[239965]: 2026-01-26 17:05:45.345 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:05:45 compute-0 nova_compute[239965]: 2026-01-26 17:05:45.418 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:05:45 compute-0 nova_compute[239965]: 2026-01-26 17:05:45.418 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:05:45 compute-0 nova_compute[239965]: 2026-01-26 17:05:45.435 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:05:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:05:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178013595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:05:46 compute-0 nova_compute[239965]: 2026-01-26 17:05:46.020 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:05:46 compute-0 nova_compute[239965]: 2026-01-26 17:05:46.028 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:05:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1288226084' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:05:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2178013595' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:05:46 compute-0 nova_compute[239965]: 2026-01-26 17:05:46.288 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:46 compute-0 nova_compute[239965]: 2026-01-26 17:05:46.923 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:05:46 compute-0 nova_compute[239965]: 2026-01-26 17:05:46.925 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:05:46 compute-0 nova_compute[239965]: 2026-01-26 17:05:46.925 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:05:47 compute-0 ceph-mon[75140]: pgmap v3480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:47 compute-0 nova_compute[239965]: 2026-01-26 17:05:47.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:05:47 compute-0 nova_compute[239965]: 2026-01-26 17:05:47.849 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:05:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1801506949' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:05:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:05:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1801506949' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:05:49 compute-0 ceph-mon[75140]: pgmap v3481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1801506949' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:05:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1801506949' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:05:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:05:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:51 compute-0 ceph-mon[75140]: pgmap v3482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:51 compute-0 nova_compute[239965]: 2026-01-26 17:05:51.291 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:52 compute-0 nova_compute[239965]: 2026-01-26 17:05:52.852 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:53 compute-0 ceph-mon[75140]: pgmap v3483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:53 compute-0 podman[400615]: 2026-01-26 17:05:53.403660767 +0000 UTC m=+0.071130485 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:05:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:55 compute-0 ceph-mon[75140]: pgmap v3484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:56 compute-0 nova_compute[239965]: 2026-01-26 17:05:56.293 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:56 compute-0 podman[400636]: 2026-01-26 17:05:56.45384798 +0000 UTC m=+0.137051842 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:05:57 compute-0 ceph-mon[75140]: pgmap v3485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:57 compute-0 nova_compute[239965]: 2026-01-26 17:05:57.854 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:05:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:59 compute-0 ceph-mon[75140]: pgmap v3486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:05:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:05:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:05:59.288 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:05:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:05:59.289 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:05:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:05:59.289 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:05:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:06:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:06:01 compute-0 ceph-mon[75140]: pgmap v3487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:01 compute-0 nova_compute[239965]: 2026-01-26 17:06:01.296 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:02 compute-0 nova_compute[239965]: 2026-01-26 17:06:02.855 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:03 compute-0 ceph-mon[75140]: pgmap v3488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:05 compute-0 ceph-mon[75140]: pgmap v3489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:06 compute-0 nova_compute[239965]: 2026-01-26 17:06:06.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:07 compute-0 ceph-mon[75140]: pgmap v3490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:07 compute-0 nova_compute[239965]: 2026-01-26 17:06:07.882 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:09 compute-0 ceph-mon[75140]: pgmap v3491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:09 compute-0 nova_compute[239965]: 2026-01-26 17:06:09.522 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:09 compute-0 nova_compute[239965]: 2026-01-26 17:06:09.522 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:06:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:11 compute-0 ceph-mon[75140]: pgmap v3492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:11 compute-0 nova_compute[239965]: 2026-01-26 17:06:11.301 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:12 compute-0 nova_compute[239965]: 2026-01-26 17:06:12.885 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:13 compute-0 ceph-mon[75140]: pgmap v3493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:14 compute-0 sudo[400663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:06:14 compute-0 sudo[400663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:14 compute-0 sudo[400663]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:15 compute-0 sudo[400688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 17:06:15 compute-0 sudo[400688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:15 compute-0 ceph-mon[75140]: pgmap v3494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:15 compute-0 sudo[400688]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:06:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:06:15 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:15 compute-0 sudo[400734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:06:15 compute-0 sudo[400734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:15 compute-0 sudo[400734]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:15 compute-0 sudo[400759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:06:15 compute-0 sudo[400759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:16 compute-0 sudo[400759]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 17:06:16 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:06:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:06:16 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:06:16 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:06:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:06:16 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:06:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:06:16 compute-0 sudo[400815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:06:16 compute-0 sudo[400815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:16 compute-0 sudo[400815]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:16 compute-0 sudo[400840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:06:16 compute-0 sudo[400840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:16 compute-0 nova_compute[239965]: 2026-01-26 17:06:16.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:06:16 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:06:16 compute-0 podman[400876]: 2026-01-26 17:06:16.573581817 +0000 UTC m=+0.068556102 container create effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 26 17:06:16 compute-0 systemd[1]: Started libpod-conmon-effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e.scope.
Jan 26 17:06:16 compute-0 podman[400876]: 2026-01-26 17:06:16.547904798 +0000 UTC m=+0.042879103 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:06:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:06:16 compute-0 podman[400876]: 2026-01-26 17:06:16.675838134 +0000 UTC m=+0.170812519 container init effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tu, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:06:16 compute-0 podman[400876]: 2026-01-26 17:06:16.684854085 +0000 UTC m=+0.179828370 container start effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tu, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:06:16 compute-0 podman[400876]: 2026-01-26 17:06:16.688618137 +0000 UTC m=+0.183592462 container attach effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:06:16 compute-0 quizzical_tu[400892]: 167 167
Jan 26 17:06:16 compute-0 systemd[1]: libpod-effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e.scope: Deactivated successfully.
Jan 26 17:06:16 compute-0 conmon[400892]: conmon effd119fe0c331ed7021 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e.scope/container/memory.events
Jan 26 17:06:16 compute-0 podman[400876]: 2026-01-26 17:06:16.692868681 +0000 UTC m=+0.187843006 container died effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 17:06:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6170185925f6bc9ede67bb9b55a58cdd72fdb495f5bb5ada99ddcda164c9d63-merged.mount: Deactivated successfully.
Jan 26 17:06:16 compute-0 podman[400876]: 2026-01-26 17:06:16.744710383 +0000 UTC m=+0.239684668 container remove effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_tu, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:06:16 compute-0 systemd[1]: libpod-conmon-effd119fe0c331ed7021ca16a5e57f8559419be8fff4ce6ba8d404b9a6e4f06e.scope: Deactivated successfully.
Jan 26 17:06:16 compute-0 podman[400917]: 2026-01-26 17:06:16.940867612 +0000 UTC m=+0.044388440 container create 9bc3702a5bd5832d130d54dce167269b5dcc1be8e8cc642ae5309ac92918f020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 17:06:16 compute-0 systemd[1]: Started libpod-conmon-9bc3702a5bd5832d130d54dce167269b5dcc1be8e8cc642ae5309ac92918f020.scope.
Jan 26 17:06:17 compute-0 podman[400917]: 2026-01-26 17:06:16.922647025 +0000 UTC m=+0.026167873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:06:17 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1436b22ce62109181908b8644fb3358c2d7af9a476ca9017138d40cc972c4c3e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1436b22ce62109181908b8644fb3358c2d7af9a476ca9017138d40cc972c4c3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1436b22ce62109181908b8644fb3358c2d7af9a476ca9017138d40cc972c4c3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1436b22ce62109181908b8644fb3358c2d7af9a476ca9017138d40cc972c4c3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1436b22ce62109181908b8644fb3358c2d7af9a476ca9017138d40cc972c4c3e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:17 compute-0 podman[400917]: 2026-01-26 17:06:17.059601103 +0000 UTC m=+0.163121931 container init 9bc3702a5bd5832d130d54dce167269b5dcc1be8e8cc642ae5309ac92918f020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ganguly, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:06:17 compute-0 podman[400917]: 2026-01-26 17:06:17.069026534 +0000 UTC m=+0.172547372 container start 9bc3702a5bd5832d130d54dce167269b5dcc1be8e8cc642ae5309ac92918f020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ganguly, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:06:17 compute-0 podman[400917]: 2026-01-26 17:06:17.073150506 +0000 UTC m=+0.176671354 container attach 9bc3702a5bd5832d130d54dce167269b5dcc1be8e8cc642ae5309ac92918f020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:06:17 compute-0 ceph-mon[75140]: pgmap v3495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:17 compute-0 strange_ganguly[400934]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:06:17 compute-0 strange_ganguly[400934]: --> All data devices are unavailable
Jan 26 17:06:17 compute-0 systemd[1]: libpod-9bc3702a5bd5832d130d54dce167269b5dcc1be8e8cc642ae5309ac92918f020.scope: Deactivated successfully.
Jan 26 17:06:17 compute-0 podman[400917]: 2026-01-26 17:06:17.591780901 +0000 UTC m=+0.695301739 container died 9bc3702a5bd5832d130d54dce167269b5dcc1be8e8cc642ae5309ac92918f020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ganguly, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:06:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1436b22ce62109181908b8644fb3358c2d7af9a476ca9017138d40cc972c4c3e-merged.mount: Deactivated successfully.
Jan 26 17:06:17 compute-0 podman[400917]: 2026-01-26 17:06:17.635231906 +0000 UTC m=+0.738752734 container remove 9bc3702a5bd5832d130d54dce167269b5dcc1be8e8cc642ae5309ac92918f020 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_ganguly, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 17:06:17 compute-0 systemd[1]: libpod-conmon-9bc3702a5bd5832d130d54dce167269b5dcc1be8e8cc642ae5309ac92918f020.scope: Deactivated successfully.
Jan 26 17:06:17 compute-0 sudo[400840]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:17 compute-0 sudo[400966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:06:17 compute-0 sudo[400966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:17 compute-0 sudo[400966]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:17 compute-0 sudo[400991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:06:17 compute-0 sudo[400991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:17 compute-0 nova_compute[239965]: 2026-01-26 17:06:17.888 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:18 compute-0 podman[401029]: 2026-01-26 17:06:18.114147208 +0000 UTC m=+0.047345162 container create 77cc449fd047ccd598c69892e86f2e1a62f4aaf2a03703823d33a5433b6e8df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banzai, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:06:18 compute-0 systemd[1]: Started libpod-conmon-77cc449fd047ccd598c69892e86f2e1a62f4aaf2a03703823d33a5433b6e8df9.scope.
Jan 26 17:06:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:06:18 compute-0 podman[401029]: 2026-01-26 17:06:18.094445985 +0000 UTC m=+0.027643949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:06:18 compute-0 podman[401029]: 2026-01-26 17:06:18.200430123 +0000 UTC m=+0.133628057 container init 77cc449fd047ccd598c69892e86f2e1a62f4aaf2a03703823d33a5433b6e8df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 17:06:18 compute-0 podman[401029]: 2026-01-26 17:06:18.208347737 +0000 UTC m=+0.141545691 container start 77cc449fd047ccd598c69892e86f2e1a62f4aaf2a03703823d33a5433b6e8df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banzai, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:06:18 compute-0 podman[401029]: 2026-01-26 17:06:18.212990351 +0000 UTC m=+0.146188295 container attach 77cc449fd047ccd598c69892e86f2e1a62f4aaf2a03703823d33a5433b6e8df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banzai, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 17:06:18 compute-0 objective_banzai[401046]: 167 167
Jan 26 17:06:18 compute-0 systemd[1]: libpod-77cc449fd047ccd598c69892e86f2e1a62f4aaf2a03703823d33a5433b6e8df9.scope: Deactivated successfully.
Jan 26 17:06:18 compute-0 podman[401029]: 2026-01-26 17:06:18.214741464 +0000 UTC m=+0.147939418 container died 77cc449fd047ccd598c69892e86f2e1a62f4aaf2a03703823d33a5433b6e8df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banzai, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-07158d36135ec971b6f8ad7a3bf12112dc40c603381d9d065a4da19800655f67-merged.mount: Deactivated successfully.
Jan 26 17:06:18 compute-0 podman[401029]: 2026-01-26 17:06:18.272634283 +0000 UTC m=+0.205832197 container remove 77cc449fd047ccd598c69892e86f2e1a62f4aaf2a03703823d33a5433b6e8df9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_banzai, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:06:18 compute-0 systemd[1]: libpod-conmon-77cc449fd047ccd598c69892e86f2e1a62f4aaf2a03703823d33a5433b6e8df9.scope: Deactivated successfully.
Jan 26 17:06:18 compute-0 podman[401069]: 2026-01-26 17:06:18.449266755 +0000 UTC m=+0.046713257 container create 53d377e77a38a54e589ed41bd647b78b84c90be15f65889c75b6e6c566a902be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:06:18 compute-0 systemd[1]: Started libpod-conmon-53d377e77a38a54e589ed41bd647b78b84c90be15f65889c75b6e6c566a902be.scope.
Jan 26 17:06:18 compute-0 podman[401069]: 2026-01-26 17:06:18.428318911 +0000 UTC m=+0.025765433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:06:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5700016859f63e36d260ccf5ad1e84cf635a96f2dca54c1cc468d6004d7699e1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5700016859f63e36d260ccf5ad1e84cf635a96f2dca54c1cc468d6004d7699e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5700016859f63e36d260ccf5ad1e84cf635a96f2dca54c1cc468d6004d7699e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5700016859f63e36d260ccf5ad1e84cf635a96f2dca54c1cc468d6004d7699e1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:18 compute-0 podman[401069]: 2026-01-26 17:06:18.546298423 +0000 UTC m=+0.143744945 container init 53d377e77a38a54e589ed41bd647b78b84c90be15f65889c75b6e6c566a902be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 17:06:18 compute-0 podman[401069]: 2026-01-26 17:06:18.554960065 +0000 UTC m=+0.152406567 container start 53d377e77a38a54e589ed41bd647b78b84c90be15f65889c75b6e6c566a902be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:06:18 compute-0 podman[401069]: 2026-01-26 17:06:18.558695707 +0000 UTC m=+0.156142229 container attach 53d377e77a38a54e589ed41bd647b78b84c90be15f65889c75b6e6c566a902be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]: {
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:     "0": [
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:         {
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "devices": [
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "/dev/loop3"
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             ],
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_name": "ceph_lv0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_size": "21470642176",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "name": "ceph_lv0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "tags": {
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.cluster_name": "ceph",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.crush_device_class": "",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.encrypted": "0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.objectstore": "bluestore",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.osd_id": "0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.type": "block",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.vdo": "0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.with_tpm": "0"
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             },
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "type": "block",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "vg_name": "ceph_vg0"
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:         }
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:     ],
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:     "1": [
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:         {
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "devices": [
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "/dev/loop4"
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             ],
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_name": "ceph_lv1",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_size": "21470642176",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "name": "ceph_lv1",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "tags": {
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.cluster_name": "ceph",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.crush_device_class": "",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.encrypted": "0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.objectstore": "bluestore",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.osd_id": "1",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.type": "block",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.vdo": "0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.with_tpm": "0"
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             },
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "type": "block",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "vg_name": "ceph_vg1"
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:         }
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:     ],
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:     "2": [
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:         {
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "devices": [
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "/dev/loop5"
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             ],
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_name": "ceph_lv2",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_size": "21470642176",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "name": "ceph_lv2",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "tags": {
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.cluster_name": "ceph",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.crush_device_class": "",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.encrypted": "0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.objectstore": "bluestore",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.osd_id": "2",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.type": "block",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.vdo": "0",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:                 "ceph.with_tpm": "0"
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             },
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "type": "block",
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:             "vg_name": "ceph_vg2"
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:         }
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]:     ]
Jan 26 17:06:18 compute-0 relaxed_ellis[401085]: }
Jan 26 17:06:18 compute-0 systemd[1]: libpod-53d377e77a38a54e589ed41bd647b78b84c90be15f65889c75b6e6c566a902be.scope: Deactivated successfully.
Jan 26 17:06:18 compute-0 podman[401069]: 2026-01-26 17:06:18.896843107 +0000 UTC m=+0.494289609 container died 53d377e77a38a54e589ed41bd647b78b84c90be15f65889c75b6e6c566a902be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:06:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-5700016859f63e36d260ccf5ad1e84cf635a96f2dca54c1cc468d6004d7699e1-merged.mount: Deactivated successfully.
Jan 26 17:06:18 compute-0 podman[401069]: 2026-01-26 17:06:18.946610608 +0000 UTC m=+0.544057110 container remove 53d377e77a38a54e589ed41bd647b78b84c90be15f65889c75b6e6c566a902be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:06:18 compute-0 systemd[1]: libpod-conmon-53d377e77a38a54e589ed41bd647b78b84c90be15f65889c75b6e6c566a902be.scope: Deactivated successfully.
Jan 26 17:06:19 compute-0 sudo[400991]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:19 compute-0 sudo[401108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:06:19 compute-0 sudo[401108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:19 compute-0 sudo[401108]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:19 compute-0 sudo[401133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:06:19 compute-0 sudo[401133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:19 compute-0 ceph-mon[75140]: pgmap v3496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:19 compute-0 podman[401169]: 2026-01-26 17:06:19.509664572 +0000 UTC m=+0.046991273 container create 5f1bd4087ab42164651ab85a5aff012068354cb73faad7d2ab02d16f660b1ce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 17:06:19 compute-0 systemd[1]: Started libpod-conmon-5f1bd4087ab42164651ab85a5aff012068354cb73faad7d2ab02d16f660b1ce6.scope.
Jan 26 17:06:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:06:19 compute-0 podman[401169]: 2026-01-26 17:06:19.489255152 +0000 UTC m=+0.026581873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:06:19 compute-0 podman[401169]: 2026-01-26 17:06:19.602329954 +0000 UTC m=+0.139656735 container init 5f1bd4087ab42164651ab85a5aff012068354cb73faad7d2ab02d16f660b1ce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_grothendieck, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:06:19 compute-0 podman[401169]: 2026-01-26 17:06:19.612812802 +0000 UTC m=+0.150139533 container start 5f1bd4087ab42164651ab85a5aff012068354cb73faad7d2ab02d16f660b1ce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:06:19 compute-0 podman[401169]: 2026-01-26 17:06:19.616741917 +0000 UTC m=+0.154068698 container attach 5f1bd4087ab42164651ab85a5aff012068354cb73faad7d2ab02d16f660b1ce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_grothendieck, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 17:06:19 compute-0 modest_grothendieck[401185]: 167 167
Jan 26 17:06:19 compute-0 systemd[1]: libpod-5f1bd4087ab42164651ab85a5aff012068354cb73faad7d2ab02d16f660b1ce6.scope: Deactivated successfully.
Jan 26 17:06:19 compute-0 podman[401169]: 2026-01-26 17:06:19.620465279 +0000 UTC m=+0.157792000 container died 5f1bd4087ab42164651ab85a5aff012068354cb73faad7d2ab02d16f660b1ce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_grothendieck, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 17:06:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae81cc0ec42cd9a2e7177a325a37870d74888b41c0e9c989d4d2c4dff9e91f98-merged.mount: Deactivated successfully.
Jan 26 17:06:19 compute-0 podman[401169]: 2026-01-26 17:06:19.667149523 +0000 UTC m=+0.204476214 container remove 5f1bd4087ab42164651ab85a5aff012068354cb73faad7d2ab02d16f660b1ce6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:06:19 compute-0 systemd[1]: libpod-conmon-5f1bd4087ab42164651ab85a5aff012068354cb73faad7d2ab02d16f660b1ce6.scope: Deactivated successfully.
Jan 26 17:06:19 compute-0 podman[401209]: 2026-01-26 17:06:19.875159673 +0000 UTC m=+0.047718220 container create 94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 17:06:19 compute-0 systemd[1]: Started libpod-conmon-94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a.scope.
Jan 26 17:06:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:06:19 compute-0 podman[401209]: 2026-01-26 17:06:19.854510147 +0000 UTC m=+0.027068714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba52db71123304d367917591cce90f4d9e0dc935e3e1737fc243308303b18a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba52db71123304d367917591cce90f4d9e0dc935e3e1737fc243308303b18a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba52db71123304d367917591cce90f4d9e0dc935e3e1737fc243308303b18a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba52db71123304d367917591cce90f4d9e0dc935e3e1737fc243308303b18a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:06:19 compute-0 podman[401209]: 2026-01-26 17:06:19.967996129 +0000 UTC m=+0.140554696 container init 94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 17:06:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:19 compute-0 podman[401209]: 2026-01-26 17:06:19.974757125 +0000 UTC m=+0.147315672 container start 94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:06:19 compute-0 podman[401209]: 2026-01-26 17:06:19.978400745 +0000 UTC m=+0.150959312 container attach 94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 17:06:20 compute-0 lvm[401305]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:06:20 compute-0 lvm[401304]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:06:20 compute-0 lvm[401304]: VG ceph_vg0 finished
Jan 26 17:06:20 compute-0 lvm[401307]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:06:20 compute-0 lvm[401307]: VG ceph_vg2 finished
Jan 26 17:06:20 compute-0 lvm[401305]: VG ceph_vg1 finished
Jan 26 17:06:20 compute-0 lvm[401309]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:06:20 compute-0 lvm[401309]: VG ceph_vg2 finished
Jan 26 17:06:20 compute-0 gallant_elion[401226]: {}
Jan 26 17:06:20 compute-0 systemd[1]: libpod-94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a.scope: Deactivated successfully.
Jan 26 17:06:20 compute-0 systemd[1]: libpod-94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a.scope: Consumed 1.559s CPU time.
Jan 26 17:06:20 compute-0 podman[401209]: 2026-01-26 17:06:20.924843009 +0000 UTC m=+1.097401566 container died 94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:06:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ba52db71123304d367917591cce90f4d9e0dc935e3e1737fc243308303b18a3-merged.mount: Deactivated successfully.
Jan 26 17:06:20 compute-0 podman[401209]: 2026-01-26 17:06:20.976455974 +0000 UTC m=+1.149014521 container remove 94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_elion, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 17:06:20 compute-0 systemd[1]: libpod-conmon-94c3d0818b7ac0e53b0cfcf951764a886727b0beaf4e5e2ac61d700cf4abd27a.scope: Deactivated successfully.
Jan 26 17:06:21 compute-0 sudo[401133]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:06:21 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:06:21 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:21 compute-0 nova_compute[239965]: 2026-01-26 17:06:21.304 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:21 compute-0 sudo[401323]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:06:21 compute-0 sudo[401323]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:06:21 compute-0 sudo[401323]: pam_unix(sudo:session): session closed for user root
Jan 26 17:06:21 compute-0 ceph-mon[75140]: pgmap v3497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:21 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:06:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:22 compute-0 ceph-mon[75140]: pgmap v3498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:22 compute-0 nova_compute[239965]: 2026-01-26 17:06:22.889 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:24 compute-0 podman[401348]: 2026-01-26 17:06:24.390281653 +0000 UTC m=+0.068376878 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 26 17:06:25 compute-0 ceph-mon[75140]: pgmap v3499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:25 compute-0 nova_compute[239965]: 2026-01-26 17:06:25.650 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:25 compute-0 nova_compute[239965]: 2026-01-26 17:06:25.650 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:06:25 compute-0 nova_compute[239965]: 2026-01-26 17:06:25.743 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:06:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:26 compute-0 nova_compute[239965]: 2026-01-26 17:06:26.313 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:27 compute-0 ceph-mon[75140]: pgmap v3500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:27 compute-0 podman[401368]: 2026-01-26 17:06:27.410646974 +0000 UTC m=+0.098342482 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:06:27 compute-0 nova_compute[239965]: 2026-01-26 17:06:27.892 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:06:28
Jan 26 17:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'images', '.mgr', 'volumes', 'default.rgw.log', '.rgw.root', 'vms']
Jan 26 17:06:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:06:29 compute-0 ceph-mon[75140]: pgmap v3501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:06:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:06:30 compute-0 nova_compute[239965]: 2026-01-26 17:06:30.604 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:31 compute-0 ceph-mon[75140]: pgmap v3502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:06:31 compute-0 nova_compute[239965]: 2026-01-26 17:06:31.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:31 compute-0 nova_compute[239965]: 2026-01-26 17:06:31.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:32 compute-0 nova_compute[239965]: 2026-01-26 17:06:32.895 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:33 compute-0 ceph-mon[75140]: pgmap v3503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:34 compute-0 nova_compute[239965]: 2026-01-26 17:06:34.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:35 compute-0 ceph-mon[75140]: pgmap v3504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:36 compute-0 nova_compute[239965]: 2026-01-26 17:06:36.316 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:36 compute-0 ceph-mon[75140]: pgmap v3505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:36 compute-0 nova_compute[239965]: 2026-01-26 17:06:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:37 compute-0 nova_compute[239965]: 2026-01-26 17:06:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:37 compute-0 nova_compute[239965]: 2026-01-26 17:06:37.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:39 compute-0 ceph-mon[75140]: pgmap v3506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:41 compute-0 ceph-mon[75140]: pgmap v3507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:41 compute-0 nova_compute[239965]: 2026-01-26 17:06:41.320 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:42 compute-0 nova_compute[239965]: 2026-01-26 17:06:42.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:42 compute-0 ceph-mon[75140]: pgmap v3508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:42 compute-0 nova_compute[239965]: 2026-01-26 17:06:42.897 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:43 compute-0 nova_compute[239965]: 2026-01-26 17:06:43.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:43 compute-0 nova_compute[239965]: 2026-01-26 17:06:43.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:06:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:45 compute-0 ceph-mon[75140]: pgmap v3509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.746 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.748 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.840 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.841 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.841 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.841 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:06:45 compute-0 nova_compute[239965]: 2026-01-26 17:06:45.842 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:46 compute-0 nova_compute[239965]: 2026-01-26 17:06:46.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:06:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3616704844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:06:46 compute-0 nova_compute[239965]: 2026-01-26 17:06:46.479 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.637s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:46 compute-0 nova_compute[239965]: 2026-01-26 17:06:46.666 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:06:46 compute-0 nova_compute[239965]: 2026-01-26 17:06:46.667 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3545MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:06:46 compute-0 nova_compute[239965]: 2026-01-26 17:06:46.668 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:06:46 compute-0 nova_compute[239965]: 2026-01-26 17:06:46.668 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:06:46 compute-0 nova_compute[239965]: 2026-01-26 17:06:46.814 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:06:46 compute-0 nova_compute[239965]: 2026-01-26 17:06:46.815 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:06:46 compute-0 nova_compute[239965]: 2026-01-26 17:06:46.833 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:06:47 compute-0 ceph-mon[75140]: pgmap v3510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3616704844' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:06:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:06:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756978205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:06:47 compute-0 nova_compute[239965]: 2026-01-26 17:06:47.467 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:06:47 compute-0 nova_compute[239965]: 2026-01-26 17:06:47.474 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:06:47 compute-0 nova_compute[239965]: 2026-01-26 17:06:47.899 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1756978205' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:06:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:06:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3077259125' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:06:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:06:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3077259125' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:06:49 compute-0 nova_compute[239965]: 2026-01-26 17:06:49.028 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:06:49 compute-0 nova_compute[239965]: 2026-01-26 17:06:49.029 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:06:49 compute-0 nova_compute[239965]: 2026-01-26 17:06:49.030 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:06:49 compute-0 ceph-mon[75140]: pgmap v3511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3077259125' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:06:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3077259125' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:06:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:06:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:51 compute-0 ceph-mon[75140]: pgmap v3512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:51 compute-0 nova_compute[239965]: 2026-01-26 17:06:51.366 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:52 compute-0 nova_compute[239965]: 2026-01-26 17:06:52.903 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:53 compute-0 nova_compute[239965]: 2026-01-26 17:06:53.023 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:06:53 compute-0 ceph-mon[75140]: pgmap v3513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:55 compute-0 ceph-mon[75140]: pgmap v3514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:55 compute-0 podman[401439]: 2026-01-26 17:06:55.381841708 +0000 UTC m=+0.067110487 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 17:06:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:56 compute-0 nova_compute[239965]: 2026-01-26 17:06:56.370 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:57 compute-0 ceph-mon[75140]: pgmap v3515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:57 compute-0 nova_compute[239965]: 2026-01-26 17:06:57.904 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:06:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:58 compute-0 podman[401458]: 2026-01-26 17:06:58.398180911 +0000 UTC m=+0.087904216 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:06:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:06:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:06:59.289 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:06:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:06:59.289 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:06:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:06:59.290 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:06:59 compute-0 ceph-mon[75140]: pgmap v3516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:06:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:07:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:07:01 compute-0 nova_compute[239965]: 2026-01-26 17:07:01.371 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:01 compute-0 ceph-mon[75140]: pgmap v3517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:02 compute-0 nova_compute[239965]: 2026-01-26 17:07:02.905 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:03 compute-0 ceph-mon[75140]: pgmap v3518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:05 compute-0 ceph-mon[75140]: pgmap v3519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:06 compute-0 nova_compute[239965]: 2026-01-26 17:07:06.373 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:06 compute-0 ceph-mon[75140]: pgmap v3520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:07 compute-0 nova_compute[239965]: 2026-01-26 17:07:07.907 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:09 compute-0 ceph-mon[75140]: pgmap v3521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:11 compute-0 ceph-mon[75140]: pgmap v3522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:11 compute-0 nova_compute[239965]: 2026-01-26 17:07:11.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:12 compute-0 nova_compute[239965]: 2026-01-26 17:07:12.915 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:13 compute-0 ceph-mon[75140]: pgmap v3523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:15 compute-0 ceph-mon[75140]: pgmap v3524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:16 compute-0 nova_compute[239965]: 2026-01-26 17:07:16.378 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:17 compute-0 ceph-mon[75140]: pgmap v3525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:17 compute-0 nova_compute[239965]: 2026-01-26 17:07:17.913 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:19 compute-0 ceph-mon[75140]: pgmap v3526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:21 compute-0 ceph-mon[75140]: pgmap v3527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:21 compute-0 nova_compute[239965]: 2026-01-26 17:07:21.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:21 compute-0 sudo[401484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:07:21 compute-0 sudo[401484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:07:21 compute-0 sudo[401484]: pam_unix(sudo:session): session closed for user root
Jan 26 17:07:21 compute-0 sudo[401509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:07:21 compute-0 sudo[401509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:07:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:22 compute-0 sudo[401509]: pam_unix(sudo:session): session closed for user root
Jan 26 17:07:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:07:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:07:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:07:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:07:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:07:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:07:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:07:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:07:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:07:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:07:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:07:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:07:22 compute-0 sudo[401565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:07:22 compute-0 sudo[401565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:07:22 compute-0 sudo[401565]: pam_unix(sudo:session): session closed for user root
Jan 26 17:07:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:07:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:07:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:07:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:07:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:07:22 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:07:22 compute-0 sudo[401590]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:07:22 compute-0 sudo[401590]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:07:22 compute-0 podman[401627]: 2026-01-26 17:07:22.515122716 +0000 UTC m=+0.058265709 container create 54c0686e35c14b3e5dd602b44dfaf143623d9d0d567c7ec3572961ba6f85495f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_jepsen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 17:07:22 compute-0 systemd[1]: Started libpod-conmon-54c0686e35c14b3e5dd602b44dfaf143623d9d0d567c7ec3572961ba6f85495f.scope.
Jan 26 17:07:22 compute-0 podman[401627]: 2026-01-26 17:07:22.482669901 +0000 UTC m=+0.025812944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:07:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:07:22 compute-0 podman[401627]: 2026-01-26 17:07:22.621206958 +0000 UTC m=+0.164349931 container init 54c0686e35c14b3e5dd602b44dfaf143623d9d0d567c7ec3572961ba6f85495f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_jepsen, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:07:22 compute-0 podman[401627]: 2026-01-26 17:07:22.631023458 +0000 UTC m=+0.174166411 container start 54c0686e35c14b3e5dd602b44dfaf143623d9d0d567c7ec3572961ba6f85495f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:07:22 compute-0 podman[401627]: 2026-01-26 17:07:22.635006056 +0000 UTC m=+0.178149329 container attach 54c0686e35c14b3e5dd602b44dfaf143623d9d0d567c7ec3572961ba6f85495f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 17:07:22 compute-0 cranky_jepsen[401643]: 167 167
Jan 26 17:07:22 compute-0 systemd[1]: libpod-54c0686e35c14b3e5dd602b44dfaf143623d9d0d567c7ec3572961ba6f85495f.scope: Deactivated successfully.
Jan 26 17:07:22 compute-0 podman[401627]: 2026-01-26 17:07:22.640106891 +0000 UTC m=+0.183249854 container died 54c0686e35c14b3e5dd602b44dfaf143623d9d0d567c7ec3572961ba6f85495f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:07:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d282434c33a66807362d0203874767cd5f47b23cf4b96b4b6ba7ad37d2622d0f-merged.mount: Deactivated successfully.
Jan 26 17:07:22 compute-0 podman[401627]: 2026-01-26 17:07:22.694765022 +0000 UTC m=+0.237907975 container remove 54c0686e35c14b3e5dd602b44dfaf143623d9d0d567c7ec3572961ba6f85495f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_jepsen, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:07:22 compute-0 systemd[1]: libpod-conmon-54c0686e35c14b3e5dd602b44dfaf143623d9d0d567c7ec3572961ba6f85495f.scope: Deactivated successfully.
Jan 26 17:07:22 compute-0 podman[401667]: 2026-01-26 17:07:22.898838694 +0000 UTC m=+0.059146641 container create 33663e92f781b20ae6328756536420b981bf3442089a2a1431feee25106f2a65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dijkstra, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:07:22 compute-0 nova_compute[239965]: 2026-01-26 17:07:22.915 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:22 compute-0 systemd[1]: Started libpod-conmon-33663e92f781b20ae6328756536420b981bf3442089a2a1431feee25106f2a65.scope.
Jan 26 17:07:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:07:22 compute-0 podman[401667]: 2026-01-26 17:07:22.880798232 +0000 UTC m=+0.041106199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f22e90ade18a222f88b9e0f14ea8c0553970ebf799522f9b3b5d97467b5637/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f22e90ade18a222f88b9e0f14ea8c0553970ebf799522f9b3b5d97467b5637/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f22e90ade18a222f88b9e0f14ea8c0553970ebf799522f9b3b5d97467b5637/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f22e90ade18a222f88b9e0f14ea8c0553970ebf799522f9b3b5d97467b5637/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7f22e90ade18a222f88b9e0f14ea8c0553970ebf799522f9b3b5d97467b5637/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:22 compute-0 podman[401667]: 2026-01-26 17:07:22.99161733 +0000 UTC m=+0.151925287 container init 33663e92f781b20ae6328756536420b981bf3442089a2a1431feee25106f2a65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 17:07:23 compute-0 podman[401667]: 2026-01-26 17:07:23.006643317 +0000 UTC m=+0.166951274 container start 33663e92f781b20ae6328756536420b981bf3442089a2a1431feee25106f2a65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dijkstra, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 17:07:23 compute-0 podman[401667]: 2026-01-26 17:07:23.012185643 +0000 UTC m=+0.172493620 container attach 33663e92f781b20ae6328756536420b981bf3442089a2a1431feee25106f2a65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Jan 26 17:07:23 compute-0 ceph-mon[75140]: pgmap v3528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:23 compute-0 relaxed_dijkstra[401683]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:07:23 compute-0 relaxed_dijkstra[401683]: --> All data devices are unavailable
Jan 26 17:07:23 compute-0 systemd[1]: libpod-33663e92f781b20ae6328756536420b981bf3442089a2a1431feee25106f2a65.scope: Deactivated successfully.
Jan 26 17:07:23 compute-0 podman[401703]: 2026-01-26 17:07:23.584658539 +0000 UTC m=+0.029464953 container died 33663e92f781b20ae6328756536420b981bf3442089a2a1431feee25106f2a65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:07:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7f22e90ade18a222f88b9e0f14ea8c0553970ebf799522f9b3b5d97467b5637-merged.mount: Deactivated successfully.
Jan 26 17:07:23 compute-0 podman[401703]: 2026-01-26 17:07:23.630631706 +0000 UTC m=+0.075438100 container remove 33663e92f781b20ae6328756536420b981bf3442089a2a1431feee25106f2a65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_dijkstra, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:07:23 compute-0 systemd[1]: libpod-conmon-33663e92f781b20ae6328756536420b981bf3442089a2a1431feee25106f2a65.scope: Deactivated successfully.
Jan 26 17:07:23 compute-0 sudo[401590]: pam_unix(sudo:session): session closed for user root
Jan 26 17:07:23 compute-0 sudo[401719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:07:23 compute-0 sudo[401719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:07:23 compute-0 sudo[401719]: pam_unix(sudo:session): session closed for user root
Jan 26 17:07:23 compute-0 sudo[401744]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:07:23 compute-0 sudo[401744]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:07:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:24 compute-0 podman[401780]: 2026-01-26 17:07:24.186946676 +0000 UTC m=+0.048790658 container create 840ade3529588762caf3f4066c70d34627cc411e43c80fa004d404086c35a92f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kalam, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:07:24 compute-0 systemd[1]: Started libpod-conmon-840ade3529588762caf3f4066c70d34627cc411e43c80fa004d404086c35a92f.scope.
Jan 26 17:07:24 compute-0 podman[401780]: 2026-01-26 17:07:24.16628428 +0000 UTC m=+0.028128302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:07:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:07:24 compute-0 podman[401780]: 2026-01-26 17:07:24.28216843 +0000 UTC m=+0.144012432 container init 840ade3529588762caf3f4066c70d34627cc411e43c80fa004d404086c35a92f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 17:07:24 compute-0 podman[401780]: 2026-01-26 17:07:24.294891582 +0000 UTC m=+0.156735564 container start 840ade3529588762caf3f4066c70d34627cc411e43c80fa004d404086c35a92f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:07:24 compute-0 podman[401780]: 2026-01-26 17:07:24.298710086 +0000 UTC m=+0.160554118 container attach 840ade3529588762caf3f4066c70d34627cc411e43c80fa004d404086c35a92f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:07:24 compute-0 jovial_kalam[401796]: 167 167
Jan 26 17:07:24 compute-0 systemd[1]: libpod-840ade3529588762caf3f4066c70d34627cc411e43c80fa004d404086c35a92f.scope: Deactivated successfully.
Jan 26 17:07:24 compute-0 podman[401780]: 2026-01-26 17:07:24.303590346 +0000 UTC m=+0.165434368 container died 840ade3529588762caf3f4066c70d34627cc411e43c80fa004d404086c35a92f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:07:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-83eb521a01b3d59ab981f30831356e1a014065c7420fe1c2d6496c1a0472846e-merged.mount: Deactivated successfully.
Jan 26 17:07:24 compute-0 podman[401780]: 2026-01-26 17:07:24.353578951 +0000 UTC m=+0.215422943 container remove 840ade3529588762caf3f4066c70d34627cc411e43c80fa004d404086c35a92f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_kalam, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:07:24 compute-0 systemd[1]: libpod-conmon-840ade3529588762caf3f4066c70d34627cc411e43c80fa004d404086c35a92f.scope: Deactivated successfully.
Jan 26 17:07:24 compute-0 podman[401820]: 2026-01-26 17:07:24.541417726 +0000 UTC m=+0.047305560 container create d92ec1cc72fec4abde97e8a8623560de2f3f9d28909fd131fd9e903e64764cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 17:07:24 compute-0 systemd[1]: Started libpod-conmon-d92ec1cc72fec4abde97e8a8623560de2f3f9d28909fd131fd9e903e64764cbf.scope.
Jan 26 17:07:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba6ba8920147c24b257ae0fea06be490576dc4a4b461fae9154c399e48ccfeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:24 compute-0 podman[401820]: 2026-01-26 17:07:24.521930499 +0000 UTC m=+0.027818363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba6ba8920147c24b257ae0fea06be490576dc4a4b461fae9154c399e48ccfeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba6ba8920147c24b257ae0fea06be490576dc4a4b461fae9154c399e48ccfeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ba6ba8920147c24b257ae0fea06be490576dc4a4b461fae9154c399e48ccfeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:24 compute-0 podman[401820]: 2026-01-26 17:07:24.635554575 +0000 UTC m=+0.141442429 container init d92ec1cc72fec4abde97e8a8623560de2f3f9d28909fd131fd9e903e64764cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:07:24 compute-0 podman[401820]: 2026-01-26 17:07:24.644313019 +0000 UTC m=+0.150200853 container start d92ec1cc72fec4abde97e8a8623560de2f3f9d28909fd131fd9e903e64764cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:07:24 compute-0 podman[401820]: 2026-01-26 17:07:24.647939969 +0000 UTC m=+0.153827803 container attach d92ec1cc72fec4abde97e8a8623560de2f3f9d28909fd131fd9e903e64764cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]: {
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:     "0": [
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:         {
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "devices": [
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "/dev/loop3"
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             ],
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_name": "ceph_lv0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_size": "21470642176",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "name": "ceph_lv0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "tags": {
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.cluster_name": "ceph",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.crush_device_class": "",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.encrypted": "0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.objectstore": "bluestore",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.osd_id": "0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.type": "block",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.vdo": "0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.with_tpm": "0"
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             },
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "type": "block",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "vg_name": "ceph_vg0"
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:         }
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:     ],
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:     "1": [
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:         {
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "devices": [
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "/dev/loop4"
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             ],
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_name": "ceph_lv1",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_size": "21470642176",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "name": "ceph_lv1",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "tags": {
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.cluster_name": "ceph",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.crush_device_class": "",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.encrypted": "0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.objectstore": "bluestore",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.osd_id": "1",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.type": "block",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.vdo": "0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.with_tpm": "0"
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             },
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "type": "block",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "vg_name": "ceph_vg1"
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:         }
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:     ],
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:     "2": [
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:         {
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "devices": [
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "/dev/loop5"
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             ],
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_name": "ceph_lv2",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_size": "21470642176",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "name": "ceph_lv2",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "tags": {
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.cluster_name": "ceph",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.crush_device_class": "",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.encrypted": "0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.objectstore": "bluestore",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.osd_id": "2",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.type": "block",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.vdo": "0",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:                 "ceph.with_tpm": "0"
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             },
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "type": "block",
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:             "vg_name": "ceph_vg2"
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:         }
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]:     ]
Jan 26 17:07:24 compute-0 flamboyant_cerf[401836]: }
Jan 26 17:07:24 compute-0 systemd[1]: libpod-d92ec1cc72fec4abde97e8a8623560de2f3f9d28909fd131fd9e903e64764cbf.scope: Deactivated successfully.
Jan 26 17:07:25 compute-0 podman[401845]: 2026-01-26 17:07:25.008632771 +0000 UTC m=+0.027243838 container died d92ec1cc72fec4abde97e8a8623560de2f3f9d28909fd131fd9e903e64764cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 17:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ba6ba8920147c24b257ae0fea06be490576dc4a4b461fae9154c399e48ccfeb-merged.mount: Deactivated successfully.
Jan 26 17:07:25 compute-0 podman[401845]: 2026-01-26 17:07:25.057838948 +0000 UTC m=+0.076449985 container remove d92ec1cc72fec4abde97e8a8623560de2f3f9d28909fd131fd9e903e64764cbf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_cerf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:07:25 compute-0 systemd[1]: libpod-conmon-d92ec1cc72fec4abde97e8a8623560de2f3f9d28909fd131fd9e903e64764cbf.scope: Deactivated successfully.
Jan 26 17:07:25 compute-0 sudo[401744]: pam_unix(sudo:session): session closed for user root
Jan 26 17:07:25 compute-0 sudo[401860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:07:25 compute-0 sudo[401860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:07:25 compute-0 sudo[401860]: pam_unix(sudo:session): session closed for user root
Jan 26 17:07:25 compute-0 ceph-mon[75140]: pgmap v3529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:25 compute-0 sudo[401885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:07:25 compute-0 sudo[401885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:07:25 compute-0 podman[401922]: 2026-01-26 17:07:25.561711032 +0000 UTC m=+0.048558182 container create ae9ae738c379fdc193591e595236418e1b459c9e639b2a77f6650822d132154e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nash, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:07:25 compute-0 systemd[1]: Started libpod-conmon-ae9ae738c379fdc193591e595236418e1b459c9e639b2a77f6650822d132154e.scope.
Jan 26 17:07:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:07:25 compute-0 podman[401922]: 2026-01-26 17:07:25.536966015 +0000 UTC m=+0.023813155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:07:25 compute-0 podman[401922]: 2026-01-26 17:07:25.635824879 +0000 UTC m=+0.122671999 container init ae9ae738c379fdc193591e595236418e1b459c9e639b2a77f6650822d132154e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:07:25 compute-0 podman[401922]: 2026-01-26 17:07:25.645835164 +0000 UTC m=+0.132682284 container start ae9ae738c379fdc193591e595236418e1b459c9e639b2a77f6650822d132154e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nash, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:07:25 compute-0 podman[401922]: 2026-01-26 17:07:25.650271753 +0000 UTC m=+0.137118983 container attach ae9ae738c379fdc193591e595236418e1b459c9e639b2a77f6650822d132154e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:07:25 compute-0 lucid_nash[401939]: 167 167
Jan 26 17:07:25 compute-0 systemd[1]: libpod-ae9ae738c379fdc193591e595236418e1b459c9e639b2a77f6650822d132154e.scope: Deactivated successfully.
Jan 26 17:07:25 compute-0 podman[401922]: 2026-01-26 17:07:25.654157878 +0000 UTC m=+0.141004998 container died ae9ae738c379fdc193591e595236418e1b459c9e639b2a77f6650822d132154e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nash, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 17:07:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-908a0e365c113f04dde4de445b03159c4195977c0b2452adb40482abd80d7559-merged.mount: Deactivated successfully.
Jan 26 17:07:25 compute-0 podman[401922]: 2026-01-26 17:07:25.699826148 +0000 UTC m=+0.186673268 container remove ae9ae738c379fdc193591e595236418e1b459c9e639b2a77f6650822d132154e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_nash, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:07:25 compute-0 podman[401936]: 2026-01-26 17:07:25.704711397 +0000 UTC m=+0.091457693 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:07:25 compute-0 systemd[1]: libpod-conmon-ae9ae738c379fdc193591e595236418e1b459c9e639b2a77f6650822d132154e.scope: Deactivated successfully.
Jan 26 17:07:25 compute-0 podman[401981]: 2026-01-26 17:07:25.884134226 +0000 UTC m=+0.047484744 container create d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:07:25 compute-0 systemd[1]: Started libpod-conmon-d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1.scope.
Jan 26 17:07:25 compute-0 podman[401981]: 2026-01-26 17:07:25.863301166 +0000 UTC m=+0.026651694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:07:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e1a7db8905cc6a7cb3867f55b35957541a4ae41d3db86573e64629058435a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e1a7db8905cc6a7cb3867f55b35957541a4ae41d3db86573e64629058435a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e1a7db8905cc6a7cb3867f55b35957541a4ae41d3db86573e64629058435a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6e1a7db8905cc6a7cb3867f55b35957541a4ae41d3db86573e64629058435a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:07:25 compute-0 podman[401981]: 2026-01-26 17:07:25.980694754 +0000 UTC m=+0.144045292 container init d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 17:07:25 compute-0 podman[401981]: 2026-01-26 17:07:25.990458873 +0000 UTC m=+0.153809381 container start d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_haslett, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 17:07:25 compute-0 podman[401981]: 2026-01-26 17:07:25.995160319 +0000 UTC m=+0.158510827 container attach d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:07:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:26 compute-0 nova_compute[239965]: 2026-01-26 17:07:26.382 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:26 compute-0 lvm[402078]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:07:26 compute-0 lvm[402078]: VG ceph_vg0 finished
Jan 26 17:07:26 compute-0 lvm[402079]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:07:26 compute-0 lvm[402079]: VG ceph_vg1 finished
Jan 26 17:07:26 compute-0 lvm[402081]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:07:26 compute-0 lvm[402081]: VG ceph_vg2 finished
Jan 26 17:07:26 compute-0 eager_haslett[401998]: {}
Jan 26 17:07:26 compute-0 systemd[1]: libpod-d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1.scope: Deactivated successfully.
Jan 26 17:07:26 compute-0 podman[401981]: 2026-01-26 17:07:26.990034311 +0000 UTC m=+1.153384839 container died d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_haslett, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:07:26 compute-0 systemd[1]: libpod-d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1.scope: Consumed 1.617s CPU time.
Jan 26 17:07:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-e6e1a7db8905cc6a7cb3867f55b35957541a4ae41d3db86573e64629058435a2-merged.mount: Deactivated successfully.
Jan 26 17:07:27 compute-0 podman[401981]: 2026-01-26 17:07:27.042302343 +0000 UTC m=+1.205652851 container remove d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_haslett, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 17:07:27 compute-0 systemd[1]: libpod-conmon-d346f6eceb8e264d0cb562a90fee835f3808b618a265f7bd5099d816cb52d5e1.scope: Deactivated successfully.
Jan 26 17:07:27 compute-0 sudo[401885]: pam_unix(sudo:session): session closed for user root
Jan 26 17:07:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:07:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:07:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:07:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:07:27 compute-0 sudo[402096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:07:27 compute-0 ceph-mon[75140]: pgmap v3530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:07:27 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:07:27 compute-0 sudo[402096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:07:27 compute-0 sudo[402096]: pam_unix(sudo:session): session closed for user root
Jan 26 17:07:27 compute-0 nova_compute[239965]: 2026-01-26 17:07:27.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:07:28
Jan 26 17:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'volumes', 'images', 'backups', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.log']
Jan 26 17:07:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:07:29 compute-0 ceph-mon[75140]: pgmap v3531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:29 compute-0 podman[402121]: 2026-01-26 17:07:29.425146833 +0000 UTC m=+0.098539366 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 17:07:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:07:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:07:30 compute-0 nova_compute[239965]: 2026-01-26 17:07:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:07:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:07:31 compute-0 ceph-mon[75140]: pgmap v3532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:31 compute-0 nova_compute[239965]: 2026-01-26 17:07:31.385 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:31 compute-0 nova_compute[239965]: 2026-01-26 17:07:31.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:32 compute-0 ceph-mon[75140]: pgmap v3533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:32 compute-0 nova_compute[239965]: 2026-01-26 17:07:32.920 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:35 compute-0 ceph-mon[75140]: pgmap v3534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:36 compute-0 nova_compute[239965]: 2026-01-26 17:07:36.389 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:36 compute-0 nova_compute[239965]: 2026-01-26 17:07:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:36 compute-0 nova_compute[239965]: 2026-01-26 17:07:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:37 compute-0 ceph-mon[75140]: pgmap v3535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:37 compute-0 nova_compute[239965]: 2026-01-26 17:07:37.920 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:38 compute-0 nova_compute[239965]: 2026-01-26 17:07:38.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:39 compute-0 ceph-mon[75140]: pgmap v3536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:41 compute-0 ceph-mon[75140]: pgmap v3537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:41 compute-0 nova_compute[239965]: 2026-01-26 17:07:41.437 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:42 compute-0 nova_compute[239965]: 2026-01-26 17:07:42.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:43 compute-0 ceph-mon[75140]: pgmap v3538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:43 compute-0 nova_compute[239965]: 2026-01-26 17:07:43.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:44 compute-0 sshd-session[402147]: Invalid user sol from 45.148.10.240 port 47590
Jan 26 17:07:44 compute-0 sshd-session[402147]: Connection closed by invalid user sol 45.148.10.240 port 47590 [preauth]
Jan 26 17:07:45 compute-0 ceph-mon[75140]: pgmap v3539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:45 compute-0 nova_compute[239965]: 2026-01-26 17:07:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:45 compute-0 nova_compute[239965]: 2026-01-26 17:07:45.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:07:45 compute-0 nova_compute[239965]: 2026-01-26 17:07:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:45 compute-0 nova_compute[239965]: 2026-01-26 17:07:45.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:07:45 compute-0 nova_compute[239965]: 2026-01-26 17:07:45.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:07:45 compute-0 nova_compute[239965]: 2026-01-26 17:07:45.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:07:45 compute-0 nova_compute[239965]: 2026-01-26 17:07:45.540 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:07:45 compute-0 nova_compute[239965]: 2026-01-26 17:07:45.541 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:07:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2057900540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.136 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.311 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.312 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.312 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.312 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:07:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2057900540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.390 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.391 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.409 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.450 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:07:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478897291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.971 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.979 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.996 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.998 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:07:46 compute-0 nova_compute[239965]: 2026-01-26 17:07:46.998 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:07:47 compute-0 ceph-mon[75140]: pgmap v3540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1478897291' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:07:47 compute-0 nova_compute[239965]: 2026-01-26 17:07:47.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:47 compute-0 nova_compute[239965]: 2026-01-26 17:07:47.998 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:07:47 compute-0 nova_compute[239965]: 2026-01-26 17:07:47.999 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:07:47 compute-0 nova_compute[239965]: 2026-01-26 17:07:47.999 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:07:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:48 compute-0 nova_compute[239965]: 2026-01-26 17:07:48.031 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:07:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:07:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155816356' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:07:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:07:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155816356' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:07:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:49 compute-0 ceph-mon[75140]: pgmap v3541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/155816356' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:07:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/155816356' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:07:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:07:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:51 compute-0 ceph-mon[75140]: pgmap v3542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:51 compute-0 nova_compute[239965]: 2026-01-26 17:07:51.454 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:52 compute-0 ceph-mon[75140]: pgmap v3543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:07:52 compute-0 nova_compute[239965]: 2026-01-26 17:07:52.928 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 17:07:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:55 compute-0 ceph-mon[75140]: pgmap v3544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 17:07:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:07:56 compute-0 podman[402193]: 2026-01-26 17:07:56.410592447 +0000 UTC m=+0.073311289 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Jan 26 17:07:56 compute-0 nova_compute[239965]: 2026-01-26 17:07:56.457 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:57 compute-0 ceph-mon[75140]: pgmap v3545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:07:57 compute-0 nova_compute[239965]: 2026-01-26 17:07:57.931 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:07:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:07:59 compute-0 ceph-mon[75140]: pgmap v3546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:07:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:07:59.291 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:07:59.292 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:07:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:07:59.292 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:08:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:08:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:08:00 compute-0 podman[402213]: 2026-01-26 17:08:00.468991648 +0000 UTC m=+0.145253491 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 26 17:08:01 compute-0 ceph-mon[75140]: pgmap v3547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:08:01 compute-0 nova_compute[239965]: 2026-01-26 17:08:01.485 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:08:02 compute-0 nova_compute[239965]: 2026-01-26 17:08:02.934 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:03 compute-0 ceph-mon[75140]: pgmap v3548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:08:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:08:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:05 compute-0 ceph-mon[75140]: pgmap v3549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:08:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 17:08:06 compute-0 nova_compute[239965]: 2026-01-26 17:08:06.560 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:07 compute-0 ceph-mon[75140]: pgmap v3550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 17:08:07 compute-0 nova_compute[239965]: 2026-01-26 17:08:07.935 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:08 compute-0 ceph-mon[75140]: pgmap v3551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:11 compute-0 ceph-mon[75140]: pgmap v3552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:11 compute-0 nova_compute[239965]: 2026-01-26 17:08:11.564 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:12 compute-0 nova_compute[239965]: 2026-01-26 17:08:12.936 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:13 compute-0 ceph-mon[75140]: pgmap v3553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:15 compute-0 ceph-mon[75140]: pgmap v3554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:16 compute-0 nova_compute[239965]: 2026-01-26 17:08:16.567 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:17 compute-0 ceph-mon[75140]: pgmap v3555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:17 compute-0 nova_compute[239965]: 2026-01-26 17:08:17.938 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:19 compute-0 ceph-mon[75140]: pgmap v3556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:21 compute-0 nova_compute[239965]: 2026-01-26 17:08:21.570 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:21 compute-0 ceph-mon[75140]: pgmap v3557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:22 compute-0 ceph-mon[75140]: pgmap v3558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:22 compute-0 nova_compute[239965]: 2026-01-26 17:08:22.940 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:25 compute-0 ceph-mon[75140]: pgmap v3559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:26 compute-0 nova_compute[239965]: 2026-01-26 17:08:26.609 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:27 compute-0 sudo[402238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:08:27 compute-0 sudo[402238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:08:27 compute-0 sudo[402238]: pam_unix(sudo:session): session closed for user root
Jan 26 17:08:27 compute-0 ceph-mon[75140]: pgmap v3560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:27 compute-0 podman[402262]: 2026-01-26 17:08:27.372618071 +0000 UTC m=+0.061234762 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:08:27 compute-0 sudo[402269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:08:27 compute-0 sudo[402269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:08:27 compute-0 nova_compute[239965]: 2026-01-26 17:08:27.941 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:27 compute-0 sudo[402269]: pam_unix(sudo:session): session closed for user root
Jan 26 17:08:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:08:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:08:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:08:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:08:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:08:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:08:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:08:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:08:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:08:27 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:08:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:08:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:08:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:28 compute-0 sudo[402337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:08:28 compute-0 sudo[402337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:08:28 compute-0 sudo[402337]: pam_unix(sudo:session): session closed for user root
Jan 26 17:08:28 compute-0 sudo[402362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:08:28 compute-0 sudo[402362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:08:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:08:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:08:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:08:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:08:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:08:28 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:08:28 compute-0 podman[402399]: 2026-01-26 17:08:28.379039386 +0000 UTC m=+0.043185860 container create dffb08a6ec7e89a9e6df8d0043bf7432b2dfb64cc428aa6548bc1a1e69e1a505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 17:08:28 compute-0 systemd[1]: Started libpod-conmon-dffb08a6ec7e89a9e6df8d0043bf7432b2dfb64cc428aa6548bc1a1e69e1a505.scope.
Jan 26 17:08:28 compute-0 podman[402399]: 2026-01-26 17:08:28.359128558 +0000 UTC m=+0.023275062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:08:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:08:28 compute-0 podman[402399]: 2026-01-26 17:08:28.475826519 +0000 UTC m=+0.139973023 container init dffb08a6ec7e89a9e6df8d0043bf7432b2dfb64cc428aa6548bc1a1e69e1a505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 17:08:28 compute-0 podman[402399]: 2026-01-26 17:08:28.488659374 +0000 UTC m=+0.152805858 container start dffb08a6ec7e89a9e6df8d0043bf7432b2dfb64cc428aa6548bc1a1e69e1a505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 17:08:28 compute-0 podman[402399]: 2026-01-26 17:08:28.492825386 +0000 UTC m=+0.156971890 container attach dffb08a6ec7e89a9e6df8d0043bf7432b2dfb64cc428aa6548bc1a1e69e1a505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:08:28 compute-0 compassionate_elgamal[402416]: 167 167
Jan 26 17:08:28 compute-0 systemd[1]: libpod-dffb08a6ec7e89a9e6df8d0043bf7432b2dfb64cc428aa6548bc1a1e69e1a505.scope: Deactivated successfully.
Jan 26 17:08:28 compute-0 podman[402399]: 2026-01-26 17:08:28.498578128 +0000 UTC m=+0.162724612 container died dffb08a6ec7e89a9e6df8d0043bf7432b2dfb64cc428aa6548bc1a1e69e1a505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:08:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-096c106072fd27e4ca6700f9a46d9bf11669841a6a18f6bcd67695981a684e65-merged.mount: Deactivated successfully.
Jan 26 17:08:28 compute-0 podman[402399]: 2026-01-26 17:08:28.568514181 +0000 UTC m=+0.232660665 container remove dffb08a6ec7e89a9e6df8d0043bf7432b2dfb64cc428aa6548bc1a1e69e1a505 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:08:28 compute-0 systemd[1]: libpod-conmon-dffb08a6ec7e89a9e6df8d0043bf7432b2dfb64cc428aa6548bc1a1e69e1a505.scope: Deactivated successfully.
Jan 26 17:08:28 compute-0 podman[402440]: 2026-01-26 17:08:28.757750422 +0000 UTC m=+0.053798141 container create 55b494c7a63fd8492fa3ce9edbd3fba961e8f72ac16caca9f8b70e54397270fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:08:28 compute-0 systemd[1]: Started libpod-conmon-55b494c7a63fd8492fa3ce9edbd3fba961e8f72ac16caca9f8b70e54397270fb.scope.
Jan 26 17:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:08:28
Jan 26 17:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.data', '.mgr', 'images', 'volumes', 'default.rgw.log']
Jan 26 17:08:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:08:28 compute-0 podman[402440]: 2026-01-26 17:08:28.73485998 +0000 UTC m=+0.030907709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:08:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f24c71a1a4a134e9b2582ab5b0aa83a100e5472435d0de330a32e54c473326f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f24c71a1a4a134e9b2582ab5b0aa83a100e5472435d0de330a32e54c473326f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f24c71a1a4a134e9b2582ab5b0aa83a100e5472435d0de330a32e54c473326f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f24c71a1a4a134e9b2582ab5b0aa83a100e5472435d0de330a32e54c473326f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f24c71a1a4a134e9b2582ab5b0aa83a100e5472435d0de330a32e54c473326f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:28 compute-0 podman[402440]: 2026-01-26 17:08:28.87190129 +0000 UTC m=+0.167949019 container init 55b494c7a63fd8492fa3ce9edbd3fba961e8f72ac16caca9f8b70e54397270fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 17:08:28 compute-0 podman[402440]: 2026-01-26 17:08:28.884992462 +0000 UTC m=+0.181040161 container start 55b494c7a63fd8492fa3ce9edbd3fba961e8f72ac16caca9f8b70e54397270fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shaw, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 17:08:28 compute-0 podman[402440]: 2026-01-26 17:08:28.88983839 +0000 UTC m=+0.185886129 container attach 55b494c7a63fd8492fa3ce9edbd3fba961e8f72ac16caca9f8b70e54397270fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shaw, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:08:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:29 compute-0 ceph-mon[75140]: pgmap v3561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:29 compute-0 sharp_shaw[402457]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:08:29 compute-0 sharp_shaw[402457]: --> All data devices are unavailable
Jan 26 17:08:29 compute-0 systemd[1]: libpod-55b494c7a63fd8492fa3ce9edbd3fba961e8f72ac16caca9f8b70e54397270fb.scope: Deactivated successfully.
Jan 26 17:08:29 compute-0 podman[402440]: 2026-01-26 17:08:29.467150784 +0000 UTC m=+0.763198503 container died 55b494c7a63fd8492fa3ce9edbd3fba961e8f72ac16caca9f8b70e54397270fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shaw, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:08:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f24c71a1a4a134e9b2582ab5b0aa83a100e5472435d0de330a32e54c473326f-merged.mount: Deactivated successfully.
Jan 26 17:08:29 compute-0 podman[402440]: 2026-01-26 17:08:29.548445667 +0000 UTC m=+0.844493376 container remove 55b494c7a63fd8492fa3ce9edbd3fba961e8f72ac16caca9f8b70e54397270fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:08:29 compute-0 systemd[1]: libpod-conmon-55b494c7a63fd8492fa3ce9edbd3fba961e8f72ac16caca9f8b70e54397270fb.scope: Deactivated successfully.
Jan 26 17:08:29 compute-0 sudo[402362]: pam_unix(sudo:session): session closed for user root
Jan 26 17:08:29 compute-0 sudo[402490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:08:29 compute-0 sudo[402490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:08:29 compute-0 sudo[402490]: pam_unix(sudo:session): session closed for user root
Jan 26 17:08:29 compute-0 sudo[402515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:08:29 compute-0 sudo[402515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:08:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:30 compute-0 podman[402550]: 2026-01-26 17:08:30.122333718 +0000 UTC m=+0.049498295 container create 2c412492ec8e572ab132f086b0181b9e8b52b4bc95669f38c46b36fb3cf3b493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldberg, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:08:30 compute-0 systemd[1]: Started libpod-conmon-2c412492ec8e572ab132f086b0181b9e8b52b4bc95669f38c46b36fb3cf3b493.scope.
Jan 26 17:08:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:08:30 compute-0 podman[402550]: 2026-01-26 17:08:30.106153101 +0000 UTC m=+0.033317698 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:08:30 compute-0 podman[402550]: 2026-01-26 17:08:30.216720012 +0000 UTC m=+0.143884619 container init 2c412492ec8e572ab132f086b0181b9e8b52b4bc95669f38c46b36fb3cf3b493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 17:08:30 compute-0 podman[402550]: 2026-01-26 17:08:30.224783279 +0000 UTC m=+0.151947846 container start 2c412492ec8e572ab132f086b0181b9e8b52b4bc95669f38c46b36fb3cf3b493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldberg, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:08:30 compute-0 podman[402550]: 2026-01-26 17:08:30.228626134 +0000 UTC m=+0.155790731 container attach 2c412492ec8e572ab132f086b0181b9e8b52b4bc95669f38c46b36fb3cf3b493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldberg, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:08:30 compute-0 admiring_goldberg[402567]: 167 167
Jan 26 17:08:30 compute-0 systemd[1]: libpod-2c412492ec8e572ab132f086b0181b9e8b52b4bc95669f38c46b36fb3cf3b493.scope: Deactivated successfully.
Jan 26 17:08:30 compute-0 podman[402550]: 2026-01-26 17:08:30.232087139 +0000 UTC m=+0.159251726 container died 2c412492ec8e572ab132f086b0181b9e8b52b4bc95669f38c46b36fb3cf3b493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:08:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6507c2cce7f3a66827eeb26844f2b1edbeeba6ff4415a7ee93e567d542e63fea-merged.mount: Deactivated successfully.
Jan 26 17:08:30 compute-0 podman[402550]: 2026-01-26 17:08:30.280775282 +0000 UTC m=+0.207939889 container remove 2c412492ec8e572ab132f086b0181b9e8b52b4bc95669f38c46b36fb3cf3b493 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goldberg, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:08:30 compute-0 systemd[1]: libpod-conmon-2c412492ec8e572ab132f086b0181b9e8b52b4bc95669f38c46b36fb3cf3b493.scope: Deactivated successfully.
Jan 26 17:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:08:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:08:30 compute-0 podman[402589]: 2026-01-26 17:08:30.482413986 +0000 UTC m=+0.051968685 container create 127dc00b5535c50e5ac9e2c5d9722533e09565fc8e692a6205e4faa15b14615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_diffie, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:08:30 compute-0 nova_compute[239965]: 2026-01-26 17:08:30.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:30 compute-0 systemd[1]: Started libpod-conmon-127dc00b5535c50e5ac9e2c5d9722533e09565fc8e692a6205e4faa15b14615e.scope.
Jan 26 17:08:30 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3bfe5191921678fe6f5edcde54befd47db77bfd8e69852bf4f8bd0a524b340d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3bfe5191921678fe6f5edcde54befd47db77bfd8e69852bf4f8bd0a524b340d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3bfe5191921678fe6f5edcde54befd47db77bfd8e69852bf4f8bd0a524b340d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c3bfe5191921678fe6f5edcde54befd47db77bfd8e69852bf4f8bd0a524b340d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:30 compute-0 podman[402589]: 2026-01-26 17:08:30.4544475 +0000 UTC m=+0.024002179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:08:30 compute-0 podman[402589]: 2026-01-26 17:08:30.563734309 +0000 UTC m=+0.133288998 container init 127dc00b5535c50e5ac9e2c5d9722533e09565fc8e692a6205e4faa15b14615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_diffie, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:08:30 compute-0 podman[402589]: 2026-01-26 17:08:30.572041353 +0000 UTC m=+0.141596012 container start 127dc00b5535c50e5ac9e2c5d9722533e09565fc8e692a6205e4faa15b14615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:08:30 compute-0 podman[402589]: 2026-01-26 17:08:30.578166234 +0000 UTC m=+0.147720923 container attach 127dc00b5535c50e5ac9e2c5d9722533e09565fc8e692a6205e4faa15b14615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_diffie, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:08:30 compute-0 podman[402603]: 2026-01-26 17:08:30.63472818 +0000 UTC m=+0.103255963 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 26 17:08:30 compute-0 serene_diffie[402606]: {
Jan 26 17:08:30 compute-0 serene_diffie[402606]:     "0": [
Jan 26 17:08:30 compute-0 serene_diffie[402606]:         {
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "devices": [
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "/dev/loop3"
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             ],
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_name": "ceph_lv0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_size": "21470642176",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "name": "ceph_lv0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "tags": {
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.cluster_name": "ceph",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.crush_device_class": "",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.encrypted": "0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.objectstore": "bluestore",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.osd_id": "0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.type": "block",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.vdo": "0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.with_tpm": "0"
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             },
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "type": "block",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "vg_name": "ceph_vg0"
Jan 26 17:08:30 compute-0 serene_diffie[402606]:         }
Jan 26 17:08:30 compute-0 serene_diffie[402606]:     ],
Jan 26 17:08:30 compute-0 serene_diffie[402606]:     "1": [
Jan 26 17:08:30 compute-0 serene_diffie[402606]:         {
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "devices": [
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "/dev/loop4"
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             ],
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_name": "ceph_lv1",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_size": "21470642176",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "name": "ceph_lv1",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "tags": {
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.cluster_name": "ceph",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.crush_device_class": "",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.encrypted": "0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.objectstore": "bluestore",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.osd_id": "1",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.type": "block",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.vdo": "0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.with_tpm": "0"
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             },
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "type": "block",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "vg_name": "ceph_vg1"
Jan 26 17:08:30 compute-0 serene_diffie[402606]:         }
Jan 26 17:08:30 compute-0 serene_diffie[402606]:     ],
Jan 26 17:08:30 compute-0 serene_diffie[402606]:     "2": [
Jan 26 17:08:30 compute-0 serene_diffie[402606]:         {
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "devices": [
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "/dev/loop5"
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             ],
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_name": "ceph_lv2",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_size": "21470642176",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "name": "ceph_lv2",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "tags": {
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.cluster_name": "ceph",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.crush_device_class": "",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.encrypted": "0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.objectstore": "bluestore",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.osd_id": "2",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.type": "block",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.vdo": "0",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:                 "ceph.with_tpm": "0"
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             },
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "type": "block",
Jan 26 17:08:30 compute-0 serene_diffie[402606]:             "vg_name": "ceph_vg2"
Jan 26 17:08:30 compute-0 serene_diffie[402606]:         }
Jan 26 17:08:30 compute-0 serene_diffie[402606]:     ]
Jan 26 17:08:30 compute-0 serene_diffie[402606]: }
Jan 26 17:08:30 compute-0 systemd[1]: libpod-127dc00b5535c50e5ac9e2c5d9722533e09565fc8e692a6205e4faa15b14615e.scope: Deactivated successfully.
Jan 26 17:08:30 compute-0 podman[402589]: 2026-01-26 17:08:30.884013222 +0000 UTC m=+0.453567881 container died 127dc00b5535c50e5ac9e2c5d9722533e09565fc8e692a6205e4faa15b14615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 17:08:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3bfe5191921678fe6f5edcde54befd47db77bfd8e69852bf4f8bd0a524b340d-merged.mount: Deactivated successfully.
Jan 26 17:08:30 compute-0 podman[402589]: 2026-01-26 17:08:30.926496664 +0000 UTC m=+0.496051323 container remove 127dc00b5535c50e5ac9e2c5d9722533e09565fc8e692a6205e4faa15b14615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:08:30 compute-0 systemd[1]: libpod-conmon-127dc00b5535c50e5ac9e2c5d9722533e09565fc8e692a6205e4faa15b14615e.scope: Deactivated successfully.
Jan 26 17:08:30 compute-0 sudo[402515]: pam_unix(sudo:session): session closed for user root
Jan 26 17:08:31 compute-0 sudo[402653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:08:31 compute-0 sudo[402653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:08:31 compute-0 sudo[402653]: pam_unix(sudo:session): session closed for user root
Jan 26 17:08:31 compute-0 sudo[402678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:08:31 compute-0 sudo[402678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:08:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:08:31 compute-0 ceph-mon[75140]: pgmap v3562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:31 compute-0 podman[402716]: 2026-01-26 17:08:31.423175921 +0000 UTC m=+0.040659458 container create db5dc434330bb05f0162cbed9b69571a1c94eeff673c59941feffa679ae7377a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 17:08:31 compute-0 systemd[1]: Started libpod-conmon-db5dc434330bb05f0162cbed9b69571a1c94eeff673c59941feffa679ae7377a.scope.
Jan 26 17:08:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:08:31 compute-0 podman[402716]: 2026-01-26 17:08:31.406671277 +0000 UTC m=+0.024154834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:08:31 compute-0 podman[402716]: 2026-01-26 17:08:31.510460832 +0000 UTC m=+0.127944399 container init db5dc434330bb05f0162cbed9b69571a1c94eeff673c59941feffa679ae7377a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:08:31 compute-0 podman[402716]: 2026-01-26 17:08:31.517191236 +0000 UTC m=+0.134674773 container start db5dc434330bb05f0162cbed9b69571a1c94eeff673c59941feffa679ae7377a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 17:08:31 compute-0 vigorous_mcclintock[402732]: 167 167
Jan 26 17:08:31 compute-0 podman[402716]: 2026-01-26 17:08:31.521223445 +0000 UTC m=+0.138706982 container attach db5dc434330bb05f0162cbed9b69571a1c94eeff673c59941feffa679ae7377a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:08:31 compute-0 systemd[1]: libpod-db5dc434330bb05f0162cbed9b69571a1c94eeff673c59941feffa679ae7377a.scope: Deactivated successfully.
Jan 26 17:08:31 compute-0 podman[402716]: 2026-01-26 17:08:31.522954778 +0000 UTC m=+0.140438315 container died db5dc434330bb05f0162cbed9b69571a1c94eeff673c59941feffa679ae7377a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 26 17:08:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-02717b8b1f1b769a146ac9e488112b99bdc420c76488532a77f275d93a649085-merged.mount: Deactivated successfully.
Jan 26 17:08:31 compute-0 podman[402716]: 2026-01-26 17:08:31.558597701 +0000 UTC m=+0.176081238 container remove db5dc434330bb05f0162cbed9b69571a1c94eeff673c59941feffa679ae7377a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 17:08:31 compute-0 systemd[1]: libpod-conmon-db5dc434330bb05f0162cbed9b69571a1c94eeff673c59941feffa679ae7377a.scope: Deactivated successfully.
Jan 26 17:08:31 compute-0 nova_compute[239965]: 2026-01-26 17:08:31.612 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:31 compute-0 podman[402756]: 2026-01-26 17:08:31.714904464 +0000 UTC m=+0.042966415 container create 84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:08:31 compute-0 systemd[1]: Started libpod-conmon-84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361.scope.
Jan 26 17:08:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee6e02a02d879df707c02440478554791ed7b851b79823254a2176f2346a0db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee6e02a02d879df707c02440478554791ed7b851b79823254a2176f2346a0db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee6e02a02d879df707c02440478554791ed7b851b79823254a2176f2346a0db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee6e02a02d879df707c02440478554791ed7b851b79823254a2176f2346a0db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:08:31 compute-0 podman[402756]: 2026-01-26 17:08:31.787510514 +0000 UTC m=+0.115572465 container init 84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:08:31 compute-0 podman[402756]: 2026-01-26 17:08:31.697020425 +0000 UTC m=+0.025082396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:08:31 compute-0 podman[402756]: 2026-01-26 17:08:31.793798768 +0000 UTC m=+0.121860719 container start 84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:08:31 compute-0 podman[402756]: 2026-01-26 17:08:31.798091933 +0000 UTC m=+0.126153904 container attach 84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:08:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:32 compute-0 nova_compute[239965]: 2026-01-26 17:08:32.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:32 compute-0 lvm[402849]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:08:32 compute-0 lvm[402851]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:08:32 compute-0 lvm[402851]: VG ceph_vg1 finished
Jan 26 17:08:32 compute-0 lvm[402853]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:08:32 compute-0 lvm[402853]: VG ceph_vg2 finished
Jan 26 17:08:32 compute-0 lvm[402849]: VG ceph_vg0 finished
Jan 26 17:08:32 compute-0 lvm[402855]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:08:32 compute-0 lvm[402855]: VG ceph_vg2 finished
Jan 26 17:08:32 compute-0 lvm[402857]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:08:32 compute-0 lvm[402857]: VG ceph_vg2 finished
Jan 26 17:08:32 compute-0 dazzling_leakey[402772]: {}
Jan 26 17:08:32 compute-0 systemd[1]: libpod-84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361.scope: Deactivated successfully.
Jan 26 17:08:32 compute-0 systemd[1]: libpod-84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361.scope: Consumed 1.566s CPU time.
Jan 26 17:08:32 compute-0 podman[402858]: 2026-01-26 17:08:32.771101029 +0000 UTC m=+0.024304016 container died 84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:08:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-dee6e02a02d879df707c02440478554791ed7b851b79823254a2176f2346a0db-merged.mount: Deactivated successfully.
Jan 26 17:08:32 compute-0 podman[402858]: 2026-01-26 17:08:32.888149889 +0000 UTC m=+0.141352836 container remove 84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:08:32 compute-0 systemd[1]: libpod-conmon-84f31f6bef04bbff4ccf258e2b9c5c9c7843065462f4fb41310d2f0cf6732361.scope: Deactivated successfully.
Jan 26 17:08:32 compute-0 sudo[402678]: pam_unix(sudo:session): session closed for user root
Jan 26 17:08:32 compute-0 nova_compute[239965]: 2026-01-26 17:08:32.942 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:32 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:08:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:08:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:08:33 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:08:33 compute-0 sudo[402873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:08:33 compute-0 sudo[402873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:08:33 compute-0 sudo[402873]: pam_unix(sudo:session): session closed for user root
Jan 26 17:08:33 compute-0 ceph-mon[75140]: pgmap v3563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:08:33 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:08:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:35 compute-0 ceph-mon[75140]: pgmap v3564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #171. Immutable memtables: 0.
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.385388) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 105] Flushing memtable with next log file: 171
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447316385425, "job": 105, "event": "flush_started", "num_memtables": 1, "num_entries": 1728, "num_deletes": 251, "total_data_size": 2950913, "memory_usage": 2993360, "flush_reason": "Manual Compaction"}
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 105] Level-0 flush table #172: started
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447316405913, "cf_name": "default", "job": 105, "event": "table_file_creation", "file_number": 172, "file_size": 2875319, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 71203, "largest_seqno": 72930, "table_properties": {"data_size": 2867281, "index_size": 4916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16174, "raw_average_key_size": 19, "raw_value_size": 2851341, "raw_average_value_size": 3524, "num_data_blocks": 219, "num_entries": 809, "num_filter_entries": 809, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769447129, "oldest_key_time": 1769447129, "file_creation_time": 1769447316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 172, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 105] Flush lasted 20623 microseconds, and 7827 cpu microseconds.
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.406006) [db/flush_job.cc:967] [default] [JOB 105] Level-0 flush table #172: 2875319 bytes OK
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.406032) [db/memtable_list.cc:519] [default] Level-0 commit table #172 started
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.414633) [db/memtable_list.cc:722] [default] Level-0 commit table #172: memtable #1 done
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.414675) EVENT_LOG_v1 {"time_micros": 1769447316414666, "job": 105, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.414725) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 105] Try to delete WAL files size 2943510, prev total WAL file size 2943510, number of live WAL files 2.
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000168.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.415614) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037303238' seq:72057594037927935, type:22 .. '7061786F730037323830' seq:0, type:0; will stop at (end)
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 106] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 105 Base level 0, inputs: [172(2807KB)], [170(10MB)]
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447316415663, "job": 106, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [172], "files_L6": [170], "score": -1, "input_data_size": 13386526, "oldest_snapshot_seqno": -1}
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 106] Generated table #173: 9090 keys, 11571696 bytes, temperature: kUnknown
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447316497752, "cf_name": "default", "job": 106, "event": "table_file_creation", "file_number": 173, "file_size": 11571696, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11513211, "index_size": 34628, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 238660, "raw_average_key_size": 26, "raw_value_size": 11352914, "raw_average_value_size": 1248, "num_data_blocks": 1337, "num_entries": 9090, "num_filter_entries": 9090, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769447316, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 173, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.498248) [db/compaction/compaction_job.cc:1663] [default] [JOB 106] Compacted 1@0 + 1@6 files to L6 => 11571696 bytes
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.500122) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.8 rd, 140.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 10.0 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(8.7) write-amplify(4.0) OK, records in: 9604, records dropped: 514 output_compression: NoCompression
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.500150) EVENT_LOG_v1 {"time_micros": 1769447316500136, "job": 106, "event": "compaction_finished", "compaction_time_micros": 82230, "compaction_time_cpu_micros": 30717, "output_level": 6, "num_output_files": 1, "total_output_size": 11571696, "num_input_records": 9604, "num_output_records": 9090, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000172.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447316500916, "job": 106, "event": "table_file_deletion", "file_number": 172}
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000170.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447316503522, "job": 106, "event": "table_file_deletion", "file_number": 170}
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.415496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.503616) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.503624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.503626) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.503628) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:08:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:08:36.503630) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:08:36 compute-0 nova_compute[239965]: 2026-01-26 17:08:36.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:36 compute-0 nova_compute[239965]: 2026-01-26 17:08:36.615 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:37 compute-0 ceph-mon[75140]: pgmap v3565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:37 compute-0 nova_compute[239965]: 2026-01-26 17:08:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:37 compute-0 nova_compute[239965]: 2026-01-26 17:08:37.944 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:39 compute-0 nova_compute[239965]: 2026-01-26 17:08:39.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:39 compute-0 ceph-mon[75140]: pgmap v3566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:40 compute-0 ceph-mon[75140]: pgmap v3567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:41 compute-0 nova_compute[239965]: 2026-01-26 17:08:41.618 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:42 compute-0 nova_compute[239965]: 2026-01-26 17:08:42.945 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:43 compute-0 ceph-mon[75140]: pgmap v3568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:44 compute-0 nova_compute[239965]: 2026-01-26 17:08:44.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:45 compute-0 ceph-mon[75140]: pgmap v3569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:45 compute-0 nova_compute[239965]: 2026-01-26 17:08:45.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:45 compute-0 nova_compute[239965]: 2026-01-26 17:08:45.598 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:08:45 compute-0 nova_compute[239965]: 2026-01-26 17:08:45.599 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:08:45 compute-0 nova_compute[239965]: 2026-01-26 17:08:45.599 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:08:45 compute-0 nova_compute[239965]: 2026-01-26 17:08:45.599 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:08:45 compute-0 nova_compute[239965]: 2026-01-26 17:08:45.600 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:08:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1541336627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:08:46 compute-0 nova_compute[239965]: 2026-01-26 17:08:46.215 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.615s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1541336627' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:08:46 compute-0 nova_compute[239965]: 2026-01-26 17:08:46.418 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:08:46 compute-0 nova_compute[239965]: 2026-01-26 17:08:46.419 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3537MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:08:46 compute-0 nova_compute[239965]: 2026-01-26 17:08:46.419 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:08:46 compute-0 nova_compute[239965]: 2026-01-26 17:08:46.420 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:08:46 compute-0 nova_compute[239965]: 2026-01-26 17:08:46.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:47 compute-0 ceph-mon[75140]: pgmap v3570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:47 compute-0 nova_compute[239965]: 2026-01-26 17:08:47.310 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:08:47 compute-0 nova_compute[239965]: 2026-01-26 17:08:47.311 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:08:47 compute-0 nova_compute[239965]: 2026-01-26 17:08:47.350 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:08:47 compute-0 nova_compute[239965]: 2026-01-26 17:08:47.947 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:08:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3479703235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:08:47 compute-0 nova_compute[239965]: 2026-01-26 17:08:47.991 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:08:48 compute-0 nova_compute[239965]: 2026-01-26 17:08:47.999 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:08:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:48 compute-0 nova_compute[239965]: 2026-01-26 17:08:48.277 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:08:48 compute-0 nova_compute[239965]: 2026-01-26 17:08:48.279 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:08:48 compute-0 nova_compute[239965]: 2026-01-26 17:08:48.280 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:08:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3479703235' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:08:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:08:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2465751778' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:08:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:08:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2465751778' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:08:49 compute-0 nova_compute[239965]: 2026-01-26 17:08:49.279 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:49 compute-0 nova_compute[239965]: 2026-01-26 17:08:49.281 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:08:49 compute-0 nova_compute[239965]: 2026-01-26 17:08:49.282 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:08:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:49 compute-0 nova_compute[239965]: 2026-01-26 17:08:49.437 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:08:49 compute-0 nova_compute[239965]: 2026-01-26 17:08:49.437 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:49 compute-0 nova_compute[239965]: 2026-01-26 17:08:49.437 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:08:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:08:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:50 compute-0 ceph-mon[75140]: pgmap v3571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2465751778' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:08:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2465751778' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:08:51 compute-0 ceph-mon[75140]: pgmap v3572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:51 compute-0 nova_compute[239965]: 2026-01-26 17:08:51.624 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:52 compute-0 nova_compute[239965]: 2026-01-26 17:08:52.949 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:53 compute-0 ceph-mon[75140]: pgmap v3573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:54 compute-0 ceph-mon[75140]: pgmap v3574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:54 compute-0 nova_compute[239965]: 2026-01-26 17:08:54.660 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:08:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:56 compute-0 nova_compute[239965]: 2026-01-26 17:08:56.628 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:57 compute-0 ceph-mon[75140]: pgmap v3575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:57 compute-0 nova_compute[239965]: 2026-01-26 17:08:57.951 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:08:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:08:58 compute-0 podman[402942]: 2026-01-26 17:08:58.39750249 +0000 UTC m=+0.067399134 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 26 17:08:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:08:59.292 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:08:59.293 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:08:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:08:59.293 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:08:59 compute-0 ceph-mon[75140]: pgmap v3576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:09:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:09:00 compute-0 ceph-mon[75140]: pgmap v3577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:01 compute-0 podman[402963]: 2026-01-26 17:09:01.432859499 +0000 UTC m=+0.122465924 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:09:01 compute-0 nova_compute[239965]: 2026-01-26 17:09:01.631 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:02 compute-0 nova_compute[239965]: 2026-01-26 17:09:02.952 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:03 compute-0 ceph-mon[75140]: pgmap v3578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:05 compute-0 ceph-mon[75140]: pgmap v3579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:06 compute-0 nova_compute[239965]: 2026-01-26 17:09:06.672 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:07 compute-0 ceph-mon[75140]: pgmap v3580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:08 compute-0 nova_compute[239965]: 2026-01-26 17:09:08.200 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:09 compute-0 ceph-mon[75140]: pgmap v3581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:10 compute-0 ceph-mon[75140]: pgmap v3582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:11 compute-0 nova_compute[239965]: 2026-01-26 17:09:11.730 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:13 compute-0 ceph-mon[75140]: pgmap v3583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:13 compute-0 nova_compute[239965]: 2026-01-26 17:09:13.203 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:15 compute-0 ceph-mon[75140]: pgmap v3584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:16 compute-0 nova_compute[239965]: 2026-01-26 17:09:16.790 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:17 compute-0 ceph-mon[75140]: pgmap v3585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:18 compute-0 nova_compute[239965]: 2026-01-26 17:09:18.204 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:19 compute-0 ceph-mon[75140]: pgmap v3586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:20 compute-0 ceph-mon[75140]: pgmap v3587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:21 compute-0 nova_compute[239965]: 2026-01-26 17:09:21.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:23 compute-0 ceph-mon[75140]: pgmap v3588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:23 compute-0 nova_compute[239965]: 2026-01-26 17:09:23.206 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:25 compute-0 ceph-mon[75140]: pgmap v3589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:26 compute-0 nova_compute[239965]: 2026-01-26 17:09:26.796 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:27 compute-0 ceph-mon[75140]: pgmap v3590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:28 compute-0 nova_compute[239965]: 2026-01-26 17:09:28.209 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:09:28
Jan 26 17:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'default.rgw.log', 'default.rgw.meta', 'vms', 'default.rgw.control', 'images', 'cephfs.cephfs.data', 'volumes', '.rgw.root', '.mgr']
Jan 26 17:09:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:09:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:29 compute-0 podman[402989]: 2026-01-26 17:09:29.379562555 +0000 UTC m=+0.064717258 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:09:29 compute-0 ceph-mon[75140]: pgmap v3591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:09:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:09:30 compute-0 ceph-mon[75140]: pgmap v3592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:09:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:09:31 compute-0 nova_compute[239965]: 2026-01-26 17:09:31.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:32 compute-0 podman[403009]: 2026-01-26 17:09:32.406447305 +0000 UTC m=+0.098019674 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:09:32 compute-0 nova_compute[239965]: 2026-01-26 17:09:32.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:33 compute-0 ceph-mon[75140]: pgmap v3593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:33 compute-0 sudo[403035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:09:33 compute-0 sudo[403035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:33 compute-0 sudo[403035]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:33 compute-0 nova_compute[239965]: 2026-01-26 17:09:33.262 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:33 compute-0 sudo[403060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 17:09:33 compute-0 sudo[403060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:33 compute-0 nova_compute[239965]: 2026-01-26 17:09:33.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:33 compute-0 podman[403128]: 2026-01-26 17:09:33.750352304 +0000 UTC m=+0.064204795 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:09:33 compute-0 podman[403128]: 2026-01-26 17:09:33.846474641 +0000 UTC m=+0.160327122 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:09:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:34 compute-0 sudo[403060]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:09:34 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:09:34 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:34 compute-0 sudo[403314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:09:34 compute-0 sudo[403314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:34 compute-0 sudo[403314]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:34 compute-0 sudo[403339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:09:34 compute-0 sudo[403339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:35 compute-0 ceph-mon[75140]: pgmap v3594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:35 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:35 compute-0 sudo[403339]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:09:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:09:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:09:35 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:09:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:09:35 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:09:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:09:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:09:35 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:09:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:09:35 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:09:35 compute-0 sudo[403396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:09:35 compute-0 sudo[403396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:35 compute-0 sudo[403396]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:35 compute-0 sudo[403421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:09:35 compute-0 sudo[403421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:35 compute-0 podman[403457]: 2026-01-26 17:09:35.820415667 +0000 UTC m=+0.042281007 container create a04dddd065fe8d4808718c7e1c31fc8f93e50c4e8d13b7f9f3361a90a4aa916b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaplygin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 26 17:09:35 compute-0 systemd[1]: Started libpod-conmon-a04dddd065fe8d4808718c7e1c31fc8f93e50c4e8d13b7f9f3361a90a4aa916b.scope.
Jan 26 17:09:35 compute-0 podman[403457]: 2026-01-26 17:09:35.801882472 +0000 UTC m=+0.023747842 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:09:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:09:35 compute-0 podman[403457]: 2026-01-26 17:09:35.916591925 +0000 UTC m=+0.138457285 container init a04dddd065fe8d4808718c7e1c31fc8f93e50c4e8d13b7f9f3361a90a4aa916b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 17:09:35 compute-0 podman[403457]: 2026-01-26 17:09:35.9249849 +0000 UTC m=+0.146850240 container start a04dddd065fe8d4808718c7e1c31fc8f93e50c4e8d13b7f9f3361a90a4aa916b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:09:35 compute-0 podman[403457]: 2026-01-26 17:09:35.928423765 +0000 UTC m=+0.150289145 container attach a04dddd065fe8d4808718c7e1c31fc8f93e50c4e8d13b7f9f3361a90a4aa916b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaplygin, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:09:35 compute-0 confident_chaplygin[403473]: 167 167
Jan 26 17:09:35 compute-0 systemd[1]: libpod-a04dddd065fe8d4808718c7e1c31fc8f93e50c4e8d13b7f9f3361a90a4aa916b.scope: Deactivated successfully.
Jan 26 17:09:35 compute-0 podman[403457]: 2026-01-26 17:09:35.932208198 +0000 UTC m=+0.154073548 container died a04dddd065fe8d4808718c7e1c31fc8f93e50c4e8d13b7f9f3361a90a4aa916b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaplygin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:09:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d1c683c9ddd315e59970e7005b98443e7df4ba552d3512a7603f40868345a53-merged.mount: Deactivated successfully.
Jan 26 17:09:35 compute-0 podman[403457]: 2026-01-26 17:09:35.973421418 +0000 UTC m=+0.195286758 container remove a04dddd065fe8d4808718c7e1c31fc8f93e50c4e8d13b7f9f3361a90a4aa916b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 17:09:35 compute-0 systemd[1]: libpod-conmon-a04dddd065fe8d4808718c7e1c31fc8f93e50c4e8d13b7f9f3361a90a4aa916b.scope: Deactivated successfully.
Jan 26 17:09:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:36 compute-0 podman[403495]: 2026-01-26 17:09:36.12239593 +0000 UTC m=+0.024938231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:09:36 compute-0 nova_compute[239965]: 2026-01-26 17:09:36.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:36 compute-0 podman[403495]: 2026-01-26 17:09:36.740332291 +0000 UTC m=+0.642874552 container create 030eb3fc5a6dcbdba2dd368c9a5615e3a3032bfe7ed68b565b39a9ddf29d8c5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:09:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:09:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:09:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:09:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:09:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:09:36 compute-0 nova_compute[239965]: 2026-01-26 17:09:36.848 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:36 compute-0 systemd[1]: Started libpod-conmon-030eb3fc5a6dcbdba2dd368c9a5615e3a3032bfe7ed68b565b39a9ddf29d8c5d.scope.
Jan 26 17:09:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2482c75d4628a819afaad5982bb7440987fb41aed4312163bbe2e30832749f11/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2482c75d4628a819afaad5982bb7440987fb41aed4312163bbe2e30832749f11/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2482c75d4628a819afaad5982bb7440987fb41aed4312163bbe2e30832749f11/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2482c75d4628a819afaad5982bb7440987fb41aed4312163bbe2e30832749f11/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2482c75d4628a819afaad5982bb7440987fb41aed4312163bbe2e30832749f11/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:37 compute-0 podman[403495]: 2026-01-26 17:09:37.06173609 +0000 UTC m=+0.964278391 container init 030eb3fc5a6dcbdba2dd368c9a5615e3a3032bfe7ed68b565b39a9ddf29d8c5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:09:37 compute-0 podman[403495]: 2026-01-26 17:09:37.071262754 +0000 UTC m=+0.973805055 container start 030eb3fc5a6dcbdba2dd368c9a5615e3a3032bfe7ed68b565b39a9ddf29d8c5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:09:37 compute-0 podman[403495]: 2026-01-26 17:09:37.076344479 +0000 UTC m=+0.978886770 container attach 030eb3fc5a6dcbdba2dd368c9a5615e3a3032bfe7ed68b565b39a9ddf29d8c5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 26 17:09:37 compute-0 nova_compute[239965]: 2026-01-26 17:09:37.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:37 compute-0 bold_grothendieck[403511]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:09:37 compute-0 bold_grothendieck[403511]: --> All data devices are unavailable
Jan 26 17:09:37 compute-0 systemd[1]: libpod-030eb3fc5a6dcbdba2dd368c9a5615e3a3032bfe7ed68b565b39a9ddf29d8c5d.scope: Deactivated successfully.
Jan 26 17:09:37 compute-0 podman[403495]: 2026-01-26 17:09:37.599441194 +0000 UTC m=+1.501983445 container died 030eb3fc5a6dcbdba2dd368c9a5615e3a3032bfe7ed68b565b39a9ddf29d8c5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 17:09:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:38 compute-0 nova_compute[239965]: 2026-01-26 17:09:38.267 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:38 compute-0 ceph-mon[75140]: pgmap v3595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-2482c75d4628a819afaad5982bb7440987fb41aed4312163bbe2e30832749f11-merged.mount: Deactivated successfully.
Jan 26 17:09:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:39 compute-0 podman[403495]: 2026-01-26 17:09:39.418071083 +0000 UTC m=+3.320613344 container remove 030eb3fc5a6dcbdba2dd368c9a5615e3a3032bfe7ed68b565b39a9ddf29d8c5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:09:39 compute-0 systemd[1]: libpod-conmon-030eb3fc5a6dcbdba2dd368c9a5615e3a3032bfe7ed68b565b39a9ddf29d8c5d.scope: Deactivated successfully.
Jan 26 17:09:39 compute-0 sudo[403421]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:39 compute-0 sudo[403543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:09:39 compute-0 sudo[403543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:39 compute-0 sudo[403543]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:39 compute-0 sudo[403568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:09:39 compute-0 sudo[403568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:39 compute-0 ceph-mon[75140]: pgmap v3596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:39 compute-0 podman[403605]: 2026-01-26 17:09:39.930360363 +0000 UTC m=+0.040536335 container create bd779064e7380876aed16a924f74224deb0ddb02ffb6106e9b607eb3d0605653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cohen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 17:09:39 compute-0 systemd[1]: Started libpod-conmon-bd779064e7380876aed16a924f74224deb0ddb02ffb6106e9b607eb3d0605653.scope.
Jan 26 17:09:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:09:40 compute-0 podman[403605]: 2026-01-26 17:09:39.913589302 +0000 UTC m=+0.023765304 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:09:40 compute-0 podman[403605]: 2026-01-26 17:09:40.020353129 +0000 UTC m=+0.130529131 container init bd779064e7380876aed16a924f74224deb0ddb02ffb6106e9b607eb3d0605653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:09:40 compute-0 podman[403605]: 2026-01-26 17:09:40.02772964 +0000 UTC m=+0.137905602 container start bd779064e7380876aed16a924f74224deb0ddb02ffb6106e9b607eb3d0605653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:09:40 compute-0 podman[403605]: 2026-01-26 17:09:40.032314383 +0000 UTC m=+0.142490355 container attach bd779064e7380876aed16a924f74224deb0ddb02ffb6106e9b607eb3d0605653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 17:09:40 compute-0 gifted_cohen[403621]: 167 167
Jan 26 17:09:40 compute-0 systemd[1]: libpod-bd779064e7380876aed16a924f74224deb0ddb02ffb6106e9b607eb3d0605653.scope: Deactivated successfully.
Jan 26 17:09:40 compute-0 podman[403605]: 2026-01-26 17:09:40.033628355 +0000 UTC m=+0.143804317 container died bd779064e7380876aed16a924f74224deb0ddb02ffb6106e9b607eb3d0605653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 26 17:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e318ad01129d9a1046216750c0493192ae3121b943bd04a906a3838888d753a-merged.mount: Deactivated successfully.
Jan 26 17:09:40 compute-0 podman[403605]: 2026-01-26 17:09:40.070385846 +0000 UTC m=+0.180561818 container remove bd779064e7380876aed16a924f74224deb0ddb02ffb6106e9b607eb3d0605653 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:09:40 compute-0 systemd[1]: libpod-conmon-bd779064e7380876aed16a924f74224deb0ddb02ffb6106e9b607eb3d0605653.scope: Deactivated successfully.
Jan 26 17:09:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:40 compute-0 podman[403646]: 2026-01-26 17:09:40.256197812 +0000 UTC m=+0.042807141 container create 1d839fbbda1f9af28b70f4590ac04e3fc69b4292c703b1b071e86da3317d9ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shaw, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:09:40 compute-0 systemd[1]: Started libpod-conmon-1d839fbbda1f9af28b70f4590ac04e3fc69b4292c703b1b071e86da3317d9ec4.scope.
Jan 26 17:09:40 compute-0 podman[403646]: 2026-01-26 17:09:40.23698816 +0000 UTC m=+0.023597489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:09:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9247325b9815cc0e38999219fa3e8b6c986a0d7f22ddb67ee953670cdeffebb0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9247325b9815cc0e38999219fa3e8b6c986a0d7f22ddb67ee953670cdeffebb0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9247325b9815cc0e38999219fa3e8b6c986a0d7f22ddb67ee953670cdeffebb0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9247325b9815cc0e38999219fa3e8b6c986a0d7f22ddb67ee953670cdeffebb0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:40 compute-0 podman[403646]: 2026-01-26 17:09:40.355457395 +0000 UTC m=+0.142066744 container init 1d839fbbda1f9af28b70f4590ac04e3fc69b4292c703b1b071e86da3317d9ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shaw, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 17:09:40 compute-0 podman[403646]: 2026-01-26 17:09:40.360786386 +0000 UTC m=+0.147395715 container start 1d839fbbda1f9af28b70f4590ac04e3fc69b4292c703b1b071e86da3317d9ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shaw, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:09:40 compute-0 podman[403646]: 2026-01-26 17:09:40.364990329 +0000 UTC m=+0.151599658 container attach 1d839fbbda1f9af28b70f4590ac04e3fc69b4292c703b1b071e86da3317d9ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shaw, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:09:40 compute-0 nova_compute[239965]: 2026-01-26 17:09:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:40 compute-0 clever_shaw[403662]: {
Jan 26 17:09:40 compute-0 clever_shaw[403662]:     "0": [
Jan 26 17:09:40 compute-0 clever_shaw[403662]:         {
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "devices": [
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "/dev/loop3"
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             ],
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_name": "ceph_lv0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_size": "21470642176",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "name": "ceph_lv0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "tags": {
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.cluster_name": "ceph",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.crush_device_class": "",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.encrypted": "0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.objectstore": "bluestore",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.osd_id": "0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.type": "block",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.vdo": "0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.with_tpm": "0"
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             },
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "type": "block",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "vg_name": "ceph_vg0"
Jan 26 17:09:40 compute-0 clever_shaw[403662]:         }
Jan 26 17:09:40 compute-0 clever_shaw[403662]:     ],
Jan 26 17:09:40 compute-0 clever_shaw[403662]:     "1": [
Jan 26 17:09:40 compute-0 clever_shaw[403662]:         {
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "devices": [
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "/dev/loop4"
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             ],
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_name": "ceph_lv1",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_size": "21470642176",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "name": "ceph_lv1",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "tags": {
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.cluster_name": "ceph",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.crush_device_class": "",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.encrypted": "0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.objectstore": "bluestore",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.osd_id": "1",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.type": "block",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.vdo": "0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.with_tpm": "0"
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             },
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "type": "block",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "vg_name": "ceph_vg1"
Jan 26 17:09:40 compute-0 clever_shaw[403662]:         }
Jan 26 17:09:40 compute-0 clever_shaw[403662]:     ],
Jan 26 17:09:40 compute-0 clever_shaw[403662]:     "2": [
Jan 26 17:09:40 compute-0 clever_shaw[403662]:         {
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "devices": [
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "/dev/loop5"
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             ],
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_name": "ceph_lv2",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_size": "21470642176",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "name": "ceph_lv2",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "tags": {
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.cluster_name": "ceph",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.crush_device_class": "",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.encrypted": "0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.objectstore": "bluestore",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.osd_id": "2",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.type": "block",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.vdo": "0",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:                 "ceph.with_tpm": "0"
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             },
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "type": "block",
Jan 26 17:09:40 compute-0 clever_shaw[403662]:             "vg_name": "ceph_vg2"
Jan 26 17:09:40 compute-0 clever_shaw[403662]:         }
Jan 26 17:09:40 compute-0 clever_shaw[403662]:     ]
Jan 26 17:09:40 compute-0 clever_shaw[403662]: }
Jan 26 17:09:40 compute-0 systemd[1]: libpod-1d839fbbda1f9af28b70f4590ac04e3fc69b4292c703b1b071e86da3317d9ec4.scope: Deactivated successfully.
Jan 26 17:09:40 compute-0 podman[403646]: 2026-01-26 17:09:40.66929375 +0000 UTC m=+0.455903069 container died 1d839fbbda1f9af28b70f4590ac04e3fc69b4292c703b1b071e86da3317d9ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:09:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-9247325b9815cc0e38999219fa3e8b6c986a0d7f22ddb67ee953670cdeffebb0-merged.mount: Deactivated successfully.
Jan 26 17:09:41 compute-0 ceph-mon[75140]: pgmap v3597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:41 compute-0 podman[403646]: 2026-01-26 17:09:41.414732256 +0000 UTC m=+1.201341575 container remove 1d839fbbda1f9af28b70f4590ac04e3fc69b4292c703b1b071e86da3317d9ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_shaw, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:09:41 compute-0 systemd[1]: libpod-conmon-1d839fbbda1f9af28b70f4590ac04e3fc69b4292c703b1b071e86da3317d9ec4.scope: Deactivated successfully.
Jan 26 17:09:41 compute-0 sudo[403568]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:41 compute-0 sudo[403684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:09:41 compute-0 sudo[403684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:41 compute-0 sudo[403684]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:41 compute-0 nova_compute[239965]: 2026-01-26 17:09:41.851 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:41 compute-0 sudo[403709]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:09:41 compute-0 sudo[403709]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:42 compute-0 podman[403745]: 2026-01-26 17:09:42.158745798 +0000 UTC m=+0.047678570 container create 1762a80505345e46cbab936a2eed18afc73c1dcb16e79b2d86fc00e95d04aaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_elbakyan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:09:42 compute-0 systemd[1]: Started libpod-conmon-1762a80505345e46cbab936a2eed18afc73c1dcb16e79b2d86fc00e95d04aaff.scope.
Jan 26 17:09:42 compute-0 podman[403745]: 2026-01-26 17:09:42.136358429 +0000 UTC m=+0.025291201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:09:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:09:42 compute-0 podman[403745]: 2026-01-26 17:09:42.251212445 +0000 UTC m=+0.140145237 container init 1762a80505345e46cbab936a2eed18afc73c1dcb16e79b2d86fc00e95d04aaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:09:42 compute-0 podman[403745]: 2026-01-26 17:09:42.262188634 +0000 UTC m=+0.151121416 container start 1762a80505345e46cbab936a2eed18afc73c1dcb16e79b2d86fc00e95d04aaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 17:09:42 compute-0 podman[403745]: 2026-01-26 17:09:42.266827328 +0000 UTC m=+0.155760120 container attach 1762a80505345e46cbab936a2eed18afc73c1dcb16e79b2d86fc00e95d04aaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_elbakyan, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 17:09:42 compute-0 bold_elbakyan[403761]: 167 167
Jan 26 17:09:42 compute-0 systemd[1]: libpod-1762a80505345e46cbab936a2eed18afc73c1dcb16e79b2d86fc00e95d04aaff.scope: Deactivated successfully.
Jan 26 17:09:42 compute-0 podman[403745]: 2026-01-26 17:09:42.269716689 +0000 UTC m=+0.158649471 container died 1762a80505345e46cbab936a2eed18afc73c1dcb16e79b2d86fc00e95d04aaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_elbakyan, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:09:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-5574f676d73ba125d733e6ad52615fd19ddaefdd0f801d1c1e780e00747fc4a3-merged.mount: Deactivated successfully.
Jan 26 17:09:42 compute-0 podman[403745]: 2026-01-26 17:09:42.3101567 +0000 UTC m=+0.199089472 container remove 1762a80505345e46cbab936a2eed18afc73c1dcb16e79b2d86fc00e95d04aaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:09:42 compute-0 systemd[1]: libpod-conmon-1762a80505345e46cbab936a2eed18afc73c1dcb16e79b2d86fc00e95d04aaff.scope: Deactivated successfully.
Jan 26 17:09:42 compute-0 podman[403785]: 2026-01-26 17:09:42.490142573 +0000 UTC m=+0.049061024 container create a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_johnson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 17:09:42 compute-0 systemd[1]: Started libpod-conmon-a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4.scope.
Jan 26 17:09:42 compute-0 podman[403785]: 2026-01-26 17:09:42.467255652 +0000 UTC m=+0.026174093 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:09:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3791eade073b96327818dc0b4725fbc9c57ba646f4c26cb56d4cf1c6a0e98027/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3791eade073b96327818dc0b4725fbc9c57ba646f4c26cb56d4cf1c6a0e98027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3791eade073b96327818dc0b4725fbc9c57ba646f4c26cb56d4cf1c6a0e98027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3791eade073b96327818dc0b4725fbc9c57ba646f4c26cb56d4cf1c6a0e98027/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:09:42 compute-0 podman[403785]: 2026-01-26 17:09:42.59606582 +0000 UTC m=+0.154984251 container init a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 17:09:42 compute-0 podman[403785]: 2026-01-26 17:09:42.604962948 +0000 UTC m=+0.163881359 container start a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_johnson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:09:42 compute-0 podman[403785]: 2026-01-26 17:09:42.609377216 +0000 UTC m=+0.168295627 container attach a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:09:43 compute-0 ceph-mon[75140]: pgmap v3598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:43 compute-0 nova_compute[239965]: 2026-01-26 17:09:43.269 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:43 compute-0 lvm[403882]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:09:43 compute-0 lvm[403882]: VG ceph_vg1 finished
Jan 26 17:09:43 compute-0 lvm[403879]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:09:43 compute-0 lvm[403879]: VG ceph_vg0 finished
Jan 26 17:09:43 compute-0 lvm[403884]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:09:43 compute-0 lvm[403884]: VG ceph_vg2 finished
Jan 26 17:09:43 compute-0 hopeful_johnson[403803]: {}
Jan 26 17:09:43 compute-0 systemd[1]: libpod-a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4.scope: Deactivated successfully.
Jan 26 17:09:43 compute-0 systemd[1]: libpod-a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4.scope: Consumed 1.563s CPU time.
Jan 26 17:09:43 compute-0 podman[403785]: 2026-01-26 17:09:43.525134359 +0000 UTC m=+1.084052790 container died a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:09:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-3791eade073b96327818dc0b4725fbc9c57ba646f4c26cb56d4cf1c6a0e98027-merged.mount: Deactivated successfully.
Jan 26 17:09:43 compute-0 podman[403785]: 2026-01-26 17:09:43.611023314 +0000 UTC m=+1.169941725 container remove a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_johnson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:09:43 compute-0 systemd[1]: libpod-conmon-a9c78c760fe51b46c5560a5ffef11fdccc7970910c02df46bcf9a440a89143c4.scope: Deactivated successfully.
Jan 26 17:09:43 compute-0 sudo[403709]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:09:43 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:09:43 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:43 compute-0 sudo[403901]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:09:43 compute-0 sudo[403901]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:09:43 compute-0 sudo[403901]: pam_unix(sudo:session): session closed for user root
Jan 26 17:09:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:09:44 compute-0 ceph-mon[75140]: pgmap v3599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:45 compute-0 nova_compute[239965]: 2026-01-26 17:09:45.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:46 compute-0 nova_compute[239965]: 2026-01-26 17:09:46.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:46 compute-0 nova_compute[239965]: 2026-01-26 17:09:46.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:09:46 compute-0 nova_compute[239965]: 2026-01-26 17:09:46.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:46 compute-0 nova_compute[239965]: 2026-01-26 17:09:46.614 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:09:46 compute-0 nova_compute[239965]: 2026-01-26 17:09:46.614 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:09:46 compute-0 nova_compute[239965]: 2026-01-26 17:09:46.614 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:09:46 compute-0 nova_compute[239965]: 2026-01-26 17:09:46.614 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:09:46 compute-0 nova_compute[239965]: 2026-01-26 17:09:46.615 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:46 compute-0 nova_compute[239965]: 2026-01-26 17:09:46.913 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:09:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408593761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.213 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.379 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.380 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3515MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.381 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.381 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:09:47 compute-0 ceph-mon[75140]: pgmap v3600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/408593761' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.741 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.742 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.892 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.982 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:09:47 compute-0 nova_compute[239965]: 2026-01-26 17:09:47.983 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:09:48 compute-0 nova_compute[239965]: 2026-01-26 17:09:48.002 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:09:48 compute-0 nova_compute[239965]: 2026-01-26 17:09:48.026 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:09:48 compute-0 nova_compute[239965]: 2026-01-26 17:09:48.042 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:09:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:48 compute-0 nova_compute[239965]: 2026-01-26 17:09:48.272 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:09:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2729952237' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:09:48 compute-0 nova_compute[239965]: 2026-01-26 17:09:48.612 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:09:48 compute-0 nova_compute[239965]: 2026-01-26 17:09:48.618 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:09:48 compute-0 nova_compute[239965]: 2026-01-26 17:09:48.687 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:09:48 compute-0 nova_compute[239965]: 2026-01-26 17:09:48.691 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:09:48 compute-0 nova_compute[239965]: 2026-01-26 17:09:48.691 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.310s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:09:48 compute-0 ceph-mon[75140]: pgmap v3601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2729952237' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:09:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:09:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/592547388' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:09:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:09:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/592547388' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:09:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:09:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:09:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/592547388' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:09:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/592547388' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:09:50 compute-0 nova_compute[239965]: 2026-01-26 17:09:50.693 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:09:50 compute-0 nova_compute[239965]: 2026-01-26 17:09:50.694 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:09:50 compute-0 nova_compute[239965]: 2026-01-26 17:09:50.694 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:09:50 compute-0 nova_compute[239965]: 2026-01-26 17:09:50.709 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:09:51 compute-0 ceph-mon[75140]: pgmap v3602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:51 compute-0 nova_compute[239965]: 2026-01-26 17:09:51.917 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:53 compute-0 nova_compute[239965]: 2026-01-26 17:09:53.275 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:53 compute-0 ceph-mon[75140]: pgmap v3603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:55 compute-0 ceph-mon[75140]: pgmap v3604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:56 compute-0 nova_compute[239965]: 2026-01-26 17:09:56.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:57 compute-0 ceph-mon[75140]: pgmap v3605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:09:58 compute-0 nova_compute[239965]: 2026-01-26 17:09:58.276 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:09:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:09:59.293 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:09:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:09:59.294 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:09:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:09:59.294 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:09:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:09:59 compute-0 ceph-mon[75140]: pgmap v3606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:00 compute-0 podman[403970]: 2026-01-26 17:10:00.437261023 +0000 UTC m=+0.102483843 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:10:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:10:00 compute-0 ceph-mon[75140]: pgmap v3607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:01 compute-0 nova_compute[239965]: 2026-01-26 17:10:01.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:03 compute-0 ceph-mon[75140]: pgmap v3608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:03 compute-0 nova_compute[239965]: 2026-01-26 17:10:03.278 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:03 compute-0 podman[403991]: 2026-01-26 17:10:03.399880779 +0000 UTC m=+0.082759510 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 17:10:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:05 compute-0 ceph-mon[75140]: pgmap v3609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:10:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Cumulative writes: 16K writes, 73K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.02 MB/s
                                           Cumulative WAL: 16K writes, 16K syncs, 1.00 writes per sync, written: 0.10 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1312 writes, 5949 keys, 1312 commit groups, 1.0 writes per commit group, ingest: 8.88 MB, 0.01 MB/s
                                           Interval WAL: 1312 writes, 1312 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     72.0      1.26              0.29        53    0.024       0      0       0.0       0.0
                                             L6      1/0   11.04 MB   0.0      0.5     0.1      0.5       0.5      0.0       0.0   5.2    114.9     98.3      4.81              1.36        52    0.093    365K    27K       0.0       0.0
                                            Sum      1/0   11.04 MB   0.0      0.5     0.1      0.5       0.6      0.1       0.0   6.2     91.0     92.8      6.07              1.66       105    0.058    365K    27K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.8    131.8    134.2      0.45              0.19        10    0.045     46K   2519       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.5     0.1      0.5       0.5      0.0       0.0   0.0    114.9     98.3      4.81              1.36        52    0.093    365K    27K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     72.3      1.26              0.29        52    0.024       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 6600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.089, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.55 GB write, 0.09 MB/s write, 0.54 GB read, 0.08 MB/s read, 6.1 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 62.72 MB table_size: 0 occupancy: 18446744073709551615 collections: 12 last_copies: 0 last_secs: 0.000469 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3898,60.08 MB,19.7623%) FilterBlock(106,1017.42 KB,0.326834%) IndexBlock(106,1.64 MB,0.540834%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 17:10:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:06 compute-0 nova_compute[239965]: 2026-01-26 17:10:06.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:07 compute-0 sshd-session[404017]: Invalid user sol from 45.148.10.240 port 45246
Jan 26 17:10:07 compute-0 sshd-session[404017]: Connection closed by invalid user sol 45.148.10.240 port 45246 [preauth]
Jan 26 17:10:07 compute-0 ceph-mon[75140]: pgmap v3610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:08 compute-0 nova_compute[239965]: 2026-01-26 17:10:08.283 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:08 compute-0 ceph-mon[75140]: pgmap v3611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:11 compute-0 ceph-mon[75140]: pgmap v3612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:11 compute-0 nova_compute[239965]: 2026-01-26 17:10:11.926 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:13 compute-0 ceph-mon[75140]: pgmap v3613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:13 compute-0 nova_compute[239965]: 2026-01-26 17:10:13.284 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:13 compute-0 nova_compute[239965]: 2026-01-26 17:10:13.803 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:15 compute-0 ceph-mon[75140]: pgmap v3614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:16 compute-0 nova_compute[239965]: 2026-01-26 17:10:16.928 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:17 compute-0 ceph-mon[75140]: pgmap v3615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:18 compute-0 nova_compute[239965]: 2026-01-26 17:10:18.285 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:19 compute-0 ceph-mon[75140]: pgmap v3616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:21 compute-0 ceph-mon[75140]: pgmap v3617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:21 compute-0 nova_compute[239965]: 2026-01-26 17:10:21.930 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:22 compute-0 ceph-mon[75140]: pgmap v3618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:23 compute-0 nova_compute[239965]: 2026-01-26 17:10:23.286 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:24 compute-0 systemd[1]: Starting dnf makecache...
Jan 26 17:10:25 compute-0 dnf[404019]: Metadata cache refreshed recently.
Jan 26 17:10:25 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 26 17:10:25 compute-0 systemd[1]: Finished dnf makecache.
Jan 26 17:10:25 compute-0 ceph-mon[75140]: pgmap v3619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:26 compute-0 nova_compute[239965]: 2026-01-26 17:10:26.932 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:27 compute-0 ceph-mon[75140]: pgmap v3620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:28 compute-0 nova_compute[239965]: 2026-01-26 17:10:28.288 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:10:28
Jan 26 17:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.rgw.root', 'volumes', 'backups', 'default.rgw.control', 'images']
Jan 26 17:10:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:10:29 compute-0 ceph-mon[75140]: pgmap v3621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:10:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:10:30 compute-0 ceph-mon[75140]: pgmap v3622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:10:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:10:31 compute-0 podman[404020]: 2026-01-26 17:10:31.387119244 +0000 UTC m=+0.066679095 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 26 17:10:31 compute-0 nova_compute[239965]: 2026-01-26 17:10:31.976 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:33 compute-0 nova_compute[239965]: 2026-01-26 17:10:33.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:33 compute-0 ceph-mon[75140]: pgmap v3623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:33 compute-0 nova_compute[239965]: 2026-01-26 17:10:33.513 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:34 compute-0 podman[404039]: 2026-01-26 17:10:34.455237467 +0000 UTC m=+0.133055154 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:10:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:34 compute-0 ceph-mon[75140]: pgmap v3624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:35 compute-0 nova_compute[239965]: 2026-01-26 17:10:35.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:36 compute-0 nova_compute[239965]: 2026-01-26 17:10:36.979 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:37 compute-0 nova_compute[239965]: 2026-01-26 17:10:37.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:37 compute-0 ceph-mon[75140]: pgmap v3625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:38 compute-0 nova_compute[239965]: 2026-01-26 17:10:38.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:38 compute-0 nova_compute[239965]: 2026-01-26 17:10:38.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:39 compute-0 ceph-mon[75140]: pgmap v3626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:41 compute-0 ceph-mon[75140]: pgmap v3627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:41 compute-0 nova_compute[239965]: 2026-01-26 17:10:41.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:42 compute-0 nova_compute[239965]: 2026-01-26 17:10:42.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:43 compute-0 nova_compute[239965]: 2026-01-26 17:10:43.348 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:43 compute-0 ceph-mon[75140]: pgmap v3628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:43 compute-0 sudo[404065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:10:43 compute-0 sudo[404065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:10:43 compute-0 sudo[404065]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:43 compute-0 sudo[404090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:10:43 compute-0 sudo[404090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:10:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:44 compute-0 sudo[404090]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:10:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:10:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:10:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:10:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:10:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:10:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:10:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:10:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:10:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:10:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:10:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:10:44 compute-0 ceph-mon[75140]: pgmap v3629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:10:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:10:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:10:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:10:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:10:44 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:10:44 compute-0 sudo[404145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:10:44 compute-0 sudo[404145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:10:44 compute-0 sudo[404145]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:44 compute-0 sudo[404170]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:10:44 compute-0 sudo[404170]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:10:45 compute-0 podman[404207]: 2026-01-26 17:10:45.042860468 +0000 UTC m=+0.056005714 container create 15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:10:45 compute-0 systemd[1]: Started libpod-conmon-15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4.scope.
Jan 26 17:10:45 compute-0 podman[404207]: 2026-01-26 17:10:45.012133505 +0000 UTC m=+0.025278771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:10:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:10:45 compute-0 podman[404207]: 2026-01-26 17:10:45.143496856 +0000 UTC m=+0.156642122 container init 15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldwasser, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:10:45 compute-0 podman[404207]: 2026-01-26 17:10:45.154314281 +0000 UTC m=+0.167459537 container start 15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:10:45 compute-0 podman[404207]: 2026-01-26 17:10:45.158583936 +0000 UTC m=+0.171729492 container attach 15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldwasser, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 17:10:45 compute-0 wizardly_goldwasser[404223]: 167 167
Jan 26 17:10:45 compute-0 systemd[1]: libpod-15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4.scope: Deactivated successfully.
Jan 26 17:10:45 compute-0 conmon[404223]: conmon 15e76c5e00126cc3abca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4.scope/container/memory.events
Jan 26 17:10:45 compute-0 podman[404207]: 2026-01-26 17:10:45.163728032 +0000 UTC m=+0.176873298 container died 15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldwasser, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:10:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d5f13386fdac606e2b521ff48c5911ff1b10d8f765aa9f58bd664c9700c557b-merged.mount: Deactivated successfully.
Jan 26 17:10:45 compute-0 podman[404207]: 2026-01-26 17:10:45.210720914 +0000 UTC m=+0.223866150 container remove 15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldwasser, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:10:45 compute-0 systemd[1]: libpod-conmon-15e76c5e00126cc3abca747fdb5a6f72e9a14020df08bff4a458ee5d371e2fa4.scope: Deactivated successfully.
Jan 26 17:10:45 compute-0 podman[404246]: 2026-01-26 17:10:45.365897279 +0000 UTC m=+0.027441754 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:10:45 compute-0 podman[404246]: 2026-01-26 17:10:45.792692663 +0000 UTC m=+0.454237158 container create cd50a4a6c57df8e90e213dd76c8a560dcc4a55d197f129a5562f1aab6547f186 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_margulis, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:10:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:46 compute-0 systemd[1]: Started libpod-conmon-cd50a4a6c57df8e90e213dd76c8a560dcc4a55d197f129a5562f1aab6547f186.scope.
Jan 26 17:10:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24417a3794baab845951299241a4b21790cf284c341b2c1ca76d97532fb02277/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24417a3794baab845951299241a4b21790cf284c341b2c1ca76d97532fb02277/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24417a3794baab845951299241a4b21790cf284c341b2c1ca76d97532fb02277/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24417a3794baab845951299241a4b21790cf284c341b2c1ca76d97532fb02277/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24417a3794baab845951299241a4b21790cf284c341b2c1ca76d97532fb02277/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:46 compute-0 podman[404246]: 2026-01-26 17:10:46.342079122 +0000 UTC m=+1.003623597 container init cd50a4a6c57df8e90e213dd76c8a560dcc4a55d197f129a5562f1aab6547f186 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 17:10:46 compute-0 podman[404246]: 2026-01-26 17:10:46.350479968 +0000 UTC m=+1.012024423 container start cd50a4a6c57df8e90e213dd76c8a560dcc4a55d197f129a5562f1aab6547f186 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_margulis, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:10:46 compute-0 nova_compute[239965]: 2026-01-26 17:10:46.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:46 compute-0 podman[404246]: 2026-01-26 17:10:46.654374569 +0000 UTC m=+1.315919034 container attach cd50a4a6c57df8e90e213dd76c8a560dcc4a55d197f129a5562f1aab6547f186 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_margulis, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:10:46 compute-0 eloquent_margulis[404263]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:10:46 compute-0 eloquent_margulis[404263]: --> All data devices are unavailable
Jan 26 17:10:46 compute-0 podman[404246]: 2026-01-26 17:10:46.96154944 +0000 UTC m=+1.623093925 container died cd50a4a6c57df8e90e213dd76c8a560dcc4a55d197f129a5562f1aab6547f186 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_margulis, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:10:46 compute-0 systemd[1]: libpod-cd50a4a6c57df8e90e213dd76c8a560dcc4a55d197f129a5562f1aab6547f186.scope: Deactivated successfully.
Jan 26 17:10:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-24417a3794baab845951299241a4b21790cf284c341b2c1ca76d97532fb02277-merged.mount: Deactivated successfully.
Jan 26 17:10:47 compute-0 nova_compute[239965]: 2026-01-26 17:10:47.031 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:47 compute-0 podman[404246]: 2026-01-26 17:10:47.077356099 +0000 UTC m=+1.738900564 container remove cd50a4a6c57df8e90e213dd76c8a560dcc4a55d197f129a5562f1aab6547f186 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_margulis, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Jan 26 17:10:47 compute-0 systemd[1]: libpod-conmon-cd50a4a6c57df8e90e213dd76c8a560dcc4a55d197f129a5562f1aab6547f186.scope: Deactivated successfully.
Jan 26 17:10:47 compute-0 sudo[404170]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:47 compute-0 sudo[404296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:10:47 compute-0 sudo[404296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:10:47 compute-0 sudo[404296]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:47 compute-0 sudo[404321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:10:47 compute-0 sudo[404321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:10:47 compute-0 ceph-mon[75140]: pgmap v3630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:47 compute-0 podman[404359]: 2026-01-26 17:10:47.62842639 +0000 UTC m=+0.087622859 container create 402eb181925951166526ab62ad1caa43d5b631e5753f5c09b8502125f438a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_villani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 17:10:47 compute-0 podman[404359]: 2026-01-26 17:10:47.57376575 +0000 UTC m=+0.032962219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:10:47 compute-0 systemd[1]: Started libpod-conmon-402eb181925951166526ab62ad1caa43d5b631e5753f5c09b8502125f438a874.scope.
Jan 26 17:10:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:10:47 compute-0 podman[404359]: 2026-01-26 17:10:47.81562569 +0000 UTC m=+0.274822139 container init 402eb181925951166526ab62ad1caa43d5b631e5753f5c09b8502125f438a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_villani, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:10:47 compute-0 podman[404359]: 2026-01-26 17:10:47.825078462 +0000 UTC m=+0.284274911 container start 402eb181925951166526ab62ad1caa43d5b631e5753f5c09b8502125f438a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_villani, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:10:47 compute-0 wizardly_villani[404375]: 167 167
Jan 26 17:10:47 compute-0 systemd[1]: libpod-402eb181925951166526ab62ad1caa43d5b631e5753f5c09b8502125f438a874.scope: Deactivated successfully.
Jan 26 17:10:47 compute-0 podman[404359]: 2026-01-26 17:10:47.983945657 +0000 UTC m=+0.443142126 container attach 402eb181925951166526ab62ad1caa43d5b631e5753f5c09b8502125f438a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_villani, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 17:10:47 compute-0 podman[404359]: 2026-01-26 17:10:47.984743886 +0000 UTC m=+0.443940325 container died 402eb181925951166526ab62ad1caa43d5b631e5753f5c09b8502125f438a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_villani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default)
Jan 26 17:10:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-16723ff9c7e1d7d2d2c4309dbe8de3337666398207a1ed545bcb8d7bf418f38e-merged.mount: Deactivated successfully.
Jan 26 17:10:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:48 compute-0 podman[404359]: 2026-01-26 17:10:48.185358515 +0000 UTC m=+0.644554944 container remove 402eb181925951166526ab62ad1caa43d5b631e5753f5c09b8502125f438a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 17:10:48 compute-0 systemd[1]: libpod-conmon-402eb181925951166526ab62ad1caa43d5b631e5753f5c09b8502125f438a874.scope: Deactivated successfully.
Jan 26 17:10:48 compute-0 nova_compute[239965]: 2026-01-26 17:10:48.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:48 compute-0 nova_compute[239965]: 2026-01-26 17:10:48.513 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:10:48 compute-0 nova_compute[239965]: 2026-01-26 17:10:48.513 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:48 compute-0 nova_compute[239965]: 2026-01-26 17:10:48.566 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:48 compute-0 podman[404401]: 2026-01-26 17:10:48.353684772 +0000 UTC m=+0.026881400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:10:48 compute-0 podman[404401]: 2026-01-26 17:10:48.657386058 +0000 UTC m=+0.330582676 container create 55b00223364640ff56c8e2d57cc621cc53fb143316534e5a08c0894ff1e9dff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:10:48 compute-0 ceph-mon[75140]: pgmap v3631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:48 compute-0 nova_compute[239965]: 2026-01-26 17:10:48.667 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:10:48 compute-0 nova_compute[239965]: 2026-01-26 17:10:48.668 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:10:48 compute-0 nova_compute[239965]: 2026-01-26 17:10:48.669 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:10:48 compute-0 nova_compute[239965]: 2026-01-26 17:10:48.669 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:10:48 compute-0 nova_compute[239965]: 2026-01-26 17:10:48.670 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:48 compute-0 systemd[1]: Started libpod-conmon-55b00223364640ff56c8e2d57cc621cc53fb143316534e5a08c0894ff1e9dff5.scope.
Jan 26 17:10:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbf17fad741ed708558364d98ca66c7be635c4f1a3952a68418012ca391244f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbf17fad741ed708558364d98ca66c7be635c4f1a3952a68418012ca391244f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbf17fad741ed708558364d98ca66c7be635c4f1a3952a68418012ca391244f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bbf17fad741ed708558364d98ca66c7be635c4f1a3952a68418012ca391244f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:10:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3622469167' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:10:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:10:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3622469167' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:10:48 compute-0 podman[404401]: 2026-01-26 17:10:48.903838791 +0000 UTC m=+0.577035409 container init 55b00223364640ff56c8e2d57cc621cc53fb143316534e5a08c0894ff1e9dff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True)
Jan 26 17:10:48 compute-0 podman[404401]: 2026-01-26 17:10:48.914705458 +0000 UTC m=+0.587902076 container start 55b00223364640ff56c8e2d57cc621cc53fb143316534e5a08c0894ff1e9dff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:10:48 compute-0 podman[404401]: 2026-01-26 17:10:48.921228467 +0000 UTC m=+0.594425085 container attach 55b00223364640ff56c8e2d57cc621cc53fb143316534e5a08c0894ff1e9dff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:10:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:10:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1716792154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:10:49 compute-0 goofy_moore[404417]: {
Jan 26 17:10:49 compute-0 goofy_moore[404417]:     "0": [
Jan 26 17:10:49 compute-0 goofy_moore[404417]:         {
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "devices": [
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "/dev/loop3"
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             ],
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_name": "ceph_lv0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_size": "21470642176",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "name": "ceph_lv0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "tags": {
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.cluster_name": "ceph",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.crush_device_class": "",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.encrypted": "0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.objectstore": "bluestore",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.osd_id": "0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.type": "block",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.vdo": "0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.with_tpm": "0"
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             },
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "type": "block",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "vg_name": "ceph_vg0"
Jan 26 17:10:49 compute-0 goofy_moore[404417]:         }
Jan 26 17:10:49 compute-0 goofy_moore[404417]:     ],
Jan 26 17:10:49 compute-0 goofy_moore[404417]:     "1": [
Jan 26 17:10:49 compute-0 goofy_moore[404417]:         {
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "devices": [
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "/dev/loop4"
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             ],
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_name": "ceph_lv1",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_size": "21470642176",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "name": "ceph_lv1",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "tags": {
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.cluster_name": "ceph",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.crush_device_class": "",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.encrypted": "0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.objectstore": "bluestore",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.osd_id": "1",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.type": "block",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.vdo": "0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.with_tpm": "0"
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             },
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "type": "block",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "vg_name": "ceph_vg1"
Jan 26 17:10:49 compute-0 goofy_moore[404417]:         }
Jan 26 17:10:49 compute-0 goofy_moore[404417]:     ],
Jan 26 17:10:49 compute-0 goofy_moore[404417]:     "2": [
Jan 26 17:10:49 compute-0 goofy_moore[404417]:         {
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "devices": [
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "/dev/loop5"
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             ],
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_name": "ceph_lv2",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_size": "21470642176",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "name": "ceph_lv2",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "tags": {
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.cluster_name": "ceph",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.crush_device_class": "",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.encrypted": "0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.objectstore": "bluestore",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.osd_id": "2",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.type": "block",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.vdo": "0",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:                 "ceph.with_tpm": "0"
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             },
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "type": "block",
Jan 26 17:10:49 compute-0 goofy_moore[404417]:             "vg_name": "ceph_vg2"
Jan 26 17:10:49 compute-0 goofy_moore[404417]:         }
Jan 26 17:10:49 compute-0 goofy_moore[404417]:     ]
Jan 26 17:10:49 compute-0 goofy_moore[404417]: }
Jan 26 17:10:49 compute-0 nova_compute[239965]: 2026-01-26 17:10:49.241 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:49 compute-0 systemd[1]: libpod-55b00223364640ff56c8e2d57cc621cc53fb143316534e5a08c0894ff1e9dff5.scope: Deactivated successfully.
Jan 26 17:10:49 compute-0 podman[404401]: 2026-01-26 17:10:49.284034202 +0000 UTC m=+0.957230820 container died 55b00223364640ff56c8e2d57cc621cc53fb143316534e5a08c0894ff1e9dff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:10:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4bbf17fad741ed708558364d98ca66c7be635c4f1a3952a68418012ca391244f-merged.mount: Deactivated successfully.
Jan 26 17:10:49 compute-0 podman[404401]: 2026-01-26 17:10:49.331069805 +0000 UTC m=+1.004266413 container remove 55b00223364640ff56c8e2d57cc621cc53fb143316534e5a08c0894ff1e9dff5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 17:10:49 compute-0 systemd[1]: libpod-conmon-55b00223364640ff56c8e2d57cc621cc53fb143316534e5a08c0894ff1e9dff5.scope: Deactivated successfully.
Jan 26 17:10:49 compute-0 sudo[404321]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:49 compute-0 nova_compute[239965]: 2026-01-26 17:10:49.444 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:10:49 compute-0 nova_compute[239965]: 2026-01-26 17:10:49.446 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3508MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:10:49 compute-0 nova_compute[239965]: 2026-01-26 17:10:49.446 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:10:49 compute-0 nova_compute[239965]: 2026-01-26 17:10:49.447 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:10:49 compute-0 sudo[404460]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:10:49 compute-0 sudo[404460]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:10:49 compute-0 sudo[404460]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:49 compute-0 sudo[404485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:10:49 compute-0 sudo[404485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:10:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3622469167' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:10:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3622469167' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:10:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1716792154' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:10:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:10:49 compute-0 nova_compute[239965]: 2026-01-26 17:10:49.742 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:10:49 compute-0 nova_compute[239965]: 2026-01-26 17:10:49.743 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:10:49 compute-0 nova_compute[239965]: 2026-01-26 17:10:49.759 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:10:49 compute-0 podman[404521]: 2026-01-26 17:10:49.884843993 +0000 UTC m=+0.041833567 container create d16e6eb8a41b675c7c40cb804aa48bdc6d7343c9d7eab42edaa17db2e88a3d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 17:10:49 compute-0 systemd[1]: Started libpod-conmon-d16e6eb8a41b675c7c40cb804aa48bdc6d7343c9d7eab42edaa17db2e88a3d84.scope.
Jan 26 17:10:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:10:49 compute-0 podman[404521]: 2026-01-26 17:10:49.864440703 +0000 UTC m=+0.021430297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:10:49 compute-0 podman[404521]: 2026-01-26 17:10:49.971539528 +0000 UTC m=+0.128529132 container init d16e6eb8a41b675c7c40cb804aa48bdc6d7343c9d7eab42edaa17db2e88a3d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:10:49 compute-0 podman[404521]: 2026-01-26 17:10:49.981270766 +0000 UTC m=+0.138260340 container start d16e6eb8a41b675c7c40cb804aa48bdc6d7343c9d7eab42edaa17db2e88a3d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 17:10:49 compute-0 podman[404521]: 2026-01-26 17:10:49.985149792 +0000 UTC m=+0.142139356 container attach d16e6eb8a41b675c7c40cb804aa48bdc6d7343c9d7eab42edaa17db2e88a3d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:10:49 compute-0 silly_roentgen[404547]: 167 167
Jan 26 17:10:49 compute-0 systemd[1]: libpod-d16e6eb8a41b675c7c40cb804aa48bdc6d7343c9d7eab42edaa17db2e88a3d84.scope: Deactivated successfully.
Jan 26 17:10:49 compute-0 podman[404521]: 2026-01-26 17:10:49.991013395 +0000 UTC m=+0.148002969 container died d16e6eb8a41b675c7c40cb804aa48bdc6d7343c9d7eab42edaa17db2e88a3d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:10:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-5637f911dbd1ed0fc68c995695931399c52fbdc7efb22cace060bdefc42935aa-merged.mount: Deactivated successfully.
Jan 26 17:10:50 compute-0 podman[404521]: 2026-01-26 17:10:50.037116066 +0000 UTC m=+0.194105630 container remove d16e6eb8a41b675c7c40cb804aa48bdc6d7343c9d7eab42edaa17db2e88a3d84 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 26 17:10:50 compute-0 systemd[1]: libpod-conmon-d16e6eb8a41b675c7c40cb804aa48bdc6d7343c9d7eab42edaa17db2e88a3d84.scope: Deactivated successfully.
Jan 26 17:10:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:50 compute-0 podman[404581]: 2026-01-26 17:10:50.224412438 +0000 UTC m=+0.053332468 container create 4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 26 17:10:50 compute-0 systemd[1]: Started libpod-conmon-4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9.scope.
Jan 26 17:10:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff8f1e526fbc7bc814654ea7ab225603d1d4ab8ebf7f6594d32eb3de55b10e4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff8f1e526fbc7bc814654ea7ab225603d1d4ab8ebf7f6594d32eb3de55b10e4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff8f1e526fbc7bc814654ea7ab225603d1d4ab8ebf7f6594d32eb3de55b10e4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cff8f1e526fbc7bc814654ea7ab225603d1d4ab8ebf7f6594d32eb3de55b10e4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:10:50 compute-0 podman[404581]: 2026-01-26 17:10:50.205062304 +0000 UTC m=+0.033982354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:10:50 compute-0 podman[404581]: 2026-01-26 17:10:50.309063424 +0000 UTC m=+0.137983474 container init 4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:10:50 compute-0 podman[404581]: 2026-01-26 17:10:50.320520055 +0000 UTC m=+0.149440085 container start 4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:10:50 compute-0 podman[404581]: 2026-01-26 17:10:50.324374179 +0000 UTC m=+0.153294209 container attach 4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:10:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:10:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310009250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:10:50 compute-0 nova_compute[239965]: 2026-01-26 17:10:50.418 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:10:50 compute-0 nova_compute[239965]: 2026-01-26 17:10:50.428 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:10:50 compute-0 nova_compute[239965]: 2026-01-26 17:10:50.451 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:10:50 compute-0 nova_compute[239965]: 2026-01-26 17:10:50.453 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:10:50 compute-0 nova_compute[239965]: 2026-01-26 17:10:50.454 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:10:50 compute-0 ceph-mon[75140]: pgmap v3632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2310009250' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:10:51 compute-0 lvm[404680]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:10:51 compute-0 lvm[404680]: VG ceph_vg1 finished
Jan 26 17:10:51 compute-0 lvm[404677]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:10:51 compute-0 lvm[404677]: VG ceph_vg0 finished
Jan 26 17:10:51 compute-0 lvm[404682]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:10:51 compute-0 lvm[404682]: VG ceph_vg2 finished
Jan 26 17:10:51 compute-0 happy_robinson[404599]: {}
Jan 26 17:10:51 compute-0 systemd[1]: libpod-4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9.scope: Deactivated successfully.
Jan 26 17:10:51 compute-0 systemd[1]: libpod-4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9.scope: Consumed 1.496s CPU time.
Jan 26 17:10:51 compute-0 podman[404581]: 2026-01-26 17:10:51.262287264 +0000 UTC m=+1.091207294 container died 4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:10:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-cff8f1e526fbc7bc814654ea7ab225603d1d4ab8ebf7f6594d32eb3de55b10e4-merged.mount: Deactivated successfully.
Jan 26 17:10:51 compute-0 podman[404581]: 2026-01-26 17:10:51.394285091 +0000 UTC m=+1.223205141 container remove 4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_robinson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 17:10:51 compute-0 systemd[1]: libpod-conmon-4698634ca0b35eb12ae2fea8dc490b474a42fe60038f621d2f3e5ccde5a3d6b9.scope: Deactivated successfully.
Jan 26 17:10:51 compute-0 sudo[404485]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:10:51 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:10:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:10:51 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:10:51 compute-0 sudo[404699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:10:51 compute-0 sudo[404699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:10:51 compute-0 sudo[404699]: pam_unix(sudo:session): session closed for user root
Jan 26 17:10:52 compute-0 nova_compute[239965]: 2026-01-26 17:10:52.111 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:52 compute-0 nova_compute[239965]: 2026-01-26 17:10:52.453 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:52 compute-0 nova_compute[239965]: 2026-01-26 17:10:52.453 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:10:52 compute-0 nova_compute[239965]: 2026-01-26 17:10:52.453 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:10:52 compute-0 nova_compute[239965]: 2026-01-26 17:10:52.623 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:10:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:10:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:10:52 compute-0 ceph-mon[75140]: pgmap v3633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:53 compute-0 nova_compute[239965]: 2026-01-26 17:10:53.568 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:10:55 compute-0 ceph-mon[75140]: pgmap v3634: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:55 compute-0 nova_compute[239965]: 2026-01-26 17:10:55.673 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:10:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:57 compute-0 nova_compute[239965]: 2026-01-26 17:10:57.137 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:57 compute-0 ceph-mon[75140]: pgmap v3635: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:58 compute-0 nova_compute[239965]: 2026-01-26 17:10:58.570 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:10:59 compute-0 ceph-mon[75140]: pgmap v3636: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:10:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:10:59.295 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:10:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:10:59.296 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:10:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:10:59.296 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:10:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:11:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:11:00 compute-0 nova_compute[239965]: 2026-01-26 17:11:00.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:01 compute-0 ceph-mon[75140]: pgmap v3637: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:02 compute-0 nova_compute[239965]: 2026-01-26 17:11:02.258 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:02 compute-0 podman[404724]: 2026-01-26 17:11:02.393065775 +0000 UTC m=+0.075672876 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 17:11:03 compute-0 ceph-mon[75140]: pgmap v3638: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:03 compute-0 nova_compute[239965]: 2026-01-26 17:11:03.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:05 compute-0 ceph-mon[75140]: pgmap v3639: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:05 compute-0 podman[404744]: 2026-01-26 17:11:05.400776637 +0000 UTC m=+0.087984659 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 17:11:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:07 compute-0 nova_compute[239965]: 2026-01-26 17:11:07.261 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:07 compute-0 ceph-mon[75140]: pgmap v3640: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:08 compute-0 nova_compute[239965]: 2026-01-26 17:11:08.573 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:09 compute-0 ceph-mon[75140]: pgmap v3641: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:11 compute-0 ceph-mon[75140]: pgmap v3642: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:12 compute-0 nova_compute[239965]: 2026-01-26 17:11:12.265 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:12 compute-0 ceph-mon[75140]: pgmap v3643: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:13 compute-0 nova_compute[239965]: 2026-01-26 17:11:13.575 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:15 compute-0 ceph-mon[75140]: pgmap v3644: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:17 compute-0 ceph-mon[75140]: pgmap v3645: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:17 compute-0 nova_compute[239965]: 2026-01-26 17:11:17.268 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:18 compute-0 nova_compute[239965]: 2026-01-26 17:11:18.577 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:19 compute-0 ceph-mon[75140]: pgmap v3646: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #174. Immutable memtables: 0.
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.577106) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 107] Flushing memtable with next log file: 174
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447479577147, "job": 107, "event": "flush_started", "num_memtables": 1, "num_entries": 1516, "num_deletes": 250, "total_data_size": 2515642, "memory_usage": 2565744, "flush_reason": "Manual Compaction"}
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 107] Level-0 flush table #175: started
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447479590452, "cf_name": "default", "job": 107, "event": "table_file_creation", "file_number": 175, "file_size": 1447471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 72931, "largest_seqno": 74446, "table_properties": {"data_size": 1442302, "index_size": 2436, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13522, "raw_average_key_size": 20, "raw_value_size": 1430885, "raw_average_value_size": 2187, "num_data_blocks": 112, "num_entries": 654, "num_filter_entries": 654, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769447317, "oldest_key_time": 1769447317, "file_creation_time": 1769447479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 175, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 107] Flush lasted 13416 microseconds, and 5462 cpu microseconds.
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.590513) [db/flush_job.cc:967] [default] [JOB 107] Level-0 flush table #175: 1447471 bytes OK
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.590542) [db/memtable_list.cc:519] [default] Level-0 commit table #175 started
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.592951) [db/memtable_list.cc:722] [default] Level-0 commit table #175: memtable #1 done
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.592969) EVENT_LOG_v1 {"time_micros": 1769447479592963, "job": 107, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.593014) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 107] Try to delete WAL files size 2509028, prev total WAL file size 2509028, number of live WAL files 2.
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000171.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.594187) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033303037' seq:72057594037927935, type:22 .. '6D6772737461740033323538' seq:0, type:0; will stop at (end)
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 108] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 107 Base level 0, inputs: [175(1413KB)], [173(11MB)]
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447479594316, "job": 108, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [175], "files_L6": [173], "score": -1, "input_data_size": 13019167, "oldest_snapshot_seqno": -1}
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 108] Generated table #176: 9307 keys, 10644089 bytes, temperature: kUnknown
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447479701615, "cf_name": "default", "job": 108, "event": "table_file_creation", "file_number": 176, "file_size": 10644089, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10586812, "index_size": 32908, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23301, "raw_key_size": 243289, "raw_average_key_size": 26, "raw_value_size": 10425173, "raw_average_value_size": 1120, "num_data_blocks": 1271, "num_entries": 9307, "num_filter_entries": 9307, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769447479, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 176, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.701892) [db/compaction/compaction_job.cc:1663] [default] [JOB 108] Compacted 1@0 + 1@6 files to L6 => 10644089 bytes
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.703724) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 121.3 rd, 99.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 11.0 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(16.3) write-amplify(7.4) OK, records in: 9744, records dropped: 437 output_compression: NoCompression
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.703746) EVENT_LOG_v1 {"time_micros": 1769447479703736, "job": 108, "event": "compaction_finished", "compaction_time_micros": 107366, "compaction_time_cpu_micros": 49640, "output_level": 6, "num_output_files": 1, "total_output_size": 10644089, "num_input_records": 9744, "num_output_records": 9307, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000175.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447479704102, "job": 108, "event": "table_file_deletion", "file_number": 175}
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000173.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447479706232, "job": 108, "event": "table_file_deletion", "file_number": 173}
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.593753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.706312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.706316) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.706317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.706319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:11:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:11:19.706321) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:11:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:20 compute-0 ceph-mon[75140]: pgmap v3647: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:22 compute-0 nova_compute[239965]: 2026-01-26 17:11:22.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:22 compute-0 nova_compute[239965]: 2026-01-26 17:11:22.611 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:22 compute-0 nova_compute[239965]: 2026-01-26 17:11:22.611 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:11:23 compute-0 ceph-mon[75140]: pgmap v3648: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:23 compute-0 nova_compute[239965]: 2026-01-26 17:11:23.580 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:11:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.74 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:11:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:25 compute-0 ceph-mon[75140]: pgmap v3649: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:27 compute-0 nova_compute[239965]: 2026-01-26 17:11:27.301 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:28 compute-0 ceph-mon[75140]: pgmap v3650: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:28 compute-0 nova_compute[239965]: 2026-01-26 17:11:28.581 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:11:28
Jan 26 17:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'vms', '.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Jan 26 17:11:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:11:29 compute-0 ceph-mon[75140]: pgmap v3651: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:11:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.4 total, 600.0 interval
                                           Cumulative writes: 49K writes, 192K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.68 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:11:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:11:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:11:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:11:31 compute-0 ceph-mon[75140]: pgmap v3652: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:32 compute-0 nova_compute[239965]: 2026-01-26 17:11:32.303 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:33 compute-0 ceph-mon[75140]: pgmap v3653: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:33 compute-0 podman[404771]: 2026-01-26 17:11:33.375449397 +0000 UTC m=+0.061659833 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:11:33 compute-0 nova_compute[239965]: 2026-01-26 17:11:33.582 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:34 compute-0 nova_compute[239965]: 2026-01-26 17:11:34.524 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:11:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 149K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.71 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:11:35 compute-0 ceph-mon[75140]: pgmap v3654: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:35 compute-0 nova_compute[239965]: 2026-01-26 17:11:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:36 compute-0 podman[404790]: 2026-01-26 17:11:36.416861506 +0000 UTC m=+0.105692262 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:11:37 compute-0 nova_compute[239965]: 2026-01-26 17:11:37.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:37 compute-0 ceph-mon[75140]: pgmap v3655: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:37 compute-0 nova_compute[239965]: 2026-01-26 17:11:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:38 compute-0 nova_compute[239965]: 2026-01-26 17:11:38.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:38 compute-0 nova_compute[239965]: 2026-01-26 17:11:38.616 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:38 compute-0 ceph-mon[75140]: pgmap v3656: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:39 compute-0 nova_compute[239965]: 2026-01-26 17:11:39.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:39 compute-0 nova_compute[239965]: 2026-01-26 17:11:39.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:11:39 compute-0 nova_compute[239965]: 2026-01-26 17:11:39.543 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:11:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 17:11:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:41 compute-0 ceph-mon[75140]: pgmap v3657: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:41 compute-0 nova_compute[239965]: 2026-01-26 17:11:41.544 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:42 compute-0 nova_compute[239965]: 2026-01-26 17:11:42.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:42 compute-0 nova_compute[239965]: 2026-01-26 17:11:42.783 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:43 compute-0 ceph-mon[75140]: pgmap v3658: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:43 compute-0 nova_compute[239965]: 2026-01-26 17:11:43.619 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:45 compute-0 ceph-mon[75140]: pgmap v3659: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:47 compute-0 nova_compute[239965]: 2026-01-26 17:11:47.312 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:47 compute-0 ceph-mon[75140]: pgmap v3660: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:47 compute-0 nova_compute[239965]: 2026-01-26 17:11:47.526 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:48 compute-0 nova_compute[239965]: 2026-01-26 17:11:48.621 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:11:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459979052' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:11:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:11:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2459979052' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:11:49 compute-0 sshd-session[404816]: Received disconnect from 91.224.92.190 port 39468:11:  [preauth]
Jan 26 17:11:49 compute-0 sshd-session[404816]: Disconnected from authenticating user root 91.224.92.190 port 39468 [preauth]
Jan 26 17:11:49 compute-0 nova_compute[239965]: 2026-01-26 17:11:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:49 compute-0 nova_compute[239965]: 2026-01-26 17:11:49.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:11:49 compute-0 nova_compute[239965]: 2026-01-26 17:11:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:49 compute-0 ceph-mon[75140]: pgmap v3661: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2459979052' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:11:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2459979052' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:11:49 compute-0 nova_compute[239965]: 2026-01-26 17:11:49.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:49 compute-0 nova_compute[239965]: 2026-01-26 17:11:49.542 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:49 compute-0 nova_compute[239965]: 2026-01-26 17:11:49.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:49 compute-0 nova_compute[239965]: 2026-01-26 17:11:49.543 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:11:49 compute-0 nova_compute[239965]: 2026-01-26 17:11:49.543 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:11:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:11:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:11:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3521532222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:11:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:50 compute-0 nova_compute[239965]: 2026-01-26 17:11:50.172 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:50 compute-0 nova_compute[239965]: 2026-01-26 17:11:50.338 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:11:50 compute-0 nova_compute[239965]: 2026-01-26 17:11:50.341 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3565MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:11:50 compute-0 nova_compute[239965]: 2026-01-26 17:11:50.342 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:50 compute-0 nova_compute[239965]: 2026-01-26 17:11:50.342 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:50 compute-0 nova_compute[239965]: 2026-01-26 17:11:50.421 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:11:50 compute-0 nova_compute[239965]: 2026-01-26 17:11:50.422 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:11:50 compute-0 nova_compute[239965]: 2026-01-26 17:11:50.438 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:11:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3521532222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:11:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:11:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125594754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:11:51 compute-0 nova_compute[239965]: 2026-01-26 17:11:51.051 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:11:51 compute-0 nova_compute[239965]: 2026-01-26 17:11:51.057 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:11:51 compute-0 nova_compute[239965]: 2026-01-26 17:11:51.092 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:11:51 compute-0 nova_compute[239965]: 2026-01-26 17:11:51.098 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:11:51 compute-0 nova_compute[239965]: 2026-01-26 17:11:51.099 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:51 compute-0 ceph-mon[75140]: pgmap v3662: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3125594754' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:11:51 compute-0 sudo[404862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:11:51 compute-0 sudo[404862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:11:51 compute-0 sudo[404862]: pam_unix(sudo:session): session closed for user root
Jan 26 17:11:51 compute-0 sudo[404887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:11:51 compute-0 sudo[404887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:11:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:52 compute-0 nova_compute[239965]: 2026-01-26 17:11:52.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:52 compute-0 sudo[404887]: pam_unix(sudo:session): session closed for user root
Jan 26 17:11:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:11:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:11:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:11:52 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:11:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:11:53 compute-0 ceph-mon[75140]: pgmap v3663: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:11:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:11:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:11:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:11:53 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:11:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:11:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:11:53 compute-0 sudo[404944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:11:53 compute-0 sudo[404944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:11:53 compute-0 sudo[404944]: pam_unix(sudo:session): session closed for user root
Jan 26 17:11:53 compute-0 sudo[404969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:11:53 compute-0 sudo[404969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:11:53 compute-0 podman[405006]: 2026-01-26 17:11:53.547942092 +0000 UTC m=+0.048765347 container create 00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatterjee, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 17:11:53 compute-0 systemd[1]: Started libpod-conmon-00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484.scope.
Jan 26 17:11:53 compute-0 podman[405006]: 2026-01-26 17:11:53.526764242 +0000 UTC m=+0.027587497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:11:53 compute-0 nova_compute[239965]: 2026-01-26 17:11:53.623 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:11:53 compute-0 podman[405006]: 2026-01-26 17:11:53.656078413 +0000 UTC m=+0.156901678 container init 00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatterjee, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:11:53 compute-0 podman[405006]: 2026-01-26 17:11:53.668777874 +0000 UTC m=+0.169601109 container start 00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:11:53 compute-0 podman[405006]: 2026-01-26 17:11:53.673702964 +0000 UTC m=+0.174526199 container attach 00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 26 17:11:53 compute-0 nervous_chatterjee[405022]: 167 167
Jan 26 17:11:53 compute-0 systemd[1]: libpod-00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484.scope: Deactivated successfully.
Jan 26 17:11:53 compute-0 conmon[405022]: conmon 00a93eb5f5faf57acf76 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484.scope/container/memory.events
Jan 26 17:11:53 compute-0 podman[405006]: 2026-01-26 17:11:53.680514382 +0000 UTC m=+0.181337617 container died 00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:11:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b797ead98e4e5010bbbf0cd73e752888adb5476f21c5ccf25c950ebfea71a2e-merged.mount: Deactivated successfully.
Jan 26 17:11:53 compute-0 podman[405006]: 2026-01-26 17:11:53.825945137 +0000 UTC m=+0.326768412 container remove 00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 17:11:53 compute-0 systemd[1]: libpod-conmon-00a93eb5f5faf57acf7603a43948890b524fc9a0f399830fc2a5a53e02e17484.scope: Deactivated successfully.
Jan 26 17:11:54 compute-0 podman[405047]: 2026-01-26 17:11:53.970583583 +0000 UTC m=+0.025066694 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:11:54 compute-0 podman[405047]: 2026-01-26 17:11:54.071921668 +0000 UTC m=+0.126404759 container create d2a52bf924436da5aa355f12ce511085fedace8d3074ec7ad4d96c12c05765a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:11:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:11:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:11:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:11:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:11:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:11:54 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:11:54 compute-0 nova_compute[239965]: 2026-01-26 17:11:54.101 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:11:54 compute-0 nova_compute[239965]: 2026-01-26 17:11:54.101 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:11:54 compute-0 nova_compute[239965]: 2026-01-26 17:11:54.101 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:11:54 compute-0 systemd[1]: Started libpod-conmon-d2a52bf924436da5aa355f12ce511085fedace8d3074ec7ad4d96c12c05765a0.scope.
Jan 26 17:11:54 compute-0 nova_compute[239965]: 2026-01-26 17:11:54.120 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:11:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3c0b850ed21f25a67ab07ac9057ce501a6a1155ed73d6e09f42c47914d7992/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3c0b850ed21f25a67ab07ac9057ce501a6a1155ed73d6e09f42c47914d7992/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3c0b850ed21f25a67ab07ac9057ce501a6a1155ed73d6e09f42c47914d7992/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3c0b850ed21f25a67ab07ac9057ce501a6a1155ed73d6e09f42c47914d7992/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd3c0b850ed21f25a67ab07ac9057ce501a6a1155ed73d6e09f42c47914d7992/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:54 compute-0 podman[405047]: 2026-01-26 17:11:54.159016103 +0000 UTC m=+0.213499214 container init d2a52bf924436da5aa355f12ce511085fedace8d3074ec7ad4d96c12c05765a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:11:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:54 compute-0 podman[405047]: 2026-01-26 17:11:54.16580244 +0000 UTC m=+0.220285531 container start d2a52bf924436da5aa355f12ce511085fedace8d3074ec7ad4d96c12c05765a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:11:54 compute-0 podman[405047]: 2026-01-26 17:11:54.170174497 +0000 UTC m=+0.224657588 container attach d2a52bf924436da5aa355f12ce511085fedace8d3074ec7ad4d96c12c05765a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wing, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:11:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:54 compute-0 suspicious_wing[405064]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:11:54 compute-0 suspicious_wing[405064]: --> All data devices are unavailable
Jan 26 17:11:54 compute-0 systemd[1]: libpod-d2a52bf924436da5aa355f12ce511085fedace8d3074ec7ad4d96c12c05765a0.scope: Deactivated successfully.
Jan 26 17:11:54 compute-0 podman[405047]: 2026-01-26 17:11:54.706592339 +0000 UTC m=+0.761075430 container died d2a52bf924436da5aa355f12ce511085fedace8d3074ec7ad4d96c12c05765a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:11:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd3c0b850ed21f25a67ab07ac9057ce501a6a1155ed73d6e09f42c47914d7992-merged.mount: Deactivated successfully.
Jan 26 17:11:54 compute-0 podman[405047]: 2026-01-26 17:11:54.798556934 +0000 UTC m=+0.853040025 container remove d2a52bf924436da5aa355f12ce511085fedace8d3074ec7ad4d96c12c05765a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wing, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 17:11:54 compute-0 systemd[1]: libpod-conmon-d2a52bf924436da5aa355f12ce511085fedace8d3074ec7ad4d96c12c05765a0.scope: Deactivated successfully.
Jan 26 17:11:54 compute-0 sudo[404969]: pam_unix(sudo:session): session closed for user root
Jan 26 17:11:54 compute-0 sudo[405099]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:11:54 compute-0 sudo[405099]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:11:54 compute-0 sudo[405099]: pam_unix(sudo:session): session closed for user root
Jan 26 17:11:55 compute-0 sudo[405124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:11:55 compute-0 sudo[405124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:11:55 compute-0 ceph-mon[75140]: pgmap v3664: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:55 compute-0 podman[405160]: 2026-01-26 17:11:55.32087249 +0000 UTC m=+0.038377663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:11:55 compute-0 podman[405160]: 2026-01-26 17:11:55.422243254 +0000 UTC m=+0.139748407 container create 83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:11:55 compute-0 systemd[1]: Started libpod-conmon-83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258.scope.
Jan 26 17:11:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:11:55 compute-0 podman[405160]: 2026-01-26 17:11:55.57461454 +0000 UTC m=+0.292119743 container init 83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 17:11:55 compute-0 podman[405160]: 2026-01-26 17:11:55.582237828 +0000 UTC m=+0.299742991 container start 83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:11:55 compute-0 podman[405160]: 2026-01-26 17:11:55.585764234 +0000 UTC m=+0.303269387 container attach 83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:11:55 compute-0 systemd[1]: libpod-83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258.scope: Deactivated successfully.
Jan 26 17:11:55 compute-0 hardcore_mccarthy[405177]: 167 167
Jan 26 17:11:55 compute-0 conmon[405177]: conmon 83ba79813facfdc435cd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258.scope/container/memory.events
Jan 26 17:11:55 compute-0 podman[405160]: 2026-01-26 17:11:55.591445293 +0000 UTC m=+0.308950446 container died 83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 17:11:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ebf02b411b0eec2111acfe7df4ad57c8b95dc4c0de50d4617ae2c9703e557e-merged.mount: Deactivated successfully.
Jan 26 17:11:55 compute-0 podman[405160]: 2026-01-26 17:11:55.667925798 +0000 UTC m=+0.385430951 container remove 83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_mccarthy, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 17:11:55 compute-0 systemd[1]: libpod-conmon-83ba79813facfdc435cd05a4bcbc624cd82f422f4ab6559e95f5fae9b378b258.scope: Deactivated successfully.
Jan 26 17:11:55 compute-0 podman[405203]: 2026-01-26 17:11:55.83689245 +0000 UTC m=+0.046065239 container create e48832217b6f5165731754de2c51beab3fccd45e9a91e4c29f9244dae5575798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bhabha, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:11:55 compute-0 systemd[1]: Started libpod-conmon-e48832217b6f5165731754de2c51beab3fccd45e9a91e4c29f9244dae5575798.scope.
Jan 26 17:11:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:11:55 compute-0 podman[405203]: 2026-01-26 17:11:55.813443116 +0000 UTC m=+0.022615925 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae72fd4ed1b0a1be015e992436fd08900508bbc648630492e9f695813cd72f2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae72fd4ed1b0a1be015e992436fd08900508bbc648630492e9f695813cd72f2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae72fd4ed1b0a1be015e992436fd08900508bbc648630492e9f695813cd72f2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae72fd4ed1b0a1be015e992436fd08900508bbc648630492e9f695813cd72f2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:55 compute-0 podman[405203]: 2026-01-26 17:11:55.940642335 +0000 UTC m=+0.149815144 container init e48832217b6f5165731754de2c51beab3fccd45e9a91e4c29f9244dae5575798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bhabha, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:11:55 compute-0 podman[405203]: 2026-01-26 17:11:55.94822063 +0000 UTC m=+0.157393429 container start e48832217b6f5165731754de2c51beab3fccd45e9a91e4c29f9244dae5575798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 17:11:55 compute-0 podman[405203]: 2026-01-26 17:11:55.951961032 +0000 UTC m=+0.161133841 container attach e48832217b6f5165731754de2c51beab3fccd45e9a91e4c29f9244dae5575798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bhabha, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 17:11:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:56 compute-0 funny_bhabha[405219]: {
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:     "0": [
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:         {
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "devices": [
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "/dev/loop3"
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             ],
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_name": "ceph_lv0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_size": "21470642176",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "name": "ceph_lv0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "tags": {
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.cluster_name": "ceph",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.crush_device_class": "",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.encrypted": "0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.objectstore": "bluestore",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.osd_id": "0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.type": "block",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.vdo": "0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.with_tpm": "0"
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             },
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "type": "block",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "vg_name": "ceph_vg0"
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:         }
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:     ],
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:     "1": [
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:         {
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "devices": [
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "/dev/loop4"
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             ],
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_name": "ceph_lv1",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_size": "21470642176",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "name": "ceph_lv1",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "tags": {
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.cluster_name": "ceph",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.crush_device_class": "",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.encrypted": "0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.objectstore": "bluestore",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.osd_id": "1",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.type": "block",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.vdo": "0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.with_tpm": "0"
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             },
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "type": "block",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "vg_name": "ceph_vg1"
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:         }
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:     ],
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:     "2": [
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:         {
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "devices": [
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "/dev/loop5"
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             ],
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_name": "ceph_lv2",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_size": "21470642176",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "name": "ceph_lv2",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "tags": {
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.cluster_name": "ceph",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.crush_device_class": "",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.encrypted": "0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.objectstore": "bluestore",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.osd_id": "2",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.type": "block",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.vdo": "0",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:                 "ceph.with_tpm": "0"
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             },
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "type": "block",
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:             "vg_name": "ceph_vg2"
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:         }
Jan 26 17:11:56 compute-0 funny_bhabha[405219]:     ]
Jan 26 17:11:56 compute-0 funny_bhabha[405219]: }
Jan 26 17:11:56 compute-0 systemd[1]: libpod-e48832217b6f5165731754de2c51beab3fccd45e9a91e4c29f9244dae5575798.scope: Deactivated successfully.
Jan 26 17:11:56 compute-0 podman[405203]: 2026-01-26 17:11:56.292082851 +0000 UTC m=+0.501255660 container died e48832217b6f5165731754de2c51beab3fccd45e9a91e4c29f9244dae5575798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 17:11:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae72fd4ed1b0a1be015e992436fd08900508bbc648630492e9f695813cd72f2c-merged.mount: Deactivated successfully.
Jan 26 17:11:56 compute-0 podman[405203]: 2026-01-26 17:11:56.379837832 +0000 UTC m=+0.589010621 container remove e48832217b6f5165731754de2c51beab3fccd45e9a91e4c29f9244dae5575798 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:11:56 compute-0 systemd[1]: libpod-conmon-e48832217b6f5165731754de2c51beab3fccd45e9a91e4c29f9244dae5575798.scope: Deactivated successfully.
Jan 26 17:11:56 compute-0 sudo[405124]: pam_unix(sudo:session): session closed for user root
Jan 26 17:11:56 compute-0 sudo[405242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:11:56 compute-0 sudo[405242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:11:56 compute-0 sudo[405242]: pam_unix(sudo:session): session closed for user root
Jan 26 17:11:56 compute-0 sudo[405267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:11:56 compute-0 sudo[405267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:11:56 compute-0 podman[405305]: 2026-01-26 17:11:56.886210258 +0000 UTC m=+0.042698959 container create f3fc6b4556e412d4c9cf76f1104d1df21c477bdec855000299f46efce59a5f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:11:56 compute-0 systemd[1]: Started libpod-conmon-f3fc6b4556e412d4c9cf76f1104d1df21c477bdec855000299f46efce59a5f4b.scope.
Jan 26 17:11:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:11:56 compute-0 podman[405305]: 2026-01-26 17:11:56.866999366 +0000 UTC m=+0.023488077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:11:56 compute-0 podman[405305]: 2026-01-26 17:11:56.972190415 +0000 UTC m=+0.128679126 container init f3fc6b4556e412d4c9cf76f1104d1df21c477bdec855000299f46efce59a5f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_engelbart, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:11:56 compute-0 podman[405305]: 2026-01-26 17:11:56.981583166 +0000 UTC m=+0.138071867 container start f3fc6b4556e412d4c9cf76f1104d1df21c477bdec855000299f46efce59a5f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_engelbart, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 17:11:56 compute-0 upbeat_engelbart[405321]: 167 167
Jan 26 17:11:56 compute-0 systemd[1]: libpod-f3fc6b4556e412d4c9cf76f1104d1df21c477bdec855000299f46efce59a5f4b.scope: Deactivated successfully.
Jan 26 17:11:56 compute-0 podman[405305]: 2026-01-26 17:11:56.992491993 +0000 UTC m=+0.148980714 container attach f3fc6b4556e412d4c9cf76f1104d1df21c477bdec855000299f46efce59a5f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:11:56 compute-0 podman[405305]: 2026-01-26 17:11:56.993009406 +0000 UTC m=+0.149498117 container died f3fc6b4556e412d4c9cf76f1104d1df21c477bdec855000299f46efce59a5f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 17:11:57 compute-0 ceph-mon[75140]: pgmap v3665: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:57 compute-0 nova_compute[239965]: 2026-01-26 17:11:57.317 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-0521a7c076391d9aa648d7aae0c2c0b541e44e681b137a24fc363ccbecdd8f68-merged.mount: Deactivated successfully.
Jan 26 17:11:57 compute-0 podman[405305]: 2026-01-26 17:11:57.538351847 +0000 UTC m=+0.694840538 container remove f3fc6b4556e412d4c9cf76f1104d1df21c477bdec855000299f46efce59a5f4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:11:57 compute-0 systemd[1]: libpod-conmon-f3fc6b4556e412d4c9cf76f1104d1df21c477bdec855000299f46efce59a5f4b.scope: Deactivated successfully.
Jan 26 17:11:57 compute-0 podman[405346]: 2026-01-26 17:11:57.740444491 +0000 UTC m=+0.064929522 container create 9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curie, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:11:57 compute-0 podman[405346]: 2026-01-26 17:11:57.708107059 +0000 UTC m=+0.032592070 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:11:57 compute-0 systemd[1]: Started libpod-conmon-9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492.scope.
Jan 26 17:11:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce989e3e23775e50d5c17791cae6cf225426634d053454aada2268d9996b6a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce989e3e23775e50d5c17791cae6cf225426634d053454aada2268d9996b6a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce989e3e23775e50d5c17791cae6cf225426634d053454aada2268d9996b6a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ce989e3e23775e50d5c17791cae6cf225426634d053454aada2268d9996b6a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:11:57 compute-0 podman[405346]: 2026-01-26 17:11:57.925961579 +0000 UTC m=+0.250446600 container init 9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:11:57 compute-0 podman[405346]: 2026-01-26 17:11:57.937494683 +0000 UTC m=+0.261979674 container start 9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curie, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:11:57 compute-0 podman[405346]: 2026-01-26 17:11:57.942550486 +0000 UTC m=+0.267035477 container attach 9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curie, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:11:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:58 compute-0 nova_compute[239965]: 2026-01-26 17:11:58.625 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:11:58 compute-0 lvm[405442]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:11:58 compute-0 lvm[405441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:11:58 compute-0 lvm[405442]: VG ceph_vg1 finished
Jan 26 17:11:58 compute-0 lvm[405441]: VG ceph_vg0 finished
Jan 26 17:11:58 compute-0 lvm[405444]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:11:58 compute-0 lvm[405444]: VG ceph_vg2 finished
Jan 26 17:11:58 compute-0 cranky_curie[405363]: {}
Jan 26 17:11:58 compute-0 systemd[1]: libpod-9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492.scope: Deactivated successfully.
Jan 26 17:11:58 compute-0 systemd[1]: libpod-9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492.scope: Consumed 1.571s CPU time.
Jan 26 17:11:58 compute-0 podman[405346]: 2026-01-26 17:11:58.880742888 +0000 UTC m=+1.205227889 container died 9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 17:11:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ce989e3e23775e50d5c17791cae6cf225426634d053454aada2268d9996b6a0-merged.mount: Deactivated successfully.
Jan 26 17:11:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:11:59.296 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:11:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:11:59.297 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:11:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:11:59.297 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:11:59 compute-0 podman[405346]: 2026-01-26 17:11:59.302816787 +0000 UTC m=+1.627301778 container remove 9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_curie, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:11:59 compute-0 sudo[405267]: pam_unix(sudo:session): session closed for user root
Jan 26 17:11:59 compute-0 systemd[1]: libpod-conmon-9cc36395b0479b5af4153bbedad80d037c8694233ea67157d41927cc558d7492.scope: Deactivated successfully.
Jan 26 17:11:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:11:59 compute-0 ceph-mon[75140]: pgmap v3666: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:11:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:11:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:11:59 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:11:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:11:59 compute-0 sudo[405459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:11:59 compute-0 sudo[405459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:11:59 compute-0 sudo[405459]: pam_unix(sudo:session): session closed for user root
Jan 26 17:12:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:12:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:12:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:12:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:12:00 compute-0 ceph-mon[75140]: pgmap v3667: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:02 compute-0 nova_compute[239965]: 2026-01-26 17:12:02.320 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:03 compute-0 ceph-mon[75140]: pgmap v3668: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:03 compute-0 nova_compute[239965]: 2026-01-26 17:12:03.627 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:04 compute-0 podman[405484]: 2026-01-26 17:12:04.423442301 +0000 UTC m=+0.101259213 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 17:12:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:05 compute-0 ceph-mon[75140]: pgmap v3669: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:07 compute-0 nova_compute[239965]: 2026-01-26 17:12:07.360 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:07 compute-0 podman[405507]: 2026-01-26 17:12:07.412887815 +0000 UTC m=+0.098680180 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 17:12:07 compute-0 ceph-mon[75140]: pgmap v3670: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:08 compute-0 nova_compute[239965]: 2026-01-26 17:12:08.629 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:09 compute-0 ceph-mon[75140]: pgmap v3671: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:11 compute-0 ceph-mon[75140]: pgmap v3672: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:12 compute-0 nova_compute[239965]: 2026-01-26 17:12:12.363 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:12 compute-0 ceph-mon[75140]: pgmap v3673: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:13 compute-0 nova_compute[239965]: 2026-01-26 17:12:13.633 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:15 compute-0 ceph-mon[75140]: pgmap v3674: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:17 compute-0 nova_compute[239965]: 2026-01-26 17:12:17.365 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:17 compute-0 ceph-mon[75140]: pgmap v3675: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:18 compute-0 nova_compute[239965]: 2026-01-26 17:12:18.635 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:19 compute-0 ceph-mon[75140]: pgmap v3676: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:21 compute-0 ceph-mon[75140]: pgmap v3677: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:22 compute-0 nova_compute[239965]: 2026-01-26 17:12:22.369 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:23 compute-0 nova_compute[239965]: 2026-01-26 17:12:23.636 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:24 compute-0 ceph-mon[75140]: pgmap v3678: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:25 compute-0 ceph-mon[75140]: pgmap v3679: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:26 compute-0 sshd-session[405534]: Invalid user sol from 45.148.10.240 port 36262
Jan 26 17:12:26 compute-0 sshd-session[405534]: Connection closed by invalid user sol 45.148.10.240 port 36262 [preauth]
Jan 26 17:12:26 compute-0 ceph-mon[75140]: pgmap v3680: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:27 compute-0 nova_compute[239965]: 2026-01-26 17:12:27.371 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #177. Immutable memtables: 0.
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.597043) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 109] Flushing memtable with next log file: 177
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447548597099, "job": 109, "event": "flush_started", "num_memtables": 1, "num_entries": 782, "num_deletes": 251, "total_data_size": 1077908, "memory_usage": 1095832, "flush_reason": "Manual Compaction"}
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 109] Level-0 flush table #178: started
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447548609682, "cf_name": "default", "job": 109, "event": "table_file_creation", "file_number": 178, "file_size": 1063052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 74447, "largest_seqno": 75228, "table_properties": {"data_size": 1058991, "index_size": 1840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8927, "raw_average_key_size": 19, "raw_value_size": 1050881, "raw_average_value_size": 2294, "num_data_blocks": 82, "num_entries": 458, "num_filter_entries": 458, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769447480, "oldest_key_time": 1769447480, "file_creation_time": 1769447548, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 178, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 109] Flush lasted 12739 microseconds, and 4360 cpu microseconds.
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:12:28 compute-0 nova_compute[239965]: 2026-01-26 17:12:28.638 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.609772) [db/flush_job.cc:967] [default] [JOB 109] Level-0 flush table #178: 1063052 bytes OK
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.609812) [db/memtable_list.cc:519] [default] Level-0 commit table #178 started
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.708323) [db/memtable_list.cc:722] [default] Level-0 commit table #178: memtable #1 done
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.708394) EVENT_LOG_v1 {"time_micros": 1769447548708381, "job": 109, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.708438) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 109] Try to delete WAL files size 1073959, prev total WAL file size 1073959, number of live WAL files 2.
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000174.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.709392) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037323739' seq:72057594037927935, type:22 .. '7061786F730037353331' seq:0, type:0; will stop at (end)
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 110] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 109 Base level 0, inputs: [178(1038KB)], [176(10MB)]
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447548709520, "job": 110, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [178], "files_L6": [176], "score": -1, "input_data_size": 11707141, "oldest_snapshot_seqno": -1}
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 110] Generated table #179: 9251 keys, 9899994 bytes, temperature: kUnknown
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447548797206, "cf_name": "default", "job": 110, "event": "table_file_creation", "file_number": 179, "file_size": 9899994, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9843731, "index_size": 32022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23173, "raw_key_size": 242789, "raw_average_key_size": 26, "raw_value_size": 9683742, "raw_average_value_size": 1046, "num_data_blocks": 1226, "num_entries": 9251, "num_filter_entries": 9251, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769447548, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 179, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.797496) [db/compaction/compaction_job.cc:1663] [default] [JOB 110] Compacted 1@0 + 1@6 files to L6 => 9899994 bytes
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.799884) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.4 rd, 112.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 10.2 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(20.3) write-amplify(9.3) OK, records in: 9765, records dropped: 514 output_compression: NoCompression
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.799901) EVENT_LOG_v1 {"time_micros": 1769447548799893, "job": 110, "event": "compaction_finished", "compaction_time_micros": 87766, "compaction_time_cpu_micros": 34112, "output_level": 6, "num_output_files": 1, "total_output_size": 9899994, "num_input_records": 9765, "num_output_records": 9251, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000178.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447548800198, "job": 110, "event": "table_file_deletion", "file_number": 178}
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000176.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447548802232, "job": 110, "event": "table_file_deletion", "file_number": 176}
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.709157) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.802464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.802474) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.802476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.802478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:12:28 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:12:28.802480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:12:28
Jan 26 17:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'vms', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.data', 'images', 'cephfs.cephfs.meta', '.rgw.root']
Jan 26 17:12:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:12:29 compute-0 ceph-mon[75140]: pgmap v3681: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:12:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:12:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:12:31 compute-0 ceph-mon[75140]: pgmap v3682: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:32 compute-0 nova_compute[239965]: 2026-01-26 17:12:32.373 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:33 compute-0 ceph-mon[75140]: pgmap v3683: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:33 compute-0 nova_compute[239965]: 2026-01-26 17:12:33.639 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:35 compute-0 podman[405536]: 2026-01-26 17:12:35.379667892 +0000 UTC m=+0.059072499 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 26 17:12:35 compute-0 nova_compute[239965]: 2026-01-26 17:12:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:35 compute-0 ceph-mon[75140]: pgmap v3684: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:36 compute-0 nova_compute[239965]: 2026-01-26 17:12:36.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:36 compute-0 ceph-mon[75140]: pgmap v3685: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:37 compute-0 nova_compute[239965]: 2026-01-26 17:12:37.375 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:38 compute-0 podman[405555]: 2026-01-26 17:12:38.417274268 +0000 UTC m=+0.098197379 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:12:38 compute-0 nova_compute[239965]: 2026-01-26 17:12:38.640 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:39 compute-0 ceph-mon[75140]: pgmap v3686: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:39 compute-0 nova_compute[239965]: 2026-01-26 17:12:39.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:39 compute-0 nova_compute[239965]: 2026-01-26 17:12:39.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:41 compute-0 ceph-mon[75140]: pgmap v3687: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:42 compute-0 nova_compute[239965]: 2026-01-26 17:12:42.378 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:43 compute-0 nova_compute[239965]: 2026-01-26 17:12:43.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:43 compute-0 nova_compute[239965]: 2026-01-26 17:12:43.642 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:43 compute-0 ceph-mon[75140]: pgmap v3688: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:45 compute-0 ceph-mon[75140]: pgmap v3689: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:47 compute-0 ceph-mon[75140]: pgmap v3690: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:47 compute-0 nova_compute[239965]: 2026-01-26 17:12:47.380 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:48 compute-0 nova_compute[239965]: 2026-01-26 17:12:48.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:48 compute-0 nova_compute[239965]: 2026-01-26 17:12:48.685 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:12:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918091901' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:12:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:12:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2918091901' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:12:49 compute-0 ceph-mon[75140]: pgmap v3691: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2918091901' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:12:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2918091901' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:12:49 compute-0 nova_compute[239965]: 2026-01-26 17:12:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:49 compute-0 nova_compute[239965]: 2026-01-26 17:12:49.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:12:49 compute-0 nova_compute[239965]: 2026-01-26 17:12:49.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:49 compute-0 nova_compute[239965]: 2026-01-26 17:12:49.547 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:12:49 compute-0 nova_compute[239965]: 2026-01-26 17:12:49.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:12:49 compute-0 nova_compute[239965]: 2026-01-26 17:12:49.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:12:49 compute-0 nova_compute[239965]: 2026-01-26 17:12:49.548 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:12:49 compute-0 nova_compute[239965]: 2026-01-26 17:12:49.549 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:12:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:12:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:12:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3122754290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:12:50 compute-0 nova_compute[239965]: 2026-01-26 17:12:50.139 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3122754290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:12:50 compute-0 nova_compute[239965]: 2026-01-26 17:12:50.344 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:12:50 compute-0 nova_compute[239965]: 2026-01-26 17:12:50.345 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3570MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:12:50 compute-0 nova_compute[239965]: 2026-01-26 17:12:50.346 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:12:50 compute-0 nova_compute[239965]: 2026-01-26 17:12:50.346 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:12:50 compute-0 nova_compute[239965]: 2026-01-26 17:12:50.407 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:12:50 compute-0 nova_compute[239965]: 2026-01-26 17:12:50.407 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:12:50 compute-0 nova_compute[239965]: 2026-01-26 17:12:50.423 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:12:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:12:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/340904019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:12:50 compute-0 nova_compute[239965]: 2026-01-26 17:12:50.998 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:12:51 compute-0 nova_compute[239965]: 2026-01-26 17:12:51.005 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:12:51 compute-0 nova_compute[239965]: 2026-01-26 17:12:51.023 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:12:51 compute-0 nova_compute[239965]: 2026-01-26 17:12:51.026 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:12:51 compute-0 nova_compute[239965]: 2026-01-26 17:12:51.026 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:12:51 compute-0 ceph-mon[75140]: pgmap v3692: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:51 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/340904019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:12:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:52 compute-0 nova_compute[239965]: 2026-01-26 17:12:52.383 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:53 compute-0 ceph-mon[75140]: pgmap v3693: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:12:53 compute-0 nova_compute[239965]: 2026-01-26 17:12:53.686 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:54 compute-0 nova_compute[239965]: 2026-01-26 17:12:54.027 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:54 compute-0 nova_compute[239965]: 2026-01-26 17:12:54.027 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:12:54 compute-0 nova_compute[239965]: 2026-01-26 17:12:54.027 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:12:54 compute-0 nova_compute[239965]: 2026-01-26 17:12:54.047 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:12:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 17:12:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:55 compute-0 ceph-mon[75140]: pgmap v3694: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 4 op/s
Jan 26 17:12:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 17:12:56 compute-0 ceph-mon[75140]: pgmap v3695: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.0 KiB/s rd, 0 B/s wr, 6 op/s
Jan 26 17:12:57 compute-0 nova_compute[239965]: 2026-01-26 17:12:57.385 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:57 compute-0 nova_compute[239965]: 2026-01-26 17:12:57.522 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:12:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 26 17:12:58 compute-0 nova_compute[239965]: 2026-01-26 17:12:58.688 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:12:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:12:59.297 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:12:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:12:59.297 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:12:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:12:59.298 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:12:59 compute-0 ceph-mon[75140]: pgmap v3696: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 26 17:12:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:12:59 compute-0 sudo[405624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:12:59 compute-0 sudo[405624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:12:59 compute-0 sudo[405624]: pam_unix(sudo:session): session closed for user root
Jan 26 17:12:59 compute-0 sudo[405649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:12:59 compute-0 sudo[405649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:13:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 26 17:13:00 compute-0 sudo[405649]: pam_unix(sudo:session): session closed for user root
Jan 26 17:13:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:13:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:13:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:13:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:13:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:13:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:13:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:13:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:13:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:13:00 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:13:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:13:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:13:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:13:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:13:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:13:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:13:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:13:00 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:13:00 compute-0 sudo[405704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:13:00 compute-0 sudo[405704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:13:00 compute-0 sudo[405704]: pam_unix(sudo:session): session closed for user root
Jan 26 17:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:13:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:13:00 compute-0 sudo[405729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:13:00 compute-0 sudo[405729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:13:00 compute-0 podman[405766]: 2026-01-26 17:13:00.803776202 +0000 UTC m=+0.043813196 container create 3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 17:13:00 compute-0 systemd[1]: Started libpod-conmon-3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b.scope.
Jan 26 17:13:00 compute-0 podman[405766]: 2026-01-26 17:13:00.783448673 +0000 UTC m=+0.023485697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:13:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:13:00 compute-0 podman[405766]: 2026-01-26 17:13:00.907025982 +0000 UTC m=+0.147063026 container init 3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:13:00 compute-0 podman[405766]: 2026-01-26 17:13:00.915932182 +0000 UTC m=+0.155969176 container start 3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:13:00 compute-0 podman[405766]: 2026-01-26 17:13:00.920050282 +0000 UTC m=+0.160087296 container attach 3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 17:13:00 compute-0 systemd[1]: libpod-3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b.scope: Deactivated successfully.
Jan 26 17:13:00 compute-0 youthful_lederberg[405782]: 167 167
Jan 26 17:13:00 compute-0 conmon[405782]: conmon 3b82cb61c359d4cb1efa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b.scope/container/memory.events
Jan 26 17:13:00 compute-0 podman[405766]: 2026-01-26 17:13:00.925725452 +0000 UTC m=+0.165762446 container died 3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 26 17:13:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4f048e7068911cefebf5d0231b92d0fc7f86b39fd1c4a791f366b6b50541cf9-merged.mount: Deactivated successfully.
Jan 26 17:13:00 compute-0 podman[405766]: 2026-01-26 17:13:00.97544124 +0000 UTC m=+0.215478244 container remove 3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:13:00 compute-0 systemd[1]: libpod-conmon-3b82cb61c359d4cb1efa91b37089bf3d88b8a621654d68433236fc2bf8eb8a8b.scope: Deactivated successfully.
Jan 26 17:13:01 compute-0 podman[405805]: 2026-01-26 17:13:01.190512583 +0000 UTC m=+0.049558926 container create ef218cd1f2c1d4abe99255ec163dbe6632c01c2bc1d12e6fb242372c74abea73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:13:01 compute-0 systemd[1]: Started libpod-conmon-ef218cd1f2c1d4abe99255ec163dbe6632c01c2bc1d12e6fb242372c74abea73.scope.
Jan 26 17:13:01 compute-0 podman[405805]: 2026-01-26 17:13:01.168547985 +0000 UTC m=+0.027594348 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:13:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b6632beb9448dc6a2a70683e58495eec57e5857e50e89c43b7f286ea89a980/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b6632beb9448dc6a2a70683e58495eec57e5857e50e89c43b7f286ea89a980/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b6632beb9448dc6a2a70683e58495eec57e5857e50e89c43b7f286ea89a980/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b6632beb9448dc6a2a70683e58495eec57e5857e50e89c43b7f286ea89a980/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29b6632beb9448dc6a2a70683e58495eec57e5857e50e89c43b7f286ea89a980/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:01 compute-0 podman[405805]: 2026-01-26 17:13:01.302344835 +0000 UTC m=+0.161391198 container init ef218cd1f2c1d4abe99255ec163dbe6632c01c2bc1d12e6fb242372c74abea73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:13:01 compute-0 podman[405805]: 2026-01-26 17:13:01.31559234 +0000 UTC m=+0.174638723 container start ef218cd1f2c1d4abe99255ec163dbe6632c01c2bc1d12e6fb242372c74abea73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 17:13:01 compute-0 podman[405805]: 2026-01-26 17:13:01.321254368 +0000 UTC m=+0.180300751 container attach ef218cd1f2c1d4abe99255ec163dbe6632c01c2bc1d12e6fb242372c74abea73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 17:13:01 compute-0 ceph-mon[75140]: pgmap v3697: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 26 17:13:01 compute-0 boring_chandrasekhar[405822]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:13:01 compute-0 boring_chandrasekhar[405822]: --> All data devices are unavailable
Jan 26 17:13:01 compute-0 systemd[1]: libpod-ef218cd1f2c1d4abe99255ec163dbe6632c01c2bc1d12e6fb242372c74abea73.scope: Deactivated successfully.
Jan 26 17:13:01 compute-0 podman[405842]: 2026-01-26 17:13:01.8776326 +0000 UTC m=+0.026048300 container died ef218cd1f2c1d4abe99255ec163dbe6632c01c2bc1d12e6fb242372c74abea73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 17:13:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-29b6632beb9448dc6a2a70683e58495eec57e5857e50e89c43b7f286ea89a980-merged.mount: Deactivated successfully.
Jan 26 17:13:01 compute-0 podman[405842]: 2026-01-26 17:13:01.923287329 +0000 UTC m=+0.071703019 container remove ef218cd1f2c1d4abe99255ec163dbe6632c01c2bc1d12e6fb242372c74abea73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 17:13:01 compute-0 systemd[1]: libpod-conmon-ef218cd1f2c1d4abe99255ec163dbe6632c01c2bc1d12e6fb242372c74abea73.scope: Deactivated successfully.
Jan 26 17:13:01 compute-0 sudo[405729]: pam_unix(sudo:session): session closed for user root
Jan 26 17:13:02 compute-0 sudo[405857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:13:02 compute-0 sudo[405857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:13:02 compute-0 sudo[405857]: pam_unix(sudo:session): session closed for user root
Jan 26 17:13:02 compute-0 sudo[405882]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:13:02 compute-0 sudo[405882]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:13:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 17:13:02 compute-0 nova_compute[239965]: 2026-01-26 17:13:02.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:02 compute-0 podman[405918]: 2026-01-26 17:13:02.414651036 +0000 UTC m=+0.056635519 container create 87db60c980402776c66f438ea8559f3e1ecc6a4e2e14733ab6fc007f0ec070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:13:02 compute-0 systemd[1]: Started libpod-conmon-87db60c980402776c66f438ea8559f3e1ecc6a4e2e14733ab6fc007f0ec070b3.scope.
Jan 26 17:13:02 compute-0 podman[405918]: 2026-01-26 17:13:02.382289073 +0000 UTC m=+0.024273576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:13:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:13:02 compute-0 podman[405918]: 2026-01-26 17:13:02.497302493 +0000 UTC m=+0.139286996 container init 87db60c980402776c66f438ea8559f3e1ecc6a4e2e14733ab6fc007f0ec070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:13:02 compute-0 podman[405918]: 2026-01-26 17:13:02.504678243 +0000 UTC m=+0.146662726 container start 87db60c980402776c66f438ea8559f3e1ecc6a4e2e14733ab6fc007f0ec070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:13:02 compute-0 nifty_cartwright[405934]: 167 167
Jan 26 17:13:02 compute-0 systemd[1]: libpod-87db60c980402776c66f438ea8559f3e1ecc6a4e2e14733ab6fc007f0ec070b3.scope: Deactivated successfully.
Jan 26 17:13:02 compute-0 podman[405918]: 2026-01-26 17:13:02.516926294 +0000 UTC m=+0.158910817 container attach 87db60c980402776c66f438ea8559f3e1ecc6a4e2e14733ab6fc007f0ec070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 26 17:13:02 compute-0 podman[405918]: 2026-01-26 17:13:02.517488748 +0000 UTC m=+0.159473231 container died 87db60c980402776c66f438ea8559f3e1ecc6a4e2e14733ab6fc007f0ec070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cartwright, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:13:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf468e6c4b221e121d7c722c6c97e7127516158a2f3f2f2ed512336599f86c1b-merged.mount: Deactivated successfully.
Jan 26 17:13:02 compute-0 podman[405918]: 2026-01-26 17:13:02.589326529 +0000 UTC m=+0.231311002 container remove 87db60c980402776c66f438ea8559f3e1ecc6a4e2e14733ab6fc007f0ec070b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_cartwright, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 17:13:02 compute-0 systemd[1]: libpod-conmon-87db60c980402776c66f438ea8559f3e1ecc6a4e2e14733ab6fc007f0ec070b3.scope: Deactivated successfully.
Jan 26 17:13:02 compute-0 podman[405960]: 2026-01-26 17:13:02.766514243 +0000 UTC m=+0.061362375 container create 77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 17:13:02 compute-0 systemd[1]: Started libpod-conmon-77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d.scope.
Jan 26 17:13:02 compute-0 podman[405960]: 2026-01-26 17:13:02.72968856 +0000 UTC m=+0.024536672 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:13:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0399e60d5954234345cad228af7599fc939384119ccf5caed46801b23c05962f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0399e60d5954234345cad228af7599fc939384119ccf5caed46801b23c05962f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0399e60d5954234345cad228af7599fc939384119ccf5caed46801b23c05962f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0399e60d5954234345cad228af7599fc939384119ccf5caed46801b23c05962f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:02 compute-0 podman[405960]: 2026-01-26 17:13:02.880721983 +0000 UTC m=+0.175570135 container init 77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:13:02 compute-0 podman[405960]: 2026-01-26 17:13:02.890078273 +0000 UTC m=+0.184926365 container start 77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_keller, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 17:13:02 compute-0 podman[405960]: 2026-01-26 17:13:02.896858659 +0000 UTC m=+0.191706761 container attach 77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:13:03 compute-0 magical_keller[405976]: {
Jan 26 17:13:03 compute-0 magical_keller[405976]:     "0": [
Jan 26 17:13:03 compute-0 magical_keller[405976]:         {
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "devices": [
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "/dev/loop3"
Jan 26 17:13:03 compute-0 magical_keller[405976]:             ],
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_name": "ceph_lv0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_size": "21470642176",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "name": "ceph_lv0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "tags": {
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.cluster_name": "ceph",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.crush_device_class": "",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.encrypted": "0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.objectstore": "bluestore",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.osd_id": "0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.type": "block",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.vdo": "0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.with_tpm": "0"
Jan 26 17:13:03 compute-0 magical_keller[405976]:             },
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "type": "block",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "vg_name": "ceph_vg0"
Jan 26 17:13:03 compute-0 magical_keller[405976]:         }
Jan 26 17:13:03 compute-0 magical_keller[405976]:     ],
Jan 26 17:13:03 compute-0 magical_keller[405976]:     "1": [
Jan 26 17:13:03 compute-0 magical_keller[405976]:         {
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "devices": [
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "/dev/loop4"
Jan 26 17:13:03 compute-0 magical_keller[405976]:             ],
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_name": "ceph_lv1",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_size": "21470642176",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "name": "ceph_lv1",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "tags": {
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.cluster_name": "ceph",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.crush_device_class": "",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.encrypted": "0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.objectstore": "bluestore",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.osd_id": "1",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.type": "block",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.vdo": "0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.with_tpm": "0"
Jan 26 17:13:03 compute-0 magical_keller[405976]:             },
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "type": "block",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "vg_name": "ceph_vg1"
Jan 26 17:13:03 compute-0 magical_keller[405976]:         }
Jan 26 17:13:03 compute-0 magical_keller[405976]:     ],
Jan 26 17:13:03 compute-0 magical_keller[405976]:     "2": [
Jan 26 17:13:03 compute-0 magical_keller[405976]:         {
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "devices": [
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "/dev/loop5"
Jan 26 17:13:03 compute-0 magical_keller[405976]:             ],
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_name": "ceph_lv2",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_size": "21470642176",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "name": "ceph_lv2",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "tags": {
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.cluster_name": "ceph",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.crush_device_class": "",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.encrypted": "0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.objectstore": "bluestore",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.osd_id": "2",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.type": "block",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.vdo": "0",
Jan 26 17:13:03 compute-0 magical_keller[405976]:                 "ceph.with_tpm": "0"
Jan 26 17:13:03 compute-0 magical_keller[405976]:             },
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "type": "block",
Jan 26 17:13:03 compute-0 magical_keller[405976]:             "vg_name": "ceph_vg2"
Jan 26 17:13:03 compute-0 magical_keller[405976]:         }
Jan 26 17:13:03 compute-0 magical_keller[405976]:     ]
Jan 26 17:13:03 compute-0 magical_keller[405976]: }
Jan 26 17:13:03 compute-0 systemd[1]: libpod-77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d.scope: Deactivated successfully.
Jan 26 17:13:03 compute-0 conmon[405976]: conmon 77ecc76f7a3fb0ee7628 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d.scope/container/memory.events
Jan 26 17:13:03 compute-0 podman[405960]: 2026-01-26 17:13:03.210726374 +0000 UTC m=+0.505574466 container died 77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_keller, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 17:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-0399e60d5954234345cad228af7599fc939384119ccf5caed46801b23c05962f-merged.mount: Deactivated successfully.
Jan 26 17:13:03 compute-0 podman[405960]: 2026-01-26 17:13:03.274685772 +0000 UTC m=+0.569533864 container remove 77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_keller, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:13:03 compute-0 systemd[1]: libpod-conmon-77ecc76f7a3fb0ee7628ce29e4aa732681ab91ef8d8a1e179ecd79d9f4e0490d.scope: Deactivated successfully.
Jan 26 17:13:03 compute-0 sudo[405882]: pam_unix(sudo:session): session closed for user root
Jan 26 17:13:03 compute-0 sudo[405999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:13:03 compute-0 sudo[405999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:13:03 compute-0 sudo[405999]: pam_unix(sudo:session): session closed for user root
Jan 26 17:13:03 compute-0 ceph-mon[75140]: pgmap v3698: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 17:13:03 compute-0 sudo[406024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:13:03 compute-0 sudo[406024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:13:03 compute-0 nova_compute[239965]: 2026-01-26 17:13:03.691 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:03 compute-0 podman[406062]: 2026-01-26 17:13:03.730135539 +0000 UTC m=+0.048771217 container create 32d9b185f420a679efe47581379009af1b2bdbe048c02e2f84788554d30e727d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_merkle, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:13:03 compute-0 systemd[1]: Started libpod-conmon-32d9b185f420a679efe47581379009af1b2bdbe048c02e2f84788554d30e727d.scope.
Jan 26 17:13:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:13:03 compute-0 podman[406062]: 2026-01-26 17:13:03.711246666 +0000 UTC m=+0.029882344 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:13:03 compute-0 podman[406062]: 2026-01-26 17:13:03.814030636 +0000 UTC m=+0.132666324 container init 32d9b185f420a679efe47581379009af1b2bdbe048c02e2f84788554d30e727d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_merkle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:13:03 compute-0 podman[406062]: 2026-01-26 17:13:03.819746556 +0000 UTC m=+0.138382214 container start 32d9b185f420a679efe47581379009af1b2bdbe048c02e2f84788554d30e727d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 17:13:03 compute-0 podman[406062]: 2026-01-26 17:13:03.823155669 +0000 UTC m=+0.141791447 container attach 32d9b185f420a679efe47581379009af1b2bdbe048c02e2f84788554d30e727d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:13:03 compute-0 blissful_merkle[406078]: 167 167
Jan 26 17:13:03 compute-0 systemd[1]: libpod-32d9b185f420a679efe47581379009af1b2bdbe048c02e2f84788554d30e727d.scope: Deactivated successfully.
Jan 26 17:13:03 compute-0 podman[406062]: 2026-01-26 17:13:03.827870976 +0000 UTC m=+0.146506644 container died 32d9b185f420a679efe47581379009af1b2bdbe048c02e2f84788554d30e727d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_merkle, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:13:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-33eba27d92b6b52ca15c42a411cc2f7a5f72a8b6f39fe770e6017c896fba4ffa-merged.mount: Deactivated successfully.
Jan 26 17:13:03 compute-0 podman[406062]: 2026-01-26 17:13:03.868202584 +0000 UTC m=+0.186838252 container remove 32d9b185f420a679efe47581379009af1b2bdbe048c02e2f84788554d30e727d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_merkle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:13:03 compute-0 systemd[1]: libpod-conmon-32d9b185f420a679efe47581379009af1b2bdbe048c02e2f84788554d30e727d.scope: Deactivated successfully.
Jan 26 17:13:04 compute-0 podman[406102]: 2026-01-26 17:13:04.037158276 +0000 UTC m=+0.049327450 container create 7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 17:13:04 compute-0 systemd[1]: Started libpod-conmon-7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6.scope.
Jan 26 17:13:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:13:04 compute-0 podman[406102]: 2026-01-26 17:13:04.016929081 +0000 UTC m=+0.029098265 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c163109bf92a08ad3ef0be14ca3a4568a8b9f091f84532bcb2a43dae2048890f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c163109bf92a08ad3ef0be14ca3a4568a8b9f091f84532bcb2a43dae2048890f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c163109bf92a08ad3ef0be14ca3a4568a8b9f091f84532bcb2a43dae2048890f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c163109bf92a08ad3ef0be14ca3a4568a8b9f091f84532bcb2a43dae2048890f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:13:04 compute-0 podman[406102]: 2026-01-26 17:13:04.13520209 +0000 UTC m=+0.147371264 container init 7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:13:04 compute-0 podman[406102]: 2026-01-26 17:13:04.144528409 +0000 UTC m=+0.156697563 container start 7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:13:04 compute-0 podman[406102]: 2026-01-26 17:13:04.162007287 +0000 UTC m=+0.174176461 container attach 7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:13:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:13:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:05 compute-0 lvm[406198]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:13:05 compute-0 lvm[406199]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:13:05 compute-0 lvm[406198]: VG ceph_vg0 finished
Jan 26 17:13:05 compute-0 lvm[406199]: VG ceph_vg1 finished
Jan 26 17:13:05 compute-0 lvm[406201]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:13:05 compute-0 lvm[406201]: VG ceph_vg2 finished
Jan 26 17:13:05 compute-0 wonderful_bose[406119]: {}
Jan 26 17:13:05 compute-0 systemd[1]: libpod-7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6.scope: Deactivated successfully.
Jan 26 17:13:05 compute-0 systemd[1]: libpod-7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6.scope: Consumed 1.701s CPU time.
Jan 26 17:13:05 compute-0 podman[406102]: 2026-01-26 17:13:05.17736716 +0000 UTC m=+1.189536404 container died 7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:13:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-c163109bf92a08ad3ef0be14ca3a4568a8b9f091f84532bcb2a43dae2048890f-merged.mount: Deactivated successfully.
Jan 26 17:13:05 compute-0 podman[406102]: 2026-01-26 17:13:05.231075118 +0000 UTC m=+1.243244272 container remove 7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_bose, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 17:13:05 compute-0 systemd[1]: libpod-conmon-7180c26d5ae202542ee739fcd306a2bc91aec7ee4906fe51911f3459430ec2d6.scope: Deactivated successfully.
Jan 26 17:13:05 compute-0 sudo[406024]: pam_unix(sudo:session): session closed for user root
Jan 26 17:13:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:13:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:13:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:13:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:13:05 compute-0 ceph-mon[75140]: pgmap v3699: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:13:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:13:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:13:05 compute-0 sudo[406215]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:13:05 compute-0 sudo[406215]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:13:05 compute-0 sudo[406215]: pam_unix(sudo:session): session closed for user root
Jan 26 17:13:05 compute-0 podman[406239]: 2026-01-26 17:13:05.529432483 +0000 UTC m=+0.069341042 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:13:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:13:07 compute-0 nova_compute[239965]: 2026-01-26 17:13:07.390 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:07 compute-0 ceph-mon[75140]: pgmap v3700: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:13:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 26 17:13:08 compute-0 nova_compute[239965]: 2026-01-26 17:13:08.742 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:08 compute-0 ceph-mon[75140]: pgmap v3701: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 26 17:13:09 compute-0 podman[406259]: 2026-01-26 17:13:09.410806725 +0000 UTC m=+0.101624813 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:13:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 17:13:11 compute-0 ceph-mon[75140]: pgmap v3702: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 17:13:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 17:13:12 compute-0 nova_compute[239965]: 2026-01-26 17:13:12.391 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:12 compute-0 ceph-mon[75140]: pgmap v3703: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 17:13:13 compute-0 nova_compute[239965]: 2026-01-26 17:13:13.744 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 26 17:13:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:15 compute-0 ceph-mon[75140]: pgmap v3704: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 26 17:13:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:17 compute-0 nova_compute[239965]: 2026-01-26 17:13:17.394 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:17 compute-0 ceph-mon[75140]: pgmap v3705: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:18 compute-0 ceph-mon[75140]: pgmap v3706: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:18 compute-0 nova_compute[239965]: 2026-01-26 17:13:18.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.603333) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447599603382, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 663, "num_deletes": 256, "total_data_size": 792621, "memory_usage": 804712, "flush_reason": "Manual Compaction"}
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447599610638, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 781758, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75229, "largest_seqno": 75891, "table_properties": {"data_size": 778245, "index_size": 1356, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 7865, "raw_average_key_size": 18, "raw_value_size": 771201, "raw_average_value_size": 1844, "num_data_blocks": 61, "num_entries": 418, "num_filter_entries": 418, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769447549, "oldest_key_time": 1769447549, "file_creation_time": 1769447599, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 7366 microseconds, and 3309 cpu microseconds.
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.610694) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 781758 bytes OK
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.610721) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.612123) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.612142) EVENT_LOG_v1 {"time_micros": 1769447599612135, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.612169) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 789092, prev total WAL file size 789092, number of live WAL files 2.
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.612734) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323633' seq:72057594037927935, type:22 .. '6C6F676D0033353135' seq:0, type:0; will stop at (end)
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(763KB)], [179(9667KB)]
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447599612778, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10681752, "oldest_snapshot_seqno": -1}
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 9146 keys, 10578782 bytes, temperature: kUnknown
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447599685374, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 10578782, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10521865, "index_size": 32932, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22917, "raw_key_size": 241534, "raw_average_key_size": 26, "raw_value_size": 10362342, "raw_average_value_size": 1132, "num_data_blocks": 1264, "num_entries": 9146, "num_filter_entries": 9146, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769447599, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.685763) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10578782 bytes
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.687861) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 146.9 rd, 145.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 9.4 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(27.2) write-amplify(13.5) OK, records in: 9669, records dropped: 523 output_compression: NoCompression
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.687881) EVENT_LOG_v1 {"time_micros": 1769447599687872, "job": 112, "event": "compaction_finished", "compaction_time_micros": 72726, "compaction_time_cpu_micros": 31563, "output_level": 6, "num_output_files": 1, "total_output_size": 10578782, "num_input_records": 9669, "num_output_records": 9146, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447599688497, "job": 112, "event": "table_file_deletion", "file_number": 181}
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447599690820, "job": 112, "event": "table_file_deletion", "file_number": 179}
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.612632) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.691016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.691027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.691029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.691031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:13:19 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:13:19.691033) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:13:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:20 compute-0 ceph-mon[75140]: pgmap v3707: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:22 compute-0 nova_compute[239965]: 2026-01-26 17:13:22.397 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:22 compute-0 ceph-mon[75140]: pgmap v3708: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:23 compute-0 nova_compute[239965]: 2026-01-26 17:13:23.765 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:25 compute-0 ceph-mon[75140]: pgmap v3709: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:27 compute-0 ceph-mon[75140]: pgmap v3710: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:27 compute-0 nova_compute[239965]: 2026-01-26 17:13:27.400 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:28 compute-0 nova_compute[239965]: 2026-01-26 17:13:28.767 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:13:28
Jan 26 17:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr']
Jan 26 17:13:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:13:29 compute-0 ceph-mon[75140]: pgmap v3711: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:13:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:13:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:13:31 compute-0 ceph-mon[75140]: pgmap v3712: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:32 compute-0 nova_compute[239965]: 2026-01-26 17:13:32.402 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:33 compute-0 ceph-mon[75140]: pgmap v3713: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:33 compute-0 nova_compute[239965]: 2026-01-26 17:13:33.770 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:35 compute-0 ceph-mon[75140]: pgmap v3714: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:35 compute-0 nova_compute[239965]: 2026-01-26 17:13:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:36 compute-0 sshd-session[406287]: Connection reset by authenticating user root 176.120.22.13 port 31652 [preauth]
Jan 26 17:13:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:36 compute-0 podman[406289]: 2026-01-26 17:13:36.4251782 +0000 UTC m=+0.061813528 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 26 17:13:37 compute-0 ceph-mon[75140]: pgmap v3715: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:37 compute-0 nova_compute[239965]: 2026-01-26 17:13:37.433 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:37 compute-0 nova_compute[239965]: 2026-01-26 17:13:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:37 compute-0 sshd-session[406308]: Invalid user admin from 176.120.22.13 port 31668
Jan 26 17:13:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:38 compute-0 sshd-session[406308]: Connection reset by invalid user admin 176.120.22.13 port 31668 [preauth]
Jan 26 17:13:38 compute-0 nova_compute[239965]: 2026-01-26 17:13:38.771 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:39 compute-0 ceph-mon[75140]: pgmap v3716: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:39 compute-0 nova_compute[239965]: 2026-01-26 17:13:39.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:39 compute-0 nova_compute[239965]: 2026-01-26 17:13:39.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:40 compute-0 sshd-session[406310]: Connection reset by authenticating user root 176.120.22.13 port 31676 [preauth]
Jan 26 17:13:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:40 compute-0 podman[406313]: 2026-01-26 17:13:40.456791624 +0000 UTC m=+0.140312510 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, 
config_id=ovn_controller, managed_by=edpm_ansible)
Jan 26 17:13:41 compute-0 ceph-mon[75140]: pgmap v3717: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:41 compute-0 sshd-session[406312]: Invalid user system from 176.120.22.13 port 31682
Jan 26 17:13:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:42 compute-0 sshd-session[406312]: Connection reset by invalid user system 176.120.22.13 port 31682 [preauth]
Jan 26 17:13:42 compute-0 nova_compute[239965]: 2026-01-26 17:13:42.436 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:43 compute-0 ceph-mon[75140]: pgmap v3718: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:43 compute-0 sshd-session[406341]: Invalid user user from 176.120.22.13 port 47720
Jan 26 17:13:43 compute-0 nova_compute[239965]: 2026-01-26 17:13:43.775 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:44 compute-0 sshd-session[406341]: Connection reset by invalid user user 176.120.22.13 port 47720 [preauth]
Jan 26 17:13:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:44 compute-0 nova_compute[239965]: 2026-01-26 17:13:44.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:45 compute-0 ceph-mon[75140]: pgmap v3719: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:47 compute-0 nova_compute[239965]: 2026-01-26 17:13:47.439 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:47 compute-0 ceph-mon[75140]: pgmap v3720: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:48 compute-0 nova_compute[239965]: 2026-01-26 17:13:48.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:48 compute-0 nova_compute[239965]: 2026-01-26 17:13:48.776 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:48 compute-0 ceph-mon[75140]: pgmap v3721: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:13:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1049383584' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:13:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:13:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1049383584' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:13:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:13:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:13:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1049383584' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:13:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1049383584' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:13:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:50 compute-0 nova_compute[239965]: 2026-01-26 17:13:50.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:50 compute-0 nova_compute[239965]: 2026-01-26 17:13:50.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:13:50 compute-0 ceph-mon[75140]: pgmap v3722: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:51 compute-0 nova_compute[239965]: 2026-01-26 17:13:51.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:51 compute-0 nova_compute[239965]: 2026-01-26 17:13:51.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:13:51 compute-0 nova_compute[239965]: 2026-01-26 17:13:51.545 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:13:51 compute-0 nova_compute[239965]: 2026-01-26 17:13:51.545 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:13:51 compute-0 nova_compute[239965]: 2026-01-26 17:13:51.545 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:13:51 compute-0 nova_compute[239965]: 2026-01-26 17:13:51.545 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:13:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3435950103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.112 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3435950103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:13:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.263 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.264 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3587MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.265 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.265 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.331 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.331 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.356 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.480 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:52 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:13:52 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2644884333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.897 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.906 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.961 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.964 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:13:52 compute-0 nova_compute[239965]: 2026-01-26 17:13:52.964 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:13:53 compute-0 ceph-mon[75140]: pgmap v3723: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2644884333' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:13:53 compute-0 nova_compute[239965]: 2026-01-26 17:13:53.778 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:13:55 compute-0 ceph-mon[75140]: pgmap v3724: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:55 compute-0 nova_compute[239965]: 2026-01-26 17:13:55.965 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:13:55 compute-0 nova_compute[239965]: 2026-01-26 17:13:55.966 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:13:55 compute-0 nova_compute[239965]: 2026-01-26 17:13:55.966 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:13:55 compute-0 nova_compute[239965]: 2026-01-26 17:13:55.983 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:13:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:57 compute-0 ceph-mon[75140]: pgmap v3725: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:57 compute-0 nova_compute[239965]: 2026-01-26 17:13:57.483 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:58 compute-0 nova_compute[239965]: 2026-01-26 17:13:58.781 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:13:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:13:59.298 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:13:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:13:59.299 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:13:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:13:59.299 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:13:59 compute-0 ceph-mon[75140]: pgmap v3726: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:13:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:14:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:14:01 compute-0 ceph-mon[75140]: pgmap v3727: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:02 compute-0 nova_compute[239965]: 2026-01-26 17:14:02.485 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:03 compute-0 ceph-mon[75140]: pgmap v3728: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:03 compute-0 nova_compute[239965]: 2026-01-26 17:14:03.781 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:05 compute-0 ceph-mon[75140]: pgmap v3729: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:05 compute-0 sudo[406387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:14:05 compute-0 sudo[406387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:14:05 compute-0 sudo[406387]: pam_unix(sudo:session): session closed for user root
Jan 26 17:14:05 compute-0 sudo[406412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:14:05 compute-0 sudo[406412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:14:06 compute-0 sudo[406412]: pam_unix(sudo:session): session closed for user root
Jan 26 17:14:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:14:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:14:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:14:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:14:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:14:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:14:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:14:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:14:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:14:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:14:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:14:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:14:06 compute-0 sudo[406467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:14:06 compute-0 sudo[406467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:14:06 compute-0 sudo[406467]: pam_unix(sudo:session): session closed for user root
Jan 26 17:14:06 compute-0 sudo[406492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:14:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:06 compute-0 sudo[406492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:14:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:14:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:14:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:14:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:14:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:14:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:14:06 compute-0 podman[406530]: 2026-01-26 17:14:06.511530926 +0000 UTC m=+0.047189388 container create f1c367a2e810afa993208ff15045ab4b0866bfbb452b3fdfd577692a6f24e813 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:14:06 compute-0 systemd[1]: Started libpod-conmon-f1c367a2e810afa993208ff15045ab4b0866bfbb452b3fdfd577692a6f24e813.scope.
Jan 26 17:14:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:14:06 compute-0 podman[406530]: 2026-01-26 17:14:06.579417081 +0000 UTC m=+0.115075573 container init f1c367a2e810afa993208ff15045ab4b0866bfbb452b3fdfd577692a6f24e813 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 17:14:06 compute-0 podman[406530]: 2026-01-26 17:14:06.490611173 +0000 UTC m=+0.026269685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:14:06 compute-0 podman[406530]: 2026-01-26 17:14:06.59123129 +0000 UTC m=+0.126889742 container start f1c367a2e810afa993208ff15045ab4b0866bfbb452b3fdfd577692a6f24e813 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:14:06 compute-0 podman[406530]: 2026-01-26 17:14:06.595013453 +0000 UTC m=+0.130671935 container attach f1c367a2e810afa993208ff15045ab4b0866bfbb452b3fdfd577692a6f24e813 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:14:06 compute-0 competent_sammet[406547]: 167 167
Jan 26 17:14:06 compute-0 systemd[1]: libpod-f1c367a2e810afa993208ff15045ab4b0866bfbb452b3fdfd577692a6f24e813.scope: Deactivated successfully.
Jan 26 17:14:06 compute-0 podman[406530]: 2026-01-26 17:14:06.598251972 +0000 UTC m=+0.133910454 container died f1c367a2e810afa993208ff15045ab4b0866bfbb452b3fdfd577692a6f24e813 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:14:06 compute-0 podman[406544]: 2026-01-26 17:14:06.607346255 +0000 UTC m=+0.063412716 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:14:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3c08a994d948fb484870f0d8693a195d962ba708850a7321546da25ab8b2c81-merged.mount: Deactivated successfully.
Jan 26 17:14:06 compute-0 podman[406530]: 2026-01-26 17:14:06.64951461 +0000 UTC m=+0.185173072 container remove f1c367a2e810afa993208ff15045ab4b0866bfbb452b3fdfd577692a6f24e813 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 17:14:06 compute-0 systemd[1]: libpod-conmon-f1c367a2e810afa993208ff15045ab4b0866bfbb452b3fdfd577692a6f24e813.scope: Deactivated successfully.
Jan 26 17:14:06 compute-0 podman[406586]: 2026-01-26 17:14:06.814994647 +0000 UTC m=+0.041031818 container create b49a5fc136251fafa9b6de65101cdb2e6dfa1da73372c9b70ecb9650a5578dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 26 17:14:06 compute-0 systemd[1]: Started libpod-conmon-b49a5fc136251fafa9b6de65101cdb2e6dfa1da73372c9b70ecb9650a5578dea.scope.
Jan 26 17:14:06 compute-0 podman[406586]: 2026-01-26 17:14:06.798257926 +0000 UTC m=+0.024295117 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:14:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1afb21fd211c9cad9adafa69b2d11e2069ef23f7f0339e453a23b6b0c4884e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1afb21fd211c9cad9adafa69b2d11e2069ef23f7f0339e453a23b6b0c4884e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1afb21fd211c9cad9adafa69b2d11e2069ef23f7f0339e453a23b6b0c4884e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1afb21fd211c9cad9adafa69b2d11e2069ef23f7f0339e453a23b6b0c4884e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf1afb21fd211c9cad9adafa69b2d11e2069ef23f7f0339e453a23b6b0c4884e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:06 compute-0 podman[406586]: 2026-01-26 17:14:06.922524723 +0000 UTC m=+0.148561914 container init b49a5fc136251fafa9b6de65101cdb2e6dfa1da73372c9b70ecb9650a5578dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:14:06 compute-0 podman[406586]: 2026-01-26 17:14:06.93832562 +0000 UTC m=+0.164362791 container start b49a5fc136251fafa9b6de65101cdb2e6dfa1da73372c9b70ecb9650a5578dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 17:14:06 compute-0 podman[406586]: 2026-01-26 17:14:06.941686042 +0000 UTC m=+0.167723443 container attach b49a5fc136251fafa9b6de65101cdb2e6dfa1da73372c9b70ecb9650a5578dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 17:14:07 compute-0 ceph-mon[75140]: pgmap v3730: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:07 compute-0 fervent_kalam[406602]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:14:07 compute-0 fervent_kalam[406602]: --> All data devices are unavailable
Jan 26 17:14:07 compute-0 systemd[1]: libpod-b49a5fc136251fafa9b6de65101cdb2e6dfa1da73372c9b70ecb9650a5578dea.scope: Deactivated successfully.
Jan 26 17:14:07 compute-0 podman[406622]: 2026-01-26 17:14:07.459377395 +0000 UTC m=+0.026575942 container died b49a5fc136251fafa9b6de65101cdb2e6dfa1da73372c9b70ecb9650a5578dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:14:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf1afb21fd211c9cad9adafa69b2d11e2069ef23f7f0339e453a23b6b0c4884e-merged.mount: Deactivated successfully.
Jan 26 17:14:07 compute-0 nova_compute[239965]: 2026-01-26 17:14:07.487 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:07 compute-0 podman[406622]: 2026-01-26 17:14:07.502112543 +0000 UTC m=+0.069311070 container remove b49a5fc136251fafa9b6de65101cdb2e6dfa1da73372c9b70ecb9650a5578dea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 17:14:07 compute-0 systemd[1]: libpod-conmon-b49a5fc136251fafa9b6de65101cdb2e6dfa1da73372c9b70ecb9650a5578dea.scope: Deactivated successfully.
Jan 26 17:14:07 compute-0 sudo[406492]: pam_unix(sudo:session): session closed for user root
Jan 26 17:14:07 compute-0 sudo[406638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:14:07 compute-0 sudo[406638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:14:07 compute-0 sudo[406638]: pam_unix(sudo:session): session closed for user root
Jan 26 17:14:07 compute-0 sudo[406663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:14:07 compute-0 sudo[406663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:14:07 compute-0 podman[406700]: 2026-01-26 17:14:07.925610976 +0000 UTC m=+0.044987064 container create 26f81ef90dcf345ab6bfcfe4e6d08daf5bd91464ed67fe9357b00192137a4e26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 17:14:07 compute-0 systemd[1]: Started libpod-conmon-26f81ef90dcf345ab6bfcfe4e6d08daf5bd91464ed67fe9357b00192137a4e26.scope.
Jan 26 17:14:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:14:08 compute-0 podman[406700]: 2026-01-26 17:14:08.000178664 +0000 UTC m=+0.119554752 container init 26f81ef90dcf345ab6bfcfe4e6d08daf5bd91464ed67fe9357b00192137a4e26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 17:14:08 compute-0 podman[406700]: 2026-01-26 17:14:07.907040891 +0000 UTC m=+0.026416989 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:14:08 compute-0 podman[406700]: 2026-01-26 17:14:08.007444022 +0000 UTC m=+0.126820100 container start 26f81ef90dcf345ab6bfcfe4e6d08daf5bd91464ed67fe9357b00192137a4e26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 17:14:08 compute-0 podman[406700]: 2026-01-26 17:14:08.010784275 +0000 UTC m=+0.130160373 container attach 26f81ef90dcf345ab6bfcfe4e6d08daf5bd91464ed67fe9357b00192137a4e26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:14:08 compute-0 intelligent_snyder[406716]: 167 167
Jan 26 17:14:08 compute-0 systemd[1]: libpod-26f81ef90dcf345ab6bfcfe4e6d08daf5bd91464ed67fe9357b00192137a4e26.scope: Deactivated successfully.
Jan 26 17:14:08 compute-0 podman[406700]: 2026-01-26 17:14:08.015146721 +0000 UTC m=+0.134522799 container died 26f81ef90dcf345ab6bfcfe4e6d08daf5bd91464ed67fe9357b00192137a4e26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-87efadd3dbffdb14ffbea5674f4bcd49c847eaceaa9e02af0250b0312659ea22-merged.mount: Deactivated successfully.
Jan 26 17:14:08 compute-0 podman[406700]: 2026-01-26 17:14:08.057401377 +0000 UTC m=+0.176777455 container remove 26f81ef90dcf345ab6bfcfe4e6d08daf5bd91464ed67fe9357b00192137a4e26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_snyder, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:14:08 compute-0 systemd[1]: libpod-conmon-26f81ef90dcf345ab6bfcfe4e6d08daf5bd91464ed67fe9357b00192137a4e26.scope: Deactivated successfully.
Jan 26 17:14:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:08 compute-0 podman[406740]: 2026-01-26 17:14:08.24144093 +0000 UTC m=+0.049131947 container create a70ab29ffa7f7bbe214a05271b9d10398a22d1fd472453095e9632e96a428ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bell, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:14:08 compute-0 systemd[1]: Started libpod-conmon-a70ab29ffa7f7bbe214a05271b9d10398a22d1fd472453095e9632e96a428ee7.scope.
Jan 26 17:14:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6f64bd74c2b212bfaab244f012ab8f56b389dfe83ac3561d44396971ec0155/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6f64bd74c2b212bfaab244f012ab8f56b389dfe83ac3561d44396971ec0155/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6f64bd74c2b212bfaab244f012ab8f56b389dfe83ac3561d44396971ec0155/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e6f64bd74c2b212bfaab244f012ab8f56b389dfe83ac3561d44396971ec0155/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:08 compute-0 podman[406740]: 2026-01-26 17:14:08.221276345 +0000 UTC m=+0.028967382 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:14:08 compute-0 podman[406740]: 2026-01-26 17:14:08.326910135 +0000 UTC m=+0.134601172 container init a70ab29ffa7f7bbe214a05271b9d10398a22d1fd472453095e9632e96a428ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bell, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 17:14:08 compute-0 podman[406740]: 2026-01-26 17:14:08.336368376 +0000 UTC m=+0.144059393 container start a70ab29ffa7f7bbe214a05271b9d10398a22d1fd472453095e9632e96a428ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bell, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:14:08 compute-0 podman[406740]: 2026-01-26 17:14:08.340265622 +0000 UTC m=+0.147956779 container attach a70ab29ffa7f7bbe214a05271b9d10398a22d1fd472453095e9632e96a428ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Jan 26 17:14:08 compute-0 condescending_bell[406757]: {
Jan 26 17:14:08 compute-0 condescending_bell[406757]:     "0": [
Jan 26 17:14:08 compute-0 condescending_bell[406757]:         {
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "devices": [
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "/dev/loop3"
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             ],
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_name": "ceph_lv0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_size": "21470642176",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "name": "ceph_lv0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "tags": {
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.cluster_name": "ceph",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.crush_device_class": "",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.encrypted": "0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.objectstore": "bluestore",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.osd_id": "0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.type": "block",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.vdo": "0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.with_tpm": "0"
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             },
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "type": "block",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "vg_name": "ceph_vg0"
Jan 26 17:14:08 compute-0 condescending_bell[406757]:         }
Jan 26 17:14:08 compute-0 condescending_bell[406757]:     ],
Jan 26 17:14:08 compute-0 condescending_bell[406757]:     "1": [
Jan 26 17:14:08 compute-0 condescending_bell[406757]:         {
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "devices": [
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "/dev/loop4"
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             ],
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_name": "ceph_lv1",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_size": "21470642176",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "name": "ceph_lv1",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "tags": {
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.cluster_name": "ceph",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.crush_device_class": "",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.encrypted": "0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.objectstore": "bluestore",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.osd_id": "1",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.type": "block",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.vdo": "0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.with_tpm": "0"
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             },
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "type": "block",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "vg_name": "ceph_vg1"
Jan 26 17:14:08 compute-0 condescending_bell[406757]:         }
Jan 26 17:14:08 compute-0 condescending_bell[406757]:     ],
Jan 26 17:14:08 compute-0 condescending_bell[406757]:     "2": [
Jan 26 17:14:08 compute-0 condescending_bell[406757]:         {
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "devices": [
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "/dev/loop5"
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             ],
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_name": "ceph_lv2",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_size": "21470642176",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "name": "ceph_lv2",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "tags": {
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.cluster_name": "ceph",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.crush_device_class": "",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.encrypted": "0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.objectstore": "bluestore",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.osd_id": "2",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.type": "block",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.vdo": "0",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:                 "ceph.with_tpm": "0"
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             },
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "type": "block",
Jan 26 17:14:08 compute-0 condescending_bell[406757]:             "vg_name": "ceph_vg2"
Jan 26 17:14:08 compute-0 condescending_bell[406757]:         }
Jan 26 17:14:08 compute-0 condescending_bell[406757]:     ]
Jan 26 17:14:08 compute-0 condescending_bell[406757]: }
Jan 26 17:14:08 compute-0 systemd[1]: libpod-a70ab29ffa7f7bbe214a05271b9d10398a22d1fd472453095e9632e96a428ee7.scope: Deactivated successfully.
Jan 26 17:14:08 compute-0 podman[406740]: 2026-01-26 17:14:08.636167607 +0000 UTC m=+0.443858644 container died a70ab29ffa7f7bbe214a05271b9d10398a22d1fd472453095e9632e96a428ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:14:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e6f64bd74c2b212bfaab244f012ab8f56b389dfe83ac3561d44396971ec0155-merged.mount: Deactivated successfully.
Jan 26 17:14:08 compute-0 podman[406740]: 2026-01-26 17:14:08.685371864 +0000 UTC m=+0.493062881 container remove a70ab29ffa7f7bbe214a05271b9d10398a22d1fd472453095e9632e96a428ee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_bell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:14:08 compute-0 systemd[1]: libpod-conmon-a70ab29ffa7f7bbe214a05271b9d10398a22d1fd472453095e9632e96a428ee7.scope: Deactivated successfully.
Jan 26 17:14:08 compute-0 sudo[406663]: pam_unix(sudo:session): session closed for user root
Jan 26 17:14:08 compute-0 nova_compute[239965]: 2026-01-26 17:14:08.782 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:08 compute-0 sudo[406778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:14:08 compute-0 sudo[406778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:14:08 compute-0 sudo[406778]: pam_unix(sudo:session): session closed for user root
Jan 26 17:14:08 compute-0 sudo[406803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:14:08 compute-0 sudo[406803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:14:09 compute-0 podman[406841]: 2026-01-26 17:14:09.167552825 +0000 UTC m=+0.037925170 container create 4caef8924ce64fdb8e1ebea24bbb613999aedc94649ca772cb61283cb21cfd0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_khorana, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:14:09 compute-0 systemd[1]: Started libpod-conmon-4caef8924ce64fdb8e1ebea24bbb613999aedc94649ca772cb61283cb21cfd0c.scope.
Jan 26 17:14:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:14:09 compute-0 podman[406841]: 2026-01-26 17:14:09.151860051 +0000 UTC m=+0.022232416 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:14:09 compute-0 podman[406841]: 2026-01-26 17:14:09.250612532 +0000 UTC m=+0.120984907 container init 4caef8924ce64fdb8e1ebea24bbb613999aedc94649ca772cb61283cb21cfd0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_khorana, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:14:09 compute-0 podman[406841]: 2026-01-26 17:14:09.257296836 +0000 UTC m=+0.127669181 container start 4caef8924ce64fdb8e1ebea24bbb613999aedc94649ca772cb61283cb21cfd0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_khorana, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True)
Jan 26 17:14:09 compute-0 podman[406841]: 2026-01-26 17:14:09.261688923 +0000 UTC m=+0.132061268 container attach 4caef8924ce64fdb8e1ebea24bbb613999aedc94649ca772cb61283cb21cfd0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_khorana, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Jan 26 17:14:09 compute-0 crazy_khorana[406857]: 167 167
Jan 26 17:14:09 compute-0 systemd[1]: libpod-4caef8924ce64fdb8e1ebea24bbb613999aedc94649ca772cb61283cb21cfd0c.scope: Deactivated successfully.
Jan 26 17:14:09 compute-0 podman[406841]: 2026-01-26 17:14:09.267702741 +0000 UTC m=+0.138075086 container died 4caef8924ce64fdb8e1ebea24bbb613999aedc94649ca772cb61283cb21cfd0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_khorana, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:14:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-29c4011203d0497668eb248b8b05a0a469e8e6600053ba3f47c4afb7bb021ce6-merged.mount: Deactivated successfully.
Jan 26 17:14:09 compute-0 podman[406841]: 2026-01-26 17:14:09.328064161 +0000 UTC m=+0.198436506 container remove 4caef8924ce64fdb8e1ebea24bbb613999aedc94649ca772cb61283cb21cfd0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_khorana, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:14:09 compute-0 systemd[1]: libpod-conmon-4caef8924ce64fdb8e1ebea24bbb613999aedc94649ca772cb61283cb21cfd0c.scope: Deactivated successfully.
Jan 26 17:14:09 compute-0 ceph-mon[75140]: pgmap v3731: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:09 compute-0 podman[406883]: 2026-01-26 17:14:09.519396391 +0000 UTC m=+0.061830336 container create 1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:14:09 compute-0 systemd[1]: Started libpod-conmon-1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487.scope.
Jan 26 17:14:09 compute-0 podman[406883]: 2026-01-26 17:14:09.485163283 +0000 UTC m=+0.027597308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:14:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc7ca9c8a6873f1464980d336aa4ad0a600ef0342fd651713b6c80c59ada77b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc7ca9c8a6873f1464980d336aa4ad0a600ef0342fd651713b6c80c59ada77b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc7ca9c8a6873f1464980d336aa4ad0a600ef0342fd651713b6c80c59ada77b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edc7ca9c8a6873f1464980d336aa4ad0a600ef0342fd651713b6c80c59ada77b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:14:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:09 compute-0 podman[406883]: 2026-01-26 17:14:09.620915511 +0000 UTC m=+0.163349466 container init 1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:14:09 compute-0 podman[406883]: 2026-01-26 17:14:09.62902916 +0000 UTC m=+0.171463095 container start 1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:14:09 compute-0 podman[406883]: 2026-01-26 17:14:09.632138386 +0000 UTC m=+0.174572321 container attach 1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:14:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:10 compute-0 lvm[406979]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:14:10 compute-0 lvm[406980]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:14:10 compute-0 lvm[406979]: VG ceph_vg0 finished
Jan 26 17:14:10 compute-0 lvm[406980]: VG ceph_vg1 finished
Jan 26 17:14:10 compute-0 lvm[406982]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:14:10 compute-0 lvm[406982]: VG ceph_vg2 finished
Jan 26 17:14:10 compute-0 tender_kilby[406900]: {}
Jan 26 17:14:10 compute-0 systemd[1]: libpod-1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487.scope: Deactivated successfully.
Jan 26 17:14:10 compute-0 systemd[1]: libpod-1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487.scope: Consumed 1.586s CPU time.
Jan 26 17:14:10 compute-0 podman[406883]: 2026-01-26 17:14:10.608543045 +0000 UTC m=+1.150976980 container died 1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:14:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-edc7ca9c8a6873f1464980d336aa4ad0a600ef0342fd651713b6c80c59ada77b-merged.mount: Deactivated successfully.
Jan 26 17:14:10 compute-0 podman[406983]: 2026-01-26 17:14:10.717037045 +0000 UTC m=+0.182653849 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:14:10 compute-0 podman[406883]: 2026-01-26 17:14:10.723538064 +0000 UTC m=+1.265971999 container remove 1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_kilby, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:14:10 compute-0 systemd[1]: libpod-conmon-1cdec37876772c7bdebef46c9a791b5d2d1e705f6c595f3090b0b45886f86487.scope: Deactivated successfully.
Jan 26 17:14:10 compute-0 sudo[406803]: pam_unix(sudo:session): session closed for user root
Jan 26 17:14:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:14:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:14:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:14:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:14:10 compute-0 sudo[407024]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:14:10 compute-0 sudo[407024]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:14:10 compute-0 sudo[407024]: pam_unix(sudo:session): session closed for user root
Jan 26 17:14:11 compute-0 ceph-mon[75140]: pgmap v3732: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:14:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:14:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:12 compute-0 nova_compute[239965]: 2026-01-26 17:14:12.489 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:13 compute-0 ceph-mon[75140]: pgmap v3733: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:13 compute-0 nova_compute[239965]: 2026-01-26 17:14:13.783 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:15 compute-0 ceph-mon[75140]: pgmap v3734: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:17 compute-0 ceph-mon[75140]: pgmap v3735: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:17 compute-0 nova_compute[239965]: 2026-01-26 17:14:17.491 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:18 compute-0 nova_compute[239965]: 2026-01-26 17:14:18.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:19 compute-0 ceph-mon[75140]: pgmap v3736: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:21 compute-0 ceph-mon[75140]: pgmap v3737: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:22 compute-0 nova_compute[239965]: 2026-01-26 17:14:22.493 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:23 compute-0 ceph-mon[75140]: pgmap v3738: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:23 compute-0 nova_compute[239965]: 2026-01-26 17:14:23.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:25 compute-0 ceph-mon[75140]: pgmap v3739: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:27 compute-0 nova_compute[239965]: 2026-01-26 17:14:27.496 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:27 compute-0 ceph-mon[75140]: pgmap v3740: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:28 compute-0 nova_compute[239965]: 2026-01-26 17:14:28.790 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:14:28
Jan 26 17:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', '.rgw.root', 'volumes', 'default.rgw.meta', 'vms', 'backups']
Jan 26 17:14:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:14:29 compute-0 ceph-mon[75140]: pgmap v3741: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:14:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:14:30 compute-0 ceph-mon[75140]: pgmap v3742: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:14:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:14:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:32 compute-0 nova_compute[239965]: 2026-01-26 17:14:32.500 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:33 compute-0 ceph-mon[75140]: pgmap v3743: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:33 compute-0 nova_compute[239965]: 2026-01-26 17:14:33.794 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:35 compute-0 ceph-mon[75140]: pgmap v3744: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:35 compute-0 nova_compute[239965]: 2026-01-26 17:14:35.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:37 compute-0 ceph-mon[75140]: pgmap v3745: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:37 compute-0 podman[407049]: 2026-01-26 17:14:37.398240365 +0000 UTC m=+0.073467983 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 17:14:37 compute-0 nova_compute[239965]: 2026-01-26 17:14:37.502 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:37 compute-0 nova_compute[239965]: 2026-01-26 17:14:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:38 compute-0 nova_compute[239965]: 2026-01-26 17:14:38.836 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:39 compute-0 ceph-mon[75140]: pgmap v3746: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:39 compute-0 nova_compute[239965]: 2026-01-26 17:14:39.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:40 compute-0 nova_compute[239965]: 2026-01-26 17:14:40.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:40 compute-0 ceph-mon[75140]: pgmap v3747: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:41 compute-0 podman[407068]: 2026-01-26 17:14:41.405958063 +0000 UTC m=+0.089008222 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 26 17:14:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:42 compute-0 sshd-session[407094]: Invalid user sol from 45.148.10.240 port 51082
Jan 26 17:14:42 compute-0 nova_compute[239965]: 2026-01-26 17:14:42.504 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:42 compute-0 sshd-session[407094]: Connection closed by invalid user sol 45.148.10.240 port 51082 [preauth]
Jan 26 17:14:43 compute-0 ceph-mon[75140]: pgmap v3748: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:43 compute-0 nova_compute[239965]: 2026-01-26 17:14:43.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:45 compute-0 ceph-mon[75140]: pgmap v3749: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:46 compute-0 nova_compute[239965]: 2026-01-26 17:14:46.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:46 compute-0 ceph-mon[75140]: pgmap v3750: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:47 compute-0 nova_compute[239965]: 2026-01-26 17:14:47.506 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:48 compute-0 nova_compute[239965]: 2026-01-26 17:14:48.842 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:48 compute-0 ceph-mon[75140]: pgmap v3751: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:14:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3860844057' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:14:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:14:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3860844057' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:14:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:14:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:14:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3860844057' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:14:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3860844057' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:14:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:50 compute-0 nova_compute[239965]: 2026-01-26 17:14:50.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:50 compute-0 ceph-mon[75140]: pgmap v3752: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:52 compute-0 nova_compute[239965]: 2026-01-26 17:14:52.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:52 compute-0 nova_compute[239965]: 2026-01-26 17:14:52.509 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:14:52 compute-0 nova_compute[239965]: 2026-01-26 17:14:52.510 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:53 compute-0 ceph-mon[75140]: pgmap v3753: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:53 compute-0 nova_compute[239965]: 2026-01-26 17:14:53.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:53 compute-0 nova_compute[239965]: 2026-01-26 17:14:53.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:14:53 compute-0 nova_compute[239965]: 2026-01-26 17:14:53.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:14:53 compute-0 nova_compute[239965]: 2026-01-26 17:14:53.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:14:53 compute-0 nova_compute[239965]: 2026-01-26 17:14:53.539 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:14:53 compute-0 nova_compute[239965]: 2026-01-26 17:14:53.540 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:53 compute-0 nova_compute[239965]: 2026-01-26 17:14:53.842 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:14:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1338003097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:14:54 compute-0 nova_compute[239965]: 2026-01-26 17:14:54.128 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:54 compute-0 nova_compute[239965]: 2026-01-26 17:14:54.309 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:14:54 compute-0 nova_compute[239965]: 2026-01-26 17:14:54.311 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3570MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:14:54 compute-0 nova_compute[239965]: 2026-01-26 17:14:54.311 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:14:54 compute-0 nova_compute[239965]: 2026-01-26 17:14:54.311 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:14:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1338003097' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:14:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:14:54 compute-0 nova_compute[239965]: 2026-01-26 17:14:54.891 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:14:54 compute-0 nova_compute[239965]: 2026-01-26 17:14:54.892 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.009 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.108 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.108 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.124 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.151 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.175 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:14:55 compute-0 ceph-mon[75140]: pgmap v3754: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:14:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971753593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.793 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.802 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.915 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.917 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:14:55 compute-0 nova_compute[239965]: 2026-01-26 17:14:55.918 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:14:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3971753593' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:14:57 compute-0 nova_compute[239965]: 2026-01-26 17:14:57.511 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:57 compute-0 nova_compute[239965]: 2026-01-26 17:14:57.920 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:57 compute-0 nova_compute[239965]: 2026-01-26 17:14:57.920 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:14:57 compute-0 nova_compute[239965]: 2026-01-26 17:14:57.920 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:14:58 compute-0 ceph-mon[75140]: pgmap v3755: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:58 compute-0 nova_compute[239965]: 2026-01-26 17:14:58.348 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:14:58 compute-0 nova_compute[239965]: 2026-01-26 17:14:58.845 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:14:58 compute-0 nova_compute[239965]: 2026-01-26 17:14:58.930 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:14:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:14:59.299 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:14:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:14:59.300 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:14:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:14:59.300 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:14:59 compute-0 ceph-mon[75140]: pgmap v3756: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:14:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:15:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:15:01 compute-0 ceph-mon[75140]: pgmap v3757: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:02 compute-0 nova_compute[239965]: 2026-01-26 17:15:02.614 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:03 compute-0 ceph-mon[75140]: pgmap v3758: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:03 compute-0 nova_compute[239965]: 2026-01-26 17:15:03.847 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:05 compute-0 ceph-mon[75140]: pgmap v3759: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:07 compute-0 ceph-mon[75140]: pgmap v3760: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:07 compute-0 nova_compute[239965]: 2026-01-26 17:15:07.665 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:08 compute-0 podman[407140]: 2026-01-26 17:15:08.382942563 +0000 UTC m=+0.056701720 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:15:08 compute-0 nova_compute[239965]: 2026-01-26 17:15:08.896 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:09 compute-0 ceph-mon[75140]: pgmap v3761: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:10 compute-0 ceph-mon[75140]: pgmap v3762: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:10 compute-0 sudo[407160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:15:10 compute-0 sudo[407160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:15:10 compute-0 sudo[407160]: pam_unix(sudo:session): session closed for user root
Jan 26 17:15:11 compute-0 sudo[407185]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:15:11 compute-0 sudo[407185]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:15:11 compute-0 sudo[407185]: pam_unix(sudo:session): session closed for user root
Jan 26 17:15:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:15:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:15:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:15:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:15:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:15:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:15:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:15:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:15:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:15:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:15:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:15:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:15:11 compute-0 sudo[407240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:15:11 compute-0 sudo[407240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:15:11 compute-0 sudo[407240]: pam_unix(sudo:session): session closed for user root
Jan 26 17:15:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:15:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:15:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:15:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:15:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:15:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:15:11 compute-0 sudo[407266]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:15:11 compute-0 sudo[407266]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:15:11 compute-0 podman[407264]: 2026-01-26 17:15:11.874807976 +0000 UTC m=+0.127451676 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 17:15:12 compute-0 podman[407325]: 2026-01-26 17:15:12.145544655 +0000 UTC m=+0.052004036 container create 8b9724f9ecbc1a0df4e2ee19cc70e20688088d7b0a1ac2b2a9b1536aeb16d4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_varahamihira, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 17:15:12 compute-0 systemd[1]: Started libpod-conmon-8b9724f9ecbc1a0df4e2ee19cc70e20688088d7b0a1ac2b2a9b1536aeb16d4d5.scope.
Jan 26 17:15:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:15:12 compute-0 podman[407325]: 2026-01-26 17:15:12.123510114 +0000 UTC m=+0.029969525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:15:12 compute-0 podman[407325]: 2026-01-26 17:15:12.233590393 +0000 UTC m=+0.140049874 container init 8b9724f9ecbc1a0df4e2ee19cc70e20688088d7b0a1ac2b2a9b1536aeb16d4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 17:15:12 compute-0 podman[407325]: 2026-01-26 17:15:12.242638235 +0000 UTC m=+0.149097666 container start 8b9724f9ecbc1a0df4e2ee19cc70e20688088d7b0a1ac2b2a9b1536aeb16d4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:15:12 compute-0 podman[407325]: 2026-01-26 17:15:12.246599592 +0000 UTC m=+0.153059183 container attach 8b9724f9ecbc1a0df4e2ee19cc70e20688088d7b0a1ac2b2a9b1536aeb16d4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:15:12 compute-0 gracious_varahamihira[407341]: 167 167
Jan 26 17:15:12 compute-0 systemd[1]: libpod-8b9724f9ecbc1a0df4e2ee19cc70e20688088d7b0a1ac2b2a9b1536aeb16d4d5.scope: Deactivated successfully.
Jan 26 17:15:12 compute-0 podman[407325]: 2026-01-26 17:15:12.251495041 +0000 UTC m=+0.157954432 container died 8b9724f9ecbc1a0df4e2ee19cc70e20688088d7b0a1ac2b2a9b1536aeb16d4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_varahamihira, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:15:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-88f88674114ea5cbfb28a6a8b24bb8370edcc08841eba0090b8b475649f8f659-merged.mount: Deactivated successfully.
Jan 26 17:15:12 compute-0 podman[407325]: 2026-01-26 17:15:12.304617734 +0000 UTC m=+0.211077125 container remove 8b9724f9ecbc1a0df4e2ee19cc70e20688088d7b0a1ac2b2a9b1536aeb16d4d5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_varahamihira, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:15:12 compute-0 systemd[1]: libpod-conmon-8b9724f9ecbc1a0df4e2ee19cc70e20688088d7b0a1ac2b2a9b1536aeb16d4d5.scope: Deactivated successfully.
Jan 26 17:15:12 compute-0 podman[407364]: 2026-01-26 17:15:12.497047792 +0000 UTC m=+0.052130708 container create 202bf62644cc126089617cc8b43156048495f2b4a44cf21a45b911f13191322d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kepler, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 17:15:12 compute-0 systemd[1]: Started libpod-conmon-202bf62644cc126089617cc8b43156048495f2b4a44cf21a45b911f13191322d.scope.
Jan 26 17:15:12 compute-0 podman[407364]: 2026-01-26 17:15:12.476687103 +0000 UTC m=+0.031770089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:15:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e242b1c05b1b33b25881ffab38dae0fdd93e03b3812f1c9f488016b5c13043/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e242b1c05b1b33b25881ffab38dae0fdd93e03b3812f1c9f488016b5c13043/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e242b1c05b1b33b25881ffab38dae0fdd93e03b3812f1c9f488016b5c13043/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e242b1c05b1b33b25881ffab38dae0fdd93e03b3812f1c9f488016b5c13043/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49e242b1c05b1b33b25881ffab38dae0fdd93e03b3812f1c9f488016b5c13043/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:12 compute-0 podman[407364]: 2026-01-26 17:15:12.602732133 +0000 UTC m=+0.157815089 container init 202bf62644cc126089617cc8b43156048495f2b4a44cf21a45b911f13191322d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kepler, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 17:15:12 compute-0 podman[407364]: 2026-01-26 17:15:12.610728609 +0000 UTC m=+0.165811515 container start 202bf62644cc126089617cc8b43156048495f2b4a44cf21a45b911f13191322d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kepler, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 17:15:12 compute-0 podman[407364]: 2026-01-26 17:15:12.614650436 +0000 UTC m=+0.169733392 container attach 202bf62644cc126089617cc8b43156048495f2b4a44cf21a45b911f13191322d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kepler, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:15:12 compute-0 nova_compute[239965]: 2026-01-26 17:15:12.668 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:12 compute-0 ceph-mon[75140]: pgmap v3763: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:13 compute-0 confident_kepler[407380]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:15:13 compute-0 confident_kepler[407380]: --> All data devices are unavailable
Jan 26 17:15:13 compute-0 systemd[1]: libpod-202bf62644cc126089617cc8b43156048495f2b4a44cf21a45b911f13191322d.scope: Deactivated successfully.
Jan 26 17:15:13 compute-0 podman[407364]: 2026-01-26 17:15:13.122954157 +0000 UTC m=+0.678037153 container died 202bf62644cc126089617cc8b43156048495f2b4a44cf21a45b911f13191322d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-49e242b1c05b1b33b25881ffab38dae0fdd93e03b3812f1c9f488016b5c13043-merged.mount: Deactivated successfully.
Jan 26 17:15:13 compute-0 podman[407364]: 2026-01-26 17:15:13.170666997 +0000 UTC m=+0.725749893 container remove 202bf62644cc126089617cc8b43156048495f2b4a44cf21a45b911f13191322d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:15:13 compute-0 systemd[1]: libpod-conmon-202bf62644cc126089617cc8b43156048495f2b4a44cf21a45b911f13191322d.scope: Deactivated successfully.
Jan 26 17:15:13 compute-0 sudo[407266]: pam_unix(sudo:session): session closed for user root
Jan 26 17:15:13 compute-0 sudo[407410]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:15:13 compute-0 sudo[407410]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:15:13 compute-0 sudo[407410]: pam_unix(sudo:session): session closed for user root
Jan 26 17:15:13 compute-0 sudo[407435]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:15:13 compute-0 sudo[407435]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:15:13 compute-0 podman[407472]: 2026-01-26 17:15:13.666252537 +0000 UTC m=+0.043512547 container create 08e9ad90bf40e764b153b5c4ee78a21c0dd017d2919f532daa080789a56a016e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lederberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 17:15:13 compute-0 systemd[1]: Started libpod-conmon-08e9ad90bf40e764b153b5c4ee78a21c0dd017d2919f532daa080789a56a016e.scope.
Jan 26 17:15:13 compute-0 podman[407472]: 2026-01-26 17:15:13.647809635 +0000 UTC m=+0.025069625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:15:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:15:13 compute-0 podman[407472]: 2026-01-26 17:15:13.773722572 +0000 UTC m=+0.150982642 container init 08e9ad90bf40e764b153b5c4ee78a21c0dd017d2919f532daa080789a56a016e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:15:13 compute-0 podman[407472]: 2026-01-26 17:15:13.782134349 +0000 UTC m=+0.159394339 container start 08e9ad90bf40e764b153b5c4ee78a21c0dd017d2919f532daa080789a56a016e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lederberg, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 17:15:13 compute-0 podman[407472]: 2026-01-26 17:15:13.786066664 +0000 UTC m=+0.163326824 container attach 08e9ad90bf40e764b153b5c4ee78a21c0dd017d2919f532daa080789a56a016e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:15:13 compute-0 kind_lederberg[407488]: 167 167
Jan 26 17:15:13 compute-0 systemd[1]: libpod-08e9ad90bf40e764b153b5c4ee78a21c0dd017d2919f532daa080789a56a016e.scope: Deactivated successfully.
Jan 26 17:15:13 compute-0 podman[407472]: 2026-01-26 17:15:13.78911236 +0000 UTC m=+0.166372380 container died 08e9ad90bf40e764b153b5c4ee78a21c0dd017d2919f532daa080789a56a016e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lederberg, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:15:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-08763f70f0740e42fcddbea895f010d0a2bf743a23258808b29d43f5f7326142-merged.mount: Deactivated successfully.
Jan 26 17:15:13 compute-0 podman[407472]: 2026-01-26 17:15:13.830064814 +0000 UTC m=+0.207324804 container remove 08e9ad90bf40e764b153b5c4ee78a21c0dd017d2919f532daa080789a56a016e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lederberg, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 17:15:13 compute-0 systemd[1]: libpod-conmon-08e9ad90bf40e764b153b5c4ee78a21c0dd017d2919f532daa080789a56a016e.scope: Deactivated successfully.
Jan 26 17:15:13 compute-0 nova_compute[239965]: 2026-01-26 17:15:13.897 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:14 compute-0 podman[407513]: 2026-01-26 17:15:14.026422377 +0000 UTC m=+0.056195198 container create 82d24b1f73b490a975f180221c1b679cb1a6ea48ea8951c0b81d5a9d0f0c69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 17:15:14 compute-0 systemd[1]: Started libpod-conmon-82d24b1f73b490a975f180221c1b679cb1a6ea48ea8951c0b81d5a9d0f0c69da.scope.
Jan 26 17:15:14 compute-0 podman[407513]: 2026-01-26 17:15:14.003446535 +0000 UTC m=+0.033219376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:15:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf74aaad7ccccebd9a31ba96d4bd0351766cd3e0b186fb5425eeb0542b5e9a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf74aaad7ccccebd9a31ba96d4bd0351766cd3e0b186fb5425eeb0542b5e9a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf74aaad7ccccebd9a31ba96d4bd0351766cd3e0b186fb5425eeb0542b5e9a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cf74aaad7ccccebd9a31ba96d4bd0351766cd3e0b186fb5425eeb0542b5e9a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:14 compute-0 podman[407513]: 2026-01-26 17:15:14.130153791 +0000 UTC m=+0.159926662 container init 82d24b1f73b490a975f180221c1b679cb1a6ea48ea8951c0b81d5a9d0f0c69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ritchie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:15:14 compute-0 podman[407513]: 2026-01-26 17:15:14.139122561 +0000 UTC m=+0.168895382 container start 82d24b1f73b490a975f180221c1b679cb1a6ea48ea8951c0b81d5a9d0f0c69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 17:15:14 compute-0 podman[407513]: 2026-01-26 17:15:14.144643547 +0000 UTC m=+0.174416418 container attach 82d24b1f73b490a975f180221c1b679cb1a6ea48ea8951c0b81d5a9d0f0c69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:15:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]: {
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:     "0": [
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:         {
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "devices": [
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "/dev/loop3"
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             ],
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_name": "ceph_lv0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_size": "21470642176",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "name": "ceph_lv0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "tags": {
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.cluster_name": "ceph",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.crush_device_class": "",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.encrypted": "0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.objectstore": "bluestore",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.osd_id": "0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.type": "block",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.vdo": "0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.with_tpm": "0"
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             },
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "type": "block",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "vg_name": "ceph_vg0"
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:         }
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:     ],
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:     "1": [
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:         {
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "devices": [
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "/dev/loop4"
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             ],
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_name": "ceph_lv1",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_size": "21470642176",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "name": "ceph_lv1",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "tags": {
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.cluster_name": "ceph",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.crush_device_class": "",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.encrypted": "0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.objectstore": "bluestore",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.osd_id": "1",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.type": "block",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.vdo": "0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.with_tpm": "0"
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             },
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "type": "block",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "vg_name": "ceph_vg1"
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:         }
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:     ],
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:     "2": [
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:         {
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "devices": [
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "/dev/loop5"
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             ],
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_name": "ceph_lv2",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_size": "21470642176",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "name": "ceph_lv2",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "tags": {
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.cluster_name": "ceph",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.crush_device_class": "",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.encrypted": "0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.objectstore": "bluestore",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.osd_id": "2",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.type": "block",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.vdo": "0",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:                 "ceph.with_tpm": "0"
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             },
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "type": "block",
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:             "vg_name": "ceph_vg2"
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:         }
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]:     ]
Jan 26 17:15:14 compute-0 suspicious_ritchie[407530]: }
Jan 26 17:15:14 compute-0 systemd[1]: libpod-82d24b1f73b490a975f180221c1b679cb1a6ea48ea8951c0b81d5a9d0f0c69da.scope: Deactivated successfully.
Jan 26 17:15:14 compute-0 podman[407513]: 2026-01-26 17:15:14.531014849 +0000 UTC m=+0.560787680 container died 82d24b1f73b490a975f180221c1b679cb1a6ea48ea8951c0b81d5a9d0f0c69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:15:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cf74aaad7ccccebd9a31ba96d4bd0351766cd3e0b186fb5425eeb0542b5e9a7-merged.mount: Deactivated successfully.
Jan 26 17:15:14 compute-0 podman[407513]: 2026-01-26 17:15:14.588187581 +0000 UTC m=+0.617960402 container remove 82d24b1f73b490a975f180221c1b679cb1a6ea48ea8951c0b81d5a9d0f0c69da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 17:15:14 compute-0 systemd[1]: libpod-conmon-82d24b1f73b490a975f180221c1b679cb1a6ea48ea8951c0b81d5a9d0f0c69da.scope: Deactivated successfully.
Jan 26 17:15:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:14 compute-0 sudo[407435]: pam_unix(sudo:session): session closed for user root
Jan 26 17:15:14 compute-0 sudo[407550]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:15:14 compute-0 sudo[407550]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:15:14 compute-0 sudo[407550]: pam_unix(sudo:session): session closed for user root
Jan 26 17:15:14 compute-0 sudo[407575]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:15:14 compute-0 sudo[407575]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:15:15 compute-0 podman[407613]: 2026-01-26 17:15:15.147548895 +0000 UTC m=+0.048399998 container create e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 17:15:15 compute-0 systemd[1]: Started libpod-conmon-e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a.scope.
Jan 26 17:15:15 compute-0 podman[407613]: 2026-01-26 17:15:15.12774863 +0000 UTC m=+0.028599753 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:15:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:15:15 compute-0 podman[407613]: 2026-01-26 17:15:15.245716382 +0000 UTC m=+0.146567505 container init e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 17:15:15 compute-0 podman[407613]: 2026-01-26 17:15:15.25377522 +0000 UTC m=+0.154626323 container start e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:15:15 compute-0 podman[407613]: 2026-01-26 17:15:15.257601693 +0000 UTC m=+0.158452796 container attach e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:15:15 compute-0 systemd[1]: libpod-e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a.scope: Deactivated successfully.
Jan 26 17:15:15 compute-0 cool_ptolemy[407629]: 167 167
Jan 26 17:15:15 compute-0 conmon[407629]: conmon e463be9cdc84fe81fada <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a.scope/container/memory.events
Jan 26 17:15:15 compute-0 podman[407613]: 2026-01-26 17:15:15.261383866 +0000 UTC m=+0.162234999 container died e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ptolemy, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:15:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-f44d91ceb9e82b0a2931ffa682456cdac5311da25c3ef9e58ac998fff8fdf417-merged.mount: Deactivated successfully.
Jan 26 17:15:15 compute-0 podman[407613]: 2026-01-26 17:15:15.303393696 +0000 UTC m=+0.204244799 container remove e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ptolemy, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:15:15 compute-0 systemd[1]: libpod-conmon-e463be9cdc84fe81fada7510e298f0de4e2a0407cfcf56915a51cd9f79c12c1a.scope: Deactivated successfully.
Jan 26 17:15:15 compute-0 ceph-mon[75140]: pgmap v3764: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:15 compute-0 podman[407653]: 2026-01-26 17:15:15.47769608 +0000 UTC m=+0.044938423 container create 86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:15:15 compute-0 systemd[1]: Started libpod-conmon-86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3.scope.
Jan 26 17:15:15 compute-0 podman[407653]: 2026-01-26 17:15:15.458315125 +0000 UTC m=+0.025557258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:15:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/835ba5a66bc4a300f3388f000922e9b6374afc5bbaf8e4e9ee883511cc688184/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/835ba5a66bc4a300f3388f000922e9b6374afc5bbaf8e4e9ee883511cc688184/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/835ba5a66bc4a300f3388f000922e9b6374afc5bbaf8e4e9ee883511cc688184/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/835ba5a66bc4a300f3388f000922e9b6374afc5bbaf8e4e9ee883511cc688184/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:15:15 compute-0 podman[407653]: 2026-01-26 17:15:15.580575532 +0000 UTC m=+0.147817665 container init 86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:15:15 compute-0 podman[407653]: 2026-01-26 17:15:15.587285167 +0000 UTC m=+0.154527270 container start 86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:15:15 compute-0 podman[407653]: 2026-01-26 17:15:15.590833563 +0000 UTC m=+0.158075696 container attach 86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:15:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:16 compute-0 lvm[407751]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:15:16 compute-0 lvm[407749]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:15:16 compute-0 lvm[407751]: VG ceph_vg2 finished
Jan 26 17:15:16 compute-0 lvm[407749]: VG ceph_vg0 finished
Jan 26 17:15:16 compute-0 lvm[407750]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:15:16 compute-0 lvm[407750]: VG ceph_vg1 finished
Jan 26 17:15:16 compute-0 confident_shirley[407670]: {}
Jan 26 17:15:16 compute-0 systemd[1]: libpod-86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3.scope: Deactivated successfully.
Jan 26 17:15:16 compute-0 systemd[1]: libpod-86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3.scope: Consumed 1.455s CPU time.
Jan 26 17:15:16 compute-0 podman[407653]: 2026-01-26 17:15:16.449509346 +0000 UTC m=+1.016751479 container died 86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:15:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-835ba5a66bc4a300f3388f000922e9b6374afc5bbaf8e4e9ee883511cc688184-merged.mount: Deactivated successfully.
Jan 26 17:15:16 compute-0 podman[407653]: 2026-01-26 17:15:16.502117205 +0000 UTC m=+1.069359318 container remove 86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_shirley, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 17:15:16 compute-0 systemd[1]: libpod-conmon-86f5c7c642e748a624989a3381dfddff8a7e7b032e7cbfedb7e99222e51d0cf3.scope: Deactivated successfully.
Jan 26 17:15:16 compute-0 sudo[407575]: pam_unix(sudo:session): session closed for user root
Jan 26 17:15:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:15:16 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:15:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:15:16 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:15:16 compute-0 sudo[407766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:15:16 compute-0 sudo[407766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:15:16 compute-0 sudo[407766]: pam_unix(sudo:session): session closed for user root
Jan 26 17:15:17 compute-0 ceph-mon[75140]: pgmap v3765: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:15:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:15:17 compute-0 nova_compute[239965]: 2026-01-26 17:15:17.671 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:18 compute-0 nova_compute[239965]: 2026-01-26 17:15:18.900 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:19 compute-0 ceph-mon[75140]: pgmap v3766: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:21 compute-0 ceph-mon[75140]: pgmap v3767: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:22 compute-0 ceph-mon[75140]: pgmap v3768: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:22 compute-0 nova_compute[239965]: 2026-01-26 17:15:22.674 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:23 compute-0 nova_compute[239965]: 2026-01-26 17:15:23.902 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:25 compute-0 ceph-mon[75140]: pgmap v3769: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:27 compute-0 ceph-mon[75140]: pgmap v3770: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:27 compute-0 nova_compute[239965]: 2026-01-26 17:15:27.675 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:15:28
Jan 26 17:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', '.mgr', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups']
Jan 26 17:15:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:15:28 compute-0 nova_compute[239965]: 2026-01-26 17:15:28.903 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:29 compute-0 ceph-mon[75140]: pgmap v3771: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:29 compute-0 sshd-session[407791]: Invalid user ubuntu from 103.42.57.146 port 59858
Jan 26 17:15:30 compute-0 sshd-session[407791]: Received disconnect from 103.42.57.146 port 59858:11:  [preauth]
Jan 26 17:15:30 compute-0 sshd-session[407791]: Disconnected from invalid user ubuntu 103.42.57.146 port 59858 [preauth]
Jan 26 17:15:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:15:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:15:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:15:31 compute-0 ceph-mon[75140]: pgmap v3772: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:32 compute-0 nova_compute[239965]: 2026-01-26 17:15:32.678 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:33 compute-0 ceph-mon[75140]: pgmap v3773: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:33 compute-0 nova_compute[239965]: 2026-01-26 17:15:33.903 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:35 compute-0 ceph-mon[75140]: pgmap v3774: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:37 compute-0 nova_compute[239965]: 2026-01-26 17:15:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:37 compute-0 nova_compute[239965]: 2026-01-26 17:15:37.681 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:37 compute-0 ceph-mon[75140]: pgmap v3775: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:38 compute-0 ceph-mon[75140]: pgmap v3776: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:38 compute-0 nova_compute[239965]: 2026-01-26 17:15:38.904 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:39 compute-0 podman[407793]: 2026-01-26 17:15:39.41986665 +0000 UTC m=+0.089623448 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:15:39 compute-0 nova_compute[239965]: 2026-01-26 17:15:39.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:40 compute-0 nova_compute[239965]: 2026-01-26 17:15:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:40 compute-0 nova_compute[239965]: 2026-01-26 17:15:40.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:40 compute-0 ceph-mon[75140]: pgmap v3777: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:42 compute-0 podman[407812]: 2026-01-26 17:15:42.417388332 +0000 UTC m=+0.104331179 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 26 17:15:42 compute-0 nova_compute[239965]: 2026-01-26 17:15:42.727 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:43 compute-0 ceph-mon[75140]: pgmap v3778: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:43 compute-0 nova_compute[239965]: 2026-01-26 17:15:43.906 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:45 compute-0 ceph-mon[75140]: pgmap v3779: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:47 compute-0 ceph-mon[75140]: pgmap v3780: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:47 compute-0 nova_compute[239965]: 2026-01-26 17:15:47.730 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:48 compute-0 nova_compute[239965]: 2026-01-26 17:15:48.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:15:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393317603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:15:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:15:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1393317603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:15:48 compute-0 nova_compute[239965]: 2026-01-26 17:15:48.909 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:49 compute-0 ceph-mon[75140]: pgmap v3781: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1393317603' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:15:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1393317603' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:15:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:15:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:15:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:51 compute-0 ceph-mon[75140]: pgmap v3782: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:52 compute-0 nova_compute[239965]: 2026-01-26 17:15:52.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:52 compute-0 nova_compute[239965]: 2026-01-26 17:15:52.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:52 compute-0 nova_compute[239965]: 2026-01-26 17:15:52.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:15:52 compute-0 nova_compute[239965]: 2026-01-26 17:15:52.732 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:53 compute-0 ceph-mon[75140]: pgmap v3783: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:53 compute-0 nova_compute[239965]: 2026-01-26 17:15:53.910 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:54 compute-0 nova_compute[239965]: 2026-01-26 17:15:54.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:15:54 compute-0 nova_compute[239965]: 2026-01-26 17:15:54.694 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:54 compute-0 nova_compute[239965]: 2026-01-26 17:15:54.694 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:54 compute-0 nova_compute[239965]: 2026-01-26 17:15:54.695 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:54 compute-0 nova_compute[239965]: 2026-01-26 17:15:54.695 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:15:54 compute-0 nova_compute[239965]: 2026-01-26 17:15:54.695 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:15:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:15:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2204277641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:15:55 compute-0 nova_compute[239965]: 2026-01-26 17:15:55.255 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:15:55 compute-0 nova_compute[239965]: 2026-01-26 17:15:55.462 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:15:55 compute-0 nova_compute[239965]: 2026-01-26 17:15:55.464 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3540MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:15:55 compute-0 nova_compute[239965]: 2026-01-26 17:15:55.464 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:55 compute-0 nova_compute[239965]: 2026-01-26 17:15:55.465 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:55 compute-0 nova_compute[239965]: 2026-01-26 17:15:55.577 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:15:55 compute-0 nova_compute[239965]: 2026-01-26 17:15:55.578 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:15:55 compute-0 ceph-mon[75140]: pgmap v3784: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2204277641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:15:55 compute-0 nova_compute[239965]: 2026-01-26 17:15:55.595 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:15:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:15:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2413835127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:15:56 compute-0 nova_compute[239965]: 2026-01-26 17:15:56.135 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:15:56 compute-0 nova_compute[239965]: 2026-01-26 17:15:56.140 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:15:56 compute-0 nova_compute[239965]: 2026-01-26 17:15:56.162 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:15:56 compute-0 nova_compute[239965]: 2026-01-26 17:15:56.164 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:15:56 compute-0 nova_compute[239965]: 2026-01-26 17:15:56.164 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.699s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2413835127' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:15:57 compute-0 ceph-mon[75140]: pgmap v3785: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:57 compute-0 nova_compute[239965]: 2026-01-26 17:15:57.735 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:58 compute-0 nova_compute[239965]: 2026-01-26 17:15:58.165 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:15:58 compute-0 nova_compute[239965]: 2026-01-26 17:15:58.165 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:15:58 compute-0 nova_compute[239965]: 2026-01-26 17:15:58.165 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:15:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:58 compute-0 nova_compute[239965]: 2026-01-26 17:15:58.357 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:15:58 compute-0 nova_compute[239965]: 2026-01-26 17:15:58.912 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:15:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:15:59.300 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:15:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:15:59.300 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:15:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:15:59.300 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:15:59 compute-0 ceph-mon[75140]: pgmap v3786: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:15:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:16:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:16:00 compute-0 ceph-mon[75140]: pgmap v3787: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:02 compute-0 nova_compute[239965]: 2026-01-26 17:16:02.738 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:03 compute-0 ceph-mon[75140]: pgmap v3788: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:03 compute-0 nova_compute[239965]: 2026-01-26 17:16:03.916 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:05 compute-0 ceph-mon[75140]: pgmap v3789: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:07 compute-0 ceph-mon[75140]: pgmap v3790: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:07 compute-0 nova_compute[239965]: 2026-01-26 17:16:07.740 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:08 compute-0 nova_compute[239965]: 2026-01-26 17:16:08.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:08 compute-0 nova_compute[239965]: 2026-01-26 17:16:08.917 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:09 compute-0 ceph-mon[75140]: pgmap v3791: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:10 compute-0 podman[407882]: 2026-01-26 17:16:10.386264935 +0000 UTC m=+0.066314866 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:16:11 compute-0 ceph-mon[75140]: pgmap v3792: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:12 compute-0 nova_compute[239965]: 2026-01-26 17:16:12.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:13 compute-0 podman[407903]: 2026-01-26 17:16:13.391941847 +0000 UTC m=+0.081673034 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:16:13 compute-0 ceph-mon[75140]: pgmap v3793: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:13 compute-0 nova_compute[239965]: 2026-01-26 17:16:13.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:15 compute-0 ceph-mon[75140]: pgmap v3794: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:16 compute-0 sudo[407929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:16:16 compute-0 sudo[407929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:16 compute-0 sudo[407929]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:16 compute-0 sudo[407954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 17:16:16 compute-0 sudo[407954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:17 compute-0 sudo[407954]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.217365) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447777217401, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1631, "num_deletes": 251, "total_data_size": 2739845, "memory_usage": 2772864, "flush_reason": "Manual Compaction"}
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447777241703, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2677919, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75892, "largest_seqno": 77522, "table_properties": {"data_size": 2670346, "index_size": 4579, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15277, "raw_average_key_size": 19, "raw_value_size": 2655293, "raw_average_value_size": 3457, "num_data_blocks": 204, "num_entries": 768, "num_filter_entries": 768, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769447600, "oldest_key_time": 1769447600, "file_creation_time": 1769447777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 24399 microseconds, and 6656 cpu microseconds.
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.241759) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2677919 bytes OK
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.241783) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.244726) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.244770) EVENT_LOG_v1 {"time_micros": 1769447777244760, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.244800) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2732833, prev total WAL file size 2732833, number of live WAL files 2.
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.245868) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2615KB)], [182(10MB)]
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447777245934, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 13256701, "oldest_snapshot_seqno": -1}
Jan 26 17:16:17 compute-0 sudo[408000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:16:17 compute-0 sudo[408000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:17 compute-0 sudo[408000]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:17 compute-0 sudo[408025]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:16:17 compute-0 sudo[408025]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9400 keys, 11438198 bytes, temperature: kUnknown
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447777392575, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 11438198, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11378763, "index_size": 34810, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23557, "raw_key_size": 247384, "raw_average_key_size": 26, "raw_value_size": 11213890, "raw_average_value_size": 1192, "num_data_blocks": 1339, "num_entries": 9400, "num_filter_entries": 9400, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769447777, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.392885) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 11438198 bytes
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.394737) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 90.3 rd, 78.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 10.1 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(9.2) write-amplify(4.3) OK, records in: 9914, records dropped: 514 output_compression: NoCompression
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.394758) EVENT_LOG_v1 {"time_micros": 1769447777394747, "job": 114, "event": "compaction_finished", "compaction_time_micros": 146732, "compaction_time_cpu_micros": 30373, "output_level": 6, "num_output_files": 1, "total_output_size": 11438198, "num_input_records": 9914, "num_output_records": 9400, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447777395438, "job": 114, "event": "table_file_deletion", "file_number": 184}
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447777398125, "job": 114, "event": "table_file_deletion", "file_number": 182}
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.245739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.398225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.398233) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.398235) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.398236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:16:17 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:16:17.398238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:16:17 compute-0 ceph-mon[75140]: pgmap v3795: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:17 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:17 compute-0 nova_compute[239965]: 2026-01-26 17:16:17.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:17 compute-0 sudo[408025]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 17:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:16:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:16:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:16:17 compute-0 sudo[408082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:16:17 compute-0 sudo[408082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:17 compute-0 sudo[408082]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:17 compute-0 sudo[408107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:16:17 compute-0 sudo[408107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:18 compute-0 podman[408144]: 2026-01-26 17:16:18.261111367 +0000 UTC m=+0.049902654 container create 3b79192cc28cda65b1ffd9b898dee3d33afaf3485d642b1580900e8e5f371f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_buck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:16:18 compute-0 systemd[1]: Started libpod-conmon-3b79192cc28cda65b1ffd9b898dee3d33afaf3485d642b1580900e8e5f371f53.scope.
Jan 26 17:16:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:16:18 compute-0 podman[408144]: 2026-01-26 17:16:18.243631689 +0000 UTC m=+0.032422996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:16:18 compute-0 podman[408144]: 2026-01-26 17:16:18.34807698 +0000 UTC m=+0.136868367 container init 3b79192cc28cda65b1ffd9b898dee3d33afaf3485d642b1580900e8e5f371f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_buck, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:16:18 compute-0 podman[408144]: 2026-01-26 17:16:18.356955607 +0000 UTC m=+0.145746894 container start 3b79192cc28cda65b1ffd9b898dee3d33afaf3485d642b1580900e8e5f371f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:16:18 compute-0 podman[408144]: 2026-01-26 17:16:18.360818742 +0000 UTC m=+0.149610049 container attach 3b79192cc28cda65b1ffd9b898dee3d33afaf3485d642b1580900e8e5f371f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_buck, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:16:18 compute-0 sleepy_buck[408161]: 167 167
Jan 26 17:16:18 compute-0 systemd[1]: libpod-3b79192cc28cda65b1ffd9b898dee3d33afaf3485d642b1580900e8e5f371f53.scope: Deactivated successfully.
Jan 26 17:16:18 compute-0 podman[408144]: 2026-01-26 17:16:18.36442525 +0000 UTC m=+0.153216537 container died 3b79192cc28cda65b1ffd9b898dee3d33afaf3485d642b1580900e8e5f371f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 17:16:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e14d8dff6b116215feb85b739d90c7519086d2da122f31c70841abf37121cf71-merged.mount: Deactivated successfully.
Jan 26 17:16:18 compute-0 podman[408144]: 2026-01-26 17:16:18.407576078 +0000 UTC m=+0.196367365 container remove 3b79192cc28cda65b1ffd9b898dee3d33afaf3485d642b1580900e8e5f371f53 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:16:18 compute-0 systemd[1]: libpod-conmon-3b79192cc28cda65b1ffd9b898dee3d33afaf3485d642b1580900e8e5f371f53.scope: Deactivated successfully.
Jan 26 17:16:18 compute-0 podman[408185]: 2026-01-26 17:16:18.588118455 +0000 UTC m=+0.043330404 container create a759e12b85b602d53737f4931bb3ab5a6cabefa945088aa9891eeae2388ccb6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_boyd, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:16:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 17:16:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:16:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:16:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:16:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:16:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:16:18 compute-0 systemd[1]: Started libpod-conmon-a759e12b85b602d53737f4931bb3ab5a6cabefa945088aa9891eeae2388ccb6e.scope.
Jan 26 17:16:18 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:16:18 compute-0 podman[408185]: 2026-01-26 17:16:18.56956959 +0000 UTC m=+0.024781549 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f20d60aadda01b613713bb7d55a7c7cb137f17d37e57ef198d1e30dc9c7ecfa0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f20d60aadda01b613713bb7d55a7c7cb137f17d37e57ef198d1e30dc9c7ecfa0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f20d60aadda01b613713bb7d55a7c7cb137f17d37e57ef198d1e30dc9c7ecfa0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f20d60aadda01b613713bb7d55a7c7cb137f17d37e57ef198d1e30dc9c7ecfa0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f20d60aadda01b613713bb7d55a7c7cb137f17d37e57ef198d1e30dc9c7ecfa0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:18 compute-0 podman[408185]: 2026-01-26 17:16:18.682260903 +0000 UTC m=+0.137472862 container init a759e12b85b602d53737f4931bb3ab5a6cabefa945088aa9891eeae2388ccb6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_boyd, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 17:16:18 compute-0 podman[408185]: 2026-01-26 17:16:18.696391429 +0000 UTC m=+0.151603378 container start a759e12b85b602d53737f4931bb3ab5a6cabefa945088aa9891eeae2388ccb6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_boyd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:16:18 compute-0 podman[408185]: 2026-01-26 17:16:18.700154602 +0000 UTC m=+0.155366581 container attach a759e12b85b602d53737f4931bb3ab5a6cabefa945088aa9891eeae2388ccb6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_boyd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:16:18 compute-0 nova_compute[239965]: 2026-01-26 17:16:18.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:19 compute-0 youthful_boyd[408201]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:16:19 compute-0 youthful_boyd[408201]: --> All data devices are unavailable
Jan 26 17:16:19 compute-0 systemd[1]: libpod-a759e12b85b602d53737f4931bb3ab5a6cabefa945088aa9891eeae2388ccb6e.scope: Deactivated successfully.
Jan 26 17:16:19 compute-0 podman[408185]: 2026-01-26 17:16:19.166748641 +0000 UTC m=+0.621960610 container died a759e12b85b602d53737f4931bb3ab5a6cabefa945088aa9891eeae2388ccb6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_boyd, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-f20d60aadda01b613713bb7d55a7c7cb137f17d37e57ef198d1e30dc9c7ecfa0-merged.mount: Deactivated successfully.
Jan 26 17:16:19 compute-0 podman[408185]: 2026-01-26 17:16:19.218342626 +0000 UTC m=+0.673554585 container remove a759e12b85b602d53737f4931bb3ab5a6cabefa945088aa9891eeae2388ccb6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_boyd, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:16:19 compute-0 systemd[1]: libpod-conmon-a759e12b85b602d53737f4931bb3ab5a6cabefa945088aa9891eeae2388ccb6e.scope: Deactivated successfully.
Jan 26 17:16:19 compute-0 sudo[408107]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:19 compute-0 sudo[408236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:16:19 compute-0 sudo[408236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:19 compute-0 sudo[408236]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:19 compute-0 sudo[408261]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:16:19 compute-0 sudo[408261]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:19 compute-0 ceph-mon[75140]: pgmap v3796: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:19 compute-0 podman[408299]: 2026-01-26 17:16:19.698695123 +0000 UTC m=+0.039867028 container create 508df1a3658740a1286a1ee028098e074b08e8db883acbbeb1a704a7aa14569f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 17:16:19 compute-0 systemd[1]: Started libpod-conmon-508df1a3658740a1286a1ee028098e074b08e8db883acbbeb1a704a7aa14569f.scope.
Jan 26 17:16:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:16:19 compute-0 podman[408299]: 2026-01-26 17:16:19.681954473 +0000 UTC m=+0.023126378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:16:19 compute-0 podman[408299]: 2026-01-26 17:16:19.782075538 +0000 UTC m=+0.123247473 container init 508df1a3658740a1286a1ee028098e074b08e8db883acbbeb1a704a7aa14569f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 17:16:19 compute-0 podman[408299]: 2026-01-26 17:16:19.788544626 +0000 UTC m=+0.129716531 container start 508df1a3658740a1286a1ee028098e074b08e8db883acbbeb1a704a7aa14569f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:16:19 compute-0 podman[408299]: 2026-01-26 17:16:19.79235512 +0000 UTC m=+0.133527055 container attach 508df1a3658740a1286a1ee028098e074b08e8db883acbbeb1a704a7aa14569f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:16:19 compute-0 exciting_chatterjee[408315]: 167 167
Jan 26 17:16:19 compute-0 systemd[1]: libpod-508df1a3658740a1286a1ee028098e074b08e8db883acbbeb1a704a7aa14569f.scope: Deactivated successfully.
Jan 26 17:16:19 compute-0 podman[408299]: 2026-01-26 17:16:19.79480917 +0000 UTC m=+0.135981075 container died 508df1a3658740a1286a1ee028098e074b08e8db883acbbeb1a704a7aa14569f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:16:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5bbca15f4a7d7db211e2eb321fa1e422bda6ca1f13cd78aac8cb902a1656313-merged.mount: Deactivated successfully.
Jan 26 17:16:19 compute-0 podman[408299]: 2026-01-26 17:16:19.843062403 +0000 UTC m=+0.184234308 container remove 508df1a3658740a1286a1ee028098e074b08e8db883acbbeb1a704a7aa14569f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_chatterjee, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:16:19 compute-0 systemd[1]: libpod-conmon-508df1a3658740a1286a1ee028098e074b08e8db883acbbeb1a704a7aa14569f.scope: Deactivated successfully.
Jan 26 17:16:19 compute-0 podman[408339]: 2026-01-26 17:16:19.990448317 +0000 UTC m=+0.040111055 container create 5c4eeb362be9e3fece619ed4a812d876cfdf9346da69c1464b519956b1a499e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:16:20 compute-0 systemd[1]: Started libpod-conmon-5c4eeb362be9e3fece619ed4a812d876cfdf9346da69c1464b519956b1a499e1.scope.
Jan 26 17:16:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86013ba0ee22932b3387508cbfa70340f4a537394d7a371093679c5a350a4eee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86013ba0ee22932b3387508cbfa70340f4a537394d7a371093679c5a350a4eee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86013ba0ee22932b3387508cbfa70340f4a537394d7a371093679c5a350a4eee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86013ba0ee22932b3387508cbfa70340f4a537394d7a371093679c5a350a4eee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:20 compute-0 podman[408339]: 2026-01-26 17:16:19.972689451 +0000 UTC m=+0.022352209 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:16:20 compute-0 podman[408339]: 2026-01-26 17:16:20.072692563 +0000 UTC m=+0.122355331 container init 5c4eeb362be9e3fece619ed4a812d876cfdf9346da69c1464b519956b1a499e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 17:16:20 compute-0 podman[408339]: 2026-01-26 17:16:20.079780266 +0000 UTC m=+0.129443004 container start 5c4eeb362be9e3fece619ed4a812d876cfdf9346da69c1464b519956b1a499e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 17:16:20 compute-0 podman[408339]: 2026-01-26 17:16:20.083317454 +0000 UTC m=+0.132980222 container attach 5c4eeb362be9e3fece619ed4a812d876cfdf9346da69c1464b519956b1a499e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 17:16:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:20 compute-0 tender_keldysh[408357]: {
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:     "0": [
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:         {
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "devices": [
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "/dev/loop3"
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             ],
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_name": "ceph_lv0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_size": "21470642176",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "name": "ceph_lv0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "tags": {
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.cluster_name": "ceph",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.crush_device_class": "",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.encrypted": "0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.objectstore": "bluestore",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.osd_id": "0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.type": "block",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.vdo": "0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.with_tpm": "0"
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             },
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "type": "block",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "vg_name": "ceph_vg0"
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:         }
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:     ],
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:     "1": [
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:         {
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "devices": [
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "/dev/loop4"
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             ],
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_name": "ceph_lv1",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_size": "21470642176",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "name": "ceph_lv1",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "tags": {
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.cluster_name": "ceph",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.crush_device_class": "",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.encrypted": "0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.objectstore": "bluestore",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.osd_id": "1",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.type": "block",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.vdo": "0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.with_tpm": "0"
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             },
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "type": "block",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "vg_name": "ceph_vg1"
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:         }
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:     ],
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:     "2": [
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:         {
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "devices": [
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "/dev/loop5"
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             ],
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_name": "ceph_lv2",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_size": "21470642176",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "name": "ceph_lv2",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "tags": {
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.cluster_name": "ceph",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.crush_device_class": "",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.encrypted": "0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.objectstore": "bluestore",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.osd_id": "2",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.type": "block",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.vdo": "0",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:                 "ceph.with_tpm": "0"
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             },
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "type": "block",
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:             "vg_name": "ceph_vg2"
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:         }
Jan 26 17:16:20 compute-0 tender_keldysh[408357]:     ]
Jan 26 17:16:20 compute-0 tender_keldysh[408357]: }
Jan 26 17:16:20 compute-0 systemd[1]: libpod-5c4eeb362be9e3fece619ed4a812d876cfdf9346da69c1464b519956b1a499e1.scope: Deactivated successfully.
Jan 26 17:16:20 compute-0 podman[408339]: 2026-01-26 17:16:20.363831661 +0000 UTC m=+0.413494409 container died 5c4eeb362be9e3fece619ed4a812d876cfdf9346da69c1464b519956b1a499e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 17:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-86013ba0ee22932b3387508cbfa70340f4a537394d7a371093679c5a350a4eee-merged.mount: Deactivated successfully.
Jan 26 17:16:20 compute-0 podman[408339]: 2026-01-26 17:16:20.405221956 +0000 UTC m=+0.454884694 container remove 5c4eeb362be9e3fece619ed4a812d876cfdf9346da69c1464b519956b1a499e1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_keldysh, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 26 17:16:20 compute-0 systemd[1]: libpod-conmon-5c4eeb362be9e3fece619ed4a812d876cfdf9346da69c1464b519956b1a499e1.scope: Deactivated successfully.
Jan 26 17:16:20 compute-0 sudo[408261]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:20 compute-0 sudo[408379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:16:20 compute-0 sudo[408379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:20 compute-0 sudo[408379]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:20 compute-0 sudo[408404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:16:20 compute-0 sudo[408404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:20 compute-0 podman[408441]: 2026-01-26 17:16:20.867615813 +0000 UTC m=+0.042554614 container create a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_visvesvaraya, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:16:20 compute-0 systemd[1]: Started libpod-conmon-a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2.scope.
Jan 26 17:16:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:16:20 compute-0 podman[408441]: 2026-01-26 17:16:20.935172039 +0000 UTC m=+0.110110900 container init a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_visvesvaraya, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 17:16:20 compute-0 podman[408441]: 2026-01-26 17:16:20.941098084 +0000 UTC m=+0.116036885 container start a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:16:20 compute-0 systemd[1]: libpod-a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2.scope: Deactivated successfully.
Jan 26 17:16:20 compute-0 confident_visvesvaraya[408458]: 167 167
Jan 26 17:16:20 compute-0 podman[408441]: 2026-01-26 17:16:20.94503709 +0000 UTC m=+0.119975901 container attach a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 17:16:20 compute-0 conmon[408458]: conmon a655d2c1f17345285c87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2.scope/container/memory.events
Jan 26 17:16:20 compute-0 podman[408441]: 2026-01-26 17:16:20.947632344 +0000 UTC m=+0.122571145 container died a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:16:20 compute-0 podman[408441]: 2026-01-26 17:16:20.852628815 +0000 UTC m=+0.027567636 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:16:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6200a9b8afa9886e5287b4c505bd82e90a0d0c29b536e775deadbe893437ab62-merged.mount: Deactivated successfully.
Jan 26 17:16:20 compute-0 podman[408441]: 2026-01-26 17:16:20.984903938 +0000 UTC m=+0.159842739 container remove a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 17:16:21 compute-0 systemd[1]: libpod-conmon-a655d2c1f17345285c87047b4a5ad291db6c3c481d52a1807d1841147d5b9dc2.scope: Deactivated successfully.
Jan 26 17:16:21 compute-0 podman[408482]: 2026-01-26 17:16:21.189578436 +0000 UTC m=+0.041351955 container create 43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wilson, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:16:21 compute-0 systemd[1]: Started libpod-conmon-43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982.scope.
Jan 26 17:16:21 compute-0 podman[408482]: 2026-01-26 17:16:21.172721833 +0000 UTC m=+0.024495372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:16:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6110063e5f85fca756beb74a0596a774fccef7c64fda7005281def29d8c8ea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6110063e5f85fca756beb74a0596a774fccef7c64fda7005281def29d8c8ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6110063e5f85fca756beb74a0596a774fccef7c64fda7005281def29d8c8ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b6110063e5f85fca756beb74a0596a774fccef7c64fda7005281def29d8c8ea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:16:21 compute-0 podman[408482]: 2026-01-26 17:16:21.300138577 +0000 UTC m=+0.151912096 container init 43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 26 17:16:21 compute-0 podman[408482]: 2026-01-26 17:16:21.311918995 +0000 UTC m=+0.163692554 container start 43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wilson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:16:21 compute-0 podman[408482]: 2026-01-26 17:16:21.316476518 +0000 UTC m=+0.168250067 container attach 43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wilson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:16:21 compute-0 ceph-mon[75140]: pgmap v3797: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:21 compute-0 sshd-session[408347]: Invalid user ubuntu from 103.42.57.158 port 49440
Jan 26 17:16:21 compute-0 sshd-session[408347]: Received disconnect from 103.42.57.158 port 49440:11:  [preauth]
Jan 26 17:16:21 compute-0 sshd-session[408347]: Disconnected from invalid user ubuntu 103.42.57.158 port 49440 [preauth]
Jan 26 17:16:22 compute-0 lvm[408577]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:16:22 compute-0 lvm[408577]: VG ceph_vg0 finished
Jan 26 17:16:22 compute-0 lvm[408578]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:16:22 compute-0 lvm[408578]: VG ceph_vg1 finished
Jan 26 17:16:22 compute-0 lvm[408580]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:16:22 compute-0 lvm[408580]: VG ceph_vg2 finished
Jan 26 17:16:22 compute-0 suspicious_wilson[408499]: {}
Jan 26 17:16:22 compute-0 systemd[1]: libpod-43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982.scope: Deactivated successfully.
Jan 26 17:16:22 compute-0 systemd[1]: libpod-43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982.scope: Consumed 1.439s CPU time.
Jan 26 17:16:22 compute-0 podman[408482]: 2026-01-26 17:16:22.197734174 +0000 UTC m=+1.049507703 container died 43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wilson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 17:16:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b6110063e5f85fca756beb74a0596a774fccef7c64fda7005281def29d8c8ea-merged.mount: Deactivated successfully.
Jan 26 17:16:22 compute-0 podman[408482]: 2026-01-26 17:16:22.46106638 +0000 UTC m=+1.312839899 container remove 43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_wilson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:16:22 compute-0 systemd[1]: libpod-conmon-43a739e70abba06764e0bbf49868f6afa6507aa12c11a65e4827e8f0bd791982.scope: Deactivated successfully.
Jan 26 17:16:22 compute-0 sudo[408404]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:16:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:16:22 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:22 compute-0 sudo[408595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:16:22 compute-0 sudo[408595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:16:22 compute-0 sudo[408595]: pam_unix(sudo:session): session closed for user root
Jan 26 17:16:22 compute-0 nova_compute[239965]: 2026-01-26 17:16:22.829 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:23 compute-0 ceph-mon[75140]: pgmap v3798: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:16:23 compute-0 nova_compute[239965]: 2026-01-26 17:16:23.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:25 compute-0 nova_compute[239965]: 2026-01-26 17:16:25.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:25 compute-0 nova_compute[239965]: 2026-01-26 17:16:25.527 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:16:25 compute-0 ceph-mon[75140]: pgmap v3799: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:27 compute-0 ceph-mon[75140]: pgmap v3800: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:27 compute-0 nova_compute[239965]: 2026-01-26 17:16:27.833 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:16:28
Jan 26 17:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', '.mgr', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', 'vms']
Jan 26 17:16:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:16:28 compute-0 nova_compute[239965]: 2026-01-26 17:16:28.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:29 compute-0 ceph-mon[75140]: pgmap v3801: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:16:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:16:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:16:31 compute-0 ceph-mon[75140]: pgmap v3802: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:32 compute-0 nova_compute[239965]: 2026-01-26 17:16:32.868 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:33 compute-0 ceph-mon[75140]: pgmap v3803: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:33 compute-0 nova_compute[239965]: 2026-01-26 17:16:33.984 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:35 compute-0 ceph-mon[75140]: pgmap v3804: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:36 compute-0 ceph-mon[75140]: pgmap v3805: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:37 compute-0 nova_compute[239965]: 2026-01-26 17:16:37.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:38 compute-0 nova_compute[239965]: 2026-01-26 17:16:38.561 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:38 compute-0 nova_compute[239965]: 2026-01-26 17:16:38.986 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:39 compute-0 ceph-mon[75140]: pgmap v3806: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:40 compute-0 nova_compute[239965]: 2026-01-26 17:16:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:40 compute-0 nova_compute[239965]: 2026-01-26 17:16:40.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:41 compute-0 podman[408620]: 2026-01-26 17:16:41.366504321 +0000 UTC m=+0.051563476 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Jan 26 17:16:41 compute-0 ceph-mon[75140]: pgmap v3807: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:41 compute-0 nova_compute[239965]: 2026-01-26 17:16:41.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:43 compute-0 nova_compute[239965]: 2026-01-26 17:16:43.000 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:43 compute-0 ceph-mon[75140]: pgmap v3808: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:43 compute-0 nova_compute[239965]: 2026-01-26 17:16:43.989 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:44 compute-0 podman[408639]: 2026-01-26 17:16:44.519146375 +0000 UTC m=+0.130714885 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:16:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:45 compute-0 ceph-mon[75140]: pgmap v3809: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:47 compute-0 ceph-mon[75140]: pgmap v3810: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:48 compute-0 nova_compute[239965]: 2026-01-26 17:16:48.003 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:16:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1771786849' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:16:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:16:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1771786849' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:16:48 compute-0 nova_compute[239965]: 2026-01-26 17:16:48.990 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:49 compute-0 ceph-mon[75140]: pgmap v3811: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1771786849' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:16:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1771786849' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:16:49 compute-0 nova_compute[239965]: 2026-01-26 17:16:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:16:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:16:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:51 compute-0 ceph-mon[75140]: pgmap v3812: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:51 compute-0 nova_compute[239965]: 2026-01-26 17:16:51.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:51 compute-0 nova_compute[239965]: 2026-01-26 17:16:51.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:16:51 compute-0 nova_compute[239965]: 2026-01-26 17:16:51.536 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:16:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:53 compute-0 nova_compute[239965]: 2026-01-26 17:16:53.006 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:53 compute-0 ceph-mon[75140]: pgmap v3813: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:53 compute-0 nova_compute[239965]: 2026-01-26 17:16:53.529 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:53 compute-0 nova_compute[239965]: 2026-01-26 17:16:53.529 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:53 compute-0 nova_compute[239965]: 2026-01-26 17:16:53.530 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:16:53 compute-0 nova_compute[239965]: 2026-01-26 17:16:53.993 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:55 compute-0 ceph-mon[75140]: pgmap v3814: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:56 compute-0 nova_compute[239965]: 2026-01-26 17:16:56.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:16:56 compute-0 nova_compute[239965]: 2026-01-26 17:16:56.617 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:16:56 compute-0 nova_compute[239965]: 2026-01-26 17:16:56.618 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:16:56 compute-0 nova_compute[239965]: 2026-01-26 17:16:56.619 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:16:56 compute-0 nova_compute[239965]: 2026-01-26 17:16:56.619 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:16:56 compute-0 nova_compute[239965]: 2026-01-26 17:16:56.620 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:16:57 compute-0 ceph-mon[75140]: pgmap v3815: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:16:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/98156456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:16:57 compute-0 nova_compute[239965]: 2026-01-26 17:16:57.392 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.773s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:16:57 compute-0 nova_compute[239965]: 2026-01-26 17:16:57.555 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:16:57 compute-0 nova_compute[239965]: 2026-01-26 17:16:57.556 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3525MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:16:57 compute-0 nova_compute[239965]: 2026-01-26 17:16:57.556 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:16:57 compute-0 nova_compute[239965]: 2026-01-26 17:16:57.556 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:16:57 compute-0 sshd-session[408676]: Invalid user sol from 45.148.10.240 port 56836
Jan 26 17:16:57 compute-0 nova_compute[239965]: 2026-01-26 17:16:57.646 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:16:57 compute-0 nova_compute[239965]: 2026-01-26 17:16:57.647 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:16:57 compute-0 nova_compute[239965]: 2026-01-26 17:16:57.665 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:16:57 compute-0 sshd-session[408676]: Connection closed by invalid user sol 45.148.10.240 port 56836 [preauth]
Jan 26 17:16:58 compute-0 nova_compute[239965]: 2026-01-26 17:16:58.007 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/98156456' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:16:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:16:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3566258037' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:16:58 compute-0 nova_compute[239965]: 2026-01-26 17:16:58.251 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.587s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:16:58 compute-0 nova_compute[239965]: 2026-01-26 17:16:58.258 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:16:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:16:58 compute-0 nova_compute[239965]: 2026-01-26 17:16:58.399 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:16:58 compute-0 nova_compute[239965]: 2026-01-26 17:16:58.401 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:16:58 compute-0 nova_compute[239965]: 2026-01-26 17:16:58.401 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:16:58 compute-0 nova_compute[239965]: 2026-01-26 17:16:58.994 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:16:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:16:59.301 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:16:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:16:59.302 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:16:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:16:59.302 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:16:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:16:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3566258037' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:16:59 compute-0 ceph-mon[75140]: pgmap v3816: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:00 compute-0 nova_compute[239965]: 2026-01-26 17:17:00.402 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:17:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:17:00 compute-0 nova_compute[239965]: 2026-01-26 17:17:00.457 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:00 compute-0 nova_compute[239965]: 2026-01-26 17:17:00.458 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:17:00 compute-0 nova_compute[239965]: 2026-01-26 17:17:00.458 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:17:00 compute-0 nova_compute[239965]: 2026-01-26 17:17:00.475 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:17:00 compute-0 ceph-mon[75140]: pgmap v3817: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:03 compute-0 nova_compute[239965]: 2026-01-26 17:17:03.010 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:03 compute-0 ceph-mon[75140]: pgmap v3818: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:03 compute-0 nova_compute[239965]: 2026-01-26 17:17:03.996 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:05 compute-0 ceph-mon[75140]: pgmap v3819: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:07 compute-0 ceph-mon[75140]: pgmap v3820: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:08 compute-0 nova_compute[239965]: 2026-01-26 17:17:08.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:08 compute-0 ceph-mon[75140]: pgmap v3821: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:08 compute-0 nova_compute[239965]: 2026-01-26 17:17:08.997 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:11 compute-0 ceph-mon[75140]: pgmap v3822: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:12 compute-0 podman[408711]: 2026-01-26 17:17:12.366025616 +0000 UTC m=+0.054327213 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:17:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:13 compute-0 nova_compute[239965]: 2026-01-26 17:17:13.016 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:13 compute-0 ceph-mon[75140]: pgmap v3823: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:14 compute-0 nova_compute[239965]: 2026-01-26 17:17:14.002 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:15 compute-0 podman[408731]: 2026-01-26 17:17:15.415897453 +0000 UTC m=+0.099063240 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 26 17:17:15 compute-0 ceph-mon[75140]: pgmap v3824: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:17 compute-0 ceph-mon[75140]: pgmap v3825: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:18 compute-0 nova_compute[239965]: 2026-01-26 17:17:18.020 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:19 compute-0 nova_compute[239965]: 2026-01-26 17:17:19.003 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:19 compute-0 ceph-mon[75140]: pgmap v3826: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:20 compute-0 ceph-mon[75140]: pgmap v3827: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:22 compute-0 sudo[408757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:17:22 compute-0 sudo[408757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:17:22 compute-0 sudo[408757]: pam_unix(sudo:session): session closed for user root
Jan 26 17:17:22 compute-0 sudo[408782]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:17:22 compute-0 sudo[408782]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:17:23 compute-0 nova_compute[239965]: 2026-01-26 17:17:23.022 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:23 compute-0 sudo[408782]: pam_unix(sudo:session): session closed for user root
Jan 26 17:17:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:17:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:17:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:17:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:17:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:17:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:17:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:17:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:17:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:17:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:17:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:17:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:17:23 compute-0 sudo[408839]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:17:23 compute-0 sudo[408839]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:17:23 compute-0 sudo[408839]: pam_unix(sudo:session): session closed for user root
Jan 26 17:17:23 compute-0 ceph-mon[75140]: pgmap v3828: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:17:23 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:17:23 compute-0 sudo[408864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:17:23 compute-0 sudo[408864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:17:24 compute-0 nova_compute[239965]: 2026-01-26 17:17:24.005 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:24 compute-0 podman[408901]: 2026-01-26 17:17:24.248538509 +0000 UTC m=+0.027072215 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:17:24 compute-0 podman[408901]: 2026-01-26 17:17:24.350085448 +0000 UTC m=+0.128619104 container create 6347af4fd6f2171ed545652c7b90647c5c3efbee6b1d809b221e1bd57d6c36f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_edison, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:17:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:24 compute-0 systemd[1]: Started libpod-conmon-6347af4fd6f2171ed545652c7b90647c5c3efbee6b1d809b221e1bd57d6c36f3.scope.
Jan 26 17:17:24 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:17:24 compute-0 podman[408901]: 2026-01-26 17:17:24.522557006 +0000 UTC m=+0.301090682 container init 6347af4fd6f2171ed545652c7b90647c5c3efbee6b1d809b221e1bd57d6c36f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:17:24 compute-0 podman[408901]: 2026-01-26 17:17:24.532412818 +0000 UTC m=+0.310946474 container start 6347af4fd6f2171ed545652c7b90647c5c3efbee6b1d809b221e1bd57d6c36f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_edison, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:17:24 compute-0 sweet_edison[408918]: 167 167
Jan 26 17:17:24 compute-0 systemd[1]: libpod-6347af4fd6f2171ed545652c7b90647c5c3efbee6b1d809b221e1bd57d6c36f3.scope: Deactivated successfully.
Jan 26 17:17:24 compute-0 podman[408901]: 2026-01-26 17:17:24.543010298 +0000 UTC m=+0.321543964 container attach 6347af4fd6f2171ed545652c7b90647c5c3efbee6b1d809b221e1bd57d6c36f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_edison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 17:17:24 compute-0 podman[408901]: 2026-01-26 17:17:24.543577992 +0000 UTC m=+0.322111648 container died 6347af4fd6f2171ed545652c7b90647c5c3efbee6b1d809b221e1bd57d6c36f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_edison, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:17:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-907cd6c1db0bd27c9318399e6b72d06dec048a42cf8b2029bf8c94ed70dda1e1-merged.mount: Deactivated successfully.
Jan 26 17:17:24 compute-0 podman[408901]: 2026-01-26 17:17:24.661128024 +0000 UTC m=+0.439661680 container remove 6347af4fd6f2171ed545652c7b90647c5c3efbee6b1d809b221e1bd57d6c36f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_edison, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:17:24 compute-0 systemd[1]: libpod-conmon-6347af4fd6f2171ed545652c7b90647c5c3efbee6b1d809b221e1bd57d6c36f3.scope: Deactivated successfully.
Jan 26 17:17:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:24 compute-0 podman[408945]: 2026-01-26 17:17:24.802427278 +0000 UTC m=+0.022285717 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:17:25 compute-0 podman[408945]: 2026-01-26 17:17:25.37594477 +0000 UTC m=+0.595803179 container create 99bfda8c88d09101969fff8db9cbaa91370c0f428e7ff340d64764b0dba83efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 17:17:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:17:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:17:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:17:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:17:25 compute-0 ceph-mon[75140]: pgmap v3829: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:25 compute-0 systemd[1]: Started libpod-conmon-99bfda8c88d09101969fff8db9cbaa91370c0f428e7ff340d64764b0dba83efe.scope.
Jan 26 17:17:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3709084bf46d054e7f744ce73c6879436d07ec7df03f477f1a0f640ea2acb99f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3709084bf46d054e7f744ce73c6879436d07ec7df03f477f1a0f640ea2acb99f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3709084bf46d054e7f744ce73c6879436d07ec7df03f477f1a0f640ea2acb99f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3709084bf46d054e7f744ce73c6879436d07ec7df03f477f1a0f640ea2acb99f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3709084bf46d054e7f744ce73c6879436d07ec7df03f477f1a0f640ea2acb99f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:25 compute-0 podman[408945]: 2026-01-26 17:17:25.724510946 +0000 UTC m=+0.944369395 container init 99bfda8c88d09101969fff8db9cbaa91370c0f428e7ff340d64764b0dba83efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:17:25 compute-0 podman[408945]: 2026-01-26 17:17:25.731911177 +0000 UTC m=+0.951769606 container start 99bfda8c88d09101969fff8db9cbaa91370c0f428e7ff340d64764b0dba83efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:17:25 compute-0 podman[408945]: 2026-01-26 17:17:25.764023225 +0000 UTC m=+0.983881674 container attach 99bfda8c88d09101969fff8db9cbaa91370c0f428e7ff340d64764b0dba83efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:17:26 compute-0 quirky_wing[408962]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:17:26 compute-0 quirky_wing[408962]: --> All data devices are unavailable
Jan 26 17:17:26 compute-0 systemd[1]: libpod-99bfda8c88d09101969fff8db9cbaa91370c0f428e7ff340d64764b0dba83efe.scope: Deactivated successfully.
Jan 26 17:17:26 compute-0 podman[408982]: 2026-01-26 17:17:26.350850142 +0000 UTC m=+0.030314604 container died 99bfda8c88d09101969fff8db9cbaa91370c0f428e7ff340d64764b0dba83efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:17:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3709084bf46d054e7f744ce73c6879436d07ec7df03f477f1a0f640ea2acb99f-merged.mount: Deactivated successfully.
Jan 26 17:17:27 compute-0 ceph-mon[75140]: pgmap v3830: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:27 compute-0 podman[408982]: 2026-01-26 17:17:27.118054052 +0000 UTC m=+0.797518474 container remove 99bfda8c88d09101969fff8db9cbaa91370c0f428e7ff340d64764b0dba83efe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_wing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 17:17:27 compute-0 systemd[1]: libpod-conmon-99bfda8c88d09101969fff8db9cbaa91370c0f428e7ff340d64764b0dba83efe.scope: Deactivated successfully.
Jan 26 17:17:27 compute-0 sudo[408864]: pam_unix(sudo:session): session closed for user root
Jan 26 17:17:27 compute-0 sudo[408996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:17:27 compute-0 sudo[408996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:17:27 compute-0 sudo[408996]: pam_unix(sudo:session): session closed for user root
Jan 26 17:17:27 compute-0 sudo[409021]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:17:27 compute-0 sudo[409021]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:17:27 compute-0 podman[409056]: 2026-01-26 17:17:27.59721964 +0000 UTC m=+0.029966625 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:17:27 compute-0 podman[409056]: 2026-01-26 17:17:27.697588961 +0000 UTC m=+0.130335916 container create aeac446d1bf9a16f0165d134ca2e1435d79ef4eea5cea6dc91906faf557d574a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:17:27 compute-0 systemd[1]: Started libpod-conmon-aeac446d1bf9a16f0165d134ca2e1435d79ef4eea5cea6dc91906faf557d574a.scope.
Jan 26 17:17:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:17:27 compute-0 podman[409056]: 2026-01-26 17:17:27.866659897 +0000 UTC m=+0.299406882 container init aeac446d1bf9a16f0165d134ca2e1435d79ef4eea5cea6dc91906faf557d574a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:17:27 compute-0 podman[409056]: 2026-01-26 17:17:27.87743757 +0000 UTC m=+0.310184565 container start aeac446d1bf9a16f0165d134ca2e1435d79ef4eea5cea6dc91906faf557d574a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_meitner, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:17:27 compute-0 keen_meitner[409072]: 167 167
Jan 26 17:17:27 compute-0 systemd[1]: libpod-aeac446d1bf9a16f0165d134ca2e1435d79ef4eea5cea6dc91906faf557d574a.scope: Deactivated successfully.
Jan 26 17:17:27 compute-0 podman[409056]: 2026-01-26 17:17:27.919516172 +0000 UTC m=+0.352263127 container attach aeac446d1bf9a16f0165d134ca2e1435d79ef4eea5cea6dc91906faf557d574a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_meitner, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 17:17:27 compute-0 podman[409056]: 2026-01-26 17:17:27.920462665 +0000 UTC m=+0.353209640 container died aeac446d1bf9a16f0165d134ca2e1435d79ef4eea5cea6dc91906faf557d574a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_meitner, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:17:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e33bd183fe2adae4850975007202291db8210e9fb488de267eebaf5aa9a7fc2-merged.mount: Deactivated successfully.
Jan 26 17:17:28 compute-0 nova_compute[239965]: 2026-01-26 17:17:28.024 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:28 compute-0 podman[409056]: 2026-01-26 17:17:28.110824603 +0000 UTC m=+0.543571548 container remove aeac446d1bf9a16f0165d134ca2e1435d79ef4eea5cea6dc91906faf557d574a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_meitner, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 17:17:28 compute-0 systemd[1]: libpod-conmon-aeac446d1bf9a16f0165d134ca2e1435d79ef4eea5cea6dc91906faf557d574a.scope: Deactivated successfully.
Jan 26 17:17:28 compute-0 podman[409098]: 2026-01-26 17:17:28.273106771 +0000 UTC m=+0.030989680 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:17:28 compute-0 podman[409098]: 2026-01-26 17:17:28.311318408 +0000 UTC m=+0.069201237 container create f802b13ac978479739619988df26c0b3fb89ac579fef56f5b48e1eff852c74b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poincare, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:17:28 compute-0 systemd[1]: Started libpod-conmon-f802b13ac978479739619988df26c0b3fb89ac579fef56f5b48e1eff852c74b8.scope.
Jan 26 17:17:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acffe2a657fefad6ea06b63d9b9b895bf00de2ca1937ba23b703f3cd20c98775/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acffe2a657fefad6ea06b63d9b9b895bf00de2ca1937ba23b703f3cd20c98775/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acffe2a657fefad6ea06b63d9b9b895bf00de2ca1937ba23b703f3cd20c98775/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/acffe2a657fefad6ea06b63d9b9b895bf00de2ca1937ba23b703f3cd20c98775/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:28 compute-0 podman[409098]: 2026-01-26 17:17:28.449801403 +0000 UTC m=+0.207684242 container init f802b13ac978479739619988df26c0b3fb89ac579fef56f5b48e1eff852c74b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poincare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 17:17:28 compute-0 podman[409098]: 2026-01-26 17:17:28.461792218 +0000 UTC m=+0.219675047 container start f802b13ac978479739619988df26c0b3fb89ac579fef56f5b48e1eff852c74b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:17:28 compute-0 podman[409098]: 2026-01-26 17:17:28.468566984 +0000 UTC m=+0.226449933 container attach f802b13ac978479739619988df26c0b3fb89ac579fef56f5b48e1eff852c74b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poincare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 17:17:28 compute-0 quirky_poincare[409115]: {
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:     "0": [
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:         {
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "devices": [
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "/dev/loop3"
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             ],
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_name": "ceph_lv0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_size": "21470642176",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "name": "ceph_lv0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "tags": {
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.cluster_name": "ceph",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.crush_device_class": "",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.encrypted": "0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.objectstore": "bluestore",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.osd_id": "0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.type": "block",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.vdo": "0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.with_tpm": "0"
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             },
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "type": "block",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "vg_name": "ceph_vg0"
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:         }
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:     ],
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:     "1": [
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:         {
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "devices": [
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "/dev/loop4"
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             ],
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_name": "ceph_lv1",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_size": "21470642176",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "name": "ceph_lv1",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "tags": {
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.cluster_name": "ceph",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.crush_device_class": "",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.encrypted": "0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.objectstore": "bluestore",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.osd_id": "1",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.type": "block",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.vdo": "0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.with_tpm": "0"
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             },
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "type": "block",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "vg_name": "ceph_vg1"
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:         }
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:     ],
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:     "2": [
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:         {
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "devices": [
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "/dev/loop5"
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             ],
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_name": "ceph_lv2",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_size": "21470642176",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "name": "ceph_lv2",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "tags": {
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.cluster_name": "ceph",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.crush_device_class": "",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.encrypted": "0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.objectstore": "bluestore",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.osd_id": "2",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.type": "block",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.vdo": "0",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:                 "ceph.with_tpm": "0"
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             },
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "type": "block",
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:             "vg_name": "ceph_vg2"
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:         }
Jan 26 17:17:28 compute-0 quirky_poincare[409115]:     ]
Jan 26 17:17:28 compute-0 quirky_poincare[409115]: }
Jan 26 17:17:28 compute-0 systemd[1]: libpod-f802b13ac978479739619988df26c0b3fb89ac579fef56f5b48e1eff852c74b8.scope: Deactivated successfully.
Jan 26 17:17:28 compute-0 podman[409098]: 2026-01-26 17:17:28.787540644 +0000 UTC m=+0.545423473 container died f802b13ac978479739619988df26c0b3fb89ac579fef56f5b48e1eff852c74b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poincare, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:17:28
Jan 26 17:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', '.rgw.root', 'images']
Jan 26 17:17:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:17:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-acffe2a657fefad6ea06b63d9b9b895bf00de2ca1937ba23b703f3cd20c98775-merged.mount: Deactivated successfully.
Jan 26 17:17:28 compute-0 podman[409098]: 2026-01-26 17:17:28.896014884 +0000 UTC m=+0.653897713 container remove f802b13ac978479739619988df26c0b3fb89ac579fef56f5b48e1eff852c74b8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 17:17:28 compute-0 systemd[1]: libpod-conmon-f802b13ac978479739619988df26c0b3fb89ac579fef56f5b48e1eff852c74b8.scope: Deactivated successfully.
Jan 26 17:17:28 compute-0 sudo[409021]: pam_unix(sudo:session): session closed for user root
Jan 26 17:17:29 compute-0 nova_compute[239965]: 2026-01-26 17:17:29.007 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:29 compute-0 sudo[409137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:17:29 compute-0 sudo[409137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:17:29 compute-0 sudo[409137]: pam_unix(sudo:session): session closed for user root
Jan 26 17:17:29 compute-0 sudo[409162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:17:29 compute-0 sudo[409162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:17:29 compute-0 podman[409199]: 2026-01-26 17:17:29.4326211 +0000 UTC m=+0.048343476 container create 632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_vaughan, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:17:29 compute-0 ceph-mon[75140]: pgmap v3831: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:29 compute-0 systemd[1]: Started libpod-conmon-632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158.scope.
Jan 26 17:17:29 compute-0 podman[409199]: 2026-01-26 17:17:29.411874301 +0000 UTC m=+0.027596677 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:17:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:17:29 compute-0 podman[409199]: 2026-01-26 17:17:29.535337778 +0000 UTC m=+0.151060174 container init 632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_vaughan, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:17:29 compute-0 podman[409199]: 2026-01-26 17:17:29.543433036 +0000 UTC m=+0.159155392 container start 632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 17:17:29 compute-0 podman[409199]: 2026-01-26 17:17:29.547366693 +0000 UTC m=+0.163089119 container attach 632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_vaughan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 26 17:17:29 compute-0 flamboyant_vaughan[409216]: 167 167
Jan 26 17:17:29 compute-0 systemd[1]: libpod-632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158.scope: Deactivated successfully.
Jan 26 17:17:29 compute-0 conmon[409216]: conmon 632508600af27a1dde87 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158.scope/container/memory.events
Jan 26 17:17:29 compute-0 podman[409199]: 2026-01-26 17:17:29.552659333 +0000 UTC m=+0.168381689 container died 632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_vaughan, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:17:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-d406e3485bfbfd405d639f4c60e7d9a4b618737026aa643356af8773a6cd003a-merged.mount: Deactivated successfully.
Jan 26 17:17:29 compute-0 podman[409199]: 2026-01-26 17:17:29.600586358 +0000 UTC m=+0.216308704 container remove 632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:17:29 compute-0 systemd[1]: libpod-conmon-632508600af27a1dde87bb7afc166fec1e16809b90586b275b8d284402283158.scope: Deactivated successfully.
Jan 26 17:17:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:29 compute-0 podman[409240]: 2026-01-26 17:17:29.801337759 +0000 UTC m=+0.049652738 container create cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:17:29 compute-0 systemd[1]: Started libpod-conmon-cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b.scope.
Jan 26 17:17:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:17:29 compute-0 podman[409240]: 2026-01-26 17:17:29.780188251 +0000 UTC m=+0.028503250 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b730701f997726befef7d21ce264600c22892aeb9e31509dfda22785ae2b0f3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b730701f997726befef7d21ce264600c22892aeb9e31509dfda22785ae2b0f3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b730701f997726befef7d21ce264600c22892aeb9e31509dfda22785ae2b0f3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b730701f997726befef7d21ce264600c22892aeb9e31509dfda22785ae2b0f3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:17:29 compute-0 podman[409240]: 2026-01-26 17:17:29.895213061 +0000 UTC m=+0.143528120 container init cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:17:29 compute-0 podman[409240]: 2026-01-26 17:17:29.903762431 +0000 UTC m=+0.152077410 container start cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:17:29 compute-0 podman[409240]: 2026-01-26 17:17:29.907224456 +0000 UTC m=+0.155539515 container attach cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:17:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:17:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:17:30 compute-0 lvm[409334]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:17:30 compute-0 lvm[409335]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:17:30 compute-0 lvm[409334]: VG ceph_vg0 finished
Jan 26 17:17:30 compute-0 lvm[409335]: VG ceph_vg1 finished
Jan 26 17:17:30 compute-0 lvm[409337]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:17:30 compute-0 lvm[409337]: VG ceph_vg2 finished
Jan 26 17:17:30 compute-0 recursing_rhodes[409256]: {}
Jan 26 17:17:30 compute-0 systemd[1]: libpod-cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b.scope: Deactivated successfully.
Jan 26 17:17:30 compute-0 podman[409240]: 2026-01-26 17:17:30.874706196 +0000 UTC m=+1.123021195 container died cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 17:17:30 compute-0 systemd[1]: libpod-cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b.scope: Consumed 1.566s CPU time.
Jan 26 17:17:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-7b730701f997726befef7d21ce264600c22892aeb9e31509dfda22785ae2b0f3-merged.mount: Deactivated successfully.
Jan 26 17:17:30 compute-0 podman[409240]: 2026-01-26 17:17:30.926101377 +0000 UTC m=+1.174416346 container remove cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_rhodes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:17:30 compute-0 systemd[1]: libpod-conmon-cb0365adf9c5681e0bfe5d4c4626c4891eb666d5f5037e85226189bf7beaa25b.scope: Deactivated successfully.
Jan 26 17:17:30 compute-0 sudo[409162]: pam_unix(sudo:session): session closed for user root
Jan 26 17:17:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:17:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:17:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:17:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:17:31 compute-0 sudo[409353]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:17:31 compute-0 sudo[409353]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:17:31 compute-0 sudo[409353]: pam_unix(sudo:session): session closed for user root
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:17:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:17:31 compute-0 ceph-mon[75140]: pgmap v3832: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:17:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:17:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:33 compute-0 nova_compute[239965]: 2026-01-26 17:17:33.076 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:33 compute-0 ceph-mon[75140]: pgmap v3833: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:34 compute-0 nova_compute[239965]: 2026-01-26 17:17:34.010 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:35 compute-0 ceph-mon[75140]: pgmap v3834: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:37 compute-0 ceph-mon[75140]: pgmap v3835: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:38 compute-0 nova_compute[239965]: 2026-01-26 17:17:38.079 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:39 compute-0 nova_compute[239965]: 2026-01-26 17:17:39.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:39 compute-0 ceph-mon[75140]: pgmap v3836: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:40 compute-0 nova_compute[239965]: 2026-01-26 17:17:40.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:41 compute-0 ceph-mon[75140]: pgmap v3837: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:42 compute-0 nova_compute[239965]: 2026-01-26 17:17:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:42 compute-0 nova_compute[239965]: 2026-01-26 17:17:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:43 compute-0 nova_compute[239965]: 2026-01-26 17:17:43.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:43 compute-0 podman[409379]: 2026-01-26 17:17:43.396928731 +0000 UTC m=+0.075781909 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 26 17:17:43 compute-0 nova_compute[239965]: 2026-01-26 17:17:43.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:43 compute-0 ceph-mon[75140]: pgmap v3838: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:44 compute-0 nova_compute[239965]: 2026-01-26 17:17:44.014 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:45 compute-0 ceph-mon[75140]: pgmap v3839: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:46 compute-0 podman[409399]: 2026-01-26 17:17:46.417005217 +0000 UTC m=+0.106062461 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:17:47 compute-0 ceph-mon[75140]: pgmap v3840: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:48 compute-0 nova_compute[239965]: 2026-01-26 17:17:48.084 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:17:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2295218931' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:17:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:17:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2295218931' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:17:49 compute-0 nova_compute[239965]: 2026-01-26 17:17:49.016 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:49 compute-0 ceph-mon[75140]: pgmap v3841: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2295218931' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:17:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2295218931' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:17:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:17:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:17:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:51 compute-0 nova_compute[239965]: 2026-01-26 17:17:51.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:51 compute-0 ceph-mon[75140]: pgmap v3842: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:53 compute-0 nova_compute[239965]: 2026-01-26 17:17:53.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:53 compute-0 ceph-mon[75140]: pgmap v3843: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:54 compute-0 nova_compute[239965]: 2026-01-26 17:17:54.018 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:54 compute-0 nova_compute[239965]: 2026-01-26 17:17:54.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:54 compute-0 nova_compute[239965]: 2026-01-26 17:17:54.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:17:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:55 compute-0 nova_compute[239965]: 2026-01-26 17:17:55.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:55 compute-0 ceph-mon[75140]: pgmap v3844: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:57 compute-0 ceph-mon[75140]: pgmap v3845: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.089 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.530 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.530 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.560 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.561 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.561 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.561 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:17:58 compute-0 nova_compute[239965]: 2026-01-26 17:17:58.561 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.020 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:17:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:17:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/824797882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.138 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.300 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.302 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3553MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.302 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.302 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:17:59.302 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:17:59.303 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:17:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:17:59.303 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.376 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.376 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.392 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:17:59 compute-0 ceph-mon[75140]: pgmap v3846: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:17:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/824797882' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:17:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:17:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:17:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/747273756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.954 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.966 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.985 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.987 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:17:59 compute-0 nova_compute[239965]: 2026-01-26 17:17:59.988 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:18:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:18:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:18:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/747273756' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:18:01 compute-0 ceph-mon[75140]: pgmap v3847: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:02 compute-0 ceph-mon[75140]: pgmap v3848: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:03 compute-0 nova_compute[239965]: 2026-01-26 17:18:03.093 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:04 compute-0 nova_compute[239965]: 2026-01-26 17:18:04.022 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:05 compute-0 ceph-mon[75140]: pgmap v3849: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:07 compute-0 ceph-mon[75140]: pgmap v3850: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:08 compute-0 nova_compute[239965]: 2026-01-26 17:18:08.094 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:09 compute-0 nova_compute[239965]: 2026-01-26 17:18:09.023 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:09 compute-0 ceph-mon[75140]: pgmap v3851: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:11 compute-0 ceph-mon[75140]: pgmap v3852: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:13 compute-0 nova_compute[239965]: 2026-01-26 17:18:13.096 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:13 compute-0 ceph-mon[75140]: pgmap v3853: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:14 compute-0 nova_compute[239965]: 2026-01-26 17:18:14.027 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:14 compute-0 podman[409470]: 2026-01-26 17:18:14.375513694 +0000 UTC m=+0.061815096 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 26 17:18:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:15 compute-0 ceph-mon[75140]: pgmap v3854: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:17 compute-0 podman[409489]: 2026-01-26 17:18:17.43353927 +0000 UTC m=+0.112628512 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:18:17 compute-0 ceph-mon[75140]: pgmap v3855: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:18 compute-0 nova_compute[239965]: 2026-01-26 17:18:18.100 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:19 compute-0 nova_compute[239965]: 2026-01-26 17:18:19.029 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:19 compute-0 ceph-mon[75140]: pgmap v3856: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:19 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:21 compute-0 ceph-mon[75140]: pgmap v3857: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:22 compute-0 ceph-mon[75140]: pgmap v3858: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:23 compute-0 nova_compute[239965]: 2026-01-26 17:18:23.102 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:24 compute-0 nova_compute[239965]: 2026-01-26 17:18:24.029 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:25 compute-0 ceph-mon[75140]: pgmap v3859: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:27 compute-0 ceph-mon[75140]: pgmap v3860: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:28 compute-0 nova_compute[239965]: 2026-01-26 17:18:28.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:18:28
Jan 26 17:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.control', '.mgr', 'images', 'cephfs.cephfs.data', 'vms']
Jan 26 17:18:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:18:29 compute-0 nova_compute[239965]: 2026-01-26 17:18:29.032 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:29 compute-0 ceph-mon[75140]: pgmap v3861: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:18:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:18:31 compute-0 sudo[409516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:18:31 compute-0 sudo[409516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:18:31 compute-0 sudo[409516]: pam_unix(sudo:session): session closed for user root
Jan 26 17:18:31 compute-0 sudo[409541]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:18:31 compute-0 sudo[409541]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:18:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:18:31 compute-0 ceph-mon[75140]: pgmap v3862: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:31 compute-0 sudo[409541]: pam_unix(sudo:session): session closed for user root
Jan 26 17:18:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:18:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:18:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:18:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:18:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:18:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:18:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:18:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:18:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:18:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:18:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:18:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:18:31 compute-0 sudo[409597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:18:31 compute-0 sudo[409597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:18:31 compute-0 sudo[409597]: pam_unix(sudo:session): session closed for user root
Jan 26 17:18:31 compute-0 sudo[409622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:18:31 compute-0 sudo[409622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:18:32 compute-0 podman[409659]: 2026-01-26 17:18:32.21565142 +0000 UTC m=+0.046045650 container create fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:18:32 compute-0 systemd[1]: Started libpod-conmon-fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec.scope.
Jan 26 17:18:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:18:32 compute-0 podman[409659]: 2026-01-26 17:18:32.196355736 +0000 UTC m=+0.026749976 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:18:32 compute-0 podman[409659]: 2026-01-26 17:18:32.308977947 +0000 UTC m=+0.139372197 container init fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:18:32 compute-0 podman[409659]: 2026-01-26 17:18:32.319417904 +0000 UTC m=+0.149812134 container start fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:18:32 compute-0 podman[409659]: 2026-01-26 17:18:32.323458153 +0000 UTC m=+0.153852403 container attach fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:18:32 compute-0 eloquent_dubinsky[409675]: 167 167
Jan 26 17:18:32 compute-0 systemd[1]: libpod-fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec.scope: Deactivated successfully.
Jan 26 17:18:32 compute-0 conmon[409675]: conmon fc84d7b08f9101ee0d90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec.scope/container/memory.events
Jan 26 17:18:32 compute-0 podman[409659]: 2026-01-26 17:18:32.32782551 +0000 UTC m=+0.158219730 container died fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:18:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-020c10b7489317d16cdbb1e3151e2b53904afb15e57d8ad3c0807d869b21da29-merged.mount: Deactivated successfully.
Jan 26 17:18:32 compute-0 podman[409659]: 2026-01-26 17:18:32.368760293 +0000 UTC m=+0.199154513 container remove fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 17:18:32 compute-0 systemd[1]: libpod-conmon-fc84d7b08f9101ee0d9050ab230a3590bd7fd053f769aa9e439892ea619e0eec.scope: Deactivated successfully.
Jan 26 17:18:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:18:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:18:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:18:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:18:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:18:32 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:18:32 compute-0 podman[409699]: 2026-01-26 17:18:32.535569413 +0000 UTC m=+0.052279843 container create 45c4919c48cd2a8a639b1062b7abfa5ca079603272016d693809b828e8522aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:18:32 compute-0 systemd[1]: Started libpod-conmon-45c4919c48cd2a8a639b1062b7abfa5ca079603272016d693809b828e8522aa6.scope.
Jan 26 17:18:32 compute-0 podman[409699]: 2026-01-26 17:18:32.512491987 +0000 UTC m=+0.029202427 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:18:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f65db1103ca5dea65a636a12d2f95b532db59448669781fd50ddd6e3d4d6986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f65db1103ca5dea65a636a12d2f95b532db59448669781fd50ddd6e3d4d6986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f65db1103ca5dea65a636a12d2f95b532db59448669781fd50ddd6e3d4d6986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f65db1103ca5dea65a636a12d2f95b532db59448669781fd50ddd6e3d4d6986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f65db1103ca5dea65a636a12d2f95b532db59448669781fd50ddd6e3d4d6986/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:32 compute-0 podman[409699]: 2026-01-26 17:18:32.63663591 +0000 UTC m=+0.153346380 container init 45c4919c48cd2a8a639b1062b7abfa5ca079603272016d693809b828e8522aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:18:32 compute-0 podman[409699]: 2026-01-26 17:18:32.648688506 +0000 UTC m=+0.165398926 container start 45c4919c48cd2a8a639b1062b7abfa5ca079603272016d693809b828e8522aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_faraday, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:18:32 compute-0 podman[409699]: 2026-01-26 17:18:32.65335135 +0000 UTC m=+0.170061850 container attach 45c4919c48cd2a8a639b1062b7abfa5ca079603272016d693809b828e8522aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 17:18:33 compute-0 nova_compute[239965]: 2026-01-26 17:18:33.106 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:33 compute-0 naughty_faraday[409716]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:18:33 compute-0 naughty_faraday[409716]: --> All data devices are unavailable
Jan 26 17:18:33 compute-0 systemd[1]: libpod-45c4919c48cd2a8a639b1062b7abfa5ca079603272016d693809b828e8522aa6.scope: Deactivated successfully.
Jan 26 17:18:33 compute-0 podman[409736]: 2026-01-26 17:18:33.217183795 +0000 UTC m=+0.032842517 container died 45c4919c48cd2a8a639b1062b7abfa5ca079603272016d693809b828e8522aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_faraday, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Jan 26 17:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-2f65db1103ca5dea65a636a12d2f95b532db59448669781fd50ddd6e3d4d6986-merged.mount: Deactivated successfully.
Jan 26 17:18:33 compute-0 podman[409736]: 2026-01-26 17:18:33.267341084 +0000 UTC m=+0.082999766 container remove 45c4919c48cd2a8a639b1062b7abfa5ca079603272016d693809b828e8522aa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:18:33 compute-0 systemd[1]: libpod-conmon-45c4919c48cd2a8a639b1062b7abfa5ca079603272016d693809b828e8522aa6.scope: Deactivated successfully.
Jan 26 17:18:33 compute-0 sudo[409622]: pam_unix(sudo:session): session closed for user root
Jan 26 17:18:33 compute-0 sudo[409751]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:18:33 compute-0 sudo[409751]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:18:33 compute-0 sudo[409751]: pam_unix(sudo:session): session closed for user root
Jan 26 17:18:33 compute-0 sudo[409776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:18:33 compute-0 sudo[409776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:18:33 compute-0 ceph-mon[75140]: pgmap v3863: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:33 compute-0 podman[409813]: 2026-01-26 17:18:33.787359174 +0000 UTC m=+0.045105177 container create 29be894c13c54c11dce01476bd09b9cb9aad19e8304c2641a891eb2573de8bbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:18:33 compute-0 systemd[1]: Started libpod-conmon-29be894c13c54c11dce01476bd09b9cb9aad19e8304c2641a891eb2573de8bbc.scope.
Jan 26 17:18:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:18:33 compute-0 podman[409813]: 2026-01-26 17:18:33.769620078 +0000 UTC m=+0.027366081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:18:33 compute-0 podman[409813]: 2026-01-26 17:18:33.868536044 +0000 UTC m=+0.126282127 container init 29be894c13c54c11dce01476bd09b9cb9aad19e8304c2641a891eb2573de8bbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 17:18:33 compute-0 podman[409813]: 2026-01-26 17:18:33.87612492 +0000 UTC m=+0.133870923 container start 29be894c13c54c11dce01476bd09b9cb9aad19e8304c2641a891eb2573de8bbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:18:33 compute-0 podman[409813]: 2026-01-26 17:18:33.880102777 +0000 UTC m=+0.137848820 container attach 29be894c13c54c11dce01476bd09b9cb9aad19e8304c2641a891eb2573de8bbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:18:33 compute-0 quizzical_jang[409829]: 167 167
Jan 26 17:18:33 compute-0 systemd[1]: libpod-29be894c13c54c11dce01476bd09b9cb9aad19e8304c2641a891eb2573de8bbc.scope: Deactivated successfully.
Jan 26 17:18:33 compute-0 podman[409813]: 2026-01-26 17:18:33.881902392 +0000 UTC m=+0.139648395 container died 29be894c13c54c11dce01476bd09b9cb9aad19e8304c2641a891eb2573de8bbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:18:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-be172f8b1a5ddbd4c5c17ab8e8efcf599e33fbd0a5f48ea7442d4d6f806d1887-merged.mount: Deactivated successfully.
Jan 26 17:18:33 compute-0 podman[409813]: 2026-01-26 17:18:33.923287326 +0000 UTC m=+0.181033339 container remove 29be894c13c54c11dce01476bd09b9cb9aad19e8304c2641a891eb2573de8bbc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:18:33 compute-0 systemd[1]: libpod-conmon-29be894c13c54c11dce01476bd09b9cb9aad19e8304c2641a891eb2573de8bbc.scope: Deactivated successfully.
Jan 26 17:18:34 compute-0 nova_compute[239965]: 2026-01-26 17:18:34.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:34 compute-0 podman[409852]: 2026-01-26 17:18:34.078798109 +0000 UTC m=+0.041267573 container create b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:18:34 compute-0 systemd[1]: Started libpod-conmon-b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992.scope.
Jan 26 17:18:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bb7f841e162f4d8f4984554c481e1477f6b96055ef3713434c8b464ae058ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bb7f841e162f4d8f4984554c481e1477f6b96055ef3713434c8b464ae058ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bb7f841e162f4d8f4984554c481e1477f6b96055ef3713434c8b464ae058ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00bb7f841e162f4d8f4984554c481e1477f6b96055ef3713434c8b464ae058ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:34 compute-0 podman[409852]: 2026-01-26 17:18:34.061775102 +0000 UTC m=+0.024244576 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:18:34 compute-0 podman[409852]: 2026-01-26 17:18:34.159146809 +0000 UTC m=+0.121616333 container init b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_poitras, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:18:34 compute-0 podman[409852]: 2026-01-26 17:18:34.165842783 +0000 UTC m=+0.128312267 container start b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_poitras, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 17:18:34 compute-0 podman[409852]: 2026-01-26 17:18:34.170644421 +0000 UTC m=+0.133113905 container attach b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 17:18:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:34 compute-0 sad_poitras[409870]: {
Jan 26 17:18:34 compute-0 sad_poitras[409870]:     "0": [
Jan 26 17:18:34 compute-0 sad_poitras[409870]:         {
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "devices": [
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "/dev/loop3"
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             ],
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_name": "ceph_lv0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_size": "21470642176",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "name": "ceph_lv0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "tags": {
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.cluster_name": "ceph",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.crush_device_class": "",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.encrypted": "0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.objectstore": "bluestore",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.osd_id": "0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.type": "block",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.vdo": "0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.with_tpm": "0"
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             },
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "type": "block",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "vg_name": "ceph_vg0"
Jan 26 17:18:34 compute-0 sad_poitras[409870]:         }
Jan 26 17:18:34 compute-0 sad_poitras[409870]:     ],
Jan 26 17:18:34 compute-0 sad_poitras[409870]:     "1": [
Jan 26 17:18:34 compute-0 sad_poitras[409870]:         {
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "devices": [
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "/dev/loop4"
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             ],
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_name": "ceph_lv1",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_size": "21470642176",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "name": "ceph_lv1",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "tags": {
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.cluster_name": "ceph",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.crush_device_class": "",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.encrypted": "0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.objectstore": "bluestore",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.osd_id": "1",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.type": "block",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.vdo": "0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.with_tpm": "0"
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             },
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "type": "block",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "vg_name": "ceph_vg1"
Jan 26 17:18:34 compute-0 sad_poitras[409870]:         }
Jan 26 17:18:34 compute-0 sad_poitras[409870]:     ],
Jan 26 17:18:34 compute-0 sad_poitras[409870]:     "2": [
Jan 26 17:18:34 compute-0 sad_poitras[409870]:         {
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "devices": [
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "/dev/loop5"
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             ],
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_name": "ceph_lv2",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_size": "21470642176",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "name": "ceph_lv2",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "tags": {
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.cluster_name": "ceph",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.crush_device_class": "",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.encrypted": "0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.objectstore": "bluestore",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.osd_id": "2",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.type": "block",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.vdo": "0",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:                 "ceph.with_tpm": "0"
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             },
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "type": "block",
Jan 26 17:18:34 compute-0 sad_poitras[409870]:             "vg_name": "ceph_vg2"
Jan 26 17:18:34 compute-0 sad_poitras[409870]:         }
Jan 26 17:18:34 compute-0 sad_poitras[409870]:     ]
Jan 26 17:18:34 compute-0 sad_poitras[409870]: }
Jan 26 17:18:34 compute-0 systemd[1]: libpod-b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992.scope: Deactivated successfully.
Jan 26 17:18:34 compute-0 conmon[409870]: conmon b8914f4867ad2c9ec739 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992.scope/container/memory.events
Jan 26 17:18:34 compute-0 podman[409852]: 2026-01-26 17:18:34.455109335 +0000 UTC m=+0.417578799 container died b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:18:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-00bb7f841e162f4d8f4984554c481e1477f6b96055ef3713434c8b464ae058ce-merged.mount: Deactivated successfully.
Jan 26 17:18:34 compute-0 podman[409852]: 2026-01-26 17:18:34.501470712 +0000 UTC m=+0.463940176 container remove b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 17:18:34 compute-0 systemd[1]: libpod-conmon-b8914f4867ad2c9ec7396e6242466b1b34fe78c6ba35c0baaaa723a4e8413992.scope: Deactivated successfully.
Jan 26 17:18:34 compute-0 sudo[409776]: pam_unix(sudo:session): session closed for user root
Jan 26 17:18:34 compute-0 sudo[409891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:18:34 compute-0 sudo[409891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:18:34 compute-0 sudo[409891]: pam_unix(sudo:session): session closed for user root
Jan 26 17:18:34 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:34 compute-0 sudo[409916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:18:34 compute-0 sudo[409916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:18:34 compute-0 podman[409953]: 2026-01-26 17:18:34.997857823 +0000 UTC m=+0.052664463 container create 5c48058fc969e8435df159a82cbf5b5495265c962e8a8dd8d22989fb232992f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:18:35 compute-0 systemd[1]: Started libpod-conmon-5c48058fc969e8435df159a82cbf5b5495265c962e8a8dd8d22989fb232992f9.scope.
Jan 26 17:18:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:18:35 compute-0 podman[409953]: 2026-01-26 17:18:34.969869566 +0000 UTC m=+0.024676226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:18:35 compute-0 podman[409953]: 2026-01-26 17:18:35.08546715 +0000 UTC m=+0.140273750 container init 5c48058fc969e8435df159a82cbf5b5495265c962e8a8dd8d22989fb232992f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chandrasekhar, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:18:35 compute-0 podman[409953]: 2026-01-26 17:18:35.093028816 +0000 UTC m=+0.147835456 container start 5c48058fc969e8435df159a82cbf5b5495265c962e8a8dd8d22989fb232992f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:18:35 compute-0 podman[409953]: 2026-01-26 17:18:35.097532776 +0000 UTC m=+0.152339396 container attach 5c48058fc969e8435df159a82cbf5b5495265c962e8a8dd8d22989fb232992f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chandrasekhar, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:18:35 compute-0 dreamy_chandrasekhar[409969]: 167 167
Jan 26 17:18:35 compute-0 systemd[1]: libpod-5c48058fc969e8435df159a82cbf5b5495265c962e8a8dd8d22989fb232992f9.scope: Deactivated successfully.
Jan 26 17:18:35 compute-0 podman[409953]: 2026-01-26 17:18:35.100437037 +0000 UTC m=+0.155243647 container died 5c48058fc969e8435df159a82cbf5b5495265c962e8a8dd8d22989fb232992f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chandrasekhar, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:18:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-38bf5cb9dbf0f915345cac483490d04da27e867cb80a707aaf20ee29f8aa317e-merged.mount: Deactivated successfully.
Jan 26 17:18:35 compute-0 podman[409953]: 2026-01-26 17:18:35.149519971 +0000 UTC m=+0.204326571 container remove 5c48058fc969e8435df159a82cbf5b5495265c962e8a8dd8d22989fb232992f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 26 17:18:35 compute-0 systemd[1]: libpod-conmon-5c48058fc969e8435df159a82cbf5b5495265c962e8a8dd8d22989fb232992f9.scope: Deactivated successfully.
Jan 26 17:18:35 compute-0 podman[409992]: 2026-01-26 17:18:35.337994232 +0000 UTC m=+0.058123587 container create 1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:18:35 compute-0 systemd[1]: Started libpod-conmon-1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32.scope.
Jan 26 17:18:35 compute-0 podman[409992]: 2026-01-26 17:18:35.312868106 +0000 UTC m=+0.032997511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:18:35 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ea05a3a6ce54e31b3be49c2ada6ac4a54feb1da60a8220688ff581701784b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ea05a3a6ce54e31b3be49c2ada6ac4a54feb1da60a8220688ff581701784b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ea05a3a6ce54e31b3be49c2ada6ac4a54feb1da60a8220688ff581701784b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09ea05a3a6ce54e31b3be49c2ada6ac4a54feb1da60a8220688ff581701784b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:18:35 compute-0 podman[409992]: 2026-01-26 17:18:35.436770713 +0000 UTC m=+0.156900078 container init 1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:18:35 compute-0 podman[409992]: 2026-01-26 17:18:35.444283308 +0000 UTC m=+0.164412663 container start 1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 17:18:35 compute-0 podman[409992]: 2026-01-26 17:18:35.448723397 +0000 UTC m=+0.168852782 container attach 1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_galileo, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:18:35 compute-0 ceph-mon[75140]: pgmap v3864: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:36 compute-0 lvm[410088]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:18:36 compute-0 lvm[410088]: VG ceph_vg1 finished
Jan 26 17:18:36 compute-0 lvm[410090]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:18:36 compute-0 lvm[410090]: VG ceph_vg2 finished
Jan 26 17:18:36 compute-0 lvm[410087]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:18:36 compute-0 lvm[410087]: VG ceph_vg0 finished
Jan 26 17:18:36 compute-0 admiring_galileo[410009]: {}
Jan 26 17:18:36 compute-0 systemd[1]: libpod-1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32.scope: Deactivated successfully.
Jan 26 17:18:36 compute-0 systemd[1]: libpod-1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32.scope: Consumed 1.500s CPU time.
Jan 26 17:18:36 compute-0 podman[409992]: 2026-01-26 17:18:36.361511956 +0000 UTC m=+1.081641341 container died 1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_galileo, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:18:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-09ea05a3a6ce54e31b3be49c2ada6ac4a54feb1da60a8220688ff581701784b0-merged.mount: Deactivated successfully.
Jan 26 17:18:36 compute-0 podman[409992]: 2026-01-26 17:18:36.419748263 +0000 UTC m=+1.139877618 container remove 1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:18:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:36 compute-0 systemd[1]: libpod-conmon-1995a7d13a2631dc249e8e56d89355bd8eec1af8edfd5c5caa8f3da8906e6c32.scope: Deactivated successfully.
Jan 26 17:18:36 compute-0 sudo[409916]: pam_unix(sudo:session): session closed for user root
Jan 26 17:18:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:18:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:18:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:18:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:18:36 compute-0 sudo[410105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:18:36 compute-0 sudo[410105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:18:36 compute-0 sudo[410105]: pam_unix(sudo:session): session closed for user root
Jan 26 17:18:37 compute-0 ceph-mon[75140]: pgmap v3865: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:18:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:18:38 compute-0 nova_compute[239965]: 2026-01-26 17:18:38.110 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:38 compute-0 ceph-mon[75140]: pgmap v3866: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:39 compute-0 nova_compute[239965]: 2026-01-26 17:18:39.035 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:41 compute-0 ceph-mon[75140]: pgmap v3867: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:41 compute-0 nova_compute[239965]: 2026-01-26 17:18:41.968 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:42 compute-0 nova_compute[239965]: 2026-01-26 17:18:42.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:43 compute-0 nova_compute[239965]: 2026-01-26 17:18:43.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:43 compute-0 nova_compute[239965]: 2026-01-26 17:18:43.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:43 compute-0 nova_compute[239965]: 2026-01-26 17:18:43.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:43 compute-0 ceph-mon[75140]: pgmap v3868: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:44 compute-0 nova_compute[239965]: 2026-01-26 17:18:44.038 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:45 compute-0 podman[410130]: 2026-01-26 17:18:45.410863836 +0000 UTC m=+0.093641387 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 26 17:18:45 compute-0 ceph-mon[75140]: pgmap v3869: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:47 compute-0 ceph-mon[75140]: pgmap v3870: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:48 compute-0 nova_compute[239965]: 2026-01-26 17:18:48.114 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:48 compute-0 podman[410149]: 2026-01-26 17:18:48.460641279 +0000 UTC m=+0.152453509 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:18:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:18:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1300933424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:18:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:18:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1300933424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:18:49 compute-0 nova_compute[239965]: 2026-01-26 17:18:49.040 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:49 compute-0 ceph-mon[75140]: pgmap v3871: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1300933424' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:18:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1300933424' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:18:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:18:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:18:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:51 compute-0 nova_compute[239965]: 2026-01-26 17:18:51.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:51 compute-0 ceph-mon[75140]: pgmap v3872: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:53 compute-0 nova_compute[239965]: 2026-01-26 17:18:53.115 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:53 compute-0 ceph-mon[75140]: pgmap v3873: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:54 compute-0 nova_compute[239965]: 2026-01-26 17:18:54.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:18:55 compute-0 ceph-mon[75140]: pgmap v3874: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:56 compute-0 nova_compute[239965]: 2026-01-26 17:18:56.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:56 compute-0 nova_compute[239965]: 2026-01-26 17:18:56.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:18:57 compute-0 nova_compute[239965]: 2026-01-26 17:18:57.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:18:57 compute-0 ceph-mon[75140]: pgmap v3875: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:58 compute-0 nova_compute[239965]: 2026-01-26 17:18:58.116 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:59 compute-0 nova_compute[239965]: 2026-01-26 17:18:59.044 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:18:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:18:59.303 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:18:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:18:59.305 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:18:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:18:59.305 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:18:59 compute-0 ceph-mon[75140]: pgmap v3876: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:18:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:19:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.529 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.529 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.554 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.554 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.554 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.555 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:19:00 compute-0 nova_compute[239965]: 2026-01-26 17:19:00.555 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:19:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:19:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/12281815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:19:01 compute-0 nova_compute[239965]: 2026-01-26 17:19:01.103 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:19:01 compute-0 nova_compute[239965]: 2026-01-26 17:19:01.270 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:19:01 compute-0 nova_compute[239965]: 2026-01-26 17:19:01.271 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3543MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:19:01 compute-0 nova_compute[239965]: 2026-01-26 17:19:01.272 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:19:01 compute-0 nova_compute[239965]: 2026-01-26 17:19:01.272 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:19:01 compute-0 nova_compute[239965]: 2026-01-26 17:19:01.427 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:19:01 compute-0 nova_compute[239965]: 2026-01-26 17:19:01.427 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:19:01 compute-0 nova_compute[239965]: 2026-01-26 17:19:01.443 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:19:01 compute-0 ceph-mon[75140]: pgmap v3877: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/12281815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:19:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:19:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3553191616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:19:02 compute-0 nova_compute[239965]: 2026-01-26 17:19:02.139 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.697s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:19:02 compute-0 nova_compute[239965]: 2026-01-26 17:19:02.146 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:19:02 compute-0 nova_compute[239965]: 2026-01-26 17:19:02.162 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:19:02 compute-0 nova_compute[239965]: 2026-01-26 17:19:02.164 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:19:02 compute-0 nova_compute[239965]: 2026-01-26 17:19:02.165 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:19:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3553191616' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:19:03 compute-0 nova_compute[239965]: 2026-01-26 17:19:03.117 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:03 compute-0 ceph-mon[75140]: pgmap v3878: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:04 compute-0 nova_compute[239965]: 2026-01-26 17:19:04.047 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:05 compute-0 ceph-mon[75140]: pgmap v3879: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:06 compute-0 nova_compute[239965]: 2026-01-26 17:19:06.158 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:06 compute-0 ceph-mon[75140]: pgmap v3880: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:08 compute-0 nova_compute[239965]: 2026-01-26 17:19:08.120 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:09 compute-0 nova_compute[239965]: 2026-01-26 17:19:09.049 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:09 compute-0 ceph-mon[75140]: pgmap v3881: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:11 compute-0 ceph-mon[75140]: pgmap v3882: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:11 compute-0 sshd-session[410220]: Invalid user sol from 45.148.10.240 port 48018
Jan 26 17:19:11 compute-0 sshd-session[410220]: Connection closed by invalid user sol 45.148.10.240 port 48018 [preauth]
Jan 26 17:19:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:13 compute-0 nova_compute[239965]: 2026-01-26 17:19:13.164 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:13 compute-0 ceph-mon[75140]: pgmap v3883: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:14 compute-0 nova_compute[239965]: 2026-01-26 17:19:14.051 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:15 compute-0 ceph-mon[75140]: pgmap v3884: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:16 compute-0 podman[410222]: 2026-01-26 17:19:16.364118454 +0000 UTC m=+0.049959926 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:19:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:17 compute-0 ceph-mon[75140]: pgmap v3885: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:18 compute-0 nova_compute[239965]: 2026-01-26 17:19:18.207 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:18 compute-0 ceph-mon[75140]: pgmap v3886: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:19 compute-0 nova_compute[239965]: 2026-01-26 17:19:19.052 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:19 compute-0 podman[410242]: 2026-01-26 17:19:19.394289856 +0000 UTC m=+0.079585313 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 26 17:19:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:20 compute-0 ceph-mon[75140]: pgmap v3887: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:23 compute-0 nova_compute[239965]: 2026-01-26 17:19:23.207 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:23 compute-0 ceph-mon[75140]: pgmap v3888: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:24 compute-0 nova_compute[239965]: 2026-01-26 17:19:24.054 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:25 compute-0 ceph-mon[75140]: pgmap v3889: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:27 compute-0 ceph-mon[75140]: pgmap v3890: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:28 compute-0 nova_compute[239965]: 2026-01-26 17:19:28.210 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:19:28
Jan 26 17:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'default.rgw.log', 'images', 'default.rgw.control', 'volumes', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta']
Jan 26 17:19:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:19:29 compute-0 nova_compute[239965]: 2026-01-26 17:19:29.056 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:29 compute-0 ceph-mon[75140]: pgmap v3891: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:19:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:19:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:19:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:19:31 compute-0 sshd-session[410268]: Received disconnect from 45.148.10.147 port 36420:11:  [preauth]
Jan 26 17:19:31 compute-0 sshd-session[410268]: Disconnected from authenticating user root 45.148.10.147 port 36420 [preauth]
Jan 26 17:19:31 compute-0 ceph-mon[75140]: pgmap v3892: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:32 compute-0 ceph-mon[75140]: pgmap v3893: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:33 compute-0 nova_compute[239965]: 2026-01-26 17:19:33.212 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:34 compute-0 nova_compute[239965]: 2026-01-26 17:19:34.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:35 compute-0 ceph-mon[75140]: pgmap v3894: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:36 compute-0 sudo[410270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:19:36 compute-0 sudo[410270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:36 compute-0 sudo[410270]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:36 compute-0 sudo[410295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 17:19:36 compute-0 sudo[410295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:37 compute-0 podman[410364]: 2026-01-26 17:19:37.279442395 +0000 UTC m=+0.179835990 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 17:19:37 compute-0 podman[410364]: 2026-01-26 17:19:37.397359576 +0000 UTC m=+0.297753171 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:19:37 compute-0 ceph-mon[75140]: pgmap v3895: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:38 compute-0 nova_compute[239965]: 2026-01-26 17:19:38.214 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:38 compute-0 sudo[410295]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:19:38 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:38 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:19:38 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:38 compute-0 sudo[410549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:19:38 compute-0 sudo[410549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:38 compute-0 sudo[410549]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:38 compute-0 sudo[410574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:19:38 compute-0 sudo[410574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:39 compute-0 sudo[410574]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:39 compute-0 nova_compute[239965]: 2026-01-26 17:19:39.096 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:19:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:19:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:19:39 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:19:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:19:39 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:19:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:19:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:19:39 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:19:39 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:19:39 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:19:39 compute-0 sudo[410630]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:19:39 compute-0 sudo[410630]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:39 compute-0 sudo[410630]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:39 compute-0 sudo[410655]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:19:39 compute-0 sudo[410655]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:39 compute-0 ceph-mon[75140]: pgmap v3896: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:19:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:19:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:19:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:19:39 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:19:39 compute-0 podman[410693]: 2026-01-26 17:19:39.587726179 +0000 UTC m=+0.056048405 container create 74f785276ab1d6d00b0231941c825c0219283305b79bcabc3064b275f1a1efdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galois, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:19:39 compute-0 systemd[1]: Started libpod-conmon-74f785276ab1d6d00b0231941c825c0219283305b79bcabc3064b275f1a1efdc.scope.
Jan 26 17:19:39 compute-0 podman[410693]: 2026-01-26 17:19:39.563804633 +0000 UTC m=+0.032126849 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:19:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:19:39 compute-0 podman[410693]: 2026-01-26 17:19:39.69912881 +0000 UTC m=+0.167451036 container init 74f785276ab1d6d00b0231941c825c0219283305b79bcabc3064b275f1a1efdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galois, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 17:19:39 compute-0 podman[410693]: 2026-01-26 17:19:39.708336146 +0000 UTC m=+0.176658342 container start 74f785276ab1d6d00b0231941c825c0219283305b79bcabc3064b275f1a1efdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galois, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:19:39 compute-0 podman[410693]: 2026-01-26 17:19:39.71217227 +0000 UTC m=+0.180494516 container attach 74f785276ab1d6d00b0231941c825c0219283305b79bcabc3064b275f1a1efdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:19:39 compute-0 vibrant_galois[410709]: 167 167
Jan 26 17:19:39 compute-0 systemd[1]: libpod-74f785276ab1d6d00b0231941c825c0219283305b79bcabc3064b275f1a1efdc.scope: Deactivated successfully.
Jan 26 17:19:39 compute-0 podman[410693]: 2026-01-26 17:19:39.715617704 +0000 UTC m=+0.183939900 container died 74f785276ab1d6d00b0231941c825c0219283305b79bcabc3064b275f1a1efdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galois, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:19:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-ffa887254ae22533cfb268dc8f6b25f387903b59131522d5d644595d60007ea1-merged.mount: Deactivated successfully.
Jan 26 17:19:39 compute-0 podman[410693]: 2026-01-26 17:19:39.763333084 +0000 UTC m=+0.231655280 container remove 74f785276ab1d6d00b0231941c825c0219283305b79bcabc3064b275f1a1efdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_galois, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:19:39 compute-0 systemd[1]: libpod-conmon-74f785276ab1d6d00b0231941c825c0219283305b79bcabc3064b275f1a1efdc.scope: Deactivated successfully.
Jan 26 17:19:39 compute-0 podman[410731]: 2026-01-26 17:19:39.957392802 +0000 UTC m=+0.052614591 container create 722c411a418b8b03d8a6546dd71ecf334fdfc9144a0ba81540726e2a316b99e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bose, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:19:40 compute-0 systemd[1]: Started libpod-conmon-722c411a418b8b03d8a6546dd71ecf334fdfc9144a0ba81540726e2a316b99e0.scope.
Jan 26 17:19:40 compute-0 podman[410731]: 2026-01-26 17:19:39.936410518 +0000 UTC m=+0.031632307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:19:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #186. Immutable memtables: 0.
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.039078) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 115] Flushing memtable with next log file: 186
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447980039141, "job": 115, "event": "flush_started", "num_memtables": 1, "num_entries": 1841, "num_deletes": 250, "total_data_size": 3153364, "memory_usage": 3214576, "flush_reason": "Manual Compaction"}
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 115] Level-0 flush table #187: started
Jan 26 17:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73831da3b7979fdf8791968019226cbe3b27585eb55002b89f90e0f70bde076/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73831da3b7979fdf8791968019226cbe3b27585eb55002b89f90e0f70bde076/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73831da3b7979fdf8791968019226cbe3b27585eb55002b89f90e0f70bde076/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73831da3b7979fdf8791968019226cbe3b27585eb55002b89f90e0f70bde076/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d73831da3b7979fdf8791968019226cbe3b27585eb55002b89f90e0f70bde076/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447980284231, "cf_name": "default", "job": 115, "event": "table_file_creation", "file_number": 187, "file_size": 1796121, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 77523, "largest_seqno": 79363, "table_properties": {"data_size": 1790083, "index_size": 3048, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15749, "raw_average_key_size": 20, "raw_value_size": 1776665, "raw_average_value_size": 2340, "num_data_blocks": 140, "num_entries": 759, "num_filter_entries": 759, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769447777, "oldest_key_time": 1769447777, "file_creation_time": 1769447980, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 187, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 115] Flush lasted 245223 microseconds, and 9385 cpu microseconds.
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.284304) [db/flush_job.cc:967] [default] [JOB 115] Level-0 flush table #187: 1796121 bytes OK
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.284333) [db/memtable_list.cc:519] [default] Level-0 commit table #187 started
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.286689) [db/memtable_list.cc:722] [default] Level-0 commit table #187: memtable #1 done
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.286792) EVENT_LOG_v1 {"time_micros": 1769447980286784, "job": 115, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.286821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 115] Try to delete WAL files size 3145531, prev total WAL file size 3145531, number of live WAL files 2.
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000183.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.287914) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033323537' seq:72057594037927935, type:22 .. '6D6772737461740033353038' seq:0, type:0; will stop at (end)
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 116] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 115 Base level 0, inputs: [187(1754KB)], [185(10MB)]
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447980288038, "job": 116, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [187], "files_L6": [185], "score": -1, "input_data_size": 13234319, "oldest_snapshot_seqno": -1}
Jan 26 17:19:40 compute-0 podman[410731]: 2026-01-26 17:19:40.291666267 +0000 UTC m=+0.386888056 container init 722c411a418b8b03d8a6546dd71ecf334fdfc9144a0ba81540726e2a316b99e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bose, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 17:19:40 compute-0 podman[410731]: 2026-01-26 17:19:40.303790245 +0000 UTC m=+0.399012014 container start 722c411a418b8b03d8a6546dd71ecf334fdfc9144a0ba81540726e2a316b99e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 26 17:19:40 compute-0 podman[410731]: 2026-01-26 17:19:40.309587587 +0000 UTC m=+0.404809356 container attach 722c411a418b8b03d8a6546dd71ecf334fdfc9144a0ba81540726e2a316b99e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bose, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 116] Generated table #188: 9742 keys, 11132214 bytes, temperature: kUnknown
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447980389595, "cf_name": "default", "job": 116, "event": "table_file_creation", "file_number": 188, "file_size": 11132214, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11072529, "index_size": 34228, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24389, "raw_key_size": 254640, "raw_average_key_size": 26, "raw_value_size": 10903661, "raw_average_value_size": 1119, "num_data_blocks": 1322, "num_entries": 9742, "num_filter_entries": 9742, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769447980, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 188, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.389997) [db/compaction/compaction_job.cc:1663] [default] [JOB 116] Compacted 1@0 + 1@6 files to L6 => 11132214 bytes
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.391513) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 130.2 rd, 109.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(13.6) write-amplify(6.2) OK, records in: 10159, records dropped: 417 output_compression: NoCompression
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.391564) EVENT_LOG_v1 {"time_micros": 1769447980391524, "job": 116, "event": "compaction_finished", "compaction_time_micros": 101666, "compaction_time_cpu_micros": 45436, "output_level": 6, "num_output_files": 1, "total_output_size": 11132214, "num_input_records": 10159, "num_output_records": 9742, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000187.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447980392110, "job": 116, "event": "table_file_deletion", "file_number": 187}
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000185.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769447980395002, "job": 116, "event": "table_file_deletion", "file_number": 185}
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.287738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.395140) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.395155) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.395158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.395160) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:19:40 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:19:40.395163) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:19:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:40 compute-0 upbeat_bose[410747]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:19:40 compute-0 upbeat_bose[410747]: --> All data devices are unavailable
Jan 26 17:19:40 compute-0 systemd[1]: libpod-722c411a418b8b03d8a6546dd71ecf334fdfc9144a0ba81540726e2a316b99e0.scope: Deactivated successfully.
Jan 26 17:19:40 compute-0 podman[410731]: 2026-01-26 17:19:40.878462795 +0000 UTC m=+0.973684564 container died 722c411a418b8b03d8a6546dd71ecf334fdfc9144a0ba81540726e2a316b99e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bose, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:19:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-d73831da3b7979fdf8791968019226cbe3b27585eb55002b89f90e0f70bde076-merged.mount: Deactivated successfully.
Jan 26 17:19:40 compute-0 podman[410731]: 2026-01-26 17:19:40.926516583 +0000 UTC m=+1.021738352 container remove 722c411a418b8b03d8a6546dd71ecf334fdfc9144a0ba81540726e2a316b99e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_bose, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:19:40 compute-0 systemd[1]: libpod-conmon-722c411a418b8b03d8a6546dd71ecf334fdfc9144a0ba81540726e2a316b99e0.scope: Deactivated successfully.
Jan 26 17:19:41 compute-0 sudo[410655]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:41 compute-0 ceph-mon[75140]: pgmap v3897: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:41 compute-0 sudo[410780]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:19:41 compute-0 sudo[410780]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:41 compute-0 sudo[410780]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:41 compute-0 sudo[410805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:19:41 compute-0 sudo[410805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:41 compute-0 podman[410842]: 2026-01-26 17:19:41.428679904 +0000 UTC m=+0.043244230 container create 6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gauss, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:19:41 compute-0 systemd[1]: Started libpod-conmon-6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea.scope.
Jan 26 17:19:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:19:41 compute-0 podman[410842]: 2026-01-26 17:19:41.413308607 +0000 UTC m=+0.027872953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:19:41 compute-0 nova_compute[239965]: 2026-01-26 17:19:41.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:41 compute-0 podman[410842]: 2026-01-26 17:19:41.513661018 +0000 UTC m=+0.128225374 container init 6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:19:41 compute-0 podman[410842]: 2026-01-26 17:19:41.521372257 +0000 UTC m=+0.135936583 container start 6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 17:19:41 compute-0 podman[410842]: 2026-01-26 17:19:41.524712249 +0000 UTC m=+0.139276575 container attach 6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gauss, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:19:41 compute-0 systemd[1]: libpod-6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea.scope: Deactivated successfully.
Jan 26 17:19:41 compute-0 cool_gauss[410858]: 167 167
Jan 26 17:19:41 compute-0 conmon[410858]: conmon 6c19f01824484816c144 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea.scope/container/memory.events
Jan 26 17:19:41 compute-0 podman[410842]: 2026-01-26 17:19:41.526806941 +0000 UTC m=+0.141371267 container died 6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:19:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-02f0981b60df4f8c3c718cf71acedcbc02f4da93809a5607c8f148312d937cb7-merged.mount: Deactivated successfully.
Jan 26 17:19:41 compute-0 podman[410842]: 2026-01-26 17:19:41.677273859 +0000 UTC m=+0.291838185 container remove 6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_gauss, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:19:41 compute-0 systemd[1]: libpod-conmon-6c19f01824484816c144826d99d4afe79c3a952e1fe25398e0636229c0e2a9ea.scope: Deactivated successfully.
Jan 26 17:19:41 compute-0 podman[410880]: 2026-01-26 17:19:41.810466625 +0000 UTC m=+0.021052787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:19:41 compute-0 podman[410880]: 2026-01-26 17:19:41.924378728 +0000 UTC m=+0.134964860 container create cda1cf773ffcdde9cab04d31dcac585e7d5a1ecab7f79adf4b8c8aeb39dc4e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:19:41 compute-0 systemd[1]: Started libpod-conmon-cda1cf773ffcdde9cab04d31dcac585e7d5a1ecab7f79adf4b8c8aeb39dc4e9b.scope.
Jan 26 17:19:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01020b92ca7187ce00e3bda27135362839fd7788a195d2ad68d0e94a99a752d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01020b92ca7187ce00e3bda27135362839fd7788a195d2ad68d0e94a99a752d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01020b92ca7187ce00e3bda27135362839fd7788a195d2ad68d0e94a99a752d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e01020b92ca7187ce00e3bda27135362839fd7788a195d2ad68d0e94a99a752d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:42 compute-0 podman[410880]: 2026-01-26 17:19:42.058317851 +0000 UTC m=+0.268904003 container init cda1cf773ffcdde9cab04d31dcac585e7d5a1ecab7f79adf4b8c8aeb39dc4e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 17:19:42 compute-0 podman[410880]: 2026-01-26 17:19:42.064727699 +0000 UTC m=+0.275313831 container start cda1cf773ffcdde9cab04d31dcac585e7d5a1ecab7f79adf4b8c8aeb39dc4e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 17:19:42 compute-0 podman[410880]: 2026-01-26 17:19:42.069948496 +0000 UTC m=+0.280534628 container attach cda1cf773ffcdde9cab04d31dcac585e7d5a1ecab7f79adf4b8c8aeb39dc4e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]: {
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:     "0": [
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:         {
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "devices": [
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "/dev/loop3"
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             ],
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_name": "ceph_lv0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_size": "21470642176",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "name": "ceph_lv0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "tags": {
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.cluster_name": "ceph",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.crush_device_class": "",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.encrypted": "0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.objectstore": "bluestore",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.osd_id": "0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.type": "block",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.vdo": "0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.with_tpm": "0"
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             },
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "type": "block",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "vg_name": "ceph_vg0"
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:         }
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:     ],
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:     "1": [
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:         {
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "devices": [
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "/dev/loop4"
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             ],
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_name": "ceph_lv1",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_size": "21470642176",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "name": "ceph_lv1",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "tags": {
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.cluster_name": "ceph",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.crush_device_class": "",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.encrypted": "0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.objectstore": "bluestore",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.osd_id": "1",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.type": "block",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.vdo": "0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.with_tpm": "0"
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             },
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "type": "block",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "vg_name": "ceph_vg1"
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:         }
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:     ],
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:     "2": [
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:         {
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "devices": [
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "/dev/loop5"
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             ],
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_name": "ceph_lv2",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_size": "21470642176",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "name": "ceph_lv2",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "tags": {
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.cluster_name": "ceph",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.crush_device_class": "",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.encrypted": "0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.objectstore": "bluestore",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.osd_id": "2",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.type": "block",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.vdo": "0",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:                 "ceph.with_tpm": "0"
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             },
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "type": "block",
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:             "vg_name": "ceph_vg2"
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:         }
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]:     ]
Jan 26 17:19:42 compute-0 hardcore_satoshi[410896]: }
Jan 26 17:19:42 compute-0 systemd[1]: libpod-cda1cf773ffcdde9cab04d31dcac585e7d5a1ecab7f79adf4b8c8aeb39dc4e9b.scope: Deactivated successfully.
Jan 26 17:19:42 compute-0 podman[410880]: 2026-01-26 17:19:42.366098797 +0000 UTC m=+0.576684939 container died cda1cf773ffcdde9cab04d31dcac585e7d5a1ecab7f79adf4b8c8aeb39dc4e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 17:19:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-e01020b92ca7187ce00e3bda27135362839fd7788a195d2ad68d0e94a99a752d-merged.mount: Deactivated successfully.
Jan 26 17:19:42 compute-0 podman[410880]: 2026-01-26 17:19:42.423545266 +0000 UTC m=+0.634131398 container remove cda1cf773ffcdde9cab04d31dcac585e7d5a1ecab7f79adf4b8c8aeb39dc4e9b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:19:42 compute-0 systemd[1]: libpod-conmon-cda1cf773ffcdde9cab04d31dcac585e7d5a1ecab7f79adf4b8c8aeb39dc4e9b.scope: Deactivated successfully.
Jan 26 17:19:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:42 compute-0 sudo[410805]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:42 compute-0 sudo[410918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:19:42 compute-0 sudo[410918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:42 compute-0 sudo[410918]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:42 compute-0 sudo[410943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:19:42 compute-0 sudo[410943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:42 compute-0 podman[410980]: 2026-01-26 17:19:42.925920473 +0000 UTC m=+0.057258895 container create f6b61b4e7b6302dfd04efa8f751c43d09d86e5fd10582a562258222888fd31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:19:42 compute-0 systemd[1]: Started libpod-conmon-f6b61b4e7b6302dfd04efa8f751c43d09d86e5fd10582a562258222888fd31de.scope.
Jan 26 17:19:42 compute-0 podman[410980]: 2026-01-26 17:19:42.898733526 +0000 UTC m=+0.030071948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:19:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:19:43 compute-0 podman[410980]: 2026-01-26 17:19:43.029068541 +0000 UTC m=+0.160406943 container init f6b61b4e7b6302dfd04efa8f751c43d09d86e5fd10582a562258222888fd31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_volhard, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:19:43 compute-0 podman[410980]: 2026-01-26 17:19:43.036103935 +0000 UTC m=+0.167442337 container start f6b61b4e7b6302dfd04efa8f751c43d09d86e5fd10582a562258222888fd31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_volhard, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:19:43 compute-0 systemd[1]: libpod-f6b61b4e7b6302dfd04efa8f751c43d09d86e5fd10582a562258222888fd31de.scope: Deactivated successfully.
Jan 26 17:19:43 compute-0 suspicious_volhard[410996]: 167 167
Jan 26 17:19:43 compute-0 podman[410980]: 2026-01-26 17:19:43.043897135 +0000 UTC m=+0.175235557 container attach f6b61b4e7b6302dfd04efa8f751c43d09d86e5fd10582a562258222888fd31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:19:43 compute-0 podman[410980]: 2026-01-26 17:19:43.044489311 +0000 UTC m=+0.175827713 container died f6b61b4e7b6302dfd04efa8f751c43d09d86e5fd10582a562258222888fd31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 17:19:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8ac5a4c32f9d8d9cd6f147390fcf7631e469bb822e9f70a1eef329924714444-merged.mount: Deactivated successfully.
Jan 26 17:19:43 compute-0 podman[410980]: 2026-01-26 17:19:43.086799758 +0000 UTC m=+0.218138160 container remove f6b61b4e7b6302dfd04efa8f751c43d09d86e5fd10582a562258222888fd31de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_volhard, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:19:43 compute-0 systemd[1]: libpod-conmon-f6b61b4e7b6302dfd04efa8f751c43d09d86e5fd10582a562258222888fd31de.scope: Deactivated successfully.
Jan 26 17:19:43 compute-0 nova_compute[239965]: 2026-01-26 17:19:43.216 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:43 compute-0 podman[411020]: 2026-01-26 17:19:43.252804447 +0000 UTC m=+0.048066999 container create e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_babbage, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 17:19:43 compute-0 systemd[1]: Started libpod-conmon-e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919.scope.
Jan 26 17:19:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:19:43 compute-0 podman[411020]: 2026-01-26 17:19:43.230521961 +0000 UTC m=+0.025784553 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e038a7838d8761228b051a987304b463a2eb1347315369c6f6ea355184ae249d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e038a7838d8761228b051a987304b463a2eb1347315369c6f6ea355184ae249d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e038a7838d8761228b051a987304b463a2eb1347315369c6f6ea355184ae249d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e038a7838d8761228b051a987304b463a2eb1347315369c6f6ea355184ae249d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:19:43 compute-0 podman[411020]: 2026-01-26 17:19:43.353471645 +0000 UTC m=+0.148734207 container init e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_babbage, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 17:19:43 compute-0 podman[411020]: 2026-01-26 17:19:43.359862832 +0000 UTC m=+0.155125374 container start e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_babbage, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:19:43 compute-0 podman[411020]: 2026-01-26 17:19:43.362957218 +0000 UTC m=+0.158219790 container attach e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:19:43 compute-0 nova_compute[239965]: 2026-01-26 17:19:43.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:43 compute-0 nova_compute[239965]: 2026-01-26 17:19:43.513 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:43 compute-0 ceph-mon[75140]: pgmap v3898: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:44 compute-0 lvm[411116]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:19:44 compute-0 lvm[411115]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:19:44 compute-0 lvm[411115]: VG ceph_vg1 finished
Jan 26 17:19:44 compute-0 lvm[411116]: VG ceph_vg0 finished
Jan 26 17:19:44 compute-0 lvm[411118]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:19:44 compute-0 lvm[411118]: VG ceph_vg2 finished
Jan 26 17:19:44 compute-0 nova_compute[239965]: 2026-01-26 17:19:44.098 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:44 compute-0 epic_babbage[411037]: {}
Jan 26 17:19:44 compute-0 systemd[1]: libpod-e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919.scope: Deactivated successfully.
Jan 26 17:19:44 compute-0 systemd[1]: libpod-e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919.scope: Consumed 1.346s CPU time.
Jan 26 17:19:44 compute-0 podman[411020]: 2026-01-26 17:19:44.152860614 +0000 UTC m=+0.948123176 container died e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:19:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-e038a7838d8761228b051a987304b463a2eb1347315369c6f6ea355184ae249d-merged.mount: Deactivated successfully.
Jan 26 17:19:44 compute-0 podman[411020]: 2026-01-26 17:19:44.210449016 +0000 UTC m=+1.005711558 container remove e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:19:44 compute-0 systemd[1]: libpod-conmon-e914e5bc4f7e28417831cb7ce7c670ced833ca3e1864e721d6740197c4a5e919.scope: Deactivated successfully.
Jan 26 17:19:44 compute-0 sudo[410943]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:19:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:19:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:44 compute-0 sudo[411135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:19:44 compute-0 sudo[411135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:19:44 compute-0 sudo[411135]: pam_unix(sudo:session): session closed for user root
Jan 26 17:19:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:44 compute-0 nova_compute[239965]: 2026-01-26 17:19:44.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:46 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:46 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:19:46 compute-0 ceph-mon[75140]: pgmap v3899: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:47 compute-0 podman[411161]: 2026-01-26 17:19:47.400074788 +0000 UTC m=+0.079545140 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 26 17:19:47 compute-0 ceph-mon[75140]: pgmap v3900: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:48 compute-0 nova_compute[239965]: 2026-01-26 17:19:48.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:19:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1967018147' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:19:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:19:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1967018147' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:19:49 compute-0 nova_compute[239965]: 2026-01-26 17:19:49.100 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:49 compute-0 ceph-mon[75140]: pgmap v3901: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1967018147' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:19:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1967018147' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:19:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:19:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:50 compute-0 podman[411181]: 2026-01-26 17:19:50.396063393 +0000 UTC m=+0.084547204 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:19:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:51 compute-0 ceph-mon[75140]: pgmap v3902: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:52 compute-0 ceph-mon[75140]: pgmap v3903: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:53 compute-0 nova_compute[239965]: 2026-01-26 17:19:53.220 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:53 compute-0 nova_compute[239965]: 2026-01-26 17:19:53.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:54 compute-0 nova_compute[239965]: 2026-01-26 17:19:54.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:19:55 compute-0 ceph-mon[75140]: pgmap v3904: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:56 compute-0 nova_compute[239965]: 2026-01-26 17:19:56.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:19:56 compute-0 nova_compute[239965]: 2026-01-26 17:19:56.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:19:57 compute-0 ceph-mon[75140]: pgmap v3905: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:58 compute-0 nova_compute[239965]: 2026-01-26 17:19:58.221 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:58 compute-0 ceph-mon[75140]: pgmap v3906: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:19:59 compute-0 nova_compute[239965]: 2026-01-26 17:19:59.106 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:19:59.304 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:19:59.304 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:19:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:19:59.304 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:19:59 compute-0 nova_compute[239965]: 2026-01-26 17:19:59.504 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:20:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:20:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:00 compute-0 ceph-mon[75140]: pgmap v3907: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.526 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.550 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.550 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.551 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.551 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:20:02 compute-0 nova_compute[239965]: 2026-01-26 17:20:02.551 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:20:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:20:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944032008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.162 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.223 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.331 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.333 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3514MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.333 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.333 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:20:03 compute-0 ceph-mon[75140]: pgmap v3908: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2944032008' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.584 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.584 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.673 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.774 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.774 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.789 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.809 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:20:03 compute-0 nova_compute[239965]: 2026-01-26 17:20:03.828 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:20:04 compute-0 nova_compute[239965]: 2026-01-26 17:20:04.108 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:20:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1210405540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:20:04 compute-0 nova_compute[239965]: 2026-01-26 17:20:04.371 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:20:04 compute-0 nova_compute[239965]: 2026-01-26 17:20:04.376 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:20:04 compute-0 nova_compute[239965]: 2026-01-26 17:20:04.393 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:20:04 compute-0 nova_compute[239965]: 2026-01-26 17:20:04.395 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:20:04 compute-0 nova_compute[239965]: 2026-01-26 17:20:04.396 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.063s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:20:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1210405540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:20:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #189. Immutable memtables: 0.
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.053165) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 117] Flushing memtable with next log file: 189
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448005053204, "job": 117, "event": "flush_started", "num_memtables": 1, "num_entries": 465, "num_deletes": 251, "total_data_size": 439049, "memory_usage": 448680, "flush_reason": "Manual Compaction"}
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 117] Level-0 flush table #190: started
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448005057463, "cf_name": "default", "job": 117, "event": "table_file_creation", "file_number": 190, "file_size": 433669, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79364, "largest_seqno": 79828, "table_properties": {"data_size": 430927, "index_size": 777, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6481, "raw_average_key_size": 19, "raw_value_size": 425473, "raw_average_value_size": 1247, "num_data_blocks": 34, "num_entries": 341, "num_filter_entries": 341, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769447981, "oldest_key_time": 1769447981, "file_creation_time": 1769448005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 190, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 117] Flush lasted 4321 microseconds, and 1924 cpu microseconds.
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.057492) [db/flush_job.cc:967] [default] [JOB 117] Level-0 flush table #190: 433669 bytes OK
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.057506) [db/memtable_list.cc:519] [default] Level-0 commit table #190 started
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.059334) [db/memtable_list.cc:722] [default] Level-0 commit table #190: memtable #1 done
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.059346) EVENT_LOG_v1 {"time_micros": 1769448005059342, "job": 117, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.059362) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 117] Try to delete WAL files size 436263, prev total WAL file size 436263, number of live WAL files 2.
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000186.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.059910) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037373831' seq:72057594037927935, type:22 .. '7061786F730038303333' seq:0, type:0; will stop at (end)
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 118] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 117 Base level 0, inputs: [190(423KB)], [188(10MB)]
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448005059939, "job": 118, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [190], "files_L6": [188], "score": -1, "input_data_size": 11565883, "oldest_snapshot_seqno": -1}
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 118] Generated table #191: 9568 keys, 9747785 bytes, temperature: kUnknown
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448005131632, "cf_name": "default", "job": 118, "event": "table_file_creation", "file_number": 191, "file_size": 9747785, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9690584, "index_size": 32192, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23941, "raw_key_size": 251696, "raw_average_key_size": 26, "raw_value_size": 9525951, "raw_average_value_size": 995, "num_data_blocks": 1228, "num_entries": 9568, "num_filter_entries": 9568, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769448005, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 191, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.131935) [db/compaction/compaction_job.cc:1663] [default] [JOB 118] Compacted 1@0 + 1@6 files to L6 => 9747785 bytes
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.136294) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.1 rd, 135.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 10.6 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(49.1) write-amplify(22.5) OK, records in: 10083, records dropped: 515 output_compression: NoCompression
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.136323) EVENT_LOG_v1 {"time_micros": 1769448005136312, "job": 118, "event": "compaction_finished", "compaction_time_micros": 71787, "compaction_time_cpu_micros": 26429, "output_level": 6, "num_output_files": 1, "total_output_size": 9747785, "num_input_records": 10083, "num_output_records": 9568, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000190.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448005136543, "job": 118, "event": "table_file_deletion", "file_number": 190}
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000188.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448005138714, "job": 118, "event": "table_file_deletion", "file_number": 188}
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.059699) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.138859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.138865) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.138867) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.138869) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:20:05.138871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:20:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Cumulative writes: 17K writes, 79K keys, 17K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.02 MB/s
                                           Cumulative WAL: 17K writes, 17K syncs, 1.00 writes per sync, written: 0.11 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1310 writes, 6196 keys, 1310 commit groups, 1.0 writes per commit group, ingest: 8.88 MB, 0.01 MB/s
                                           Interval WAL: 1310 writes, 1310 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     62.9      1.57              0.32        59    0.027       0      0       0.0       0.0
                                             L6      1/0    9.30 MB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   5.4    115.3     98.8      5.40              1.58        58    0.093    425K    30K       0.0       0.0
                                            Sum      1/0    9.30 MB   0.0      0.6     0.1      0.5       0.6      0.1       0.0   6.4     89.4     90.7      6.97              1.90       117    0.060    425K    30K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.7     78.2     76.3      0.90              0.25        12    0.075     59K   2920       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.6     0.1      0.5       0.5      0.0       0.0   0.0    115.3     98.8      5.40              1.58        58    0.093    425K    30K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     63.1      1.56              0.32        58    0.027       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.096, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.62 GB write, 0.09 MB/s write, 0.61 GB read, 0.09 MB/s read, 7.0 seconds
                                           Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 70.18 MB table_size: 0 occupancy: 18446744073709551615 collections: 13 last_copies: 0 last_secs: 0.000543 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4348,67.16 MB,22.0931%) FilterBlock(118,1.14 MB,0.376144%) IndexBlock(118,1.88 MB,0.61767%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 17:20:05 compute-0 ceph-mon[75140]: pgmap v3909: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:06 compute-0 ceph-mon[75140]: pgmap v3910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:08 compute-0 nova_compute[239965]: 2026-01-26 17:20:08.226 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:09 compute-0 nova_compute[239965]: 2026-01-26 17:20:09.109 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:09 compute-0 ceph-mon[75140]: pgmap v3911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:11 compute-0 ceph-mon[75140]: pgmap v3912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:13 compute-0 nova_compute[239965]: 2026-01-26 17:20:13.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:13 compute-0 ceph-mon[75140]: pgmap v3913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:14 compute-0 nova_compute[239965]: 2026-01-26 17:20:14.112 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:16 compute-0 ceph-mon[75140]: pgmap v3914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:17 compute-0 ceph-mon[75140]: pgmap v3915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:18 compute-0 nova_compute[239965]: 2026-01-26 17:20:18.230 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:18 compute-0 podman[411251]: 2026-01-26 17:20:18.371824691 +0000 UTC m=+0.060945135 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:20:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:19 compute-0 nova_compute[239965]: 2026-01-26 17:20:19.112 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:19 compute-0 ceph-mon[75140]: pgmap v3916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:21 compute-0 podman[411270]: 2026-01-26 17:20:21.432468661 +0000 UTC m=+0.112106410 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:20:21 compute-0 ceph-mon[75140]: pgmap v3917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:22 compute-0 ceph-mon[75140]: pgmap v3918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:23 compute-0 nova_compute[239965]: 2026-01-26 17:20:23.272 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:24 compute-0 nova_compute[239965]: 2026-01-26 17:20:24.115 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:25 compute-0 ceph-mon[75140]: pgmap v3919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:27 compute-0 ceph-mon[75140]: pgmap v3920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:28 compute-0 nova_compute[239965]: 2026-01-26 17:20:28.274 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:20:28
Jan 26 17:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'volumes', 'cephfs.cephfs.data', 'images']
Jan 26 17:20:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:20:29 compute-0 nova_compute[239965]: 2026-01-26 17:20:29.117 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:29 compute-0 ceph-mon[75140]: pgmap v3921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:20:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:20:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:20:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:20:31 compute-0 ceph-mon[75140]: pgmap v3922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:33 compute-0 nova_compute[239965]: 2026-01-26 17:20:33.275 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:33 compute-0 ceph-mon[75140]: pgmap v3923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:34 compute-0 nova_compute[239965]: 2026-01-26 17:20:34.118 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:35 compute-0 ceph-mon[75140]: pgmap v3924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:37 compute-0 ceph-mon[75140]: pgmap v3925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:38 compute-0 nova_compute[239965]: 2026-01-26 17:20:38.277 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:39 compute-0 nova_compute[239965]: 2026-01-26 17:20:39.121 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:39 compute-0 ceph-mon[75140]: pgmap v3926: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:41 compute-0 ceph-mon[75140]: pgmap v3927: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:43 compute-0 nova_compute[239965]: 2026-01-26 17:20:43.281 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:43 compute-0 ceph-mon[75140]: pgmap v3928: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:44 compute-0 nova_compute[239965]: 2026-01-26 17:20:44.126 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:44 compute-0 nova_compute[239965]: 2026-01-26 17:20:44.380 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:44 compute-0 nova_compute[239965]: 2026-01-26 17:20:44.381 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:44 compute-0 nova_compute[239965]: 2026-01-26 17:20:44.381 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:44 compute-0 sudo[411299]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:20:44 compute-0 sudo[411299]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:20:44 compute-0 sudo[411299]: pam_unix(sudo:session): session closed for user root
Jan 26 17:20:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:44 compute-0 sudo[411324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:20:44 compute-0 sudo[411324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:20:44 compute-0 nova_compute[239965]: 2026-01-26 17:20:44.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:45 compute-0 sudo[411324]: pam_unix(sudo:session): session closed for user root
Jan 26 17:20:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:20:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:20:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:20:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:20:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:20:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:20:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:20:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:20:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:20:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:20:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:20:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:20:45 compute-0 sudo[411380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:20:45 compute-0 sudo[411380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:20:45 compute-0 sudo[411380]: pam_unix(sudo:session): session closed for user root
Jan 26 17:20:45 compute-0 sudo[411405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:20:45 compute-0 sudo[411405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:20:45 compute-0 ceph-mon[75140]: pgmap v3929: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:20:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:20:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:20:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:20:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:20:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:20:45 compute-0 podman[411441]: 2026-01-26 17:20:45.564791494 +0000 UTC m=+0.047606588 container create 32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_germain, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:20:45 compute-0 systemd[1]: Started libpod-conmon-32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc.scope.
Jan 26 17:20:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:20:45 compute-0 podman[411441]: 2026-01-26 17:20:45.539644757 +0000 UTC m=+0.022459901 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:20:45 compute-0 podman[411441]: 2026-01-26 17:20:45.645738309 +0000 UTC m=+0.128553403 container init 32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_germain, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:20:45 compute-0 podman[411441]: 2026-01-26 17:20:45.654850743 +0000 UTC m=+0.137665847 container start 32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_germain, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:20:45 compute-0 podman[411441]: 2026-01-26 17:20:45.658519852 +0000 UTC m=+0.141334946 container attach 32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_germain, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:20:45 compute-0 sweet_germain[411457]: 167 167
Jan 26 17:20:45 compute-0 systemd[1]: libpod-32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc.scope: Deactivated successfully.
Jan 26 17:20:45 compute-0 conmon[411457]: conmon 32ddc3e3e2afe9baeeb2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc.scope/container/memory.events
Jan 26 17:20:45 compute-0 podman[411441]: 2026-01-26 17:20:45.662134841 +0000 UTC m=+0.144949935 container died 32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 17:20:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-17a732c35807bf3f03ea55bbb4e161b9ba066b858b6a5505aa5b0ef838ac8b72-merged.mount: Deactivated successfully.
Jan 26 17:20:45 compute-0 podman[411441]: 2026-01-26 17:20:45.708005646 +0000 UTC m=+0.190820740 container remove 32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 17:20:45 compute-0 systemd[1]: libpod-conmon-32ddc3e3e2afe9baeeb2708eafda62112c4e0ba18a5aabea68782477459b27fc.scope: Deactivated successfully.
Jan 26 17:20:45 compute-0 podman[411481]: 2026-01-26 17:20:45.951109025 +0000 UTC m=+0.067234289 container create 3374627e63b75faa6f1540d5f377a3c963d9ff5a4eba3788648a471fd32db721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hopper, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 17:20:45 compute-0 systemd[1]: Started libpod-conmon-3374627e63b75faa6f1540d5f377a3c963d9ff5a4eba3788648a471fd32db721.scope.
Jan 26 17:20:46 compute-0 podman[411481]: 2026-01-26 17:20:45.920532156 +0000 UTC m=+0.036657470 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:20:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ff64ca8a08fafc7811fbae7815a342d56c6570da46df647f87d1c9e17ffe77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ff64ca8a08fafc7811fbae7815a342d56c6570da46df647f87d1c9e17ffe77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ff64ca8a08fafc7811fbae7815a342d56c6570da46df647f87d1c9e17ffe77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ff64ca8a08fafc7811fbae7815a342d56c6570da46df647f87d1c9e17ffe77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50ff64ca8a08fafc7811fbae7815a342d56c6570da46df647f87d1c9e17ffe77/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:46 compute-0 podman[411481]: 2026-01-26 17:20:46.049204741 +0000 UTC m=+0.165329985 container init 3374627e63b75faa6f1540d5f377a3c963d9ff5a4eba3788648a471fd32db721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hopper, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:20:46 compute-0 podman[411481]: 2026-01-26 17:20:46.056336805 +0000 UTC m=+0.172462029 container start 3374627e63b75faa6f1540d5f377a3c963d9ff5a4eba3788648a471fd32db721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hopper, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 17:20:46 compute-0 podman[411481]: 2026-01-26 17:20:46.06100594 +0000 UTC m=+0.177131164 container attach 3374627e63b75faa6f1540d5f377a3c963d9ff5a4eba3788648a471fd32db721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:20:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:46 compute-0 quizzical_hopper[411497]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:20:46 compute-0 quizzical_hopper[411497]: --> All data devices are unavailable
Jan 26 17:20:46 compute-0 systemd[1]: libpod-3374627e63b75faa6f1540d5f377a3c963d9ff5a4eba3788648a471fd32db721.scope: Deactivated successfully.
Jan 26 17:20:46 compute-0 podman[411517]: 2026-01-26 17:20:46.615640069 +0000 UTC m=+0.041602152 container died 3374627e63b75faa6f1540d5f377a3c963d9ff5a4eba3788648a471fd32db721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hopper, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 17:20:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-50ff64ca8a08fafc7811fbae7815a342d56c6570da46df647f87d1c9e17ffe77-merged.mount: Deactivated successfully.
Jan 26 17:20:46 compute-0 podman[411517]: 2026-01-26 17:20:46.662302602 +0000 UTC m=+0.088264655 container remove 3374627e63b75faa6f1540d5f377a3c963d9ff5a4eba3788648a471fd32db721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_hopper, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:20:46 compute-0 systemd[1]: libpod-conmon-3374627e63b75faa6f1540d5f377a3c963d9ff5a4eba3788648a471fd32db721.scope: Deactivated successfully.
Jan 26 17:20:46 compute-0 sudo[411405]: pam_unix(sudo:session): session closed for user root
Jan 26 17:20:46 compute-0 sudo[411533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:20:46 compute-0 sudo[411533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:20:46 compute-0 sudo[411533]: pam_unix(sudo:session): session closed for user root
Jan 26 17:20:46 compute-0 sudo[411558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:20:46 compute-0 sudo[411558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:20:47 compute-0 podman[411596]: 2026-01-26 17:20:47.161835659 +0000 UTC m=+0.045746583 container create 2afc1b34e89125719c5592317678ff1c1d683eed94eaff240e1ef8b7c29ec6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 17:20:47 compute-0 systemd[1]: Started libpod-conmon-2afc1b34e89125719c5592317678ff1c1d683eed94eaff240e1ef8b7c29ec6f6.scope.
Jan 26 17:20:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:20:47 compute-0 podman[411596]: 2026-01-26 17:20:47.139698936 +0000 UTC m=+0.023609840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:20:47 compute-0 podman[411596]: 2026-01-26 17:20:47.248155056 +0000 UTC m=+0.132065970 container init 2afc1b34e89125719c5592317678ff1c1d683eed94eaff240e1ef8b7c29ec6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:20:47 compute-0 podman[411596]: 2026-01-26 17:20:47.257008452 +0000 UTC m=+0.140919336 container start 2afc1b34e89125719c5592317678ff1c1d683eed94eaff240e1ef8b7c29ec6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:20:47 compute-0 podman[411596]: 2026-01-26 17:20:47.260736334 +0000 UTC m=+0.144647218 container attach 2afc1b34e89125719c5592317678ff1c1d683eed94eaff240e1ef8b7c29ec6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:20:47 compute-0 busy_pascal[411612]: 167 167
Jan 26 17:20:47 compute-0 systemd[1]: libpod-2afc1b34e89125719c5592317678ff1c1d683eed94eaff240e1ef8b7c29ec6f6.scope: Deactivated successfully.
Jan 26 17:20:47 compute-0 podman[411596]: 2026-01-26 17:20:47.262651301 +0000 UTC m=+0.146562185 container died 2afc1b34e89125719c5592317678ff1c1d683eed94eaff240e1ef8b7c29ec6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-891e49b6e418858e9d0e79edbbb22e46b52c196bc4ed48a3ae18849861a398c8-merged.mount: Deactivated successfully.
Jan 26 17:20:47 compute-0 podman[411596]: 2026-01-26 17:20:47.295991148 +0000 UTC m=+0.179902032 container remove 2afc1b34e89125719c5592317678ff1c1d683eed94eaff240e1ef8b7c29ec6f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 17:20:47 compute-0 systemd[1]: libpod-conmon-2afc1b34e89125719c5592317678ff1c1d683eed94eaff240e1ef8b7c29ec6f6.scope: Deactivated successfully.
Jan 26 17:20:47 compute-0 podman[411635]: 2026-01-26 17:20:47.488503837 +0000 UTC m=+0.053694187 container create ca862e8043efdcd813be9db47a5ea1e5212e34b81832b2d1f77bddd7a08fc5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:20:47 compute-0 systemd[1]: Started libpod-conmon-ca862e8043efdcd813be9db47a5ea1e5212e34b81832b2d1f77bddd7a08fc5ad.scope.
Jan 26 17:20:47 compute-0 podman[411635]: 2026-01-26 17:20:47.465448203 +0000 UTC m=+0.030638573 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:20:47 compute-0 ceph-mon[75140]: pgmap v3930: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a1034d8a0c69f456da7237c00b1b906ecc303fc1df1894c29d6f827aa4876d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a1034d8a0c69f456da7237c00b1b906ecc303fc1df1894c29d6f827aa4876d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a1034d8a0c69f456da7237c00b1b906ecc303fc1df1894c29d6f827aa4876d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46a1034d8a0c69f456da7237c00b1b906ecc303fc1df1894c29d6f827aa4876d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:47 compute-0 podman[411635]: 2026-01-26 17:20:47.585446025 +0000 UTC m=+0.150636395 container init ca862e8043efdcd813be9db47a5ea1e5212e34b81832b2d1f77bddd7a08fc5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:20:47 compute-0 podman[411635]: 2026-01-26 17:20:47.592987389 +0000 UTC m=+0.158177739 container start ca862e8043efdcd813be9db47a5ea1e5212e34b81832b2d1f77bddd7a08fc5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:20:47 compute-0 podman[411635]: 2026-01-26 17:20:47.596388772 +0000 UTC m=+0.161579122 container attach ca862e8043efdcd813be9db47a5ea1e5212e34b81832b2d1f77bddd7a08fc5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]: {
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:     "0": [
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:         {
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "devices": [
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "/dev/loop3"
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             ],
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_name": "ceph_lv0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_size": "21470642176",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "name": "ceph_lv0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "tags": {
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.cluster_name": "ceph",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.crush_device_class": "",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.encrypted": "0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.objectstore": "bluestore",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.osd_id": "0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.type": "block",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.vdo": "0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.with_tpm": "0"
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             },
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "type": "block",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "vg_name": "ceph_vg0"
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:         }
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:     ],
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:     "1": [
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:         {
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "devices": [
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "/dev/loop4"
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             ],
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_name": "ceph_lv1",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_size": "21470642176",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "name": "ceph_lv1",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "tags": {
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.cluster_name": "ceph",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.crush_device_class": "",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.encrypted": "0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.objectstore": "bluestore",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.osd_id": "1",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.type": "block",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.vdo": "0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.with_tpm": "0"
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             },
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "type": "block",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "vg_name": "ceph_vg1"
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:         }
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:     ],
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:     "2": [
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:         {
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "devices": [
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "/dev/loop5"
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             ],
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_name": "ceph_lv2",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_size": "21470642176",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "name": "ceph_lv2",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "tags": {
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.cluster_name": "ceph",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.crush_device_class": "",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.encrypted": "0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.objectstore": "bluestore",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.osd_id": "2",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.type": "block",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.vdo": "0",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:                 "ceph.with_tpm": "0"
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             },
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "type": "block",
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:             "vg_name": "ceph_vg2"
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:         }
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]:     ]
Jan 26 17:20:47 compute-0 affectionate_goldberg[411652]: }
Jan 26 17:20:47 compute-0 systemd[1]: libpod-ca862e8043efdcd813be9db47a5ea1e5212e34b81832b2d1f77bddd7a08fc5ad.scope: Deactivated successfully.
Jan 26 17:20:47 compute-0 podman[411635]: 2026-01-26 17:20:47.923015491 +0000 UTC m=+0.488205841 container died ca862e8043efdcd813be9db47a5ea1e5212e34b81832b2d1f77bddd7a08fc5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:20:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-46a1034d8a0c69f456da7237c00b1b906ecc303fc1df1894c29d6f827aa4876d-merged.mount: Deactivated successfully.
Jan 26 17:20:47 compute-0 podman[411635]: 2026-01-26 17:20:47.963587026 +0000 UTC m=+0.528777376 container remove ca862e8043efdcd813be9db47a5ea1e5212e34b81832b2d1f77bddd7a08fc5ad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:20:47 compute-0 systemd[1]: libpod-conmon-ca862e8043efdcd813be9db47a5ea1e5212e34b81832b2d1f77bddd7a08fc5ad.scope: Deactivated successfully.
Jan 26 17:20:48 compute-0 sudo[411558]: pam_unix(sudo:session): session closed for user root
Jan 26 17:20:48 compute-0 sudo[411672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:20:48 compute-0 sudo[411672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:20:48 compute-0 sudo[411672]: pam_unix(sudo:session): session closed for user root
Jan 26 17:20:48 compute-0 sudo[411697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:20:48 compute-0 sudo[411697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:20:48 compute-0 nova_compute[239965]: 2026-01-26 17:20:48.287 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:48 compute-0 podman[411733]: 2026-01-26 17:20:48.471612791 +0000 UTC m=+0.050987941 container create 3efb39269c3d8aa051fe4c05d5cbea99399e3ffdb619ae5a45722159a330cd56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 17:20:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:48 compute-0 systemd[1]: Started libpod-conmon-3efb39269c3d8aa051fe4c05d5cbea99399e3ffdb619ae5a45722159a330cd56.scope.
Jan 26 17:20:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:20:48 compute-0 podman[411733]: 2026-01-26 17:20:48.453164689 +0000 UTC m=+0.032539859 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:20:48 compute-0 podman[411733]: 2026-01-26 17:20:48.550190758 +0000 UTC m=+0.129565928 container init 3efb39269c3d8aa051fe4c05d5cbea99399e3ffdb619ae5a45722159a330cd56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williamson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:20:48 compute-0 podman[411733]: 2026-01-26 17:20:48.559089656 +0000 UTC m=+0.138464806 container start 3efb39269c3d8aa051fe4c05d5cbea99399e3ffdb619ae5a45722159a330cd56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williamson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:20:48 compute-0 podman[411733]: 2026-01-26 17:20:48.563525305 +0000 UTC m=+0.142900455 container attach 3efb39269c3d8aa051fe4c05d5cbea99399e3ffdb619ae5a45722159a330cd56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:20:48 compute-0 reverent_williamson[411750]: 167 167
Jan 26 17:20:48 compute-0 systemd[1]: libpod-3efb39269c3d8aa051fe4c05d5cbea99399e3ffdb619ae5a45722159a330cd56.scope: Deactivated successfully.
Jan 26 17:20:48 compute-0 podman[411733]: 2026-01-26 17:20:48.568934177 +0000 UTC m=+0.148309327 container died 3efb39269c3d8aa051fe4c05d5cbea99399e3ffdb619ae5a45722159a330cd56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 17:20:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8579bdae1d9d0ae86cf7957d90f8b23a9a4945bd708b5a16e802ba8dcb164a2-merged.mount: Deactivated successfully.
Jan 26 17:20:48 compute-0 podman[411733]: 2026-01-26 17:20:48.611319926 +0000 UTC m=+0.190695076 container remove 3efb39269c3d8aa051fe4c05d5cbea99399e3ffdb619ae5a45722159a330cd56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 17:20:48 compute-0 podman[411747]: 2026-01-26 17:20:48.611718217 +0000 UTC m=+0.093086094 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:20:48 compute-0 systemd[1]: libpod-conmon-3efb39269c3d8aa051fe4c05d5cbea99399e3ffdb619ae5a45722159a330cd56.scope: Deactivated successfully.
Jan 26 17:20:48 compute-0 podman[411793]: 2026-01-26 17:20:48.776168078 +0000 UTC m=+0.044429150 container create f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 17:20:48 compute-0 systemd[1]: Started libpod-conmon-f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e.scope.
Jan 26 17:20:48 compute-0 podman[411793]: 2026-01-26 17:20:48.757759047 +0000 UTC m=+0.026020149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:20:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:20:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:20:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4211034631' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:20:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:20:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4211034631' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571f713288373315709563eb5dc1b1218700b5b10f6ced9b9bcf6dd9fc539462/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571f713288373315709563eb5dc1b1218700b5b10f6ced9b9bcf6dd9fc539462/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571f713288373315709563eb5dc1b1218700b5b10f6ced9b9bcf6dd9fc539462/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/571f713288373315709563eb5dc1b1218700b5b10f6ced9b9bcf6dd9fc539462/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:20:48 compute-0 podman[411793]: 2026-01-26 17:20:48.874852477 +0000 UTC m=+0.143113569 container init f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:20:48 compute-0 podman[411793]: 2026-01-26 17:20:48.888367569 +0000 UTC m=+0.156628641 container start f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:20:48 compute-0 podman[411793]: 2026-01-26 17:20:48.892146462 +0000 UTC m=+0.160407534 container attach f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:20:49 compute-0 nova_compute[239965]: 2026-01-26 17:20:49.139 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:49 compute-0 ceph-mon[75140]: pgmap v3931: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4211034631' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:20:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/4211034631' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:20:49 compute-0 lvm[411889]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:20:49 compute-0 lvm[411889]: VG ceph_vg1 finished
Jan 26 17:20:49 compute-0 lvm[411888]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:20:49 compute-0 lvm[411888]: VG ceph_vg0 finished
Jan 26 17:20:49 compute-0 lvm[411891]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:20:49 compute-0 lvm[411891]: VG ceph_vg2 finished
Jan 26 17:20:49 compute-0 lvm[411893]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:20:49 compute-0 lvm[411893]: VG ceph_vg2 finished
Jan 26 17:20:49 compute-0 wonderful_easley[411810]: {}
Jan 26 17:20:49 compute-0 systemd[1]: libpod-f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e.scope: Deactivated successfully.
Jan 26 17:20:49 compute-0 systemd[1]: libpod-f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e.scope: Consumed 1.354s CPU time.
Jan 26 17:20:49 compute-0 podman[411793]: 2026-01-26 17:20:49.739554988 +0000 UTC m=+1.007816060 container died f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:20:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-571f713288373315709563eb5dc1b1218700b5b10f6ced9b9bcf6dd9fc539462-merged.mount: Deactivated successfully.
Jan 26 17:20:49 compute-0 podman[411793]: 2026-01-26 17:20:49.784576012 +0000 UTC m=+1.052837084 container remove f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:20:49 compute-0 systemd[1]: libpod-conmon-f75339a73462f13a62f52f2be9f0e38f00ad7464a2bdb6d3122e497d519bef5e.scope: Deactivated successfully.
Jan 26 17:20:49 compute-0 sudo[411697]: pam_unix(sudo:session): session closed for user root
Jan 26 17:20:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:20:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:20:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:20:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:20:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:20:49 compute-0 sudo[411905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:20:49 compute-0 sudo[411905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:20:49 compute-0 sudo[411905]: pam_unix(sudo:session): session closed for user root
Jan 26 17:20:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:20:50 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:20:50 compute-0 ceph-mon[75140]: pgmap v3932: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:52 compute-0 podman[411930]: 2026-01-26 17:20:52.435808955 +0000 UTC m=+0.122413283 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 26 17:20:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:53 compute-0 nova_compute[239965]: 2026-01-26 17:20:53.291 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:53 compute-0 ceph-mon[75140]: pgmap v3933: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:54 compute-0 nova_compute[239965]: 2026-01-26 17:20:54.141 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:20:55 compute-0 nova_compute[239965]: 2026-01-26 17:20:55.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:55 compute-0 ceph-mon[75140]: pgmap v3934: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:56 compute-0 nova_compute[239965]: 2026-01-26 17:20:56.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:20:56 compute-0 nova_compute[239965]: 2026-01-26 17:20:56.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:20:57 compute-0 ceph-mon[75140]: pgmap v3935: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:58 compute-0 nova_compute[239965]: 2026-01-26 17:20:58.293 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:20:59 compute-0 nova_compute[239965]: 2026-01-26 17:20:59.145 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:20:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:20:59.304 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:20:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:20:59.305 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:20:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:20:59.305 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:20:59 compute-0 ceph-mon[75140]: pgmap v3936: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:21:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:21:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:01 compute-0 nova_compute[239965]: 2026-01-26 17:21:01.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:01 compute-0 ceph-mon[75140]: pgmap v3937: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:02 compute-0 ceph-mon[75140]: pgmap v3938: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:03 compute-0 nova_compute[239965]: 2026-01-26 17:21:03.295 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:03 compute-0 nova_compute[239965]: 2026-01-26 17:21:03.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:03 compute-0 nova_compute[239965]: 2026-01-26 17:21:03.878 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:03 compute-0 nova_compute[239965]: 2026-01-26 17:21:03.879 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:03 compute-0 nova_compute[239965]: 2026-01-26 17:21:03.879 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:03 compute-0 nova_compute[239965]: 2026-01-26 17:21:03.879 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:21:03 compute-0 nova_compute[239965]: 2026-01-26 17:21:03.879 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:04 compute-0 nova_compute[239965]: 2026-01-26 17:21:04.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:21:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661716342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:21:04 compute-0 nova_compute[239965]: 2026-01-26 17:21:04.626 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.747s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:04 compute-0 nova_compute[239965]: 2026-01-26 17:21:04.791 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:21:04 compute-0 nova_compute[239965]: 2026-01-26 17:21:04.792 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3517MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:21:04 compute-0 nova_compute[239965]: 2026-01-26 17:21:04.792 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:04 compute-0 nova_compute[239965]: 2026-01-26 17:21:04.793 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:04 compute-0 nova_compute[239965]: 2026-01-26 17:21:04.979 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:21:04 compute-0 nova_compute[239965]: 2026-01-26 17:21:04.980 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:21:04 compute-0 nova_compute[239965]: 2026-01-26 17:21:04.995 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:21:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:21:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/53912655' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:21:05 compute-0 nova_compute[239965]: 2026-01-26 17:21:05.592 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:21:05 compute-0 nova_compute[239965]: 2026-01-26 17:21:05.598 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:21:05 compute-0 ceph-mon[75140]: pgmap v3939: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3661716342' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:21:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/53912655' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:21:05 compute-0 nova_compute[239965]: 2026-01-26 17:21:05.728 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:21:05 compute-0 nova_compute[239965]: 2026-01-26 17:21:05.729 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:21:05 compute-0 nova_compute[239965]: 2026-01-26 17:21:05.729 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:06 compute-0 nova_compute[239965]: 2026-01-26 17:21:06.730 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:06 compute-0 nova_compute[239965]: 2026-01-26 17:21:06.768 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:06 compute-0 nova_compute[239965]: 2026-01-26 17:21:06.769 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:21:06 compute-0 nova_compute[239965]: 2026-01-26 17:21:06.769 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:21:06 compute-0 nova_compute[239965]: 2026-01-26 17:21:06.790 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:21:07 compute-0 ceph-mon[75140]: pgmap v3940: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:08 compute-0 nova_compute[239965]: 2026-01-26 17:21:08.296 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:08 compute-0 ceph-mon[75140]: pgmap v3941: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:09 compute-0 nova_compute[239965]: 2026-01-26 17:21:09.148 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:11 compute-0 ceph-mon[75140]: pgmap v3942: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:12 compute-0 nova_compute[239965]: 2026-01-26 17:21:12.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:13 compute-0 nova_compute[239965]: 2026-01-26 17:21:13.297 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:13 compute-0 ceph-mon[75140]: pgmap v3943: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:14 compute-0 nova_compute[239965]: 2026-01-26 17:21:14.150 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #192. Immutable memtables: 0.
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.062307) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 119] Flushing memtable with next log file: 192
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448075062330, "job": 119, "event": "flush_started", "num_memtables": 1, "num_entries": 786, "num_deletes": 256, "total_data_size": 1078460, "memory_usage": 1096016, "flush_reason": "Manual Compaction"}
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 119] Level-0 flush table #193: started
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448075070387, "cf_name": "default", "job": 119, "event": "table_file_creation", "file_number": 193, "file_size": 1063899, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 79829, "largest_seqno": 80614, "table_properties": {"data_size": 1059772, "index_size": 1842, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8797, "raw_average_key_size": 19, "raw_value_size": 1051590, "raw_average_value_size": 2271, "num_data_blocks": 82, "num_entries": 463, "num_filter_entries": 463, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769448005, "oldest_key_time": 1769448005, "file_creation_time": 1769448075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 193, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 119] Flush lasted 8124 microseconds, and 3858 cpu microseconds.
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.070425) [db/flush_job.cc:967] [default] [JOB 119] Level-0 flush table #193: 1063899 bytes OK
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.070445) [db/memtable_list.cc:519] [default] Level-0 commit table #193 started
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.072169) [db/memtable_list.cc:722] [default] Level-0 commit table #193: memtable #1 done
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.072182) EVENT_LOG_v1 {"time_micros": 1769448075072178, "job": 119, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.072197) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 119] Try to delete WAL files size 1074478, prev total WAL file size 1074478, number of live WAL files 2.
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000189.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.072797) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033353134' seq:72057594037927935, type:22 .. '6C6F676D0033373636' seq:0, type:0; will stop at (end)
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 120] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 119 Base level 0, inputs: [193(1038KB)], [191(9519KB)]
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448075072869, "job": 120, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [193], "files_L6": [191], "score": -1, "input_data_size": 10811684, "oldest_snapshot_seqno": -1}
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 120] Generated table #194: 9507 keys, 10709770 bytes, temperature: kUnknown
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448075150682, "cf_name": "default", "job": 120, "event": "table_file_creation", "file_number": 194, "file_size": 10709770, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10651295, "index_size": 33594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23813, "raw_key_size": 251349, "raw_average_key_size": 26, "raw_value_size": 10486143, "raw_average_value_size": 1102, "num_data_blocks": 1287, "num_entries": 9507, "num_filter_entries": 9507, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769448075, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 194, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.151010) [db/compaction/compaction_job.cc:1663] [default] [JOB 120] Compacted 1@0 + 1@6 files to L6 => 10709770 bytes
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.152364) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 139.1 rd, 137.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.3 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(20.2) write-amplify(10.1) OK, records in: 10031, records dropped: 524 output_compression: NoCompression
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.152380) EVENT_LOG_v1 {"time_micros": 1769448075152372, "job": 120, "event": "compaction_finished", "compaction_time_micros": 77734, "compaction_time_cpu_micros": 27906, "output_level": 6, "num_output_files": 1, "total_output_size": 10709770, "num_input_records": 10031, "num_output_records": 9507, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000193.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448075152668, "job": 120, "event": "table_file_deletion", "file_number": 193}
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000191.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448075154306, "job": 120, "event": "table_file_deletion", "file_number": 191}
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.072654) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.154374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.154384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.154385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.154387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:21:15 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:21:15.154388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:21:15 compute-0 ceph-mon[75140]: pgmap v3944: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:17 compute-0 ceph-mon[75140]: pgmap v3945: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:18 compute-0 nova_compute[239965]: 2026-01-26 17:21:18.298 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:19 compute-0 nova_compute[239965]: 2026-01-26 17:21:19.152 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:19 compute-0 podman[412001]: 2026-01-26 17:21:19.367139023 +0000 UTC m=+0.050744514 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:21:19 compute-0 ceph-mon[75140]: pgmap v3946: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:21 compute-0 ceph-mon[75140]: pgmap v3947: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:23 compute-0 nova_compute[239965]: 2026-01-26 17:21:23.300 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:23 compute-0 podman[412021]: 2026-01-26 17:21:23.387953955 +0000 UTC m=+0.079594692 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 17:21:23 compute-0 ceph-mon[75140]: pgmap v3948: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:24 compute-0 nova_compute[239965]: 2026-01-26 17:21:24.154 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:21:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:21:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:25 compute-0 ceph-mon[75140]: pgmap v3949: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:27 compute-0 sshd-session[412048]: Invalid user sol from 45.148.10.240 port 37014
Jan 26 17:21:27 compute-0 sshd-session[412048]: Connection closed by invalid user sol 45.148.10.240 port 37014 [preauth]
Jan 26 17:21:27 compute-0 nova_compute[239965]: 2026-01-26 17:21:27.539 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:27 compute-0 nova_compute[239965]: 2026-01-26 17:21:27.540 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:21:27 compute-0 ceph-mon[75140]: pgmap v3950: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:28 compute-0 nova_compute[239965]: 2026-01-26 17:21:28.302 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:21:28
Jan 26 17:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', '.rgw.root', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr']
Jan 26 17:21:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:21:29 compute-0 nova_compute[239965]: 2026-01-26 17:21:29.156 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:29 compute-0 ceph-mon[75140]: pgmap v3951: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:21:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.4 total, 600.0 interval
                                           Cumulative writes: 50K writes, 192K keys, 50K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 50K writes, 18K syncs, 2.68 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:21:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:21:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:21:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:21:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:21:31 compute-0 ceph-mon[75140]: pgmap v3952: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:33 compute-0 nova_compute[239965]: 2026-01-26 17:21:33.304 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:33 compute-0 ceph-mon[75140]: pgmap v3953: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:34 compute-0 nova_compute[239965]: 2026-01-26 17:21:34.157 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:21:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 149K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.70 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:21:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:35 compute-0 ceph-mon[75140]: pgmap v3954: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:37 compute-0 ceph-mon[75140]: pgmap v3955: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:38 compute-0 nova_compute[239965]: 2026-01-26 17:21:38.306 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:39 compute-0 nova_compute[239965]: 2026-01-26 17:21:39.161 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:39 compute-0 ceph-mon[75140]: pgmap v3956: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 17:21:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:40 compute-0 ceph-mon[75140]: pgmap v3957: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:43 compute-0 nova_compute[239965]: 2026-01-26 17:21:43.308 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:43 compute-0 ceph-mon[75140]: pgmap v3958: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:43 compute-0 nova_compute[239965]: 2026-01-26 17:21:43.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:43 compute-0 nova_compute[239965]: 2026-01-26 17:21:43.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:44 compute-0 nova_compute[239965]: 2026-01-26 17:21:44.197 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:44 compute-0 nova_compute[239965]: 2026-01-26 17:21:44.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:45 compute-0 ceph-mon[75140]: pgmap v3959: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:45 compute-0 nova_compute[239965]: 2026-01-26 17:21:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:47 compute-0 ceph-mon[75140]: pgmap v3960: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:48 compute-0 nova_compute[239965]: 2026-01-26 17:21:48.309 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:21:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/807187151' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:21:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:21:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/807187151' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:21:49 compute-0 nova_compute[239965]: 2026-01-26 17:21:49.199 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:49 compute-0 ceph-mon[75140]: pgmap v3961: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/807187151' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:21:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/807187151' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:21:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:21:50 compute-0 sudo[412050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:21:50 compute-0 sudo[412050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:21:50 compute-0 sudo[412050]: pam_unix(sudo:session): session closed for user root
Jan 26 17:21:50 compute-0 podman[412074]: 2026-01-26 17:21:50.099885345 +0000 UTC m=+0.054574929 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:21:50 compute-0 sudo[412081]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:21:50 compute-0 sudo[412081]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:21:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:50 compute-0 sudo[412081]: pam_unix(sudo:session): session closed for user root
Jan 26 17:21:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:21:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:21:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:21:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:21:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:21:51 compute-0 ceph-mon[75140]: pgmap v3962: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:21:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:21:51 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:21:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:21:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:21:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:21:51 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:21:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:21:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:21:51 compute-0 sudo[412149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:21:51 compute-0 sudo[412149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:21:51 compute-0 sudo[412149]: pam_unix(sudo:session): session closed for user root
Jan 26 17:21:51 compute-0 sudo[412174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:21:51 compute-0 sudo[412174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:21:51 compute-0 podman[412211]: 2026-01-26 17:21:51.715511576 +0000 UTC m=+0.095355259 container create 6216d351837f13dd93e932969a6b23020545c3ba59c3c44e4814bd9530534c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_proskuriakova, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 17:21:51 compute-0 podman[412211]: 2026-01-26 17:21:51.642407324 +0000 UTC m=+0.022251007 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:21:51 compute-0 systemd[1]: Started libpod-conmon-6216d351837f13dd93e932969a6b23020545c3ba59c3c44e4814bd9530534c3b.scope.
Jan 26 17:21:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:21:52 compute-0 podman[412211]: 2026-01-26 17:21:52.069630228 +0000 UTC m=+0.449473931 container init 6216d351837f13dd93e932969a6b23020545c3ba59c3c44e4814bd9530534c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_proskuriakova, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 17:21:52 compute-0 podman[412211]: 2026-01-26 17:21:52.076158969 +0000 UTC m=+0.456002682 container start 6216d351837f13dd93e932969a6b23020545c3ba59c3c44e4814bd9530534c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_proskuriakova, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 17:21:52 compute-0 tender_proskuriakova[412227]: 167 167
Jan 26 17:21:52 compute-0 systemd[1]: libpod-6216d351837f13dd93e932969a6b23020545c3ba59c3c44e4814bd9530534c3b.scope: Deactivated successfully.
Jan 26 17:21:52 compute-0 podman[412211]: 2026-01-26 17:21:52.399360253 +0000 UTC m=+0.779203986 container attach 6216d351837f13dd93e932969a6b23020545c3ba59c3c44e4814bd9530534c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_proskuriakova, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:21:52 compute-0 podman[412211]: 2026-01-26 17:21:52.400420008 +0000 UTC m=+0.780263731 container died 6216d351837f13dd93e932969a6b23020545c3ba59c3c44e4814bd9530534c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 17:21:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:21:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:21:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:21:52 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:21:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe0f7b714fd97cc19c48e1e2beb64af672593a21742bb7beaf0ab7343e2a512a-merged.mount: Deactivated successfully.
Jan 26 17:21:52 compute-0 podman[412211]: 2026-01-26 17:21:52.715091953 +0000 UTC m=+1.094935656 container remove 6216d351837f13dd93e932969a6b23020545c3ba59c3c44e4814bd9530534c3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_proskuriakova, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:21:52 compute-0 systemd[1]: libpod-conmon-6216d351837f13dd93e932969a6b23020545c3ba59c3c44e4814bd9530534c3b.scope: Deactivated successfully.
Jan 26 17:21:52 compute-0 podman[412253]: 2026-01-26 17:21:52.868291739 +0000 UTC m=+0.039774736 container create 17dc8f8aee0012b86840c3c0015ba7bf8527307e2249dde0c47e77ed3fcfd9a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:21:52 compute-0 systemd[1]: Started libpod-conmon-17dc8f8aee0012b86840c3c0015ba7bf8527307e2249dde0c47e77ed3fcfd9a3.scope.
Jan 26 17:21:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fa141379b92855b4815970ce4ff334f532e96f5f89d4d1a425c44a16d885fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fa141379b92855b4815970ce4ff334f532e96f5f89d4d1a425c44a16d885fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fa141379b92855b4815970ce4ff334f532e96f5f89d4d1a425c44a16d885fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fa141379b92855b4815970ce4ff334f532e96f5f89d4d1a425c44a16d885fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fa141379b92855b4815970ce4ff334f532e96f5f89d4d1a425c44a16d885fc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:52 compute-0 podman[412253]: 2026-01-26 17:21:52.850420862 +0000 UTC m=+0.021903879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:21:52 compute-0 podman[412253]: 2026-01-26 17:21:52.951001038 +0000 UTC m=+0.122484045 container init 17dc8f8aee0012b86840c3c0015ba7bf8527307e2249dde0c47e77ed3fcfd9a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_zhukovsky, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 17:21:52 compute-0 podman[412253]: 2026-01-26 17:21:52.960656984 +0000 UTC m=+0.132139981 container start 17dc8f8aee0012b86840c3c0015ba7bf8527307e2249dde0c47e77ed3fcfd9a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_zhukovsky, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:21:52 compute-0 podman[412253]: 2026-01-26 17:21:52.964006116 +0000 UTC m=+0.135489143 container attach 17dc8f8aee0012b86840c3c0015ba7bf8527307e2249dde0c47e77ed3fcfd9a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_zhukovsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:21:53 compute-0 nova_compute[239965]: 2026-01-26 17:21:53.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:53 compute-0 laughing_zhukovsky[412269]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:21:53 compute-0 laughing_zhukovsky[412269]: --> All data devices are unavailable
Jan 26 17:21:53 compute-0 systemd[1]: libpod-17dc8f8aee0012b86840c3c0015ba7bf8527307e2249dde0c47e77ed3fcfd9a3.scope: Deactivated successfully.
Jan 26 17:21:53 compute-0 ceph-mon[75140]: pgmap v3963: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:53 compute-0 podman[412289]: 2026-01-26 17:21:53.440880158 +0000 UTC m=+0.027410173 container died 17dc8f8aee0012b86840c3c0015ba7bf8527307e2249dde0c47e77ed3fcfd9a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_zhukovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 17:21:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9fa141379b92855b4815970ce4ff334f532e96f5f89d4d1a425c44a16d885fc-merged.mount: Deactivated successfully.
Jan 26 17:21:53 compute-0 podman[412289]: 2026-01-26 17:21:53.487583073 +0000 UTC m=+0.074113058 container remove 17dc8f8aee0012b86840c3c0015ba7bf8527307e2249dde0c47e77ed3fcfd9a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:21:53 compute-0 systemd[1]: libpod-conmon-17dc8f8aee0012b86840c3c0015ba7bf8527307e2249dde0c47e77ed3fcfd9a3.scope: Deactivated successfully.
Jan 26 17:21:53 compute-0 podman[412290]: 2026-01-26 17:21:53.524193941 +0000 UTC m=+0.094284282 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:21:53 compute-0 sudo[412174]: pam_unix(sudo:session): session closed for user root
Jan 26 17:21:53 compute-0 sudo[412330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:21:53 compute-0 sudo[412330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:21:53 compute-0 sudo[412330]: pam_unix(sudo:session): session closed for user root
Jan 26 17:21:53 compute-0 sudo[412355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:21:53 compute-0 sudo[412355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:21:53 compute-0 podman[412391]: 2026-01-26 17:21:53.979570995 +0000 UTC m=+0.054494247 container create 477f9c1d1240080dfc0ff1b9566e6301c3af09295958c78a6ffe554f33ca2355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meninsky, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 17:21:54 compute-0 systemd[1]: Started libpod-conmon-477f9c1d1240080dfc0ff1b9566e6301c3af09295958c78a6ffe554f33ca2355.scope.
Jan 26 17:21:54 compute-0 podman[412391]: 2026-01-26 17:21:53.952861601 +0000 UTC m=+0.027784913 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:21:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:21:54 compute-0 podman[412391]: 2026-01-26 17:21:54.071190252 +0000 UTC m=+0.146113534 container init 477f9c1d1240080dfc0ff1b9566e6301c3af09295958c78a6ffe554f33ca2355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meninsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:21:54 compute-0 podman[412391]: 2026-01-26 17:21:54.082716254 +0000 UTC m=+0.157639556 container start 477f9c1d1240080dfc0ff1b9566e6301c3af09295958c78a6ffe554f33ca2355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meninsky, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:21:54 compute-0 romantic_meninsky[412408]: 167 167
Jan 26 17:21:54 compute-0 podman[412391]: 2026-01-26 17:21:54.089484761 +0000 UTC m=+0.164408043 container attach 477f9c1d1240080dfc0ff1b9566e6301c3af09295958c78a6ffe554f33ca2355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 17:21:54 compute-0 systemd[1]: libpod-477f9c1d1240080dfc0ff1b9566e6301c3af09295958c78a6ffe554f33ca2355.scope: Deactivated successfully.
Jan 26 17:21:54 compute-0 podman[412391]: 2026-01-26 17:21:54.091473589 +0000 UTC m=+0.166396861 container died 477f9c1d1240080dfc0ff1b9566e6301c3af09295958c78a6ffe554f33ca2355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 17:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-e69262b9493d4255a17c85c080bb371489f15ae4586ba114d8494eaa80f7e771-merged.mount: Deactivated successfully.
Jan 26 17:21:54 compute-0 podman[412391]: 2026-01-26 17:21:54.130753642 +0000 UTC m=+0.205676904 container remove 477f9c1d1240080dfc0ff1b9566e6301c3af09295958c78a6ffe554f33ca2355 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meninsky, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Jan 26 17:21:54 compute-0 systemd[1]: libpod-conmon-477f9c1d1240080dfc0ff1b9566e6301c3af09295958c78a6ffe554f33ca2355.scope: Deactivated successfully.
Jan 26 17:21:54 compute-0 nova_compute[239965]: 2026-01-26 17:21:54.201 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:54 compute-0 podman[412433]: 2026-01-26 17:21:54.290310104 +0000 UTC m=+0.043084028 container create b94ad2cb8b8d8798f50bf9b4e2a344d2c2482bbc34062272e4eb28b0087603f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 17:21:54 compute-0 systemd[1]: Started libpod-conmon-b94ad2cb8b8d8798f50bf9b4e2a344d2c2482bbc34062272e4eb28b0087603f4.scope.
Jan 26 17:21:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c76dfa21cac72f477ed2d2ac37365da5d6ed9f57ffed601fb1dcf3883e7c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c76dfa21cac72f477ed2d2ac37365da5d6ed9f57ffed601fb1dcf3883e7c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c76dfa21cac72f477ed2d2ac37365da5d6ed9f57ffed601fb1dcf3883e7c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/402c76dfa21cac72f477ed2d2ac37365da5d6ed9f57ffed601fb1dcf3883e7c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:54 compute-0 podman[412433]: 2026-01-26 17:21:54.269562675 +0000 UTC m=+0.022336629 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:21:54 compute-0 podman[412433]: 2026-01-26 17:21:54.367706421 +0000 UTC m=+0.120480375 container init b94ad2cb8b8d8798f50bf9b4e2a344d2c2482bbc34062272e4eb28b0087603f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 17:21:54 compute-0 podman[412433]: 2026-01-26 17:21:54.375561884 +0000 UTC m=+0.128335808 container start b94ad2cb8b8d8798f50bf9b4e2a344d2c2482bbc34062272e4eb28b0087603f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hofstadter, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 17:21:54 compute-0 podman[412433]: 2026-01-26 17:21:54.379158893 +0000 UTC m=+0.131932837 container attach b94ad2cb8b8d8798f50bf9b4e2a344d2c2482bbc34062272e4eb28b0087603f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:21:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]: {
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:     "0": [
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:         {
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "devices": [
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "/dev/loop3"
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             ],
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_name": "ceph_lv0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_size": "21470642176",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "name": "ceph_lv0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "tags": {
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.cluster_name": "ceph",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.crush_device_class": "",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.encrypted": "0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.objectstore": "bluestore",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.osd_id": "0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.type": "block",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.vdo": "0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.with_tpm": "0"
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             },
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "type": "block",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "vg_name": "ceph_vg0"
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:         }
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:     ],
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:     "1": [
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:         {
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "devices": [
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "/dev/loop4"
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             ],
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_name": "ceph_lv1",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_size": "21470642176",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "name": "ceph_lv1",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "tags": {
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.cluster_name": "ceph",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.crush_device_class": "",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.encrypted": "0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.objectstore": "bluestore",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.osd_id": "1",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.type": "block",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.vdo": "0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.with_tpm": "0"
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             },
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "type": "block",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "vg_name": "ceph_vg1"
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:         }
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:     ],
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:     "2": [
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:         {
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "devices": [
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "/dev/loop5"
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             ],
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_name": "ceph_lv2",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_size": "21470642176",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "name": "ceph_lv2",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "tags": {
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.cluster_name": "ceph",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.crush_device_class": "",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.encrypted": "0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.objectstore": "bluestore",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.osd_id": "2",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.type": "block",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.vdo": "0",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:                 "ceph.with_tpm": "0"
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             },
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "type": "block",
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:             "vg_name": "ceph_vg2"
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:         }
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]:     ]
Jan 26 17:21:54 compute-0 xenodochial_hofstadter[412447]: }
Jan 26 17:21:54 compute-0 systemd[1]: libpod-b94ad2cb8b8d8798f50bf9b4e2a344d2c2482bbc34062272e4eb28b0087603f4.scope: Deactivated successfully.
Jan 26 17:21:54 compute-0 podman[412433]: 2026-01-26 17:21:54.678562663 +0000 UTC m=+0.431336587 container died b94ad2cb8b8d8798f50bf9b4e2a344d2c2482bbc34062272e4eb28b0087603f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hofstadter, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:21:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-402c76dfa21cac72f477ed2d2ac37365da5d6ed9f57ffed601fb1dcf3883e7c7-merged.mount: Deactivated successfully.
Jan 26 17:21:54 compute-0 podman[412433]: 2026-01-26 17:21:54.72083507 +0000 UTC m=+0.473608994 container remove b94ad2cb8b8d8798f50bf9b4e2a344d2c2482bbc34062272e4eb28b0087603f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:21:54 compute-0 systemd[1]: libpod-conmon-b94ad2cb8b8d8798f50bf9b4e2a344d2c2482bbc34062272e4eb28b0087603f4.scope: Deactivated successfully.
Jan 26 17:21:54 compute-0 sudo[412355]: pam_unix(sudo:session): session closed for user root
Jan 26 17:21:54 compute-0 sudo[412471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:21:54 compute-0 sudo[412471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:21:54 compute-0 sudo[412471]: pam_unix(sudo:session): session closed for user root
Jan 26 17:21:54 compute-0 sudo[412496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:21:54 compute-0 sudo[412496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:21:55 compute-0 podman[412533]: 2026-01-26 17:21:55.164364114 +0000 UTC m=+0.036031384 container create 440ebf2b0a804cecf3e5f2751fd6e2cb580b25d6d3c5a1c3f1856ff81f188ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_carson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:21:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:21:55 compute-0 systemd[1]: Started libpod-conmon-440ebf2b0a804cecf3e5f2751fd6e2cb580b25d6d3c5a1c3f1856ff81f188ff4.scope.
Jan 26 17:21:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:21:55 compute-0 podman[412533]: 2026-01-26 17:21:55.225501503 +0000 UTC m=+0.097168793 container init 440ebf2b0a804cecf3e5f2751fd6e2cb580b25d6d3c5a1c3f1856ff81f188ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:21:55 compute-0 podman[412533]: 2026-01-26 17:21:55.232405432 +0000 UTC m=+0.104072702 container start 440ebf2b0a804cecf3e5f2751fd6e2cb580b25d6d3c5a1c3f1856ff81f188ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_carson, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:21:55 compute-0 podman[412533]: 2026-01-26 17:21:55.236225106 +0000 UTC m=+0.107892386 container attach 440ebf2b0a804cecf3e5f2751fd6e2cb580b25d6d3c5a1c3f1856ff81f188ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 17:21:55 compute-0 relaxed_carson[412549]: 167 167
Jan 26 17:21:55 compute-0 systemd[1]: libpod-440ebf2b0a804cecf3e5f2751fd6e2cb580b25d6d3c5a1c3f1856ff81f188ff4.scope: Deactivated successfully.
Jan 26 17:21:55 compute-0 podman[412533]: 2026-01-26 17:21:55.238271576 +0000 UTC m=+0.109938876 container died 440ebf2b0a804cecf3e5f2751fd6e2cb580b25d6d3c5a1c3f1856ff81f188ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_carson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 17:21:55 compute-0 podman[412533]: 2026-01-26 17:21:55.148551706 +0000 UTC m=+0.020218996 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:21:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad1ae413c6febbbba063675629866308198fa969345b8addf95015942a8cb13e-merged.mount: Deactivated successfully.
Jan 26 17:21:55 compute-0 podman[412533]: 2026-01-26 17:21:55.273659533 +0000 UTC m=+0.145326803 container remove 440ebf2b0a804cecf3e5f2751fd6e2cb580b25d6d3c5a1c3f1856ff81f188ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 17:21:55 compute-0 systemd[1]: libpod-conmon-440ebf2b0a804cecf3e5f2751fd6e2cb580b25d6d3c5a1c3f1856ff81f188ff4.scope: Deactivated successfully.
Jan 26 17:21:55 compute-0 podman[412573]: 2026-01-26 17:21:55.434552298 +0000 UTC m=+0.040954725 container create ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lalande, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:21:55 compute-0 systemd[1]: Started libpod-conmon-ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6.scope.
Jan 26 17:21:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f681507bd8215a94b9a74f197980af93a97846c920b3138876d100adfaa83f46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f681507bd8215a94b9a74f197980af93a97846c920b3138876d100adfaa83f46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f681507bd8215a94b9a74f197980af93a97846c920b3138876d100adfaa83f46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f681507bd8215a94b9a74f197980af93a97846c920b3138876d100adfaa83f46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:21:55 compute-0 podman[412573]: 2026-01-26 17:21:55.498562428 +0000 UTC m=+0.104964875 container init ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:21:55 compute-0 podman[412573]: 2026-01-26 17:21:55.507572718 +0000 UTC m=+0.113975145 container start ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lalande, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 17:21:55 compute-0 podman[412573]: 2026-01-26 17:21:55.511570886 +0000 UTC m=+0.117973313 container attach ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lalande, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:21:55 compute-0 nova_compute[239965]: 2026-01-26 17:21:55.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:55 compute-0 podman[412573]: 2026-01-26 17:21:55.416879905 +0000 UTC m=+0.023282362 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:21:55 compute-0 ceph-mon[75140]: pgmap v3964: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:56 compute-0 lvm[412668]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:21:56 compute-0 lvm[412669]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:21:56 compute-0 lvm[412669]: VG ceph_vg1 finished
Jan 26 17:21:56 compute-0 lvm[412668]: VG ceph_vg0 finished
Jan 26 17:21:56 compute-0 lvm[412671]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:21:56 compute-0 lvm[412671]: VG ceph_vg2 finished
Jan 26 17:21:56 compute-0 beautiful_lalande[412590]: {}
Jan 26 17:21:56 compute-0 systemd[1]: libpod-ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6.scope: Deactivated successfully.
Jan 26 17:21:56 compute-0 systemd[1]: libpod-ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6.scope: Consumed 1.582s CPU time.
Jan 26 17:21:56 compute-0 podman[412573]: 2026-01-26 17:21:56.452069595 +0000 UTC m=+1.058472092 container died ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lalande, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:21:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-f681507bd8215a94b9a74f197980af93a97846c920b3138876d100adfaa83f46-merged.mount: Deactivated successfully.
Jan 26 17:21:56 compute-0 podman[412573]: 2026-01-26 17:21:56.617340217 +0000 UTC m=+1.223742644 container remove ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:21:56 compute-0 sudo[412496]: pam_unix(sudo:session): session closed for user root
Jan 26 17:21:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:21:56 compute-0 systemd[1]: libpod-conmon-ab12f2dc8362f173b9bc3556c992564ce2a16fcb2547ddc91faf456a7656dde6.scope: Deactivated successfully.
Jan 26 17:21:56 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:21:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:21:56 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:21:56 compute-0 sudo[412685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:21:56 compute-0 sudo[412685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:21:56 compute-0 sudo[412685]: pam_unix(sudo:session): session closed for user root
Jan 26 17:21:57 compute-0 ceph-mon[75140]: pgmap v3965: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:21:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:21:58 compute-0 nova_compute[239965]: 2026-01-26 17:21:58.313 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:58 compute-0 nova_compute[239965]: 2026-01-26 17:21:58.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:21:58 compute-0 nova_compute[239965]: 2026-01-26 17:21:58.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:21:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:21:59 compute-0 nova_compute[239965]: 2026-01-26 17:21:59.203 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:21:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:21:59.306 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:21:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:21:59.306 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:21:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:21:59.306 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:21:59 compute-0 ceph-mon[75140]: pgmap v3966: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:22:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:22:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:00 compute-0 ceph-mon[75140]: pgmap v3967: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:01 compute-0 nova_compute[239965]: 2026-01-26 17:22:01.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:01 compute-0 nova_compute[239965]: 2026-01-26 17:22:01.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:01 compute-0 nova_compute[239965]: 2026-01-26 17:22:01.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:22:01 compute-0 nova_compute[239965]: 2026-01-26 17:22:01.527 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:22:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:03 compute-0 nova_compute[239965]: 2026-01-26 17:22:03.316 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:03 compute-0 ceph-mon[75140]: pgmap v3968: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:04 compute-0 nova_compute[239965]: 2026-01-26 17:22:04.205 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:04 compute-0 nova_compute[239965]: 2026-01-26 17:22:04.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:04 compute-0 nova_compute[239965]: 2026-01-26 17:22:04.551 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:04 compute-0 nova_compute[239965]: 2026-01-26 17:22:04.551 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:04 compute-0 nova_compute[239965]: 2026-01-26 17:22:04.551 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:04 compute-0 nova_compute[239965]: 2026-01-26 17:22:04.551 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:22:04 compute-0 nova_compute[239965]: 2026-01-26 17:22:04.552 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:22:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2290243443' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.122 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.276 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.278 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3530MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.278 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.279 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.394 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.395 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.418 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:22:05 compute-0 ceph-mon[75140]: pgmap v3969: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2290243443' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:22:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:22:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3665557921' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.982 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:22:05 compute-0 nova_compute[239965]: 2026-01-26 17:22:05.988 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:22:06 compute-0 nova_compute[239965]: 2026-01-26 17:22:06.007 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:22:06 compute-0 nova_compute[239965]: 2026-01-26 17:22:06.008 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:22:06 compute-0 nova_compute[239965]: 2026-01-26 17:22:06.008 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.730s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3665557921' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:22:07 compute-0 nova_compute[239965]: 2026-01-26 17:22:07.264 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:07 compute-0 nova_compute[239965]: 2026-01-26 17:22:07.265 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:22:07 compute-0 nova_compute[239965]: 2026-01-26 17:22:07.265 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:22:07 compute-0 nova_compute[239965]: 2026-01-26 17:22:07.292 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:22:07 compute-0 nova_compute[239965]: 2026-01-26 17:22:07.293 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:07 compute-0 ceph-mon[75140]: pgmap v3970: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:08 compute-0 nova_compute[239965]: 2026-01-26 17:22:08.318 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:08 compute-0 ceph-mon[75140]: pgmap v3971: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:09 compute-0 nova_compute[239965]: 2026-01-26 17:22:09.206 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:10 compute-0 ceph-mon[75140]: pgmap v3972: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:12 compute-0 ceph-mon[75140]: pgmap v3973: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:13 compute-0 nova_compute[239965]: 2026-01-26 17:22:13.320 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:14 compute-0 nova_compute[239965]: 2026-01-26 17:22:14.208 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:14 compute-0 ceph-mon[75140]: pgmap v3974: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:17 compute-0 ceph-mon[75140]: pgmap v3975: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:18 compute-0 nova_compute[239965]: 2026-01-26 17:22:18.321 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:19 compute-0 nova_compute[239965]: 2026-01-26 17:22:19.210 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:19 compute-0 ceph-mon[75140]: pgmap v3976: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:20 compute-0 podman[412754]: 2026-01-26 17:22:20.398315128 +0000 UTC m=+0.072030003 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:22:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:21 compute-0 ceph-mon[75140]: pgmap v3977: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:23 compute-0 nova_compute[239965]: 2026-01-26 17:22:23.323 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:23 compute-0 ceph-mon[75140]: pgmap v3978: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:24 compute-0 nova_compute[239965]: 2026-01-26 17:22:24.212 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:24 compute-0 podman[412773]: 2026-01-26 17:22:24.397369115 +0000 UTC m=+0.080433338 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 17:22:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:25 compute-0 ceph-mon[75140]: pgmap v3979: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:27 compute-0 ceph-mon[75140]: pgmap v3980: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:28 compute-0 nova_compute[239965]: 2026-01-26 17:22:28.324 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:22:28
Jan 26 17:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'backups', 'images', '.mgr', 'vms', 'cephfs.cephfs.data', 'volumes']
Jan 26 17:22:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:22:29 compute-0 nova_compute[239965]: 2026-01-26 17:22:29.213 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:29 compute-0 ceph-mon[75140]: pgmap v3981: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:22:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:22:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:22:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:22:31 compute-0 ceph-mon[75140]: pgmap v3982: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:33 compute-0 nova_compute[239965]: 2026-01-26 17:22:33.325 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:33 compute-0 ceph-mon[75140]: pgmap v3983: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:34 compute-0 nova_compute[239965]: 2026-01-26 17:22:34.215 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:35 compute-0 ceph-mon[75140]: pgmap v3984: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:37 compute-0 ceph-mon[75140]: pgmap v3985: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:38 compute-0 nova_compute[239965]: 2026-01-26 17:22:38.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:39 compute-0 nova_compute[239965]: 2026-01-26 17:22:39.257 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:39 compute-0 ceph-mon[75140]: pgmap v3986: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:41 compute-0 ceph-mon[75140]: pgmap v3987: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:43 compute-0 nova_compute[239965]: 2026-01-26 17:22:43.330 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:43 compute-0 nova_compute[239965]: 2026-01-26 17:22:43.526 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:43 compute-0 ceph-mon[75140]: pgmap v3988: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:44 compute-0 nova_compute[239965]: 2026-01-26 17:22:44.312 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:44 compute-0 nova_compute[239965]: 2026-01-26 17:22:44.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:45 compute-0 nova_compute[239965]: 2026-01-26 17:22:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:45 compute-0 ceph-mon[75140]: pgmap v3989: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:46 compute-0 nova_compute[239965]: 2026-01-26 17:22:46.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:47 compute-0 ceph-mon[75140]: pgmap v3990: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:48 compute-0 nova_compute[239965]: 2026-01-26 17:22:48.332 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:22:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/118106467' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:22:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:22:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/118106467' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:22:49 compute-0 nova_compute[239965]: 2026-01-26 17:22:49.314 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:49 compute-0 ceph-mon[75140]: pgmap v3991: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/118106467' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:22:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/118106467' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:22:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:22:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:51 compute-0 podman[412800]: 2026-01-26 17:22:51.381990265 +0000 UTC m=+0.060278065 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 26 17:22:51 compute-0 ceph-mon[75140]: pgmap v3992: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:22:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 16 op/s
Jan 26 17:22:53 compute-0 nova_compute[239965]: 2026-01-26 17:22:53.334 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:53 compute-0 ceph-mon[75140]: pgmap v3993: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 16 op/s
Jan 26 17:22:54 compute-0 nova_compute[239965]: 2026-01-26 17:22:54.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 28 op/s
Jan 26 17:22:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:22:55 compute-0 podman[412820]: 2026-01-26 17:22:55.422681802 +0000 UTC m=+0.112375771 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 26 17:22:55 compute-0 nova_compute[239965]: 2026-01-26 17:22:55.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:22:55 compute-0 ceph-mon[75140]: pgmap v3994: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 28 op/s
Jan 26 17:22:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 35 op/s
Jan 26 17:22:56 compute-0 sudo[412846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:22:56 compute-0 sudo[412846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:22:56 compute-0 sudo[412846]: pam_unix(sudo:session): session closed for user root
Jan 26 17:22:56 compute-0 sudo[412871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:22:56 compute-0 sudo[412871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:22:57 compute-0 sudo[412871]: pam_unix(sudo:session): session closed for user root
Jan 26 17:22:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:22:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:22:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:22:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:22:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:22:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:22:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:22:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:22:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:22:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:22:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:22:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:22:57 compute-0 sudo[412928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:22:57 compute-0 sudo[412928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:22:57 compute-0 sudo[412928]: pam_unix(sudo:session): session closed for user root
Jan 26 17:22:57 compute-0 ceph-mon[75140]: pgmap v3995: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 35 op/s
Jan 26 17:22:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:22:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:22:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:22:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:22:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:22:57 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:22:57 compute-0 sudo[412953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:22:57 compute-0 sudo[412953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:22:58 compute-0 podman[412990]: 2026-01-26 17:22:58.032990049 +0000 UTC m=+0.050645390 container create b47e784dd589f39030d309567b8b355472a9a973519e8d0df155a46e026f7929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mirzakhani, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:22:58 compute-0 systemd[1]: Started libpod-conmon-b47e784dd589f39030d309567b8b355472a9a973519e8d0df155a46e026f7929.scope.
Jan 26 17:22:58 compute-0 podman[412990]: 2026-01-26 17:22:58.010019037 +0000 UTC m=+0.027674428 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:22:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:22:58 compute-0 podman[412990]: 2026-01-26 17:22:58.138413936 +0000 UTC m=+0.156069307 container init b47e784dd589f39030d309567b8b355472a9a973519e8d0df155a46e026f7929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mirzakhani, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:22:58 compute-0 podman[412990]: 2026-01-26 17:22:58.151177828 +0000 UTC m=+0.168833199 container start b47e784dd589f39030d309567b8b355472a9a973519e8d0df155a46e026f7929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mirzakhani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 17:22:58 compute-0 podman[412990]: 2026-01-26 17:22:58.155823273 +0000 UTC m=+0.173478634 container attach b47e784dd589f39030d309567b8b355472a9a973519e8d0df155a46e026f7929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:22:58 compute-0 epic_mirzakhani[413006]: 167 167
Jan 26 17:22:58 compute-0 systemd[1]: libpod-b47e784dd589f39030d309567b8b355472a9a973519e8d0df155a46e026f7929.scope: Deactivated successfully.
Jan 26 17:22:58 compute-0 podman[412990]: 2026-01-26 17:22:58.163757336 +0000 UTC m=+0.181412697 container died b47e784dd589f39030d309567b8b355472a9a973519e8d0df155a46e026f7929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:22:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-9608ddb662840e004c7e7c0557c3ab79708fff71b01069f6dffbed35865b4061-merged.mount: Deactivated successfully.
Jan 26 17:22:58 compute-0 podman[412990]: 2026-01-26 17:22:58.21502069 +0000 UTC m=+0.232676031 container remove b47e784dd589f39030d309567b8b355472a9a973519e8d0df155a46e026f7929 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:22:58 compute-0 systemd[1]: libpod-conmon-b47e784dd589f39030d309567b8b355472a9a973519e8d0df155a46e026f7929.scope: Deactivated successfully.
Jan 26 17:22:58 compute-0 nova_compute[239965]: 2026-01-26 17:22:58.336 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:58 compute-0 podman[413030]: 2026-01-26 17:22:58.397255717 +0000 UTC m=+0.051892850 container create caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 17:22:58 compute-0 systemd[1]: Started libpod-conmon-caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb.scope.
Jan 26 17:22:58 compute-0 podman[413030]: 2026-01-26 17:22:58.374432259 +0000 UTC m=+0.029069422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:22:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176aacef257bd61e8e46c45de878408dec6849ad400ef87b6b362a857f64cf46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176aacef257bd61e8e46c45de878408dec6849ad400ef87b6b362a857f64cf46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176aacef257bd61e8e46c45de878408dec6849ad400ef87b6b362a857f64cf46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176aacef257bd61e8e46c45de878408dec6849ad400ef87b6b362a857f64cf46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:22:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/176aacef257bd61e8e46c45de878408dec6849ad400ef87b6b362a857f64cf46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:22:58 compute-0 podman[413030]: 2026-01-26 17:22:58.494072874 +0000 UTC m=+0.148710027 container init caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:22:58 compute-0 podman[413030]: 2026-01-26 17:22:58.504490539 +0000 UTC m=+0.159127672 container start caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 26 17:22:58 compute-0 podman[413030]: 2026-01-26 17:22:58.509184274 +0000 UTC m=+0.163821427 container attach caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 17:22:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 51 op/s
Jan 26 17:22:59 compute-0 keen_murdock[413047]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:22:59 compute-0 keen_murdock[413047]: --> All data devices are unavailable
Jan 26 17:22:59 compute-0 systemd[1]: libpod-caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb.scope: Deactivated successfully.
Jan 26 17:22:59 compute-0 conmon[413047]: conmon caad2d5eb98daa9d6252 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb.scope/container/memory.events
Jan 26 17:22:59 compute-0 podman[413030]: 2026-01-26 17:22:59.10726633 +0000 UTC m=+0.761903503 container died caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-176aacef257bd61e8e46c45de878408dec6849ad400ef87b6b362a857f64cf46-merged.mount: Deactivated successfully.
Jan 26 17:22:59 compute-0 podman[413030]: 2026-01-26 17:22:59.167126214 +0000 UTC m=+0.821763347 container remove caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:22:59 compute-0 systemd[1]: libpod-conmon-caad2d5eb98daa9d6252753d396c355ef32c803d463ebcffa69ec2c3b8ae7ffb.scope: Deactivated successfully.
Jan 26 17:22:59 compute-0 sudo[412953]: pam_unix(sudo:session): session closed for user root
Jan 26 17:22:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:22:59.306 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:22:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:22:59.308 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:22:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:22:59.309 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:22:59 compute-0 nova_compute[239965]: 2026-01-26 17:22:59.319 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:22:59 compute-0 sudo[413078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:22:59 compute-0 sudo[413078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:22:59 compute-0 sudo[413078]: pam_unix(sudo:session): session closed for user root
Jan 26 17:22:59 compute-0 sudo[413103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:22:59 compute-0 sudo[413103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:22:59 compute-0 ceph-mon[75140]: pgmap v3996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 51 op/s
Jan 26 17:22:59 compute-0 podman[413140]: 2026-01-26 17:22:59.727250602 +0000 UTC m=+0.051773687 container create 1efd3f937ab5a1c3f5e3c3fa8313efe389dc3130130bd5d158dfcf130ad30a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:22:59 compute-0 systemd[1]: Started libpod-conmon-1efd3f937ab5a1c3f5e3c3fa8313efe389dc3130130bd5d158dfcf130ad30a47.scope.
Jan 26 17:22:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:22:59 compute-0 podman[413140]: 2026-01-26 17:22:59.708380271 +0000 UTC m=+0.032903386 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:22:59 compute-0 podman[413140]: 2026-01-26 17:22:59.822597624 +0000 UTC m=+0.147120769 container init 1efd3f937ab5a1c3f5e3c3fa8313efe389dc3130130bd5d158dfcf130ad30a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:22:59 compute-0 podman[413140]: 2026-01-26 17:22:59.832801113 +0000 UTC m=+0.157324198 container start 1efd3f937ab5a1c3f5e3c3fa8313efe389dc3130130bd5d158dfcf130ad30a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:22:59 compute-0 podman[413140]: 2026-01-26 17:22:59.837336565 +0000 UTC m=+0.161859670 container attach 1efd3f937ab5a1c3f5e3c3fa8313efe389dc3130130bd5d158dfcf130ad30a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mendeleev, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:22:59 compute-0 kind_mendeleev[413157]: 167 167
Jan 26 17:22:59 compute-0 systemd[1]: libpod-1efd3f937ab5a1c3f5e3c3fa8313efe389dc3130130bd5d158dfcf130ad30a47.scope: Deactivated successfully.
Jan 26 17:22:59 compute-0 podman[413140]: 2026-01-26 17:22:59.841818934 +0000 UTC m=+0.166342049 container died 1efd3f937ab5a1c3f5e3c3fa8313efe389dc3130130bd5d158dfcf130ad30a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:22:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5ddc61c9c12509ea2b9fca7682dfff885405b48ba8b4a0e6e3f0d391d9d1071-merged.mount: Deactivated successfully.
Jan 26 17:22:59 compute-0 podman[413140]: 2026-01-26 17:22:59.883207996 +0000 UTC m=+0.207731081 container remove 1efd3f937ab5a1c3f5e3c3fa8313efe389dc3130130bd5d158dfcf130ad30a47 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:22:59 compute-0 systemd[1]: libpod-conmon-1efd3f937ab5a1c3f5e3c3fa8313efe389dc3130130bd5d158dfcf130ad30a47.scope: Deactivated successfully.
Jan 26 17:23:00 compute-0 podman[413179]: 2026-01-26 17:23:00.084007007 +0000 UTC m=+0.060845069 container create 6f2de6147b14ed66b3d1cae64c71cb3fbac15ba9594a85ebe97abb3896fedb0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_rubin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 17:23:00 compute-0 systemd[1]: Started libpod-conmon-6f2de6147b14ed66b3d1cae64c71cb3fbac15ba9594a85ebe97abb3896fedb0c.scope.
Jan 26 17:23:00 compute-0 podman[413179]: 2026-01-26 17:23:00.057703934 +0000 UTC m=+0.034542086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:23:00 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76840b90993fc573d1ee4ccf4dcbdfe85dee3c3fec45adfe0f887e147904090/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76840b90993fc573d1ee4ccf4dcbdfe85dee3c3fec45adfe0f887e147904090/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76840b90993fc573d1ee4ccf4dcbdfe85dee3c3fec45adfe0f887e147904090/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c76840b90993fc573d1ee4ccf4dcbdfe85dee3c3fec45adfe0f887e147904090/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:00 compute-0 podman[413179]: 2026-01-26 17:23:00.1851499 +0000 UTC m=+0.161987972 container init 6f2de6147b14ed66b3d1cae64c71cb3fbac15ba9594a85ebe97abb3896fedb0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:23:00 compute-0 podman[413179]: 2026-01-26 17:23:00.193028783 +0000 UTC m=+0.169866835 container start 6f2de6147b14ed66b3d1cae64c71cb3fbac15ba9594a85ebe97abb3896fedb0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 17:23:00 compute-0 podman[413179]: 2026-01-26 17:23:00.196478637 +0000 UTC m=+0.173316699 container attach 6f2de6147b14ed66b3d1cae64c71cb3fbac15ba9594a85ebe97abb3896fedb0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_rubin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:23:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:23:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]: {
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:     "0": [
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:         {
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "devices": [
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "/dev/loop3"
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             ],
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_name": "ceph_lv0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_size": "21470642176",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "name": "ceph_lv0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "tags": {
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.cluster_name": "ceph",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.crush_device_class": "",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.encrypted": "0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.objectstore": "bluestore",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.osd_id": "0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.type": "block",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.vdo": "0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.with_tpm": "0"
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             },
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "type": "block",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "vg_name": "ceph_vg0"
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:         }
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:     ],
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:     "1": [
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:         {
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "devices": [
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "/dev/loop4"
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             ],
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_name": "ceph_lv1",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_size": "21470642176",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "name": "ceph_lv1",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "tags": {
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.cluster_name": "ceph",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.crush_device_class": "",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.encrypted": "0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.objectstore": "bluestore",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.osd_id": "1",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.type": "block",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.vdo": "0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.with_tpm": "0"
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             },
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "type": "block",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "vg_name": "ceph_vg1"
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:         }
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:     ],
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:     "2": [
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:         {
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "devices": [
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "/dev/loop5"
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             ],
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_name": "ceph_lv2",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_size": "21470642176",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "name": "ceph_lv2",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "tags": {
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.cluster_name": "ceph",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.crush_device_class": "",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.encrypted": "0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.objectstore": "bluestore",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.osd_id": "2",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.type": "block",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.vdo": "0",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:                 "ceph.with_tpm": "0"
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             },
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "type": "block",
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:             "vg_name": "ceph_vg2"
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:         }
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]:     ]
Jan 26 17:23:00 compute-0 optimistic_rubin[413196]: }
Jan 26 17:23:00 compute-0 nova_compute[239965]: 2026-01-26 17:23:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:00 compute-0 nova_compute[239965]: 2026-01-26 17:23:00.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:23:00 compute-0 systemd[1]: libpod-6f2de6147b14ed66b3d1cae64c71cb3fbac15ba9594a85ebe97abb3896fedb0c.scope: Deactivated successfully.
Jan 26 17:23:00 compute-0 podman[413179]: 2026-01-26 17:23:00.530558168 +0000 UTC m=+0.507396240 container died 6f2de6147b14ed66b3d1cae64c71cb3fbac15ba9594a85ebe97abb3896fedb0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_rubin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:23:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 51 op/s
Jan 26 17:23:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c76840b90993fc573d1ee4ccf4dcbdfe85dee3c3fec45adfe0f887e147904090-merged.mount: Deactivated successfully.
Jan 26 17:23:00 compute-0 podman[413179]: 2026-01-26 17:23:00.575245501 +0000 UTC m=+0.552083553 container remove 6f2de6147b14ed66b3d1cae64c71cb3fbac15ba9594a85ebe97abb3896fedb0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_rubin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:23:00 compute-0 systemd[1]: libpod-conmon-6f2de6147b14ed66b3d1cae64c71cb3fbac15ba9594a85ebe97abb3896fedb0c.scope: Deactivated successfully.
Jan 26 17:23:00 compute-0 sudo[413103]: pam_unix(sudo:session): session closed for user root
Jan 26 17:23:00 compute-0 sudo[413217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:23:00 compute-0 sudo[413217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:23:00 compute-0 sudo[413217]: pam_unix(sudo:session): session closed for user root
Jan 26 17:23:00 compute-0 sudo[413242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:23:00 compute-0 sudo[413242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:23:01 compute-0 podman[413280]: 2026-01-26 17:23:01.138106845 +0000 UTC m=+0.047639955 container create 3f12e84b583de15b04374ff4d784d085a54c3b5def4e3cab959605797c0d054b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_zhukovsky, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:23:01 compute-0 systemd[1]: Started libpod-conmon-3f12e84b583de15b04374ff4d784d085a54c3b5def4e3cab959605797c0d054b.scope.
Jan 26 17:23:01 compute-0 podman[413280]: 2026-01-26 17:23:01.116613839 +0000 UTC m=+0.026146949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:23:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:23:01 compute-0 podman[413280]: 2026-01-26 17:23:01.242708673 +0000 UTC m=+0.152242353 container init 3f12e84b583de15b04374ff4d784d085a54c3b5def4e3cab959605797c0d054b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_zhukovsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:23:01 compute-0 podman[413280]: 2026-01-26 17:23:01.253505687 +0000 UTC m=+0.163038787 container start 3f12e84b583de15b04374ff4d784d085a54c3b5def4e3cab959605797c0d054b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:23:01 compute-0 podman[413280]: 2026-01-26 17:23:01.257836643 +0000 UTC m=+0.167369753 container attach 3f12e84b583de15b04374ff4d784d085a54c3b5def4e3cab959605797c0d054b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 17:23:01 compute-0 upbeat_zhukovsky[413297]: 167 167
Jan 26 17:23:01 compute-0 systemd[1]: libpod-3f12e84b583de15b04374ff4d784d085a54c3b5def4e3cab959605797c0d054b.scope: Deactivated successfully.
Jan 26 17:23:01 compute-0 podman[413280]: 2026-01-26 17:23:01.262327303 +0000 UTC m=+0.171860403 container died 3f12e84b583de15b04374ff4d784d085a54c3b5def4e3cab959605797c0d054b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_zhukovsky, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:23:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b427eef5576540373736833fc7df6c5840f5b708c292a308f09d24eae276cab0-merged.mount: Deactivated successfully.
Jan 26 17:23:01 compute-0 podman[413280]: 2026-01-26 17:23:01.319614804 +0000 UTC m=+0.229147924 container remove 3f12e84b583de15b04374ff4d784d085a54c3b5def4e3cab959605797c0d054b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 17:23:01 compute-0 systemd[1]: libpod-conmon-3f12e84b583de15b04374ff4d784d085a54c3b5def4e3cab959605797c0d054b.scope: Deactivated successfully.
Jan 26 17:23:01 compute-0 podman[413321]: 2026-01-26 17:23:01.51859278 +0000 UTC m=+0.045999675 container create 09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:23:01 compute-0 systemd[1]: Started libpod-conmon-09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30.scope.
Jan 26 17:23:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d3caeec3c512f7f37c3ea0b9da48fa596616aa8cad6e24a38e1a6eba4862aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:01 compute-0 podman[413321]: 2026-01-26 17:23:01.498527249 +0000 UTC m=+0.025934134 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d3caeec3c512f7f37c3ea0b9da48fa596616aa8cad6e24a38e1a6eba4862aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d3caeec3c512f7f37c3ea0b9da48fa596616aa8cad6e24a38e1a6eba4862aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34d3caeec3c512f7f37c3ea0b9da48fa596616aa8cad6e24a38e1a6eba4862aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:23:01 compute-0 podman[413321]: 2026-01-26 17:23:01.611239196 +0000 UTC m=+0.138646161 container init 09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 17:23:01 compute-0 podman[413321]: 2026-01-26 17:23:01.620883452 +0000 UTC m=+0.148290317 container start 09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 17:23:01 compute-0 podman[413321]: 2026-01-26 17:23:01.625639818 +0000 UTC m=+0.153046783 container attach 09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:23:01 compute-0 ceph-mon[75140]: pgmap v3997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 51 op/s
Jan 26 17:23:02 compute-0 lvm[413416]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:23:02 compute-0 lvm[413416]: VG ceph_vg0 finished
Jan 26 17:23:02 compute-0 lvm[413417]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:23:02 compute-0 lvm[413417]: VG ceph_vg1 finished
Jan 26 17:23:02 compute-0 lvm[413419]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:23:02 compute-0 lvm[413419]: VG ceph_vg2 finished
Jan 26 17:23:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Jan 26 17:23:02 compute-0 amazing_keller[413338]: {}
Jan 26 17:23:02 compute-0 systemd[1]: libpod-09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30.scope: Deactivated successfully.
Jan 26 17:23:02 compute-0 systemd[1]: libpod-09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30.scope: Consumed 1.765s CPU time.
Jan 26 17:23:02 compute-0 podman[413321]: 2026-01-26 17:23:02.673873593 +0000 UTC m=+1.201280448 container died 09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 17:23:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-34d3caeec3c512f7f37c3ea0b9da48fa596616aa8cad6e24a38e1a6eba4862aa-merged.mount: Deactivated successfully.
Jan 26 17:23:02 compute-0 podman[413321]: 2026-01-26 17:23:02.735596223 +0000 UTC m=+1.263003088 container remove 09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_keller, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 17:23:02 compute-0 systemd[1]: libpod-conmon-09e7556a3bd66a6c6c06a893b9da76246d825f4020be83200cc0c3a939155a30.scope: Deactivated successfully.
Jan 26 17:23:02 compute-0 ceph-mon[75140]: pgmap v3998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 0 B/s wr, 88 op/s
Jan 26 17:23:02 compute-0 sudo[413242]: pam_unix(sudo:session): session closed for user root
Jan 26 17:23:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:23:02 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:23:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:23:02 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:23:02 compute-0 sudo[413433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:23:02 compute-0 sudo[413433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:23:02 compute-0 sudo[413433]: pam_unix(sudo:session): session closed for user root
Jan 26 17:23:03 compute-0 nova_compute[239965]: 2026-01-26 17:23:03.341 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:03 compute-0 nova_compute[239965]: 2026-01-26 17:23:03.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:03 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:23:03 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:23:04 compute-0 nova_compute[239965]: 2026-01-26 17:23:04.357 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v3999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 17:23:04 compute-0 ceph-mon[75140]: pgmap v3999: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 0 B/s wr, 72 op/s
Jan 26 17:23:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:05 compute-0 nova_compute[239965]: 2026-01-26 17:23:05.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:05 compute-0 nova_compute[239965]: 2026-01-26 17:23:05.551 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:05 compute-0 nova_compute[239965]: 2026-01-26 17:23:05.552 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:05 compute-0 nova_compute[239965]: 2026-01-26 17:23:05.552 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:05 compute-0 nova_compute[239965]: 2026-01-26 17:23:05.552 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:23:05 compute-0 nova_compute[239965]: 2026-01-26 17:23:05.553 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:23:06 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/438424003' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:23:06 compute-0 nova_compute[239965]: 2026-01-26 17:23:06.194 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:06 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/438424003' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:23:06 compute-0 nova_compute[239965]: 2026-01-26 17:23:06.427 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:23:06 compute-0 nova_compute[239965]: 2026-01-26 17:23:06.428 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3510MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:23:06 compute-0 nova_compute[239965]: 2026-01-26 17:23:06.429 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:06 compute-0 nova_compute[239965]: 2026-01-26 17:23:06.429 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:06 compute-0 nova_compute[239965]: 2026-01-26 17:23:06.498 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:23:06 compute-0 nova_compute[239965]: 2026-01-26 17:23:06.499 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:23:06 compute-0 nova_compute[239965]: 2026-01-26 17:23:06.526 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:23:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 17:23:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:23:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502887971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:23:07 compute-0 nova_compute[239965]: 2026-01-26 17:23:07.080 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:23:07 compute-0 nova_compute[239965]: 2026-01-26 17:23:07.088 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:23:07 compute-0 nova_compute[239965]: 2026-01-26 17:23:07.526 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:23:07 compute-0 nova_compute[239965]: 2026-01-26 17:23:07.530 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:23:07 compute-0 nova_compute[239965]: 2026-01-26 17:23:07.530 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.101s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:07 compute-0 ceph-mon[75140]: pgmap v4000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 60 op/s
Jan 26 17:23:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3502887971' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:23:08 compute-0 nova_compute[239965]: 2026-01-26 17:23:08.345 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 26 17:23:09 compute-0 nova_compute[239965]: 2026-01-26 17:23:09.407 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:09 compute-0 nova_compute[239965]: 2026-01-26 17:23:09.522 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:09 compute-0 ceph-mon[75140]: pgmap v4001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 52 op/s
Jan 26 17:23:09 compute-0 nova_compute[239965]: 2026-01-26 17:23:09.664 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:09 compute-0 nova_compute[239965]: 2026-01-26 17:23:09.664 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:23:09 compute-0 nova_compute[239965]: 2026-01-26 17:23:09.664 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:23:09 compute-0 nova_compute[239965]: 2026-01-26 17:23:09.831 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:23:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 17:23:11 compute-0 ceph-mon[75140]: pgmap v4002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 17:23:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 17:23:13 compute-0 nova_compute[239965]: 2026-01-26 17:23:13.346 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:13 compute-0 ceph-mon[75140]: pgmap v4003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 17:23:14 compute-0 nova_compute[239965]: 2026-01-26 17:23:14.460 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:15 compute-0 ceph-mon[75140]: pgmap v4004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:17 compute-0 ceph-mon[75140]: pgmap v4005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:18 compute-0 nova_compute[239965]: 2026-01-26 17:23:18.348 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:19 compute-0 nova_compute[239965]: 2026-01-26 17:23:19.511 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:19 compute-0 ceph-mon[75140]: pgmap v4006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:21 compute-0 ceph-mon[75140]: pgmap v4007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:22 compute-0 podman[413502]: 2026-01-26 17:23:22.379903292 +0000 UTC m=+0.064500098 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:23:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:23 compute-0 nova_compute[239965]: 2026-01-26 17:23:23.351 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:23 compute-0 ceph-mon[75140]: pgmap v4008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:24 compute-0 nova_compute[239965]: 2026-01-26 17:23:24.512 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:25 compute-0 ceph-mon[75140]: pgmap v4009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:26 compute-0 podman[413521]: 2026-01-26 17:23:26.408772129 +0000 UTC m=+0.094061391 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 26 17:23:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:27 compute-0 ceph-mon[75140]: pgmap v4010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:28 compute-0 nova_compute[239965]: 2026-01-26 17:23:28.352 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:23:28
Jan 26 17:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.log', '.mgr', '.rgw.root', 'vms']
Jan 26 17:23:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:23:29 compute-0 nova_compute[239965]: 2026-01-26 17:23:29.514 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:29 compute-0 ceph-mon[75140]: pgmap v4011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:23:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:23:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:23:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:23:31 compute-0 ceph-mon[75140]: pgmap v4012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:33 compute-0 nova_compute[239965]: 2026-01-26 17:23:33.359 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:33 compute-0 ceph-mon[75140]: pgmap v4013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:34 compute-0 nova_compute[239965]: 2026-01-26 17:23:34.516 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:35 compute-0 ceph-mon[75140]: pgmap v4014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:37 compute-0 ceph-mon[75140]: pgmap v4015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:38 compute-0 nova_compute[239965]: 2026-01-26 17:23:38.367 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:39 compute-0 nova_compute[239965]: 2026-01-26 17:23:39.518 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:39 compute-0 ceph-mon[75140]: pgmap v4016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:41 compute-0 ceph-mon[75140]: pgmap v4017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:43 compute-0 nova_compute[239965]: 2026-01-26 17:23:43.369 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:43 compute-0 ceph-mon[75140]: pgmap v4018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:44 compute-0 nova_compute[239965]: 2026-01-26 17:23:44.521 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:45 compute-0 nova_compute[239965]: 2026-01-26 17:23:45.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:45 compute-0 ceph-mon[75140]: pgmap v4019: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:46 compute-0 nova_compute[239965]: 2026-01-26 17:23:46.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:46 compute-0 nova_compute[239965]: 2026-01-26 17:23:46.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:47 compute-0 ceph-mon[75140]: pgmap v4020: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:47 compute-0 nova_compute[239965]: 2026-01-26 17:23:47.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:47 compute-0 sshd-session[413547]: Invalid user sol from 45.148.10.240 port 39574
Jan 26 17:23:48 compute-0 sshd-session[413547]: Connection closed by invalid user sol 45.148.10.240 port 39574 [preauth]
Jan 26 17:23:48 compute-0 nova_compute[239965]: 2026-01-26 17:23:48.370 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:23:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3486867734' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:23:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:23:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3486867734' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:23:49 compute-0 nova_compute[239965]: 2026-01-26 17:23:49.523 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:49 compute-0 ceph-mon[75140]: pgmap v4021: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3486867734' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:23:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3486867734' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:23:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:23:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:51 compute-0 ceph-mon[75140]: pgmap v4022: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:52 compute-0 ceph-mon[75140]: pgmap v4023: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:53 compute-0 podman[413549]: 2026-01-26 17:23:53.373696656 +0000 UTC m=+0.061254179 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:23:53 compute-0 nova_compute[239965]: 2026-01-26 17:23:53.373 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:54 compute-0 nova_compute[239965]: 2026-01-26 17:23:54.524 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:23:55 compute-0 ceph-mon[75140]: pgmap v4024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:57 compute-0 podman[413570]: 2026-01-26 17:23:57.388916711 +0000 UTC m=+0.082077429 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:23:57 compute-0 nova_compute[239965]: 2026-01-26 17:23:57.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:23:57 compute-0 ceph-mon[75140]: pgmap v4025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:58 compute-0 nova_compute[239965]: 2026-01-26 17:23:58.376 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:23:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:23:59.309 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:23:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:23:59.309 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:23:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:23:59.309 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:23:59 compute-0 nova_compute[239965]: 2026-01-26 17:23:59.526 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:23:59 compute-0 ceph-mon[75140]: pgmap v4026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:24:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:24:00 compute-0 nova_compute[239965]: 2026-01-26 17:24:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:00 compute-0 nova_compute[239965]: 2026-01-26 17:24:00.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:24:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:01 compute-0 ceph-mon[75140]: pgmap v4027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #195. Immutable memtables: 0.
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:01.882202) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 121] Flushing memtable with next log file: 195
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448241882479, "job": 121, "event": "flush_started", "num_memtables": 1, "num_entries": 1517, "num_deletes": 251, "total_data_size": 2522478, "memory_usage": 2552544, "flush_reason": "Manual Compaction"}
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 121] Level-0 flush table #196: started
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448241941077, "cf_name": "default", "job": 121, "event": "table_file_creation", "file_number": 196, "file_size": 2474973, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 80615, "largest_seqno": 82131, "table_properties": {"data_size": 2467813, "index_size": 4230, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14517, "raw_average_key_size": 19, "raw_value_size": 2453621, "raw_average_value_size": 3365, "num_data_blocks": 189, "num_entries": 729, "num_filter_entries": 729, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769448075, "oldest_key_time": 1769448075, "file_creation_time": 1769448241, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 196, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 121] Flush lasted 58713 microseconds, and 6014 cpu microseconds.
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:01.941124) [db/flush_job.cc:967] [default] [JOB 121] Level-0 flush table #196: 2474973 bytes OK
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:01.941147) [db/memtable_list.cc:519] [default] Level-0 commit table #196 started
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:01.943243) [db/memtable_list.cc:722] [default] Level-0 commit table #196: memtable #1 done
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:01.943257) EVENT_LOG_v1 {"time_micros": 1769448241943252, "job": 121, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:01.943279) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 121] Try to delete WAL files size 2515856, prev total WAL file size 2515856, number of live WAL files 2.
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000192.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:01.944174) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038303332' seq:72057594037927935, type:22 .. '7061786F730038323834' seq:0, type:0; will stop at (end)
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 122] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 121 Base level 0, inputs: [196(2416KB)], [194(10MB)]
Jan 26 17:24:01 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448241944234, "job": 122, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [196], "files_L6": [194], "score": -1, "input_data_size": 13184743, "oldest_snapshot_seqno": -1}
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 122] Generated table #197: 9722 keys, 11397620 bytes, temperature: kUnknown
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448242120839, "cf_name": "default", "job": 122, "event": "table_file_creation", "file_number": 197, "file_size": 11397620, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11337019, "index_size": 35208, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24325, "raw_key_size": 256439, "raw_average_key_size": 26, "raw_value_size": 11167164, "raw_average_value_size": 1148, "num_data_blocks": 1351, "num_entries": 9722, "num_filter_entries": 9722, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769448241, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 197, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:02.121193) [db/compaction/compaction_job.cc:1663] [default] [JOB 122] Compacted 1@0 + 1@6 files to L6 => 11397620 bytes
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:02.265225) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 74.6 rd, 64.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 10.2 +0.0 blob) out(10.9 +0.0 blob), read-write-amplify(9.9) write-amplify(4.6) OK, records in: 10236, records dropped: 514 output_compression: NoCompression
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:02.265262) EVENT_LOG_v1 {"time_micros": 1769448242265249, "job": 122, "event": "compaction_finished", "compaction_time_micros": 176707, "compaction_time_cpu_micros": 32396, "output_level": 6, "num_output_files": 1, "total_output_size": 11397620, "num_input_records": 10236, "num_output_records": 9722, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000196.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448242265883, "job": 122, "event": "table_file_deletion", "file_number": 196}
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000194.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448242267972, "job": 122, "event": "table_file_deletion", "file_number": 194}
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:01.944052) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:02.268053) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:02.268058) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:02.268060) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:02.268067) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:24:02 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:24:02.268069) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:24:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:02 compute-0 ceph-mon[75140]: pgmap v4028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:03 compute-0 sudo[413596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:24:03 compute-0 sudo[413596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:03 compute-0 sudo[413596]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:03 compute-0 sudo[413621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:24:03 compute-0 sudo[413621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:03 compute-0 nova_compute[239965]: 2026-01-26 17:24:03.377 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:03 compute-0 sudo[413621]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:03 compute-0 sudo[413677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:24:03 compute-0 sudo[413677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:03 compute-0 sudo[413677]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:03 compute-0 sudo[413702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 26 17:24:03 compute-0 sudo[413702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:03 compute-0 sudo[413702]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:24:03 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:24:03 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:04 compute-0 sudo[413745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:24:04 compute-0 sudo[413745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:04 compute-0 sudo[413745]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:04 compute-0 sudo[413770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- inventory --format=json-pretty --filter-for-batch
Jan 26 17:24:04 compute-0 sudo[413770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:04 compute-0 podman[413805]: 2026-01-26 17:24:04.375606313 +0000 UTC m=+0.039728523 container create f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:24:04 compute-0 systemd[1]: Started libpod-conmon-f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87.scope.
Jan 26 17:24:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:24:04 compute-0 podman[413805]: 2026-01-26 17:24:04.449138871 +0000 UTC m=+0.113261111 container init f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 17:24:04 compute-0 podman[413805]: 2026-01-26 17:24:04.358414633 +0000 UTC m=+0.022536853 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:24:04 compute-0 podman[413805]: 2026-01-26 17:24:04.455028305 +0000 UTC m=+0.119150515 container start f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:24:04 compute-0 podman[413805]: 2026-01-26 17:24:04.458191233 +0000 UTC m=+0.122313443 container attach f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:24:04 compute-0 cool_swartz[413822]: 167 167
Jan 26 17:24:04 compute-0 systemd[1]: libpod-f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87.scope: Deactivated successfully.
Jan 26 17:24:04 compute-0 conmon[413822]: conmon f0dd06359bbdee8dc22b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87.scope/container/memory.events
Jan 26 17:24:04 compute-0 podman[413805]: 2026-01-26 17:24:04.461566325 +0000 UTC m=+0.125688535 container died f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:24:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-aae4bba5deac29fb531f1b9f4ec75563b527c9d666e06cf8b6b68b706ca3d01d-merged.mount: Deactivated successfully.
Jan 26 17:24:04 compute-0 podman[413805]: 2026-01-26 17:24:04.506566635 +0000 UTC m=+0.170688845 container remove f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 17:24:04 compute-0 systemd[1]: libpod-conmon-f0dd06359bbdee8dc22b81dcb0e47c03ebfb4b41489056b5a395c87aeef6cb87.scope: Deactivated successfully.
Jan 26 17:24:04 compute-0 nova_compute[239965]: 2026-01-26 17:24:04.527 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:04 compute-0 podman[413845]: 2026-01-26 17:24:04.659562017 +0000 UTC m=+0.039174559 container create d0f70f25dd7920a0bad4d4be3d35c80cd79c5b791a470f8766d30189864034a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 17:24:04 compute-0 systemd[1]: Started libpod-conmon-d0f70f25dd7920a0bad4d4be3d35c80cd79c5b791a470f8766d30189864034a2.scope.
Jan 26 17:24:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0418f53f9431b6b397ae4f24e97ac489fbcd76bf0828eeaa2719f33e025080b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0418f53f9431b6b397ae4f24e97ac489fbcd76bf0828eeaa2719f33e025080b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0418f53f9431b6b397ae4f24e97ac489fbcd76bf0828eeaa2719f33e025080b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0418f53f9431b6b397ae4f24e97ac489fbcd76bf0828eeaa2719f33e025080b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:04 compute-0 podman[413845]: 2026-01-26 17:24:04.64127219 +0000 UTC m=+0.020884752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:24:04 compute-0 podman[413845]: 2026-01-26 17:24:04.739873481 +0000 UTC m=+0.119486033 container init d0f70f25dd7920a0bad4d4be3d35c80cd79c5b791a470f8766d30189864034a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:24:04 compute-0 podman[413845]: 2026-01-26 17:24:04.745462618 +0000 UTC m=+0.125075160 container start d0f70f25dd7920a0bad4d4be3d35c80cd79c5b791a470f8766d30189864034a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:24:04 compute-0 podman[413845]: 2026-01-26 17:24:04.750690016 +0000 UTC m=+0.130302588 container attach d0f70f25dd7920a0bad4d4be3d35c80cd79c5b791a470f8766d30189864034a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:24:04 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:04 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:04 compute-0 ceph-mon[75140]: pgmap v4029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]: [
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:     {
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         "available": false,
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         "being_replaced": false,
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         "ceph_device_lvm": false,
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         "lsm_data": {},
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         "lvs": [],
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         "path": "/dev/sr0",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         "rejected_reasons": [
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "Has a FileSystem",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "Insufficient space (<5GB)"
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         ],
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         "sys_api": {
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "actuators": null,
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "device_nodes": [
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:                 "sr0"
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             ],
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "devname": "sr0",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "human_readable_size": "482.00 KB",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "id_bus": "ata",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "model": "QEMU DVD-ROM",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "nr_requests": "2",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "parent": "/dev/sr0",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "partitions": {},
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "path": "/dev/sr0",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "removable": "1",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "rev": "2.5+",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "ro": "0",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "rotational": "1",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "sas_address": "",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "sas_device_handle": "",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "scheduler_mode": "mq-deadline",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "sectors": 0,
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "sectorsize": "2048",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "size": 493568.0,
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "support_discard": "2048",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "type": "disk",
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:             "vendor": "QEMU"
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:         }
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]:     }
Jan 26 17:24:05 compute-0 eloquent_tesla[413862]: ]
Jan 26 17:24:05 compute-0 systemd[1]: libpod-d0f70f25dd7920a0bad4d4be3d35c80cd79c5b791a470f8766d30189864034a2.scope: Deactivated successfully.
Jan 26 17:24:05 compute-0 podman[413845]: 2026-01-26 17:24:05.29063572 +0000 UTC m=+0.670248262 container died d0f70f25dd7920a0bad4d4be3d35c80cd79c5b791a470f8766d30189864034a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:24:05 compute-0 nova_compute[239965]: 2026-01-26 17:24:05.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0418f53f9431b6b397ae4f24e97ac489fbcd76bf0828eeaa2719f33e025080b7-merged.mount: Deactivated successfully.
Jan 26 17:24:05 compute-0 podman[413845]: 2026-01-26 17:24:05.539534527 +0000 UTC m=+0.919147069 container remove d0f70f25dd7920a0bad4d4be3d35c80cd79c5b791a470f8766d30189864034a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:24:05 compute-0 systemd[1]: libpod-conmon-d0f70f25dd7920a0bad4d4be3d35c80cd79c5b791a470f8766d30189864034a2.scope: Deactivated successfully.
Jan 26 17:24:05 compute-0 sudo[413770]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:24:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:24:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:24:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:24:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:24:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:24:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:24:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:24:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:24:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:24:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:24:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:24:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:24:05 compute-0 sudo[414592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:24:05 compute-0 sudo[414592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:05 compute-0 sudo[414592]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:05 compute-0 sudo[414617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:24:05 compute-0 sudo[414617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:06 compute-0 podman[414654]: 2026-01-26 17:24:06.021120814 +0000 UTC m=+0.037736834 container create 9cb465eb8a7caeb5c68615b6e4a756499c4251c3b49ad8750e7ac5b091643d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:24:06 compute-0 systemd[1]: Started libpod-conmon-9cb465eb8a7caeb5c68615b6e4a756499c4251c3b49ad8750e7ac5b091643d0c.scope.
Jan 26 17:24:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:24:06 compute-0 podman[414654]: 2026-01-26 17:24:06.098794864 +0000 UTC m=+0.115410914 container init 9cb465eb8a7caeb5c68615b6e4a756499c4251c3b49ad8750e7ac5b091643d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:24:06 compute-0 podman[414654]: 2026-01-26 17:24:06.005015881 +0000 UTC m=+0.021631911 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:24:06 compute-0 podman[414654]: 2026-01-26 17:24:06.104552335 +0000 UTC m=+0.121168355 container start 9cb465eb8a7caeb5c68615b6e4a756499c4251c3b49ad8750e7ac5b091643d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 17:24:06 compute-0 podman[414654]: 2026-01-26 17:24:06.108242955 +0000 UTC m=+0.124858995 container attach 9cb465eb8a7caeb5c68615b6e4a756499c4251c3b49ad8750e7ac5b091643d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 17:24:06 compute-0 jolly_villani[414670]: 167 167
Jan 26 17:24:06 compute-0 systemd[1]: libpod-9cb465eb8a7caeb5c68615b6e4a756499c4251c3b49ad8750e7ac5b091643d0c.scope: Deactivated successfully.
Jan 26 17:24:06 compute-0 podman[414654]: 2026-01-26 17:24:06.110091361 +0000 UTC m=+0.126707401 container died 9cb465eb8a7caeb5c68615b6e4a756499c4251c3b49ad8750e7ac5b091643d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:24:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-72cac68539c74af769a543cba65bd229b581af53b3f4397c9597193cc521663d-merged.mount: Deactivated successfully.
Jan 26 17:24:06 compute-0 podman[414654]: 2026-01-26 17:24:06.148634063 +0000 UTC m=+0.165250083 container remove 9cb465eb8a7caeb5c68615b6e4a756499c4251c3b49ad8750e7ac5b091643d0c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_villani, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 17:24:06 compute-0 systemd[1]: libpod-conmon-9cb465eb8a7caeb5c68615b6e4a756499c4251c3b49ad8750e7ac5b091643d0c.scope: Deactivated successfully.
Jan 26 17:24:06 compute-0 podman[414694]: 2026-01-26 17:24:06.336004175 +0000 UTC m=+0.046730134 container create 82ce9a11a320662929b306a9c1e4a0b6ca63b7c73638a0aa0f0b163ab8c5cff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:24:06 compute-0 systemd[1]: Started libpod-conmon-82ce9a11a320662929b306a9c1e4a0b6ca63b7c73638a0aa0f0b163ab8c5cff6.scope.
Jan 26 17:24:06 compute-0 podman[414694]: 2026-01-26 17:24:06.316419556 +0000 UTC m=+0.027145535 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:24:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b1514cd61f604feb738e7cb3bc815f0f6fda8a58fce62860361674b1922f18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b1514cd61f604feb738e7cb3bc815f0f6fda8a58fce62860361674b1922f18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b1514cd61f604feb738e7cb3bc815f0f6fda8a58fce62860361674b1922f18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b1514cd61f604feb738e7cb3bc815f0f6fda8a58fce62860361674b1922f18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b1514cd61f604feb738e7cb3bc815f0f6fda8a58fce62860361674b1922f18/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:06 compute-0 podman[414694]: 2026-01-26 17:24:06.445128584 +0000 UTC m=+0.155854563 container init 82ce9a11a320662929b306a9c1e4a0b6ca63b7c73638a0aa0f0b163ab8c5cff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:24:06 compute-0 podman[414694]: 2026-01-26 17:24:06.453174411 +0000 UTC m=+0.163900370 container start 82ce9a11a320662929b306a9c1e4a0b6ca63b7c73638a0aa0f0b163ab8c5cff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:24:06 compute-0 podman[414694]: 2026-01-26 17:24:06.461119864 +0000 UTC m=+0.171845823 container attach 82ce9a11a320662929b306a9c1e4a0b6ca63b7c73638a0aa0f0b163ab8c5cff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 17:24:06 compute-0 nova_compute[239965]: 2026-01-26 17:24:06.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:06 compute-0 nova_compute[239965]: 2026-01-26 17:24:06.577 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:06 compute-0 nova_compute[239965]: 2026-01-26 17:24:06.577 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:06 compute-0 nova_compute[239965]: 2026-01-26 17:24:06.578 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:06 compute-0 nova_compute[239965]: 2026-01-26 17:24:06.578 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:24:06 compute-0 nova_compute[239965]: 2026-01-26 17:24:06.578 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:24:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:24:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:24:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:24:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:24:06 compute-0 great_kepler[414711]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:24:06 compute-0 great_kepler[414711]: --> All data devices are unavailable
Jan 26 17:24:06 compute-0 systemd[1]: libpod-82ce9a11a320662929b306a9c1e4a0b6ca63b7c73638a0aa0f0b163ab8c5cff6.scope: Deactivated successfully.
Jan 26 17:24:06 compute-0 podman[414694]: 2026-01-26 17:24:06.996832456 +0000 UTC m=+0.707558425 container died 82ce9a11a320662929b306a9c1e4a0b6ca63b7c73638a0aa0f0b163ab8c5cff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 17:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-01b1514cd61f604feb738e7cb3bc815f0f6fda8a58fce62860361674b1922f18-merged.mount: Deactivated successfully.
Jan 26 17:24:07 compute-0 podman[414694]: 2026-01-26 17:24:07.05260701 +0000 UTC m=+0.763332969 container remove 82ce9a11a320662929b306a9c1e4a0b6ca63b7c73638a0aa0f0b163ab8c5cff6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:24:07 compute-0 systemd[1]: libpod-conmon-82ce9a11a320662929b306a9c1e4a0b6ca63b7c73638a0aa0f0b163ab8c5cff6.scope: Deactivated successfully.
Jan 26 17:24:07 compute-0 sudo[414617]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:24:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380193641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:24:07 compute-0 nova_compute[239965]: 2026-01-26 17:24:07.192 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.613s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:07 compute-0 sudo[414764]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:24:07 compute-0 sudo[414764]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:07 compute-0 sudo[414764]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:07 compute-0 sudo[414791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:24:07 compute-0 sudo[414791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:07 compute-0 nova_compute[239965]: 2026-01-26 17:24:07.395 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:24:07 compute-0 nova_compute[239965]: 2026-01-26 17:24:07.396 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3497MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:24:07 compute-0 nova_compute[239965]: 2026-01-26 17:24:07.397 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:07 compute-0 nova_compute[239965]: 2026-01-26 17:24:07.397 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:07 compute-0 nova_compute[239965]: 2026-01-26 17:24:07.461 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:24:07 compute-0 nova_compute[239965]: 2026-01-26 17:24:07.462 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:24:07 compute-0 nova_compute[239965]: 2026-01-26 17:24:07.484 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:24:07 compute-0 ceph-mon[75140]: pgmap v4030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:07 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1380193641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:24:07 compute-0 podman[414830]: 2026-01-26 17:24:07.669401263 +0000 UTC m=+0.045018052 container create 25bccb10367a8d6b10875544cf294b6b7c88f4f18a1f155ad5fb78ae864c5596 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:24:07 compute-0 systemd[1]: Started libpod-conmon-25bccb10367a8d6b10875544cf294b6b7c88f4f18a1f155ad5fb78ae864c5596.scope.
Jan 26 17:24:07 compute-0 podman[414830]: 2026-01-26 17:24:07.651452334 +0000 UTC m=+0.027069153 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:24:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:24:07 compute-0 podman[414830]: 2026-01-26 17:24:07.772871704 +0000 UTC m=+0.148488733 container init 25bccb10367a8d6b10875544cf294b6b7c88f4f18a1f155ad5fb78ae864c5596 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_curie, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:24:07 compute-0 podman[414830]: 2026-01-26 17:24:07.784254422 +0000 UTC m=+0.159871251 container start 25bccb10367a8d6b10875544cf294b6b7c88f4f18a1f155ad5fb78ae864c5596 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_curie, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:24:07 compute-0 hungry_curie[414865]: 167 167
Jan 26 17:24:07 compute-0 systemd[1]: libpod-25bccb10367a8d6b10875544cf294b6b7c88f4f18a1f155ad5fb78ae864c5596.scope: Deactivated successfully.
Jan 26 17:24:07 compute-0 podman[414830]: 2026-01-26 17:24:07.790308491 +0000 UTC m=+0.165925310 container attach 25bccb10367a8d6b10875544cf294b6b7c88f4f18a1f155ad5fb78ae864c5596 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Jan 26 17:24:07 compute-0 podman[414830]: 2026-01-26 17:24:07.795753484 +0000 UTC m=+0.171370303 container died 25bccb10367a8d6b10875544cf294b6b7c88f4f18a1f155ad5fb78ae864c5596 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_curie, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 17:24:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a865eea09f9acb7f0b3a3e900a5b4f595495082c3af279ce13d23cfe8f5c51b7-merged.mount: Deactivated successfully.
Jan 26 17:24:07 compute-0 podman[414830]: 2026-01-26 17:24:07.951441571 +0000 UTC m=+0.327058360 container remove 25bccb10367a8d6b10875544cf294b6b7c88f4f18a1f155ad5fb78ae864c5596 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:24:07 compute-0 systemd[1]: libpod-conmon-25bccb10367a8d6b10875544cf294b6b7c88f4f18a1f155ad5fb78ae864c5596.scope: Deactivated successfully.
Jan 26 17:24:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:24:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3756858857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:24:08 compute-0 nova_compute[239965]: 2026-01-26 17:24:08.086 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:24:08 compute-0 nova_compute[239965]: 2026-01-26 17:24:08.097 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:24:08 compute-0 nova_compute[239965]: 2026-01-26 17:24:08.115 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:24:08 compute-0 nova_compute[239965]: 2026-01-26 17:24:08.117 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:24:08 compute-0 nova_compute[239965]: 2026-01-26 17:24:08.117 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:08 compute-0 podman[414889]: 2026-01-26 17:24:08.159802527 +0000 UTC m=+0.066633201 container create a18eaa408d58f9d4d666e9c40250746c16e3d7fd33d7d19b4654a20047d97c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:24:08 compute-0 podman[414889]: 2026-01-26 17:24:08.121720075 +0000 UTC m=+0.028550769 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:24:08 compute-0 systemd[1]: Started libpod-conmon-a18eaa408d58f9d4d666e9c40250746c16e3d7fd33d7d19b4654a20047d97c37.scope.
Jan 26 17:24:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a152773e00396b8c75bd63bcaac375f6e91e014454ce3dcb019451ac5435c73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a152773e00396b8c75bd63bcaac375f6e91e014454ce3dcb019451ac5435c73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a152773e00396b8c75bd63bcaac375f6e91e014454ce3dcb019451ac5435c73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a152773e00396b8c75bd63bcaac375f6e91e014454ce3dcb019451ac5435c73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:08 compute-0 podman[414889]: 2026-01-26 17:24:08.322886044 +0000 UTC m=+0.229716738 container init a18eaa408d58f9d4d666e9c40250746c16e3d7fd33d7d19b4654a20047d97c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 26 17:24:08 compute-0 podman[414889]: 2026-01-26 17:24:08.333234777 +0000 UTC m=+0.240065451 container start a18eaa408d58f9d4d666e9c40250746c16e3d7fd33d7d19b4654a20047d97c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:24:08 compute-0 podman[414889]: 2026-01-26 17:24:08.340660689 +0000 UTC m=+0.247491383 container attach a18eaa408d58f9d4d666e9c40250746c16e3d7fd33d7d19b4654a20047d97c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:24:08 compute-0 nova_compute[239965]: 2026-01-26 17:24:08.379 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]: {
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:     "0": [
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:         {
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "devices": [
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "/dev/loop3"
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             ],
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_name": "ceph_lv0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_size": "21470642176",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "name": "ceph_lv0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "tags": {
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.cluster_name": "ceph",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.crush_device_class": "",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.encrypted": "0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.objectstore": "bluestore",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.osd_id": "0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.type": "block",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.vdo": "0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.with_tpm": "0"
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             },
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "type": "block",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "vg_name": "ceph_vg0"
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:         }
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:     ],
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:     "1": [
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:         {
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "devices": [
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "/dev/loop4"
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             ],
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_name": "ceph_lv1",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_size": "21470642176",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "name": "ceph_lv1",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "tags": {
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.cluster_name": "ceph",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.crush_device_class": "",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.encrypted": "0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.objectstore": "bluestore",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.osd_id": "1",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.type": "block",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.vdo": "0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.with_tpm": "0"
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             },
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "type": "block",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "vg_name": "ceph_vg1"
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:         }
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:     ],
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:     "2": [
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:         {
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "devices": [
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "/dev/loop5"
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             ],
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_name": "ceph_lv2",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_size": "21470642176",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "name": "ceph_lv2",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "tags": {
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.cluster_name": "ceph",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.crush_device_class": "",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.encrypted": "0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.objectstore": "bluestore",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.osd_id": "2",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.type": "block",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.vdo": "0",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:                 "ceph.with_tpm": "0"
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             },
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "type": "block",
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:             "vg_name": "ceph_vg2"
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:         }
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]:     ]
Jan 26 17:24:08 compute-0 interesting_rhodes[414906]: }
Jan 26 17:24:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3756858857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:24:08 compute-0 systemd[1]: libpod-a18eaa408d58f9d4d666e9c40250746c16e3d7fd33d7d19b4654a20047d97c37.scope: Deactivated successfully.
Jan 26 17:24:08 compute-0 podman[414915]: 2026-01-26 17:24:08.685499722 +0000 UTC m=+0.026039738 container died a18eaa408d58f9d4d666e9c40250746c16e3d7fd33d7d19b4654a20047d97c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:24:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a152773e00396b8c75bd63bcaac375f6e91e014454ce3dcb019451ac5435c73-merged.mount: Deactivated successfully.
Jan 26 17:24:08 compute-0 podman[414915]: 2026-01-26 17:24:08.771221649 +0000 UTC m=+0.111761645 container remove a18eaa408d58f9d4d666e9c40250746c16e3d7fd33d7d19b4654a20047d97c37 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:24:08 compute-0 systemd[1]: libpod-conmon-a18eaa408d58f9d4d666e9c40250746c16e3d7fd33d7d19b4654a20047d97c37.scope: Deactivated successfully.
Jan 26 17:24:08 compute-0 sudo[414791]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:08 compute-0 sudo[414929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:24:08 compute-0 sudo[414929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:08 compute-0 sudo[414929]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:08 compute-0 sudo[414954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:24:08 compute-0 sudo[414954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:09 compute-0 podman[414991]: 2026-01-26 17:24:09.319925387 +0000 UTC m=+0.058295837 container create abb28be801b3173351519f4369c616eed3942e3989fb36407058c00b57c53897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_joliot, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 17:24:09 compute-0 systemd[1]: Started libpod-conmon-abb28be801b3173351519f4369c616eed3942e3989fb36407058c00b57c53897.scope.
Jan 26 17:24:09 compute-0 podman[414991]: 2026-01-26 17:24:09.291196795 +0000 UTC m=+0.029567275 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:24:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:24:09 compute-0 podman[414991]: 2026-01-26 17:24:09.413217409 +0000 UTC m=+0.151587889 container init abb28be801b3173351519f4369c616eed3942e3989fb36407058c00b57c53897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_joliot, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:24:09 compute-0 podman[414991]: 2026-01-26 17:24:09.420683441 +0000 UTC m=+0.159053891 container start abb28be801b3173351519f4369c616eed3942e3989fb36407058c00b57c53897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_joliot, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 17:24:09 compute-0 podman[414991]: 2026-01-26 17:24:09.4239235 +0000 UTC m=+0.162293980 container attach abb28be801b3173351519f4369c616eed3942e3989fb36407058c00b57c53897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Jan 26 17:24:09 compute-0 thirsty_joliot[415007]: 167 167
Jan 26 17:24:09 compute-0 systemd[1]: libpod-abb28be801b3173351519f4369c616eed3942e3989fb36407058c00b57c53897.scope: Deactivated successfully.
Jan 26 17:24:09 compute-0 podman[414991]: 2026-01-26 17:24:09.428181994 +0000 UTC m=+0.166552464 container died abb28be801b3173351519f4369c616eed3942e3989fb36407058c00b57c53897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_joliot, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:24:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0265420fc3dfbfff6524defec52c1cfbc3da1d3e265883cefae595d7d674dd-merged.mount: Deactivated successfully.
Jan 26 17:24:09 compute-0 podman[414991]: 2026-01-26 17:24:09.485524737 +0000 UTC m=+0.223895187 container remove abb28be801b3173351519f4369c616eed3942e3989fb36407058c00b57c53897 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_joliot, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:24:09 compute-0 systemd[1]: libpod-conmon-abb28be801b3173351519f4369c616eed3942e3989fb36407058c00b57c53897.scope: Deactivated successfully.
Jan 26 17:24:09 compute-0 nova_compute[239965]: 2026-01-26 17:24:09.528 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:09 compute-0 ceph-mon[75140]: pgmap v4031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:09 compute-0 podman[415032]: 2026-01-26 17:24:09.655253278 +0000 UTC m=+0.044811577 container create d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_snyder, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:24:09 compute-0 systemd[1]: Started libpod-conmon-d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff.scope.
Jan 26 17:24:09 compute-0 podman[415032]: 2026-01-26 17:24:09.633854164 +0000 UTC m=+0.023412493 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:24:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efeb33bd4b93e03e65e6e3d543577ad9da5ed2ec3d65b78a75215916f858186/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efeb33bd4b93e03e65e6e3d543577ad9da5ed2ec3d65b78a75215916f858186/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efeb33bd4b93e03e65e6e3d543577ad9da5ed2ec3d65b78a75215916f858186/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2efeb33bd4b93e03e65e6e3d543577ad9da5ed2ec3d65b78a75215916f858186/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:24:09 compute-0 podman[415032]: 2026-01-26 17:24:09.751335417 +0000 UTC m=+0.140893726 container init d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Jan 26 17:24:09 compute-0 podman[415032]: 2026-01-26 17:24:09.759927267 +0000 UTC m=+0.149485566 container start d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:24:09 compute-0 podman[415032]: 2026-01-26 17:24:09.764262233 +0000 UTC m=+0.153820622 container attach d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_snyder, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:24:10 compute-0 nova_compute[239965]: 2026-01-26 17:24:10.118 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:10 compute-0 nova_compute[239965]: 2026-01-26 17:24:10.119 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:24:10 compute-0 nova_compute[239965]: 2026-01-26 17:24:10.119 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:24:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:10 compute-0 nova_compute[239965]: 2026-01-26 17:24:10.361 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:24:10 compute-0 lvm[415127]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:24:10 compute-0 lvm[415128]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:24:10 compute-0 lvm[415128]: VG ceph_vg1 finished
Jan 26 17:24:10 compute-0 lvm[415127]: VG ceph_vg0 finished
Jan 26 17:24:10 compute-0 lvm[415130]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:24:10 compute-0 lvm[415130]: VG ceph_vg2 finished
Jan 26 17:24:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:10 compute-0 mystifying_snyder[415049]: {}
Jan 26 17:24:10 compute-0 systemd[1]: libpod-d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff.scope: Deactivated successfully.
Jan 26 17:24:10 compute-0 podman[415032]: 2026-01-26 17:24:10.673376847 +0000 UTC m=+1.062935146 container died d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_snyder, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:24:10 compute-0 systemd[1]: libpod-d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff.scope: Consumed 1.550s CPU time.
Jan 26 17:24:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2efeb33bd4b93e03e65e6e3d543577ad9da5ed2ec3d65b78a75215916f858186-merged.mount: Deactivated successfully.
Jan 26 17:24:10 compute-0 podman[415032]: 2026-01-26 17:24:10.717048185 +0000 UTC m=+1.106606524 container remove d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:24:10 compute-0 systemd[1]: libpod-conmon-d90287ea200fa33e147a3db33b0e52371735dd054cbc294a46407764fa08eaff.scope: Deactivated successfully.
Jan 26 17:24:10 compute-0 sudo[414954]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:24:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:24:10 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:10 compute-0 sudo[415144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:24:10 compute-0 sudo[415144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:24:10 compute-0 sudo[415144]: pam_unix(sudo:session): session closed for user root
Jan 26 17:24:11 compute-0 ceph-mon[75140]: pgmap v4032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:24:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:12 compute-0 ceph-mon[75140]: pgmap v4033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:13 compute-0 nova_compute[239965]: 2026-01-26 17:24:13.381 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:14 compute-0 nova_compute[239965]: 2026-01-26 17:24:14.530 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:16 compute-0 ceph-mon[75140]: pgmap v4034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:17 compute-0 ceph-mon[75140]: pgmap v4035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:18 compute-0 nova_compute[239965]: 2026-01-26 17:24:18.384 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:19 compute-0 ceph-mon[75140]: pgmap v4036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:19 compute-0 nova_compute[239965]: 2026-01-26 17:24:19.534 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:20 compute-0 ceph-mon[75140]: pgmap v4037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:23 compute-0 ceph-mon[75140]: pgmap v4038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:23 compute-0 nova_compute[239965]: 2026-01-26 17:24:23.387 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:24 compute-0 podman[415169]: 2026-01-26 17:24:24.409283652 +0000 UTC m=+0.087798188 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:24:24 compute-0 nova_compute[239965]: 2026-01-26 17:24:24.536 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:25 compute-0 ceph-mon[75140]: pgmap v4039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:27 compute-0 ceph-mon[75140]: pgmap v4040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:28 compute-0 nova_compute[239965]: 2026-01-26 17:24:28.389 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:28 compute-0 podman[415189]: 2026-01-26 17:24:28.445759456 +0000 UTC m=+0.130967845 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:24:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:24:28
Jan 26 17:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['backups', 'volumes', 'vms', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.mgr']
Jan 26 17:24:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:24:29 compute-0 ceph-mon[75140]: pgmap v4041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:29 compute-0 nova_compute[239965]: 2026-01-26 17:24:29.537 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:24:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:24:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:24:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:24:31 compute-0 ceph-mon[75140]: pgmap v4042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:32 compute-0 ceph-mon[75140]: pgmap v4043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:33 compute-0 nova_compute[239965]: 2026-01-26 17:24:33.447 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:34 compute-0 nova_compute[239965]: 2026-01-26 17:24:34.540 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:35 compute-0 ceph-mon[75140]: pgmap v4044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:36 compute-0 ceph-mon[75140]: pgmap v4045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:38 compute-0 nova_compute[239965]: 2026-01-26 17:24:38.449 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:39 compute-0 ceph-mon[75140]: pgmap v4046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:39 compute-0 nova_compute[239965]: 2026-01-26 17:24:39.541 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:41 compute-0 ceph-mon[75140]: pgmap v4047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:43 compute-0 ceph-mon[75140]: pgmap v4048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:43 compute-0 nova_compute[239965]: 2026-01-26 17:24:43.468 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:44 compute-0 nova_compute[239965]: 2026-01-26 17:24:44.556 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:45 compute-0 ceph-mon[75140]: pgmap v4049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:46 compute-0 nova_compute[239965]: 2026-01-26 17:24:46.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:46 compute-0 nova_compute[239965]: 2026-01-26 17:24:46.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:47 compute-0 nova_compute[239965]: 2026-01-26 17:24:47.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:47 compute-0 ceph-mon[75140]: pgmap v4050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:48 compute-0 nova_compute[239965]: 2026-01-26 17:24:48.472 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:48 compute-0 nova_compute[239965]: 2026-01-26 17:24:48.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:48 compute-0 ceph-mon[75140]: pgmap v4051: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:24:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3044938871' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:24:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:24:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3044938871' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:24:49 compute-0 nova_compute[239965]: 2026-01-26 17:24:49.560 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3044938871' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:24:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3044938871' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:24:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:24:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:50 compute-0 ceph-mon[75140]: pgmap v4052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:53 compute-0 ceph-mon[75140]: pgmap v4053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:53 compute-0 nova_compute[239965]: 2026-01-26 17:24:53.476 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:54 compute-0 nova_compute[239965]: 2026-01-26 17:24:54.560 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:24:55 compute-0 podman[415216]: 2026-01-26 17:24:55.361255814 +0000 UTC m=+0.051918201 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:24:55 compute-0 ceph-mon[75140]: pgmap v4054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:57 compute-0 ceph-mon[75140]: pgmap v4055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:58 compute-0 nova_compute[239965]: 2026-01-26 17:24:58.480 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:58 compute-0 nova_compute[239965]: 2026-01-26 17:24:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:24:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:24:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:24:59.311 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:24:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:24:59.311 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:24:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:24:59.311 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:24:59 compute-0 podman[415236]: 2026-01-26 17:24:59.411932544 +0000 UTC m=+0.097603549 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 26 17:24:59 compute-0 nova_compute[239965]: 2026-01-26 17:24:59.563 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:24:59 compute-0 ceph-mon[75140]: pgmap v4056: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:25:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:25:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:01 compute-0 ceph-mon[75140]: pgmap v4057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:02 compute-0 nova_compute[239965]: 2026-01-26 17:25:02.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:02 compute-0 nova_compute[239965]: 2026-01-26 17:25:02.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:25:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:03 compute-0 nova_compute[239965]: 2026-01-26 17:25:03.483 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:03 compute-0 ceph-mon[75140]: pgmap v4058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:04 compute-0 nova_compute[239965]: 2026-01-26 17:25:04.564 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:05 compute-0 nova_compute[239965]: 2026-01-26 17:25:05.504 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:05 compute-0 ceph-mon[75140]: pgmap v4059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:07 compute-0 nova_compute[239965]: 2026-01-26 17:25:07.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:07 compute-0 nova_compute[239965]: 2026-01-26 17:25:07.556 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:07 compute-0 nova_compute[239965]: 2026-01-26 17:25:07.557 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:07 compute-0 nova_compute[239965]: 2026-01-26 17:25:07.557 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:07 compute-0 nova_compute[239965]: 2026-01-26 17:25:07.558 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:25:07 compute-0 nova_compute[239965]: 2026-01-26 17:25:07.558 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:07 compute-0 ceph-mon[75140]: pgmap v4060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:25:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2464371826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.177 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.360 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.361 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3565MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.361 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.362 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2464371826' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.800 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.801 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.887 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.979 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.979 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:25:08 compute-0 nova_compute[239965]: 2026-01-26 17:25:08.993 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:25:09 compute-0 nova_compute[239965]: 2026-01-26 17:25:09.016 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:25:09 compute-0 nova_compute[239965]: 2026-01-26 17:25:09.031 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:25:09 compute-0 nova_compute[239965]: 2026-01-26 17:25:09.565 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:09 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:25:09 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3758574551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:25:09 compute-0 nova_compute[239965]: 2026-01-26 17:25:09.634 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:25:09 compute-0 nova_compute[239965]: 2026-01-26 17:25:09.640 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:25:09 compute-0 ceph-mon[75140]: pgmap v4061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:09 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3758574551' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:25:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:10 compute-0 nova_compute[239965]: 2026-01-26 17:25:10.888 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:25:10 compute-0 nova_compute[239965]: 2026-01-26 17:25:10.889 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:25:10 compute-0 nova_compute[239965]: 2026-01-26 17:25:10.890 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.528s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:10 compute-0 sudo[415306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:25:10 compute-0 sudo[415306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:25:11 compute-0 sudo[415306]: pam_unix(sudo:session): session closed for user root
Jan 26 17:25:11 compute-0 sudo[415331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:25:11 compute-0 sudo[415331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:25:11 compute-0 sudo[415331]: pam_unix(sudo:session): session closed for user root
Jan 26 17:25:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:25:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:25:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:25:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:25:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:25:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:25:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:25:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:25:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:25:11 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:25:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:25:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:25:11 compute-0 sudo[415387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:25:11 compute-0 sudo[415387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:25:11 compute-0 sudo[415387]: pam_unix(sudo:session): session closed for user root
Jan 26 17:25:11 compute-0 ceph-mon[75140]: pgmap v4062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:25:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:25:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:25:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:25:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:25:11 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:25:11 compute-0 sudo[415412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:25:11 compute-0 sudo[415412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:25:12 compute-0 podman[415449]: 2026-01-26 17:25:12.139352626 +0000 UTC m=+0.053317275 container create 5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 17:25:12 compute-0 systemd[1]: Started libpod-conmon-5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c.scope.
Jan 26 17:25:12 compute-0 podman[415449]: 2026-01-26 17:25:12.112684514 +0000 UTC m=+0.026649193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:25:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:25:12 compute-0 podman[415449]: 2026-01-26 17:25:12.290778229 +0000 UTC m=+0.204742908 container init 5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:25:12 compute-0 podman[415449]: 2026-01-26 17:25:12.298682111 +0000 UTC m=+0.212646760 container start 5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 17:25:12 compute-0 podman[415449]: 2026-01-26 17:25:12.302678719 +0000 UTC m=+0.216643388 container attach 5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Jan 26 17:25:12 compute-0 serene_keldysh[415465]: 167 167
Jan 26 17:25:12 compute-0 systemd[1]: libpod-5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c.scope: Deactivated successfully.
Jan 26 17:25:12 compute-0 conmon[415465]: conmon 5ad90619a4693df702ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c.scope/container/memory.events
Jan 26 17:25:12 compute-0 podman[415449]: 2026-01-26 17:25:12.308139723 +0000 UTC m=+0.222104382 container died 5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:25:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ab0c0365b58aec3af49a98cf1d4bb17ea614e76e40a57d7c8ddc00e727319a3-merged.mount: Deactivated successfully.
Jan 26 17:25:12 compute-0 podman[415449]: 2026-01-26 17:25:12.396523544 +0000 UTC m=+0.310488193 container remove 5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 17:25:12 compute-0 systemd[1]: libpod-conmon-5ad90619a4693df702eadbe2a61b67f825ba43dd3d9afc7cd4a032db0e2b020c.scope: Deactivated successfully.
Jan 26 17:25:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:12 compute-0 podman[415490]: 2026-01-26 17:25:12.56686342 +0000 UTC m=+0.029747568 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:25:12 compute-0 podman[415490]: 2026-01-26 17:25:12.686395363 +0000 UTC m=+0.149279481 container create 7b17263ec27ac9b12d666cbc102ed00e8c23f22034556e3a51c4aeea4030d644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elbakyan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:25:12 compute-0 systemd[1]: Started libpod-conmon-7b17263ec27ac9b12d666cbc102ed00e8c23f22034556e3a51c4aeea4030d644.scope.
Jan 26 17:25:12 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2babf6b8cc086bc0d3f21467f75fdfe427a31028cfd42f23faa60baedd74518c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2babf6b8cc086bc0d3f21467f75fdfe427a31028cfd42f23faa60baedd74518c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2babf6b8cc086bc0d3f21467f75fdfe427a31028cfd42f23faa60baedd74518c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2babf6b8cc086bc0d3f21467f75fdfe427a31028cfd42f23faa60baedd74518c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2babf6b8cc086bc0d3f21467f75fdfe427a31028cfd42f23faa60baedd74518c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:12 compute-0 ceph-mon[75140]: pgmap v4063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:12 compute-0 podman[415490]: 2026-01-26 17:25:12.798648878 +0000 UTC m=+0.261533026 container init 7b17263ec27ac9b12d666cbc102ed00e8c23f22034556e3a51c4aeea4030d644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elbakyan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:25:12 compute-0 podman[415490]: 2026-01-26 17:25:12.810169691 +0000 UTC m=+0.273053809 container start 7b17263ec27ac9b12d666cbc102ed00e8c23f22034556e3a51c4aeea4030d644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elbakyan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:25:12 compute-0 podman[415490]: 2026-01-26 17:25:12.941072101 +0000 UTC m=+0.403956269 container attach 7b17263ec27ac9b12d666cbc102ed00e8c23f22034556e3a51c4aeea4030d644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elbakyan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:25:13 compute-0 youthful_elbakyan[415507]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:25:13 compute-0 youthful_elbakyan[415507]: --> All data devices are unavailable
Jan 26 17:25:13 compute-0 systemd[1]: libpod-7b17263ec27ac9b12d666cbc102ed00e8c23f22034556e3a51c4aeea4030d644.scope: Deactivated successfully.
Jan 26 17:25:13 compute-0 podman[415527]: 2026-01-26 17:25:13.413968197 +0000 UTC m=+0.027573256 container died 7b17263ec27ac9b12d666cbc102ed00e8c23f22034556e3a51c4aeea4030d644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elbakyan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:25:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-2babf6b8cc086bc0d3f21467f75fdfe427a31028cfd42f23faa60baedd74518c-merged.mount: Deactivated successfully.
Jan 26 17:25:13 compute-0 podman[415527]: 2026-01-26 17:25:13.480496273 +0000 UTC m=+0.094101322 container remove 7b17263ec27ac9b12d666cbc102ed00e8c23f22034556e3a51c4aeea4030d644 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_elbakyan, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:25:13 compute-0 systemd[1]: libpod-conmon-7b17263ec27ac9b12d666cbc102ed00e8c23f22034556e3a51c4aeea4030d644.scope: Deactivated successfully.
Jan 26 17:25:13 compute-0 nova_compute[239965]: 2026-01-26 17:25:13.489 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:13 compute-0 sudo[415412]: pam_unix(sudo:session): session closed for user root
Jan 26 17:25:13 compute-0 sudo[415542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:25:13 compute-0 sudo[415542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:25:13 compute-0 sudo[415542]: pam_unix(sudo:session): session closed for user root
Jan 26 17:25:13 compute-0 sudo[415567]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:25:13 compute-0 sudo[415567]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:25:13 compute-0 nova_compute[239965]: 2026-01-26 17:25:13.890 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:13 compute-0 nova_compute[239965]: 2026-01-26 17:25:13.907 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:13 compute-0 nova_compute[239965]: 2026-01-26 17:25:13.907 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:25:13 compute-0 nova_compute[239965]: 2026-01-26 17:25:13.908 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:25:13 compute-0 nova_compute[239965]: 2026-01-26 17:25:13.921 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:25:14 compute-0 podman[415605]: 2026-01-26 17:25:14.036493361 +0000 UTC m=+0.058417570 container create 258b1d332cf2db637221f15d3884420846e2f92f6d74033b5bb8017adcbe6eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leakey, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 17:25:14 compute-0 systemd[1]: Started libpod-conmon-258b1d332cf2db637221f15d3884420846e2f92f6d74033b5bb8017adcbe6eb9.scope.
Jan 26 17:25:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:25:14 compute-0 podman[415605]: 2026-01-26 17:25:14.017368243 +0000 UTC m=+0.039292472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:25:14 compute-0 podman[415605]: 2026-01-26 17:25:14.250020172 +0000 UTC m=+0.271944411 container init 258b1d332cf2db637221f15d3884420846e2f92f6d74033b5bb8017adcbe6eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leakey, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 17:25:14 compute-0 podman[415605]: 2026-01-26 17:25:14.259788141 +0000 UTC m=+0.281712360 container start 258b1d332cf2db637221f15d3884420846e2f92f6d74033b5bb8017adcbe6eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leakey, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:25:14 compute-0 podman[415605]: 2026-01-26 17:25:14.264552967 +0000 UTC m=+0.286477266 container attach 258b1d332cf2db637221f15d3884420846e2f92f6d74033b5bb8017adcbe6eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leakey, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:25:14 compute-0 jolly_leakey[415621]: 167 167
Jan 26 17:25:14 compute-0 systemd[1]: libpod-258b1d332cf2db637221f15d3884420846e2f92f6d74033b5bb8017adcbe6eb9.scope: Deactivated successfully.
Jan 26 17:25:14 compute-0 podman[415605]: 2026-01-26 17:25:14.268961406 +0000 UTC m=+0.290885625 container died 258b1d332cf2db637221f15d3884420846e2f92f6d74033b5bb8017adcbe6eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leakey, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 17:25:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-211d9be6f016165fb7c48a6101515de1ac98948c7604a78375ce77f072f0f64c-merged.mount: Deactivated successfully.
Jan 26 17:25:14 compute-0 podman[415605]: 2026-01-26 17:25:14.329294522 +0000 UTC m=+0.351218771 container remove 258b1d332cf2db637221f15d3884420846e2f92f6d74033b5bb8017adcbe6eb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_leakey, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:25:14 compute-0 systemd[1]: libpod-conmon-258b1d332cf2db637221f15d3884420846e2f92f6d74033b5bb8017adcbe6eb9.scope: Deactivated successfully.
Jan 26 17:25:14 compute-0 podman[415645]: 2026-01-26 17:25:14.531143998 +0000 UTC m=+0.051695236 container create e7ad558e406314fb9e8ddafa4cacdc778ade990650123fdb9785528c8206ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:25:14 compute-0 nova_compute[239965]: 2026-01-26 17:25:14.567 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:14 compute-0 systemd[1]: Started libpod-conmon-e7ad558e406314fb9e8ddafa4cacdc778ade990650123fdb9785528c8206ecf4.scope.
Jan 26 17:25:14 compute-0 podman[415645]: 2026-01-26 17:25:14.505120491 +0000 UTC m=+0.025671749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:25:14 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae89039815f167e95b8779bb958a2cb3d865f631e180b6070560b0deca935cfd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae89039815f167e95b8779bb958a2cb3d865f631e180b6070560b0deca935cfd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae89039815f167e95b8779bb958a2cb3d865f631e180b6070560b0deca935cfd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae89039815f167e95b8779bb958a2cb3d865f631e180b6070560b0deca935cfd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:14 compute-0 podman[415645]: 2026-01-26 17:25:14.649188944 +0000 UTC m=+0.169740202 container init e7ad558e406314fb9e8ddafa4cacdc778ade990650123fdb9785528c8206ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_zhukovsky, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:25:14 compute-0 podman[415645]: 2026-01-26 17:25:14.657813835 +0000 UTC m=+0.178365073 container start e7ad558e406314fb9e8ddafa4cacdc778ade990650123fdb9785528c8206ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_zhukovsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 26 17:25:14 compute-0 podman[415645]: 2026-01-26 17:25:14.756871148 +0000 UTC m=+0.277422386 container attach e7ad558e406314fb9e8ddafa4cacdc778ade990650123fdb9785528c8206ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_zhukovsky, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:25:14 compute-0 ceph-mon[75140]: pgmap v4064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]: {
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:     "0": [
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:         {
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "devices": [
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "/dev/loop3"
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             ],
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_name": "ceph_lv0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_size": "21470642176",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "name": "ceph_lv0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "tags": {
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.cluster_name": "ceph",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.crush_device_class": "",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.encrypted": "0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.objectstore": "bluestore",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.osd_id": "0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.type": "block",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.vdo": "0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.with_tpm": "0"
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             },
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "type": "block",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "vg_name": "ceph_vg0"
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:         }
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:     ],
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:     "1": [
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:         {
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "devices": [
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "/dev/loop4"
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             ],
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_name": "ceph_lv1",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_size": "21470642176",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "name": "ceph_lv1",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "tags": {
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.cluster_name": "ceph",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.crush_device_class": "",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.encrypted": "0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.objectstore": "bluestore",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.osd_id": "1",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.type": "block",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.vdo": "0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.with_tpm": "0"
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             },
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "type": "block",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "vg_name": "ceph_vg1"
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:         }
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:     ],
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:     "2": [
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:         {
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "devices": [
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "/dev/loop5"
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             ],
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_name": "ceph_lv2",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_size": "21470642176",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "name": "ceph_lv2",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "tags": {
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.cluster_name": "ceph",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.crush_device_class": "",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.encrypted": "0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.objectstore": "bluestore",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.osd_id": "2",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.type": "block",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.vdo": "0",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:                 "ceph.with_tpm": "0"
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             },
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "type": "block",
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:             "vg_name": "ceph_vg2"
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:         }
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]:     ]
Jan 26 17:25:14 compute-0 vigorous_zhukovsky[415662]: }
Jan 26 17:25:14 compute-0 systemd[1]: libpod-e7ad558e406314fb9e8ddafa4cacdc778ade990650123fdb9785528c8206ecf4.scope: Deactivated successfully.
Jan 26 17:25:14 compute-0 podman[415645]: 2026-01-26 17:25:14.970058391 +0000 UTC m=+0.490609639 container died e7ad558e406314fb9e8ddafa4cacdc778ade990650123fdb9785528c8206ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_zhukovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae89039815f167e95b8779bb958a2cb3d865f631e180b6070560b0deca935cfd-merged.mount: Deactivated successfully.
Jan 26 17:25:15 compute-0 podman[415645]: 2026-01-26 17:25:15.267057564 +0000 UTC m=+0.787608802 container remove e7ad558e406314fb9e8ddafa4cacdc778ade990650123fdb9785528c8206ecf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:25:15 compute-0 sudo[415567]: pam_unix(sudo:session): session closed for user root
Jan 26 17:25:15 compute-0 systemd[1]: libpod-conmon-e7ad558e406314fb9e8ddafa4cacdc778ade990650123fdb9785528c8206ecf4.scope: Deactivated successfully.
Jan 26 17:25:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:15 compute-0 sudo[415685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:25:15 compute-0 sudo[415685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:25:15 compute-0 sudo[415685]: pam_unix(sudo:session): session closed for user root
Jan 26 17:25:15 compute-0 sudo[415710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:25:15 compute-0 sudo[415710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:25:15 compute-0 podman[415748]: 2026-01-26 17:25:15.788807354 +0000 UTC m=+0.103566524 container create e4f9ee24ae75c424fc71b1f5bd88596d20c5ae3934da6180aa6d62aac58e893f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:25:15 compute-0 podman[415748]: 2026-01-26 17:25:15.711073864 +0000 UTC m=+0.025833024 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:25:15 compute-0 systemd[1]: Started libpod-conmon-e4f9ee24ae75c424fc71b1f5bd88596d20c5ae3934da6180aa6d62aac58e893f.scope.
Jan 26 17:25:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:25:15 compute-0 podman[415748]: 2026-01-26 17:25:15.899707136 +0000 UTC m=+0.214466306 container init e4f9ee24ae75c424fc71b1f5bd88596d20c5ae3934da6180aa6d62aac58e893f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:25:15 compute-0 podman[415748]: 2026-01-26 17:25:15.910105911 +0000 UTC m=+0.224865061 container start e4f9ee24ae75c424fc71b1f5bd88596d20c5ae3934da6180aa6d62aac58e893f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:25:15 compute-0 podman[415748]: 2026-01-26 17:25:15.91414725 +0000 UTC m=+0.228906390 container attach e4f9ee24ae75c424fc71b1f5bd88596d20c5ae3934da6180aa6d62aac58e893f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:25:15 compute-0 optimistic_hermann[415765]: 167 167
Jan 26 17:25:15 compute-0 systemd[1]: libpod-e4f9ee24ae75c424fc71b1f5bd88596d20c5ae3934da6180aa6d62aac58e893f.scope: Deactivated successfully.
Jan 26 17:25:15 compute-0 podman[415748]: 2026-01-26 17:25:15.919412678 +0000 UTC m=+0.234171818 container died e4f9ee24ae75c424fc71b1f5bd88596d20c5ae3934da6180aa6d62aac58e893f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:25:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f09a2eaea2435e2592848c0bf0af1bb15fdeb3c6f42b20538f787d69ee316c2-merged.mount: Deactivated successfully.
Jan 26 17:25:15 compute-0 podman[415748]: 2026-01-26 17:25:15.998934213 +0000 UTC m=+0.313693373 container remove e4f9ee24ae75c424fc71b1f5bd88596d20c5ae3934da6180aa6d62aac58e893f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_hermann, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:25:16 compute-0 systemd[1]: libpod-conmon-e4f9ee24ae75c424fc71b1f5bd88596d20c5ae3934da6180aa6d62aac58e893f.scope: Deactivated successfully.
Jan 26 17:25:16 compute-0 podman[415788]: 2026-01-26 17:25:16.217058437 +0000 UTC m=+0.055840896 container create c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:25:16 compute-0 systemd[1]: Started libpod-conmon-c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e.scope.
Jan 26 17:25:16 compute-0 podman[415788]: 2026-01-26 17:25:16.191107842 +0000 UTC m=+0.029890301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:25:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6c51f983bc7074d525daed26577338b2d1eca5a8673ce8fb14043206c9d7fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6c51f983bc7074d525daed26577338b2d1eca5a8673ce8fb14043206c9d7fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6c51f983bc7074d525daed26577338b2d1eca5a8673ce8fb14043206c9d7fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c6c51f983bc7074d525daed26577338b2d1eca5a8673ce8fb14043206c9d7fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:25:16 compute-0 podman[415788]: 2026-01-26 17:25:16.305872129 +0000 UTC m=+0.144654578 container init c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tu, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 17:25:16 compute-0 podman[415788]: 2026-01-26 17:25:16.317364291 +0000 UTC m=+0.156146710 container start c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 17:25:16 compute-0 podman[415788]: 2026-01-26 17:25:16.321257915 +0000 UTC m=+0.160040334 container attach c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tu, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:25:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:16 compute-0 ceph-mon[75140]: pgmap v4065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:17 compute-0 lvm[415884]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:25:17 compute-0 lvm[415884]: VG ceph_vg0 finished
Jan 26 17:25:17 compute-0 lvm[415887]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:25:17 compute-0 lvm[415887]: VG ceph_vg1 finished
Jan 26 17:25:17 compute-0 lvm[415888]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:25:17 compute-0 lvm[415888]: VG ceph_vg2 finished
Jan 26 17:25:17 compute-0 great_tu[415807]: {}
Jan 26 17:25:17 compute-0 systemd[1]: libpod-c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e.scope: Deactivated successfully.
Jan 26 17:25:17 compute-0 podman[415788]: 2026-01-26 17:25:17.218454106 +0000 UTC m=+1.057236555 container died c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tu, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:25:17 compute-0 systemd[1]: libpod-c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e.scope: Consumed 1.548s CPU time.
Jan 26 17:25:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-0c6c51f983bc7074d525daed26577338b2d1eca5a8673ce8fb14043206c9d7fb-merged.mount: Deactivated successfully.
Jan 26 17:25:17 compute-0 podman[415788]: 2026-01-26 17:25:17.268869159 +0000 UTC m=+1.107651578 container remove c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_tu, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:25:17 compute-0 systemd[1]: libpod-conmon-c19f92cb2afe811069fe3e9905f703ceaf8e2af180b3b08a97dd8819aa11615e.scope: Deactivated successfully.
Jan 26 17:25:17 compute-0 sudo[415710]: pam_unix(sudo:session): session closed for user root
Jan 26 17:25:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:25:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:25:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:25:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:25:17 compute-0 sudo[415902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:25:17 compute-0 sudo[415902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:25:17 compute-0 sudo[415902]: pam_unix(sudo:session): session closed for user root
Jan 26 17:25:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:25:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:25:18 compute-0 nova_compute[239965]: 2026-01-26 17:25:18.494 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:19 compute-0 ceph-mon[75140]: pgmap v4066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:19 compute-0 nova_compute[239965]: 2026-01-26 17:25:19.572 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:20 compute-0 ceph-mon[75140]: pgmap v4067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:23 compute-0 nova_compute[239965]: 2026-01-26 17:25:23.497 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:23 compute-0 ceph-mon[75140]: pgmap v4068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:24 compute-0 nova_compute[239965]: 2026-01-26 17:25:24.573 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:25 compute-0 ceph-mon[75140]: pgmap v4069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:26 compute-0 podman[415927]: 2026-01-26 17:25:26.391890036 +0000 UTC m=+0.068701371 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 17:25:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:27 compute-0 ceph-mon[75140]: pgmap v4070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:28 compute-0 nova_compute[239965]: 2026-01-26 17:25:28.501 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:25:28
Jan 26 17:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'volumes', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'images']
Jan 26 17:25:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:25:29 compute-0 nova_compute[239965]: 2026-01-26 17:25:29.575 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:29 compute-0 ceph-mon[75140]: pgmap v4071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:30 compute-0 podman[415947]: 2026-01-26 17:25:30.421875639 +0000 UTC m=+0.103126112 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:25:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:25:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:25:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:25:31 compute-0 ceph-mon[75140]: pgmap v4072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:33 compute-0 nova_compute[239965]: 2026-01-26 17:25:33.565 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:33 compute-0 ceph-mon[75140]: pgmap v4073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:34 compute-0 nova_compute[239965]: 2026-01-26 17:25:34.577 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:35 compute-0 ceph-mon[75140]: pgmap v4074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:37 compute-0 ceph-mon[75140]: pgmap v4075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:38 compute-0 nova_compute[239965]: 2026-01-26 17:25:38.567 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:39 compute-0 nova_compute[239965]: 2026-01-26 17:25:39.579 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:39 compute-0 ceph-mon[75140]: pgmap v4076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:40 compute-0 ceph-mon[75140]: pgmap v4077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:43 compute-0 nova_compute[239965]: 2026-01-26 17:25:43.571 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:43 compute-0 ceph-mon[75140]: pgmap v4078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:44 compute-0 nova_compute[239965]: 2026-01-26 17:25:44.581 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:45 compute-0 ceph-mon[75140]: pgmap v4079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:46 compute-0 nova_compute[239965]: 2026-01-26 17:25:46.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:46 compute-0 ceph-mon[75140]: pgmap v4080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:47 compute-0 nova_compute[239965]: 2026-01-26 17:25:47.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:47 compute-0 nova_compute[239965]: 2026-01-26 17:25:47.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:48 compute-0 nova_compute[239965]: 2026-01-26 17:25:48.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:25:48 compute-0 nova_compute[239965]: 2026-01-26 17:25:48.575 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:25:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2274802010' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:25:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:25:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2274802010' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:25:49 compute-0 nova_compute[239965]: 2026-01-26 17:25:49.584 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:49 compute-0 ceph-mon[75140]: pgmap v4081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2274802010' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:25:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2274802010' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:25:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:25:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:51 compute-0 ceph-mon[75140]: pgmap v4082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:53 compute-0 ceph-mon[75140]: pgmap v4083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:53 compute-0 nova_compute[239965]: 2026-01-26 17:25:53.580 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:54 compute-0 nova_compute[239965]: 2026-01-26 17:25:54.586 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:54 compute-0 ceph-mon[75140]: pgmap v4084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:25:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:57 compute-0 podman[415973]: 2026-01-26 17:25:57.396380025 +0000 UTC m=+0.075285632 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:25:57 compute-0 ceph-mon[75140]: pgmap v4085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:58 compute-0 nova_compute[239965]: 2026-01-26 17:25:58.582 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:25:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:58 compute-0 ceph-mon[75140]: pgmap v4086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:25:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:25:59.312 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:25:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:25:59.313 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:25:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:25:59.313 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:25:59 compute-0 nova_compute[239965]: 2026-01-26 17:25:59.588 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:26:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:26:00 compute-0 nova_compute[239965]: 2026-01-26 17:26:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:01 compute-0 podman[415992]: 2026-01-26 17:26:01.454281692 +0000 UTC m=+0.120775414 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 26 17:26:01 compute-0 ceph-mon[75140]: pgmap v4087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:03 compute-0 nova_compute[239965]: 2026-01-26 17:26:03.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:03 compute-0 nova_compute[239965]: 2026-01-26 17:26:03.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:26:03 compute-0 nova_compute[239965]: 2026-01-26 17:26:03.657 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:03 compute-0 ceph-mon[75140]: pgmap v4088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:04 compute-0 nova_compute[239965]: 2026-01-26 17:26:04.590 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:04 compute-0 ceph-mon[75140]: pgmap v4089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:05 compute-0 nova_compute[239965]: 2026-01-26 17:26:05.504 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:07 compute-0 sshd-session[416019]: Invalid user user from 45.148.10.240 port 47970
Jan 26 17:26:07 compute-0 sshd-session[416019]: Connection closed by invalid user user 45.148.10.240 port 47970 [preauth]
Jan 26 17:26:07 compute-0 ceph-mon[75140]: pgmap v4090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:08 compute-0 nova_compute[239965]: 2026-01-26 17:26:08.660 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:09 compute-0 ceph-mon[75140]: pgmap v4091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:09 compute-0 nova_compute[239965]: 2026-01-26 17:26:09.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:09 compute-0 nova_compute[239965]: 2026-01-26 17:26:09.546 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:26:09 compute-0 nova_compute[239965]: 2026-01-26 17:26:09.547 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:26:09 compute-0 nova_compute[239965]: 2026-01-26 17:26:09.547 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:26:09 compute-0 nova_compute[239965]: 2026-01-26 17:26:09.547 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:26:09 compute-0 nova_compute[239965]: 2026-01-26 17:26:09.547 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:26:09 compute-0 nova_compute[239965]: 2026-01-26 17:26:09.592 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:26:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2264033535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:26:10 compute-0 nova_compute[239965]: 2026-01-26 17:26:10.159 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:26:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2264033535' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:26:10 compute-0 nova_compute[239965]: 2026-01-26 17:26:10.352 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:26:10 compute-0 nova_compute[239965]: 2026-01-26 17:26:10.354 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3550MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:26:10 compute-0 nova_compute[239965]: 2026-01-26 17:26:10.354 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:26:10 compute-0 nova_compute[239965]: 2026-01-26 17:26:10.355 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:26:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:10 compute-0 nova_compute[239965]: 2026-01-26 17:26:10.436 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:26:10 compute-0 nova_compute[239965]: 2026-01-26 17:26:10.437 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:26:10 compute-0 nova_compute[239965]: 2026-01-26 17:26:10.467 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:26:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:26:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/243755679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:26:11 compute-0 nova_compute[239965]: 2026-01-26 17:26:11.037 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:26:11 compute-0 nova_compute[239965]: 2026-01-26 17:26:11.046 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:26:11 compute-0 nova_compute[239965]: 2026-01-26 17:26:11.079 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:26:11 compute-0 nova_compute[239965]: 2026-01-26 17:26:11.082 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:26:11 compute-0 nova_compute[239965]: 2026-01-26 17:26:11.083 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:26:11 compute-0 ceph-mon[75140]: pgmap v4092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/243755679' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:26:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:13 compute-0 nova_compute[239965]: 2026-01-26 17:26:13.083 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:13 compute-0 nova_compute[239965]: 2026-01-26 17:26:13.084 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:26:13 compute-0 nova_compute[239965]: 2026-01-26 17:26:13.084 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:26:13 compute-0 nova_compute[239965]: 2026-01-26 17:26:13.107 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:26:13 compute-0 nova_compute[239965]: 2026-01-26 17:26:13.696 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:13 compute-0 ceph-mon[75140]: pgmap v4093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:14 compute-0 nova_compute[239965]: 2026-01-26 17:26:14.594 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:14 compute-0 ceph-mon[75140]: pgmap v4094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:16 compute-0 ceph-mon[75140]: pgmap v4095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:17 compute-0 sudo[416065]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:26:17 compute-0 sudo[416065]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:17 compute-0 sudo[416065]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:17 compute-0 sudo[416090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 17:26:17 compute-0 sudo[416090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:18 compute-0 sudo[416090]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:26:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:26:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:18 compute-0 sudo[416135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:26:18 compute-0 sudo[416135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:18 compute-0 sudo[416135]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:18 compute-0 sudo[416160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:26:18 compute-0 sudo[416160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:18 compute-0 nova_compute[239965]: 2026-01-26 17:26:18.700 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:18 compute-0 sudo[416160]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 17:26:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 17:26:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:26:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:26:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:26:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:26:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:26:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:26:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:26:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:26:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:26:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:26:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:26:18 compute-0 sudo[416216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:26:18 compute-0 sudo[416216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:18 compute-0 sudo[416216]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:19 compute-0 sudo[416241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:26:19 compute-0 sudo[416241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:19 compute-0 ceph-mon[75140]: pgmap v4096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 17:26:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:26:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:26:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:26:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:26:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:26:19 compute-0 podman[416279]: 2026-01-26 17:26:19.345192351 +0000 UTC m=+0.050453515 container create d5e9533bc2d2c7462f03761daed737a9f55e7ffc415902871bfeb9649df785be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_easley, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:26:19 compute-0 systemd[1]: Started libpod-conmon-d5e9533bc2d2c7462f03761daed737a9f55e7ffc415902871bfeb9649df785be.scope.
Jan 26 17:26:19 compute-0 podman[416279]: 2026-01-26 17:26:19.324660918 +0000 UTC m=+0.029922042 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:26:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:26:19 compute-0 podman[416279]: 2026-01-26 17:26:19.449692687 +0000 UTC m=+0.154953821 container init d5e9533bc2d2c7462f03761daed737a9f55e7ffc415902871bfeb9649df785be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_easley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 17:26:19 compute-0 podman[416279]: 2026-01-26 17:26:19.459292281 +0000 UTC m=+0.164553405 container start d5e9533bc2d2c7462f03761daed737a9f55e7ffc415902871bfeb9649df785be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_easley, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:26:19 compute-0 podman[416279]: 2026-01-26 17:26:19.463578126 +0000 UTC m=+0.168839250 container attach d5e9533bc2d2c7462f03761daed737a9f55e7ffc415902871bfeb9649df785be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_easley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 17:26:19 compute-0 priceless_easley[416295]: 167 167
Jan 26 17:26:19 compute-0 systemd[1]: libpod-d5e9533bc2d2c7462f03761daed737a9f55e7ffc415902871bfeb9649df785be.scope: Deactivated successfully.
Jan 26 17:26:19 compute-0 podman[416279]: 2026-01-26 17:26:19.468578188 +0000 UTC m=+0.173839312 container died d5e9533bc2d2c7462f03761daed737a9f55e7ffc415902871bfeb9649df785be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:26:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b3bb11751e0da19bddcdc745f49451a3d0b238e606d34b755cfe63e06e46743-merged.mount: Deactivated successfully.
Jan 26 17:26:19 compute-0 podman[416279]: 2026-01-26 17:26:19.519235357 +0000 UTC m=+0.224496481 container remove d5e9533bc2d2c7462f03761daed737a9f55e7ffc415902871bfeb9649df785be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_easley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:26:19 compute-0 systemd[1]: libpod-conmon-d5e9533bc2d2c7462f03761daed737a9f55e7ffc415902871bfeb9649df785be.scope: Deactivated successfully.
Jan 26 17:26:19 compute-0 nova_compute[239965]: 2026-01-26 17:26:19.597 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:19 compute-0 podman[416319]: 2026-01-26 17:26:19.739244277 +0000 UTC m=+0.071499859 container create 4a8b81af56083a086e40c5e5be3c80f9030f1282d0926708ad6f5ca1b77acd5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:26:19 compute-0 systemd[1]: Started libpod-conmon-4a8b81af56083a086e40c5e5be3c80f9030f1282d0926708ad6f5ca1b77acd5f.scope.
Jan 26 17:26:19 compute-0 podman[416319]: 2026-01-26 17:26:19.714738988 +0000 UTC m=+0.046994550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:26:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9f7b1436a18845a2123ec32d712262b5780f51da4917aa9608118fb17cf2c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9f7b1436a18845a2123ec32d712262b5780f51da4917aa9608118fb17cf2c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9f7b1436a18845a2123ec32d712262b5780f51da4917aa9608118fb17cf2c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9f7b1436a18845a2123ec32d712262b5780f51da4917aa9608118fb17cf2c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9f7b1436a18845a2123ec32d712262b5780f51da4917aa9608118fb17cf2c9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:19 compute-0 podman[416319]: 2026-01-26 17:26:19.840083133 +0000 UTC m=+0.172338725 container init 4a8b81af56083a086e40c5e5be3c80f9030f1282d0926708ad6f5ca1b77acd5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leavitt, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:26:19 compute-0 podman[416319]: 2026-01-26 17:26:19.849721919 +0000 UTC m=+0.181977471 container start 4a8b81af56083a086e40c5e5be3c80f9030f1282d0926708ad6f5ca1b77acd5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 17:26:19 compute-0 podman[416319]: 2026-01-26 17:26:19.854026644 +0000 UTC m=+0.186282226 container attach 4a8b81af56083a086e40c5e5be3c80f9030f1282d0926708ad6f5ca1b77acd5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:26:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:20 compute-0 cranky_leavitt[416336]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:26:20 compute-0 cranky_leavitt[416336]: --> All data devices are unavailable
Jan 26 17:26:20 compute-0 systemd[1]: libpod-4a8b81af56083a086e40c5e5be3c80f9030f1282d0926708ad6f5ca1b77acd5f.scope: Deactivated successfully.
Jan 26 17:26:20 compute-0 podman[416319]: 2026-01-26 17:26:20.466178585 +0000 UTC m=+0.798434167 container died 4a8b81af56083a086e40c5e5be3c80f9030f1282d0926708ad6f5ca1b77acd5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leavitt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:26:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d9f7b1436a18845a2123ec32d712262b5780f51da4917aa9608118fb17cf2c9-merged.mount: Deactivated successfully.
Jan 26 17:26:20 compute-0 ceph-mon[75140]: pgmap v4097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:20 compute-0 podman[416319]: 2026-01-26 17:26:20.832699648 +0000 UTC m=+1.164955190 container remove 4a8b81af56083a086e40c5e5be3c80f9030f1282d0926708ad6f5ca1b77acd5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:26:20 compute-0 sudo[416241]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:20 compute-0 systemd[1]: libpod-conmon-4a8b81af56083a086e40c5e5be3c80f9030f1282d0926708ad6f5ca1b77acd5f.scope: Deactivated successfully.
Jan 26 17:26:20 compute-0 sudo[416369]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:26:20 compute-0 sudo[416369]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:20 compute-0 sudo[416369]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:21 compute-0 sudo[416394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:26:21 compute-0 sudo[416394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:21 compute-0 podman[416431]: 2026-01-26 17:26:21.289226423 +0000 UTC m=+0.042986562 container create bde7bad92d037defeedcaa7a82d6f3f9a60a1530a54ff51f6cd40efbf1a70a1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:26:21 compute-0 systemd[1]: Started libpod-conmon-bde7bad92d037defeedcaa7a82d6f3f9a60a1530a54ff51f6cd40efbf1a70a1b.scope.
Jan 26 17:26:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:26:21 compute-0 podman[416431]: 2026-01-26 17:26:21.271361666 +0000 UTC m=+0.025121835 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:26:21 compute-0 podman[416431]: 2026-01-26 17:26:21.36884923 +0000 UTC m=+0.122609459 container init bde7bad92d037defeedcaa7a82d6f3f9a60a1530a54ff51f6cd40efbf1a70a1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 17:26:21 compute-0 podman[416431]: 2026-01-26 17:26:21.381953811 +0000 UTC m=+0.135713990 container start bde7bad92d037defeedcaa7a82d6f3f9a60a1530a54ff51f6cd40efbf1a70a1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:26:21 compute-0 podman[416431]: 2026-01-26 17:26:21.386246636 +0000 UTC m=+0.140006775 container attach bde7bad92d037defeedcaa7a82d6f3f9a60a1530a54ff51f6cd40efbf1a70a1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 17:26:21 compute-0 distracted_brahmagupta[416447]: 167 167
Jan 26 17:26:21 compute-0 systemd[1]: libpod-bde7bad92d037defeedcaa7a82d6f3f9a60a1530a54ff51f6cd40efbf1a70a1b.scope: Deactivated successfully.
Jan 26 17:26:21 compute-0 podman[416431]: 2026-01-26 17:26:21.391850863 +0000 UTC m=+0.145611002 container died bde7bad92d037defeedcaa7a82d6f3f9a60a1530a54ff51f6cd40efbf1a70a1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:26:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4853b91f0ceacd0f072390358af97f60c6b499bb2d866c8de57e9925217831b-merged.mount: Deactivated successfully.
Jan 26 17:26:21 compute-0 podman[416431]: 2026-01-26 17:26:21.439072787 +0000 UTC m=+0.192832926 container remove bde7bad92d037defeedcaa7a82d6f3f9a60a1530a54ff51f6cd40efbf1a70a1b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:26:21 compute-0 systemd[1]: libpod-conmon-bde7bad92d037defeedcaa7a82d6f3f9a60a1530a54ff51f6cd40efbf1a70a1b.scope: Deactivated successfully.
Jan 26 17:26:21 compute-0 podman[416471]: 2026-01-26 17:26:21.650345264 +0000 UTC m=+0.063978775 container create 60e70958e434c8a90704ecfc0c4c9ae83a093ec56d19186bf929edb87956c4f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:26:21 compute-0 systemd[1]: Started libpod-conmon-60e70958e434c8a90704ecfc0c4c9ae83a093ec56d19186bf929edb87956c4f0.scope.
Jan 26 17:26:21 compute-0 podman[416471]: 2026-01-26 17:26:21.62441 +0000 UTC m=+0.038043591 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:26:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70b7717b31ee45e8e3cda7ba02240946a282210a10bee42fc44de493e816af2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70b7717b31ee45e8e3cda7ba02240946a282210a10bee42fc44de493e816af2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70b7717b31ee45e8e3cda7ba02240946a282210a10bee42fc44de493e816af2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b70b7717b31ee45e8e3cda7ba02240946a282210a10bee42fc44de493e816af2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:21 compute-0 podman[416471]: 2026-01-26 17:26:21.758966821 +0000 UTC m=+0.172600332 container init 60e70958e434c8a90704ecfc0c4c9ae83a093ec56d19186bf929edb87956c4f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lovelace, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:26:21 compute-0 podman[416471]: 2026-01-26 17:26:21.766652428 +0000 UTC m=+0.180285939 container start 60e70958e434c8a90704ecfc0c4c9ae83a093ec56d19186bf929edb87956c4f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lovelace, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 17:26:21 compute-0 podman[416471]: 2026-01-26 17:26:21.77037194 +0000 UTC m=+0.184005441 container attach 60e70958e434c8a90704ecfc0c4c9ae83a093ec56d19186bf929edb87956c4f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lovelace, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:26:22 compute-0 festive_lovelace[416488]: {
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:     "0": [
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:         {
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "devices": [
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "/dev/loop3"
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             ],
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_name": "ceph_lv0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_size": "21470642176",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "name": "ceph_lv0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "tags": {
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.cluster_name": "ceph",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.crush_device_class": "",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.encrypted": "0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.objectstore": "bluestore",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.osd_id": "0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.type": "block",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.vdo": "0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.with_tpm": "0"
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             },
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "type": "block",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "vg_name": "ceph_vg0"
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:         }
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:     ],
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:     "1": [
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:         {
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "devices": [
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "/dev/loop4"
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             ],
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_name": "ceph_lv1",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_size": "21470642176",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "name": "ceph_lv1",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "tags": {
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.cluster_name": "ceph",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.crush_device_class": "",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.encrypted": "0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.objectstore": "bluestore",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.osd_id": "1",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.type": "block",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.vdo": "0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.with_tpm": "0"
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             },
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "type": "block",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "vg_name": "ceph_vg1"
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:         }
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:     ],
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:     "2": [
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:         {
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "devices": [
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "/dev/loop5"
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             ],
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_name": "ceph_lv2",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_size": "21470642176",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "name": "ceph_lv2",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "tags": {
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.cluster_name": "ceph",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.crush_device_class": "",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.encrypted": "0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.objectstore": "bluestore",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.osd_id": "2",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.type": "block",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.vdo": "0",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:                 "ceph.with_tpm": "0"
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             },
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "type": "block",
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:             "vg_name": "ceph_vg2"
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:         }
Jan 26 17:26:22 compute-0 festive_lovelace[416488]:     ]
Jan 26 17:26:22 compute-0 festive_lovelace[416488]: }
Jan 26 17:26:22 compute-0 systemd[1]: libpod-60e70958e434c8a90704ecfc0c4c9ae83a093ec56d19186bf929edb87956c4f0.scope: Deactivated successfully.
Jan 26 17:26:22 compute-0 podman[416497]: 2026-01-26 17:26:22.141465425 +0000 UTC m=+0.023460895 container died 60e70958e434c8a90704ecfc0c4c9ae83a093ec56d19186bf929edb87956c4f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lovelace, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b70b7717b31ee45e8e3cda7ba02240946a282210a10bee42fc44de493e816af2-merged.mount: Deactivated successfully.
Jan 26 17:26:22 compute-0 podman[416497]: 2026-01-26 17:26:22.179450053 +0000 UTC m=+0.061445513 container remove 60e70958e434c8a90704ecfc0c4c9ae83a093ec56d19186bf929edb87956c4f0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:26:22 compute-0 systemd[1]: libpod-conmon-60e70958e434c8a90704ecfc0c4c9ae83a093ec56d19186bf929edb87956c4f0.scope: Deactivated successfully.
Jan 26 17:26:22 compute-0 sudo[416394]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:22 compute-0 sudo[416512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:26:22 compute-0 sudo[416512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:22 compute-0 sudo[416512]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:22 compute-0 sudo[416537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:26:22 compute-0 sudo[416537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:22 compute-0 podman[416574]: 2026-01-26 17:26:22.669485367 +0000 UTC m=+0.051818448 container create 6989df38bf4e3c14f9a5f5f17c242e887d68ab16599f39f2e7577b3179b05dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhabha, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 17:26:22 compute-0 systemd[1]: Started libpod-conmon-6989df38bf4e3c14f9a5f5f17c242e887d68ab16599f39f2e7577b3179b05dc8.scope.
Jan 26 17:26:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:26:22 compute-0 podman[416574]: 2026-01-26 17:26:22.645618824 +0000 UTC m=+0.027951985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:26:22 compute-0 podman[416574]: 2026-01-26 17:26:22.751889323 +0000 UTC m=+0.134222474 container init 6989df38bf4e3c14f9a5f5f17c242e887d68ab16599f39f2e7577b3179b05dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhabha, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:26:22 compute-0 podman[416574]: 2026-01-26 17:26:22.763823405 +0000 UTC m=+0.146156516 container start 6989df38bf4e3c14f9a5f5f17c242e887d68ab16599f39f2e7577b3179b05dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhabha, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 26 17:26:22 compute-0 podman[416574]: 2026-01-26 17:26:22.768193292 +0000 UTC m=+0.150526403 container attach 6989df38bf4e3c14f9a5f5f17c242e887d68ab16599f39f2e7577b3179b05dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhabha, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 17:26:22 compute-0 eloquent_bhabha[416590]: 167 167
Jan 26 17:26:22 compute-0 systemd[1]: libpod-6989df38bf4e3c14f9a5f5f17c242e887d68ab16599f39f2e7577b3179b05dc8.scope: Deactivated successfully.
Jan 26 17:26:22 compute-0 podman[416574]: 2026-01-26 17:26:22.772155829 +0000 UTC m=+0.154488950 container died 6989df38bf4e3c14f9a5f5f17c242e887d68ab16599f39f2e7577b3179b05dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhabha, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 17:26:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-252bdc68eab39e69b4f927b9eadaa0b8d0c11dde063f23d7e1adac1a30f36616-merged.mount: Deactivated successfully.
Jan 26 17:26:22 compute-0 podman[416574]: 2026-01-26 17:26:22.819635679 +0000 UTC m=+0.201968760 container remove 6989df38bf4e3c14f9a5f5f17c242e887d68ab16599f39f2e7577b3179b05dc8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_bhabha, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:26:22 compute-0 systemd[1]: libpod-conmon-6989df38bf4e3c14f9a5f5f17c242e887d68ab16599f39f2e7577b3179b05dc8.scope: Deactivated successfully.
Jan 26 17:26:22 compute-0 podman[416614]: 2026-01-26 17:26:22.99056366 +0000 UTC m=+0.044032508 container create 4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:26:23 compute-0 systemd[1]: Started libpod-conmon-4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589.scope.
Jan 26 17:26:23 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79b2b66648df833e2d3a79a180f4c4ff32ca3e4a3ebf655e2bb2b98426153c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79b2b66648df833e2d3a79a180f4c4ff32ca3e4a3ebf655e2bb2b98426153c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79b2b66648df833e2d3a79a180f4c4ff32ca3e4a3ebf655e2bb2b98426153c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79b2b66648df833e2d3a79a180f4c4ff32ca3e4a3ebf655e2bb2b98426153c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:26:23 compute-0 podman[416614]: 2026-01-26 17:26:22.972491428 +0000 UTC m=+0.025960186 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:26:23 compute-0 podman[416614]: 2026-01-26 17:26:23.076200394 +0000 UTC m=+0.129669202 container init 4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:26:23 compute-0 podman[416614]: 2026-01-26 17:26:23.088676229 +0000 UTC m=+0.142144987 container start 4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:26:23 compute-0 podman[416614]: 2026-01-26 17:26:23.093281752 +0000 UTC m=+0.146750510 container attach 4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:26:23 compute-0 nova_compute[239965]: 2026-01-26 17:26:23.702 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:23 compute-0 ceph-mon[75140]: pgmap v4098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:23 compute-0 lvm[416710]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:26:23 compute-0 lvm[416711]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:26:23 compute-0 lvm[416711]: VG ceph_vg1 finished
Jan 26 17:26:23 compute-0 lvm[416710]: VG ceph_vg0 finished
Jan 26 17:26:23 compute-0 lvm[416713]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:26:23 compute-0 lvm[416713]: VG ceph_vg2 finished
Jan 26 17:26:23 compute-0 angry_hofstadter[416631]: {}
Jan 26 17:26:23 compute-0 systemd[1]: libpod-4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589.scope: Deactivated successfully.
Jan 26 17:26:23 compute-0 podman[416614]: 2026-01-26 17:26:23.981427262 +0000 UTC m=+1.034895990 container died 4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 17:26:23 compute-0 systemd[1]: libpod-4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589.scope: Consumed 1.455s CPU time.
Jan 26 17:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a79b2b66648df833e2d3a79a180f4c4ff32ca3e4a3ebf655e2bb2b98426153c6-merged.mount: Deactivated successfully.
Jan 26 17:26:24 compute-0 podman[416614]: 2026-01-26 17:26:24.026067954 +0000 UTC m=+1.079536672 container remove 4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:26:24 compute-0 systemd[1]: libpod-conmon-4ba94bd17f1f0b96186f0b563a94eea38c82ccfb195c86528dd064cedd9ed589.scope: Deactivated successfully.
Jan 26 17:26:24 compute-0 sudo[416537]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:26:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:26:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:24 compute-0 sudo[416727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:26:24 compute-0 sudo[416727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:26:24 compute-0 sudo[416727]: pam_unix(sudo:session): session closed for user root
Jan 26 17:26:24 compute-0 nova_compute[239965]: 2026-01-26 17:26:24.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:24 compute-0 nova_compute[239965]: 2026-01-26 17:26:24.764 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:26:25 compute-0 ceph-mon[75140]: pgmap v4099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:27 compute-0 ceph-mon[75140]: pgmap v4100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:28 compute-0 podman[416752]: 2026-01-26 17:26:28.427239045 +0000 UTC m=+0.106570447 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:26:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:28 compute-0 nova_compute[239965]: 2026-01-26 17:26:28.707 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:26:28
Jan 26 17:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.control', 'backups', '.rgw.root', '.mgr', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes']
Jan 26 17:26:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:26:29 compute-0 ceph-mon[75140]: pgmap v4101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:29 compute-0 nova_compute[239965]: 2026-01-26 17:26:29.766 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:26:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:26:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:26:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:26:31 compute-0 ceph-mon[75140]: pgmap v4102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:32 compute-0 podman[416773]: 2026-01-26 17:26:32.437467078 +0000 UTC m=+0.117934786 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:26:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:33 compute-0 nova_compute[239965]: 2026-01-26 17:26:33.530 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:33 compute-0 nova_compute[239965]: 2026-01-26 17:26:33.531 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:26:33 compute-0 nova_compute[239965]: 2026-01-26 17:26:33.709 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:33 compute-0 ceph-mon[75140]: pgmap v4103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:34 compute-0 nova_compute[239965]: 2026-01-26 17:26:34.766 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:35 compute-0 ceph-mon[75140]: pgmap v4104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:37 compute-0 ceph-mon[75140]: pgmap v4105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:38 compute-0 nova_compute[239965]: 2026-01-26 17:26:38.711 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:38 compute-0 ceph-mon[75140]: pgmap v4106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:39 compute-0 nova_compute[239965]: 2026-01-26 17:26:39.769 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:41 compute-0 ceph-mon[75140]: pgmap v4107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:43 compute-0 nova_compute[239965]: 2026-01-26 17:26:43.714 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:43 compute-0 ceph-mon[75140]: pgmap v4108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:44 compute-0 nova_compute[239965]: 2026-01-26 17:26:44.771 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:44 compute-0 ceph-mon[75140]: pgmap v4109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:46 compute-0 ceph-mon[75140]: pgmap v4110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:47 compute-0 nova_compute[239965]: 2026-01-26 17:26:47.529 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:48 compute-0 nova_compute[239965]: 2026-01-26 17:26:48.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:48 compute-0 nova_compute[239965]: 2026-01-26 17:26:48.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:48 compute-0 nova_compute[239965]: 2026-01-26 17:26:48.718 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:26:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/369561232' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:26:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:26:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/369561232' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:26:49 compute-0 ceph-mon[75140]: pgmap v4111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/369561232' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:26:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/369561232' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:26:49 compute-0 nova_compute[239965]: 2026-01-26 17:26:49.773 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:26:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:26:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:50 compute-0 nova_compute[239965]: 2026-01-26 17:26:50.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:26:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:51 compute-0 ceph-mon[75140]: pgmap v4112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:53 compute-0 nova_compute[239965]: 2026-01-26 17:26:53.721 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:53 compute-0 ceph-mon[75140]: pgmap v4113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:54 compute-0 nova_compute[239965]: 2026-01-26 17:26:54.776 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:26:55 compute-0 ceph-mon[75140]: pgmap v4114: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:57 compute-0 ceph-mon[75140]: pgmap v4115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:58 compute-0 nova_compute[239965]: 2026-01-26 17:26:58.725 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:26:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:26:59.314 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:26:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:26:59.314 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:26:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:26:59.314 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:26:59 compute-0 podman[416801]: 2026-01-26 17:26:59.405296028 +0000 UTC m=+0.094150143 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 26 17:26:59 compute-0 ceph-mon[75140]: pgmap v4116: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:26:59 compute-0 nova_compute[239965]: 2026-01-26 17:26:59.780 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:27:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:27:00 compute-0 nova_compute[239965]: 2026-01-26 17:27:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:01 compute-0 nova_compute[239965]: 2026-01-26 17:27:01.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:01 compute-0 nova_compute[239965]: 2026-01-26 17:27:01.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:27:01 compute-0 nova_compute[239965]: 2026-01-26 17:27:01.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:27:01 compute-0 ceph-mon[75140]: pgmap v4117: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:02 compute-0 ceph-mon[75140]: pgmap v4118: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:03 compute-0 podman[416820]: 2026-01-26 17:27:03.447419779 +0000 UTC m=+0.118940900 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:27:03 compute-0 nova_compute[239965]: 2026-01-26 17:27:03.534 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:03 compute-0 nova_compute[239965]: 2026-01-26 17:27:03.535 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:27:03 compute-0 nova_compute[239965]: 2026-01-26 17:27:03.728 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:04 compute-0 nova_compute[239965]: 2026-01-26 17:27:04.782 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:05 compute-0 ceph-mon[75140]: pgmap v4119: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:06 compute-0 nova_compute[239965]: 2026-01-26 17:27:06.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:07 compute-0 ceph-mon[75140]: pgmap v4120: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:08 compute-0 nova_compute[239965]: 2026-01-26 17:27:08.731 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:09 compute-0 ceph-mon[75140]: pgmap v4121: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:09 compute-0 nova_compute[239965]: 2026-01-26 17:27:09.784 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:11 compute-0 nova_compute[239965]: 2026-01-26 17:27:11.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:11 compute-0 nova_compute[239965]: 2026-01-26 17:27:11.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:11 compute-0 nova_compute[239965]: 2026-01-26 17:27:11.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:11 compute-0 nova_compute[239965]: 2026-01-26 17:27:11.541 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:11 compute-0 nova_compute[239965]: 2026-01-26 17:27:11.542 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:27:11 compute-0 nova_compute[239965]: 2026-01-26 17:27:11.543 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:11 compute-0 ceph-mon[75140]: pgmap v4122: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:27:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263899738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.147 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.289 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.290 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3547MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.290 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.291 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.358 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.358 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.373 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:27:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4263899738' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:27:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:27:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/157023823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.935 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.952 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.970 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.972 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:27:12 compute-0 nova_compute[239965]: 2026-01-26 17:27:12.972 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:13 compute-0 nova_compute[239965]: 2026-01-26 17:27:13.737 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:13 compute-0 ceph-mon[75140]: pgmap v4123: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/157023823' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:27:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:14 compute-0 nova_compute[239965]: 2026-01-26 17:27:14.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:14 compute-0 ceph-mon[75140]: pgmap v4124: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:14 compute-0 nova_compute[239965]: 2026-01-26 17:27:14.972 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:14 compute-0 nova_compute[239965]: 2026-01-26 17:27:14.973 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:27:14 compute-0 nova_compute[239965]: 2026-01-26 17:27:14.973 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:27:14 compute-0 nova_compute[239965]: 2026-01-26 17:27:14.989 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:27:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:15 compute-0 nova_compute[239965]: 2026-01-26 17:27:15.518 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:17 compute-0 ceph-mon[75140]: pgmap v4125: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:18 compute-0 nova_compute[239965]: 2026-01-26 17:27:18.741 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:19 compute-0 ceph-mon[75140]: pgmap v4126: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:19 compute-0 nova_compute[239965]: 2026-01-26 17:27:19.801 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:21 compute-0 ceph-mon[75140]: pgmap v4127: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:23 compute-0 nova_compute[239965]: 2026-01-26 17:27:23.743 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:23 compute-0 ceph-mon[75140]: pgmap v4128: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:24 compute-0 sudo[416890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:27:24 compute-0 sudo[416890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:27:24 compute-0 sudo[416890]: pam_unix(sudo:session): session closed for user root
Jan 26 17:27:24 compute-0 sudo[416915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:27:24 compute-0 sudo[416915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:27:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:24 compute-0 nova_compute[239965]: 2026-01-26 17:27:24.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:24 compute-0 sudo[416915]: pam_unix(sudo:session): session closed for user root
Jan 26 17:27:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:27:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:27:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:27:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:27:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:27:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:27:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:27:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:27:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:27:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:27:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:27:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:27:25 compute-0 sudo[416970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:27:25 compute-0 sudo[416970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:27:25 compute-0 sudo[416970]: pam_unix(sudo:session): session closed for user root
Jan 26 17:27:25 compute-0 sudo[416995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:27:25 compute-0 sudo[416995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:27:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:25 compute-0 podman[417032]: 2026-01-26 17:27:25.454931359 +0000 UTC m=+0.049075351 container create 8f04c0fba64de35b62cbd3a587870ffe9d954dcf58e8910d7f6b1328f79cb43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pasteur, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:27:25 compute-0 systemd[1]: Started libpod-conmon-8f04c0fba64de35b62cbd3a587870ffe9d954dcf58e8910d7f6b1328f79cb43f.scope.
Jan 26 17:27:25 compute-0 podman[417032]: 2026-01-26 17:27:25.43288276 +0000 UTC m=+0.027026752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:27:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:27:25 compute-0 podman[417032]: 2026-01-26 17:27:25.573422177 +0000 UTC m=+0.167566189 container init 8f04c0fba64de35b62cbd3a587870ffe9d954dcf58e8910d7f6b1328f79cb43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pasteur, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 17:27:25 compute-0 podman[417032]: 2026-01-26 17:27:25.582920759 +0000 UTC m=+0.177064791 container start 8f04c0fba64de35b62cbd3a587870ffe9d954dcf58e8910d7f6b1328f79cb43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pasteur, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:27:25 compute-0 podman[417032]: 2026-01-26 17:27:25.587201924 +0000 UTC m=+0.181345996 container attach 8f04c0fba64de35b62cbd3a587870ffe9d954dcf58e8910d7f6b1328f79cb43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pasteur, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 17:27:25 compute-0 great_pasteur[417048]: 167 167
Jan 26 17:27:25 compute-0 podman[417032]: 2026-01-26 17:27:25.590680988 +0000 UTC m=+0.184825010 container died 8f04c0fba64de35b62cbd3a587870ffe9d954dcf58e8910d7f6b1328f79cb43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pasteur, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:27:25 compute-0 systemd[1]: libpod-8f04c0fba64de35b62cbd3a587870ffe9d954dcf58e8910d7f6b1328f79cb43f.scope: Deactivated successfully.
Jan 26 17:27:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-489ed33b410510e7240c6a03fdd63b0416b0861069f54e5a263e892edf345a87-merged.mount: Deactivated successfully.
Jan 26 17:27:25 compute-0 podman[417032]: 2026-01-26 17:27:25.63570377 +0000 UTC m=+0.229847772 container remove 8f04c0fba64de35b62cbd3a587870ffe9d954dcf58e8910d7f6b1328f79cb43f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 17:27:25 compute-0 systemd[1]: libpod-conmon-8f04c0fba64de35b62cbd3a587870ffe9d954dcf58e8910d7f6b1328f79cb43f.scope: Deactivated successfully.
Jan 26 17:27:25 compute-0 ceph-mon[75140]: pgmap v4129: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:27:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:27:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:27:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:27:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:27:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:27:25 compute-0 podman[417072]: 2026-01-26 17:27:25.824865376 +0000 UTC m=+0.049402230 container create 24d6b224cc20d3f8d1207f207ea383f1037836539b73ae1cf68702c0cd130d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:27:25 compute-0 systemd[1]: Started libpod-conmon-24d6b224cc20d3f8d1207f207ea383f1037836539b73ae1cf68702c0cd130d79.scope.
Jan 26 17:27:25 compute-0 podman[417072]: 2026-01-26 17:27:25.802032717 +0000 UTC m=+0.026569601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:27:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d9c34a09491301546f0960148ad98f61b015454e06a5f1b4e07c8852038f79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d9c34a09491301546f0960148ad98f61b015454e06a5f1b4e07c8852038f79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d9c34a09491301546f0960148ad98f61b015454e06a5f1b4e07c8852038f79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d9c34a09491301546f0960148ad98f61b015454e06a5f1b4e07c8852038f79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31d9c34a09491301546f0960148ad98f61b015454e06a5f1b4e07c8852038f79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:25 compute-0 podman[417072]: 2026-01-26 17:27:25.929832193 +0000 UTC m=+0.154369097 container init 24d6b224cc20d3f8d1207f207ea383f1037836539b73ae1cf68702c0cd130d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 17:27:25 compute-0 podman[417072]: 2026-01-26 17:27:25.941909718 +0000 UTC m=+0.166446572 container start 24d6b224cc20d3f8d1207f207ea383f1037836539b73ae1cf68702c0cd130d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:27:25 compute-0 podman[417072]: 2026-01-26 17:27:25.945802053 +0000 UTC m=+0.170338907 container attach 24d6b224cc20d3f8d1207f207ea383f1037836539b73ae1cf68702c0cd130d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:27:26 compute-0 festive_lewin[417088]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:27:26 compute-0 festive_lewin[417088]: --> All data devices are unavailable
Jan 26 17:27:26 compute-0 systemd[1]: libpod-24d6b224cc20d3f8d1207f207ea383f1037836539b73ae1cf68702c0cd130d79.scope: Deactivated successfully.
Jan 26 17:27:26 compute-0 podman[417072]: 2026-01-26 17:27:26.528868312 +0000 UTC m=+0.753405176 container died 24d6b224cc20d3f8d1207f207ea383f1037836539b73ae1cf68702c0cd130d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:27:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-31d9c34a09491301546f0960148ad98f61b015454e06a5f1b4e07c8852038f79-merged.mount: Deactivated successfully.
Jan 26 17:27:26 compute-0 podman[417072]: 2026-01-26 17:27:26.574343195 +0000 UTC m=+0.798880049 container remove 24d6b224cc20d3f8d1207f207ea383f1037836539b73ae1cf68702c0cd130d79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 17:27:26 compute-0 systemd[1]: libpod-conmon-24d6b224cc20d3f8d1207f207ea383f1037836539b73ae1cf68702c0cd130d79.scope: Deactivated successfully.
Jan 26 17:27:26 compute-0 sudo[416995]: pam_unix(sudo:session): session closed for user root
Jan 26 17:27:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:26 compute-0 sudo[417121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:27:26 compute-0 sudo[417121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:27:26 compute-0 sudo[417121]: pam_unix(sudo:session): session closed for user root
Jan 26 17:27:26 compute-0 sudo[417146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:27:26 compute-0 sudo[417146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:27:27 compute-0 podman[417181]: 2026-01-26 17:27:27.04192648 +0000 UTC m=+0.046531709 container create 67dacbc86de2160394213b3eb0bea6132b0a3a6d83243ed48f98cd2e6d0877f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:27:27 compute-0 systemd[1]: Started libpod-conmon-67dacbc86de2160394213b3eb0bea6132b0a3a6d83243ed48f98cd2e6d0877f9.scope.
Jan 26 17:27:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:27:27 compute-0 podman[417181]: 2026-01-26 17:27:27.018556678 +0000 UTC m=+0.023161907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:27:27 compute-0 podman[417181]: 2026-01-26 17:27:27.12947304 +0000 UTC m=+0.134078249 container init 67dacbc86de2160394213b3eb0bea6132b0a3a6d83243ed48f98cd2e6d0877f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:27:27 compute-0 podman[417181]: 2026-01-26 17:27:27.137691001 +0000 UTC m=+0.142296200 container start 67dacbc86de2160394213b3eb0bea6132b0a3a6d83243ed48f98cd2e6d0877f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 17:27:27 compute-0 romantic_snyder[417197]: 167 167
Jan 26 17:27:27 compute-0 podman[417181]: 2026-01-26 17:27:27.144358964 +0000 UTC m=+0.148964183 container attach 67dacbc86de2160394213b3eb0bea6132b0a3a6d83243ed48f98cd2e6d0877f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_snyder, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 17:27:27 compute-0 systemd[1]: libpod-67dacbc86de2160394213b3eb0bea6132b0a3a6d83243ed48f98cd2e6d0877f9.scope: Deactivated successfully.
Jan 26 17:27:27 compute-0 podman[417181]: 2026-01-26 17:27:27.145601315 +0000 UTC m=+0.150206544 container died 67dacbc86de2160394213b3eb0bea6132b0a3a6d83243ed48f98cd2e6d0877f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_snyder, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:27:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-87e77bda5b3f66ea8fcc319a3337807b9212902d7f907faf3afe86a1e4bcb8db-merged.mount: Deactivated successfully.
Jan 26 17:27:27 compute-0 podman[417181]: 2026-01-26 17:27:27.186952876 +0000 UTC m=+0.191558075 container remove 67dacbc86de2160394213b3eb0bea6132b0a3a6d83243ed48f98cd2e6d0877f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 26 17:27:27 compute-0 systemd[1]: libpod-conmon-67dacbc86de2160394213b3eb0bea6132b0a3a6d83243ed48f98cd2e6d0877f9.scope: Deactivated successfully.
Jan 26 17:27:27 compute-0 podman[417223]: 2026-01-26 17:27:27.375561248 +0000 UTC m=+0.050592588 container create 85599ace3dd37915cfb568b7fa66915d84cb2a4aec56cae64a97c2d50a46d794 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_turing, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:27:27 compute-0 systemd[1]: Started libpod-conmon-85599ace3dd37915cfb568b7fa66915d84cb2a4aec56cae64a97c2d50a46d794.scope.
Jan 26 17:27:27 compute-0 podman[417223]: 2026-01-26 17:27:27.356240466 +0000 UTC m=+0.031271826 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:27:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea7345f947bff0df677a83d3537957c32c0326b088c6e189775c3489b863091/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea7345f947bff0df677a83d3537957c32c0326b088c6e189775c3489b863091/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea7345f947bff0df677a83d3537957c32c0326b088c6e189775c3489b863091/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cea7345f947bff0df677a83d3537957c32c0326b088c6e189775c3489b863091/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:27 compute-0 podman[417223]: 2026-01-26 17:27:27.477474421 +0000 UTC m=+0.152505781 container init 85599ace3dd37915cfb568b7fa66915d84cb2a4aec56cae64a97c2d50a46d794 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_turing, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:27:27 compute-0 podman[417223]: 2026-01-26 17:27:27.490188662 +0000 UTC m=+0.165220002 container start 85599ace3dd37915cfb568b7fa66915d84cb2a4aec56cae64a97c2d50a46d794 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Jan 26 17:27:27 compute-0 podman[417223]: 2026-01-26 17:27:27.494464757 +0000 UTC m=+0.169496197 container attach 85599ace3dd37915cfb568b7fa66915d84cb2a4aec56cae64a97c2d50a46d794 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_turing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 26 17:27:27 compute-0 intelligent_turing[417239]: {
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:     "0": [
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:         {
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "devices": [
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "/dev/loop3"
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             ],
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_name": "ceph_lv0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_size": "21470642176",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "name": "ceph_lv0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "tags": {
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.cluster_name": "ceph",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.crush_device_class": "",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.encrypted": "0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.objectstore": "bluestore",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.osd_id": "0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.type": "block",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.vdo": "0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.with_tpm": "0"
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             },
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "type": "block",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "vg_name": "ceph_vg0"
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:         }
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:     ],
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:     "1": [
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:         {
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "devices": [
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "/dev/loop4"
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             ],
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_name": "ceph_lv1",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_size": "21470642176",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "name": "ceph_lv1",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "tags": {
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.cluster_name": "ceph",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.crush_device_class": "",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.encrypted": "0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.objectstore": "bluestore",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.osd_id": "1",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.type": "block",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.vdo": "0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.with_tpm": "0"
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             },
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "type": "block",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "vg_name": "ceph_vg1"
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:         }
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:     ],
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:     "2": [
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:         {
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "devices": [
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "/dev/loop5"
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             ],
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_name": "ceph_lv2",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_size": "21470642176",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "name": "ceph_lv2",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "tags": {
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.cluster_name": "ceph",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.crush_device_class": "",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.encrypted": "0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.objectstore": "bluestore",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.osd_id": "2",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.type": "block",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.vdo": "0",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:                 "ceph.with_tpm": "0"
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             },
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "type": "block",
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:             "vg_name": "ceph_vg2"
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:         }
Jan 26 17:27:27 compute-0 intelligent_turing[417239]:     ]
Jan 26 17:27:27 compute-0 intelligent_turing[417239]: }
Jan 26 17:27:27 compute-0 systemd[1]: libpod-85599ace3dd37915cfb568b7fa66915d84cb2a4aec56cae64a97c2d50a46d794.scope: Deactivated successfully.
Jan 26 17:27:27 compute-0 podman[417223]: 2026-01-26 17:27:27.839074374 +0000 UTC m=+0.514105714 container died 85599ace3dd37915cfb568b7fa66915d84cb2a4aec56cae64a97c2d50a46d794 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_turing, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:27:28 compute-0 ceph-mon[75140]: pgmap v4130: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-cea7345f947bff0df677a83d3537957c32c0326b088c6e189775c3489b863091-merged.mount: Deactivated successfully.
Jan 26 17:27:28 compute-0 podman[417223]: 2026-01-26 17:27:28.202061691 +0000 UTC m=+0.877093051 container remove 85599ace3dd37915cfb568b7fa66915d84cb2a4aec56cae64a97c2d50a46d794 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_turing, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:27:28 compute-0 sudo[417146]: pam_unix(sudo:session): session closed for user root
Jan 26 17:27:28 compute-0 systemd[1]: libpod-conmon-85599ace3dd37915cfb568b7fa66915d84cb2a4aec56cae64a97c2d50a46d794.scope: Deactivated successfully.
Jan 26 17:27:28 compute-0 sudo[417260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:27:28 compute-0 sudo[417260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:27:28 compute-0 sudo[417260]: pam_unix(sudo:session): session closed for user root
Jan 26 17:27:28 compute-0 sudo[417285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:27:28 compute-0 sudo[417285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:27:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:28 compute-0 podman[417323]: 2026-01-26 17:27:28.698074661 +0000 UTC m=+0.055335474 container create 68a3a38043b3020c7de2b1bbc87dc2b4a1078b31d5c999e46f3455dbf89b1ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jennings, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:27:28 compute-0 systemd[1]: Started libpod-conmon-68a3a38043b3020c7de2b1bbc87dc2b4a1078b31d5c999e46f3455dbf89b1ee3.scope.
Jan 26 17:27:28 compute-0 nova_compute[239965]: 2026-01-26 17:27:28.748 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:27:28 compute-0 podman[417323]: 2026-01-26 17:27:28.673131541 +0000 UTC m=+0.030392374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:27:28 compute-0 podman[417323]: 2026-01-26 17:27:28.779021341 +0000 UTC m=+0.136282164 container init 68a3a38043b3020c7de2b1bbc87dc2b4a1078b31d5c999e46f3455dbf89b1ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jennings, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 17:27:28 compute-0 podman[417323]: 2026-01-26 17:27:28.790301577 +0000 UTC m=+0.147562390 container start 68a3a38043b3020c7de2b1bbc87dc2b4a1078b31d5c999e46f3455dbf89b1ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:27:28 compute-0 podman[417323]: 2026-01-26 17:27:28.793845343 +0000 UTC m=+0.151106186 container attach 68a3a38043b3020c7de2b1bbc87dc2b4a1078b31d5c999e46f3455dbf89b1ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:27:28 compute-0 unruffled_jennings[417339]: 167 167
Jan 26 17:27:28 compute-0 podman[417323]: 2026-01-26 17:27:28.797492123 +0000 UTC m=+0.154752926 container died 68a3a38043b3020c7de2b1bbc87dc2b4a1078b31d5c999e46f3455dbf89b1ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jennings, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:27:28 compute-0 systemd[1]: libpod-68a3a38043b3020c7de2b1bbc87dc2b4a1078b31d5c999e46f3455dbf89b1ee3.scope: Deactivated successfully.
Jan 26 17:27:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-26972efcfc3bb4e6b2f2edab47a58fdb01ca117589e958cd10ca9afd250428f8-merged.mount: Deactivated successfully.
Jan 26 17:27:28 compute-0 podman[417323]: 2026-01-26 17:27:28.834479077 +0000 UTC m=+0.191739890 container remove 68a3a38043b3020c7de2b1bbc87dc2b4a1078b31d5c999e46f3455dbf89b1ee3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 17:27:28 compute-0 systemd[1]: libpod-conmon-68a3a38043b3020c7de2b1bbc87dc2b4a1078b31d5c999e46f3455dbf89b1ee3.scope: Deactivated successfully.
Jan 26 17:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:27:28
Jan 26 17:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'images', '.mgr', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta']
Jan 26 17:27:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:27:29 compute-0 podman[417364]: 2026-01-26 17:27:29.021238575 +0000 UTC m=+0.051116172 container create 0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rubin, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:27:29 compute-0 systemd[1]: Started libpod-conmon-0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56.scope.
Jan 26 17:27:29 compute-0 ceph-mon[75140]: pgmap v4131: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:29 compute-0 podman[417364]: 2026-01-26 17:27:29.00060352 +0000 UTC m=+0.030481127 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08d085c2aeca1c7c482e7fd1172a75e17a75ebcf4cd38ce5e6c723e0b8d112e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08d085c2aeca1c7c482e7fd1172a75e17a75ebcf4cd38ce5e6c723e0b8d112e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08d085c2aeca1c7c482e7fd1172a75e17a75ebcf4cd38ce5e6c723e0b8d112e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c08d085c2aeca1c7c482e7fd1172a75e17a75ebcf4cd38ce5e6c723e0b8d112e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:27:29 compute-0 podman[417364]: 2026-01-26 17:27:29.123915105 +0000 UTC m=+0.153792722 container init 0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rubin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 17:27:29 compute-0 podman[417364]: 2026-01-26 17:27:29.133174911 +0000 UTC m=+0.163052518 container start 0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rubin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 17:27:29 compute-0 podman[417364]: 2026-01-26 17:27:29.137247821 +0000 UTC m=+0.167125438 container attach 0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:27:29 compute-0 nova_compute[239965]: 2026-01-26 17:27:29.849 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:29 compute-0 lvm[417465]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:27:29 compute-0 lvm[417465]: VG ceph_vg1 finished
Jan 26 17:27:29 compute-0 lvm[417464]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:27:29 compute-0 lvm[417464]: VG ceph_vg0 finished
Jan 26 17:27:29 compute-0 lvm[417472]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:27:29 compute-0 lvm[417472]: VG ceph_vg2 finished
Jan 26 17:27:29 compute-0 podman[417455]: 2026-01-26 17:27:29.993921461 +0000 UTC m=+0.065337158 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Jan 26 17:27:30 compute-0 suspicious_rubin[417380]: {}
Jan 26 17:27:30 compute-0 systemd[1]: libpod-0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56.scope: Deactivated successfully.
Jan 26 17:27:30 compute-0 systemd[1]: libpod-0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56.scope: Consumed 1.488s CPU time.
Jan 26 17:27:30 compute-0 podman[417364]: 2026-01-26 17:27:30.089648063 +0000 UTC m=+1.119525660 container died 0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 17:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-c08d085c2aeca1c7c482e7fd1172a75e17a75ebcf4cd38ce5e6c723e0b8d112e-merged.mount: Deactivated successfully.
Jan 26 17:27:30 compute-0 podman[417364]: 2026-01-26 17:27:30.13575087 +0000 UTC m=+1.165628467 container remove 0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:27:30 compute-0 systemd[1]: libpod-conmon-0c8ada02e5e48b88d0caa5abf7721a6bc8a0a1a5c492eab86a837d05e3515b56.scope: Deactivated successfully.
Jan 26 17:27:30 compute-0 sudo[417285]: pam_unix(sudo:session): session closed for user root
Jan 26 17:27:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:27:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:27:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:27:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:27:30 compute-0 sudo[417492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:27:30 compute-0 sudo[417492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:27:30 compute-0 sudo[417492]: pam_unix(sudo:session): session closed for user root
Jan 26 17:27:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:27:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:27:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:27:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:27:31 compute-0 ceph-mon[75140]: pgmap v4132: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:27:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:27:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:33 compute-0 ceph-mon[75140]: pgmap v4133: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:33 compute-0 nova_compute[239965]: 2026-01-26 17:27:33.751 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:34 compute-0 podman[417517]: 2026-01-26 17:27:34.419123421 +0000 UTC m=+0.100745515 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 17:27:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:34 compute-0 nova_compute[239965]: 2026-01-26 17:27:34.849 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:35 compute-0 ceph-mon[75140]: pgmap v4134: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:37 compute-0 ceph-mon[75140]: pgmap v4135: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:38 compute-0 nova_compute[239965]: 2026-01-26 17:27:38.754 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:38 compute-0 ceph-mon[75140]: pgmap v4136: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:39 compute-0 nova_compute[239965]: 2026-01-26 17:27:39.851 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:41 compute-0 ceph-mon[75140]: pgmap v4137: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:43 compute-0 nova_compute[239965]: 2026-01-26 17:27:43.758 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:43 compute-0 ceph-mon[75140]: pgmap v4138: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:44 compute-0 nova_compute[239965]: 2026-01-26 17:27:44.855 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:44 compute-0 ceph-mon[75140]: pgmap v4139: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #198. Immutable memtables: 0.
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:46.767223) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 123] Flushing memtable with next log file: 198
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448466767265, "job": 123, "event": "flush_started", "num_memtables": 1, "num_entries": 2056, "num_deletes": 251, "total_data_size": 3637177, "memory_usage": 3696272, "flush_reason": "Manual Compaction"}
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 123] Level-0 flush table #199: started
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448466786788, "cf_name": "default", "job": 123, "event": "table_file_creation", "file_number": 199, "file_size": 3518843, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 82132, "largest_seqno": 84187, "table_properties": {"data_size": 3509408, "index_size": 5992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18685, "raw_average_key_size": 20, "raw_value_size": 3490766, "raw_average_value_size": 3749, "num_data_blocks": 267, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769448242, "oldest_key_time": 1769448242, "file_creation_time": 1769448466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 199, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 123] Flush lasted 19631 microseconds, and 8414 cpu microseconds.
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:46.786851) [db/flush_job.cc:967] [default] [JOB 123] Level-0 flush table #199: 3518843 bytes OK
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:46.786882) [db/memtable_list.cc:519] [default] Level-0 commit table #199 started
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:46.789616) [db/memtable_list.cc:722] [default] Level-0 commit table #199: memtable #1 done
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:46.789631) EVENT_LOG_v1 {"time_micros": 1769448466789626, "job": 123, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:46.789654) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 123] Try to delete WAL files size 3628558, prev total WAL file size 3628558, number of live WAL files 2.
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000195.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:46.790742) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038323833' seq:72057594037927935, type:22 .. '7061786F730038353335' seq:0, type:0; will stop at (end)
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 124] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 123 Base level 0, inputs: [199(3436KB)], [197(10MB)]
Jan 26 17:27:46 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448466790843, "job": 124, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [199], "files_L6": [197], "score": -1, "input_data_size": 14916463, "oldest_snapshot_seqno": -1}
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 124] Generated table #200: 10139 keys, 13099088 bytes, temperature: kUnknown
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448467172675, "cf_name": "default", "job": 124, "event": "table_file_creation", "file_number": 200, "file_size": 13099088, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13034030, "index_size": 38577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25413, "raw_key_size": 265697, "raw_average_key_size": 26, "raw_value_size": 12855289, "raw_average_value_size": 1267, "num_data_blocks": 1492, "num_entries": 10139, "num_filter_entries": 10139, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769448466, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 200, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:47.173235) [db/compaction/compaction_job.cc:1663] [default] [JOB 124] Compacted 1@0 + 1@6 files to L6 => 13099088 bytes
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:47.175946) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 39.1 rd, 34.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 10.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 10653, records dropped: 514 output_compression: NoCompression
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:47.176008) EVENT_LOG_v1 {"time_micros": 1769448467175963, "job": 124, "event": "compaction_finished", "compaction_time_micros": 381961, "compaction_time_cpu_micros": 36174, "output_level": 6, "num_output_files": 1, "total_output_size": 13099088, "num_input_records": 10653, "num_output_records": 10139, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000199.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448467177628, "job": 124, "event": "table_file_deletion", "file_number": 199}
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000197.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448467182154, "job": 124, "event": "table_file_deletion", "file_number": 197}
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:46.790532) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:47.182228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:47.182236) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:47.182239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:47.182242) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:27:47 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:27:47.182245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:27:47 compute-0 nova_compute[239965]: 2026-01-26 17:27:47.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:47 compute-0 ceph-mon[75140]: pgmap v4140: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:48 compute-0 nova_compute[239965]: 2026-01-26 17:27:48.762 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:27:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303353274' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:27:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:27:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2303353274' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:27:49 compute-0 nova_compute[239965]: 2026-01-26 17:27:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:49 compute-0 nova_compute[239965]: 2026-01-26 17:27:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:49 compute-0 ceph-mon[75140]: pgmap v4141: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2303353274' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:27:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2303353274' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:27:49 compute-0 nova_compute[239965]: 2026-01-26 17:27:49.857 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:27:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:27:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:51 compute-0 ceph-mon[75140]: pgmap v4142: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:52 compute-0 nova_compute[239965]: 2026-01-26 17:27:52.517 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:27:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:52 compute-0 ceph-mon[75140]: pgmap v4143: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:53 compute-0 nova_compute[239965]: 2026-01-26 17:27:53.766 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:54 compute-0 nova_compute[239965]: 2026-01-26 17:27:54.859 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:54 compute-0 ceph-mon[75140]: pgmap v4144: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:27:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:56 compute-0 ceph-mon[75140]: pgmap v4145: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:58 compute-0 nova_compute[239965]: 2026-01-26 17:27:58.771 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:27:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:27:59.315 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:27:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:27:59.316 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:27:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:27:59.316 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:27:59 compute-0 ceph-mon[75140]: pgmap v4146: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:27:59 compute-0 nova_compute[239965]: 2026-01-26 17:27:59.862 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #201. Immutable memtables: 0.
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.407224) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 125] Flushing memtable with next log file: 201
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448480407306, "job": 125, "event": "flush_started", "num_memtables": 1, "num_entries": 348, "num_deletes": 250, "total_data_size": 189796, "memory_usage": 195792, "flush_reason": "Manual Compaction"}
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 125] Level-0 flush table #202: started
Jan 26 17:28:00 compute-0 podman[417544]: 2026-01-26 17:28:00.407969537 +0000 UTC m=+0.083853722 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448480410668, "cf_name": "default", "job": 125, "event": "table_file_creation", "file_number": 202, "file_size": 186984, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84188, "largest_seqno": 84535, "table_properties": {"data_size": 184784, "index_size": 363, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5933, "raw_average_key_size": 20, "raw_value_size": 180483, "raw_average_value_size": 615, "num_data_blocks": 16, "num_entries": 293, "num_filter_entries": 293, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769448467, "oldest_key_time": 1769448467, "file_creation_time": 1769448480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 202, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 125] Flush lasted 3467 microseconds, and 1212 cpu microseconds.
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.410708) [db/flush_job.cc:967] [default] [JOB 125] Level-0 flush table #202: 186984 bytes OK
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.410724) [db/memtable_list.cc:519] [default] Level-0 commit table #202 started
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.412743) [db/memtable_list.cc:722] [default] Level-0 commit table #202: memtable #1 done
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.412761) EVENT_LOG_v1 {"time_micros": 1769448480412755, "job": 125, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.412785) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 125] Try to delete WAL files size 187446, prev total WAL file size 187446, number of live WAL files 2.
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000198.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.413487) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033353037' seq:72057594037927935, type:22 .. '6D6772737461740033373538' seq:0, type:0; will stop at (end)
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 126] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 125 Base level 0, inputs: [202(182KB)], [200(12MB)]
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448480413583, "job": 126, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [202], "files_L6": [200], "score": -1, "input_data_size": 13286072, "oldest_snapshot_seqno": -1}
Jan 26 17:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:28:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 126] Generated table #203: 9925 keys, 9917487 bytes, temperature: kUnknown
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448480494280, "cf_name": "default", "job": 126, "event": "table_file_creation", "file_number": 203, "file_size": 9917487, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9858624, "index_size": 32958, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24837, "raw_key_size": 261445, "raw_average_key_size": 26, "raw_value_size": 9688305, "raw_average_value_size": 976, "num_data_blocks": 1256, "num_entries": 9925, "num_filter_entries": 9925, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769448480, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 203, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.494874) [db/compaction/compaction_job.cc:1663] [default] [JOB 126] Compacted 1@0 + 1@6 files to L6 => 9917487 bytes
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.496575) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 164.3 rd, 122.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 12.5 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(124.1) write-amplify(53.0) OK, records in: 10432, records dropped: 507 output_compression: NoCompression
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.496614) EVENT_LOG_v1 {"time_micros": 1769448480496595, "job": 126, "event": "compaction_finished", "compaction_time_micros": 80842, "compaction_time_cpu_micros": 45550, "output_level": 6, "num_output_files": 1, "total_output_size": 9917487, "num_input_records": 10432, "num_output_records": 9925, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000202.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448480496948, "job": 126, "event": "table_file_deletion", "file_number": 202}
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000200.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448480501955, "job": 126, "event": "table_file_deletion", "file_number": 200}
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.413204) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.502153) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.502177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.502184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.502189) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:28:00 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:28:00.502194) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:28:00 compute-0 nova_compute[239965]: 2026-01-26 17:28:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:01 compute-0 ceph-mon[75140]: pgmap v4147: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:03 compute-0 ceph-mon[75140]: pgmap v4148: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:03 compute-0 nova_compute[239965]: 2026-01-26 17:28:03.775 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:04 compute-0 nova_compute[239965]: 2026-01-26 17:28:04.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:05 compute-0 podman[417565]: 2026-01-26 17:28:05.45805287 +0000 UTC m=+0.137514144 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:28:05 compute-0 nova_compute[239965]: 2026-01-26 17:28:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:05 compute-0 nova_compute[239965]: 2026-01-26 17:28:05.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:28:05 compute-0 ceph-mon[75140]: pgmap v4149: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:07 compute-0 ceph-mon[75140]: pgmap v4150: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:08 compute-0 nova_compute[239965]: 2026-01-26 17:28:08.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:08 compute-0 nova_compute[239965]: 2026-01-26 17:28:08.777 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:08 compute-0 ceph-mon[75140]: pgmap v4151: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:09 compute-0 nova_compute[239965]: 2026-01-26 17:28:09.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:11 compute-0 ceph-mon[75140]: pgmap v4152: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.537 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.537 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.565 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.565 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.565 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.565 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.565 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:28:13 compute-0 nova_compute[239965]: 2026-01-26 17:28:13.781 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:13 compute-0 ceph-mon[75140]: pgmap v4153: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:28:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608588727' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.130 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.325 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.327 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3558MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.327 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.328 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.411 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.412 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.428 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:28:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:14 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2608588727' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.909 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:14 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:28:14 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/995982221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:28:14 compute-0 nova_compute[239965]: 2026-01-26 17:28:14.993 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:28:15 compute-0 nova_compute[239965]: 2026-01-26 17:28:15.000 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:28:15 compute-0 nova_compute[239965]: 2026-01-26 17:28:15.017 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:28:15 compute-0 nova_compute[239965]: 2026-01-26 17:28:15.021 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:28:15 compute-0 nova_compute[239965]: 2026-01-26 17:28:15.022 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:28:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:16 compute-0 ceph-mon[75140]: pgmap v4154: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/995982221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:28:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:17 compute-0 ceph-mon[75140]: pgmap v4155: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:18 compute-0 nova_compute[239965]: 2026-01-26 17:28:18.784 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:19 compute-0 ceph-mon[75140]: pgmap v4156: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:19 compute-0 nova_compute[239965]: 2026-01-26 17:28:19.910 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:21 compute-0 ceph-mon[75140]: pgmap v4157: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:23 compute-0 nova_compute[239965]: 2026-01-26 17:28:23.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:24 compute-0 ceph-mon[75140]: pgmap v4158: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:24 compute-0 nova_compute[239965]: 2026-01-26 17:28:24.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:26 compute-0 ceph-mon[75140]: pgmap v4159: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:27 compute-0 ceph-mon[75140]: pgmap v4160: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:28 compute-0 sshd-session[417635]: Invalid user solv from 45.148.10.240 port 33192
Jan 26 17:28:28 compute-0 sshd-session[417635]: Connection closed by invalid user solv 45.148.10.240 port 33192 [preauth]
Jan 26 17:28:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:28 compute-0 nova_compute[239965]: 2026-01-26 17:28:28.793 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:28:28
Jan 26 17:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'cephfs.cephfs.data', 'images', 'vms', 'backups']
Jan 26 17:28:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:28:29 compute-0 ceph-mon[75140]: pgmap v4161: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:29 compute-0 nova_compute[239965]: 2026-01-26 17:28:29.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:30 compute-0 sudo[417638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:28:30 compute-0 sudo[417638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:28:30 compute-0 sudo[417638]: pam_unix(sudo:session): session closed for user root
Jan 26 17:28:30 compute-0 sudo[417663]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:28:30 compute-0 sudo[417663]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:28:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:28:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:30 compute-0 ceph-mon[75140]: pgmap v4162: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:30 compute-0 sudo[417663]: pam_unix(sudo:session): session closed for user root
Jan 26 17:28:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:28:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:28:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:28:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:28:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:28:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:28:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:28:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:28:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:28:31 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:28:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:28:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:28:31 compute-0 sudo[417719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:28:31 compute-0 sudo[417719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:28:31 compute-0 sudo[417719]: pam_unix(sudo:session): session closed for user root
Jan 26 17:28:31 compute-0 sudo[417750]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:28:31 compute-0 sudo[417750]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:28:31 compute-0 podman[417743]: 2026-01-26 17:28:31.178702335 +0000 UTC m=+0.072763531 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:28:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:28:31 compute-0 podman[417798]: 2026-01-26 17:28:31.477999094 +0000 UTC m=+0.052886974 container create 024fe29d37dd2600591e81afd569d3a3de45140ba4dc76fdcd3061075b43fed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_rubin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 17:28:31 compute-0 systemd[1]: Started libpod-conmon-024fe29d37dd2600591e81afd569d3a3de45140ba4dc76fdcd3061075b43fed7.scope.
Jan 26 17:28:31 compute-0 podman[417798]: 2026-01-26 17:28:31.456863998 +0000 UTC m=+0.031751868 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:28:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:28:31 compute-0 podman[417798]: 2026-01-26 17:28:31.582829368 +0000 UTC m=+0.157717258 container init 024fe29d37dd2600591e81afd569d3a3de45140ba4dc76fdcd3061075b43fed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:28:31 compute-0 podman[417798]: 2026-01-26 17:28:31.593820057 +0000 UTC m=+0.168707927 container start 024fe29d37dd2600591e81afd569d3a3de45140ba4dc76fdcd3061075b43fed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:28:31 compute-0 dreamy_rubin[417815]: 167 167
Jan 26 17:28:31 compute-0 systemd[1]: libpod-024fe29d37dd2600591e81afd569d3a3de45140ba4dc76fdcd3061075b43fed7.scope: Deactivated successfully.
Jan 26 17:28:31 compute-0 podman[417798]: 2026-01-26 17:28:31.629668383 +0000 UTC m=+0.204556253 container attach 024fe29d37dd2600591e81afd569d3a3de45140ba4dc76fdcd3061075b43fed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_rubin, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 17:28:31 compute-0 podman[417798]: 2026-01-26 17:28:31.632040001 +0000 UTC m=+0.206927871 container died 024fe29d37dd2600591e81afd569d3a3de45140ba4dc76fdcd3061075b43fed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:28:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-f173ef81e9435a2f245aa33d1fb66c64a9d69ee2fe191074e79efe38c0bc4ab9-merged.mount: Deactivated successfully.
Jan 26 17:28:31 compute-0 podman[417798]: 2026-01-26 17:28:31.681314886 +0000 UTC m=+0.256202756 container remove 024fe29d37dd2600591e81afd569d3a3de45140ba4dc76fdcd3061075b43fed7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_rubin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:28:31 compute-0 systemd[1]: libpod-conmon-024fe29d37dd2600591e81afd569d3a3de45140ba4dc76fdcd3061075b43fed7.scope: Deactivated successfully.
Jan 26 17:28:31 compute-0 podman[417840]: 2026-01-26 17:28:31.869461398 +0000 UTC m=+0.045646157 container create a0b36b94dd706dbff9321da31d52782d0d1dace0b28ea12baad28ef10e6437c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 17:28:31 compute-0 systemd[1]: Started libpod-conmon-a0b36b94dd706dbff9321da31d52782d0d1dace0b28ea12baad28ef10e6437c0.scope.
Jan 26 17:28:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:28:31 compute-0 podman[417840]: 2026-01-26 17:28:31.84952997 +0000 UTC m=+0.025714729 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084dc85a1f6cf8718b1ee689a932e60ea0349c265d349b2451239bfa68d625b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:28:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:28:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:28:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:28:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:28:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084dc85a1f6cf8718b1ee689a932e60ea0349c265d349b2451239bfa68d625b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084dc85a1f6cf8718b1ee689a932e60ea0349c265d349b2451239bfa68d625b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084dc85a1f6cf8718b1ee689a932e60ea0349c265d349b2451239bfa68d625b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/084dc85a1f6cf8718b1ee689a932e60ea0349c265d349b2451239bfa68d625b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:31 compute-0 podman[417840]: 2026-01-26 17:28:31.965954968 +0000 UTC m=+0.142139727 container init a0b36b94dd706dbff9321da31d52782d0d1dace0b28ea12baad28ef10e6437c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lumiere, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:28:31 compute-0 podman[417840]: 2026-01-26 17:28:31.974104257 +0000 UTC m=+0.150288996 container start a0b36b94dd706dbff9321da31d52782d0d1dace0b28ea12baad28ef10e6437c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:28:31 compute-0 podman[417840]: 2026-01-26 17:28:31.977421328 +0000 UTC m=+0.153606067 container attach a0b36b94dd706dbff9321da31d52782d0d1dace0b28ea12baad28ef10e6437c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lumiere, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 17:28:32 compute-0 dazzling_lumiere[417857]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:28:32 compute-0 dazzling_lumiere[417857]: --> All data devices are unavailable
Jan 26 17:28:32 compute-0 systemd[1]: libpod-a0b36b94dd706dbff9321da31d52782d0d1dace0b28ea12baad28ef10e6437c0.scope: Deactivated successfully.
Jan 26 17:28:32 compute-0 podman[417840]: 2026-01-26 17:28:32.525068771 +0000 UTC m=+0.701253530 container died a0b36b94dd706dbff9321da31d52782d0d1dace0b28ea12baad28ef10e6437c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:28:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-084dc85a1f6cf8718b1ee689a932e60ea0349c265d349b2451239bfa68d625b2-merged.mount: Deactivated successfully.
Jan 26 17:28:32 compute-0 podman[417840]: 2026-01-26 17:28:32.575032483 +0000 UTC m=+0.751217232 container remove a0b36b94dd706dbff9321da31d52782d0d1dace0b28ea12baad28ef10e6437c0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_lumiere, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:28:32 compute-0 systemd[1]: libpod-conmon-a0b36b94dd706dbff9321da31d52782d0d1dace0b28ea12baad28ef10e6437c0.scope: Deactivated successfully.
Jan 26 17:28:32 compute-0 sudo[417750]: pam_unix(sudo:session): session closed for user root
Jan 26 17:28:32 compute-0 sudo[417889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:28:32 compute-0 sudo[417889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:28:32 compute-0 sudo[417889]: pam_unix(sudo:session): session closed for user root
Jan 26 17:28:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:32 compute-0 sudo[417914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:28:32 compute-0 sudo[417914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:28:32 compute-0 ceph-mon[75140]: pgmap v4163: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:33 compute-0 podman[417950]: 2026-01-26 17:28:33.066412469 +0000 UTC m=+0.039667020 container create 57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 17:28:33 compute-0 systemd[1]: Started libpod-conmon-57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094.scope.
Jan 26 17:28:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:28:33 compute-0 podman[417950]: 2026-01-26 17:28:33.049339452 +0000 UTC m=+0.022594023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:28:33 compute-0 podman[417950]: 2026-01-26 17:28:33.152945926 +0000 UTC m=+0.126200497 container init 57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:28:33 compute-0 podman[417950]: 2026-01-26 17:28:33.162269384 +0000 UTC m=+0.135523935 container start 57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:28:33 compute-0 podman[417950]: 2026-01-26 17:28:33.166071187 +0000 UTC m=+0.139325788 container attach 57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 17:28:33 compute-0 youthful_lederberg[417968]: 167 167
Jan 26 17:28:33 compute-0 systemd[1]: libpod-57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094.scope: Deactivated successfully.
Jan 26 17:28:33 compute-0 conmon[417968]: conmon 57f7916143f122425d5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094.scope/container/memory.events
Jan 26 17:28:33 compute-0 podman[417973]: 2026-01-26 17:28:33.211691632 +0000 UTC m=+0.025981525 container died 57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:28:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-c03db3e15b8eae5c7167628f4a4bb42a02d875961b41e277ab3827d91b01e6f3-merged.mount: Deactivated successfully.
Jan 26 17:28:33 compute-0 podman[417973]: 2026-01-26 17:28:33.275611756 +0000 UTC m=+0.089901639 container remove 57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 17:28:33 compute-0 systemd[1]: libpod-conmon-57f7916143f122425d5f149a46d3d5d69bcd0d82405e43101c8752ecbb4be094.scope: Deactivated successfully.
Jan 26 17:28:33 compute-0 podman[417995]: 2026-01-26 17:28:33.480014455 +0000 UTC m=+0.056917464 container create 176a5dd799840d14d40a72f5fd3d5ba347545a3c975c3f8351df08ae982304e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kalam, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:28:33 compute-0 systemd[1]: Started libpod-conmon-176a5dd799840d14d40a72f5fd3d5ba347545a3c975c3f8351df08ae982304e7.scope.
Jan 26 17:28:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:28:33 compute-0 podman[417995]: 2026-01-26 17:28:33.455223438 +0000 UTC m=+0.032126467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3837212eb42bfa9721ea25fc206af3e466c6a212ef680fa2c2860c50acca560c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3837212eb42bfa9721ea25fc206af3e466c6a212ef680fa2c2860c50acca560c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3837212eb42bfa9721ea25fc206af3e466c6a212ef680fa2c2860c50acca560c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3837212eb42bfa9721ea25fc206af3e466c6a212ef680fa2c2860c50acca560c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:33 compute-0 podman[417995]: 2026-01-26 17:28:33.610037134 +0000 UTC m=+0.186940223 container init 176a5dd799840d14d40a72f5fd3d5ba347545a3c975c3f8351df08ae982304e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:28:33 compute-0 podman[417995]: 2026-01-26 17:28:33.621365971 +0000 UTC m=+0.198268970 container start 176a5dd799840d14d40a72f5fd3d5ba347545a3c975c3f8351df08ae982304e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:28:33 compute-0 podman[417995]: 2026-01-26 17:28:33.625552894 +0000 UTC m=+0.202455903 container attach 176a5dd799840d14d40a72f5fd3d5ba347545a3c975c3f8351df08ae982304e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kalam, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:28:33 compute-0 nova_compute[239965]: 2026-01-26 17:28:33.795 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:33 compute-0 exciting_kalam[418012]: {
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:     "0": [
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:         {
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "devices": [
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "/dev/loop3"
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             ],
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_name": "ceph_lv0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_size": "21470642176",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "name": "ceph_lv0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "tags": {
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.cluster_name": "ceph",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.crush_device_class": "",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.encrypted": "0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.objectstore": "bluestore",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.osd_id": "0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.type": "block",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.vdo": "0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.with_tpm": "0"
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             },
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "type": "block",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "vg_name": "ceph_vg0"
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:         }
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:     ],
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:     "1": [
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:         {
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "devices": [
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "/dev/loop4"
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             ],
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_name": "ceph_lv1",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_size": "21470642176",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "name": "ceph_lv1",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "tags": {
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.cluster_name": "ceph",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.crush_device_class": "",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.encrypted": "0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.objectstore": "bluestore",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.osd_id": "1",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.type": "block",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.vdo": "0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.with_tpm": "0"
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             },
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "type": "block",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "vg_name": "ceph_vg1"
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:         }
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:     ],
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:     "2": [
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:         {
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "devices": [
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "/dev/loop5"
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             ],
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_name": "ceph_lv2",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_size": "21470642176",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "name": "ceph_lv2",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "tags": {
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.cluster_name": "ceph",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.crush_device_class": "",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.encrypted": "0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.objectstore": "bluestore",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.osd_id": "2",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.type": "block",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.vdo": "0",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:                 "ceph.with_tpm": "0"
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             },
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "type": "block",
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:             "vg_name": "ceph_vg2"
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:         }
Jan 26 17:28:33 compute-0 exciting_kalam[418012]:     ]
Jan 26 17:28:33 compute-0 exciting_kalam[418012]: }
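Editor's note — the container output above is the JSON emitted by `ceph-volume lvm list --format json`: a map from OSD id to a list of logical volumes, each carrying its backing device, LV path, and `ceph.*` tags. A minimal sketch of how such output could be parsed (the inline sample below is trimmed from the log to just the fields used; field names match the log verbatim):

```python
import json

# Trimmed sample of `ceph-volume lvm list --format json` output,
# copied from the log above (only the fields used here are kept).
raw = json.loads("""
{
    "0": [{"devices": ["/dev/loop3"], "lv_path": "/dev/ceph_vg0/ceph_lv0",
           "tags": {"ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde"}}],
    "1": [{"devices": ["/dev/loop4"], "lv_path": "/dev/ceph_vg1/ceph_lv1",
           "tags": {"ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42"}}],
    "2": [{"devices": ["/dev/loop5"], "lv_path": "/dev/ceph_vg2/ceph_lv2",
           "tags": {"ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20"}}]
}
""")

# Map each OSD id to (backing device, LV path); each id maps to a
# list of LVs, and for a plain bluestore block device there is one entry.
osd_map = {
    osd_id: (lvs[0]["devices"][0], lvs[0]["lv_path"])
    for osd_id, lvs in raw.items()
}

for osd_id, (dev, lv) in sorted(osd_map.items()):
    print(f"osd.{osd_id}: {dev} -> {lv}")
```

This also explains the earlier `--> passed data devices: 0 physical, 3 LVM` / `All data devices are unavailable` lines: the three LVs listed here are already tagged as in-use bluestore OSDs, so a subsequent batch run has nothing new to consume.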
Jan 26 17:28:33 compute-0 systemd[1]: libpod-176a5dd799840d14d40a72f5fd3d5ba347545a3c975c3f8351df08ae982304e7.scope: Deactivated successfully.
Jan 26 17:28:33 compute-0 podman[417995]: 2026-01-26 17:28:33.983663302 +0000 UTC m=+0.560566311 container died 176a5dd799840d14d40a72f5fd3d5ba347545a3c975c3f8351df08ae982304e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kalam, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-3837212eb42bfa9721ea25fc206af3e466c6a212ef680fa2c2860c50acca560c-merged.mount: Deactivated successfully.
Jan 26 17:28:34 compute-0 podman[417995]: 2026-01-26 17:28:34.042618533 +0000 UTC m=+0.619521552 container remove 176a5dd799840d14d40a72f5fd3d5ba347545a3c975c3f8351df08ae982304e7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_kalam, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 17:28:34 compute-0 systemd[1]: libpod-conmon-176a5dd799840d14d40a72f5fd3d5ba347545a3c975c3f8351df08ae982304e7.scope: Deactivated successfully.
Jan 26 17:28:34 compute-0 sudo[417914]: pam_unix(sudo:session): session closed for user root
Jan 26 17:28:34 compute-0 sudo[418032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:28:34 compute-0 sudo[418032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:28:34 compute-0 sudo[418032]: pam_unix(sudo:session): session closed for user root
Jan 26 17:28:34 compute-0 sudo[418057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:28:34 compute-0 sudo[418057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:28:34 compute-0 podman[418094]: 2026-01-26 17:28:34.555622429 +0000 UTC m=+0.045964126 container create f1508e63e293ddc5d719a4967410869172e3c18b26cabf9c3e2b0ed4d25116f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_mestorf, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Jan 26 17:28:34 compute-0 systemd[1]: Started libpod-conmon-f1508e63e293ddc5d719a4967410869172e3c18b26cabf9c3e2b0ed4d25116f7.scope.
Jan 26 17:28:34 compute-0 podman[418094]: 2026-01-26 17:28:34.532699219 +0000 UTC m=+0.023040896 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:28:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:28:34 compute-0 podman[418094]: 2026-01-26 17:28:34.64685837 +0000 UTC m=+0.137200037 container init f1508e63e293ddc5d719a4967410869172e3c18b26cabf9c3e2b0ed4d25116f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_mestorf, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 17:28:34 compute-0 podman[418094]: 2026-01-26 17:28:34.654358474 +0000 UTC m=+0.144700131 container start f1508e63e293ddc5d719a4967410869172e3c18b26cabf9c3e2b0ed4d25116f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 17:28:34 compute-0 podman[418094]: 2026-01-26 17:28:34.65749344 +0000 UTC m=+0.147835117 container attach f1508e63e293ddc5d719a4967410869172e3c18b26cabf9c3e2b0ed4d25116f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_mestorf, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Jan 26 17:28:34 compute-0 boring_mestorf[418111]: 167 167
Jan 26 17:28:34 compute-0 systemd[1]: libpod-f1508e63e293ddc5d719a4967410869172e3c18b26cabf9c3e2b0ed4d25116f7.scope: Deactivated successfully.
Jan 26 17:28:34 compute-0 podman[418094]: 2026-01-26 17:28:34.660318829 +0000 UTC m=+0.150660486 container died f1508e63e293ddc5d719a4967410869172e3c18b26cabf9c3e2b0ed4d25116f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_mestorf, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 17:28:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-68d9a957934e57cd9c91f2a9c8854802b6c5c51d3d5e6bc56035df352b0031ac-merged.mount: Deactivated successfully.
Jan 26 17:28:34 compute-0 podman[418094]: 2026-01-26 17:28:34.69839263 +0000 UTC m=+0.188734287 container remove f1508e63e293ddc5d719a4967410869172e3c18b26cabf9c3e2b0ed4d25116f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_mestorf, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 17:28:34 compute-0 systemd[1]: libpod-conmon-f1508e63e293ddc5d719a4967410869172e3c18b26cabf9c3e2b0ed4d25116f7.scope: Deactivated successfully.
Jan 26 17:28:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:34 compute-0 podman[418136]: 2026-01-26 17:28:34.870659603 +0000 UTC m=+0.042377247 container create 79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_newton, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:28:34 compute-0 systemd[1]: Started libpod-conmon-79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab.scope.
Jan 26 17:28:34 compute-0 nova_compute[239965]: 2026-01-26 17:28:34.917 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:34 compute-0 podman[418136]: 2026-01-26 17:28:34.852906638 +0000 UTC m=+0.024624302 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:28:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b78651a073f6a730174bc303b577a213e939406154300490d37f33518199a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b78651a073f6a730174bc303b577a213e939406154300490d37f33518199a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b78651a073f6a730174bc303b577a213e939406154300490d37f33518199a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71b78651a073f6a730174bc303b577a213e939406154300490d37f33518199a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:28:34 compute-0 podman[418136]: 2026-01-26 17:28:34.978766347 +0000 UTC m=+0.150484031 container init 79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_newton, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:28:34 compute-0 podman[418136]: 2026-01-26 17:28:34.985655365 +0000 UTC m=+0.157373009 container start 79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_newton, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:28:34 compute-0 podman[418136]: 2026-01-26 17:28:34.990559005 +0000 UTC m=+0.162276669 container attach 79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_newton, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:28:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:35 compute-0 lvm[418237]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:28:35 compute-0 lvm[418240]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:28:35 compute-0 lvm[418240]: VG ceph_vg2 finished
Jan 26 17:28:35 compute-0 lvm[418237]: VG ceph_vg0 finished
Jan 26 17:28:35 compute-0 ceph-mon[75140]: pgmap v4164: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:35 compute-0 lvm[418238]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:28:35 compute-0 lvm[418238]: VG ceph_vg1 finished
Jan 26 17:28:35 compute-0 happy_newton[418153]: {}
Jan 26 17:28:35 compute-0 podman[418228]: 2026-01-26 17:28:35.946894413 +0000 UTC m=+0.184545284 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Jan 26 17:28:35 compute-0 systemd[1]: libpod-79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab.scope: Deactivated successfully.
Jan 26 17:28:35 compute-0 systemd[1]: libpod-79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab.scope: Consumed 1.576s CPU time.
Jan 26 17:28:35 compute-0 podman[418136]: 2026-01-26 17:28:35.975207765 +0000 UTC m=+1.146925419 container died 79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-71b78651a073f6a730174bc303b577a213e939406154300490d37f33518199a0-merged.mount: Deactivated successfully.
Jan 26 17:28:36 compute-0 podman[418136]: 2026-01-26 17:28:36.031342518 +0000 UTC m=+1.203060162 container remove 79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_newton, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:28:36 compute-0 systemd[1]: libpod-conmon-79ab8557e7bdd12a28c242ef1a6e80ecfd2b1caa9b900cc6ba7fc7c910ac1eab.scope: Deactivated successfully.
Jan 26 17:28:36 compute-0 sudo[418057]: pam_unix(sudo:session): session closed for user root
Jan 26 17:28:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:28:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:28:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:28:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:28:36 compute-0 sudo[418274]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:28:36 compute-0 sudo[418274]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:28:36 compute-0 sudo[418274]: pam_unix(sudo:session): session closed for user root
Jan 26 17:28:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:28:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:28:37 compute-0 ceph-mon[75140]: pgmap v4165: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:38 compute-0 nova_compute[239965]: 2026-01-26 17:28:38.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:39 compute-0 ceph-mon[75140]: pgmap v4166: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:39 compute-0 nova_compute[239965]: 2026-01-26 17:28:39.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:40 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:40 compute-0 ceph-mon[75140]: pgmap v4167: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:42 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:43 compute-0 ceph-mon[75140]: pgmap v4168: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:43 compute-0 nova_compute[239965]: 2026-01-26 17:28:43.804 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:44 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:44 compute-0 nova_compute[239965]: 2026-01-26 17:28:44.922 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:45 compute-0 ceph-mon[75140]: pgmap v4169: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:46 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:47 compute-0 ceph-mon[75140]: pgmap v4170: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:48 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:48 compute-0 nova_compute[239965]: 2026-01-26 17:28:48.808 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:28:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213020075' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:28:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:28:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213020075' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:28:48 compute-0 ceph-mon[75140]: pgmap v4171: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3213020075' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:28:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3213020075' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:28:49 compute-0 nova_compute[239965]: 2026-01-26 17:28:49.924 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:28:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:28:49 compute-0 nova_compute[239965]: 2026-01-26 17:28:49.994 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:50 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:51 compute-0 nova_compute[239965]: 2026-01-26 17:28:51.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:51 compute-0 nova_compute[239965]: 2026-01-26 17:28:51.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:51 compute-0 ceph-mon[75140]: pgmap v4172: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:52 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:53 compute-0 nova_compute[239965]: 2026-01-26 17:28:53.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:28:53 compute-0 ceph-mon[75140]: pgmap v4173: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:53 compute-0 nova_compute[239965]: 2026-01-26 17:28:53.811 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:54 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:54 compute-0 ceph-mon[75140]: pgmap v4174: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:54 compute-0 nova_compute[239965]: 2026-01-26 17:28:54.975 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:28:56 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:57 compute-0 ceph-mon[75140]: pgmap v4175: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:58 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:58 compute-0 nova_compute[239965]: 2026-01-26 17:28:58.815 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:28:58 compute-0 ceph-mon[75140]: pgmap v4176: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:28:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:28:59.316 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:28:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:28:59.316 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:28:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:28:59.316 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:28:59 compute-0 nova_compute[239965]: 2026-01-26 17:28:59.977 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:29:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:29:00 compute-0 nova_compute[239965]: 2026-01-26 17:29:00.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:00 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:01 compute-0 podman[418299]: 2026-01-26 17:29:01.380137535 +0000 UTC m=+0.059728812 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:29:01 compute-0 ceph-mon[75140]: pgmap v4177: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:02 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:03 compute-0 nova_compute[239965]: 2026-01-26 17:29:03.819 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:03 compute-0 ceph-mon[75140]: pgmap v4178: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:04 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:04 compute-0 ceph-mon[75140]: pgmap v4179: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:04 compute-0 nova_compute[239965]: 2026-01-26 17:29:04.980 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:06 compute-0 podman[418319]: 2026-01-26 17:29:06.386873235 +0000 UTC m=+0.078844259 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
Jan 26 17:29:06 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:07 compute-0 nova_compute[239965]: 2026-01-26 17:29:07.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:07 compute-0 nova_compute[239965]: 2026-01-26 17:29:07.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:29:07 compute-0 ceph-mon[75140]: pgmap v4180: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:08 compute-0 nova_compute[239965]: 2026-01-26 17:29:08.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:08 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:08 compute-0 nova_compute[239965]: 2026-01-26 17:29:08.824 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:08 compute-0 ceph-mon[75140]: pgmap v4181: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:09 compute-0 nova_compute[239965]: 2026-01-26 17:29:09.982 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #204. Immutable memtables: 0.
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.637673) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 127] Flushing memtable with next log file: 204
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448550637707, "job": 127, "event": "flush_started", "num_memtables": 1, "num_entries": 798, "num_deletes": 256, "total_data_size": 1079078, "memory_usage": 1097384, "flush_reason": "Manual Compaction"}
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 127] Level-0 flush table #205: started
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448550650167, "cf_name": "default", "job": 127, "event": "table_file_creation", "file_number": 205, "file_size": 1064195, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 84536, "largest_seqno": 85333, "table_properties": {"data_size": 1060065, "index_size": 1845, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8867, "raw_average_key_size": 18, "raw_value_size": 1051842, "raw_average_value_size": 2252, "num_data_blocks": 82, "num_entries": 467, "num_filter_entries": 467, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769448481, "oldest_key_time": 1769448481, "file_creation_time": 1769448550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 205, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 127] Flush lasted 12558 microseconds, and 5298 cpu microseconds.
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.650225) [db/flush_job.cc:967] [default] [JOB 127] Level-0 flush table #205: 1064195 bytes OK
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.650253) [db/memtable_list.cc:519] [default] Level-0 commit table #205 started
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.688208) [db/memtable_list.cc:722] [default] Level-0 commit table #205: memtable #1 done
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.688262) EVENT_LOG_v1 {"time_micros": 1769448550688250, "job": 127, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.688293) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 127] Try to delete WAL files size 1075060, prev total WAL file size 1075060, number of live WAL files 2.
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000201.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.689265) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033373635' seq:72057594037927935, type:22 .. '6C6F676D0034303137' seq:0, type:0; will stop at (end)
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 128] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 127 Base level 0, inputs: [205(1039KB)], [203(9685KB)]
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448550689340, "job": 128, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [205], "files_L6": [203], "score": -1, "input_data_size": 10981682, "oldest_snapshot_seqno": -1}
Jan 26 17:29:10 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 128] Generated table #206: 9868 keys, 10879053 bytes, temperature: kUnknown
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448550785364, "cf_name": "default", "job": 128, "event": "table_file_creation", "file_number": 206, "file_size": 10879053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10818911, "index_size": 34365, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 24709, "raw_key_size": 261168, "raw_average_key_size": 26, "raw_value_size": 10647951, "raw_average_value_size": 1079, "num_data_blocks": 1315, "num_entries": 9868, "num_filter_entries": 9868, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769448550, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 206, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.785641) [db/compaction/compaction_job.cc:1663] [default] [JOB 128] Compacted 1@0 + 1@6 files to L6 => 10879053 bytes
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.787913) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 114.3 rd, 113.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 9.5 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(20.5) write-amplify(10.2) OK, records in: 10392, records dropped: 524 output_compression: NoCompression
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.787928) EVENT_LOG_v1 {"time_micros": 1769448550787921, "job": 128, "event": "compaction_finished", "compaction_time_micros": 96111, "compaction_time_cpu_micros": 29350, "output_level": 6, "num_output_files": 1, "total_output_size": 10879053, "num_input_records": 10392, "num_output_records": 9868, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000205.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448550788210, "job": 128, "event": "table_file_deletion", "file_number": 205}
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000203.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448550790449, "job": 128, "event": "table_file_deletion", "file_number": 203}
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.689073) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.790519) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.790524) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.790525) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.790527) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:29:10 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:29:10.790528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:29:11 compute-0 ceph-mon[75140]: pgmap v4182: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:12 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:12 compute-0 ceph-mon[75140]: pgmap v4183: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:13 compute-0 nova_compute[239965]: 2026-01-26 17:29:13.825 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:14 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:15 compute-0 nova_compute[239965]: 2026-01-26 17:29:15.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:15 compute-0 nova_compute[239965]: 2026-01-26 17:29:15.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:15 compute-0 nova_compute[239965]: 2026-01-26 17:29:15.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:29:15 compute-0 nova_compute[239965]: 2026-01-26 17:29:15.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:29:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:15 compute-0 nova_compute[239965]: 2026-01-26 17:29:15.643 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:29:15 compute-0 nova_compute[239965]: 2026-01-26 17:29:15.644 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:15 compute-0 ceph-mon[75140]: pgmap v4184: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.049 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.050 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.050 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.050 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.050 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:29:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:29:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1899229989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.610 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:29:16 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.759 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.761 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3561MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.761 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:29:16 compute-0 nova_compute[239965]: 2026-01-26 17:29:16.761 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:29:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1899229989' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:29:16 compute-0 ceph-mon[75140]: pgmap v4185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:16 compute-0 sshd-session[418346]: Received disconnect from 91.224.92.78 port 62720:11:  [preauth]
Jan 26 17:29:16 compute-0 sshd-session[418346]: Disconnected from authenticating user root 91.224.92.78 port 62720 [preauth]
Jan 26 17:29:17 compute-0 nova_compute[239965]: 2026-01-26 17:29:17.631 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:29:17 compute-0 nova_compute[239965]: 2026-01-26 17:29:17.631 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:29:17 compute-0 nova_compute[239965]: 2026-01-26 17:29:17.651 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:29:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:29:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2922675625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:29:18 compute-0 nova_compute[239965]: 2026-01-26 17:29:18.284 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.633s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:29:18 compute-0 nova_compute[239965]: 2026-01-26 17:29:18.290 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:29:18 compute-0 nova_compute[239965]: 2026-01-26 17:29:18.315 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:29:18 compute-0 nova_compute[239965]: 2026-01-26 17:29:18.317 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:29:18 compute-0 nova_compute[239965]: 2026-01-26 17:29:18.317 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:29:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2922675625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:29:18 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:18 compute-0 nova_compute[239965]: 2026-01-26 17:29:18.829 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:19 compute-0 ceph-mon[75140]: pgmap v4186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:20 compute-0 nova_compute[239965]: 2026-01-26 17:29:20.032 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:20 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:21 compute-0 ceph-mon[75140]: pgmap v4187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:22 compute-0 nova_compute[239965]: 2026-01-26 17:29:22.310 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:22 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:22 compute-0 ceph-mon[75140]: pgmap v4188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:23 compute-0 nova_compute[239965]: 2026-01-26 17:29:23.834 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:24 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:25 compute-0 nova_compute[239965]: 2026-01-26 17:29:25.078 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:25 compute-0 ceph-mon[75140]: pgmap v4189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:26 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:27 compute-0 ceph-mon[75140]: pgmap v4190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:28 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:28 compute-0 nova_compute[239965]: 2026-01-26 17:29:28.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:29:28
Jan 26 17:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.meta']
Jan 26 17:29:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:29:29 compute-0 ceph-mon[75140]: pgmap v4191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:30 compute-0 nova_compute[239965]: 2026-01-26 17:29:30.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:29:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:29:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:30 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:30 compute-0 ceph-mon[75140]: pgmap v4192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:29:31 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:29:31 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:29:31 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:29:31 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:29:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:29:32 compute-0 podman[418392]: 2026-01-26 17:29:32.413413613 +0000 UTC m=+0.097504516 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:29:32 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:32 compute-0 ceph-mon[75140]: pgmap v4193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:33 compute-0 nova_compute[239965]: 2026-01-26 17:29:33.841 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:34 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:35 compute-0 nova_compute[239965]: 2026-01-26 17:29:35.083 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:36 compute-0 ceph-mon[75140]: pgmap v4194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:36 compute-0 sudo[418412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:29:36 compute-0 sudo[418412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:29:36 compute-0 sudo[418412]: pam_unix(sudo:session): session closed for user root
Jan 26 17:29:36 compute-0 sudo[418437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:29:36 compute-0 sudo[418437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:29:36 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:36 compute-0 sudo[418437]: pam_unix(sudo:session): session closed for user root
Jan 26 17:29:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:29:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:29:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:29:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:29:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:29:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:29:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:29:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:29:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:29:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:29:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:29:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:29:37 compute-0 sudo[418494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:29:37 compute-0 sudo[418494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:29:37 compute-0 sudo[418494]: pam_unix(sudo:session): session closed for user root
Jan 26 17:29:37 compute-0 sudo[418524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:29:37 compute-0 sudo[418524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:29:37 compute-0 podman[418518]: 2026-01-26 17:29:37.139885401 +0000 UTC m=+0.095547759 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 26 17:29:37 compute-0 ceph-mon[75140]: pgmap v4195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:29:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:29:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:29:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:29:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:29:37 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:29:37 compute-0 podman[418582]: 2026-01-26 17:29:37.464829947 +0000 UTC m=+0.059381344 container create 7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:29:37 compute-0 systemd[1]: Started libpod-conmon-7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0.scope.
Jan 26 17:29:37 compute-0 podman[418582]: 2026-01-26 17:29:37.436112054 +0000 UTC m=+0.030663451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:29:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:29:37 compute-0 podman[418582]: 2026-01-26 17:29:37.572281254 +0000 UTC m=+0.166832641 container init 7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 26 17:29:37 compute-0 podman[418582]: 2026-01-26 17:29:37.582855383 +0000 UTC m=+0.177406740 container start 7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:29:37 compute-0 podman[418582]: 2026-01-26 17:29:37.587045816 +0000 UTC m=+0.181597173 container attach 7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:29:37 compute-0 kind_leavitt[418598]: 167 167
Jan 26 17:29:37 compute-0 systemd[1]: libpod-7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0.scope: Deactivated successfully.
Jan 26 17:29:37 compute-0 conmon[418598]: conmon 7dbd4b8dbbe8d5991fa8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0.scope/container/memory.events
Jan 26 17:29:37 compute-0 podman[418582]: 2026-01-26 17:29:37.592824347 +0000 UTC m=+0.187375694 container died 7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-e9968e4892edac8d5771482df3f4626617ebf8f4fb101e03965302387a4666c9-merged.mount: Deactivated successfully.
Jan 26 17:29:37 compute-0 podman[418582]: 2026-01-26 17:29:37.641579219 +0000 UTC m=+0.236130576 container remove 7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_leavitt, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:29:37 compute-0 systemd[1]: libpod-conmon-7dbd4b8dbbe8d5991fa8ab9b1846f390dfadb6a62f8f0b0bf2c03c7e34629aa0.scope: Deactivated successfully.
Jan 26 17:29:37 compute-0 podman[418622]: 2026-01-26 17:29:37.836379743 +0000 UTC m=+0.046110599 container create c5d7f549979b547a3337e0a09314a773dc8e1514006cce7b6be653f8a93b237a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_poitras, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:29:37 compute-0 systemd[1]: Started libpod-conmon-c5d7f549979b547a3337e0a09314a773dc8e1514006cce7b6be653f8a93b237a.scope.
Jan 26 17:29:37 compute-0 podman[418622]: 2026-01-26 17:29:37.816151839 +0000 UTC m=+0.025882715 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:29:37 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07258b4d1b6cc571f5f433016e28b4d85ca165b0ba6c83646888363aca301625/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07258b4d1b6cc571f5f433016e28b4d85ca165b0ba6c83646888363aca301625/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07258b4d1b6cc571f5f433016e28b4d85ca165b0ba6c83646888363aca301625/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07258b4d1b6cc571f5f433016e28b4d85ca165b0ba6c83646888363aca301625/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07258b4d1b6cc571f5f433016e28b4d85ca165b0ba6c83646888363aca301625/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:37 compute-0 podman[418622]: 2026-01-26 17:29:37.943470062 +0000 UTC m=+0.153201018 container init c5d7f549979b547a3337e0a09314a773dc8e1514006cce7b6be653f8a93b237a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Jan 26 17:29:37 compute-0 podman[418622]: 2026-01-26 17:29:37.951762184 +0000 UTC m=+0.161493040 container start c5d7f549979b547a3337e0a09314a773dc8e1514006cce7b6be653f8a93b237a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:29:37 compute-0 podman[418622]: 2026-01-26 17:29:37.956846339 +0000 UTC m=+0.166577215 container attach c5d7f549979b547a3337e0a09314a773dc8e1514006cce7b6be653f8a93b237a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_poitras, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:29:38 compute-0 great_poitras[418639]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:29:38 compute-0 great_poitras[418639]: --> All data devices are unavailable
Jan 26 17:29:38 compute-0 systemd[1]: libpod-c5d7f549979b547a3337e0a09314a773dc8e1514006cce7b6be653f8a93b237a.scope: Deactivated successfully.
Jan 26 17:29:38 compute-0 podman[418659]: 2026-01-26 17:29:38.530444147 +0000 UTC m=+0.026920019 container died c5d7f549979b547a3337e0a09314a773dc8e1514006cce7b6be653f8a93b237a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_poitras, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:29:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-07258b4d1b6cc571f5f433016e28b4d85ca165b0ba6c83646888363aca301625-merged.mount: Deactivated successfully.
Jan 26 17:29:38 compute-0 podman[418659]: 2026-01-26 17:29:38.583206337 +0000 UTC m=+0.079682199 container remove c5d7f549979b547a3337e0a09314a773dc8e1514006cce7b6be653f8a93b237a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_poitras, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 17:29:38 compute-0 systemd[1]: libpod-conmon-c5d7f549979b547a3337e0a09314a773dc8e1514006cce7b6be653f8a93b237a.scope: Deactivated successfully.
Jan 26 17:29:38 compute-0 sudo[418524]: pam_unix(sudo:session): session closed for user root
Jan 26 17:29:38 compute-0 sudo[418674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:29:38 compute-0 sudo[418674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:29:38 compute-0 sudo[418674]: pam_unix(sudo:session): session closed for user root
Jan 26 17:29:38 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:38 compute-0 sudo[418699]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:29:38 compute-0 sudo[418699]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:29:38 compute-0 nova_compute[239965]: 2026-01-26 17:29:38.846 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:39 compute-0 podman[418738]: 2026-01-26 17:29:39.082316623 +0000 UTC m=+0.044290525 container create 4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:29:39 compute-0 systemd[1]: Started libpod-conmon-4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254.scope.
Jan 26 17:29:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:29:39 compute-0 podman[418738]: 2026-01-26 17:29:39.062361285 +0000 UTC m=+0.024335217 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:29:39 compute-0 podman[418738]: 2026-01-26 17:29:39.165089217 +0000 UTC m=+0.127063139 container init 4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:29:39 compute-0 podman[418738]: 2026-01-26 17:29:39.173522913 +0000 UTC m=+0.135496815 container start 4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclean, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:29:39 compute-0 busy_mclean[418754]: 167 167
Jan 26 17:29:39 compute-0 systemd[1]: libpod-4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254.scope: Deactivated successfully.
Jan 26 17:29:39 compute-0 podman[418738]: 2026-01-26 17:29:39.181347315 +0000 UTC m=+0.143321227 container attach 4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:29:39 compute-0 conmon[418754]: conmon 4240e2b405f72334ca85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254.scope/container/memory.events
Jan 26 17:29:39 compute-0 podman[418738]: 2026-01-26 17:29:39.182937824 +0000 UTC m=+0.144911726 container died 4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclean, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1298453d3ea56cdff568be313df094fdfae1db94defa6d2087a758292ee586d-merged.mount: Deactivated successfully.
Jan 26 17:29:39 compute-0 podman[418738]: 2026-01-26 17:29:39.23186365 +0000 UTC m=+0.193837572 container remove 4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_mclean, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:29:39 compute-0 systemd[1]: libpod-conmon-4240e2b405f72334ca850b8fc9e7d9cc3677ec8f8747b76eb28bdee9a90f9254.scope: Deactivated successfully.
Jan 26 17:29:39 compute-0 podman[418778]: 2026-01-26 17:29:39.402290858 +0000 UTC m=+0.049698566 container create 4bde6c2daaded4a73fb440c7f0110592a4bfeb53295fe3c4b3480ed9354f8a5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kirch, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:29:39 compute-0 systemd[1]: Started libpod-conmon-4bde6c2daaded4a73fb440c7f0110592a4bfeb53295fe3c4b3480ed9354f8a5c.scope.
Jan 26 17:29:39 compute-0 podman[418778]: 2026-01-26 17:29:39.379332386 +0000 UTC m=+0.026740084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:29:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7b0bdefa876ef7c61eec4c2d482c5fb53a428a824f38ed80a32c9b7202359d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7b0bdefa876ef7c61eec4c2d482c5fb53a428a824f38ed80a32c9b7202359d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7b0bdefa876ef7c61eec4c2d482c5fb53a428a824f38ed80a32c9b7202359d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc7b0bdefa876ef7c61eec4c2d482c5fb53a428a824f38ed80a32c9b7202359d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:39 compute-0 podman[418778]: 2026-01-26 17:29:39.507355868 +0000 UTC m=+0.154763536 container init 4bde6c2daaded4a73fb440c7f0110592a4bfeb53295fe3c4b3480ed9354f8a5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kirch, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 17:29:39 compute-0 podman[418778]: 2026-01-26 17:29:39.515403174 +0000 UTC m=+0.162810842 container start 4bde6c2daaded4a73fb440c7f0110592a4bfeb53295fe3c4b3480ed9354f8a5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kirch, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:29:39 compute-0 podman[418778]: 2026-01-26 17:29:39.5193101 +0000 UTC m=+0.166717848 container attach 4bde6c2daaded4a73fb440c7f0110592a4bfeb53295fe3c4b3480ed9354f8a5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kirch, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:29:39 compute-0 elastic_kirch[418795]: {
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:     "0": [
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:         {
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "devices": [
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "/dev/loop3"
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             ],
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_name": "ceph_lv0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_size": "21470642176",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "name": "ceph_lv0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "tags": {
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.cluster_name": "ceph",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.crush_device_class": "",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.encrypted": "0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.objectstore": "bluestore",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.osd_id": "0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.type": "block",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.vdo": "0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.with_tpm": "0"
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             },
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "type": "block",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "vg_name": "ceph_vg0"
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:         }
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:     ],
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:     "1": [
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:         {
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "devices": [
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "/dev/loop4"
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             ],
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_name": "ceph_lv1",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_size": "21470642176",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "name": "ceph_lv1",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "tags": {
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.cluster_name": "ceph",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.crush_device_class": "",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.encrypted": "0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.objectstore": "bluestore",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.osd_id": "1",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.type": "block",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.vdo": "0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.with_tpm": "0"
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             },
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "type": "block",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "vg_name": "ceph_vg1"
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:         }
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:     ],
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:     "2": [
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:         {
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "devices": [
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "/dev/loop5"
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             ],
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_name": "ceph_lv2",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_size": "21470642176",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "name": "ceph_lv2",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "tags": {
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.cluster_name": "ceph",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.crush_device_class": "",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.encrypted": "0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.objectstore": "bluestore",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.osd_id": "2",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.type": "block",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.vdo": "0",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:                 "ceph.with_tpm": "0"
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             },
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "type": "block",
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:             "vg_name": "ceph_vg2"
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:         }
Jan 26 17:29:39 compute-0 elastic_kirch[418795]:     ]
Jan 26 17:29:39 compute-0 elastic_kirch[418795]: }
Jan 26 17:29:39 compute-0 ceph-mon[75140]: pgmap v4196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:39 compute-0 systemd[1]: libpod-4bde6c2daaded4a73fb440c7f0110592a4bfeb53295fe3c4b3480ed9354f8a5c.scope: Deactivated successfully.
Jan 26 17:29:39 compute-0 podman[418778]: 2026-01-26 17:29:39.83538354 +0000 UTC m=+0.482791208 container died 4bde6c2daaded4a73fb440c7f0110592a4bfeb53295fe3c4b3480ed9354f8a5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:29:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc7b0bdefa876ef7c61eec4c2d482c5fb53a428a824f38ed80a32c9b7202359d-merged.mount: Deactivated successfully.
Jan 26 17:29:39 compute-0 podman[418778]: 2026-01-26 17:29:39.880433621 +0000 UTC m=+0.527841289 container remove 4bde6c2daaded4a73fb440c7f0110592a4bfeb53295fe3c4b3480ed9354f8a5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_kirch, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 17:29:39 compute-0 systemd[1]: libpod-conmon-4bde6c2daaded4a73fb440c7f0110592a4bfeb53295fe3c4b3480ed9354f8a5c.scope: Deactivated successfully.
Jan 26 17:29:39 compute-0 sudo[418699]: pam_unix(sudo:session): session closed for user root
Jan 26 17:29:40 compute-0 sudo[418816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:29:40 compute-0 sudo[418816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:29:40 compute-0 sudo[418816]: pam_unix(sudo:session): session closed for user root
Jan 26 17:29:40 compute-0 sudo[418841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:29:40 compute-0 sudo[418841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:29:40 compute-0 nova_compute[239965]: 2026-01-26 17:29:40.085 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:40 compute-0 podman[418879]: 2026-01-26 17:29:40.395326893 +0000 UTC m=+0.053556581 container create 0cd99f68b2b359f33ad1cebde2581db1638c604e035b51e06dbb5bfa4d9df883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:29:40 compute-0 systemd[1]: Started libpod-conmon-0cd99f68b2b359f33ad1cebde2581db1638c604e035b51e06dbb5bfa4d9df883.scope.
Jan 26 17:29:40 compute-0 podman[418879]: 2026-01-26 17:29:40.368745783 +0000 UTC m=+0.026975471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:29:40 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:29:40 compute-0 podman[418879]: 2026-01-26 17:29:40.492372876 +0000 UTC m=+0.150602534 container init 0cd99f68b2b359f33ad1cebde2581db1638c604e035b51e06dbb5bfa4d9df883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 17:29:40 compute-0 podman[418879]: 2026-01-26 17:29:40.500488155 +0000 UTC m=+0.158717803 container start 0cd99f68b2b359f33ad1cebde2581db1638c604e035b51e06dbb5bfa4d9df883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 17:29:40 compute-0 podman[418879]: 2026-01-26 17:29:40.504099803 +0000 UTC m=+0.162329461 container attach 0cd99f68b2b359f33ad1cebde2581db1638c604e035b51e06dbb5bfa4d9df883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:29:40 compute-0 competent_mcclintock[418895]: 167 167
Jan 26 17:29:40 compute-0 systemd[1]: libpod-0cd99f68b2b359f33ad1cebde2581db1638c604e035b51e06dbb5bfa4d9df883.scope: Deactivated successfully.
Jan 26 17:29:40 compute-0 podman[418900]: 2026-01-26 17:29:40.573119161 +0000 UTC m=+0.046351824 container died 0cd99f68b2b359f33ad1cebde2581db1638c604e035b51e06dbb5bfa4d9df883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:29:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-52b6848be3cd82e2dc782330582b158441fc1af762199cb0aa6dd90c3e069f92-merged.mount: Deactivated successfully.
Jan 26 17:29:41 compute-0 podman[418900]: 2026-01-26 17:29:41.045200586 +0000 UTC m=+0.518433219 container remove 0cd99f68b2b359f33ad1cebde2581db1638c604e035b51e06dbb5bfa4d9df883 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:29:41 compute-0 systemd[1]: libpod-conmon-0cd99f68b2b359f33ad1cebde2581db1638c604e035b51e06dbb5bfa4d9df883.scope: Deactivated successfully.
Jan 26 17:29:41 compute-0 podman[418922]: 2026-01-26 17:29:41.282813547 +0000 UTC m=+0.052924825 container create 37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:29:41 compute-0 systemd[1]: Started libpod-conmon-37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517.scope.
Jan 26 17:29:41 compute-0 podman[418922]: 2026-01-26 17:29:41.260915151 +0000 UTC m=+0.031026429 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:29:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b6c8ac101effed3a6f2f45a54e776d213f26ad8622983d70cc52c58420af91b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b6c8ac101effed3a6f2f45a54e776d213f26ad8622983d70cc52c58420af91b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b6c8ac101effed3a6f2f45a54e776d213f26ad8622983d70cc52c58420af91b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b6c8ac101effed3a6f2f45a54e776d213f26ad8622983d70cc52c58420af91b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:29:41 compute-0 podman[418922]: 2026-01-26 17:29:41.383162451 +0000 UTC m=+0.153273759 container init 37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 26 17:29:41 compute-0 podman[418922]: 2026-01-26 17:29:41.397246695 +0000 UTC m=+0.167357943 container start 37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:29:41 compute-0 podman[418922]: 2026-01-26 17:29:41.400889504 +0000 UTC m=+0.171000772 container attach 37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Jan 26 17:29:42 compute-0 ceph-mon[75140]: pgmap v4197: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:42 compute-0 lvm[419016]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:29:42 compute-0 lvm[419017]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:29:42 compute-0 lvm[419016]: VG ceph_vg0 finished
Jan 26 17:29:42 compute-0 lvm[419017]: VG ceph_vg1 finished
Jan 26 17:29:42 compute-0 lvm[419019]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:29:42 compute-0 lvm[419019]: VG ceph_vg2 finished
Jan 26 17:29:42 compute-0 vibrant_varahamihira[418938]: {}
Jan 26 17:29:42 compute-0 systemd[1]: libpod-37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517.scope: Deactivated successfully.
Jan 26 17:29:42 compute-0 systemd[1]: libpod-37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517.scope: Consumed 1.505s CPU time.
Jan 26 17:29:42 compute-0 podman[418922]: 2026-01-26 17:29:42.341168109 +0000 UTC m=+1.111279367 container died 37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:29:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b6c8ac101effed3a6f2f45a54e776d213f26ad8622983d70cc52c58420af91b-merged.mount: Deactivated successfully.
Jan 26 17:29:42 compute-0 podman[418922]: 2026-01-26 17:29:42.384381966 +0000 UTC m=+1.154493214 container remove 37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_varahamihira, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:29:42 compute-0 systemd[1]: libpod-conmon-37fdcb25097c7d2ce7e87295612cde96f15fc724f08fb3277d83615b99859517.scope: Deactivated successfully.
Jan 26 17:29:42 compute-0 sudo[418841]: pam_unix(sudo:session): session closed for user root
Jan 26 17:29:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:29:42 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:29:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:29:42 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:29:42 compute-0 sudo[419035]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:29:42 compute-0 sudo[419035]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:29:42 compute-0 sudo[419035]: pam_unix(sudo:session): session closed for user root
Jan 26 17:29:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:29:43 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:29:43 compute-0 nova_compute[239965]: 2026-01-26 17:29:43.849 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:44 compute-0 ceph-mon[75140]: pgmap v4198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:45 compute-0 nova_compute[239965]: 2026-01-26 17:29:45.087 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:46 compute-0 ceph-mon[75140]: pgmap v4199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:48 compute-0 nova_compute[239965]: 2026-01-26 17:29:48.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:48 compute-0 ceph-mon[75140]: pgmap v4200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:29:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699337686' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:29:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:29:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699337686' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:29:48 compute-0 nova_compute[239965]: 2026-01-26 17:29:48.852 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1699337686' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:29:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1699337686' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:29:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:29:50 compute-0 nova_compute[239965]: 2026-01-26 17:29:50.169 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:50 compute-0 ceph-mon[75140]: pgmap v4201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:52 compute-0 ceph-mon[75140]: pgmap v4202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:53 compute-0 nova_compute[239965]: 2026-01-26 17:29:53.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:53 compute-0 nova_compute[239965]: 2026-01-26 17:29:53.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:53 compute-0 nova_compute[239965]: 2026-01-26 17:29:53.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:29:53 compute-0 nova_compute[239965]: 2026-01-26 17:29:53.856 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:54 compute-0 ceph-mon[75140]: pgmap v4203: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:55 compute-0 nova_compute[239965]: 2026-01-26 17:29:55.172 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:29:56 compute-0 ceph-mon[75140]: pgmap v4204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:58 compute-0 ceph-mon[75140]: pgmap v4205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:58 compute-0 nova_compute[239965]: 2026-01-26 17:29:58.860 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:29:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:29:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:29:59.317 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:29:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:29:59.318 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:29:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:29:59.318 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:30:00 compute-0 nova_compute[239965]: 2026-01-26 17:30:00.175 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:30:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:30:00 compute-0 nova_compute[239965]: 2026-01-26 17:30:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:00 compute-0 ceph-mon[75140]: pgmap v4206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:02 compute-0 ceph-mon[75140]: pgmap v4207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:03 compute-0 podman[419060]: 2026-01-26 17:30:03.378942584 +0000 UTC m=+0.061899224 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 26 17:30:03 compute-0 nova_compute[239965]: 2026-01-26 17:30:03.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:04 compute-0 ceph-mon[75140]: pgmap v4208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:05 compute-0 nova_compute[239965]: 2026-01-26 17:30:05.176 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:30:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Cumulative writes: 18K writes, 85K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.02 MB/s
                                           Cumulative WAL: 18K writes, 18K syncs, 1.00 writes per sync, written: 0.12 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1304 writes, 5936 keys, 1304 commit groups, 1.0 writes per commit group, ingest: 8.89 MB, 0.01 MB/s
                                           Interval WAL: 1304 writes, 1304 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     63.8      1.67              0.35        64    0.026       0      0       0.0       0.0
                                             L6      1/0   10.38 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.5    109.9     94.5      6.21              1.75        63    0.099    477K    33K       0.0       0.0
                                            Sum      1/0   10.38 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.5     86.6     88.0      7.89              2.10       127    0.062    477K    33K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.7     65.8     67.0      0.92              0.20        10    0.092     51K   2583       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    109.9     94.5      6.21              1.75        63    0.099    477K    33K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     63.9      1.67              0.35        63    0.026       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 7800.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.104, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.68 GB write, 0.09 MB/s write, 0.67 GB read, 0.09 MB/s read, 7.9 seconds
                                           Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.9 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 75.79 MB table_size: 0 occupancy: 18446744073709551615 collections: 14 last_copies: 0 last_secs: 0.00065 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(4677,72.43 MB,23.8256%) FilterBlock(128,1.27 MB,0.41925%) IndexBlock(128,2.09 MB,0.686947%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 17:30:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:06 compute-0 ceph-mon[75140]: pgmap v4209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:07 compute-0 podman[419079]: 2026-01-26 17:30:07.424809279 +0000 UTC m=+0.113987510 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 17:30:08 compute-0 nova_compute[239965]: 2026-01-26 17:30:08.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:08 compute-0 nova_compute[239965]: 2026-01-26 17:30:08.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:30:08 compute-0 ceph-mon[75140]: pgmap v4210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:08 compute-0 nova_compute[239965]: 2026-01-26 17:30:08.867 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:09 compute-0 nova_compute[239965]: 2026-01-26 17:30:09.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:10 compute-0 nova_compute[239965]: 2026-01-26 17:30:10.178 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:10 compute-0 ceph-mon[75140]: pgmap v4211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:12 compute-0 ceph-mon[75140]: pgmap v4212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:13 compute-0 nova_compute[239965]: 2026-01-26 17:30:13.871 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:14 compute-0 ceph-mon[75140]: pgmap v4213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:15 compute-0 nova_compute[239965]: 2026-01-26 17:30:15.180 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:15 compute-0 nova_compute[239965]: 2026-01-26 17:30:15.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:15 compute-0 nova_compute[239965]: 2026-01-26 17:30:15.690 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:30:15 compute-0 nova_compute[239965]: 2026-01-26 17:30:15.691 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:30:15 compute-0 nova_compute[239965]: 2026-01-26 17:30:15.691 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:30:15 compute-0 nova_compute[239965]: 2026-01-26 17:30:15.692 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:30:15 compute-0 nova_compute[239965]: 2026-01-26 17:30:15.692 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:30:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:30:16 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/623208713' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.308 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.616s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.467 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.468 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3577MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.469 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.469 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.758 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.758 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:30:16 compute-0 ceph-mon[75140]: pgmap v4214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:16 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/623208713' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.854 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.967 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.967 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:30:16 compute-0 nova_compute[239965]: 2026-01-26 17:30:16.981 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:30:17 compute-0 nova_compute[239965]: 2026-01-26 17:30:17.003 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:30:17 compute-0 nova_compute[239965]: 2026-01-26 17:30:17.019 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:30:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:30:17 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1992461840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:30:17 compute-0 nova_compute[239965]: 2026-01-26 17:30:17.605 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:30:17 compute-0 nova_compute[239965]: 2026-01-26 17:30:17.614 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:30:17 compute-0 nova_compute[239965]: 2026-01-26 17:30:17.629 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:30:17 compute-0 nova_compute[239965]: 2026-01-26 17:30:17.630 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:30:17 compute-0 nova_compute[239965]: 2026-01-26 17:30:17.630 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:30:17 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1992461840' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:30:18 compute-0 ceph-mon[75140]: pgmap v4215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:18 compute-0 nova_compute[239965]: 2026-01-26 17:30:18.874 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:19 compute-0 nova_compute[239965]: 2026-01-26 17:30:19.631 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:19 compute-0 nova_compute[239965]: 2026-01-26 17:30:19.631 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:30:19 compute-0 nova_compute[239965]: 2026-01-26 17:30:19.631 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:30:19 compute-0 nova_compute[239965]: 2026-01-26 17:30:19.651 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:30:20 compute-0 nova_compute[239965]: 2026-01-26 17:30:20.183 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:20 compute-0 ceph-mon[75140]: pgmap v4216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:22 compute-0 ceph-mon[75140]: pgmap v4217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:23 compute-0 nova_compute[239965]: 2026-01-26 17:30:23.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:24 compute-0 ceph-mon[75140]: pgmap v4218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:25 compute-0 nova_compute[239965]: 2026-01-26 17:30:25.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:26 compute-0 ceph-mon[75140]: pgmap v4219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:28 compute-0 ceph-mon[75140]: pgmap v4220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:30:28
Jan 26 17:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.mgr', '.rgw.root', 'backups', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'volumes', 'images']
Jan 26 17:30:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:30:28 compute-0 nova_compute[239965]: 2026-01-26 17:30:28.961 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:30 compute-0 ceph-mon[75140]: pgmap v4221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:30 compute-0 nova_compute[239965]: 2026-01-26 17:30:30.222 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:30:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:30:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:30:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:30:32 compute-0 ceph-mon[75140]: pgmap v4222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:33 compute-0 nova_compute[239965]: 2026-01-26 17:30:33.966 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:34 compute-0 ceph-mon[75140]: pgmap v4223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:34 compute-0 podman[419149]: 2026-01-26 17:30:34.370093312 +0000 UTC m=+0.054683678 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:30:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:35 compute-0 nova_compute[239965]: 2026-01-26 17:30:35.224 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:36 compute-0 ceph-mon[75140]: pgmap v4224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:38 compute-0 ceph-mon[75140]: pgmap v4225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:38 compute-0 podman[419169]: 2026-01-26 17:30:38.383783769 +0000 UTC m=+0.076233866 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 26 17:30:38 compute-0 nova_compute[239965]: 2026-01-26 17:30:38.967 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:40 compute-0 ceph-mon[75140]: pgmap v4226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:40 compute-0 nova_compute[239965]: 2026-01-26 17:30:40.227 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:42 compute-0 ceph-mon[75140]: pgmap v4227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:42 compute-0 sudo[419195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:30:42 compute-0 sudo[419195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:42 compute-0 sudo[419195]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:42 compute-0 sudo[419220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 17:30:42 compute-0 sudo[419220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:43 compute-0 podman[419287]: 2026-01-26 17:30:43.537240248 +0000 UTC m=+0.366023513 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:30:43 compute-0 podman[419287]: 2026-01-26 17:30:43.662426819 +0000 UTC m=+0.491209994 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 17:30:43 compute-0 nova_compute[239965]: 2026-01-26 17:30:43.971 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:44 compute-0 sshd-session[419368]: Invalid user solv from 45.148.10.240 port 59834
Jan 26 17:30:44 compute-0 sudo[419220]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:30:44 compute-0 sshd-session[419368]: Connection closed by invalid user solv 45.148.10.240 port 59834 [preauth]
Jan 26 17:30:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:30:44 compute-0 ceph-mon[75140]: pgmap v4228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:44 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:44 compute-0 sudo[419475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:30:44 compute-0 sudo[419475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:44 compute-0 sudo[419475]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:44 compute-0 sudo[419500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:30:44 compute-0 sudo[419500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:45 compute-0 nova_compute[239965]: 2026-01-26 17:30:45.228 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:45 compute-0 sudo[419500]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:30:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:30:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:30:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:30:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:30:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:30:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:30:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:30:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:30:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:30:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:30:45 compute-0 sudo[419556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:30:45 compute-0 sudo[419556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:45 compute-0 sudo[419556]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:45 compute-0 sudo[419581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:30:45 compute-0 sudo[419581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:30:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:30:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:30:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:30:45 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:30:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:45 compute-0 podman[419618]: 2026-01-26 17:30:45.694010122 +0000 UTC m=+0.036869023 container create 83ad7e8bc0212f4b058941079af213fa0bc4a0f49b3b531ff2057cf9c12dc696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:30:45 compute-0 systemd[1]: Started libpod-conmon-83ad7e8bc0212f4b058941079af213fa0bc4a0f49b3b531ff2057cf9c12dc696.scope.
Jan 26 17:30:45 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:30:45 compute-0 podman[419618]: 2026-01-26 17:30:45.764524446 +0000 UTC m=+0.107383367 container init 83ad7e8bc0212f4b058941079af213fa0bc4a0f49b3b531ff2057cf9c12dc696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_galois, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 26 17:30:45 compute-0 podman[419618]: 2026-01-26 17:30:45.771500636 +0000 UTC m=+0.114359537 container start 83ad7e8bc0212f4b058941079af213fa0bc4a0f49b3b531ff2057cf9c12dc696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_galois, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:30:45 compute-0 podman[419618]: 2026-01-26 17:30:45.77452957 +0000 UTC m=+0.117388481 container attach 83ad7e8bc0212f4b058941079af213fa0bc4a0f49b3b531ff2057cf9c12dc696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_galois, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:30:45 compute-0 podman[419618]: 2026-01-26 17:30:45.679341542 +0000 UTC m=+0.022200473 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:30:45 compute-0 jolly_galois[419636]: 167 167
Jan 26 17:30:45 compute-0 podman[419618]: 2026-01-26 17:30:45.777254007 +0000 UTC m=+0.120112898 container died 83ad7e8bc0212f4b058941079af213fa0bc4a0f49b3b531ff2057cf9c12dc696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_galois, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:30:45 compute-0 systemd[1]: libpod-83ad7e8bc0212f4b058941079af213fa0bc4a0f49b3b531ff2057cf9c12dc696.scope: Deactivated successfully.
Jan 26 17:30:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8e347edabdfecb58fcd7515da59a1792de507248ab1c50c8de8f0455d964248-merged.mount: Deactivated successfully.
Jan 26 17:30:45 compute-0 podman[419618]: 2026-01-26 17:30:45.823608521 +0000 UTC m=+0.166467422 container remove 83ad7e8bc0212f4b058941079af213fa0bc4a0f49b3b531ff2057cf9c12dc696 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_galois, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:30:45 compute-0 systemd[1]: libpod-conmon-83ad7e8bc0212f4b058941079af213fa0bc4a0f49b3b531ff2057cf9c12dc696.scope: Deactivated successfully.
Jan 26 17:30:45 compute-0 podman[419659]: 2026-01-26 17:30:45.97445119 +0000 UTC m=+0.040400860 container create 617d70ab9e13bd5dd9040369565f50bd1c15d317df6313a6e8c5a3288f1e8d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_joliot, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:30:46 compute-0 systemd[1]: Started libpod-conmon-617d70ab9e13bd5dd9040369565f50bd1c15d317df6313a6e8c5a3288f1e8d16.scope.
Jan 26 17:30:46 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df276270d1c5b227db3db8ba066108aea61fc8edde22dc4233d4861727e9058e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df276270d1c5b227db3db8ba066108aea61fc8edde22dc4233d4861727e9058e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df276270d1c5b227db3db8ba066108aea61fc8edde22dc4233d4861727e9058e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df276270d1c5b227db3db8ba066108aea61fc8edde22dc4233d4861727e9058e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df276270d1c5b227db3db8ba066108aea61fc8edde22dc4233d4861727e9058e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:46 compute-0 podman[419659]: 2026-01-26 17:30:45.957887464 +0000 UTC m=+0.023837164 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:30:46 compute-0 podman[419659]: 2026-01-26 17:30:46.056956187 +0000 UTC m=+0.122905957 container init 617d70ab9e13bd5dd9040369565f50bd1c15d317df6313a6e8c5a3288f1e8d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_joliot, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 17:30:46 compute-0 podman[419659]: 2026-01-26 17:30:46.064465712 +0000 UTC m=+0.130415382 container start 617d70ab9e13bd5dd9040369565f50bd1c15d317df6313a6e8c5a3288f1e8d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 17:30:46 compute-0 podman[419659]: 2026-01-26 17:30:46.067859344 +0000 UTC m=+0.133809114 container attach 617d70ab9e13bd5dd9040369565f50bd1c15d317df6313a6e8c5a3288f1e8d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_joliot, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:30:46 compute-0 bold_joliot[419676]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:30:46 compute-0 bold_joliot[419676]: --> All data devices are unavailable
Jan 26 17:30:46 compute-0 systemd[1]: libpod-617d70ab9e13bd5dd9040369565f50bd1c15d317df6313a6e8c5a3288f1e8d16.scope: Deactivated successfully.
Jan 26 17:30:46 compute-0 podman[419659]: 2026-01-26 17:30:46.537663853 +0000 UTC m=+0.603613523 container died 617d70ab9e13bd5dd9040369565f50bd1c15d317df6313a6e8c5a3288f1e8d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 26 17:30:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-df276270d1c5b227db3db8ba066108aea61fc8edde22dc4233d4861727e9058e-merged.mount: Deactivated successfully.
Jan 26 17:30:46 compute-0 podman[419659]: 2026-01-26 17:30:46.584443867 +0000 UTC m=+0.650393567 container remove 617d70ab9e13bd5dd9040369565f50bd1c15d317df6313a6e8c5a3288f1e8d16 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_joliot, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:30:46 compute-0 systemd[1]: libpod-conmon-617d70ab9e13bd5dd9040369565f50bd1c15d317df6313a6e8c5a3288f1e8d16.scope: Deactivated successfully.
Jan 26 17:30:46 compute-0 sudo[419581]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:46 compute-0 ceph-mon[75140]: pgmap v4229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:46 compute-0 sudo[419708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:30:46 compute-0 sudo[419708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:46 compute-0 sudo[419708]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:46 compute-0 sudo[419733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:30:46 compute-0 sudo[419733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:47 compute-0 podman[419768]: 2026-01-26 17:30:47.036747899 +0000 UTC m=+0.045930095 container create 3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 26 17:30:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:47 compute-0 systemd[1]: Started libpod-conmon-3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b.scope.
Jan 26 17:30:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:30:47 compute-0 podman[419768]: 2026-01-26 17:30:47.110094622 +0000 UTC m=+0.119276848 container init 3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:30:47 compute-0 podman[419768]: 2026-01-26 17:30:47.017020746 +0000 UTC m=+0.026202982 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:30:47 compute-0 podman[419768]: 2026-01-26 17:30:47.118913768 +0000 UTC m=+0.128095964 container start 3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leavitt, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:30:47 compute-0 podman[419768]: 2026-01-26 17:30:47.122034874 +0000 UTC m=+0.131217100 container attach 3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:30:47 compute-0 compassionate_leavitt[419784]: 167 167
Jan 26 17:30:47 compute-0 systemd[1]: libpod-3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b.scope: Deactivated successfully.
Jan 26 17:30:47 compute-0 conmon[419784]: conmon 3d33d7bec696a80a90ec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b.scope/container/memory.events
Jan 26 17:30:47 compute-0 podman[419768]: 2026-01-26 17:30:47.124687839 +0000 UTC m=+0.133870035 container died 3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leavitt, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-0295e46e11bbd75418bb2a474f02a309f49de06e5259f7489887b5bf8c13bdcc-merged.mount: Deactivated successfully.
Jan 26 17:30:47 compute-0 podman[419768]: 2026-01-26 17:30:47.162872003 +0000 UTC m=+0.172054199 container remove 3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_leavitt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:30:47 compute-0 systemd[1]: libpod-conmon-3d33d7bec696a80a90ecf07ed30f84801e12528032d4c8ddfda139472149a54b.scope: Deactivated successfully.
Jan 26 17:30:47 compute-0 podman[419808]: 2026-01-26 17:30:47.317151386 +0000 UTC m=+0.040541973 container create 4a6a8d00db812f284b7d59e1829df88ae86a96e25b3bb56e6e5a4e548aebf8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:30:47 compute-0 systemd[1]: Started libpod-conmon-4a6a8d00db812f284b7d59e1829df88ae86a96e25b3bb56e6e5a4e548aebf8e3.scope.
Jan 26 17:30:47 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a08485f757d6b9faf899af66f2572dcc0c2b7f9ce96ead699408a94ca10e54d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:47 compute-0 podman[419808]: 2026-01-26 17:30:47.299883274 +0000 UTC m=+0.023273891 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a08485f757d6b9faf899af66f2572dcc0c2b7f9ce96ead699408a94ca10e54d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a08485f757d6b9faf899af66f2572dcc0c2b7f9ce96ead699408a94ca10e54d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a08485f757d6b9faf899af66f2572dcc0c2b7f9ce96ead699408a94ca10e54d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:47 compute-0 podman[419808]: 2026-01-26 17:30:47.40888319 +0000 UTC m=+0.132273807 container init 4a6a8d00db812f284b7d59e1829df88ae86a96e25b3bb56e6e5a4e548aebf8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:30:47 compute-0 podman[419808]: 2026-01-26 17:30:47.421048856 +0000 UTC m=+0.144439453 container start 4a6a8d00db812f284b7d59e1829df88ae86a96e25b3bb56e6e5a4e548aebf8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:30:47 compute-0 podman[419808]: 2026-01-26 17:30:47.428824027 +0000 UTC m=+0.152214654 container attach 4a6a8d00db812f284b7d59e1829df88ae86a96e25b3bb56e6e5a4e548aebf8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]: {
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:     "0": [
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:         {
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "devices": [
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "/dev/loop3"
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             ],
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_name": "ceph_lv0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_size": "21470642176",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "name": "ceph_lv0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "tags": {
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.cluster_name": "ceph",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.crush_device_class": "",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.encrypted": "0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.objectstore": "bluestore",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.osd_id": "0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.type": "block",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.vdo": "0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.with_tpm": "0"
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             },
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "type": "block",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "vg_name": "ceph_vg0"
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:         }
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:     ],
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:     "1": [
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:         {
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "devices": [
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "/dev/loop4"
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             ],
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_name": "ceph_lv1",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_size": "21470642176",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "name": "ceph_lv1",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "tags": {
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.cluster_name": "ceph",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.crush_device_class": "",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.encrypted": "0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.objectstore": "bluestore",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.osd_id": "1",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.type": "block",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.vdo": "0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.with_tpm": "0"
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             },
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "type": "block",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "vg_name": "ceph_vg1"
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:         }
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:     ],
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:     "2": [
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:         {
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "devices": [
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "/dev/loop5"
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             ],
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_name": "ceph_lv2",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_size": "21470642176",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "name": "ceph_lv2",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "tags": {
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.cluster_name": "ceph",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.crush_device_class": "",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.encrypted": "0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.objectstore": "bluestore",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.osd_id": "2",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.type": "block",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.vdo": "0",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:                 "ceph.with_tpm": "0"
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             },
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "type": "block",
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:             "vg_name": "ceph_vg2"
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:         }
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]:     ]
Jan 26 17:30:47 compute-0 crazy_rhodes[419824]: }
Jan 26 17:30:47 compute-0 systemd[1]: libpod-4a6a8d00db812f284b7d59e1829df88ae86a96e25b3bb56e6e5a4e548aebf8e3.scope: Deactivated successfully.
Jan 26 17:30:47 compute-0 podman[419808]: 2026-01-26 17:30:47.708674981 +0000 UTC m=+0.432065578 container died 4a6a8d00db812f284b7d59e1829df88ae86a96e25b3bb56e6e5a4e548aebf8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 17:30:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a08485f757d6b9faf899af66f2572dcc0c2b7f9ce96ead699408a94ca10e54d-merged.mount: Deactivated successfully.
Jan 26 17:30:47 compute-0 podman[419808]: 2026-01-26 17:30:47.755312991 +0000 UTC m=+0.478703578 container remove 4a6a8d00db812f284b7d59e1829df88ae86a96e25b3bb56e6e5a4e548aebf8e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_rhodes, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:30:47 compute-0 systemd[1]: libpod-conmon-4a6a8d00db812f284b7d59e1829df88ae86a96e25b3bb56e6e5a4e548aebf8e3.scope: Deactivated successfully.
Jan 26 17:30:47 compute-0 sudo[419733]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:47 compute-0 sudo[419846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:30:47 compute-0 sudo[419846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:47 compute-0 sudo[419846]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:47 compute-0 sudo[419871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:30:47 compute-0 sudo[419871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:48 compute-0 podman[419908]: 2026-01-26 17:30:48.229467677 +0000 UTC m=+0.044801447 container create 359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Jan 26 17:30:48 compute-0 systemd[1]: Started libpod-conmon-359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11.scope.
Jan 26 17:30:48 compute-0 podman[419908]: 2026-01-26 17:30:48.211363314 +0000 UTC m=+0.026697114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:30:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:30:48 compute-0 podman[419908]: 2026-01-26 17:30:48.324577653 +0000 UTC m=+0.139911443 container init 359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:30:48 compute-0 podman[419908]: 2026-01-26 17:30:48.33347874 +0000 UTC m=+0.148812520 container start 359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 17:30:48 compute-0 podman[419908]: 2026-01-26 17:30:48.337174311 +0000 UTC m=+0.152508111 container attach 359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_hopper, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:30:48 compute-0 quirky_hopper[419924]: 167 167
Jan 26 17:30:48 compute-0 systemd[1]: libpod-359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11.scope: Deactivated successfully.
Jan 26 17:30:48 compute-0 conmon[419924]: conmon 359939ec8c2a25f5cd01 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11.scope/container/memory.events
Jan 26 17:30:48 compute-0 podman[419908]: 2026-01-26 17:30:48.342395268 +0000 UTC m=+0.157729048 container died 359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_hopper, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:30:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-310a5dbce389870c37d9cce445d39f8c3a9a04e0e3443c474ba68e949069cebc-merged.mount: Deactivated successfully.
Jan 26 17:30:48 compute-0 podman[419908]: 2026-01-26 17:30:48.378576723 +0000 UTC m=+0.193910523 container remove 359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:30:48 compute-0 systemd[1]: libpod-conmon-359939ec8c2a25f5cd01efeb2a64c41dfc6acc47e622b147f2ccb62054411f11.scope: Deactivated successfully.
Jan 26 17:30:48 compute-0 podman[419946]: 2026-01-26 17:30:48.576044192 +0000 UTC m=+0.056194585 container create b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:30:48 compute-0 systemd[1]: Started libpod-conmon-b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d.scope.
Jan 26 17:30:48 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dceda6806f0eba885713c51b1b7a6d408f8051c23904bd29af3e954d7b590663/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dceda6806f0eba885713c51b1b7a6d408f8051c23904bd29af3e954d7b590663/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dceda6806f0eba885713c51b1b7a6d408f8051c23904bd29af3e954d7b590663/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dceda6806f0eba885713c51b1b7a6d408f8051c23904bd29af3e954d7b590663/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:30:48 compute-0 podman[419946]: 2026-01-26 17:30:48.55796119 +0000 UTC m=+0.038111603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:30:48 compute-0 ceph-mon[75140]: pgmap v4230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:48 compute-0 podman[419946]: 2026-01-26 17:30:48.66020955 +0000 UTC m=+0.140359953 container init b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:30:48 compute-0 podman[419946]: 2026-01-26 17:30:48.666216138 +0000 UTC m=+0.146366531 container start b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:30:48 compute-0 podman[419946]: 2026-01-26 17:30:48.670302068 +0000 UTC m=+0.150452471 container attach b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 26 17:30:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:30:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2457920320' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:30:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:30:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2457920320' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:30:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:49 compute-0 nova_compute[239965]: 2026-01-26 17:30:49.042 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:49 compute-0 lvm[420045]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:30:49 compute-0 lvm[420045]: VG ceph_vg1 finished
Jan 26 17:30:49 compute-0 lvm[420044]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:30:49 compute-0 lvm[420044]: VG ceph_vg0 finished
Jan 26 17:30:49 compute-0 lvm[420047]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:30:49 compute-0 lvm[420047]: VG ceph_vg2 finished
Jan 26 17:30:49 compute-0 lvm[420049]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:30:49 compute-0 lvm[420049]: VG ceph_vg2 finished
Jan 26 17:30:49 compute-0 unruffled_shtern[419965]: {}
Jan 26 17:30:49 compute-0 lvm[420051]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:30:49 compute-0 lvm[420051]: VG ceph_vg2 finished
Jan 26 17:30:49 compute-0 nova_compute[239965]: 2026-01-26 17:30:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:49 compute-0 systemd[1]: libpod-b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d.scope: Deactivated successfully.
Jan 26 17:30:49 compute-0 podman[419946]: 2026-01-26 17:30:49.520213433 +0000 UTC m=+1.000363826 container died b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:30:49 compute-0 systemd[1]: libpod-b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d.scope: Consumed 1.453s CPU time.
Jan 26 17:30:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-dceda6806f0eba885713c51b1b7a6d408f8051c23904bd29af3e954d7b590663-merged.mount: Deactivated successfully.
Jan 26 17:30:49 compute-0 podman[419946]: 2026-01-26 17:30:49.56304212 +0000 UTC m=+1.043192513 container remove b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_shtern, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:30:49 compute-0 systemd[1]: libpod-conmon-b21431c65c62900abc4b152820a6849a7aec4e37da981de92e79de140b05b09d.scope: Deactivated successfully.
Jan 26 17:30:49 compute-0 sudo[419871]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:30:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:30:49 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2457920320' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:30:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/2457920320' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:30:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:30:49 compute-0 sudo[420062]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:30:49 compute-0 sudo[420062]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:30:49 compute-0 sudo[420062]: pam_unix(sudo:session): session closed for user root
Jan 26 17:30:49 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:30:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:30:50 compute-0 nova_compute[239965]: 2026-01-26 17:30:50.282 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:50 compute-0 ceph-mon[75140]: pgmap v4231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:52 compute-0 ceph-mon[75140]: pgmap v4232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:53 compute-0 nova_compute[239965]: 2026-01-26 17:30:53.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:53 compute-0 nova_compute[239965]: 2026-01-26 17:30:53.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:54 compute-0 nova_compute[239965]: 2026-01-26 17:30:54.088 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:54 compute-0 nova_compute[239965]: 2026-01-26 17:30:54.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:30:54 compute-0 ceph-mon[75140]: pgmap v4233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:55 compute-0 nova_compute[239965]: 2026-01-26 17:30:55.284 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:30:56 compute-0 ceph-mon[75140]: pgmap v4234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:58 compute-0 ceph-mon[75140]: pgmap v4235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:30:59 compute-0 nova_compute[239965]: 2026-01-26 17:30:59.092 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:30:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:30:59.318 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:30:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:30:59.319 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:30:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:30:59.319 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:31:00 compute-0 ceph-mon[75140]: pgmap v4236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:00 compute-0 nova_compute[239965]: 2026-01-26 17:31:00.286 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:31:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:31:00 compute-0 nova_compute[239965]: 2026-01-26 17:31:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:02 compute-0 ceph-mon[75140]: pgmap v4237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:04 compute-0 nova_compute[239965]: 2026-01-26 17:31:04.123 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:04 compute-0 ceph-mon[75140]: pgmap v4238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:05 compute-0 nova_compute[239965]: 2026-01-26 17:31:05.290 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:05 compute-0 podman[420087]: 2026-01-26 17:31:05.3914472 +0000 UTC m=+0.071942750 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:31:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:06 compute-0 ceph-mon[75140]: pgmap v4239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:08 compute-0 ceph-mon[75140]: pgmap v4240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:09 compute-0 nova_compute[239965]: 2026-01-26 17:31:09.127 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:09 compute-0 podman[420107]: 2026-01-26 17:31:09.430516237 +0000 UTC m=+0.116167661 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:31:09 compute-0 nova_compute[239965]: 2026-01-26 17:31:09.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:10 compute-0 nova_compute[239965]: 2026-01-26 17:31:10.485 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:10 compute-0 ceph-mon[75140]: pgmap v4241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:10 compute-0 nova_compute[239965]: 2026-01-26 17:31:10.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:10 compute-0 nova_compute[239965]: 2026-01-26 17:31:10.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:31:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:12 compute-0 ceph-mon[75140]: pgmap v4242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:14 compute-0 nova_compute[239965]: 2026-01-26 17:31:14.129 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:14 compute-0 ceph-mon[75140]: pgmap v4243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:15 compute-0 nova_compute[239965]: 2026-01-26 17:31:15.487 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:16 compute-0 ceph-mon[75140]: pgmap v4244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:17 compute-0 nova_compute[239965]: 2026-01-26 17:31:17.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:17 compute-0 nova_compute[239965]: 2026-01-26 17:31:17.543 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:31:17 compute-0 nova_compute[239965]: 2026-01-26 17:31:17.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:31:17 compute-0 nova_compute[239965]: 2026-01-26 17:31:17.544 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:31:17 compute-0 nova_compute[239965]: 2026-01-26 17:31:17.545 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:31:17 compute-0 nova_compute[239965]: 2026-01-26 17:31:17.545 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:31:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:31:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/579260757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:31:18 compute-0 nova_compute[239965]: 2026-01-26 17:31:18.135 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:31:18 compute-0 nova_compute[239965]: 2026-01-26 17:31:18.311 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:31:18 compute-0 nova_compute[239965]: 2026-01-26 17:31:18.312 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3520MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:31:18 compute-0 nova_compute[239965]: 2026-01-26 17:31:18.313 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:31:18 compute-0 nova_compute[239965]: 2026-01-26 17:31:18.313 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:31:18 compute-0 nova_compute[239965]: 2026-01-26 17:31:18.442 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:31:18 compute-0 nova_compute[239965]: 2026-01-26 17:31:18.442 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:31:18 compute-0 nova_compute[239965]: 2026-01-26 17:31:18.463 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:31:18 compute-0 ceph-mon[75140]: pgmap v4245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:18 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/579260757' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:31:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:31:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1006943361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:31:19 compute-0 nova_compute[239965]: 2026-01-26 17:31:19.025 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:31:19 compute-0 nova_compute[239965]: 2026-01-26 17:31:19.032 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:31:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:19 compute-0 nova_compute[239965]: 2026-01-26 17:31:19.059 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:31:19 compute-0 nova_compute[239965]: 2026-01-26 17:31:19.062 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:31:19 compute-0 nova_compute[239965]: 2026-01-26 17:31:19.062 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.749s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:31:19 compute-0 nova_compute[239965]: 2026-01-26 17:31:19.133 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:19 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1006943361' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:31:19 compute-0 ceph-mon[75140]: pgmap v4246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:20 compute-0 nova_compute[239965]: 2026-01-26 17:31:20.063 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:20 compute-0 nova_compute[239965]: 2026-01-26 17:31:20.064 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:31:20 compute-0 nova_compute[239965]: 2026-01-26 17:31:20.064 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:31:20 compute-0 nova_compute[239965]: 2026-01-26 17:31:20.087 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:31:20 compute-0 nova_compute[239965]: 2026-01-26 17:31:20.490 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:22 compute-0 ceph-mon[75140]: pgmap v4247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:24 compute-0 ceph-mon[75140]: pgmap v4248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:24 compute-0 nova_compute[239965]: 2026-01-26 17:31:24.136 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:31:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 185K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 300 writes, 450 keys, 300 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
                                           Interval WAL: 300 writes, 150 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:31:24 compute-0 nova_compute[239965]: 2026-01-26 17:31:24.527 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:25 compute-0 nova_compute[239965]: 2026-01-26 17:31:25.493 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:26 compute-0 ceph-mon[75140]: pgmap v4249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:28 compute-0 ceph-mon[75140]: pgmap v4250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:31:28
Jan 26 17:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', '.mgr', '.rgw.root', 'vms', 'backups', 'images', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 26 17:31:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:31:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:29 compute-0 nova_compute[239965]: 2026-01-26 17:31:29.142 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:31:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.4 total, 600.0 interval
                                           Cumulative writes: 50K writes, 193K keys, 50K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 50K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:31:30 compute-0 ceph-mon[75140]: pgmap v4251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:31:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:31:30 compute-0 nova_compute[239965]: 2026-01-26 17:31:30.495 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:31:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:31:32 compute-0 ceph-mon[75140]: pgmap v4252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:34 compute-0 nova_compute[239965]: 2026-01-26 17:31:34.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:34 compute-0 ceph-mon[75140]: pgmap v4253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:31:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.70 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:31:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:35 compute-0 nova_compute[239965]: 2026-01-26 17:31:35.498 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:36 compute-0 podman[420178]: 2026-01-26 17:31:36.409365345 +0000 UTC m=+0.088121936 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 26 17:31:36 compute-0 ceph-mon[75140]: pgmap v4254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #207. Immutable memtables: 0.
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.456048) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 129] Flushing memtable with next log file: 207
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448696456115, "job": 129, "event": "flush_started", "num_memtables": 1, "num_entries": 1402, "num_deletes": 251, "total_data_size": 2281393, "memory_usage": 2307904, "flush_reason": "Manual Compaction"}
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 129] Level-0 flush table #208: started
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448696471765, "cf_name": "default", "job": 129, "event": "table_file_creation", "file_number": 208, "file_size": 2226341, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 85334, "largest_seqno": 86735, "table_properties": {"data_size": 2219748, "index_size": 3791, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13639, "raw_average_key_size": 19, "raw_value_size": 2206558, "raw_average_value_size": 3211, "num_data_blocks": 170, "num_entries": 687, "num_filter_entries": 687, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769448551, "oldest_key_time": 1769448551, "file_creation_time": 1769448696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 208, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 129] Flush lasted 15782 microseconds, and 6525 cpu microseconds.
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.471830) [db/flush_job.cc:967] [default] [JOB 129] Level-0 flush table #208: 2226341 bytes OK
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.471865) [db/memtable_list.cc:519] [default] Level-0 commit table #208 started
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.473539) [db/memtable_list.cc:722] [default] Level-0 commit table #208: memtable #1 done
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.473555) EVENT_LOG_v1 {"time_micros": 1769448696473550, "job": 129, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.473579) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 129] Try to delete WAL files size 2275192, prev total WAL file size 2275192, number of live WAL files 2.
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000204.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.474317) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038353334' seq:72057594037927935, type:22 .. '7061786F730038373836' seq:0, type:0; will stop at (end)
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 130] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 129 Base level 0, inputs: [208(2174KB)], [206(10MB)]
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448696474373, "job": 130, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [208], "files_L6": [206], "score": -1, "input_data_size": 13105394, "oldest_snapshot_seqno": -1}
Jan 26 17:31:36 compute-0 nova_compute[239965]: 2026-01-26 17:31:36.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 130] Generated table #209: 10041 keys, 11283807 bytes, temperature: kUnknown
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448696558132, "cf_name": "default", "job": 130, "event": "table_file_creation", "file_number": 209, "file_size": 11283807, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11222110, "index_size": 35472, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25157, "raw_key_size": 265380, "raw_average_key_size": 26, "raw_value_size": 11047622, "raw_average_value_size": 1100, "num_data_blocks": 1356, "num_entries": 10041, "num_filter_entries": 10041, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769448696, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 209, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.558535) [db/compaction/compaction_job.cc:1663] [default] [JOB 130] Compacted 1@0 + 1@6 files to L6 => 11283807 bytes
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.561338) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 156.1 rd, 134.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 10.4 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(11.0) write-amplify(5.1) OK, records in: 10555, records dropped: 514 output_compression: NoCompression
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.561358) EVENT_LOG_v1 {"time_micros": 1769448696561349, "job": 130, "event": "compaction_finished", "compaction_time_micros": 83954, "compaction_time_cpu_micros": 37012, "output_level": 6, "num_output_files": 1, "total_output_size": 11283807, "num_input_records": 10555, "num_output_records": 10041, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000208.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448696561830, "job": 130, "event": "table_file_deletion", "file_number": 208}
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000206.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448696563962, "job": 130, "event": "table_file_deletion", "file_number": 206}
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.474230) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.564096) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.564103) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.564105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.564107) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:31:36 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:31:36.564110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:31:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:38 compute-0 ceph-mon[75140]: pgmap v4255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:39 compute-0 nova_compute[239965]: 2026-01-26 17:31:39.150 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 17:31:40 compute-0 podman[420198]: 2026-01-26 17:31:40.389942011 +0000 UTC m=+0.079785562 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:31:40 compute-0 ceph-mon[75140]: pgmap v4256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:40 compute-0 nova_compute[239965]: 2026-01-26 17:31:40.499 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:42 compute-0 ceph-mon[75140]: pgmap v4257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:44 compute-0 nova_compute[239965]: 2026-01-26 17:31:44.152 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:44 compute-0 ceph-mon[75140]: pgmap v4258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:45 compute-0 nova_compute[239965]: 2026-01-26 17:31:45.524 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:45 compute-0 nova_compute[239965]: 2026-01-26 17:31:45.525 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:31:45 compute-0 nova_compute[239965]: 2026-01-26 17:31:45.533 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:46 compute-0 ceph-mon[75140]: pgmap v4259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:47 compute-0 ceph-mon[75140]: pgmap v4260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:31:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3536957662' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:31:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:31:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3536957662' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:31:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3536957662' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:31:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3536957662' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:31:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:49 compute-0 nova_compute[239965]: 2026-01-26 17:31:49.308 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:49 compute-0 nova_compute[239965]: 2026-01-26 17:31:49.578 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:49 compute-0 sudo[420225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:31:49 compute-0 sudo[420225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:31:49 compute-0 sudo[420225]: pam_unix(sudo:session): session closed for user root
Jan 26 17:31:49 compute-0 sudo[420250]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:31:49 compute-0 sudo[420250]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:31:50 compute-0 ceph-mon[75140]: pgmap v4261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:31:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:31:50 compute-0 sudo[420250]: pam_unix(sudo:session): session closed for user root
Jan 26 17:31:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:31:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:31:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:31:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:31:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:31:50 compute-0 nova_compute[239965]: 2026-01-26 17:31:50.579 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:31:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:31:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:31:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:31:50 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:31:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:31:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:31:50 compute-0 sudo[420308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:31:50 compute-0 sudo[420308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:31:50 compute-0 sudo[420308]: pam_unix(sudo:session): session closed for user root
Jan 26 17:31:50 compute-0 sudo[420333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:31:50 compute-0 sudo[420333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:31:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:31:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:31:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:31:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:31:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:31:51 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:31:51 compute-0 podman[420371]: 2026-01-26 17:31:51.210503852 +0000 UTC m=+0.024332547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:31:51 compute-0 podman[420371]: 2026-01-26 17:31:51.327727558 +0000 UTC m=+0.141556173 container create 5d544887157bd393b8fb9a42f586e1a606c55142680f915e13b845d0e56103f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 17:31:51 compute-0 systemd[1]: Started libpod-conmon-5d544887157bd393b8fb9a42f586e1a606c55142680f915e13b845d0e56103f2.scope.
Jan 26 17:31:51 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:31:51 compute-0 podman[420371]: 2026-01-26 17:31:51.551089571 +0000 UTC m=+0.364918206 container init 5d544887157bd393b8fb9a42f586e1a606c55142680f915e13b845d0e56103f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 17:31:51 compute-0 podman[420371]: 2026-01-26 17:31:51.559483806 +0000 UTC m=+0.373312431 container start 5d544887157bd393b8fb9a42f586e1a606c55142680f915e13b845d0e56103f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lichterman, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 26 17:31:51 compute-0 sharp_lichterman[420387]: 167 167
Jan 26 17:31:51 compute-0 systemd[1]: libpod-5d544887157bd393b8fb9a42f586e1a606c55142680f915e13b845d0e56103f2.scope: Deactivated successfully.
Jan 26 17:31:51 compute-0 podman[420371]: 2026-01-26 17:31:51.689286941 +0000 UTC m=+0.503115656 container attach 5d544887157bd393b8fb9a42f586e1a606c55142680f915e13b845d0e56103f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lichterman, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 17:31:51 compute-0 podman[420371]: 2026-01-26 17:31:51.690790517 +0000 UTC m=+0.504619172 container died 5d544887157bd393b8fb9a42f586e1a606c55142680f915e13b845d0e56103f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lichterman, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:31:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-169b373a74f6359077c3be1e59f7dff9a83fbc86a25cce64b181da6a5fbe33b7-merged.mount: Deactivated successfully.
Jan 26 17:31:51 compute-0 podman[420371]: 2026-01-26 17:31:51.944126373 +0000 UTC m=+0.757954988 container remove 5d544887157bd393b8fb9a42f586e1a606c55142680f915e13b845d0e56103f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:31:51 compute-0 systemd[1]: libpod-conmon-5d544887157bd393b8fb9a42f586e1a606c55142680f915e13b845d0e56103f2.scope: Deactivated successfully.
Jan 26 17:31:52 compute-0 podman[420413]: 2026-01-26 17:31:52.127578179 +0000 UTC m=+0.061744841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:31:52 compute-0 podman[420413]: 2026-01-26 17:31:52.29690197 +0000 UTC m=+0.231068612 container create b92436b81e2dca923f48432de4b5138fcf11fde3d226562869ee5434694b87ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:31:52 compute-0 ceph-mon[75140]: pgmap v4262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:52 compute-0 systemd[1]: Started libpod-conmon-b92436b81e2dca923f48432de4b5138fcf11fde3d226562869ee5434694b87ba.scope.
Jan 26 17:31:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d49d5497a2edbc79d9e7eca1000d280be259a66c60b0ac6fc6f61cb41fd05e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d49d5497a2edbc79d9e7eca1000d280be259a66c60b0ac6fc6f61cb41fd05e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d49d5497a2edbc79d9e7eca1000d280be259a66c60b0ac6fc6f61cb41fd05e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d49d5497a2edbc79d9e7eca1000d280be259a66c60b0ac6fc6f61cb41fd05e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48d49d5497a2edbc79d9e7eca1000d280be259a66c60b0ac6fc6f61cb41fd05e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:52 compute-0 podman[420413]: 2026-01-26 17:31:52.706563148 +0000 UTC m=+0.640729810 container init b92436b81e2dca923f48432de4b5138fcf11fde3d226562869ee5434694b87ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gauss, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:31:52 compute-0 podman[420413]: 2026-01-26 17:31:52.716350698 +0000 UTC m=+0.650517330 container start b92436b81e2dca923f48432de4b5138fcf11fde3d226562869ee5434694b87ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gauss, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 26 17:31:52 compute-0 podman[420413]: 2026-01-26 17:31:52.729073699 +0000 UTC m=+0.663240371 container attach b92436b81e2dca923f48432de4b5138fcf11fde3d226562869ee5434694b87ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gauss, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:31:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:53 compute-0 sad_gauss[420429]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:31:53 compute-0 sad_gauss[420429]: --> All data devices are unavailable
Jan 26 17:31:53 compute-0 systemd[1]: libpod-b92436b81e2dca923f48432de4b5138fcf11fde3d226562869ee5434694b87ba.scope: Deactivated successfully.
Jan 26 17:31:53 compute-0 podman[420413]: 2026-01-26 17:31:53.212166733 +0000 UTC m=+1.146333375 container died b92436b81e2dca923f48432de4b5138fcf11fde3d226562869ee5434694b87ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gauss, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 17:31:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-48d49d5497a2edbc79d9e7eca1000d280be259a66c60b0ac6fc6f61cb41fd05e-merged.mount: Deactivated successfully.
Jan 26 17:31:53 compute-0 podman[420413]: 2026-01-26 17:31:53.342592182 +0000 UTC m=+1.276758824 container remove b92436b81e2dca923f48432de4b5138fcf11fde3d226562869ee5434694b87ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:31:53 compute-0 systemd[1]: libpod-conmon-b92436b81e2dca923f48432de4b5138fcf11fde3d226562869ee5434694b87ba.scope: Deactivated successfully.
Jan 26 17:31:53 compute-0 sudo[420333]: pam_unix(sudo:session): session closed for user root
Jan 26 17:31:53 compute-0 sudo[420462]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:31:53 compute-0 sudo[420462]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:31:53 compute-0 sudo[420462]: pam_unix(sudo:session): session closed for user root
Jan 26 17:31:53 compute-0 sudo[420487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:31:53 compute-0 sudo[420487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:31:53 compute-0 podman[420523]: 2026-01-26 17:31:53.856445758 +0000 UTC m=+0.115534045 container create 28dfaad82c8bbe1faafdad549b4b457e0c760e8e2251f7ce97f3874de40a647b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_thompson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:31:53 compute-0 podman[420523]: 2026-01-26 17:31:53.774504115 +0000 UTC m=+0.033592482 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:31:54 compute-0 systemd[1]: Started libpod-conmon-28dfaad82c8bbe1faafdad549b4b457e0c760e8e2251f7ce97f3874de40a647b.scope.
Jan 26 17:31:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:31:54 compute-0 podman[420523]: 2026-01-26 17:31:54.131189438 +0000 UTC m=+0.390277775 container init 28dfaad82c8bbe1faafdad549b4b457e0c760e8e2251f7ce97f3874de40a647b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_thompson, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:31:54 compute-0 podman[420523]: 2026-01-26 17:31:54.137312338 +0000 UTC m=+0.396400625 container start 28dfaad82c8bbe1faafdad549b4b457e0c760e8e2251f7ce97f3874de40a647b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:31:54 compute-0 podman[420523]: 2026-01-26 17:31:54.1414947 +0000 UTC m=+0.400583007 container attach 28dfaad82c8bbe1faafdad549b4b457e0c760e8e2251f7ce97f3874de40a647b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_thompson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:31:54 compute-0 kind_thompson[420539]: 167 167
Jan 26 17:31:54 compute-0 systemd[1]: libpod-28dfaad82c8bbe1faafdad549b4b457e0c760e8e2251f7ce97f3874de40a647b.scope: Deactivated successfully.
Jan 26 17:31:54 compute-0 podman[420523]: 2026-01-26 17:31:54.142510975 +0000 UTC m=+0.401599272 container died 28dfaad82c8bbe1faafdad549b4b457e0c760e8e2251f7ce97f3874de40a647b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_thompson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Jan 26 17:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a8d1304374a14ee0fcdddd11a3b78df26d7df1b0a47541d748cef4086ba44e5-merged.mount: Deactivated successfully.
Jan 26 17:31:54 compute-0 podman[420523]: 2026-01-26 17:31:54.177195634 +0000 UTC m=+0.436283921 container remove 28dfaad82c8bbe1faafdad549b4b457e0c760e8e2251f7ce97f3874de40a647b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_thompson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 17:31:54 compute-0 systemd[1]: libpod-conmon-28dfaad82c8bbe1faafdad549b4b457e0c760e8e2251f7ce97f3874de40a647b.scope: Deactivated successfully.
Jan 26 17:31:54 compute-0 ceph-mon[75140]: pgmap v4263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:54 compute-0 nova_compute[239965]: 2026-01-26 17:31:54.311 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:54 compute-0 podman[420562]: 2026-01-26 17:31:54.346271488 +0000 UTC m=+0.039248551 container create 7c11c13589aed5df04ef0c88cd643610d530114757e39583938bd682d6ef7d2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gagarin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 17:31:54 compute-0 systemd[1]: Started libpod-conmon-7c11c13589aed5df04ef0c88cd643610d530114757e39583938bd682d6ef7d2b.scope.
Jan 26 17:31:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b97feee572c5b49383182524a74104dc69465ac4e166689075d2f15fab45ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b97feee572c5b49383182524a74104dc69465ac4e166689075d2f15fab45ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b97feee572c5b49383182524a74104dc69465ac4e166689075d2f15fab45ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8b97feee572c5b49383182524a74104dc69465ac4e166689075d2f15fab45ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:54 compute-0 podman[420562]: 2026-01-26 17:31:54.331311162 +0000 UTC m=+0.024288255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:31:54 compute-0 podman[420562]: 2026-01-26 17:31:54.466859807 +0000 UTC m=+0.159836970 container init 7c11c13589aed5df04ef0c88cd643610d530114757e39583938bd682d6ef7d2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 17:31:54 compute-0 podman[420562]: 2026-01-26 17:31:54.473212662 +0000 UTC m=+0.166189725 container start 7c11c13589aed5df04ef0c88cd643610d530114757e39583938bd682d6ef7d2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 17:31:54 compute-0 podman[420562]: 2026-01-26 17:31:54.510257458 +0000 UTC m=+0.203234571 container attach 7c11c13589aed5df04ef0c88cd643610d530114757e39583938bd682d6ef7d2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gagarin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]: {
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:     "0": [
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:         {
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "devices": [
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "/dev/loop3"
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             ],
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_name": "ceph_lv0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_size": "21470642176",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "name": "ceph_lv0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "tags": {
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.cluster_name": "ceph",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.crush_device_class": "",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.encrypted": "0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.objectstore": "bluestore",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.osd_id": "0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.type": "block",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.vdo": "0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.with_tpm": "0"
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             },
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "type": "block",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "vg_name": "ceph_vg0"
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:         }
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:     ],
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:     "1": [
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:         {
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "devices": [
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "/dev/loop4"
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             ],
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_name": "ceph_lv1",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_size": "21470642176",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "name": "ceph_lv1",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "tags": {
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.cluster_name": "ceph",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.crush_device_class": "",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.encrypted": "0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.objectstore": "bluestore",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.osd_id": "1",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.type": "block",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.vdo": "0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.with_tpm": "0"
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             },
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "type": "block",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "vg_name": "ceph_vg1"
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:         }
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:     ],
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:     "2": [
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:         {
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "devices": [
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "/dev/loop5"
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             ],
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_name": "ceph_lv2",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_size": "21470642176",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "name": "ceph_lv2",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "tags": {
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.cluster_name": "ceph",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.crush_device_class": "",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.encrypted": "0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.objectstore": "bluestore",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.osd_id": "2",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.type": "block",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.vdo": "0",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:                 "ceph.with_tpm": "0"
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             },
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "type": "block",
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:             "vg_name": "ceph_vg2"
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:         }
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]:     ]
Jan 26 17:31:54 compute-0 fervent_gagarin[420579]: }
Jan 26 17:31:54 compute-0 systemd[1]: libpod-7c11c13589aed5df04ef0c88cd643610d530114757e39583938bd682d6ef7d2b.scope: Deactivated successfully.
Jan 26 17:31:54 compute-0 podman[420562]: 2026-01-26 17:31:54.829733841 +0000 UTC m=+0.522710904 container died 7c11c13589aed5df04ef0c88cd643610d530114757e39583938bd682d6ef7d2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gagarin, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:31:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8b97feee572c5b49383182524a74104dc69465ac4e166689075d2f15fab45ff-merged.mount: Deactivated successfully.
Jan 26 17:31:54 compute-0 podman[420562]: 2026-01-26 17:31:54.879839676 +0000 UTC m=+0.572816779 container remove 7c11c13589aed5df04ef0c88cd643610d530114757e39583938bd682d6ef7d2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 17:31:54 compute-0 systemd[1]: libpod-conmon-7c11c13589aed5df04ef0c88cd643610d530114757e39583938bd682d6ef7d2b.scope: Deactivated successfully.
Jan 26 17:31:54 compute-0 sudo[420487]: pam_unix(sudo:session): session closed for user root
Jan 26 17:31:55 compute-0 sudo[420599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:31:55 compute-0 sudo[420599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:31:55 compute-0 sudo[420599]: pam_unix(sudo:session): session closed for user root
Jan 26 17:31:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:55 compute-0 sudo[420624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:31:55 compute-0 sudo[420624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:31:55 compute-0 podman[420660]: 2026-01-26 17:31:55.431730134 +0000 UTC m=+0.049428561 container create 113dc8c41972b204880fa587375c22ac4ce1607d3bb6e7278b7fb9115bcee720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_driscoll, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 17:31:55 compute-0 systemd[1]: Started libpod-conmon-113dc8c41972b204880fa587375c22ac4ce1607d3bb6e7278b7fb9115bcee720.scope.
Jan 26 17:31:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:31:55 compute-0 podman[420660]: 2026-01-26 17:31:55.41072894 +0000 UTC m=+0.028427397 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:31:55 compute-0 nova_compute[239965]: 2026-01-26 17:31:55.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:55 compute-0 nova_compute[239965]: 2026-01-26 17:31:55.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:55 compute-0 podman[420660]: 2026-01-26 17:31:55.516077626 +0000 UTC m=+0.133776073 container init 113dc8c41972b204880fa587375c22ac4ce1607d3bb6e7278b7fb9115bcee720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_driscoll, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:31:55 compute-0 podman[420660]: 2026-01-26 17:31:55.524361319 +0000 UTC m=+0.142059746 container start 113dc8c41972b204880fa587375c22ac4ce1607d3bb6e7278b7fb9115bcee720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:31:55 compute-0 podman[420660]: 2026-01-26 17:31:55.528163831 +0000 UTC m=+0.145862258 container attach 113dc8c41972b204880fa587375c22ac4ce1607d3bb6e7278b7fb9115bcee720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_driscoll, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:31:55 compute-0 kind_driscoll[420676]: 167 167
Jan 26 17:31:55 compute-0 systemd[1]: libpod-113dc8c41972b204880fa587375c22ac4ce1607d3bb6e7278b7fb9115bcee720.scope: Deactivated successfully.
Jan 26 17:31:55 compute-0 podman[420660]: 2026-01-26 17:31:55.53222142 +0000 UTC m=+0.149919847 container died 113dc8c41972b204880fa587375c22ac4ce1607d3bb6e7278b7fb9115bcee720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_driscoll, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:31:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf76994e34391c0e4b8c13a98a6e756e46096215f91166bd4d1a5c0528b4da30-merged.mount: Deactivated successfully.
Jan 26 17:31:55 compute-0 podman[420660]: 2026-01-26 17:31:55.578526193 +0000 UTC m=+0.196224630 container remove 113dc8c41972b204880fa587375c22ac4ce1607d3bb6e7278b7fb9115bcee720 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 17:31:55 compute-0 nova_compute[239965]: 2026-01-26 17:31:55.663 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:55 compute-0 systemd[1]: libpod-conmon-113dc8c41972b204880fa587375c22ac4ce1607d3bb6e7278b7fb9115bcee720.scope: Deactivated successfully.
Jan 26 17:31:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:31:55 compute-0 podman[420700]: 2026-01-26 17:31:55.76526771 +0000 UTC m=+0.028943919 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:31:56 compute-0 podman[420700]: 2026-01-26 17:31:56.114812087 +0000 UTC m=+0.378488296 container create 2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:31:56 compute-0 systemd[1]: Started libpod-conmon-2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0.scope.
Jan 26 17:31:56 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b00babc646748ec835fb54d15f6d40b84a99596728246da7a7e3973d68b37e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b00babc646748ec835fb54d15f6d40b84a99596728246da7a7e3973d68b37e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b00babc646748ec835fb54d15f6d40b84a99596728246da7a7e3973d68b37e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64b00babc646748ec835fb54d15f6d40b84a99596728246da7a7e3973d68b37e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:31:56 compute-0 podman[420700]: 2026-01-26 17:31:56.210228582 +0000 UTC m=+0.473904801 container init 2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kapitsa, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 26 17:31:56 compute-0 podman[420700]: 2026-01-26 17:31:56.223736951 +0000 UTC m=+0.487413160 container start 2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kapitsa, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:31:56 compute-0 podman[420700]: 2026-01-26 17:31:56.228339165 +0000 UTC m=+0.492015394 container attach 2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kapitsa, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:31:56 compute-0 ceph-mon[75140]: pgmap v4264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:56 compute-0 nova_compute[239965]: 2026-01-26 17:31:56.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:31:56 compute-0 lvm[420794]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:31:56 compute-0 lvm[420794]: VG ceph_vg0 finished
Jan 26 17:31:56 compute-0 lvm[420797]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:31:56 compute-0 lvm[420797]: VG ceph_vg1 finished
Jan 26 17:31:56 compute-0 lvm[420796]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:31:56 compute-0 lvm[420796]: VG ceph_vg2 finished
Jan 26 17:31:57 compute-0 nice_kapitsa[420716]: {}
Jan 26 17:31:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:57 compute-0 systemd[1]: libpod-2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0.scope: Deactivated successfully.
Jan 26 17:31:57 compute-0 systemd[1]: libpod-2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0.scope: Consumed 1.580s CPU time.
Jan 26 17:31:57 compute-0 podman[420700]: 2026-01-26 17:31:57.101671541 +0000 UTC m=+1.365347770 container died 2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kapitsa, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:31:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-64b00babc646748ec835fb54d15f6d40b84a99596728246da7a7e3973d68b37e-merged.mount: Deactivated successfully.
Jan 26 17:31:57 compute-0 podman[420700]: 2026-01-26 17:31:57.149366208 +0000 UTC m=+1.413042417 container remove 2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:31:57 compute-0 systemd[1]: libpod-conmon-2b941f9d09ae7052cfb725e8e6df9982cfc3a661359be64cd345d3810a6bdae0.scope: Deactivated successfully.
Jan 26 17:31:57 compute-0 sudo[420624]: pam_unix(sudo:session): session closed for user root
Jan 26 17:31:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:31:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:31:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:31:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:31:57 compute-0 sudo[420811]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:31:57 compute-0 sudo[420811]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:31:57 compute-0 sudo[420811]: pam_unix(sudo:session): session closed for user root
Jan 26 17:31:58 compute-0 ceph-mon[75140]: pgmap v4265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:31:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:31:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:31:59 compute-0 nova_compute[239965]: 2026-01-26 17:31:59.315 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:31:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:31:59.319 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:31:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:31:59.320 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:31:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:31:59.320 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:32:00 compute-0 ceph-mon[75140]: pgmap v4266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:32:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:32:00 compute-0 nova_compute[239965]: 2026-01-26 17:32:00.665 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:01 compute-0 nova_compute[239965]: 2026-01-26 17:32:01.518 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:02 compute-0 ceph-mon[75140]: pgmap v4267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:04 compute-0 nova_compute[239965]: 2026-01-26 17:32:04.358 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:04 compute-0 ceph-mon[75140]: pgmap v4268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:05 compute-0 nova_compute[239965]: 2026-01-26 17:32:05.724 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:05 compute-0 nova_compute[239965]: 2026-01-26 17:32:05.784 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:06 compute-0 ceph-mon[75140]: pgmap v4269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:07 compute-0 podman[420836]: 2026-01-26 17:32:07.411543163 +0000 UTC m=+0.096017229 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 17:32:08 compute-0 ceph-mon[75140]: pgmap v4270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:09 compute-0 nova_compute[239965]: 2026-01-26 17:32:09.402 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:10 compute-0 nova_compute[239965]: 2026-01-26 17:32:10.726 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:10 compute-0 ceph-mon[75140]: pgmap v4271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:11 compute-0 podman[420855]: 2026-01-26 17:32:11.450020196 +0000 UTC m=+0.120591080 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:32:11 compute-0 ceph-mon[75140]: pgmap v4272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:12 compute-0 nova_compute[239965]: 2026-01-26 17:32:12.451 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:12 compute-0 nova_compute[239965]: 2026-01-26 17:32:12.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:12 compute-0 nova_compute[239965]: 2026-01-26 17:32:12.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:32:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:14 compute-0 ceph-mon[75140]: pgmap v4273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:14 compute-0 nova_compute[239965]: 2026-01-26 17:32:14.453 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:14 compute-0 nova_compute[239965]: 2026-01-26 17:32:14.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:14 compute-0 nova_compute[239965]: 2026-01-26 17:32:14.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:32:14 compute-0 nova_compute[239965]: 2026-01-26 17:32:14.528 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:32:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:15 compute-0 nova_compute[239965]: 2026-01-26 17:32:15.789 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:16 compute-0 ceph-mon[75140]: pgmap v4274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:18 compute-0 ceph-mon[75140]: pgmap v4275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:18 compute-0 nova_compute[239965]: 2026-01-26 17:32:18.529 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:19 compute-0 nova_compute[239965]: 2026-01-26 17:32:19.456 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:20 compute-0 ceph-mon[75140]: pgmap v4276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:20 compute-0 nova_compute[239965]: 2026-01-26 17:32:20.791 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:20 compute-0 nova_compute[239965]: 2026-01-26 17:32:20.980 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:32:20 compute-0 nova_compute[239965]: 2026-01-26 17:32:20.980 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:32:20 compute-0 nova_compute[239965]: 2026-01-26 17:32:20.981 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:32:20 compute-0 nova_compute[239965]: 2026-01-26 17:32:20.981 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:32:20 compute-0 nova_compute[239965]: 2026-01-26 17:32:20.982 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:32:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:32:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1617108523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:32:21 compute-0 nova_compute[239965]: 2026-01-26 17:32:21.589 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.608s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:32:21 compute-0 nova_compute[239965]: 2026-01-26 17:32:21.786 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:32:21 compute-0 nova_compute[239965]: 2026-01-26 17:32:21.787 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3500MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:32:21 compute-0 nova_compute[239965]: 2026-01-26 17:32:21.787 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:32:21 compute-0 nova_compute[239965]: 2026-01-26 17:32:21.788 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:32:21 compute-0 nova_compute[239965]: 2026-01-26 17:32:21.857 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:32:21 compute-0 nova_compute[239965]: 2026-01-26 17:32:21.857 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:32:21 compute-0 nova_compute[239965]: 2026-01-26 17:32:21.874 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:32:22 compute-0 ceph-mon[75140]: pgmap v4277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1617108523' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:32:22 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:32:22 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1445826934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:32:22 compute-0 nova_compute[239965]: 2026-01-26 17:32:22.440 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:32:22 compute-0 nova_compute[239965]: 2026-01-26 17:32:22.449 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:32:22 compute-0 nova_compute[239965]: 2026-01-26 17:32:22.468 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:32:22 compute-0 nova_compute[239965]: 2026-01-26 17:32:22.472 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:32:22 compute-0 nova_compute[239965]: 2026-01-26 17:32:22.472 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:32:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1445826934' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:32:23 compute-0 nova_compute[239965]: 2026-01-26 17:32:23.455 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:23 compute-0 nova_compute[239965]: 2026-01-26 17:32:23.456 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:32:23 compute-0 nova_compute[239965]: 2026-01-26 17:32:23.456 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:32:23 compute-0 nova_compute[239965]: 2026-01-26 17:32:23.488 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:32:24 compute-0 ceph-mon[75140]: pgmap v4278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:24 compute-0 nova_compute[239965]: 2026-01-26 17:32:24.618 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:25 compute-0 nova_compute[239965]: 2026-01-26 17:32:25.916 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:26 compute-0 ceph-mon[75140]: pgmap v4279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:28 compute-0 ceph-mon[75140]: pgmap v4280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:32:28
Jan 26 17:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'default.rgw.log', 'images', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'backups', '.rgw.root']
Jan 26 17:32:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:32:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:29 compute-0 nova_compute[239965]: 2026-01-26 17:32:29.620 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:32:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:32:30 compute-0 ceph-mon[75140]: pgmap v4281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:30 compute-0 nova_compute[239965]: 2026-01-26 17:32:30.917 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:32:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:32:32 compute-0 ceph-mon[75140]: pgmap v4282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:34 compute-0 ceph-mon[75140]: pgmap v4283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:34 compute-0 nova_compute[239965]: 2026-01-26 17:32:34.623 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:35 compute-0 nova_compute[239965]: 2026-01-26 17:32:35.919 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:36 compute-0 ceph-mon[75140]: pgmap v4284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:38 compute-0 ceph-mon[75140]: pgmap v4285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:38 compute-0 podman[420925]: 2026-01-26 17:32:38.407952632 +0000 UTC m=+0.086658060 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 26 17:32:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:39 compute-0 nova_compute[239965]: 2026-01-26 17:32:39.624 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:40 compute-0 ceph-mon[75140]: pgmap v4286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:40 compute-0 nova_compute[239965]: 2026-01-26 17:32:40.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:42 compute-0 podman[420945]: 2026-01-26 17:32:42.442316074 +0000 UTC m=+0.131175948 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:32:43 compute-0 ceph-mon[75140]: pgmap v4287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:44 compute-0 ceph-mon[75140]: pgmap v4288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:44 compute-0 nova_compute[239965]: 2026-01-26 17:32:44.627 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:45 compute-0 nova_compute[239965]: 2026-01-26 17:32:45.923 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:46 compute-0 ceph-mon[75140]: pgmap v4289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:48 compute-0 ceph-mon[75140]: pgmap v4290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:32:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1631724566' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:32:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:32:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1631724566' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:32:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1631724566' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:32:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1631724566' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:32:49 compute-0 nova_compute[239965]: 2026-01-26 17:32:49.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:49 compute-0 nova_compute[239965]: 2026-01-26 17:32:49.632 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:32:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:32:50 compute-0 ceph-mon[75140]: pgmap v4291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:50 compute-0 nova_compute[239965]: 2026-01-26 17:32:50.925 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:52 compute-0 ceph-mon[75140]: pgmap v4292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:32:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 26 17:32:54 compute-0 ceph-mon[75140]: pgmap v4293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 26 17:32:54 compute-0 nova_compute[239965]: 2026-01-26 17:32:54.708 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 26 17:32:55 compute-0 nova_compute[239965]: 2026-01-26 17:32:55.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:32:55 compute-0 nova_compute[239965]: 2026-01-26 17:32:55.950 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:56 compute-0 ceph-mon[75140]: pgmap v4294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 3 op/s
Jan 26 17:32:56 compute-0 nova_compute[239965]: 2026-01-26 17:32:56.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 17:32:57 compute-0 sudo[420974]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:32:57 compute-0 sudo[420974]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:32:57 compute-0 sudo[420974]: pam_unix(sudo:session): session closed for user root
Jan 26 17:32:57 compute-0 sudo[420999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:32:57 compute-0 sudo[420999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:32:57 compute-0 nova_compute[239965]: 2026-01-26 17:32:57.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:32:57 compute-0 sudo[420999]: pam_unix(sudo:session): session closed for user root
Jan 26 17:32:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:32:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:32:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:32:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:32:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:32:58 compute-0 ceph-mon[75140]: pgmap v4295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.2 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 17:32:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:32:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:32:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:32:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:32:58 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:32:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:32:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:32:58 compute-0 sudo[421056]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:32:58 compute-0 sudo[421056]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:32:58 compute-0 sudo[421056]: pam_unix(sudo:session): session closed for user root
Jan 26 17:32:58 compute-0 sudo[421082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:32:58 compute-0 sudo[421082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:32:58 compute-0 podman[421120]: 2026-01-26 17:32:58.768229329 +0000 UTC m=+0.026298724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:32:58 compute-0 podman[421120]: 2026-01-26 17:32:58.879184433 +0000 UTC m=+0.137253788 container create 486d432ec9596d4ac31350bcac799f97749551f028b7ae8c38ee9b49b072a7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:32:58 compute-0 systemd[1]: Started libpod-conmon-486d432ec9596d4ac31350bcac799f97749551f028b7ae8c38ee9b49b072a7f1.scope.
Jan 26 17:32:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:32:58 compute-0 podman[421120]: 2026-01-26 17:32:58.991853868 +0000 UTC m=+0.249923253 container init 486d432ec9596d4ac31350bcac799f97749551f028b7ae8c38ee9b49b072a7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 26 17:32:59 compute-0 podman[421120]: 2026-01-26 17:32:59.001852572 +0000 UTC m=+0.259921937 container start 486d432ec9596d4ac31350bcac799f97749551f028b7ae8c38ee9b49b072a7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:32:59 compute-0 podman[421120]: 2026-01-26 17:32:59.005095412 +0000 UTC m=+0.263164807 container attach 486d432ec9596d4ac31350bcac799f97749551f028b7ae8c38ee9b49b072a7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:32:59 compute-0 sharp_haslett[421136]: 167 167
Jan 26 17:32:59 compute-0 systemd[1]: libpod-486d432ec9596d4ac31350bcac799f97749551f028b7ae8c38ee9b49b072a7f1.scope: Deactivated successfully.
Jan 26 17:32:59 compute-0 podman[421120]: 2026-01-26 17:32:59.009265474 +0000 UTC m=+0.267334849 container died 486d432ec9596d4ac31350bcac799f97749551f028b7ae8c38ee9b49b072a7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:32:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-379449bbba547bcf6f6d5f831abbfe734894a9aad3d2639c8f1237f74895c341-merged.mount: Deactivated successfully.
Jan 26 17:32:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 26 17:32:59 compute-0 sshd-session[421055]: Invalid user solv from 45.148.10.240 port 52474
Jan 26 17:32:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:32:59.320 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:32:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:32:59.322 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:32:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:32:59.323 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:32:59 compute-0 podman[421120]: 2026-01-26 17:32:59.434011451 +0000 UTC m=+0.692080846 container remove 486d432ec9596d4ac31350bcac799f97749551f028b7ae8c38ee9b49b072a7f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_haslett, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:32:59 compute-0 sshd-session[421055]: Connection closed by invalid user solv 45.148.10.240 port 52474 [preauth]
Jan 26 17:32:59 compute-0 systemd[1]: libpod-conmon-486d432ec9596d4ac31350bcac799f97749551f028b7ae8c38ee9b49b072a7f1.scope: Deactivated successfully.
Jan 26 17:32:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:32:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:32:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:32:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:32:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:32:59 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:32:59 compute-0 podman[421160]: 2026-01-26 17:32:59.658282606 +0000 UTC m=+0.051466090 container create c5734f342f580a1801a33136d68d38ed67f8cf56e3b29e171e5e82320d5d0439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_margulis, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:32:59 compute-0 podman[421160]: 2026-01-26 17:32:59.628785624 +0000 UTC m=+0.021969138 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:32:59 compute-0 nova_compute[239965]: 2026-01-26 17:32:59.775 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:32:59 compute-0 systemd[1]: Started libpod-conmon-c5734f342f580a1801a33136d68d38ed67f8cf56e3b29e171e5e82320d5d0439.scope.
Jan 26 17:32:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1ea215d434b33ef1484324701843f517d3815b0685db0b87f0394f337ddf8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1ea215d434b33ef1484324701843f517d3815b0685db0b87f0394f337ddf8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1ea215d434b33ef1484324701843f517d3815b0685db0b87f0394f337ddf8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1ea215d434b33ef1484324701843f517d3815b0685db0b87f0394f337ddf8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:32:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f1ea215d434b33ef1484324701843f517d3815b0685db0b87f0394f337ddf8d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:33:00 compute-0 podman[421160]: 2026-01-26 17:33:00.004733459 +0000 UTC m=+0.397916963 container init c5734f342f580a1801a33136d68d38ed67f8cf56e3b29e171e5e82320d5d0439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_margulis, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 17:33:00 compute-0 podman[421160]: 2026-01-26 17:33:00.014804195 +0000 UTC m=+0.407987689 container start c5734f342f580a1801a33136d68d38ed67f8cf56e3b29e171e5e82320d5d0439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 17:33:00 compute-0 podman[421160]: 2026-01-26 17:33:00.019706884 +0000 UTC m=+0.412890398 container attach c5734f342f580a1801a33136d68d38ed67f8cf56e3b29e171e5e82320d5d0439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:33:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:33:00 compute-0 jovial_margulis[421176]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:33:00 compute-0 jovial_margulis[421176]: --> All data devices are unavailable
Jan 26 17:33:00 compute-0 systemd[1]: libpod-c5734f342f580a1801a33136d68d38ed67f8cf56e3b29e171e5e82320d5d0439.scope: Deactivated successfully.
Jan 26 17:33:00 compute-0 podman[421160]: 2026-01-26 17:33:00.571426297 +0000 UTC m=+0.964609781 container died c5734f342f580a1801a33136d68d38ed67f8cf56e3b29e171e5e82320d5d0439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 26 17:33:00 compute-0 ceph-mon[75140]: pgmap v4296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 0 B/s wr, 22 op/s
Jan 26 17:33:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f1ea215d434b33ef1484324701843f517d3815b0685db0b87f0394f337ddf8d-merged.mount: Deactivated successfully.
Jan 26 17:33:00 compute-0 podman[421160]: 2026-01-26 17:33:00.72368295 +0000 UTC m=+1.116866464 container remove c5734f342f580a1801a33136d68d38ed67f8cf56e3b29e171e5e82320d5d0439 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_margulis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Jan 26 17:33:00 compute-0 systemd[1]: libpod-conmon-c5734f342f580a1801a33136d68d38ed67f8cf56e3b29e171e5e82320d5d0439.scope: Deactivated successfully.
Jan 26 17:33:00 compute-0 sudo[421082]: pam_unix(sudo:session): session closed for user root
Jan 26 17:33:00 compute-0 sudo[421210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:33:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:00 compute-0 sudo[421210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:33:00 compute-0 sudo[421210]: pam_unix(sudo:session): session closed for user root
Jan 26 17:33:00 compute-0 sudo[421235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:33:00 compute-0 sudo[421235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:33:00 compute-0 nova_compute[239965]: 2026-01-26 17:33:00.996 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 17:33:01 compute-0 podman[421272]: 2026-01-26 17:33:01.20005179 +0000 UTC m=+0.024318775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:33:01 compute-0 podman[421272]: 2026-01-26 17:33:01.484171528 +0000 UTC m=+0.308438513 container create b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:33:01 compute-0 systemd[1]: Started libpod-conmon-b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3.scope.
Jan 26 17:33:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:33:01 compute-0 podman[421272]: 2026-01-26 17:33:01.654699569 +0000 UTC m=+0.478966574 container init b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kare, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:33:01 compute-0 podman[421272]: 2026-01-26 17:33:01.661449634 +0000 UTC m=+0.485716619 container start b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 17:33:01 compute-0 podman[421272]: 2026-01-26 17:33:01.665949884 +0000 UTC m=+0.490216869 container attach b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kare, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:33:01 compute-0 vigorous_kare[421288]: 167 167
Jan 26 17:33:01 compute-0 systemd[1]: libpod-b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3.scope: Deactivated successfully.
Jan 26 17:33:01 compute-0 conmon[421288]: conmon b45c8fecb0de7bcbd7cf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3.scope/container/memory.events
Jan 26 17:33:01 compute-0 podman[421272]: 2026-01-26 17:33:01.668946757 +0000 UTC m=+0.493213742 container died b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:33:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-b75cf26a5c28a37fa165991326281b6a47aeea40f9e1d7d6f3370373b317f997-merged.mount: Deactivated successfully.
Jan 26 17:33:01 compute-0 podman[421272]: 2026-01-26 17:33:01.978044226 +0000 UTC m=+0.802311211 container remove b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_kare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 17:33:02 compute-0 systemd[1]: libpod-conmon-b45c8fecb0de7bcbd7cf69a51d3a2746b2654619e8e3e9162c1ce14c1bd434b3.scope: Deactivated successfully.
Jan 26 17:33:02 compute-0 podman[421311]: 2026-01-26 17:33:02.125816339 +0000 UTC m=+0.027896002 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:33:02 compute-0 podman[421311]: 2026-01-26 17:33:02.22311285 +0000 UTC m=+0.125192463 container create d975145830b179a4c86f78b522132fd4edb833040e23418677b855f662eeee3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poitras, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:33:02 compute-0 systemd[1]: Started libpod-conmon-d975145830b179a4c86f78b522132fd4edb833040e23418677b855f662eeee3d.scope.
Jan 26 17:33:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e1a71e918e2390f2f6ca4add0fc7a04a5a279b12af8c354a689432ca6595d2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e1a71e918e2390f2f6ca4add0fc7a04a5a279b12af8c354a689432ca6595d2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e1a71e918e2390f2f6ca4add0fc7a04a5a279b12af8c354a689432ca6595d2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:33:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e1a71e918e2390f2f6ca4add0fc7a04a5a279b12af8c354a689432ca6595d2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:33:02 compute-0 podman[421311]: 2026-01-26 17:33:02.445259382 +0000 UTC m=+0.347339035 container init d975145830b179a4c86f78b522132fd4edb833040e23418677b855f662eeee3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poitras, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 17:33:02 compute-0 podman[421311]: 2026-01-26 17:33:02.45947145 +0000 UTC m=+0.361551073 container start d975145830b179a4c86f78b522132fd4edb833040e23418677b855f662eeee3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:33:02 compute-0 podman[421311]: 2026-01-26 17:33:02.498423102 +0000 UTC m=+0.400502735 container attach d975145830b179a4c86f78b522132fd4edb833040e23418677b855f662eeee3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poitras, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:33:02 compute-0 ceph-mon[75140]: pgmap v4297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 0 B/s wr, 39 op/s
Jan 26 17:33:02 compute-0 kind_poitras[421328]: {
Jan 26 17:33:02 compute-0 kind_poitras[421328]:     "0": [
Jan 26 17:33:02 compute-0 kind_poitras[421328]:         {
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "devices": [
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "/dev/loop3"
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             ],
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_name": "ceph_lv0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_size": "21470642176",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "name": "ceph_lv0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "tags": {
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.cluster_name": "ceph",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.crush_device_class": "",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.encrypted": "0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.objectstore": "bluestore",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.osd_id": "0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.type": "block",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.vdo": "0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.with_tpm": "0"
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             },
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "type": "block",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "vg_name": "ceph_vg0"
Jan 26 17:33:02 compute-0 kind_poitras[421328]:         }
Jan 26 17:33:02 compute-0 kind_poitras[421328]:     ],
Jan 26 17:33:02 compute-0 kind_poitras[421328]:     "1": [
Jan 26 17:33:02 compute-0 kind_poitras[421328]:         {
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "devices": [
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "/dev/loop4"
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             ],
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_name": "ceph_lv1",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_size": "21470642176",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "name": "ceph_lv1",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "tags": {
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.cluster_name": "ceph",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.crush_device_class": "",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.encrypted": "0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.objectstore": "bluestore",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.osd_id": "1",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.type": "block",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.vdo": "0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.with_tpm": "0"
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             },
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "type": "block",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "vg_name": "ceph_vg1"
Jan 26 17:33:02 compute-0 kind_poitras[421328]:         }
Jan 26 17:33:02 compute-0 kind_poitras[421328]:     ],
Jan 26 17:33:02 compute-0 kind_poitras[421328]:     "2": [
Jan 26 17:33:02 compute-0 kind_poitras[421328]:         {
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "devices": [
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "/dev/loop5"
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             ],
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_name": "ceph_lv2",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_size": "21470642176",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "name": "ceph_lv2",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "tags": {
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.cluster_name": "ceph",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.crush_device_class": "",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.encrypted": "0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.objectstore": "bluestore",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.osd_id": "2",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.type": "block",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.vdo": "0",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:                 "ceph.with_tpm": "0"
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             },
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "type": "block",
Jan 26 17:33:02 compute-0 kind_poitras[421328]:             "vg_name": "ceph_vg2"
Jan 26 17:33:02 compute-0 kind_poitras[421328]:         }
Jan 26 17:33:02 compute-0 kind_poitras[421328]:     ]
Jan 26 17:33:02 compute-0 kind_poitras[421328]: }
Jan 26 17:33:02 compute-0 systemd[1]: libpod-d975145830b179a4c86f78b522132fd4edb833040e23418677b855f662eeee3d.scope: Deactivated successfully.
Jan 26 17:33:02 compute-0 podman[421311]: 2026-01-26 17:33:02.77397362 +0000 UTC m=+0.676053243 container died d975145830b179a4c86f78b522132fd4edb833040e23418677b855f662eeee3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:33:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-11e1a71e918e2390f2f6ca4add0fc7a04a5a279b12af8c354a689432ca6595d2-merged.mount: Deactivated successfully.
Jan 26 17:33:03 compute-0 podman[421311]: 2026-01-26 17:33:03.083564461 +0000 UTC m=+0.985644114 container remove d975145830b179a4c86f78b522132fd4edb833040e23418677b855f662eeee3d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_poitras, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 17:33:03 compute-0 systemd[1]: libpod-conmon-d975145830b179a4c86f78b522132fd4edb833040e23418677b855f662eeee3d.scope: Deactivated successfully.
Jan 26 17:33:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:33:03 compute-0 sudo[421235]: pam_unix(sudo:session): session closed for user root
Jan 26 17:33:03 compute-0 sudo[421349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:33:03 compute-0 sudo[421349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:33:03 compute-0 sudo[421349]: pam_unix(sudo:session): session closed for user root
Jan 26 17:33:03 compute-0 sudo[421374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:33:03 compute-0 sudo[421374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:33:03 compute-0 nova_compute[239965]: 2026-01-26 17:33:03.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:03 compute-0 podman[421409]: 2026-01-26 17:33:03.532424869 +0000 UTC m=+0.024832039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:33:03 compute-0 podman[421409]: 2026-01-26 17:33:03.68171487 +0000 UTC m=+0.174122020 container create 9a3b0fda921450f5a617338510c14ddd7b9aee146aa684f7eead29b74ece3cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:33:03 compute-0 systemd[1]: Started libpod-conmon-9a3b0fda921450f5a617338510c14ddd7b9aee146aa684f7eead29b74ece3cda.scope.
Jan 26 17:33:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:33:03 compute-0 podman[421409]: 2026-01-26 17:33:03.785315774 +0000 UTC m=+0.277722924 container init 9a3b0fda921450f5a617338510c14ddd7b9aee146aa684f7eead29b74ece3cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:33:03 compute-0 podman[421409]: 2026-01-26 17:33:03.792012256 +0000 UTC m=+0.284419396 container start 9a3b0fda921450f5a617338510c14ddd7b9aee146aa684f7eead29b74ece3cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feistel, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:33:03 compute-0 priceless_feistel[421425]: 167 167
Jan 26 17:33:03 compute-0 systemd[1]: libpod-9a3b0fda921450f5a617338510c14ddd7b9aee146aa684f7eead29b74ece3cda.scope: Deactivated successfully.
Jan 26 17:33:03 compute-0 podman[421409]: 2026-01-26 17:33:03.915177569 +0000 UTC m=+0.407584719 container attach 9a3b0fda921450f5a617338510c14ddd7b9aee146aa684f7eead29b74ece3cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feistel, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:33:03 compute-0 podman[421409]: 2026-01-26 17:33:03.915736833 +0000 UTC m=+0.408144023 container died 9a3b0fda921450f5a617338510c14ddd7b9aee146aa684f7eead29b74ece3cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feistel, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:33:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc6a7b22ef273f1e6126ff0b342e659f1f4f7bdb6e4c01dda9961b4a05541285-merged.mount: Deactivated successfully.
Jan 26 17:33:03 compute-0 podman[421409]: 2026-01-26 17:33:03.967608991 +0000 UTC m=+0.460016151 container remove 9a3b0fda921450f5a617338510c14ddd7b9aee146aa684f7eead29b74ece3cda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 17:33:04 compute-0 systemd[1]: libpod-conmon-9a3b0fda921450f5a617338510c14ddd7b9aee146aa684f7eead29b74ece3cda.scope: Deactivated successfully.
Jan 26 17:33:04 compute-0 podman[421448]: 2026-01-26 17:33:04.105200586 +0000 UTC m=+0.021285541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:33:04 compute-0 podman[421448]: 2026-01-26 17:33:04.279287403 +0000 UTC m=+0.195372348 container create 0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:33:04 compute-0 systemd[1]: Started libpod-conmon-0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d.scope.
Jan 26 17:33:04 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef71abd0fbdd005b3cba4d41e4fb3ec7124b6752a5ca6366e549c63c1e69aef/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef71abd0fbdd005b3cba4d41e4fb3ec7124b6752a5ca6366e549c63c1e69aef/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef71abd0fbdd005b3cba4d41e4fb3ec7124b6752a5ca6366e549c63c1e69aef/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:33:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ef71abd0fbdd005b3cba4d41e4fb3ec7124b6752a5ca6366e549c63c1e69aef/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:33:04 compute-0 podman[421448]: 2026-01-26 17:33:04.722258397 +0000 UTC m=+0.638343342 container init 0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:33:04 compute-0 podman[421448]: 2026-01-26 17:33:04.729837332 +0000 UTC m=+0.645922267 container start 0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 17:33:04 compute-0 ceph-mon[75140]: pgmap v4298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:33:04 compute-0 nova_compute[239965]: 2026-01-26 17:33:04.780 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:04 compute-0 podman[421448]: 2026-01-26 17:33:04.788830515 +0000 UTC m=+0.704915490 container attach 0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:33:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 26 17:33:05 compute-0 lvm[421545]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:33:05 compute-0 lvm[421545]: VG ceph_vg1 finished
Jan 26 17:33:05 compute-0 lvm[421546]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:33:05 compute-0 lvm[421546]: VG ceph_vg2 finished
Jan 26 17:33:05 compute-0 lvm[421543]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:33:05 compute-0 lvm[421543]: VG ceph_vg0 finished
Jan 26 17:33:05 compute-0 goofy_mclean[421465]: {}
Jan 26 17:33:05 compute-0 systemd[1]: libpod-0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d.scope: Deactivated successfully.
Jan 26 17:33:05 compute-0 podman[421448]: 2026-01-26 17:33:05.563147361 +0000 UTC m=+1.479232316 container died 0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:33:05 compute-0 systemd[1]: libpod-0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d.scope: Consumed 1.398s CPU time.
Jan 26 17:33:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:06 compute-0 nova_compute[239965]: 2026-01-26 17:33:05.998 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ef71abd0fbdd005b3cba4d41e4fb3ec7124b6752a5ca6366e549c63c1e69aef-merged.mount: Deactivated successfully.
Jan 26 17:33:06 compute-0 podman[421448]: 2026-01-26 17:33:06.40790637 +0000 UTC m=+2.323991305 container remove 0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_mclean, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:33:06 compute-0 systemd[1]: libpod-conmon-0d8d154654becb8cba9ef706b226bf7c2cf17cac8b279686aa2ce4eafd47561d.scope: Deactivated successfully.
Jan 26 17:33:06 compute-0 sudo[421374]: pam_unix(sudo:session): session closed for user root
Jan 26 17:33:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:33:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:33:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:33:06 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:33:06 compute-0 sudo[421561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:33:06 compute-0 sudo[421561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:33:06 compute-0 sudo[421561]: pam_unix(sudo:session): session closed for user root
Jan 26 17:33:06 compute-0 ceph-mon[75140]: pgmap v4299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 26 17:33:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:33:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:33:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 26 17:33:08 compute-0 ceph-mon[75140]: pgmap v4300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 0 B/s wr, 55 op/s
Jan 26 17:33:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:33:09 compute-0 podman[421586]: 2026-01-26 17:33:09.399404617 +0000 UTC m=+0.075656081 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent)
Jan 26 17:33:09 compute-0 nova_compute[239965]: 2026-01-26 17:33:09.784 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:10 compute-0 ceph-mon[75140]: pgmap v4301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 0 B/s wr, 54 op/s
Jan 26 17:33:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:11 compute-0 nova_compute[239965]: 2026-01-26 17:33:11.000 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 17:33:11 compute-0 nova_compute[239965]: 2026-01-26 17:33:11.504 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:13 compute-0 ceph-mon[75140]: pgmap v4302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 37 op/s
Jan 26 17:33:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 26 17:33:13 compute-0 podman[421606]: 2026-01-26 17:33:13.44850106 +0000 UTC m=+0.126115066 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, container_name=ovn_controller)
Jan 26 17:33:14 compute-0 nova_compute[239965]: 2026-01-26 17:33:14.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:14 compute-0 nova_compute[239965]: 2026-01-26 17:33:14.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:33:14 compute-0 nova_compute[239965]: 2026-01-26 17:33:14.788 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:14 compute-0 ceph-mon[75140]: pgmap v4303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 0 B/s wr, 20 op/s
Jan 26 17:33:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:16 compute-0 nova_compute[239965]: 2026-01-26 17:33:16.003 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:16 compute-0 ceph-mon[75140]: pgmap v4304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:17 compute-0 ceph-mon[75140]: pgmap v4305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:19 compute-0 nova_compute[239965]: 2026-01-26 17:33:19.792 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:20 compute-0 ceph-mon[75140]: pgmap v4306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:20 compute-0 nova_compute[239965]: 2026-01-26 17:33:20.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:20 compute-0 nova_compute[239965]: 2026-01-26 17:33:20.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:33:20 compute-0 nova_compute[239965]: 2026-01-26 17:33:20.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:33:20 compute-0 nova_compute[239965]: 2026-01-26 17:33:20.549 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:33:20 compute-0 nova_compute[239965]: 2026-01-26 17:33:20.549 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:33:20 compute-0 nova_compute[239965]: 2026-01-26 17:33:20.549 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:33:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.004 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:33:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3715120753' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:33:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.144 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.313 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.315 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3539MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.315 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.315 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:33:21 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3715120753' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.401 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.402 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.421 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:33:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:33:21 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3514533881' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.975 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:33:21 compute-0 nova_compute[239965]: 2026-01-26 17:33:21.981 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:33:22 compute-0 nova_compute[239965]: 2026-01-26 17:33:22.006 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:33:22 compute-0 nova_compute[239965]: 2026-01-26 17:33:22.008 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:33:22 compute-0 nova_compute[239965]: 2026-01-26 17:33:22.009 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:33:22 compute-0 ceph-mon[75140]: pgmap v4307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:22 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3514533881' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:33:23 compute-0 nova_compute[239965]: 2026-01-26 17:33:23.008 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:23 compute-0 nova_compute[239965]: 2026-01-26 17:33:23.009 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:33:23 compute-0 nova_compute[239965]: 2026-01-26 17:33:23.009 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:33:23 compute-0 nova_compute[239965]: 2026-01-26 17:33:23.088 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:33:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:24 compute-0 ceph-mon[75140]: pgmap v4308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:24 compute-0 nova_compute[239965]: 2026-01-26 17:33:24.796 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:26 compute-0 nova_compute[239965]: 2026-01-26 17:33:26.006 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:26 compute-0 nova_compute[239965]: 2026-01-26 17:33:26.583 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:26 compute-0 ceph-mon[75140]: pgmap v4309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:28 compute-0 ceph-mon[75140]: pgmap v4310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:33:28
Jan 26 17:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'volumes', 'default.rgw.control', 'default.rgw.log', '.mgr', 'backups', 'default.rgw.meta']
Jan 26 17:33:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:33:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:29 compute-0 nova_compute[239965]: 2026-01-26 17:33:29.799 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:30 compute-0 ceph-mon[75140]: pgmap v4311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:33:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:33:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:31 compute-0 nova_compute[239965]: 2026-01-26 17:33:31.008 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:33:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:33:32 compute-0 ceph-mon[75140]: pgmap v4312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:34 compute-0 ceph-mon[75140]: pgmap v4313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:34 compute-0 nova_compute[239965]: 2026-01-26 17:33:34.803 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:36 compute-0 nova_compute[239965]: 2026-01-26 17:33:36.010 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:36 compute-0 ceph-mon[75140]: pgmap v4314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:38 compute-0 ceph-mon[75140]: pgmap v4315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:39 compute-0 nova_compute[239965]: 2026-01-26 17:33:39.807 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:40 compute-0 podman[421676]: 2026-01-26 17:33:40.348846415 +0000 UTC m=+0.042040969 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 17:33:40 compute-0 ceph-mon[75140]: pgmap v4316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:41 compute-0 nova_compute[239965]: 2026-01-26 17:33:41.011 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:42 compute-0 ceph-mon[75140]: pgmap v4317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:44 compute-0 podman[421697]: 2026-01-26 17:33:44.426543485 +0000 UTC m=+0.116373547 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 26 17:33:44 compute-0 ceph-mon[75140]: pgmap v4318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:44 compute-0 nova_compute[239965]: 2026-01-26 17:33:44.809 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:46 compute-0 nova_compute[239965]: 2026-01-26 17:33:46.013 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:46 compute-0 ceph-mon[75140]: pgmap v4319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:48 compute-0 ceph-mon[75140]: pgmap v4320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:49 compute-0 nova_compute[239965]: 2026-01-26 17:33:49.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:33:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:33:50 compute-0 nova_compute[239965]: 2026-01-26 17:33:50.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:50 compute-0 ceph-mon[75140]: pgmap v4321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:51 compute-0 nova_compute[239965]: 2026-01-26 17:33:51.015 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:52 compute-0 ceph-mon[75140]: pgmap v4322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:54 compute-0 ceph-mon[75140]: pgmap v4323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:54 compute-0 nova_compute[239965]: 2026-01-26 17:33:54.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:33:56 compute-0 nova_compute[239965]: 2026-01-26 17:33:56.018 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:33:56 compute-0 ceph-mon[75140]: pgmap v4324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:57 compute-0 nova_compute[239965]: 2026-01-26 17:33:57.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:57 compute-0 nova_compute[239965]: 2026-01-26 17:33:57.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:57 compute-0 nova_compute[239965]: 2026-01-26 17:33:57.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:33:58 compute-0 ceph-mon[75140]: pgmap v4325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:33:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:33:59.321 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:33:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:33:59.322 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:33:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:33:59.322 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:33:59 compute-0 nova_compute[239965]: 2026-01-26 17:33:59.822 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:34:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:34:00 compute-0 ceph-mon[75140]: pgmap v4326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:01 compute-0 nova_compute[239965]: 2026-01-26 17:34:01.019 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:02 compute-0 ceph-mon[75140]: pgmap v4327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:04 compute-0 ceph-mon[75140]: pgmap v4328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:04 compute-0 nova_compute[239965]: 2026-01-26 17:34:04.824 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:05 compute-0 nova_compute[239965]: 2026-01-26 17:34:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:06 compute-0 nova_compute[239965]: 2026-01-26 17:34:06.021 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:06 compute-0 sudo[421723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:34:06 compute-0 sudo[421723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:34:06 compute-0 sudo[421723]: pam_unix(sudo:session): session closed for user root
Jan 26 17:34:06 compute-0 sudo[421748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:34:06 compute-0 sudo[421748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:34:06 compute-0 ceph-mon[75140]: pgmap v4329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:07 compute-0 sudo[421748]: pam_unix(sudo:session): session closed for user root
Jan 26 17:34:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:34:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:34:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:34:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:34:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:34:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:34:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:34:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:34:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:34:07 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:34:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:34:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:34:07 compute-0 sudo[421804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:34:07 compute-0 sudo[421804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:34:07 compute-0 sudo[421804]: pam_unix(sudo:session): session closed for user root
Jan 26 17:34:07 compute-0 sudo[421829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:34:07 compute-0 sudo[421829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:34:07 compute-0 podman[421866]: 2026-01-26 17:34:07.703367694 +0000 UTC m=+0.039593879 container create 5ff01cd719fed318b7afc0c698d8765c683484c4db356d39e3df1e884d45c8ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_thompson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 26 17:34:07 compute-0 systemd[1]: Started libpod-conmon-5ff01cd719fed318b7afc0c698d8765c683484c4db356d39e3df1e884d45c8ce.scope.
Jan 26 17:34:07 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:34:07 compute-0 podman[421866]: 2026-01-26 17:34:07.775679632 +0000 UTC m=+0.111905847 container init 5ff01cd719fed318b7afc0c698d8765c683484c4db356d39e3df1e884d45c8ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 17:34:07 compute-0 podman[421866]: 2026-01-26 17:34:07.684614745 +0000 UTC m=+0.020840950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:34:07 compute-0 podman[421866]: 2026-01-26 17:34:07.782297924 +0000 UTC m=+0.118524109 container start 5ff01cd719fed318b7afc0c698d8765c683484c4db356d39e3df1e884d45c8ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_thompson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:34:07 compute-0 podman[421866]: 2026-01-26 17:34:07.785560484 +0000 UTC m=+0.121786679 container attach 5ff01cd719fed318b7afc0c698d8765c683484c4db356d39e3df1e884d45c8ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_thompson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 17:34:07 compute-0 vigorous_thompson[421882]: 167 167
Jan 26 17:34:07 compute-0 systemd[1]: libpod-5ff01cd719fed318b7afc0c698d8765c683484c4db356d39e3df1e884d45c8ce.scope: Deactivated successfully.
Jan 26 17:34:07 compute-0 podman[421866]: 2026-01-26 17:34:07.790191447 +0000 UTC m=+0.126417642 container died 5ff01cd719fed318b7afc0c698d8765c683484c4db356d39e3df1e884d45c8ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 17:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3b90f04193ad52ade37bcfed98a93b1287800a886592323dd6d596c01cb10ce-merged.mount: Deactivated successfully.
Jan 26 17:34:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:34:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:34:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:34:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:34:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:34:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:34:07 compute-0 podman[421866]: 2026-01-26 17:34:07.834635554 +0000 UTC m=+0.170861749 container remove 5ff01cd719fed318b7afc0c698d8765c683484c4db356d39e3df1e884d45c8ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_thompson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:34:07 compute-0 systemd[1]: libpod-conmon-5ff01cd719fed318b7afc0c698d8765c683484c4db356d39e3df1e884d45c8ce.scope: Deactivated successfully.
Jan 26 17:34:08 compute-0 podman[421906]: 2026-01-26 17:34:08.025263016 +0000 UTC m=+0.070995888 container create b141e39df12f6e906c8effa37ae48b7935ac0df3c8756b14a16f861a0ae9f5d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 17:34:08 compute-0 systemd[1]: Started libpod-conmon-b141e39df12f6e906c8effa37ae48b7935ac0df3c8756b14a16f861a0ae9f5d1.scope.
Jan 26 17:34:08 compute-0 podman[421906]: 2026-01-26 17:34:07.997025155 +0000 UTC m=+0.042758077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:34:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca029df6d5d7c51a1cb954747bca90371c35d5a8f79b3522121b49fc9971003/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca029df6d5d7c51a1cb954747bca90371c35d5a8f79b3522121b49fc9971003/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca029df6d5d7c51a1cb954747bca90371c35d5a8f79b3522121b49fc9971003/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca029df6d5d7c51a1cb954747bca90371c35d5a8f79b3522121b49fc9971003/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cca029df6d5d7c51a1cb954747bca90371c35d5a8f79b3522121b49fc9971003/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:08 compute-0 podman[421906]: 2026-01-26 17:34:08.124350538 +0000 UTC m=+0.170083400 container init b141e39df12f6e906c8effa37ae48b7935ac0df3c8756b14a16f861a0ae9f5d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 17:34:08 compute-0 podman[421906]: 2026-01-26 17:34:08.13791478 +0000 UTC m=+0.183647612 container start b141e39df12f6e906c8effa37ae48b7935ac0df3c8756b14a16f861a0ae9f5d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_brahmagupta, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:34:08 compute-0 podman[421906]: 2026-01-26 17:34:08.141646682 +0000 UTC m=+0.187379534 container attach b141e39df12f6e906c8effa37ae48b7935ac0df3c8756b14a16f861a0ae9f5d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 26 17:34:08 compute-0 sleepy_brahmagupta[421922]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:34:08 compute-0 sleepy_brahmagupta[421922]: --> All data devices are unavailable
Jan 26 17:34:08 compute-0 systemd[1]: libpod-b141e39df12f6e906c8effa37ae48b7935ac0df3c8756b14a16f861a0ae9f5d1.scope: Deactivated successfully.
Jan 26 17:34:08 compute-0 podman[421906]: 2026-01-26 17:34:08.724229319 +0000 UTC m=+0.769962211 container died b141e39df12f6e906c8effa37ae48b7935ac0df3c8756b14a16f861a0ae9f5d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_brahmagupta, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 17:34:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-cca029df6d5d7c51a1cb954747bca90371c35d5a8f79b3522121b49fc9971003-merged.mount: Deactivated successfully.
Jan 26 17:34:08 compute-0 podman[421906]: 2026-01-26 17:34:08.786305307 +0000 UTC m=+0.832038139 container remove b141e39df12f6e906c8effa37ae48b7935ac0df3c8756b14a16f861a0ae9f5d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:34:08 compute-0 systemd[1]: libpod-conmon-b141e39df12f6e906c8effa37ae48b7935ac0df3c8756b14a16f861a0ae9f5d1.scope: Deactivated successfully.
Jan 26 17:34:08 compute-0 sudo[421829]: pam_unix(sudo:session): session closed for user root
Jan 26 17:34:08 compute-0 sudo[421956]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:34:08 compute-0 sudo[421956]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:34:08 compute-0 sudo[421956]: pam_unix(sudo:session): session closed for user root
Jan 26 17:34:08 compute-0 ceph-mon[75140]: pgmap v4330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:08 compute-0 sudo[421981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:34:08 compute-0 sudo[421981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:34:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:09 compute-0 podman[422017]: 2026-01-26 17:34:09.227956068 +0000 UTC m=+0.036864973 container create e59f7bff2a02573b28e067ea7d7216841fbb3b35785a227e475812862acb75ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 17:34:09 compute-0 systemd[1]: Started libpod-conmon-e59f7bff2a02573b28e067ea7d7216841fbb3b35785a227e475812862acb75ee.scope.
Jan 26 17:34:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:34:09 compute-0 podman[422017]: 2026-01-26 17:34:09.300282986 +0000 UTC m=+0.109191921 container init e59f7bff2a02573b28e067ea7d7216841fbb3b35785a227e475812862acb75ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_taussig, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:34:09 compute-0 podman[422017]: 2026-01-26 17:34:09.307940144 +0000 UTC m=+0.116849059 container start e59f7bff2a02573b28e067ea7d7216841fbb3b35785a227e475812862acb75ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 17:34:09 compute-0 podman[422017]: 2026-01-26 17:34:09.213380311 +0000 UTC m=+0.022289246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:34:09 compute-0 awesome_taussig[422033]: 167 167
Jan 26 17:34:09 compute-0 podman[422017]: 2026-01-26 17:34:09.311898941 +0000 UTC m=+0.120807876 container attach e59f7bff2a02573b28e067ea7d7216841fbb3b35785a227e475812862acb75ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_taussig, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:34:09 compute-0 systemd[1]: libpod-e59f7bff2a02573b28e067ea7d7216841fbb3b35785a227e475812862acb75ee.scope: Deactivated successfully.
Jan 26 17:34:09 compute-0 podman[422017]: 2026-01-26 17:34:09.315237833 +0000 UTC m=+0.124146748 container died e59f7bff2a02573b28e067ea7d7216841fbb3b35785a227e475812862acb75ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 17:34:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1114fc5efb854ac53e1c88461eed4a629356b1ab63c916af8633a5d0392fa4c1-merged.mount: Deactivated successfully.
Jan 26 17:34:09 compute-0 podman[422017]: 2026-01-26 17:34:09.355345594 +0000 UTC m=+0.164254529 container remove e59f7bff2a02573b28e067ea7d7216841fbb3b35785a227e475812862acb75ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_taussig, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:34:09 compute-0 systemd[1]: libpod-conmon-e59f7bff2a02573b28e067ea7d7216841fbb3b35785a227e475812862acb75ee.scope: Deactivated successfully.
Jan 26 17:34:09 compute-0 podman[422056]: 2026-01-26 17:34:09.525404232 +0000 UTC m=+0.044206131 container create cb50089c2c45c6c1ddc99b59166ab30ee16d1cbbfe30ca9e08a1c48328e40b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 26 17:34:09 compute-0 systemd[1]: Started libpod-conmon-cb50089c2c45c6c1ddc99b59166ab30ee16d1cbbfe30ca9e08a1c48328e40b94.scope.
Jan 26 17:34:09 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:34:09 compute-0 podman[422056]: 2026-01-26 17:34:09.504399438 +0000 UTC m=+0.023201387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0299da5528ceee4bda33f91a43bccd92a796322e1722d30ad9f761e78c5be6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0299da5528ceee4bda33f91a43bccd92a796322e1722d30ad9f761e78c5be6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0299da5528ceee4bda33f91a43bccd92a796322e1722d30ad9f761e78c5be6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0299da5528ceee4bda33f91a43bccd92a796322e1722d30ad9f761e78c5be6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:09 compute-0 podman[422056]: 2026-01-26 17:34:09.620563489 +0000 UTC m=+0.139365408 container init cb50089c2c45c6c1ddc99b59166ab30ee16d1cbbfe30ca9e08a1c48328e40b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:34:09 compute-0 podman[422056]: 2026-01-26 17:34:09.634860989 +0000 UTC m=+0.153662928 container start cb50089c2c45c6c1ddc99b59166ab30ee16d1cbbfe30ca9e08a1c48328e40b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:34:09 compute-0 podman[422056]: 2026-01-26 17:34:09.640448725 +0000 UTC m=+0.159250664 container attach cb50089c2c45c6c1ddc99b59166ab30ee16d1cbbfe30ca9e08a1c48328e40b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:34:09 compute-0 nova_compute[239965]: 2026-01-26 17:34:09.828 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:09 compute-0 practical_lamarr[422073]: {
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:     "0": [
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:         {
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "devices": [
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "/dev/loop3"
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             ],
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_name": "ceph_lv0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_size": "21470642176",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "name": "ceph_lv0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "tags": {
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.cluster_name": "ceph",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.crush_device_class": "",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.encrypted": "0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.objectstore": "bluestore",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.osd_id": "0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.type": "block",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.vdo": "0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.with_tpm": "0"
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             },
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "type": "block",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "vg_name": "ceph_vg0"
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:         }
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:     ],
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:     "1": [
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:         {
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "devices": [
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "/dev/loop4"
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             ],
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_name": "ceph_lv1",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_size": "21470642176",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "name": "ceph_lv1",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "tags": {
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.cluster_name": "ceph",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.crush_device_class": "",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.encrypted": "0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.objectstore": "bluestore",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.osd_id": "1",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.type": "block",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.vdo": "0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.with_tpm": "0"
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             },
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "type": "block",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "vg_name": "ceph_vg1"
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:         }
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:     ],
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:     "2": [
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:         {
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "devices": [
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "/dev/loop5"
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             ],
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_name": "ceph_lv2",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_size": "21470642176",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "name": "ceph_lv2",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "tags": {
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.cluster_name": "ceph",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.crush_device_class": "",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.encrypted": "0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.objectstore": "bluestore",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.osd_id": "2",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.type": "block",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.vdo": "0",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:                 "ceph.with_tpm": "0"
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             },
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "type": "block",
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:             "vg_name": "ceph_vg2"
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:         }
Jan 26 17:34:09 compute-0 practical_lamarr[422073]:     ]
Jan 26 17:34:09 compute-0 practical_lamarr[422073]: }
Jan 26 17:34:09 compute-0 ceph-mon[75140]: pgmap v4331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:09 compute-0 systemd[1]: libpod-cb50089c2c45c6c1ddc99b59166ab30ee16d1cbbfe30ca9e08a1c48328e40b94.scope: Deactivated successfully.
Jan 26 17:34:09 compute-0 podman[422056]: 2026-01-26 17:34:09.968229461 +0000 UTC m=+0.487031410 container died cb50089c2c45c6c1ddc99b59166ab30ee16d1cbbfe30ca9e08a1c48328e40b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0299da5528ceee4bda33f91a43bccd92a796322e1722d30ad9f761e78c5be6a-merged.mount: Deactivated successfully.
Jan 26 17:34:10 compute-0 podman[422056]: 2026-01-26 17:34:10.022735294 +0000 UTC m=+0.541537203 container remove cb50089c2c45c6c1ddc99b59166ab30ee16d1cbbfe30ca9e08a1c48328e40b94 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:34:10 compute-0 systemd[1]: libpod-conmon-cb50089c2c45c6c1ddc99b59166ab30ee16d1cbbfe30ca9e08a1c48328e40b94.scope: Deactivated successfully.
Jan 26 17:34:10 compute-0 sudo[421981]: pam_unix(sudo:session): session closed for user root
Jan 26 17:34:10 compute-0 sudo[422095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:34:10 compute-0 sudo[422095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:34:10 compute-0 sudo[422095]: pam_unix(sudo:session): session closed for user root
Jan 26 17:34:10 compute-0 sudo[422120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:34:10 compute-0 sudo[422120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:34:10 compute-0 podman[422158]: 2026-01-26 17:34:10.463078603 +0000 UTC m=+0.044035038 container create b3afba80e5303bd4aa10871fc54d811e593cceef68e1ed44b1a556732f3bdd0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_diffie, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 26 17:34:10 compute-0 systemd[1]: Started libpod-conmon-b3afba80e5303bd4aa10871fc54d811e593cceef68e1ed44b1a556732f3bdd0d.scope.
Jan 26 17:34:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:34:10 compute-0 podman[422158]: 2026-01-26 17:34:10.513939557 +0000 UTC m=+0.094896052 container init b3afba80e5303bd4aa10871fc54d811e593cceef68e1ed44b1a556732f3bdd0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 26 17:34:10 compute-0 podman[422158]: 2026-01-26 17:34:10.521508623 +0000 UTC m=+0.102465058 container start b3afba80e5303bd4aa10871fc54d811e593cceef68e1ed44b1a556732f3bdd0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_diffie, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:34:10 compute-0 podman[422158]: 2026-01-26 17:34:10.526020433 +0000 UTC m=+0.106976898 container attach b3afba80e5303bd4aa10871fc54d811e593cceef68e1ed44b1a556732f3bdd0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 26 17:34:10 compute-0 musing_diffie[422175]: 167 167
Jan 26 17:34:10 compute-0 systemd[1]: libpod-b3afba80e5303bd4aa10871fc54d811e593cceef68e1ed44b1a556732f3bdd0d.scope: Deactivated successfully.
Jan 26 17:34:10 compute-0 podman[422158]: 2026-01-26 17:34:10.529240441 +0000 UTC m=+0.110196886 container died b3afba80e5303bd4aa10871fc54d811e593cceef68e1ed44b1a556732f3bdd0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_diffie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:34:10 compute-0 podman[422158]: 2026-01-26 17:34:10.443573346 +0000 UTC m=+0.024529811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:34:10 compute-0 podman[422172]: 2026-01-26 17:34:10.572877508 +0000 UTC m=+0.076799349 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:34:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-28a406ec5bae05af9750920bfa2fd61714a6459d020999a75089ede3374e4540-merged.mount: Deactivated successfully.
Jan 26 17:34:10 compute-0 podman[422158]: 2026-01-26 17:34:10.589344141 +0000 UTC m=+0.170300576 container remove b3afba80e5303bd4aa10871fc54d811e593cceef68e1ed44b1a556732f3bdd0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_diffie, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:34:10 compute-0 systemd[1]: libpod-conmon-b3afba80e5303bd4aa10871fc54d811e593cceef68e1ed44b1a556732f3bdd0d.scope: Deactivated successfully.
Jan 26 17:34:10 compute-0 podman[422217]: 2026-01-26 17:34:10.763933021 +0000 UTC m=+0.045384311 container create 9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:34:10 compute-0 systemd[1]: Started libpod-conmon-9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc.scope.
Jan 26 17:34:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c9ef1fe41dd0ddba93a05266aa57c183807ec157e67497bd97eb8f643a6e2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:10 compute-0 podman[422217]: 2026-01-26 17:34:10.74508849 +0000 UTC m=+0.026539810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c9ef1fe41dd0ddba93a05266aa57c183807ec157e67497bd97eb8f643a6e2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c9ef1fe41dd0ddba93a05266aa57c183807ec157e67497bd97eb8f643a6e2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/43c9ef1fe41dd0ddba93a05266aa57c183807ec157e67497bd97eb8f643a6e2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:34:10 compute-0 podman[422217]: 2026-01-26 17:34:10.852980128 +0000 UTC m=+0.134431448 container init 9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Jan 26 17:34:10 compute-0 podman[422217]: 2026-01-26 17:34:10.861707362 +0000 UTC m=+0.143158662 container start 9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 26 17:34:10 compute-0 podman[422217]: 2026-01-26 17:34:10.864809318 +0000 UTC m=+0.146260618 container attach 9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:34:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:11 compute-0 nova_compute[239965]: 2026-01-26 17:34:11.022 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:11 compute-0 nova_compute[239965]: 2026-01-26 17:34:11.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:11 compute-0 lvm[422312]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:34:11 compute-0 lvm[422313]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:34:11 compute-0 lvm[422313]: VG ceph_vg1 finished
Jan 26 17:34:11 compute-0 lvm[422312]: VG ceph_vg0 finished
Jan 26 17:34:11 compute-0 lvm[422315]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:34:11 compute-0 lvm[422315]: VG ceph_vg2 finished
Jan 26 17:34:11 compute-0 lvm[422317]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:34:11 compute-0 lvm[422317]: VG ceph_vg2 finished
Jan 26 17:34:11 compute-0 hardcore_stonebraker[422234]: {}
Jan 26 17:34:11 compute-0 lvm[422319]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:34:11 compute-0 lvm[422319]: VG ceph_vg2 finished
Jan 26 17:34:11 compute-0 systemd[1]: libpod-9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc.scope: Deactivated successfully.
Jan 26 17:34:11 compute-0 systemd[1]: libpod-9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc.scope: Consumed 1.637s CPU time.
Jan 26 17:34:11 compute-0 podman[422217]: 2026-01-26 17:34:11.84422672 +0000 UTC m=+1.125678060 container died 9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 26 17:34:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-43c9ef1fe41dd0ddba93a05266aa57c183807ec157e67497bd97eb8f643a6e2f-merged.mount: Deactivated successfully.
Jan 26 17:34:11 compute-0 podman[422217]: 2026-01-26 17:34:11.896026126 +0000 UTC m=+1.177477436 container remove 9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 26 17:34:11 compute-0 systemd[1]: libpod-conmon-9cbea6a68356abb64ff3d1450f235559c00bd7f18a2a950c443f38756377d1fc.scope: Deactivated successfully.
Jan 26 17:34:11 compute-0 sudo[422120]: pam_unix(sudo:session): session closed for user root
Jan 26 17:34:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:34:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:34:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:34:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:34:12 compute-0 sudo[422331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:34:12 compute-0 sudo[422331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:34:12 compute-0 sudo[422331]: pam_unix(sudo:session): session closed for user root
Jan 26 17:34:12 compute-0 ceph-mon[75140]: pgmap v4332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:34:12 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:34:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:14 compute-0 ceph-mon[75140]: pgmap v4333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:14 compute-0 nova_compute[239965]: 2026-01-26 17:34:14.832 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:15 compute-0 podman[422356]: 2026-01-26 17:34:15.492740866 +0000 UTC m=+0.174050207 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:34:15 compute-0 nova_compute[239965]: 2026-01-26 17:34:15.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:15 compute-0 nova_compute[239965]: 2026-01-26 17:34:15.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:34:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:16 compute-0 nova_compute[239965]: 2026-01-26 17:34:16.024 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:16 compute-0 ceph-mon[75140]: pgmap v4334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:18 compute-0 ceph-mon[75140]: pgmap v4335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:19 compute-0 nova_compute[239965]: 2026-01-26 17:34:19.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:20 compute-0 ceph-mon[75140]: pgmap v4336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:21 compute-0 nova_compute[239965]: 2026-01-26 17:34:21.027 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:21 compute-0 nova_compute[239965]: 2026-01-26 17:34:21.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:21 compute-0 nova_compute[239965]: 2026-01-26 17:34:21.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:34:21 compute-0 nova_compute[239965]: 2026-01-26 17:34:21.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:34:21 compute-0 nova_compute[239965]: 2026-01-26 17:34:21.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:34:22 compute-0 nova_compute[239965]: 2026-01-26 17:34:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:22 compute-0 ceph-mon[75140]: pgmap v4337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:22 compute-0 nova_compute[239965]: 2026-01-26 17:34:22.545 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:34:22 compute-0 nova_compute[239965]: 2026-01-26 17:34:22.546 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:34:22 compute-0 nova_compute[239965]: 2026-01-26 17:34:22.546 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:34:22 compute-0 nova_compute[239965]: 2026-01-26 17:34:22.546 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:34:22 compute-0 nova_compute[239965]: 2026-01-26 17:34:22.547 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:34:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:34:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2664047495' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:34:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.156 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.302 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.303 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3507MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.304 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.304 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.368 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.368 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.387 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:34:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2664047495' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:34:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:34:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2515197957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.938 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.945 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.965 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.967 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:34:23 compute-0 nova_compute[239965]: 2026-01-26 17:34:23.967 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:34:24 compute-0 ceph-mon[75140]: pgmap v4338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2515197957' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:34:24 compute-0 nova_compute[239965]: 2026-01-26 17:34:24.841 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:26 compute-0 nova_compute[239965]: 2026-01-26 17:34:26.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:26 compute-0 ceph-mon[75140]: pgmap v4339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:28 compute-0 ceph-mon[75140]: pgmap v4340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:34:28
Jan 26 17:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'default.rgw.log']
Jan 26 17:34:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:34:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:29 compute-0 nova_compute[239965]: 2026-01-26 17:34:29.844 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:34:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:34:30 compute-0 ceph-mon[75140]: pgmap v4341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:31 compute-0 nova_compute[239965]: 2026-01-26 17:34:31.033 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:34:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:34:32 compute-0 ceph-mon[75140]: pgmap v4342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:34 compute-0 nova_compute[239965]: 2026-01-26 17:34:34.848 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:34 compute-0 ceph-mon[75140]: pgmap v4343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:35 compute-0 ceph-mon[75140]: pgmap v4344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:36 compute-0 nova_compute[239965]: 2026-01-26 17:34:36.034 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:38 compute-0 ceph-mon[75140]: pgmap v4345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:39 compute-0 nova_compute[239965]: 2026-01-26 17:34:39.853 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:40 compute-0 ceph-mon[75140]: pgmap v4346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:41 compute-0 nova_compute[239965]: 2026-01-26 17:34:41.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:41 compute-0 podman[422428]: 2026-01-26 17:34:41.370875184 +0000 UTC m=+0.057177359 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 26 17:34:42 compute-0 ceph-mon[75140]: pgmap v4347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:44 compute-0 ceph-mon[75140]: pgmap v4348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:44 compute-0 nova_compute[239965]: 2026-01-26 17:34:44.857 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:46 compute-0 nova_compute[239965]: 2026-01-26 17:34:46.073 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:46 compute-0 podman[422448]: 2026-01-26 17:34:46.460162784 +0000 UTC m=+0.133349132 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_controller)
Jan 26 17:34:46 compute-0 ceph-mon[75140]: pgmap v4349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:48 compute-0 ceph-mon[75140]: pgmap v4350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:34:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054057243' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:34:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:34:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1054057243' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:34:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:49 compute-0 nova_compute[239965]: 2026-01-26 17:34:49.861 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1054057243' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:34:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1054057243' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:34:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:34:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:51 compute-0 ceph-mon[75140]: pgmap v4351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:51 compute-0 nova_compute[239965]: 2026-01-26 17:34:51.075 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:51 compute-0 nova_compute[239965]: 2026-01-26 17:34:51.967 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:53 compute-0 ceph-mon[75140]: pgmap v4352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:54 compute-0 nova_compute[239965]: 2026-01-26 17:34:54.866 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:55 compute-0 ceph-mon[75140]: pgmap v4353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:34:56 compute-0 nova_compute[239965]: 2026-01-26 17:34:56.077 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:34:57 compute-0 ceph-mon[75140]: pgmap v4354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:57 compute-0 nova_compute[239965]: 2026-01-26 17:34:57.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:57 compute-0 nova_compute[239965]: 2026-01-26 17:34:57.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:58 compute-0 ceph-mon[75140]: pgmap v4355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:58 compute-0 nova_compute[239965]: 2026-01-26 17:34:58.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:34:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:34:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:34:59.322 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:34:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:34:59.323 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:34:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:34:59.323 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:34:59 compute-0 nova_compute[239965]: 2026-01-26 17:34:59.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:00 compute-0 ceph-mon[75140]: pgmap v4356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:35:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:35:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:01 compute-0 nova_compute[239965]: 2026-01-26 17:35:01.080 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:02 compute-0 ceph-mon[75140]: pgmap v4357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:04 compute-0 ceph-mon[75140]: pgmap v4358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:04 compute-0 nova_compute[239965]: 2026-01-26 17:35:04.874 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:05 compute-0 nova_compute[239965]: 2026-01-26 17:35:05.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:06 compute-0 nova_compute[239965]: 2026-01-26 17:35:06.082 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:06 compute-0 ceph-mon[75140]: pgmap v4359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:08 compute-0 ceph-mon[75140]: pgmap v4360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:09 compute-0 nova_compute[239965]: 2026-01-26 17:35:09.879 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:10 compute-0 ceph-mon[75140]: pgmap v4361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:11 compute-0 nova_compute[239965]: 2026-01-26 17:35:11.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:11 compute-0 nova_compute[239965]: 2026-01-26 17:35:11.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:12 compute-0 ceph-mon[75140]: pgmap v4362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:12 compute-0 sudo[422474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:35:12 compute-0 sudo[422474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:35:12 compute-0 sudo[422474]: pam_unix(sudo:session): session closed for user root
Jan 26 17:35:12 compute-0 sudo[422506]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:35:12 compute-0 sudo[422506]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:35:12 compute-0 podman[422498]: 2026-01-26 17:35:12.399259141 +0000 UTC m=+0.064292203 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:35:13 compute-0 sudo[422506]: pam_unix(sudo:session): session closed for user root
Jan 26 17:35:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:35:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:35:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:35:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:35:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:35:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:35:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:35:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:35:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:35:13 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:35:13 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:35:13 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:35:13 compute-0 sudo[422574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:35:13 compute-0 sudo[422574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:35:13 compute-0 sudo[422574]: pam_unix(sudo:session): session closed for user root
Jan 26 17:35:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:13 compute-0 sudo[422599]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:35:13 compute-0 sudo[422599]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:35:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:35:13 compute-0 podman[422637]: 2026-01-26 17:35:13.4861083 +0000 UTC m=+0.054271878 container create eaa77b712a906b0dfc486f3c6baf3280dceb9332c4973364c1161ea142fe6d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:35:13 compute-0 systemd[1]: Started libpod-conmon-eaa77b712a906b0dfc486f3c6baf3280dceb9332c4973364c1161ea142fe6d00.scope.
Jan 26 17:35:13 compute-0 podman[422637]: 2026-01-26 17:35:13.463316312 +0000 UTC m=+0.031479940 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:35:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:35:13 compute-0 podman[422637]: 2026-01-26 17:35:13.578592002 +0000 UTC m=+0.146755640 container init eaa77b712a906b0dfc486f3c6baf3280dceb9332c4973364c1161ea142fe6d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:35:13 compute-0 podman[422637]: 2026-01-26 17:35:13.588939484 +0000 UTC m=+0.157103062 container start eaa77b712a906b0dfc486f3c6baf3280dceb9332c4973364c1161ea142fe6d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 26 17:35:13 compute-0 podman[422637]: 2026-01-26 17:35:13.592275247 +0000 UTC m=+0.160439045 container attach eaa77b712a906b0dfc486f3c6baf3280dceb9332c4973364c1161ea142fe6d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:35:13 compute-0 gifted_chatterjee[422653]: 167 167
Jan 26 17:35:13 compute-0 systemd[1]: libpod-eaa77b712a906b0dfc486f3c6baf3280dceb9332c4973364c1161ea142fe6d00.scope: Deactivated successfully.
Jan 26 17:35:13 compute-0 podman[422637]: 2026-01-26 17:35:13.600538848 +0000 UTC m=+0.168702436 container died eaa77b712a906b0dfc486f3c6baf3280dceb9332c4973364c1161ea142fe6d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:35:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-996202ee38393641416a74e3805fd15d63153553ef490fc49c717c761ba39437-merged.mount: Deactivated successfully.
Jan 26 17:35:13 compute-0 podman[422637]: 2026-01-26 17:35:13.648655085 +0000 UTC m=+0.216818663 container remove eaa77b712a906b0dfc486f3c6baf3280dceb9332c4973364c1161ea142fe6d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 17:35:13 compute-0 systemd[1]: libpod-conmon-eaa77b712a906b0dfc486f3c6baf3280dceb9332c4973364c1161ea142fe6d00.scope: Deactivated successfully.
Jan 26 17:35:13 compute-0 podman[422677]: 2026-01-26 17:35:13.854579561 +0000 UTC m=+0.066658982 container create db8b494a548dd49b9a72e55543681d4cade4d5d96b0d04ed100ad2df6757b51a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dewdney, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:35:13 compute-0 systemd[1]: Started libpod-conmon-db8b494a548dd49b9a72e55543681d4cade4d5d96b0d04ed100ad2df6757b51a.scope.
Jan 26 17:35:13 compute-0 podman[422677]: 2026-01-26 17:35:13.821157573 +0000 UTC m=+0.033237034 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:35:13 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0a872c00bc41832be730f547fc6253f608441e600e5120aa277cf9af917fbaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0a872c00bc41832be730f547fc6253f608441e600e5120aa277cf9af917fbaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0a872c00bc41832be730f547fc6253f608441e600e5120aa277cf9af917fbaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0a872c00bc41832be730f547fc6253f608441e600e5120aa277cf9af917fbaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0a872c00bc41832be730f547fc6253f608441e600e5120aa277cf9af917fbaa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:13 compute-0 podman[422677]: 2026-01-26 17:35:13.95312121 +0000 UTC m=+0.165200601 container init db8b494a548dd49b9a72e55543681d4cade4d5d96b0d04ed100ad2df6757b51a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 17:35:13 compute-0 podman[422677]: 2026-01-26 17:35:13.971453299 +0000 UTC m=+0.183532670 container start db8b494a548dd49b9a72e55543681d4cade4d5d96b0d04ed100ad2df6757b51a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dewdney, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:35:13 compute-0 podman[422677]: 2026-01-26 17:35:13.975829837 +0000 UTC m=+0.187909228 container attach db8b494a548dd49b9a72e55543681d4cade4d5d96b0d04ed100ad2df6757b51a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dewdney, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 26 17:35:14 compute-0 ceph-mon[75140]: pgmap v4363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:14 compute-0 serene_dewdney[422694]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:35:14 compute-0 serene_dewdney[422694]: --> All data devices are unavailable
Jan 26 17:35:14 compute-0 systemd[1]: libpod-db8b494a548dd49b9a72e55543681d4cade4d5d96b0d04ed100ad2df6757b51a.scope: Deactivated successfully.
Jan 26 17:35:14 compute-0 podman[422677]: 2026-01-26 17:35:14.525091659 +0000 UTC m=+0.737171060 container died db8b494a548dd49b9a72e55543681d4cade4d5d96b0d04ed100ad2df6757b51a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dewdney, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:35:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0a872c00bc41832be730f547fc6253f608441e600e5120aa277cf9af917fbaa-merged.mount: Deactivated successfully.
Jan 26 17:35:14 compute-0 sshd-session[422699]: Invalid user solv from 45.148.10.240 port 46078
Jan 26 17:35:14 compute-0 podman[422677]: 2026-01-26 17:35:14.572476148 +0000 UTC m=+0.784555529 container remove db8b494a548dd49b9a72e55543681d4cade4d5d96b0d04ed100ad2df6757b51a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:35:14 compute-0 systemd[1]: libpod-conmon-db8b494a548dd49b9a72e55543681d4cade4d5d96b0d04ed100ad2df6757b51a.scope: Deactivated successfully.
Jan 26 17:35:14 compute-0 sudo[422599]: pam_unix(sudo:session): session closed for user root
Jan 26 17:35:14 compute-0 sshd-session[422699]: Connection closed by invalid user solv 45.148.10.240 port 46078 [preauth]
Jan 26 17:35:14 compute-0 sudo[422727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:35:14 compute-0 sudo[422727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:35:14 compute-0 sudo[422727]: pam_unix(sudo:session): session closed for user root
Jan 26 17:35:14 compute-0 sudo[422752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:35:14 compute-0 sudo[422752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:35:14 compute-0 nova_compute[239965]: 2026-01-26 17:35:14.883 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:15 compute-0 podman[422788]: 2026-01-26 17:35:15.09476747 +0000 UTC m=+0.060855939 container create 95790578f4ea80d3a81cfaa99d5e927f9a46c36c9a3a62299ddc4725633e950e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meitner, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True)
Jan 26 17:35:15 compute-0 systemd[1]: Started libpod-conmon-95790578f4ea80d3a81cfaa99d5e927f9a46c36c9a3a62299ddc4725633e950e.scope.
Jan 26 17:35:15 compute-0 podman[422788]: 2026-01-26 17:35:15.076178305 +0000 UTC m=+0.042266784 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:35:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:35:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:15 compute-0 podman[422788]: 2026-01-26 17:35:15.193863393 +0000 UTC m=+0.159951882 container init 95790578f4ea80d3a81cfaa99d5e927f9a46c36c9a3a62299ddc4725633e950e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 17:35:15 compute-0 podman[422788]: 2026-01-26 17:35:15.201372977 +0000 UTC m=+0.167461466 container start 95790578f4ea80d3a81cfaa99d5e927f9a46c36c9a3a62299ddc4725633e950e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meitner, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 17:35:15 compute-0 podman[422788]: 2026-01-26 17:35:15.205802756 +0000 UTC m=+0.171891255 container attach 95790578f4ea80d3a81cfaa99d5e927f9a46c36c9a3a62299ddc4725633e950e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meitner, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:35:15 compute-0 romantic_meitner[422804]: 167 167
Jan 26 17:35:15 compute-0 systemd[1]: libpod-95790578f4ea80d3a81cfaa99d5e927f9a46c36c9a3a62299ddc4725633e950e.scope: Deactivated successfully.
Jan 26 17:35:15 compute-0 podman[422788]: 2026-01-26 17:35:15.207704822 +0000 UTC m=+0.173793311 container died 95790578f4ea80d3a81cfaa99d5e927f9a46c36c9a3a62299ddc4725633e950e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meitner, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-948ecfa540b416e1a1d315284ad799a55913a02cd0f0b9a51940a3807d735ed7-merged.mount: Deactivated successfully.
Jan 26 17:35:15 compute-0 podman[422788]: 2026-01-26 17:35:15.250367996 +0000 UTC m=+0.216456475 container remove 95790578f4ea80d3a81cfaa99d5e927f9a46c36c9a3a62299ddc4725633e950e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_meitner, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle)
Jan 26 17:35:15 compute-0 systemd[1]: libpod-conmon-95790578f4ea80d3a81cfaa99d5e927f9a46c36c9a3a62299ddc4725633e950e.scope: Deactivated successfully.
Jan 26 17:35:15 compute-0 podman[422829]: 2026-01-26 17:35:15.441423387 +0000 UTC m=+0.047705217 container create b51291eda256a2a30f1c912715ad10488a4c120d6dd462855ef59b08dfc0d6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Jan 26 17:35:15 compute-0 systemd[1]: Started libpod-conmon-b51291eda256a2a30f1c912715ad10488a4c120d6dd462855ef59b08dfc0d6eb.scope.
Jan 26 17:35:15 compute-0 podman[422829]: 2026-01-26 17:35:15.421708015 +0000 UTC m=+0.027989855 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:35:15 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b66c94d71ab96239aa9f454dbdf1ade44b42682664b584608dd27eb10b930d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b66c94d71ab96239aa9f454dbdf1ade44b42682664b584608dd27eb10b930d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b66c94d71ab96239aa9f454dbdf1ade44b42682664b584608dd27eb10b930d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23b66c94d71ab96239aa9f454dbdf1ade44b42682664b584608dd27eb10b930d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:15 compute-0 podman[422829]: 2026-01-26 17:35:15.542333215 +0000 UTC m=+0.148615085 container init b51291eda256a2a30f1c912715ad10488a4c120d6dd462855ef59b08dfc0d6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:35:15 compute-0 podman[422829]: 2026-01-26 17:35:15.558840919 +0000 UTC m=+0.165122739 container start b51291eda256a2a30f1c912715ad10488a4c120d6dd462855ef59b08dfc0d6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:35:15 compute-0 podman[422829]: 2026-01-26 17:35:15.562830407 +0000 UTC m=+0.169112247 container attach b51291eda256a2a30f1c912715ad10488a4c120d6dd462855ef59b08dfc0d6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]: {
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:     "0": [
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:         {
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "devices": [
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "/dev/loop3"
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             ],
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_name": "ceph_lv0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_size": "21470642176",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "name": "ceph_lv0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "tags": {
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.cluster_name": "ceph",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.crush_device_class": "",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.encrypted": "0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.objectstore": "bluestore",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.osd_id": "0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.type": "block",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.vdo": "0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.with_tpm": "0"
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             },
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "type": "block",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "vg_name": "ceph_vg0"
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:         }
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:     ],
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:     "1": [
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:         {
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "devices": [
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "/dev/loop4"
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             ],
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_name": "ceph_lv1",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_size": "21470642176",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "name": "ceph_lv1",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "tags": {
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.cluster_name": "ceph",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.crush_device_class": "",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.encrypted": "0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.objectstore": "bluestore",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.osd_id": "1",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.type": "block",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.vdo": "0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.with_tpm": "0"
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             },
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "type": "block",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "vg_name": "ceph_vg1"
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:         }
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:     ],
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:     "2": [
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:         {
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "devices": [
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "/dev/loop5"
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             ],
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_name": "ceph_lv2",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_size": "21470642176",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "name": "ceph_lv2",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "tags": {
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.cluster_name": "ceph",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.crush_device_class": "",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.encrypted": "0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.objectstore": "bluestore",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.osd_id": "2",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.type": "block",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.vdo": "0",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:                 "ceph.with_tpm": "0"
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             },
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "type": "block",
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:             "vg_name": "ceph_vg2"
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:         }
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]:     ]
Jan 26 17:35:15 compute-0 cool_stonebraker[422845]: }
Jan 26 17:35:15 compute-0 systemd[1]: libpod-b51291eda256a2a30f1c912715ad10488a4c120d6dd462855ef59b08dfc0d6eb.scope: Deactivated successfully.
Jan 26 17:35:15 compute-0 podman[422829]: 2026-01-26 17:35:15.938666137 +0000 UTC m=+0.544947967 container died b51291eda256a2a30f1c912715ad10488a4c120d6dd462855ef59b08dfc0d6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:35:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-23b66c94d71ab96239aa9f454dbdf1ade44b42682664b584608dd27eb10b930d-merged.mount: Deactivated successfully.
Jan 26 17:35:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:15 compute-0 podman[422829]: 2026-01-26 17:35:15.996186904 +0000 UTC m=+0.602468724 container remove b51291eda256a2a30f1c912715ad10488a4c120d6dd462855ef59b08dfc0d6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:35:16 compute-0 systemd[1]: libpod-conmon-b51291eda256a2a30f1c912715ad10488a4c120d6dd462855ef59b08dfc0d6eb.scope: Deactivated successfully.
Jan 26 17:35:16 compute-0 sudo[422752]: pam_unix(sudo:session): session closed for user root
Jan 26 17:35:16 compute-0 nova_compute[239965]: 2026-01-26 17:35:16.101 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:16 compute-0 sudo[422867]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:35:16 compute-0 sudo[422867]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:35:16 compute-0 sudo[422867]: pam_unix(sudo:session): session closed for user root
Jan 26 17:35:16 compute-0 sudo[422892]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:35:16 compute-0 sudo[422892]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:35:16 compute-0 ceph-mon[75140]: pgmap v4364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:16 compute-0 sshd-session[422854]: Received disconnect from 45.148.10.147 port 47298:11:  [preauth]
Jan 26 17:35:16 compute-0 sshd-session[422854]: Disconnected from authenticating user root 45.148.10.147 port 47298 [preauth]
Jan 26 17:35:16 compute-0 podman[422929]: 2026-01-26 17:35:16.480780706 +0000 UTC m=+0.045652278 container create 71235d7000c9493c9771f6bb18cdcceed562468463dede32a4994ae48292790a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:35:16 compute-0 systemd[1]: Started libpod-conmon-71235d7000c9493c9771f6bb18cdcceed562468463dede32a4994ae48292790a.scope.
Jan 26 17:35:16 compute-0 nova_compute[239965]: 2026-01-26 17:35:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:16 compute-0 nova_compute[239965]: 2026-01-26 17:35:16.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:35:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:35:16 compute-0 podman[422929]: 2026-01-26 17:35:16.552401467 +0000 UTC m=+0.117273029 container init 71235d7000c9493c9771f6bb18cdcceed562468463dede32a4994ae48292790a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:35:16 compute-0 podman[422929]: 2026-01-26 17:35:16.558024035 +0000 UTC m=+0.122895587 container start 71235d7000c9493c9771f6bb18cdcceed562468463dede32a4994ae48292790a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:35:16 compute-0 podman[422929]: 2026-01-26 17:35:16.462685903 +0000 UTC m=+0.027557475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:35:16 compute-0 serene_noyce[422946]: 167 167
Jan 26 17:35:16 compute-0 podman[422929]: 2026-01-26 17:35:16.561454118 +0000 UTC m=+0.126325670 container attach 71235d7000c9493c9771f6bb18cdcceed562468463dede32a4994ae48292790a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:35:16 compute-0 systemd[1]: libpod-71235d7000c9493c9771f6bb18cdcceed562468463dede32a4994ae48292790a.scope: Deactivated successfully.
Jan 26 17:35:16 compute-0 podman[422929]: 2026-01-26 17:35:16.562566816 +0000 UTC m=+0.127438368 container died 71235d7000c9493c9771f6bb18cdcceed562468463dede32a4994ae48292790a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:35:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcde20e5aa2e228899950b996793d80033f79dcb659b3bbde81607bf63f15eff-merged.mount: Deactivated successfully.
Jan 26 17:35:16 compute-0 podman[422943]: 2026-01-26 17:35:16.600019612 +0000 UTC m=+0.083594326 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 26 17:35:16 compute-0 podman[422929]: 2026-01-26 17:35:16.606476779 +0000 UTC m=+0.171348331 container remove 71235d7000c9493c9771f6bb18cdcceed562468463dede32a4994ae48292790a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_noyce, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:35:16 compute-0 systemd[1]: libpod-conmon-71235d7000c9493c9771f6bb18cdcceed562468463dede32a4994ae48292790a.scope: Deactivated successfully.
Jan 26 17:35:16 compute-0 podman[422995]: 2026-01-26 17:35:16.780384233 +0000 UTC m=+0.054205657 container create 758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hermann, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:35:16 compute-0 systemd[1]: Started libpod-conmon-758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c.scope.
Jan 26 17:35:16 compute-0 podman[422995]: 2026-01-26 17:35:16.75211039 +0000 UTC m=+0.025931864 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:35:16 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27574a6a2b0ce863cbc77587f701c7cb5df575f9abf55d8f8105a690550a9eda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27574a6a2b0ce863cbc77587f701c7cb5df575f9abf55d8f8105a690550a9eda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27574a6a2b0ce863cbc77587f701c7cb5df575f9abf55d8f8105a690550a9eda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27574a6a2b0ce863cbc77587f701c7cb5df575f9abf55d8f8105a690550a9eda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:35:16 compute-0 podman[422995]: 2026-01-26 17:35:16.869707226 +0000 UTC m=+0.143528630 container init 758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:35:16 compute-0 podman[422995]: 2026-01-26 17:35:16.877997239 +0000 UTC m=+0.151818633 container start 758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hermann, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 17:35:16 compute-0 podman[422995]: 2026-01-26 17:35:16.881196678 +0000 UTC m=+0.155018062 container attach 758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:35:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:17 compute-0 lvm[423090]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:35:17 compute-0 lvm[423090]: VG ceph_vg1 finished
Jan 26 17:35:17 compute-0 lvm[423088]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:35:17 compute-0 lvm[423088]: VG ceph_vg0 finished
Jan 26 17:35:17 compute-0 lvm[423092]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:35:17 compute-0 lvm[423092]: VG ceph_vg2 finished
Jan 26 17:35:17 compute-0 stoic_hermann[423011]: {}
Jan 26 17:35:17 compute-0 systemd[1]: libpod-758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c.scope: Deactivated successfully.
Jan 26 17:35:17 compute-0 systemd[1]: libpod-758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c.scope: Consumed 1.307s CPU time.
Jan 26 17:35:17 compute-0 podman[422995]: 2026-01-26 17:35:17.721291793 +0000 UTC m=+0.995113187 container died 758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hermann, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:35:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-27574a6a2b0ce863cbc77587f701c7cb5df575f9abf55d8f8105a690550a9eda-merged.mount: Deactivated successfully.
Jan 26 17:35:17 compute-0 podman[422995]: 2026-01-26 17:35:17.76328249 +0000 UTC m=+1.037103874 container remove 758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_hermann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 17:35:17 compute-0 systemd[1]: libpod-conmon-758e598ea316957a9d909e881eff868fff16c26757a760be120c8232384b5e1c.scope: Deactivated successfully.
Jan 26 17:35:17 compute-0 sudo[422892]: pam_unix(sudo:session): session closed for user root
Jan 26 17:35:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:35:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:35:17 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:35:17 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:35:17 compute-0 sudo[423108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:35:17 compute-0 sudo[423108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:35:17 compute-0 sudo[423108]: pam_unix(sudo:session): session closed for user root
Jan 26 17:35:18 compute-0 ceph-mon[75140]: pgmap v4365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:35:18 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:35:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:19 compute-0 nova_compute[239965]: 2026-01-26 17:35:19.887 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:20 compute-0 ceph-mon[75140]: pgmap v4366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:21 compute-0 nova_compute[239965]: 2026-01-26 17:35:21.102 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:22 compute-0 nova_compute[239965]: 2026-01-26 17:35:22.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:22 compute-0 nova_compute[239965]: 2026-01-26 17:35:22.533 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:35:22 compute-0 nova_compute[239965]: 2026-01-26 17:35:22.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:35:22 compute-0 nova_compute[239965]: 2026-01-26 17:35:22.534 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:35:22 compute-0 nova_compute[239965]: 2026-01-26 17:35:22.534 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:35:22 compute-0 nova_compute[239965]: 2026-01-26 17:35:22.534 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:35:22 compute-0 ceph-mon[75140]: pgmap v4367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:35:23 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063767834' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.103 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:35:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.257 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.258 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3484MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.259 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.259 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.753 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.754 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:35:23 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3063767834' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.869 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.921 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.922 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.937 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.968 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:35:23 compute-0 nova_compute[239965]: 2026-01-26 17:35:23.987 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:35:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:35:24 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2130208024' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:35:24 compute-0 nova_compute[239965]: 2026-01-26 17:35:24.559 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:35:24 compute-0 nova_compute[239965]: 2026-01-26 17:35:24.565 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:35:24 compute-0 nova_compute[239965]: 2026-01-26 17:35:24.582 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:35:24 compute-0 nova_compute[239965]: 2026-01-26 17:35:24.583 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:35:24 compute-0 nova_compute[239965]: 2026-01-26 17:35:24.583 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:35:24 compute-0 ceph-mon[75140]: pgmap v4368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:24 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2130208024' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #210. Immutable memtables: 0.
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.859738) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 131] Flushing memtable with next log file: 210
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448924859813, "job": 131, "event": "flush_started", "num_memtables": 1, "num_entries": 2048, "num_deletes": 251, "total_data_size": 3525915, "memory_usage": 3570360, "flush_reason": "Manual Compaction"}
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 131] Level-0 flush table #211: started
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448924877316, "cf_name": "default", "job": 131, "event": "table_file_creation", "file_number": 211, "file_size": 3441041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 86736, "largest_seqno": 88783, "table_properties": {"data_size": 3431672, "index_size": 5926, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18592, "raw_average_key_size": 20, "raw_value_size": 3413152, "raw_average_value_size": 3681, "num_data_blocks": 263, "num_entries": 927, "num_filter_entries": 927, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769448697, "oldest_key_time": 1769448697, "file_creation_time": 1769448924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 211, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 131] Flush lasted 17626 microseconds, and 6498 cpu microseconds.
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.877370) [db/flush_job.cc:967] [default] [JOB 131] Level-0 flush table #211: 3441041 bytes OK
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.877389) [db/memtable_list.cc:519] [default] Level-0 commit table #211 started
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.879304) [db/memtable_list.cc:722] [default] Level-0 commit table #211: memtable #1 done
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.879344) EVENT_LOG_v1 {"time_micros": 1769448924879336, "job": 131, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.879369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 131] Try to delete WAL files size 3517352, prev total WAL file size 3517352, number of live WAL files 2.
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000207.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.880453) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730038373835' seq:72057594037927935, type:22 .. '7061786F730039303337' seq:0, type:0; will stop at (end)
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 132] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 131 Base level 0, inputs: [211(3360KB)], [209(10MB)]
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448924880485, "job": 132, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [211], "files_L6": [209], "score": -1, "input_data_size": 14724848, "oldest_snapshot_seqno": -1}
Jan 26 17:35:24 compute-0 nova_compute[239965]: 2026-01-26 17:35:24.890 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 132] Generated table #212: 10454 keys, 12886581 bytes, temperature: kUnknown
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448924964151, "cf_name": "default", "job": 132, "event": "table_file_creation", "file_number": 212, "file_size": 12886581, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12820571, "index_size": 38761, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26181, "raw_key_size": 274545, "raw_average_key_size": 26, "raw_value_size": 12637294, "raw_average_value_size": 1208, "num_data_blocks": 1492, "num_entries": 10454, "num_filter_entries": 10454, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769448924, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 212, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.965532) [db/compaction/compaction_job.cc:1663] [default] [JOB 132] Compacted 1@0 + 1@6 files to L6 => 12886581 bytes
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.967841) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.5 rd, 151.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 10.8 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 10968, records dropped: 514 output_compression: NoCompression
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.967864) EVENT_LOG_v1 {"time_micros": 1769448924967855, "job": 132, "event": "compaction_finished", "compaction_time_micros": 84875, "compaction_time_cpu_micros": 31979, "output_level": 6, "num_output_files": 1, "total_output_size": 12886581, "num_input_records": 10968, "num_output_records": 10454, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000211.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448924968479, "job": 132, "event": "table_file_deletion", "file_number": 211}
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000209.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448924970497, "job": 132, "event": "table_file_deletion", "file_number": 209}
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.880380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.970528) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.970533) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.970534) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.970536) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:35:24 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:35:24.970538) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:35:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:25 compute-0 nova_compute[239965]: 2026-01-26 17:35:25.584 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:25 compute-0 nova_compute[239965]: 2026-01-26 17:35:25.585 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:35:25 compute-0 nova_compute[239965]: 2026-01-26 17:35:25.585 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:35:25 compute-0 nova_compute[239965]: 2026-01-26 17:35:25.605 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:35:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:26 compute-0 nova_compute[239965]: 2026-01-26 17:35:26.104 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:26 compute-0 nova_compute[239965]: 2026-01-26 17:35:26.524 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:26 compute-0 ceph-mon[75140]: pgmap v4369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:28 compute-0 ceph-mon[75140]: pgmap v4370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:35:28
Jan 26 17:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'volumes', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'backups']
Jan 26 17:35:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:35:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:29 compute-0 nova_compute[239965]: 2026-01-26 17:35:29.893 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:35:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:35:30 compute-0 ceph-mon[75140]: pgmap v4371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:31 compute-0 nova_compute[239965]: 2026-01-26 17:35:31.107 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:35:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:35:32 compute-0 ceph-mon[75140]: pgmap v4372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:34 compute-0 nova_compute[239965]: 2026-01-26 17:35:34.897 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:34 compute-0 ceph-mon[75140]: pgmap v4373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:36 compute-0 nova_compute[239965]: 2026-01-26 17:35:36.108 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:36 compute-0 ceph-mon[75140]: pgmap v4374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:38 compute-0 ceph-mon[75140]: pgmap v4375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:39 compute-0 nova_compute[239965]: 2026-01-26 17:35:39.900 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:40 compute-0 ceph-mon[75140]: pgmap v4376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:41 compute-0 nova_compute[239965]: 2026-01-26 17:35:41.111 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:42 compute-0 ceph-mon[75140]: pgmap v4377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:43 compute-0 podman[423177]: 2026-01-26 17:35:43.420535754 +0000 UTC m=+0.094899651 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:35:44 compute-0 nova_compute[239965]: 2026-01-26 17:35:44.904 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:44 compute-0 ceph-mon[75140]: pgmap v4378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:46 compute-0 nova_compute[239965]: 2026-01-26 17:35:46.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:46 compute-0 ceph-mon[75140]: pgmap v4379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:47 compute-0 podman[423197]: 2026-01-26 17:35:47.379753589 +0000 UTC m=+0.070262180 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_controller)
Jan 26 17:35:48 compute-0 ceph-mon[75140]: pgmap v4380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:49 compute-0 nova_compute[239965]: 2026-01-26 17:35:49.907 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:35:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:35:50 compute-0 ceph-mon[75140]: pgmap v4381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:51 compute-0 nova_compute[239965]: 2026-01-26 17:35:51.114 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:51 compute-0 nova_compute[239965]: 2026-01-26 17:35:51.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:52 compute-0 ceph-mon[75140]: pgmap v4382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:54 compute-0 nova_compute[239965]: 2026-01-26 17:35:54.910 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:54 compute-0 ceph-mon[75140]: pgmap v4383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:35:56 compute-0 nova_compute[239965]: 2026-01-26 17:35:56.115 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:35:56 compute-0 ceph-mon[75140]: pgmap v4384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:57 compute-0 nova_compute[239965]: 2026-01-26 17:35:57.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:58 compute-0 ceph-mon[75140]: pgmap v4385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:35:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:35:59.324 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:35:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:35:59.324 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:35:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:35:59.325 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:35:59 compute-0 nova_compute[239965]: 2026-01-26 17:35:59.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:59 compute-0 nova_compute[239965]: 2026-01-26 17:35:59.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:35:59 compute-0 nova_compute[239965]: 2026-01-26 17:35:59.913 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:36:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:36:00 compute-0 ceph-mon[75140]: pgmap v4386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:01 compute-0 nova_compute[239965]: 2026-01-26 17:36:01.117 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:02 compute-0 ceph-mon[75140]: pgmap v4387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:04 compute-0 nova_compute[239965]: 2026-01-26 17:36:04.915 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:04 compute-0 ceph-mon[75140]: pgmap v4388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:05 compute-0 nova_compute[239965]: 2026-01-26 17:36:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:06 compute-0 nova_compute[239965]: 2026-01-26 17:36:06.120 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:06 compute-0 ceph-mon[75140]: pgmap v4389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:08 compute-0 ceph-mon[75140]: pgmap v4390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:09 compute-0 nova_compute[239965]: 2026-01-26 17:36:09.918 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:10 compute-0 ceph-mon[75140]: pgmap v4391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:11 compute-0 nova_compute[239965]: 2026-01-26 17:36:11.121 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:11 compute-0 nova_compute[239965]: 2026-01-26 17:36:11.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:12 compute-0 ceph-mon[75140]: pgmap v4392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:14 compute-0 podman[423224]: 2026-01-26 17:36:14.386278722 +0000 UTC m=+0.069639324 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:36:14 compute-0 nova_compute[239965]: 2026-01-26 17:36:14.921 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:14 compute-0 ceph-mon[75140]: pgmap v4393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:15 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:16 compute-0 nova_compute[239965]: 2026-01-26 17:36:16.124 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:16 compute-0 sshd-session[423247]: banner exchange: Connection from 20.168.121.152 port 54452: invalid format
Jan 26 17:36:16 compute-0 nova_compute[239965]: 2026-01-26 17:36:16.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:16 compute-0 nova_compute[239965]: 2026-01-26 17:36:16.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:36:17 compute-0 ceph-mon[75140]: pgmap v4394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:18 compute-0 sudo[423248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:36:18 compute-0 sudo[423248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:36:18 compute-0 sudo[423248]: pam_unix(sudo:session): session closed for user root
Jan 26 17:36:18 compute-0 sudo[423275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:36:18 compute-0 sudo[423275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:36:18 compute-0 ceph-mon[75140]: pgmap v4395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:18 compute-0 podman[423272]: 2026-01-26 17:36:18.162580563 +0000 UTC m=+0.129308724 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:36:18 compute-0 sudo[423275]: pam_unix(sudo:session): session closed for user root
Jan 26 17:36:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:36:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:36:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:36:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:36:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:36:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:36:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:36:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:36:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:36:18 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:36:18 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:36:18 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:36:18 compute-0 sudo[423356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:36:18 compute-0 sudo[423356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:36:18 compute-0 sudo[423356]: pam_unix(sudo:session): session closed for user root
Jan 26 17:36:18 compute-0 sudo[423381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:36:18 compute-0 sudo[423381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:36:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:36:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:36:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:36:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:36:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:36:19 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:36:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:19 compute-0 podman[423419]: 2026-01-26 17:36:19.23059696 +0000 UTC m=+0.054442782 container create c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 26 17:36:19 compute-0 systemd[1]: Started libpod-conmon-c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33.scope.
Jan 26 17:36:19 compute-0 podman[423419]: 2026-01-26 17:36:19.202882854 +0000 UTC m=+0.026728766 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:36:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:36:19 compute-0 podman[423419]: 2026-01-26 17:36:19.343514652 +0000 UTC m=+0.167360504 container init c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:36:19 compute-0 podman[423419]: 2026-01-26 17:36:19.350675667 +0000 UTC m=+0.174521519 container start c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 26 17:36:19 compute-0 podman[423419]: 2026-01-26 17:36:19.355730931 +0000 UTC m=+0.179576963 container attach c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:36:19 compute-0 busy_hertz[423436]: 167 167
Jan 26 17:36:19 compute-0 systemd[1]: libpod-c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33.scope: Deactivated successfully.
Jan 26 17:36:19 compute-0 conmon[423436]: conmon c51d52f168cbe9b22460 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33.scope/container/memory.events
Jan 26 17:36:19 compute-0 podman[423419]: 2026-01-26 17:36:19.358207192 +0000 UTC m=+0.182053024 container died c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:36:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3bce6c7a4c28878c860c964263f576cc673cee93d3c8a7ba2adc3b4543b39ea-merged.mount: Deactivated successfully.
Jan 26 17:36:19 compute-0 podman[423419]: 2026-01-26 17:36:19.413832652 +0000 UTC m=+0.237678474 container remove c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_hertz, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:36:19 compute-0 systemd[1]: libpod-conmon-c51d52f168cbe9b224600cc0faa4f93acfa5cef63e37d2127276897c226d5c33.scope: Deactivated successfully.
Jan 26 17:36:19 compute-0 podman[423459]: 2026-01-26 17:36:19.65828547 +0000 UTC m=+0.056669667 container create 86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_wright, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:36:19 compute-0 systemd[1]: Started libpod-conmon-86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111.scope.
Jan 26 17:36:19 compute-0 podman[423459]: 2026-01-26 17:36:19.634194811 +0000 UTC m=+0.032579028 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:36:19 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b16c595733df0e37f17a7f0a855337d148b3b574d8a315f2fc672dcce1b30f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b16c595733df0e37f17a7f0a855337d148b3b574d8a315f2fc672dcce1b30f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b16c595733df0e37f17a7f0a855337d148b3b574d8a315f2fc672dcce1b30f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b16c595733df0e37f17a7f0a855337d148b3b574d8a315f2fc672dcce1b30f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b16c595733df0e37f17a7f0a855337d148b3b574d8a315f2fc672dcce1b30f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:19 compute-0 podman[423459]: 2026-01-26 17:36:19.769114671 +0000 UTC m=+0.167498918 container init 86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_wright, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:36:19 compute-0 podman[423459]: 2026-01-26 17:36:19.783786719 +0000 UTC m=+0.182170946 container start 86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_wright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:36:19 compute-0 podman[423459]: 2026-01-26 17:36:19.789101879 +0000 UTC m=+0.187486086 container attach 86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_wright, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:36:19 compute-0 nova_compute[239965]: 2026-01-26 17:36:19.925 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:20 compute-0 ceph-mon[75140]: pgmap v4396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:20 compute-0 strange_wright[423475]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:36:20 compute-0 strange_wright[423475]: --> All data devices are unavailable
Jan 26 17:36:20 compute-0 systemd[1]: libpod-86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111.scope: Deactivated successfully.
Jan 26 17:36:20 compute-0 conmon[423475]: conmon 86cd051143efb9b31c0e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111.scope/container/memory.events
Jan 26 17:36:20 compute-0 podman[423459]: 2026-01-26 17:36:20.341818656 +0000 UTC m=+0.740202863 container died 86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_wright, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:36:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b16c595733df0e37f17a7f0a855337d148b3b574d8a315f2fc672dcce1b30f2-merged.mount: Deactivated successfully.
Jan 26 17:36:20 compute-0 podman[423459]: 2026-01-26 17:36:20.395348635 +0000 UTC m=+0.793732832 container remove 86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_wright, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:36:20 compute-0 systemd[1]: libpod-conmon-86cd051143efb9b31c0e1b023002b10261fe3cccd0772cf95302595c56e38111.scope: Deactivated successfully.
Jan 26 17:36:20 compute-0 sudo[423381]: pam_unix(sudo:session): session closed for user root
Jan 26 17:36:20 compute-0 sudo[423505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:36:20 compute-0 sudo[423505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:36:20 compute-0 sudo[423505]: pam_unix(sudo:session): session closed for user root
Jan 26 17:36:20 compute-0 sudo[423530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:36:20 compute-0 sudo[423530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:36:20 compute-0 podman[423568]: 2026-01-26 17:36:20.909815237 +0000 UTC m=+0.067759019 container create e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 17:36:20 compute-0 systemd[1]: Started libpod-conmon-e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b.scope.
Jan 26 17:36:20 compute-0 podman[423568]: 2026-01-26 17:36:20.876898431 +0000 UTC m=+0.034842323 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:36:20 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:36:20 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #213. Immutable memtables: 0.
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.001544) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 133] Flushing memtable with next log file: 213
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448981001604, "job": 133, "event": "flush_started", "num_memtables": 1, "num_entries": 668, "num_deletes": 250, "total_data_size": 827358, "memory_usage": 839936, "flush_reason": "Manual Compaction"}
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 133] Level-0 flush table #214: started
Jan 26 17:36:21 compute-0 podman[423568]: 2026-01-26 17:36:21.00729193 +0000 UTC m=+0.165235772 container init e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448981008736, "cf_name": "default", "job": 133, "event": "table_file_creation", "file_number": 214, "file_size": 519219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 88784, "largest_seqno": 89451, "table_properties": {"data_size": 516217, "index_size": 909, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7970, "raw_average_key_size": 20, "raw_value_size": 509970, "raw_average_value_size": 1307, "num_data_blocks": 42, "num_entries": 390, "num_filter_entries": 390, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769448925, "oldest_key_time": 1769448925, "file_creation_time": 1769448981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 214, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 133] Flush lasted 7258 microseconds, and 4157 cpu microseconds.
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.008804) [db/flush_job.cc:967] [default] [JOB 133] Level-0 flush table #214: 519219 bytes OK
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.008829) [db/memtable_list.cc:519] [default] Level-0 commit table #214 started
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.010810) [db/memtable_list.cc:722] [default] Level-0 commit table #214: memtable #1 done
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.010836) EVENT_LOG_v1 {"time_micros": 1769448981010828, "job": 133, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.010861) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 133] Try to delete WAL files size 823848, prev total WAL file size 823848, number of live WAL files 2.
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000210.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.011789) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740033373537' seq:72057594037927935, type:22 .. '6D6772737461740034303038' seq:0, type:0; will stop at (end)
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 134] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 133 Base level 0, inputs: [214(507KB)], [212(12MB)]
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448981011855, "job": 134, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [214], "files_L6": [212], "score": -1, "input_data_size": 13405800, "oldest_snapshot_seqno": -1}
Jan 26 17:36:21 compute-0 podman[423568]: 2026-01-26 17:36:21.020461673 +0000 UTC m=+0.178405475 container start e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:36:21 compute-0 podman[423568]: 2026-01-26 17:36:21.024903161 +0000 UTC m=+0.182846963 container attach e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:36:21 compute-0 sharp_boyd[423584]: 167 167
Jan 26 17:36:21 compute-0 systemd[1]: libpod-e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b.scope: Deactivated successfully.
Jan 26 17:36:21 compute-0 conmon[423584]: conmon e3f2413df275ef175f4f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b.scope/container/memory.events
Jan 26 17:36:21 compute-0 podman[423568]: 2026-01-26 17:36:21.028735155 +0000 UTC m=+0.186678967 container died e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 134] Generated table #215: 10356 keys, 10333041 bytes, temperature: kUnknown
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448981095921, "cf_name": "default", "job": 134, "event": "table_file_creation", "file_number": 215, "file_size": 10333041, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10271826, "index_size": 34222, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25925, "raw_key_size": 272670, "raw_average_key_size": 26, "raw_value_size": 10094337, "raw_average_value_size": 974, "num_data_blocks": 1304, "num_entries": 10356, "num_filter_entries": 10356, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769448981, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 215, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.096408) [db/compaction/compaction_job.cc:1663] [default] [JOB 134] Compacted 1@0 + 1@6 files to L6 => 10333041 bytes
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.097899) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.1 rd, 122.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 12.3 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(45.7) write-amplify(19.9) OK, records in: 10844, records dropped: 488 output_compression: NoCompression
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.097918) EVENT_LOG_v1 {"time_micros": 1769448981097908, "job": 134, "event": "compaction_finished", "compaction_time_micros": 84282, "compaction_time_cpu_micros": 48033, "output_level": 6, "num_output_files": 1, "total_output_size": 10333041, "num_input_records": 10844, "num_output_records": 10356, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000214.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448981098147, "job": 134, "event": "table_file_deletion", "file_number": 214}
Jan 26 17:36:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2aeedfa90a2d9d0153dfd4de0d03fe447054e38b25c560ad550c15ab65773c8-merged.mount: Deactivated successfully.
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000212.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769448981100704, "job": 134, "event": "table_file_deletion", "file_number": 212}
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.011725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.100763) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.100769) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.100771) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.100773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:21 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:21.100774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:21 compute-0 podman[423568]: 2026-01-26 17:36:21.112236157 +0000 UTC m=+0.270179939 container remove e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:36:21 compute-0 systemd[1]: libpod-conmon-e3f2413df275ef175f4fb4276c82dca9f0e77d95bf0269944e38c8055e43db3b.scope: Deactivated successfully.
Jan 26 17:36:21 compute-0 nova_compute[239965]: 2026-01-26 17:36:21.125 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:21 compute-0 podman[423608]: 2026-01-26 17:36:21.293861589 +0000 UTC m=+0.056307328 container create 673184bc88ce0f5b86a5c851f1dd28084754f478dc8517644a5a4716362fa4c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:36:21 compute-0 systemd[1]: Started libpod-conmon-673184bc88ce0f5b86a5c851f1dd28084754f478dc8517644a5a4716362fa4c9.scope.
Jan 26 17:36:21 compute-0 podman[423608]: 2026-01-26 17:36:21.267906083 +0000 UTC m=+0.030351802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:36:21 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047da36161a6f252477fa486d84df199cf65974f3ee929139831427049aea4cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047da36161a6f252477fa486d84df199cf65974f3ee929139831427049aea4cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047da36161a6f252477fa486d84df199cf65974f3ee929139831427049aea4cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/047da36161a6f252477fa486d84df199cf65974f3ee929139831427049aea4cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:21 compute-0 podman[423608]: 2026-01-26 17:36:21.412032288 +0000 UTC m=+0.174478027 container init 673184bc88ce0f5b86a5c851f1dd28084754f478dc8517644a5a4716362fa4c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:36:21 compute-0 podman[423608]: 2026-01-26 17:36:21.420597129 +0000 UTC m=+0.183042828 container start 673184bc88ce0f5b86a5c851f1dd28084754f478dc8517644a5a4716362fa4c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 17:36:21 compute-0 podman[423608]: 2026-01-26 17:36:21.424674667 +0000 UTC m=+0.187120426 container attach 673184bc88ce0f5b86a5c851f1dd28084754f478dc8517644a5a4716362fa4c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]: {
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:     "0": [
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:         {
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "devices": [
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "/dev/loop3"
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             ],
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_name": "ceph_lv0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_size": "21470642176",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "name": "ceph_lv0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "tags": {
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.cluster_name": "ceph",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.crush_device_class": "",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.encrypted": "0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.objectstore": "bluestore",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.osd_id": "0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.type": "block",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.vdo": "0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.with_tpm": "0"
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             },
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "type": "block",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "vg_name": "ceph_vg0"
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:         }
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:     ],
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:     "1": [
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:         {
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "devices": [
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "/dev/loop4"
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             ],
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_name": "ceph_lv1",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_size": "21470642176",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "name": "ceph_lv1",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "tags": {
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.cluster_name": "ceph",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.crush_device_class": "",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.encrypted": "0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.objectstore": "bluestore",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.osd_id": "1",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.type": "block",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.vdo": "0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.with_tpm": "0"
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             },
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "type": "block",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "vg_name": "ceph_vg1"
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:         }
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:     ],
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:     "2": [
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:         {
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "devices": [
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "/dev/loop5"
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             ],
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_name": "ceph_lv2",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_size": "21470642176",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "name": "ceph_lv2",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "tags": {
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.cluster_name": "ceph",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.crush_device_class": "",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.encrypted": "0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.objectstore": "bluestore",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.osd_id": "2",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.type": "block",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.vdo": "0",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:                 "ceph.with_tpm": "0"
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             },
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "type": "block",
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:             "vg_name": "ceph_vg2"
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:         }
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]:     ]
Jan 26 17:36:21 compute-0 cranky_zhukovsky[423625]: }
Jan 26 17:36:21 compute-0 systemd[1]: libpod-673184bc88ce0f5b86a5c851f1dd28084754f478dc8517644a5a4716362fa4c9.scope: Deactivated successfully.
Jan 26 17:36:21 compute-0 podman[423608]: 2026-01-26 17:36:21.734517655 +0000 UTC m=+0.496963354 container died 673184bc88ce0f5b86a5c851f1dd28084754f478dc8517644a5a4716362fa4c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_zhukovsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:36:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-047da36161a6f252477fa486d84df199cf65974f3ee929139831427049aea4cb-merged.mount: Deactivated successfully.
Jan 26 17:36:21 compute-0 podman[423608]: 2026-01-26 17:36:21.787731567 +0000 UTC m=+0.550177266 container remove 673184bc88ce0f5b86a5c851f1dd28084754f478dc8517644a5a4716362fa4c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_zhukovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 17:36:21 compute-0 systemd[1]: libpod-conmon-673184bc88ce0f5b86a5c851f1dd28084754f478dc8517644a5a4716362fa4c9.scope: Deactivated successfully.
Jan 26 17:36:21 compute-0 sudo[423530]: pam_unix(sudo:session): session closed for user root
Jan 26 17:36:21 compute-0 sudo[423646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:36:21 compute-0 sudo[423646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:36:21 compute-0 sudo[423646]: pam_unix(sudo:session): session closed for user root
Jan 26 17:36:21 compute-0 sudo[423671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:36:21 compute-0 sudo[423671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:36:22 compute-0 podman[423707]: 2026-01-26 17:36:22.318172129 +0000 UTC m=+0.072439323 container create 439cc91238275ce8c44e367971a142a6c0daa63123a5fcf70d595affd042554b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:36:22 compute-0 systemd[1]: Started libpod-conmon-439cc91238275ce8c44e367971a142a6c0daa63123a5fcf70d595affd042554b.scope.
Jan 26 17:36:22 compute-0 podman[423707]: 2026-01-26 17:36:22.28879223 +0000 UTC m=+0.043059454 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:36:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:36:22 compute-0 podman[423707]: 2026-01-26 17:36:22.400474751 +0000 UTC m=+0.154741915 container init 439cc91238275ce8c44e367971a142a6c0daa63123a5fcf70d595affd042554b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 26 17:36:22 compute-0 podman[423707]: 2026-01-26 17:36:22.408664051 +0000 UTC m=+0.162931195 container start 439cc91238275ce8c44e367971a142a6c0daa63123a5fcf70d595affd042554b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_blackburn, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:36:22 compute-0 podman[423707]: 2026-01-26 17:36:22.412161247 +0000 UTC m=+0.166428411 container attach 439cc91238275ce8c44e367971a142a6c0daa63123a5fcf70d595affd042554b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:36:22 compute-0 festive_blackburn[423721]: 167 167
Jan 26 17:36:22 compute-0 systemd[1]: libpod-439cc91238275ce8c44e367971a142a6c0daa63123a5fcf70d595affd042554b.scope: Deactivated successfully.
Jan 26 17:36:22 compute-0 podman[423707]: 2026-01-26 17:36:22.414300349 +0000 UTC m=+0.168567493 container died 439cc91238275ce8c44e367971a142a6c0daa63123a5fcf70d595affd042554b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_blackburn, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 17:36:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-e31b5ca41344d28837c7aebd240257dafd9e248592b3d68dbcb740347ab6b6c9-merged.mount: Deactivated successfully.
Jan 26 17:36:22 compute-0 podman[423707]: 2026-01-26 17:36:22.454517693 +0000 UTC m=+0.208784837 container remove 439cc91238275ce8c44e367971a142a6c0daa63123a5fcf70d595affd042554b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_blackburn, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 17:36:22 compute-0 systemd[1]: libpod-conmon-439cc91238275ce8c44e367971a142a6c0daa63123a5fcf70d595affd042554b.scope: Deactivated successfully.
Jan 26 17:36:22 compute-0 podman[423747]: 2026-01-26 17:36:22.641160967 +0000 UTC m=+0.072505244 container create 54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jones, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:36:22 compute-0 systemd[1]: Started libpod-conmon-54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf.scope.
Jan 26 17:36:22 compute-0 podman[423747]: 2026-01-26 17:36:22.606782497 +0000 UTC m=+0.038126884 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:36:22 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0950e056ae8537749edc04c4035121d33e99d04c10745c5b7b3482eff7edcc9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0950e056ae8537749edc04c4035121d33e99d04c10745c5b7b3482eff7edcc9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0950e056ae8537749edc04c4035121d33e99d04c10745c5b7b3482eff7edcc9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0950e056ae8537749edc04c4035121d33e99d04c10745c5b7b3482eff7edcc9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:36:22 compute-0 podman[423747]: 2026-01-26 17:36:22.738382505 +0000 UTC m=+0.169726822 container init 54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jones, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:36:22 compute-0 podman[423747]: 2026-01-26 17:36:22.755181696 +0000 UTC m=+0.186526003 container start 54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jones, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 17:36:22 compute-0 podman[423747]: 2026-01-26 17:36:22.759191534 +0000 UTC m=+0.190535901 container attach 54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 17:36:23 compute-0 ceph-mon[75140]: pgmap v4397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:23 compute-0 nova_compute[239965]: 2026-01-26 17:36:23.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:23 compute-0 nova_compute[239965]: 2026-01-26 17:36:23.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:36:23 compute-0 nova_compute[239965]: 2026-01-26 17:36:23.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:36:23 compute-0 lvm[423842]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:36:23 compute-0 lvm[423842]: VG ceph_vg1 finished
Jan 26 17:36:23 compute-0 lvm[423841]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:36:23 compute-0 lvm[423841]: VG ceph_vg0 finished
Jan 26 17:36:23 compute-0 lvm[423844]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:36:23 compute-0 lvm[423844]: VG ceph_vg2 finished
Jan 26 17:36:23 compute-0 nova_compute[239965]: 2026-01-26 17:36:23.531 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:36:23 compute-0 lvm[423846]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:36:23 compute-0 lvm[423846]: VG ceph_vg2 finished
Jan 26 17:36:23 compute-0 lvm[423848]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:36:23 compute-0 lvm[423848]: VG ceph_vg2 finished
Jan 26 17:36:23 compute-0 elated_jones[423763]: {}
Jan 26 17:36:23 compute-0 systemd[1]: libpod-54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf.scope: Deactivated successfully.
Jan 26 17:36:23 compute-0 podman[423747]: 2026-01-26 17:36:23.664104924 +0000 UTC m=+1.095449211 container died 54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 26 17:36:23 compute-0 systemd[1]: libpod-54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf.scope: Consumed 1.467s CPU time.
Jan 26 17:36:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-0950e056ae8537749edc04c4035121d33e99d04c10745c5b7b3482eff7edcc9f-merged.mount: Deactivated successfully.
Jan 26 17:36:23 compute-0 podman[423747]: 2026-01-26 17:36:23.72733617 +0000 UTC m=+1.158680447 container remove 54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 26 17:36:23 compute-0 systemd[1]: libpod-conmon-54c658f10a216cb277d4e477f52cdb0096e71450b351e292e754fa1a59b94daf.scope: Deactivated successfully.
Jan 26 17:36:23 compute-0 sudo[423671]: pam_unix(sudo:session): session closed for user root
Jan 26 17:36:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:36:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:36:23 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:36:23 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:36:23 compute-0 sudo[423860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:36:23 compute-0 sudo[423860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:36:23 compute-0 sudo[423860]: pam_unix(sudo:session): session closed for user root
Jan 26 17:36:24 compute-0 nova_compute[239965]: 2026-01-26 17:36:24.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:24 compute-0 nova_compute[239965]: 2026-01-26 17:36:24.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:36:24 compute-0 nova_compute[239965]: 2026-01-26 17:36:24.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:36:24 compute-0 nova_compute[239965]: 2026-01-26 17:36:24.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:36:24 compute-0 nova_compute[239965]: 2026-01-26 17:36:24.539 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:36:24 compute-0 nova_compute[239965]: 2026-01-26 17:36:24.539 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:36:24 compute-0 ceph-mon[75140]: pgmap v4398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:36:24 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:36:24 compute-0 nova_compute[239965]: 2026-01-26 17:36:24.971 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:36:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302571132' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.131 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:36:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.333 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.335 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3455MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.335 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.336 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.416 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.416 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.441 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:36:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/302571132' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:36:25 compute-0 sshd-session[423245]: Connection closed by 20.168.121.152 port 54450 [preauth]
Jan 26 17:36:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:36:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724911417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.974 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:36:25 compute-0 nova_compute[239965]: 2026-01-26 17:36:25.982 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:36:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:26 compute-0 nova_compute[239965]: 2026-01-26 17:36:26.003 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:36:26 compute-0 nova_compute[239965]: 2026-01-26 17:36:26.005 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:36:26 compute-0 nova_compute[239965]: 2026-01-26 17:36:26.006 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:36:26 compute-0 nova_compute[239965]: 2026-01-26 17:36:26.127 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:26 compute-0 ceph-mon[75140]: pgmap v4399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1724911417' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:36:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:28 compute-0 ceph-mon[75140]: pgmap v4400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:36:28
Jan 26 17:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'backups', '.rgw.root']
Jan 26 17:36:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:36:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:29 compute-0 nova_compute[239965]: 2026-01-26 17:36:29.973 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:36:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:36:30 compute-0 ceph-mon[75140]: pgmap v4401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:31 compute-0 nova_compute[239965]: 2026-01-26 17:36:31.130 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:36:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:36:32 compute-0 ceph-mon[75140]: pgmap v4402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:34 compute-0 nova_compute[239965]: 2026-01-26 17:36:34.976 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:34 compute-0 ceph-mon[75140]: pgmap v4403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:36 compute-0 nova_compute[239965]: 2026-01-26 17:36:36.131 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:37 compute-0 ceph-mon[75140]: pgmap v4404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:37 compute-0 nova_compute[239965]: 2026-01-26 17:36:37.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:39 compute-0 ceph-mon[75140]: pgmap v4405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:39 compute-0 nova_compute[239965]: 2026-01-26 17:36:39.981 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:41 compute-0 ceph-mon[75140]: pgmap v4406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:41 compute-0 nova_compute[239965]: 2026-01-26 17:36:41.134 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:43 compute-0 ceph-mon[75140]: pgmap v4407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:44 compute-0 ceph-mon[75140]: pgmap v4408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:44 compute-0 nova_compute[239965]: 2026-01-26 17:36:44.984 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:45 compute-0 podman[423929]: 2026-01-26 17:36:45.418523366 +0000 UTC m=+0.089335616 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 17:36:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:46 compute-0 nova_compute[239965]: 2026-01-26 17:36:46.181 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:46 compute-0 ceph-mon[75140]: pgmap v4409: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:48 compute-0 podman[423949]: 2026-01-26 17:36:48.453349035 +0000 UTC m=+0.127214213 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:36:48 compute-0 ceph-mon[75140]: pgmap v4410: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:36:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906178572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:36:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:36:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1906178572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:36:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1906178572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:36:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1906178572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:36:49 compute-0 nova_compute[239965]: 2026-01-26 17:36:49.987 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:36:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:36:50 compute-0 ceph-mon[75140]: pgmap v4411: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #216. Immutable memtables: 0.
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.011648) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 135] Flushing memtable with next log file: 216
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449011011696, "job": 135, "event": "flush_started", "num_memtables": 1, "num_entries": 502, "num_deletes": 255, "total_data_size": 495589, "memory_usage": 506120, "flush_reason": "Manual Compaction"}
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 135] Level-0 flush table #217: started
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449011017874, "cf_name": "default", "job": 135, "event": "table_file_creation", "file_number": 217, "file_size": 489298, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89452, "largest_seqno": 89953, "table_properties": {"data_size": 486474, "index_size": 859, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6519, "raw_average_key_size": 18, "raw_value_size": 480793, "raw_average_value_size": 1354, "num_data_blocks": 38, "num_entries": 355, "num_filter_entries": 355, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769448982, "oldest_key_time": 1769448982, "file_creation_time": 1769449011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 217, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 135] Flush lasted 6299 microseconds, and 2818 cpu microseconds.
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.017944) [db/flush_job.cc:967] [default] [JOB 135] Level-0 flush table #217: 489298 bytes OK
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.018001) [db/memtable_list.cc:519] [default] Level-0 commit table #217 started
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.020248) [db/memtable_list.cc:722] [default] Level-0 commit table #217: memtable #1 done
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.020268) EVENT_LOG_v1 {"time_micros": 1769449011020262, "job": 135, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.020295) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 135] Try to delete WAL files size 492660, prev total WAL file size 492660, number of live WAL files 2.
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000213.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.021314) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0034303136' seq:72057594037927935, type:22 .. '6C6F676D0034323637' seq:0, type:0; will stop at (end)
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 136] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 135 Base level 0, inputs: [217(477KB)], [215(10090KB)]
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449011021376, "job": 136, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [217], "files_L6": [215], "score": -1, "input_data_size": 10822339, "oldest_snapshot_seqno": -1}
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 136] Generated table #218: 10191 keys, 10720423 bytes, temperature: kUnknown
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449011170353, "cf_name": "default", "job": 136, "event": "table_file_creation", "file_number": 218, "file_size": 10720423, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10659241, "index_size": 34573, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25541, "raw_key_size": 270154, "raw_average_key_size": 26, "raw_value_size": 10483536, "raw_average_value_size": 1028, "num_data_blocks": 1318, "num_entries": 10191, "num_filter_entries": 10191, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769449011, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 218, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.170706) [db/compaction/compaction_job.cc:1663] [default] [JOB 136] Compacted 1@0 + 1@6 files to L6 => 10720423 bytes
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.178437) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.6 rd, 71.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.9 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(44.0) write-amplify(21.9) OK, records in: 10711, records dropped: 520 output_compression: NoCompression
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.178457) EVENT_LOG_v1 {"time_micros": 1769449011178449, "job": 136, "event": "compaction_finished", "compaction_time_micros": 149081, "compaction_time_cpu_micros": 29033, "output_level": 6, "num_output_files": 1, "total_output_size": 10720423, "num_input_records": 10711, "num_output_records": 10191, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000217.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449011178901, "job": 136, "event": "table_file_deletion", "file_number": 217}
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000215.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449011181005, "job": 136, "event": "table_file_deletion", "file_number": 215}
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.020922) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.181294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.181306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.181308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.181310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:51 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:36:51.181312) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:36:51 compute-0 nova_compute[239965]: 2026-01-26 17:36:51.184 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:52 compute-0 nova_compute[239965]: 2026-01-26 17:36:52.532 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:52 compute-0 nova_compute[239965]: 2026-01-26 17:36:52.533 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:36:53 compute-0 ceph-mon[75140]: pgmap v4412: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:53 compute-0 nova_compute[239965]: 2026-01-26 17:36:53.534 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:54 compute-0 ceph-mon[75140]: pgmap v4413: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:54 compute-0 nova_compute[239965]: 2026-01-26 17:36:54.992 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:36:56 compute-0 nova_compute[239965]: 2026-01-26 17:36:56.185 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:36:56 compute-0 ceph-mon[75140]: pgmap v4414: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:58 compute-0 nova_compute[239965]: 2026-01-26 17:36:58.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:58 compute-0 ceph-mon[75140]: pgmap v4415: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:36:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:36:59.325 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:36:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:36:59.325 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:36:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:36:59.325 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:36:59 compute-0 nova_compute[239965]: 2026-01-26 17:36:59.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:36:59 compute-0 nova_compute[239965]: 2026-01-26 17:36:59.995 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:37:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:37:00 compute-0 ceph-mon[75140]: pgmap v4416: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:01 compute-0 nova_compute[239965]: 2026-01-26 17:37:01.219 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:01 compute-0 nova_compute[239965]: 2026-01-26 17:37:01.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:03 compute-0 ceph-mon[75140]: pgmap v4417: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:04 compute-0 ceph-mon[75140]: pgmap v4418: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:04 compute-0 nova_compute[239965]: 2026-01-26 17:37:04.998 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:06 compute-0 nova_compute[239965]: 2026-01-26 17:37:06.221 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:06 compute-0 ceph-mon[75140]: pgmap v4419: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:06 compute-0 nova_compute[239965]: 2026-01-26 17:37:06.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:08 compute-0 ceph-mon[75140]: pgmap v4420: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:10 compute-0 nova_compute[239965]: 2026-01-26 17:37:10.004 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:10 compute-0 ceph-mon[75140]: pgmap v4421: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:11 compute-0 nova_compute[239965]: 2026-01-26 17:37:11.222 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:12 compute-0 ceph-mon[75140]: pgmap v4422: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:12 compute-0 nova_compute[239965]: 2026-01-26 17:37:12.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:14 compute-0 ceph-mon[75140]: pgmap v4423: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:15 compute-0 nova_compute[239965]: 2026-01-26 17:37:15.051 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:16 compute-0 nova_compute[239965]: 2026-01-26 17:37:16.258 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:16 compute-0 ceph-mon[75140]: pgmap v4424: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:16 compute-0 podman[423975]: 2026-01-26 17:37:16.406147776 +0000 UTC m=+0.086378663 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 17:37:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:18 compute-0 ceph-mon[75140]: pgmap v4425: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:18 compute-0 nova_compute[239965]: 2026-01-26 17:37:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:18 compute-0 nova_compute[239965]: 2026-01-26 17:37:18.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:37:18 compute-0 nova_compute[239965]: 2026-01-26 17:37:18.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:18 compute-0 nova_compute[239965]: 2026-01-26 17:37:18.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:37:19 compute-0 nova_compute[239965]: 2026-01-26 17:37:19.067 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:37:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:19 compute-0 podman[423994]: 2026-01-26 17:37:19.393742619 +0000 UTC m=+0.082136390 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:37:20 compute-0 nova_compute[239965]: 2026-01-26 17:37:20.054 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:20 compute-0 ceph-mon[75140]: pgmap v4426: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:21 compute-0 nova_compute[239965]: 2026-01-26 17:37:21.258 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:22 compute-0 ceph-mon[75140]: pgmap v4427: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:23 compute-0 sudo[424020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:37:23 compute-0 sudo[424020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:23 compute-0 sudo[424020]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:24 compute-0 sudo[424045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 26 17:37:24 compute-0 sudo[424045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.068 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.068 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.068 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.091 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:37:24 compute-0 ceph-mon[75140]: pgmap v4428: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:24 compute-0 sudo[424045]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:37:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:24 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:37:24 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:24 compute-0 sudo[424089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:37:24 compute-0 sudo[424089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:24 compute-0 sudo[424089]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.540 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.540 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:37:24 compute-0 nova_compute[239965]: 2026-01-26 17:37:24.541 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:37:24 compute-0 sudo[424114]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:37:24 compute-0 sudo[424114]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:25 compute-0 nova_compute[239965]: 2026-01-26 17:37:25.056 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:37:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2139027988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:37:25 compute-0 sudo[424114]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:25 compute-0 nova_compute[239965]: 2026-01-26 17:37:25.097 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:37:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 26 17:37:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:37:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:37:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:37:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:37:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:37:25 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:37:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:37:25 compute-0 sudo[424192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:37:25 compute-0 sudo[424192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:25 compute-0 sudo[424192]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:25 compute-0 nova_compute[239965]: 2026-01-26 17:37:25.250 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:37:25 compute-0 nova_compute[239965]: 2026-01-26 17:37:25.251 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3514MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:37:25 compute-0 nova_compute[239965]: 2026-01-26 17:37:25.252 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:25 compute-0 nova_compute[239965]: 2026-01-26 17:37:25.252 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:25 compute-0 sudo[424217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:37:25 compute-0 sudo[424217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2139027988' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:37:25 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:37:25 compute-0 podman[424256]: 2026-01-26 17:37:25.509610235 +0000 UTC m=+0.038681667 container create d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:37:25 compute-0 systemd[1]: Started libpod-conmon-d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d.scope.
Jan 26 17:37:25 compute-0 podman[424256]: 2026-01-26 17:37:25.49183086 +0000 UTC m=+0.020902322 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:37:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:37:25 compute-0 podman[424256]: 2026-01-26 17:37:25.623288845 +0000 UTC m=+0.152360367 container init d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_allen, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 26 17:37:25 compute-0 podman[424256]: 2026-01-26 17:37:25.632040389 +0000 UTC m=+0.161111821 container start d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_allen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:37:25 compute-0 podman[424256]: 2026-01-26 17:37:25.636149999 +0000 UTC m=+0.165221451 container attach d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:37:25 compute-0 systemd[1]: libpod-d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d.scope: Deactivated successfully.
Jan 26 17:37:25 compute-0 pedantic_allen[424272]: 167 167
Jan 26 17:37:25 compute-0 podman[424256]: 2026-01-26 17:37:25.639195514 +0000 UTC m=+0.168267026 container died d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_allen, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:37:25 compute-0 conmon[424272]: conmon d9b016696e781e6c2001 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d.scope/container/memory.events
Jan 26 17:37:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a22ccf8f65cfc36b2dd3f8ef7c8c2fef737c311d41b068453920ed75df05d3f-merged.mount: Deactivated successfully.
Jan 26 17:37:25 compute-0 podman[424256]: 2026-01-26 17:37:25.685623199 +0000 UTC m=+0.214694631 container remove d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:37:25 compute-0 systemd[1]: libpod-conmon-d9b016696e781e6c20012bb575b40d7a5769dbde7763d539c170c30e7aae835d.scope: Deactivated successfully.
Jan 26 17:37:25 compute-0 podman[424297]: 2026-01-26 17:37:25.854712464 +0000 UTC m=+0.052447133 container create 9fb44e4e3c131a96b8ce0fddaea5d225f68d15e180778b513d35461e00b9f7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 17:37:25 compute-0 systemd[1]: Started libpod-conmon-9fb44e4e3c131a96b8ce0fddaea5d225f68d15e180778b513d35461e00b9f7d3.scope.
Jan 26 17:37:25 compute-0 podman[424297]: 2026-01-26 17:37:25.83039027 +0000 UTC m=+0.028125029 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:37:25 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480294392fa716d76c53655420bce58aaab887ee197e10be977131ee65bc3ebf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480294392fa716d76c53655420bce58aaab887ee197e10be977131ee65bc3ebf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480294392fa716d76c53655420bce58aaab887ee197e10be977131ee65bc3ebf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480294392fa716d76c53655420bce58aaab887ee197e10be977131ee65bc3ebf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/480294392fa716d76c53655420bce58aaab887ee197e10be977131ee65bc3ebf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:25 compute-0 podman[424297]: 2026-01-26 17:37:25.958100223 +0000 UTC m=+0.155834932 container init 9fb44e4e3c131a96b8ce0fddaea5d225f68d15e180778b513d35461e00b9f7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:37:25 compute-0 podman[424297]: 2026-01-26 17:37:25.975580911 +0000 UTC m=+0.173315620 container start 9fb44e4e3c131a96b8ce0fddaea5d225f68d15e180778b513d35461e00b9f7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 17:37:25 compute-0 podman[424297]: 2026-01-26 17:37:25.980328196 +0000 UTC m=+0.178062885 container attach 9fb44e4e3c131a96b8ce0fddaea5d225f68d15e180778b513d35461e00b9f7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:37:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #219. Immutable memtables: 0.
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.022113) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 137] Flushing memtable with next log file: 219
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449046022199, "job": 137, "event": "flush_started", "num_memtables": 1, "num_entries": 530, "num_deletes": 250, "total_data_size": 556040, "memory_usage": 567096, "flush_reason": "Manual Compaction"}
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 137] Level-0 flush table #220: started
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449046030259, "cf_name": "default", "job": 137, "event": "table_file_creation", "file_number": 220, "file_size": 549714, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 89954, "largest_seqno": 90483, "table_properties": {"data_size": 546686, "index_size": 999, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6017, "raw_average_key_size": 16, "raw_value_size": 540753, "raw_average_value_size": 1481, "num_data_blocks": 44, "num_entries": 365, "num_filter_entries": 365, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449012, "oldest_key_time": 1769449012, "file_creation_time": 1769449046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 220, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 137] Flush lasted 8213 microseconds, and 4326 cpu microseconds.
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.030331) [db/flush_job.cc:967] [default] [JOB 137] Level-0 flush table #220: 549714 bytes OK
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.030360) [db/memtable_list.cc:519] [default] Level-0 commit table #220 started
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.032367) [db/memtable_list.cc:722] [default] Level-0 commit table #220: memtable #1 done
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.032388) EVENT_LOG_v1 {"time_micros": 1769449046032382, "job": 137, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.032412) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 137] Try to delete WAL files size 553016, prev total WAL file size 553016, number of live WAL files 2.
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000216.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.033160) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323531' seq:72057594037927935, type:22 .. '6B7600353032' seq:0, type:0; will stop at (end)
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 138] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 137 Base level 0, inputs: [220(536KB)], [218(10MB)]
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449046033212, "job": 138, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [220], "files_L6": [218], "score": -1, "input_data_size": 11270137, "oldest_snapshot_seqno": -1}
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 138] Generated table #221: 10045 keys, 10217720 bytes, temperature: kUnknown
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449046145520, "cf_name": "default", "job": 138, "event": "table_file_creation", "file_number": 221, "file_size": 10217720, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10157757, "index_size": 33738, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25157, "raw_key_size": 268807, "raw_average_key_size": 26, "raw_value_size": 9984517, "raw_average_value_size": 993, "num_data_blocks": 1269, "num_entries": 10045, "num_filter_entries": 10045, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769449046, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 221, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.145965) [db/compaction/compaction_job.cc:1663] [default] [JOB 138] Compacted 1@0 + 1@6 files to L6 => 10217720 bytes
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.147627) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 100.3 rd, 90.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.2 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(39.1) write-amplify(18.6) OK, records in: 10556, records dropped: 511 output_compression: NoCompression
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.147642) EVENT_LOG_v1 {"time_micros": 1769449046147635, "job": 138, "event": "compaction_finished", "compaction_time_micros": 112360, "compaction_time_cpu_micros": 51801, "output_level": 6, "num_output_files": 1, "total_output_size": 10217720, "num_input_records": 10556, "num_output_records": 10045, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000220.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449046147961, "job": 138, "event": "table_file_deletion", "file_number": 220}
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000218.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449046149581, "job": 138, "event": "table_file_deletion", "file_number": 218}
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.032947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.149619) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.149624) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.149627) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.149641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:37:26 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:37:26.149643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:37:26 compute-0 nova_compute[239965]: 2026-01-26 17:37:26.171 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:37:26 compute-0 nova_compute[239965]: 2026-01-26 17:37:26.174 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:37:26 compute-0 nova_compute[239965]: 2026-01-26 17:37:26.189 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:37:26 compute-0 nova_compute[239965]: 2026-01-26 17:37:26.260 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:26 compute-0 jolly_hopper[424313]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:37:26 compute-0 jolly_hopper[424313]: --> All data devices are unavailable
Jan 26 17:37:26 compute-0 systemd[1]: libpod-9fb44e4e3c131a96b8ce0fddaea5d225f68d15e180778b513d35461e00b9f7d3.scope: Deactivated successfully.
Jan 26 17:37:26 compute-0 podman[424297]: 2026-01-26 17:37:26.554254992 +0000 UTC m=+0.751989671 container died 9fb44e4e3c131a96b8ce0fddaea5d225f68d15e180778b513d35461e00b9f7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:37:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-480294392fa716d76c53655420bce58aaab887ee197e10be977131ee65bc3ebf-merged.mount: Deactivated successfully.
Jan 26 17:37:26 compute-0 podman[424297]: 2026-01-26 17:37:26.599356265 +0000 UTC m=+0.797090944 container remove 9fb44e4e3c131a96b8ce0fddaea5d225f68d15e180778b513d35461e00b9f7d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hopper, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 26 17:37:26 compute-0 systemd[1]: libpod-conmon-9fb44e4e3c131a96b8ce0fddaea5d225f68d15e180778b513d35461e00b9f7d3.scope: Deactivated successfully.
Jan 26 17:37:26 compute-0 sudo[424217]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:26 compute-0 sudo[424365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:37:26 compute-0 sudo[424365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:26 compute-0 sudo[424365]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:37:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3559044394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:37:26 compute-0 nova_compute[239965]: 2026-01-26 17:37:26.749 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:37:26 compute-0 nova_compute[239965]: 2026-01-26 17:37:26.754 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:37:26 compute-0 nova_compute[239965]: 2026-01-26 17:37:26.773 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:37:26 compute-0 nova_compute[239965]: 2026-01-26 17:37:26.776 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:37:26 compute-0 nova_compute[239965]: 2026-01-26 17:37:26.777 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:37:26 compute-0 sudo[424390]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:37:26 compute-0 sudo[424390]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:27 compute-0 ceph-mon[75140]: pgmap v4429: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3559044394' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:37:27 compute-0 podman[424430]: 2026-01-26 17:37:27.072847084 +0000 UTC m=+0.047500113 container create 22eac71dc40d21576af68d10f9f09a90916487e60c70912575f6f77ffc7d3c9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 26 17:37:27 compute-0 systemd[1]: Started libpod-conmon-22eac71dc40d21576af68d10f9f09a90916487e60c70912575f6f77ffc7d3c9e.scope.
Jan 26 17:37:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:37:27 compute-0 podman[424430]: 2026-01-26 17:37:27.047942115 +0000 UTC m=+0.022595224 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:37:27 compute-0 podman[424430]: 2026-01-26 17:37:27.141974445 +0000 UTC m=+0.116627494 container init 22eac71dc40d21576af68d10f9f09a90916487e60c70912575f6f77ffc7d3c9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:37:27 compute-0 podman[424430]: 2026-01-26 17:37:27.14833963 +0000 UTC m=+0.122992699 container start 22eac71dc40d21576af68d10f9f09a90916487e60c70912575f6f77ffc7d3c9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:37:27 compute-0 practical_lamarr[424446]: 167 167
Jan 26 17:37:27 compute-0 podman[424430]: 2026-01-26 17:37:27.152747598 +0000 UTC m=+0.127400647 container attach 22eac71dc40d21576af68d10f9f09a90916487e60c70912575f6f77ffc7d3c9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:37:27 compute-0 systemd[1]: libpod-22eac71dc40d21576af68d10f9f09a90916487e60c70912575f6f77ffc7d3c9e.scope: Deactivated successfully.
Jan 26 17:37:27 compute-0 podman[424430]: 2026-01-26 17:37:27.154286405 +0000 UTC m=+0.128939474 container died 22eac71dc40d21576af68d10f9f09a90916487e60c70912575f6f77ffc7d3c9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:37:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-24a24d4d7630c74668261dd50f474449380c392dcbfaed51a51d1cfc44855bd4-merged.mount: Deactivated successfully.
Jan 26 17:37:27 compute-0 podman[424430]: 2026-01-26 17:37:27.194244453 +0000 UTC m=+0.168897482 container remove 22eac71dc40d21576af68d10f9f09a90916487e60c70912575f6f77ffc7d3c9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_lamarr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:37:27 compute-0 systemd[1]: libpod-conmon-22eac71dc40d21576af68d10f9f09a90916487e60c70912575f6f77ffc7d3c9e.scope: Deactivated successfully.
Jan 26 17:37:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:27 compute-0 podman[424470]: 2026-01-26 17:37:27.353815025 +0000 UTC m=+0.043549025 container create 610c06d415f13acbae8562a6b630c4bed1ad1dd9da65341e23f1ed9e834306d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_driscoll, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:37:27 compute-0 systemd[1]: Started libpod-conmon-610c06d415f13acbae8562a6b630c4bed1ad1dd9da65341e23f1ed9e834306d6.scope.
Jan 26 17:37:27 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d05f7aa9be1ccd69b46d25566beaa1752bff191abbbbf2e6e4865b5b5183f33/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d05f7aa9be1ccd69b46d25566beaa1752bff191abbbbf2e6e4865b5b5183f33/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d05f7aa9be1ccd69b46d25566beaa1752bff191abbbbf2e6e4865b5b5183f33/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d05f7aa9be1ccd69b46d25566beaa1752bff191abbbbf2e6e4865b5b5183f33/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:27 compute-0 podman[424470]: 2026-01-26 17:37:27.334056422 +0000 UTC m=+0.023790452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:37:27 compute-0 podman[424470]: 2026-01-26 17:37:27.43332658 +0000 UTC m=+0.123060600 container init 610c06d415f13acbae8562a6b630c4bed1ad1dd9da65341e23f1ed9e834306d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 17:37:27 compute-0 podman[424470]: 2026-01-26 17:37:27.438954018 +0000 UTC m=+0.128688018 container start 610c06d415f13acbae8562a6b630c4bed1ad1dd9da65341e23f1ed9e834306d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_driscoll, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:37:27 compute-0 podman[424470]: 2026-01-26 17:37:27.442306919 +0000 UTC m=+0.132040919 container attach 610c06d415f13acbae8562a6b630c4bed1ad1dd9da65341e23f1ed9e834306d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_driscoll, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:37:27 compute-0 competent_driscoll[424486]: {
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:     "0": [
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:         {
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "devices": [
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "/dev/loop3"
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             ],
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_name": "ceph_lv0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_size": "21470642176",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "name": "ceph_lv0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "tags": {
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.cluster_name": "ceph",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.crush_device_class": "",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.encrypted": "0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.objectstore": "bluestore",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.osd_id": "0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.type": "block",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.vdo": "0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.with_tpm": "0"
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             },
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "type": "block",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "vg_name": "ceph_vg0"
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:         }
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:     ],
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:     "1": [
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:         {
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "devices": [
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "/dev/loop4"
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             ],
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_name": "ceph_lv1",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_size": "21470642176",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "name": "ceph_lv1",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "tags": {
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.cluster_name": "ceph",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.crush_device_class": "",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.encrypted": "0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.objectstore": "bluestore",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.osd_id": "1",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.type": "block",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.vdo": "0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.with_tpm": "0"
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             },
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "type": "block",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "vg_name": "ceph_vg1"
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:         }
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:     ],
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:     "2": [
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:         {
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "devices": [
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "/dev/loop5"
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             ],
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_name": "ceph_lv2",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_size": "21470642176",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "name": "ceph_lv2",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "tags": {
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.cluster_name": "ceph",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.crush_device_class": "",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.encrypted": "0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.objectstore": "bluestore",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.osd_id": "2",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.type": "block",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.vdo": "0",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:                 "ceph.with_tpm": "0"
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             },
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "type": "block",
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:             "vg_name": "ceph_vg2"
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:         }
Jan 26 17:37:27 compute-0 competent_driscoll[424486]:     ]
Jan 26 17:37:27 compute-0 competent_driscoll[424486]: }
Jan 26 17:37:27 compute-0 systemd[1]: libpod-610c06d415f13acbae8562a6b630c4bed1ad1dd9da65341e23f1ed9e834306d6.scope: Deactivated successfully.
Jan 26 17:37:27 compute-0 podman[424470]: 2026-01-26 17:37:27.727947185 +0000 UTC m=+0.417681185 container died 610c06d415f13acbae8562a6b630c4bed1ad1dd9da65341e23f1ed9e834306d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_driscoll, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:37:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d05f7aa9be1ccd69b46d25566beaa1752bff191abbbbf2e6e4865b5b5183f33-merged.mount: Deactivated successfully.
Jan 26 17:37:28 compute-0 podman[424470]: 2026-01-26 17:37:28.217931508 +0000 UTC m=+0.907665498 container remove 610c06d415f13acbae8562a6b630c4bed1ad1dd9da65341e23f1ed9e834306d6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_driscoll, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 26 17:37:28 compute-0 ceph-mon[75140]: pgmap v4430: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:28 compute-0 sudo[424390]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:28 compute-0 systemd[1]: libpod-conmon-610c06d415f13acbae8562a6b630c4bed1ad1dd9da65341e23f1ed9e834306d6.scope: Deactivated successfully.
Jan 26 17:37:28 compute-0 sudo[424507]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:37:28 compute-0 sudo[424507]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:28 compute-0 sudo[424507]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:28 compute-0 sudo[424532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:37:28 compute-0 sudo[424532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:28 compute-0 podman[424569]: 2026-01-26 17:37:28.633104291 +0000 UTC m=+0.034494465 container create ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 17:37:28 compute-0 systemd[1]: Started libpod-conmon-ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432.scope.
Jan 26 17:37:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:37:28 compute-0 podman[424569]: 2026-01-26 17:37:28.699709929 +0000 UTC m=+0.101100113 container init ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:37:28 compute-0 sshd-session[424505]: Invalid user solv from 45.148.10.240 port 57694
Jan 26 17:37:28 compute-0 podman[424569]: 2026-01-26 17:37:28.711016456 +0000 UTC m=+0.112406660 container start ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:37:28 compute-0 zen_boyd[424585]: 167 167
Jan 26 17:37:28 compute-0 systemd[1]: libpod-ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432.scope: Deactivated successfully.
Jan 26 17:37:28 compute-0 podman[424569]: 2026-01-26 17:37:28.617411837 +0000 UTC m=+0.018802051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:37:28 compute-0 conmon[424585]: conmon ed953f34ef4beb21bb77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432.scope/container/memory.events
Jan 26 17:37:28 compute-0 podman[424569]: 2026-01-26 17:37:28.714928782 +0000 UTC m=+0.116318956 container attach ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_boyd, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:37:28 compute-0 podman[424569]: 2026-01-26 17:37:28.716451809 +0000 UTC m=+0.117841993 container died ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_boyd, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 26 17:37:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-5efe250e41e69d1968397e57ad14c115068fb17a98877531c40b42d00245eb5d-merged.mount: Deactivated successfully.
Jan 26 17:37:28 compute-0 podman[424569]: 2026-01-26 17:37:28.749360124 +0000 UTC m=+0.150750288 container remove ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:37:28 compute-0 systemd[1]: libpod-conmon-ed953f34ef4beb21bb77857e423e38880736a5e39d9e9eec3479ccd3ff829432.scope: Deactivated successfully.
Jan 26 17:37:28 compute-0 sshd-session[424505]: Connection closed by invalid user solv 45.148.10.240 port 57694 [preauth]
Jan 26 17:37:28 compute-0 podman[424611]: 2026-01-26 17:37:28.909965851 +0000 UTC m=+0.038391879 container create 31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 17:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:37:28
Jan 26 17:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'backups', 'images', '.rgw.root', 'vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 26 17:37:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:37:28 compute-0 systemd[1]: Started libpod-conmon-31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb.scope.
Jan 26 17:37:28 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6639ab073d831510b2715f1f87666ef83ec22d4cd162b389a50dc55e1085e15e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6639ab073d831510b2715f1f87666ef83ec22d4cd162b389a50dc55e1085e15e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6639ab073d831510b2715f1f87666ef83ec22d4cd162b389a50dc55e1085e15e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6639ab073d831510b2715f1f87666ef83ec22d4cd162b389a50dc55e1085e15e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:37:28 compute-0 podman[424611]: 2026-01-26 17:37:28.976275864 +0000 UTC m=+0.104701902 container init 31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:37:28 compute-0 podman[424611]: 2026-01-26 17:37:28.981915621 +0000 UTC m=+0.110341649 container start 31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:37:28 compute-0 podman[424611]: 2026-01-26 17:37:28.985113549 +0000 UTC m=+0.113539597 container attach 31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_robinson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 17:37:28 compute-0 podman[424611]: 2026-01-26 17:37:28.894183675 +0000 UTC m=+0.022609723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:37:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:29 compute-0 lvm[424708]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:37:29 compute-0 lvm[424705]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:37:29 compute-0 lvm[424705]: VG ceph_vg0 finished
Jan 26 17:37:29 compute-0 lvm[424708]: VG ceph_vg1 finished
Jan 26 17:37:29 compute-0 lvm[424710]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:37:29 compute-0 lvm[424710]: VG ceph_vg2 finished
Jan 26 17:37:29 compute-0 ecstatic_robinson[424628]: {}
Jan 26 17:37:29 compute-0 systemd[1]: libpod-31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb.scope: Deactivated successfully.
Jan 26 17:37:29 compute-0 systemd[1]: libpod-31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb.scope: Consumed 1.217s CPU time.
Jan 26 17:37:29 compute-0 podman[424713]: 2026-01-26 17:37:29.782365496 +0000 UTC m=+0.020792959 container died 31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 26 17:37:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6639ab073d831510b2715f1f87666ef83ec22d4cd162b389a50dc55e1085e15e-merged.mount: Deactivated successfully.
Jan 26 17:37:29 compute-0 podman[424713]: 2026-01-26 17:37:29.815870715 +0000 UTC m=+0.054298148 container remove 31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_robinson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 26 17:37:29 compute-0 systemd[1]: libpod-conmon-31dc08fdc2f5f23efb581d6fcadc2378041f89bec79f3585a835239099dac3eb.scope: Deactivated successfully.
Jan 26 17:37:29 compute-0 sudo[424532]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:37:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:29 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:37:29 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:29 compute-0 sudo[424728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:37:29 compute-0 sudo[424728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:37:29 compute-0 sudo[424728]: pam_unix(sudo:session): session closed for user root
Jan 26 17:37:30 compute-0 nova_compute[239965]: 2026-01-26 17:37:30.059 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:37:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:37:30 compute-0 nova_compute[239965]: 2026-01-26 17:37:30.770 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:30 compute-0 ceph-mon[75140]: pgmap v4431: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:37:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:31 compute-0 nova_compute[239965]: 2026-01-26 17:37:31.261 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:37:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:37:32 compute-0 ceph-mon[75140]: pgmap v4432: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:34 compute-0 ceph-mon[75140]: pgmap v4433: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:35 compute-0 nova_compute[239965]: 2026-01-26 17:37:35.103 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:36 compute-0 nova_compute[239965]: 2026-01-26 17:37:36.263 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:36 compute-0 ceph-mon[75140]: pgmap v4434: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:38 compute-0 ceph-mon[75140]: pgmap v4435: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:40 compute-0 nova_compute[239965]: 2026-01-26 17:37:40.109 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:40 compute-0 ceph-mon[75140]: pgmap v4436: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:41 compute-0 nova_compute[239965]: 2026-01-26 17:37:41.264 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:42 compute-0 ceph-mon[75140]: pgmap v4437: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:45 compute-0 ceph-mon[75140]: pgmap v4438: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:45 compute-0 nova_compute[239965]: 2026-01-26 17:37:45.110 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:46 compute-0 nova_compute[239965]: 2026-01-26 17:37:46.266 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:47 compute-0 ceph-mon[75140]: pgmap v4439: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:47 compute-0 podman[424753]: 2026-01-26 17:37:47.389335888 +0000 UTC m=+0.073177341 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 26 17:37:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:37:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774591147' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:37:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:37:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3774591147' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:37:49 compute-0 ceph-mon[75140]: pgmap v4440: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3774591147' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:37:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3774591147' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:37:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:37:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:37:50 compute-0 nova_compute[239965]: 2026-01-26 17:37:50.113 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:50 compute-0 ceph-mon[75140]: pgmap v4441: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:50 compute-0 podman[424772]: 2026-01-26 17:37:50.42963635 +0000 UTC m=+0.113042086 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 26 17:37:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:51 compute-0 nova_compute[239965]: 2026-01-26 17:37:51.269 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:52 compute-0 ceph-mon[75140]: pgmap v4442: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:37:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 17:37:54 compute-0 ceph-mon[75140]: pgmap v4443: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 17:37:54 compute-0 nova_compute[239965]: 2026-01-26 17:37:54.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:55 compute-0 nova_compute[239965]: 2026-01-26 17:37:55.116 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 17:37:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:37:56 compute-0 nova_compute[239965]: 2026-01-26 17:37:56.271 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:37:56 compute-0 ceph-mon[75140]: pgmap v4444: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Jan 26 17:37:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:37:58 compute-0 ceph-mon[75140]: pgmap v4445: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:37:58 compute-0 nova_compute[239965]: 2026-01-26 17:37:58.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:37:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:37:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:37:59.326 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:37:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:37:59.327 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:37:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:37:59.327 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:38:00 compute-0 nova_compute[239965]: 2026-01-26 17:38:00.121 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:00 compute-0 ceph-mon[75140]: pgmap v4446: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:38:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:38:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:38:01 compute-0 nova_compute[239965]: 2026-01-26 17:38:01.273 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:01 compute-0 nova_compute[239965]: 2026-01-26 17:38:01.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:02 compute-0 ceph-mon[75140]: pgmap v4447: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:38:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:38:03 compute-0 nova_compute[239965]: 2026-01-26 17:38:03.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:04 compute-0 ceph-mon[75140]: pgmap v4448: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:38:05 compute-0 nova_compute[239965]: 2026-01-26 17:38:05.124 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 17:38:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:06 compute-0 nova_compute[239965]: 2026-01-26 17:38:06.273 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:06 compute-0 ceph-mon[75140]: pgmap v4449: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 17:38:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 17:38:07 compute-0 nova_compute[239965]: 2026-01-26 17:38:07.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:08 compute-0 ceph-mon[75140]: pgmap v4450: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Jan 26 17:38:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:10 compute-0 nova_compute[239965]: 2026-01-26 17:38:10.127 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:10 compute-0 ceph-mon[75140]: pgmap v4451: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:11 compute-0 nova_compute[239965]: 2026-01-26 17:38:11.276 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:12 compute-0 ceph-mon[75140]: pgmap v4452: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:13 compute-0 nova_compute[239965]: 2026-01-26 17:38:13.503 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:14 compute-0 ceph-mon[75140]: pgmap v4453: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:15 compute-0 nova_compute[239965]: 2026-01-26 17:38:15.132 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:16 compute-0 nova_compute[239965]: 2026-01-26 17:38:16.278 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:16 compute-0 ceph-mon[75140]: pgmap v4454: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:18 compute-0 podman[424798]: 2026-01-26 17:38:18.363804972 +0000 UTC m=+0.052494935 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:38:18 compute-0 nova_compute[239965]: 2026-01-26 17:38:18.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:18 compute-0 nova_compute[239965]: 2026-01-26 17:38:18.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:38:18 compute-0 ceph-mon[75140]: pgmap v4455: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:20 compute-0 nova_compute[239965]: 2026-01-26 17:38:20.136 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:20 compute-0 ceph-mon[75140]: pgmap v4456: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:21 compute-0 nova_compute[239965]: 2026-01-26 17:38:21.280 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:21 compute-0 podman[424818]: 2026-01-26 17:38:21.407549817 +0000 UTC m=+0.101112363 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:38:22 compute-0 ceph-mon[75140]: pgmap v4457: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:23 compute-0 nova_compute[239965]: 2026-01-26 17:38:23.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:23 compute-0 nova_compute[239965]: 2026-01-26 17:38:23.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:38:23 compute-0 nova_compute[239965]: 2026-01-26 17:38:23.512 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:38:23 compute-0 nova_compute[239965]: 2026-01-26 17:38:23.613 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:38:24 compute-0 nova_compute[239965]: 2026-01-26 17:38:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:24 compute-0 nova_compute[239965]: 2026-01-26 17:38:24.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:38:24 compute-0 nova_compute[239965]: 2026-01-26 17:38:24.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:38:24 compute-0 nova_compute[239965]: 2026-01-26 17:38:24.539 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:38:24 compute-0 nova_compute[239965]: 2026-01-26 17:38:24.540 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:38:24 compute-0 nova_compute[239965]: 2026-01-26 17:38:24.540 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:38:24 compute-0 ceph-mon[75140]: pgmap v4458: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:38:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/7151489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.119 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.146 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.268 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.269 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3562MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.270 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.270 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:38:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.338 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.338 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.353 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:38:25 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/7151489' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:38:25 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:38:25 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943219211' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.949 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.957 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.982 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.984 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:38:25 compute-0 nova_compute[239965]: 2026-01-26 17:38:25.984 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:38:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:26 compute-0 nova_compute[239965]: 2026-01-26 17:38:26.280 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:26 compute-0 ceph-mon[75140]: pgmap v4459: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/943219211' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:38:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:28 compute-0 ceph-mon[75140]: pgmap v4460: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:38:28
Jan 26 17:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['images', 'vms', 'volumes', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control']
Jan 26 17:38:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:38:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:30 compute-0 sudo[424890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:38:30 compute-0 sudo[424890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:38:30 compute-0 sudo[424890]: pam_unix(sudo:session): session closed for user root
Jan 26 17:38:30 compute-0 sudo[424915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:38:30 compute-0 sudo[424915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:38:30 compute-0 nova_compute[239965]: 2026-01-26 17:38:30.149 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:30 compute-0 sshd-session[424888]: Invalid user ftpuser from 176.120.22.13 port 42952
Jan 26 17:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:38:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:38:30 compute-0 sudo[424915]: pam_unix(sudo:session): session closed for user root
Jan 26 17:38:30 compute-0 sshd-session[424888]: Connection reset by invalid user ftpuser 176.120.22.13 port 42952 [preauth]
Jan 26 17:38:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:38:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:38:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:38:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:38:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:38:30 compute-0 ceph-mon[75140]: pgmap v4461: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:38:30 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:38:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:38:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:38:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:38:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:38:30 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:38:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:38:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:38:30 compute-0 sudo[424971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:38:30 compute-0 sudo[424971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:38:30 compute-0 sudo[424971]: pam_unix(sudo:session): session closed for user root
Jan 26 17:38:30 compute-0 sudo[424996]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:38:30 compute-0 sudo[424996]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:38:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:31 compute-0 podman[425036]: 2026-01-26 17:38:31.139119295 +0000 UTC m=+0.046285773 container create 13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:38:31 compute-0 systemd[1]: Started libpod-conmon-13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd.scope.
Jan 26 17:38:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:38:31 compute-0 podman[425036]: 2026-01-26 17:38:31.207906757 +0000 UTC m=+0.115073225 container init 13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jennings, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 17:38:31 compute-0 podman[425036]: 2026-01-26 17:38:31.116582304 +0000 UTC m=+0.023748782 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:38:31 compute-0 podman[425036]: 2026-01-26 17:38:31.216247011 +0000 UTC m=+0.123413459 container start 13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 26 17:38:31 compute-0 hungry_jennings[425052]: 167 167
Jan 26 17:38:31 compute-0 podman[425036]: 2026-01-26 17:38:31.221225743 +0000 UTC m=+0.128392221 container attach 13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jennings, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 17:38:31 compute-0 systemd[1]: libpod-13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd.scope: Deactivated successfully.
Jan 26 17:38:31 compute-0 conmon[425052]: conmon 13492744d2e818464ac0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd.scope/container/memory.events
Jan 26 17:38:31 compute-0 podman[425036]: 2026-01-26 17:38:31.223094949 +0000 UTC m=+0.130261417 container died 13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jennings, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 26 17:38:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-da550848eecdc157ebea0a0a1e014f4dcde3e800f5073ed470e1710b9d457675-merged.mount: Deactivated successfully.
Jan 26 17:38:31 compute-0 podman[425036]: 2026-01-26 17:38:31.273100832 +0000 UTC m=+0.180267290 container remove 13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_jennings, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:31 compute-0 nova_compute[239965]: 2026-01-26 17:38:31.282 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:31 compute-0 systemd[1]: libpod-conmon-13492744d2e818464ac071c1532e5f0c08daebf889671b10fb5c481f3254e7dd.scope: Deactivated successfully.
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:38:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:38:31 compute-0 podman[425076]: 2026-01-26 17:38:31.471464623 +0000 UTC m=+0.055169920 container create d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cartwright, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:38:31 compute-0 systemd[1]: Started libpod-conmon-d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6.scope.
Jan 26 17:38:31 compute-0 podman[425076]: 2026-01-26 17:38:31.455526663 +0000 UTC m=+0.039231970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:38:31 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca92e2ef14aafe933c661f48529abeefb044f0289d029c3375ce89f1044d9a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca92e2ef14aafe933c661f48529abeefb044f0289d029c3375ce89f1044d9a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca92e2ef14aafe933c661f48529abeefb044f0289d029c3375ce89f1044d9a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca92e2ef14aafe933c661f48529abeefb044f0289d029c3375ce89f1044d9a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca92e2ef14aafe933c661f48529abeefb044f0289d029c3375ce89f1044d9a8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:31 compute-0 podman[425076]: 2026-01-26 17:38:31.56945805 +0000 UTC m=+0.153163347 container init d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 17:38:31 compute-0 podman[425076]: 2026-01-26 17:38:31.579950016 +0000 UTC m=+0.163655303 container start d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:38:31 compute-0 podman[425076]: 2026-01-26 17:38:31.584245761 +0000 UTC m=+0.167951068 container attach d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:38:31 compute-0 sshd-session[425019]: Connection reset by authenticating user root 176.120.22.13 port 42956 [preauth]
Jan 26 17:38:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:38:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:38:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:38:31 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:38:32 compute-0 strange_cartwright[425093]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:38:32 compute-0 strange_cartwright[425093]: --> All data devices are unavailable
Jan 26 17:38:32 compute-0 systemd[1]: libpod-d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6.scope: Deactivated successfully.
Jan 26 17:38:32 compute-0 conmon[425093]: conmon d2f144bb1c2676cb9e6e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6.scope/container/memory.events
Jan 26 17:38:32 compute-0 podman[425076]: 2026-01-26 17:38:32.163880506 +0000 UTC m=+0.747585793 container died d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cartwright, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca92e2ef14aafe933c661f48529abeefb044f0289d029c3375ce89f1044d9a8-merged.mount: Deactivated successfully.
Jan 26 17:38:32 compute-0 podman[425076]: 2026-01-26 17:38:32.208153829 +0000 UTC m=+0.791859116 container remove d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cartwright, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:38:32 compute-0 systemd[1]: libpod-conmon-d2f144bb1c2676cb9e6efb8e24488bbd11e93752f9e5b71f7a2bd4e76b1cb2b6.scope: Deactivated successfully.
Jan 26 17:38:32 compute-0 sudo[424996]: pam_unix(sudo:session): session closed for user root
Jan 26 17:38:32 compute-0 sudo[425127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:38:32 compute-0 sudo[425127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:38:32 compute-0 sudo[425127]: pam_unix(sudo:session): session closed for user root
Jan 26 17:38:32 compute-0 sudo[425152]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:38:32 compute-0 sudo[425152]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:38:32 compute-0 podman[425188]: 2026-01-26 17:38:32.685661377 +0000 UTC m=+0.056805461 container create b387eccd23d3b1bd7d6f63f6a9ac9c252f6e6baa64aedc687fd7bd39b7b1c586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:38:32 compute-0 systemd[1]: Started libpod-conmon-b387eccd23d3b1bd7d6f63f6a9ac9c252f6e6baa64aedc687fd7bd39b7b1c586.scope.
Jan 26 17:38:32 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:38:32 compute-0 podman[425188]: 2026-01-26 17:38:32.665342959 +0000 UTC m=+0.036487063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:38:32 compute-0 podman[425188]: 2026-01-26 17:38:32.774304024 +0000 UTC m=+0.145448188 container init b387eccd23d3b1bd7d6f63f6a9ac9c252f6e6baa64aedc687fd7bd39b7b1c586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:38:32 compute-0 podman[425188]: 2026-01-26 17:38:32.780126707 +0000 UTC m=+0.151270811 container start b387eccd23d3b1bd7d6f63f6a9ac9c252f6e6baa64aedc687fd7bd39b7b1c586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mcclintock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:38:32 compute-0 podman[425188]: 2026-01-26 17:38:32.785474238 +0000 UTC m=+0.156618422 container attach b387eccd23d3b1bd7d6f63f6a9ac9c252f6e6baa64aedc687fd7bd39b7b1c586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:38:32 compute-0 stupefied_mcclintock[425205]: 167 167
Jan 26 17:38:32 compute-0 systemd[1]: libpod-b387eccd23d3b1bd7d6f63f6a9ac9c252f6e6baa64aedc687fd7bd39b7b1c586.scope: Deactivated successfully.
Jan 26 17:38:32 compute-0 podman[425188]: 2026-01-26 17:38:32.787507798 +0000 UTC m=+0.158651912 container died b387eccd23d3b1bd7d6f63f6a9ac9c252f6e6baa64aedc687fd7bd39b7b1c586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mcclintock, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:38:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd40509f61297da98e4d204f5f8ea12957166d55a04a4ee10e88f945346ed285-merged.mount: Deactivated successfully.
Jan 26 17:38:32 compute-0 ceph-mon[75140]: pgmap v4462: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:32 compute-0 podman[425188]: 2026-01-26 17:38:32.835613594 +0000 UTC m=+0.206757708 container remove b387eccd23d3b1bd7d6f63f6a9ac9c252f6e6baa64aedc687fd7bd39b7b1c586 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 17:38:32 compute-0 systemd[1]: libpod-conmon-b387eccd23d3b1bd7d6f63f6a9ac9c252f6e6baa64aedc687fd7bd39b7b1c586.scope: Deactivated successfully.
Jan 26 17:38:33 compute-0 podman[425230]: 2026-01-26 17:38:33.018907727 +0000 UTC m=+0.047388061 container create 6c4b481c2e29789f2de9128e428871a5ed6e478f5d860e974ad11af3c7899a8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:38:33 compute-0 systemd[1]: Started libpod-conmon-6c4b481c2e29789f2de9128e428871a5ed6e478f5d860e974ad11af3c7899a8d.scope.
Jan 26 17:38:33 compute-0 sshd-session[425107]: Invalid user ftpuser from 176.120.22.13 port 49830
Jan 26 17:38:33 compute-0 podman[425230]: 2026-01-26 17:38:32.994212672 +0000 UTC m=+0.022693086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:38:33 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb29e4ff89357bcd933f3bb8ca76a66395f750ef5441bd188f0c3d879f706399/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb29e4ff89357bcd933f3bb8ca76a66395f750ef5441bd188f0c3d879f706399/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb29e4ff89357bcd933f3bb8ca76a66395f750ef5441bd188f0c3d879f706399/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb29e4ff89357bcd933f3bb8ca76a66395f750ef5441bd188f0c3d879f706399/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:33 compute-0 podman[425230]: 2026-01-26 17:38:33.11026031 +0000 UTC m=+0.138740644 container init 6c4b481c2e29789f2de9128e428871a5ed6e478f5d860e974ad11af3c7899a8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 17:38:33 compute-0 podman[425230]: 2026-01-26 17:38:33.127169714 +0000 UTC m=+0.155650048 container start 6c4b481c2e29789f2de9128e428871a5ed6e478f5d860e974ad11af3c7899a8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 17:38:33 compute-0 podman[425230]: 2026-01-26 17:38:33.131633403 +0000 UTC m=+0.160113737 container attach 6c4b481c2e29789f2de9128e428871a5ed6e478f5d860e974ad11af3c7899a8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 26 17:38:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:33 compute-0 sshd-session[425107]: Connection reset by invalid user ftpuser 176.120.22.13 port 49830 [preauth]
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]: {
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:     "0": [
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:         {
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "devices": [
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "/dev/loop3"
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             ],
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_name": "ceph_lv0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_size": "21470642176",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "name": "ceph_lv0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "tags": {
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.cluster_name": "ceph",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.crush_device_class": "",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.encrypted": "0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.objectstore": "bluestore",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.osd_id": "0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.type": "block",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.vdo": "0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.with_tpm": "0"
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             },
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "type": "block",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "vg_name": "ceph_vg0"
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:         }
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:     ],
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:     "1": [
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:         {
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "devices": [
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "/dev/loop4"
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             ],
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_name": "ceph_lv1",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_size": "21470642176",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "name": "ceph_lv1",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "tags": {
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.cluster_name": "ceph",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.crush_device_class": "",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.encrypted": "0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.objectstore": "bluestore",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.osd_id": "1",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.type": "block",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.vdo": "0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.with_tpm": "0"
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             },
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "type": "block",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "vg_name": "ceph_vg1"
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:         }
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:     ],
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:     "2": [
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:         {
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "devices": [
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "/dev/loop5"
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             ],
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_name": "ceph_lv2",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_size": "21470642176",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "name": "ceph_lv2",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "tags": {
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.cluster_name": "ceph",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.crush_device_class": "",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.encrypted": "0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.objectstore": "bluestore",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.osd_id": "2",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.type": "block",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.vdo": "0",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:                 "ceph.with_tpm": "0"
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             },
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "type": "block",
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:             "vg_name": "ceph_vg2"
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:         }
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]:     ]
Jan 26 17:38:33 compute-0 vigorous_jackson[425247]: }
Jan 26 17:38:33 compute-0 systemd[1]: libpod-6c4b481c2e29789f2de9128e428871a5ed6e478f5d860e974ad11af3c7899a8d.scope: Deactivated successfully.
Jan 26 17:38:33 compute-0 podman[425230]: 2026-01-26 17:38:33.466141983 +0000 UTC m=+0.494622357 container died 6c4b481c2e29789f2de9128e428871a5ed6e478f5d860e974ad11af3c7899a8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:38:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb29e4ff89357bcd933f3bb8ca76a66395f750ef5441bd188f0c3d879f706399-merged.mount: Deactivated successfully.
Jan 26 17:38:33 compute-0 podman[425230]: 2026-01-26 17:38:33.515400258 +0000 UTC m=+0.543880642 container remove 6c4b481c2e29789f2de9128e428871a5ed6e478f5d860e974ad11af3c7899a8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_jackson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 17:38:33 compute-0 systemd[1]: libpod-conmon-6c4b481c2e29789f2de9128e428871a5ed6e478f5d860e974ad11af3c7899a8d.scope: Deactivated successfully.
Jan 26 17:38:33 compute-0 sudo[425152]: pam_unix(sudo:session): session closed for user root
Jan 26 17:38:33 compute-0 sudo[425271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:38:33 compute-0 sudo[425271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:38:33 compute-0 sudo[425271]: pam_unix(sudo:session): session closed for user root
Jan 26 17:38:33 compute-0 sudo[425296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:38:33 compute-0 sudo[425296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:38:33 compute-0 podman[425334]: 2026-01-26 17:38:33.989929703 +0000 UTC m=+0.036815231 container create d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_beaver, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:38:34 compute-0 systemd[1]: Started libpod-conmon-d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4.scope.
Jan 26 17:38:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:38:34 compute-0 podman[425334]: 2026-01-26 17:38:33.974816084 +0000 UTC m=+0.021701632 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:38:34 compute-0 podman[425334]: 2026-01-26 17:38:34.07771114 +0000 UTC m=+0.124596678 container init d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_beaver, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 26 17:38:34 compute-0 podman[425334]: 2026-01-26 17:38:34.08918196 +0000 UTC m=+0.136067488 container start d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 17:38:34 compute-0 podman[425334]: 2026-01-26 17:38:34.093191378 +0000 UTC m=+0.140076906 container attach d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_beaver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:38:34 compute-0 pensive_beaver[425350]: 167 167
Jan 26 17:38:34 compute-0 systemd[1]: libpod-d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4.scope: Deactivated successfully.
Jan 26 17:38:34 compute-0 conmon[425350]: conmon d78fa419231cefce8713 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4.scope/container/memory.events
Jan 26 17:38:34 compute-0 podman[425334]: 2026-01-26 17:38:34.095346261 +0000 UTC m=+0.142231829 container died d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_beaver, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:38:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8e242ade5415340c778def7ee04f9b2a84d0f88eda51ca55323a200f3b6b862-merged.mount: Deactivated successfully.
Jan 26 17:38:34 compute-0 podman[425334]: 2026-01-26 17:38:34.130610224 +0000 UTC m=+0.177495752 container remove d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:38:34 compute-0 systemd[1]: libpod-conmon-d78fa419231cefce8713ddf3e9c1e7020365fe9be3d724f1f2f36938cf246aa4.scope: Deactivated successfully.
Jan 26 17:38:34 compute-0 podman[425372]: 2026-01-26 17:38:34.281002901 +0000 UTC m=+0.039127388 container create cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:38:34 compute-0 systemd[1]: Started libpod-conmon-cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405.scope.
Jan 26 17:38:34 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420da484662514989539ab17ee29635b76c89fb1944690b05638d8992e50478d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420da484662514989539ab17ee29635b76c89fb1944690b05638d8992e50478d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420da484662514989539ab17ee29635b76c89fb1944690b05638d8992e50478d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/420da484662514989539ab17ee29635b76c89fb1944690b05638d8992e50478d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:38:34 compute-0 podman[425372]: 2026-01-26 17:38:34.263041722 +0000 UTC m=+0.021166219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:38:34 compute-0 podman[425372]: 2026-01-26 17:38:34.362044703 +0000 UTC m=+0.120169200 container init cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_booth, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:38:34 compute-0 podman[425372]: 2026-01-26 17:38:34.370275874 +0000 UTC m=+0.128400351 container start cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_booth, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:38:34 compute-0 podman[425372]: 2026-01-26 17:38:34.373257537 +0000 UTC m=+0.131382014 container attach cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 17:38:34 compute-0 ceph-mon[75140]: pgmap v4463: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:34 compute-0 sshd-session[425270]: Invalid user Administrator from 176.120.22.13 port 49836
Jan 26 17:38:35 compute-0 lvm[425467]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:38:35 compute-0 lvm[425466]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:38:35 compute-0 lvm[425467]: VG ceph_vg0 finished
Jan 26 17:38:35 compute-0 lvm[425466]: VG ceph_vg1 finished
Jan 26 17:38:35 compute-0 lvm[425469]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:38:35 compute-0 lvm[425469]: VG ceph_vg2 finished
Jan 26 17:38:35 compute-0 nova_compute[239965]: 2026-01-26 17:38:35.153 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:35 compute-0 frosty_booth[425388]: {}
Jan 26 17:38:35 compute-0 systemd[1]: libpod-cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405.scope: Deactivated successfully.
Jan 26 17:38:35 compute-0 systemd[1]: libpod-cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405.scope: Consumed 1.280s CPU time.
Jan 26 17:38:35 compute-0 podman[425372]: 2026-01-26 17:38:35.23619033 +0000 UTC m=+0.994314807 container died cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_booth, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 26 17:38:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-420da484662514989539ab17ee29635b76c89fb1944690b05638d8992e50478d-merged.mount: Deactivated successfully.
Jan 26 17:38:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:35 compute-0 podman[425372]: 2026-01-26 17:38:35.279864519 +0000 UTC m=+1.037988996 container remove cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 17:38:35 compute-0 systemd[1]: libpod-conmon-cde57814c93c4fbd048161d3b8e5c84160119f2ce146927641cb3e70cf321405.scope: Deactivated successfully.
Jan 26 17:38:35 compute-0 sudo[425296]: pam_unix(sudo:session): session closed for user root
Jan 26 17:38:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:38:35 compute-0 sshd-session[425270]: Connection reset by invalid user Administrator 176.120.22.13 port 49836 [preauth]
Jan 26 17:38:35 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:38:35 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:38:35 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:38:35 compute-0 sudo[425484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:38:35 compute-0 sudo[425484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:38:35 compute-0 sudo[425484]: pam_unix(sudo:session): session closed for user root
Jan 26 17:38:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:36 compute-0 nova_compute[239965]: 2026-01-26 17:38:36.328 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:36 compute-0 ceph-mon[75140]: pgmap v4464: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:38:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:38:36 compute-0 sshd-session[425509]: Connection reset by authenticating user root 176.120.22.13 port 49848 [preauth]
Jan 26 17:38:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:39 compute-0 ceph-mon[75140]: pgmap v4465: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:40 compute-0 nova_compute[239965]: 2026-01-26 17:38:40.482 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:40 compute-0 ceph-mon[75140]: pgmap v4466: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:41 compute-0 nova_compute[239965]: 2026-01-26 17:38:41.374 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:42 compute-0 ceph-mon[75140]: pgmap v4467: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:44 compute-0 ceph-mon[75140]: pgmap v4468: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:45 compute-0 nova_compute[239965]: 2026-01-26 17:38:45.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:46 compute-0 nova_compute[239965]: 2026-01-26 17:38:46.417 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:46 compute-0 ceph-mon[75140]: pgmap v4469: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:48 compute-0 ceph-mon[75140]: pgmap v4470: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:38:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1074581276' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:38:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:38:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1074581276' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:38:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:49 compute-0 podman[425512]: 2026-01-26 17:38:49.386484951 +0000 UTC m=+0.072057714 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 26 17:38:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1074581276' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:38:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1074581276' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:38:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:38:50 compute-0 nova_compute[239965]: 2026-01-26 17:38:50.486 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:50 compute-0 ceph-mon[75140]: pgmap v4471: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:51 compute-0 nova_compute[239965]: 2026-01-26 17:38:51.420 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:52 compute-0 podman[425533]: 2026-01-26 17:38:52.41277126 +0000 UTC m=+0.100314874 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 26 17:38:52 compute-0 ceph-mon[75140]: pgmap v4472: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:54 compute-0 ceph-mon[75140]: pgmap v4473: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:55 compute-0 nova_compute[239965]: 2026-01-26 17:38:55.487 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:38:56 compute-0 nova_compute[239965]: 2026-01-26 17:38:56.471 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:38:56 compute-0 ceph-mon[75140]: pgmap v4474: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:57 compute-0 nova_compute[239965]: 2026-01-26 17:38:57.985 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:38:58 compute-0 ceph-mon[75140]: pgmap v4475: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:38:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:38:59.327 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:38:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:38:59.327 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:38:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:38:59.327 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:39:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:39:00 compute-0 nova_compute[239965]: 2026-01-26 17:39:00.490 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:00 compute-0 nova_compute[239965]: 2026-01-26 17:39:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:00 compute-0 ceph-mon[75140]: pgmap v4476: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:01 compute-0 nova_compute[239965]: 2026-01-26 17:39:01.472 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:02 compute-0 nova_compute[239965]: 2026-01-26 17:39:02.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:02 compute-0 ceph-mon[75140]: pgmap v4477: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:03 compute-0 nova_compute[239965]: 2026-01-26 17:39:03.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:04 compute-0 ceph-mon[75140]: pgmap v4478: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:05 compute-0 nova_compute[239965]: 2026-01-26 17:39:05.490 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:06 compute-0 nova_compute[239965]: 2026-01-26 17:39:06.473 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:06 compute-0 ceph-mon[75140]: pgmap v4479: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:08 compute-0 ceph-mon[75140]: pgmap v4480: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:09 compute-0 nova_compute[239965]: 2026-01-26 17:39:09.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:10 compute-0 nova_compute[239965]: 2026-01-26 17:39:10.492 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:10 compute-0 ceph-mon[75140]: pgmap v4481: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:11 compute-0 nova_compute[239965]: 2026-01-26 17:39:11.476 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:12 compute-0 ceph-mon[75140]: pgmap v4482: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #222. Immutable memtables: 0.
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.884699) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 139] Flushing memtable with next log file: 222
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449153884723, "job": 139, "event": "flush_started", "num_memtables": 1, "num_entries": 1091, "num_deletes": 251, "total_data_size": 1711527, "memory_usage": 1740640, "flush_reason": "Manual Compaction"}
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 139] Level-0 flush table #223: started
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449153895168, "cf_name": "default", "job": 139, "event": "table_file_creation", "file_number": 223, "file_size": 1676486, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 90484, "largest_seqno": 91574, "table_properties": {"data_size": 1671114, "index_size": 2831, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11219, "raw_average_key_size": 19, "raw_value_size": 1660462, "raw_average_value_size": 2913, "num_data_blocks": 127, "num_entries": 570, "num_filter_entries": 570, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449046, "oldest_key_time": 1769449046, "file_creation_time": 1769449153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 223, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 139] Flush lasted 10504 microseconds, and 4640 cpu microseconds.
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.895201) [db/flush_job.cc:967] [default] [JOB 139] Level-0 flush table #223: 1676486 bytes OK
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.895219) [db/memtable_list.cc:519] [default] Level-0 commit table #223 started
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.896798) [db/memtable_list.cc:722] [default] Level-0 commit table #223: memtable #1 done
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.896809) EVENT_LOG_v1 {"time_micros": 1769449153896806, "job": 139, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.896825) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 139] Try to delete WAL files size 1706471, prev total WAL file size 1706471, number of live WAL files 2.
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000219.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.897446) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039303336' seq:72057594037927935, type:22 .. '7061786F730039323838' seq:0, type:0; will stop at (end)
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 140] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 139 Base level 0, inputs: [223(1637KB)], [221(9978KB)]
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449153897475, "job": 140, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [223], "files_L6": [221], "score": -1, "input_data_size": 11894206, "oldest_snapshot_seqno": -1}
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 140] Generated table #224: 10101 keys, 10096359 bytes, temperature: kUnknown
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449153955372, "cf_name": "default", "job": 140, "event": "table_file_creation", "file_number": 224, "file_size": 10096359, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10036116, "index_size": 33890, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 25285, "raw_key_size": 270599, "raw_average_key_size": 26, "raw_value_size": 9861913, "raw_average_value_size": 976, "num_data_blocks": 1270, "num_entries": 10101, "num_filter_entries": 10101, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769449153, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 224, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.955619) [db/compaction/compaction_job.cc:1663] [default] [JOB 140] Compacted 1@0 + 1@6 files to L6 => 10096359 bytes
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.957192) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 205.2 rd, 174.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.7 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(13.1) write-amplify(6.0) OK, records in: 10615, records dropped: 514 output_compression: NoCompression
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.957207) EVENT_LOG_v1 {"time_micros": 1769449153957199, "job": 140, "event": "compaction_finished", "compaction_time_micros": 57963, "compaction_time_cpu_micros": 21932, "output_level": 6, "num_output_files": 1, "total_output_size": 10096359, "num_input_records": 10615, "num_output_records": 10101, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000223.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449153957599, "job": 140, "event": "table_file_deletion", "file_number": 223}
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000221.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449153959360, "job": 140, "event": "table_file_deletion", "file_number": 221}
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.897331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.959383) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.959387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.959388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.959390) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:39:13 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:39:13.959391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:39:14 compute-0 nova_compute[239965]: 2026-01-26 17:39:14.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:14 compute-0 ceph-mon[75140]: pgmap v4483: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:15 compute-0 nova_compute[239965]: 2026-01-26 17:39:15.493 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:16 compute-0 nova_compute[239965]: 2026-01-26 17:39:16.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:16 compute-0 ceph-mon[75140]: pgmap v4484: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:18 compute-0 ceph-mon[75140]: pgmap v4485: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:20 compute-0 podman[425560]: 2026-01-26 17:39:20.356881673 +0000 UTC m=+0.049923302 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 17:39:20 compute-0 nova_compute[239965]: 2026-01-26 17:39:20.495 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:20 compute-0 nova_compute[239965]: 2026-01-26 17:39:20.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:20 compute-0 nova_compute[239965]: 2026-01-26 17:39:20.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:39:20 compute-0 ceph-mon[75140]: pgmap v4486: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:21 compute-0 nova_compute[239965]: 2026-01-26 17:39:21.478 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:22 compute-0 ceph-mon[75140]: pgmap v4487: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:23 compute-0 podman[425579]: 2026-01-26 17:39:23.45413342 +0000 UTC m=+0.138687174 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 17:39:24 compute-0 nova_compute[239965]: 2026-01-26 17:39:24.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:24 compute-0 nova_compute[239965]: 2026-01-26 17:39:24.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:39:24 compute-0 nova_compute[239965]: 2026-01-26 17:39:24.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:39:24 compute-0 nova_compute[239965]: 2026-01-26 17:39:24.844 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:39:24 compute-0 ceph-mon[75140]: pgmap v4488: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:25 compute-0 nova_compute[239965]: 2026-01-26 17:39:25.497 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:25 compute-0 nova_compute[239965]: 2026-01-26 17:39:25.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:25 compute-0 nova_compute[239965]: 2026-01-26 17:39:25.537 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:39:25 compute-0 nova_compute[239965]: 2026-01-26 17:39:25.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:39:25 compute-0 nova_compute[239965]: 2026-01-26 17:39:25.538 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:39:25 compute-0 nova_compute[239965]: 2026-01-26 17:39:25.539 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:39:25 compute-0 nova_compute[239965]: 2026-01-26 17:39:25.539 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:39:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:39:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3257180221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.128 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.272 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.273 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3535MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.273 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.273 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.356 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.357 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.375 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.480 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:39:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652993784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.928 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:39:26 compute-0 nova_compute[239965]: 2026-01-26 17:39:26.933 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:39:27 compute-0 ceph-mon[75140]: pgmap v4489: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3257180221' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:39:27 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/652993784' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:39:27 compute-0 nova_compute[239965]: 2026-01-26 17:39:27.137 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:39:27 compute-0 nova_compute[239965]: 2026-01-26 17:39:27.138 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:39:27 compute-0 nova_compute[239965]: 2026-01-26 17:39:27.139 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.865s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:39:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:39:28
Jan 26 17:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['.rgw.root', 'cephfs.cephfs.meta', 'images', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data']
Jan 26 17:39:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:39:29 compute-0 ceph-mon[75140]: pgmap v4490: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:30 compute-0 ceph-mon[75140]: pgmap v4491: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:39:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:39:30 compute-0 nova_compute[239965]: 2026-01-26 17:39:30.499 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:39:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:39:31 compute-0 nova_compute[239965]: 2026-01-26 17:39:31.482 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:32 compute-0 ceph-mon[75140]: pgmap v4492: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:34 compute-0 ceph-mon[75140]: pgmap v4493: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:35 compute-0 nova_compute[239965]: 2026-01-26 17:39:35.132 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:35 compute-0 sudo[425649]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:39:35 compute-0 sudo[425649]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:39:35 compute-0 sudo[425649]: pam_unix(sudo:session): session closed for user root
Jan 26 17:39:35 compute-0 nova_compute[239965]: 2026-01-26 17:39:35.501 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:35 compute-0 sudo[425674]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:39:35 compute-0 sudo[425674]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:39:35 compute-0 sudo[425674]: pam_unix(sudo:session): session closed for user root
Jan 26 17:39:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:39:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:39:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:39:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:39:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:39:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:39:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:39:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:39:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:39:36 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:39:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:39:36 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:39:36 compute-0 sudo[425730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:39:36 compute-0 sudo[425730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:39:36 compute-0 sudo[425730]: pam_unix(sudo:session): session closed for user root
Jan 26 17:39:36 compute-0 sudo[425755]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:39:36 compute-0 sudo[425755]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:39:36 compute-0 ceph-mon[75140]: pgmap v4494: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:39:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:39:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:39:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:39:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:39:36 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:39:36 compute-0 podman[425792]: 2026-01-26 17:39:36.442266106 +0000 UTC m=+0.041432622 container create e58db3113fcbb9f715dc9be22c1770134b0c10b38c6cee297033bba70e4c6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_fermi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 17:39:36 compute-0 systemd[1]: Started libpod-conmon-e58db3113fcbb9f715dc9be22c1770134b0c10b38c6cee297033bba70e4c6ff4.scope.
Jan 26 17:39:36 compute-0 nova_compute[239965]: 2026-01-26 17:39:36.484 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:39:36 compute-0 podman[425792]: 2026-01-26 17:39:36.42437255 +0000 UTC m=+0.023539086 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:39:36 compute-0 podman[425792]: 2026-01-26 17:39:36.523646872 +0000 UTC m=+0.122813438 container init e58db3113fcbb9f715dc9be22c1770134b0c10b38c6cee297033bba70e4c6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 17:39:36 compute-0 podman[425792]: 2026-01-26 17:39:36.535676374 +0000 UTC m=+0.134842890 container start e58db3113fcbb9f715dc9be22c1770134b0c10b38c6cee297033bba70e4c6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_fermi, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:39:36 compute-0 podman[425792]: 2026-01-26 17:39:36.539725883 +0000 UTC m=+0.138892429 container attach e58db3113fcbb9f715dc9be22c1770134b0c10b38c6cee297033bba70e4c6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_fermi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 26 17:39:36 compute-0 loving_fermi[425808]: 167 167
Jan 26 17:39:36 compute-0 systemd[1]: libpod-e58db3113fcbb9f715dc9be22c1770134b0c10b38c6cee297033bba70e4c6ff4.scope: Deactivated successfully.
Jan 26 17:39:36 compute-0 podman[425792]: 2026-01-26 17:39:36.543735771 +0000 UTC m=+0.142902297 container died e58db3113fcbb9f715dc9be22c1770134b0c10b38c6cee297033bba70e4c6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_fermi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 17:39:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbd7e2abcdff197b9ed446e4869b328ccadfd42f846b9b8e83da0522e5f8339e-merged.mount: Deactivated successfully.
Jan 26 17:39:36 compute-0 podman[425792]: 2026-01-26 17:39:36.590471472 +0000 UTC m=+0.189638028 container remove e58db3113fcbb9f715dc9be22c1770134b0c10b38c6cee297033bba70e4c6ff4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_fermi, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:39:36 compute-0 systemd[1]: libpod-conmon-e58db3113fcbb9f715dc9be22c1770134b0c10b38c6cee297033bba70e4c6ff4.scope: Deactivated successfully.
Jan 26 17:39:36 compute-0 podman[425833]: 2026-01-26 17:39:36.776669585 +0000 UTC m=+0.047868300 container create 2af4e38d89517ee97603deecf8c8cc1d2a3e4d808616caf0c49e8f93b5cc3ad8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goldberg, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Jan 26 17:39:36 compute-0 systemd[1]: Started libpod-conmon-2af4e38d89517ee97603deecf8c8cc1d2a3e4d808616caf0c49e8f93b5cc3ad8.scope.
Jan 26 17:39:36 compute-0 podman[425833]: 2026-01-26 17:39:36.758171003 +0000 UTC m=+0.029369748 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:39:36 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9f9585417568704ec537138e384107060e8c76dddbf9008c8d7104321de66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9f9585417568704ec537138e384107060e8c76dddbf9008c8d7104321de66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9f9585417568704ec537138e384107060e8c76dddbf9008c8d7104321de66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9f9585417568704ec537138e384107060e8c76dddbf9008c8d7104321de66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76b9f9585417568704ec537138e384107060e8c76dddbf9008c8d7104321de66/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:36 compute-0 podman[425833]: 2026-01-26 17:39:36.878451197 +0000 UTC m=+0.149649942 container init 2af4e38d89517ee97603deecf8c8cc1d2a3e4d808616caf0c49e8f93b5cc3ad8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goldberg, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:39:36 compute-0 podman[425833]: 2026-01-26 17:39:36.891410213 +0000 UTC m=+0.162608928 container start 2af4e38d89517ee97603deecf8c8cc1d2a3e4d808616caf0c49e8f93b5cc3ad8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goldberg, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:39:36 compute-0 podman[425833]: 2026-01-26 17:39:36.895469402 +0000 UTC m=+0.166668197 container attach 2af4e38d89517ee97603deecf8c8cc1d2a3e4d808616caf0c49e8f93b5cc3ad8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goldberg, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:39:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:37 compute-0 interesting_goldberg[425849]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:39:37 compute-0 interesting_goldberg[425849]: --> All data devices are unavailable
Jan 26 17:39:37 compute-0 systemd[1]: libpod-2af4e38d89517ee97603deecf8c8cc1d2a3e4d808616caf0c49e8f93b5cc3ad8.scope: Deactivated successfully.
Jan 26 17:39:37 compute-0 podman[425870]: 2026-01-26 17:39:37.525493562 +0000 UTC m=+0.038963871 container died 2af4e38d89517ee97603deecf8c8cc1d2a3e4d808616caf0c49e8f93b5cc3ad8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goldberg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 26 17:39:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-76b9f9585417568704ec537138e384107060e8c76dddbf9008c8d7104321de66-merged.mount: Deactivated successfully.
Jan 26 17:39:37 compute-0 podman[425870]: 2026-01-26 17:39:37.569878215 +0000 UTC m=+0.083348504 container remove 2af4e38d89517ee97603deecf8c8cc1d2a3e4d808616caf0c49e8f93b5cc3ad8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_goldberg, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:39:37 compute-0 systemd[1]: libpod-conmon-2af4e38d89517ee97603deecf8c8cc1d2a3e4d808616caf0c49e8f93b5cc3ad8.scope: Deactivated successfully.
Jan 26 17:39:37 compute-0 sudo[425755]: pam_unix(sudo:session): session closed for user root
Jan 26 17:39:37 compute-0 sudo[425885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:39:37 compute-0 sudo[425885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:39:37 compute-0 sudo[425885]: pam_unix(sudo:session): session closed for user root
Jan 26 17:39:37 compute-0 sudo[425910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:39:37 compute-0 sudo[425910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:39:38 compute-0 podman[425947]: 2026-01-26 17:39:38.138043916 +0000 UTC m=+0.052757868 container create 91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 26 17:39:38 compute-0 systemd[1]: Started libpod-conmon-91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83.scope.
Jan 26 17:39:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:39:38 compute-0 podman[425947]: 2026-01-26 17:39:38.112470413 +0000 UTC m=+0.027184465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:39:38 compute-0 podman[425947]: 2026-01-26 17:39:38.222860815 +0000 UTC m=+0.137574777 container init 91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 17:39:38 compute-0 podman[425947]: 2026-01-26 17:39:38.230059591 +0000 UTC m=+0.144773543 container start 91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 26 17:39:38 compute-0 podman[425947]: 2026-01-26 17:39:38.232766776 +0000 UTC m=+0.147480758 container attach 91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:39:38 compute-0 happy_shaw[425963]: 167 167
Jan 26 17:39:38 compute-0 systemd[1]: libpod-91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83.scope: Deactivated successfully.
Jan 26 17:39:38 compute-0 conmon[425963]: conmon 91263e254fd6ed5882bd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83.scope/container/memory.events
Jan 26 17:39:38 compute-0 podman[425947]: 2026-01-26 17:39:38.235504023 +0000 UTC m=+0.150217995 container died 91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 26 17:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ad1baa99fe3de82f9aa73a4e0e046a3665f18d6caec4a62c9fdcf1c195edb73-merged.mount: Deactivated successfully.
Jan 26 17:39:38 compute-0 podman[425947]: 2026-01-26 17:39:38.270355034 +0000 UTC m=+0.185068986 container remove 91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_shaw, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:39:38 compute-0 systemd[1]: libpod-conmon-91263e254fd6ed5882bdee635231e6126df70dc3e8cf649f8218cc49bbc2bb83.scope: Deactivated successfully.
Jan 26 17:39:38 compute-0 ceph-mon[75140]: pgmap v4495: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:38 compute-0 podman[425987]: 2026-01-26 17:39:38.449901255 +0000 UTC m=+0.040324765 container create a798bfbe0b9138fbb35e23c95cfa020cdfc503ab625a29e52daa82954d80f601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_heyrovsky, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:39:38 compute-0 systemd[1]: Started libpod-conmon-a798bfbe0b9138fbb35e23c95cfa020cdfc503ab625a29e52daa82954d80f601.scope.
Jan 26 17:39:38 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f0df9e5242a7d0df80c9961846be3f55a48c5a8deda6d1e7905b3d56cc7700d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f0df9e5242a7d0df80c9961846be3f55a48c5a8deda6d1e7905b3d56cc7700d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f0df9e5242a7d0df80c9961846be3f55a48c5a8deda6d1e7905b3d56cc7700d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f0df9e5242a7d0df80c9961846be3f55a48c5a8deda6d1e7905b3d56cc7700d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:38 compute-0 podman[425987]: 2026-01-26 17:39:38.431602947 +0000 UTC m=+0.022026497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:39:38 compute-0 podman[425987]: 2026-01-26 17:39:38.532424668 +0000 UTC m=+0.122848188 container init a798bfbe0b9138fbb35e23c95cfa020cdfc503ab625a29e52daa82954d80f601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:39:38 compute-0 podman[425987]: 2026-01-26 17:39:38.54032372 +0000 UTC m=+0.130747230 container start a798bfbe0b9138fbb35e23c95cfa020cdfc503ab625a29e52daa82954d80f601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_heyrovsky, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:39:38 compute-0 podman[425987]: 2026-01-26 17:39:38.54401181 +0000 UTC m=+0.134435320 container attach a798bfbe0b9138fbb35e23c95cfa020cdfc503ab625a29e52daa82954d80f601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]: {
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:     "0": [
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:         {
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "devices": [
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "/dev/loop3"
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             ],
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_name": "ceph_lv0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_size": "21470642176",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "name": "ceph_lv0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "tags": {
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.cluster_name": "ceph",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.crush_device_class": "",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.encrypted": "0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.objectstore": "bluestore",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.osd_id": "0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.type": "block",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.vdo": "0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.with_tpm": "0"
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             },
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "type": "block",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "vg_name": "ceph_vg0"
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:         }
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:     ],
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:     "1": [
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:         {
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "devices": [
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "/dev/loop4"
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             ],
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_name": "ceph_lv1",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_size": "21470642176",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "name": "ceph_lv1",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "tags": {
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.cluster_name": "ceph",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.crush_device_class": "",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.encrypted": "0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.objectstore": "bluestore",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.osd_id": "1",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.type": "block",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.vdo": "0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.with_tpm": "0"
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             },
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "type": "block",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "vg_name": "ceph_vg1"
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:         }
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:     ],
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:     "2": [
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:         {
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "devices": [
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "/dev/loop5"
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             ],
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_name": "ceph_lv2",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_size": "21470642176",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "name": "ceph_lv2",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "tags": {
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.cluster_name": "ceph",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.crush_device_class": "",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.encrypted": "0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.objectstore": "bluestore",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.osd_id": "2",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.type": "block",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.vdo": "0",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:                 "ceph.with_tpm": "0"
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             },
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "type": "block",
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:             "vg_name": "ceph_vg2"
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:         }
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]:     ]
Jan 26 17:39:38 compute-0 objective_heyrovsky[426004]: }
Jan 26 17:39:38 compute-0 systemd[1]: libpod-a798bfbe0b9138fbb35e23c95cfa020cdfc503ab625a29e52daa82954d80f601.scope: Deactivated successfully.
Jan 26 17:39:38 compute-0 podman[425987]: 2026-01-26 17:39:38.821337076 +0000 UTC m=+0.411760606 container died a798bfbe0b9138fbb35e23c95cfa020cdfc503ab625a29e52daa82954d80f601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:39:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f0df9e5242a7d0df80c9961846be3f55a48c5a8deda6d1e7905b3d56cc7700d-merged.mount: Deactivated successfully.
Jan 26 17:39:38 compute-0 podman[425987]: 2026-01-26 17:39:38.860242655 +0000 UTC m=+0.450666165 container remove a798bfbe0b9138fbb35e23c95cfa020cdfc503ab625a29e52daa82954d80f601 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_heyrovsky, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:39:38 compute-0 systemd[1]: libpod-conmon-a798bfbe0b9138fbb35e23c95cfa020cdfc503ab625a29e52daa82954d80f601.scope: Deactivated successfully.
Jan 26 17:39:38 compute-0 sudo[425910]: pam_unix(sudo:session): session closed for user root
Jan 26 17:39:38 compute-0 sudo[426026]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:39:38 compute-0 sudo[426026]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:39:38 compute-0 sudo[426026]: pam_unix(sudo:session): session closed for user root
Jan 26 17:39:39 compute-0 sudo[426051]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:39:39 compute-0 sudo[426051]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:39:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:39 compute-0 podman[426088]: 2026-01-26 17:39:39.325820123 +0000 UTC m=+0.055090485 container create 61086cbca82b568aae86bc6fdf149dde88d15e9fc24ed4f801b9e1a99e11a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:39:39 compute-0 systemd[1]: Started libpod-conmon-61086cbca82b568aae86bc6fdf149dde88d15e9fc24ed4f801b9e1a99e11a874.scope.
Jan 26 17:39:39 compute-0 podman[426088]: 2026-01-26 17:39:39.295258507 +0000 UTC m=+0.024528939 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:39:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:39:39 compute-0 podman[426088]: 2026-01-26 17:39:39.411251767 +0000 UTC m=+0.140522119 container init 61086cbca82b568aae86bc6fdf149dde88d15e9fc24ed4f801b9e1a99e11a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:39:39 compute-0 podman[426088]: 2026-01-26 17:39:39.41957499 +0000 UTC m=+0.148845352 container start 61086cbca82b568aae86bc6fdf149dde88d15e9fc24ed4f801b9e1a99e11a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 26 17:39:39 compute-0 podman[426088]: 2026-01-26 17:39:39.422996384 +0000 UTC m=+0.152266756 container attach 61086cbca82b568aae86bc6fdf149dde88d15e9fc24ed4f801b9e1a99e11a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:39:39 compute-0 modest_wilbur[426104]: 167 167
Jan 26 17:39:39 compute-0 systemd[1]: libpod-61086cbca82b568aae86bc6fdf149dde88d15e9fc24ed4f801b9e1a99e11a874.scope: Deactivated successfully.
Jan 26 17:39:39 compute-0 podman[426088]: 2026-01-26 17:39:39.42406558 +0000 UTC m=+0.153335982 container died 61086cbca82b568aae86bc6fdf149dde88d15e9fc24ed4f801b9e1a99e11a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 26 17:39:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1ec3df3a828ef0ced9e52cb32690b353f84a1fc6e8207abf51ce552a60438dc-merged.mount: Deactivated successfully.
Jan 26 17:39:39 compute-0 podman[426088]: 2026-01-26 17:39:39.465295026 +0000 UTC m=+0.194565428 container remove 61086cbca82b568aae86bc6fdf149dde88d15e9fc24ed4f801b9e1a99e11a874 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_wilbur, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:39:39 compute-0 systemd[1]: libpod-conmon-61086cbca82b568aae86bc6fdf149dde88d15e9fc24ed4f801b9e1a99e11a874.scope: Deactivated successfully.
Jan 26 17:39:39 compute-0 podman[426128]: 2026-01-26 17:39:39.685272542 +0000 UTC m=+0.075717538 container create e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:39:39 compute-0 systemd[1]: Started libpod-conmon-e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab.scope.
Jan 26 17:39:39 compute-0 podman[426128]: 2026-01-26 17:39:39.651410696 +0000 UTC m=+0.041855682 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:39:39 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f95c605401a825965a8bb21d961fabb0ba1bf74e4d36c394ab0d3257345404d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f95c605401a825965a8bb21d961fabb0ba1bf74e4d36c394ab0d3257345404d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f95c605401a825965a8bb21d961fabb0ba1bf74e4d36c394ab0d3257345404d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f95c605401a825965a8bb21d961fabb0ba1bf74e4d36c394ab0d3257345404d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:39:39 compute-0 podman[426128]: 2026-01-26 17:39:39.76717517 +0000 UTC m=+0.157620156 container init e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_swirles, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 17:39:39 compute-0 podman[426128]: 2026-01-26 17:39:39.772525331 +0000 UTC m=+0.162970317 container start e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_swirles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:39:39 compute-0 podman[426128]: 2026-01-26 17:39:39.776983169 +0000 UTC m=+0.167428155 container attach e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 26 17:39:40 compute-0 ceph-mon[75140]: pgmap v4496: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:40 compute-0 lvm[426221]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:39:40 compute-0 lvm[426221]: VG ceph_vg0 finished
Jan 26 17:39:40 compute-0 lvm[426224]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:39:40 compute-0 lvm[426224]: VG ceph_vg1 finished
Jan 26 17:39:40 compute-0 lvm[426226]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:39:40 compute-0 lvm[426226]: VG ceph_vg2 finished
Jan 26 17:39:40 compute-0 nova_compute[239965]: 2026-01-26 17:39:40.503 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:40 compute-0 angry_swirles[426145]: {}
Jan 26 17:39:40 compute-0 systemd[1]: libpod-e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab.scope: Deactivated successfully.
Jan 26 17:39:40 compute-0 systemd[1]: libpod-e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab.scope: Consumed 1.345s CPU time.
Jan 26 17:39:40 compute-0 podman[426128]: 2026-01-26 17:39:40.587845532 +0000 UTC m=+0.978290508 container died e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:39:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f95c605401a825965a8bb21d961fabb0ba1bf74e4d36c394ab0d3257345404d-merged.mount: Deactivated successfully.
Jan 26 17:39:40 compute-0 podman[426128]: 2026-01-26 17:39:40.643881969 +0000 UTC m=+1.034326945 container remove e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_swirles, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:39:40 compute-0 systemd[1]: libpod-conmon-e5e755f25999235060074e2259a730adbbedb8a9e3ffb8af489791e6785953ab.scope: Deactivated successfully.
Jan 26 17:39:40 compute-0 sudo[426051]: pam_unix(sudo:session): session closed for user root
Jan 26 17:39:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:39:40 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:39:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:39:40 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:39:40 compute-0 sudo[426243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:39:40 compute-0 sudo[426243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:39:40 compute-0 sudo[426243]: pam_unix(sudo:session): session closed for user root
Jan 26 17:39:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:41 compute-0 nova_compute[239965]: 2026-01-26 17:39:41.487 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:39:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:39:42 compute-0 ceph-mon[75140]: pgmap v4497: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:44 compute-0 ceph-mon[75140]: pgmap v4498: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:45 compute-0 nova_compute[239965]: 2026-01-26 17:39:45.506 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:46 compute-0 nova_compute[239965]: 2026-01-26 17:39:46.490 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:46 compute-0 ceph-mon[75140]: pgmap v4499: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:47 compute-0 sshd-session[426268]: Invalid user validator from 45.148.10.240 port 39240
Jan 26 17:39:47 compute-0 sshd-session[426268]: Connection closed by invalid user validator 45.148.10.240 port 39240 [preauth]
Jan 26 17:39:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:39:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3770068925' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:39:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:39:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3770068925' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:39:49 compute-0 ceph-mon[75140]: pgmap v4500: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3770068925' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:39:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3770068925' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:39:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:39:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:39:50 compute-0 nova_compute[239965]: 2026-01-26 17:39:50.508 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:51 compute-0 ceph-mon[75140]: pgmap v4501: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:51 compute-0 podman[426270]: 2026-01-26 17:39:51.37809947 +0000 UTC m=+0.059995545 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 26 17:39:51 compute-0 nova_compute[239965]: 2026-01-26 17:39:51.491 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:53 compute-0 ceph-mon[75140]: pgmap v4502: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:54 compute-0 podman[426290]: 2026-01-26 17:39:54.38969841 +0000 UTC m=+0.081770116 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:39:55 compute-0 ceph-mon[75140]: pgmap v4503: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:55 compute-0 nova_compute[239965]: 2026-01-26 17:39:55.510 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:39:56 compute-0 nova_compute[239965]: 2026-01-26 17:39:56.499 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:39:57 compute-0 ceph-mon[75140]: pgmap v4504: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:58 compute-0 nova_compute[239965]: 2026-01-26 17:39:58.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:39:59 compute-0 ceph-mon[75140]: pgmap v4505: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:39:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:39:59.327 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:39:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:39:59.328 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:39:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:39:59.328 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:40:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:40:00 compute-0 nova_compute[239965]: 2026-01-26 17:40:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:00 compute-0 nova_compute[239965]: 2026-01-26 17:40:00.512 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:01 compute-0 ceph-mon[75140]: pgmap v4506: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:01 compute-0 nova_compute[239965]: 2026-01-26 17:40:01.501 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:02 compute-0 ceph-mon[75140]: pgmap v4507: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:04 compute-0 ceph-mon[75140]: pgmap v4508: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:04 compute-0 nova_compute[239965]: 2026-01-26 17:40:04.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:40:05 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.0 total, 600.0 interval
                                           Cumulative writes: 20K writes, 91K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.02 MB/s
                                           Cumulative WAL: 20K writes, 20K syncs, 1.00 writes per sync, written: 0.13 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1318 writes, 6215 keys, 1318 commit groups, 1.0 writes per commit group, ingest: 8.91 MB, 0.01 MB/s
                                           Interval WAL: 1318 writes, 1318 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     66.3      1.74              0.38        70    0.025       0      0       0.0       0.0
                                             L6      1/0    9.63 MB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   5.6    111.2     95.7      6.79              1.97        69    0.098    541K    36K       0.0       0.0
                                            Sum      1/0    9.63 MB   0.0      0.7     0.1      0.6       0.7      0.1       0.0   6.6     88.6     89.7      8.52              2.35       139    0.061    541K    36K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    112.4    111.2      0.64              0.25        12    0.053     64K   3061       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.7     0.1      0.6       0.6      0.0       0.0   0.0    111.2     95.7      6.79              1.97        69    0.098    541K    36K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     66.4      1.73              0.38        69    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     11.6      0.00              0.00         1    0.005       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 8400.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.112, interval 0.008
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.75 GB write, 0.09 MB/s write, 0.74 GB read, 0.09 MB/s read, 8.5 seconds
                                           Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x565356c818d0#2 capacity: 304.00 MB usage: 83.57 MB table_size: 0 occupancy: 18446744073709551615 collections: 15 last_copies: 0 last_secs: 0.000552 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(5132,79.79 MB,26.2483%) FilterBlock(140,1.44 MB,0.472335%) IndexBlock(140,2.34 MB,0.769445%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 26 17:40:05 compute-0 nova_compute[239965]: 2026-01-26 17:40:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:05 compute-0 nova_compute[239965]: 2026-01-26 17:40:05.513 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:06 compute-0 ceph-mon[75140]: pgmap v4509: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:06 compute-0 nova_compute[239965]: 2026-01-26 17:40:06.539 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:08 compute-0 ceph-mon[75140]: pgmap v4510: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:10 compute-0 nova_compute[239965]: 2026-01-26 17:40:10.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:10 compute-0 nova_compute[239965]: 2026-01-26 17:40:10.514 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:10 compute-0 ceph-mon[75140]: pgmap v4511: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:11 compute-0 nova_compute[239965]: 2026-01-26 17:40:11.542 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:12 compute-0 ceph-mon[75140]: pgmap v4512: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.511 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.511 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.512 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.512 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.513 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.513 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.532 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.544 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.544 239969 WARNING nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.544 239969 WARNING nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.546 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Removable base files: /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0 /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.546 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/6da0ee5a24bb670c07d2f750c1bb2d005d3c6ed0
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.548 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/43c516c9cae415c0ab334521ada79f427fb2809a
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.548 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.548 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.548 239969 DEBUG nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 26 17:40:13 compute-0 nova_compute[239965]: 2026-01-26 17:40:13.548 239969 INFO nova.virt.libvirt.imagecache [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 26 17:40:14 compute-0 ceph-mon[75140]: pgmap v4513: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:15 compute-0 nova_compute[239965]: 2026-01-26 17:40:15.516 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:15 compute-0 nova_compute[239965]: 2026-01-26 17:40:15.540 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:16 compute-0 nova_compute[239965]: 2026-01-26 17:40:16.544 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:16 compute-0 ceph-mon[75140]: pgmap v4514: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:18 compute-0 ceph-mon[75140]: pgmap v4515: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:20 compute-0 nova_compute[239965]: 2026-01-26 17:40:20.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:20 compute-0 nova_compute[239965]: 2026-01-26 17:40:20.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:40:20 compute-0 nova_compute[239965]: 2026-01-26 17:40:20.518 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:20 compute-0 ceph-mon[75140]: pgmap v4516: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:21 compute-0 nova_compute[239965]: 2026-01-26 17:40:21.546 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:22 compute-0 podman[426317]: 2026-01-26 17:40:22.407936599 +0000 UTC m=+0.085301943 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 26 17:40:22 compute-0 ceph-mon[75140]: pgmap v4517: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:24 compute-0 nova_compute[239965]: 2026-01-26 17:40:24.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:24 compute-0 ceph-mon[75140]: pgmap v4518: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:24 compute-0 podman[426336]: 2026-01-26 17:40:24.687747126 +0000 UTC m=+0.073913625 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:40:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.519 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.524 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.525 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.547 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.548 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.548 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:40:25 compute-0 nova_compute[239965]: 2026-01-26 17:40:25.549 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:40:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:40:26 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695747755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:40:26 compute-0 nova_compute[239965]: 2026-01-26 17:40:26.127 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:40:26 compute-0 nova_compute[239965]: 2026-01-26 17:40:26.276 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:40:26 compute-0 nova_compute[239965]: 2026-01-26 17:40:26.277 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3539MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:40:26 compute-0 nova_compute[239965]: 2026-01-26 17:40:26.278 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:40:26 compute-0 nova_compute[239965]: 2026-01-26 17:40:26.278 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:40:26 compute-0 nova_compute[239965]: 2026-01-26 17:40:26.548 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:26 compute-0 ceph-mon[75140]: pgmap v4519: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:26 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1695747755' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:40:26 compute-0 nova_compute[239965]: 2026-01-26 17:40:26.859 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:40:26 compute-0 nova_compute[239965]: 2026-01-26 17:40:26.860 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:40:26 compute-0 nova_compute[239965]: 2026-01-26 17:40:26.940 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing inventories for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 26 17:40:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.337 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating ProviderTree inventory for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.337 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Updating inventory in ProviderTree for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.354 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing aggregate associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.394 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Refreshing trait associations for resource provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed, traits: COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_ABM,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_AESNI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_F16C,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE2,COMPUTE_ACCELERATORS,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_DEVICE_TAGGING,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,HW_CPU_X86_MMX,COMPUTE_RESCUE_BFV,COMPUTE_VOLUME_EXTEND,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SSE,HW_CPU_X86_BMI2,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_BMI,COMPUTE_NODE,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_AVX2,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_SHA,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_TPM_2_0,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SSE41,HW_CPU_X86_AVX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_STORAGE_BUS_SATA,COMPUTE_SECURITY_UEFI_SECURE_BOOT _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.410 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:40:27 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:40:27 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/448574772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.931 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.940 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.962 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.965 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:40:27 compute-0 nova_compute[239965]: 2026-01-26 17:40:27.965 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:40:28 compute-0 ceph-mon[75140]: pgmap v4520: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:28 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/448574772' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:40:28
Jan 26 17:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', '.mgr', 'images', '.rgw.root', 'default.rgw.meta', 'vms']
Jan 26 17:40:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:40:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:40:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:40:30 compute-0 nova_compute[239965]: 2026-01-26 17:40:30.521 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:30 compute-0 ceph-mon[75140]: pgmap v4521: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:40:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:40:31 compute-0 nova_compute[239965]: 2026-01-26 17:40:31.549 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:32 compute-0 ceph-mon[75140]: pgmap v4522: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:34 compute-0 ceph-mon[75140]: pgmap v4523: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:35 compute-0 nova_compute[239965]: 2026-01-26 17:40:35.611 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:36 compute-0 nova_compute[239965]: 2026-01-26 17:40:36.551 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:36 compute-0 ceph-mon[75140]: pgmap v4524: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:38 compute-0 ceph-mon[75140]: pgmap v4525: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:40 compute-0 nova_compute[239965]: 2026-01-26 17:40:40.613 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:40 compute-0 ceph-mon[75140]: pgmap v4526: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:40 compute-0 sudo[426406]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:40:40 compute-0 sudo[426406]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:40:40 compute-0 sudo[426406]: pam_unix(sudo:session): session closed for user root
Jan 26 17:40:40 compute-0 sudo[426431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:40:40 compute-0 sudo[426431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:40:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:41 compute-0 sudo[426431]: pam_unix(sudo:session): session closed for user root
Jan 26 17:40:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:40:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:40:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:40:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:40:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:40:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:40:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:40:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:40:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:40:41 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:40:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:40:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:40:41 compute-0 sudo[426487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:40:41 compute-0 sudo[426487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:40:41 compute-0 sudo[426487]: pam_unix(sudo:session): session closed for user root
Jan 26 17:40:41 compute-0 sudo[426512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:40:41 compute-0 sudo[426512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:40:41 compute-0 nova_compute[239965]: 2026-01-26 17:40:41.554 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:40:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:40:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:40:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:40:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:40:41 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:40:41 compute-0 podman[426548]: 2026-01-26 17:40:41.821815708 +0000 UTC m=+0.041658278 container create c81873e53b8f4d1cfe4b5abee3511fce981686a3dde7d3da3754903c93ded2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:40:41 compute-0 systemd[1]: Started libpod-conmon-c81873e53b8f4d1cfe4b5abee3511fce981686a3dde7d3da3754903c93ded2f8.scope.
Jan 26 17:40:41 compute-0 podman[426548]: 2026-01-26 17:40:41.802704961 +0000 UTC m=+0.022547551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:40:41 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:40:41 compute-0 podman[426548]: 2026-01-26 17:40:41.925588619 +0000 UTC m=+0.145431209 container init c81873e53b8f4d1cfe4b5abee3511fce981686a3dde7d3da3754903c93ded2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 17:40:41 compute-0 podman[426548]: 2026-01-26 17:40:41.932845426 +0000 UTC m=+0.152687996 container start c81873e53b8f4d1cfe4b5abee3511fce981686a3dde7d3da3754903c93ded2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:40:41 compute-0 podman[426548]: 2026-01-26 17:40:41.936549767 +0000 UTC m=+0.156392367 container attach c81873e53b8f4d1cfe4b5abee3511fce981686a3dde7d3da3754903c93ded2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 26 17:40:41 compute-0 practical_ride[426565]: 167 167
Jan 26 17:40:41 compute-0 systemd[1]: libpod-c81873e53b8f4d1cfe4b5abee3511fce981686a3dde7d3da3754903c93ded2f8.scope: Deactivated successfully.
Jan 26 17:40:41 compute-0 podman[426548]: 2026-01-26 17:40:41.940634517 +0000 UTC m=+0.160477097 container died c81873e53b8f4d1cfe4b5abee3511fce981686a3dde7d3da3754903c93ded2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:40:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac49a69bfc3cce0450822131b0a0e2d55aa6fb01ffafbc531ebad4013a76bf74-merged.mount: Deactivated successfully.
Jan 26 17:40:41 compute-0 podman[426548]: 2026-01-26 17:40:41.990961564 +0000 UTC m=+0.210804134 container remove c81873e53b8f4d1cfe4b5abee3511fce981686a3dde7d3da3754903c93ded2f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_ride, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 26 17:40:42 compute-0 systemd[1]: libpod-conmon-c81873e53b8f4d1cfe4b5abee3511fce981686a3dde7d3da3754903c93ded2f8.scope: Deactivated successfully.
Jan 26 17:40:42 compute-0 podman[426589]: 2026-01-26 17:40:42.16375382 +0000 UTC m=+0.045683956 container create c45e818230973046e5a35ec9a102714c8fd164b2af94ac06d2cd7f506a39f782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 17:40:42 compute-0 systemd[1]: Started libpod-conmon-c45e818230973046e5a35ec9a102714c8fd164b2af94ac06d2cd7f506a39f782.scope.
Jan 26 17:40:42 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840727ce5241390b228a3ab9bb406ceead9a1bb0bfae255695d9ede889cf24e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840727ce5241390b228a3ab9bb406ceead9a1bb0bfae255695d9ede889cf24e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840727ce5241390b228a3ab9bb406ceead9a1bb0bfae255695d9ede889cf24e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840727ce5241390b228a3ab9bb406ceead9a1bb0bfae255695d9ede889cf24e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/840727ce5241390b228a3ab9bb406ceead9a1bb0bfae255695d9ede889cf24e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:42 compute-0 podman[426589]: 2026-01-26 17:40:42.138258398 +0000 UTC m=+0.020188554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:40:42 compute-0 podman[426589]: 2026-01-26 17:40:42.237471568 +0000 UTC m=+0.119401724 container init c45e818230973046e5a35ec9a102714c8fd164b2af94ac06d2cd7f506a39f782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 26 17:40:42 compute-0 podman[426589]: 2026-01-26 17:40:42.242580652 +0000 UTC m=+0.124510778 container start c45e818230973046e5a35ec9a102714c8fd164b2af94ac06d2cd7f506a39f782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:40:42 compute-0 podman[426589]: 2026-01-26 17:40:42.246068888 +0000 UTC m=+0.127999014 container attach c45e818230973046e5a35ec9a102714c8fd164b2af94ac06d2cd7f506a39f782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:40:42 compute-0 condescending_poincare[426605]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:40:42 compute-0 condescending_poincare[426605]: --> All data devices are unavailable
Jan 26 17:40:42 compute-0 ceph-mon[75140]: pgmap v4527: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:42 compute-0 systemd[1]: libpod-c45e818230973046e5a35ec9a102714c8fd164b2af94ac06d2cd7f506a39f782.scope: Deactivated successfully.
Jan 26 17:40:42 compute-0 podman[426589]: 2026-01-26 17:40:42.696353943 +0000 UTC m=+0.578284079 container died c45e818230973046e5a35ec9a102714c8fd164b2af94ac06d2cd7f506a39f782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:40:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-840727ce5241390b228a3ab9bb406ceead9a1bb0bfae255695d9ede889cf24e9-merged.mount: Deactivated successfully.
Jan 26 17:40:42 compute-0 podman[426589]: 2026-01-26 17:40:42.735536788 +0000 UTC m=+0.617466914 container remove c45e818230973046e5a35ec9a102714c8fd164b2af94ac06d2cd7f506a39f782 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:40:42 compute-0 systemd[1]: libpod-conmon-c45e818230973046e5a35ec9a102714c8fd164b2af94ac06d2cd7f506a39f782.scope: Deactivated successfully.
Jan 26 17:40:42 compute-0 sudo[426512]: pam_unix(sudo:session): session closed for user root
Jan 26 17:40:42 compute-0 nova_compute[239965]: 2026-01-26 17:40:42.803 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:42 compute-0 sudo[426636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:40:42 compute-0 sudo[426636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:40:42 compute-0 sudo[426636]: pam_unix(sudo:session): session closed for user root
Jan 26 17:40:42 compute-0 sudo[426661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:40:42 compute-0 sudo[426661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:40:43 compute-0 podman[426699]: 2026-01-26 17:40:43.187895614 +0000 UTC m=+0.060488396 container create a434bf5322c99594d061df3d092c9ba9ba07a1cd3b8b048a431036c89dc4c856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:40:43 compute-0 systemd[1]: Started libpod-conmon-a434bf5322c99594d061df3d092c9ba9ba07a1cd3b8b048a431036c89dc4c856.scope.
Jan 26 17:40:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:40:43 compute-0 podman[426699]: 2026-01-26 17:40:43.255068533 +0000 UTC m=+0.127661345 container init a434bf5322c99594d061df3d092c9ba9ba07a1cd3b8b048a431036c89dc4c856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:40:43 compute-0 podman[426699]: 2026-01-26 17:40:43.164648678 +0000 UTC m=+0.037241490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:40:43 compute-0 podman[426699]: 2026-01-26 17:40:43.264520584 +0000 UTC m=+0.137113356 container start a434bf5322c99594d061df3d092c9ba9ba07a1cd3b8b048a431036c89dc4c856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:40:43 compute-0 suspicious_perlman[426716]: 167 167
Jan 26 17:40:43 compute-0 podman[426699]: 2026-01-26 17:40:43.268251675 +0000 UTC m=+0.140844487 container attach a434bf5322c99594d061df3d092c9ba9ba07a1cd3b8b048a431036c89dc4c856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_perlman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:40:43 compute-0 systemd[1]: libpod-a434bf5322c99594d061df3d092c9ba9ba07a1cd3b8b048a431036c89dc4c856.scope: Deactivated successfully.
Jan 26 17:40:43 compute-0 podman[426699]: 2026-01-26 17:40:43.269192417 +0000 UTC m=+0.141785199 container died a434bf5322c99594d061df3d092c9ba9ba07a1cd3b8b048a431036c89dc4c856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 26 17:40:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-496076967c217c2399daad117aa20466e2a792c52f52f9ab1658e8f94e377854-merged.mount: Deactivated successfully.
Jan 26 17:40:43 compute-0 podman[426699]: 2026-01-26 17:40:43.309722236 +0000 UTC m=+0.182314998 container remove a434bf5322c99594d061df3d092c9ba9ba07a1cd3b8b048a431036c89dc4c856 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_perlman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:40:43 compute-0 systemd[1]: libpod-conmon-a434bf5322c99594d061df3d092c9ba9ba07a1cd3b8b048a431036c89dc4c856.scope: Deactivated successfully.
Jan 26 17:40:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:43 compute-0 podman[426738]: 2026-01-26 17:40:43.462546744 +0000 UTC m=+0.043836200 container create ced0fb9b6e891b9d346ba2d07fa854f128d8082ce80f9a6b58b6bc5cd35588f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:40:43 compute-0 systemd[1]: Started libpod-conmon-ced0fb9b6e891b9d346ba2d07fa854f128d8082ce80f9a6b58b6bc5cd35588f5.scope.
Jan 26 17:40:43 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:40:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc841167aa561183d412401cf7e9e05de6667f24284c181dc4e85ac01e75473/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc841167aa561183d412401cf7e9e05de6667f24284c181dc4e85ac01e75473/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc841167aa561183d412401cf7e9e05de6667f24284c181dc4e85ac01e75473/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/abc841167aa561183d412401cf7e9e05de6667f24284c181dc4e85ac01e75473/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:43 compute-0 podman[426738]: 2026-01-26 17:40:43.444768771 +0000 UTC m=+0.026058237 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:40:43 compute-0 podman[426738]: 2026-01-26 17:40:43.555220666 +0000 UTC m=+0.136510152 container init ced0fb9b6e891b9d346ba2d07fa854f128d8082ce80f9a6b58b6bc5cd35588f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elgamal, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 26 17:40:43 compute-0 podman[426738]: 2026-01-26 17:40:43.562034052 +0000 UTC m=+0.143323508 container start ced0fb9b6e891b9d346ba2d07fa854f128d8082ce80f9a6b58b6bc5cd35588f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elgamal, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:40:43 compute-0 podman[426738]: 2026-01-26 17:40:43.56607601 +0000 UTC m=+0.147365496 container attach ced0fb9b6e891b9d346ba2d07fa854f128d8082ce80f9a6b58b6bc5cd35588f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elgamal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]: {
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:     "0": [
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:         {
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "devices": [
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "/dev/loop3"
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             ],
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_name": "ceph_lv0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_size": "21470642176",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "name": "ceph_lv0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "tags": {
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.cluster_name": "ceph",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.crush_device_class": "",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.encrypted": "0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.objectstore": "bluestore",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.osd_id": "0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.type": "block",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.vdo": "0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.with_tpm": "0"
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             },
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "type": "block",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "vg_name": "ceph_vg0"
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:         }
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:     ],
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:     "1": [
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:         {
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "devices": [
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "/dev/loop4"
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             ],
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_name": "ceph_lv1",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_size": "21470642176",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "name": "ceph_lv1",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "tags": {
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.cluster_name": "ceph",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.crush_device_class": "",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.encrypted": "0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.objectstore": "bluestore",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.osd_id": "1",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.type": "block",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.vdo": "0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.with_tpm": "0"
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             },
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "type": "block",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "vg_name": "ceph_vg1"
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:         }
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:     ],
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:     "2": [
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:         {
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "devices": [
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "/dev/loop5"
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             ],
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_name": "ceph_lv2",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_size": "21470642176",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "name": "ceph_lv2",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "tags": {
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.cluster_name": "ceph",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.crush_device_class": "",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.encrypted": "0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.objectstore": "bluestore",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.osd_id": "2",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.type": "block",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.vdo": "0",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:                 "ceph.with_tpm": "0"
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             },
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "type": "block",
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:             "vg_name": "ceph_vg2"
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:         }
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]:     ]
Jan 26 17:40:43 compute-0 infallible_elgamal[426754]: }
Jan 26 17:40:43 compute-0 systemd[1]: libpod-ced0fb9b6e891b9d346ba2d07fa854f128d8082ce80f9a6b58b6bc5cd35588f5.scope: Deactivated successfully.
Jan 26 17:40:43 compute-0 podman[426738]: 2026-01-26 17:40:43.851464682 +0000 UTC m=+0.432754138 container died ced0fb9b6e891b9d346ba2d07fa854f128d8082ce80f9a6b58b6bc5cd35588f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elgamal, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:40:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-abc841167aa561183d412401cf7e9e05de6667f24284c181dc4e85ac01e75473-merged.mount: Deactivated successfully.
Jan 26 17:40:43 compute-0 podman[426738]: 2026-01-26 17:40:43.912589064 +0000 UTC m=+0.493878520 container remove ced0fb9b6e891b9d346ba2d07fa854f128d8082ce80f9a6b58b6bc5cd35588f5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:40:43 compute-0 systemd[1]: libpod-conmon-ced0fb9b6e891b9d346ba2d07fa854f128d8082ce80f9a6b58b6bc5cd35588f5.scope: Deactivated successfully.
Jan 26 17:40:43 compute-0 sudo[426661]: pam_unix(sudo:session): session closed for user root
Jan 26 17:40:44 compute-0 sudo[426776]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:40:44 compute-0 sudo[426776]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:40:44 compute-0 sudo[426776]: pam_unix(sudo:session): session closed for user root
Jan 26 17:40:44 compute-0 sudo[426801]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:40:44 compute-0 sudo[426801]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:40:44 compute-0 podman[426838]: 2026-01-26 17:40:44.342958273 +0000 UTC m=+0.052205074 container create 3c3a47e26c010ad9b68dd822416709a0f6cedbfaf4767efb2d89bc1111dfee88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hellman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:40:44 compute-0 systemd[1]: Started libpod-conmon-3c3a47e26c010ad9b68dd822416709a0f6cedbfaf4767efb2d89bc1111dfee88.scope.
Jan 26 17:40:44 compute-0 podman[426838]: 2026-01-26 17:40:44.314733314 +0000 UTC m=+0.023980195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:40:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:40:44 compute-0 podman[426838]: 2026-01-26 17:40:44.434154178 +0000 UTC m=+0.143401019 container init 3c3a47e26c010ad9b68dd822416709a0f6cedbfaf4767efb2d89bc1111dfee88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hellman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 26 17:40:44 compute-0 podman[426838]: 2026-01-26 17:40:44.444947011 +0000 UTC m=+0.154193802 container start 3c3a47e26c010ad9b68dd822416709a0f6cedbfaf4767efb2d89bc1111dfee88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 26 17:40:44 compute-0 podman[426838]: 2026-01-26 17:40:44.44942496 +0000 UTC m=+0.158671791 container attach 3c3a47e26c010ad9b68dd822416709a0f6cedbfaf4767efb2d89bc1111dfee88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 17:40:44 compute-0 mystifying_hellman[426854]: 167 167
Jan 26 17:40:44 compute-0 systemd[1]: libpod-3c3a47e26c010ad9b68dd822416709a0f6cedbfaf4767efb2d89bc1111dfee88.scope: Deactivated successfully.
Jan 26 17:40:44 compute-0 podman[426838]: 2026-01-26 17:40:44.451284456 +0000 UTC m=+0.160531317 container died 3c3a47e26c010ad9b68dd822416709a0f6cedbfaf4767efb2d89bc1111dfee88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 26 17:40:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a7ef5efdac05283aeea992995afd468f3972082915b5420efbb47572de88c2d-merged.mount: Deactivated successfully.
Jan 26 17:40:44 compute-0 podman[426838]: 2026-01-26 17:40:44.507877256 +0000 UTC m=+0.217124097 container remove 3c3a47e26c010ad9b68dd822416709a0f6cedbfaf4767efb2d89bc1111dfee88 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_hellman, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:40:44 compute-0 systemd[1]: libpod-conmon-3c3a47e26c010ad9b68dd822416709a0f6cedbfaf4767efb2d89bc1111dfee88.scope: Deactivated successfully.
Jan 26 17:40:44 compute-0 ceph-mon[75140]: pgmap v4528: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:44 compute-0 podman[426878]: 2026-01-26 17:40:44.708430019 +0000 UTC m=+0.039770361 container create 219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pasteur, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:40:44 compute-0 systemd[1]: Started libpod-conmon-219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd.scope.
Jan 26 17:40:44 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846f7b787734602f6c69517931f8f38a0f70a01c7e51702b91b0da2f8b316c83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846f7b787734602f6c69517931f8f38a0f70a01c7e51702b91b0da2f8b316c83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846f7b787734602f6c69517931f8f38a0f70a01c7e51702b91b0da2f8b316c83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/846f7b787734602f6c69517931f8f38a0f70a01c7e51702b91b0da2f8b316c83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:40:44 compute-0 podman[426878]: 2026-01-26 17:40:44.69084087 +0000 UTC m=+0.022181232 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:40:44 compute-0 podman[426878]: 2026-01-26 17:40:44.7932997 +0000 UTC m=+0.124640072 container init 219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:40:44 compute-0 podman[426878]: 2026-01-26 17:40:44.79904487 +0000 UTC m=+0.130385212 container start 219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Jan 26 17:40:44 compute-0 podman[426878]: 2026-01-26 17:40:44.802434412 +0000 UTC m=+0.133774784 container attach 219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:40:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:45 compute-0 lvm[426973]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:40:45 compute-0 lvm[426973]: VG ceph_vg0 finished
Jan 26 17:40:45 compute-0 lvm[426974]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:40:45 compute-0 lvm[426974]: VG ceph_vg1 finished
Jan 26 17:40:45 compute-0 lvm[426976]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:40:45 compute-0 lvm[426976]: VG ceph_vg2 finished
Jan 26 17:40:45 compute-0 unruffled_pasteur[426895]: {}
Jan 26 17:40:45 compute-0 systemd[1]: libpod-219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd.scope: Deactivated successfully.
Jan 26 17:40:45 compute-0 podman[426878]: 2026-01-26 17:40:45.57049948 +0000 UTC m=+0.901839822 container died 219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 26 17:40:45 compute-0 systemd[1]: libpod-219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd.scope: Consumed 1.247s CPU time.
Jan 26 17:40:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-846f7b787734602f6c69517931f8f38a0f70a01c7e51702b91b0da2f8b316c83-merged.mount: Deactivated successfully.
Jan 26 17:40:45 compute-0 nova_compute[239965]: 2026-01-26 17:40:45.615 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:45 compute-0 podman[426878]: 2026-01-26 17:40:45.62215469 +0000 UTC m=+0.953495042 container remove 219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_pasteur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 26 17:40:45 compute-0 systemd[1]: libpod-conmon-219d452c350fd15cab07c73a83f70fd7e6a033343b5bb028574084e0163e5bdd.scope: Deactivated successfully.
Jan 26 17:40:45 compute-0 sudo[426801]: pam_unix(sudo:session): session closed for user root
Jan 26 17:40:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:40:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:40:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:40:45 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:40:45 compute-0 sudo[426990]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:40:45 compute-0 sudo[426990]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:40:45 compute-0 sudo[426990]: pam_unix(sudo:session): session closed for user root
Jan 26 17:40:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:46 compute-0 nova_compute[239965]: 2026-01-26 17:40:46.601 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:46 compute-0 ceph-mon[75140]: pgmap v4529: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:46 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:40:46 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:40:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:48 compute-0 ceph-mon[75140]: pgmap v4530: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:40:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/38856555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:40:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:40:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/38856555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:40:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/38856555' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:40:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/38856555' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:40:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:40:50 compute-0 nova_compute[239965]: 2026-01-26 17:40:50.618 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:50 compute-0 ceph-mon[75140]: pgmap v4531: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:51 compute-0 nova_compute[239965]: 2026-01-26 17:40:51.604 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:52 compute-0 ceph-mon[75140]: pgmap v4532: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:53 compute-0 podman[427015]: 2026-01-26 17:40:53.387467042 +0000 UTC m=+0.067873466 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 26 17:40:54 compute-0 ceph-mon[75140]: pgmap v4533: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:55 compute-0 podman[427035]: 2026-01-26 17:40:55.444336961 +0000 UTC m=+0.126377563 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 26 17:40:55 compute-0 nova_compute[239965]: 2026-01-26 17:40:55.620 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:40:56 compute-0 nova_compute[239965]: 2026-01-26 17:40:56.606 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:40:56 compute-0 ceph-mon[75140]: pgmap v4534: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:58 compute-0 nova_compute[239965]: 2026-01-26 17:40:58.514 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:40:58 compute-0 ceph-mon[75140]: pgmap v4535: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:40:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:40:59.328 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:40:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:40:59.329 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:40:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:40:59.329 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:40:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:41:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:41:00 compute-0 nova_compute[239965]: 2026-01-26 17:41:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:00 compute-0 nova_compute[239965]: 2026-01-26 17:41:00.622 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:00 compute-0 ceph-mon[75140]: pgmap v4536: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:01 compute-0 nova_compute[239965]: 2026-01-26 17:41:01.670 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:02 compute-0 ceph-mon[75140]: pgmap v4537: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:04 compute-0 ceph-mon[75140]: pgmap v4538: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:05 compute-0 nova_compute[239965]: 2026-01-26 17:41:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:05 compute-0 nova_compute[239965]: 2026-01-26 17:41:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:05 compute-0 nova_compute[239965]: 2026-01-26 17:41:05.625 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:06 compute-0 nova_compute[239965]: 2026-01-26 17:41:06.719 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:06 compute-0 ceph-mon[75140]: pgmap v4539: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:08 compute-0 ceph-mon[75140]: pgmap v4540: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:10 compute-0 nova_compute[239965]: 2026-01-26 17:41:10.628 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:10 compute-0 ceph-mon[75140]: pgmap v4541: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:11 compute-0 nova_compute[239965]: 2026-01-26 17:41:11.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:11 compute-0 nova_compute[239965]: 2026-01-26 17:41:11.721 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:12 compute-0 ceph-mon[75140]: pgmap v4542: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:14 compute-0 ceph-mon[75140]: pgmap v4543: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:15 compute-0 nova_compute[239965]: 2026-01-26 17:41:15.629 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:16 compute-0 nova_compute[239965]: 2026-01-26 17:41:16.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:16 compute-0 nova_compute[239965]: 2026-01-26 17:41:16.724 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:16 compute-0 ceph-mon[75140]: pgmap v4544: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:18 compute-0 ceph-mon[75140]: pgmap v4545: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:20 compute-0 nova_compute[239965]: 2026-01-26 17:41:20.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:20 compute-0 nova_compute[239965]: 2026-01-26 17:41:20.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:41:20 compute-0 nova_compute[239965]: 2026-01-26 17:41:20.631 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:20 compute-0 ceph-mon[75140]: pgmap v4546: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:21 compute-0 nova_compute[239965]: 2026-01-26 17:41:21.775 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:22 compute-0 ceph-mon[75140]: pgmap v4547: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:41:24 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:41:24 compute-0 podman[427061]: 2026-01-26 17:41:24.396077135 +0000 UTC m=+0.075413781 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:41:25 compute-0 ceph-mon[75140]: pgmap v4548: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:25 compute-0 nova_compute[239965]: 2026-01-26 17:41:25.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:25 compute-0 nova_compute[239965]: 2026-01-26 17:41:25.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:41:25 compute-0 nova_compute[239965]: 2026-01-26 17:41:25.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:41:25 compute-0 nova_compute[239965]: 2026-01-26 17:41:25.579 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:41:25 compute-0 nova_compute[239965]: 2026-01-26 17:41:25.633 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:26 compute-0 podman[427080]: 2026-01-26 17:41:26.435037487 +0000 UTC m=+0.111751107 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.build-date=20251202, container_name=ovn_controller)
Jan 26 17:41:26 compute-0 nova_compute[239965]: 2026-01-26 17:41:26.776 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:27 compute-0 ceph-mon[75140]: pgmap v4549: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:27 compute-0 nova_compute[239965]: 2026-01-26 17:41:27.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:27 compute-0 nova_compute[239965]: 2026-01-26 17:41:27.689 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:41:27 compute-0 nova_compute[239965]: 2026-01-26 17:41:27.689 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:41:27 compute-0 nova_compute[239965]: 2026-01-26 17:41:27.690 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:41:27 compute-0 nova_compute[239965]: 2026-01-26 17:41:27.690 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:41:27 compute-0 nova_compute[239965]: 2026-01-26 17:41:27.690 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:41:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:41:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1822118909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:41:28 compute-0 nova_compute[239965]: 2026-01-26 17:41:28.219 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:41:28 compute-0 nova_compute[239965]: 2026-01-26 17:41:28.346 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:41:28 compute-0 nova_compute[239965]: 2026-01-26 17:41:28.347 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3546MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:41:28 compute-0 nova_compute[239965]: 2026-01-26 17:41:28.347 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:41:28 compute-0 nova_compute[239965]: 2026-01-26 17:41:28.347 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:41:28 compute-0 nova_compute[239965]: 2026-01-26 17:41:28.411 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:41:28 compute-0 nova_compute[239965]: 2026-01-26 17:41:28.411 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:41:28 compute-0 nova_compute[239965]: 2026-01-26 17:41:28.427 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:41:28
Jan 26 17:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'default.rgw.control', 'images', 'vms', 'backups', '.rgw.root']
Jan 26 17:41:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:41:28 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:41:28 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3165739550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:41:28 compute-0 nova_compute[239965]: 2026-01-26 17:41:28.994 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:41:29 compute-0 nova_compute[239965]: 2026-01-26 17:41:29.000 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:41:29 compute-0 nova_compute[239965]: 2026-01-26 17:41:29.019 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:41:29 compute-0 nova_compute[239965]: 2026-01-26 17:41:29.020 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:41:29 compute-0 nova_compute[239965]: 2026-01-26 17:41:29.020 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:41:29 compute-0 ceph-mon[75140]: pgmap v4550: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1822118909' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:41:29 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3165739550' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:41:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:41:29 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.4 total, 600.0 interval
                                           Cumulative writes: 50K writes, 193K keys, 50K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 50K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:41:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:41:30 compute-0 nova_compute[239965]: 2026-01-26 17:41:30.633 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:31 compute-0 ceph-mon[75140]: pgmap v4551: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:41:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:41:31 compute-0 nova_compute[239965]: 2026-01-26 17:41:31.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:33 compute-0 ceph-mon[75140]: pgmap v4552: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:41:34 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.2 total, 600.0 interval
                                           Cumulative writes: 39K writes, 150K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:41:35 compute-0 ceph-mon[75140]: pgmap v4553: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:35 compute-0 nova_compute[239965]: 2026-01-26 17:41:35.635 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:36 compute-0 nova_compute[239965]: 2026-01-26 17:41:36.013 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:36 compute-0 nova_compute[239965]: 2026-01-26 17:41:36.860 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:37 compute-0 ceph-mon[75140]: pgmap v4554: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:39 compute-0 ceph-mon[75140]: pgmap v4555: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:39 compute-0 ceph-mgr[75431]: [devicehealth INFO root] Check health
Jan 26 17:41:40 compute-0 nova_compute[239965]: 2026-01-26 17:41:40.637 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:40 compute-0 ceph-mon[75140]: pgmap v4556: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:41 compute-0 nova_compute[239965]: 2026-01-26 17:41:41.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:43 compute-0 ceph-mon[75140]: pgmap v4557: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:44 compute-0 ceph-mon[75140]: pgmap v4558: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:45 compute-0 nova_compute[239965]: 2026-01-26 17:41:45.638 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:45 compute-0 sudo[427150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:41:45 compute-0 sudo[427150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:45 compute-0 sudo[427150]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:45 compute-0 sudo[427175]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 26 17:41:45 compute-0 sudo[427175]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:46 compute-0 podman[427245]: 2026-01-26 17:41:46.386164023 +0000 UTC m=+0.181517339 container exec 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:41:46 compute-0 podman[427245]: 2026-01-26 17:41:46.490337775 +0000 UTC m=+0.285691091 container exec_died 6fac3d3c1cec318302b6cd54dc0a56240d1526777b23b8e6c65e921e6fa86ae9 (image=quay.io/ceph/ceph:v20, name=ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:41:46 compute-0 ceph-mon[75140]: pgmap v4559: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:46 compute-0 nova_compute[239965]: 2026-01-26 17:41:46.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:47 compute-0 sudo[427175]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:41:47 compute-0 nova_compute[239965]: 2026-01-26 17:41:47.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:41:47 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:47 compute-0 sudo[427427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:41:47 compute-0 sudo[427427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:47 compute-0 sudo[427427]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:47 compute-0 sudo[427452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:41:47 compute-0 sudo[427452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:48 compute-0 sudo[427452]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:41:48 compute-0 ceph-mon[75140]: pgmap v4560: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:41:48 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:41:48 compute-0 sudo[427509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:41:48 compute-0 sudo[427509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:48 compute-0 sudo[427509]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:48 compute-0 sudo[427534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:41:48 compute-0 sudo[427534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1363641863' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:41:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:41:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1363641863' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:41:49 compute-0 podman[427572]: 2026-01-26 17:41:49.01469803 +0000 UTC m=+0.028105097 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:41:49 compute-0 podman[427572]: 2026-01-26 17:41:49.167114448 +0000 UTC m=+0.180521425 container create 3643c744a927080b03e4a5b34436c205b88fac1aea19b20bf0ef7486fd5a28f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 26 17:41:49 compute-0 ceph-osd[85687]: bluestore.MempoolThread fragmentation_score=0.005312 took=0.000067s
Jan 26 17:41:49 compute-0 ceph-osd[87780]: bluestore.MempoolThread fragmentation_score=0.005588 took=0.000088s
Jan 26 17:41:49 compute-0 ceph-osd[86729]: bluestore.MempoolThread fragmentation_score=0.005052 took=0.000046s
Jan 26 17:41:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:49 compute-0 systemd[1]: Started libpod-conmon-3643c744a927080b03e4a5b34436c205b88fac1aea19b20bf0ef7486fd5a28f6.scope.
Jan 26 17:41:49 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:41:49 compute-0 podman[427572]: 2026-01-26 17:41:49.84111266 +0000 UTC m=+0.854519727 container init 3643c744a927080b03e4a5b34436c205b88fac1aea19b20bf0ef7486fd5a28f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gates, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 26 17:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:41:49 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:41:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1363641863' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:41:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1363641863' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:41:49 compute-0 podman[427572]: 2026-01-26 17:41:49.855299956 +0000 UTC m=+0.868706973 container start 3643c744a927080b03e4a5b34436c205b88fac1aea19b20bf0ef7486fd5a28f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:41:49 compute-0 boring_gates[427589]: 167 167
Jan 26 17:41:49 compute-0 systemd[1]: libpod-3643c744a927080b03e4a5b34436c205b88fac1aea19b20bf0ef7486fd5a28f6.scope: Deactivated successfully.
Jan 26 17:41:49 compute-0 podman[427572]: 2026-01-26 17:41:49.937537682 +0000 UTC m=+0.950944679 container attach 3643c744a927080b03e4a5b34436c205b88fac1aea19b20bf0ef7486fd5a28f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gates, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 17:41:49 compute-0 podman[427572]: 2026-01-26 17:41:49.93826935 +0000 UTC m=+0.951676347 container died 3643c744a927080b03e4a5b34436c205b88fac1aea19b20bf0ef7486fd5a28f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gates, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:41:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:41:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-f53f3a99b52b0cdc81427b6b564a3e16c2bc00931c20068768ef5d298dc80cbc-merged.mount: Deactivated successfully.
Jan 26 17:41:50 compute-0 podman[427572]: 2026-01-26 17:41:50.431067323 +0000 UTC m=+1.444474300 container remove 3643c744a927080b03e4a5b34436c205b88fac1aea19b20bf0ef7486fd5a28f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_gates, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:41:50 compute-0 systemd[1]: libpod-conmon-3643c744a927080b03e4a5b34436c205b88fac1aea19b20bf0ef7486fd5a28f6.scope: Deactivated successfully.
Jan 26 17:41:50 compute-0 nova_compute[239965]: 2026-01-26 17:41:50.639 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:50 compute-0 podman[427614]: 2026-01-26 17:41:50.610040709 +0000 UTC m=+0.024551329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:41:50 compute-0 podman[427614]: 2026-01-26 17:41:50.747625795 +0000 UTC m=+0.162136415 container create 35f80514e36fd532256b4db6ac06b200f14f209c2397dbdb124cdb28a1f7336f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 26 17:41:50 compute-0 systemd[1]: Started libpod-conmon-35f80514e36fd532256b4db6ac06b200f14f209c2397dbdb124cdb28a1f7336f.scope.
Jan 26 17:41:50 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62842fb0200b117bb0d09ce102bbe045b5c00d4d82c1f191b73bffe0a1aa3189/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62842fb0200b117bb0d09ce102bbe045b5c00d4d82c1f191b73bffe0a1aa3189/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62842fb0200b117bb0d09ce102bbe045b5c00d4d82c1f191b73bffe0a1aa3189/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62842fb0200b117bb0d09ce102bbe045b5c00d4d82c1f191b73bffe0a1aa3189/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62842fb0200b117bb0d09ce102bbe045b5c00d4d82c1f191b73bffe0a1aa3189/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:51 compute-0 ceph-mon[75140]: pgmap v4561: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:51 compute-0 podman[427614]: 2026-01-26 17:41:51.242133489 +0000 UTC m=+0.656644179 container init 35f80514e36fd532256b4db6ac06b200f14f209c2397dbdb124cdb28a1f7336f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 26 17:41:51 compute-0 podman[427614]: 2026-01-26 17:41:51.251235591 +0000 UTC m=+0.665746191 container start 35f80514e36fd532256b4db6ac06b200f14f209c2397dbdb124cdb28a1f7336f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Jan 26 17:41:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:51 compute-0 podman[427614]: 2026-01-26 17:41:51.423428721 +0000 UTC m=+0.837939351 container attach 35f80514e36fd532256b4db6ac06b200f14f209c2397dbdb124cdb28a1f7336f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 17:41:51 compute-0 laughing_herschel[427630]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:41:51 compute-0 laughing_herschel[427630]: --> All data devices are unavailable
Jan 26 17:41:51 compute-0 systemd[1]: libpod-35f80514e36fd532256b4db6ac06b200f14f209c2397dbdb124cdb28a1f7336f.scope: Deactivated successfully.
Jan 26 17:41:51 compute-0 podman[427614]: 2026-01-26 17:41:51.728318999 +0000 UTC m=+1.142829619 container died 35f80514e36fd532256b4db6ac06b200f14f209c2397dbdb124cdb28a1f7336f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 26 17:41:51 compute-0 nova_compute[239965]: 2026-01-26 17:41:51.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-62842fb0200b117bb0d09ce102bbe045b5c00d4d82c1f191b73bffe0a1aa3189-merged.mount: Deactivated successfully.
Jan 26 17:41:52 compute-0 podman[427614]: 2026-01-26 17:41:52.132285404 +0000 UTC m=+1.546796034 container remove 35f80514e36fd532256b4db6ac06b200f14f209c2397dbdb124cdb28a1f7336f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_herschel, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:41:52 compute-0 sudo[427534]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:52 compute-0 systemd[1]: libpod-conmon-35f80514e36fd532256b4db6ac06b200f14f209c2397dbdb124cdb28a1f7336f.scope: Deactivated successfully.
Jan 26 17:41:52 compute-0 sudo[427662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:41:52 compute-0 sudo[427662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:52 compute-0 sudo[427662]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:52 compute-0 ceph-mon[75140]: pgmap v4562: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:52 compute-0 sudo[427687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:41:52 compute-0 sudo[427687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:52 compute-0 podman[427724]: 2026-01-26 17:41:52.565575005 +0000 UTC m=+0.020044560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:41:52 compute-0 podman[427724]: 2026-01-26 17:41:52.763191765 +0000 UTC m=+0.217661330 container create fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_neumann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:41:52 compute-0 systemd[1]: Started libpod-conmon-fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955.scope.
Jan 26 17:41:52 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:41:52 compute-0 podman[427724]: 2026-01-26 17:41:52.867362296 +0000 UTC m=+0.321831851 container init fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 26 17:41:52 compute-0 podman[427724]: 2026-01-26 17:41:52.873547748 +0000 UTC m=+0.328017273 container start fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_neumann, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 26 17:41:52 compute-0 podman[427724]: 2026-01-26 17:41:52.877352031 +0000 UTC m=+0.331821576 container attach fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_neumann, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 17:41:52 compute-0 goofy_neumann[427741]: 167 167
Jan 26 17:41:52 compute-0 systemd[1]: libpod-fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955.scope: Deactivated successfully.
Jan 26 17:41:52 compute-0 conmon[427741]: conmon fcb9f4f363d68f5e5c19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955.scope/container/memory.events
Jan 26 17:41:52 compute-0 podman[427724]: 2026-01-26 17:41:52.880038246 +0000 UTC m=+0.334507781 container died fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:41:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-157692920a7b1e02a3013b862f7a7f4ba61d7f3e662839c6c6c6ecabc25dda62-merged.mount: Deactivated successfully.
Jan 26 17:41:52 compute-0 podman[427724]: 2026-01-26 17:41:52.914732002 +0000 UTC m=+0.369201527 container remove fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_neumann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 26 17:41:52 compute-0 systemd[1]: libpod-conmon-fcb9f4f363d68f5e5c1905082d7b033642116d8455683db7b86826dbeae8c955.scope: Deactivated successfully.
Jan 26 17:41:53 compute-0 podman[427764]: 2026-01-26 17:41:53.112299682 +0000 UTC m=+0.086536501 container create 05d65b31b92457b3939573ee067c537f91043b432245c2ed3692e5bed499b6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moore, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:41:53 compute-0 podman[427764]: 2026-01-26 17:41:53.048733022 +0000 UTC m=+0.022969851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:41:53 compute-0 systemd[1]: Started libpod-conmon-05d65b31b92457b3939573ee067c537f91043b432245c2ed3692e5bed499b6eb.scope.
Jan 26 17:41:53 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14faba8069aae2197e16a7a661251575630d52c4d79e6fc5db45431363624bed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14faba8069aae2197e16a7a661251575630d52c4d79e6fc5db45431363624bed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14faba8069aae2197e16a7a661251575630d52c4d79e6fc5db45431363624bed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14faba8069aae2197e16a7a661251575630d52c4d79e6fc5db45431363624bed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:53 compute-0 podman[427764]: 2026-01-26 17:41:53.242200201 +0000 UTC m=+0.216437040 container init 05d65b31b92457b3939573ee067c537f91043b432245c2ed3692e5bed499b6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moore, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:41:53 compute-0 podman[427764]: 2026-01-26 17:41:53.250247238 +0000 UTC m=+0.224484067 container start 05d65b31b92457b3939573ee067c537f91043b432245c2ed3692e5bed499b6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:41:53 compute-0 podman[427764]: 2026-01-26 17:41:53.266568346 +0000 UTC m=+0.240805255 container attach 05d65b31b92457b3939573ee067c537f91043b432245c2ed3692e5bed499b6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moore, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:41:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:53 compute-0 jolly_moore[427781]: {
Jan 26 17:41:53 compute-0 jolly_moore[427781]:     "0": [
Jan 26 17:41:53 compute-0 jolly_moore[427781]:         {
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "devices": [
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "/dev/loop3"
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             ],
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_name": "ceph_lv0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_size": "21470642176",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "name": "ceph_lv0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "tags": {
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.cluster_name": "ceph",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.crush_device_class": "",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.encrypted": "0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.objectstore": "bluestore",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.osd_id": "0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.type": "block",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.vdo": "0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.with_tpm": "0"
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             },
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "type": "block",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "vg_name": "ceph_vg0"
Jan 26 17:41:53 compute-0 jolly_moore[427781]:         }
Jan 26 17:41:53 compute-0 jolly_moore[427781]:     ],
Jan 26 17:41:53 compute-0 jolly_moore[427781]:     "1": [
Jan 26 17:41:53 compute-0 jolly_moore[427781]:         {
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "devices": [
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "/dev/loop4"
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             ],
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_name": "ceph_lv1",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_size": "21470642176",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "name": "ceph_lv1",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "tags": {
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.cluster_name": "ceph",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.crush_device_class": "",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.encrypted": "0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.objectstore": "bluestore",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.osd_id": "1",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.type": "block",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.vdo": "0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.with_tpm": "0"
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             },
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "type": "block",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "vg_name": "ceph_vg1"
Jan 26 17:41:53 compute-0 jolly_moore[427781]:         }
Jan 26 17:41:53 compute-0 jolly_moore[427781]:     ],
Jan 26 17:41:53 compute-0 jolly_moore[427781]:     "2": [
Jan 26 17:41:53 compute-0 jolly_moore[427781]:         {
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "devices": [
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "/dev/loop5"
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             ],
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_name": "ceph_lv2",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_size": "21470642176",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "name": "ceph_lv2",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "tags": {
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.cluster_name": "ceph",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.crush_device_class": "",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.encrypted": "0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.objectstore": "bluestore",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.osd_id": "2",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.type": "block",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.vdo": "0",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:                 "ceph.with_tpm": "0"
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             },
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "type": "block",
Jan 26 17:41:53 compute-0 jolly_moore[427781]:             "vg_name": "ceph_vg2"
Jan 26 17:41:53 compute-0 jolly_moore[427781]:         }
Jan 26 17:41:53 compute-0 jolly_moore[427781]:     ]
Jan 26 17:41:53 compute-0 jolly_moore[427781]: }
Jan 26 17:41:53 compute-0 systemd[1]: libpod-05d65b31b92457b3939573ee067c537f91043b432245c2ed3692e5bed499b6eb.scope: Deactivated successfully.
Jan 26 17:41:53 compute-0 podman[427790]: 2026-01-26 17:41:53.572600172 +0000 UTC m=+0.027031360 container died 05d65b31b92457b3939573ee067c537f91043b432245c2ed3692e5bed499b6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moore, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:41:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-14faba8069aae2197e16a7a661251575630d52c4d79e6fc5db45431363624bed-merged.mount: Deactivated successfully.
Jan 26 17:41:53 compute-0 podman[427790]: 2026-01-26 17:41:53.975750517 +0000 UTC m=+0.430181685 container remove 05d65b31b92457b3939573ee067c537f91043b432245c2ed3692e5bed499b6eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_moore, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:41:53 compute-0 systemd[1]: libpod-conmon-05d65b31b92457b3939573ee067c537f91043b432245c2ed3692e5bed499b6eb.scope: Deactivated successfully.
Jan 26 17:41:54 compute-0 sudo[427687]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:54 compute-0 sudo[427805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:41:54 compute-0 sudo[427805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:54 compute-0 sudo[427805]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:54 compute-0 sudo[427830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:41:54 compute-0 sudo[427830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:54 compute-0 podman[427867]: 2026-01-26 17:41:54.479055965 +0000 UTC m=+0.048484153 container create bb049594eb232169d76d8a8e16b2cd7655d4297db8191d51176b3bdcbe5c95f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:41:54 compute-0 systemd[1]: Started libpod-conmon-bb049594eb232169d76d8a8e16b2cd7655d4297db8191d51176b3bdcbe5c95f6.scope.
Jan 26 17:41:54 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:41:54 compute-0 podman[427867]: 2026-01-26 17:41:54.453229516 +0000 UTC m=+0.022657724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:41:54 compute-0 ceph-mon[75140]: pgmap v4563: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:54 compute-0 podman[427867]: 2026-01-26 17:41:54.644733808 +0000 UTC m=+0.214162026 container init bb049594eb232169d76d8a8e16b2cd7655d4297db8191d51176b3bdcbe5c95f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_meitner, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:41:54 compute-0 podman[427867]: 2026-01-26 17:41:54.651516994 +0000 UTC m=+0.220945192 container start bb049594eb232169d76d8a8e16b2cd7655d4297db8191d51176b3bdcbe5c95f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 26 17:41:54 compute-0 intelligent_meitner[427884]: 167 167
Jan 26 17:41:54 compute-0 systemd[1]: libpod-bb049594eb232169d76d8a8e16b2cd7655d4297db8191d51176b3bdcbe5c95f6.scope: Deactivated successfully.
Jan 26 17:41:54 compute-0 podman[427867]: 2026-01-26 17:41:54.715687298 +0000 UTC m=+0.285115486 container attach bb049594eb232169d76d8a8e16b2cd7655d4297db8191d51176b3bdcbe5c95f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_meitner, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 26 17:41:54 compute-0 podman[427867]: 2026-01-26 17:41:54.716086879 +0000 UTC m=+0.285515087 container died bb049594eb232169d76d8a8e16b2cd7655d4297db8191d51176b3bdcbe5c95f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_meitner, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 26 17:41:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-40f2956261109d1a538a9be8c60fa36932077e68f3b5734ff243429a46fcc6bd-merged.mount: Deactivated successfully.
Jan 26 17:41:54 compute-0 podman[427867]: 2026-01-26 17:41:54.837210263 +0000 UTC m=+0.406638461 container remove bb049594eb232169d76d8a8e16b2cd7655d4297db8191d51176b3bdcbe5c95f6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 26 17:41:54 compute-0 systemd[1]: libpod-conmon-bb049594eb232169d76d8a8e16b2cd7655d4297db8191d51176b3bdcbe5c95f6.scope: Deactivated successfully.
Jan 26 17:41:54 compute-0 podman[427881]: 2026-01-26 17:41:54.91537284 +0000 UTC m=+0.394489085 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 26 17:41:55 compute-0 podman[427927]: 2026-01-26 17:41:55.037801647 +0000 UTC m=+0.058143400 container create a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curran, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:41:55 compute-0 podman[427927]: 2026-01-26 17:41:55.004338421 +0000 UTC m=+0.024680184 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:41:55 compute-0 systemd[1]: Started libpod-conmon-a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67.scope.
Jan 26 17:41:55 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16703ebe515fe8499f3d918c2c1ca7e0bab493633e62badaf8e6d17ec0de9af/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16703ebe515fe8499f3d918c2c1ca7e0bab493633e62badaf8e6d17ec0de9af/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16703ebe515fe8499f3d918c2c1ca7e0bab493633e62badaf8e6d17ec0de9af/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a16703ebe515fe8499f3d918c2c1ca7e0bab493633e62badaf8e6d17ec0de9af/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:41:55 compute-0 podman[427927]: 2026-01-26 17:41:55.203453628 +0000 UTC m=+0.223795381 container init a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curran, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 26 17:41:55 compute-0 podman[427927]: 2026-01-26 17:41:55.213414751 +0000 UTC m=+0.233756504 container start a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curran, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 26 17:41:55 compute-0 podman[427927]: 2026-01-26 17:41:55.217462 +0000 UTC m=+0.237803753 container attach a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:41:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:55 compute-0 nova_compute[239965]: 2026-01-26 17:41:55.641 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:55 compute-0 lvm[428023]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:41:55 compute-0 lvm[428023]: VG ceph_vg1 finished
Jan 26 17:41:55 compute-0 lvm[428022]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:41:55 compute-0 lvm[428022]: VG ceph_vg0 finished
Jan 26 17:41:55 compute-0 lvm[428025]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:41:55 compute-0 lvm[428025]: VG ceph_vg2 finished
Jan 26 17:41:56 compute-0 gifted_curran[427943]: {}
Jan 26 17:41:56 compute-0 systemd[1]: libpod-a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67.scope: Deactivated successfully.
Jan 26 17:41:56 compute-0 podman[427927]: 2026-01-26 17:41:56.060449805 +0000 UTC m=+1.080791568 container died a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curran, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:41:56 compute-0 systemd[1]: libpod-a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67.scope: Consumed 1.321s CPU time.
Jan 26 17:41:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:41:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-a16703ebe515fe8499f3d918c2c1ca7e0bab493633e62badaf8e6d17ec0de9af-merged.mount: Deactivated successfully.
Jan 26 17:41:56 compute-0 podman[427927]: 2026-01-26 17:41:56.170959722 +0000 UTC m=+1.191301475 container remove a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_curran, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 26 17:41:56 compute-0 systemd[1]: libpod-conmon-a52c1c0c756ffff6765278a57858dd7c6c24dca76ea0b5cfb8e16ba8867fee67.scope: Deactivated successfully.
Jan 26 17:41:56 compute-0 sudo[427830]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:41:56 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:41:56 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:56 compute-0 sudo[428042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:41:56 compute-0 sudo[428042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:41:56 compute-0 sudo[428042]: pam_unix(sudo:session): session closed for user root
Jan 26 17:41:56 compute-0 ceph-mon[75140]: pgmap v4564: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:56 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:41:56 compute-0 nova_compute[239965]: 2026-01-26 17:41:56.864 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:41:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:57 compute-0 podman[428067]: 2026-01-26 17:41:57.402733342 +0000 UTC m=+0.083330724 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 26 17:41:58 compute-0 nova_compute[239965]: 2026-01-26 17:41:58.528 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:41:58 compute-0 ceph-mon[75140]: pgmap v4565: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:41:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:41:59.330 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:41:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:41:59.330 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:41:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:41:59.330 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:41:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:42:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:42:00 compute-0 nova_compute[239965]: 2026-01-26 17:42:00.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:00 compute-0 nova_compute[239965]: 2026-01-26 17:42:00.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 26 17:42:00 compute-0 nova_compute[239965]: 2026-01-26 17:42:00.645 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:01 compute-0 ceph-mon[75140]: pgmap v4566: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:01 compute-0 nova_compute[239965]: 2026-01-26 17:42:01.865 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:02 compute-0 ceph-mon[75140]: pgmap v4567: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:02 compute-0 nova_compute[239965]: 2026-01-26 17:42:02.525 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:04 compute-0 ceph-mon[75140]: pgmap v4568: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:05 compute-0 nova_compute[239965]: 2026-01-26 17:42:05.646 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:06 compute-0 ceph-mon[75140]: pgmap v4569: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:06 compute-0 sshd-session[428095]: Invalid user solana from 45.148.10.240 port 49304
Jan 26 17:42:06 compute-0 nova_compute[239965]: 2026-01-26 17:42:06.866 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:06 compute-0 sshd-session[428095]: Connection closed by invalid user solana 45.148.10.240 port 49304 [preauth]
Jan 26 17:42:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:07 compute-0 nova_compute[239965]: 2026-01-26 17:42:07.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:07 compute-0 nova_compute[239965]: 2026-01-26 17:42:07.511 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:08 compute-0 ceph-mon[75140]: pgmap v4570: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:10 compute-0 nova_compute[239965]: 2026-01-26 17:42:10.647 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:10 compute-0 ceph-mon[75140]: pgmap v4571: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:11 compute-0 nova_compute[239965]: 2026-01-26 17:42:11.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:11 compute-0 nova_compute[239965]: 2026-01-26 17:42:11.782 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:11 compute-0 nova_compute[239965]: 2026-01-26 17:42:11.867 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:12 compute-0 ceph-mon[75140]: pgmap v4572: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:14 compute-0 ceph-mon[75140]: pgmap v4573: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:15 compute-0 nova_compute[239965]: 2026-01-26 17:42:15.648 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:16 compute-0 nova_compute[239965]: 2026-01-26 17:42:16.869 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:16 compute-0 ceph-mon[75140]: pgmap v4574: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:17 compute-0 nova_compute[239965]: 2026-01-26 17:42:17.525 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:19 compute-0 ceph-mon[75140]: pgmap v4575: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:20 compute-0 nova_compute[239965]: 2026-01-26 17:42:20.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:20 compute-0 nova_compute[239965]: 2026-01-26 17:42:20.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:42:20 compute-0 nova_compute[239965]: 2026-01-26 17:42:20.650 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:21 compute-0 ceph-mon[75140]: pgmap v4576: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:21 compute-0 nova_compute[239965]: 2026-01-26 17:42:21.870 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:23 compute-0 ceph-mon[75140]: pgmap v4577: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:25 compute-0 ceph-mon[75140]: pgmap v4578: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:25 compute-0 podman[428097]: 2026-01-26 17:42:25.37746452 +0000 UTC m=+0.060795085 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:42:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:25 compute-0 nova_compute[239965]: 2026-01-26 17:42:25.685 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:26 compute-0 ceph-mon[75140]: pgmap v4579: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:26 compute-0 nova_compute[239965]: 2026-01-26 17:42:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:26 compute-0 nova_compute[239965]: 2026-01-26 17:42:26.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:42:26 compute-0 nova_compute[239965]: 2026-01-26 17:42:26.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:42:26 compute-0 nova_compute[239965]: 2026-01-26 17:42:26.762 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:42:26 compute-0 nova_compute[239965]: 2026-01-26 17:42:26.909 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:28 compute-0 ceph-mon[75140]: pgmap v4580: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:28 compute-0 podman[428116]: 2026-01-26 17:42:28.452766705 +0000 UTC m=+0.132403041 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 26 17:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:42:28
Jan 26 17:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.rgw.root', 'backups', 'default.rgw.control', 'default.rgw.meta', 'volumes', '.mgr', 'images', 'default.rgw.log', 'cephfs.cephfs.meta']
Jan 26 17:42:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:42:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:29 compute-0 nova_compute[239965]: 2026-01-26 17:42:29.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:29 compute-0 nova_compute[239965]: 2026-01-26 17:42:29.788 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:42:29 compute-0 nova_compute[239965]: 2026-01-26 17:42:29.789 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:42:29 compute-0 nova_compute[239965]: 2026-01-26 17:42:29.789 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:42:29 compute-0 nova_compute[239965]: 2026-01-26 17:42:29.790 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:42:29 compute-0 nova_compute[239965]: 2026-01-26 17:42:29.790 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:42:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:42:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853149739' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:42:30 compute-0 nova_compute[239965]: 2026-01-26 17:42:30.365 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:42:30 compute-0 ceph-mon[75140]: pgmap v4581: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1853149739' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:42:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:42:30 compute-0 nova_compute[239965]: 2026-01-26 17:42:30.521 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:42:30 compute-0 nova_compute[239965]: 2026-01-26 17:42:30.523 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3529MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:42:30 compute-0 nova_compute[239965]: 2026-01-26 17:42:30.524 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:42:30 compute-0 nova_compute[239965]: 2026-01-26 17:42:30.524 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:42:30 compute-0 nova_compute[239965]: 2026-01-26 17:42:30.617 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:42:30 compute-0 nova_compute[239965]: 2026-01-26 17:42:30.618 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:42:30 compute-0 nova_compute[239965]: 2026-01-26 17:42:30.641 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:42:30 compute-0 nova_compute[239965]: 2026-01-26 17:42:30.690 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:42:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1706223601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:42:31 compute-0 nova_compute[239965]: 2026-01-26 17:42:31.290 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:42:31 compute-0 nova_compute[239965]: 2026-01-26 17:42:31.297 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:42:31 compute-0 nova_compute[239965]: 2026-01-26 17:42:31.315 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:42:31 compute-0 nova_compute[239965]: 2026-01-26 17:42:31.317 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:42:31 compute-0 nova_compute[239965]: 2026-01-26 17:42:31.317 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:42:31 compute-0 nova_compute[239965]: 2026-01-26 17:42:31.318 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:42:31 compute-0 nova_compute[239965]: 2026-01-26 17:42:31.318 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 26 17:42:31 compute-0 nova_compute[239965]: 2026-01-26 17:42:31.334 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:42:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:42:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1706223601' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:42:31 compute-0 nova_compute[239965]: 2026-01-26 17:42:31.911 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:32 compute-0 ceph-mon[75140]: pgmap v4582: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:34 compute-0 ceph-mon[75140]: pgmap v4583: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:35 compute-0 nova_compute[239965]: 2026-01-26 17:42:35.694 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:36 compute-0 ceph-mon[75140]: pgmap v4584: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:36 compute-0 nova_compute[239965]: 2026-01-26 17:42:36.914 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:38 compute-0 ceph-mon[75140]: pgmap v4585: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:40 compute-0 nova_compute[239965]: 2026-01-26 17:42:40.813 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:40 compute-0 ceph-mon[75140]: pgmap v4586: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:41 compute-0 nova_compute[239965]: 2026-01-26 17:42:41.954 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:43 compute-0 ceph-mon[75140]: pgmap v4587: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:44 compute-0 ceph-mon[75140]: pgmap v4588: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:45 compute-0 nova_compute[239965]: 2026-01-26 17:42:45.817 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:46 compute-0 ceph-mon[75140]: pgmap v4589: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:46 compute-0 nova_compute[239965]: 2026-01-26 17:42:46.962 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:48 compute-0 ceph-mon[75140]: pgmap v4590: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:42:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1131796671' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:42:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:42:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1131796671' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:42:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1131796671' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:42:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/1131796671' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:42:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:42:50 compute-0 ceph-mon[75140]: pgmap v4591: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:50 compute-0 nova_compute[239965]: 2026-01-26 17:42:50.822 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:52 compute-0 nova_compute[239965]: 2026-01-26 17:42:52.024 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:52 compute-0 ceph-mon[75140]: pgmap v4592: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:54 compute-0 ceph-mon[75140]: pgmap v4593: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:55 compute-0 nova_compute[239965]: 2026-01-26 17:42:55.826 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:42:56 compute-0 podman[428186]: 2026-01-26 17:42:56.35650263 +0000 UTC m=+0.044387764 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:42:56 compute-0 sudo[428205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:42:56 compute-0 sudo[428205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:42:56 compute-0 sudo[428205]: pam_unix(sudo:session): session closed for user root
Jan 26 17:42:56 compute-0 sudo[428230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:42:56 compute-0 sudo[428230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:42:56 compute-0 ceph-mon[75140]: pgmap v4594: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:42:57 compute-0 nova_compute[239965]: 2026-01-26 17:42:57.026 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:42:57 compute-0 sudo[428230]: pam_unix(sudo:session): session closed for user root
Jan 26 17:42:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:42:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:42:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:42:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:42:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:42:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:42:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:42:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:42:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:42:57 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:42:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:42:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:42:57 compute-0 sudo[428286]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:42:57 compute-0 sudo[428286]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:42:57 compute-0 sudo[428286]: pam_unix(sudo:session): session closed for user root
Jan 26 17:42:57 compute-0 sudo[428311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:42:57 compute-0 sudo[428311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:42:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Jan 26 17:42:57 compute-0 podman[428348]: 2026-01-26 17:42:57.596282766 +0000 UTC m=+0.076258562 container create d8359351530da16fff055018d65cbce3d5f7cc1df7955e6486615d577c216300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_beaver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Jan 26 17:42:57 compute-0 podman[428348]: 2026-01-26 17:42:57.548223643 +0000 UTC m=+0.028199459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:42:57 compute-0 systemd[1]: Started libpod-conmon-d8359351530da16fff055018d65cbce3d5f7cc1df7955e6486615d577c216300.scope.
Jan 26 17:42:57 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:42:57 compute-0 podman[428348]: 2026-01-26 17:42:57.805954471 +0000 UTC m=+0.285930317 container init d8359351530da16fff055018d65cbce3d5f7cc1df7955e6486615d577c216300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_beaver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 26 17:42:57 compute-0 podman[428348]: 2026-01-26 17:42:57.814268044 +0000 UTC m=+0.294243880 container start d8359351530da16fff055018d65cbce3d5f7cc1df7955e6486615d577c216300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_beaver, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:42:57 compute-0 great_beaver[428365]: 167 167
Jan 26 17:42:57 compute-0 systemd[1]: libpod-d8359351530da16fff055018d65cbce3d5f7cc1df7955e6486615d577c216300.scope: Deactivated successfully.
Jan 26 17:42:57 compute-0 podman[428348]: 2026-01-26 17:42:57.836013635 +0000 UTC m=+0.315989451 container attach d8359351530da16fff055018d65cbce3d5f7cc1df7955e6486615d577c216300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_beaver, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:42:57 compute-0 podman[428348]: 2026-01-26 17:42:57.836519207 +0000 UTC m=+0.316495013 container died d8359351530da16fff055018d65cbce3d5f7cc1df7955e6486615d577c216300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:42:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4156f150fcbd37d619bf09cb610933b984eccc5010048008e079dbaa66e524c-merged.mount: Deactivated successfully.
Jan 26 17:42:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:42:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:42:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:42:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:42:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:42:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:42:58 compute-0 podman[428348]: 2026-01-26 17:42:58.067244285 +0000 UTC m=+0.547220101 container remove d8359351530da16fff055018d65cbce3d5f7cc1df7955e6486615d577c216300 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_beaver, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:42:58 compute-0 systemd[1]: libpod-conmon-d8359351530da16fff055018d65cbce3d5f7cc1df7955e6486615d577c216300.scope: Deactivated successfully.
Jan 26 17:42:58 compute-0 podman[428388]: 2026-01-26 17:42:58.274602844 +0000 UTC m=+0.052022600 container create 2b291f1a65a968b8357331bfb4a13806298bb42a0934bdbfbff855a098cb99c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_yonath, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 26 17:42:58 compute-0 systemd[1]: Started libpod-conmon-2b291f1a65a968b8357331bfb4a13806298bb42a0934bdbfbff855a098cb99c7.scope.
Jan 26 17:42:58 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:42:58 compute-0 podman[428388]: 2026-01-26 17:42:58.257450486 +0000 UTC m=+0.034870262 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66145187b0eb0f4ea0a9d7f97cd83a4d218457ebb3a6df0886672b6bd79abadc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66145187b0eb0f4ea0a9d7f97cd83a4d218457ebb3a6df0886672b6bd79abadc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66145187b0eb0f4ea0a9d7f97cd83a4d218457ebb3a6df0886672b6bd79abadc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66145187b0eb0f4ea0a9d7f97cd83a4d218457ebb3a6df0886672b6bd79abadc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:42:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66145187b0eb0f4ea0a9d7f97cd83a4d218457ebb3a6df0886672b6bd79abadc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:42:58 compute-0 podman[428388]: 2026-01-26 17:42:58.58023637 +0000 UTC m=+0.357656176 container init 2b291f1a65a968b8357331bfb4a13806298bb42a0934bdbfbff855a098cb99c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:42:58 compute-0 podman[428388]: 2026-01-26 17:42:58.588186534 +0000 UTC m=+0.365606290 container start 2b291f1a65a968b8357331bfb4a13806298bb42a0934bdbfbff855a098cb99c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:42:58 compute-0 podman[428388]: 2026-01-26 17:42:58.592496919 +0000 UTC m=+0.369916695 container attach 2b291f1a65a968b8357331bfb4a13806298bb42a0934bdbfbff855a098cb99c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_yonath, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:42:59 compute-0 upbeat_yonath[428404]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:42:59 compute-0 upbeat_yonath[428404]: --> All data devices are unavailable
Jan 26 17:42:59 compute-0 systemd[1]: libpod-2b291f1a65a968b8357331bfb4a13806298bb42a0934bdbfbff855a098cb99c7.scope: Deactivated successfully.
Jan 26 17:42:59 compute-0 podman[428388]: 2026-01-26 17:42:59.080740031 +0000 UTC m=+0.858159837 container died 2b291f1a65a968b8357331bfb4a13806298bb42a0934bdbfbff855a098cb99c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_yonath, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:42:59 compute-0 ceph-mon[75140]: pgmap v4595: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 0 B/s wr, 8 op/s
Jan 26 17:42:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-66145187b0eb0f4ea0a9d7f97cd83a4d218457ebb3a6df0886672b6bd79abadc-merged.mount: Deactivated successfully.
Jan 26 17:42:59 compute-0 podman[428388]: 2026-01-26 17:42:59.27747862 +0000 UTC m=+1.054898416 container remove 2b291f1a65a968b8357331bfb4a13806298bb42a0934bdbfbff855a098cb99c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 26 17:42:59 compute-0 systemd[1]: libpod-conmon-2b291f1a65a968b8357331bfb4a13806298bb42a0934bdbfbff855a098cb99c7.scope: Deactivated successfully.
Jan 26 17:42:59 compute-0 sudo[428311]: pam_unix(sudo:session): session closed for user root
Jan 26 17:42:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:42:59.331 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:42:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:42:59.331 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:42:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:42:59.331 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:42:59 compute-0 podman[428424]: 2026-01-26 17:42:59.353608307 +0000 UTC m=+0.230760950 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 26 17:42:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:42:59 compute-0 sudo[428461]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:42:59 compute-0 sudo[428461]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:42:59 compute-0 sudo[428461]: pam_unix(sudo:session): session closed for user root
Jan 26 17:42:59 compute-0 sudo[428487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:42:59 compute-0 sudo[428487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:42:59 compute-0 podman[428523]: 2026-01-26 17:42:59.802006096 +0000 UTC m=+0.053939976 container create cb4c3d442fb9d92300e6a9b548985d10a456487831b5f7bc000a3c0a4213a055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brattain, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:42:59 compute-0 systemd[1]: Started libpod-conmon-cb4c3d442fb9d92300e6a9b548985d10a456487831b5f7bc000a3c0a4213a055.scope.
Jan 26 17:42:59 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:42:59 compute-0 podman[428523]: 2026-01-26 17:42:59.869015441 +0000 UTC m=+0.120949331 container init cb4c3d442fb9d92300e6a9b548985d10a456487831b5f7bc000a3c0a4213a055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brattain, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 26 17:42:59 compute-0 podman[428523]: 2026-01-26 17:42:59.878840031 +0000 UTC m=+0.130773901 container start cb4c3d442fb9d92300e6a9b548985d10a456487831b5f7bc000a3c0a4213a055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 17:42:59 compute-0 podman[428523]: 2026-01-26 17:42:59.786632231 +0000 UTC m=+0.038566121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:42:59 compute-0 podman[428523]: 2026-01-26 17:42:59.882946591 +0000 UTC m=+0.134880461 container attach cb4c3d442fb9d92300e6a9b548985d10a456487831b5f7bc000a3c0a4213a055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brattain, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 26 17:42:59 compute-0 friendly_brattain[428539]: 167 167
Jan 26 17:42:59 compute-0 systemd[1]: libpod-cb4c3d442fb9d92300e6a9b548985d10a456487831b5f7bc000a3c0a4213a055.scope: Deactivated successfully.
Jan 26 17:42:59 compute-0 podman[428523]: 2026-01-26 17:42:59.884868688 +0000 UTC m=+0.136802558 container died cb4c3d442fb9d92300e6a9b548985d10a456487831b5f7bc000a3c0a4213a055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brattain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:43:00 compute-0 nova_compute[239965]: 2026-01-26 17:43:00.337 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:00 compute-0 ceph-mon[75140]: pgmap v4596: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.5 KiB/s rd, 0 B/s wr, 15 op/s
Jan 26 17:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:43:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:43:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-df1f43e8652a9a13ccaeb2712110cfdca315826346ec2b1f64b3b238b2891f6c-merged.mount: Deactivated successfully.
Jan 26 17:43:00 compute-0 podman[428523]: 2026-01-26 17:43:00.556627666 +0000 UTC m=+0.808561566 container remove cb4c3d442fb9d92300e6a9b548985d10a456487831b5f7bc000a3c0a4213a055 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 26 17:43:00 compute-0 systemd[1]: libpod-conmon-cb4c3d442fb9d92300e6a9b548985d10a456487831b5f7bc000a3c0a4213a055.scope: Deactivated successfully.
Jan 26 17:43:00 compute-0 podman[428563]: 2026-01-26 17:43:00.713695467 +0000 UTC m=+0.024622241 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:43:00 compute-0 nova_compute[239965]: 2026-01-26 17:43:00.828 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:01 compute-0 podman[428563]: 2026-01-26 17:43:01.208041167 +0000 UTC m=+0.518967921 container create 49270c3881ebc576ed665bb094efc95d29840f9a5bc122f3185980c47c2a8d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:43:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:01 compute-0 systemd[1]: Started libpod-conmon-49270c3881ebc576ed665bb094efc95d29840f9a5bc122f3185980c47c2a8d51.scope.
Jan 26 17:43:01 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e66af86a9d5068ffc175a80941020817dee2218966989d3c4cadd3873097cb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e66af86a9d5068ffc175a80941020817dee2218966989d3c4cadd3873097cb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e66af86a9d5068ffc175a80941020817dee2218966989d3c4cadd3873097cb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:43:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e66af86a9d5068ffc175a80941020817dee2218966989d3c4cadd3873097cb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:43:01 compute-0 podman[428563]: 2026-01-26 17:43:01.347402148 +0000 UTC m=+0.658328922 container init 49270c3881ebc576ed665bb094efc95d29840f9a5bc122f3185980c47c2a8d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Jan 26 17:43:01 compute-0 podman[428563]: 2026-01-26 17:43:01.358907858 +0000 UTC m=+0.669834612 container start 49270c3881ebc576ed665bb094efc95d29840f9a5bc122f3185980c47c2a8d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:43:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 26 17:43:01 compute-0 podman[428563]: 2026-01-26 17:43:01.452104492 +0000 UTC m=+0.763031266 container attach 49270c3881ebc576ed665bb094efc95d29840f9a5bc122f3185980c47c2a8d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]: {
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:     "0": [
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:         {
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "devices": [
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "/dev/loop3"
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             ],
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_name": "ceph_lv0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_size": "21470642176",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "name": "ceph_lv0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "tags": {
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.cluster_name": "ceph",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.crush_device_class": "",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.encrypted": "0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.objectstore": "bluestore",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.osd_id": "0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.type": "block",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.vdo": "0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.with_tpm": "0"
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             },
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "type": "block",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "vg_name": "ceph_vg0"
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:         }
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:     ],
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:     "1": [
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:         {
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "devices": [
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "/dev/loop4"
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             ],
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_name": "ceph_lv1",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_size": "21470642176",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "name": "ceph_lv1",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "tags": {
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.cluster_name": "ceph",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.crush_device_class": "",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.encrypted": "0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.objectstore": "bluestore",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.osd_id": "1",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.type": "block",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.vdo": "0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.with_tpm": "0"
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             },
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "type": "block",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "vg_name": "ceph_vg1"
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:         }
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:     ],
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:     "2": [
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:         {
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "devices": [
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "/dev/loop5"
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             ],
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_name": "ceph_lv2",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_size": "21470642176",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "name": "ceph_lv2",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "tags": {
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.cluster_name": "ceph",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.crush_device_class": "",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.encrypted": "0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.objectstore": "bluestore",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.osd_id": "2",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.type": "block",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.vdo": "0",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:                 "ceph.with_tpm": "0"
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             },
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "type": "block",
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:             "vg_name": "ceph_vg2"
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:         }
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]:     ]
Jan 26 17:43:01 compute-0 hardcore_bartik[428580]: }
Jan 26 17:43:01 compute-0 systemd[1]: libpod-49270c3881ebc576ed665bb094efc95d29840f9a5bc122f3185980c47c2a8d51.scope: Deactivated successfully.
Jan 26 17:43:01 compute-0 podman[428563]: 2026-01-26 17:43:01.650927103 +0000 UTC m=+0.961853857 container died 49270c3881ebc576ed665bb094efc95d29840f9a5bc122f3185980c47c2a8d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:43:02 compute-0 nova_compute[239965]: 2026-01-26 17:43:02.028 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e66af86a9d5068ffc175a80941020817dee2218966989d3c4cadd3873097cb2-merged.mount: Deactivated successfully.
Jan 26 17:43:02 compute-0 podman[428563]: 2026-01-26 17:43:02.090951167 +0000 UTC m=+1.401877921 container remove 49270c3881ebc576ed665bb094efc95d29840f9a5bc122f3185980c47c2a8d51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_bartik, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Jan 26 17:43:02 compute-0 systemd[1]: libpod-conmon-49270c3881ebc576ed665bb094efc95d29840f9a5bc122f3185980c47c2a8d51.scope: Deactivated successfully.
Jan 26 17:43:02 compute-0 sudo[428487]: pam_unix(sudo:session): session closed for user root
Jan 26 17:43:02 compute-0 sudo[428601]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:43:02 compute-0 sudo[428601]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:43:02 compute-0 sudo[428601]: pam_unix(sudo:session): session closed for user root
Jan 26 17:43:02 compute-0 sudo[428626]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:43:02 compute-0 sudo[428626]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:43:02 compute-0 ceph-mon[75140]: pgmap v4597: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 32 op/s
Jan 26 17:43:02 compute-0 podman[428664]: 2026-01-26 17:43:02.578274616 +0000 UTC m=+0.024519320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:43:02 compute-0 podman[428664]: 2026-01-26 17:43:02.739632362 +0000 UTC m=+0.185877046 container create 218ba28a53405941b5f8e984e05fc4ffe46d1232867a27b4269221d0d31d09d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:43:02 compute-0 systemd[1]: Started libpod-conmon-218ba28a53405941b5f8e984e05fc4ffe46d1232867a27b4269221d0d31d09d1.scope.
Jan 26 17:43:02 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:43:02 compute-0 podman[428664]: 2026-01-26 17:43:02.943018434 +0000 UTC m=+0.389263118 container init 218ba28a53405941b5f8e984e05fc4ffe46d1232867a27b4269221d0d31d09d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 26 17:43:02 compute-0 podman[428664]: 2026-01-26 17:43:02.949684166 +0000 UTC m=+0.395928860 container start 218ba28a53405941b5f8e984e05fc4ffe46d1232867a27b4269221d0d31d09d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:43:02 compute-0 elated_golick[428680]: 167 167
Jan 26 17:43:02 compute-0 systemd[1]: libpod-218ba28a53405941b5f8e984e05fc4ffe46d1232867a27b4269221d0d31d09d1.scope: Deactivated successfully.
Jan 26 17:43:02 compute-0 podman[428664]: 2026-01-26 17:43:02.981609135 +0000 UTC m=+0.427853819 container attach 218ba28a53405941b5f8e984e05fc4ffe46d1232867a27b4269221d0d31d09d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 26 17:43:02 compute-0 podman[428664]: 2026-01-26 17:43:02.982133628 +0000 UTC m=+0.428378332 container died 218ba28a53405941b5f8e984e05fc4ffe46d1232867a27b4269221d0d31d09d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:43:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-572f002e1b96f105b5bd8df48e873fe56feadaf26540121941a36dd26c76d362-merged.mount: Deactivated successfully.
Jan 26 17:43:03 compute-0 podman[428664]: 2026-01-26 17:43:03.073581699 +0000 UTC m=+0.519826383 container remove 218ba28a53405941b5f8e984e05fc4ffe46d1232867a27b4269221d0d31d09d1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_golick, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 26 17:43:03 compute-0 systemd[1]: libpod-conmon-218ba28a53405941b5f8e984e05fc4ffe46d1232867a27b4269221d0d31d09d1.scope: Deactivated successfully.
Jan 26 17:43:03 compute-0 podman[428703]: 2026-01-26 17:43:03.218495244 +0000 UTC m=+0.022313425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:43:03 compute-0 podman[428703]: 2026-01-26 17:43:03.344799285 +0000 UTC m=+0.148617456 container create 42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bardeen, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:43:03 compute-0 systemd[1]: Started libpod-conmon-42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd.scope.
Jan 26 17:43:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:43:03 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8089558ce214814b39b3f2f1db51ecd444ea789e1a88f7e3c5711e41029ea42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8089558ce214814b39b3f2f1db51ecd444ea789e1a88f7e3c5711e41029ea42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8089558ce214814b39b3f2f1db51ecd444ea789e1a88f7e3c5711e41029ea42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:43:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8089558ce214814b39b3f2f1db51ecd444ea789e1a88f7e3c5711e41029ea42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:43:03 compute-0 podman[428703]: 2026-01-26 17:43:03.591685979 +0000 UTC m=+0.395504150 container init 42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bardeen, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 26 17:43:03 compute-0 podman[428703]: 2026-01-26 17:43:03.600695718 +0000 UTC m=+0.404513889 container start 42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bardeen, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:43:03 compute-0 podman[428703]: 2026-01-26 17:43:03.705177138 +0000 UTC m=+0.508995309 container attach 42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bardeen, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 26 17:43:04 compute-0 lvm[428796]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:43:04 compute-0 lvm[428796]: VG ceph_vg0 finished
Jan 26 17:43:04 compute-0 lvm[428799]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:43:04 compute-0 lvm[428799]: VG ceph_vg1 finished
Jan 26 17:43:04 compute-0 lvm[428801]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:43:04 compute-0 lvm[428801]: VG ceph_vg2 finished
Jan 26 17:43:04 compute-0 happy_bardeen[428720]: {}
Jan 26 17:43:04 compute-0 systemd[1]: libpod-42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd.scope: Deactivated successfully.
Jan 26 17:43:04 compute-0 systemd[1]: libpod-42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd.scope: Consumed 1.282s CPU time.
Jan 26 17:43:04 compute-0 podman[428804]: 2026-01-26 17:43:04.458331442 +0000 UTC m=+0.023824943 container died 42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:43:04 compute-0 nova_compute[239965]: 2026-01-26 17:43:04.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8089558ce214814b39b3f2f1db51ecd444ea789e1a88f7e3c5711e41029ea42-merged.mount: Deactivated successfully.
Jan 26 17:43:04 compute-0 ceph-mon[75140]: pgmap v4598: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #225. Immutable memtables: 0.
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.742882) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:856] [default] [JOB 141] Flushing memtable with next log file: 225
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449384742921, "job": 141, "event": "flush_started", "num_memtables": 1, "num_entries": 2053, "num_deletes": 251, "total_data_size": 3559023, "memory_usage": 3610040, "flush_reason": "Manual Compaction"}
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:885] [default] [JOB 141] Level-0 flush table #226: started
Jan 26 17:43:04 compute-0 podman[428804]: 2026-01-26 17:43:04.763727242 +0000 UTC m=+0.329220723 container remove 42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_bardeen, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:43:04 compute-0 systemd[1]: libpod-conmon-42ec9b53a26ba527878ba8bd994b1f0580568a422871e04a66d8086be38d44dd.scope: Deactivated successfully.
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449384795036, "cf_name": "default", "job": 141, "event": "table_file_creation", "file_number": 226, "file_size": 3474007, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 91575, "largest_seqno": 93627, "table_properties": {"data_size": 3464557, "index_size": 6007, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18753, "raw_average_key_size": 20, "raw_value_size": 3445891, "raw_average_value_size": 3701, "num_data_blocks": 266, "num_entries": 931, "num_filter_entries": 931, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769449154, "oldest_key_time": 1769449154, "file_creation_time": 1769449384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 226, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 141] Flush lasted 52212 microseconds, and 8140 cpu microseconds.
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.795090) [db/flush_job.cc:967] [default] [JOB 141] Level-0 flush table #226: 3474007 bytes OK
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.795112) [db/memtable_list.cc:519] [default] Level-0 commit table #226 started
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.797472) [db/memtable_list.cc:722] [default] Level-0 commit table #226: memtable #1 done
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.797520) EVENT_LOG_v1 {"time_micros": 1769449384797510, "job": 141, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.797547) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 141] Try to delete WAL files size 3550425, prev total WAL file size 3550425, number of live WAL files 2.
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000222.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.798567) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730039323837' seq:72057594037927935, type:22 .. '7061786F730039353339' seq:0, type:0; will stop at (end)
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 142] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 141 Base level 0, inputs: [226(3392KB)], [224(9859KB)]
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449384798614, "job": 142, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [226], "files_L6": [224], "score": -1, "input_data_size": 13570366, "oldest_snapshot_seqno": -1}
Jan 26 17:43:04 compute-0 sudo[428626]: pam_unix(sudo:session): session closed for user root
Jan 26 17:43:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 142] Generated table #227: 10518 keys, 11787881 bytes, temperature: kUnknown
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449384916206, "cf_name": "default", "job": 142, "event": "table_file_creation", "file_number": 227, "file_size": 11787881, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11723199, "index_size": 37305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 26309, "raw_key_size": 279925, "raw_average_key_size": 26, "raw_value_size": 11539999, "raw_average_value_size": 1097, "num_data_blocks": 1410, "num_entries": 10518, "num_filter_entries": 10518, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1769440801, "oldest_key_time": 0, "file_creation_time": 1769449384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "5d7a55cc-c122-4f03-96ef-144a5c50b4cf", "db_session_id": "3R303H1LYOR8ZMJEHSMX", "orig_file_number": 227, "seqno_to_time_mapping": "N/A"}}
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.916421) [db/compaction/compaction_job.cc:1663] [default] [JOB 142] Compacted 1@0 + 1@6 files to L6 => 11787881 bytes
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.925611) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.3 rd, 100.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 9.6 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 11032, records dropped: 514 output_compression: NoCompression
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.925629) EVENT_LOG_v1 {"time_micros": 1769449384925620, "job": 142, "event": "compaction_finished", "compaction_time_micros": 117655, "compaction_time_cpu_micros": 27421, "output_level": 6, "num_output_files": 1, "total_output_size": 11787881, "num_input_records": 11032, "num_output_records": 10518, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.798510) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.925717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.925721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.925723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.925724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: (Original Log Time 2026/01/26-17:43:04.925725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000226.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449384926822, "job": 0, "event": "table_file_deletion", "file_number": 226}
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000224.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 26 17:43:04 compute-0 ceph-mon[75140]: rocksdb: EVENT_LOG_v1 {"time_micros": 1769449384928661, "job": 0, "event": "table_file_deletion", "file_number": 224}
Jan 26 17:43:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:43:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:43:04 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:43:04 compute-0 sudo[428819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:43:04 compute-0 sudo[428819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:43:04 compute-0 sudo[428819]: pam_unix(sudo:session): session closed for user root
Jan 26 17:43:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:43:05 compute-0 nova_compute[239965]: 2026-01-26 17:43:05.833 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:43:06 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:43:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:07 compute-0 nova_compute[239965]: 2026-01-26 17:43:07.030 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:07 compute-0 ceph-mon[75140]: pgmap v4599: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:43:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:43:07 compute-0 nova_compute[239965]: 2026-01-26 17:43:07.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:09 compute-0 ceph-mon[75140]: pgmap v4600: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 26 17:43:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Jan 26 17:43:09 compute-0 nova_compute[239965]: 2026-01-26 17:43:09.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:10 compute-0 nova_compute[239965]: 2026-01-26 17:43:10.837 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:11 compute-0 ceph-mon[75140]: pgmap v4601: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 0 B/s wr, 50 op/s
Jan 26 17:43:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 26 17:43:12 compute-0 nova_compute[239965]: 2026-01-26 17:43:12.031 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:12 compute-0 sshd-session[428844]: Received disconnect from 45.148.10.157 port 47978:11:  [preauth]
Jan 26 17:43:12 compute-0 sshd-session[428844]: Disconnected from authenticating user root 45.148.10.157 port 47978 [preauth]
Jan 26 17:43:12 compute-0 nova_compute[239965]: 2026-01-26 17:43:12.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:13 compute-0 ceph-mon[75140]: pgmap v4602: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 43 op/s
Jan 26 17:43:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 26 17:43:14 compute-0 ceph-mon[75140]: pgmap v4603: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 0 B/s wr, 26 op/s
Jan 26 17:43:15 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:15 compute-0 nova_compute[239965]: 2026-01-26 17:43:15.840 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:16 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:16 compute-0 ceph-mon[75140]: pgmap v4604: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:17 compute-0 nova_compute[239965]: 2026-01-26 17:43:17.032 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:17 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:17 compute-0 nova_compute[239965]: 2026-01-26 17:43:17.502 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:18 compute-0 ceph-mon[75140]: pgmap v4605: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:19 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:20 compute-0 ceph-mon[75140]: pgmap v4606: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:20 compute-0 nova_compute[239965]: 2026-01-26 17:43:20.844 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:21 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:21 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:21 compute-0 nova_compute[239965]: 2026-01-26 17:43:21.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:21 compute-0 nova_compute[239965]: 2026-01-26 17:43:21.510 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 26 17:43:22 compute-0 nova_compute[239965]: 2026-01-26 17:43:22.034 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:22 compute-0 ceph-mon[75140]: pgmap v4607: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:23 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:24 compute-0 ceph-mon[75140]: pgmap v4608: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:25 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:25 compute-0 nova_compute[239965]: 2026-01-26 17:43:25.845 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:26 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:26 compute-0 nova_compute[239965]: 2026-01-26 17:43:26.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:26 compute-0 nova_compute[239965]: 2026-01-26 17:43:26.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 26 17:43:26 compute-0 nova_compute[239965]: 2026-01-26 17:43:26.511 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 26 17:43:26 compute-0 nova_compute[239965]: 2026-01-26 17:43:26.531 239969 DEBUG nova.compute.manager [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 26 17:43:26 compute-0 ceph-mon[75140]: pgmap v4609: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:27 compute-0 nova_compute[239965]: 2026-01-26 17:43:27.036 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:27 compute-0 podman[428847]: 2026-01-26 17:43:27.381429604 +0000 UTC m=+0.059845672 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 26 17:43:27 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:28 compute-0 ceph-mon[75140]: pgmap v4610: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Optimize plan auto_2026-01-26_17:43:28
Jan 26 17:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 26 17:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] do_upmap
Jan 26 17:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'default.rgw.control', 'images', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', '.rgw.root']
Jan 26 17:43:28 compute-0 ceph-mgr[75431]: [balancer INFO root] prepared 0/10 upmap changes
Jan 26 17:43:29 compute-0 sshd-session[428867]: Accepted publickey for zuul from 192.168.122.10 port 49376 ssh2: ECDSA SHA256:+bxXQQ9mjgGTATCpcZMPbqE6T7Ypcu1bWdLgmv9dcFM
Jan 26 17:43:29 compute-0 systemd-logind[790]: New session 51 of user zuul.
Jan 26 17:43:29 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 26 17:43:29 compute-0 sshd-session[428867]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 26 17:43:29 compute-0 sudo[428871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 26 17:43:29 compute-0 sudo[428871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 26 17:43:29 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:29 compute-0 nova_compute[239965]: 2026-01-26 17:43:29.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:29 compute-0 nova_compute[239965]: 2026-01-26 17:43:29.552 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:43:29 compute-0 nova_compute[239965]: 2026-01-26 17:43:29.553 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:43:29 compute-0 nova_compute[239965]: 2026-01-26 17:43:29.553 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:43:29 compute-0 nova_compute[239965]: 2026-01-26 17:43:29.553 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 26 17:43:29 compute-0 nova_compute[239965]: 2026-01-26 17:43:29.554 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:43:29 compute-0 podman[428905]: 2026-01-26 17:43:29.57718396 +0000 UTC m=+0.130376611 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 26 17:43:30 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:43:30 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1081950815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:43:30 compute-0 nova_compute[239965]: 2026-01-26 17:43:30.255 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:43:30 compute-0 nova_compute[239965]: 2026-01-26 17:43:30.411 239969 WARNING nova.virt.libvirt.driver [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 26 17:43:30 compute-0 nova_compute[239965]: 2026-01-26 17:43:30.412 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3521MB free_disk=59.98720078263432GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 26 17:43:30 compute-0 nova_compute[239965]: 2026-01-26 17:43:30.412 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:43:30 compute-0 nova_compute[239965]: 2026-01-26 17:43:30.413 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:43:30 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:43:30 compute-0 nova_compute[239965]: 2026-01-26 17:43:30.790 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 26 17:43:30 compute-0 nova_compute[239965]: 2026-01-26 17:43:30.792 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 26 17:43:30 compute-0 ceph-mon[75140]: pgmap v4611: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:30 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1081950815' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:43:30 compute-0 nova_compute[239965]: 2026-01-26 17:43:30.813 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 26 17:43:30 compute-0 nova_compute[239965]: 2026-01-26 17:43:30.850 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:31 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 26 17:43:31 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3723961323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:43:31 compute-0 nova_compute[239965]: 2026-01-26 17:43:31.364 239969 DEBUG oslo_concurrency.processutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.551s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 26 17:43:31 compute-0 nova_compute[239965]: 2026-01-26 17:43:31.370 239969 DEBUG nova.compute.provider_tree [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed in ProviderTree for provider: 54ba2329-fe1a-48ee-a2ff-6c84c26128ed update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 26 17:43:31 compute-0 nova_compute[239965]: 2026-01-26 17:43:31.391 239969 DEBUG nova.scheduler.client.report [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Inventory has not changed for provider 54ba2329-fe1a-48ee-a2ff-6c84c26128ed based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 26 17:43:31 compute-0 nova_compute[239965]: 2026-01-26 17:43:31.393 239969 DEBUG nova.compute.resource_tracker [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 26 17:43:31 compute-0 nova_compute[239965]: 2026-01-26 17:43:31.393 239969 DEBUG oslo_concurrency.lockutils [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.980s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:43:31 compute-0 ceph-mgr[75431]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 26 17:43:31 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3723961323' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 26 17:43:32 compute-0 nova_compute[239965]: 2026-01-26 17:43:32.084 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:32 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23380 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:32 compute-0 ceph-mon[75140]: pgmap v4612: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:32 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23382 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:33 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:33 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 26 17:43:33 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009346866' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 26 17:43:33 compute-0 ceph-mon[75140]: from='client.23380 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:33 compute-0 ceph-mon[75140]: from='client.23382 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:33 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3009346866' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 26 17:43:34 compute-0 ceph-mon[75140]: pgmap v4613: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:35 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:35 compute-0 nova_compute[239965]: 2026-01-26 17:43:35.851 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:36 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:36 compute-0 nova_compute[239965]: 2026-01-26 17:43:36.385 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:36 compute-0 ovs-vsctl[429226]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 26 17:43:36 compute-0 ceph-mon[75140]: pgmap v4614: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:37 compute-0 nova_compute[239965]: 2026-01-26 17:43:37.084 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:37 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:37 compute-0 virtqemud[240263]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 26 17:43:37 compute-0 virtqemud[240263]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 26 17:43:37 compute-0 virtqemud[240263]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 26 17:43:38 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: cache status {prefix=cache status} (starting...)
Jan 26 17:43:38 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: client ls {prefix=client ls} (starting...)
Jan 26 17:43:38 compute-0 lvm[429579]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:43:38 compute-0 lvm[429579]: VG ceph_vg1 finished
Jan 26 17:43:38 compute-0 lvm[429582]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:43:38 compute-0 lvm[429582]: VG ceph_vg2 finished
Jan 26 17:43:38 compute-0 lvm[429587]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:43:38 compute-0 lvm[429587]: VG ceph_vg0 finished
Jan 26 17:43:38 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23386 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:38 compute-0 ceph-mon[75140]: pgmap v4615: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:39 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: damage ls {prefix=damage ls} (starting...)
Jan 26 17:43:39 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23388 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:39 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:39 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: dump loads {prefix=dump loads} (starting...)
Jan 26 17:43:39 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 26 17:43:39 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23390 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:39 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 26 17:43:40 compute-0 ceph-mon[75140]: from='client.23386 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:40 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 26 17:43:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 26 17:43:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950019553' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 26 17:43:40 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 26 17:43:40 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23394 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:40 compute-0 ceph-mgr[75431]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 17:43:40 compute-0 ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mgr-compute-0-jtdjmy[75427]: 2026-01-26T17:43:40.351+0000 7fe66354f640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 17:43:40 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 26 17:43:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:43:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2677289760' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:43:40 compute-0 nova_compute[239965]: 2026-01-26 17:43:40.854 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:40 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 26 17:43:40 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2354323797' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 26 17:43:41 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 26 17:43:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:41 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:41 compute-0 ceph-mon[75140]: from='client.23388 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:41 compute-0 ceph-mon[75140]: pgmap v4616: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:41 compute-0 ceph-mon[75140]: from='client.23390 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3950019553' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 26 17:43:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2677289760' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:43:41 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2354323797' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 26 17:43:41 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: ops {prefix=ops} (starting...)
Jan 26 17:43:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 26 17:43:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1802526680' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 26 17:43:41 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 26 17:43:41 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1695094758' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 26 17:43:42 compute-0 nova_compute[239965]: 2026-01-26 17:43:42.086 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:42 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23406 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:42 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: session ls {prefix=session ls} (starting...)
Jan 26 17:43:42 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 26 17:43:42 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1090486996' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 26 17:43:42 compute-0 ceph-mds[96626]: mds.cephfs.compute-0.slnfyx asok_command: status {prefix=status} (starting...)
Jan 26 17:43:42 compute-0 ceph-mon[75140]: from='client.23394 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:42 compute-0 ceph-mon[75140]: pgmap v4617: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1802526680' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 26 17:43:42 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1695094758' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 26 17:43:42 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23408 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 26 17:43:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3844242839' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 26 17:43:43 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 26 17:43:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809415705' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 26 17:43:43 compute-0 ceph-mon[75140]: from='client.23406 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1090486996' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 26 17:43:43 compute-0 ceph-mon[75140]: from='client.23408 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3844242839' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 26 17:43:43 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3809415705' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 26 17:43:43 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 26 17:43:43 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1339034614' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 26 17:43:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 26 17:43:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1488079567' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 26 17:43:44 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 26 17:43:44 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3087138022' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 26 17:43:44 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23420 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:44 compute-0 ceph-mgr[75431]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 17:43:44 compute-0 ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mgr-compute-0-jtdjmy[75427]: 2026-01-26T17:43:44.770+0000 7fe66354f640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 26 17:43:44 compute-0 ceph-mon[75140]: pgmap v4618: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1339034614' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 26 17:43:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1488079567' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 26 17:43:44 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3087138022' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 26 17:43:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 26 17:43:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836605377' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 26 17:43:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 26 17:43:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/934150756' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 26 17:43:45 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:45 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 26 17:43:45 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2013079863' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 26 17:43:45 compute-0 ceph-mon[75140]: from='client.23420 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2836605377' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 26 17:43:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/934150756' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 26 17:43:45 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2013079863' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 26 17:43:45 compute-0 nova_compute[239965]: 2026-01-26 17:43:45.857 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 26 17:43:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1903991675' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 26 17:43:46 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23430 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:46 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23434 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:46 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 26 17:43:46 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4102778118' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:29.365672+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:30.365864+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:31.366079+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:32.366362+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:33.366547+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:34.366723+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:35.366910+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:36.367092+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:37.367321+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:38.367615+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:39.367867+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301776896 unmapped: 70664192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:40.368134+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301785088 unmapped: 70656000 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:41.368373+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301785088 unmapped: 70656000 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:42.368585+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301785088 unmapped: 70656000 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:43.368765+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301785088 unmapped: 70656000 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:44.369102+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301785088 unmapped: 70656000 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:45.369309+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301785088 unmapped: 70656000 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:46.369523+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:47.369772+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:48.369966+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:49.370228+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:50.370439+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:51.370612+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:52.370825+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:53.371102+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:54.371280+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:55.371429+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:56.371700+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:57.371948+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:58.372245+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:59.372424+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:00.372732+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:01.372944+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:02.373139+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:03.373292+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:04.373446+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:05.373688+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:06.373917+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:07.374145+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:08.374305+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:09.374482+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:10.374729+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:11.374933+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:12.375106+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:13.375325+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:14.375530+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:15.375672+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:16.375835+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:17.376077+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:18.376261+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:19.376423+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:20.376636+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:21.376802+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:22.377012+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:23.377167+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301424640 unmapped: 71016448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:24.377399+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 71008256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:25.377635+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 71008256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:26.377857+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 71008256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:27.378084+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 71008256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:28.378302+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 71008256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:29.378543+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 71008256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:30.378788+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301432832 unmapped: 71008256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:31.379229+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:32.379495+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:33.379780+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:34.380138+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:35.380285+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:36.380464+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:37.380615+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:38.380787+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:39.380943+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:40.381157+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:41.381374+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:42.381610+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:43.381848+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:44.382201+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:45.382527+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:46.383264+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:47.383577+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:48.383775+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:49.384145+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:50.384404+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:51.384696+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:52.385043+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:53.385400+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:54.385555+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:55.385782+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:56.386070+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:57.386376+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:58.386581+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:59.386804+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301441024 unmapped: 71000064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:00.387062+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:01.387218+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:02.387402+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:03.387545+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:04.387711+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 149K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.71 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:05.387889+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:06.388019+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:07.388185+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:08.388362+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:09.388519+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:10.388811+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:11.389024+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:12.389245+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:13.389455+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:14.389620+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:15.389771+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:16.390001+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:17.390190+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:18.390322+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301449216 unmapped: 70991872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:19.390482+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 70983680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:20.390683+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 70983680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:21.390840+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 70983680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:22.391013+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 70983680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:23.391183+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 70983680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:24.391341+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 70983680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:25.391561+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301457408 unmapped: 70983680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:26.391714+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:27.391864+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:28.392032+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:29.392217+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:30.392385+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:31.392583+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:32.392815+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:33.392970+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:34.393141+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:35.393544+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:36.393721+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:37.393882+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:38.394062+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:39.394267+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301465600 unmapped: 70975488 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:40.394481+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 70967296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:41.394626+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 70967296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:42.394785+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 70967296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:43.394918+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 70967296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:44.395119+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301473792 unmapped: 70967296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:45.395375+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:46.395612+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:47.395821+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:48.396064+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:49.396294+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:50.396531+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:51.396697+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:52.396881+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:53.397018+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:54.397164+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:55.397359+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:56.397572+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:57.397755+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:58.397950+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:59.398289+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:00.398751+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:01.398879+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:02.399091+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:03.399386+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301481984 unmapped: 70959104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:04.399600+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:05.399752+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:06.399923+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:07.400056+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:08.400213+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:09.400345+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:10.400520+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:11.400652+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:12.400852+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:13.401058+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:14.401206+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:15.401400+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301490176 unmapped: 70950912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:16.401572+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 70942720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:17.401745+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 70942720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:18.401891+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 70942720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:19.402029+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 70942720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:20.402217+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 70942720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.150909424s of 300.348175049s, submitted: 24
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:21.402344+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 70942720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:22.402524+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 70942720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:23.402694+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 70942720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:24.402846+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301498368 unmapped: 70942720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:25.404900+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301506560 unmapped: 70934528 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:26.405062+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301522944 unmapped: 70918144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:27.405196+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301531136 unmapped: 70909952 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:28.405391+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [0,0,1])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:29.405615+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:30.405804+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:31.406001+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:32.406163+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:33.406292+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:34.406432+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:35.406641+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:36.406893+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:37.407084+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:38.407241+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:39.407379+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:40.407599+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:41.407746+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:42.408040+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:43.408261+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:44.408453+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:45.408739+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:46.408958+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:47.409178+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:48.409303+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:49.409516+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:50.409731+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:51.409891+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:52.410019+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:53.410151+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:54.410268+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:55.410393+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:56.410529+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:57.410664+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:58.410793+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:59.410925+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:00.411137+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:01.411330+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:02.411487+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:03.411648+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:04.411778+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:05.411959+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:06.412185+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:07.412344+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:08.412472+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:09.412594+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:10.412763+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:11.412965+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:12.413222+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:13.413373+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:14.413563+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:15.413770+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:16.413916+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:17.414144+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:18.414401+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:19.414588+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:20.414849+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:21.415051+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:22.415621+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301547520 unmapped: 70893568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:23.415922+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:24.416073+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:25.416234+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:26.416427+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:27.416639+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:28.416851+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:29.417063+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:30.417283+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:31.417428+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:32.417647+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:33.417815+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:34.418014+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:35.418236+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:36.418418+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:37.418619+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:38.418805+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:39.419055+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:40.419237+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:41.419431+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:42.419639+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301555712 unmapped: 70885376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:43.419805+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:44.420068+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:45.420276+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:46.420469+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:47.420654+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:48.420880+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:49.421171+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:50.421512+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:51.421714+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:52.421949+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:53.422203+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:54.422387+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:55.422538+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:56.422752+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:57.422929+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301563904 unmapped: 70877184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:58.423105+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 70868992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:59.423260+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 70868992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:00.423473+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 70868992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:01.423683+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 70868992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:02.423830+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 70868992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:03.424055+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 70868992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:04.424224+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 70868992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:05.424456+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 70868992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:06.424593+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301572096 unmapped: 70868992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:07.424782+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:08.425036+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:09.425293+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:10.425573+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:11.425709+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:12.425921+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:13.426101+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:14.426261+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:15.426378+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:16.426619+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:17.426780+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:18.427059+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:19.427239+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:20.427455+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:21.427609+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:22.427764+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:23.427929+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:24.428110+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301580288 unmapped: 70860800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:25.428314+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:26.428469+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:27.428711+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:28.428908+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:29.429081+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:30.429337+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:31.429523+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:32.429754+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:33.429939+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:34.430106+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:35.430301+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:36.430517+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:37.430657+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:38.430820+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:39.431045+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:40.431310+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:41.431512+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:42.431694+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:43.431838+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301588480 unmapped: 70852608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:44.432083+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:45.432286+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:46.432464+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:47.432642+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:48.432816+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:49.432952+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:50.433192+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:51.433394+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:52.433597+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:53.433739+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:54.433949+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:55.434147+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301596672 unmapped: 70844416 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:56.434280+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301604864 unmapped: 70836224 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:57.434453+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:58.434635+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:59.434808+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:00.434996+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:01.435163+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:02.435315+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:03.435482+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:04.435658+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:05.435817+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:06.436014+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:07.436195+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:08.436392+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:09.436578+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:10.436815+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:11.437013+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:12.437293+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:13.437534+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:14.437736+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:15.438056+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:16.438529+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301613056 unmapped: 70828032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:17.438732+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:18.438928+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:19.439142+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:20.439367+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:21.439495+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:22.439635+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:23.439790+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:24.439955+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:25.440146+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:26.440319+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:27.440493+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:28.440660+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:29.440837+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:30.441061+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:31.441262+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:32.441426+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:33.441553+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301621248 unmapped: 70819840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:34.441676+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:35.441848+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:36.442096+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:37.442295+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:38.442469+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:39.442627+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:40.442914+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:41.443087+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:42.443258+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:43.443426+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:44.443585+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:45.443715+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:46.443932+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:47.444117+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301637632 unmapped: 70803456 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:48.444289+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:49.444468+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:50.444649+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:51.444796+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:52.444956+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:53.445186+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:54.445383+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:55.445526+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:56.445728+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:57.445909+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:58.446125+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:59.446320+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:00.446573+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:01.446741+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:02.447006+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301654016 unmapped: 70787072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:03.447143+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:04.447352+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:05.447567+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:06.447771+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:07.447919+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:08.448048+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:09.448177+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:10.448416+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:11.448573+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:12.448736+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:13.448916+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301662208 unmapped: 70778880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:14.449116+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:15.449292+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:16.449517+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:17.449717+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:18.449898+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:19.450213+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:20.450546+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:21.450699+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:22.450883+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:23.451100+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:24.451252+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:25.451497+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:26.451673+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301670400 unmapped: 70770688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:27.451840+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 70762496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:28.452084+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301678592 unmapped: 70762496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:29.452331+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 70754304 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:30.452641+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 70754304 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:31.452849+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 70754304 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:32.452999+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 70754304 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:33.453163+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 70754304 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:34.453344+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 70754304 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:35.453567+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301686784 unmapped: 70754304 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:36.453816+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:37.453951+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:38.454146+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:39.454329+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:40.454598+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:41.454755+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:42.454950+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:43.455244+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:44.455474+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:45.455650+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301694976 unmapped: 70746112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:46.455804+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:47.456056+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:48.456230+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:49.456369+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:50.456571+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:51.456816+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:52.457093+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:53.457298+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:54.457436+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:55.457749+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:56.457907+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:57.458898+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:58.459646+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:59.459814+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301703168 unmapped: 70737920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:00.460037+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 70729728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:01.460662+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 70729728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:02.461113+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301711360 unmapped: 70729728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:03.461452+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 70721536 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:04.461764+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 70721536 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:05.462258+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 70721536 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:06.462685+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301719552 unmapped: 70721536 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:07.462955+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 70713344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:08.463413+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301727744 unmapped: 70713344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:09.463597+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:10.463825+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:11.464057+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:12.464320+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:13.464602+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:14.464752+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:15.464940+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:16.465098+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:17.465366+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:18.465605+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301735936 unmapped: 70705152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:19.465796+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:20.466010+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:21.466222+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:22.466415+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:23.466567+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:24.466710+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:25.466893+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:26.467112+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:27.467249+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:28.467439+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:29.467593+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:30.467812+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:31.467958+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:32.468119+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:33.468335+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:34.468473+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:35.468587+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:36.468719+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:37.484398+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:38.484583+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301744128 unmapped: 70696960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:39.484752+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301752320 unmapped: 70688768 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:40.484952+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 70680576 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:41.485242+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 70680576 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:42.485439+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301760512 unmapped: 70680576 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:43.485668+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:44.485892+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:45.486029+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:46.486276+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:47.486520+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:48.486740+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:49.486958+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:50.487382+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:51.487592+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:52.487814+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301768704 unmapped: 70672384 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:53.487943+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301776896 unmapped: 70664192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:54.488146+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301776896 unmapped: 70664192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:55.489133+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:56.489312+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:57.489490+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:58.489651+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:59.489827+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:00.490053+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:01.490257+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:02.490415+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:03.490597+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:04.490794+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:05.491121+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:06.491303+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301793280 unmapped: 70647808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:07.491471+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:08.491617+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:09.491792+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:10.491997+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:11.492165+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:12.492374+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:13.492535+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:14.492753+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301801472 unmapped: 70639616 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:15.492932+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:16.493143+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:17.493313+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:18.493499+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:19.493709+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:20.493911+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:21.494088+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:22.494462+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301809664 unmapped: 70631424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:23.494684+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:24.494911+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:25.495139+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:26.495307+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:27.495473+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:28.495604+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:29.495756+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:30.495955+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:31.496162+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:32.496415+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:33.496887+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:34.497538+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:35.497946+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:36.499172+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:37.499551+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301817856 unmapped: 70623232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:38.499694+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301826048 unmapped: 70615040 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:39.500043+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301826048 unmapped: 70615040 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:40.500693+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301834240 unmapped: 70606848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:41.500885+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301834240 unmapped: 70606848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:42.502663+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301834240 unmapped: 70606848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:43.502799+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301834240 unmapped: 70606848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:44.503490+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301834240 unmapped: 70606848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:45.503704+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301842432 unmapped: 70598656 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:46.503852+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301842432 unmapped: 70598656 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:47.504113+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301842432 unmapped: 70598656 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:48.504632+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301842432 unmapped: 70598656 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:49.504966+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301850624 unmapped: 70590464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:50.505370+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301850624 unmapped: 70590464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:51.505662+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301850624 unmapped: 70590464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:52.505827+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301850624 unmapped: 70590464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:53.506053+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301850624 unmapped: 70590464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:54.506472+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301850624 unmapped: 70590464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:55.506643+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301850624 unmapped: 70590464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:56.506966+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301850624 unmapped: 70590464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:57.507216+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301858816 unmapped: 70582272 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:58.507484+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301858816 unmapped: 70582272 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:59.507691+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301867008 unmapped: 70574080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:00.508017+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:01.508226+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:02.508505+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:03.508703+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:04.508882+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:05.509131+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:06.509297+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:07.509934+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:08.510595+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:09.510758+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:10.511021+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:11.511327+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:12.511457+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:13.511674+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:14.511802+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:15.512039+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301875200 unmapped: 70565888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:16.512167+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301883392 unmapped: 70557696 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:17.512336+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301883392 unmapped: 70557696 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:18.512507+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301883392 unmapped: 70557696 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:19.512711+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301883392 unmapped: 70557696 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:20.512904+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301883392 unmapped: 70557696 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:21.513044+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301883392 unmapped: 70557696 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:22.513136+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301883392 unmapped: 70557696 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:23.513285+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:24.513440+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:25.513634+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:26.513779+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:27.513877+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:28.514110+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:29.514265+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:30.514484+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:31.514637+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:32.515090+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:33.515358+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301891584 unmapped: 70549504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:34.515505+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301899776 unmapped: 70541312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:35.515638+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301899776 unmapped: 70541312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:36.515784+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301899776 unmapped: 70541312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:37.515928+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301899776 unmapped: 70541312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:38.516111+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301899776 unmapped: 70541312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:39.516297+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301899776 unmapped: 70541312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:40.516493+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301899776 unmapped: 70541312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:41.516696+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301899776 unmapped: 70541312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:42.516859+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301907968 unmapped: 70533120 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:43.517071+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 70524928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:44.517214+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 70524928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:45.517394+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 70524928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:46.517539+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 70524928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:47.517703+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 70524928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:48.517892+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 70524928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:49.518026+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301916160 unmapped: 70524928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:50.518225+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 70516736 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:51.518509+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301924352 unmapped: 70516736 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:52.518692+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 70508544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:53.518866+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 70508544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:54.519079+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301932544 unmapped: 70508544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:55.519250+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:56.519434+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:57.519612+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:58.519809+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:59.520090+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:00.520324+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:01.520524+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-01-26T17:20:02.520721+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _finish_auth 0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:02.521966+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:03.520897+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:04.521064+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301948928 unmapped: 70492160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:05.521213+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301957120 unmapped: 70483968 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:06.521378+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301957120 unmapped: 70483968 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:07.521482+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301957120 unmapped: 70483968 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:08.521632+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301957120 unmapped: 70483968 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:09.521811+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:10.522013+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:11.522205+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:12.522388+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:13.522565+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:14.522744+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:15.522885+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:16.523103+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:17.524118+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:18.524284+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:19.524417+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:20.524599+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:21.524753+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301965312 unmapped: 70475776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:22.524897+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301973504 unmapped: 70467584 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:23.525114+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:24.525304+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:25.525453+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:26.525620+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:27.525758+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:28.525955+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:29.526228+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:30.526430+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:31.526566+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:32.526717+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:33.526870+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301981696 unmapped: 70459392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:34.527085+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:35.527265+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:36.527440+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:37.527605+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:38.527753+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:39.527966+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:40.528218+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:41.528548+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:42.528700+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:43.528861+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 301998080 unmapped: 70443008 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:44.529019+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302006272 unmapped: 70434816 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:45.529160+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 70426624 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:46.529341+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302014464 unmapped: 70426624 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:47.529477+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 70418432 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:48.529622+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 70418432 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:49.529880+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 70418432 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:50.530080+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 70418432 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:51.530225+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 70418432 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:52.530405+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 70418432 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:53.530552+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 70418432 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:54.530705+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302022656 unmapped: 70418432 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:55.530838+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302030848 unmapped: 70410240 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:56.531076+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302030848 unmapped: 70410240 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:57.531330+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302030848 unmapped: 70410240 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:58.531481+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302030848 unmapped: 70410240 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:59.531630+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302030848 unmapped: 70410240 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:00.531826+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302030848 unmapped: 70410240 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:01.532005+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302030848 unmapped: 70410240 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:02.532147+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302039040 unmapped: 70402048 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:03.532304+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302039040 unmapped: 70402048 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:04.532463+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 149K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.70 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302039040 unmapped: 70402048 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:05.532665+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 70393856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:06.532906+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 70393856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:07.533086+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 70393856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:08.533332+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 70393856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:09.533515+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 70393856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:10.533733+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 70393856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:11.533888+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 70393856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:12.534053+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 70393856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:13.534251+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302047232 unmapped: 70393856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:14.534431+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:15.534625+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:16.534774+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:17.534946+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:18.535084+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:19.535247+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:20.535420+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:21.536852+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:22.537034+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:23.537189+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:24.537338+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:25.537469+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:26.537574+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:27.537691+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:28.537842+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302055424 unmapped: 70385664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:29.537989+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 70369280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-mon[75140]: pgmap v4619: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1903991675' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 26 17:43:46 compute-0 ceph-mon[75140]: from='client.23430 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:46 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4102778118' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:30.538181+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 70369280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:31.538292+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 70369280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:32.538432+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 70369280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:33.538601+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302071808 unmapped: 70369280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:34.538732+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 70361088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:35.538863+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 70361088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:36.538995+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 70361088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:37.539112+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 70361088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:38.539263+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302080000 unmapped: 70361088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:39.539390+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 70352896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:40.539616+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 70352896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:41.539739+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 70352896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:42.539896+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 70352896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:43.540159+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 70352896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:44.540449+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 70352896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:45.540582+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 70352896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:46.540741+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 70352896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:47.540857+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302088192 unmapped: 70352896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:48.541020+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 70344704 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:49.541184+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 70344704 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:50.541363+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302096384 unmapped: 70344704 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:51.541515+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302104576 unmapped: 70336512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:52.541630+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302104576 unmapped: 70336512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:53.541798+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:54.541959+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:55.542298+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:56.542504+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:57.542619+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:58.542760+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:59.542861+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:00.543034+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:01.543213+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:02.543372+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:03.543570+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:04.543765+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:05.543947+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:06.544078+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:07.544231+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:08.544379+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:09.544587+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:10.544837+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302112768 unmapped: 70328320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:11.545010+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302129152 unmapped: 70311936 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:12.545146+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302137344 unmapped: 70303744 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:13.545278+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302137344 unmapped: 70303744 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:14.545421+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302137344 unmapped: 70303744 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:15.546441+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302145536 unmapped: 70295552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:16.546582+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302145536 unmapped: 70295552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:17.546754+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302145536 unmapped: 70295552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:18.547083+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302145536 unmapped: 70295552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 596.735168457s of 597.700744629s, submitted: 90
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:19.547205+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302153728 unmapped: 70287360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:20.547491+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302153728 unmapped: 70287360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:21.547635+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302178304 unmapped: 70262784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:22.547837+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302178304 unmapped: 70262784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:23.548018+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302178304 unmapped: 70262784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:24.548189+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302178304 unmapped: 70262784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:25.548443+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302178304 unmapped: 70262784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:26.548624+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 302186496 unmapped: 70254592 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:27.548780+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:28.548933+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:29.549094+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:30.549281+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:31.549441+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:32.549589+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:33.549805+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:34.550037+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:35.550190+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:36.550340+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:37.550480+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:38.550706+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:39.550858+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:40.551064+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:41.551225+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:42.551364+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:43.551513+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:44.551689+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:45.551917+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:46.552156+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:47.552336+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:48.552484+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:49.552634+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:50.552807+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:51.552939+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303276032 unmapped: 69165056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:52.553093+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:53.553230+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:54.553400+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:55.553535+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:56.553691+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:57.553957+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:58.554157+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:59.554348+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:00.554539+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:01.554685+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:02.554833+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:03.555011+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:04.555172+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:05.555318+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:06.555465+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:07.555675+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:08.555832+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:09.556061+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:10.556310+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:11.556488+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303284224 unmapped: 69156864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:12.556632+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:13.556813+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:14.556999+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:15.557121+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:16.557261+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:17.557416+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:18.557611+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:19.557764+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:20.557967+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:21.558157+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:22.558346+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:23.558509+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:24.559321+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:25.559638+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:26.560442+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:27.560780+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303292416 unmapped: 69148672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:28.561417+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:29.561881+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:30.562249+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:31.562448+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:32.562839+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:33.562992+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:34.563120+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:35.563266+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:36.563484+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:37.563666+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:38.563819+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:39.564056+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:40.564316+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:41.564549+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:42.564811+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303300608 unmapped: 69140480 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:43.565210+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:44.565368+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:45.565532+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:46.565747+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:47.566017+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:48.566147+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:49.566320+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:50.566511+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:51.566648+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:52.566761+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:53.566917+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:54.567085+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303308800 unmapped: 69132288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:55.567214+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:56.567385+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:57.567521+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:58.567708+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:59.567823+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:00.568023+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:01.568154+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:02.568312+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:03.568443+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:04.568583+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:05.568769+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:06.568925+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:07.569095+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:08.569248+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:09.569368+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:10.569528+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:11.569710+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:12.569862+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303316992 unmapped: 69124096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:13.570033+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 69115904 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:14.570182+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 69115904 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:15.570304+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303325184 unmapped: 69115904 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:16.570466+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 69099520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:17.570626+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 69099520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:18.571016+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 69099520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:19.571265+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 69099520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:20.571492+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 69099520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:21.571664+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303341568 unmapped: 69099520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:22.571825+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 69091328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:23.572099+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303349760 unmapped: 69091328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:24.572240+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 69083136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:25.572390+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303357952 unmapped: 69083136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:26.572551+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:27.572701+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:28.573847+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:29.574585+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:30.575714+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:31.575868+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:32.576289+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:33.576567+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:34.576811+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:35.577315+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:36.577680+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303366144 unmapped: 69074944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:37.577944+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303374336 unmapped: 69066752 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:38.578436+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:39.578614+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:40.578893+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:41.579097+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:42.579310+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:43.579459+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:44.579669+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:45.580101+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:46.580231+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:47.580380+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:48.580559+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:49.580763+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:50.581061+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:51.581235+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303382528 unmapped: 69058560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:52.581471+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 69050368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:53.581675+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 69050368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:54.581804+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 69050368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:55.581946+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 69050368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:56.582124+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 69050368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:57.582306+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303390720 unmapped: 69050368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:58.582552+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 69042176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:59.582731+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 69042176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:00.582964+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 69042176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:01.583163+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303398912 unmapped: 69042176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:02.583345+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:03.583732+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:04.583927+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:05.584155+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:06.584243+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:07.584344+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:08.584502+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:09.584651+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:10.584858+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:11.585021+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:12.585164+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:13.585327+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:14.585482+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:15.585633+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303407104 unmapped: 69033984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:16.585812+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 69025792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:17.586112+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303415296 unmapped: 69025792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:18.586300+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303423488 unmapped: 69017600 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:19.586433+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:20.586638+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:21.586783+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:22.587024+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:23.587189+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:24.587438+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:25.587594+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:26.587732+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:27.587888+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:28.588031+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:29.588199+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:30.588372+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:31.588516+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:32.589842+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:33.589997+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:34.590152+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:35.590342+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303439872 unmapped: 69001216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:36.590557+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 68993024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:37.590691+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 68993024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:38.590818+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 68993024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:39.591234+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303448064 unmapped: 68993024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:40.591506+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:41.591745+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:42.593153+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:43.593370+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:44.593558+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:45.593843+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:46.594061+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:47.594240+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:48.594378+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:49.594587+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:50.595631+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:51.596032+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:52.596217+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:53.596355+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:54.596490+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:55.596657+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:56.596795+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:57.596872+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:58.597045+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303456256 unmapped: 68984832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:59.597171+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 68976640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:00.597397+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303464448 unmapped: 68976640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:01.597549+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 68960256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:02.597705+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 68960256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:03.597894+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303480832 unmapped: 68960256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:04.598088+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:05.598275+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:06.598432+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:07.598537+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:08.598706+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:09.598877+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:10.599065+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:11.599205+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:12.599415+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:13.599546+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:14.599718+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:15.599865+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:16.600039+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:17.600190+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:18.600390+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:19.600583+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303489024 unmapped: 68952064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:20.600775+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 68943872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:21.600906+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 68943872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:22.601042+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 68943872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:23.601196+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 68943872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:24.601348+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303497216 unmapped: 68943872 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:25.601515+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 68935680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:26.601727+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 68935680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:27.601882+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 68935680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:28.602023+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 68935680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:29.602148+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 68935680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:30.602370+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303505408 unmapped: 68935680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:31.602538+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 68919296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:32.602725+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 68919296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:33.602923+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 68919296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:34.603135+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 68919296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:35.603318+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 68919296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:36.603587+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 68919296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:37.603783+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 68919296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:38.603931+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 68919296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:39.604129+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303521792 unmapped: 68919296 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:40.604402+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 68911104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:41.604573+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 68911104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:42.604905+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303529984 unmapped: 68911104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:43.605142+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 68902912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:44.605293+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 68902912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:45.605464+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 68902912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:46.605654+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 68902912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:47.605825+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 68902912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:48.606069+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 68902912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:49.606287+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 68902912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:50.606529+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 68902912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:51.606695+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303538176 unmapped: 68902912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:52.606895+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 68894720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:53.607067+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 68894720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:54.607207+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303546368 unmapped: 68894720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:55.607364+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 68886528 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:56.607543+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 68886528 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:57.607718+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303554560 unmapped: 68886528 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:58.607871+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 68878336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:59.608055+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 68878336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:00.608273+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 68878336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:01.608410+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 68878336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:02.608557+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303562752 unmapped: 68878336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:03.608717+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 68870144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:04.608881+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 68870144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:05.609033+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 68870144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:06.609164+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303570944 unmapped: 68870144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:07.609341+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303579136 unmapped: 68861952 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:08.609508+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303579136 unmapped: 68861952 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:09.609711+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303579136 unmapped: 68861952 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:10.609919+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 68853760 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:11.610100+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 68853760 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:12.610761+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303587328 unmapped: 68853760 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:13.611006+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 68845568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:14.611234+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 68845568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:15.612055+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 68845568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:16.612251+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 68845568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:17.612698+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 68845568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:18.613137+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 68845568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:19.613663+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 68845568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:20.614138+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 68845568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:21.614542+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303595520 unmapped: 68845568 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:22.614824+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 68837376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:23.615023+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 68837376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:24.615266+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 68837376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:25.615484+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 68837376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:26.615707+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 68837376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:27.615945+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303603712 unmapped: 68837376 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:28.616273+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 68829184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:29.616491+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 68829184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:30.616749+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 68829184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:31.616957+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 68829184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:32.617161+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 68829184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:33.617426+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303611904 unmapped: 68829184 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:34.617634+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 68820992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:35.617853+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 68820992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:36.618059+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 68820992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:37.618215+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 68820992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:38.618407+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 68820992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:39.618659+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 68820992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:40.619060+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 68820992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:41.619398+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 68820992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:42.620858+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303620096 unmapped: 68820992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:43.621067+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303628288 unmapped: 68812800 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:44.621214+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:45.622359+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:46.622562+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:47.622711+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:48.623189+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:49.623405+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:50.623644+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:51.623803+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:52.624353+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:53.625072+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:54.625336+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:55.625536+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:56.625713+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:57.625873+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303636480 unmapped: 68804608 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:58.626025+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 68788224 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:59.626178+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 68788224 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:00.626355+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 68788224 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:01.626522+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 68788224 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:02.626696+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 68788224 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:03.626886+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 68788224 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:04.627066+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303652864 unmapped: 68788224 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:05.627195+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 68780032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:06.627391+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 68780032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:07.627531+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 68780032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:08.627766+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 68780032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:09.627965+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 68780032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:10.628217+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 68780032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:11.628439+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 68780032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:12.628587+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 68780032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:13.628746+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303661056 unmapped: 68780032 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:14.628950+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:15.629123+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:16.629249+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:17.629439+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:18.629574+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:19.629696+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:20.629854+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:21.630076+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:22.630221+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:23.630355+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303669248 unmapped: 68771840 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:24.630554+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303677440 unmapped: 68763648 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:25.630723+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303677440 unmapped: 68763648 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:26.630925+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303677440 unmapped: 68763648 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:27.631068+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 68747264 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:28.631266+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 68747264 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:29.631382+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 68747264 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:30.631542+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 68747264 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:31.631652+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303693824 unmapped: 68747264 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:32.631768+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 68739072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:33.631899+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 68739072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:34.632080+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 68739072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:35.632242+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 68739072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:36.632425+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 68739072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:37.632574+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 68739072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:38.632675+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 68739072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:39.632863+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 68739072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:40.633050+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303702016 unmapped: 68739072 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:41.633197+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 68730880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:42.633400+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 68730880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:43.633565+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 68730880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:44.633722+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 68730880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:45.633872+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 68730880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:46.634004+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303710208 unmapped: 68730880 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:47.634279+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303718400 unmapped: 68722688 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:48.634461+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303726592 unmapped: 68714496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:49.634614+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303726592 unmapped: 68714496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:50.634822+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303726592 unmapped: 68714496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:51.635034+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303726592 unmapped: 68714496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:52.635188+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303726592 unmapped: 68714496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:53.635334+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303726592 unmapped: 68714496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:54.635491+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303726592 unmapped: 68714496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:55.635641+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303726592 unmapped: 68714496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:56.635775+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303726592 unmapped: 68714496 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:57.635906+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 68706304 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:58.636157+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303734784 unmapped: 68706304 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:59.636349+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303742976 unmapped: 68698112 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:00.636568+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 68689920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:01.636708+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303751168 unmapped: 68689920 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:02.636832+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:03.637022+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:04.637171+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:05.637311+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:06.637505+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:07.637654+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:08.637846+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:09.638053+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:10.638317+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:11.638451+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:12.638628+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:13.638888+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:14.639099+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:15.639249+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:16.639403+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303759360 unmapped: 68681728 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:17.639553+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303767552 unmapped: 68673536 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:18.639688+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303767552 unmapped: 68673536 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:19.639817+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303767552 unmapped: 68673536 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:20.640039+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:21.640238+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:22.640381+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:23.640544+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:24.640709+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:25.640874+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:26.641146+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:27.641310+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:28.641496+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:29.641720+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:30.642038+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:31.642198+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:32.642328+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:33.642492+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:34.642641+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:35.642779+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:36.642917+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303775744 unmapped: 68665344 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:37.643091+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 68657152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:38.643280+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 68657152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:39.643420+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 68657152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:40.643613+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 68657152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:41.643770+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 68657152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:42.643927+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 68657152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:43.644104+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 68657152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:44.644288+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303783936 unmapped: 68657152 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:45.644473+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 68648960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:46.644634+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 68648960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:47.644776+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 68648960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:48.645060+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 68648960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:49.645269+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 68648960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:50.645515+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303792128 unmapped: 68648960 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:51.645808+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 68640768 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:52.646172+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 68640768 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:53.646507+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 68640768 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:54.646790+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 68640768 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:55.647053+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 68640768 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:56.647327+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 68640768 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:57.647578+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 68640768 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:58.647784+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303800320 unmapped: 68640768 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:59.648023+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:00.648312+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:01.648530+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:02.648695+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:03.648879+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:04.649039+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:05.649203+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:06.649383+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:07.649553+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:08.649723+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303824896 unmapped: 68616192 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:09.649879+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 68608000 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:10.650089+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303833088 unmapped: 68608000 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:11.650231+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:12.650403+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:13.650550+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:14.650720+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:15.650851+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:16.651036+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:17.651266+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:18.652948+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:19.653090+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:20.653237+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:21.653357+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:22.653495+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:23.653643+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:24.653738+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:25.653878+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:26.654020+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:27.654173+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:28.654281+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:29.654470+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:30.654665+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:31.654790+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:32.654950+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:33.655138+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:34.655325+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303841280 unmapped: 68599808 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:35.655457+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303857664 unmapped: 68583424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:36.655585+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303857664 unmapped: 68583424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:37.655724+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 68575232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:38.655907+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 68575232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:39.656042+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 68575232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:40.656232+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 68575232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:41.656417+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 68575232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:42.656573+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 68575232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:43.656699+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 68575232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:44.656839+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 68575232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:45.657021+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303865856 unmapped: 68575232 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:46.657184+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303874048 unmapped: 68567040 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:47.657337+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303874048 unmapped: 68567040 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:48.657466+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303882240 unmapped: 68558848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:49.657541+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303882240 unmapped: 68558848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:50.657697+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303882240 unmapped: 68558848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:51.657865+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303882240 unmapped: 68558848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:52.658078+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303882240 unmapped: 68558848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:53.658256+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303882240 unmapped: 68558848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:54.658445+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303882240 unmapped: 68558848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:55.658605+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303890432 unmapped: 68550656 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:56.661772+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 68542464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:57.662046+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 68542464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:58.662580+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 68542464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:59.666098+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 68542464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:00.666841+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 68542464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:01.667673+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303898624 unmapped: 68542464 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:02.667924+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303906816 unmapped: 68534272 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:03.668127+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.2 total, 600.0 interval
                                           Cumulative writes: 38K writes, 150K keys, 38K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 38K writes, 14K syncs, 2.70 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:04.668313+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:05.668885+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:06.669379+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:07.669575+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:08.669749+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:09.670020+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:10.670540+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:11.670781+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:12.670931+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:13.671698+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:14.672414+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303915008 unmapped: 68526080 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:15.672771+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303923200 unmapped: 68517888 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:16.673006+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303931392 unmapped: 68509696 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:17.673166+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:18.673565+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:19.673779+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:20.674094+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:21.674251+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:22.674371+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:23.674590+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:24.674805+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:25.674958+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:26.675416+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:27.675592+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:28.676232+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:29.678142+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:30.678790+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:31.679017+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303939584 unmapped: 68501504 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:32.679189+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 68493312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:33.679538+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 68493312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:34.679698+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 68493312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:35.679856+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303947776 unmapped: 68493312 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:36.680092+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 68485120 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:37.680268+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303955968 unmapped: 68485120 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:38.680451+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 68476928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:39.680630+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 68476928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:40.680890+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 68476928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:41.681225+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 68476928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:42.681382+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 68476928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:43.681543+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303964160 unmapped: 68476928 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:44.681828+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 68460544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:45.682072+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 68460544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:46.682412+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 68460544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:47.682562+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 68460544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:48.682740+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 68460544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:49.683012+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 68460544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:50.683291+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 68460544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:51.683582+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 68460544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:52.683732+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303980544 unmapped: 68460544 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:53.683872+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 68452352 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:54.684103+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 68452352 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:55.684288+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303988736 unmapped: 68452352 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:56.684468+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 68444160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:57.684652+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 303996928 unmapped: 68444160 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:58.684889+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 68435968 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:59.685094+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304005120 unmapped: 68435968 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:00.685312+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:01.685504+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:02.685695+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:03.685888+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:04.686116+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:05.686342+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:06.686646+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:07.686868+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:08.687159+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:09.687416+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:10.687725+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304013312 unmapped: 68427776 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:11.688041+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304021504 unmapped: 68419584 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:12.688226+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304021504 unmapped: 68419584 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:13.688401+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:14.688544+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304021504 unmapped: 68419584 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:15.688738+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304021504 unmapped: 68419584 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:16.688890+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 68411392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:17.689087+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 68411392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:18.689297+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 68411392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:19.689499+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 68411392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:20.689697+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 68411392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 602.073974609s of 602.525329590s, submitted: 132
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:21.689820+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 68411392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:22.690034+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304029696 unmapped: 68411392 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:23.690186+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304037888 unmapped: 68403200 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:24.707411+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304037888 unmapped: 68403200 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:25.707649+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304037888 unmapped: 68403200 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:26.707935+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304037888 unmapped: 68403200 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:27.708154+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304095232 unmapped: 68345856 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:28.708367+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304103424 unmapped: 68337664 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:29.708589+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:30.708855+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:31.709025+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:32.709764+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:33.710109+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:34.710423+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:35.711820+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:36.712905+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:37.713945+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:38.714426+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:39.714655+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:40.714996+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:41.715693+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:42.715967+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:43.716516+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:44.716816+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:45.717093+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:46.717494+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:47.717717+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:48.717862+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:49.718169+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:50.718393+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:51.718539+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:52.718785+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:53.719025+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:54.719239+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:55.719399+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:56.719675+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304111616 unmapped: 68329472 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:57.719900+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 68321280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:58.720089+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 68321280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:59.720300+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 68321280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:00.720581+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 68321280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:01.720718+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 68321280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:02.720859+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 68321280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:03.721019+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304119808 unmapped: 68321280 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:04.721207+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:05.721483+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:06.721908+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:07.722272+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:08.722549+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:09.722697+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:10.722933+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:11.723183+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:12.723480+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:13.723625+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:14.723778+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:15.723941+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:16.724095+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:17.724249+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:18.724476+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:19.724642+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:20.724838+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:21.725086+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304128000 unmapped: 68313088 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:22.725278+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:23.725482+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:24.725791+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:25.726051+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:26.726263+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:27.726420+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:28.726556+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:29.726686+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:30.727050+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:31.727233+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:32.727405+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:33.727566+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:34.727725+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:35.727837+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:36.728016+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:37.728252+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:38.728414+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:39.728592+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304136192 unmapped: 68304896 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:40.728881+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 68296704 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:41.729059+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 68296704 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:42.729177+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304144384 unmapped: 68296704 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:43.729352+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 68288512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:44.729619+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 68288512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:45.729811+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 68288512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:46.730177+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 68288512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:47.730450+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 68288512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:48.730798+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 68288512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:49.731077+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 68288512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:50.731416+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 68288512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:51.731681+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304152576 unmapped: 68288512 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:52.731818+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 68280320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:53.732015+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 68280320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:54.732180+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 68280320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:55.732394+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 68280320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:56.732607+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 68280320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:57.732827+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304160768 unmapped: 68280320 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:58.733029+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 68272128 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:59.733223+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 68272128 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:00.733440+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 68272128 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:01.733650+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 68272128 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:02.733788+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 68272128 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:03.734030+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 68272128 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:04.734186+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 68272128 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:05.734330+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 68272128 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:06.734474+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304168960 unmapped: 68272128 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:07.734634+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 68263936 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:08.734802+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 68263936 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:09.734946+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304177152 unmapped: 68263936 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:10.735129+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 68255744 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:11.735262+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 68255744 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:12.735401+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 68255744 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:13.735537+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 68255744 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:14.736098+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 68255744 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:15.736268+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304185344 unmapped: 68255744 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:16.736437+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 68247552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:17.736623+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 68247552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:18.736778+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 68247552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:19.737133+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 68247552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:20.737403+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 68247552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:21.737606+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304193536 unmapped: 68247552 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:22.737753+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:23.737952+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:24.738214+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:25.738472+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:26.738680+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:27.738877+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:28.739078+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:29.739289+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:30.739518+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:31.739708+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:32.739875+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:33.740098+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:34.740272+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:35.740448+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304201728 unmapped: 68239360 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:36.740650+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304209920 unmapped: 68231168 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:37.740861+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 68222976 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:38.741046+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 68222976 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:39.741295+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 68222976 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:40.741580+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 68222976 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:41.741748+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304218112 unmapped: 68222976 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:42.741924+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 68214784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:43.742044+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 68214784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:44.742180+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 68214784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:45.742367+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 68214784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:46.742481+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 68214784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:47.742661+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 68214784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:48.742884+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 68214784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:49.743085+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304226304 unmapped: 68214784 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:50.743284+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 68206592 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:51.743523+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 68206592 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:52.743668+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304234496 unmapped: 68206592 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:53.744060+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304242688 unmapped: 68198400 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:54.744192+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 68190208 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:55.744356+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 68190208 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:56.744606+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 68190208 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:57.744806+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304250880 unmapped: 68190208 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:58.745154+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 68182016 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:59.745350+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304259072 unmapped: 68182016 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:00.745575+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:01.745849+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:02.746065+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:03.746227+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:04.746431+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:05.746680+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:06.746936+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:07.747091+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:08.747246+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:09.747374+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:10.747551+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:11.747677+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304267264 unmapped: 68173824 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:12.747861+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 68165632 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:13.748066+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 68165632 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:14.748234+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304275456 unmapped: 68165632 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:15.748404+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 68157440 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:16.748577+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 68157440 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:17.748738+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 68157440 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:18.748900+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 68157440 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:19.749090+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304283648 unmapped: 68157440 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:20.749292+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:21.749421+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:22.749583+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:23.749727+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:24.749833+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:25.750057+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:26.750226+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:27.750413+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:28.750588+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:29.750739+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:30.750959+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304291840 unmapped: 68149248 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:31.751184+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 68141056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:32.751452+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 68141056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:33.752776+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 68141056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:34.753186+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 68141056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:35.753316+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304300032 unmapped: 68141056 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:36.753515+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 68132864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:37.753706+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 68132864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:38.753860+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 68132864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:39.753999+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304308224 unmapped: 68132864 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:40.754156+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 68124672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:41.754306+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 68124672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:42.754450+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 68124672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:43.754665+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 68124672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:44.754878+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 68124672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:45.755075+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 68124672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:46.755276+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304316416 unmapped: 68124672 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:47.755553+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 68108288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:48.755782+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304332800 unmapped: 68108288 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:49.756001+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:50.756211+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:51.756337+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:52.756654+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:53.756790+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:54.756948+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:55.757199+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:56.757345+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:57.757532+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:58.757665+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:59.757809+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:00.757959+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:01.758144+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304340992 unmapped: 68100096 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:02.758420+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304349184 unmapped: 68091904 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:03.758576+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 68083712 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:04.758723+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:05.758898+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304357376 unmapped: 68083712 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets getting new tickets!
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:06.759123+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _finish_auth 0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:06.760034+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 68075520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:07.759316+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 68075520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:08.759482+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 68075520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:09.759748+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 68075520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304365568 unmapped: 68075520 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:10.883592+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:11.883729+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:12.883900+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:13.884064+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:14.884259+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: mgrc ms_handle_reset ms_handle_reset con 0x5627778fbc00
Jan 26 17:43:46 compute-0 ceph-osd[87780]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4130999815
Jan 26 17:43:46 compute-0 ceph-osd[87780]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4130999815,v1:192.168.122.100:6801/4130999815]
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: get_auth_request con 0x562777902400 auth_method 0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: mgrc handle_mgr_configure stats_period=5
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 68042752 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:15.884432+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 68042752 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:16.884641+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 68042752 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:17.884847+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:18.885117+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:19.885312+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:20.885853+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:21.886148+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:22.886265+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:23.886475+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:24.886685+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:25.886878+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:26.887123+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:27.887314+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:28.887442+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:29.887684+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:30.887880+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:31.888020+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:32.888190+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:33.888345+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:34.888488+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:35.888639+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:36.888775+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:37.888933+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:38.889107+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:39.889267+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:40.889575+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:41.889721+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:42.889895+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:43.890109+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:44.890267+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:45.890439+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:46.890565+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:47.890719+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:48.890903+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:49.891267+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:50.891622+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:51.891820+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:52.892045+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:53.892242+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:54.892370+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:55.892542+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:56.893059+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:57.893236+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:58.893354+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:59.893523+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:00.893694+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:01.893871+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:02.894026+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:03.894215+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:04.894376+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:05.894624+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:06.894903+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:07.895059+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:08.895189+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:09.895440+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:10.895703+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:11.895922+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:12.896076+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:13.896322+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:14.896513+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:15.896706+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:16.896941+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:17.897123+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:18.897272+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:19.897506+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:20.897865+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 298.963195801s of 300.216644287s, submitted: 90
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:21.898010+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:22.898155+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:23.898338+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:24.898578+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:25.898797+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:26.899046+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:27.899133+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:28.899313+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:29.899584+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:30.899782+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:31.899921+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304553984 unmapped: 67887104 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:32.900028+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:33.900426+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:34.900524+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:35.900661+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:36.900815+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:37.901089+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:38.901217+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:39.901319+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:40.901481+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:41.901639+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:42.901784+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:43.901961+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:44.902245+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:45.902447+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:46.902617+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:47.902771+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:48.902935+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:49.903133+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:50.903377+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:51.903634+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:52.903769+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:53.903912+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:54.904032+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:55.904233+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304570368 unmapped: 67870720 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:56.904400+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304578560 unmapped: 67862528 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:57.904483+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:58.904594+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:59.904728+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:00.904920+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:01.905058+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:02.905142+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:03.905214+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:04.905329+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:05.905445+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:06.905593+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 67846144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:07.905768+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 67846144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:08.905899+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 67846144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:09.906042+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 67846144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:10.906239+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304594944 unmapped: 67846144 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:11.906433+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 67837952 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:12.906574+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 67837952 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:13.906770+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304603136 unmapped: 67837952 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:14.906931+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:15.907078+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:16.907286+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:17.907478+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:18.907632+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:19.907805+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:20.908062+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:21.908186+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:22.908372+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:23.908562+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304373760 unmapped: 68067328 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:24.908758+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:25.908936+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:26.909147+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:27.909347+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:28.909524+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:29.909749+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:30.909941+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:31.910044+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:32.910107+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:33.910203+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:34.910345+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:35.910469+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:36.911382+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:37.911509+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:38.911618+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:39.911743+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304381952 unmapped: 68059136 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:40.911929+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:41.912098+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:42.912288+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:43.912422+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:44.912551+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:45.912663+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:46.912779+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:47.912941+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:48.913063+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:49.913211+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:50.913349+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:51.913458+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:52.913621+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:53.913772+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:54.913909+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:55.914080+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:56.914244+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:57.914381+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:58.914533+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:59.914724+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:00.914963+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:01.915145+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304390144 unmapped: 68050944 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:02.915281+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 68042752 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:03.915444+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304398336 unmapped: 68042752 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:04.915615+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:05.915759+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:06.915948+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:07.916118+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:08.916259+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:09.916422+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:10.917099+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:11.917285+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:12.917493+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:13.917715+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:14.917852+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:15.918052+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:16.918274+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304406528 unmapped: 68034560 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:17.918453+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:18.918655+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:19.918848+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:20.919214+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:21.919382+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:22.919611+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:23.919775+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:24.919927+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:25.920079+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:26.920241+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:27.920453+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:28.920653+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:29.920772+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:30.921024+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304414720 unmapped: 68026368 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:31.921194+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:32.921333+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:33.921465+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:34.921598+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:35.921743+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:36.921932+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:37.922141+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:38.922309+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304422912 unmapped: 68018176 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:39.922476+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:40.922682+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:41.922892+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:42.923054+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:43.923218+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:44.923459+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:45.923588+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:46.923714+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:47.923842+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:48.924114+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304431104 unmapped: 68009984 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:49.924275+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:50.924466+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:51.924644+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:52.924827+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:53.925038+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:54.925233+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:55.925371+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:56.925546+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:57.925691+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:58.925853+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:59.926080+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:00.926342+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:01.926472+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:02.926604+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304439296 unmapped: 68001792 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:03.927168+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 67993600 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:04.927684+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 67993600 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:05.928060+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 67993600 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:06.928237+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 67993600 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:07.928454+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 67993600 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:08.928784+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 67993600 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:09.929061+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304447488 unmapped: 67993600 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:10.929265+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:11.929490+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:12.929732+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:13.929864+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:14.930006+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:15.930264+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:16.930416+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:17.930620+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:18.930744+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:19.930967+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:20.931278+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:21.931449+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:22.931622+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:23.931794+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:24.931969+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:25.932145+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:26.932309+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304455680 unmapped: 67985408 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:27.932475+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:28.932644+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:29.932843+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:30.933022+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:31.933217+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:32.933410+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:33.933566+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:34.933707+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:35.935369+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:36.935628+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:37.936715+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:38.936957+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304463872 unmapped: 67977216 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:39.937270+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:40.937529+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:41.938237+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:42.938482+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:43.938907+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:44.939138+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:45.939589+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:46.939834+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304472064 unmapped: 67969024 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:47.940124+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:48.940375+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:49.940632+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:50.940843+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:51.941071+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:52.941249+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:53.941457+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:54.941635+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:55.941825+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:56.942060+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:57.942181+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:58.942312+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:59.942539+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:00.942796+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:01.943008+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:02.943149+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:03.943372+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.2 total, 600.0 interval
                                           Cumulative writes: 39K writes, 150K keys, 39K commit groups, 1.0 writes per commit group, ingest: 0.15 GB, 0.02 MB/s
                                           Cumulative WAL: 39K writes, 14K syncs, 2.69 writes per sync, written: 0.15 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:04.943518+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:05.943650+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:06.943787+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:07.943954+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304480256 unmapped: 67960832 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:08.944165+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:09.944310+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:10.944476+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:11.944636+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304488448 unmapped: 67952640 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:12.944777+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 67944448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:13.944959+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 67944448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:14.945207+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 67944448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:15.945372+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 67944448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:16.945584+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 67944448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:17.945746+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 67944448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:18.945874+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 67944448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread fragmentation_score=0.005588 took=0.000088s
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:19.946076+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:46 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304496640 unmapped: 67944448 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:20.946265+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:21.946411+0000)
Jan 26 17:43:46 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:46 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:22.946527+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:23.946695+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:24.946818+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:25.946950+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:26.947043+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:27.947111+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:28.947203+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:29.947330+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:30.947491+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:31.947632+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:32.947743+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:33.947838+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:34.948041+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:35.948201+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:36.948357+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:37.948511+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:38.948732+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:39.948892+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:40.949109+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:41.949251+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:42.949380+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:43.949493+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:44.949676+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:45.949819+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:46.950017+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:47.950156+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:48.950358+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:49.950569+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:50.950774+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:51.951069+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:52.951217+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:53.951362+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:54.951538+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:55.951674+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:56.951822+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:57.951957+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:58.952099+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304504832 unmapped: 67936256 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:59.952255+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:00.952577+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:01.952802+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:02.952949+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:03.953099+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:04.953228+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:05.953393+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:06.953543+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:07.953694+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:08.953859+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:09.954028+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:10.954194+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:11.954328+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:12.954560+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:13.954744+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:14.955066+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:15.955330+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:16.955577+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:17.955769+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:18.955966+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:19.956229+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:20.956501+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:21.956692+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 300.567993164s of 300.615264893s, submitted: 24
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:22.956916+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:23.957089+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:24.957262+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:25.957418+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304513024 unmapped: 67928064 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:26.957569+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304529408 unmapped: 67911680 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:27.957707+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304562176 unmapped: 67878912 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:28.957847+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:29.958262+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:30.958776+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:31.959073+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:32.959231+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:33.959363+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:34.959532+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:35.959817+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:36.960088+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:37.960230+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:38.960399+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:39.960568+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:40.960732+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:41.960876+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:42.961032+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:43.961228+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:44.961344+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:45.961456+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:46.961620+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:47.961700+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:48.961777+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:49.961903+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:50.962066+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:51.962459+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:52.962603+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:53.962770+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:54.962911+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:55.963067+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:56.963203+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:57.963350+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:58.963547+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:59.963714+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:00.963898+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:01.964022+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:02.964156+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:03.964277+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:04.964435+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:05.964564+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:06.965340+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:07.965505+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:08.965637+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:09.965762+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:10.965944+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:11.966076+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304586752 unmapped: 67854336 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:12.966200+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304644096 unmapped: 67796992 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: do_command 'config diff' '{prefix=config diff}'
Jan 26 17:43:47 compute-0 ceph-osd[87780]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:13.966331+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: do_command 'config show' '{prefix=config show}'
Jan 26 17:43:47 compute-0 ceph-osd[87780]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 26 17:43:47 compute-0 ceph-osd[87780]: do_command 'counter dump' '{prefix=counter dump}'
Jan 26 17:43:47 compute-0 ceph-osd[87780]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 26 17:43:47 compute-0 ceph-osd[87780]: do_command 'counter schema' '{prefix=counter schema}'
Jan 26 17:43:47 compute-0 ceph-osd[87780]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304881664 unmapped: 67559424 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:14.966465+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:47 compute-0 ceph-osd[87780]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:47 compute-0 ceph-osd[87780]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3330004 data_alloc: 218103808 data_used: 188015
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304914432 unmapped: 67526656 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: osd.2 332 heartbeat osd_stat(store_statfs(0x4ecf1e000/0x0/0x4ffc00000, data 0x1a1fa29/0x1bee000, compress 0x0/0x0/0x0, omap 0x5771f, meta 0x110988e1), peers [0,1] op hist [])
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: tick
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_tickets
Jan 26 17:43:47 compute-0 ceph-osd[87780]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:15.966594+0000)
Jan 26 17:43:47 compute-0 ceph-osd[87780]: prioritycache tune_memory target: 4294967296 mapped: 304906240 unmapped: 67534848 heap: 372441088 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:47 compute-0 ceph-osd[87780]: do_command 'log dump' '{prefix=log dump}'
Jan 26 17:43:47 compute-0 nova_compute[239965]: 2026-01-26 17:43:47.102 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:47 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23436 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} v 0)
Jan 26 17:43:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} : dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 26 17:43:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2015362198' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 26 17:43:47 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 17:43:47 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:47 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23440 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} v 0)
Jan 26 17:43:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} : dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 26 17:43:47 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3713754134' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: from='client.23434 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: from='client.23436 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} : dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2015362198' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} : dispatch
Jan 26 17:43:47 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3713754134' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 26 17:43:48 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23444 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 26 17:43:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1893036163' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 26 17:43:48 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23448 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 26 17:43:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548064994' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 26 17:43:48 compute-0 ceph-mon[75140]: pgmap v4620: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:48 compute-0 ceph-mon[75140]: from='client.23440 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:48 compute-0 ceph-mon[75140]: from='client.23444 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1893036163' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 26 17:43:48 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/548064994' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 26 17:43:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 26 17:43:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3219701798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:43:48 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 26 17:43:48 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3219701798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:43:49 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23452 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:49 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:49 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 26 17:43:49 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548680550' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 26 17:43:49 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23460 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:49 compute-0 ceph-mon[75140]: from='client.23448 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3219701798' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 26 17:43:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.10:0/3219701798' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 26 17:43:49 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1548680550' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 26 17:43:50 compute-0 crontab[430926]: (root) LIST (root)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] _maybe_adjust
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23464 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:50 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 26 17:43:50 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1320007860' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 26 17:43:50 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23468 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:50 compute-0 nova_compute[239965]: 2026-01-26 17:43:50.861 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:50 compute-0 ceph-mon[75140]: from='client.23452 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:50 compute-0 ceph-mon[75140]: pgmap v4621: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:50 compute-0 ceph-mon[75140]: from='client.23460 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:50 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1320007860' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 26 17:43:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 26 17:43:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/547722886' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 26 17:43:51 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23472 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:51 compute-0 ceph-8b9831ad-4b0d-59b4-8860-96eb895a171f-mgr-compute-0-jtdjmy[75427]: 2026-01-26T17:43:51.289+0000 7fe66354f640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 17:43:51 compute-0 ceph-mgr[75431]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 26 17:43:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:51 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:39.911253+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:40.911414+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:41.911604+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:42.911816+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:43.912028+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:44.912220+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:45.912510+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:46.912760+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:47.913034+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:48.913266+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:49.913463+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:50.913671+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:51.913870+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:52.914118+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366018560 unmapped: 90669056 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:53.914549+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:54.914844+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:55.915089+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:56.915376+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:57.915595+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:58.915757+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:59.916020+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:00.916261+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:01.916452+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:02.916627+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 90644480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:03.916817+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 90644480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:04.916992+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 90644480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:05.917278+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 90644480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:06.917709+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 90644480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:07.917922+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366051328 unmapped: 90636288 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:08.918099+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366051328 unmapped: 90636288 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:09.918374+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:10.918589+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:11.918744+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:12.918888+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:13.919029+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:14.919230+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:15.919455+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:16.919653+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:17.919872+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:18.920034+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:19.920762+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:20.920885+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:21.921025+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:22.921140+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:23.921284+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:24.921481+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 90611712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:25.921676+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 90611712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:26.921845+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 90611712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:27.922071+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 90611712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:28.922253+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366075904 unmapped: 90611712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:29.922428+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:30.922645+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:31.923110+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:32.923882+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:33.924063+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:34.924587+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:35.924815+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:36.925087+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:37.925339+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:38.925527+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:39.925668+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:40.926126+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:41.926599+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:42.927082+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 90578944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:43.927434+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 90578944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:44.927784+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 90578944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:45.928050+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 90578944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:46.928197+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 90578944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:47.928401+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 90578944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:48.928563+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 90570752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:49.928733+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 90570752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:50.929056+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 90570752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:51.929245+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 90570752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:52.929459+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 90570752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:53.929640+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366116864 unmapped: 90570752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:54.929838+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366125056 unmapped: 90562560 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:55.930073+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366125056 unmapped: 90562560 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:56.930354+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:57.930572+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:58.930811+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.4 total, 600.0 interval
                                           Cumulative writes: 49K writes, 192K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 49K writes, 18K syncs, 2.68 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:59.930957+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:00.931254+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:01.931541+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 90546176 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:02.931824+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 90529792 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:03.932038+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366157824 unmapped: 90529792 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:04.932209+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:05.932468+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:06.932688+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:07.932830+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:08.933047+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:09.933193+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:10.933637+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:11.933800+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:12.933948+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:13.934163+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:14.934328+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:15.934566+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:16.934793+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:17.934926+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:18.935096+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:19.935215+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:20.935363+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:21.935599+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:22.935771+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:23.935899+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:24.936049+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:25.936375+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:26.936552+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:27.936703+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:28.936856+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:29.937059+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:30.937243+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:31.937394+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:32.937542+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:33.937739+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:34.937890+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:35.938088+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:36.938294+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 90480640 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:37.938500+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 90480640 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:38.938682+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 90480640 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:39.938926+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 90472448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:40.939113+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 90472448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:41.939325+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366231552 unmapped: 90456064 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:42.939472+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:43.939605+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:44.939825+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:45.940116+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:46.940279+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:47.940486+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:48.940745+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:49.941338+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:50.941680+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:51.941875+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:52.942039+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 90431488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:53.942199+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 90431488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:54.942394+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 90431488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:55.942627+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 90431488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:56.942826+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:57.943066+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:58.943249+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:59.943449+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:00.943628+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:01.943785+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 90406912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:02.944029+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 90406912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:03.944504+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 90406912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:04.944773+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 90406912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:05.945034+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 90406912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:06.945203+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 90406912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:07.945424+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366280704 unmapped: 90406912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:08.945613+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:09.945802+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:10.946050+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:11.946243+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:12.946459+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:13.946613+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:14.946811+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:15.947090+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:16.947252+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:17.947437+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:18.947623+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:19.947779+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:20.947940+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.143432617s of 300.316650391s, submitted: 22
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366297088 unmapped: 90390528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:21.948107+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 90374144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:22.948297+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 90374144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:23.948443+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 90374144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:24.948634+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 90374144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:25.948848+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366313472 unmapped: 90374144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:26.949038+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366329856 unmapped: 90357760 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:27.949214+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:28.949448+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:29.949604+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:30.949743+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:31.949906+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:32.950058+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:33.950252+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:34.950385+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:35.950592+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:36.950856+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:37.951068+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:38.951214+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:39.951376+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:40.951560+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366346240 unmapped: 90341376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:41.951703+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:42.951860+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:43.952095+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:44.952536+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:45.952687+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:46.952877+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:47.953045+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:48.953228+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:49.953387+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:50.953541+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:51.953689+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:52.953824+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:53.954059+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:54.954197+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:55.954402+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:56.954545+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:57.954696+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:58.954785+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:59.954944+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:00.955043+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:01.955254+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:02.955405+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:03.955553+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:04.955684+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:05.955845+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:06.956031+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:07.956245+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:08.956598+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:09.956725+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:10.956868+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:11.957069+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:12.957217+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:13.957367+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:14.957497+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:15.957671+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:16.957930+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:17.958075+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:18.958233+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:19.958409+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:20.958542+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366379008 unmapped: 90308608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:21.958659+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366395392 unmapped: 90292224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:22.958854+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366395392 unmapped: 90292224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:23.958994+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:24.959186+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366395392 unmapped: 90292224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:25.959393+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:26.959569+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:27.959701+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:28.959862+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:29.960058+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:30.960247+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:31.960480+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:32.960646+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:33.960817+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:34.961080+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:35.961268+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:36.961396+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366411776 unmapped: 90275840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:37.961538+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 90267648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:38.961679+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 90267648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:39.961835+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 90267648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:40.962059+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 90267648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:41.962299+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366428160 unmapped: 90259456 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:42.962600+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:43.962748+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:44.962917+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:45.963255+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:46.963424+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:47.963587+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:48.963742+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:49.963917+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:50.964046+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:51.964202+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:52.964335+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366444544 unmapped: 90243072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:53.964508+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 90234880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:54.964661+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 90234880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:55.964856+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 90234880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:56.965026+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 90234880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:57.965247+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 90234880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:58.965420+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 90234880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:59.965615+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 90234880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:00.965831+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 90234880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:01.966158+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366452736 unmapped: 90234880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:02.966317+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 90218496 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:03.966543+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 90218496 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:04.966721+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 90218496 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:05.966904+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366469120 unmapped: 90218496 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:06.967033+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:07.967295+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:08.967496+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:09.967691+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:10.967876+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:11.968022+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:12.968150+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:13.968384+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:14.968575+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:15.968820+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:16.968994+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:17.969225+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:18.969377+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:19.969537+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:20.969709+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:21.969852+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:22.970036+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 90193920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:23.970175+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366493696 unmapped: 90193920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:24.970324+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:25.970514+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:26.970660+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:27.970895+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:28.971746+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:29.971942+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:30.972103+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:31.972496+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:32.972663+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:33.972823+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366501888 unmapped: 90185728 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:34.973076+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366510080 unmapped: 90177536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:35.973320+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366510080 unmapped: 90177536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:36.973488+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366510080 unmapped: 90177536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:37.973634+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366518272 unmapped: 90169344 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:38.973770+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366518272 unmapped: 90169344 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:39.973898+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366518272 unmapped: 90169344 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:40.974099+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366518272 unmapped: 90169344 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:41.974330+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366518272 unmapped: 90169344 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:42.974480+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:43.974653+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:44.974843+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:45.975044+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:46.975193+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:47.975322+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:48.975514+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:49.975661+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:50.975811+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 90144768 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:51.976082+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 90144768 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:52.976254+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 90144768 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:53.976420+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 90144768 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:54.976577+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 90144768 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:55.976756+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366542848 unmapped: 90144768 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:56.976904+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:57.977042+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:58.977189+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:59.977342+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:00.977503+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366567424 unmapped: 90120192 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:01.977802+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366567424 unmapped: 90120192 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:02.977946+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:03.978100+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:04.978280+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:05.978483+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:06.978662+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:07.978869+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:08.979050+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:09.979220+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:10.979372+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:11.979527+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:12.979672+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:13.979816+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:14.980013+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:15.980277+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:16.980469+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:17.980725+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:18.981118+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:19.981346+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:20.981459+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366600192 unmapped: 90087424 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:21.981654+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 90071040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:22.981856+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 90071040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:23.982063+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 90071040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:24.982234+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 90071040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:25.982442+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 90071040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:26.982581+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:27.982778+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:28.982935+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:29.983063+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:30.983206+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:31.983333+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:32.983468+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:33.983593+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:34.983768+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:35.984061+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:36.984178+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:37.984323+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:38.984481+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:39.984643+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:40.984790+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:41.985493+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:42.985658+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:43.985790+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:44.985936+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:45.986148+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:46.986306+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:47.986461+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:48.986634+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:49.986778+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:50.986907+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:51.987047+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:52.987188+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:53.987364+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:54.987579+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:55.987819+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:56.988092+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:57.988363+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:58.988511+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:59.988707+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:00.988927+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366673920 unmapped: 90013696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:01.989100+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 89997312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:02.989256+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 89997312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:03.989391+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 89997312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:04.989579+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 89997312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:05.989758+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 89997312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:06.989896+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:07.990035+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:08.990179+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:09.990344+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 89980928 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:10.990479+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 89980928 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:11.990620+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 89980928 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 26 17:43:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3554241155' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:12.990779+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 89980928 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:13.991023+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 89980928 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:14.991181+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 89980928 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:15.991389+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 89980928 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:16.991580+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 89980928 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:17.991805+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 89972736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:18.992098+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 89972736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:19.992441+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 89972736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:20.992604+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366714880 unmapped: 89972736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:21.992756+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:22.992954+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:23.993226+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:24.993369+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:25.993569+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:26.993713+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:27.993805+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:28.993926+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:29.994077+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:30.994219+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:31.994345+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:32.994530+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:33.994684+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:34.994883+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 89948160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:35.995097+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366739456 unmapped: 89948160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:36.995276+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 89939968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:37.995410+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 89939968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:38.995580+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 89939968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:39.995743+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 89939968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:40.995875+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:41.996056+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:42.996223+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:43.996367+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:44.996522+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:45.996758+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:46.996949+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:47.997160+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:48.997332+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:49.997530+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:50.997707+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:51.997895+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:52.998044+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365780992 unmapped: 90906624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:53.998213+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 90898432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:54.998397+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 90898432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:55.998617+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 90898432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:56.998857+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 90898432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:57.999042+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 90898432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:58.999229+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 90898432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:59.999423+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 90898432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:00.999616+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365789184 unmapped: 90898432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:01.999825+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 90882048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:03.000017+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 90882048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:04.000245+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365805568 unmapped: 90882048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:05.000422+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 90873856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:06.000620+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 90873856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:07.000791+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 90873856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:08.001003+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365813760 unmapped: 90873856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:09.001238+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:10.001422+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:11.001591+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:12.001763+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:13.001947+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:14.002172+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:15.002341+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:16.002521+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:17.002687+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:18.002851+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:19.003062+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:20.003295+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:21.003591+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365821952 unmapped: 90865664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:22.003896+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 90857472 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:23.004155+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 90841088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:24.004504+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 90841088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:25.004838+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 90841088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:26.005258+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 90841088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:27.005494+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 90841088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:28.005651+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365846528 unmapped: 90841088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:29.005809+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 90832896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:30.006028+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 90832896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:31.006162+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 90832896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:32.006326+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 90832896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:33.006461+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 90832896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:34.006651+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 90832896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:35.006800+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 90832896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:36.007059+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 90832896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:37.007253+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365854720 unmapped: 90832896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:38.007431+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 90824704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:39.007652+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 90824704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:40.007797+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 90824704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:41.008052+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 90824704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:42.008277+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365862912 unmapped: 90824704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:43.008467+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:44.008744+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:45.008944+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:46.009244+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:47.009396+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:48.009526+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:49.009741+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:50.010017+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:51.010262+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:52.010446+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:53.010637+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:54.010773+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:55.011179+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:56.011359+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:57.011501+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:58.011693+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:59.011887+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:00.012058+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365879296 unmapped: 90808320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:01.012215+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365887488 unmapped: 90800128 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:02.012423+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365903872 unmapped: 90783744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:03.012597+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:04.012746+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:05.012934+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:06.013135+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:07.013430+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:08.013588+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:09.013882+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:10.014654+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:11.014809+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:12.015125+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:13.015276+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:14.015662+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:15.015885+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:16.016067+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365920256 unmapped: 90767360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:17.016280+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 90750976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:18.016442+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 90750976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:19.016604+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 90750976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:20.016770+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 90750976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:21.016910+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 90750976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:22.017098+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365936640 unmapped: 90750976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:23.017284+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:24.017496+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:25.017665+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:26.017868+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:27.018083+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:28.018284+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:29.018462+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:30.018653+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:31.018820+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:32.019095+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:33.019346+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365953024 unmapped: 90734592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:34.019919+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 90726400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:35.020134+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 90726400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:36.020457+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 90726400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:37.021325+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 90726400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:38.021693+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 90726400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:39.021879+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 90726400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:40.022110+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 90726400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:41.022395+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365961216 unmapped: 90726400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:42.022620+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 90710016 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:43.022892+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 90710016 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:44.023132+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 90710016 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:45.023286+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 90710016 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:46.023488+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 90710016 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:47.023657+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 90710016 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:48.023803+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 90710016 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:49.023958+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365977600 unmapped: 90710016 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:50.024144+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 90701824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:51.024330+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 90701824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:52.024494+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365985792 unmapped: 90701824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:53.024661+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 90693632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:54.024915+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 90693632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:55.025182+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 90693632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:56.025360+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 90693632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:57.025598+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 90693632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:58.025766+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 90693632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:59.025929+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 90693632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:00.026096+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 365993984 unmapped: 90693632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:01.026258+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366010368 unmapped: 90677248 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:02.026442+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:03.026585+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:04.026754+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:05.026901+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:06.027084+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:07.027266+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:08.027485+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:09.027632+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:10.027789+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:11.028174+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:12.028347+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366026752 unmapped: 90660864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:13.028486+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:14.028616+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:15.028801+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:16.029070+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:17.029215+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:18.029361+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:19.029510+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:20.029655+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366034944 unmapped: 90652672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:21.029763+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366043136 unmapped: 90644480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:22.029936+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:23.030083+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:24.030226+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:25.030409+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:26.030672+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:27.030911+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:28.031071+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366059520 unmapped: 90628096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:29.031199+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:30.031401+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:31.031620+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:32.031856+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:33.032029+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:34.032202+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:35.032437+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:36.032639+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:37.032810+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:38.033052+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:39.033204+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:40.033359+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:41.033656+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366067712 unmapped: 90619904 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:42.033810+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:43.034083+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:44.034235+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:45.034390+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:46.034541+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:47.034704+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:48.034827+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:49.035039+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:50.035178+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:51.035326+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366084096 unmapped: 90603520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:52.035492+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:53.035678+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:54.035811+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:55.036039+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:56.036223+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:57.036400+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:58.036593+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:59.036781+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:00.037033+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366092288 unmapped: 90595328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:01.037172+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366108672 unmapped: 90578944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-01-26T17:20:02.037391+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _finish_auth 0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:02.038407+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366125056 unmapped: 90562560 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:03.037634+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366125056 unmapped: 90562560 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:04.037877+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366125056 unmapped: 90562560 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:05.038044+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366125056 unmapped: 90562560 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:06.038207+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:07.038399+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:08.038590+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:09.038774+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:10.038948+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:11.268646+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:12.269325+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:13.269477+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366133248 unmapped: 90554368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:14.269708+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 90546176 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:15.269862+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 90546176 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:16.270038+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366141440 unmapped: 90546176 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:17.270169+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 90537984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:18.270394+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 90537984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:19.270525+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 90537984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:20.270680+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 90537984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:21.270830+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366149632 unmapped: 90537984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:22.271054+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:23.271219+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:24.271446+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:25.271638+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:26.271846+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:27.272052+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:28.272217+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:29.272407+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366166016 unmapped: 90521600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:30.272595+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:31.272745+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:32.272873+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:33.273078+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:34.273233+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:35.273453+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:36.273716+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:37.273913+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:38.274078+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:39.274212+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:40.274377+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:41.274505+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366174208 unmapped: 90513408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:42.274697+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:43.274861+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:44.275041+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:45.275207+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:46.275389+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:47.275525+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:48.275693+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366190592 unmapped: 90497024 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:49.275854+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:50.276174+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:51.276392+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:52.276585+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366198784 unmapped: 90488832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:53.276773+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366206976 unmapped: 90480640 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:54.276964+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 90472448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:55.277174+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 90472448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:56.277342+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 90472448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:57.277468+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 90472448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:58.277600+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 90472448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:59.277745+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.4 total, 600.0 interval
                                           Cumulative writes: 50K writes, 192K keys, 50K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s
                                           Cumulative WAL: 50K writes, 18K syncs, 2.68 writes per sync, written: 0.18 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
                                           Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 90472448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:00.277910+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 90472448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:01.278045+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366223360 unmapped: 90464256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:02.278183+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:03.278318+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:04.278445+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:05.278573+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:06.278750+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:07.278885+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366239744 unmapped: 90447872 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:08.279146+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 ms_handle_reset con 0x5563ccefe400 session 0x5563d5201c00
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: handle_auth_request added challenge on 0x5563caa19c00
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 90431488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 ms_handle_reset con 0x5563cf902c00 session 0x5563cfb1b880
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: handle_auth_request added challenge on 0x5563d06e0c00
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:09.279338+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 ms_handle_reset con 0x5563caa18000 session 0x5563cf745dc0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: handle_auth_request added challenge on 0x5563ccf19c00
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 90431488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:10.279497+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 90431488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:11.279661+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366256128 unmapped: 90431488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:12.279843+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:13.280038+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:14.280214+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:15.280369+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:16.280550+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:17.280710+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:18.280850+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:19.281138+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366264320 unmapped: 90423296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:20.281272+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 90415104 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:21.281409+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366272512 unmapped: 90415104 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:22.281624+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:23.281768+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:24.281881+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:25.282010+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:26.282265+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:27.282464+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:28.282599+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:29.282729+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:30.282855+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:31.283007+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:32.283188+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:33.283316+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:34.283438+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:35.283578+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:36.283714+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:37.283882+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:38.284043+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:39.284171+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:40.284345+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:41.284579+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366288896 unmapped: 90398720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:42.284758+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366297088 unmapped: 90390528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:43.284962+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:44.285202+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:45.285355+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:46.285551+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:47.285692+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:48.285876+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:49.286051+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:50.286201+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:51.286287+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:52.286432+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 90365952 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:53.286591+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:54.286692+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:55.286826+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:56.287034+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:57.287198+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:58.287511+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:59.287755+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:00.288137+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:01.288354+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:02.288605+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366338048 unmapped: 90349568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:03.288949+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 90333184 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:04.289317+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 90333184 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:05.289661+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 90333184 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:06.290091+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 90333184 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 ms_handle_reset con 0x5563ca6b2800 session 0x5563d572d180
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: handle_auth_request added challenge on 0x5563cdbd0c00
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:07.290329+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 90333184 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:08.290545+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366354432 unmapped: 90333184 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:09.290747+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:10.290955+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:11.525608+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:12.525847+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:13.526111+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:14.526341+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:15.526564+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739196 data_alloc: 218103808 data_used: 223820
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366362624 unmapped: 90324992 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:16.526825+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 90316800 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:17.527129+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 90316800 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:18.527474+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 596.917175293s of 597.674255371s, submitted: 90
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 90316800 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:19.527714+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 90316800 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:20.527930+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366370816 unmapped: 90316800 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:21.528218+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366419968 unmapped: 90267648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:22.528538+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 90251264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:23.528727+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 90251264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:24.528946+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 90251264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:25.529104+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366436352 unmapped: 90251264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:26.529336+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366477312 unmapped: 90210304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:27.529506+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 90161152 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:28.529637+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 90161152 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:29.529858+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 90161152 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:30.530112+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 90161152 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:31.530242+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366526464 unmapped: 90161152 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:32.530378+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:33.530550+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:34.530802+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:35.531025+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:36.531317+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:37.531564+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:38.531775+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:39.531991+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:40.532183+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:41.532320+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366534656 unmapped: 90152960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:42.532501+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:43.532652+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:44.532825+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:45.533035+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:46.533312+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:47.533470+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:48.533703+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:49.533906+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:50.534109+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:51.534284+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:52.534450+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:53.534652+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:54.534853+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:55.535033+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:56.535249+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:57.535421+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:58.535570+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:59.535813+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:00.536075+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366551040 unmapped: 90136576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:01.536250+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366567424 unmapped: 90120192 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:02.536430+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:03.536588+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:04.536761+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:05.537028+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:06.537259+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:07.537416+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:08.537603+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:09.537822+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:10.538035+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:11.538284+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:12.538473+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:13.538624+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:14.538810+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:15.539108+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:16.539320+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:17.539522+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:18.539695+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:19.539844+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:20.540064+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366583808 unmapped: 90103808 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:21.540208+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366592000 unmapped: 90095616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:22.540462+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:23.540682+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:24.540946+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:25.542190+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:26.542534+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:27.543401+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:28.544063+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:29.544362+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:30.544544+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:31.545043+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:32.545576+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366608384 unmapped: 90079232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:33.545726+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 90071040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:34.545879+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 90071040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:35.546141+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 90071040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:36.546408+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366616576 unmapped: 90071040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:37.546624+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:38.546818+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:39.547029+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:40.547181+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:41.547359+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366624768 unmapped: 90062848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:42.547544+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:43.547720+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:44.547958+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:45.548166+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:46.548368+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:47.548572+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:48.548738+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366641152 unmapped: 90046464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:49.549030+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:50.549203+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:51.549371+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:52.549546+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:53.549748+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:54.550001+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:55.550217+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:56.550481+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:57.550715+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:58.550934+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:59.551148+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:00.551382+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366649344 unmapped: 90038272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:01.551567+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366665728 unmapped: 90021888 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:02.551786+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 90005504 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:03.552059+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 90005504 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:04.552291+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366682112 unmapped: 90005504 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:05.552532+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 89997312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:06.552746+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 89997312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:07.552921+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366690304 unmapped: 89997312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:08.553074+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:09.553214+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:10.553393+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:11.553537+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:12.553717+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:13.553876+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:14.554038+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:15.554160+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:16.554379+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:17.554602+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:18.554788+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:19.554933+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:20.555067+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366698496 unmapped: 89989120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:21.555247+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366706688 unmapped: 89980928 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:22.555702+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:23.556023+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:24.556226+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:25.556382+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:26.556584+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:27.556750+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:28.556929+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:29.557950+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:30.558265+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:31.558590+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:32.558970+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:33.559267+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:34.559661+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:35.560356+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:36.560789+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:37.561476+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:38.561773+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:39.562054+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:40.562415+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366723072 unmapped: 89964544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:41.562609+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366731264 unmapped: 89956352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:42.562768+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 89939968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:43.563013+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 89939968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:44.563197+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 89939968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:45.563361+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 89939968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:46.563529+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366747648 unmapped: 89939968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:47.563910+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 89931776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:48.564143+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 89931776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:49.564448+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 89931776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:50.564718+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 89931776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:51.564926+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366755840 unmapped: 89931776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:52.565195+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 89923584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:53.565419+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 89923584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:54.565575+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 89923584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:55.565807+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 89923584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:56.566077+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 89923584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:57.566376+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 89923584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:58.566688+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 89923584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:59.566886+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 89923584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:00.567151+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366764032 unmapped: 89923584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:01.567342+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366780416 unmapped: 89907200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:02.567532+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 89890816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:03.567680+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 89890816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:04.567845+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366796800 unmapped: 89890816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:05.568106+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:06.568352+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:07.568506+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:08.568668+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:09.568856+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:10.569079+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:11.569236+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:12.569398+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:13.569545+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:14.569709+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:15.569832+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:16.570066+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:17.570214+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:18.570395+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:19.570640+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:20.570794+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:21.571082+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366804992 unmapped: 89882624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:22.571298+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 89866240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:23.571507+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 89866240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:24.571674+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366821376 unmapped: 89866240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:25.571879+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:26.572096+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:27.572244+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:28.572465+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:29.572664+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:30.572833+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:31.572970+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:32.573175+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:33.573372+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:34.573532+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:35.573666+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366829568 unmapped: 89858048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:36.573908+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 89849856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:37.574086+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 89849856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:38.574249+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 89849856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:39.574481+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 89849856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:40.574713+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 89849856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:41.574931+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366837760 unmapped: 89849856 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:42.575117+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:43.575327+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:44.575488+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:45.575674+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:46.575929+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:47.576042+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:48.576194+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:49.576376+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:50.576515+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:51.576705+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 89825280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:52.576845+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 89817088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:53.577030+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 89817088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:54.577422+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 89817088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:55.577661+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 89817088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:56.577855+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366870528 unmapped: 89817088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:57.578019+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 89808896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:58.578209+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 89808896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:59.578398+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 89808896 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:00.578586+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366895104 unmapped: 89792512 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:01.578791+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366895104 unmapped: 89792512 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:02.579028+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 89776128 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:03.579190+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 89776128 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:04.579392+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 89776128 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:05.579547+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366911488 unmapped: 89776128 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:06.579814+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:07.580036+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:08.580189+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:09.580352+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:10.580535+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:11.580700+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:12.580896+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:13.581053+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:14.581208+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:15.581394+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:16.581636+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:17.581777+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:18.581936+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:19.582133+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:20.582286+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:21.582447+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366919680 unmapped: 89767936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:22.582647+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366936064 unmapped: 89751552 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:23.582806+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366936064 unmapped: 89751552 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:24.583011+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 89743360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:25.583251+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 89743360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:26.583460+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 89743360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:27.583633+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 89743360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:28.583912+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 89743360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:29.584163+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 89743360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:30.584465+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 89743360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:31.584623+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366944256 unmapped: 89743360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:32.584854+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 89735168 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:33.585058+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 89735168 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:34.585303+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 89735168 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:35.585480+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 89735168 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:36.585728+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 89735168 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:37.585893+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 89735168 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:38.586056+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366952448 unmapped: 89735168 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:39.586223+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 89726976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:40.586410+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 89726976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:41.586686+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366960640 unmapped: 89726976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:42.586865+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 89710592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:43.587114+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 89710592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:44.587347+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 89710592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:45.587568+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366977024 unmapped: 89710592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:46.587814+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:47.588126+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:48.588349+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:49.588629+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:50.588824+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:51.589057+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:52.589270+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:53.589485+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:54.589649+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:55.589902+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:56.590185+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:57.590363+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:58.590589+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:59.590741+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:00.590897+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:01.591129+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 366985216 unmapped: 89702400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:02.591303+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367001600 unmapped: 89686016 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:03.591459+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:04.591622+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:05.591783+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:06.591967+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:07.592168+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:08.592352+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:09.594796+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:10.599609+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:11.603297+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:12.604396+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367009792 unmapped: 89677824 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:13.605929+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 89669632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:14.607612+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 89669632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:15.610064+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 89669632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:16.610309+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 89669632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:17.610678+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367017984 unmapped: 89669632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:18.610922+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 89661440 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:19.611190+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 89661440 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:20.611401+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367026176 unmapped: 89661440 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:21.611632+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367034368 unmapped: 89653248 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:22.611968+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:23.612268+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:24.612456+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:25.612806+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:26.613207+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:27.613421+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:28.613639+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:29.613838+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:30.614060+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:31.614232+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:32.614401+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:33.614565+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:34.614774+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:35.615006+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:36.615248+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367050752 unmapped: 89636864 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:37.615444+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 89628672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:38.615601+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 89628672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:39.615794+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 89628672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:40.615960+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 89628672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:41.616157+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367058944 unmapped: 89628672 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:42.616557+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367075328 unmapped: 89612288 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:43.616896+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367075328 unmapped: 89612288 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:44.617080+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367075328 unmapped: 89612288 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:45.617422+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:46.617626+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:47.617950+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:48.618204+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:49.618385+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:50.618558+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:51.618737+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:52.618929+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:53.619116+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:54.619322+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:55.619574+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:56.619781+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:57.619945+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:58.620116+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:59.620358+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:00.620584+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367083520 unmapped: 89604096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:01.620836+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367108096 unmapped: 89579520 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:02.621170+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 89563136 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:03.621370+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 89563136 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:04.621559+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367124480 unmapped: 89563136 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:05.621692+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 89554944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:06.621890+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 89554944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:07.622169+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 89554944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:08.622430+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367132672 unmapped: 89554944 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:09.622637+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:10.622856+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:11.623051+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:12.623265+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:13.623425+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:14.623631+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:15.623798+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:16.624032+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:17.624172+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:18.624356+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:19.624502+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:20.624650+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:21.624792+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367140864 unmapped: 89546752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:22.624964+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 89530368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:23.625193+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 89530368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:24.625347+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367157248 unmapped: 89530368 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:25.625531+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 89522176 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:26.625732+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 89522176 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:27.625882+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 89522176 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:28.626067+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367165440 unmapped: 89522176 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:29.626191+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:30.626346+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:31.626505+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:32.626657+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:33.626810+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:34.626963+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:35.627169+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:36.627345+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:37.627516+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:38.627638+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:39.627780+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:40.627968+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:41.628183+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367173632 unmapped: 89513984 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:42.628368+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:43.628515+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:44.628663+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:45.628827+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:46.628999+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:47.629194+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:48.629397+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:49.629567+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:50.629729+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:51.629912+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:52.630099+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:53.630317+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367190016 unmapped: 89497600 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:54.630492+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 89489408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:55.630664+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 89489408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:56.630870+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 89489408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:57.631077+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 89489408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:58.631263+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367198208 unmapped: 89489408 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:59.631435+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367206400 unmapped: 89481216 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:00.631576+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367206400 unmapped: 89481216 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:01.631785+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367222784 unmapped: 89464832 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:02.631962+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 89448448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:03.632204+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 89448448 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:04.632318+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:05.632498+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:06.632678+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:07.632814+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:08.633038+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:09.633232+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:10.633394+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:11.634031+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:12.634179+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:13.634358+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:14.634534+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:15.634688+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:16.634890+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:17.635048+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:18.635188+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:19.635312+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 89440256 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:20.635483+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 89432064 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:21.635717+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 89432064 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:22.635911+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:23.636145+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:24.636341+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:25.636522+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:26.636694+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:27.636868+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:28.637077+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:29.637243+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:30.637460+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:31.637758+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:32.637887+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:33.638119+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:34.638256+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:35.638394+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:36.638590+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367271936 unmapped: 89415680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:37.638777+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367280128 unmapped: 89407488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:38.638959+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367280128 unmapped: 89407488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:39.639124+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367280128 unmapped: 89407488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:40.639277+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367280128 unmapped: 89407488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:41.639440+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367280128 unmapped: 89407488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:42.639607+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367280128 unmapped: 89407488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:43.639736+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367280128 unmapped: 89407488 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:44.639920+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367288320 unmapped: 89399296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:45.640168+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367288320 unmapped: 89399296 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:46.640390+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367296512 unmapped: 89391104 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:47.640549+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367296512 unmapped: 89391104 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:48.640787+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367296512 unmapped: 89391104 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:49.641018+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367296512 unmapped: 89391104 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:50.641248+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367296512 unmapped: 89391104 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:51.641461+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367296512 unmapped: 89391104 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:52.641848+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:53.642737+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:54.642921+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:55.643136+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:56.643392+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:57.644018+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:58.644664+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:59.645061+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:00.645406+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:01.645583+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:02.645851+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:03.646065+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:04.646259+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:05.646608+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:06.647098+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367312896 unmapped: 89374720 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:07.647374+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367321088 unmapped: 89366528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:08.647541+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367321088 unmapped: 89366528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:09.647730+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367321088 unmapped: 89366528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:10.648053+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367321088 unmapped: 89366528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:11.648238+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367321088 unmapped: 89366528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:12.648403+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:13.648580+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:14.648772+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:15.648921+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:16.649050+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:17.649226+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:18.649374+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:19.649499+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:20.649619+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:21.649740+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:22.649911+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:23.650067+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:24.650396+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:25.650565+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:26.650762+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:27.650893+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:28.651040+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:29.651207+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:30.651358+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:31.651473+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367337472 unmapped: 89350144 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:32.651597+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367362048 unmapped: 89325568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:33.651739+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367362048 unmapped: 89325568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:34.651877+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367362048 unmapped: 89325568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:35.652096+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367362048 unmapped: 89325568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:36.652300+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367362048 unmapped: 89325568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:37.652434+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367362048 unmapped: 89325568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:38.652653+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367362048 unmapped: 89325568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:39.652772+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367362048 unmapped: 89325568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:40.652917+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367362048 unmapped: 89325568 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:41.653064+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:42.653211+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:43.653388+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:44.653542+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:45.653659+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:46.653857+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:47.654053+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:48.654222+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:49.654387+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:50.654515+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367370240 unmapped: 89317376 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:51.654648+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367378432 unmapped: 89309184 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:52.654817+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 89284608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:53.655014+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 89284608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:54.655174+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 89284608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:55.655727+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 89284608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:56.656535+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 89284608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:57.657016+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 89284608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:58.657303+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 89284608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:59.657445+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.4 total, 600.0 interval
                                           Cumulative writes: 50K writes, 193K keys, 50K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 50K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 264 writes, 396 keys, 264 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 264 writes, 132 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 89284608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:00.657714+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367403008 unmapped: 89284608 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:01.658221+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:02.658508+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:03.658691+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:04.658835+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:05.659106+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:06.659317+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:07.659470+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:08.659634+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:09.659792+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:10.659962+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:11.660137+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367419392 unmapped: 89268224 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:12.660349+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:13.660567+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:14.660789+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:15.661057+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:16.661273+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:17.661448+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:18.661588+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:19.661748+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:20.661966+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:21.662165+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:22.662302+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:23.662439+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:24.662757+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367435776 unmapped: 89251840 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:25.663055+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367443968 unmapped: 89243648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:26.670599+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367443968 unmapped: 89243648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:27.672022+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367443968 unmapped: 89243648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:28.673606+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367443968 unmapped: 89243648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:29.674568+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367443968 unmapped: 89243648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:30.674729+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367443968 unmapped: 89243648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:31.675655+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367443968 unmapped: 89243648 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:32.676419+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367460352 unmapped: 89227264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:33.676612+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367460352 unmapped: 89227264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:34.677173+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367460352 unmapped: 89227264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:35.677415+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367460352 unmapped: 89227264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:36.677711+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367468544 unmapped: 89219072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:37.677950+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:38.678195+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:39.678665+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:40.679092+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:41.679290+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:42.679870+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:43.680053+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:44.680460+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:45.680769+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:46.681053+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:47.681530+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:48.681841+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:49.682094+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:50.682225+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:51.682366+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367476736 unmapped: 89210880 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:52.682843+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367493120 unmapped: 89194496 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:53.683250+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367501312 unmapped: 89186304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:54.683502+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367501312 unmapped: 89186304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:55.683707+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367501312 unmapped: 89186304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:56.683948+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367501312 unmapped: 89186304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:57.684118+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367501312 unmapped: 89186304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:58.684244+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367501312 unmapped: 89186304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:59.684391+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367501312 unmapped: 89186304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:00.684539+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367501312 unmapped: 89186304 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:01.684736+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:02.684898+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:03.685066+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:04.685288+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:05.685460+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:06.685627+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:07.685819+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:08.686079+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:09.686311+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:10.845366+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:11.845560+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367517696 unmapped: 89169920 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:12.845759+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:13.846057+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:14.846401+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:15.846599+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:16.846820+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:17.847029+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:18.847167+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:19.847325+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:20.847514+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 602.056640625s of 602.543640137s, submitted: 132
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:21.847689+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367534080 unmapped: 89153536 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:22.847858+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367542272 unmapped: 89145344 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:23.848062+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367542272 unmapped: 89145344 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:24.848209+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367542272 unmapped: 89145344 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:25.848500+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367542272 unmapped: 89145344 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:26.848690+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 89112576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:27.848948+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367575040 unmapped: 89112576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:28.849213+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367599616 unmapped: 89088000 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:29.858842+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367599616 unmapped: 89088000 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:30.859080+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367599616 unmapped: 89088000 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:31.859481+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367599616 unmapped: 89088000 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:32.860175+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:33.861026+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:34.861149+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:35.861499+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:36.861949+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:37.863439+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:38.864440+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:39.865389+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:40.865871+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:41.866299+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:42.867066+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:43.867269+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:44.867855+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:45.868382+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:46.868581+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:47.868778+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:48.876581+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:49.877003+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:50.877144+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:51.877405+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367616000 unmapped: 89071616 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:52.877700+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367632384 unmapped: 89055232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:53.877871+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367632384 unmapped: 89055232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:54.878098+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367632384 unmapped: 89055232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:55.878403+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367632384 unmapped: 89055232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:56.878788+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367632384 unmapped: 89055232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:57.878948+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367632384 unmapped: 89055232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:58.879143+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367632384 unmapped: 89055232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:59.879445+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367632384 unmapped: 89055232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:00.879620+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:01.879774+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:02.879930+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:03.880090+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:04.880250+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:05.880497+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:06.880712+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:07.880926+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:08.881287+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:09.881443+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:10.881643+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:11.882127+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367648768 unmapped: 89038848 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:12.882348+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:13.882522+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:14.883089+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:15.883591+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:16.883954+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:17.884123+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:18.884280+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:19.884676+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:20.885064+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:21.885396+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:22.885559+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:23.885713+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:24.886005+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:25.886309+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:26.886663+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:27.886864+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:28.887072+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:29.887198+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:30.887446+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:31.887676+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367665152 unmapped: 89022464 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:32.887866+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:33.888017+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:34.888255+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:35.888521+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:36.888709+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:37.888843+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:38.889014+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:39.889208+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:40.889831+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:41.890014+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:42.890325+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:43.890485+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:44.890720+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:45.890900+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:46.891460+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:47.891868+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:48.892213+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:49.892452+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:50.892728+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367681536 unmapped: 89006080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:51.893209+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:52.893350+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:53.893554+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:54.893825+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:55.894076+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:56.894330+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:57.894499+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:58.894690+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:59.894889+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:00.895203+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:01.895367+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:02.895582+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:03.895799+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:04.896078+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:05.896240+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:06.896473+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:07.896717+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:08.896917+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:09.897064+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:11.036253+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367697920 unmapped: 88989696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:12.036499+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:13.036637+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:14.036798+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:15.037019+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:16.037222+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:17.037459+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:18.037671+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:19.037933+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:20.038221+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:21.038438+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:22.038632+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:23.038800+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:24.039121+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:25.039385+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:26.039694+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:27.040075+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:28.040404+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367714304 unmapped: 88973312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:29.040600+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367722496 unmapped: 88965120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:30.040788+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367722496 unmapped: 88965120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:31.041041+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367722496 unmapped: 88965120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:32.041291+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:33.041435+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:34.041681+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:35.041931+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:36.042115+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:37.042355+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:38.042583+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:39.042777+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:40.043084+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:41.043301+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:42.043499+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:43.043705+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:44.043872+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:45.044139+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:46.044366+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:47.044611+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:48.044837+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367738880 unmapped: 88948736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:49.045080+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367747072 unmapped: 88940544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:50.045288+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367747072 unmapped: 88940544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:51.045485+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367747072 unmapped: 88940544 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:52.045769+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367763456 unmapped: 88924160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:53.045909+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367763456 unmapped: 88924160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:54.046129+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367763456 unmapped: 88924160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:55.046288+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367763456 unmapped: 88924160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:56.046427+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367763456 unmapped: 88924160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:57.046573+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:58.046757+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367763456 unmapped: 88924160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:59.046908+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367763456 unmapped: 88924160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:00.047081+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367763456 unmapped: 88924160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:01.047213+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367763456 unmapped: 88924160 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:02.047365+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367779840 unmapped: 88907776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:03.047517+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367779840 unmapped: 88907776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:04.047687+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367779840 unmapped: 88907776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:05.047878+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367779840 unmapped: 88907776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:06.048030+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367779840 unmapped: 88907776 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:07.048315+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367788032 unmapped: 88899584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:08.048623+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367788032 unmapped: 88899584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:09.048835+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367788032 unmapped: 88899584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:10.049005+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367788032 unmapped: 88899584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:11.049155+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367788032 unmapped: 88899584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:12.049358+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367788032 unmapped: 88899584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:13.049502+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:14.049622+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:15.049749+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:16.049906+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:17.050067+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:18.050175+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:19.050299+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:20.050412+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:21.050534+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:22.050631+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:23.050758+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:24.050868+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:25.051048+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:26.051231+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:27.051388+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367804416 unmapped: 88883200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:28.051518+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367812608 unmapped: 88875008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:29.051639+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367812608 unmapped: 88875008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:30.051770+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367812608 unmapped: 88875008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:31.052041+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367812608 unmapped: 88875008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:32.052195+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367812608 unmapped: 88875008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:33.052344+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:34.052512+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:35.052652+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:36.052827+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:37.053040+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:38.053258+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:39.053410+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:40.053570+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:41.053724+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:42.054060+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:43.054201+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:44.054326+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:45.054472+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:46.054670+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:47.054938+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:48.055080+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:49.055171+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:50.055315+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:51.055480+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:52.055622+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367828992 unmapped: 88858624 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:53.055742+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367853568 unmapped: 88834048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:54.055906+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367853568 unmapped: 88834048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:55.056192+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367853568 unmapped: 88834048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:56.056342+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367853568 unmapped: 88834048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:57.056544+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367853568 unmapped: 88834048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:58.056732+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367853568 unmapped: 88834048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:59.056909+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367853568 unmapped: 88834048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:00.057069+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367853568 unmapped: 88834048 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets getting new tickets!
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:01.057292+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _finish_auth 0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:01.058326+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367878144 unmapped: 88809472 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:02.057494+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367878144 unmapped: 88809472 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:03.057682+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367878144 unmapped: 88809472 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:04.057837+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367878144 unmapped: 88809472 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:05.058037+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367878144 unmapped: 88809472 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:06.058222+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367878144 unmapped: 88809472 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:07.058554+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: mgrc ms_handle_reset ms_handle_reset con 0x5563cf902000
Jan 26 17:43:51 compute-0 ceph-osd[86729]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4130999815
Jan 26 17:43:51 compute-0 ceph-osd[86729]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4130999815,v1:192.168.122.100:6801/4130999815]
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: get_auth_request con 0x5563ccf17c00 auth_method 0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: mgrc handle_mgr_configure stats_period=5
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:08.058722+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 ms_handle_reset con 0x5563caa19c00 session 0x5563db0cc380
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: handle_auth_request added challenge on 0x5563d06e2400
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:09.058850+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 ms_handle_reset con 0x5563d06e0c00 session 0x5563cc27f880
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: handle_auth_request added challenge on 0x5563cf900400
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368164864 unmapped: 88522752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 ms_handle_reset con 0x5563ccf19c00 session 0x5563d26b2fc0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: handle_auth_request added challenge on 0x5563d27c0c00
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:10.059035+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:11.059233+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:12.059410+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:13.059547+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:14.059743+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:15.059952+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:16.060466+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:17.060665+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:18.060844+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:19.061080+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:20.061330+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:21.061525+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:22.061694+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:23.061864+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:24.062150+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:25.062337+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:26.062530+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:27.062767+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:28.062881+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:29.063030+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:30.063207+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:31.063345+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:32.063517+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:33.063688+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:34.063837+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:35.064064+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:36.064240+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368295936 unmapped: 88391680 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:37.064428+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:38.064582+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:39.064780+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:40.064948+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:41.065145+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:42.065284+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:43.065389+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:44.065463+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:45.065627+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:46.065784+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:47.065945+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:48.066108+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:49.066305+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:50.066592+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:51.068423+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:52.069227+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:53.071957+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:54.072563+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:55.072958+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:56.073276+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368328704 unmapped: 88358912 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:57.073717+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368345088 unmapped: 88342528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:58.074164+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368345088 unmapped: 88342528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:59.074269+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368345088 unmapped: 88342528 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:00.074387+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368353280 unmapped: 88334336 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:01.074515+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368353280 unmapped: 88334336 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:02.074629+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368353280 unmapped: 88334336 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:03.074764+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368353280 unmapped: 88334336 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:04.074925+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368353280 unmapped: 88334336 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:05.075046+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368353280 unmapped: 88334336 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:06.075172+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368353280 unmapped: 88334336 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:07.075345+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 ms_handle_reset con 0x5563cdbd0c00 session 0x5563d04d6700
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: handle_auth_request added challenge on 0x5563dac86c00
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 88203264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:08.075511+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 88203264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:09.075637+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 88203264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:10.075803+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 88203264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:11.075961+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 88203264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:12.076127+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 88203264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:13.076271+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 88203264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:14.076430+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 88203264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:15.076565+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368484352 unmapped: 88203264 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:16.076750+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368492544 unmapped: 88195072 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:17.076931+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368508928 unmapped: 88178688 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:18.077087+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368508928 unmapped: 88178688 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:19.077290+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368508928 unmapped: 88178688 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:20.077432+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368508928 unmapped: 88178688 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:21.077566+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368517120 unmapped: 88170496 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 298.931701660s of 300.250244141s, submitted: 90
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:22.077711+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:23.077882+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:24.078043+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:25.078227+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:26.078478+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:27.078688+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:28.078824+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:29.079040+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:30.079214+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:31.079340+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:32.079460+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:33.079627+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:34.079786+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:35.079934+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:36.080178+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368582656 unmapped: 88104960 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:37.080400+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:38.080605+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:39.080777+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:40.080906+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:41.081086+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:42.081289+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:43.081429+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:44.081636+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:45.081789+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:46.081939+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:47.082160+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:48.082310+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:49.082673+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:50.082912+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:51.083021+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:52.083171+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:53.083346+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:54.083540+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:55.083662+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:56.083839+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368599040 unmapped: 88088576 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:57.084135+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368615424 unmapped: 88072192 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:58.084316+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368615424 unmapped: 88072192 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:59.084434+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368615424 unmapped: 88072192 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:00.084550+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368615424 unmapped: 88072192 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:01.084671+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368648192 unmapped: 88039424 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:02.084815+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368648192 unmapped: 88039424 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:03.084922+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368648192 unmapped: 88039424 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:04.085059+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 88031232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:05.085295+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368656384 unmapped: 88031232 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:06.085414+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:07.085581+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:08.085730+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:09.085874+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:10.086123+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:11.086254+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:12.086394+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:13.086511+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:14.086689+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:15.086845+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:16.087011+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368664576 unmapped: 88023040 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:17.087208+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:18.087434+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:19.087595+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:20.087760+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:21.087963+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:22.088189+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:23.088385+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:24.088507+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:25.088684+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:26.088875+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:27.089073+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:28.089208+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:29.089329+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:30.089481+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:31.089592+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:32.089734+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:33.089852+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:34.089933+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:35.090025+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:36.090157+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368680960 unmapped: 88006656 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:37.090296+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 87990272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:38.090430+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 87990272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:39.090535+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 87990272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:40.090664+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 87990272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:41.090796+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 87990272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:42.090911+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 87990272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:43.091109+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 87990272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:44.091233+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 87990272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:45.091366+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368697344 unmapped: 87990272 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:46.091474+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:47.091674+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:48.091816+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:49.091944+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:50.092138+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:51.092312+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:52.092445+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:53.092556+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:54.092704+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:55.092893+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:56.093083+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368705536 unmapped: 87982080 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:57.093241+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 87965696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:58.093380+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 87965696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:59.093558+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 87965696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:00.093681+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368721920 unmapped: 87965696 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:01.093803+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 87949312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:02.093930+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 87949312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:03.094083+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 87949312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:04.094239+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 87949312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:05.094385+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 87949312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:06.094529+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 87949312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:07.094687+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368738304 unmapped: 87949312 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:08.094817+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 87941120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:09.094961+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 87941120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:10.095116+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 87941120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:11.095262+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 87941120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:12.095940+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 87941120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:13.096101+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 87941120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:14.096229+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 87941120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:15.096374+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:16.096518+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 87941120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:17.096744+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368746496 unmapped: 87941120 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:18.096913+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 87924736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:19.097040+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 87924736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:20.097184+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 87924736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:21.097327+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368762880 unmapped: 87924736 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:22.097459+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:23.097594+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:24.097754+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:25.097890+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:26.098039+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:27.098198+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:28.098331+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:29.098441+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:30.098580+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:31.098703+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:32.098912+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:33.099047+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:34.099261+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:35.101777+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:36.101951+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:37.102245+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368779264 unmapped: 87908352 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:38.102413+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:39.102562+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:40.102700+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:41.102846+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:42.102996+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:43.103172+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:44.103335+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:45.103516+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:46.103651+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:47.103849+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:48.104067+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:49.104221+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:50.104413+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:51.104575+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368795648 unmapped: 87891968 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:52.104754+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 87875584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:53.104923+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 87875584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:54.105104+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 87875584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:55.105231+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 87875584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:56.105346+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 87875584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:57.105518+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368812032 unmapped: 87875584 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:58.105694+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 87859200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:59.105864+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368828416 unmapped: 87859200 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:00.106058+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:01.106257+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:02.106391+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:03.106528+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:04.106733+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:05.106907+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:06.107081+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:07.107270+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:08.107405+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:09.107550+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:10.107952+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:11.108123+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:12.108270+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:13.108415+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:14.108628+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:15.108747+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:16.108885+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368836608 unmapped: 87851008 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:17.109084+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368844800 unmapped: 87842816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:18.109212+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:19.109331+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:20.109518+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:21.109688+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:22.109869+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:23.110054+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:24.110182+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:25.110351+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:26.110497+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:27.110748+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:28.110919+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:29.111151+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:30.111299+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368861184 unmapped: 87826432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:31.111465+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368869376 unmapped: 87818240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:32.111635+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368869376 unmapped: 87818240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:33.111762+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368869376 unmapped: 87818240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:34.111904+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368869376 unmapped: 87818240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:35.112074+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368869376 unmapped: 87818240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:36.112403+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368869376 unmapped: 87818240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:37.112798+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:38.113086+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:39.113352+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:40.113562+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:41.113800+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:42.114058+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:43.114230+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:44.114401+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:45.114615+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:46.114812+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367820800 unmapped: 88866816 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:47.115023+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367837184 unmapped: 88850432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:48.115174+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367837184 unmapped: 88850432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:49.115344+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367837184 unmapped: 88850432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:50.115503+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367837184 unmapped: 88850432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:51.115660+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367837184 unmapped: 88850432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:52.115876+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367837184 unmapped: 88850432 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:53.116048+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 88842240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:54.116198+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 88842240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:55.116369+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 88842240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:56.116545+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 88842240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:57.116721+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 88842240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:58.116864+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 88842240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:59.117035+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 88842240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.4 total, 600.0 interval
                                           Cumulative writes: 50K writes, 193K keys, 50K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.02 MB/s
                                           Cumulative WAL: 50K writes, 18K syncs, 2.67 writes per sync, written: 0.18 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:00.117164+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367845376 unmapped: 88842240 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:01.117305+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367869952 unmapped: 88817664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:02.117425+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367869952 unmapped: 88817664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:03.117558+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367869952 unmapped: 88817664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:04.117685+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367869952 unmapped: 88817664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:05.117817+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367869952 unmapped: 88817664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:06.117944+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367869952 unmapped: 88817664 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:07.118117+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:08.118218+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:09.118350+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:10.118484+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:11.118789+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:12.118937+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:13.119041+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:14.119146+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:15.119311+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:16.119446+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:17.119629+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:18.119779+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:19.119916+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread fragmentation_score=0.005052 took=0.000046s
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:20.120071+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:21.120253+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:22.120381+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:23.120509+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367886336 unmapped: 88801280 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:24.120642+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367894528 unmapped: 88793088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:25.120772+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367894528 unmapped: 88793088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:26.120921+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367894528 unmapped: 88793088 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:27.121065+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:28.121205+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:29.121345+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:30.121478+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:31.121598+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:32.121727+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:33.121842+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:34.121954+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:35.122083+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:36.122209+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:37.122392+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:38.122568+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:39.122708+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367910912 unmapped: 88776704 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:40.122844+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367919104 unmapped: 88768512 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:41.123029+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367919104 unmapped: 88768512 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:42.123174+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367919104 unmapped: 88768512 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:43.123297+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367919104 unmapped: 88768512 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:44.123436+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367927296 unmapped: 88760320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:45.123564+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367927296 unmapped: 88760320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:46.123742+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367927296 unmapped: 88760320 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:47.123907+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367943680 unmapped: 88743936 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:48.124041+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:49.124180+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:50.124341+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:51.124476+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:52.124589+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:53.124725+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:54.124830+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:55.125060+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:56.125192+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:57.125414+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:58.125548+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:59.125676+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:00.125813+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367951872 unmapped: 88735744 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:01.125953+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367968256 unmapped: 88719360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:02.126155+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367968256 unmapped: 88719360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:03.126285+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367968256 unmapped: 88719360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:04.126425+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367968256 unmapped: 88719360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:05.126540+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367968256 unmapped: 88719360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:06.126720+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367968256 unmapped: 88719360 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:07.126885+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367984640 unmapped: 88702976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:08.127007+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367984640 unmapped: 88702976 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:09.127143+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367992832 unmapped: 88694784 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:10.127276+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367992832 unmapped: 88694784 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:11.127420+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367992832 unmapped: 88694784 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:12.127544+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367992832 unmapped: 88694784 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:13.127663+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367992832 unmapped: 88694784 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:14.127815+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 367992832 unmapped: 88694784 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:15.128020+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368001024 unmapped: 88686592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:16.128347+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368001024 unmapped: 88686592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:17.128528+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368001024 unmapped: 88686592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:18.128732+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368001024 unmapped: 88686592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:19.128895+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368001024 unmapped: 88686592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:20.129152+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368001024 unmapped: 88686592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:21.129406+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368001024 unmapped: 88686592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:22.129637+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 300.553894043s of 300.588226318s, submitted: 22
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368009216 unmapped: 88678400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:23.130070+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368017408 unmapped: 88670208 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:24.130224+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368017408 unmapped: 88670208 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:25.130522+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368017408 unmapped: 88670208 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:26.130718+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368017408 unmapped: 88670208 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:27.130883+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368041984 unmapped: 88645632 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:28.131106+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:29.131278+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:30.131496+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:31.131694+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:32.131851+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:33.132100+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:34.132312+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:35.132475+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:36.132629+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:37.132817+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:38.133077+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:39.133233+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:40.133389+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:41.133529+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:42.133653+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:43.133804+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:44.134013+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:45.134171+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:46.134304+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368091136 unmapped: 88596480 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:47.134461+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:48.134657+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:49.134832+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:50.135025+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:51.135173+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:52.135294+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:53.135428+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:54.135569+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:55.135681+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:56.135849+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:57.136023+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:58.136203+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:59.136380+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:00.136546+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368107520 unmapped: 88580096 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:01.136637+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 88563712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:02.136751+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 88563712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:03.136864+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 88563712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:04.136991+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 88563712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:05.137168+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 88563712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:06.137323+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368123904 unmapped: 88563712 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:07.137490+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:08.137636+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:09.137806+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:10.138222+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:11.138382+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:12.138504+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:13.138630+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:14.138792+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:15.138940+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:16.139092+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:17.139263+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368140288 unmapped: 88547328 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:18.139360+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: do_command 'config diff' '{prefix=config diff}'
Jan 26 17:43:51 compute-0 ceph-osd[86729]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 26 17:43:51 compute-0 ceph-osd[86729]: do_command 'config show' '{prefix=config show}'
Jan 26 17:43:51 compute-0 ceph-osd[86729]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 26 17:43:51 compute-0 ceph-osd[86729]: do_command 'counter dump' '{prefix=counter dump}'
Jan 26 17:43:51 compute-0 ceph-osd[86729]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368164864 unmapped: 88522752 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:19.139500+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: do_command 'counter schema' '{prefix=counter schema}'
Jan 26 17:43:51 compute-0 ceph-osd[86729]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368009216 unmapped: 88678400 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:20.139644+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: osd.1 332 heartbeat osd_stat(store_statfs(0x4e8a19000/0x0/0x4ffc00000, data 0x700339/0x8d3000, compress 0x0/0x0/0x0, omap 0x727af, meta 0x1689d851), peers [0,2] op hist [])
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:51 compute-0 ceph-osd[86729]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:51 compute-0 ceph-osd[86729]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3739324 data_alloc: 218103808 data_used: 224040
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: tick
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_tickets
Jan 26 17:43:51 compute-0 ceph-osd[86729]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:21.139777+0000)
Jan 26 17:43:51 compute-0 ceph-osd[86729]: prioritycache tune_memory target: 4294967296 mapped: 368001024 unmapped: 88686592 heap: 456687616 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:51 compute-0 ceph-osd[86729]: do_command 'log dump' '{prefix=log dump}'
Jan 26 17:43:51 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 26 17:43:51 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1880318050' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 26 17:43:51 compute-0 rsyslogd[1006]: imjournal from <np0005595828:ceph-osd>: begin to drop messages due to rate-limiting
Jan 26 17:43:52 compute-0 ceph-mon[75140]: from='client.23464 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:52 compute-0 ceph-mon[75140]: from='client.23468 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/547722886' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 26 17:43:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3554241155' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 26 17:43:52 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1880318050' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 26 17:43:52 compute-0 nova_compute[239965]: 2026-01-26 17:43:52.097 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 26 17:43:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343822256' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 26 17:43:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 26 17:43:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535220900' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 26 17:43:53 compute-0 ceph-mon[75140]: from='client.23472 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:53 compute-0 ceph-mon[75140]: pgmap v4622: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1343822256' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 26 17:43:53 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/535220900' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 26 17:43:53 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 26 17:43:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/339011991' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 26 17:43:53 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 26 17:43:53 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2300889239' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 26 17:43:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 26 17:43:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2977806203' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 26 17:43:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 26 17:43:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4096238707' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 26 17:43:54 compute-0 ceph-mon[75140]: pgmap v4623: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/339011991' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 26 17:43:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2300889239' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 26 17:43:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2977806203' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 26 17:43:54 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4096238707' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 26 17:43:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 26 17:43:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/369403770' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 26 17:43:54 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 26 17:43:54 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1071933225' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 26 17:43:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 26 17:43:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1812154456' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 26 17:43:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/369403770' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 26 17:43:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1071933225' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 26 17:43:55 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1812154456' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 26 17:43:55 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 26 17:43:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2482602187' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 26 17:43:55 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 26 17:43:55 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1622365022' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 26 17:43:55 compute-0 nova_compute[239965]: 2026-01-26 17:43:55.863 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 26 17:43:56 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2809637176' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 26 17:43:56 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23502 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:56 compute-0 ceph-mon[75140]: pgmap v4624: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2482602187' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 26 17:43:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/1622365022' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 26 17:43:56 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2809637176' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 88842240 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:45.093791+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 88834048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:46.094060+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 88834048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:47.094308+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 88834048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:48.094468+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 88834048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:49.094667+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:50.094896+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:51.095139+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 88817664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:52.095336+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 88817664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:53.095522+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 88817664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:54.095827+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 88817664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:55.096076+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 88817664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:56.096250+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:57.096459+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:58.096715+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:09:59.096885+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:00.097057+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:01.097261+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:02.097447+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:03.097630+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:04.097838+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:05.098065+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:06.098253+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:07.098483+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:08.098642+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:09.098793+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 88809472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:10.098940+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 88801280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:11.099243+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:12.099415+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:13.099593+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:14.099735+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:15.099891+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:16.100057+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:17.100203+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:18.100398+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:19.100588+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:20.100733+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:21.100910+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:22.101177+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:23.101387+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:24.101526+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:25.101665+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:26.101821+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:27.102056+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:28.102231+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:29.102456+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:30.102614+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:31.103022+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:32.103363+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:33.103654+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:34.103910+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:35.104124+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:36.104488+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:37.104674+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 88768512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:38.104886+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:39.105100+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:40.105413+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:41.105775+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:42.106059+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:43.106292+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:44.106498+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:45.106739+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:46.106956+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:47.107165+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:48.107377+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:49.107634+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:50.107883+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:51.108178+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:52.108374+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:53.108598+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:54.108774+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 6600.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 16K syncs, 2.74 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:55.109106+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:56.109318+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:57.109487+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:58.109677+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:10:59.109870+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:00.110089+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 88735744 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:01.110293+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:02.110460+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:03.110714+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:04.110832+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:05.111010+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:06.111193+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:07.111505+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:08.111788+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:09.112086+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:10.112282+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:11.112602+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:12.112788+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:13.113029+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:14.113207+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:15.113366+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:16.113497+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:17.113712+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 88702976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:18.113950+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 88694784 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:19.114128+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 88694784 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:20.114295+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 88694784 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:21.114494+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 88694784 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:22.114692+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 88694784 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:23.114860+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359882752 unmapped: 88694784 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:24.115028+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:25.115225+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:26.115422+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:27.115556+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:28.115702+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:29.115858+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:30.116035+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:31.116231+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:32.116412+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:33.116624+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 88678400 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:34.116791+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:35.117076+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:36.117286+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:37.117472+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:38.117626+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:39.117826+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:40.118046+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:41.118257+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 89735168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:42.118474+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 89735168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:43.118687+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 89735168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:44.118895+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 89735168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:45.119095+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 89735168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:46.119231+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 89735168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:47.119438+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 89735168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:48.119670+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358842368 unmapped: 89735168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:49.119902+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:50.120099+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:51.120340+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:52.120500+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:53.121197+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:54.121401+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:55.121597+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:56.121791+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:57.122016+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:58.122203+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:11:59.122395+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:00.122591+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:01.122878+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358850560 unmapped: 89726976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:02.123074+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:03.123269+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:04.123495+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:05.123740+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:06.124094+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:07.124308+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:08.124504+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:09.124675+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:10.124848+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:11.125060+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:12.125209+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:13.125369+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:14.125524+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358866944 unmapped: 89710592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:15.125684+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358875136 unmapped: 89702400 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:16.125859+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358875136 unmapped: 89702400 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:17.126046+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358875136 unmapped: 89702400 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:18.126155+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358875136 unmapped: 89702400 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:19.126318+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 89694208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:20.126464+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 89694208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:21.126798+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 89694208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 300.267822266s of 300.420989990s, submitted: 18
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:22.126940+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 89694208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:23.127098+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 89694208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:24.127229+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 89694208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:25.127423+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 89694208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:26.127617+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358883328 unmapped: 89694208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:27.127773+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 89661440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:28.128093+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 89645056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:29.128346+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:30.128512+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:31.128697+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:32.128859+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:33.129046+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:34.129248+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:35.129429+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:36.129595+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:37.129773+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:38.130068+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:39.130265+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:40.130429+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:41.130612+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:42.130809+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:43.131092+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:44.131278+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:45.131506+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:46.131671+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:47.131749+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:48.131877+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:49.132199+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:50.132393+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:51.132606+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:52.132726+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:53.132856+0000)
Jan 26 17:43:56 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:54.133022+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:55.133163+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:56.133297+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:57.133397+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:58.133517+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:12:59.133657+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:00.133816+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:01.134065+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:02.134212+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:03.134402+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:04.134560+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:05.134719+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:06.134938+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:07.135091+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:08.135241+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:09.135410+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:10.135540+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 89628672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:11.135763+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 89620480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:12.135964+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 89620480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:13.136251+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 89620480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:14.136439+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 89620480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:15.136629+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 89620480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:16.136837+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:17.137010+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:18.137143+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:19.137375+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:20.137583+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:21.137786+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:22.137918+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:23.138082+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:24.138264+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:25.138450+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:26.138615+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:27.138763+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:28.138893+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:29.139008+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:30.139183+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:31.139362+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:32.139504+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:33.139641+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:34.139809+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:35.140407+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:36.140545+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:37.140681+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 89612288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:38.140846+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:39.140990+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:40.141149+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:41.141360+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:42.141625+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:43.141788+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:44.141907+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:45.142048+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:46.142188+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:47.142366+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:48.142545+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:49.142760+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:50.142941+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358973440 unmapped: 89604096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:51.143206+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:52.143387+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:53.143539+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:54.143699+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:55.143939+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:56.144192+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:57.144367+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:58.144580+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:13:59.144775+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:00.145450+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:01.145704+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:02.145870+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:03.146076+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358981632 unmapped: 89595904 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:04.146247+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:05.146448+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:06.147235+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:07.147406+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:08.147555+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:09.147684+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:10.147852+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:11.148116+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:12.148261+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:13.148404+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:14.148532+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:15.148787+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:16.149057+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:17.149231+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358989824 unmapped: 89587712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:18.149409+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:19.149572+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:20.149730+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:21.149963+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:22.150240+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:23.150378+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:24.150536+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:25.150728+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:26.150916+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:27.151146+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:28.151353+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:29.151548+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:30.151752+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:31.152096+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:32.152287+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:33.152437+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:34.152605+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:35.152774+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 358998016 unmapped: 89579520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:36.153037+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:37.153213+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:38.153426+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:39.153588+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:40.153826+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:41.154071+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:42.154220+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:43.155000+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:44.155200+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:45.155384+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:46.155504+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:47.155642+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:48.155771+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359006208 unmapped: 89571328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:49.155937+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:50.156220+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:51.156402+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:52.156532+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:53.156676+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:54.156875+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:55.157091+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:56.157293+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:57.157481+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:58.157632+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:14:59.157809+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:00.157932+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:01.158133+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:02.158285+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:03.158425+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:04.158570+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359014400 unmapped: 89563136 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:05.158702+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 89554944 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:06.158870+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 89554944 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:07.159054+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 89554944 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:08.159224+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 89554944 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:09.159395+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 89554944 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:10.159572+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 89554944 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:11.159829+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 89554944 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:12.160002+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359022592 unmapped: 89554944 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:13.160177+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:14.160338+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:15.160521+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:16.160753+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:17.161097+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:18.161386+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:19.161615+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:20.161809+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:21.162146+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:22.162320+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:23.162532+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:24.162693+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:25.162866+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:26.163053+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:27.163173+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:28.163301+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359038976 unmapped: 89538560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:29.163480+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:30.163625+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:31.163800+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:32.163938+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:33.164089+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:34.164246+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:35.164396+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:36.164594+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:37.164751+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:38.164935+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:39.165116+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359047168 unmapped: 89530368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:40.165252+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359055360 unmapped: 89522176 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:41.165412+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:42.165558+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:43.165706+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:44.165907+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:45.166074+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:46.166227+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:47.166367+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:48.166536+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:49.166656+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:50.166808+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:51.167561+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:52.167709+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:53.167855+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:54.168040+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:55.168238+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:56.168402+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:57.168544+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:58.168690+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359063552 unmapped: 89513984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:15:59.168872+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:00.169037+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:01.169254+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:02.169506+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:03.169655+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:04.169846+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:05.170037+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:06.170283+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:07.170427+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:08.170563+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:09.170707+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:10.170873+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:11.171109+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:12.171273+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 89505792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:13.171419+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:14.171568+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:15.171764+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:16.171962+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:17.172272+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:18.172429+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:19.172883+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:20.173138+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:21.173421+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:22.173605+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359079936 unmapped: 89497600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:23.173776+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359088128 unmapped: 89489408 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:24.174080+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359088128 unmapped: 89489408 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:25.174248+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359088128 unmapped: 89489408 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:26.174431+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:27.174593+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:28.174821+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:29.175042+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:30.175236+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:31.175488+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:32.175656+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:33.175827+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:34.176099+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:35.183906+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:36.184034+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359096320 unmapped: 89481216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:37.184145+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 89473024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:38.184269+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 89473024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:39.184488+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 89473024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:40.184691+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 89473024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:41.184884+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 89473024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:42.185036+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 89473024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:43.185168+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 89473024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:44.185368+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359104512 unmapped: 89473024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:45.185539+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:46.185670+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:47.185854+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:48.186141+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:49.186327+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:50.186495+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:51.186727+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:52.186940+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:53.187154+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:54.187376+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:55.187571+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:56.187780+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:57.187959+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359112704 unmapped: 89464832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:58.188145+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:16:59.188319+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:00.188474+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:01.188950+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:02.189950+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:03.190236+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:04.190511+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:05.190899+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:06.191299+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:07.191722+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:08.192279+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:09.192615+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:10.192797+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:11.193181+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359120896 unmapped: 89456640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:12.193664+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359129088 unmapped: 89448448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:13.193915+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359129088 unmapped: 89448448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:14.194189+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359129088 unmapped: 89448448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:15.194573+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359129088 unmapped: 89448448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:16.195110+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359137280 unmapped: 89440256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:17.195392+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359137280 unmapped: 89440256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:18.195757+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359137280 unmapped: 89440256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:19.196033+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359137280 unmapped: 89440256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:20.196214+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359137280 unmapped: 89440256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:21.196550+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359137280 unmapped: 89440256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:22.196914+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359137280 unmapped: 89440256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:23.197090+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359137280 unmapped: 89440256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:24.197352+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:25.197552+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:26.197846+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:27.198118+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:28.198343+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:29.198604+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:30.198812+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:31.199055+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:32.199230+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:33.199436+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:34.199636+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:35.199814+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:36.200004+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:37.200203+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:38.200358+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359145472 unmapped: 89432064 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:39.200525+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359153664 unmapped: 89423872 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:40.200699+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:41.200917+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:42.201190+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:43.201406+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:44.201795+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:45.202112+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:46.202387+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:47.202592+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:48.202801+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:49.203081+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:50.203325+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:51.203646+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:52.203855+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:53.204065+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:54.204360+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:55.204541+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359161856 unmapped: 89415680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:56.204810+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:57.205021+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:58.205237+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:17:59.205424+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:00.205706+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:01.205892+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:02.206180+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:03.206323+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:04.206464+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:05.206701+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:06.207080+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:07.207310+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:08.207623+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359170048 unmapped: 89407488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:09.207760+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359178240 unmapped: 89399296 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:10.208030+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359178240 unmapped: 89399296 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:11.208235+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359178240 unmapped: 89399296 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:12.208431+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:13.208612+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:14.208819+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:15.209045+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:16.209305+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:17.209509+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:18.209682+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:19.209918+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:20.210213+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:21.210444+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:22.210797+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:23.211025+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:24.211260+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359186432 unmapped: 89391104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:25.211453+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:26.211737+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:27.211935+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:28.212124+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:29.212320+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:30.212488+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:31.212652+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:32.212845+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:33.213053+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:34.214467+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:35.214876+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359194624 unmapped: 89382912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:36.215078+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 89374720 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:37.215261+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 89374720 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:38.215733+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 89374720 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:39.216079+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 89374720 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:40.217415+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 89374720 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:41.217654+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 89374720 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:42.218357+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 89374720 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:43.218618+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 89374720 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:44.219086+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359211008 unmapped: 89366528 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:45.219290+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359211008 unmapped: 89366528 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:46.219968+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359211008 unmapped: 89366528 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:47.220197+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359211008 unmapped: 89366528 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:48.220411+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359211008 unmapped: 89366528 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:49.220722+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:50.221097+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:51.221405+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:52.221726+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:53.222116+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:54.222301+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:55.222517+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:56.222917+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:57.223117+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:58.223351+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:18:59.223524+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:00.223884+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:01.224101+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359219200 unmapped: 89358336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:02.224244+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359227392 unmapped: 89350144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:03.224387+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359227392 unmapped: 89350144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:04.224792+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359227392 unmapped: 89350144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:05.225117+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359227392 unmapped: 89350144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:06.225294+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:07.225435+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359227392 unmapped: 89350144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:08.225610+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359227392 unmapped: 89350144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:09.225778+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359227392 unmapped: 89350144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:10.225920+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359227392 unmapped: 89350144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:11.226049+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359227392 unmapped: 89350144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:12.226187+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359235584 unmapped: 89341952 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:13.226327+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359235584 unmapped: 89341952 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:14.226467+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359235584 unmapped: 89341952 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:15.226603+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359235584 unmapped: 89341952 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:16.226752+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359235584 unmapped: 89341952 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:17.226887+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359235584 unmapped: 89341952 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:18.227046+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359235584 unmapped: 89341952 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:19.227213+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359235584 unmapped: 89341952 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:20.227349+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:21.227476+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:22.227583+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:23.227701+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:24.227875+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:25.228113+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:26.228293+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:27.228436+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:28.228620+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:29.228800+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:30.228911+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:31.229123+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:32.229309+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:33.229451+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:34.229625+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:35.229763+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:36.229922+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:37.230105+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:38.230281+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:39.230423+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:40.230577+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:41.230799+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:42.230954+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:43.231116+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:44.231260+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:45.231391+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359243776 unmapped: 89333760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:46.231519+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 89325568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:47.231679+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 89325568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:48.231820+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 89325568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:49.232028+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 89325568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:50.232170+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359251968 unmapped: 89325568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:51.232902+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 89317376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:52.233157+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 89317376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:53.233316+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 89317376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:54.233501+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 89317376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:55.233777+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 89317376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:56.234051+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 89317376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:57.234232+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 89317376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:58.234449+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 89317376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:19:59.234672+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359260160 unmapped: 89317376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:00.234875+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:01.235127+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating renewing rotating keys (they expired before 2026-01-26T17:20:02.235375+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _finish_auth 0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:02.236262+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:03.235644+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:04.235773+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:05.235901+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:06.236037+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:07.236191+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:08.236393+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:09.236531+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:10.236651+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:11.236903+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:12.237099+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:13.237326+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:14.237563+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:15.237730+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:16.238029+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:17.238207+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:18.238417+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:19.238549+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:20.238767+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 89309184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:21.238990+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 89300992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:22.239130+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 89300992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:23.239319+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 89300992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:24.239588+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 89300992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:25.239865+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 89300992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:26.240046+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 89300992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:27.240175+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359276544 unmapped: 89300992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:28.240304+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 89292800 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:29.240489+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359284736 unmapped: 89292800 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:30.240712+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 89284608 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:31.241078+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 89284608 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:32.241324+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 89284608 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:33.241569+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 89284608 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:34.241773+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 89284608 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:35.242021+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 89284608 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:36.242236+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359292928 unmapped: 89284608 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:37.242382+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 89276416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:38.242529+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 89276416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:39.242670+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 89276416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:40.242789+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359309312 unmapped: 89268224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:41.242956+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359309312 unmapped: 89268224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:42.243114+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359317504 unmapped: 89260032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:43.243280+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:44.243425+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:45.243560+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:46.243702+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:47.243829+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:48.244001+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:49.244129+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:50.244349+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:51.244604+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:52.244773+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:53.245037+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:54.245262+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7200.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 184K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.10 MB, 0.00 MB/s
                                           Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:55.245417+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:56.245582+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:57.245730+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359325696 unmapped: 89251840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:58.245922+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359333888 unmapped: 89243648 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:20:59.246079+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 89235456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:00.246213+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 89235456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:01.246390+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359342080 unmapped: 89235456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: mgrc ms_handle_reset ms_handle_reset con 0x555976438400
Jan 26 17:43:56 compute-0 ceph-osd[85687]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/4130999815
Jan 26 17:43:56 compute-0 ceph-osd[85687]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/4130999815,v1:192.168.122.100:6801/4130999815]
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: get_auth_request con 0x55597d8b1800 auth_method 0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: mgrc handle_mgr_configure stats_period=5
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:02.246535+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:03.246682+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:04.246831+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:05.247053+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:06.247239+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:07.247423+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:08.247625+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:09.247782+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 ms_handle_reset con 0x55597550d800 session 0x555989a66380
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: handle_auth_request added challenge on 0x555975a27000
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:10.248012+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:11.248273+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:12.248470+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:13.248643+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359358464 unmapped: 89219072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:14.248828+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 89210880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:15.248945+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 89210880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:16.249080+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 89210880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:17.249236+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 89210880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:18.249376+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 89210880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:19.249539+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359366656 unmapped: 89210880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:20.249724+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:21.249898+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:22.250045+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:23.250177+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:24.250264+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:25.250400+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:26.250635+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:27.250784+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:28.250918+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:29.251144+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:30.251459+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:31.251664+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:32.251904+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:33.252075+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:34.252244+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:35.252407+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:36.252566+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:37.252728+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359374848 unmapped: 89202688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:38.252889+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359383040 unmapped: 89194496 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:39.253019+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 89186304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:40.253147+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 89186304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:41.253383+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 89186304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:42.253566+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 89186304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:43.253760+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359391232 unmapped: 89186304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:44.253915+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359399424 unmapped: 89178112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:45.254045+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 89169920 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:46.254135+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359407616 unmapped: 89169920 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:47.254299+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 89161728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:48.254466+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 89161728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:49.254605+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 89161728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:50.254814+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 89161728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:51.255099+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 89161728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:52.255230+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 89161728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:53.255411+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 89161728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:54.255550+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 89161728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:55.256410+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359415808 unmapped: 89161728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:56.256614+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 89153536 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:57.256811+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 89153536 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:58.257040+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 89153536 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:21:59.257241+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 89153536 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:00.257458+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 89153536 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:01.257814+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359424000 unmapped: 89153536 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:02.258016+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:03.258206+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:04.258363+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:05.258502+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:06.258721+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:07.258902+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:08.259067+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:09.259208+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:10.259324+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:11.259506+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:12.259700+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:13.259851+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 ms_handle_reset con 0x555975a49400 session 0x5559787968c0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: handle_auth_request added challenge on 0x555975af8000
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:14.260066+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:15.260204+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:16.260393+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:17.260533+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 131861
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:18.260697+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359432192 unmapped: 89145344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 596.838073730s of 597.670898438s, submitted: 106
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:19.260846+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359456768 unmapped: 89120768 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:20.261040+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359456768 unmapped: 89120768 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:21.261355+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359473152 unmapped: 89104384 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:22.261594+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359473152 unmapped: 89104384 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:23.261726+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359473152 unmapped: 89104384 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:24.261943+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359473152 unmapped: 89104384 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:25.262178+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359473152 unmapped: 89104384 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:26.262418+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359489536 unmapped: 89088000 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:27.262593+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:28.262734+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:29.262915+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:30.263142+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:31.263382+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:32.263537+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:33.263737+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:34.263928+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:35.264117+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:36.264274+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:37.264460+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:38.264659+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:39.264830+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:40.265018+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:41.265231+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:42.265408+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:43.265565+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:44.265703+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:45.265847+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:46.266043+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:47.266179+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:48.266378+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:49.266593+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:50.266845+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:51.267130+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:52.267291+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:53.267427+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:54.267640+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359530496 unmapped: 89047040 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:55.267812+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:56.268001+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:57.268156+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:58.268348+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:22:59.268514+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:00.268688+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:01.268856+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:02.269092+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:03.269299+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:04.269510+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:05.269649+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:06.269879+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:07.270056+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:08.270364+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359538688 unmapped: 89038848 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:09.270520+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359546880 unmapped: 89030656 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:10.270676+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 89022464 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:11.270917+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 89022464 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:12.271082+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 89022464 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:13.271235+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 89022464 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:14.271363+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359555072 unmapped: 89022464 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:15.271522+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:16.271744+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:17.271930+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:18.272159+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:19.272367+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:20.272559+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:21.272815+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:22.273037+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:23.273198+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:24.273801+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:25.274146+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:26.274955+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:27.275524+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:28.275714+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:29.276372+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:30.276601+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:31.277139+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:32.277293+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359563264 unmapped: 89014272 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:33.277437+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 89006080 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:34.277683+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 89006080 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:35.277903+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 89006080 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:36.278408+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 89006080 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:37.278591+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359571456 unmapped: 89006080 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:38.278858+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:39.279026+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:40.279361+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:41.279586+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:42.279842+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:43.280035+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:44.280331+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:45.280480+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:46.280789+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:47.281145+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 88997888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:48.281492+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:49.281792+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:50.282000+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:51.282302+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:52.282498+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:53.282699+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:54.283047+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:55.283291+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:56.283457+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:57.283676+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:58.283918+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:23:59.284159+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:00.284373+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359587840 unmapped: 88989696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:01.284636+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 88981504 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:02.285100+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 88981504 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:03.285279+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 88981504 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:04.287035+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359596032 unmapped: 88981504 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:05.287184+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 88973312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:06.287345+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 88973312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:07.287485+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 88973312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:08.287649+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 88973312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:09.287797+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 88973312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:10.288196+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 88973312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:11.288463+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 88973312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:12.288626+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 88973312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:13.288773+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 88973312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:14.288899+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 88965120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:15.289081+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 88965120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:16.289307+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 88965120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:17.289502+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 88965120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:18.289660+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 88965120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:19.289822+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 88965120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:20.290128+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 88965120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:21.290373+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 88956928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:22.290477+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 88956928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:23.290657+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:24.290831+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:25.290997+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:26.291164+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:27.291318+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:28.291862+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:29.292128+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:30.292734+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:31.293034+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:32.294087+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:33.294312+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:34.294847+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 88948736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:35.295064+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 88940544 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:36.295333+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 88940544 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:37.295546+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 88940544 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:38.295730+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:39.296034+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:40.296233+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:41.296437+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:42.296608+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:43.296761+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:44.296932+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:45.297189+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:46.300697+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:47.300931+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:48.301121+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:49.301324+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:50.301521+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:51.301747+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:52.301929+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 88932352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:53.302090+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 88924160 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:54.302526+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 88924160 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:55.302711+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 88915968 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:56.302926+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 88907776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:57.303289+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:58.303565+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:24:59.303814+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:00.303957+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:01.304151+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:02.304344+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:03.304493+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:04.304659+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:05.304816+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:06.305035+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:07.305203+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 88899584 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:08.305420+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 88891392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:09.305642+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 88891392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:10.305867+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 88891392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:11.306111+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 88891392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:12.306218+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:13.306407+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:14.306555+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:15.306719+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:16.306892+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:17.307070+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:18.307232+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:19.307392+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:20.307571+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:21.307783+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:22.307949+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:23.308207+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 88883200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:24.308367+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 88875008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:25.308508+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 88875008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:26.308715+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:27.308860+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:28.309012+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:29.309172+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:30.309332+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:31.309555+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:32.309720+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:33.309881+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:34.310079+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:35.310225+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:36.310380+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:37.310567+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:38.310698+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:39.310839+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 88866816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:40.311019+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 88858624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:41.311252+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 88858624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:42.311411+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:43.311596+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:44.311794+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:45.311934+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:46.312123+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:47.312276+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:48.312481+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:49.312700+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:50.312870+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:51.313068+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:52.313191+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 88850432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:53.313338+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 88842240 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:54.313522+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 88834048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:55.313720+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 88834048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:56.313917+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:57.314102+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:58.314254+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:25:59.314459+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:00.314707+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:01.314941+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:02.316161+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:03.316318+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:04.316527+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:05.316744+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:06.316923+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:07.317080+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 88825856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:08.317256+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 88817664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:09.317488+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 88817664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:10.317638+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 88817664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:11.317842+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 88817664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:12.318057+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 88801280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:13.318243+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 88801280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:14.318421+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 88801280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:15.318608+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:16.318723+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:17.318872+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:18.319059+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:19.319241+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:20.319381+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:21.319541+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:22.319702+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:23.319860+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:24.320052+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:25.320223+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:26.320393+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:27.320540+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 88793088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:28.320695+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:29.320898+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:30.321157+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:31.321909+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:32.322114+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:33.322307+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:34.322523+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:35.322705+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:36.322883+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:37.323029+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359792640 unmapped: 88784896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:38.323165+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:39.323889+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:40.324061+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:41.324315+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:42.324460+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:43.324661+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:44.324896+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:45.325103+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:46.325360+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:47.325564+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:48.325808+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 88776704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:49.326099+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:50.326267+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:51.326487+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:52.326639+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:53.326810+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:54.327010+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:55.327214+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359817216 unmapped: 88760320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:56.327403+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:57.327563+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:58.327773+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:26:59.327994+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:00.328195+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:01.328463+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:02.328671+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:03.328843+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:04.329083+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:05.329352+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359825408 unmapped: 88752128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:06.329552+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:07.329765+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:08.330031+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359833600 unmapped: 88743936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:09.330262+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359841792 unmapped: 88735744 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:10.331201+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:11.332092+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:12.335805+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:13.336293+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:14.338114+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:15.338868+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:16.339318+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:17.339692+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:18.339952+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:19.340151+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:20.340456+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:21.340653+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359849984 unmapped: 88727552 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:22.340893+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 88719360 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:23.341108+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 88719360 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:24.341331+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 88719360 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:25.341499+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 88719360 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:26.341693+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359858176 unmapped: 88719360 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:27.341926+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:28.342088+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:29.342249+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:30.342460+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:31.342675+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:32.342893+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:33.343136+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:34.343423+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:35.343592+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:36.343760+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:37.343931+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:38.344116+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:39.344413+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:40.344550+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:41.344763+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:42.345100+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:43.345287+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:44.345448+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359866368 unmapped: 88711168 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:45.345616+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 88702976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:46.346018+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 88702976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:47.346297+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 88702976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:48.346438+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359874560 unmapped: 88702976 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:49.346616+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:50.346807+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:51.347099+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359890944 unmapped: 88686592 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:52.347314+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 88678400 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:53.347506+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359899136 unmapped: 88678400 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:54.347672+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:55.347853+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:56.348051+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:57.348196+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:58.348381+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:27:59.348598+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:00.348816+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:01.349692+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:02.349873+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:03.350044+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:04.350345+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359907328 unmapped: 88670208 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:05.350491+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 88662016 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:06.350718+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 88662016 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:07.350948+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 88662016 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:08.351244+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 88662016 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:09.351434+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 88662016 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:10.351653+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 88662016 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:11.351934+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 88662016 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:12.352119+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 88653824 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:13.352305+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 88653824 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:14.352538+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 88653824 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:15.352730+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 88653824 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:16.352881+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 88653824 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:17.353745+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359923712 unmapped: 88653824 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:18.353950+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 88645632 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:19.354182+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 88645632 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:20.354364+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 88645632 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:21.354642+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 88645632 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:22.354827+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 88645632 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:23.355041+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 88645632 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:24.355348+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359931904 unmapped: 88645632 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:25.355518+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 88637440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:26.355650+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 88637440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:27.355790+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 88637440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:28.356015+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 88637440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:29.356210+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 88637440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:30.356424+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 88637440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:31.356620+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 88637440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:32.356830+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 88637440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:33.356969+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359940096 unmapped: 88637440 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:34.357157+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 88629248 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:35.357328+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 88629248 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:36.357523+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359948288 unmapped: 88629248 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:37.357692+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:38.357829+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:39.357934+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:40.358078+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:41.358321+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:42.358491+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:43.358643+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:44.358817+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:45.359011+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:46.359170+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:47.359348+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:48.359485+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:49.359740+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:50.359931+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359956480 unmapped: 88621056 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:51.360263+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 88612864 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:52.360439+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 88612864 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:53.360608+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 88612864 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:54.360772+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 88612864 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:55.360969+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359964672 unmapped: 88612864 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:56.361179+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:57.361416+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:58.361692+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:28:59.362093+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:00.362322+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:01.362546+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:02.362697+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:03.363056+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:04.363269+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:05.363421+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:06.363629+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:07.364069+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:08.364261+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:09.364414+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:10.364592+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359972864 unmapped: 88604672 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:11.364793+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 88596480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:12.364943+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 88596480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:13.365164+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 88596480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:14.365349+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 88596480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:15.365491+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 88596480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:16.365653+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 88596480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:17.365795+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359981056 unmapped: 88596480 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:18.365932+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 88588288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:19.366100+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 88588288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:20.366300+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359989248 unmapped: 88588288 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:21.366573+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:22.366778+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:23.367066+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:24.367230+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:25.367381+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:26.367572+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:27.367741+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:28.367964+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:29.368263+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:30.368561+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:31.368845+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 359997440 unmapped: 88580096 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:32.369099+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 88563712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:33.369251+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 88563712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:34.369437+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 88563712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:35.369608+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 88563712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:36.369772+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 88563712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:37.370039+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360013824 unmapped: 88563712 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:38.370260+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:39.370468+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:40.370619+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:41.370816+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:42.370960+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:43.371122+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:44.371366+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:45.371593+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:46.371766+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:47.371907+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:48.372046+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:49.372213+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:50.372391+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:51.373302+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:52.373816+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:53.375343+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:54.376076+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:55.377189+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360022016 unmapped: 88555520 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:56.377361+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:57.378129+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:58.379197+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:29:59.379472+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:00.380039+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:01.380512+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:02.380708+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:03.381055+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:04.381188+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:05.381392+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:06.381874+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:07.382122+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:08.382278+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360030208 unmapped: 88547328 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:09.382613+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360046592 unmapped: 88530944 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:10.383135+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:11.383375+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:12.383637+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:13.383846+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:14.384096+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:15.384214+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:16.384412+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:17.384571+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:18.384715+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:19.384904+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:20.385093+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:21.385320+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:22.385519+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:23.385688+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:24.385860+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360054784 unmapped: 88522752 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:25.386003+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:26.386120+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:27.386359+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:28.386505+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:29.386636+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:30.386795+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:31.387166+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:32.387366+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:33.387553+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:34.387704+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:35.387857+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:36.388004+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:37.388160+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360062976 unmapped: 88514560 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:38.388314+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:39.388478+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:40.388597+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:41.388849+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:42.389038+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:43.389183+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:44.389337+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:45.389548+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:46.389733+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:47.390042+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:48.390214+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:49.390355+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:50.390479+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:51.390771+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:52.391083+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:53.391304+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 7800.1 total, 600.0 interval
                                           Cumulative writes: 46K writes, 185K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 46K writes, 17K syncs, 2.73 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 300 writes, 450 keys, 300 commit groups, 1.0 writes per commit group, ingest: 0.15 MB, 0.00 MB/s
                                           Interval WAL: 300 writes, 150 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:54.391464+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:55.393175+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360071168 unmapped: 88506368 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:56.394041+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360079360 unmapped: 88498176 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:57.394626+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360079360 unmapped: 88498176 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:58.395554+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:30:59.396023+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:00.396923+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:01.397316+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:02.397954+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:03.398594+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:04.398960+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:05.399534+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:06.399789+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:07.400026+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:08.400367+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:09.400617+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:10.400770+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:11.401073+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:12.401420+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:13.401773+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:14.402134+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360087552 unmapped: 88489984 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:15.402313+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:16.402514+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:17.402669+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:18.402854+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:19.403050+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:20.403843+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:21.404040+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:22.404208+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:23.404350+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:24.404496+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:25.404654+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360095744 unmapped: 88481792 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:26.404873+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 88473600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:27.405078+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 88473600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:28.405559+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 88473600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:29.405746+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 88473600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:30.406274+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360103936 unmapped: 88473600 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:31.406731+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360112128 unmapped: 88465408 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:32.406881+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:33.407164+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:34.407422+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:35.407643+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:36.407876+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:37.408091+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:38.408404+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:39.408556+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:40.408822+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:41.409124+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:42.409404+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:43.409596+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:44.410287+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:45.410613+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:46.410771+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:47.411061+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360120320 unmapped: 88457216 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:48.411329+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 88449024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:49.411538+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 88449024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:50.411793+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360128512 unmapped: 88449024 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:51.412085+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 88440832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:52.412335+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360136704 unmapped: 88440832 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:53.412590+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 88432640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:54.412780+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 88432640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:55.413072+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 88432640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:56.413259+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 88432640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:57.413508+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 88432640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:58.413663+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360144896 unmapped: 88432640 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:31:59.413812+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 88424448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:00.414048+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 88424448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:01.414352+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 88424448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:02.414534+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 88424448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:03.414750+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 88424448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:04.414919+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 88424448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:05.415119+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 88424448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:06.415298+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 88424448 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:07.415472+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:08.415625+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:09.415862+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:10.416141+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:11.416440+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:12.416600+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:13.416809+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:14.417029+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:15.417161+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:16.417492+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:17.417687+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:18.417877+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:19.418049+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:20.418241+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360161280 unmapped: 88416256 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:21.418444+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 602.265686035s of 602.541625977s, submitted: 150
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360177664 unmapped: 88399872 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:22.418575+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:23.418748+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:24.418907+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:25.419080+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:26.419369+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 88326144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:27.419555+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360251392 unmapped: 88326144 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:28.419712+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:29.419877+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:30.420073+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:31.420267+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:32.421091+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:33.423603+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:34.423806+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:35.424053+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:36.424273+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:37.424767+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:38.425302+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:39.425465+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:40.426043+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:41.426664+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:42.427075+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:43.427401+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:44.428570+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:45.429518+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:46.429726+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:47.430122+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:48.430338+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:49.430545+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:50.430753+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:51.431011+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:52.431181+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:53.431477+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:54.431780+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:55.432068+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:56.432317+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:57.432568+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:58.432750+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:32:59.432914+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:00.433063+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:01.433272+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:02.433640+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:03.433943+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:04.434171+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:05.434515+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:06.434758+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:07.435006+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:08.435170+0000)
Jan 26 17:43:56 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23504 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:09.435352+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:10.435483+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:11.435642+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:12.435970+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:13.436247+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:14.436375+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:15.436837+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:16.437302+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:17.437462+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:18.437747+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:19.438060+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:20.438298+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:21.438480+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:22.438656+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:23.438792+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:24.438943+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:25.439127+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:26.439296+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:27.439513+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:28.439691+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:29.439908+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:30.440115+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:31.440370+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:32.440566+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:33.440722+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:34.440948+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:35.441147+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:36.441290+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:37.441444+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:38.441618+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 88309760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:39.442190+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 88309760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:40.442320+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360267776 unmapped: 88309760 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:41.442480+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:42.443770+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:43.445165+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:44.446049+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:45.446201+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:46.446470+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:47.447421+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:48.448078+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:49.448584+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360284160 unmapped: 88293376 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:50.449269+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:51.449690+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:52.450384+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:53.450532+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:54.450685+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:55.450875+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:56.451290+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:57.451469+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:58.451709+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:33:59.452025+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:00.452313+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:01.452508+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:02.452688+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:03.452896+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360292352 unmapped: 88285184 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:04.453112+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:05.453317+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:06.453498+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:07.453635+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:08.453790+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:09.453934+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:10.454091+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:11.454271+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:12.454407+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:13.454553+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:14.454735+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:15.454897+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:16.455063+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360300544 unmapped: 88276992 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:17.455260+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 88268800 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:18.455497+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 88268800 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:19.455659+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 88268800 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:20.455841+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 88268800 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:21.456039+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360308736 unmapped: 88268800 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:22.456270+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360316928 unmapped: 88260608 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:23.456421+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:24.456567+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:25.456714+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:26.456858+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:27.457033+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:28.457200+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:29.457398+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:30.457537+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:31.457762+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:32.457945+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:33.458109+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:34.458287+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:35.458436+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:36.458636+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360325120 unmapped: 88252416 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:37.458825+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:38.459068+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:39.459311+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:40.459534+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:41.459792+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:42.460100+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:43.460284+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:44.460434+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:45.460580+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:46.460735+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:47.460866+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:48.461171+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:49.461436+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:50.461621+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:51.461875+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:52.462139+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:53.462371+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:54.462648+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:55.463156+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:56.463336+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360333312 unmapped: 88244224 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:57.463529+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 88236032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:58.463730+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 88236032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:34:59.463936+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 88236032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:00.464153+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 88236032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:01.464426+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 88236032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:02.464588+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 88236032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:03.464728+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 88236032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:04.464864+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 88236032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:05.465059+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360341504 unmapped: 88236032 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:06.465195+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 88227840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:07.465450+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 88227840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:08.465685+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 88227840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:09.465884+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 88227840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:10.466036+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 88227840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:11.466198+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360349696 unmapped: 88227840 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:12.466413+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 88219648 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:13.466576+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 88219648 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:14.466737+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 88219648 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:15.466865+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 88219648 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:16.467041+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 88219648 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:17.467173+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 88219648 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:18.467380+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360357888 unmapped: 88219648 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:19.467478+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 88211456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:20.467613+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 88211456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:21.467789+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 88211456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:22.468033+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 88211456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:23.468175+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 88211456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:24.468345+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 88211456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:25.468449+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360366080 unmapped: 88211456 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:26.468606+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 88203264 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:27.468746+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 88203264 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:28.468917+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360374272 unmapped: 88203264 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:29.469061+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:30.469166+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:31.469368+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:32.469531+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:33.469716+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:34.469811+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:35.469941+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:36.470120+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:37.470293+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:38.470500+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:39.470648+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:40.470820+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:41.471023+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:42.471121+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:43.471269+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:44.471390+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:45.471574+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:46.471817+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:47.472015+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:48.472202+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:49.472393+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:50.472572+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360390656 unmapped: 88186880 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:51.472877+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 88178688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:52.473024+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 88178688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:53.473275+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 88178688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:54.473481+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 88178688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:55.473701+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360398848 unmapped: 88178688 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets getting new tickets!
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:56.474038+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _finish_auth 0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:56.475115+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:57.474261+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:58.474508+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:35:59.474664+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:00.474805+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:01.475195+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:02.475463+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:03.475662+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:04.476018+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:05.476329+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:06.476492+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:07.476619+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:08.476750+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 ms_handle_reset con 0x555975a27000 session 0x55597e51b500
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: handle_auth_request added challenge on 0x555975a49400
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:09.476954+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:10.883826+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:11.884041+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 88162304 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:12.884167+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:13.884312+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:14.884467+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:15.884598+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:16.884754+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:17.884931+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:18.885220+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:19.885447+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:20.885614+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:21.885956+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:22.886185+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:23.886376+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:24.886585+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 88154112 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:25.886780+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360431616 unmapped: 88145920 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:26.886938+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:27.887229+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:28.887356+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:29.887572+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:30.887740+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:31.887905+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:32.888097+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:33.888278+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:34.888468+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:35.888735+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:36.888925+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:37.889587+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:38.889884+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:39.890121+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:40.890378+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:41.890708+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:42.891097+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:43.891380+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:44.891678+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:45.892152+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:46.892517+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:47.892695+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:48.892818+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:49.893416+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:50.895854+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:51.896742+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:52.897069+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360439808 unmapped: 88137728 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:53.897557+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:54.897837+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:55.898010+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:56.898340+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:57.898471+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:58.898700+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:36:59.898830+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:00.899104+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:01.899489+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:02.899618+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:03.899827+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:04.900008+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:05.900150+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:06.900366+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360456192 unmapped: 88121344 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:07.900595+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 88113152 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:08.900890+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 88113152 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:09.901069+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 88113152 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:10.901318+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 88104960 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:11.901526+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 88104960 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:12.901657+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360472576 unmapped: 88104960 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:13.901812+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 ms_handle_reset con 0x555975af8000 session 0x555983e1fdc0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: handle_auth_request added challenge on 0x5559793c7800
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 87973888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:14.902046+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 87973888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:15.902223+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 87973888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:16.902388+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 87973888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:17.902574+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360603648 unmapped: 87973888 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:18.902768+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360611840 unmapped: 87965696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:19.903060+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360611840 unmapped: 87965696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:20.903372+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.974334717s of 300.216308594s, submitted: 106
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360611840 unmapped: 87965696 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:21.903610+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:22.903775+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:23.903930+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:24.904149+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:25.904318+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:26.904521+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:27.904681+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:28.904859+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:29.905154+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:30.905358+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:31.905552+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:32.905764+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:33.906118+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:34.906312+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:35.906581+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:36.906829+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:37.907110+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:38.907284+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:39.907528+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:40.907811+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:41.908117+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:42.908346+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:43.908614+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360628224 unmapped: 87949312 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:44.908906+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360636416 unmapped: 87941120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:45.909114+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360636416 unmapped: 87941120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:46.909252+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360636416 unmapped: 87941120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:47.909362+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360636416 unmapped: 87941120 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:48.909501+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:49.909678+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:50.909841+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:51.910027+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:52.910145+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:53.910346+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:54.910480+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:55.910606+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:56.910725+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:57.910853+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:58.910957+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:37:59.911117+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:00.911288+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:01.911443+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360644608 unmapped: 87932928 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:02.911566+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 87924736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:03.911686+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 87924736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:04.911820+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 87924736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:05.911909+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 87924736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:06.912063+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 87924736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:07.912234+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 87924736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:08.912369+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 87924736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:09.912525+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 87924736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:10.912681+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360652800 unmapped: 87924736 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:11.913432+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360660992 unmapped: 87916544 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:12.913623+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360660992 unmapped: 87916544 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:13.913754+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360660992 unmapped: 87916544 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:14.913904+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360660992 unmapped: 87916544 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:15.914081+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360660992 unmapped: 87916544 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:16.914213+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360660992 unmapped: 87916544 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:17.914400+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:18.914581+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:19.914772+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:20.914886+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:21.915055+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:22.915273+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:23.915423+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:24.915601+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:25.915759+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:26.915913+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:27.916129+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:28.916291+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:29.916449+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:30.916572+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360669184 unmapped: 87908352 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:31.916727+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 87900160 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:32.916845+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 87900160 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:33.917050+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 87900160 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:34.917210+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 87900160 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:35.917363+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 87900160 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:36.917484+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 87900160 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:37.917611+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 87891968 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:38.917753+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:39.917864+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:40.918020+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:41.918173+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:42.918305+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:43.918455+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:44.918576+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:45.918713+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:46.918824+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:47.918951+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:48.919083+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:49.919243+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:50.919381+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:51.919544+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:52.919682+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:53.919857+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:54.920063+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:55.920253+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:56.920377+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:57.920511+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:58.920653+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 87883776 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:38:59.920802+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:00.920956+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:01.921164+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:02.921625+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:03.921762+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:04.921921+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:05.922071+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:06.922235+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:07.922396+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:08.922648+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:09.922820+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:10.922966+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:11.923245+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:12.923432+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:13.923591+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:14.923761+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:15.923912+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:16.924069+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:17.924219+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:18.924361+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:19.924510+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:20.924698+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:21.924945+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360710144 unmapped: 87867392 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:22.925086+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 87859200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:23.925251+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 87859200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:24.925407+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 87859200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:25.925543+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 87859200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:26.925708+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 87859200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:27.925883+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 87859200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:28.926034+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360718336 unmapped: 87859200 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:29.926197+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:30.926362+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:31.926534+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:32.926677+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:33.926808+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:34.926936+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:35.927096+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:36.927272+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:37.927427+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:38.927605+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360726528 unmapped: 87851008 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:39.927721+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 87842816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:40.927925+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 87842816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:41.928226+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 87842816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:42.928352+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 87842816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:43.928664+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 87842816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:44.928844+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360734720 unmapped: 87842816 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:45.929032+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:46.929166+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:47.929325+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:48.929456+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:49.929612+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:50.929796+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:51.930066+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:52.930254+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:53.930422+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:54.930606+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360742912 unmapped: 87834624 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:55.930752+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360751104 unmapped: 87826432 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:56.930928+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 87818240 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:57.931073+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 87818240 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:58.931293+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360759296 unmapped: 87818240 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:39:59.931474+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 87810048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:00.931687+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 87810048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:01.931916+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 87810048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:02.932055+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 87810048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:03.932216+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 87810048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:04.932658+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 87810048 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:05.933060+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:06.933287+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:07.933402+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:08.933660+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:09.933836+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:10.933993+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:11.934172+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:12.934310+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:13.934450+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:14.934603+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:15.934804+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:16.935020+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:17.935158+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:18.935379+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:19.935607+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:20.935754+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:21.936056+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:22.936215+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:23.936407+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:24.936599+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:25.936780+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:26.936935+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360775680 unmapped: 87801856 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:27.937065+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 87793664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:28.937204+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 87793664 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:29.937369+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 87785472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:30.937511+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 87785472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:31.937710+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 87785472 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:32.937829+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 87777280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:33.938006+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 87777280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:34.938201+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 87777280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:35.939577+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 87777280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:36.940027+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 87777280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:37.940268+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 87777280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:38.941058+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 87777280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:39.941241+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360800256 unmapped: 87777280 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:40.942173+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 87769088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:41.942590+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 87769088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:42.942927+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:43.943459+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:44.943879+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:45.944275+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:46.944962+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:47.945305+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:48.945581+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:49.945849+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:50.946075+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:51.946303+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:52.946452+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:53.946629+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 8400.1 total, 600.0 interval
                                           Cumulative writes: 47K writes, 185K keys, 47K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.02 MB/s
                                           Cumulative WAL: 47K writes, 17K syncs, 2.72 writes per sync, written: 0.19 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:54.946794+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:55.947009+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:56.947195+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:57.947354+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:58.947491+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:40:59.947723+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360816640 unmapped: 87760896 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:00.947867+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:01.948080+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:02.948242+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:03.948400+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:04.948558+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:05.948697+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:06.948901+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:07.949031+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:08.949183+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:09.949346+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360824832 unmapped: 87752704 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:10.949520+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 87744512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:11.949713+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 87744512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:12.949877+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 87744512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:13.950064+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 87744512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:14.950208+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 87744512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:15.950394+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 87744512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:16.950531+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360833024 unmapped: 87744512 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:17.950698+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360841216 unmapped: 87736320 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:18.950838+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread fragmentation_score=0.005312 took=0.000067s
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:19.951071+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:20.951267+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:21.951458+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:22.951575+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:23.951697+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:24.951815+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:25.951951+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:26.952033+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:27.952103+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:28.952164+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:29.952242+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:30.952355+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:31.952486+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360849408 unmapped: 87728128 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:32.952628+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:33.952753+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:34.952868+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:35.953027+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:36.953192+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:37.953341+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 87711744 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:38.953484+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 87711744 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:39.953620+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 87711744 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:40.953745+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 87711744 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:41.953895+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 87711744 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:42.954006+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360865792 unmapped: 87711744 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:43.954190+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:44.954347+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:45.954489+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:46.954624+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:47.954767+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:48.954946+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:49.955223+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:50.955375+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:51.955668+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:52.955866+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:53.956045+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:54.956240+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:55.956507+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:56.956696+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360857600 unmapped: 87719936 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:57.956834+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:58.957007+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:41:59.957149+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:00.957328+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:01.957519+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:02.957721+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:03.958126+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:04.958292+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:05.958421+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:06.958554+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:07.958729+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:08.958902+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:09.959136+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:10.959296+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:11.959466+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360185856 unmapped: 88391680 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:12.959890+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:13.961156+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:14.962134+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:15.962890+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:16.963057+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:17.963516+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:18.964094+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:19.964407+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:20.964697+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:22.023920+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360194048 unmapped: 88383488 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 300.593322754s of 300.623962402s, submitted: 18
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:23.024516+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 88367104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:24.024727+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 88367104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:25.025238+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 88367104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:26.025524+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360210432 unmapped: 88367104 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:27.026013+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 88358912 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:28.026200+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360243200 unmapped: 88334336 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:29.026353+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:30.026661+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:31.026892+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:32.027140+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:33.027292+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:34.027428+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:35.027599+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:36.027729+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:37.027895+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:38.028080+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:39.028250+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:40.028426+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:41.028627+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:42.028801+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:43.028966+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:44.029158+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:45.029278+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:46.029410+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:47.029527+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:48.029727+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:49.029848+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:50.030049+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:51.030199+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:52.030376+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:53.030534+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:54.030704+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:55.030822+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:56.031038+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:57.031205+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:58.031435+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:42:59.031568+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:00.031699+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:01.031833+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:02.031987+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:03.032112+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:04.032228+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:05.032425+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:06.032619+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:07.032758+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:08.032959+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:09.033133+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:10.033291+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:11.033431+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:12.033633+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:13.033799+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:14.033938+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:15.034111+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:16.034272+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:17.035056+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:18.035190+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:19.035363+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:20.035521+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:21.035747+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:22.035966+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360275968 unmapped: 88301568 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:23.036118+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 26 17:43:56 compute-0 ceph-osd[85687]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 26 17:43:56 compute-0 ceph-osd[85687]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3725111 data_alloc: 218103808 data_used: 132411
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360382464 unmapped: 88195072 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: do_command 'config diff' '{prefix=config diff}'
Jan 26 17:43:56 compute-0 ceph-osd[85687]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 26 17:43:56 compute-0 ceph-osd[85687]: do_command 'config show' '{prefix=config show}'
Jan 26 17:43:56 compute-0 ceph-osd[85687]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 26 17:43:56 compute-0 ceph-osd[85687]: do_command 'counter dump' '{prefix=counter dump}'
Jan 26 17:43:56 compute-0 ceph-osd[85687]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 26 17:43:56 compute-0 ceph-osd[85687]: do_command 'counter schema' '{prefix=counter schema}'
Jan 26 17:43:56 compute-0 ceph-osd[85687]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:24.036293+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 87769088 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:25.036423+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: prioritycache tune_memory target: 4294967296 mapped: 360882176 unmapped: 87695360 heap: 448577536 old mem: 2845415832 new mem: 2845415832
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: tick
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_tickets
Jan 26 17:43:56 compute-0 ceph-osd[85687]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-26T17:43:26.036598+0000)
Jan 26 17:43:56 compute-0 ceph-osd[85687]: osd.0 332 heartbeat osd_stat(store_statfs(0x4e828d000/0x0/0x4ffc00000, data 0xe8fd05/0x105f000, compress 0x0/0x0/0x0, omap 0x78e54, meta 0x168971ac), peers [1,2] op hist [])
Jan 26 17:43:56 compute-0 ceph-osd[85687]: do_command 'log dump' '{prefix=log dump}'
Jan 26 17:43:56 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23506 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:57 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23508 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:57 compute-0 nova_compute[239965]: 2026-01-26 17:43:57.099 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:43:57 compute-0 ceph-mon[75140]: from='client.23502 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:57 compute-0 ceph-mon[75140]: from='client.23504 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:57 compute-0 ceph-mon[75140]: from='client.23506 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:43:57 compute-0 ceph-mon[75140]: from='client.23508 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:57 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:57 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23512 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:57 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} v 0)
Jan 26 17:43:57 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} : dispatch
Jan 26 17:43:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 26 17:43:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3022948399' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 26 17:43:58 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23516 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} v 0)
Jan 26 17:43:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} : dispatch
Jan 26 17:43:58 compute-0 ceph-mon[75140]: pgmap v4625: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:58 compute-0 ceph-mon[75140]: from='client.23512 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} : dispatch
Jan 26 17:43:58 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3022948399' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 26 17:43:58 compute-0 ceph-mon[75140]: from='client.23516 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:58 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.sxinsc", "name": "rgw_frontends"} : dispatch
Jan 26 17:43:58 compute-0 podman[431727]: 2026-01-26 17:43:58.415027637 +0000 UTC m=+0.092104488 container health_status 2ddb3ff87449fe1045fa96de01f380bbdced4eeae78d9c8f1eef41684d9b8697 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 26 17:43:58 compute-0 nova_compute[239965]: 2026-01-26 17:43:58.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:43:58 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23520 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:58 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 26 17:43:58 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647802949' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 26 17:43:59 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23522 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 26 17:43:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4197862601' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 26 17:43:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:43:59.331 156105 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 26 17:43:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:43:59.332 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 26 17:43:59 compute-0 ovn_metadata_agent[156096]: 2026-01-26 17:43:59.332 156105 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 26 17:43:59 compute-0 systemd[1]: Starting Hostname Service...
Jan 26 17:43:59 compute-0 ceph-mon[75140]: from='client.23520 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3647802949' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 26 17:43:59 compute-0 ceph-mon[75140]: from='client.23522 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:59 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/4197862601' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 26 17:43:59 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:43:59 compute-0 systemd[1]: Started Hostname Service.
Jan 26 17:43:59 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23526 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:43:59 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 26 17:43:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2288132983' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 26 17:43:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 17:43:59 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 17:44:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 17:44:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 17:44:00 compute-0 ceph-mon[75140]: pgmap v4626: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:00 compute-0 ceph-mon[75140]: from='client.23526 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:44:00 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2288132983' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 26 17:44:00 compute-0 ceph-mon[75140]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 26 17:44:00 compute-0 ceph-mon[75140]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 26 17:44:00 compute-0 ceph-mon[75140]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 26 17:44:00 compute-0 ceph-mon[75140]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 26 17:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] scanning for idle connections..
Jan 26 17:44:00 compute-0 ceph-mgr[75431]: [volumes INFO mgr_util] cleaning up connections: []
Jan 26 17:44:00 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 26 17:44:00 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700737061' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 26 17:44:00 compute-0 nova_compute[239965]: 2026-01-26 17:44:00.866 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:44:01 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23540 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:44:01 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:01 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2700737061' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 26 17:44:01 compute-0 ceph-mon[75140]: from='client.23540 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:01 compute-0 podman[432163]: 2026-01-26 17:44:01.456699921 +0000 UTC m=+0.126445176 container health_status 7d435cf8534cfebcce7eb5c3e5b854d9064cb69b03c85b7c00dab8500f9b12c2 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '580ffc256c7b3a7fb6a81063ca00b7046340e5c2545453df38732700a02439f5-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031-e62eb9729164d320814ce858745746ea11996127eb4d393d58ab70fb0022c031'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 26 17:44:01 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 26 17:44:01 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/610141997' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 26 17:44:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 26 17:44:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2836778811' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 26 17:44:02 compute-0 nova_compute[239965]: 2026-01-26 17:44:02.100 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:44:02 compute-0 ceph-mon[75140]: pgmap v4627: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/610141997' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 26 17:44:02 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2836778811' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 26 17:44:02 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 26 17:44:02 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/270802494' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 26 17:44:03 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 26 17:44:03 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/408830432' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 26 17:44:03 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:03 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23550 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/270802494' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 26 17:44:03 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/408830432' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 26 17:44:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 26 17:44:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/669345001' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 26 17:44:04 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 26 17:44:04 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3457344274' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 26 17:44:04 compute-0 ceph-mon[75140]: pgmap v4628: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:04 compute-0 ceph-mon[75140]: from='client.23550 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/669345001' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 26 17:44:04 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3457344274' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 26 17:44:05 compute-0 sudo[432555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:44:05 compute-0 sudo[432555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:44:05 compute-0 sudo[432555]: pam_unix(sudo:session): session closed for user root
Jan 26 17:44:05 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23556 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:05 compute-0 sudo[432584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 26 17:44:05 compute-0 sudo[432584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:44:05 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:05 compute-0 nova_compute[239965]: 2026-01-26 17:44:05.510 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:44:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 26 17:44:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/557438011' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 26 17:44:05 compute-0 sudo[432584]: pam_unix(sudo:session): session closed for user root
Jan 26 17:44:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:44:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:44:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 26 17:44:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:44:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 26 17:44:05 compute-0 nova_compute[239965]: 2026-01-26 17:44:05.868 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:44:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:44:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 26 17:44:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:44:05 compute-0 ceph-mon[75140]: from='client.23556 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:05 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/557438011' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 26 17:44:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:44:05 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 26 17:44:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 26 17:44:05 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:44:05 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 26 17:44:05 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:44:06 compute-0 sudo[432736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:44:06 compute-0 sudo[432736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:44:06 compute-0 sudo[432736]: pam_unix(sudo:session): session closed for user root
Jan 26 17:44:06 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23560 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:06 compute-0 sudo[432767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 26 17:44:06 compute-0 sudo[432767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:44:06 compute-0 podman[432876]: 2026-01-26 17:44:06.38332837 +0000 UTC m=+0.059638415 container create 1aecb15de4822e302b6b600a42a9ad70767072f80fea0ca0ccea27ecfd0997ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 26 17:44:06 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:44:06 compute-0 systemd[1]: Started libpod-conmon-1aecb15de4822e302b6b600a42a9ad70767072f80fea0ca0ccea27ecfd0997ea.scope.
Jan 26 17:44:06 compute-0 podman[432876]: 2026-01-26 17:44:06.354892457 +0000 UTC m=+0.031202512 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:44:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:44:06 compute-0 podman[432876]: 2026-01-26 17:44:06.482178172 +0000 UTC m=+0.158488237 container init 1aecb15de4822e302b6b600a42a9ad70767072f80fea0ca0ccea27ecfd0997ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ride, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:44:06 compute-0 podman[432876]: 2026-01-26 17:44:06.491338365 +0000 UTC m=+0.167648400 container start 1aecb15de4822e302b6b600a42a9ad70767072f80fea0ca0ccea27ecfd0997ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 26 17:44:06 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23562 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:06 compute-0 podman[432876]: 2026-01-26 17:44:06.495627551 +0000 UTC m=+0.171937596 container attach 1aecb15de4822e302b6b600a42a9ad70767072f80fea0ca0ccea27ecfd0997ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ride, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:44:06 compute-0 unruffled_ride[432910]: 167 167
Jan 26 17:44:06 compute-0 systemd[1]: libpod-1aecb15de4822e302b6b600a42a9ad70767072f80fea0ca0ccea27ecfd0997ea.scope: Deactivated successfully.
Jan 26 17:44:06 compute-0 podman[432876]: 2026-01-26 17:44:06.500764285 +0000 UTC m=+0.177074330 container died 1aecb15de4822e302b6b600a42a9ad70767072f80fea0ca0ccea27ecfd0997ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ride, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 26 17:44:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-d756759d4b61a9c58a819ef1d813a64d6549199050b01100d18b07742a444ed0-merged.mount: Deactivated successfully.
Jan 26 17:44:06 compute-0 podman[432876]: 2026-01-26 17:44:06.541615272 +0000 UTC m=+0.217925307 container remove 1aecb15de4822e302b6b600a42a9ad70767072f80fea0ca0ccea27ecfd0997ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:44:06 compute-0 systemd[1]: libpod-conmon-1aecb15de4822e302b6b600a42a9ad70767072f80fea0ca0ccea27ecfd0997ea.scope: Deactivated successfully.
Jan 26 17:44:06 compute-0 podman[433002]: 2026-01-26 17:44:06.73090817 +0000 UTC m=+0.046962687 container create 149312b45e379020b15e880d8fb7035263ef422cece4c8ed89e244a96cdfed27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:44:06 compute-0 systemd[1]: Started libpod-conmon-149312b45e379020b15e880d8fb7035263ef422cece4c8ed89e244a96cdfed27.scope.
Jan 26 17:44:06 compute-0 podman[433002]: 2026-01-26 17:44:06.710613085 +0000 UTC m=+0.026667622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:44:06 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639d4fe08a3fba4589e42a829ea2e943903dbc043bc851ffa6e9a5c2caf463b9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639d4fe08a3fba4589e42a829ea2e943903dbc043bc851ffa6e9a5c2caf463b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639d4fe08a3fba4589e42a829ea2e943903dbc043bc851ffa6e9a5c2caf463b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639d4fe08a3fba4589e42a829ea2e943903dbc043bc851ffa6e9a5c2caf463b9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/639d4fe08a3fba4589e42a829ea2e943903dbc043bc851ffa6e9a5c2caf463b9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:06 compute-0 podman[433002]: 2026-01-26 17:44:06.837745896 +0000 UTC m=+0.153800443 container init 149312b45e379020b15e880d8fb7035263ef422cece4c8ed89e244a96cdfed27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_napier, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 17:44:06 compute-0 podman[433002]: 2026-01-26 17:44:06.851344479 +0000 UTC m=+0.167398996 container start 149312b45e379020b15e880d8fb7035263ef422cece4c8ed89e244a96cdfed27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:44:06 compute-0 podman[433002]: 2026-01-26 17:44:06.857744074 +0000 UTC m=+0.173798591 container attach 149312b45e379020b15e880d8fb7035263ef422cece4c8ed89e244a96cdfed27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 17:44:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 26 17:44:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3082847073' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 26 17:44:07 compute-0 ceph-mon[75140]: pgmap v4629: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:44:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 26 17:44:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 26 17:44:07 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 26 17:44:07 compute-0 ceph-mon[75140]: from='client.23560 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:07 compute-0 nova_compute[239965]: 2026-01-26 17:44:07.151 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:44:07 compute-0 sweet_napier[433040]: --> passed data devices: 0 physical, 3 LVM
Jan 26 17:44:07 compute-0 sweet_napier[433040]: --> All data devices are unavailable
Jan 26 17:44:07 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:07 compute-0 systemd[1]: libpod-149312b45e379020b15e880d8fb7035263ef422cece4c8ed89e244a96cdfed27.scope: Deactivated successfully.
Jan 26 17:44:07 compute-0 podman[433207]: 2026-01-26 17:44:07.481067231 +0000 UTC m=+0.029929851 container died 149312b45e379020b15e880d8fb7035263ef422cece4c8ed89e244a96cdfed27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_napier, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-639d4fe08a3fba4589e42a829ea2e943903dbc043bc851ffa6e9a5c2caf463b9-merged.mount: Deactivated successfully.
Jan 26 17:44:07 compute-0 podman[433207]: 2026-01-26 17:44:07.53063049 +0000 UTC m=+0.079493090 container remove 149312b45e379020b15e880d8fb7035263ef422cece4c8ed89e244a96cdfed27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_napier, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 26 17:44:07 compute-0 systemd[1]: libpod-conmon-149312b45e379020b15e880d8fb7035263ef422cece4c8ed89e244a96cdfed27.scope: Deactivated successfully.
Jan 26 17:44:07 compute-0 sudo[432767]: pam_unix(sudo:session): session closed for user root
Jan 26 17:44:07 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Jan 26 17:44:07 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/680205444' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 26 17:44:07 compute-0 sudo[433242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:44:07 compute-0 sudo[433242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:44:07 compute-0 sudo[433242]: pam_unix(sudo:session): session closed for user root
Jan 26 17:44:07 compute-0 sudo[433284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- lvm list --format json
Jan 26 17:44:07 compute-0 sudo[433284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:44:08 compute-0 podman[433402]: 2026-01-26 17:44:08.057218267 +0000 UTC m=+0.057384421 container create ade7a367624055dcbff7b21c5dfc1d3eba1f51d65c7b7cba0644bd8b9ec8a21c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_morse, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23568 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:08 compute-0 systemd[1]: Started libpod-conmon-ade7a367624055dcbff7b21c5dfc1d3eba1f51d65c7b7cba0644bd8b9ec8a21c.scope.
Jan 26 17:44:08 compute-0 ceph-mon[75140]: from='client.23562 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3082847073' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
Jan 26 17:44:08 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/680205444' entity='client.admin' cmd={"prefix": "osd numa-status"} : dispatch
Jan 26 17:44:08 compute-0 podman[433402]: 2026-01-26 17:44:08.026468467 +0000 UTC m=+0.026634641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:44:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:44:08 compute-0 podman[433402]: 2026-01-26 17:44:08.142252992 +0000 UTC m=+0.142419146 container init ade7a367624055dcbff7b21c5dfc1d3eba1f51d65c7b7cba0644bd8b9ec8a21c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_morse, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 26 17:44:08 compute-0 podman[433402]: 2026-01-26 17:44:08.151070856 +0000 UTC m=+0.151237000 container start ade7a367624055dcbff7b21c5dfc1d3eba1f51d65c7b7cba0644bd8b9ec8a21c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_morse, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:44:08 compute-0 podman[433402]: 2026-01-26 17:44:08.157534893 +0000 UTC m=+0.157701057 container attach ade7a367624055dcbff7b21c5dfc1d3eba1f51d65c7b7cba0644bd8b9ec8a21c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_morse, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 26 17:44:08 compute-0 blissful_morse[433434]: 167 167
Jan 26 17:44:08 compute-0 podman[433402]: 2026-01-26 17:44:08.160135487 +0000 UTC m=+0.160301631 container died ade7a367624055dcbff7b21c5dfc1d3eba1f51d65c7b7cba0644bd8b9ec8a21c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_morse, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True)
Jan 26 17:44:08 compute-0 systemd[1]: libpod-ade7a367624055dcbff7b21c5dfc1d3eba1f51d65c7b7cba0644bd8b9ec8a21c.scope: Deactivated successfully.
Jan 26 17:44:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4227e9d3f754d3eb136664eb0228c8b3f0f5193fdc90776cf19ed0ad525bec16-merged.mount: Deactivated successfully.
Jan 26 17:44:08 compute-0 podman[433402]: 2026-01-26 17:44:08.217624569 +0000 UTC m=+0.217790723 container remove ade7a367624055dcbff7b21c5dfc1d3eba1f51d65c7b7cba0644bd8b9ec8a21c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_morse, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 26 17:44:08 compute-0 systemd[1]: libpod-conmon-ade7a367624055dcbff7b21c5dfc1d3eba1f51d65c7b7cba0644bd8b9ec8a21c.scope: Deactivated successfully.
Jan 26 17:44:08 compute-0 podman[433506]: 2026-01-26 17:44:08.410754491 +0000 UTC m=+0.058010006 container create 006692e6552a133ef494ef9ab4f12db478706bdacf42cb64961850b33dd362c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hodgkin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 26 17:44:08 compute-0 systemd[1]: Started libpod-conmon-006692e6552a133ef494ef9ab4f12db478706bdacf42cb64961850b33dd362c2.scope.
Jan 26 17:44:08 compute-0 podman[433506]: 2026-01-26 17:44:08.383571758 +0000 UTC m=+0.030827303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:44:08 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/645ee2a3c5a342393fbbef5d7664a0ea5fb4d01532f89857daee9cb6caefac4e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/645ee2a3c5a342393fbbef5d7664a0ea5fb4d01532f89857daee9cb6caefac4e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/645ee2a3c5a342393fbbef5d7664a0ea5fb4d01532f89857daee9cb6caefac4e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/645ee2a3c5a342393fbbef5d7664a0ea5fb4d01532f89857daee9cb6caefac4e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:08 compute-0 nova_compute[239965]: 2026-01-26 17:44:08.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:44:08 compute-0 podman[433506]: 2026-01-26 17:44:08.521546914 +0000 UTC m=+0.168802499 container init 006692e6552a133ef494ef9ab4f12db478706bdacf42cb64961850b33dd362c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 26 17:44:08 compute-0 podman[433506]: 2026-01-26 17:44:08.531393475 +0000 UTC m=+0.178648990 container start 006692e6552a133ef494ef9ab4f12db478706bdacf42cb64961850b33dd362c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:44:08 compute-0 podman[433506]: 2026-01-26 17:44:08.535593687 +0000 UTC m=+0.182849262 container attach 006692e6552a133ef494ef9ab4f12db478706bdacf42cb64961850b33dd362c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23570 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8011307261484926e-05 of space, bias 1.0, pg target 0.005403392178445478 quantized to 32 (current 32)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006721380392368816 of space, bias 1.0, pg target 0.20164141177106448 quantized to 32 (current 32)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.805882444913914e-07 of space, bias 4.0, pg target 0.0008167058933896697 quantized to 16 (current 16)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 26 17:44:08 compute-0 ceph-mgr[75431]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]: {
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:     "0": [
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:         {
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "devices": [
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "/dev/loop3"
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             ],
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_name": "ceph_lv0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_size": "21470642176",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=9318dbd2-36a7-494a-b197-d6f421ca7dde,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "name": "ceph_lv0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "tags": {
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.block_uuid": "FsxWLG-S4dc-88wt-XKVl-VIQ0-yp6n-4xjaGQ",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.cluster_name": "ceph",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.crush_device_class": "",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.encrypted": "0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.objectstore": "bluestore",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.osd_fsid": "9318dbd2-36a7-494a-b197-d6f421ca7dde",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.osd_id": "0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.type": "block",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.vdo": "0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.with_tpm": "0"
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             },
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "type": "block",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "vg_name": "ceph_vg0"
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:         }
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:     ],
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:     "1": [
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:         {
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "devices": [
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "/dev/loop4"
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             ],
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_name": "ceph_lv1",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_size": "21470642176",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=6a4decbd-12cf-478e-911b-b73d18d01a42,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "name": "ceph_lv1",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "tags": {
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.block_uuid": "cbRg35-7Acu-xL3X-yI7p-cPaV-5oz1-IYWzvR",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.cluster_name": "ceph",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.crush_device_class": "",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.encrypted": "0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.objectstore": "bluestore",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.osd_fsid": "6a4decbd-12cf-478e-911b-b73d18d01a42",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.osd_id": "1",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.type": "block",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.vdo": "0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.with_tpm": "0"
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             },
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "type": "block",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "vg_name": "ceph_vg1"
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:         }
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:     ],
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:     "2": [
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:         {
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "devices": [
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "/dev/loop5"
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             ],
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_name": "ceph_lv2",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_size": "21470642176",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=8b9831ad-4b0d-59b4-8860-96eb895a171f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=826d34d1-95fc-4426-af6d-882ea3810a20,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "lv_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "name": "ceph_lv2",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "tags": {
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.block_uuid": "LyeDhx-Y7x1-aH3Y-r9g4-QMyC-VRKS-E55fbo",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.cephx_lockbox_secret": "",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.cluster_fsid": "8b9831ad-4b0d-59b4-8860-96eb895a171f",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.cluster_name": "ceph",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.crush_device_class": "",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.encrypted": "0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.objectstore": "bluestore",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.osd_fsid": "826d34d1-95fc-4426-af6d-882ea3810a20",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.osd_id": "2",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.type": "block",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.vdo": "0",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:                 "ceph.with_tpm": "0"
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             },
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "type": "block",
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:             "vg_name": "ceph_vg2"
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:         }
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]:     ]
Jan 26 17:44:08 compute-0 recursing_hodgkin[433535]: }
Jan 26 17:44:08 compute-0 systemd[1]: libpod-006692e6552a133ef494ef9ab4f12db478706bdacf42cb64961850b33dd362c2.scope: Deactivated successfully.
Jan 26 17:44:08 compute-0 podman[433506]: 2026-01-26 17:44:08.921152703 +0000 UTC m=+0.568408238 container died 006692e6552a133ef494ef9ab4f12db478706bdacf42cb64961850b33dd362c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 26 17:44:08 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Jan 26 17:44:08 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2267413197' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 26 17:44:09 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:09 compute-0 nova_compute[239965]: 2026-01-26 17:44:09.512 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:44:09 compute-0 ovs-appctl[433950]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 26 17:44:09 compute-0 ovs-appctl[433956]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 26 17:44:09 compute-0 ovs-appctl[433962]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Jan 26 17:44:10 compute-0 ceph-mon[75140]: pgmap v4630: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:10 compute-0 ceph-mon[75140]: from='client.23568 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:10 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/2267413197' entity='client.admin' cmd={"prefix": "osd pool ls", "detail": "detail"} : dispatch
Jan 26 17:44:10 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd stat"} v 0)
Jan 26 17:44:10 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3023041082' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 26 17:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-645ee2a3c5a342393fbbef5d7664a0ea5fb4d01532f89857daee9cb6caefac4e-merged.mount: Deactivated successfully.
Jan 26 17:44:10 compute-0 podman[433506]: 2026-01-26 17:44:10.190436548 +0000 UTC m=+1.837692063 container remove 006692e6552a133ef494ef9ab4f12db478706bdacf42cb64961850b33dd362c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_hodgkin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 26 17:44:10 compute-0 systemd[1]: libpod-conmon-006692e6552a133ef494ef9ab4f12db478706bdacf42cb64961850b33dd362c2.scope: Deactivated successfully.
Jan 26 17:44:10 compute-0 sudo[433284]: pam_unix(sudo:session): session closed for user root
Jan 26 17:44:10 compute-0 sudo[434054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 26 17:44:10 compute-0 sudo[434054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:44:10 compute-0 sudo[434054]: pam_unix(sudo:session): session closed for user root
Jan 26 17:44:10 compute-0 sudo[434089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/8b9831ad-4b0d-59b4-8860-96eb895a171f/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 8b9831ad-4b0d-59b4-8860-96eb895a171f -- raw list --format json
Jan 26 17:44:10 compute-0 sudo[434089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:44:10 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23576 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:10 compute-0 podman[434182]: 2026-01-26 17:44:10.672363686 +0000 UTC m=+0.048145976 container create 5fca83cf2bdfd86a9bf13234f861666d19dcff2e3744e5f881607defdbe7bbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 26 17:44:10 compute-0 systemd[1]: Started libpod-conmon-5fca83cf2bdfd86a9bf13234f861666d19dcff2e3744e5f881607defdbe7bbd2.scope.
Jan 26 17:44:10 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:44:10 compute-0 podman[434182]: 2026-01-26 17:44:10.654124171 +0000 UTC m=+0.029906481 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:44:10 compute-0 podman[434182]: 2026-01-26 17:44:10.753368161 +0000 UTC m=+0.129150481 container init 5fca83cf2bdfd86a9bf13234f861666d19dcff2e3744e5f881607defdbe7bbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 26 17:44:10 compute-0 podman[434182]: 2026-01-26 17:44:10.761779926 +0000 UTC m=+0.137562216 container start 5fca83cf2bdfd86a9bf13234f861666d19dcff2e3744e5f881607defdbe7bbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 26 17:44:10 compute-0 zealous_sammet[434231]: 167 167
Jan 26 17:44:10 compute-0 podman[434182]: 2026-01-26 17:44:10.767297831 +0000 UTC m=+0.143080131 container attach 5fca83cf2bdfd86a9bf13234f861666d19dcff2e3744e5f881607defdbe7bbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 26 17:44:10 compute-0 podman[434182]: 2026-01-26 17:44:10.767606949 +0000 UTC m=+0.143389239 container died 5fca83cf2bdfd86a9bf13234f861666d19dcff2e3744e5f881607defdbe7bbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 26 17:44:10 compute-0 systemd[1]: libpod-5fca83cf2bdfd86a9bf13234f861666d19dcff2e3744e5f881607defdbe7bbd2.scope: Deactivated successfully.
Jan 26 17:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-51146d3118ac8b6e864d8a8d36082c096d5b252717c14a74f7b6e8435ae4c482-merged.mount: Deactivated successfully.
Jan 26 17:44:10 compute-0 podman[434182]: 2026-01-26 17:44:10.809122381 +0000 UTC m=+0.184904671 container remove 5fca83cf2bdfd86a9bf13234f861666d19dcff2e3744e5f881607defdbe7bbd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_sammet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 26 17:44:10 compute-0 systemd[1]: libpod-conmon-5fca83cf2bdfd86a9bf13234f861666d19dcff2e3744e5f881607defdbe7bbd2.scope: Deactivated successfully.
Jan 26 17:44:10 compute-0 nova_compute[239965]: 2026-01-26 17:44:10.869 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:44:10 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23578 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:10 compute-0 podman[434303]: 2026-01-26 17:44:10.989902662 +0000 UTC m=+0.054534672 container create cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 26 17:44:11 compute-0 ceph-mon[75140]: from='client.23570 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:11 compute-0 ceph-mon[75140]: pgmap v4631: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:11 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3023041082' entity='client.admin' cmd={"prefix": "osd stat"} : dispatch
Jan 26 17:44:11 compute-0 podman[434303]: 2026-01-26 17:44:10.969915114 +0000 UTC m=+0.034547044 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 26 17:44:11 compute-0 systemd[1]: Started libpod-conmon-cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0.scope.
Jan 26 17:44:11 compute-0 systemd[1]: Started libcrun container.
Jan 26 17:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d029e87f77dd167d4e89093ace85c24c785f37225c10b7759b32e21c40cf6c81/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d029e87f77dd167d4e89093ace85c24c785f37225c10b7759b32e21c40cf6c81/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d029e87f77dd167d4e89093ace85c24c785f37225c10b7759b32e21c40cf6c81/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d029e87f77dd167d4e89093ace85c24c785f37225c10b7759b32e21c40cf6c81/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 26 17:44:11 compute-0 podman[434303]: 2026-01-26 17:44:11.118200762 +0000 UTC m=+0.182832712 container init cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_edison, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 26 17:44:11 compute-0 podman[434303]: 2026-01-26 17:44:11.12509112 +0000 UTC m=+0.189723030 container start cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_edison, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 26 17:44:11 compute-0 podman[434303]: 2026-01-26 17:44:11.131856345 +0000 UTC m=+0.196488275 container attach cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_edison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 26 17:44:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader).osd e332 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Jan 26 17:44:11 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:11 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 26 17:44:11 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512960079' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 26 17:44:11 compute-0 lvm[434626]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 26 17:44:11 compute-0 lvm[434626]: VG ceph_vg0 finished
Jan 26 17:44:11 compute-0 lvm[434627]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 26 17:44:11 compute-0 lvm[434627]: VG ceph_vg1 finished
Jan 26 17:44:11 compute-0 lvm[434630]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 26 17:44:11 compute-0 lvm[434630]: VG ceph_vg2 finished
Jan 26 17:44:11 compute-0 sad_edison[434351]: {}
Jan 26 17:44:11 compute-0 podman[434303]: 2026-01-26 17:44:11.951238505 +0000 UTC m=+1.015870415 container died cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 26 17:44:11 compute-0 systemd[1]: libpod-cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0.scope: Deactivated successfully.
Jan 26 17:44:11 compute-0 systemd[1]: libpod-cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0.scope: Consumed 1.295s CPU time.
Jan 26 17:44:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d029e87f77dd167d4e89093ace85c24c785f37225c10b7759b32e21c40cf6c81-merged.mount: Deactivated successfully.
Jan 26 17:44:12 compute-0 nova_compute[239965]: 2026-01-26 17:44:12.152 239969 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 26 17:44:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Jan 26 17:44:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3734195845' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 26 17:44:12 compute-0 ceph-mon[75140]: from='client.23576 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:12 compute-0 ceph-mon[75140]: from='client.23578 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 26 17:44:12 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3512960079' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 26 17:44:12 compute-0 podman[434303]: 2026-01-26 17:44:12.451382596 +0000 UTC m=+1.516014526 container remove cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_edison, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 26 17:44:12 compute-0 nova_compute[239965]: 2026-01-26 17:44:12.509 239969 DEBUG oslo_service.periodic_task [None req-4d25a56f-0868-4a63-96c4-0e0f78a6b497 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 26 17:44:12 compute-0 sudo[434089]: pam_unix(sudo:session): session closed for user root
Jan 26 17:44:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 26 17:44:12 compute-0 systemd[1]: libpod-conmon-cdc05e84ac6c8d50eccdf1cb566d76a04338c3a5f798fe3227f028ca7b2fefd0.scope: Deactivated successfully.
Jan 26 17:44:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:44:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 26 17:44:12 compute-0 ceph-mon[75140]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:44:12 compute-0 sudo[434702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 26 17:44:12 compute-0 sudo[434702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 26 17:44:12 compute-0 sudo[434702]: pam_unix(sudo:session): session closed for user root
Jan 26 17:44:12 compute-0 ceph-mon[75140]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Jan 26 17:44:12 compute-0 ceph-mon[75140]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3611293697' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
Jan 26 17:44:13 compute-0 rsyslogd[1006]: imjournal: 14856 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Jan 26 17:44:13 compute-0 ceph-mgr[75431]: log_channel(audit) log [DBG] : from='client.23586 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 26 17:44:13 compute-0 ceph-mgr[75431]: log_channel(cluster) log [DBG] : pgmap v4633: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:13 compute-0 ceph-mon[75140]: pgmap v4632: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Jan 26 17:44:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3734195845' entity='client.admin' cmd={"prefix": "time-sync-status"} : dispatch
Jan 26 17:44:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:44:13 compute-0 ceph-mon[75140]: from='mgr.14124 192.168.122.100:0/3034119535' entity='mgr.compute-0.jtdjmy' 
Jan 26 17:44:13 compute-0 ceph-mon[75140]: from='client.? 192.168.122.100:0/3611293697' entity='client.admin' cmd={"prefix": "config dump", "format": "json-pretty"} : dispatch
